Deep Surface Normal Estimation with Hierarchical RGB-D Fusion
The growing availability of commodity RGB-D cameras has boosted applications in the field of scene understanding. However, as a fundamental scene understanding task, surface normal estimation from RGB-D data lacks thorough investigation. In this paper, a hierarchical fusion network with adaptive feature re-weighting is proposed for surface normal estimation from a single RGB-D image. Specifically, the features from the color image and the depth are successively integrated at multiple scales to ensure global surface smoothness while preserving visually salient details. Meanwhile, the depth features are re-weighted with a confidence map estimated from the depth before merging into the color branch, to avoid artifacts caused by input depth corruption. Additionally, a hybrid multi-scale loss function is designed to learn accurate normal estimation from a noisy ground-truth dataset. Extensive experimental results validate the effectiveness of the fusion strategy and the loss design, outperforming state-of-the-art normal estimation schemes.
Per-pixel surface normal estimation has been extensively studied in recent years. Previous works on normal estimation mostly assume a single RGB image as input [8, 26, 1, 33], providing satisfactory results in most cases, though they lose fine shape features and produce erroneous results in highlighted or dark areas, as shown in Fig. 1(c).
RGB-D cameras are now commercially available, leading to great performance enhancements in scene understanding applications, e.g., semantic segmentation [27, 5, 23], object detection [11, 20], 3D reconstruction [15, 18, 12], etc. With the depth given by sensors, normals can be easily calculated via least-squares optimization [21, 9], as done in the widely used NYUv2 dataset, but the quality of the normals suffers from corruption in the depth, e.g., sensor noise along object edges or missing pixels due to glossy, black, transparent, and distant surfaces, as shown in Fig. 1(d).
This motivates us to combine the advantages of color and depth inputs so that each compensates for the deficiencies of the other in the task of normal estimation. Specifically, the RGB information is utilized to fill the missing pixels in depth; meanwhile, the depth cue is merged into the RGB results to enhance sharp edges and correct erroneous estimates, resulting in a complete normal map with fine details. However, combining RGB and depth for normal estimation has not been extensively studied. To the best of our knowledge, the only work considering RGB-D input for normal estimation adopts early fusion, i.e., using depth as an additional channel of the RGB input, leading to little performance improvement compared with methods using the RGB input only. The lack of a proper network design for combining the geometric information in depth with the color image is an impediment to fully exploiting the depth sensor.
Different from previous works on normal estimation with RGB-D using early fusion, we propose to merge the features from the RGB and depth branches at multiple scales on the decoder side in a hierarchical manner, in order to guarantee both global surface smoothness and local sharp features in the fusion results. Additionally, a pixel-wise confidence map is estimated from the depth input for re-weighting the depth features before merging them into the RGB branch, so as to reduce artifacts from depth by assigning smaller confidence to missing pixels and to those along object edges. An example is shown in Fig. 1, where the proposed scheme outperforms state-of-the-art RGB-based, depth-based, and RGB-D-based methods.
Apart from the lack of RGB-D fusion schemes, the shortage of datasets providing pairs of sensor depth and ground-truth normals is another obstacle for RGB-D normal estimation, since the performance of DNN approaches is affected by dataset quality [19, 30]. The widely used training datasets for normal estimation, e.g., NYUv2, do not provide complete ground-truth normals for the captured RGB-D images, since the normal is directly computed from the captured depth after inpainting. If trained on NYUv2, the network ends up approximating an inpainting algorithm.
Instead, we use the Matterport3D and ScanNet datasets, which provide RGB-D captured by camera and ground-truth normals obtained via multiview reconstruction. Nevertheless, the ground-truth is not perfect due to multiview reconstruction error, especially at object edges, which are crucial for visual evaluation. To overcome the artifacts in the ground-truth, we propose a hybrid multi-scale loss function based on the noise statistics in the ground-truth normal map, using the L1 loss at large resolutions to obtain sharper results, and the L2 loss at small resolutions to ensure coarse-scale accuracy.
In summary, the main contributions of our work are:
By incorporating RGB and depth inputs via the proposed hierarchical fusion scheme, the two inputs complement each other in normal estimation: depth refines details and color fills in the missing depth pixels;
With the confidence map for depth feature re-weighting, the effect of artifacts in the depth features is reduced;
A hybrid multi-scale loss function is designed by analyzing the noise statistics in the ground-truth, providing sharp results with high fidelity despite the imperfect ground-truth.
Comparison with state-of-the-art approaches and an extensive ablation study validate the design of the network structure and loss function. The paper is organized as follows. Related works are discussed in Section 2, and Section 3 provides a detailed discussion of the proposed method. The ablation study and comparison with state-of-the-art methods are presented in Section 4, and the work is concluded in Section 5.
2 Related Work
2.1 Surface Normal Estimation
RGB-based Previous works mostly used a single RGB image as input. Eigen et al. designed a three-scale convolutional network architecture that first produced a coarse global prediction from the full image and then refined it with a local finer-scale network. Wang et al. proposed a network structure that integrated different kinds of geometric information, such as local, global, and vanishing-point information, to predict the surface normal. More recently, Bansal et al. proposed a skip-connected structure to concatenate CNN responses at different scales to capture the corresponding details at each scale, and Zhang et al. adopted a U-Net structure and achieved state-of-the-art performance.
Due to the difficulty of extracting geometric information from the RGB input and the interference of texture, the predicted details are poor, with erroneous results in under-lit or over-exposed areas.
Depth-based Surface normals can be inferred from depth with geometric methods, which rely on the relative depth of neighboring pixels. However, the depth cameras used in common datasets, e.g., NYUv2, Matterport3D, and ScanNet, often fail to sense depth on glossy, bright, transparent, and faraway surfaces [32, 29], resulting in holes and corruption in the obtained depth images. To overcome missing pixels in the normal map inferred from depth, some works proposed to inpaint depth images using RGB images [7, 10, 16, 25, 31]. Silberman et al. used an optimization-based method to fill the holes in depth maps. Zhang et al. used a convolutional network to predict pixel-wise surface normals from a single RGB image, then used the predicted normals to fill holes in the raw depth.
Nevertheless, depth inpainting cannot handle large holes in depth; moreover, noise in the depth undermines the performance of depth-based normal estimation.
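To make the geometric route concrete, the following is an illustrative sketch (not any of the cited methods) of one common way to infer normals from depth: back-project each pixel with pinhole intrinsics, then take the cross product of the horizontal and vertical tangent vectors. The intrinsics and variable names are assumptions for illustration.

```python
import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    """Illustrative geometric normal estimation from a depth map.

    depth: (H, W) depth in meters; fx, fy, cx, cy: pinhole intrinsics.
    Pixels with zero or invalid depth yield degenerate (zero) normals
    and should be masked downstream.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to a camera-space 3D point.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1)
    # Tangent vectors from neighboring pixels (finite differences).
    du = np.gradient(pts, axis=1)
    dv = np.gradient(pts, axis=0)
    # The normal is orthogonal to both tangents.
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.where(norm > 0, norm, 1.0)
```

For a fronto-parallel plane (constant depth) this yields normals pointing along the camera axis; near depth holes and object edges the finite differences are corrupted, which is exactly the failure mode discussed above.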
Normal-depth consistency based There is a strong geometric correlation between depth and surface normal: normals can be calculated from the depth of neighboring pixels, and depth can be refined from normal variation. For example, Wang et al. proposed a four-stream convolutional neural network to detect planar regions, then used a dense conditional random field to smooth the results based on the depth-normal correlation in planar regions and at planar boundaries, respectively. Chen et al. established a new dataset and proposed two loss functions to measure the consistency between the predicted normal and the depth label for depth and normal prediction. Qi et al. proposed to predict initial depth and surface normals from the color image, then cross-refine each one using geometric consistency.
These methods provide different schemes to promote geometric consistency between normal and depth, but rely on a single RGB input and do not consider noise from depth sensors.
RGB-D based RGB-D based normal estimation has not been extensively studied in previous works. Normal estimation with RGB-D input has been briefly discussed in prior work where early fusion was adopted, reported to perform almost the same as using the RGB input alone. However, the fusion there is not specifically designed for the task, and the conclusion is not comprehensive. Although 3D reconstruction based methods can be used for normal estimation, they require a series of RGB-D images, which is beyond the scope of this paper. The lack of a designed RGB-D fusion for surface normal estimation motivates our work.
2.2 RGB-D Fusion Schemes
Despite the lack of study in RGB-D based normal estimation, RGB-D fusion schemes have been explored for other tasks, among which semantic segmentation is the most extensively studied, e.g., early fusion using RGB-D as a four-channel input, late fusion, depth-aware convolution, or using a 3D point cloud format.
The difference from those works is that they do not require per-pixel accuracy as strictly as normal prediction: the label inside one object is constant, whereas for normal estimation a correct prediction is required at each pixel, and the most significant difficulty lies in accurate sharp details. Therefore, we adopt hierarchical fusion with confidence-map re-weighting to enhance edge preservation in the fusion result without introducing the artifacts present in depth.
3 Proposed Method
As illustrated in Fig. 2, the hierarchical RGB-D fusion network is composed of three modules: the RGB branch, the depth branch, and confidence map estimation. In this section, we introduce the pipeline for the hierarchical fusion of the RGB and depth branches with the fusion module at different scales, and the confidence map estimation used inside the fusion module for depth conditioning, after which the hybrid loss function design is detailed. A detailed architecture of the deep network is provided in the supplementary.
3.1 Hierarchical RGB-D Fusion
Given a color image $I$ and sensor depth $D$, we aim at estimating the surface normal map $\hat{N}$ by minimizing its distance from the ground-truth normal $N$, i.e.,

$$\Theta^{*} = \arg\min_{\Theta} \mathcal{L}\big(f(I, D; \Theta),\, N\big), \qquad \hat{N} = f(I, D; \Theta^{*}),$$

where $f(\cdot)$ denotes the fusion network generating the normal estimation, parameterized by the parameters $\Theta$, which are trained end-to-end via back propagation. A hierarchical fusion scheme is adopted to merge the depth branch into the RGB branch for both overall surface orientation rectification and visually salient feature enhancement.
3.1.1 Network Design
First, in the RGB branch, whose input is the color image $I$, we adopt a similar network structure to prior work: a fully convolutional network (FCN) built on a VGG-16 backbone, as illustrated in the RGB branch of Fig. 2. Specifically, the encoder is the same as VGG-16, except that in the last two convolution blocks, i.e., conv4 and conv5, the channel number is reduced from 512 to 256 to remove redundant model parameters. The encoder is accompanied by a symmetric decoder, equipped with skip-connections and shared pooling masks for learning local image features.
Meanwhile, the depth $D$ is fed into the depth branch to extract geometric features, with a network structure similar to the RGB branch, except that the last convolution block of the encoder is removed to give a simplified model.
The fusion takes place on the decoder side. As shown in Fig. 2, the depth features (colored in green) at each scale in the decoder are passed into the fusion module and re-weighted with the confidence map (colored in purple), which is down-sampled and repeated to the same resolution as the depth features. Then the re-weighted depth features are concatenated with the color features of the same resolution and passed through a deconvolution layer to give the fused output features (colored in yellow). Consequently, the fusion module (denoted FM for short) at scale $s$ is given as

$$F^{\mathrm{out}}_{s} = \mathrm{Deconv}\big(\big[F^{\mathrm{rgb}}_{s},\; C_{s} \odot F^{\mathrm{d}}_{s}\big]\big),$$

where $F^{\mathrm{rgb}}_{s}$ and $F^{\mathrm{d}}_{s}$ are the features from the RGB and depth branches at scale $s$, and $C_{s}$ is the confidence map for depth conditioning. $\odot$ denotes element-wise multiplication and $[\cdot,\cdot]$ denotes the concatenation operation. The concatenated result after the deconvolution layer gives the fusion output. The fusion is implemented at four scales, where the last-scale output gives the final normal estimation. The confidence map estimation is addressed later in Section 3.2.
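A minimal PyTorch sketch of one fusion module follows: depth features are re-weighted by the confidence map, concatenated with the RGB features, and passed through a deconvolution. The channel widths and the kernel/stride configuration are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Sketch of one fusion-module scale: confidence-weighted depth
    features are concatenated with RGB features, then deconvolved to
    the next (larger) resolution. Channel sizes are assumptions."""

    def __init__(self, rgb_ch, depth_ch, out_ch):
        super().__init__()
        # Deconvolution that doubles the spatial resolution.
        self.deconv = nn.ConvTranspose2d(
            rgb_ch + depth_ch, out_ch, kernel_size=4, stride=2, padding=1)

    def forward(self, f_rgb, f_depth, conf):
        # conf: (B, 1, H, W) confidence map, broadcast over the depth
        # channels (the "repeat" of the down-sampled confidence map).
        weighted = f_depth * conf                     # element-wise re-weighting
        fused = torch.cat([f_rgb, weighted], dim=1)   # channel concatenation
        return self.deconv(fused)
```

Stacking four such modules, each feeding its output as the RGB-side feature of the next scale, reproduces the hierarchical structure described above.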
3.1.2 Comparison with Existing RGB-D Fusion Schemes
Existing RGB-D fusion schemes mostly adopt single-scale fusion. Early fusion combines RGB-D at the input, i.e., using depth as an additional channel along with RGB. However, RGB and depth come from different domains and cannot be properly handled by the same encoder as a four-channel input. For example, we adopt the same network structure as before, composed of a VGG-16 encoder and a symmetric decoder with skip-connections, and use an RGB-D four-channel input instead of a single RGB image to generate the normal shown in Fig. 7(d). The output normal does not exhibit global smoothness, especially in areas where depth pixels are missing. This is because a CNN is incapable of handling information from different domains such as RGB and depth without prior knowledge about depth artifacts.
Late fusion with a probability map for RGB and depth has been adopted for segmentation; here we generalize that network structure to normal estimation by replacing the probability map with a binary mask indicating whether each depth pixel is available, giving the result in Fig. 7(e). The role of the binary mask we use is consistent with that of the probability map, which indicates how trustworthy each source is. Similar to early fusion, the result of late fusion has noticeable artifacts along the depth holes, indicating that the fusion is not smooth.
In light of this, single-scale fusion is not effective for fusing RGB and depth when the two contain different noise. RGB is sensitive to lighting conditions, while depth is corrupted at object edges and distant surfaces, meaning the outputs from RGB and depth can be inconsistent. If depth is integrated into RGB at a single scale, the fusion can hardly eliminate the difference between the two sources and give a smooth result. This motivates us to merge depth features into the RGB branch at four different scales in a hierarchical manner. In this way, the features from the two branches are successively merged: global surface orientation errors are corrected in the small-resolution features, while detail refinement takes place at the final scale. As shown in Fig. 7, the proposed hierarchical fusion gives a smoother result with details well preserved.
3.2 Confidence Map Estimation
While hierarchical fusion improves normal estimation over existing fusion schemes, a closer examination of pixels around depth holes shows that the transition is not smooth, as shown in Fig. 8(e), where the right side of the table has erroneous predictions close to the depth hole boundary. This indicates that a binary mask is not sufficient for depth conditioning, and a more adaptive re-weighting would be favorable. Therefore, a light-weight network for the depth confidence map is designed as follows.
The depth, along with a binary mask indicating missing pixels, is fed into a convolutional network with five layers, as shown in Fig. 2, where the first two layers have 3×3 kernels and the following three layers have 1×1 kernels. In this way, the receptive field is kept small enough to restrict the adaptation to local depth variation. The confidence map is then down-sampled using the pooling masks shared with the depth branch and passed into the fusion module to facilitate the fusion operation described in Eq. 2. Comparing Fig. 8(e) and (f), the confidence map leads to a more accurate fusion result, correcting the error at the right side of the table.
To understand the role of the confidence map, we show it in Fig. 8(d). Edge pixels have the smallest confidence values, indicating a high likelihood of outliers or noise, while hole areas have small yet non-zero values, suggesting that, to enable a smooth transition, information in the depth holes can be passed into the merged result as long as the RGB features take the dominant role.
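The light-weight confidence estimator described above can be sketched as follows; the hidden channel width and the final sigmoid squashing are assumptions for illustration, while the layer pattern (two 3×3 convolutions followed by three 1×1 convolutions over depth plus its validity mask) follows the text.

```python
import torch
import torch.nn as nn

class ConfidenceNet(nn.Module):
    """Sketch of the confidence-map estimator: five conv layers over
    the depth map and its missing-pixel mask. Hidden width and the
    sigmoid output range are illustrative assumptions."""

    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            # 1x1 layers keep the receptive field small, so the
            # confidence adapts only to local depth variation.
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 1), nn.Sigmoid(),  # confidence in [0, 1]
        )

    def forward(self, depth, mask):
        # depth: (B, 1, H, W); mask: (B, 1, H, W), 1 where depth is valid.
        return self.net(torch.cat([depth, mask], dim=1))
```

The output map would then be down-sampled alongside the depth features and fed to each fusion scale as the per-pixel weight.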
3.3 Hybrid Loss
As mentioned in Section 1, we use the Matterport3D and ScanNet datasets for training and testing because they provide camera-captured RGB-D data paired with ground-truth normals. However, the ground-truth normals suffer from multiview reconstruction errors, as shown in Fig. 3(b), where the normal map is piece-wise constant inside each mesh triangle and the edges do not align with the RGB input. Given noisy ground-truth like this, improper handling of the loss function during training will lead to deficient performance. The reason is as follows.
Given the similar inputs in the green and red rectangles in Fig. 3(a), the outputs will be similar. However, the corresponding ground-truth normal maps are different, as shown in Fig. 3(b); thus, by minimizing the loss function, the network will learn a statistic over all pairs of similar input $x$ and ground-truth $y$:

$$\hat{y}(x) = \arg\min_{z}\, \mathbb{E}_{y \mid x}\big[\mathcal{L}(z, y)\big].$$
For the L2 loss, this minimization leads to the arithmetic mean of the observations, while the L1 loss leads to their median.
To see which loss is more proper for the given dataset, we sample patches along the edge in Fig. 3 at the same horizontal position as the patches in the colored rectangles, and compute the mean and median normals of these sampled patches, shown in Fig. 4. Both produce reasonable results, though the median has sharper edges than the mean, indicating that the L1 loss will generate a more visually appealing result with sharp details.
In this work, we adopt a hybrid multi-scale loss function:

$$\mathcal{L} = \sum_{s=1}^{4} \lambda_{s}\, \mathcal{L}_{s},$$

where $s$ denotes the scales from small to large and $\lambda_{s}$ is the weight for the loss at scale $s$. The L1 loss is used for the large-scale outputs for detail enhancement, while the L2 loss is used for the coarse-scale outputs for overall accuracy. The hybrid loss generates cleaner and visually better results than the losses widely used for normal estimation [21, 33, 1], as shown in Fig. 7. The proposed method is named Hierarchical RGB-D Fusion with Confidence Map, referred to as HFM-Net for short.
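A sketch of the hybrid multi-scale loss in PyTorch follows. The split of two coarse scales under L2 and two fine scales under L1, and the uniform weights in the test, are illustrative assumptions; the paper's exact weights are not reproduced here.

```python
import torch
import torch.nn.functional as F

def hybrid_multiscale_loss(preds, targets, weights):
    """Hybrid multi-scale loss sketch: L2 at the coarse scales for
    overall accuracy, L1 at the fine scales for sharp details.

    preds, targets: lists of tensors ordered from small to large
    resolution; weights: per-scale loss weights (assumed values).
    """
    loss = 0.0
    for s, (p, t, w) in enumerate(zip(preds, targets, weights)):
        if s < len(preds) // 2:
            loss = loss + w * F.mse_loss(p, t)   # L2 at coarse scales
        else:
            loss = loss + w * F.l1_loss(p, t)    # L1 at fine scales
    return loss
```

Since the L2 term drives coarse outputs toward the mean of noisy targets and the L1 term drives fine outputs toward the median, this realizes the statistic-selection argument above.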
4 Experiments
4.1 Implementation Details
Dataset We evaluate our approach on two datasets, Matterport3D and ScanNet. For the corresponding ground-truth normals, we use the rendered normals generated with multiview reconstruction. Matterport3D is divided into 105432 images for training and 11302 for testing; ScanNet is divided into 59743 for training and 7517 for testing, following the published file lists. Since the ground-truth normals in Matterport3D suffer from reconstruction noise, e.g., in outdoor scenes or mirror areas, we remove the testing samples with large error so as to avoid unreliable evaluation. After data pruning, 6.47% (782 out of 12084) of the testing images are removed, leaving 11302. Details of the data pruning can be found in the supplementary.
Training Details We use the RMSprop optimizer, with the learning rate decayed during training. The model is trained from scratch, without a pretrained model, for 15 epochs. To ensure stable training at the beginning, we use a single loss type at all scales in the first 4 epochs and then switch to the hybrid loss defined in Eq. 4. We implement the network in PyTorch on an NVIDIA GeForce GTX Titan X GPU.
Evaluation Metrics The normal prediction performance is evaluated with five metrics. We compute the per-pixel angular distance between the prediction and the ground-truth, then compute the mean and median over valid pixels with given ground-truth normals. In addition to the mean and median, we also compute the fraction of pixels with an angular difference from the ground-truth of less than a threshold t, where t = 11.25°, 22.5°, and 30°.
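The five metrics can be sketched as follows; the function and dictionary key names are illustrative, and the inputs are assumed to be unit-normalized normal maps with a validity mask.

```python
import numpy as np

def normal_metrics(pred, gt, valid):
    """Sketch of the five evaluation metrics: per-pixel angular error
    (in degrees) between unit normals over valid pixels, summarized by
    the mean, the median, and the fractions below 11.25, 22.5, 30 deg.

    pred, gt: (H, W, 3) unit normal maps; valid: (H, W) boolean mask.
    """
    # Clip the dot product to guard against floating-point overshoot.
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    ang = np.degrees(np.arccos(cos))[valid]
    return {
        "mean": ang.mean(),
        "median": np.median(ang),
        **{f"<{t}": (ang < t).mean() for t in (11.25, 22.5, 30.0)},
    }
```

Lower mean and median, and higher threshold fractions, indicate better performance, matching the direction of comparison in Table 1.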
4.2 Main Results
We compare the proposed HFM-Net with state-of-the-art normal estimation methods, classified into three categories in accordance with Section 2; normal-depth consistency based methods are adopted as alternatives for RGB-D fusion and thus are also put in the RGB-D category.
RGB-based methods include Skip-Net and Zhang's algorithm. Pretrained models of Zhang's on Matterport3D and ScanNet are publicly provided, and Skip-Net is fine-tuned on Matterport3D and ScanNet from the model pre-trained on the NYUv2 dataset, using the publicly available training code.
Depth-based Depth information is used to compute surface normals in existing works [22, 6, 2] based on the geometric relation between depth and surface normal. Since the input depth is incomplete, we first perform depth inpainting before converting to a normal map. Two algorithms are used to preprocess the input depth images: the colorization algorithm (denoted as Levin's) used in NYUv2, and the state-of-the-art depth completion method (shortened as DC). After depth inpainting, we follow the same procedure as before to generate normals from depth.
RGBD-based For the RGB-D fusion methods, we adopt methods in GFMM  and the state-of-the-art GeoNet  to merge depth input into initial RGB-based normal output for refinement. Specifically, we choose Zhang’s method  for initial normal estimation from RGB, and calculate a rough normal from raw depth image at the same time, then merge the two normal estimations using methods in GFMM  and GeoNet  to estimate the final surface normal map.
We test on the two datasets with the five metrics, as shown in Table 1, where HFM-Net outperforms all the other schemes on all metrics. In terms of mean angular error, HFM-Net outperforms RGB-based methods by at least 6.284°, depth-inpainting based methods by 6.064°, and RGB-D based methods by 3.475°. Visual evaluation results are shown in Fig. 5 and Fig. 6. RGB-based methods miss details, such as the sofa in Fig. 5 with blurry edges. Depth-based methods have serious errors in the depth hole regions and noticeable noise. Competing RGB-D fusion methods fail to generate accurate results in areas where the depth is noisy or corrupted. On the contrary, our HFM-Net gives good normal predictions both on smooth planar areas and along sharp edges.
4.3 Ablation Study
For better understanding of how HFM-Net works, we investigate the effect of each component in the network with the following ablation study.
Hierarchical Fusion We compare hierarchical fusion (HF) with single-scale fusion, including early and late fusion as described in Section 3, denoted Early-F and Late-F in Table 2, respectively. The binary mask is used for Late-F and HF, and all variants are trained with the hybrid loss unless otherwise specified. As can be seen from Table 2, Early-F and Late-F are less effective than HF+Mask+Hybrid, validating the use of HF. Furthermore, Fig. 7(d-f) shows the difference between single-scale and hierarchical fusion: hierarchical fusion provides more accurate results on planar surfaces, especially in the depth hole areas marked by black rectangles.
Confidence Map We compare the confidence map with the binary mask. Fig. 8 shows the difference between fusion with the confidence map and fusion with the binary mask. Fusion with the confidence map reduces the negative effect of depth holes during fusion, and smooths the prediction around the boundary regions of the depth holes.
Hybrid Loss Apart from the fusion method, different combinations of loss functions are examined. In the hybrid-loss comparison, the confidence map is used in the fusion. If the network uses the L2 loss at all scales, the predictions tend to be blurry; on the other hand, a network with the L1 loss tends to preserve more details. The hybrid loss function design described in Section 3.3 generates results with both smooth surfaces and fine object details, as shown in the comparison in Fig. 7(g-l).
4.4 Model Complexity and Runtime
Table 1 reports the runtime of our method and the other state-of-the-art methods. Skip-Net uses the official evaluation code in MatCaffe. Levin's colorization method uses the code provided with the NYUv2 dataset. GeoNet-D is GeoNet with RGB-D input, which we implement in PyTorch; the consistency loss is added to GeoNet-D as a comparison scheme. The network forward runtime is averaged over the Matterport3D test set with input images of size 320×256 on an NVIDIA GeForce GTX TITAN X GPU. Apart from the time spent in the neural network forward pass, the runtime of the depth-based and RGB-D based methods also includes the time spent on geometric calculation. As shown in Table 1, our method exceeds competing schemes in metric performance while remaining reasonably fast.
5 Conclusion
In this work, we propose a hierarchical fusion scheme to combine RGB-D features at multiple scales, with a confidence map estimated from the depth input for depth conditioning to facilitate feature fusion. Moreover, a hybrid loss function is designed to generate clean normal estimations even when the training targets suffer from reconstruction noise. Extensive experimental results demonstrate that our HFM-Net outperforms the state-of-the-art methods, providing more accurate surface normal predictions and sharper visually salient features. Ablation studies validate the superiority of the proposed hierarchical fusion scheme over the single-scale fusion schemes in existing works, the effectiveness of the confidence map in producing accurate estimations around missing pixels in the depth input, and the advantage of the hybrid loss function in overcoming dataset deficiency.
-  A. Bansal, B. Russell, and A. Gupta. Marr revisited: 2d-3d alignment via surface normal prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5965–5974, 2016.
-  A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng, and Y. Zhang. Matterport3D: Learning from RGB-D data in indoor environments. International Conference on 3D Vision (3DV), 2017.
-  W. Chen, D. Xiang, and J. Deng. Surface normals in the wild. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, pages 22–29, 2017.
-  Y. Cheng, R. Cai, Z. Li, X. Zhao, and K. Huang. Locality sensitive deconvolution networks with gated fusion for rgb-d indoor semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 3, 2017.
-  H. Chu, W.-C. M. K. Kundu, R. Urtasun, and S. Fidler. Surfconv: Bridging 3d and 2d convolution for rgbd images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3002–3011, 2018.
-  A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  H. C. Daniel, J. Kannala, L. Ladický, and J. Heikkilä. Depth map inpainting under a second-order smoothness prior. Springer Berlin Heidelberg, 2013.
-  D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2650–2658, 2015.
-  D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3d primitives for single image understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3392–3399, 2013.
-  X. Gong, J. Liu, W. Zhou, and J. Liu. Guided depth enhancement via a fast marching method. Image & Vision Computing, 31(10):695–703, 2013.
-  S. Gupta, R. Girshick, P. Arbeláez, and J. Malik. Learning rich features from rgb-d images for object detection and segmentation. In European Conference on Computer Vision (ECCV), pages 345–360. Springer, 2014.
-  S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, et al. Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera. In Proceedings of the 24th annual ACM symposium on User interface software and technology, pages 559–568. ACM, 2011.
-  J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila. Noise2Noise: Learning image restoration without clean data. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 2965–2974, 2018.
-  A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. In ACM transactions on graphics (TOG), volume 23, pages 689–694. ACM, 2004.
-  O. Litany, A. Bronstein, M. Bronstein, and A. Makadia. Deformable shape completion with graph convolutional autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1886–1895, 2018.
-  J. Liu, X. Gong, and J. Liu. Guided inpainting and filtering for kinect depth maps. In International Conference on Pattern Recognition, pages 2055–2058, 2012.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431–3440, 2015.
-  R. A. Newcombe, D. Fox, and S. M. Seitz. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 343–352, 2015.
-  J. Pang, W. Sun, C. Yang, J. Ren, R. Xiao, J. Zeng, and L. Lin. Zoom and learn: Generalizing deep stereo matching to novel domains. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas. Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  X. Qi, R. Liao, Z. Liu, R. Urtasun, and J. Jia. Geonet: Geometric neural network for joint depth and surface normal estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 283–291, 2018.
-  N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd images. In European Conference on Computer Vision, pages 746–760. Springer, 2012.
-  H. Su, V. Jampani, D. Sun, S. Maji, E. Kalogerakis, M.-H. Yang, and J. Kautz. Splatnet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2530–2539, 2018.
-  S. Su, F. Heide, G. Wetzstein, and W. Heidrich. Deep end-to-end time-of-flight imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6383–6392, 2018.
-  A. K. Thabet, J. Lahoud, D. Asmar, and B. Ghanem. 3d aware correction and completion of depth maps in piecewise planar scenes. In Asian Conference on Computer Vision, pages 226–241, 2014.
-  P. Wang, X. Shen, B. Russell, S. Cohen, B. Price, and A. L. Yuille. Surge: Surface regularized geometry estimation from a single image. In Advances in Neural Information Processing Systems, pages 172–180, 2016.
-  W. Wang and U. Neumann. Depth-aware cnn for rgb-d segmentation. In European Conference on Computer Vision (ECCV). Springer, 2018.
-  X. Wang, D. Fouhey, and A. Gupta. Designing deep networks for surface normal estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 539–547, 2015.
-  J. Zeng, G. Cheung, M. Ng, J. Pang, and C. Yang. 3d point cloud denoising using graph laplacian regularization of a low dimensional manifold model. arXiv preprint arXiv:1803.07252, 2018.
-  J. Zeng, J. Pang, W. Sun, G. Cheung, and R. Xiao. Deep graph laplacian regularization. arXiv preprint arXiv:1807.11637, 2018.
-  H.-T. Zhang, J. Yu, and Z.-F. Wang. Probability contour guided depth map inpainting and superresolution using non-local total generalized variation. Multimedia Tools and Applications, 77(7):9003–9020, 2018.
-  Y. Zhang and T. Funkhouser. Deep depth completion of a single rgb-d image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 175–185, 2018.
-  Y. Zhang, S. Song, E. Yumer, M. Savva, J.-Y. Lee, H. Jin, and T. Funkhouser. Physically-based rendering for indoor scene understanding using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5057–5065. IEEE, 2017.