Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction
We propose a novel approach to reconstructing indoor scenes from RGB-D data based on plane primitives. Our approach takes as input an RGB-D sequence and a dense coarse mesh reconstructed from it, and generates a lightweight, low-polygon mesh with clear face textures and sharp features, without losing geometry details from the original scene. Compared to existing methods, which only cover large planar regions in the scene, our method models the entire scene with adaptive planes, preserving both geometry details and sharp features in the mesh. Experiments show that our method generates textured meshes from RGB-D data more efficiently than state-of-the-art methods.
Online and offline RGB-D reconstruction techniques have developed rapidly in recent years with the prevalence of consumer depth cameras. State-of-the-art online 3D reconstruction methods can efficiently capture indoor and outdoor scenes in real-world environments with geometry details [12, 17, 18, 4, 13]. However, the 3D models these methods produce are usually too dense and have unsatisfying textures for many reasons, including noisy depth data, incorrect camera poses, and oversmoothing during data fusion. Such models cannot be used directly in most applications without further refinement or post-processing.
To lower the density and improve the structural quality of indoor models, one typical strategy is to introduce plane primitives into the front end (such as camera tracking in SLAM or online reconstruction in [5, 9, 7]) or back end (such as RGB-D mesh and texture refinement in [6, 10, 16]) of the reconstruction pipeline, since typical indoor scenes are primarily composed of planar regions, especially buildings and houses whose structure follows the Manhattan-world assumption. However, almost all methods take into account only large planar regions such as walls, ceilings, floors, and large table surfaces, and simply ignore or remove other objects with free-form surfaces regardless of whether they contain planar regions, such as various indoor furniture and objects on or adhering to large planes. Models with only large planes are too simplified and lack the fidelity needed for applications that require geometry details. Moreover, geometry details are usually noisy because of noisy raw RGB-D data, and it is difficult and time-consuming to extract plane primitives or other types of geometry priors from the scene while still preserving the original shape. In addition, existing back-end methods are usually time-consuming, often taking hours to process a single scan.
In this paper, we present a novel approach to efficiently reconstruct an indoor scene from RGB-D data using planes, generating a lightweight and complete 3D textured model without losing geometry details. Our method takes as input an RGB-D sequence of an indoor scene and a dense coarse mesh reconstructed from this sequence by an online reconstruction method. We first partition the entire dense mesh into planar clusters (Section 2.1), and then simplify the dense mesh into a lightweight mesh without losing geometry details (Section 2.2). Next, we create a texture patch for each plane, sample points on the plane, and run a global optimization that maximizes the photometric consistency of the sampled points across frames by optimizing camera poses, plane parameters, and texture colors (Section 2.3). Finally, we optimize the mesh geometry by maximizing the consistency between the geometry and the plane primitives, which further preserves sharp features of the original scene such as edges and corners at plane intersections (Section 2.4).
Our method is closely based on Wang and Guo's method [16]. Compared to their method, our contribution is to introduce line constraints into both the pose-plane-texture optimization and the geometry optimization, which preserves sharp features better. Our method is also more efficient than [16], as it introduces parallel computation into the optimization. Experiments show that our method outperforms state-of-the-art methods in keeping geometry details and sharp features in the resulting lightweight 3D textured models.
2 Plane-based reconstruction pipeline
Our reconstruction pipeline takes an RGB-D sequence as input, and first uses a state-of-the-art online reconstruction method such as VoxelHashing [12] or BundleFusion [4] to reconstruct an initial dense mesh.
2.1 Mesh planar partition
We aim to partition the entire mesh into plane primitives that include all geometry details. Following the same idea as [16], we adopt a state-of-the-art surface partition algorithm proposed by Cai et al. [1]. This method proposes a new principal component analysis (PCA) based energy, whose minimization leads to a high-quality, optimal piecewise-linear planar approximation of the entire surface. After the input mesh is partitioned into clusters, each cluster is attached to a plane proxy defined by the cluster's centroid and a normal given by the eigenvector with the smallest eigenvalue of the cluster's covariance matrix.
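For concreteness, the plane-proxy computation for one cluster can be sketched as follows (a minimal NumPy illustration of the PCA fit described above; the function name and interface are ours, not the paper's code):

```python
import numpy as np

def plane_proxy(points):
    """Fit a plane proxy (centroid, unit normal, offset) to a cluster of
    3D points given as an (N, 3) array. The normal is the eigenvector of
    the cluster's covariance matrix with the smallest eigenvalue."""
    centroid = points.mean(axis=0)
    q = points - centroid
    cov = q.T @ q                            # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-eigenvalue direction
    d = -np.dot(normal, centroid)            # plane equation: n.x + d = 0
    return centroid, normal, d
```

The smallest eigenvalue itself equals the sum of squared distances from the cluster points to the fitted plane, which is the quantity the PCA-based energy penalizes.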
After obtaining the initial planar partition, we run a further plane merging step that merges adjacent planes into larger ones to reduce noisy, bumpy points on planar regions. Here we also follow [16] and merge adjacent planes only if the angle between their normal directions is small enough and the average distance between the two planes is also small. In addition, we add a rule that merges two neighboring planes if the PCA energy increase after merging is very small compared to the initial energy of one plane. This handles merging a large plane with a small noisy neighboring plane, such as planes on a bumpy floor.
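A possible sketch of the merge test is below. The thresholds and the exact form of the criterion are our assumptions; the paper additionally checks the average distance between the two planes, which this sketch folds into the energy comparison (a large offset between parallel planes also inflates the merged PCA energy):

```python
import numpy as np

def pca_energy(points):
    # Sum of squared distances of the points to their own best-fit plane:
    # the smallest eigenvalue of the cluster's scatter matrix.
    q = points - points.mean(axis=0)
    return float(np.linalg.eigvalsh(q.T @ q)[0])

def should_merge(pts_a, pts_b, n_a, n_b, max_angle_deg=10.0, max_ratio=0.5):
    """Illustrative merge test for two adjacent planar clusters."""
    # Rule 1: normal directions must be nearly parallel.
    if abs(np.dot(n_a, n_b)) < np.cos(np.radians(max_angle_deg)):
        return False
    # Rule 2 (the additional rule in the text): the PCA energy after
    # merging must not grow much beyond the clusters' separate energies.
    e_merged = pca_energy(np.vstack([pts_a, pts_b]))
    e_sep = pca_energy(pts_a) + pca_energy(pts_b)
    return e_merged <= (1.0 + max_ratio) * e_sep + 1e-9
```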
2.2 Mesh simplification
We simplify the mesh based on the clusters to create a lightweight mesh for further optimization. Even though we already have a model composed of planes, we still choose to create the mesh by simplifying the original dense mesh instead of running a mesh generation algorithm (such as Delaunay triangulation) on the planes as in [2, 10, 11], since it is difficult and time-consuming to create correct connectivity from complicated plane intersections in a noisy model, especially an indoor reconstruction mesh containing various geometric objects with free-form shapes. We follow a similar approach to [16], using quadric error metrics (QEM) to simplify the inner-cluster edges first and then all cluster border edges. Note that the simplification of each cluster is independent of the others, so in our experiments we process all inner-cluster edges in parallel to accelerate the simplification.
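The QEM cost underlying this simplification can be illustrated with standard Garland-Heckbert quadrics (a generic sketch, not the paper's implementation):

```python
import numpy as np

def face_quadric(v0, v1, v2):
    """Fundamental error quadric K = p p^T of a triangle's supporting
    plane, with p = (a, b, c, d), unit normal (a, b, c), and plane
    equation a*x + b*y + c*z + d = 0."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    p = np.append(n, -np.dot(n, v0))
    return np.outer(p, p)

def vertex_error(Q, v):
    """QEM cost of placing a vertex at v: vh^T Q vh with vh = (x, y, z, 1),
    i.e. the squared distance to the plane(s) accumulated in Q."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)
```

In a full simplifier, each vertex accumulates the quadrics of its incident faces, and edge collapses are ranked by the error of the summed quadric of the edge's two endpoints.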
2.3 Plane, camera pose and texture optimization
Before optimization, we first generate an initial texture mapping for all faces of the mesh. For each cluster, since the mesh vertices inside the cluster are already nearly co-planar, we simply project these 3D vertices onto the corresponding plane to get a 2D patch, sample grid points inside the patch boundary to get texels, and then back-project them to get the corresponding 3D texel points. We also select keyframes from the RGB-D frames: to reduce time complexity and increase texture quality, we follow the color map optimization of Zhou and Koltun [19] and select only sharp frames in every interval, quantifying the blurriness of each image with the metric by Crete et al. [3].
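The texel generation for one cluster might look like the following sketch. The texel spacing is an assumed parameter, and for brevity we sample the 2D bounding box of the patch rather than testing the exact patch boundary as the text describes:

```python
import numpy as np

def sample_texels(vertices, normal, d, spacing=0.05):
    """Project cluster vertices onto the plane n.x + d = 0 (unit normal n),
    sample a 2D grid over the projected patch, and lift the grid back to
    3D texel points on the plane. Returns an (M, 3) array."""
    # Build an orthonormal in-plane basis (u, v) completing the normal.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    origin = -d * normal                    # a point on the plane
    # Orthogonal projection of the vertices onto the plane.
    q = vertices - np.outer(vertices @ normal + d, normal)
    # 2D patch coordinates and their bounding box.
    uv = np.stack([(q - origin) @ u, (q - origin) @ v], axis=1)
    (umin, vmin), (umax, vmax) = uv.min(axis=0), uv.max(axis=0)
    gu, gv = np.meshgrid(np.arange(umin, umax + spacing, spacing),
                         np.arange(vmin, vmax + spacing, spacing))
    # Back-project the 2D grid to 3D texel points.
    grid = origin + gu[..., None] * u + gv[..., None] * v
    return grid.reshape(-1, 3)
```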
The input to our optimization is the color and depth images of the keyframes, all 3D texel points sampled on the mesh, the initial camera poses $\mathbf{T}_i$ (global to camera space), and the initial plane parameters. During the optimization, we maximize the photometric consistency of the 3D texels' projections on their corresponding planes across frames, optimizing camera poses, plane parameters, and texture colors by minimizing the objective function

$$E = E_c + \lambda_p E_p + \lambda_l E_l, \qquad (1)$$

where $\lambda_p$ and $\lambda_l$ are constants to balance the different terms.
Photometric consistency term. The photometric energy measures the photometric error between the color of each texel's projection point on its corresponding plane and its target color across frames:

$$E_c = \sum_i \sum_{x \in X_i} \left\| \Gamma_i\!\big(\pi(\mathbf{T}_i\, q_x)\big) - C(x) \right\|^2, \qquad (2)$$

where $C(x)$ is the target color for texel $x$, $X_i$ is the set of all 3D texels visible in frame $i$, $\Gamma_i$ samples color image $i$, $\pi$ is the perspective projection from a 3D position to the 2D color image, and $q_x$ in Eq. (2) is the projection of $x$ onto its corresponding plane, represented by a unit normal $\mathbf{n}$ and a scalar $d$:

$$q_x = x - (\mathbf{n}^\top x + d)\,\mathbf{n}. \qquad (3)$$
Plane constraint term. The plane constraint term minimizes the sum of squared distances from the 3D texel points to their corresponding planes:

$$E_p = \sum_x \left(\mathbf{n}_x^\top x + d_x\right)^2, \qquad (4)$$

where $(\mathbf{n}_x, d_x)$ is the plane associated with texel $x$.
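Both the point-to-plane projection of Eq. (3) and the plane constraint term are direct to transcribe (assuming unit normals, as in the text):

```python
import numpy as np

def project_to_plane(x, n, d):
    """Eq. (3): orthogonal projection of a 3D point x onto the plane
    n.x + d = 0, where n is a unit normal."""
    return x - (np.dot(n, x) + d) * n

def plane_energy(texels, n, d):
    """Plane constraint term for one plane: the sum of squared
    point-to-plane distances of its (N, 3) texel points."""
    return float(np.sum((texels @ n + d) ** 2))
```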
Line constraint term. We want to maximize the consistency between 2D line segments and the corresponding 3D lines that form the borders of adjacent planes:

$$E_l = \sum_i \sum_{u \in L_i} \operatorname{dist}^2\!\big(\mathbf{T}_i^{-1}\, \pi^{-1}(u),\; l_u\big), \qquad (5)$$

where $L_i$ is the 2D pixel set of all valid candidate line segments in frame $i$, $\pi^{-1}$ is the inverse perspective function of $\pi$ from 2D to 3D, and $l_u$ is the corresponding 3D line. Each candidate 2D segment is obtained by projecting a valid 3D line, composed of the border vertices shared by clusters, onto the corresponding visible color image, and then finding its nearest 2D line segment, computed by the line segment detector (LSD) [15], within a valid distance range. In our experiments we only use 2D line segments of sufficient length, and only use 3D lines shared by two planes whose normal directions form a sufficiently large angle.
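The 2D matching step can be sketched as a nearest-segment search (a simplified illustration; the pixel threshold is an assumption, and LSD is treated as a black box that returns 2D segments):

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from a 2D point p to the segment with endpoints a, b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def nearest_segment(projected_pts, segments, max_dist=10.0):
    """Match sample points of a projected 3D border line against candidate
    2D segments (e.g. from LSD). Returns the segment with the smallest
    mean distance, or None if nothing is within max_dist pixels."""
    best, best_d = None, max_dist
    for a, b in segments:
        d = np.mean([point_segment_distance(p, a, b) for p in projected_pts])
        if d < best_d:
            best, best_d = (a, b), d
    return best
```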
To minimize the objective function in Eq. (1), we alternate between optimizing different groups of variables with the others fixed, using the standard Gauss-Newton method. The optimization of each plane is independent of the others, so we solve them in parallel. Compared to [16], our method removes the image correction term from Eq. (1), since we found that it has little influence on the result while greatly increasing the time complexity, with more than 700 additional image correction parameters to optimize per frame. Meanwhile, we add the line constraint term to better preserve sharp features.
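A single damped Gauss-Newton update for one variable block, with the other blocks held fixed, looks like this generic sketch (our own illustration in dense algebra; the actual system exploits per-plane independence and sparsity):

```python
import numpy as np

def gauss_newton_step(residual, jacobian, theta, damping=1e-6):
    """One Gauss-Newton update for a parameter block theta, given the
    residual vector r(theta) and its Jacobian J(theta). A small damping
    term keeps the normal matrix J^T J invertible."""
    r = residual(theta)
    J = jacobian(theta)
    H = J.T @ J + damping * np.eye(theta.size)
    return theta - np.linalg.solve(H, J.T @ r)
```

In the alternation, `theta` would in turn be the camera poses, the plane parameters, or the texture colors, with the residual re-linearized around the current values of the fixed blocks.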
2.4 Geometry optimization
The final step is to optimize the mesh geometry to fit the planes as closely as possible, reducing noise on the mesh surface and sharpening geometric features, since fused meshes reconstructed from RGB-D data always contain noise or oversmoothed surfaces, such as bumpy patches on planar regions and smoothed borders that are supposed to be sharp features.
To optimize the consistency between the geometry and the planes, we maximize the consistency between the mesh vertices in each cluster and their corresponding planes. Each 2D texel is located inside the projection of a triangular face. We utilize the initial barycentric relationship between each texel and its corresponding face, and try to preserve this relationship between the texel points' projections on the planes and the optimized vertices of each face:

$$\min_{V} \; E_g + \lambda_b E_b + \lambda_r E_r, \qquad (6)$$

where $E_g$ is the geometry consistency term

$$E_g = \sum_x \Big\| q_x - \sum_{j=1}^{3} b_{x,j}\, \mathbf{v}_{f(x),j} \Big\|^2, \qquad (7)$$

where $q_x$ is the projection of 3D texel point $x$ onto its corresponding plane as described in Eq. (3), $f(x)$ is the index of the face $x$ corresponds to, $\mathbf{v}_{f(x),j}$ is the $j$-th vertex of face $f(x)$, $b_{x,j}$ is $x$'s initial barycentric coordinate corresponding to the $j$-th vertex of face $f(x)$, and $\lambda_b$ and $\lambda_r$ are constants to balance the different terms.
Compared to [16], we add a new term $E_b$, similar to the $E_p$ term in Eq. (1), which ensures that all border vertices shared by adjacent planes/clusters are as close to their corresponding planes as possible:

$$E_b = \sum_{\mathbf{v} \in B} \sum_{p \in P(\mathbf{v})} \left(\mathbf{n}_p^\top \mathbf{v} + d_p\right)^2, \qquad (8)$$

where $B$ is the border vertex set and $P(\mathbf{v})$ is the set of planes sharing vertex $\mathbf{v}$.
The last term in Eq. (6) is a regularization term that minimizes the difference between each vertex and the mass center of its neighbors:

$$E_r = \left\| \mathbf{L} V \right\|_F^2, \qquad (9)$$

Here $V$ is the $n \times 3$ matrix of target vertices we want to compute, with $n$ the number of vertices of the mesh, and $\mathbf{L}$ is the $n \times n$ discrete graph Laplacian matrix based on the connectivity of the mesh. That is, we minimize the difference between each optimized vertex and the average of its neighboring vertices. This term ensures that the problem in Eq. (6) has valid solutions.
The problem in Eq. (6) is a sparse linear least-squares problem and can be solved efficiently by Cholesky decomposition. Figure 1 shows a comparison between the original dense mesh produced by BundleFusion [4] and our mesh on the scan 'office0' from the BundleFusion dataset. Our method preserves the sharp features in the final lightweight mesh very well. Compared to the method in [16], we add the line constraint term to better preserve line features. Figure 2 compares the result meshes with and without line constraints on the same scene as Figure 1.
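The sparse solve can be sketched with SciPy. In this illustration `A` stacks the linear least-squares constraints (our simplification of Eq. (6)), and we use a sparse direct factorization via `scipy.sparse.linalg.factorized` as a stand-in for the Cholesky solver named in the text:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_vertices(A, b, L, lam=1.0):
    """Solve min_V ||A V - b||^2 + lam * ||L V||^2 for the n x 3 vertex
    matrix V. The normal equations (A^T A + lam L^T L) V = A^T b form a
    sparse symmetric positive-definite system; the factorization is
    computed once and reused for the x, y, and z coordinate columns."""
    M = (A.T @ A + lam * (L.T @ L)).tocsc()
    solve = spla.factorized(M)          # sparse direct factorization of M
    rhs = np.asarray(A.T @ b)           # (n, 3) right-hand side
    return np.column_stack([solve(rhs[:, k]) for k in range(rhs.shape[1])])
```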
We tested our method on the same 10 scans as [16] from three popular RGB-D datasets: 6 models from BundleFusion [4] (the first 6 rows in Table 1), 2 from ICL-NUIM [8] (the following 2 rows), and 2 from the TUM RGB-D dataset [14] (the last 2 rows). Table 1 shows quantitative data for each scan and our result models. Note that the number of faces or vertices of each result model is only 1%-3% of that of the original dense model. Figure 3 compares textured meshes from our method and two state-of-the-art systems, BundleFusion [4] and 3DLite [10]; strictly speaking, the dense models produced by BundleFusion are the input to both 3DLite and our method.
We implemented our method in C++ (result models and part of the source code are available at https://github.com/chaowang15/plane-opt-rgbd) and tested it on a desktop with an Intel Core i7 2.5GHz CPU and 16 GB of memory. The running time on each scan is given in Table 1. Our average running time is only about 5-10 minutes, compared to several hours for 3DLite [10] and approximately 30 minutes for Wang and Guo's method [16] on the same dataset. For acceleration, we use OpenMP to simplify different clusters in parallel, and the GPU to compute the Jacobian matrix in the plane, texture, and pose optimization.
Limitations. Our method shares some limitations with [16]. First, our face textures are not as sharp as 3DLite's, since the latter introduces many techniques to optimize textures, such as texture sharpening and color correction across frames. However, these steps are very time-consuming and can take hours in [10]; we plan to find a faster way to further optimize textures with results similar to 3DLite's. Moreover, our method still cannot fill the holes and gaps that often appear in RGB-D scans, while 3DLite can generate complete geometry from extracted large planes by extrapolating existing planes and filling holes.
- [1] Yiqi Cai, Xiaohu Guo, Yang Liu, Wenping Wang, Weihua Mao, and Zichun Zhong. Surface approximation via asymptotic optimal geometric partition. IEEE Transactions on Visualization and Computer Graphics, 23(12):2613–2626, 2017.
- [2] Anne-Laure Chauve, Patrick Labatut, and Jean-Philippe Pons. Robust piecewise-planar 3D reconstruction and completion from large-scale unstructured point data. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 1261–1268. IEEE, 2010.
- [3] Frederique Crete, Thierry Dolmiere, Patricia Ladret, and Marina Nicolas. The blur effect: perception and estimation with a new no-reference perceptual blur metric. In Human Vision and Electronic Imaging XII, volume 6492, page 64920I. International Society for Optics and Photonics, 2007.
- [4] Angela Dai, Matthias Nießner, Michael Zollhöfer, Shahram Izadi, and Christian Theobalt. BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (TOG), 36(3):24, 2017.
- [5] Mingsong Dou, Li Guan, Jan-Michael Frahm, and Henry Fuchs. Exploring high-level plane primitives for indoor 3D reconstruction with a hand-held RGB-D camera. In Asian Conference on Computer Vision, pages 94–108. Springer, 2012.
- [6] Maksym Dzitsiuk, Jürgen Sturm, Robert Maier, Lingni Ma, and Daniel Cremers. De-noising, stabilizing and completing 3D reconstructions on-the-go using plane priors. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 3976–3983. IEEE, 2017.
- [7] Maciej Halber and Thomas Funkhouser. Fine-to-coarse global registration of RGB-D scans. In Proc. Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.
- [8] A. Handa, T. Whelan, J. B. McDonald, and A. J. Davison. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In IEEE Intl. Conf. on Robotics and Automation (ICRA), Hong Kong, China, May 2014.
- [9] Ming Hsiao, Eric Westman, Guofeng Zhang, and Michael Kaess. Keyframe-based dense planar SLAM. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 5110–5117. IEEE, 2017.
- [10] Jingwei Huang, Angela Dai, Leonidas Guibas, and Matthias Nießner. 3DLite: towards commodity 3D scanning for content creation. ACM Transactions on Graphics (TOG), 36(6), 2017.
- [11] Yangyan Li, Xiaokun Wu, Yiorgos Chrysathou, Andrei Sharf, Daniel Cohen-Or, and Niloy J. Mitra. GlobFit: Consistently fitting primitives by discovering global relations. In ACM Transactions on Graphics (TOG), volume 30, page 52. ACM, 2011.
- [12] Matthias Nießner, Michael Zollhöfer, Shahram Izadi, and Marc Stamminger. Real-time 3D reconstruction at scale using voxel hashing. ACM Transactions on Graphics (TOG), 32(6):169, 2013.
- [13] Victor Adrian Prisacariu, Olaf Kähler, Stuart Golodetz, Michael Sapienza, Tommaso Cavallari, Philip H. S. Torr, and David W. Murray. InfiniTAM v3: A framework for large-scale 3D reconstruction with loop closure. arXiv preprint arXiv:1708.00783, 2017.
- [14] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In Proc. of the International Conference on Intelligent Robot Systems (IROS), Oct. 2012.
- [15] Rafael Grompone Von Gioi, Jeremie Jakubowicz, Jean-Michel Morel, and Gregory Randall. LSD: A fast line segment detector with a false detection control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):722–732, 2010.
- [16] Chao Wang and Xiaohu Guo. Plane-based optimization of geometry and texture for RGB-D reconstruction of indoor scenes. In 2018 International Conference on 3D Vision (3DV), pages 533–541. IEEE, 2018.
- [17] Thomas Whelan, Stefan Leutenegger, Renato Salas-Moreno, Ben Glocker, and Andrew Davison. ElasticFusion: Dense SLAM without a pose graph. In Robotics: Science and Systems, 2015.
- [18] Thomas Whelan, Renato F. Salas-Moreno, Ben Glocker, Andrew J. Davison, and Stefan Leutenegger. ElasticFusion: Real-time dense SLAM and light source estimation. The International Journal of Robotics Research, 35(14):1697–1716, 2016.
- [19] Qian-Yi Zhou and Vladlen Koltun. Color map optimization for 3D reconstruction with consumer depth cameras. ACM Transactions on Graphics (TOG), 33(4):155, 2014.