Unsupervised Depth Completion from Visual Inertial Odometry

Abstract

We describe a method to infer dense depth from camera motion and sparse depth as estimated using a visual-inertial odometry system. Unlike other scenarios using point clouds from lidar or structured light sensors, we have a few hundred to a few thousand points, insufficient to inform the topology of the scene. Our method first constructs a piecewise planar scaffolding of the scene, and then uses it to infer dense depth using the image along with the sparse points. We use a predictive cross-modal criterion, akin to "self-supervision," measuring photometric consistency across time, forward-backward pose consistency, and geometric compatibility with the sparse point cloud. We also present the first visual-inertial + depth dataset, which we hope will foster additional exploration into combining the complementary strengths of visual and inertial sensors. To compare our method to prior work, we adopt the unsupervised KITTI depth completion benchmark, where we achieve state-of-the-art performance.

Visual Learning, Sensor Fusion

I Introduction

A sequence of images is a rich source of information about both the three-dimensional (3D) shape of the environment and the motion of the sensor within. Motion can be inferred at most up to a scale and a global Euclidean reference frame, provided sufficient parallax and a number of visually discriminative Lambertian regions that are fixed in the environment and visible from the camera. The position of such regions in the scene defines the Euclidean reference frame, with respect to which motion is estimated. Scale, as well as two directions of orientation, can be further identified by fusion with inertial measurements (accelerometers and gyroscopes) and, if available, a magnetometer can fix the last (Gauge) degree of freedom.

Fig. 1: Depth completion with Visual-Inertial Odometry (VIO) on the proposed VOID dataset (best viewed in color). Bottom left: sparse reconstruction (blue) and camera trajectory (yellow) from VIO. The highlighted region is densified and zoomed in on the top right. Top left shows an image of the same region, which is taken as input and fused with the sparse depth by our method. On the bottom right is the same view showing only the sparse points, insufficient to determine scene geometry and topology.

Because the regions defining the reference frame have to be visually distinctive (“features”), they are typically sparse. In theory, three points are sufficient to define a Euclidean Gauge if visible at all times. In practice, because of occlusions, any Structure From Motion (SFM) or simultaneous localization and mapping (SLAM) system maintains an estimate of the location of a sparse set of features, or “sparse point cloud,” typically in the hundreds to thousands. These are sufficient to support a point-estimate of motion, but a rather poor representation of shape as they do not reveal the topology of the scene: The empty space between points could be empty, or occupied by a solid with a smooth surface radiance (appearance). Attempts to densify the sparse point cloud, by interpolation or regularization with generic priors such as smoothness, piecewise planarity and the like, typically fail since SFM yields far too sparse a reconstruction to inform topology. This is where the image comes back in.

Inferring shape is ill-posed, even if the point cloud was generated with a lidar or structured light sensor. Filling the gaps relies on assumptions about the environment. Rather than designing ad-hoc priors, we wish to use the image to inform and restrict the set of possible scenes that are compatible with the given sparse points.

Summary of contributions

We use a predictive cross-modal criterion to score dense depth from images and sparse depth. This kind of approach is sometimes referred to as “self-supervised.” Specifically, our method (i) exploits a set of constraints from temporal consistency (a.k.a. photometric consistency across temporally adjacent frames) to pose (forward-backward) consistency in a combination that has not been previously explored. To enable our pose consistency term, we introduce (ii) a set of logarithmic and exponential mapping layers for our network to represent motion using exponential coordinates, which we found to improve reconstruction compared to other parametrizations.

The challenge in using sparse depth as a supervisory (feedback) signal is precisely that it is sparse. Information at the points does not propagate to fill the domain where depth is defined. Some computational mechanism to “diffuse the information” from the sparse points to their neighbors is needed. Our approach proposes (iii) a simple method akin to using a piecewise planar “scaffolding” of the scene, sufficient to transfer the supervisory signal from sparse points to their neighbors. This yields a two-stage approach, where the sparse points are first processed to design the scaffolding (“meshing and interpolation”) and then “refined” using the images as well as priors from the constraints just described.

One additional contribution of our approach is (iv) to introduce the first visual-inertial + depth dataset. The role of inertials is to enable reconstruction in metric scale, which is critical for robotic applications. Although scale can be obtained via other sensors, e.g., stereo, lidar, and RGB-D, we note they are not as widely available as monocular cameras with inertials (almost every modern phone has it) and consume more power. Since inertial sensors are now ubiquitous and typically co-located with cameras in many mobile devices from phones to cars, we hope this dataset will foster additional exploration into combining the complementary strengths of visual and inertial sensors.

To evaluate our method, since no other visual-inertial + depth benchmark is available, and to facilitate comparison with similar methods, we adopt the KITTI benchmark, where a Velodyne (lidar) sensor provides sparse points with scale, unlike monocular SFM, but like visual-inertial odometry (VIO). Although the biases in lidar are different from VIO, this can be considered a baseline. Note that we only use the monocular stream of KITTI (not stereo) for fair comparison.

The result is a (v) two-stage approach of scaffolding and refining with a network that contains far fewer parameters than competing methods, yet achieves state-of-the-art performance on the "unsupervised" KITTI benchmark (a misnomer). The supervision in the KITTI benchmark is really fusion from separate sensory channels, combined with ad-hoc interpolation and extrapolation. It is unclear whether the benefit from having such data is outweighed by the biases it induces on the estimate, and in any case such supervision does not scale; hence, we forgo (pseudo) ground truth annotations altogether.

II Related Work

Supervised Depth Completion minimizes the discrepancy between ground truth depth and depth predicted from an RGB image and sparse depth measurements. Methods focus on network topology [16, 26, 29], optimization [4, 5, 31], and modeling [6, 10]. To handle sparse depth, [16] employed early fusion, where the image and sparse depth are convolved separately and the results concatenated as the input to a ResNet encoder. [11] proposed late fusion via a U-net containing two NASNet encoders for image and sparse depth and jointly learned depth and semantic segmentation, whereas [29] used ResNet encoders for late fusion. [6] proposed a normalized convolutional layer to propagate sparse depth and used a binary validity map as a confidence measure. [10] proposed an upsampling layer and joint concatenation and convolution to deal with sparse inputs. All these methods require per-pixel ground-truth annotation. What is called “ground truth” in the benchmarks is actually the result of data processing and aggregation of many consecutive frames. We skip such supervision and just infer dense depth by learning the cross-modal fusion from the virtually infinite volume of un-annotated data.

Unsupervised Depth Completion methods, such as [16, 23, 29], predict depth by minimizing the discrepancy between prediction and sparse depth input as well as the photometric error between the input image and its reconstruction from other viewpoints available only during training. [16] used Perspective-n-Point (PnP) [15] and Random Sample Consensus (RANSAC) [8] to align monocular image sequences for their photometric term with a second-order smoothness prior. Yet, [16] does not generalize well to indoor scenes that contain many textureless regions (e.g. walls), where PnP with RANSAC may fail. [23] used a local smoothness term, but instead minimized the photometric error between rectified stereo pairs where pose is known. [29] also leveraged stereo pairs and a more sophisticated photometric loss [28]. [29] replaced the generic smoothness term with a learned prior to regularize their prediction. To accomplish this, [29] requires a conditional prior network (CPN) that is trained on an additional dataset (representative of the depth completion dataset) in a fully supervised manner using ground-truth depth. The CPN does not generalize well outside its training domain (e.g. one cannot use a CPN trained on outdoor scenes to regularize depth predictions for indoor ones). Hence, [29] is essentially not unsupervised and has limited applicability. In contrast, our method is trained on monocular sequences, is fully unsupervised, and does not use any auxiliary ground-truth supervision. Unlike previous works, our method requires neither large networks ([11, 16, 23, 29]) nor complex network operations ([6, 10]). Moreover, our method outperforms [16, 29] on the unsupervised KITTI depth completion benchmark [26] while using fewer parameters.

Rotation Parameterization. To construct the photometric consistency loss during training, an auxiliary pose network is needed if no camera poses are available. While the translational part of the relative pose can be modeled as a vector in $\mathbb{R}^3$, the rotational part belongs to the special orthogonal group $SO(3)$ [17], which is represented by a $3\times 3$ orthogonal matrix. [13] uses quaternions, which require an additional norm constraint; this is a soft constraint imposed in the loss function, and thus is not guaranteed. [7, 30, 32] use Euler angles, which may result in a non-orthogonal rotation matrix due to rounding error from the multiplication of many sine and cosine terms. We use the exponential map on $\mathfrak{so}(3)$ to map the output of the pose network to a rotation matrix. Though theoretically similar, we empirically found the exponential map to be more beneficial than Euler angles (Sec. VII).

Our contributions are a simple, yet effective two-stage approach resulting in a large reduction in network parameters while achieving state-of-the-art performance on the unsupervised KITTI depth completion benchmark; using exponential parameterization of rotation for our pose network; a pose consistency term that enforces forward and backward motion to be the inverse of each other; and finally a new depth completion benchmark for visual-inertial odometry systems with indoor and outdoor scenes and challenging motion.

Fig. 2: System diagram (best viewed in color). We first build a scaffolding from the sparse depth estimated by VIO. Then, together with the image, the scaffolding is fed to the refinement network as input to produce the dense output. Note: the pose network (blue) is only needed in one operation mode and is only used in training. In the other operation mode, VIO poses are used instead. The scaffolding module (red) is parameter-free – leading to our light-weight two-stage approach.

III Method Formulation

We reconstruct a 3D scene given an RGB image $I_t$ and the associated set of sparse depth measurements $z$, defined over a sparse subset $\Omega_z$ of the image domain $\Omega$.

We begin by assuming that world surfaces are graphs of smooth functions (charts) locally supported on a piecewise planar domain (scaffolding). We construct the scaffolding from the sparse point cloud ("Scaffolding" in Fig. 3) to obtain an initial dense approximation $d_0$, then learn a completion model refining $d_0$ by leveraging the monocular sequence of frames before and after the given time $t$, as well as the sparse depth $z$. We compose a surrogate loss (Eqn. 2) for driving the training process, using an encoder-decoder architecture parameterized by weights $\theta$, where the input is an image $I_t$ with its scaffolding $d_0$, and the output is the dense depth $d$. Fig. 2 shows an overview of our approach.

III-A A Two-Stage Approach

Depth completion is a challenging problem due to the sparsity of the depth input. Since the sparse depth measurements cover only a small fraction of the image plane in the outdoor self-driving scenario (Sec. V-A) and even less in the indoor setting (Sec. VII-C), generally only a single measurement will be present within a local neighborhood, and in most instances none. This renders conventional convolutions ineffective, as each sparse depth measurement can be seen as a Dirac delta and convolving a kernel over the entire sparse depth input will give mostly zero activations. Hence, [6], [10], and [26] proposed specialized operations to propagate the information from the sparse depth input through the network. We, instead, propose a two-stage approach that circumvents this problem by first approximating a coarse scene geometry with scaffolding and training a network to refine the approximation.

III-B Scaffolding

Given sparse depth measurements $z$, our goal is to create a coarse approximation of the scene; yet, the topology of the scene is not informed by $z$. Hence, we must rely on a prior or an assumption – that surfaces are locally smooth and piecewise planar. We begin by applying the lifting transform [3] to the sparse points, mapping them from 2-d to 3-d space. We then compute their convex hull [2], of which the lower envelope is the Delaunay triangulation of the points – resulting in a triangular mesh whose faces are parameterized in Barycentric coordinates.

To form the tessellation of the triangular mesh, we approximate each surface using linear interpolation within the Barycentric coordinates, and the resulting scaffolding is projected back onto the image plane to produce $d_0$. For a given triangle, simple interpolation is sufficient for recovering the plane as a linear combination of the co-planar points. For sets of points that are not co-planar, interpolation gives an approximation, which we refine using a network. We note that our approximation cannot be achieved by simply filtering (e.g. Gaussian) to propagate depth values, as the filter would produce mostly zeros and even destroy the sparse depth signal.
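To make the construction concrete, the following is a minimal sketch of the scaffolding stage using SciPy, which internally performs the Delaunay triangulation (via the lifting transform and convex hull) and the Barycentric linear interpolation described above; the function name and the fallback for pixels outside the convex hull are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def scaffolding(sparse_depth):
    """Piecewise planar scaffolding from a sparse depth map (H x W array, zeros = missing).

    LinearNDInterpolator triangulates the measurement locations (Delaunay, computed
    internally via the lifting transform and convex hull) and interpolates linearly
    inside each triangle using Barycentric coordinates, i.e. the piecewise planar
    approximation described above.
    """
    h, w = sparse_depth.shape
    rows, cols = np.nonzero(sparse_depth)        # pixel locations with measurements
    points = np.stack([cols, rows], axis=1)      # (x, y) coordinates of those pixels
    values = sparse_depth[rows, cols]            # depth values at those locations

    interpolate = LinearNDInterpolator(points, values)
    xx, yy = np.meshgrid(np.arange(w), np.arange(h))
    dense = interpolate(np.stack([xx.ravel(), yy.ravel()], axis=1)).reshape(h, w)

    # pixels outside the convex hull are NaN; fall back to the mean measured depth
    return np.where(np.isnan(dense), values.mean(), dense)
```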

III-C Refinement

Given an RGB image $I_t$ and its corresponding piecewise planar scaffolding $d_0$, we train a network to recover the 3-d scene by refining $d_0$ based on information from $I_t$. Our network learns to refine without ground-truth supervision by minimizing Eqn. 2 (see Sec. IV).

Network Architecture. We propose two encoder-decoder architectures with skip connections following the late fusion paradigm [11, 29]. Each encoder has an image branch and a depth branch, with the image branch accounting for the large majority of the encoder parameters (Tables VII, VIII). The latent representations of the two branches are concatenated and fed to the decoder. We propose a VGG11 encoder (5.7M parameters) containing 8 convolution layers per branch as our best performing model, and a VGG8 encoder (2.4M parameters) containing only 5 convolution layers per branch as our light-weight model. This is in contrast to other unsupervised methods [16] (early fusion) and [29] (late fusion) – both of which use ResNet34 encoders, with 23.8M and 14.8M parameters, respectively. [16, 29] and our approach share the same decoder architecture containing 4M parameters. We show in Sec. VII that despite having far fewer encoder parameters than [16] and [29], our VGG11 model outperforms both. Moreover, performance does not degrade by much from VGG11 to VGG8, and VGG8 still surpasses [16] and [29] with an even larger reduction in encoder parameters. Unlike [16, 29], which require power-hungry hardware, our approach is computationally cheap and can be deployed on low-powered agents using an Nvidia Jetson.
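As a schematic of the late fusion design, the PyTorch sketch below builds the two encoder branches and concatenates their latent codes. It is for illustration only (the authors' implementation is in TensorFlow); the channel widths follow the VGG8 layout of Table VIII, while the uniform kernel sizes, strides, and the reading of the 2-channel depth input as scaffolding plus a validity map are simplifying assumptions.

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Two-branch encoder: image and depth are processed separately (no shared
    weights) and their latent codes are concatenated (late fusion)."""

    def __init__(self):
        super().__init__()
        # channel widths follow Table VIII (VGG8); kernel sizes and strides simplified
        self.image_branch = self._branch(in_ch=3, widths=[48, 96, 192, 384, 384])
        self.depth_branch = self._branch(in_ch=2, widths=[16, 32, 64, 128, 128])

    @staticmethod
    def _branch(in_ch, widths):
        layers, prev = [], in_ch
        for w in widths:
            layers += [nn.Conv2d(prev, w, kernel_size=3, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            prev = w
        return nn.Sequential(*layers)

    def forward(self, image, scaffolding):
        z_image = self.image_branch(image)           # 384-channel code at 1/32 resolution
        z_depth = self.depth_branch(scaffolding)     # 128-channel code at 1/32 resolution
        return torch.cat([z_image, z_depth], dim=1)  # 512-channel fused latent code
```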


Fig. 3: Learning to refine (best viewed in color). Our network learns to refine the input scaffolding. Green rectangles highlight the regions for comparison throughout the course of training. The network first learns to copy the input and later learns to fuse information from the RGB image to refine the approximated depth from scaffolding (see the pedestrian in row 1 and the street signs in row 2).

Logarithmic and Exponential Map Layers. To construct our objective (Eqn. 2), we leverage a pose network [13] to regress the relative camera poses. We present a novel logarithmic map layer $\log: SO(3) \to \mathfrak{so}(3)$, where $\mathfrak{so}(3)$ is the tangent space of $SO(3)$ at the identity, and an exponential map layer $\exp: \mathfrak{so}(3) \to SO(3)$ – for mapping between the two. We use the logarithmic map to construct the pose consistency loss (Eqn. 6), and the exponential map to map the output $\omega$ of the pose network, expressed as coordinates in $\mathfrak{so}(3)$, to a rotation matrix:

$R = \exp(\hat{\omega}) = I + \frac{\sin\|\omega\|}{\|\omega\|}\,\hat{\omega} + \frac{1-\cos\|\omega\|}{\|\omega\|^2}\,\hat{\omega}^2$   (1)

where the hat operator $\hat{\cdot}$ maps $\omega \in \mathbb{R}^3$ to a skew-symmetric matrix [17]. We train our pose network using a surrogate loss (Eqn. 3) without explicit supervision. Ablation studies on the use of exponential coordinates and pose consistency for depth completion can be found in Tables III and IV.
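A minimal NumPy sketch of the two layers (Rodrigues' formula for the exponential map and its inverse) is given below for illustration; the thresholds and function names are our own choices, and the authors' layers are differentiable network layers rather than NumPy functions.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_map(w):
    """Exponential map so(3) -> SO(3) (Rodrigues' formula, Eqn. 1)."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3) + hat(w)            # first-order approximation near zero
    W = hat(w)
    return (np.eye(3)
            + np.sin(theta) / theta * W
            + (1.0 - np.cos(theta)) / theta**2 * (W @ W))

def log_map(R):
    """Logarithmic map SO(3) -> so(3), returning exponential coordinates.
    (For brevity this sketch does not handle rotations with angle near pi.)"""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w_hat = (R - R.T) * theta / (2.0 * np.sin(theta))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])
```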

Our approach contains two stages: (i) we generate a coarse piecewise planar approximation of the scene from the sparse depth inputs via scaffolding and (ii) we feed the resulting depth map along with the associated RGB image to our network for refinement (Fig. 3). This approach alleviates the need for the network to learn from sparse inputs, for which [16] and [29] compensated with additional parameters. We show the effectiveness of this approach by achieving the state of the art on the unsupervised KITTI depth completion benchmark with half as many parameters as the prior art.

IV Loss Function

Our loss function is a linear combination of four terms that constrain (i) the photometric consistency between the observed image and its reconstructions from the monocular sequence, (ii) the predicted depth to be similar to that of the associated available sparse depth, (iii) the composition of the predicted forward and backward relative poses to be the identity, and (iv) the prediction to adhere to local smoothness.

$\mathcal{L} = w_{ph}\,\ell_{ph} + w_{sz}\,\ell_{sz} + w_{pc}\,\ell_{pc} + w_{sm}\,\ell_{sm}$   (2)

where $\ell_{ph}$ denotes photometric consistency, $\ell_{sz}$ sparse depth consistency, $\ell_{pc}$ pose consistency, and $\ell_{sm}$ local smoothness. Each loss term is described in the next subsections and the associated weights $w$ in Sec. VI.

IV-A Photometric Consistency

We enforce temporal consistency by minimizing the discrepancy between each observed image $I_t$ and its reconstruction $\hat{I}_t$ from temporally adjacent images $I_\tau$, where $\tau \in \{t-1, t+1\}$:

$\hat{I}_t(x, d) = I_\tau\big(\pi\, g_{\tau t}\, K^{-1}\, \bar{x}\, d(x)\big)$   (3)

where $\bar{x}$ are the homogeneous coordinates of $x \in \Omega$, $g_{\tau t}$ is the relative pose of the camera from time $t$ to $\tau$, $K$ denotes the camera intrinsics, and $\pi$ refers to the perspective projection.

Our photometric consistency term is a combination of the average per-pixel reprojection residual with an $\ell_1$ penalty and SSIM [28], a perceptual metric that is invariant to local illumination changes:

$\ell_{ph} = \frac{1}{|\Omega|}\sum_{\tau}\sum_{x\in\Omega} w_{co}\,\big|\hat{I}_t(x,d) - I_t(x)\big| + w_{st}\,\big(1 - \mathrm{SSIM}(\hat{I}_t(x,d), I_t(x))\big)$   (4)

We use image patches centered at location $x$ for SSIM. The weights $w_{co}$ and $w_{st}$ can be found in Sec. VI.
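The sketch below illustrates the image reconstruction of Eqn. 3 with NumPy and SciPy for a single grayscale frame; it is a simplified, non-differentiable stand-in for the warping used during training, and the function name and boundary handling are our own choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reconstruct_from_adjacent(I_adj, depth, K, g):
    """Warp a grayscale (H x W) adjacent frame into the current view (Eqn. 3).

    depth : predicted dense depth of the current frame (H x W)
    K     : 3x3 camera intrinsics
    g     : 4x4 relative pose taking 3D points from the current to the adjacent frame
    """
    h, w = depth.shape
    xx, yy = np.meshgrid(np.arange(w), np.arange(h))
    x_bar = np.stack([xx, yy, np.ones_like(xx)], 0).reshape(3, -1).astype(np.float64)

    # back-project pixels with the predicted depth, move them to the adjacent frame
    points = np.linalg.inv(K) @ x_bar * depth.reshape(1, -1)
    points = g[:3, :3] @ points + g[:3, 3:4]

    # perspective projection into the adjacent image plane
    uvw = K @ points
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]

    # bilinear sampling of the adjacent frame at the reprojected coordinates
    I_hat = map_coordinates(I_adj, [v, u], order=1, mode='nearest')
    return I_hat.reshape(h, w)
```

The $\ell_1$ part of Eqn. 4 is then the mean absolute difference between the warped frame and $I_t$; the SSIM term, computed over small patches, is omitted here for brevity.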

IV-B Sparse Depth Consistency

Our sparse depth consistency term provides our predictions with metric scale by encouraging the predictions to be similar to the metric sparse depth available from lidar in the KITTI dataset (Sec. V-A) and from sparse reconstruction in our visual-inertial dataset (Sec. V-B). Our sparse depth consistency loss is the $\ell_1$-norm of the difference between the predicted depth $d$ and the sparse depth $z$, averaged over $\Omega_z$ (the support of the sparse depth):

$\ell_{sz} = \frac{1}{|\Omega_z|}\sum_{x\in\Omega_z} \big|d(x) - z(x)\big|$   (5)
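As a minimal NumPy sketch (assuming missing measurements are encoded as zeros, which is our convention here, not necessarily the authors'):

```python
import numpy as np

def sparse_depth_loss(pred, sparse):
    """L1 discrepancy between the prediction and the sparse depth,
    averaged over the support of the sparse measurements (Eqn. 5)."""
    support = sparse > 0                       # pixels with a sparse measurement
    return np.abs(pred[support] - sparse[support]).mean()
```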

IV-C Pose Consistency

A pose network takes an ordered pair of images $(I_t, I_\tau)$ and outputs the relative pose $g_{\tau t}$ (forward pose). When the temporally swapped pair $(I_\tau, I_t)$ is fed to the network, the network is expected to output the backward pose $g_{t\tau}$ – the inverse of $g_{\tau t}$, i.e., $g_{t\tau} = g_{\tau t}^{-1}$. The forward-backward pose consistency thus penalizes the deviation of the composed pose from the identity:

$\ell_{pc} = \big\|\log(R)\big\|_2 + \big\|t\big\|_2, \quad \text{where} \; \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = g_{t\tau}\, g_{\tau t}$   (6)

where $\log$ is the logarithmic map.
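A plausible NumPy instantiation of this term, penalizing the rotation angle and translation magnitude of the composed pose (our reading of Eqn. 6, not necessarily the authors' exact weighting):

```python
import numpy as np

def pose_consistency_loss(g_fwd, g_bwd):
    """Penalize the deviation of the composed forward/backward pose from the identity.

    g_fwd, g_bwd are 4x4 relative poses predicted from (I_t, I_tau) and (I_tau, I_t).
    """
    g = g_fwd @ g_bwd                                  # ideally the identity
    R, t = g[:3, :3], g[:3, 3]
    # the norm of log(R) equals the rotation angle of R
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle) + np.linalg.norm(t)
```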

IV-D Local Smoothness

We impose a smoothness loss on the predicted depth by applying an $\ell_1$ penalty to the gradients in both the x and y directions of the predicted depth $d$:

$\ell_{sm} = \frac{1}{|\Omega|}\sum_{x\in\Omega} \lambda_X(x)\,\big|\partial_X d(x)\big| + \lambda_Y(x)\,\big|\partial_Y d(x)\big|$   (7)

where $\lambda_X$ and $\lambda_Y$ are the edge-awareness weights, functions of the image gradients, to allow for discontinuities in regions corresponding to object boundaries.
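A minimal NumPy sketch of this term follows; the specific edge-awareness weighting $e^{-|\partial I|}$ is a common choice in the literature and is used here for illustration rather than as the authors' exact definition.

```python
import numpy as np

def smoothness_loss(depth, image):
    """Edge-aware L1 smoothness (Eqn. 7): depth gradients are penalized less where
    the (grayscale) image has strong gradients, allowing depth discontinuities
    at object boundaries."""
    dx_d = np.abs(depth[:, 1:] - depth[:, :-1])           # horizontal depth gradients
    dy_d = np.abs(depth[1:, :] - depth[:-1, :])           # vertical depth gradients
    wx = np.exp(-np.abs(image[:, 1:] - image[:, :-1]))    # edge-awareness weights
    wy = np.exp(-np.abs(image[1:, :] - image[:-1, :]))
    return (wx * dx_d).mean() + (wy * dy_d).mean()
```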

Fig. 4: Qualitative evaluation on the KITTI benchmark. Top to bottom: input image and sparse depth, results of [16], our results. Results are taken from the KITTI online test server. Warmer colors in the error map denote higher error. Green rectangles highlight regions for detailed comparison. We perform better in general, particularly on thin structures and far regions. [16] exhibits artifacts resembling scanlines and "circles" in far away regions (highlighted in red).

IV-E The Role of Inertials

Although inertials are not directly present in the loss, their role in metric depth completion is crucial. Without inertials, a SLAM system cannot produce sparse point clouds in metric scale, which are then used as both the input to the scaffolding stage (Sec. III-B) and a supervisory signal (Eqn. 5).

V Datasets

V-A KITTI Benchmark

We evaluate our approach on the KITTI depth completion benchmark [26]. The dataset provides raw image frames and associated sparse depth maps, which are the raw output of the Velodyne lidar sensor. The ground-truth depth maps are created by accumulating the 11 neighbouring raw lidar scans, with dense depth corresponding to the bottom portion of the images. We use the officially selected 1,000 samples for validation, and we apply our method to the 1,000 testing samples, which we submit to the official KITTI website for evaluation. The results are reported in Table II.

V-B VOID Benchmark

While KITTI provides a standard benchmark for evaluating depth completion in the driving scenario, there exists no standard depth completion benchmark for the indoor scenario. [16, 29] used NYUv2 [19] – an RGB-D dataset – to develop and evaluate their models on indoor scenes. Yet, each performs a different evaluation protocol with different sparse depth samples – varying densities of depth values were randomly sampled from the depth frame, preventing direct comparisons between methods. Though this is reasonable as a proof of concept, it is not realistic in the sense that no sensor measures depth at random locations.

The VOID dataset. We propose a new publicly available dataset for a real-world use case of depth completion by bootstrapping sparse reconstruction in metric space from a SLAM system. While it is well known that metric scale is not observable in the purely image-based SLAM and SFM setting, this has been resolved by recent advances in VIO [12, 18], where metric pose and structure estimation can be realized in a gravity-aligned and scaled reference frame using an inertial measurement unit (IMU). To this end, we leverage an off-the-shelf VIO system 1, atop which we construct our dataset and develop our depth completion model. While there are some visual-inertial datasets (e.g. TUM-VI [22] and PennCOSYVIO [20]), they lack per-frame dense depth measurements for cross-modal validation, and are also relatively small – rendering them unsuitable for training deep learning models. To demonstrate the applicability of our approach, we additionally show qualitative results on the TUM-VI dataset in Fig. 5.

Our dataset is dubbed “Visual Odometry with Inertial and Depth” or “VOID” for short and is comprised of RGB video streams and inertial measurements for metric reconstruction along with per-frame dense depth for cross-modal validation.

Data acquisition. Our data was collected using the latest Intel RealSense D435i camera 2, which was configured to produce synchronized accelerometer and gyroscope measurements at 400 Hz, along with synchronized VGA-size (640 × 480) RGB and depth streams at 30 Hz. The depth frames are acquired using active stereo and are aligned to the RGB frames using the sensor factory calibration (see Fig. 8). All measurements are time-stamped.

The SLAM system we use is based on [12] – an EKF-based VIO model. While the VIO recursively estimates a joint posterior of the state of the sensor platform (e.g. pose, velocity, sensor biases, and camera-to-IMU alignment) and a small set of reliable feature points, the 3D structure it estimates is extremely sparse – typically only a small number of feature points (in-state features). To facilitate 3D reconstruction, we track a moderate number of out-of-state features in addition to the in-state ones, and estimate the depth of these feature points using auxiliary depth sub-filters [17].

The benchmark. We evaluate our method on the VOID depth completion benchmark, which contains 56 sequences in total, both indoor and outdoor, with challenging motion. Typical scenes include classrooms, offices, stairwells, laboratories, and gardens. Of the 56 sequences, 48 are designated for training and 8 for testing, from which we sampled frames to construct the testing set. Our benchmark provides sparse depth maps at three density levels. We configured our SLAM system to track and estimate the depth of 1500, 500, and 150 feature points, corresponding to roughly 0.5%, 0.15%, and 0.05% of the VGA-size image domain, which are then used in the depth completion task.

VI Implementation Details

Our approach was implemented using TensorFlow [1]. With an Nvidia GTX 1080Ti, training takes hours for our VGG11 model and hours for our VGG8 model on the KITTI depth completion benchmark (Sec. V-A) for 30 epochs; whereas training takes hours and hours on the VOID benchmark (Sec. V-B) for 10 epochs. Inference takes 22 ms per image. We used Adam [14] with and to optimize our network end-to-end with a base learning rate of for KITTI and for VOID. We decrease the learning rate by half after 18 epochs for KITTI and 6 epochs for VOID, and again after 24 epochs and 8 epochs, respectively. We train our network with a batch size of 8 using a resolution for KITTI and for VOID. We achieve our results on the KITTI benchmark using the following set of weights for each term in our loss function: , , , , and . For the VOID benchmark, we increased to and to . We do not use any data augmentation.

Metric units Definition
MAE mm $\frac{1}{|\Omega|}\sum_{x\in\Omega} |d(x) - d_{gt}(x)|$
RMSE mm $\big(\frac{1}{|\Omega|}\sum_{x\in\Omega} |d(x) - d_{gt}(x)|^2\big)^{1/2}$
iMAE 1/km $\frac{1}{|\Omega|}\sum_{x\in\Omega} |1/d(x) - 1/d_{gt}(x)|$
iRMSE 1/km $\big(\frac{1}{|\Omega|}\sum_{x\in\Omega} |1/d(x) - 1/d_{gt}(x)|^2\big)^{1/2}$

  • Error metrics for evaluating the KITTI and VOID depth completion benchmarks, where $d_{gt}$ is the ground truth.

TABLE I: Error metrics.

VII Experiments and Results

VII-A KITTI Depth Completion Benchmark

We show quantitative and qualitative comparisons on the unsupervised KITTI depth completion benchmark in Table II and Fig. 4, respectively. The results of the methods listed are taken directly from their papers. We note that [29] only reported their result in their paper and do not have an entry in the KITTI depth completion benchmark for their unsupervised model. Hence, we compare qualitatively with the prior art [16]. Our VGG11 model outperforms the state-of-the-art [29] on every metric while using fewer parameters. Our light-weight VGG8 model also outperforms [29] on MAE, RMSE, and iMAE, while [29] beats our VGG8 by 2.2% on iRMSE. We note that [29] trains a separate network, using ground truth, to supervise their depth completion model. Moreover, [29] exploits rectified stereo imagery where the pose of the cameras is known; whereas we learn pose by jointly training the pose network with our depth predictor. In comparison to [16] (who also use monocular videos), both our VGG11 and VGG8 models outperform them on every metric while using far fewer parameters. We also note that the qualitative results of [16] contain artifacts such as apparent scanlines of the Velodyne and "circles" in far regions.

As an introspective exercise, we plot the mean error of our model at varying distances on the KITTI validation set (Fig. 6) and overlay it with the ground truth depth distribution to show that our model performs very well in distances that matter in real-life scenarios. Our performance begins to degrade at distances larger than 80 meters; this is due to the lack of sparse measurements and insufficient parallax – problems that plague methods relying on multi-view supervision.

Method # Parameters MAE RMSE iMAE iRMSE
Schneider [21] not reported 605.47 2312.57 2.05 7.38
Ma [16] – 350.32 1299.85 1.57 4.07
Yang [29] – 343.46 1263.19 1.32 3.58
Ours VGG11 – 299.41 1169.97 1.20 3.56
Ours VGG8 – 304.57 1164.58 1.28 3.66

  • We compare our model to unsupervised methods on the KITTI depth completion benchmark [26]. The number of parameters used by each method is listed for comparison. [21] stated that they use a fully convolutional network, but do not specify the full architecture. Our VGG11 model outperforms the state-of-the-art [29] across all metrics. Despite having fewer parameters, our VGG8 model does not degrade by much and even outperforms VGG11 marginally on the RMSE metric. Moreover, our VGG8 model also outperforms [16] and [29].

TABLE II: KITTI depth completion benchmark.
Model Encoder Rot. MAE RMSE iMAE iRMSE
Scaffolding - - 443.57 1990.68 1.72 6.43
$\ell_{ph} + \ell_{sz} + \ell_{sm}$ (vanilla) VGG11 Eul. 347.14 1330.88 1.46 4.22
$\ell_{ph} + \ell_{sz} + \ell_{sm}$ VGG11 Eul. 327.84 1262.46 1.31 3.87
$\ell_{ph} + \ell_{sz} + \ell_{sm}$ VGG11 Exp. 312.10 1255.21 1.28 3.86
$\ell_{ph} + \ell_{sz} + \ell_{pc} + \ell_{sm}$ VGG11 Exp. 305.06 1239.06 1.21 3.71
$\ell_{ph} + \ell_{sz} + \ell_{pc} + \ell_{sm}$ VGG8 Exp. 308.81 1230.85 1.29 3.84
  • We compare variants of our model on the KITTI depth completion validation set. Each model is denoted by its loss function; (vanilla) denotes training without scaffolding. Regions with missing depth in Scaffolding Only are assigned the average depth. It is clear that scaffolding alone (row 1) and our baseline model trained without scaffolding (row 2) do poorly compared to our models that combine both (rows 3-6). Our full model using VGG11 produces the best overall results and achieves state of the art on the test set (Table II). Our approach is robust: our light-weight VGG8 model achieves similar performance to our VGG11 model.

TABLE III: KITTI depth completion ablation study.

VII-B KITTI Depth Completion Ablation Study

We analyze the effect of each of our contributions through a quantitative evaluation on the KITTI depth completion validation set (Table III). Our two baseline models, scaffolding alone and the vanilla model trained without scaffolding, perform poorly in comparison to the models that are trained with scaffolding – showcasing the effectiveness of our refinement approach. Although the loss functions are identical, exponential parameterization consistently improves over Euler angles across all metrics. We believe this is due to the regularity of the derivatives of the exponential map [9] compared to other parameterizations – resulting in faster convergence and wider minima during training. While [7, 27, 30] train their pose network using the photometric error with no additional constraint, we show that it is beneficial to impose our pose consistency term (Eqn. 6). By constraining the forward and backward poses to be inverses of each other, we obtain a more accurate pose, resulting in better depth prediction. Our experiments verify this claim as we see an improvement across all metrics in Table III. We note that the improvement does not seem significant on KITTI as the motion is mostly planar; however, when predicting non-trivial 6 DoF motion (Sec. VII-D), we see a significant boost when employing this term. Our model trained with the full loss function produces the best results (bolded in Table II) and is the state of the art for the unsupervised KITTI depth completion benchmark. We further propose a VGG8 model that only contains 6.4M parameters. Despite having fewer parameters than VGG11, the performance of VGG8 does not degrade by much (see Tables II, III, V).

Fig. 5: Qualitative results on TUM-VI (best viewed in color). We apply our method to TUM-VI; the results are obtained using sparse depth input. Unlike KITTI and VOID, TUM-VI images are monochrome and bear a highly distorted fisheye camera model, which we compensate for in training. The color bar shows the depth range.
Fig. 6: Error characteristics of our model on KITTI. The abscissa shows the distance of sparse data points measured by Velodyne, of which the percentage of all the data points is shown in red; the blue curve shows the mean absolute error of the estimated depth at the given distance, of which the 5-th and 95-th percentile enclose the light blue region.

VII-C VOID Depth Completion Benchmark

We evaluate our method on the VOID depth completion benchmark for all three density levels (Table V) using the error metrics in Table I. As the photometric loss (Eqn. 4) largely depends on obtaining the correct pose, we additionally propose a hybrid model, where the relative camera poses from our visual-inertial SLAM system are used to construct the photometric loss, to show an upper bound on performance. In contrast to KITTI, whose sparse depth is concentrated on the bottom 30% of the image, the VOID benchmark only provides roughly 0.5%, 0.15%, and 0.05% densities in sparse depth. Yet, our method is still able to produce reasonable results for indoor scenes, with a MAE of about 8.5 cm at the highest density and about 18 cm at the lowest (Table V). Since most scenes contain textureless regions, sparse depth supervision becomes important as photometric reconstruction is unreliable. Hence, performance degrades as density decreases. Yet, we degrade gracefully: as density decreases by 10X, our error only doubles. We note that the scaffolding may poorly represent the scene. In the worst case, where it provides no extra information, our method reduces to the common depth completion approach. Also, we observe a systematic performance improvement in all the evaluation metrics (Table V) when replacing the pose network with the SLAM pose. This can be largely attributed to the necessity of the correct pose to minimize photometric error during training. Our pose network may not be able to consistently predict the correct pose due to the challenging motion of the dataset. Fig. 7 shows two sample RGB images with the densified depth images back-projected to 3D, colored, and viewed from a different vantage point.


Fig. 7: Qualitative evaluation on VOID benchmark. Top: Input RGB images. Bottom: Densified depth images back-projected to 3D, colored, and viewed from a different vantage point.

VII-D VOID Depth Completion Ablation Study

To better understand the effect of rotation parameterization and our pose consistency loss (Eqn. 6) on the depth completion task, we compare variants of our model and again replace the pose network with the SLAM pose to show an upper bound on performance. Although exponential outperforms Euler parameterization, we note that both perform much worse than using the SLAM pose. However, we observe a performance boost when applying our pose consistency term, and our model improves over exponential without pose consistency on every metric. Moreover, it approaches the performance of our model trained using the SLAM pose. This trend still holds when density decreases (Table V). This suggests that despite the additional constraint, the pose network still has some difficulty predicting the pose due to the challenging motion. This finding, along with the results from Table V, highlights the strength of classical SLAM systems in the deep learning era, and urges us to develop and test pose networks on the VOID dataset, which features non-trivial 6 DoF motion – much more challenging than the mostly-planar motion in KITTI.

Method MAE RMSE iMAE iRMSE
Ma [16] 198.76 260.67 88.07 114.96
Yang [29] 151.86 222.36 74.59 112.36
VGG11 PoseNet + Eul. 108.97 212.16 64.54 142.64
VGG11 PoseNet + Exp. 103.31 179.05 63.88 131.06
VGG11 PoseNet + Exp. + $\ell_{pc}$ 85.05 169.79 48.92 104.02
VGG11 SLAM Pose 73.14 146.40 42.55 93.16
VGG8 PoseNet + Exp. + $\ell_{pc}$ 94.33 168.92 56.01 111.54
  • We compare the variants of our pose network. SLAM Pose replaces the output of pose network with SLAM estimated pose to gauge an upper bound in performance. When using our pose consistency term with exponential parameterization, our method approaches the performance of our method when using SLAM pose. Note: we trained [16] from scratch using ground-truth pose and adapted [26] to train on monocular sequences. The conditional prior network used in [29] is trained on ground truth from NYUv2 [19].

TABLE IV: VOID depth completion benchmark and ablation study.
Density Pose From MAE RMSE iMAE iRMSE
0.5% PoseNet 85.05 169.79 48.92 104.02
0.5% SLAM 73.14 146.40 42.55 93.16
0.15% PoseNet 124.11 217.43 66.95 121.23
0.15% SLAM 118.01 195.32 59.29 101.72
0.05% PoseNet 179.66 281.09 95.27 151.66
0.05% SLAM 174.04 253.14 87.39 126.30
  • The VOID dataset contains VGA-size images (640 × 480) of both indoor and outdoor scenes with challenging motion. For "Pose From", SLAM refers to relative poses estimated by a SLAM system, and PoseNet refers to relative poses predicted by a pose network.

TABLE V: Depth completion on VOID with varying sparse depth density.

VIII Discussion

While deep networks have attracted a lot of attention as a general framework to solve an array of problems, we must note that pose may be difficult to learn on datasets with non-trivial 6 DoF motion – which the SLAM community has studied for decades. We hope that VOID will serve as a platform to develop models that can handle challenging motion and further foster fusion of multi-sensory data. Furthermore, we show that a network can recover the scene geometry from extremely sparse point clouds (e.g. features tracked by SLAM). We also show that improvements can be obtained by leveraging pose from a SLAM system instead of a pose network. These findings motivate a possible mutually beneficial marriage between classical methods and deep learning.

Fig. 8: Sample RGB + D images in the VOID dataset (best viewed in color). The color bar shows the depth range.

Appendix A VOID Dataset

In the main paper, we introduced the “Visual Odometry with Inertial and Depth” (VOID) dataset with which we propose a new depth completion benchmark. We described the data acquisition process, benchmark setup, and evaluation protocols in Sec. V-B and Sec. VII-C. To give some flavor of the VOID dataset, Fig. 9 shows a set of images (top inset) sampled from video sequences in VOID, and output of our visual-inertial odometry (VIO) system (bottom), where the blue pointcloud is the sparse reconstruction of the underlying scene and the yellow trace is the estimated camera trajectory.

Fig. 9: Sample sequences in the VOID dataset (best viewed in color). In each panel, the top inset shows 4 sample images of a video sequence in our VOID dataset; the bottom shows the sparse pointcloud reconstruction (blue) and camera trajectory (yellow) from our VIO. The panels show: two rows of chairs in a classroom; an "L" shape formed by desks in a mechanical laboratory; a brick wall with plants on the ground; and underneath stairs.

Appendix B More results on VOID Dataset

In the main paper, we evaluated our approach on the VOID depth completion benchmark in Sec. VII-C and Sec. VII-D, and provided quantitative results in Tables IV and V and qualitative results in Fig. 7. Here, we provide additional qualitative results in Fig. 10 to show how our approach performs on a variety of scenes – both indoor and outdoor – from the VOID dataset. The figure is arranged in two panels of grids, where each panel contains a sample RGB image (left) that is fed to our depth completion network as input, and the corresponding colored pointcloud (right) produced by our approach, viewed from a different vantage point. The pointclouds are obtained by back-projecting the color pixels to the estimated depth. We used a single sparse depth density level as input to produce the results. Our approach can provide detailed reconstructions of scenes from both indoor (e.g. right panel, last row: equipment from a mechanical lab) and outdoor settings (e.g. left panel: flowers and leaves of plants in a garden). It is also able to recover small objects such as the mouse on the desk in the mechanical lab, and structures at very close range (e.g. left panel, last row: staircase located less than half a meter from the camera).

Fig. 10: Qualitative results on VOID dataset. In each panel, the left shows a sample RGB image fed to our depth completion network as input; the right shows the completed depth map back-projected to 3D, colored, and viewed from a different vantage point. Our method recovers the scene structure with details at various ranges in both indoor and outdoor settings.
Pose ATE (m) ATE-5F (m) RPE (m) RRE
Sequence 09
Euler 34.38 0.091 0.107 0.176
Exp. 27.57 0.091 0.108 0.170
Exp. w/ Consistency 18.18 0.080 0.094 0.157
Sequence 10
Euler 32.37 0.067 0.094 0.251
Exp. 25.18 0.059 0.091 0.225
Exp. w/ Consistency 24.60 0.059 0.081 0.218

  • We perform an ablation study on our pose representation by jointly training our depth completion network and pose network on the KITTI depth completion dataset and testing only the pose network on KITTI Odometry sequences 09 and 10. We evaluate the performance of each pose network using the metrics described in Sec. C-A. While the performance of exponential parameterization and Euler angles is similar on ATE-5F and RPE, exponential outperforms Euler angles on ATE and RRE on both sequences. Our model using exponential with pose consistency performs best.

TABLE VI: Quantitative Pose Ablation Study KITTI Odometry Sequence 09 and 10.

Appendix C Pose Ablation Study

In the main paper, we focus on the depth completion task and hence we evaluate the effects of different pose parameterizations and our pose consistency term by computing error metrics relevant to the recovery of the 3D scene on both the VOID and KITTI depth completion benchmarks. Here, we focus specifically on pose by directly evaluating the pose network on the KITTI odometry dataset in Table VI. We show qualitative results on the trajectory obtained by chaining pairwise camera poses estimated by each pose network in Fig. 11 and provide an analysis of the results in Sec. C-B.

C-A Pose Evaluation Metrics

To evaluate the performance of the pose network and its variants, we adopt the two most widely used metrics for evaluating simultaneous localization and mapping (SLAM) systems, absolute trajectory error (ATE) and relative pose error (RPE) [25], along with two novel metrics tailored to the evaluation of pose networks.

Given a list of estimated camera poses $\hat{g}_i \in SE(3)$, $i = 1, \dots, N$, relative to a fixed world frame, and the list of corresponding ground-truth poses $g_i$, ATE reads

$\mathrm{ATE} = \Big(\frac{1}{N}\sum_{i=1}^{N}\big\|\mathrm{trans}(\hat{g}_i) - \mathrm{trans}(g_i)\big\|_2^2\Big)^{1/2}$   (8)

where the function $\mathrm{trans}(\cdot)$ extracts the translational part of a rigid body transformation. ATE is essentially the root mean square error (RMSE) of the translational part of the estimated pose over all time indices. [32] proposed a "5-frame" version of ATE (ATE-5F) – the root mean square of the ATE of a 5-frame sliding window over all time indices – which we also incorporate.

While ATE measures the overall estimation accuracy of the whole trajectory – suitable for evaluating full-fledged SLAM systems where a loop-closure module is present – it does not faithfully reflect the accuracy of our pose network since 1) our pose network is designed to estimate pairwise poses, and 2) by simply chaining the pose estimates over time, the pose errors at earlier time instants are more pronounced. Therefore, we also adopt RPE to measure the estimation accuracy locally:

$\mathrm{RPE} = \frac{1}{N-\delta}\sum_{i=1}^{N-\delta}\big\|\mathrm{trans}\big((g_i^{-1}g_{i+\delta})^{-1}(\hat{g}_i^{-1}\hat{g}_{i+\delta})\big)\big\|_2$   (9)

which is essentially the end-point relative pose error of a sliding window, averaged over time. By measuring the end-point relative pose $\hat{g}_i^{-1}\hat{g}_{i+\delta}$ over a sliding window of size $\delta$, we are able to focus more on the relative pose estimator (the pose network) itself rather than the overall localization accuracy. In our evaluation, we choose a sliding window of size 1, i.e., $\delta = 1$. However, RPE is affected only by the accuracy of the translational part of the estimated pose, as we see by expanding the relative pose error:

$g_i^{-1}g_{i+\delta} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}, \qquad \hat{g}_i^{-1}\hat{g}_{i+\delta} = \begin{bmatrix} \hat{R} & \hat{t} \\ 0 & 1 \end{bmatrix}$   (10)
$(g_i^{-1}g_{i+\delta})^{-1}(\hat{g}_i^{-1}\hat{g}_{i+\delta}) = \begin{bmatrix} R^{\top}\hat{R} & R^{\top}(\hat{t} - t) \\ 0 & 1 \end{bmatrix}$   (11)

leading to $\mathrm{trans}\big((g_i^{-1}g_{i+\delta})^{-1}(\hat{g}_i^{-1}\hat{g}_{i+\delta})\big) = R^{\top}(\hat{t} - t)$, whose norm equals $\|\hat{t} - t\|_2$, where the rotational part $\hat{R}$ of the estimated pose disappears! Therefore, to better evaluate the rotation estimation, and, more importantly, to study the effect of different rotation parameterizations and the pose consistency term, we propose the relative rotation error (RRE) metric:

$\mathrm{RRE} = \frac{1}{N-\delta}\sum_{i=1}^{N-\delta}\big\|\log\big(\mathrm{rot}(g_i^{-1}g_{i+\delta})^{\top}\,\mathrm{rot}(\hat{g}_i^{-1}\hat{g}_{i+\delta})\big)\big\|_2$   (12)

where $\mathrm{rot}(\cdot)$ extracts the rotational part of a rigid body transformation, and $\log$ is the logarithmic map for rotations.
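The following NumPy sketch implements the three metrics consistently with the definitions above (window size $\delta = 1$ by default); it is an illustrative reference, not the authors' evaluation code.

```python
import numpy as np

def ate(est, gt):
    """Absolute trajectory error: RMSE of the translational parts (lists of 4x4 poses)."""
    err = [np.linalg.norm(e[:3, 3] - g[:3, 3]) for e, g in zip(est, gt)]
    return np.sqrt(np.mean(np.square(err)))

def rpe(est, gt, delta=1):
    """Relative pose error: translation error of window-wise relative poses, averaged."""
    errs = []
    for i in range(len(est) - delta):
        rel_est = np.linalg.inv(est[i]) @ est[i + delta]
        rel_gt = np.linalg.inv(gt[i]) @ gt[i + delta]
        err = np.linalg.inv(rel_gt) @ rel_est          # end-point relative pose error
        errs.append(np.linalg.norm(err[:3, 3]))
    return np.mean(errs)

def rre(est, gt, delta=1):
    """Relative rotation error: rotation angle of the window-wise relative rotation error."""
    errs = []
    for i in range(len(est) - delta):
        R_est = (np.linalg.inv(est[i]) @ est[i + delta])[:3, :3]
        R_gt = (np.linalg.inv(gt[i]) @ gt[i + delta])[:3, :3]
        cos_a = np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)
        errs.append(np.arccos(cos_a))                  # equals ||log(R_gt^T R_est)||
    return np.mean(errs)
```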

Fig. 11: Qualitative Pose Ablation Study KITTI Odometry Sequence 09 and 10. We perform an ablation study on our pose representation by jointly training our depth completion network and pose network on KITTI depth completion dataset and testing only the pose network on KITTI Odometry sequence 09 and 10. We obtain the camera trajectories by chaining the pairwise camera poses estimated by our pose network. We observe that the trajectory of our method using exponential parameterization trained with pose consistency (Eqn. 6) is most closely aligned with the ground-truth trajectory.

C-B Ablation Study on KITTI Odometry

We perform an ablation study on the effects of our pose parameterizations and our pose consistency in Table VI and provide qualitative results showing the trajectory predicted by our pose network in Fig. 11. We jointly trained our depth completion network and our pose network on the KITTI depth completion dataset and evaluate the pose network on sequence 09 and 10 of the KITTI Odometry dataset.

For sequence 09, our pose network using exponential parameterization performs comparably to Euler angles on the ATE-5F and RPE metrics while outperforming Euler by on ATE and on RRE. This result suggests that while within a small window Euler and exponential perform comparably on translation, exponential is a better pose parameterization and globally more correct. We additionally see that exponential outperforms Euler angles on all metrics in sequence 10.

Our best results are achieved using exponential parameterization with our pose consistency term (Eqn. 6): on sequence 09, it outperformed Euler and exponential without pose consistency by and on ATE, and on RPE, and on RRE, respectively, and both by on ATE-5F. On sequence 10, it outperformed Euler and exponential by and on ATE, and on RPE, and and on RRE, respectively. It also beat Euler by on RPE and is comparable to exponential on the metric.

Fig. 12: Qualitative Results on KITTI Depth Completion Test Set. We show results from various scenes on the KITTI test set. The sparse depth input on the KITTI benchmark is concentrated on the lower half of the image domain. Our network learns to predict structures that do not have any sparse points (e.g. street signs in rows 3, 5, and 6). Also, we are able to recover pedestrians (e.g. rows 2 and 3) and thin structures well (e.g. guard rails in row 1 and poles in rows 3 through 7).

Appendix D More Results on KITTI Depth Completion Benchmark

In the main paper, we evaluated our approach on the KITTI depth completion benchmark test set in Sec. VII-A and performed an ablation study on the validation set in Sec. VII-B. Quantitative results are shown in Table II, III and qualitative results in Fig. 4. However, as the KITTI online depth completion benchmark only shows the first 20 samples from the test set, we provide additional qualitative results on a variety of scenes in Fig. 12 to better represent our performance on the test set.

The results in Fig. 12 were produced by our VGG11 model trained using the full loss function (Eqn. 2) with exponential parameterization for rotation. Our method is able to recover pedestrians and thin structures well (e.g. the guard rails and street poles). Additionally, our network is also able to recover structures that do not have any associated sparse lidar points (e.g. structures located on the upper half of the image domain). This can be attributed to our photometric data-fidelity term (Sec. IV-A). As shown in Fig. 3, our network first learns to copy the input scaffolding and output it as the prediction. It later learns to fuse information from the input image to produce a prediction that includes elements of the scene that are missing from the scaffolding.

Fig. 13: Network architectures. Green denotes convolution, orange deconvolution, and purple upsampling. Blue denotes the latent representation, and red the output of the pose network. Our VGG11 and VGG8 architectures follow the late fusion paradigm [16, 29], and our auxiliary pose network predicts the relative pose between two frames for constructing our photometric and pose consistency losses (Eqn. 4, 6). Our auxiliary pose network is used only in training and not inference.

Appendix E Network Architecture

We trained our model using two network architectures (Fig. 13) following the late fusion paradigm: (i) our main model using a VGG11 [24] encoder (Table VII), and (ii) our light weight model using a VGG8 [24] encoder (Table VIII). Both encoders use the same decoder (Table IX).

VGG11 Encoder kernel channels resolution
layer size stride in out in out # params input
Image Branch
conv1_image 5 2 3 48 1 1/2 3.6K image
conv2_image 3 2 48 96 1/2 1/4 41K conv1_image
conv3_image 3 1 96 192 1/4 1/4 166K conv2_image
conv3b_image 3 1 192 192 1/4 1/4 331K conv3_image
conv4_image 3 1 192 384 1/8 1/8 663K conv3b_image
conv4b_image 3 1 384 384 1/8 1/8 1.3M conv4_image
conv5_image 3 1 384 384 1/16 1/16 1.3M conv4b_image
conv5b_image 3 2 384 384 1/16 1/32 1.3M conv5_image
Depth Branch
conv1_depth 5 2 2 16 1 1/2 0.8K depth
conv2_depth 3 2 16 32 1/2 1/4 4.6K conv1_depth
conv3_depth 3 1 32 64 1/4 1/4 18K conv2_depth
conv3b_depth 3 1 64 64 1/4 1/4 37K conv3_depth
conv4_depth 3 1 64 128 1/8 1/8 74K conv3b_depth
conv4b_depth 3 1 128 128 1/8 1/8 147K conv4_depth
conv5_depth 3 1 128 128 1/16 1/16 147K conv4b_depth
conv5b_depth 3 2 128 128 1/16 1/32 147K conv5_depth
Latent Encoding
latent - - 384+128 512 1/32 1/32 0 conv5b_image conv5b_depth
Total Parameters 5.7M
  • Our VGG11 [24] encoder following the late fusion paradigm [11, 29] contains 5.7M parameters, as opposed to the 23.8M and 14.8M parameters used by [16] and [29], respectively. Inputs listed together in the last column are concatenated. Resolution is given as a ratio with respect to the input image size.

TABLE VII: VGG11 Encoder Architecture
VGG8 Encoder kernel channels resolution
layer size stride in out in out # params input
Image Branch
conv1_image 5 2 3 48 1 1/2 3.6K image
conv2_image 3 2 48 96 1/2 1/4 41K conv1_image
conv3b_image 3 2 96 192 1/4 1/8 166K conv2_image
conv4b_image 3 2 192 384 1/8 1/16 663K conv3b_image
conv5b_image 3 2 384 384 1/16 1/32 1.3M conv4b_image
Depth Branch
conv1_depth 5 2 2 16 1 1/2 0.8K depth
conv2_depth 3 2 16 32 1/2 1/4 4.6K conv1_depth
conv3b_depth 3 1 32 64 1/4 1/4 18K conv2_depth
conv4b_depth 3 1 64 128 1/8 1/16 74K conv3b_depth
conv5b_depth 3 2 128 128 1/16 1/32 147K conv4b_depth
Latent Encoding
latent - - 384+128 512 1/32 1/32 0 conv5b_image conv5b_depth
Total Parameters 2.4M
  • Our light-weight VGG8 [24] encoder following the late fusion paradigm [11, 29] contains only 2.4M parameters, as opposed to the 23.8M and 14.8M parameters used by [16] and [29], respectively. Inputs listed together in the last column are concatenated. Resolution is given as a ratio with respect to the input image size. Note that our light-weight model performs similarly to our VGG11 model.

TABLE VIII: VGG8 Encoder Architecture
Decoder kernel channels resolution
layer size stride in out in out # params input
deconv5 3 2 512 256 1/32 1/16 1.2M latent
concat5 - - 256+384+128 768 1/16 1/16 0 deconv5conv4b_imageconv4b_depth
conv5 3 1 768 256 1/16 1/16 1.8M concat5
deconv4 3 2 256 128 1/16 1/8 295K conv5
concat4 - - 128+192+64 384 1/8 1/8 0 deconv4conv3b_imageconv3b_depth
conv4 3 1 384 128 1/8 1/8 442K concat4
deconv3 3 2 128 128 1/8 1/4 147K conv4
concat3 - - 128+96+32 256 1/4 1/4 0 deconv3conv2_imageconv2_depth
conv3 3 1 256 64 1/4 1/4 147K concat3
deconv2 3 2 64 64 1/4 1/2 37K conv3
concat2 - - 64+48+16 128 1/2 1/2 0 deconv2conv1_imageconv1_depth
conv2 3 1 128 1 1/2 1/2 1.2K concat2
output - - - - 1/2 1 0 conv2
Total Parameters 4M
  • Our decoder contains 4M parameters. Inputs listed together in the last column are concatenated; the final output layer upsamples the prediction to full resolution. Resolution is given as a ratio with respect to the input image size.

TABLE IX: Decoder Architecture

Depth completion networks. Our VGG11 and VGG8 models (Fig. 13) contain a total of 9.7M and 6.4M parameters, respectively – a large reduction over both [16] and [29]. The image and depth branches of the encoder process the image and depth inputs separately – weights are not shared. The outputs of the two branches are concatenated as the latent representation and passed to the decoder for depth completion. The decoder makes the prediction at 1/2 resolution. The final layer of the decoder is an upsampling layer.

Pose Network. Our pose network takes a pair of images as input and regresses the relative pose between the images. Reversing the order of the images inverts the relative pose as well. We take the average across the width and height dimensions of the pose network output to produce a 6-element vector. We use 3 elements to model rotation and the rest to model translation.
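A minimal sketch of this reduction (the function name and NumPy usage are ours; in the network it is simply a global average over the spatial dimensions of the final feature map):

```python
import numpy as np

def pose_head(features):
    """Reduce the pose network output (an H x W x 6 feature map) to a 6-vector:
    3 exponential coordinates for rotation and 3 components for translation."""
    vec = features.mean(axis=(0, 1))   # global average over height and width
    omega, t = vec[:3], vec[3:]        # rotation (so(3) coordinates), translation
    return omega, t
```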

Including Pose Network in Total Parameters. We follow the network parameter computations of [29], who employ an additional network trained on ground truth for regularization during training. Our pose network (Table X) is an auxiliary network that is only used in training, and not during inference. Hence, we do not include it in the total number of parameters. However, even if we do, our pose network has 1M parameters, making our totals 10.7M for VGG11 and 7.4M for VGG8. Both still represent a large reduction over the 27.8M parameters used by [16]. If we include the auxiliary prior network of [29], containing 10.1M parameters, that is used for regularization during training, then [29] has a total of 28.8M parameters, over which our VGG11 and VGG8 models again use substantially fewer parameters.

Pose Network kernel channels resolution
layer size stride in out in out # params input
conv1 7 2 6 16 1 1/2 4.7K image pair
conv2 5 2 16 32 1/2 1/4 13K conv1
conv3 3 2 32 64 1/4 1/8 18K conv2
conv4 3 2 64 128 1/8 1/16 74K conv3
conv5 3 2 128 256 1/16 1/32 295K conv4
conv6 3 2 256 256 1/32 1/64 295K conv5
conv7 3 2 256 256 1/64 1/128 295K conv6
output 3 1 256 6 1/128 1/128 14K conv7
Total Parameters 1M
  • Our auxiliary pose network contains 1M parameters and is only used during training to construct the photometric and pose consistency loss (Eqn. 4, 6). The output is averaged along its width and height dimensions to result in a 6 element vector – of which 3 elements are used to compose rotation and the rest for translation.

TABLE X: Pose Network Architecture

Footnotes

  1. https://github.com/ucla-vision/xivo
  2. https://realsense.intel.com/depth-camera/

References

  1. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving and M. Isard (2016) TensorFlow: a system for large-scale machine learning.. In OSDI, Vol. 16, pp. 265–283. Cited by: §VI.
  2. C. B. Barber, D. P. Dobkin, D. P. Dobkin and H. Huhdanpaa (1996) The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS) 22 (4), pp. 469–483. Cited by: §III-B.
  3. K. Q. Brown (1979) Voronoi diagrams from convex hulls. Information processing letters 9 (5), pp. 223–228. Cited by: §III-B.
  4. N. Chodosh, C. Wang and S. Lucey (2018) Deep Convolutional Compressed Sensing for LiDAR Depth Completion. In Asian Conference on Computer Vision (ACCV), Cited by: §II.
  5. M. Dimitrievski, P. Veelaert and W. Philips (2018) Learning morphological operators for depth completion. In Advanced Concepts for Intelligent Vision Systems, (eng). Cited by: §II.
  6. A. Eldesokey, M. Felsberg and F. S. Khan (2018) Propagating confidences through cnns for sparse data regression. In Proceedings of British Machine Vision Conference (BMVC), Cited by: §II, §II, §III-A.
  7. X. Fei, A. Wong and S. Soatto (2019) Geo-supervised visual depth prediction. IEEE Robotics and Automation Letters 4 (2). Cited by: §II, §VII-B.
  8. M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6). Cited by: §II.
  9. G. Gallego and A. Yezzi (2015) A compact formula for the derivative of a 3-d rotation in exponential coordinates. Journal of Mathematical Imaging and Vision 51 (3), pp. 378–384. Cited by: §VII-B.
  10. Z. Huang, J. Fan, S. Cheng, S. Yi, X. Wang and H. Li (2019) Hms-net: hierarchical multi-scale sparsity-invariant network for sparse depth completion. IEEE Transactions on Image Processing. Cited by: §II, §II, §III-A.
  11. M. Jaritz, R. de Charette, E. Wirbel, X. Perrotton and F. Nashashibi (2018) Sparse and dense data with cnns: depth completion and semantic segmentation. In International Conference on 3D Vision (3DV), Cited by: TABLE VII, TABLE VIII, §II, §II, §III-C.
  12. E. Jones and S. Soatto (2011-01) Visual-inertial navigation, mapping and localization: a scalable real-time causal approach. International Journal of Robotics Research. Cited by: §V-B, §V-B.
  13. A. Kendall, M. Grimes and R. Cipolla (2015) Posenet: a convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE international conference on computer vision, Cited by: §II, §III-C.
  14. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §VI.
  15. V. Lepetit, F. Moreno-Noguer and P. Fua (2009) Epnp: an accurate o (n) solution to the pnp problem. International journal of computer vision 81 (2), pp. 155. Cited by: §II.
  16. F. Ma, G. V. Cavalheiro and S. Karaman (2019) Self-supervised sparse-to-dense: self-supervised depth completion from lidar and monocular camera. In International Conference on Robotics and Automation (ICRA), Cited by: Fig. 13, TABLE VII, TABLE VIII, Appendix E, Appendix E, §II, §II, §III-C, §III-C, Fig. 4, §V-B, TABLE II, TABLE IV, §VII-A, TABLE II, TABLE IV.
  17. Y. Ma, S. Soatto, J. Kosecka and S. Sastry (2012) An invitation to 3-d vision: from images to geometric models. Vol. 26, Springer Science & Business Media. Cited by: §II, §III-C, §V-B.
  18. A. I. Mourikis and S. I. Roumeliotis (2007) A multi-state constraint kalman filter for vision-aided inertial navigation. In Robotics and automation, 2007 IEEE international conference on, pp. 3565–3572. Cited by: §V-B.
  19. N. Silberman, D. Hoiem, P. Kohli and R. Fergus (2012) Indoor segmentation and support inference from rgbd images. In ECCV, Cited by: §V-B, TABLE IV.
  20. B. Pfrommer, N. Sanket, K. Daniilidis and J. Cleveland (2017) PennCOSYVIO: A challenging visual inertial odometry benchmark. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, 2017, pp. 3847–3854. Cited by: §V-B.
  21. N. Schneider, L. Schneider, P. Pinggera, U. Franke, M. Pollefeys and C. Stiller (2016) Semantically guided depth upsampling. In German Conference on Pattern Recognition, Cited by: TABLE II, TABLE II.
  22. D. Schubert, T. Goll, N. Demmel, V. Usenko, J. Stückler and D. Cremers (2018) The tum vi benchmark for evaluating visual-inertial odometry. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1680–1687. Cited by: §V-B.
  23. S. S. Shivakumar, T. Nguyen, I. D. Miller, S. W. Chen, V. Kumar and C. J. Taylor (2019) Dfusenet: deep fusion of rgb and sparse depth information for image guided dense depth completion. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 13–20. Cited by: §II.
  24. K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: TABLE VII, TABLE VIII, Appendix E.
  25. J. Sturm, N. Engelhard, F. Endres, W. Burgard and D. Cremers (2012) A benchmark for the evaluation of rgb-d slam systems. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 573–580. Cited by: §C-A.
  26. J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger (2017) Sparsity invariant cnns. In 2017 International Conference on 3D Vision (3DV), pp. 11–20. Cited by: §II, §II, §III-A, §V-A, TABLE II.
  27. C. Wang, J. Miguel Buenaposada, R. Zhu and S. Lucey (2018) Learning depth from monocular videos using direct methods. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2022–2030. Cited by: §VII-B.
  28. Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §II, §IV-A.
  29. Y. Yang, A. Wong and S. Soatto (2019) Dense depth posterior (ddp) from single image and sparse range. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cited by: Fig. 13, TABLE VII, TABLE VIII, Appendix E, Appendix E, §II, §II, §III-C, §III-C, §V-B, TABLE II, TABLE IV, §VII-A, TABLE II, TABLE IV.
  30. Z. Yin and J. Shi (2018) Geonet: unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1983–1992. Cited by: §II, §VII-B.
  31. Y. Zhang and T. Funkhouser (2018) Deep depth completion of a single rgb-d image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 175–185. Cited by: §II.
  32. T. Zhou, M. Brown, N. Snavely and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §C-A, §II.