Instance-wise Depth and Motion Learning from Monocular Videos
Abstract
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision. The only annotation used in our pipeline is a video instance segmentation map that can be predicted by our new auto-annotation scheme. Our technical contributions are threefold. First, we propose a differentiable forward rigid projection module that plays a key role in our instance-wise depth and motion learning. Second, we design an instance-wise photometric and geometric consistency loss that effectively decomposes background and moving object regions. Lastly, we introduce an instance-wise mini-batch rearrangement scheme that does not require additional iterations in training. These proposed elements are validated in a detailed ablation study. Through extensive experiments conducted on the KITTI dataset, our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
1 Introduction
Knowledge of the 3D environment structure and the motion of dynamic objects is essential for autonomous navigation [10, 28]. The 3D structure is valuable because it implicitly models the relative position of the agent, and it is also utilized to improve the performance of high-level scene understanding tasks such as detection and segmentation [17, 29, 1, 16]. Besides scene structure, the 3D motion of the agent and traffic participants such as pedestrians and vehicles is also required for safe driving. The relative direction and speed between them are taken as the primary inputs for determining the next direction of travel.
Recent advances in deep neural networks (DNNs) have led to a surge of interest in depth prediction using monocular images [8, 9] and stereo images [22, 5], as well as in optical flow estimation [7, 30, 20]. These supervised methods require a large amount and broad variety of training data with ground-truth labels. Studies have shown significant progress in unsupervised learning of depth and ego-motion from unlabeled image sequences [39, 12, 32, 21, 26]. The joint optimization framework uses a network for predicting single-view depth and pose, and exploits view synthesis of images in the sequence as the supervisory signal. However, these works ignore or mask out regions of moving objects for pose and depth inference.
In this work, rather than consider moving object regions as a nuisance [39, 12, 32, 21, 26], we utilize them as an important clue for estimating 3D object motions. This problem can be formulated as motion factorization of object motion and ego-motion. Factorizing object motion in monocular sequences is a challenging problem, especially in complex urban environments that contain many dynamic objects. Moreover, deformable dynamic objects such as humans make the problem more difficult because of the greater inaccuracy in their correspondence [27].
To address this problem, we propose a novel framework that explicitly models 3D motions of dynamic objects and ego-motion together with scene depth in a monocular camera setting. Our unsupervised method relies solely on monocular video for training (without any ground-truth labels) and imposes a photo-consistency loss on warped frames from one time step to the next in a sequence. Given two consecutive frames from a video, the proposed neural network produces depth, 6-DoF motion of each moving object, and the ego-motion between adjacent frames as shown in Fig. 1. In this process, we leverage the instance mask of each dynamic object, obtained from an off-the-shelf instance segmentation module.
Our main contributions are the following:
Forward image warping Differentiable depth-based rendering (which we call inverse warping) was introduced in [39], where the target view is reconstructed by sampling pixels from a source view based on the target depth map and the relative pose. The warping procedure is effective in static scene areas, but the regions of moving objects cause warping artifacts because the 3D structure of the source image may become distorted after warping based on the target image's depth [4]. To build a geometrically plausible formulation, we introduce forward warping, which maps the source image to the target viewpoint based on the source depth and the relative pose. A well-known remaining issue with forward warping is that the output image may have holes. Thus, we propose a differentiable and hole-free forward warping module that works as a key component in our instance-wise depth and motion learning from monocular videos. The details are described in Sec. 3.2.
Instance-wise photometric and geometric consistency Existing works [3, 15] have successfully estimated independent object motion with stereo cameras. Approaches based on stereo video can explicitly separate static and dynamic motion by using stereo offset and temporal information. On the other hand, estimation from monocular video captured in the dynamic real world, where both agents and objects are moving, suffers from motion ambiguity, as only temporal clues are available. To address this issue, we introduce instance-wise view synthesis and geometric consistency into the training loss. We first decompose the image into background and object (potentially moving) regions using a predicted instance mask. We then warp each component using the estimated single-view depth and camera poses to compute photometric consistency. We also impose a geometric consistency loss for each instance that constrains the estimated geometry from all input frames to be consistent. Sec. 3.3 presents our technical approach (our loss and network details) for inferring 3D object motion, ego-motion and depth.
Mini-batch arrangement Our work aims to recover instance-wise 6-DoF object motion regardless of how many instances the scene contains. To accomplish this efficiently, we propose an instance-wise mini-batch arrangement technique that organizes mini-batches with respect to the total number of instances to avoid iterative training.
KITTI video instance segmentation dataset We introduce an auto-annotation scheme to generate a video instance segmentation dataset, which is expected to contribute to various areas of self-driving research. The role of this method is similar to that of [34], but we design a new framework that is tailored to driving scenarios. Details are described in Sec. 3.4.
State-of-the-art performance Our unsupervised monocular depth and pose estimation is validated with a performance evaluation, presented in Sec. 4, which shows that our jointly learned system outperforms earlier approaches.
Our code, models, and video instance segmentation dataset will be made publicly available.
2 Related Work
Unsupervised depth and ego-motion learning Several works [39, 12, 32, 21, 26] have studied the inference of depth and ego-motion from monocular sequences. Zhou et al. [39] introduce an unsupervised learning framework for depth and ego-motion by maximizing photometric consistency across monocular videos during training. Godard et al. [12] offer an approach replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. It trains a network by searching for the correspondence in a rectified stereo pair, which requires only a one-dimensional search. Wang et al. [32] show that Direct Visual Odometry (DVO) can be used to estimate the camera pose between frames and that inverse depth normalization leads to a better local minimum. Mahjourian et al. [21] combine 3D geometric constraints using Iterative Closest Point (ICP) with a photometric consistency loss. Ranjan et al. [26] propose a competitive collaboration framework that facilitates the coordinated training of multiple specialized neural networks to solve joint problems. Recently, two works [2, 6] have achieved state-of-the-art performance on depth and ego-motion estimation using geometric constraints.
Learning motion of moving objects Recently, the joint optimization of dynamic object motion along with depth and ego-motion [4, 3] has gained interest as a new research topic. Casser et al. [4] present an unsupervised image-to-depth framework that models the motion of moving objects and cameras. The main idea is to introduce geometric structure into the learning process by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Cao et al. [3] propose a self-supervised framework with a given 2D bounding box to learn scene structure and object 3D motion from stereo videos. They factor the scene representation into independently moving objects with geometric reasoning. However, this work is based on a stereo camera setup and computes the 3D motion vector of each instance using simple mean filtering.
Video instance segmentation The task of video instance segmentation (VIS) is to simultaneously conduct detection, segmentation and tracking of instances in videos. Yang et al. [34] first extend the image instance segmentation problem to the video domain. To facilitate research on this new task, they present a large-scale dataset and add a new tracking branch to Mask R-CNN to jointly perform the detection, segmentation and tracking tasks.
3 Methodology
We introduce an end-to-end joint training framework for instance-wise depth and motion learning from monocular videos without supervision. Figure 2 illustrates an overview of the complete pipeline. The core of our method is a novel forward rigid projection module to align the depth maps from adjacent frames and an instance-wise training loss. In this section, we introduce the forward projective geometry and the networks for each type of output: DepthNet, EgoPoseNet, and ObjPoseNet. Further, we describe our novel loss functions and how they are designed for backpropagation in decomposing the background and moving object regions.
3.1 Method Overview
Baseline
Given two consecutive RGB images $(I_t, I_{t+1})$ sampled from an unlabeled video, we first predict their respective depth maps $(\hat{D}_t, \hat{D}_{t+1})$ via our presented DepthNet with trainable parameters $\theta_d$. By concatenating the two sequential images as an input, our proposed EgoPoseNet, with trainable parameters $\theta_e$, estimates the six-dimensional SE(3) relative transformation vector $T_{t \to t+1}$. With the predicted depth, relative ego-motion, and a given camera intrinsic matrix $K$, we can synthesize an adjacent image in the sequence using an inverse warping operation, where $\hat{I}_t$ is the image reconstructed by warping the reference frame [39, 13]. As a supervisory signal, an image reconstruction loss between $I_t$ and $\hat{I}_t$ is imposed to optimize the parameters $\theta_d$ and $\theta_e$.
Instance-wise learning
The baseline method has a limitation in that it cannot handle dynamic scenes containing moving objects. Our goal is to learn depth and ego-motion, as well as object motions, from monocular videos by constraining them with instance-wise geometric consistencies. We propose an ObjPoseNet with trainable parameters $\theta_o$, which is specialized to estimating individual object motions. We annotate a novel video instance segmentation dataset to utilize as individual object masks while training the ego-motion and object motions. The details of the video instance segmentation dataset will be described in Sec. 3.4. Given two consecutive sets of binary instance masks $\{M^i_t\}$ and $\{M^i_{t+1}\}$ corresponding to $(I_t, I_{t+1})$, the $n$ instances are annotated and matched between the frames. First, in the case of camera ego-motion, potentially moving objects are masked out and only the background region is fed to EgoPoseNet. Secondly, the binary instance masks are multiplied with the input images and fed to ObjPoseNet. For both networks, the motion of the $i$-th element is represented as $T^i_{t \to t+1}$, where $T^0_{t \to t+1}$ denotes the camera ego-motion from frame $t$ to $t+1$. The details of the motion models will be described in the following subsections.
Training objectives
The previous works [21, 2, 6] imposed geometric constraints between frames, but they are limited to rigid projections. Regions containing moving objects cannot be constrained with this term and are treated as outlier regions with regard to geometric consistency. In this paper, we propose instance-wise geometric consistency. We leverage instance masks to impose geometric consistency region-by-region. Following instance-wise learning, our overall objective function can be defined as follows:

$\mathcal{L}_{total} = \lambda_1 \mathcal{L}_{rec} + \lambda_2 \mathcal{L}_{geo} + \lambda_3 \mathcal{L}_s + \lambda_4 \mathcal{L}_t + \lambda_5 \mathcal{L}_h$  (1)

where $\mathcal{L}_{rec}$ and $\mathcal{L}_{geo}$ are the reconstruction and geometric consistency losses applied on each instance region including the background, $\mathcal{L}_s$ stands for the depth smoothness loss, and $\mathcal{L}_t$ and $\mathcal{L}_h$ are the object translation and height constraint losses. $\{\lambda_1, \ldots, \lambda_5\}$ is the set of loss weights. We train the models in both the forward ($t \to t+1$) and backward ($t+1 \to t$) directions to maximally use the self-supervisory signals. In the following subsections, we introduce how we constrain the instance-wise consistencies.
3.2 Forward Projective Geometry
A fully differentiable warping function enables the learning of structure-from-motion tasks. This operation was first proposed in spatial transformer networks (STN) [13]. Previous works for learning depth and ego-motion from unlabeled videos follow this grid-sampling module to synthesize adjacent views. To synthesize $\hat{I}_t$ from $I_{t+1}$, the homogeneous coordinates $p_t$ of a pixel in $I_t$ are projected to $I_{t+1}$ as follows:

$p_{t+1} \sim K \, T_{t \to t+1} \, \hat{D}_t(p_t) \, K^{-1} p_t$  (2)
As expressed in the equation, this operation computes $\hat{I}_t(p_t)$ by taking the value at the homogeneous coordinates $p_{t+1}$ obtained from the inverse rigid projection using $\hat{D}_t$ and $T_{t \to t+1}$. As a result, the coordinates are not valid if $p_t$ lies on an object that moves between $I_t$ and $I_{t+1}$. Therefore, the inverse warping operation is not suitable for removing the effects of ego-motion in dynamic scenes. Figure 3 describes the point discrepancy that arises when geometrically warping under the rigid-structure assumption. As shown in Figure 4, inverse warping distorts the appearance of moving objects.
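To make the gather-style sampling behind Eq. (2) concrete, here is a minimal NumPy sketch of inverse warping. It uses nearest-neighbor sampling instead of the bilinear sampling an STN would use, and `inverse_warp` and its argument layout are our own illustrative choices, not the paper's code:

```python
import numpy as np

def inverse_warp(src, depth_tgt, K, T):
    """Reconstruct the target view by sampling the source view (Eq. (2)-style).

    src: (H, W) source image, depth_tgt: (H, W) target-view depth,
    K: (3, 3) intrinsics, T: (4, 4) relative pose (target -> source).
    """
    H, W = src.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    # Lift each target pixel to 3D with the *target* depth, move it to the source frame.
    cam = np.linalg.inv(K) @ pix * depth_tgt.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    cam_src = (T @ cam_h)[:3]
    # Project into the source image and gather values (nearest neighbor for brevity).
    proj = K @ cam_src
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (proj[2] > 0)
    out = np.zeros_like(src)
    out.reshape(-1)[valid] = src[v[valid], u[valid]]
    return out
```

With an identity pose and constant depth, the warp reduces to a copy, which makes the gather direction easy to verify.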
In order to properly synthesize the novel view (from frame $t$ to $t+1$) when there exist moving objects, we propose forward projective geometry as follows:

$p_{t \to t+1} \sim K \, T_{t \to t+1} \, \hat{D}_t(p_t) \, K^{-1} p_t$  (3)
Unlike the inverse projection in Eq. (2), this warping process cannot be sampled by the STN since the projection is in the forward direction (the inverse of grid sampling). In order to make this operation fully differentiable, we first use sparse tensor coding to index the homogeneous coordinates $p_{t \to t+1}$ of each pixel in the warped frame. Invalid coordinates (exiting the view, i.e., falling outside the image boundary) of the sparse tensor are masked out. We then densify this sparse tensor by taking the nearest-neighbor value of the source pixel. However, this process has a limitation in that there exist irregular holes due to the sparse tensor coding. Since we need to feed these forward-projected images into the neural networks in the next step, the size of the holes should be minimized. To fill the holes as much as possible, we pre-upsample the depth map of the reference frame. If the depth map is upsampled by a factor of $\kappa$, the camera intrinsic matrix is also upsampled as follows:

$K' = \begin{bmatrix} \kappa f_x & 0 & \kappa c_x \\ 0 & \kappa f_y & \kappa c_y \\ 0 & 0 & 1 \end{bmatrix}$  (4)

where $f_x$ and $f_y$ are the focal lengths along the $x$- and $y$-axes, and $(c_x, c_y)$ is the principal point. Figure 5 shows the effect of pre-upsampling the reference depth during forward warping. With a sufficient upsampling factor during forward projection, the holes in the warped valid masks are filled properly. In the following subsection, we describe how to synthesize novel views with inverse and forward projection in each instance region.
3.3 Instance-wise View Synthesis and Geometric Consistency
Instance-wise reconstruction
Each step of the instance-wise view synthesis is described in Figure 2. To synthesize a novel view in an instance-wise manner, we first decompose the image region into background and object (potentially moving) regions. Given the instance masks $M^i_t$ and $M^i_{t+1}$, the background mask shared by the two frames is generated as

$M^0 = \prod_{i=1}^{n} (1 - M^i_t) \odot (1 - M^i_{t+1})$  (5)
The background mask is element-wise multiplied ($\odot$, the Hadamard product) with the input frames $(I_t, I_{t+1})$, which are then concatenated along the channel axis and fed to the EgoPoseNet. The camera ego-motion is computed as

$T^0_{t \to t+1} = \mathrm{EgoPoseNet}(M^0 \odot I_t, \; M^0 \odot I_{t+1})$  (6)
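As a small illustration of the background decomposition, the mask can be computed as the complement of the union of all instance masks over both frames (a hedged sketch; `background_mask` is our own helper name):

```python
import numpy as np

def background_mask(masks_t, masks_t1):
    """Background = pixels covered by no instance in either frame (Eq. (5)-style).

    masks_t, masks_t1: lists of (H, W) binary instance masks for frames t and t+1.
    """
    union = np.zeros_like(masks_t[0])
    for m in masks_t + masks_t1:
        union = np.maximum(union, m)  # accumulate the union of all instances
    return 1 - union  # complement: 1 where no instance appears
```

The ego-motion network then only sees `background_mask(...) * frame`, so moving-object pixels cannot corrupt the camera pose estimate.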
To learn the object motions, we first apply the forward warping $\mathcal{W}_{fwd}$ to generate ego-motion-eliminated warped images and masks as follows:

$\hat{I}_{t \to t+1} = \mathcal{W}_{fwd}(I_t; \hat{D}_t, T^0_{t \to t+1}, K)$  (7)

$\hat{M}^i_{t \to t+1} = \mathcal{W}_{fwd}(M^i_t; \hat{D}_t, T^0_{t \to t+1}, K)$  (8)
where both equations are applied in the backward direction as well by exchanging the subscripts $t$ and $t+1$. Now we can generate forward-projected instance images as $\hat{M}^i_{t \to t+1} \odot \hat{I}_{t \to t+1}$ and $M^i_{t+1} \odot I_{t+1}$. For every object instance in the image, ObjPoseNet predicts the object motion as

$T^i_{t \to t+1} = \mathrm{ObjPoseNet}(\hat{M}^i_{t \to t+1} \odot \hat{I}_{t \to t+1}, \; M^i_{t+1} \odot I_{t+1})$  (9)
where both object motions are composed of six-dimensional SE(3) translation and rotation vectors. We merge all instance regions to synthesize the novel view. In this step, we utilize inverse warping, $\mathcal{W}_{inv}$. First, the background region is reconstructed as

$\hat{I}^0_{t+1} = \mathcal{W}_{inv}(M^0 \odot I_t; \hat{D}_{t+1}, T^0_{t \to t+1}, K)$  (10)
where the gradients are propagated with respect to $\theta_d$ and $\theta_e$. Second, the inverse-warped instance region is represented as

$\hat{I}^i_{t+1} = \mathcal{W}_{inv}(\hat{M}^i_{t \to t+1} \odot \hat{I}_{t \to t+1}; \hat{D}_{t+1}, T^i_{t \to t+1}, K)$  (11)
where the gradients are propagated with respect to $\theta_d$ and $\theta_o$. Finally, our instance-wise fully reconstructed novel view is formulated as

$\hat{I}_{t+1} = \hat{I}^0_{t+1} + \sum_{i=1}^{n} \hat{I}^i_{t+1}$  (12)
Note that the above three equations are applied in either the forward or backward direction by switching the subscripts $t$ and $t+1$.
Instance-wise mini-batch rearrangement
While training ObjPoseNet, the number of instance images may change after each iteration. In order to avoid inefficient iterative training, we fix the maximum number of instances per image (sampled in order of instance size) and intermediately rearrange the mini-batches with respect to the total number of instances in the mini-batch. For example, if the mini-batch has four frames and each frame contains a different number of instances, then the rearranged mini-batch size is the total number of instances over the four frames. The scale of the gradients while training ObjPoseNet is normalized according to the total number of instances per mini-batch.
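A minimal sketch of the rearrangement; `rearrange_minibatch` is a hypothetical helper, the real pipeline operates on GPU tensors, and `max_inst=3` is an assumed cap:

```python
import numpy as np

def rearrange_minibatch(frame_instances, max_inst=3):
    """Pack per-frame instance tensors into one ObjPoseNet mini-batch.

    frame_instances: list (length = original batch size) of arrays shaped
    (n_i, C, H, W), where n_i varies per frame. Instances beyond `max_inst`
    (assumed pre-sorted by size) are dropped, so the packed batch dimension
    is sum(min(n_i, max_inst)) and no per-instance training loop is needed.
    """
    kept = [f[:max_inst] for f in frame_instances if len(f) > 0]
    packed = np.concatenate(kept, axis=0)
    grad_scale = 1.0 / packed.shape[0]  # normalize gradients by instance count
    return packed, grad_scale
```

One network pass over `packed` then handles every instance in the mini-batch at once, which is the point of the rearrangement.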
Instance mask propagation
Through the process of forward and inverse warping, the instance mask is also propagated to carry the information of instance position and pixel validity. In the case of the instance mask $M^i_t$, the forward- and inverse-warped mask is expressed as follows:

$\tilde{M}^i_{t+1} = \mathcal{W}_{inv}(\mathcal{W}_{fwd}(M^i_t; \hat{D}_t, T^0_{t \to t+1}, K); \hat{D}_{t+1}, T^i_{t \to t+1}, K)$  (13)
Note that the forward-warped mask has holes due to the sparse tensor coding. To keep the binary format and avoid interpolation near the holes while inverse warping, we round up the fractional values after each warping operation. The final valid instance mask is expressed as follows:

$M^i_{valid} = \tilde{M}^i_{t+1} \odot M^i_{t+1}$  (14)
Instance-wise geometric consistency
We impose the geometric consistency loss on each instance region. Following the work by Bian et al. [2], we constrain the geometric consistency during inverse warping. With the predicted depth map and the warped instance mask, $\hat{D}_{t+1}$ can be spatially aligned to the projected coordinates by inverse warping, represented as $\tilde{D}^0$ and $\tilde{D}^i$ for the background and instance regions, respectively. In addition, $\hat{D}_t$ can be scale-consistently transformed to the frame $t+1$, represented as $\hat{D}'^0$ and $\hat{D}'^i$ for the background and instance regions, respectively. Based on this instance-wise operation, we compute the unified depth inconsistency map as:

$D^0_{diff} = \dfrac{|\tilde{D}^0 - \hat{D}'^0|}{\tilde{D}^0 + \hat{D}'^0}$  (15)

$D^i_{diff} = \dfrac{|\tilde{D}^i - \hat{D}'^i|}{\tilde{D}^i + \hat{D}'^i}$  (16)
where each equation is applied to either the background or an instance region, and both are applied in either the forward or backward direction by changing the subscripts $t$ and $t+1$. Note that the above depth inconsistency maps are spatially aligned to the target frame. Therefore, we can integrate the depth inconsistency maps from the background and instance regions as follows:

$D_{diff} = M^0 \odot D^0_{diff} + \sum_{i=1}^{n} M^i_{valid} \odot D^i_{diff}$  (17)
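The per-pixel depth inconsistency of Bian et al. [2] and its instance-wise merge can be sketched as follows (helper names are ours):

```python
import numpy as np

def depth_inconsistency(d_a, d_b, eps=1e-7):
    """Normalized depth inconsistency, |D1 - D2| / (D1 + D2), per pixel.
    Values lie in [0, 1): 0 means the two depth estimates agree exactly."""
    return np.abs(d_a - d_b) / (d_a + d_b + eps)

def unified_inconsistency(d_bg_pair, d_inst_pairs, bg_mask, inst_masks):
    """Merge background and per-instance inconsistency maps (Eq. (17)-style)."""
    out = bg_mask * depth_inconsistency(*d_bg_pair)
    for (da, db), m in zip(d_inst_pairs, inst_masks):
        out = out + m * depth_inconsistency(da, db)
    return out
```

Because each term is gated by its own region mask, a badly estimated moving object cannot pollute the background consistency signal, and vice versa.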
Training loss
In order to handle occluded, view-exiting, and valid instance regions, we leverage Eq. (14) and Eq. (17). We generate a weight mask as $1 - D_{diff}$, and this is multiplied with the valid instance mask $M_{valid}$. Finally, our weighted valid mask is formulated as:

$M_w = (1 - D_{diff}) \odot M_{valid}$  (18)
The reconstruction loss is expressed as follows:

$\mathcal{L}_{rec} = \sum_{p} M_w(p) \Bigl( \alpha \, \frac{1 - \mathrm{SSIM}(I_{t+1}(p), \hat{I}_{t+1}(p))}{2} + (1 - \alpha) \, \bigl| I_{t+1}(p) - \hat{I}_{t+1}(p) \bigr| \Bigr)$  (19)
where $p$ is the location of each pixel, $\mathrm{SSIM}$ is the structural similarity index [33], and $\alpha$ is set to 0.8 based on cross-validation. The geometric consistency loss is expressed as follows:

$\mathcal{L}_{geo} = \sum_{p} M_w(p) \, D_{diff}(p)$  (20)
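A hedged sketch of the masked losses follows; for brevity the SSIM term of the reconstruction loss is replaced by a plain L1, and the helper names are ours:

```python
import numpy as np

def weighted_valid_mask(valid, d_incon):
    """Eq. (18)-style weight mask: down-weight pixels whose depths disagree."""
    return valid * (1.0 - d_incon)

def photometric_loss(i_tgt, i_synth, w):
    """Masked reconstruction loss in the spirit of Eq. (19). The paper blends
    SSIM and L1 with alpha = 0.8; here only the L1 term is kept for brevity."""
    return float((w * np.abs(i_tgt - i_synth)).sum() / (w.sum() + 1e-7))

def geometric_loss(d_incon, w):
    """Masked geometric consistency loss in the spirit of Eq. (20)."""
    return float((w * d_incon).sum() / (w.sum() + 1e-7))
```

The same weight mask gates both losses, so occluded or view-exiting pixels are discounted consistently across the photometric and geometric terms.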
To mitigate spatial fluctuation, we incorporate a smoothness term to regularize the predicted depth map. We apply the edge-aware smoothness loss proposed by Ranjan et al. [26], which is described as:

$\mathcal{L}_s = \sum_{p} \bigl( |\partial_x \hat{D}(p)| \, e^{-|\partial_x I(p)|} + |\partial_y \hat{D}(p)| \, e^{-|\partial_y I(p)|} \bigr)$  (21)
Note that the above loss functions are imposed in both the forward and backward directions by switching the subscripts $t$ and $t+1$.
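The edge-aware smoothness term can be sketched as follows (a first-order variant; `edge_aware_smoothness` is our own helper name):

```python
import numpy as np

def edge_aware_smoothness(depth, img):
    """Edge-aware first-order smoothness (Eq. (21)-style): depth gradients are
    penalized, but the penalty is down-weighted where the image itself has
    strong gradients (likely genuine object boundaries)."""
    dx_d = np.abs(depth[:, 1:] - depth[:, :-1])
    dy_d = np.abs(depth[1:, :] - depth[:-1, :])
    dx_i = np.abs(img[:, 1:] - img[:, :-1])
    dy_i = np.abs(img[1:, :] - img[:-1, :])
    return (dx_d * np.exp(-dx_i)).mean() + (dy_d * np.exp(-dy_i)).mean()
```

A depth discontinuity that coincides with an image edge is thus cheap, while the same discontinuity over a textureless region is expensive.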
Since the dataset has a low proportion of moving objects, the predicted object motions tend to converge to zero. The same issue has been raised in a previous study [4]. To supervise the approximate amount of an object's movement, we constrain the motion of the object with a translation prior. We compute this translation prior, $t^i_{prior}$, by subtracting the mean of the object's 3D points in the forward-warped frame from the mean of the target frame's 3D object points. This represents the mean estimated 3D vector of the object's motion. The object translation constraint loss is defined as follows:

$\mathcal{L}_t = \sum_{i=1}^{n} \bigl\| t^i - t^i_{prior} \bigr\|_1$  (22)
where $t^i$ and $t^i_{prior}$ are the predicted object translation from ObjPoseNet and the translation prior on the $i$-th instance mask, respectively.
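A small sketch of the translation prior and its L1 constraint (helper names are ours; in practice the point sets would come from back-projecting the depth maps within the instance mask):

```python
import numpy as np

def translation_prior(pts_warped, pts_target):
    """Mean 3D motion vector of an object (Sec. 3.3): the difference between
    the mean of its 3D points in the target frame and in the ego-motion-
    eliminated (forward-warped) frame. pts_*: (N, 3) arrays of 3D points."""
    return pts_target.mean(axis=0) - pts_warped.mean(axis=0)

def translation_constraint_loss(t_pred, t_prior):
    """L1 penalty (Eq. (22)-style) pulling the predicted object translation
    toward the prior, which keeps object motion from collapsing to zero."""
    return float(np.abs(t_pred - t_prior).sum())
```

Because the prior is a coarse mean over the whole instance, it only anchors the magnitude and rough direction of the motion; the fine 6-DoF estimate still comes from ObjPoseNet.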
Although we have designed the instance-wise geometric consistency, there still exists a trivial case of infinite depth for a moving object that has the same motion as the camera, especially for vehicles in front. To mitigate this issue, we adopt the object height constraint loss proposed in the previous study [4], which is described as:

$\mathcal{L}_h = \frac{1}{\bar{D}} \sum_{i=1}^{n} \Bigl| \hat{D}^i - \frac{f_y \cdot p^i}{h^i} \Bigr|$  (23)
where $\bar{D}$ is the mean estimated depth, and ($p^i$, $h^i$) are the learnable height prior and the pixel height of the $i$-th instance, respectively. The final loss is the weighted summation of these five losses as defined in Eq. (1).
3.4 Auto-annotation of the Video Instance Segmentation Dataset
We introduce an auto-annotation scheme to generate a video instance segmentation dataset from the existing KITTI autonomous driving dataset [11]. To this end, we adopt an off-the-shelf instance segmentation model, PANet [19], and an optical flow estimation model, PWC-Net [30]. Figure 6 shows an example case between two consecutive frames. We first compute the instance segmentation for every image frame, and calculate the Intersection over Union (IoU) score table among instances in each frame. The occluded and dis-occluded regions are handled by the bidirectional flow consensus proposed in UnFlow [23]. If the maximal IoU in the adjacent frame is above a threshold, then the instance is assumed to be matched. Our KITTI video instance segmentation dataset (KITTI-VIS) will be publicly available.
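The IoU-based matching step can be sketched as a greedy assignment (a simplification of the full scheme, which additionally uses flow-based consensus; `thresh=0.5` is an assumed value, not the paper's):

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def match_instances(masks_t, masks_t1, thresh=0.5):
    """Greedily match instance masks between consecutive frames by maximal IoU.
    Returns (i, j) index pairs; instances whose best IoU is below `thresh`
    stay unmatched, mirroring the thresholding described in Sec. 3.4."""
    pairs, used = [], set()
    for i, m in enumerate(masks_t):
        scores = [iou(m, n) if j not in used else -1.0
                  for j, n in enumerate(masks_t1)]
        if scores and max(scores) > thresh:
            j = int(np.argmax(scores))
            pairs.append((i, j))
            used.add(j)
    return pairs
```

In the real pipeline the candidate masks at frame t+1 would first be aligned with optical flow before computing the IoU table, which makes the matching robust to large object displacements.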
4 Experiments
We evaluate the performance of our framework and compare it with previous unsupervised methods on the single-view depth and visual odometry tasks. We train and test our method on KITTI [11] for benchmarking.
4.1 Implementation Details
Network details
For DepthNet, we use DispResNet [26] based on ResNet50 with an encoder-decoder structure. The network can generate multi-scale outputs (six different scales), but single-scale training converges faster and produces better performance (based on the Abs Rel error metric). The structures of EgoPoseNet and ObjPoseNet are the same, but the weights are not shared. They consist of seven convolutional layers and regress the relative pose as three Euler angles and a three-dimensional translation vector.
Training
Our system is implemented in PyTorch [25]. We train our neural networks using the ADAM optimizer [14] on Nvidia RTX 2080 GPUs. The video data is augmented with random scaling, cropping, and horizontal flipping. We set the mini-batch size to 4 and train the networks over 200 epochs with 1,000 randomly sampled batches in each epoch. The initial learning rate is decreased by half every 50 epochs. The loss weights in Eq. (1) are treated as fixed hyperparameters.
While training, we take three consecutive frames as input to train our joint networks. Our three-frame cyclic training is described in Figure 7. Dynamic scenes are hard to handle with rigid projective geometry in a one-shot manner. We utilize an intermediate frame, which enables the decomposition of ego-motion-driven global view synthesis and residual view synthesis driven by object motions. From this, several warping directions can be proposed. The arrows in Figure 7 represent the warping direction of RGB images and depth maps. We tried to optimize ObjPoseNet by warping to the intermediate views (dashed arrows); however, the network did not converge. One important point here is that we need to feed the supervisory signals generated at the original timestamps, not at the intermediate frames. Although we generate the photometric and geometric supervision only in the reference or target frames, we utilize the object motions while warping to the intermediate frames. We regularize the object motions by averaging the two motions in the same direction (e.g., "intermediate to target" and "reference to intermediate", shown as red and green arrows).
4.2 Ablation Study
To validate the effect of our forward projective geometry and instance-wise geometric consistency term, we conduct an ablation study against the baseline method. As shown in Table 1, our forward projection works effectively and leads to convergence of DepthNet. Each proposed module improves the quality of the depth maps, and the best performance is achieved with all proposed components. We observe that, without forward projection, ObjPoseNet did not converge even with the instance-wise geometric loss, so the performance of DepthNet was not improved. In other words, an optimized ObjPoseNet helps boost the performance of DepthNet; DepthNet and ObjPoseNet complement each other. We note that the most significant performance improvement comes from the instance-wise geometric loss incorporated with forward projection.
4.3 Monocular Depth Estimation
Test setup
Results analysis
We show qualitative results on single-view depth estimation in Figure 8. The compared methods are CC [26] and SC-SfM [2], which have the same (ResNet-based) network structure for depth map prediction. Ours produces better depth representations of moving objects than the previous methods. Since the previous studies do not consider the dynamics of objects when finding pixel correspondences, their estimates of object distance can be either farther or closer than the actual distance. This is a traditional limitation of self-supervised learning of depth from monocular videos; our networks, however, self-disentangle moving and static object regions through our instance-wise losses.
Table 2 shows the results on the KITTI Eigen split test, where the proposed method achieves state-of-the-art performance on the single-view depth prediction task with unsupervised training. The advantage of using instance masks and constraining the instance-wise photometric and geometric consistencies is evident. Note that DepthNet does not need instance masks in the test phase.
4.4 Visual Odometry Estimation
Test setup
We evaluate the performance of our EgoPoseNet on the KITTI visual odometry dataset. Following the evaluation setup of SfMLearner [39], we use sequences 00-08 for training, and sequences 09 and 10 for testing. In our case, since the potentially moving object masks are fed together with the image sequences while training EgoPoseNet, we test the performance of visual odometry under two conditions: with and without instance masks.
Results analysis
Table 3 shows the results on the KITTI visual odometry test. We measure the Absolute Trajectory Error (ATE) and achieve state-of-the-art performance. Even without the instance masks, the result on sequence 10 remains favorable. This is because the scene does not contain many potentially moving objects, e.g., vehicles and pedestrians, so the result is not much affected by whether instance masks are used.
5 Conclusion
In this work, we proposed a novel framework that predicts 6-DoF motion of multiple dynamic objects, ego-motion, and depth from monocular image sequences. Leveraging video instance segmentation, we design an end-to-end joint training pipeline that learns in an unsupervised manner. There are four main contributions in our work: (1) an auto-annotation scheme for video instance segmentation, (2) differentiable forward image warping, (3) an instance-wise view-synthesis and geometric consistency loss, and (4) mini-batch rearrangement. We show that our method outperforms existing methods that estimate object motion, ego-motion, and depth. We also show that each proposed module plays a role in improving the performance of our framework.
In the future, we plan to investigate joint optimization of the instance mask together with the depth and motion. Another future direction is to consider longer input sequences as in bundle adjustment [31].
References
 (2019) SemanticKITTI: a dataset for semantic scene understanding of LiDAR sequences. In ICCV.
 (2019) Unsupervised scale-consistent depth and ego-motion learning from monocular video. In NeurIPS.
 (2019) Learning independent object motion from unlabelled stereoscopic videos. In CVPR.
 (2019) Depth prediction without the sensors: leveraging structure for unsupervised learning from monocular videos. In AAAI.
 (2018) Pyramid stereo matching network. In CVPR.
 (2019) Self-supervised learning with geometric constraints in monocular video: connecting flow, depth, and camera. In ICCV.
 (2015) FlowNet: learning optical flow with convolutional networks. In ICCV.
 (2014) Depth map prediction from a single image using a multi-scale deep network. In NIPS.
 (2016) Unsupervised CNN for single view depth estimation: geometry to the rescue. In ECCV.
 (2014) 3D traffic scene understanding from movable platforms. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
 (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR.
 (2017) Unsupervised monocular depth estimation with left-right consistency. In CVPR.
 (2015) Spatial transformer networks. In NIPS.
 (2015) Adam: a method for stochastic optimization. In ICLR.
 (2019) Learning residual flow as dynamic motion from stereo videos. In IROS.
 (2019) Visuomotor understanding for representation learning of driving scenes. In BMVC.
 (2017) VPGNet: vanishing point guided network for lane and road marking detection and recognition. In ICCV.
 (2016) Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
 (2018) Path aggregation network for instance segmentation. In CVPR.
 (2018) Learning rigidity in dynamic scenes with a moving camera for 3D motion field estimation. In ECCV.
 (2018) Unsupervised learning of depth and ego-motion from monocular video using 3D geometric constraints. In CVPR.
 (2016) A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In CVPR.
 (2018) UnFlow: unsupervised learning of optical flow with a bidirectional census loss. In AAAI.
 (2015) ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics.
 (2017) Automatic differentiation in PyTorch.
 (2019) Competitive collaboration: joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. In CVPR.
 (2014) Video pop-up: monocular 3D reconstruction of dynamic scenes. In ECCV.
 (2004) Pedestrian detection for driving assistance systems: single-frame classification and system level performance. In IEEE Intelligent Vehicles Symposium.
 (2019) RoarNet: a robust 3D object detection based on region approximation refinement. In IEEE Intelligent Vehicles Symposium (IV).
 (2018) PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR.
 (1999) Bundle adjustment: a modern synthesis. In International Workshop on Vision Algorithms.
 (2018) Learning depth from monocular videos using direct methods. In CVPR.
 (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing.
 (2019) Video instance segmentation. In ICCV.
 (2018) LEGO: learning edge with geometry all at once by watching videos. In CVPR.
 (2018) Unsupervised learning of geometry with edge-aware depth-normal consistency. In AAAI.
 (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In CVPR.
 (2018) Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In CVPR.
 (2017) Unsupervised learning of depth and ego-motion from video. In CVPR.
 (2018) DF-Net: unsupervised joint learning of depth and flow using cross-task consistency. In ECCV.