Guided Feature Selection for Deep Visual Odometry

Supported by the National Key Research and Development Program of China (2017YFB1002601) and the National Natural Science Foundation of China (61632003, 61771026).

Fei Xue, Qiuyuan Wang, Xin Wang
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University; Cooperative Medianet Innovation Center, Shanghai Jiao Tong University
{feixue, wangqiuyuan, xinwang_cis}@pku.edu.cn

Wei Dong
Robotics Institute, Carnegie Mellon University
weidong@andrew.cmu.edu

Junqiu Wang
Beijing Changcheng Aviation Measurement and Control Institute
jerywangjq@foxmail.com

Hongbin Zha
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University; Cooperative Medianet Innovation Center, Shanghai Jiao Tong University
zha@cis.pku.edu.cn
Abstract

We present a novel end-to-end visual odometry architecture with guided feature selection based on deep convolutional recurrent neural networks. Different from current monocular visual odometry methods, our approach is built on the intuition that features contribute unequally to different motion patterns. Specifically, we propose a dual-branch recurrent network that learns rotation and translation separately, leveraging a Convolutional Neural Network (CNN) for feature representation and a Recurrent Neural Network (RNN) for reasoning over image sequences. To enhance feature selection, we further introduce an effective context-aware guidance mechanism that forces each branch to explicitly distill the information relevant to its specific motion pattern. Experiments on the prevalent KITTI and ICL_NUIM benchmarks demonstrate that our method outperforms current state-of-the-art model- and learning-based methods for both decoupled and joint camera pose recovery.

Keywords:
Visual Odometry Recurrent Neural Networks Feature Selection

1 Introduction

Visual Odometry (VO) and Visual Simultaneous Localization and Mapping (V-SLAM) estimate camera poses from image sequences by exploiting the consistency between neighboring frames. As an essential task in computer vision, VO has been widely used in autonomous driving, robotics and augmented reality. Features play a key role in building consistency across images, and have been widely used in current VO/SLAM algorithms [28, 10, 26]. Despite the success of these methods, they ignore the discriminative contributions of features to different motions. However, if specific motions, especially rotations and translations, can be recovered by related features, the problems of scale-drifting and error accumulation in VO can be mitigated.

Unfortunately, how to detect appropriate features for recovering specific motions remains a challenging problem. Handcrafted feature descriptors such as SIFT [24] and ORB [31] are designed for general visual tasks and lack a selective response to motions. Instead, geometric priors such as vanishing points [21], planar structures [19, 32], and pixel depth [17, 33, 30] are used in VO algorithms for camera pose decoupling. These methods provide promising performance in certain environments. However, they have limited generalization ability and may suffer from noisy input.

Rather than relying on handcrafted features, Convolutional Neural Networks (CNNs) can extract deep features that encode high-level priors and can be fed into Recurrent Neural Networks (RNNs) for end-to-end image sequence modeling and camera motion estimation. A few methods based on the regular Long Short-Term Memory (LSTM) [14] have been proposed for camera motion recovery, such as DeepVO [35] and ESP-VO [36]. While achieving promising performance, they do not take into account the different responses of visual cues to motions, and thus may output trajectories with large errors.

In this paper, we aim to explore the possibility of selecting features with high discriminative ability for specific motions, which allows us to relax the scene assumptions required in previous works. We present a novel context-aware recurrent network that learns decoupled camera poses using selected features, as shown in Fig. 1. The main contributions include:

Figure 1: An overview of our architecture. Rotation and translation are estimated separately in a dual-branch recurrent network. The specific motions are calculated using corresponding features selected with the guidance of previous output.
  • We propose a dual-branch recurrent network with convolutional structure underneath for decoupled camera pose estimation, enabling the model to learn different motion patterns via specific features.

  • We incorporate a context-aware feature selection mechanism to steer the network explicitly for distilling motion-sensitive information for each branch, using previous output as guidance spatially and temporally.

  • Our experiments on the public benchmarks show that the proposed approach outperforms state-of-the-art VO methods for both joint and decoupled camera pose prediction.

The rest of this paper is organized as follows. In § 2, related works on monocular VO and context-aware learning strategy are discussed. In § 3, we introduce the architecture of our Guided Feature Selection for Deep Visual Odometry. The performance of the proposed approach is compared with other state-of-the-art methods in § 4. We conclude the paper in § 5.

2 Related Work

2.1 Visual Odometry Based on Joint Pose Estimation

Traditionally, VO algorithms can be roughly categorized into feature-based and direct methods. Feature-based approaches establish correspondences across images via keypoints. VISO2 [10] utilizes circle matching between consecutive frames to realize an efficient monocular VO system. Since outliers and noise are unavoidable, all VO algorithms suffer from scale drift and error accumulation. These problems can be partially solved in SLAM algorithms such as ORB-SLAM [26] by introducing pose graph optimization. Feature-based methods suffer from the heavy time cost of feature extraction and can fail in environments with limited texture. Direct methods [8, 7] recover poses by directly minimizing photometric error. These methods do not require expensive feature extraction, yet are sensitive to illumination variations. DSO [7] alleviates this problem by integrating a full photometric calibration. To date, both feature-based and direct methods are designed for static scenes and may face problems with dynamic objects. Moreover, absolute scale cannot be recovered by these methods without auxiliary information.

Recently, due to the advances of deep learning for computer vision tasks, CNNs and RNNs have been utilized for pose estimation. DeMoN [34] estimates depth and motion from two consecutive images captured by monocular cameras. SfmLearner [43] and its successors [38, 22] recover depth of scenes and ego-motions from unlabeled sequences with view synthesis as supervisory signal. DeepVO [35] learns camera poses from image sequences by combining CNNs and RNNs. It feeds 1D vectors learned by an encoder into a two-layer regular LSTM to predict motion of each frame and builds the loss function over the absolute joint poses at each time step. ESP-VO [36] extends DeepVO by inferring poses and uncertainties directly in a unified framework. VINet [4] fuses visual and inertial information in an intermediate representation level to eliminate manual synchronizations and performs sequence-to-sequence learning.

The methods above, however, scarcely consider the response of visual cues to different motion types. Besides, spatial structure is ignored in approaches based on regular RNNs, such as DeepVO and ESP-VO.

2.2 Visual Odometry Based on Decoupled Pose Estimation

Generally, instead of sharing the same features with translation, rotation can be recovered via geometric priors of certain scenes. Vanishing points [21, 16] and planar structures [19, 32, 18, 44] are two kinds of frequently used visual cues. [32, 18, 44] decouple rotation and translation, estimating orientation by tracking Manhattan frames. [19] further computes the translational motion in a VO system by minimizing the de-rotated reprojection error given the rotation. [1] exploits vanishing points to recover the absolute attitude and uses a 2-point algorithm to estimate translation for catadioptric vision. [17, 33] select features for specific motion estimation according to depth values, since points at infinity are hardly influenced by translation and hence are appropriate for estimating orientation. This strategy is also adopted in stereo SLAM systems [26].

Methods relying on the Manhattan World assumption or on feature depth achieve promising results in restricted scenes, but at the cost of reduced generalization and sensitivity to noise. In contrast, our method mitigates these problems by leveraging CNNs to extract features both explicitly and effectively.

2.3 Context-Aware Learning Mechanism

Contextual information is helpful for improving the performance of networks and has been widely utilized in many computer vision tasks. Specifically, TRACA [3] uses the coarse category of tracking targets as context and proposes multiple expert auto-encoders to construct a context-aware correlation filter for real-time tracking. PiCANet [23] learns to selectively attend to informative context locations for each pixel to generate contextual attention maps. EncNet [41] uses semantic context to selectively highlight class-dependent feature-maps for semantic segmentation. CEN [25] defines context as attributes assigned to each image and models the bias for image embeddings.

Our model benefits from the small motion between two consecutive views in an image sequence and exploits context, i.e., the continuity of neighboring frames in content and motion, to infer camera poses in a guided manner.

3 Guided Feature Selection for Deep Visual Odometry

In this section, we introduce our framework (Fig. 1) in detail. First, the model encodes RGB images into high-level features (§ 3.1). The feature-maps are then fed into a dual-branch recurrent network that learns rotation and translation separately (§ 3.2), with a context-aware guidance recalibrating the features for each branch (§ 3.3). Finally, we design a loss function considering both rotational and translational errors (§ 3.4).

3.1 Feature Extraction

We harness a CNN to learn feature representations. Recently, plenty of excellent deep neural networks have been developed for computer vision tasks such as classification [15], object detection [12], and semantic segmentation [2] by focusing on the appearance and content of images. The VO task, however, depends on the geometric information in input sequences. Taking the efficiency of transfer learning [39] into consideration, we build the encoder on FlowNet [6], which was proposed for optical flow estimation. We retain the first 9 convolutional layers, as in [35, 36], encoding a pair of images into a stack of 1024-channel 2D feature-maps. The process can be described as

$X_t = \mathcal{F}(I_{t-1}, I_t; \theta_{\mathcal{F}})$  (1)

where $I_{t-1}, I_t$ are consecutive frames and $X_t$ is the extracted feature tensor with $C$ channels and spatial size $H \times W$. $\mathcal{F}$ maps raw images to high-level abstract 3D tensors through its parameters $\theta_{\mathcal{F}}$. Different from DeepVO [35] and ESP-VO [36], we keep the structure of the feature-maps to retain the spatial information rather than compressing the features into 1D vectors.
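For concreteness, a minimal PyTorch sketch of such an encoder is given below. The layer configuration follows the public FlowNetS variant (the first nine convolutions, ending at 1024 channels); the class and function names are ours, and the sketch illustrates the described design rather than reproducing the released code.

```python
import torch
import torch.nn as nn

def conv(in_c, out_c, k, stride):
    # convolution followed by LeakyReLU, as in FlowNetS
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, kernel_size=k, stride=stride, padding=k // 2),
        nn.LeakyReLU(0.1, inplace=True),
    )

class PairEncoder(nn.Module):
    """Maps two stacked RGB frames (6 input channels) to a 1024-channel
    feature tensor X_t, i.e. Eq. (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv(6, 64, 7, 2), conv(64, 128, 5, 2), conv(128, 256, 5, 2),
            conv(256, 256, 3, 1), conv(256, 512, 3, 2), conv(512, 512, 3, 1),
            conv(512, 512, 3, 2), conv(512, 512, 3, 1), conv(512, 1024, 3, 2),
        )

    def forward(self, img_prev, img_cur):
        # inputs: (B, 3, H, W) each; output: (B, 1024, H/64, W/64)
        return self.features(torch.cat([img_prev, img_cur], dim=1))
```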

(a) Vanilla model
(b) Guidance model
Figure 2: An illustration of two structures for decoupled motion estimation. A vanilla structure (a) feeds feature-maps into a ConvLSTM unit directly, while the guided model (b) utilizes previous output as guidance for feature selection.

3.2 Dual-Branch Recurrent Network for Motion Separation

VO algorithms aim to recover camera poses from image sequences by leveraging the overlap between two or more consecutive frames. It is reasonable to model the sequences via LSTM [14], a variant of RNN. In this case, the feature flow passing through the recurrent units carries rich accumulated information from previous observations to infer the current output. Unfortunately, the standard LSTM units utilized by DeepVO [35] and ESP-VO [36] require 1D vectors as input and thus break the spatial structure of the features. We instead adopt ConvLSTM [37], an extended LSTM unit with embedded convolutions that preserves more detailed visual cues, to form a dual-branch recurrent model. Since the gates of a ConvLSTM unit (input, forget and output gates) can intuitively be regarded as regulators of the flow of values through the connections, features are filtered and reorganized to fit the relevant motions. The process can be controlled by

$(O_t, H_t) = \mathcal{R}(X_t, H_{t-1}; \theta_{\mathcal{R}})$  (2)

where $X_t$, $O_t$ and $H_t$ denote the input, output and hidden state at the current time step, respectively, and $H_{t-1}$ is the previous hidden state. Note that $X_t$ and $O_t$ are both 3D tensors, and so are the hidden states $H_{t-1}$, $H_t$. $\mathcal{R}$ plays the role of the recurrent ConvLSTM [37] units, with $\theta_{\mathcal{R}}$ representing its parameters.
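A minimal sketch of a standard ConvLSTM cell operating on such 3D tensors is shown below; the class name and kernel size are our choices rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell [37]: all gates are computed by one convolution
    over the concatenation of the input X_t and the previous hidden state H_{t-1},
    so the spatial layout of the features is preserved."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state                                   # previous hidden and cell states
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)           # O_t / H_t in Eq. (2)
        return h, c
```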

We create a dual-branch recurrent model for decoupled motion prediction, enabling each branch to control the feature flow corresponding to one type of motion. In general, a vanilla model feeds the extracted features into each branch directly, using the recurrent units for information selection, as shown in Fig. 2(a). The unit, however, may be inefficient at feature selection due to its finite capacity. A large amount of redundant information may further aggravate the situation and hence degrade accuracy.

Intuitively, the previous output contains valuable visual cues for the corresponding motion estimation and thus can serve as a supervisor. In our model, the raw feature-maps are recalibrated before being fed into each branch, as depicted in Fig. 2(b). Thereby, each motion pattern can be learned from the most related features, as discussed in § 3.3. The process can be described as

$(O_t^m, H_t^m) = \mathcal{R}^m(\tilde{X}_t^m, H_{t-1}^m; \theta_{\mathcal{R}^m})$  (3)

where $\tilde{X}_t^m$, $H_t^m$ and $O_t^m$ denote the recalibrated features, hidden state and output for the specific motion $m$, and $H_{t-1}^m$ is the previous hidden state of that branch. Here, the motion $m$ refers specifically to rotation and translation in our work; the model is flexible enough to accommodate other motion patterns depending on the task.
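Putting Eqs. (3) and (4) together, one branch at time step $t$ could be sketched as follows. `ConvLSTMCell` is the cell sketched above, `guidance` stands for any of the modules described in § 3.3, and the pooling-plus-linear pose head is our assumption for how a 3-DoF motion could be regressed from the hidden state; this is an illustrative sketch, not the authors' implementation.

```python
import torch.nn as nn

class GuidedBranch(nn.Module):
    """One branch of the dual-branch model: recalibrate X_t with the previous
    output (Eq. 4), update the ConvLSTM state (Eq. 3), and regress the motion.
    The hidden size is assumed to equal the input channel count so that X_t and
    the previous output O_{t-1}^m can be compared element-wise by the guidance."""
    def __init__(self, in_channels, hidden_channels, guidance, out_dim=3):
        super().__init__()
        self.guidance = guidance                      # e.g. channel-wise correlation (§ 3.3)
        self.cell = ConvLSTMCell(in_channels, hidden_channels)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(hidden_channels, out_dim))

    def forward(self, x, state, prev_out):
        # vanilla model: feed x directly; guided model: recalibrate with O_{t-1}^m
        x_tilde = x if prev_out is None else self.guidance(x, prev_out)
        h, c = self.cell(x_tilde, state)
        return self.head(h), (h, c), h                # pose, new state, output O_t^m
```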

Figure 3: SENet-like guidance module. Encoded features are scaled along the channel dimension according only to the previous output, without the participation of the current input.

3.3 Guided Feature Selection

To generate related information for each branch, our approach exploits the small motion between neighboring views by incorporating a guidance module that adaptively distills features for the current pose inference:

$\tilde{X}_t^m = \mathcal{G}(X_t, O_{t-1}^m; \theta_{\mathcal{G}})$  (4)

Here, $\mathcal{G}$ is a function that maps the features $X_t$ to motion-sensitive tensors under the supervision of the previous output $O_{t-1}^m$, and $\theta_{\mathcal{G}}$ denotes the weights of $\mathcal{G}$. We introduce two strategies for context-aware guidance considering the connections in the temporal and spatial domains: a SENet-like guidance and a correlation-based guidance.

SENet-like guidance. This guidance is inspired by SENet [15], in which a Squeeze-and-Excitation block recalibrates channel-wise feature responses from the features themselves. Rather than self-adjusting the weights, we focus on the relationship in the temporal domain. The previous output is first passed through a global average pooling (GAP) layer to yield a channel-wise descriptor. Two fully-connected (FC) layers follow to learn the inner channel dependence and produce scale values. Then, a sigmoid layer normalizes these values to [0, 1]. The final output is obtained by rescaling the features along the channel dimension. A diagram of the channel-wise guidance is shown in Fig. 3. The process is formulated as

$s = \sigma\big(W_2\,\delta(W_1\,\mathrm{GAP}(O_{t-1}^m) + b_1) + b_2\big)$  (5)
$\tilde{X}_t^{m,c} = s_c \cdot X_t^{c}$  (6)

where $W_1$, $W_2$ denote the two FC layers with $b_1$, $b_2$ as their biases, and $\sigma$, $\delta$ indicate the sigmoid and ReLU [27] activation functions. $s$ is the obtained vector of scale values, and $X_t^{c}$, $\tilde{X}_t^{m,c}$ are the feature-maps of $X_t$ and $\tilde{X}_t^{m}$ at channel $c$.

Note that SENet aims to exploit the contextual interdependencies of $X_t$ itself, while our algorithm focuses on temporal consistency by filtering $X_t$ according to the proposal of the previous output $O_{t-1}^m$.
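A minimal sketch of this SENet-like guidance is given below; the class name and the channel-reduction ratio are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEGuidance(nn.Module):
    """Eqs. (5)-(6): squeeze the previous output O_{t-1}^m with global average
    pooling, predict per-channel scales with two FC layers, and rescale X_t."""
    def __init__(self, channels, reduction=16):        # reduction ratio is an assumption
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x, prev_out):
        s = F.adaptive_avg_pool2d(prev_out, 1).flatten(1)    # GAP -> (B, C)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))     # Eq. (5)
        return x * s.view(s.size(0), -1, 1, 1)               # Eq. (6)
```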

(a) Point-wise correlation
(b) Channel-wise correlation
Figure 4: An illustration of two types of correlation-based guidance. Point-wise correlation (a) takes the values at the same position across all 2D feature-maps as a unit, while channel-wise correlation (b) computes a weight for the feature-map of each channel.

Correlation-based guidance. The SENet-like subnetwork produces relatively coarse scale factors in the temporal domain without considering the spatial relationship between $X_t$ and $O_{t-1}^m$. Since detailed information is not kept, the performance is not fully satisfactory. We therefore explore guidance at a finer level from the aspect of correlation between $X_t$ and $O_{t-1}^m$, as depicted in Fig. 4.

Since $X_t$ and $O_{t-1}^m$ are both 3D tensors of stacked 2D feature-maps, there are two approaches to calculating the cross-correlation: taking the values at each pixel position along the channel dimension as a column vector (Fig. 4(a)), or taking each feature-map as a unit (Fig. 4(b)). In the first form, we compute the cosine similarity of each pair of corresponding columns and then normalize the weights. We tried both sigmoid and softmax for normalization; the sigmoid function gives better performance in our experiments. The process can be described as

$s_{i,j} = \sigma\!\left(\dfrac{\langle X_t^{i,j},\, O_{t-1}^{m,i,j}\rangle}{\|X_t^{i,j}\|\,\|O_{t-1}^{m,i,j}\|}\right)$  (7)
$\tilde{X}_t^{m,i,j} = s_{i,j} \cdot X_t^{i,j}$  (8)

where $X_t^{i,j}$ and $O_{t-1}^{m,i,j}$ are vectors of size $C$ taken at the point of $X_t$ and $O_{t-1}^m$ indexed by $(i, j)$, and $s_{i,j}$ is the scale for the re-weighted tensor at $(i, j)$. Intuitively, if the current vectorial feature $X_t^{i,j}$ is close to the previous output $O_{t-1}^{m,i,j}$, it should be assigned a larger weight, otherwise a smaller one.
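A minimal sketch of the point-wise correlation guidance (function name and the numerical-stability epsilon are ours):

```python
import torch

def pointwise_guidance(x, prev_out, eps=1e-8):
    """Eqs. (7)-(8): cosine similarity along the channel dimension at every
    spatial location, squashed by a sigmoid and used to rescale X_t."""
    num = (x * prev_out).sum(dim=1, keepdim=True)                                # (B, 1, H, W)
    den = x.norm(dim=1, keepdim=True) * prev_out.norm(dim=1, keepdim=True) + eps
    s = torch.sigmoid(num / den)                                                 # Eq. (7)
    return x * s                                                                 # Eq. (8)
```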

In the second type, the 2D feature-map of each channel is flattened into a vector, on which we compute the correlation as

$s_c = \sigma\!\left(\dfrac{\langle \phi(X_t^{c}),\, \phi(O_{t-1}^{m,c})\rangle}{\|\phi(X_t^{c})\|\,\|\phi(O_{t-1}^{m,c})\|}\right)$  (9)

where $\phi(\cdot)$ reshapes a 2D feature-map into a vector for correlation computation. $X_t$ is then re-weighted adaptively for each branch according to the correlation parameters, as in (6).
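A corresponding sketch of the channel-wise correlation guidance (function name ours):

```python
import torch

def channelwise_guidance(x, prev_out, eps=1e-8):
    """Eq. (9): flatten each channel's 2D map, compute one cosine similarity
    per channel, and rescale the corresponding feature-map as in Eq. (6)."""
    b, c, _, _ = x.shape
    xf, pf = x.view(b, c, -1), prev_out.view(b, c, -1)   # phi(.): 2D map -> vector
    num = (xf * pf).sum(dim=2)
    den = xf.norm(dim=2) * pf.norm(dim=2) + eps
    s = torch.sigmoid(num / den)                         # one scale per channel
    return x * s.view(b, c, 1, 1)
```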

The context-aware motion guidance scheme brings better performance to our model at a limited time cost. We analyze the resulting gains in § 4.

3.4 Loss Function

Our architecture learns rotation and translation in two individual recurrent branches separately, hence the final loss consists of both rotational and translational errors. We define the loss on the absolute pose error of each view using the $\ell_2$ norm. The loss functions are formulated as

$\mathcal{L}_{rot}^{k} = \|\hat{\boldsymbol{\varphi}}_k - \boldsymbol{\varphi}_k\|_2$  (10)
$\mathcal{L}_{trans}^{k} = \|\hat{\mathbf{p}}_k - \mathbf{p}_k\|_2$  (11)
$\mathcal{L} = \dfrac{1}{t}\sum_{k=1}^{t}\big(\mathcal{L}_{trans}^{k} + \beta\,\mathcal{L}_{rot}^{k}\big)$  (12)

Here $\hat{\mathbf{p}}_k$, $\mathbf{p}_k$ and $\hat{\boldsymbol{\varphi}}_k$, $\boldsymbol{\varphi}_k$ represent the predicted and ground-truth translation and rotation (Euler angles) of the $k$-th view in world coordinates. $\mathcal{L}_{rot}^{k}$ and $\mathcal{L}_{trans}^{k}$ denote the rotational and translational error of the $k$-th frame, respectively. The final loss $\mathcal{L}$ sums the averaged loss over each time step, where $t$ is the current frame index in a sequence. $\beta$ is a fixed parameter balancing the rotational and translational errors; it is set to 100 and 10 in the experiments on the KITTI and ICL_NUIM datasets, respectively.
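A minimal sketch of this loss (function name ours; the pose tensors are assumed to be of shape (T, 3) for a sequence of T views):

```python
import torch

def pose_loss(pred_t, gt_t, pred_r, gt_r, beta=100.0):
    """Eqs. (10)-(12): L2 error on the absolute translation and rotation
    (Euler angles) of every view, averaged over the sequence; beta balances
    the two terms (100 on KITTI, 10 on ICL_NUIM)."""
    l_trans = (pred_t - gt_t).norm(dim=-1)   # translational error per view, Eq. (11)
    l_rot = (pred_r - gt_r).norm(dim=-1)     # rotational error per view, Eq. (10)
    return (l_trans + beta * l_rot).mean()   # averaged total loss, Eq. (12)
```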

4 Experiments

We first discuss the implementation details of our network in § 4.1 and introduce the datasets in § 4.2. In § 4.3, we compare the effectiveness of several variants of our network: RNN, the regular recurrent network for joint motion prediction; SRNN, the dual-branch recurrent model; SRNN_se, the dual-branch network with the SENet-like contextual mechanism; SRNN_channel, the dual-branch network with channel-wise correlation; and SRNN_point, the dual-branch network with point-wise correlation. Next, we compare our proposal with current methods on the KITTI dataset [9] in § 4.4 and the ICL_NUIM dataset [11] in § 4.5.

4.1 Implementation

Training. Our model takes monocular image sequences as input. The image size can be arbitrary because the model does not need to compress the features into vectors. Considering the time cost, we use 7 consecutive frames to construct a sequence, yet our model can accept inputs of dynamic length.
Network. Weights of the recurrent units are initialized with MSRA [13], while the encoder is based on the pre-trained FlowNet [6] to speed up convergence. Our networks are implemented in PyTorch [29] and run on an NVIDIA 1080Ti GPU. We employ the poly learning rate policy [2], and Adam [20] is used as the optimizer. The networks are trained with a batch size of 4 and weight decay for 150,000 iterations in total.
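For reference, a sketch of the poly learning rate schedule is given below; the base learning rate, power and weight decay shown here are illustrative placeholders, not the exact values used in our experiments.

```python
import torch

def poly_lr(base_lr, step, max_iter=150_000, power=0.9):
    """Poly learning-rate policy [2]; base_lr and power are placeholders."""
    return base_lr * (1.0 - step / max_iter) ** power

# hypothetical usage (model, base_lr and wd are placeholders):
# optimizer = torch.optim.Adam(model.parameters(), lr=base_lr, weight_decay=wd)
# for step in range(150_000):
#     for group in optimizer.param_groups:
#         group["lr"] = poly_lr(base_lr, step)
#     ...  # forward pass, loss, backward pass, optimizer.step()
```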

(a) Rotational error of each view
(b) Translational error of each view
Figure 5: Rotational (a) and translational (b) errors along each view for the joint motion prediction network and for the vanilla and guided models with separate pose recovery.

4.2 Dataset

KITTI. The public KITTI dataset is used by both model-based [26, 10] and learning-based [43, 35, 36] methods. The dataset consists of 22 sequences captured in urban and highway environments at a relatively low sampling frequency (10 fps) and at speeds of up to 90 km/h. Seq 00-10 provide raw data with ground-truth represented as 6-DoF motion parameters, covering complicated urban environments, while Seq 11-21 provide only raw sensor data. In our experiments, the left RGB images are resized to 1280 x 384 for training and testing. We adopt the same train/test split as DeepVO [35], using Seq 00, 02, 08 and 09 for training and Seq 03, 04, 05, 06, 07 and 10 for quantitative evaluation.
ICL_NUIM. The ICL_NUIM dataset [11] consists of 8 sequences of RGB-D images captured in synthetically generated living room and office scenes. Images in this dataset meet the Manhattan World assumption, and the dataset is widely used for VO/SLAM [19, 44, 18] and 3D reconstruction [5]. ICL_NUIM is synthesized with a full 6-DoF handheld camera trajectory and is thus challenging for monocular VO methods due to the complicated motion patterns. Our model is trained on kt0, kt3 and evaluated on kt1, kt2 of the living room and office scenes, respectively. Only RGB images of size 640 x 480 are used in our experiments.

(a) SRNN
(b) SRNN_se
(c) SRNN_point
(d) SRNN_channel
Figure 6: Qualitative comparison of the trajectories of the vanilla and three guided models for separate motion estimation on the KITTI Seq 10.

4.3 Evaluation of Context-Aware Mechanism

We first evaluate the effectiveness of the context-aware strategies by analyzing the rotational (Fig. 5(a)) and translational (Fig. 5(b)) errors along each view of the sequence on the KITTI test sequences. We adopt the orientation and position drift errors divided by the traveled length as the metric. In Fig. 5, we observe that the results of the vanilla network are remarkably improved by the context-aware guidance, and meanwhile, networks with different contextual modules behave diversely.

Among the models with guidance, SRNN_se extends the self-recalibration of SENet [15] by introducing the temporal relationship, leading to improvements in decoupled motion learning. Compared with SRNN_se, the superior performance of SRNN_point and SRNN_channel suggests that correlation may be more effective for feature filtering in the VO task. The results of SRNN_channel are slightly better than those of SRNN_point; we attribute this to the fact that preserving the interdependence within the feature-map of each channel may be a better way to provide guidance.

Sequence (two values per sequence: t_rel, r_rel)
Method 03 04 05 06 07 10
VISO2-S [10] 3.21 3.25 2.12 2.12 1.53 1.60 1.48 1.58 1.85 1.91 1.17 1.30
UnDeepVO [22] 5.00 6.17 5.49 2.13 3.40 1.50 6.20 1.98 3.15 2.48 10.63 4.65
Depth-VO-Feat [40] 15.58 10.69 2.92 2.06 4.94 2.35 5.80 2.07 6.48 3.60 12.45 3.46
VISO2-M [10] 8.47 8.82 4.69 4.49 19.22 17.58 7.30 6.14 23.61 19.11 41.56 32.99
SfmLearner [43] 10.78 3.92 4.49 5.24 18.67 4.10 25.88 4.80 21.33 6.65 14.33 3.30
DeepVO [35] 8.49 6.89 7.19 6.97 3.61 5.82 3.91 4.60 8.11 8.83
ESP-VO [36] 6.72 6.46 6.33 6.08 3.35 4.93 7.24 7.29 3.52 5.02 9.77 10.2
Ours (RNN) 6.36 3.62 5.95 2.36 5.85 2.55 14.58 4.98 5.88 2.64 7.44 3.19
Ours (SRNN) 5.85 3.77 4.22 2.79 5.33 2.36 13.60 4.21 4.62 2.48 7.08 2.79
Ours (SRNN_se) 5.45 3.33 4.11 1.70 4.74 2.21 12.44 4.45 4.23 2.67 6.79 2.91
Ours (SRNN_point) 5.64 3.98 1.79 3.71 1.70 9.16 3.27 3.57 2.53 6.77 2.82
Ours (SRNN_channel) 3.32 3.27 8.50
  • t_rel: average translational RMSE drift (%) on lengths from 100, 200 to 800 m.

  • r_rel: average rotational RMSE drift (°/100 m) on lengths from 100, 200 to 800 m.

Table 1: Results on the KITTI dataset. DeepVO [35], ESP-VO [36] and our models are trained on Seq 00, 02, 08 and 09. SfmLearner [43], UnDeepVO [22] and Depth-VO-Feat [40] are trained on Seq 00-08 in an unsupervised manner. The best results of monocular VO methods are highlighted, without considering stereo methods including VISO2-S, UnDeepVO and Depth-VO-Feat.

4.4 Results on KITTI Dataset

We compare our framework with model- and learning-based monocular VO methods on the KITTI test sequences. The error metrics, i.e., averaged Root Mean Square Errors (RMSEs) of the translational and rotational errors, are adopted for all the subsequences of lengths ranging from 100, 200 to 800 meters.
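A simplified sketch of this metric, in the spirit of the KITTI odometry development kit, is given below; the function names, the 10-frame start-step and the use of 4x4 absolute pose matrices are our assumptions.

```python
import numpy as np

def trajectory_distances(poses):
    # cumulative path length from 4x4 pose matrices
    dists = [0.0]
    for i in range(1, len(poses)):
        dists.append(dists[-1] + np.linalg.norm(poses[i][:3, 3] - poses[i - 1][:3, 3]))
    return dists

def rotation_angle(R):
    # rotation angle (rad) of a 3x3 rotation matrix
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def kitti_drift(gt, pred, lengths=(100, 200, 300, 400, 500, 600, 700, 800)):
    """For every start frame and segment length, compare the relative motion of
    ground truth and prediction, and accumulate the error normalized by the
    segment length. Returns (t_rel in %, r_rel in deg per 100 m)."""
    dists = trajectory_distances(gt)
    t_errs, r_errs = [], []
    for first in range(0, len(gt), 10):                       # start-frame step of 10
        for length in lengths:
            last = next((i for i in range(first, len(gt))
                         if dists[i] > dists[first] + length), None)
            if last is None:
                continue
            gt_rel = np.linalg.inv(gt[first]) @ gt[last]
            pr_rel = np.linalg.inv(pred[first]) @ pred[last]
            err = np.linalg.inv(gt_rel) @ pr_rel              # residual relative motion
            t_errs.append(np.linalg.norm(err[:3, 3]) / length)
            r_errs.append(rotation_angle(err[:3, :3]) / length)
    return 100.0 * np.mean(t_errs), np.degrees(np.mean(r_errs)) * 100.0
```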

Most monocular VO methods cannot recover absolute scale, and their results require post alignment with the ground-truth. Therefore, the open-source VO library VISO2 [10], which estimates scale from the camera height, is adopted as the baseline. The results of both the monocular (VISO2-M) and stereo (VISO2-S) versions are provided. Table 1 indicates that our models, even the vanilla version, outperform VISO2-M in terms of both rotation and translation estimation by a large margin. Note that the scale is learned during the end-to-end training without any post alignment or reliance on prior knowledge such as the camera height used by VISO2-M. VISO2-S gains superior performance due to the advantages of stereo image pairs in scale recovery and data association. Nevertheless, our SRNN_channel achieves performance very close to that of VISO2-S given only monocular images. Qualitative comparisons are shown in Fig. 7. Our approach outperforms VISO2-M especially in handling complicated motions.

Besides, we compare our approach against the current learning-based supervised methods DeepVO [35] and ESP-VO [36], both of which use a single branch with a standard LSTM for coupled motion estimation. Table 1 illustrates the effectiveness of our vanilla model, although the improvement of this version is slight. We assume that extracting motion-sensitive features directly from the encoded feature-maps may limit the accuracy. Fortunately, this deficiency is compensated by the context-aware feature selection mechanism: our models with guidance outperform DeepVO and ESP-VO consistently. Meanwhile, our method achieves superior performance to the unsupervised monocular approach SfmLearner [43] and yields results competitive with stereo methods such as UnDeepVO [22] and Depth-VO-Feat [40].

(a) Seq 03
(b) Seq 05
(c) Seq 07
(d) Seq 10
Figure 7: The trajectories of ground-truth, VISO2-M, VISO2-S and our model on Seq 03, 05, 07 and 10 of the KITTI benchmark.
Sequence | Ours (SRNN_channel) | DRFE [19] | DEMO [42] | DVO [18] | MWO [44]
lr kt1 | 0.010 | 0.021 | 0.020 | 0.023 | 0.100
lr kt2 | 0.019 | 0.031 | 0.090 | 0.084 | 0.052
of kt1 | 0.015 | 0.014 | 0.054 | 0.045 | 0.263
of kt2 | 0.019 | 0.021 | 0.079 | 0.065 | 0.047
Table 2: Evaluation on the ICL_NUIM dataset (RMSE of the relative pose error). Results of DRFE, DEMO, DVO and MWO are taken directly from [19].

We further test our model intensively in various scenes with complicated motions on Seq 11-21; the resulting trajectories are illustrated in Fig. 8. In this case, our network is trained on all the training sequences (Seq 00-10), providing more data to avoid overfitting and to maximize the generalization ability. We use the accurate VISO2-S [10] as a reference due to the lack of ground-truth. Our model again obtains outstanding results, revealing that it generalizes well to unknown scenarios.

4.5 Results on ICL_NUIM Dataset

We further compare our model with other methods on the challenging ICL_NUIM dataset [11]. We test our networks on the living room and office scenes individually. The baseline methods include algorithms for both joint (DEMO [42], DVO [18]) and separate (DRFE [19], MWO [44]) pose recovery. The error metric for quantitative analysis is the root mean square error (RMSE) of the relative pose error (RPE), as used in [19]. Table 2 indicates that our best performing SRNN_channel yields lower errors in relative pose recovery on three of the four sequences. Note that all four baselines use depth information, while only monocular RGB images are used by our networks. These remarkable results suggest the potential of our proposal for dealing with more complicated motion patterns generated by handheld cameras or mobile robots.
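A simplified sketch of this metric is given below; the function name is ours, and poses are assumed to be 4x4 camera-to-world matrices.

```python
import numpy as np

def rpe_rmse(gt, pred):
    """RMSE of the translational relative pose error between consecutive frames.
    gt and pred are sequences of 4x4 camera-to-world matrices; this is a
    simplified sketch of the metric, not the evaluation code of [19]."""
    errs = []
    for i in range(len(gt) - 1):
        gt_rel = np.linalg.inv(gt[i]) @ gt[i + 1]
        pr_rel = np.linalg.inv(pred[i]) @ pred[i + 1]
        err = np.linalg.inv(gt_rel) @ pr_rel          # residual relative motion
        errs.append(np.linalg.norm(err[:3, 3]))
    return float(np.sqrt(np.mean(np.square(errs))))
```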

(a) Seq 11
(b) Seq 12
(c) Seq 13
(d) Seq 14
(e) Seq 15
(f) Seq 16
(g) Seq 18
(h) Seq 19
(i) Seq 20
Figure 8: The predicted trajectories on KITTI sequences 11-20. Results of VISO2-S are used as reference since ground-truth poses of these sequences are unavailable.

5 Conclusions

In this paper, we propose a novel dual-branch recurrent neural network for decoupled camera pose estimation. The architecture is able to estimate different motion patterns via specific features. To enhance the performance, we incorporate a context-aware feature selection mechanism in both the spatial and temporal domains, allowing the network to suppress useless information adaptively. We evaluate our techniques on the prevalent KITTI and ICL_NUIM datasets, and the results demonstrate that our method outperforms current state-of-the-art learning- and model-based monocular VO approaches for both joint and separate motion estimation. In the future, we plan to visualize the features used for specific motions. We will also explore the relationship between visual cues and more specific motion patterns, such as rotation in different directions.

References

  • [1] Bazin, J.C., Demonceaux, C., Vasseur, P., Kweon, I.: Motion Estimation by Decoupling Rotation and Translation in Catadioptric Vision. CVIU (2010)
  • [2] Chen, L., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. TPAMI (2018)
  • [3] Choi, J., Chang, H.J., Fischer, T., Yun, S., Lee, K., Jeong, J., Demiris, Y., Choi, J.Y.: Context-aware Deep Feature Compression for High-speed Visual Tracking. In: CVPR (2018)
  • [4] Clark, R., Wang, S., Wen, H., Markham, A., Trigoni, N.: VINet: Visual-inertial Odometry as A Sequence-to-Sequence Learning Problem. In: AAAI (2017)
  • [5] Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., Theobalt, C.: Bundlefusion: Real-time Globally Consistent 3D Reconstruction Using On-the-fly Surface Reintegration. TOG (2017)
  • [6] Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., van der Smagt, P., Cremers, D., Brox, T.: Flownet: Learning Optical Flow with Convolutional Networks. In: ICCV (2015)
  • [7] Engel, J., Koltun, V., Cremers, D.: Direct Sparse Odometry. TPAMI (2017)
  • [8] Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: Large-scale Direct Monocular SLAM. In: ECCV (2014)
  • [9] Geiger, A., Lenz, P., Urtasun, R.: Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In: CVPR (2012)
  • [10] Geiger, A., Ziegler, J., Stiller, C.: Stereoscan: Dense 3D Reconstruction in Real-time. In: IV (2011)
  • [11] Handa, A., Whelan, T., McDonald, J., Davison, A.J.: A Benchmark for RGB-D Visual Odometry, 3D Reconstruction and SLAM. In: ICRA (2014)
  • [12] He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: ICCV (2017)
  • [13] He, K., Zhang, X., Ren, S., Sun, J.: Delving Deep into Rectifiers: Surpassing Human-level Performance on Imagenet Classification. In: ICCV (2015)
  • [14] Hochreiter, S., Schmidhuber, J.: Long Short-term Memory. Neural Computation (1997)
  • [15] Hu, J., Shen, L., Sun, G.: Squeeze-and-Excitation Networks. In: CVPR (2018)
  • [16] Jo, Y., Jang, J., Paik, J.: Camera Orientation Estimation Using Motion Based Vanishing Point Detection for Automatic Driving Assistance System. In: ICCE (2018)
  • [17] Kaess, M., Ni, K., Dellaert, F.: Flow Separation for Fast and Robust Stereo Odometry. In: ICRA (2009)
  • [18] Kerl, C., Sturm, J., Cremers, D.: Robust Odometry Estimation for RGB-D Cameras. In: ICRA (2013)
  • [19] Kim, P., Coltin, B., Kim, H.J.: Visual Odometry with Drift-free Rotation Estimation Using Indoor Scene Regularities. In: BMVC (2017)
  • [20] Kingma, D.P., Ba, J.: Adam: A Method for Stochastic Optimization. In: ICLR (2015)
  • [21] Lee, J.K., Yoon, K.J., et al.: Real-time Joint Estimation of Camera Orientation and Vanishing Points. In: CVPR (2015)
  • [22] Li, R., Wang, S., Long, Z., Gu, D.: UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning. In: ICRA (2018)
  • [23] Liu, N., Han, J.: PiCANet: Learning Pixel-wise Contextual Attention in ConvNets and Its Application in Saliency Detection. In: CVPR (2018)
  • [24] Lowe, D.G.: Distinctive Image Features from Scale-invariant Keypoints. IJCV (2004)
  • [25] Mac Aodha, O., Perona, P., et al.: Context Embedding Networks. In: CVPR (2018)
  • [26] Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: An Open-source SLAM System for Monocular, Stereo, and RGB-D Cameras. T-RO (2017)
  • [27] Nair, V., Hinton, G.E.: Rectified Linear Units Improve Restricted Boltzmann Machines. In: ICML (2010)
  • [28] Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: Dense Tracking and Mapping in Real-time. In: ICCV (2011)
  • [29] Paszke, A., Gross, S., Chintala, S., Chanan, G.: Pytorch. https://github.com/pytorch/pytorch (2017)
  • [30] Paz, L.M., Piniés, P., Tardós, J.D., Neira, J.: Large-scale 6-DOF SLAM with Stereo-in-Hand. T-RO (2008)
  • [31] Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: An Efficient Alternative to SIFT or SURF. In: ICCV (2011)
  • [32] Straub, J., Bhandari, N., Leonard, J.J., Fisher, J.W.: Real-time Manhattan World Rotation Estimation in 3D. In: IROS (2015)
  • [33] Tardif, J.P., Pavlidis, Y., Daniilidis, K.: Monocular Visual Odometry in Urban Environments Using an Omnidirectional Camera. In: IROS (2008)
  • [34] Ummenhofer, B., Zhou, H., Uhrig, J., Mayer, N., Ilg, E., Dosovitskiy, A., Brox, T.: DeMoN: Depth and Motion Network for Learning Monocular Stereo. In: CVPR (2017)
  • [35] Wang, S., Clark, R., Wen, H., Trigoni, N.: DeepVO: Towards End-to-end Visual Odometry with Deep Recurrent Convolutional Neural Networks. In: ICRA (2017)
  • [36] Wang, S., Clark, R., Wen, H., Trigoni, N.: End-to-end, Sequence-to-sequence Probabilistic Visual Odometry through Deep Neural Networks. IJRR (2017)
  • [37] Xingjian, S., Chen, Z., Wang, H., Yeung, D.Y., Wong, W.K., Woo, W.c.: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In: NIPS (2015)
  • [38] Yin, Z., Shi, J.: GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose. In: CVPR (2018)
  • [39] Zamir, A.R., Sax, A., Shen, W., Guibas, L., Malik, J., Savarese, S.: Taskonomy: Disentangling Task Transfer Learning. In: CVPR (2018)
  • [40] Zhan, H., Garg, R., Saroj Weerasekera, C., Li, K., Agarwal, H., Reid, I.: Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. In: CVPR (2018)
  • [41] Zhang, H., Dana, K., Shi, J., Zhang, Z., Wang, X., Tyagi, A., Agrawal, A.: Context Encoding for Semantic Segmentation. In: CVPR (2018)
  • [42] Zhang, J., Kaess, M., Singh, S.: Real-time Depth Enhanced Monocular Odometry. In: IROS (2014)
  • [43] Zhou, T., Brown, M., Snavely, N., Lowe, D.G.: Unsupervised Learning of Depth and Ego-motion from Video. In: CVPR (2017)
  • [44] Zhou, Y., Kneip, L., Rodriguez, C., Li, H.: Divide and Conquer: Efficient Density-based Tracking of 3D Sensors in Manhattan Worlds. In: ACCV (2016)