Gyroscope-aided Relative Pose Estimation for Rolling Shutter Cameras


Chang-Ryeol Lee
   Ju Hong Yoon
   Min-Gyu Park
   Kuk-Jin Yoon

The rolling shutter camera has received great attention due to its low-cost imaging capability. However, the estimation of the relative pose between rolling shutter cameras still remains a difficult problem owing to their line-by-line image capturing characteristics. To alleviate this problem, we exploit gyroscope measurements, i.e. the angular velocity, along with image measurements to compute the relative pose between rolling shutter cameras. The gyroscope measurements provide information about the instantaneous motion that causes the rolling shutter distortion. With the gyroscope measurements in hand, we simplify the relative pose estimation problem and find a minimal solution based on a Gröbner basis polynomial solver. The proposed method requires only five points to compute the relative pose between rolling shutter cameras, whereas previous methods require 20 or 44 corresponding points for the linear and uniform rolling shutter geometry models, respectively. Experimental results on synthetic and real data verify the superiority of the proposed method over existing relative pose estimation methods.

1 Introduction

Rolling shutter cameras have become popular due to their low-cost imaging capabilities. However, their line-by-line image capturing nature causes undesirable artifacts, as shown in Fig. 1. These artifacts can be critical to geometric vision applications such as structure-from-motion (SfM), simultaneous localization and mapping (SLAM), and dense 3D reconstruction [4]. Therefore, numerous studies have tackled this problem over the last decade [14, 13, 30, 11, 18, 2, 4, 6].

(a) The tower leans to the left.
(b) The tower bends to the right.
Figure 1: Examples of distorted images from a rolling shutter camera. The images were captured by a hand-held rolling shutter camera undergoing arbitrary motion.

Most existing methods were designed for video applications, and the majority of them take a temporal interpolation approach to estimate camera poses [14, 13, 11, 18]. They handle the distortion of rolling shutter cameras through nonlinear optimization over a large number of variables, using images from high frame-rate videos. Recently, relative pose estimation using unordered still images captured by rolling shutter cameras has also been studied [4, 6]. Albl et al. [2] proposed an absolute pose estimation algorithm for rolling shutter cameras given six 2D-3D correspondences. They further analyzed the degeneracy in rolling shutter structure-from-motion, which occurs when the readout directions are close to parallel, and alleviated the degenerate situation by differentiating readout directions [4]. In addition, Dai et al. [6] proposed linear and nonlinear algorithms for the linear and uniform rolling shutter geometry models to solve the rolling shutter relative pose (RSRP) problem. The linear algorithms require at least 20 and 44 corresponding points for the linear and uniform models, respectively. However, these algorithms are sensitive to outliers and time-consuming due to the examination of a huge number of hypotheses. Although they also proposed a nonlinear solver that uses 11 and 17 points for the linear and uniform models, respectively, this approach is also computationally demanding because of the nonlinear least-squares optimization performed at each RANSAC iteration. Moreover, the cost function for the nonlinear optimization has a large number of unknown parameters, which often leads to a local minimum rather than the correct (global minimum) solution. One practical way to alleviate this problem is to lower the degrees of freedom (DOF) of the problem. In this sense, several studies [8, 23, 3, 29, 17] adopted inertial measurements to reduce the DOF.
These studies utilize the orientation of the gravitational acceleration, also called the vertical direction or gravity direction. Since the vertical direction is directly related to the inclination of the sensors, the methods in [8, 23] solved the relative pose problem for global shutter cameras with two known angles. Albl et al. [3] also exploited the vertical direction for the absolute pose estimation of rolling shutter cameras. However, the vertical direction from an IMU is likely to be corrupted by several factors such as sensor noise, as they pointed out in their paper, and therefore additional refinement is required in general. On the other hand, some researchers [29, 17] utilized gyroscope measurements to rectify distorted images for video stabilization. They also proposed self-calibration methods that estimate the gyroscope biases and the time offset between the gyroscope and the camera from an image sequence and the corresponding gyroscope measurements.

In this paper, we take one step further with the aid of 3-DOF gyroscope measurements, i.e. the angular velocity, to effectively solve the rolling shutter relative pose (RSRP) problem. Angular velocity measurements can be exploited in general situations, e.g. a hand-held rolling shutter camera undergoing arbitrary motion. Moreover, they directly provide information about the instantaneous motion that causes rolling shutter distortion. From this perspective, it would also be possible to use gravity measurements along with the angular velocity measurements, since most off-the-shelf IMUs provide both. However, we do not use gravity measurements because they tend to be very noisy, as pointed out in [3], and we solve the RSRP problem with only the angular velocity measurements. In addition, unlike [6], our work is based on the angular model of rolling shutter camera geometry, which assumes that the instantaneous linear velocities are zero, because hand-held camera motion does not in general contain linear velocity high enough to cause severe distortion. This not only simplifies the problem further but also improves the numerical stability of translation estimation. We then find the minimal solution of the simplified problem using a Gröbner basis polynomial equation solver. Note that the proposed algorithm requires only five corresponding points for the RSRP problem, just as the five-point algorithm [27] does for global shutter cameras. In addition, we propose a nonlinear refinement scheme for the estimates obtained from the proposed closed-form method. Experiments on synthetic and real data show the superiority of the proposed method, and we further analyze the performance of the proposed algorithms under various instantaneous motions and noise levels.
The real and synthetic data experiments with non-zero linear velocities substantiate that the proposed methods remain effective even under non-zero linear velocities, although our rolling shutter geometry does not model linear velocity.

2 Related Work

Rolling shutter cameras were first addressed in the Perspective-n-Point (PnP) problem, which estimates the camera pose given 2D projections corresponding to 3D points. Ait-Aider et al. [1] proposed to estimate the pose and speed of fast-moving objects from a single image with given 2D-3D matches, using nonlinear and linear models for non-planar and planar objects, respectively. Magerand et al. [25] extended this study by suggesting a polynomial uniform rolling shutter geometry model and solved the problem through constrained global optimization. Albl et al. [2] proposed a new method to estimate the camera pose from 6 points based on a doubly linearized rolling shutter camera model.

Besides, SfM and SLAM with a monocular rolling shutter camera have been intensively studied, taking the rolling shutter effects into account. Klein and Murray [19] used a constant velocity model in their SLAM framework to predict and correct the rolling shutter distortion in the next frame. Hedborg et al. [14, 13] applied a rolling shutter camera projection model to the nonlinear optimization step of SfM, i.e. bundle adjustment. Their key idea is to exploit the temporal continuity of camera motion in video input to deal with the rolling shutter distortion. Saurer et al. [30] considered the rolling shutter distortion in dense 3D reconstruction, and they also proposed a minimal solution for the absolute pose estimation problem of rolling shutter cameras [31]. Albl et al. [4] analyzed the degeneracy of rolling shutter SfM and suggested how to avoid it when shooting videos. Recently, Ito and Okatani [15] derived a general expression of the degeneracy of rolling shutter SfM through a self-calibration-based approach. Zhuang et al. [32] proposed a constant acceleration model for relative pose estimation and image rectification from two consecutive images.

On the other hand, methods that utilize an inertial measurement unit (IMU) to deal with the rolling shutter distortion in visual odometry (VO) and SLAM have also been studied. Jia and Evans [16] proposed a method to estimate the camera orientation using gyroscope measurements and to correct the rolling shutter distortion of the image. Guo et al. [11] applied a rolling shutter camera projection model to a visual-inertial odometry framework that uses IMUs and cameras to estimate egomotion. They estimated the readout time of the rolling shutter camera as well as the time delay between the IMU and the camera. Albl et al. [3] proposed a method to improve the speed and accuracy of the method in [2] using gravity measurements obtained from the IMU. In addition, IMUs have also been used to solve conventional relative or absolute pose estimation problems for global shutter cameras, e.g. estimating the relative pose with a partially known orientation angle between two cameras [8, 24] or with a known vertical direction [22, 23].

Relative pose estimation is a fundamental problem of great importance in SfM and SLAM. To the best of our knowledge, this is the first work to exploit gyroscope measurements for the relative pose estimation of rolling shutter cameras.

3 The Proposed Method

In this section, we define and formulate the gyroscope-aided RSRP problem, which simplifies the RSRP problem of 11 DOF described in Eqs. (1)-(3). With the gyroscope measurements, the 11 DOF of the RSRP problem are reduced to 5 DOF, because we utilize the known three-dimensional angular velocities from the gyroscopes at the two viewpoints. We then solve the problem using the Gröbner basis (GB) method in order to obtain a closed-form solution. To apply the GB method, we simplify the RSRP problem with a triple-linearized model for the rotation parameters. In the final step, the estimated parameters from the GB method are used as initial values for a nonlinear refinement.

3.1 Rolling Shutter Relative Pose Problem

In this paper, we use the angular model of rolling shutter geometry as in [29, 17], in which the rolling shutter distortion is represented by the instantaneous angular velocity rather than the instantaneous linear velocity, because the angular velocity dominates the image distortion of rolling shutter cameras in general. Indeed, distortion due to translation usually becomes severe only when the captured scene is close to the camera and the camera moves very fast. In such cases, it is difficult to obtain a sufficient number of corresponding points between the two images for relative pose estimation, and the linear velocity is very small in comparison with the fast rotational camera motion. Therefore, modeling the linear velocity is practically not effective, unlike in the absolute pose problems of [31, 2, 3], and this is validated in our experiments. Moreover, the linear velocity increases the DOF of the RSRP problem and makes the relative translation estimation much more difficult and unstable. For these reasons, we omit the linear velocity from our rolling shutter geometry.

Then, the geometric relation between a pair of corresponding points, $\mathbf{x}_i^{v_i}$ and $\mathbf{x}_j^{v_j}$, in normalized camera coordinates is formulated by using the epipolar constraint as

$$(\mathbf{x}_j^{v_j})^\top \mathbf{E}^{v_i v_j}\, \mathbf{x}_i^{v_i} = 0. \tag{1}$$
Here, $i$ and $j$ indicate a frame index or a camera index for the two cameras throughout the paper, and $v_i$ and $v_j$ are the row indices of the image points. The rolling shutter essential matrix is defined as

$$\mathbf{E}^{v_i v_j} = \big[\mathbf{t}^{v_i v_j}\big]_\times\, \mathbf{R}^{v_i v_j}. \tag{2}$$
Here, the relative translation vector $\mathbf{t}^{v_i v_j}$ and the rotation matrix $\mathbf{R}^{v_i v_j}$ between the two rows are defined as

$$\mathbf{R}^{v_i v_j} = \mathbf{R}(\gamma v_j \boldsymbol{\omega}_j)\, \mathbf{R}\, \mathbf{R}(\gamma v_i \boldsymbol{\omega}_i)^\top, \qquad \mathbf{t}^{v_i v_j} = \mathbf{R}(\gamma v_j \boldsymbol{\omega}_j)\, \mathbf{t}, \tag{3}$$

where $\mathbf{R}(\cdot)$ denotes a rotation matrix, $\boldsymbol{\omega}$ denotes an instantaneous angular velocity vector, $\mathbf{R}$ represents the relative rotation between the two cameras, and $\gamma$ is the readout time per row. The distortion from the instantaneous angular velocities is expressed by $\mathbf{R}(\gamma v_i \boldsymbol{\omega}_i)$ and $\mathbf{R}(\gamma v_j \boldsymbol{\omega}_j)$. Solving the rolling shutter relative pose problem is identical to computing $\mathbf{R}$ and $\mathbf{t}$ from the rolling shutter essential matrix in Eq. (2).
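To make the composition concrete, the rolling shutter essential matrix under this angular model can be assembled numerically. Below is a minimal NumPy sketch; the exact row-pose convention (each row's frame taken as the base camera frame rotated by the instantaneous rotation) is our assumption for illustration:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(w):
    """Exact rotation matrix from an axis-angle vector w (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rs_essential(R, t, w_i, w_j, v_i, v_j, gamma):
    """Rolling shutter essential matrix for rows v_i, v_j of cameras i, j.

    Angular model with zero linear velocity: each row's pose is the base
    camera pose perturbed by the instantaneous rotation R(gamma * v * w).
    """
    Ri = rodrigues(gamma * v_i * w_i)   # row rotation in camera i
    Rj = rodrigues(gamma * v_j * w_j)   # row rotation in camera j
    R_row = Rj @ R @ Ri.T               # row-to-row relative rotation
    t_row = Rj @ t                      # translation seen from row v_j
    return skew(t_row) @ R_row
```

A correspondence generated with the same row-pose convention satisfies the epipolar constraint up to floating-point error, which is a useful sanity check for any implementation.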

The difference between a rolling shutter camera and a global shutter camera is depicted in Fig. 2. The rolling shutter essential matrix has 11 DOF ($\boldsymbol{\omega}_i$: 3, $\boldsymbol{\omega}_j$: 3, $\mathbf{R}$: 3, $\mathbf{t}$: 2). Since the relative translation is up to scale, its DOF is two.

Figure 2: Two-view geometry of a rolling shutter camera. Camera motion is denoted by the relative rotation $\mathbf{R}$ and translation $\mathbf{t}$. $\mathbf{x}_i^{v_i}$ and $\mathbf{x}_j^{v_j}$ are corresponding points. The rolling shutter distortion is described by the angular velocity $\boldsymbol{\omega}$. $v$ represents a row index.

Here, we summarize our system setup and the RSRP problem with a gyroscope. We have a calibrated rolling shutter camera-gyroscope system; therefore, we know the intrinsics of the camera and the gyroscope, the extrinsics between the camera and the gyroscope, and the rolling shutter readout time. The inputs of the problem are the gyroscope measurements (i.e. angular velocities) and corresponding image points in the two images. The output is the relative pose between the two rolling shutter cameras, i.e. the rotation $\mathbf{R}$ and translation $\mathbf{t}$.

3.2 Angular RSRP with Gyroscope Measurements

The angular velocity measurements from the gyroscope attached to each camera reduce the DOF of the RSRP problem from 11 to five (i.e. $\mathbf{R}$: 3, $\mathbf{t}$: 2), because the angular velocities are given and the relative translation is up to scale. While this is simpler than the original RSRP problem with 11 DOF, we simplify the problem further in order to obtain feasible solutions for real-world applications.

We can express rotations as polynomials for the GB method with the Cayley transform [12, 2]. The transform has a scalar denominator, $1 + \mathbf{a}^\top\mathbf{a}$, determined by the input angle vector $\mathbf{a}$. It can easily be removed by multiplying the rolling shutter epipolar constraint in Eq. (1) by the denominators in order to obtain pure polynomials for the GB method. However, the result still has second-order monomials in each of the three rotation vectors. Thus, a system that uses the Cayley transform for the three rotation matrices in Eq. (3) has a huge number of monomials and a large elimination template in the GB method. Since such a system is very time-consuming and numerically unstable, we do not directly use the Cayley model.
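For reference, the Cayley transform and its problematic denominator can be written out directly; this closed form is standard, and the sketch below is only illustrative:

```python
import numpy as np

def cayley(a):
    """Cayley transform of a 3-vector a into a rotation matrix.

    R = ((1 - a.a) I + 2 a a^T + 2 [a]_x) / (1 + a.a).
    The scalar denominator (1 + a.a) is what makes the rolling shutter
    epipolar constraint rational; clearing it by multiplication restores
    a polynomial system but raises the monomial degree.
    """
    d = 1.0 + a @ a
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return ((1.0 - a @ a) * np.eye(3) + 2.0 * np.outer(a, a) + 2.0 * K) / d
```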

Instead, we express the rotation matrices from the instantaneous angular velocities with a linearized model. This is a sensible assumption because an image is captured in a very short time, commonly 30 to 50 ms, and the instantaneous rotation within a rolling shutter frame is typically very small. Under this assumption, the rotation matrices of the cameras can be expressed as

$$\mathbf{R}(\gamma v \boldsymbol{\omega}) \approx \mathbf{I} + \gamma v\, [\boldsymbol{\omega}]_\times. \tag{4}$$
We substitute $\mathbf{w} = \gamma \boldsymbol{\omega}$ for brevity and to reduce the number of multiplications in the estimation process. As a result, we have the following rotation representation:

$$\mathbf{R}_i^{v_i} \approx \mathbf{I} + v_i\, [\mathbf{w}_i]_\times, \tag{5}$$

where $[\cdot]_\times$ is the skew-symmetric matrix form of the cross product.
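The quality of this first-order approximation can be checked numerically against the exact rotation; the error is second order in the rotation angle (a quick sanity check, not part of the solver):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(w):
    """Exact rotation from an axis-angle vector w (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def linearized(w):
    """First-order (small-angle) rotation model: R(w) ~ I + [w]_x."""
    return np.eye(3) + skew(w)

# The linearization error is O(|w|^2): halving the angle roughly
# quarters the worst-case entry error.
def lin_err(w):
    return np.abs(rodrigues(w) - linearized(w)).max()
```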

Besides, we also simplify the relative rotation matrix $\mathbf{R}$; however, it is not a small rotation, unlike the rotations from the instantaneous motion. Therefore, we apply the linearization around an initial rotation $\mathbf{R}_0$ as in [3], and obtain

$$\mathbf{R} \approx (\mathbf{I} + [\mathbf{r}]_\times)\, \mathbf{R}_0. \tag{6}$$
Finally, we obtain the rolling shutter relative rotation matrix as below:

$$\mathbf{R}^{v_i v_j} \approx (\mathbf{I} + v_j [\mathbf{w}_j]_\times)\,(\mathbf{I} + [\mathbf{r}]_\times)\,\mathbf{R}_0\,(\mathbf{I} + v_i [\mathbf{w}_i]_\times)^\top. \tag{7}$$
In order to solve the problem, we construct six equations: five from the rolling shutter essential matrix with five corresponding points, and one constraint equation from the scale ambiguity:

$$\|\mathbf{t}\| = 1. \tag{8}$$
We reformulate this scale constraint in polynomial form for the GB method as

$$\mathbf{t}^\top \mathbf{t} - 1 = 0. \tag{9}$$
We obtain six equations in six unknowns and, from these equations, we have up to 20 possible solutions. Using the automatic generator from [21], we compute the elimination template, which describes the polynomials that have to be added to the initial equations to obtain a Gröbner basis and all polynomials required for constructing the action matrix. To obtain the elimination template, we multiply the six equations by monomials of the unknowns, generating 205 polynomials in 462 monomials. After removing unnecessary polynomials and monomials, we obtain a final (205 × 225)-dimensional elimination template. Note that the elimination template only needs to be computed once, in a pre-processing stage.

The elimination template, whose coefficients come from the input measurements, i.e. the corresponding points and gyroscope measurements, is converted to reduced row echelon form, and the 20 × 20 action matrix is constructed from the template. We then obtain solutions from the eigenvalues and eigenvectors of the action matrix, and choose one solution among the 20 candidates after discarding the imaginary (complex) solutions.
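The eigen-decomposition step can be illustrated on the simplest possible case: for a univariate polynomial, the action matrix of the quotient ring is just the companion matrix, and its eigenvalues are the roots. Filtering out imaginary solutions works the same way as in our 20 × 20 case (a toy example, not the actual solver):

```python
import numpy as np

# Action matrix for multiplication by x in R[x]/(x^3 - 1): the companion
# matrix of x^3 - 1. Its eigenvalues are the three cube roots of unity.
A = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

vals, vecs = np.linalg.eig(A)

# Keep only the (numerically) real eigenvalues, as in the solver's final step.
real_solutions = vals[np.abs(vals.imag) < 1e-8].real
```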

3.3 Initialization and Robust Estimation

The initial rotation can be obtained in several ways. First, commercial IMUs usually contain not only a gyroscope but also an accelerometer that provides gravity measurements. The vertical direction of the camera derived from the gravity measurements (i.e. the pitch and roll angles) can be used to compute a two-dimensional relative rotation between the two cameras. We can also predict the initial pose from a motion model in SLAM applications. In this paper, we obtain the initial rotation using the well-known five-point relative pose estimation algorithm [27].

To increase robustness against noise and outliers, we apply RANSAC [7] to our method. In each RANSAC iteration, we choose the solution with the largest number of inliers among the 20 candidate solutions. We use the rolling shutter epipolar constraint in Eq. (1) as the cost function of RANSAC. The threshold for determining inliers is set to 0.01 and the number of iterations is set to 200.
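The loop above can be sketched as a standard RANSAC skeleton. The minimal 5-point GB solver is stubbed out as a callback (`solve_minimal` is a hypothetical stand-in returning candidate poses), and the residual is the rolling shutter epipolar constraint under our angular-model convention:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(w):
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rs_residuals(R, t, x1, x2, v1, v2, w1, w2, gamma):
    """|x2^T E x1| per correspondence, with a per-row essential matrix."""
    res = np.empty(len(x1))
    for k in range(len(x1)):
        Ri = rodrigues(gamma * v1[k] * w1)
        Rj = rodrigues(gamma * v2[k] * w2)
        E = skew(Rj @ t) @ (Rj @ R @ Ri.T)
        res[k] = abs(x2[k] @ E @ x1[k])
    return res

def ransac(x1, x2, v1, v2, w1, w2, gamma, solve_minimal,
           threshold=0.01, iters=200, seed=0):
    """RANSAC over 5-point samples; `solve_minimal` stands in for the
    proposed GB solver and returns candidate (R, t) hypotheses."""
    rng = np.random.default_rng(seed)
    best, best_in = None, np.zeros(len(x1), bool)
    for _ in range(iters):
        sample = rng.choice(len(x1), 5, replace=False)
        for R, t in solve_minimal(sample):
            inl = rs_residuals(R, t, x1, x2, v1, v2, w1, w2, gamma) < threshold
            if inl.sum() > best_in.sum():
                best, best_in = (R, t), inl
    return best, best_in
```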

3.4 Nonlinear Refinement

Although the proposed method already estimates the relative pose of the two rolling shutter cameras accurately, we further enhance the pose accuracy through nonlinear optimization. In this step, we use the pose estimate from the method described in the previous section as the initial relative pose and refine it via nonlinear optimization over the inlier points obtained from the RANSAC procedure of Sec. 3.3. The nonlinear energy minimization is formulated as

$$\{\hat{\mathbf{t}}, \hat{\mathbf{q}}\} = \operatorname*{arg\,min}_{\mathbf{t},\,\mathbf{q}} \; \sum_{k \in \mathcal{I}} C_k(\mathbf{t}, \mathbf{q}) + \big(\mathbf{t}^\top \mathbf{t} - 1\big)^2, \tag{10}$$

where $\mathbf{t}$ is the translation in Eq. (8), $\mathbf{q}$ is the unit quaternion representing the relative rotation, which is estimated with a local parameterization on its tangent plane, and $\mathcal{I}$ is the set of inliers. The scale ambiguity constraint is added to the cost function in the form of Eq. (9) for easy differentiation. The cost function is defined as follows:

$$C_k(\mathbf{t}, \mathbf{q}) = \Big( (\mathbf{x}_{j,k}^{v_j})^\top\, \mathbf{E}^{v_i v_j}(\mathbf{t}, \mathbf{q})\, \mathbf{x}_{i,k}^{v_i} \Big)^2, \tag{11}$$

where $\mathbf{E}^{v_i v_j}(\mathbf{t}, \mathbf{q})$ is expressed as

$$\mathbf{E}^{v_i v_j}(\mathbf{t}, \mathbf{q}) = \big[\mathbf{R}(\gamma v_j \boldsymbol{\omega}_j)\,\mathbf{t}\big]_\times\, \mathbf{R}(\gamma v_j \boldsymbol{\omega}_j)\, \mathbf{R}(\mathbf{q})\, \mathbf{R}(\gamma v_i \boldsymbol{\omega}_i)^\top. \tag{12}$$

Here, $\mathbf{R}(\mathbf{q})$ is the relative rotation between the two cameras obtained from the quaternion $\mathbf{q}$, and it is initialized by the estimates from Section 3.2. Through this refinement, the approximation errors introduced by the relative rotation linearization are minimized.
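The refinement step can be sketched with SciPy's `least_squares`. As an illustrative simplification, we use an axis-angle increment about the initial rotation instead of the paper's quaternion local parameterization, and append the scale term as one extra residual; this is a stand-in under our angular-model convention, not the exact implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(w):
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def refine_pose(R0, t0, x1, x2, v1, v2, w1, w2, gamma):
    """Refine (R, t) over the RANSAC inliers by minimizing the rolling
    shutter epipolar residuals, with the scale constraint t.t - 1
    appended as one extra residual (cf. Eq. (9))."""
    def residuals(p):
        R = rodrigues(p[:3]) @ R0          # axis-angle increment about R0
        t = p[3:]
        res = []
        for k in range(len(x1)):
            Ri = rodrigues(gamma * v1[k] * w1)
            Rj = rodrigues(gamma * v2[k] * w2)
            E = skew(Rj @ t) @ (Rj @ R @ Ri.T)
            res.append(x2[k] @ E @ x1[k])  # signed epipolar residual
        res.append(t @ t - 1.0)            # scale-ambiguity residual
        return np.array(res)

    p0 = np.concatenate([np.zeros(3), t0])
    sol = least_squares(residuals, p0)
    R_hat = rodrigues(sol.x[:3]) @ R0
    t_hat = sol.x[3:] / np.linalg.norm(sol.x[3:])
    return R_hat, t_hat
```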

4 Experimental Results

We evaluate the proposed methods on both synthetic and real datasets. In the synthetic data experiments, we analyze the performance of the proposed methods with respect to various factors such as rolling shutter distortions and measurement noises. We then validate the superiority of the proposed methods on real data.

For evaluation, we compare the proposed rolling shutter algorithm and its refined version with the basic global shutter five-point algorithm [20, 27]. We also compare against the nonlinear solver of [6], which uses the estimate of the GS five-point algorithm as its initial value. Since it does not provide inliers, we use a robust estimator, the Cauchy loss function, over all measurements.

We summarize the algorithms used in the experiments as follows.

  • GSRP: The five-point global shutter relative pose estimation algorithm [20, 27].

  • NRSRP: The nonlinear rolling shutter relative pose estimation algorithm [6].

  • G-RSRP: The proposed rolling shutter relative pose estimation algorithm using gyroscope measurements.

  • G-RSRP+: The proposed algorithm (G-RSRP) with the additional nonlinear refinement.

The evaluation metric for the relative rotation is defined as

$$\epsilon_{\mathrm{rot}} = \arccos\!\left(\frac{\operatorname{tr}\!\big(\mathbf{R}_{\mathrm{gt}}^\top \hat{\mathbf{R}}\big) - 1}{2}\right), \tag{13}$$

and for the relative translation it is defined as

$$\epsilon_{\mathrm{trans}} = \arccos\!\big(\mathbf{t}_{\mathrm{gt}}^\top\, \hat{\mathbf{t}}\big), \tag{14}$$

as in [6], where $\hat{\mathbf{t}}$ is assumed to be a unit vector since the translation is up to scale in the RSRP problem. The relative translation error indicates the geodesic distance between the ground truth and the estimated translation on the unit sphere.
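These two metrics translate directly into code (in degrees; taking the absolute value of the dot product to handle the sign ambiguity of the translation direction is our addition):

```python
import numpy as np

def rotation_error_deg(R_gt, R_est):
    """Geodesic rotation error: the angle of R_gt^T R_est, in degrees."""
    c = np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

def translation_error_deg(t_gt, t_est):
    """Angle between the (unit) translation directions on the sphere."""
    c = abs(t_gt @ t_est) / (np.linalg.norm(t_gt) * np.linalg.norm(t_est))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))
```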

4.1 Experiments on Synthetic Data

In the synthetic data experiments, we perform two kinds of experiments: 1) with different angular and linear velocities and 2) with noises of a camera and a gyroscope.

Synthetic data generation: For each evaluation, we randomly generate 3D points and the positions/orientations of two cameras. The standard deviations of the relative rotation and translation are set to 10 and 2, respectively. The number of 3D points is set to 150, and they are generated within a distance of 60 from the cameras. The rolling shutter readout time is set to 60. The image resolution and focal length are fixed, and radial distortion is not considered in this experiment. The generated points are projected onto the image plane of each camera with the given intrinsic camera parameters and rolling shutter parameters (angular and linear velocities, and readout time). The rotation from the instantaneous angular velocities is computed with the Cayley transform [12], not the linearized model used in the proposed method. We then remove the points that are out of the field of view or have no corresponding point.
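Generating the synthetic projections hides a small fixed-point problem: the row a point lands on determines the per-row rotation, which in turn moves the projection. A sketch under our angular model (the intrinsics and the iteration scheme are illustrative assumptions; the paper's generator uses the Cayley transform, here replaced by the exact Rodrigues rotation):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(w):
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project_rolling_shutter(X, w, gamma, K, n_rows, n_iter=10):
    """Project a 3D point into a rolling shutter camera (angular model).

    The image row v fixes the instantaneous rotation R(gamma * v * w),
    which fixes the projection, which fixes v: iterate to a fixed point.
    """
    v = n_rows / 2.0                       # initialize at the middle row
    for _ in range(n_iter):
        Xc = rodrigues(gamma * v * w) @ X  # point in the row-v camera frame
        u = K @ (Xc / Xc[2])               # pinhole projection
        v = float(np.clip(u[1], 0.0, n_rows - 1))
    return u[:2]
```

For realistic readout times the per-row rotation is tiny, so the iteration contracts quickly; a handful of iterations suffices.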

Experiments: We repeat all experiments 100 times to obtain statistically meaningful results. First, we perform experiments with increasing instantaneous camera motion in order to analyze the effects of the angular and linear velocities. The angular and linear velocities are randomly generated, but we manually set their magnitudes in order to observe the effect of different velocity levels. We designed two experiments: 1) increasing the angular velocity with zero linear velocity, and 2) increasing the angular and linear velocities together. Although the proposed method assumes the angular rolling shutter geometry model, we also perform experiments with non-zero instantaneous linear velocities in order to examine the applicability and robustness of the proposed method in practice. To this end, we increase the magnitudes of the velocities over five levels. Gaussian noise is added to the camera and gyroscope measurements; the standard deviations of the noises are set to 1 and 0.1, respectively.

(a) Average
(b) Standard Deviation
Figure 3: Performance comparison with different levels of the angular velocity
(a) Average
(b) Standard Deviation
Figure 4: Performance comparison with different levels of angular and linear velocities
(a) With different levels of angular velocity noises
(b) With different levels of camera noises
Figure 5: Performance comparison with different levels of noises

For the experiment on different levels of angular velocity with zero linear velocity, the magnitude of the angular velocity is increased from 0.5 to 2.5 in steps of 0.5. Figure 3 shows that the average rotation and translation errors of GSRP [27] increase rapidly compared to NRSRP, G-RSRP, and G-RSRP+ as the angular velocity increases. G-RSRP, G-RSRP+, and NRSRP maintain rotation errors of less than 1 and translation errors of less than 5 on average even as the angular velocity increases. Besides, the standard deviations of the rotation and translation errors of G-RSRP, G-RSRP+, and NRSRP are less than 1 and 5, respectively, which means that they produce stable estimates. The standard deviation of the errors of G-RSRP+ is less than that of G-RSRP, indicating that the nonlinear refinement truly improves the stability of the estimation, especially under large angular velocities. However, NRSRP shows a rather large standard deviation at the 0.5 angular velocity level because the nonlinear solver can fall into local minima when the initial value is unreliable.

(a) Bench
(b) Room
(c) Street
Figure 6: Sample images and correspondences of the three sequences used in the real data experiments. (Upper row: images, lower row: correspondences)

We then perform another experiment with different levels of angular and linear velocities. The magnitude of the angular velocity increases as in the previous experiment, and that of the linear velocity increases from 4 (level 1) to 20 (level 5). For brevity, we denote the magnitudes of the angular and linear velocities by their levels. Figure 4 shows that G-RSRP, G-RSRP+, and NRSRP are much more accurate than GSRP, a tendency similar to the previous experiment. Although the average errors and their standard deviations increase slightly due to the effects of the linear velocities, the average errors are still less than 2 and 10 even at the maximum velocity level, which indicates that the effect of the linear velocity is tolerable and not severe. Interestingly, G-RSRP and G-RSRP+ are more stable than NRSRP, which models both angular and linear velocities; this shows that even more complete models can sometimes produce incorrect estimates.

Secondly, we perform experiments with different levels of camera and gyroscope measurement noise in order to investigate the sensitivity of the proposed method. For these experiments, the angular velocity is set to 2.5 and the linear velocity to 20, as in level 5 of the previous experiments. We chose these large values to clearly see the effects of noise under severe rolling shutter distortion. For the experiment on gyroscope noise, we increase the standard deviation of the gyroscope noise from 0.1 to 0.5 and set the standard deviation of the camera noise to 1. Conversely, for the experiment on camera noise, we increase the standard deviation of the camera measurement noise from 1 to 5 and set the standard deviation of the gyroscope noise to 0.1. Figure 5 shows that gyroscope noise barely affects the performance of G-RSRP and G-RSRP+. However, large camera noise significantly degrades the translation estimation accuracy of G-RSRP. Interestingly, the proposed nonlinear refinement improves the accuracy of translation estimation when the camera noise is severe. Apart from this, G-RSRP+ shows better performance at all noise levels, which indicates that the refinement stabilizes the estimates, as observed in the previous experiments on increasing angular velocity.

4.2 Real Data

Real data preparation: We evaluate the performance of the proposed method on our own datasets collected with an Intel RealSense camera (ZR300). The ZR300 consists of a rolling shutter camera, a global shutter camera, a depth sensor, and an IMU including a gyroscope, and it provides synchronized timestamps for all of them. The frame rate of the rolling shutter and global shutter cameras is 30 Hz and that of the gyroscope is 200 Hz. We recorded image sequences from the rolling shutter and global shutter cameras along with gyroscope measurements in indoor and outdoor environments under hand-held motion. We then used the images from the global shutter camera to generate ground truth relative poses with GSRP [27].

Figure 6 shows sample images from the three sequences we recorded. We acquired two of the sequences for about 1 minute each and the remaining sequence for about 10 minutes. From these sequences, we generate image pairs with gyroscope measurements for our relative pose estimation problem. Since we need image pairs with a sufficient number of correspondences and large camera motion, we sample images with large angular velocity by examining the gyroscope measurements; the sampled images compose the image pairs for relative pose estimation. We extract feature points from the images and track them [5] across images with large angular motion. Figure 6 shows correspondence samples of the generated image pairs. Although the RealSense camera provides initial calibration parameters, we performed the calibration [28, 26, 9, 10] of the rolling shutter camera, the global shutter camera, and the gyroscope ourselves in order to obtain more accurate parameters. Since there is no public calibration algorithm for a rolling shutter camera and an IMU, we first calibrate 1) the global shutter camera and the IMU and 2) the global shutter and rolling shutter cameras, and then compute the extrinsic parameters between the rolling shutter camera and the gyroscope from the two estimated extrinsics. Besides, we estimate the intrinsic parameters of the rolling and global shutter cameras, the intrinsic parameters of the gyroscope such as its biases, and the time offset between the gyroscope and the cameras. Finally, in order to reduce the gyroscope noise, we use the average of the gyroscope measurements within 50 ms of the image frame timestamp.
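The final noise-suppression step is a plain windowed average of the gyroscope stream around each image timestamp. A minimal sketch (reading the "50" in the text as a 50 ms window is our assumption):

```python
import numpy as np

def gyro_at_frame(gyro_ts, gyro_w, frame_ts, half_window=0.025):
    """Average the gyroscope samples whose timestamps (in seconds) fall
    within a window centred on the image timestamp (200 Hz stream)."""
    mask = np.abs(gyro_ts - frame_ts) <= half_window
    return gyro_w[mask].mean(axis=0)
```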

Experimental results: Figure 7 shows the performance comparison on the three real sequences. Overall, the rotation and translation errors of all methods are larger than in the synthetic experiments on angular and linear velocities. The reason is that the real dataset contains outliers (incorrect image correspondences) and noise from feature tracking on consecutive frames, as well as calibration errors. Besides, degenerate camera motions, as pointed out in [4], also occur. Nevertheless, the proposed G-RSRP and G-RSRP+ outperform GSRP and NRSRP. In all sequences, the median error and standard deviation of G-RSRP and G-RSRP+ are lower than those of GSRP, and G-RSRP and G-RSRP+ produce much smaller errors than GSRP and NRSRP. Although NRSRP slightly reduces the standard deviation of the translation errors in two of the sequences, it does not bring much improvement over GSRP. This is because the nonlinear optimization of NRSRP depends on its initial value, and the erroneous initial value obtained from GSRP leads to little improvement. Besides, the more complicated model sometimes produces incorrect estimates, as observed in the synthetic data experiments.

Figure 7: Accuracy comparison for our real datasets

4.3 Time complexity

Since G-RSRP is a closed-form polynomial solver and the DOF of the problem is small, computing the 20 candidate solutions from five points and the gyroscope measurements is very fast. Besides, G-RSRP finds inliers easily in the RANSAC process because it accounts for the rolling shutter geometry, whereas GSRP regards points distorted by the rolling shutter as outliers and can therefore take more time in the RANSAC process. In general, NRSRP and G-RSRP+ take much more time because their nonlinear solvers estimate the solution iteratively over many points.

4.4 Degeneracy

We found that the proposed method fails to estimate the relative pose in planar scenes, similarly to the case reported in [3]. We performed a synthetic data experiment with 3D landmarks generated on a plane and observed that the translation estimates converged to a forward motion although the rotation estimates were correct. This issue could be resolved by homography estimation that considers rolling shutter camera geometry. We can measure the planarity of the scene (i.e. how planar the scene is) via the number of inliers and/or the measurement errors, and then avoid the degenerate situation using this measure. The mathematical analysis of the degeneracy and homography estimation for rolling shutter cameras still remain open problems. Moreover, instead of a geometric solution such as a homography, we could judge whether a scene is planar or not with state-of-the-art machine learning techniques; learning the planarity of the scene is another interesting research topic.

5 Conclusion

We have proposed a new method to estimate the relative pose of a rolling shutter camera with the aid of gyroscope (angular velocity) measurements. We exploited the angular velocity measurements to simplify the relative pose estimation problem, lowering the DOF of the angular rolling shutter essential matrix from 11 to five. We then found the minimal solution of the simplified problem using the GB method and refined the result through nonlinear optimization. We experimentally validated the proposed method on synthetic and real data, and confirmed that it produces more accurate relative pose estimates than conventional global shutter and rolling shutter relative pose estimation methods. The proposed method can serve as an essential component for visual-inertial SLAM/SfM with rolling shutter cameras.


  • [1] O. Ait-Aider, N. Andreff, J. M. Lavest, and P. Martinet. Simultaneous object pose and velocity computation using a single view from a rolling shutter camera. In ECCV, 2006.
  • [2] C. Albl, Z. Kukelova, and T. Pajdla. R6p-rolling shutter absolute camera pose. In CVPR, 2015.
  • [3] C. Albl, Z. Kukelova, and T. Pajdla. Rolling shutter absolute pose problem with known vertical direction. In CVPR, 2016.
  • [4] C. Albl, A. Sugimoto, and T. Pajdla. Degeneracies in rolling shutter sfm. In ECCV, 2016.
  • [5] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.
  • [6] Y. Dai, H. Li, and L. Kneip. Rolling shutter camera relative pose: Generalized epipolar geometry. In CVPR, 2016.
  • [7] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
  • [8] F. Fraundorfer, P. Tanskanen, and M. Pollefeys. A minimal case solution to the calibrated relative pose problem for the case of two known orientation angles. In ECCV, 2010.
  • [9] P. Furgale, T. D. Barfoot, and G. Sibley. Continuous-time batch estimation using temporal basis functions. In 2012 IEEE International Conference on Robotics and Automation, 2012.
  • [10] P. Furgale, J. Rehder, and R. Siegwart. Unified temporal and spatial calibration for multi-sensor systems. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.
  • [11] C. X. Guo, D. G. Kottas, R. C. DuToit, A. Ahmed, R. Li, and S. I. Roumeliotis. Efficient visual-inertial navigation using a rolling-shutter camera with inaccurate timestamps. In Proceedings of Robotics: Science and Systems, Berkeley, USA, 2014.
  • [12] M. Hazewinkel. Encyclopaedia of Mathematics: Volume 6: Subject Index—Author Index. Springer Science & Business Media, 2013.
  • [13] J. Hedborg, P.-E. Forssén, M. Felsberg, and E. Ringaby. Rolling shutter bundle adjustment. In CVPR, 2012.
  • [14] J. Hedborg, E. Ringaby, P.-E. Forssén, and M. Felsberg. Structure and motion estimation from rolling shutter video. In ICCV Workshops, 2011.
  • [15] E. Ito and T. Okatani. Self-calibration-based approach to critical motion sequences of rolling-shutter structure from motion. In CVPR, 2017.
  • [16] C. Jia and B. L. Evans. Probabilistic 3-d motion estimation for rolling shutter video rectification from visual and inertial measurements. In IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), 2012.
  • [17] A. Karpenko, D. Jacobs, J. Baek, and M. Levoy. Digital video stabilization and rolling shutter correction using gyroscopes. Stanford University Computer Science Tech Report, 1:2, 2011.
  • [18] J.-H. Kim, C. Cadena, and I. Reid. Direct semi-dense slam for rolling shutter cameras. In ICRA, 2016.
  • [19] G. Klein and D. Murray. Parallel tracking and mapping on a camera phone. In ISMAR, 2009.
  • [20] L. Kneip and P. Furgale. Opengv: A unified and generalized approach to real-time calibrated geometric vision. In ICRA, 2014.
  • [21] Z. Kukelova, M. Bujnak, and T. Pajdla. Automatic generator of minimal problem solvers. In ECCV, 2008.
  • [22] Z. Kukelova, M. Bujnak, and T. Pajdla. Closed-form solutions to the minimal absolute pose problems with known vertical direction. In ACCV, 2011.
  • [23] G. H. Lee, M. Pollefeys, and F. Fraundorfer. Relative pose estimation for a multi-camera system with known vertical direction. In CVPR, 2014.
  • [24] B. Li, L. Heng, G. H. Lee, and M. Pollefeys. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle. In IROS, 2013.
  • [25] L. Magerand, A. Bartoli, O. Ait-Aider, and D. Pizarro. Global optimization of object pose and motion from a single rolling shutter image with automatic 2d-3d matching. In ECCV, 2012.
  • [26] J. Maye, P. Furgale, and R. Siegwart. Self-supervised calibration for robotic systems. In 2013 IEEE Intelligent Vehicles Symposium (IV), 2013.
  • [27] D. Nister. An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6):756–770, 2004.
  • [28] L. Oth, P. Furgale, L. Kneip, and R. Siegwart. Rolling shutter camera calibration. In CVPR, 2013.
  • [29] H. Ovrén and P.-E. Forssén. Gyroscope-based video stabilisation with auto-calibration. In ICRA, 2015.
  • [30] O. Saurer, K. Koser, J.-Y. Bouguet, and M. Pollefeys. Rolling shutter stereo. In ICCV, 2013.
  • [31] O. Saurer, M. Pollefeys, and G. H. Lee. A minimal solution to the rolling shutter pose estimation problem. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1328–1334. IEEE, 2015.
  • [32] B. Zhuang, L.-F. Cheong, and G. H. Lee. Rolling-shutter-aware differential sfm and image rectification. In ICCV, 2017.