DSAC - Differentiable RANSAC for Camera Localization


Eric Brachmann, Alexander Krull, Sebastian Nowozin
Jamie Shotton, Frank Michel, Stefan Gumhold, Carsten Rother
TU Dresden, Microsoft

RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection, for which we can derive the expected loss w.r.t. all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component. (Source code and trained models are publicly available.)

1 Introduction

Introduced in 1981, the random sample consensus (RANSAC) algorithm [11] remains the most important algorithm for robust estimation. It is easy to implement, it can be applied to a wide range of problems and it is able to handle data with a substantial percentage of outliers, i.e. data points that are not explained by the data model. RANSAC and variants thereof [39, 28, 7] have, for many years, been important tools in computer vision, including multi-view geometry [16], object retrieval [29], pose estimation [36, 4] and simultaneous localization and mapping (SLAM) [27]. Solutions to these diverse tasks often involve a common strategy: Local predictions (e.g. feature matches) induce a global model (e.g. a homography). In this schema, RANSAC provides robustness to erroneous local predictions.

Recently, deep learning has been shown to be highly successful at image recognition tasks [37, 17, 13, 31], and, increasingly, in other domains including geometry [10, 19, 20, 9]. Part of this recent success is the ability to perform end-to-end training, i.e. propagating gradients back through an entire pipeline to allow the direct optimization of a task-specific loss function; examples include [41, 1, 38].

In this work, we are interested in learning components of a computer vision pipeline that follows the principle: predict locally, fit globally. As explained earlier, RANSAC is an integral component of this wide-spread strategy. We ask the question, whether we can train such a pipeline end-to-end. More specifically, we want to learn parameters of a convolutional neural network (CNN) such that models, fit robustly to its predictions via RANSAC, minimize a task specific loss function.

RANSAC works by first creating multiple model hypotheses from small, random subsets of data points. Then it scores each hypothesis by determining its consensus with all data points. Finally, RANSAC selects the hypothesis with the highest consensus as the final output. Unfortunately, this hypothesis selection is non-differentiable, meaning that it cannot directly be used in an end-to-end-trained deep learning pipeline.

A common approach within the deep learning community is to soften non-differentiable operators, e.g. in LIFT [41] or the visual word assignment in NetVLAD [1]. In the case of RANSAC, the non-differentiable operator is the $\operatorname{argmax}$ which selects the highest scoring hypothesis. Similar to [41], we might substitute the $\operatorname{argmax}$ with a soft $\operatorname{argmax}$, which is a weighted average of arguments [6]. We indeed explore this direction but argue that this substitution changes the underlying principle of RANSAC. Instead of learning how to select a good hypothesis, the pipeline learns a (robust) average of hypotheses. We show experimentally that this approach learns to focus on a narrow selection of hypotheses and is prone to overfitting.

Alternatively, we aim to preserve the hard hypothesis selection but treat it as a probabilistic process. We call this approach DSAC – Differentiable SAmple Consensus – our new, differentiable counterpart to RANSAC. DSAC allows us to differentiate the expected loss of the pipeline w.r.t. all learnable parameters. This technique is well known in reinforcement learning for stochastic computation problems, e.g. in policy gradient approaches [34].

To demonstrate the principle, we choose the problem of camera localization: from a single RGB image of a known static scene, we estimate the 6D camera pose (3D translation and 3D rotation) relative to the scene. We demonstrate an end-to-end trainable solution for this problem, building on the scene coordinate regression forest (SCoRF) approach [36, 40, 5]. The original SCoRF approach uses a regression forest to predict the 3D location of each pixel in an observed image in terms of ‘scene coordinates’. A hypothesize-verify-refine RANSAC loop then randomly selects scene coordinates of four pixel locations to generate an initial set of camera pose hypotheses, which is then iteratively pruned and refined until a single high-quality pose estimate remains. In contrast to previous SCoRF approaches, we adopt two CNNs, one for predicting scene coordinates and one for scoring hypotheses. More importantly, the key novelty of this work is to replace RANSAC with our new, differentiable DSAC.

Our contributions are in short:

  • We present and discuss two alternative ways of making RANSAC differentiable, by soft and probabilistic selection. We call our new RANSAC version, with the latter option, DSAC (Differentiable SAmple Consensus).

  • We put both options into a new end-to-end trainable camera localization pipeline. It contains two separate CNNs, linked by our new RANSAC, motivated by previous work [36, 23].

  • We validate experimentally that the option of probabilistic selection is superior, i.e. less sensitive to overfitting, for our application. We conjecture that the advantage of probabilistic selection is allowing hard decisions and, at the same time, keeping broad distributions over possible decisions.

  • We exceed the state-of-the-art results on camera localization by 7.3%.

1.1 Related Work

Over the last decades, researchers have proposed many variants of the original RANSAC algorithm [11]. Most works focus on either or both of two aspects: speed [8, 28, 7], or quality of the final estimate [39, 8]. For detailed information about RANSAC variants we refer the reader to [30]. To the best of our knowledge, this work is the first to introduce a differentiable variant of RANSAC for the purpose of end-to-end learning. In the following, we review previous work on differentiable algorithms and solutions for the problem of camera localization.

Differentiable Algorithms. The success of deep learning began with systems in which a CNN processes an image in one forward pass to directly predict the desired output, e.g. class probabilities [22], a semantic segmentation [25] or depth values and normals [10]. Given a sufficient amount of training data, CNNs can autonomously discover useful strategies for solving a task at hand, e.g. hierarchical part-structures for object recognition [42].

However, for many computer vision tasks, useful strategies have been known for a long time. Recently, researchers started to revisit and encode such strategies explicitly in deep learning pipelines. This can reduce the necessary amount of training data compared to CNNs with an unconstrained architecture [35]. Yi et al. [41] introduced a stack of CNNs that remodels the established sparse feature pipeline of detection, orientation estimation and description, originally proposed in [26]. Arandjelovic et al. [1] mapped the Vector of Locally Aggregated Descriptors (VLAD) [2] to a CNN architecture for place recognition. Thewlis et al. [38] substituted the recursive decoding of Deep Matching [32] with reverse convolutions for end-to-end trainable dense image matching.

Similar in spirit to these works, we show how to train an established, RANSAC-based computer vision pipeline in an end-to-end fashion. Instead of substituting hard assignments by soft counterparts as in [41, 1], we enable end-to-end learning by turning the hard selection into a probabilistic process. Thus, we are able to calculate gradients to minimize the expectation of the task loss function [34].

Camera Localization. The SCoRF camera localization pipeline [36], already discussed in the introduction, has been extended in several works. Guzman-Rivera et al. [14] trained a random forest to predict diverse scene coordinates to resolve scene ambiguities. Valentin et al. [40] trained the random forest to predict multi-modal distributions of scene coordinates for increased pose accuracy. Brachmann et al. [5] addressed camera localization from an RGB image instead of RGB-D, utilizing the increased predictive power of an auto-context random forest. None of these works support end-to-end learning.

In a system similar to SCoRF but for the task of object pose estimation, Krull et al. [23] trained a CNN to measure hypothesis consensus by comparing rendered and observed images. In this work, we adopt the idea of a CNN measuring hypothesis consensus, but learn it jointly with the scene coordinate regressor and in an end-to-end fashion.

Kendall et al. [20] demonstrated that a single CNN is able to directly regress the 6D camera pose given an RGB image, but its accuracy on indoor scenes is inferior to an RGB-based SCoRF pipeline [5].

2 Method

Figure 1: Stochastic Computation Graphs [34]. A graphical representation of three RANSAC variants investigated in this work. The variants differ in the way they select the final model hypothesis: a) non-differentiable, vanilla RANSAC with hard, deterministic selection; b) differentiable RANSAC with deterministic, soft selection; c) differentiable RANSAC with hard, probabilistic selection (named DSAC). Nodes shown as boxes represent deterministic functions, while circular nodes with yellow background represent probabilistic functions. Arrows indicate dependency in computation. All differences between a), b) and c) are marked in red.

2.1 Background

As a preface to explaining our method, we first briefly review the standard RANSAC algorithm for model fitting, and how it can be applied to the camera localization problem using discriminative scene coordinate regression.

Many problems in computer vision involve fitting a model to a set of data points, which in practice usually include outliers due to sensor noise and other factors. The RANSAC algorithm was specifically designed to be able to fit models robustly in the presence of noise [11]. Dozens of variations of RANSAC exist [39, 8, 28, 7]. We consider a general, basic variant here but the new principles presented in this work can be applied to many RANSAC variants, such as to locally-refined preemptive RANSAC [36].

A basic RANSAC implementation consists of four steps: (i) generate a set of model hypotheses by sampling minimal subsets of the data; (ii) score hypotheses based on some measure of consensus, e.g. by counting inliers; (iii) select the best scoring hypothesis; (iv) refine the selected hypothesis using additional data points, e.g. the full set of inliers. Step (iv) is optional, though in practice important for high accuracy.
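The four steps above can be sketched on a toy 2D line-fitting problem (a hypothetical minimal example, not the camera localization pipeline of this paper; the function name and threshold are illustrative):

```python
import random

def ransac_line(points, n_hyp=64, thresh=0.1, rng=None):
    """Toy RANSAC for fitting y = a*x + b to points with outliers.
    (i) hypotheses from minimal 2-point subsets, (ii) score by inlier
    count, (iii) select the best, (iv) refine by least squares on the
    winner's inliers."""
    rng = rng or random.Random(0)
    best_inliers = []
    for _ in range(n_hyp):
        (x1, y1), (x2, y2) = rng.sample(points, 2)        # (i) minimal set
        if x1 == x2:
            continue                                      # degenerate set
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(a * x + b - y) < thresh]
        if len(inliers) > len(best_inliers):              # (ii) score, (iii) select
            best_inliers = inliers
    # (iv) refine: closed-form least-squares line through the inlier set
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n
```

Outlier-contaminated hypotheses collect few inliers and lose the selection, which is what makes the estimate robust.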

We introduce our notation below using the example application of camera localization. We consider an RGB image $I$ consisting of pixels indexed by $i$. We wish to estimate the parameters $\tilde{h}$ of a model that explains $I$. In the camera localization problem this is the 6D camera pose, i.e. the 3D rotation and 3D translation of the camera relative to the scene’s coordinate frame. Following [36], we do not fit the model directly to the image data $I$, but instead make use of intermediate, noisy 2D-3D correspondences predicted for each pixel: $y(i)$, where $y(i)$ is the ‘scene coordinate’ of pixel $i$, i.e. a discriminative prediction for where the point imaged at pixel $i$ lives in the 3D scene coordinate frame. We will use $y_i$ as shorthand for $y(i)$. $\mathcal{Y}$ denotes the complete set of scene coordinate predictions for image $I$, and we write $\mathcal{Y}_J$ for a subset indexed by $J$. To estimate the pose from $\mathcal{Y}$ we apply RANSAC as follows:

  1. Generate a pool of hypotheses. Each hypothesis is generated from a subset of correspondences. This subset contains the minimal number of correspondences to compute a unique solution. We call this a minimal set with correspondence indices $J$, where $|J| = m$ is the minimal set size. To create the set, we uniformly sample $m$ correspondence indices to get the minimal set $\mathcal{Y}_J$. We assume a function $H$ which generates a model hypothesis as $h = H(\mathcal{Y}_J)$ from the minimal set $\mathcal{Y}_J$. In our application, $H$ is the perspective-n-point (PNP) algorithm [12], and $m = 4$.

  2. Score hypotheses. A scalar function $s(h, \mathcal{Y})$ measures the consensus / quality of hypothesis $h$, e.g. by counting inlier correspondences. To define an inlier in our application, we first define the reprojection error of scene coordinate $y_i$:

    $e_i(h) = \lVert C h^{-1} y_i - p_i \rVert, \quad (1)$

    where $p_i$ is the 2D location of pixel $i$ and $C$ is the camera projection matrix. We call $y_i$ an inlier if $e_i(h) < \tau$, where $\tau$ is the inlier threshold. In this work, instead of counting inliers, we aim to learn to directly regress the hypothesis score from the reprojection errors $e_i$, as we will explain shortly.

  3. Select best hypothesis. We take

    $h_{\text{AM}} = \operatorname{argmax}_{h} s(h, \mathcal{Y}). \quad (2)$
  4. Refine hypothesis. $h_{\text{AM}}$ is refined using a function $R$. Refinement may use all correspondences $\mathcal{Y}$. A common approach is to select a set of inliers from $\mathcal{Y}$ and recalculate function $H$ on this set. The refined pose $\tilde{h} = R(h_{\text{AM}}, \mathcal{Y})$ is the output of the algorithm.
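The reprojection error of step 2 can be sketched as follows, assuming (as a simplification) that the pose is passed directly as a 3x4 world-to-camera matrix, i.e. the inversion of $h$ has already been applied; all names are illustrative:

```python
def reprojection_error(h_wc, C, y, p):
    """Reprojection error of one scene coordinate (cf. Eq. 1).
    h_wc: 3x4 world-to-camera pose (the inverse of the camera pose),
    C: 3x3 intrinsic matrix, y: 3D scene coordinate, p: observed 2D
    pixel location. Returns the Euclidean pixel distance."""
    yh = list(y) + [1.0]                              # homogeneous point
    yc = [sum(h_wc[r][c] * yh[c] for c in range(4)) for r in range(3)]
    u = [sum(C[r][c] * yc[c] for c in range(3)) for r in range(3)]
    px, py = u[0] / u[2], u[1] / u[2]                 # perspective divide
    return ((px - p[0]) ** 2 + (py - p[1]) ** 2) ** 0.5
```

A scene coordinate whose projection lands within $\tau$ pixels of its observed location counts as an inlier.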

2.2 Learning in a RANSAC Pipeline

The system of Shotton et al. [36] had a single learned component, namely the regression forest that made the scene coordinate predictions. Krull et al. [23] extended the approach to also learn the scoring function, as a generalization of the simpler inlier counting scheme of [36]. However, these components have thus far been learned separately.

Our work instead aims to learn both the scene coordinate predictions and the scoring function, and to do so jointly in an end-to-end fashion within a RANSAC framework. Making the parameterizations explicit, we have scene coordinates $y_i(w)$ and scores $s(h, \mathcal{Y}; v)$. We aim to learn parameters $w$ and $v$, where $w$ affects the quality of the poses that we generate, and $v$ affects the selection process which should choose a good hypothesis. We write $\mathcal{Y}(w)$ to reflect that the scene coordinate predictions depend on parameters $w$. Similarly, we write $h_{\text{AM}}(w, v)$ to reflect that the chosen hypothesis depends on $w$ and $v$.

We would like to find parameters $w$ and $v$ such that the loss $\ell$ of the final, refined hypotheses over a training set $\mathcal{I}$ of images is minimized, i.e.

$\tilde{w}, \tilde{v} = \operatorname{argmin}_{w, v} \sum_{I \in \mathcal{I}} \ell\left(R(h_{\text{AM}}(w, v), \mathcal{Y}(w)), h^*\right), \quad (3)$

where $h^*$ are the ground truth model parameters for image $I$. To allow end-to-end learning, we need to differentiate w.r.t. $w$ and $v$. We assume a differentiable loss $\ell$ and a differentiable refinement $R$.

One might consider differentiating w.r.t. $w$ via the minimal set of the single selected hypothesis of Eq. 2. But learning a RANSAC pipeline in this fashion fails, because the selection process itself depends on $w$ and $v$, which is not represented in the gradients of the selected hypothesis. (We observed in early experiments that the training loss immediately increases without recovering.) Parameters $v$ influence the selection directly via the scoring function $s$, and parameters $w$ influence the quality of competing hypotheses, though neither influence the initial uniform sampling of minimal sets.

We next present two approaches to learn parameters $w$ and $v$ – soft selection (Sec. 2.2.1) and probabilistic selection (Sec. 2.2.2) – that do model the dependency of the selection process on the parameters.

2.2.1 Soft Selection (SoftAM)

To solve the problem of non-differentiability, one can relax the $\operatorname{argmax}$ operator of Eq. 2 and substitute it with a soft $\operatorname{argmax}$ operator [6]. The soft $\operatorname{argmax}$ turns the hypothesis selection into a weighted average of hypotheses:

$h_{\text{SoftAM}}(w, v) = \sum_{j} P(j \mid v, w)\, h_j(w), \quad (4)$

which averages over candidate hypotheses $h_j$ with softmax weights

$P(j \mid v, w) = \frac{\exp(s(h_j, \mathcal{Y}; v))}{\sum_{k} \exp(s(h_k, \mathcal{Y}; v))}. \quad (5)$

In this variant, scoring function $s$ has to predict weights that lead to a robust average of hypotheses (i.e. model parameters). This means that model parameters corrupted by outliers should receive sufficiently small weights, such that they do not affect the accuracy of $h_{\text{SoftAM}}$.
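A minimal sketch of this soft selection (cf. Eqs. 4-5), with hypotheses represented as plain parameter vectors; note that naively averaging 6D poses, in particular rotations, needs more care than this toy version:

```python
import math

def soft_select(hypotheses, scores):
    """Soft argmax selection: softmax the scores and return the
    weighted average of the hypothesis vectors."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]     # numerically stable softmax
    z = sum(w)
    w = [x / z for x in w]
    dim = len(hypotheses[0])
    return [sum(w[j] * hypotheses[j][d] for j in range(len(w)))
            for d in range(dim)]
```

With uniform scores the result is the plain mean; with one dominant score it approaches hard selection.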

Substituting $h_{\text{SoftAM}}$ for $h_{\text{AM}}$ in Eq. 3 allows us to calculate gradients to learn parameters $w$ and $v$. We refer the reader to the appendix for details.

By utilizing the soft $\operatorname{argmax}$ operator, we diverge from the RANSAC principle of making one hard decision for a hypothesis. Soft hypothesis selection bears similarity with an independent strain within the field of robust optimization, namely robust averaging, see e.g. the work of Hartley et al. [15]. While we explore soft selection in the experimental evaluation, we introduce an alternative in the next section that preserves the hard hypothesis selection and is empirically superior for our task.

2.2.2 Probabilistic Selection (DSAC)

We substitute the deterministic selection of the highest scoring model hypothesis in Eq. 2 with a probabilistic selection, i.e. we choose a hypothesis probabilistically according to:

$h_{\text{DSAC}}(w, v) = h_j(w), \quad \text{with } j \sim P(j \mid v, w), \quad (6)$

where $P(j \mid v, w)$ is the softmax distribution of the scores predicted by $s$ (see Eq. 5).
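A sketch of this probabilistic selection: the softmax of the scores defines a categorical distribution, from which a single hypothesis index is sampled; the selection itself stays hard (all names are illustrative):

```python
import math
import random

def dsac_select(hypotheses, scores, rng):
    """Probabilistic selection (cf. Eq. 6): sample index j from the
    softmax of the scores and return the corresponding hypothesis."""
    m = max(scores)
    p = [math.exp(s - m) for s in scores]     # numerically stable softmax
    z = sum(p)
    p = [x / z for x in p]
    j = rng.choices(range(len(p)), weights=p, k=1)[0]
    return j, hypotheses[j]
```

High-scoring hypotheses are chosen most of the time, but every hypothesis retains a non-zero selection probability.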

The inspiration for this approach comes from policy gradient approaches in reinforcement learning that involve the minimization of a loss function defined over a stochastic process [34]. Similarly, we are able to learn parameters $w$ and $v$ that minimize the expectation of the loss of the stochastic process defined in Eq. 6:

$\tilde{w}, \tilde{v} = \operatorname{argmin}_{w, v} \, \mathbb{E}_{j \sim P(j \mid v, w)}\left[\ell\left(R(h_j(w), \mathcal{Y}(w)), h^*\right)\right]. \quad (7)$
As shown in [34], we can calculate the derivative w.r.t. parameters $w$ as follows (similarly for parameters $v$):

$\frac{\partial}{\partial w} \mathbb{E}_{j}\left[\ell(\cdot)\right] = \mathbb{E}_{j}\left[\ell(\cdot)\, \frac{\partial}{\partial w} \log P(j \mid v, w) + \frac{\partial}{\partial w} \ell(\cdot)\right], \quad (8)$
i.e. the derivative of the expectation is an expectation over derivatives of the loss and the probabilities of model hypotheses. We include further steps of the derivation of Eq. 8 in the appendix.
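The identity of Eq. 8 can be checked numerically on a toy model. Below, scores are $s_j = v\,a_j$ for a single scalar parameter $v$ (a hypothetical stand-in for the Score CNN), the losses do not depend on $v$ (so the second term of Eq. 8 drops), and the expectation is computed exactly over all $j$ rather than by sampling:

```python
import math

def softmax(s):
    m = max(s)
    e = [math.exp(x - m) for x in s]
    z = sum(e)
    return [x / z for x in e]

def expected_loss(v, a, loss):
    """E_{j ~ P(j|v)}[loss_j] with toy scores s_j = v * a_j."""
    p = softmax([v * aj for aj in a])
    return sum(pj * lj for pj, lj in zip(p, loss))

def score_function_grad(v, a, loss):
    """d/dv E_j[loss_j] = E_j[loss_j * d/dv log P(j|v)].
    For a softmax over s_j = v*a_j: d/dv log P(j|v) = a_j - E[a]."""
    p = softmax([v * aj for aj in a])
    mean_a = sum(pj * aj for pj, aj in zip(p, a))
    return sum(pj * lj * (aj - mean_a) for pj, lj, aj in zip(p, loss, a))
```

The score-function gradient agrees with a finite-difference derivative of the expected loss, which is the property that makes end-to-end training possible despite the hard selection.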

We call this method of differentiating RANSAC, that preserves hard hypothesis selection, DSAC – Differentiable SAmple Consensus. See Fig. 1 for a schematic view of DSAC in comparison to the RANSAC variants introduced at the beginning of this section. While learning parameters with the vanilla RANSAC is not possible, as mentioned before, both new variants (SoftAM and DSAC) are sensible options which we evaluate in the experimental section.

3 Differentiable Camera Localization

Figure 2: Differentiable Camera Localization Pipeline. Given an RGB image, we let a CNN with parameters $w$ predict 2D-3D correspondences, so called scene coordinates [36]. From these, we sample minimal sets of four scene coordinates and create a pool of hypotheses. For each hypothesis, we create an image of reprojection errors which is scored by a second CNN with parameters $v$. We select a hypothesis probabilistically according to the score distribution. The selected pose is also refined.

We demonstrate the principles for differentiating RANSAC for the task of one-shot camera localization from an RGB image. Our pipeline is inspired by the state-of-the-art pipeline of Brachmann et al. [5], which is an extension of the original SCoRF pipeline [36] from RGB-D to RGB images. Brachmann et al. use an auto-context random forest to predict multi-modal scene coordinate distributions per image patch. After that, minimal sets of four scene coordinates are randomly sampled and the PNP algorithm [12] is applied to create a pool of camera pose hypotheses. A preemptive RANSAC schema iteratively refines, re-scores and rejects hypotheses until only one remains. The preemptive RANSAC scores hypotheses by counting inlier scene coordinates, i.e. scene coordinates for which the reprojection error $e_i < \tau$. In a last step, the final, remaining hypothesis is further optimized using the uncertainty of the scene coordinate distributions.

Our pipeline differs from Brachmann et al. [5] in the following aspects:

  • Instead of a random forest, we use a CNN (called ‘Coordinate CNN’ below) to predict scene coordinates. For each 42x42 pixel image patch, it predicts a scene coordinate point estimate. We use a VGG style architecture with 13 layers and 33M parameters. To reduce test time we process only 40x40 patches per image.

  • We score hypotheses using a second CNN (called ‘Score CNN’ below). We took inspiration from the work of Krull et al. [23] for the task of object pose estimation. Instead of learning a CNN to compare rendered and observed images as in [23], our Score CNN predicts hypothesis consensus based on reprojection errors. For each of the 40x40 scene coordinate predictions we calculate the reprojection error for hypothesis (see Eq. 1). This results in a 40x40 reprojection error image, which we feed into the Score CNN, a VGG style architecture with 13 layers and 6M parameters.

  • Instead of the preemptive RANSAC schema, we score hypotheses only once and select the final pose, either by applying the soft $\operatorname{argmax}$ operator (SoftAM), or by probabilistic selection according to the softmaxed scores (DSAC).

  • Only the final pose is refined. We choose inlier scene coordinate predictions (at most 100), i.e. scene coordinates with reprojection error $e_i < \tau$, and solve PNP [24] again using this set. This is iterated multiple times. Since the Coordinate CNN predicts only point estimates, we do no further pose optimization using uncertainty.

See Fig. 2 for an overview of our pipeline. Where applicable, we use the parameter values reported by Brachmann et al. in [5], e.g. sampling 256 hypotheses, using 8 refinement steps and an inlier threshold $\tau$ measured in px.

4 Experiments

For comparability to other methods, we show results on the widely used 7-Scenes dataset [36]. The dataset consists of RGB-D images of 7 indoor environments where each frame is annotated with its 6D camera pose. A 3D model of each scene is also available. The data of each scene is comprised of multiple sequences (= independent camera paths) which are assigned either to test or training. The number of images per scene ranges from 1k to 7k for training resp. test. We omit the depth channels and estimate poses using RGB images only. See the appendix for a discussion of the difficulty of the 7-Scenes dataset.

We measure accuracy by the percentage of images for which the camera pose error is below $5^\circ$ and 5cm (see Appendix C for a comment on the calculation of this error). For training, we use the following differentiable loss which is closely correlated with the task loss:

$\ell_{\text{pose}}(\tilde{h}, h^*) = \max\left(\angle(\tilde{\theta}, \theta^*), \lVert \tilde{t} - t^* \rVert\right), \quad (9)$

where $\tilde{h} = (\tilde{\theta}, \tilde{t})$, $\theta$ denotes the axis-angle representation of the camera rotation, and $t$ is the camera translation. We measure the angle between estimated and ground truth rotation in degrees, and the distance between estimated and ground truth translation in cm.
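This pose error can be sketched as follows, assuming axis-angle rotation vectors and translations in cm; the rotational part is computed as the angle of the relative rotation between the two poses:

```python
import math

def rodrigues(axis_angle):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    x, y, z = axis_angle
    th = math.sqrt(x * x + y * y + z * z)
    if th < 1e-12:
        return [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    x, y, z = x / th, y / th, z / th
    c, s, C = math.cos(th), math.sin(th), 1.0 - math.cos(th)
    return [[c + x * x * C, x * y * C - z * s, x * z * C + y * s],
            [y * x * C + z * s, c + y * y * C, y * z * C - x * s],
            [z * x * C - y * s, z * y * C + x * s, c + z * z * C]]

def pose_error(theta_est, t_est, theta_gt, t_gt):
    """Max of rotation-angle difference (degrees) and translation
    distance (assumed cm), as described in the text."""
    R1, R2 = rodrigues(theta_est), rodrigues(theta_gt)
    # trace(R2^T R1) = 1 + 2*cos(angle between the rotations)
    tr = sum(R2[r][c] * R1[r][c] for r in range(3) for c in range(3))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_gt)))
    return max(ang, dist)
```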

Since the dataset does not include a designated validation set, we separated multiple blocks of consecutive frames from the training data of each scene to be used as validation data. We fixed all learning hyperparameters on the validation set (e.g. learning rate and total number of parameter updates). Once all hyperparameters are fixed, we re-train on the full training set.

4.1 Componentwise Training

Our pipeline contains two trainable components, namely the Coordinate CNN and the Score CNN. First, we explain how to train both components using surrogate losses, i.e. train them not in an end-to-end fashion but separately. End-to-end training using differentiable RANSAC will be discussed in Sec. 4.2.

Scene Coordinate Regression. Similar to Brachmann et al. [5], we use the depth information of training images to generate scene coordinate ground truth. Alternatively, this ground truth can also be rendered using the available 3D models. We train the Coordinate CNN using the following surrogate loss: $\lVert y_i(w) - y_i^* \rVert$, where $y_i(w)$ is the scene coordinate prediction and $y_i^*$ is the ground truth. We also experimented with other losses, including the squared distance $\lVert \cdot \rVert^2$, Huber [18] and Tukey [3], which consistently performed worse on the validation set.

We trained with mini batches of 64 randomly sampled training patches, using the Adam [21] optimizer. We cut the learning rate in half after every 50k updates, and train for a total of 300k updates.

Score Regression. We synthetically created data to train the Score CNN in the following way. By adding noise to the ground truth pose of training images, we generated poses above and below the pose error threshold of $5^\circ$ and 5cm. Using the scene coordinate predictions of the trained Coordinate CNN, we compute reprojection error images of these poses. Poses with a large pose error w.r.t. the ground truth pose will lead to large reprojection errors, and we want the Score CNN to predict a small score. Poses close to the ground truth will lead to small reprojection errors, and we want the Score CNN to predict a high score. More formally, the pose error of a hypothesis should be negatively correlated with the score prediction. Thus, we train the Score CNN to regress a target score that decreases with the pose error of the hypothesis, scaled by a parameter $\beta$. Parameter $\beta$ controls the broadness of the score distribution after applying the softmax. We use this distribution for the weights in SoftAM (see Eq. 5) and to sample a hypothesis in DSAC (see Eq. 6). The value of $\beta$ chosen on the validation set gave reasonable distributions, i.e. poses close to the ground truth had a high probability to be selected, and poses far away from the ground truth had a low probability to be selected.
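The role of $\beta$ can be illustrated as follows. The exact score target of the paper is not reproduced here; we only assume, as the text states, that scores are negatively correlated with pose error and scaled by $\beta$ before the softmax. A small $\beta$ then yields a broad (high-entropy) distribution, a large $\beta$ a peaked one:

```python
import math

def softmax(s):
    m = max(s)
    e = [math.exp(x - m) for x in s]
    z = sum(e)
    return [x / z for x in e]

def score_distribution(pose_errors, beta):
    """Hypothetical score target: proportional to the negative pose
    error, scaled by beta, then softmaxed (cf. Eqs. 5-6)."""
    return softmax([-beta * e for e in pose_errors])

def entropy(p):
    """Shannon entropy of a discrete distribution (in nats)."""
    return -sum(x * math.log(x) for x in p if x > 0)
```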

We trained the Score CNN with a batch size of 64 reprojection error images of randomly generated poses, using Adam [21] for optimization. We train for a total of 2k updates.

           Sparse         Brachmann   Ours: Trained Componentwise   Ours: Trained End-To-End
           Features [36]  et al. [5]  RANSAC   SoftAM   DSAC        SoftAM          DSAC
Chess      70.7%          94.9%       94.9%    94.8%    94.7%       94.2% (-0.6%)   94.6% (-0.1%)
Fire       49.9%          73.5%       75.1%    75.6%    75.3%       76.9% (+1.3%)   74.3% (-1.0%)
Heads      67.6%          48.1%       72.5%    74.5%    71.9%       74.0% (-0.5%)   71.7% (-0.2%)
Office     36.6%          53.2%       70.4%    71.3%    69.2%       56.6% (-14.7%)  71.2% (+2.0%)
Pumpkin    21.3%          54.5%       50.7%    50.6%    50.3%       51.9% (+1.3%)   53.6% (+3.3%)
Kitchen    29.8%          42.2%       47.1%    47.8%    46.2%       46.2% (-1.6%)   51.2% (+5.0%)
Stairs      9.2%          20.1%        6.2%     6.5%     5.3%        5.5% (-1.0%)    4.5% (-0.8%)
Average    40.7%          55.2%       59.5%    60.1%    59.0%       57.9% (-2.2%)   60.1% (+1.1%)
Complete   38.6%          55.2%       61.0%    61.6%    60.3%       57.8% (-3.8%)   62.5% (+2.2%)
Table 1: Accuracy measured as the percentage of test images where the pose error is below 5cm and $5^\circ$. Complete denotes the combined set of frames (17000) of all scenes. Numbers in parentheses denote the change in accuracy after end-to-end training for SoftAM resp. DSAC, compared to componentwise training.
Brachmann et al. [5]           4.5cm, 2.0°
Ours, Trained Componentwise
  RANSAC                       4.0cm, 1.6°
  SoftAM                       3.9cm, 1.6°
  DSAC                         4.0cm, 1.6°
Ours, Trained End-To-End
  SoftAM                       4.0cm, 1.6°
  DSAC                         3.9cm, 1.6°
Table 2: Median pose errors on the complete 7-Scenes dataset (17000 frames).

Results. We report the accuracy of our pipeline, trained componentwise, in Table 1. We present the accuracy per scene and the average over scenes. Since scenes with few test frames like Stairs and Heads are overrepresented in the average, we additionally show accuracy on the dataset as a whole (denoted Complete, i.e. 17000 test frames).

We distinguish between RANSAC, i.e. non-differentiable hypothesis selection, SoftAM, i.e. differentiable soft hypothesis selection, and DSAC, i.e. differentiable probabilistic hypothesis selection.

As can be seen in Table 1, RANSAC, SoftAM and DSAC achieve very similar results when trained componentwise. The probabilistic hypothesis selection of DSAC results in a slightly reduced accuracy of -0.7% on the complete dataset, compared to RANSAC.

We compare our pipeline to the sparse features baseline presented in [36] and to the pipeline of Brachmann et al. [5], which is the current state of the art on this dataset. All variants of our pipeline surpass, on average, the accuracy of both competitors. Note that, conceptually, the main advantage over Brachmann et al. [5] is the new Score CNN. We also measured the median pose error of all frames in the dataset, see Table 2. Compared to Brachmann et al. [5] we are able to decrease both the rotational and the translational error. PoseNet [20] reports median translational errors of around 40cm per scene, so it cannot compete in terms of accuracy.

4.2 End-to-End Training

In order to facilitate end-to-end learning as described in Sec. 2, some parts of the pipeline need to be differentiable, which might not be immediately obvious. We already introduced the differentiable pose loss. Furthermore, we need to differentiate the model function $H$ and the refinement $R$ w.r.t. the learnable parameters.

In our application, $H$ is the PNP algorithm. Off-the-shelf implementations (e.g. [12, 24]) are fast enough for calculating the derivatives via central differences.

Refinement $R$ involves determining inlier sets and re-solving PNP in multiple iterations. This procedure is non-differentiable because of the hard inlier selection. However, because the number of inliers is large (up to 100 in our case), refined poses tend to vary smoothly with changes to the input scene coordinates. Hence, we treat the refinement procedure as a black box, and calculate derivatives via central differences as well. For stability, we stop refinement early in case less than 50 inliers have been found. Because of the large number of inputs, and to keep central differences tractable, we subsample the scene coordinates for which gradients are calculated (we use 1%), and correct the gradient magnitude accordingly (i.e. by a factor of 100).
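The subsampled central-difference scheme can be sketched as follows, with a toy function standing in for the black-box refinement; `frac` is the fraction of inputs sampled and, as one plausible reading of the correction described in the text, the per-coordinate estimates are rescaled by `1/frac`:

```python
import random

def subsampled_central_diff(f, x, frac=0.01, eps=1e-3, rng=None):
    """Black-box gradient of f at x: central differences on a random
    subset of coordinates (fraction `frac`), rescaled by 1/frac so the
    gradient magnitude matches the full computation in expectation."""
    rng = rng or random.Random(0)
    n = len(x)
    k = max(1, int(n * frac))
    grad = [0.0] * n
    for i in rng.sample(range(n), k):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (f(xp) - f(xm)) / (2.0 * eps) / frac
    return grad
```

Only the sampled coordinates receive non-zero gradients; the rescaling keeps the expected update magnitude comparable to differentiating all inputs.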

Similar to e.g. [41] or [20], we found it important to have a good initialization when learning end-to-end. Learning from scratch quickly reached a local minimum. Hence, we initialize the Coordinate CNN and the Score CNN with componentwise training, see Sec. 4.1.

We found the same set of training hyperparameters to work well on the validation set for both SoftAM and DSAC. We use fixed, separate learning rates for the Coordinate CNN and the Score CNN. Our end-to-end pipeline contains substantial stochasticity because of the sampling of minimal sets. Instead of the Adam procedure, which was unstable in this setting, we use stochastic gradient descent with momentum [33] of 0.9, and we clamp all gradients to the range of -0.1 to 0.1 before passing them to the Score CNN or the Coordinate CNN. We train for 5k updates.

Results. See Table 1 for results of both strategies. Compared to the initialization (trained componentwise), we observe a significant improvement for DSAC (+2.2% on the complete dataset). DSAC improves some scenes considerably, with the strongest effects for Pumpkin (+3.3%) and Kitchen (+5.0%). SoftAM significantly decreases accuracy compared to the componentwise initialization (-3.8% on the complete dataset). SoftAM overfits severely on the Office scene (-14.7%) and decreases accuracy for most other scenes.

The pipeline learned end-to-end with DSAC improves on the results of Brachmann et al. [5] by 4.9% (scene average) resp. 7.3% (complete set). DSAC also improves the median pose error, see Table 2.

4.3 Insights and Detailed Studies

Figure 3: (a) Effect of end-to-end learning on pose accuracy w.r.t. individual components. (b) Effect of end-to-end training on the average entropy of the score distribution. See text for details.

Ablation Study. We study the effect of learning the Score CNN and the Coordinate CNN in an end-to-end fashion, individually. We use componentwise training as initialization for both CNNs. See Fig. 3 a) for results on the complete set. For DSAC, training both components in an end-to-end fashion is important for best accuracy. For SoftAM, we see that its poor results are not due to overfitting of the Score CNN, but due to its way of learning the Coordinate CNN.

Figure 4: Prediction quality. We analyze scene coordinate prediction quality on an Office test image (a) with ground truth scene coordinates (b) (XYZ mapped to RGB). The prediction after componentwise training can be seen in (c). We visualize the relative change of prediction error w.r.t. componentwise training in (d) for SoftAM, resp. in (e) for DSAC. We observe an aggressive strategy of SoftAM which focuses large improvements on small areas (14% of predictions improve). DSAC shows small improvements but on large areas (38% of predictions improve). Note that DSAC achieves superior pose accuracy on this scene.

Analysis of Scene Coordinate Predictions. In the componentwise training, the Coordinate CNN learned to minimize the surrogate loss, i.e. the distance of scene coordinate predictions w.r.t. the ground truth. In Fig. 4, we visualize how the prediction of the Coordinate CNN changes when trained in an end-to-end fashion, i.e. to minimize the task loss. Both end-to-end learning strategies, SoftAM and DSAC, increase the accuracy of scene coordinate predictions in some areas of the scene at the cost of decreasing the accuracy in other areas. We observe very extreme changes for the SoftAM strategy, i.e. the increase and decrease in scene coordinate accuracy is large in magnitude, and improvements are focused on small scene areas. The DSAC strategy leads to a much more cautious tradeoff, i.e. changes are smaller and widespread. Note that we use identical learning parameters for both strategies. We conclude that SoftAM tends to overfit due to overly aggressive changes in scene coordinate predictions.

Score Distribution Entropy. See Fig. 3 b) for an analysis of the effect of end-to-end learning on the average entropy of the softmax score distribution (see Eq. 5). We observe a reduction in entropy for the SoftAM strategy. The larger the pose error of a hypothesis, the larger is also its influence on the pose average (see Eq. 4). SoftAM has to down-weight such poses aggressively to obtain a good average. DSAC can allow for a broader distribution (only a slight decrease in entropy compared to the original RANSAC) because poses that are unlikely to be chosen do not affect the loss of poses that are likely to be chosen. This is an additional factor in the stability of DSAC.
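To make the entropy comparison concrete, the following is a minimal sketch (not the paper's implementation; the example score vectors are invented for illustration) of computing the entropy of a softmax score distribution. A peaked distribution, as SoftAM tends to produce, has low entropy; a broad, near-uniform one, as DSAC retains, has entropy close to log(N).

```python
import numpy as np

def score_entropy(scores):
    """Entropy (in nats) of the softmax distribution over hypothesis scores."""
    # Subtract the max score for numerical stability before exponentiating.
    p = np.exp(scores - np.max(scores))
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

# Invented example scores: a peaked (SoftAM-like) and a broad (DSAC-like) case.
peaked = score_entropy(np.array([10.0, 0.0, 0.0, 0.0]))  # near 0
broad = score_entropy(np.array([1.0, 0.9, 1.1, 1.0]))    # near log(4)
```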

Restoring the Selection. After end-to-end training, one may restore the original RANSAC algorithm, i.e. select hypotheses deterministically according to the maximal score (argmax). In this case, the average accuracy of DSAC is unaffected, while the accuracy of SoftAM decreases.
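The switch between probabilistic selection (used during end-to-end training) and deterministic argmax selection (the original RANSAC behavior, restorable at test time) can be sketched as follows. This is an illustrative simplification, not the authors' code; the function and variable names are our own.

```python
import numpy as np

def softmax(scores):
    p = np.exp(scores - np.max(scores))
    return p / p.sum()

def select_hypothesis(scores, probabilistic, rng=None):
    """Return the index of the selected hypothesis.

    probabilistic=True samples from the softmax score distribution
    (DSAC training); probabilistic=False restores the deterministic
    argmax selection of the original RANSAC."""
    if probabilistic:
        rng = rng or np.random.default_rng()
        return int(rng.choice(len(scores), p=softmax(scores)))
    return int(np.argmax(scores))

scores = np.array([0.2, 2.5, 0.1, 1.0])
deterministic = select_hypothesis(scores, probabilistic=False)  # always index 1
sampled = select_hypothesis(scores, probabilistic=True,
                            rng=np.random.default_rng(0))
```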

Test Time. The scene coordinate prediction takes 0.5s on a Tesla K80 GPU. Pose optimization takes 1s. The runtime of hypothesis selection (RANSAC) or probabilistic selection (DSAC) is identical and negligible.

Multi-Modality. Compared to Brachmann et al. [5], our pipeline does not perform as well on the Stairs scene (see Table 1). We attribute this to the fact that the Coordinate CNN predicts only uni-modal point estimates, whereas the random forest of [5] predicts multi-modal scene coordinate distributions. The Stairs scene contains many repeating structures, so we expect multi-modal predictions to help. We also expect poor performance of the SoftAM strategy when pose hypothesis distributions are multi-modal, because an average is likely to be a bad representative of either mode. In contrast, DSAC can probabilistically select the correct mode. We conclude that multi-modality in scene coordinate predictions and pose hypothesis distributions is a promising direction for future work.

5 Conclusion

We presented two strategies for differentiating the RANSAC algorithm: using a soft argmax operator, and probabilistic selection. Based on our experimental evaluation, we conclude that probabilistic selection is superior and call this approach DSAC. We demonstrated the use of DSAC for learning a camera localization pipeline end-to-end. However, DSAC can be deployed in any deep learning pipeline where robust optimization is beneficial, for example for learning structure from motion or SLAM end-to-end.


Acknowledgements. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 647769). The computations were performed on an HPC Cluster at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden. We thank the Torr Vision Group of the University of Oxford for inspiring discussions.

Appendix A Derivatives

This appendix contains additional information on the derivative of the task loss function (resp. the expectation thereof) for the SoftAM and DSAC learning strategies. In the second part of the appendix, we illustrate some difficulties of camera localization on the 7-Scenes dataset to motivate the usage of a RANSAC schema for this problem.

A.1 Soft Selection (SoftAM)

To learn our camera localization pipeline in an end-to-end fashion, we have to calculate the derivatives of the task loss function w.r.t. the learnable parameters. In the following, we show the derivative w.r.t. one set of parameters; the derivation w.r.t. the other set works analogously. Applying the chain rule and calculating the total derivative of the task loss, we get:


Since the estimated pose is a weighted average of hypotheses (see Eq. 4), we can differentiate it as follows:


The weights follow a softmax distribution over hypothesis scores (see Eq. 5). Hence, we can differentiate them as follows:
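Since the rendered equations of this appendix did not survive extraction, the following is a hedged LaTeX sketch of the soft selection and the softmax derivative involved. The symbol names ($h_j$ for hypotheses, $s_j$ for scores, $w_j$ for softmax weights, $\bar{h}$ for the soft-averaged pose, $\ell$ for the task loss) are our assumptions and not necessarily the paper's notation.

```latex
% Soft argmax: pose as a softmax-weighted average of hypotheses (cf. Eq. 4, 5)
\bar{h} = \sum_j w_j\, h_j,
\qquad
w_j = \frac{\exp(s_j)}{\sum_k \exp(s_k)}

% Chain rule through the weighted average, using the standard softmax derivative
\frac{\partial \ell(\bar{h})}{\partial s_k}
  = \frac{\partial \ell}{\partial \bar{h}} \cdot
    \sum_j h_j \frac{\partial w_j}{\partial s_k},
\qquad
\frac{\partial w_j}{\partial s_k} = w_j \left(\delta_{jk} - w_k\right)
```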


A.2 Probabilistic Selection (DSAC)

Using the DSAC strategy, we learn our camera localization pipeline by minimizing the expectation of the task loss function:


where we introduce a shorthand for the task loss of the selected hypothesis. We differentiate the expectation following Eq. 10, and the log probabilities as:
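As the equations themselves are missing from this extraction, the following is a hedged LaTeX sketch of the score-function (REINFORCE-style) gradient involved. The notation ($p(j\,|\,\mathbf{w})$ for the softmax probability of selecting hypothesis $j$, $\ell_j$ for its task loss, $s_k$ for scores) is our assumption, not necessarily the paper's.

```latex
% Gradient of the expected task loss w.r.t. parameters w
% (score-function / log-derivative estimator)
\frac{\partial}{\partial \mathbf{w}}
  \mathbb{E}_{j \sim p(j|\mathbf{w})}\!\left[\ell_j\right]
  = \mathbb{E}_{j \sim p(j|\mathbf{w})}\!\left[
      \ell_j \, \frac{\partial}{\partial \mathbf{w}} \log p(j|\mathbf{w})
    \right]

% For a softmax over scores s_k, the log-probability derivative is
\frac{\partial \log p(j|\mathbf{w})}{\partial s_k}
  = \delta_{jk} - p(k|\mathbf{w})
```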


Appendix B Difficulty of the 7-Scenes Dataset

Please see Fig. 5 for examples of difficult situations in the 7-Scenes dataset. In our experiments, inlier ratios of scene coordinate predictions range from 5% to 85%. See Fig. 6 (left) for the inlier ratio distribution over the complete 7-Scenes dataset. In accordance with [36, 5], we consider a scene coordinate prediction an inlier if it is within 10cm of the ground truth scene coordinate. In Fig. 6 (right), we plot the performance of DSAC against the ratio of inliers. For comparison, we plot the performance of a naive approach without RANSAC (a pose fit to all scene coordinate predictions).
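The 10 cm inlier criterion above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the example coordinates are invented.

```python
import numpy as np

def inlier_ratio(pred_coords, gt_coords, threshold=0.10):
    """Fraction of scene coordinate predictions (in meters) that lie within
    `threshold` (10 cm, following [36, 5]) of the ground truth coordinate."""
    dists = np.linalg.norm(pred_coords - gt_coords, axis=-1)
    return float(np.mean(dists < threshold))

# Invented example: 2 of 4 predictions are within 10 cm of the ground truth.
gt = np.zeros((4, 3))
pred = np.array([[0.05, 0.0, 0.0],
                 [0.20, 0.0, 0.0],
                 [0.0, 0.02, 0.0],
                 [0.0, 0.0, 1.0]])
ratio = inlier_ratio(pred, gt)  # 0.5
```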

Figure 5: Difficult frames within the 7-Scenes dataset: Texture-less surfaces (upper left), motion blur (upper right), reflections (lower left), and repeating structures (lower right). DSAC estimates the correct pose in all 4 cases.
Figure 6: Distribution of inlier ratios of our scene coordinate predictions (left), and corresponding pose estimation accuracy of DSAC compared to a naive approach without RANSAC (right).

Appendix C Calculation of the Camera Pose Error

An earlier version of this work differed in the exact numbers presented in Table 1. However, those numbers were produced with an error in the camera pose evaluation. In our formulation of the camera localization problem, we search for the pose which aligns scene coordinates and their projections. However, in this formulation, the estimated transformation is not the camera pose but the scene pose, i.e. the inverse camera pose. The calculation of the pose error (rotational and translational error) depends on whether it is computed for the scene pose or the camera pose. For the camera pose, rotational errors additionally contribute to translational errors, which therefore tend to be larger. Using the correct pose evaluation (i.e. using the camera pose to calculate rotational and translational errors), our results decreased. However, we found that the implementation of the PnP algorithm used in our experiments was a major limiting factor w.r.t. accuracy. Exchanging it for a standard, iterative PnP algorithm improved our results considerably, yielding the numbers we present in the current version of this work. Note that all conclusions drawn throughout the experimental section were valid for both the original and the updated version of this work.
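The scene-pose/camera-pose distinction can be demonstrated numerically. The sketch below (our own illustration, not the paper's evaluation code) inverts a rigid transform and computes rotational and translational errors; with identical translations but a 10° rotation error, the scene-pose translation error is zero, yet after inverting to camera poses the rotation error leaks into the translation error, as described above.

```python
import numpy as np

def invert_pose(R, t):
    """Invert a rigid transform: if (R, t) is the scene pose,
    the camera pose is (R^T, -R^T t)."""
    return R.T, -R.T @ t

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotational error (degrees) and translational error (meters)."""
    # Angle of the relative rotation R_est^T R_gt, recovered from its trace.
    cos_angle = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), float(np.linalg.norm(t_est - t_gt))

# Scene poses: identical translation, but a 10-degree rotation error about z.
theta = np.radians(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

scene_err = pose_errors(Rz, t, np.eye(3), t)  # translation error is 0
cam_err = pose_errors(*invert_pose(Rz, t), *invert_pose(np.eye(3), t))
```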


References

  • [1] R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In CVPR, 2016.
  • [2] R. Arandjelovic and A. Zisserman. All about VLAD. In CVPR, 2013.
  • [3] A. E. Beaton and J. W. Tukey. The fitting of power series, meaning polynomials, illustrated on band-spectroscopic data. Technometrics, 1974.
  • [4] E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton, and C. Rother. Learning 6d object pose estimation using 3d object coordinates. In ECCV, 2014.
  • [5] E. Brachmann, F. Michel, A. Krull, M. Y. Yang, S. Gumhold, and C. Rother. Uncertainty-driven 6d pose estimation of objects and scenes from a single rgb image. In CVPR, 2016.
  • [6] O. Chapelle and M. Wu. Gradient descent optimization of smoothed information retrieval metrics. Information Retrieval, 2010.
  • [7] O. Chum and J. Matas. Matching with PROSAC – progressive sample consensus. In CVPR, 2005.
  • [8] O. Chum, J. Matas, and J. Kittler. Locally Optimized RANSAC. 2003.
  • [9] D. DeTone, T. Malisiewicz, and A. Rabinovich. Deep image homography estimation. CoRR, 2016.
  • [10] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015.
  • [11] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 1981.
  • [12] X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng. Complete solution classification for the perspective-three-point problem. TPAMI, 2003.
  • [13] R. Girshick. Fast r-cnn. In ICCV, 2015.
  • [14] A. Guzman-Rivera, P. Kohli, B. Glocker, J. Shotton, T. Sharp, A. Fitzgibbon, and S. Izadi. Multi-output learning for camera relocalization. In CVPR, 2014.
  • [15] R. Hartley, K. Aftab, and J. Trumpf. L1 rotation averaging using the Weiszfeld algorithm. In CVPR, 2011.
  • [16] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2004.
  • [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, 2015.
  • [18] P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 1964.
  • [19] A. Kanazawa, D. W. Jacobs, and M. Chandraker. Warpnet: Weakly supervised matching for single-view reconstruction. CoRR, 2016.
  • [20] A. Kendall, M. Grimes, and R. Cipolla. Posenet: A convolutional network for real-time 6-dof camera relocalization. In ICCV, 2015.
  • [21] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, 2014.
  • [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [23] A. Krull, E. Brachmann, F. Michel, M. Y. Yang, S. Gumhold, and C. Rother. Learning analysis-by-synthesis for 6d pose estimation in rgb-d images. In ICCV, 2015.
  • [24] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: An accurate O(n) solution to the PnP problem. IJCV, 2009.
  • [25] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [26] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
  • [27] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós. ORB-SLAM: a versatile and accurate monocular SLAM system. CoRR, 2015.
  • [28] D. Nistér. Preemptive ransac for live structure and motion estimation. In ICCV, 2003.
  • [29] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In CVPR, 2007.
  • [30] R. Raguram, J.-M. Frahm, and M. Pollefeys. A comparative analysis of ransac techniques leading to adaptive real-time random sample consensus. In ECCV, 2008.
  • [31] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. CoRR, 2015.
  • [32] J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid. Deepmatching: Hierarchical deformable dense matching. IJCV, 2016.
  • [33] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Cognitive modeling, 1988.
  • [34] J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.
  • [35] S. Shalev-Shwartz and A. Shashua. On the sample complexity of end-to-end training vs. semantic abstraction training. CoRR, 2016.
  • [36] J. Shotton, B. Glocker, C. Zach, S. Izadi, A. Criminisi, and A. Fitzgibbon. Scene coordinate regression forests for camera relocalization in rgb-d images. In CVPR, 2013.
  • [37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, 2014.
  • [38] J. Thewlis, S. Zheng, P. Torr, and A. Vedaldi. Fully-trainable deep matching. In BMVC, 2016.
  • [39] P. H. S. Torr and A. Zisserman. MLESAC: A new robust estimator with application to estimating image geometry. CVIU, 2000.
  • [40] J. Valentin, M. Nießner, J. Shotton, A. Fitzgibbon, S. Izadi, and P. H. S. Torr. Exploiting uncertainty in regression forests for accurate camera relocalization. In CVPR, 2015.
  • [41] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. Lift: Learned invariant feature transform. In ECCV, 2016.
  • [42] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.