Convex Relaxations for Pose Graph Optimization with Outliers

Luca Carlone and Giuseppe C. Calafiore

L. Carlone is with the Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, USA, lcarlone@mit.edu. G. Calafiore is with the Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, Torino, Italy, giuseppe.calafiore@polito.it.
Abstract

Pose Graph Optimization involves the estimation of a set of poses from pairwise measurements and provides a formalization for many problems arising in mobile robotics and geometric computer vision. In this paper, we consider the case in which a subset of the measurements fed to pose graph optimization is spurious. Our first contribution is to develop robust estimators that can cope with heavy-tailed measurement noise, hence increasing robustness to the presence of outliers. Since the resulting estimators require solving nonconvex optimization problems, we further develop convex relaxations that approximately solve those problems via semidefinite programming. We then provide conditions under which the proposed relaxations are exact. Contrary to existing approaches, our convex relaxations do not rely on the availability of an initial guess for the unknown poses, hence they are more suitable for setups in which such a guess is not available (e.g., multi-robot localization, recovery after localization failure). We tested the proposed techniques in extensive simulations, and we show that some of the proposed relaxations are indeed tight (i.e., they solve the original nonconvex problem exactly) and ensure accurate estimation in the face of a large number of outliers.

This paper has been accepted for publication in the IEEE Robotics and Automation Letters.

Please cite the paper as: L. Carlone and G. Calafiore, “Convex Relaxations for Pose Graph Optimization with Outliers”, IEEE Robotics and Automation Letters (RA-L), 2018.

I Introduction

Pose Graph Optimization (PGO) is the problem of estimating a set of poses (i.e., rotations and translations) from pairwise relative pose measurements. This model often arises from a maximum likelihood (ML) approach to geometric estimation problems in robotics and computer vision. For instance, in the context of robot Simultaneous Localization and Mapping (SLAM), PGO is used to estimate the trajectory of the robot (as a discrete set of poses), which in turn allows the reconstruction of a map of the environment, see, e.g., [3]. Another example is multi-robot localization, in which multiple robots estimate their poses from pairwise measurements, see [1]. Similarly, a variant of PGO arises in Structure from Motion (SfM) in computer vision, as a workhorse to estimate the poses (up to an unknown scale) of the cameras observing a 3D scene.

PGO leads to a hard nonconvex optimization problem, whose (global) solution is the ML estimate of the unknown poses. While the standard approach to PGO has been to use iterative nonlinear optimization methods, such as the Gauss-Newton method [17, 19] or the gradient method [13, 24], which only guarantee locally optimal solutions, a recent set of works shows how to compute globally optimal solutions to PGO via convex relaxations [7, 8, 4, 27]. These works demonstrate that, in the noise regimes encountered in practical applications, the (nonconvex) PGO problem exhibits zero duality gap, which implies that it can be solved exactly via convex relaxations.

An outstanding problem in PGO is how to make the estimation robust to the presence of spurious measurements. Standard PGO assumes a nominal distribution for the measurement noise (e.g., Gaussian noise on translation measurements), and it produces largely incorrect estimates in the presence of outliers, i.e., measurements that deviate from this nominal distribution. This issue limits robust operation in practical applications in which the presence of outliers is unavoidable. Outliers may be due to sensor failures, but they are more commonly associated with incorrect data association, see [3].

Related Work. We partition the existing literature into outlier mitigation and outlier rejection techniques. The former techniques estimate the poses while trying to reduce the influence of the outliers. The latter explicitly include binary decision variables to establish whether a given measurement is an outlier or not. Traditionally, outlier mitigation in SLAM and SfM relied on the use of robust M-estimators, see [16]. For instance, the use of the Huber loss is a fairly standard choice in SfM, see [15]. Along this line, Agarwal et al., in [25], dynamically adjust the measurement covariances to reduce the influence of measurements with large errors. Olson and Agarwal, in [23], use a max-mixture distribution to accommodate multiple hypotheses on the noise distribution of a measurement. Casafranca et al., in [9], propose to minimize the $\ell_1$-norm of the residual errors and design an iterative scheme to locally solve the corresponding optimization. Lee et al., in [21], use expectation maximization. Pfingsthorn and Birk [26] model ambiguous measurements using hyperedges and multimodal Mixture of Gaussian constraints that augment the pose graph.

Outlier rejection techniques aim at explicitly identifying the spurious measurements. A popular technique is RANSAC, see [10], in which subsets of the measurements are sampled in order to identify an outlier-free set. Sünderhauf and Protzel, in [29, 30], propose to augment the pose graph optimization problem with latent binary variables that are responsible for deactivating outliers. Similar ideas appear in the context of robust estimation, e.g., the Penalized Trimmed Squares estimator of Zioutas and Avramidis, [33]. Latif et al. in [20] and Graham et al. in [11] look for “internally consistent” constraints, which are in mutual agreement. Carlone et al. in [6] use $\ell_1$-relaxation to find a large set of mutually-consistent measurements.

A main drawback of the techniques mentioned above is that they rely on the availability of an initial estimate of the poses. This is problematic for two reasons. First, the performance of these techniques heavily depends on the initial guess, and they perform poorly when the initial guess is noisy, as often happens in practice. Second, in many applications, the initial guess is simply not available (e.g., in multi-robot localization). Recent work in computer vision attempts to overcome the need for an initial guess. In particular, Wang and Singer in [32] provide a convex relaxation for robust rotation estimation. In this paper, we extend [32] to work on poses (rather than rotations), and consider a broader set of robust cost functions.

Contribution. Our first contribution is to propose robust PGO formulations that can cope with heavy-tailed measurement noise. We consider cost functions based on the unsquared $\ell_2$-norm, the $\ell_1$-norm, and the Huber loss, and, when possible, we provide a probabilistic justification in terms of ML estimation.

Since the resulting optimization problems are nonconvex, our second contribution is to provide a systematic way to derive convex relaxations for those problems, taking the form of semidefinite programs (SDP). The key advantage of our convex relaxations is that they do not require an initial guess, which makes them suitable for problem instances where such estimate is not available or unreliable.

As a third contribution, we provide conditions under which the proposed relaxations are exact, as well as general bounds on the suboptimality of the relaxed solution. Similarly to related works such as [4, 27], these conditions involve the rank of a matrix appearing in the semidefinite relaxation.

Finally, we test the performance of the proposed relaxations in extensive Monte Carlo simulations. The experimental results show that a subset of the convex relaxations discussed in this paper are indeed tight, and work extremely well in simulated problem instances, showing insensitivity to a large amount of outliers, and outperforming related techniques. The paper is tailored to 2D PGO problems, while the approaches and the theoretical guarantees trivially extend to the 3D setup.

Notation. We use lowercase and uppercase bold letters to denote vectors ($\mathbf{x}$) and matrices ($\mathbf{M}$), respectively. Non-bold letters are used for scalars ($x$) and function names ($f$). The identity matrix of size $n$ is denoted with $I_n$. An $m \times n$ zero matrix is denoted by $0_{m \times n}$. The symbols $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_F$ denote the $\ell_1$, $\ell_2$, and Frobenius norm, respectively. Given a random variable $x$, we use the notation $p(x; \theta)$ to denote the probability density of $x$ with parameters $\theta$, and we also write $x \sim p(x; \theta)$.

II Pose Graph Optimization

PGO estimates $n$ poses from relative pose measurements. We consider a planar setup, in which each to-be-estimated pose $x_i = (R_i, t_i)$, $i = 1, \ldots, n$, comprises a translation vector $t_i \in \mathbb{R}^2$ and a rotation matrix $R_i \in SO(2)$. For a pair of poses $(i, j)$, a relative pose measurement $(t_{ij}, R_{ij})$, with $t_{ij} \in \mathbb{R}^2$ and $R_{ij} \in SO(2)$, describes a noisy measurement of the relative pose between $x_i$ and $x_j$. Each measurement is assumed to be sampled from the following generative model:

$$ t_{ij} = R_i^{\mathsf{T}}(t_j - t_i) + t_{ij}^{\epsilon}, \qquad R_{ij} = R_i^{\mathsf{T}} R_j \, R_{ij}^{\epsilon} \tag{1} $$

where $t_{ij}^{\epsilon} \in \mathbb{R}^2$ and $R_{ij}^{\epsilon} \in SO(2)$ represent translation and rotation measurement noise, respectively.

The problem can be elegantly modeled through graph formalism: each to-be-estimated pose $x_i$ is associated with a vertex (or node) in the vertex set $V$ of a directed graph, while each measurement $(t_{ij}, R_{ij})$ is associated with an edge $(i, j)$ in the edge set $E$ of the graph. We denote by $n$ the number of nodes and by $m$ the number of edges. The resulting graph, namely $G = (V, E)$, is usually referred to as a pose graph.

II-A Standard PGO

Given the generative model (1), standard approaches to PGO formulate the pose estimation problem in terms of ML estimation: the best pose estimate is the one maximizing the likelihood of the available measurements $\{(t_{ij}, R_{ij})\}_{(i,j) \in E}$. Since the expression of the measurement likelihood depends on the assumed distribution of the measurement noise, a ubiquitous assumption in this endeavor is that the translation noise is distributed according to a zero-mean Normal distribution with given covariance $\sigma^2 I_2$, i.e., $t_{ij}^{\epsilon} \sim N(0, \sigma^2 I_2)$. For the rotation noise, it has been recently proposed to use the Von Mises distribution as noise model, as this leads to simpler estimators and is easier to analyze, see [4]. In particular, it is assumed that $R_{ij}^{\epsilon}$ in (1) represents a rotation of a random angle $\theta_{ij}^{\epsilon}$, distributed according to the Von Mises distribution:

$$ p(\theta; \mu, \kappa) = c_{\kappa} \, e^{\kappa \cos(\theta - \mu)} \tag{2} $$

with mean $\mu$, concentration parameter $\kappa \ge 0$, and where $c_{\kappa}$ is a normalization constant that is irrelevant for ML estimation.
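To make the noise model concrete, the following sketch samples a relative measurement from the generative model (1), with Gaussian translation noise and Von Mises rotation noise. It is an illustrative Python snippet (the paper's experiments use the MATLAB cvx toolbox); the function names and default parameter values are our own choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot2(theta):
    """2x2 planar rotation matrix of angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def sample_measurement(R_i, t_i, R_j, t_j, sigma=0.05, kappa=100.0):
    """Sample (t_ij, R_ij) from model (1); sigma and kappa are illustrative."""
    t_eps = rng.normal(0.0, sigma, size=2)    # t^eps ~ N(0, sigma^2 I_2)
    theta_eps = rng.vonmises(0.0, kappa)      # theta^eps ~ VonMises(0, kappa)
    t_ij = R_i.T @ (t_j - t_i) + t_eps
    R_ij = R_i.T @ R_j @ rot2(theta_eps)
    return t_ij, R_ij
```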

These choices of the noise model lead to the following standard PGO formulation:

Proposition 1 (Standard PGO formulation).

Given the generative model (1) and assuming that the translation noise is $t_{ij}^{\epsilon} \sim N(0, \sigma^2 I_2)$ and that the rotation noise is $\theta_{ij}^{\epsilon} \sim \text{VonMises}(0, \kappa)$, the maximum likelihood estimate of the poses is the solution of the following minimization:

$$ \min_{\substack{t_i \in \mathbb{R}^2 \\ R_i \in SO(2)}} \ \sum_{(i,j) \in E} \omega_t \, \| t_j - t_i - R_i t_{ij} \|_2^2 + \omega_R \, \| R_j - R_i R_{ij} \|_F^2 \tag{3} $$

where the weights $\omega_t$ and $\omega_R$ are determined by the noise parameters $\sigma^2$ and $\kappa$.

A proof of this statement is given in [4, Proposition 1]. The estimation in (3) involves a nonconvex optimization problem, due to the nonconvexity of the set $SO(2)$. Surprisingly, recent results [4, 27] show that one can still compute a globally optimal solution to (3), whenever the measurement noise is reasonable, using convex relaxations.
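For reference, a direct evaluation of the cost (3) at a candidate estimate looks as follows; the edge container and the weights `w_t`, `w_R` are illustrative assumptions (in the paper they are derived from $\sigma^2$ and $\kappa$).

```python
import numpy as np

def pgo_cost(R, t, edges, w_t=1.0, w_R=1.0):
    """Evaluate the standard PGO cost (3).
    R: list of 2x2 rotations; t: list of length-2 arrays;
    edges: list of tuples (i, j, t_ij, R_ij)."""
    c = 0.0
    for (i, j, t_ij, R_ij) in edges:
        c += w_t * np.linalg.norm(t[j] - t[i] - R[i] @ t_ij) ** 2
        c += w_R * np.linalg.norm(R[j] - R[i] @ R_ij, 'fro') ** 2
    return c
```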

III Robust PGO formulations

The ML estimator in Proposition 1 is known to perform arbitrarily badly in the presence of even a single outlier. This is due to the quadratic nature of the cost, for which measurements with large residual errors dominate the other terms. This in turn depends on the fact that the estimator assumes a light-tailed noise distribution, i.e., the Normal distribution. Therefore, a natural way to gain robustness is to change the noise assumptions to take into account the presence of measurements with larger errors, and use noise distributions with “heavier-than-normal tails,” see, e.g., [18]. In the rest of this section we consider three alternative noise models and their induced cost functions. Since the corresponding optimization problems remain nonconvex, we discuss how to relax these formulations to convex programs in Sections IV and V.

III-A Least Unsquared Deviation: $\ell_2$-norm cost

We next introduce two distributions: the multivariate exponential power distribution, which we later use to model the translation noise, and the directional Laplace distribution, which we use to model noise in $SO(2)$.

Definition 2 (Multivariate Exponential Power distribution).

For a random variable $t \in \mathbb{R}^d$, the multivariate exponential power distribution with parameters $\mu$, $\Sigma$, and $\delta > 0$ is defined as:

$$ p(t; \mu, \Sigma, \delta) = c_{\delta} \, \exp\!\left( -\big( (t - \mu)^{\mathsf{T}} \Sigma^{-1} (t - \mu) \big)^{\delta/2} \right) \tag{4} $$

where $c_{\delta}$ is a normalization constant, independent of $t$.

Definition 3 (Directional Laplace Distribution).

For a random variable $\theta \in (-\pi, \pi]$, the Directional Laplace Distribution with mean $\mu$ and scale parameter $k > 0$ is defined as:

$$ p(\theta; \mu, k) = c_k \, \exp\!\left( -k \left| \sin\frac{\theta - \mu}{2} \right| \right) \tag{5} $$

where $c_k$ is a normalization constant, independent of $\theta$.

Definition 3 is a slight variation of the definition given in [22], in that it considers $(-\pi, \pi]$ as the angular domain, rather than the one adopted in [22]. Using Definitions 2-3, we can now introduce our first robust estimator.

Proposition 4 (Least Unsquared Deviation Estimator).

Given the generative model (1) and assuming that the translation noise is $t_{ij}^{\epsilon} \sim \text{EP}(0, \sigma^2 I_2, 1)$ and the rotation noise is $\theta_{ij}^{\epsilon} \sim \text{DL}(0, k)$, the ML estimate of the poses is the solution of the following minimization problem:

$$ \min_{\substack{t_i \in \mathbb{R}^2 \\ R_i \in SO(2)}} \ \sum_{(i,j) \in E} \omega_t \, \| t_j - t_i - R_i t_{ij} \|_2 + \omega_R \, \| R_j - R_i R_{ij} \|_F \tag{6} $$

where the weights $\omega_t$ and $\omega_R$ now depend on $\sigma$ and $k$.

The derivation of the ML estimator of Proposition 4 is straightforward, and proceeds by inspection, starting from the noise distributions (4) and (5), and using the relation

$$ \| R_j - R_i R_{ij} \|_F = 2\sqrt{2} \, \left| \sin\frac{\theta_j - \theta_i - \theta_{ij}}{2} \right| \tag{7} $$

where $\theta_i$, $\theta_j$, and $\theta_{ij}$ are the rotation angles corresponding to $R_i$, $R_j$, and $R_{ij}$, respectively.

The estimator (6) is similar to the standard PGO estimator (3), except for the fact that the $\ell_2$-norm and the Frobenius norm are unsquared. The intuition behind the use of (6) is that, since the cost is no longer quadratic, measurements with large residuals no longer dominate the objective.

III-B Least Absolute Deviation: $\ell_1$-norm cost

In this section we consider a formulation in which the cost includes the $\ell_1$-norm rather than the Euclidean norm.

Definition 5 (Least Absolute Deviation Estimator).

The Least Absolute Deviation Estimator of the poses in the pose graph is the solution of the following minimization problem:

$$ \min_{\substack{t_i \in \mathbb{R}^2 \\ R_i \in SO(2)}} \ \sum_{(i,j) \in E} \omega_t \, \big\| R_i^{\mathsf{T}}(t_j - t_i) - t_{ij} \big\|_1 + \omega_R \, \big\| R_i^{\mathsf{T}} R_j - R_{ij} \big\|_1 \tag{8} $$

where the $\ell_1$-norm of a matrix is understood entrywise.

The formulation in Definition 5 adopts an $\ell_1$-norm penalty, which is known to be less sensitive to outliers, see, e.g., [28, p. 10], hence we expect this formulation to perform better in the presence of spurious measurements. However, contrary to Proposition 4, we currently only have a partial probabilistic justification for the cost (8). More precisely, while it is known that the first term in the cost (8) follows from the assumption that the translation noise is distributed according to a Laplace distribution, see [28, p. 10], the choice of the second term (the entrywise $\ell_1$-norm of the rotation residual) is currently arbitrary, and only justified by its symmetry w.r.t. the first term.

III-C Huber loss

In this section we consider a popular M-estimator, based on the Huber loss function. While this is a commonly used robust estimator in SLAM and SfM, our novel contribution is to provide a convex relaxation, which is discussed in Section IV.

Definition 6 (Huber Estimator).

The Huber Estimator of the poses in the pose graph is the minimizer of the following optimization problem:

$$ \min_{\substack{t_i \in \mathbb{R}^2 \\ R_i \in SO(2)}} \ \sum_{(i,j) \in E} h\big( \omega_t \, \| t_j - t_i - R_i t_{ij} \|_2 \big) + h\big( \omega_R \, \| R_j - R_i R_{ij} \|_F \big) \tag{9} $$

where $h(\cdot)$ is the Huber loss function, defined as:

$$ h(x) = \begin{cases} x^2 & \text{if } |x| \le 1 \\ 2|x| - 1 & \text{otherwise} \end{cases} \tag{10} $$

The Huber loss in Eq. (10) is a quadratic function when the argument belongs to the interval $[-1, 1]$, while it is linear otherwise. Ideally, the inliers should fall in the quadratic region, where the Huber estimator behaves as the least squares estimator of Proposition 1; on the other hand, the outliers should ideally fall in the linear region, in which the Huber loss behaves as the least unsquared deviation estimator of Proposition 4. We note that while an alternative definition of the Huber loss, e.g., [15, p. 619], includes an extra parameter that defines the size of the quadratic region, we are implicitly using the weights $\omega_t$ and $\omega_R$ to define the region in which the cost is quadratic. For instance, in the first term of (9), the Huber loss becomes quadratic when:

$$ \| t_j - t_i - R_i t_{ij} \|_2 \le \frac{1}{\omega_t} \tag{11} $$

i.e., the weights $\omega_t$ and $\omega_R$ define the boundaries between the quadratic and the linear behavior.
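A minimal sketch of the unit-threshold Huber loss (10) follows; note that other references scale the quadratic branch differently, while here $h$ is quadratic on $[-1, 1]$ and linear elsewhere, matching the convention above.

```python
import numpy as np

def huber(x):
    """Huber loss (10): x^2 on [-1, 1], linear (2|x| - 1) outside."""
    ax = np.abs(x)
    return np.where(ax <= 1.0, ax ** 2, 2.0 * ax - 1.0)
```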

IV Convex Relaxations and Rounding Procedure

In this section we discuss a systematic approach for deriving convex relaxations for the problems (6), (8), (9). We refer to the approaches presented in this section as 1-stage techniques since they require the solution of a single convex program. In Section V, instead, we propose an alternative way to solve problems (6), (8), (9), by decoupling rotation and translation estimation. We refer to the latter as 2-stage techniques.

IV-A Convex Relaxations of Robust PGO Formulations

We consider each formulation in Section III.

IV-A1 Relaxation of the $\ell_2$-norm formulation

Problem (6) is nonconvex and therefore hard to solve globally. The source of nonconvexity is the constraint $R_i \in SO(2)$, while the cost can be made convex by rearranging the rotations and leveraging the invariance to rotation of the norm. While in our previous work [4] we used a quadratic reformulation of the cost, in the following we propose a more direct relaxation approach.

The first step towards our convex relaxation is to reparametrize Problem (6). We rearrange the unknown poses $(R_i, t_i)$, with $i = 1, \ldots, n$, into a single matrix $X$:

$$ X = [\, R_1 \ \cdots \ R_n \ \ t_1 \ \cdots \ t_n \,] \in \mathbb{R}^{2 \times 3n} \tag{12} $$

Moreover, we define the following $3n \times 3n$ square matrix:

$$ Z = X^{\mathsf{T}} X = \begin{bmatrix} Z_{RR} & Z_{Rt} \\ Z_{Rt}^{\mathsf{T}} & Z_{tt} \end{bmatrix} \tag{13} $$

where we distinguished 4 block matrices within $Z$, with $Z_{RR} \in \mathbb{R}^{2n \times 2n}$, $Z_{Rt} \in \mathbb{R}^{2n \times n}$, and $Z_{tt} \in \mathbb{R}^{n \times n}$. Problem (6) can be written as a function of $X$ and $Z$:

$$ \begin{aligned} \min_{X, Z} \ & \sum_{(i,j) \in E} \omega_t \, \big\| [Z_{Rt}]_{ij} - [Z_{Rt}]_{ii} - t_{ij} \big\|_2 + \omega_R \, \big\| [Z_{RR}]_{ij} - R_{ij} \big\|_F \\ \text{s.t.} \ & Z = X^{\mathsf{T}} X, \qquad [Z_{RR}]_{ii} = I_2, \ i = 1, \ldots, n \end{aligned} \tag{14} $$

where we used the notation $[Z_{RR}]_{ij}$ to identify the $2 \times 2$ block of $Z_{RR}$ at block row $i$ and block column $j$ (equal to $R_i^{\mathsf{T}} R_j$ when $Z = X^{\mathsf{T}} X$), and $[Z_{Rt}]_{ij}$ to identify the corresponding $2 \times 1$ block of $Z_{Rt}$ (equal to $R_i^{\mathsf{T}} t_j$), according to the partition shown in (13). The optimization problem imposes the constraint that $Z$ can be computed from $X$, since $Z = X^{\mathsf{T}} X$, while the constraints $[Z_{RR}]_{ii} = I_2$, $i = 1, \ldots, n$, enforce that the estimated rotation matrices are orthogonal.¹ (¹By definition, $SO(2) = \{ R \in \mathbb{R}^{2 \times 2} : R^{\mathsf{T}} R = I_2, \ \det(R) = +1 \}$. In the following, we drop the determinant constraint for simplicity, and perform estimation over the Orthogonal group $O(2)$ rather than the Special Orthogonal group $SO(2)$. Empirical evidence from this and previous works, such as [8], shows that dropping the determinant constraint does not impact the relaxation.)

Now we note that the constraint $Z = X^{\mathsf{T}} X$, with $X \in \mathbb{R}^{2 \times 3n}$, is equivalent to (i) $Z \succeq 0$, and (ii) $\mathrm{rank}(Z) = 2$, hence we can rewrite (14) in the sole variable $Z$ as follows:

$$ \begin{aligned} \min_{Z} \ & \sum_{(i,j) \in E} \omega_t \, \big\| [Z_{Rt}]_{ij} - [Z_{Rt}]_{ii} - t_{ij} \big\|_2 + \omega_R \, \big\| [Z_{RR}]_{ij} - R_{ij} \big\|_F \\ \text{s.t.} \ & [Z_{RR}]_{ii} = I_2, \ i = 1, \ldots, n, \qquad Z \succeq 0, \qquad \mathrm{rank}(Z) = 2 \end{aligned} \tag{15} $$

Problem (15) has a convex objective and the only nonconvex constraint is the rank constraint on $Z$. Therefore, to obtain a convex problem we drop the rank constraint and we obtain the following convex relaxation.

Proposition 7 (Convex Relaxation with -norm).

The following semidefinite program is a convex relaxation of the nonconvex problem (6):

$$ \begin{aligned} \min_{Z} \ & \sum_{(i,j) \in E} \omega_t \, \big\| [Z_{Rt}]_{ij} - [Z_{Rt}]_{ii} - t_{ij} \big\|_2 + \omega_R \, \big\| [Z_{RR}]_{ij} - R_{ij} \big\|_F \\ \text{s.t.} \ & [Z_{RR}]_{ii} = I_2, \ i = 1, \ldots, n, \qquad Z \succeq 0 \end{aligned} \tag{16} $$

The fact that (16) is a convex relaxation of (6) trivially follows from the first part of this section: Problem (16) is convex and its feasible set contains that of Problem (15), which is simply a reformulation of Problem (6).
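The sketch below sets up the relaxation (16) with CVXPY (the paper uses the MATLAB cvx toolbox instead; the edge container and weight names are our own assumptions). The indexing mirrors the partition in (13): the first $2n$ columns of $Z$ correspond to rotations, the last $n$ to translations.

```python
import cvxpy as cp
import numpy as np

def solve_l2_sdp(n, edges, w_t=1.0, w_R=1.0):
    """Convex relaxation (16). edges: list of (i, j, t_ij, R_ij).
    Requires an SDP-capable solver (e.g., SCS, shipped with CVXPY)."""
    Z = cp.Variable((3 * n, 3 * n), PSD=True)
    Rb = lambda i: slice(2 * i, 2 * i + 2)   # rotation block of pose i
    tc = lambda i: 2 * n + i                 # translation column of pose i

    cost = 0
    for (i, j, t_ij, R_ij) in edges:
        # [Z_Rt]_ij - [Z_Rt]_ii - t_ij  (= R_i^T(t_j - t_i) - t_ij when Z = X^T X)
        cost += w_t * cp.norm(Z[Rb(i), tc(j)] - Z[Rb(i), tc(i)] - t_ij, 2)
        # [Z_RR]_ij - R_ij              (= R_i^T R_j - R_ij when Z = X^T X)
        cost += w_R * cp.norm(Z[Rb(i), Rb(j)] - R_ij, 'fro')

    constraints = [Z[Rb(i), Rb(i)] == np.eye(2) for i in range(n)]
    prob = cp.Problem(cp.Minimize(cost), constraints)
    prob.solve()
    return Z.value, prob.value
```

The $\ell_1$ variant (17) is obtained by replacing each norm with `cp.sum(cp.abs(...))`, and the Huber variant (18) by wrapping each norm in `cp.huber(..., M=1)`, which should be accepted by the DCP analyzer since the inner argument is nonnegative.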

IV-A2 Relaxation of the $\ell_1$-norm formulation

The convex relaxation of Problem (8) can be derived in full analogy with the one presented in the previous section, by introducing the matrix $Z$ and noting that the residual terms $R_i^{\mathsf{T}}(t_j - t_i) - t_{ij}$ and $R_i^{\mathsf{T}} R_j - R_{ij}$ can be written using the blocks of $Z$. Therefore, we obtain the following:

Proposition 8 (Convex Relaxation with -norm).

The following semidefinite program is a convex relaxation of the nonconvex problem (8):

$$ \begin{aligned} \min_{Z} \ & \sum_{(i,j) \in E} \omega_t \, \big\| [Z_{Rt}]_{ij} - [Z_{Rt}]_{ii} - t_{ij} \big\|_1 + \omega_R \, \big\| [Z_{RR}]_{ij} - R_{ij} \big\|_1 \\ \text{s.t.} \ & [Z_{RR}]_{ii} = I_2, \ i = 1, \ldots, n, \qquad Z \succeq 0 \end{aligned} \tag{17} $$

Iv-A3 Relaxation of Huber formulation

This paragraph provides a relaxation of the robust formulation (9), using the same derivation as in the previous sections.

Proposition 9 (Convex Relaxation with Huber loss).

The following semidefinite program is a convex relaxation of the nonconvex problem (9):

$$ \begin{aligned} \min_{Z} \ & \sum_{(i,j) \in E} h\big( \omega_t \, \big\| [Z_{Rt}]_{ij} - [Z_{Rt}]_{ii} - t_{ij} \big\|_2 \big) + h\big( \omega_R \, \big\| [Z_{RR}]_{ij} - R_{ij} \big\|_F \big) \\ \text{s.t.} \ & [Z_{RR}]_{ii} = I_2, \ i = 1, \ldots, n, \qquad Z \succeq 0 \end{aligned} \tag{18} $$

Again, the proof of the statement follows from the derivation we provided for the $\ell_2$-norm case. While the convexity of the costs in (16) and (17) trivially follows from the convexity of the $\ell_2$ and $\ell_1$ norms, the convexity of the cost in (18) is less straightforward. To ascertain convexity of the cost in (18) we note that the first term in the cost has the form “$h(\|\cdot\|_2)$”, and is the composition of (i) a convex function (the $\ell_2$-norm) and (ii) a function (the Huber loss $h$) which is convex and nondecreasing when restricted to nonnegative arguments, see [15, p. 617]. Properties (i) and (ii) guarantee that the resulting function is convex, see [2, p. 84]. An analogous argument holds for the second summand in (18).

IV-B Rounding the Relaxed Solutions

The solution of each of the convex relaxations (16), (17), and (18) is a $3n \times 3n$ matrix, which we call $Z^{\star}$. However, our goal is to estimate a set of poses. In this section we show how to retrieve the poses $(\hat{R}_i, \hat{t}_i)$, $i = 1, \ldots, n$, from $Z^{\star}$.

Rotation rounding. The computation of the rotation estimates from $Z^{\star}$ proceeds along the same lines of [32] and [4]. Given the matrix $Z^{\star}$ we first compute a rank-2 approximation of $Z^{\star}$ via singular value decomposition. The resulting matrix $\hat{X} \in \mathbb{R}^{2 \times 3n}$ is such that $\hat{X}^{\mathsf{T}} \hat{X} \approx Z^{\star}$ (the previous is an actual equality when $Z^{\star}$ has rank 2). Then, since $\hat{X}$ has the structure described in (12), we know that the first $n$ $2 \times 2$ blocks of $\hat{X}$ contain our rotation estimates. Therefore, we project each block to $SO(2)$, as prescribed in [14, Section 5], and obtain our rotation estimates $\hat{R}_i$, $i = 1, \ldots, n$.

Translation rounding. The translation computation leverages the rotation estimates $\hat{R}_i$, $i = 1, \ldots, n$, computed in the previous paragraph. Let us call $\hat{R} = [\hat{R}_1 \ \cdots \ \hat{R}_n] \in \mathbb{R}^{2 \times 2n}$ the matrix stacking all these estimates. Then, by inspection of the matrix $Z$ in (13), we realize that $Z_{Rt} = X_R^{\mathsf{T}} [\, t_1 \ \cdots \ t_n \,]$, where $X_R$ stacks the true rotations. Based on this observation, we build our translation estimate as:

$$ [\, \hat{t}_1 \ \cdots \ \hat{t}_n \,] = \frac{1}{n} \, \hat{R} \, Z_{Rt}^{\star} \tag{19} $$
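A sketch of the full rounding procedure of this section follows; the least-squares averaging in the last step (using $\hat{R}\hat{R}^{\mathsf{T}} = n I_2$) is our reading of Eq. (19) and should be treated as an assumption.

```python
import numpy as np

def round_solution(Z_star, n):
    """Rank-2 factor of Z_star, projection of each block onto SO(2),
    then translations from the Z_Rt block as in (19)."""
    U, S, _ = np.linalg.svd(Z_star)
    X = (U[:, :2] * np.sqrt(S[:2])).T            # 2 x 3n, X^T X ~ Z_star
    R_hat = []
    for i in range(n):
        B = X[:, 2 * i:2 * i + 2]
        Ub, _, Vtb = np.linalg.svd(B)            # orthogonal Procrustes step
        D = np.diag([1.0, np.linalg.det(Ub @ Vtb)])  # enforce det = +1
        R_hat.append(Ub @ D @ Vtb)
    Rstack = np.hstack(R_hat)                    # 2 x 2n
    Z_Rt = Z_star[:2 * n, 2 * n:]                # ~ Rstack^T @ [t_1 ... t_n]
    t_hat = (Rstack @ Z_Rt / n).T                # n x 2 translation estimates
    return R_hat, t_hat
```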

V Decoupling Rotations and Translations: 2-stage Convex Relaxations

In this section we discuss a slightly different approach to compute approximate solutions for the problems (6), (8), (9). These three problems share the fact that the corresponding cost functions have two different terms: the first involving rotations and translations, and the second involving only rotations. Based on the empirical observation that the contribution of the first term to the rotation estimate is often negligible (see [5]), in this section we propose to first compute a rotation estimate by minimizing the second summand, and then compute the translation estimate by minimizing the first summand with given rotations. While the decoupling of rotation and translation estimation might sound counterintuitive, Section VII shows that this 2-stage approach is indeed the most effective, leading to accurate estimates in the presence of many outliers.

Stage 1: rotation estimation. The rotation subproblem in Eqs. (6), (8), (9) assumes the following general form:

$$ \min_{R_i \in SO(2)} \ \sum_{(i,j) \in E} f\big( R_i^{\mathsf{T}} R_j - R_{ij} \big) \tag{20} $$

where, depending on the formulation ((6), (8), (9)), the function $f$ denotes the (Frobenius) $\ell_2$-norm, the $\ell_1$-norm, or the Huber loss, properly weighted by $\omega_R$. We now propose a convex relaxation which proceeds along the lines of Section IV. First we define two matrices, $X_R = [R_1 \ \cdots \ R_n] \in \mathbb{R}^{2 \times 2n}$, and

$$ Z_R = X_R^{\mathsf{T}} X_R \in \mathbb{R}^{2n \times 2n}, \qquad \text{with } 2 \times 2 \text{ blocks } [Z_R]_{ij} = R_i^{\mathsf{T}} R_j \tag{21} $$

Then, we repeat the same derivation of Section IV (but applied to the smaller matrix $Z_R$) and get the following convex relaxation of the rotation subproblem (20):

$$ \begin{aligned} \min_{Z_R} \ & \sum_{(i,j) \in E} f\big( [Z_R]_{ij} - R_{ij} \big) \\ \text{s.t.} \ & [Z_R]_{ii} = I_2, \ i = 1, \ldots, n, \qquad Z_R \succeq 0 \end{aligned} \tag{22} $$

Calling $Z_R^{\star}$ the solution of the convex program (22), we can recover the rounded rotation estimates $\hat{R}_i$, $i = 1, \ldots, n$, from $Z_R^{\star}$ as discussed in Section IV-B. The relaxation (22) is similar to the one used by Wang and Singer in [32], except for the fact that we accommodate different robust cost functions.
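A CVXPY sketch of the Stage-1 relaxation (22) follows, here instantiated with the elementwise $\ell_1$ cost; the Frobenius and Huber variants are obtained by swapping the cost expression. Names and weights are illustrative, not the paper's.

```python
import cvxpy as cp
import numpy as np

def stage1_rotations(n, edges, w_R=1.0):
    """Rotation-only relaxation (22) with the l1 cost."""
    ZR = cp.Variable((2 * n, 2 * n), PSD=True)
    blk = lambda i: slice(2 * i, 2 * i + 2)
    cost = sum(w_R * cp.sum(cp.abs(ZR[blk(i), blk(j)] - R_ij))
               for (i, j, _t_ij, R_ij) in edges)
    # cp.norm(..., 'fro') or cp.huber(cp.norm(..., 'fro'), M=1) give the
    # other two variants.
    cons = [ZR[blk(i), blk(i)] == np.eye(2) for i in range(n)]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return ZR.value
```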

Stage 2: translation estimation. The translation subproblem corresponds to the first summand in Eqs. (6), (8), (9), and can be written in the following general form:

$$ \min_{t_i \in \mathbb{R}^2} \ \sum_{(i,j) \in E} f\big( \hat{R}_i^{\mathsf{T}}(t_j - t_i) - t_{ij} \big) \tag{23} $$

where, depending on the formulation ((6), (8), (9)), the function $f$ denotes the $\ell_2$-norm, the $\ell_1$-norm, or the Huber loss, properly weighted by $\omega_t$. In (23), we already substituted the rotation estimates $\hat{R}_i$ computed in Stage 1, hence the translations are the only unknowns.

Problem (23) is already a convex program, since the problem is unconstrained and $f$ is a convex function in all the considered formulations. Therefore, Stage 2 simply consists of solving (23) with an off-the-shelf convex solver to get the translation estimates $\hat{t}_i$, $i = 1, \ldots, n$.
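Stage 2 is a small unconstrained convex program; a sketch with the unsquared $\ell_2$ cost is below. Anchoring $t_1 = 0$ to remove the global translation ambiguity is our own choice, since the cost is invariant to a common translation of all poses; expressing the residual in the frame of node $i$ keeps the same code valid for the $\ell_1$ variant, which is not rotation invariant.

```python
import cvxpy as cp

def stage2_translations(n, edges, R_hat, w_t=1.0):
    """Translation subproblem (23), given the Stage-1 rotations R_hat."""
    t = cp.Variable((n, 2))
    cost = sum(w_t * cp.norm(R_hat[i].T @ (t[j] - t[i]) - t_ij, 2)
               for (i, j, t_ij, _R_ij) in edges)
    prob = cp.Problem(cp.Minimize(cost), [t[0] == 0])  # anchor the gauge
    prob.solve()
    return t.value
```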

VI A Posteriori Performance Guarantees

This section provides a posteriori checks that can ascertain the quality of the relaxed solution after solving the convex relaxation. We give the following result, applied to (6), while the very same statement holds for Problems (8) and (9).

Proposition 10 (Tightness in robust PGO formulations).

Let $f$ be the cost function of the robust PGO formulation (6), with $t_i \in \mathbb{R}^2$ and $R_i \in SO(2)$, and call $f^{\star}$ the corresponding optimal cost. With slight abuse of notation, call $f$ also the cost function of the convex relaxation (16), and denote with $Z^{\star}$ the optimal solution of (16) and with $\hat{x} = \{(\hat{R}_i, \hat{t}_i)\}_{i=1}^{n}$ the corresponding rounded estimate. Then, it holds:

$$ f(\hat{x}) - f^{\star} \le f(\hat{x}) - f(Z^{\star}) \tag{24} $$

i.e., we can compute a bound on the suboptimality gap $f(\hat{x}) - f^{\star}$ by using the optimal cost of the relaxation, $f(Z^{\star})$. Moreover, if the relaxed solution $Z^{\star}$ has rank 2 and its rank-2 decomposition $\hat{X}$ (with $Z^{\star} = \hat{X}^{\mathsf{T}} \hat{X}$) is such that the first $n$ $2 \times 2$ blocks of $\hat{X}$ are in $SO(2)$, then $f(\hat{x}) = f^{\star}$ and the rounded solution $\hat{x}$ is optimal for the original problem (6).

Proof.

Since (16) is a relaxation of Problem (6), it follows that $f(Z^{\star}) \le f^{\star}$, which implies the inequality (24). Moreover, if $Z^{\star}$ has rank 2, then $Z^{\star} = \hat{X}^{\mathsf{T}} \hat{X}$, where $\hat{X}$ is the rank-2 decomposition of $Z^{\star}$. If the first $n$ blocks of $\hat{X}$ are already in $SO(2)$, then the rounding does not alter $\hat{X}$, i.e., $\hat{x} = \hat{X}$. Therefore, it holds that (i) $f(\hat{x}) = f(Z^{\star}) \le f^{\star}$. Since $\hat{x}$ is feasible for (6), by optimality of $f^{\star}$ it follows that (ii) $f(\hat{x}) \ge f^{\star}$. Combining the inequalities (i) and (ii), it follows that $f(\hat{x}) = f^{\star}$, proving the optimality of $\hat{x}$. ∎

Proposition 10 provides computational tools to quantify the suboptimality of the rounded solution $\hat{x}$. Moreover, it gives an a posteriori condition under which the relaxation is tight. Tightness is attained under 2 conditions. The first condition is that the rank of $Z^{\star}$ is 2: this guarantees that we did not lose anything when relaxing the rank constraint in (15). The second condition is that $\hat{X}$ (the rank-2 decomposition of $Z^{\star}$) contains rotation matrices: this guarantees that we did not lose anything by dropping the determinant constraint in (15). As mentioned in Section IV, empirical evidence indeed suggests that the determinant constraint does not impact the results, hence we are mainly interested in the rank of $Z^{\star}$.
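The a posteriori check of Proposition 10 is cheap to implement; a sketch follows, where `f_sdp` and `f_rounded` stand for the relaxation optimum $f(Z^{\star})$ and the cost of the rounded estimate $f(\hat{x})$ (the tolerance used for the numerical rank is an arbitrary choice of ours).

```python
import numpy as np

def check_tightness(Z_star, f_sdp, f_rounded, tol=1e-6):
    """Return the numerical rank of Z_star and the bound (24) on the
    suboptimality gap: f(x_hat) - f* <= f_rounded - f_sdp."""
    s = np.linalg.svd(Z_star, compute_uv=False)
    numerical_rank = int(np.sum(s > tol * s[0]))
    gap_bound = f_rounded - f_sdp
    return numerical_rank, gap_bound
```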

An analogous result applies to the rotation estimation of the 2-stage approaches, as discussed below (the proof is identical to the one of Proposition 10).

Proposition 11 (Tightness in 2-stage formulations).

Let $f_R$ be the cost function of the rotation subproblem in (20), with $R_i \in SO(2)$, and call $f_R^{\star}$ the corresponding optimal cost. With slight abuse of notation, call $f_R$ also the cost function of the convex relaxation (22), and denote with $Z_R^{\star}$ the optimal solution of (22) and with $\hat{R} = \{\hat{R}_i\}_{i=1}^{n}$ the corresponding rounded estimate. Then, it holds that

$$ f_R(\hat{R}) - f_R^{\star} \le f_R(\hat{R}) - f_R(Z_R^{\star}) \tag{25} $$

i.e., we can compute a bound on the suboptimality gap by using the optimal cost of the relaxation, $f_R(Z_R^{\star})$. Moreover, if the relaxed solution $Z_R^{\star}$ has rank 2 and its rank-2 decomposition $\hat{X}_R$ is such that the $n$ $2 \times 2$ blocks of $\hat{X}_R$ are in $SO(2)$, then $f_R(\hat{R}) = f_R^{\star}$ and the rounded solution is optimal for the rotation subproblem in (20).

Note that we only discuss the tightness in the relaxation of the rotation estimation problem (Stage 1), while translation estimation (Stage 2) is already convex.

VII Numerical Evaluation

This section presents extensive numerical simulations and provides empirical evidence that (i) the 1-stage convex relaxations of Section IV are not tight, while the relaxations of the rotation subproblems in Section V are indeed tight in the tested scenarios; (ii) the 2-stage approaches are robust with respect to an increasing probability of outlying measurements and ensure excellent performance in practice; (iii) the 2-stage approaches allow a reliable detection of the outliers; (iv) the 2-stage approaches outperform state-of-the-art techniques based on local optimization, such as Dynamic Covariance Scaling [25].

VII-A Summary of the Proposed Approaches

In the experimental evaluation we compare the 6 proposed approaches for solving PGO with outliers:

  1. $\ell_1$ on poses: solution of Problem (17);

  2. $\ell_1$ 2-stage: solution of Problems (22) and (23) with $f = \|\cdot\|_1$;

  3. $\ell_2$ on poses: solution of Problem (16);

  4. $\ell_2$ 2-stage: solution of Problems (22) and (23) with $f = \|\cdot\|_F$ and $f = \|\cdot\|_2$, respectively;

  5. Huber on poses: solution of Problem (18);

  6. Huber 2-stage: solution of Problems (22) and (23) with $f$ equal to the Huber loss (10).

All problems above are convex and can be solved globally by off-the-shelf SDP solvers. We use cvx [12] as convex solver. After solving the convex relaxation, we apply the rounding procedure of Section IV-B, in order to obtain a feasible solution of the original problems. We compute statistics over repeated Monte Carlo runs.

VII-B Monte Carlo Analysis and Simulation Setup

We performed a Monte Carlo analysis on synthetic datasets consisting of randomly generated pose graphs. Ground truth positions of the nodes are drawn from a uniform distribution in a square; ground truth orientations are drawn from a uniform distribution over $(-\pi, \pi]$. Connections among nodes are generated according to (i) the Erdős-Rényi random graph model, where each edge is included in the graph with a given probability, independently from every other edge; and (ii) the Geometric random graph model, where all nodes closer than a specified distance/radius (in our tests, a fixed fraction of the environment size) are connected by an edge. Examples of random graphs are provided in Fig. 1(a-b). Since we are interested in connected graphs, random graphs that contain disconnected components are discarded. Once the graph is generated, the relative measurements associated with each edge are generated as:

$$ (t_{ij}, R_{ij}) = \begin{cases} (\bar{t}_{ij}, \bar{R}_{ij}) & \text{if } b_{ij} = 0 \quad \text{(inlier)} \\ (t_{ij}^{\mathrm{out}}, R_{ij}^{\mathrm{out}}) & \text{if } b_{ij} = 1 \quad \text{(outlier)} \end{cases} \tag{26} $$

where the $b_{ij}$ are i.i.d. Bernoulli random variables with parameter $P_{\mathrm{out}}$: with probability $P_{\mathrm{out}}$ the edge is assigned an outlier measurement $(t_{ij}^{\mathrm{out}}, R_{ij}^{\mathrm{out}})$, while otherwise it is assigned an inlier (but noisy) measurement $(\bar{t}_{ij}, \bar{R}_{ij})$. Inlier and outlier models are as follows: the inliers are noisy measurements generated according to model (1) with $t_{ij}^{\epsilon} \sim N(0, \sigma^2 I_2)$ and $R_{ij}^{\epsilon} = R(\theta_{ij}^{\epsilon})$, $\theta_{ij}^{\epsilon} \sim \text{VonMises}(0, \kappa)$, where $R(\theta)$ is the planar rotation matrix of angle $\theta$. For our tests we set $\sigma$ and $\kappa$ to values that are good proxies for the noise levels found in practice. The outliers are completely wrong measurements, and are obtained from the model (1) by adding large uniformly distributed noise: $t_{ij}^{\mathrm{out}}$ is uniformly distributed in the square environment of size $L$, and $R_{ij}^{\mathrm{out}} = R(\theta_{ij}^{\mathrm{out}})$ with $\theta_{ij}^{\mathrm{out}}$ uniform over $(-\pi, \pi]$. Since the general purpose solver in cvx does not scale to large instances, we focus on relatively small graphs.
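A sketch of the measurement corruption in (26) follows; the environment size `L` and the uniform outlier model follow the description above, while the parameter defaults are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rot2(theta):
    """2x2 planar rotation matrix (as in the earlier sketch)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def corrupt(t_bar, R_bar, p_out, L=10.0):
    """Return an outlier with probability p_out, else the inlier measurement."""
    if rng.random() < p_out:                      # b_ij ~ Bernoulli(p_out)
        t_out = rng.uniform(-L, L, size=2)        # uniform translation outlier
        R_out = rot2(rng.uniform(-np.pi, np.pi))  # uniform rotation outlier
        return t_out, R_out
    return t_bar, R_bar
```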

Figure 1: Examples of simulated pose graphs: (a) Erdős-Rényi random graph; (b) Geometric random graph; (c) Grid graph.

VII-C Tightness of convex relaxations

In this section we evaluate the tightness of the convex relaxations, i.e., their capability of producing accurate solutions for the original (nonconvex) problem. In order to check if the convex relaxations in the 1-stage approaches are tight, we check the rank of the solution matrices $Z^{\star}$ of problems (16), (17) and (18) (see Proposition 10). In all instances we tested, the rank of the matrix $Z^{\star}$ is very high, even in the absence of outliers ($P_{\mathrm{out}} = 0$): the solution matrices of problems (17) and (18) have large rank, and so does the solution matrix of (16), for all tested graph sizes. This empirical evidence shows that the SDP relaxations (16), (17) and (18) are not tight in the presence of noise and outlying measurements.

On the other hand, in the 2-stage approaches the relaxation of the rotation subproblem (i.e., problem (22)) is practically always tight. We computed the stable rank of the matrix $Z_R^{\star}$, i.e., the squared ratio between the Frobenius norm and the spectral norm. The stable rank is a real number rather than an integer, and provides a more detailed picture of the rank of the matrix, without requiring to commit to a given numerical tolerance for the rank computation. This allows understanding how far $Z_R^{\star}$ is from being a rank-2 matrix. Fig. 2(a) shows that the stable rank of $Z_R^{\star}$ is very close to 2 for all considered values of $P_{\mathrm{out}}$. For the 2-stage approaches, this confirms the tightness of our convex relaxations. Interestingly, also in the 1-stage approaches, even if the matrix $Z^{\star}$ has large rank, the submatrix $Z_{RR}$ has rank close to 2. This suggests that it is the presence of the translations in the 1-stage formulations that breaks the tightness. These results are further confirmed by Fig. 2(b), which shows results for larger pose graphs.
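The stable rank used in Fig. 2 is straightforward to compute:

```python
import numpy as np

def stable_rank(A):
    """Squared ratio between Frobenius and spectral norms; close to 2 when
    A is (numerically) a rank-2 matrix."""
    return (np.linalg.norm(A, 'fro') / np.linalg.norm(A, 2)) ** 2
```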

Figure 2: Stable rank of the matrix computed by the different approaches, for increasing probability of outliers.

VII-D Robust PGO

In this section we evaluate, for increasing levels of outliers $P_{\mathrm{out}}$, the quality of the pose estimates produced by the proposed approaches. The estimation quality is quantified by the mean rotation and mean translation error, computed with respect to the ground truth. Fig. 3 shows the mean errors, averaged over the Monte Carlo runs. We note that there is a huge divide between the two sets of approaches. On the one hand, the 1-stage approaches show no robustness to the presence of outliers, since mean translation and rotation errors quickly increase with $P_{\mathrm{out}}$. On the other hand, the estimates provided by the 2-stage approaches have small errors and are remarkably insensitive to the increase of the probability of outliers (the y-axis is in log scale). In particular, Huber 2-stage ensures top performance, followed by the other two 2-stage approaches.

Figure 3: Estimation errors for the 6 approaches proposed in this paper. Left column: average translation error. Right column: average rotation error. (a)-(b) Erdős-Rényi random graphs of two sizes; (c)-(d) Geometric random graphs of two sizes.

VII-E Sensor Failure Detection via Convex Relaxations

In this section we discuss how to use the proposed techniques for outlier detection. Besides being useful to improve the quality of the pose estimates, the capability of identifying outliers is an important tool for sensor failure detection.

Here we show that, since our techniques can produce an accurate estimate even in the presence of a large amount of outliers, we can simply detect outliers by evaluating the residual error of each measurement. Given an estimate $(\hat{R}_i, \hat{t}_i)$, $i = 1, \ldots, n$, and for each measurement $(t_{ij}, R_{ij})$, we define the translation residual $e_t^{ij}$ and the rotation residual $e_R^{ij}$ as follows:

$$ e_t^{ij} = \| \hat{t}_j - \hat{t}_i - \hat{R}_i t_{ij} \|_2 \tag{27} $$
$$ e_R^{ij} = \| \hat{R}_j - \hat{R}_i R_{ij} \|_F \tag{28} $$

A simple outlier detection technique would first solve one of the proposed convex relaxations to obtain an estimate of the pose graph configuration. Then it would use such an estimate to compute the residual errors $e_t^{ij}$ and $e_R^{ij}$ for all $(i,j) \in E$. Finally, it would classify as outliers the measurements for which either $e_t^{ij} > \epsilon_t$ or $e_R^{ij} > \epsilon_R$, where $\epsilon_t$ and $\epsilon_R$ are given thresholds on the maximum admissible error.
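A sketch of this residual-based test follows; the default thresholds are illustrative, and the precision/recall in (29) below can then be computed from the returned flags and the ground-truth outlier labels.

```python
import numpy as np

def detect_outliers(R_hat, t_hat, edges, eps_t=0.5, eps_R=0.5):
    """Flag edge (i, j) as an outlier when either residual (27)-(28)
    exceeds its threshold."""
    flags = []
    for (i, j, t_ij, R_ij) in edges:
        e_t = np.linalg.norm(t_hat[j] - t_hat[i] - R_hat[i] @ t_ij)
        e_R = np.linalg.norm(R_hat[j] - R_hat[i] @ R_ij, 'fro')
        flags.append(e_t > eps_t or e_R > eps_R)
    return flags
```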

Fig. 4 reports the precision/recall curves of the classification based on the solutions yielded by the 6 approaches discussed in this paper. The curves are obtained by varying the thresholds $\epsilon_t$ and $\epsilon_R$ over suitable ranges and evaluating the precision/recall, defined as:

$$ \text{precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \qquad \text{recall} = \frac{\mathrm{TP}}{N_{\mathrm{out}}} \tag{29} $$

where $N_{\mathrm{out}}$ is the (true) number of outliers, $\mathrm{TP}$ is the number of correctly identified outliers (true positives), and $\mathrm{FP}$ is the number of inliers wrongly classified as outliers (false positives). As expected, the 2-stage techniques, which we have already seen to ensure accurate estimation independently of the amount of outliers, lead to high precision/recall and are fairly insensitive to the choice of the thresholds $\epsilon_t$ and $\epsilon_R$. The 1-stage techniques, instead, lead to poorer performance.

Figure 4: Precision/Recall curves for outlier detection based on the residual errors corresponding to the 6 techniques discussed in this paper.

VII-F Comparison against the State of the Art

In this section we show that the proposed approaches outperform the Dynamic Covariance Scaling approach [25].

We consider a Grid graph, i.e., a Manhattan world model similar to the one used in related work [4, 31]; we simulate a complete grid, as shown in Fig. 1(c). The ground truth poses and measurements associated with the nodes and edges of the grid graph are computed as described in Section VII-B; the Grid graph has an odometric path connecting all nodes. Each node in the Grid has at most 4 neighbors (corresponding to adjacent nodes in the grid), hence it is less connected than the graphs used in Section VII-B, and closer to pose graphs found in practice. For the comparison in this section, we focus on the 2-stage approaches, since we already observed that they ensure the best performance. We benchmark the proposed techniques against g2o [19], a non-robust PGO solver based on Gauss-Newton optimization, and Dynamic Covariance Scaling [25] (dcs), a state-of-the-art robust PGO approach. Both approaches need an initial guess: in our tests we compute the initial guess from the odometric edges, following common practice. We use default parameters for both g2o and dcs.

Fig. 5 shows the mean translation and rotation errors, averaged over 30 runs, for an increasing amount of outliers, and for Grid graphs with 20 and 50 nodes. Similarly to what we observed in the previous sections, all 2-stage approaches ensure similar performance. Moreover, they dominate g2o and dcs in all test instances. g2o is not a robust solver, hence this result is expected. On the other hand, we observe that dcs also has degraded performance: in our tests the initial guess fed to dcs is computed from the odometry, and some of the odometric edges may be spurious; in those cases, dcs starts from an incorrect initialization and its robust cost has the undesirable effect of “disabling” the correct edges which are not consistent with the initial guess. This further stresses the importance of designing global techniques for robust PGO that do not rely on an initial guess. On the downside, we observe that while the proposed techniques outperform dcs, their performance is significantly worse than that observed in Section VII-D. This suggests that the performance of the proposed convex relaxations degrades for graphs with low connectivity.

Figure 5: Estimation errors for the 2-stage approaches proposed in this paper and for g2o and dcs. Left column: average translation error. Right column: average rotation error. Grid graphs with (a) n = 20 and (b) n = 50 nodes.

VIII Conclusion

We proposed robust approaches for the solution of the PGO problem in the presence of outliers. We considered three robust cost functions (least unsquared deviation, least absolute deviation, and Huber loss) and we provided a systematic way of computing a convex relaxation of the resulting (nonconvex) optimization problems (1-stage techniques). For each technique we also provided an alternative 2-stage solver, which decouples rotation and translation estimation (2-stage techniques), as well as conditions under which the relaxation is exact. Numerical experiments suggest that while the relaxation implies a loss in accuracy in 1-stage techniques, the 2-stage techniques ensure accurate estimation in the face of many outliers, particularly when tested on well-connected graphs. This also enables outlier identification and failure detection.

This work opens several avenues for future research, including the analysis of 3D problem instances, the study of the influence of the graph topology on the performance of the proposed convex relaxations, and a deeper theoretical investigation of the advantage of 2-stage techniques over 1-stage approaches. Future work also includes the investigation of ad-hoc numerical solvers to efficiently solve the SDP relaxations discussed in this paper: while the general purpose SDP solver used in our current implementation scales poorly in the problem size (it requires tens of seconds to solve an instance with 50 poses), our ultimate goal is to solve large PGO instances efficiently and robustly.

Acknowledgments. The authors gratefully acknowledge Carlo Tommolillo for his help with the numerical evaluation.

References

  • [1] R. Aragues, L. Carlone, G. Calafiore, and C. Sagues. Multi-agent localization from noisy relative pose measurements. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 364–369, 2011.
  • [2] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.
  • [3] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robotics, 32(6):1309–1332, 2016.
  • [4] L. Carlone, G. Calafiore, C. Tommolillo, and F. Dellaert. Planar pose graph optimization: Duality, optimal solutions, and verification. IEEE Trans. Robotics, 32(3):545–565, 2016.
  • [5] L. Carlone and A. Censi. From angular manifolds to the integer lattice: Guaranteed orientation estimation with application to pose graph optimization. IEEE Trans. Robotics, 30(2):475–492, 2014.
  • [6] L. Carlone, A. Censi, and F. Dellaert. Selecting good measurements via relaxation: a convex approach for robust estimation over graphs. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2014.
  • [7] L. Carlone and F. Dellaert. Duality-based verification techniques for 2D SLAM. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 4589–4596, 2015.
  • [8] L. Carlone, D. Rosen, G. Calafiore, J.J. Leonard, and F. Dellaert. Lagrangian duality in 3D SLAM: Verification techniques and optimal solutions. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2015.
  • [9] J.J. Casafranca, L.M. Paz, and P. Piniés. A back-end ℓ1-norm based solution for factor graph SLAM. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), pages 17–23, 2013.
  • [10] M. Fischler and R. Bolles. Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography. Commun. ACM, 24:381–395, 1981.
  • [11] M. C. Graham, J. P. How, and D. E. Gustafson. Robust incremental slam with consistency-checking. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 117–124, Sept 2015.
  • [12] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming.
  • [13] G. Grisetti, C. Stachniss, and W. Burgard. Non-linear constraint network optimization for efficient map learning. Trans. on Intelligent Transportation systems, 10(3):428–439, 2009.
  • [14] R. Hartley, J. Trumpf, Y. Dai, and H. Li. Rotation averaging. IJCV, 103(3):267–305, 2013.
  • [15] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
  • [16] P. Huber. Robust Statistics. John Wiley & Sons, New York, NY, 1981.
  • [17] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. Leonard, and F. Dellaert. iSAM2: Incremental smoothing and mapping using the Bayes tree. Intl. J. of Robotics Research, 31:217–236, Feb 2012.
  • [18] S. Kotz, T. J. Kozubowski, and K. Podgórski. The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance, chapter Asymmetric Multivariate Laplace Distribution, pages 239–272. Birkhäuser Boston, 2001.
  • [19] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard. g2o: A general framework for graph optimization. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), Shanghai, China, May 2011.
  • [20] Y. Latif, C. D. C. Lerma, and J. Neira. Robust loop closing over time. In Robotics: Science and Systems (RSS), 2012.
  • [21] G. H. Lee, F. Fraundorfer, and M. Pollefeys. Robust pose-graph loop-closures with expectation-maximization. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2013.
  • [22] N. Mitianoudis. A directional laplacian density for underdetermined audio source separation. In International Conference on Artificial Neural Networks, pages 450–459, September 2010.
  • [23] E. Olson and P. Agarwal. Inference on networks of mixtures for robust robot mapping. In Robotics: Science and Systems (RSS), Sydney, Australia, July 2012.
  • [24] E. Olson, J. Leonard, and S. Teller. Fast iterative alignment of pose graphs with poor initial estimates. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 2262–2269, May 2006.
  • [25] P. Agarwal, G.D. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard. Robust map optimization using dynamic covariance scaling. In IEEE Intl. Conf. on Robotics and Automation (ICRA), 2013.
  • [26] M. Pfingsthorn and A. Birk. Generalized graph slam: Solving local and global ambiguities through multimodal and hyperedge constraints. Intl. J. of Robotics Research, 35(6):601–630, 2016.
  • [27] D.M. Rosen, L. Carlone, A.S. Bandeira, and J.J. Leonard. SE-Sync: A certifiably correct algorithm for synchronization over the Special Euclidean group. In Intl. Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016.
  • [28] P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. John Wiley & Sons, New York, NY, 1987.
  • [29] N. Sünderhauf and P. Protzel. Switchable constraints for robust pose graph SLAM. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2012.
  • [30] N. Sunderhauf and P. Protzel. Towards a robust back-end for pose graph slam. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 1254–1261. IEEE, 2012.
  • [31] Niko Sünderhauf and Peter Protzel. Switchable constraints vs. max-mixture models vs. rrr–a comparison of three approaches to robust pose graph slam. In IEEE Intl. Conf. on Robotics and Automation (ICRA), 2013.
  • [32] L. Wang and A. Singer. Exact and stable recovery of rotations for robust synchronization. Information and Inference: A Journal of the IMA, 2(2):145–193, 2013.
  • [33] G. Zioutas and A. Avramidis. Deleting outliers in robust regression with mixed integer programming. Acta Mathematicae Applicatae Sinica, 21(2):323–334, 2005.