Numerical and experimental realization of analytical SLAM
Abstract
An analytical approach to the SLAM problem was introduced in recent years. In this work we investigate the method numerically, with the motivation of using the algorithm in real hardware experiments. We perform a robustness test of the algorithm and apply it to robotic hardware in two different setups. In one we recover a map of the environment using bearing-angle and radial-distance measurements. The other setup utilizes bearing-angle information only.
I Introduction
Research in the area of Simultaneous Localization and Mapping (SLAM) has become a popular and broadly investigated field, mainly due to the usefulness of its results in robotics and control systems. A large amount of scientific work has been conducted in this field, resulting in popular techniques such as Extended Kalman filtering, graph-based SLAM and particle filter SLAM.
The first of these is based on linearization of the kinematic equations, which leads to problems such as inconsistency and convergence complications. The method was studied e.g. in [cheeseman1987] and [moutarlier1989]. Moreover, the complexity of standard approaches such as Extended Kalman filtering (EKF) is quadratic in the number of landmarks, as the landmarks are mutually correlated and thus the covariance matrices grow quadratically with the number of features. In fact, the number of landmarks can easily be of the order of thousands or even more in sophisticated environments, which motivates the search for more efficient algorithms. The convergence properties of the EKF are also not ideal: only convergence to a local optimum is ensured. In practice, the filter approaches a local minimum and often struggles to push the convergence further, closer to the true positions of the landmarks. Some of the complications of the EKF are discussed e.g. in [bailey2006], [julier2000] and [castellanos2004].
Graph-based methods represent the positions of landmarks and vehicle(s) as nodes and movements or measurements as edges ([dellaert_kaess_2006] and [duckett_marsland_shapiro_2002]). A drawback of the method is its computational complexity, due to which it is usually not performed online.
Particle filter SLAM was first introduced in [Doucet_2000]. The best-known particle filtering method is FastSLAM ([Montemerlo_2010] and [Montemerlo_2002]).
The complications mentioned for the EKF are addressed in [slam], where it is shown that the SLAM problem can be posed without any linearization. Moreover, using contraction analysis [2013b], stronger convergence properties can be achieved: convergence to the global optimum. The problem of large complexity can also be reduced by introducing virtual observers and decoupling the problem.
In this article we focus on the approach introduced in [slam], which treats the kinematic equations without linearization and, by applying contraction analysis, guarantees the convergence of the physical coordinates to their true values (for more details see [2013b]). It also presents the concept of the Decoupled Unlinearized Networked Kalman filter (DUNK), which reduces the computational cost of the SLAM problem while still avoiding approximations of the kinematic equations. This procedure is based on the conditional independence of landmarks given a specific path, as shown in [thrun2000] and [Murphy1999], which we employ in our numerical treatment.
This article aims to present a numerical simulation of 2D SLAM, analyze it with regard to application to a real vehicle, and then provide a full application of the approach to specific robotic hardware. This includes the acquisition and processing of the necessary data, dealing with noise, and probing the influence of non-constant time steps on the algorithm.
II Theory and simulation construction
Our intention is to implement a simulation of 2D analytical SLAM. In our framework, the robot moves (both translational and rotational motion are considered) in an environment with several landmarks. In this part we simulate the output of the robot's sensors to calculate its trajectory and the positions of the landmarks.
In the considered setup, the robot measures its translational and angular velocity and performs bearing and range measurements: the angle to a landmark with respect to the robot-fixed frame (calculated from the axis in the clockwise direction) and the radial distance to the landmark.
First, we built the simulation in a local coordinate system. This system has a fixed origin at the initial position of the robot and rotates together with the robot. For one landmark, the equation incorporating the measurement information reads
(1) 
where the state collects the Cartesian coordinates of the landmark and of the vehicle/robot in the local frame, together with the zero-mean noise of the measurement.
This equation originates simply from rewriting basic geometric relations of the considered setup; more details can be found in [slam]. The underlying physical model is
(2) 
with the translational velocity of the robot in the local frame, the zero-mean noise of the model, and a matrix defined as the diagonal extension of the antisymmetric tensor
with the angular velocity of the robot.
II-A Measurement in simulation and reality
We started the development of the simulation code with the following setup:

: global coordinates of the robot (in the lab frame); we set
(3) (4) so that the robot moves on a circle and starts at the origin

: in most cases we assumed a constant angular velocity of the robot and simulated its measurement by a Gaussian distribution around the true mean value (we also tried a uniform distribution, with similar results); no drifts were considered

: the robot moves on the circle with constant velocity (in the global lab system, so in the local-frame simulation we had to transform the velocity into the local frame). Measurement of the velocity components was simulated by a Gaussian distribution; no drifts were considered

bearing angle : we assumed that the measurement of the angle does not depend on the previous integration/movement, so we generated the angle directly from the true position in global coordinates, namely
(5) (6) 
: similarly to the previous case, we used the true positions of the vehicle and the landmark with Gaussian dispersion
(7) (8)
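The measurement simulation described above can be sketched as follows (a minimal sketch: the function name, the circular-trajectory parameters `R` and `omega`, and the noise levels are our own illustrative assumptions, not the values used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurements(t, landmark, sigma_alpha=0.01, sigma_r=0.01,
                          R=1.0, omega=0.5):
    """One noisy bearing/range measurement for a robot moving on a
    circle of radius R with angular velocity omega (global frame),
    starting at the origin."""
    # true robot position on the circle
    x = R * np.sin(omega * t)
    y = R * (1.0 - np.cos(omega * t))
    dx, dy = landmark[0] - x, landmark[1] - y
    # true bearing (global frame) and range, plus zero-mean Gaussian noise
    alpha = np.arctan2(dy, dx) + rng.normal(0.0, sigma_alpha)
    r = np.hypot(dx, dy) + rng.normal(0.0, sigma_r)
    return alpha, r
```

As in the text, the bearing is generated directly from the true global positions rather than by integrating the motion.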
In a real experiment, these data can be obtained by measurements, e.g. as follows:

: measured using e.g. a gyroscope

: by integrating acceleration or using sensors measuring the rotation of the robot's wheels

bearing angle : e.g. a DVS or processed images captured with conventional cameras

: sonar/Doppler radar, or used as a virtual measurement
II-B Kalman filtering
In our numerical scheme we have two sets of differential equations to evolve. One is the well-known kinematic set, and the other is the set of equations for the elements of the covariance matrix, defined via the difference between the estimated and the true values of the coordinates. The second set updates the covariance matrix, which is used to filter between the different types of measurements. Our update procedure is performed according to the following scheme:

from the previous-step values of the state and the covariance, we propose the current-step values using the model (differential equations)
(9) (10) obtaining the predicted values; a simple explicit integration scheme with the time step given above was used.

using the Kalman gain we correct these values, obtaining the actual values
(11) (12) with the innovation term yielded by the measurements in this step. Here we use the optimal Kalman gain [kalman]:
(13) (14). In our case (see [slam])
(15) where the constant is an upper estimate of the maximal radial distance in the setup and the inequality denotes that the last matrix is an upper bound of the previous one,
which yields
and we can bound this matrix as follows:
Using the above structure, we can adapt the values of the computed quantities in a way that respects the specified uncertainties of the model and of the measurement, making a trade-off between them.
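The predict-correct trade-off described above can be sketched in generic discrete form (a hedged illustration: `H` and `R_meas` stand for a measurement matrix and measurement covariance; the concrete matrices of the analytical SLAM formulation are those given in [slam]):

```python
import numpy as np

def kalman_correct(x_pred, P_pred, z, H, R_meas):
    """Discrete correction step: blend the model prediction and the
    measurement according to their covariances via the optimal gain."""
    S = H @ P_pred @ H.T + R_meas           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
    x = x_pred + K @ (z - H @ x_pred)       # corrected state
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```

With a small `R_meas` the gain approaches the identity and the corrected state follows the measurement; with a large `R_meas` the model prediction dominates — exactly the trade-off discussed above.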
The above scheme of Kalman filter updates was introduced in a discrete way. However, the continuous version of the filter is more appropriate for our purposes. We then end up with
(16)  
(17) 
which we are about to simulate. As argued in [slam], this form of the equations is suitable for the application of contraction analysis, which provides better convergence properties (a larger convergence rate, and convergence towards the global minimum instead of a local one).
So far we have kept the notation in which the state vector had four elements: two for a single landmark and two for the vehicle. It can be straightforwardly extended to a larger vector for several landmarks and the vehicle. However, the size of all matrices then grows quadratically, and the computational cost accordingly, which quickly becomes impractical for an increasing number of landmarks (in some practical applications there can be hundreds of landmarks or even more). To overcome this, Decoupled Unlinearized Networked Kalman filtering (DUNK) was introduced in [slam]. Here, for every landmark observed at a given moment there is one virtual vehicle associated with that landmark, measuring the required quantities and performing the estimation of its new position. At the end of each time step, the estimated position of every virtual vehicle can be slightly apart from the position of the real vehicle (which is so far unaffected in this time step). Therefore, we adjust the position of the real vehicle to minimize the squared distance to all virtual vehicles. Practically, for every landmark we initialize one virtual vehicle (for now we suppose that the vehicle sees its landmark all the time, e.g. cameras covering the whole panorama) and solve the problem for each virtual vehicle. This leads to a complexity linear in the number of landmarks, instead of quadratic for the Extended Kalman filter, and compares favourably even with the FastSLAM algorithm, in which a number of particles is kept for each landmark [slam]. Thus we have to solve a set consisting of (16) and (17) for each virtual vehicle
(18)  
(19) 
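The DUNK fusion step described above (adjusting the real vehicle to minimize the summed squared distance to all virtual vehicles) reduces to a simple mean, since the gradient of the summed squared distances vanishes exactly at the centroid. A minimal sketch, with the function name our own:

```python
import numpy as np

def fuse_virtual_vehicles(virtual_positions):
    """Real-vehicle position minimizing sum_i ||p - p_i||^2 over the
    virtual-vehicle positions p_i: the arithmetic mean (the gradient
    2 * sum_i (p - p_i) vanishes at the centroid)."""
    return np.mean(np.asarray(virtual_positions, dtype=float), axis=0)
```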
In the previous section, the Kalman filter weighed the difference of the two components of the innovation vector. Note that the first component of this vector represents the error in the landmark's position in the tangential direction and the second one the error in the radial direction. Now we can also include an additional layer in the Kalman filter, as proposed in [slam]. The contraction approach in this setup will then aim to minimize the tangential as well as radial measurement errors, and also the distance of the virtual vehicle from the real one (this is reasonable in cases when the new positions of the virtual vehicles are much more distinct from the real vehicle than the new-time-step position of the real vehicle according to the equation of motion). Consequently, some of our quantities have to be adjusted:
with . To avoid the matrix becoming singular, we add a small term to the diagonal in the third and fourth rows, and for simplicity we make a further replacement.
II-C Initial conditions
As the initial conditions for the state and the covariance we set quantities in the following form
(20)  
(21) 
in which the first entries are the Cartesian coordinates of the landmark in the global frame (used for the simulation of the measurement of bearing angle and radial distance), the offset is an error in the initial position of the landmark (standing for the error of measurement, so that convergence can be observed), and the remaining quantities are the diagonal elements of the initial covariance matrix.
II-D Transformation to global coordinates
So far, all the equations have been written in the local frame. To obtain the results in the global frame, the only thing to do is to rotate the results at each step by the angle between the axes of the local frame and the respective axes of the global frame. This transformation to global coordinates is made at the end of the simulation: for each time step we also integrated and stored the heading direction. At the end we perform this rotation and recover the results in the global frame.
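The end-of-simulation transformation can be sketched as follows (the function name and argument layout are our assumptions):

```python
import numpy as np

def to_global(points_local, theta, origin=(0.0, 0.0)):
    """Rotate local-frame points by the integrated heading theta and
    translate by the global position of the local-frame origin."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return np.asarray(points_local) @ rot.T + np.asarray(origin)
```

Applying this per time step with the stored heading recovers the whole trajectory and map in the global frame.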
II-E Numerical properties

The basic properties of convergence depend on the choice of the initial value of the covariance matrix. If we assume that we precisely know the initial position of the robot but are not entirely sure about the position measurement of the landmark, we can set the corresponding blocks accordingly. In such a case, if we set the initial position of the landmark incorrectly, the landmark converges to the correct position and the trajectory of the vehicle remains almost unchanged. If we are, say, several times more certain about the position of the vehicle, we can scale the blocks accordingly. In this case both the vehicle and landmark trajectories are shifted; for example, the trajectory of the vehicle shifts along the x-axis towards the landmark and the landmark position shifts along the x-axis towards the robot (and similarly for the y-axis). The elements of the covariance matrix also converge, and the rate of convergence depends on factors such as how large the elements of the matrix are or how large our uncertainty of the measurements is.

: the magnitude of its entries affects the growth of the covariance matrix; if it is very large (our model is very imprecise), the shift in the vehicle trajectory is even bigger than the one described in the previous item

: the magnitude of its entries affects the rate of convergence. The smaller these entries are, the faster the convergence is (the measurement is taken "more seriously")

the magnitude of the radial-distance error affects the dispersion of the final converged position of the landmark; the magnitudes of the angle errors act similarly and cause an angular dispersion of the converged position (they also significantly affect the entries of the covariance matrix)

the velocity errors are responsible for the dispersion of the vehicle's trajectory
III Simulation investigation
III-A Trajectory of vehicle
One of the things our simulation has to be robust against is the choice of the vehicle's trajectory. We have written the code so that it can easily be modified to a different vehicle trajectory. The simulation also worked successfully for other courses of the vehicle (a line, a parabola, ...). The results for three landmarks are shown in Fig. 4. In this case, m, and s.
III-B Robustness testing
In this section we test how robust our simulation is when we change the errors present in the model and in the measurement. In total, our simulation has five different errors:

: errors of the translational velocity components and the error of the angular velocity of the robot

: error of the radial distance measurement

: error of the angle measurement
We divided them into these groups intentionally, to reduce the degrees of freedom in the test (we suppose that the errors in the velocities resulting from numerical integration are similar, so we placed them in one group). We then introduced three parameters that scale the errors of the quantities in the simulation. We define them as follows:

with


Then we performed the simulation (10 runs for each combination of the parameters) in the setup described in II-A. In each run we chose three landmarks with some error in the initial position estimation. We aimed to quantify how satisfying the convergence is for different error magnitudes.
Once the estimated position of a landmark entered a neighbourhood of the true position (with a chosen radius), we started to calculate the squared distance of the estimated position from the true position, and characterized the precision of the simulation by
(22) 
where the two vectors are the estimated and the true positions of the landmark, and the denominator is the number of time steps after the landmark position first entered the neighbourhood of its true position. As we made 10 simulations for each combination, we report the mean of the squared error for each landmark. To quantify how reliable our estimation is, we also introduce the unbiased variance estimator
in which each term stands for the value for a given landmark and a simulation with a specific combination of parameters. In Tab. I, the results for each combination of the scale parameters are summarized.
a  | b  | c  | landmark 1    | landmark 2    | landmark 3
1  | 1  | 1  | 0.083 ± 0.001 | 0.088 ± 0.003 | 0.084 ± 0.004
1  | 1  | 5  | 0.084 ± 0.001 | 0.084 ± 0.001 | 0.084 ± 0.001
1  | 1  | 15 | 0.083 ± 0.001 | 0.088 ± 0.004 | 0.085 ± 0.003
1  | 5  | 1  | 0.084 ± 0.001 | 0.086 ± 0.002 | 0.089 ± 0.005
1  | 5  | 5  | 0.084 ± 0.002 | 0.085 ± 0.003 | 0.086 ± 0.001
1  | 5  | 15 | 0.085 ± 0.002 | 0.086 ± 0.003 | 0.085 ± 0.002
1  | 15 | 1  | 0.089 ± 0.003 | 0.089 ± 0.005 | 0.099 ± 0.006
1  | 15 | 5  | 0.087 ± 0.002 | 0.090 ± 0.004 | 0.098 ± 0.004
1  | 15 | 15 | 0.086 ± 0.004 | 0.089 ± 0.005 | 0.094 ± 0.003
5  | 1  | 1  | 0.106 ± 0.026 | 0.104 ± 0.025 | 0.135 ± 0.052
5  | 1  | 5  | 0.112 ± 0.030 | 0.112 ± 0.038 | 0.150 ± 0.057
5  | 1  | 15 | 0.107 ± 0.014 | 0.094 ± 0.008 | 0.137 ± 0.023
5  | 5  | 1  | 0.107 ± 0.013 | 0.112 ± 0.013 | 0.134 ± 0.029
5  | 5  | 5  | 0.101 ± 0.029 | 0.108 ± 0.022 | 0.126 ± 0.050
5  | 5  | 15 | 0.103 ± 0.023 | 0.094 ± 0.011 | 0.112 ± 0.034
5  | 15 | 1  | 0.100 ± 0.010 | 0.100 ± 0.010 | 0.114 ± 0.012
5  | 15 | 5  | 0.098 ± 0.012 | 0.103 ± 0.020 | 0.123 ± 0.039
5  | 15 | 15 | 0.102 ± 0.012 | 0.096 ± 0.014 | 0.123 ± 0.032
15 | 1  | 1  | 0.139 ± 0.048 | 0.133 ± 0.044 | 0.204 ± 0.082
15 | 1  | 5  | 0.174 ± 0.043 | 0.177 ± 0.064 | 0.264 ± 0.080
15 | 1  | 15 | 0.131 ± 0.035 | 0.132 ± 0.044 | 0.179 ± 0.056
15 | 5  | 1  | 0.151 ± 0.041 | 0.158 ± 0.052 | 0.214 ± 0.091
15 | 5  | 5  | 0.146 ± 0.035 | 0.129 ± 0.029 | 0.186 ± 0.056
15 | 5  | 15 | 0.147 ± 0.040 | 0.118 ± 0.027 | 0.188 ± 0.059
15 | 15 | 1  | 0.144 ± 0.023 | 0.118 ± 0.034 | 0.181 ± 0.035
15 | 15 | 5  | 0.143 ± 0.052 | 0.128 ± 0.034 | 0.182 ± 0.083
15 | 15 | 15 | 0.136 ± 0.038 | 0.150 ± 0.050 | 0.186 ± 0.076
From Tab. I we see that the convergence is robust with respect to the angle and radial-distance measurement errors; its behaviour slightly worsens with increasing angular- and translational-velocity errors. On the other hand, we have to consider that the mean angular velocity was chosen to be rad/s, so an error of magnitude 0.15 rad/s is relatively very large (similarly for the translational velocity, of magnitude 0.5 m/s). To sum up, the convergence of the simulation is robust within the chosen range of errors; its precision is mainly influenced by the errors of the angular and translational velocity. Some of the results are shown in Figs. 5 and 6. We chose the range of errors based on experience with such errors in reality.
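The convergence metric of Eq. (22) used for Tab. I can be sketched as follows (`eps` plays the role of the chosen neighbourhood radius; the names are ours):

```python
import numpy as np

def convergence_error(estimates, true_pos, eps):
    """Mean squared distance of the landmark estimates from the true
    position, counted from the first step at which the estimate enters
    the eps-neighbourhood of the true position (cf. Eq. (22))."""
    d2 = np.sum((np.asarray(estimates) - np.asarray(true_pos)) ** 2, axis=1)
    inside = np.flatnonzero(d2 < eps ** 2)
    if inside.size == 0:
        return np.inf               # the estimate never converged
    return float(np.mean(d2[inside[0]:]))
```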
IV Experiment
For the experimental purposes we use the robot Khepera IV (more details about the robot, as well as many preprogrammed functions, can be found at [khepera]). This robot is equipped with an integrated camera located at the front, and five ultrasound and eight infrared sensors measuring distances to obstacles. The robot is two-wheeled, and the speed of each wheel can be measured, which can be used to calculate the translational and angular velocity of the robot.
In the experiment we aim to move the robot for a while and use adapted code from the simulation to perform SLAM. The correctness of the method is tested against ground truth obtained from a video recording of the whole experimental scene. We use both angular (bearing-angle) and radial (distance-to-landmark) information in the experiment. Several modifications of this setup are possible, as stated in [slam].
IV-A Distance measurement
For this purpose we use the ultrasound (US) sensors, which can measure the distance to an object within a range of cm with a precision of cm [khepera]. Specific distance values can be read directly in cm from a function in Khepera's library; only a slight modification of the original code had to be made. Before the experiment we tested the sensors, and the precision was within this error, so no further calibration was needed. Of the five US sensors we use only three, because the range of the camera was restricted to the area covered by the central sensor and one neighbouring sensor on each side.
IV-B Bearing angle measurement
Single-time-step information about the bearing angles was extracted from the image captured by the robot's camera. We set the image resolution to px. As the capacity of the robot's hardware is far smaller than that of a computer, we send the captured images via a TCP port to a PC and process them there. As landmarks we use simple LED lights with high RGB components, so that they can easily be distinguished from the surroundings. For image processing we work with OpenCV. First, we undistorted the image; the necessary coefficients were found at [EPFL]. Then we created a mask aiming to filter out everything except the light from the landmarks. We ended up with a 2D black image with several white areas. Then we projected the image onto a 1D array by simply summing all pixels along one axis (we used the blue components only)
for all pixel columns, where the summands are the values of the blue component in the filtered 2D image and the results are the elements of the array after projection. To achieve better robustness we average the resulting array several times (smoothing) and then keep only those components larger than some threshold. As a consequence, small bright spots that do not originate from landmark light are diminished (smoothing) and excluded (threshold). The number of averaging passes and the threshold were chosen experimentally. In the resulting array we found the peaks standing for the landmarks and converted the values of the specific pixels to angles in radians. To do this we also needed to associate the first and the last pixel with specific angle values. This calibration was done experimentally by finding the extremal positions of visibility to the right and left of the camera and measuring the respective angles. The whole procedure results in an array of bearing angles (in the local frame).
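The projection-smoothing-threshold-peak pipeline described above can be sketched as follows (a simplified illustration: the kernel width, number of smoothing passes, threshold, and the linear pixel-to-angle calibration with an assumed field of view are placeholders for the experimentally chosen values):

```python
import numpy as np

def bearing_pixels(blue, n_smooth=3, threshold=50.0):
    """Project the blue channel of the filtered image onto one axis,
    smooth the 1D profile several times, suppress small values, and
    return the pixel positions of the remaining peaks."""
    profile = blue.sum(axis=0).astype(float)   # 2D -> 1D projection
    kernel = np.ones(5) / 5.0
    for _ in range(n_smooth):                  # repeated averaging
        profile = np.convolve(profile, kernel, mode="same")
    profile[profile < threshold] = 0.0         # exclude small bright spots
    # a peak is larger than its left neighbour and at least its right one
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]

def pixel_to_angle(px, width, fov=np.deg2rad(150.0)):
    """Linear pixel-to-bearing calibration over an assumed field of view."""
    return (px / (width - 1) - 0.5) * fov
```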
IV-C Velocities
Here we had to deliver three numbers: both Cartesian components of the translational velocity, and the angular velocity. It is clear that the robot cannot move freely in the 2D plane: it moves only back and forth along its forward axis (the zero angle is defined so) and turns only by rotating. At [khepera] there is a function which returns (up to a factor) the velocities of the right and left wheels in mm/s. From these we get
where the denominator is the distance between the wheels. Defined this way, the angular velocity is positive in the counterclockwise direction.
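The wheel-speed-to-body-velocity relations above can be sketched as follows (the wheel-base value is a nominal placeholder, not the calibrated value used in the experiment):

```python
def body_velocities(v_left, v_right, wheel_base=105.0):
    """Forward and angular velocity of a differential-drive robot from
    its wheel speeds (same length unit as wheel_base, per second).
    omega is positive in the counterclockwise direction."""
    v = 0.5 * (v_left + v_right)             # forward speed
    omega = (v_right - v_left) / wheel_base  # rad/s
    return v, omega
```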
As no precision was stated, we performed a calibration of the velocities and found that to achieve a precision better than 1 % we have to multiply the resulting values by a constant factor. The calibration was done by measuring the velocities at different magnitudes several times and then fitting the measured values against those returned by the preprogrammed function.
IV-D Processing architecture
Now we summarize how the data acquisition and processing fit together. We emphasize that our aim is to process the experiment in real time, not to do the data acquisition first and the processing afterwards. Velocity and distance measurements can be performed many times per second, but image capturing and processing only a few times per second. So we constructed two independent TCP connections between the PC and the robot. One was continuously delivering the measured velocities (two numbers), the distances (three numbers, one per landmark) and the time stamp of the respective measurement. The second was delivering the RGB values of the pixels, together with a time stamp, in order to synchronize the two data sources with each other and with the ground truth from the video recording.
The processing structure was as follows:

both TCP servers were started and ready to transmit data from the robot to the PC

the main code of the adapted simulation was running and waiting until new data (processed values of angles, distances, ...) arrived

port 1 was receiving the fast data (distances and velocities) and saving them with their respective time stamps

as soon as a new image arrived, it was processed and synchronized via the time stamps with the distance and velocity measurements. Then, for all fast data obtained before the new image was received, the respective bearing angles were calculated by interpolating between the current and the previous processed angle values

these synchronized and paired values from both ports were fed to the main code in step 2.

this procedure was repeated until the maximal time was reached
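The time-stamp synchronization in step 4 can be sketched as a linear interpolation between the two most recent processed camera frames (names are ours; a real implementation should also handle the 2π wrap-around of angles, omitted here):

```python
import numpy as np

def interpolate_bearings(t_fast, t_prev_img, t_img, a_prev, a_img):
    """Linearly interpolate bearing angles for the fast (velocity and
    distance) samples time-stamped between two processed camera frames."""
    w = (np.asarray(t_fast, dtype=float) - t_prev_img) / (t_img - t_prev_img)
    return (1.0 - w) * a_prev + w * a_img
```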
The trajectory of the robot for a specific experiment was set manually, preprogrammed using the robot's functions for setting specific velocities (these values could not be used as input for the processing, as the measured and the prescribed values differed).
IV-E Results
We performed the experiment with three landmarks. From Fig. 8 we can see that the experiment was successful: the wrong notion about the positions of all landmarks was continuously corrected, and the true positions were reached within the estimated error boundaries. Fig. 8 also shows a very good agreement of the true vehicle trajectory with the results of the experiment. The errors indicated in Fig. 8 were estimated as follows: the error in the radial measurement is cm and the error of the peak position in the image projection is estimated to be px. The range of the robot's camera is , so . The maximal range is cm, which yields cm. Combined (neglecting the velocity errors) we get
The convergence properties were similar to those known from the simulation results, and thus we do not show the plots again.
This experiment also aimed to perform SLAM in real time. The duration of the robot's movement was set to s. Our complete architecture was only a bit slower: s were spent on data reading and preprocessing and s on the numerical part of the task (integration, memory operations), which sums up to s. The video recording with the ground truth was processed in the software Tracker.
IV-F Conclusion
In this article we focused on the topic of analytical SLAM, an approach that brings a few interesting perspectives: on the one hand it provides better convergence properties, resulting from avoiding the linearization, and DUNK SLAM even reduces the time complexity of the problem. In our experiment we showed that this approach is applicable to real hardware in full complexity and in real time. A remaining issue is that in our case the time steps were of the order of s, but their specific length varied.
We have to admit that our setup was quite simplistic: only three landmarks and a not very long robot movement, so we avoided problems such as dead-reckoning drift. In the future we aim to perform a more complex experiment, namely with bearing-angle information only and with more landmarks in the scene, not all of them visible all the time. This approach will be free of issues such as time synchronization of data arriving at different rates, but should have worse convergence properties.
The strength of the investigated method will manifest itself mainly in situations with many more landmarks and in environments where a specific landmark is visible only for a limited time. Convergence to the global optimum, as argued in [slam], should help the algorithm to correct errors in the pose estimation more quickly. In connection with other research areas, such as pattern recognition, it could be promising in real-world SLAM tasks.
V Acknowledgements
I would like to express my thanks to Dr. Yulia Sandamirskaya (INI, Universitaet Zurich/ETH Zurich) for her kind supervision and helpful discussions, and to Prof. J.-J. Slotine (MIT, Cambridge) for valuable advice and insight into the topic. My gratitude belongs to the Tatra banka Foundation for financial support.