SLAM-based Integrity Monitoring Using GPS and Fish-eye Camera

SLAM-based Integrity Monitoring
Using GPS and Fish-eye Camera

Sriramya Bhamidipati, University of Illinois at Urbana-Champaign
Grace Xingxin Gao, Stanford University

Biographies

Sriramya Bhamidipati is a Ph.D. student in the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign, where she also received her master’s degree in 2017. She obtained her B.Tech. in Aerospace from the Indian Institute of Technology Bombay in 2015. Her research interests include GPS, power and control systems, artificial intelligence, computer vision and unmanned aerial vehicles.  

Grace Xingxin Gao is an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. Before joining Stanford University, she was an assistant professor at University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree at Stanford University. Her research is on robust and secure positioning, navigation and timing with applications to manned and unmanned aerial vehicles, robotics and power systems.

Abstract

Urban navigation using GPS and fish-eye camera suffers from multipath effects in GPS measurements and data association errors in pixel intensities across image frames. We propose a Simultaneous Localization and Mapping (SLAM)-based Integrity Monitoring (IM) algorithm to compute the position protection levels while accounting for multiple faults in both GPS and vision. We perform graph optimization using the sequential data of GPS pseudoranges, pixel intensities, vehicle dynamics and satellite ephemeris to simultaneously localize the vehicle as well as the landmarks, namely GPS satellites and key image pixels in the world frame. We estimate the fault mode vector by analyzing the temporal correlation across the GPS measurement residuals and spatial correlation across the vision intensity residuals. In particular, to detect and isolate the vision faults, we developed a superpixel-based piecewise Random Sample Consensus (RANSAC) technique to perform spatial voting across image pixels. For an estimated fault mode, we compute the protection levels by applying worst-case failure slope analysis to the linearized Graph-SLAM framework.

We perform ground vehicle experiments in the semi-urban area of Champaign, IL and have demonstrated the successful detection and isolation of multiple faults. We also validate tighter protection levels and lower localization errors achieved via the proposed algorithm as compared to SLAM-based IM that utilizes only GPS measurements.

1 Introduction

Integrity Monitoring (IM) serves as a important performance metric to assess the navigation solution estimation [ochieng2003gps]. Vehicles operating in urban areas face challenges [joerger2017towards] due to static infrastructure, such as buildings and thick foliage, dynamic obstacles, such as traffic and pedestrians, and environmental conditions, such as shadows, sunlight and weather. GPS systems receive fewer measurements in urban environments due to degraded satellite visibility. They also suffer from received signal faults caused by multipath and satellite faults caused by anomalies in the broadcast navigation message.

To address the above-mentioned challenges, one possible solution is to incorporate additional redundancy through the sensor fusion of GPS and vision. Vision sensor performs well in urban areas due to the feature-rich surroundings [hol2011sensor]. Sensor fusion [krishnaswamy2008sensor] integrates measurements from multiple sensors to improve the accuracy of the vehicle and provide robust performance. Individual sensors, such as GPS and camera, have inherent limitations in operability that are reliably corrected by combining these complementary sensors in a sensor fusion framework. In particular, occlusion and illumination variations in multiple pixel intensities induce data association errors across images, thereby termed as vision faults [miro2006towards]. Therefore, there is a need for the development of sensor-fusion-based IM techniques that account for multiple faults in both GPS and vision.

Rich literature exists on urban IM approaches for GPS-based navigation systems that utilize external information sources. In [velaga2012map], the authors developed a sequential map-aided IM technique that checks for outliers in position and Geographic Information System (GIS) using traditional RAIM [walter2008shaping] and weight-based topological map-matching process, respectively. Another paper [binjammaz2013gps] developed three phases of integrity checks that include assessing the position quality via traditional Receiver Autonomous Integrity Monitoring (RAIM), speed integrity via GPS Doppler and map matching accuracy via fuzzy inference system. However, these approaches have practical limitations because the offline map database is not always available and its accuracy cannot be guaranteed due to the dynamic changes in the urban surroundings. Another line of prior work [li2017lane, toledo2009lane] utilizes the odometry information obtained from Dead-Reckoning (DR) sensors, such as inertial measurement units, wheel speed encoder and camera, to perform GPS-based IM. But the drawbacks of these approaches are that they do not address the faults associated with DR sensors, and also, do not account for the simultaneous occurrence of faults across multiple sensor sources.

In this paper, we leverage the generalized and flexible platform developed in our prior work [bhamidipati2018multiple], which is Simultaneous Localization and Mapping (SLAM)-based Fault Detection and Isolation (FDI), as the basis for assessing the sensor fusion integrity. Another extension of the SLAM-based FDI platform, described in our prior work [bhamidipati2018receivers], assesses the integrity of cooperative localization using a network of receivers. SLAM [cadena2016past], a well-known technique in robotics, utilizes sensor measurements to estimate the landmarks in a three-dimensional (3D) map while simultaneously localizing the robot within it. Analogous to this, our prior work [bhamidipati2018multiple] on SLAM-based FDI combines the sequential data of GPS measurements, receiver motion model and satellite orbital model in a graph framework to simultaneously localize the robot, which is the GPS receiver, landmarks in the map, which are the GPS satellites. A key feature of this platform is that it utilizes graph optimization techniques [latif2014robust] and therefore, does not require any prior assumption regarding the distribution of states. Given that we localize the landmarks as well, our SLAM-based FDI does not require any prior information regarding the surrounding 3D maps.

We propose SLAM-based IM algorithm using GPS and fish-eye camera to compute the error bounds, termed as protection levels, of the estimated navigation solution by applying worst-case failure slope analysis [salos2013receiver, joerger2014solution] to the Graph-SLAM framework. In this work, we consider global landmarks as the GPS satellites and additional local landmarks as the key image pixels in the world frame. Here, world frame represents the Earth-Centered Earth-Fixed (ECEF) frame. We simultaneously update the state vectors of the vehicle, GPS satellites and key image pixels and thereafter, perform multiple FDI. We constrain the graph via GPS pseudoranges, raw fish-eye images, vehicle dynamics and satellite ephemeris.

For vision measurements, we opt for a fish-eye camera mounted on an vehicle and point it upwards for the following reasons; Firstly, given its wide  Field-Of-View (FOV) the image pixels are spatially spread-out in different directions with respect to the vehicle, thereby, compensating for the restricted spatial geometry of the limited global landmarks, i.e., GPS satellites. Secondly, given that the camera is pointing upwards, the unstructured skyline of the buildings aids in resolving the attitude of the vehicle. Thirdly, the fish-eye image captures the open sky section with respect to the vehicle that is used to distinguish the Line-Of-Sight (LOS) GPS satellites from that of the Non-Line-Of-Sight (NLOS) ones [shytermeja2014proposed].

Figure 1: Architecture of our SLAM-based IM algorithm using GPS and fish-eye camera.

The rest of the paper is organized as follows: Section II describes our SLAM-based IM algorithm that utilizes GPS and fish-eye camera; Section III experimentally validates the proposed algorithm in performing multiple FDI of GPS and vision faults and assessing the corresponding localization accuracy and size of protection levels; Section IV concludes the paper.

2 SLAM-based IM using GPS and Fish-eye Camera

We outline the high-level architecture and later, explain the details of the proposed algorithm. In this work, we focus on the measurement faults that are more frequently observed in urban areas, namely GPS and vision faults. Even though the formulation of the proposed algorithm is capable of addressing other faults, for simplicity, we consider no measurement faults associated with the receiver motion model and satellite orbital model. For reference, details regarding addressing the satellite faults using SLAM-based FDI are described in our prior work [bhamidipati2018multiple]. In Fig. 1, we show the architecture of our SLAM-based IM algorithm using GPS and fish-eye camera that is summarized as follows:

  1. During initialization, we initialize a 3D map using the PVT of the receiver and satellites computed via an established GPS receiver algorithm [lashley2010valid]. We set the initial value of all GPS measurement fault status to indicating neutrality. For the vision module, we perform initial calibration to estimate the scaling from image to global frame.

  2. Firstly, we pre-process the raw image obtained from the fish-eye camera using our hybrid sky detection algorithm to distinguish the sky pixels from the non-sky pixels. The detected sky pixels are used to distinguish the LOS and NLOS satellites and thereafter, formulate the corresponding GPS measurement covariance.

  3. We consider the non-sky pixels along with GPS pseudoranges and carrier-to-noise density  values, receiver motion model and satellite orbital model as input measurements to our algorithm. We combine the measurements in an extended graph optimization module to estimate the overall state vector, which consists of the state vector of the vehicle, GPS satellites and key image pixels using M-estimator [shevlyakov2008redescending]-based Levenberg Marquardt algorithm [lourakis2005brief].

  4. We independently analyze the measurement residuals against an empirical distribution to detect and isolate GPS faults. We develop a superpixel [li2015superpixel]-based piecewise Random Sample Consensus (RANSAC) [conte2009vision] to perform spatial voting for the detection and isolation of vision faults. Based on the estimated fault status of the measurements, we estimate the measurement fault mode, which has binary entries, such that indicates non-faulty and represents faulty.

  5. Finally, utilizing the estimated fault mode and overall state vector, we formulate the failure slope for the Graph-SLAM framework and subsequently, compute the protection levels using worst-case failure mode slope analysis [salos2013receiver, joerger2014solution].

(a) Our hybrid sky detection algorithm
(b) Raw fish-eye image
(c) Hybrid optimal border
(d) Sky area
Figure 2: An example showing the pipeline of pre-processing the fish-eye image in the vision module

The proposed SLAM-based IM algorithm using GPS and fish-eye camera consists of three main modules, namely measurement pre-processing, extended graph optimization and IM for Graph-SLAM. We describe the details as follows:

2.1 Pre-processing the measurements

We consider the following measurements as inputs to our SLAM-based IM algorithm: GPS pseudoranges and values from the GPS receiver, pixel intensities from the fish-eye image, control input obtained from the vehicle motion model and satellite ephemeris decoded from the navigation message.


1. Vision module:  
We pre-process the raw image obtained from fish-eye camera using hybrid sky detection algorithm, to distinguish the sky-pixels from the non-sky pixels. The pipeline of our hybrid sky detection is seen in Fig. 1(a). Our hybrid sky detection takes into account not only the pixel intensities but also prior knowledge regarding the spatial location of the sky pixels.

We convert the raw image to gray scale and then perform median blur operation. The median blur [wang2010new] is a non-linear filter that reduces the noise in the image while keeping the edges relatively sharp. Next, we compute the gradient by combining the magnitude obtained via two Sobel operations [gao2010improved], one executed in horizontal and the other in vertical direction. An example of the image obtained from fish-eye camera operating in urban areas and pointing upwards is seen in Fig. 1(b).

We observe that the probability of sky is highest close to the center and exponentially decreases outwards [haque2008hybrid]. Therefore, the corresponding location-based sky probability, denoted by  is given by

(1)

where is the 2D image coordinates, such that ,  represents the pre-defined domain of the image coordinates.  denotes the cardinality of the image domain and  denotes the 2-norm residual. denotes the pre-defined center coordinates in the 2D image frame.

Combining the location probability with Otsu’s method of intensity thresholding [moghaddam2012adotsu], we compute the hybrid optimal border, seen in Fig. 1(c), that separates the sky region, represented by subscript , from that of the infrastructure, denoted by subscript . We minimize the variance of sky and infrastructure to estimate the Otsu’s intensity threshold  as

(2)

where

  •  denotes the intensity vector that stacks all the pixel intensities in the fish-eye image, such that , where  denotes the intensity of any 2D pixel coordinates ;

  • and denotes the weights associated with the sky and building infrastructure, respectively, such that and ;

  • and denotes the variance of the pixel intensities associated with the sky and building infrastructure.

Utilizing Eqs. (1) and (2), we compute the hybrid sky probability, denoted by at any 2D image coordinate  as

(3)

where and are the maximum and minimum intensity values in the fish-eye image, such that and , respectively. Considering as the pre-defined sky threshold, if , then it is categorized as sky pixel and non-sky pixel otherwise. The sky-enclosed area in the fish-eye image is seen in Fig. 1(d).

Next, using the non-sky detected pixels, we describe the vision measurement model in Eq. (4) that is formulated via omni-directional camera model [caruso2015large] and direct image alignment [engel2014lsd]. Direct image alignment computes the depth maps in an incremental fashion and compares the pixel intensities across the image frames directly, such that the spatial context of the image is preserved. This vision measurement model is utilized later in our extended graph optimization module to formulate the corresponding vision odometry-based component of the cost function.

(4)

such that is pixel noise and from [caruso2015large],

where the subscript  refers to keyframe,

  1.  denotes the intensity of any 2D pixel coordinates  in the keyframe and denotes the image domain of keyframe; Detailed explanation regarding keyframe selection and estimation of semi-dense depth maps is given in prior literature [caruso2015large];

  2.  denotes the intensity of any 2D pixel coordinates  in the current frame and denotes the image domain consisting of non-sky pixels;

  3. denotes the map from 3D world coordinates, denoted by to 2D pixel in image frame;

  4. denotes the 3D warp function that unprojects the pixel coordinates  and transforms it by a relative state vector . The relative state vector  indicates the difference between the current vehicle pose, denoted by with respect to that of the keyframe, denoted by ; Here, x denotes the 3D vehicle position and  denotes the 3D orientation; and denotes the rotation matrix and translation vector of , respectively;

  5. denotes the inverse mapping of 2D pixel coordinates to 3D world coordinates via an inverse distance represented by . Here, and denotes the transformed 2D pixel coordinates. We calibrate the camera parameters, namely , , , and during initialization;

  6. denotes the inverse distance of the pixel coordinates in the keyframe.


2. GPS module:  
In the GPS module, considering  visible satellites, we describe the GPS measurement model as

(5)

where x and  denotes the 3D position of the vehicle and  satellite, respectively;  and represents the receiver clock bias and  satellite clock bias corrections, respectively; represents the measurement noise related to  satellite.

We also formulate the measurement covariance of satellite via the measured  values and the sky area detected via Eq. (3) in the vision pre-processing module. Note that the classification of the satellite as either LOS or NLOS depends on the unknown state vector of the vehicle and satellite. Therefore, the measurement covariance of satellite is given by

(6)

where

  • denotes the vehicle state vector at  time instant comprising of 3D position, 3D velocity, clock bias, clock drift and 3D attitude, respectively, such that ;

  • denotes the state vector of  satellite comprising of its 3D position, 3D velocity, clock bias and clock drift corrections, such that ;

  • and are the vision coefficients, such that and when and and otherwise; is the pre-defined threshold explained in Eq. (3); , , and are constant pre-determined coefficients and denotes the projection of the state vector of satellite in the image frame.

2.2 Extended graph optimization

In our extended graph optimization module, our cost function consists of four error terms, namely GPS pseudoranges, non-sky pixel intensities, receiver motion model and satellite orbital model, as follows:

(7)

where

  •  denotes the overall state vector comprising of the state vector of the vehicle, GPS satellites and key image pixels in the world frame, given by  and is estimated during the graph optimization;

  •  denotes the M-estimator used to transform the corresponding weighted residuals; Details regarding the choice of M-estimator used are explained in our prior work [bhamidipati2018multiple];

  •  denotes the fault status associated with the GPS pseudorange of satellite and estimated at the past time instant; Similarly,  denotes the estimated vision fault status of any 2D pixel  at the previous time instant;

  •  denotes the GPS measurement model; denotes the motion model of the receiver and denotes the satellite orbital model;  and denotes the estimated state vector of the vehicle and satellite, respectively, at the previous time instant;  and  denote the motion control inputs of the vehicle and satellite, respectively;

  • and  denotes the predicted covariance matrix of the vehicle state vector and satellite state vector at the time instant; Explanation regarding estimating these covariances is given in our prior work [bhamidipati2018multiple];

  • denote the measurement covariance of the satellite and is estimated from Eq. (6); Similarly,  denotes the covariance associated with the intensity of the non-sky pixel  and is estimated based on Section  of [engel2014lsd].

The first three terms in the cost function , given in Eq. (7), correspond to the residuals associated with the GPS pseudoranges, satellite ephemeris and vehicle state vector, whose details are provided in our prior work [bhamidipati2018multiple]. The last term represents the summation of intensity residuals across non-sky pixels based on the vision measurement model explained in Eq. (4). In particular, we perform sub-graph optimization at each instant, as seen in Eq. (8), where the cost function is formulated using the past history of measurements.

(8)

where  denotes the number of time instants utilized in the sub-graph optimization thread and  denotes our SLAM-based IM estimate of the overall state vector computed during the sub-graph optimization. We estimate the key image pixels in the world frame, represented by , via inverse-mapping defined in Eq. (4). Details regarding mapping that involves periodically executing full-graph optimization is given in our prior work [bhamidipati2018multiple].

2.3 IM for Graph-SLAM framework

We compute the protection levels associated with the estimated vehicle position using worst-case failure mode slope analysis [salos2013receiver, joerger2014solution]. This is justified because worst-case failure mode slope is derived for weighted least squares estimator and graph optimization via M-estimator-based Levenberg Marquardt algorithm is also a non-linear weighted least squares problem. However, there are certain design challenges involved in applying worst-case failure slope analysis for the protection level computation of Graph-SLAM framework. Firstly, given that the worst-case failure slope is derived for linear measurement model but the cost function associated with Graph-SLAM is non-linear, we linearize the formulation of graph optimization at the estimated overall state vector. Secondly, Graph-SLAM is a sequential methodology, whereas the worst-case failure slope falls under snapshot technique for integrity monitoring. Therefore, we linearize our graph formulation over not only the current time instant, but over the past time history of measurements so as to incorporate the temporal aspect in protection level computation. Thirdly, the graph optimization for SLAM framework consists of a large number of states and measurements. However, evaluating all possible fault modes associated with the measurements is computationally cumbersome. Therefore, we directly compute a single fault mode based on the measurement fault status estimated via multiple FDI module.


1) Multiple FDI module:  
Based on the estimated overall state vector from the extended graph optimization explained in Section 2.2, we independently compute the measurement residuals associated with GPS pseudoranges and non-sky pixel intensities. In our multiple FDI module, we evaluate the GPS residuals by analyzing the temporal correlation of their non-faulty error distribution and vision residuals using spatial correlation across image pixels.


GPS faults: To detect and isolate GPS faults in pseudoranges, we evaluate each residual against an empirical Gaussian distribution, which represents the measurement error distribution during non-faulty conditions. This is justified because we observe that the GPS measurements follow a Gaussian distribution during non-faulty conditions, as explained in our prior work [bhamidipati2018multiple]. We replicate the non-faulty conditions of GPS measurements by executing the initialization procedure in open-sky conditions. Thereafter, deviation of the measurement residual, denoted by , from the Cumulative Distribution Function (CDF) of its empirical Gaussian distribution, denoted by , is categorized as a fault and the corresponding fault status  is computed in Eq. (9). The justification regarding the formulation of fault status is explained in our prior work [bhamidipati2018receivers].

(9)

Vision faults: Unlike GPS faults, vision faults caused by data association errors exhibit high spatial correlation across image pixels and low temporal correlation. This is justified because the vision faults are localized to a group of neighboring pixels and are not isolated to a standalone pixel. We developed a superpixel-based piecewise RANSAC technique that performs spatial voting across the image pixels to detect and isolate vision faults. RANSAC [conte2009vision], a popular outlier detection method in image processing, estimates the optimal fitting parameters of a model via random sampling of data containing both inliers and outliers.

Figure 3: Pipeline for superpixel-based piecewise RANSAC technique used for estimating the vision fault status.

The steps involved in the superpixel-based piecewise RANSAC technique are described as follows: first, we segment the image into clusters, known as superpixels, based on the color similarity and space proximity between image pixels using superpixel segmentation [li2015superpixel]. We denote the number of superpixels depicting non-sky pixels to be , where the total number of superpixels into which the image is segmented is pre-defined during initialization. For each non-sky superpixel, we denote the pixel intensity vector as , which stacks the intensities of pixels within the superpixel. We represent the received intensity, i.e., keyframe pixel intensities Vs expected intensity, i.e., transformed current pixel intensities as a two-Dimensional (2D) plot. Next, we estimate the fitted line using RANSAC that passes through the optimal set of inliers and thereafter, compute the fraction of outliers in the superpixel, which is represented by . Then, utilizing the estimated model parameters of the fitted line, we evaluate the corresponding fraction of outliers at all the other non-sky superpixels, denoted by . Finally, the fault status at each superpixel is computed as the product of all the estimated outlier fractions, as seen in Eq. (10), and the same fault status is assigned to all the pixels within that superpixel. This procedure is repeated for all the non-sky superpixels to compute the fault status of all the non-sky pixels in the keyframe. Our algorithm considers an underlying assumption that there are sufficient number of superpixels to reach a common consensus. If the number of superpixels associated with non-sky pixels is less, such as in open-sky setting, a pre-defined penalty is assigned to the vision fault status.

(10)

2) Protection level computation 
In Eq. (11), based on the design solutions explained in Section 2.3.1, we linearize the overall measurement model of the graph optimization framework using first-order approximation. For simplicity, we derive the protection levels using measurements of the current time instant, but the same formulation is applicable for extension to the past history of measurements.

(11)

where

  1.  denotes the overall measurement vector that concatenates GPS pseudoranges, control input of vehicle, satellite ephemeris and keyframe pixel intensities against an estimated overall state vector ; denotes the overall measurement noise;

  2.  denotes the linearized overall measurement model that vertically stacks the Jacobian associated with GPS pseudoranges, denoted by , vehicle motion model and satellite orbital model, denoted by and non-sky pixel intensities, denoted by , such that ;

  3.  denotes the overall fault vector associated with the overall measurement vector and thereby, stacks measurement faults obtained from individual sensor sources.

As described in Eq. (7) of the graph optimization module, we express the M-estimator-based Levenberg Marquardt formulation, which is a weighted non-linear least squares problem, as

(12)

where

  1.  denotes the estimation matrix of the graph-optimization framework and  denotes the pseudo-inverse matrix, such that ;

  2.  denotes the M-estimator-based weight functions for the GPS pseudoranges, vehicle motion model and satellite orbital model, non-sky image pixel intensities, respectively, and evaluated at ; Details regarding the choice of M-estimator and the corresponding weight functions are explained in our prior work [bhamidipati2018multiple];

  3.  denotes the iterative damping factor associated with the Levenberg Marquardt algorithm.

Next, we define the overall test statistic, denoted by , as the summation of the weighted squared residuals across all the measurements. We consider an assumption that the overall test statistic is chi-square distributed, denoted by under non-faulty conditions and non-central chi-squared, denoted by , under the presence of GPS faults or vision faults or both.

(13)

such that

(14)

where  denotes the number of redundant measurements, i.e., difference between the number of overall measurements, denoted by and overall states, denoted by , such that .  indicates the non-centrality parameter associated with the overall test statistic during faulty conditions.

According to the worst-case failure mode slope analysis [salos2013receiver], as seen in Fig. 4, the protection level is calculated as the projection in the position domain of the measurement faults that would generate a non-centrality parameter  in the overall test statistic  with the maximum slope. In particular, the non-centrality parameter  is estimated from the false-alarm, denoted by  and mis-detection rates, denoted by , which are set according to the pre-defined integrity requirements.

Figure 4: Protection levels computed as the intersection of worst-case failure mode slope and non-centrality parameter [salos2013receiver].

In Eq. (15), we formulate the measurement fault mode, denoted by , using GPS and vision fault status estimated in Eqs. (9) and (10). For this, we consider a pre-defined fault threshold, denoted by , such that if the fault status is above , the measurement is flagged as faulty in the computation of protection levels. Given that we consider measurement faults in only GPS and vision, the fault entries of receiver and satellite motion models are set to zero for this work. However, the corresponding fault vector, which comprises of the exact measurement fault magnitudes, is still unknown. According to [salos2013receiver], for a given fault mode, the worst case fault direction that maximizes the integrity risk, is the one that maximizes the failure mode slope, which is seen in Fig. 4 and denoted by . In this context, we define the square of failure mode slope, denoted by , as the ratio of squared state estimation error in position of the vehicle over the overall test statistic. Using the linearized equations seen in Eqs. (11), (2.3) and (13), we derive the failure slope for the graph optimization framework in terms of unknown fault vector. For this, we consider , which is valid approximation after the iterative convergence of the graph optimization at any time instant since .

(15)

Considering to be the number of non-zero entries in the fault mode  estimated via multiple FDI module, we define fault matrix, denoted by , as and next, re-arrange the rows of the and matrices to match the rows of the fault matrix. Thereafter, we define a transformed fault vector, denoted by , such that . Based on the above-mentioned steps, we describe the failure slope formulation of Graph-SLAM framework in Eq. (2.3).

(16)

where extracts the vehicle 3D position from the overall state vector , such that ,  denotes the residual matrix, such that and  represents the state gain matrix, such that .

Referring to [joerger2014solution], for a given fault mode but unknown fault vector, the worst-case failure slope equals the maximum eigenvalue of the corresponding failure slope formulation. Therefore, we express the worst-case failure slope of the Graph-SLAM framework as

(17)

Next, we compute protection level , seen in Eq. (18) as the y-coordinate that corresponds to the integrity metric  along the line passing through the origin and with slope given by .

(18)
Figure 5: Route taken by a ground vehicle during the experiment conducted for s. Between s, the vehicle experiences GPS faults due to multipath and vision faults due to illumination variations. At s, the overlap of the skyplot of GPS satellites with the fish-eye image shows the multipath affected GPS measurements.

3 Experiment Results

We validate the performance of the proposed SLAM-based IM algorithm that utilizes both GPS and fish-eye camera. We conduct real-world experiments on a moving ground vehicle in the semi-urban area of Champaign, IL along the route shown in Fig. 5. Our experimental setup comprises of a commercial off-the-shelf GPS receiver and a fish-eye camera fitted with FOV lens. During s, the ground vehicle operates in open-sky conditions, thereby experiencing no GPS faults but less visual features. In Fig. 5, the blue highlighted region suffers from vision challenges, namely illumination variations due to sunlight and shadows, that causes data association errors across images. Similarly, the red highlighted region is enclosed with tall buildings that leads to multipath effects in the GPS measurements. For instance, at s we showcase the true overlap of the GPS satellite positions over the fish-eye image, where out of the visible satellites are affected by multipath.

Fig. 6 shows the average fault status of GPS pseudoranges and vision superpixels, as indicated in red and blue, respectively. Given that the ground vehicle navigates in open-sky conditions for s, the average GPS fault status estimated via our multiple FDI module is low, whereas the average vision fault status is high due to the feature-less surroundings. As the vehicle passes through the red highlighted region shown in Fig. 5 that represents the semi-urban area, the average fault status of vision is low but that of GPS increases due to multipath.

We further analyze the performance of our multiple FDI module in the challenging semi-urban area, i.e., for s during which the ground vehicle experiences GPS faults due to multipath and vision faults due to illumination variations. Fig. 6(a) plots that the individual GPS fault status of out of the visible satellites with PRNs , and . In accordance with the skyplot shown in Fig. 5, our proposed SLAM-based IM algorithm successfully flags the satellites with PRN and as faulty while accurately estimating the high-elevation satellite with PRN  as non-faulty. During the same duration, we also analyze the vision fault status associated with the superpixels. In Fig. 6(b), at each time instant, we plot the top four fault status of the superpixels, such that each marker represents a superpixel. We observe that in urban region, the value of the associated vision fault status decreases due to feature-rich tall buildings in urban areas. However, when the vehicle enters the blue highlighted region seen in Fig. 6(b), the illumination variations induced by the bright sunlight causes the fault status associated with certain superpixels to shown an increasing trend.

Figure 6: Performance of our multiple FDI module via average fault status of GPS pseudoranges, indicated in red and vision superpixels, indicated in blue. When the ground vehicle navigates through the semi-urban region, i.e., for s, the average fault status associated with GPS is high due to multipath, whereas vision is low due to rich features.
(a) GPS fault status of PRN
(b) Vision fault status of superpixels
Figure 7: Estimated fault status of (a) GPS measurements and (b) vision superpixels during s, i.e., when the ground vehicle navigates through the semi-urban area. In (a), our multiple FDI module successfully detects satellites with PRN as faulty while accurately estimating the PRN  as non-faulty. In (b), where each marker indicates a superpixel, the trend of fault status associated with superpixels is low given the rich features but later, increases due to illumination changes.

We demonstrate the improved performance of the SLAM-based IM algorithm that utilizes GPS and fish-eye camera seen in Fig. 7(a), as compared to the SLAM-based IM algorithm that utilizes GPS-only seen in Fig. 7(b). By utilizing GPS and fish-eye camera, we demonstrate higher localization accuracy, with an Root Mean Squared Error (RMSE) of m and standard deviation of m, as compared to employing GPS-only that shows an RMSE of m and standard deviation of m. We also validate that the lower mean size of protection levels are estimated using GPS and fish-eye camera, i.e. m than using GPS-only, i.e., m thereby, achieving tighter protection levels.

(a) SLAM-based IM via GPS and fish-eye camera
(b) SLAM-based IM via GPS-only
Figure 8: Comparison of SLAM-based IM: (a) using GPS and fish-eye camera; (b) using GPS-only. Lower localization errors and tighter protection levels are achieved via GPS and fish-eye camera as compared to GPS-only.

4 Conclusions

We proposed a Simultaneous Localization and Mapping (SLAM)-based Integrity Monitoring (IM) algorithm using GPS and fish-eye camera that estimates the protection levels of the Graph-SLAM framework while accounting for multiple faults in GPS and vision. We developed hybrid sky detection algorithm to distinguish the non-sky and sky pixels, which are later used in graph optimization and GPS measurement covariance, respectively. By utilizing the GPS pseudoranges, non-sky pixel intensities, receiver and satellite motion model, we performed graph optimization via M-estimator-based Levenberg Marquardt algorithm. We simultaneously estimated the state vector of the vehicle, GPS satellites and key image pixels in the world frame. We estimated the fault mode vector by independently evaluating the measurement residuals against an empirical Gaussian distribution for GPS faults and using our developed superpixel-based piecewise RANSAC for vision faults. We computed the protection levels via the worst-case failure slope analysis that estimates the maximum eigenvalue associated with the failure slope formulation of the linearized Graph-SLAM framework.

We conducted real-world experiments using a ground vehicle in a semi-urban region to analyze the performance our proposed SLAM-based IM algorithm that utilizes GPS and fish-eye camera. We successfully detected and isolated multiple measurement faults in GPS and vision. We demonstrated higher localization accuracy using our proposed algorithm with an RMSE of m and standard deviation of m, as compared to GPS-only that shows an RMSE of m and standard deviation of m. We also validated that the mean size of protection levels estimated using GPS and fish-eye camera, i.e. m is lower than using GPS-only, i.e., m.

References

Comments 0
Request Comment
You are adding the first comment!
How to quickly get a good reply:
  • Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
  • Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
  • Your comment should inspire ideas to flow and help the author improves the paper.

The better we are at sharing our knowledge with each other, the faster we move forward.
""
The feedback must be of minimum 40 characters and the title a minimum of 5 characters
   
Add comment
Cancel
Loading ...
393314
This is a comment super asjknd jkasnjk adsnkj
Upvote
Downvote
""
The feedback must be of minumum 40 characters
The feedback must be of minumum 40 characters
Submit
Cancel

You are asking your first question!
How to quickly get a good answer:
  • Keep your question short and to the point
  • Check for grammar or spelling errors.
  • Phrase it like a question
Test
Test description