Cooperative Robot Localization Using Event-triggered Estimation


Michael Ouimet (Robotics Research Engineer, SPAWAR Systems Center Pacific, San Diego, CA 92152), David Iglesias (Graduate Research Assistant, Ann and H.J. Smead Aerospace Engineering Sciences Department, University of Colorado Boulder, Boulder, CO 80309), Nisar Ahmed (Assistant Professor, Ann and H.J. Smead Aerospace Engineering Sciences Department, University of Colorado Boulder, Boulder, CO 80309, AIAA Member), and Sonia Martínez (Professor, Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093)
Abstract

This paper describes a novel communication-spare cooperative localization algorithm for a team of mobile unmanned robotic vehicles. Exploiting an event-based estimation paradigm, robots only send measurements to neighbors when the expected innovation for state estimation is high. Since agents know the event-triggering condition for measurements to be sent, the lack of a measurement is thus also informative and fused into state estimates. The robots use a Covariance Intersection (CI) mechanism to occasionally synchronize their local estimates of the full network state. In addition, heuristic balancing dynamics on the robots’ CI-triggering thresholds ensure that, in large diameter networks, the local error covariances remain below desired bounds across the network. Simulations on both linear and nonlinear dynamics/measurement models show that the event-triggering approach achieves nearly optimal state estimation performance in a wide range of operating conditions, even when using only a fraction of the communication cost required by conventional full data sharing. The robustness of the proposed approach to lossy communications and the relationship between network topology and CI-based synchronization requirements are also examined.


I Introduction

The decreasing price and weight of robotic hardware (wireless communication, sensor suites, actuators) are making possible the autonomous deployment of large teams of unmanned aerospace vehicles in surveillance, exploration, search-and-rescue, and cargo-transportation missions. While today’s technology has advanced tremendously, algorithms that can endow robotic teams with the desired autonomy are still lacking. In particular, the successful execution of higher-level tasks often relies on accurate robot position information, e.g. for use in path planning or decision-making routines. When full position information is unavailable, a relevant question is how vehicles can exploit the partial information they do have through joint Cooperative Localization (CL) algorithms.

The CL problem arises when mobile robots try to localize themselves with respect to other mobile robots, whom they can also communicate with. In CL, robots typically obtain and share measured/estimated relative pose information to improve their own local pose estimates (which may possibly be augmented with the pose estimates of other robots to obtain a ‘moving map’). This approach has many close connections to the well-known simultaneous localization and mapping (SLAM) problem Thrun-ProbRoboticsBook-2005 (), where in CL the ‘map features/landmarks’ are dynamic and can also actively provide information to one another. As such, many different statistical state estimation techniques for decentralized CL have been developed to most efficiently leverage the sensing and computing capabilities of multiple robots. However, state of the art CL algorithms make many significant trades in terms of required communication bandwidth, computational processing, localization accuracy, network topology constraints, and robustness to statistical inconsistencies (which can easily arise in networked distributed data fusion problems). In particular, the problem of minimizing required communication bandwidth and messaging frequency for CL in networks with arbitrary relative measurement and communication topologies remains quite challenging, as the localization performance, statistical robustness, and computational efficiency of each robot’s state estimator depend heavily on these characteristics.

This work studies a novel Kalman filter-based CL strategy that trades off estimation performance with communication cost. Using an event-triggering strategy, the scheme determines when a robot should pass explicit measurements to neighbors, and then combines these with implicit measurements to produce a network state estimate. As shown in Figure 1, this framework is very applicable to unmanned ground vehicles (UGVs) operating in planetary exploration missions without GPS aiding, unmanned aerial vehicles (UAVs) or space vehicles in GPS-denied environments, or unmanned underwater vehicles (UUVs). UUVs, for instance, typically carry inertial measurement units (IMUs) and GPS receivers as well as an acoustic modem for low-bandwidth communication. The vehicles receive GPS measurements only at the water surface, but they can obtain time-of-flight ranging measurements from acoustic message transmissions to one another while underway. For best performance, the robots should share their occasional GPS measurements and regular ranging measurements at every opportunity. However, transmitting data underwater via acoustic waves requires considerably more energy than using electromagnetic waves through the air. By intelligently controlling the rate at which the robots exchange explicit data messages, the trade-off between best estimation performance and increased mission duration (or more bandwidth for other data) can be balanced.

Fig. 1: Event-triggered CL: sufficiently informative data is sent explicitly; otherwise, it is censored and treated as implicit data.

This paper details a novel CL algorithm for a team of robotic agents to estimate the state of the entire network. Employing an event-based approach, agents only send measurements to their neighboring agents when the size of the measurement innovations (i.e. the amount of ‘surprising’ new sensor information) is large relative to some known threshold. Because agents all know the event-triggering condition for measurements to be sent explicitly, the lack of a measurement is informative, and can thus be properly fused into state estimates as ‘implicit data’ using set-based observation updates. This results in a decentralized event-triggered filter, which each robot reproduces locally while handling the explicit/implicit measurements. In order to keep a local covariance error metric bounded whenever they do not receive direct measurements from each other, the robots occasionally combine their state estimates via Covariance Intersection (CI) fusion over the whole network state, to fully benefit from estimate correlations. When the communication network diameter is large, fusion updates may still result in state estimation uncertainties (covariances) that are larger than practically desirable bounds for certain agents. Thus, balancing dynamics are also defined on the individuals’ event-triggering thresholds, which allows better-informed robots to aid poorly informed robots to achieve desired covariance bounds. Simulations with linear and non-linear dynamical models and measurements show that, under ideal communication conditions, the event-triggered CL approach performs nearly as well as the idealized fusion approach based on full measurement sharing, but requires only a fraction of the communication cost. Simulations under non-ideal communication conditions (namely, with complete packet losses and incomplete observability/communications between each robot) show that the proposed event-triggered approach can still provide reasonably reliable and robust localization performance under certain circumstances.

This paper is based on previous work presented by the authors in Ouimet-MFI-2015 () and extends that work in several important ways. First, this paper generalizes the formulation of the event-triggered filter to non-linear dynamical systems, and specifically examines the effectiveness of the filter in robot networks defined by nonlinear Dubins dynamics and nonlinear range and bearing measurements in simulations. Secondly, given that the event-triggered algorithm assumes measurements expected but not received were intentionally censored, this work also investigates the sensitivity of the proposed algorithm’s performance to random communication losses between agents. Finally, this paper includes simulations on various communication topologies to examine relative performance and necessity of sporadic CI-based synchronization in the proposed algorithm.

II Literature review and related work

II.1 Related work: Multi-robot SLAM

Much research in the robotics and autonomous vehicles literature has focused heavily on localization and mapping problems, with simultaneous localization and mapping (SLAM) being an important special case Thrun-ProbRoboticsBook-2005 (). In SLAM, a robot tracks its own pose in an unknown map, represented by landmarks (or more general map features) whose locations are sensed and tracked online via relative sensing data. Following seminal work in the 1990s and early 2000s JH-HFDW:91 (); GD-PN-HFDW-SC-MC:01 (), the statistical state estimation framework for SLAM soon became widely adopted, and matured quite rapidly to accommodate a wide variety of robotic sensors and vehicle types.

Single robot SLAM algorithms are especially mature, and can be generally categorized as either filter-based algorithms or batch methods. Filter-based methods focus on obtaining estimates of an environmental map and the most current robot state only. These are typically based on Kalman filter variants, such as the extended Kalman filter (EKF) or unscented Kalman filter (UKF) Holmes-ICRA-2008 (), or approximations to the more general Bayes filter (e.g. particle filters Thrun-AI-2001 (); Montemerlo-ICRA-2003 (); Grisetti-RAS-2007 (), Gaussian sum filters Kwok-ICRA-2005 ()). In contrast, batch methods seek estimates of the map along with the robot’s entire state trajectory over time, and tackle the problems of robot state filtering and smoothing together. Batch methods are typically formulated as non-linear optimization problems, and efficiently implemented as non-linear least squares, maximum likelihood or Bayesian maximum a posteriori (MAP) estimators via specialized solvers Sunderhauf-IROS-2012 (); Mur-TRO-2015 (). Filter-based methods are often computationally cheaper and can be run online through recursive propagation of state-map estimates and uncertainties (covariances). Batch methods are more demanding computationally, but can provide very high fidelity maps and state estimates, since non-linear motion/sensing models, dense data sources (e.g. camera imagery), and subtle constraints such as loop closure can be easily handled. A key assumption for tractability in both filter-based and batch single robot SLAM methods is that the map itself always remains observable and static, i.e. only the robot is assumed to move, while the map features do not change or move.

Multi-robot SLAM methods draw heavily from single robot SLAM techniques, but also offer a richer set of strategies for exploiting the sensing and computing abilities of multiple robots. Ref. PR-MA-BK-MK:16 (), for instance, describes a distributed approach in which a single processing hub is responsible for fusing sensor data from multiple robots into a consistent map; this hub then acts as a global map server for each robot, which periodically downlinks the updated map to correct its private onboard SLAM solution. Alternatively, completely decentralized approaches such as those described in Jones-Allerton-2011 () and Cunningham-ICRA-2013 () can also be used, where each robot remains fully capable of performing SLAM on its own but occasionally fuses its estimated map with its neighbors’ maps. As with single robot SLAM methods, these multi-robot SLAM techniques also rely on the persistent availability of static map features; more importantly, all robots must be able to sense, identify and reason over a common set of features. An important drawback of these and other similar approaches is the high computational expense of marginalizing out robot poses to communicate and fuse large scale maps. Such approaches must also contend with issues of landmark/feature association and coordinate frame alignment across multiple partially overlapping maps Indelman-CSM-2016 ().

II.2 Multi-robot Cooperative Localization

Cooperative Localization (CL) SIR-GAB:02 () describes methodologies where robots aim to localize themselves with respect to a combination of moving and static objects/robots. In CL, a team of mobile robotic agents, each with equivalent computing and sensing capabilities, uses relative distance measurements with respect to each other to jointly estimate the pose of the network, resulting in a more accurate state estimate. Loosely speaking, the CL problem can be framed as ‘multi-robot SLAM with moving landmarks’. Due to the coupling between the estimates of all agents’ positions, this technique largely benefits from sporadic GPS (or other absolute position) measurements, effectively pinning down the relative position information into absolute global coordinate frame positions. If robots are truly cooperative (i.e. part of the same team) and can actively communicate with one another, then the problem of landmark/feature association can also be solved more easily. Furthermore, rather than requiring robots to maintain estimates of an entire map, robots need only maintain position estimates of (some/all) other robots. Robots can thus be opportunistic about how and when to share information to improve each other’s localization performance. As in multi-robot SLAM, the distributed nature of CL also affords many opportunities to exploit the sensing and computing capabilities of multiple robots for improved scalability. While centralized single-point processing approaches to CL could be considered (wherein all robots send their information to a single data fusion center), these are generally vulnerable to communication dropouts as well as failure of the central processing point. As such, distributed CL algorithms that exploit decentralized local processing at each robot are the focus here.

Distributed CL methods can be broadly categorized as either weakly coupled or strongly coupled, depending on how the state estimator for each robot is configured. In strongly coupled methods, each robot augments its own pose estimate with pose estimates of some other robots (any subset of the full network). That is, each robot explicitly maintains a ‘moving map’ of other robots, in order to fully track and exploit all dependencies on other robot states which get updated via relative measurement and/or state estimate exchanges. In weakly coupled methods, each robot does not augment its state estimator with other vehicle states, but rather only estimates its own pose with relative measurement and/or state estimate exchanges. These methods avoid the full computational overhead of tracking a moving robot map, but are often less accurate and ‘lossy’ compared to strongly coupled methods, since they do not fully exploit local statistical correlations between neighboring robots that develop over time. In both sets of methods, different trades are made in terms of balancing requirements for communication bandwidth and onboard computing, as well as in terms of addressing the more subtle issue of statistical consistency. Consistency issues arise in the CL problem if one or more robots in the network fail to account for dependencies between different robot state estimates and/or measurements. This can lead to the ‘data incest’ problem, where information is incorrectly double-counted such that state estimates become overconfident in the presence of large errors throughout the network. Since exact tracking of information dependencies is generally an intractable problem for multi-robot networks (except in special cases) Campbell-CSM-2016 (), both strongly and weakly coupled methods use conservative data fusion approximations like Covariance Intersection (CI) SJ-JU:07 () or other alternatives to address this issue. In the following, several examples of strongly coupled and weakly coupled distributed CL algorithms are highlighted. While not exhaustive, this brief review is meant to provide a representative sample of existing CL strategies.

II.2.1 Strongly Coupled Methods:

Many examples of strongly coupled CL algorithms can be found in the aerospace literature for vehicle formation control; these approaches are often based on decentralized Kalman/information filters for joint minimum mean squared error (MMSE) network state estimation Ferguson-GNC-2003 (); Smith-IET-2007 (); McLoughlin-JGCD-2007 (). Filtering-based CL algorithms are particularly amenable to vehicle formation problems, since good priors can usually be determined to model the dynamics and control inputs of each vehicle in the team, and since the problem can be easily solved in a relative frame of reference with respect to designated ‘lead’ vehicles. However, such conditions can be relaxed for strongly coupled CL, and approximations to decentralized MMSE state estimators can also be used. For instance, Arambel, et al. POA-CR-RKM:01 () developed a CI-based sub-optimal conservative data fusion strategy for cooperative satellite localization, where each satellite tracks its own pose and velocity relative to an inertial frame, along with separation distances and rates relative to all other vehicles.

A variety of other strongly coupled CL algorithms have also been developed in the robotics and sensor networks literature. Ref. SIR-GAB:02 () describes a strongly coupled algorithm in which robots use a multi-centralized filtering algorithm to broadcast measurements one at a time to the whole network. Leung, et al. developed an algorithm that guarantees equivalence to an idealized centralized state estimator for all robots KYKL-TDB-HHTL:10 (). In this approach, robots must only consider their knowledge of the full network topology and share appropriate local information with neighbors (including odometry data, relative position measurements and/or state estimates) as determined by temporal ‘checkpoints’, which helps ensure that each robot’s state estimators remain statistically consistent. While this approach is robust to dynamic topologies without guarantees of full connectivity, it also requires high volume information transfer between robots to exchange sensor measurements and state estimator outputs (means and covariances for robot poses). In contrast, ref. NK-FC-RA-RC:06 () describes an approach based only on local state estimator output exchange between vehicles. Each robot in this approach maintains two sets of extended Kalman filter state estimates for the entire robot network: one that can be updated by direct fusion with state estimates provided by other robots (but cannot be shared further with other vehicles); and another that can be updated by the ego robot’s sensor measurements and shared with other robots. This exploits the fact that Kalman filter mean and covariance estimates summarize all information from previously fused sensor measurements; the use of two filters on each robot avoids information double-counting, and thus ensures statistical consistency. However, this approach still imposes heavy communication requirements between robots.

To address excessive communication overhead, ref. NT-SIR-GBG:09 () develops an alternative measurement-only exchange approach in which real-valued sensor data are quantized into binary messages. For example, using 1 bit to represent relative measurements, it is shown that ‘sign of innovation’ EKF and MAP estimators can be obtained for each robot that provide reasonable estimation performance in many situations. This approach is perhaps most closely related qualitatively to the event-triggered approach proposed here, with key differences being that: (i) event-triggering does not depend on the sign of innovation, but rather on expected values and covariances of the innovations (which in turn are a function of motion and sensor models); and (ii) rather than always transmitting a fixed-size quantized measurement, an event-triggering sender robot either sends no data (0 bits) or full real-valued data to the receiver robot.

II.2.2 Weakly Coupled Methods:

Ref. SEW-JMW-LLW-RME:13 () formulates a relatively simple ‘one-way fusion’ approach in which a designated vehicle (server) broadcasts its position estimates to other robots (clients), which then update their pose estimates via extended information filters. Kia, et al.  SSK-SFR-SM:14 () describe a more sophisticated decentralized EKF approach in which mobile robots with correlated states exchange different sets of variables to update their local correlations, only when there is a new relative measurement between them (rather than at each time step). Ihler, et al. ATI-JWF-RLM-ASW:05 () exploited a similar idea for node self-localization in static sensor networks, by using probabilistic nonparametric (Monte Carlo) belief propagation to fuse non-linear relative position measurements with respect to other network nodes. The main drawback of these approaches is that they require restricted network topologies for interagent measurements and communications, to guarantee correct convergence and avoid data incest issues.

Other weakly coupled CL methods overcome these limitations by sharing both measurement data and state estimates together. In AB-MRW-JJL:09 (), each robot maintains two independent EKF tracks for each other robot it communicates with: one in which its own pose estimate is not updated with information received from the other robot, and another in which its pose is updated by this information. By exchanging relative measurements and pose information from both EKFs, and by tracking the pedigree of information received from all other agents, pairs of robots can be guaranteed to avoid data incest. Approaches that use other strategies to simplify each robot’s state estimator requirements have also been explored. Prorok, et al. AP-AB-AM:12 () developed a particle filtering approach in which each robot shares with detected neighbors relative range/bearing measurements and compressed sets of local particle estimates of their own states. Knuth and Barooah JK-PB:09 () and more recently Luft, et al LL-TS-SIR-WB:16 () developed similar strategies using parametric EKFs; in the former, robots opportunistically exchange relative pose measurements and pose estimates at each time step, while robots in the latter also maintain estimates of cross correlations of pose estimation errors for other robots that have been communicated with previously. In these approaches, each robot uses the received information to derive sub-optimal approximate statistical updates for its own pose estimate. In many cases, however, these methods require expensive fusion update operations for each robot’s state estimate and lead to message sizes that do not scale well to large networks.

Other weakly coupled techniques have been developed to address scaling and data incest issues simultaneously, using methods that are guaranteed to avoid data incest without extra bookkeeping or topology restrictions. Čurn, et al. JC-DM-NO-VC:13 () developed a CL approach based on the Common Past Invariant Ensemble Kalman filter (CPI-EnKF), a variant of the Monte Carlo-based Ensemble Kalman filter that is widely used for large scale data assimilation. Hao and Nashashibi HL-FN:13 () developed a method based on Split Covariance Intersection, which addresses the over-conservativeness of classical CI by decomposing covariances for each robot’s pose into two exclusive components: one independent of and another dependent on relative measurement information sent by other robots. Refs. LCCA-EDN-JLG-SIR:13 () and HM-DGE:14 () describe yet another related approach, where a robot uses its own state estimate to send a ‘pseudo state estimate’ to the other robot involved in a relative measurement; the receiving robot uses CI to fuse its current estimate with the new pseudo estimate in a statistically conservative (but memoryless) manner. This approach is applicable to dynamic/ad hoc network topologies and only requires robots to estimate their own poses; this results in lower computational complexity, but at the expense of sacrificing information from joint estimate correlations.

II.2.3 Batch CL methods:

The majority of techniques developed for CL are filter-based. However, batch estimation methods for both weakly and strongly coupled decentralized CL can also be considered. In early work, Kurazume, et al. proposed a ‘cooperative positioning system’, in which a subset of robots in the network would take turns to deliberately remain stationary in order to act as static landmarks for the other mobile robots, which then fuse their relative measurements via batch least squares Kurazume-ICRA-1996 (); Kurazume-ICRA-1998 (). Building on the success of batch SLAM techniques, Nerurkar, et al. devised a distributed MAP CL estimator for multiple moving vehicles, using distributed data allocation and distributed conjugate gradient optimization methods EDN-SIR-AM:09 (). Ref. Paull-ICRA-2015 () proposed an alternative decentralized batch CL approach for GPS-denied underwater vehicle localization based on a dual factor graph formulation, which accounts for inter-vehicle localization on top of local vehicle SLAM. Ref. Wymeersch-ProcIEEE-2009 () also considered a factor graph CL approach based on wireless network signaling. While factor graphs and other related modern batch approaches often cope well with highly non-linear dynamics and measurement models, they require sophisticated optimization solvers and are often difficult to implement online for constrained onboard computing hardware. As such, filtering-based CL is the focus of this work.

II.3 Sparse communications and event-triggered algorithms

A major feature of both weakly and strongly coupled CL algorithms developed to date is their reliance on frequent explicit information sharing between agents. However, this also leads to significant practical limitations in multi-robot systems, since it requires each robot to expend considerable energy and consume significant communication bandwidth to maintain good CL performance. For applications such as those involving UUVs, this limitation can be particularly costly and severe, and motivates consideration of alternative cooperative state estimation strategies that require less energy expenditure and communication bandwidth.

Recent work in the networked controls and cyberphysical systems literature has shown that event-triggered algorithms are an attractive means for significantly reducing communication overhead in distributed control and estimation problems. The key idea is that, unlike typical periodic (time-triggered) control and estimation algorithms that require explicit information transmission and acquisition at regular time intervals, event-triggered algorithms require explicit information transmission only on the occurrence of certain events – usually when some signal related to the states of interest falls outside a certain threshold. Hence, information is only communicated when needed and as needed, and control/estimation routines can still be performed in the absence of explicit information being sent/received by processors in the network – since the lack of any explicit information being received is itself still an informative event (provided all processors are aware of each other’s event-triggering thresholds).

Algorithms for centralized event-triggered estimation in distributed systems are fairly well established, e.g. see the survey in Liu-SSCE-2014 () and the study in Trimpe-EBCCSP-2015 () for a comparison of possible thresholding mechanisms. Yet, theory and techniques for decentralized event-triggered estimation are not as well developed. In particular, the literature to date on distributed event-triggered estimation deals with either theoretical estimation error analysis for linear systems Yan-NonlinearDyn-2014 (); Trimpe-CDC-2014 (), or one of the following application areas (with emphasis again on linear dynamics and sensor models): (i) wireless sensor networks where a single centralized state estimator receives measurements from multiple sensor nodes JS-ML:09 (); DS-TC-LS:14 (); Liu-TAC-2015 (); (ii) control networks where multiple sensors/estimators communicate with each other over a common bus Trimpe-IFAC-2011 (); Trimpe-TAC-2014 (); Muehlebach-ACC-2015 (); or (iii) consensus-based Kalman filtering algorithms Battistelli-ACC-2016 (); Li-IET-2016 (). By definition, area (i) is not directly applicable to the decentralized CL problem considered here. Methods in area (ii) make the critical assumption that the communication bus is always reliable and that all data on the bus is accessible to each distributed state estimator, and thus is not generally well-suited to robotic CL problems. While the consensus methods of area (iii) could in principle be adapted to robotic CL, these generally tend to have intrinsically higher communication overhead and worse estimation accuracy than other forms of decentralized data fusion Chong-FUSION-2016 ().

A natural starting point to develop event-triggered algorithms for decentralized CL is to adapt existing centralized event-triggered methods. For instance, refs.  JS-ML:09 (); DS-TC-LS:14 () develop event-triggered estimation algorithms in the context of distributed sensing for a single partially observable dynamic process model, where a single centralized fusion center fuses the explicit and implicit measurements sent to it. In this paper, the centralized event-triggered estimation techniques from these works are adapted and evolved further into a novel strongly coupled decentralized CL framework, in which multiple partially observable dynamic processes (robot pose states) are estimated simultaneously.

III Technical Preliminaries

To build up to our formal problem statement (Section IV) and technical approach (Section V), this section presents useful notation and concepts from probability theory.

Let $\mathbb{R}$ be the set of real numbers and $\mathbb{R}_{\geq 0}$ the set of nonnegative real numbers. For a vector $v = (v_1, \dots, v_n) \in \mathbb{R}^n$, let $\mathrm{diag}(v)$ be the diagonal matrix with the vector $v$ along the diagonal. Let $\|v\|_2$ be the Euclidean $2$-norm of $v$. For a set of matrices $A_1, \dots, A_m$, each in $\mathbb{R}^{n \times n}$, let $\mathrm{blkdiag}(A_1, \dots, A_m)$ be the block-diagonal matrix whose $i$-th diagonal block is the matrix $A_i$. The matrix $I$ is the identity matrix of the appropriate size.

Consider a multi-variate random variable $X$, and let $p(x)$ denote its probability density function (pdf). Let $\mathbb{E}[X]$ and $\mathrm{Cov}[X]$ denote its expected value and covariance, respectively. Given $X$ with pdf $p(x)$ and a subset $D$ of its support, define a conditional pdf as $p(x \mid X \in D) = \mathbf{1}_D(x)\, p(x)/P_D$; that is, the pdf is set to zero outside $D$ and renormalized, where $\mathbf{1}_D$ is the indicator function of $D$ and $P_D$ is the probability mass of $D$ computed using $p$. With a slight abuse of notation, the conditional random variable with pdf $p(x \mid X \in D)$ is referred to as the ($p$-distributed) random variable with truncated support $D$. The one-dimensional normal probability density function, $\phi(x; \mu, \sigma^2)$, is defined as
$$\phi(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$$
The standard normal distribution’s tail probability, i.e., the probability that the random variable is larger than a quantity $z$, is denoted by $Q(z)$, and defined as
$$Q(z) = \int_z^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt.$$

The next result NLJ-SK-NB:95 () describes how the statistics of a Gaussian random variable change when conditioned on having a bounded support interval.

Lemma III.1

Let $X$ be a univariate Gaussian random variable with mean $\mu$ and variance $\sigma^2$. Then, the mean $\bar{\mu}$ and variance $\bar{\sigma}^2$ of $X$ with truncated support $[a, b]$ satisfy $\bar{\mu} = \mu + \lambda\,\sigma$ and $\bar{\sigma}^2 = \sigma^2\,(1 + \rho - \lambda^2)$, where

$$\lambda = \frac{\phi(\alpha; 0, 1) - \phi(\beta; 0, 1)}{Q(\alpha) - Q(\beta)} \quad \text{and} \quad \rho = \frac{\alpha\,\phi(\alpha; 0, 1) - \beta\,\phi(\beta; 0, 1)}{Q(\alpha) - Q(\beta)},$$

with $\alpha = (a - \mu)/\sigma$ and $\beta = (b - \mu)/\sigma$.
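For illustration, the following Python sketch (not from the original paper; function name and test values are illustrative) evaluates the truncated-Gaussian moments of Lemma III.1 and checks them against Monte Carlo sampling:

import numpy as np
from scipy.stats import norm

def truncated_gaussian_moments(mu, sigma, a, b):
    """Mean and variance of X ~ N(mu, sigma^2) conditioned on a <= X <= b (Lemma III.1)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = norm.cdf(beta) - norm.cdf(alpha)          # probability mass of [a, b]
    lam = (norm.pdf(alpha) - norm.pdf(beta)) / Z  # mean-shift term lambda
    rho = (alpha * norm.pdf(alpha) - beta * norm.pdf(beta)) / Z
    mean = mu + sigma * lam
    var = sigma**2 * (1.0 + rho - lam**2)
    return mean, var

# Quick Monte Carlo check of the closed-form moments (illustrative values).
mu, sigma, a, b = 1.0, 2.0, -1.0, 2.5
samples = np.random.normal(mu, sigma, 500_000)
samples = samples[(samples >= a) & (samples <= b)]
print(truncated_gaussian_moments(mu, sigma, a, b))
print(samples.mean(), samples.var())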

IV The Cooperative Localization Problem

This section formally describes the robots’ dynamical model, sensing, and communication models, and the CL problem. The fundamental concepts behind event-triggered state estimation with innovation thresholds are then presented.

IV.1 Robot dynamics

Assume $N$ robots are moving in a particular environment and that their dynamics are modeled as a discrete-time nonlinear system. In this way, at each time step $k$,

$$x_i[k+1] = f_i\big(x_i[k], u_i[k]\big) + w_i[k], \qquad (1)$$

where $x_i[k]$ represents the state of robot $i$, $f_i$ is the (nonlinear) discrete-time state transition function, and $u_i[k]$ is its control input, for $i \in \{1, \dots, N\}$. Here, the noise $w_i[k]$ is assumed to be normally distributed with zero mean and covariance $Q_i$, and uncorrelated in time. One may linearize the nonlinear dynamics model about previous estimates to obtain the system matrices $F_i[k]$ and $G_i[k]$, for all $i$, see e.g. SSK-SFR-SM:14 (). Here, $F_i[k] = \partial f_i/\partial x_i$ and $G_i[k] = \partial f_i/\partial u_i$ are the Jacobians evaluated at the current best state estimate $\hat{x}_i[k]$. This gives the time-varying linear model
$$x_i[k+1] \approx F_i[k]\, x_i[k] + G_i[k]\, u_i[k] + w_i[k].$$

For each robot pair $i$ and $j$, the corresponding process noises $w_i$ and $w_j$ are assumed mutually uncorrelated. Let $x[k] = (x_1[k], \dots, x_N[k])$ be the vector of all robots’ locations. It is further assumed that the robots are realizing a cooperative task with known assigned motions, and so the control strategies of all robots are known. When performing Extended Kalman Filter (EKF) updates for state estimation, the nonlinear model (1) is used to propagate the state estimate during the prediction step, while $F_i[k]$ and $Q_i$ are used to propagate the covariance in the prediction step YBS-XRL-TK:01 ().
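Since the nonlinear simulations later in the paper use Dubins vehicle dynamics, the following is a minimal sketch of what $f_i$ and its Jacobian $F_i$ could look like for such a model; the state parameterization, time step, and noise values below are assumptions for illustration, not the paper's exact choices.

import numpy as np

def dubins_step(x, u, dt=0.1):
    """Nonlinear state transition f_i: x = [px, py, heading], u = [speed, turn rate]."""
    px, py, th = x
    v, w = u
    return np.array([px + dt * v * np.cos(th),
                     py + dt * v * np.sin(th),
                     th + dt * w])

def dubins_jacobian(x, u, dt=0.1):
    """Jacobian F_i = df_i/dx evaluated at the current estimate (used to propagate covariance)."""
    _, _, th = x
    v, _ = u
    return np.array([[1.0, 0.0, -dt * v * np.sin(th)],
                     [0.0, 1.0,  dt * v * np.cos(th)],
                     [0.0, 0.0,  1.0]])

# EKF prediction for one robot: nonlinear mean propagation, linearized covariance propagation.
x_hat, P = np.array([0.0, 0.0, 0.1]), 0.05 * np.eye(3)
u, Q = np.array([1.0, 0.05]), np.diag([1e-3, 1e-3, 1e-4])
F = dubins_jacobian(x_hat, u)
x_hat, P = dubins_step(x_hat, u), F @ P @ F.T + Q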

IV.2 Sensing and communication

Robot $i$ can occasionally take a set of local scalar measurements stacked into $y_i[k]$,
$$y_i[k] = h_i\big(x[k]\big) + v_i[k].$$
These may include the components of the relative position between itself and the robots in its view (i.e. nonlinear range and bearing), as well as absolute position measurements (i.e. GPS). When performing EKF updates for state estimation, the measurement model is linearized to compute measurement innovation covariances, using the Jacobian $H_i[k] = \partial h_i/\partial x$ evaluated at the current best state estimate, giving
$$y_i[k] \approx H_i[k]\, x[k] + v_i[k],$$
where $H_i[k]$ is the measurement matrix that transforms the state into measurement space. The measurement noise $v_i[k]$ is normally distributed with zero mean and diagonal covariance $R_i$, and uncorrelated in time. The measurement noise is uncorrelated with the process noise, and uncorrelated with the measurement noises produced by different robots. It is assumed throughout that outliers and corrupt data from invalid measurements are discarded or not present.
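As a sketch of the kind of nonlinear measurement model $h_i$ and Jacobian $H_i$ used here, the following computes a relative range/bearing measurement between two planar positions; the specific parameterization and example values are illustrative assumptions.

import numpy as np

def range_bearing(xi, xj):
    """Nonlinear relative measurement h(x) of robot j taken by robot i (planar positions only)."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def range_bearing_jacobian(xi, xj):
    """H = dh/d[xi, xj], evaluated at the current estimates, for EKF innovation covariances."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    r2 = dx**2 + dy**2
    r = np.sqrt(r2)
    H_i = np.array([[-dx / r,  -dy / r],
                    [ dy / r2, -dx / r2]])
    return np.hstack([H_i, -H_i])   # columns for robot i's and robot j's positions

z = range_bearing(np.array([0.0, 0.0]), np.array([3.0, 4.0]))    # -> [5.0, 0.927...]
H = range_bearing_jacobian(np.array([0.0, 0.0]), np.array([3.0, 4.0]))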

A point-to-point communication protocol is assumed to be running on the network that efficiently allows each robot to know which other robots it can communicate with (i.e. the desired network topology is known and fixed in advance). The subset of robots that robot $i$ can communicate with is denoted $\mathcal{N}_i$. If robot $i$ can communicate with robot $j$, then it is assumed that $j$ can also communicate with $i$. Furthermore, it is also assumed that robots obtain relative position measurements of each other only if they can communicate with each other. Finally, the neighbor set $\mathcal{N}_i$ for each robot remains fixed, such that the number of neighbors for each robot is small compared to the size of the full network (i.e. $|\mathcal{N}_i| \ll N$) to limit the number of messages exchanged and processed at each step. For simplicity, messages are assumed to be sent and received synchronously within a given time step, up to some fixed range between robots.

IV.3 Cooperative localization

For each robot $i$, the current state and covariance estimate of a subset $S_i$ of other robots in the team at time $k$ is denoted by $(\hat{x}_i[k], P_i[k])$. For simplicity, let $|S_i| = N$, the total number of robots, for all robots $i$; that is, assume each robot is interested in the state of all robots in the network, even if only a subset of all robots are in $\mathcal{N}_i$.

In the CL problem considered here, robot $i$ shares some or all of its local measurement vector $y_i[k]$ with robots in $\mathcal{N}_i$. Each robot fuses information received from its local sensors and other robots to minimize the error of its network state estimate $(\hat{x}_i[k], P_i[k])$. Since measurements are not broadcast to everyone, each robot will obtain a different network state estimate. In particular, exchanging relative range and bearing measurements between robots $i$ and $j$ causes the corresponding off-diagonal blocks of $P_i$ and $P_j$ to become non-zero, due to the correlation of estimation errors for $i$’s and $j$’s states in both $P_i$ and $P_j$. On the other hand, if a third robot $l$ never processes relative measurements between $i$ and $j$, then the corresponding block of $P_l$ remains zero (assuming $l$ has no prior information about this correlation). If robot $i$ receives an absolute position measurement, improves its absolute state estimate, and shares the measurement with its neighbors, the cross-correlations allow the members of $\mathcal{N}_i$ to improve their own local state estimates as well. Robot $i$ updates $(\hat{x}_i[k], P_i[k])$ by fusing data from its onboard sensors with measurements received directly from neighbors $j \in \mathcal{N}_i$. It should also be noted that robots do not ‘forward’ data received from neighbors to other neighbors, i.e. if $j, l \in \mathcal{N}_i$ but $l \notin \mathcal{N}_j$, then robot $l$ does not receive any data via $j$, and, likewise, robot $j$ does not receive any data via $l$ (although both $j$ and $l$ will receive data, including any relative ranges and bearings to both $j$ and $l$, from $i$).

In this paper, standard Extended Kalman Filter (EKF) state prediction and measurement updates are used to fuse measurements sequentially. The generic form of these updates is shown in Algorithms 1 and 2; the latter is applied in turn to each scalar measurement in a measurement queue for any particular robot’s local state estimator, by computing the Kalman gain and measurement innovation for each one. This work extends the notion of CL to handle situations where the robots use an event-triggering rule to decide when to send measurements to neighbors.

1: Require: $\hat{x}[k-1]$, $P[k-1]$, and $u[k-1]$
2: $\hat{x}^-[k] = f\big(\hat{x}[k-1], u[k-1]\big)$
3: $P^-[k] = F[k-1]\, P[k-1]\, F[k-1]^\top + Q$
4: return $\hat{x}^-[k]$, $P^-[k]$
Algorithm 1 Generic Kalman Filter Dynamics Prediction Update
1: Require: $y^m[k]$ and $(H^m, R^m)$ (measurement obtained locally or from neighbor)
2: $(\hat{x}[k], P[k]) \leftarrow$ current local estimate
3: if $y^m[k]$ is first measurement fused at time $k$ then
4:     $\hat{x}[k] \leftarrow \hat{x}^-[k]$,  $P[k] \leftarrow P^-[k]$  (from prediction update)
5: end if
6: $K[k] = P[k]\,(H^m)^\top \big(H^m P[k] (H^m)^\top + R^m\big)^{-1}$
7: $\hat{x}[k] \leftarrow \hat{x}[k] + K[k]\big(y^m[k] - H^m \hat{x}[k]\big)$,  $P[k] \leftarrow \big(I - K[k] H^m\big) P[k]$
8: return $\hat{x}[k]$, $P[k]$
Algorithm 2 Generic Sequential Kalman Filter Measurement Update
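As an illustration, a minimal numpy sketch of the sequential scalar measurement update in Algorithm 2; the function name and example numbers are illustrative, not taken from the paper.

import numpy as np

def kf_scalar_update(x_hat, P, y, H, R):
    """Fuse one scalar measurement y with measurement row H and variance R (Algorithm 2)."""
    H = H.reshape(1, -1)
    S = (H @ P @ H.T).item() + R        # innovation variance
    K = (P @ H.T) / S                   # Kalman gain (n x 1)
    innov = y - (H @ x_hat).item()      # measurement innovation
    x_hat = x_hat + (K * innov).ravel()
    P = (np.eye(len(x_hat)) - K @ H) @ P
    return x_hat, P

# Measurements in a robot's queue are fused one element at a time.
x_hat, P = np.zeros(2), np.eye(2)
for y, H, R in [(1.2, np.array([1.0, 0.0]), 0.5), (0.4, np.array([1.0, -1.0]), 0.8)]:
    x_hat, P = kf_scalar_update(x_hat, P, y, H, R)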

IV.4 Event-triggered estimation

This section describes event-triggered estimation using innovation thresholds. Figure 2 illustrates the basic event-triggering concept for a measurement taker $i$, which must decide whether to send its measurement $y_i[k]$ to neighbor $j$ at a given time step $k$. In this simple example, suppose $y_i[k]$ is a direct measurement of a scalar state $x[k]$, i.e. $y_i[k] = x[k] + v_i[k]$, and for now assume both $i$ and $j$ possess an identical copy of the same local estimate $\hat{x}[k]$ of $x[k]$ just before $i$ measures $y_i[k]$. Robot $i$ can compute the measurement innovation $\nu_i[k] = y_i[k] - \hat{x}[k]$. According to Algorithm 2, for a given Kalman gain, a smaller innovation will tend to produce a smaller effect on the update for $\hat{x}[k]$. That is, the closer $y_i[k]$ is to its expected value ($\hat{x}[k]$ in this case), the smaller the ‘surprise factor’ for the measurement update step becomes, meaning that $\hat{x}[k]$ will not change by much. More generally, if the Kalman gain is such that a wide range of $y_i[k]$ and $\hat{x}[k]$ values will typically (in a statistical sense) lead to small corrections in the state update of Algorithm 2, then large shifts in $\hat{x}[k]$ can only occur when the innovation $\nu_i[k]$ is relatively large, i.e. when $y_i[k]$ disagrees significantly with $\hat{x}[k]$. If $\hat{x}[k]$ provides a good estimate of $x[k]$ in the MSE sense, then such major disagreements will be statistically infrequent, and $\nu_i[k]$ tends to remain relatively small.

This insight motivates the idea behind event-triggered filtering: robot $i$ only sends measurement $y_i[k]$ to $j$ if the magnitude of the ‘surprise factor’ from the innovation is larger than some threshold parameter $\delta$. Thus, measurements come in at a regular interval to robots, and robots have the opportunity to share or censor each of these measurements over time. Importantly, since the robots are aware of $\delta$ and share a common pre-fusion estimate $\hat{x}[k]$, the lack of a measurement from $i$ can be used by $j$ to infer the possible values of $y_i[k]$ obtained at time $k$. As such, $j$ can still carry out a local fusion update for $\hat{x}[k]$ using implicit information about $y_i[k]$ in the form of set-valued knowledge for $y_i[k]$.

Fig. 2: Schematic of a robot using an innovation-based triggering strategy.
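For concreteness, a small Python sketch of the sender-side rule just described, assuming linearized scalar measurement rows and an illustrative threshold; the names are not from the paper.

import numpy as np

def select_components_to_send(y, H_rows, x_hat_common, delta):
    """Return indices of measurement components whose innovation w.r.t. the common
    pairwise estimate exceeds the threshold delta (these are sent explicitly);
    the remaining components are censored and become implicit information."""
    send = []
    for m, (y_m, H_m) in enumerate(zip(y, H_rows)):
        innovation = y_m - float(H_m @ x_hat_common)
        if abs(innovation) > delta:
            send.append(m)
    return send

y = np.array([1.9, 0.1])
H_rows = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
explicit = select_components_to_send(y, H_rows, np.zeros(2), delta=0.5)   # -> [0]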

Figure 3 illustrates how receiving robot $j$ handles a measurement communicated by robot $i$ at a given time step. If robot $i$ does send the measurement explicitly, it can be fused in the usual way via the EKF described in Section IV.3. However, if no explicit measurement is received from robot $i$, robot $j$ reasons over the set of measurements that would cause $i$ to censor the measurement (i.e. such that the innovation is less than $\delta$). This corresponds to a non-Gaussian measurement model, and assumes that robots $i$ and $j$ have a common estimate from which to compute the innovation.

Now, in a decentralized setting with $N$ robots, any pair of communicating robots $i$ and $j$ may also communicate with other robots that are not in $\mathcal{N}_j$ or $\mathcal{N}_i$, respectively, and which may therefore contribute additional information that is never received by the other robot in the pair. Therefore, the assumption of a common reference estimate being available to all robots becomes very difficult to enforce (without resorting to techniques like consensus, which as mentioned earlier leads to very high communication overhead and inefficient estimates). Instead, the common assumption is relaxed and adapted to an event-triggering strategy where a common reference estimate must exist only between pairs of communicating robots $i$ and $j$. As such, each communicating robot pair must maintain a separate state estimate that tracks only the (explicit and implicit) information that is actually communicated between them. This ‘extra bookkeeping estimator’ strategy shares similarities with other strongly coupled CL methods described earlier, and also bears resemblance to the channel filter algorithm used in Bayesian decentralized data fusion, which tracks and removes common information exchanged by pairs of intelligent sensor nodes to avoid data incest SG-HRDW:94 (); Campbell-CSM-2016 ().

Section V describes the full details of this adapted event-triggering strategy for decentralized CL. In short, each robot $i$ propagates a current best state and covariance estimate of the whole network at each timestep $k$, denoted $(\hat{x}_i[k], P_i[k])$, and also maintains ‘common estimates’ with each other robot $j$ that it shares measurements with, denoted by $(\hat{x}_{ij}[k], P_{ij}[k])$. These common estimates can be interpreted as $i$’s estimate of $j$’s estimate of the state of the network. In general, this is a conservative estimate because it assumes that $j$ does not receive any other information from the world, i.e. it only accounts for information exchanged between $i$ and $j$ exclusively. The update laws enforce $\hat{x}_{ij}[k] = \hat{x}_{ji}[k]$ and $P_{ij}[k] = P_{ji}[k]$, for all communicating pairs $(i, j)$ and all times $k$, allowing $i$ and $j$ to have a common state and covariance estimate with which to compare the measurement innovation against the innovation threshold parameter $\delta$.

Fig. 3: Schematic of a robot’s fusion strategy for implicit and explicit measurements received.
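To make the bookkeeping concrete, the following sketch (illustrative names, not the paper's implementation) shows the minimal per-robot data structure: one estimate of the full network state plus one common estimate per neighbor that measurements are exchanged with.

import numpy as np
from dataclasses import dataclass, field

@dataclass
class RobotEstimator:
    """Local filter state for one robot in the event-triggered CL scheme."""
    robot_id: int
    x_hat: np.ndarray                 # estimate of the full network state
    P: np.ndarray                     # covariance of the full network state
    # Common (pairwise) estimates: one per neighbor, tracking only the
    # information actually exchanged (explicitly or implicitly) with that neighbor.
    common: dict = field(default_factory=dict)

    def add_neighbor(self, j):
        self.common[j] = (self.x_hat.copy(), self.P.copy())

n_states = 6                          # e.g., 3 robots x 2 states each
r1 = RobotEstimator(1, np.zeros(n_states), np.eye(n_states))
r1.add_neighbor(2)                    # robots 1 and 2 initialize identical common estimates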

Note that this implicit measurement (later formalized in Section V.2.2) has a non-Gaussian measurement likelihood model, making exact Bayesian fusion of such information analytically intractable since it does not admit a closed-form solution. Thus, following the approach in DS-TC-LS:14 () to fuse set-valued measurements, a Gaussian distribution is used to approximate the posterior pdf on the state (which in this case will be a truncated Gaussian). After an implicit measurement is taken, the first two moments of the prior Gaussian are corrected by weighting its moments with those of a Gaussian with truncated support (Lemma III.1). The correction is implemented using a modified Kalman gain in the Kalman filter state estimator update for the receiving robot. Algorithm 3 describes concisely how this is done.

Since the Gaussian approximation to the implicit measurement update has a closed-form solution only for scalar measurements, the measurement covariance $R_i$ is assumed to be diagonal (as in Section IV.2) so that each measurement can be treated with independent scalar updates. The use of scalar sequential updates is necessitated by the fact that the formulas for determining the moments of a truncated Gaussian pdf are available only in the scalar case (hence, elements of the measurement vector are censored and/or fused one element at a time). In principle, implicit measurement fusion updates can be numerically approximated for non-scalar measurements, e.g. using sparse grid quadrature integration methods for the necessary moment calculations Heiss-JEcon-2007 (); Jia-JGCD-2011 (); however, for simplicity, these updates are not developed here.

V Event-triggered Cooperative Localization

This section describes the proposed event-triggered cooperative localization algorithm, beginning with an overview of its structure.

V.1 Description of event-triggered cooperative localization algorithm

To reduce the communication cost of transmitting measurements at every time step, an event-triggered strategy is employed within a Kalman filtering framework. The idea is that a robot $i$ only broadcasts the $m$-th component of its current measurement vector $y_i[k]$ at time step $k$ to a specific neighbor $j$ if the norm of that measurement component’s innovation (with respect to their common pair-wise filter estimate $\hat{x}_{ij}[k]$) is larger than a known threshold parameter $\delta$. The absence of a measurement component from a robot gives implicit information about that measurement. This thresholding based on the innovation is applied to all measurements collected by each agent, i.e. both absolute measurements (such as GPS) as well as relative measurements (such as range/bearing). Each robot’s state estimation filter must therefore be adapted so that implicit information can be included. Figure 4 describes the overall flow of the proposed event-triggered cooperative localization algorithm.

The algorithm described here is a new decentralized adaptation of the centralized event-triggered filter in DS-TC-LS:14 (). This new distributed algorithm provides a tunable method to trade off between the quality of state estimation and costly communication of measurements. At every time step, each agent $i$ updates its own estimate of the current state of the network as well as its estimate of each other agent $j$’s estimate of the network (where $j \in \mathcal{N}_i$). Agent $i$ receives measurements from its sensor suite and performs a Kalman filter update to its own posterior state estimate. Using its a priori common state with each other agent $j$, i.e. $(\hat{x}_{ij}^-[k], P_{ij}^-[k])$, it determines which, if any, components of its measurement to send to each neighbor based on an innovation triggering threshold. Agent $i$ also receives explicit and implicit measurements from other agents. Implicit measurements are defined to be the knowledge gained when a measurement is not sent (i.e. when the innovation is smaller than the threshold). Explicit measurements are fused using a Gaussian measurement model via a Kalman filter update, while implicit measurement updates are performed using Gaussian moment matching approximations for set-based measurement updates. All measurements are used to update $(\hat{x}_i[k], P_i[k])$, whereas only measurements sent between $i$ and $j$ (explicitly or implicitly) are used to update $(\hat{x}_{ij}[k], P_{ij}[k])$.

Fig. 4: Description of the overall event-triggered cooperative localization flow.

V.2 Event-triggered information fusion

The components of the algorithm are now described in detail. The algorithm is initialized with $\hat{x}_i[0] = \hat{x}_{ij}[0]$ and $P_i[0] = P_{ij}[0]$, for all $i$ and $j \in \mathcal{N}_i$. At timestep $k$, agent $i$ first propagates the state estimates via Algorithm 1, for its own estimate and for each $j \in \mathcal{N}_i$: $(\hat{x}_i^-[k], P_i^-[k])$ and $(\hat{x}_{ij}^-[k], P_{ij}^-[k])$. Note that $\hat{x}_{ij}^-[k] = \hat{x}_{ji}^-[k]$ and $P_{ij}^-[k] = P_{ji}^-[k]$.

At each timestep $k$, robot $i$ obtains a measurement vector $y_i[k]$ from its sensor suite. Given a threshold $\delta$, for each $j \in \mathcal{N}_i$, robot $i$ determines for which measurement vector components $m$ the expected innovation will be large enough, i.e., $\big| y_i^m[k] - h_i^m\big(\hat{x}_{ij}^-[k]\big) \big| > \delta$. Then, it sends the measurement components for which this holds to agent $j$. In turn, robot $i$ will receive explicit measurements from other robots for which a similar condition holds.

V.2.1 Explicit measurement fusion

For each component $m$ for which the corresponding innovation is larger than $\delta$, agent $i$ sends measurement component $y_i^m[k]$ to $j$. Robot $j$ performs a Kalman update (c.f. Algorithm 2) with the measurement component and corresponding measurement model to its own estimate $(\hat{x}_j[k], P_j[k])$ as well as to the common state $(\hat{x}_{ji}[k], P_{ji}[k])$. Agent $i$ makes the same Kalman update to $(\hat{x}_{ij}[k], P_{ij}[k])$, keeping the common state estimates for $i$ and $j$ identical.

V.2.2 Implicit measurement fusion

For each measurement component $m$ for which the corresponding predicted innovation is smaller than $\delta$, agent $i$ does not send that measurement component to $j$. Therefore, $j$ ‘receives’ the implicit information from $i$ that the censored component’s innovation with respect to the common estimate is small, or

$$\big|\, y_i^m[k] - h_i^m\big(\hat{x}_{ij}^-[k]\big) \,\big| \leq \delta. \qquad (2)$$

Agent $j$ fuses the implicit measurement of component $m$ into its estimate, $(\hat{x}_j[k], P_j[k])$, and both $i$ and $j$ also fuse it into their shared estimates, $(\hat{x}_{ij}[k], P_{ij}[k])$ and $(\hat{x}_{ji}[k], P_{ji}[k])$, using Algorithm 3. In Algorithm 3, $\bar{x}$ refers to the specific common shared state which is kept by an agent; that is, if the measurement was implicitly sent from $i$ to $j$, then $\bar{x} = \hat{x}_{ij}[k] = \hat{x}_{ji}[k]$. The other variables refer to the states, covariances, measurements, and error statistics of the problem.

1: Require: $\hat{x}$, $P$ (estimate to be updated), $\bar{x}$ (common reference estimate), $h^m$, $H^m$, $R^m$, $\delta$
2: $\mu = h^m(\bar{x}) - h^m(\hat{x})$
3: $Q_e = H^m P (H^m)^\top + R^m$
4: $\alpha = (\mu - \delta)/\sqrt{Q_e}$,  $\beta = (\mu + \delta)/\sqrt{Q_e}$
5: $\bar{z} = \dfrac{\phi(\alpha; 0, 1) - \phi(\beta; 0, 1)}{Q(\alpha) - Q(\beta)}$
6: $\mathrm{var}(z) = 1 + \dfrac{\alpha\,\phi(\alpha; 0, 1) - \beta\,\phi(\beta; 0, 1)}{Q(\alpha) - Q(\beta)} - \bar{z}^2$
7: $K = P (H^m)^\top Q_e^{-1}$
8: $\hat{x} \leftarrow \hat{x} + K\,\sqrt{Q_e}\,\bar{z}$
9: $P \leftarrow P - \big(1 - \mathrm{var}(z)\big)\, K H^m P$
10: return $\hat{x}$, $P$
Algorithm 3 Implicit Measurement Update for Kalman Filter

Algorithm 3 shows the modified KF update that results in the optimal posterior state estimate following fusion of an implicit scalar measurement (which follows from Lemma III.1 and a corresponding theorem in DS-TC-LS:14 ()). Furthermore, $\mathrm{var}(z) < 1$, leading to a strict decrease in the covariance.
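The update can be sketched in Python as follows, matching the truncated-Gaussian moment-matching described above; the interface, names, and the assumption that the censoring region is centered on the common pairwise estimate are illustrative and should be checked against DS-TC-LS:14 ().

import numpy as np
from scipy.stats import norm

def implicit_scalar_update(x_hat, P, x_common, H, R, delta):
    """Gaussian moment-matched fusion of a censored scalar measurement:
    all that is known is |y - H @ x_common| <= delta (sketch of Algorithm 3)."""
    H = H.reshape(1, -1)
    Qe = (H @ P @ H.T).item() + R                       # innovation variance
    d = (H @ (x_common - x_hat)).item()                 # offset of the censoring region
    a, b = (d - delta) / np.sqrt(Qe), (d + delta) / np.sqrt(Qe)
    Z = norm.cdf(b) - norm.cdf(a)                       # probability mass of the censoring set
    zbar = (norm.pdf(a) - norm.pdf(b)) / Z              # normalized truncated innovation mean
    varz = 1.0 + (a * norm.pdf(a) - b * norm.pdf(b)) / Z - zbar**2
    K = (P @ H.T) / Qe                                  # standard scalar Kalman gain
    x_hat = x_hat + (K * (np.sqrt(Qe) * zbar)).ravel()  # shift toward the truncated mean
    P = P - (1.0 - varz) * (K @ H @ P)                  # strict decrease since varz < 1
    return x_hat, P

x_hat, P = np.array([0.2, 0.0]), np.eye(2)
x_hat, P = implicit_scalar_update(x_hat, P, np.zeros(2), np.array([1.0, 0.0]), 0.5, delta=1.0)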

VI Event-triggered Cooperative Localization with Covariance Intersection

Fig. 5: Event-triggered CL information flow for a network where only one robot receives GPS: (a) implicit/explicit data flow to one robot; (b) implicit/explicit data flow to another robot; (c) covariance intersection information flow.

Figures 5(a) and (b) illustrate information flow according to the event-triggered decentralized CL approach, where any element of any measurement vector can be censored to form implicit data via event-triggering. As before, the goal is for all of the robots to perform event-triggered CL to estimate the states of all robots in the network, each using the subset of explicit measurements received and the implicit information encoded in the lack of a measurement. However, two issues can arise. Firstly, in networks with more than two robots, the common reference estimates $(\hat{x}_{ij}, P_{ij})$ and $(\hat{x}_{ji}, P_{ji})$ will eventually start to differ significantly from $(\hat{x}_i, P_i)$ and $(\hat{x}_j, P_j)$, since robots $i$ and $j$ can generally receive information from other robots outside the pair. Secondly, in practice, each robot will need to accurately estimate the states of only some robots in the network, e.g. as only this information may be required to execute certain coordinated tasks, or because other robots need this information relayed to them since they are too far away to directly communicate with/measure certain robots. While the ‘relevant subset’ that any robot cares about can change over time, the network topology clearly plays an important role in determining the observability of robot states: the states of robots that are not directly measured by robot $i$ or by robots in $\mathcal{N}_i$ are unobservable to $i$ using explicit or implicit measurements only. Both issues can be easily remedied if neighboring robots occasionally augment the event-triggered data sharing protocol with direct fusion of their Kalman filter estimates.

More formally, suppose each robot $i$ aims to ensure that the weighted covariance trace $\mathrm{tr}\big(\mathrm{diag}(c_i)\, P_i[k]\big)$ remains less than a threshold parameter $\tau_i$, for each $k$. For simplicity, assume for now all agents have identical goals $\tau_i = \tau^{\mathrm{goal}}$ (Sec. VI.2 considers dynamically adjusted thresholds $\tau_i[k]$). The preference vector $c_i$, with nonnegative entries, defines how sensitive robot $i$ is to uncertainty in a given robot’s position components. For instance, one reasonable choice is that a robot would give more weight to the states of nearby/communicating robots than ones farther away, since having more accurate knowledge of these states improves self-localization.

In the proposed event-triggering algorithm, robot $i$ always fuses its own measurements and, depending on the state of the network, either implicit or explicit measurements from its neighbors. If none of these measurements pertain to a generic robot $m$, robot $i$’s error covariance for the location of $m$ will grow unbounded due to process noise. If $i$ gives zero weight to the error in robot $m$’s location (i.e., the corresponding entries of $c_i$ are zero), then $i$ is not sensitive to unbounded estimation error of $m$ when minimizing the weighted covariance trace. However, if those entries are positive, then in order to enforce $\mathrm{tr}\big(\mathrm{diag}(c_i)\, P_i[k]\big) < \tau_i$, robot $i$ must keep its estimation error of robot $m$ bounded.

Since it is assumed that robot $i$ never receives implicit or explicit measurements about robot $m$, an event-based Covariance Intersection (CI) fusion mechanism is introduced as a way for $i$ to improve its estimate of any robot $m$ that it cannot receive measurements about. Note that this approach is novel compared to other decentralized CL algorithms, because robots communicate and fuse either implicit/explicit observation data or state estimates (means and covariances), rather than always sharing explicit measurements only, state estimates only, or both simultaneously.

VI.1 Covariance intersection

CI is a conservative, consistent method for directly fusing two state estimates (with possibly non-disjoint information sets) into a new state estimate that takes into account the information in both estimates SJ-JU:07 (). CI prevents double-counting of measurements in an ad-hoc, arbitrary communication topology. Continuing the previous example, robot $i$ could improve its estimate of the unmeasured robot $m$ by performing CI with another robot that has a better estimate of $m$’s location. Fig. 5(c) shows how this information propagation leads to ‘filling in’ of the corresponding blocks of the covariance matrix for robots across the network: robots at one end of the network receive information about robots at the other end (and their correlations to other robots) through CI with intermediate neighbors, which in turn periodically perform CI with their own neighbors. Likewise, even though a robot cannot measure distant robots (and therefore cannot pass measurements pertaining to them directly onto its own neighbors), information about those robots eventually ‘percolates’ back to it via CI. Hence, as long as all robots are part of a connected network graph, dependency information for all robot states in the network eventually reaches all robots via occasional neighbor-to-neighbor CI updates. However, in the interim, the event-triggering protocol is used to fuse measurements between neighbors. Note that all elements of the estimates shown in Fig. 5(c) are explicitly shared during CI.

Note that, since CI is a ‘single shot’ decentralized fusion process, new information takes time to spread throughout the network. For instance, if robot $i$ receives new measurement information about robot $m$ via implicit/explicit measurement exchange with a neighbor at time $k$, then that information only reaches $i$’s other neighbors at a later time step, when it is passed along via CI as part of $i$’s Kalman filter estimate at the time CI is triggered.

CI requires a parameter $\omega \in [0, 1]$ that defines the relative weighting of the two estimates based on the relative quality of each estimate. Algorithm 4 describes how one can find the parameter $\omega$ that minimizes the weighted trace of the fused covariance. This minimization can be solved efficiently with gradient-free 1D optimization, e.g. bisection or golden section search Vanderplaats-Book-1984 ().

1: Require: $(\hat{x}_a, P_a)$, $(\hat{x}_b, P_b)$, weight vector $c$
2: $\omega^* = \arg\min_{\omega \in [0, 1]} \mathrm{tr}\Big(\mathrm{diag}(c)\, \big[\omega P_a^{-1} + (1 - \omega) P_b^{-1}\big]^{-1}\Big)$
3: $P_c = \big[\omega^* P_a^{-1} + (1 - \omega^*) P_b^{-1}\big]^{-1}$
4: $\hat{x}_c = P_c\, \big[\omega^* P_a^{-1} \hat{x}_a + (1 - \omega^*) P_b^{-1} \hat{x}_b\big]$
5: return $\hat{x}_c$, $P_c$
Algorithm 4 Covariance Intersection
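A sketch of Algorithm 4 in Python; scipy's bounded scalar optimizer below stands in for the bisection/golden-section search mentioned above, and the weighted-trace objective follows the text (names and example values are illustrative).

import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb, weights=None):
    """Conservatively fuse two estimates with unknown cross-correlation (Algorithm 4 sketch).
    weights: optional per-state weighting used in the trace objective."""
    W = np.diag(weights) if weights is not None else np.eye(len(xa))
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    def fused_cov(omega):
        return np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)

    # Choose omega in [0, 1] minimizing the (weighted) trace of the fused covariance.
    res = minimize_scalar(lambda w: np.trace(W @ fused_cov(w)), bounds=(0.0, 1.0), method="bounded")
    omega = res.x
    Pc = fused_cov(omega)
    xc = Pc @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return xc, Pc, omega

xa, Pa = np.array([0.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([0.5, -0.2]), np.diag([3.0, 1.0])
xc, Pc, omega = covariance_intersection(xa, Pa, xb, Pb)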

It should be noted that CI has several drawbacks that make it inapplicable as the primary state estimate updating method for strongly coupled CL. First, it is more costly to send state estimates (means and covariances) over a communication channel than it is to send a combination of explicit and implicit measurements to neighboring robots. Second, the computation cost is greater than performing a Kalman filter update due to the additional optimization step required for finding the weighting parameter $\omega$. Third, as alluded to above, CI is a conservative fusion process. Thus, it is guaranteed to discard some new information to avoid data incest and produces suboptimal fusion results (whereas the proposed event-triggered scheme can closely approximate the idealized centralized fusion rule that is given the same information set reaching a particular robot). Nonetheless, CI is guaranteed to be a consistent way to sporadically ‘reset’ common reference state estimates for ad hoc information dependencies and when it is infeasible to directly send measurements throughout the network.

VI.2 Heuristic thresholding dynamics

When the network diameter is large, a single fixed goal $\tau^{\mathrm{goal}}$ may not ensure that all robots’ weighted covariance traces actually remain below this threshold via combined event-triggered measurement fusion and CI. That is, there is no guarantee that triggering CI fusion updates for any robot whenever the weighted trace of the covariance exceeds $\tau^{\mathrm{goal}}$ will result in the posterior weighted covariance trace becoming less than $\tau^{\mathrm{goal}}$ for all robots.

To remedy this, heuristic balancing dynamics can optionally be introduced on the robots' triggering thresholds. In this case, robot i performs CI with its neighbors only when the weighted trace of its covariance exceeds δ_i(k), where δ_i(k) is now a dynamically varying quantity that helps ensure each robot actually achieves its desired uncertainty goal δ_goal. Let ρ_i(k) be the fraction of timesteps at which robot i has triggered a Covariance Intersection up to timestep k. For gains ε, γ > 0 and neighbor set N_i, the dynamics for agent i are defined as:

\delta_i(k+1) \;=\; \min\!\left( \delta_{\mathrm{goal}},\;\; \delta_i(k) \;+\; \epsilon \sum_{j \in \mathcal{N}_i} \big( \rho_i(k) - \rho_j(k) \big) \;+\; \gamma \big( \delta_{\mathrm{goal}} - \delta_i(k) \big) \right) \qquad (3)

The first argument of the minimization ensures that the triggering threshold is never above the goal δ_goal. The second argument contains three terms, the first of which carries over the previous threshold value. The second term (agreement dynamics) causes robots with large triggering rates relative to their neighbors to increase their thresholds; robots with triggering rates smaller than their neighbors' decrease their thresholds (which increases their triggering rate, as they perform CI more often). Together, all of the terms in eq. (3) (i.e., with ε, γ > 0) cause the triggering rates to converge such that all robots' weighted covariance traces are below δ_goal. The final term penalizes deviations from δ_goal. As seen in Example 2 of Section VII, these dynamics are a heuristic that, when properly tuned, allow the weighted covariance trace to converge closer to δ_goal for the robots that are least well connected in the network. It is possible to formally tune these parameters offline using truth-model simulations. Note that, due to these parameters, there can be a large variability in the frequency of robots' updates throughout the network. As seen later in Figure 7, the overall result is that well-connected robots in the network increase their triggering rate to aid the less-connected ones. Hence, these balancing dynamics are one possible way to allow the information of interest to propagate more efficiently between robots.
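As an illustration only, the sketch below applies one step of the threshold dynamics, assuming the reconstructed form of eq. (3) above; the gains eps and gamma, the dictionary-based bookkeeping, and all names are assumptions for this sketch rather than details taken from the paper.

```python
# Illustrative sketch of one step of the heuristic threshold dynamics, under
# the reconstructed form of eq. (3). delta and rho are dicts keyed by robot id
# (current thresholds and CI-triggering rates); neighbors maps each robot to
# its graph neighbors; eps and gamma are assumed tuning gains.
def update_thresholds(delta, rho, neighbors, delta_goal, eps, gamma):
    new_delta = {}
    for i, d_i in delta.items():
        agreement = sum(rho[i] - rho[j] for j in neighbors[i])  # agreement dynamics
        candidate = d_i + eps * agreement + gamma * (delta_goal - d_i)
        new_delta[i] = min(delta_goal, candidate)               # never exceed the goal
    return new_delta
```

Each robot would run this update locally using only its neighbors' reported triggering rates, which keeps the balancing scheme decentralized.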

VII Linear Model Simulation Results

Simulation results were obtained to analyze the performance of the decentralized event-triggered CL algorithm for both simple one-dimensional, linear dynamics and nonlinear Dubins vehicle dynamics with nonlinear range and bearing measurements. This section focuses on the linear case to provide illustrative proof-of-concept examples, while Section VIII describes the nonlinear case to examine algorithm performance under more realistic application conditions.

For simplicity, the following simulations consider several robotic agents moving in one dimension with identical, known motion control laws and linear dynamics. The additive process noise has zero mean and a diagonal covariance matrix with equal diagonal entries. The agents interact according to a line graph topology; i.e., agent 1 can communicate with agent 2, agent 2 can communicate with agents 1 and 3, and so on. Each agent can take unbiased GPS measurements of its own position, as well as unbiased relative distance measurements to the agents it can communicate with. The event-triggering parameter δ is fixed. In the following, we include two simulated experiments. In the first, the event-triggering strategy is shown to keep the trace of the covariance bounded for three robots without using the CI reset mechanism. In the second, seven robots employ both the event-triggered measurement propagation and CI resets.
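For concreteness, a minimal sketch of the per-measurement send decision in this 1-D linear setting is shown below, using a variance-normalized innovation test against the threshold δ; the exact triggering statistic used by the algorithm may be normalized differently, and the function and variable names are illustrative.

```python
# Illustrative sketch of the event-triggered send decision in the 1-D linear
# case: the scalar measurement z is sent explicitly only if its variance-
# normalized innovation exceeds the threshold delta. The exact trigger used by
# the algorithm may be normalized differently; all names are illustrative.
import numpy as np

def should_send(z, H, x_hat, P, R, delta):
    innovation = z - float(H @ x_hat)   # predicted-measurement residual
    S = float(H @ P @ H.T + R)          # innovation variance (scalar measurement)
    return abs(innovation) / np.sqrt(S) > delta
```

A measurement that fails this test is not transmitted, and the receiving neighbor instead fuses the fact that it was censored as implicit, set-valued information.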

Example 1: Three robots, constant thresholds

Fig. 6: Results for Example 1: (a) one robot's estimation errors; (b) trace of network state covariance; (c) effect of no implicit information; (d) fraction of messages sent.

For three agents, Figure 6(a) depicts the error between the true robot locations and one robot's filter estimates (red = 1, green = 2, blue = 3). Note that the filter tracks the true network state with small error. For the same run, Figure 6(b) shows the trace of the covariance matrix of each agent's estimate compared to that obtained by a centralized Kalman filter that fuses all agents' measurements (red = 1, green = 2, black = 3, blue = centralized). Note that since agent 2 (green) is the most connected, its covariance is nearly equal to that of the centralized Kalman filter (blue). Figure 6(c) shows the trace of the covariance for the case where agent 2 does not fuse any of the implicit information received from agents 1 and 3 (dashed green), versus the case where agent 2 fuses all implicit/explicit data (solid green) and versus the centralized estimate (blue). The implicit information clearly makes a large difference in minimizing the estimation error. Figure 6(d) shows the cumulative fraction of the total measurements that each agent explicitly sent to other agents (red = 1, green = 2, blue = 3). Even though only a fraction of the measurements were sent explicitly, the overall network estimation performance is not much worse than that of the centralized Kalman filter. This justifies the use of event-triggered measurements to trade off estimation performance against measurement communication cost. The threshold parameter δ can be tuned according to a given application's relative weighting of performance and communication cost. This figure only counts explicit measurements sent and does not include messages related to CI; however, because all agents were able to directly or indirectly measure all other agents, no CI updates were required in this run.

Example 2: Seven robots, with/without adaptive thresholding

Now we consider a network of seven agents under the same assumptions as defined above (line graph communication topology, agents can measure their own position and relative position to the agents on either side of them). Because some agents are not able to measure (either directly or via a communicating neighbor) all other agents, the corresponding position covariances would diverge in time. Thus, agents periodically perform CI with neighbors to fuse their state estimates and gain information about agents they cannot measure.

Fig. 7: Results for Example 2: (a) network covariance trace, no adaptive thresholding; (b) CI trigger rate, no adaptive thresholding; (c) network covariance trace, adaptive thresholding; (d) CI trigger rate, adaptive thresholding.

Figure 7(a) shows the trace of the network state covariance matrix estimated by each agent vs. time, and Figure 7(b) shows the CI triggering rate for each agent (number of CI updates divided by number of communication events) vs. time. In Figures 7(a) and (b), the agents all trigger CI with their neighbors when the weighted trace of their covariance matrix exceeds the fixed threshold δ_goal. Note that some agents are more connected than others, and so not all are able to maintain their covariance trace below δ_goal. As seen in Fig. 7(b), the less connected agents (dark blue, black) must perform CI fusion much more often, while the other agents perform CI only rarely. Even so, the robots that trigger CI most frequently do not receive enough information to lower their covariance traces, because the better-connected robots, which do not need CI themselves, do not trigger it often enough.

Figures 7(c) and (d) show the same metrics to illustrate the effect of the heuristic dynamics (3) on the triggering threshold when the gains ε and γ are nonzero. By having some of the more connected agents (light blue, red, maroon) lower their triggering thresholds (and so trigger more frequently), all agents are able to maintain their state estimate covariance below the desired threshold δ_goal. Additionally, the triggering rate for the less connected agents (black, dark blue) decreases significantly because the more connected agents increased their rates slightly. The settings of ε and γ affect the rate of convergence: with γ = 0, agents generally converge to a threshold less than δ_goal, and increasing γ raises the threshold to which the agents converge. The parameters used here were chosen by hand tuning to achieve the best output performance. More formal analysis and strategies for automatic adaptation are the subject of ongoing work.

VIII Nonlinear Model Simulation Results

This section analyzes the performance of the decentralized event-triggered CL algorithm for aerospace vehicle models and scenarios that are more realistic than those considered in the previous section. In these simulations, several factors are varied and their effects on the resulting CL performance are analyzed for two simulated teams of small fixed-wing aerial robots, each having 2D Dubins vehicle dynamics at the same constant altitude and nonlinear relative range and bearing measurement models. Roll and pitch variations for each vehicle are ignored for simplicity, so that only planar translations, velocities, heading, and heading rates are considered. The factors varied in these simulations are:

  1. the tuning of the event-triggered innovation threshold δ, to understand its effect on the resulting communication volume between robots;

  2. the presence of an imperfect communication channel, i.e. where messages between robots are either completely received or dropped with some fixed probability;

  3. vehicle maneuverability, to assess how the non-linear dynamics and measurement models impact the approximations made in the event-triggered EKF (which relies on linearization);

  4. vehicle network topology, to assess how the combined flow of implicit/explicit data and state estimate information (via CI) impact estimation error convergence rates, especially if only one robot has access to GPS-like measurements.

In the following, the general nonlinear dynamics and measurement models are first described. The simulation studies for a 2-robot decentralized CL case are then discussed, under the assumption that both robots have GPS throughout, in order to provide most of the insights for items 1–3 above. Finally, the results for 6-robot CL are discussed, where only a single designated robot has access to GPS throughout.

VIII.1 Nonlinear dynamics and measurement models

For ease of notation in this section, we write (x_i, y_i, θ_i) for the planar position and heading components of robot i's state. For estimates, a superscript denotes the robot holding the estimate; for example, x_i^j is robot j's estimate of robot i's x-position. We also use r and b for the range and bearing measurement components, respectively, so that r_{ij} and b_{ij} denote the range/bearing measurement of robot j taken by robot i. The Dubins vehicle dynamics for vehicle i are given by:

\dot{x}_i = v_i \cos\theta_i, \qquad \dot{y}_i = v_i \sin\theta_i, \qquad \dot{\theta}_i = u_i,

plus additive process noise, where v_i is a constant velocity and u_i is the commanded heading rate. The measurement models for vehicle i observing vehicle j are given by:

r_{ij} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2} + \nu_{r}    (4a)
b_{ij} = \operatorname{atan2}(y_j - y_i,\, x_j - x_i) - \theta_i + \nu_{b}    (4b)

where ν_r and ν_b are zero-mean measurement noises.

Additionally, robots receive absolute (GPS-like) measurements of their pose components x_i, y_i, and θ_i with additive noise:

z_{x,i} = x_i + \nu_{x}    (5a)
z_{y,i} = y_i + \nu_{y}    (5b)
z_{\theta,i} = \theta_i + \nu_{\theta}    (5c)
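A minimal simulation-oriented sketch of these models, as they might be coded before EKF linearization, is given below; the Euler discretization, the noiseless function forms, and the names are assumptions made for illustration.

```python
# Illustrative sketch of the Dubins dynamics and the range/bearing and GPS
# measurement models of eqs. (4)-(5), in the noiseless form an EKF would
# linearize. The Euler discretization, argument names, and dt handling are
# assumptions for this sketch.
import numpy as np

def dubins_step(state, v, u, dt):
    """One Euler step of the Dubins dynamics for a single vehicle."""
    x, y, theta = state
    return np.array([x + dt * v * np.cos(theta),
                     y + dt * v * np.sin(theta),
                     theta + dt * u])

def range_bearing(state_i, state_j):
    """Range/bearing of vehicle j as measured by vehicle i, eq. (4)."""
    xi, yi, thi = state_i
    xj, yj, _ = state_j
    rng = np.hypot(xj - xi, yj - yi)
    brg = np.arctan2(yj - yi, xj - xi) - thi
    return np.array([rng, brg])

def gps_measurement(state_i):
    """Absolute (GPS-like) measurement of vehicle i's pose, eq. (5)."""
    return np.array(state_i, dtype=float)
```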

VIII.2 2-Agent Simulations

The complete measurement vector for the 2-robot case stacks the relative range/bearing measurements in both directions together with each robot's GPS-like measurements:

z = \left[\, r_{12},\; b_{12},\; z_{x,1},\; z_{y,1},\; z_{\theta,1},\; r_{21},\; b_{21},\; z_{x,2},\; z_{y,2},\; z_{\theta,2} \,\right]^{\top}    (6)
Fig. 8: Motions used in two robot simulations.

Figure 8 depicts the two robot simulations and we summarize the dynamics below. The vehicles have a fixed speed of  m/s and the control for bearing is  rad/s for robot and rad/s for robot . The simulation runs for seconds and the discretization timestep is seconds. The vehicle dynamics noise covariance matrix is

and the measurement error variances are: (range), (bearing), (GPS-position), and (GPS-orientation). The robots start at initial conditions of

with initial covariance estimates .

In this section, the simulations allow for the possibility of a communication channel in which measurements are occasionally dropped. For simplicity, a model parameterized by a communication success probability is used, in which each explicitly sent measurement is received with that probability.
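A sketch of this channel model, and of how a dropped explicit message ends up being treated by the receiver, is given below; the probability argument p_c, the string labels, and the function names are illustrative only.

```python
# Illustrative sketch of the lossy-channel model: each explicitly sent message
# is delivered with (assumed) probability p_c, else dropped. A drop is
# indistinguishable from intentional censoring, so the receiver falls back to
# an implicit (set-valued) update, which is the failure mode studied here.
import random

def transmit(message, p_c, rng=random):
    """Return the message with probability p_c, or None if it is dropped."""
    return message if rng.random() < p_c else None

def receiver_update_type(received):
    """Decide which update the receiver performs for this channel output."""
    return "implicit" if received is None else "explicit"
```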

Example 1: Effects of communication rate

Here we present simulations of estimation performance and consistency as a function of the event-triggering threshold δ, with the communication success probability held fixed.

From Figures 9 and 10, one can see that, for the 2-robot setup in which one robot tracks both its own state and the other robot's state, there is no apparent loss of consistency as δ increases (i.e., as the average explicit communication rate decreases): the mean squared error matches the predicted covariance, showing that even with the nonlinear dynamics and measurement models, our algorithm remains consistent.

Fig. 9: MSE for a robot tracking its own pose.
Fig. 10: MSE for a robot tracking the other robot's pose.

We also compare, in Figure 11, the MSE of our event-based algorithm against an event-based filter in which implicit measurements are not fused, for the two-robot setup. As δ increases and fewer measurements are sent, our algorithm steadily outperforms the version without the implicit information, showing that fusing the implicit information carried by unsent measurements improves performance. Furthermore, for small values of δ, both filters' performance is very close to that of an EKF that fuses all measurements explicitly. For larger values of δ, our algorithm incurs moderate increases in MSE in exchange for far fewer transmitted measurements. Figure 12 illustrates the nonlinear relationship between the average communication rate of robots 1 and 2 and δ.

Fig. 11: MSE of full event-based filter vs. filter that fuses only explicit data.
Fig. 12: Average communication rate between robots 1 and 2 as a function of δ.

Example 2: Effect of communication probability

In this example, we examine the effect of communication channels in which measurement drops are possible. Since this algorithm relies on the knowledge that a measurement not received was intentionally censored, we investigate its robustness in environments where this assumption is violated. Figure 13 depicts the performance of two robots when 50% of measurements are explicitly sent on average (for a fixed δ), but where messages are dropped according to a varying communication success probability. Figure 13(a) shows a robot's estimate of its own location; since it takes and fuses measurements of this quantity at every time step, the performance is unaffected by the dropped messages. However, Figures 13(b) and (c) show that each robot's estimate of the other robot's location does depend on the communication probability: as the communication probability decreases, there is an increasing gap between the predicted covariance and the true MSE.

Fig. 13: Results for 50% average explicit data throughput, with random message losses.
Fig. 14: Actual vs. estimated error covariance traces vs. communication probability for imperfect communications.
Fig. 15: Confusion ratio for imperfect communication scenarios.

Figure 14(a) depicts the final sum of MSEs for one robot over several communication probability values (30 sample runs each), showing that the error increases as more measurements are dropped; Figure 14(b) shows the trace of the estimated covariance matrix for the same trials. Note the difference in scale between the plots: for low communication probabilities, the filter is overconfident in its estimates, because it interprets dropped measurements as implicit measurements.

Figure 15 depicts the confusion ratio, i.e. the number of explicit measurements dropped divided by the total number of measurements (implicit and explicit). As δ increases, fewer measurements are sent explicitly and therefore fewer can be dropped; this reduces the number of dropped data elements that are misinterpreted as implicit measurements.
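For reference, a one-function sketch of this confusion-ratio computation (with assumed counter names) is:

```python
# Illustrative sketch of the confusion ratio defined above, with assumed
# counter names: dropped explicit measurements over all measurements.
def confusion_ratio(num_dropped_explicit, num_explicit, num_implicit):
    total = num_explicit + num_implicit
    return num_dropped_explicit / total if total else 0.0
```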

Figure 16 compares our event-based filter (Full EBF) with one that only fuses the explicit data sent by the Full EBF; the threshold δ is the same for both filters. Figure 16(a) shows the MSE of one robot's estimate of the other robot's states (30 sample runs) at different communication probabilities. Figure 16(b) shows the squared errors vs. time for a typical sample run. These results contrast with Figure 11, which highlights the improved performance of fusing the implicit information when no messages are dropped. Figures 16(a) and (b) both show a limitation of our event-based filter: when the communication probability is low, some messages are dropped but the filter interprets them as intentionally censored (implicit) measurements, i.e. they are misinterpreted, as seen in Figure 15. This leads to larger errors for the Full EBF and smaller errors when 'implicit information' (i.e. real implicit information or dropped measurements) is ignored (EBFni). Figure 16(a) also implies that, for low communication probabilities, increasing the innovation parameter δ improves performance. This can be explained by the fact that increasing δ decreases the chance that an explicit measurement is dropped and therefore misinterpreted as an implicit measurement, as seen above in Figure 15.

Fig. 16: Imperfect communication scenario results for full event-based filter vs. explicit-data-only filter.

Example 3: Effect of communication topology

In this section, the performance of the event-based filter is studied for a larger network of 6 robots. Since our ultimate goal is to assess the accuracy and consistency of the algorithm in scenarios that are as realistic as possible, additional elements such as multiple robots, different communication graphs, and non-ubiquitous GPS are added to the simulations. Figure 17 depicts the 6-robot simulations for three different graph topologies: star, bridge, and chain (line).

Fig. 17: Motions for 6-robot scenario with three different topologies (star, bridge, chain).

As before, the robots’ dynamics model is Dubins. The robots’ fixed velocities and turning rates are given by

held constant throughout the simulations. The simulation duration is seconds and the time step is seconds. The dynamics process covariance matrix is

The measurement error variances are  (range), (bearing), (GPS-position), and (GPS-orientation). The initial conditions are

The simulation results show that communication topology plays an important role in the performance of the algorithm. Two main aspects of the 6-robot simulations, observability and covariance intersection effects, are considered next.

Example 3.A: Observability issues

Depending on the communication topology and GPS availability to the robots, the state of the whole network in an absolute reference frame may be impossible to determine. The ability to pin down a subset of robots does not necessarily lead to the ability to do so for all team members. In our algorithm, measurements are shared between communicating pairs of robots, but they are not passed on to additional members.

Observability of the full network state is heavily dependent on the measurement sharing topology. For instance, in the bridge topology the two subsets of robots have full communication internally (every robot talks with the rest of the robots in its subset), but there is only one link between the two groups; if that link fails or is lost, the two groups become blind to one another. A more extreme case is the chain topology, where the network diameter is largest. For this graph, if information needs to travel from the robot at one end to the robot at the other end, all robots in between have to successfully receive and transmit that information at some point, which takes time and makes the system more vulnerable to data drops. An example of such valuable information that would ideally be shared amongst all robots is GPS localization, since the problem of interest is localization in a global reference frame. On the other hand, the star graph has a smaller network diameter, which benefits the system as a whole but is riskier for robots individually, since if one of the links breaks, a robot becomes isolated. Complexity increases in situations where GPS is limited or restricted to certain robots; overall, the particular application dictates which topology should be used, and different communication graphs work better in different scenarios.

There are different ways to cope with observability issues. Well-connected networks present fewer such problems because more links exist between robots, at the expense of increased communication costs and, possibly, higher bandwidth and more expensive hardware. Another option is to provide GPS or other forms of absolute positioning to more robots, or to robots in strategic positions in the network. As an example, consider the chain graph: even if the robot with GPS access changes, most robots remain out of reach and cannot benefit from that information. For the chain graph, Figure 18(a) shows a case where only one of the robots (robot 4, at the edge of the graph) receives GPS measurements. Here, components of the estimated state drift, and the filter is unable to correct them. In (b) this behavior is corrected by adding additional robots (robots 1 and 6) that receive GPS measurements as well.

Fig. 18: Robot 3’s estimates in chain graph: (a) one GPS robot; (b) two GPS robots.

However, in some cases, such as UUVs, this may be impractical, since vehicles have to surface to get absolute measurements, incurring additional energy consumption and losing valuable mission time. In more extreme scenarios this practice is not even possible, depending on mission requirements, GPS signal availability, or the hardware used. Another method, studied here, is covariance intersection (CI), which allows robot pairs to fuse their state estimates and covariances with the goal of reducing uncertainty and obtaining an estimate that outperforms either of the two inputs. In this way, accurate state estimates from a robot that benefits from absolute measurements flow through the network.

Example 3.B: Effect of covariance intersection

Covariance intersection is used as a way for robots to synchronize their estimates and reduce their covariance matrices. It is worth noting that CI involves higher communication and computational costs than a traditional Kalman measurement update, so it is not used as the main method for data sharing and fusion, but rather to overcome the problems associated with less observable networks; it is triggered when the trace of the covariance matrix for a particular robot exceeds a certain threshold. The value of this threshold is a design choice in which different metrics and problem requirements must be considered.
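A minimal sketch of this trigger is shown below; the callable ci_fuse stands in for a CI routine such as the one sketched after Algorithm 4, and the weighting matrix W and threshold value are design parameters assumed for illustration.

```python
# Illustrative sketch of the CI trigger: CI with a neighbor is performed only
# when the weighted trace of the local covariance exceeds the threshold.
# ci_fuse is any CI routine (e.g. the sketch after Algorithm 4); W and the
# threshold are assumed design parameters.
import numpy as np

def maybe_trigger_ci(x_i, P_i, x_j, P_j, threshold, ci_fuse, W=None):
    W = np.eye(P_i.shape[0]) if W is None else W
    if np.trace(W @ P_i) > threshold:
        return ci_fuse(x_i, P_i, x_j, P_j, W)
    return x_i, P_i, None   # below threshold: no CI this step
```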

Figure 19 shows the effect that performing CI has on the diagonal elements of the covariance matrix of an arbitrary robot. The sinusoidal shape of the curves is caused by the circular motion of the robots. This figure depicts a simulation in which the robots communicate with one another over a chain-shaped (line) graph, as shown in Figure 17. By introducing additional correlations between robots' states, CI generally fills the corresponding off-diagonal elements of the covariance matrix with non-zero values. CI events manifest as sudden jumps in the covariance values, visible at several times in the figure. These correlations are passed from robot to robot every time CI is performed. However, the states of two robots separated by several links in the communication graph do not become correlated instantaneously; it takes intermediate robots performing CI successively over several time steps. Delays in the propagation of covariance intersection correlations are hard to see in these simulations, since the network is relatively small.

Another interesting aspect of these simulations is that the covariance matrix hits an upper bound even when there are very sparse or no measurements containing information about specific robots. Each time covariance intersection is performed, new correlations between states are added, which is reflected in this bound taking a different value. Figure 19 shows the variances of the states of all robots as estimated by robot 5. There is a direct correspondence between the number of links separating robot 5 from another robot and how large the associated variances are; for example, robot 6 is the farthest from robot 5 (in terms of links or connections) and has the largest variance, whereas robot 2 is measured by robot 5 directly, so its variance is small.

Covariance intersection is not needed in networks where all robots' states are measured by, or shared among, one another, since a properly tuned filter will eventually converge. Figure 20, in which robots communicate over a star-shaped graph, shows that because the only robot receiving GPS measurements (robot 1) acts as a hub and is able to share them with the rest of the network, all other states can be uniquely estimated by virtue of pinning down robot 1. On the other hand, if GPS measurements were provided only to one of the leaf-node robots (i.e., any robot except robot 1), it would become necessary to perform CI to prevent the estimates from drifting away from the true states.

One important conclusion for multi-robot scenarios is that the number of robots with access to GPS measurements, as well as their positions in the network and their ability to communicate with other robots, affects the overall performance of the filter. Additionally, the results make clear that CI plays a dominant role in cooperative localization: correlations between robot states introduced early in the simulations have a long-lasting effect that allows a reduction in communication costs without strongly penalizing filter performance. This is particularly useful in poorly connected graphs, although it comes at a price: performing CI more often brings higher communication costs, which is counterproductive, so a suitable trade-off between CI frequency and communication cost must be found. This is a problem of its own that is worth studying in future work.

Fig. 19: Typical variances estimated by robot 5 in chain graph; sudden spikes are CI updates.
Fig. 20: Typical variances estimated by robot 5 in star graph; no CI updates occur.

IX Conclusions

This paper presented a novel decentralized Cooperative Localization algorithm for a team of mobile robots to estimate the state of the network using sparse event-triggered measurement communication. Since all robots know the event-triggering condition, censored measurements can be interpreted and fused as set-valued information into state estimates at low communication cost. The algorithm also occasionally employs Covariance Intersection estimate fusion to keep the weighted trace of the error covariance bounded when robots cannot directly measure all other robots. Since it has higher communication and computation cost than sending a combination of explicit/implicit measurements, the Covariance Intersection process is only triggered sporadically, as required by the performance goal. Simulations showed the effectiveness of the mixed implicit/explicit data fusion strategy for linear and nonlinear dynamics and measurement models using extended Kalman filters. Simulations also showed that mixed implicit/explicit measurement fusion can provide performance that is nearly equivalent to total explicit data communication and fusion, but requires far less data to be exchanged between robots (under ideal communication conditions). This work also studied the effect that measurement erasure over lossy communication channels has on consistency and performance, given the algorithm's assumption of lossless communication. Results revealed that estimation consistency in lossy communication channels improves as the innovation threshold parameter is increased, at the cost of elevated mean squared error. The simulations also explored several communication topologies and illustrated the benefit of sporadic Covariance Intersection when absolute GPS measurements are not available to all robots in certain topologies. These simulation results focused on relatively small teams of robots, but nevertheless provide useful benchmarks for typical applications of Cooperative Localization with intelligent autonomous vehicle systems (as opposed to swarms of robots with highly limited sensing and computing power).

Open directions for follow-on work include determining if Bayesian channel filters can be used when the communication topology is fixed to further improve performance SG-HRDW:94 (); extending the algorithm to correlated non-scalar measurements; investigating Covariance Intersection schemes that allow the fusion of only a subset of the network’s states  SJJ-JKU:97 (); Ahmed-FUSION-2016 () so each robot may only estimate its own and its neighbors’ locations; and formally analyzing the behavior of the thresholding dynamics. Experimental comparisons to other decentralized Cooperative Localization strategies should also be performed to formally assess the relative strengths and weaknesses for different application scenarios.

The proposed algorithm could also be extended to handle communication dropouts that would otherwise be incorrectly interpreted as implicit information, since our simulations have confirmed that performance degrades when this assumption is violated under heavy communication losses. For instance, the fact that it is statistically highly unlikely for an entire measurement vector to be censored via event-triggered estimation could be used by robots to formally detect when an explicit measurement has been completely dropped by the sender. One could also assume that the robots' controls are unknown a priori but are likewise sent using an event-triggering strategy; it would then be possible to investigate how the interplay of both mechanisms affects performance.

Another relevant direction is formal analysis of event-triggered cooperative estimator performance in the presence of measurement outliers and corrupted measurements. These types of measurements were ignored here, but can still arise in many practical scenarios. Outliers correspond to measurement data whose random error statistics are not accurately described by the estimator's underlying data likelihood model. These can often be attributed to artifacts of noise model mismatch, e.g. if random sensor errors are modeled as Gaussian but truly follow a non-Gaussian distribution such as a multi-modal/heavy-tailed Gaussian mixture model. Corrupt measurements are those which arise from non-ideal (though not necessarily unpredictable) sensor behavior, e.g. signal saturation or cut-off above some known threshold. Outliers and corrupt data could cause explicit measurement updates to occur more often than they should in an event-triggered estimator (such that the filter will be 'surprised' more often than its own model predicts), and as a result communication savings would be compromised. This could lead to large errors in the a posteriori state estimation statistics following either implicit or explicit measurement updates, since the Kalman gain will not be based on the correct noise statistics. There are at least two different ways to handle such data: via deterministic heuristic gating methods (hard classifications of measurement innovations); or via formal probabilistic data fusion, which uses more accurate models of all potential sensor error sources to automatically weight sensor information in Bayesian state estimator updates Blackman-AESMag-2004 (). Since the effectiveness of these methods in an event-triggered setting is highly scenario-dependent (and since outliers/corrupt sensor data are not inherent to the basic decentralized cooperative localization problem), a detailed description and analysis is left for future work.

Formal analysis of stability and convergence characteristics for the decentralized event-triggered cooperative localization algorithm can also be further examined in the case of linear vehicle dynamics and sensor models. To our knowledge, no formal stability results have yet been established for decentralized event-triggered state estimators of the type considered here. However, the work of Khojasteh-arxiv-2018 () and Trimpe-CDC-2014 () provides promising results for closely related problems that could be adapted and applied to linear systems. Stability and convergence guarantees are unlikely to exist for nonlinear systems via the EKF, since the EKF relies heavily on the validity of linearization assumptions (which will vary by application and operating conditions). The EKF has long been established as a core algorithm in the robot state estimation and navigation literature despite this limitation. Yet, it should be understood that, like all EKFs, the event-triggered EKF developed here for cooperative localization requires proper initialization and tuning to ensure good performance and stable, non-diverging behavior, i.e. such that the state estimates about which dynamics and measurements are repeatedly re-linearized remain close to the actual system state.

Acknowledgments

David Iglesias was supported by a fellowship from the Balsells Fellowship Program.

References

  • (1) Thrun, S., Burgard, W., and Fox, D., Probabilistic robotics, MIT press, 2005.
  • (2) Ouimet, M., Ahmed, N., and Martínez, S., “Event-based cooperative localization using implicit and explicit measurements,” IEEE Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems, IEEE, 2015, pp. 246–251.
  • (3) Leonard, J. and Durrant-Whyte, H. F., “Mobile robot localization by tracking geometric beacons,” IEEE Trans. on Robotics, Vol. 7, June 1991, pp. 376–382.
  • (4) Dissanayake, G., Newman, P., Durrant-Whyte, H. F., Clark, S., and Csorba, M., “A solution to the simultaneous localization and map building (SLAM) problem,” IEEE Trans. on Robotics, Vol. 17, No. 3, 2001, pp. 229–241.
  • (5) Holmes, S., Klein, G., and Murray, D. W., “A square root unscented Kalman filter for visual monoSLAM,” IEEE Int. Conf. on Robotics and Automation, IEEE, 2008, pp. 3710–3716.
  • (6) Thrun, S., Fox, D., Burgard, W., and Dellaert, F., “Robust Monte Carlo localization for mobile robots,” Artificial Intelligence, Vol. 128, No. 1-2, 2001, pp. 99–141.
  • (7) Montemerlo, M. and Thrun, S., “Simultaneous localization and mapping with unknown data association using FastSLAM,” IEEE Int. Conf. on Robotics and Automation, Vol. 2, IEEE, 2003, pp. 1985–1991.
  • (8) Grisetti, G., Tipaldi, G.-D., Stachniss, C., Burgard, W., and Nardi, D., “Fast and accurate SLAM with Rao–Blackwellized particle filters,” Robotics and Autonomous Systems, Vol. 55, No. 1, 2007, pp. 30–38.
  • (9) Kwok, N. M., Dissanayake, G., and Ha, Q. P., “Bearing-only SLAM using a SPRT based gaussian sum filter,” IEEE Int. Conf. on Robotics and Automation, IEEE, 2005, pp. 1109–1114.
  • (10) Sünderhauf, N. and Protzel, P., “Switchable constraints for robust pose graph SLAM,” IEEE/RSJ Int. Conf. on Intelligent Robots & Systems, IEEE, 2012, pp. 1879–1884.
  • (11) Mur-Artal, R., Martinez-Montiel, J., and Tardos, J. D., “ORB-SLAM: a versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, Vol. 31, No. 5, 2015, pp. 1147–1163.
  • (12) Morrison, J. G., Galvez-Lopez, D., and Sibley, G., “Scalable Multirobot Localization and Mapping with Relative Maps: Introducing MOARSLAM,” IEEE Control Systems Magazine, Vol. 36, No. 2, 2016, pp. 75–85.
  • (13) Jones, B., Campbell, M., and Tong, L., “Maximum likelihood combining of stochastic maps,” Allerton Conf. on Communications, Control and Computing, IEEE, 2011, pp. 1238–1241.
  • (14) Cunningham, A., Indelman, V., and Dellaert, F., “DDF-SAM 2.0: Consistent distributed smoothing and mapping,” IEEE Int. Conf. on Robotics and Automation, 2013, pp. 5220–5227.
  • (15) Indelman, V., Nelson, E., Dong, J., Michael, N., and Dellaert, F., “Incremental distributed inference from arbitrary poses and unknown data association: Using collaborating robots to establish a common reference,” IEEE Control Systems Magazine, Vol. 36, No. 2, 2016, pp. 41–74.
  • (16) Roumeliotis, S. I. and Bekey, G. A., “Distributed multirobot localization,” IEEE Transactions on Robotics, Vol. 18, No. 5, 2002, pp. 781–795.
  • (17) Campbell, M. E. and Ahmed, N. R., “Distributed data fusion: Neighbors, rumors, and the art of collective knowledge,” IEEE Control Systems, Vol. 36, No. 4, 2016, pp. 83–109.
  • (18) Julier, S. and Uhlmann, J., “Using covariance intersection for SLAM,” Robotics and Autonomous Systems, Vol. 55, No. 1, 2007, pp. 3–20.
  • (19) Ferguson, P. and How, J., “Decentralized estimation algorithms for formation flying spacecraft,” AIAA Guidance, Navigation and Control Conference, Vol. 2003, 2003.
  • (20) Smith, R. S. and Hadaegh, F. Y., “Distributed estimation, communication and control for deep space formations,” IET Control Theory & Applications, Vol. 1, No. 2, 2007, pp. 445–451.
  • (21) McLoughlin, T. H. and Campbell, M., “Scalable Sensing, Estimation, and Control Architecture for Large Spacecraft Formations,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 30, No. 2, 2007, pp. 289–300.
  • (22) Arambel, P. O., Rago, C., and Mehra, R. K., “Covariance intersection algorithm for distributed spacecraft state estimation,” American Control Conference, 2001, pp. 4398–4403.
  • (23) Leung, K. Y. K., Barfoot, T. D., and Liu, H. H. T., “Decentralized Localization of Sparsely-Communicating Robot Networks: A Centralized-Equivalent Approach,” IEEE Transactions on Robotics, Vol. 26, No. 1, 2010, pp. 62–77.
  • (24) Karam, N., Chausse, F., Aufrere, R., and Chapuis, R., “Localization of a group of communicating vehicles by state exchange,” IEEE/RSJ Int. Conf. on Intelligent Robots & Systems, 2006, pp. 519–524.
  • (25) Trawny, N., Roumeliotis, S. I., and Giannakis, G. B., “Cooperative multi-robot localization under communication constraints,” IEEE Int. Conf. on Robotics and Automation, 2009, pp. 4394–4400.
  • (26) Webster, S. E., Walls, J. M., Whitcomb, L. L., and Eustice, R. M., “Decentralized Extended Information Filter for Single-Beacon Cooperative Acoustic Navigation: Theory and Experiments,” IEEE Transactions on Robotics, Vol. 29, No. 4, 2013, pp. 957–974.
  • (27) Kia, S. S., Rounds, S. F., and Martinez, S., “A centralized-equivalent decentralized implementation of Extended Kalman Filters for cooperative localization,” IEEE/RSJ Int. Conf. on Intelligent Robots & Systems, Sept. 2014, pp. 3761–3766.
  • (28) Ihler, A. T., Fisher, J. W., Moses, R. L., and Willsky, A. S., “Nonparametric belief propagation for self-localization of sensor networks,” IEEE Journal of Selected Areas in Communications, Vol. 23, 2005, pp. 809–819.
  • (29) Bahr, A., Walter, M. R., and Leonard, J. J., “Consistent Cooperative Localization,” IEEE Int. Conf. on Robotics and Automation, Kobe, Japan, May 2009, pp. 8908–8913.
  • (30) Prorok, A., Bahr, A., and Martinoli, A., “Low-cost collaborative localization for large-scale multi-robot systems,” IEEE Int. Conf. on Robotics and Automation, Minnesota, MN, USA, 2012, pp. 4236–4241.
  • (31) Knuth, J. and Barooah, P., “Distributed collaborative localization of multiple vehicles from relative pose measurements,” Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on, IEEE, 2009, pp. 314–321.
  • (32) Luft, L., Schubert, T., Roumeliotis, S. I., and Burgard, W., “Recursive Decentralized Collaborative Localization for Sparsely Communicating Robots,” Robotics: Science and Systems, Ann Arbor, Michigan, June 2016.
  • (33) Curn, J., Marinescu, D., O’Hara, N., and Cahill, V., “Data incest in cooperative localisation with the Common Past-Invariant Ensemble Kalman filter,” IEEE Int. Conf. on Information Fusion, Istanbul, Turkey, 2013, pp. 68–76.
  • (34) Li, H. and Nashashibi, F., “Cooperative multi-vehicle localization using split covariance intersection filter,” IEEE Intelligent Transportation Systems Magazine, Vol. 5, No. 2, 2013, pp. 33–44.
  • (35) Carrillo-Arce, L. C., Nerurkar, E. D., Gordillo, J. L., and Roumeliotis, S. I., “Decentralized multi-robot cooperative localization using covariance intersection,” IEEE/RSJ Int. Conf. on Intelligent Robots & Systems, Nov. 2013.
  • (36) Mokhtarzadeh, H. and Gebre-Egziabher, D., “Cooperative inertial navigation,” Navigation: Journal of the Institute of Navigation, Vol. 61, No. 2, 2014.
  • (37) Kurazume, R., Hirose, S., Nagata, S., and Sashida, N., “Study on cooperative positioning system (basic principle and measurement experiment),” IEEE Int. Conf. on Robotics and Automation, Vol. 2, IEEE, 1996, pp. 1421–1426.
  • (38) Kurazume, R. and Hirose, S., “Study on cooperative positioning system: optimum moving strategies for CPS-III,” IEEE Int. Conf. on Robotics and Automation, Vol. 4, IEEE, 1998, pp. 2896–2903.
  • (39) Nerurkar, E. D., Roumeliotis, S. I., and Martinelli, A., “Distributed Maximum A Posteriori Estimation for Multi-robot Cooperative Localization,” IEEE Int. Conf. on Robotics and Automation, Kobe, Japan, May 2009, pp. 1402–1409.
  • (40) Paull, L., Huang, G., Seto, M., and Leonard, J. J., “Communication-constrained multi-AUV cooperative SLAM,” Proceedings - IEEE International Conference on Robotics and Automation, Vol. 2015-June, No. June, 2015, pp. 509–516.
  • (41) Wymeersch, H., Lien, J., and Win, M. Z., “Cooperative localization in wireless networks,” Proc. IEEE, Vol. 97, No. 2, 2009, pp. 427–450.
  • (42) Liu, Q., Wang, Z., He, X., and Zhou, D., “A survey of event-based strategies on control and estimation,” Systems Science & Control Engineering, Vol. 2, No. 1, 2014, pp. 90–97.
  • (43) Trimpe, S. and Campi, M. C., “On the choice of the event trigger in event-based estimation,” Event-based Control, Communication, and Signal Processing (EBCCSP), 2015 International Conference on, IEEE, 2015, pp. 1–8.
  • (44) Yan, L., Zhang, X., Zhang, Z., and Yang, Y., “Distributed state estimation in sensor networks with event-triggered communication,” Nonlinear Dynamics, Vol. 76, No. 1, 2014, pp. 169–181.
  • (45) Trimpe, S., “Stability analysis of distributed event-based state estimation,” Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, IEEE, 2014, pp. 2013–2019.
  • (46) Sijs, J. and Lazar, M., “On event-based state estimation,” Hybrid systems: computation and control, 2009, pp. 336–350.
  • (47) Shi, D., Chen, T., and Shi, L., “An event-triggered approach to state estimation with multiple point- and set-valued measurements,” Automatica, Vol. 50, No. 6, 2014, pp. 1641–1648.
  • (48) Liu, Q., Wang, Z., He, X., and Zhou, D., “Event-based recursive distributed filtering over wireless sensor networks,” IEEE Transactions on Automatic Control, Vol. 60, No. 9, 2015, pp. 2470–2475.
  • (49) Trimpe, S. and D’Andrea, R., “An experimental demonstration of a distributed and event-based state estimation algorithm,” IFAC Proceedings Volumes, Vol. 44, No. 1, 2011, pp. 8811–8818.
  • (50) Trimpe, S. and D’Andrea, R., “Event-based state estimation with variance-based triggering,” IEEE Transactions on Automatic Control, Vol. 59, No. 12, 2014, pp. 3266–3281.
  • (51) Muehlebach, M. and Trimpe, S., “LMI-based synthesis for distributed event-based state estimation,” American Control Conference (ACC), 2015, IEEE, 2015, pp. 4060–4067.
  • (52) Battistelli, G., Chisci, L., and Selvi, D., “Energy-efficient distributed state estimation via event-triggered consensus on exponential families,” American Control Conference (ACC), 2016, IEEE, 2016, pp. 6387–6392.
  • (53) Li, W., Jia, Y., and Du, J., “Event-triggered Kalman consensus filter over sensor networks,” IET Control Theory & Applications, Vol. 10, No. 1, 2016, pp. 103–110.
  • (54) Chong, C.-Y., Chang, K.-C., and Mori, S., “Comparison of optimal distributed estimation and consensus filtering,” Information Fusion (FUSION), 2016 19th International Conference on, IEEE, 2016, pp. 1034–1041.
  • (55) Johnson, N. L., Kotz, S., and Balakrishnan, N., Continuous univariate distributions. Volume 1, Vol. 2, Wiley, 1995.
  • (56) Bar-Shalom, Y., Li, X. R., and Kirubarajan, T., Estimation with Applications to Tracking and Navigation, John Wiley & Sons, 2001.
  • (57) Grime, S. and Durrant-Whyte, H., “Data fusion in decentralized sensor networks,” Control engineering practice, Vol. 2, No. 5, 1994, pp. 849–863.
  • (58) Heiss, F. and Winschel, V., “Likelihood approximation by numerical integration on sparse grids,” Journal of Econometrics, Vol. 144, No. 1, 2008, pp. 62–80.
  • (59) Jia, B., Xin, M., and Cheng, Y., “Sparse Gauss-Hermite quadrature filter with application to spacecraft attitude estimation,” Journal of Guidance Control and Dynamics, Vol. 34, No. 2, 2011, pp. 367.
  • (60) Vanderplaats, G. N., Numerical optimization techniques for engineering design: with applications, Vol. 1, McGraw-Hill New York, 1984.
  • (61) Julier, S. J. and Uhlmann, J. K., “Simultaneous localisation and map building using split covariance intersection,” IEEE/RSJ Int. Conf. on Intelligent Robots & Systems, Vol. 3, 2001, pp. 1257–1262.
  • (62) Ahmed, N. R., Whitacre, W. M., Moon, S., and Frew, E. W., “Factorized Covariance Intersection for Scalable Partial State Decentralized Data Fusion,” 2016 International Conference on Information Fusion (FUSION 2016), No. July, 2016.
  • (63) Blackman, S. S., “Multiple hypothesis tracking for multiple target tracking,” Aerospace and Electronic Systems Magazine, IEEE, Vol. 19, No. 1, 2004, pp. 5–18.
  • (64) Khojasteh, M. J., Tallapragada, P., Cortés, J., and Franceschetti, M., “The value of timing information in event-triggered control,” arXiv preprint arXiv:1609.09594, 2016.