Distributed Filtering for Nonlinear Multi-Agent Systems with Biased Observations


This paper considers the distributed filtering problem for a class of discrete-time stochastic nonlinear multi-agent systems with biased observations over switching communication topologies. We first build a general model for such systems by considering distributed output feedback control and state-correlated observation bias. Then, we propose a three-staged distributed Kalman filter with guaranteed consistency, which means that upper bounds of the estimation error covariances can be calculated online by each agent. To alleviate the effect of biased observations, an event-triggered update scheme is proposed and proven to yield a tighter bound of the error covariance than the typical time-driven update scheme. The proposed scheme also performs better in energy-constrained situations by abandoning redundant observations. Moreover, we rigorously prove the stability of the estimation error covariances for the two proposed distributed filters under the mild conditions of collective observability of the multi-agent system and joint connectivity of the switching topologies. Finally, we carry out several numerical simulations to validate the theoretical results developed in this paper.

Xingkang He, Wenchao Xue (corresponding author), Xiaocheng Zhang, Haitao Fang

LSC, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China 

Department of Automatic Control, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden 

Key words:  Multi-agent system; Distributed filtering; Kalman filter; Nonlinear system; Biased observation


Footnote: The material in this paper was not presented at any conference.

1 Introduction

In recent years, multi-agent systems have been broadly applied to sensor networks [1], environment sensing [2], target tracking [3, 4], smart grids [5], etc. Since the state estimation problems of multi-agent systems are usually modeled as distributed filtering problems, more and more researchers and engineers are paying attention to the methods and theories of distributed filtering.

In the existing literature on distributed filtering for linear multi-agent systems, many effective approaches and analysis tools have been provided. For linear stochastic systems, the optimal centralized Kalman filter can provide the estimation error covariances in a recursive manner. However, in distributed Kalman filters [6, 7, 8, 9, 10, 11, 12] for multi-agent systems, the covariances cannot be obtained by each agent due to the unknown correlation between the estimates of different agents. To evaluate the estimation precision in terms of error covariance, we will investigate the consistency of a filter, which means that an upper bound of the estimation error covariance can be calculated online. Next, the main existing results on distributed filters are reviewed from two aspects, i.e., consistency and stability.

The estimation consistency plays an important role in real-time precision evaluation and covariance-based information fusion, which are essential to distributed state estimation of multi-agent systems. For linear time-invariant systems, without considering consistency, [13, 14] proposed distributed filters with constant filtering gains, which, however, restricted the admissible instability of the system dynamics. [8, 7, 15] studied distributed Kalman filters (DKFs) based on consensus or diffusion strategies, where the state estimates of agents were simply fused with scalar weights; thus, the information from neighboring agents was not effectively utilized. Many other filters focused on specific problems without maintaining consistency. The authors of [16] studied the stochastic link activation problem of distributed filtering under sensor power constraints. Robust distributed filtering problems were well studied in [17, 18]. Recently, for linear multi-agent systems, the algorithms in [9, 10, 11] possessed consistency, which made a sequence of upper bounds of the estimation error covariances available to each agent.

Stability is one of the fundamental properties of filtering algorithms. For unstable linear multi-agent systems with very limited locally observable information, finding mild conditions that guarantee the stability of distributed filtering algorithms is a challenging problem. Among the results on stability analysis, [8, 19, 1] assumed local observability of the linear multi-agent system, which confines the application scope of distributed filters. On the other hand, observation errors, including both observation bias and stochastic noise, can directly influence the consistency as well as the stability of filters. Compared with stochastic observation noise, observation bias may lead to a larger loss in estimation performance if not well handled, because it is difficult for each agent to fuse biased estimates from its neighbors. Hence, it is necessary to provide an information metric for the quality of observations, so as to judge whether an observation corrupted by state-correlated bias should be utilized at the update stage or simply abandoned. Moreover, existing results usually required independence between the system state and the random bias [20, 21], which is difficult to satisfy for feedback control systems with colored random bias processes. Therefore, the research on consistency and stability of distributed Kalman filters for nonlinear systems remains open, especially under collective observability, switching topologies, output feedback control inputs and biased observations, which are much more reasonable settings for engineering systems.

On the other hand, even within traditional centralized frameworks, designing filters with stability and consistency for nonlinear systems is a challenging problem [22, 23, 24, 6, 25]. For nonlinear systems with known models, [6, 25] studied linearized Kalman filter based algorithms. However, to guarantee the stability of these nonlinear filters, they required the initial estimation error and the noises to be sufficiently small, which is difficult to meet in practical applications. Moreover, due to outer disturbances or unmodeled dynamics, many practical systems contain uncertain nonlinearities besides the nominally known nonlinearities. To deal with the unknown nonlinearities, some robust estimation methods, such as $H_{\infty}$ filters and set-valued filters, have been studied [26, 27, 28], but their estimation performance tends to be quite conservative in practical applications. Considering the instability of the linearization methods and the conservativeness of the robust filters, [29] proposed a novel extended state based Kalman filter (ESKF) for a class of nonlinear uncertain systems. By employing the scheme of [29] to handle the nonlinear uncertainty, this paper constructs distributed filtering algorithms for nonlinear uncertain multi-agent systems.

In this paper, we consider the distributed filtering problem for a class of discrete-time stochastic nonlinear multi-agent systems with biased observations over switching communication topologies. The main contributions of this paper are threefold.

  1. A three-staged time-driven distributed Kalman filter with guaranteed consistency is proposed for a class of nonlinear uncertain multi-agent systems. It is shown that the proposed filter enables a sequence of upper bounds of the error covariances to be calculated online by each agent.

  2. Based on an information metric for local observation statistics, we present an event-triggered observation update scheme. Moreover, based on this scheme, we propose an event-triggered distributed Kalman filter, which is shown to have a tighter bound of error covariance than that based on the typical time-driven update scheme.

  3. We rigorously prove the stability of the estimation error covariances for the two proposed distributed filters. More importantly, our results do not require noise independence among agents or uniform nonsingularity of the transition matrices, assumptions that are usual in existing results but hard to satisfy in practice. Besides, the results suit a class of distributed output feedback control systems, such as the coupled tanks system [30].

The remainder of the paper is organized as follows: Section 2 is on the graph preliminaries and some useful definitions. Section 3 is on the problem formulation. Section 4 analyzes the distributed filter with time-driven update scheme. Section 5 studies the distributed filter with event-triggered update scheme. Section 6 shows the numerical simulations. The conclusion of this paper is given in Section 7.

1.1 Notations

The notations on mathematics and graphs used in this paper are standard and expressed in Table 1.

X^T          transpose of the matrix X
I_n          n-dimensional identity matrix
E[x]         mathematical expectation of x
mod(a,b)     modulo operation of a by b
diag{...}    block elements are arranged in diagonals
tr(A)        trace of the matrix A
i.i.d.       independent and identically distributed
ℕ            set of positive natural numbers
ℝ^n          set of n-dimensional real vectors
ℤ            set of integers
A ∪ B        union of the sets A and B
N            agent number over a multi-agent system
G_s          the sth weighted digraph
V            agent set over a multi-agent system
E_s          edge set of the digraph G_s
A_s          weighted adjacency matrix of the digraph G_s
N_{i,k}      the neighbor set of agent i at time k
Table 1: Notations

2 Graph Preliminaries and Useful Definitions

In this section, we provide some graph preliminaries and useful definitions serving the subsequent sections of this paper. The main notations are provided in Table 1.

We model the communication topologies of a multi-agent system as switching weighted digraphs, each consisting of a node set, an edge set and a weighted adjacency matrix. In a weighted adjacency matrix, all elements are nonnegative, each row sums to one (row stochastic), and the diagonal elements are all positive. If an element of the adjacency matrix is positive, there is a link, which means the corresponding node can directly receive the information of the other node through the communication channel. In this situation, the latter node is called a neighbor of the former, and all the neighbors of a node, including the node itself, are represented by its neighbor set. For a given positive integer, the union of a sequence of digraphs is the digraph whose edge set is the union of their edge sets. A digraph is called strongly connected if, for any pair of nodes, there exists a directed path from one to the other consisting of edges of the digraph. A sequence of digraphs is called jointly strongly connected if its union is strongly connected.
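As a concrete illustration, joint strong connectivity can be tested by forming the union digraph and checking its strong connectivity via a transitive closure. The following Python sketch uses hypothetical adjacency matrices, with the assumed convention that `adj[i, j] > 0` means agent i receives from agent j:

```python
import numpy as np

def strongly_connected(adj):
    """Check strong connectivity of a digraph given its weighted adjacency matrix
    (convention assumed here: adj[i, j] > 0 means node i receives from node j)."""
    n = adj.shape[0]
    reach = ((adj > 0) | np.eye(n, dtype=bool)).astype(int)
    # Transitive closure by repeated squaring: path lengths double each pass,
    # so n passes cover every possible path length.
    for _ in range(n):
        reach = (reach @ reach > 0).astype(int)
    return bool((reach > 0).all())

def jointly_strongly_connected(adjs):
    """Union digraph over an interval, as in the joint connectivity condition:
    an edge is present if it appears in any of the digraphs."""
    union = sum((a > 0).astype(int) for a in adjs)
    return strongly_connected((union > 0).astype(float))
```

For example, a 3-node ring split across two graphs (each disconnected alone) is jointly strongly connected once their union closes the cycle.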

In the Kalman filter for linear time-varying systems with exactly known noise statistics, the estimation error covariances can be recursively calculated. However, for distributed Kalman filters [10, 9], due to the unknown correlation between the estimates of different agents, the error covariances are usually inaccessible. In order to evaluate the estimation performance, the following definition of consistency is introduced.

Definition 2.1

([31]) Suppose $x$ is a random vector and $\hat{x}$ is an estimate of $x$. Then the pair $(\hat{x}, P)$ is said to be consistent if $\mathbb{E}\{(x-\hat{x})(x-\hat{x})^T\} \preceq P$.
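The positive semidefinite ordering in this definition can be checked empirically: a pair is consistent when the gap between the claimed bound and the sample error covariance has no negative eigenvalues. A minimal Python sketch (the function name and tolerance are illustrative, not from the paper):

```python
import numpy as np

def is_consistent(err_samples, P, tol=1e-6):
    """Empirical check of consistency: E[(x - xhat)(x - xhat)^T] <= P in the
    positive semidefinite order, tested on Monte Carlo error samples."""
    E = np.asarray(err_samples)     # shape (num_samples, n): rows are x - xhat
    cov = E.T @ E / E.shape[0]      # sample second-moment matrix of the error
    gap = P - cov                   # consistency requires gap to be PSD
    return bool(np.linalg.eigvalsh(gap).min() >= -tol)
```

For instance, errors with componentwise variances near (1, 0.5) are consistent with the bound 2I but not with 0.1I.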

To study the estimation stability of filtering algorithms, the following definition is provided.

Definition 2.2

Let $e_{i,k}$ be the state estimation error of agent $i$ at time $k$; then the sequence of estimation error covariances is said to be stable if $\sup_{k \ge 0} \mathrm{tr}\left(\mathbb{E}\{e_{i,k} e_{i,k}^T\}\right) < \infty$.

Denote $(\Omega, \mathcal{F}, P)$ as the basic probability space, and let $\{\mathcal{F}_k\}$ stand for a filtration of the $\sigma$-algebra $\mathcal{F}$. A discrete-time sequence $\{x_k\}$ is said to be adapted to $\{\mathcal{F}_k\}$ if $x_k$ is measurable with respect to $\mathcal{F}_k$ for each $k$. The definitions of 'filtration', '$\sigma$-algebra' and 'measurable' are given in [32].

Definition 2.3

A discrete-time adapted sequence $\{x_k, \mathcal{F}_k\}$ is called a martingale difference sequence if $\mathbb{E}\|x_k\| < \infty$ and $\mathbb{E}[x_k \mid \mathcal{F}_{k-1}] = 0$, almost surely.

Since this paper studies a class of time-varying multi-agent systems, a useful definition is provided. Given a positive integer, a matrix sequence and a positive scalar, define the time sequence as

Definition 2.4

Given , if there exists an integer and a scalar , such that for the defined time sequence in (2),

then the sequence is called an L-step supporting sequence (L-SS).

Remark 2.1

The definition of L-SS is introduced to study the nonsingularity of the time-varying transition matrices given in the next section. In many existing results, the transition matrix is usually assumed to be nonsingular at every time, an assumption that is removed in our paper.

3 Model Description and Problem setup

Consider the following model for a class of stochastic multi-agent systems with nonlinear uncertain dynamics and biased observations


where is the unknown -dimensional system state, is the known state transition matrix and is the unknown zero-mean white process noise. is the -dimensional nonlinear uncertain dynamics consisting of the known nominal model and some unknown disturbance. is the known matrix subject to . is the -dimensional observation vector obtained via agent , is the known observation matrix, is the unknown state-correlated stochastic observation bias of agent , and is the stochastic zero-mean observation noise. is the number of agents over the system. Note that the local observation quantities are known only to agent itself. The above matrices and vectors have compatible dimensions.
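To make the model concrete, the following Python sketch simulates one instance of such a system. All symbols here (`A`, `H`, `f`, the bias and noise scales) are illustrative stand-ins chosen for the sketch, not the paper's exact notation or parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, steps = 2, 3, 50                        # state dim, number of agents, horizon

A = np.array([[1.0, 0.1], [0.0, 1.0]])        # known state transition matrix
H = [np.array([[1.0, 0.0]]),                  # local observation matrices: each
     np.array([[0.0, 1.0]]),                  # agent alone sees only part of
     np.array([[1.0, 1.0]])]                  # the state

def f(x, k):
    """Nominal nonlinear dynamics plus a small unknown disturbance (illustrative)."""
    return 0.05 * np.sin(x) + 0.01 * np.cos(0.1 * k)

x = np.zeros(n)
ys = []                                       # ys[k][i]: observation of agent i at time k
for k in range(steps):
    w = rng.normal(scale=0.05, size=n)                  # zero-mean white process noise
    x = A @ x + f(x, k) + w
    yk = []
    for i in range(N):
        b = 0.02 * x[0] * rng.standard_normal()         # state-correlated bias (illustrative)
        v = rng.normal(scale=0.1, size=H[i].shape[0])   # zero-mean observation noise
        yk.append(H[i] @ x + b + v)
    ys.append(yk)
```

Note how the bias term depends on the current state, so it is not independent of the state trajectory, which is exactly the situation the paper's assumptions accommodate.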

Let , , for simplicity. In the following, we will provide several assumptions on the system structure and network topology.

Assumption 3.1

On the multi-agent system (2), the following conditions hold.

  • 1) The process noise is independent of and , , subject to , where and sup.

  • 2) The stochastic biases are measurable to , and .

  • 3) are martingale difference sequences such that , where are positive definite matrices such that

  • 4) where is the estimate of for the i-th agent with , and .

Under condition 2) of Assumption 3.1, the bias sequences are adapted sequences; thus the bias model is built in a general framework, which includes both deterministic phenomena [33] and random noise. Different from [11], where the observation noises of the agents are independent, condition 3) of Assumption 3.1 allows noise dependence between agents. In addition, compared with [6, 25], which required the initial estimation error to be sufficiently small, condition 4) of Assumption 3.1 is quite general and can be satisfied by setting a large enough initial bound.

Assumption 3.2

For the system (2), there exists a positive integer , such that the matrix sequence has an L-SS and .

Assumption 3.2 does not require the stability of the original system (2), which is assumed in many existing studies [34, 35, 36]. Besides, within the scope of distributed filtering for time-varying systems, Assumption 3.2 is milder than the conditions in [6, 11], where the non-singularity of the system state transition matrix is needed at each time.

Assumption 3.3

On the nonlinear uncertain dynamics , the following conditions hold.

  • 1) The nonlinear dynamics is measurable to and .

  • 2) Denote and is the th element of , then there exists a vector , such that

    with and ,

The first condition of Assumption 3.3 permits the nonlinear dynamics to depend implicitly on past system information. Under this setting, the model built in (2) also covers distributed output feedback control systems, such as the coupled tanks system [30]. Different from existing results that treat the uncertain dynamics as a bounded total disturbance [37], the requirement on the increment of the nonlinear dynamics in condition 2) of Assumption 3.3 imposes no boundedness restriction on the uncertain dynamics.

For the system (2), a new state vector, consisting of the original state and the nonlinear uncertain dynamics , can be constructed. Then a modified system model with respect to the new state vector is given in the following.

To write the above reconstructed system into a concise form, we introduce the following notations.

Then the system (2) can be rewritten as


Considering the system (2) and the reformulated system (3), Lemma 3.1 shows the equivalent conditions on state transition matrices.

Lemma 3.1

On the relationship between in (2) and in (3), the following conclusions hold.

  • 1) if and only if .

  • 2) has an L-SS if and only if has an L-SS.

PROOF. Please see the proof in Appendix A.

Assumption 3.4

(Collective observability) There exist two positive integers , , and a constant such that for any , there is



Assumption 3.4 is a standard collective observability condition for time-varying stochastic systems. If the system is time-invariant, Assumption 3.4 degenerates to observability of the collective pair [9, 38]. Besides, if local observability conditions are satisfied [8, 19, 1], then Assumption 3.4 holds, but not vice versa.

In this paper, the topologies of the networks are assumed to be switching digraphs, governed by a graph switching signal taking values in the set of the underlying network topology numbers. For convenience, the weighted adjacency matrix of the currently active digraph is denoted accordingly. To analyze the switching topologies, we consider an infinite sequence of bounded, non-overlapping and contiguous time intervals whose lengths are uniformly bounded by some integer. On the switching topologies of the multi-agent system, the following assumption is needed.

Assumption 3.5

The digraph set is jointly strongly connected across the time interval and the elements belong to , , where is a finite set of arbitrary nonnegative real numbers.

Assumption 3.5 concerns the network topologies. Since joint connectivity of the switching digraphs allows the network to be disconnected at any given moment, it is quite general for networks subject to link failures. If the network remains connected at every moment or is fixed [9, 10], then Assumption 3.5 holds.

Due to the existence of stochastic biases with unknown correlation to the system state, the observations of the system (2) are less reliable than observations corrupted only by random noises. In other words, employing the observations with typical time-driven methods may lead to degradation of the estimation performance. Thus, for the time-varying system (3), different observation update protocols should be studied. In this paper, we consider two observation update schemes, namely time-driven update and event-triggered update, whose difference lies in whether a biased observation is utilized at the update stage. Obeying a peer-to-peer communication strategy, we propose the following three-staged distributed filter structure of the system (3) for each agent,


where , and are the extended state's prediction, update and estimate for agent at the th moment, respectively, and and are the filtering gain matrix and the local fusion matrices, respectively; they remain to be designed. Additionally,


where stands for the estimate of nominal model by employing the former state estimates . It is noted that the saturation function is utilized to guarantee the boundedness of .

The objectives of this paper are twofold:

a) Under the time-driven and event-triggered update schemes, respectively, construct two recursive filters in a distributed manner, such that the filters provide consistent estimates of the extended state.

b) Based on the provided conditions on the system structure and network topology, prove the stability of the estimation error covariances for the designed filters.

4 Distributed filter: time-driven update

In this section, for the filtering structure (5) with the observation employed at each time, we study the design of the filtering gain and fusion matrices. Then we find conditions that guarantee the stability of the estimation error covariances for the proposed filter with the designed gains.

4.1 Filter design

Next, Lemma 4.1 provides a design method for the fusion matrices, which leads to a consistent estimate for each agent.

Lemma 4.1

Consider the multi-agent system (3) with the filtering structure (5). Under Assumptions 3.1 and 3.3, for , the pairs (),() and () are all consistent, if


where and are recursively calculated through

with and are defined in (B.2).

For the filtering gain matrix, its design can be cast as an optimization problem, whose closed-form solution is given in the following lemma.

Lemma 4.2

Solving the optimization problem


Remark 4.1

Lemma 4.1 shows that upper bounds of the estimation error covariances at the three typical stages can be obtained by each agent. These bounds not only contribute to the design of the fusion weights and the filtering gain, but also evaluate the estimation accuracy in real time.

Summing up the results of Lemmas 4.1 and 4.2, the extended state based distributed Kalman filter (ESDKF) is provided in Algorithm 1.

Prediction: Each agent carries out a prediction operation

where is provided in (6), and are given in (B.2).
Update: Each agent uses its own observations to update the estimation

Local Fusion: Each agent fuses (, ) received from its neighbors

Algorithm 1 Extended State Based Distributed Kalman Filter (ESDKF):
Remark 4.2

Algorithm 1 is a fully distributed filter, i.e., its implementation requires only local information and the messages received from neighboring agents. Note that the observations and the observation statistics are not shared between agents, which contributes to the privacy of distributed estimation for multi-agent systems [39].
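The three-staged structure of Algorithm 1 can be sketched in Python as below. The paper's exact gain and fusion-weight designs are given in Lemmas 4.1 and 4.2 and equations (6) and (B.2), which are not reproduced here; instead, this sketch uses the standard Kalman gain and a covariance-intersection-style fusion, a common choice that preserves consistency when cross-correlations between neighbors are unknown:

```python
import numpy as np

def predict(xhat, P, A, Q):
    """Prediction stage: propagate the estimate and its covariance upper bound."""
    return A @ xhat, A @ P @ A.T + Q

def update(xbar, Pbar, y, H, R):
    """Observation update stage with a standard Kalman gain (illustrative choice)."""
    S = H @ Pbar @ H.T + R
    K = Pbar @ H.T @ np.linalg.inv(S)
    x = xbar + K @ (y - H @ xbar)
    P = (np.eye(len(xbar)) - K @ H) @ Pbar
    return x, P

def ci_fuse(xs, Ps, ws):
    """Local fusion stage: covariance-intersection-style fusion of neighbors'
    pairs (x_j, P_j) with convex weights ws, which keeps consistency without
    knowing the cross-correlations between the neighbors' estimates."""
    Pinv = sum(w * np.linalg.inv(P) for w, P in zip(ws, Ps))
    P = np.linalg.inv(Pinv)
    x = P @ sum(w * np.linalg.inv(Pj) @ xj for w, xj, Pj in zip(ws, xs, Ps))
    return x, P
```

Each agent runs predict, then update with its own observation, then fuses the pairs received from its neighbors, mirroring the three stages of Algorithm 1.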

4.2 Stability

In this subsection, we will find the conditions to guarantee the stability of the estimation error covariances for ESDKF in Algorithm 1. Before that, we provide the following lemma for proof convenience.

Lemma 4.3

Under Assumptions 3.1-3.3, if there are positive constants such that and , the following two conclusions hold.

  • 1) At the observation update stage, it holds that

    where .

  • 2) At the prediction stage, there exists a positive scalar such that

    where is an L-SS of .

PROOF. Please see the proof in Appendix D.

Theorem 4.1

Consider the multi-agent system (3) with Algorithm 1. Under Assumptions 3.1-3.5, if and there are positive constants such that and , then the estimation error covariances of each agent are stable, i.e., $\sup_{k \ge 0} \mathrm{tr}\left(\mathbb{E}\{e_{i,k} e_{i,k}^T\}\right) < \infty$.

PROOF. Due to the consistency in Lemma 4.1, it suffices to prove the boundedness of the upper bounds. Under Assumption 3.2, has an L-SS, which is supposed to be subject to , , where . Without loss of generality, we assume , where is given in Assumption 3.4; otherwise, a subsequence of can always be obtained to satisfy the requirement. To prove the boundedness of , we divide the sequence set into two non-overlapping time sets: and .

1) First, we consider the case of , . For convenience, let . According to Lemma 4.3, we obtain


Denote . By recursively applying (8) for times, one has




and is the th element of . Since the first term on the right-hand side of (9) is positive definite, we consider the second term. Under Assumption 3.5, the jointly strongly connected network leads to (10). From (10), one can obtain

where . It is noted that can be obtained under Assumption 3.5, since the elements of belong to a finite set. Under Assumptions 3.2 and 3.4, there exists a constant positive definite matrix , such that Considering (9), we have


2) Second, we consider the time set . Considering (9), and since the length of the interval is bounded by , we can consider only the prediction stage to study the boundedness of , for . Under Assumption 3.2 and Lemma 3.1, there is a scalar such that . Due to the uniform boundedness of the noise covariances, it is safe to conclude that there exists a constant matrix , such that


3) Finally, for the time interval , there exists a constant matrix , such that


According to (11), (12) and (13), we have . Q.E.D.

It can be seen from Theorem 4.1 that, under mild conditions including collective observability of a multi-agent system and jointly strong connectedness of switching topologies, the proposed filter can effectively estimate the extended state, which consists of the original state and the nonlinear dynamics.

4.3 Design of parameters

Although design principles for the parameters have been provided in Theorem 4.1 to guarantee the stability of Algorithm 1, in this subsection we give some optimization-based design methods for them to improve the estimation performance. First of all, the objective functions ought to be given. Due to the unknown correlation between the estimates of agents, the true estimation covariance of a distributed Kalman filter is usually not attainable. Thanks to the consistency of Algorithm 1, we can use the covariance upper bounds in its place.

a) Design of

At the prediction stage, the design of the parameter is aimed to minimize the trace of . Mathematically, the optimization problem on is given as

where .

Problem 1 is a convex optimization problem, which can be numerically solved by existing convex optimization methods. In Proposition 4.1, we provide the closed-form solution of Problem 1.

Proposition 4.1

Solving the Problem 1 yields the closed-form solution

subject to
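As a numerical cross-check of the optimization-based weight design (the closed-form solution itself is given in Proposition 4.1 and not reproduced here), the trace-minimizing fusion weight can also be found by direct search. This sketch assumes a covariance-intersection-style fused bound for two neighbors with hypothetical covariance bounds P1 and P2:

```python
import numpy as np

def fused_trace(w, P1, P2):
    """Trace of the fused covariance bound inv(w*inv(P1) + (1-w)*inv(P2))
    for the convex weight pair (w, 1 - w)."""
    return np.trace(np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)))

def best_weight(P1, P2, grid=1000):
    """1-D grid search over the weight simplex; the objective is convex in w,
    so a fine grid closely approximates the optimizer."""
    ws = np.linspace(1e-3, 1 - 1e-3, grid)
    vals = [fused_trace(w, P1, P2) for w in ws]
    return float(ws[int(np.argmin(vals))])
```

For symmetric bounds such as P1 = diag(1, 10) and P2 = diag(10, 1), the search returns a weight near 0.5, matching the intuition that equally informative neighbors should be weighted equally.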