Distributed Kalman Filter in a Network of Linear Dynamical Systems

Damián Marelli (Damian.Marelli@newcastle.edu.au), Mohsen Zamani (Mohsen.Zamani@newcastle.edu.au), Minyue Fu (Minyue.Fu@newcastle.edu.au)

Department of Control Science and Engineering and State Key Laboratory of Industrial Control Technology, Zhejiang University, 388 Yuhangtang Road, Hangzhou, Zhejiang Province, 310058, P. R. China.
French-Argentinean International Center for Information and Systems Sciences, National Scientific and Technical Research Council, Ocampo Esmeralda, Rosario 2000, Argentina.
School of Electrical Engineering and Computer Science, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia.
Abstract

This paper is concerned with the problem of distributed Kalman filtering in a network of interconnected subsystems with distributed control protocols. We consider networks of linear time-invariant subsystems, either homogeneous or heterogeneous, given in state-space form. We propose a distributed Kalman filtering scheme for this setup. The proposed method provides, at each node, an estimate of the state based only on locally available measurements and those received from neighbor nodes. The special feature of this method is that it exploits the particular structure of the considered network to obtain an estimate using only one prediction/update step at each time step. We show that the estimate produced by the proposed method asymptotically approaches that of the centralized Kalman filter, i.e., the optimal one with global knowledge of all network parameters, and we bound the convergence rate. Moreover, if the initial states of all subsystems are mutually uncorrelated, the estimates of the two schemes are identical at each time step.

Keywords: Estimation, Kalman Filter, Distributed Systems.
Journal: Systems and Control Letters

1 Introduction

There has been increasing activity in the study of distributed estimation in a network environment. This is due to its broad applications in many areas, including formation control Subbotin and Smith (2009); Lin et al. (2014), distributed sensor networks Zhang et al. (2001) and cyber security Teixeira et al. (2015); Zamani et al. (2015). This paper examines the problem of distributed estimation in a network of subsystems represented by finite-dimensional state-space models. Our focus is on the scenario where each subsystem obtains some noisy measurements and broadcasts them to its nearby subsystems, called neighbors. The neighbors exploit the received information, together with an estimate of their internal states, to make a decision about their future states. This sort of communication coupling arises in different applications. For example, in control system security problems Teixeira et al. (2015), distributed state estimation is required to calculate certain estimation error residues for attack detection. Similarly, in formation control Lin et al. (2016b); Zheng et al. (2015); Lin et al. (2016a), each subsystem integrates measurements from its nearby subsystems, and the state of each subsystem needs to be estimated for distributed control design purposes. The main objective of this paper is to collectively estimate the states of all subsystems within such a network. We propose a novel distributed version of the celebrated Kalman filter.

The current paper, in a broad sense, belongs to the large body of literature on distributed estimation. One can refer to Lopes and Ali (2008); Kar et al. (2012); Conejo et al. (2007); Gómez-Expósito et al. (2011); Marelli and Fu (2015); Olfati-Saber (2005); Ugrinovskii (2011, 2013); Zamani and Ugrinovskii (2014); Khan and Moura (2008); Olfati-Saber (2009) and the survey paper Ribeiro et al. (2010), as well as the references listed therein, for different variations of distributed estimation methods among a group of subsystems within a network. A consensus-based Kalman filter was proposed in Olfati-Saber (2005). The author of Ugrinovskii (2011) utilized a linear matrix inequality to minimize an index associated with a consensus-based estimator, which can be implemented locally. Some of the results there were then extended to the case of switching topology in Ugrinovskii (2013). The same problem was solved using the minimum energy filtering approach in Zamani and Ugrinovskii (2014). A common drawback of the state estimation methods described above is that, being based on consensus, they require, in theory, an infinite number of consensus iterations at each time step. This results in computational and communication overhead. To avoid this, in this paper we exploit the network structure to obtain a distributed Kalman filtering method which requires only one prediction/update step at each time step. Moreover, it is worth noting that there is a major difference between the above-mentioned works and the problem formulation in the current paper. More precisely, in the former, the aim of each subsystem is to estimate the aggregated state, which is common to all subsystems. In contrast, in the problem studied here, each subsystem is dedicated to the estimation of its own internal state, which in general is different from those of other subsystems. This allows the distributed estimation algorithm to scale to networked systems with a large number of subsystems, where requiring each subsystem to estimate the aggregated state is both computationally infeasible and practically unnecessary.

To show the effectiveness of the proposed algorithm, we compare our method with the classical (centralized) Kalman filter, which is known to be optimal (in the minimum error covariance sense). The classical method requires the simultaneous knowledge of parameters and measurements from all subsystems within the network to carry out the estimation. In contrast, our proposed distributed estimation algorithm runs a local Kalman filter at each subsystem, which only requires the knowledge of local measurements and parameters, as well as measurements from neighbor subsystems. Hence, it can be implemented in a fully distributed fashion. We show that the state estimate, and its associated estimation error covariance matrix, produced by the proposed distributed method asymptotically converge to those produced by the centralized Kalman filter. We provide bounds for the convergence of both the estimate and the estimation error covariance matrix. A by-product of our result is that, if the initial states of all subsystems are uncoupled (i.e., they are mutually uncorrelated), the estimates produced by our method are identical to those of the centralized Kalman filter.

The rest of the paper is structured as follows. In Section 2, we describe the network setup, and in Section 3 its associated centralized Kalman filter. In Section 4, we describe the proposed distributed Kalman filter scheme. In Section 5, we demonstrate the asymptotic equivalence between the proposed distributed filter and the centralized one, and provide bounds for the convergence of the estimates and their associated estimation error covariances. Simulation results that support our theoretical claims are presented in Section 6. Finally, concluding remarks are given in Section 7.

2 System Description

In this paper we study networks of linear time-invariant subsystems. Each subsystem is represented by the following state-space model

(1)
(2)

The subsystems are interconnected as follows

(3)

where is the state, the output, is an i.i.d. Gaussian disturbance process with , and is an i.i.d. Gaussian measurement noise process with . We further suppose that and , and , . We also denote the neighbor set of the subsystem by .

Remark 1.

We note in (1)-(2) that the coupling between neighboring subsystems is caused solely through the term in (3). The main motivation for considering such coupling comes from distributed control, where (1) represents the model of an autonomous subsystem (or agent) with the being the control input, and (3) represents a distributed control protocol which employs feedback only from neighboring measurements. This type of distributed control is not only common for the control of multi-agent systems (see, for example, (Lin et al., 2014, 2016b, 2016a; Zheng et al., 2015)), but also realistic for large networked systems in the sense that only neighboring information is both easily accessible and most useful for each subsystem. We emphasize that the distributed state estimation problem arises for the networked system (1)-(3) because of our allowance for measurement noises in (2). This consideration is very important for applications because measurement noises are unavoidable in practice. This also sharply distinguishes our distributed control formulation from most distributed control algorithms in the literature, where perfect state measurement is often implicitly assumed.

We define and , where stands for either , , , or ; moreover, we denote , where stands for either , , , or , and .

Using the above notation, we let the initial state of all subsystems have the joint distribution . We can also write the aggregated model of the whole network as

(4)
(5)

with

(6)

It then follows that

(7)

where and .
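For illustration, the sketch below (Python; the names A_list, K_dict and the assumed subsystem equations are our own placeholders, not the paper's displayed matrices in (4)-(7)) stacks the subsystem matrices into an aggregated model and shows why feeding the noisy measurements back through the couplings makes the aggregated process and measurement noises mutually correlated, which is the fact exploited in the next section.

```python
import numpy as np
from scipy.linalg import block_diag

def aggregate_network(A_list, B_list, C_list, Q_list, R_list, K_dict):
    """Hypothetical construction of the aggregated model, assuming the
    subsystem equations x_i(k+1) = A_i x_i(k) + B_i u_i(k) + w_i(k),
    y_i(k) = C_i x_i(k) + v_i(k) and the coupling u_i(k) = sum_j K_ij y_j(k)
    over the neighbors j of node i."""
    A = block_diag(*A_list)
    B = block_diag(*B_list)
    C = block_diag(*C_list)
    Q = block_diag(*Q_list)              # covariance of the stacked w
    R = block_diag(*R_list)              # covariance of the stacked v
    # Coupling matrix: block (i, j) equals K_ij when j is a neighbor of i.
    p = [Bi.shape[1] for Bi in B_list]
    m = [Ci.shape[0] for Ci in C_list]
    po, mo = np.cumsum([0] + p), np.cumsum([0] + m)
    K = np.zeros((po[-1], mo[-1]))
    for (i, j), Kij in K_dict.items():
        K[po[i]:po[i + 1], mo[j]:mo[j + 1]] = Kij
    # Substituting u = K y = K (C x + v) gives the closed-loop dynamics
    #   x(k+1) = (A + B K C) x(k) + w(k) + B K v(k),
    # so the effective process noise w + B K v is correlated with v.
    A_cl = A + B @ K @ C
    Q_eff = Q + B @ K @ R @ (B @ K).T    # covariance of w + B K v
    S = B @ K @ R                        # cross-covariance with v
    return A_cl, C, Q_eff, R, S
```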

3 Centralized Kalman Filter

Consider the standard (centralized) Kalman filter. For all , let

(8)

and . Our aim in this section is to compute in a standard centralized way. Notice that equation (7) implies that, in the aggregated system formed by (1)-(2), the process noise and the measurement noise are mutually correlated. Taking this into account, it follows from (Anderson and Moore, 1979, §5.5) that the prediction and update steps of the (centralized) Kalman filter are given by:

  1. Prediction:

    (9)

    and

    (10)
  2. Update:

    (11)
    (12)

    with

    (13)
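For concreteness, the following sketch implements one cycle of a Kalman filter whose process and measurement noises are correlated, using the classical decorrelation change of variables; it is one textbook arrangement of the recursion (cf. Anderson and Moore, 1979, §5.5) and not necessarily the exact form of (9)-(13).

```python
import numpy as np

def centralized_kf_step(x_pred, P_pred, y, A, C, Q, R, S):
    """One update/prediction cycle when the process noise w(k) and the
    measurement noise v(k) are correlated, E[w(k) v(k)^T] = S.  The
    correlation is removed with the substitution w'(k) = w(k) - S R^{-1} v(k),
    which makes y(k) act as a known input in the prediction step."""
    # Update with the current measurement y(k).
    Sigma = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(Sigma)      # filter gain
    x_upd = x_pred + K @ (y - C @ x_pred)
    P_upd = P_pred - K @ Sigma @ K.T
    # Prediction with the decorrelated model
    #   x(k+1) = (A - S R^{-1} C) x(k) + S R^{-1} y(k) + w'(k),
    # where w'(k) has covariance Q - S R^{-1} S^T and is uncorrelated with v(k).
    Rinv = np.linalg.inv(R)
    A_bar = A - S @ Rinv @ C
    x_next = A_bar @ x_upd + S @ Rinv @ y
    P_next = A_bar @ P_upd @ A_bar.T + Q - S @ Rinv @ S.T
    return x_upd, P_upd, x_next, P_next
```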

4 Distributed Kalman Filter

Consider the -th subsystem (1)-(2). Notice that, since the measurements , , are known by the -th subsystem, they can be treated as external inputs. This observation leads us to the following intuitive approach for a distributed Kalman filter scheme.

Let, for all and ,

(14)

and . Then, the prediction and update steps for the proposed distributed Kalman filter are:

  1. Prediction:

    (15)
    (16)
  2. Update:

    (17)
    (18)

    with

    (19)
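A minimal per-node sketch of this idea follows (Python; the attribute names are hypothetical). Each node stores only its own model and the coupling gains of its neighbors, receives the broadcast measurements, and performs a single update/prediction cycle per time step; unlike (19), the sketch does not model any cross-correlation between the induced process noise and the node's own measurement noise.

```python
import numpy as np

class LocalKF:
    """Sketch of a per-node filter in the spirit of Section 4 (hypothetical
    names).  Node i keeps only its own matrices (A, B, C, Q, R) and the
    coupling gains K_ij of its neighbors; the broadcast neighbor measurements
    are treated as known external inputs."""

    def __init__(self, A, B, C, Q, R, K_ij, x0, P0):
        self.A, self.B, self.C, self.Q, self.R = A, B, C, Q, R
        self.K_ij = K_ij                 # dict: neighbor index -> gain K_ij
        self.x, self.P = x0, P0          # current estimate and covariance

    def update(self, y_i):
        """Incorporate the node's own measurement y_i (cf. (17)-(19))."""
        S = self.C @ self.P @ self.C.T + self.R
        G = self.P @ self.C.T @ np.linalg.inv(S)
        self.x = self.x + G @ (y_i - self.C @ self.x)
        self.P = self.P - G @ S @ G.T

    def predict(self, neigh_meas):
        """Propagate one step using the distributed control input built from
        the broadcast neighbor measurements {j: y_j} (cf. (15)-(16))."""
        u = sum(self.K_ij[j] @ y_j for j, y_j in neigh_meas.items())
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Q
```

At every time step, each node would first call update with its own measurement and then predict with the measurements received from its neighbors; no iterative consensus rounds are required.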

5 Optimality analysis

Since the distributed Kalman filter approach given in Section 4 is motivated by intuition, the question naturally arises as to what extent it is optimal. In this section we address this question. To this end, we define , where and , to be the outcomes of the distributed filter and to be those of the centralized one. In Section 5.1, we show that the estimation error covariance of the distributed filter converges to that of the centralized one , and provide a bound for this convergence. In Section 5.2, we do the same for the convergence of to .

5.1 Convergence of to

In this section, we show that the covariance matrices and exponentially converge to each other, and introduce a bound on . To this end, we require the following definition from (Bougerol, 1993, Def 1.4).

Definition 2.

For matrices , the Riemannian distance is defined by

where denote the singular values of matrix .
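For reference, a standard form of this metric for positive definite matrices, written so as to match the mention of singular values above (the symbols used below are placeholders rather than the paper's original notation), is:

```latex
% Placeholder notation: P_1, P_2 are positive definite n x n matrices and
% sigma_i(X) denotes the i-th singular value of X.
\delta(P_1, P_2) \;=\; \Big( \sum_{i=1}^{n} \log^{2} \sigma_i\big(P_1 P_2^{-1}\big) \Big)^{1/2}.
```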

Several properties of the above definition, which we use to derive our results, are given in the following proposition.

Proposition 3.

(Sui et al., Proposition 6) For matrices , the following holds true:

  1. .

  2. For any matrix and matrix , we have

    where and .

  3. If , then

The main result of this section is given in Theorem 5. Its proof requires the following technical result.

Lemma 4.

Let and . Then

and

(20)
(21)

where

(22)
(23)

with denoting the diagonal matrix formed by the block diagonal entries of the matrix ,

(24)

and .

Proof.

Let and

(25)
(26)

We can then appeal to the fact that the Riccati equation is monotonic (Bitmead et al., 1985) to conclude that, for all ,

(27)
(28)
(29)
(30)

Recall that

Also, from (Anderson and Moore, 1979, p. 139), we have

Clearly, similar relations hold for and . Then, it follows from Proposition 3-3 that,

(31)
(32)

with

It then follows from (31)-(32) and Proposition 3-2, that

with . Finally, the above implies that . Hence, the parameters and given in (25)-(26) are equivalent to and in (22)-(23), respectively, and the result follows. ∎

We now introduce the main result of the section, stating a bound on .

Theorem 5.

Let and . Then

where

Proof.

Using (22)-(23), together with (20)-(21), Proposition 3-4 and Lemma 11, we obtain

5.2 Convergence of to

In this subsection, we study the convergence of the state estimate , obtained through the distributed method, to that of the centralized one . Moreover, we derive a bound on their difference. We start by introducing a number of lemmas which are instrumental in establishing our main results.

Lemma 6.

Let . Then

(33)

where

Proof.

Let , and . We can easily obtain

Also, from (Anderson and Moore, 1979, p. 140), we obtain

Then it is easy to check that

and

We then have

Lemma 7.

Let

(34)

Then

(35)

where is the identity matrix, is defined in (24), and

(36)

with

(37)
Proof.

We split the argument into steps:

Step 1) From Lemmas 6 and 12

Now, using Lemma 4,

and

Then

Step 2) From (33) and Lemma 12, we have

with

Clearly, if then . Also, there clearly exists and such that , for all . Hence, , and the result follows. ∎

The following result states a family of upper bounds on the norm of the covariance matrix of .

Theorem 8.

Consider as defined in (34). Let and be the Jordan decompositions of and , respectively. Then for every , there exists such that

where

and

(38)
Proof.

We split the argument into steps:

Step 1) Let

with . From (35), and since , it follows that

(39)

Step 2) Let

From (38), there exists such that, for all ,

Then, for all ,

Step 3) We have

Let . Then

with

Taking the -transform, we get

Hence,

and the result follows from the definition of and (39). ∎

Theorem 8 states that the covariance of the difference between and is bounded by two exponential terms. The term is due to the convergence of the Kalman gain to , while the term is due to the convergence of the states given by the system dynamics. In order to use this result to show the asymptotic convergence of to , we need that and , for some . While it is clear from (24) that the former is true, guaranteeing the latter is not that straightforward. The following proposition addresses this issue.

Proposition 9.

If the pair is completely detectable and the pair is completely stabilizable, then where denotes the spectral radius of matrix .

Proof.

Let . From Theorem 5,

Now,

Hence, if we had that , for all , then

However, under the same assumption, according to Lemma 6, . Hence,

i.e., equals the matrix that determines the asymptotic dynamics of the centralized Kalman filter’s estimation error. Then, in view of the model (4)-(5), the result follows from (Anderson and Moore, 1979, §4.4). ∎
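As an aside, the spectral-radius property asserted above can be checked numerically for the steady-state centralized filter. The toy sketch below (placeholder names; it ignores the noise cross-correlation in (7)) solves the filtering Riccati equation and returns the spectral radius of the steady-state error-dynamics matrix.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def centralized_error_spectral_radius(A, C, Q, R):
    """Toy check (placeholder names): solve the filtering Riccati equation for
    the steady-state prediction error covariance P, form the steady-state
    error-dynamics matrix F = A (I - K C), and return its spectral radius.
    Under detectability and stabilizability it is expected to be < 1."""
    P = solve_discrete_are(A.T, C.T, Q, R)          # steady-state covariance
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # steady-state gain
    F = A @ (np.eye(A.shape[0]) - K @ C)
    return max(abs(np.linalg.eigvals(F)))
```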

5.3 The case w