# Distributed Parameter Estimation Under Event-triggered Communications

Xingkang He, Qian Liu, Junfeng Wu, Karl Henrik Johansson. This work is supported by the Knut & Alice Wallenberg Foundation and by the Swedish Research Council. Xingkang He and Karl Henrik Johansson are with the ACCESS Linnaeus Centre, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden (xingkang@kth.se, kallej@kth.se). Qian Liu is with LSC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China, and also with the University of Chinese Academy of Sciences, China (liuqian@amss.ac.cn). Junfeng Wu is with the College of Control Science and Engineering, Zhejiang University, China (jfwu@zju.edu.cn).
###### Abstract

In this paper, we study a distributed parameter estimation problem with an asynchronous communication protocol over multi-agent systems. Different from traditional time-driven communication schemes, in this work data can be transmitted between agents intermittently rather than in a steady stream. First, we propose a recursive distributed estimator based on an event-triggered communication scheme, through which each agent decides whether or not its current estimate is sent out to its neighbors. With this scheme, a considerable amount of communication between agents can be avoided. Then, under mild conditions including collective observability, we provide a design principle for the triggering thresholds that guarantees asymptotic unbiasedness and strong consistency. Furthermore, under certain conditions, we prove that, with probability one, for every agent the time interval between two successive triggering instants goes to infinity as time goes to infinity. Finally, we provide a numerical simulation to validate the theoretical results of this paper.

Distributed parameter estimation, event-triggered, strong consistency

## 1 Introduction

As one of the most active research topics of the last decade, multi-agent systems have attracted considerable attention from researchers around the world due to their broad applications in sensor networks, cyber-physical systems, computer games, transportation, etc. With the development of network technology and increasing amounts of data, distributed learning and estimation protocols that do not require a data center are becoming more and more popular.

Distributed parameter estimation over multi-agent systems concerns the estimation or learning of an unknown parameter based on data transmission between neighboring agents. Numerous practical applications, such as temperature monitoring, weather prediction, and environmental exploration, can be cast as distributed parameter estimation problems. Due to environmental complexity, the estimation problem is usually modeled in a stochastic framework, where the measurements of each agent are polluted by random noises. In [1, 2, 3, 4], distributed parameter estimation problems are investigated with respect to estimation properties including consistency and asymptotic normality. Distributed parameter estimation over random networks and imperfect communication channels is studied in [5, 6]. The connection between graph topology and estimation performance, in terms of asymptotic variances, is analyzed in the above literature.

The design and analysis of communication schemes between agents is an essential research topic in networked estimation and control. Due to limitations on channel capacity and energy resources, traditional time-driven communication schemes may not be suitable for some practical applications, such as wireless agent networks. Thus, a few results in the existing literature consider event-triggered communication schemes. Event-triggered measurement scheduling problems are well studied in [7, 8, 9, 10, 11, 12, 13]. In these works, the parameter estimation or filtering problems are investigated in centralized frameworks, where a center processes the transmitted messages to obtain estimates of a parameter vector or state vector. The works [14, 15, 16] study distributed filtering problems with event-triggered communications, where messages containing state estimates or covariance bounds are transmitted to other agents intermittently. However, to the best of the authors' knowledge, distributed parameter estimation with event-triggered communications has not been well studied in the existing literature. The main difficulty is to design and analyze triggering conditions so as to reduce the communication frequency between agents while guaranteeing the desired estimation properties.

In this paper, we study the distributed parameter estimation problem with event-triggered intermittent communications between neighboring agents. The contributions of this work are twofold. First, we propose an event-triggered communication scheme, through which each agent decides whether or not its current estimate is sent out to its neighbors. With this scheme, redundant communications between agents can be effectively reduced. Second, under mild conditions, we prove the main estimation properties of the considered distributed estimator, including asymptotic unbiasedness and strong consistency. Besides, we prove that, for every agent, the time interval between two successive triggering instants almost surely goes to infinity as time goes to infinity, which means the communication frequency between any two neighboring agents is tremendously reduced when time is sufficiently large. It should be noted that the main difference between the event-triggered framework proposed in this work and the existing literature is that our triggering threshold goes to zero as time goes to infinity, which is necessary to guarantee asymptotic convergence of the estimates under mild collective observability conditions.

The remainder of the paper is organized as follows. Section 2 presents preliminaries and the problem formulation. Section 3 develops the event-triggered communication scheme and establishes the main asymptotic estimation properties. Section 4 provides a numerical simulation. Section 5 concludes the paper.

### 1-a Notations

The superscript “T” represents the transpose. $I_n$ stands for the $n$-dimensional square identity matrix. $\mathbf{1}_n$ stands for the $n$-dimensional vector with all elements being one. $E\{x\}$ denotes the mathematical expectation of the stochastic variable $x$, and $\mathrm{diag}\{\cdot\}$ represents the diagonalization of block elements. Additionally, “i.i.d.” is the abbreviation of “independent and identically distributed”. $A \otimes B$ is the Kronecker product of $A$ and $B$. $\|x\|$ is the Euclidean norm of a vector $x$. The scalars, vectors, and matrices mentioned in this paper are all real-valued. $\mathbb{R}^{m \times n}$ is the set of real matrices with $m$ rows and $n$ columns. “w.r.t.” is the abbreviation of “with respect to”.

## 2 Preliminaries and Problem Formulation

In this section, we provide some necessary graph preliminaries and then formulate the problem studied in this work.

### 2-a Graph Preliminaries

In this paper, the communication between agents of a network is modeled as an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, which consists of the set of nodes $\mathcal{V} = \{1, 2, \cdots, N\}$, the set of edges $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$, and the adjacency matrix $\mathcal{A} = [a_{i,j}]$. $\mathcal{A}$ is a symmetric matrix consisting of ones and zeros. If $a_{i,j} = 1$, there is an edge $(i, j) \in \mathcal{E}$, which means node $i$ can exchange information with node $j$, and node $j$ is called a neighbor of node $i$. For node $i$, the neighbor set of agent $i$ is denoted by $N_i = \{j \in \mathcal{V} : a_{i,j} = 1\}$. We suppose that the graph has no self-loops, i.e., $a_{i,i} = 0$ for any $i \in \mathcal{V}$. $\mathcal{G}$ is called connected if for any pair of nodes $(i_1, i_l)$, there exists a path from $i_1$ to $i_l$ consisting of edges $(i_1, i_2), (i_2, i_3), \cdots, (i_{l-1}, i_l) \in \mathcal{E}$. Besides, we denote $\mathcal{L} = \mathcal{D} - \mathcal{A}$, where $\mathcal{L}$ is called the Laplacian matrix and $\mathcal{D}$ the degree matrix. $\mathcal{D}$ is a diagonal matrix whose $i$th diagonal entry is the number of neighbors of node $i$. For detailed definitions, the reader is referred to [17]. On the connectivity of a graph, the following theorem holds.

###### Theorem 2.1.

[17] The graph $\mathcal{G}$ is connected if and only if $\lambda_2(\mathcal{L}) > 0$, where $\lambda_2(\mathcal{L})$ denotes the second smallest eigenvalue of $\mathcal{L}$.
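Theorem 2.1 gives a simple numerical connectivity test: form the Laplacian $\mathcal{L} = \mathcal{D} - \mathcal{A}$ and check the sign of its second-smallest eigenvalue. A minimal sketch in NumPy (the 4-node ring adjacency matrix below is an illustrative choice, not a topology from this paper):

```python
import numpy as np

def is_connected(A):
    """Theorem 2.1: the graph is connected iff lambda_2(L) > 0."""
    L = np.diag(A.sum(axis=1)) - A              # Laplacian L = D - A
    return np.sort(np.linalg.eigvalsh(L))[1] > 1e-9

A_ring = np.array([[0, 1, 0, 1],                # 4-node ring (illustrative)
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
print(is_connected(A_ring))
```

For a symmetric adjacency matrix, `eigvalsh` returns real eigenvalues in ascending order, so the entry at index 1 is exactly $\lambda_2(\mathcal{L})$.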

### 2-B Problem Setup

Consider an unknown parameter vector $\theta \in \mathbb{R}^M$ observed by $N$ agents through the following model

 $y_i(t) = H_i\theta + v_i(t), \quad i = 1, 2, \cdots, N,$ (1)

where $y_i(t)$ is the measurement vector, $v_i(t)$ is the zero-mean measurement noise with covariance $R_i$, and $H_i$ represents the known measurement matrix of agent $i$. The noise covariance matrix of all agents is $R_v = E\{V(t)V^{\mathrm T}(t)\}$, where $V(t) = [v_1^{\mathrm T}(t), \cdots, v_N^{\mathrm T}(t)]^{\mathrm T}$. Note that we only require temporal independence of the measurement noises; thus the noises of different agents may be spatially correlated.

Assume $x_i(t)$ is the estimate of agent $i$ at time $t$ of the parameter vector $\theta$. In [2], the following estimator is studied:

 $x_i(t+1) = x_i(t) + \beta(t)\sum_{j\in N_i}(x_j(t) - x_i(t)) + \alpha(t) K H_i^{\mathrm T}(y_i(t) - H_i x_i(t)),$ (2)

where $\alpha(t)$ and $\beta(t)$ are time-varying step sizes satisfying certain conditions, and $K$ is a gain matrix to be designed.
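For concreteness, one synchronous step of the consensus-plus-innovations update (2) can be sketched as follows. Taking $K = I$ and treating the step sizes as given inputs are our simplifications:

```python
import numpy as np

def estimator_step(x, y, H, neighbors, alpha_t, beta_t):
    """One synchronous update of (2) for all agents, taking K = I.

    x: (N, M) array of current estimates; y: list of measurements y_i(t);
    H: list of measurement matrices H_i; neighbors: list of neighbor index lists.
    """
    x_new = np.empty_like(x)
    for i in range(x.shape[0]):
        consensus = sum(x[j] - x[i] for j in neighbors[i])    # disagreement term
        innovation = H[i].T @ (y[i] - H[i] @ x[i])            # measurement term
        x_new[i] = x[i] + beta_t * consensus + alpha_t * innovation
    return x_new
```

Note that this baseline scheme requires every agent to receive all of its neighbors' current estimates $x_j(t)$ at every step, which is precisely the communication burden the event-triggered scheme below removes.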

To reduce energy consumption and alleviate the burden on communication channels, we study an event-triggered communication scheme, in which the state estimate of each agent is not transmitted at every time instant.

In the remainder of this paper, we focus on solving the following problems. 1) How to design a fully distributed event-triggered communication scheme for each agent? 2) What conditions are required to guarantee essential estimation properties, including asymptotic unbiasedness and strong consistency, for each agent? 3) How does the event-triggered communication scheme reduce the communication frequency of the agents while the desired properties are guaranteed?

## 3 Main results

In this section, we propose an event-triggered communication scheme and analyze the main estimation properties of a recursive distributed estimator based on this triggering scheme.

### 3-a Event-triggered communication scheme

In this subsection, we design an event-triggered scheme which decides, for each agent, whether or not the current estimate is sent out to its neighbors. Let $t_k^i$ be the $k$th triggering time of the $i$th agent; at the current time $t$, $t_k^i$ denotes the latest triggering time of agent $i$. Then, we define the triggering event

 $\mathcal{E}_i(t) = \Big\{\|x_i(t) - \hat{x}_i(t)\| > \frac{1}{(t+1)^{\rho_i}}\Big\},$ (3)

where $\rho_i$ is a positive scalar addressed in the following, and $\hat{x}_i(t)$ is the latest state estimate sent out by agent $i$ up to time $t$.

Let $t_0^i = 0$, and define the following random indicator variable:

 $\gamma_i(t) = \begin{cases} 0, & \text{if } \mathcal{E}_i(t) \text{ occurs}, \\ 1, & \text{otherwise}. \end{cases}$ (4)

Note that the distribution of $\gamma_i(t)$ influences the communication frequency of the whole multi-agent system. If $P(\gamma_i(t) = 1) = 1$, $\forall i \in \mathcal{V}$, $\forall t \ge 0$, then communications between agents almost surely never happen. And if $P(\gamma_i(t) = 0) = 1$, $\forall i \in \mathcal{V}$, $\forall t \ge 0$, the communication scheme is almost surely equivalent to the time-driven one.
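In code, the trigger decision (3)-(4) amounts to comparing the deviation from the last broadcast estimate with a decaying threshold; here the threshold is taken as $(t+1)^{-\rho_i}$, matching our reading of (3):

```python
import numpy as np

def gamma_indicator(x_i, x_hat_i, t, rho_i):
    """Evaluate (3)-(4): gamma_i(t) = 0 iff the event E_i(t) fires, i.e. the
    current estimate deviates from the last broadcast one by more than the
    decaying threshold (t+1)**(-rho_i); a firing event means x_i(t) is sent."""
    fired = np.linalg.norm(x_i - x_hat_i) > (t + 1) ** (-rho_i)
    return 0 if fired else 1
```

Because $\rho_i > 0$, the threshold vanishes over time; yet Theorem 3.4 below shows that under condition (21) the estimates settle even faster, so triggering becomes increasingly rare.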

Based on the triggering scheme (3) - (4), for agent , we propose the following event-triggered distributed parameter estimator

 $x_i(t+1) = x_i(t) + \alpha(t) H_i^{\mathrm T}(y_i(t) - H_i x_i(t)) + \beta(t)\sum_{j\in N_i}\big(\gamma_j(t) x_j(t_k^j) + (1-\gamma_j(t)) x_j(t) - x_i(t)\big),$ (5)

where $\alpha(t)$ and $\beta(t)$ are time-varying step sizes specified in Assumption 3.4.

###### Remark 3.1.

To implement the estimator (5), each agent should store the latest state estimates sent out by its neighboring agents. When new estimates arrive, the stored ones are updated.
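Combining (5) with the buffering described in Remark 3.1, one step of the event-triggered estimator might look as follows. The broadcast model, a shared buffer `x_hat` of last-sent estimates, is an illustrative simplification of per-agent storage:

```python
import numpy as np

def et_estimator_step(x, x_hat, y, H, neighbors, t, alpha_t, beta_t, rho):
    """One step of the event-triggered update (5); x_hat (mutated in place)
    stores the last estimate each agent broadcast, as in Remark 3.1."""
    N = x.shape[0]
    for j in range(N):      # broadcasts: agent j sends x_j(t) when (3) fires
        if np.linalg.norm(x[j] - x_hat[j]) > (t + 1) ** (-rho):
            x_hat[j] = x[j].copy()
    x_new = np.empty_like(x)
    for i in range(N):      # each agent uses the freshest stored neighbor data
        consensus = sum(x_hat[j] - x[i] for j in neighbors[i])
        innovation = H[i].T @ (y[i] - H[i] @ x[i])
        x_new[i] = x[i] + beta_t * consensus + alpha_t * innovation
    return x_new
```

When agent $j$ has just triggered, its buffer entry equals $x_j(t)$, so the consensus term coincides with the time-driven update; otherwise the stale value $x_j(t_k^j)$ is used, exactly as in (5).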

###### Remark 3.2.

Different from existing results [7, 8, 9, 10, 11, 12, 13, 14, 15, 16], the triggering threshold goes to zero as $t$ goes to infinity. If the threshold did not go to zero as time goes to infinity and the collective observability condition (see Assumption 3.2) without agent $i$ were not satisfied, the estimates of all agents except agent $i$ would not converge to the true parameter.

### 3-B Performance Analysis

For convenience, we introduce the following notations:

 $X(t) = [x_1^{\mathrm T}(t), \cdots, x_N^{\mathrm T}(t)]^{\mathrm T},\quad Y(t) = [y_1^{\mathrm T}(t), \cdots, y_N^{\mathrm T}(t)]^{\mathrm T},\quad V(t) = [v_1^{\mathrm T}(t), \cdots, v_N^{\mathrm T}(t)]^{\mathrm T},$
 $\Theta = \mathbf{1}_N \otimes \theta,\quad \bar{D}_H = \mathrm{diag}\{H_1^{\mathrm T}, \cdots, H_N^{\mathrm T}\},\quad D_H = \bar{D}_H\bar{D}_H^{\mathrm T} = \mathrm{diag}\{H_1^{\mathrm T}H_1, \cdots, H_N^{\mathrm T}H_N\},$
 $G = \sum_{i=1}^N H_i^{\mathrm T}H_i,\quad X(t_k) = [x_1^{\mathrm T}(t_k^1), \cdots, x_N^{\mathrm T}(t_k^N)]^{\mathrm T}.$ (6)

The following assumptions are needed in this paper.

###### Assumption 3.1.

The graph $\mathcal{G}$ is connected, i.e., $\lambda_2(\mathcal{L}) > 0$.

###### Assumption 3.2.

The observation system (1) is collectively observable, i.e., $G = \sum_{i=1}^N H_i^{\mathrm T} H_i$ is of full rank.

###### Assumption 3.3.

There exists a positive scalar $\epsilon_1$ such that $\sup_{t\ge 0} E\{\|v_i(t)\|^{2+\epsilon_1}\} < \infty$, $\forall i \in \mathcal{V}$.

###### Assumption 3.4.

The step sizes in (5) are set as $\alpha(t) = \frac{a}{(t+1)^{\tau_1}}$ and $\beta(t) = \frac{b}{(t+1)^{\tau_2}}$, where $a > 0$, $b > 0$. Besides, $0 < \tau_2 < \tau_1 \le 1$.

###### Remark 3.3.

Assumption 3.1 is a common condition in distributed estimation and control of multi-agent systems. Assumption 3.2 is a collective observability condition, which can be satisfied even if no local observability condition holds. Assumption 3.3 is a moment condition on the noises, slightly stronger than boundedness of the second moments. Assumption 3.4 provides feasible design conditions for the step sizes in (5).

Regarding the triggering scheme in (4), if $\gamma_i(t) = 0$, agent $i$ sends its estimate $x_i(t)$ to its neighbors, who then update the stored estimate with $x_i(t)$. Thus, we can rewrite (5) in the following form:

 $x_i(t+1) = x_i(t) + \beta(t)\sum_{j\in N_i}(x_j(t) - x_i(t)) + \alpha(t) H_i^{\mathrm T}(y_i(t) - H_i x_i(t)) + \beta(t)\sum_{j\in N_i}(x_j(t_k^j) - x_j(t)).$ (7)

Using the notations in (6), we obtain the compact form of (7):

 $X(t+1) = X(t) - \beta(t)(\mathcal{L}\otimes I_M)X(t) + \alpha(t)\bar{D}_H\big(Y(t) - \bar{D}_H^{\mathrm T}X(t)\big) + \beta(t)(\mathcal{A}\otimes I_M)\big(X(t_k) - X(t)\big).$ (8)

We have the following lemma on the error between the transmitted estimate vector $X(t_k)$ and the current estimate vector $X(t)$.

###### Lemma 3.1.

Consider (7) and let $\rho_0 = \min_{i\in\mathcal{V}}\rho_i$. Then there exists a scalar $\bar{m} > 0$ such that

 $\|X(t_k) - X(t)\| \le \frac{\bar{m}}{(t+1)^{\rho_0}}.$ (9)

The following two lemmas are useful to further analysis.

###### Lemma 3.2.

Under Assumption 3.1 and Assumption 3.2, $\mathcal{L}\otimes I_M + D_H$ is a positive definite symmetric matrix. Furthermore, there exist a constant positive definite matrix $M_0$ and a sufficiently large integer $t_1$, such that for any $t \ge t_1$,

 $\alpha(t)M_0 \le \beta(t)(\mathcal{L}\otimes I_M) + \alpha(t)D_H < I_{MN}.$
###### Proof.

The proof is similar to that of Lemma 6 in [2]. ∎

###### Lemma 3.3.

(Lemma 6, [3]) Consider a scalar sequence $\{z(t)\}$ satisfying

 $z(t+1) = (1 - r_1(t))z(t) + r_2(t),$

with initial value $z(0) \ge 0$, where $0 \le r_1(t) \le 1$ and $r_2(t) \ge 0$ satisfy $r_1(t) \ge \frac{a_1}{(t+1)^{\delta_1}}$ and $r_2(t) \le \frac{a_2}{(t+1)^{\delta_2}}$, with $a_1 > 0$, $a_2 > 0$, $0 \le \delta_1 \le 1$, and $\delta_2 > \delta_1$. Then

• if $\delta_1 < 1$, for all $0 \le \delta_0 < \delta_2 - \delta_1$, we have

 $\lim_{t\to\infty}(t+1)^{\delta_0}z(t) = 0.$ (10)

• if $\delta_1 = 1$ and $a_1 > \delta_0$, (10) holds.
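Lemma 3.3 is easy to sanity-check numerically: iterate the recursion with $r_1(t) = a_1/(t+1)^{\delta_1}$ and $r_2(t) = a_2/(t+1)^{\delta_2}$, and watch the weighted sequence $(t+1)^{\delta_0}z(t)$ shrink for $\delta_0 < \delta_2 - \delta_1$. The parameter values below are arbitrary illustrative choices:

```python
def weighted_tail(z0, a1, d1, a2, d2, d0, T):
    """Iterate z(t+1) = (1 - a1/(t+1)**d1) * z(t) + a2/(t+1)**d2,
    then return the weighted value (T+1)**d0 * z(T)."""
    z = z0
    for t in range(T):
        z = (1.0 - a1 / (t + 1) ** d1) * z + a2 / (t + 1) ** d2
    return (T + 1) ** d0 * z

# delta1 = 0.6, delta2 = 1.0, so any 0 <= delta0 < 0.4 should give decay
early = weighted_tail(5.0, 0.8, 0.6, 1.0, 1.0, 0.3, 100)
late = weighted_tail(5.0, 0.8, 0.6, 1.0, 1.0, 0.3, 100_000)
print(early, late)  # the weighted value keeps shrinking as T grows
```

Here $z(t)$ itself settles near $\frac{a_2}{a_1}(t+1)^{-(\delta_2-\delta_1)}$, so the weighted value decays roughly like $(t+1)^{\delta_0-(\delta_2-\delta_1)}$, matching the first bullet of the lemma.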

On the estimator (5), the asymptotic unbiasedness is studied in the following theorem.

###### Theorem 3.1.

(Asymptotically Unbiased) Let Assumptions 3.1 - 3.4 hold. If $\rho_0 > \tau_1 - \tau_2$, the estimate sequence $\{x_i(t)\}$ given by (5) is asymptotically unbiased for the true parameter $\theta$, i.e., $\lim_{t\to\infty}E\{x_i(t)\} = \theta$, $\forall i \in \mathcal{V}$.

###### Proof.

According to (8), we have

 $X(t+1) = X(t) - \beta(t)(\mathcal{L}\otimes I_M)X(t) + \alpha(t)D_H(\Theta - X(t)) + \alpha(t)\bar{D}_H V(t) + \beta(t)(\mathcal{A}\otimes I_M)(X(t_k) - X(t)).$ (11)

Let $\tilde{X}(t) = X(t) - \Theta$ and $\bar{X}(t) = X(t_k) - X(t)$. By $(\mathcal{L}\otimes I_M)\Theta = 0$, we have

 $\tilde{X}(t+1) = \tilde{X}(t) - \beta(t)(\mathcal{L}\otimes I_M)\tilde{X}(t) - \alpha(t)D_H\tilde{X}(t) + \alpha(t)\bar{D}_H V(t) + \beta(t)(\mathcal{A}\otimes I_M)\bar{X}(t) = \big(I_{MN} - \beta(t)(\mathcal{L}\otimes I_M) - \alpha(t)D_H\big)\tilde{X}(t) + \alpha(t)\bar{D}_H V(t) + \beta(t)(\mathcal{A}\otimes I_M)\bar{X}(t).$ (12)

Taking expectations on both sides of (12), we have

 $E\{\tilde{X}(t+1)\} = \big(I_{MN} - \beta(t)(\mathcal{L}\otimes I_M) - \alpha(t)D_H\big)E\{\tilde{X}(t)\} + \beta(t)(\mathcal{A}\otimes I_M)E\{\bar{X}(t)\}.$ (13)

According to Lemma 3.2, there exists a sufficiently large integer $t_1$ such that for any $t \ge t_1$,

 $\alpha(t)M_0 \le \beta(t)(\mathcal{L}\otimes I_M) + \alpha(t)D_H < I_{MN}.$

Then, for $t \ge t_1$, taking norms on both sides of (13) yields

 $\|E\{\tilde{X}(t+1)\}\| \le \|I_{MN} - \beta(t)(\mathcal{L}\otimes I_M) - \alpha(t)D_H\|\,\|E\{\tilde{X}(t)\}\| + \beta(t)\|\mathcal{A}\otimes I_M\|\,\|E\{\bar{X}(t)\}\| \le (1-\alpha(t)m_0)\|E\{\tilde{X}(t)\}\| + \beta(t)MN\|E\{\bar{X}(t)\}\|,$ (14)

where $m_0 = \lambda_{\min}(M_0) > 0$.

Recall from Lemma 3.1 that $\|\bar{X}(t)\| \le \bar{m}(t+1)^{-\rho_0}$; then there exists a constant scalar $m_1 > 0$ such that $\beta(t)\|E\{\bar{X}(t)\}\| \le m_1(t+1)^{-(\tau_2+\rho_0)}$. As a result, from (14), we have

 $\|E\{\tilde{X}(t+1)\}\| \le (1-\alpha(t)m_0)\|E\{\tilde{X}(t)\}\| + \frac{MNm_1}{(t+1)^{\tau_2+\rho_0}} = \Big(1-\frac{am_0}{(t+1)^{\tau_1}}\Big)\|E\{\tilde{X}(t)\}\| + \frac{MNm_1}{(t+1)^{\tau_2+\rho_0}}.$ (15)

Without loss of generality, we suppose $1-\alpha(t)m_0 \ge 0$ for $t \ge t_1$; otherwise we can choose a larger $t_1$ while maintaining the value of $m_0$. Due to $\tau_2 + \rho_0 > \tau_1$, according to Lemma 3.3 and (15), $\|E\{\tilde{X}(t)\}\|$ goes to zero as $t$ goes to infinity. ∎

We can see from Theorem 3.1 that the initial estimation biases of agents can be removed by the estimator (5) as time goes to infinity.

###### Lemma 3.4.

Under Assumptions 3.1 - 3.4, if $\rho_0 > \tau_1 - \tau_2$, there exists a finite random variable $R$ such that

 $P\Big(\sup_{t\ge 0}\|X(t)\| \le R\Big) = 1.$
###### Proof.

Due to page limitation, the proof is omitted. ∎

To study the convergence of the estimates in (5), we first introduce a centralized estimator with strong consistency, i.e., whose estimate sequence converges to the true parameter almost surely. Then, we prove that the estimates of (5) reach consensus and that the consensus value asymptotically converges to the estimate of the centralized estimator. In this way, the strong consistency of the estimates in (5) is proved.

###### Definition 3.1.

(Centralized Linear Estimator) A centralized linear estimator has the following form

 $u(t+1) = u(t) + \frac{\alpha_c(t)}{N}\sum_{i=1}^{N} H_i^{\mathrm T}\big(y_i(t) - H_i u(t)\big),$ (16)

where $\alpha_c(t) = \frac{a_c}{t+1}$ for some $a_c > 0$ and $u(0) \in \mathbb{R}^M$.
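A sketch of the centralized benchmark (16); the step form $\alpha_c(t) = a_c/(t+1)$ follows Definition 3.1 as we read it, and the data fed to it below are illustrative:

```python
import numpy as np

def centralized_step(u, ys, Hs, t, a_c=1.0):
    """One update of the centralized estimator (16), with alpha_c(t) = a_c/(t+1)."""
    alpha_c = a_c / (t + 1)
    avg_innovation = sum(H.T @ (y - H @ u) for y, H in zip(ys, Hs)) / len(Hs)
    return u + alpha_c * avg_innovation
```

Run on noise-free data, $u(t)$ contracts toward $\theta$ at every step; with noisy data it serves as the benchmark that the distributed estimates track (Theorem 3.3 below).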

###### Lemma 3.5.

[2] For the centralized linear estimator given in Definition 3.1, the following results hold.

1) The estimate sequence $\{u(t)\}$ is strongly consistent w.r.t. $\theta$, i.e.,

 $P\Big(\lim_{t\to\infty}u(t) = \theta\Big) = 1.$ (17)

2) Let $a_c > \frac{N}{2\lambda_{\min}(G)}$. Then the sequence is asymptotically normal, i.e.,

 $\sqrt{t+1}\,(u(t)-\theta) \Rightarrow \mathcal{N}(0, S_c),$

where

 $S_c = \frac{a_c^2}{N^2}\int_0^\infty e^{\Sigma_1 v}S_1 e^{\Sigma_1^{\mathrm T} v}\,dv,\quad \Sigma_1 = -\frac{a_c}{N}G + \frac{1}{2}I_M,\quad S_1 = (\mathbf{1}_N\otimes I_M)^{\mathrm T}\bar{D}_H R_v \bar{D}_H^{\mathrm T}(\mathbf{1}_N\otimes I_M).$

Define $x_{\mathrm{avg}}(t) = \frac{1}{N}\sum_{i=1}^{N}x_i(t)$. In the following lemma, we provide conditions under which the estimates of the agents reach consensus.

###### Lemma 3.6.

Let Assumptions 3.1 - 3.4 hold. Then, for any $\tau_0$ satisfying

 $0 \le \tau_0 < \min\Big\{\rho_0+\tau_2,\; \tau_1-\frac{1}{2+\epsilon_1}\Big\} - \tau_2,$

we have

 $P\Big(\lim_{t\to\infty}(t+1)^{\tau_0}\|x_i(t) - x_{\mathrm{avg}}(t)\| = 0\Big) = 1, \quad \forall i \in \mathcal{V}.$
###### Proof.

Due to page limitation, the proof is omitted. ∎

Next, we show that the consensus value, i.e., the average of the estimates, converges to the estimate of the centralized estimator in (16).

###### Lemma 3.7.

Let Assumptions 3.1 - 3.4 hold. Suppose $u(t)$ is the centralized estimate given in Definition 3.1 with $a_c = a$ and $u(0) = x_{\mathrm{avg}}(0)$. If $\tau_1 = 1$, further suppose $a > \frac{N\tau_0}{\lambda_{\min}(G)}$. Then, for any $\tau_0$ satisfying

 $0 \le \tau_0 < \min\Big\{\rho_0+\tau_2,\; \tau_1-\frac{1}{2+\epsilon_1}\Big\} - \tau_2,$

we have

 $P\Big(\lim_{t\to\infty}(t+1)^{\tau_0}\|x_{\mathrm{avg}}(t) - u(t)\| = 0\Big) = 1.$
###### Proof.

Due to page limitation, the proof is omitted. ∎

The strong consistency of estimator (5) is provided in the following theorem.

###### Theorem 3.2.

(Strong Consistency) Consider the algorithm (5) and let Assumptions 3.1 - 3.4 hold. If $\tau_1 - \tau_2 > \frac{1}{2+\epsilon_1}$, the estimate sequence $\{x_i(t)\}$ is strongly consistent w.r.t. $\theta$, i.e.,

 $P\Big(\lim_{t\to\infty}x_i(t) = \theta\Big) = 1, \quad \forall i \in \mathcal{V}.$ (18)
###### Proof.

According to Lemmas 3.5 - 3.7, taking $\tau_0 = 0$, the conclusion holds. ∎

In the next theorem, we characterize the speed at which the estimates given by (5) converge to the estimate of the centralized estimator in (16).

###### Theorem 3.3.

(Centralized Approximation) Let the algorithm (5) share the same parameter setting as the centralized estimator in Definition 3.1, in that $a_c = a$ and $u(0) = x_{\mathrm{avg}}(0)$. Assume Assumptions 3.1 - 3.4 hold, and if $\tau_1 = 1$, further suppose

 $a > \frac{N\tau_0}{\lambda_{\min}(G)}.$ (19)

Then, for each $\tau_0$ subject to

 $0 \le \tau_0 < \min\Big\{\rho_0+\tau_2,\; \tau_1-\frac{1}{2+\epsilon_1}\Big\} - \tau_2,$

we have

 $P\Big(\lim_{t\to\infty}(t+1)^{\tau_0}\|x_i(t) - u(t)\| = 0\Big) = 1, \quad \forall i \in \mathcal{V}.$ (20)
###### Proof.

According to Lemma 3.6 and Lemma 3.7, the conclusion holds. ∎

Communication frequency is an essential aspect of event-triggered distributed estimation. In the following theorem, the triggering interval of the event defined in (3) is investigated as time goes to infinity.

###### Theorem 3.4.

(Triggering Interval) Let $t_k^i$ be the $k$th triggering instant of agent $i$, let Assumptions 3.1 - 3.4 hold, and suppose all agents share the same threshold parameter, i.e., $\rho_i = \rho_0$, $\forall i \in \mathcal{V}$. If

 $\rho_0 < \tau_1 - \frac{1}{2+\epsilon_1},$ (21)

then for each agent the time interval between two successive triggering instants goes to infinity, i.e.,

 $P\Big(\lim_{k\to\infty}(t_{k+1}^i - t_k^i) = \infty\Big) = 1, \quad \forall i \in \mathcal{V}.$ (22)
###### Proof.

Note that $t_k^i$ is the $k$th triggering instant of agent $i$; we focus on analyzing the length of the interval $[t_k^i, t_{k+1}^i)$ in the following.

According to (7), for $t \in [t_k^i, t_{k+1}^i)$, we have

 $x_i(t+1) - x_i(t_k^i) = x_i(t) - x_i(t_k^i) + \beta(t)\sum_{j\in N_i}(x_j(t)-x_i(t)) + \alpha(t)H_i^{\mathrm T}(y_i(t)-H_ix_i(t)) + \beta(t)\sum_{j\in N_i}(x_j(t_k^j)-x_j(t)).$ (23)

Taking norms on both sides of (23) yields

 $\|x_i(t+1)-x_i(t_k^i)\| \le \|x_i(t)-x_i(t_k^i)\| + \beta(t)\Big\|\sum_{j\in N_i}(x_j(t)-x_i(t))\Big\| + \alpha(t)\|H_i^{\mathrm T}(y_i(t)-H_ix_i(t))\| + \beta(t)\Big\|\sum_{j\in N_i}(x_j(t_k^j)-x_j(t))\Big\|.$ (24)

According to Lemma 3.6, for any admissible $\tau_0$, we have

 $P\Big(\lim_{t\to\infty}(t+1)^{\tau_0}\|x_i(t)-x_j(t)\| = 0\Big) = 1, \quad \forall i,j \in \mathcal{V}.$

Then there exists a scalar $c_3 > 0$ such that

 $\Big\|\sum_{j\in N_i}(x_j(t)-x_i(t))\Big\| \le \frac{c_3}{(t+1)^{\tau_0}}.$ (25)

By Lemma 3.4 and Assumption 3.3, there exist a scalar $c_4 > 0$ and a sufficiently small $\delta > 0$ such that

 $\|H_i^{\mathrm T}(y_i(t)-H_ix_i(t))\| \le c_4(t+1)^{\frac{1}{2+\epsilon_1}+\delta}.$ (26)

According to Lemma 3.1, there exists a scalar $c_5 > 0$ such that

 $\Big\|\sum_{j\in N_i}(x_j(t_k^j)-x_j(t))\Big\| \le \frac{c_5}{(t+1)^{\rho_0}}.$ (27)

Substituting (25), (26), and (27) into (24), we have

 $\|x_i(t+1)-x_i(t_k^i)\| \le \|x_i(t)-x_i(t_k^i)\| + \frac{\beta(t)c_3}{(t+1)^{\tau_0}} + \alpha(t)c_4(t+1)^{\frac{1}{2+\epsilon_1}+\delta} + \frac{\beta(t)c_5}{(t+1)^{\rho_0}}$ (28)
 $\le \|x_i(t)-x_i(t_k^i)\| + \frac{c_{31}}{(t+1)^{\tau_0+\tau_2}} + \frac{c_{41}}{(t+1)^{\tau_1-\frac{1}{2+\epsilon_1}-\delta}} + \frac{c_{51}}{(t+1)^{\rho_0+\tau_2}}.$ (29)

Considering the range of $\tau_0$ in Lemma 3.6, by choosing a sufficiently small $\delta$, we have $\tau_0+\tau_2 \le \min\{\rho_0+\tau_2,\; \tau_1-\frac{1}{2+\epsilon_1}-\delta\}$. Denote $L_k^i = t_{k+1}^i - t_k^i$. Then, there exist a sufficiently large integer $t_2$ and a scalar $c_6 > 0$ such that, for $t_k^i \ge t_2$,

 $\|x_i(t_{k+1}^i)-x_i(t_k^i)\| = \|x_i(t_k^i+L_k^i)-x_i(t_k^i)\| \le \cdots \le c_6\sum_{s=t_k^i}^{t_k^i+L_k^i-1}\frac{1}{(s+1)^{\tau_0+\tau_2}} \le \frac{c_6 L_k^i}{(t_k^i+1)^{\tau_0+\tau_2}}.$

A necessary condition for the event in (3) to be triggered for agent $i$ at $t_{k+1}^i$ is

 $\frac{c_6 L_k^i}{(t_k^i+1)^{\tau_0+\tau_2}} > \frac{1}{(t_{k+1}^i+1)^{\rho_0}} \iff \frac{c_6 L_k^i}{(t_k^i+1)^{\tau_0+\tau_2}} > \frac{1}{(t_k^i+1+L_k^i)^{\rho_0}} \iff \frac{(t_k^i+1+L_k^i)^{\rho_0}}{(t_k^i+1)^{\tau_0+\tau_2}}L_k^i > \frac{1}{c_6}.$ (30)

Since $\tau_0$ can be chosen arbitrarily close to its upper bound, there exists a scalar $\bar{\delta} > 0$ such that

 $\tau_0+\tau_2 = \min\Big\{\rho_0+\tau_2,\; \tau_1-\frac{1}{2+\epsilon_1}\Big\} - \bar{\delta}.$

Recalling the condition (21) and letting $\bar{\delta}$ go to zero, we obtain $\tau_0+\tau_2 > \rho_0$. To ensure the satisfaction of (22), we need to show that $L_k^i$ goes to infinity as $k$ goes to infinity. By contradiction, suppose there is an integer $\bar{L}_i$ such that $L_k^i \le \bar{L}_i$ for infinitely many $k$. A necessary condition of (30) is

 $\frac{(t_k^i+1+\bar{L}_i)^{\rho_0}}{(t_k^i+1)^{\tau_0+\tau_2}} > \frac{1}{c_6\bar{L}_i},$ (31)

which cannot be satisfied when $t_k^i$ is very large, due to $\tau_0+\tau_2 > \rho_0$. Therefore, $L_k^i$ goes to infinity as $k$ goes to infinity. ∎

## 4 Numerical Simulation

In this section, we provide a numerical simulation to verify the effectiveness of the distributed estimator based on the event-triggered communication scheme proposed in this paper.

Consider an undirected network with four agents and adjacency matrix $\mathcal{A}$, with the true parameter vector $\theta$. The observation matrices and the initial parameter estimates of these agents have the following forms:

 $H_1 = [1, 0]^{\mathrm T},\quad H_2 = [0, 1]^{\mathrm T},\quad H_3 = [1, 1]^{\mathrm T},\quad H_4 = [1, 2]^{\mathrm T},$
 $x_1(0) = [10, 20]^{\mathrm T},\quad x_2(0) = [10, -10]^{\mathrm T},\quad x_3(0) = [10, -20]^{\mathrm T},\quad x_4(0) = [20, -10]^{\mathrm T}.$

We consider the time sequence $t = 0, 1, 2, \cdots$. The step sizes $\alpha(t)$, $\beta(t)$ and the threshold parameters $\rho_i$, $i \in \mathcal{V}$, are chosen to satisfy Assumption 3.4 and condition (21). The noises of each agent are i.i.d. Gaussian with zero mean, and the noises of different agents are spatially independent.

Under the above setting, by employing the distributed estimator (5) with the triggering scheme (4) and the centralized estimator (16), we obtain the simulation results in Fig. 1, Fig. 2, and Fig. 3. We see from Fig. 1 that the average estimates asymptotically converge to the true parameters of the system. Fig. 2 shows the consistency of the estimator for each agent. Besides, we see that the centralized estimator has a faster convergence speed, since it utilizes all measurements. The triggering time instants satisfying the triggering scheme (4) during the whole estimation process are plotted in Fig. 3; the communication rate is defined as the ratio of the number of triggering instants to the number of time-driven communication instants. Thus, the communication frequency of the agents is tremendously reduced with guaranteed convergence properties.
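For readers who want to reproduce the qualitative behavior, here is a compact end-to-end simulation in the spirit of this section. The ring topology, the true parameter value, the unit noise variance, and the step-size and threshold exponents are our assumptions where the corresponding values are not recoverable from the text; only the $H_i$ and $x_i(0)$ above are taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -1.0])                   # true parameter (assumed value)
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]]), np.array([[1.0, 2.0]])]
x = np.array([[10.0, 20.0], [10.0, -10.0],
              [10.0, -20.0], [20.0, -10.0]])    # initial estimates from the paper
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]    # ring topology (assumed)
N, T, rho = 4, 50_000, 0.55                     # rho satisfies (21) for Gaussian noise
alpha = lambda t: 0.5 / (t + 1) ** 0.7          # tau1 = 0.7 (assumed)
beta = lambda t: 0.2 / (t + 1) ** 0.5           # tau2 = 0.5 < tau1 (assumed)

x_hat = x.copy()                                # last broadcast estimates
triggers = 0
for t in range(T):
    y = [H[i] @ theta + rng.normal(0.0, 1.0, 1) for i in range(N)]
    for j in range(N):                          # triggering scheme (3)-(4)
        if np.linalg.norm(x[j] - x_hat[j]) > (t + 1) ** (-rho):
            x_hat[j] = x[j].copy()
            triggers += 1
    x_new = np.empty_like(x)
    for i in range(N):                          # estimator update (5)
        consensus = sum(x_hat[j] - x[i] for j in neighbors[i])
        innovation = H[i].T @ (y[i] - H[i] @ x[i])
        x_new[i] = x[i] + beta(t) * consensus + alpha(t) * innovation
    x = x_new

err = max(np.linalg.norm(x[i] - theta) for i in range(N))
rate = triggers / (N * T)    # fraction of agent-instants with a broadcast
print(f"max estimation error: {err:.3f}, communication rate: {rate:.3f}")
```

With these (assumed) parameters, the estimates of all four agents approach $\theta$ while only a fraction of the time instants involve a broadcast, illustrating Theorem 3.2 and Theorem 3.4 qualitatively.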

## 5 Conclusion

In this paper, a distributed parameter estimation problem with intermittent communications was studied. First, we proposed an event-triggered communication scheme for each agent, which compares a decaying threshold with the difference between the current estimate and the latest one sent out to neighboring agents. Then, we analyzed the main estimation properties, including asymptotic unbiasedness and strong consistency. We also showed that, with probability one, for every agent the time interval between two successive triggering instants goes to infinity as time goes to infinity.

## References

• [1] K. R. Rad and A. Tahbaz-Salehi, “Distributed parameter estimation in networks,” in IEEE Conference on Decision and Control, pp. 5050–5055, 2010.
• [2] S. Kar and J. M. Moura, “Convergence rate analysis of distributed gossip (linear parameter) estimation: Fundamental limits and tradeoffs,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 4, pp. 674–690, 2011.
• [3] S. Kar, J. M. Moura, and H. V. Poor, “Distributed linear parameter estimation: Asymptotically efficient adaptive strategies,” SIAM Journal on Control and Optimization, vol. 51, no. 3, pp. 2200–2229, 2013.
• [4] F. S. Cattivelli and A. H. Sayed, “Diffusion strategies for distributed Kalman filtering and smoothing,” IEEE Transactions on Automatic Control, vol. 55, no. 9, pp. 2069–2084, 2010.
• [5] Q. Zhang and J.-F. Zhang, “Distributed parameter estimation over unreliable networks with markovian switching topologies,” IEEE Transactions on Automatic Control, vol. 57, no. 10, pp. 2545–2560, 2012.
• [6] S. Kar, J. M. Moura, and K. Ramanan, “Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication,” IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3575–3605, 2012.
• [7] K. You, L. Xie, and S. Song, “Asymptotically optimal parameter estimation with scheduled measurements,” IEEE Transactions on Signal Processing, vol. 61, no. 14, pp. 3521–3531, 2013.
• [8] D. Shi, T. Chen, and L. Shi, “Event-triggered maximum likelihood state estimation,” Automatica, vol. 50, no. 1, pp. 247–254, 2014.
• [9] Y. Mo and B. Sinopoli, “Kalman filtering with intermittent observations: Tail distribution and critical value,” IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 677–689, 2012.
• [10] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. S. Sastry, “Kalman filtering with intermittent observations,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1453–1464, 2004.
• [11] D. Han, K. You, L. Xie, J. Wu, and L. Shi, “Optimal parameter estimation under controlled communication over sensor networks,” IEEE Transactions on Signal Processing, vol. 63, no. 24, pp. 6473–6485, 2015.
• [12] D. Han, Y. Mo, J. Wu, S. Weerakkody, B. Sinopoli, and L. Shi, “Stochastic event-triggered sensor schedule for remote state estimation,” IEEE Transactions on Automatic Control, vol. 60, no. 10, pp. 2661–2675, 2015.
• [13] J. Weimer, J. Araújo, and K. H. Johansson, “Distributed event-triggered estimation in networked systems,” IFAC Proceedings Volumes, vol. 45, no. 9, pp. 178–185, 2012.
• [14] X. He, C. Hu, W. Xue, and H. Fang, “On event-based distributed Kalman filter with information matrix triggers,” in IFAC World Congress, pp. 14873–14878, 2017.
• [15] G. Battistelli, L. Chisci, and D. Selvi, “A distributed Kalman filter with event-triggered communication and guaranteed stability,” Automatica, vol. 93, pp. 75–82, 2018.
• [16] X. He, C. Hu, Y. Hong, L. Shi, and H. Fang, “Distributed Kalman filters with state equality constraints: Time-based and event-triggered communications,” arXiv preprint arXiv:1711.05010, 2017.
• [17] M. Mesbahi and M. Egerstedt, Graph Theoretic Methods in Multiagent Networks. Princeton University Press, 2010.