
# FADE: Fast and Asymptotically efficient Distributed Estimator for dynamic networks

António Simões and João Xavier,  This work was supported in part by the Fundação para a Ciência e Tecnologia, Portugal, under Project UID/EEA/50009/2013 and Grant PD/BD/135013/2017. (Corresponding author: António Simões.) The authors are with the Instituto Superior Técnico, Universidade de Lisboa, 1649-004 Lisbon, Portugal, and also with the Institute for Systems and Robotics, Laboratory for Robotics and Engineering Systems, 1049-001 Lisbon, Portugal. (e-mail: asimoes@isr.ist.utl.pt; jxavier@isr.ist.utl.pt).
###### Abstract

Consider a set of agents that wish to estimate a vector of parameters of their mutual interest. For this estimation goal, agents can sense and communicate. When sensing, an agent measures (in additive gaussian noise) linear combinations of the unknown vector of parameters. When communicating, an agent can broadcast information to a few other agents, by using the channels that happen to be randomly at its disposal at the time.

To coordinate the agents towards their estimation goal, we propose a novel algorithm called FADE (Fast and Asymptotically efficient Distributed Estimator), in which agents collaborate at discrete time-steps; at each time-step, agents sense and communicate just once, while also updating their own estimate of the unknown vector of parameters.

FADE enjoys five attractive features: first, it is an intuitive estimator, simple to derive; second, it withstands dynamic networks, that is, networks whose communication channels change randomly over time; third, it is strongly consistent in that, as time-steps play out, each agent’s local estimate converges (almost surely) to the true vector of parameters; fourth, it is both asymptotically unbiased and efficient, which means that, across time, each agent’s estimate becomes unbiased and the mean-square error (MSE) of each agent’s estimate vanishes at the same rate as the MSE of the optimal estimator at an almighty central node; fifth, and most importantly, when compared with a state-of-the-art consensus+innovations (CI) algorithm, it yields estimates with outstandingly lower mean-square errors for the same number of communications—for example, in a sparsely connected network model, we find through numerical simulations that the reduction can be dramatic, reaching several orders of magnitude.

Distributed estimation, linear-gaussian models, dynamic networks, consensus+innovations
a.s.: almost surely
EVD: eigenvalue decomposition
i.i.d.: independent and identically distributed
LMS: least mean square
ML: maximum likelihood
MVU: minimum variance and unbiased
SLLN: strong law of large numbers
WSN: wireless sensor network

## I Introduction

Data is increasingly collected by spatially distributed agents, the term agent meaning some physical device that measures data locally, say, a robot. Moreover, not only is the data at these agents being collected at an ever-growing rate and precision (as formerly pricey high-quality sensors such as high-resolution cameras have meanwhile become affordable commodities), but the number of agents collecting the data is soaring. Indeed, a present-day wireless sensor network for precision agriculture easily spans tens, if not hundreds, of agents, not to mention the blooming vehicular networks or mobile internet-of-things whose size will escalate to even larger scales [1]. This steady increase in both the volume of data and the number of its collectors, however, is at odds with the usual way of extracting information from data in distributed setups: centralized processing.

Centralized processing. In centralized processing, the data collected by the agents is usually first routed in raw form (or maybe slightly digested) to a special central agent that then performs the bulk of the needed computations on the incoming data to squeeze out the desired information. Centralized processing is poorly suited to the current trend of big data in distributed multi-agent systems, for centralized processing is too fragile and cumbersome. Fragile because, as soon as the central agent breaks down, the whole infrastructure of agents is rendered pointless, incapable of reasoning from the pouring data; cumbersome because, as data falls on the agents at increasing speeds and volumes, the capacity of the physical pathways that convey the data to the central agent must swell in tandem until, of course, this capacity hits a fundamental limit (such as the bandwidth of available wireless channels) and the show stops. This explains why centralized processing is gradually being eclipsed, giving way to growing research on a different approach for processing data that meshes better with current trends: distributed processing.

Distributed processing. The vision of distributed processing is to untether the data-collecting agents from the central agent and to have those agents recreate the centralized solution themselves, coordinated through short messages exchanged locally between them.

Thus, no central agent exists and no particular agent is key; all equally share the computation burden. As a result, distributed processing is more robust. If an agent breaks down, the whole infrastructure does not come to a halt: just the particular data stream of the faulty agent ceases to inform the solution, with the remaining agents continuing to collaborate to reason from their collective data. In sum, sudden catastrophe in centralized processing (as happens when the central agent collapses) gives place to graceful degradation in distributed processing.

Distributed processing also does away with the problem of needing larger and larger capacities for the physical pathways that relay the voluminous collected data to the central agent, for the simple reason that the central agent is crossed out from the picture. Not that any form of communication from agents becomes unnecessary; agents do need to communicate to coordinate. But distributed processing aims at having the agents exchange short messages only, each message ideally about the size of the information one wishes to squeeze out of the data—a goal which requires modest capacities for the communication channels.

Closest related work on distributed estimation. Research on distributed processing develops vigorously along several exciting threads, among them distributed optimization, filtering, detection, and estimation. The literature is too vast to discuss at length here, so we point the reader to references [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], a cross-section of representative work.

This paper contributes to the thread of distributed estimation. To put our work in perspective, we now single out from this thread some of the closest research. Most of such research assumes a common backdrop: a set of agents is linked by a communication network, with the agents collecting local data about the same vector of parameters; the challenge is to create an algorithm that coordinates the agents by specifying what they should communicate to each other along time over the available communication channels so that the agents arrive at an estimate of the vector of parameters, preferably an estimate as good as a central node would provide for the same time horizon.

Against this shared backdrop, details vary. For example, [15] considers a linear-gaussian measurement model at each agent, the measurement matrices being time-variant and the observation noise possibly correlated across agents, though the communication network is assumed static, not dynamic (so, communication channels do not change along time). For this setup, the authors work out a distributed estimator, for which they are able to secure a much desired feature: they prove that the proposed estimator is efficient, in that its mean-square error (MSE) decays at the same rate as the MSE of the centralized estimator, as time unfolds.

Scenarios in which the vector of parameters changes over time, possibly over dynamic networks, can be tackled by the successful suite of distributed algorithms of the diffusion type, developed, for instance, in [16, 17, 9, 10]. Being able to track time-varying parameters, this type of algorithm is fitted to distributed least-mean-squares (LMS) or Kalman filtering applications, in which these diffusion algorithms can, in fact, be proved to be stable in the mean-square sense.

We believe that the work that most resembles ours, however, is the accomplished consensus+innovations (CI) algorithm, put forward in [7]. Indeed, similar to what we assume in this paper, the authors of [7] consider a static vector of parameters, measured at each agent through a noise-additive linear model, and a dynamic communication network linking the agents that changes randomly over time. Thus, although the CI algorithm is applicable to a broader range of scenarios (e.g., gaussian noise is not assumed in [7]), we will use it as a benchmark throughout the paper, both in section III, where we contrast its form with our FADE algorithm, and in section V, where we compare their performance in practice. (Other distributed algorithms built around the consensus+innovations principle, and not necessarily for estimation purposes, can be found in [7, 8, 6, 18, 19].)

Contributions. We contribute a distributed estimator called FADE (Fast and Asymptotically efficient Distributed Estimator), which coordinates agents measuring a common vector of parameters in a linear-gaussian model and communicating with each other over a set of channels that changes randomly over time. FADE is endowed with several assets, both on the theoretical side and on the practical side.

On the theoretical side, FADE is simple to derive, emerging from the central estimator after a couple of intuitive steps. (By comparison, the blueprint of the CI algorithm [7], say, is somewhat more involved to arrive at, resorting to the machinery of stochastic approximation with mixed time-scales—perhaps a complexity price to pay for its more general applicability.) More importantly, FADE comes with strong convergence guarantees: it is strongly convergent (estimates converge to the true parameters with probability one); it is asymptotically unbiased (the bias of the estimates vanishes with time); and it is asymptotically efficient (the MSE of the estimates goes to zero at the same rate as the MSE of the centralized estimator, as time proceeds). Our theoretical proofs use tools from martingale probability theory.

On the practical side, FADE is shown, through numerical simulations, to supply estimates to the agents that are notably more accurate than the ones yielded by the CI algorithm, for the same time period. In other words: although both FADE and the CI algorithm are asymptotically efficient, numerical simulations show that FADE reaches the asymptotic optimal performance significantly sooner. As an illustration, in one of the simulations we find that the MSE of FADE is lower than the MSE of CI by five orders of magnitude.

Paper organization. We organize this paper as follows. In section II, we detail the measurement and communication models, along with usual blanket assumptions, thus stating precisely the problem at hand. The problem is tackled in section III, in which we derive our solution—the FADE algorithm—whose strong theoretical convergence properties are laid out in section IV. In section V, we compare by numerical simulation the accuracy of FADE with the accuracy of the CI algorithm, both in a dense and a sparse network model. Section VI closes the paper, with parting conclusions. Appendices give the proofs of theorems stated throughout the paper.

## II Problem statement

The vector of parameters that the agents wish to estimate is θ ∈ R^d.

The measurement model: how agents measure θ. Each agent measures a linear map of θ in additive gaussian noise. Specifically, at time-step t, agent n measures

 y_n(t) = H_n \theta + v_n(t), \qquad (1)

where H_n is the linear map of agent n, and v_n(t) is gaussian noise, detailed in the following assumption.

Assumption 1. (Gaussian noise) For each agent n and time-step t, the random variable v_n(t) is a sample of a gaussian distribution with zero mean and unit covariance, written v_n(t) ~ N(0, I). These random variables are independent across agents and time; that is, v_n(t) is independent of v_m(s) if n ≠ m or t ≠ s.

As an aside, note that if the covariance of the noise were other than the identity matrix, say, Σ_n, then the unit-covariance feature could be restored at once by premultiplying y_n(t) with Σ_n^{-1/2} (redefining H_n as Σ_n^{-1/2} H_n in the process). In sum, this convenient assumption entails no loss of generality.

We also need a basic observability assumption stating that θ can be identified from all the agents’ measurements. Specifically, stack the observations (1) in the column vector y(t) = (y_1(t), …, y_N(t)); this gives y(t) = H θ + v(t), where H is the matrix that stacks the H_n vertically and v(t) = (v_1(t), …, v_N(t)). The vector y(t) can be seen as a network-wide measurement: it collects the measurements at time t from all the agents. We assume that θ is identifiable from these network-wide measurements, which is equivalent to assuming that the network-wide sensing matrix H is full column-rank.

Assumption 2. (Global observability) The matrix H is full column-rank.

For the scalar case d = 1 (so that each H_n is a scalar sensing gain h_n), this assumption just means that some sensing gain h_n is nonzero. In general, note that this assumption exempts the local sensing matrices H_n from any particular structure; e.g., all sensing matrices could even lack full column-rank, which would make θ unidentifiable from any single agent (if H_n is not full column-rank, then agent n cannot discern between θ and θ + δ, where δ is any nonzero vector in the kernel of H_n). Assumption 2 ensures that θ can be identified whenever agents work as a team.
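As an illustrative sanity check (a sketch, with made-up sensing matrices), global observability boils down to a rank test on the stacked matrix H, and it can hold even when every local H_n is rank-deficient:

```python
import numpy as np

# Sketch with made-up sensing matrices: parameter dimension d = 2, three
# agents, each local H_n rank-deficient (a single row), so no agent can
# identify theta alone; the stacked H still has full column rank.
H_locals = [
    np.array([[1.0, 0.0]]),   # agent 1 senses only the first entry of theta
    np.array([[0.0, 1.0]]),   # agent 2 senses only the second entry
    np.array([[1.0, 1.0]]),   # agent 3 senses their sum
]

H = np.vstack(H_locals)       # network-wide sensing matrix
d = H.shape[1]

globally_observable = np.linalg.matrix_rank(H) == d
locally_observable = [np.linalg.matrix_rank(Hn) == d for Hn in H_locals]

print(globally_observable, any(locally_observable))   # True False
```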

The communication model: how agents can exchange information. The communication network linking the agents is modelled as an undirected graph that changes randomly over time: at time-step t, we call this graph

 G(t) = (V, E(t)). \qquad (2)

Here, V = {1, …, N} is the set of agents, and E(t) is the set of edges available at time step t. An edge between agents n and m models a communication channel between them; note that, because the graph is undirected, the channels are bidirectional (which means that if, at a given time step t, agent n can send information to agent m, then the reverse also holds: agent m can send information to agent n as well).

We let channels appear and disappear randomly over time, to model agents that move around or data packets that get lost; therefore, the terms of the sequence of edge-sets E(t), t ≥ 1, from (2) vary along time. Note, however, that each term in this sequence necessarily takes values in a finite collection of edge-sets, say, a collection with K edge-sets: E = {E_1, …, E_K}. Such a collection is finite because each E(t) must be an edge-set over the node-set V and, since V is fixed, the collection of all edge-sets on V is itself finite. Of course, for a particular application scenario, not all possible edge-sets need turn out: in fact, the collection E is the subset of those edge-sets that have a strictly positive probability of turning out for this application scenario.

Finally, we assume that the sequence E(t), t ≥ 1, satisfies a standard and basic property called average connectivity.

Assumption 3. (Average connectivity) The random sequence of edge-sets E(t), t ≥ 1, is independent and identically distributed (i.i.d.). Each edge-set takes the value E_k in the finite collection E with probability π_k, where π_k > 0 and π_1 + ⋯ + π_K = 1. Moreover, the average edge-set E_1 ∪ ⋯ ∪ E_K is connected, that is, the graph (V, E_1 ∪ ⋯ ∪ E_K) is connected.

Recall that a graph is connected if there is a path between any two nodes, the path consisting possibly of many edges. Assumption 3 means that the average communication graph is connected. In other words, the assumption means that if we overlay all edge-sets with positive probability of occurring, a connected graph results. Of course, at each time step t the graph G(t) is allowed to be disconnected; such is the case, for instance, in single-neighbor gossip-like protocols where only two neighbour agents talk at a time (in such a case, each E_k contains only one edge).
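The overlay test above is easy to run in practice; here is a minimal sketch (node names and edge-sets are made up) that unions the edge-sets and checks connectivity by graph search:

```python
# Sketch: check the connectivity condition of Assumption 3 by overlaying
# all edge-sets that occur with positive probability and testing whether
# the resulting graph is connected.
def union_is_connected(nodes, edge_sets):
    adj = {v: set() for v in nodes}
    for E in edge_sets:                 # overlay every edge-set E_k
        for (u, v) in E:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:                        # DFS from an arbitrary node
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(nodes)

# Gossip-like scenario: each E_k holds a single edge, so every G(t) is
# disconnected, yet the overlay {1-2, 2-3, 3-4} is connected.
nodes = {1, 2, 3, 4}
edge_sets = [{(1, 2)}, {(2, 3)}, {(3, 4)}]
print(union_is_connected(nodes, edge_sets))   # True
```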

The problem addressed in this paper. We address the problem of creating an algorithm that runs at each agent and that, by using the locally available sensing and communication resources, provides an estimate—as accurate as possible—of θ at each agent. For this problem we propose FADE, a Fast and Asymptotically efficient Distributed Estimator, which we derive in the next section.

## III Deriving FADE

FADE is an intuitive algorithm, simple to derive. For clarity, we first focus in section III-A on a scalar parameter, θ ∈ R; the extension to the vector case, θ ∈ R^d with d > 1, is plain and appears in section III-B.

### III-A FADE algorithm for a scalar parameter

We derive FADE in a progression of three easy steps, starting from the stance of an almighty central node.

Step 1. The optimal algorithm at a central node. We start by deriving the optimal estimator. This is the estimator that could run only at a central node—a fictitious, almighty node that would know instantaneously the measurements of all the agents.

Such optimal estimator is the minimum variance and unbiased (MVU) estimator or, in our linear-gaussian setup, also the maximum likelihood (ML) estimator. It is given by a weighted combination of the average measurements of the agents; specifically, at time t, it is given by

 \hat{\theta}(t) = \frac{1}{N} \sum_{m=1}^{N} c_m \bar{y}_m(t), \qquad (3)

where c_m = h_m / ((1/N) \sum_{i=1}^{N} h_i^2) is the weight of agent m (with h_m the scalar sensing gain of agent m), and

 \bar{y}_m(t) = \frac{1}{t} \sum_{s=1}^{t} y_m(s) \qquad (4)

is the average measurement of agent m by time t. We skip the details on how to obtain the optimal estimator (3) because they are well-known (e.g., see [20]).

Now, we can write (4) in the recursive form ȳ_m(t) = ȳ_m(t−1) + (1/t)(y_m(t) − ȳ_m(t−1)); and plugging this recursion in (3) gives the following update for the optimal estimator:

 \hat{\theta}(t) = \frac{1}{N} \sum_{m=1}^{N} \left( \hat{\theta}(t-1) + \frac{1}{t} c_m \left( y_m(t) - \bar{y}_m(t-1) \right) \right), \qquad (5)

for t ≥ 1, where we set θ̂(0) = 0 and ȳ_m(0) = 0 for all m.

Update (5) reveals an interesting feature: at time step t, the optimal estimator in (5) needs only to know from each agent m the number c_m(y_m(t) − ȳ_m(t−1)). This means that, besides a central node, the optimal estimator can also run in a certain distributed scenario—a scenario in which the communication graph is static and complete, as we show in the next step.
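The recursion behind (5) is just the standard running-mean update; a quick check (with synthetic data) that it reproduces the batch average in (4):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=100)   # one agent's measurement stream

ybar = 0.0                        # \bar y_m(0) = 0
for t, yt in enumerate(y, start=1):
    ybar += (yt - ybar) / t       # recursive form of (4)

print(np.isclose(ybar, y.mean()))   # True
```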

Step 2. The optimal algorithm in a static, complete graph. Consider a static, complete graph. Being static, its edges are fixed over time. Being complete, it contains all possible edges, that is, any pair of agents is linked by an edge. In this graph any given agent n can obtain whatever information it needs from any other agent m at any time step; agent m has only to send that information through the channel linking agents n and m. In particular, agent m can send its current summand θ̂_m(t−1) + (1/t) c_m (y_m(t) − ȳ_m(t−1)). So, any agent n can carry out the optimal update (5).

Letting, then, θ̂_n(t) be the estimate at agent n in such a graph, we have

 \hat{\theta}_n(t) = \frac{1}{N} \sum_{m=1}^{N} \left( \hat{\theta}_m(t-1) + \frac{1}{t} c_m \left( y_m(t) - \bar{y}_m(t-1) \right) \right), \qquad (6)

with θ̂_n(0) = 0. Note that the agents’ estimates remain equal through time, θ̂_1(t) = ⋯ = θ̂_N(t), and recreate the optimal update in (5).

Let us pass to a more convenient vector form. Stack all the agents’ estimates in the vector θ̂(t) = (θ̂_1(t), …, θ̂_N(t)). It follows from (6) that

 \hat{\theta}(t) = J \left( \hat{\theta}(t-1) + \frac{1}{t} C \left( y(t) - \bar{y}(t-1) \right) \right), \qquad (7)

where J = (1/N) 1 1^T (with 1 = (1, …, 1) ∈ R^N) is the consensus matrix, C is a diagonal matrix with mth diagonal entry equal to c_m, y(t) = (y_1(t), …, y_N(t)), and ȳ(t) = (ȳ_1(t), …, ȳ_N(t)).

Step 3. The FADE algorithm in a general graph. The optimal estimator (7) is unable to run in a general graph changing over time. Indeed, let E(t) be the set of available edges for communication at time step t. At this time step, only a few communication channels typically link a given agent n to other agents. Specifically, agent n can receive information only from those agents m for which the (undirected) edge {n, m} is in E(t), a subset of agents called the neighborhood of agent n at time step t and denoted by N_n(t). Sadly, the update (7) requires agent n to receive information, not just from some neighbor agents, but from all agents. As such, the update (7) cannot run in a general graph.

Inspired by the form of the recursion (7), however, we now suggest a simple modification that can run in general graphs. The key idea is to note that the obstruction is just the consensus matrix J. Indeed, each entry of J is non-zero (to be more specific, J_{nm} = 1/N): this makes the update at agent n depend on the information at any other agent m. But were that entry zero, the update at agent n would no longer depend on the information at agent m.

Our idea now almost suggests itself: simply replace the consensus matrix J in (7) with a matrix, say W(t), that has the right sparsity at time t. That is, a matrix whose off-diagonal entry W_{nm}(t) is nonzero if and only if agents n and m are neighbors at time t: W_{nm}(t) ≠ 0 if and only if m ∈ N_n(t). We finally arrive at our FADE algorithm:

 \hat{\theta}(t) = W(t) \left( \hat{\theta}(t-1) + \frac{1}{t} C \left( y(t) - \bar{y}(t-1) \right) \right), \qquad (8)

or, expressed for a generic agent n:

 \hat{\theta}_n(t) = \sum_{m=1}^{N} W_{nm}(t) \left( \hat{\theta}_m(t-1) + \frac{1}{t} c_m \left( y_m(t) - \bar{y}_m(t-1) \right) \right). \qquad (9)

About the properties of the matrices W(t) that make FADE succeed, we will say much more in section III-C.

To conclude, we interpret each iteration of FADE (9) in terms of sensing and communication. The iteration (9) can be seen as unfolding in two halves: in the first half, each agent m updates its estimate by absorbing its measurement y_m(t), thus using its sensing resource; in the second half, each agent sends its update to its neighbors, thus using its communication resource. Upon receiving the updates from its neighbors, each agent n combines these updates with its own, weighting them by W_{nm}(t).
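To make the two halves concrete, here is a minimal scalar simulation of update (8); it is a sketch under made-up assumptions (four agents, invented sensing gains, and a fixed symmetric, row-stochastic weight matrix on a 4-cycle, whereas in the paper W(t) is random):

```python
import numpy as np

rng = np.random.default_rng(1)
N, theta = 4, 3.0
h = np.array([1.0, 0.5, 0.0, 2.0])   # sensing gains; agent 3 is blind (h = 0)
c = h / np.mean(h ** 2)              # scalar weights c_m = h_m / ((1/N) sum_i h_i^2)

# Fixed weight matrix on a 4-cycle: symmetric, row-stochastic,
# nonnegative, positive diagonal (assumption 4 with a single edge-set).
W = np.array([[2, 1, 0, 1],
              [1, 2, 1, 0],
              [0, 1, 2, 1],
              [1, 0, 1, 2]]) / 4.0

est = np.zeros(N)    # \hat\theta_n(0) = 0
ybar = np.zeros(N)   # running averages \bar y_n(0) = 0
for t in range(1, 5001):
    y = h * theta + rng.normal(size=N)        # local measurements (1)
    est = W @ (est + c * (y - ybar) / t)      # FADE update (8)
    ybar += (y - ybar) / t                    # recursive average (4)

# All agents end up near theta, including the blind agent 3,
# which learns theta purely through its neighbors.
print(np.max(np.abs(est - theta)))
```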

Comparing the FADE algorithm with the CI algorithm. We now compare the form of the FADE algorithm in (8) with the consensus+innovations (CI) algorithm from [7], holding in mind the optimal estimator in (7) as a reference point. We will see that, in a certain sense, FADE is closer to the idealized estimator (7). Let θ̃_n(t) be the estimate of the parameter that the CI algorithm produces at agent n and at time t; letting θ̃(t) = (θ̃_1(t), …, θ̃_N(t)) be the vector of estimates across the network, we have (see [7])

 \tilde{\theta}(t) = \left( I_N - \beta(t) L(t) \right) \tilde{\theta}(t-1) + \alpha(t) \, C \left( y(t) - H \tilde{\theta}(t-1) \right). \qquad (10)

Here, L(t) is the Laplacian matrix of the graph G(t), that is, a matrix filled with zeros save for the nondiagonal entries corresponding to edges in E(t) (which are filled with −1) and the diagonal entries (which are filled with the number of neighbors of the corresponding agent, thus making L(t) 1 = 0); the matrix C is diagonal with nth diagonal entry equal to c_n, and both α(t) and β(t) are positive step sizes which obey certain requirements: for our purposes, α(t) = 1/t and a suitably decaying β(t) do. Now, letting W(t) = I_N − β(t) L(t), we can rewrite (10) as

 \tilde{\theta}(t) = W(t) \tilde{\theta}(t-1) + \frac{1}{t} \, C \left( y(t) - H \tilde{\theta}(t-1) \right). \qquad (11)

When we compare both the FADE (8) and the CI (11) updates with the ideal update (7), two main differences spring up: first, even if the communication graph were static and fully connected, the CI algorithm would differ from the optimal form (7), whereas the FADE update and the optimal one would become the same (for we could assign W(t) = J); second, the rightmost term in the optimal recursion (7)—that is, the innovation term y(t) − ȳ(t−1)—finds itself replaced by y(t) − H θ̃(t−1) in the CI algorithm, whereas it is left intact in FADE. So, the algorithm most faithful to the idealized estimator (7) is FADE. This gives us an inkling of why FADE estimates with an accuracy closer to the accuracy of a central node outstandingly sooner than the CI algorithm, as the numerical simulations in section V attest.

### III-B FADE algorithm for a vector of parameters

Extending the FADE algorithm (8) to a vector of parameters θ ∈ R^d with d > 1 is plain. Recall the FADE update for a scalar parameter at each agent n, given in (9).

In the case of a vector of parameters, the scalar h_m becomes a matrix H_m ∈ R^{d_m × d}: recall (1). Accordingly, we can upgrade the scalar c_m to the matrix

 C_m = \left( \frac{1}{N} \sum_{i=1}^{N} H_i^T H_i \right)^{-1} H_m^T \in \mathbb{R}^{d \times d_m} \qquad (12)

and the FADE update to

 \hat{\theta}_n(t) = \sum_{m=1}^{N} W_{nm}(t) \left( \hat{\theta}_m(t-1) + \frac{1}{t} \, C_m \left( y_m(t) - \bar{y}_m(t-1) \right) \right), \qquad (13)

where θ̂_n(0) = 0. This is the general FADE algorithm at each agent n; or, repacked in matrix form,

 \hat{\theta}(t) = \left( W(t) \otimes I_d \right) \left( \hat{\theta}(t-1) + \frac{1}{t} \, C \left( y(t) - \bar{y}(t-1) \right) \right), \qquad (14)

where θ̂(t) = (θ̂_1(t), …, θ̂_N(t)), ⊗ is the Kronecker product, C is a block-diagonal matrix with mth diagonal block equal to C_m, y(t) = (y_1(t), …, y_N(t)), and ȳ(t) = (ȳ_1(t), …, ȳ_N(t)).
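A small check of the blocks in (12) (a sketch with random, made-up H_m): they satisfy (1/N) Σ_m C_m H_m = I_d, which is why combining the noiseless averaged measurements with these weights returns θ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
d, dims = 3, [1, 2, 2]                  # made-up parameter and measurement dims
H = [rng.normal(size=(dm, d)) for dm in dims]
N = len(H)

G = sum(Hn.T @ Hn for Hn in H) / N      # (1/N) sum_i H_i^T H_i
C = [np.linalg.solve(G, Hn.T) for Hn in H]   # the blocks C_m of (12)

# (1/N) sum_m C_m H_m = I_d, so feeding the noiseless averages
# ybar_m = H_m theta into the weighted combination returns theta.
S = sum(Cm @ Hm for Cm, Hm in zip(C, H)) / N
print(np.allclose(S, np.eye(d)))   # True
```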

### III-C The weight matrices W(t)

We now give conditions on the weight matrices W(t) that allow FADE to succeed.

First, recall that each W(t) mirrors the sparsity of the edge-set E(t). That is, the off-diagonal entry W_{nm}(t) is nonzero if and only if there is an edge between agents n and m in the edge-set E(t).

We assume that each W(t) is symmetric (W(t) = W(t)^T); has nonnegative entries (W_{nm}(t) ≥ 0); and is row-stochastic, i.e., the entries in each of its rows sum to one (W(t) 1 = 1). We also assume that each diagonal entry of W(t) is positive: W_{nn}(t) > 0 for all n. Note that the consensus matrix J (which W(t) is meant to replace in (7)) has all these properties.

Metropolis weights. A simple way to make sure these properties hold for each matrix W(t) is to choose the entries of W(t) as in the Metropolis rule [21]: W_{nm}(t) = 1/(1 + max{d_n(t), d_m(t)}), if agents n and m are neighbors in G(t); W_{nn}(t) = 1 − Σ_{m ∈ N_n(t)} W_{nm}(t); and W_{nm}(t) = 0, otherwise. Here, N_n(t) is the neighborhood of agent n (in the graph G(t)), and d_n(t) is the degree of agent n, i.e., the number of its neighbors (the cardinality of the set N_n(t)). The Metropolis rule allows each agent to compute its weights locally: agent n ignores the global edge-set E(t), a fit property in practice, for otherwise agent n would need to know which channels turned out in far-away corners of the network at each time step t; in the Metropolis rule, agent n needs to know only its own degree and the degrees of its neighbors (easy-to-get information that can be passed by the neighbors themselves).
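A minimal sketch of the Metropolis rule for one realized edge-set (agents indexed 0, …, N−1 here for convenience); the resulting matrix is symmetric, row-stochastic, and has a positive diagonal:

```python
# Sketch: Metropolis weights for one realized edge-set E(t).
# Each agent needs only its own degree and its neighbors' degrees.
def metropolis_weights(N, edges):
    nbrs = {n: set() for n in range(N)}
    for (i, j) in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    W = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in nbrs[i]:
            W[i][j] = 1.0 / (1 + max(len(nbrs[i]), len(nbrs[j])))
        W[i][i] = 1.0 - sum(W[i][j] for j in nbrs[i])
    return W

# Path graph 0-1-2-3 as an example edge-set.
W = metropolis_weights(4, [(0, 1), (1, 2), (2, 3)])
print(all(abs(sum(row) - 1.0) < 1e-12 for row in W))                 # True
print(all(W[i][j] == W[j][i] for i in range(4) for j in range(4)))   # True
```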

The Metropolis rule, then, associates to each edge-set E_k a weight matrix W_k. Because the edge-set E(t) is random and takes values in the finite collection of edge-sets {E_1, …, E_K} with probabilities π_1, …, π_K, it follows likewise that W(t) is random and takes values in a finite collection of weight matrices, say, {W_1, …, W_K}, with corresponding probabilities π_1, …, π_K. Note also that, as a consequence, the sequence W(t), t ≥ 1, is i.i.d. In sum, we have the following assumption on the weight matrices.

Assumption 4. (Weight matrices) Each weight matrix W(t) in (14) mirrors the sparsity of the edge-set E(t). Also, each W(t) has a positive diagonal and is symmetric, nonnegative, and row-stochastic. Moreover, each W(t) takes the value W_k in a finite set {W_1, …, W_K} with probability π_k (where π_k > 0 and π_1 + ⋯ + π_K = 1), and the sequence W(t), t ≥ 1, is i.i.d.

This assumption, together with assumption 3 on average connectivity of the edge-sets E(t), guarantees key properties for two matrices that will prove important in the theoretical analysis of FADE (see next section IV): the average weight matrix W̄ = E(W(t)) = Σ_{k=1}^{K} π_k W_k and the average off-consensus matrix

 \overline{\tilde{W}} = \mathrm{E}\left( \tilde{W}(t)^T \tilde{W}(t) \right) = \sum_{k=1}^{K} \pi_k \tilde{W}_k^T \tilde{W}_k, \qquad (15)

with

 \tilde{W}(t) = (I_N - J) \, W(t) \, (I_N - J) \qquad (16)

and W̃_k = (I_N − J) W_k (I_N − J). The key properties are stated in the following lemma.

###### Lemma 1

Let assumptions 3 and 4 hold. Then, W̄ is a primitive matrix and the average off-consensus matrix in (15) is a contraction matrix.

Recall that a primitive matrix is a square nonnegative matrix A such that, for some positive integer p, the matrix A^p is positive (i.e., each entry of A^p is a positive number). In our case, note that W̄ is a symmetric matrix, which makes all of its eigenvalues real-valued; moreover, because W̄ 1 = 1, one of these eigenvalues is the number 1, with the vector 1 as its associated eigenvector. Now, given that W̄ is a nonnegative matrix and that lemma 1 states W̄ is also a primitive matrix, it follows from standard Perron-Frobenius theory (see [22]) that 1 is, in fact, the dominant eigenvalue: if λ ≠ 1 is another eigenvalue of W̄, then |λ| < 1. Finally, the matrix in (15) being a contraction means that its spectral radius is strictly less than one.

The proof of lemma 1 is omitted because these general properties are well-known and follow from classic Perron-Frobenius theory: for example, [23] shows that W̄ is a primitive matrix in Proposition 1.4, and that the matrix in (15) is a contraction matrix in p. 35.

## IV Theoretical analysis of FADE

In this section, we state the two chief properties that FADE enjoys: almost sure convergence to the true vector of parameters, and asymptotic unbiasedness and efficiency.

We start by establishing almost sure (a.s.) convergence. Let θ̂_n(t), t ≥ 1, be the sequence of estimates of the vector of parameters that FADE produces at agent n: see the per-agent update in section III-B. Note that this sequence is random because both the measurements and the edges are random. Theorem 1 states that the sequence converges almost surely to the correct vector of parameters θ.

###### Theorem 1 (FADE converges almost surely)

Let assumptions 1, 2, 3, and 4 hold. Then, θ̂_n(t) → θ a.s., as t → ∞, for n = 1, …, N.

###### Proof:

See appendix A. \qed

We now pass to asymptotic unbiasedness and efficiency. Asymptotic unbiasedness means that the sequence of estimates becomes unbiased, that is, E(θ̂_n(t)) converges to θ, as t → ∞. Asymptotic efficiency means that the mean-square error (MSE) of each term in the sequence of estimates decays at the same rate as the MSE of the optimal estimator. Specifically, let θ̂_ML(t) be the optimal (maximum-likelihood) estimator—the estimator that runs at a central node—given by

 \hat{\theta}_{ML}(t) = \frac{1}{N} \sum_{n=1}^{N} C_n \bar{y}_n(t), \qquad (17)

where C_n is defined in (12) and ȳ_n(t) in (4). (For a scalar parameter, d = 1, this estimator coincides with the one given in (3).) With standard tools [20], it is easy to show that the optimal estimator is unbiased at all times, E(θ̂_ML(t)) = θ, and its MSE, given by E(‖θ̂_ML(t) − θ‖²), decays to zero as follows:

 \mathrm{MSE}\left( \hat{\theta}_{ML}(t) \right) = \frac{ \mathrm{tr}\left( \left( \sum_{n=1}^{N} H_n^T H_n \right)^{-1} \right) }{t}. \qquad (18)
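Formula (18) can be checked numerically; the sketch below uses made-up H_n and draws each averaged measurement ȳ_n(t) directly as H_n θ plus noise of covariance I/t, which has the same distribution as the average of t unit-covariance samples:

```python
import numpy as np

rng = np.random.default_rng(3)
d, dims, t = 2, [1, 2], 2000           # made-up dimensions and horizon
H = [rng.normal(size=(dm, d)) for dm in dims]
theta = np.array([1.0, -2.0])

Hstack = np.vstack(H)
G = Hstack.T @ Hstack                  # sum_n H_n^T H_n
pred = np.trace(np.linalg.inv(G)) / t  # right-hand side of (18)

# Monte-Carlo estimate of E || theta_ML(t) - theta ||^2, using the
# closed form theta_ML = G^{-1} sum_n H_n^T ybar_n (equivalent to (17)).
errs = []
for _ in range(5000):
    ybar = [Hn @ theta + rng.normal(size=Hn.shape[0]) / np.sqrt(t) for Hn in H]
    est = np.linalg.solve(G, sum(Hn.T @ yn for Hn, yn in zip(H, ybar)))
    errs.append(np.sum((est - theta) ** 2))

print(abs(np.mean(errs) - pred) / pred)   # small relative gap
```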

Asymptotic efficiency of FADE means that the decay rate of the MSE of the FADE estimate at any agent matches the decay rate of the MSE of the optimal estimator at the central node. The result is stated precisely in the next theorem.

###### Theorem 2 (FADE is asymptotically unbiased and efficient)

Let assumptions 1, 2, 3, and 4 hold. Then, lim_{t→∞} E(θ̂_n(t)) = θ and

 \lim_{t \to \infty} \frac{ \mathrm{MSE}\left( \hat{\theta}_n(t) \right) }{ \mathrm{MSE}\left( \hat{\theta}_{ML}(t) \right) } = 1, \qquad (19)

for n = 1, …, N.

###### Proof:

See appendix B. \qed

In this sense, FADE succeeds in making all agents as powerful as the central node.

## V Numerical simulations

We compare three estimators: the proposed FADE estimator (given in (14)), the state-of-the-art CI estimator from [7] (given in (10) for scalar parameters), and the centralized estimator (given in (17)).

We compare the estimators in two kinds of simulations. In the first kind of simulations, section V-A, we look at almost sure convergence (theorem 1); we compare the speed at which the estimators converge to the true vector of parameters θ. In the second kind of simulations, section V-B, we look at MSEs (theorem 2); we compare the speed at which the accuracy of all estimators, as measured by their MSEs, goes to zero.

Simulation setup. To compare the estimators, we set up a dense network and a sparse one. Both networks consist of the same set of agents.

Also, both networks have their edges changing randomly over time. For the dense network, the random edge-set E(t) takes values in a finite collection of edge-sets E = {E_1, …, E_K}, with this collection satisfying assumption 3; that is, the graph (V, E_1 ∪ ⋯ ∪ E_K), which results from overlaying all edge-sets in the collection E, is connected. We call this network dense because the overlaid edge-set E_1 ∪ ⋯ ∪ E_K is dense. Specifically, a large fraction of the pairs of agents have an edge between them in the overlaid edge-set and, for each E_k, the average degree (number of neighbors) of an agent is about six. As for the sparse network, the corresponding overlaid graph connects directly far fewer pairs of agents, and the average degree of the agents per E_k drops to a number close to one.

Whether the network is dense or sparse, agent n measures the vector of parameters θ through the same linear-gaussian sensing model y_n(t) = H_n θ + v_n(t), where v_n(t) is standard gaussian noise. We made each matrix H_n rank-deficient; this is to make sure that no single agent could identify θ, even if given an infinite supply of measurements. Thus, the vector of parameters θ is identifiable only through collaboration (the set of matrices H_1, …, H_N chosen secures global observability; see assumption 2).

Finally, we use the Metropolis weights for FADE (see section III-C), and the step-sizes α(t) and β(t) for CI (see (10)).

### V-A First kind of simulations: almost sure convergence

All three estimators—FADE θ̂_n(t), CI θ̃_n(t), and the centralized one θ̂_ML(t)—converge almost surely (a.s.) to θ as the number of communications grows unbounded:

 \lim_{t \to \infty} \hat{\theta}_n(t) = \lim_{t \to \infty} \tilde{\theta}_n(t) = \lim_{t \to \infty} \hat{\theta}_{ML}(t) = \theta, \quad \text{a.s.}, \qquad (20)

for any agent n. For the optimal estimator, (20) follows at once from (17) and the strong law of large numbers (which assures ȳ_n(t) → H_n θ almost surely); for FADE, see the proof of theorem 1; for CI, see [7].

But the theoretical analysis fails to tell us how fast the convergence in (20) occurs. The reason is that the analysis is asymptotic; it explains only what happens in the long run, for t → ∞. In this remote horizon—after an infinite number of communications—(20) shows that all estimators look the same. We do not know, however, how the estimators compare in a more realistic horizon: after a practical number of communications.

Results for the sparse network. We use the challenging sparse network to find the speeds at which the three estimators approach the limit (20). Specifically, we focus on agent 1 and track its estimate of the third entry $\theta^{(3)}$ of the vector of parameters, as yielded by the three estimators: $\hat\theta_{1}^{(3)}(t)$ for FADE, $\tilde\theta_{1}^{(3)}(t)$ for CI, and $\hat\theta_{\mathrm{ML}}^{(3)}(t)$ for the centralized estimator. According to (20), all estimates go to $\theta^{(3)}=70$,

$$\lim_{t\to\infty}\hat\theta_{1}^{(3)}(t)=\lim_{t\to\infty}\tilde\theta_{1}^{(3)}(t)=\lim_{t\to\infty}\hat\theta_{\mathrm{ML}}^{(3)}(t)=70,\quad\text{a.s.}\tag{21}$$

Figure 1 reveals, however, that the estimates go to $70$ at stunningly different speeds: CI lags appreciably behind, while FADE goes hand in hand with the swift centralized estimator.

For this example, we blinded agent 1 to the third entry of $\theta$, $\theta^{(3)}$, by filling the third column of the sensing matrix $H_{1}$ with zeros. This means that $\theta^{(3)}$ has no bearing on the measurements of agent 1, and that $\theta^{(3)}$ can only be learned quickly at agent 1 through effective teamwork. Figure 1 shows that FADE delivers such effective coordination, for it promptly supplies agent 1 with an accurate guess of the missing parameter.

### V-B Second kind of simulations: scaled MSEs

The MSEs of the three estimators decay to zero at the rate $1/t$. Specifically, when we scale the MSEs by the number of time-steps $t$, the scaled MSEs converge to the same limit:

$$\lim_{t\to\infty}t\,\mathrm{MSE}\big(\hat\theta_{n}(t)\big)=\lim_{t\to\infty}t\,\mathrm{MSE}\big(\tilde\theta_{n}(t)\big)=\lim_{t\to\infty}t\,\mathrm{MSE}\big(\hat\theta_{\mathrm{ML}}(t)\big)=\operatorname{tr}\left(\Big(\sum_{i=1}^{N}H_{i}^{T}H_{i}\Big)^{-1}\right),\tag{22}$$

for any agent $n$. For the optimal estimator, the limit above follows from (18); for FADE, see the proof of (19) in appendix B; for CI, see [7].
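The common limit depends only on the pooled sensing matrices. A minimal sketch of computing it, with hypothetical dimensions and randomly drawn matrices standing in for the $H_{i}$ of the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 5, 3  # hypothetical: 5 agents, 3 parameters

# Hypothetical sensing matrices; jointly they must secure global
# observability, i.e. sum_i H_i^T H_i must be invertible.
H = [rng.standard_normal((2, P)) for _ in range(N)]

G = sum(Hi.T @ Hi for Hi in H)            # pooled information matrix
asymptotic_scaled_mse = np.trace(np.linalg.inv(G))
print(asymptotic_scaled_mse)              # limit of t * MSE
```

No single $H_{i}$ needs full column rank for the pooled matrix $G$ to be invertible, which is exactly the global-observability assumption at work.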

For the optimal estimator, this convergence is instantaneous. This is because $t\,\mathrm{MSE}\big(\hat\theta_{\mathrm{ML}}(t)\big)$ is constant (so, already equal to its limit from the first time-step); see (18). For FADE and CI, however, the available theoretical analysis is unable to tell how fast this convergence takes place.

As the following numerical results show, the proposed FADE estimator converges outstandingly faster.

Results for the dense network. We look at the scaled MSEs that the three estimators (FADE, CI, and the centralized one) provide at agent 1 for the dense network model. We run a batch of Monte-Carlo trials, each one consisting of a stretch of time-steps, and at the end we average across the trials to find out how the scaled MSEs—$t\,\mathrm{MSE}\big(\hat\theta_{1}(t)\big)$ for FADE, $t\,\mathrm{MSE}\big(\tilde\theta_{1}(t)\big)$ for CI, and $t\,\mathrm{MSE}\big(\hat\theta_{\mathrm{ML}}(t)\big)$ for the centralized estimator—behave throughout the time-steps. Figure 2 shows the results: the proposed FADE estimator follows closely the quick centralized estimator, while the CI estimator is off by six orders of magnitude.
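The Monte-Carlo averaging just described can be sketched generically; the estimator interface (`run_estimator`) is a hypothetical stand-in, exercised here with the centralized sample-mean estimator of a scalar so the limit is known:

```python
import numpy as np

def scaled_mse_curve(run_estimator, theta_true, T, n_mc, rng):
    """Average t * ||error||^2 over n_mc Monte-Carlo trials.

    run_estimator(T, rng) is any routine returning the estimate
    trajectory as a (T x P) array; hypothetical interface for this sketch.
    """
    acc = np.zeros(T)
    for _ in range(n_mc):
        traj = run_estimator(T, rng)
        err2 = np.sum((traj - theta_true) ** 2, axis=1)
        acc += np.arange(1, T + 1) * err2
    return acc / n_mc

# Toy check: sample mean of unit-variance measurements of theta = 70,
# for which t * MSE equals tr((H^T H)^{-1}) = 1 at every t.
rng = np.random.default_rng(3)
theta = np.array([70.0])

def sample_mean(T, rng):
    y = theta + rng.standard_normal((T, 1))
    return np.cumsum(y, axis=0) / np.arange(1, T + 1)[:, None]

curve = scaled_mse_curve(sample_mean, theta, T=500, n_mc=200, rng=rng)
# curve hovers near 1 throughout, up to Monte-Carlo noise
```

The same driver, pointed at FADE, CI, and centralized trajectories, produces the curves of Figures 2 and 3.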

Results for the sparse network. For the sparse network, the difference in performance grows even larger, to seven orders of magnitude; see Figure 3.

## Vi Conclusions

We proposed a new algorithm for distributed parameter estimation with linear-Gaussian measurements. Our algorithm, called FADE (Fast and Asymptotically efficient Distributed Estimator), is simple to derive and copes with communication networks that change randomly. FADE comes with strong theoretical guarantees: not only is it strongly consistent, but also asymptotically efficient. Compared with a state-of-the-art consensus+innovations algorithm, FADE yields estimates with significantly smaller mean-square error (MSE)—in numerical simulations, FADE features estimates with MSEs that can be six or seven orders of magnitude smaller.

## Appendix A Proof of theorem 1

Scalar parameter. We will prove the theorem for the case of a scalar parameter, $\theta\in\mathbb{R}$, for clarity. The proof for the general vector case $\theta\in\mathbb{R}^{P}$, $P>1$, is immediate and left to the reader. For the case of a scalar parameter, the network-wide measurement vector is $y(t)=h\theta+v(t)$, where $h$ is a nonzero vector, thanks to assumption II.

The FADE algorithm is

$$\hat\theta(t)=W(t)\left(\hat\theta(t-1)+\frac{1}{t}\,C\big(y(t)-\bar y(t-1)\big)\right),\tag{23}$$

for $t\geq 1$, with $\hat\theta(0)=0$ and $\bar y(0)=0$. Also, recall that $C$ is a diagonal matrix with $n$th diagonal entry $c_{n}=\frac{Nh_{n}}{\|h\|^{2}}$, and that $\bar y(t)=\frac{1}{t}\sum_{s=1}^{t}y(s)$ is the running average of the measurements.
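A minimal numerical sketch of the scalar-parameter recursion (23). The dimensions, the true parameter, and the fixed complete-averaging matrix $W$ are all hypothetical choices for illustration; the gain $C$ follows the diagonal form recalled above:

```python
import numpy as np

rng = np.random.default_rng(1)
N, theta_true, T = 4, 70.0, 2000          # hypothetical network and horizon
h = rng.standard_normal(N)                # nonzero sensing vector (assumption II)
C = np.diag(N * h / np.dot(h, h))         # diagonal gain, c_n = N h_n / ||h||^2

theta = np.zeros(N)                       # \hat\theta(0) = 0
y_bar = np.zeros(N)                       # \bar y(0) = 0
for t in range(1, T + 1):
    y = h * theta_true + rng.standard_normal(N)   # y(t) = h*theta + v(t)
    # a fixed doubly stochastic W(t) for this sketch (complete mixing)
    W = np.full((N, N), 1.0 / N)
    theta = W @ (theta + (1.0 / t) * C @ (y - y_bar))   # recursion (23)
    y_bar += (y - y_bar) / t              # update the running average of y
print(theta)                              # every agent's estimate near 70
```

With complete mixing, the iterate stays in consensus and only the in-consensus scalar evolves, so the run illustrates the convergence (27) in isolation; random $W(t)$ drawn per time-step exercise the off-consensus analysis as well.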

The goal is to show that

$$\lim_{t\to\infty}\hat\theta(t)=\theta\mathbf{1},\quad\text{a.s.}\tag{24}$$

The in-consensus and off-consensus orthogonal decomposition. Any vector $u\in\mathbb{R}^{N}$ can be decomposed as an orthogonal sum of two vectors: one vector aligned with the all-ones vector $\mathbf{1}$, and the other vector orthogonal to $\mathbf{1}$. That is,

$$u=u^{\top}\mathbf{1}+u^{\perp},\tag{25}$$

where the scalar $u^{\top}=\frac{\mathbf{1}^{T}u}{N}$ and the vector $u^{\perp}=(I_{N}-J)u$, where $J=\frac{\mathbf{1}\mathbf{1}^{T}}{N}$ is the consensus matrix. It follows that $\mathbf{1}^{T}u^{\perp}=0$ (recall that $J\mathbf{1}=\mathbf{1}$). The vector $u^{\top}\mathbf{1}$ is called the in-consensus component of $u$; the vector $u^{\perp}$, its off-consensus component.
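A quick numerical check of the decomposition (25), with a randomly drawn vector standing in for $u$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
u = rng.standard_normal(N)
one = np.ones(N)
J = np.outer(one, one) / N          # consensus matrix J = 1 1^T / N

u_cons = one @ u / N                # scalar in-consensus component u^T
u_perp = (np.eye(N) - J) @ u        # off-consensus component u_perp

# u = u^T 1 + u_perp, with u_perp orthogonal to 1
assert np.allclose(u, u_cons * one + u_perp)
assert abs(one @ u_perp) < 1e-12
```

The two components are orthogonal because $J$ is the orthogonal projector onto the span of $\mathbf{1}$, so $I_{N}-J$ projects onto its orthogonal complement.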

Decomposing in this way the vector $\hat\theta(t)$,

$$\hat\theta(t)=\hat\theta^{\top}(t)\mathbf{1}+\hat\theta^{\perp}(t),\tag{26}$$

we see that (24) amounts to

$$\lim_{t\to\infty}\hat\theta^{\top}(t)=\theta,\quad\text{a.s.},\tag{27}$$

and

$$\lim_{t\to\infty}\hat\theta^{\perp}(t)=0,\quad\text{a.s.}\tag{28}$$

We will prove (27) and (28) separately.

### A-a Proof of (27)

From assumption III-C, each matrix $W(t)$ is symmetric and row-stochastic, which means that each $W(t)$ is also column-stochastic: $\mathbf{1}^{T}W(t)=\mathbf{1}^{T}$. So, multiplying (23) on the left by $\frac{\mathbf{1}^{T}}{N}$ gives

$$\hat\theta^{\top}(t)=\hat\theta^{\top}(t-1)+\frac{h^{T}}{\|h\|^{2}}\,\frac{1}{t}\big(y(t)-\bar y(t-1)\big),\tag{29}$$

for $t\geq 1$, with $\hat\theta^{\top}(0)=0$ and $\bar y(0)=0$.

For $t=1$, we have $y(1)=h\theta+v(1)$, and (29) implies

$$\hat\theta^{\top}(1)=\frac{h^{T}}{\|h\|^{2}}\big(h\theta+v(1)\big)=\theta+\frac{h^{T}}{\|h\|^{2}}v(1).\tag{30}$$

For $t\geq 2$, we have $y(t)-\bar y(t-1)=v(t)-\bar v(t-1)$, and (29) implies

$$\hat\theta^{\top}(t)=\hat\theta^{\top}(t-1)+\frac{h^{T}}{\|h\|^{2}}\,\frac{1}{t}\big(v(t)-\bar v(t-1)\big).\tag{31}$$

Rolling the recursion (31) from (30) yields, for $t\geq 1$,

$$\hat\theta^{\top}(t)=\theta+\frac{h^{T}}{\|h\|^{2}}\,\bar v(t).\tag{32}$$

Finally, the strong law of large numbers gives $\lim_{t\to\infty}\bar v(t)=0$, a.s., which, when plugged in (32), proves (27).
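The rolling step from (31) to (32) rests on the running-average identity; spelled out:

```latex
% Since \bar v(t) = \frac{1}{t}\sum_{s=1}^{t} v(s), the running average obeys
\bar v(t)-\bar v(t-1)=\tfrac{1}{t}\bigl(v(t)-\bar v(t-1)\bigr),
% so (31) is exactly the telescoping increment
\hat\theta^{\top}(t)-\hat\theta^{\top}(t-1)
  =\frac{h^{T}}{\|h\|^{2}}\bigl(\bar v(t)-\bar v(t-1)\bigr).
% Summing over s = 2,\dots,t and adding (30) (note \bar v(1)=v(1)) gives (32).
```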

### A-B Proof of (28)

A recursive equality for $\hat\theta^{\perp}(t)$. Recall that, by definition, $\hat\theta^{\perp}(t)=(I_{N}-J)\hat\theta(t)$, where $J=\frac{\mathbf{1}\mathbf{1}^{T}}{N}$. Thus, using (23), we have

$$\hat\theta^{\perp}(t)=(I_{N}-J)\,W(t)\left(\hat\theta(t-1)+\frac{1}{t}\,C\big(y(t)-\bar y(t-1)\big)\right).\tag{33}$$

Now, since each $W(t)$ is row- and column-stochastic ($W(t)\mathbf{1}=\mathbf{1}$ and $\mathbf{1}^{T}W(t)=\mathbf{1}^{T}$), it follows that

$$(I_{N}-J)\,W(t)=\widetilde W(t)\,(I_{N}-J),\tag{34}$$

where $\widetilde W(t)$ is defined as (recall (16)) $\widetilde W(t)=W(t)-J$.

Plugging (34) into (33) gives the recursion

$$\hat\theta^{\perp}(t)=\widetilde W(t)\,\hat\theta^{\perp}(t-1)+\frac{1}{t}\,\widetilde W(t)\,C\big(y(t)-\bar y(t-1)\big).\tag{35}$$

In obtaining (35), we also used the identity $\widetilde W(t)(I_{N}-J)=\widetilde W(t)$.

A recursive inequality for $\|\hat\theta^{\perp}(t)\|^{2}$. Fix some positive number $\epsilon$ (which will be set judiciously soon). Using the fact that $\|a+b\|^{2}\leq(1+\epsilon)\|a\|^{2}+\left(1+\frac{1}{\epsilon}\right)\|b\|^{2}$ holds for generic vectors $a$ and $b$, we deduce from (35) the inequality

$$\big\|\hat\theta^{\perp}(t)\big\|^{2}\leq(1+\epsilon)\big\|\widetilde W(t)\,\hat\theta^{\perp}(t-1)\big\|^{2}+\left(1+\frac{1}{\epsilon}\right)\frac{1}{t^{2}}\big\|\widetilde W(t)\,C\big(y(t)-\bar y(t-1)\big)\big\|^{2}.\tag{36}$$
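The vector inequality invoked here is a consequence of Young's inequality; a one-line derivation:

```latex
% For any \epsilon > 0 and vectors a, b:
\|a+b\|^{2}
  =\|a\|^{2}+2\,a^{T}b+\|b\|^{2}
  \leq\|a\|^{2}+\epsilon\|a\|^{2}+\tfrac{1}{\epsilon}\|b\|^{2}+\|b\|^{2}
  =(1+\epsilon)\|a\|^{2}+\bigl(1+\tfrac{1}{\epsilon}\bigr)\|b\|^{2},
% where the middle step uses Young's inequality
% 2\,a^{T}b \leq \epsilon\|a\|^{2}+\tfrac{1}{\epsilon}\|b\|^{2},
% i.e. 0 \leq \|\sqrt{\epsilon}\,a - b/\sqrt{\epsilon}\|^{2}.
```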

Denote the spectral norm (maximum singular value) of a matrix by $\|\cdot\|$ and recall that $\|Au\|\leq\|A\|\,\|u\|$ for any matrix $A$ and vector $u$. We derive from (36) that

$$\big\|\hat\theta^{\perp}(t)\big\|^{2}\leq(1+\epsilon)\,\hat\theta^{\perp}(t-1)^{T}\widetilde W(t)^{T}\widetilde W(t)\,\hat\theta^{\perp}(t-1)+\left(1+\frac{1}{\epsilon}\right)\frac{1}{t^{2}}\big\|\widetilde W(t)\,C\big\|^{2}\,\big\|y(t)-\bar y(t-1)\big\|^{2}.\tag{37}$$

A key inequality for $\mathbb{E}\big(\|\hat\theta^{\perp}(t)\|^{2}\,\big|\,\mathcal{F}(t-1)\big)$. Let $\{\mathcal{F}(t)\}$ be the natural filtration, that is, $\mathcal{F}(t)$ is the sigma-algebra generated by all random objects until time $t$: $\mathcal{F}(t)=\sigma\big(W(s),y(s)\colon s\leq t\big)$.

Note that the random matrix $\widetilde W(t)$ is independent from $\mathcal{F}(t-1)$. Thus, using this fact and standard properties of conditional expectation (e.g., see [24]), we deduce from (37) that

 E(∥∥ˆθ⊥(t)∥∥2|F(t−1))≤(1+ϵ)ˆθ⊥(t−1)T˜Wˆθ