Consensus and Products of Random Stochastic Matrices: Exact Rate for Convergence in Probability


Dragana Bajović, João Xavier, José M. F. Moura, and Bruno Sinopoli

Dragana Bajović is with the Institute for Systems and Robotics (ISR), Instituto Superior Técnico (IST), Lisbon, Portugal, and with the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA (dragana@isr.ist.utl.pt, dbajovic@andrew.cmu.edu). João Xavier is with the Institute for Systems and Robotics (ISR), Instituto Superior Técnico (IST), Lisbon, Portugal (jxavier@isr.ist.utl.pt). Bruno Sinopoli and José M. F. Moura are with the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA (brunos@ece.cmu.edu, moura@ece.cmu.edu; ph: (412) 268-6341; fax: (412) 268-3890).

The work of Dragana Bajović, João Xavier, and Bruno Sinopoli was partially supported by grants CMU-PT/SIA/0026/2009 and SFRH/BD/33517/2008 (through the Carnegie Mellon/Portugal Program managed by ICTI) from Fundação para a Ciência e Tecnologia, and by ISR/IST plurianual funding (POSC program, FEDER). The work of José M. F. Moura was partially supported by NSF grants CCF-1011903 and CCF-1018509, and by AFOSR grant FA95501010291. Dragana Bajović holds a fellowship from the Carnegie Mellon/Portugal Program.
Abstract

Distributed consensus and other linear systems with stochastic system matrices emerge in various settings, like opinion formation in social networks, rendezvous of robots, and distributed inference in sensor networks. The matrices are often random, due to, e.g., random packet dropouts in wireless sensor networks. Key to analyzing the performance of such systems is studying the convergence of the matrix products $W_k W_{k-1} \cdots W_1$. In this paper, we find the exact exponential rate for the convergence in probability of the product of such matrices when time $k$ grows large, under the assumption that the $W_k$'s are symmetric and independent identically distributed in time. Further, for commonly used random models like gossip and link failure, we show that the rate is found by solving a min-cut problem and, hence, is easily computable. Finally, we apply our results to optimally allocate the sensors' transmission power in consensus+innovations distributed detection.

Keywords: Consensus, consensus+innovations, performance analysis, random network, convergence in probability, exponential rate.

I Introduction

Linear systems with stochastic system matrices find applications in sensor [1], multi-robot [2], and social networks [3]. For example, in modeling opinion formation in social networks [3], individuals set their new opinion to the weighted average of their own opinion and the opinions of their neighbors. These systems appear both as autonomous algorithms, like consensus or gossip [4], and as input-driven algorithms, like consensus+innovations distributed inference [5]. Frequently, the system matrices are random, as, for example, in consensus in wireless sensor networks, due either to the use of a randomized protocol like gossip [4] or to link failures, i.e., random packet dropouts. In this paper, we determine the exact convergence rate of products of random, independent identically distributed (i.i.d.), general symmetric stochastic¹ matrices $W_k$; see Section III. In particular, our results apply to gossip and link failure models. For example, with gossip on a graph $G$, each realization of $W_k$ has the sparsity structure of the Laplacian matrix of a one-link subgraph of $G$, with positive entries being arbitrary, but assumed bounded away from zero.

¹By stochastic, we mean a nonnegative matrix whose rows sum to 1. Doubly stochastic matrices have, besides row sums, also column sums equal to 1.

When studying the convergence of the products $W_k W_{k-1} \cdots W_1$, it is well known that, when the modulus of the second largest eigenvalue of the mean matrix $\mathbb{E}[W_k]$ is strictly less than 1, this product converges almost surely [6] to the ideal consensus matrix $J := \frac{1}{N} \mathbf{1}\mathbf{1}^\top$ (here $N$ is the number of nodes and $\mathbf{1}$ is the $N \times 1$ vector of ones) and, thus, in probability, i.e., for any $\epsilon > 0$,

$P\left( \left\| W_k W_{k-1} \cdots W_1 - J \right\| \ge \epsilon \right) \longrightarrow 0, \quad \text{as } k \to \infty. \qquad (1)$

Here $\|\cdot\|$ denotes the spectral norm. This probability converges exponentially fast to zero with $k$ [7], but, so far as we know, the exact convergence rate has not yet been computed. In this work, we compute the exact exponential rate of decay of the probability in (1).

Contributions. Assuming that the non-zero entries of the $W_k$'s are bounded away from zero, we compute the exact exponential decay rate of the probability in (1) by solving with equality (rather than with lower and upper bounds) the corresponding large deviations limit, for every $\epsilon \in (0,1]$:

$\lim_{k \to \infty} \frac{1}{k} \log P\left( \left\| W_k W_{k-1} \cdots W_1 - J \right\| \ge \epsilon \right) = -\mathcal{J}, \qquad (2)$

where the convergence rate is $\mathcal{J} = |\log p_{\max}|$, with $p_{\max}$ the probability of the most likely way for the network to stay disconnected (made precise in Theorem 9). Moreover, we characterize the rate and show that it does not depend on $\epsilon$. Our results reveal that the exact rate $\mathcal{J}$ is solely a function of the graphs induced by the matrices $W_k$ and of the corresponding probabilities of occurrence of these graphs. In general, the computation of the rate is a combinatorial problem. However, for special important cases, we can get particularly simple expressions. For example, for gossip on a connected tree, the rate is equal to $|\log(1 - p_{\min})|$, where $p_{\min}$ is the probability of the link that is least likely to occur. Another example is with symmetric structures, like uniform gossiping and link failures over a regular graph, for which we show that the rate equals $|\log p_{\mathrm{isol}}|$, where $p_{\mathrm{isol}}$ is the probability that a node is isolated from the rest of the network. For gossip with more general graph structures, we show that the rate is $|\log(1 - c)|$, where $c$ is the min-cut value (or connectivity [8]) of a graph whose links are weighted by the gossip link probabilities; the higher the connectivity (the more costly or, equivalently, less likely it is to disconnect the graph), the larger the rate and the faster the convergence. Similarly, with link failures on general graphs, the rate is computed by solving a min-cut problem and is computable in polynomial time; the sketch below illustrates this computation.
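To make the min-cut claim concrete, here is a minimal computational sketch of ours, not from the paper, assuming Python with the networkx package and an edge attribute "p" (our naming) holding the gossip link activation probabilities; it computes the gossip rate $|\log(1-c)|$ via the Stoer-Wagner global min-cut.

```python
# Minimal sketch (not from the paper): computing the gossip rate
# J = |log(1 - c)| via a global min-cut, where the edges of G are
# weighted by the gossip link activation probabilities.
import math
import networkx as nx

def gossip_rate(G, prob_attr="p"):
    # c: value of the minimum cut of G with weights given by prob_attr
    c, _ = nx.stoer_wagner(G, weight=prob_attr)
    # J = |log(1 - c)|; the rate is +inf if the cut probability c is 1
    return math.inf if c >= 1 else abs(math.log(1.0 - c))

# Uniform gossip on a tree (here a path): the min-cut is a single edge,
# so the rate reduces to |log(1 - p_min)|.
T = nx.path_graph(4)  # 3 links
for u, v in T.edges:
    T[u][v]["p"] = 1.0 / 3.0
print(gossip_rate(T))  # |log(2/3)| ~ 0.405
```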

We now explain the intuition behind our result. To this end, consider the probability in (1) when $\epsilon = 1$,² i.e., the probability that the norm $\|W_k \cdots W_1 - J\|$ stays equal to 1. This happens only if the supergraph of all the graphs associated with the matrix realizations $W_1, \ldots, W_k$ is disconnected. Motivated by this insight, we define the set of all possible graphs induced by the matrices $W_k$, i.e., the set of realizable graphs, and introduce the concept of a disconnected collection of such graphs. For concreteness, we explain this here assuming gossip on a connected tree with $m$ links. For gossip on a connected tree, the set of realizable graphs consists of all one-edge subgraphs of the tree (and thus is of size $m$). If any fixed one-edge graph is removed from this collection, the supergraph of the remaining graphs is disconnected; the collection of the remaining graphs is what we call a disconnected collection. Consider now the event that all the graph realizations (i.e., activated links) from time 1 to time $k$ belong to a fixed disconnected collection, obtained, for example, by removal of one fixed one-edge graph. Because there would then be two isolated components in the network, the norm of $W_k \cdots W_1 - J$ would under this event stay equal to 1. The probability of this event is $(1-p)^k$, where we assume that the links occur with the same probability $p$. Similarly, if all the graph realizations belong to a disconnected collection obtained by removal of $m^\prime$ one-edge graphs, for $m^\prime \ge 1$, the norm remains at 1, but now with probability $(1 - m^\prime p)^k$. For any event indexed by $m^\prime$ from this graph-removal family of events, the norm stays at 1 in the long run, but what determines the rate is the most likely of all such events. In this case, the most likely event is that a single one-edge graph remains missing from time 1 to time $k$, the probability of which is $(1-p)^k$, yielding the value of the rate $\mathcal{J} = |\log(1-p)|$. This insight, that the rate $\mathcal{J}$ is determined by the probability of the most likely disconnected collection of graphs, extends to the general matrix process; a small Monte Carlo check of this intuition follows below.

²It turns out, as we will show in Section III, that the rate does not depend on $\epsilon$. Remark also that, because the matrices are stochastic, the spectral norm of $W_k \cdots W_1 - J$ is less than or equal to 1 for all realizations of $W_1, \ldots, W_k$. Thus, the probability in (1) is equal to 0 for $\epsilon > 1$.
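The following small Monte Carlo experiment (our illustration, with an arbitrary tree size and horizon) checks this intuition numerically: for uniform gossip on a tree with $m$ links, the probability that the union of activated links over $k$ steps stays disconnected is dominated by the $m$ events in which one particular link never fires.

```python
# Monte Carlo sketch of the intuition: uniform gossip on a tree with m
# links; the union of activated links over k steps is disconnected iff
# some link never fires. Compare with the dominant term m * (1 - 1/m)^k.
import random

def p_union_disconnected(m, k, trials=200_000):
    hits = 0
    for _ in range(trials):
        fired = {random.randrange(m) for _ in range(k)}
        if len(fired) < m:  # some tree link missing => disconnected union
            hits += 1
    return hits / trials

m, k = 5, 40
print(p_union_disconnected(m, k))  # empirical probability
print(m * (1 - 1 / m) ** k)        # dominant (most likely) term
```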

Review of the literature. There has been a large amount of work on linear systems driven by stochastic matrices. Early work includes [9, 10], and the topic received renewed interest in the past decade [11, 12]. Reference [12] analyzes convergence of the consensus algorithm under deterministic time-varying matrices $W_k$. Reference [4] provides a detailed study of the standard gossip model, which has been further modified, e.g., in [13, 14]; for a recent survey, see [15]. Reference [6] analyzes convergence under random matrices $W_k$ that are not necessarily symmetric and are ergodic, hence not necessarily independent, in time. Reference [16] studies the effects of delays, while reference [17] studies the impact of quantization. Reference [18] considers random matrices $W_k$ and addresses the issue of the communication complexity of consensus algorithms. The recent reference [19] surveys consensus and averaging algorithms and provides tight bounds on the worst-case averaging times for deterministic time-varying networks. In contrast with consensus (averaging) algorithms, consensus+innovations algorithms include both a local averaging term (consensus) and an innovation term (measurement) in the state update process. These algorithms find applications in distributed inference in sensor networks, see, e.g., [5, 20, 21] for distributed estimation, and, e.g., [22, 23, 24], for distributed detection. In this paper, we illustrate the usefulness of the rate of consensus in the context of consensus+innovations algorithms by optimally allocating the transmission power of sensors for distributed detection.

Products of random matrices appear also in many other fields that use techniques drawn from Markov process theory. Examples include repeated interaction dynamics in quantum systems [25], inhomogeneous Markov chains with random transition matrices [26, 25], infinite horizon control strategies for Markov chains and non-autonomous linear differential equations [27], and discrete linear inclusions [28]. These papers are usually concerned with deriving convergence results on these products and determining the limiting matrix. Reference [25] studies the product of matrices belonging to a class of complex contraction matrices and characterizes the limiting matrix by expressing the product as a sum of a decaying process, which converges exponentially to zero, and a fluctuating process. Reference [27] establishes conditions for strong and weak ergodicity for both forward and backward products of stochastic matrices, in terms of the limiting points of the matrix sequence. Using the concept of the infinite flow graph, which the authors introduced in previous work, reference [26] characterizes the limiting matrix for the product of stochastic matrices in terms of the topology of the infinite flow graph. For more structured matrices, [29] studies products of nonnegative matrices. For nonnegative matrices, a comprehensive study of the asymptotic behavior of the products can be found in [30]. A different line of research, closer to our work, is concerned with the limiting distributions of the products (in the sense of the central limit theorem and large deviations). The classes of matrices studied are: invertible matrices [31, 32] and their subclass of matrices with determinant equal to 1 [33] and, also, positive matrices [34]. None of these apply to our case, as the matrices that we consider might not be invertible ($W_k - J$ has a zero eigenvalue for every realization of $W_k$, since $(W_k - J)\mathbf{1} = 0$) and, also, we allow the entries of $W_k$ to be zero, and therefore the entries of $W_k - J$ might be negative with positive probability. Furthermore, as pointed out in [35], the results obtained in [31, 32, 33] do not provide ways to effectively compute the rates of convergence. Reference [35] improves on the existing literature in that sense by deriving more explicit bounds on the convergence rates, while showing that, under certain assumptions on the matrices, the convergence rates do not depend on the size of the matrices; the result is relevant from the perspective of large-scale dynamical systems, as it shows that, in some sense, more complex systems are not slower than systems of smaller scale, but again it does not apply to our study.

To the best of our knowledge, the exact large deviations rate in (2) has not been computed for i.i.d. averaging matrices $W_k$, nor for the commonly used sub-classes of gossip and link failure models. Results in the existing literature provide upper and lower bounds on the rate $\mathcal{J}$, but not the exact rate. These bounds are based on the second largest eigenvalue of $\mathbb{E}[W_k]$ or of $\mathbb{E}[W_k^2]$, e.g., [4, 36, 6]. Our result (2) refines these existing bounds and sheds more light on the asymptotic convergence of the probabilities in (1). For example, for the case when each realization of $W_k$ has a connected underlying support graph (the case studied in [12]), we calculate the rate to be equal to $+\infty$ (see Section III), i.e., the convergence of the probabilities in (1) is faster than exponential. On the other hand, the "rate" that would result from the bound based on $\lambda_2(\mathbb{E}[W_k^2])$ is finite unless $\lambda_2(\mathbb{E}[W_k^2]) = 0$. This is particularly relevant with consensus+innovations algorithms, where, e.g., the consensus+innovations distributed detector is asymptotically optimal if $\mathcal{J} = +\infty$ [37]; this fact cannot be seen from the bounds based on $\lambda_2$.

The rate $\mathcal{J}$ is a valuable metric for the design of algorithms (or linear systems) driven by the system matrices $W_k$, as it determines the algorithm's asymptotic performance and is easily computable for commonly used models. We demonstrate the usefulness of $\mathcal{J}$ by optimizing the allocation of the sensors' transmission power in a sensor network with fading (failing) links, for the purpose of distributed detection with the consensus+innovations algorithm [23, 24].

Paper organization. Section II introduces the model for the random matrices $W_k$ and defines relevant quantities needed in the sequel. Section III proves the result on the exact exponential rate $\mathcal{J}$ of consensus. Section IV shows how to compute the rate for gossip and link failure models via a min-cut problem. Section V addresses optimal power allocation for distributed detection by maximizing the rate $\mathcal{J}$. Finally, Section VI concludes the paper.

II Problem setup

Model for the random matrices $W_k$. Let $\{W_k : k = 1, 2, \ldots\}$ be a discrete-time random process where $W_k$, for all $k$, takes values in the set of doubly stochastic, symmetric, $N \times N$ matrices.

Assumption 1

We assume the following.

  1. The random matrices $W_k$ are independent identically distributed (i.i.d.).

  2. The entries of any realization of $W_k$ are bounded away from zero whenever positive. That is, there exists a scalar $\delta > 0$ such that, for any realization $W$, if $W_{ij} > 0$, then $W_{ij} \ge \delta$. An entry of $W$ with positive value will be called an active entry.

  3. For any realization $W$, the diagonal entries are active, i.e., $W_{ii} \ge \delta$, for all $i = 1, \ldots, N$.

Also, let $\mathcal{W}$ denote the set of all possible realizations of $W_k$.

Graph process. For a doubly stochastic symmetric matrix $W$, let $G(W)$ denote its induced undirected graph, i.e., $G(W) = (V, E(W))$, where $V = \{1, \ldots, N\}$ is the set of all nodes and

$E(W) = \left\{ \{i,j\} : \; i \ne j, \; W_{ij} > 0 \right\}.$

We define the random graph process $\{G_k : k = 1, 2, \ldots\}$ through the random matrix process by $G_k := G(W_k)$, for $k = 1, 2, \ldots$. As the matrix process is i.i.d., the graph process is i.i.d. as well. We collect the underlying graphs of all possible matrix realizations (in $\mathcal{W}$) in the set $\mathcal{G}$:

$\mathcal{G} := \left\{ G(W) : \; W \in \mathcal{W} \right\}. \qquad (3)$

Thus, the random graphs $G_k$ take their realizations from $\mathcal{G}$. Similarly as with the matrix entries, if $\{i,j\} \in E(W)$, we call $\{i,j\}$ an active link.
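As a small illustration (ours, assuming numpy and networkx), the induced graph $G(W)$ can be computed directly from its definition:

```python
# Sketch: the induced graph G(W) of a symmetric stochastic matrix W,
# following the definition above: {i,j} is an edge iff W_ij > 0, i != j.
import numpy as np
import networkx as nx

def induced_graph(W):
    N = W.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(N))
    ii, jj = np.nonzero(np.triu(W, k=1))  # strictly upper-triangular part
    G.add_edges_from(zip(ii.tolist(), jj.tolist()))
    return G
```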

We remark that the conditions on the random matrix process from Assumption 1 are satisfied automatically for any i.i.d. model with a finite space of matrices ($\delta$ can be taken to be the minimum over all positive entries over all matrices from $\mathcal{W}$). We illustrate with three instances of the random matrix model the case when the (positive) entries of the matrix realizations can vary continuously in certain intervals, namely: gossip, averaging with $d$-adjacent edges at a time, and link failures.

Example 1 (Gossip model)

Let $G = (V, E)$ be an arbitrary connected graph on $N$ vertices. With the gossip algorithm on the graph $G$, every realization of $W_k$ has exactly two off-diagonal entries that are active: $W_{ij} = W_{ji} > 0$, for some $\{i,j\} \in E$, where the two entries are equal due to the symmetry of $W$. Because $W$ is stochastic, we have that $W_{ii} = W_{jj} = 1 - W_{ij}$, which, together with Assumption 1, implies that $W_{ij}$ must be bounded (almost surely) by $1 - \delta$. Therefore, the set of matrix realizations in the gossip model is:

$\mathcal{W}_{\mathrm{gossip}} = \left\{ I - \alpha \left( e_i - e_j \right) \left( e_i - e_j \right)^\top : \; \{i,j\} \in E, \; \alpha \in [\delta, 1 - \delta] \right\},$

where $e_i$ denotes the $i$th canonical basis vector.
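A realization from this set can be constructed as follows; this is our sketch of the (reconstructed) form above, with hypothetical parameter choices:

```python
# Sketch: one gossip realization W = I - alpha (e_i - e_j)(e_i - e_j)^T,
# with averaging weight alpha in [delta, 1 - delta].
import numpy as np

def gossip_matrix(N, i, j, alpha):
    W = np.eye(N)
    W[i, i] -= alpha
    W[j, j] -= alpha
    W[i, j] = W[j, i] = alpha
    return W

W = gossip_matrix(5, 0, 3, alpha=0.5)
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
```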

Example 2 (Averaging model with d-adjacent edges at a time)

Let $G = (V, E)$ be a $d$-regular connected graph on $N$ vertices, $1 \le d < N$. Consider the following averaging scheme, where exactly $d$ off-diagonal entries of $W$ (above the main diagonal) are active at a time: $W_{ij} > 0$, for some fixed node $i$ and all $j$ such that $\{i,j\} \in E$. In other words, at each time in this scheme, the set of active edges is the set of edges adjacent to some node $i$. Taking into account Assumption 1 on $W$, the set of matrix realizations for this averaging model is the set of all symmetric stochastic matrices $W$ whose active off-diagonal entries lie exactly in the $i$th row and the $i$th column $W_i$ of $W$, for some node $i \in V$, at the positions of the edges adjacent to $i$, and whose active entries are all bounded below by $\delta$.

Example 3 (Link failure (Bernoulli) model)

Let $G = (V, E)$ be an arbitrary connected graph on $N$ vertices. With link failures, the occurrence of each edge in $E$ is a Bernoulli random variable, and the occurrences of different edges are independent. Due to independence, each subgraph $H = (V, E^\prime)$ of $G$, $E^\prime \subseteq E$, is a realizable graph in this model. Also, for any given subgraph $H$ of $G$, any matrix with the sparsity pattern of the Laplacian matrix of $H$ and satisfying Assumption 1 is a realizable matrix. Therefore, the set of all realizable matrices in the link failure model is

$\mathcal{W}_{\mathrm{fail}} = \left\{ W : \; W \text{ symmetric and stochastic}, \; G(W) \text{ a subgraph of } G, \; W \text{ satisfying Assumption 1} \right\}.$
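Below is a sketch (ours) of sampling one realization from this model. The Metropolis weight rule used here is just one convenient choice with the required Laplacian sparsity and entries bounded away from zero; the model itself allows arbitrary weights satisfying Assumption 1.

```python
# Sketch: sampling one realization of the link failure model. Each edge
# {i,j} of G appears independently with probability P[i,j]; the weights
# use the Metropolis rule (our arbitrary choice, not prescribed by the
# paper), which satisfies Assumption 1.
import numpy as np

def sample_link_failure(A, P, rng):
    # A: 0/1 symmetric adjacency of G;  P: symmetric edge probabilities
    N = A.shape[0]
    U = rng.random((N, N))
    occ = np.triu((U < P) & (A > 0), k=1)  # upper-triangular occurrences
    occ = occ | occ.T                      # symmetrize
    deg = occ.sum(axis=1)
    W = np.zeros((N, N))
    for i, j in np.argwhere(np.triu(occ, k=1)):
        W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W
```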

Supergraph of a collection of graphs and disconnected collections. For a collection $\Gamma$ of graphs on the same set of vertices $V$, let $\Gamma^{\cup}$ denote the graph that contains all edges from all graphs in $\Gamma$. That is, $\Gamma^{\cup}$ is the minimal graph (i.e., the graph with the minimal number of edges) that is a supergraph of every graph in $\Gamma$:

$\Gamma^{\cup} := \left( V, \; \bigcup_{H \in \Gamma} E(H) \right), \qquad (4)$

where $E(H)$ denotes the set of edges of graph $H$.

Specifically, we denote by $G(k,t)$³ the random graph that collects the edges from all the graphs $G_s$ that appeared from time $s = t+1$ to $s = k$, $k > t \ge 0$, i.e.,

$G(k,t) := \left( V, \; E(G_k) \cup E(G_{k-1}) \cup \cdots \cup E(G_{t+1}) \right).$

³Graph $G(k,t)$ is associated with the matrix product going from time $t+1$ until time $k$. The notation indicates that the product is backwards; see also the definition of the product matrix $\Phi(k,t)$ in Section III.

Also, for a collection $\Gamma$, we use $p_\Gamma$ to denote the probability that a graph realization belongs to $\Gamma$:

$p_\Gamma := P\left( G_k \in \Gamma \right). \qquad (5)$

We next define collections of realizable graphs of certain types that will be important in computing the rate $\mathcal{J}$ in (2).

Definition 4

The collection $\Gamma \subseteq \mathcal{G}$ is a disconnected collection of $\mathcal{G}$ if its supergraph $\Gamma^{\cup}$ is disconnected.

Thus, a disconnected collection is any collection of realizable graphs such that the union of all of its graphs yields a disconnected graph. We also define the set of all possible disconnected collections of $\mathcal{G}$:

$\mathcal{D} := \left\{ \Gamma \subseteq \mathcal{G} : \; \Gamma^{\cup} \text{ is disconnected} \right\}. \qquad (6)$

We further refine this set to find the largest possible disconnected collections of $\mathcal{G}$.

Definition 5

We say that a collection $\Gamma \subseteq \mathcal{G}$ is a maximal disconnected collection of $\mathcal{G}$ (or, shortly, maximal) if:

  1. $\Gamma \in \mathcal{D}$, i.e., $\Gamma$ is a disconnected collection of $\mathcal{G}$; and

  2. for every $H \in \mathcal{G} \setminus \Gamma$, the supergraph $\left( \Gamma \cup \{H\} \right)^{\cup}$ is connected.

In words, $\Gamma$ is maximal if the graph that collects all edges of all graphs in $\Gamma$ is disconnected, but adding all the edges of any one of the remaining graphs (those not in $\Gamma$) yields a connected graph. We also define the set of all possible maximal collections of $\mathcal{G}$:

$\mathcal{D}^\star := \left\{ \Gamma \in \mathcal{D} : \; \Gamma \text{ is maximal} \right\}. \qquad (7)$

We remark that $\mathcal{D}^\star \subseteq \mathcal{D}$. We now illustrate the set of all possible graph realizations $\mathcal{G}$ and its maximal collections with two examples.

Example 6 (Gossip model)

If the random matrix process is defined by the gossip algorithm on the full graph on $N$ vertices, then $\mathcal{G} = \left\{ G_{ij} : 1 \le i < j \le N \right\}$, where $G_{ij}$ denotes the graph on $N$ vertices whose single link is $\{i,j\}$; in words, $\mathcal{G}$ is the set of all possible one-link graphs on $N$ vertices. An example of a maximal collection of $\mathcal{G}$ is

$\Gamma_i = \left\{ G_{jl} : \; j \ne i, \; l \ne i \right\},$

where $i$ is a fixed vertex, or, in words, the collection of all one-link graphs except those whose link is adjacent to $i$. Another example is the collection of all one-link graphs whose link does not cross a fixed cut: for a subset of vertices $A$, with $1 \le |A| \le N - 1$,

$\Gamma_A = \left\{ G_{jl} : \; j, l \in A \; \text{ or } \; j, l \in V \setminus A \right\}$

(the collection $\Gamma_i$ above is the special case $A = V \setminus \{i\}$).

Example 7 (Toy example)

Consider a network of five nodes with the set of realizable graphs $\mathcal{G} = \{G_1, G_2, G_3\}$, where the graphs $G_1$, $G_2$, $G_3$ are given in Figure 1. In this model, each realizable graph is a two-link graph, and the supergraph of all the realizable graphs is connected.

Fig. 1: Example of a five-node network with three possible graph realizations, each being a two-link graph.

If we scan over the supergraphs of all subsets of $\mathcal{G}$, we find which subsets have connected supergraphs and which are disconnected collections; the disconnected collections that cannot be enlarged while keeping the supergraph disconnected constitute $\mathcal{D}^\star$. For the graphs of Figure 1, this scan over all $2^3 - 1$ nonempty subsets directly yields $\mathcal{D}$ and $\mathcal{D}^\star$; the brute-force enumeration below mirrors this procedure.
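The scan can be coded directly from Definitions 4 and 5; in the sketch below (ours), the three two-link graphs are hypothetical stand-ins for the graphs of Figure 1, which we cannot reproduce here.

```python
# Brute-force sketch of Definitions 4 and 5: enumerate disconnected and
# maximal disconnected collections of a small set of realizable graphs.
# Each realizable graph is given as a tuple of edges on nodes 0..N-1.
from itertools import combinations
import networkx as nx

def supergraph(collection, N):
    H = nx.Graph()
    H.add_nodes_from(range(N))
    for edge_set in collection:
        H.add_edges_from(edge_set)
    return H

def collections(realizable, N):
    disc = [c for r in range(1, len(realizable) + 1)
            for c in combinations(realizable, r)
            if not nx.is_connected(supergraph(c, N))]
    maximal = [c for c in disc
               if all(nx.is_connected(supergraph(c + (h,), N))
                      for h in realizable if h not in c)]
    return disc, maximal

# Hypothetical two-link graphs standing in for Figure 1:
realizable = [((0, 1), (2, 3)), ((1, 2), (3, 4)), ((0, 4), (1, 3))]
disc, maximal = collections(realizable, 5)
print(len(disc), maximal)
```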

We now observe that, if the graph $G(k,t)$ that collects all the edges that appeared from time $t+1$ to time $k$ is disconnected, then all the graphs that appeared from $t+1$ through $k$ belong to some maximal collection $\Gamma \in \mathcal{D}^\star$.

Observation 8

If, for some $k > t \ge 0$, $G(k,t)$ is disconnected, then there exists a maximal collection $\Gamma \in \mathcal{D}^\star$ such that $G_s \in \Gamma$, for every $s$, $t+1 \le s \le k$.

III Exponential rate for consensus

Denote by $J := \frac{1}{N} \mathbf{1}\mathbf{1}^\top$ the ideal consensus matrix, and by $\Phi(k,t) := W_k W_{k-1} \cdots W_{t+1}$, for $k > t \ge 0$, the product of the weight matrices from time $t+1$ until time $k$. The following theorem gives the exponential decay rate of the probability $P\left( \left\| \Phi(k,0) - J \right\| \ge \epsilon \right)$.

Theorem 9

Consider the random process $\{W_k\}$ under Assumption 1. Then, for every $\epsilon \in (0,1]$:

$\lim_{k \to \infty} \frac{1}{k} \log P\left( \left\| \Phi(k,0) - J \right\| \ge \epsilon \right) = -\mathcal{J},$

where

$\mathcal{J} = \left| \log p_{\max} \right| \quad \left( \text{with } \mathcal{J} = +\infty \text{ when } p_{\max} = 0 \right),$

and

$p_{\max} = \max_{\Gamma \in \mathcal{D}^\star} \; p_\Gamma \quad \left( p_{\max} := 0 \text{ when } \mathcal{D}^\star = \emptyset \right)$

is the probability of the most likely maximal disconnected collection.
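As a quick numerical illustration (ours, not part of the paper's argument), consider uniform gossip on the path graph on three nodes. The maximal disconnected collections are the two single-link collections, so $p_{\max} = 1/2$ and the predicted rate is $\log 2 \approx 0.693$. Taking $\epsilon = 1$, the event in the theorem coincides with the event that $G(k,0)$ is disconnected, and the Monte Carlo estimate of the exponent approaches $\log 2$ from below; the gap at finite $k$ is due to the polynomial prefactor.

```python
# Numerical sketch of Theorem 9: uniform gossip on the path 0-1-2, two
# links each firing w.p. 1/2, with averaging weight 1/2. The maximal
# disconnected collections are the two single-link collections, so
# p_max = 1/2 and the predicted rate is |log(1/2)| = log 2. We take
# eps = 1, for which {||Phi(k,0) - J|| >= eps} = {G(k,0) disconnected}.
import numpy as np

rng = np.random.default_rng(1)
N, k, trials = 3, 12, 200_000
J = np.ones((N, N)) / N
links = [(0, 1), (1, 2)]
hits = 0
for _ in range(trials):
    Phi = np.eye(N)
    for _ in range(k):
        i, j = links[rng.integers(2)]
        W = np.eye(N)
        W[i, i] = W[j, j] = W[i, j] = W[j, i] = 0.5
        Phi = W @ Phi
    if np.linalg.norm(Phi - J, 2) >= 1 - 1e-9:
        hits += 1
print(-np.log(hits / trials) / k)  # ~0.63 at k = 12 (prefactor effect)
print(np.log(2))                   # predicted rate ~0.693
```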

To prove Theorem 9, we first consider the case when $\mathcal{D}^\star$ is nonempty and thus when $p_{\max} > 0$, i.e., when the rate $\mathcal{J}$ is finite. In this case, we find the rate by showing the lower and the upper bounds:

$\liminf_{k \to \infty} \frac{1}{k} \log P\left( \left\| \Phi(k,0) - J \right\| \ge \epsilon \right) \ge \log p_{\max}, \qquad (8)$
$\limsup_{k \to \infty} \frac{1}{k} \log P\left( \left\| \Phi(k,0) - J \right\| \ge \epsilon \right) \le \log p_{\max}. \qquad (9)$

Subsection III-A proves the lower bound (8), and subsection III-B proves the upper bound (9).

III-A Proof of the lower bound (8)

We first find the rate for the probability that the network stays disconnected over the interval from time $1$ to time $k$.

Lemma 10

$\lim_{k \to \infty} \frac{1}{k} \log P\left( G(k,0) \text{ is disconnected} \right) = \log p_{\max}.$

Having Lemma 10, the lower bound (8) follows from the relation

$P\left( \left\| \Phi(k,0) - J \right\| \ge \epsilon \right) \ge P\left( \left\| \Phi(k,0) - J \right\| = 1 \right) = P\left( G(k,0) \text{ is disconnected} \right),$

where the last equality follows from Lemma 13, stated and proven further ahead.

Proof of Lemma 10.

If all the graph realizations until time $k$ belong to a certain maximal collection $\Gamma \in \mathcal{D}^\star$, then, by the definition of a maximal collection, $G(k,0)$ is disconnected with probability 1. Therefore, for any maximal collection $\Gamma \in \mathcal{D}^\star$, the following bound holds:

$P\left( G(k,0) \text{ is disconnected} \right) \ge P\left( G_s \in \Gamma, \; s = 1, \ldots, k \right) = p_\Gamma^{\,k}.$

The best bound, over all maximal collections $\Gamma \in \mathcal{D}^\star$, is the one that corresponds to the “most likely” maximal collection:

$P\left( G(k,0) \text{ is disconnected} \right) \ge \max_{\Gamma \in \mathcal{D}^\star} p_\Gamma^{\,k} = p_{\max}^{\,k}. \qquad (10)$

We will next show that an upper bound with the same rate of decay (equal to $\log p_{\max}$) holds for the probability of the network staying disconnected. To show this, we reason as follows: if $G(k,0)$ is disconnected, then, by Observation 8, all the graph realizations until time $k$ belong to some maximal collection. It follows that

$P\left( G(k,0) \text{ is disconnected} \right) \le \sum_{\Gamma \in \mathcal{D}^\star} P\left( G_s \in \Gamma, \; s = 1, \ldots, k \right) = \sum_{\Gamma \in \mathcal{D}^\star} p_\Gamma^{\,k}.$

Finally, bounding each term in the previous sum by the probability of the most likely maximal collection, we obtain:

$P\left( G(k,0) \text{ is disconnected} \right) \le \left| \mathcal{D}^\star \right| \, p_{\max}^{\,k}, \qquad (11)$

where $\left| \mathcal{D}^\star \right|$ is the number of maximal collections of $\mathcal{G}$.

Combining (10) and (11), we get:

$p_{\max}^{\,k} \le P\left( G(k,0) \text{ is disconnected} \right) \le \left| \mathcal{D}^\star \right| \, p_{\max}^{\,k},$

which implies

$\lim_{k \to \infty} \frac{1}{k} \log P\left( G(k,0) \text{ is disconnected} \right) = \log p_{\max}. \; ∎$

III-B Proof of the upper bound (9)

The next lemma relates the products of the weight matrices with the corresponding graphs and is the key point in our analysis. Recall that $\Phi(k,t) = W_k W_{k-1} \cdots W_{t+1}$, for $k > t \ge 0$.

Lemma 11

For any realization of the matrices $W_{t+1}, \ldots, W_k$, $k > t \ge 0$:⁴

  1. if $\left( W_s \right)_{ij} > 0$ for some $s$, $t+1 \le s \le k$, then $\left[ \Phi(k,t) \right]_{ij} \ge \delta^{k-t}$;

  2. $\left[ \Phi(k,t) \right]_{ii} \ge \delta^{k-t}$, for all $i = 1, \ldots, N$;

  3. if $\{i,j\} \in E\left( G(k,t) \right)$, then $\left[ \Phi(k,t)^\top \Phi(k,t) \right]_{ij} \ge \delta^{2(k-t)}$;

  4. $\left\| \Phi(k,t) - J \right\|^2 \le 1 - \delta^{2(k-t)} \, \lambda_{\mathrm{F}}\left( L(k,t) \right),$

where $L(k,t)$ is the Laplacian matrix of the graph $G(k,t)$, and $\lambda_{\mathrm{F}}(M)$ is the second smallest eigenvalue (the Fiedler eigenvalue) of a positive semidefinite matrix $M$.

⁴The statements of the results in the subsequent Corollary 12 and Lemma 13 are also in this point-wise (realization-wise) sense.

Proof.

Parts 1 and 2 are a consequence of the fact that the positive entries of the weight matrices are bounded below by $\delta$, by Assumption 1; for the proofs of parts 1 and 2, see [38], Lemma 1 a), b). Part 3 follows from parts 1 and 2 by noticing that, for all $i$, $j$ such that $\{i,j\} \in E\left( G(k,t) \right)$, we have:

$\left[ \Phi(k,t)^\top \Phi(k,t) \right]_{ij} = \sum_{l=1}^{N} \left[ \Phi(k,t) \right]_{li} \left[ \Phi(k,t) \right]_{lj} \ge \left[ \Phi(k,t) \right]_{ii} \left[ \Phi(k,t) \right]_{ij} \ge \delta^{2(k-t)}.$

To show part 4, we notice first that $\left\| \Phi(k,t) - J \right\|^2$ is the second largest eigenvalue $\lambda_2$ of $\Phi(k,t)^\top \Phi(k,t)$ and, thus, can be computed as

$\lambda_2\left( \Phi(k,t)^\top \Phi(k,t) \right) = \max_{\|q\| = 1, \; q^\top \mathbf{1} = 0} \; q^\top \Phi(k,t)^\top \Phi(k,t) \, q.$

Since $\Phi(k,t)^\top \Phi(k,t)$ is a symmetric stochastic matrix, it can be shown, e.g., [12], that its quadratic form, for any unit-norm vector $q$ with $q^\top \mathbf{1} = 0$, can be written as:

$q^\top \Phi(k,t)^\top \Phi(k,t) \, q = 1 - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left[ \Phi(k,t)^\top \Phi(k,t) \right]_{ij} \left( q_i - q_j \right)^2. \qquad (12)$

Further, if the graph $G(k,t)$ contains some link $\{i,j\}$, then, at some time $s$, $t+1 \le s \le k$, a realization $W_s$ with $\left( W_s \right)_{ij} \ge \delta$ occurs. Since the diagonal entries of all the realizations of the weight matrices are positive (and, in particular, those from time $t+1$ to time $k$), the fact that $\left( W_s \right)_{ij} \ge \delta$ implies that $\left[ \Phi(k,t) \right]_{ij} \ge \delta^{k-t}$. This, in turn, implies

$\left[ \Phi(k,t)^\top \Phi(k,t) \right]_{ij} \ge \delta^{2(k-t)}, \quad \text{for every } \{i,j\} \in E\left( G(k,t) \right),$

which is part 3.

Using the latter, and the fact that the entries of $\Phi(k,t)^\top \Phi(k,t)$ are non-negative, we can bound the sum in (12) over all pairs $i$, $j$ by the sum over the pairs in $E\left( G(k,t) \right)$ only, yielding

$q^\top \Phi(k,t)^\top \Phi(k,t) \, q \le 1 - \delta^{2(k-t)} \sum_{\{i,j\} \in E(G(k,t))} \left( q_i - q_j \right)^2 = 1 - \delta^{2(k-t)} \, q^\top L(k,t) \, q.$

Finally, $\min \left\{ q^\top L(k,t) \, q : \; \|q\| = 1, \; q^\top \mathbf{1} = 0 \right\}$ is equal to the Fiedler eigenvalue (i.e., the second smallest eigenvalue) of the Laplacian $L(k,t)$. This completes the proof of part 4 and of Lemma 11. ∎

We have the following corollary of part 4 of Lemma 11 which, for a fixed interval length $k - t$, and for the case when the graph $G(k,t)$ is connected, gives a uniform bound for the spectral norm of $\Phi(k,t) - J$.

Corollary 12

For any $k > t \ge 0$, if $G(k,t)$ is connected, then

$\left\| \Phi(k,t) - J \right\| \le \left( 1 - c_N \, \delta^{2(k-t)} \right)^{1/2}, \qquad (13)$

where $c_N$ is the Fiedler value of the path graph on $N$ vertices, i.e., the minimum of $\lambda_{\mathrm{F}}\left( L(G) \right)$ over all connected graphs $G$ on $N$ vertices [39].

Proof.

The claim follows from part 4 of Lemma 11 and from the fact that, for connected $G(k,t)$, $\lambda_{\mathrm{F}}\left( L(k,t) \right) \ge c_N > 0$. ∎

The previous result, as well as part 4 of Lemma 11, implies that, if the graph $G(k,t)$ is connected, then the spectral norm of $\Phi(k,t) - J$ is strictly smaller than 1. It turns out that the connectedness of $G(k,t)$ is not only a sufficient but also a necessary condition for $\left\| \Phi(k,t) - J \right\| < 1$. Lemma 13 explains this.

Lemma 13

For any $k > t \ge 0$:

$\left\| \Phi(k,t) - J \right\| < 1 \;\; \Longleftrightarrow \;\; G(k,t) \text{ is connected}.$

Proof.

We first show the if part. Suppose $G(k,t)$ is connected. Then $\lambda_{\mathrm{F}}\left( L(k,t) \right) > 0$, and the claim follows by part 4 of Lemma 11. We prove the only if part by proving the following equivalent statement: if $G(k,t)$ is disconnected, then $\left\| \Phi(k,t) - J \right\| = 1$.

To this end, suppose that $G(k,t)$ is not connected and, without loss of generality, suppose that $G(k,t)$ has two components $C_1$ and $C_2$. Then, for $i \in C_1$ and $j \in C_2$, $\{i,j\} \notin E\left( G(k,t) \right)$ and, consequently, $\{i,j\} \notin E(G_s)$, for all $s$, $t+1 \le s \le k$. By the definition of $G_s = G(W_s)$, this implies that the $(i,j)$-th entry of the corresponding weight matrix is equal to zero, i.e., $\left( W_s \right)_{ij} = 0$, for all $s$, $t+1 \le s \le k$.

Thus, every matrix realization $W_s$ from time $t+1$ to time $k$ has a block diagonal form (up to a symmetric permutation of rows and columns)

$W_s = \begin{bmatrix} \left[ W_s \right]_{C_1} & 0 \\ 0 & \left[ W_s \right]_{C_2} \end{bmatrix},$

where $\left[ W_s \right]_{C_1}$ is the block of $W_s$ corresponding to the nodes in $C_1$, and similarly for $\left[ W_s \right]_{C_2}$. This implies that $\Phi(k,t)$ has the same block diagonal form; as each diagonal block of $\Phi(k,t)$ is itself a product of stochastic matrices, and hence stochastic, $\Phi(k,t)$ has (at least) two eigenvalues equal to 1, which, in turn, proves that $\left\| \Phi(k,t) - J \right\| = 1$. This completes the proof of the only if part and the proof of Lemma 13. ∎
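The equivalence in Lemma 13 can be checked numerically; the sketch below (ours) uses Metropolis weight matrices, an arbitrary choice consistent with Assumption 1, and verifies both directions on random graph sequences.

```python
# Numeric check of Lemma 13 (as reconstructed above): ||Phi(k,t) - J|| < 1
# iff G(k,t) is connected. Weight matrices use the Metropolis rule, an
# arbitrary choice consistent with Assumption 1.
import numpy as np
import networkx as nx

def metropolis(G, N):
    W = np.zeros((N, N))
    for i, j in G.edges:
        W[i, j] = W[j, i] = 1.0 / (1 + max(G.degree[i], G.degree[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

rng = np.random.default_rng(2)
N = 6
J = np.ones((N, N)) / N
for _ in range(100):
    graphs = [nx.gnp_random_graph(N, 0.25, seed=int(rng.integers(10**6)))
              for _ in range(4)]
    Phi = np.eye(N)
    for G in graphs:
        Phi = metropolis(G, N) @ Phi       # Phi(k, 0), backwards product
    union = nx.compose_all(graphs)         # the union graph G(k, 0)
    assert (np.linalg.norm(Phi - J, 2) < 1 - 1e-12) == nx.is_connected(union)
```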

We next define the sequence of stopping times $T_i$, $i = 1, 2, \ldots$, by:

$T_i := \min \left\{ k > T_{i-1} : \; G\left( k, T_{i-1} \right) \text{ is connected} \right\}, \quad T_0 := 0. \qquad (14)$

The sequence $\{T_i\}$ defines the times when the network becomes connected and, equivalently, when the averaging process makes an improvement (i.e., when the spectral norm of $\Phi\left( T_i, T_{i-1} \right) - J$ drops below 1); a short routine that computes these times from a realized graph sequence is sketched below.
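A direct implementation (ours) of (14) and (15) on a realized graph sequence:

```python
# Sketch: computing the stopping times T_i of (14) and M_k of (15) from
# a realized sequence of graphs (given as lists of edges on N nodes).
import networkx as nx

def stopping_times(graph_seq, N):
    times = []
    H = nx.Graph()
    H.add_nodes_from(range(N))
    for k, edges in enumerate(graph_seq, start=1):
        H.add_edges_from(edges)          # grow the union G(k, T_{i-1})
        if nx.is_connected(H):           # the network became connected
            times.append(k)              # record T_i = k ...
            H = nx.Graph()
            H.add_nodes_from(range(N))   # ... and restart the union
    return times

seq = [[(0, 1)], [(1, 2)], [(0, 2)], [(0, 1)], [(1, 2)]]
T = stopping_times(seq, 3)
print(T, len(T))  # [2, 4]; M_5 = 2
```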

For a fixed time $k$, let $M_k$ denote the number of improvements until time $k$:

$M_k := \max \left\{ i : \; T_i \le k \right\}. \qquad (15)$

We now explain how, at any given time $k$, we can use the knowledge of $M_k$ to bound the norm of the "error" matrix $\Phi(k,0) - J$. Suppose that $M_k = m$. If we knew the locations of all the improvements until time $k$, $T_1 = t_1, \ldots, T_m = t_m$, then, using eq. (13), we could bound the norm of $\Phi(k,0) - J$. Intuitively, since for fixed $k$ and fixed $m$ the number of allocations of the $t_i$'s is finite, there will exist the one which yields the worst bound on $\|\Phi(k,0) - J\|$. It turns out that the worst-case allocation is the one with equidistant improvements, thus allowing for deriving a bound on $\|\Phi(k,0) - J\|$ only in terms of $M_k$. This result is given in Lemma 14.

Lemma 14

For any realization of $W_1, \ldots, W_k$, $k \ge 1$, the following holds:

$\left\| \Phi(k,0) - J \right\| \le \left( 1 - c_N \, \delta^{\frac{2k}{M_k}} \right)^{\frac{M_k}{2}} \qquad (16)$

(with the convention that the right-hand side equals 1 when $M_k = 0$).
Proof.

Suppose $M_k = m$ and $T_i = t_i$, for $i = 1, \ldots, m$ (where $t_m \le k$, because $M_k = m$). Then, by Corollary 12, for $i = 1, \ldots, m$, we have $\left\| \Phi\left( t_i, t_{i-1} \right) - J \right\| \le \left( 1 - c_N \, \delta^{2 \Delta_i} \right)^{1/2}$, where $\Delta_i := t_i - t_{i-1}$ and $t_0 := 0$. Combining this with the submultiplicativity of the spectral norm, we get:

$\left\| \Phi(k,0) - J \right\| \le \prod_{i=1}^{m} \left( 1 - c_N \, \delta^{2 \Delta_i} \right)^{1/2}. \qquad (17)$

To show (16), we find the worst case of the $\Delta_i$'s by solving the following problem:

$\begin{array}{ll} \text{maximize} & \prod_{i=1}^{m} \left( 1 - c_N \, \delta^{2 \Delta_i} \right) \\ \text{subject to} & \Delta_1 + \cdots + \Delta_m \le k, \;\; \Delta_i \ge 0, \; i = 1, \ldots, m \end{array} \qquad (18)$

(here the $\Delta_i$'s should be thought of as continuous variables). Taking the logarithm of the cost function, we get a convex problem equivalent to the original one (it can be shown that the cost function $\sum_{i=1}^{m} \log \left( 1 - c_N \, \delta^{2 \Delta_i} \right)$ is concave). The maximum is achieved for $\Delta_i = \frac{k}{m}$, $i = 1, \ldots, m$, which, plugged into (17), yields (16). This completes the proof of Lemma 14. ∎
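The worst-case (equidistant) allocation can also be sanity-checked numerically; in the brute-force sketch below (ours), c and d are arbitrary stand-ins for $c_N$ and $\delta$.

```python
# Numeric sanity check of the worst case in Lemma 14: over all integer
# allocations (Delta_1, ..., Delta_m) with sum <= k, the product in (18)
# is maximized by the equidistant allocation.
from itertools import product

def objective(deltas, c=0.3, d=0.5):
    out = 1.0
    for t in deltas:
        out *= 1.0 - c * d ** (2 * t)
    return out

k, m = 9, 3
feasible = (ds for ds in product(range(k + 1), repeat=m) if sum(ds) <= k)
best = max(feasible, key=objective)
print(best, objective(best))  # (3, 3, 3): the equidistant allocation
```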

Lemma 14 provides a bound on the norm of the "error" matrix $\Phi(k,0) - J$ in terms of the number of improvements up to time $k$. Intuitively, if $M_k$ is high enough relative to $k$, then the norm of $\Phi(k,0) - J$ cannot stay above $\epsilon$ as $k$ increases (to see this, just take $M_k = \alpha k$, for some $\alpha \in (0,1]$, in eq. (16)). We show that this is indeed true for all random sequences for which $M_k \ge \alpha k$, for any choice of $\alpha \in (0,1]$; this result is stated in part 1 of Lemma 15. On the other hand, if the number of improvements is less than $\alpha k$, then there are at least $(1 - \alpha) k$ available slots in the graph sequence in which the graphs from a maximal collection can appear. This yields, in a crude approximation, a probability of order $p_{\max}^{(1-\alpha) k}$ for the event that $M_k < \alpha k$; part 2 of Lemma 15 gives the exact bound on this probability in terms of $\alpha$. We next state Lemma 15.

Lemma 15

Consider the sequence of events $C_k(\alpha) := \left\{ M_k \ge \alpha k \right\}$, where $\alpha \in (0,1]$, $k = 1, 2, \ldots$. For every $\epsilon \in (0,1]$:

  1. There exists a sufficiently large $k_0 = k_0(\alpha, \epsilon)$ such that

    $P\left( \left\{ \left\| \Phi(k,0) - J \right\| \ge \epsilon \right\} \cap C_k(\alpha) \right) = 0, \quad \text{for all } k \ge k_0; \qquad (19)$
  2. $\limsup_{k \to \infty} \frac{1}{k} \log P\left( C_k(\alpha)^{\mathrm{c}} \right) \le (1 - \alpha) \log p_{\max}. \qquad (20)$
Proof.

To prove part 1, we first note that, by Lemma 14, we have:

$\left\| \Phi(k,0) - J \right\| \le \left( 1 - c_N \, \delta^{\frac{2k}{M_k}} \right)^{\frac{M_k}{2}} = h_k\left( M_k \right). \qquad (21)$

This gives, for fixed $\alpha$ and $k$:

$P\left( \left\{ \left\| \Phi(k,0) - J \right\| \ge \epsilon \right\} \cap C_k(\alpha) \right) \le \sum_{m = \lceil \alpha k \rceil}^{k} P\left( \left\| \Phi(k,0) - J \right\| \ge \epsilon, \; M_k = m \right), \qquad (22)$

where $h_k(m) := \left( 1 - c_N \, \delta^{\frac{2k}{m}} \right)^{\frac{m}{2}}$, for $1 \le m \le k$, and $\lceil a \rceil$ denotes the smallest integer not less than $a$. For fixed $k$, each of the probabilities in the sum above is equal to 0 for those $m$ such that $h_k(m) < \epsilon$. This yields:

$P\left( \left\{ \left\| \Phi(k,0) - J \right\| \ge \epsilon \right\} \cap C_k(\alpha) \right) \le \sum_{m = \lceil \alpha k \rceil}^{k} \chi_{[\epsilon, 1]}\left( h_k(m) \right) P\left( M_k = m \right), \qquad (23)$

where $\chi_{[\epsilon,1]}$ is the switch function defined by:

$\chi_{[\epsilon,1]}(h) := \begin{cases} 1, & \text{if } h \in [\epsilon, 1] \\ 0, & \text{otherwise} \end{cases}.$

Also, as $h_k(m)$ is, for fixed $k$, decreasing in $m$, it follows that $\chi_{[\epsilon,1]}\left( h_k(m) \right) \le \chi_{[\epsilon,1]}\left( h_k\left( \lceil \alpha k \rceil \right) \right)$ for $m \ge \lceil \alpha k \rceil$. Combining this with eqs. (22) and (23), we get:

$P\left( \left\{ \left\| \Phi(k,0) - J \right\| \ge \epsilon \right\} \cap C_k(\alpha) \right) \le \chi_{[\epsilon,1]}\left( h_k\left( \lceil \alpha k \rceil \right) \right).$

We now show that $\chi_{[\epsilon,1]}\left( h_k\left( \lceil \alpha k \rceil \right) \right)$ will eventually become 0, as $k$ increases, which would yield part 1 of Lemma 15. To show this, we observe that $\frac{1}{k} \log h_k(m)$ has a constant negative value at $m = \alpha k$:

$\frac{1}{k} \log h_k(\alpha k) = \frac{\alpha}{2} \log \left( 1 - c_N \, \delta^{\frac{2}{\alpha}} \right) < 0.$

Since, consequently, $h_k\left( \lceil \alpha k \rceil \right) \to 0$, as $k \to \infty$, there exists $k_0$ such that $h_k\left( \lceil \alpha k \rceil \right) < \epsilon$, for every $k \ge k_0$. Thus, $\chi_{[\epsilon,1]}\left( h_k\left( \lceil \alpha k \rceil \right) \right) = 0$, for every $k \ge k_0$. This completes the proof of part 1.

To prove part 2, we observe that

$P\left( C_k(\alpha)^{\mathrm{c}} \right) = P\left( M_k < \alpha k \right) = \sum_{m=0}^{\lceil \alpha k \rceil - 1} P\left( M_k = m \right). \qquad (24)$

Recalling the definition of the stopping times $T_i$, we have $\left\{ M_k = m \right\} = \left\{ T_m \le k < T_{m+1} \right\}$, for $m \ge 0$ (with $T_0 := 0$); this, by further considering all possible realizations of $T_1, \ldots, T_m$, yields

$P\left( M_k = m \right) = \sum P\left( T_1 = t_1, \ldots, T_m = t_m, \; T_{m+1} > k \right), \qquad (25)$

where the summation is over all possible realizations $1 \le t_1 < t_2 < \cdots < t_m \le k$ of $T_1, \ldots, T_m$. Next, we remark that, by the definition of the stopping times $T_i$, the supergraph $G\left( t_i - 1, t_{i-1} \right)$ is disconnected with probability 1, for every $i = 1, \ldots, m$.