Finding Consensus in Multi-Agent Networks Using Heat Kernel Pagerank


Abstract

We present a new and efficient algorithm for determining a consensus value for a network of agents. In contrast to existing algorithms, ours evaluates the consensus value for very large networks using heat kernel pagerank. We consider two frameworks for the consensus problem: a weighted average consensus among all agents, and consensus in a leader-following formation. Using a heat kernel pagerank approximation, we give consensus algorithms that run in time sublinear in the size of the network, and we provide a quantitative analysis of the tradeoff between performance guarantees and error estimates.


1 Introduction

The problem of consensus among multi-agent systems has wide applications in situations where members of a distributed network must agree. For example, the communication, feedback, and decision-making between distinct unmanned aerial vehicles (UAVs) [18, 10] is closely related to the consensus problem. In addition to UAVs, the distributed coordination of networks has important implications in cooperative control of distributed sensor networks [9], flocking and swarming behavior [27], and communication congestion control [21]. Further, consensus problems form the foundation of the field of distributed computing [14]. The consensus problem is studied in [19], and several variations and extensions are examined in [20, 17, 15, 4].

We consider the classical model (see [19]) of agents with fixed, bidirectional communication channels and associated states. State changes occur continuously, influenced by communication with neighbors. A consensus algorithm is a continuous-time protocol that specifies the information exchange between agents and provides a mechanism for systematically computing the consensus value, a unanimous state and an equilibrium of the system. In this paper, we focus on an efficient method for approximating the state values in a network in which agents reach consensus by following a linear protocol. We give algorithms for two different frameworks. The first computes a global consensus value involving all the agents in the network and runs in time sublinear in the size of the network. The second is a local algorithm to compute a consensus value for a subset of agents under external influence, and runs in time sublinear in the size of the specified subset. In both, the consensus value returned is within an error bound of $\epsilon$ for a given $\epsilon > 0$.

Our algorithm involves solving a linear system by approximating the heat kernel pagerank of the network and relies on spectral analysis. The tools we present are relevant to numerous graph problems, including partitioning and clustering algorithms [3, 25, 16], flow and diffusion modeling [24], electrical network theory [5], and regression on graphs [2].

1.1 Previous Work

The consensus problem

In [19], Olfati-Saber and Murray design a linear protocol for agents to reach a consensus value which is an average of initial states. They consider a network of agents as an undirected graph and use the Laplacian potential, defined in terms of the graph Laplacian (to be defined in Section 2.2), as a measure of disagreement among nodes. With this tool, they transform the problem of reaching consensus into that of minimizing the Laplacian potential.

An alternate formulation given in [10] abides by a linear protocol which favors the values of more highly connected nodes. In this way, agents which are more visible have more of an impact on the group decision. Yet another variation is consensus in a leader-following formation, in which a set of agents called leaders abide by an individual protocol but continue to influence the rest of the network. This problem has been studied in [23, 17].

Laplacian linear systems

Fast methods for solving systems of linear equations gained momentum with the nearly-linear time solver of [26]. Their algorithm implements a recursive procedure for sparsifying a graph related to the coefficient matrix so that solving the system is easy at the base of the recursion. This work was improved in [11, 12] with a higher-quality sparsifier which reduced the depth of recursion. A parallel solver for SDD systems is given in [22] which runs in polylogarithmic time and nearly-linear work, improving on previous bounds.

The methods in this paper are closely related to previous work on approximating the discrete Green's function (or pseudo-inverse of the Laplacian matrix) [8]. The authors of [8] give an algorithm for solving Laplacian linear systems with a boundary condition on a subset of vertices that improves previous time bounds by using the method of heat kernel pagerank.

1.2 Main Results

In the model we consider, the communication protocol followed by the agents in the network forms a linear system of equations, and the solution to the linear system is the state of the network as a function of time. Thus, computing the consensus value involves solving a linear system.

We consider two forms of consensus. In the group consensus framework, we seek a consensus value that is a weighted average of initial states of the system, with weights proportional to node degrees. In this case, our algorithm computes the consensus value by approximating the state vector corresponding to the equilibrium of the system in sublinear time. In the local framework, a subset of agents imposes an external influence on an adjacent subset. The consensus achieved in this case is referred to as a leader-following consensus. Our algorithm for computing leader-following consensus on the subset of followers involves sampling vectors that approximate the equilibrium state vector, and runs in sublinear time.

Specifically, our contributions are:

  1. We give a new algorithm for approximating the state of a system in a weighted average consensus framework to within a multiplicative factor of $1 \pm \epsilon$ and an additive term of $\epsilon$, in time sublinear in $n$, where $n$ is the size of the network. We call this algorithm AvgConsensus and present it in Section 3.2.

  2. We give a new algorithm for approximating the state of a subset of agents in a leader-following consensus framework to within a multiplicative factor of $1 \pm \epsilon$ and an additive term of $\epsilon$, in time sublinear in $s$, where $s$ is the size of the subset of followers. We call this algorithm LFConsensus and present it in Section 4.

Our sublinear time algorithms for computing the consensus value rely on the efficiency of an algorithm for approximating heat kernel pagerank. Heat kernel pagerank is introduced in detail in [6] and [7] as a variant of personalized PageRank [1]. The heat kernel pagerank approximator is introduced in Section 3.2 and in more detail in [8].

2 Preliminaries

2.1 Networked Multi-Agent Systems

A dynamic multi-agent system is given by a tuple $(G, x)$, where $x$ is the state of the system and $G = (V, E)$ is the communication network topology, represented by a graph. Namely, each agent is represented by a node $v \in V$ and the communication network between agents is represented by the edge set $E$. Let $x_i(t)$ be a real scalar value assigned to $v_i$ such that $x_i(t)$ denotes the state of agent $i$ at time $t$.

For an undirected graph $G = (V, E)$ of size $n = |V|$, let the nodes of $G$ be arbitrarily indexed by the index set $\{1, 2, \ldots, n\}$ such that $V = \{v_1, v_2, \ldots, v_n\}$. For a node $v_i$, let $N_i$ be the set of neighbors of $v_i$ and let $d_i = |N_i|$ be the degree of $v_i$. Two nodes $v_i, v_j$ are said to agree if and only if $x_i(t) = x_j(t)$. The goal of consensus is to minimize the total disagreement among nodes.

Definition 2.1 (Consensus).

Let the value of nodes be the solution to the equation

$$\dot{x}_i(t) = u_i(t), \qquad i = 1, \ldots, n. \qquad\qquad (1)$$

Let $\chi$ be an operator on $\mathbb{R}^n$ that generates a decision value $\chi(x(0))$. Then we say all nodes of the graph have reached consensus with respect to $\chi$ in finite time $T$ if and only if all nodes agree and $x_i(T) = \chi(x(0))$ for every $i$. We call $\chi(x(0))$ the consensus value.

One notion of consensus is a weighted average consensus, given by

$$\chi(x(0)) = \frac{\sum_{i=1}^{n} d_i\, x_i(0)}{\sum_{i=1}^{n} d_i}.$$

We show (Theorem 3.1) that any connected undirected graph globally asymptotically reaches weighted average consensus when each node applies the distributed linear protocol

$$u_i(t) = \frac{1}{d_i} \sum_{j \in N_i} \big( x_j(t) - x_i(t) \big). \qquad\qquad (2)$$

We assume $G$ is connected for the remainder of the paper.
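As an illustration (an addition, not part of the original text), the following sketch integrates the dynamics induced by protocol (2), $\dot{x} = -(I - D^{-1}A)\,x$, with forward Euler steps on a small example graph and checks that the states approach the degree-weighted average of the initial values; the graph, step size, and horizon are arbitrary choices.

```python
import numpy as np

# Small illustrative graph: a path on four nodes (any connected graph works).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)                      # node degrees
L_hat = np.eye(4) - A / d[:, None]     # Laplace operator I - D^{-1} A

x = np.array([1.0, 0.0, 3.0, -2.0])    # arbitrary initial states x(0)
target = d @ x / d.sum()               # degree-weighted average, invariant under (2)

dt = 0.01
for _ in range(5000):                  # forward Euler: x <- x + dt * u with u = -L_hat x
    x = x - dt * (L_hat @ x)

print("states after integration:", x)       # all entries close to `target`
print("weighted average of x(0): ", target)
```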

2.2 Graph Laplacians and Heat Kernel

In this work, we consider graphs which are weight-normalized so that every entry of the weighted adjacency matrix $A$ is either $0$ or $1$, and $A_{ij} = 1$ if and only if the unordered pair $\{v_i, v_j\} \in E$. Let $D$ denote the diagonal degree matrix with $D_{ii} = d_i$.

The Laplacian is defined $\mathcal{L} = I - D^{-1/2} A D^{-1/2}$. Let $\hat{\mathcal{L}}$ be the graph matrix $I - D^{-1} A$. We call $\hat{\mathcal{L}}$ the Laplace operator. We note that $\hat{\mathcal{L}}$ is similar to the matrix $\mathcal{L}$, since $\hat{\mathcal{L}} = D^{-1/2} \mathcal{L} D^{1/2}$.$^1$

The heat kernel $H_t$ of a graph is a solution to the heat differential equation

$$\frac{\partial}{\partial t} H_t = -\hat{\mathcal{L}}\, H_t.$$

The heat kernel can be formulated in the context of random walks on graphs. Consider the transition probability matrix $P = D^{-1} A$ associated to a random walk on the graph. Then the heat kernel is defined:

$$H_t = e^{-t(I - P)} \qquad\qquad (3)$$
$$\phantom{H_t} = e^{-t} \sum_{k=0}^{\infty} \frac{t^k}{k!}\, P^k. \qquad\qquad (4)$$

The following similarity transform of the heat kernel, $\mathcal{H}_t = D^{1/2} H_t D^{-1/2}$, is of interest for its symmetry. Using definition (3),

$$\mathcal{H}_t = e^{-t \mathcal{L}} = e^{-t} \sum_{k=0}^{\infty} \frac{t^k}{k!} \big( D^{-1/2} A D^{-1/2} \big)^k.$$

Heat kernel pagerank is a row vector determined by two parameters: a time $t \geq 0$, and a preference row vector $f$. It is given by the following equation:

$$\rho_{t,f} = f H_t = e^{-t} \sum_{k=0}^{\infty} \frac{t^k}{k!}\, f P^k.$$

Specifically, it is an exponentially weighted sum of random walks generated from a starting vector, $f$. As an added benefit, heat kernel pagerank simultaneously satisfies the heat equation, with the rate of diffusion controlled by the parameter $t$. Both properties are powerful tools in consensus problems.
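As a quick numerical illustration of this definition (an addition, not from the paper), heat kernel pagerank can be computed for small graphs by truncating the exponential series; the truncation depth K below is an arbitrary choice.

```python
import numpy as np

def heat_kernel_pagerank(A, t, f, K=100):
    """Truncated series rho_{t,f} = e^{-t} * sum_k (t^k / k!) * f P^k with P = D^{-1} A."""
    d = A.sum(axis=1)
    P = A / d[:, None]                 # random walk transition probability matrix
    rho = np.zeros_like(f, dtype=float)
    term = f.astype(float)             # f P^0
    weight = np.exp(-t)                # e^{-t} t^0 / 0!
    for k in range(K):
        rho += weight * term
        term = term @ P                # advance to f P^{k+1}
        weight *= t / (k + 1)          # advance to e^{-t} t^{k+1} / (k+1)!
    return rho

# Example: diffusion seeded at node 0 of a 4-cycle.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
f = np.array([1.0, 0.0, 0.0, 0.0])
print(heat_kernel_pagerank(A, t=3.0, f=f))   # entries sum to (approximately) 1
```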

3 Heat Kernel Pagerank for Weighted-Average Consensus

In this section we present a linear consensus protocol for a dynamic network and show how to compute a weighted average consensus for the protocol using heat kernel pagerank.

We first recall some principles of control theory. Consider the system $\dot{x}(t) = u(t)$ with controls as in (1). A point $x^*$ is an equilibrium point of the system if $\dot{x}^* = 0$, and $x^*$ is an equilibrium point if and only if $x(t) \equiv x^*$ is a trajectory. The system is globally asymptotically stable if, for every trajectory $x(t)$, $x(t) \to x^*$ as $t \to \infty$. To check this, two conditions are sufficient.

Definition 3.1 (Global asymptotic stability).

A system is globally asymptotically stable if

  1. it is stable in the Lyapunov sense, and

  2. the equilibrium $x^*$ is convergent, i.e., for every $\delta > 0$, there is some time $T$ such that

    $\| x(t) - x^* \| < \delta$ for every time $t \geq T$.

In particular, when considering a time-invariant linear state space model $\dot{x}(t) = -M x(t)$, for some matrix $M$, condition 1 is satisfied if $M$ is positive semidefinite.

3.1 Consensus and the Laplacian

Consider the network of integrator agents with dynamics $\dot{x}_i(t) = u_i(t)$, where each agent applies the distributed linear protocol (2). We can characterize the dynamics of the system by the Laplace operator of the underlying graph, as described by the following theorem:

Theorem 3.1.

Let $(G, x)$ be a dynamic multi-agent system and suppose each node of $G$ applies the distributed linear protocol (2). Then the value of $x$ at time $t$ is given by the solution to the system

$$\dot{x}(t) = -\hat{\mathcal{L}}\, x(t). \qquad\qquad (5)$$

Additionally, this protocol globally asymptotically reaches a weighted average consensus.

Proof.

Let $x^*$ be an equilibrium of the system (5). Then by definition of equilibrium, $\hat{\mathcal{L}} x^* = 0$, and therefore $x^*$ is a right eigenvector associated to the eigenvalue $0$. In particular, $x^*$ is in the null space of $\hat{\mathcal{L}}$. Since $G$ is connected, $\hat{\mathcal{L}}$ has exactly one zero eigenvalue. Upon consideration, we see that the corresponding eigenvector is $\mathbf{1}$, the all-ones vector, as the row sums of $\hat{\mathcal{L}}$ are all exactly zero. Thus, $x^* = \alpha \mathbf{1}$ for some scalar $\alpha$. Now, note that for the protocol (2), $\sum_i d_i \dot{x}_i(t) = \sum_i \sum_{j \in N_i} \big( x_j(t) - x_i(t) \big) = 0$, and so the weighted average value $\alpha$, determined by $\sum_i d_i x_i(t) \big/ \sum_i d_i$, is in fact invariant with respect to $t$. In other words, $\sum_i d_i x_i(t) = \sum_i d_i x_i(0)$ for all $t$, and

$$\alpha = \frac{\sum_i d_i\, x_i(0)}{\sum_i d_i}.$$

Therefore this equilibrium is in fact the weighted average of the initial values of the nodes, and all nodes reach this value. Also, as the system is time-invariant, the system is stable since the eigenvalues of $\hat{\mathcal{L}}$ are nonnegative ($\hat{\mathcal{L}}$ is similar to the positive semidefinite matrix $\mathcal{L}$).

By Definition 3.1, the Theorem is proved. ∎

Now we can summarize the state of the system with a single heat kernel pagerank vector.

Theorem 3.2.

Let $(G, x)$ be a dynamic multi-agent system and suppose each node of $G$ applies the distributed linear protocol (2). Let $D$ be the diagonal degree matrix of $G$. Then the state of the system at time $t$ is given by

$$x(t)^T = \rho_{t,f}\, D^{-1}, \qquad f = x(0)^T D, \qquad\qquad (6)$$

where $(\cdot)^T$ denotes the transpose.

Proof.

The solution to (5) is the evolving state of the system. This solution is

$$x(t) = e^{-t \hat{\mathcal{L}}}\, x(0) = H_t\, x(0). \qquad\qquad (7)$$

Using the symmetrized version of the heat kernel,

$$x(t)^T = x(0)^T H_t^T = x(0)^T \big( D^{-1/2} \mathcal{H}_t D^{1/2} \big)^T = x(0)^T D^{1/2}\, \mathcal{H}_t\, D^{-1/2} = \big( x(0)^T D \big) H_t\, D^{-1} = \rho_{t,f}\, D^{-1}, \qquad\qquad (8)$$

where line (8) uses the symmetry of $\mathcal{H}_t$ and $D$. Thus, the values given by (7) are related to the heat kernel pagerank vector $\rho_{t,f}$ with preference vector $f = x(0)^T D$. ∎
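The identity of Theorem 3.2 is easy to check numerically under the conventions above: the state $x(t) = H_t\,x(0)$ computed with a matrix exponential should agree with $\rho_{t,f}\,D^{-1}$ for $f = x(0)^T D$. The following sanity check (an illustration, not code from the paper) uses SciPy's `expm`.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
D = np.diag(d)
L_hat = np.eye(4) - A / d[:, None]      # Laplace operator I - D^{-1} A

x0 = np.array([2.0, -1.0, 0.5, 4.0])    # initial states
t = 2.5

H_t = expm(-t * L_hat)                  # heat kernel H_t = exp(-t (I - P))
state_direct = H_t @ x0                 # solution of (5), equation (7)

f = x0 @ D                              # preference vector f = x(0)^T D
rho = f @ H_t                           # heat kernel pagerank rho_{t,f} = f H_t
state_hkpr = rho / d                    # equation (6): x(t)^T = rho_{t,f} D^{-1}

print(np.allclose(state_direct, state_hkpr))   # True
```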

To compute the equilibrium state at which all agents reach consensus, an upper bound on the time $t$ suffices. Figure 3 depicts the results of computing weighted average consensus with heat kernel pagerank as in Theorem 3.2 for different values of $t$. The network is an undirected social network of dolphins [13] with randomly chosen initial state values (Figure 1). The chart plots total disagreement for the disagreement vector $\delta(t)$, where $\delta_i(t) = x_i(t) - \chi(x(0))$ for each node $v_i$. The vertical line corresponds to the value of $t$ used as the upper bound.

Figure 1: Dolphin social network.
Figure 2: Total disagreement over varying times $t$. Disagreement is computed in terms of the weighted average consensus $\chi(x(0))$. The red line denotes the data point for the chosen value of $t$.
Figure 3: Weighted average consensus convergence results.

3.2 An Algorithm for Computing Consensus Value Using Approximate Heat Kernel Pagerank

Our weighted average consensus algorithm uses an algorithm for approximating heat kernel pagerank as a subroutine. We use the following definition of an approximate heat kernel pagerank.

Definition 3.2.

Let $f$ be a vector over the nodes of a graph $G$ and let $\rho_{t,f}$ be the heat kernel pagerank vector over $G$ according to $t$ and $f$. Then we say that $\hat{\rho}_{t,f}$ is an $\epsilon$-approximate heat kernel pagerank vector if

  1. for every node $v$ in the support of $\hat{\rho}_{t,f}$,
    $(1 - \epsilon)\, \rho_{t,f}(v) - \epsilon \;\leq\; \hat{\rho}_{t,f}(v) \;\leq\; (1 + \epsilon)\, \rho_{t,f}(v)$, and

  2. for every node $v$ with $\hat{\rho}_{t,f}(v) = 0$, it must be that $\rho_{t,f}(v) \leq \epsilon$.
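For reference, the two conditions of Definition 3.2 translate directly into a small checker (an illustrative helper, not part of the paper):

```python
import numpy as np

def is_eps_approx_hkpr(rho_hat, rho, eps):
    """Check the two conditions of Definition 3.2 for rho_hat against the exact rho."""
    support = rho_hat != 0
    # Condition 1: multiplicative (1 +/- eps) bounds, with an eps additive slack below.
    cond1 = (np.all(rho_hat[support] >= (1 - eps) * rho[support] - eps) and
             np.all(rho_hat[support] <= (1 + eps) * rho[support]))
    # Condition 2: nodes omitted from the support carry at most eps true mass.
    cond2 = np.all(rho[~support] <= eps)
    return bool(cond1 and cond2)
```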

Theorem 3.3 (Weighted Average Consensus in Sublinear Time).

Let $(G, x)$ be a dynamic $n$-agent system and suppose each node of $G$ applies the distributed linear protocol (2). Then the state of the system can be approximated to within a multiplicative factor of $1 \pm \epsilon$ and an additive term of $\epsilon$, for any $0 < \epsilon < 1$, in time sublinear in $n$.

We call the algorithm AvgConsensus. The algorithm makes a call to ApproxHKPR, an extension of the algorithm presented in [8] for quickly computing an approximation of a restricted heat kernel pagerank vector. For the sake of completeness, the algorithm and a summary of the results of [8] are given at the end of this section.

Proof of Theorem 3.3.

First, the $\epsilon$-approximate vector returned by ApproxHKPR is an approximation of the true state by Theorem 3.2. Thus we are left to verify the approximation guarantee and the running time. The total running time is dominated by the heat kernel pagerank approximation, which is sublinear in $n$ by Theorem 3.4, below. Theorem 3.4 also verifies the approximation guarantee. ∎

AvgConsensus(G, x, t, ϵ)
input: a graph G given as an adjacency matrix, an initial state vector x, a time t, an error parameter ϵ.
output: an approximation of the state of the system at time t.

  D ← diagonal matrix of rowsums(G)
  f ← x^T D
  y ← ApproxHKPR(G, t, f, ϵ)
  return y D^{-1}
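A direct transcription of the pseudocode (assuming an `approx_hkpr` routine in the spirit of Algorithm 1 below; the names and the dense-matrix representation are illustrative choices) might look as follows:

```python
import numpy as np

def avg_consensus(A, x, t, eps, approx_hkpr):
    """AvgConsensus: approximate the state x(t) under protocol (2).

    A           -- adjacency matrix of the (connected) network G
    x           -- initial state vector x(0)
    t           -- time at which to approximate the state
    eps         -- error parameter
    approx_hkpr -- callable (A, t, f, eps) -> eps-approximate rho_{t,f}
    """
    d = A.sum(axis=1)            # row sums of G give the degree matrix D
    f = x * d                    # preference vector f = x(0)^T D
    y = approx_hkpr(A, t, f, eps)
    return y / d                 # y D^{-1}: divide componentwise by the degrees
```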

3.3 A Sublinear Time Heat Kernel Pagerank Approximation Algorithm

The analysis for approximating heat kernel pagerank follows easily from that for a restricted heat kernel pagerank vector by considering the entire vertex set rather than a subset. We refer the reader to [8] for a more complete description.

input: a graph G, a time t ≥ 0, a preference vector f, an error parameter 0 < ϵ < 1.
output: ρ̂, an ϵ-approximation of ρ_{t,f}.

initialize a 0-vector ρ̂ of dimension n, where n = |V|
normalize f to be a probability distribution vector
for r iterations, with r chosen as in [8], do
     choose a starting vertex u according to the distribution vector f
     Start
         simulate a P random walk where k steps are taken with probability e^{-t} t^k / k!, and
         let v be the last vertex visited in the walk
         ρ̂[v] ← ρ̂[v] + 1
     End
end for
return ρ̂ / r
Algorithm 1 ApproxHKPR(G, t, f, ϵ)
Theorem 3.4.

Let $G$ be a graph, $t \geq 0$, and $0 < \epsilon < 1$. Then the algorithm ApproxHKPR($G, t, f, \epsilon$) outputs an $\epsilon$-approximate vector of the heat kernel pagerank $\rho_{t,f}$ for $G$ with probability at least $1 - \epsilon$. The running time of ApproxHKPR is sublinear in the number of nodes of $G$.
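A minimal Monte Carlo sketch of Algorithm 1 is given below. It assumes the Poisson-weighted walk lengths described above, handles signed preference vectors by importance sampling, and leaves the number of samples `r` and the walk-length cap `K` as parameters, since the exact constants of [8] are not reproduced here.

```python
import numpy as np

def approx_hkpr(A, t, f, eps, r=10000, K=None, rng=None):
    """Monte Carlo sketch of ApproxHKPR: estimate rho_{t,f} = f H_t by recording the
    endpoints of random walks whose lengths are Poisson(t)-distributed (capped at K)."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    d = A.sum(axis=1)
    mass = np.abs(f).sum()
    p_start = np.abs(f) / mass                 # sample start nodes proportionally to |f|
    if K is None:                              # heuristic cap on the walk length
        K = int(2 * t + 10 * np.log(1.0 / eps)) + 1
    rho_hat = np.zeros(n)
    for _ in range(r):
        i = rng.choice(n, p=p_start)           # starting vertex drawn according to f
        k = min(rng.poisson(t), K)             # number of steps, k ~ Poisson(t)
        u = i
        for _ in range(k):
            u = rng.choice(n, p=A[u] / d[u])   # one step of the walk P = D^{-1} A
        rho_hat[u] += np.sign(f[i]) * mass     # unbiased contribution, even for signed f
    return rho_hat / r
```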

4 Heat Kernel Pagerank for Consensus in Leader-Following Formations

In this section we consider a multi-agent network in which a certain subset of agents are leaders, and the rest are dubbed followers. In this scenario, leaders adjust their values according to an individual protocol, while followers in the system adjust according to their communication channels as usual. The consensus goal in this case is a leader-following consensus, in which all agents agree on a value by following the leaders.

Let $u^f$ denote the protocol among the set of followers and let $u^l$ denote the control dictated by the leaders and influencing the followers. Similarly, let $x^f$ denote the state of the followers and $x^l$ denote the state of the leaders. The vectors $x^f$ and $x^l$ can be understood as the usual state vector restricted to the following and leading agents, respectively. Then we have the following definition.

Definition 4.1 (Leader-following consensus).

A leader-following consensus of a system is achieved if for every agent $i$ there is a local protocol $u_i$ such that $x_i(T) = \chi(x(0))$ for some finite time $T$ and some operator $\chi$. In this case, we call the value $\chi(x(0))$ the leader-following consensus value.

For the protocol

$$u_i(t) = \frac{1}{d_i} \sum_{j \in N_i} \big( x_j(t) - x_i(t) \big), \qquad i \in f, \qquad\qquad (9)$$

the value of $x^f$ is given by the dynamics

$$\dot{x}^f(t) = u^f(t),$$

where the neighbor sums in (9) range over both following and leading neighbors. We let the followers abide by protocol (9).

Let $\hat{\mathcal{L}}_{ff}$ be the Laplace operator restricted to rows and columns corresponding to the followers, and $\hat{\mathcal{L}}_{fl}$ be $\hat{\mathcal{L}}$ with rows restricted to the followers and columns restricted to the leaders. Then the dynamics of the followers can be summarized by:

$$u^f(t) = -\big( \hat{\mathcal{L}}_{ff}\, x^f(t) + \hat{\mathcal{L}}_{fl}\, u^l \big).$$

Since $u^f$ is the control of the subnetwork induced by the group of followers, this can be rewritten

$$x^f(t) = -\hat{\mathcal{L}}_{ff}^{-1} \big( u^f(t) + \hat{\mathcal{L}}_{fl}\, u^l \big).$$

Indeed, as long as the subgraph induced by the subset of followers is connected, the inverse $\hat{\mathcal{L}}_{ff}^{-1}$ exists. We have arrived at the following.

Theorem 4.1.

Let $(G, x)$ be a dynamic multi-agent system with proper subsets of leaders, $l$, and followers, $f$, such that the induced subgraph on $f$ is connected. Suppose the followers apply the protocol (9), and suppose the leaders apply some individual protocol dictated only by that leader's state. Then the followers' state values at time $t$ are given by the solution to the system

$$\hat{\mathcal{L}}_{ff}\, x^f(t) = -\big( u^f(t) + \hat{\mathcal{L}}_{fl}\, u^l \big).$$

An efficient algorithm called GreensSolver for solving linear systems with a linear protocol applied to a subset of nodes is given in [8]. They show that the solution can be computed with the symmetric heat kernel using the relationship

$$\big( x^f \big)^T = b^T D_S^{1/2} \left( \int_0^{\infty} \mathcal{H}_{t,S}\; dt \right) D_S^{-1/2}, \qquad\qquad (10)$$

where $b = -\big( u^f(t) + \hat{\mathcal{L}}_{fl}\, u^l \big)$, $\mathcal{H}_{t,S} = e^{-t \mathcal{L}_S}$, and $\mathcal{L}_S$ is $\mathcal{L}$ with rows and columns restricted to the set $S$ of followers, with $D_S$ the corresponding restriction of $D$.

The solution can be approximated by sampling sufficiently many values of $t$. Further, it is shown in [8] that the solution can be approximated in time sublinear in $s$, where $s$ is the size of the subset of followers.

LFConsensus(G, x, t, f, l, u^l, ϵ)
input: a graph G given as an adjacency matrix, an initial state vector x, a time t, a subset f of followers, a subset l of leaders, the protocol u^l applied by the leaders, an error parameter ϵ.
output: an approximation of the followers' state x^f(t).

procedure FollowerProt(G, x, t)
     for each i ∈ f do
          u^f[i] ← u_i(t) = (1/d_i) ∑_{j∈N_i} (x_j(t) − x_i(t))
     end for
     return u^f
end procedure

procedure b(t)
     u^f ← FollowerProt(G, x, t)
     b ← −(u^f(t) + L_{fl} u^l)
     return b
end procedure

b ← b(t)
s ← |f|
T ← s^3 log(1/ϵ)
N ← T/ϵ
r ← (log s + log(ϵ^{−1})) ϵ^{−2}
initialize a 0-vector x^f of dimension s
for i = 1 to r do
     draw j from {1, …, N} uniformly at random
     x_i ← ApproxHK(G, jT/N, b, f, ϵ)
     x^f ← x^f + x_i
end for
return (1/r) x^f D_S^{−1/2}
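A compact sketch of LFConsensus following the pseudocode above is shown below. It assumes a restricted heat kernel pagerank routine `approx_hkpr_restricted(A, t, b, subset, eps)` analogous to Algorithm 1, and it mirrors the sample counts and the final $D_S^{-1/2}$ scaling of the pseudocode rather than the exact constants of [8]; treat it as illustrative only.

```python
import numpy as np

def lf_consensus(A, x, t, followers, leaders, u_l, eps, approx_hkpr_restricted, rng=None):
    """Sketch of LFConsensus: approximate the followers' states by averaging restricted
    heat kernel pagerank vectors sampled at randomly chosen times."""
    rng = rng or np.random.default_rng()
    d = A.sum(axis=1)
    L_hat = np.eye(A.shape[0]) - A / d[:, None]        # Laplace operator I - D^{-1} A

    # Right-hand side b = -(u^f(t) + L_fl u^l), with u^f given by protocol (9).
    u_f = np.array([(A[i] @ x - d[i] * x[i]) / d[i] for i in followers])
    L_fl = L_hat[np.ix_(followers, leaders)]
    b = -(u_f + L_fl @ u_l)

    s = len(followers)
    T = s ** 3 * np.log(1.0 / eps)                     # time horizon, as in the pseudocode
    N = int(np.ceil(T / eps))                          # number of discretization points
    r = int(np.ceil((np.log(s) + np.log(1.0 / eps)) / eps ** 2))   # samples, per pseudocode

    x_f = np.zeros(s)
    for _ in range(r):
        j = rng.integers(1, N + 1)                     # pick a time j * T / N uniformly
        x_f += approx_hkpr_restricted(A, j * T / N, b, followers, eps)
    return (x_f / r) / np.sqrt(d[followers])           # (1/r) x^f D_S^{-1/2}
```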

The running time and approximation guarantees of LFConsensus follow from those of GreensSolver [8], and we have the following:

Theorem 4.2 (Leader-Following Consensus in Sublinear Time).

Let $(G, x)$ be a dynamic multi-agent system with proper subsets of leaders, $l$, and followers, $f$, such that the induced subgraph on $f$ is connected. Suppose the followers apply the protocol (9), and suppose the leaders apply some individual protocol dictated only by that leader's state. Then the state of the followers can be approximated to within a multiplicative factor of $1 \pm \epsilon$ and an additive term of $\epsilon$, for any $0 < \epsilon < 1$, in time sublinear in $s$, where $s$ is the size of the subset of followers.

5 Discussion

The significance of sublinear running times is scalability. The robustness and efficiency of the algorithms AvgConsensus and LFConsensus are of great importance for networks too large to fit in memory, and the running time/approximation tradeoff allows for appropriate tuning. This is especially notable for local algorithms, which reduce computation over large networks to a small subset. For instance, while the group of leaders may be small in a leader-following framework, the difference in complexity between computing consensus in a leader-following formation and computing a full group consensus can be significant. The subset of followers influenced by the leaders may be a small portion of the entire graph, so that $s \ll n$, and we are spared work over the entire graph in the case that we are interested in only a small area. In these cases, the gains in running time are valuable.

Footnotes

  1. The Laplacian used in [19] is the matrix $L = D - A$, a common variation.

References

  1. S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, pages 107–117, 1998.
  2. D. Cai, X. He, and J. Han. Spectral regression for efficient regularized subspace learning. In IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
  3. P. K. Chan, M. D. Schlag, and J. Y. Zien. Spectral k-way ratio-cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(9):1088–1096, 1994.
  4. L. Cheng, Z.-G. Hou, and M. Tan. Reaching a consensus in networks of high-order integral agents under switching directed topology. arXiv preprint arXiv:1304.3972, 2013.
  5. P. Christiano, J. A. Kelner, A. Madry, D. A. Spielman, and S.-H. Teng. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. CoRR, abs/1010.2921, 2010.
  6. F. Chung. The heat kernel as the pagerank of a graph. Proceedings of the National Academy of Sciences, 104(50):19735–19740, 2007.
  7. F. Chung. A local graph partitioning algorithm using heat kernel pagerank. Internet Mathematics, 6(3):315–330, 2009.
  8. F. Chung and O. Simpson. Solving linear systems with boundary conditions using heat kernel pagerank. In Algorithms and Models for the Web Graph, pages 203 – 219, 2013.
  9. J. Cortés and F. Bullo. Coordination and geometric optimization via distributed dynamical systems. SIAM Journal on Control and Optimization, 44(5):1543–1574, 2005.
  10. J. Fax and R. Murray. Information flow and cooperative control of vehicle formations. IEEE Transactions on Automatic Control, 49(9):1465–1476, 2004.
  11. I. Koutis, G. L. Miller, and R. Peng. Approaching optimality for solving SDD linear systems. In IEEE 51st Annual Symposium on Foundations of Computer Science, pages 235–244. IEEE, 2010.
  12. I. Koutis, G. L. Miller, and R. Peng. A nearly-$m \log n$ time solver for SDD linear systems. In IEEE 52nd Annual Symposium on Foundations of Computer Science, pages 590–598. IEEE, 2011.
  13. D. Lusseau, K. Schneider, O. Boisseau, P. Haase, E. Slooten, and S. Dawson. The bottlenose dolphin community of Doubtful Sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology, 54:396–405, 2003.
  14. N. A. Lynch. Distributed Algorithms. Morgan Kaufmann, 1996.
  15. K. L. Moore, T. Vincent, F. Lashhab, and C. Liu. Dynamic consensus networks with application to the analysis of building thermal processes. In Proceedings of the 18th IFAC World Congress, Milano, Italy, 2011.
  16. A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 2:849–856, 2002.
  17. W. Ni and D. Cheng. Leader-following consensus of multi-agent systems under fixed and switching topologies. Systems & Control Letters, 59(3–4):209–217, 2010.
  18. R. Olfati-Saber and R. Murray. Graph rigidity and distributed formation stabilization of multi-vehicle systems. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 3, pages 2965–2971, 2002.
  19. R. Olfati-Saber and R. Murray. Consensus protocols for networks of dynamic agents. In Proceedings of the American Control Conference, volume 2, pages 951–956, 2003.
  20. R. Olfati-Saber and R. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9):1520–1533, 2004.
  21. F. Paganini, J. Doyle, and S. Low. Scalable laws for stable network congestion control. In Proceedings of the 40th IEEE Conference on Decision and Control, volume 1, pages 185–190, 2001.
  22. R. Peng and D. A. Spielman. An efficient parallel solver for SDD linear systems. arXiv preprint arXiv:1311.3286, 2013.
  23. A. Rahmani, M. Ji, M. Mesbahi, and M. Egerstedt. Controllability of multi-agent systems from a graph-theoretic perspective. SIAM Journal on Control and Optimization, 48(1):162–186, 2009.
  24. A. Raj, A. Kuceyeski, and M. Weiner. A network diffusion model of disease progression in dementia. Neuron, 73(6):1204–1215, 2012.
  25. J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
  26. D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the thirty-sixth annual ACM symposium on Theory of Computing, pages 81–90. ACM, 2004.
  27. J. Toner and Y. Tu. Flocks, herds, and schools: A quantitative theory of flocking. Phys. Rev. E, 58:4828–4858, Oct 1998.