
# Control of Asynchronous Imitation Dynamics on Networks

James Riehl, Pouria Ramazi and Ming Cao
###### Abstract

Imitation is widely observed in populations of decision-making agents. Using our recent convergence results for asynchronous imitation dynamics on networks, we consider how such networks can be efficiently driven to a desired equilibrium state by offering payoff incentives for using a certain strategy, either uniformly or targeted to individuals. In particular, if for each available strategy, agents playing that strategy receive maximum payoff when their neighbors play that same strategy, we show that providing incentives to agents in a network that is at equilibrium will result in convergence to a unique new equilibrium. For the case when a uniform incentive can be offered to all agents, this result allows the computation of the optimal incentive using a binary search algorithm. When incentives can be targeted to individual agents, we propose an algorithm to select which agents should be chosen based on iteratively maximizing a ratio of the number of agents who adopt the desired strategy to the payoff incentive required to get those agents to do so. Simulations demonstrate that the proposed algorithm computes near-optimal targeted payoff incentives for a range of networks and payoff distributions in coordination games.

¹ This work was supported in part by the European Research Council (ERC-StG-307207).
² ENTEG, Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands, {j.r.riehl,p.ramazi,m.cao}@rug.nl

## I Introduction

Networks in which agents make decisions by imitating their most successful neighbors appear frequently in sociology, biology, economics, and engineering [1, 2]. Such networks of success-based learners often exhibit complex non-convergent behaviors even when the agents are homogeneous. In particular, these networks are less likely to converge than networks of agents who use best responses; in other words, focusing on the success of others hinders the agents from reaching satisfactory decisions [3]. This non-convergence relates to volatility and instability, which can have consequences ranging from costly inefficiencies to catastrophic failures. Imitation is also known to lead to selfish behaviors in various social contexts [2], which can manifest as social dilemmas such as tragedy of the commons, in which the pursuit of selfish goals leads to globally suboptimal outcomes. However, in many of these cases it may be possible to circumvent the undesired global outcomes by administering some small control input to the agents, locally. Given that this could require a large amount of total control effort, it is critical to develop methods for achieving these goals as efficiently as possible. Game theory is widely used to model distributed optimization and learning in large populations of autonomous agents [4, 5, 6, 7, 8, 9, 10, 11], but more specifically, evolutionary game theory allows for strategies to propagate through populations by means other than rational choice, and therefore provides an ideal framework to model networks of imitative agents [12, 13, 14, 15, 16, 17].

The use of payoff incentives to drive populations towards a desired strategy is gaining in popularity [18, 15, 19, 20]. Several approaches have been used to control such networks, including offering incentives uniformly to the agents in a network [21, 22], targeting individual agents with incentives [22], and directly controlling the strategies of some of the agents [23, 19]. There are two key properties that facilitate the design of control algorithms for each of these cases; if the network is at some equilibrium, then providing incentives to the agents should (i) cause no agent to switch away from a desired strategy, and (ii) result in convergence of the network to a unique equilibrium state. In [22], we have established these properties for certain types of agents who update asynchronously with best responses, but the conditions under which networks of imitative agents can be driven to a desired equilibrium remain to be discovered.

In this paper, we design efficient incentive-based control algorithms for three different control problems on finite networks of heterogeneous decision-making individuals who asynchronously imitate their highest-earning neighbors. We start by building a general framework for asynchronous network games with two available strategies, A and B. Our main theoretical contribution is to show that in any such network game, regardless of the update rule, if all agents are A-coordinating, i.e., agents who update to strategy A would also do so if they had more neighbors playing A, then providing incentives to the agents when the network is at equilibrium (i) causes no agent to switch from A to B, and (ii) leads the network to a unique equilibrium regardless of the agents’ activation sequence. Next we establish that networks governed by imitation dynamics satisfy these conditions provided that all agents are opponent coordinating, i.e., agents’ payoffs are maximized when their neighbors play the same strategy that they do. These results make possible the design of efficient control algorithms, using payoff incentives, to guarantee the convergence of networks of imitative agents to a desired strategy. First, for the case when an incentive is offered uniformly to all agents, we provide a simple binary search algorithm to compute the optimal incentive. Next, when incentives can vary from agent to agent, inspired by our approach for controlling best-response networks [22], we propose the Iterative Potential-to-Reward Optimization (IPRO) algorithm, which selects agents to be targeted by iteratively maximizing a weighted ratio of the number of agents who adopt the desired strategy to the payoff incentive required to get those agents to do so. Simulations show that the IPRO algorithm achieves near-optimal performance in a variety of cases and outperforms several other incentive-targeting algorithms based on degree, earnings, and other criteria.

## II Asynchronous network games

Although this paper focuses primarily on imitative agents, since some of the results apply to a broader class of dynamics, we present here a generalized framework for two-strategy asynchronous games on networks.

Consider an undirected network G = (V, E), in which the nodes represent agents and the edges define two-player games between pairs of adjacent neighbors. Each agent i ∈ V starts by playing one of the strategies A or B against all neighbors, then accumulates the resulting payoffs, and the process repeats at each time k = 0, 1, 2, …. The possible payoffs to agent i are defined by the payoff matrix π^i, whose entry π^i_{p,q} represents agent i’s payoff when playing strategy p against a neighbor playing strategy q, where p, q ∈ {A, B}, and we identify A with index 1 and B with index 2 for the purposes of matrix indexing. For compactness of notation, we stack all payoff matrices in a 3-dimensional matrix π. The payoff of agent i against all neighbors at time k is given by

 u_i(k) = ∑_{j∈N_i} π^i_{x_i(k), x_j(k)},

where N_i is the set of agent i’s neighbors. After collecting all payoffs, one random agent i activates at time k and updates to a new strategy at time k + 1 according to an update rule, which we denote by f_i:

 x_i(k+1) = ⎧ A     if f_i(x(k)) = {A}
            ⎨ B     if f_i(x(k)) = {B}        (1)
            ⎩ z_i   if f_i(x(k)) = {A, B}

where the function f_i maps the strategy state x(k) ∈ {A, B}^n to a non-empty subset of {A, B}. Each z_i is fixed and equals either A or B. By a network game we mean a network of agents with payoff matrices π who update based on the rule f = (f_1, …, f_n). We do not prescribe any particular process for driving the activation sequence, but we do make the following assumption, which simply ensures that no agent will stop updating after some amount of time.
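To make the framework concrete, the asynchronous update process in (1) can be sketched as follows. The data structures here (adjacency lists, 2×2 payoff matrices with A indexed by 0 and B by 1, and a generic rule `f`) are illustrative assumptions, not the paper’s implementation.

```python
import random

def payoff_of(i, x, nbrs, payoff):
    """Accumulated payoff u_i of agent i against all neighbors."""
    idx = {"A": 0, "B": 1}
    return sum(payoff[i][idx[x[i]]][idx[x[j]]] for j in nbrs[i])

def asynchronous_step(x, nbrs, payoff, f, z):
    """One activation: a random agent updates via its rule f_i, with
    fixed tie-break z[i] when f_i returns {'A', 'B'}, as in (1)."""
    i = random.randrange(len(x))
    best = f(i, x, nbrs, payoff)     # f_i(x(k)), a non-empty subset of {A, B}
    if best == {"A"}:
        x[i] = "A"
    elif best == {"B"}:
        x[i] = "B"
    else:                            # tie {'A', 'B'}: use the fixed z_i
        x[i] = z[i]
    return x
```

Any concrete update rule (best response, imitation, and so on) plugs in as `f`.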

###### Assumption 1.

Every agent activates infinitely many times as time goes to infinity.

Agents’ strategies evolve under the update rule according to the sequence in which agents activate, and may converge to an equilibrium state or continue to fluctuate. An equilibrium of the network game is a state x* at which no agent will switch strategies under its update rule, implying that if x(k′) = x* for some k′, then x(k) = x* for all k ≥ k′, regardless of which agent is active at each time. Our eventual goal is to use payoff incentives to control the dynamics of network games such that the network reaches, or gets as close as possible to, a desired equilibrium state in which every agent plays strategy A. By offering payoff incentives to a network, we mean offering non-negative rewards for playing A to one or more agents in the network, which equates to adding non-negative constants to the first row of the corresponding payoff matrices. To set the foundation for the main control results, we first investigate when a network game governed by an arbitrary update rule will reach a unique equilibrium after payoff incentives are offered.

## III Unique equilibrium convergence of A-coordinating network games

Equilibrium convergence is a key property of network games, and can be guaranteed for certain classes of update rules and agent payoff matrices [3, 13]. However, it is not generally the case that such networks will converge to a unique equilibrium, a property which is highly desirable for the design of efficient and predictable control algorithms. Here we establish conditions on the agents and update rule under which unique equilibrium convergence can be guaranteed.

We say a network game is A-coordinating if any agent who updates to strategy A would also do so if some agents currently playing B were instead playing A. Formally, we have the following definition.

###### Definition 1.

A network game is A-coordinating if for any two strategy vectors y, z ∈ {A, B}^n satisfying

 y_i = A ⇒ z_i = A  ∀i ∈ V, (2)

the following holds

 f_i(y) = {A} ⇒ f_i(z) = {A}  ∀i ∈ V (3)

and

 f_i(y) = {A, B} ⇒ A ∈ f_i(z)  ∀i ∈ V. (4)

The A-coordinating property implies that an increase in the number of agents playing A may lead agents to switch from B to A but will never cause agents to switch from A to B, yielding a monotone behavior in agents’ strategy updates. We say that a network game is A-monotone if, after offering payoff incentives to one or more agents when the network is at any equilibrium, no agent will ever switch from A to B. The following proposition holds.

###### Proposition 1.

Every A-coordinating network game is A-monotone.

We need the following lemma for the proof.

###### Lemma 1.

Consider an A-coordinating network game. If for some agent i, one of the following holds at some time k:

1. A ∈ f_i(x(k−1)) and f_i(x(k)) = {B},

2. f_i(x(k−1)) = {A} and f_i(x(k)) = {A, B},

then an agent has switched from A to B at time k.

###### Proof:

We prove by contradiction. Assume the negation of Lemma 1 holds for a network at some time k, and let y = x(k−1) and z = x(k) denote the states of the network at times k−1 and k. Since no agent has switched from A to B at time k, the vectors y and z satisfy Condition (2). Now if Case 1 takes place, then either f_i(y) = {A} while f_i(z) = {B}, violating (3), or f_i(y) = {A, B} while A ∉ f_i(z), violating (4), a contradiction, yielding the result. If, on the other hand, Case 2 takes place, then (3) is violated since f_i(z) ≠ {A}, a contradiction, completing the proof. ∎

###### Proof:

We again prove by contradiction. Assume the contrary and let k* + 1 be the first time at which some agent i plays B after playing A at time k*. By (1), one of the following cases holds:

Case 1: f_i(x(k*)) = {B}. On the other hand, either the strategy of agent i is A at time 0, yielding A ∈ f_i(x(0)) since the network is at equilibrium at time 0, or there is some time k′ ≤ k* such that agent i switches to A at k′, yielding A ∈ f_i(x(k′−1)). So in any case, there exists some k′′ < k* such that A ∈ f_i(x(k′′)). Therefore, since f_i(x(k*)) = {B}, there exists some time k′′′ ∈ (k′′, k*] such that A ∈ f_i(x(k′′′−1)) and f_i(x(k′′′)) = {B}. In view of Lemma 1, this implies that an agent has switched from A to B at time k′′′ ≤ k*, a contradiction since the first such switch takes place at time k* + 1, yielding the result.

Case 2: f_i(x(k*)) = {A, B} and z_i = B. On the other hand, either the strategy of agent i is A at time 0, yielding f_i(x(0)) = {A} since z_i = B and the network is at equilibrium at time 0, or there is some time k′ ≤ k* such that agent i switches to A at k′, yielding f_i(x(k′−1)) = {A}. So in any case, there exists some k′′ < k* such that f_i(x(k′′)) = {A}. Therefore, since f_i(x(k*)) = {A, B}, there exists some time k′′′ ∈ (k′′, k*] such that f_i(x(k′′′−1)) = {A} and f_i(x(k′′′)) = {A, B}. In view of Lemma 1, this implies that an agent has switched from A to B at time k′′′ ≤ k*, a contradiction, completing the proof. ∎

Moreover, we say that a network switches sequence-independently if, after offering incentives to one or more agents when the network is at equilibrium, any agent who switches from B to A under one activation sequence will do so under any activation sequence (possibly at a different time).

###### Proposition 2.

Every A-coordinating network switches sequence-independently.

This proposition can be explained intuitively as follows. Consider two activation sequences S^1 and S^2. Let j_1 be the first agent who switches from B to A under the sequence S^2. Since switches from A to B are impossible due to Proposition 1, this must be the first switch of any kind. The first time agent j_1 is active under the sequence S^1, she will also switch from B to A, since up to that time agents may have switched only from B to A under S^1, again due to the monotonicity established in Proposition 1. Then by induction, the same can be shown for the second and later agents who switch their strategies from B to A under S^2. We formalize and prove this statement in the following lemma, borrowing some ideas from our previous result in [22]. Denote by j_s the agent that is active at time s under S^2. Let k_1 be the first time when agent j_1 is active in S^1. Then for s ≥ 2, define k_s as the first time after k_{s−1} that agent j_s is active in S^1. The time k_s exists because of the assumption that each agent activates infinitely many times. Denote by x^1 and x^2 the strategy vectors under the activation sequences S^1 and S^2, respectively.

###### Lemma 2.

Consider an A-coordinating network game which is at equilibrium at time 0, and suppose that some payoff incentives are offered at time 0. Then given any two activation sequences S^1 and S^2, the following holds for s = 1, 2, …:

 x^2_{j_s}(s+1) = A ⇒ x^1_{j_s}(k_s+1) = A. (5)
###### Proof:

We prove by induction on s. The statement is first shown for s = 1. Suppose x^2_{j_1}(2) = A. The initial strategy of agent j_1 is the same under both sequences, i.e., x^1_{j_1}(0) = x^2_{j_1}(0). Moreover, since the network game is A-monotone in view of Proposition 1, no agent has switched to B before time k_1 under S^1, so every agent playing A in x^2(1) also plays A in x^1(k_1). So since the network game is A-coordinating, it follows that x^1_{j_1}(k_1+1) = A if x^2_{j_1}(2) = A, verifying (5) for s = 1.

Now assume that (5) holds for s = 1, …, m−1, and suppose x^2_{j_m}(m+1) = A. Since (5) holds for all s ≤ m−1, and because of Proposition 1, we obtain that if any agents have switched, and hence fixed, their strategies from B to A under S^2 up to time m, they have also done so under S^1 up to time k_m. Moreover, no agent has switched from A to B under S^1. Thus, the strategy vectors x^2(m) and x^1(k_m) satisfy the condition in (2). Hence, (5) is true for s = m since the network game is A-coordinating. ∎

###### Proof:

The proof follows directly from Lemma 2. ∎

These two properties of A-coordinating network games lead to the main result of this section. We say that a network game is uniquely convergent if, after some payoff incentives are offered when the network is at equilibrium, the network will again reach an equilibrium state, which is unique and does not depend on the sequence in which agents activate.

###### Theorem 1.

Every A-coordinating network game is uniquely convergent.

###### Proof:

According to Proposition 1, no agent switches from A to B. Since by Assumption 1 every agent activates infinitely many times, it follows that the network will reach an equilibrium state at some finite time after at most n total strategy switches. It remains to prove the uniqueness of the equilibrium for all activation sequences, which we do by contradiction. Assume that there exist two activation sequences S^1 and S^2 that drive the network to two distinct equilibrium states. This implies the existence of an agent i whose strategy differs between the two equilibria, say A under the equilibrium of S^1 and B under the equilibrium of S^2, without loss of generality. However, in view of Proposition 2, agent i will switch from B to A at some time under S^2 as well, and will not change afterwards because of Proposition 1, implying that the network was not at equilibrium under S^2, which is a contradiction and completes the proof. ∎

## IV Imitation update rule

The imitation update rule dictates that agent i, active at time k, updates at time k + 1 to the strategy of the agent earning the highest payoff at time k in the self-inclusive neighborhood N_i ∪ {i}. If neighbors using both strategies earn the highest payoff, we assume agent i does not switch:

 x_i(k+1) = ⎧ A        if S^M_i(k) = {A}
            ⎨ B        if S^M_i(k) = {B}        (6)
            ⎩ x_i(k)   if S^M_i(k) = {A, B}

where S^M_i(k) is the set of strategies earning the maximum payoff in the self-inclusive neighborhood of agent i, that is

 S^M_i(k) ≜ { x_j(k) | j ∈ N_i ∪ {i}, u_j(k) = max_{r∈N_i∪{i}} u_r(k) }.
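A minimal sketch of the imitation update (6), under assumed data structures (adjacency lists and 2×2 payoff matrices indexed with A = 0, B = 1):

```python
def accumulated_payoff(i, x, nbrs, payoff):
    """Accumulated payoff u_i of agent i against all neighbors."""
    idx = {"A": 0, "B": 1}
    return sum(payoff[i][idx[x[i]]][idx[x[j]]] for j in nbrs[i])

def top_strategies(i, x, nbrs, payoff):
    """S^M_i: strategies of the maximum earners in N_i ∪ {i}."""
    group = nbrs[i] + [i]
    u = {j: accumulated_payoff(j, x, nbrs, payoff) for j in group}
    u_max = max(u.values())
    return {x[j] for j in group if u[j] == u_max}

def imitation_update(i, x, nbrs, payoff):
    """Imitation rule (6): adopt the unique top strategy, keep x[i] on a tie."""
    best = top_strategies(i, x, nbrs, payoff)
    if len(best) == 1:
        x[i] = next(iter(best))
    return x
```

For example, on a three-agent line where both ends of an A–A edge out-earn a lone B-player, the B-player imitates its A-playing neighbor when it activates.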

Asynchronous imitation updates may not in general result in convergence to an equilibrium, but we have established in [3] that such networks will converge when all agents are opponent coordinating, i.e., earn maximum payoff when their neighbors play the same strategy that they do. Equivalently, each diagonal entry of the payoff matrix of an opponent-coordinating agent is greater than the off-diagonal entry in the same row:

 π^i_{1,1} > π^i_{1,2},  π^i_{2,2} > π^i_{2,1}. (7)
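Condition (7) is straightforward to check programmatically; a small helper, assuming 2×2 payoff matrices indexed with A = 0 and B = 1:

```python
def is_opponent_coordinating(pi):
    """Check condition (7): each diagonal entry of the 2x2 payoff matrix pi
    exceeds the off-diagonal entry in its row."""
    return pi[0][0] > pi[0][1] and pi[1][1] > pi[1][0]
```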

The following proposition implies that such networks are also -coordinating.

###### Proposition 3.

Every network of opponent-coordinating agents who update according to the imitation rule is A-coordinating.

###### Proof:

Consider two strategy vectors y and z satisfying (2), and let the network be at state y. First we look at the case when f_i(y) = {A} for some agent i, implying that every highest-earning agent in the self-inclusive neighborhood of agent i is an A-playing agent. Now, if the strategies of some of the B-playing agents are changed to A so that the network reaches state z, then the payoff of no A-playing agent decreases and the payoff of no B-playing agent increases, since all agents are opponent coordinating. Hence, every highest-earning agent in the self-inclusive neighborhood of agent i will still be an A-playing agent, yielding f_i(z) = {A} and resulting in (3). The case when f_i(y) = {A, B} can be proven similarly. ∎

The next corollary follows directly from Theorem 1.

###### Corollary 1.

Every network of opponent-coordinating agents is A-monotone and uniquely convergent.

That is, when a network of opponent-coordinating agents is at equilibrium at time 0, if non-negative rewards for playing A are offered to one or more agents at time 0, then no agent will switch from A to B at any time k ≥ 0, and the whole network will reach a unique equilibrium at some finite time.

## V Control through Payoff Incentives

Having established unique equilibrium convergence in networks of opponent-coordinating agents, we now investigate the efficient use of payoff incentives to drive such networks of imitative agents from any undesired equilibrium toward a desired equilibrium in which all, or at least more, agents play strategy A.

### V-A Uniform Reward Control

Suppose that some central agency has the ability to offer a reward r ≥ 0 to all agents whenever they play strategy A. The resulting payoff matrix is given by

 π̂^i = ⎡ a_i + r  b_i + r ⎤
        ⎣ c_i      d_i     ⎦

for each agent i, where a_i ≜ π^i_{1,1}, b_i ≜ π^i_{1,2}, c_i ≜ π^i_{2,1}, and d_i ≜ π^i_{2,2}. Let 1 denote the n-dimensional strategy vector in which each agent plays A. The control objective in this case is the following.

###### Problem 1 (Uniform reward control).

Given a network game and initial strategies x(0), find the infimum reward r* such that for every r > r*, every agent will eventually play A.

In networks of opponent-coordinating agents, it is relatively straightforward to compute the optimal value of r once we have established the properties in Section III. We take advantage of two key properties established in Corollary 1. First, the number of agents who converge to A is monotone in the value of r due to the A-monotone property. Second, simulations of the network game are fast to compute due to the unique convergence property. That is, to test the effect of a particular payoff incentive, since all activation sequences will result in the same equilibrium, we can choose a sequence consisting only of agents who will switch from B to A, which will have a maximum length of n before reaching equilibrium.

We begin by generating a set R containing all possible candidate infimum rewards. This set is generated by computing all possible payoff differences between agents playing B and agents playing A when they are neighbors or linked by another initially B-playing agent. Consider a network of opponent-coordinating agents that is at equilibrium at time zero. Let n^A_i denote the number of neighbors of agent i who initially play A. Since no agent will switch from A to B, the possible payoffs of agent i when playing A (resp. B) at any time step are contained in the sets

 Π^A_i ≜ { a_i(n^A_i + δ_i) + b_i(deg_i − n^A_i − δ_i) : δ_i ∈ Δ_i }
 Π^B_i ≜ { c_i(n^A_i + δ_i) + d_i(deg_i − n^A_i − δ_i) : δ_i ∈ Δ_i },

where Δ_i ≜ {0, 1, …, deg_i − n^A_i}.

Now consider an agent s who initially plays B and has a neighbor j whose strategy either was initially A or became A at some later time. In order for agent j to cause agent s to switch to A, the payoff of agent j must be greater than that of each B-playing agent i in the self-inclusive neighborhood N_s ∪ {s}. Therefore, the reward given to agent j must be greater than (y^B_i − y^A_j)/deg_j for some y^B_i ∈ Π^B_i and y^A_j ∈ Π^A_j. This leads to the following set of all candidate infimum rewards, formally derived in the proof of Proposition 4.

 R ≜ { (y^B_i − y^A_j)/deg_j | y^B_i ∈ Π^B_i, y^A_j ∈ Π^A_j, j ∈ N_s, i ∈ N_s ∪ {s}, x_i(0) = B, s ∈ V, x_s(0) = B } ∪ {0}.
###### Proposition 4.

For a network of opponent-coordinating agents with initial strategies x(0), the solution r* to Problem 1 satisfies r* ∈ R.

###### Proof:

Should all agents’ strategies be initially A, the result is trivial since r* = 0 ∈ R. So consider the situation where at least one B-playing agent exists. We observe that the network will reach the all-A state after the reward r is offered at time 0 if the following condition is satisfied: for every agent s who initially plays B, there exists some time k_s and an A-playing neighbor j ∈ N_s whose payoff exceeds that of every B-playing agent in N_s ∪ {s} at time k_s. Equivalently, for every initially B-playing agent s, there must exist some time k_s and A-playing neighbor j ∈ N_s, such that for all B-playing agents i ∈ N_s ∪ {s},

 r deg_j + u_j(k_s) > u_i(k_s),

where u_j and u_i denote payoffs under the original payoff matrices, the term r deg_j accounting for the reward r earned by agent j in each of its deg_j games. Since u_j(k_s) ∈ Π^A_j and u_i(k_s) ∈ Π^B_i, the condition is satisfied if the following holds: for every initially B-playing agent s, there exists some agent j ∈ N_s such that for all i ∈ N_s ∪ {s} with x_i(0) = B,

 r deg_j + y^A_j > y^B_i

for some y^A_j ∈ Π^A_j and some y^B_i ∈ Π^B_i. Now since this is a sufficient condition for r to drive the network to the all-A state, we have that

 r* = inf { r | r > (y^B_i − y^A_j)/deg_j, y^B_i ∈ Π^B_i, y^A_j ∈ Π^A_j, j ∈ N_s, i ∈ N_s ∪ {s}, x_i(0) = B, s ∈ V, x_s(0) = B },

implying that

 r* ∈ { (y^B_i − y^A_j)/deg_j | y^B_i ∈ Π^B_i, y^A_j ∈ Π^A_j, j ∈ N_s, i ∈ N_s ∪ {s}, x_i(0) = B, s ∈ V, x_s(0) = B } = R − {0}.

Combining this case with the case in which all agents initially play A completes the proof. ∎

Next we sort the elements of R from low to high and denote the resulting vector by R̄. Algorithm 1 performs a binary search over R̄ to find the infimum reward r* such that all agents in the network will eventually play A. Denote by 1 the n-dimensional vector containing all ones. In what follows, we also denote by x̄(x(0), r) the unique equilibrium resulting from a particular vector of incentives r being offered to a network of A-coordinating agents starting from x(0).
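The binary search at the heart of Algorithm 1 can be sketched as follows. Here `reaches_all_A` is an assumed black box that simulates the uniquely convergent dynamics under a given uniform reward and reports whether every agent ends up playing A; the monotonicity from Corollary 1 is what makes bisection over the sorted candidate rewards valid.

```python
def infimum_uniform_reward(candidates, reaches_all_A):
    """Return the smallest candidate reward that drives the network to all-A,
    given a sorted list `candidates` containing at least one such value."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if reaches_all_A(candidates[mid]):
            hi = mid            # mid works: the answer is at or below it
        else:
            lo = mid + 1        # mid fails: the answer is above it
    return candidates[lo]
```

Each probe costs one simulation, so the search needs O(log |R|) simulations in total.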

###### Proposition 5.

Algorithm 1 computes the reward r* that solves Problem 1 for networks of opponent-coordinating agents.

###### Proof:

Since r* ∈ R due to Proposition 4, we can restrict our search to the set R. Due to Theorem 1, we know that if a given incentive results in all agents switching to A for one activation sequence, then it does so for every activation sequence. Therefore, we can determine the exact equilibrium resulting from offering an incentive by using any activation sequence. Now consider two incentives r_1 < r_2. Corollary 1 implies that no agent can switch from A to B after offering an incentive to a network at equilibrium. Hence, if an agent who initially plays B does not switch to A under incentive r_2, then that agent will also not switch to A under incentive r_1. Otherwise, offering the additional incentive r_2 − r_1 to a network at the equilibrium resulting from incentive r_1 would cause this agent to switch from A to B, a contradiction. It follows from this monotonicity property that a binary search on the ordered list R̄ will yield the solution to Problem 1. ∎

### V-B Targeted Reward Control

Suppose that rather than offering a uniform incentive to all agents who play strategy A, one has the ability to offer a different reward to each agent. By targeting the most influential agents in the network, it may be possible to achieve the desired outcome at much lower cost than with uniform rewards, but which agents should be targeted, and how much reward should be offered to each of them?

Let r = (r_1, …, r_n) denote the vector of rewards offered to the agents, where r_i ≥ 0 is the reward offered to agent i, resulting in the following payoff matrix for each agent i:

 π̂^i = ⎡ a_i + r_i  b_i + r_i ⎤
        ⎣ c_i        d_i       ⎦.

The targeted control objective is the following.

###### Problem 2 (Targeted reward control).

Given a network game and initial strategies x(0), find the targeted reward vector r that minimizes the total reward ∑_{i∈V} r_i such that if each agent i is offered the reward r_i for playing A, then every agent will eventually play A.

Towards a solution to this problem, we observe that for a network at some equilibrium state x̄, the only way to get imitating agents to switch from B to A through positive rewards is to offer those rewards to agents who start at, or will switch to, strategy A at some time, and who have at least one neighbor playing B. For such an agent i, the infimum reward such that at least one B-playing neighbor will switch to A is

 ř_i = max_{j∈N^B_i} max_{k∈N̄^B_j} ū_k − ū_i, (8)

where ū_k denotes the payoff of agent k when the state of the network is x̄, N^B_i denotes the set of neighbors of agent i who are playing B, and N̄^B_j denotes the self-inclusive set of neighbors of agent j who are playing B. Due to Corollary 1, offering this incentive to agent i will result in a unique equilibrium regardless of the activation sequence. As a result, we can repeatedly use (8) to construct a targeted reward vector, restarting each time from the previous equilibrium. Indeed, an algorithm which iteratively offers rewards in this manner can be used to compute a reward vector that achieves convergence to the all-A state. A generic version of such an algorithm is described below, in which the key step is the choice of the agent to target at each iteration, and ε denotes an arbitrarily small positive constant.
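The reward bound (8) can be sketched as follows: to let agent i pull a B-playing neighbor away from B, agent i's rewarded payoff must exceed the best payoff among B-playing agents in that neighbor's self-inclusive neighborhood. The data structures (`u` mapping agents to equilibrium payoffs, adjacency lists) are assumptions of this sketch.

```python
def infimum_reward(i, x, nbrs, u):
    """ř_i from (8); returns None if agent i has no B-playing neighbor."""
    nb_B = [j for j in nbrs[i] if x[j] == "B"]       # N^B_i
    if not nb_B:
        return None
    best_rival = max(
        u[k]
        for j in nb_B
        for k in nbrs[j] + [j]                        # self-inclusive
        if x[k] == "B"
    )
    return best_rival - u[i]
```

Exceeding this bound by any ε > 0 makes agent i the top earner in each B-playing neighbor's neighborhood.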

It is possible to find the exact solution to Problem 2 by using Algorithm 2 to perform an exhaustive search over every possible sequence of agents in the initial candidate set A^B of agents who play A and have at least one B-playing neighbor. However, the computational complexity of such an algorithm prohibits its use on large networks. In Section VI, we explore the use of various heuristics for choosing an agent to target at each iteration, including random selection, maximum degree, and maximum payoff earnings. Next, we propose a slightly more advanced heuristic for incentive targeting, inspired by a similar approach to controlling best-response networks [22].

Consider a network of opponent-coordinating agents which is at some equilibrium state x̄. In order to identify which agents should be offered incentives, we propose a simple potential function

 Φ(x) = ∑_{i=1}^{n} n^A_i(x), (9)

where n^A_i(x) denotes the number of neighbors of agent i who play strategy A in the state x. This function has a unique maximum, which occurs when all agents play A, and it increases whenever an agent switches from B to A. Problem 2 is thus equivalent to the problem of finding the infimum reward vector that maximizes Φ. Therefore we propose a greedy-type algorithm which iteratively chooses the agent who maximizes the ratio of the change in potential to the reward required to achieve that change. Let x̄^j denote the equilibrium resulting from offering the reward ř_j + ε to agent j. We define the iterative potential-to-reward optimization (IPRO) algorithm as Algorithm 2 in which the targeted agent is selected as follows.

 j* = argmax_{j∈A^B} ΔΦ(x̄)^α / ř_j^β, (10)

where ΔΦ(x̄) ≜ Φ(x̄^j) − Φ(x̄), and α, β ≥ 0 are free design parameters.
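The selection rule (10) reduces to a single argmax over the candidate set; a sketch, assuming the potential gains `delta_phi` and required rewards `r_check` have already been computed per candidate by simulating the unique post-reward equilibria:

```python
def ipro_select(candidates, delta_phi, r_check, alpha=1.0, beta=1.0):
    """IPRO rule (10): return the candidate j maximizing
    delta_phi[j]**alpha / r_check[j]**beta."""
    return max(candidates,
               key=lambda j: delta_phi[j] ** alpha / r_check[j] ** beta)
```

Setting β = 0 recovers a pure potential-maximizing choice, and α = 0 a pure reward-minimizing choice, matching the heuristics compared in Section VI.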

### V-C Budgeted Targeted Reward Control

In this section, we suppose that there is a limited budget b from which to offer rewards and pose the following variation of Problem 2.

###### Problem 3 (Budgeted targeted reward control).

Given a network game, initial strategy state x(0), and budget constraint b, find the reward vector r satisfying ∑_{i∈V} r_i ≤ b that maximizes the number of agents in the network who will eventually play A.

Algorithm 3 slightly modifies Algorithm 2 to approximate the solution to Problem 3. The only difference is that the algorithm will now terminate if no more agents can be offered a reward without violating the budget constraint b.
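The budgeted variant, as described, only adds a stopping test to the iterative loop; a sketch, where `select` stands in for any of the targeting heuristics and returns the next (agent, required reward) pair given the rewards granted so far, or `None` when no candidates remain (both are assumptions of this sketch):

```python
def budgeted_rewards(select, b):
    """Iteratively grant targeted rewards until candidates run out or the
    next grant would exceed the budget b (the stopping rule of Algorithm 3)."""
    spent, rewards = 0.0, {}
    while True:
        pick = select(rewards)           # next target given rewards so far
        if pick is None:
            break
        agent, r = pick
        if spent + r > b:                # would violate the budget: stop
            break
        rewards[agent] = rewards.get(agent, 0.0) + r
        spent += r
    return rewards
```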

## VI Simulations

Here we compare the performance of the IPRO algorithm to some alternative approaches for controlling networks of agents with imitative dynamics. Each of these methods is applied iteratively, targeting agents with payoff rewards until either the control objective is achieved or the budget limit is reached. Short descriptions of each algorithm under consideration are provided below.

• Iterative Random (rand): target random agents in the network

• Iterative Degree-Based (deg): target agents with maximum degree

• Iterative Maximum Earning (IME): target A-playing agents earning the highest payoffs while having at least one neighbor playing B

• Iterative Potential Optimization (IPO): target agents resulting in the maximum increase of the potential function (α = 1, β = 0)

• Iterative Reward Optimization (IRO): target agents requiring the minimum reward (α = 0, β = 1)

• Iterative Potential-to-Reward Optimization (IPRO): target agents maximizing the potential-change-to-reward ratio (α, β > 0)

• Optimal: perform exhaustive search to find optimal solution (only practical for small networks)

For each set of simulations, we generate geometric random networks by randomly distributing n agents in the unit square and connecting all pairs of agents who lie within a given distance of each other.

Heterogeneous payoffs for the agents are generated by adding independent random perturbations to a baseline coordination payoff matrix: each π^i is determined by a coordination level, a payoff variance, and a perturbation matrix whose elements are drawn independently at random from a uniform distribution. The perturbation matrices are independent across all agents. Next, we present four brief simulation studies and provide graphical results, which are also summarized in Table I.
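One plausible instantiation of this simulation setup is sketched below. The exact baseline matrix, connection radius, and noise scaling used in the paper are not fully specified here, so `gamma` (coordination level), `sigma` (payoff variance), and the radius `d` are illustrative parameters.

```python
import random

def geometric_network(n, d, rng):
    """Geometric random network: n agents in the unit square, edges between
    pairs at Euclidean distance at most d."""
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    nbrs = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if (dx * dx + dy * dy) ** 0.5 <= d:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def heterogeneous_payoffs(n, gamma, sigma, rng):
    """One 2x2 payoff matrix per agent: a coordination baseline
    (gamma for A-A, 1 for B-B) plus uniform noise scaled by sigma."""
    return [[[gamma + sigma * rng.random(), sigma * rng.random()],
             [sigma * rng.random(), 1.0 + sigma * rng.random()]]
            for _ in range(n)]
```

With sigma = 0 every agent gets the same opponent-coordinating matrix; increasing sigma increases agent heterogeneity, as varied in Section VI-D.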

### VI-A Uniform vs. Targeted Reward Control

First, we investigate the difference between uniform and targeted reward control to estimate the expected cost savings when individual agents can be targeted for rewards rather than offering a uniform reward to all agents. Fig. 1 shows that targeted reward control offers large cost savings over uniform rewards, but also that the savings decrease as the networks get larger. Notably, this differs from our findings on best-response networks, in which the opposite effect was observed [22].

### Vi-B Targeted-Reward Control: Network Size

Next, we compare algorithm performance for various sizes of networks of opponent-coordinating agents, using the same network setup as the previous section. Fig. 2 shows that in this case the IPRO and degree-based algorithms perform the best across all network sizes.

### Vi-C Targeted-Reward Control: Network Connectivity

We now investigate the effect of network connectivity on the total reward required to achieve consensus in strategy . We consider geometric random networks of 20 agents, which is small enough that we can compute the optimal solution using an exhaustive search algorithm and compare this with the proposed algorithms. Fig. 3 shows that there is a sharp decrease in the mean incentive required as the networks become more densely connected, likely due to the fact that high-earning agents become more influential. All of the algorithms except for random and IPO yielded near-optimal results in these tests, with IPRO performing the best.

### Vi-D Targeted-Reward Control: Payoff Variance

Finally, we vary the payoff variance to understand how the algorithms perform for varying degrees of agent heterogeneity. Fig. 4 shows that the IPRO algorithm performs best regardless of the degree of homogeneity or heterogeneity of the agents.

## VII Concluding Remarks

We have revealed three properties of asynchronous A-coordinating network games under any update rule after incentives are offered to agents when the network is at equilibrium: (i) no agent will switch from A to B; (ii) switches occur independently of the sequence in which agents activate; (iii) the network will converge to a unique equilibrium. This predictability after offering rewards facilitates the design of efficient, and in some cases optimal, control protocols. We have further shown that a subset of networks in which agents asynchronously imitate their highest-earning neighbor, namely networks of opponent-coordinating agents, are indeed A-coordinating, and therefore satisfy the above three properties. Based on this result, we proposed protocols for three control problems that apply to this class of networks: uniform reward control, targeted reward control, and budgeted targeted reward control. In particular, the IPRO algorithm, which iteratively chooses agents who maximize the ratio of the change in potential to the offered reward, performs near-optimally in several different cases and outperforms algorithms based on other heuristics such as maximum payoff earnings or minimum required reward. In the future, it would be useful to extend this research to networks containing non-coordinating agents. Although a similar approach may remain effective in this more general case, it might require relaxing the problem statements, since it is unlikely that monotonicity and unique convergence will still hold.

## References

• [1] A. Traulsen, D. Semmann, R. D. Sommerfeld, H.-J. Krambeck, and M. Milinski, “Human strategy updating in evolutionary games,” Proceedings of the National Academy of Sciences, vol. 107, no. 7, pp. 2962–2966, 2010.
• [2] P. van den Berg, L. Molleman, and F. J. Weissing, “Focus on the success of others leads to selfish behavior,” Proceedings of the National Academy of Sciences, vol. 112, no. 9, pp. 2912–2917, 2015.
• [3] P. Ramazi, J. Riehl, and M. Cao, “Imitating successful neighbors hinders reaching satisfactory decisions,” under review, 2017.
• [4] A. Cortés and S. Martinez, “Self-triggered best-response dynamics for continuous games,” IEEE Transactions on Automatic Control, vol. 60, no. 4, pp. 1115–1120, 2015.
• [5] J. R. Marden and T. Roughgarden, “Generalized efficiency bounds in distributed resource allocation,” IEEE Transactions on Automatic Control, vol. 59, no. 3, pp. 571–584, 2014.
• [6] N. Li and J. R. Marden, “Decoupling coupled constraints through utility design,” IEEE Transactions on Automatic Control, vol. 59, no. 8, pp. 2289–2294, 2014.
• [7] P. Wiecek, E. Altman, and Y. Hayel, “Stochastic state dependent population games in wireless communication,” IEEE Transactions on Automatic Control, vol. 56, no. 3, pp. 492–505, 2011.
• [8] J. S. Shamma and G. Arslan, “Dynamic fictitious play, dynamic gradient play, and distributed convergence to Nash equilibria,” IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 312–327, 2005.
• [9] P. Guo, Y. Wang, and H. Li, “Algebraic formulation and strategy optimization for a class of evolutionary networked games via semi-tensor product method,” Automatica, vol. 49, no. 11, pp. 3384–3389, 2013.
• [10] B. Gharesifard and J. Cortés, “Evolution of players’ misperceptions in hypergames under perfect observations,” IEEE Transactions on Automatic Control, vol. 57, no. 7, pp. 1627–1640, 2012.
• [11] T. Mylvaganam, M. Sassano, and A. Astolfi, “Constructive Nash equilibria for nonzero-sum differential games,” IEEE Transactions on Automatic Control, vol. 60, no. 4, pp. 950–965, 2015.
• [12] D. Cheng, F. He, H. Qi, and T. Xu, “Modeling, analysis and control of networked evolutionary games,” IEEE Transactions on Automatic Control, vol. 60, no. 9, pp. 2402–2415, 2015.
• [13] P. Ramazi, J. Riehl, and M. Cao, “Networks of conforming or nonconforming individuals tend to reach satisfactory decisions,” Proceedings of the National Academy of Sciences, 2016.
• [14] M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life.   Harvard University Press, 2006.
• [15] B. Zhu, X. Xia, and Z. Wu, “Evolutionary game theoretic demand-side management and control for a class of networked smart grid,” Automatica, vol. 70, pp. 94–100, 2016.
• [16] P. Ramazi, M. Cao, and F. J. Weissing, “Evolutionary dynamics of homophily and heterophily,” Scientific reports, vol. 6, 2016.
• [17] P. Ramazi, J. Hessel, and M. Cao, “How feeling betrayed affects cooperation,” PLoS ONE, vol. 10, no. 4, p. e0122205, 2015.
• [18] A. Montanari and A. Saberi, “The spread of innovations in social networks,” Proceedings of the National Academy of Sciences, vol. 107, no. 47, pp. 20196–20201, 2010.
• [19] J. R. Riehl and M. Cao, “Towards optimal control of evolutionary games on networks,” IEEE Transactions on Automatic Control, vol. 62, no. 1, pp. 458–462, 2017.
• [20] P. Ramazi and M. Cao, “Analysis and control of strategic interactions in finite heterogeneous populations under best-response update rule,” in 2015 54th IEEE Conference on Decision and Control (CDC).   IEEE, 2015, pp. 4537–4542.
• [21] H. Liang, M. Cao, and X. Wang, “Analysis and shifting of stochastically stable equilibria for evolutionary snowdrift games,” Systems & Control Letters, vol. 85, no. 3, pp. 16–22, 2015.
• [22] J. Riehl, P. Ramazi, and M. Cao, “Control of asynchronous best-response dynamics on networks through payoff incentives,” Automatica (under review).
• [23] J. Riehl and M. Cao, “Towards control of evolutionary games on networks,” in Proc. of the 53rd IEEE Conference on Decision and Control, 2014, pp. 2877–2882.