Emergent Hierarchical Structures in Multiadaptive Games

Sungmin Lee IceLab, Department of Physics, Umeå University, 90187 Umeå, Sweden    Petter Holme IceLab, Department of Physics, Umeå University, 90187 Umeå, Sweden Department of Energy Science, Sungkyunkwan University, Suwon 440–746, Korea    Zhi-Xi Wu Institute of Computational Physics and Complex Systems, Lanzhou University, Lanzhou, Gansu 730000, China IceLab, Department of Physics, Umeå University, 90187 Umeå, Sweden
Abstract

We investigate a game-theoretic model of a social system where both the rules of the game and the interaction structure are shaped by the behavior of the agents. We call this type of model, with several types of feedback couplings from the behavior of the agents to their environment, a multiadaptive game. Our model exhibits rich dynamics, with several regimes of distinct dynamic behavior accompanied by distinct network topological properties. Some of these regimes are characterized by heterogeneous, hierarchical interaction networks, where cooperation and network topology coemerge from the dynamics.

PACS numbers: 02.50.Le, 89.75.Hc, 89.75.Fb, 87.23.Ge

Game theory is a language for describing systems in biology, economics, and society where the success of an agent depends both on its own behavior and on the behaviors of others. Perhaps the most important question for game-theoretic research is to map out the conditions under which cooperation emerges among egoistic individuals Axelrod (1984). To this end, researchers have developed a number of different types of models, capturing different game-theoretic scenarios. In this Letter, we investigate a generalization in a new direction, relaxing constraints of earlier models by including feedback, at several levels, from the behavior of the agents to their environment.

In most game-theoretical studies, the rules of the game are fixed in time, but in real systems there is feedback from the behavior of the agents to their environment and thus to the rules of the game. The payoff of a player's action in a specific situation is parametrized by payoff matrices. A straightforward way of modeling feedback from the system to the rules is to let the entries of the matrices be variables, dependent on the state of the system Tomochi and Kono (2002). Another feature that is often modeled as static, although in reality it need not be, is the contact structure. If agents can change their interaction patterns in response to the outcome of the game, then the model will also capture the social network dynamics. Such adaptive-network models Zimmermann and Eguíluz (2005); W. Li et al. (2007); Zimmermann et al. (2000) can address a wide range of problems: not only how interaction determines the evolution of cooperation, but also how the interaction patterns themselves emerge. In this Letter we investigate a situation where agents can adjust their social ties to maximize their payoffs and the collective behavior of the agents shapes the rules of the game. Our model is an adaptive-network model with adaptive payoff matrices, a multiadaptive game for short.

A classic model for studying the evolution of cooperation in spatial game theory is the Nowak-May (NM) game Nowak and May (1992) (technically speaking, on the border between the archetypical prisoner's dilemma and chicken games). It captures a situation where, at any moment, defection has the highest expected payoff, but under some conditions agents can do better in a long time perspective by establishing trust and cooperation. An interaction in the NM game gives the following payoff: zero to anyone interacting with a defector (D), one to a cooperator (C) meeting another cooperator, and the temptation b > 1 to a D meeting a C. This model has been used to explain the emergence of cooperation among egoistic agents in disciplines as diverse as political science, economics, and biology Axelrod (1984), and will be the starting point of our work.
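To make the payoff structure concrete, the following minimal sketch encodes the pairwise NM payoff; the function name and the Boolean encoding of strategies are our own illustrative choices, not taken from the Letter.

```python
# Minimal sketch of the Nowak-May pairwise payoff (our own encoding,
# not code from the Letter): True = cooperator (C), False = defector (D).
def nm_payoff(my_strategy: bool, other_strategy: bool, b: float) -> float:
    """Payoff to the focal agent for a single interaction.

    C vs C -> 1, C vs D -> 0, D vs C -> b (temptation), D vs D -> 0.
    """
    if my_strategy and other_strategy:
        return 1.0          # mutual cooperation
    if (not my_strategy) and other_strategy:
        return b            # a defector exploits a cooperator
    return 0.0              # anyone facing a defector gets nothing
```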

In our model we place N agents on a square grid with fixed boundary conditions. Besides interacting with its local spatial neighbors (fewer for boundary and corner agents than for internal ones), each agent has one additional link that is free to optimize its position in the interaction network W. Li et al. (2007). The rationale behind this arrangement is that people invest more in their spatially close contacts (e.g., family and coworkers) and thus are less likely to break these ties, whereas the long-range edges are more businesslike and open to optimization. In sociology this situation goes by the name “strength of weak ties” Granovetter (1973).
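The interaction structure could be set up as in the sketch below. It is an illustration under our own assumptions: the text above does not fix the neighborhood type, so we use a von Neumann neighborhood here, and the initial placement of the free long-range links is simply taken to be random.

```python
import random

def init_lattice(L):
    """Sketch of the interaction structure (our own assumptions, not the
    Letter's exact specification): an L x L grid with fixed, non-periodic
    boundaries, a von Neumann neighborhood for the fixed spatial links,
    and one rewirable long-range link per agent, initially random."""
    agents = [(x, y) for x in range(L) for y in range(L)]
    spatial = {a: [] for a in agents}
    for (x, y) in agents:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < L and 0 <= y + dy < L:
                spatial[(x, y)].append((x + dx, y + dy))
    long_range = {}
    for a in agents:
        others = [v for v in agents if v != a and v not in spatial[a]]
        long_range[a] = random.choice(others)
    return spatial, long_range

spatial, long_range = init_lattice(10)   # small grid just for illustration
```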

Figure 1: (Color online) Parameter dependence of the game reflected in the temptation and the average cooperator density. (a) Average density of cooperators, ρ_C, as a function of the initial temptation, b_0, for three values of the feedback strength α. Each point is averaged over the final steps of the run. (b) and (c) Time evolution of ρ_C and b, respectively, for different values of b_0. (d) Diagram of the three regions in b_0-α space. The curves are averages over many independent runs.

In the NM game there is one parameter, the temptation to defect b, representing the external conditions of the game (“society” in a social interpretation of the game, “environment” in the context of evolutionary biology). In this work, we investigate the case where b is higher in a uniformly rich society, whereas the motivation to cooperate is higher in a society in unrest. Assuming a linear dependence of the temptation to defect on the prosperity, in our case measured by the density of cooperators ρ_C in the population, we use the response function Tomochi and Kono (2002)

b(t+1) = b(t) + α [ρ_C(t) − ρ*],   (1)

where ρ*, representing a neutral cooperation level from the society's perspective, is chosen as a constant for simplicity. The parameter α controls the strength of the feedback from the environment to the game rules; unless otherwise stated, we keep it fixed. We update the state of the system, both the strategies and the long-range links of the agents, synchronously. At each time step, every agent acquires payoff by playing the Nowak-May game with all its local and long-range neighbors. If an agent i, when updating its strategy, has a higher payoff than all its neighbors, nothing happens. Otherwise, i adopts the strategy of its highest-payoff neighbor j with a probability W, and simultaneously rewires its free link to the long-range neighbor of j. Following Ref. W. Li et al. (2007), we use

W = 1 / {1 + exp[(P_i − P_j)/K]},   (2)

where P_i and P_j are the payoffs of agents i and j, and K controls the noise in the choice of whom to imitate. This way of parametrizing noise is further discussed in Ref. Szabó and Tőke (1998). We use a fixed value of K in the present study, which gives enough noise to create heterogeneous structures but not enough to overshadow the strategies as a factor in the dynamics.
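For concreteness, here is a minimal sketch of the two update ingredients above, Eqs. (1) and (2); the function names, the choice ρ* = 1/2, and the default parameters are our illustrative assumptions rather than values from the Letter.

```python
import math

def update_temptation(b, rho_c, alpha, rho_star=0.5):
    """Feedback of Eq. (1): the temptation b grows when the cooperator
    density exceeds the neutral level rho_star and shrinks otherwise.
    rho_star = 0.5 is an illustrative choice, not necessarily the value
    used in the Letter."""
    return b + alpha * (rho_c - rho_star)

def imitation_probability(payoff_i, payoff_j, K):
    """Eq. (2): probability that agent i adopts the strategy of its
    best-performing neighbor j (and rewires its free link to j's
    long-range neighbor).  K sets the noise level."""
    return 1.0 / (1.0 + math.exp((payoff_i - payoff_j) / K))
```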

Figure 2: (Color online) Correlations between strategy and network structure. Circles (squares) show the density of cooperators (defectors) with degree k, ρ_{C,k} (ρ_{D,k}). (a) is for a b_0 value in region I, (b) for a b_0 value in region II, and (c) for a b_0 value in region III. Panels (b) and (c) indicate the exponent of the power law.

Turning to the numerical results, in Fig. 1(a) we plot the average density of cooperators ρ_C as a function of the initial temptation b_0 for three values of α. For small enough b_0, the system converges with certainty to a state with a stable, intermediate cooperation level. We call the region of parameter space with this behavior region I and denote the large-b_0 border of this region b_1. For b_0 larger than a second boundary value, b_2, cooperation vanishes; we call this part of b_0-α space region III. Between these extremes there is a region II of complex behavior where the cooperation density converges to ρ_C = 0, to ρ_C = 1 (or at least a value very close to 1), or to an intermediate value, with probabilities depending on b_0 and α. With increasing b_0, the probability that the system ends in the all-C state decreases, vanishing completely at b_2.

In Figs. 1(b) and 1(c) we display trajectories of ρ_C and b, averaged over many runs, for different values of b_0. These curves show the system stabilizing to a steady cooperation level after an initial transient. The transient oscillations can be explained by the adaptive payoff dynamics. Assuming a well-mixed population, in which the strategy adoption rate is proportional to the relative success of the strategies, one can approximate the dynamics by the replicator-like equation system

dρ_C/dt = ρ_C (1 − ρ_C)(1 − b),   (3a)
db/dt = α (ρ_C − ρ*).   (3b)

The factors ρ_C and (1 − ρ_C) of Eq. (3a) give the fixed points ρ_C = 0 and ρ_C = 1. From these equations, we can also understand the oscillatory behavior of Figs. 1(b) and 1(c). If b > 1 and ρ_C > ρ*, then b will increase and ρ_C decrease. This will, after some time, make ρ_C < ρ* and thus db/dt negative. If db/dt is negative, then b eventually falls below 1, and dρ_C/dt becomes positive. Taken together, this explains the cyclic behavior. Such oscillations, with growing and shrinking cooperator clusters driving the oscillations in b, can be seen with our Java applet of the model java (). For all parameter values we study, the cyclic behavior will either grow in amplitude until ρ_C reaches a fixed point, or be damped to a fixed point close to ρ*. Perhaps the most interesting observation is the onset of the all-C state. For one of the b_0 values in Fig. 1(b), for example, b starts increasing again, but it is too late: the emergence of a hub, combined with the fact that b is still smaller than 1, drives the system to the all-C state. For large b_0, ρ_C goes toward its final value monotonically, while for smaller values of b_0 the convergence is oscillatory. In the former case, the system hits the fixed point faster than the response from the environment can tune the value of b. In an extended model where defectors can appear, by mutation, in an all-C state, all-C would not be evolutionarily stable.
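To see the cyclic behavior predicted by Eqs. (3a) and (3b) explicitly, one can integrate them numerically, as in the sketch below; the initial condition and parameter values are arbitrary illustrative choices.

```python
def integrate_mean_field(rho_c=0.45, b=1.1, alpha=0.2, rho_star=0.5,
                         dt=0.01, steps=20000):
    """Forward-Euler integration of the mean-field system (3a)-(3b).
    All parameter values are illustrative, not those of the Letter."""
    trajectory = []
    for _ in range(steps):
        drho = rho_c * (1.0 - rho_c) * (1.0 - b)   # Eq. (3a)
        db = alpha * (rho_c - rho_star)            # Eq. (3b)
        rho_c = min(max(rho_c + dt * drho, 0.0), 1.0)
        b += dt * db
        trajectory.append((rho_c, b))
    return trajectory

# The trajectory circles around the fixed point (rho_c, b) = (rho_star, 1),
# reproducing the oscillatory transient discussed above.
print(integrate_mean_field()[-1])
```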

In Fig. 1(d), we plot a diagram over the regions of b_0-α parameter space with distinct dynamic behavior. We identify region I operationally by ρ_C converging to its stable intermediate value (within a small tolerance), and region III by the converged ρ_C being close to zero (within the same tolerance). We note that the boundary value b_1, separating region I from region II, decreases with increasing α. In region I, for all measured parameter values, the system relaxes to a steady state with an intermediate ρ_C, and b converges to a stable value; examples can be seen in Figs. 1(b) and 1(c). This happens when the feedback in Eq. (1) is strong enough to balance the initial temptation b_0. When b_0 increases beyond b_1, the feedback from the environment affects the dynamics so strongly that the system inevitably hits an absorbing state. At such a fixed point, b grows (if ρ_C = 1) or decreases (if ρ_C = 0) unboundedly. In this situation, as the fixed points in any real system would be metastable rather than permanent, the unbounded behavior of b should not be overinterpreted. Alternatively, one can limit the temptation by, in Eq. (1), clamping b to an interval [b_min, b_max]. If this interval is large enough for our parameter values, the conclusions from such a model are the same as for the one presented in this Letter; otherwise, region II can vanish (results not shown). Preliminary studies suggest that reaching an all-C state also requires a sufficiently frequent updating of the strategies. Here, strategies and links are updated equally often, but if the link update is 100 times more frequent than the strategy update, all-C states almost never happen. If, on the other hand, the time scales are skewed in the other direction, the conclusions remain the same. As a final note about Fig. 1(d), we see that b_2, separating region II from region III, increases monotonically with α. That is, cooperation is enhanced by the feedback from the environment to the payoff matrices.
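The bounded variant of Eq. (1) mentioned above amounts to clamping the temptation after each update, roughly as in this sketch; the bound values are placeholders, not those used in the robustness check.

```python
def update_temptation_bounded(b, rho_c, alpha, rho_star=0.5,
                              b_min=0.0, b_max=2.0):
    """Bounded variant of Eq. (1): the same linear feedback as before,
    but the temptation is clamped to [b_min, b_max].  The bound values
    used in the Letter's robustness check are not reproduced here."""
    return min(max(b + alpha * (rho_c - rho_star), b_min), b_max)
```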

Now we turn to the connection between the game dynamics and the network structure. In this analysis, we only consider the network of long-range links, not the background square grid. In Fig. 2, we show ρ_{S,k}, the fraction of agents that have strategy S (cooperate or defect) and degree k in the steady state. The three regions show different structure. For region I, represented in Fig. 2(a), ρ_{C,k} is larger than ρ_{D,k} at high degrees, and at the largest degrees all nodes are cooperators. Since the final densities of C and D are roughly equal in this situation, the high-degree C agents can protect their neighbors from imitating defectors and thus support cooperation. For region II, exemplified in Fig. 2(b), where the steady state is all-C (so ρ_{D,k} = 0 for all k), we find that ρ_{C,k} has a functional form closely described by a power law with an exponential cutoff. Since the steady state in this case is all-C, the payoff an agent can accrue depends linearly on its degree. Consequently, during the rewiring process, the probability that an agent attracts new links is approximately proportional to the degree it already has. In a strictly growing network, such “preferential attachment” is known to generate a power-law degree distribution Barabási et al. (1999). Here, with networks of fixed size, preferential attachment alone is not enough to explain the degree distribution; it needs to be balanced by an antipreferential deletion of edges salathe () in order for a power-law degree distribution to appear. The power-law-like degree distribution remains for larger values of b_0, despite the different steady-state values of ρ_C. For a system in the all-D state, the rewiring process behaves differently from the all-C case. Since the payoff of a defector is independent of the number of links it has, its nonlocal link will be rewired randomly to another D, which generates networks with a Poisson degree distribution, as observed in Fig. 2(c).
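The quantity plotted in Fig. 2 can be measured along the lines of the following sketch (the function name and data layout are ours): for the network of long-range links, count the fraction of agents with a given strategy and degree.

```python
from collections import Counter

def strategy_degree_fractions(adj, strategy):
    """Our reconstruction of the quantity in Fig. 2: rho_{S,k} is the
    fraction of all agents that have strategy S and degree k in the
    network of long-range links.  `adj` maps each agent to its list of
    long-range neighbors; `strategy` maps each agent to 'C' or 'D'."""
    n = len(adj)
    counts = Counter((strategy[v], len(neigh)) for v, neigh in adj.items())
    return {(s, k): c / n for (s, k), c in counts.items()}

# Toy usage on a four-agent example (purely illustrative):
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
strategy = {0: 'C', 1: 'D', 2: 'C', 3: 'C'}
print(strategy_degree_fractions(adj, strategy))
```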

Figure 2 suggests that the coevolution of the contact patterns and the payoff matrix in region II makes the underlying network change from its initially random state to a heterogeneous structure. As shown in Fig. 3(a), the cumulative degree distribution (the probability that an observed degree is larger than k) depends strongly on b_0. In particular, for one of the b_0 values the distribution follows a power law over two decades. For sufficiently large b_0, we observe a decay described by a combination of an exponential and a stretched exponential. The stretched exponential can, as mentioned above, be generated by a (nonlinear) preferential attachment krapi (). In Fig. 3(b), we investigate the hierarchical features of the steady-state networks in greater detail. It has been argued that a characteristic feature of hierarchical networks is that the clustering coefficient (the fraction of a node's possible triangles that are actually present, as a function of its degree) is inversely proportional to the degree Ravasz et al. (2002). This is indeed what we observe for large values of b_0.
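The clustering-degree scaling of Fig. 3(b) can be checked with a short script along these lines; the use of networkx and of a Barabási-Albert test graph are assumptions of this illustration, not part of the original analysis.

```python
# Sketch of a C(k) measurement: average the local clustering coefficient
# over all nodes of each degree.
import networkx as nx
from collections import defaultdict

def clustering_by_degree(G):
    local_c = nx.clustering(G)          # local clustering coefficient per node
    by_k = defaultdict(list)
    for node, k in G.degree():
        if k >= 2:                      # clustering is only defined for k >= 2
            by_k[k].append(local_c[node])
    return {k: sum(v) / len(v) for k, v in sorted(by_k.items())}

# For a hierarchical network one would expect the averages to fall off
# roughly as 1/k at large k; the test graph below only demonstrates usage.
if __name__ == "__main__":
    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    print(clustering_by_degree(G))
```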

Figure 3: (Color online) Structural properties of the network in the steady state for different values of b_0. (a) The cumulative degree distribution. For large b_0, the decay follows a combination of an exponential and a stretched exponential function. (b) The clustering coefficient as a function of degree k. The straight line marks a scaling with exponent −1.

In conclusion, we have studied a game-theoretic model with feedback from the behavior of the agents to the rules of the game, via the payoff matrix, and an active optimization of both the contact structure between the agents and their strategies. With respect to the average cooperation density, the model is a nonequilibrium model, which makes the initial temptation b_0 a crucial model parameter. We identify three regions of distinct dynamic behavior. In region I, the average cooperator density relaxes to a stable level through damped oscillations; in region III the system reaches an all-defect state. For intermediate b_0 values (region II), the system ends at one of three fixed points, ρ_C = 0, ρ_C = 1, or an intermediate value, with parameter-dependent probabilities. For some parameter values in this region, the system will almost certainly reach an all-cooperator state. The all-cooperator state is absorbing, but in an extended model where defectors can appear by mutation it would not be stable. In the all-C state, the network has the most heterogeneous degree distribution and also a clear scaling of the clustering coefficient. Ref. Ravasz et al. (2002) argues that this feature is indicative of a hierarchical organization of the system. This is in contrast to the usual explanations of social hierarchies as resulting from external factors, such as age or fitness wilson (), or from internal heterogeneities. The latter case applies to our model in the limit of no environmental feedback, in which it reduces to the model of Ref. W. Li et al. (2007). But the network dynamics is also needed for the hierarchical topology and cooperation to coemerge: if there is no network dynamics, the cooperation stabilizes at some intermediate level and does not reach the all-C state. In this case, a power-law degree distribution emerges for intermediate cooperator levels. In other game-theoretic situations, hierarchical organization has sometimes proven to support cooperation Vukov and Szabó (2005) and sometimes to destabilize it Fu et al. (2009). The source of the coemergence of cooperation and a hierarchical topology in our model is that the cooperators are stabilized by high-degree nodes, while there is no similar effect for the defectors. A similar positive feedback mechanism between degree and the payoff of cooperators drives the emergence of cooperation in the model of Ref. zsch:2010 (). That model differs from ours in that its payoff matrix is fixed and not a function of the state of the system.

In summary, our work shows a new possible mechanism for the coemergence of hierarchical structures and cooperation. We foresee more studies of games in flexible settings where the game itself determines its rules and the players can choose when Wu et al. (2006) and with whom Zimmermann and Eguíluz (2005) to interact, based on their strategies.

Acknowledgements.
This research was supported by the Wenner–Gren Foundations (S.L.), the Swedish Foundation for Strategic Research (P.H.), the Swedish Research Council (Z.X.W. and P.H.), and the WCU program through NRF Korea funded by MEST R31-2008-000-10029-0 (P.H.).

References

  • Axelrod (1984) R. Axelrod, The evolution of cooperation (Basic Books, New York, 1984); J. Maynard Smith, Evolution and the Theory of Games (Cambridge University Press, Cambridge, 1982); M. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Harvard University Press, Cambridge MA, 2006).
  • Tomochi and Kono (2002) M. Tomochi and M. Kono, Phys. Rev. E 65, 026112 (2002).
  • Zimmermann and Eguíluz (2005) M. G. Zimmermann and V. M. Eguíluz, Phys. Rev. E 72, 056118 (2005); J. M. Pacheco, A. Traulsen, and M. A. Nowak, Phys. Rev. Lett. 97, 258103 (2006); P. Holme and G. Ghoshal, Phys. Rev. Lett. 96, 098701 (2006); T. Gross and B. Blasius, J. Roy. Soc. Interface 5, 259 (2008).
  • W. Li et al. (2007) W. Li, X. Zhang, and G. Hu, Phys. Rev. E 76, 045102 (2007).
  • Zimmermann et al. (2000) M. G. Zimmermann, V. M. Eguíluz, M. San Miguel, and A. Spadaro, in Applications of Simulation to Social Sciences (Hermes Science Publications, Paris, France, 2000).
  • Nowak and May (1992) M. A. Nowak and R. M. May, Nature 359, 826 (1992).
  • Granovetter (1973) M. S. Granovetter, Am. J. Sociol. 78, 1360 (1973).
  • Szabó and Tőke (1998) L. E. Blume, Games Econ. Behav. 5 (1993) 387; G. Szabó and C. Tőke, Phys. Rev. E 58, 69 (1998); J. Miekisz, in Multiscale Problems in the Life Sciences, From Microscopic to Macroscopic, V. Capasso and M. Lachowicz (eds.), Lecture Notes in Mathematics 1940 (2008) 269.
  • (9) http://www.tp.umu.se/~jrpeter/multiadaptivegame/
  • Barabási et al. (1999) A. L. Barabási, R. Albert, and H. Jeong, Physica A 272, 173 (1999).
  • (11) M. Salathé, R. M. May, and S. Bonhoeffer, J. Roy. Soc. Interface 2, 533 (2005).
  • (12) P. L. Krapivsky, S. Redner, F. Leyvraz, Phys. Rev. Lett. 85, 4629 (2000).
  • Ravasz et al. (2002) E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabási, Science 297, 1553 (2002).
  • (14) E. O. Wilson, Sociobiology (Harvard University Press, Cambridge MA, 1975).
  • Vukov and Szabó (2005) J. Vukov and G. Szabó, Phys. Rev. E 71, 036133 (2005); F. C. Santos and J. M. Pacheco, Phys. Rev. Lett. 95, 098104 (2005).
  • Fu et al. (2009) F. Fu, L. Wang, M. A. Nowak, and C. Hauert, Phys. Rev. E 79, 046707 (2009).
  • (17) G. Zschaler, A. Traulsen, and T. Gross, New J. Phys. 12, 093015 (2010).
  • Wu et al. (2006) Z.-X. Wu, X.-J. Xu, S.-J. Wang, and Y.-H. Wang, Phys. Rev. E 74, 021107 (2006).