Evolutionary Prisoner’s Dilemma Game in Flocks

Abstract

We investigate an evolutionary prisoner’s dilemma game among self-driven agents, where the collective motion of biological flocks is imitated by averaging the directions of neighbors. Depending on the temptation to defect and the velocity at which agents move, we find not only that cooperation can be maintained in such a system, but also that there exists an optimal size of the interaction neighborhood which induces the maximum cooperation level. Compared with the case in which no agent moves, cooperation can even be enhanced by the mobility of individuals, provided that the velocity and the size of the neighborhood are not too large. Besides, we find that the system exhibits aggregation behavior, and cooperators may coexist with defectors at equilibrium.

Keywords: cooperation, flocks, evolutionary games, prisoner’s dilemma

1 Introduction

Cooperation is commonly observed in genomes, cells, multi-cellular organisms, social insects, and human society, but Darwin’s theory of evolution implies fierce competition for existence among selfish and unrelated individuals. In past decades, much effort has been devoted to understanding the mechanisms behind the emergence and maintenance of cooperation. In this context, the prisoner’s dilemma game is a widely used model to illustrate the conflict between selfish and cooperative behavior.

The traditional prisoner’s dilemma (PD) game is a two-player game, where each player can choose either cooperation (C) or defection (D). Mutual cooperation pays each a reward $R$, while mutual defection brings each a punishment $P$. If one player chooses to cooperate while the other prefers to defect, the cooperator obtains the sucker’s payoff $S$ and the defector gains the temptation $T$. The four payoff values satisfy $T > R > P > S$ and $2R > T + S$. According to these inequalities, defection is the optimal strategy for a selfish player to maximize its payoff in a one-shot game, no matter what the opponent does. But the total income of two defectors is lower than that of two cooperators. Hence the dilemma arises, and defection is evolutionarily stable in a well-mixed population [1].
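For concreteness, the dominance of defection in a one-shot game can be checked with any payoff values obeying these inequalities. The short Python sketch below uses the classic textbook numbers $T=5$, $R=3$, $P=1$, $S=0$; these particular values are illustrative and not taken from this paper.

```python
# Illustrative one-shot PD with textbook values T=5, R=3, P=1, S=0
# (not the paper's values; only the ordering T > R > P > S and 2R > T + S matters).
T, R, P, S = 5, 3, 1, 0

def payoff(my_move, opp_move):
    """Return the focal player's payoff; 'C' = cooperate, 'D' = defect."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(my_move, opp_move)]

for opp in ('C', 'D'):
    # Whatever the opponent does, defection pays the focal player more ...
    assert payoff('D', opp) > payoff('C', opp)
# ... yet two defectors jointly earn less than two cooperators: the dilemma.
assert 2 * payoff('D', 'D') < 2 * payoff('C', 'C')
```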

Spatial PD games have attracted much attention since Nowak and May reported the stable coexistence of cooperators and defectors on a two-dimensional lattice [2]. After that, much work has been done to add randomness to the deterministic game dynamics. For example, noise can be introduced based on the payoff difference, which allows an inferior strategy to be followed with a certain probability [3]. The mapping of game payoffs to individual fitness can follow different distributions, which accounts for social diversity [4]. And in the dynamic preferential selection model, the more frequently a neighbor’s strategy is adopted by the focal player, the more likely that neighbor is to be referred to again in subsequent rounds [5]. Besides, the networks describing connections among individuals have also been extended from lattices to complex networks [6, 7, 8, 9, 10]. For more details about spatial evolutionary games, see Refs. [11, 12, 13] and references therein.

In spatial games mentioned above, players are located on the vertices of the network, and edges among vertices determine who plays with whom. Often the network is assumed to be static. However, in real social systems, the network size may continuously change as individuals join or quit, and the network structure can also evolve as links are created or broken. It has been reported that co-evolution constitutes a key mechanism for the sustainability of cooperation in dynamic networks [14, 15, 16, 17, 18, 19].

For a network, the movement of individuals may change either its size or its structure. For example, when people drive, their cell phones connect with different base stations of the mobile communication network, and moving house brings new neighbors into one’s acquaintance network. In fact, the motion of individuals is an important characteristic of social networks [20], and the patterns of human mobility have drawn much attention in past years [21, 22]. Once spatial structure is introduced, it is natural to consider the evolution of cooperation among mobile individuals.

Intuitively, the introduction of mobility should lead to the dominance of defection, because mobile defectors can expect to exploit more cooperators than in a static network and can escape retaliation from former partners by moving away. Yet the correlation between cooperation and mobility is more complex than intuition suggests. Mobility can affect the origin of altruism, while the rise of altruism cost can lead to an evolutionary reduction of mobility [23]. With a win-stay, lose-shift rule, cooperation can be evolutionarily stable under generalized reciprocity [24]. Further, in agent-based models, the mobility of individuals can be involved explicitly as the movement of agents. “Walk Away”, a simple strategy of contingent movement, can outperform complex strategies under a number of conditions [25]. Success-driven migration may promote the spontaneous outbreak of cooperation in a noisy world dominated by selfishness and defection [26]. Even with a blind pattern of mobility, cooperation is not only possible but may also be enhanced for a broad range of parameters, when compared with the case in which agents never move [27, 28, 29].

In the present work we study the evolution of cooperation among mobile players, who are allowed to move in a two-dimensional plane without periodic boundary conditions. The movement of every agent is non-contingent, imitating the direction-alignment process in biological flocks. We find that there exists an optimal size of the interaction neighborhood, which induces the maximum cooperation level. Compared with the case in which no agent moves, cooperation can not only be maintained but even be enhanced by the movement of players. We also investigate the dependence of the cooperator frequency on the density of agents, and the coexistence of different strategies is illustrated.

2 The Model

Let $\mathbf{x}_i(t)$ and $\theta_i(t)$, $i = 1, 2, \ldots, N$, denote the position and moving direction of agent $i$ at time $t$, respectively. Assume that each agent moves with the same absolute velocity $v$. At $t = 0$, all agents are randomly distributed in an $L \times L$ square without boundary restrictions, and their directions, $\theta_i(0)$, are uniformly distributed in $[0, 2\pi)$. The position of each agent is updated according to

$\mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t)\,\Delta t$,   (1)

where $\mathbf{v}_i(t)$ is the velocity vector of agent $i$, characterized by the constant speed $v$ and the direction $\theta_i(t)$. In addition, the time interval $\Delta t$ between two updates of the positions is set to 1.
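As a rough illustration of this update rule and of the random initialization described above, the following sketch uses numpy; the values of $N$, $L$, $v$ and $\Delta t$ are placeholders chosen for the example, not the parameters used in the paper.

```python
import numpy as np

# Minimal sketch of the initialization and of Eq. (1); N, L, v, dt are
# illustrative placeholders, not the values used in the paper.
rng = np.random.default_rng(0)
N, L, v, dt = 200, 10.0, 0.05, 1.0

pos = rng.uniform(0.0, L, size=(N, 2))         # random positions in an L x L square
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)  # directions uniform in [0, 2*pi)

def update_positions(pos, theta):
    """Eq. (1): move every agent a step of length v*dt along its current direction.
    No periodic boundary conditions are applied, as in the model."""
    vel = v * np.column_stack((np.cos(theta), np.sin(theta)))
    return pos + vel * dt
```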

In biological systems, such as flocks of birds and schools of fish, individuals tend to align their moving direction with that of nearby neighbors. To simulate this direction-alignment process, the angle of agent $i$ is updated according to the average direction of its neighbors:

$\theta_i(t+1) = \arctan\!\left[\dfrac{\sum_{j \in \Gamma_i(t)} \sin\theta_j(t)}{\sum_{j \in \Gamma_i(t)} \cos\theta_j(t)}\right]$,   (2)

where $\Gamma_i(t)$ denotes the neighbor set of agent $i$ at time $t$.
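A minimal sketch of this alignment step is given below; averaging the unit vectors and converting back to an angle avoids the discontinuity of averaging raw angles near $0$ and $2\pi$. Whether the focal agent’s own direction enters the average depends on whether it belongs to its own neighbor set, which this sketch leaves to the caller.

```python
def update_directions(theta, neighbors):
    """Eq. (2): each agent adopts the average direction of its neighbor set.
    `neighbors[i]` is an array of agent indices (the set Gamma_i(t))."""
    new_theta = np.empty_like(theta)
    for i, nbrs in enumerate(neighbors):
        # Average the unit vectors of the neighbors, then convert back to an angle.
        s = np.sin(theta[nbrs]).mean()
        c = np.cos(theta[nbrs]).mean()
        new_theta[i] = np.arctan2(s, c)
    return new_theta
```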

In real populations, people are believed to interact much more with their neighbors than with those who are far away [3]. Based on this point, when players are located on the nodes of a fixed network, interactions often take place among immediate neighbors. When players keep moving, distances can be used to find the neighbors close to the focal one [30]. Note that in the Vicsek model [31], the neighbor set is defined as the agents within a circle of radius $r$ centered at agent $i$. To exclude effects from fluctuations of the neighborhood size, we assume that each agent interacts only with its $k$ nearest neighbors at time $t$. Thus $\Gamma_i(t)$ can be written as

$\Gamma_i(t) = \min_k \{\, d_{ij}(t) \mid j = 1, \ldots, N,\ j \neq i \,\}$,   (3)

where the function $\min_k\{\cdot\}$ selects the $k$ agents with the smallest values in the given set, and $d_{ij}(t)$ denotes the Euclidean distance between agents $i$ and $j$ in the two-dimensional plane. In simulations, the distances between the focal agent $i$ and all other agents are calculated first and then sorted in ascending order, $d_{ij_1} \le d_{ij_2} \le \cdots \le d_{ij_{N-1}}$, where $d_{ij_m}$ denotes the distance between $i$ and $j_m$ and the suffix $m$ represents its rank. If $d_{ij_k} < d_{ij_{k+1}}$, the $k$ nearest agents are chosen as neighbors. If there are ties at the $k$-th distance, neighbors are randomly selected among the equidistant agents. The sorting process leads to a directed interaction network, however: $j \in \Gamma_i(t)$ does not imply $i \in \Gamma_j(t)$, where $\Gamma_i(t)$ and $\Gamma_j(t)$ denote the neighbor sets of agents $i$ and $j$ at time $t$, respectively.
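The neighbor selection can be implemented, for instance, as in the sketch below; a random permutation followed by a stable sort reproduces the random tie-breaking among equidistant agents described above. This is one possible implementation, not necessarily the one used by the authors.

```python
def k_nearest_neighbors(pos, k, rng):
    """Eq. (3): for each agent, pick the k agents closest in Euclidean distance.
    Ties at the k-th distance are broken randomly via the initial permutation."""
    N = pos.shape[0]
    neighbors = []
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf                                        # exclude the focal agent
        order = rng.permutation(N)                           # random order first ...
        order = order[np.argsort(d[order], kind="stable")]   # ... then stable sort by distance
        neighbors.append(order[:k])
    return neighbors
```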

Next we introduce the evolutionary rules of our game. Initially all players are randomly assigned one strategy of the PD with equal probability. The strategy of each player can be denoted by a unit vector, $s = (1, 0)^{\mathrm{T}}$ or $s = (0, 1)^{\mathrm{T}}$, indicating cooperation or defection respectively. At each time step, each player $i$ plays the PD game with its neighbors $j \in \Gamma_i(t)$, accumulating a payoff $P_i$. Here we assume that . Following common practice [2], the payoff matrix of the PD takes the rescaled form

$M = \begin{pmatrix} 1 & 0 \\ b & 0 \end{pmatrix}$,   (4)

where $b > 1$ denotes the temptation to defect, and the rows (columns) correspond to the focal player’s (opponent’s) choice of C or D. Then, at the next time step, every player adopts the strategy that gained the highest payoff among itself and its neighbors [2]. Though the evolution of strategies and the movement of agents could in principle be characterized by two different time scales, they are treated as identical here: every agent modifies its position and direction after each strategy update. This process is repeated until the system reaches equilibrium.
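A possible implementation of one generation of the game is sketched below, assuming the rescaled payoffs $R=1$, $T=b$, $S=P=0$ and the best-takes-over rule. Whether an interaction also contributes to the neighbor’s payoff (the network is directed) and how payoff ties are broken are assumptions of this sketch rather than details fixed by the text.

```python
def play_and_imitate(strategy, neighbors, b):
    """One generation: accumulate payoffs with the rescaled matrix (R=1, T=b, S=P=0),
    then let every agent copy the strategy of the highest-earning player among
    itself and its neighbors.  `strategy` is a boolean numpy array, True = cooperate."""
    N = len(strategy)
    payoff = np.zeros(N)
    for i in range(N):
        for j in neighbors[i]:
            if strategy[i] and strategy[j]:
                payoff[i] += 1.0          # mutual cooperation: reward R = 1
            elif (not strategy[i]) and strategy[j]:
                payoff[i] += b            # defector exploiting a cooperator: T = b
            # the other pairings contribute S = P = 0
    new_strategy = strategy.copy()
    for i in range(N):
        group = np.append(neighbors[i], i)       # itself and its neighbors
        best = group[np.argmax(payoff[group])]   # first best in case of a payoff tie
        new_strategy[i] = strategy[best]
    return new_strategy
```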

In our model, the distances among agents determine the network of contacts, and the agents continuously change their positions. As a result, the neighbors of an agent may differ from step to step, even though the size of the neighborhood is fixed. To characterize the evolution of the interaction network, we count the new neighbors that all agents meet at time $t$ as

$n(t) = \sum_{i=1}^{N} \left| \Gamma_i(t) \setminus \Gamma_i(t-1) \right|$,   (5)

where $|\cdot|$ represents the size of a set.
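Under this reading of Eq. (5), $n(t)$ can be computed as follows, building on the neighbor lists produced by the sketch above.

```python
def count_new_neighbors(neighbors_now, neighbors_prev):
    """Eq. (5): total number of neighbors at time t that were not neighbors
    of the same agent at time t-1 (one plausible reading of the definition)."""
    return sum(len(set(now) - set(prev))
               for now, prev in zip(neighbors_now, neighbors_prev))
```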

Fig. 1 shows typical time evolutions of the normalized number of new neighbors $n(t)$ and of the frequency of cooperators $f_c$. One can find that $n(t)$ decreases to zero after a transient period. For comparison, we also plot the evolution of the average normalized velocity $v_a$ defined in Ref. [31]. As $n(t)$ decreases, $v_a$ also reaches a steady value, which indicates a stable distribution of the moving directions of the agents. These findings imply that, given a sufficient relaxation time, each agent acquires a fixed neighborhood. Later, we will show that without periodic boundary conditions the system forms many disconnected components after a long run time. Thus the variation of neighbors, if any, is constrained to a fraction of agents in the population.

Fig. 1: Representative time evolutions of the normalized $n(t)$, the cooperator frequency $f_c$, and the average normalized velocity $v_a$.

The simulations were carried out in a system of $N$ agents on an $L \times L$ plane. In each realization, we first check whether the interaction network has become fixed after a suitable relaxation time; longer relaxation times are required for small neighborhood sizes $k$ or small velocities $v$. If $n(t) = 0$ and this condition persists for a prescribed number of time steps, the network is treated as static. Then we evaluate the frequency of cooperators at equilibrium by averaging over the last 1000 generations. All data points shown in each figure are obtained by averaging over 400 realizations with independent initial states.
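Putting the pieces above together, a skeleton of a single realization might look like the following. The update order (strategies, then directions, then positions) follows the description in Section 2, while the run length, the measurement window, the parameter values, and the initial strategy assignment in the usage lines are placeholders rather than the paper’s settings.

```python
def run_realization(pos, theta, strategy, b, k, steps=5000, window=1000, seed=0):
    """Sketch of one realization: iterate game + motion, monitor n(t), and
    average the cooperator frequency over the final `window` generations."""
    rng = np.random.default_rng(seed)
    prev = k_nearest_neighbors(pos, k, rng)
    fc, n_hist = [], []
    for _ in range(steps):
        nbrs = k_nearest_neighbors(pos, k, rng)
        n_hist.append(count_new_neighbors(nbrs, prev))  # n(t); stays 0 once the network is frozen
        strategy = play_and_imitate(strategy, nbrs, b)  # strategy update ...
        theta = update_directions(theta, nbrs)          # ... then direction alignment ...
        pos = update_positions(pos, theta)              # ... then movement
        fc.append(strategy.mean())
        prev = nbrs
    return np.mean(fc[-window:]), n_hist

# Example usage with placeholder parameters:
strategy0 = rng.random(N) < 0.5                  # random initial strategies, True = cooperate
fc_eq, n_t = run_realization(pos, theta, strategy0, b=1.1, k=5)
```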

3 Results and Discussions

Fig. 2 illustrates the dependence of the frequency of cooperators $f_c$ on the temptation $b$ in the stationary state for different sizes of the neighborhood $k$ at a fixed absolute velocity $v$. For fixed $k$ and $v$, $f_c$ shows a step structure and gradually decreases as $b$ increases. However, the size of the neighborhood strongly affects the evolution of cooperation. One can see that cooperators are more prone to die out when $k$ is large. But before the system is completely occupied by defectors, there exists an appropriate $k$ that promotes cooperation for a fixed $b$: as shown in Fig. 2(a), for small $b$ one particular value of $k$ always yields a higher cooperation level than the others, for larger $b$ the highest cooperation level is achieved at a different $k$, and for sufficiently large $b$ defectors dominate the population regardless of $k$. These findings suggest a non-monotonic dependence of the cooperator frequency on the neighborhood size $k$. Besides, the absolute velocity $v$ also plays an important role in the evolution of cooperation. Comparing Fig. 2(b) with Fig. 2(a), one can find that the increase of $v$ leads to an apparent drop of $f_c$ for some neighborhood sizes, while for others it causes little change in the cooperation level, and for at least one value of $k$ a substantial cooperation level is still achieved. These findings imply that the influence of individual movement on $f_c$ depends on the value of $k$.

Fig. 2: The frequency of cooperators $f_c$ versus the temptation to defect $b$ for (a) $v = 0.05$, (b) $v = 0.1$, (c) $v = 0.15$, and (d) $v = 0.35$. The curves in each panel correspond to different sizes of the neighborhood $k$.

To investigate the role of the neighborhood size $k$, Fig. 3 presents the cooperator frequency $f_c$ as a function of $k$ for a fixed temptation $b$. Dai et al. reported the promotion of cooperation through enlarging the size of the neighborhood among mobile agents [28], where molecular dynamics is used to describe repulsion and attraction between agents in flocks. In our model, however, a resonance-like behavior is observed: $f_c$ exhibits a peak at intermediate values of $k$. In fact, the same behavior has been found in three typical networks, where the density of cooperators peaks at specific values of the average degree [32]; our work can thus be viewed as an extension of that result to dynamical networks. Next we give a simple explanation for the non-trivial relation between $f_c$ and $k$. On square lattices and regular ring graphs, the fixed locations of players provide continuous interactions within local neighborhoods, and cooperators can cluster together to resist the invasion of defectors [3, 6]. Increasing the average degree hampers cooperation, because the well-mixed limit is approached for a sufficiently large neighborhood [33]. When players keep moving, however, clusters of cooperators may be destroyed by time-variant neighborhoods. The smaller $k$ is, the longer the system needs to form a fixed interaction network. Increasing $k$ enhances the probability of future encounters between players and their former neighbors, so interactions among cooperators can be maintained. But defectors can also exploit more cooperators as $k$ increases, and large values of $k$ reproduce the mean-field situation. To promote cooperation, there must be a compromise between the two limits of $k$ discussed above. That is why the cooperation level reaches its maximum only at intermediate values of $k$. At the same time, the positive effect of intermediate local connectivity on cooperation is strongly constrained by the absolute velocity $v$ and the temptation $b$. In Fig. 3, the value of $f_c$ at the peak decreases as $v$ increases, and when $b$ increases to 1.2 in Fig. 3(b), the curves of $f_c$ are almost flat.

Fig. 3: The frequency of cooperators $f_c$ as a function of the size of the neighborhood $k$ for (a) $b = 1.05$ and (b) $b = 1.2$. The curves in each panel correspond to different values of the velocity $v$.

To study the effect of the absolute velocity $v$ on the cooperation level, Fig. 4(a) shows the frequency of cooperators $f_c$ as a function of the temptation $b$ for different values of $v$ at a fixed $k$. In our model, the velocity measures how fast players move. One can find that for large $v$ the cooperator frequency is lower than for small $v$, and that it decreases gradually as $v$ increases. In fact, when the agents move with a high velocity, they have a greater chance to contact different neighbors than in the case of small $v$. Before the interaction network becomes fixed, the neighbors of each agent change quite often, or may even be completely different at each time step. As a result, there is only a small probability of forming compact clusters of cooperators, which leads to the dominance of defectors. For small $v$, however, the situation is reversed: compared with the case in which agents do not move, the cooperation level is promoted throughout the whole parameter range of $b$. This suggests that cooperation among mobile individuals is not only possible but may even be enhanced, in accordance with previous work [27, 28, 29]. But such an effect relies on the size of the neighborhood, as shown in Fig. 4(b), which presents the dependence of $f_c$ on $v$ for different values of $k$ at a fixed $b$. For small $k$, the cooperation level increases with $v$ and reaches a maximum at an intermediate velocity, beyond which a drop of $f_c$ appears. For large $k$, $f_c$ changes little as $v$ increases. This can be explained by the occurrence of the mean-field situation at large values of $k$, which offsets the enhancement of cooperation brought by mobility.

Fig. 4: (a) The frequency of cooperators $f_c$ as a function of the temptation $b$ for a fixed $k$ with various velocities $v$. (b) The frequency of cooperators versus the absolute velocity $v$ for a fixed $b$ with different sizes of the neighborhood $k$. A logarithmic scale is used for the velocity axis.

The evolution of the system also relies on the density of agents at $t = 0$, which can be defined as $\rho = N / L^{2}$. It has been reported that there is an optimal region of $\rho$ for cooperation when the neighbors are chosen according to a prescribed interaction distance [29]. Fig. 5 shows the combined effect of $\rho$ and $v$ on the cooperator frequency for fixed $k$ and $b$. For a fixed $v$, one can find that $f_c$ decreases monotonically as $\rho$ increases, and the rate of this decrease grows with $v$. Clearly, our finding differs from that reported in Ref. [29], and the difference is rooted in the definition of the neighborhood. Here $\rho$ only indicates how densely the players are distributed on the plane at $t = 0$. In Ref. [29], $\rho$ determines the average degree of the interaction network, which increases with $\rho$; previous work has revealed that moderate values of the average degree can enhance cooperation [32], so the existence of an optimal region of $\rho$ for cooperation becomes understandable there. In our model, however, each agent plays with a constant number of neighbors. Increasing $\rho$ produces a dense population, which brings fast changes in the neighborhoods of the players, and such rapid change of neighbors hampers the evolution of cooperation, as discussed above. Hence the system shows low values of $f_c$ for large $\rho$ and $v$. Fig. 5(b) sheds more light on the role of $\rho$ as $v$ increases: for small $v$, the increase of $\rho$ leads to an apparent decrease of $f_c$, whereas for large $v$ the variation of the density only causes small fluctuations of $f_c$.

Fig. 5: (a) The cooperator frequency versus the absolute velocity $v$ and the density $\rho$ for fixed $k$ and $b$. (b) The cooperator frequency versus the density $\rho$ for fixed $k$ and $b$, with curves corresponding to different values of $v$.

To gain insight into the evolution of cooperation among mobile players, Fig. 6 provides snapshots of the spatial configuration at different times, obtained in a single realization. To exclude additional mechanisms that favor cooperation, the value of the temptation $b$ is chosen near the extinction threshold of cooperators. One can find that the system gradually splits into many small flocks, in which all agents move in the same direction. Because the agents are located in a plane without boundary restrictions, the flocks fly apart and never meet again. During the process of direction alignment, cooperators can survive by forming compact clusters, and the two strategies may coexist at equilibrium, as shown in the last three panels. One can see that defectors are located on the border of flocks or are surrounded by cooperators. For cooperators adjacent to defectors, mutual cooperation makes their cooperative neighbors earn higher payoffs than the defectors’ income. According to the best-takes-over rule of strategy update, these cooperators will follow the strategies of their cooperative neighbors. That is why cooperation can be maintained in the population; the same mechanism has been found on lattice structures [2].

Fig. 6: Snapshots of the evolution of cooperation at (a) $t = 0$, (b) $t = 25$, (c) $t = 100$, and (d) $t = 1300$ (equilibrium). Cooperators (red circles) form clusters to resist the invasion of defectors (white circles). At the equilibrium state, players moving in the same direction stay together, and their velocities are denoted by arrows. The last three panels, (e), (f) and (g), present details of the labeled components in (d). To keep the spatial configuration clear, not all directions of the agents are shown in (e), (f) and (g).

4 Conclusion

To summarize, we have investigated the effects of mobility on the evolution of cooperation during the direction-alignment process of flocks. Numerical simulations show that cooperation can be maintained among mobile players with simple strategies. Depending on the temptation to defect and the velocity at which the agents move, there exists an optimal size of the interaction neighborhood that produces the maximum cooperation level. Compared with the case in which no agent moves, the cooperation level can even be enhanced by the mobility of individuals, provided that the velocity and the size of the neighborhood are small. The cooperation level is also affected by the density of agents $\rho$, and decreases as $\rho$ increases. Moreover, the system exhibits aggregation behavior, and we illustrate the coexistence of different strategies at equilibrium. Our work may be relevant for understanding the role of information flows in cooperative multi-vehicle systems [34].

This work is supported by the Key Fundamental Research Program of Shanghai (Grant No. 09JC1408000), the National Key Fundamental Research Program (Grant No. 2002cb312200) and the National Natural Science Foundation of China (Grant No. 60575036).

References

  1. C. Hauert, M. Doebeli, Spatial structure often inhibits the evolution of cooperation in the snowdrift game, Nature 428 (2004) 643–646.
  2. M. A. Nowak, R. M. May, The spatial dilemmas of evolution, Int. J. Bifurcation Chaos 3 (1993) 35–78.
  3. G. Szabó, C. Tőke, Evolutionary prisoner’s dilemma game on a square lattice, Phys. Rev. E 58 (1998) 69–73.
  4. M. Perc, A. Szolnoki, Social diversity and promotion of cooperation in the spatial prisoner’s dilemma game, Phys. Rev. E 77 (2008) 011904.
  5. Z. X. Wu, X. J. Xu, Z. G. Huang, S. J. Wang, Y. H. Wang, Evolutionary prisoner’s dilemma game with dynamic preferential selection, Phys. Rev. E 74 (2006) 021107.
  6. F. C. Santos, J. M. Pacheco, Scale-free networks provide a unifying framework for the emergence of cooperation, Phys. Rev. Lett. 95 (2005) 098104.
  7. J. Gomez-Gardenes, M. Campillo, L. M. Floria, Y. Moreno, Dynamical organization of cooperation in complex topologies, Phys. Rev. Lett. 98 (2007) 108103.
  8. J. Ren, W. X. Wang, F. Qi, Randomness enhances cooperation: A resonance-type phenomenon in evolutionary games, Phys. Rev. E 75 (2007) 045101.
  9. X. J. Chen, L. Wang, Promotion of cooperation induced by appropriate payoff aspirations in a small-world networked game, Phys. Rev. E 77 (2008) 017103.
  10. W. B. Du, X. B. Cao, M. B. Hu, H. X. Yang, H. Zhou, Effects of expectation and noise on evolutionary games, Physica A 388 (2009) 2215–2220.
  11. G. Szabó, G. Fáth, Evolutionary games on graphs, Phys. Rep. 446 (2007) 97–216.
  12. M. A. Nowak, Five rules for the evolution of cooperation, Science 314 (5805) (2006) 1560–1563.
  13. M. Doebeli, C. Hauert, Models of cooperation based on the prisoner’s dilemma and the snowdrift game, Ecol. Lett. 8 (7) (2005) 748–766.
  14. M. G. Zimmermann, V. M. Eguíluz, M. San Miguel, Coevolution of dynamical states and interactions in dynamic networks, Phys. Rev. E 69 (2004) 065102.
  15. F. C. Santos, J. M. Pacheco, T. Lenaerts, Cooperation prevails when individuals adjust their social ties, PLOS Comp. Biol. 2 (2006) 1284–1291.
  16. A. Szolnoki, M. Perc, Z. Danku, Making new connections towards cooperation in the prisoner’s dilemma game, EPL 84 (5) (2008) 50007.
  17. A. Szolnoki, M. Perc, Coevolution of teaching activity promotes cooperation, New J. Phys. 10 (2008) 043036.
  18. A. Szolnoki, M. Perc, Promoting cooperation in social dilemmas via simple coevolutionary rules, Eur. Phys. J. B 67 (3) (2009) 337–344.
  19. F. Fu, C. Hauert, M. A. Nowak, L. Wang, Reputation-based partner choice promotes cooperation in social networks, Phys. Rev. E 78 (2008) 026117.
  20. M. C. González, P. G. Lind, H. J. Herrmann, System of mobile agents to model social networks, Phys. Rev. Lett. 96 (2006) 088702.
  21. D. Brockmann, L. Hufnagel, T. Geisel, The scaling laws of human travel, Nature 439 (2006) 462–465.
  22. M. C. González, C. A. Hidalgo, A. L. Barabási, Understanding individual human mobility patterns, Nature 453 (2008) 779–782.
  23. J. F. Le Galliard, R. Ferrière, U. Dieckmann, Adaptive evolution of social traits: Origin, trajectories, and correlations of altruism and mobility, Am. Nat. 165 (2005) 206–224.
  24. I. M. Hamilton, M. Taborsky, Contingent movement and cooperation evolve under generalized reciprocity, Proc. R. Soc. B 272 (2005) 2259–2267.
  25. C. A. Aktipis, Know when to walk away: contingent movement and the evolution of cooperation, J. Theor. Biol. 231 (2004) 249–260.
  26. D. Helbing, W. J. Yu, The outbreak of cooperation among success-driven individuals under noisy conditions, Proc. Natl. Acad. Sci. USA 106 (10) (2009) 3680–3685.
  27. M. H. Vainstein, A. T. C. Silva, J. J. Arenzon, Does mobility decrease cooperation?, J. Theor. Biol. 244 (2007) 722–728.
  28. X. B. Dai, Z. Y. Huang, C. X. Wu, Evolution of cooperation among interacting individuals through molecular dynamics simulations, Physica A 383 (2007) 624–630.
  29. S. Meloni, A. Buscarino, L. Fortuna, M. Frasca, J. Gomez-Gardenes, V. Latora, Y. Moreno, Effects of mobility in a population of prisoner’s dilemma players, Phys. Rev. E 79 (2009) 067101.
  30. V. Dossetti, F. J. Sevilla, V. M. Kenkre, Phase transitions induced by complex nonlinear noise in a system of self-propelled agents, Phys. Rev. E 79 (2009) 051115.
  31. T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, O. Shochet, Novel type of phase transition in a system of self-driven particles, Phys. Rev. Lett. 75 (1995) 1226–1229.
  32. C. L. Tang, W. X. Wang, X. Wu, B. H. Wang, Effects of average degree on cooperation in networked evolutionary game, Eur. Phys. J. B 53 (3) (2006) 411–415.
  33. F. C. Santos, J. F. Rodrigues, J. M. Pacheco, Graph topology plays a determinant role in the evolution of cooperation, Proc. R. Soc. B 273 (2006) 51–55.
  34. R. Olfati-Saber, J. A. Fax, R. M. Murray, Consensus and cooperation in networked multi-agent systems, Proc. IEEE 95 (2007) 215–233.