The N-player war of attrition in the limit of infinitely many players


The War of Attrition is a classical game-theoretic model that was first introduced to mathematically describe certain non-violent animal behavior. The original setup considers two players in a one-shot game competing for a given prize by waiting. The model has later been extended in several directions to allow more than two players. One of the first of these N-player generalizations is due to J. Haigh and C. Cannings in [9], where two models are mainly discussed: one in which the game starts afresh with new strategies each time a player leaves the game, and one in which the players have to stick with the strategy they chose initially. The first case is well understood, whereas much is still left open for the second.
In this paper we study the asymptotic behavior of these two models as the number of players tends to infinity and prove that their time evolutions coincide in the limit. We also prove new results concerning the second model in the N-player setup.

Key words and phrases:
game theory, war of attrition, evolutionarily stable strategy, n-player
2010 Mathematics Subject Classification:

1. Introduction

Game theory has, ever since the pioneering work of J. von Neumann, developed into an important tool in the study of various areas of research such as economics, computer science, political science, biology, social science and even philosophy. A common point of view is that game theory constitutes a theory of rational and strategic decision making, describing how rational players would optimize their play, often in terms of Nash equilibria. Over the years economics in particular has had great success applying game theory in various situations, and this has resulted in several Nobel prizes. The latest of these was given to Alvin E. Roth and Lloyd S. Shapley in 2012 “for the theory of stable allocations and the practice of market design”. However, when applying game theory to problems in biology and animal behavior it is obvious that the common viewpoint of having rational players is insufficient. Even though many situations in biology could, in principle, be described as some kind of game, one cannot consider animals as being actively rational. One rather expects animal behavior to be in agreement with game theory as a consequence of natural selection. In 1973, in [12], J. Maynard Smith and G. R. Price introduced the notion of an Evolutionarily Stable Strategy, in short ESS, which was to take the same place in evolutionary game theory as the Nash equilibrium had in economic game theory. The ESS serves as the natural candidate for the type of behavior that evolution would eventually produce by natural selection. In 1974, in [11], J. Maynard Smith developed a game-theoretic, non-violent conflict scenario called the War of Attrition to describe potential animal behavior in e.g. territorial competition. The model considers two players competing for a single prize by waiting.
The cost of waiting is modeled as proportional to the duration of the game, and it is paid in equal amounts by both parties when the first player decides to leave. The remaining player wins the game and collects the prize. In [3] it was proven by D. T. Bishop and C. Cannings that the war of attrition has a unique mixed ESS, given by choosing the waiting time at random from an exponential probability distribution whose mean equals the prize divided by the cost rate. In 1999 John Maynard Smith, together with E. Mayr and G. C. Williams, was honored with the Crafoord prize for his work in evolutionary biology in connection with game theory.
In 1989 J. Haigh and C. Cannings in [9] generalized the two-player war of attrition to models involving several players. One could of course think of many ways of constructing such generalizations, but the ones considered in [9] are probably the most natural extensions. In this text we will refer to these models as the dynamic model and the static model. The N-player dynamic model of the war of attrition is a repetitive game played in rounds, in which one player drops out of the game in each round until only two players are left in the final round. Between rounds the remaining players are allowed to change their strategies. The dynamic model is well understood, and the existence and uniqueness of an ESS is proven in [9] under very general conditions.
In the N-player static model of the war of attrition all participating players choose their waiting times at the beginning of the game. Each of them is then bound to stick to the chosen waiting time. Hence the static model is a one-shot game, i.e. the outcome of the game is known as soon as all players have made their choice. In contrast to the dynamic model, far less is known about how to play the static model. In [9] it is proven by specific examples that the static model admits a unique ESS in some cases while in other cases it does not, and much is left open.
The war of attrition has through time developed into one of the most classical game-theoretic models. It has also been studied from a different, interesting point of view in [7].

2. Preliminaries and Introductory Results

We begin with a heuristic discussion. For the simplest setup of the war of attrition, from now on WA (see [11]), we consider a two-player game in which the contestants are competing for a prize V by waiting. There is a cost connected to the duration of the game, modelled as linear in time with unit rate. The game ends once one of the players decides to withdraw, paying the accumulated time cost and leaving the prize to the opponent, who also pays the time cost. If we name the players 1 and 2, and their corresponding waiting times t_1 and t_2, we get the WA pay-off function for player 1 as

E_1(t_1, t_2) = V - t_2 if t_1 > t_2, and E_1(t_1, t_2) = -t_1 if t_1 < t_2.

In the case of equal waiting times we define

E_1(t, t) = V/2 - t.
It is clear that this setup of the game cannot have a pure strategy ESS, or even a pure strategy Nash equilibrium, since if there were such a strategy it would be given by a fixed waiting time. It would therefore always be possible to beat this strategy by waiting just a bit longer than that fixed time. However, according to [3], there is a unique mixed ESS, given by letting the waiting time be randomly distributed with an exponential density whose mean equals the prize. As mentioned in the introduction, in [9] J. Haigh and C. Cannings generalized the above two-player setup of the WA to two different models allowing N players; one repetitive game, the dynamic model, and one one-shot game, the static model. In both cases one considers a sequence of positive numbers c_1, c_2, ..., c_N representing the prizes that the players are competing for. In this text we will assume this prize sequence to be increasing, i.e. 0 < c_1 < c_2 < ... < c_N.
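The indifference property behind the mixed ESS can be illustrated numerically. The sketch below is not taken from [3] or [9]; the prize value and the unit cost rate are our own illustrative choices. It estimates, by Monte Carlo, the expected payoff of several pure waiting times against an opponent whose waiting time is exponential with mean equal to the prize; against this mixed strategy every pure strategy earns approximately the same (zero) expected payoff.

```python
import random

def payoff(x, y, prize=2.0):
    """Payoff to a player waiting x against an opponent waiting y.

    Unit cost rate: the loser pays their own waiting time, the winner
    pays the loser's waiting time and collects the prize.  Ties split
    the prize (a measure-zero event for continuous strategies).
    """
    if x > y:
        return prize - y
    if x < y:
        return -x
    return prize / 2 - x

def mean_payoff(x, prize=2.0, n=200_000, seed=1):
    """Monte Carlo expected payoff of the pure strategy x against an
    exponential opponent with mean equal to the prize."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.expovariate(1.0 / prize)  # exponential with mean = prize
        total += payoff(x, y, prize)
    return total / n

# Every pure strategy is (near) indifferent against the exponential ESS:
for x in (0.5, 2.0, 5.0):
    print(f"x = {x}: mean payoff ≈ {mean_payoff(x):+.3f}")
```

Each estimate is close to zero, reflecting that a mixed equilibrium strategy must make the opponent indifferent among all pure strategies in its support.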
The dynamic N-player model is divided into N - 1 distinct rounds. In the beginning of the first round all N players, independently of each other, choose their waiting times. Then the players wait, and the contestant having the least waiting time leaves the game, receiving the prize c_1 and paying the time cost. The remaining players also pay the cost and proceed into the second round, where the game starts afresh and proceeds as in the first round, now playing for the prize c_2 instead. The game goes on until the (N - 1)'th player leaves in the final round, receiving c_{N-1} and paying the cost, thus leaving the final player to claim the prize c_N for the total accumulated time cost.
It was proven in [9] that there exists a unique mixed ESS for each round in the above dynamic model, given by choosing the waiting time according to an exponential distribution whose mean depends on the current prize and on the number of players remaining. In what follows we will use this result to investigate the N → ∞ limit of the dynamic model in a “sketchy manner”. Given the increasing sequence of prizes we associate a piecewise linear function c on [0, 1] by declaring c(0) = 0 and c(k/N) = c_k, so that every consecutive pair of points is joined together by a line segment. It is clear that the function may have a very bad behaviour in the limit as N → ∞; a fast-growing prize sequence could, for instance, give an a.e. unbounded function in the limit. However, if we suppose that the prize sequence is such that c converges to an increasing C^1 function as N → ∞, the dynamic N-player WA will have meaning in the limit and we can investigate the limiting behaviour. If we denote the density function of the mixed ESS of round k by f_k, and let k/N → x for some fixed x ∈ (0, 1), we have that


as N → ∞. This number represents the fraction of players that, at the moment, have left the game. Of course, in this setup the fraction depends on time, and we would like to analyze its time evolution and how it relates to the prize function. For this we introduce the mean-field density function describing the distribution of chosen waiting times among the players still left at time t after the game has started. Thus it will lose mass as players are quitting, according to


and since the marginal at each fixed time should be exponentially distributed like (2.3), we get


Thinking of this as an approximation of the distribution of players in the N-player game at a given time t, consider the number of players in the vicinity of t (i.e. having waiting times close to t). Then

On the other hand, since and are of the same time scale and is continuous we should also have that


and therefore the two expressions must agree. This means that the number of players having such waiting times equals the number of players that will leave the game in the corresponding time interval. By this we get that


which in turn yields a differential equation for the dynamics of as


Since , equation (2.8) suggests that


and since the game by definition will end when all the players are out, i.e. when , we get by (2.9) that the total duration of the game is given by the formula


In the following lemma we show that this result is consistent with the corresponding result one would get from [9] in the limit N → ∞.

Lemma 1.

Let , where each term is a random variable with density given by the first order statistic of a number of exponentially distributed random variables with the appropriate parameter. Then

as .


If and we have that . Thus, for the sequence we have


as N → ∞, since the quotient tends to 1 independently of the index, and we end up with a telescoping sum. ∎

In the dynamic N-player WA the quantity in Lemma 1 is a natural measure of the expected total time a typical game will last. In the first round all players choose their waiting times according to an exponential distribution with the mean prescribed by the ESS. Denoting the waiting time of each player in the first round by a random variable, we get a sequence of N waiting times. The first round of the game will therefore last for the minimum of these, that is, the first order statistic of the waiting time sequence. In this case, since the waiting times are i.i.d. and exponentially distributed, it is well known that the minimum is again exponentially distributed, with the original mean divided by the number of players (see e.g. [5]). After the first round the game starts afresh and the players (independently of the previous round) choose their new waiting times according to an exponential distribution with the mean prescribed for the second round. The expected time of round two is again derived by first order statistics. The game continues like this until the (N - 1)'th player leaves and the final prize is collected by the “winner”. Thus the expected duration of the N-player game is precisely the sum of the expected round durations. The result of Lemma 1 indicates consistency between the heuristic arguments that led to (2.10) and [9].
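The first-order-statistic fact used above, namely that the minimum of m i.i.d. exponential waiting times with mean μ is again exponential with mean μ/m, can be checked by simulation. The parameter values below are arbitrary illustrative choices of our own:

```python
import random

def min_of_exponentials(m, mean, n=100_000, seed=7):
    """Empirical mean of the first order statistic (the minimum) of
    m i.i.d. exponential waiting times with the given mean."""
    rng = random.Random(seed)
    lam = 1.0 / mean  # expovariate takes the rate, not the mean
    return sum(
        min(rng.expovariate(lam) for _ in range(m)) for _ in range(n)
    ) / n

# The minimum of m i.i.d. Exp(mean mu) variables is Exp(mean mu/m):
mu, m = 3.0, 10
print(min_of_exponentials(m, mu))  # close to mu / m = 0.3
```

This is exactly the mechanism behind the telescoping sum in Lemma 1: each round contributes, in expectation, the mean of one waiting time divided by the number of players still present.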
Given an increasing C^1 prize function on the unit interval it is an easy task to solve the ODE in (2.8) and thereby derive the mean-field density of players. Below we have included some numerical results for some different choices of prize function.

The results above concerning the time evolution are of course based on non-rigorous arguments, but the main conclusion, that the fraction of departed players evolves as the inverse function of the prize function, makes sense from a game-theoretic point of view. To be more precise: let t be a fixed point in time, representing a pure strategy in the limiting dynamic WA with a given prize function. If all players in the game are playing according to the limiting dynamics, the payoff of playing any pure strategy t would be the same, regardless of t. In other words, the limiting dynamics represent a Nash equilibrium in the limit of infinitely many players. In the next section we investigate this more thoroughly.
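The inverse-function relation can be illustrated numerically. The sketch below is our own illustration, not taken from the paper: for a hypothetical prize function c(x) = x^2 on [0, 1] (an arbitrary choice satisfying the standing assumptions), the fraction of players that have left by time t in the limiting dynamics is obtained by numerically inverting c with bisection, giving c^{-1}(t) = sqrt(t).

```python
def inverse(f, y, lo=0.0, hi=1.0, tol=1e-12):
    """Invert an increasing function f on [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical prize function (our own choice): c(x) = x**2 on [0, 1].
c = lambda x: x * x

# Fraction of players that have left by time t in the limiting dynamics:
for t in (0.04, 0.25, 0.81):
    print(t, inverse(c, t))  # sqrt(t): 0.2, 0.5, 0.9
```

Any other strictly increasing, continuously differentiable prize function can be substituted for c without changing the rest of the sketch.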

3. Convergence in the N → ∞ limit of the dynamic model

In this section we consider the dynamic N-player generalization of the WA according to [9] and its behaviour as the number of players grows to infinity. We will assume the prize sequence to be positive and strictly increasing and consider an (N - 1)-round game (playing for the k'th prize in the k'th round) in which each round starts afresh once a player drops out. As stated in the introduction, we know that the dynamic model of the game has a unique mixed ESS given by a certain exponential distribution in each round. We assume that there is an increasing C^1 function defined on the unit interval, denoted by c, such that c(0) = 0 and c(k/N) = c_k.
Since the mixed strategy ESS is an exponential distribution we may consider the evolution of the game as a continuous time Markov chain, where the state counts the total fraction of players that have decided to leave by time t. Since a player that has left the game never returns, this is a pure birth process. More specifically: if all the players left in the game after the first k rounds play according to the ESS, the time it takes to play the next round is given by the minimum of the remaining players' waiting times, which are i.i.d. and exponentially distributed. Therefore


and if we let λ_k, for k = 1, ..., N - 1, denote the corresponding jump rates, we have the finite birth process below describing the time evolution of the game as players are quitting:


We define the last state to be absorbing, since the game ends as soon as N - 1 players have left. Now, consider the stochastic jump process


Then this is a continuous time Markov process, constant during the k'th round of the game. We are interested in the limit N → ∞ and in proving convergence towards the mean-field dynamics (see (2.8)). In order to find a closed form expression for its expectation we use standard methods from continuous time Markov chain theory (see e.g. [1]). To the pure birth process in (3.3) we have the associated intensity matrix


The Chapman-Kolmogorov equations state that


where P(t) is the matrix of transition probabilities between the states in (3.3) at time t. Henceforth, for the sake of simplicity, we will assume that the rates λ_k are pairwise distinct. Allowing equalities would make the analysis much more involved, but it would not contribute any more interesting results. Solving (3.5) by hand is tedious but straightforward, and it is possible to find a closed form expression for the transition matrix (see Appendix A). The state probabilities, collected in a row vector, can be computed using this transition matrix together with the initial condition that the process starts in the first state. The result is




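As a numerical sanity check of the Chapman-Kolmogorov equations for a pure birth chain, the sketch below builds a small intensity matrix with illustrative, pairwise distinct rates of our own choosing, computes the transition matrix as a matrix exponential, and verifies the forward equation P'(t) = P(t)Q by finite differences:

```python
import numpy as np
from scipy.linalg import expm

# Intensity matrix of a small pure birth chain with illustrative,
# pairwise distinct rates (states 0, 1, 2, 3; state 3 absorbing).
lam = [1.0, 2.5, 4.0]
Q = np.zeros((4, 4))
for k, rate in enumerate(lam):
    Q[k, k] = -rate
    Q[k, k + 1] = rate

t, h = 0.7, 1e-6
P = expm(t * Q)                                          # P(t) = exp(tQ)
dP = (expm((t + h) * Q) - expm((t - h) * Q)) / (2 * h)   # numerical P'(t)

print(np.max(np.abs(dP - P @ Q)))  # forward-equation residual (tiny)
print(P.sum(axis=1))               # rows of P(t) sum to one
```

The residual is at the level of floating-point and finite-difference error, confirming that the closed-form state probabilities can equivalently be obtained from the matrix exponential of the intensity matrix.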
In order to proceed and to understand this complicated expression the following lemma is useful:

Lemma 1.

Let be a sequence of positive and distinct real numbers. If , then


Consider the Laplace transform of the convolution:


where denotes the Laplace transform of . Splitting the above product into partial fractions yields


and since all the are distinct by assumption we get that


Thus, using the inverse transform to get back into the time domain, we are done. ∎
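The partial-fraction formula of the lemma is the classical density of a sum of independent exponentials with distinct rates (a hypoexponential distribution). A quick Monte Carlo check, with illustrative rates of our own choosing, compares the cumulative distribution obtained by integrating the formula against an empirical estimate:

```python
import math
import random

def hypoexp_density(t, rates):
    """Density of a sum of independent exponentials with pairwise
    distinct rates, via the partial-fraction formula of the lemma."""
    total = 0.0
    for i, li in enumerate(rates):
        coeff = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                coeff *= lj / (lj - li)
        total += coeff * li * math.exp(-li * t)
    return total

def hypoexp_cdf(t, rates, steps=20_000):
    """Numerical integral of the density (trapezoidal rule)."""
    h = t / steps
    s = 0.5 * (hypoexp_density(0.0, rates) + hypoexp_density(t, rates))
    s += sum(hypoexp_density(k * h, rates) for k in range(1, steps))
    return s * h

rates = [1.0, 2.0, 3.5]  # illustrative, pairwise distinct rates
rng = random.Random(3)
n, t0 = 200_000, 1.5
hits = sum(
    sum(rng.expovariate(l) for l in rates) <= t0 for _ in range(n)
)
print(hits / n, hypoexp_cdf(t0, rates))  # empirical vs formula CDF agree
```

The distinctness assumption is essential here, exactly as in the lemma: repeated rates make the partial-fraction coefficients blow up and the formula takes a different (polynomial-times-exponential) form.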

We are now ready to state and prove a theorem concerning the convergence properties of the process as N → ∞.

Theorem 2.

Let , and let c be the prize function defining the Markov process in (3.3). Then, if we define


we have the following:

  1. ,      

with the limits taken in the stated order. Here H denotes the Heaviside function.


We start by proving the first statement. Using the Laplace transform and the conclusion of Lemma 1 we get that

which is well defined for all such that . Now, investigate the product in the above expression.

where we take to be the principal branch of the complex logarithm. Using that for all and that for some we get that


where we have used that for large enough, in the final equality. Expression (3.11) is local and well defined for all , i.e. the disc centered at the origin with radius . By the equality and a Taylor expansion of about we get for all that

where the big-O terms have been included in the error term. Thus,

and by considering the N → ∞ limit of this expression we finally get (making a change of variables, using that the prize function is increasing) that

which in turn yields

Since the integrand is bounded on a compact interval we can use Lebesgue’s theorem on dominated convergence to interchange the limits and the Laplace transform, and then, by the inversion formula, we get that

For proving the second statement it will suffice to consider the norm of the sum over the remaining terms.

where we used the triangle inequality and, in the second inequality, the fact that the convolution is a probability density. The second statement follows immediately. Finally, for proving the third statement we analyse the limiting behavior of the remaining term. Note that

By Lemma 1 we get

From the proof of the first statement we know that

and it follows that . This proves since and . ∎

Following the same line of reasoning as in the proof of Theorem 2 one can also prove that