We investigate the complexity of computing approximate Nash equilibria in anonymous games. Our main algorithmic result is the following: For any player anonymous game with a bounded number of strategies and any constant , an approximate Nash equilibrium can be computed in polynomial time. Complementing this positive result, we show that if there exists any constant such that an approximate equilibrium can be computed in polynomial time, then there is a fully polynomialtime approximation scheme for this problem.
We also present a faster algorithm that, for any player strategy anonymous game, runs in time and computes an approximate equilibrium. This algorithm follows from the existence of simple approximate equilibria of anonymous games, where each player plays one strategy with probability , for some small , and plays uniformly at random with probability .
Our approach exploits the connection between Nash equilibria in anonymous games and Poisson multinomial distributions (PMDs). Specifically, we prove a new probabilistic lemma establishing the following: Two PMDs, with large variance in each direction, whose first few moments are approximately matching are close in total variation distance. Our structural result strengthens previous work by providing a smooth tradeoff between the variance bound and the number of matching moments.
1 Introduction
Anonymous games are multiplayer games in which the utility of each player depends on her own strategy, as well as the number (as opposed to the identity) of other players who play each of the strategies. Anonymous games comprise an important class of succinct games — wellstudied in the economics literature (see, e.g., [Mil96, Blo99, Blo05]) — capturing a wide range of phenomena that frequently arise in practice, including congestion games, voting systems, and auctions.
In recent years, anonymous games have attracted significant attention in TCS [DP07, DP08, DP09, DP15, GT15, CDO15, DDKT16, DKS16a], with a focus on understanding the computational complexity of their (approximate) Nash equilibria. Consider the family of anonymous games where the number of players, , is large and the number of strategies, , is bounded. It was recently shown by Chen et al. [CDO15] that computing an approximate Nash equilibrium of these games is PPADComplete when is exponentially small, even for anonymous games with strategies^{2}^{2}2[CDO15] showed that computing an equilibrium of strategy anonymous games is PPADComplete, but of the strategies in their construction can be merged, resulting in a strategy anonymous game..
On the algorithmic side, Daskalakis and Papadimitriou [DP07, DP08] presented the first polynomialtime approximation scheme (PTAS) for this problem with running time . For the case of strategies, this bound was improved [DP09, DDS12, DP15] to , and subsequently sharpened to in [DKS16b]).
In recent work, Daskalakis et al. [DDKT16] and Diakonikolas et al. [DKS16a] generalized the aforementioned results [DP15, DKS16b] to any fixed number of strategies, obtaining algorithms for computing wellsupported equilibria with runtime of the form . That is, the problem of computing approximate Nash equilibria in anonymous games with a fixed number of strategies admits an efficient polynomialtime approximation scheme (EPTAS). Moreover, the dependence of the running time on the parameter is quasipolynomial – as opposed to exponential.
We note that all the aforementioned algorithmic results are obtained by exploiting a connection between Nash equilibria in anonymous games and Poisson multinomial distributions (PMDs). This connection – formalized in [DP07, DP08] – translates constructive upper bounds on covers for PMDs to upper bounds on computing Nash equilibria in anonymous games (see Section 2 for formal definitions). Unfortunately, as shown in [DDKT16, DKS16a], this “coverbased” approach cannot lead to qualitatively faster algorithms, due to a matching existential lower bound on the size of the corresponding covers. In a related algorithmic work, Goldberg and Turchetta [GT15] studied twostrategy anonymous games () and designed a polynomialtime algorithm that computes an approximate Nash equilibria for .
The aforementioned discussion prompts the following natural question: What is the precise approximability of computing Nash equilibria in anonymous games? In this paper, we make progress on this question by establishing the following result: For any , and any player anonymous game with a constant number of strategies, there exists a time algorithm that computes an approximate Nash equilibrium of the game, for ^{3}^{3}3The runtime of our algorithm depends exponentially in . We remind the reader that the algorithms of [DDKT16, DKS16a] run in quasipolynomial time for any value of inverse polynomial in .. Moreover, we show that the existence of a polynomialtime algorithm that computes an approximate Nash equilibrium for , for any small constant – i.e., slightly better than the approximation guarantee of our algorithm – would imply the existence of a fully polynomialtime approximation scheme (FPTAS) for the problem. That is, we essentially show that the value is the threshold for the polynomialtime approximability of Nash equilibria in anonymous games, unless there is an FPTAS. In the following subsection, we describe our results in detail and provide an overview of our techniques.
1.1 Our Results and Techniques
We study the following question:
For player strategy anonymous games, how small can be (as a function of ), so that an approximate Nash equilibrium can be computed in polynomial time?
Upper Bounds.
We present two different algorithms (Theorems 1.1 and 1.2) for computing approximate Nash equilibria in anonymous games. Both algorithms run in polynomial time and compute approximate equilibria for an inverse polynomial above a certain threshold.
Theorem 1.1 (Main).
For any , and any player strategy anonymous game, there is a time algorithm that computes an approximate equilibrium of the game.
Theorem 1.2.
For any player strategy anonymous game, we can compute an approximate equilibrium in time .
Prior to our work, for , no polynomial time approximation was known for any inverse polynomial . For , the best previous result is due to [GT15] who gave a polynomialtime algorithm for .
Overview of Techniques. The highlevel idea of our approach is this: If the desired accuracy is above a certain threshold, we do not need to enumerate over an cover for the set of all PMDs. Our approach is in part inspired by [GT15], who design an algorithm (for and ) in which all players use one of the two preselected mixed strategies. We note that for , PMDs are tantamount to Poisson Binomial distributions (PBDs), i.e., sums of independent Bernoulli random variables. The [GT15] algorithm can be equivalently interpreted as guessing a PBD from an appropriately small set. One reason this idea succeeds is the following: If every player randomizes, then the variance of the resulting PBD must be relatively high, and (as a result) the corresponding subset of PBDs has a smaller cover.
Our quantitative improvement for the case is obtained as follows: Instead of enforcing players to selected specific mixed strategies – as in [GT15] – we show that there always exists an approximate equilibrium where the associated PBD has variance at least . When for some , the variance is an inverse polynomial of . We then construct a polynomialsize cover for the subset of PBDs with variance at least this much, which leads to a polynomialtime algorithm for computing approximate equilibria in strategy anonymous games.
The idea for the general case of is similar, but the details are more elaborate, since the structure of PMDs is more complicated for . We proceed as follows: We start by showing that there is an approximate equilibrium whose corresponding PMD has a large variance in each direction. Our main structural result is a robust momentmatching lemma (Lemma 3.4), which states that the closeness in lowdegree moments of two PMDs, with large variance in each direction, implies their closeness in total variation distance. The proof of this lemma uses Fourier analytic techniques, building on and strengthening previous work [DKS16a]. As a consequence of our momentmatching lemma, we can construct a polynomialsize cover for PMDs with such large variance. We then iterate through this cover to find an approximate equilibrium, using a dynamic programming approach similar to the one in [DP15].
We now provide a brief intuition of our momentmatching lemma. Intuitively, if the two PMDs in question are both very close to discrete Gaussians, then the closeness in the first two moments is sufficient. Lemma 3.4 can be viewed as a generalization of this intuition, which gives a quantitative tradeoff between the number of moments we need to approximately match and the size of the variance. The proof of Lemma 3.4 exploits the sparsity of the Fourier transform of our PMDs, and the fact that higher variance allows us to take fewer terms in the Taylor expansion when we use moments to approximate the logarithmic Fourier transform. This completes the proof sketch of Theorem 1.1.
Our second algorithm (Theorem 1.2) addresses the need to play simple strategies. Players tend to favor simple strategies which are easier to learn and implement, even if these strategies might have slightly suboptimal payoffs [Sim82]. In addition, our algorithm is significantly faster in this case. We build on the idea of [GT15] to “smooth” an anonymous game by forcing all the players to randomize. We prove that the perturbed game is Lipschitz and therefore admits a pure Nash equilibrium, which corresponds to simple approximate equilibria of a specific form in the original game: Each player plays one strategy with probability for some small , and plays other strategies uniformly at random with probability . To prove that the perturbed game is Lipschitz, we make essential use of the recently established multivariate central limit theorem (CLT) in Daskalakis et al. [DDKT16] and Diakonikolas, Kane and Stewart [DKS16a] to show that if we add a little more noise (corresponding to ), the associated PMD is sufficiently close to a discrete Gaussian.
Lower Bounds.
When , we can show that there is an approximate equilibrium where the associated PMD has a variance at least in every direction. Unfortunately, the PMDs in the explicit quasipolynomialsize lower bounds given in [DDKT16, DKS16a] satisfy this property. Thus, we need a different approach to get a polynomialtime algorithm for or smaller.
In fact, we prove the following results, which states that even a slight improvement of our upper bound in Theorem 1.1 would imply an FPTAS for computing Nash equilibria in anonymous games. It is important to note that Theorem 1.3 applies to all algorithms, not only the ones that leverage the structure of PMDs.
Theorem 1.3.
For player strategy anonymous games with , if we can compute an approximate equilibrium in polynomial time for some constant , then there is an FPTAS ^{4}^{4}4A fully polynomialtime approximation scheme (FPTAS) is an algorithm that runs in time and returns an optimal solution, or in our context, returns an approximate Nash equilibrium. for computing (wellsupported) Nash equilibria of strategy anonymous games.
Remark. As observed in [DDKT16], because there is a quasipolynomial time algorithm for computing an approximate equilibrium in anonymous games, the problem cannot be PPADComplete unless PPAD QuasiPTIME. On the other hand, we do not know how to improve the quasipolynomialtime upper bounds of [DDKT16, DKS16a] when .
Recall that computing an approximate equilibrium of a twoplayer generalsum game (2NASH) for constant also admits a quasipolynomialtime algorithm [LMM03]. Very recently, Rubinstein [Rub16] showed that, assuming the exponential time hypothesis (ETH) for PPAD, for some sufficiently small universal constant , quasipolynomialtime is necessary to compute an approximate equilibrium of 2NASH. It is a plausible conjecture that quasipolynomialtime is also required for Nash equilibria in anonymous games, when for some constant . In particular, this would imply that there is no FPTAS for computing approximate Nash equilibria in anonymous games, and consequently the upper bound of Theorem 1.1 is essentially tight.
2 Notation and Background
Anonymous Games.
We study anonymous games with players labeled by , and common strategies labeled by for each player. The payoff of a player depends on her own strategy, and how many of her peers choose which strategy, but not on their identities. When player plays strategy , her payoffs are given by a function that maps the possible outcomes (partitions of all other players) to the interval , where .
Approximate Equilibria.
We denote by a distribution on the set . A mixed strategy is an element of , and a mixed strategy profile maps every player to her mixed strategy . We use to denote the strategies of players other than in .
A mixed strategy profile is an approximate Nash equilibrium for some iff
where is the partition formed by random samples (independently) drawn from according to the distributions . Note that given a mixed strategy profile , we can compute a player’s expected payoff to precision in time by straightforward dynamic programming, and hence throughout this paper we assume that we can compute players’ payoffs exactly given their mixed strategies.
Poisson Multinomial Distributions.
A Categorical Random Variable (CRV) is a vector random variable supported on the set of dimensional basis vectors . A CRV is maximal if is its most likely outcome (break ties by taking the smallest index ). A Poisson Multinomial Distribution of order , or an PMD, is a vector random variable of the form where the ’s are independent CRVs. The case of is usually referred to as Poisson Binomial Distribution (PBD).
Note that a mixed strategy profile of an player strategy anonymous game corresponds to the CRVs where . The expected payoff of player for playing pure strategy can also be written as .
Let be an PMD such that for and we denote , where . For , we define the parameter moments of to be . We refer to as the degree of the parameter moment .
Total Variation Distance and Covers.
The total variation distance between two distributions and supported on a finite domain is
If and are two random variables ranging over a finite set, their total variation distance is defined as the total variation distance between their distributions. For convenience, we will often blur the distinction between a random variable and its distribution.
Let be a metric space. Given , a subset is said to be a proper cover of with respect to the metric , if for every there exists some such that . In this work, we will be interested in constructing covers for highvariance PMDs under the total variation distance metric.
Multidimensional Fourier Transform.
For , we will denote . The (continuous) Fourier Transform of a function is the function defined as . For the case that is a probability mass function, we can equivalently write .
Let be an PMD with . To avoid clutter in the notation, we will sometimes use the symbol to denote the corresponding probability mass function. With this convention, we can write that .
3 Searching Fewer Moments: Proof of Theorem 1.1
In this section, we present a polynomialtime algorithm that, for player anonymous games with bounded number of strategies, computes an approximate equilibrium with for any constant . As a warm up, we start by describing the simpler setting of twostrategy anonymous games (). The main results of this section is Theorem 1.1 that applies to general strategy anonymous games for any constant .
At a high level, we first prove the existence of approximate Nash equilibria in which the corresponding PMDs have high variance and every player randomizes (Lemma 3.1). We then use our robust moment matching lemma (Lemma 3.4) to show that when two PMDs have high variances, the closeness in their constantdegree parameter moments implies their closeness in total variation distance. The fact that matching the constantdegree moments suffices allows us to construct a polynomialsize cover for set subset of all PMDs with large variance. We then iterate through this cover to find an approximate equilibrium (Algorithm 2).
Lemma 3.1.
For an player strategy anonymous game, there always exists an approximate equilibrium where every player plays each strategy with probability at least .
Proof.
Given an anonymous game , we smooth players’ utility functions by requiring every player to randomize. Fix , we define an perturbed game as follows. When a player plays some pure strategy in , we map it back to the original game as if she plays strategy with probability , and plays some other strategy uniformly at random (i.e., she plays with probability ). Her payoff in also accounts for such perturbation, and is defined to be her expected payoff given that all the players (including herself) would deviate to other strategies uniformly at random with probability .
Formally, let denote the CRV that takes value with probability , and takes value with probability for each . The payoff structure of is given by
where is an PMD that corresponds to the perturbed outcome of the partition of all other players.
Let denote any exact Nash equilibrium of . We can interpret this mixed strategy profile in equivalently as , where , where . We know that under each player has no incentive to deviate to the mixed strategies for all , therefore a player can gain at most by deviating to pure strategies in , so is an approximate equilibrium with for all , . ∎
Warmup: The Case of Strategies.
For twostrategy anonymous games (), if all the players put at least probability mass on both strategies, the resulting PBD is going to have variance at least . When for some constant , the variance is at least . We can now use the following lemma from [DKS16c], which states that if two PBDs and are close in the first few moments, then and are close in total variation distance. Note that without any assumption on the variance of the PBDs, we would need to check the first moments, but when the variance is , which is the case in our application, we only need the first constant number of moments to match.
Lemma 3.2 ([DKS16c]).
Let . Let and be PBDs with having parameters and , and having parameters and . Suppose that and let be a sufficiently large constant. Suppose furthermore that for and for all positive integers it holds
(1) 
Then .
Let . For Lemma 3.2 we have and . The difference in the moments of parameters of and in Equation (1) is bounded from above by , so whenever , the condition in Lemma 3.2 is automatically satisfied for sufficiently large because
So it is enough to search over the first moments when each player put probability at least on both strategies. The algorithm for finding such an approximate equilibrium uses moment search and dynamic programming, and is given for the case of general in the remainder of this section.
The General Case: Strategies.
We now present our algorithm for player anonymous games with strategies and prove Theorem 1.1. The intuition of the case carries over to the general case, but the details are more elaborate. First, we show (Claim 3.3) that there exists an approximate equilibrium whose corresponding PMD has variance in all directions orthogonal to the vector . Then, we prove (Lemma 3.4) that when two PMDs have such high variances, the closeness in their constantdegree parameter moments translates to their closeness in total variation distance. This structural result allows us to build a polynomialsize cover for all PMDs with high variance, which leads to a polynomialtime algorithm for computing approximate Nash equilibria (Algorithm 2).
We first prove that when all the players put probability at least on each strategy, the covariance matrix of the resulting PMD has relatively large eigenvalues, except the zero eigenvalue associated with the allone eigenvector. The allone eigenvector has eigenvalue zero because the coordinates of always sum to .
Claim 3.3.
Let be an PMD and let be the covariance matrix of . If for all and , then all eigenvalues of but one are at least .
Proof.
For any unit vector that is orthogonal to the allone vector , i.e., and , combining this with the assumption that we have,
Therefore,
So, for all eigenvectors orthogonal to , we have as claimed. ∎
The following robust momentmatching lemma provides a bound on how close degree moments need to be so that two PMDs are close to each other, under the assumption that (the anonymous game has many players and few strategies) and (every player randomizes). Lemma 3.4 allows us to build a polynomialsize cover for PMDs with high variance, and since we know that there is an approximate equilibrium with a high variance, we are guaranteed to find one in our cover.
Lemma 3.4.
Fix and let . Assume that for some sufficiently large constant in the exponent. Let , be PMDs with , where each , is an maximal PMD. Let and denote the covariance matrices of and respectively. If all eigenvalues of but one are at least , and for all the parameter moments of degree satisfy that
Then, we have that .
Lemma 3.4 follows from the next proposition whose proof is given in the following subsection.
Proposition 3.5.
Let . Let , be PMDs with , where each , is an maximal PMD. Let and denote the covariance matrices of and respectively, where all eigenvalues of and but one are at least , where . Suppose that for , , for all moments of degree with , we have that
(2) 
for a sufficiently large constant . Then .
The proof of Proposition 3.5 exploits the sparsity of the continuous Fourier transform of our PMDs, as well as careful Taylor approximations of the logarithm of the Fourier transform.
Proof of Lemma 3.4 from Proposition 3.5.
In order to guarantee that , Proposition 3.5 requires the following condition to hold for a sufficiently large constant :
(3) 
To prove the lemma, we use the fact that and essentially ignore all the terms except polynomials of . Formally, we first need to show that
under the assumption that , and . After substituting , observe that , so the term inside the th power is greater than 1. Thus, we only need to check this inequality for , which simplifies to and holds true.
In addition, we need to show that condition (3) holds automatically for . This follows from the fact that the difference in parameter moments is at most and ,
We recall some of the notations for readability before we describe the construction of our cover of highvariance PMDs. We use to denote a generic PMD for some , and we denote . We use to denote the set of maximal CRVs in , where a CRV is maximal if is its most likely outcome, and we use to denote the maximal component PMD of . For a vector , we define parameter moment of to be We refer to as the degree of . We use to denote the set of all CRVs whose probabilities are multiples of .
Lemma 3.4 states that the highdegree parameter moments match automatically, which allows us to impose an appropriate grid on the lowdegree moments to cover the set of highvariance PMDs. The size of this cover can be bounded by a simple counting argument: We have at most moments with degree at most , and we need to approximate these moments for each maximal component PMDs, so there are at most moments that we care about. We approximate these moments to precision , and the moments are at most , so the size of the cover is .
We define this grid on lowdegree moments formally in the following lemma. For every PMD with , we associate some data with , which is a vector of the approximate values of the lowdegree moments of .
Lemma 3.6.
Fix and , let . We define the data of a CRV as:
For , we define the data of an PMD to be the sum of the data of its CRVs: . The data satisfies two important properties:

(Representative) If for two PMDs or two PMDs, then .

(Extensible) For independent PMDs and , we have that .
Proof.
The extensible property follows directly from the definition of . To see the representative property, note that we round to the nearest integer multiple of , so the error in the moments of is at most . When we add up the data of an PMD or PMD, the error in the moments of each maximal component PMDs is at most . So if two PMDs and have the same data, their lowdegree moments differ by at most , and then by Lemma 3.4 we have . ∎
Our algorithm (Algorithm 2) for computing approximate equilibria is similar to the approach used in [DP15] and [DKS16a]. We start by constructing a polynomialsized ()cover of highvariance PMDs (Algorithm 1), and then iterate over this cover. For each element in the cover, we compute the set of bestresponses for each player, and then run the cover construction algorithm again, but this time we only allow each player to choose from her bestresponses. If we could reconstruct a PMD whose moments are close enough to the one we started with, then we have found an approximate Nash equilibrium.
Recall that a mixed strategy profile for a strategy anonymous game can be represented as a list of CRVs , where describes the mixed strategy of player . Recall that is an approximate Nash equilibrium if for each player we have for all , where is the distribution of the sum of other players strategies.
Lemma 3.7.
Fix an anonymous game with payoffs normalized to . Let and be two lists of CRVs. If is a best response to , and , then is a best response to . Moreover, if is a approximate equilibrium, and for all , then is a approximate equilibrium.
Proof.
Since for all and , we have that
Therefore, if , and player cannot deviate and gain more than when other players play , then she cannot gain more than when other players play instead of . The second claim combines the inequality above with the fact that, if player plays instead of and the mixed strategies of other players remain the same, her payoff changes by at most . Formally,
The next lemma states that by rounding an (/10)approximate equilibrium, we can obtain an approximate equilibrium where all the probabilities are integer multiples of .
Claim 3.8.
There is an approximate Nash equilibrium , such that for all and , the probabilities are multiples of , and also .
Proof.
We start with an approximate Nash equilibrium from Lemma 3.1 with , and then round the probabilities to integer multiples of . We construct from as follows: for every , we set to be rounded down to a multiple of and we set so the probabilities sum to 1. By triangle inequality of total variation distance, for every we have and . An application of Lemma 3.7 shows that is an approximate equilibrium. ∎
We are now ready to prove Theorem 1.1. We need to show that Algorithm 2 always outputs an approximate Nash equilibrium, and bound the running time.
Proof of Theorem 1.1.
We first show that the output is an approximate equilibrium. Recall that is the set of all CRVs whose probabilities are multiples of , and is the set of approximate bestresponses of player . When we put in , we checked that is a best response to , note that , so by Lemma 3.6 for all . By Lemma 3.7, is indeed an best response to for all .
Next we show the algorithm must always output something. By Claim 3.8 there exists an approximate equilibrium with each . If the algorithm does not terminate successfully first, it eventually considers . Because is an PMD, the algorithm can find some with , and by Lemma 3.6 we have for all . Since is an best response to , Lemma 3.7 yields that is a (3/5)best response to , so we would add each to . Then our cover construction algorithm is guaranteed to generate a set of data that includes , and Algorithm 2 would produce an output.
Finally, we bound the running time of Algorithm 2. Let denote the size of the cover for the highvariance PMDs. The cover can be constructed in time as we try to add one CRV from in each step. We iterate through the cover, and for each element in the cover, we need to find the subset of best responses for player , and then run the cover construction algorithm again using only the best responses . So the overall running time of the algorithm is . When both and are constants, the running time is polynomial in , as claimed in Theorem 1.1. ∎
3.1 Proof of Proposition 3.5
This subsection is devoted to the proof of Proposition 3.5. For two PMDs with variance at least in each direction, Proposition 3.5 gives a quantitative bound on how close degree moments need to be (as a function of , , and , but independent of ), in order for the two PMDs to be close in total variation distance.
The proof of Proposition 3.5 exploits the sparsity of the continuous Fourier transforms of our PMDs, as well as careful Taylor approximations of the logarithm of the Fourier transform. The fact that our PMDs have large variance enables us to take fewer lowdegree terms in the Taylor approximation. For technical reasons, we split our PMD as the sum of independent component PMDs, , where all the CRVs in the component PMD