Settling the Complexity of Computing Two-Player Nash Equilibria


Xi Chen (Department of Computer Science, Tsinghua University, Beijing, P.R. China) · Xiaotie Deng (Department of Computer Science, City University of Hong Kong, Hong Kong SAR, P.R. China) · Shang-Hua Teng (Department of Computer Science, Boston University, Boston, and Akamai Technologies Inc., Cambridge, MA, USA)

We settle a long-standing open question in algorithmic game theory. We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991.

This is the first of a series of results concerning the complexity of Nash equilibria. In particular, we prove the following theorems:

  • Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.

  • The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time.

Our results demonstrate that, even in the simplest form of non-cooperative games, equilibrium computation and approximation are polynomial-time equivalent to fixed point computation. Our results also have two broad complexity implications in mathematical economics and operations research:

  • Arrow-Debreu market equilibria are PPAD-hard to compute.

  • The P-Matrix Linear Complementarity Problem is computationally harder than convex programming unless every problem in PPAD is solvable in polynomial time.

1 Introduction

In 1944, Morgenstern and von Neumann [43] initiated the study of game theory and its applications to economic behavior. At the center of their study was von Neumann’s minimax equilibrium solution for two-player zero-sum games [56]. In a two-player zero-sum game, one player’s gain is equal to the loss of the other. They observed that any general n-player (non-zero-sum) game can be reduced to an (n + 1)-player zero-sum game. Their work went on to introduce the notion of cooperative games and the solution concept of stable sets.

In 1950, following the original spirit of Morgenstern and von Neumann’s work on two-player zero-sum games, Nash [45, 44] formulated a solution concept for non-cooperative games among multiple players. In a non-cooperative game, the zero-sum condition is relaxed and no communication or coalition among players is allowed. Building on the notion of mixed strategies of [56], the solution concept, now commonly referred to as the Nash equilibrium, captures the individual rationality of players at an equilibrium point. In a Nash equilibrium, each player’s strategy is a best response to the other players’ strategies. Nash proved that every n-player, finite, non-cooperative game has an equilibrium point. His original proof [45, 39] was based on Brouwer’s Fixed Point Theorem [7]. David Gale suggested the use of Kakutani’s Fixed Point Theorem [30] to simplify the proof. Mathematically, von Neumann’s Minimax Theorem for two-player zero-sum games can be proved by linear programming duality. In contrast, the fixed point approach to Nash’s Equilibrium Theorem seems to be necessary: even for two-player non-cooperative games, linear programming duality is no longer applicable.

Nash’s equilibrium concept has had a tremendous influence on economics, as well as on other social and natural science disciplines [27]. Nash’s approach to non-cooperative games has played an essential role in shaping mathematical economics, which considers agents with competing individual interests. A few years after Nash’s work, Arrow and Debreu [3], also applying fixed point theorems, proved a general existence theorem for market equilibria. Since then, various forms of equilibrium theorems have been established via fixed point theorems.

However, the existence proofs based on fixed point theorems do not usually lead to efficient algorithms for finding equilibria. In fact, in spite of many remarkable breakthroughs in algorithmic game theory and mathematical programming, answers to several fundamental questions about the computation of Nash and Arrow-Debreu equilibria remain elusive. The most notable open problem is that of deciding whether the problem of finding an equilibrium point in a two-player game is solvable in polynomial time.

In this paper, we settle the complexity of computing a two-player Nash equilibrium and answer two central questions regarding the approximation and smoothed complexity of this game theoretic problem. In the next few subsections, we will review previous work on the computation of Nash equilibria, state our main results, and discuss their extensions to the computation of market equilibria.

1.1 Finite-Step Equilibrium Algorithms

Since Nash and Arrow-Debreu’s pioneering work, great progress has been made in the effort to find constructive and algorithmic proofs of equilibrium theorems. The advances for equilibrium computation can be chronologically classified according to the following two periods:

  • Finite-step period: In this period, the main objective was to design equilibrium algorithms that terminate in a finite number of steps and to understand for which equilibrium problems finite-step algorithms do not exist.

  • Polynomial-time period: In this period, the main objective has been to develop polynomial-time algorithms for computing equilibria and to characterize the complexity of equilibrium computation.

The duality-based proof of the minimax theorem leads to a linear programming formulation of the problem of finding an equilibrium in a two-player zero-sum game. One can apply the simplex algorithm, in a finite number of steps in the Turing model (the simplex algorithm also terminates in a finite number of steps in various computational models involving real numbers, such as the Blum-Shub-Smale model [5]), to compute an equilibrium in a two-player zero-sum game with rational payoffs. A decade or so after Nash’s seminal work, Lemke and Howson [38] developed a path-following, simplex-like algorithm for finding a Nash equilibrium in a general two-player game. Like the simplex algorithm, their algorithm terminates in a finite number of steps for a two-player game with rational payoffs.
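To make the linear programming connection concrete, the following sketch (ours, not from the paper) solves a 2 × 2 zero-sum game via the closed form that the LP reduces to when neither player has a pure saddle point; the example matrices are arbitrary.

```python
from fractions import Fraction

def solve_2x2_zero_sum(A):
    """Optimal row strategy and game value for a 2x2 zero-sum game
    with payoff matrix A (row player maximizes), assuming there is
    no pure saddle point, so both players must mix."""
    a, b = A[0]
    c, d = A[1]
    D = a - b - c + d               # nonzero when there is no saddle point
    x1 = Fraction(d - c, D)         # probability of playing row 1
    v = Fraction(a * d - b * c, D)  # value of the game
    return (x1, 1 - x1), v

# Matching pennies: value 0, the row player mixes uniformly.
x, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
```

The exact rational output illustrates why zero-sum games with rational payoffs always have rational solutions of polynomial bit-length.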

The Lemke-Howson algorithm has been extended to non-cooperative games with more than two players [57]. However, due to Nash’s observation that there are rational three-player games all of whose equilibria are irrational, finite-step algorithms become harder to obtain for games with three or more players. For those multi-player games, no finite-step algorithm exists in the classical Turing model.

Similarly, some exchange economies do not have any rational Arrow-Debreu equilibria. The absence of a rational equilibrium underscores the continuous nature of equilibrium computation. Brouwer’s Fixed Point Theorem — that any continuous map from a convex compact body, such as a simplex or a hypercube, to itself has a fixed point — is inherently continuous. Mathematically, the continuous nature does not hinder the definition of search problems for finding equilibria and fixed points. But to measure the computational complexity of these continuous problems in the classical Turing model, some imprecision or inaccuracy must be introduced to ensure the existence of a solution with a finite description [51, 52, 46, 26, 21]. For example, one possible definition of an ε-approximate fixed point of a continuous map F is a point x in the convex body such that ‖F(x) − x‖ ≤ ε for a given ε > 0 [51].

In 1928, Sperner [53] discovered a discrete fixed point theorem that led to one of the most elegant proofs of Brouwer’s Fixed Point Theorem. Suppose that S is a d-dimensional simplex with vertices v_1, ..., v_{d+1}, and that T is a simplicial decomposition of S. Suppose a coloring assigns to each vertex of T a color from {1, ..., d + 1} such that, for every vertex u of T, the color of u is not i if the i-th component of the barycentric coordinate of u, in terms of (v_1, ..., v_{d+1}), is 0. Then, Sperner’s Lemma asserts that there exists a simplex cell in T that contains all d + 1 colors. This fully-colored simplex cell is often referred to as a panchromatic simplex or a Sperner simplex of T. Consider a Brouwer map F with Lipschitz constant L over the simplex S. Suppose further that the diameter of each simplex cell in T is at most δ. Then, one can define a color assignment from F such that each panchromatic simplex in T must have a vertex x satisfying ‖F(x) − x‖ = O((L + 1)δ). Thus, a panchromatic simplex of T can be viewed as an approximate, discrete fixed point of F.
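The one-dimensional case of this argument can be made concrete in a few lines: color each grid vertex by the sign of F(x) − x; a cell whose two endpoints receive different colors contains an approximate fixed point. A sketch (ours; the map cos and the grid size are arbitrary illustrative choices):

```python
import math

def panchromatic_cell(F, n):
    """Color the n+1 vertices of a grid on [0, 1] by sign(F(x) - x)
    and return the first cell carrying both colors.  For a Lipschitz
    map F with constant L, its endpoints x satisfy
    |F(x) - x| <= (L + 1) / n."""
    color = lambda x: F(x) >= x   # True/False play the roles of the two colors
    for i in range(n):
        a, b = i / n, (i + 1) / n
        if color(a) != color(b):
            return a, b
    return None  # cannot happen for F mapping [0, 1] into itself

a, b = panchromatic_cell(math.cos, 1000)   # the fixed point of cos is ~0.739
```

The boundary condition of Sperner's Lemma appears here as the fact that the two endpoints 0 and 1 are forced to receive different colors whenever F maps [0, 1] into itself, so a bichromatic cell always exists.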

Inspired by the Lemke-Howson algorithm, Scarf developed a path-following algorithm, using simplicial subdivision, for computing approximate fixed points [51] and competitive equilibrium prices [52]. The path-following method has also had extensive applications to mathematical programming and has since grown into an algorithm-design paradigm in optimization and equilibrium analysis. One can take a similar approximation approach to study the complexity of Nash equilibria, especially for games involving three or more players.

1.2 Computational Complexity of Nash Equilibria

Since the 1960s, the theory of computation has shifted its focus from whether problems can be solved on a computer to how efficiently problems can be solved on a computer. The field has gained maturity with rapid advances in algorithm design, algorithm analysis, and complexity theory. Problems are categorized into complexity classes, capturing the potential difficulty of decision, search, and optimization problems. The complexity classes P, RP, and BPP, and their search counterparts such as FP, have become the standard classes for characterizing computational problems that are tractable. (FP stands for Function Polynomial-Time. In this paper, as we only consider search problems, we will (ab)use P and RP to denote the classes of search problems that can be solved in polynomial time or in randomized polynomial time, respectively. We believe doing so will help more general readers.)

The desire to find fast and polynomial-time algorithms for computing equilibria has been greatly enhanced with the rise of the Internet [48]. The rise has created a surge of human activities that make computation, communication and optimization of participating agents accessible at microeconomic levels. Efficient computation is instrumental to support the basic operations, such as pricing, in this large scale on-line market [49]. Many new game and economic problems have been introduced, and in the meantime, classical game and economic problems have become the subjects for active complexity studies [48]. Algorithmic game theory has grown into a highly interdisciplinary field intersecting economics, mathematics, operations research, numerical analysis, and computer science.

In 1979, Khachiyan made a ground-breaking discovery that the ellipsoid algorithm can solve a linear program in polynomial time [34]. Shortly after, Karmarkar improved the complexity for solving linear programming with his path-following, interior-point algorithm [32]. His work initiated the implementation of theoretically-sound linear programming algorithms. Motivated by a grand challenge in Theory of Computing [16], Spielman and Teng [54] introduced a new algorithm analysis framework, smoothed analysis, based on perturbation theory, to provide rigorous complexity-theoretic justification for the good practical performance of the simplex algorithm. They proved that although almost all known simplex algorithms have exponential worst-case complexity [35], the smoothed complexity of the simplex algorithm with the shadow-vertex pivoting rule is polynomial. As a result of these developments in linear programming, equilibrium solutions of two-player zero-sum games can be found in polynomial time using the ellipsoid or interior-point algorithms and in smoothed polynomial time using the simplex algorithm.

However, no polynomial-time algorithm has been found for computing discrete fixed points or approximate fixed points, rendering the equilibrium proofs based on fixed point theorems non-constructive in the view of polynomial-time computability.

The difficulty of discrete fixed point computation is partially justified in the query model. In 1989, Hirsch, Papadimitriou, and Vavasis [26] proved an exponential lower bound on the number of function evaluations necessary to find a discrete fixed point, even in two dimensions, assuming algorithms only have a black-box access to the fixed point function. Their bound has recently been improved [8] and extended to the randomized query model [13] and to the quantum query model [23, 13].

Motivated by the pivoting structure used in the Lemke-Howson algorithm, Papadimitriou introduced the complexity class PPAD [46]. PPAD is an abbreviation for Polynomial Parity Argument in a Directed graph. He introduced several search problems concerning the computation of discrete fixed points. For example, he defined the problem Sperner to be the search problem of finding a Sperner simplex given a polynomial-sized circuit for assigning colors to a particular simplicial decomposition of a hypercube. Extending the model of [26], he also defined a search problem for computing approximate Brouwer fixed points. He proved that even in three dimensions, these fixed point problems are complete for the PPAD class. Recently, Chen and Deng [9] proved that the problem of finding a discrete fixed point in two dimensions is also complete for PPAD.

In [46], Papadimitriou also proved that Bimatrix, the problem of finding a Nash equilibrium in a two-player game with rational payoffs, is a member of PPAD. His proof can be extended to show that finding a (properly defined) approximate equilibrium in a non-cooperative game among three or more players is also in PPAD. Thus, if these problems are PPAD-complete, then the problem of finding an equilibrium is polynomial-time equivalent to the search problem of finding a discrete fixed point.

It is conceivable that Nash equilibria might be easier to compute than discrete fixed points. In fact, by taking advantage of the special structure of Nash’s normal form games, Lipton, Markakis, and Mehta [40] developed a sub-exponential time algorithm for finding an approximate Nash equilibrium. In their notion of an ε-approximate Nash equilibrium, for a positive parameter ε, each player’s strategy is at most an additive ε worse than the best response to the other players’ strategies. They proved that if all payoffs are in [0, 1], then an ε-approximate Nash equilibrium can be found in n^{O(log n/ε²)} time.
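The algorithmic content of their result is a brute-force search over k-uniform strategies, i.e., strategies uniform over a multiset of k actions; taking k = O(log n/ε²) yields the stated time bound. A minimal sketch of this search (ours, not the authors' code):

```python
from itertools import combinations_with_replacement

def lmm_search(A, B, k, eps):
    """Enumerate pairs of k-uniform strategies (uniform over a size-k
    multiset of actions) and return the first pair whose regrets are
    at most eps.  By Lipton-Markakis-Mehta, k = O(log n / eps^2)
    suffices when all payoffs lie in [0, 1]."""
    n = len(A)
    def payoff(M, x, y):
        return sum(M[i][j] * x[i] * y[j] for i in range(n) for j in range(n))
    def uniform(S):
        x = [0.0] * n
        for i in S:
            x[i] += 1.0 / k
        return x
    supports = list(combinations_with_replacement(range(n), k))
    for S in supports:
        x = uniform(S)
        for T in supports:
            y = uniform(T)
            # Regret = best-response payoff minus achieved payoff.
            reg1 = max(sum(A[i][j] * y[j] for j in range(n)) for i in range(n)) - payoff(A, x, y)
            reg2 = max(sum(B[i][j] * x[i] for i in range(n)) for j in range(n)) - payoff(B, x, y)
            if max(reg1, reg2) <= eps:
                return x, y
    return None

# Matching pennies rescaled to [0, 1] payoffs: the uniform pair has zero regret.
A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
x, y = lmm_search(A, B, k=2, eps=1e-9)
```

There are O(n^k) multisets per player, so the loop runs in n^{O(k)} time, which is sub-exponential for k = O(log n/ε²).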

In a complexity-theoretic breakthrough, Daskalakis, Goldberg and Papadimitriou [18] proved that the problem of computing a Nash equilibrium in a game among four or more players is complete for PPAD. To cope with the fact that equilibria may not be rational, they considered an approximation version of equilibria by allowing exponentially small errors. The complexity result was soon extended to the three-player game independently by Chen and Deng [10] and Daskalakis and Papadimitriou [20], with different proofs. The reduction of [18] has two steps: First, it reduces a PPAD-complete discrete fixed point problem, named 3-Dimensional Brouwer, to the problem of finding a Nash equilibrium in a degree-three graphical game [33]. Then, it reduces the graphical game to a four-player game, using a result of Goldberg and Papadimitriou [25]. This reduction cleverly encodes fixed points by Nash equilibria.

The results of [18, 10, 20] characterize the complexity of computing Nash equilibria in games with three or more players. They also show that the fixed point approach is necessary in proving Nash’s Equilibrium Theorem, at least for games among three or more players. However, these latest complexity advances on the three/four-player games have fallen short on the two-player game.

1.3 Computing Two-Player Nash Equilibria and Smoothed Complexity

There have been amazing parallels between discoveries concerning the two-player zero-sum game and the general two-player game. First, von Neumann proved the existence of an equilibrium for the zero-sum game, then Nash did the same for the general game. Both classes of games have rational equilibria when payoffs are rational. Second, more than a decade after von Neumann’s Minimax Theorem, Dantzig developed the simplex algorithm, which can find a solution of a two-player zero-sum game in a finite number of steps. A decade or so after Nash’s work, Lemke and Howson developed their finite-step algorithm for Bimatrix. Then, about a quarter century after their respective developments, both the simplex algorithm [35] and the Lemke-Howson algorithm [50] were shown to have exponential worst-case complexity.

A half century after von Neumann’s Minimax Theorem, Khachiyan proved that the ellipsoid algorithm can solve a linear program and hence can find a solution of a two-player zero-sum game with rational payoffs in polynomial time. Shortly after that, Borgwardt [6] showed that the simplex algorithm has polynomial average-case complexity. Then, Spielman and Teng [54] proved that the smoothed complexity of the simplex algorithm is polynomial. If history is of any guide, then a half century after Nash’s Equilibrium Theorem, one should be quite optimistic to prove the following two natural conjectures.

  • Polynomial 2-Nash Conjecture: There exists a (weakly) polynomial-time algorithm for Bimatrix.

  • Smoothed Lemke-Howson Conjecture: The smoothed complexity of the Lemke-Howson algorithm for Bimatrix is polynomial.

An upbeat attitude toward the first conjecture has been encouraged by the following two facts. First, unlike three-player games, every rational bimatrix game has a rational equilibrium. Second, a key technical step involving coloring the graphical games in the PPAD-hardness proofs for three/four-player games fails to extend to two-player games [18, 10, 20]. The Smoothed Lemke-Howson Conjecture was asked by a number of people [1]. Indeed, whether the smoothed analysis of the simplex algorithm can be extended to the Lemke-Howson algorithm [38] has been the question most frequently raised during talks on smoothed analysis. The conjecture is a special case of the following conjecture posed by Spielman and Teng [55] in a survey of smoothed analysis of algorithms.

  • Smoothed 2-Nash Conjecture: The smoothed complexity of Bimatrix is polynomial.

The Smoothed 2-Nash Conjecture was inspired by the result of Bárány, Vempala and Vetta [4] that an equilibrium of a random two-player game can be found in polynomial time.

1.4 Our Contributions

Despite much effort in the last half century, no significant progress has been made in characterizing the algorithmic complexity of finding a Nash equilibrium in a two-player game. Thus, Bimatrix, the most studied computational problem about Nash equilibria, stood out as the last open problem in equilibrium computation for normal form games. Papadimitriou [48] named it, along with Factoring, as one of the two “most concrete open problems” at the boundary of P. In fact, ever since Khachiyan’s discovery [34], Bimatrix has been on the frontier of natural problems possibly solvable in polynomial time. Now, it is also on the frontier of the hard problems, assuming PPAD is not contained in P.

In this paper, we settle the computational complexity of the two-player Nash equilibrium. We prove:

Theorem 1.1.

Bimatrix is PPAD-complete.

Our result demonstrates that, even in this simplest form of non-cooperative games, equilibrium computation is polynomial-time equivalent to discrete fixed point computation. In particular, we show that from each discrete Brouwer function f, we can build a two-player game G and a polynomial-time map from the Nash equilibria of G to the fixed points of f. Our proof complements Nash’s proof that for each two-player game G, there is a Brouwer function f and a map from the fixed points of f to the equilibrium points of G.

The success in proving the PPAD completeness of Bimatrix inspires us to attempt to disprove the Smoothed 2-Nash Conjecture. A connection between the smoothed complexity and approximation complexity of Nash equilibria ([55], Proposition 9.12) then leads us to prove the following result.

Theorem 1.2.

For any constant c > 0, the problem of computing a 1/n^c-approximate Nash equilibrium of an n × n two-player game is PPAD-complete.

This result enables us to establish the following fundamental theorem about the approximation of Nash equilibria. It also enables us to answer the question about the smoothed complexity of the Lemke-Howson algorithm and to disprove the Smoothed 2-Nash Conjecture, assuming PPAD is not contained in RP.

Theorem 1.3.

Bimatrix does not have a fully polynomial-time approximation scheme unless PPAD is contained in P.

Theorem 1.4.

Bimatrix is not in smoothed polynomial time unless PPAD is contained in RP.

Consequently, it is unlikely that the n^{O(log n/ε²)}-time algorithm of Lipton, Markakis, and Mehta [40], the fastest algorithm known today for finding an ε-approximate Nash equilibrium, can be improved to poly(n, 1/ε). Also, it is unlikely that the average-case polynomial-time result of [4] can be extended to the smoothed model.

Our advances in the computation, approximation, and smoothed analysis of two-player Nash equilibria are built on several novel techniques that might be interesting on their own. We introduce a new method for encoding boolean and arithmetic variables using the probability vectors of the mixed strategies. We then develop a set of perturbation techniques to simulate the boolean and arithmetic operations needed for fixed point computation using the equilibrium conditions of two-player games. These innovations enable us to bypass the graphical game model and derive a direct reduction from fixed point computation to Bimatrix. To study the approximation and smoothed complexity of the equilibrium problem, we introduce a new discrete fixed point problem on a high-dimensional grid graph with a constant side-length. We then show that it can host the embedding of the proof structure of any PPAD problem. This embedding result not only enriches the family of PPAD-complete discrete fixed point problems, but also provides a much needed trade-off between precision and dimension. We prove a key geometric lemma for finding a high-dimensional discrete fixed point, a new concept defined on a simplex inside a unit hypercube. This geometric lemma enables us to overcome the curse of dimensionality in reasoning about fixed points in high dimensions.

1.5 Implications and Impact

Because the two-player Nash equilibrium enjoys several structural properties that Nash equilibria with three or more players do not have, our result enables us to answer some other long-standing open questions in mathematical economics and operations research. In particular, we have derived the following two important corollaries.

Corollary 1.5.

Arrow-Debreu market equilibria are PPAD-hard to compute.

Corollary 1.6.

The P-Matrix Linear Complementarity Problem is computationally harder than convex programming, unless PPAD is contained in P, where a P-matrix is a square matrix with positive principal minors.

To prove the first corollary, we use a recent discovery of Ye [58] (see also [15]) on the connection between two-player Nash equilibria and Arrow-Debreu equilibria in two-group Leontief exchange economies. The second corollary concerns the linear complementarity problem, in which we are given a rational n-by-n matrix M and a rational n-place vector q, and are asked to find vectors y and z such that y = Mz + q, y, z ≥ 0, and y^T z = 0. Our result complements Megiddo’s observation [41] that if it is NP-hard to solve the P-matrix linear complementarity problem, then NP = coNP.
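While finding an LCP solution may be hard, verifying one is straightforward: a candidate pair (y, z) must satisfy y = Mz + q, non-negativity, and complementarity y^T z = 0. A minimal checker (our illustrative sketch, over exact integer data):

```python
def is_lcp_solution(M, q, y, z):
    """Check the LCP conditions: y = M z + q, y >= 0, z >= 0, y^T z = 0."""
    n = len(q)
    Mz_plus_q = [sum(M[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    return (all(y[i] == Mz_plus_q[i] for i in range(n))
            and all(v >= 0 for v in y + z)
            and sum(y[i] * z[i] for i in range(n)) == 0)

# The identity matrix is a P-matrix (every principal minor equals 1 > 0).
M = [[1, 0], [0, 1]]
q = [-1, 2]
ok = is_lcp_solution(M, q, y=[0, 2], z=[1, 0])
```

Complementarity forces, in each coordinate, at least one of y_i and z_i to vanish, which is the combinatorial structure exploited by Lemke-type pivoting algorithms.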

By applying a recent reduction of Abbott, Kane, and Valiant [2], our result also implies the following corollary.

Corollary 1.7.

Win-Lose Bimatrix is PPAD-complete, where, in a win-lose bimatrix game, each payoff entry is either 0 or 1.

We further refine our reduction to show that the Nash equilibria in sparse two-player games are hard to compute and hard to approximate in fully polynomial time.

We have also discovered several new structural properties about Nash equilibria. In particular, we prove an equivalence result about various notions of approximate Nash equilibria. We exploit these equivalences in the study of the complexity of finding an approximate Nash equilibrium and in the smoothed analysis of Bimatrix. Using them, we can also extend our result about approximate Nash equilibria as follows.

Theorem 1.8.

For any constant c > 0, the problem of finding the first n^c bits of an exact Nash equilibrium in a two-player game, even when the payoffs are integers of polynomial magnitude, is polynomial-time equivalent to Bimatrix.

Recently, Chen, Teng, and Valiant [14] extended our approximation complexity result to win-lose two-player games; Huang and Teng [28] extended both the smoothed complexity and the approximation results to the computation of Arrow-Debreu equilibria. Using the connection between Nash equilibria and Arrow-Debreu equilibria, our complexity result on sparse games can be extended to market equilibria in economies with sparse exchange structures [12].

1.6 Paper Organization

In Section 2, we review concepts in equilibrium theory. We also prove an important equivalence between various notions of approximate Nash equilibria. In Section 3, we recall the complexity class PPAD, the smoothed analysis framework, and the concept of polynomial-time reduction among search problems. In Section 4, we introduce two concepts: high-dimensional discrete Brouwer fixed points and generalized circuits, followed by the definitions of two search problems based on these concepts. In Section 5, we state our main results and also provide an outline of our proofs. In Section 6, we show that one can simulate generalized circuits with two-player Nash equilibria. In Section 7, we prove a PPAD-completeness result for a large family of high-dimensional fixed point search problems. In Section 8, we complete our proof by showing that discrete fixed points can be modeled by generalized circuits. In Section 9, we discuss some extensions of our work and present several open questions and conjectures motivated by this research. In particular, we will show that sparse Bimatrix does not have a fully polynomial-time approximation scheme unless PPAD is contained in P. Finally, in Section 10, we thank many wonderful people who helped us in this work.

This paper combines the papers “Settling the Complexity of 2-Player Nash-Equilibrium”, by Xi Chen and Xiaotie Deng, and “Computing Nash Equilibria: Approximation and Smoothed Complexity”, by the three of us. The extended abstracts of both papers appeared in the Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science. The result that Bimatrix is PPAD-complete is from the first paper. We also include the main result from the paper “Sparse Games are Hard”, by the three of us, presented at the 2nd International Workshop on Internet and Network Economics.

1.7 Notation

We will use bold lower-case Roman letters such as x, y to denote vectors. Whenever a vector, say x, is present, its components will be denoted by lower-case Roman letters with subscripts, such as x_i. Matrices are denoted by bold upper-case Roman letters such as A and B, and scalars are usually denoted by lower-case Roman letters, but sometimes by upper-case Roman letters. The (i, j)-th entry of a matrix A is denoted by A_{i,j}. Depending on the context, we may use A_i to denote the i-th row or the i-th column of A.

We now enumerate some other notations that are used in this paper. We will let Z_+^n denote the set of n-dimensional vectors with positive integer entries; x · y denote the dot-product of two vectors of the same dimension; and e_i denote the unit vector whose i-th entry is equal to 1 and all other entries are 0. Finally, for a positive integer n, by [n] we mean the set {1, 2, ..., n}.

2 Two-Player Nash Equilibria

A two-player game [45, 37, 38] is a non-cooperative game between two players. When the first player has m choices of actions and the second player has n choices of actions, the game, in its normal form, can be specified by two m × n matrices A and B. If the first player chooses the i-th action and the second player chooses the j-th action, then their payoffs are A_{i,j} and B_{i,j}, respectively. Thus, a two-player game is also often referred to as a bimatrix game. A mixed strategy of a player is a probability distribution over his or her choices. Nash’s Equilibrium Theorem [45, 44], when specialized to bimatrix games, asserts that every two-player game has an equilibrium point, i.e., a profile of mixed strategies such that neither player can gain by changing his or her strategy unilaterally. The zero-sum two-player game [43] is a special case of the bimatrix game, in which B = −A.

Let Δ_n denote the set of all probability vectors in R^n, i.e., non-negative n-place vectors whose entries sum to 1. Then, a profile of mixed strategies can be expressed by two column vectors x ∈ Δ_m and y ∈ Δ_n.

Mathematically, a Nash equilibrium of a bimatrix game (A, B) is a pair of mixed strategies (x*, y*) such that

  (x*)^T A y* ≥ x^T A y*  and  (x*)^T B y* ≥ (x*)^T B y,  for all mixed strategies x and y.

Computationally, one might settle for an approximate Nash equilibrium. Several versions of approximate equilibrium points have been defined in the literature. The following are the two most popular ones.

For a positive parameter ε, an ε-approximate Nash equilibrium of a bimatrix game (A, B) is a pair (x*, y*) such that

  (x*)^T A y* ≥ x^T A y* − ε  and  (x*)^T B y* ≥ (x*)^T B y − ε,  for all mixed strategies x and y.

An ε-relatively-approximate Nash equilibrium of (A, B) is a pair (x*, y*) such that

  (x*)^T A y* ≥ (1 − ε) x^T A y*  and  (x*)^T B y* ≥ (1 − ε) (x*)^T B y,  for all mixed strategies x and y.

Nash equilibria of a bimatrix game are invariant under positive scaling: the bimatrix game (cA, dB) has the same set of Nash equilibria as (A, B) for any c, d > 0. They are also invariant under shifting: for any constants c and d, the game obtained by adding c to every entry of A and d to every entry of B has the same set of Nash equilibria as (A, B). It is easy to verify that ε-approximate Nash equilibria are also invariant under shifting. However, each ε-approximate Nash equilibrium of (A, B) becomes a (cε)-approximate Nash equilibrium of the bimatrix game (cA, cB) for c > 0. Meanwhile, ε-relatively-approximate Nash equilibria are invariant under positive scaling, but may not be invariant under shifting.
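Both additive notions can be checked mechanically for a given strategy profile by computing each player's regret, the gap between the achieved payoff and a best response; the scaling behavior of additive approximation is then visible directly. A sketch (ours; the 2 × 2 matrices are arbitrary examples):

```python
def regrets(A, B, x, y):
    """Additive regrets of the two players at the profile (x, y): how
    much each could gain by a unilateral best response.  The pair is
    an eps-approximate Nash equilibrium iff both regrets are <= eps."""
    n, m = len(A), len(A[0])
    Ay = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
    xB = [sum(B[i][j] * x[i] for i in range(n)) for j in range(m)]
    p1 = sum(x[i] * Ay[i] for i in range(n))   # player 1's payoff
    p2 = sum(y[j] * xB[j] for j in range(m))   # player 2's payoff
    return max(Ay) - p1, max(xB) - p2

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
r = regrets(A, B, [0.5, 0.5], [0.5, 0.5])    # exact equilibrium: zero regrets
r_pure = regrets(A, B, [1, 0], [1, 0])       # player 2's regret is 1
c = 3                                        # scaling both matrices by c
r_scaled = regrets([[c * v for v in row] for row in A],
                   [[c * v for v in row] for row in B], [1, 0], [1, 0])
```

Scaling both payoff matrices by c multiplies every regret by c, which is exactly why additive approximation is only meaningful for normalized games.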

The notion of the ε-approximate Nash equilibrium is defined in the additive fashion. To study its complexity, it is important to consider bimatrix games with normalized matrices, in which the absolute value of each entry is bounded, for example, by 1. Earlier work on this subject by Lipton, Markakis, and Mehta [40] used a similar normalization. Let R^{m×n}_{[a,b]} denote the set of m × n matrices with real entries between a and b. In this paper, we say a bimatrix game (A, B) is normalized if A, B ∈ R^{m×n}_{[−1,1]} and is positively normalized if A, B ∈ R^{m×n}_{[0,1]}.

Proposition 2.1.

In a normalized two-player game (A, B), every ε-relatively-approximate Nash equilibrium is also an ε-approximate Nash equilibrium.

To define our main search problems of computing and approximating a two-player Nash equilibrium, we need to first define the input models. The most general input model is the real model, in which a bimatrix game is specified by two real matrices (A, B). In the rational model, each entry of the payoff matrices is given by the ratio of two integers. The input size is then the total number of bits describing the payoff matrices. Clearly, by multiplying out the common denominators in a payoff matrix and using the fact that two-player Nash equilibria are invariant under positive scaling, we can transform a rational bimatrix game into an integer bimatrix game. Moreover, the total number of bits in this game with integer payoffs is within a polynomial factor of the input size of its rational counterpart. In fact, Abbott, Kane, and Valiant [2] go one step further to show that from every bimatrix game with integer payoffs, one can construct a “homomorphic” bimatrix game with 0-1 payoffs whose size is within a polynomial factor of the input size of the original game.
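Positive normalization itself is a simple equilibrium-preserving preprocessing step: shift each payoff matrix so its minimum entry is 0, then scale so its maximum entry is 1. A sketch (ours; it assumes the matrix is not constant):

```python
def positively_normalize(M):
    """Affinely map the entries of a payoff matrix into [0, 1].  By the
    invariance of Nash equilibria under shifting and positive scaling,
    the transformed game has the same equilibria as the original
    (assuming M has at least two distinct entries)."""
    lo = min(min(row) for row in M)
    hi = max(max(row) for row in M)
    return [[(v - lo) / (hi - lo) for v in row] for row in M]

A = positively_normalize([[1, -1], [-1, 1]])  # -> [[1.0, 0.0], [0.0, 1.0]]
```

Each matrix can be normalized independently, since shifting and positive scaling may use different constants for the two players.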

It is well known that every rational bimatrix game has a rational Nash equilibrium. We may verify this fact as follows. Suppose (A, B) is a rational two-player game and (x*, y*) is one of its Nash equilibria. Let Row = { i : x*_i > 0 } and Col = { j : y*_j > 0 }. Let a_i and b_j denote the i-th row of A and the j-th column of B, respectively. Then, by the condition of the Nash equilibrium, (x*, y*) is a feasible solution to the following linear program over variables (x, y, u, v):

  a_i y = u for all i in Row, and a_i y ≤ u for all i;
  x^T b_j = v for all j in Col, and x^T b_j ≤ v for all j;
  x_i = 0 for all i not in Row, and y_j = 0 for all j not in Col;
  sum_i x_i = 1, sum_j y_j = 1, and x, y ≥ 0.

In fact, any solution to this linear program is a Nash equilibrium of . Therefore, has at least one rational equilibrium point such that the total number of bits describing this equilibrium is within a polynomial factor of the input size of . By enumerating all possible row supports and column supports and applying the linear program above, we can find a Nash equilibrium in the bimatrix game . This exhaustive-search algorithms takes time where is the input size of the game, and and are, respectively, the number of rows and the number of columns.
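The support-enumeration procedure described above can be sketched as follows. This is an illustrative implementation (the function names are our own): it restricts attention to equal-size supports, which suffices for nondegenerate games, and it replaces the general linear program with the indifference equations it induces, solved in exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def solve_linear(rows, rhs):
    """Gaussian elimination over exact Fractions; solve rows * x = rhs.
    Returns one solution, or None if the system is inconsistent."""
    m, n = len(rows), len(rows[0])
    aug = [list(rows[i]) + [rhs[i]] for i in range(m)]
    piv_cols, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if aug[i][c] != 0), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        inv = aug[r][c]
        aug[r] = [v / inv for v in aug[r]]
        for i in range(m):
            if i != r and aug[i][c] != 0:
                f = aug[i][c]
                aug[i] = [vi - f * vr for vi, vr in zip(aug[i], aug[r])]
        piv_cols.append(c)
        r += 1
        if r == m:
            break
    if any(all(v == 0 for v in aug[i][:-1]) and aug[i][-1] != 0 for i in range(m)):
        return None
    x = [Fraction(0)] * n
    for i, c in enumerate(piv_cols):
        x[c] = aug[i][-1]
    return x

def support_enumeration(A, B):
    """Enumerate equal-size supports; for each pair, solve the indifference
    equations exactly and verify the best-response conditions."""
    A = [[Fraction(v) for v in row] for row in A]
    B = [[Fraction(v) for v in row] for row in B]
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for Sx in combinations(range(m), k):
            for Sy in combinations(range(n), k):
                # y must equalize the rows in Sx and sum to 1.
                rows = [[A[Sx[0]][j] - A[i][j] for j in Sy] for i in Sx[1:]]
                rows.append([Fraction(1)] * k)
                ys = solve_linear(rows, [Fraction(0)] * (k - 1) + [Fraction(1)])
                # x must equalize the columns in Sy and sum to 1.
                cols = [[B[i][Sy[0]] - B[i][j] for i in Sx] for j in Sy[1:]]
                cols.append([Fraction(1)] * k)
                xs = solve_linear(cols, [Fraction(0)] * (k - 1) + [Fraction(1)])
                if ys is None or xs is None or min(ys) < 0 or min(xs) < 0:
                    continue
                x, y = [Fraction(0)] * m, [Fraction(0)] * n
                for idx, i in enumerate(Sx):
                    x[i] = xs[idx]
                for idx, j in enumerate(Sy):
                    y[j] = ys[idx]
                u = sum(A[Sx[0]][j] * y[j] for j in range(n))
                v = sum(x[i] * B[i][Sy[0]] for i in range(m))
                if all(sum(A[i][j] * y[j] for j in range(n)) <= u for i in range(m)) \
                   and all(sum(x[i] * B[i][j] for i in range(m)) <= v for j in range(n)):
                    return x, y
    return None
```

On the matching-pennies game, for instance, this returns the exact rational equilibrium (1/2, 1/2) for each player.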

In this paper, we use Bimatrix to denote the problem of finding a Nash equilibrium in a rational bimatrix game. Without loss of generality, we make two assumptions about our search problem: all input bimatrix games are positively normalized, and both players have the same number of choices of actions. Thus, two important parameters associated with each instance of Bimatrix are n, the number of actions, and L, the total number of bits in the description of the game. Bimatrix is in P if there exists an algorithm for Bimatrix with running time poly(n, L). As a matter of fact, for the two-player games that we will design in our complexity studies, L is bounded by a polynomial in n.

We also consider two families of approximation problems for two-player Nash equilibria. For a positive constant c,

  • let Exp-Bimatrix_c denote the following search problem: Given a rational and positively normalized bimatrix game (A, B), compute a 2^{−cn}-approximate Nash equilibrium of (A, B), if A and B are n × n matrices;

  • let Poly-Bimatrix_c denote the following search problem: Given a rational and positively normalized bimatrix game (A, B), compute an n^{−c}-approximate Nash equilibrium of (A, B), if A and B are n × n matrices.

In our analysis, we will use an alternative notion of approximate Nash equilibria introduced in [18], originally called ε-Nash equilibria. In order to avoid confusion with the more commonly used ε-approximate Nash equilibria, we will refer to this alternative approximation as the ε-well-supported Nash equilibrium. For a bimatrix game (A, B), let a_i and b_j denote the i-th row of A and the j-th column of B, respectively. In a profile of mixed strategies (x, y), the expected payoff of the first player when choosing the i-th row is a_i y, and the expected payoff of the second player when choosing the j-th column is xᵀb_j.

For a positive parameter ε, a pair of strategies (x, y) is an ε-well-supported Nash equilibrium of (A, B) if the following holds: for all pairs of rows i and k, a_i y > a_k y + ε implies x_k = 0, and for all pairs of columns j and k, xᵀb_j > xᵀb_k + ε implies y_k = 0.

A Nash equilibrium is both a 0-well-supported Nash equilibrium and a 0-approximate Nash equilibrium. The following lemma, a key lemma in our complexity study of equilibrium approximation, shows that approximate Nash equilibria and well-supported Nash equilibria are polynomially related. This polynomial relation allows us to focus our attention on pairwise approximation conditions. Thus, we can argue locally about certain properties of the bimatrix game in our analysis.
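Both additive notions can be checked directly from their definitions. The following checkers (with illustrative names of our own) take payoff matrices as lists of lists:

```python
def is_approx_ne(A, B, x, y, eps):
    """eps-approximate Nash: no pure deviation improves either player's
    expected payoff by more than eps."""
    m, n = len(A), len(A[0])
    u = sum(x[i] * A[i][j] * y[j] for i in range(m) for j in range(n))
    v = sum(x[i] * B[i][j] * y[j] for i in range(m) for j in range(n))
    row_best = max(sum(A[i][j] * y[j] for j in range(n)) for i in range(m))
    col_best = max(sum(x[i] * B[i][j] for i in range(m)) for j in range(n))
    return u >= row_best - eps and v >= col_best - eps

def is_well_supported(A, B, x, y, eps):
    """eps-well-supported Nash: every strategy played with positive
    probability is within eps of a best response."""
    m, n = len(A), len(A[0])
    row_pay = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
    col_pay = [sum(x[i] * B[i][j] for i in range(m)) for j in range(n)]
    return all(x[i] == 0 or row_pay[i] >= max(row_pay) - eps for i in range(m)) \
       and all(y[j] == 0 or col_pay[j] >= max(col_pay) - eps for j in range(n))
```

Note the difference: the approximate condition bounds the players' aggregate regret, while the well-supported condition constrains each pure strategy in the support individually.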

Lemma 2.2 (Polynomial Equivalence).

In a bimatrix game (A, B) with A, B ∈ R^{n×n}_{[0:1]}, for any 0 < ε ≤ 1,

  1. each ε-well-supported Nash equilibrium is also an ε-approximate Nash equilibrium; and

  2. from any (ε²/8)-approximate Nash equilibrium (x, y), one can find in polynomial time an ε-well-supported Nash equilibrium (x′, y′).


The first statement follows from the definitions. To prove the second statement, note that because (x, y) is an (ε²/8)-approximate Nash equilibrium, we have

  xᵀAy ≥ a_k y − ε²/8  and  xᵀBy ≥ xᵀb_k − ε²/8,  for all k.

Recall that a_i denotes the i-th row of A and b_j denotes the j-th column of B. We use S₁ to denote the set of indices i such that a_i y < a_k y − ε/2 for some k. Let k* be an index such that a_{k*} y = max_k a_k y. Now, by changing every x_i with i ∈ S₁ to 0 and changing x_{k*} to x_{k*} + Σ_{i∈S₁} x_i, we can increase the first player's profit by at least (ε/2) Σ_{i∈S₁} x_i; since no deviation gains more than ε²/8, this implies Σ_{i∈S₁} x_i ≤ ε/4. Similarly, we define S₂ for the column player and have Σ_{j∈S₂} y_j ≤ ε/4.

We now set all these x_i, i ∈ S₁, and y_j, j ∈ S₂, to zero, and uniformly increase the probabilities of the other strategies to obtain a new pair of mixed strategies (x′, y′).

Note that for every i, |a_i y′ − a_i y| ≤ ε/4, because ‖y′ − y‖₁ ≤ ε/2, the entries of y′ − y sum to 0, and we assume the value of each entry in A is between 0 and 1. Therefore, for every pair (i, k), the change in the gap a_k y − a_i y is no more than ε/2. Thus, any row i that is beaten by some row k by a gap of ε under (x′, y′) was already beaten by a gap of ε/2 under (x, y), and so is already set to zero in x′; the same argument applies to the columns. Hence (x′, y′) is an ε-well-supported Nash equilibrium. ∎
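The rounding procedure in this proof can be sketched as follows, using the constants from our reading of the argument: drop every pure strategy beaten by a gap of ε/2, then renormalize.

```python
def to_well_supported(A, B, x, y, eps):
    """Round an (eps^2/8)-approximate equilibrium toward an eps-well-supported
    one: zero out strategies beaten by eps/2, then renormalize."""
    m, n = len(A), len(A[0])
    row_pay = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
    col_pay = [sum(x[i] * B[i][j] for i in range(m)) for j in range(n)]
    keep_x = [x[i] if row_pay[i] >= max(row_pay) - eps / 2 else 0.0
              for i in range(m)]
    keep_y = [y[j] if col_pay[j] >= max(col_pay) - eps / 2 else 0.0
              for j in range(n)]
    return ([v / sum(keep_x) for v in keep_x],
            [v / sum(keep_y) for v in keep_y])
```

For example, in a game where the second row is strictly dominated, a profile that puts a small residual probability on that row is rounded to one that avoids it entirely.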

We conclude this section by pointing out that there are other natural notions of approximation for equilibrium points. In addition to the rational representation of a rational equilibrium, one can use binary representations to specify the entries of an equilibrium. As each entry in an equilibrium is a number t between 0 and 1, we can specify it using its binary representation 0.b₁b₂⋯b_k⋯, where each b_i ∈ {0, 1}. Some rational numbers may not have a finite binary representation; usually, we round off the numbers to store their finite approximations. The first k bits give us a k-bit approximation of t.

For a positive integer k, we will use k-Bit-Bimatrix to denote the search problem of computing the first k bits of the entries of a Nash equilibrium in a rational bimatrix game. The following proposition relates k-Bit-Bimatrix with Poly-Bimatrix.

Proposition 2.3.

Suppose (x, y) is a Nash equilibrium of a positively normalized two-player game (A, B) with n rows and n columns. For a positive integer k, let (x̂, ŷ) be the k-bit approximation of (x, y). Let x′ = x̂ / Σᵢ x̂ᵢ and y′ = ŷ / Σⱼ ŷⱼ. Then, (x′, y′) is a (9n/2^k)-approximate Nash equilibrium of (A, B).


A similar proposition is stated and proved in [14].

Let ε = 9n/2^k. Suppose (x′, y′) is not an ε-approximate Nash equilibrium. Without loss of generality, assume there exists x* such that x*ᵀAy′ > x′ᵀAy′ + ε. We have

  x*ᵀAy ≥ x*ᵀAy′ − 3n/2^k > x′ᵀAy′ + ε − 3n/2^k ≥ (xᵀAy − 6n/2^k) + ε − 3n/2^k = xᵀAy,

which contradicts our assumption that (x, y) is a Nash equilibrium. To see the first inequality, note that since the game is positively normalized, every entry of A is between 0 and 1. The inequality then follows from the fact that ‖y − y′‖₁ ≤ 3n/2^k, which holds because |y_j − ŷ_j| ≤ 2^{−k} for all j and Σⱼ ŷⱼ ≥ 1 − n/2^k. The other inequalities can be proved similarly. ∎
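The k-bit truncation and renormalization used in this proposition can be sketched as:

```python
import math

def k_bit_profile(x, y, k):
    """Truncate each probability to its first k binary digits, then
    renormalize so that each strategy vector sums to 1."""
    trunc = lambda t: math.floor(t * 2 ** k) / 2 ** k
    xt = [trunc(v) for v in x]
    yt = [trunc(v) for v in y]
    return [v / sum(xt) for v in xt], [v / sum(yt) for v in yt]
```

The per-entry truncation error is at most 2^{−k}, and renormalization spreads an additional error of at most n/2^{k} across the entries, which is how the additive approximation bound in the proposition arises.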

3 Complexity and Algorithm Analysis

In this section, we review the complexity class PPAD and the concept of polynomial-time reduction among search problems. We then define the perturbation models in the smoothed analysis of Bimatrix and show that if the smoothed complexity of Bimatrix is polynomial, then we can compute an ε-approximate Nash equilibrium of a bimatrix game in randomized poly(n, 1/ε) time.

3.1 PPAD and Polynomial-Time Reduction Among Search Problems

A binary relation R ⊆ {0,1}* × {0,1}* is polynomially balanced if there exist constants c and k such that for all pairs (x, y) ∈ R, |y| ≤ c|x|^k, where |x| denotes the length of string x. It is polynomial-time computable if for each pair (x, y), one can decide whether (x, y) ∈ R in time polynomial in |x| + |y|. One can define the NP search problem Search_R specified by R as: Given x, return a y satisfying (x, y) ∈ R, if such a y exists; otherwise, return a special string "no".

A relation R is total if for every string x, there exists y such that (x, y) ∈ R. Following Megiddo and Papadimitriou [42], let TFNP denote the class of all NP search problems specified by total relations. A search problem Search_{R₁} is polynomial-time reducible to a problem Search_{R₂} if there exists a pair of polynomial-time computable functions (f, g) such that for every input x of Search_{R₁}, if y satisfies (f(x), y) ∈ R₂, then (x, g(y)) ∈ R₁. Search problems Search_{R₁} and Search_{R₂} are polynomial-time equivalent if Search_{R₂} is also reducible to Search_{R₁}.
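The (f, g) pattern of a search-problem reduction can be illustrated on a toy pair of problems; this example is ours, purely for illustration.

```python
def reduce_search(f, g, solve_r2):
    """The (f, g) reduction pattern: map an instance of problem R1 into
    problem R2 with f, solve it there, and map the witness back with g."""
    return lambda x: g(solve_r2(f(x)))

# Toy example: reduce "find an index of the minimum" to
# "find an index of the maximum" by negating the list.
argmax = lambda lst: max(range(len(lst)), key=lambda i: lst[i])
argmin = reduce_search(lambda lst: [-v for v in lst], lambda y: y, argmax)
```

Here f negates the instance, the R2-solver returns a witness index, and g (the identity) maps it back; any valid witness for the transformed instance is a valid witness for the original one, which is exactly the correctness condition in the definition above.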

The complexity class PPAD [47] is a subclass of TFNP, containing all search problems polynomial-time reducible to the following problem called End-of-Line:

Definition 3.1 (End-of-Line).

The input instance of End-of-Line is a pair (M, 0ⁿ), where M is a circuit of size polynomial in n that defines a function f satisfying the following conditions:

  • for every v ∈ {0,1}ⁿ, f(v) is an ordered pair (u₁, u₂) where u₁, u₂ ∈ {0,1}ⁿ ∪ {"no"};

  • f(0ⁿ) = ("no", 1ⁿ), and the first component of f(1ⁿ) is 0ⁿ.

This instance defines a directed graph G = (V, E) with V = {0,1}ⁿ and (u, v) ∈ E if and only if v is the second component of f(u) and u is the first component of f(v).

The output of this problem is an end vertex other than 0ⁿ, where a vertex of G is an end vertex if the sum of its in-degree and out-degree is equal to one.

Note that in graph G, both the in-degree and the out-degree of each vertex are at most 1. Thus, the edges of G form a collection of directed paths and directed cycles. Because 0ⁿ has in-degree 0 and out-degree 1, it is an end vertex in G, so G must have at least one directed path. Hence, G has another end vertex, and End-of-Line is a member of TFNP.

In fact, G has an odd number of end vertices other than 0ⁿ. By evaluating the polynomial-sized circuit M on an input v, we can access the predecessor and the successor of v in G.
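A minimal sketch of the path-following argument, with the circuit modeled as an ordinary Python function on binary strings (the names and the encoding are our own):

```python
def end_of_line(f, n):
    """Follow the directed path that starts at the all-zero vertex.
    f(v) returns a (predecessor, successor) pair of candidates, with 'no'
    meaning none; an edge (u, w) exists iff w is the successor of u AND
    u is the predecessor of w."""
    v = '0' * n
    while True:
        _, succ = f(v)
        if succ == 'no' or f(succ)[0] != v:
            return v  # out-degree 0: an end vertex at the end of the path
        v = succ
```

Of course, this walk may take exponentially many steps in n; the point of the definition is only that a solution is guaranteed to exist, not that it is easy to find.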

Many important problems, such as the search versions of Brouwer’s Fixed Point Theorem, Kakutani’s Fixed Point Theorem, Smith’s Theorem, and Borsuk-Ulam Theorem, have been shown to be in the class PPAD [46].

Bimatrix is also in PPAD [46]. As a corollary, for every positive constant c, Poly-Bimatrix and Exp-Bimatrix are in PPAD. However, it is not clear whether k-Bit-Bimatrix, for a positive integer k, is in PPAD.

3.2 Smoothed Models of Bimatrix Games

In the smoothed analysis of the bimatrix game, we consider perturbed games in which each entry of the payoff matrices is subject to a small and independent random perturbation. For a pair of normalized matrices Ā and B̄, in the smoothed model, the input instance is then defined by (A, B) = (Ā + ΔA, B̄ + ΔB), where ΔA and ΔB are, respectively, independent perturbations with magnitude σ. (For simplicity of presentation, in this subsection we model the entries of payoff matrices and perturbations by real numbers. To connect with the complexity result of the previous section, where entries of matrices are in finite representations, one can use Equations (16) and (17) in the proof of Lemma 3.2 (see Appendix A) to define a discrete version of the uniform and Gaussian perturbations and state and prove the same result.)

There are several possible models of perturbations for Ā and B̄ with magnitude σ [55]. The two common perturbation models are the uniform perturbation and the Gaussian perturbation.

In the uniform perturbation with magnitude σ, the entries of ΔA and ΔB are chosen uniformly and independently from the interval [−σ, σ]. In the Gaussian perturbation with variance σ², the entries of ΔA and ΔB are chosen independently with density

  μ(t) = (1/(√(2π) σ)) e^{−t²/(2σ²)}.

We refer to these perturbations as σ-uniform and σ-Gaussian perturbations, respectively.
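The two perturbation models can be sketched with Python's random module:

```python
import random

def perturb_uniform(M, sigma, rng):
    """sigma-uniform perturbation: add independent noise drawn uniformly
    from [-sigma, sigma] to every payoff entry."""
    return [[a + rng.uniform(-sigma, sigma) for a in row] for row in M]

def perturb_gaussian(M, sigma, rng):
    """sigma-Gaussian perturbation: add independent N(0, sigma^2) noise
    to every payoff entry."""
    return [[a + rng.gauss(0.0, sigma) for a in row] for row in M]
```

An adversary fixes the base matrices, and nature applies one of these perturbations; the smoothed complexity defined next is the worst case over base matrices of the expected running time over the noise.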

The smoothed complexity of an algorithm J for Bimatrix is defined as follows: Let T_J(A, B) be the complexity of J for finding a Nash equilibrium in a bimatrix game (A, B). Then, the smoothed complexity of J under perturbations N_σ of magnitude σ is

  Smoothed_J(n, σ) = max_{Ā, B̄} E_{A ← N_σ(Ā), B ← N_σ(B̄)} [ T_J(A, B) ],

where we use A ← N_σ(Ā) to denote that A is a perturbation of Ā according to N_σ.

An algorithm J has a polynomial smoothed time complexity [55] if there exist positive constants c, k₁, and k₂ such that for all 0 < σ < 1 and for all positive integers n,

  Smoothed_J(n, σ) ≤ c · n^{k₁} · σ^{−k₂}.

Bimatrix is in smoothed polynomial time if there exists an algorithm with polynomial smoothed time complexity for computing a two-player Nash equilibrium.

The following lemma shows that if the smoothed complexity of Bimatrix is low, under uniform or Gaussian perturbations, then one can quickly find an approximate Nash equilibrium.

Lemma 3.2 (Smoothed Nash vs Approximate Nash).

If Bimatrix is in smoothed polynomial time under uniform or Gaussian perturbations, then for all 0 < ε < 1, there exists a randomized algorithm that computes an ε-approximate Nash equilibrium of a two-player game in expected time polynomial in n and 1/ε, under either perturbation model.


This lemma was informally argued in [55]; see Appendix A for a proof. ∎

4 Two Search Problems

In this section, we consider two search problems that are essential to our main results. In the first problem, the objective is to find a high-dimensional discrete Brouwer fixed point. To define the second problem, we introduce the concept of a generalized circuit.

4.1 Discrete Brouwer Fixed Points

The following is an obvious fact: Suppose we color the two endpoints of an interval by two distinct colors, say red and blue, insert m − 1 points evenly into this interval to subdivide it into m subintervals of equal length, and color these new points arbitrarily by one of the two colors. Then, there must be a bichromatic subinterval, i.e., a subinterval whose two endpoints have distinct colors.
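This one-dimensional fact even admits an efficient search: binary search can maintain a subinterval whose endpoints carry distinct colors, halving it at each step. A sketch, assuming the two endpoint colors differ:

```python
def bichromatic_subinterval(color, m):
    """Binary search for a subinterval [i, i+1] with distinctly colored
    endpoints, given color(0) != color(m); uses O(log m) color queries."""
    lo, hi = 0, m
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # keep the half whose endpoints still carry distinct colors
        if color(mid) != color(lo):
            hi = mid
        else:
            lo = mid
    return lo
```

The invariant color(lo) ≠ color(hi) holds throughout, so the loop terminates at a bichromatic unit subinterval. The hardness of the high-dimensional problems below shows that no such shortcut is expected to survive in high dimensions.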

Our first search problem is built on a high-dimensional extension of this fact. Instead of coloring points in the subdivision of an interval, we color the vertices of a hypergrid. If the dimension is d, we will use d + 1 colors.

For d and r = (r₁, r₂, …, r_d) in ℤ^d_+, let A_r denote the set of vertices of the hypergrid with side lengths specified by r, that is, A_r = { p ∈ ℤ^d : 0 ≤ p_i ≤ r_i for all i }. The boundary of A_r, ∂(A_r), is the set of points p ∈ A_r with p_i ∈ {0, r_i} for some i. Let Size[r] = Σᵢ ⌈log(r_i + 1)⌉, the number of bits needed to specify a vertex of A_r.

In one dimension, the interval [0, m] is the union of m unit subintervals. A hypergrid can similarly be viewed as the union of a collection of unit hypercubes. For a point p ∈ ℤ^d, let K_p = { q ∈ ℤ^d : q_i ∈ {p_i, p_i + 1} for all i } be the set of vertices of the unit hypercube with p as its corner closest to the origin.

We can color the vertices of a hypergrid A_r with d + 1 colors {1, 2, …, d + 1}. As in one dimension, the coloring of the boundary vertices needs to meet certain requirements in the context of the discrete Brouwer fixed point problem. A color assignment φ of A_r is valid if it satisfies the following condition: For every p ∈ ∂(A_r), if there exists an i such that p_i = 0, then φ(p) = i for the smallest such i; otherwise φ(p) = d + 1. In the latter case, p_i > 0 for all i, and p_j = r_j for some j.

The following theorem is a high-dimensional extension of the one-dimensional fact mentioned above. It is also an extension of the two-dimensional Sperner’s Lemma.

Theorem 4.1 (High-Dimensional Discrete Brouwer Fixed Points).

For d and r in ℤ^d_+, for any valid coloring φ of A_r, there is a unit hypercube in A_r whose vertices have all d + 1 colors.

In other words, Theorem 4.1 asserts that there exists a point p such that φ assigns all d + 1 colors to K_p. We call such a K_p a panchromatic cube. However, in d dimensions, a panchromatic cube contains 2^d vertices. This exponential dependency on the dimension makes it inefficient to check whether a hypercube is panchromatic. We therefore introduce the following notion of discrete fixed points.

Definition 4.2 (Panchromatic Simplex).

A subset P ⊆ A_r is accommodated if P ⊆ K_p for some point p. P is a panchromatic simplex of a color assignment φ if it is accommodated and contains exactly d + 1 points with d + 1 distinct colors.

Corollary 4.3 (Existence of Panchromatic Simplex).

For d and r in ℤ^d_+, for any valid coloring φ of A_r, there exists a panchromatic simplex of φ in A_r.
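For small grids, a panchromatic simplex can be found by brute force directly from Definition 4.2; the helper name below is our own, and the coloring used in the test is just one valid example, not the general construction.

```python
from itertools import product

def find_panchromatic_simplex(color, r):
    """Brute-force search over base points p: inside each unit cube K_p,
    look for d+1 vertices carrying all d+1 distinct colors."""
    d = len(r)
    for p in product(*(range(ri) for ri in r)):
        seen = {}
        for q in product(*((pi, pi + 1) for pi in p)):
            seen.setdefault(color(q), q)
        if len(seen) == d + 1:
            return list(seen.values())  # accommodated: all points lie in K_p
    return None
```

Note that only d + 1 witness points are returned, one per color, so verifying a solution takes time polynomial in d even though a full cube has 2^d vertices; this is exactly why the search problem is stated in terms of panchromatic simplices rather than panchromatic cubes.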

We can define a search problem based on Theorem 4.1 or, more precisely, on Corollary 4.3. An input instance is a hypergrid together with a polynomial-sized circuit for coloring the vertices of the hypergrid.

Definition 4.4 (Brouwer-Mapping Circuit and Color Assignment).

For d and r ∈ ℤ^d_+, a Boolean circuit C with Σᵢ ⌈log(r_i + 1)⌉ input bits and d + 1 output bits Δ₁, Δ₂, …, Δ_{d+1} is a valid Brouwer-mapping circuit (with parameters r and d) if the following is true.

  • For every p ∈ A_r, the output bits of C evaluated at p satisfy one of the following d + 1 cases:

    • Case i, 1 ≤ i ≤ d: Δ_i = 1 and all other bits are 0;

    • Case d + 1: Δ_i = 0 for all 1 ≤ i ≤ d, and Δ_{d+1} = 1.

  • For every p ∈ ∂(A_r), if there exists an i such that p_i = 0, letting i* be the smallest such i, then the output bits satisfy Case i*; otherwise (p_i > 0 for all i, and p_j = r_j for some j), the output bits satisfy Case d + 1.

The circuit C defines a valid color assignment φ_C by setting φ_C(p) = i, if the output bits of C evaluated at p satisfy Case i.

To define our high-dimensional Brouwer's fixed point problems, we need a notion of well-behaved functions (please note that this is not the function for the fixed point problem) to parameterize the shape of the search space. An integer function f is called well-behaved if it is polynomial-time computable and there exists an integer constant n₀ such that 2 ≤ f(n) ≤ n/2 for all n ≥ n₀. For example, f(n) = 2, f(n) = 3, f(n) = ⌈n/3⌉, and f(n) = ⌈√n⌉ are all well-behaved.

Definition 4.5 (Brouwer).

For each well-behaved function f, the search problem Brouwer^f is defined as follows: Given an input instance (C, 0ⁿ) of Brouwer^f, where C is a valid Brouwer-mapping circuit with parameters r = (r₁, …, r_d) and d, where d = f(n) and r_i = 2^{⌈n/d⌉} for all i, find a panchromatic simplex of φ_C.

The input size of Brouwer^f is the sum of n and the size of the circuit C. With f₂(n) = 2, Brouwer^{f₂} is a two-dimensional search problem over the grid A_{(2^{n/2}, 2^{n/2})}; with f₃(n) = 3, Brouwer^{f₃} is a three-dimensional search problem over A_{(2^{n/3}, 2^{n/3}, 2^{n/3})}; while with f(n) = ⌈n/3⌉, Brouwer^f is a ⌈n/3⌉-dimensional search problem over A_{(8, …, 8)}. Each of these three grids contains about 2ⁿ unit hypercubes. Both Brouwer^{f₂} [9] and Brouwer^{f₃} [18] are known to be PPAD-complete. In Section 7, we will prove the following theorem, which states that the complexity of finding a panchromatic simplex is essentially independent of the shape or dimension of the search space. In particular, it implies that Brouwer^f with f(n) = ⌈n/3⌉ is also PPAD-complete.

Theorem 4.6 (High-Dimensional Discrete Fixed Points).

For each well-behaved function f, Brouwer^f is PPAD-complete.

4.2 Generalized Circuits and Their Assignment Problem

To effectively connect discrete Brouwer fixed points with two-player Nash equilibria, we use an intermediate structure called the generalized circuit. This family of circuits, motivated by the reductions of [18, 10, 20], extends the standard classes of Boolean and arithmetic circuits in several aspects.

Syntactically, a generalized circuit S = (V, T) is a pair, where V is a set of nodes and T is a collection of gates. Every gate T ∈ T is a 5-tuple T = (G, v₁, v₂, v, α) in which

  • G is the type of the gate;

  • v₁, v₂ ∈ V ∪ {nil} are the first and second input nodes of the gate;

  • v ∈ V is the output node, and α is a real parameter.

The collection T of gates must satisfy the following property: For every two gates T = (G, v₁, v₂, v, α) and T′ = (G′, v₁′, v₂′, v′, α′) in T, v ≠ v′; that is, each node is the output of at most one gate.

Figure 1: An example of generalized circuits

Suppose T = (G, v₁, v₂, v, α) ∈ T. If G = G_ζ, then the gate has no input node: v₁ = v₂ = nil. If G ∈ {G_×ζ, G_=, G_¬}, then v₁ ∈ V and v₂ = nil. If G ∈ {G_+, G_−, G_<, G_∧, G_∨}, then v₁, v₂ ∈ V. The parameter α is only used in G_ζ and G_×ζ gates. If G = G_ζ or G = G_×ζ, then α ∈ [0, 1]. For other types of gates, α = nil.

The input size of a generalized circuit S = (V, T) is the sum of |V| and the total number of bits needed to specify the parameters of the gates in T. As an important point, which will become clear later, we make the following remark: In all generalized circuits that we will construct, the number of bits of each parameter is upper bounded by poly(|V|).

In addition to its more expanded list of gate types, the generalized circuit differs crucially from the standard circuit in that it does not require the circuit to be acyclic. In other words, in a generalized circuit, the directed graph defined by connecting input nodes of all gates to their output counterparts may have cycles. We shall show later that the presence of cycles is necessary and sufficient to express fixed point computations with generalized circuits.

Semantically, we associate with every node v ∈ V a real variable x[v]. Each gate requires that the variables of its input and output nodes satisfy certain constraints, either arithmetic or logical, depending on the type of the gate. For a precision parameter ε > 0, the constraints P[T] are defined in Figure 2. The notation x[v] =_ε B in Figure 2 will be defined shortly. A generalized circuit S = (V, T) thus defines a set of constraints, or a mathematical program, over the set of variables { x[v] : v ∈ V }.

  G_ζ : x[v] = α ± ε
  G_×ζ : x[v] = min(α · x[v₁], 1) ± ε
  G_= : x[v] = min(x[v₁], 1) ± ε
  G_+ : x[v] = min(x[v₁] + x[v₂], 1) ± ε
  G_− : x[v] = max(x[v₁] − x[v₂], 0) ± ε
  G_< : x[v] =_ε 1 if x[v₁] < x[v₂] − ε; x[v] =_ε 0 if x[v₁] > x[v₂] + ε
  G_∨ : x[v] =_ε 1 if x[v₁] =_ε 1 or x[v₂] =_ε 1; x[v] =_ε 0 if x[v₁] =_ε 0 and x[v₂] =_ε 0
  G_∧ : x[v] =_ε 1 if x[v₁] =_ε 1 and x[v₂] =_ε 1; x[v] =_ε 0 if x[v₁] =_ε 0 or x[v₂] =_ε 0
  G_¬ : x[v] =_ε 1 if x[v₁] =_ε 0; x[v] =_ε 0 if x[v₁] =_ε 1

Figure 2: Constraints P[T], where T = (G, v₁, v₂, v, α); here t = s ± ε abbreviates s − ε ≤ t ≤ s + ε

Suppose S = (V, T) is a generalized circuit. For every ε > 0, an ε-approximate solution to the circuit S is an assignment x to the variables { x[v] : v ∈ V } such that

  • the value of every x[v] satisfies the constraint 0 ≤ x[v] ≤ 1 + ε; and

  • for each gate T = (G, v₁, v₂, v, α) ∈ T, the values of x[v₁], x[v₂], and x[v] satisfy the constraint P[T], defined in Figure 2.

Among the nine types of gates, G_ζ, G_×ζ, G_=, G_+, and G_− are arithmetic gates implementing arithmetic constraints like assignment, addition, subtraction, and constant multiplication. G_< is a brittle comparator; it only distinguishes values that are properly separated. Finally, G_∧, G_∨, and G_¬ are logic gates. For an assignment x to the variables, the value of x[v] represents boolean 1 with precision ε, denoted by x[v] =_ε 1, if 1 − ε ≤ x[v] ≤ 1 + ε; it represents boolean 0 with precision ε, denoted by x[v] =_ε 0, if 0 ≤ x[v] ≤ ε. We will use x[v] =_ε B to denote the constraint that the value of x[v] lies in the interval representing boolean B. The logic constraints implemented by the three logic gates are defined similarly to the classical ones.
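Our reading of the gate constraints can be turned into a checker for approximate solutions; the gate labels 'const', 'scale', and so on are our own stand-ins for G_ζ, G_×ζ, etc., and the constraint bodies follow the reconstruction in Figure 2.

```python
def is_one(t, eps):
    return 1 - eps <= t <= 1 + eps

def is_zero(t, eps):
    return 0 <= t <= eps

def check_gate(gate, x, eps):
    """Check one gate constraint: arithmetic gates hold up to additive
    error eps, the comparator is brittle, and logic gates are only forced
    when their inputs represent booleans. gate = (G, v1, v2, v, alpha)."""
    G, v1, v2, v, alpha = gate
    out = x[v]
    if G == 'const':   # G_zeta
        return abs(out - alpha) <= eps
    if G == 'scale':   # G_xzeta
        return abs(out - min(alpha * x[v1], 1.0)) <= eps
    if G == 'copy':    # G_=
        return abs(out - min(x[v1], 1.0)) <= eps
    if G == 'plus':    # G_+
        return abs(out - min(x[v1] + x[v2], 1.0)) <= eps
    if G == 'minus':   # G_-
        return abs(out - max(x[v1] - x[v2], 0.0)) <= eps
    if G == 'less':    # brittle comparator G_<
        if x[v1] < x[v2] - eps:
            return is_one(out, eps)
        if x[v1] > x[v2] + eps:
            return is_zero(out, eps)
        return True    # unconstrained on nearly equal inputs
    if G == 'or':
        if is_one(x[v1], eps) or is_one(x[v2], eps):
            return is_one(out, eps)
        if is_zero(x[v1], eps) and is_zero(x[v2], eps):
            return is_zero(out, eps)
        return True
    if G == 'and':
        if is_one(x[v1], eps) and is_one(x[v2], eps):
            return is_one(out, eps)
        if is_zero(x[v1], eps) or is_zero(x[v2], eps):
            return is_zero(out, eps)
        return True
    if G == 'not':
        if is_one(x[v1], eps):
            return is_zero(out, eps)
        if is_zero(x[v1], eps):
            return is_one(out, eps)
        return True
    raise ValueError('unknown gate type: ' + G)

def check_circuit(gates, x, eps):
    """x maps node names to values; every node and every gate must pass."""
    return all(0 <= t <= 1 + eps for t in x.values()) and \
           all(check_gate(g, x, eps) for g in gates)
```

The brittleness of the comparator and logic gates is what makes cyclic circuits satisfiable: on inputs that are nearly equal, or that represent no boolean, the gate imposes no constraint at all.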

From the reduction in Section 6, we can prove the following theorem. A proof can be found in Appendix B.

Theorem 4.7.

For any constant c > 0, every generalized circuit with K nodes has a K^{−c}-approximate solution.

Let c be a positive constant. We use Poly-Gcircuit and Exp-Gcircuit to denote the problems of finding a K^{−c}-approximate solution and a 2^{−cK}-approximate solution, respectively, of a given generalized circuit with K nodes.

5 Main Results and Proof Outline

As the main technical result of our paper, we prove the following theorem.

Theorem 5.1 (Main).

For any constant c > 0, Poly-Bimatrix is PPAD-complete.

This theorem immediately implies the following statements about the complexity of computing and approximating two-player Nash equilibria.

Theorem 5.2 (Complexities of Bimatrix).

Bimatrix is PPAD-complete. Moreover, it does not have a fully polynomial-time approximation scheme, unless PPAD is contained in P.

By Proposition 2.1, Bimatrix also does not have a fully polynomial-time approximation scheme under the relative approximation of Nash equilibria.

Setting ε = 1/poly(n), by Theorem 5.1 and Lemma 3.2, we obtain the following theorem on the smoothed complexity of two-player Nash equilibria:

Theorem 5.3 (Smoothed Complexity of Bimatrix).

Bimatrix is not in smoothed polynomial time, under uniform or Gaussian perturbations, unless PPAD is contained in RP.

Corollary 5.4 (Smoothed Complexity of Lemke-Howson).

If PPAD is not contained in RP, then the smoothed complexity of the Lemke-Howson algorithm is not polynomial.

By Proposition 2.3, we obtain the following corollary from Theorem 5.1 about the complexity of Bit-Bimatrix.

Corollary 5.5 (Bit-Bimatrix).

For any constant c > 0, (c log n)-Bit-Bimatrix, the problem of finding the first c log n bits of a Nash equilibrium in a bimatrix game, is polynomial-time equivalent to Bimatrix.

To prove Theorem 5.1, we will start with the discrete fixed point problem Brouwer^f, where f(n) = ⌈n/3⌉ for all n (recall that this is a search problem over the hypergrid with side length 8 in each dimension). As f is a well-behaved function, Theorem 4.6 implies that Brouwer^f is a PPAD-complete problem. We then apply the following three lemmas to reduce Brouwer^f to Poly-Bimatrix.

Lemma 5.6 (Fpc to Gcircuit).

Brouwer^f is polynomial-time reducible to Poly-Gcircuit.

Lemma 5.7 (Gcircuit to Bimatrix).

Poly-Gcircuit is polynomial-time reducible to Poly-Bimatrix.

Lemma 5.8 (Padding Bimatrix Games).

If Poly-Bimatrix is PPAD-complete for some constant c > 0 (that is, for approximation parameter n^{−c}), then Poly-Bimatrix is PPAD-complete for every constant c > 0.

We will prove Lemma 5.6 and Lemma 5.7, respectively, in Section 8 and Section 6. A proof of Lemma 5.8 can be found in Appendix C.

6 Simulating Generalized Circuits with Nash Equilibria

In this section, we reduce Poly-Gcircuit, the problem of computing a 1/poly(K)-approximate solution of a generalized circuit with K nodes, to Poly-Bimatrix. As every two-player game has a Nash equilibrium, this reduction also implies that every generalized circuit with K nodes has a 1/poly(K)-approximate solution.

6.1 Outline of the Reduction

Suppose S = (V, T) is a generalized circuit. Let K = |V| and N = 2K. Let π be a one-to-one map from V to {1, 2, …, K}. From every vector w ∈ ℝ^N, we define two maps from V to ℝ: for every node v ∈ V, supposing π(v) = i, we set w[v] = w_{2i−1} and w[v̄] = w_{2i}.

In our reduction, we will build an N × N bimatrix game (A, B). Our construction will take polynomial time and will ensure the following properties.

  • Property P₁: |A_{i,j}|, |B_{i,j}| ≤ poly(K), for all i and j; and

  • Property P₂: for every well-supported Nash equilibrium (x, y) of game (A, B), the assignment induced by x is a 1/poly(K)-approximate solution to S.

Then, we normalize the game (A, B) to obtain