Nash Equilibria in Perturbation Resilient Games


Maria-Florina Balcan¹    Mark Braverman²
¹School of Computer Science, College of Computing, Georgia Institute of Technology, Atlanta, Georgia.
²Princeton University.
Abstract

Motivated by the fact that in many game-theoretic settings, the game analyzed is only an approximation to the game being played, in this work we analyze equilibrium computation for the broad and natural class of bimatrix games that are stable to perturbations. We specifically focus on games with the property that small changes in the payoff matrices do not cause the Nash equilibria of the game to fluctuate wildly. For such games we show how one can compute approximate Nash equilibria more efficiently than the general result of Lipton et al. [18], by an amount that depends on the degree of stability of the game and that reduces to their bound in the worst case. Furthermore, we show that for stable games the approximate equilibria found will be close in variation distance to true equilibria, and moreover this holds even if we are given as input only a perturbation of the actual underlying stable game.

For uniformly stable games, where the equilibria fluctuate at most quasi-linearly in the extent of the perturbation, we get a particularly dramatic improvement. Here, we achieve a fully quasi-polynomial-time approximation scheme: that is, we can find 1/poly(n)-approximate equilibria in quasi-polynomial time. This is in marked contrast to the general class of bimatrix games, for which finding such approximate equilibria is PPAD-hard. In particular, under the (widely believed) assumption that PPAD is not contained in quasi-polynomial time, our results imply that such uniformly stable games are inherently easier for the computation of approximate equilibria than general bimatrix games.

1 Introduction

The Nash equilibrium solution concept has a long history in economics and game theory as a description of the natural result of self-interested behavior [20, 22]. Its importance has led to significant effort in the computer science literature in recent years towards understanding the computational structure of equilibria, and in particular the complexity of finding both Nash and approximate Nash equilibria. A series of results culminating in the work of Daskalakis, Goldberg, and Papadimitriou [10] and Chen, Deng, and Teng [7, 8] showed that finding a Nash equilibrium, or even a 1/poly(n)-approximate equilibrium, is PPAD-complete even for 2-player bimatrix games. For general values of ε, the best known algorithm for finding ε-approximate equilibria runs in time n^{O(log n / ε²)}, based on a structural result of Lipton et al. [18] showing that there always exist ε-approximate equilibria with support over at most O(log n / ε²) strategies. This structural result has been shown to be existentially tight [15]. Even for large values of ε, despite considerable effort [11, 12, 21, 15, 6, 17], polynomial-time algorithms for computing ε-approximate equilibria are known only for ε ≥ 0.3393 [21]. These results suggest a difficult computational landscape for equilibrium and approximate equilibrium computation on worst-case instances.

In this paper we go beyond worst-case analysis and investigate the equilibrium computation problem in a natural class of bimatrix games that are stable to perturbations. As we argue, on one hand, such games can be used to model many realistic situations. On the other hand, we show that they have additional structure which can be exploited to provide better algorithmic guarantees than what is believed to be possible on worst-case instances. The starting point of our work is the realization that games are typically only abstractions of reality and, except in the most controlled settings, payoffs listed in a game that represents an interaction between self-interested agents are only approximations to the agents' exact utilities.³ (³ For example, if the agents are two corporations with various possible actions in some proposed market, the precise payoffs to the corporations may depend on specific quantities such as demand for electricity or the price of oil, which cannot be fully known in advance but only estimated.) As a result, for problems such as equilibrium computation, it is natural to focus attention on games that are robust to the exact payoff values, in the sense that small changes to the entries in the game matrices do not cause the Nash equilibria to fluctuate wildly; otherwise, even if equilibria can be computed, they may not actually be meaningful for understanding behavior in the game that is played. In this work, we focus on such games and analyze their structural properties as well as their implications for the equilibrium computation problem. We show how their structure can be leveraged to obtain better algorithms for computing approximate Nash equilibria, as well as strategies close to true Nash equilibria. Furthermore, we provide such algorithmic guarantees even if we are given only a perturbation of the actual stable game being played.

To formalize such settings we consider bimatrix games that satisfy what we call the (ε, Δ)-perturbation stability condition, meaning that for any game G′ within distance ε of G (each entry changed by at most ε), each Nash equilibrium in G′ is Δ-close to some Nash equilibrium in G, where closeness is measured by variation distance. Clearly, any game is (ε, 1)-perturbation stable for any ε ≥ 0, and the smaller the Δ, the more structure (ε, Δ)-perturbation stable games have. In this paper we study the meaningful range of parameters, several structural properties, and the algorithmic behavior of these games.

Our first main result shows that for an interesting and general range of parameters the structure of perturbation stable games can be leveraged to obtain better algorithms for equilibrium computation. Specifically, we show that all n-action (ε, Δ)-perturbation stable games with at most N Nash equilibria must have a well-supported O(ε)-equilibrium of support O(Δ² log(1 + N) / ε²). This yields an n^{O(Δ² log(1 + N) / ε²)}-time algorithm for finding such an equilibrium, improving by a factor of log n / (Δ² log(1 + N)) in the exponent over the n^{O(log n / ε²)} bound of [18] for games satisfying this condition (and reducing to the bound of [18] in the worst case, when Δ² log(1 + N) = Θ(log n)).⁴ (⁴ One should think of Δ as a function of ε, with both possibly depending on n; e.g., Δ(ε) = 2ε, or Δ(ε) = √ε, with ε itself possibly a function of n such as 1/log n.) Moreover, the stability condition can be further used to show that the approximate equilibrium found will be O(Δ)-close to a true equilibrium, and this holds even if the algorithm is given as input only a perturbation of the true underlying stable game.

A particularly interesting class of games for which our results provide a dramatic improvement are those that satisfy what we call uniform stability to perturbations. These are games that, for some fixed constant C and all sufficiently small ε > 0, satisfy the (ε, Δ(ε))-stability to perturbations condition with Δ(ε) at most quasi-linear in ε (say, Δ(ε) ≤ Cε log(1/ε)). For games satisfying uniform stability to perturbations with polynomially many equilibria, our results imply that we can find ε-approximate equilibria in time quasi-polynomial in n and 1/ε, i.e., achieve a fully quasi-polynomial-time approximation scheme (FQPTAS); in particular, we can find 1/poly(n)-approximate equilibria in quasi-polynomial time. This is especially interesting because the results of [8] prove that it is PPAD-hard to find 1/poly(n)-approximate equilibria in general games. Our results show that under the (widely believed) assumption that PPAD is not contained in quasi-polynomial time [9], such uniformly stable games are inherently easier for the computation of approximate equilibria than general bimatrix games.⁵ (⁵ The generic result of [18] achieves quasi-polynomial time only for ε = Ω(1/polylog n).) Moreover, variants of many games appearing commonly in experimental economics, including the public goods game, matching pennies, and the identical interest game [14], satisfy this condition. See Sections 3 and 5, and Appendix C, for detailed examples.

Our second main result shows that computing an ε-equilibrium in a game satisfying the (ε, Δ)-perturbation stability condition is in general as hard as computing a Δ-equilibrium in a general game. For our reduction, we show that any general game can be embedded into one having the perturbation stability property such that an equilibrium in the new game yields a Δ-equilibrium in the original game. This result implies that the interesting range for the (ε, Δ)-perturbation stability condition, where one could hope to do significantly better than in the general case, is the regime where Δ is sufficiently small as a function of ε.

We also connect our stability to perturbations condition to a seemingly very different stability to approximations condition introduced by Awasthi et al. [2]. Formally, a game satisfies the strong (ε, Δ)-approximation stability condition if all ε-approximate equilibria are contained inside a small ball of radius Δ around a single equilibrium.⁶ (⁶ [2] argue that this condition is interesting since in situations where one would want to use an approximate Nash equilibrium for predicting how players will play (which is a common motivation for computing a Nash or an approximate Nash equilibrium), without such a condition the approximate equilibrium found might be far from the equilibrium played.) We prove that our stability to perturbations condition is equivalent to a much weaker version of this condition that we call well-supported approximation stability. This condition requires only that for any well-supported ε-approximate equilibrium (p, q)⁷ (⁷ Recall that in an ε-Nash equilibrium, the expected payoff of each player is within ε of her best-response payoff; however, the mixed strategies may include poorly performing pure strategies in their support. By contrast, the support of a well-supported ε-approximate equilibrium may only contain strategies whose payoffs fall within ε of the player's best-response payoff.) there exists a Nash equilibrium (p*, q*) that is Δ-close to (p, q). Clearly, the well-supported approximation stability condition is more general than the strong (ε, Δ)-approximation stability considered by [2]: rather than assuming that there exists a fixed Nash equilibrium (p*, q*) such that all ε-approximate equilibria are contained in a ball of radius Δ around (p*, q*), it requires only that each well-supported ε-approximate equilibrium have some Nash equilibrium Δ-close to it. Thus, perturbation-stable games are significantly more expressive than strongly approximation stable games, and Section 3 presents several examples of games satisfying the former but not the latter. However, our lower bound on the interesting range of parameters also holds for the strong stability to approximations condition.

We also provide an interesting structural result showing that each Nash equilibrium of an (ε, Δ)-perturbation stable game with N Nash equilibria must be O(Δ)-close to a well-supported O(ε)-approximate equilibrium of support only O(Δ² log(1 + N) / ε²). Similarly, a uniformly stable game with N equilibria has the property that for any ε, each equilibrium is O(Δ(ε))-close to a well-supported O(ε)-approximate equilibrium with support of size O(Δ(ε)² log(1 + N) / ε²). This property implies that in quasi-polynomial time we can in fact find a set of approximate Nash equilibria that cover (within distance O(Δ)) the set of all Nash equilibria in such games.

It is interesting to note that our algorithmic results for finding approximate equilibria do not require knowing the stability parameters. If the game happens to be reasonably stable, then we get improved running times over the Lipton et al. [18] guarantees; if this is not the case, then we fall back to the Lipton et al. [18] guarantees.⁸ (⁸ This is because, algorithmically, we can simply try different support sizes in increasing order and stop when we find strategies forming a well-supported O(ε)-equilibrium.) In other words, given the desired approximation level ε, we can find an ε-approximate equilibrium in time n^{O(Δ² log(1 + N) / ε²)}, where Δ is the smallest value such that the game is (ε, Δ)-perturbation stable. However, given a game, it might be interesting to know how stable it is. In this direction, we provide a characterization of stable constant-sum games in Section 6 and an algorithm for computing the strong stability parameters of a given constant-sum game.
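The fallback strategy in the footnote above, trying support sizes in increasing order and stopping at the first approximate equilibrium found, can be sketched in code. The sketch below is a simplified illustration that restricts attention to uniform mixtures over candidate supports (in the spirit of the k-uniform strategies of Lipton et al.), rather than the full search over strategy profiles; the function name is ours, not the paper's.

```python
import itertools
import numpy as np

def smallest_support_equilibrium(A, B, eps):
    """Try support sizes k = 1, 2, ... and return the first pair (p, q) of
    uniform mixtures over size-k supports that forms an eps-equilibrium.
    Brute force: exponential in k in general, but it stops early on stable
    games, which admit small-support approximate equilibria."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            for T in itertools.combinations(range(n), k):
                p, q = np.zeros(n), np.zeros(n)
                p[list(S)] = 1.0 / k
                q[list(T)] = 1.0 / k
                # eps-equilibrium test: no pure deviation gains more than eps
                if (A @ q).max() <= p @ A @ q + eps and \
                   (p @ B).max() <= p @ B @ q + eps:
                    return p, q, k
    return None

# A coordination game: the pure profile (e_1, e_1) is found at support size 1.
A = B = np.array([[1.0, 0.0], [0.0, 0.0]])
res = smallest_support_equilibrium(A, B, 0.1)
```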

1.1 Related Work

In addition to results on computing (approximate) equilibria in worst-case instances of general bimatrix games, there has also been a series of results on polynomial-time algorithms for computing (approximate) equilibria in specific classes of bimatrix games. For example, Bárány et al. [4] considered two-player games with randomly chosen payoff matrices, and showed that with high probability, such games have Nash equilibria with small support. Their result implies that in random two-player games, Nash equilibria can be computed in expected polynomial time. Kannan and Theobald [16] provide an FPTAS for the case where the sum of the two payoff matrices has constant rank and Adsul et al. [1] provide a polynomial time algorithm for computing an exact Nash equilibrium of a rank-1 bimatrix game.

Awasthi et al. [2] analyzed the question of finding an approximate Nash equilibrium in games that satisfy stability with respect to approximation. However, their condition is quite restrictive in that it focuses only on games with the property that all Nash equilibria are close together, thus eliminating from consideration most common games. By contrast, our perturbation stability notion, which (as mentioned above) can be shown to be a generalization of their notion, captures many more realistic situations. Our upper bound on approximate equilibria can be viewed as generalizing the corresponding result of [2], and its proof is significantly more challenging technically. Moreover, our lower bounds also apply to the stability notion of [2] and provide the first (nontrivial) results about the interesting range of parameters for that stability notion as well.

In a very different context, for clustering problems, Bilu and Linial [5] analyze maxcut clustering instances with the property that if the distances are perturbed by a multiplicative factor of α, then the optimum does not change; they show that under this condition, for α = Ω(√n), one can find the optimum solution in polynomial time. Recently, Awasthi et al. [3] have shown a similar result for the k-median clustering problem, for constant α. Our stability to perturbations notion is inspired by this work, but is substantially less restrictive in two respects. First, we require stability only to small perturbations in the input, and second, we do not require the solutions (Nash equilibria) to stay fixed under perturbation, but rather just ask that they have a bounded degree of movement.

The notion of stability to perturbations we consider in our paper is also related to the stability notions considered by Lipton et al. [19] for economic solution concepts. The main focus of their work was understanding whether, for a given solution concept or optimization problem, all instances are stable. In this paper, our main focus is on understanding how rich the class of stable instances is, and what properties one can determine about their structure that can be leveraged to get better algorithms for computing approximate Nash equilibria.⁹ (⁹ Just as in [19], one can show that for the stability conditions we consider in our paper, there exist unstable instances.) We provide the first results showing better algorithms for computing approximate equilibria in such games.

2 Preliminaries

We consider 2-player general-sum n-action bimatrix games. Let A denote the payoff matrix of the row player and B denote the payoff matrix of the column player. If the row player chooses strategy i and the column player chooses strategy j, the payoffs are A_ij and B_ij, respectively. We assume all payoffs are scaled to the range [0, 1].

A mixed strategy for a player is a probability distribution over the set of his pure strategies. The i-th pure strategy will be represented by the unit vector e_i, which has 1 in the i-th coordinate and 0 elsewhere. For a mixed strategy pair (p, q), the payoff to the row player is the expected value of a random variable which is equal to A_ij with probability p_i q_j. Therefore the payoff to the row player is p^T A q. Similarly, the payoff to the column player is p^T B q. Given strategies p and q for the row and column player, we denote by supp(p) and supp(q) the support of p and q, respectively.
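The expected payoffs p^T A q and p^T B q translate directly into code; a minimal numpy sketch (the example matrices are ours, a matching-pennies-style game scaled to [0, 1]):

```python
import numpy as np

def payoffs(A, B, p, q):
    """Expected payoffs (row, column) when the row player mixes with p and
    the column player mixes with q; A, B are the n x n payoff matrices."""
    return float(p @ A @ q), float(p @ B @ q)

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # row player wants the actions to match
B = 1.0 - A                  # column player wants them to mismatch
p = q = np.array([0.5, 0.5]) # both players mix uniformly
r, c = payoffs(A, B, p, q)   # both expected payoffs equal 0.5
```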

A Nash equilibrium [20] is a pair of strategies such that no player has an incentive to deviate unilaterally. Since mixed strategies are convex combinations of pure strategies, it suffices to consider only deviations to pure strategies. In particular, a pair of mixed strategies (p, q) is a Nash equilibrium if for every pure strategy e_i of the row player we have e_i^T A q ≤ p^T A q, and for every pure strategy e_j of the column player we have p^T B e_j ≤ p^T B q. Note that in a Nash equilibrium (p, q), all rows i ∈ supp(p) satisfy e_i^T A q = p^T A q, and similarly all columns j ∈ supp(q) satisfy p^T B e_j = p^T B q.

Definition 1

A pair of mixed strategies (p, q) is an ε-equilibrium if both players have no more than ε incentive to deviate. Formally, (p, q) is an ε-equilibrium if for all rows i we have e_i^T A q ≤ p^T A q + ε, and for all columns j we have p^T B e_j ≤ p^T B q + ε.

Definition 2

A pair of mixed strategies (p, q) is a well-supported ε-equilibrium if for any i ∈ supp(p) (i.e., i such that p_i > 0) we have e_i^T A q ≥ e_{i'}^T A q − ε for all i'; similarly, for any j ∈ supp(q) (i.e., j such that q_j > 0) we have p^T B e_j ≥ p^T B e_{j'} − ε for all j'.
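Both equilibrium notions can be checked mechanically; a numpy sketch testing Definitions 1 and 2 for a given strategy pair (the uniform matching-pennies example is ours):

```python
import numpy as np

def is_eps_equilibrium(A, B, p, q, eps):
    """Definition 1: no pure deviation improves either player by more than eps."""
    row_val, col_val = p @ A @ q, p @ B @ q
    return (A @ q).max() <= row_val + eps and (p @ B).max() <= col_val + eps

def is_well_supported(A, B, p, q, eps, tol=1e-12):
    """Definition 2: every pure strategy in a player's support earns within
    eps of that player's best pure-response payoff."""
    row_pay, col_pay = A @ q, p @ B
    ok_row = all(row_pay[i] >= row_pay.max() - eps - tol for i in np.nonzero(p)[0])
    ok_col = all(col_pay[j] >= col_pay.max() - eps - tol for j in np.nonzero(q)[0])
    return ok_row and ok_col

# Uniform mixing in matching pennies is an exact, well-supported equilibrium:
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = 1.0 - A
p = q = np.array([0.5, 0.5])
```

Note that an ε-equilibrium need not be well supported: a strategy may place small probability on a poorly performing row without any pure deviation gaining more than ε overall.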

Definition 3

We say that a bimatrix game specified by (A', B') is an ε-perturbation of the game specified by (A, B) if we have |A_ij − A'_ij| ≤ ε and |B_ij − B'_ij| ≤ ε for all i, j.

Definition 4

For two probability distributions p and p', we define the distance between p and p' as the variation distance:

d(p, p') = (1/2) Σ_i |p_i − p'_i|.

We define the distance between two strategy pairs as the maximum of the row player's and column player's distances, that is:

d((p, q), (p', q')) = max{ d(p, p'), d(q, q') }.

It is easy to see that d is a metric. If d((p, q), (p', q')) ≤ Δ, then we say that (p, q) is Δ-close to (p', q').
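Definition 4 translates directly into code; a small sketch in plain Python, with distributions represented as equal-length sequences:

```python
def variation_distance(p, p2):
    """Total variation distance: half the L1 distance between distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, p2))

def strategy_pair_distance(pair1, pair2):
    """d((p, q), (p', q')) = max of the two players' variation distances."""
    (p1, q1), (p2, q2) = pair1, pair2
    return max(variation_distance(p1, p2), variation_distance(q1, q2))

# Two disjoint pure strategies are at the maximum distance of 1:
d_pure = variation_distance([1.0, 0.0], [0.0, 1.0])     # 1.0
# A pure strategy vs. the uniform mix on two actions is at distance 0.5:
d_pair = strategy_pair_distance(([1.0, 0.0], [1.0, 0.0]),
                                ([0.5, 0.5], [1.0, 0.0]))  # 0.5
```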

Throughout this paper we use “log” to mean log-base-e.

3 Stable Games

The main notion of stability we introduce and study in this paper requires that any Nash equilibrium in a perturbed game be close to a Nash equilibrium in the original game. This condition is especially well motivated since in many real-world situations the entries of the game we analyze are merely based on measurements and thus only approximately reflect the players' payoffs. In order to be useful for prediction, we would like the equilibria in the game we operate with to be close to equilibria in the real game played by the players. Otherwise, in games where certain equilibria of slightly perturbed games are far from all equilibria in the original game, the analysis of behavior (or prediction) will be meaningless. Formally:

Definition 5

A game G satisfies the (ε, Δ)-stability to perturbations condition if for any game G' that is an ε-perturbation of G and for any Nash equilibrium (p', q') in G', there exists a Nash equilibrium (p, q) in G such that (p', q') is Δ-close to (p, q).¹⁰ (¹⁰ Note that the entries of the perturbed game are not restricted to the interval [0, 1], and are allowed to belong to [−ε, 1 + ε]. This is the proper way to formulate the notion because it implies, for instance, that if G is stable to ε-perturbations, then for any ε' ≤ ε, G is stable to ε'-perturbations. Theorem 1 provides further evidence that this definition is proper.)

Observe that, fixing ε, a smaller Δ means a stronger condition and a larger Δ means a weaker condition. Every game is (ε, 1)-perturbation stable, and as Δ gets smaller, we might expect the game to exhibit more useful structure.

Another stability condition we consider in this work is stability to approximations:

Definition 6

A game satisfies the (ε, Δ)-approximation stability condition if for any ε-equilibrium (p, q) there exists a Nash equilibrium (p*, q*) such that (p, q) is Δ-close to (p*, q*).

A game satisfies the well-supported (ε, Δ)-approximation stability condition if for any well-supported ε-equilibrium (p, q) there exists a Nash equilibrium (p*, q*) such that (p, q) is Δ-close to (p*, q*).

Clearly, if a game satisfies the (ε, Δ)-approximation stability condition, then it also satisfies the well-supported (ε, Δ)-approximation stability condition. Interestingly, we show that the stability to perturbations condition is equivalent to the well-supported approximation stability condition. Specifically:

Theorem 1

A game satisfies the well-supported (2ε, Δ)-approximation stability condition if and only if it satisfies the (ε, Δ)-stability to perturbations condition.

Proof: Consider an n × n bimatrix game specified by A and B and assume it satisfies the well-supported (2ε, Δ)-approximation stability condition; we show it also satisfies the (ε, Δ)-stability to perturbations condition. Consider A' and B', where |A'_ij − A_ij| ≤ ε and |B'_ij − B_ij| ≤ ε for all i, j. Let (p, q) be an arbitrary Nash equilibrium in the new game specified by A' and B'. We will show that (p, q) is a well-supported 2ε-approximate Nash equilibrium in the original game specified by A and B. To see this, note that by definition (since (p, q) is a Nash equilibrium in the game specified by A' and B') we have e_i^T A' q ≥ e_{i'}^T A' q for all i ∈ supp(p) and all i'; therefore e_i^T A q ≥ e_i^T A' q − ε, so e_i^T A q ≥ e_{i'}^T A' q − ε, for all i'. On the other hand, we also have e_{i'}^T A' q ≥ e_{i'}^T A q − ε for all i'. Therefore, e_i^T A q ≥ e_{i'}^T A q − 2ε, for all i ∈ supp(p) and for all i'. Similarly we can show p^T B e_j ≥ p^T B e_{j'} − 2ε, for all j ∈ supp(q) and for all j'. This implies that (p, q) is a well-supported 2ε-approximate Nash equilibrium in the original game, and so by assumption it is Δ-close to a Nash equilibrium of the game specified by A and B. So, this game satisfies the (ε, Δ)-stability to perturbations condition.

In the reverse direction, consider an n × n bimatrix game specified by A and B and assume it satisfies the (ε, Δ)-stability to perturbations condition. Let (p, q) be an arbitrary well-supported 2ε-approximate Nash equilibrium in this game. Let us define matrices A' and B' such that |A'_ij − A_ij| ≤ ε for all i, j and |B'_ij − B_ij| ≤ ε for all i, j, in such a way that (p, q) becomes an exact equilibrium. Since (p, q) is a well-supported 2ε-approximate Nash equilibrium, we know this can be done so that e_i^T A' q is the same for all i ∈ supp(p) and maximal over all rows, and p^T B' e_j is the same for all j ∈ supp(q) and maximal over all columns (in particular, we have to add quantities in [−ε, ε] to all the elements in rows of A in the support of p and subtract quantities in [0, ε] from all the elements in rows of A not in the support of p; similarly for B). By design, (p, q) is a Nash equilibrium in the game defined by A', B', and from the (ε, Δ)-stability to perturbations condition we obtain that it is Δ-close to a true Nash equilibrium of the game specified by A and B. Thus, any well-supported 2ε-approximate Nash equilibrium in the game specified by A and B is Δ-close to a true Nash equilibrium of this game, as desired.     
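The perturbation used in the reverse direction can be illustrated in code. The sketch below implements a simplified one-sided variant of the construction (raising supported rows and columns only), which already turns a well-supported ε-equilibrium into an exact Nash equilibrium of an ε-perturbation; the proof's two-sided version additionally subtracts from unsupported rows to handle the larger 2ε approximation parameter. The example game and function name are ours.

```python
import numpy as np

def make_exact_by_perturbation(A, B, p, q):
    """Raise each supported row of A (and supported column of B) so that it
    attains the current best pure-response payoff. For a well-supported
    eps-equilibrium (p, q), every shift is in [0, eps], so (A2, B2) is an
    eps-perturbation of (A, B) in which (p, q) is an exact Nash equilibrium."""
    A2, B2 = A.astype(float).copy(), B.astype(float).copy()
    row_pay, col_pay = A @ q, p @ B
    for i in np.nonzero(p)[0]:
        A2[i, :] += row_pay.max() - row_pay[i]   # shift in [0, eps] by Def. 2
    for j in np.nonzero(q)[0]:
        B2[:, j] += col_pay.max() - col_pay[j]
    return A2, B2

# (p, q) below is a well-supported 0.1-equilibrium of (A, B):
A = np.array([[1.0, 0.0], [0.9, 0.0]])
B = np.array([[1.0, 0.0], [1.0, 0.0]])
p, q = np.array([0.5, 0.5]), np.array([1.0, 0.0])
A2, B2 = make_exact_by_perturbation(A, B, p, q)  # entries move by at most 0.1
```

Unsupported rows need no adjustment here: they already earn at most the best-response payoff, which the supported rows now attain exactly.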

One can show that the well supported approximation stability is a strict relaxation of the approximation stability condition. For example, consider the bimatrix game

For this game satisfies the well supported -approximation stability condition, but does not satisfy -approximation stability for any . To see this note that , , , and for any and . This implies that the only well supported -Nash equilibrium is identical to the Nash equilibrium , thus the game is well supported -approximation stable. On the other hand, the pair of mixed strategies with and is an -Nash equilibrium. The distance between and the unique Nash is , thus this game is not -approximation stable for any .

Interestingly, the notion of approximation stability, which is a restriction of both the well-supported approximation stability and the stability to perturbations conditions, is a relaxation of the stability condition considered by Awasthi et al. [2], which requires that all approximate equilibria be contained in a ball of radius Δ around a single Nash equilibrium. In this direction, we define the strong versions of the stability conditions given in Definitions 5 and 6 via a reversal of quantifiers: they ask that there be a single Nash equilibrium (p*, q*) such that each relevant object (equilibrium in an ε-perturbed game, ε-approximate equilibrium, or well-supported ε-approximate equilibrium) is Δ-close to (p*, q*). Formally:

Definition 7

A game G satisfies the strong (ε, Δ)-stability to perturbations condition if there exists a Nash equilibrium (p*, q*) of G such that for any G' that is an ε-perturbation of G, any Nash equilibrium in G' is Δ-close to (p*, q*).

A game satisfies the strong (well-supported) (ε, Δ)-approximation stability condition if there exists a Nash equilibrium (p*, q*) of the game such that any (well-supported) ε-equilibrium is Δ-close to (p*, q*).

It is immediate from its proof that Theorem 1 applies to the strong versions of the definitions as well. We also note that our generic upper bounds in Section 4 will apply to the most relaxed version (perturbation stability) and our generic lower bound in Section 5 will apply to the most stringent version (strong approximation stability).

Range of parameters

As shown in [2], if a game satisfies the strong (ε, Δ)-approximation stability condition and has a non-trivial Nash equilibrium (an equilibrium in which the players do not both have full support), then we must have Δ = Ω(ε). We can show that if a game satisfies the (ε, Δ)-approximation stability condition and the union of all Δ-balls around all Nash equilibria does not cover the whole space,¹¹ (¹¹ If the union of all Δ-balls around all Nash equilibria does cover the whole space, this is an easy case from our perspective: any (p, q) is then Δ-close to some Nash equilibrium and hence an O(Δ)-equilibrium.) then we must also have Δ = Ω(ε) – see Lemma 5 in Appendix B. In Section 5 we further discuss the meaningful range of parameters from the point of view of the equilibrium computation problem.

Examples

Variants of many classic games, including the public goods game, matching pennies, and identical interest games, are stable. As an example, consider the following modified identical interest game. Both players have n available actions. The first action is to stay home, and the other n − 1 actions correspond to different possible meeting locations. If a player chooses action 1 (stay home), his payoff is a fixed intermediate amount no matter what the other player is doing. If the player chooses to go out to a meeting location, his payoff is high if the other player is there as well and low otherwise. This game has n pure equilibria (all pairs (e_k, e_k)) as well as additional mixed equilibria, and it is well-supported (ε, Δ)-approximation stable for a wide range of parameters. Note that it does not satisfy strong (well-supported) stability because it has multiple very distinct equilibria. For further examples see Lemma 2 in Section 5, as well as Appendix C.
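The meeting game above can be written down explicitly. In the sketch below, the numeric payoff values (home, match, miss) are assumptions chosen only to respect the ordering described in the text (a guaranteed stay-home payoff strictly between the miss and match payoffs); they are not the paper's actual numbers.

```python
import numpy as np

def meeting_game(n, home=0.5, match=1.0, miss=0.0):
    """Row player's payoff matrix for the modified identical-interest game.
    Action 0 = stay home (pays `home` regardless of the opponent); actions
    1..n-1 = meeting spots (pay `match` on a match, `miss` otherwise).
    Returns (A, B); the game is symmetric, so B = A^T."""
    A = np.full((n, n), miss)
    A[0, :] = home                 # staying home is safe for the row player
    for k in range(1, n):
        A[k, k] = match            # both players chose meeting spot k
    return A, A.T

A, B = meeting_game(4)
# Every (e_k, e_k) is a pure Nash equilibrium, giving n pure equilibria:
for k in range(4):
    p = q = np.eye(4)[k]
    assert (A @ q).max() <= p @ A @ q + 1e-12   # no row deviation helps
    assert (p @ B).max() <= p @ B @ q + 1e-12   # no column deviation helps
```

Both players staying home is an equilibrium because no meeting spot pays more than `home` when the other player is absent; meeting at any common spot is an equilibrium because unilaterally leaving drops a player from `match` to at most `home`.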

4 Equilibria in Stable Games

In this section we show that we can leverage the structure implied by stability to perturbations to improve over the best known generic bound of [18]. We start by considering ε and Δ as given. We can show:

Theorem 2

Let us fix ε and Δ with 0 < ε ≤ Δ ≤ 1. Consider a game with at most N Nash equilibria which satisfies the well-supported (ε, Δ)-approximation stability condition (or, via Theorem 1, the corresponding stability to perturbations condition). Then there exists a well-supported O(ε)-equilibrium where each player's strategy has support O(Δ² log(1 + N) / ε²).

This improves by a factor of log n / (Δ² log(1 + N)) in the exponent over the n^{O(log n / ε²)} bound of [18] for games satisfying these conditions (and reduces to the bound of [18] in the worst case, when Δ² log(1 + N) = Θ(log n)).

Proof idea

We start by showing that any Nash equilibrium (p, q) of the game must be highly concentrated. In particular, we show that for each of p, q, any portion of the distribution with substantial L1 norm (having total probability at least Δ) must also have high L2 norm: specifically, the ratio of its L2 norm to its L1 norm must be large. This in turn can be used to show that each of p, q has all but at most O(Δ) of its total probability mass concentrated in a set (which we call the high part) of size O(Δ² log(1 + N) / ε²). Once the desired concentration is proven, we can then perform a version of the [18] sampling procedure on the low parts of p and q (with a correspondingly coarser accuracy) to produce overall an O(ε)-approximate equilibrium of support only a constant factor larger. The primary challenge in this argument is to prove that p and q are concentrated.¹² (¹² We note that [2] prove an upper bound for the strong approximation stability condition using the same concentration idea. However, proving the desired concentration is significantly more challenging in our case since we deal with many equilibria.) This is done through our key lemma, Lemma 1 below. In particular, Lemma 1 can be used to show that if p (or q) had a portion with substantial L1 norm and low L2 norm, then there must exist a deviation from p (or q) that is far from all equilibria and yet is a well-supported approximate Nash equilibrium, violating the stability condition. Proving the existence of such a deviation is challenging because of the large number of equilibria that may exist. Lemma 1 synthesizes the key points of this argument and is proven through a careful probabilistic argument.
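The sampling step on the low parts can be sketched as follows. This is an illustration of Lipton-et-al.-style sparsification, not the paper's exact procedure: the high part of the distribution is kept untouched, and the low part is replaced by the empirical distribution of a few i.i.d. samples drawn from it, preserving the low part's total mass.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def sparsify_low_part(p, high, k):
    """Keep p exactly on the index set `high` and replace its remaining
    (low) part by the empirical distribution of k i.i.d. samples drawn
    from the low part, rescaled to the low part's total mass. The result
    is a distribution with support size at most |high| + k."""
    high = list(high)
    low = [i for i in range(len(p)) if i not in set(high)]
    low_mass = float(sum(p[i] for i in low))
    sparse = np.zeros(len(p))
    sparse[high] = p[high]                      # high part kept exactly
    if low_mass > 0:
        probs = np.array([p[i] for i in low]) / low_mass
        for i in rng.choice(low, size=k, p=probs):
            sparse[i] += low_mass / k           # empirical low part
    return sparse

p = np.full(10, 0.1)                            # uniform over 10 actions
s = sparsify_low_part(p, high=[0, 1], k=3)      # support size at most 5
```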

In the following we consider and let .

Lemma 1

Let us fix and , . Let be an arbitrary distribution over . Let and fix such that . Assume that the entries of can be partitioned into two sets and such that , , . Let us fix n-dimensional vectors with entries in and distributions , where and . Then there exists with such that:

  1. and

  2. for all .

  3. for all .

Proof: We show the desired result by using the probabilistic method. Let us define the random variable with probability and with probability . Define

We have . By applying McDiarmid’s inequality (see Theorem 6) and using the fact that , we obtain that with probability at least we have both:

(1)

Assume that this happens. In this case, is a legal mixed strategy for the row player and by construction we have .

Let be an arbitrary vector in . We have:

Define

so we have:

Using McDiarmid’s inequality we get that with probability at least , each of the quantities is within of its expectation; we are using here the fact that the value of can change any one of the quantities by at most , so the exponent in the McDiarmid bound is . Also, we have , and . So, we get that with probability at least we have

Finally, using the fact that , we get

yielding the desired bound . Applying the union bound over all we obtain that the probability that there exists in such that is at most .

Consider an arbitrary distribution in . Assume that . By definition, we have:

where the first inequality follows from applying relation (1) to the denominators, and the last equality follows from the fact that .

Let us denote by . We have:

therefore

We can now apply McDiarmid’s inequality (see Theorem 6) to argue that with high probability is within of its expectation. Note that Therefore:

This then implies that

By the union bound we get that the probability that there exists a in such that is at most . Summing up over all possible events we get that there is a non-zero probability of (1), (2), (3) happening, as desired.     

Proof (Theorem 2): Let (p, q) be an arbitrary Nash equilibrium. We show that each of p and q is highly concentrated, meaning that all but at most O(Δ) of its total probability mass is concentrated in a set of size O(Δ² log(1 + N) / ε²). Let us consider p (the argument for q is similar). We begin by partitioning it into its heavy and light parts. Specifically, we greedily remove the largest entries of p and place them into a set H (the heavy elements) until either:

  (a) the remaining entries (the light elements) satisfy the condition that , for as in Lemma 1, or

  (b) ,

whichever comes first. Using the fact that the game satisfies the well-supported (ε, Δ)-approximation stability condition, we will show that case (a) cannot occur first, which will imply that p is highly concentrated.

In the following, we denote by w the total probability mass over the light part. Assume by contradiction that case (a) occurs first. Note that we have , , and

so . Since (p, q) is a Nash equilibrium we know that e_i^T A q = p^T A q for all i ∈ supp(p) and p^T B e_j = p^T B q for all j ∈ supp(q).

By Lemma 1 there exists such that (1) , (2) for all and for all and (3) for all