Efficiently characterizing games consistent with perturbed equilibrium observations
We study the problem of characterizing the set of games that are consistent with observed equilibrium play. Our contribution is to develop and analyze a new methodology based on convex optimization to address this problem for many classes of games and observation models of interest. Our approach provides a sharp, computationally efficient characterization of the extent to which a particular set of observations constrains the space of games that could have generated them. This allows us to solve a number of variants of this problem as well as to quantify the power of games from particular classes (e.g., zero-sum, potential, linearly parameterized) to explain player behavior. We illustrate our approach with numerical simulations.
This paper considers inference in game theoretic models of complete information. More precisely, we study the problem of recovering properties and characterizing parameter values of the games that are consistent with observed equilibrium play, and provide a simple procedure based on convex optimization to recover both the region of consistent games and properties of said region, in a computationally efficient manner. Further, our approach has the power to compute the size of the region of consistent games, and hence to determine when approximate point identification of the true payoff matrices (or parameter values) is possible.
Our approach differs from most related work in that we depart from the usual distributional assumptions on the observer’s knowledge of the unobserved variables and payoff shifters; instead, we adopt the weaker assumption that the unobserved variables belong to a known set. This increases the robustness of our approach, reflecting ideas from the robust optimization literature—see [9, 11, 7]. Furthermore, our approach is formally computationally efficient and, when implemented, is able to handle games of much larger size than those considered in previous work.
Our approach may be viewed as complementary to a model-driven approach, in that the tools we provide here may be used to objectively evaluate the quality of fit one achieves under certain modeling assumptions. Our approach also allows us to explore a variety of assumptions about the information that might be available to an observer of game play, and the effects that these assumptions have on constraining the space of consistent games.
1.1 Summary of results
We consider a setting where, at each step, a finite-action, 2-player game is played, and an observer observes a correlated equilibrium (a more permissive concept than Nash equilibrium) of the game. (Our framework extends to multi-player games with succinct representations; for clarity, we focus here on the two-player case. See Section 5.1 for a discussion of succinct multi-player games.) We assume that the games played at each step are closely related, in that each reflects a small perturbation in the payoffs of some underlying game; such perturbations are often referred to as payoff shifters.
In a departure from previous work, we do not make distributional assumptions on the payoff shifters, nor do we make any assumption on how the players decide which equilibrium to play, when multiple equilibria are present. Instead, we assume that the observer knows nothing of the equilibrium selection rule, and that the information the observer has on the unobserved payoff shifters is simply that the unobserved payoff shifters belong to a known set (see Section 2 for more details); this is a significantly weaker assumption than knowing exactly what distribution the payoff shifters are taken from. For example, imagine an analyst observes a routing game every day; the shifts in payoffs may come from a combination of several events such as changes in road conditions, traffic accidents, and work zones, whose potential effects on the costs of paths in a routing game may be difficult to predict and quantify precisely as a probability distribution.
In this setting, we give a computationally efficient characterization of the set of games that are consistent with the observations (Section 3.1); this set is “sharp”, in the sense that it does not contain any game that is not consistent with the observations. One of our main new contributions is computational efficiency itself: the pioneering work of Beresteanu, Molchanov, and Molinari  only checks membership of a game in the set of consistent games, and does so in a manner that is tractable in small games but intractable for larger games—see Section 1.2 for a more in-depth discussion. We also show that our framework accommodates an alternate model wherein the observer learns the expected payoff of each player at each equilibrium he sees; in our routing game example, think of an observer who sees the expected time each player spends in traffic. We refer to this setting as “partial payoff information,” and discuss it in Sections 3 and 4.
Our second main contribution is our ability to quantify the size of the set of consistent games. We give an efficient algorithm (see Section 3.4, Algorithm 1) that takes a set of observations as input and computes the diameter of the sharp region of consistent games. The diameter of the consistent set is of interest to an observer, because it gives him a measure of how sharp the conclusions he can draw from the observations are (the larger the diameter, the less sharp the conclusions), and in particular the diameter quantifies the level of approximate point identification that is achievable in a particular setting. Additionally, in Section 4, Lemmas 2 and 3, we give structural conditions on the sets of observations that allow for accurate recovery. We also exhibit examples in which said conditions do not hold, and therefore accurate recovery is not possible.
We show we can extend our framework (Section 3.3) to find the set of consistent games when restricted to games with certain linear properties, e.g., zero-sum games, potential games, and games whose utilities can be parametrized by linear functions; this allows us to determine to what extent the observed behavior is consistent with such assumptions on the underlying game.
In Section 5, we show we can extend our framework to finite games with a large number of players, provided the game has a succinct representation. We further show our framework’s potential to deal with games with infinite action spaces, using Cournot competition as an example.
1.2 Related work
One important modeling issue is whether and why one would ever observe multiple, differing behaviors of a single agent. A natural, well-established approach models different observations found in the data as stemming from random perturbations to the agents’ utilities, as in [13, 15, 25, 24, 2, 3, 4, 6, 5]. In dynamic panel models, one observes equilibria across several markets sharing common underlying parameters, and in particular  considers a setting in which a unique, fixed equilibrium is played within each market. We adopt a similar approach here, and assume that we have access to several markets or locations that play perturbed versions of the same game, and that a single (mixed) equilibrium is played in each market.
In the branch of the econometrics literature that aims to infer parameters in game-theoretic models, it is often required that the game be small or that the utilities of the players be expressible as simple functions of a restricted number of parameters. For example, 2-player entry games with entry payoffs parametrized as linear functions of a small number of variables, as seen in  and subsequent work, are among the most studied in the literature. One drawback of this literature is that when the space of parameters is high-dimensional or when multiple equilibria exist, identification of the true parameters of the game often becomes impossible, since the observations do not correspond to a unique consistent explanatory game. A number of recent papers [1, 17, 10, 21] consider instead the problem of constructing regions of parameters that contain the true value of the parameters they aim to recover from equilibrium observations of games. For example, Nekipelov, Syrgkanis, and Tardos  study a dynamic sponsored search auction game, and provide a characterization of the rationalizable set, consisting of the set of private parameters that are consistent with the observations, under the relaxed assumption that players need not follow equilibrium play, but rather use some form of no-regret learning. Relatedly, Andrews, Berry, and Jia  and Ciliberto and Tamer  compute confidence regions for the value of the true parameter, but their regions are not “sharp,” in the sense that they may contain parameter values that are not consistent with some of the implications of their models.
Perhaps closest to the present work, Beresteanu, Molchanov, and Molinari  combine random set theory and convex optimization to give a representation of the sharp identification region as the set of values for which the solution to a convex optimization program with a random objective function is almost surely (in the payoff shifters) equal to . (Our notion of the consistent set is closely analogous to the sharp identification region of ; we use different terminology to highlight that they are derived under somewhat different settings.) Hence, verifying membership of a parameter value in the sharp identification region can be done efficiently in simple settings such as entry-games with linearly parametrized payoffs. This is an exciting advance, especially when considering games with few players and small action sets; however, for computational reasons, the approach becomes impractical in large games, such as 2-player games with many actions per player:
The Beresteanu, Molchanov, and Molinari  framework can verify that a vector of parameter values belongs to the sharp identification set, but does not provide an efficient, searchable representation of the sharp identification set itself, nor an algorithm to efficiently find a point in the set.
One can verify that a parameter vector belongs to the sharp identification set by checking that a particular condition holds for almost all possible realizations of the payoff shifters. Beresteanu, Molchanov, and Molinari  further show that one can cluster payoff shifters into groups such that all perturbed games in the same group have the same set of Nash equilibria; one then must check the condition only once per group. In particular, in their entry-game example, the number of such groups is small, and thus this is a computationally tractable task. However, in more complex games, or for more general equilibrium concepts, the number of such groups can become intractably large in the number of actions available to each player.
Finally, the BMM framework  relies on being able to compute all equilibria of each of the perturbed games, which may be impractical. (In general, in the worst case, it is computationally hard to find any Nash equilibrium in time polynomial in the number of actions of each player, even for 2-player games .)
The goal of the present paper is similar to the goal of , in the sense that we wish to sharply understand the set of games that are consistent with a set of observations (for us, correlated equilibria of perturbed games). We also use the setting of their simulations as the jumping-off point for our own experimental section. However, our approach differs from that of  in two main ways. First, we make weaker assumptions about the information on the unobserved payoff shifters that is available to the observer; our approach to modeling the perturbations is inspired by the concept of uncertainty sets in robust optimization (see [9, 11, 7]). Second, our framework provides a computationally efficient characterization of the consistent set, both in theory and in practice, on games of large size, and also gives efficient and practical algorithms to find points in the consistent set, compute its diameter, and test whether it contains games with certain properties.
A handful of papers in the computer science literature have looked at slightly different but related questions, arising when observing equilibria of games whose payoffs are unknown. In particular, Bhaskar et al.  and Rogers et al.  study a network routing setting in which equilibrium behavior can be observed but edge costs are unknown, and study the query complexity of devising a variant of the game to induce desired target flows as equilibria. Barman et al.  adopt a model in which the observer observes what joint strategies are played when restricting the actions of the players in a complete information game with no perturbations, and show that data with certain properties can be rationalized by games with low complexity. Perhaps closest to our work,  gives a convex and computationally efficient characterization of the set of consistent games when the true game is succinct and its structure is known; however, their setting does not incorporate payoff shifters, instead assuming that the observer sees different equilibria of different succinct games with different structures, all of which share the same parameter value.
2 Model and setting
2.1 Players’ behavior
Consider a finite two-player game ; we will refer to it as the true or underlying game. Let be the finite sets of actions available to players and , respectively, and let and be the number of actions available to them. For every , we denote by the payoff of player when player chooses action and player chooses action . is the vector representation of the utility of player , and we often abuse notation and write . The strategies available to player are simply the distributions over . A strategy profile is a pair of strategies (distributions over actions), one for each player. A joint strategy profile is a distribution over pairs of actions (one for each player); it is not required to be a product distribution. We refer to strategies as pure when they place their entire probability mass on a single action, and mixed otherwise.
We consider perturbed versions of the game , indexed by so that the perturbed game is denoted ; one can for instance imagine each as a version of the game played in a different location or market . The same notation as for applies to the ’s.
Throughout the paper, we assume that for each , the players’ strategies are given by a correlated equilibrium of the complete information game . In the presence of several such equilibria, no assumption is made on the selection rule the players use to pick which equilibrium to play (though we assume they both play according to the same equilibrium). Correlated equilibria are defined as follows:
A probability distribution is a correlated equilibrium of game if and only if
The notion of correlated equilibrium extends the classical notion of Nash equilibrium by allowing players to act jointly; as every Nash equilibrium of a game is a correlated equilibrium of the same game, many of our results also have implications for Nash equilibria.
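The incentive constraints defining a correlated equilibrium can be checked directly; the following is a minimal sketch in Python (the function and variable names are ours, not the paper's) for a two-player bimatrix game:

```python
import numpy as np

def is_correlated_equilibrium(pi, u1, u2, tol=1e-9):
    """Check whether the joint distribution pi (an m x n array) is a
    correlated equilibrium of the bimatrix game with payoffs (u1, u2):
    no player can gain by deviating from any recommended action."""
    m, n = pi.shape
    # Player 1: deviating from a recommended row a to any row a_dev
    # must not increase expected payoff, conditioned on the recommendation.
    for a in range(m):
        for a_dev in range(m):
            if np.dot(pi[a, :], u1[a, :] - u1[a_dev, :]) < -tol:
                return False
    # Player 2: the symmetric condition over columns.
    for b in range(n):
        for b_dev in range(n):
            if np.dot(pi[:, b], u2[:, b] - u2[:, b_dev]) < -tol:
                return False
    return True
```

Note that once the joint distribution is fixed, every one of these conditions is linear in the payoffs; this is what drives the convex formulations of Section 3.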
2.2 Observation model
Our observer does not have access to the payoffs of the underlying game nor of the perturbed games , for any in . We model an observer as observing, for each perturbed game , the entire correlated equilibrium distribution , where denotes the joint probability in the th perturbed game of player playing action while player plays action . (The reader may interpret this assumption as describing a situation in which each perturbed game is played repeatedly over time, with the same, possibly mixed, equilibrium played each time, allowing the observer to infer the probability distribution over actions followed by the players. Technically, using samples to estimate an empirical distribution over actions would yield a -approximate equilibrium for some that depends on the sample size; our framework extends trivially to such approximate correlated equilibria, so for simplicity we omit further discussion of this issue.) Note that as represents a probability distribution, we require and . In this paper, we consider two variants of the model of observations just described:
In the partial payoff information setting, the observer has access to equilibrium observations , and additionally to the expected payoff of equilibrium on perturbed games , for each player and for all ; we denote said payoff and note that .
In the payoff shifter information setting, at each step , a payoff shifter is added to game , and the perturbed games result from the further addition of small perturbations to the ’s. The observer knows and observes of perturbed games . This setting represents a situation in which changes in the behavior of agents are observed as a function of changes in observable economic parameters (taxes, etc.).
While the payoff shifter information setting is the model of perturbations that is commonly used in the literature, the partial payoff information setting has not, to the best of our knowledge, been used in previous work. We introduce it to model the following types of situations:
Two firms are competing for customers in Los Angeles, and an observer follows what actions the two L.A. firms take over the course of each quarter. The observer also learns the quarterly revenue of each firm.
Several agents are playing a routing game, and the observer sees not just the routes the players choose, but also the amount of time players spend in traffic.
2.3 Observer’s knowledge about the perturbations
Our paper aims to characterize the games that explain equilibrium observations under the partial payoff and payoff shifter information settings when the perturbations are known to be “small” and the perturbed games are thus “close” to the underlying game. The next few definitions formalize our notion of closeness, and Assumption 1 formalizes the information the observer has about the perturbations added to the underlying game .
A game is -close to games with respect to metric for if and only if .
We think of as distances and therefore convex functions of the perturbations for all . For the above definitions to make sense in the context of this paper, we need a metric whose value on a set of games is small when are close in terms of payoffs. We consider the following metrics:
The sum-of-squares distance between games and is given by
The maximum distance between games and is defined as
where denotes the usual infinity norm.
Both distances are useful, in different situations. The sum-of-squares distance is small when the variance of the perturbations added to is known to be small, but it allows worst-case perturbations to be large. An example is when the ’s are randomly sampled from a distribution with mean , unbounded support, and small covariance matrix: some perturbations may then deviate significantly from the mean, but only with low probability, while the average squared perturbation remains small. When the perturbations are i.i.d. Gaussian, the sum-of-squares criterion matches the log-likelihood of the perturbations (up to constants) and follows a chi-squared distribution. The maximum distance, in contrast, is small when all perturbations are known to be small and bounded; one example is when the perturbations are uniform in a small interval .
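Concretely, one plausible instantiation of the two metrics (our own illustrative definitions; the paper's exact normalization may differ) computes, for a candidate game and a collection of perturbed games represented as payoff arrays:

```python
import numpy as np

def sum_of_squares_distance(u, perturbed):
    """Root of the total squared entrywise perturbation over all K
    perturbed games: small when perturbations have small variance,
    even if a few individual entries are large."""
    return np.sqrt(sum(np.sum((v - u) ** 2) for v in perturbed))

def maximum_distance(u, perturbed):
    """Largest single perturbed entry over all K games (infinity norm
    of the perturbations): small only when every perturbation is."""
    return max(np.max(np.abs(v - u)) for v in perturbed)
```

The first metric carves out an ellipsoidal uncertainty set for the perturbations, the second a polyhedral (box) one, matching the correspondence noted in Section 2.3.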
Throughout the paper, we make the following assumption on the information about the perturbations that is available to the observer:
Let be the underlying game and be the perturbed games that generated observations .
In the partial payoff information settings, the observer knows that is -close to games with respect to some metric and magnitude .
In the payoff shifter information setting with observed shifters , the observer knows that is -close to the unshifted games with respect to some metric and magnitude .
Assumption 1 defines a convex set the observer knows the perturbations must belong to, much like the uncertainty sets given in [7, 9, 11]. We note that the and distances we focus on define respectively an ellipsoidal and a polyhedral uncertainty set (as seen in ).
While we make Assumption 1 for convenience and simplicity of exposition, our framework is able to handle more general sets of perturbations. In particular, the results of Section 3 can easily be extended to any convex set of perturbations that has an efficient, easy-to-optimize-over representation. This includes classes of sets defined by a tractable number of linear or convex quadratic constraints, which in turn encompasses many of the uncertainty sets considered in , such as the central limit theorem or correlation information sets, and most of the typical sets presented therein.
2.4 Consistent games
In this paper, as in , we adopt an observation-driven view that describes the class of games that are consistent with the observed behavior. Given a set of observations, we define the set of consistent games as follows:
Definition 4 (-consistency).
We say a game is -consistent with the observations when there exists a set of games such that for all , is an equilibrium of , and:
If in the partial payoff information model of observations, for all players and .
If in the payoff shifter information model, .
The set of all -consistent games with respect to metric is denoted .
Under the specifications of our model, it is often the case that, given a set of observations with no additional assumption on the distribution of perturbations nor on the rule used to select among multiple equilibria, it is not possible to recover an approximation to a unique game that generated these observations (no matter what recovery framework is used). That is, the diameter of the consistent set can sometimes be too large for approximate point identification to be possible, as the following example highlights:
Take any set of observations under the no payoff information observation model, and let be the all-constant game, i.e., for some and for all . Let . Then for all , is an equilibrium of , and . That is, is a trivial game, and it is consistent with all possible observations. Even when are generated by a non-trivial , without any additional observations, an observer cannot determine whether or is the underlying game. In fact, both games are consistent with all implications of our model. We note that this issue arises regardless of how inferences are drawn from the observations, so long as the approach does not discard consistent games.
It may thus be of interest to an observer to compute the diameter of the consistent set, either to determine whether point identification is possible, or simply to understand how tightly the observations constrain the space of consistent games. We define it as follows:
The diameter of consistent set is given by
When the diameter is small, then every game in the consistent set is close to the true underlying game, and approximate point identification is achievable. When the diameter grows large, point identification is impossible independently of what framework is used for recovery, as there exist two games that are -far apart in terms of payoff, yet either could have generated all observations.
3 A convex optimization framework
In this section, we show how techniques from convex optimization can be used to recover the perturbation-minimizing explanation for a set of observations, determine the extent to which observations are consistent with certain assumptions on the underlying game, and determine whether a set of observations tightly constrains the set of games that could explain it well. The results in this section are not tied to a specific observation model.
3.1 Efficient characterization of the set of consistent games
We will show that for every , and , the set of consistent games has an efficient, convex representation.
If in the “partial payoff information” model of observations:
If in the “payoff shifter information” model:
Follows from the definition of -consistency (Definition 4). ∎
We remark that as in , our sets are sharp: any game that explains the observations belongs to this set, and any game that belongs to this set is consistent with our assumptions and observations. Indeed, if is in the consistent set, there must exist perturbations of valid magnitude (given by the corresponding ) and an equilibrium of each perturbed game that together would lead to our observations, by the definition of the consistent set.
These consistent sets have efficient convex representations, for two reasons. First, all constraints are linear except those of the form . When , this is a simple convex quadratic constraint, while when , it is equivalent to the following collection of linear constraints:
Second, the number of constraints describing each set is quadratic in the number of player actions and .
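To make this structure concrete, here is a minimal membership sketch (Python with scipy; the function and encoding are ours, not the paper's) for the maximum distance: a candidate game is consistent if, for each observed equilibrium, some perturbed game within entrywise distance eps of the candidate has that distribution as a correlated equilibrium. With the candidate fixed and the maximum distance, the check decomposes into one small LP feasibility problem per observation:

```python
import numpy as np
from scipy.optimize import linprog

def consistent_under_max_distance(u1, u2, obs, eps):
    """Does each observed joint distribution in obs admit a perturbed
    game within entrywise distance eps of the candidate (u1, u2) for
    which it is a correlated equilibrium?"""
    m, n = u1.shape
    for pi in obs:
        nv = 2 * m * n                          # vec(v1) then vec(v2)
        def idx(p, a, b): return p * m * n + a * n + b
        A_ub, b_ub = [], []
        # Correlated-equilibrium constraints for player 1 (rows):
        # sum_b pi[a,b] * (v1[a,b] - v1[a',b]) >= 0 for all a, a'.
        for a in range(m):
            for ad in range(m):
                row = np.zeros(nv)
                for b in range(n):
                    row[idx(0, a, b)] -= pi[a, b]
                    row[idx(0, ad, b)] += pi[a, b]
                A_ub.append(row); b_ub.append(0.0)
        # ... and the symmetric constraints for player 2 (columns).
        for b in range(n):
            for bd in range(n):
                row = np.zeros(nv)
                for a in range(m):
                    row[idx(1, a, b)] -= pi[a, b]
                    row[idx(1, a, bd)] += pi[a, b]
                A_ub.append(row); b_ub.append(0.0)
        # Closeness |v - u| <= eps enters as box bounds on the variables.
        lo = np.concatenate([u1.ravel() - eps, u2.ravel() - eps])
        hi = np.concatenate([u1.ravel() + eps, u2.ravel() + eps])
        res = linprog(np.zeros(nv), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=list(zip(lo, hi)), method="highs")
        if not res.success:
            return False
    return True
```

All constraints are linear, so membership is a linear feasibility problem, as the discussion above indicates; under the sum-of-squares distance, the closeness constraint instead becomes a single convex quadratic constraint coupling the observations.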
As mentioned in Section 2, in all observation models, the assumption that can easily be replaced by an assumption of the perturbations being in any tractable convex set. In particular, many of the sets considered in  fit this requirement, and they describe robust information that an observer without distributional knowledge of the perturbations could realistically have on said perturbations: for example, an observer could know that the sum or average of the perturbations satisfies certain lower- and upper-bounds.
3.2 Recovering the perturbation-minimizing consistent game
Here, we consider the problem of recovering a game that best explains a given set of observations from perturbed games, according to the desired distance metric . One reason to do so is that it enables an observer to test whether there exists any game in that is consistent with specific properties and to give a measure of how much of has said properties—see Section 3.3. Or, it could be that the observer is simply interested in recovering the “best” game according to any simple convex metric of interest. For any metric and any observation model, this can be done simply by solving:
It is easy to see that this program returns the game and the minimum value of such that (resp. ) in the partial payoff information setting (resp. the payoff shifter information setting) where the ’s satisfy all equilibrium constraints, hence Program (1) returns the perturbation-minimizing that is consistent with all observations. When , this is a linear program; when , this is a second-order cone program, using the same reasoning as in Section 3.1 (this holds even with as a variable). Both types of programs can be solved efficiently, as seen in .
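As an illustration (an assumption-laden sketch, not the paper's code), the following instantiates the perturbation-minimizing program for the partial payoff information setting under the maximum distance, where it is a single LP. The variables are the recovered game, one perturbed game per observation, and the bound eps; the constraints are the equilibrium conditions, the observed expected payoffs, and entrywise closeness. Payoffs are normalized to [-1, 1] for concreteness:

```python
import numpy as np
from scipy.optimize import linprog

def min_perturbation_game(obs, pay1, pay2):
    """obs: list of K joint distributions pi^k (m x n arrays).
    pay1, pay2: observed expected payoffs of each player at each pi^k.
    Returns (eps, u1, u2): the smallest perturbation bound and a game
    attaining it."""
    m, n = obs[0].shape
    K, g = len(obs), 2 * m * n
    nv = g * (K + 1) + 1                        # u, v^1..v^K, then eps
    def u_idx(p, a, b): return p * m * n + a * n + b
    def v_idx(k, p, a, b): return g * (k + 1) + p * m * n + a * n + b
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for k, pi in enumerate(obs):
        # Correlated-equilibrium constraints on the k-th perturbed game.
        for a in range(m):
            for ad in range(m):
                row = np.zeros(nv)
                for b in range(n):
                    row[v_idx(k, 0, a, b)] -= pi[a, b]
                    row[v_idx(k, 0, ad, b)] += pi[a, b]
                A_ub.append(row); b_ub.append(0.0)
        for b in range(n):
            for bd in range(n):
                row = np.zeros(nv)
                for a in range(m):
                    row[v_idx(k, 1, a, b)] -= pi[a, b]
                    row[v_idx(k, 1, a, bd)] += pi[a, b]
                A_ub.append(row); b_ub.append(0.0)
        # Observed expected payoffs: <pi^k, v_p^k> equals the observation.
        for p, pay in ((0, pay1), (1, pay2)):
            row = np.zeros(nv)
            for a in range(m):
                for b in range(n):
                    row[v_idx(k, p, a, b)] = pi[a, b]
            A_eq.append(row); b_eq.append(pay[k])
        # Closeness: -eps <= v^k - u <= eps, entrywise.
        for p in range(2):
            for a in range(m):
                for b in range(n):
                    for s in (1.0, -1.0):
                        row = np.zeros(nv)
                        row[v_idx(k, p, a, b)] = s
                        row[u_idx(p, a, b)] = -s
                        row[-1] = -1.0
                        A_ub.append(row); b_ub.append(0.0)
    c = np.zeros(nv); c[-1] = 1.0               # minimize eps
    bounds = [(-1.0, 1.0)] * (nv - 1) + [(0.0, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=bounds, method="highs")
    u = res.x[:g]
    return res.x[-1], u[:m * n].reshape(m, n), u[m * n:].reshape(m, n)
```

Note that without the payoff observations the all-zero game would always achieve eps = 0, in line with the trivial-game example of Section 2.4; the equality constraints are what give the program identifying power.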
3.3 Can observations be explained by linear properties?
This convex optimization-based approach can further be used to determine whether there exists a game that is compatible with the observations and also has certain additional properties, as long as the properties of interest can be expressed as a tractable number of linear equalities and inequalities. One can then solve Program (1) with said linear equalities and inequalities as additional constraints (the program remains an SOCP or LP with a tractable number of constraints), and then check whether the optimal value is greater than or less than . If the optimal value is greater than , then no game with those properties belongs to the -consistent set; if the optimal value is smaller than , then the recovered game displays the additional properties and belongs to the -consistent set. In what follows, we present a few examples of interesting properties that fit this framework.
3.3.1 Zero-sum games
A zero-sum game is a game in which for each pure strategy , the sum of the payoff of player and the payoff of player for is always . One can restrict the set of games we look for to be zero-sum games, at the cost of separability of Program (1), by adding constraints .
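As a minimal sketch (the stacking convention x = [vec(u1), vec(u2)] is our own), the zero-sum restriction amounts to one equality row per action pair, which can be appended to the equality system of Program (1):

```python
import numpy as np

def zero_sum_rows(m, n):
    """Equality rows enforcing u1[a, b] + u2[a, b] = 0 for every action
    pair (a, b), over the stacked payoff vector x = [vec(u1), vec(u2)]."""
    rows = []
    for e in range(m * n):
        row = np.zeros(2 * m * n)
        row[e] = 1.0            # entry of u1
        row[m * n + e] = 1.0    # matching entry of u2
        rows.append(row)
    return np.array(rows)
```

Since these are linear equalities, the program remains an LP or SOCP; the only cost, as noted above, is that the two players' payoff variables no longer separate.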
3.3.2 Exact potential games
3.3.3 Games generated through linear parameter fitting
It is common in the literature to recover a game with the help of a parametrized function whose parameters are calibrated using the observations. In many applications, linear functions of some parameters are considered—entry games are one example. Our framework allows one to determine whether there exist parameters for such a linear function that provide a good explanation of the observations. When such parameters exist, one can use the mathematical program to find a set of parameters describing a game that is consistent with the observations. Take two functions and that are linear in the vector of parameters and output a vector in . It suffices to add the optimization variable and the linear constraints and to Program (1) to restrict the set of games we look for to games linearly parametrized by .
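As a small sketch (the basis matrices below are hypothetical), a payoff map that is linear in a parameter vector keeps Program (1) an LP or SOCP, since substituting it into linear constraints yields constraints that are linear in the parameters:

```python
import numpy as np

# Hypothetical basis ("feature") matrices: u(theta) = sum_j theta_j * B_j.
B = np.stack([np.eye(2), np.ones((2, 2))])

def payoff_from_params(theta):
    """Linear in theta, so substituting u(theta) for the payoff variables
    of Program (1) leaves every constraint linear in theta."""
    return np.tensordot(theta, B, axes=1)
```

One then optimizes over theta (plus the perturbations) instead of over the full payoff matrices, which can sharply reduce the number of variables.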
3.4 Computing the diameter of the consistent set
In this section, we provide an algorithm (Algorithm 1) for computing the diameter of , for any given value of . Because the diameter is a property of the consistent set and not of the framework used to recover an element from said set, this tells an observer whether approximate point identification is possible independently of what framework is used for recovery. In particular, when the diameter is small, our framework approximately recovers the true underlying game (see Section 3.2). When the diameter is large, no framework can achieve approximate point identification of a true, underlying game.
Algorithm 1 is computationally efficient for the considered metrics and : it solves linear programs for , and second-order cone programs (SOCP) for with a tractable number of constraints. The algorithm computes the diameter of the consistent set:
The output of Algorithm 1 run with input satisfies .
See Appendix A. ∎
4 Consistent games with partial payoff information: when is recovery possible?
This section considers the partial payoff information variant of the observation model described in Section 2. We ask the following question: when is it possible to approximate the underlying game, in the presence of partial payoff information? We answer this question by giving bounds on the diameter of the consistent set as a function of and the observations , for both metrics and .
Recall that in this setting, for an equilibrium observed from perturbed game , the observer learns not only , but also the expected payoff of player in said equilibrium strategy on game . Similar to the previous sections, we are interested in computing a game that is close to some perturbed games that (respectively) have equilibria with payoffs . For simplicity of presentation, we recall that the optimization program that the observer solves is separable and note that he can thus solve the following convex optimization problem for player 1, and a similar optimization problem for player 2:
We take and make the following assumption for the remainder of this subsection, unless otherwise specified:
There exists a subset of size such that the vectors in are linearly independent.
We abuse notation and denote by the matrix in which row is given by the element of set , for all ; also, we write , i.e., is the part of that corresponds to player . For every , let be the p-norm. We can define the corresponding induced matrix norm that satisfies for any matrix .
Lemmas 2 and 3 highlight that if one has linearly independent observations (among the equilibrium observations) such that the induced matrix of observations is well-conditioned, and the perturbed games are obtained from the underlying game through small perturbations, any optimal solution of Program (4) necessarily recovers a game whose payoffs are close to the payoffs of the underlying game. The statements are given for both metrics introduced in Section 2.
Let be the underlying game, and be the games generating observations , where . Suppose that for player , . Let be an optimal solution of Program (4) for player with distance function . Then
For simplicity of notation, we drop the indices. We first remark that is feasible for Program (4); as is optimal, it is necessarily the case that
Let us write . We know that for all , , and thus . We can write
Let . We then have : as is a symmetric, positive semi-definite, stochastic matrix, all its eigenvalues lie between and , and
It immediately follows that ∎
Let be the underlying game, and be the games generating observations , where . Suppose that for player , . Let be an optimal solution of Program (4) for player with distance function . Then
See Appendix B. ∎
When is far from being singular, as long as the perturbations are small, we can accurately recover the payoff matrix of each player. This has a simple interpretation: the further is from being singular, the more diverse the observations are. More diverse observations mean more information for the observer, and allow for more accurate recovery. On the other hand, if the matrix is close to being singular, it means that one sees the same or similar observations over and over again, and does not gain much information about the underlying game—there are more payoffs to recover than different observations, and the system is underdetermined.
An extreme example arises when we take to be the identity matrix, in which case we observe every single pure strategy of the game and an approximation of the payoff of each of these strategies, allowing us to approximately reconstruct the game. There are also examples in which is large and there exist two games that are far from one another, yet both explain the observations, making our bound essentially tight:
Consider the square matrix with probability on the diagonal and off the diagonal, i.e., we observe four equilibria, each placing probability slightly higher than on a different action profile; the first equilibrium has a higher probability on action profile (1,1), the second on (1,2), the third on (2,1) and the last one on (2,2). Suppose the vector of observed payoffs is , where is the payoff for the equilibrium. Note that there exists a constant such that for all small enough, .
In the rest of the example, we fix the payoff matrix of player for all considered games to be all-zero so that it is consistent with every equilibrium observation, and describe a game through the payoff matrix of player . Let be the all-zero game, be the game with payoff on actions (1,1) and (1,2) and 0 everywhere else, and be the game with payoff on actions (2,1) and (2,2) and 0 everywhere else. The ’s are consistent with the payoff observations as the payoffs are constant across rows on the same column, making no deviation profitable, and the payoff of equilibria and on and is indeed , and for and on and . We have
Now, take to be the game that has payoff for action profiles (1,1) and (1,2), and for (2,1) and (2,2). Take to be the game with payoffs in the first column, and in the second column; similarly, take to be the game with payoffs in the first column and in the second column. The observations are equilibria of the ’s and yield payoff . Now, note that for ,
Therefore, both and are good explanations of the equilibrium observations, in the sense that for , is -close to and is -close to that have as equilibria, respectively. However,
which immediately implies
In the case of sparse games, in which some action profiles are never profitable to the players, and are therefore never played, one can reduce the number of linearly independent, well-conditioned observations needed for accurate recovery. Under the assumption that the action profiles that are never played with positive probability have payoffs strictly worse than the lowest payoff of any action profile played with non-zero probability, one can solve the optimization problem on the restricted set of action profiles that are observed in at least one equilibrium, and set the payoffs of the remaining action profiles to be lower than the lowest payoff of the recovered subgame, without affecting the equilibrium structure of the game. While the recovered game may not be the unique good explanation of the observations when looking at the full payoff matrix, it is unique with respect to the subgame of non-trivial actions when one has access to sufficiently many linearly independent, well-conditioned equilibrium observations.
In this section, we show extensions of our framework—first to succinct games with many players, and second to some games with infinite action spaces.
5.1 Linear succinct games (as per )
In general, computational tractability cannot be achieved as the number of players increases. A reason for this is that in the general case, an intractable, exponential number (in the number of players) of variables must be used to represent the game and its equilibria: in a game with players and actions per player, there are pure action profiles, hence variables are needed simply to represent the payoff matrices and the equilibria of the recovered games.
However, if the game and the observed equilibria have a compact representation, the equilibrium constraints can be written down using a tractable number of variables, and our framework provides efficient algorithms to find an element in the consistent set, compute its diameter, and test for linear properties. Kuleshov and Schrijvers  consider linear succinct games and show that if the structure of the succinct game is known and if we observe an equilibrium such that the “equilibrium summation property holds” (roughly, the exact expected utility of the players can be computed efficiently), then a game is consistent with the equilibrium observations if and only if a polynomial number of tractable, linear constraints are satisfied. Such constraints can easily be incorporated into our framework. (See Property and Lemma of  for more details.)
5.2 Cournot competition and infinite action space
In the general case, our framework cannot directly deal with games with infinite action spaces in a tractable way: to write down an equilibrium constraint, one needs a constraint for each of the infinite number of possible deviations. In this section, we show that, nevertheless, for some games with infinite action spaces, only a finite, tractable number of constraints is needed to characterize the equilibria; we show how to adapt our framework to such games. For illustration, we focus on the Cournot competition game with continuous spaces of production levels.
Consider a Cournot competition with players selling the same good. Each player chooses a production level , and sells all produced goods at price common to all players, and each player incurs a production cost to produce units of the good; we write G= where . We assume that is concave in each .
We assume the observer knows the function and wants to recover the costs of the players, where the underlying costs change slightly with each observation. Formally, consider that we have perturbed games such that in every perturbed game , the players play a Cournot competition with the same, commonly known price function but perturbed cost functions for each player , known to be convex. We obtain equilibrium observations , where is the equilibrium production level of player in perturbed game and , that is, .
Suppose the following hold:
The observer knows the costs belong to the space of polynomials of any chosen fixed degree ; i.e., the observer parameterizes the underlying and perturbed cost functions in the following way:
where the ’s are now the variables the observer wants to recover.
can be written as a tractable number of semidefinite constraints on the ’s and ’s (this includes, but is not limited to, the and distances).
and can be computed efficiently for , given .
Then has an efficiently computable and tractable representation (as a function of , and ) as the intersection of SDP constraints. This means in particular that optimizing a linear function over can be cast as a tractable semidefinite program, for which efficient solvers are known – see ; one such solver, which we use in the simulations of Section 6, is CVX . This enables us to efficiently recover a game in the consistent set, efficiently compute its diameter, and efficiently test for linear properties (by simply adding linear and thus SDP constraints, as needed).
To obtain such a tractable characterization, we only need to note that i) the equilibrium constraints can be rewritten as a tractable number of tractable linear constraints, and ii) convexity constraints on polynomials can be classically cast as tractable SDP constraints. This is the object of Appendix C.1 and C.2.
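As a minimal illustration of point (ii), assume costs of degree 3 on a bounded production interval (our simplifying assumption, not the general SDP construction of Appendix C.2): the second derivative of the cost is then affine in the production level, so convexity on the interval collapses to two linear inequalities on the coefficients:

```python
# Hypothetical illustration: for a degree-3 polynomial cost
#   c(q) = t0 + t1*q + t2*q**2 + t3*q**3,
# convexity on [0, q_max] means c''(q) = 2*t2 + 6*t3*q >= 0 on that interval.
# Since c'' is affine in q, it suffices to check the two endpoints, which
# yields *linear* constraints on (t2, t3) -- the degree-3 special case of
# the SDP representation of polynomial convexity discussed above.

def is_convex_on_interval(t2, t3, q_max):
    return 2 * t2 >= 0 and 2 * t2 + 6 * t3 * q_max >= 0

assert is_convex_on_interval(1.0, -0.1, 2.0)      # c'' stays nonnegative on [0, 2]
assert not is_convex_on_interval(1.0, -1.0, 2.0)  # c''(2) = 2 - 12 < 0
```

For higher degrees the nonnegativity of c'' is no longer affine, and one falls back on the sum-of-squares / SDP formulation.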
6 Simulations for entry games and Cournot competition
In this section, we run simulations for two concrete settings to illustrate the power of our approach. We first (Section 6.1) illustrate how our framework performs on a simple entry game. We then (Section 6.2) show that it is able to handle much larger games.
6.1 2-player entry game
We first consider an entry game, in which each of two players (think of them as companies deciding whether to open a store in a new location) has two actions available to him (enter the market; don’t enter the market). Entry games are common in the literature, as seen in [1, 17, 10], and, because of their simplicity, allow us to cleanly visualize the consistent region.
Each player has two actions: ; if player does not enter the market, if he does. The utility of a player is given by for some parameters and , similarly to : if player does not enter the market, his utility is zero; if he enters the market but the other player does not, has a monopoly on the market and gets non-negative utility; finally, if both players enter the market, they compete with each other and get less utility than if they had a monopoly.
In our simulations, we fix values for the parameters and generate the perturbed games as follows:
In the partial payoff information setting, we add independent Gaussian noise with mean and standard deviation to (we vary the value of ) to obtain the perturbed games .
In the payoff shifter information setting, we sample the payoff shifters such that for all , for all players , follows a normal distribution of mean and standard deviation . We then add Gaussian noise with mean and standard deviation to to obtain the perturbed games .
In both observation models, paralleling the setting of , neither an observed payoff shifter nor unknown noise is added to the payoff of action for player ; action is always assumed to yield payoff for player , independently of . In order to generate the equilibrium observations, once the perturbed games are generated, we find the set of equilibria of each of the , and sample a point in said set. In the payoff information case, we also compute .
In order to parallel the setting of Beresteanu et al. , we assume the observer knows the form of the utility function, i.e., that and , and that he aims to recover the values of and . Thus, we add linear constraints and in the optimization programs that we solve (see Program (1)) in the payoff shifter information and partial payoff information settings. Furthermore, we assume as in  that the observer knows that perturbations are only added to and , and therefore we add linear constraints and for all to the optimization problems for player in each of the observation models. All optimization problems are solved in Matlab, using CVX (see ).
Our model for entry games is similar to the ones presented in  and used in simulations in , so as to facilitate informal comparisons of the simulation results of both papers; in particular, the parametrization of the utility functions of the players in our simulations is inspired by , and noise is generated and added in a similar fashion. However, while we attempt to parallel the simulations run by Beresteanu et al. , it is important to note that this is not an apples-to-apples comparison, because of key differences in the setting. In particular, our observation models (seeing full equilibria) and the information available to the observer (no distributional assumptions) differ from those in .
6.1.1 Consistent regions for Player 1
We fix , , in all simulations, and vary the values of and . Because the observations are generated by adding i.i.d. Gaussian noise with mean and variance to the two payoffs for entry of each player, if is the underlying game and are its perturbations,
follows a chi-square distribution with degrees of freedom in the partial payoff information case (resp. in the payoff shifter case). We choose such that , and suppose the observer sees said value of . While the observer does not have access to the distribution of the perturbations, he is extremely likely to observe a perturbation magnitude of at most , so can serve as a high-probability upper bound on the magnitude of the perturbations accessible to the observer.
In all plots, the colored region is the projection onto the parameter space for player of the set of parameters that lie in the -consistent region. The darker the region, the smaller the objective value of the best explanation for the corresponding values of and . The black point at the center of the region represents the value of that minimizes .
Figure 1 shows the evolution of the consistent region when varying and in the payoff shifter information setting. The smaller the standard deviation of the unknown noise, the tighter the consistent region. On the other hand, reasonably increasing the value of can be beneficial, at least when it comes to centering the consistent region on the true values of the parameters: this comes from the fact that when the game is sufficiently perturbed, new equilibria arise and new, informative behavior is observed, while not adding significant uncertainty to the payoffs of the game.
Figure 2 shows the evolution of the consistent region when varying . The larger the value of , the larger the consistent region, and the further away its center is from the underlying, true value of the parameters.
6.1.2 Testing for linear properties
We also illustrate via simulation how our framework can test the ability of linear properties to explain observed behavior. In particular, here we test whether a set of observations is likely to be explained by a zero-sum game. We consider entry games as defined in the previous section, and assume the observer wants to test whether observations were generated by a game that is approximately zero-sum, without any information on the parametric form of the game (the observer does not know the game is an entry game).
Formally, we say a game is -zero-sum with respect to the -norm if and only if . Note that a game being -zero-sum is a linear property and therefore can be included in our framework. The smaller the value of , the more stringent the condition is and the closer must be to a zero-sum game. We use , , in all simulations.
As before, we pessimistically assume the observer sees such that
, that is, for , . Figure 3 shows for which values of one can recover a -zero-sum game with objective value less than that explains the observations, for different values of and . Values of to the left of the intersection of the red and blue lines are infeasible, while values to the right of this intersection indicate that there is a -zero-sum game that explains the observations. In both cases, we see that no zero-sum game, nor any game close to zero-sum, is a good explanation for the observations; in the payoff information setting, no game less than -zero-sum explains the observations, while in the payoff shifter setting, no game less than -zero-sum explains the observations.
6.2 Multiplayer Cournot competition
In this section, we run simulations on a Cournot competition with varying number of players. See Section 5.2 for a discussion of how our framework can be modified to accommodate Cournot games with many players and an action set of infinite size for each player. All simulations were performed on a laptop with an Intel Core i7-4700MQ at 2.40GHz and GB RAM.
6.2.1 Generating the games
Let be the number of players, and the production level of player . We fix a parameter , and set the price function to be given by ; the price function is known to the observer. We fix the form of the cost function to be linear; that is, the cost incurred by player to produce units of the good is given by . Without loss of generality, we set : it affects neither the maximization problem nor the first-order condition solved by player , and hence does not impact the chosen production level of the players.
We generate underlying Cournot games with heterogeneous, linear cost functions as follows:
We first set for every player .
We generate each of the games by adding i.i.d. truncated Gaussian noise with mean and standard deviation to the ’s. I.e., where can be written as and is a non-truncated Gaussian with mean , standard deviation . This ensures the ’s are always non-negative, hence the production costs are always non-decreasing.
Note that the same games are used in all plots and simulations.
For each of the games, we then generate perturbed games by adding truncated Gaussian noise with standard deviation to each of the ’s. As before, the noise is truncated to ensure non-negativity of the perturbed ’s. We then solve the first-order condition to obtain equilibrium observations and note that al
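The generation procedure and the first-order condition above can be sketched as follows (illustrative parameter values throughout; we assume an inverse demand of the form P(Q) = a - b*Q as a stand-in for the paper's price function, and the closed form follows from summing the first-order conditions over players, assuming an interior equilibrium):

```python
import random

# Assumed inverse demand P(Q) = a - b*Q and linear costs c_i(q) = theta_i * q.
a, b, n = 10.0, 1.0, 3
random.seed(0)

def truncated_gauss(mu, sigma):
    # Resample until nonnegative, so marginal costs (hence costs) stay nondecreasing.
    while True:
        x = random.gauss(mu, sigma)
        if x >= 0:
            return x

theta = [truncated_gauss(1.0, 0.2) for _ in range(n)]  # perturbed marginal costs

# First-order condition of player i:  a - b*Q - b*q_i - theta_i = 0.
# Summing over the n players gives  Q = (n*a - sum(theta)) / (b*(n+1)),
# and then  q_i = (a - theta_i)/b - Q  (interior equilibrium assumed).
Q = (n * a - sum(theta)) / (b * (n + 1))
q = [(a - t) / b - Q for t in theta]

for t, qi in zip(theta, q):
    assert abs(a - b * (Q + qi) - t) < 1e-9  # FOC holds at the recovered equilibrium
```

Observing the equilibrium production levels q then plays the role of the equilibrium observations fed to the recovery program.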