A competitive game whose maximal Nash-equilibrium payoff requires quantum resources for its achievement

Charles D. Hill cdhill@unimelb.edu.au    Adrian P. Flitney aflitney@unimelb.edu.au School of Physics, University of Melbourne, Parkville, VIC 3010, Australia    Nicolas C. Menicucci nmenicucci@perimeterinstitute.ca Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
Abstract

While it is known that shared quantum entanglement can offer improved solutions to a number of purely cooperative tasks for groups of remote agents, controversy remains regarding the legitimacy of quantum games in a competitive setting—in particular, whether they offer any advantage beyond what is achievable using classical resources. We construct a four-player competitive game, based on the minority game, whose maximal Nash-equilibrium payoff when played with the appropriate quantum resource is greater than that obtainable by any classical means, i.e., by any resource describable by a local hidden variable model. The game is constructed in a manner analogous to a Bell inequality. This result is important in confirming the legitimacy of quantum games.

Quantum games; Bell inequalities; minority game
PACS: 03.67.-a, 02.50.Le

I Introduction

Game theory is a branch of mathematics dealing with strategic interactions of competitive agents where the outcome is contingent upon the combined actions of the agents. In 1999, game theory was formally extended into the quantum realm by replacing classical information with qubits and the player actions with quantum operators [Eisert1999, Meyer1999]. Since then much work has been done in the new discipline of quantum game theory [Flitney2008, Guo2008], and attempts have been made to put it on a more formal footing [Gutoski2006]. There have been objections that quantum games are not truly quantum mechanical and have little to do with the underlying classical games [vanEnk2000, vanEnk2002, Levine2005]. However, attempts have been made to counter these arguments [Meyer2000]. In addition, quantum games have been shown to be more efficient than classical games in terms of information transfer, and finite classical games have been shown to be a proper subset of quantum games [Lee2003], thus demonstrating that not all quantum games can be reduced to classical ones. Quantum game protocols have also been proposed that use the non-local features of quantum mechanics which have no classical analogue [Iqbal2005, Iqbal2006].

In the present work we construct a game that is a minimal quantum generalization of a possible classical game and that has a Nash equilibrium that is not achievable by any classical hidden variable model. Our model is distinguished by its competitive nature from situations, also referred to in the literature as quantum games, that involve a number of agents solving a cooperative task by quantum means [Cleve2004, Brassard2005].

The minority game was introduced in 1997 [Challet1997] as a simple multi-agent model that is able to reproduce much of the behaviour of financial markets. The agents independently select one of two choices (‘buy’ or ‘sell’) and those in the minority win, the idea being that when everyone is buying, prices are inflated and it is best to be a seller, and vice versa. In a one-shot minority game the best the players can do is to select among the alternatives at random with an unbiased coin. The simplest non-trivial situation is the four-player game: only one player can win; however, there is a fifty percent chance that there is no minority, in which case all players receive zero payoff. Versions of the minority game utilizing quantum resources have attracted attention since the probability of the no-minority case can be eliminated in the four-player game [Benjamin2001], or reduced for larger even numbers of players [Chen2004]. These results are robust even in the presence of decoherence [Flitney2007]. In addition, utilizing a particular set of tunable four-party entangled states as the quantum resource shared by the players, there is an equivalence between the optimal game payoffs and the maximal violation of the four-party MABK-type Bell inequality [Mermin1990, Ardehali1992, BK93] for the initial state [Flitney2009].
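The 1/8 expected payoff of random play in the classical four-player one-shot minority game is easy to confirm by enumeration. A minimal sketch (the payoff convention, 1 to a lone minority player and 0 otherwise, is the one described above):

```python
# Expected payoff per player in the classical four-player minority game
# when every player chooses 0 or 1 with an unbiased coin.
from itertools import product

total = 0.0
for bits in product((0, 1), repeat=4):
    p = 1 / 16  # each joint choice is equally likely
    for i in range(4):
        others = bits[:i] + bits[i + 1:]
        # player i wins iff the other three all chose the opposite bit
        if all(b != bits[i] for b in others):
            total += p  # payoff 1 to the lone minority player

expected_per_player = total / 4
print(expected_per_player)  # 0.125 = 1/8
```

Of the 16 joint choices, 8 have a lone minority player, so the summed payoff is 1/2 and each player expects 1/8.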

In the quantum minority game each player receives one qubit from a known entangled state. They can act on their qubit with a local unitary operator, the choice of which is their strategy. The qubits are then measured in the computational basis, and payoffs are awarded as in the classical game. Starting with the GHZ state |GHZ⟩ = (1/√2)(|0000⟩ + |1111⟩), if each player operates with the local unitary

s = (1/√2) [ I + (i/√2)(σ_x + σ_y) ] = (1/√2) ( 1  e^{iπ/4} ; e^{3iπ/4}  1 ),    (1)

(one suitable choice; any strategy locally equivalent to it behaves identically) the resulting superposition contains only those states where one of the four players is in the minority, and so the average payoff is 1/4, compared with 1/8 for the classical game [Benjamin2001]. When all players use this strategy the result is a Nash equilibrium, a strategy profile from which no player can improve their payoff by a unilateral change in strategy. The result is also Pareto optimal, one from which no player can improve their payoff without someone else being worse off.
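A direct simulation of the quantum minority game confirms these figures. The strategy operator below is our reconstruction of one concrete choice for Eq. (1), written as the matrix (1/√2)[[1, e^{iπ/4}], [e^{3iπ/4}, 1]]; applying it to each qubit of the GHZ state leaves only lone-minority outcomes:

```python
from itertools import product
import cmath, math

# single-qubit strategy: s = (1/sqrt2) [[1, e^{i pi/4}], [e^{3 i pi/4}, 1]]
SQ2 = math.sqrt(2)
s = [[1 / SQ2, cmath.exp(1j * math.pi / 4) / SQ2],
     [cmath.exp(3j * math.pi / 4) / SQ2, 1 / SQ2]]

# GHZ state (|0000> + |1111>)/sqrt2, indexed by 4-bit tuples
amp = {bits: 0j for bits in product((0, 1), repeat=4)}
amp[(0, 0, 0, 0)] = 1 / SQ2
amp[(1, 1, 1, 1)] = 1 / SQ2

# apply s to every qubit in turn
for q in range(4):
    new = {bits: 0j for bits in amp}
    for bits, a in amp.items():
        for out in (0, 1):
            nb = bits[:q] + (out,) + bits[q + 1:]
            new[nb] += s[out][bits[q]] * a
    amp = new

# computational-basis measurement; payoff 1 to a lone minority player
payoff = [0.0] * 4
for bits, a in amp.items():
    p = abs(a) ** 2
    for i in range(4):
        if all(b != bits[i] for b in bits[:i] + bits[i + 1:]):
            payoff[i] += p

print([round(x, 6) for x in payoff])  # [0.25, 0.25, 0.25, 0.25]
```

Every surviving basis state has exactly one player in the minority, and each player is that minority player in two of the eight equally weighted outcomes, giving 1/4 each.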

One complaint, however, that can be leveled at the quantum versions is that the same outcome can be achieved by a purely classical local hidden variable model. For example, in a four-player minority game a trusted third party could choose one of the eight classical messages 0001, 0010, 0100, 1000, 1110, 1101, 1011, or 0111 at random and then inform each of the players of their selected value. None of the players has an incentive to vary their choice, and the expected payoff is fair to all players. Such an arrangement would also yield ⟨$⟩ = 1/4.
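The hidden-variable scheme just described is easy to tabulate: each of the eight advice strings has exactly one player in the minority, and each player is that lone player in two of them.

```python
# The eight advice strings in which exactly one player is the minority.
messages = ["0001", "0010", "0100", "1000",
            "1110", "1101", "1011", "0111"]

payoff = [0.0] * 4
for m in messages:
    for i in range(4):
        # player i is paid iff her bit differs from the other three
        if all(m[j] != m[i] for j in range(4) if j != i):
            payoff[i] += 1 / len(messages)

print(payoff)  # [0.25, 0.25, 0.25, 0.25]
```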

In this paper we introduce a competitive game having a Nash-equilibrium maximal payoff that requires the use of quantum resources. In other words, this payoff cannot be achieved by players who only have access to classical resources (i.e., resources whose statistical properties can be modeled using local hidden variables). This is in contrast to previous work on cooperative games such as the XOR game, odd cycle game, and magic square game [Cleve2004]: even though those games were also shown to be equivalent to a corresponding Tsirelson-type inequality and can be used to demonstrate a Bell inequality, the critical distinction is that the game we consider has a competitive aspect, and it is therefore essential to consider the Nash equilibrium.

II Definition of the game

We now define the game that is the subject of this paper. This four-player game will be based partly on the minority game and partly on what we call the anti-minority game. While the minority game provides a payoff of 1 for the player who answers differently from the other three (if there is exactly one such player) and no payoff to any other player, the anti-minority game rewards the case where there is no minority, providing a payoff of 1/4 to all players when all players give the same answer or there is a 50/50 split. That is, all the players score on just those occasions when there would be no winner in a minority game.

The overall game is a combination of these two games. The players do not know beforehand whether the payoff matrix will be that of the minority game or that of the anti-minority game. (The way it is selected will be described shortly.) The players are allowed to meet privately before the game to agree on a joint strategy and, if they wish, prepare physical resources (classical or quantum) for each of them to bring with them to the game. The players are subsequently isolated and prevented from communicating for the rest of the game. An impartial referee (someone other than the players) then asks each of the isolated players one of two questions: either, “What is the value of X?” or, “What is the value of Y?” to which the player must respond with either +1 or -1 as she chooses. Each player may, if she wishes, use whatever physical resource she brought with her to aid in answering her question. (Cell phones and other communication devices, which are certainly physical resources, are permitted but are useless due to the no-communication requirement, which can always be enforced in principle by sufficient physical separation and strict time limits on responding to the question.) The game being played (minority or anti-minority)—and thus the payoff matrix—is determined by the set of questions asked by the referee. If the referee has asked three of the players for the value of X and one of the players for the value of Y, then the players are playing the minority game, and the payoff matrix is the same as for the standard minority game (independent of which player has been asked which particular question). If the referee asks three of the players for the value of Y and one player for the value of X, then the payoff matrix is that of the anti-minority game. The referee has chosen the question list uniformly at random from the following chart before the game begins:

XXXY, XXYX, XYXX, YXXX: minority game;
XYYY, YXYY, YYXY, YYYX: anti-minority game.

Thus, each of these lists has probability 1/8 of being asked by the referee. Other question lists (such as XXXX or XXYY) are promised not to be used. Notice that once the payoff matrix is fixed (by the total number of each question asked), there is no further dependence on which player was asked which question, with the payoff determined entirely by the players’ answers (of ±1).

The list YXXX, for instance, represents player 1 being asked for the value of Y and players 2–4 being asked for the value of X. According to the chart, this corresponds to the minority game. Now let’s say, for example, that player 3 answers +1, while players 1, 2, and 4 answer -1. Then player 3 receives a payoff of 1, and the others receive nothing. It does not matter which question player 3 was asked (in this case, “What is the value of X?”), only that his answer (+1) is different from the others’ (-1) and that the game being played (as determined by all four questions together) is the minority game.
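The referee's payoff rule can be captured in a short sketch (the function name `payoffs` and the ±1 encoding are ours):

```python
def payoffs(questions, answers):
    """questions: a string like 'YXXX'; answers: a tuple of +-1 values."""
    assert sorted(questions) in (["X", "X", "X", "Y"], ["X", "Y", "Y", "Y"])
    minority_game = questions.count("X") == 3  # three X's, one Y
    if minority_game:
        pay = [0.0] * 4
        for i in range(4):
            # the lone dissenter (if any) wins the minority game
            if all(answers[j] != answers[i] for j in range(4) if j != i):
                pay[i] = 1.0
        return pay
    # anti-minority game: everyone gets 1/4 when there is NO lone
    # dissenter, i.e. when the product of the four answers is +1
    prod = answers[0] * answers[1] * answers[2] * answers[3]
    return [0.25] * 4 if prod == 1 else [0.0] * 4

# the example from the text: list YXXX (minority game), player 3 answers +1
print(payoffs("YXXX", (-1, -1, +1, -1)))  # [0.0, 0.0, 1.0, 0.0]
```

Note that "exactly one player differs" is equivalent to the product of the four ±1 answers being -1, and "all same or 50/50 split" to the product being +1, a fact used repeatedly below.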

III Analysis of the game

In devising a strategy for this overall game, the challenge, of course, is that the players don’t know a priori whether they are playing the (competitive) minority game or the (cooperative) anti-minority game. Since all eight possibilities have equal probabilities, the two games are equally likely.

The players gain partial information about the game being played, however, once they receive their individual questions. If player 2, for example, is asked for the value of Y, then he knows that only four possible question lists remain: XYYY, YYXY, and YYYX, which correspond to the anti-minority game, as well as XYXX, which corresponds to the minority game. This means that if a player is asked for the value of Y, he will believe himself to be playing the anti-minority game with 75% certainty and the minority game with 25% certainty. Similarly, being asked for the value of X will reverse these probabilities: 75% for the minority game, 25% for the anti-minority game.
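These posterior probabilities follow from simply counting the compatible question lists:

```python
from itertools import permutations

minority = {"".join(p) for p in permutations("XXXY")}  # 4 distinct lists
anti = {"".join(p) for p in permutations("XYYY")}      # 4 distinct lists

# lists compatible with player 2 (index 1) having been asked 'Y'
compatible = [q for q in minority | anti if q[1] == "Y"]
n_anti = sum(q in anti for q in compatible)
print(n_anti / len(compatible))  # 0.75 -> anti-minority with 75% certainty
```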

In principle, the players can use this information when devising their strategies. Notice that in all cases, one player will “get it wrong” in that she will only assign 25% to the actual game being played. One might suspect that, especially if the actual game being played is the minority game, which is competitive, this asymmetry in information about the game could be used by the other three players to ensure that the odd player out is never allowed to win. We have not proven this; we only use it to illustrate the fact that since a competitive game is never completely ruled out for any player, the strategic analysis is nontrivial.

IV Bounds on expected payoff regardless of strategy

At this point, it is useful to examine the bounds on a player’s expected payoff regardless of strategy. Whether such a payoff is achievable by any particular strategy is a separate—and important—question that will be addressed shortly. But there are a few things that can be stated about the game that must hold for any strategy:

  1. The maximum total expected payoff (sum for all players) is 1. This bound is saturated if and only if the players can guarantee that they will always produce a win condition when asked any of the eight possible question lists.

  2. If there exists a Nash-equilibrium strategy that achieves a given total expected payoff, then there also exists a symmetric Nash equilibrium at that total expected payoff. (In game-theoretic terms the equilibria we shall discuss are correlated Nash equilibria [Aumann1987], since we allow the players to share information before the game. However, we shall continue to use the term ‘Nash equilibrium’ for brevity and to be consistent with the previous literature in quantum games, where the distinction between Nash equilibria and correlated equilibria has not been made.) There is nothing to distinguish the players before the game begins—they all have the same information, and they all factor into the game in the same way. Thus, if the payoffs differ for different players using a given Nash-equilibrium strategy, then a correlated strategy that permutes the players’ labels in a uniformly random fashion (using, for example, resources shared before the game) and then employs that strategy is necessarily a symmetric Nash equilibrium with the same total expected payoff.

  3. The maximum symmetric Nash-equilibrium expected payoff for each player is 1/4, which is also Pareto optimal. This follows directly from the two points above.

Note that in point (3) we are not claiming that this optimal payoff is achievable. We show below, however, that a strategy using quantum resources can be employed to achieve an expected payoff of 1/4 for each player, and we subsequently show this strategy to be a Nash equilibrium. Thus, by point (3), this quantum strategy truly is Pareto optimal and overall “the best the players can do” for this game. Following this, we show that any strategy limited to classical resources—i.e., those whose statistical properties can be reproduced using a local hidden variable model—always performs strictly worse than this, thus demonstrating that quantum resources are required for achieving the optimal outcome.

V Stabilizer formalism

We can use the stabilizer formalism to create a strategy using quantum resources that achieves the Pareto-optimal payoff of 1/4 for each player. The strategy involves the players preparing a four-qubit entangled quantum state, one qubit of which is taken by each player to be used during the game. Players each make projective measurements on their qubit of the observable σ_x or σ_y in accord with the question asked (X or Y) and record ±1 in accord with the result. The win conditions for the two possible games can be seen to be mutually exclusive and based on the product of the players’ answers: -1 for the minority game, +1 for the anti-minority game.

A stabilizer state is defined as the simultaneous +1-eigenstate of a set of commuting observables. The set of all such observables is called the stabilizer of the state [Nielsen2000]. In order for the players to guarantee a win in the cases where the minority game is being played, measurements made in accord with the questions asked must always multiply to -1. Such a state is stabilized by the following four operators: -(σ_y⊗σ_x⊗σ_x⊗σ_x), -(σ_x⊗σ_y⊗σ_x⊗σ_x), -(σ_x⊗σ_x⊗σ_y⊗σ_x), and -(σ_x⊗σ_x⊗σ_x⊗σ_y). Notice that these operators commute. Therefore they form a valid set of generators of a stabilizer group, which specifies a unique pure state

|ψ⟩ = (1/√2) ( |0000⟩ - i|1111⟩ ).    (2)

The state |ψ⟩ guarantees that there is a winner in each of the four minority games, no matter which of these four versions is played. This state is locally equivalent to the four-qubit cat state.

The full stabilizer contains more than the four operators listed above. It also contains all products of these four operators. Multiplying together all choices of three of these four generators reveals the following four operators to also be part of the stabilizer (each with a + sign): σ_y⊗σ_y⊗σ_y⊗σ_x, σ_y⊗σ_y⊗σ_x⊗σ_y, σ_y⊗σ_x⊗σ_y⊗σ_y, and σ_x⊗σ_y⊗σ_y⊗σ_y. These correspond to ensuring that the players’ answers multiply to +1 for each of the four variants of the anti-minority game. Thus, the state |ψ⟩ also guarantees the players win if the anti-minority game is played.

No matter which of the eight question lists is asked, making measurements in accord with the questions on the state |ψ⟩ guarantees that there is a winner for each of them. Furthermore, the state is symmetric under permutation of the players’ labels, so each player’s expected payoff is 1/4 when this quantum strategy is used.
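Taking the four minority-list operators to be -σ_y⊗σ_x⊗σ_x⊗σ_x and its three permutations of the Y position (our reconstruction from the win conditions), the stabilized state works out to (|0000⟩ - i|1111⟩)/√2, and the deterministic ∓1 products can be verified numerically:

```python
import math
from itertools import product

# |psi> = (|0000> - i|1111>)/sqrt(2), indexed by 4-bit tuples
psi = {b: 0j for b in product((0, 1), repeat=4)}
psi[(0, 0, 0, 0)] = 1 / math.sqrt(2)
psi[(1, 1, 1, 1)] = -1j / math.sqrt(2)

def apply_pauli(word, state):
    """Apply a Pauli word such as 'XXXY' (qubit j gets letter word[j])."""
    out = {b: 0j for b in state}
    for b, a in state.items():
        coeff, nb = a, list(b)
        for j, P in enumerate(word):
            if P == "X":
                nb[j] ^= 1
            elif P == "Y":                      # Y|0>=i|1>, Y|1>=-i|0>
                coeff *= 1j * (-1) ** nb[j]
                nb[j] ^= 1
        out[tuple(nb)] += coeff
    return out

def expval(word, state):
    phi = apply_pauli(word, state)
    return sum((state[b].conjugate() * phi[b]).real for b in state)

for w in ("YXXX", "XYXX", "XXYX", "XXXY"):      # minority lists
    print(w, round(expval(w, psi), 6))           # -1.0 each: guaranteed winner
for w in ("XYYY", "YXYY", "YYXY", "YYYX"):      # anti-minority lists
    print(w, round(expval(w, psi), 6))           # +1.0 each: guaranteed winner
```

Since each product observable has a definite value on |ψ⟩, every one of the eight question lists produces a win condition with certainty.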

VI Proof of Nash Equilibrium

We now prove that this quantum strategy is a Nash equilibrium. To do this, we must show that no player can do better by unilaterally choosing to do something other than make a projective measurement in accord with his question (X or Y) on the jointly prepared state |ψ⟩ from Section V. What are the alternatives? Since he must answer with one of two possible responses (±1), the most general measurement he can perform is a two-outcome positive-operator-valued measurement (POVM) [Nielsen2000]. This is more general than a projective measurement, but in the two-outcome case it is equivalent to adding bias and noise to such a measurement and can be simulated with classical randomness. (To see this, consider that any two-outcome POVM must consist of two elements of the form {E_+, E_- = I - E_+}, with 0 ≤ E_+ ≤ I. Thus, the two elements are diagonal in the same basis {|e_1⟩, |e_2⟩}. Transforming to that basis, we have E_+ = λ_1|e_1⟩⟨e_1| + λ_2|e_2⟩⟨e_2| and E_- = (1-λ_1)|e_1⟩⟨e_1| + (1-λ_2)|e_2⟩⟨e_2|, with 0 ≤ λ_1, λ_2 ≤ 1. (If we want λ_1 ≥ λ_2, we can just exchange the labels on |e_1⟩ and |e_2⟩.) A true projective measurement would have λ_1 = 1 and λ_2 = 0 so that E_+ = |e_1⟩⟨e_1| and E_- = |e_2⟩⟨e_2|. The statistics of the POVM depend only on the diagonal elements of the density operator when written in the basis {|e_1⟩, |e_2⟩}. Thus, they are effectively classical and can be simulated by projectively measuring in this basis and postprocessing the result with flips of weighted coins. Specifically, notice that the probability of getting the outcome corresponding to E_+ is λ_1⟨e_1|ρ|e_1⟩ + λ_2⟨e_2|ρ|e_2⟩. The conditional probabilities λ_1 and λ_2 can be interpreted as the heads-probability for a weighted coin whose weight is conditioned on the outcome of a projective measurement in the {|e_1⟩, |e_2⟩} basis. The POVM can therefore be simulated by the following procedure. First, projectively measure the qubit in this basis. Next, prepare a weighted coin whose heads-probability depends on the outcome of that measurement: if the result is |e_1⟩, then use λ_1; if it’s |e_2⟩, then use λ_2. Flip this coin. If the result is heads, return +1. Otherwise, return -1. The statistics of this coin flip are the same as those of the POVM.)
Thus, we will only consider projective measurements as the possible alternative strategies for the player (since adding bias and noise will only reduce his expected payoff).

Instead of starting directly with |ψ⟩, the players prepare for themselves the initial state |GHZ⟩ = (1/√2)(|0000⟩ + |1111⟩). We now define the unitary operation

U = e^{iπ/16} ( |0⟩⟨0| + e^{-iπ/8} |1⟩⟨1| ),    (3)

(the overall factor of e^{iπ/16} in Eq. (3) has no physical effect and is included for notational convenience only) which satisfies U^{⊗4}|GHZ⟩ = |ψ⟩ up to an irrelevant global phase. Now we can demonstrate that the strategy of each player applying U to his qubit, measuring according to the question asked (X or Y), and reporting the corresponding result is a symmetric Nash equilibrium.

Suppose that the first three players choose to follow this strategy, while the fourth, player 4, elects to measure in a different basis, which could depend on which question she is asked. Without loss of generality, this is equivalent to acting with a question-dependent unitary U_x or U_y followed by a projective measurement of σ_x or σ_y, respectively. Any such unitary operator may be parameterized in general (up to an overall phase) by

U(θ, α, β) = ( e^{iα} cos(θ/2)      i e^{iβ} sin(θ/2)
               i e^{-iβ} sin(θ/2)   e^{-iα} cos(θ/2) ),    (4)

where θ ∈ [0, π] and α, β ∈ (-π, π]. Note that U(0, π/16, β) = U for any β. We can calculate the expected payoff for player 4 for each of the question lists that could be asked of the players. Interestingly, the payoff does not depend on the game being played but rather only on the question that player 4 is asked. If she is asked for X, then

⟨$⟩_X = (1/4) [ cos²(θ/2) cos²(α - π/16) + sin²(θ/2) cos²(β + π/16) ],    (5)

and if she is asked for Y, then

⟨$⟩_Y = (1/4) [ cos²(θ/2) cos²(α - π/16) + sin²(θ/2) sin²(β + π/16) ].    (6)

By inspection, Eqs. (5) and (6) both achieve their maximum value of 1/4 by the (non-unique) choice of θ = 0 and α = π/16, that is, by choosing U(0, π/16, β) = U. Since player 4 cannot improve her payoff by a unilateral change of strategy away from U regardless of which of the games is being played or which question she is asked, this is her Nash-equilibrium strategy. By symmetry, the other players also maximize their payoffs with the same strategy. Indeed, by point (3) in Section IV, an average payoff of 1/4 for each player is the maximum possible and therefore Pareto optimal.
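The equilibrium claim can also be probed numerically without relying on the closed forms (5) and (6): prepare |GHZ⟩, let players 1–3 apply U, let player 4 apply an arbitrary U(θ, α, β), and average her payoff over the four question lists consistent with her question. A sketch under our reconstruction of U and the parameterization above:

```python
import cmath, math
from itertools import product

SQ2 = math.sqrt(2)
U = [[cmath.exp(1j * math.pi / 16), 0], [0, cmath.exp(-1j * math.pi / 16)]]

def V(t, a, b):  # general deviation, parameterized as in Eq. (4)
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[cmath.exp(1j * a) * c, 1j * cmath.exp(1j * b) * s],
            [1j * cmath.exp(-1j * b) * s, cmath.exp(-1j * a) * c]]

def apply(gate, q, amp):  # apply a 2x2 gate to qubit q of a 4-qubit state
    new = {b: 0j for b in amp}
    for b, a in amp.items():
        for out in (0, 1):
            nb = b[:q] + (out,) + b[q + 1:]
            new[nb] += gate[out][b[q]] * a
    return new

# rows are the +1 and -1 eigenbras of sigma_x / sigma_y
MEAS = {"X": [[1 / SQ2, 1 / SQ2], [1 / SQ2, -1 / SQ2]],
        "Y": [[1 / SQ2, -1j / SQ2], [1 / SQ2, 1j / SQ2]]}

def payoff4(questions, dev):
    amp = {b: 0j for b in product((0, 1), repeat=4)}
    amp[(0, 0, 0, 0)] = amp[(1, 1, 1, 1)] = 1 / SQ2     # GHZ state
    for q in range(3):
        amp = apply(U, q, amp)                          # honest players 1-3
    amp = apply(dev, 3, amp)                            # the deviator
    for q, ques in enumerate(questions):                # measurement bases
        amp = apply(MEAS[ques], q, amp)
    pay = 0.0
    for o, a in amp.items():                            # o: 0 -> +1, 1 -> -1
        p, s = abs(a) ** 2, [1 - 2 * x for x in o]
        if questions.count("X") == 3:                   # minority game
            if s[3] != s[0] and s[0] == s[1] == s[2]:
                pay += p                                # player 4 is minority
        elif s[0] * s[1] * s[2] * s[3] == 1:            # anti-minority win
            pay += 0.25 * p
    return pay

lists = [q for q in map("".join, product("XY", repeat=4))
         if q.count("X") in (1, 3)]
best = 0.0
grid = [i * math.pi / 16 for i in range(8)]             # includes 0 and pi/16
for t in grid:
    for a in grid:
        for b in grid:
            for ques in ("X", "Y"):
                ls = [q for q in lists if q[3] == ques]
                avg = sum(payoff4(q, V(t, a, b)) for q in ls) / len(ls)
                best = max(best, avg)
print(round(best, 6))  # 0.25: no unilateral deviation beats the honest 1/4
```

The scan attains its maximum exactly at the honest point (θ = 0, α = π/16), where V reduces to U.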

VII Local Hidden Variable Model

We now prove that no classical strategy (by which we mean a strategy admitting a local hidden variable model) can reproduce the quantum mechanical payoff. There are many possible classical strategies that may be employed. These include both mixed strategies, which have an element of randomness, and pure strategies, which are deterministic. However, mixed strategies can be replaced by corresponding pure strategies with all randomness relegated to the classical resource prepared before the game.

The players meet before the game to prepare a classical resource to take with them to the game. This resource can be prepared stochastically, but the existence of a local hidden-variable model (by assumption) guarantees that all statistical properties of the resource can be modeled as arising from a joint probability distribution P(λ) over a set of definite, classical resources λ. This means that at the end of the joint meeting, each player will always come away with a definite classical resource λ, even if the particular resource was chosen partially using coin flips and spins of roulette wheels. This already introduces a source of stochasticity into the problem, with deterministic preparation being a special case.

Once at the game, each player is asked a question (X or Y), after which each player must make a choice to report either the answer +1 or the answer -1. Each may use a mixed strategy to do this. That is, each player can choose also to bring with her coins for flipping and roulette wheels for spinning after the question has been asked, in order to help her make a decision of which value to report, but this is redundant. The players know the possible questions before the game starts, and the existence of hidden variable models means that all probabilities may be interpreted as lack of knowledge of the real state of affairs of a system. Thus, each player can flip a coin or spin a roulette wheel in the pre-game meeting and record the outcomes for use in the game instead of generating these outcomes after the fact. Therefore, all strategies for player i that involve stochastic dependence on the classical resource λ (i.e., those which involve additional stochastic variables μ_i) can be simulated using a strategy that depends deterministically on a new classical resource λ′ that includes those recorded outcomes. (Note that we drop the prime in what follows.) Notice that the existence of a local hidden-variable model for all statistical aspects of the classical game is required for this replacement to be made—it cannot in general be done when quantum resources (specifically, certain entangled states) are used.

We can therefore consider each player’s strategy to be defined by a function f_i, which defines a pure strategy such that the reply of player i to the question q ∈ {X, Y} is determined by f_i(q; λ) ∈ {+1, -1}, which—in a slight abuse of notation—we shorten to X_i ≡ f_i(X) and Y_i ≡ f_i(Y) for clarity. By point (1) in Section IV, simulating the Pareto-optimal Nash-equilibrium quantum strategy where each player has an expected payoff of 1/4 requires that the players be able to guarantee a winner in any of the eight cases that could be posed to them. This condition can be written as a set of simultaneous equations:

Y_1 X_2 X_3 X_4 = -1,    (7a)
X_1 Y_2 X_3 X_4 = -1,    (7b)
X_1 X_2 Y_3 X_4 = -1,    (7c)
X_1 X_2 X_3 Y_4 = -1,    (7d)
X_1 Y_2 Y_3 Y_4 = +1,    (7e)
Y_1 X_2 Y_3 Y_4 = +1,    (7f)
Y_1 Y_2 X_3 Y_4 = +1,    (7g)
Y_1 Y_2 Y_3 X_4 = +1.    (7h)

Since each f_i is a deterministic function, each of the values X_i and Y_i must take on a single value regardless of which equation it appears in. Straightforward calculation reveals that there is no solution to these equations. For instance, since each function must return +1 or -1, multiplying equations (7a), (7b), and (7c) gives

Y_1 Y_2 Y_3 X_4 = (-1)³ = -1,

in direct contradiction to (7h). Hence, no classical resource (either deterministic or stochastic, using the argument above) can be prepared that guarantees there is always a winner of each possible game. This means that, using only classical resources, ⟨$⟩ < 1/4, where the inequality is strict. Thus, there is no classical strategy which reproduces the result that can be achieved using quantum resources.

VIII Bounds on expected payoff using only classical resources

Having proved the main point of the paper, we could stop there, but we will instead go one step further by demonstrating a finite gap between the maximum symmetric expected payoff of 1/4 in the quantum case and that in the classical case. This bound will be derived independently of any particular classical strategy used. It is thus not known whether it is achievable or whether it is a Nash equilibrium. The only purpose is to quantify how well one could ever hope to do with only classical resources.

Consider first the fact that once a player has been asked his question, the list of eight equations to be satisfied shrinks to four. These always have the form of three instances of one game and one instance of the other. For instance, if player 4 is asked for the value of X, then Eqs. (7d)–(7g) are eliminated. Unfortunately, as shown in the last section, the remaining four cannot be satisfied simultaneously. The two sets are exchanged if she is asked for the value of Y, but this doesn’t help because Eqs. (7d)–(7g) have no solution either. The game is symmetric under exchange of the players’ labels, so this applies to all players equally. Thus, once the players know their questions, each of them is left with a (different) set of four simultaneously insoluble equations that cannot be ruled out.

Thus, it is not rational (even in the case of players with very clever strategies!) for any player to believe that there will always be a winner in each of the four cases he must still consider. Thus, he is forced to conclude that there will be no winner in at least one of those games, even in the best possible scenario. The question he must ask then is, Which win am I willing to sacrifice? The answer, of course, depends on how well he expects to do in each case.

Again, let’s push the envelope as far as we can, while still remaining rational. There are two win conditions that must be considered: a win of the anti-minority game always pays 1/4 to each player, while a win of the minority game has an expected payoff anywhere between 0 and 1, since there remains a question of which player is the winner. The minority game win is thus the case that requires more consideration. If a player considers the case where a minority game is being played, she knows that three players have the same information (they have each been asked for the value of X), while one player has other information (he has been asked for the value of Y). Being rational, the players who were asked X must have equal expected payoffs p_X, since there is nothing to distinguish between them, but it need not be the same as the expected payoff p_Y of the player who was asked for Y. Still, the total payoff of the minority game is 1, so 3p_X + p_Y ≤ 1 when the minority game is being considered. We are considering the best possible case for the players, so we choose to saturate this bound and write

p_Y = 1 - 3p_X,    (8)

since any other choice only reduces the players’ payoffs. Notice that 0 ≤ p_X ≤ 1/3, while 0 ≤ p_Y ≤ 1. The upper bound of 1 on p_Y corresponds to the notion that a very clever player might have a strategy that lets him win every time the minority game is being played and he is asked for Y, while p_X is upper-bounded by 1/3 because there are three players who are asked X in the case of a minority game, so the full payoff must have an equal chance of being obtained by any of them.

Let us consider now the case of a player who has been asked for the value of Y. There is now one possible way that a minority game is being played and three possible ways that an anti-minority game is being played. She knows that she cannot guarantee a win in all four cases, yet each is equally likely. She now has to choose between allowing a loss (for all players) in the minority game and allowing a loss in one of the anti-minority games. Sacrificing the minority game gives an expected payoff of

⟨$⟩ = (3/4)(1/4) = 3/16,    (9)

while sacrificing one of the three anti-minority games gives

⟨$⟩ = (1/4) p_Y + (2/4)(1/4) = p_Y/4 + 1/8.    (10)

For this case, which game to sacrifice depends on the expected payoff p_Y in the case of a win condition in the minority game. When p_Y < 1/4, then (9) exceeds (10), and the player is better off using a strategy that sacrifices the win in the minority game. However, when p_Y > 1/4, the player is better off sacrificing one of the anti-minority game wins.

Similarly, if a player is asked for the value of X, there are three possible ways that the minority game is being played and only one possible way that the anti-minority game is being played, each again being equally likely. He must choose which one to allow a loss in. Sacrificing one of the minority games gives

⟨$⟩ = (2/4) p_X + (1/4)(1/4) = (1 - p_Y)/6 + 1/16,    (11)

where we have used (8) in the last equality. Instead, sacrificing the anti-minority game gives

⟨$⟩ = (3/4) p_X = (1 - p_Y)/4.    (12)

Once again, the cross-over point between the two occurs when p_Y = 1/4, with a sacrifice of one of the minority games preferred when p_Y > 1/4 and sacrificing the anti-minority game preferred when p_Y < 1/4.

The overall expected payoff is an average of the two optimal payoffs, since each player is equally likely to be asked either question. We consider the two cases of p_Y separately. When p_Y ≤ 1/4, then

⟨$⟩ = (1/2) [ 3/16 + (1 - p_Y)/4 ].    (13)

Similarly, when p_Y ≥ 1/4, then

⟨$⟩ = (1/2) [ p_Y/4 + 1/8 + (1 - p_Y)/6 + 1/16 ].    (14)

Recalling the bounds on p_Y, plugging p_Y = 0 into (13) and plugging p_Y = 1 into (14) both give

⟨$⟩ ≤ 7/32.    (15)

Once again, we have not shown that this is achievable by any particular strategy nor that such a strategy is a Nash equilibrium. However, if a symmetric Nash-equilibrium strategy is to exist for this game under the restriction that the players only use classical resources, the best any player can hope for is an expected payoff of 7/32, instead of 1/4 using quantum resources.
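The optimization behind (15) is elementary and easy to confirm numerically, using the two case expressions as reconstructed above:

```python
# expected payoff averaged over the player's two equally likely questions
def case_low(p_y):   # p_Y <= 1/4: sacrifice the minority game when asked Y,
    return 0.5 * (3 / 16 + (1 - p_y) / 4)    # the anti-minority when asked X

def case_high(p_y):  # p_Y >= 1/4: the opposite pair of sacrifices
    return 0.5 * (p_y / 4 + 1 / 8 + (1 - p_y) / 6 + 1 / 16)

grid = [i / 1000 for i in range(1001)]       # p_Y ranging over [0, 1]
best = max([case_low(p) for p in grid if p <= 0.25] +
           [case_high(p) for p in grid if p >= 0.25])
print(best == 7 / 32, 7 / 32 < 1 / 4)  # True True
```

The first expression is maximized at p_Y = 0 and the second at p_Y = 1, and both endpoints give exactly 7/32, which is 7/8 of the quantum payoff of 1/4.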

IX Conclusion

By selecting either a minority or anti-minority game and choosing one of four measurement bases in each case, we have constructed a competitive game with an average Nash-equilibrium payoff of 1/4 that is also Pareto optimal. There is no classical prescription—that is, a set of values for measurements of σ_x and σ_y—that can produce a winner in all four minority and all four anti-minority games. Thus, no classical strategy, even allowing for initial classical communication amongst the players, can achieve the payoff that we have demonstrated using quantum entanglement. The best conceivable average payoff using only classical resources is only 7/8 of that achievable by distributing an entangled quantum state amongst the players.

Although this game is somewhat contrived, it answers the two main criticisms of quantum games presented in Ref. [vanEnk2002]: (1) that the “quantum solution” to a classical game does not faithfully solve the original classical game, and (2) that the quantum solution can also be obtained through classical means. Our responses follow. We also address the “side issue” presented in that paper involving the cooperative aspect of games, which we discuss separately below.

In response to the first objection, the quantum and classical versions of the game we consider are identical except for the resources that are available to the players in each case. In the end, each player must produce a definite classical answer (±1) for the referee, even if quantum resources were used to make the decision—i.e., the players answer with bits, not qubits. This looks to be as close as one could possibly get to a “minimal generalization” of a classical game to a quantum one: simply replace classical resources with quantum ones while requiring the same rules and interactions with the referee. This generalization from classical to quantum is more conservative than the “standard” method of quantizing a classical game presented in Ref. [Eisert1999], since the quantum resources (if any) possessed by each player can only be manipulated locally before a classical answer is produced. As such, many of the objections to the method of Ref. [Eisert1999] do not apply to our game. The second objection is addressed directly in Section VIII, where we prove that players of our game perform strictly better when they are able to use quantum resources than when they are restricted to classical ones.

It is worth noting that the only difference between the games we consider is whether the players are capable of using quantum resources in addition to classical ones or not. All other aspects of the game—such as the availability of correlated randomness, the ability to make (and break!) agreements, etc.—remain unchanged. In fact, the inability to use quantum resources can be framed in physical terms as, for example, ignorance of the fact that the world is quantum mechanical (e.g., if the game were played in the year 1900) or insufficient technical expertise in preparing and manipulating quantum resources. This means that we really just have one game that we are analyzing, with the two “versions” of it arising simply from two different scenarios regarding the knowledge and abilities of the players.

The final issue in Ref. vanEnk2002 is whether the quantum version of a classical game has made an inherently non-cooperative game into a game with a cooperative aspect. Our response is that the degree to which our game is, respectively, “cooperative” and “competitive” does not change when comparing the classical and quantum versions, since the game itself has not changed. It is worth emphasizing a related point, however: the issue of competition. The implied skepticism in Ref. vanEnk2002 about whether entanglement can help the players improve their payoff in a wholly noncooperative game is well taken, and we note that it is the uncertainty in whether it is best to compete or cooperate that appears to make the improvement possible in the case we consider. While our game retains a cooperative aspect, it also involves competition between the players. This is the key feature that distinguishes it from others in the literature that are wholly cooperative Cleve2004 (); Brassard2005 (): entangled quantum resources assist the players in improving their expected payoff in a competitive setting. Thus, one might conjecture that uncertainty about cooperation/competition is a generic requirement for allowing entanglement to assist with a competitive game (beyond what is achievable using classical correlations). We leave this question to further research.

Acknowledgements.
The authors are grateful for useful comments by David Pritchard (University of Waterloo) and Steven van Enk (University of Oregon). This work was supported by the Australian Research Council, the Australian Government, and the US National Security Agency (NSA) and the Army Research Office (ARO) under contract number W911NF-08-1-0527. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation.

References

  • (1) J. Eisert, M. Wilkens, and M. Lewenstein, “Quantum Games and Quantum Strategies,” Phys. Rev. Lett. 83, 3077 (1999).
  • (2) D. A. Meyer, “Quantum strategies,” Phys. Rev. Lett. 82, 1052 (1999).
  • (3) A. P. Flitney, “Review of quantum games,” in Game Theory: Strategies, Equilibria, and Theorems, I. N. Haugen and A. S. Nilsen, eds., (Nova Scientific Inc, 2008),  pp. 1–40.
  • (4) H. Guo, J. Zhang, and G. J. Koehler, “A survey of quantum games,” Decis. Support Syst. 46, 318 (2008).
  • (5) G. Gutoski and J. Watrous, “Towards a general theory of quantum games,” in Proc. 39th Symp. on Theory of Computing (STOC) (2007),  pp. 565–574.
  • (6) S. J. van Enk, “Quantum and classical game strategies,” Phys. Rev. Lett. 84, 789 (2000).
  • (7) S. J. van Enk and R. Pike, “Classical rules in quantum games,” Phys. Rev. A 66, 024306 (2002).
  • (8) D. K. Levine, “Quantum games have no news for economics,” working paper, Dept. of Economics, UCLA and Federal Reserve Bank of Minneapolis (2005).
  • (9) D. A. Meyer, “Meyer replies,” Phys. Rev. Lett. 84, 790 (2000).
  • (10) C. F. Lee and N. F. Johnson, “Efficiency and formalism of quantum games,” Phys. Rev. A 67, 022311 (2003).
  • (11) A. Iqbal, “Playing games with EPR type experiments,” J. Phys. A 38, 249 (2005).
  • (12) A. Iqbal, Investigations in quantum games using EPR-type set-ups (PhD thesis, University of Hull, UK, 2006), Eprint: quant-ph/0604188.
  • (13) R. Cleve, P. Hoyer, B. Toner, and J. Watrous, “Consequences and limits of nonlocal strategies,” in Proc. IEEE Conf. Comput. Complexity (2004), pp. 236–249.
  • (14) G. Brassard, A. Broadbent, and A. Tapp, “Quantum pseudo-telepathy,” Found. Phys. 35, 1877 (2005).
  • (15) D. Challet and Y.-C. Zhang, “Emergence of cooperation and organization in an evolutionary game,” Physica A 246, 407 (1997).
  • (16) S. C. Benjamin and P. M. Hayden, “Multiplayer quantum games,” Phys. Rev. A 64, 030301 (2001).
  • (17) Q. Chen, Y. Wang, J.-T. Liu, and K.-L. Wang, “N-player quantum minority game,” Phys. Lett. A 327, 98 (2004).
  • (18) A. P. Flitney and L. C. L. Hollenberg, “Multiplayer quantum Minority game with decoherence,” Quantum Inf. Comput. 7, 111 (2007).
  • (19) N. D. Mermin, “Extreme quantum entanglement in a superposition of macroscopically distinct states,” Phys. Rev. Lett. 65, 2838 (1990).
  • (20) M. Ardehali, “Bell inequalities with a magnitude of violation that grows exponentially with the number of particles,” Phys. Rev. A 46, 5375 (1992).
  • (21) A. V. Belinskii and D. N. Klyshko, “Interference of light and Bell’s theorem,” Sov. Phys. Usp. 36, 653 (1993).
  • (22) A. P. Flitney, M. Schlosshauer, C. Schmid, W. Laskowski, and L. C. L. Hollenberg, “Equivalence between Bell inequalities and quantum Minority game,” Phys. Lett. A 373, 521 (2009).
  • (23) M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, UK, 2000).
  • (24) R. J. Aumann, “Correlated equilibria as an expression of Bayesian rationality,” Econometrica 55, 1 (1987).