Learning to Communicate Implicitly by Actions

Zheng Tian, Shihao Zou, Ian Davies, Tim Warr, Lisheng Wu, Haitham Bou Ammar, Jun Wang
University College London
Huawei R&D UK
{zheng.tian.11, shihao.zou.17, ian.davies.12, tim.warr.17, lisheng.wu.17}@ucl.ac.uk
haitham.bouammar@huawei.com
jun.wang@cs.ucl.ac.uk
Abstract

In situations where explicit communication is limited, human collaborators act by learning to: (i) infer the meaning behind their partner’s actions, and (ii) convey private information about the state to their partner implicitly through actions. The first component of this learning process has been well-studied in multi-agent systems, whereas the second, which is equally crucial for successful collaboration, has not. To mimic both components mentioned above, thereby completing the learning process, we introduce a novel algorithm: Policy Belief Learning (PBL). PBL uses a belief module to model the other agent’s private information and a policy module to form a distribution over actions informed by the belief module. Furthermore, to encourage communication by actions, we propose a novel auxiliary reward which incentivizes one agent to help its partner to make correct inferences about its private information. The auxiliary reward for communication is integrated into the learning of the policy module. We evaluate our approach on a set of environments including a matrix game, a particle environment and the non-competitive bidding problem from contract bridge. We show empirically that this auxiliary reward is effective and easy to generalize. These results demonstrate that our PBL algorithm can produce strong pairs of agents in collaborative games where explicit communication is disabled.

Introduction

In collaborative multi-agent systems, communication is essential for agents to learn to behave as a collective rather than a collection of individuals. This is particularly important in the imperfect-information setting, where private information becomes crucial to success. In such cases, efficient communication protocols between agents are needed for private information exchange, coordinated joint-action exploration, and true world-state inference.

In typical multi-agent reinforcement learning (MARL) settings, designers incorporate explicit communication channels, hoping to conceptually resemble language or verbal communication, which are known to be important for human interaction [baker1999role]. Though they can facilitate collaboration in MARL, explicit communication channels come at additional computational and memory costs, making them difficult to deploy in decentralized control [roth2006communicate].

Environments where explicit communication is difficult or prohibited are common. These settings can be synthetic such as those in games, e.g., bridge and Hanabi, but also frequently appear in real-world tasks such as autonomous driving and autonomous fleet control. In these situations, humans rely upon implicit communication as a means of information exchange [rasouli2017agreeing] and are effective in learning to infer the implicit meaning behind others’ actions [heider1944experimental]. The ability to perform such inference requires the attribution of a mental state and reasoning mechanism to others. This ability is known as theory of mind [premack_woodruff_1978]. In this work, we develop agents that benefit from considering others’ perspectives and thereby explore the further development of machine theory of mind [Rabinowitz2018].

Previous works have considered ways in which an agent can, by observing an opponent’s behavior, build a model of opponents’ characteristics, objectives or hidden information either implicitly [He2016, Bard2013] or explicitly [Raileanu2018, li2018dynamic]. Whilst these works are of great value, they overlook the fact that an agent should also consider that it is being modeled and adapt its behavior accordingly, thereby demonstrating a theory of mind. For instance, in collaborative tasks, a decision-maker could choose to take actions which are informative to its teammates, whereas, in competitive situations, agents may act to conceal private information to prevent their opponents from modeling them effectively.

In this paper, we propose a generic framework, titled policy belief learning (PBL), for learning to cooperate in imperfect information multi-agent games. Our work combines opponent modeling with a policy that considers that it is being modeled. PBL consists of a belief module, which models other agents’ private information by considering their previous actions, and a policy module which combines the agent’s current observation with their beliefs to return a distribution over actions. We also propose a novel auxiliary reward for encouraging communication by actions, which is integrated into PBL. Our experiments show that agents trained using PBL can learn collaborative behaviors more effectively than a number of meaningful baselines without requiring any explicit communication. We conduct a complete ablation study to analyze the effectiveness of different components within PBL in our bridge experiment.

Related Work

Our work is closely related to [Albrecht2017, lowe2017multi, Mealing, Raileanu2018], where agents build models to estimate other agents’ hidden information. In contrast, our work enhances a “flat” opponent model with recursive reasoning. “Flat” opponent models estimate only the hidden information of opponents; recursive reasoning requires making decisions based on the mental states of others as well as the state of the environment. In contrast to works such as I-POMDP [Gmytrasiewicz:2005] and PR2 [wen2018probabilistic], where the nested belief is embedded into the training agent’s opponent model, we incorporate the level-1 nested belief “I believe that you believe” into our policy via a novel auxiliary reward.

Recently, there has been a surge of interest in using reinforcement learning (RL) approaches to learn communication protocols [FoersterAFW16a, LazaridouPB16b, MordatchA17, SukhbaatarSF16]. Most of these works enable agents to communicate via an explicit channel. Among these works, \citeauthorMordatchA17 \shortciteMordatchA17 also observe the emergence of non-verbal communication in collaborative environments without an explicit communication channel, where agents are exclusively either a sender or a receiver. Similar research is also conducted in [DEWEERD201510]. In our setting, we do not restrict agents to be exclusively a sender or a receiver of communications – agents can communicate mutually by actions. \citeauthorKnepper:2017:ICJ:2909824.3020226 \shortciteKnepper:2017:ICJ:2909824.3020226 propose a framework for implicit communication in a cooperative setting and show that various problems can be mapped into this framework. Although our work is conceptually close to [Knepper:2017:ICJ:2909824.3020226], we go further and present a practical algorithm for training agents. The recent work of [FOERSTER2018BAD] solves an imperfect-information problem like the one considered here from a different angle. We approach the problem by encouraging agents to exchange critical information through their actions, whereas \citeauthorFOERSTER2018BAD train a public player to choose an optimal deterministic policy for players in a game based on publicly observable information. In Cooperative Inverse RL (CIRL), where robotic agents try to infer a human’s private reward function from their actions [hadfield2016cooperative], optimal solutions need to produce behavior that conveys information.

\citeauthordragan2013legibility \shortcitedragan2013legibility consider how to train agents to exhibit legible behavior (i.e. behavior from which it is easy to infer the intention). Their approach is dependent on a hand-crafted cost function to attain informative behavior. Mutual information has been used as a means to promote coordination without the need for a human-engineered cost function. \citeauthorStrouse:2018:LSH:3327546.3327688 \shortciteStrouse:2018:LSH:3327546.3327688 use a mutual information objective to encourage an agent to reveal or hide its intention. In a related work, \citeauthorjaques2019social \shortcitejaques2019social utilize a mutual information objective to imbue agents with social influence. While the objective of maximal mutual information in actions can yield highly effective collaborating agents, a mutual information objective in itself is insufficient to necessitate the development of implicit communication by actions. \citeauthoreccles2019learning \shortciteeccles2019learning introduce a reciprocity reward as an alternative approach to solve social dilemmas.

A distinguishing feature of our work in relation to previous works in multi-agent communication is that we do not have a predefined explicit communication protocol or learn to communicate through an explicit channel. Information exchange can only happen via actions. In contrast to previous works focusing on unilaterally making actions informative, we focus on bilateral communication by actions where information transmission is directed to a specific party with potentially limited reasoning ability. Our agents learn to communicate through iterated policy and belief updates such that the resulting communication mechanism and belief models are interdependent. The development of a communication mechanism therefore requires either direct access to the mental state of other agents (via centralized training) or the ability to mentalize, commonly known as theory of mind. We investigate our proposed algorithm in both settings.

Problem Definition

We consider a set of agents, denoted by $\mathcal{N} = \{1, \dots, N\}$, interacting with an unknown environment by executing actions from a joint set $\mathcal{A} = \mathcal{A}^{(1)} \times \dots \times \mathcal{A}^{(N)}$, with $\mathcal{A}^{(i)}$ denoting the action space of agent $i$, and $N$ the total number of agents. To enable models that approximate real-world scenarios, we assume private and public information states. Private information states, jointly (across agents) denoted by $s^{(p)} = \left(s^{(p,1)}, \dots, s^{(p,N)}\right) \in \mathcal{S}^{(p)}$, are a set of hidden information states where $s^{(p,i)}$ is only observable by agent $i$, while public states $s^{(e)} \in \mathcal{S}^{(e)}$ are observed by all agents. We assume that hidden information states at each time step are sampled from an unknown distribution $P\!\left(s^{(p)}\right)$, while public states evolve from an initial distribution $\rho_0\!\left(s^{(e)}_0\right)$ according to a stochastic transition model $\mathcal{T}\!\left(s^{(e)}_{t+1} \mid s^{(e)}_t, \boldsymbol{a}_t\right)$. Having transitioned to a successor state according to $\mathcal{T}$, agents receive rewards from $\mathcal{R}\!\left(s_t, \boldsymbol{a}_t\right)$, where we have used $s_t = \left(s^{(p)}, s^{(e)}_t\right)$ to denote joint state descriptions that incorporate both public and private information. Finally, rewards are discounted over time by a factor $\gamma \in [0, 1)$. With this notation, our problem can be described succinctly by the tuple $\left\langle \mathcal{N}, \mathcal{S}^{(p)}, \mathcal{S}^{(e)}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma \right\rangle$, which we refer to as an imperfect-information Markov decision process (I2MDP). (We also note that our problem can be formalized as a decentralized partially observable Markov decision process (Dec-POMDP) [Bernstein2013DECPOMDP].) In this work, we simplify the problem by assuming that hidden information states are temporally static and are given at the beginning of the game.
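For concreteness, the components of the I2MDP tuple can be collected in a small container. This is only an illustrative sketch; the field names and types below are our own and simply mirror the definitions in this section.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class I2MDP:
    """Container mirroring the imperfect-information MDP tuple defined above."""
    n_agents: int                                       # |N|
    action_spaces: Sequence[Sequence[int]]              # one discrete action set per agent
    sample_private_states: Callable[[], tuple]          # draws s^(p), fixed for the episode
    sample_initial_public_state: Callable[[], object]   # draws s^(e)_0 from rho_0
    transition: Callable[[object, tuple], object]       # s^(e)_{t+1} ~ T(. | s^(e)_t, joint action)
    reward: Callable[[object, tuple], float]            # R(s_t, joint action), s_t = (s^(p), s^(e)_t)
    gamma: float                                        # discount factor
```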

We interpret the joint policy from the perspective of agent $i$ such that $\boldsymbol{\pi} = \left(\pi^{(i)}, \pi^{(-i)}\right)$, where $\pi^{(-i)}$ is a compact representation of the joint policy of all agents excluding agent $i$. In the collaborative setting, each agent is presumed to pursue the shared maximal cumulative reward expressed as

$$\max_{\boldsymbol{\pi}} \; \mathbb{E}_{\boldsymbol{\pi}}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, \mathcal{R}\!\left(s_t, a^{(i)}_t, a^{(-i)}_t\right)\right], \qquad (1)$$

where $s_t$ is the current full information state, $a^{(i)}_t$ and $a^{(-i)}_t$ are the actions taken by agent $i$ and all other agents respectively at time $t$, and $\gamma$ is the discount factor.

Policy Belief Learning

Figure 1: Matrix game experiment and results. (a) Payoff table for the matrix game. (b) Learning curves of PBL and baselines over 100 runs.

Applying naive single agent reinforcement learning (SARL) algorithms to our problem will lead to poor performance. One reason for this is the partial observability of the environment. To succeed in a partially observable environment, an agent is often required to maintain a belief state. Recall that, in our setting, the environment state is formed from the union of the private information of all agents and the publicly observable information, $s_t = \left(s^{(p)}, s^{(e)}_t\right)$. We therefore learn a belief module $\Phi^{(i)}$ to model other agents’ private information $s^{(p,-i)}$, which is the only hidden information from the perspective of agent $i$ in our setting. We assume that an agent can model $s^{(p,-i)}$ given the history of public information and actions executed by other agents, $h^{(i)}_t$. We use a neural network to parameterize the belief module, which takes in the history of public information and produces a belief state $b^{(i)}_t = \Phi^{(i)}_{\phi}\!\left(h^{(i)}_t\right)$. The belief state together with the information observable by agent $i$ forms a sufficient statistic, $\hat{s}^{(i)}_t = \left(s^{(p,i)}, b^{(i)}_t, s^{(e)}_t\right)$, which contains all the information necessary for the agent to act optimally [Astrom:1965]. We use a separate neural network to parameterize agent $i$’s policy $\pi^{(i)}_{\theta}$, which takes in the estimated environment state $\hat{s}^{(i)}_t$ and outputs a distribution over actions. As we assume hidden information is temporally static, we drop its time subscript in the rest of the paper.
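To make the module structure concrete, the sketch below shows one possible parameterization of the belief and policy modules in PyTorch. The class names, layer sizes, and the choice of an MLP over a flattened history are our own illustrative assumptions, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn


class BeliefNet(nn.Module):
    """Maps agent i's observed history h_t^(i) to a belief b_t^(i): a vector of
    per-feature probabilities over the other agent's private information."""
    def __init__(self, history_dim, private_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, private_dim), nn.Sigmoid(),  # belief entries in (0, 1)
        )

    def forward(self, history):
        return self.net(history)


class PolicyNet(nn.Module):
    """Maps the estimated state (own private info, belief, public observation)
    to a distribution over discrete actions."""
    def __init__(self, private_dim, public_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(private_dim + private_dim + public_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, own_private, belief, public_obs):
        x = torch.cat([own_private, belief, public_obs], dim=-1)
        return torch.distributions.Categorical(logits=self.net(x))
```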

The presence of multiple learning agents interacting with the environment renders the environment non-stationary. This further limits the success of SARL algorithms, which are generally designed for environments with stationary dynamics. To address this, we adopt centralized training and decentralized execution, where during training all agents are recognized as one central representative agent differing only by their observations. Under this approach, one can imagine the belief models $\Phi^{(i)}$ and $\Phi^{(j)}$ sharing parameters $\phi$. The input data, however, varies across agents due to the dependency on both $s^{(p,i)}$ and $h^{(i)}_t$. In a similar fashion, we let the policies share the parameters $\theta$. Consequently, one may think of updating $\phi$ and $\theta$ using one joint data set aggregated across agents. Without loss of generality, in the remainder of this section we discuss the learning procedure from the point of view of a single agent, agent $i$.

We first present the learning procedure of our belief module. At iteration $k$, we use the current policy $\pi_k$ to generate a data set $\mathcal{D}_k = \left\{\left(h^{(i)}_{t}, s^{(p,-i)}\right)_m\right\}_{m=1}^{M}$ of size $M$ using self-play, and learn a new belief module by minimizing:

$$\phi_{k+1} = \arg\min_{\phi} \; \frac{1}{M} \sum_{m=1}^{M} D_{\mathrm{KL}}\!\left(s^{(p,-i)}_m \,\Big\|\, \Phi_{\phi}\!\left(h^{(i)}_{t,m}\right)\right), \qquad (2)$$

where $D_{\mathrm{KL}}$ is the Kullback–Leibler (KL) divergence and we use a one-hot vector to encode the ground truth, $s^{(p,-i)}$, when we calculate the relevant KL divergence.

With the updated belief module $\Phi_{k+1}$, we learn a new policy for the next iteration, $\pi_{k+1}$, via a policy gradient algorithm. Sharing information in multi-agent cooperative games through communication reduces intractability by enabling coordinated behavior. Rather than implementing expensive protocols [heider1944experimental], we encourage agents to implicitly communicate through actions by introducing a novel auxiliary reward signal. To do so, notice that in the centralized setting agent $i$ has the ability to consult its opponent’s belief model $\Phi^{(j)}$, thereby exploiting the fact that other agents hold beliefs over its private information $s^{(p,i)}$. In fact, comparing $b^{(j)}_t$ to the ground truth $s^{(p,i)}$ enables agent $i$ to learn which actions bring these two quantities closer together and thereby learn informative behavior. This can be achieved through an auxiliary reward signal devised to encourage informative action communication:

$$r^{c}_{t} = d\!\left(\hat{b}^{(j)}_{t}, s^{(p,i)}\right) - d\!\left(b^{(j)}_{t+1}, s^{(p,i)}\right), \qquad (3)$$

where $b^{(j)}_{t+1} = \Phi^{(j)}\!\left(h^{(j)}_{t+1}\right)$ is agent $j$’s belief after observing agent $i$’s action $a^{(i)}_t$, $d(\cdot,\cdot)$ is a distance measure, and $\hat{b}^{(j)}_{t}$ is agent $j$’s best belief (so far) about agent $i$’s private information:

$$\hat{b}^{(j)}_{t} = \arg\min_{b \in \left\{b^{(j)}_0, \dots, b^{(j)}_t\right\}} d\!\left(b, s^{(p,i)}\right).$$

In other words, $r^{c}_{t}$ encourages communication as it is proportional to the improvement in the opponent’s belief (for a fixed belief model $\Phi^{(j)}$), measured by its proximity to the ground truth, resulting from the opponent observing agent $i$’s action $a^{(i)}_t$. Hence, during the policy learning step of PBL, we apply a policy gradient algorithm with a shaped reward of the form (we omit the agent index in the reward equation, as we shape rewards similarly for all agents):

$$r_t = r^{e}_t + \alpha\, r^{c}_t, \qquad (4)$$

where $r^{e}_t$ is the reward from the environment, $r^{c}_t$ is the communication reward and $\alpha$ balances the communication and environment rewards.
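A minimal sketch of Equations 3 and 4 follows, using cross-entropy over independent Bernoulli dimensions as the distance measure $d$ (the choice described in the appendix). The function names and calling convention are illustrative assumptions; the tracking of the opponent's best belief so far is left to the caller.

```python
import numpy as np


def distance(belief, ground_truth, eps=1e-8):
    """Cross-entropy between a (multi-)Bernoulli belief vector and the binary ground truth."""
    belief = np.clip(belief, eps, 1.0 - eps)
    return -np.sum(ground_truth * np.log(belief) + (1 - ground_truth) * np.log(1 - belief))


def communication_reward(best_belief_so_far, new_belief, ground_truth):
    """Equation 3: improvement in the opponent's belief about our private
    information after it observes our latest action."""
    return distance(best_belief_so_far, ground_truth) - distance(new_belief, ground_truth)


def shaped_reward(env_reward, comm_reward, alpha):
    """Equation 4: environment reward plus weighted communication reward."""
    return env_reward + alpha * comm_reward
```

Here `best_belief_so_far` is the opponent's previous belief that lies closest to the ground truth, and `new_belief` is the opponent's belief after it observes the latest action.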

1:  Initialize: Randomly initialize policy $\pi_0$ and belief $\Phi_0$
2:  Pre-train $\pi_0$
3:  for $k = 0$ to max_iterations do
4:     Sample episodes for belief training using self-play, forming the data set $\mathcal{D}_k$
5:     Update belief network $\Phi_{k+1}$ using data $\mathcal{D}_k$ by solving Equation 2
6:     Given updated beliefs $\Phi_{k+1}$, update policy $\pi_{k+1}$ (policy gradients with rewards from Equation 4)
7:  end for
8:  Output: Final policy $\pi^{*}$ and belief model $\Phi^{*}$
Algorithm 1 Per-Agent Policy Belief Learning (PBL)

Initially, in the absence of a belief module, we pre-train a policy naively by ignoring the existence of other agents in the environment. As an agent’s reasoning ability may be limited, we may then iterate between belief and policy learning multiple times until either the allocated computational resources are exhausted or the policy and belief modules converge. We summarize the main steps of PBL in Algorithm 1. Note that, although information can be leaked during training because training is centralized, decentralized test-phase execution ensures that private information remains hidden during execution.
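The iterative procedure of Algorithm 1 can be sketched as the loop below. The three callables are hypothetical placeholders standing in for self-play data collection, the belief regression of Equation 2, and a policy gradient step on the shaped reward of Equation 4; none of them is the paper's exact implementation.

```python
def policy_belief_learning(policy, belief, collect_episodes, train_belief,
                           policy_gradient_update, max_iterations):
    """Sketch of PBL (Algorithm 1): alternate belief and policy training.

    collect_episodes(policy)              -> data set of (history, private info) pairs (line 4)
    train_belief(belief, dataset)         -> belief updated by minimizing Eq. 2 (line 5)
    policy_gradient_update(policy, belief)-> policy updated with Eq. 4 rewards (line 6)
    """
    for _ in range(max_iterations):
        dataset = collect_episodes(policy)
        belief = train_belief(belief, dataset)
        policy = policy_gradient_update(policy, belief)
    return policy, belief
```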

Machine Theory of Mind

In PBL, we adopt a centralized training and decentralized execution scheme where agents share the same belief and policy models. In reality, however, it is unlikely that two people will have exactly the same reasoning process. In contrast to requiring everyone to have the same reasoning process, a person’s success in navigating social dynamics relies on their ability to attribute mental states to others. This attribution of mental states to others is known as theory of mind [premack_woodruff_1978]. Theory of mind is fundamental to human social interaction which requires the recognition of other sensory perspectives, the understanding of other mental states, and the recognition of complex non-verbal signals of emotional state [Lemaignan2015MutualMI]. In collaboration problems without an explicit communication channel, humans can effectively establish an understanding of each other’s mental state and subsequently select appropriate actions. For example, a teacher will reiterate a difficult concept to students if she infers from the students’ facial expressions that they have not understood. The effort of one agent to model the mental state of another is characterized as Mutual Modeling [dillenbourg:hal-00190240].

In our work, we also investigate whether the proposed communication reward can be generalized to a distributed setting which resembles a human application of theory of mind. Under this setting, we train a separate belief model for each agent, so that $\Phi^{(i)}_{\phi^{(i)}}$ and $\Phi^{(j)}_{\phi^{(j)}}$ do not share parameters. Without centralization, an agent can only measure how informative its action is to others with its own belief model. Assuming agents can perfectly recall their past actions and observations, agent $i$ computes its communication reward as (note the difference in the super/sub-scripts of the belief model and its parameters when compared to Equation 3):

$$r^{c}_{t} = d\!\left(\hat{b}_{t}, s^{(p,i)}\right) - d\!\left(\Phi^{(i)}_{\phi^{(i)}}\!\left(h^{(j)}_{t+1}\right), s^{(p,i)}\right),$$

where $h^{(j)}_{t+1}$ is the history observable by agent $j$ (which agent $i$ can reconstruct by recalling its own actions) and $\hat{b}_{t}$ is the best belief so far computed with agent $i$’s own belief model. In this way, an agent essentially establishes a mental state of others with its own belief model and acts upon it. We believe this could be a step towards machine theory of mind, where algorithmic agents learn to attribute mental states to others and adjust their behavior accordingly.
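Under distributed training, the only change to the reward computation is which belief model is queried: the agent scores its own action with its own belief model applied to the history the partner observes. A hedged sketch of this variant, reusing the illustrative `distance` helper from the earlier snippet:

```python
def tom_communication_reward(own_belief_model, history_seen_by_partner,
                             best_own_belief_so_far, own_private_info):
    """Distributed (theory-of-mind) variant: agent i uses its OWN belief model
    as a proxy for the partner's mental state when judging how informative its
    latest action is. `distance` is the cross-entropy helper sketched above."""
    simulated_partner_belief = own_belief_model(history_seen_by_partner)
    return (distance(best_own_belief_so_far, own_private_info)
            - distance(simulated_partner_belief, own_private_info))
```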

The ability to mentalize relieves the restriction of collaborators having the same reasoning process. However, the success of collaboration still relies on the correctness of one’s belief about the mental states of others. For instance, correctly inferring other drivers’ mental states and conventions can reduce the likelihood of traffic accidents; road safety education is therefore important as it reduces variability among drivers’ reasoning processes. In our work, this alignment amounts to the similarity between two agents’ trained belief models, which is affected by training data, initialization of weights, training algorithms and so on. We leave investigation of the robustness of collaboration to variability in collaborators’ belief models to future work.

Experiments & Results

We test our algorithms in three experiments. In the first, we validate the correctness of the PBL framework which integrates our communication reward with iterative belief and policy module training in a simple matrix game. In this relatively simple experiment, PBL achieves near optimal performance. Equipped with this knowledge, we further apply PBL to the non-competitive bridge bidding problem to verify its scalability to more complex problems. Lastly, we investigate the efficacy of the proposed communication reward in a distributed training setting.

Figure 2: a) Learning curves for non-competitive bridge bidding with a warm start from a model trained to predict the score distribution (average reward at warm start: 0.038). Details of the warm start are provided in Appendix B.3. b) Bar graph comparing PBL to variants of PQL, with the full version of PQL results as reported in [Yeh16].

Matrix Game

We test our PBL algorithm on a matrix card game in which an implicit communication strategy is required to achieve the global optimum. This game was first proposed in [FOERSTER2018BAD]. There are two players, and each player independently receives a card drawn from {Card 1, Card 2} at the beginning of the game. Player 1 acts first and Player 2 responds after observing Player 1’s action. Neither player can see the other’s hand. By the design of the payoff table (shown in Figure 1(a)), Player 1 has to use actions C and A to signify that it holds Cards 1 and 2 respectively, so that Player 2 can choose its actions optimally given this information. We compare PBL with the algorithms proposed in [FOERSTER2018BAD] and vanilla policy gradient. As can be seen from Figure 1(b), PBL performs similarly to BAD and BAD-CF on this simple game and outperforms vanilla policy gradient significantly. This demonstrates a proof of principle for PBL in a multi-agent imperfect-information coordination game.

Contract Bridge Case-Study

Figure 3: An example of a belief update trace showing how PBL agents use actions for effective communication. Upon observing East’s bid, West decreases its HCP belief in all suits. When West responds with its bid, East increases its belief in clubs. Next, East bids again; West recalculates its belief from the last time step and increases its HCP belief in hearts.

Non-competitive contract bridge bidding is an imperfect-information game that requires information exchange between agents to agree on high-quality contracts. Hence, such a game serves as an ideal test-bed for PBL. In bridge, two teams of two (North-South vs East-West) are situated in opposing positions and play a trick-taking game using a standard 52-card deck. Following a deal, the bidding and playing phases can be effectively separated. During the bidding phase, players sequentially bid for a contract until a final contract is reached. A PASS bid retains previously proposed contracts, and a contract is considered final if it is followed by three consecutive PASS bids. A non-PASS bid proposes a new contract of the form (level, suit), where the level takes integer values between one and seven, and the suit belongs to {♣, ♦, ♥, ♠, NT}. The number of tricks needed to achieve a contract is the level plus six, and an NT (no-trump) contract corresponds to bidding to win tricks without trumps. A contract-declaring team achieves points if it fulfills the contract, and if not, the points for the contract go to the opposing team. Bidding must be non-decreasing, meaning the level is non-decreasing and must increase if the newly proposed trump suit precedes or equals the currently bid suit in the ordering ♣ < ♦ < ♥ < ♠ < NT.

In this work, we focus on non-competitive bidding in bridge, where we consider only North (N) and South (S) bidding in the game, while East (E) and West (W) always bid PASS. Hence, the declaring team never changes and each deal can be viewed as an independent episode of the game. The private information of player $i$, $s^{(p,i)}$, is its hand: a 52-dimensional binary vector encoding player $i$’s 13 cards. An agent’s observation at time step $t$ consists of its hand and the bidding history, $o^{(i)}_t = \left(s^{(p,i)}, h_t\right)$. In each episode, Players N and S are dealt hands $s^{(p,N)}$ and $s^{(p,S)}$ respectively. Their hands together describe the full state of the environment, which is not fully observed by either of the two players. Since rolling out via self-play for every contract is computationally expensive, we resort to double dummy analysis (DDA) [haglund2010search] for score estimation. Interested readers are referred to [haglund2010search] and the appendix for further details. In our work, we use standard Duplicate bridge scoring rules [duplicatebridgelaw] to score games and normalize scores by dividing them by the maximum absolute score.
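To make the state representation concrete, the snippet below sketches the 52-dimensional binary hand encoding and a legality check for the non-decreasing bidding rule. The card indexing, suit labels, and helper names are our own illustrative choices.

```python
import numpy as np

SUITS = ["C", "D", "H", "S"]                    # clubs < diamonds < hearts < spades
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K", "A"]
BID_SUIT_ORDER = SUITS + ["NT"]                 # NT ranks above spades in bidding


def encode_hand(cards):
    """cards: list of (suit, rank) tuples -> 52-dimensional binary vector."""
    x = np.zeros(52, dtype=np.float32)
    for suit, rank in cards:
        x[SUITS.index(suit) * 13 + RANKS.index(rank)] = 1.0
    return x


def is_legal_bid(new_bid, current_bid):
    """A contract bid (level, suit) is legal if it strictly exceeds the current
    contract: higher level, or same level with a higher-ranked suit. PASS is
    always legal and is handled separately."""
    if current_bid is None:
        return True
    new_level, new_suit = new_bid
    cur_level, cur_suit = current_bid
    if new_level != cur_level:
        return new_level > cur_level
    return BID_SUIT_ORDER.index(new_suit) > BID_SUIT_ORDER.index(cur_suit)
```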

Benchmarking & Ablation Studies: PBL introduces several building blocks, each affecting performance in its own right. We conduct an ablation study to better understand the importance of these elements and compare against a state-of-the-art method, PQL [Yeh16]. We introduce the following baselines:

  1. Independent Player (IP): A player bids independently without consideration of the existence of the other player.

  2. No communication reward (NCR): One important question to ask is how beneficial the additional communication auxiliary reward is in terms of learning a good bidding strategy. To answer this question, we implement a baseline using the same architecture and training schedule as PBL but with the communication reward weight $\alpha$ set to zero.

  3. No PBL style iteration (NPBI): To demonstrate that multiple iterations between policy and belief training are beneficial, we compare our model to a baseline policy trained with the same number of weight updates as our model, but with no further PBL iterations after the belief network is trained in the first PBL iteration.

  4. Penetrative Q-Learning (PQL): PQL, proposed by Yeh and Lin \shortciteYeh16, was the first bidding policy for non-competitive bridge bidding learned without human domain knowledge.

Figure 4: a) Learning curves for Silent Guide. The Guide agent trained with the communication reward (CR) significantly outperforms the one trained with no communication reward (NCR). b) A trajectory of Listener (gray circle) and Guide (blue circle) with CR. Landmarks are positioned randomly and the Goal landmark (blue square) is randomly chosen at the start of each episode. c) A trajectory of Listener and Guide with NCR. Trajectories are presented with agents becoming progressively darker over time.

Figure 2(a) shows the average learning curves of our model and the three baselines for our ablation study. We obtain these curves by periodically testing the trained algorithms on a pre-generated test set of 30,000 games. Each point on a curve is an average score computed with Duplicate bridge scoring rules [duplicatebridgelaw] over the 30,000 games and 6 training runs. As can be seen, IP and NCR both initially learn faster than our model. This is reasonable, as PBL spends more time learning a communication protocol at first. However, IP converges to a local optimum very quickly and is surpassed by PBL after approximately 400 learning iterations. NCR, which has a belief module, learns a better bidding strategy than IP. However, NCR learns more slowly than PBL in the later stages of training because it has no guidance on how to convey information to its partner. PBL outperforming NPBI demonstrates the importance of iterative training between the policy and belief modules.

Restrictions of PQL: PQL [Yeh16] is the first algorithm trained to bid in bridge without human-engineered features. However, its strong bidding performance relies on heavy adaptation and heuristics for non-competitive bridge bidding. First, PQL requires a predefined maximum number of allowed bids in each deal and uses different bidding networks at different times. Our results show that it fails when a single neural network is trained for the whole game, which can be seen as a minimum requirement for most DRL algorithms. Second, PQL relies on a rule-based function for selecting the best contracts at test time. In fact, removing this second heuristic significantly reduces PQL’s performance, as reported in Figure 2(b). In addition, without pre-processing the training data as in [Yeh16], we could not reproduce the original results. To achieve state-of-the-art performance, we could use these (or other) heuristics for our bidding algorithm. However, this deviates from the focus of our work, which is to demonstrate that PBL is a general framework for learning to communicate by actions.

Belief Update Visualization: To understand how agents update their beliefs after observing a new bid, we visualize the belief update process (Figure 3). An agent’s belief about its opponent’s hand is represented as a real-valued 52-dimensional vector, which is not amenable to human interpretation. Therefore, we use high card points (HCPs) to summarize each agent’s belief. For each suit, each card is given a point score according to the mapping: A=4, K=3, Q=2, J=1, else=0. Note that while an agent’s belief is updated based on the entire history of its opponent’s bids, the difference between the agent’s belief from one round to the next is predominantly driven by the most recent bid of its opponent, as shown in Figure 3.
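The HCP summary used for visualization can be computed directly from a 52-dimensional hand or belief vector. A small sketch follows, using the same illustrative card indexing assumed in the earlier snippet.

```python
SUITS = ["C", "D", "H", "S"]            # same illustrative card indexing as above
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K", "A"]
HCP_POINTS = {"A": 4, "K": 3, "Q": 2, "J": 1}   # all other ranks score 0


def hcp_per_suit(vector52):
    """Summarize a 52-dim hand or belief vector as HCPs per suit.

    For a belief vector, each entry is the probability that the partner holds
    that card, so the result is the expected HCP count per suit."""
    hcps = {}
    for s, suit in enumerate(SUITS):
        total = 0.0
        for r, rank in enumerate(RANKS):
            total += HCP_POINTS.get(rank, 0) * vector52[s * 13 + r]
        hcps[suit] = total
    return hcps
```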

Learned Bidding Convention: Whilst our model’s bidding decisions are based entirely on raw card data, we can use high card points as a simple way to observe and summarize the decisions being made. For example, we observe that our policy opens the bidding with a spade bid when its spade HCPs are high while its HCPs in all other suits are low. We run the model on the unseen test set of 30,000 deals and summarize the learned bidding convention in Appendix C.1.

Imperfect Recall of History: The length of the action history players can recall affects the accuracy of the belief models. The extent of the impact depends on the nature of the game. In bridge, the order of bidding encodes important information. We ran an ablation study in which players can only recall the most recent bid. In this setting, players do worse (average score 0.065) than players with perfect recall. We conjecture that this is because players can extract less information, and therefore the accuracy of the belief models drops.

Silent Guide

We modify a multi-agent particle environment [lowe2017multi] to test the effectiveness of our novel auxiliary reward in a distributed setting. This environment also allows us to explore the potential for implicit communication to arise through machine theory of mind. In the environment there are two agents and three landmarks. We name the agents Guide and Listener respectively. Guide can observe Listener’s goal landmark which is distinguished by its color. Listener does not observe its goal. However, Listener is able to infer the meaning behind Guide’s actions. The two agents receive the same reward which is the negative distance between Listener and its goal. Therefore, to maximize the cumulative reward, Guide needs to tell Listener the goal landmark color. However, as the “Silent Guide” name suggests, Guide has no explicit communication channel and can only communicate to Listener through its actions.

In the distributed setting, we train separate belief modules for Guide and Listener. The two belief modules are both trained to predict a naive agent’s goal landmark color given its history within the current episode, but using different data sets. We train both the Guide and Listener policies from scratch. Listener’s policy takes Listener’s velocity, its relative distance to the three landmarks and the prediction of the belief module as input, and is trained to maximize the environment reward it receives. Guide’s policy takes its velocity, its relative distance to the landmarks and Listener’s goal as input. To encourage communication by actions, we train the Guide policy with the auxiliary reward proposed in our work. We compare our method against a naive Guide policy trained without the communication reward. The results are shown in Figure 4. Guide, when trained with the communication reward (CR), learns to inform Listener of its goal by approaching the goal it observes, and Listener learns to follow. In the NCR setting, however, Listener learns to ignore Guide’s uninformative actions and moves to the center of the three landmarks. While Guide and Listener are equipped with belief models trained on different data sets, Guide manages to use its own belief model to establish the mental state of Listener and learns to communicate through actions judged by this constructed mental state. We also observe that a trained Guide agent can work with a naive RL Listener (best reward -0.252) which has no belief model but can observe the PBL Guide agent’s actions. The success of Guide with CR shows the potential for machine theory of mind. We obtain the learning curves by repeating the training process five times and taking the shared average environment reward.

Conclusions & Future Work

In this paper, we focus on implicit communication through actions. This distinguishes our work from previous works which focus on either explicit communication or unilateral communication. We propose an algorithm combining agent modeling and communication for collaborative imperfect-information games. Our PBL algorithm iterates between training a policy and a belief module. We propose a novel auxiliary reward for encouraging implicit communication between agents, which effectively measures how much closer the opponent’s belief about a player’s private information becomes after observing the player’s action. We empirically demonstrate that our methods can achieve near optimal performance in a matrix problem and scale to complex problems such as contract bridge bidding. We also conduct an initial investigation of the further development of machine theory of mind. Specifically, we enable an agent to use its own belief model to attribute mental states to others and act accordingly. We test this framework and achieve some initial success in a multi-agent particle environment under distributed training. There are many interesting avenues for future work, such as exploring the robustness of collaboration to differences in agents’ belief models.

References

Appendix A Bridge

a.1 Playing Phase in Bridge

After the final contract is decided, the player from the declaring side who first bid the trump suit named in the final contract becomes Declarer, and Declarer’s partner becomes Dummy. The player to the left of Declarer leads the first trick. Dummy then lays their cards face up on the table and play proceeds clockwise. On each trick, the leading player plays one card from their hand, and the other players must follow the suit of the leading player if possible; otherwise, they may play a card from another suit. The trump suit is superior to all other suits and, within a suit, a higher-ranked card is superior to a lower-ranked one. A trick is won by the player who plays the card with the highest priority, and the winner of a trick leads the next trick.

a.2 Double Dummy Analysis (DDA)

Double Dummy Analysis assumes that, for a particular deal, every player’s hand is fully observed by the other players and that players always play cards to their best advantage. However, given the pair of hands $\left(s^{(p,N)}, s^{(p,S)}\right)$, the distribution of the remaining cards between the two non-bidding players East and West is still unknown. To reduce the variance of the estimate, we repeatedly sample a deal $K$ times by allocating the remaining cards randomly to East and West and then estimate the number of attainable tricks by taking the average of the DDA scores,

$$\bar{v} = \frac{1}{K} \sum_{k=1}^{K} \mathrm{DDA}\!\left(s^{(p,N)}, s^{(p,S)}, s^{(p,E)}_k, s^{(p,W)}_k\right), \qquad (5)$$

where $s^{(p,E)}_k$ and $s^{(p,W)}_k$ are the hands for East and West from the $k$-th sample respectively. For a specific contract $c$, the corresponding score is then given by $\mathrm{Score}\!\left(c, \bar{v}\right)$. In our work, we use a fixed number of samples $K$.
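A sketch of the score estimation in Equation 5 is given below. The `double_dummy_tricks` callable is a stand-in for an external double dummy solver such as the one of [haglund2010search]; the sampling loop and parameter names are our own assumptions.

```python
import random


def estimate_tricks(north_hand, south_hand, deck, trump, declarer,
                    double_dummy_tricks, num_samples):
    """Average double-dummy trick counts over random East/West completions (Eq. 5).

    double_dummy_tricks(N, S, E, W, trump, declarer) -> tricks won by the declaring side,
    a placeholder for an external double dummy solver."""
    remaining = [card for card in deck
                 if card not in north_hand and card not in south_hand]
    total = 0.0
    for _ in range(num_samples):
        cards = remaining[:]
        random.shuffle(cards)
        east, west = cards[:13], cards[13:26]
        total += double_dummy_tricks(north_hand, south_hand, east, west, trump, declarer)
    return total / num_samples
```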

a.3 Bridge Scoring

  Score(level $n$, trump suit, tricks won $v$; scale, bias)
  score $\leftarrow$ 0
  if $v \ge n + 6$ then
     score $\leftarrow$ score $+\; n \cdot$ scale $+$ bias {Contract tricks}
     if score $\ge$ 100 then
        score $\leftarrow$ score $+$ game bonus {Game Bonus}
     else
        score $\leftarrow$ score $+$ partscore bonus {PARTSCORE}
     end if
     if $n = 6$ then
        score $\leftarrow$ score $+$ slam bonus {Slam bonus}
     else if $n = 7$ then
        score $\leftarrow$ score $+$ grand slam bonus {Grand Slam bonus}
     end if
     if $v > n + 6$ then
        score $\leftarrow$ score $+\; (v - n - 6) \cdot$ scale {Over-tricks}
     end if
  else
     score $\leftarrow$ score $-\; (n + 6 - v) \cdot$ under-trick penalty {Under-tricks}
  end if
Algorithm 2 Bridge Duplicate Scoring

Algorithm 2 shows how we score a game under Duplicate Bridge scoring rules. We obtain the average number of attainable tricks $\bar{v}$ using Double Dummy Analysis [haglund2010search] given the hands of players North and South, the declarer and the trump suit. The score function above has a scale and bias for each trump suit. The scale is 20 for ♣ and ♦ and 30 for all others. The bias is zero for all trumps, except NT which has a bias of 10.
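The control flow of Algorithm 2 can be expressed as the Python sketch below. The structure mirrors the algorithm; the bonus and penalty constants are left as parameters because their exact values (and any vulnerability treatment) are not reproduced here, so this is an assumption-laden sketch rather than our exact scoring code.

```python
def duplicate_score(level, trump, tricks_won,
                    game_bonus, partscore_bonus, slam_bonus,
                    grand_slam_bonus, undertrick_penalty):
    """Sketch of Algorithm 2: Duplicate-style scoring of a contract.

    Scale is 20 for clubs/diamonds and 30 otherwise; NT carries a bias of 10."""
    scale = 20 if trump in ("C", "D") else 30
    bias = 10 if trump == "NT" else 0
    needed = level + 6                                   # tricks required for the contract
    if tricks_won >= needed:
        score = level * scale + bias                     # contract tricks
        score += game_bonus if score >= 100 else partscore_bonus
        if level == 6:
            score += slam_bonus                          # small slam bonus
        elif level == 7:
            score += grand_slam_bonus                    # grand slam bonus
        score += (tricks_won - needed) * scale           # over-tricks
        return score
    return -(needed - tricks_won) * undertrick_penalty   # under-tricks
```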

Note that Double is only a valid bid in response to a contract proposed by one’s opponents. Also, a Redouble bid must be preceded by a Double bid. In the non-competitive game, opponents do not propose contracts, so these options are naturally not included.

a.4 Double Pass Analysis

At the beginning of bidding, when both players bid PASS (a Double Pass), all players’ hands are re-dealt and a new episode starts. If we ignore these episodes in training, a naive strategy emerges in which a player always bids PASS unless it is highly confident about its hand and therefore bids at level 1, where the risk is minimal. In this work, we are interested in solving problems where private information needs to be inferred from observed actions for better performance in a game. Therefore, this strategy is less meaningful, and the opportunity cost of bidding PASS can be high when a player could have won the game with a high reward. To reflect this opportunity cost in training, we set the reward for a Double Pass as the negative of the maximum attainable reward given the players’ hands: $r_{\mathrm{DoublePass}} = -\max_{c} \mathrm{Score}\!\left(c, \bar{v}\right)$. Therefore, a player is penalized heavily for bidding PASS if it could otherwise have obtained a high reward, and rewarded slightly if it could never have won in the current episode. It is worth noting that an initial PASS bid can convey information; however, if it is followed by a second PASS bid the game ends and hence no further information is imparted.

We note that, by Duplicate Bridge Laws 77 and 22 of [duplicatebridgelaw], if the game opens with four PASS bids all players score zero. In our setting, however, East and West bid PASS regardless of their hands, which would give North and South an extra advantage. Therefore, we use the Double Pass penalty above to avoid results where North and South only bid when they have particularly strong hands; we discourage fully risk-averse behavior.

Appendix B Experiment Details in Bridge

b.1 Offline Environment

Generating a new pair of hands for North and South and pre-calculating scores for every possible contract at the start of each episode during policy training is time-inefficient. Instead, we pre-generate 1.5 million hands and score them in advance, and then sample episodes from this data set when we train a policy. We also generate a separate test data set containing 30,000 hands for testing our models.

b.2 Model Architecture

Figure 5: The architecture of our Policy Network and Belief Network in bridge. Env represents the environment and FC stands for fully connected layers. For player $i$, we add its private information $s^{(p,i)}$ and belief $b^{(i)}$ together to form the input to its policy network $\pi^{(i)}$. Player $i$’s action becomes part of player $j$’s history $h^{(j)}$, which is the input to player $j$’s belief network $\Phi^{(j)}$.

We parameterize the policy and belief modules with two neural networks, $\pi_{\theta}$ and $\Phi_{\phi}$, respectively. Fig. 5 shows our model for the bridge experiments. The weights of the belief and policy modules are shared between players. The input to a player’s policy is the sum of its private information $s^{(p,i)}$ and its belief $b^{(i)}$. The last layer of the belief module has a sigmoid activation function, so that the belief vector has elements in the interval $(0, 1)$. By adding the belief $b^{(i)}$ to $s^{(p,i)}$, we can reuse the parameters associated with $s^{(p,i)}$ for $b^{(i)}$ and avoid training extra parameters.

b.3 Policy Pre-training

In the first PBL iteration, to have a good initial policy and avoid one with low entropy, we train our policy to predict a distribution formed by applying a softmax with temperature $\tau$ to the pre-calculated contract scores. The loss for the pre-trained policy given a hand $s^{(p,i)}$ with pre-calculated score vector $\mathbf{v}$ is:

$$\mathcal{L}_{\mathrm{pre}}(\theta) = D_{\mathrm{KL}}\!\left(\mathrm{softmax}\!\left(\mathbf{v} / \tau\right) \,\Big\|\, \pi_{\theta}\!\left(\cdot \mid s^{(p,i)}\right)\right),$$

where $D_{\mathrm{KL}}$ is the KL divergence. To have a fair comparison with other benchmarks, all our benchmarks are initialized with this pre-trained policy. Supervising a bidding policy to predict pre-calculated scores for all actions only provides it with a basic understanding of its hand.
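A small sketch of the pre-training target: a temperature-controlled softmax over the pre-computed contract scores, used as a soft label for the initial policy. The temperature value and array names are illustrative, not the values used in our experiments.

```python
import numpy as np


def pretrain_target(contract_scores, temperature):
    """Soft label for policy pre-training: softmax over pre-computed scores.

    contract_scores: array of estimated scores, one per available bid."""
    logits = np.asarray(contract_scores, dtype=np.float64) / temperature
    logits -= logits.max()                  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

The pre-trained policy is then fit to this target distribution by minimizing the KL divergence above.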

b.4 Policy Training

We utilize Proximal Policy Optimization (PPO) [schulman2017proximal] for policy training. Optimization is performed with the Adam optimizer [kingma2014adam] with its standard hyperparameters $\beta_1$, $\beta_2$ and $\epsilon$. The initial learning rate is decayed exponentially with a fixed decay rate and decay step.

The distance measure used in the communication reward is cross-entropy, and we treat each dimension of the belief and ground truth as an independent Bernoulli distribution.

We train PBL for 8 iterations; in each iteration we perform 200 policy gradient updates, sampling 5000 episodes each time. Each mini-batch is then a sub-sample of 2048 episodes. The initial communication weight $\alpha$ is decayed gradually during training. We train all other baselines with the same PPO hyperparameters.

b.5 Learning A Belief Network

When a player tries to model its partner’s hand based on the observed bidding history, we assume it can omit the constraint that its partner holds exactly 13 cards. We therefore treat the prediction of the partner’s hand given the observed bidding history as a 52-label classification problem, where each label represents one corresponding card being in the partner’s hand. In other words, we treat each card of the 52-card deck being in the partner’s hand as an independent Bernoulli variable, and we train the belief network by maximizing the joint likelihood of these 52 Bernoulli distributions given a bidding history $h$. This gives the loss for the belief network as:

$$\mathcal{L}(\phi) = -\sum_{n=1}^{52} \left[\, y_n \log \hat{y}_n + \left(1 - y_n\right) \log\!\left(1 - \hat{y}_n\right) \right],$$

where $y_n$ and $\hat{y}_n$ are elements of the binary encoding vector of the partner’s hand $s^{(p,j)}$ and of the agent’s belief $b^{(i)}$, respectively. The reasoning behind this assumption is that we consider it more important to have an accurate prediction over an invalid distribution than a less accurate prediction over a valid distribution, as the belief itself is already an approximation.
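Treating each of the 52 cards as an independent Bernoulli variable, the loss above is the summed binary cross-entropy. A minimal PyTorch sketch, with tensor shapes assumed for illustration:

```python
import torch.nn.functional as F


def belief_loss(predicted_belief, partner_hand):
    """Summed binary cross-entropy over the 52 card indicators, averaged over the batch.

    predicted_belief: (batch, 52) float tensor of probabilities from the belief network.
    partner_hand:     (batch, 52) float tensor, binary encoding of the partner's actual hand."""
    per_batch_sum = F.binary_cross_entropy(predicted_belief, partner_hand, reduction="sum")
    return per_batch_sum / predicted_belief.shape[0]
```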

For each iteration of belief training, we generate 300,000 data episodes with the current policy to train the belief network. Optimization is performed with the Adam optimizer using an exponentially decayed learning rate. The batch size is 1024. We split the data set into a training portion and a held-out portion used for an early-stopping check to prevent overfitting.

Appendix C Learned Opponent Model and Bidding Convention

c.1 High Card Points Tables

Bid    ♠     ♥     ♦     ♣     Total
PASS   1.4   1.3   1.4   1.4   5.4
1♣     2.3   2.1   2.3   4.5   11.2
1♦     2.3   2.2   4.7   1.8   11.0
1♥     2.2   4.4   2.1   2.0   10.7
1♠     4.8   1.9   2.1   2.1   10.9
3NT    4.4   4.4   4.9   4.6   18.3
4♥     3.0   6.5   3.1   3.4   16.1
4♠     6.6   5.3   2.3   2.3   16.5
Table 1: Opening bid - own aHCPs.

Bid    ♠     ♥     ♦     ♣     Total
PASS   1.4   1.3   1.3   1.3   5.3
1♣     2.3   2.1   2.2   4.6   11.1
1♦     2.2   2.2   4.7   1.9   11.0
1♥     2.3   4.5   2.0   2.0   10.8
1♠     4.8   2.0   2.2   2.2   11.1
3NT    4.4   4.6   4.8   4.7   18.5
4♥     2.8   6.8   3.3   3.1   16.0
4♠     6.2   4.0   3.0   2.8   16.0
Table 2: Belief HCPs after observing opening bid.

Bid    ♠     ♥     ♦     ♣     Total
PASS   4.4   4.5   4.4   4.5   17.8
1♣     4.4   4.3   4.5   7.1   20.3
1♦     4.6   4.4   7.0   4.9   20.9
1♥     4.3   6.7   4.8   4.9   20.7
1♠     7.2   4.3   4.7   4.9   21.1
2♣     4.8   4.7   5.2   8.5   23.2
2♦     4.9   5.1   8.6   5.1   23.7
3NT    6.6   6.6   7.1   7.2   27.6
4♥     5.2   8.5   5.5   5.6   24.9
4♠     8.4   5.6   5.4   5.4   24.8
6♦     5.7   6.2   9.9   6.9   28.7
6♥     6.4   9.2   7.3   7.1   30.0
6♠     10.8  4.0   10.0  6.5   31.3
6NT    7.5   7.0   8.7   9.0   32.2
Table 3: Responding bid - own + belief aHCPs.

Table 1 shows the average HCPs (aHCPs) present in a hand for each of the opening bidding decisions made by North. Once an opening bid is observed, South updates its belief; Table 2 shows the effect which each opening bid has on South’s belief. Table 3 shows the responding bidding decisions made by South; the aHCP values in Table 3 are the sum of the HCPs in South’s hand and South’s belief over the HCPs in North’s hand. For each row whose bid has a specified trump suit, the maximum aHCP value in the row corresponds to that suit.
