Learning to Communicate in Multi-Agent Reinforcement Learning : A Review


Mohamed Salah Zaiem
Ecole polytechnique, Paris
Etienne Bennequin
Ecole polytechnique, Paris
Both authors contributed equally.
Abstract

We consider the issue of multiple agents learning to communicate through reinforcement learning in partially observable environments, with a focus on information asymmetry in the second part of our work. We provide a review of the recent algorithms developed to improve the agents’ policy by allowing the sharing of information between agents and the learning of communication strategies, with a focus on Deep Recurrent Q-Network-based models. We also describe recent efforts to interpret the languages generated by these agents and study their properties in an attempt to generate human-language-like sentences. We discuss the metrics used to evaluate the generated communication strategies and propose a novel entropy-based evaluation metric. Finally, we address the issue of the cost of communication and introduce the idea of an experimental setup to expose this cost in a cooperative-competitive game.

Disclaimer

This review was written in December 2018 as coursework. In a very dynamic field, interesting results may have been found in the meantime, and a few assumptions made here may have become outdated. However, we believe that this remains a good starting point for people discovering this field or trying to get an idea of what has been done about communication in multi-agent reinforcement learning settings.

1 Introduction

In recent years, Multi-Agent Reinforcement Learning has received a lot of interest, with increasingly complex algorithms and architectures improving the policies of multiple agents in increasingly complex environments [1]. In multi-agent settings, agents take actions and learn from their rewards simultaneously in the same environment. These interactions can be competitive [21], cooperative, or a mix of the two. This review mainly tackles the case of partially observable multi-agent environments. In real-world environments, it is rare that the full state of the system can be provided to the agent or even be determined. In such environments, an agent may hold very valuable information for the decision making of other agents. In the case of a common reward, agents would need to communicate seamlessly with the other agents in order to pass on that information, coordinate their behaviours and increase the common reward [15].

Classic independent Q-learning or DQN approaches have shown poor performance in this kind of setup [17]. Since no communication is involved, the other agents, which are learning and acting at the same time, are treated as part of the environment in these models. The environment is therefore non-stationary, and agents fail to converge to an optimal policy. Allowing the agents to share messages and learn what to communicate can significantly improve their adaptability to a given situation.

To allow this information to circulate between agents, there has been a lot of research on creating communication channels and protocols. These channels can be either continuous, allowing straightforward differentiation and optimization, or discrete, with a set of symbols or letters mimicking the way humans generate messages. The messages can either be determined in advance by a human operator or be learned by the agents, using the reward as the only learning signal. In this work, we will focus on the latter case.

We will distinguish two different uses of this framework. Roughly put, the first aims to improve the agents' ability in a task through communication between agents, while the second aims to improve the quality of the generated "language" (or communication) and its closeness to natural language through the mastering of a task.

The first use does not give primary importance to the format of the communication. Information could circulate through handmade communication channels and protocols, as explained above. But it can also be latent and implicit, for instance through the sharing of Deep Q-Network weights.

The second use is motivated by the work of linguists on the emergence of language. Word2Vec embeddings arose from applying to huge textual datasets a famous idea of John Rupert Firth: "You shall know a word by the company it keeps." Concerning the rise of language, Wittgenstein [25] said that "language derives its meaning from use". Building on this idea, a lot of work has tried to create a language relying on its functionality, and Reinforcement Learning represents a perfect framework for such creation. In other words, language is a way to coordinate and achieve common objectives, and cannot be reduced to the simple statistical associations captured by language models. Moreover, cognitive studies tend to show that feedback is paramount for the language learning process [20].

Emergent communication and language is not a new field. Wagner et al. [24] have already proposed a thorough review of the progress of research in this domain. Since the majority of the works they present are not related to Reinforcement Learning, and since Deep Q-Networks have opened up many new possibilities for researchers, we will not present these works in detail.

We will start by defining the useful tools used in the different setups and models described in this review, then we will see the effect of introducing communication channels and algorithms on the performance of agents in multi-agent partially observable settings. We will then inspect the link between the generated languages and natural ones, studying the attempts of researchers to ground the communication sequences into human language. In the final part, we will discuss the metrics used in these experiments and propose a new entropy-based one, as well as a new multi-agent environment that could expose a cost of communication between agents.

2 Useful tools

Deep Q-Networks

In 2015, Mnih et al. proposed to represent the Q-function of an agent's policy with a deep neural network [18].

Assume a single agent observing at each time step $t$ the current state $s_t$ (fully observable) and taking an action $a_t$ following a policy $\pi$, with the objective of maximizing the expectation of the sum of future $\gamma$-discounted rewards $R_t = \sum_{t' \ge t} \gamma^{t'-t} r_{t'}$. In Q-learning, this expectation is represented by the action-value function of the policy, $Q^\pi(s, a) = \mathbb{E}\left[R_t \mid s_t = s, a_t = a\right]$, with the optimal value function obeying the Bellman equation:

$$Q^*(s, a) = \mathbb{E}_{s'}\left[\, r + \gamma \max_{a'} Q^*(s', a') \,\middle|\, s, a \right].$$

In DQN, a neural network is used to represent the value function $Q(s, a; \theta)$, where $\theta$ denotes the parameters of the network. The training of the network is done using experience replay, i.e. a dataset of past experiences sampled at random to constitute mini-batches, and the loss function w.r.t. the target $y = r + \gamma \max_{a'} Q(s', a'; \theta^-)$ ($\theta^-$ being the parameters of a copy of the network, frozen at the beginning of each update iteration):

$$L(\theta) = \mathbb{E}_{(s, a, r, s')}\left[\left( y - Q(s, a; \theta) \right)^2\right].$$
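
To make the update concrete, here is a minimal NumPy sketch of how the targets and loss above can be computed on a mini-batch of replayed transitions; the array names and shapes are assumptions for illustration, not the implementation of [18].

```python
import numpy as np

def dqn_targets(rewards, next_q_frozen, dones, gamma=0.99):
    """Bellman targets y = r + gamma * max_a' Q(s', a'; theta^-)."""
    # rewards: (batch,), next_q_frozen: (batch, n_actions) from the frozen copy,
    # dones: (batch,) equal to 1.0 where the episode ended (no bootstrapping there)
    return rewards + gamma * (1.0 - dones) * next_q_frozen.max(axis=1)

def dqn_loss(q_values, actions, targets):
    """Mean squared error between Q(s, a; theta) and the (fixed) targets."""
    # q_values: (batch, n_actions) from the current network, actions: (batch,) ints
    chosen_q = q_values[np.arange(len(actions)), actions]
    return np.mean((targets - chosen_q) ** 2)
```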

In the case of multiple agents, this can be extended to Independent DQN, in which the Q-network of each agent is updated independently from the others. It has to be noted that this algorithm can lead to convergence problems, since the environment can appear non-stationary to one agent because of the learning of the other agents. Also, this assumes full observability of the state of the environment by all agents, which is a strong assumption in practice.

Deep Recurrent Q-Networks

Hausknecht & Stone proposed an adaptation of DQN able to address the issue of a partially observable environment using Recurrent Neural Networks [9]. Instead of approximating a value function $Q(s_t, a_t)$, it approximates $Q(o_t, h_{t-1}, a_t)$, where $o_t$ is the observation made by the agent at time step $t$ and $h_{t-1}$ is the hidden state of the network. Here, $h_t$ is computed using an RNN (usually an LSTM layer), i.e. we have $h_t = \mathrm{RNN}(h_{t-1}, o_t)$ at each time step. The observations can then be aggregated over time.
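
As an illustration, a minimal recurrent Q-network of this kind could look as follows in PyTorch; the layer sizes and the simple linear encoder are assumptions (the architecture of [9] works on Atari frames with convolutional layers), but the structure $Q(o_t, h_{t-1}, \cdot)$ with $h_t = \mathrm{RNN}(h_{t-1}, o_t)$ is the one described above.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Recurrent Q-network: Q(o_t, h_{t-1}, a) instead of Q(s_t, a)."""

    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)   # stand-in for a conv encoder
        self.rnn = nn.LSTMCell(hidden_dim, hidden_dim)  # aggregates observations over time
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs, hidden):
        # hidden = (h_{t-1}, c_{t-1}); returns Q-values and the updated hidden state
        x = torch.relu(self.encoder(obs))
        h, c = self.rnn(x, hidden)
        return self.q_head(h), (h, c)

# usage: one decision step for a single agent
net = DRQN(obs_dim=10, n_actions=4)
h0 = (torch.zeros(1, 64), torch.zeros(1, 64))
q, h1 = net(torch.randn(1, 10), h0)
```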

3 Multi-agent Cooperation

3.1 Cooperation & communication

Most real-life situations can be modeled as several or many agents interacting in an environment. These agents can cooperate, compete, or some combination of the two. Tampuu et al.’s work presents all three of these possibilities in a game of Pong [23]. In their experiment, two agents play the famous Atari game against each other. Each of them can fully observe the state of the environment and tries to find its optimal policy using the independent DQN algorithm. No reward is given during the point. At the end of each point, the player who scored receives a reward $\rho$, and the other one receives a reward of $-1$. The authors then make $\rho$ vary between $+1$ and $-1$. When $\rho = +1$, the game is fully competitive, since each player has a full incentive to score. When $\rho = -1$, the game is fully cooperative: the two players are encouraged to keep the ball alive. Their experiment showed that with a competitive rewarding scheme, the number of paddle-bounces between the two players is about a hundred times lower than with a cooperative rewarding scheme, which means that in the latter, both players learned to cooperate in order not to drop the ball.
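
A tiny sketch of this parametrised rewarding scheme, as we understand it from [23] (the function and player names are ours):

```python
def point_rewards(scorer, rho):
    """End-of-point rewards: the player who missed the ball gets -1,
    the scorer gets rho in [-1, 1].
    rho = +1 -> fully competitive, rho = -1 -> fully cooperative."""
    loser = "right" if scorer == "left" else "left"
    return {scorer: rho, loser: -1.0}

# e.g. point_rewards("left", rho=-1.0) == {"left": -1.0, "right": -1.0}
```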

This experiment shows that the notion of cooperation in a multi-agent environment is close to the notion of a shared reward between the agents. If several agents evolving in the same environment share the same common reward, then they have full incentive to cooperate.

However, this does not mean that agents sharing the same reward will have an incentive to communicate. According to most papers in this area, the incentive to communicate actually comes from a partially observable environment. Take for instance the switch riddle: one hundred prisoners arrive in a new prison. The warden tells them that each day, one of them will be chosen at random (among all prisoners, including those having already been picked) and placed in a room containing only a light bulb. This prisoner can observe the state of the light bulb (on or off) and choose whether or not to change its state. He can also announce that he believes all prisoners have visited the room. If he is right, all prisoners will be set free, but if he is wrong, they will all be executed. The prisoners can agree in advance on a strategy, but as soon as the challenge begins, they will not be able to communicate with each other, nor will they know who is picked to go to the light bulb room.

In this riddle, prisoner-agents cannot observe the full state. They cannot observe anything, unless they are picked to go into the room, in which case they only know that they are the one in the room, along with the state of the switch. However, the common knowledge of all agents covers the full state and makes the riddle solvable. Therefore, the agents have an incentive to communicate, through the light bulb.

The problem of a partially observable multi-agent environment was already addressed in 2004 by Ghavamzadeh & Mahadevan [8], who used Hierarchical Reinforcement Learning to allow multiple taxi drivers to learn to pick up as many customers as possible in a finite time. After executing an action, a taxi driver is able to receive the state of the other cabs. However, communication in this case is not free: there is an arbitrarily predefined cost for getting the information from the other taxi drivers. Thus each agent had to compare the value function without communication to the value function with communication, taking the communication cost into account. The authors compared a "selfish multi-agent" baseline, in which the agents do not communicate with each other, with their algorithm for different values of the communication cost. They found that even though the results were always better with their algorithm, the number of satisfied customers decreased as the communication cost increased, tending towards the baseline results. These experiments have the merit of quantifying the trade-off between partial observation and communication cost, and therefore of quantifying the incentive to communicate.
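
The decision rule described above can be summarised in one line; this is only an illustrative sketch of the trade-off, not the actual algorithm of [8]:

```python
def should_communicate(value_with_comm, value_without_comm, comm_cost):
    """Sketch of the trade-off: request the other cabs' states only if the
    expected value with communication, net of its cost, beats staying silent."""
    return value_with_comm - comm_cost > value_without_comm
```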

3.2 Centralised learning and Deep Recurrent Q-Networks

Some constraints on the nature of the communication can make it non-differentiable, thus invalidating a lot of learning algorithms. A widely used solution to avoid this issue is centralised training for decentralised execution. Agents have a learning period in which they develop their communication schemes along with their policies, which they apply afterwards, at test time, possibly with more constraints on the message.

Learning communication with backpropagation

Sukhbaatar et al. [22] developed a "Communication Network" that takes as input all the partial observations of the agents and outputs their actions. Each layer $i$ of this $K$-layered network has one cell per agent $j$. Each cell takes as input the pair $(h^i_j, c^i_j)$, where $h^i_j$ is the output of the previous layer for agent $j$ (with $h^0_j$ an encoding of agent $j$'s observation) and $c^i_j$ is the message coming from the other agents, here the average of their hidden states (it can also be computed only on a subset of the set of other agents), and outputs:

$$h^{i+1}_j = \sigma\!\left(H^i h^i_j + C^i c^i_j\right), \qquad c^{i+1}_j = \frac{1}{J-1}\sum_{j' \neq j} h^{i+1}_{j'},$$

where $\sigma$ is some non-linearity (as an alternative, the cell can also be an LSTM). Finally, the actions are decoded from the hidden states of the last layer. The weights of the network are trained all together using backpropagation from a loss related to the reward on a batch of experiences. Their model proved to heavily outperform a baseline model where communication inside the network occurs through discrete symbols.
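
The following PyTorch sketch illustrates one such communication layer, using a simple mean over the other agents' hidden states as the message; the dimensions, the choice of non-linearity and the final action decoder are assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn

class CommLayer(nn.Module):
    """One communication layer in the spirit of [22]: each agent's next hidden
    state mixes its own state with the mean of the other agents' states."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.self_proj = nn.Linear(hidden_dim, hidden_dim)   # H^i
        self.comm_proj = nn.Linear(hidden_dim, hidden_dim)   # C^i

    def forward(self, h):                       # h: (n_agents, hidden_dim)
        n = h.shape[0]
        totals = h.sum(dim=0, keepdim=True)     # (1, hidden_dim)
        c = (totals - h) / max(n - 1, 1)        # mean of the *other* agents' states
        return torch.tanh(self.self_proj(h) + self.comm_proj(c))

# usage: encode observations, apply a few communication layers, decode actions
h = torch.randn(3, 32)                          # 3 agents, encoded observations
for layer in [CommLayer(32), CommLayer(32)]:
    h = layer(h)
action_scores = nn.Linear(32, 5)(h)             # per-agent action scores (illustrative decoder)
```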

After these promising results, the research on centralised learning of communication schemes took a turn with the use of Deep Recurrent Q-Networks. Unlike the independent Deep Q-Networks used by Tampuu et al., Deep Recurrent Q-Networks do not assume full observability and do not lead to convergence problems (in independent DQN, one agent’s learning makes the environment appear non-stationary to the others).

Foerster et al. proposed the two following algorithms deriving from DRQN, in which multiple agents learn to communicate in a centralised fashion while sharing the weights of their network. Both make the assumption of a partially observable environment in which agents share the same reward.

Deep Distributed Recurrent Q-Network [5]

This algorithm derives from a combination of DRQN and independent Q-learning, with three modifications: the experience replay is removed (as said before, the experience of one agent is made obsolete by the learning of the others), the last action is added as input, and, most importantly, the weights of the network are shared between all the agents: there is not one Q-network per agent, but a single Q-network in total, allowing more efficient (and far less costly) common learning. Each agent can still behave differently from the others since it gets its own individual inputs. The index of the agent is also given as input, in order to allow each agent to specialise. The Q-function to be learned by the RNN becomes of the form $Q(o^m_t, h^m_{t-1}, m, a^m_{t-1}, a^m_t)$, with $o^m_t$ and $a^m_t$ respectively the observation and the action of agent $m$ at time step $t$, and $h^m_{t-1}$ the hidden state of agent $m$ after time step $t-1$. The network is trained in the following way: 1) the multiple agents evolve during one episode with an $\epsilon$-greedy policy following the Q-function given by the RNN; 2) the weights of the network are updated following the Bellman equations.
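
A minimal sketch of the corresponding shared network (PyTorch, with assumed sizes and one-hot encodings of the last action and the agent index); the actual DDRQN of [5] differs in its architecture and training details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDRQN(nn.Module):
    """Single recurrent Q-network queried by all agents, in the spirit of DDRQN [5]."""

    def __init__(self, obs_dim, n_actions, n_agents, hidden_dim=64):
        super().__init__()
        # input = observation + one-hot last action + one-hot agent index
        in_dim = obs_dim + n_actions + n_agents
        self.rnn = nn.LSTMCell(in_dim, hidden_dim)
        self.q_head = nn.Linear(hidden_dim, n_actions)
        self.n_actions, self.n_agents = n_actions, n_agents

    def forward(self, obs, last_action, agent_id, hidden):
        x = torch.cat([obs,
                       F.one_hot(last_action, self.n_actions).float(),
                       F.one_hot(agent_id, self.n_agents).float()], dim=-1)
        h, c = self.rnn(x, hidden)
        return self.q_head(h), (h, c)

# usage: one decision for agent 2, using the network shared by all 4 agents
net = SharedDRQN(obs_dim=8, n_actions=3, n_agents=4)
h = (torch.zeros(1, 64), torch.zeros(1, 64))
q, h = net(torch.randn(1, 8), torch.tensor([0]), torch.tensor([2]), h)
```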

Differentiable Inter-Agent Learning [6]

At each time step, in addition to an environment action $u^a_t$, each agent $a$ takes a communication action $m^a_t$. This message is not restricted during the centralised learning, but is restricted to a limited-bandwidth channel during the decentralised execution.
Here, the Q-network does not only output the Q-values, but also the real-valued (thus differentiable) message $m^a_t$, which is fed at time step $t+1$ to the Q-networks of the other agents (that still share the same weights). Two separate gradients then back-propagate through the Q-network: 1) the one associated with the reward, coming from the environment; 2) the one coming from the error backpropagated by the recipients of the message at the next time step. Figure 0(b) shows the DIAL architecture with two agents. With this model, messages can be updated more quickly to minimize the future DQN loss.
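
The key mechanism can be sketched as follows: during centralised learning the message stays real-valued and noisy (so that gradients from the recipients can flow back to the sender), while at execution time it is discretised to fit the limited-bandwidth channel. The function below is a simplified sketch of this idea; the noise scale and the 1-bit discretisation are assumptions.

```python
import torch

def dial_channel(message_logits, training, noise_std=2.0):
    """Message regularisation in the spirit of DIAL [6].

    Training:  real-valued, differentiable message -> the recipients' error
               can backpropagate into the sender's Q-network.
    Execution: hard 1-bit message, respecting the discrete channel.
    """
    if training:
        return torch.sigmoid(message_logits + noise_std * torch.randn_like(message_logits))
    return (message_logits > 0).float()
```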

Note that in the first model, there is no explicit message circulating between the agents. The communication lies in the actions, which are taken by the multiple agents and can influence the observations of the other agents in the future. But in both algorithms, a great deal of implicit communication happens during the learning, when all agents actually come together and agree on a common strategy, just like the prisoners in the switch riddle.

Both algorithms having been tested on the switch riddle, we can compare their results. They usually converge to the same fraction of the Oracle reward (the maximal possible reward) for both numbers of agents tested. It is also interesting to see that the same communication protocol is used by both algorithms after convergence in the smaller setting (this protocol is shown in Figure 0(a)). However, DIAL converges about 20 times faster than DDRQN in both cases (after about one thousand episodes for DIAL, but 20 thousand for DDRQN). Moreover, with the larger number of agents, even though DDRQN converges in most instances to a comparable fraction of the Oracle reward, there is a significant number of instances in which no convergence can be observed after 500 thousand episodes. This is a good argument in favor of differentiable messages directly adjusted to minimize the downstream DQN loss.

(a) Figure 0(a): Communication protocol developed by DIAL and DDRQN for the resolution of the switch riddle.
(b) Figure 0(b): DIAL architecture with two agents; red arrows represent backpropagation. The different Q-Net cells are copies of the same shared network.

3.3 Cooperation through public belief

However, the DIAL method assumes the existence of a channel on which agents can communicate freely without affecting the environment. In practice, this is not always the case. Foerster et al. [7] introduce the notion of public belief, which can be defined as the estimated probability distribution over possible states (observable or not) given all public observations (i.e. observations available to all agents): assuming that at time step $t$, $f_t$ is the discrete set of features of the environment that compose the state $s_t$, and that $f^{pub}_t$ is the subset of features that are commonly known by all agents, we define the public belief $B_t = P\!\left(f_t \mid f^{pub}_{\le t}\right)$.

Bayesian Action Decoder [7]

The authors proposed the BAD algorithm to use the notion of public belief in order to allow agents to learn a cooperative policy.
They assume a partially observable environment with no communication channel. At each time step $t$, the acting agent $a$ takes an action $u^a_t$ following a policy conditioned on all its past observations and actions. All agents share a common reward. The training is centralised, which means that each agent knows the policies of all the other agents.
The main idea is to generate a virtual third-party agent with a public belief policy that conditions only on public information and the public belief, and can therefore be computed by each agent through a common algorithm. At each time step, this virtual third party outputs a partial policy $\hat{\pi}$ that maps all possible private observations of the agent acting at this time step to its possible actions. Then, at time step $t+1$, since all agents know the policy $\hat{\pi}$, the action $u^a_t$ that agent $a$ has taken following this policy gives public information about the private observation it made:

$$B_{t+1}\!\left(f^a_t\right) \propto \mathbb{1}\!\left[\hat{\pi}\!\left(f^a_t\right) = u^a_t\right] \, B_t\!\left(f^a_t\right),$$

with $f^a_t$ being the state features observed by agent $a$ at time $t$. This allows the public belief to be updated.
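
A toy NumPy sketch of this public belief update, assuming a small discrete set of private features and a deterministic partial policy (the full BAD algorithm of [7] additionally learns and factorises these beliefs):

```python
import numpy as np

def update_public_belief(belief, partial_policy, observed_action):
    """Bayesian update of the public belief after observing an agent's action.

    belief:          (n_features,) prior probability of each possible private feature
    partial_policy:  (n_features,) action prescribed by the public policy for each feature
    observed_action: the action the acting agent actually took
    """
    likelihood = (partial_policy == observed_action).astype(float)
    posterior = belief * likelihood
    total = posterior.sum()
    return posterior / total if total > 0 else belief  # guard against inconsistent inputs

# toy usage: 3 possible private observations, the acting agent chose action 1;
# the belief concentrates on the features consistent with that action
b = update_public_belief(np.array([0.5, 0.3, 0.2]), np.array([0, 1, 1]), 1)
```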

The authors chose to apply their algorithm to Hanabi, a cooperative, firework-themed playing card game. In this game, each player can see the other players’ cards but not their own, and can choose either to give a teammate a hint about one of that teammate’s cards, or to play one of their own cards using the hints given by the other players. In the two-player setting, the score converged after about ten thousand games, achieving state-of-the-art results. What is even more interesting is that agents developed their own communication conventions through their actions. For instance, the authors found that an agent hinting "red" or "yellow" is a strong incentive for the other agent to play its newest card on the next action. In this way, the agents learned a crucial ability in interactions between humans: the ability to deduce, from someone's action, latent information about their situation and observations.

4 Learning a language

4.1 Main models : Referential Games

As we mentioned earlier, emergent languages and communication are not a new research topic, but the development of Deep Recurrent Q-Network methods offered a whole new range of possibilities to language researchers.

A majority of the reinforcement learning setups we will describe in this part rely on referential games. Referential games are a variant of the Lewis signaling game [14] and have often been used in linguistics and cognitive sciences. They start by presenting a target object to the first agent (the speaker). The speaker is then allowed to send a message describing the object to a second agent (the listener). Finally, the listener tries to guess the target shown to the speaker among a list of possible candidates. The communication is successful, and thus the reward positive, if the listener picks the correct candidate.

Let us describe more precisely a specific model, the one developed in [13]. Target objects and candidates are pictures sampled from ImageNet. They represent basic concepts (cats, cars…) classified among 20 general categories (animal, vehicle…). As it receives raw pixel input, the speaker agent has to detect by itself the features it could send to the listener. Other papers like [12] tried to use disentangled inputs with attribute-based object vectors.

Two images $i_1$ and $i_2$ are drawn randomly from the set, one of them being the target $t$. The speaker knows which one is the target and generates a message $m$ following its policy $\pi_S$, using a vocabulary $V$ of size $K$. The listener tries to decode the message and guess the target image following a policy $\pi_L$. If $t'$, the guess of the listener, is equal to $t$, they both receive a positive reward; otherwise, they receive a reward of 0.
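
The interaction loop of the game is simple enough to be summarised in a few lines; in this sketch the two policies are placeholders (in the cited works they are neural networks trained with the reward below as the only signal):

```python
import random

def play_referential_round(images, speaker_policy, listener_policy):
    """One round of a referential game: the speaker sees the target, emits a
    message, and the listener guesses the target among the candidates."""
    target = random.randrange(len(images))
    message = speaker_policy(images[target])   # symbol(s) drawn from a vocabulary of size K
    guess = listener_policy(message, images)   # index of the chosen candidate
    return 1.0 if guess == target else 0.0     # shared reward, the only learning signal

# toy usage with degenerate policies (real speakers/listeners are neural networks)
r = play_referential_round(
    ["img_a", "img_b"],
    speaker_policy=lambda img: img[-1],                               # "describe" by last char
    listener_policy=lambda msg, imgs: [i[-1] for i in imgs].index(msg))
assert r == 1.0
```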

4.1.1 Vocabulary

The size of the vocabulary has a huge influence on the performance of communicating agents. Foerster et al. [6] started with simple 1-bit messages, their environments requiring only "small" amounts of information (for instance, the switch riddle or finding the parity of a number), but the more complex the problems get, the larger the vocabulary needed to ensure informative messages. Lazaridou et al. [13] tried single-symbol messages with a vocabulary size ranging up to 100. Later research, inspired by natural language, suggested the use of LSTM-based encoder-decoder architectures to generate sequences [10] [12]. An important point is that these symbols have no a priori meaning; they only get linked to visual or physical attributes during learning.

One of the main benefits of generating sequences is allowing the speakers to generate languages ensuring compositionality. With a compositional language, the speaker would be able to describe a red box if it had learned suffixes or prefixes for "red" and "box" from the training set, even if no red box was seen during training.

4.1.2 Q-Models

Each agent (speaker and listener) has an input network that takes the incoming inputs (images, messages, previous actions…) and embeds them using Recurrent Neural Networks or simple shallow feedforward networks. Lazaridou et al. [13] observed that producing these features with Convolutional Neural Networks in the case of image inputs yielded better features and therefore better results in the guessing phase.

To generate a sequence of symbols, Lazaridou et al. [12] chose a recurrent policy for the decoder [9]. The idea is that the RNN is able to retain information from past states and use it to perform better on the generation of long sequences. Evtimova et al. [4] added an attention mechanism during the generation of messages, reporting that it improved the communication success with unseen objects, as the embedding layers focus on recognizable objects.

To make this clearer: from the images $i_1$ and $i_2$, a convolutional network outputs a hidden state that is fed to a single-layer LSTM decoder (the speaker’s recurrent policy). This layer generates a message $m$. The listener applies a single-layer LSTM encoder to the message (since it is also dealing with sequence input), producing an encoding $z$. Coupled with encodings of the candidate images, it predicts a target using similarity measures between the candidates and the encoded vector $z$.
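
The following condensed PyTorch sketch follows this pipeline, with a pre-computed image feature vector standing in for the CNN output, greedy symbol selection and dot-product similarity; all sizes and the exact decoding scheme are assumptions, and the cited models differ in details such as sampling and REINFORCE training.

```python
import torch
import torch.nn as nn

class Speaker(nn.Module):
    """Generates a symbol sequence from the target image's features."""

    def __init__(self, feat_dim, vocab_size, hidden_dim=64, max_len=5):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.LSTMCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.max_len = max_len

    def forward(self, target_features):                # (1, feat_dim)
        h = torch.tanh(self.init_h(target_features))
        c = torch.zeros_like(h)
        symbol = torch.zeros(1, dtype=torch.long)       # assumed start symbol
        message = []
        for _ in range(self.max_len):
            h, c = self.rnn(self.embed(symbol), (h, c))
            symbol = self.out(h).argmax(dim=-1)         # greedy choice, for the sketch
            message.append(symbol)
        return torch.stack(message, dim=1)              # (1, max_len)

class Listener(nn.Module):
    """Encodes the message and picks the most similar candidate image."""

    def __init__(self, feat_dim, vocab_size, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(feat_dim, hidden_dim)

    def forward(self, message, candidate_features):     # (1, L), (n_candidates, feat_dim)
        _, (h, _) = self.rnn(self.embed(message))
        z = h[-1]                                        # (1, hidden_dim) message encoding
        scores = self.img_proj(candidate_features) @ z.t()
        return scores.squeeze(-1).argmax()               # index of the predicted target

# usage: the speaker describes image 0, the (untrained) listener picks among 4 candidates
feats = torch.randn(4, 128)
msg = Speaker(feat_dim=128, vocab_size=17)(feats[0:1])
guess = Listener(feat_dim=128, vocab_size=17)(msg, feats)
```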

All the weights of both the speaker and listener agents are jointly optimized using the outcome of the listener's choice as the only learning signal. Since their tasks are different, no weights are shared between the trained agents.

4.1.3 Variants

We described above a popular model in language emergence, but as the emergent-language literature has grown substantially, there are a few interesting variants which we will discuss here.
A first interesting one is proposed in the work of [11] and [2]. To get closer to human-like interactions, they implemented multi-step communication between the agents. In [11], the task is very similar, recognizing target images (celebrities in this case), but both agents generate messages. The guessing agent, given the candidates, starts the communication by asking "questions". The answering agent, who knows the target, answers these questions with yes or no, as in the "Guess Who" game. The best results were obtained using two rounds of question-answer, and their analysis shows that the questioning agent targets different features in Question 1 and Question 2, gathering more information.

Cao et al. [2] also tested multi-step communication, but with important changes. It is no longer a referential game, but a negotiation to trade items having different utilities for the two parties. Two channels are implemented: the first one is made to communicate information (like the channels seen earlier), while the second one carries a concrete trade offer that can be either accepted or rejected by the other agent. When the agents cooperate, i.e. the reward is common, the messages indicate to the other agent which items are most valuable and their value. The results show that the "linguistic" channel helps reach a Nash equilibrium and reduces the variance in joint optimality, making the trade system more robust.

4.2 Results

The first question this part should answer is whether the communication is successful, i.e. whether the agents complete their task. In almost all the papers we have read, agents learn to coordinate almost perfectly and reach very high accuracies.

For instance, in the work of Lazaridou et al. [13], agents reach high success rates depending on the model, while the best ones hardly ever fail at choosing the right target. The main differences concern the number of episodes needed before convergence. The best models are the ones that best encode the input images, thus facilitating message generation through a better representation or classification of the described objects.

In [12], a higher number of distractors (i.e. non-target inputs shown to the listener) slightly reduces the accuracy. We also note that the success rate increases sharply with the size of the alphabet at first, before stabilizing once the vocabulary is large enough to answer the task properly. Table 1 shows that influence. The lexicon size denotes the effective number of unique messages used by the speaker agent.

Max length Alphabet Size Lexicon size Training accuracy
2 10 31 92.0%
5 17 293 98.2%
10 40 355 98.5%
Table 1: Results of Lazaridou et al. 2018 [12]

4.3 Link with Natural Language

As we said in the introduction, research work in multi-agent communication and language emergence did not stick to improving the behaviour of agents facing partially observable environments. Apart from getting closer to optimal policies, emergent communication has been seen as a way to tackle Natural Language Processing issues. Roughly put, if machines are able to generate a language conveying the same basic functions as human language, an eventual translator from "machine" to human could solve a lot of unsolved NLP tasks.

Therefore, two challenges arise. The first is interpreting the generated messages. The second is grounding the emergent language into natural language. Some works have tried to link the generated messages to human communication, going as far as twisting the tasks to force that closeness.

4.3.1 Interpretability

Starting from rough and noisy messages, interpretation is a difficult task. In a lot of cases, researchers fail to fully understand which information the agents are passing and stick to formulating vague hypotheses about meanings. Figure 1 shows the attempts of Jorge et al. [11] to understand the meaning of questions A and B (questions asked by the questioning agent). Looking at the samples, it seems that A checks whether the top of the photo is coloured or not (dark or white hair). The operation is even harder with sequences; Figure 2 shows the most common messages for each object (an object being identified by its shape and color) in the work of Choi et al. [3].

Figure 1: Samples presented according to the answer they triggered for two questions [11]
Figure 2: Most common sequences describing objects (the letters between brackets represent common alternatives for the last letter) [3]

Lazaridou et al. [13] tried to estimate closeness to human representations. They used the purity index to assess the quality of the "message clusters", i.e. the groups of images represented by the same message by the speaker agent. These clusters are compared with the initial ones (the groups encompassing the concepts depicted in the images, e.g. dogs within animals). The purity of a clustering is the proportion of category labels in the clusters that agree with the respective cluster's majority category. In other words, for a given message, we take the label held by the majority of the elements represented by that message, and check how many images do not belong to that label but are still represented by the same message. With CNN encoding and 100 symbols in the vocabulary (43 of which are effectively used), their model reaches high purity scores.
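
Purity is straightforward to compute from the assignment of images to messages and their ground-truth categories; a small NumPy sketch:

```python
import numpy as np

def purity(message_ids, category_labels):
    """Fraction of items whose ground-truth category matches the majority
    category of the message 'cluster' they were assigned to."""
    message_ids = np.asarray(message_ids)
    category_labels = np.asarray(category_labels)
    correct = 0
    for m in np.unique(message_ids):
        labels_in_cluster = category_labels[message_ids == m]
        _, counts = np.unique(labels_in_cluster, return_counts=True)
        correct += counts.max()                 # size of the majority category
    return correct / len(message_ids)

# e.g. purity([0, 0, 1, 1, 1], ["dog", "dog", "car", "car", "dog"]) == 0.8
```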

Choi et al. [3] evaluate closeness to human language by assessing the ability of agents to compose meanings. For this purpose, they use zero-shot evaluation. In their setup, the listener has to guess an image among a set of 4 candidates. Each image is generated randomly given its shape (cylinder, cube…) and color. Zero-shot evaluation consists in showing the trained speaker an unseen combination of shape and color, where both the color and the shape have been seen separately (multiple times) during training. The point is to evaluate its ability to describe an unseen element by composing the knowledge it has of its constituents. Communication accuracies do not vary much between seen and unseen objects and stay very close to 1, around 0.97. The emerged language thus seems able to compose, since it succeeds in describing unseen objects.

4.3.2 Grounding it into Natural Language

In order to obtain easily interpretable languages, researchers have tried to compel the agents to generate messages closer to human representations. In [13], a few changes were introduced to the referential game. Instead of presenting the same image to the speaker and the listener (among other candidates for the latter), they presented an image pertaining to the same concept. For instance, during training, they showed a dog to the speaker, and the listener then had to pick the picture of another dog among a list of non-dog candidates. The point of this manipulation is to force the speaker to send the message "dog", as a human would do, rather than messages about the color of the background or the luminosity, for instance. This change induced a slight increase in purity. Furthermore, to push the speaker even further towards classifying the images as a human would, they alternated its training between playing the game (generating messages for the listener) and a classic image classification task.

5 Discussion and propositions

5.1 Lack of evaluation metrics

The accuracy of the choice is a natural and relevant metric to evaluate the quality of the communication, and thus the quality of the language generated. But the metrics used to assess the link with natural language are completely biased by the human macro-representation of raw images. To be clearer, the purity index introduced above is not a valid evaluation of interpretability. We believe that forcing the agent to describe macro-representations (dog, car…) instead of raw image attributes (color, background, luminosity…), especially given the lack of prior information on these macro-representations, is not a natural way for the agents to solve the task, and it may harm their communication success.

However, we believe that it would be interesting to develop an evaluation metric for a learned communication protocol that would be independent of human assumptions as well as of its effectiveness in the execution of a singular task. Zero-shot evaluation of compositionality is indeed relevant in this perspective, but it cannot capture all the features that make a language relevant.

In 2003, Prokopenko & Wang studied the entropy of joint beliefs in a multi-agent coordination setting [19]. This can be related to the results of the Bayesian Action Decoder on the Hanabi game, which show that the entropy of the public belief (i.e. the posterior over all possible states) decreases as the algorithm develops useful communication conventions. In the same spirit, we introduce here the idea of an evaluation metric for a language or communication protocol that is independent of the task to be completed by the agents.

Language-Entropy Evolution

Let us assume a two-agent environment, Agent Smith being the speaker and Agent Lee being the listener. At each time step $t$, Smith observes the full state $s_t$ and sends Lee a message $m_t$. Lee then computes the probability of each possible state $s$ given all the messages received so far, $P(s \mid m_1, \dots, m_t)$. We evaluate the entropy of this probability distribution:

$$H_t = -\sum_{s} P(s \mid m_1, \dots, m_t) \log P(s \mid m_1, \dots, m_t).$$

The comparison between $H_{t-1}$ and $H_t$ gives a measure of the uncertainty that has been lifted by message $m_t$, in other words how discriminative message $m_t$ is in this context.
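
A minimal NumPy sketch of this metric, computing how much entropy Lee's belief loses after each message (the toy distributions below are of course only illustrative):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def uncertainty_lifted(belief_before, belief_after):
    """Proposed metric: H_{t-1} - H_t, i.e. how much uncertainty over the
    states (or actions) the listener lost after receiving message m_t."""
    return entropy(belief_before) - entropy(belief_after)

# toy example: a message that rules out half of four equally likely states
print(uncertainty_lifted([0.25] * 4, [0.5, 0.5, 0.0, 0.0]))  # ~ log(2) ≈ 0.693
```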

Note that the probabilities computed by Lee can be over the action set instead of the state set. The entropy then measures the uncertainty about which action to take instead of the uncertainty about which state the agents are in. In applications where it is difficult to compute a probability distribution over states, this alternative might be more relevant.

We believe that this metric can be useful in the evaluation of learned languages and communication protocols.

EDIT: This idea has since been explored in the work of Lowe et al. [16], along with other interesting evaluation metrics.

5.2 Cost of communication

Figure 3: A 4-player, 2 vs 2 Pong game

We also noticed that an interesting study could be conducted on the possible cost of communication in these systems. Beyond the obvious computational cost, we have in mind environments with competing groups of cooperative agents.

In this context, we have been thinking about a game setting in which we could observe a cost of communication in a cooperation/competition setup. The game is represented in Figure 3: the goal would be to make teams of two agents play against each other in a 4-player Pong setting. Multiple cases could be tested:

  • No communication channel.

  • The communication between members of the same team is not heard by others.

  • Communication is public, or communication is public for one team and secret for the other.

We believe that these experiments could lead to a better understanding of the cost of communication in an environment where sending a message has an indirect influence on the state of the environment, through the reaction of the opposing team.

6 Conclusion

We reviewed the main recent algorithms addressing the issue of communication and cooperation between multiple agents in a Reinforcement Learning environment. We also presented the current efforts in interpreting the language learned by these algorithms, and proposed a novel metric to evaluate the relevance of a generated communication strategy.

In the future, we want to implement the 4-player Pong environment with the multiple cases described above as a personal project, to see how the policies of the teams evolve depending on the constraints on their communication protocol.

References

  • [1] L. Busoniu, R. Babuska, and B. De Schutter (2008-03) A comprehensive survey of multiagent reinforcement learning. Trans. Sys. Man Cyber Part C 38 (2), pp. 156–172. External Links: ISSN 1094-6977, Link, Document Cited by: §1.
  • [2] K. Cao, A. Lazaridou, M. Lanctot, J. Z. Leibo, K. Tuyls, and S. Clark (2018) Emergent communication through negotiation. CoRR abs/1804.03980. External Links: Link, 1804.03980 Cited by: §4.1.3, §4.1.3.
  • [3] E. Choi, A. Lazaridou, and N. de Freitas (2018) Compositional obverter communication learning from raw visual input. CoRR abs/1804.02341. External Links: Link, 1804.02341 Cited by: Figure 2, §4.3.1.
  • [4] K. Evtimova, A. Drozdov, D. Kiela, and K. Cho (2017) Emergent language in a multi-modal, multi-step referential game. CoRR abs/1705.10369. External Links: Link, 1705.10369 Cited by: §4.1.2.
  • [5] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson (2016) Learning to communicate to solve riddles with deep distributed recurrent q-networks. CoRR abs/1602.02672. External Links: Link, 1602.02672 Cited by: §3.2.
  • [6] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson (2016) Learning to communicate with deep multi-agent reinforcement learning. CoRR abs/1605.06676. External Links: Link, 1605.06676 Cited by: §3.2, §4.1.1.
  • [7] J. N. Foerster, F. Song, E. Hughes, N. Burch, I. Dunning, S. Whiteson, M. Botvinick, and M. Bowling (2018) Bayesian action decoder for deep multi-agent reinforcement learning. CoRR abs/1811.01458. External Links: Link, 1811.01458 Cited by: §3.3, §3.3.
  • [8] M. Ghavamzadeh and S. Mahadevan (2004) Learning to communicate and act using hierarchical reinforcement learning. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3, AAMAS ’04, Washington, DC, USA, pp. 1114–1121. External Links: ISBN 1-58113-864-4, Link, Document Cited by: §3.1.
  • [9] M. J. Hausknecht and P. Stone (2015) Deep recurrent q-learning for partially observable mdps. CoRR abs/1507.06527. External Links: Link, 1507.06527 Cited by: §2, §4.1.2.
  • [10] S. Havrylov and I. Titov (2017) Emergence of language with multi-agent games: learning to communicate with sequences of symbols. CoRR abs/1705.11192. External Links: Link, 1705.11192 Cited by: §4.1.1.
  • [11] E. Jorge, M. Kågebäck, and E. Gustavsson (2016) Learning to play guess who? and inventing a grounded language as a consequence. CoRR abs/1611.03218. External Links: Link, 1611.03218 Cited by: Figure 1, §4.1.3, §4.3.1.
  • [12] A. Lazaridou, K. M. Hermann, K. Tuyls, and S. Clark (2018) Emergence of linguistic communication from referential games with symbolic and pixel input. CoRR abs/1804.03984. External Links: Link, 1804.03984 Cited by: §4.1.1, §4.1.2, §4.1, §4.2, Table 1.
  • [13] A. Lazaridou, A. Peysakhovich, and M. Baroni (2016) Multi-agent cooperation and the emergence of (natural) language. CoRR abs/1612.07182. External Links: Link, 1612.07182 Cited by: §4.1.1, §4.1.2, §4.1, §4.2, §4.3.1, §4.3.2.
  • [14] D. Lewis (1969) Convention: a philosophical study. Wiley-Blackwell. Cited by: §4.1.
  • [15] M. L. Littman (1994) Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on International Conference on Machine Learning, ICML’94, San Francisco, CA, USA, pp. 157–163. External Links: ISBN 1-55860-335-2, Link Cited by: §1.
  • [16] R. Lowe, J. N. Foerster, Y. Boureau, J. Pineau, and Y. N. Dauphin (2019) On the pitfalls of measuring emergent communication. CoRR abs/1903.05168. External Links: Link, 1903.05168 Cited by: §5.1.
  • [17] L. Matignon, G. J. Laurent, and N. Le Fort-Piat (2012) Independent reinforcement learners in cooperative markov games: a survey regarding coordination problems. The Knowledge Engineering Review 27 (1), pp. 1–31. External Links: Document Cited by: §1.
  • [18] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis (2015-02) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533. External Links: ISSN 00280836, Link Cited by: §2.
  • [19] M. Prokopenko and P. Wang (2003) Relating the entropy of joint beliefs to multi-agent coordination. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 2752, pp. 367–374 (English). External Links: ISSN 0302-9743 Cited by: §5.1.
  • [20] J. Sachs, B. Bard, and M. L. Johnson (1981-02) Language learning with restricted input: case studies of two hearing children of deaf parents. Applied Psycholinguistics 2, pp. 33 – 54. External Links: Document Cited by: §1.
  • [21] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. P. Lillicrap, K. Simonyan, and D. Hassabis (2017) Mastering chess and shogi by self-play with a general reinforcement learning algorithm. CoRR abs/1712.01815. External Links: Link, 1712.01815 Cited by: §1.
  • [22] S. Sukhbaatar, A. Szlam, and R. Fergus (2016) Learning multiagent communication with backpropagation. CoRR abs/1605.07736. External Links: Link, 1605.07736 Cited by: §3.2.
  • [23] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, and R. Vicente (2015) Multiagent cooperation and competition with deep reinforcement learning. CoRR abs/1511.08779. External Links: Link, 1511.08779 Cited by: §3.1.
  • [24] K. Wagner, J. A. Reggia, J. Uriagereka, and G. S. Wilkinson (2003) Progress in the simulation of emergent communication and language. Adaptive Behavior 11 (1), pp. 37–69. External Links: Document, Link, https://doi.org/10.1177/10597123030111003 Cited by: §1.
  • [25] L. Wittgenstein (1953) Philosophical investigations. Basil Blackwell, Oxford. External Links: ISBN 0631119000 Cited by: §1.