Overlapping Coalition Formation via Probabilistic Topic Modeling

Michalis Mamakos
Northwestern University, Evanston, IL, USA
mamakos@u.northwestern.edu

Georgios Chalkiadakis
Technical University of Crete, Chania, Greece
gehalk@intelligence.tuc.gr
Abstract

Research in cooperative games often assumes that agents know the coalitional values with certainty, and that they can belong to one coalition only. By contrast, this work assumes that the value of a coalition is based on an underlying collaboration structure emerging due to existing but unknown relations among the agents; and that agents can form overlapping coalitions. Specifically, we first propose Relational Rules, a novel representation scheme for cooperative games with overlapping coalitions, which encodes the aforementioned relations, and which extends the well-known MC-nets representation to this setting. We then present a novel decision-making method for decentralized overlapping coalition formation, which exploits probabilistic topic modeling—and, in particular, online Latent Dirichlet Allocation. By interpreting formed coalitions as documents, agents can effectively learn topics that correspond to profitable collaboration structures.


1 Introduction

Cooperative game theory [7] provides a rich framework for the coordination of the actions of self-interested agents. Despite the maturity of the related literature, it is usually assumed that an agent can be a member of exactly one coalition. Nevertheless, in many real-world scenarios this is simply not realistic. In environments where agents hold an amount of a divisible resource (e.g., time, money, computational power), which they can invest to earn utility, it is natural for them to divide that resource in order to simultaneously participate in a number of overlapping coalitions [27, 9, 6, 35, 36, 37, 24], to maximize their profits.

As real-world environments exhibit a high level of uncertainty, it is more natural than not to assume that agents do not have complete knowledge of the utility that can be yielded by every possible team of agents [28, 21, 5, 16]. Moreover, coalitional value is often determined by an underlying structure defined by relations among the members of the coalition. These relations reflect the synergies among the coalition members, and it is natural to posit that agents do not know the exact synergies at work in their coalitions. Against this background, in our setting the coalitional value depends on the amount of resources the agents invest and, crucially, on the explicit relations among coalition members. As such, we build on the idea of marginal contribution nets (MC-nets) [15] and introduce Relational Rules (RRs), a representation scheme for cooperative games with overlapping coalitions. The RR scheme allows for the concise representation of the synergy-dependent coalition value.

Now, an agent can observe the utility earned by the resource offerings of the members of a coalition, but it is a much more complex task to determine her relations with subsets of agents of that coalition. Probabilistic topic modeling (PTM) [3] is a form of unsupervised learning which is particularly suitable for unravelling information from massive sets of documents. Probabilistic topic models infer the probability with which each word of a given “vocabulary” is part of a topic. Intuitively, the words that have high probability in a topic are very likely to appear together in a document that refers to this topic with high probability. Therefore, a topic, which is essentially a probability distribution over the words of a given vocabulary, reveals the underlying hidden structure. One of the most popular PTM algorithms [3] is online Latent Dirichlet Allocation (online LDA) [12], which, as its name indicates, is an online version of the well-known Latent Dirichlet Allocation (LDA) [4] algorithm. LDA is a generative probabilistic model for sets of discrete data, while online LDA can handle documents that arrive in streams, enabling the continuous evolution of the topics.

The method we develop employs online LDA to allow agents to learn how well they can cooperate with others. In our setting, agents repeatedly form overlapping coalitions, as the game takes place over a number of iterations. Thus, we utilize a simple, yet appropriate, protocol, under which in each iteration an agent is (randomly) selected in order to propose (potentially) overlapping coalitions. Agents that use our method take decisions on which coalitions to join by exploiting the topics of the model that they have learned via employing online LDA: by interpreting formed coalitions as documents, represented given an appropriate vocabulary, agents are able to use online LDA to update beliefs regarding the hidden collaboration structure—and thus implicitly learn rewarding synergies with others (synergies which are in our experiments described by RRs). Moreover, agents are able to gain knowledge regarding coalitions that are costly, and should thus be avoided. Hence, agents can, over time, pick partners with which to cooperate effectively. We have evaluated our approach against two reinforcement learning (RL) algorithms we developed for this setting, and which serve as baselines. Our algorithm vastly outperforms the baselines, implying a high degree of accuracy in the beliefs of the agents, and a high quality of agent decisions.

To the best of our knowledge, the recent work of [24] is the only one that has so far approached overlapping coalition formation under uncertainty, but it is concerned with the class of Threshold Task Games [6], which greatly differs from the more general setting we study here. Moreover, ours is the first paper that employs probabilistic topic modeling for multiagent learning: existing literature on multiagent learning [11, 32], in both non-cooperative [23, 13, 14] and cooperative [21, 22, 5, 2] game settings, is largely preoccupied with the study of RL, PAC learning, or simple belief-updating algorithms. As such, this paper introduces an entirely novel paradigm for (decentralized) learning employed by rational autonomous decision makers in multiagent settings.

2 Background and Related Work

In this section, we provide an overview of previous work on overlapping coalition formation, multiagent learning and agent decision-making under uncertainty. Furthermore, we offer the necessary background on Probabilistic Topic Modeling, and in particular (online) Latent Dirichlet Allocation—which is employed in our proposed agent-learning method.

2.1 Overlapping Coalition Formation

Overlapping coalition formation was initially studied in [27], which provided an approximate solution to the corresponding optimal coalition structure generation problem [26], in a setting where the costs of the coalitions and the capabilities of the agents are globally announced. The method proposed in [27] employs concepts from combinatorics and approximation algorithms. Though related, our approach differs in that it is decentralized, since the (overlapping) coalitions are formed by the agents themselves, rather than being provided to the agents by an algorithm. The subsequent work of [9] presented an application of overlapping coalitions in sensor networks: an approximate greedy algorithm with worst-case guarantees is introduced, constituting a real-world example of employing overlapping coalitions. However, in that work the agents do not form coalitions in a completely autonomous manner, since they are, at one step of the algorithm, hardwired to agree on taking a specific action (regarding the choice of the members of the coalition).

As illustrated by the work of [6], which formally introduced cooperative games with overlapping coalitions (or OCF games), whenever an agent can be part of a number of coalitions simultaneously, coalition structures are much more complex than in non-OCF games—and so is the concept of deviation. We provide a further discussion on the richness of OCF games in Section 8. Furthermore, [6] also presents an expressive class of OCF games, threshold task games (TTGs). In TTGs, a coalition achieves a task and earns utility if its members manage to collect a number of resource units that exceeds a given threshold.

TTGs provide the framework of study for the work of [24]. In that work, probability bounds for the resources contributed by members of overlapping coalitions are computed, and subsequently exploited to form (overlapping) coalitions that are deemed, with some probabilistic confidence, capable of carrying out assigned tasks, since they are believed to possess resources exceeding the required threshold. That paper uses Bayesian updating to update agent beliefs regarding partners’ resources following coalition formation and task execution, but no actual machine learning technique is employed.

In a series of works [35, 36, 37] following [6], Zick and colleagues study stability [7] with respect to the behaviour of non-deviating players towards deviators in OCF settings. Several variants of the core are developed, and the approach is based on the notion of arbitration functions, which define the payoff of the deviators according to the attitude of the non-deviators.

A class of games highly related to cooperative games with overlapping coalitions is that of fuzzy coalitional games [1]. In a fuzzy game, an agent can be part of a coalition at various levels, and the coalitional value is defined by the levels at which the agents have joined the coalition. There are a number of differences between overlapping coalition formation and fuzzy games, the biggest one being that in fuzzy games the core is the only acceptable outcome. Finally, coalition structure generation with overlapping coalitions is studied in [34], where a metaheuristic based on particle swarm optimization [10] is developed.

2.2 Uncertainty and Learning

Stochasticity in the value of payoffs in non-overlapping cooperative games has been studied in [28], in a setting where agents have different preferences over a set of random variables; the focus of that study is on core stability. Bayesian coalitional games are introduced in [16], where suitable variations of the core are also defined. In [21, 22] agents have incomplete information regarding the costs that other agents incur by performing a task within a coalition, while the formation of coalitions takes place through information-revealing negotiations and auctions. The formation of overlapping coalitions is not allowed in [21, 22].

One of the very early attempts at learning in cooperative settings was presented in [8], where the dynamics of a set of RL algorithms [29] were studied. A Bayesian approach to reinforcement learning for coalition formation is presented in [5], along with the introduction of a variation of the core. The more recent work of [2] explores a PAC (probably approximately correct) model for obtaining theoretical predictions for the value of coalitions that have not been observed in the past. The links between evolutionary game theory and multiagent reinforcement learning are the topic of study in [31, 19].

Multiagent learning in non-cooperative games [11] has been studied for a longer time. Much of the early seminal work [23, 13, 14] concerns Q-learning algorithms and their convergence to Nash equilibria [25]. In particular, the algorithm presented in [13] is shown to converge to a Nash equilibrium if every state and action has been visited infinitely often and the learning rate satisfies certain conditions regarding the values it takes over time. Overall, the literature on multiagent learning [32], in both cooperative and non-cooperative settings, is largely concerned with the study of reinforcement learning algorithms.

2.3 Probabilistic Topic Modeling

Probabilistic topic models (PTMs) are statistical methods that analyze the words of documents in order to discover the topics (or themes) to which these refer, and the ways the topics interconnect. One hugely popular and successful PTM is Latent Dirichlet Allocation (LDA) [4].

2.3.1 Latent Dirichlet Allocation

We begin by defining basic terms, following [4, 3]:

  • A word is the basic unit of discrete data. A vocabulary consists of V words and is indexed by {1, …, V}; it is fixed and has to be known to the LDA.

  • A document is a series of N words, denoted by w = (w_1, …, w_N), where the n-th word is denoted by w_n.

  • A corpus is a collection of D documents.

  • A topic is a distribution over a vocabulary.

LDA is a Bayesian probabilistic model, the intuition behind it being that a document is a mixture of topics. For each document in the corpus, LDA assumes a generative process in which a random distribution over topics is chosen, and then, for each word in the document, a topic is chosen from that distribution and a word is chosen from that topic. Documents share the same set of topics, but exhibit the topics in different proportions.
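For concreteness, the generative process can be sketched as a minimal simulation in Python/NumPy; the sizes and hyperparameters below (K, V, alpha, eta) are illustrative choices of ours, not values taken from this paper.

import numpy as np

rng = np.random.default_rng(0)
K, V, alpha, eta = 5, 100, 0.1, 0.01   # illustrative: number of topics, vocabulary size, priors

# Topics: each topic is a distribution over the V-word vocabulary.
beta = rng.dirichlet(np.full(V, eta), size=K)          # shape (K, V)

def generate_document(n_words):
    theta = rng.dirichlet(np.full(K, alpha))           # per-document topic proportions
    words = []
    for _ in range(n_words):
        z = rng.choice(K, p=theta)                     # choose a topic for this word
        w = rng.choice(V, p=beta[z])                   # choose a word from that topic
        words.append(w)
    return words

doc = generate_document(50)   # a single synthetic document, as a list of word indices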

While LDA observes only series of words, its objective is to discover the hidden topic structure. It is thus assumed that the generative process includes latent variables. The topics are β_{1:K}, where K is their number; each topic β_k is a distribution over the vocabulary, and β_{k,w} is the probability of word w in topic k. For the d-th document, the topic proportion of topic k is θ_{d,k}, as θ_d is a distribution over the topics. The topic assignments for the d-th document are denoted by z_d, with z_{d,n} being the topic assignment for the n-th word of that document. Thus, β, θ and z are the latent variables of the model, while the only observed variables are the words w, where w_{d,n} is the n-th word observed in the d-th document. Given the documents, the posterior of the topic structure is:

p(β_{1:K}, θ_{1:D}, z_{1:D} | w_{1:D}) = p(β_{1:K}, θ_{1:D}, z_{1:D}, w_{1:D}) / p(w_{1:D})

where the computation of p(w_{1:D}), the probability of seeing the given documents under any topic structure, is intractable [3]. Furthermore, LDA places Dirichlet priors on the latent variables, so that β_k ∼ Dirichlet(η) and θ_d ∼ Dirichlet(α).

Though the posterior, and thus the topic structure as a whole, cannot be computed exactly in an efficient manner, it can be approximated [4]. The two most prominent alternatives for this are Markov Chain Monte Carlo (MCMC) sampling methods [18] and variational inference [17].

In variational inference for LDA, the true posterior is approximated by a simpler distribution q that depends on parameters (matrices) γ, φ and λ, defined as follows:

q(z_{d,n} = k) = φ_{d,w_{d,n},k},   q(θ_d) = Dirichlet(θ_d; γ_d),   q(β_k) = Dirichlet(β_k; λ_k)

The variable n_{d,w} is the number of times that word w has been observed in document d. Parameters γ_d and λ_k are associated with θ_d and β_k respectively, while φ_{d,w,k} denotes the probability (under the distribution q) that the topic assignment of word w in document d is k [4]. The variational inference algorithm minimizes the Kullback–Leibler divergence between the variational distribution and the true posterior. This is achieved by iterating between assigning values to the document-level variables (γ, φ) and updating the topic-level variables (λ).

2.3.2 Online Latent Dirichlet Allocation

In online LDA [12], documents can arrive in batches (streams), and the value of λ is updated by analyzing each batch. The variable ρ_t controls the rate at which the documents of batch t impact the value of λ. Furthermore, the algorithm (Alg. 1) requires at least an estimate of the total number of documents D, in case this is not known in advance. The values of the hyperparameters α and η can be assigned once and remain fixed. Essentially, the probability β_{k,w} of word w in topic k can be estimated as λ_{k,w} / Σ_{w'} λ_{k,w'}.

Initialize λ randomly
for t = 1 to D do
    ρ_t ← (τ0 + t)^(−κ)
    E step:
        Initialize γ_t randomly
        repeat
            Set φ_{t,w,k} ∝ exp{ E_q[log θ_{t,k}] + E_q[log β_{k,w}] }
            Set γ_{t,k} ← α + Σ_w n_{t,w} φ_{t,w,k}
        until the average change in γ_t falls below a small threshold
    M step:
        Compute λ̃_{k,w} = η + D n_{t,w} φ_{t,w,k}
        Set λ ← (1 − ρ_t) λ + ρ_t λ̃
Algorithm 1: Online variational inference for LDA [12].
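To make the topic-level update concrete, the following NumPy sketch shows the M-step blend that online LDA performs after processing one document, assuming the per-document responsibilities φ have already been fitted in the E step. Variable names follow Algorithm 1; this is an illustration under those assumptions, not the reference implementation of [12].

import numpy as np

def online_lda_m_step(lam, phi, n_t, D, t, tau0=1.0, kappa=0.7, eta=0.01):
    """One online-LDA topic update after processing document t.

    lam : (K, V) current topic-word variational parameters (lambda)
    phi : (V, K) word-topic responsibilities for document t (from the E step)
    n_t : (V,)   word counts of document t
    D   : (estimated) total number of documents
    """
    rho_t = (tau0 + t) ** (-kappa)                  # step size; kappa in (0.5, 1] for convergence
    lam_tilde = eta + D * (n_t[:, None] * phi).T    # (K, V) estimate from this document alone
    return (1.0 - rho_t) * lam + rho_t * lam_tilde  # blend the old and the new estimate

def topic_word_probs(lam):
    # beta_{k,w} is estimated by normalizing lambda over the vocabulary.
    return lam / lam.sum(axis=1, keepdims=True)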

3 Relational Rules

Agents have to form coalitions under what we term structural uncertainty. This notion describes the uncertainty agents face regarding the value of synergies among them. Such synergies are, in a non-overlapping setting, concisely described by marginal contribution nets (MC-nets). In MC-nets, a coalitional game is represented by a set of rules of the form Pattern → value, where Pattern is a conjunction of literals (representing the participation or absence of agents); a rule applies to coalition C if C satisfies Pattern, in which case value is added to the coalitional value of C.

We now extend MC-nets to overlapping environments by introducing Relational Rules (RRs). A Relational Rule associates a set of agents A ⊆ N (with N being the set of agents) with a value V ∈ ℝ. For a coalition C, let w_i^C denote the portion of her resource that agent i has invested in C: i.e., w_i^C = r_i^C / r_i, where r_i is the total resource quantity (continuous or discrete) that i holds and r_i^C is the amount she has invested in C. Therefore, w_i^C ∈ [0, 1] (w_i^C = 0 essentially means that i ∉ C), and Σ_C w_i^C ≤ 1, since i can offer at most r_i across all the coalitions she joins.

A rule applies to coalition C if and only if every agent in A has invested in C, and in that case a utility, obtained by scaling V according to the investment portions w_i^C of the agents in A, is added to the coalitional value of C. Note that an agent's total resource quantity does not have to be communicated to C's other members, since a rule is applied by the environment. In non-overlapping games, RRs reduce to MC-nets rules without negative literals, as it then holds that w_i^C = 1 for every member i of C.
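The sketch below only illustrates the general mechanism: a rule fires when all agents in its pattern have invested in the coalition, and contributes a utility that scales its value by the investment portions of those agents. The particular scaling used here (the product of the portions) is an illustrative assumption of ours and not necessarily the exact RR payoff expression.

from dataclasses import dataclass
from math import prod

@dataclass
class RelationalRule:
    agents: frozenset      # the set A of agents appearing in the rule
    value: float           # the value V of the rule

def coalition_value(coalition, rules, total_resources):
    """coalition: dict agent -> resource invested in this coalition.
    total_resources: dict agent -> total resource the agent holds.
    The product-of-portions scaling below is an illustrative assumption."""
    portions = {i: coalition[i] / total_resources[i] for i in coalition}
    value = 0.0
    for rule in rules:
        if rule.agents.issubset(coalition):            # rule applies: all its agents invest in C
            value += rule.value * prod(portions[i] for i in rule.agents)
    return value

# Example: agents 1 and 2 invest half and a third of their resources, respectively.
rules = [RelationalRule(frozenset({1, 2}), 8.0)]
print(coalition_value({1: 5, 2: 2}, rules, {1: 10, 2: 6}))   # 8 * 0.5 * (1/3)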

Example 1.

Assume that , , , and the Relational Rules of the game are:

Let coalition = form, with =  ( = and =  ( = . The value of will be determined by rules (1) and (2), since rule does not apply, as agent . Applying rule (1) to will result in value and applying rule (2) to will result in value . Thus, .

In our setting, the value of a coalition is determined through RRs, but agents do not know the RRs in effect, and hence cannot determine the value of a coalition with certainty. Thus, agents do not know how well they can do with others, and cannot determine their relations just by an observation of a coalitional value. However, in Section 4 we show how PTMs can be exploited so that agents learn the underlying RR-described collaboration structure.

4 Learning by Interpreting Coalitions as Documents

In this section, we present how agents can employ online LDA in order to effectively learn the underlying collaboration structure. We let each agent maintain and train her own online LDA model; thus, there are as many such models in the system as there are agents. The agents' formation decision-making process (Section 5) employs the learned topics.

For each (possibly overlapping) coalition C that forms, every member observes the earned utility u(C). (Note that, to improve readability, we use set notation to refer to coalitions that can in reality be overlapping: these are in fact vectors of the resource quantities that each agent contributes to the coalition [6].) The contribution of agent j to coalition C becomes known to each other member of C once C is successfully formed. However, in order to supply that information to her online LDA model, an agent must maintain a vocabulary. We define the vocabulary of an agent to include one word for each agent (including herself), indicating that agent's contribution, plus two words for the utility, one representing gain and the other representing loss, since the value earned from a coalition can be either positive or negative. Therefore, the vocabulary of an agent consists of n + 2 words, where n is the number of agents. Assuming a game that proceeds in rounds, in each round agent i interprets the coalitional configuration regarding C as a document by "writing" in the document the word that indicates the contribution of agent j as many times as the number of resource units j contributed to C. The restriction of the agents' resource contributions to positive natural numbers is thus necessary when LDA is used, since a word can only appear in a document a discrete number of times. Similarly, agent i "writes" in the document corresponding to C either the word that indicates gain or the one that indicates loss, as many times as the absolute value of the utility earned by the coalition. (The number of times that the word for utility is written may require scaling when its domain ranges from very low to very high values.) Since words are discrete data, the written utility cannot be real-valued; so, we let the actual value earned by C be an integer-rounded version of the value computed by the application of the RRs related to C. The number of documents that an agent passes in an iteration (round) to her online LDA model is equal to the number of coalitions of which she is a member.
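As an illustration of the encoding just described, the following sketch builds the bag-of-words document for one formed coalition; the vocabulary layout (agent words followed by the gain and loss words) is as in the text, while the function name and word indexing are our own illustrative choices.

def coalition_to_document(contributions, utility, n_agents):
    """contributions: dict agent_index -> integer resource invested in the coalition.
    Returns a bag-of-words document over the n_agents + 2 word vocabulary:
    words 0..n_agents-1 stand for the agents' contributions,
    word n_agents stands for 'gain', word n_agents + 1 for 'loss'."""
    doc = []
    for agent, amount in contributions.items():
        doc.extend([agent] * amount)                  # agent's word, repeated once per unit contributed
    utility_word = n_agents if utility >= 0 else n_agents + 1
    doc.extend([utility_word] * abs(round(utility)))  # gain/loss word, repeated |utility| times
    return doc

# Coalition of agents 0 and 1 contributing 3 and 2 units, earning utility 4.
print(coalition_to_document({0: 3, 1: 2}, 4.0, n_agents=2))
# [0, 0, 0, 1, 1, 2, 2, 2, 2]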

Example 2.

Let an agent's vocabulary include the words "ag1", "ag2", "gain" and "loss", corresponding respectively to the contributions of agents 1 and 2, and to positive and negative utility. Thus, for a coalition of agents 1 and 2 with given contributions and earned utility, each agent forms a document that repeats "ag1" and "ag2" as many times as the respective contributions, and the appropriate utility word as many times as the absolute value of the utility.

Example 3.

Following Example 2, let a third agent also participate in the game, so that an agent's vocabulary includes the words "ag1", "ag2", "ag3", "gain" and "loss", with the corresponding meaning of each being as defined in Example 2. Now, for a coalition formed among (a subset of) these agents, each agent forms the corresponding document over this larger vocabulary in the same manner, with the word of any agent who contributes nothing simply not appearing in the document.

Since LDA is a "bag-of-words" model, the order of the words in a document does not matter. The batch of documents with which the online LDA model of agent i is supplied at iteration t consists of the interpreted-as-documents coalitions that i has joined at t. The intuition behind the notion of a topic is that the words that appear in it with high probability are very likely to appear together in a document that exhibits this topic with high probability. Thus, the probability with which the word corresponding to an agent's contribution appears in a topic is correlated with the amount of her contribution. Therefore, the meaning of a topic identified by agent i is that i has observed, in many documents, certain agents who contributed a lot and some that contributed less, and that this configuration results in gain or loss with the corresponding probabilities.

(a) A “profitable” learned topic.
(b) A “non-profitable” learned topic.
Figure 1: Typical topics, as formed by a randomly selected agent at the end of a random iteration in an experiment where an agent's vocabulary consists of n + 2 words. The last two words in a topic indicate the probability of gain and loss respectively, while the rest correspond to the agents' contributions. In (a), the "profitable" learned topic, the word for loss appears with near-zero probability; in (b), the "non-profitable" topic, the word for gain has near-zero probability.

Thus, the topic in Fig. 1(a) implies that if the agent joins a coalition with the agents that appear in that topic with high probability, then that coalition will be profitable. On the other hand, the topic in Fig. 1(b) implies that forming a coalition with the agents that appear in it with high probability would result in loss. Note that learning a topic's profitability corresponds to acquiring information on the RRs associated with that topic. However, these RRs are not explicitly learned; what is learned is the underlying collaboration structure (which might, in the general case, be generated by means other than RRs). It is natural to expect that agents who appear with (relatively) high probability in a topic which has been associated with loss, like the one in Fig. 1(b), will not appear (as a group) with high probability in a topic that has been associated with gain, like the one in Fig. 1(a); such an occurrence would reflect beliefs indicating that cooperation with a group of agents is (paradoxically) both beneficial and harmful. Furthermore, as an agent observes documents that always include the word that corresponds to her own contribution, it is expected that her corresponding word will have a non-trivial probability in her topics.

5 Taking Formation Decisions

We now present OVERPRO, a method for agent decision-making in iterated OVERlapping coalition formation games, via PRObabilistic topic modeling (here, online LDA).

5.1 A Repeated OCF Protocol

The protocol of our game operates in iterations (rounds). At the beginning of an iteration one agent is randomly selected from the set of agents as the proposer. This agent then proposes a number of (overlapping) coalitions, where for each such coalition she offers an integer quantity of her resource and asks for a (possibly different) resource quantity from each agent of that coalition. Therefore, the proposer is asked to pass a list of tuples, each consisting of (i) an n-dimensional vector whose j-th entry denotes the (integer) resource quantity that the proposer asks from agent j for joining the corresponding coalition, and (ii) the amount of resource that the proposer herself offers to that coalition. Naturally, if the proposer does not ask agent j to participate in the coalition, then the j-th entry of the vector is 0. By limiting the agents' resource investments to discrete quantities we disallow the formation of an infinite number of coalitions. Every other agent is then a responder, who gets informed of the proposals in which she is involved and has to respond to each such proposal by either accepting it (and thus offering the requested resource quantity) or rejecting it. A (possibly overlapping) coalition forms if and only if all involved agents accept to participate in it.
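For concreteness, a proposal under this protocol can be represented as follows; the class and field names are our own illustrative choices, not a data structure prescribed by the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class Proposal:
    demands: List[int]   # n-dimensional: demands[j] = integer amount asked from agent j (0 if not asked)
    own_offer: int       # integer amount the proposer invests in this coalition

    def members(self, proposer_id: int):
        """Agents involved in the proposed (possibly overlapping) coalition."""
        return {proposer_id} | {j for j, d in enumerate(self.demands) if d > 0}

# A proposer (agent 0) asks 2 units from agent 1 and 3 units from agent 3, offering 4 herself.
p = Proposal(demands=[0, 2, 0, 3], own_offer=4)
print(p.members(proposer_id=0))   # {0, 1, 3}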

At the end of each round, all coalitions are dissolved and the resources of the agents are replenished. This removes the need for long-term strategic reasoning by the agents—and thus removes unnecessary distractions from the study of the effectiveness of the method used for learning the collaboration structure (which is what we focus on in this paper).

The utility that an agent earns from a coalition is proportional to her contribution: i.e., it equals the total utility earned by the coalition multiplied by the ratio of her contribution to the sum of all members' contributions to that coalition. (The use of more elaborate reward allocation methods is interesting future work.) An agent receives information regarding partners' contributions to, and the total coalitional utility of, her own formed coalitions only.

5.2 The OVERPRO Method

The main idea behind OVERPRO is that an agent exploits her learned LDA topic model for profitable coalition formation. Specifically, by considering as "profitable" ("non-profitable") the topics in which the probability of gain (loss) is higher than the probability of loss (gain), the agent can identify coalitions that will potentially result in gain (loss). Now, it might be that not all of the topics are significant, in the sense that some are not clearly profitable or harmful; this is because some of them might not be well formed (especially in the early iterations of the game). We define a topic to be significant if the absolute value of the difference between the probability of the word representing gain and the probability of the word representing loss is greater than a given threshold. The significant topics of an agent are thus those of her learned topics that satisfy this condition. Furthermore, we define Good (profitable) topics as the significant topics in which the probability of the word representing gain is greater than that of the word representing loss; Bad topics are defined analogously.

With how much probability should an agent appear in a topic in order to be considered significant for that topic? For instance, in Fig. 1(a) not all agents appear in the profitable topic with similar probability values. Note that, due to the initialization of the Dirichlet distributions, each word appears in a topic with positive probability, no matter how small. We define the significant agents of a topic of agent i as those agents whose corresponding words have, in that topic, probability higher than the mean of the probabilities of the agent-words plus one standard deviation of those probabilities.
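The following sketch operationalizes these definitions on a learned topic-word matrix, under the vocabulary layout used above (one word per agent followed by the "gain" and "loss" words); the threshold value in the example is a placeholder of ours, since its exact setting is not reproduced here.

import numpy as np

def analyze_topics(topics, n_agents, threshold):
    """topics: (K, n_agents + 2) row-stochastic matrix; columns n_agents and
    n_agents + 1 are the 'gain' and 'loss' words. Returns per-topic good/bad
    labels and significant agents, following the definitions in the text."""
    gain, loss = topics[:, n_agents], topics[:, n_agents + 1]
    significant = np.abs(gain - loss) > threshold
    good = significant & (gain > loss)
    bad = significant & (loss > gain)

    agent_probs = topics[:, :n_agents]
    cutoffs = agent_probs.mean(axis=1) + agent_probs.std(axis=1)   # mean + one std, per topic
    significant_agents = [set(np.flatnonzero(agent_probs[k] > cutoffs[k]))
                          for k in range(topics.shape[0])]
    return good, bad, significant_agents

# Two toy topics over 3 agents plus the gain/loss words.
topics = np.array([[0.50, 0.30, 0.02, 0.15, 0.03],    # looks profitable
                   [0.02, 0.30, 0.50, 0.03, 0.15]])   # looks non-profitable
print(analyze_topics(topics, n_agents=3, threshold=0.1))
# (array([ True, False]), array([False,  True]), [{0}, {2}])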

Given the above, the approach of OVERPRO is that a proposer proposes one coalition for every good topic (proposing just the one coalition corresponding to the most profitable topic is risky, since a single rejection would mean formation failure), with the proposed members of each coalition being the significant agents of the corresponding topic. The resource quantity that the proposer offers to such a coalition is proportional to her own weight in the topic in which the coalition was identified as profitable, and to that topic's profitability. Thus, a proposer's contribution to a coalition is affected both by her effect on the corresponding topic and by the profitability of this topic. The proposer asks each proposed member to offer a quantity determined analogously from that member's weight in the topic, since she assumes that others will respond according to the topics that she herself has observed.

Now, an agent often faces the dilemma of either exploiting her best-so-far action, or exploring different options [29]. We deal with this issue by allowing an agent to do both at the same time, since her resource is divisible. Specifically, at each iteration the proposer dedicates a fraction of her resource to exploration and the rest to exploitation (with the exploitation part defined as described above). An agent performs exploration by proposing random coalitions, offering to each (and asking from each of its participating agents) the minimum possible resource quantity.

In each iteration, responders receive the proposals in which they are involved, and decide, for each proposal in turn, whether to accept it (investing the requested resource quantity) or reject it (offering nothing). OVERPRO employs a parameter so that an agent rejects a proposed coalition if she identifies a non-profitable (bad) topic in which at least that fraction of the coalition's agents are significant. The intuition behind this parameter is that it suffices to observe a certain percentage of the agents of a proposed coalition in a "non-profitable" topic in order to reject it. The parameter can take different values at different iterations. As agents make more observations, and thus become more confident in their beliefs over time, they gradually become stricter about whom they cooperate with, and so the parameter's value decreases with the iteration number. If a proposal to form a coalition is not rejected, it is then checked whether there is a profitable (good) topic in which at least the same fraction of the coalition's agents are significant. Since a responder has to split her resources among the proposals she has received, a proposal associated with a profitable topic as described above becomes an item of a KNAPSACK problem that the responder has to solve: the value of the item is the responder's share of the coalition's profit multiplied by the profitability of the corresponding topic, the weight of the item is the requested resource quantity, and the constraint is that the responder cannot invest more than her total resource. Thus, the responder accepts the coalitions which correspond to the items selected by solving the KNAPSACK problem, and rejects the rest. Although KNAPSACK is an NP-hard problem, the pseudopolynomial dynamic programming algorithm [20], which we used in our experiments, often admits reasonable running times, while an alternative is to use an FPTAS. After the decisions regarding the coalitions identified by profitable topics are made, the responder replies positively (if there is sufficient resource quantity remaining) to an offer which has been neither accepted nor rejected (no relevant information found) if the requested quantity is the minimum possible or, if not, with a given probability (in an exploratory sense).
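The responder's acceptance step thus reduces to a 0/1 KNAPSACK instance; a standard pseudopolynomial dynamic program (in the spirit of [20]) suffices, sketched below with generic item values and integer weights.

def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming: O(len(values) * capacity).
    Returns the indices of the selected items (here: proposals to accept)."""
    n = len(values)
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                take = best[i - 1][c - weights[i - 1]] + values[i - 1]
                if take > best[i][c]:
                    best[i][c] = take
    # Backtrack to recover the chosen items.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return chosen[::-1]

# Three proposals: (value, requested quantity); the responder holds 5 units in total.
print(knapsack(values=[3.0, 4.5, 2.0], weights=[2, 3, 2], capacity=5))   # [0, 1]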

The training of LDA, and consequently that of online LDA, takes polynomial time in the number of documents and topics [4]. Although the number of documents depends on the resource quantities of the agents, which are numeric values and thus imply pseudopolynomial complexity, in practice the number of documents formed is far from the worst case, as attested by our experimental results. The "exploitation" part of OVERPRO takes polynomial time in the number of topics and agents, while the exploration part, which is independent of OVERPRO and can be replaced by a scheme of one's choosing, takes pseudopolynomial time, as it depends linearly on the proposer's resource quantity.

Furthermore, convergence of the topics is guaranteed [12] if the online LDA parameter κ lies in (0.5, 1]. Therefore, by assigning such a value to κ and letting the resource portion that an agent dedicates to exploration decrease over time, we obtain as a corollary that the actions of agents employing OVERPRO converge.

6 Reinforcement Learning for OCF

To the best of our knowledge, this is the first work on (decentralized) multiagent learning for overlapping coalition formation under uncertainty that is not restricted to the context of Threshold Task Games [24], and thus there is no existing algorithm to use as a means for comparison. To this end, we have developed a Greedy top-k algorithm and a Q-learning style [33, 8] algorithm, and use these as baselines.

6.1 Greedy top-k algorithm

An agent that uses our Greedy top-k algorithm maintains the k most profitable coalitions she has observed, along with their values and the resources offered by the participating agents. A proposer makes k proposals, one for each of the top-k coalitions. The resource she offers to such a coalition is proportional to the amount she previously offered to it and to the coalition's weight derived by applying the softmax function [29] over the maintained values (observed utilities) of the top-k coalitions; she asks from each other agent an amount equal to her own offer, multiplied by the ratio of their previous offerings (with the other agent's offering in the numerator and the proposer's in the denominator) and by a random factor. The approach to the exploitation-vs-exploration problem is exactly the same as in OVERPRO, and the rejection parameter is also employed here. A responder adds a proposal to the input of a KNAPSACK problem if at least the corresponding fraction of the agents in the proposed coalition appear in one of the top-k coalitions. The value of the KNAPSACK item is equal to the sum of the values of the top-k coalitions in which the agents of the proposed coalition were identified, multiplied by the responder's share; the weight of the item is, naturally, the requested quantity. A responder accepts a proposal which was not included in the input of the KNAPSACK instance, and for which there is sufficient remaining resource, if the requested quantity is the minimum possible, or else with a given probability.

6.2 Q-learning for OCF

An agent that uses our Q-learning algorithm employs two distinct kinds of Q-values: the first maintains agent-level values, while the second maintains coalition-size-level values. Employing two different sets of Q-values is necessary, since the alternative of maintaining a Q-value for every possible coalition requires space exponential in the number of agents (rendering the problem practically intractable in large settings). An agent maintains an agent-level value for every other agent, and a size-level value for every possible coalition size; keeping an agent-level value for herself is redundant, since the decision-maker always includes herself in a coalition. Furthermore, a learning rate that depends on the game iteration is employed [29], as is common in Q-learning. After a coalition is formed and its coalitional value is observed, an agent updates both kinds of Q-values accordingly.

A proposer employing our Q-learning algorithm iteratively selects some quantity of her resource, at random, to offer to a coalition, until her resource is depleted. For each such coalition, its size (excluding the proposer herself) is selected using the softmax function over the coalition-size-level Q-values, and the agents to include in it are then selected using the softmax function over the agent-level Q-values. The proposer asks from each proposed member a quantity proportional to the one she has offered to the coalition herself. Exploration is employed in the same way as in the other methods.
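Both baselines make use of the softmax function over maintained values; a generic sampling helper in that spirit is sketched below (the temperature parameter and the toy values are our own illustrative additions).

import numpy as np

def softmax_choice(values, rng, temperature=1.0):
    """Sample an index with probability proportional to exp(value / temperature)."""
    v = np.asarray(values, dtype=float) / temperature
    v -= v.max()                          # subtract the max for numerical stability
    probs = np.exp(v) / np.exp(v).sum()
    return rng.choice(len(values), p=probs)

rng = np.random.default_rng(0)
size_values = [0.2, 1.5, 0.7]             # e.g., coalition-size-level Q-values
agent_values = [0.1, 0.9, 0.3, 1.2]       # e.g., agent-level Q-values
size = softmax_choice(size_values, rng) + 1       # sampled index mapped to a coalition size
first_member = softmax_choice(agent_values, rng)  # one softmax draw over the agent-level values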

A responder also has to solve a KNAPSACK problem, where a proposal regarding a coalition is given as input only if the relevant Q-value estimate for that coalition is positive, with the item's value derived from that estimate and its weight being the requested resource quantity. If the estimate is negative, the responder accepts joining the coalition (if she can afford it) only if the requested quantity is the minimum possible, or else with a given probability.

7 Experimental Evaluation

We evaluated OVERPRO's effectiveness and robustness in environments with 50 and 250 agents. Agent resource quantities were generated uniformly at random, and a set of RRs was generated for each setting, with the value of each RR drawn uniformly at random. We added stochasticity to our setting, so that with a (small) probability the value of a coalition, as computed by applying the RRs, is multiplied by a randomly generated factor. Every game ran for 1000 iterations, which bounds the number of documents an agent can observe. Coding was in Python 3 and online LDA was implemented as in [12] (https://github.com/blei-lab/onlineldavb). The same exploration rate was set for all methods, decreasing quadratically over the iterations; the rejection parameter likewise decreases linearly over the iterations for every method. We tested OVERPRO, which requires the number of topics (in some LDA implementations this value is automatically derived [30], but we use the standard online LDA algorithm, which requires it as a parameter) and the parameters τ0 and κ (which determine the impact of a batch of documents on the topics), as well as Q-learning, which requires a learning rate, for a number of different parameter values; for Greedy top-k we set values of k equal to the numbers of topics used for OVERPRO. The significance threshold used in OVERPRO was set as a function of the vocabulary's length. Experiments ran on a grid (each execution instance ran sequentially) with 4GB RAM, 2.6GHz computers.

method         sw       participation   time (sec)
OVERPRO        95.60    24.91           0.525
OVERPRO        117.27   24.97           0.528
OVERPRO        108.59   25.09           0.540
OVERPRO        119.47   25.17           0.543
Greedy top-k   34.54    37.70           0.363
Greedy top-k   51.72    37.64           0.366
Q-learning     14.53    38.15           0.009
Q-learning     10.69    38.08           0.009
Table 1: Results (averages over runs) in the 50-agent setting, for different values of the number of topics and of τ0, κ for OVERPRO, of k for Greedy top-k, and of the learning rate for Q-learning (rows within each method correspond to different parameter settings). Participation and time are per agent per iteration (there is a unique proposer in each iteration).

In Table 1 we present, for the 50-agent setting, the average: social welfare (total utility) earned in a game (sw); number of coalitions in which an agent participates in a round (participation); and game completion time per agent per iteration.

As observed in Table 1, OVERPRO vastly outperforms both Greedy top-k and Q-learning in terms of social welfare. For the best set of OVERPRO parameters, the average social welfare earned in a game (119.47) was more than double that earned when the best alternative, Greedy top-k, was employed (51.72). Thus, we can conclude that even a small stochasticity probability can have a largely negative impact on Greedy top-k; at the same time, this demonstrates the robustness of OVERPRO. Q-learning performed very poorly for both learning-rate settings, as in both cases the social welfare was not far above zero (14.53 and 10.69). This suggests a deficiency in matching good agent-level Q-values to coalition-size-level ones, and thus the unsuitability of Q-learning approaches when Q-values cannot be maintained for every coalition. For both numbers of topics, the social welfare was better for the higher values of τ0 and κ than for the lower ones. Since τ0 and κ determine the impact that a batch of documents has on the formation of the topics, ρ_t can be interpreted as a learning rate; higher values of τ0 and κ result in smaller values of ρ_t. Therefore, it can be conjectured that lower learning rates are preferable to higher ones.

Despite the better social welfare performance of OVERPRO against the alternatives, the average agent participation per iteration is lower when OVERPRO is employed, as seen in Table 1. In particular, an agent using OVERPRO joins about 25 coalitions per iteration, while one using either Greedy top-k or Q-learning joins about 38. By the end of a game, an agent employing OVERPRO will have trained her online LDA model with roughly 25,000 documents, since one coalition corresponds to one document, an agent participates in about 25 coalitions per round, and the game lasts 1000 iterations. Notice that the number of coalitions in which an agent participates when OVERPRO is employed is much smaller than her resource quantity, which upper-bounds the number of coalitions she can join (as each requires an integer contribution). Thus, the number of documents is much smaller than the maximum possible, which implies that the pseudopolynomial complexity related to the number of documents does not have an actual impact. The time taken per agent per iteration is less than 0.55 sec for OVERPRO and less than 0.37 sec for Greedy top-k, while it is about two orders of magnitude lower for Q-learning.

Now, one cannot draw accurate conclusions regarding the real power of an agent decision-making algorithm relying solely on social welfare. For instance, when more coalitions form, this will likely have a positive impact on social welfare—but rational agents aim to maximize their own utility. Therefore, we define efficiency as the ratio of social welfare (total utility) to total resource quantity invested by all agents in every coalition in a round. This efficiency metric is natural, as it takes the focus away from social welfare.

Figure 2: Average efficiency, defined as the ratio of social welfare to the total resource quantity invested by all agents. For OVERPRO the number of topics, and for Greedy top-k the value of k, is denoted on the left of each bar. Results are averages over all rounds over multiple runs, for both the 50-agent and the 250-agent settings.

It can be observed in Fig. 2 that OVERPRO, in the 50-agent setting (orange bars), outperforms both Greedy top-k and Q-learning in terms of efficiency. In particular, the highest OVERPRO efficiency value is more than double the best efficiency value among the alternatives, which is achieved by Greedy top-k. We can thus conclude that agents employing OVERPRO are more efficient in terms of earning utility (as a function of resources invested), as they focus more on coalitions identified as profitable.

Now, as depicted by the blue bars in Fig. 2, OVERPRO achieves even better efficiency for 250 agents than for 50, and thus appears to effectively exploit the richer emerging collaboration structure. Moreover, we observed through experimentation that the number of topics should increase sublinearly with the number of agents, and we chose its values accordingly; the values of k for Greedy top-k were set to match. We observe that for 250 agents, for either choice of the number of topics, OVERPRO's efficiency vastly outperforms that of the RL algorithms, while at the same time supporting our conjecture that lower learning rates are associated with increased performance. Agents employing Q-learning performed better for 250 agents than for 50 in terms of efficiency, but their performance was still very far below that of the ones using OVERPRO. The efficiency of Greedy top-k deteriorated for 250 agents (dropping markedly for one of the values of k), as it fails to identify and exploit patterns in the collaboration structure, and it is more difficult for the method to identify profitable coalitions in this larger setting. Each agent using OVERPRO had trained her online LDA model with many thousands of documents (coalitions) in total by the end of a game, and OVERPRO's per-agent, per-iteration running time remained comparable in this larger setting.

8 Discussion

One of the aims of this paper is to provide insights towards new research directions. In particular, this work diverges from the standard RL/MDP paradigm used for multiagent learning in games. As such, we expect that it will raise intriguing questions, and bears the potential for the development of exciting new theory to be applied to challenging problems. It is, for one, interesting to study the effect of adopting PTMs for multiagent learning. Taking a reverse point of view, it is of interest to examine the effect that specific multiagent environments can have on PTM properties. For instance, the convergence property of online LDA transfers "for free" to OVERPRO, but in certain environments, or under additional assumptions on the structure of the game, the convergence rate might be different.

Moreover, a somewhat "orthogonal" contribution of this paper is the introduction of the novel concept of Relational Rules (RRs), which constitute a natural scheme for representing synergies in overlapping settings. As such, pursuing their further study could lead to new stability results for overlapping cooperative games.

Indeed, the concept of stability in games with overlapping coalitions is quite elaborate and differs from that in games with disjoint coalitions, since it is not just the membership of an agent in a coalition that matters, but also the degree to which she participates in it; and since the number of different coalition structures cannot be enumerated in such settings [6]. Additionally, in such settings it is not just agents or coalitions that can deviate, but entire coalition structures, since agents can withdraw just a portion of their resources from the formed coalitions. All these render the study of stability challenging in OCF domains. Though we did not pursue the study of OCF stability in this paper, it would be interesting to define and study stability concepts that take into account agent preferences regarding collaboration structures learned using our method.

9 Conclusions and Future Work

We have presented a novel approach for multiagent learning in cooperative game environments, where probabilistic topic modeling is exploited. Furthermore, this is the first work to tackle overlapping coalition formation under uncertainty, where the uncertainty is on the relations entailing synergies among the agents. To this end, we first proposed Relational Rules, a representation scheme which extends MC-nets to cooperative games with overlapping coalitions; and then showed how to use online LDA to implicitly learn the agents’ synergies described by (unknown) RRs. Simulations confirm the method’s effectiveness.

As immediate future work, we intend to test these ideas in non-transferable utility settings. Moreover, we would like to apply our method to non-cooperative environments. Naturally, this would require adjustments to the “vocabulary” used in “documents” representing coalitions. Finally, we intend to apply alternative PTM algorithms, to this or different game theoretic settings.

References

  • [1] Jean-Pierre Aubin. Cooperative fuzzy games. Mathematics of Operations Research, 6(1):1–13, 1981.
  • [2] Maria-Florina Balcan, Ariel D Procaccia, and Yair Zick. Learning cooperative games. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 475–481, 2015.
  • [3] David M Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012.
  • [4] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
  • [5] Georgios Chalkiadakis and Craig Boutilier. Bayesian reinforcement learning for coalition formation under uncertainty. In Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1090–1097, 2004.
  • [6] Georgios Chalkiadakis, Edith Elkind, Evangelos Markakis, Maria Polukarov, and Nick R Jennings. Cooperative games with overlapping coalitions. JAIR, 39(1):179–216, 2010.
  • [7] Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Computational aspects of cooperative game theory. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2011.
  • [8] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In AAAI, pages 746–752, 1998.
  • [9] Viet Dung Dang, Rajdeep K Dash, Alex Rogers, and Nicholas R Jennings. Overlapping coalition formation for efficient data fusion in multi-sensor networks. In AAAI, pages 635–640, 2006.
  • [10] Russ C Eberhart, James Kennedy, et al. A new optimizer using particle swarm theory. In Proc. of the 6th international symposium on micro machine and human science, volume 1, pages 39–43. New York, NY, 1995.
  • [11] Drew Fudenberg and David K Levine. The theory of learning in games. MIT press, 1998.
  • [12] Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent dirichlet allocation. In NIPS, pages 856–864, 2010.
  • [13] Junling Hu and Michael P Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In Proceedings of the 15th International Conference on Machine Learning, pages 242–250, 1998.
  • [14] Junling Hu and Michael P Wellman. Nash q-learning for general-sum stochastic games. JMLR, pages 1039–1069, 2003.
  • [15] Samuel Ieong and Yoav Shoham. Marginal contribution nets: a compact representation scheme for coalitional games. In Proc. of the 6th Conference on Electronic Commerce, pages 193–202, 2005.
  • [16] Samuel Ieong and Yoav Shoham. Bayesian coalitional games. In AAAI, pages 95–100, 2008.
  • [17] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999.
  • [18] Michael Irwin Jordan. Learning in graphical models. Springer Science & Business Media, 1998.
  • [19] Michael Kaisers and Karl Tuyls. Replicator dynamics for multi-agent learning: an orthogonal approach. In International Workshop on Adaptive and Learning Agents, pages 49–59. Springer, 2009.
  • [20] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack problems. Springer, Berlin, 2004.
  • [21] Sarit Kraus, Onn Shehory, and Gilad Taase. Coalition formation with uncertain heterogeneous information. In Proceedings of the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1–8, 2003.
  • [22] Sarit Kraus, Onn Shehory, and Gilad Taase. The advantages of compromising in coalition formation with incomplete information. In Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multiagent Systems, pages 588–595, 2004.
  • [23] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning, pages 157–163, 1994.
  • [24] Michail Mamakos and Georgios Chalkiadakis. Probability bounds for overlapping coalition formation. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 331–337, 2017.
  • [25] John Nash. Non-cooperative games. Annals of mathematics, pages 286–295, 1951.
  • [26] Talal Rahwan, Tomasz P Michalak, Michael Wooldridge, and Nicholas R Jennings. Coalition structure generation: A survey. Artificial Intelligence, 229:139–174, 2015.
  • [27] Onn Shehory and Sarit Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence, 101(1):165–200, 1998.
  • [28] Jeroen Suijs, Peter Borm, Anja De Waegenaere, and Stef Tijs. Cooperative games with stochastic payoffs. European Journal of Operational Research, 113(1):193–205, 1999.
  • [29] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press Cambridge, 1998.
  • [30] Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. Hierarchical dirichlet processes. Journal of the American Statistical Association, pages 1566–1581, 2006.
  • [31] Karl Tuyls and Simon Parsons. What evolutionary game theory tells us about multiagent learning. Artificial Intelligence, 171(7):406–416, 2007.
  • [32] Karl Tuyls and Gerhard Weiss. Multiagent learning: Basics, challenges, and prospects. AI Magazine, 33(3):41, 2012.
  • [33] Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8:279–292, 1992.
  • [34] Guofu Zhang, Jianguo Jiang, Zhaopin Su, Meibin Qi, and Hua Fang. Searching for overlapping coalitions in multiple virtual organizations. Information Sciences, 180(17):3140–3156, 2010.
  • [35] Yair Zick, Georgios Chalkiadakis, and Edith Elkind. Overlapping coalition formation games: Charting the tractability frontier. In Proc. of the 11th International Joint Conference on Autonomous Agents and Multiagent Systems, pages 787–794, 2012.
  • [36] Yair Zick and Edith Elkind. Arbitrators in overlapping coalition formation games. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, pages 55–62, 2011.
  • [37] Yair Zick, Evangelos Markakis, and Edith Elkind. Arbitration and stability in cooperative games with overlapping coalitions. Journal of Artificial Intelligence Research, 50:847–884, 2014.