Distributed Learning in Multi-Armed Bandit with Multiple Players


Keqin Liu,     Qing Zhao
University of California, Davis, CA 95616
{kqliu, qzhao}@ucdavis.edu
Abstract

We formulate and study a decentralized multi-armed bandit (MAB) problem. There are M distributed players competing for N independent arms. Each arm, when played, offers i.i.d. reward according to a distribution with an unknown parameter. At each time, each player chooses one arm to play without exchanging observations or any information with other players. Players choosing the same arm collide, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. We show that the minimum system regret of the decentralized MAB grows with time at the same logarithmic order as in the centralized counterpart where players act collectively as a single entity by exchanging observations and making decisions jointly. A decentralized policy is constructed to achieve this optimal order while ensuring fairness among players and without assuming any pre-agreement or information exchange among players. Based on a Time Division Fair Sharing (TDFS) of the M best arms, the proposed policy is constructed and its order optimality is proven under a general reward model. Furthermore, the basic structure of the TDFS policy can be used with any order-optimal single-player policy to achieve order optimality in the decentralized setting. We also establish a lower bound on the system regret growth rate for a general class of decentralized policies, to which the proposed policy belongs. This problem finds potential applications in cognitive radio networks, multi-channel communication systems, multi-agent systems, web search and advertising, and social networks.

This work was supported by the Army Research Laboratory NS-CTA under Grant W911NF-09-2-0053, by the Army Research Office under Grant W911NF-08-1-0467, and by the National Science Foundation under Grants CNS-0627090 and CCF-0830685. Part of this result will be presented at ICASSP 2010.

Index Terms: Decentralized multi-armed bandit, system regret, distributed learning, cognitive radio, web search and advertising, multi-agent systems.

I Introduction

I-A The Classic MAB with A Single Player

In the classic multi-armed bandit (MAB) problem, there are N independent arms and a single player. Playing arm i (i = 1, …, N) yields i.i.d. random rewards with a distribution parameterized by an unknown θ_i. At each time, the player chooses one arm to play, aiming to maximize the total expected reward in the long run. Had the reward model of each arm been known to the player, the player would have always chosen the arm that offers the maximum expected reward. Under an unknown reward model, the essence of the problem lies in the well-known tradeoff between exploitation (aiming at gaining immediate reward by choosing the arm that is suggested to be the best by past observations) and exploration (aiming at learning the unknown reward models of all arms to minimize the mistake of choosing an inferior arm in the future).

Under a non-Bayesian formulation where the unknown parameters are considered deterministic, a commonly used performance measure of an arm selection policy is the so-called regret or the cost of learning defined as the reward loss with respect to the case with known reward models. Since the best arm can never be perfectly identified from finite observations (except certain trivial scenarios), the player can never stop learning and will always make mistakes. Consequently, the regret of any policy grows with time.

An interesting question posed by Lai and Robbins in 1985 [1] concerns the minimum rate at which the regret grows with time. They showed in [1] that the minimum regret grows at a logarithmic order under certain regularity conditions. The best leading constant was also obtained, and an optimal policy was constructed under a general reward model to achieve the minimum regret growth rate (both the logarithmic order and the best leading constant). In 1987, Anantharam et al. extended Lai and Robbins’s results to MAB with multiple plays: exactly M (1 ≤ M < N) arms are played simultaneously at each time [2]. They showed that allowing multiple plays changes only the leading constant but not the logarithmic order of the regret growth rate. They also extended Lai–Robbins policy to achieve the optimal regret growth rate under multiple plays.

I-B Decentralized MAB with Distributed Multiple Players

In this paper, we formulate and study a decentralized version of the classic MAB, where we consider M (M < N) distributed players. At each time, a player chooses one arm to play based on its local observation and decision history. Players do not exchange information on their decisions and observations. Collisions occur when multiple players choose the same arm, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. The objective is to design distributed policies for each player in order to minimize, under any unknown parameter set Θ = (θ_1, …, θ_N), the rate at which the system regret grows with time. Here the system regret is defined as the reward loss with respect to the maximum system reward obtained under a known reward model and with centralized scheduling of the players.

The single-player MAB with M multiple plays considered in [2] is equivalent to a centralized MAB with M players. If all M players can exchange observations and make decisions jointly, they act collectively as a single player who has the freedom of choosing M arms simultaneously. As a direct consequence, the logarithmic order of the minimum regret growth rate established in [2] provides a lower bound on the minimum regret growth rate in a decentralized MAB where players cannot exchange observations and must make decisions independently based on their individual local observations. (While intuitive, this equivalence requires the condition that the M best arms have nonnegative means; see Sec. IV for details.)

I-C Main Results

In this paper, we show that in a decentralized MAB where players can only learn from their individual observations and collisions are bound to happen, the system can achieve the same logarithmic order of the regret growth rate as in the centralized case. Furthermore, we show that this optimal order can be achieved under a fairness constraint that requires all players to accrue reward at the same rate.

A decentralized policy is constructed to achieve the optimal order under the fairness constraint. The proposed policy is based on a Time Division Fair Sharing (TDFS) of the M best arms where no pre-agreement on the time sharing schedule is needed. It is constructed and its order optimality is proven under a general reward model. The TDFS decentralized policy thus applies to and maintains its order optimality in a wide range of potential applications as detailed in Sec. I-D. Furthermore, the basic structure of TDFS is not tied to a specific single-player policy. It can be used with any single-player policy to achieve an efficient and fair sharing of the M best arms among distributed players. More specifically, if the single-player policy achieves the optimal logarithmic order in the centralized setting, then the corresponding TDFS policy achieves the optimal logarithmic order in the decentralized setting. The order optimality of the TDFS policy is also preserved when players’ local policies are built upon different order-optimal single-player policies.

We also establish a lower bound on the leading constant in the regret growth rate for a general class of decentralized policies, to which the proposed TDFS policy belongs. This lower bound is tighter than the trivial bound provided by the centralized MAB considered in [2], which indicates, as one would expect, that a decentralized MAB is likely to incur a larger leading constant than its centralized counterpart.

I-D Applications

With its general reward model, the TDFS policy finds a wide range of potential applications. We give a few examples below.

Consider first a cognitive radio network where secondary users independently search for idle channels temporarily unused by primary users [3]. Assume that the primary system adopts a slotted transmission structure and the state (busy/idle) of each channel can be modeled by an i.i.d. Bernoulli process. At the beginning of each slot, multiple distributed secondary users need to decide which channel to sense (and subsequently transmit if the chosen channel is idle) without knowing the channel occupancy statistics (i.e., the mean of the Bernoulli process). An idle channel offers one unit of reward. When multiple secondary users choose the same channel, none or only one receives reward depending on whether carrier sensing is implemented.

Another potential application is opportunistic transmission over wireless fading channels [4]. In each slot, each user senses the fading realization of a selected channel and chooses its transmission power or data rate accordingly. The reward can model energy efficiency (for fixed-rate transmission) or throughput. The objective is to design distributed channel selection policies under unknown fading statistics.

Consider next a multi-agent system, where each agent is assigned to collect targets among multiple locations (for example, ore mining). When multiple agents choose the same location, they share the reward according to a certain rule. The log-Gaussian distribution with an unknown mean may be used as the reward model when the target is fish [5] or ore [6].

Another potential application is Internet advertising where multiple competing products select Web sites to post advertisements. The reward obtained from a selected Web site is measured by the number of interested viewers, whose distribution can be modeled as Poisson [7].

I-E Related Work

In the context of the classic MAB, there have been several attempts at developing index-type policies that are simpler than Lai–Robbins policy by using a single sample-mean-based statistic [8,9]. However, such a simpler form of the policy was obtained at the price of a more restrictive reward model and/or a larger leading constant in the logarithmic order. In particular, the index policy proposed in [8] mainly targets several members of the exponential family of reward distributions. Auer et al. proposed in [9] several index-type policies that achieve order optimality for reward distributions with a known finite support.

Under a Bernoulli reward model in the context of cognitive radio, MAB with multiple players was considered in [10] and [11]. In [10], a heuristic policy based on histogram estimation of the unknown parameters was proposed. This policy provides a linear order of the system regret rate, and thus cannot achieve the maximum average reward. In [11], Anandkumar et al. have independently established order-optimal distributed policies by extending the index-type single-user policies proposed in [9]. Compared to [11], the TDFS policy proposed here applies to more general reward models (for example, Gaussian and Poisson reward distributions that have infinite support). It thus has a wider range of potential applications, as discussed in Sec. I-D. Furthermore, the policies proposed in [11] are specific to the single-player policies proposed in [9], whereas the TDFS policy can be used with any order-optimal single-player policy to achieve order optimality in the decentralized setting. Another difference between the policies proposed in [11] and the TDFS policy concerns user fairness. The policies in [11] orthogonalize users into different channels that offer different throughput, whereas the TDFS policy ensures that each player achieves the same time-average reward at the same rate. One policy given in [11] does offer probabilistic fairness in the sense that all users have the same chance of settling in the best channel. However, given that the policy operates over an infinite horizon and the order optimality is asymptotic, each user only sees one realization in its lifetime, leading to different throughput among users. A lower bound on the achievable growth rate of the system regret is also given in [11], which is identical to the bound developed here. The derivation of the lower bound in this work, however, applies to a more general class of policies and, first given in [12], precedes that in [11].
In terms of using collision history to orthogonalize players without pre-agreement, the basic idea used in this work is similar to that in [11]. The difference is that in the TDFS policy, the players are orthogonalized via settling at different offsets in their time-sharing schedule, while in [11], players are orthogonalized to different channels. Recently, results obtained in [11] have been extended to incorporate an unknown number of users under the Bernoulli reward model in the context of cognitive radio [13].

A variation of centralized MAB in the context of cognitive radio has been considered in [14], where a channel offers independent Bernoulli rewards with different (unknown) means for different users. This more general model captures contention among secondary users that are in the communication range of different sets of primary users. A centralized policy that assumes full information exchange and cooperation among users is proposed, which achieves the logarithmic order of the regret growth rate. We point out that the logarithmic order of the TDFS policy is preserved in the decentralized setting when we allow players to experience different reward distributions on the same arm, provided that players have the common set of the M best arms and each of the M best arms has the same mean across players. It would be interesting to investigate whether the same holds when players experience different means on each arm.

I-F Notations

Let |S| denote the cardinality of a set S. For two sets A and B, let A ∖ B denote the set consisting of all elements in A that do not belong to B. For two positive integers k and l, define k ⊘ l ≜ ((k − 1) mod l) + 1, which is an integer taking values from 1 to l. Let P_θ denote the probability measure when the unknown parameter in the associated distribution equals θ.
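As a quick illustration, the 1-indexed circular operator defined in this subsection can be written as a one-line helper (the function name `circ` is hypothetical, and the form ((k − 1) mod l) + 1 is the definition assumed above):

```python
def circ(k: int, l: int) -> int:
    """1-indexed circular map: returns ((k - 1) mod l) + 1, an integer in {1, ..., l}."""
    return ((k - 1) % l) + 1
```

For example, cycling through l = 3 positions, consecutive values of k map to 1, 2, 3, 1, 2, 3, ….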

II Classic Results on Single-Player MAB

In this section, we give a brief review of the main results established in [1, 2, 8, 9] on the classic MAB with a single player.

II-A System Regret

Consider an N-arm bandit with a single player. At each time t, the player chooses exactly M (1 ≤ M < N) arms to play. Playing arm n yields i.i.d. random reward S_n(t) drawn from a univariate density function f(s; θ_n) parameterized by θ_n. The parameter set Θ = (θ_1, …, θ_N) is unknown to the player. Let μ(θ) denote the mean of S_n(t) under the density function f(s; θ). Let I(θ, θ′) denote the Kullback–Leibler distance that measures the dissimilarity between two distributions parameterized by θ and θ′, respectively.

An arm selection policy π = (π(1), π(2), …) is a series of functions, where π(t) maps the previous observations of rewards to the current action that specifies the set of M arms to play at time t. The system performance under policy π is measured by the system regret R_T^π(Θ), defined as the expected total reward loss up to time T under policy π compared to the ideal scenario where Θ is known to the player (thus the M best arms are played at each time). Let σ be a permutation of {1, …, N} such that μ(θ_σ(1)) ≥ μ(θ_σ(2)) ≥ ⋯ ≥ μ(θ_σ(N)). We have

R_T^π(Θ) = T Σ_{j=1}^M μ(θ_σ(j)) − E_π[Σ_{t=1}^T Y(t)],

where Y(t) is the random reward obtained at time t under action π(t), and E_π denotes the expectation with respect to policy π. The objective is to minimize the rate at which R_T^π(Θ) grows with T under any parameter set Θ by choosing an optimal policy π*.
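The regret definition above can be mirrored empirically: the ideal reward (the horizon length times the sum of the M best means) minus the reward actually accrued. A minimal sketch, where `empirical_regret` is a hypothetical helper and realized rewards stand in for the expectation:

```python
def empirical_regret(best_means, rewards):
    """Realized regret over a horizon of T = len(rewards) slots:
    T times the sum of the M best means, minus the reward actually accrued."""
    T = len(rewards)
    return T * sum(best_means) - sum(rewards)

# Two best arms with means 1.0 and 0.5; per-slot system rewards of 1.5 incur zero regret.
# empirical_regret([1.0, 0.5], [1.5, 1.5, 1.5]) -> 0.0
```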

We point out that the system regret rate is a finer performance measure than the long-term average reward. All policies with a sublinear regret rate achieve the maximum long-term average reward. However, the difference in their performance, measured by the total expected reward accrued over a time horizon of length T, can be arbitrarily large as T increases. It is thus of great interest to characterize the minimum regret rate and construct policies that are optimal under this finer performance measure.

A policy π is called uniformly good if for any parameter set Θ, we have R_T^π(Θ) = o(T^b) for any b > 0. Note that a uniformly good policy implies sub-linear growth of the system regret and achieves the maximum long-term average reward, which is the same as in the case with perfect knowledge of Θ.

II-B The Logarithmic Order and the Optimal Policy

We present in the theorem below the result established in [1,2] on the logarithmic order as well as the leading constant of the minimum regret growth rate of the single-player MAB.

Theorem [1,2]: Under the regularity conditions (conditions C1–C4 in Appendix A), we have, for any uniformly good policy π,

liminf_{T→∞} R_T^π(Θ)/log T ≥ Σ_{j: μ(θ_j) < μ(θ_σ(M))} (μ(θ_σ(M)) − μ(θ_j)) / I(θ_j, θ_σ(M)).  (1)

Lai and Robbins also constructed a policy that achieves the lower bound on the regret growth rate given in (1) under single play (M = 1) in [1], which was extended by Anantharam et al. to M > 1 in [2]. Under this policy, two statistics of the past observations are maintained for each arm. Referred to as the point estimate, the first statistic is an estimate of the mean μ(θ) given by a function of the past τ observations on this arm (τ denotes the total number of observations on the arm). The second statistic is the so-called confidence upper bound, which represents the potential of an arm: the less frequently an arm is played, the less confident we are about the point estimate, and the higher the potential of this arm. The confidence upper bound, denoted by g(t, τ), thus depends not only on the number of observations on the arm but also on the current time t, in order to measure how frequently this arm has been played.

Based on these two statistics, Lai–Robbins policy operates as follows. At each time t, among all “well-sampled” arms, the one with the largest point estimate is selected as the leader l_t. The player then chooses between the leader l_t and a round-robin candidate r_t to play. The leader is played if and only if its point estimate exceeds the confidence upper bound of the round-robin candidate r_t. A detailed implementation of this policy is given in Fig. 1.

Lai–Robbins Policy for Single-Player MAB [1]. Notations and Inputs: let τ_n(t) denote the number of times that arm n has been played up to (but excluding) time t, and let x_{n,1}, …, x_{n,τ_n(t)} denote the past observations obtained from arm n. Fix δ ∈ (0, 1/N). Initializations: in the first N steps, play each arm once. At time t > N, among all arms that have been played at least δ(t − 1) times, let l_t denote the arm with the largest point estimate (referred to as the leader). Let r_t = t ⊘ N be the round-robin candidate at time t. The player plays the leader l_t if its point estimate exceeds the confidence upper bound g(t, τ_{r_t}(t)) of r_t, and plays the round-robin candidate r_t otherwise.

Fig. 1: Lai-Robbins policy for single-player MAB [1].
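The leader / round-robin structure of Fig. 1 can be sketched as follows. This is not the exact policy of [1]: the distribution-specific confidence upper bound is replaced by a UCB-style surrogate (mean plus sqrt(2 log t / τ), an assumption for illustration), and Gaussian rewards are used. The function name and the choice delta = 0.1 are likewise illustrative; delta must be below 1/N for the well-sampled set to stay nonempty.

```python
import math
import random

def lai_robbins_like(means, horizon, delta=0.1, rng=None):
    """Sketch of the leader / round-robin structure: play each arm once,
    then compare the leader's point estimate against a confidence upper
    bound of the round-robin candidate."""
    rng = rng or random.Random(0)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n

    def pull(arm):
        counts[arm] += 1
        sums[arm] += rng.gauss(means[arm], 1.0)  # Gaussian rewards for illustration

    for t in range(1, horizon + 1):
        if t <= n:
            pull(t - 1)                          # initialization: play each arm once
            continue
        # leader: largest point estimate among "well-sampled" arms
        well = [i for i in range(n) if counts[i] >= delta * (t - 1)]
        leader = max(well, key=lambda i: sums[i] / counts[i])
        rr = (t - 1) % n                         # round-robin candidate
        # UCB-style surrogate for the confidence upper bound g(t, tau)
        ucb_rr = sums[rr] / counts[rr] + math.sqrt(2 * math.log(t) / counts[rr])
        pull(leader if sums[leader] / counts[leader] > ucb_rr else rr)
    return counts
```

Over a long horizon, the play counts concentrate on the arm with the largest mean while every arm keeps being sampled occasionally through the round-robin candidate.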

Lai and Robbins [1] have shown that for point estimates h and confidence upper bounds g satisfying certain conditions (condition C5 in Appendix A), the above policy is optimal, i.e., it achieves the minimum regret growth rate given in (1). For Gaussian, Bernoulli, and Poisson reward models, h and g satisfying condition C5 are given below [1].

h(x_1, …, x_τ) = x̄_τ ≜ (1/τ) Σ_{i=1}^τ x_i,  (2)
g(t, τ) = inf{λ : λ ≥ x̄_τ and I(x̄_τ, λ) ≥ log(t − 1)/τ},  (3)

where x_1, …, x_τ denote the τ observations obtained from the arm.

While the confidence upper bound g given in (3) is not in closed form, this does not affect the implementation of the policy. Specifically, the comparison between the point estimate of the leader and the confidence upper bound of the round-robin candidate (see Fig. 1) is shown in [1] to be equivalent to two conditions that involve only the point estimates.

Consequently, we only require the point estimate to implement the policy.

II-C Order-Optimal Index Policies

Since Lai and Robbins’s seminal work, researchers have developed several index-type policies that are simpler than Lai–Robbins policy by using a single sample-mean-based statistic [8,9]. Specifically, under such an index policy, each arm is assigned an index that is a function of its sample mean, and the arm with the greatest index is played in each slot. To obtain an initial index, each arm is played once in the first N slots.

The indexes proposed in [8] for Gaussian, Bernoulli, Poisson, and exponentially distributed reward models are given in (4). Except for the Gaussian distribution, this index policy achieves only the optimal logarithmic order of the regret growth rate but not the best leading constant. (The optimal indexes for Bernoulli, Poisson, and exponential distributions are also developed in [8]; however, those indexes are not given in closed form and are difficult to implement.)

(4)

where I(·) denotes the index function, x̄ denotes the sample mean of the arm, and the constants appearing in the Poisson and exponential cases are upper bounds on all possible values of the mean in the respective reward models.

Based on [8], a simpler order-optimal sample mean based index policy was established in [9] for reward distributions with a known finite support:

I(x̄_τ, τ, t) = x̄_τ + sqrt(2 log t / τ).  (5)
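A sample-mean-based index policy of this type can be sketched as follows, using the index x̄ + sqrt(2 ln t / τ) for rewards supported in [0, 1] (the constants here are illustrative assumptions; see [9] for the exact tuning, and the function name `ucb1` is ours):

```python
import math
import random

def ucb1(reward_fns, horizon, rng=None):
    """Play each arm once, then always play the arm with the largest
    index: sample_mean + sqrt(2 * ln t / tau)."""
    rng = rng or random.Random(1)
    n = len(reward_fns)
    counts, sums = [0] * n, [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1                               # initialization round
        else:
            arm = max(range(n),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        sums[arm] += reward_fns[arm](rng)             # draw one reward
    return counts

# Bernoulli arms, e.g. idle-channel probabilities in the cognitive radio example
arms = [lambda rng, p=p: 1.0 if rng.random() < p else 0.0
        for p in (0.2, 0.5, 0.8)]
```

The sqrt(2 ln t / τ) term plays the role of the confidence upper bound: rarely played arms receive a larger exploration bonus.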

III Decentralized MAB: Problem Formulation

In this section, we formulate the decentralized MAB with M distributed players. In addition to conditions C1–C5 required by the centralized MAB, we assume that the M best arms have distinct nonnegative means. (This condition can be relaxed to the case where a tie occurs at the Mth largest mean.)

In the decentralized setting, players may collide and may not receive the reward that the arm can potentially offer. We thus refer to S_n(t) as the state of arm n at time t (for example, the busy/idle state of a communication channel in the context of cognitive radio). At time t, player i (1 ≤ i ≤ M) chooses an action a_i(t) ∈ {1, …, N} that specifies the arm to play and observes its state S_{a_i(t)}(t). The action a_i(t) is based on the player’s local observation and decision history. Note that the local observation history of each player also includes the collision history. As shown in Sec. V-C, the observation of a collision is used to adjust the local offset of a player’s time-sharing schedule of the M best arms to avoid excessive future collisions.

We define a local policy π_i for player i as a sequence of functions π_i = (π_i(1), π_i(2), …), where π_i(t) maps player i’s past observations and decisions to the action a_i(t) at time t. The decentralized policy π is thus given by the concatenation of the local policies of all players: π = [π_1, …, π_M].

Define Y(t) as the total reward accrued by all players at time t, which depends on the system collision model as given below.

Collision model 1: When multiple players choose the same arm to play, they share the reward in an arbitrary way. Since how players share the reward has no effect on the total system regret, without loss of generality, we assume that only one of the colliding players obtains a random reward given by the current state of the arm. Under this model, we have

Y(t) = Σ_{n=1}^N S_n(t) I_n(t),

where I_n(t) is the indicator function that equals 1 if arm n is played by at least one player, and 0 otherwise.

Collision model 2: When multiple players choose the same arm to play, no one obtains reward. Under this model, we have

Y(t) = Σ_{n=1}^N S_n(t) Ĩ_n(t),

where Ĩ_n(t) is the indicator function that equals 1 if arm n is played by exactly one player, and 0 otherwise.
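The two collision models differ only in which arms contribute reward in a slot, which the following sketch makes concrete (`system_reward` is a hypothetical helper; arms are 0-indexed for convenience):

```python
from collections import Counter

def system_reward(choices, states, model):
    """Total reward Y(t) accrued by all players in one slot.

    choices: the arm chosen by each player (0-indexed);
    states:  the state S_n(t) of each arm.
    Model 1: a collided arm still yields its state (shared arbitrarily).
    Model 2: an arm yields reward only if exactly one player chose it."""
    hits = Counter(choices)
    if model == 1:
        return sum(states[n] for n in hits)                       # arms played by >= 1 player
    if model == 2:
        return sum(states[n] for n, c in hits.items() if c == 1)  # arms played by exactly 1
    raise ValueError("model must be 1 or 2")
```

For instance, with arm states [1.0, 0.0, 2.0] and two players colliding on arm 0 while a third plays arm 2, model 1 yields 3.0 while model 2 yields only 2.0.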

The system regret of policy π is thus given by

R_T^π(Θ) = T Σ_{j=1}^M μ(θ_σ(j)) − E_π[Σ_{t=1}^T Y(t)].

Note that the system regret in the decentralized MAB is defined with respect to the same best possible reward as in its centralized counterpart. The objective is to minimize the rate at which R_T^π(Θ) grows with time under any parameter set Θ by choosing an optimal decentralized policy. Similarly, we say a decentralized policy π is uniformly good if for any parameter set Θ, we have R_T^π(Θ) = o(T^b) for any b > 0. To address the optimal order of the regret, it is sufficient to focus on uniformly good decentralized policies provided that such policies exist.

We point out that all results developed in this work apply to a more general observation model. Specifically, the arm state observed by different players can be drawn from different distributions, as long as players have the common set of the M best arms and each of the M best arms has the same mean across players. This relaxation in the observation model is particularly important in the application of opportunistic transmission in fading channels, where different users experience different fading environments in the same channel.

IV The Optimal Order of the System Regret

In this section, we show that the optimal order of the system regret growth rate of the decentralized MAB is logarithmic, the same as its centralized counterpart as given in Sec. II.

Theorem 1

Under both collision models, the optimal order of the system regret growth rate of the decentralized MAB is logarithmic, i.e., for an optimal decentralized policy π*, we have

c_1(Θ) ≤ liminf_{T→∞} R_T^{π*}(Θ)/log T ≤ limsup_{T→∞} R_T^{π*}(Θ)/log T ≤ c_2(Θ)  (6)

for some constants c_1(Θ), c_2(Θ) > 0 that depend on Θ.

Proof:

The proof consists of two parts. First, we prove that the lower bound for the centralized MAB given in (1) is also a lower bound for the decentralized MAB. Second, we construct a decentralized policy (see Sec. V) that achieves the logarithmic order of the regret growth rate.

It appears intuitive that the lower bound for the centralized MAB provides a lower bound for the decentralized MAB. This, however, may not hold when some of the M best arms have negative means (modeling the punishment for playing certain arms). The reason is that the centralized MAB considered in [2] requires exactly M arms to be played at each time, while in the decentralized setting, fewer than M arms may be played at each time when players choose the same arm. To obtain a lower bound for the decentralized MAB, we need to consider a centralized MAB where the player has the freedom of playing up to M arms at each time. When the player knows that all arms have nonnegative means, it is straightforward to see that the two conditions of playing exactly M arms and playing no more than M arms are equivalent. Without such knowledge, however, this statement needs to be proven.

Lemma 1

Under the condition that the M best arms have nonnegative means, the centralized MAB that requires exactly M arms to be played at each time is equivalent to the one that requires at most M arms to be played at each time.

Proof: See Appendix B.

V An Order-Optimal Decentralized Policy

In this section, we construct a decentralized policy that achieves the optimal logarithmic order of the system regret growth rate under the fairness constraint.

V-A Basic Structure of the Decentralized TDFS Policy

The basic structure of the proposed policy is a time division structure at each player for selecting the M best arms. For ease of presentation, we first assume that there is a pre-agreement among players so that they use different phases (offsets) in their time division schedule. For example, the offset in each player’s time division schedule can be predetermined based on the player’s ID. In Sec. V-C, we show that this pre-agreement can be eliminated while maintaining the order optimality and fairness of the TDFS policy, which leads to a complete decentralization among players.

Consider, for example, the case of M = 2 players. The time sequence is divided into two disjoint subsequences, where the first subsequence consists of all odd slots and the second consists of all even slots. The pre-agreement is such that player 1 targets the best arm during the first subsequence and the second-best arm during the second subsequence, and player 2 does the opposite.

Without loss of generality, consider player 1. In the first subsequence, player 1 applies a single-player policy, say Lai–Robbins policy, to efficiently learn and select the best arm. In the second subsequence, the second-best arm is learned and identified by removing the arm that is considered to be the best and applying Lai–Robbins policy to identify the best arm among the remaining N − 1 arms (which is the second best among all N arms). Note that since the best arm can never be perfectly identified, which arm is considered the best in an odd slot (in the first subsequence) is a random variable determined by the realization of past observations. We thus partition the second subsequence into multiple mini-sequences depending on which arm was considered the best in the preceding odd slot and thus should be removed from consideration when identifying the second best. Specifically, as illustrated in Fig. 2, the second subsequence is divided into disjoint mini-sequences, where the kth mini-sequence consists of all slots that follow a slot in which arm k was played (i.e., arm k was considered the best arm in the preceding slot belonging to the first subsequence). In the kth mini-sequence, the player applies Lai–Robbins policy to the N − 1 arms obtained after removing arm k.

In summary, the local policy of each player consists of multiple parallel Lai–Robbins procedures: one applied in the subsequence that targets the best arm and the rest applied in the mini-sequences that target the second-best arm. These parallel procedures are, however, coupled through the common observation history, since in each slot, regardless of which subsequence or mini-sequence it belongs to, all past observations are used in the decision making. We point out that making each mini-sequence use only its own observations is sufficient for the order optimality of the TDFS policy and simplifies the optimality analysis. However, we expect that using all available observations leads to a better leading constant, as demonstrated by the simulation results in Sec. VII.

Fig. 2: The structure of player 1’s local policy under M = 2 and N = 3. In this example, player 1 divides the second subsequence (i.e., all the even slots) into three mini-sequences, each associated with a subset of two arms obtained after removing the arm considered to be the best in the first subsequence.

The basic structure of the TDFS policy is the same for the general case of M > 2. Specifically, the time sequence is divided into M subsequences, in which each player targets the M best arms in a round-robin fashion, each with a different offset. Suppose that player 1 has offset 0, i.e., it targets the kth (1 ≤ k ≤ M) best arm in the kth subsequence. To player 1, the kth (k > 1) subsequence is then divided into mini-sequences, each associated with a subset of N − k + 1 arms obtained after removing the k − 1 arms that are considered to have a higher rank. In each mini-sequence, Lai–Robbins single-player policy is applied to the subset of arms associated with that mini-sequence. Note that while the subsequences are deterministic, each mini-sequence is random. Specifically, for a given slot t in the kth subsequence, the mini-sequence to which it belongs is determined by the specific actions of the player in the previous k − 1 slots. For example, if arms i_1, …, i_{k−1} are played in the previous k − 1 slots, then slot t belongs to the mini-sequence associated with the arm set {1, …, N} ∖ {i_1, …, i_{k−1}}. A detailed implementation of the proposed policy for a general M is given in Fig. 3.

Note that the basic structure of the TDFS policy is general. It can be used with any single-player policy to achieve an efficient and fair sharing of the best arms among distributed players. Furthermore, players are not restricted to using the same single-player policy. Details are given in Sec. V-B.

The Decentralized TDFS Policy π_F. Without loss of generality, consider player i. Notations and Inputs: In addition to the notations and inputs of Lai–Robbins single-player policy (see Fig. 1), let t_k denote the number of slots in the kth subsequence up to (and including) t, and let t_U denote the number of slots in the mini-sequence associated with arm set U up to (and including) t. At time t, player i does the following. If t belongs to the ith subsequence, player i targets the best arm by carrying out the following procedure. If t_i ≤ N, play arm t_i (i.e., play each arm once in the first N slots of the subsequence). Otherwise, the player chooses between a leader and a round-robin candidate, where the leader is the arm with the largest point estimate among all arms that have been played at least δ(t_i − 1) times in this subsequence. The player plays the leader if its point estimate is larger than the confidence upper bound of the round-robin candidate; otherwise, the player plays the round-robin candidate. If t belongs to the kth (k ≠ i) subsequence, the player targets the jth best arm, where j = (k − i + 1) ⊘ M, by carrying out the following procedure. Let U denote the set of arms played in the previous j − 1 slots; slot t thus belongs to the mini-sequence associated with the subset {1, …, N} ∖ U of arms. If this mini-sequence is in its initialization phase, play each arm in {1, …, N} ∖ U once. Otherwise, the player chooses between a leader and a round-robin candidate defined within {1, …, N} ∖ U. Specifically, among all arms in {1, …, N} ∖ U that have been played at least δ(t_{{1,…,N}∖U} − 1) times in this mini-sequence, let the arm with the largest point estimate be the leader, and let the round-robin candidate cycle through {1, …, N} ∖ U (where, for simplicity, we have assumed that arms in {1, …, N} ∖ U are indexed by 1, …, N − j + 1). The player plays the leader if its point estimate is larger than the confidence upper bound of the round-robin candidate; otherwise, the player plays the round-robin candidate.

Fig. 3: The decentralized TDFS policy .
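As an illustration of the round-robin offset structure described above, the following Python sketch maps each slot to the arm rank a player targets. The function name and the zero-based indexing are our own illustrative choices, not notation from the paper; the idea is simply that a player's targeted rank rotates by one from subsequence to subsequence.

```python
def target_rank(t, offset, M):
    """Rank (1 = best) targeted in slot t by a player with the given offset.

    Slots are interleaved round-robin: slot t belongs to subsequence t mod M,
    and the targeted rank rotates by one across the M subsequences.
    """
    subsequence = t % M
    return (offset + subsequence) % M + 1

# With M = 3, a player with offset 0 cycles through ranks 1, 2, 3, ...
print([target_rank(t, offset=0, M=3) for t in range(6)])  # [1, 2, 3, 1, 2, 3]
```

Two players with distinct offsets never target the same rank in the same slot, which is the collision-free time sharing that pre-agreed offsets would provide if all players ranked the arms correctly.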

V-B Order-Optimality under Fairness Constraint

Compared to the single-player counterpart given in [1,2], the difficulties in establishing the logarithmic regret growth rate of are twofold. First, compared to the centralized problem where a single player observes different arms simultaneously, each player here can only observe one arm at each time. Each player is thus learning the entire rank of the best arms with fewer observations. Furthermore, since the rank of any arm can never be perfectly identified, the mistakes in identifying the th () best arm will propagate into the learning process for identifying the th, up to the th best arm. Second, without centralized scheduling, collisions are bound to happen even with pre-agreed offset on sharing the best arms since players do not always agree on the rank of the arms. Such issues need to be carefully dealt with in establishing the logarithmic order of the regret growth rate.

Theorem 2

Under the TDFS policy , we have, for some constant ,

(7)

Let

Under collision model 1,

Under collision model 2,

{proof}

Note that the system regret is given by the sum of the regrets in the subsequences. By symmetry in the structure of , all subsequences experience the same regret. Consequently, the system regret is equal to times the regret in each subsequence. We thus focus on a particular subsequence and show that the regret growth rate in this subsequence is at most logarithmic. Without loss of generality, consider the first subsequence in which the th player targets at the th () best arm.

To show the logarithmic order of the regret growth rate, the basic approach is to show that the number of slots in which the th player does not play the th best arm is at most logarithmic with time. This is done by establishing in Lemma 3 a lower bound on the number of slots in which the th player plays the th best arm in the first subsequence.

To show Lemma 3, we first establish Lemma 2 by focusing on the dominant mini-sequence of the first subsequence. Without loss of generality, consider player . Define the dominant mini-sequence as the one associated with arm set (i.e., the best arms are correctly identified and removed in this mini-sequence). Lemma 2 shows that, in the dominant mini-sequence, the number of slots in which player does not play the th best arm is at most logarithmic with time.

Lemma 2

Let denote the number of slots up to time in which arm is played in the dominant mini-sequence associated with arm set . Then, for any arm with , we have,

(8)
{proof}

Note that this lemma is an extension of Theorem 3 in [1]. The proof of this lemma is more complicated since the mini-sequence is random and the decisions made in this mini-sequence depend on all past observations (no matter to which mini-sequence they belong). See Appendix C for details.

Next, we establish Lemma 3. The basic approach is to show that the length of the dominant mini-sequence dominates the lengths of all other mini-sequences in the first subsequence. Specifically, we show that the number of slots that do not belong to the dominant mini-sequence is at most logarithmic with time. This, together with Lemma 2, which characterizes the dominant mini-sequence, leads to Lemma 3 below.

Lemma 3

Let denote the number of slots in which player plays the th best arm in the first subsequence up to time . Then we have

(9)

where

(10)
{proof}

The proof is based on an induction argument on . See Appendix D for details.

From Lemma 3, for all , the number of slots in which the th best arm is not played by player is at most logarithmic with time. Consequently, for all , the number of slots in which player plays the th best arm is also at most logarithmic with time, i.e., the number of collisions on the th best arm is at most logarithmic with time. Since a reward loss on the th best arm can only occur when it is not played or a collision happens, the reward loss on the th best arm is at most of logarithmic order with time, leading to the logarithmic order of the regret growth rate.

To establish the upper bound on the constant of the logarithmic order of the system regret growth rate, we consider the worst-case collisions on each arm. See Appendix E for details.

From Theorem 1 and Theorem 2, the decentralized policy is order-optimal. Furthermore, as given in Theorem 3 below, the decentralized policy ensures fairness among players under a fair collision model that offers all colliding players the same expected reward. For example, if only one colliding player can receive reward, then each colliding player is equally likely to be the lucky one.

Theorem 3

Define the local regret for player under as

where is the immediate reward obtained by player at time . Under a fair collision model, we have,

(11)

Theorem 3 follows directly from the symmetry among players under . It shows that each player achieves the same time-average reward at the same rate. We point out that without knowing the reward rate that each arm can offer, ensuring fairness requires that each player identify the entire set of the best arms and share each of these arms evenly with other players. As a consequence, each player needs to learn which of the possibilities is the correct choice. This is in stark contrast to policies that make each player target a single arm with a specific rank (for example, the th player targets solely at the th best arm). In this case, each player only needs to distinguish one arm (with a specific rank) from the rest. The uncertainty facing each player, and consequently the amount of learning required, is reduced from to . Unfortunately, fairness among players is lost.

As mentioned in Sec. V-A, the proposed policy can be used with any single-player policy, which can also differ across players. More importantly, the order optimality of the TDFS policy is preserved as long as each player's single-player policy achieves the optimal logarithmic order in the single-player setting. This statement can be proven along the same lines as Theorem 2 by establishing results similar to Lemma 2 and Lemma 3.

V-C Eliminating the Pre-Agreement

In this subsection, we show that a pre-agreement among players on the offsets of the time division schedule can be eliminated in the TDFS policy while maintaining its order-optimality and fairness.

Specifically, when a player joins the system, it randomly generates a local offset uniformly drawn from and plays one round of the arms considered to be the best. For example, if the random offset is , the player targets at the second, the third, , the th, and then the best arms in the subsequent slots, respectively. If no collision is observed in this round of plays, the player keeps the same offset; otherwise the player randomly generates a new offset. Over the sequence of slots where the same offset is used, the player implements the local policy of (given in Fig. 3) with this offset. To summarize, each player implements parallel local procedures of corresponding to different offsets. These parallel procedures are coupled through the observation history, i.e., each player uses its entire local observation history in learning no matter which offset is being used.
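The offset-regeneration rule described above can be sketched as follows; this is a minimal illustration in Python, with hypothetical names, of the single decision each player makes at the end of a round of plays.

```python
import random

def update_offset(offset, collided, M, rng=random):
    """Keep the current offset after a collision-free round of plays;
    otherwise draw a fresh offset uniformly from {0, 1, ..., M-1}."""
    if collided:
        return rng.randrange(M)
    return offset

# A player that observed no collision in its round keeps its offset.
print(update_offset(2, collided=False, M=5))  # 2
```

Since colliding players redraw uniformly at random, players eventually settle on distinct offsets once their arm-rank estimates agree, which is the intuition behind Theorem 4 below.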

Note that players can join the system at different times. We also allow each player to leave the system for an arbitrary finite number of slots.

Theorem 4

The decentralized TDFS policy without pre-agreement is order-optimal and fair.

{proof}

See Appendix F.

VI A Lower Bound for a Class of Decentralized Policies

In this section, we establish a lower bound on the growth rate of the system regret for a general class of decentralized policies, to which the proposed policy belongs. This lower bound provides a tighter performance benchmark than the one defined by the centralized MAB. The definition of this class of decentralized policies is given below.

Definition 1

Time Division Selection Policies The class of time division selection (TDS) policies consists of all decentralized policies that satisfy the following property: under local policy , there exists independent of the parameter set such that the expected number of times that player plays the th best arm up to time is for all .

A policy in the TDS class essentially allows a player to efficiently select each of the best arms according to a fixed time portion that does not depend on the parameter set . It is easy to see that the TDFS policy (with or without pre-agreement) belongs to the TDS class with for all .

Theorem 5

For any uniformly good decentralized policy in the TDS class, we have

(12)
{proof}

The basic approach is to establish a lower bound on the number of slots in which each player plays an arm that does not belong to the best arms. By considering the best case that they do not collide, we arrive at the lower bound on the regret growth rate given in (12). The proof is based on the following lemma, which generalizes Theorem 2 in [1]. To simplify the presentation, we assume that the means of all arms are distinct. However, Theorem 5 and Lemma 4 apply without this assumption.

Lemma 4

Consider a local policy . If for any parameter set and , there exists a -independent positive increasing function satisfying as such that

(13)

then we have, ,

(14)

Note that Lemma 4 is more general than Theorem 2 in [1], which assumes and . The proof of Lemma 4 is given in Appendix G.

Consider a uniformly good decentralized policy in the TDS class. There exists a player, say player , that plays the best arm for at least times. Since, at these times, this player cannot play other arms, there must exist another player, say player , that plays the second best arm for at least times. It thus follows that there exist different players such that under any parameter set , the expected time player plays the th best arm is at least times. Based on Lemma 4, for any arm with , the expected time that player plays arm is at least . By considering the best case that players do not collide, we arrive at (12).

VII Simulation Examples

In this section, we study the performance (i.e., the leading constant of the logarithmic order) of the decentralized TDFS policy in different applications through simulation examples.

VII-A Cognitive Radio Networks: Bernoulli Reward Model

We first consider a Bernoulli reward model using cognitive radio as an example application. There are secondary users independently searching for idle channels among channels. In each slot, the busy/idle state of each channel is drawn from a Bernoulli distribution with unknown mean, i.e.,  for . When multiple secondary users choose the same channel for transmission, they collide and no one receives reward (collision model 2).
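A minimal simulation sketch of this reward model, under illustrative parameters of our own choosing: each channel is idle (reward 1) with some unknown probability, and colliding users receive nothing (collision model 2).

```python
import random

def draw_rewards(choices, idle_probs, rng=random):
    """One-slot rewards under collision model 2.

    choices    : channel index chosen by each secondary user
    idle_probs : Bernoulli mean (idle probability) of each channel
    """
    # Draw the busy/idle state of every channel for this slot.
    states = [1 if rng.random() < p else 0 for p in idle_probs]
    rewards = []
    for ch in choices:
        if choices.count(ch) > 1:        # collision: no one gets reward
            rewards.append(0)
        else:
            rewards.append(states[ch])   # reward 1 iff the channel is idle
    return rewards

# Users 1 and 2 collide on channel 0; user 3 is alone on an always-idle channel.
print(draw_rewards([0, 0, 1], idle_probs=[1.0, 1.0]))  # [0, 0, 1]
```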

In Fig. 4, we plot the performance of (with pre-agreement) using different single-player policies. The leading constant of the logarithmic order is plotted as a function of with fixed . From Fig. 4, we observe that adopting the optimal Lai-Robbins single-player policy in achieves the best performance. This is expected, given that Lai-Robbins policy achieves the best leading constant in the single-player case. We also observe that coupling the parallel single-player procedures through the common observation history in leads to better performance than leaving them uncoupled.

Fig. 4: The performance of built upon different single-player policies (Bernoulli distributions, global horizon length ).

In Fig. 5, we study the impact of eliminating pre-agreement on the performance of . We plot the leading constant of the logarithmic order as a function of with fixed . We observe that eliminating pre-agreement comes with a price in performance in this case. We also observe that the system performance degrades as increases. One potential cause is the fairness property of the policy that requires each player to learn the entire rank of the best arms.

Fig. 5: The performance of based on Lai-Robbins policy (Bernoulli distributions, global horizon length ).

VII-B Multichannel Communications under Unknown Fading: Exponential Reward Model

In this example, we consider opportunistic transmission over wireless fading channels with unknown fading statistics. In each slot, each user senses the fading condition of a selected channel and then transmits data with a fixed power. The reward obtained from a chosen channel is measured by its capacity (maximum data transmission rate) . We consider the Rayleigh fading channel model, where the SNR of each channel is exponentially distributed with an unknown mean. When multiple users choose the same channel, no one succeeds (collision model 2). Note that a channel with a higher expected SNR also offers a higher expected channel capacity. It is thus equivalent to consider the SNR as the reward, which is exponentially distributed. Specifically, we have for .
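As a sketch of why SNR can stand in for capacity as the reward, the snippet below (illustrative names and parameters, Rayleigh fading assumed) samples an exponentially distributed SNR and applies the capacity map log2(1 + SNR); since this map is monotone in SNR, ranking channels by mean SNR ranks them by mean capacity as well.

```python
import math
import random

def sample_capacity(mean_snr, rng=random):
    """Sample one slot's capacity (bits/s/Hz) of a Rayleigh-faded channel.

    Rayleigh fading => the received SNR is exponentially distributed with
    the given mean; capacity log2(1 + SNR) is monotone in SNR.
    """
    snr = rng.expovariate(1.0 / mean_snr)  # exponential with mean mean_snr
    return math.log2(1.0 + snr)
```

Because the map is monotone, a learning policy that ranks channels by observed SNR identifies the same best channels as one that ranks them by observed capacity.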

In Fig. 6, we plot the leading constant of the logarithmic order as a function of with fixed . We observe that in this example, eliminating pre-agreement has little impact on the performance of the TDFS policy.

Fig. 6: The performance of based on Agrawal’s index policy (Exponential distributions, global horizon length ).

VII-C Target Collecting in Multi-agent Systems: Gaussian Reward Model

In this example, we consider the Gaussian reward model arising in the application of multi-agent systems. At each time, agents independently select one of locations to collect targets (e.g., fishing or ore mining). The reward at each location is determined by the fish size or ore quality, which has been shown in [5, 6] to fit a log-Gaussian distribution. The reward at each location has an unknown mean but a known variance that is the same across all locations. When multiple agents choose the same location, they share the reward (collision model 1).

Note that the log-Gaussian and Gaussian reward distributions are equivalent since the arm ranks under these two distributions are the same when the arms have the same variance. We can thus focus on the Gaussian reward model, i.e.,  for .
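The equivalence noted above can be checked numerically: for a common variance, the mean of a log-Gaussian variable, exp(mu + sigma^2/2), is monotone in the Gaussian mean mu, so both models rank the arms identically. The numerical values below are illustrative.

```python
import math

def log_gaussian_mean(mu, sigma):
    """Mean of exp(X) with X ~ N(mu, sigma^2)."""
    return math.exp(mu + sigma ** 2 / 2.0)

mus = [0.2, 1.5, 0.9]   # illustrative Gaussian means
sigma = 1.0             # common known standard deviation

# Rank arms (best first) under each model; the orderings coincide.
rank_gauss = sorted(range(len(mus)), key=lambda i: mus[i], reverse=True)
rank_logg = sorted(range(len(mus)),
                   key=lambda i: log_gaussian_mean(mus[i], sigma),
                   reverse=True)
print(rank_gauss == rank_logg)  # True
```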

In Fig. 7, we plot as a function of the global horizon . We observe that the regret growth rate converges quickly with time, which implies that the policy can achieve strong performance within a short finite period.

Fig. 7: The convergence of the regret growth rate under based on Lai-Robbins policy (Gaussian distributions, ).

In Fig. 8, we plot the leading constant of the logarithmic order as a function of with fixed . Similar to the previous example, eliminating pre-agreement has little impact on the performance of the TDFS policy.

Fig. 8: The performance of based on Lai-Robbins policy (Gaussian distributions, , global horizon length ).

VIII Conclusion

In this paper, we have studied a decentralized formulation of the classic multi-armed bandit problem by considering multiple distributed players. We have shown that the optimal system regret in the decentralized MAB grows at the same logarithmic order as that in the centralized MAB considered in the classic work by Lai and Robbins [1] and Anantharam et al. [2]. A decentralized policy that achieves the optimal logarithmic order has been constructed. A lower bound on the leading constant of the logarithmic order has been established for policies in the TDS class, to which the proposed policy belongs. Future work includes establishing a tighter lower bound on the leading constant of the logarithmic order and investigating the decentralized MAB in which each arm may have different means for different players.

Appendix A.

Let Θ denote the set of all possible values of the unknown parameter .

Regularity Conditions:

  • Existence of mean: exists for any Θ.

  • Positive distance: .

  • Continuity of : .

  • Denseness of Θ: .

Conditions on point estimate and confidence upper bound:

  • For any Θ, we have


Appendix B. Proof of Lemma 1

Under the condition that the best arms have nonnegative means, the maximum expected immediate reward obtained under the ideal scenario that is known is given by . Under a uniformly good policy , we have, for any arm with ,

Following Theorem 3.1 in [2], the above equation implies that for any arm with ,

The regret growth rate is thus lower bounded by

Since the optimal policy that chooses exactly arms at each time achieves the above lower bound [1,2], it is also optimal for the MAB that chooses up to arms at each time. The two problems are thus equivalent.

Appendix C. Proof of Lemma 2

Let denote the leader among the arm set . Consider arm with . Let denote the set of slots in the dominant mini-sequence up to time .

For any , let denote the number of slots in at which arm is played when the leader is the th best arm and the difference between its point estimate and true mean does not exceed , the number of slots in at which arm is played when the leader is the th best arm and the difference between its point estimate and true mean exceeds , and the number of slots in when the leader is not the th best arm. Recall that each arm is played once in the first slots. We have

(15)

Next, we show that , , and are all at most in the order of .

Consider first . Based on the structure of Lai-Robbins single-player policy, we have

(16)

Under condition C5, for any , we can choose sufficiently small such that

Thus,

(17)

Consider . Define as the number of slots in that are no larger than . Since the number of observations obtained from is at least , under condition C5, we have,

(18)

We thus have

(19)

Next, we show that .

Choose and