
# Analysis of Thompson Sampling for Graphical Bandits Without the Graphs

Fang Liu
The Ohio State University
Columbus, Ohio 43210
liu.3977@osu.edu
&Zizhan Zheng
Tulane University
New Orleans, LA 70118
zzheng3@tulane.edu
&Ness Shroff
The Ohio State University
Columbus, Ohio 43210
shroff.11@osu.edu
###### Abstract

We study multi-armed bandit problems with graph feedback, in which the decision maker is allowed to observe the neighboring actions of the chosen action, in a setting where the graph may vary over time and is never fully revealed to the decision maker. We show that when the feedback graphs are undirected, the original Thompson Sampling achieves the optimal (within logarithmic factors) regret $\tilde{O}(\sqrt{\bar{\beta}T})$ over time horizon $T$, where $\bar{\beta}$ is the average independence number of the latent graphs. To the best of our knowledge, this is the first result showing that the original Thompson Sampling is optimal for graphical bandits in the undirected setting. A slightly weaker regret bound of Thompson Sampling in the directed setting is also presented. To fill this gap, we propose a variant of Thompson Sampling that attains the optimal regret in the directed setting within a logarithmic factor. Both algorithms can be implemented efficiently and do not require knowledge of the feedback graphs at any time.


## 1 Introduction

Multi-Armed Bandit (MAB) models are quintessential models for sequential decision making. In the classical MAB setting, at each time $t$, a policy must choose an action $A_t$ from a set of $K$ actions with unknown probability distributions. Choosing an action $A_t$ at time $t$ reveals a random reward $Y_{t,A_t}$ drawn from the probability distribution of action $A_t$. The goal is to find policies that minimize the expected loss due to uncertainty about the actions' distributions over a given time horizon $T$.

In this work, we consider an important variant of bandit problems, called graphical bandits, where choosing an action $i$ not only generates a reward from action $i$, but also reveals observations for a subset of the remaining actions. Graphical bandits are also known as bandits with graph-structured feedback or bandits with side observations, in which the feedback model is specified by a sequence of feedback graphs. Each feedback graph is a directed graph whose nodes correspond to the actions. An arc $(j \to i)$ (we write $j \to i$ for an arc from node $j$ to node $i$ for simplicity) indicates that the agent observes the reward of action $i$ if action $j$ is chosen in that round.

Motivating examples for situations where side observations are available include viral marketing and online pricing. Consider the viral marketing problem, where a decision maker wants to find the user with the maximum influence in an online social network (e.g., Facebook) to offer a promotion (Carpentier and Valko (2016)). Each time the decision maker offers a promotion to a user, it also has an opportunity to survey the user's neighbors in the network regarding their potential interest in the same offer. This is possible when the online network has an additional survey feature that generates "side observations". For example, when user $i$ is offered a promotion, her neighbors may be queried as follows: "User $i$ was recently offered a promotion. Would you also be interested in the offer?". Here, choosing an action in the graphical bandit problem corresponds to choosing a user in the network, and side observations across actions are captured by the links in the social network.

Consider another example in the online pricing problem, where a seller is selling goods on the Internet. In each round, the seller announces a price for the product. Then, a buyer arrives and decides whether or not to purchase the product based on its private value. A purchase takes place if and only if the announced price is no more than the private value. At the end of the round, the seller observes whether or not the buyer purchased the product at the announced price. If the buyer purchases the product, then the seller knows that the buyer would have bought the product at any lower price. Otherwise, the seller knows that the buyer would not have bought the product at any higher price. Here, the actions in the graphical bandit problem correspond to the prices that the seller can choose. The feedback graph is a directed graph over the prices in which a price is connected to a lower (higher) price if and only if both prices are below (above) the private value of the buyer.
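The feedback structure in this pricing example can be made concrete. The sketch below (the function name and the example prices are ours, not from the paper) builds the adjacency matrix of the feedback graph induced by a single buyer's private value:

```python
def pricing_feedback_graph(prices, private_value):
    """G[i][j] = 1 iff announcing prices[i] reveals the buyer's decision at prices[j]."""
    K = len(prices)
    G = [[0] * K for _ in range(K)]
    for i, p in enumerate(prices):
        for j, q in enumerate(prices):
            if i == j:
                G[i][j] = 1  # the chosen price is always observed
            elif p <= private_value and q < p:
                G[i][j] = 1  # a sale at p implies a sale at any lower price q
            elif p > private_value and q > p:
                G[i][j] = 1  # a rejection at p implies a rejection at any higher price q
    return G
```

Note that the seller learns the out-neighborhood of the chosen price only after observing the buyer's decision; the full graph, which depends on the private value, is never revealed.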

Graphical bandits have been studied in both the non-stochastic (adversarial) domain, by Mannor and Shamir (2011); Alon et al. (2013, 2015); Kocák et al. (2014), and the stochastic domain, by Caron et al. (2012); Buccapatnam et al. (2014, 2017); Tossou et al. (2017); Liu et al. (2018). Regret bounds as a function of combinatorial properties of the feedback graphs have been characterized in different settings: undirected vs. directed graphs, and time-invariant vs. time-variant graphs.

However, most of the existing works mentioned above require prior knowledge of the feedback graphs for their algorithms to run. These algorithms fall into either the informed setting (where the algorithms have access to the graph structure before making decisions) or the uninformed setting (where the algorithms gain access to the graph only after the decision, in order to perform their updates).

The assumption that the feedback graph is disclosed to the decision maker does not hold in many real-world applications. For example, in the viral marketing problem, a third-party decision maker is not allowed to know the social network, in order to protect the privacy of the users. In the online pricing problem, the private value of the buyer is never revealed to the seller, so the feedback graph is never disclosed to the seller. This motivates us to study graphical bandits in a setting with limited information, where the feedback graphs are never fully revealed to the decision maker.

In this work, we study graphical bandits without the graphs in a general setting, where the graphs are allowed to be time-variant and directed. Moreover, the only feedback available to the decision maker at the end of each round is the out-neighborhood of the chosen action in the latent graph, along with the rewards associated with the observed actions. Generally speaking, our results show that Thompson Sampling algorithms (introduced by Thompson (1933)) can achieve a regret bound of the form $\tilde{O}(\sqrt{\bar{\beta}T})$, where $\bar{\beta}$ is the average independence number of the latent graphs (see Section 2.2 for a brief review of the combinatorial properties of graphs), which is optimal within logarithmic factors. More specifically, we make the following contributions to graphical bandits without knowledge of the feedback graphs. (Table 1 summarizes the main results.)

• We develop a problem-independent Bayesian regret bound for the vanilla Thompson Sampling algorithm (TS-N) for graphical bandits without the graphs. In the undirected setting, where the latent graphs are undirected, we show that TS-N obtains the optimal (within logarithmic factors) regret bound of $\tilde{O}(\sqrt{\bar{\beta}T})$, where $\bar{\beta}$ is the average independence number of the latent graphs (Corollary 1). Our regret bound is much sharper than the $\tilde{O}(\sqrt{\bar{\chi}T})$ bound shown by Liu et al. (2018), where $\bar{\chi}$ is the average clique cover number of the latent graphs, since $\beta_0(G) \le \chi(G)$ in general. As far as we know, this is the first result showing that Thompson Sampling, without knowledge of the graph, can attain the optimal regret within logarithmic factors.

• In the directed setting, where the graphs are allowed to be directed, we show that TS-N achieves $\tilde{O}(\sqrt{\bar{m}T})$ regret in expectation, where $\bar{m}$ is the average maximal acyclic subgraph number of the latent graphs (Corollary 2). As a byproduct, our regret bounds for TS-N provide improved regret bounds for the information directed sampling algorithms (IDS-N and IDSN-LP) proposed by Liu et al. (2018).

• We propose a variant of the Thompson Sampling algorithm, TS-U, that achieves a regret bound of $\tilde{O}(\sqrt{\bar{\beta}T})$ in both the undirected and directed settings (Corollary 3). The regret bound of TS-U is optimal within logarithmic factors, and sharper than that of the state-of-the-art algorithm proposed by Cohen et al. (2016). Our results offer a recipe for practitioners choosing algorithms for graphical bandits without the graphs. If the latent graphs are known to be undirected, one can choose TS-N for the best regret guarantee. Otherwise, TS-U is the choice with the best guarantee.

### 1.1 Related Work

Graphical bandits were introduced in the non-stochastic domain by Mannor and Shamir (2011). They propose the ExpBan algorithm, which works in the time-invariant and informed setting, with a regret bound depending on the clique cover number. They also propose the ELP algorithm, which replaces the uniform exploration distribution of the Exp3 algorithm (proposed by Auer et al. (2002b)) with a distribution that maximizes the minimum probability of observing an action. An optimal (within logarithmic factors) regret bound for ELP is shown in the undirected setting.

However, the regret bound of ELP depends on the clique cover number in the directed setting. These results are improved by Alon et al. (2013). They show that the vanilla Exp3 algorithm without the mixed uniform distribution (Exp3-SET) achieves the same (and, in the directed setting, improved) regret bound as ELP, even in the uninformed setting. In the informed setting, they propose the Exp3-DOM algorithm, a variant of Exp3 that mixes in a uniform distribution over a dominating set of the feedback graph, which achieves $\tilde{O}(\sqrt{\bar{\beta}T})$ regret. This regret bound is further attained by Exp3.G (Alon et al. (2015)) and Exp3-IX (Kocák et al. (2014)) in the uninformed setting. The Exp3.G algorithm is a variant of Exp3-DOM that replaces the dominating set with the universal set. The Exp3-IX algorithm uses a novel implicit exploration idea. However, these algorithms still require knowledge of the feedback graphs for performing updates after the decisions in the uninformed setting.

Graphical bandits have also been considered in the stochastic domain by Caron et al. (2012), who propose a natural variant (UCB-N) of the upper confidence bound algorithm (introduced by Auer et al. (2002a)) and provide a problem-dependent regret guarantee depending on the clique cover number. This result is improved by Buccapatnam et al. (2014) in the informed and time-invariant setting. The policies proposed by Buccapatnam et al. (2014), namely $\epsilon$-greedy-LP and UCB-LP, are shown to be asymptotically optimal, both in terms of the graph structure and time.

However, none of the aforementioned algorithms applies when the feedback graphs vary over time and are never fully disclosed. Recently, researchers have developed new algorithms for graphical bandits in the setting with limited information, where the feedback graphs are time-variant, directed, and never revealed to the decision maker. Cohen et al. (2016) propose an elimination-based algorithm whose regret is of order $\sqrt{\bar{\beta}T}$ up to logarithmic factors. Tossou et al. (2017) analyze the Bayesian regret of Thompson Sampling for graphical bandits and provide a regret bound depending on the maximal clique cover number of the latent graphs. This result is improved to a regret bound depending on the average clique cover number by Liu et al. (2018). In this work, we provide sharper regret bounds for the vanilla Thompson Sampling (TS-N) and propose a variant of Thompson Sampling (TS-U) that obtains a better (within a logarithmic factor) regret bound than the algorithm developed by Cohen et al. (2016).

Other related partial-feedback models include the label-efficient bandit of Audibert and Bubeck (2010) and prediction with limited advice of Seldin et al. (2014), where side observations are limited by a budget. Graphical bandits with Erdős-Rényi random graphs are studied by Kocák et al. (2016a); Chen et al. (2016); Liu et al. (2018). Graphical bandits with noisy observations are studied by Kocák et al. (2016b); Wu et al. (2015). For a survey of graphical bandits, see Valko (2016).

## 2 Problem Formulation

### 2.1 Stochastic Bandit Model

We consider a Bayesian formulation of the stochastic $K$-armed bandit problem in which uncertainties are modeled as random variables. At each time $t$, a decision maker chooses an action $A_t$ from a finite action set $\mathcal{K} = \{1, \dots, K\}$ and receives the corresponding random reward $Y_{t,A_t}$. Without loss of generality, we assume the space of possible rewards is $[0,1]$. Note that the results in this work can be extended to the case where the reward distributions are sub-Gaussian. There is a random variable $Y_{t,a}$ associated with each action $a \in \mathcal{K}$ and time $t$. We assume that $(Y_{t,a})_{a \in \mathcal{K}}$ are independent for each time $t$. Let $Y_t = (Y_{t,1}, \dots, Y_{t,K})$ be the vector of random variables at time $t$. The true reward distribution $p$ is a distribution over $[0,1]^K$, which is randomly drawn from a family of distributions $\mathcal{P}$ and unknown to the decision maker. Conditioned on $p$, $(Y_t)_{t \ge 1}$ is an independent and identically distributed sequence with each element sampled from the distribution $p$.

Let $A^* = \arg\max_{a \in \mathcal{K}} E[Y_{t,a} \mid p]$ be the true optimal action conditioned on $p$. Then the $T$-period regret of the decision maker is the expected difference between the total reward obtained by an oracle that always chooses the optimal action and the reward accumulated up to time horizon $T$. Formally, we study the expected regret

$$E[R(T)] = E\left[\sum_{t=1}^{T} \left(Y_{t,A^*} - Y_{t,A_t}\right)\right], \tag{1}$$

where the expectation is taken over the randomness in the action sequence $(A_t)_{t \ge 1}$ and the outcomes $(Y_t)_{t \ge 1}$, and over the prior distribution of $p$. This notion of regret is also known as Bayesian regret.

### 2.2 Graph Feedback Model

In this problem, we assume the existence of side observations, which are described by a graph $G_t$ over the action set $\mathcal{K}$ for each time $t$. The graph may be directed or undirected and can depend on the time $t$. At each time $t$, the decision maker observes the reward $Y_{t,A_t}$ for playing action $A_t$, as well as the outcome $Y_{t,a}$ for each action $a$ in the out-neighborhood of $A_t$ in $G_t$. Note that this becomes the classical bandit feedback setting when the graph is empty (i.e., no edge exists beyond self-loops) and the full-information (expert) setting when the graph is complete, for all times $t$. Note that the graph $G_t$ is never fully revealed to the decision maker.

Let $G_t$ also denote the adjacency matrix that represents the deterministic graph feedback structure at time $t$. Let $G_t(i,j)$ be the element in the $i$-th row and $j$-th column of the matrix. Then $G_t(i,j) = 1$ if there exists an arc $(i \to j)$, and $G_t(i,j) = 0$ otherwise. Note that we assume $G_t(i,i) = 1$ for any $i \in \mathcal{K}$, i.e., the reward of the chosen action is always observed.

###### Definition 1.

(Clique cover number) A clique of a graph $G = (\mathcal{K}, E)$ is a subset $S \subseteq \mathcal{K}$ such that the subgraph formed by $S$ and $E$ is a complete graph. A clique cover of a graph $G$ is a partition of $\mathcal{K}$, denoted by $\mathcal{C} = \{C_1, \dots, C_m\}$, such that $C_i$ is a clique for each $i$. The cardinality of the smallest clique cover is called the clique cover number, which is denoted by $\chi(G)$.

###### Definition 2.

(Independence number) An independent set of a graph $G = (\mathcal{K}, E)$ is a subset $S \subseteq \mathcal{K}$ such that no two vertices in $S$ are connected by an edge in $E$. The cardinality of a largest independent set is the independence number of $G$, denoted by $\beta_0(G)$.

Note that the independence number of a directed graph is equivalent to that of the undirected graph obtained by ignoring arc orientations. We can also lift the notion of independence number from undirected graphs to directed graphs through the notion of maximum acyclic subgraphs.

###### Definition 3.

(Maximum acyclic subgraphs) An acyclic subgraph of $G = (\mathcal{K}, E)$ is any graph $G' = (\mathcal{K}', E')$ such that $\mathcal{K}' \subseteq \mathcal{K}$ and $E' = E \cap (\mathcal{K}' \times \mathcal{K}')$, with no directed cycles. The cardinality of the largest such $\mathcal{K}'$ is the maximum acyclic subgraph number, denoted by $\mathrm{mas}(G)$.

Note that $\beta_0(G) \le \mathrm{mas}(G)$ in general. The equality holds when the graph is undirected. In this work, we slightly abuse notation and apply the above graph numbers to $G_t$ and its adjacency matrix interchangeably, since the adjacency matrix fully characterizes the graph structure $G_t$.
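For small graphs, these graph numbers can be computed by brute force, which is handy for checking examples. The illustrative sketch below (exponential time; the function names are ours) treats $\beta_0$ as orientation-blind and ignores self-loops when testing acyclicity:

```python
from itertools import combinations

def independence_number(G):
    """beta_0(G): largest vertex subset with no edge between any pair (orientation ignored)."""
    K = len(G)
    for r in range(K, 0, -1):
        for S in combinations(range(K), r):
            if all(not (G[i][j] or G[j][i]) for i, j in combinations(S, 2)):
                return r
    return 0

def _is_acyclic(G, S):
    # Kahn's algorithm on the subgraph induced by S (self-loops ignored).
    S = set(S)
    indeg = {i: sum(G[j][i] for j in S if j != i) for i in S}
    stack = [i for i in S if indeg[i] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in S:
            if v != u and G[u][v]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
    return seen == len(S)

def mas_number(G):
    """mas(G): largest vertex subset whose induced directed subgraph is acyclic."""
    K = len(G)
    for r in range(K, 0, -1):
        for S in combinations(range(K), r):
            if _is_acyclic(G, S):
                return r
    return 0
```

On the directed graph over $K$ nodes with an arc $(i \to j)$ whenever $i < j$ (used as an example in Section 3), these return $\beta_0 = 1$ and $\mathrm{mas} = K$, illustrating how large the gap can be.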

### 2.3 Randomized Policies

We define all random variables with respect to a probability space $(\Omega, \mathcal{F}, P)$. Consider the filtration $(\mathcal{F}_t)_{t \ge 1}$ such that $\mathcal{F}_t$ is the $\sigma$-algebra generated by the observation history up to time $t$. The observation history includes all decisions, rewards, and side observations from time $1$ to time $t-1$. For each time $t$, the decision maker chooses an action based on the history and possibly some randomness. Any policy of the decision maker can thus be viewed as a randomized policy $\pi$, which is an $(\mathcal{F}_t)$-adapted sequence $(\pi_t)_{t \ge 1}$. For each time $t$, the decision maker chooses an action randomly according to $\pi_t$, which is a probability distribution over $\mathcal{K}$. Let $E[R(T,\pi)]$ be the Bayesian regret defined by (1) when the decisions are chosen according to $\pi$.

Uncertainty about $p$ induces uncertainty about the true optimal action $A^*$, which is described by a prior distribution $\alpha_1$ of $A^*$. Let $\alpha_t$ be the posterior distribution of $A^*$ given the history $\mathcal{F}_t$, i.e., $\alpha_t(a) = P(A^* = a \mid \mathcal{F}_t)$. Then, $\alpha_t$ can be updated by Bayes' rule given $\alpha_{t-1}$, the decision, the reward, and the side observations. The Shannon entropy of $\alpha_t$ is defined as $H(\alpha_t) = -\sum_{a \in \mathcal{K}} \alpha_t(a) \log \alpha_t(a)$. We slightly abuse the notation of $\alpha_t$ and $\pi_t$ so that they represent distributions (or functions) over the finite set $\mathcal{K}$ as well as vectors in the $K$-dimensional probability simplex. Note that $H(\alpha_1) \le \log K$.

Let $\Delta_t \in \mathbb{R}^K$ be the instantaneous regret vector such that the $a$-th coordinate, $\Delta_t(a) = E[Y_{t,A^*} - Y_{t,a} \mid \mathcal{F}_t]$, is the expected regret of playing action $a$ at time $t$. Let $g_t \in \mathbb{R}^K$ be the information gain vector such that the $a$-th coordinate, $g_t(a)$, is the expected information gain of playing action $a$ at time $t$. Note that the information gain of playing action $a$ consists of that of observing its own reward and possibly some side observations. We define the information gain of observing action $a$ (i.e., observing $Y_{t,a}$) as $I_t(A^*; Y_{t,a})$, which is the mutual information under the posterior distribution between the random variables $A^*$ and $Y_{t,a}$. Let $D(P \| Q)$ be the Kullback-Leibler divergence between two distributions $P$ and $Q$ (if $P$ is absolutely continuous with respect to $Q$, then $D(P \| Q) = \int \log \frac{dP}{dQ}\, dP$, where $\frac{dP}{dQ}$ is the Radon-Nikodym derivative of $P$ with respect to $Q$). By the definition of mutual information, we have that

$$I_t(A^*; Y_{t,a}) = D\Big(P\big((A^*, Y_{t,a}) \in \cdot \mid \mathcal{F}_t\big)\,\Big\|\,P\big(A^* \in \cdot \mid \mathcal{F}_t\big)\, P\big(Y_{t,a} \in \cdot \mid \mathcal{F}_t\big)\Big). \tag{2}$$

At each time $t$, a randomized policy updates $\alpha_t$ and makes a decision according to a sampling distribution $\pi_t$. For any randomized policy, we define the information ratio (Russo and Van Roy (2016)) of the sampling distribution $\pi_t$ at time $t$ as

$$\Psi_t(\pi_t) \triangleq \frac{\left(\pi_t^\top \Delta_t\right)^2}{\pi_t^\top g_t}. \tag{3}$$

Note that $\pi_t^\top \Delta_t$ is the expected instantaneous regret of the sampling distribution $\pi_t$, and $\pi_t^\top g_t$ is the expected information gain of the sampling distribution $\pi_t$. So the information ratio measures the "energy" cost (the square of the expected instantaneous regret) per bit of information acquired.
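For intuition, once $\Delta_t$ and $g_t$ are available, the information ratio of a candidate sampling distribution is a one-line computation; a minimal sketch of (3) (function name is ours):

```python
def information_ratio(pi, Delta, g):
    """Psi_t(pi) = (pi . Delta)^2 / (pi . g): squared expected regret per bit of information."""
    expected_regret = sum(p * d for p, d in zip(pi, Delta))
    expected_info = sum(p * i for p, i in zip(pi, g))
    return expected_regret ** 2 / expected_info
```

A small information ratio means the policy either suffers little regret or learns a lot about $A^*$ in that round.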

### 2.4 Thompson Sampling

The Thompson Sampling algorithm simply samples actions according to the posterior probability that they are optimal. In particular, actions are chosen randomly at time $t$ according to the sampling distribution $\pi_t = \alpha_t$. This conceptually elegant policy can be efficiently implemented. Consider the case where $\mathcal{P}$ is some parametric family of distributions $\{p_\theta : \theta \in \Theta\}$. The true reward distribution is indexed by $\theta^* \in \Theta$ in the sense that $p = p_{\theta^*}$. Practical implementations of Thompson Sampling consist of two steps. First, an index $\theta_t$ is sampled from the posterior distribution of $\theta^*$. Then, the algorithm selects the action that would be optimal if the sampled parameter were the true parameter. Given the observations from playing $A_t$, the posterior distribution is updated by Bayes' rule.
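For the Beta-Bernoulli case used in our experiments, the two steps reduce to drawing one Beta sample per action and playing the argmax. A minimal sketch (a uniform Beta(1,1) prior is assumed; the helper names are ours):

```python
import random

def ts_step(successes, failures):
    """Sample theta_a ~ Beta(1+s_a, 1+f_a) for each action a, then play the argmax."""
    samples = [random.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def ts_update(successes, failures, action, reward):
    """Bayes update of the Beta posterior after observing a 0/1 reward."""
    if reward:
        successes[action] += 1
    else:
        failures[action] += 1
```

The per-round cost is one Beta draw per action, so the policy scales easily to large horizons.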

## 3 Vanilla Thompson Sampling

In this section, we show that a vanilla Thompson Sampling algorithm, TS-N as shown in Algorithm 1, obtains optimal regret (within a logarithmic factor) in the undirected setting. A slightly weaker regret bound in the directed setting is also presented.

TS-N is the Thompson Sampling algorithm for graphical bandits with $\pi_t = \alpha_t$, where $\alpha_t$ is updated based on all the observations available, without additional modifications. It naturally keeps the information ratio bounded and balances between having low expected instantaneous regret (a.k.a. exploitation) and obtaining knowledge about the optimal action (a.k.a. exploration). If the information ratio is bounded, then the expected regret is bounded in terms of the maximum amount of information one could expect to acquire, which is at most the entropy of the prior distribution of $A^*$, i.e., $H(\alpha_1)$. First, we bound the information ratio of TS-N in terms of the key quantity

$$Q_t(\pi_t) = \sum_{i \in \mathcal{K}} \frac{\pi_t(i)}{\sum_{j:\, j \xrightarrow{t} i} \pi_t(j)}. \tag{4}$$

Note that $j \xrightarrow{t} i$ represents an arc from node $j$ to node $i$ in the graph $G_t$.
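The quantity $Q_t$ is straightforward to evaluate from the adjacency matrix: each action $i$ contributes its probability divided by the total probability of the actions whose play reveals $i$. A sketch (the function name is ours; self-loops are assumed, so the denominator is never zero):

```python
def Q(G, pi):
    """Q_t(pi): sum over i of pi[i] / (probability mass of the actions that reveal i)."""
    K = len(pi)
    return sum(
        pi[i] / sum(pi[j] for j in range(K) if G[j][i])  # j -> i: playing j reveals i
        for i in range(K)
    )
```

With self-loops only (classical bandit feedback) every term is $\pi(i)/\pi(i) = 1$, so $Q_t = K$; on a complete graph every denominator is $1$, so $Q_t = 1$, matching the discussion after Proposition 1.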

###### Proposition 1.

If $\pi_t = \alpha_t$, then the information ratio satisfies $\Psi_t(\alpha_t) \le Q_t(\alpha_t)/2$ almost surely.

Proposition 1 is a tight bound for the information ratio of vanilla Thompson Sampling (Thompson (1933)). If the graph is empty (i.e., there are no edges beyond the self-loops), then the quantity $Q_t(\alpha_t)$ equals $K$. If the graph is complete, then the quantity equals $1$. These recover the information ratio bounds of $K/2$ and $1/2$ shown by Russo and Van Roy (2016). Also, the quantity $Q_t(\alpha_t)$ is upper bounded by the clique cover number, as one can separate the sum by cliques and drop the weights outside each clique. This recovers the information ratio bound shown by Liu et al. (2018). Proposition 1 thus allows us to show a tighter regret bound of Thompson Sampling for graphical bandits. Next, we bound the regret of TS-N in terms of the quantity $Q_t(\alpha_t)$.

###### Theorem 1.

The regret of TS-N satisfies

$$E[R(T,\pi)] \le \sqrt{\frac{1}{2} \sum_{t=1}^{T} E[Q_t(\alpha_t)]\, H(\alpha_1)}. \tag{5}$$

Note that the entropy is bounded, i.e., $H(\alpha_1) \le \log K$. It has been shown by Mannor and Shamir (2011); Alon et al. (2014) that the quantity $Q_t(\pi_t)$ is related to the graph numbers irrespective of the choice of the distribution $\pi_t$. The following graph-theoretic result shows that the quantity is bounded by the independence number of the latent graph if the graph is undirected.

###### Lemma 1.

(Lemma 3 in Mannor and Shamir (2011)) Let $G$ be an undirected graph. For any distribution $\pi$ over $\mathcal{K}$,

$$\sum_{i \in \mathcal{K}} \frac{\pi(i)}{\sum_{j:\, j \to i} \pi(j)} \le \beta_0(G). \tag{6}$$

The following regret result of TS-N follows immediately from Theorem 1 and Lemma 1.

###### Corollary 1.

In the undirected setting, the regret of TS-N satisfies

$$E[R(T,\pi)] \le \sqrt{\frac{1}{2} \sum_{t=1}^{T} \beta_0(G_t)\, H(\alpha_1)}. \tag{7}$$

As far as we know, this is the best regret bound for graphical bandits without the graphs. First, an information-theoretic lower bound for graphical bandits has been shown by Mannor and Shamir (2011); Alon et al. (2014) to be $\Omega(\sqrt{\bar{\beta}T})$, where $\bar{\beta} = \frac{1}{T}\sum_{t=1}^{T} \beta_0(G_t)$. So Corollary 1 shows that TS-N obtains the optimal regret (within a logarithmic factor) in the undirected setting. Moreover, the bound proven in Corollary 1 is tighter than the bound for TS-N shown by Liu et al. (2018), since $\beta_0(G) \le \chi(G)$. Finally, Corollary 1 shows that TS-N enjoys a better regret bound than Cohen's Algorithm 1, developed by Cohen et al. (2016), in the undirected setting; neither algorithm requires knowledge of the feedback graphs.

We now turn to the directed setting. The following graph-theoretic result shows that the quantity $Q_t(\pi_t)$ is upper-bounded by the maximum acyclic subgraph number if the graph is directed.

###### Lemma 2.

(Lemma 10 in Alon et al. (2014)) Let $G$ be a directed graph. For any distribution $\pi$ over $\mathcal{K}$,

$$\sum_{i \in \mathcal{K}} \frac{\pi(i)}{\sum_{j:\, j \to i} \pi(j)} \le \mathrm{mas}(G). \tag{8}$$

The following regret result of TS-N follows immediately from Theorem 1 and Lemma 2.

###### Corollary 2.

In the directed setting, the regret of TS-N satisfies

$$E[R(T,\pi)] \le \sqrt{\frac{1}{2} \sum_{t=1}^{T} \mathrm{mas}(G_t)\, H(\alpha_1)}. \tag{9}$$

The bound proven in Corollary 2 is tighter than the bound for TS-N shown by Liu et al. (2018), since $\mathrm{mas}(G) \le \chi(G)$. However, there is a gap between the $\Omega(\sqrt{\bar{\beta}T})$ lower bound and the upper bound shown in Corollary 2. Though $\beta_0(G) = \mathrm{mas}(G)$ when the graph is undirected, the gap between them can be large in general directed graphs. For example, consider a directed graph over $K$ nodes such that $(i \to j)$ is an arc if and only if $i < j$. It is clear that $\beta_0(G) = 1$ and $\mathrm{mas}(G) = K$. This leads us to consider a more sophisticated randomized policy. In the next section, we show that a modified Thompson Sampling algorithm results in an optimal (within a logarithmic factor) regret bound in the general setting.

###### Remark 1.

The bounds proven in Corollaries 1 and 2 also hold for IDS-N and IDSN-LP, developed by Liu et al. (2018), since the information ratios of IDS-N and IDSN-LP are bounded by the information ratio of TS-N almost surely. So our results also provide tighter bounds for the IDS-N and IDSN-LP algorithms. Note that IDS-N and IDSN-LP require prior knowledge of the feedback graphs.

## 4 Thompson Sampling with Exploration

In this section, we show that a mixture of Thompson Sampling and uniform sampling, TS-U as shown in Algorithm 2, obtains the optimal regret within a logarithmic factor in the directed setting.

The TS-U algorithm is a variant of Thompson Sampling with explicit exploration that allows the algorithm to explore some suboptimal actions with large out-degrees. The side observations collected through the uniform sampling allow us to capture the latent graph information, thus yielding a regret bound in terms of the independence number.

As shown in Algorithm 2, TS-U is a randomized policy with $\pi_t = (1-\epsilon)\alpha_t + \epsilon u$ for some parameter $\epsilon \in [0,1]$, where $u$ is the uniform distribution over $\mathcal{K}$. The implementation and computation of TS-U are quite efficient. At each time $t$, the TS-U algorithm plays vanilla Thompson Sampling with probability $1-\epsilon$ and plays uniform sampling with probability $\epsilon$. While the expected information gain diminishes over time, the expected instantaneous regret is bounded away from zero due to the uniform sampling. Thus, the classical analysis of bounding the information ratio no longer works. Fortunately, the linear form of $\pi_t$ allows us to bound the regret of TS-U by the regret due to uniform sampling plus the regret from Thompson Sampling, as shown in Theorem 2. Indeed, our techniques work for any variant of Thompson Sampling whose sampling distribution is a linear combination of $\alpha_t$ and some other distributions.

###### Theorem 2.

The regret of TS-U satisfies

$$E[R(T,\pi)] \le \epsilon T + \sqrt{\frac{1}{2} \sum_{t=1}^{T} E[Q_t(\pi_t)]\, H(\alpha_1)}. \tag{10}$$

Theorem 2 shows that the regret of TS-U consists of two parts, the regret from the uniform sampling and the regret from the Thompson Sampling. Note that the Thompson Sampling part takes into account the information gain from the uniform sampling, as the term $Q_t(\pi_t)$ depends on $\pi_t$ rather than $\alpha_t$. This allows us to use the following graph-theoretic result to bound the term $Q_t(\pi_t)$, and thus the regret of TS-U, by the independence number of the latent graph.

###### Lemma 3.

(Lemma 5 in Alon et al. (2015)) Let $G$ be a directed graph. Let $\pi$ be a distribution over $\mathcal{K}$ such that $\pi(i) \ge \eta$ for all $i \in \mathcal{K}$, for some constant $\eta > 0$. Then

$$\sum_{i \in \mathcal{K}} \frac{\pi(i)}{\sum_{j:\, j \to i} \pi(j)} \le 4 \beta_0(G) \log\left(\frac{4K}{\beta_0(G)\,\eta}\right). \tag{11}$$

Lemma 3 shows that the quantity $Q_t(\pi)$ can be bounded by the independence number of the directed graph if $\pi$ is bounded away from zero. The uniform sampling part of TS-U guarantees that the sampling distribution satisfies this condition with $\eta = \epsilon/K$. By Theorem 2 and taking $\eta = \epsilon/K$ in Lemma 3, we have the following result.

###### Corollary 3.

If $\epsilon = 1/T$, then the regret of TS-U satisfies

$$E[R(T,\pi)] = O\left(\sqrt{\log(KT) \sum_{t=1}^{T} \beta_0(G_t)\, H(\alpha_1)}\right). \tag{12}$$

Comparing the regret bound in Corollary 3 to the lower bound $\Omega(\sqrt{\bar{\beta}T})$, the TS-U algorithm obtains the optimal regret within a logarithmic factor in the general setting. Moreover, Corollary 3 shows that TS-U enjoys a sharper (by a logarithmic factor) regret bound than Cohen's Algorithm 1, developed by Cohen et al. (2016), in the directed setting; neither algorithm requires knowledge of the feedback graphs. As far as we know, this is the best-known regret bound for graphical bandits without the graphs. Finally, a comparison between Corollary 1 and Corollary 3 reveals that a symmetric observation system (i.e., undirected feedback graphs) enjoys better regret than an asymmetric one, as the regret bound of TS-N in the undirected setting is sharper by a logarithmic factor than the bound in Corollary 3.

###### Remark 2.

We present the TS-U algorithm with a fixed exploration rate $\epsilon$ for simplicity. It is easy to verify that the regret result of TS-U still holds if one uses an appropriately decreasing exploration rate sequence $(\epsilon_t)_{t \ge 1}$: Theorem 2 holds with the term $\epsilon T$ replaced by $\sum_{t=1}^{T} \epsilon_t$, and the regret result of the corresponding TS-U algorithm follows. In practice, we recommend that practitioners use decreasing exploration rates.

###### Remark 3.

In the informed setting (i.e., when the feedback graph is revealed to the decision maker before the decision), one may propose a variant of TS-U that restricts the exploration set to a dominating set of the feedback graph. In other words, one may replace uniform sampling over all the actions with uniform sampling over only the dominating set. The regret bound in Corollary 3 still holds by a variant of Lemma 3 shown in Alon et al. (2014).

## 5 Numerical Results

This section presents numerical results from experiments that evaluate the effectiveness of Thompson Sampling based policies in comparison to UCB-N and Cohen's algorithm. We consider the classical Beta-Bernoulli bandit problem with independent actions. The reward of each action is a Bernoulli random variable whose mean is independently drawn from a Beta prior. The implementations of TS-N and TS-U are shown in Algorithms 3 and 4. In the experiment, we use a decreasing exploration rate, as suggested by Remark 2. All the regret results are averaged over independent trials.
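A minimal version of this experiment can be reproduced as follows. This is a sketch, not the paper's exact setup (the parameter defaults $K=10$, $T=2000$, and the Erdős-Rényi arc probability $0.2$ are our choices), simulating a TS-N-style learner that updates from all observed neighbors:

```python
import random

def simulate_tsn(K=10, T=2000, edge_prob=0.2, seed=0):
    """Cumulative regret of TS-N on a Beta-Bernoulli graphical bandit with a
    fresh Erdos-Renyi feedback graph (self-loop on the played action) each round."""
    rng = random.Random(seed)
    means = [rng.random() for _ in range(K)]  # Bernoulli means drawn from the Beta(1,1) prior
    best = max(means)
    s, f = [0] * K, [0] * K
    regret = 0.0
    for _ in range(T):
        # TS-N: sample from each Beta posterior and play the argmax.
        samples = [rng.betavariate(1 + si, 1 + fi) for si, fi in zip(s, f)]
        a = max(range(K), key=samples.__getitem__)
        regret += best - means[a]
        # Draw this round's feedback graph and update every observed action.
        for j in range(K):
            if j == a or rng.random() < edge_prob:  # self-observation always present
                if rng.random() < means[j]:
                    s[j] += 1
                else:
                    f[j] += 1
    return regret
```

Averaging `simulate_tsn` over many seeds gives cumulative-regret curves of the kind reported in Figures 1 and 2.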

Figure 1 presents the cumulative regret results under undirected graph feedback. For the time-invariant case, shown in Figure 1(a), we use a graph with 2 cliques, presented in Figure 3. Figure 2 presents the cumulative regret results under directed graph feedback. For the time-invariant case, shown in Figure 2(a), we use the directed graph in which $(i \to j)$ is an arc if and only if $i < j$, presented in Figure 4. For the time-variant cases, shown in Figures 1(b) and 2(b), the sequences of graphs are generated by the Erdős-Rényi model (for each time $t$ and each pair $(i,j)$, the arc $(i \to j)$ is present with a probability drawn from the uniform distribution over $[0,1]$).

We find that TS-N and TS-U outperform the alternative algorithms, which is consistent with the empirical observations in the bandit feedback setting (Chapelle and Li (2011)). Interestingly, TS-N has better empirical performance in the tested settings, even though we have proven a better regret bound for TS-U under directed feedback graphs. The average regrets of Cohen's algorithm are dramatically larger than those of the Thompson Sampling based policies; for this reason, parts of the curves for Cohen's algorithm are omitted from Figures 1 and 2.

## 6 Conclusion

We have provided a regret analysis of Thompson Sampling for graphical bandits in which the feedback graphs are never known to the decision maker. We show that the regret of TS-N is bounded in terms of the average maximal acyclic subgraph number in the general setting. In the undirected setting, $\mathrm{mas}(G_t) = \beta_0(G_t)$, and the resulting regret bound is optimal up to a logarithmic factor. As far as we know, this is the first result showing that Thompson Sampling, even without knowledge of the graph, can attain the optimal regret in graphical bandits. As a byproduct, our analysis of TS-N provides improved regret bounds for the information directed sampling algorithms (IDS-N and IDSN-LP) proposed by Liu et al. (2018) in the informed setting.

We have also proposed a variant of Thompson Sampling, TS-U, that mixes Thompson Sampling with uniform sampling. This modification allows the algorithm to capture the graph structure and obtain a regret bound in terms of the average independence number in the directed setting, which is optimal within a logarithmic factor. Our results offer a recipe for practitioners choosing algorithms for graphical bandits without knowledge of the graphs. If the latent graphs are known to be undirected, one can choose TS-N for the best regret guarantee. Otherwise, TS-U is the choice with the best guarantee in the general (directed) setting.

## References

• Alon et al. [2013] Noga Alon, Nicolo Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. From bandits to experts: A tale of domination and independence. In Advances in Neural Information Processing Systems, pages 1610–1618, 2013.
• Alon et al. [2014] Noga Alon, Nicolo Cesa-Bianchi, Claudio Gentile, Shie Mannor, Yishay Mansour, and Ohad Shamir. Nonstochastic multi-armed bandits with graph-structured feedback. arXiv preprint arXiv:1409.8428, 2014.
• Alon et al. [2015] Noga Alon, Nicolo Cesa-Bianchi, Ofer Dekel, and Tomer Koren. Online learning with feedback graphs: Beyond bandits. In COLT, pages 23–35, 2015.
• Audibert and Bubeck [2010] Jean-Yves Audibert and Sébastien Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11(Oct):2785–2836, 2010.
• Auer et al. [2002a] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.
• Auer et al. [2002b] Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. SIAM journal on computing, 32(1):48–77, 2002.
• Buccapatnam et al. [2014] Swapna Buccapatnam, Atilla Eryilmaz, and Ness B. Shroff. Stochastic bandits with side observations on networks. SIGMETRICS Perform. Eval. Rev., 42(1):289–300, June 2014.
• Buccapatnam et al. [2017] Swapna Buccapatnam, Fang Liu, Atilla Eryilmaz, and Ness B Shroff. Reward maximization under uncertainty: Leveraging side-observations on networks. arXiv preprint arXiv:1704.07943, 2017.
• Caron et al. [2012] S. Caron, B. Kveton, M. Lelarge, and S. Bhagat. Leveraging side observations in stochastic bandits. In UAI, pages 142–151. AUAI Press, 2012.
• Carpentier and Valko [2016] Alexandra Carpentier and Michal Valko. Revealing graph bandits for maximizing local influence. In International Conference on Artificial Intelligence and Statistics, pages 10–18, 2016.
• Chapelle and Li [2011] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249–2257, 2011.
• Chen et al. [2016] Wei Chen, Yajun Wang, Yang Yuan, and Qinshi Wang. Combinatorial multi-armed bandit and its extension to probabilistically triggered arms. Journal of Machine Learning Research, 17(50):1–33, 2016.
• Cohen et al. [2016] Alon Cohen, Tamir Hazan, and Tomer Koren. Online learning with feedback graphs without the graphs. CoRR, abs/1605.07018, 2016.
• Kocák et al. [2014] Tomáš Kocák, Gergely Neu, Michal Valko, and Rémi Munos. Efficient learning by implicit exploration in bandit problems with side observations. In Advances in Neural Information Processing Systems, pages 613–621, 2014.
• Kocák et al. [2016a] Tomáš Kocák, Gergely Neu, and Michal Valko. Online learning with Erdős–Rényi side-observation graphs. In Uncertainty in Artificial Intelligence, 2016.
• Kocák et al. [2016b] Tomás Kocák, Gergely Neu, and Michal Valko. Online learning with noisy side observations. In AISTATS, pages 1186–1194, 2016.
• Liu et al. [2018] Fang Liu, Swapna Buccapatnam, and Ness Shroff. Information directed sampling for stochastic bandits with graph feedback. In AAAI, 2018.
• Mannor and Shamir [2011] Shie Mannor and Ohad Shamir. From bandits to experts: On the value of side-observations. In NIPS, pages 684–692, 2011.
• Russo and Van Roy [2016] Daniel Russo and Benjamin Van Roy. An information-theoretic analysis of thompson sampling. Journal of Machine Learning Research, 17(68):1–30, 2016.
• Seldin et al. [2014] Yevgeny Seldin, Peter Bartlett, Koby Crammer, and Yasin Abbasi-Yadkori. Prediction with limited advice and multiarmed bandits with paid observations. In International Conference on Machine Learning, pages 280–287, 2014.
• Thompson [1933] William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
• Tossou et al. [2017] Aristide Tossou, Christos Dimitrakakis, and Devdatt Dubhashi. Thompson sampling for stochastic bandits with graph feedback. In AAAI Conference on Artificial Intelligence, 2017.
• Valko [2016] Michal Valko. Bandits on graphs and structures. PhD thesis, École normale supérieure de Cachan-ENS Cachan, 2016.
• Wu et al. [2015] Yifan Wu, András György, and Csaba Szepesvári. Online learning with gaussian payoffs and side observations. In Advances in Neural Information Processing Systems, pages 1360–1368, 2015.

## Appendix A Proof of Proposition 1

As shorthand, we let $d(a,a^*) = D_{\mathrm{KL}}\left(P(Y_{t,a}\mid\mathcal{F}_t,A^*=a^*)\,\|\,P(Y_{t,a}\mid\mathcal{F}_t)\right)$ and $x(a)=\sqrt{d(a,a)}$. By the definition of the instantaneous regret, we have that

$$
\begin{aligned}
\Delta_t^T\alpha_t &= \sum_{a\in K}\alpha_t(a)\,\mathbb{E}\left[Y_{t,A^*}-Y_{t,a}\mid\mathcal{F}_t\right] && (13)\\
&\overset{(a)}{=} \sum_{a\in K}\alpha_t(a)\left(\mathbb{E}\left[Y_{t,A^*}\mid\mathcal{F}_t\right]-\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right]\right) && (14)\\
&= \mathbb{E}\left[Y_{t,A^*}\mid\mathcal{F}_t\right]-\sum_{a\in K}\alpha_t(a)\,\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right] && (15)\\
&\overset{(b)}{=} \sum_{a\in K}\alpha_t(a)\left(\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t,A^*=a\right]-\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right]\right)\\
&\overset{(c)}{\le} \sqrt{\frac{1}{2}}\sum_{a\in K}\alpha_t(a)\sqrt{d(a,a)} && (16)\\
&= \sqrt{\frac{1}{2}}\sum_{a\in K}\alpha_t(a)\,x(a), && (17)
\end{aligned}
$$

where $(a)$ follows from the linearity of expectation, $(b)$ uses the law of total probability, and $(c)$ follows from Pinsker's inequality.

By the definition of the information gain of observing an action, we have that

$$
\begin{aligned}
g_t^T\alpha_t &\overset{(d)}{\ge} (G_t h_t)^T\alpha_t = \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\alpha_t(a')\Bigg) I_t(A^*;Y_{t,a}) && (18)\\
&\overset{(e)}{=} \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\alpha_t(a')\Bigg)\Bigg(\sum_{a^*\in K}\alpha_t(a^*)\,d(a,a^*)\Bigg)\\
&\overset{(f)}{\ge} \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\alpha_t(a')\Bigg)\alpha_t(a)\,d(a,a) && (19)\\
&= \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\alpha_t(a')\Bigg)\alpha_t(a)\left(x(a)\right)^2, && (20)
\end{aligned}
$$

where $(d)$ follows from Proposition 1 of Liu et al. [2018], $(e)$ follows from the KL divergence form of mutual information, and $(f)$ follows by dropping some nonnegative terms.

As shorthand, we let $z(a) = \frac{\sum_{a':a'\xrightarrow{t}a}\alpha_t(a')}{\alpha_t(a)}$. Now, we are ready to bound the information ratio.

$$
\begin{aligned}
\left(\Delta_t^T\alpha_t\right)^2 &\overset{(g)}{\le} \frac{1}{2}\Bigg(\sum_{a\in K}\alpha_t(a)\,x(a)\Bigg)^2 && (21)\\
&= \frac{1}{2}\Bigg(\sum_{a\in K}\frac{1}{\sqrt{z(a)}}\sqrt{z(a)}\,\alpha_t(a)\,x(a)\Bigg)^2\\
&\overset{(h)}{\le} \frac{1}{2}\Bigg(\sum_{a\in K}\frac{1}{z(a)}\Bigg)\Bigg(\sum_{a\in K}z(a)\left(\alpha_t(a)\,x(a)\right)^2\Bigg)\\
&\overset{(i)}{\le} \frac{1}{2}\Bigg(\sum_{a\in K}\frac{1}{z(a)}\Bigg) g_t^T\alpha_t, && (22)
\end{aligned}
$$

where $(g)$ follows from equation (17), $(h)$ follows from the Cauchy–Schwarz inequality, and $(i)$ follows from equation (20).
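Step $(h)$ is Cauchy–Schwarz applied with the weights $z(a)$. The sketch below (random distributions and positive weights, names purely illustrative) checks the inequality $(\sum_a \alpha(a)x(a))^2 \le (\sum_a 1/z(a))(\sum_a z(a)(\alpha(a)x(a))^2)$ numerically:

```python
import random

def cauchy_schwarz_gap(alpha, x, z):
    """Return (lhs, rhs) of step (h):
    (sum a*x)^2 <= (sum 1/z) * (sum z*(a*x)^2), for positive weights z."""
    lhs = sum(a * v for a, v in zip(alpha, x)) ** 2
    rhs = (sum(1.0 / w for w in z)
           * sum(w * (a * v) ** 2 for a, v, w in zip(alpha, x, z)))
    return lhs, rhs

rng = random.Random(1)
K = 8
alpha = [rng.random() for _ in range(K)]
s = sum(alpha); alpha = [a / s for a in alpha]   # a distribution over arms
x = [rng.random() for _ in range(K)]             # stands in for sqrt(d(a,a))
z = [rng.uniform(0.1, 2.0) for _ in range(K)]    # positive weights z(a)
lhs, rhs = cauchy_schwarz_gap(alpha, x, z)
assert lhs <= rhs + 1e-12
```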

## Appendix B Proof of Theorem 1

First, observe that the entropy of the prior distribution $\alpha_1$ bounds the expected cumulative information gain:

$$
\begin{aligned}
\mathbb{E}\sum_{t=1}^{T} g_t^T\pi_t &= \mathbb{E}\sum_{t=1}^{T}\Bigg(\sum_{i\in K}\pi_t(i)\,\mathbb{E}\left[H(\alpha_t)-H(\alpha_{t+1})\mid\mathcal{F}_t,A_t=i\right]\Bigg) && (23)\\
&= \mathbb{E}\sum_{t=1}^{T}\mathbb{E}\left[H(\alpha_t)-H(\alpha_{t+1})\mid\mathcal{F}_t\right] && (24)\\
&= \mathbb{E}\sum_{t=1}^{T}\left(H(\alpha_t)-H(\alpha_{t+1})\right) && (25)\\
&\le H(\alpha_1). && (26)
\end{aligned}
$$

Then, we bound the regret of TS-N.

$$
\begin{aligned}
\mathbb{E}\left[R(T,\pi)\right] &= \mathbb{E}\sum_{t=1}^{T}\Delta_t^T\pi_t = \mathbb{E}\sum_{t=1}^{T}\frac{\Delta_t^T\pi_t}{\sqrt{g_t^T\pi_t}}\sqrt{g_t^T\pi_t}\\
&\overset{(a)}{\le} \sqrt{\mathbb{E}\sum_{t=1}^{T}\frac{\left(\Delta_t^T\pi_t\right)^2}{g_t^T\pi_t}}\sqrt{\mathbb{E}\sum_{t=1}^{T} g_t^T\pi_t}\\
&\overset{(b)}{\le} \sqrt{\frac{1}{2}\sum_{t=1}^{T}\mathbb{E}\left[Q_t(\alpha_t)\right]H(\alpha_1)}, && (27)
\end{aligned}
$$

where $(a)$ follows from Hölder's inequality and $(b)$ follows from Proposition 1 and equation (26).
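Step $(a)$ is the Cauchy–Schwarz case of Hölder's inequality applied to the sequences $\Delta_t^T\pi_t/\sqrt{g_t^T\pi_t}$ and $\sqrt{g_t^T\pi_t}$. A quick numeric check of that step on random positive sequences (illustrative names only):

```python
import math
import random

def cauchy_schwarz_regret_step(delta, gain):
    """Check step (a): sum(delta) <= sqrt(sum(delta^2/g)) * sqrt(sum(g))
    for positive per-round gains g."""
    lhs = sum(delta)
    rhs = (math.sqrt(sum(d * d / g for d, g in zip(delta, gain)))
           * math.sqrt(sum(gain)))
    return lhs, rhs

rng = random.Random(2)
delta = [rng.random() for _ in range(50)]            # per-round regret
gain = [rng.uniform(0.01, 1.0) for _ in range(50)]   # per-round info gain
lhs, rhs = cauchy_schwarz_regret_step(delta, gain)
assert lhs <= rhs + 1e-9
```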

## Appendix C Proof of Theorem 2

As shorthand, we let $d(a,a^*) = D_{\mathrm{KL}}\left(P(Y_{t,a}\mid\mathcal{F}_t,A^*=a^*)\,\|\,P(Y_{t,a}\mid\mathcal{F}_t)\right)$ and $x(a)=\sqrt{d(a,a)}$. By the definition of the instantaneous regret, we have that

$$
\begin{aligned}
\Delta_t^T\pi_t &= \sum_{a\in K}\pi_t(a)\,\mathbb{E}\left[Y_{t,A^*}-Y_{t,a}\mid\mathcal{F}_t\right] && (28)\\
&\overset{(a)}{=} \sum_{a\in K}\pi_t(a)\left(\mathbb{E}\left[Y_{t,A^*}\mid\mathcal{F}_t\right]-\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right]\right) && (29)\\
&= \mathbb{E}\left[Y_{t,A^*}\mid\mathcal{F}_t\right]-\sum_{a\in K}\pi_t(a)\,\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right] && (30)\\
&\overset{(b)}{=} \sum_{a\in K}\alpha_t(a)\,\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t,A^*=a\right]-\sum_{a\in K}\pi_t(a)\,\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right]\\
&\overset{(c)}{\le} (1-\epsilon)\sum_{a\in K}\alpha_t(a)\left(\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t,A^*=a\right]-\mathbb{E}\left[Y_{t,a}\mid\mathcal{F}_t\right]\right)+\epsilon\\
&\overset{(d)}{\le} (1-\epsilon)\sqrt{\frac{1}{2}}\sum_{a\in K}\alpha_t(a)\sqrt{d(a,a)}+\epsilon && (31)\\
&= (1-\epsilon)\sqrt{\frac{1}{2}}\sum_{a\in K}\alpha_t(a)\,x(a)+\epsilon, && (32)
\end{aligned}
$$

where $(a)$ follows from the linearity of expectation, $(b)$ uses the law of total probability, $(c)$ follows from the fact that $\pi_t$ is the mixture $(1-\epsilon)\alpha_t+\epsilon u$ of $\alpha_t$ and the uniform distribution $u$ and the rewards are bounded by 1, and $(d)$ follows from Pinsker's inequality. Step $(c)$ allows us to decompose the regret into the regret from uniform sampling and the regret from Thompson Sampling. Thus, we can further relate the latter regret term to the expected information gain.

By the definition of the information gain of observing an action, we have that

$$
\begin{aligned}
g_t^T\pi_t &\overset{(e)}{\ge} (G_t h_t)^T\pi_t = \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\pi_t(a')\Bigg) I_t(A^*;Y_{t,a}) && (33)\\
&\overset{(f)}{=} \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\pi_t(a')\Bigg)\Bigg(\sum_{a^*\in K}\alpha_t(a^*)\,d(a,a^*)\Bigg)\\
&\overset{(g)}{\ge} \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\pi_t(a')\Bigg)\alpha_t(a)\,d(a,a) && (34)\\
&= \sum_{a\in K}\Bigg(\sum_{a':a'\xrightarrow{t}a}\pi_t(a')\Bigg)\alpha_t(a)\left(x(a)\right)^2, && (35)
\end{aligned}
$$

where $(e)$ follows from Proposition 1 of Liu et al. [2018], $(f)$ follows from the KL divergence form of mutual information, and $(g)$ follows by dropping some nonnegative terms.

As shorthand, we let $\zeta(a) = \frac{\sum_{a':a'\xrightarrow{t}a}\pi_t(a')}{\alpha_t(a)}$. Now, we are ready to bound the first term in equation (32).

$$
\begin{aligned}
\sum_{a\in K}\alpha_t(a)\,x(a) &= \sum_{a\in K}\frac{1}{\sqrt{\zeta(a)}}\sqrt{\zeta(a)}\,\alpha_t(a)\,x(a) && (36)\\
&\overset{(h)}{\le} \sqrt{\sum_{a\in K}\frac{1}{\zeta(a)}}\sqrt{\sum_{a\in K}\zeta(a)\left(\alpha_t(a)\,x(a)\right)^2} && (37)\\
&\overset{(i)}{\le} \sqrt{\sum_{a\in K}\frac{\alpha_t(a)}{\sum_{a':a'\xrightarrow{t}a}\pi_t(a')}}\sqrt{g_t^T\pi_t} && (38)\\
&\overset{(j)}{\le} \sqrt{\frac{1}{1-\epsilon}\sum_{a\in K}\frac{\pi_t(a)}{\sum_{a':a'\xrightarrow{t}a}\pi_t(a')}}\sqrt{g_t^T\pi_t} && (39)\\
&= \sqrt{\frac{1}{1-\epsilon}Q_t(\pi_t)}\sqrt{g_t^T\pi_t}, && (40)
\end{aligned}
$$

where $(h)$ follows from the Cauchy–Schwarz inequality, $(i)$ follows from equation (35), and $(j)$ follows from the fact that $\alpha_t(a)\le\frac{\pi_t(a)}{1-\epsilon}$.
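Step $(j)$ uses only that the TS-U mixture satisfies $\pi_t(a)\ge(1-\epsilon)\alpha_t(a)$ coordinate-wise. A small sketch verifying this property of the mixture on a random distribution (names illustrative):

```python
import random

def mix(alpha, eps):
    """TS-U action distribution: (1 - eps) * alpha + eps * uniform."""
    K = len(alpha)
    return [(1 - eps) * a + eps / K for a in alpha]

rng = random.Random(3)
K, eps = 6, 0.2
alpha = [rng.random() for _ in range(K)]
s = sum(alpha); alpha = [a / s for a in alpha]
pi = mix(alpha, eps)
assert abs(sum(pi) - 1.0) < 1e-9                  # pi is a distribution
assert all(a <= p / (1 - eps) + 1e-12             # alpha <= pi / (1 - eps)
           for a, p in zip(alpha, pi))
```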

Now, we are ready to bound the regret.

$$
\begin{aligned}
\mathbb{E}\left[R(T,\pi)\right] &= \mathbb{E}\sum_{t=1}^{T}\Delta_t^T\pi_t && (41)\\
&\overset{(k)}{\le} \mathbb{E}\sum_{t=1}^{T}\Bigg((1-\epsilon)\sqrt{\frac{1}{2}}\sum_{a\in K}\alpha_t(a)\,x(a)+\epsilon\Bigg)\\
&\overset{(l)}{\le} \epsilon T+\sqrt{\frac{1}{2}}\,\mathbb{E}\sum_{t=1}^{T}\sqrt{(1-\epsilon)Q_t(\pi_t)}\sqrt{g_t^T\pi_t}\\
&\overset{(m)}{\le} \epsilon T+\sqrt{\frac{1}{2}\,\mathbb{E}\sum_{t=1}^{T}Q_t(\pi_t)}\sqrt{\mathbb{E}\sum_{t=1}^{T}g_t^T\pi_t}
\end{aligned}
$$