Generalized Thompson Sampling for Contextual Bandits
Thompson Sampling, one of the oldest heuristics for solving multi-armed bandits, has recently been shown to achieve state-of-the-art empirical performance. This success has led to great interest in a theoretical understanding of the heuristic. In this paper, we approach the problem in a way very different from existing efforts. In particular, motivated by the connection between Thompson Sampling and exponentiated updates, we propose a new family of algorithms called Generalized Thompson Sampling in the expert-learning framework, which includes Thompson Sampling as a special case. Like most expert-learning algorithms, Generalized Thompson Sampling uses a loss function to adjust the experts' weights. General regret bounds are derived, and are then instantiated for two important loss functions: the square loss and the logarithmic loss. In contrast to existing bounds, our results apply to quite general contextual bandits. More importantly, they quantify the effect of the "prior" distribution on the regret bounds.
Lihong Li Microsoft Research Redmond, WA 98052 email@example.com
1 Introduction

Thompson Sampling [18], one of the oldest heuristics for solving stochastic multi-armed bandits, embodies the principle of probability matching. Given a prior distribution over the underlying, unknown reward-generating process, as well as past observations of rewards, one can maintain a posterior distribution over which arm is optimal. Thompson Sampling then selects arms randomly according to the current posterior distribution.
Although unpopular for decades, this algorithm was recently shown to be state-of-the-art in empirical studies, and has found success in important applications such as news recommendation and online advertising [16, 10, 7, 14]. In addition, compared to the dominant strategies based on upper confidence bounds (UCB), it has other advantages such as robustness to observation delay and simplicity of implementation.
Despite the empirical success, theoretical understanding of the finite-time performance of Thompson Sampling was limited until very recently. The first such result is provided by [2] for non-contextual $K$-armed bandits, proving a nontrivial problem-dependent regret bound when the prior of an arm's expected reward is a Beta distribution. Later on, improved bounds were found for the same setting [11, 3], which match the asymptotic regret lower bound of [12].
For contextual bandits, only two pieces of work are available, to the best of our knowledge. Agrawal and Goyal [4] analyze linear bandits, where a Gaussian prior is used on the weight-vector space and a Gaussian likelihood function is assumed for the reward. The authors show that the regret grows at a rate that is only a small factor away from a known matching lower bound. In contrast, Russo and Van Roy [15] establish an interesting connection between UCB-style analysis and the Bayes risk of Thompson Sampling, based on the probability-matching property. This observation allows the authors to obtain a Bayes risk bound based on a novel metric, known as the margin dimension, of an arbitrary function class, which essentially measures how fast upper confidence bounds decay.
All of the existing work above relies critically either on special properties of the assumed prior distribution (as in the case of Beta distributions), or on the assumption that the prior is correct (as in the Bayes-risk analysis of [15]). Such analyses, although very interesting and important for a better understanding of Thompson Sampling, seem hard to generalize to general (possibly nonlinear) contextual bandits. Furthermore, none of the existing theory is able to quantify the role the prior plays in controlling the regret, although in practice domain knowledge is often available to construct good priors that should "accelerate" learning.
This paper attempts to address the limitations of prior work from a very different angle. Based on a connection between Thompson Sampling and exponentiated update rules, we propose a family of contextual-bandit algorithms called Generalized Thompson Sampling in the expert-learning framework [6], where each expert corresponds to a contextual policy for arm selection. Like Thompson Sampling, Generalized Thompson Sampling is a randomized strategy, following an expert's policy more often if the expert is more likely to be optimal. Unlike Thompson Sampling, it uses a loss function to update the experts' weights; Thompson Sampling is the special case of Generalized Thompson Sampling obtained with the logarithmic loss. (It should be emphasized that, in this paper, the loss function measures how well an expert predicts the average reward, given the context and the selected arm. In general, the loss function and the reward may be completely unrelated. Details are given later.)
Regret bounds are then derived under certain conditions. The proof relies critically on a novel application of a "self-boundedness" property of loss functions in competitive analysis. The results are instantiated for two important loss functions, the square loss and the logarithmic loss. Not only do these bounds apply to quite general sets of experts, but they also quantify the impact of the prior distribution on the regret. These benefits come at the cost of a worse dependence on the number of steps. However, we believe it is possible to close the gap with a more involved analysis, and the connection between (Generalized) Thompson Sampling and expert learning will likely lead to further interesting insights and algorithms in future work.
2 Preliminaries

Contextual bandits can be formulated as the following game between the learner and a stochastic environment. Let $\mathcal{X}$ and $\mathcal{A}$ be the sets of contexts and arms, respectively, and let $K = |\mathcal{A}|$. At step $t = 1, 2, \ldots$:
Learner observes the context $x_t \in \mathcal{X}$, where $x_t$ can be chosen by an adversary.
Learner selects arm $a_t \in \mathcal{A}$, and receives reward $r_t \in \{0, 1\}$, with expectation $\mathbb{E}[r_t \mid x_t, a_t]$.
Note that the setup above allows the contexts to be chosen by an adversary, which is more general than typical contextual-bandit settings. The reader may notice that we require the reward to be binary, instead of lying in $[0, 1]$. This choice simplifies our exposition without sacrificing generality. Indeed, as also suggested in prior work, if a reward $\rho \in [0, 1]$ is received, one can convert it into a binary pseudo-reward $r$ as follows: let $r$ be $1$ with probability $\rho$, and $0$ otherwise. Clearly, the bandit process remains essentially the same, with the same optimal expert and regrets.
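This conversion can be sketched in a few lines of Python (the function name is illustrative):

```python
import random

def binarize(reward, rng=random):
    """Convert a reward in [0, 1] into a binary pseudo-reward
    whose expectation equals the original reward."""
    assert 0.0 <= reward <= 1.0
    return 1 if rng.random() < reward else 0
```

Since the pseudo-reward is an unbiased Bernoulli draw, any algorithm designed for binary rewards can be run on it unchanged.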
Motivated by prior work on Thompson Sampling with parametric function classes, we allow the learner access to a set of $N$ experts, $\{E_1, \ldots, E_N\}$, each of which makes predictions about the average reward. Let $f_i : \mathcal{X} \times \mathcal{A} \to [0, 1]$ be the prediction function associated with expert $E_i$. Its arm-selection policy in context $x$ is simply the greedy policy with respect to its reward predictions: $a_i(x) = \arg\max_{a \in \mathcal{A}} f_i(x, a)$. This setting naturally captures the use of parametric function classes: for example, when generalized linear models are used to predict rewards [10, 7], each weight vector is an expert. The only difference is that our framework works with a discrete set of experts. Using a covering device, however, it is possible to approximate a continuous function class by a finite set of cardinality $N$, where $N$ is the covering number.
We define the $T$-step average regret of the learner by

$\Delta(T) = \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\left[\max_{a \in \mathcal{A}} \mathbb{E}[r_t \mid x_t, a] - \mathbb{E}[r_t \mid x_t, a_t]\right],$
where the expectation refers to the possible randomization of the learner in selecting $a_t$. As in all existing analyses of Thompson Sampling, we make the realizability assumption that one of the experts correctly predicts the average reward. Without loss of generality, let $E_1$ be this expert; in other words, $f_1(x, a) = \mathbb{E}[r \mid x, a]$ for all $(x, a)$. Clearly, $E_1$ is the reward-maximizing expert, so the regret above is measured relative to $E_1$'s greedy policy.
With the notation above, Thompson Sampling can be described as follows. It requires as input a "prior" distribution $\pi = (\pi_1, \ldots, \pi_N)$ over the experts, where $\pi_i > 0$ and $\sum_i \pi_i = 1$. Intuitively, $\pi_i$ may be interpreted as the prior probability that $E_i$ is the reward-maximizing expert. The algorithm starts with the first "posterior" distribution $w_1 = \pi$. At step $t$, the algorithm samples an expert according to the posterior distribution $w_t$ and follows that expert's policy to choose an arm. Upon receiving the reward $r_t$, the weights are updated by $w_{t+1,i} \propto w_{t,i} \exp(-\ell_{\log}(f_i(x_t, a_t), r_t))$, where $\ell_{\log}$ is the negative log-likelihood.
Finally, one can assume the optimal expert, $E_1$, is drawn from an unknown prior distribution, $\pi^*$. The expected $T$-step Bayes regret can then be defined as $\mathbb{E}_{E_1 \sim \pi^*}[\Delta(T)]$. It should be noted that the Bayes risk considered by other authors [15] is just $\mathbb{E}_{E_1 \sim \pi}[\Delta(T)]$, where $\pi$ is the prior used by Thompson Sampling. In general, the true prior is unknown, so $\pi \ne \pi^*$. We believe the Bayes regret defined with respect to $\pi^*$ is more reasonable, in light of the almost inevitable misspecification of priors in practice.
3 Generalized Thompson Sampling
A key observation about Thompson Sampling from the previous section is that its Bayes update rule can be viewed as an exponentiated update with the logarithmic loss (see also [6]). After receiving a reward, each expert is penalized for the mismatch between its prediction $f_i(x_t, a_t)$ and the observed reward, and in Thompson Sampling the penalty happens to be the logarithmic loss. Therefore, in principle, one can use other loss functions to obtain a more general family of algorithms. In fact, none of the existing regret analyses [2, 3, 4, 11] relies on the interpretation that the weights $w_t$ are Bayesian posteriors, and yet they manage to show strong regret bounds for Thompson Sampling. (The analysis of [15] is different, since its metric, the Bayes risk, is defined with respect to the prior.) These observations suggest that the promising performance of Thompson Sampling is not due to its Bayesian nature, and they motivate us to develop a more general family of algorithms called Generalized Thompson Sampling.
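To make this observation concrete, the following Python sketch (function names are ours, for illustration) checks that, for a discrete set of experts predicting Bernoulli rewards, the Bayesian posterior update coincides with an exponentiated update under the logarithmic loss, since $\exp(-\ell_{\log})$ recovers the likelihood:

```python
import math

def log_loss(pred, r):
    # negative log-likelihood of a binary reward r under prediction pred
    return -(r * math.log(pred) + (1 - r) * math.log(1 - pred))

def bayes_update(weights, preds, r):
    # posterior is proportional to prior times likelihood of r
    post = [w * (p if r == 1 else 1.0 - p) for w, p in zip(weights, preds)]
    z = sum(post)
    return [w / z for w in post]

def exp_update(weights, preds, r):
    # exponentiated update with the logarithmic loss
    post = [w * math.exp(-log_loss(p, r)) for w, p in zip(weights, preds)]
    z = sum(post)
    return [w / z for w in post]
```

The two update rules return identical distributions for either reward outcome, which is exactly the equivalence exploited in this section.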
We denote by $\ell(\hat{r}, r)$ the loss incurred by reward prediction $\hat{r}$ when the observed reward is $r$. Generalized Thompson Sampling performs exponentiated updates to adjust the experts' weights, and follows a randomly selected expert when making decisions, similar to Thompson Sampling. In addition, the algorithm allows mixing the exponentially weighted distribution with a uniform distribution, controlled by a parameter $\gamma \in [0, 1]$. The pseudocode is given in Algorithm 1.
Clearly, Generalized Thompson Sampling includes Thompson Sampling as a special case, by setting $\gamma = 0$, $\eta = 1$, and $\ell$ to be the logarithmic loss: $\ell_{\log}(\hat{r}, r) = -r \ln \hat{r} - (1 - r) \ln(1 - \hat{r})$. Another loss function considered in this paper is the square loss: $\ell_{\mathrm{sq}}(\hat{r}, r) = (\hat{r} - r)^2$.
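One round of the algorithm can be sketched as follows (a minimal illustration, not the paper's Algorithm 1 verbatim: the function names, the per-expert prediction layout `preds[i][a]`, and the default parameter values are ours). Arms are indexed $0, \ldots, K-1$; arm selection mixes the weight-induced distribution over greedy choices with a uniform distribution, and the weights receive an exponentiated update at learning rate $\eta$:

```python
import math
import random

def arm_distribution(weights, greedy_arms, K, gamma):
    """Mix the weight-induced distribution over the experts' greedy
    arm choices with a uniform distribution, controlled by gamma."""
    total = sum(weights)
    p = [gamma / K] * K
    for w, a in zip(weights, greedy_arms):
        p[a] += (1.0 - gamma) * w / total
    return p

def select_arm(weights, preds, K, gamma, rng=random):
    """Sample an arm; preds[i][a] is expert i's reward prediction for arm a."""
    greedy = [max(range(K), key=lambda a: f[a]) for f in preds]
    p = arm_distribution(weights, greedy, K, gamma)
    return rng.choices(range(K), weights=p)[0]

def update_weights(weights, preds, arm, r, loss, eta=1.0):
    """Exponentiated update: w_i <- w_i * exp(-eta * loss(f_i(x, a), r))."""
    return [w * math.exp(-eta * loss(f[arm], r)) for w, f in zip(weights, preds)]
```

With the logarithmic loss, $\gamma = 0$, and $\eta = 1$, this reduces to the Thompson Sampling update of the previous section.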
4 Regret Analysis

For convenience, the analysis uses the following shorthand notation:
The history of the learner up to step $t$ is $h_t = (x_1, a_1, r_1, \ldots, x_t, a_t, r_t)$.
The immediate regret of expert $E_i$ in context $x$ is $\delta_i(x) = f_1(x, a_1(x)) - f_1(x, a_i(x))$.
The normalized weight of expert $E_i$ at step $t$ is $p_{t,i} = w_{t,i} / \sum_j w_{t,j}$.
The shifted loss incurred by expert $E_i$ on the triple $(x, a, r)$ is denoted $\ell_i(x, a, r) = \ell(f_i(x, a), r) - \ell(f_1(x, a), r)$. In particular, $\ell_1 \equiv 0$. In other words, $\ell_i$ is the loss relative to the best expert ($E_1$), and can be negative.
The average shifted loss at step $t$ is $\bar{\ell}_t = \sum_i p_{t,i}\, \ell_i(x_t, a_t, r_t)$.
4.1 Main Theorem
Clearly, conditions are needed to relate the loss function to the regret. Our results need the following assumptions:
(Consistency) For all $i$ and all $(x, a)$, $\mathbb{E}_r[\ell_i(x, a, r)] \ge 0$.
(Informativeness) There exists a constant $b > 0$ such that, for all $i$ and $x$, $\delta_i(x) \le b \sqrt{\mathbb{E}_{a, r}[\ell_i(x, a, r)]}$.
(Boundedness) The shifted loss takes values in $[-1, 1]$.
(Self-boundedness) There exists a constant $c > 0$ such that, for all $i$, $\mathbb{E}_r[\ell_i(x, a, r)^2] \le c\, \mathbb{E}_r[\ell_i(x, a, r)]$; namely, the second moment of the shifted loss is bounded, up to a constant, by its first moment.
Proof. The expected $T$-step regret may be rewritten more explicitly, and then bounded, as follows:
Now the question becomes one of bounding the expected total shifted loss, $\mathbb{E}[\sum_{t=1}^{T} \bar{\ell}_t]$. This problem is tackled by the following key lemma, which makes use of the self-boundedness property of the loss function. The lemma may be of interest in its own right. Similar properties were used in [1], in a very different way.
Proof. First, observe that if the shifted loss $\ell_i$ is used in Generalized Thompson Sampling in place of the loss $\ell$, the algorithm behaves identically: subtracting $E_1$'s loss multiplies every weight by the same factor, which cancels in the normalization. The rest of the proof uses this fact, pretending that Generalized Thompson Sampling uses $\ell_i$ for the weight updates.
For any step $t$, the weight sum $W_t = \sum_i w_{t,i}$ changes according to
where the first inequality is due to Condition C3 and the inequality $e^{-z} \le 1 - z + z^2$ for $z \ge -1$; the second inequality is due to the inequality $1 + z \le e^z$ for all $z$.
Conditioned on the observed context and selected arm at step $t$, we take expectations of the expressions above with respect to the random reward, leading to
Condition C4 then implies
The above inequality holds for any fixed context and arm, so it also holds in expectation when they are randomized:
Finally, summing the left-hand side over $t = 1, \ldots, T$ gives
The last inequality above follows from the observation that $W_{T+1} \ge w_{T+1,1} = \pi_1$ (the shifted loss of $E_1$ is identically zero), and that $W_1 = 1$.
The next corollary considers the Bayes regret, $\mathbb{E}_{E_1 \sim \pi^*}[\Delta(T)]$, with an unknown, true prior $\pi^*$:
If the optimal expert is sampled from distribution $\pi^*$, the Bayes regret is at most
where $H(\pi^*)$ and $\mathrm{KL}(\pi^* \| \pi)$ are the standard entropy and KL divergence, respectively.
Proof. We have
where the inequalities are due to Corollary 1 and Jensen’s inequality, respectively.
4.2 Square Loss
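For the square loss, Conditions C1 and C4 can be verified by direct computation: with $r \sim \mathrm{Bernoulli}(f_1)$, the expected shifted loss equals $(f_i - f_1)^2 \ge 0$, and its second moment is at most $4$ times its first moment. The following sketch checks this on a grid (the grid and the constant $4$ are ours, used here for illustration):

```python
def shifted_sq_moments(f_star, f):
    """First two moments of the shifted square loss
    d(r) = (f - r)^2 - (f_star - r)^2, with r ~ Bernoulli(f_star)."""
    d1 = (f - 1.0) ** 2 - (f_star - 1.0) ** 2   # loss difference when r = 1
    d0 = f ** 2 - f_star ** 2                    # loss difference when r = 0
    m1 = f_star * d1 + (1.0 - f_star) * d0       # first moment
    m2 = f_star * d1 ** 2 + (1.0 - f_star) * d0 ** 2  # second moment
    return m1, m2
```

The identity behind the check is $d(r) = (f - f_1)(f + f_1 - 2r)$, so $d(r)^2 \le 4 (f - f_1)^2$ pointwise while the first moment equals $(f - f_1)^2$.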
4.3 Logarithmic Loss
For the logarithmic loss, we assume the shifted losses of all experts are bounded in $[-B, B]$ for some constant $B > 0$, so that one can normalize the shifted logarithmic loss to the range $[-1, 1]$ by defining:
This assumption can usually be satisfied in practice, and seems necessary for deriving finite-time guarantees. Note that it is slightly weaker than the more common assumption that the logarithmic loss itself is bounded.
We now verify all the necessary conditions. Condition C1 follows from the well-known fact that the expected shifted logarithmic loss of any expert, relative to the true expert, is a KL divergence,
which is non-negative.
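This fact is easy to check numerically: with $r \sim \mathrm{Bernoulli}(f_1)$, the expected shifted logarithmic loss of a predictor $f$ equals $\mathrm{KL}(f_1 \| f)$ between the two Bernoulli distributions. A sketch over an interior grid (avoiding the endpoints, where the logarithms diverge):

```python
import math

def log_loss(p, r):
    # negative log-likelihood of binary reward r under prediction p
    return -(r * math.log(p) + (1 - r) * math.log(1 - p))

def kl(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
```

Taking the expectation of the loss difference under $r \sim \mathrm{Bernoulli}(f_1)$ reproduces the KL divergence term by term.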
Condition C2 is verified in the following lemma:
For the loss function defined in Equation (2), one has
Proof. We have the following:
where the first inequality is due to the triangle inequality; the second is due to Pinsker's inequality; the fourth is due to Jensen's inequality; the fifth follows from the fact that each arm is selected with probability at least $\gamma / K$; and the last equality is from Equation (3).
Condition C3 is immediately satisfied by the normalization in the definition of the loss above.
Condition C4 is the most difficult to verify. To the best of our knowledge, such a result for the logarithmic loss is not found in the literature and may be of independent interest. For example, it implies that the analysis of [1] for the square loss also applies to the logarithmic loss. The following lemma states the result formally. Its proof, which is rather technical, is left to the appendix.
With all four conditions verified, we can apply the results in Section 4.1 to obtain the corresponding regret bound and Bayes regret bound.
5 Conclusions

In this paper, we propose a new family of algorithms, Generalized Thompson Sampling, and analyze its regret in the expert-learning framework. Our regret analysis provides a promising alternative for understanding the strong performance of Thompson Sampling, an interesting and pressing research problem raised by its recent empirical success. Compared to existing analyses in the literature, it has the following benefits. First, the results apply more generally to a set of experts, rather than relying on specific modeling assumptions about the prior and likelihood. Second, the analysis quantifies how the (not necessarily correct) prior $\pi$ affects the regret bound, as well as the Bayes regret when the optimal expert is drawn from an unknown prior $\pi^*$. Similar to PAC-Bayes bounds, these results combine the benefits of good priors with the robustness of frequentist approaches.
Our proof for Generalized Thompson Sampling is inspired by the online-learning literature [6]. However, a new technique is needed to prove the critical Lemma 1, which relies on the self-boundedness of a loss function. A similar property is shown in [1] for the square loss only, and is used in a very different way. The self-boundedness of the logarithmic loss (Lemma 3) appears to be new, to the best of our knowledge, and may be of independent interest.
Generalized Thompson Sampling bears some similarity to the Regressor Elimination (RE) algorithm [1]. A crucial difference is that RE requires the computationally expensive operation of computing a "balanced" distribution over experts, in order to control variance in the elimination process. In contrast, our algorithm is computationally much cheaper. The operations of Generalized Thompson Sampling are also related to EXP4 [5], which uses unbiased, importance-weighted reward estimates for exponentiated updates of expert weights. In practice, it seems more natural to adjust an expert's weight using its prediction loss, rather than using the reward signals directly [10, 7].
While we have focused on the case of finitely many experts, the setting is motivated by the more realistic case where the set of experts is continuous [10, 7, 4]. The discrete case considered here may be thought of as an approximation to the continuous case, obtained using a covering device. We expect similar results to hold with $N$ replaced by the covering number of the class.
This work suggests a few interesting directions for future work. The first is to close the gap between the current bound and the best problem-independent bound for contextual bandits. The second is to extend the analysis to continuous expert classes and, more importantly, to the agnostic (non-realizable) case. Finally, it would be interesting to use the regret analysis of (Generalized) Thompson Sampling to obtain performance guarantees for its reinforcement-learning analogues (e.g., [17]).
Appendix A Proof of Lemma 3: Self-boundedness of Logarithmic Loss
This section proves Lemma 3, regarding the self-boundedness of the logarithmic loss in the sense described in Condition C4. The analysis here does not involve the step $t$ or the corresponding context and selected arm, so we simplify notation as follows: the true expert predicts $p$, and the other expert predicts $q$. The binary reward is then a Bernoulli random variable with success rate $p$. The shifted logarithmic loss of the other expert is given by
The first two moments of the random variable are given by:
Let $\rho(p, q)$ denote the ratio between the variance and the expectation of the shifted loss, as a function of $p$ and $q$. Our goal is to show that $\rho$ is bounded by a constant, independent of $p$ and $q$. It will then follow that the ratio between the second moment and the expectation is also bounded by a constant, since the second moment is the variance plus the squared expectation, and the squared expectation is at most $B$ times the expectation.
Taking the derivative of $\rho$ with respect to $q$, one obtains
for some function $g$. It can be verified, by rather tedious calculations, that there exists some $q_0$ such that the derivative is negative for $q < q_0$ and positive for $q > q_0$. So $\rho$ is maximized by making $q$ close to either end of its range. It then follows, again by rather tedious calculations, that $\rho$ is bounded by a constant depending only on $B$, using the assumption that the log-ratios (that is, the shifted losses) are bounded by $B$.
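The conclusion is easy to check numerically on a grid: restricting $p, q$ to $[0.1, 0.9]$ bounds the log-ratios, and the second-moment-to-expectation ratio of the shifted loss stays below a small constant. In the sketch below, the constant $4$ is an empirical margin for this particular grid, not the constant from the proof:

```python
import math

def shifted_log_moments(p, q):
    """Moments of the shifted log loss of predictor q when r ~ Bernoulli(p)."""
    d1 = math.log(p / q)                     # shifted loss when r = 1
    d0 = math.log((1.0 - p) / (1.0 - q))     # shifted loss when r = 0
    m1 = p * d1 + (1.0 - p) * d0             # expectation, equals KL(p || q)
    m2 = p * d1 ** 2 + (1.0 - p) * d0 ** 2   # second moment
    return m1, m2
```

As $q \to p$ this ratio tends to $2$, and it grows only mildly as $q$ approaches the boundary of the allowed range, consistent with the boundary analysis above.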
- [1] Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford, and Robert E. Schapire. Contextual bandit learning under the realizability assumption. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS-12), 2012.
- [2] Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the Twenty-Fifth Annual Conference on Learning Theory (COLT-12), pages 39.1–39.26, 2012.
- [3] Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS-13), 2013.
- [4] Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the Thirtieth International Conference on Machine Learning (ICML-13), 2013.
- [5] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
- [6] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
- [7] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems 24 (NIPS-11), pages 2249–2257, 2012.
- [8] Wei Chu, Lihong Li, Lev Reyzin, and Robert E. Schapire. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), pages 208–214, 2011.
- [9] Sarah Filippi, Olivier Cappe, Aurélien Garivier, and Csaba Szepesvári. Parametric bandits: The generalized linear case. In Advances in Neural Information Processing Systems 23 (NIPS-10), pages 586–594, 2011.
- [10] Thore Graepel, Joaquin Quinonero Candela, Thomas Borchert, and Ralf Herbrich. Web-scale Bayesian click-through rate prediction for sponsored search advertising in Microsoft’s Bing search engine. In Proceedings of the Twenty-Seventh International Conference on Machine Learning (ICML-10), pages 13–20, 2010.
- [11] Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Proceedings of the Twenty-Third International Conference on Algorithmic Learning Theory (ALT-12), pages 199–213, 2012.
- [12] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
- [13] John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems 20, pages 1096–1103, 2008.
- [14] Benedict C. May, Nathan Korda, Anthony Lee, and David S. Leslie. Optimistic Bayesian sampling in contextual-bandit problems. Journal of Machine Learning Research, 13:2069–2106, 2012.
- [15] Daniel Russo and Benjamin Van Roy. Learning to optimize via posterior sampling, 2013. arXiv:1301.2609.
- [16] Steven L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26:639–658, 2010.
- [17] Malcolm J. A. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML-00), pages 943–950, 2000.
- [18] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3–4):285–294, 1933.