Combinatorial Bandits Revisited


Richard Combes   M. Sadegh Talebi    Alexandre Proutiere   Marc Lelarge
Centrale-Supelec, L2S, Gif-sur-Yvette, FRANCE
Department of Automatic Control, KTH, Stockholm, SWEDEN
INRIA & ENS, Paris, FRANCE

This paper investigates stochastic and adversarial combinatorial multi-armed bandit problems. In the stochastic setting under semi-bandit feedback, we derive a problem-specific regret lower bound, and discuss its scaling with the dimension of the decision space. We propose ESCB, an algorithm that efficiently exploits the structure of the problem and provide a finite-time analysis of its regret. ESCB  has better performance guarantees than existing algorithms, and significantly outperforms these algorithms in practice. In the adversarial setting under bandit feedback, we propose CombEXP, an algorithm with the same regret scaling as state-of-the-art algorithms, but with lower computational complexity for some combinatorial problems.



1 Introduction

Multi-Armed Bandits (MAB) problems [1] constitute the most fundamental sequential decision problems with an exploration vs. exploitation trade-off. In such problems, the decision maker selects an arm in each round, and observes a realization of the corresponding unknown reward distribution. Each decision is based on past decisions and observed rewards. The objective is to maximize the expected cumulative reward over some time horizon by balancing exploitation (arms with higher observed rewards should be selected often) and exploration (all arms should be explored to learn their average rewards). Equivalently, the performance of a decision rule or algorithm can be measured through its expected regret, defined as the gap between the expected reward achieved by the algorithm and that achieved by an oracle algorithm always selecting the best arm. MAB problems have found applications in many fields, including sequential clinical trials, communication systems, economics, see e.g. [2, 3].

In this paper, we investigate generic combinatorial MAB problems with linear rewards, as introduced in [4]. In each round $n \ge 1$, a decision maker selects an arm $M$ from a finite set $\mathcal{M} \subset \{0,1\}^d$ and receives a reward $M^\top X(n) = \sum_{i=1}^d M_i X_i(n)$. The reward vector $X(n)$ is unknown. We focus here on the case where all arms consist of the same number $m$ of basic actions, in the sense that $\|M\|_1 = m$ for all $M \in \mathcal{M}$. After selecting an arm $M$ in round $n$, the decision maker receives some feedback. We consider both (i) semi-bandit feedback, under which after round $n$, for all $i \in \{1,\dots,d\}$, the component $X_i(n)$ of the reward vector is revealed if and only if $M_i = 1$; and (ii) bandit feedback, under which only the reward $M^\top X(n)$ is revealed. Based on the feedback received up to round $n-1$, the decision maker selects an arm for the next round $n$, and her objective is to maximize her cumulative reward over a given time horizon of $T$ rounds. The challenge in these problems resides in the very large number of arms, i.e., in the combinatorial structure of $\mathcal{M}$: its size could well grow as $d^m$. Fortunately, one may hope to exploit the problem structure to speed up the exploration of sub-optimal arms.
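To make the setup concrete, here is a minimal simulator for the stochastic case with Bernoulli rewards, showing the difference between semi-bandit and bandit feedback. This is a sketch; the helper name `pull` and the tiny instance are our own, not from the paper.

```python
import numpy as np

def pull(theta, M, rng, feedback="semi-bandit"):
    """Play arm M (a 0/1 vector) for one round against Bernoulli means theta.

    Returns (reward, observation): under semi-bandit feedback the observation
    is the dict of components X_i for the selected basic actions; under bandit
    feedback it is the scalar reward M^T X only.
    """
    X = rng.binomial(1, theta)          # full reward vector (hidden from the player)
    reward = int(M @ X)                 # linear reward M^T X(n)
    if feedback == "semi-bandit":
        obs = {i: int(X[i]) for i in np.flatnonzero(M)}
    else:
        obs = reward
    return reward, obs

# Tiny instance: d = 4 basic actions, arms are 2-sets (m = 2).
theta = np.array([0.9, 0.8, 0.2, 0.1])
M = np.array([1, 1, 0, 0])
rng = np.random.default_rng(0)
r, obs = pull(theta, M, rng)
```

Under semi-bandit feedback the learner sees each selected component separately, which is exactly the extra information ESCB exploits later.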

We consider two instances of combinatorial bandit problems, depending on how the sequence of reward vectors is generated. We first analyze the case of stochastic rewards, where for all $i \in \{1,\dots,d\}$, the rewards $(X_i(n))_{n \ge 1}$ are i.i.d. with Bernoulli distribution of unknown mean. The reward sequences are also independent across $i$. We then address the problem in the adversarial setting, where the sequence of vectors $(X(n))_{n \ge 1}$ is arbitrary and selected by an adversary at the beginning of the experiment. In the stochastic setting, we provide sequential arm selection algorithms whose performance exceeds that of existing algorithms, whereas in the adversarial setting, we devise simple algorithms whose regret has the same scaling as that of state-of-the-art algorithms, but with lower computational complexity.

2 Contribution and Related Work

2.1 Stochastic combinatorial bandits under semi-bandit feedback

Contribution. (a) We derive an asymptotic (as the time horizon $T$ grows large) regret lower bound satisfied by any algorithm (Theorem 1). This lower bound is problem-specific and tight: there exists an algorithm that attains the bound on all problem instances, although the algorithm might be computationally expensive. To our knowledge, such lower bounds have not been proposed in the case of stochastic combinatorial bandits. The dependency of the lower bound on the dimensions $d$ and $m$ is unfortunately not explicit. We further provide a simplified lower bound (Theorem 2) and derive its scaling in $(d, m)$ in specific examples.

(b) We propose ESCB (Efficient Sampling for Combinatorial Bandits), an algorithm whose regret scales at most as $O\big(\frac{\sqrt{m}\,d}{\Delta_{\min}}\log T\big)$ (Theorem 5), where $\Delta_{\min}$ denotes the expected reward difference between the best and the second-best arm. ESCB assigns an index to each arm. The index of a given arm can be interpreted as performing likelihood tests with vanishing risk on its average reward. Our indexes are natural extensions of the KL-UCB indexes defined for unstructured bandits [5]. Numerical experiments for some specific combinatorial problems are presented in the supplementary material, and show that ESCB significantly outperforms existing algorithms.

Related work. Previous contributions on stochastic combinatorial bandits focused on specific combinatorial structures, e.g. $m$-sets [6], matroids [7], or permutations [8]. Generic combinatorial problems were investigated in [9, 10, 11, 12]. The proposed algorithms, LLR and CUCB, are variants of the UCB algorithm, and their performance guarantees are presented in Table 1. Our algorithms improve over LLR and CUCB by a multiplicative factor of $\sqrt{m}$.

LLR [9]: $O\big(\frac{m^3 d}{\Delta_{\min}^2}\log T\big)$;  CUCB [10]: $O\big(\frac{m^2 d}{\Delta_{\min}}\log T\big)$;  CUCB [11]: $O\big(\frac{m d}{\Delta_{\min}}\log T\big)$;  ESCB (Theorem 5): $O\big(\frac{\sqrt{m}\,d}{\Delta_{\min}}\log T\big)$
Table 1: Regret upper bounds for stochastic combinatorial optimization under semi-bandit feedback.

2.2 Adversarial combinatorial problems under bandit feedback

Contribution. We present algorithm CombEXP (Theorem 6), whose regret upper bound depends on $\lambda$, the smallest nonzero eigenvalue of the matrix $\mathbb{E}[MM^\top]$ when $M$ is uniformly distributed over $\mathcal{M}$. For most problems of interest [4], $\lambda$ is large enough that CombEXP has regret $O\big(\sqrt{m^3 d T \log\frac{d}{m}}\big)$. A known regret lower bound is $\Omega(m\sqrt{dT})$ [13], so the regret gap between CombEXP and this lower bound scales at most as $\sqrt{m}$ up to a logarithmic factor.

Related work. Adversarial combinatorial bandits have been extensively investigated recently; see [13] and references therein. Some papers consider specific instances of these problems, e.g., shortest-path routing [14], $m$-sets [15], and permutations [16]. For generic combinatorial problems, known regret lower bounds scale as $\Omega\big(\sqrt{mdT}\big)$ and $\Omega\big(m\sqrt{dT}\big)$ (if $d \ge 2m$) in the case of semi-bandit and bandit feedback, respectively [13]. In the case of semi-bandit feedback, [13] proposes OSMD, an algorithm whose regret upper bound matches the lower bound. [17] presents an algorithm with regret $\tilde O\big(\sqrt{m d L_T^\star}\big)$, where $L_T^\star$ is the total reward of the best arm after $T$ rounds.

For problems with bandit feedback, [4] proposes ComBand and derives a regret upper bound which depends on the structure of the action set $\mathcal{M}$. For most problems of interest, the regret under ComBand is upper-bounded by $O\big(\sqrt{m^3 d T \log\frac{d}{m}}\big)$. [18] addresses generic linear optimization with bandit feedback, and the proposed algorithm, referred to as EXP2 with John's Exploration, has a regret scaling at most as $O\big(\sqrt{m^3 d T}\big)$ in the case of combinatorial structure. As we show next, for many combinatorial structures of interest (e.g. $m$-sets, matchings, spanning trees), CombEXP yields the same regret as ComBand and EXP2 with John's Exploration, with lower computational complexity for a large class of problems. Table 2 summarizes known regret bounds.

Algorithm: Regret
Lower Bound [13]: $\Omega\big(m\sqrt{dT}\big)$, if $d \ge 2m$
ComBand [4]: $O\big(\sqrt{m^3 d T \log\frac{d}{m}}\big)$
EXP2 with John's Exploration [18]: $O\big(\sqrt{m^3 d T}\big)$
CombEXP (Theorem 6): $O\big(\sqrt{m^3 d T \log\frac{d}{m}}\big)$
Table 2: Regret of various algorithms for adversarial combinatorial bandits with bandit feedback. Note that the bounds shown for ComBand and CombEXP hold for most combinatorial classes of interest, for which $\lambda$ is sufficiently large.

Example 1: $m$-sets. $\mathcal{M}$ is the set of all $d$-dimensional binary vectors with $m$ non-zero coordinates. We have $\mu_i^0 = \frac{1}{d}$ and $\lambda = \frac{m(d-m)}{d(d-1)}$ (refer to the supplementary material for details). Hence when $m \le d/2$, the regret upper bound of CombEXP becomes $O\big(\sqrt{m^3 d T \log\frac{d}{m}}\big)$, which is the same as that of ComBand and EXP2 with John's Exploration.
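The eigenvalue in this example can be sanity-checked by brute force: under the uniform distribution over $m$-sets, $\mathbb{E}[MM^\top]$ has diagonal entries $m/d$ and off-diagonal entries $m(m-1)/(d(d-1))$, whose smallest eigenvalue is $m(d-m)/(d(d-1))$. A small verification script (enumeration-based, so only for tiny $d$; the function name is our own):

```python
from itertools import combinations
import numpy as np

def lambda_min_nonzero(d, m):
    """Smallest nonzero eigenvalue of E[M M^T] for M uniform over m-sets of {0,...,d-1}."""
    arms = [np.bincount(c, minlength=d) for c in combinations(range(d), m)]
    sigma = sum(np.outer(M, M) for M in arms) / len(arms)
    eig = np.linalg.eigvalsh(sigma)
    return float(min(e for e in eig if e > 1e-10))

d, m = 6, 2
assert abs(lambda_min_nonzero(d, m) - m * (d - m) / (d * (d - 1))) < 1e-9
```

The closed form follows because $\mathbb{E}[MM^\top] = (a-b)I + bJ$ with $a = m/d$, $b = m(m-1)/(d(d-1))$, whose spectrum is $\{a-b,\ a+(d-1)b\}$.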

Example 2: matchings. The set of arms $\mathcal{M}$ is the set of perfect matchings in the complete bipartite graph $K_{N,N}$, so that $d = N^2$ and $m = N$. We have $|\mathcal{M}| = N!$ arms. Hence the regret upper bound of CombEXP is $O\big(\sqrt{N^5 T \log N}\big)$, the same as for ComBand and EXP2 with John's Exploration.

Example 3: spanning trees. $\mathcal{M}$ is the set of spanning trees in the complete graph $K_N$. In this case, $d = \binom{N}{2}$, $m = N - 1$, and by Cayley's formula $\mathcal{M}$ has $N^{N-2}$ arms. For large $N$, the regret upper bound of ComBand and EXP2 with John's Exploration becomes $O\big(\sqrt{N^5 T \log N}\big)$. As for CombEXP, we get the same regret upper bound.

3 Models and Objectives

We consider MAB problems where each arm $M$ is a subset of $m$ basic actions taken from $[d] = \{1, \dots, d\}$. For $i \in [d]$, $X_i(n)$ denotes the reward of basic action $i$ in round $n$. In the stochastic setting, for each $i$, the sequence of rewards $(X_i(n))_{n \ge 1}$ is i.i.d. with Bernoulli distribution with mean $\theta_i$. Rewards are assumed to be independent across actions. We denote by $\theta = (\theta_1, \dots, \theta_d)^\top \in \Theta = [0,1]^d$ the vector of unknown expected rewards of the various basic actions. In the adversarial setting, the reward vector $X(n) \in [0,1]^d$ is arbitrary, and the sequence $(X(n))_{n \ge 1}$ is decided (but unknown) at the beginning of the experiment.

The set $\mathcal{M}$ of arms is an arbitrary subset of $\{0,1\}^d$ such that each of its elements $M$ has $m$ basic actions. Arm $M$ is identified with a binary column vector $(M_1, \dots, M_d)^\top$, and we have $\|M\|_1 = m$ for all $M \in \mathcal{M}$. At the beginning of each round $n$, a policy $\pi$ selects an arm $M^\pi(n) \in \mathcal{M}$ based on the arms chosen in previous rounds and their observed rewards. The reward of arm $M^\pi(n)$ selected in round $n$ is $\sum_i M_i^\pi(n) X_i(n) = M^\pi(n)^\top X(n)$.

We consider both semi-bandit and bandit feedback. Under semi-bandit feedback and policy $\pi$, at the end of round $n$, the outcomes $X_i(n)$ of the basic actions $i$ with $M_i^\pi(n) = 1$ are revealed to the decision maker, whereas under bandit feedback, only $M^\pi(n)^\top X(n)$ can be observed.

Let $\Pi$ be the set of all feasible policies. The objective is to identify a policy in $\Pi$ maximizing the cumulative expected reward over a finite time horizon $T$. The expectation is here taken with respect to possible randomness in the rewards (in the stochastic setting) and the possible randomization in the policy. Equivalently, we aim at designing a policy that minimizes regret, where the regret of policy $\pi \in \Pi$ is defined by:

$$R^\pi(T) = \max_{M \in \mathcal{M}} \mathbb{E}\Big[\sum_{n=1}^T M^\top X(n)\Big] - \mathbb{E}\Big[\sum_{n=1}^T M^\pi(n)^\top X(n)\Big].$$

Finally, for the stochastic setting, we denote by $\mu_M(\theta) = M^\top \theta$ the expected reward of arm $M$, and let $M^\star(\theta)$, or $M^\star$ for short, be any arm with maximum expected reward: $M^\star(\theta) \in \arg\max_{M \in \mathcal{M}} \mu_M(\theta)$. In what follows, to simplify the presentation, we assume that the optimal arm $M^\star$ is unique. We further define: $\mu^\star(\theta) = M^{\star\top}\theta$, $\Delta_{\min} = \min_{M \ne M^\star} \Delta_M$ where $\Delta_M = \mu^\star(\theta) - \mu_M(\theta)$, and $\Delta_{\max} = \max_M \Delta_M$.

4 Stochastic Combinatorial Bandits under Semi-bandit Feedback

4.1 Regret Lower Bound

Given $\theta \in \Theta$, define the set of parameters that cannot be distinguished from $\theta$ when selecting arm $M^\star(\theta)$, and for which arm $M^\star(\theta)$ is suboptimal:

$$B(\theta) = \big\{\lambda \in \Theta : \lambda_i = \theta_i \ \text{for all } i \text{ with } M_i^\star(\theta) = 1, \ \text{and} \ \max_{M \in \mathcal{M}} \mu_M(\lambda) > \mu_{M^\star(\theta)}(\lambda)\big\}.$$

We define $\mathrm{kl}(u, v)$ as the Kullback-Leibler divergence between Bernoulli distributions of respective means $u$ and $v$, i.e., $\mathrm{kl}(u, v) = u \log\frac{u}{v} + (1-u)\log\frac{1-u}{1-v}$. Finally, for $(\theta, \lambda) \in \Theta^2$, we define the vector $\mathrm{kl}(\theta, \lambda) = (\mathrm{kl}(\theta_i, \lambda_i))_{1 \le i \le d}$.

We derive a regret lower bound valid for any uniformly good algorithm. An algorithm $\pi$ is uniformly good iff $R^\pi(T) = o(T^\alpha)$ for all $\alpha > 0$ and all parameters $\theta \in \Theta$. The proof of this result relies on a general result on controlled Markov chains [19].

Theorem 1

For all $\theta \in \Theta$, for any uniformly good policy $\pi \in \Pi$, $\liminf_{T\to\infty} \frac{R^\pi(T)}{\log T} \ge c(\theta)$, where $c(\theta)$ is the optimal value of the optimization problem:

$$\min_{x_M \ge 0,\, M \in \mathcal{M}} \ \sum_{M \in \mathcal{M}} x_M \Delta_M \qquad (3)$$
$$\text{s.t.} \quad \sum_{M \in \mathcal{M}} x_M \, M^\top \mathrm{kl}(\theta, \lambda) \ge 1, \quad \forall \lambda \in B(\theta). \qquad (4)$$


Observe first that optimization problem (3) is a semi-infinite linear program which can be solved for any fixed $\theta$, but its optimal value $c(\theta)$ is difficult to compute explicitly. Determining how $c(\theta)$ scales as a function of the problem dimensions $d$ and $m$ is not obvious. Also note that (3) has the following interpretation: assume that (3) has a unique solution $x^\star$. Then any uniformly good algorithm must select action $M$ at least $x_M^\star \log T$ times over the first $T$ rounds. From [19], we know that there exists an algorithm which is asymptotically optimal, so that its regret matches the lower bound of Theorem 1. However this algorithm suffers from two problems: it is computationally infeasible for large problems, since it involves repeatedly solving (3); furthermore, the algorithm has no finite-time performance guarantees, and numerical experiments suggest that its finite-time performance on typical problems is rather poor. Further remark that if $\mathcal{M}$ is the set of singletons (classical bandit), Theorem 1 reduces to the Lai-Robbins bound [20], and if $\mathcal{M}$ is the set of $m$-sets (bandit with multiple plays), Theorem 1 reduces to the lower bound derived in [6]. Finally, Theorem 1 can be generalized in a straightforward manner to the case where rewards belong to a one-parameter exponential family of distributions (e.g., Gaussian, Exponential, Gamma, etc.) by replacing kl by the appropriate divergence measure.

A Simplified Lower Bound

We now study how the regret $c(\theta)$ scales as a function of the problem dimensions $d$ and $m$. To this aim, we present a simplified regret lower bound. Given $\theta$, we say that a set $H \subset \mathcal{M}$ has property $P(\theta)$ iff, for all $(M, M') \in H^2$ with $M \ne M'$, we have $M_i M'_i = 0$ for all $i$ with $M_i^\star(\theta) = 0$; that is, distinct arms in $H$ do not share suboptimal basic actions. We may now state Theorem 2.

Theorem 2

Let $H$ be a maximal (inclusion-wise) subset of $\mathcal{M}$ with property $P(\theta)$. Then:

Corollary 1

Assume that $\Delta_M = \Delta$ for all suboptimal $M$, for some constant $\Delta > 0$, and that each arm in $H$ has at most $k$ suboptimal basic actions. Then $c(\theta) = \Omega(|H|/\Delta)$.

Theorem 2 provides an explicit regret lower bound. Corollary 1 states that $c(\theta)$ scales at least with the size of $H$. For most combinatorial sets, $|H|$ is proportional to $d - m$ (see supplementary material for some examples), which implies that in these cases, one cannot obtain a regret smaller than $O\big(\frac{d-m}{\Delta_{\min}}\log T\big)$. This result is intuitive since $d - m$ is the number of parameters not observed when selecting the optimal arm. The algorithms proposed below have a regret of $O\big(\frac{\sqrt{m}\,d}{\Delta_{\min}}\log T\big)$, which is acceptable since typically, $\sqrt{m}$ is much smaller than $d$.

4.2 Algorithms

Next we present ESCB, an algorithm for stochastic combinatorial bandits that relies on arm indexes, as in UCB1 [21] and KL-UCB [5]. We derive finite-time regret upper bounds for ESCB that hold even if we only assume $\|M\|_1 \le m$ for all $M \in \mathcal{M}$, instead of $\|M\|_1 = m$, so that arms may have different numbers of basic actions.

4.2.1 Indexes

ESCB relies on arm indexes. In general, an index of arm $M$ in round $n$, say $\xi_M(n)$, should be defined so that $\xi_M(n) \ge M^\top\theta$ with high probability. Then, as for UCB1 and KL-UCB, applying the principle of optimism in the face of uncertainty, a natural way to devise algorithms based on indexes is to select in each round the arm with the highest index. Under a given algorithm, at time $n$, we define $t_i(n)$ as the number of times basic action $i$ has been sampled. The empirical mean reward $\hat\theta_i(n)$ of action $i$ is then defined as the average of its observed rewards if $t_i(n) > 0$, and $0$ otherwise. We define the corresponding vectors $t(n) = (t_i(n))_{1 \le i \le d}$ and $\hat\theta(n) = (\hat\theta_i(n))_{1 \le i \le d}$.

The indexes we propose are functions of the round $n$ and of $(t(n), \hat\theta(n))$. Our first index for arm $M$, referred to as $b_M(n, t(n), \hat\theta(n))$ or $b_M(n)$ for short, is an extension of the KL-UCB index. Let $f(n) = \log n + 4m\log\log n$. $b_M(n)$ is the optimal value of the following optimization problem:

$$\max_{q \in [0,1]^d} \ M^\top q \quad \text{s.t.} \quad \sum_{i:\, M_i = 1} t_i(n)\, \mathrm{kl}\big(\hat\theta_i(n), q_i\big) \le f(n),$$

where we use the convention that $0\log 0 = 0\log\frac{0}{0} = 0$. As we show later, $b_M(n)$ may be computed efficiently using a line search procedure similar to that used to determine the KL-UCB index.
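For intuition, the $m = 1$ special case of this computation is the classical KL-UCB index: the largest $q \ge \hat\theta_i$ with $t_i\,\mathrm{kl}(\hat\theta_i, q) \le f(n)$, found by bisection since $q \mapsto \mathrm{kl}(\hat\theta_i, q)$ is increasing on $[\hat\theta_i, 1]$. A sketch with our own helper names (the clamping constant `eps` is an implementation detail, not from the paper):

```python
import math

def kl(u, v, eps=1e-12):
    """KL divergence between Bernoulli(u) and Bernoulli(v), clamped for stability."""
    u = min(max(u, eps), 1 - eps)
    v = min(max(v, eps), 1 - eps)
    return u * math.log(u / v) + (1 - u) * math.log((1 - u) / (1 - v))

def klucb_index(theta_hat, t, f_n, tol=1e-9):
    """Largest q in [theta_hat, 1] with t * kl(theta_hat, q) <= f_n, by bisection."""
    lo, hi = theta_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if t * kl(theta_hat, mid) <= f_n:
            lo = mid          # mid still feasible: move up
        else:
            hi = mid          # constraint violated: move down
    return lo
```

More samples (larger $t$) or a smaller exploration budget $f(n)$ shrink the index toward the empirical mean, exactly as expected of an upper confidence bound.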

Our second index, $c_M(n, t(n), \hat\theta(n))$ or $c_M(n)$ for short, is a generalization of the UCB1 and UCB-tuned indexes:

$$c_M(n) = M^\top \hat\theta(n) + \sqrt{\frac{f(n)}{2}\sum_{i=1}^d \frac{M_i}{t_i(n)}}.$$

Note that, in classical bandit problems with independent arms, i.e., when $m = 1$, $b_M(n)$ reduces to the KL-UCB index (which yields an asymptotically optimal algorithm) and $c_M(n)$ reduces to the UCB-tuned index. The next theorem provides generic properties of our indexes. An important consequence of these properties is that the expected number of times where $b_{M^\star}(n)$ or $c_{M^\star}(n)$ underestimate $\mu^\star$ is finite, as stated in the corollary below.

Theorem 3

(i) For all $n \ge 1$, $M \in \mathcal{M}$ and $(t(n), \hat\theta(n))$, we have $b_M(n) \le c_M(n)$.
(ii) There exists $C_m > 0$ depending on $m$ only such that, for all $M \in \mathcal{M}$ and $n \ge 2$: $\mathbb{P}\big[b_M(n) \le M^\top\theta\big] \le C_m\, n^{-1}(\log n)^{-2}$.

Corollary 2

$\mathbb{E}\big[\sum_{n \ge 1} \mathbb{1}\{b_{M^\star}(n) \le \mu^\star\}\big] < \infty$; by Theorem 3 (i), since $c_{M^\star}(n) \ge b_{M^\star}(n)$, the same holds for $c_{M^\star}(n)$.

Statement (i) in the above theorem is obtained by combining Pinsker's and Cauchy-Schwarz inequalities. The proof of statement (ii) is based on a concentration inequality for sums of empirical KL divergences proven in [22]. It enables one to control the fluctuations of multivariate empirical distributions for exponential families. It should also be observed that the indexes $b_M(n)$ and $c_M(n)$ can be extended in a straightforward manner to the case of continuous linear bandit problems, where the set of arms is the unit sphere and one wants to maximize the dot product between the arm and an unknown vector. $b_M(n)$ can also be extended to the case where reward distributions are not Bernoulli but lie in an exponential family (e.g. Gaussian, Exponential, Gamma, etc.), replacing kl by a suitably chosen divergence measure. A close look at $c_M(n)$ reveals that the indexes proposed in [10], [11], and [9] are too conservative to be optimal in our setting: there, the "confidence bonus" $\sqrt{\frac{f(n)}{2}\sum_i \frac{M_i}{t_i(n)}}$ was replaced by (at least) $\sum_i M_i \sqrt{\frac{f(n)}{2 t_i(n)}}$, which can be larger by a factor of up to $\sqrt{m}$. Note that [10], [11] assume that the various basic actions are arbitrarily correlated, while we assume independence among basic actions. When independence does not hold, [11] provides a problem instance where the regret is at least $\Omega\big(\frac{m d}{\Delta_{\min}}\log T\big)$. This does not contradict our regret upper bound (scaling as $O\big(\frac{\sqrt{m}\,d}{\Delta_{\min}}\log T\big)$), since we have added the independence assumption.
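For completeness, here is the short chain behind statement (i), assuming $b_M(n)$ is the optimal value of $\max_q M^\top q$ subject to $\sum_{i:M_i=1} t_i(n)\,\mathrm{kl}(\hat\theta_i(n), q_i) \le f(n)$ (our reading of the optimization problem defining the index). Pinsker's inequality $\mathrm{kl}(u,v) \ge 2(u-v)^2$ and Cauchy-Schwarz give, for any feasible $q$:

```latex
M^\top\big(q - \hat\theta(n)\big)
  = \sum_{i:\,M_i=1} \big(q_i - \hat\theta_i(n)\big)
  \le \sqrt{\sum_{i:\,M_i=1} \frac{1}{t_i(n)}}
      \sqrt{\sum_{i:\,M_i=1} t_i(n)\big(q_i - \hat\theta_i(n)\big)^2}
  \le \sqrt{\frac{f(n)}{2}\sum_{i=1}^d \frac{M_i}{t_i(n)}},
```

since $\sum_i t_i(n)(q_i - \hat\theta_i(n))^2 \le \frac{1}{2}\sum_i t_i(n)\,\mathrm{kl}(\hat\theta_i(n), q_i) \le \frac{f(n)}{2}$. Taking the maximum over feasible $q$ yields $b_M(n) \le c_M(n)$.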

4.2.2 Index computation

While the index $c_M(n)$ is explicit, $b_M(n)$ is defined as the solution to an optimization problem. We show that it may be computed by a simple line search. For $\lambda \ge 0$, $\gamma \in [0,1)$ and $t \in \mathbb{N}^\star$, define $g(\lambda, \gamma, t)$ as the unique $q \in [\gamma, 1)$ satisfying:

$$t\,\frac{\partial}{\partial q}\,\mathrm{kl}(\gamma, q) = t\,\frac{q - \gamma}{q(1-q)} = \lambda.$$

Fix $n$, $M$, $\hat\theta(n)$ and $t(n)$. Define $I = \{i : M_i = 1,\ \hat\theta_i(n) \ne 1\}$, and for $\lambda > 0$, define:

$$F(\lambda) = \sum_{i \in I} t_i(n)\, \mathrm{kl}\big(\hat\theta_i(n),\, g(\lambda, \hat\theta_i(n), t_i(n))\big).$$

Theorem 4

If $I = \emptyset$, then $b_M(n) = \|M\|_1$. Otherwise: (i) $\lambda \mapsto F(\lambda)$ is strictly increasing, and $\lim_{\lambda\to\infty} F(\lambda) = \infty$. (ii) Define $\lambda^\star$ as the unique solution to $F(\lambda) = f(n)$. Then $b_M(n) = \|M\|_1 - |I| + \sum_{i \in I} g\big(\lambda^\star, \hat\theta_i(n), t_i(n)\big)$.

Theorem 4 shows that $b_M(n)$ can be computed using a line search procedure such as bisection, as this computation amounts to solving the nonlinear equation $F(\lambda) = f(n)$, where $F$ is strictly increasing. The proof of Theorem 4 follows from the KKT conditions and the convexity of the KL divergence.

4.2.3 The ESCB Algorithm

The pseudo-code of ESCB is presented in Algorithm 1. We consider two variants of the algorithm based on the choice of the index $\xi_M(n)$: ESCB-1 when $\xi_M = b_M$ and ESCB-2 when $\xi_M = c_M$. In practice, ESCB-1 outperforms ESCB-2. Introducing ESCB-2 is however instrumental in the regret analysis of ESCB-1 (in view of Theorem 3 (i)). The following theorem provides a finite-time analysis of our ESCB algorithms. The proof of this theorem borrows some ideas from the proof of [11, Theorem 3].

  for $n \ge 1$ do
     Select arm $M(n) \in \arg\max_{M \in \mathcal{M}} \xi_M(n)$.
     Observe the rewards, and update $t_i(n)$ and $\hat\theta_i(n)$ for all $i$ with $M_i(n) = 1$.
  end for
Algorithm 1 ESCB
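A compact sketch of the ESCB-2 variant follows (our own instantiation: brute-force enumeration of $\mathcal{M}$, index $M^\top\hat\theta + \sqrt{(f(n)/2)\sum_i M_i/t_i(n)}$ with $f(n) = \log n + 4m\log\log n$; the exact constants and $f$ should be taken from Theorem 5, not from this sketch):

```python
import math
import numpy as np

def escb2(arms, theta, T, rng):
    """ESCB-2 sketch: arms is a list of 0/1 numpy vectors, theta the Bernoulli means."""
    d = len(theta)
    m = int(max(a.sum() for a in arms))
    t = np.zeros(d)                      # samples per basic action
    s = np.zeros(d)                      # cumulative observed reward per basic action
    chosen = []
    for n in range(1, T + 1):
        f_n = math.log(n) + 4 * m * math.log(max(math.log(max(n, 2)), 1.0))
        best, best_idx = None, -math.inf
        for M in arms:
            if (t[M == 1] == 0).any():
                idx = math.inf           # force initial exploration of unseen actions
            else:
                mu_hat = float(M @ (s / np.maximum(t, 1)))
                idx = mu_hat + math.sqrt(f_n / 2 * float((M / np.maximum(t, 1))[M == 1].sum()))
            if idx > best_idx:
                best, best_idx = M, idx
        X = rng.binomial(1, theta)       # semi-bandit feedback on the selected actions
        t += best
        s += best * X
        chosen.append(best)
    return chosen
```

On a toy instance with deterministic rewards, the suboptimal arms are abandoned once their shrinking confidence bonuses fall below the gap.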
Theorem 5

The regret under algorithm $\pi \in \{$ESCB-1, ESCB-2$\}$ satisfies, for all $T \ge 1$:

$$R^\pi(T) = O\!\left(\frac{\sqrt{m}\,d}{\Delta_{\min}}\, f(T)\right) + O(1),$$

where the $O(1)$ term does not depend on $\theta$, $d$ and $T$. As a consequence, $R^\pi(T) = O\big(\frac{\sqrt{m}\,d}{\Delta_{\min}}\log T\big)$ when $T \to \infty$.

ESCB with time horizon $T$ has a complexity of $O(|\mathcal{M}|\, T)$, as neither $b_M$ nor $c_M$ can be written as $M^\top y$ for some vector $y \in \mathbb{R}^d$. Assuming that the offline (static) combinatorial problem $\max_{M \in \mathcal{M}} M^\top y$ is solvable in $O(V(\mathcal{M}))$ time, the complexity of the CUCB algorithm in [10] and [11] after $T$ rounds is $O(V(\mathcal{M})\, T)$. Thus, if the offline problem is efficiently implementable, i.e., $V(\mathcal{M}) = O(\mathrm{poly}(d))$, CUCB is computationally efficient, whereas ESCB is not, since $\mathcal{M}$ may have exponentially many elements. In §2.5 of the supplement, we provide an extension of ESCB called Epoch-ESCB, which attains almost the same regret as ESCB while enjoying much better computational complexity.

5 Adversarial Combinatorial Bandits under Bandit Feedback

We now consider adversarial combinatorial bandits with bandit feedback. We start with the following observation:

$$\max_{M \in \mathcal{M}} M^\top X = \max_{\mu \in \mathrm{Co}(\mathcal{M})} \mu^\top X,$$

with $\mathrm{Co}(\mathcal{M})$ the convex hull of $\mathcal{M}$. We embed $\mathcal{M}$ in the $d$-dimensional simplex by dividing its elements by $m$. Let $\mathcal{P}$ be this scaled version of $\mathrm{Co}(\mathcal{M})$.

Inspired by OSMD [13, 18], we propose the CombEXP algorithm, in which the KL divergence is the Bregman divergence used to project onto $\mathcal{P}$. Projection using the KL divergence is addressed in [23]. We denote the KL divergence between distributions $q$ and $p$ in the simplex by $\mathrm{KL}(p, q) = \sum_i p_i \log\frac{p_i}{q_i}$. The projection of a distribution $q$ onto a closed convex set $\Xi$ of distributions is $p^\star = \arg\min_{p \in \Xi} \mathrm{KL}(p, q)$.

Let $\lambda$ be the smallest nonzero eigenvalue of $\mathbb{E}[MM^\top]$, where $M$ is uniformly distributed over $\mathcal{M}$. We define the exploration-inducing distribution $\mu^0 \in \mathcal{P}$ by $\mu_i^0 = \frac{1}{m}\,\mathbb{P}_{M \sim \mathrm{Unif}(\mathcal{M})}[M_i = 1]$ for all $i$; $m\mu^0$ is the distribution over basic actions induced by the uniform distribution over $\mathcal{M}$. The pseudo-code for CombEXP is shown in Algorithm 2. The KL projection in CombEXP ensures that the iterate $q_n$ lies in $\mathcal{P}$, so that there exists a distribution $p$ over $\mathcal{M}$ such that $m q_n = \sum_M p_M M$. This guarantees that the system of linear equations in the decomposition step is consistent. We propose to perform the projection step (the KL projection onto $\mathcal{P}$) using interior-point methods [24]. We provide a simpler method in §3.4 of the supplement. The decomposition step can be efficiently implemented using the algorithm of [25]. The following theorem provides a regret upper bound for CombEXP.

  Initialization: Set $q_0 = \mu^0$, and fix the mixing and learning rates $\gamma$ and $\eta$ as in the analysis of Theorem 6.
  for $n \ge 1$ do
     Mixing: Let $q'_{n-1} = (1-\gamma)\, q_{n-1} + \gamma\, \mu^0$.
     Decomposition: Select a distribution $p_{n-1}$ over $\mathcal{M}$ such that $\sum_M p_{n-1}(M)\, M = m\, q'_{n-1}$.
     Sampling: Select a random arm $M(n)$ with distribution $p_{n-1}$ and incur a reward $Y_n = \sum_i X_i(n) M_i(n)$.
     Estimation: Let $\Sigma_{n-1} = \mathbb{E}\big[M M^\top\big]$, where $M$ has law $p_{n-1}$. Set $\tilde X(n) = Y_n\, \Sigma_{n-1}^+ M(n)$, where $\Sigma_{n-1}^+$ is the pseudo-inverse of $\Sigma_{n-1}$.
     Update: Set $\tilde q_n(i) \propto q_{n-1}(i)\, \exp\big(\eta \tilde X_i(n)\big)$ for all $i$.
     Projection: Set $q_n$ to be the projection of $\tilde q_n$ onto the set $\mathcal{P}$ using the KL divergence.
  end for
Algorithm 2 CombEXP
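The Estimation step can be checked in isolation: with $\tilde X = Y\,\Sigma^+ M$ and $Y = M^\top x$, taking the exact expectation over $M \sim p$ gives $\mathbb{E}[\tilde X] = \Sigma^+\,\mathbb{E}[MM^\top]\, x = x$ whenever $\Sigma = \mathbb{E}[MM^\top]$ is invertible. A toy verification on 2-sets (the instance and variable names are our own):

```python
from itertools import combinations
import numpy as np

d, m = 4, 2
arms = [np.bincount(c, minlength=d).astype(float) for c in combinations(range(d), m)]
p = np.full(len(arms), 1.0 / len(arms))          # sampling distribution over arms
sigma = sum(pi * np.outer(M, M) for pi, M in zip(p, arms))
sigma_pinv = np.linalg.pinv(sigma)

x = np.array([0.9, 0.1, 0.5, 0.3])               # an arbitrary reward vector
# Exact expectation of the estimator X_tilde = (M^T x) * Sigma^+ M over M ~ p:
x_tilde_mean = sum(pi * (M @ x) * (sigma_pinv @ M) for pi, M in zip(p, arms))
assert np.allclose(x_tilde_mean, x, atol=1e-10)
```

This is the standard least-squares reward estimator used by ComBand-style algorithms; the mixing step exists precisely to keep $\Sigma_{n-1}$ well conditioned.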
Theorem 6

For all $T \ge 1$:

For most classes of $\mathcal{M}$, the quantities appearing in Theorem 6 scale so that CombEXP has a regret of $O\big(\sqrt{m^3 d T \log\frac{d}{m}}\big)$ [4], which is a factor $\sqrt{m\log\frac{d}{m}}$ off the lower bound (see Table 2).

It might not be possible to compute the projection step exactly; instead, this step can be solved up to some accuracy $\epsilon_n$ in round $n$: namely, we find a point $q_n \in \mathcal{P}$ whose KL divergence to $\tilde q_n$ exceeds that of the exact projection by at most $\epsilon_n$. Proposition 1 shows that, for appropriately vanishing accuracies $\epsilon_n$, the approximate projection gives the same regret scaling as when the projection is computed exactly. Theorem 7 gives the computational complexity of CombEXP with approximate projection. When $\mathcal{P}$ is described by polynomially (in $d$) many linear equalities/inequalities, CombEXP is efficiently implementable and its running time scales (almost) linearly in $T$. Proposition 1 and Theorem 7 easily extend to other OSMD-type algorithms and thus might be of independent interest.

Proposition 1

If the projection step of CombEXP is solved up to accuracy $\epsilon_n$ in round $n$, we have:

Theorem 7

Assume that $\mathcal{P}$ is defined by $c$ linear equalities and $s$ linear inequalities. If the projection step is solved up to accuracy $\epsilon_n$ in round $n$, then CombEXP has time complexity polynomial in $d$, $c$ and $s$, and (almost) linear in the time horizon $T$.

The time complexity of CombEXP can be reduced by exploiting the structure of $\mathcal{P}$ (see [24, page 545]). In particular, if the inequality constraints describing $\mathcal{P}$ are box constraints, the time complexity of CombEXP is further reduced.

The computational complexity of CombEXP is determined by the structure of $\mathcal{P}$, and CombEXP has time complexity polynomial in $d$ per round due to the efficiency of interior-point methods. In contrast, the computational complexity of ComBand depends on the complexity of sampling from $\mathcal{M}$, and ComBand may have a time complexity that is super-linear in $T$ (see [16, page 217]). For instance, consider the matching problem described in Section 2. We have $2N$ equality constraints and $N^2$ box constraints, so that the time complexity of CombEXP is polynomial in $N$ and (almost) linear in $T$. It is noted that using [26, Algorithm 1], the cost of the decomposition in this case is polynomial in $N$. On the other hand, ComBand has a much higher time complexity, as it requires approximating a permanent in each round, which is computationally expensive. Thus, CombEXP has much lower complexity than ComBand and achieves the same regret.

6 Conclusion

We have investigated stochastic and adversarial combinatorial bandits. For stochastic combinatorial bandits with semi-bandit feedback, we have provided a tight, problem-dependent regret lower bound that, in most cases, scales at least as $\frac{d-m}{\Delta_{\min}}\log T$. We proposed ESCB, an algorithm with $O\big(\frac{\sqrt{m}\,d}{\Delta_{\min}}\log T\big)$ regret. We plan to reduce the gap between this regret guarantee and the regret lower bound, as well as to investigate the performance of Epoch-ESCB. For adversarial combinatorial bandits with bandit feedback, we proposed the CombEXP algorithm. There is a gap between the regret of CombEXP and the known regret lower bound in this setting, and we plan to reduce it as much as possible.


A. Proutiere’s research is supported by the ERC FSA grant, and the SSF ICT-Psi project.


  • [1] Herbert Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169–177. Springer, 1985.
  • [2] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–222, 2012.
  • [3] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games, volume 1. Cambridge University Press Cambridge, 2006.
  • [4] Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
  • [5] Aurélien Garivier and Olivier Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proc. of COLT, 2011.
  • [6] Venkatachalam Anantharam, Pravin Varaiya, and Jean Walrand. Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays-part i: iid rewards. Automatic Control, IEEE Transactions on, 32(11):968–976, 1987.
  • [7] Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Brian Eriksson. Matroid bandits: Fast combinatorial optimization with learning. In Proc. of UAI, 2014.
  • [8] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Learning multiuser channel allocations in cognitive radio networks: A combinatorial multi-armed bandit formulation. In Proc. of IEEE DySpan, 2010.
  • [9] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Trans. on Networking, 20(5):1466–1478, 2012.
  • [10] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework and applications. In Proc. of ICML, 2013.
  • [11] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. In Proc. of AISTATS, 2015.
  • [12] Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Branislav Kveton. Efficient learning in large-scale combinatorial semi-bandits. In Proc. of ICML, 2015.
  • [13] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2013.
  • [14] András György, Tamás Linder, Gábor Lugosi, and György Ottucsák. The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8(10), 2007.
  • [15] Satyen Kale, Lev Reyzin, and Robert Schapire. Non-stochastic bandit slate problems. Advances in Neural Information Processing Systems, pages 1054–1062, 2010.
  • [16] Nir Ailon, Kohei Hatano, and Eiji Takimoto. Bandit online optimization over the permutahedron. In Algorithmic Learning Theory, pages 215–229. Springer, 2014.
  • [17] Gergely Neu. First-order regret bounds for combinatorial semi-bandits. In Proc. of COLT, 2015.
  • [18] Sébastien Bubeck, Nicolò Cesa-Bianchi, and Sham M. Kakade. Towards minimax policies for online linear optimization with bandit feedback. Proc. of COLT, 2012.
  • [19] Todd L. Graves and Tze Leung Lai. Asymptotically efficient adaptive choice of control laws in controlled markov chains. SIAM J. Control and Optimization, 35(3):715–743, 1997.
  • [20] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
  • [21] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
  • [22] Stefan Magureanu, Richard Combes, and Alexandre Proutiere. Lipschitz bandits: Regret lower bounds and optimal algorithms. Proc. of COLT, 2014.
  • [23] I. Csiszár and P.C. Shields. Information theory and statistics: A tutorial. Now Publishers Inc, 2004.
  • [24] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
  • [25] H. D. Sherali. A constructive proof of the representation theorem for polyhedral sets based on fundamental definitions. American Journal of Mathematical and Management Sciences, 7(3-4):253–270, 1987.
  • [26] David P. Helmbold and Manfred K. Warmuth. Learning permutations with exponential weights. Journal of Machine Learning Research, 10:1705–1736, 2009.
  • [27] Richard Combes and Alexandre Proutiere. Unimodal bandits: Regret lower bounds and optimal algorithms. arXiv:1405.5096, 2014.
  • [28] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
  • [29] J.W. Moon and L. Moser. On cliques in graphs. Israel Journal of Mathematics, 3:23–28, 1965.
  • [30] Alexander Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003.

Supplementary Materials and Proofs

Appendix A Stochastic Combinatorial Bandits: Regret Lower Bounds

a.1 Proof of Theorem 1

To derive regret lower bounds, we apply the techniques used by Graves and Lai [19] to investigate efficient adaptive decision rules in controlled Markov chains. First we give an overview of their general framework.

Consider a controlled Markov chain $(X_n)_{n \ge 0}$ on a finite state space $S$ with a control set $U$. The transition probabilities given control $u \in U$ are parameterized by $\theta$ taking values in a compact metric space $\Theta$: the probability to move from state $x$ to state $y$ given the control $u$ and the parameter $\theta$ is $p(x, y; u, \theta)$. The parameter $\theta$ is not known. The decision maker is provided with a finite set of stationary control laws $G = \{g_1, \dots, g_K\}$, where each control law $g_j$ is a mapping from $S$ to $U$: when control law $g_j$ is applied in state $x$, the applied control is $u = g_j(x)$. It is assumed that if the decision maker always selects the same control law $g$, the Markov chain is then irreducible with stationary distribution $\pi_\theta^g$. Now the reward obtained when applying control $u$ in state $x$ is denoted by $r(x, u)$, so that the expected reward achieved under control law $g$ is $\mu_\theta(g) = \sum_x r(x, g(x))\, \pi_\theta^g(x)$. There is an optimal control law given $\theta$ whose expected reward is denoted by $\mu_\theta^\star = \max_{g \in G} \mu_\theta(g)$. Now the objective of the decision maker is to sequentially select control laws so as to maximize the expected reward up to a given time horizon $T$. As for MAB problems, the performance of a decision scheme can be quantified through the notion of regret, which compares the expected reward to that obtained by always applying the optimal control law.

Proof. The parameter $\theta$ takes values in $[0,1]^d$. The Markov chain has values in $S = \{0,1\}^d$. The set of controls corresponds to the set of feasible actions $\mathcal{M}$, and the set of control laws is also $\mathcal{M}$. These laws are constant, in the sense that the control applied by control law $M$ does not depend on the state of the Markov chain, and corresponds to selecting arm $M$. The transition probabilities are given as follows: for all $x, y \in S$,

$$p(x, y; M, \theta) = \prod_{i=1}^d p_i(y_i; M, \theta),$$

where for all $i$, if $M_i = 1$, $p_i(y_i; M, \theta) = \theta_i^{y_i}(1-\theta_i)^{1-y_i}$, and if $M_i = 0$, $p_i(0; M, \theta) = 1$. Finally, the reward is defined by $r(x, M) = M^\top x$. Note that the state space of the Markov chain is here finite, and so, we do not need to impose any cost associated with switching control laws (see the discussion on page 718 in [19]).

We can now apply Theorem 1 in [19]. Note that the Kullback-Leibler number under arm $M$ is

$$I^M(\theta, \lambda) = \sum_{i:\, M_i = 1} \mathrm{kl}(\theta_i, \lambda_i) = M^\top \mathrm{kl}(\theta, \lambda).$$

From [19, Theorem 1], we conclude that for any uniformly good rule $\pi$, $\liminf_{T\to\infty} \frac{R^\pi(T)}{\log T} \ge c(\theta)$,

where $c(\theta)$ is the optimal value of the following optimization problem:


The result is obtained by observing that the constraints may be restricted to the set $B(\theta)$ of confusing parameters without changing the optimal value.

a.2 Proof of Theorem 2

The proof proceeds in three steps. In the subsequent analysis, given an optimization problem P, we use val(P) to denote its optimal value.

Step 1.

In this step, we first introduce an equivalent formulation of problem (3) above by simplifying its constraints. We show that constraint (4) is equivalent to:

Observe that:

Fix $\lambda \in B(\theta)$. In view of the definition of $B(\theta)$, we can find an arm $M$ such that $\mu_M(\lambda) > \mu_{M^\star}(\lambda)$. Thus, for the r.h.s. of the corresponding constraint in (4), we get:

and therefore problem (3) can be equivalently written as:


Next, we formulate an LP whose value gives a lower bound for . Define with

Clearly , and therefore:

Then, we can write:


For any introduce: . Now we form P1 as follows:

P1: (9)

Observe that $\mathrm{val(P1)} \le \mathrm{val}(7)$, since the feasible set of problem (7) is contained in that of P1.

Step 2.

In this step, we formulate an LP to give a lower bound for val(P1). To this end, for any suboptimal basic action , we define . Further, we let . Next, we represent the objective of P1 in terms of , and give a lower bound for it as follows: