A Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences


Odalric-Ambrym Maillard
INRIA Lille Nord-Europe, France
odalric.maillard@inria.fr

Rémi Munos
INRIA Lille Nord-Europe, France
remi.munos@inria.fr

Gilles Stoltz
Ecole normale supérieure, Paris & HEC Paris, France
gilles.stoltz@ens.fr
(CNRS – Ecole normale supérieure, Paris – INRIA, within the project-team CLASSIC)
Abstract

We consider a Kullback-Leibler-based algorithm for the stochastic multi-armed bandit problem in the case of distributions with finite supports (not necessarily known beforehand), whose asymptotic regret matches the lower bound of Burnetas and Katehakis (1996). Our contribution is to provide a finite-time analysis of this algorithm; we get bounds whose main terms are smaller than the ones of previously known algorithms with finite-time analyses (like UCB-type algorithms).

1 Introduction

The stochastic multi-armed bandit problem, introduced by Robbins (1952), formalizes the problem of decision-making under uncertainty, and illustrates the fundamental tradeoff that appears between exploration, i.e., making decisions in order to improve the knowledge of the environment, and exploitation, i.e., maximizing the payoff.

Setting. In this paper, we consider a multi-armed bandit problem with finitely many arms indexed by \mathcal{A}, for which each arm a\in\mathcal{A} is associated with an unknown and fixed probability distribution \nu_{a} over [0,1]. The game is sequential and goes as follows: at each round t\geqslant 1, the player first picks an arm A_{t}\in\mathcal{A} and then receives a stochastic payoff Y_{t} drawn at random according to \nu_{A_{t}}. He only gets to see the payoff Y_{t}.

For each arm a\in\mathcal{A}, we denote by \mu_{a} the expectation of its associated distribution \nu_{a} and we let a^{\star} be any optimal arm, i.e., a^{\star}\in\mathop{\mathrm{argmax}}_{a\in\mathcal{A}}\,\mu_{a}\,.
We write \mu^{\star} as a short-hand notation for the largest expectation \mu_{a^{\star}} and denote the gap of the expected payoff \mu_{a} of an arm a\in\mathcal{A} to \mu^{\star} as \Delta_{a}=\mu^{\star}-\mu_{a}. In addition, the number of times each arm a\in\mathcal{A} is pulled between the rounds 1 and T is referred to as N_{T}(a),

N_{T}(a)\stackrel{\mathrm{def}}{=}\sum_{t=1}^{T}\mathbb{I}_{\{A_{t}=a\}}\,.

The quality of a strategy will be evaluated through the standard notion of expected regret, which we recall now. The expected regret (or simply regret) at round T\geqslant 1 is defined as

R_{T}\stackrel{\mathrm{def}}{=}\mathbb{E}\!\left[T\mu^{\star}-\sum_{t=1}^{T}Y_{t}\right]=\mathbb{E}\!\left[T\mu^{\star}-\sum_{t=1}^{T}\mu_{A_{t}}\right]=\sum_{a\in\mathcal{A}}\Delta_{a}\,\mathbb{E}\bigl[N_{T}(a)\bigr]\,, \qquad (1)

where we used the tower rule for the first equality. Note that the expectation is with respect to the random draws of the Y_{t} according to the \nu_{A_{t}} and also to the possible auxiliary randomizations that the decision-making strategy is resorting to.

The regret measures the cumulative loss resulting from pulling sub-optimal arms, and thus quantifies the amount of exploration required by an algorithm in order to find a best arm, since, as (1) indicates, the regret scales with the expected number of pulls of sub-optimal arms. Since the formulation of the problem by Robbins (1952) the regret has been a popular criterion for assessing the quality of a strategy.
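
As an aside, the interaction protocol and the identity (1) can be made concrete with a small simulation sketch (in Python; the Bernoulli arms, the uniform placeholder strategy, and all names are hypothetical illustrations, not part of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    means = {"arm1": 0.5, "arm2": 0.7, "arm3": 0.4}   # hypothetical Bernoulli arms nu_a
    T = 10_000

    pulls = {a: 0 for a in means}                     # counters N_T(a)
    total_payoff = 0.0
    for t in range(1, T + 1):
        # placeholder strategy: pull an arm uniformly at random; the strategies
        # of Figures 1 and 2 replace this line with an index-based choice
        a = str(rng.choice(list(means)))
        y = float(rng.random() < means[a])            # Y_t drawn from nu_{A_t}
        pulls[a] += 1
        total_payoff += y

    mu_star = max(means.values())
    # the two expressions below agree in expectation, as in (1)
    regret_via_gaps = sum((mu_star - mu) * pulls[a] for a, mu in means.items())
    regret_via_payoffs = T * mu_star - total_payoff
    print(regret_via_gaps, regret_via_payoffs)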

Known lower bounds. Lai and Robbins (1985) showed that, for some (one-dimensional) parametric classes of distributions, any consistent strategy (i.e., any strategy not pulling sub-optimal arms more than a sub-polynomial number of times) must nonetheless, asymptotically and in expectation, pull any sub-optimal arm a at least

\mathbb{E}\bigl[N_{T}(a)\bigr]\geqslant\biggl(\frac{1}{\mathcal{K}(\nu_{a},\nu^{\star})}+o(1)\biggr)\log(T)

times, where \mathcal{K}(\nu_{a},\nu^{\star}) is the Kullback-Leibler (KL) divergence between \nu_{a} and \nu^{\star} (the distribution of an optimal arm); it measures how close the distributions \nu_{a} and \nu^{\star} are from an information-theoretic perspective.

Later, Burnetas and Katehakis (1996) extended this result to some classes of multi-dimensional parametric distributions and proved the following generic lower bound: for a given family \mathcal{P} of possible distributions over the arms,

\mathbb{E}\bigl[N_{T}(a)\bigr]\geqslant\biggl(\frac{1}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}+o(1)\biggr)\log(T)\,,\qquad\mbox{where}\quad\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})\stackrel{\mathrm{def}}{=}\inf_{\nu\in\mathcal{P}:\,E(\nu)>\mu^{\star}}\mathcal{K}(\nu_{a},\nu)\,,

with the notation E(\nu) for the expectation of a distribution \nu. The intuition behind this improvement is related to the goal pursued in bandit problems: it is not to detect whether a distribution is optimal or not (for that goal, the relevant quantity would be \mathcal{K}(\nu_{a},\nu^{\star})), but rather to achieve the optimal rate of reward \mu^{\star} (i.e., one needs to measure how close \nu_{a} is to any distribution \nu\in\mathcal{P} whose expectation is at least \mu^{\star}).

Known upper bounds. Lai and Robbins (1985) provided an algorithm based on the KL divergence, which has been extended by Burnetas and Katehakis (1996) to an algorithm based on \mathcal{K}_{\inf}; it is asymptotically optimal since the number of pulls of any sub-optimal arm a satisfies

\mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant\biggl(\frac{1}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}+o(1)\biggr)\log(T)\,.

This result holds for finite-dimensional parametric distributions under some assumptions, e.g., the distributions having a finite and known support or belonging to a set of Gaussian distributions with known variance. Recently, Honda and Takemura (2010a) extended this asymptotic result to the case of distributions \mathcal{P} with support in [0,1] and such that \mu^{\star}<1; the key ingredient in this case is that \mathcal{K}_{\inf}(\nu_{a},\mu^{\star}) is equal to \mathcal{K}_{\min}(\nu_{a},\mu^{\star})\stackrel{\mathrm{def}}{=}\inf_{\nu\in\mathcal{P}:\,E(\nu)\geqslant\mu^{\star}}\mathcal{K}(\nu_{a},\nu).

Motivation. All the results mentioned above provide asymptotic bounds only. However, any algorithm is only used for a finite number of rounds and it is thus essential to provide a finite-time analysis of its performance. Auer et al. (2002) initiated this work by providing an algorithm (UCB1) based on a Chernoff-Hoeffding bound; it pulls any sub-optimal arm, up to any time T, at most (8/\Delta_{a}^{2})\log T+1+\pi^{2}/3 times, in expectation. Although this yields a logarithmic regret, the multiplicative constant depends on the gap \Delta_{a}^{2}=(\mu^{\star}-\mu_{a})^{2} but not on \mathcal{K}_{\inf}(\nu_{a},\mu^{\star}), which can be seen to be larger than \Delta_{a}^{2}/2 by Pinsker’s inequality; that is, this non-asymptotic bound does not have the right dependence on the distributions. (How much is gained of course depends on the specific families of distributions at hand.) Audibert et al. (2009) provided an algorithm (UCB-V) that takes into account the empirical variance of the arms and exhibited a strategy such that \mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant 10(\sigma_{a}^{2}/\Delta_{a}^{2}+2/\Delta_{a})\log T for any time T (where \sigma_{a}^{2} is the variance of arm a); it improves over UCB1 in case of arms with small variance. Other variants include the MOSS algorithm by Audibert and Bubeck (2010) and Improved UCB by Auer and Ortner (2010).

However, all these algorithms only rely on one moment (for UCB1) or two moments (for UCB-V) of the empirical distributions of the obtained rewards; they do not fully exploit the empirical distributions. As a consequence, the resulting bounds are expressed in terms of the means \mu_{a} and variances \sigma_{a}^{2} of the sub-optimal arms and not in terms of the quantity \mathcal{K}_{\inf}(\nu_{a},\mu^{\star}) appearing in the lower bounds. The numerical experiments reported in Filippi (2010) confirm that these algorithms are less efficient than those based on \mathcal{K}_{\inf}.

Our contribution. In this paper we analyze a \mathcal{K}_{\inf}-based algorithm inspired by the ones studied in Lai and Robbins (1985), Burnetas and Katehakis (1996), and Filippi (2010); it indeed takes into account the full empirical distribution of the observed rewards. The analysis is performed (with explicit bounds) in the case of Bernoulli distributions over the arms. Less explicit but still finite-time bounds are obtained in the case of finitely supported distributions (whose supports do not need to be known in advance). Finally, we pave the way for handling the case of general finite-dimensional parametric distributions. These results improve on the ones by Burnetas and Katehakis (1996) and Honda and Takemura (2010a), since we obtain finite-time bounds (which imply their asymptotic results); and on Auer et al. (2002) and Audibert et al. (2009), since the main term of our bounds scales with 1/\mathcal{K}_{\inf}(\nu_{a},\mu^{\star}). The proposed \mathcal{K}_{\inf}-based algorithm is also more natural and more appealing than the one presented in Honda and Takemura (2010a).

Recent related works. Since our initial submission of the present paper, we became aware of two papers that tackle problems similar to ours. First, a revised version of Honda and Takemura (2010b, personal communication) obtains finite-time regret bounds (with prohibitively large constants) for a randomized (less natural) strategy in the case of distributions with finite supports (also not known in advance). Second, another paper at this conference (Garivier and Cappé, 2011) also deals with the \mathcal{K}–strategy, which we study in Theorem 3.2; they however do not obtain second-order terms in closed forms as we do, and they later extend their strategy to exponential families of distributions (while we extend our strategy to the case of distributions with finite supports). On the other hand, they show how the \mathcal{K}–strategy can be extended in a straightforward manner to guarantee bounds with respect to the family of all bounded distributions on a known interval; these bounds are suboptimal but improve on the ones of UCB-type algorithms.

2 Definitions and tools

Let \mathcal{X} be a Polish space; in the next sections, we will consider \mathcal{X}=\{0,1\} or \mathcal{X}=[0,1]. We denote by \mathcal{P}(\mathcal{X}) the set of probability distributions over \mathcal{X} and equip \mathcal{P}(\mathcal{X}) with the distance d induced by the norm \left\Arrowvert\,\cdot\,\right\Arrowvert defined by \left\Arrowvert\nu\right\Arrowvert=\sup_{f\in\mathcal{L}}\,\bigl|\int_{\mathcal{X}}f\,\mbox{d}\nu\bigr|, where \mathcal{L} is the set of Lipschitz functions over \mathcal{X}, taking values in [-1,1] and with Lipschitz constant smaller than 1.

Kullback-Leibler divergence:

For two elements \nu,\,\kappa\in\mathcal{P}(\mathcal{X}), we write \nu\ll\kappa when \nu is absolutely continuous with respect to \kappa and denote in this case by \mbox{d}\nu/\mbox{d}\kappa the density of \nu with respect to \kappa. We recall that the Kullback-Leibler divergence between \nu and \kappa is defined as

\mathcal{K}(\nu,\kappa)=\int_{\mathcal{X}}\frac{\mbox{d}\nu}{\mbox{d}\kappa}\log\frac{\mbox{d}\nu}{\mbox{d}\kappa}\,\mbox{d}\kappa\quad\mbox{if}\ \nu\ll\kappa;\qquad\mbox{and}\quad\mathcal{K}(\nu,\kappa)=+\infty\quad\mbox{otherwise.} \qquad (2)

Empirical distribution:

We consider a sequence X_{1},X_{2},\ldots of random variables taking values in \mathcal{X}, independent and identically distributed according to a distribution \nu. For all integers t\geqslant 1, we denote the empirical distribution corresponding to the first t elements of the sequence by

\widehat{\nu}_{t}=\frac{1}{t}\sum_{s=1}^{t}\delta_{X_{s}}\,.

Non-asymptotic Sanov’s Lemma:

The following lemma follows from a straightforward adaptation of Dinwoodie (1992, Theorem 2.1 and comments on page 372). Details of the proof are provided in the appendix.

Lemma.

Let \mathcal{C} be an open convex subset of \mathcal{P}(\mathcal{X}) such that \Lambda(\mathcal{C})=\inf_{\kappa\in\mathcal{C}}\,\mathcal{K}(\kappa,\nu)<\infty\,.
Then, for all t\geqslant 1, one has \mathbb{P}_{\nu}\bigl\{\widehat{\nu}_{t}\in\overline{\mathcal{C}}\bigr\}\leqslant e^{-t\Lambda(\overline{\mathcal{C}})}, where \overline{\mathcal{C}} is the closure of \mathcal{C}.

This lemma should be thought of as a deviation inequality. The empirical distribution converges (in distribution) to \nu. Now, if (and only if) \nu is not in the closure of \mathcal{C}, then \Lambda(\mathcal{C})>0 and the lemma indicates how unlikely it is that \widehat{\nu}_{t} is in this set \overline{\mathcal{C}} not containing the limit \nu. The probability of interest decreases at a geometric rate, which depends on \Lambda(\mathcal{C}).
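
As an illustration (a standard instance, not stated in the paper): take \mathcal{X}=\{0,1\}, \nu the Bernoulli distribution \beta(p) with p\in(0,1) (the notation \beta(\cdot) is introduced in Section 3.1), and \mathcal{C}=\bigl\{\beta(q):\ q>p+\delta\bigr\} for some \delta>0 with p+\delta<1. This is an open convex subset of \mathcal{P}(\{0,1\}), one has \Lambda(\overline{\mathcal{C}})=\mathcal{K}\bigl(\beta(p+\delta),\beta(p)\bigr), and the lemma then yields the Chernoff-type bound

\mathbb{P}_{\nu}\bigl\{\widehat{p}_{t}\geqslant p+\delta\bigr\}\leqslant e^{-t\,\mathcal{K}(\beta(p+\delta),\,\beta(p))}\,,

where \widehat{p}_{t} denotes the empirical mean of the first t observations.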

3 Finite-time analysis for Bernoulli distributions

In this section, we start with the case of Bernoulli distributions. Although this case is a special case of the general results of Section 4, we provide here a complete and self-contained analysis of this case, where, in addition, we are able to provide closed forms for all the terms in the regret bound. Note however that the resulting bound is slightly worse than what could be derived from the general case (for which more sophisticated tools are used). This result is mainly provided as a warm-up.

3.1 Reminder of some useful results for Bernoulli distributions

We denote by \mathcal{B} the subset of \mathcal{P}\bigl{(}[0,1]\bigr{)} formed by the Bernoulli distributions; it corresponds to \mathcal{B}=\mathcal{P}\bigl{(}\{0,1\}\bigr{)}. A generic element of \mathcal{B} will be denoted by \beta(p), where p\in[0,1] is the probability mass put on 1. We consider a sequence X_{1},X_{2},\ldots of independent and identically distributed random variables, with common distribution \beta(p); for the sake of clarity we will index, in this subsection only, all probabilities and expectations with p.

For all integers t\geqslant 1, we denote by \widehat{p}_{t}=\frac{1}{t}\sum_{s=1}^{t}X_{s} the empirical average of the first t elements of the sequence.

The lemma below follows from an adaptation of Garivier and Leonardi (2010, Proposition 2). The details of the adaptation (and simplification) can be found in the appendix.

Lemma.

For all p\in[0,1], all \varepsilon>1, and all t\geqslant 1,

\mathbb{P}_{p}\!\left(\bigcup_{s=1}^{t}\biggl\{s\,\mathcal{K}\Bigl(\beta\bigl(\widehat{p}_{s}\bigr),\,\beta(p)\Bigr)\geqslant\varepsilon\biggr\}\right)\leqslant 2e\,\bigl\lceil\varepsilon\log t\bigr\rceil\,e^{-\varepsilon}\,.

In particular, for all random variables N_{t} taking values in \{1,\ldots,t\},

\mathbb{P}_{p}\biggl\{N_{t}\,\mathcal{K}\Bigl(\beta\bigl(\widehat{p}_{N_{t}}\bigr),\,\beta(p)\Bigr)\geqslant\varepsilon\biggr\}\leqslant 2e\,\bigl\lceil\varepsilon\log t\bigr\rceil\,e^{-\varepsilon}\,.

Another immediate fact about Bernoulli distributions is that for all p\in(0,1), the mappings \mathcal{K}_{p,\,\cdot\,}:q\in(0,1)\mapsto\mathcal{K}\bigl(\beta(p),\beta(q)\bigr) and \mathcal{K}_{\,\cdot\,,p}:q\in[0,1]\mapsto\mathcal{K}\bigl(\beta(q),\beta(p)\bigr) are continuous and take finite values. In particular, we have, for instance, that for all \varepsilon>0 and p\in(0,1), the set

\Bigl\{q\in[0,1]:\ \ \mathcal{K}\bigl(\beta(p),\beta(q)\bigr)\leqslant\varepsilon\Bigr\}

is a closed interval containing p. This property still holds when p\in\{0,1\}: for p=0 the interval is \bigl[0,\,1-e^{-\varepsilon}\bigr], and for p=1 it is \bigl[e^{-\varepsilon},\,1\bigr].
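
For reference, the divergence (2) takes, for Bernoulli distributions, the explicit form (a standard computation, under the convention 0\log 0=0)

\mathcal{K}\bigl(\beta(p),\beta(q)\bigr)=p\log\frac{p}{q}+(1-p)\log\frac{1-p}{1-q}\,,

which is finite as soon as q\in(0,1), and also when q\in\{0,1\} with p=q; this makes the continuity and finiteness claims above transparent.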

3.2 Strategy and analysis

We consider the so-called \mathcal{K}–strategy of Figure 1, which was already considered in the literature, see Burnetas and Katehakis (1996), Filippi (2010). The numerical computation of the quantities B^{+}_{a,t} is straightforward (by convexity of \mathcal{K} in its second argument, by using iterative methods) and is detailed therein.

  Parameters: A non-decreasing function f:\mathbb{N}\to\mathbb{R}
Initialization: Pull each arm of \mathcal{A} once
For rounds t+1, where t\geqslant|\mathcal{A}|,

  • compute for each arm a\in\mathcal{A} the quantity

    B^{+}_{a,t}=\max\,\biggl\{q\in[0,1]:\ \ N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(q)\Bigr)\leqslant f(t)\biggr\}\,,

    where \widehat{\mu}_{a,N_{t}(a)}=\frac{1}{N_{t}(a)}\sum_{s\leqslant t:\,A_{s}=a}Y_{s}\,;

  • in case of a tie, pick an arm with largest value of \widehat{\mu}_{a,N_{t}(a)};

  • pull any arm A_{t+1}\in\mathop{\mathrm{argmax}}_{a\in\mathcal{A}}\,B^{+}_{a,t}\,.

 

Figure 1: The \mathcal{K}–strategy.
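
To make the numerical computation concrete, here is a minimal sketch (in Python, with hypothetical names; an illustration rather than the authors’ implementation) of how B^{+}_{a,t} can be computed by bisection, using the fact that q\mapsto\mathcal{K}\bigl(\beta(\widehat{\mu}),\beta(q)\bigr) is convex and non-decreasing on [\widehat{\mu},1]:

    import math

    def kl_bernoulli(p, q, eps=1e-15):
        """K(beta(p), beta(q)), with 0 log 0 = 0 and q clipped away from {0, 1}."""
        q = min(max(q, eps), 1.0 - eps)
        res = 0.0
        if p > 0.0:
            res += p * math.log(p / q)
        if p < 1.0:
            res += (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
        return res

    def b_plus(mu_hat, n_pulls, threshold, tol=1e-9):
        """B^+ = max{ q in [0,1] : n_pulls * K(beta(mu_hat), beta(q)) <= threshold },
        where threshold stands for f(t); found by bisection on [mu_hat, 1]."""
        if n_pulls * kl_bernoulli(mu_hat, 1.0) <= threshold:
            return 1.0
        lo, hi = mu_hat, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if n_pulls * kl_bernoulli(mu_hat, mid) <= threshold:
                lo = mid
            else:
                hi = mid
        return lo

The strategy of Figure 1 then pulls, at round t+1, an arm maximizing b_plus(\widehat{\mu}_{a,N_{t}(a)}, N_{t}(a), f(t)).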

Before proceeding, we denote by \sigma^{2}_{a}=\mu_{a}(1-\mu_{a}) the variance of each arm a\in\mathcal{A} (and take the short-hand notation \sigma^{\star,2} for the variance of an optimal arm).

Theorem.

When \mu^{\star}\in(0,1), for all non-decreasing functions f:\mathbb{N}\to\mathbb{R}_{+} such that f(1)\geqslant 1, the expected regret R_{T} of the strategy of Figure 1 is upper bounded by the infimum, as the constants (c_{a})_{a\in\mathcal{A}} range over (0,+\infty), of the quantities

\sum_{a\in\mathcal{A}}\Delta_{a}\Biggl(\frac{(1+c_{a})\,f(T)}{\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)}+4e\sum_{t=|\mathcal{A}|}^{T-1}\bigl\lceil f(t)\log t\bigr\rceil\,e^{-f(t)}+\frac{(1+c_{a})^{2}}{8\,c_{a}^{2}\Delta_{a}^{2}\,\min\bigl\{\sigma_{a}^{4},\,\sigma^{\star,4}\bigr\}}\,\mathbb{I}_{\{\mu_{a}\in(0,1)\}}+3\Biggr)\,.

For \mu^{\star}=0, its regret is null. For \mu^{\star}=1, it satisfies R_{T}\leqslant 2\bigl{(}|\mathcal{A}|-1\bigr{)}.

A possible choice for the function f is f(t)=\log\bigl((et)\log^{3}(et)\bigr), which is non-decreasing, satisfies f(1)\geqslant 1, and is such that the second term in the sum above is bounded (by a basic result about so-called Bertrand’s series). Now, as the constants c_{a} in the bound are parameters of the analysis (and not of the strategy), they can be optimized. For instance, with the choice of f(t) mentioned above, taking each c_{a} proportional to (\log T)^{-1/3} (up to a multiplicative constant that depends on the distributions \nu_{a}) entails the regret bound

\sum_{a\in\mathcal{A}}\Delta_{a}\,\frac{\log T}{\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)}+\varepsilon_{T}\,,

where it is easy to give an explicit closed-form expression for \varepsilon_{T}; in this conference version, we only indicate that \varepsilon_{T} is of order (\log T)^{2/3}, but we do not know whether the order of magnitude of this second-order term is optimal.

Proof.

We first deal with the case where \mu^{\star}\not\in\{0,1\} and introduce an additional notation. In view of the remark at the end of Section 3.1, for all arms a and rounds t, we let B^{-}_{a,t} be the element in [0,1] such that

\biggl\{q\in[0,1]:\ \ N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(q)\Bigr)\leqslant f(t)\biggr\}=\bigl[B^{-}_{a,t},\,B^{+}_{a,t}\bigr]\,. \qquad (3)

As (1) indicates, it suffices to bound N_{T}(a) for all suboptimal arms a, i.e., for all arms such that \mu_{a}<\mu^{\star}. We will assume in addition that \mu_{a}>0 (and we also have \mu_{a}\leqslant\mu^{\star}<1); the case where \mu_{a}=0 will be handled separately.
Step 1: A decomposition of the events of interest. For t\geqslant|\mathcal{A}|, when A_{t+1}=a, we have in particular, by definition of the strategy, that B^{+}_{a,t}\geqslant B^{+}_{a^{\star},t}. On the event

\bigl\{A_{t+1}=a\bigr\}\,\cap\,\Bigl\{\mu^{\star}\in\bigl[B^{-}_{a^{\star},t},\,B^{+}_{a^{\star},t}\bigr]\Bigr\}\,\cap\,\Bigl\{\mu_{a}\in\bigl[B^{-}_{a,t},\,B^{+}_{a,t}\bigr]\Bigr\}\,,

we therefore have, on the one hand, \mu^{\star}\leqslant B^{+}_{a^{\star},t}\leqslant B^{+}_{a,t} and on the other hand, B^{-}_{a,t}\leqslant\mu_{a}\leqslant\mu^{\star}, that is, the considered event is included in \Bigl{\{}\mu^{\star}\in\bigl{[}B^{-}_{a,t},\,\,B^{+}_{a,t}\bigr{]}\Bigr{\}}. We thus proved that

\bigl\{A_{t+1}=a\bigr\}\subseteq\Bigl\{\mu^{\star}\not\in\bigl[B^{-}_{a^{\star},t},\,B^{+}_{a^{\star},t}\bigr]\Bigr\}\,\cup\,\Bigl\{\mu_{a}\not\in\bigl[B^{-}_{a,t},\,B^{+}_{a,t}\bigr]\Bigr\}\,\cup\,\Bigl\{\mu^{\star}\in\bigl[B^{-}_{a,t},\,B^{+}_{a,t}\bigr]\Bigr\}\,.

Going back to the definition (3), we get in particular the inclusion

\bigl\{A_{t+1}=a\bigr\}\ \subseteq\ \biggl\{N_{t}(a^{\star})\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a^{\star},N_{t}(a^{\star})}\bigr),\,\beta(\mu^{\star})\Bigr)>f(t)\biggr\}
\,\cup\,\biggl\{N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(\mu_{a})\Bigr)>f(t)\biggr\}
\,\cup\,\Biggl(\biggl\{N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant f(t)\biggr\}\,\cap\,\bigl\{A_{t+1}=a\bigr\}\Biggr)\,.

Step 2: Bounding the probabilities of two elements of the decomposition. We consider the filtration (\mathcal{F}_{t}), where for all t\geqslant 1, the \sigma–algebra \mathcal{F}_{t} is generated by A_{1},Y_{1}, \ldots,  A_{t},Y_{t}. In particular, A_{t+1} and thus all N_{t+1}(a) are \mathcal{F}_{t}–measurable. We denote by \tau_{a,1} the deterministic round at which a was pulled for the first time and by \tau_{a,2},\,\tau_{a,3},\,\ldots the rounds t\geqslant|\mathcal{A}|+1 at which a was then played; since for all k\geqslant 2,

\tau_{a,k}=\min\bigl{\{}t\geqslant|\mathcal{A}|+1:\ \ N_{t}(a)=k\bigr{\}}\,,

we see that \bigl{\{}\tau_{a,k}=t\bigr{\}} is \mathcal{F}_{t-1}–measurable. Therefore, for each k\geqslant 1, the random variable \tau_{a,k} is a (predictable) stopping time. Hence, by a well-known fact in probability theory (see, e.g., Chow and Teicher 1988, Section 5.3), the random variables \widetilde{X}_{a,k}=Y_{\tau_{a,k}}, where k=1,2,\ldots are independent and identically distributed according to \nu_{a}. Since on \bigl{\{}N_{t}(a)=k\bigr{\}}, we have the rewriting

\widehat{\mu}_{a,N_{t}(a)}=\widetilde{\mu}_{a,k}\,,\qquad\mbox{where}\qquad\widetilde{\mu}_{a,k}=\frac{1}{k}\sum_{j=1}^{k}\widetilde{X}_{a,j}\,,

and since for t\geqslant|\mathcal{A}|+1, one has N_{t}(a)\geqslant 1 with probability 1, we can apply the second statement in Lemma 3.1 and get, for all t\geqslant|\mathcal{A}|+1,

\mathbb{P}\biggl\{N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(\mu_{a})\Bigr)>f(t)\biggr\}\leqslant 2e\,\bigl\lceil f(t)\log t\bigr\rceil\,e^{-f(t)}\,.

A similar argument shows that for all t\geqslant|\mathcal{A}|+1,

\mathbb{P}\biggl\{N_{t}(a^{\star})\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a^{\star},N_{t}(a^{\star})}\bigr),\,\beta(\mu^{\star})\Bigr)>f(t)\biggr\}\leqslant 2e\,\bigl\lceil f(t)\log t\bigr\rceil\,e^{-f(t)}\,.

Step 3: Rewriting the remaining terms. We therefore proved that

\mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant 1+4e\sum_{t=|\mathcal{A}|}^{T-1}\bigl\lceil f(t)\log t\bigr\rceil\,e^{-f(t)}+\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Biggl(\biggl\{N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant f(t)\biggr\}\,\cap\,\bigl\{A_{t+1}=a\bigr\}\Biggr)

and deal now with the last sum. Since f is non decreasing, it is bounded by

\sum_{t=|\mathcal{A}|}^{T-1}\,\mathbb{P}\Bigl(K_{t}\,\cap\,\bigl\{A_{t+1}=a\bigr\}\Bigr)\qquad\mbox{where}\qquad K_{t}=\biggl\{N_{t}(a)\,\mathcal{K}\Bigl(\beta\bigl(\widehat{\mu}_{a,N_{t}(a)}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant f(T)\biggr\}\,.

Now, \sum_{t=|\mathcal{A}|}^{T-1}\,\mathbb{P}\Bigl(K_{t}\,\cap\,\bigl\{A_{t+1}=a\bigr\}\Bigr)=\mathbb{E}\!\left[\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{I}_{\{A_{t+1}=a\}}\,\mathbb{I}_{K_{t}}\right]=\mathbb{E}\!\left[\sum_{k\geqslant 2}\mathbb{I}_{\{\tau_{a,k}\leqslant T\}}\,\mathbb{I}_{K_{\tau_{a,k}-1}}\right].
We note that, since N_{\tau_{a,k}-1}(a)=k-1, we have that

K_{\tau_{a,k}-1}=\biggl\{(k-1)\,\mathcal{K}\Bigl(\beta\bigl(\widetilde{\mu}_{a,k-1}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant f(T)\biggr\}\,.

All in all, since \tau_{a,k}\leqslant T implies k\leqslant T-|\mathcal{A}|+1 (as each arm is played at least once during the first |\mathcal{A}| rounds), we have

\mathbb{E}\!\left[\sum_{k\geqslant 2}\mathbb{I}_{\{\tau_{a,k}\leqslant T\}}\,\mathbb{I}_{K_{\tau_{a,k}-1}}\right]\leqslant\mathbb{E}\!\left[\sum_{k=2}^{T-|\mathcal{A}|+1}\mathbb{I}_{K_{\tau_{a,k}-1}}\right]=\sum_{k=2}^{T-|\mathcal{A}|+1}\mathbb{P}\biggl\{(k-1)\,\mathcal{K}\Bigl(\beta\bigl(\widetilde{\mu}_{a,k-1}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant f(T)\biggr\}\,. \qquad (4)

Step 4: Bounding the probabilities of the latter sum via Sanov’s lemma. For each \gamma>0, we define the convex open set \mathcal{C}_{\gamma}=\Bigl\{\beta(q)\in\mathcal{B}:\ \ \mathcal{K}\bigl(\beta(q),\,\beta(\mu^{\star})\bigr)<\gamma\Bigr\}, which is a non-empty set (since \mu^{\star}<1); by continuity of the mapping \mathcal{K}_{\,\cdot\,,\mu^{\star}} defined after the statement of Lemma 3.1 when \mu^{\star}\in(0,1), its closure equals \overline{\mathcal{C}}_{\gamma}=\Bigl\{\beta(q)\in\mathcal{B}:\ \ \mathcal{K}\bigl(\beta(q),\,\beta(\mu^{\star})\bigr)\leqslant\gamma\Bigr\}\,.

In addition, since \mu_{a}\in(0,1), we have that \mathcal{K}\bigl{(}\beta(q),\,\beta(\mu_{a})\bigr{)}<\infty for all q\in[0,1]. In particular, for all \gamma>0, the condition \Lambda\bigl{(}\mathcal{C}_{\gamma}\bigr{)}<\infty of Lemma 2 is satisfied. Denoting this value by

\theta_{a}(\gamma)=\inf\biggl\{\mathcal{K}\bigl(\beta(q),\,\beta(\mu_{a})\bigr):\ \ \beta(q)\in\mathcal{B}\ \ \mbox{such that}\ \ \mathcal{K}\bigl(\beta(q),\,\beta(\mu^{\star})\bigr)\leqslant\gamma\biggr\}\,,

we get by the indicated lemma that for all k\geqslant 1,

\mathbb{P}\biggl\{\mathcal{K}\Bigl(\beta\bigl(\widetilde{\mu}_{a,k}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant\gamma\biggr\}=\mathbb{P}\Bigl\{\beta\bigl(\widetilde{\mu}_{a,k}\bigr)\in\overline{\mathcal{C}}_{\gamma}\Bigr\}\leqslant e^{-k\,\theta_{a}(\gamma)}\,.

Now, since (an open neighborhood of) \beta(\mu_{a}) is not included in \overline{\mathcal{C}}_{\gamma} as soon as 0<\gamma<\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr), we have that \theta_{a}(\gamma)>0 for such values of \gamma. To apply the obtained inequality to the last sum in (4), we fix a constant c_{a}>0 and denote by k_{0} the following upper integer part, k_{0}=\left\lceil\frac{(1+c_{a})\,f(T)}{\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)}\right\rceil, so that f(T)/k\leqslant\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)/(1+c_{a})<\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr) for k\geqslant k_{0}, hence,

\sum_{k=2}^{T-|\mathcal{A}|+1}\,\mathbb{P}\biggl\{(k-1)\,\mathcal{K}\Bigl(\beta\bigl(\widetilde{\mu}_{a,k-1}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant f(T)\biggr\}\leqslant\sum_{k=1}^{T}\,\mathbb{P}\biggl\{\mathcal{K}\Bigl(\beta\bigl(\widetilde{\mu}_{a,k}\bigr),\,\beta(\mu^{\star})\Bigr)\leqslant\frac{f(T)}{k}\biggr\}
\leqslant k_{0}-1+\sum_{k=k_{0}}^{T}\,\exp\Bigl(-k\,\theta_{a}\bigl(f(T)/k\bigr)\Bigr)\,.

Since \theta_{a} is a non-increasing function,

\sum_{k=k_{0}}^{T}\,\exp\Bigl(-k\,\theta_{a}\bigl(f(T)/k\bigr)\Bigr)\leqslant\sum_{k=k_{0}}^{T}\,\exp\Bigl(-k\,\theta_{a}\bigl(\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)/(1+c_{a})\bigr)\Bigr)
\leqslant\Gamma_{a}(c_{a})\,\exp\Bigl(-k_{0}\,\theta_{a}\bigl(\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)/(1+c_{a})\bigr)\Bigr)\leqslant\Gamma_{a}(c_{a})\,,

where \Gamma_{a}(c_{a})=\Bigl[1-\exp\Bigl(-\theta_{a}\bigl(\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)/(1+c_{a})\bigr)\Bigr)\Bigr]^{-1}\,.
Putting all pieces together, we thus proved so far that

\mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant 1+\frac{(1+c_{a})\,f(T)}{\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)}+4e\sum_{t=|\mathcal{A}|}^{T-1}\bigl\lceil f(t)\log t\bigr\rceil\,e^{-f(t)}+\Gamma_{a}(c_{a})

and it only remains to deal with \Gamma_{a}(c_{a}).
Step 5: Getting an upper bound in closed form for \Gamma_{a}(c_{a}). We will make repeated uses of Pinsker’s inequality: for p,q\in[0,1], one has \mathcal{K}\bigl{(}\beta(p),\beta(q)\bigr{)}\geqslant 2\,(p-q)^{2}\,.
In what follows, we use the short-hand notation \Theta_{a}=\theta_{a}\bigl(\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)/(1+c_{a})\bigr) and therefore need to upper bound 1/\bigl(1-e^{-\Theta_{a}}\bigr). Since for all u\geqslant 0, one has e^{-u}\leqslant 1-u+u^{2}/2, we get \Gamma_{a}(c_{a})\leqslant\frac{1}{\Theta_{a}\bigl(1-\Theta_{a}/2\bigr)}\leqslant\frac{2}{\Theta_{a}} for \Theta_{a}\leqslant 1, and \Gamma_{a}(c_{a})\leqslant\frac{1}{1-e^{-1}}\leqslant 2 for \Theta_{a}\geqslant 1. It thus only remains to lower bound \Theta_{a} in the case when it is smaller than 1.

By the continuity properties of the Kullback-Leibler divergence, the infimum in the definition of \theta_{a} is always achieved; we therefore let \widetilde{\mu} be an element in [0,1] such that

\Theta_{a}=\mathcal{K}\bigl(\beta(\widetilde{\mu}),\,\beta(\mu_{a})\bigr)\qquad\mbox{and}\qquad\mathcal{K}\bigl(\beta(\widetilde{\mu}),\,\beta(\mu^{\star})\bigr)=\frac{\mathcal{K}\bigl(\beta(\mu_{a}),\,\beta(\mu^{\star})\bigr)}{1+c_{a}}\,;

it is easy to see that we have the ordering \mu_{a}<\widetilde{\mu}<\mu^{\star}. By Pinsker’s inequality, \Theta_{a}\geqslant 2\bigl(\widetilde{\mu}-\mu_{a}\bigr)^{2} and we now lower bound the latter quantity. We use (in this step only, with a slight abuse of notation) the short-hand notation f(p)=\mathcal{K}\bigl(\beta(p),\beta(\mu^{\star})\bigr) and note that the thus defined mapping f is convex and differentiable on (0,1); its derivative equals f^{\prime}(p)=\log\bigl((1-\mu^{\star})/\mu^{\star}\bigr)+\log\bigl(p/(1-p)\bigr) for all p\in(0,1) and is therefore non positive for p\leqslant\mu^{\star}. By the indicated convexity of f, using a sub-gradient inequality, we get f\bigl(\widetilde{\mu}\bigr)-f(\mu_{a})\geqslant f^{\prime}(\mu_{a})\,\bigl(\widetilde{\mu}-\mu_{a}\bigr)\,, which entails, since f^{\prime}(\mu_{a})<0,

\widetilde{\mu}-\mu_{a}\geqslant\frac{f\bigl(\widetilde{\mu}\bigr)-f(\mu_{a})}{f^{\prime}(\mu_{a})}=\frac{c_{a}}{1+c_{a}}\,\frac{f(\mu_{a})}{-f^{\prime}(\mu_{a})}\,, \qquad (5)

where the equality follows from the fact that, by definition of \widetilde{\mu}, we have f\bigl(\widetilde{\mu}\bigr)=f(\mu_{a})/(1+c_{a}). Now, since f^{\prime} is differentiable as well on (0,1) and takes the value 0 at \mu^{\star}, a Taylor equality (mean value theorem) entails that there exists a \xi\in(\mu_{a},\mu^{\star}) such that

-f^{\prime}(\mu_{a})=f^{\prime}(\mu^{\star})-f^{\prime}(\mu_{a})=f^{\prime\prime}(\xi)\,\bigl(\mu^{\star}-\mu_{a}\bigr)\qquad\mbox{where}\quad f^{\prime\prime}(\xi)=1/\xi+1/(1-\xi)=1\big/\bigl(\xi(1-\xi)\bigr)\,.

Therefore, by concavity of \tau\mapsto\tau(1-\tau) (whose minimum over an interval is attained at an endpoint), we get that

\frac{1}{-f^{\prime}(\mu_{a})}\geqslant\frac{\min\bigl\{\mu_{a}(1-\mu_{a}),\,\mu^{\star}(1-\mu^{\star})\bigr\}}{\mu^{\star}-\mu_{a}}\,.

Substituting this into (5) and using again Pinsker’s inequality to lower bound f(\mu_{a}), we have proved

\widetilde{\mu}-\mu_{a}\geqslant 2\,\frac{c_{a}}{1+c_{a}}\,\bigl(\mu^{\star}-\mu_{a}\bigr)\,\min\bigl\{\mu_{a}(1-\mu_{a}),\,\mu^{\star}(1-\mu^{\star})\bigr\}\,.

Putting all pieces together, we thus proved that

\Gamma_{a}(c_{a})\leqslant 2\,\max\left\{\frac{(1+c_{a})^{2}}{8\,c_{a}^{2}\bigl(\mu^{\star}-\mu_{a}\bigr)^{2}\,\Bigl(\min\bigl\{\mu_{a}(1-\mu_{a}),\,\mu^{\star}(1-\mu^{\star})\bigr\}\Bigr)^{2}},\;1\right\}\,;

bounding the maximum of the two quantities by their sum concludes the main part of the proof.

Step 6: For \mu^{\star}\in\{0,1\} and/or \mu_{a}=0. When \mu^{\star}=1, then \widehat{\mu}_{a^{\star},N_{t}(a^{\star})}=1 for all t\geqslant|\mathcal{A}|+1, so that B^{+}_{a^{\star},t}=1 for all t\geqslant|\mathcal{A}|+1. Thus, the arm a is played after round t\geqslant|\mathcal{A}|+1 only if B^{+}_{a,t}=1 and \widehat{\mu}_{a,N_{t}(a)}=1 (in view of the tie-breaking rule of the considered strategy). But this means that a is played as long as it gets payoffs equal to 1 and stops being played when it receives the payoff 0 for the first time. Hence, in this case, the sum of payoffs equals at least T-2\bigl(|\mathcal{A}|-1\bigr) and the regret R_{T}=\mathbb{E}\bigl[T\mu^{\star}-(Y_{1}+\ldots+Y_{T})\bigr] is therefore bounded by 2\bigl(|\mathcal{A}|-1\bigr).

When \mu^{\star}=0, a Dirac mass over 0 is associated with all arms and the regret of all strategies is equal to 0.

We consider now the case \mu^{\star}\in(0,1) and \mu_{a}=0, for which the first three steps go through; only in the upper bound of Step 4 did we use the fact that \mu_{a}>0. But in this case, we have a deterministic bound on (4). Indeed, since \mathcal{K}\bigl(\beta(0),\beta(\mu^{\star})\bigr)=-\log(1-\mu^{\star}), we have k\,\mathcal{K}\bigl(\beta(0),\beta(\mu^{\star})\bigr)\leqslant f(T) if and only if

k\leqslant\frac{f(T)}{-\log(1-\mu^{\star})}=\frac{f(T)}{\mathcal{K}\bigl(\beta(\mu_{a}),\beta(\mu^{\star})\bigr)}\,,

which improves on the general bound exhibited in step 4.

Remark 1

Note that Step 5 in the proof is specifically designed to provide an upper bound on \Gamma_{a}(c_{a}) in the case of Bernoulli distributions. In the general case, getting such an explicit bound seems more involved.

4 A finite-time analysis in the case of distributions with finite support

Before stating and proving our main result, Theorem 4.2, we introduce the quantity \mathcal{K}_{\inf} and list some of its properties.

4.1 Some useful properties of \mathcal{K}_{\inf} and its level sets

We now introduce the key quantity in order to generalize the previous algorithm to handle the case of distributions with finite support. To that end, we introduce \mathcal{P}_{F}\bigl{(}[0,1]\bigr{)}, the subset of \mathcal{P}\bigl{(}[0,1]\bigr{)} that consists of distributions with finite support.

Definition.

For all distributions \nu\in\mathcal{P}_{F}\bigl{(}[0,1]\bigr{)} and \mu\in[0,1), we define

\mathcal{K}_{\inf}(\nu,\mu)=\inf\,\Bigl\{\mathcal{K}(\nu,\nu^{\prime}):\ \ \nu^{\prime}\in\mathcal{P}_{F}\bigl([0,1]\bigr)\ \ \mbox{s.t.}\ \ E(\nu^{\prime})>\mu\Bigr\},

where E(\nu^{\prime})=\int_{[0,1]}x\,{\mbox{\rm d}}\nu^{\prime}(x) denotes the expectation of the distribution \nu^{\prime}.

We now recall some useful properties of \mathcal{K}_{\inf}. Honda and Takemura (2010b, Lemma 6) can be reformulated in our context as follows.

Lemma.

For all \nu\in\mathcal{P}_{F}\bigl{(}[0,1]\bigr{)}, the mapping \mathcal{K}_{\inf}(\nu,\,\cdot\,) is continuous and non decreasing in its argument \mu\in[0,1). Moreover, the mapping \mathcal{K}_{\inf}(\,\cdot\,,\mu) is lower semi-continuous on \mathcal{P}_{F}\bigl{(}[0,1]\bigr{)} for all \mu\in[0,1).

The next two lemmas bound the variation of \mathcal{K}_{\inf}, respectively in its first and second arguments. (For clarity, we denote the expectations with respect to \nu by \mathbb{E}_{\nu}.) Their proofs are both deferred to the appendix. We denote by \left\Arrowvert\,\cdot\,\right\Arrowvert_{1} the \ell^{1}–norm on \mathcal{P}\bigl{(}[0,1]\bigr{)} and recall that the \ell^{1}–norm of \nu-\nu^{\prime} corresponds to twice the distance in variation between \nu and \nu^{\prime}.

Lemma.

For all \mu\in(0,1) and for all \nu,\,\nu^{\prime}\in\mathcal{P}_{F}\bigl{(}[0,1]\bigr{)}, the following holds true.

  • In the case when \mathbb{E}_{\nu}\bigl[(1-\mu)/(1-X)\bigr]>1, we have \mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant M_{\nu,\mu}\,\Arrowvert\nu-\nu^{\prime}\Arrowvert_{1}\,, for some constant M_{\nu,\mu}>0.

  • In the case when \mathbb{E}_{\nu}\bigl[(1-\mu)/(1-X)\bigr]\leqslant 1, the fact that \mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu^{\prime},\mu)\geqslant\alpha\,\mathcal{K}_{\inf}(\nu,\mu) for some \alpha\in(0,1) entails that

    \Arrowvert\nu-\nu^{\prime}\Arrowvert_{1}\geqslant\frac{1-\mu}{(2/\alpha)\,\bigl((2/\alpha)-1\bigr)}\,.
Lemma.

We have that for any \nu\in\mathcal{P}_{F}\bigl{(}[0,1]\bigr{)}, provided that \mu\geqslant\mu-\varepsilon>E(\nu), the following inequalities hold true:

\varepsilon/(1-\mu)\geqslant\mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu,\mu-\varepsilon)\geqslant 2\varepsilon^{2}\,.

Moreover, the first inequality is also valid when E(\nu)\geqslant\mu>\mu-\varepsilon or \mu>E(\nu)\geqslant\mu-\varepsilon.

Level sets of \mathcal{K}_{\inf}:

For each \gamma>0 and \mu\in(0,1), we consider the set

\mathcal{C}_{\mu,\gamma}=\Bigl\{\nu^{\prime}\in\mathcal{P}_{F}\bigl([0,1]\bigr):\ \ \mathcal{K}_{\inf}(\nu^{\prime},\mu)<\gamma\Bigr\}
=\Bigl\{\nu^{\prime}\in\mathcal{P}_{F}\bigl([0,1]\bigr):\ \ \exists\,\nu^{\prime}_{\mu}\in\mathcal{P}_{F}\bigl([0,1]\bigr)\ \ \mbox{s.t.}\ \ E\bigl(\nu^{\prime}_{\mu}\bigr)>\mu\ \ \mbox{and}\ \ \mathcal{K}\bigl(\nu^{\prime},\nu^{\prime}_{\mu}\bigr)<\gamma\Bigr\}\,.

We detail a property in the following lemma, whose proof is also deferred to the appendix.

Lemma.

For all \gamma>0 and \mu\in(0,1), the set \mathcal{C}_{\mu,\gamma} is a non-empty open convex set. Moreover,

\overline{\mathcal{C}}_{\mu,\gamma}\,\supseteq\,\Bigl\{\nu^{\prime}\in\mathcal{P}_{F}\bigl([0,1]\bigr):\ \ \mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant\gamma\Bigr\}\,.

4.2 The \mathcal{K}_{\inf}–strategy and a general performance guarantee

For each arm a\in\mathcal{A} and round t with N_{t}(a)>0, we denote by \widehat{\nu}_{a,N_{t}(a)} the empirical distribution of the payoffs obtained till round t when picking arm a, that is,

\widehat{\nu}_{a,N_{t}(a)}=\frac{1}{N_{t}(a)}\sum_{s\leqslant t:\,A_{s}=a}\delta_{Y_{s}}\,,

where for all x\in[0,1], we denote by \delta_{x} the Dirac mass on x. We define the corresponding empirical averages as

\widehat{\mu}_{a,N_{t}(a)}=E\bigl(\widehat{\nu}_{a,N_{t}(a)}\bigr)=\frac{1}{N_{t}(a)}\sum_{s\leqslant t:\,A_{s}=a}Y_{s}\,.

We then consider the \mathcal{K}_{\inf}–strategy defined in Figure 2. Note that the use of maxima in the definitions of the B^{+}_{a,t} is justified by Lemma 4.1.

As explained in Honda and Takemura (2010b), the computation of the quantities \mathcal{K}_{\inf} can be done efficiently in this case, i.e., when we consider only distributions with finite supports. This is because in the computation of \mathcal{K}_{\inf}, it is sufficient to consider only distributions with the same support as the empirical distributions (up to one point). Note that the knowledge of the support of the distributions associated with the arms is not required.
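
For illustration, here is a minimal sketch (in Python, with hypothetical names) of such a computation; it relies on the dual representation of \mathcal{K}_{\inf} as a one-dimensional concave maximization, which is taken from Honda and Takemura (2010a) and is an assumption of this sketch rather than a statement of the present paper:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def kl_inf(support, weights, mu, margin=1e-12):
        """Numerical value of K_inf(nu, mu) for a finitely supported nu on [0, 1].

        Assumed dual form (Honda and Takemura, 2010a):
            K_inf(nu, mu) = max_{0 <= lam <= 1/(1 - mu)} E_nu[ log(1 - lam * (X - mu)) ].
        """
        x = np.asarray(support, dtype=float)
        p = np.asarray(weights, dtype=float)
        if np.dot(p, x) >= mu:        # nu already meets (up to the boundary) the constraint E > mu
            return 0.0
        lam_max = 1.0 / (1.0 - mu)
        # the dual objective is concave in lam, so a bounded scalar search is enough
        res = minimize_scalar(
            lambda lam: -np.dot(p, np.log(1.0 - lam * (x - mu))),
            bounds=(0.0, lam_max * (1.0 - margin)),
            method="bounded",
        )
        return max(0.0, -res.fun)

Given such a routine, the indices B^{+}_{a,t} of Figure 2 below can be obtained by bisection over q, exactly as in the Bernoulli case of Figure 1, since \mathcal{K}_{\inf}(\widehat{\nu},\,\cdot\,) is non-decreasing in its second argument (Lemma 4.1).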

  Parameters: A non-decreasing function f:\mathbb{N}\to\mathbb{R}
Initialization: Pull each arm of \mathcal{A} once
For rounds t+1, where t\geqslant|\mathcal{A}|,

  • compute for each arm a\in\mathcal{A} the quantity

    B^{+}_{a,t}=\max\,\Bigl\{q\in[0,1]:\ \ N_{t}(a)\,\mathcal{K}_{\inf}\bigl(\widehat{\nu}_{a,N_{t}(a)},\,q\bigr)\leqslant f(t)\Bigr\}\,,

    where \widehat{\nu}_{a,N_{t}(a)}=\frac{1}{N_{t}(a)}\sum_{s\leqslant t:\,A_{s}=a}\delta_{Y_{s}}\,;

  • in case of a tie, pick an arm with largest value of \widehat{\mu}_{a,N_{t}(a)};

  • pull any arm A_{t+1}\in\mathop{\mathrm{argmax}}_{a\in\mathcal{A}}\,B^{+}_{a,t}\,.

 

Figure 2: The strategy \mathcal{K}_{\inf}.
Theorem.

Assume that \nu^{\star} is finitely supported, with expectation \mu^{\star}\in(0,1) and with support denoted by \mathcal{S}^{\star}. Let a\in\mathcal{A} be a suboptimal arm such that \mu_{a}>0 and \nu_{a} is finitely supported. Then, for all c_{a}>0 and all

0<\varepsilon<\min\left\{\Delta_{a},\,\frac{c_{a}/2}{1+c_{a}}\,(1-\mu^{\star})\,\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})\right\},

the expected number of times the \mathcal{K}_{\inf}–strategy, run with f(t)=\log t, pulls arm a satisfies

\mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant 1+\frac{(1+c_{a})\,\log T}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}+\frac{1}{1-e^{-\Theta_{a}(c_{a},\varepsilon)}}+\frac{1}{\varepsilon^{2}}\log\biggl(\frac{1}{1-\mu^{\star}+\varepsilon}\biggr)\sum_{k=1}^{T}(k+1)^{|\mathcal{S}^{\star}|}\,e^{-k\varepsilon^{2}}+\frac{1}{(\Delta_{a}-\varepsilon)^{2}}\,,

where

\Theta_{a}(c_{a},\varepsilon)=\theta_{a}\!\left(\frac{\log T}{k_{0}}+\frac{\varepsilon}{1-\mu^{\star}}\right)\qquad\mbox{with}\qquad k_{0}=\left\lceil\frac{(1+c_{a})\,\log T}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}\right\rceil\,,

and for all \gamma>0,

\theta_{a}(\gamma)=\inf\Bigl\{\mathcal{K}(\nu^{\prime},\nu_{a}):\ \ \nu^{\prime}\ \,\mbox{s.t.}\ \,\mathcal{K}_{\inf}(\nu^{\prime},\mu^{\star})<\gamma\Bigr\}\,.

As a corollary, we get (by taking some common value for all c_{a}) that for all c>0,

R_{T}\leqslant\sum_{a\in\mathcal{A}}\Delta_{a}\,\frac{(1+c)\,\log T}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}+h(c)\,,

where h(c)<\infty is a function of c (and of the distributions associated with the arms), which is however independent of T. As a consequence, we recover the asymptotic results of Burnetas and Katehakis (1996), Honda and Takemura (2010a), i.e., the guarantee that

\limsup_{T\rightarrow\infty}\frac{R_{T}}{\log T}\leqslant\sum_{a\in\mathcal{A}}\frac{\Delta_{a}}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}\,.

Of course, a sharper optimization can be performed by carefully choosing the constants c_{a}, that are parameters of the analysis; similarly to the comments after the statement of Theorem 3.2, we would then get a dominant term with a constant factor 1 instead of 1+c as above, plus an additional second-order term. Details are left to a journal version of this paper.

Proof.

By arguments similar to the ones used in the first step of the proof of Theorem 3.2, we have

\bigl\{A_{t+1}=a\bigr\}\subseteq\Bigl\{\mu^{\star}-\varepsilon<\widehat{\mu}_{a,N_{t}(a)}\Bigr\}\,\cup\,\Bigl\{\mu^{\star}-\varepsilon>B^{+}_{a^{\star},t}\Bigr\}\,\cup\,\Bigl\{\mu^{\star}-\varepsilon\in\bigl[\widehat{\mu}_{a,N_{t}(a)},\,B^{+}_{a,t}\bigr]\Bigr\}\,;

indeed, on the event \bigl\{A_{t+1}=a\bigr\}\,\cap\,\Bigl\{\mu^{\star}-\varepsilon\geqslant\widehat{\mu}_{a,N_{t}(a)}\Bigr\}\,\cap\,\Bigl\{\mu^{\star}-\varepsilon\leqslant B^{+}_{a^{\star},t}\Bigr\}\,,
we have \widehat{\mu}_{a,N_{t}(a)}\leqslant\mu^{\star}-\varepsilon\leqslant B^{+}_{a^{\star},t}\leqslant B^{+}_{a,t} (where the last inequality is by definition of the strategy). Before proceeding, we note that

\Bigl\{\mu^{\star}-\varepsilon\in\bigl[\widehat{\mu}_{a,N_{t}(a)},\,B^{+}_{a,t}\bigr]\Bigr\}\subseteq\Bigl\{N_{t}(a)\,\mathcal{K}_{\inf}\bigl(\widehat{\nu}_{a,N_{t}(a)},\,\mu^{\star}-\varepsilon\bigr)\leqslant f(t)\Bigr\}\,,

since \mathcal{K}_{\inf} is a non-decreasing function in its second argument and \mathcal{K}_{\inf}\bigl{(}\nu,E(\nu)\bigr{)}=0 for all distributions \nu. Therefore,

\mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant 1+\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon<\widehat{\mu}_{a,N_{t}(a)}\ \,\mbox{and}\ \,A_{t+1}=a\Bigr\}+\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon>B^{+}_{a^{\star},t}\Bigr\}+\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{N_{t}(a)\,\mathcal{K}_{\inf}\bigl(\widehat{\nu}_{a,N_{t}(a)},\,\mu^{\star}-\varepsilon\bigr)\leqslant f(t)\ \,\mbox{and}\ \,A_{t+1}=a\Bigr\}\,; \qquad (6)

now, the two sums with the events “and A_{t+1}=a” can be rewritten by using the stopping times \tau_{a,k} introduced in the proof of Theorem 3.2; more precisely, by mimicking the transformations performed in its step 3, we get the simpler bound

\mathbb{E}\bigl[N_{T}(a)\bigr]\leqslant 1+\sum_{k=2}^{T-|\mathcal{A}|+1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon<\widetilde{\mu}_{a,k-1}\Bigr\}+\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon>B^{+}_{a^{\star},t}\Bigr\}+\sum_{k=2}^{T-|\mathcal{A}|+1}\mathbb{P}\Bigl\{(k-1)\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a,k-1},\,\mu^{\star}-\varepsilon\bigr)\leqslant f(T)\Bigr\}\,, \qquad (7)

where the \widetilde{\nu}_{a,s} and \widetilde{\mu}_{a,s} are respectively the empirical distributions and empirical expectations computed on the first s elements of the sequence of the random variables \widetilde{X}_{a,j}=Y_{\tau_{a,j}}, which are i.i.d. according to \nu_{a}.

Step 1: The first sum in (7) is bounded by resorting to Hoeffding’s inequality, whose application is legitimate since \mu^{\star}-\mu_{a}-\varepsilon>0;

\sum_{k=2}^{T-|\mathcal{A}|+1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon<\widetilde{\mu}_{a,k-1}\Bigr\}=\sum_{k=1}^{T-|\mathcal{A}|}\mathbb{P}\Bigl\{\mu^{\star}-\mu_{a}-\varepsilon<\widetilde{\mu}_{a,k}-\mu_{a}\Bigr\}
\leqslant\sum_{k=1}^{T-|\mathcal{A}|}e^{-2k(\mu^{\star}-\mu_{a}-\varepsilon)^{2}}\leqslant\frac{1}{1-e^{-2(\mu^{\star}-\mu_{a}-\varepsilon)^{2}}}\leqslant\frac{1}{(\mu^{\star}-\mu_{a}-\varepsilon)^{2}}\,,

where we used for the last inequality the general upper bounds provided at the beginning of step 5 in the proof of Theorem 3.2.

Step 2: The second sum in (7) is bounded by first using the definition of B^{+}_{a^{\star},t}; then, decomposing the event depending on the values taken by N_{t}(a^{\star}); and finally using the fact that on \bigl\{N_{t}(a^{\star})=k\bigr\}, we have the rewriting \widehat{\nu}_{a^{\star},N_{t}(a^{\star})}=\widetilde{\nu}_{a^{\star},k} and \widehat{\mu}_{a^{\star},N_{t}(a^{\star})}=\widetilde{\mu}_{a^{\star},k}\,; more precisely,

\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon>B^{+}_{a^{\star},t}\Bigr\}\leqslant\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{N_{t}(a^{\star})\,\mathcal{K}_{\inf}\bigl(\widehat{\nu}_{a^{\star},N_{t}(a^{\star})},\,\mu^{\star}-\varepsilon\bigr)>f(t)\Bigr\}
=\sum_{t=|\mathcal{A}|}^{T-1}\sum_{k=1}^{t}\mathbb{P}\Bigl\{N_{t}(a^{\star})=k\ \,\mbox{and}\ \,k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)>f(t)\Bigr\}
\leqslant\sum_{k=1}^{T}\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)>f(t)\Bigr\}\,.

Since f=\log is increasing, we can rewrite the bound, using a Fubini-Tonelli argument, as

\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon>B^{+}_{a^{\star},t}\Bigr\}\leqslant\sum_{k=1}^{T}\,\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\biggl\{f^{-1}\Bigl(k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)\Bigr)>t\biggr\}
\leqslant\sum_{k=1}^{T}\,\mathbb{E}\biggl[f^{-1}\Bigl(k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)\Bigr)\,\mathbb{I}_{\bigl\{\mathcal{K}_{\inf}(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon)>0\bigr\}}\biggr]\,.

Now, Honda and Takemura (2010a, Lemma 13) indicates that, since \mu^{\star}-\varepsilon\in[0,1),

\sup_{\nu\in\mathcal{P}_{F}([0,1])}\mathcal{K}_{\inf}\bigl(\nu,\mu^{\star}-\varepsilon\bigr)\leqslant\log\bigl(1/(1-\mu^{\star}+\varepsilon)\bigr)\stackrel{\mathrm{def}}{=}K_{\max}\,;

we define Q=K_{\max}/\varepsilon^{2} and introduce the following sets (V_{q})_{1\leqslant q\leqslant Q}:

V_{q}=\Bigl\{\nu\in\mathcal{P}_{F}\bigl([0,1]\bigr):\ \ (q-1)\varepsilon^{2}<\mathcal{K}_{\inf}\bigl(\nu,\mu^{\star}-\varepsilon\bigr)\leqslant q\varepsilon^{2}\Bigr\}.

A peeling argument (and by using that f^{-1}=\exp is increasing as well) entails, for all k\geqslant 1,

\mathbb{E}\biggl[f^{-1}\Bigl(k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)\Bigr)\,\mathbb{I}_{\bigl\{\mathcal{K}_{\inf}(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon)>0\bigr\}}\biggr]=\sum_{q=1}^{Q}\,\mathbb{E}\biggl[f^{-1}\Bigl(k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)\Bigr)\,\mathbb{I}_{\bigl\{\widetilde{\nu}_{a^{\star},k}\in V_{q}\bigr\}}\biggr]
\leqslant\sum_{q=1}^{Q}\,\mathbb{P}\bigl\{\widetilde{\nu}_{a^{\star},k}\in V_{q}\bigr\}\,f^{-1}(kq\varepsilon^{2})\leqslant\sum_{q=1}^{Q}\mathbb{P}\Bigl\{\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)>(q-1)\varepsilon^{2}\Bigr\}\,f^{-1}(kq\varepsilon^{2})\,, \qquad (9)

where we used the definition of V_{q} to obtain each of the two inequalities. Now, by Lemma 4.1, when E\bigl{(}\widetilde{\nu}_{a^{\star},k}\bigr{)}<\mu^{\star}-\varepsilon, which is satisfied whenever \mathcal{K}_{\inf}\bigl{(}\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-% \varepsilon\bigr{)}>0, we have

\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)\leqslant\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}\bigr)-2\varepsilon^{2}\leqslant\mathcal{K}\bigl(\widetilde{\nu}_{a^{\star},k},\,\nu^{\star}\bigr)-2\varepsilon^{2}\,,

where the last inequality is by mere definition of \mathcal{K}_{\inf}. Therefore,

\mathbb{P}\Bigl\{\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)>(q-1)\varepsilon^{2}\Bigr\}\leqslant\mathbb{P}\Bigl\{\mathcal{K}\bigl(\widetilde{\nu}_{a^{\star},k},\,\nu^{\star}\bigr)>(q+1)\varepsilon^{2}\Bigr\}\,.

We note that for all k\geqslant 1, \mathbb{P}\Bigl\{\mathcal{K}\bigl(\widetilde{\nu}_{a^{\star},k},\,\nu^{\star}\bigr)>(q+1)\varepsilon^{2}\Bigr\}\leqslant(k+1)^{|\mathcal{S}^{\star}|}\,e^{-k(q+1)\varepsilon^{2}}\,,
where we recall that \mathcal{S}^{\star} denotes the finite support of \nu^{\star} and where we applied Corollary A.4 of the appendix. Now, (9) then yields, via the choice f=\log and thus f^{-1}=\exp, that

\mathbb{E}\biggl[f^{-1}\Bigl(k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon\bigr)\Bigr)\,\mathbb{I}_{\bigl\{\mathcal{K}_{\inf}(\widetilde{\nu}_{a^{\star},k},\,\mu^{\star}-\varepsilon)>0\bigr\}}\biggr]\leqslant\underbrace{\sum_{q=1}^{Q}(k+1)^{|\mathcal{S}^{\star}|}\,e^{-k(q+1)\varepsilon^{2}}e^{kq\varepsilon^{2}}}_{=\,Q\,(k+1)^{|\mathcal{S}^{\star}|}\,e^{-k\varepsilon^{2}}}\,.

Substituting the value of Q, we therefore have proved that

\sum_{t=|\mathcal{A}|}^{T-1}\mathbb{P}\Bigl\{\mu^{\star}-\varepsilon>B^{+}_{a^{\star},t}\Bigr\}\leqslant\frac{1}{\varepsilon^{2}}\log\biggl(\frac{1}{1-\mu^{\star}+\varepsilon}\biggr)\sum_{k=1}^{T}(k+1)^{|\mathcal{S}^{\star}|}\,e^{-k\varepsilon^{2}}\,.

Step 3: The third sum in (7) is first upper bounded by Lemma 4.1, which states that

\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a,k-1},\,\mu^{\star}\bigr)-\varepsilon/(1-\mu^{\star})\leqslant\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a,k-1},\,\mu^{\star}-\varepsilon\bigr)\,,

for all k\geqslant 1; this gives

\sum_{k=1}^{T-|\mathcal{A}|}\mathbb{P}\Bigl\{k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a,k},\,\mu^{\star}-\varepsilon\bigr)\leqslant f(T)\Bigr\}\leqslant\sum_{k=1}^{T-|\mathcal{A}|}\mathbb{P}\left\{k\,\mathcal{K}_{\inf}\bigl(\widetilde{\nu}_{a,k},\,\mu^{\star}\bigr)\leqslant f(T)+\frac{k\,\varepsilon}{1-\mu^{\star}}\right\}=\sum_{k=1}^{T-|\mathcal{A}|}\mathbb{P}\Bigl\{\widetilde{\nu}_{a,k}\in\overline{\mathcal{C}}_{\mu^{\star},\gamma_{k}}\Bigr\}\,, \qquad (10)

where \gamma_{k}=f(T)/k+\varepsilon/(1-\mu^{\star}) and where the set \overline{\mathcal{C}}_{\mu^{\star},\gamma_{k}} was defined in Section 4.1. For all \gamma>0, we then introduce

\theta_{a}(\gamma)=\inf\Bigl{\{}\mathcal{K}(\nu^{\prime},\nu_{a}):\ \ \nu^{% \prime}\in\mathcal{C}_{\mu^{\star},\gamma}\Bigr{\}}=\inf\Bigl{\{}\mathcal{K}(% \nu^{\prime},\nu_{a}):\ \ \nu^{\prime}\in\overline{\mathcal{C}}_{\mu^{\star},% \gamma}\Bigr{\}}\,,

(where the second equality follows from the lower semi-continuity of \mathcal{K}) and aim at bounding \mathbb{P}\Bigl{\{}\widetilde{\nu}_{a,k}\in\overline{\mathcal{C}}_{\mu^{\star}% ,\gamma}\Bigr{\}}.

As shown in Section 4.1, the set \mathcal{C}_{\mu^{\star},\gamma} is a non-empty open convex set. If we prove that \theta_{a}(\gamma) is finite for all \gamma>0, then all the conditions required to apply Lemma 2 will be met, and we get the upper bound

\sum_{k=1}^{T-|\mathcal{A}|}\mathbb{P}\Bigl\{\widetilde{\nu}_{a,k}\in\overline{\mathcal{C}}_{\mu^{\star},\gamma_{k}}\Bigr\}\leqslant\sum_{k=1}^{T-|\mathcal{A}|}\,e^{-k\,\theta_{a}(\gamma_{k})}\,.

To that end, we use the fact that \nu_{a} is finitely supported. Now, either the probability of interest is null and we are done; or, it is not null, which implies that there exists a possible value of \widetilde{\nu}_{a,k} that is in \overline{\mathcal{C}}_{\mu^{\star},\gamma}; since this value is a distribution with a support included in the one of \nu_{a}, it is absolutely continuous with respect to \nu_{a} and hence, the Kullback-Leibler divergence between this value and \nu_{a} is finite; in particular, \theta_{a}(\gamma) is finite.

Finally, we bound the \theta_{a}(\gamma_{k}) for values of k larger than k_{0}=\left\lceil\frac{(1+c_{a})\,f(T)}{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}\right\rceil\,;
we have that for all k\geqslant k_{0}, in view of the bound put on \varepsilon,

\gamma_{k}\leqslant\gamma_{k_{0}}=\frac{f(T)}{k_{0}}+\frac{\varepsilon}{1-\mu^{\star}}<\frac{\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})}{1+c_{a}}+\frac{c_{a}/2}{1+c_{a}}\,\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})=\frac{1+c_{a}/2}{1+c_{a}}\,\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})\,. \qquad (11)

Since \theta_{a} is non increasing, we have

\sum_{k=1}^{T-|\mathcal{A}|}\,e^{-k\,\theta_{a}(\gamma_{k})}\leqslant k_{0}-1+\sum_{k=k_{0}}^{T-|\mathcal{A}|}\,e^{-k\,\theta_{a}(\gamma_{k_{0}})}\leqslant k_{0}-1+\frac{1}{1-e^{-\Theta_{a}(c_{a},\varepsilon)}}\,,

provided that the quantity \Theta_{a}(c_{a},\varepsilon)=\theta_{a}\bigl{(}\gamma_{k_{0}}\bigr{)} is positive, which we prove now.

Indeed for all \nu^{\prime}\in\mathcal{C}_{\mu^{\star},\gamma_{k_{0}}}, we have by definition and by (11) that

\mathcal{K}_{\inf}(\nu^{\prime},\mu^{\star})-\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})<\gamma_{k_{0}}-\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})<-\bigl((c_{a}/2)\big/(1+c_{a})\bigr)\,\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})\,.

Now, in the case where \mathbb{E}_{\nu_{a}}\bigl{[}(1-\mu^{\star})/(1-X)\bigr{]}>1, we have, first by application of Pinsker’s inequality and then by Lemma 4.1, that

\mathcal{K}\bigl(\nu^{\prime},\nu_{a}\bigr)\,\geqslant\,\frac{\Arrowvert\nu^{\prime}-\nu_{a}\Arrowvert^{2}_{1}}{2}\,\geqslant\,\frac{1}{2\,M_{\nu_{a},\mu^{\star}}^{2}}\bigl(\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})-\mathcal{K}_{\inf}(\nu^{\prime},\mu^{\star})\bigr)^{2}>\,\frac{c_{a}^{2}\,\bigl(\mathcal{K}_{\inf}(\nu_{a},\mu^{\star})\bigr)^{2}}{8\,(1+c_{a})^{2}\,M_{\nu_{a},\mu^{\star}}^{2}}\,;

since, again by Pinsker’s inequality, \mathcal{K}_{\inf}(\nu_{a},\mu^{\star})\geqslant(\mu_{a}-\mu^{\star})^{2}/2>0, we have exhibited a lower bound independent of \nu^{\prime} in this case. In the case where \mathbb{E}_{\nu_{a}}\bigl{[}(1-\mu^{\star})/(1-X)\bigr{]}\leqslant 1, we apply the second part of Lemma 4.1, with \alpha_{a}=(c_{a}/2)/(1+c_{a}), and get

\mathcal{K}\bigl{(}\nu^{\prime},\nu_{a}\bigr{)}\,\geqslant\,\frac{\Arrowvert% \nu^{\prime}-\nu_{a}\Arrowvert^{2}_{1}}{2}\,\geqslant\,\frac{1}{2}\,\left(% \frac{1-\mu^{\star}}{(2/\alpha_{a})\,\bigl{(}(2/\alpha_{a})-1\bigr{)}}\right)^% {2}>0\,.

Thus, in both cases we found a positive lower bound independent of \nu^{\prime}, so that the infimum over \nu^{\prime}\in\mathcal{C}_{\mu^{\star},\gamma_{k_{0}}} of the quantities \mathcal{K}\bigl(\nu^{\prime},\nu_{a}\bigr), which precisely equals \theta_{a}\bigl(\gamma_{k_{0}}\bigr), is also positive. This concludes the proof.

Conclusion.

We provided a finite-time analysis of the (asymptotically optimal) \mathcal{K}_{\inf}–strategy in the case of finitely supported distributions. One could think that the extension to the case of general distributions is straightforward. However, this extension appears somewhat difficult (at least when using the current definition of \mathcal{K}_{\inf}) for the following reasons: (1) Step 2 in the proof uses the method of types, which would require some extension of the non-asymptotic version of Sanov's theorem to this case. (2) Step 3 requires both \theta_{a}(\gamma)<\infty for all \gamma>0 and \theta_{a}(\gamma)>0 for \gamma<\mathcal{K}_{\inf}(\nu_{a},\mu^{\star}), which does not seem to always be the case for general distributions. Exploring other directions for such extensions is left for future work; for instance, histogram-based approximations of general distributions could be considered.

Acknowledgements.

The authors wish to thank Peter Auer and Daniil Ryabko for insightful discussions. They acknowledge support from the French National Research Agency (ANR) under grant EXPLO/RA (“Exploration–exploitation for efficient resource allocation”) and by the PASCAL2 Network of Excellence under EC grant no. 506778.

References

  • Audibert et al. (2009) J-Y. Audibert, R. Munos, and C. Szepesvári. Exploration-exploitation trade-off using variance estimates in multi-armed bandits. Theoretical Computer Science, 410:1876–1902, 2009.
  • Audibert and Bubeck (2010) J-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2635–2686, 2010.
  • Auer and Ortner (2010) P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
  • Auer et al. (2002) P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
  • Burnetas and Katehakis (1996) A.N. Burnetas and M.N. Katehakis. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17(2):122–142, 1996.
  • Chow and Teicher (1988) Y. Chow and H. Teicher. Probability Theory. Springer, 1988.
  • Dinwoodie (1992) I.H. Dinwoodie. Mesures dominantes et théorème de Sanov. Annales de l’Institut Henri Poincaré – Probabilités et Statistiques, 28(3):365–373, 1992.
  • Filippi (2010) S. Filippi. Stratégies optimistes en apprentissage par renforcement. PhD thesis, Télécom ParisTech, 2010.
  • Garivier and Cappé (2011) A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of COLT, 2011.
  • Garivier and Leonardi (2010) A. Garivier and F. Leonardi. Context tree selection: A unifying view. arXiv:1011.2424, 2010.
  • Honda and Takemura (2010a) J. Honda and A. Takemura. An asymptotically optimal bandit algorithm for bounded support models. In Proceedings of COLT, pages 67–79, 2010a.
  • Honda and Takemura (2010b) J. Honda and A. Takemura. An asymptotically optimal policy for finite support models in the multiarmed bandit problem. arXiv:0905.2776, 2010b.
  • Lai and Robbins (1985) T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
  • Robbins (1952) H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.

Appendix A Appendix beyond the COLT page limit

A conference version of this paper was published in the Proceedings of the Twenty-Fourth Annual Conference on Learning Theory (COLT’11); this appendix details some material that was alluded to in the conference version but could not be published therein because of the page limit.

A.1 Proof of Lemma 3.1

We provide it only for the convenience of the reader, since it is similar to the one presented in Garivier and Leonardi (2010, Proposition 2) or in Garivier and Cappé (2011); the proof was however somewhat simplified by noting that the technique used leads to a maximal inequality, as stated in Lemma 3.1, and not only to an inequality for a self-normalized average, as stated in the original references.

Proof.

The result is straightforward in the cases p=0 or p=1, since then, \widehat{p}_{s}=p almost surely; in the rest of the proof, we therefore only consider the case where p\in(0,1).

It suffices to show the first bound stated in the lemma, since the second one follows by a decomposition of the probability space according to the values of N_{t}. Actually, we will show

\mathbb{P}_{p}\!\left(\bigcup_{s=1}^{t}\biggl{\{}s\,\,\mathcal{K}\Bigl{(}\beta% \bigl{(}{\widehat{p}_{s}}\bigr{)},\,\beta(p)\Bigr{)}\geqslant\varepsilon\ \,\,% \mbox{\small and}\ \,\,\widehat{p}_{s}>p\biggr{\}}\right)\leqslant e\,\bigl{% \lceil}\varepsilon\log t\bigr{\rceil}\,e^{-\varepsilon}\,,

and the desired result will follow by symmetry and a union bound.

Step 1: A martingale. For all \lambda>0, we consider the log-Laplace transform

\psi_{p}(\lambda)=\log\mathbb{E}_{p}\bigl{[}e^{\lambda X_{1}}\bigr{]}=\log% \bigl{(}(1-p)+p\,e^{\lambda}\bigr{)}\,,

with which we define the martingale

W_{s}(\lambda)=\exp\bigl{(}\lambda(X_{1}+\ldots+X_{s})-s\,\psi_{p}(\lambda)% \bigr{)}\,.
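
As a quick numerical sanity check of this step (not part of the original proof; all values below are arbitrary illustrations), one may verify by simulation that W_{s}(\lambda) indeed has unit expectation under i.i.d. Bernoulli draws:

```python
import numpy as np

# Illustrative check that W_s(lambda) = exp(lambda*S_s - s*psi_p(lambda)) has mean 1
# when S_s = X_1 + ... + X_s with X_i i.i.d. Bernoulli(p); parameter values are arbitrary.
rng = np.random.default_rng(0)
p, lam, s, n_runs = 0.3, 0.5, 30, 200000

psi = np.log((1 - p) + p * np.exp(lam))   # the log-Laplace transform psi_p(lambda)
sums = rng.binomial(s, p, size=n_runs)    # samples of S_s
w = np.exp(lam * sums - s * psi)          # samples of W_s(lambda)
print(w.mean())                           # should be close to 1
```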

Step 2: A peeling argument. We introduce t_{0}=1 and t_{k}=\lfloor\gamma^{k}\rfloor, for some \gamma>1 that will be defined by the analysis. We also denote by K=\bigl{\lceil}(\log t)/(\log\gamma)\bigr{\rceil} an upper bound on the number of elements in the peeling.

We also note that, by continuity of the Kullback-Leibler divergence in the case of Bernoulli distributions, for all \varepsilon>0 there exists a unique element p_{\varepsilon}\in(p,1) such that \mathcal{K}\bigl(\beta(p_{\varepsilon}),\,\beta(p)\bigr)=\varepsilon; this element satisfies that

\mathcal{K}\bigl{(}\beta(q),\,\beta(p)\bigr{)}\geqslant\varepsilon\ \ \mbox{% and}\ \ q\geqslant p\qquad\mbox{entails}\qquad q\geqslant p_{\varepsilon}\,.

Setting \varepsilon_{k}=\varepsilon/t_{k}, a union bound using the described peeling then yields

\displaystyle\mathbb{P}_{p}\!\left(\bigcup_{s=1}^{t}\biggl{\{}s\,\,\mathcal{K}% \Bigl{(}\beta\bigl{(}{\widehat{p}_{s}}\bigr{)},\,\beta(p)\Bigr{)}\geqslant% \varepsilon\ \,\,\mbox{\small and}\ \,\,\widehat{p}_{s}>p\biggr{\}}\right)
\displaystyle\leqslant \displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \biggl{\{}s\,\,\mathcal{K}\Bigl{(}\beta\bigl{(}{\widehat{p}_{s}}\bigr{)},\,% \beta(p)\Bigr{)}\geqslant\varepsilon\ \,\,\mbox{\small and}\ \,\,\widehat{p}_{% s}>p\biggr{\}}\right)
\displaystyle\leqslant \displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \biggl{\{}\mathcal{K}\Bigl{(}\beta\bigl{(}{\widehat{p}_{s}}\bigr{)},\,\beta(p)% \Bigr{)}\geqslant\frac{\varepsilon}{t_{k}}\ \,\,\mbox{\small and}\ \,\,% \widehat{p}_{s}>p\biggr{\}}\right)
\displaystyle= \displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \Bigl{\{}\widehat{p}_{s}\geqslant p_{\varepsilon_{k}}\Bigr{\}}\right)\ =\ \sum% _{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}\Bigl{\{}X_{1}+% \ldots+X_{s}-s\,p_{\varepsilon_{k}}\geqslant 0\Bigr{\}}\right)

Now, the variational formula for Kullback-Leibler divergences shows that for all k, there exists a \lambda_{k} such that

\varepsilon_{k}=\mathcal{K}\bigl{(}\beta(p_{\varepsilon_{k}}),\,\beta(p)\bigr{% )}=\lambda_{k}\,p_{\varepsilon_{k}}-\psi_{p}(\lambda_{k})\,;

actually, a straightforward calculation shows that \lambda_{k}=\log\bigl(p_{\varepsilon_{k}}(1-p)\bigr)-\log\bigl(p(1-p_{\varepsilon_{k}})\bigr)>0 is a suitable value. Thus,

\displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \Bigl{\{}X_{1}+\ldots+X_{s}-s\,p_{\varepsilon_{k}}\geqslant 0\Bigr{\}}\right)
\displaystyle= \displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \Bigl{\{}\exp\bigl{(}\lambda_{k}(X_{1}+\ldots+X_{s})-\lambda_{k}s\,p_{% \varepsilon_{k}}\bigr{)}\geqslant 1\Bigr{\}}\right)
\displaystyle= \displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \Bigl{\{}\exp\bigl{(}\lambda_{k}(X_{1}+\ldots+X_{s})-s\,\psi_{p}(\lambda_{k})% \bigr{)}\geqslant e^{s\,\varepsilon_{k}}\Bigr{\}}\right)
\displaystyle\leqslant \displaystyle\sum_{k=1}^{K}\,\mathbb{P}_{p}\!\left(\bigcup_{s=t_{k-1}}^{t_{k}}% \Bigl{\{}W_{s}(\lambda_{k})\geqslant e^{t_{k-1}\varepsilon_{k}}\Bigr{\}}\right)
\displaystyle\leqslant \displaystyle\sum_{k=1}^{K}\,e^{-t_{k-1}\,\varepsilon_{k}}\leqslant Ke^{-\varepsilon/\gamma}\,,

where in the last step, we resorted to Doob’s maximal inequality.

Step 3: Choosing \gamma. The obtained bound equals, by substituting the value of K and by choosing \gamma=\varepsilon/(\varepsilon-1),

Ke^{-\varepsilon/\gamma}=\bigl{\lceil}(\log t)/(\log\gamma)\bigr{\rceil}\,e^{-% \varepsilon+1}=\left\lceil\frac{\log t}{\log\bigl{(}\varepsilon/(\varepsilon-1% )\bigr{)}}\right\rceil\,e^{-\varepsilon+1}\,;

the proof is concluded by noting that the function \varepsilon\in(1,+\infty)\longmapsto\log\bigl(\varepsilon/(\varepsilon-1)\bigr)-1/\varepsilon is decreasing (its derivative is negative), with limit 0 at +\infty; hence \log\bigl(\varepsilon/(\varepsilon-1)\bigr)\geqslant 1/\varepsilon, so that the quantity above is at most \bigl\lceil\varepsilon\log t\bigr\rceil\,e^{-\varepsilon+1}=e\,\bigl\lceil\varepsilon\log t\bigr\rceil\,e^{-\varepsilon}, as claimed.
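
For illustration, the following Monte Carlo sketch (not part of the paper; the values of p, t, \varepsilon and the function names are ours) checks the one-sided deviation bound displayed at the beginning of this proof:

```python
import numpy as np

def bern_kl(q, p):
    """Kullback-Leibler divergence K(Ber(q), Ber(p)); clipping avoids log(0) at q in {0, 1}."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def one_sided_event_freq(p, t, eps, n_runs=20000, seed=0):
    """Frequency of { exists s <= t : s * K(Ber(hat p_s), Ber(p)) >= eps and hat p_s > p }."""
    rng = np.random.default_rng(seed)
    s = np.arange(1, t + 1)
    count = 0
    for _ in range(n_runs):
        p_hat = np.cumsum(rng.random(t) < p) / s   # running empirical means hat p_1, ..., hat p_t
        count += np.any((s * bern_kl(p_hat, p) >= eps) & (p_hat > p))
    return count / n_runs

p, t, eps = 0.3, 200, 5.0
print(one_sided_event_freq(p, t, eps))                  # empirical frequency of the event
print(np.e * np.ceil(eps * np.log(t)) * np.exp(-eps))   # the bound e * ceil(eps * log t) * e^{-eps}
```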

A.2 Details of the adaptation leading to Lemma 2

The exact statement of Dinwoodie (1992, Theorem 2.1 and comments on page 372) is the following.

Lemma (Non-asymptotic Sanov's lemma). Let \mathcal{C} be an open convex subset of \mathcal{P}(\mathcal{X}) such that

\Lambda(\mathcal{C})=\inf_{\kappa\in\mathcal{C}}\,\mathcal{K}(\kappa,\nu)<% \infty\,.

Then, for all t\geqslant 1,

\mathbb{P}_{\nu}\bigl{\{}\widehat{\nu}_{t}\in\mathcal{C}\bigr{\}}\leqslant e^{% -t\Lambda(\overline{\mathcal{C}})}\,.

We show how it entails Lemma 2. Let \mathcal{C} be an open convex subset of \mathcal{P}(\mathcal{X}) and let \overline{\mathcal{C}} be its closure. We denote by

\mathcal{C}_{\delta}=\bigl\{\nu\in\mathcal{P}(\mathcal{X}):\ \ d(\nu,\mathcal{C})<\delta\bigr\}

the \delta–open neighborhood of \mathcal{C}; we have \overline{\mathcal{C}}\subseteq\mathcal{C}_{\delta} for all \delta>0. Therefore, by the lemma above, since \Lambda(\mathcal{C}_{\delta})\leqslant\Lambda(\mathcal{C})<\infty,

\mathbb{P}_{\nu}\bigl{\{}\widehat{\nu}_{t}\in\overline{\mathcal{C}}\bigr{\}}% \leqslant\mathbb{P}_{\nu}\bigl{\{}\widehat{\nu}_{t}\in\mathcal{C}_{\delta}% \bigr{\}}\leqslant e^{-t\Lambda(\mathcal{C}_{\delta})}\,.

We pick, for each integer n\geqslant 1, an element \kappa_{n}\in\mathcal{C}_{1/n} such that \mathcal{K}(\kappa_{n},\nu)\leqslant\Lambda\bigl(\mathcal{C}_{1/n}\bigr)+1/n; by Dinwoodie (1992, proof of Proposition 1.1), the sequence of the \kappa_{n} admits a converging subsequence \kappa_{\varphi(n)}, whose limit point \kappa_{\infty} belongs to \overline{\mathcal{C}} and which satisfies

\mathcal{K}(\kappa_{\infty},\nu)\leqslant\liminf_{n\to\infty}\mathcal{K}(\kappa_{n},\nu)\leqslant\liminf_{\delta\to 0}\Lambda\bigl(\mathcal{C}_{\delta}\bigr)\,.

Therefore, by taking limits in the above inequality, we have proved the desired inequality,

\mathbb{P}_{\nu}\bigl{\{}\widehat{\nu}_{t}\in\overline{\mathcal{C}}\bigr{\}}% \leqslant e^{-t\mathcal{K}(\kappa_{\infty},\nu)}\leqslant e^{-t\Lambda(% \overline{\mathcal{C}})}\,.

A.3 Useful properties of \mathcal{K}_{\inf} and its level sets

Proof of Lemma 4.1:

We resort to the formulation of \mathcal{K}_{\inf} in terms of a convex optimization problem as introduced in Honda and Takemura (2010b); more precisely, it is shown therein that

\mathcal{K}_{\inf}(\nu,\mu)=\max\biggl{\{}\mathbb{E}_{\nu}\Bigl{[}\log\bigl{(}% 1+\lambda(\mu-X)\bigr{)}\Bigr{]}:\ \ \lambda\in\bigl{[}0,\,1/(1-\mu)\bigr{]}% \biggr{\}} (12)

(where X denotes a random variable distributed according to \nu), as well as the following alternative. The optimal value \lambda_{\nu} of the parameter \lambda in the maximization (12) is equal to 1/(1-\mu) if and only if \mathbb{E}_{\nu}\bigl[(1-\mu)/(1-X)\bigr]\leqslant 1, and lies in \bigl[0,\,1/(1-\mu)\bigr) if \mathbb{E}_{\nu}\bigl[(1-\mu)/(1-X)\bigr]>1.
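
To make the variational formula (12) concrete, here is a small numerical sketch (an illustration only; the support, weights and target mean below are arbitrary) that evaluates \mathcal{K}_{\inf}(\nu,\mu) for a finitely supported \nu by a grid search over \lambda, which suffices since the objective is concave in \lambda:

```python
import numpy as np

def k_inf(support, weights, mu, grid_size=100001):
    """K_inf(nu, mu) = max over lambda in [0, 1/(1-mu)] of E_nu[log(1 + lambda*(mu - X))],
    for nu finitely supported on [0, 1), evaluated by grid search; returns (value, maximizer)."""
    support, weights = np.asarray(support, float), np.asarray(weights, float)
    lambdas = np.linspace(0.0, 1.0 / (1.0 - mu), grid_size)
    objective = np.log1p(np.outer(lambdas, mu - support)) @ weights
    best = np.argmax(objective)
    return objective[best], lambdas[best]

# Example: nu supported on {0.1, 0.5, 0.9} with weights (0.3, 0.4, 0.3), target mean mu = 0.7.
support, weights, mu = [0.1, 0.5, 0.9], [0.3, 0.4, 0.3], 0.7
val, lam = k_inf(support, weights, mu)
# For this example E_nu[(1-mu)/(1-X)] > 1, so the maximizer should lie strictly below 1/(1-mu).
print(val, lam, 1.0 / (1.0 - mu))
```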

For all \lambda\in\bigl{[}0,\,1/(1-\mu)\bigr{]}, we now introduce the function

\phi_{\lambda}:x\in[0,1]\,\,\longmapsto\,\,\log\bigl{(}1+\lambda(\mu-x)\bigr{)% }\,,

which is always continuous on [0,1); we note also that it is continuous and finite at x=1 when \lambda<1/(1-\mu). In the latter case, \phi_{\lambda} is bounded; since it is decreasing, it is easy to get a uniform bound: for all x,

\bigl|\phi_{\lambda}(x)\bigr|\leqslant\bigl|\phi_{\lambda}(0)\bigr|+\bigl|\phi_{\lambda}(1)\bigr|=\log\frac{1+\lambda\mu}{1+\lambda(\mu-1)}\lx@stackrel{{\scriptstyle\rm def}}{{=}}M_{\lambda}\,.

It then follows that for all \lambda\in\bigl{[}0,\,1/(1-\mu)\bigr{)},

\mathbb{E}_{\nu}\bigl{[}\phi_{\lambda}(X)\bigr{]}-\mathbb{E}_{\nu^{\prime}}% \bigl{[}\phi_{\lambda}(X)\bigr{]}\leqslant M_{\lambda}\,\Arrowvert\nu-\nu^{% \prime}\Arrowvert_{1}\,. (13)

In the case when \lambda_{\nu}<1/(1-\mu), we have from the variational formulation (12) that

\mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant% \mathbb{E}_{\nu}\bigl{[}\phi_{\lambda_{\nu}}(X)\bigr{]}-\mathbb{E}_{\nu^{% \prime}}\bigl{[}\phi_{\lambda_{\nu}}(X)\bigr{]}\leqslant M_{\lambda_{\nu}}\,% \Arrowvert\nu-\nu^{\prime}\Arrowvert_{1}\,.

Thus, the constant M_{\nu,\mu} in the statement of the lemma corresponds to our quantity M_{\lambda_{\nu}} in this case.
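
For what it is worth, this Lipschitz-type bound can be checked numerically in the case \lambda_{\nu}<1/(1-\mu) (an illustrative sketch, with the same kind of grid evaluation of (12) as above and arbitrary distributions \nu, \nu^{\prime} sharing a support in [0,1)):

```python
import numpy as np

def k_inf_and_lambda(support, weights, mu, grid_size=100001):
    """Grid evaluation of (12); returns (K_inf(nu, mu), maximizing lambda)."""
    lambdas = np.linspace(0.0, 1.0 / (1.0 - mu), grid_size)
    vals = np.log1p(np.outer(lambdas, mu - np.asarray(support, float))) @ np.asarray(weights, float)
    return vals.max(), lambdas[vals.argmax()]

support, mu = [0.1, 0.5, 0.9], 0.7
w_nu  = np.array([0.30, 0.40, 0.30])     # weights of nu
w_nu2 = np.array([0.25, 0.40, 0.35])     # weights of a nearby nu' on the same support

kinf_nu, lam_nu = k_inf_and_lambda(support, w_nu, mu)
kinf_nu2, _ = k_inf_and_lambda(support, w_nu2, mu)
m_lam = np.log((1 + lam_nu * mu) / (1 + lam_nu * (mu - 1)))   # the constant M_{lambda_nu}
l1 = np.abs(w_nu - w_nu2).sum()                               # ||nu - nu'||_1
print(kinf_nu - kinf_nu2, m_lam * l1)                         # expect the left value <= the right one
```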

We now consider the case where \lambda_{\nu}=1/(1-\mu). By (13) and variational formulation (12), we have that for all \lambda\in\bigl{[}0,\,1/(1-\mu)\bigr{)},

\displaystyle\mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant\mathcal{K}_{\inf}(\nu,\mu)-\mathbb{E}_{\nu^{\prime}}\bigl[\phi_{\lambda}(X)\bigr]\\ \displaystyle=\Bigl(\mathcal{K}_{\inf}(\nu,\mu)-\mathbb{E}_{\nu}\bigl[\phi_{\lambda}(X)\bigr]\Bigr)+\Bigl(\mathbb{E}_{\nu}\bigl[\phi_{\lambda}(X)\bigr]-\mathbb{E}_{\nu^{\prime}}\bigl[\phi_{\lambda}(X)\bigr]\Bigr)\,.

The second difference is bounded according to (13); the first difference is bounded by concavity of the mapping \lambda\in\bigl[0,\,1/(1-\mu)\bigr]\,\longmapsto\,\phi_{\lambda}(x), for all x:

\displaystyle\mathbb{E}_{\nu}\bigl[\phi_{\lambda}(X)\bigr]\geqslant\bigl(1-\lambda(1-\mu)\bigr)\,\mathbb{E}_{\nu}\bigl[\phi_{0}(X)\bigr]+\lambda(1-\mu)\,\mathbb{E}_{\nu}\bigl[\phi_{1/(1-\mu)}(X)\bigr]\\ \displaystyle=\lambda(1-\mu)\,\mathbb{E}_{\nu}\bigl[\phi_{1/(1-\mu)}(X)\bigr]=\lambda(1-\mu)\,\mathcal{K}_{\inf}(\nu,\mu)\,, (14)

since \phi_{0} is the null function and \lambda_{\nu}=1/(1-\mu). Putting all pieces together, we have proved that for all \lambda\in\bigl{[}0,\,1/(1-\mu)\bigr{)},

\mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant\bigl% {(}1-\lambda(1-\mu)\bigr{)}\,\mathcal{K}_{\inf}(\nu,\mu)+M_{\lambda}\,% \Arrowvert\nu-\nu^{\prime}\Arrowvert_{1}\,. (15)

We recall that by assumption, \mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu^{\prime},\mu)\geqslant% \alpha\,\mathcal{K}_{\inf}(\nu,\mu) with \alpha\in(0,1), so that the choice \lambda=(1-\alpha/2)/(1-\mu), which indeed lies in \bigl{(}0,\,1/(1-\mu)\bigr{)}, is such that

M_{\lambda}=\log\!\left(1+\frac{\lambda}{1+\lambda(\mu-1)}\right)=\log\!\left(% 1+\frac{\lambda}{\alpha/2}\right)\leqslant\frac{2\lambda}{\alpha}\,,

so that (15) entails

\alpha\,\mathcal{K}_{\inf}(\nu,\mu)\leqslant\frac{\alpha}{2}\,\mathcal{K}_{% \inf}(\nu,\mu)+\frac{2\lambda}{\alpha}\,\Arrowvert\nu-\nu^{\prime}\Arrowvert_{% 1}\,,

and finally

\Arrowvert\nu-\nu^{\prime}\Arrowvert_{1}\geqslant\frac{\alpha^{2}}{4\lambda}=\frac{\alpha^{2}(1-\mu)}{4\,(1-\alpha/2)}=\frac{1-\mu}{(2/\alpha)\,\bigl((2/\alpha)-1\bigr)}\,;

which concludes the proof. \qed

Proof of Lemma 4.1:

In Honda and Takemura (2010b) it is shown that in this case, \mathcal{K}_{\inf}(\nu,\mu) is differentiable in \mu\in(E(\nu),1) with

\displaystyle\frac{1}{1-\mu}\geqslant\frac{\partial}{\partial\mu}\mathcal{K}_{% \inf}(\nu,\mu)\geqslant\frac{\mu-E(\nu)}{\mu(1-\mu)}. (16)

We apply this result to the rewriting

\displaystyle\mathcal{K}_{\inf}(\nu,\mu)-\mathcal{K}_{\inf}(\nu,\mu-\varepsilon)=\int_{\mu-\varepsilon}^{\mu}\frac{\partial}{\partial u}\mathcal{K}_{\inf}(\nu,u)\,\mbox{d}u\,,

which already gives one part of the bound. For the lower bound, we note that by assumption -E(\nu)>-(\mu-\varepsilon) and that u(1-u)\leqslant 1/4 (since we consider distributions with support included in [0,1]); so that, for all u,

\displaystyle\frac{u-E(\nu)}{u(1-u)}\geqslant 4\bigl{(}u-(\mu-\varepsilon)% \bigr{)}\,.

Integrating this bound over [\mu-\varepsilon,\,\mu] (the right-hand side integrates to 2\varepsilon^{2}) concludes the main part of the proof.

Now, to see that the first inequality in the statement is always valid, we need to consider the case when E(\nu)\geqslant\mu, for which the statement is trivial since then \mathcal{K}_{\inf}(\nu,\mu)=0, and the case when \mu>E(\nu)\geqslant\mu-\varepsilon. But in the latter case, it is shown in Honda and Takemura (2010b, Lemma 6, case 2) that

\displaystyle\mathcal{K}_{\inf}(\nu,\mu)\leqslant\frac{\mu-E(\nu)}{1-\mu}\,,

which concludes the proof. \qed
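
As a numerical illustration of the sandwich (16) (again an illustrative sketch, not taken from the paper; it reuses the same grid evaluation of (12) as above, on an arbitrary finitely supported \nu), one can compare finite differences of \mu\mapsto\mathcal{K}_{\inf}(\nu,\mu) with the two bounds:

```python
import numpy as np

def k_inf(support, weights, mu, grid_size=100001):
    """K_inf(nu, mu) via the variational formula (12), by grid search over lambda."""
    lambdas = np.linspace(0.0, 1.0 / (1.0 - mu), grid_size)
    return np.max(np.log1p(np.outer(lambdas, mu - np.asarray(support, float))) @ np.asarray(weights, float))

support, weights = [0.1, 0.5, 0.9], [0.3, 0.4, 0.3]
mean_nu = np.dot(support, weights)        # E(nu) = 0.5 here
h = 1e-4
for mu in [0.6, 0.7, 0.8]:                # a few points in (E(nu), 1)
    slope = (k_inf(support, weights, mu + h) - k_inf(support, weights, mu - h)) / (2 * h)
    lower = (mu - mean_nu) / (mu * (1 - mu))
    upper = 1.0 / (1.0 - mu)
    print(mu, lower, slope, upper)        # expect lower <= slope <= upper
```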

Proof of Lemma 4.1:

First, \mathcal{C}_{\mu,\gamma} is non-empty as it always contains \delta_{\mu}, the Dirac mass at \mu.

The fact that \mathcal{C}_{\mu,\gamma} is convex follows from the convexity of \mathcal{K} in the pair of probability distributions that it takes as an argument. Indeed, for all \alpha\in[0,1] and \nu^{\prime},\,\nu^{\prime\prime}\in\mathcal{C}_{\mu,\gamma}, denoting by \nu^{\prime}_{\mu},\,\nu^{\prime\prime}_{\mu} some distributions such that the defining conditions in \mathcal{C}_{\mu,\gamma} are satisfied, we have that

E\bigl{(}\alpha\nu^{\prime}_{\mu}+(1-\alpha)\nu^{\prime\prime}_{\mu}\bigr{)}>\mu

and

\mathcal{K}\bigl{(}\alpha\nu^{\prime}+(1-\alpha)\nu^{\prime\prime},\,\alpha\nu% ^{\prime}_{\mu}+(1-\alpha)\nu^{\prime\prime}_{\mu}\bigr{)}\leqslant\alpha\,% \mathcal{K}\bigl{(}\nu^{\prime},\nu^{\prime}_{\mu}\bigr{)}+(1-\alpha)\,% \mathcal{K}\bigl{(}\nu^{\prime\prime},\nu^{\prime\prime}_{\mu}\bigr{)}<\gamma\,.

We prove that \mathcal{C}_{\mu,\gamma} is an open set. With each \nu^{\prime}\in\mathcal{C}_{\mu,\gamma}, we associate a distribution \nu^{\prime}_{\mu} satisfying the defining constraints in \mathcal{C}_{\mu,\gamma}; by choosing

\alpha=\frac{1-\mu\big{/}E\bigl{(}\nu^{\prime}_{\mu}\bigr{)}}{2}\,\,\in(0,\,1/% 2),

we have that the open set formed by the

(1-\alpha)\,\nu^{\prime}+\alpha\,\nu^{\prime\prime},\qquad\nu^{\prime\prime}% \in\mbox{B}(\nu^{\prime},1)

is contained in \mathcal{C}_{\mu,\gamma}, where \mbox{B}(\nu^{\prime},1) denotes the ball with center \nu^{\prime} and radius 1 in the norm \left\Arrowvert\,\cdot\,\right\Arrowvert over \mathcal{P}(\mathcal{X}). Indeed, we have on the one hand,

E\bigl{(}(1-\alpha)\,\nu^{\prime}_{\mu}+\alpha\,\nu^{\prime\prime}\bigr{)}% \geqslant(1-\alpha)\,E\bigl{(}\nu^{\prime}_{\mu}\bigr{)}\geqslant\left(1-\frac% {1-\mu\big{/}E\bigl{(}\nu^{\prime}_{\mu}\bigr{)}}{2}\right)E\bigl{(}\nu^{% \prime}_{\mu}\bigr{)}=\frac{E\bigl{(}\nu^{\prime}_{\mu}\bigr{)}+\mu}{2}>\mu\,,

and on the other hand, by convexity of the Kullback-Leibler divergence,

\mathcal{K}\bigl{(}(1-\alpha)\,\nu^{\prime}+\alpha\,\nu^{\prime\prime},\,(1-% \alpha)\,\nu^{\prime}_{\mu}+\alpha\,\nu^{\prime\prime}\bigr{)}\leqslant(1-% \alpha)\,\mathcal{K}\bigl{(}\nu^{\prime},\,\nu^{\prime}_{\mu}\bigr{)}<(1-% \alpha)\gamma\,.

To prove the desired inclusion, namely that every \nu^{\prime}\in\mathcal{P}_{F}\bigl([0,1]\bigr) with \mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant\gamma belongs to \overline{\mathcal{C}}_{\mu,\gamma}, we first note that in the case of \mathcal{P}_{F}\bigl([0,1]\bigr), Honda and Takemura (2010b) show that one has the rewriting

\mathcal{K}_{\inf}(\nu,\mu)=\min\,\Bigl{\{}\mathcal{K}(\nu,\nu^{\prime}):\ \ % \nu^{\prime}\in\mathcal{P}_{F}\bigl{(}[0,1]\bigr{)}\ \ \mbox{\rm s.t.}\ \ E(% \nu^{\prime})\geqslant\mu\Bigr{\}}\,;

in particular, the infimum is achieved with this new formulation. Hence,

\mathcal{C}_{\mu,\gamma}=\Bigl{\{}\nu^{\prime}\in\mathcal{P}_{F}\bigl{(}[0,1]% \bigr{)}:\ \ \exists\,\nu^{\prime}_{\mu}\in\mathcal{P}_{F}\bigl{(}[0,1]\bigr{)% }\ \ \mbox{s.t.}\ \ E\bigl{(}\nu^{\prime}_{\mu}\bigr{)}\geqslant\mu\ \ \mbox{% and}\ \ \mathcal{K}\bigl{(}\nu^{\prime},\nu^{\prime}_{\mu}\bigr{)}<\gamma\Bigr% {\}}\,.

Also, an element of the set of interest is therefore a \nu^{\prime}\in\mathcal{P}_{F}\bigl([0,1]\bigr) such that \mathcal{K}_{\inf}(\nu^{\prime},\mu)\leqslant\gamma, that is, such that there exists \nu^{\prime}_{\mu}\in\mathcal{P}_{F}\bigl([0,1]\bigr) with E\bigl(\nu^{\prime}_{\mu}\bigr)\geqslant\mu and \mathcal{K}\bigl(\nu^{\prime},\nu^{\prime}_{\mu}\bigr)\leqslant\gamma. Now, the distributions

\nu^{\prime}_{n}=\left(1-\frac{1}{n}\right)\nu^{\prime}+\frac{1}{n}\delta_{1}% \,,\qquad\mbox{thanks to the}\qquad\nu^{\prime}_{\mu,n}=\left(1-\frac{1}{n}% \right)\nu^{\prime}_{\mu}+\frac{1}{n}\delta_{1}\,,

all belong to \mathcal{C}_{\mu,\gamma}, as, similarly to the above argument,

E\bigl(\nu^{\prime}_{\mu,n}\bigr)\geqslant\mu+\frac{1-\mu}{n}>\mu\qquad\mbox{and}\qquad\mathcal{K}\bigl(\nu^{\prime}_{n},\,\nu^{\prime}_{\mu,n}\bigr)\leqslant\left(1-\frac{1}{n}\right)\mathcal{K}\bigl(\nu^{\prime},\nu^{\prime}_{\mu}\bigr)<\gamma\,.

In addition, we have by construction that the \nu^{\prime}_{n} converge to \nu^{\prime}; hence, \nu^{\prime}\in\overline{\mathcal{C}}_{\mu,\gamma}. \qed

A.4 The method of types

Let X_{1},X_{2},\ldots be a sequence of random variables that are i.i.d. according to a distribution denoted by \nu. In this subsection, we will index all probabilities and expectations by \nu.

For all k\geqslant 1, we denote by \mathcal{E}_{k} the set of possible values (the so-called types) of the empirical distribution

\widehat{\nu}_{k}=\frac{1}{k}\sum_{j=1}^{k}\delta_{X_{j}}\,.

If \nu has a finite support denoted by \mathcal{S}, then the cardinality |\mathcal{E}_{k}| of \mathcal{E}_{k} is bounded by (k+1)^{|\mathcal{S}|}.

Lemma. In the case where \nu has a finite support, for all k\geqslant 1 and \kappa\in\mathcal{E}_{k},

\mathbb{P}_{\nu}\bigl{\{}\widehat{\nu}_{k}=\kappa\bigr{\}}\leqslant e^{-k\,% \mathcal{K}(\kappa,\nu)}\,.
Corollary. In the case where \nu has a finite support, for all k\geqslant 1 and all \gamma>0,

\displaystyle\mathbb{P}\Bigl{\{}\mathcal{K}\bigl{(}\widehat{\nu}_{k},\,\nu% \bigr{)}>\gamma\Bigr{\}}=\sum_{\kappa\in\mathcal{E}_{k}}\mathbb{I}_{\{\mathcal% {K}(\kappa,\nu)>\gamma\}}\,\mathbb{P}_{\nu}\bigl{\{}\widehat{\nu}_{k}=\kappa% \bigr{\}}\\ \displaystyle\leqslant\sum_{\kappa\in\mathcal{E}_{k}}\mathbb{I}_{\{\mathcal{K}% (\kappa,\nu)>\gamma\}}\,e^{-k\,\mathcal{K}(\kappa,\nu)}\leqslant|\mathcal{E}_{% k}|\,e^{-k\gamma}\leqslant(k+1)^{|\mathcal{S}|}\,e^{-k\gamma}\,. (17)
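
Both bounds can be checked exactly on a small example (an illustrative sketch, not part of the paper; the three-point distribution, the value of k and the threshold \gamma are arbitrary) by enumerating all types of size k and computing their exact multinomial probabilities:

```python
from math import exp, factorial, log

def kl(counts, k, probs):
    """K(kappa, nu) for the type kappa = counts / k, with the convention 0 log 0 = 0."""
    return sum((c / k) * log((c / k) / p) for c, p in zip(counts, probs) if c > 0)

def type_prob(counts, probs):
    """Exact probability that the empirical distribution of k i.i.d. draws equals this type."""
    coef = factorial(sum(counts))
    for c in counts:
        coef //= factorial(c)
    prob = float(coef)
    for c, p in zip(counts, probs):
        prob *= p ** c
    return prob

probs, k = [0.2, 0.5, 0.3], 40                       # nu on a 3-point support S, k draws
types = [(a, b, k - a - b) for a in range(k + 1) for b in range(k + 1 - a)]

# Lemma above: P(hat nu_k = kappa) <= exp(-k K(kappa, nu)) for every type kappa.
assert all(type_prob(t, probs) <= exp(-k * kl(t, k, probs)) + 1e-12 for t in types)

# Corollary (17): P(K(hat nu_k, nu) > gamma) <= (k+1)^{|S|} exp(-k gamma).
gamma = 0.4
lhs = sum(type_prob(t, probs) for t in types if kl(t, k, probs) > gamma)
rhs = (k + 1) ** len(probs) * exp(-k * gamma)
print(lhs, rhs)                                      # the exact tail probability vs. the bound
```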