Bandits with heavy tail


Sébastien Bubeck (Department of Operations Research and Financial Engineering, Princeton University), Nicolò Cesa-Bianchi (Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano, Italy), Gábor Lugosi (ICREA and Department of Economics, Universitat Pompeu Fabra; supported by the Spanish Ministry of Science and Technology grant MTM2009-09063 and the PASCAL2 Network of Excellence under EC grant no. 216886).
July 4, 2019
Abstract

The stochastic multi-armed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper we examine the bandit problem under the weaker assumption that the distributions have moments of order 1+ε, for some ε ∈ (0,1]. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean, such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds, which show that the best achievable regret deteriorates when ε < 1.

1 Introduction

In this paper we investigate the classical stochastic multi-armed bandit problem introduced by Robbins (1952) and described as follows: an agent facing K actions (or bandit arms) selects one arm at every time step. With each arm i ∈ {1,…,K} there is an associated probability distribution ν_i with finite mean μ_i. These distributions are unknown to the agent. At each round t = 1,…,n, the agent chooses an arm I_t and observes a reward drawn from ν_{I_t}, independently from the past given I_t. The goal of the agent is to minimize the regret

 R_n = n max_{i=1,…,K} μ_i − Σ_{t=1}^{n} E[μ_{I_t}] .

We refer the reader to Bubeck and Cesa-Bianchi (2012) for a survey of the extensive literature on this problem and its variations. The vast majority of authors assume that the unknown distributions are sub-Gaussian, that is, the moment generating function of each ν_i is controlled: if X is a random variable drawn according to ν_i, then for all λ ∈ ℝ,

 ln E e^{λ(X−EX)} ≤ vλ²/2   and   ln E e^{λ(EX−X)} ≤ vλ²/2   (1)

where v > 0, the so-called "variance factor", is a parameter that is usually assumed to be known. In particular, if rewards take values in [0,1], then by Hoeffding's lemma one may take v = 1/4. Similarly to the asymptotic bound of (Agrawal, 1995, Theorem 4.10), this moment assumption was generalized in (Bubeck and Cesa-Bianchi, 2012, Chapter 2) by assuming that there exists a convex function ψ such that, for all λ ≥ 0,

 ln E e^{λ(X−EX)} ≤ ψ(λ)   and   ln E e^{λ(EX−X)} ≤ ψ(λ) .   (2)

Then one can show that the so-called ψ-UCB strategy (a variant of the basic UCB strategy of Auer et al. (2002)) satisfies the following regret guarantee. Let Δ_i = max_{j=1,…,K} μ_j − μ_i, and let ψ* be the Legendre-Fenchel transform of ψ, defined by

 ψ*(ε) = sup_{λ∈ℝ} (λε − ψ(λ)) .

Then ψ-UCB (more precisely, (α,ψ)-UCB with α = 4) satisfies

 R_n ≤ Σ_{i:Δ_i>0} ( (4Δ_i/ψ*(Δ_i/2)) ln n + 2Δ_i ) .

In particular, when the reward distributions are sub-Gaussian, the regret bound is of the order of Σ_{i:Δ_i>0} (v/Δ_i) log n, which is known to be optimal even for bounded reward distributions, see Auer et al. (2002).

While this result shows that assumptions weaker than sub-Gaussian distributions may suffice for a logarithmic regret, it still requires the distributions to have a finite moment generating function. Another disadvantage of the bound above is that the dependence on the gaps Δ_i deteriorates as the tails of the distributions become heavier. In fact, as we show in this paper, the bound is sub-optimal when the tails are heavier than sub-Gaussian.

In this paper we investigate the behavior of the regret when the distributions are heavy-tailed and might not have a finite moment generating function. We show that under significantly weaker assumptions, regret bounds of the same form as in the sub-Gaussian case may be achieved. In fact, the only condition we need is that the reward distributions have a finite variance. Moreover, even if the variance is infinite but the distributions have finite moments of order 1+ε for some ε ∈ (0,1), one may still achieve a regret logarithmic in the number of rounds, though the dependency on the Δ_i's worsens as ε gets smaller. For instance, for distributions with moment of order 1+ε bounded by u we derive a strategy that satisfies

 R_n ≤ Σ_{i:Δ_i>0} ( 8 (4u/Δ_i)^{1/ε} log n + 5Δ_i ) .

The key to this result is to replace the empirical mean by more refined robust estimators of the mean and construct “upper confidence bound” strategies.

We also prove matching lower bounds that show that the proposed strategies are optimal up to constant factors. In particular, the dependency on (1/Δ_i)^{1/ε} is unavoidable.

In the following we start by defining a general class of sampling strategies based on the availability of mean estimators with certain performance guarantees. Then we examine various estimators for the mean. For each estimator we describe its performance (in terms of concentration around the mean) and deduce the corresponding regret bound.

2 Robust upper confidence bound strategies

The rough idea behind upper confidence bound (UCB) strategies (see Lai and Robbins (1985), Agrawal (1995), Auer et al. (2002)) is that one should choose an arm for which the sum of its estimated mean and a confidence interval is highest. When the reward distributions all satisfy the sub-Gaussian condition (1) for a common variance factor v, then such a confidence interval is easy to obtain. Suppose that at a certain time instance arm i has been sampled s times and the observed rewards are X_{i,1},…,X_{i,s}. Then the X_{i,r}, r = 1,…,s, are i.i.d. random variables with mean μ_i, and by a simple Chernoff bound, for any δ ∈ (0,1), the empirical mean satisfies, with probability at least 1−δ,

 (1/s) Σ_{r=1}^{s} X_{i,r} ≤ μ_i + √(2v log(1/δ)/s) .

This property of the empirical mean turns out to be crucial in order to achieve a regret of optimal order. However, when the sub-Gaussian assumption does not hold, one cannot expect the empirical mean to have such accuracy. In fact, if one only knows, say, that the variance of each ν_i is bounded, then the best possible confidence intervals are significantly wider, deteriorating the performance of standard UCB strategies. (See Appendix A for properties of the empirical mean under heavy-tailed distributions.)

The key to successfully handling heavy-tailed reward distributions is to replace the empirical mean with other, more robust, estimators of the mean. All we need is a performance guarantee like the one shown above for the empirical mean. More precisely, we need a mean estimator with the following property.

Assumption 1

Let ε ∈ (0,1] be a positive parameter and let c, v be positive constants. Let X_1,…,X_n be i.i.d. random variables with finite mean μ. Suppose that for all δ ∈ (0,1) there exists an estimator ˆμ = ˆμ(n,δ) such that, with probability at least 1−δ,

 ˆμ ≤ μ + v^{1/(1+ε)} (c log(1/δ)/n)^{ε/(1+ε)}

and also, with probability at least 1−δ,

 μ ≤ ˆμ + v^{1/(1+ε)} (c log(1/δ)/n)^{ε/(1+ε)} .

For example, if the distribution of the X_i satisfies the sub-Gaussian condition (1), then Assumption 1 is satisfied with ε = 1, c = 2, and v equal to the variance factor. Interestingly, the assumption may be satisfied by significantly more general distributions, using more sophisticated mean estimators. We recall some of these estimators in the following subsections, where we also show how they satisfy Assumption 1. As we shall see, the basic requirement for Assumption 1 to be satisfied is that the distribution of the X_i has a finite moment of order 1+ε.

We are now ready to define our generalized robust UCB strategy, described in Figure 1. We denote by T_i(t) the (random) number of times arm i is selected up to time t.
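The policy of Figure 1 can be sketched as follows. This is a minimal Python sketch, not the authors' code: the function names, the plug-in `estimator` interface, and the confidence-level schedule are our illustrative assumptions, while the width of the confidence term is taken from Assumption 1 with constants v, c, and ε.

```python
import math

def robust_ucb(arms, n, estimator, v, c, eps):
    """Sketch of a robust UCB policy (illustrative, not the paper's exact code).

    arms[i]() draws a reward from arm i; estimator(rewards, delta) returns a
    mean estimate from `rewards` at confidence level delta, as in Assumption 1.
    """
    K = len(arms)
    rewards = [[] for _ in range(K)]
    for t in range(1, n + 1):
        if t <= K:
            i = t - 1  # initialization: pull each arm once
        else:
            def index(j):
                s = len(rewards[j])
                # robust mean estimate plus the confidence width of Assumption 1
                width = v ** (1 / (1 + eps)) * (c * math.log(t ** 2) / s) ** (eps / (1 + eps))
                return estimator(rewards[j], t ** -2) + width
            i = max(range(K), key=index)  # pull the arm with the highest index
        rewards[i].append(arms[i]())
    return rewards
```

With the plain empirical mean plugged in as `estimator`, this reduces to a standard UCB-type policy; the robust variants discussed below substitute the truncated mean, the median-of-means, or Catoni's estimator.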

The following proposition gives a performance bound for the robust ucb policy provided that the reward distributions and the mean estimator used by the policy jointly satisfy Assumption 1. Below we exhibit several mean estimators that, under various moment assumptions, lead to regret bounds of optimal order.

Proposition 1

Let ε ∈ (0,1] and let ˆμ be a mean estimator. Suppose that the distributions ν_1,…,ν_K are such that the mean estimator satisfies Assumption 1 for each of them. Then the regret of the Robust-UCB policy satisfies

 R_n ≤ Σ_{i:Δ_i>0} ( 2c (v/Δ_i)^{1/ε} log n + 5Δ_i ) .   (3)

Also, if n is such that 2c v^{1/ε} Δ_i^{−(1+ε)/ε} log n ≥ 5 for each suboptimal arm i, then

 R_n ≤ n^{1/(1+ε)} (4Kc log n)^{ε/(1+ε)} v^{1/(1+ε)} .   (4)

Note that a regret of at least Σ_{i:Δ_i>0} Δ_i is suffered by any strategy that pulls each arm at least once. Thus, the interesting term in (3) is the one of the order of (v/Δ_i)^{1/ε} log n. We show below in Theorem 2 that this term is of optimal order under a moment assumption on the reward distributions. We also show in Theorem 2 that the gap-independent inequality (4) is optimal up to a logarithmic factor.

Proof. The proofs of both (3) and (4) rely on bounding the expected number of pulls of any suboptimal arm. More precisely, in the first two steps of the proof we show that, for any i such that Δ_i > 0,

 E T_i(n) ≤ 2c v^{1/ε} Δ_i^{−(1+ε)/ε} log n + 5 .   (5)

To lighten notation, we introduce u = ⌈2c v^{1/ε} Δ_i^{−(1+ε)/ε} log n⌉. Note that, up to rounding, (5) is equivalent to E T_i(n) ≤ u + 4.

First step.
We show that if I_t = i, then at least one of the following three inequalities is true: either

 B_{i*,T_{i*}(t−1),t} ≤ μ* ,   (6)

or

 ˆμ_{i,T_i(t−1),t} > μ_i + v^{1/(1+ε)} (c log(t²)/T_i(t−1))^{ε/(1+ε)} ,   (7)

or

 T_i(t−1) < 2c v^{1/ε} Δ_i^{−(1+ε)/ε} log n .   (8)

Indeed, assume that all three inequalities are false. Then we have

 B_{i*,T_{i*}(t−1),t} > μ* = μ_i + Δ_i
   ≥ μ_i + 2 v^{1/(1+ε)} (c log(t²)/T_i(t−1))^{ε/(1+ε)}
   ≥ ˆμ_{i,T_i(t−1),t} + v^{1/(1+ε)} (c log(t²)/T_i(t−1))^{ε/(1+ε)}
   = B_{i,T_i(t−1),t}

which implies, in particular, that I_t ≠ i.

Second step.
Here we first bound the probability that (6) or (7) holds. By Assumption 1, together with a union bound over the possible values of T_{i*}(t−1) and T_i(t−1), we obtain

 P((6) or (7) is true) ≤ 2 Σ_{s=1}^{t} 1/t⁴ ≤ 2/t³ .

Now using the first step, we obtain

 E T_i(n) = E Σ_{t=1}^{n} 1{I_t = i}
   ≤ u + E Σ_{t=u+1}^{n} 1{I_t = i and (8) is false}
   ≤ u + E Σ_{t=u+1}^{n} 1{(6) or (7) is true}
   ≤ u + Σ_{t=u+1}^{n} 2/t³
   ≤ u + 4 .

This concludes the proof of (5).

Third step.
Using that R_n = Σ_{i:Δ_i>0} Δ_i E T_i(n) together with (5), we directly obtain (3). On the other hand, for (4) we use Hölder's inequality to obtain

 R_n = Σ_{i:Δ_i>0} Δ_i (E T_i(n))^{ε/(1+ε)} (E T_i(n))^{1/(1+ε)}
   ≤ Σ_{i:Δ_i>0} Δ_i (E T_i(n))^{1/(1+ε)} (2c v^{1/ε} Δ_i^{−(1+ε)/ε} log n + 5)^{ε/(1+ε)}
   ≤ Σ_{i:Δ_i>0} Δ_i (E T_i(n))^{1/(1+ε)} (4c v^{1/ε} Δ_i^{−(1+ε)/ε} log n)^{ε/(1+ε)}   (by the assumption on n)
   ≤ K^{ε/(1+ε)} (Σ_{i:Δ_i>0} E T_i(n))^{1/(1+ε)} (4c)^{ε/(1+ε)} v^{1/(1+ε)} (log n)^{ε/(1+ε)}   (by Hölder's inequality)
   ≤ n^{1/(1+ε)} (4Kc log n)^{ε/(1+ε)} v^{1/(1+ε)} .

In the next sections we show how Proposition 1 may be applied, with different mean estimators, to obtain optimal regret bounds for possibly heavy-tailed reward distributions.

2.1 Truncated empirical mean

In this section we consider the simplest of the proposed mean estimators, a truncated version of the empirical mean. This estimator is similar to the “winsorized mean” and “trimmed mean” of Tukey, see Bickel (1965).

The following lemma shows that if the raw moment of order 1+ε is bounded, then the truncated mean satisfies Assumption 1.

Lemma 1

Let δ ∈ (0,1), ε ∈ (0,1], and u > 0. Consider the truncated empirical mean defined as

 ˆμ_T = (1/n) Σ_{t=1}^{n} X_t 1{ |X_t| ≤ (ut/log(1/δ))^{1/(1+ε)} } .

If E|X|^{1+ε} ≤ u, then with probability at least 1−δ,

 ˆμ_T ≤ μ + 4 u^{1/(1+ε)} (log(1/δ)/n)^{ε/(1+ε)} .

Proof. Let B_t = (ut/log(1/δ))^{1/(1+ε)}. From Bernstein's inequality for bounded random variables, noting that E[X² 1{|X| ≤ B_t}] ≤ u B_t^{1−ε}, we have, with probability at least 1−δ,

 EX − (1/n) Σ_{t=1}^{n} X_t 1{|X_t| ≤ B_t}
   = (1/n) Σ_{t=1}^{n} (EX − E[X 1{|X| ≤ B_t}]) + (1/n) Σ_{t=1}^{n} (E[X 1{|X| ≤ B_t}] − X_t 1{|X_t| ≤ B_t})
   = (1/n) Σ_{t=1}^{n} E[X 1{|X| > B_t}] + (1/n) Σ_{t=1}^{n} (E[X 1{|X| ≤ B_t}] − X_t 1{|X_t| ≤ B_t})
   ≤ (1/n) Σ_{t=1}^{n} u/B_t^{ε} + √(2 B_n^{1−ε} u log(1/δ)/n) + B_n log(1/δ)/(3n) .

An easy computation concludes the proof.
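As a concrete illustration, a minimal implementation of the truncated empirical mean of Lemma 1 might look as follows. This is a sketch under the lemma's assumptions; the function name and signature are ours, not from any library.

```python
import math

def truncated_mean(xs, delta, u, eps):
    """Truncated empirical mean of Lemma 1 (illustrative sketch).

    The t-th sample is kept only if |x_t| <= B_t = (u*t/log(1/delta))^(1/(1+eps)).
    Assumes the raw moment bound E|X|^(1+eps) <= u."""
    n = len(xs)
    total = 0.0
    for t, x in enumerate(xs, start=1):
        b_t = (u * t / math.log(1 / delta)) ** (1 / (1 + eps))
        if abs(x) <= b_t:  # censor samples exceeding the truncation level
            total += x
    return total / n
```

Note that the truncation level B_t grows with t, so later samples are censored less aggressively; a single extreme outlier among the n samples is simply discarded rather than dominating the average.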

The following is now a straightforward corollary of Proposition 1 and Lemma 1.

Theorem 1

Let ε ∈ (0,1] and u > 0. Assume that the reward distributions satisfy

 E_{X∼ν_i} |X|^{1+ε} ≤ u   ∀ i ∈ {1,…,K} .   (9)

Then the regret of the Robust-UCB policy used with the truncated mean estimator defined above satisfies

 R_n ≤ Σ_{i:Δ_i>0} ( 8 (4u/Δ_i)^{1/ε} log n + 5Δ_i ) .

When ε = 1, the only assumption of the theorem above is that each reward distribution has a finite variance. In this case the obtained regret bound is of the order of Σ_{i:Δ_i>0} (u/Δ_i) log n, which is known to be not improvable in general, even when the rewards are bounded (note, however, that the KL-UCB algorithm of Garivier and Cappé (2011) is never worse than Robust-UCB in the case of bounded rewards). We find it remarkable that a regret of this order may be achieved under the only assumption of finite variance, and that one cannot improve the order by imposing stronger tail conditions.

When the variance is infinite but moments of order 1+ε < 2 are available, we still have a regret that depends only logarithmically on n. The bound deteriorates slightly, as the dependency on 1/Δ_i is replaced by (1/Δ_i)^{1/ε}. We show next that this dependency is inevitable.

Theorem 2

For any ε ∈ (0,1] and any sufficiently small Δ > 0, there exist two distributions ν₁ and ν₂ satisfying (9) with u = 1 and with μ₁ − μ₂ = Δ, such that the following holds. Consider an algorithm such that, for any two-armed bandit problem satisfying (9) with u = 1 and with arm 2 being suboptimal, one has E T₂(n) = o(n^a) for every a > 0. Then, on the two-armed bandit problem with distributions ν₁ and ν₂, the algorithm satisfies

 liminf_{n→+∞} R_n/log n ≥ 0.4 Δ^{−1/ε} .   (10)

Furthermore, for any fixed n, there exists a set of K distributions satisfying (9) with u = 1 and such that, for any algorithm, one has

 R_n ≥ 0.01 K^{ε/(1+ε)} n^{1/(1+ε)} .   (11)

Proof. To prove (10), we take ν₁ = (1 − γ^{1+ε}) δ₀ + γ^{1+ε} δ_{1/γ} and ν₂ = (1 − γ^{1+ε} + Δγ) δ₀ + (γ^{1+ε} − Δγ) δ_{1/γ}, where δ_x denotes the Dirac distribution at x and γ ∈ (0,1) satisfies γ^ε ≥ Δ. It is easy to see that ν₁ and ν₂ are well defined, and they satisfy (9) with u = 1 and μ₁ − μ₂ = Δ. Now clearly, the two-armed bandit problem with these two distributions is equivalent to the two-armed bandit problem with two Bernoulli distributions with parameters respectively γ^{1+ε} and γ^{1+ε} − Δγ. Slightly more formally, we could define a new algorithm that on these Bernoulli distributions behaves equivalently to the original algorithm on ν₁ and ν₂. Therefore, we can use (Bubeck, 2010, Theorem 2.7) to directly obtain the following lower bound for the new algorithm,

 liminf_{n→+∞} E T₂(n)/log n ≥ 1 / KL( Ber(γ^{1+ε} − Δγ), Ber(γ^{1+ε}) )

where KL denotes the Kullback-Leibler divergence. This implies the following lower bound for the original algorithm:

 liminf_{n→+∞} R_n/log n ≥ Δ / KL( Ber(γ^{1+ε} − Δγ), Ber(γ^{1+ε}) ) .

Equation (10) then follows directly by taking γ = Δ^{1/ε}, along with straightforward computations.

The proof of (11) follows the same scheme. We use the same distributions as above, and we consider the K-armed bandit problem where one arm has distribution ν₁ and the remaining K−1 arms have distribution ν₂. Furthermore, for this part of the proof we choose γ of order (K/n)^{1/(1+ε)}. Now we can use the same proof as for (Bubeck, 2010, Theorem 2.6) on the modified algorithm that runs on the Bernoulli distributions corresponding to ν₁ and ν₂. We leave the straightforward details to the reader.

2.2 Median of means

The truncated mean estimator and the corresponding bandit strategy are not entirely satisfactory, as they are not translation invariant: the arms selected by the strategy may change if all reward distributions are shifted by the same constant amount. The reason for this is that the truncation is centered, quite arbitrarily, around zero. If the raw moments E|X|^{1+ε} are small, then the strategy has a small regret. However, it would be more desirable to have a regret bound in terms of the centered moments E|X−μ|^{1+ε}. This is indeed possible if one replaces the truncated mean estimator by more sophisticated estimators of the mean. We show one such possibility, the "median-of-means" estimator, in this section. In the next section we discuss Catoni's M-estimator, a quite different alternative.

The median-of-means estimator was proposed by Alon et al. (2002). The simple idea is to divide the data into various disjoint blocks. Within each block one calculates the standard empirical mean, and one takes a median value of these empirical means. The next lemma shows that for a certain block size the estimator has the property required by our robust UCB strategy.

Lemma 2

Let ε ∈ (0,1] and δ ∈ (0,1). Let X_1,…,X_n be i.i.d. random variables with mean μ and centered moment of order 1+ε bounded as E|X−μ|^{1+ε} ≤ v. Let k = ⌊min{8 log(e^{1/8}/δ), n/2}⌋ and N = ⌊n/k⌋. Let

 ˆμ_1 = (1/N) Σ_{t=1}^{N} X_t ,  ˆμ_2 = (1/N) Σ_{t=N+1}^{2N} X_t , … ,  ˆμ_k = (1/N) Σ_{t=(k−1)N+1}^{kN} X_t

be k empirical mean estimates, each one computed on N data points. Consider ˆμ_M, a median of these empirical means. Then, with probability at least 1−δ,

 ˆμ_M ≤ μ + (12v)^{1/(1+ε)} (16 log(e^{1/8}/δ)/n)^{ε/(1+ε)} .

Proof. Let η > 0 and let Y_ℓ = 1{ˆμ_ℓ > μ + η} for ℓ = 1,…,k. According to equation (12) in the Appendix, each Y_ℓ has a Bernoulli distribution with parameter

 p ≤ 3v/(N^ε η^{1+ε}) .

Note that for

 η = (12v)^{1/(1+ε)} (1/N)^{ε/(1+ε)}

we have p ≤ 1/4. Thus, using Hoeffding's inequality for the tail of a binomial distribution, we get

 P(ˆμ_M > μ + η) = P( Σ_{ℓ=1}^{k} Y_ℓ ≥ k/2 ) ≤ exp(−2k(1/2 − p)²) ≤ exp(−k/8) ≤ δ .
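A minimal sketch of the estimator of Lemma 2 follows. The block-count formula mirrors the lemma; everything else (names, the tie-breaking choice of median for even k) is an illustrative assumption of ours.

```python
import math

def median_of_means(xs, delta):
    """Median-of-means sketch (Lemma 2): split the data into k blocks of N
    points each, average within blocks, return a median of the block means."""
    n = len(xs)
    # block count as in Lemma 2, clipped to [1, n/2]
    k = max(1, min(int(8 * math.log(math.e ** 0.125 / delta)), n // 2))
    N = n // k  # points per block; the n - k*N trailing points are unused
    block_means = sorted(sum(xs[j * N:(j + 1) * N]) / N for j in range(k))
    return block_means[k // 2]  # a median of the k block means
```

A single wild sample can corrupt at most one block mean, so it moves the median only if many blocks are corrupted simultaneously; this is what yields exponential, rather than polynomial, deviations.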

The next performance bound is a straightforward consequence of Proposition 1 and Lemma 2. In some situations it significantly improves on Theorem 1, as the bound depends on the centered moments of order 1+ε rather than on the raw moments.

Theorem 3

Let ε ∈ (0,1] and v > 0. Assume that the reward distributions satisfy

 E_{X∼ν_i} |X − μ_i|^{1+ε} ≤ v   ∀ i ∈ {1,…,K} .

Then the regret of the Robust-UCB policy used with the median-of-means estimator defined in Lemma 2 satisfies

 R_n ≤ Σ_{i:Δ_i>0} ( 32 (12v/Δ_i)^{1/ε} log n + 5Δ_i ) .

2.3 Catoni's M-estimator

Finally, we consider an elegant mean estimator introduced by Catoni (2010). As we will see, this estimator has performance guarantees similar to those of the median-of-means estimator, but with better, near-optimal, numerical constants. However, we only have a good guarantee in terms of the variance. Thus, in this section we assume that the variance is finite, and we do not consider the case ε < 1.

Catoni's mean estimator is defined as follows. Let ψ: ℝ → ℝ be a continuous strictly increasing function satisfying

 −log(1 − x + x²/2) ≤ ψ(x) ≤ log(1 + x + x²/2) .

Let δ ∈ (0,1) be such that n > 2 log(1/δ), and introduce

 α_δ = √( 2 log(1/δ) / ( n ( v + 2v log(1/δ)/(n − 2 log(1/δ)) ) ) ) .

If X_1,…,X_n are i.i.d. random variables, then Catoni's estimator ˆμ_C is defined as the unique value such that

 Σ_{i=1}^{n} ψ(α_δ (X_i − ˆμ_C)) = 0 .

Catoni (2010) proves that if n ≥ 4 log(1/δ) and the X_i have mean μ and variance at most v, then, with probability at least 1−δ,

 ˆμ_C ≤ μ + 2 √( v log(1/δ)/n )

and a similar bound holds for the lower tail. This bound has the same form as in Assumption 1, though it only holds under the additional requirement n ≥ 4 log(1/δ), and therefore it does not formally fit in the framework of the robust UCB strategy as described in Section 2. However, by a simple modification, one may define a strategy that incorporates such a restriction. In Figure 2 we describe a policy based on Catoni's mean estimator. The policy assumes that a known upper bound v on the largest variance of any reward distribution is available. Then, by a simple modification of the proof of Proposition 1, we obtain the following performance bound.
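Since ˆμ_C is defined only implicitly, computing it requires a root-finding step. The following Python sketch uses bisection, with one admissible influence function (the narrowest choice allowed by the two logarithmic envelopes above); the function names and the root-finding strategy are our illustrative assumptions.

```python
import math

def catoni_mean(xs, delta, v):
    """Sketch of Catoni's M-estimator: the root of sum(psi(alpha*(x - m))) = 0,
    with psi(x) = sign(x) * log(1 + |x| + x^2/2), which satisfies the two
    logarithmic envelopes. Requires n > 2*log(1/delta); v bounds the variance."""
    n = len(xs)
    ld = math.log(1 / delta)
    assert n > 2 * ld, "Catoni's estimator needs n > 2*log(1/delta)"
    alpha = math.sqrt(2 * ld / (n * (v + 2 * v * ld / (n - 2 * ld))))

    def psi(x):
        # odd, strictly increasing, squeezed between the two envelopes
        return math.copysign(math.log1p(abs(x) + x * x / 2), x)

    def score(m):  # strictly decreasing in m; its unique zero is the estimate
        return sum(psi(alpha * (x - m)) for x in xs)

    lo, hi = min(xs), max(xs)  # the root lies between the extreme observations
    for _ in range(100):  # plain bisection
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Because the influence function grows only logarithmically, a single extreme observation contributes a bounded amount to the score, which is the source of the estimator's robustness.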

Theorem 4

Let v > 0. Assume that the reward distributions satisfy

 E_{X∼ν_i} (X − μ_i)² ≤ v   ∀ i ∈ {1,…,K} .

Then the regret of the modified Robust-UCB policy satisfies

 R_n ≤ Σ_{i:Δ_i>0} ( (8v/Δ_i) log n + 8Δ_i log n + 5Δ_i ) .

The regret bound has better numerical constants than its analogue based on the median-of-means estimator. However, a term of the order of Δ_i log n appears, due to the restricted range of validity of Catoni's estimator.

3 Discussion and conclusions

In this work we have extended the UCB algorithm to heavy-tailed stochastic multi-armed bandit problems, in which the reward distributions have only moments of order 1+ε for some ε ∈ (0,1]. In this setting, we have compared three estimators for the mean reward of the arms: the median-of-means, the truncated mean, and Catoni's M-estimator. The median-of-means estimator gives a regret bound that depends on the central moments of order 1+ε of the reward distributions, without need of knowing bounds on these moments. The truncated mean estimator, instead, delivers a regret bound that depends on the raw moments of order 1+ε, and requires the knowledge of a bound on these moments. Finally, Catoni's estimator depends on the central moments like the median-of-means, but it requires the knowledge of a bound on the central moments, and only works in the special case ε = 1 (where it gives the best leading constants in the regret). A trade-off in the choice of the estimator appears if we take into account the computational costs involved in the update of each estimator as new rewards are observed. Indeed, while the truncated mean requires constant time and space per update, the median-of-means is slightly more difficult to update, requiring logarithmic space and time per update. Finally, Catoni's M-estimator requires linear space per update, which is an unfortunate feature in this sequential setting.

It is an interesting question whether there exists an estimator with the same good concentration properties as the median-of-means, but requiring only constant time and space per update. The truncated mean has good computational properties but the knowledge of raw moment bounds is required. So it is natural to ask whether we may drop this requirement for the truncated mean or some variants of it. Finally, our proof techniques heavily rely on the independence of rewards for each arm. It is unclear whether similar results could be obtained for heavy-tailed bandits with dependent reward processes.

While we focused our attention on bandit problems, the concentration results presented in this work may be naturally applied to other related sequential decision settings. Such examples include the racing algorithms of Maron and Moore (1997), and more generally nonparametric Monte Carlo estimation, see Dagum et al. (2000) and Domingo et al. (2002). These techniques are based on mean estimators, and current results are limited to the application of the empirical mean to bounded reward distributions.

Appendix A Empirical mean

In this appendix we discuss the behavior of the standard empirical mean when only moments of order 1+ε are available. We focus on finite-sample guarantees (i.e., non-asymptotic results), as this is the key property needed to obtain finite-time results for the multi-armed bandit problem.

Let X_1, X_2, … be a real i.i.d. sequence with finite mean μ. We assume that for some ε ∈ (0,1] and v > 0 one has E|X−μ|^{1+ε} ≤ v. We also denote by u an upper bound on the raw moment of order 1+ε, that is, E|X|^{1+ε} ≤ u.

Lemma 3

Let ˆμ be the empirical mean:

 ˆμ = (1/n) Σ_{t=1}^{n} X_t .

Then for any δ ∈ (0,1), with probability at least 1−δ, one has

 ˆμ ≤ μ + (3v/(δ n^ε))^{1/(1+ε)} .

Proof. Let a > 0. We have

 P(ˆμ − μ > η) ≤ P(∃ t ∈ {1,…,n}: |X_t − μ| > a) + P( (1/n) Σ_{t=1}^{n} (X_t − μ) 1{|X_t − μ| ≤ a} > η ) .

The first term on the right-hand side can be bounded by a union bound followed by Chebyshev's inequality (for moments of order 1+ε):

 P(∃ t ∈ {1,…,n}: |X_t − μ| > a) ≤ n E|X − μ|^{1+ε} / a^{1+ε} ≤ nv/a^{1+ε} .

On the other hand, Chebyshev's inequality together with the fact that E[(X − μ) 1{|X − μ| ≤ a}] = −E[(X − μ) 1{|X − μ| > a}] gives for the second term

 P( (1/n) Σ_{t=1}^{n} (X_t − μ) 1{|X_t − μ| ≤ a} > η )
   ≤ (1/η²) E[ ( (1/n) Σ_{t=1}^{n} (X_t − μ) 1{|X_t − μ| ≤ a} )² ]
   ≤ E[(X − μ)² 1{|X − μ| ≤ a}]/(nη²) + (E[(X − μ) 1{|X − μ| ≤ a}])²/η²
   = E[(X − μ)² 1{|X − μ| ≤ a}]/(nη²) + (E[(X − μ) 1{|X − μ| > a}])²/η² .

By applying a trivial manipulation to the first term, and using Hölder's inequality with exponents 1+ε and (1+ε)/ε for the second term, we obtain that the last expression above is upper bounded by

 E|X−μ|^{1+ε} a^{1−ε}/(nη²) + (E|X−μ|^{1+ε})^{2/(1+ε)} (P(|X−μ| > a))^{2ε/(1+ε)}/η² ≤ v a^{1−ε}/(nη²) + v²/(η² a^{2ε}) .

Thus we proved that

 P(ˆμ − μ > η) ≤ nv/a^{1+ε} + v a^{1−ε}/(nη²) + v²/(η² a^{2ε}) .

Taking a = nη entails

 P(ˆμ − μ > η) ≤ 2v/(n^ε η^{1+ε}) + (v/(n^ε η^{1+ε}))² .

Note that if v/(n^ε η^{1+ε}) ≥ 1 then the bound is trivial, and thus we always have

 P(ˆμ − μ > η) ≤ 3v/(n^ε η^{1+ε}) .   (12)

The proof now follows by choosing η such that δ = 3v/(n^ε η^{1+ε}).

It is easy to see that the order of magnitude of (12) is tight up to a constant factor. Indeed, let γ ∈ (0,1) and consider the distribution defined by P(X = 1/γ) = γ^{1+ε} and P(X = 0) = 1 − γ^{1+ε}. Clearly, the moments of order 1+ε of this distribution are bounded by 1 (indeed E|X|^{1+ε} = 1), so (12) shows that for an i.i.d. sequence drawn from this distribution, one has

 P(ˆμ − μ > η) ≤ 3/(n^ε η^{1+ε}) .

We can restrict our attention to the case where 3/(n^ε η^{1+ε}) ≤ 1, for otherwise the above upper bound is trivial. Now consider γ = 1/(2nη). Note that we have μ = γ^ε ≤ η, and in particular this implies n(η + μ) ≤ 2nη = 1/γ. From this last inequality and basic computations we get

 P(ˆμ − μ > η) ≥ P(∃ i ∈ {1,…,n}: X_i ≥ n(η + μ))
   ≥ P(∃ i ∈ {1,…,n}: X_i = 1/γ)
   = 1 − (1 − γ^{1+ε})^n
   ≥ 1 − exp(−1/(n^ε (2η)^{1+ε}))
   = 1/(n^ε (2η)^{1+ε}) + o(1/(n^ε (2η)^{1+ε}))

which shows that (12) is tight up to a constant factor for this distribution.

Clearly, the concentration properties of the empirical mean are much weaker than those of the truncated empirical mean or the median-of-means. Indeed, while the dependency on n in the confidence term is similar for the three estimators, the dependency on 1/δ is polynomial for the empirical mean and polylogarithmic for the truncated empirical mean and the median-of-means. As we just showed, this is not an artifact of the proof method: the empirical mean indeed has polynomial deviations (as opposed to the exponential deviations of the two other estimators). This observation is at the basis of the theory of robust statistics, and many approaches to fix this issue have been proposed; see for example Huber (1964, 1981). The empirical mean estimator has been previously applied to heavy-tailed stochastic bandits in Liu and Zhao (2011), obtaining polynomial, rather than logarithmic, regret bounds.
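To make the contrast concrete, here is a small self-contained computation (Python; the outlier magnitude and the confidence level are our illustrative choices, while the block construction follows Lemma 2): a single extreme observation moves the empirical mean by a full unit, whereas the median-of-means ignores it entirely.

```python
import math

# one outlier of size 1000 among 1000 points, reminiscent of the two-point
# distribution of the tightness example above
xs = [0.0] * 500 + [1000.0] + [0.0] * 499

empirical = sum(xs) / len(xs)  # dragged to 1.0 by the single outlier

# median of k block means, with k as in Lemma 2 for delta = 0.01
k = min(int(8 * math.log(math.e ** 0.125 / 0.01)), len(xs) // 2)
N = len(xs) // k  # points per block
mom = sorted(sum(xs[j * N:(j + 1) * N]) / N for j in range(k))[k // 2]
# the outlier corrupts only one of the k blocks, so the median is unaffected
```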

References

• Agrawal [1995] R. Agrawal. Sample mean based index policies with O(log n) regret for the multi-armed bandit problem. Advances in Applied Probability, 27:1054–1078, 1995.
• Alon et al. [2002] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58:137–147, 2002.
• Auer et al. [2002] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning Journal, 47(2-3):235–256, 2002.
• Bickel [1965] P. J. Bickel. On some robust estimates of location. Annals of Mathematical Statistics, 36:847–858, 1965.
• Bubeck [2010] S. Bubeck. Bandits Games and Clustering Foundations. PhD thesis, Université Lille 1, 2010.
• Bubeck and Cesa-Bianchi [2012] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Arxiv preprint arXiv:1204.5721, 2012.
• Catoni [2010] O. Catoni. Challenging the empirical mean and empirical variance: a deviation study. Arxiv preprint arXiv:1009.2048, 2010.
• Dagum et al. [2000] P. Dagum, R. Karp, M. Luby, and S. Ross. An optimal algorithm for Monte Carlo estimation. SIAM Journal on Computing, 29(5):1484–1496, 2000.
• Domingo et al. [2002] C. Domingo, R. Gavaldà, and O. Watanabe. Adaptive sampling methods for scaling up knowledge discovery algorithms. Data Mining and Knowledge Discovery, 6(2):131–152, 2002.
• Garivier and Cappé [2011] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of the 24th Annual Conference on Learning Theory (COLT), 2011.
• Huber [1964] P. J. Huber. Robust estimation of a location parameter. Annals of Mathematical Statistics, 35:73–101, 1964.
• Huber [1981] P. J. Huber. Robust Statistics. Wiley Interscience, 1981.
• Lai and Robbins [1985] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
• Liu and Zhao [2011] K. Liu and Q. Zhao. Deterministic Sequencing of Exploration and Exploitation for Multi-Armed Bandit Problems. ArXiv e-prints, 2011.
• Maron and Moore [1997] O. Maron and A.W. Moore. The racing algorithm: Model selection for lazy learners. Artificial Intelligence Review, 11(1):193–225, 1997.
• Robbins [1952] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.