\mathcal{X}–Armed Bandits

Sébastien Bubeck
Sequel Project, INRIA Lille
sebastien.bubeck@inria.fr

Rémi Munos
Sequel Project, INRIA Lille
remi.munos@inria.fr

Gilles Stoltz
Ecole Normale Supérieure\footnote{This research was carried out within the INRIA project CLASSIC hosted by Ecole normale supérieure and CNRS.}, CNRS
&
HEC Paris, CNRS,
gilles.stoltz@ens.fr

Csaba Szepesvári
University of Alberta, Department of Computing Science
szepesva@cs.ualberta.ca
Abstract

We consider a generalization of stochastic bandits where the set of arms, $\mathcal{X}$, is allowed to be a generic measurable space and the mean-payoff function is “locally Lipschitz” with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm selection policy, called HOO (hierarchical optimistic optimization), with improved regret bounds compared to previous results for a large class of problems. In particular, our results imply that if $\mathcal{X}$ is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally continuous with a known smoothness degree, then the expected regret of HOO is bounded up to a logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm when the dissimilarity is a metric. Our basic strategy has quadratic computational complexity as a function of the number of time steps and does not rely on the doubling trick. We also introduce a modified strategy, which relies on the doubling trick but runs in linearithmic time. Both results are improvements with respect to previous approaches.

1 Introduction

In the classical stochastic bandit problem a gambler tries to maximize his revenue by sequentially playing one of a finite number of slot machines that are associated with initially unknown (and potentially different) payoff distributions [Rob52]. Assuming old-fashioned slot machines, the gambler pulls the arms of the machines one by one in a sequential manner, simultaneously learning about the machines’ payoff-distributions and gaining actual monetary reward. Thus, in order to maximize his gain, the gambler must choose the next arm by taking into consideration both the urgency of gaining reward (“exploitation”) and acquiring new information (“exploration”).

Maximizing the total cumulative payoff is equivalent to minimizing the (total) regret, i.e., minimizing the difference between the total cumulative payoff of the gambler and the one of another clairvoyant gambler who chooses the arm with the best mean-payoff in every round. The quality of the gambler’s strategy can be characterized as the rate of growth of his expected regret with time. In particular, if this rate of growth is sublinear, the gambler in the long run plays as well as the clairvoyant gambler. In this case the gambler’s strategy is called Hannan consistent.

Bandit problems have been studied in the Bayesian framework [gittins89], as well as in the frequentist parametric [LR85; Agr95] and non-parametric settings [ACF02], and even in non-stochastic scenarios [ACFS02; CL06]. While in the Bayesian case the question is whether the optimal actions can be computed efficiently, in the frequentist case the question is how to achieve a low rate of growth of the regret in the absence of prior information, i.e., it is a statistical question. In this paper we consider the stochastic, frequentist, non-parametric setting.

Although the first papers studied bandits with a finite number of arms, researchers soon realized that bandits with infinitely many arms are also interesting, as well as practically significant. One particularly important case is when the arms are identified by a finite number of continuous-valued parameters, resulting in online optimization problems over continuous finite-dimensional spaces. Such problems are ubiquitous in operations research and control. Examples are “pricing a new product with uncertain demand in order to maximize revenue, controlling the transmission power of a wireless communication system in a noisy channel to maximize the number of bits transmitted per unit of power, and calibrating the temperature or levels of other inputs to a reaction so as to maximize the yield of a chemical process” [Cop04]. Other examples are optimizing parameters of schedules, rotational systems, traffic networks or online parameter tuning of numerical methods. During the last decades numerous authors have investigated such “continuum-armed” bandit problems [Agr95b; Kle04; AOS07; KSU08; Cop04]. A special case of interest, which forms a bridge between the case of a finite number of arms and the continuum-armed setting, is formed by bandit linear optimization; see [AHA08] and the references therein.

In many of the above-mentioned problems, however, the natural domain of some of the optimization parameters is a discrete set, while other parameters are still continuous-valued. For example, in the pricing problem different product lines could also be tested while tuning the price, or in the case of transmission power control different protocols could be tested while optimizing the power. In other problems, such as in online sequential search, the parameter-vector to be optimized is an infinite sequence over a finite alphabet [CM07; BM10].

The motivation for this paper is to handle all these various cases in a unified framework. More precisely, we consider a general setting that allows us to study bandits with almost no restriction on the set of arms. In particular, we allow the set of arms to be an arbitrary measurable space. Since we allow non-denumerable sets, we shall assume that the gambler has some knowledge about the behavior of the mean-payoff function (in terms of its local regularity around its maxima, roughly speaking). This is because when the set of arms is uncountably infinite and absolutely no assumptions are made on the payoff function, it is impossible to construct a strategy that simultaneously achieves sublinear regret for all bandit problems (see, e.g., [BMS10, Corollary 4]). When the set of arms is a metric space (possibly with the power of the continuum) previous works have assumed either the global smoothness of the payoff function [Agr95b; Kle04; KSU08; Cop04] or local smoothness in the vicinity of the maxima [AOS07]. Here, smoothness means that the payoff function is either Lipschitz or Hölder continuous (locally or globally). These smoothness assumptions are indeed reasonable in many practical problems of interest.

In this paper, we assume that there exists a dissimilarity function that constrains the behavior of the mean-payoff function, where a dissimilarity function is a measure of the discrepancy between two arms that is not required to be symmetric or to satisfy the triangle inequality. (The same notion was introduced simultaneously and independently of us by [KSU08ext, Section 4.4] under the name “quasi-distance.”) In particular, the dissimilarity function is assumed to locally set a bound on the decrease of the mean-payoff function at each of its global maxima. We also assume that the decision maker can construct a recursive covering of the space of arms in such a way that the diameters of the sets in the covering shrink at a known geometric rate when measured with this dissimilarity.

Relation to the literature.

Our work generalizes and improves previous works on continuum-armed bandits.

In particular, Kle04 and AOS07 focused on one-dimensional problems, while we allow general spaces. In this sense, the closest work to the present contribution is that of KSU08, who considered generic metric spaces assuming that the mean-payoff function is Lipschitz with respect to the (known) metric of the space; its full version [KSU08ext] relaxed this condition and only requires that the mean-payoff function is Lipschitz at some maximum with respect to some (known) dissimilarity.\footnote{The present paper is a concurrent and independent work with respect to the paper of Kleinberg, Slivkins, and Upfal [KSU08ext]. An extended abstract [KSU08] of the latter was published in May 2008 at STOC’08, while the NIPS’08 version [BMSS09] of the present paper was submitted at the beginning of June 2008. At that time, we were not aware of the existence of the full version [KSU08ext], which was released in September 2008.} KSU08ext proposed a novel algorithm that achieves essentially the best possible regret bound in a minimax sense with respect to the environments studied, as well as a much better regret bound if the mean-payoff function has a small “zooming dimension”.

Our contribution furthers these works in two ways:

 (i)

our algorithms, motivated by the recent successful tree-based optimization algorithms [KS06; GWMT06; CM07], are easy to implement;

(ii)

we show that a version of our main algorithm is able to exploit the local properties of the mean-payoff function at its maxima only, which, as far as we know, was not investigated in the approaches of KSU08 and KSU08ext.

The precise discussion of the improvements (and drawbacks) with respect to the papers of KSU08 and KSU08ext requires the introduction of somewhat extensive notation and is therefore deferred to Section 5. However, in a nutshell, the following can be said.

First, by resorting to a hierarchical approach, we are able to avoid the use of the doubling trick, as well as the need for the (covering) oracle, both of which the so-called zooming algorithm of KSU08 relies on. This comes at the cost of slightly more restrictive assumptions on the mean-payoff function, as well as a more involved analysis. Moreover, the oracle is replaced by an a priori choice of a covering tree. In standard metric spaces, such as the Euclidean spaces, such trees are trivial to construct, though in full generality they may be difficult to obtain when their construction must start from (say) a distance function only. We also propose a variant of our algorithm that has a smaller computational complexity, of order $n \log n$, compared to the quadratic complexity of our basic algorithm. However, the cheaper algorithm requires the doubling trick to achieve an anytime guarantee (just like the zooming algorithm).

Second, we are also able to weaken our assumptions and to consider only properties of the mean-payoff function in the neighborhoods of its maxima; this leads to regret bounds scaling as $\widetilde{O}\bigl(\sqrt{n}\bigr)$\footnote{We write $u_n = \widetilde{O}(v_n)$ when $u_n = O(v_n)$ up to a logarithmic factor.} when, e.g., the space is the unit hypercube and the mean-payoff function has a finite number of global maxima around which it is locally equivalent to a Hölder function with some known degree $\alpha > 0$. Thus, in this case, we get the desirable property that the rate of growth of the regret is independent of the dimensionality of the input space. (Comparable dimensionality-free rates are obtained under different assumptions in [KSU08ext].)

Finally, in addition to the strong theoretical guarantees, we expect our algorithm to work well in practice since the algorithm is very close to the recent, empirically very successful tree-search methods from the games and planning literature [GeSi08:ICML; GeSi08:AAAI; schadd2008addressing; ChaWiHeUiBo08; finnsson2008simulation].

Outline.

The outline of the paper is as follows:

  1. In Section 2 we formalize the $\mathcal{X}$–armed bandit problem.

  2. In Section 3 we describe the basic strategy proposed, called HOO (hierarchical optimistic optimization).

  3. We present the main results in Section 4. We start by specifying and explaining our assumptions (Section 4.1) under which various regret bounds are proved. Then we prove a distribution-dependent bound for the basic version of HOO (Section 4.2). A problem with the basic algorithm is that its computational cost increases quadratically with the number of time steps. Assuming the knowledge of the horizon, we thus propose a computationally more efficient variant of the basic algorithm, called truncated HOO, and prove that it enjoys a regret bound identical to the one of the basic version (Section 4.3) while its computational complexity is only log-linear in the number of time steps. The first set of assumptions constrains the mean-payoff function everywhere. A second set of assumptions is therefore presented that puts constraints on the mean-payoff function only in a small vicinity of its global maxima; we then propose another algorithm, called local-HOO, which is proven to enjoy a regret bound again essentially similar to that of the basic version (Section 4.4). Finally, we prove the minimax optimality of HOO in metric spaces (Section 4.5).

  4. In Section 5 we compare the results of this paper with previous works.

2 Problem setup

A stochastic bandit problem is a pair $(\mathcal{X}, M)$, where $\mathcal{X}$ is a measurable space of arms and $M$ determines the distribution of rewards associated with each arm. We say that $M$ is a bandit environment on $\mathcal{X}$. Formally, $M$ is a mapping $\mathcal{X} \to \mathcal{M}_1(\mathbb{R})$, where $\mathcal{M}_1(\mathbb{R})$ is the space of probability distributions over the reals. The distribution assigned to arm $x \in \mathcal{X}$ is denoted by $M_x$. We require that for each arm $x \in \mathcal{X}$, the distribution $M_x$ admits a first-order moment; we then denote by $f(x)$ its expectation (“mean payoff”),
$$ f(x) = \int y \,\mathrm{d}M_x(y)\,. $$

The mean-payoff function $f$ thus defined is assumed to be measurable. For simplicity, we shall also assume that all $M_x$ have bounded supports, included in some fixed bounded interval\footnote{More generally, our results would also hold when the tails of the reward distributions are uniformly sub-Gaussian.}, say, the unit interval $[0,1]$. Then, $f$ also takes bounded values, in $[0,1]$.

A decision maker (the gambler of the introduction) that interacts with a stochastic bandit problem $(\mathcal{X}, M)$ plays a game at discrete time steps according to the following rules. In the first round the decision maker can select an arm $X_1 \in \mathcal{X}$ and receives a reward $Y_1$ drawn at random from $M_{X_1}$. In round $t > 1$ the decision maker can select an arm $X_t \in \mathcal{X}$ based on the information available up to time $t$, i.e., $(X_1, Y_1, \ldots, X_{t-1}, Y_{t-1})$, and receives a reward $Y_t$ drawn from $M_{X_t}$, independently of $(X_1, Y_1, \ldots, X_{t-1}, Y_{t-1})$ given $X_t$. Note that a decision maker may randomize his choice, but can only use information available up to the point in time when the choice is made.

Formally, a strategy of the decision maker in this game (“bandit strategy”) can be described by an infinite sequence of measurable mappings, $\varphi_1, \varphi_2, \ldots$, where $\varphi_t$ maps the space of past observations,
$$ \bigl(\mathcal{X} \times [0,1]\bigr)^{t-1}, $$
to the space of probability measures over $\mathcal{X}$. By convention, $\varphi_1$ does not take any argument. A strategy is called deterministic if for every $t$, $\varphi_t$ is a Dirac distribution.

The goal of the decision maker is to maximize his expected cumulative reward. Equivalently, the goal can be expressed as minimizing the expected cumulative regret, which is defined as follows. Let
$$ f^{\,*} = \sup_{x \in \mathcal{X}} f(x) $$
be the best expected payoff in a single round. At round $n$, the cumulative regret of a decision maker playing $(\mathcal{X}, M)$ is
$$ \widehat{R}_n = n\, f^{\,*} - \sum_{t=1}^{n} Y_t\,, $$
i.e., the difference between the maximum expected payoff in $n$ rounds and the actual total payoff. In the sequel, we shall restrict our attention to the expected cumulative regret, which is defined as the expectation $\mathbb{E}\bigl[\widehat{R}_n\bigr]$ of the cumulative regret $\widehat{R}_n$.

Finally, we define the cumulative pseudo-regret as
$$ R_n = n\, f^{\,*} - \sum_{t=1}^{n} f(X_t)\,, $$
that is, the actual rewards used in the definition of the regret are replaced by the mean-payoffs of the arms pulled. Since (by the tower rule)
$$ \mathbb{E}\bigl[Y_t\bigr] = \mathbb{E}\Bigl[\mathbb{E}\bigl[Y_t \,\big|\, X_t\bigr]\Bigr] = \mathbb{E}\bigl[f(X_t)\bigr]\,, $$
the expected values of the cumulative regret and of the cumulative pseudo-regret are the same. Thus, we focus below on the study of the behavior of $\mathbb{E}\bigl[R_n\bigr]$.
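
To make the protocol above concrete, here is a minimal Python sketch of the interaction loop together with the two regret notions just defined. The specific arm space $[0,1]$, mean-payoff function, and Bernoulli rewards are illustrative stand-ins of ours, not part of the formal setup.

import random

def f(x):                      # mean-payoff function (hypothetical example)
    return 1 - abs(x - 0.3)

def pull(x):                   # Bernoulli reward in {0, 1} with mean f(x)
    return 1.0 if random.random() < f(x) else 0.0

f_star = f(0.3)                # best expected payoff in a single round

def play(strategy, n):
    rewards, means = [], []
    for t in range(1, n + 1):
        x = strategy(t)        # arm selected in round t
        y = pull(x)
        rewards.append(y)
        means.append(f(x))
    regret = n * f_star - sum(rewards)        # cumulative regret
    pseudo_regret = n * f_star - sum(means)   # cumulative pseudo-regret
    return regret, pseudo_regret

print(play(lambda t: 0.5, 1000))  # a strategy that always plays x = 0.5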

Remark 1

As it is argued in [BMS10], in many real-world problems, the decision maker is not interested in his cumulative regret but rather in its simple regret. The latter can be defined as follows. After $n$ rounds of play in a stochastic bandit problem $(\mathcal{X}, M)$, the decision maker is asked to make a recommendation $Z_n \in \mathcal{X}$ based on the $n$ obtained rewards $Y_1, \ldots, Y_n$. The simple regret of this recommendation equals
$$ r_n = f^{\,*} - f(Z_n)\,. $$

In this paper we focus on the cumulative regret $R_n$, but all the results can be readily extended to the simple regret by considering the recommendation $Z_n = X_{T_n}$, where $T_n$ is drawn uniformly at random in $\{1, \ldots, n\}$. Indeed, in this case,
$$ \mathbb{E}\bigl[r_n\bigr] = \frac{\mathbb{E}\bigl[R_n\bigr]}{n}\,, $$
as is shown in [BMS10, Section 3].

3 The Hierarchical Optimistic Optimization (HOO) strategy

The HOO strategy (cf. Algorithm 1) incrementally builds an estimate of the mean-payoff function $f$ over $\mathcal{X}$. The core idea (as in previous works) is to estimate $f$ precisely around its maxima, while estimating it loosely in other parts of the space $\mathcal{X}$. To implement this idea, HOO maintains a binary tree whose nodes are associated with measurable regions of the arm-space $\mathcal{X}$ such that the regions associated with nodes deeper in the tree (further away from the root) represent increasingly smaller subsets of $\mathcal{X}$. The tree is built in an incremental manner. At each node of the tree, HOO stores some statistics based on the information received in previous rounds. In particular, HOO keeps track of the number of times a node was traversed up to round $n$ and the corresponding empirical average of the rewards received so far. Based on these, HOO assigns an optimistic estimate (denoted by $B$) to the maximum mean-payoff associated with each node. These estimates are then used to select the next node to “play”. This is done by traversing the tree, beginning from the root, and always following the node with the highest $B$–value (cf. lines 4–14 of Algorithm 1). Once a node is selected, a point in the region associated with it is chosen (line 16) and is sent to the environment. Based on the point selected and the received reward, the tree is updated (lines 18–33).


The tree of coverings which HOO needs to receive as an input is an infinite binary tree whose nodes are associated with subsets of $\mathcal{X}$. The nodes in this tree are indexed by pairs of integers $(h, i)$; node $(h, i)$ is located at depth $h \geq 0$ from the root. The range of the second index, $i$, associated with nodes at depth $h$ is restricted by $1 \leq i \leq 2^h$. Thus, the root node is denoted by $(0, 1)$. By convention, $(h+1,\, 2i-1)$ and $(h+1,\, 2i)$ are used to refer to the two children of the node $(h, i)$. Let $\mathcal{P}_{h,i} \subset \mathcal{X}$ be the region associated with node $(h, i)$. By assumption, these regions are measurable and must satisfy the constraints

(1a)$\qquad \mathcal{P}_{0,1} = \mathcal{X}\,;$
(1b)$\qquad \mathcal{P}_{h,i} = \mathcal{P}_{h+1,\,2i-1} \cup \mathcal{P}_{h+1,\,2i} \quad$ for all $h \geq 0$ and $1 \leq i \leq 2^h$.

As a corollary, the regions at any level cover the space $\mathcal{X}$,
$$ \mathcal{X} = \bigcup_{i=1}^{2^h} \mathcal{P}_{h,i} \qquad \text{for all } h \geq 0\,, $$
explaining the term “tree of coverings”.
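
As a concrete instance of conditions (1a) and (1b), the following Python fragment sketches the dyadic tree of coverings on $\mathcal{X} = [0,1]$; the indexing convention $1 \leq i \leq 2^h$ is the one used above, and the snippet itself is an illustration of ours, not part of the algorithm.

# Dyadic tree of coverings on the unit interval, with nodes (h, i),
# 1 <= i <= 2**h, and children (h+1, 2i-1) and (h+1, 2i).
def region(h, i):
    """Closed dyadic interval P_{h,i} associated with node (h, i)."""
    return ((i - 1) * 2.0 ** (-h), i * 2.0 ** (-h))

def children(h, i):
    return (h + 1, 2 * i - 1), (h + 1, 2 * i)

# Condition (1a): the root covers the whole space.
assert region(0, 1) == (0.0, 1.0)
# Condition (1b): each region is covered by its two children.
(a, m1), (m2, b) = region(2, 3), region(2, 4)  # children of node (1, 2)
assert (a, b) == region(1, 2) and m1 == m2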

In the algorithm listing the recursive computation of the $B$–values (lines 28–33) makes a local copy of the tree; of course, this part of the algorithm could be implemented in various other ways. Other arbitrary choices in the algorithm as shown here are how tie breaking in the node selection part is done (lines 9–12), or how a point in the region associated with the selected node is chosen (line 16). We note in passing that implementing these differently would not change our theoretical results.


Parameters:  Two real numbers $\nu_1 > 0$ and $\rho \in (0,1)$, a sequence $(\mathcal{P}_{h,i})_{h \geq 0,\, 1 \leq i \leq 2^h}$ of subsets of $\mathcal{X}$ satisfying the conditions (1a) and (1b).

Auxiliary function Leaf($\mathcal{T}$):  outputs a leaf of $\mathcal{T}$.

Initialization:  $\mathcal{T} = \bigl\{(0,1)\bigr\}$ and $B_{1,1} = B_{1,2} = +\infty$.

1:  for $n = 1, 2, \ldots$ do    Strategy HOO in round $n \geq 1$
2:      $(h,i) \leftarrow (0,1)$    Start at the root
3:      $P \leftarrow \{(h,i)\}$    $P$ stores the path traversed in the tree
4:      while $(h,i) \in \mathcal{T}$ do    Search the tree $\mathcal{T}$
5:          if $B_{h+1,\,2i-1} > B_{h+1,\,2i}$ then    Select the “more promising” child
6:              $(h,i) \leftarrow (h+1,\, 2i-1)$
7:          else if $B_{h+1,\,2i-1} < B_{h+1,\,2i}$ then
8:              $(h,i) \leftarrow (h+1,\, 2i)$
9:          else    Tie-breaking rule
10:             $Z \sim \mathrm{Ber}(0.5)$    e.g., choose a child at random
11:             $(h,i) \leftarrow (h+1,\, 2i - Z)$
12:         end if
13:         $P \leftarrow P \cup \{(h,i)\}$
14:     end while
15:     $(H,I) \leftarrow (h,i)$    The selected node
16:     Choose arm $X$ in $\mathcal{P}_{H,I}$ and play it    Arbitrary selection of an arm
17:     Receive corresponding reward $Y$
18:     $\mathcal{T} \leftarrow \mathcal{T} \cup \bigl\{(H,I)\bigr\}$    Extend the tree
19:     for all $(h,i) \in P$ do    Update the statistics $T_{h,i}$ and $\widehat{\mu}_{h,i}$ stored in the path
20:         $T_{h,i} \leftarrow T_{h,i} + 1$    Increment the counter of node $(h,i)$
21:         $\widehat{\mu}_{h,i} \leftarrow \bigl(1 - 1/T_{h,i}\bigr)\, \widehat{\mu}_{h,i} + Y / T_{h,i}$    Update the mean of node $(h,i)$
22:     end for
23:     for all $(h,i) \in \mathcal{T}$ do    Update the statistics $U_{h,i}$ stored in the tree
24:         $U_{h,i} \leftarrow \widehat{\mu}_{h,i} + \sqrt{2 \ln n / T_{h,i}} + \nu_1 \rho^h$    Update the $U$–value of node $(h,i)$
25:     end for
26:     $B_{H+1,\,2I-1} \leftarrow +\infty$    $B$–values of the children of the new leaf
27:     $B_{H+1,\,2I} \leftarrow +\infty$
28:     $\mathcal{T}' \leftarrow \mathcal{T}$    Local copy of the current tree $\mathcal{T}$
29:     while $\mathcal{T}' \neq \bigl\{(0,1)\bigr\}$ do    Backward computation of the $B$–values
30:         $(h,i) \leftarrow \mathrm{Leaf}(\mathcal{T}')$    Take any remaining leaf
31:         $B_{h,i} \leftarrow \min\bigl\{ U_{h,i},\, \max\{ B_{h+1,\,2i-1},\, B_{h+1,\,2i} \} \bigr\}$    Backward computation
32:         $\mathcal{T}' \leftarrow \mathcal{T}' \setminus \{(h,i)\}$    Drop updated leaf $(h,i)$
33:     end while
34: end for
Algorithm 1  The HOO strategy
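
The following self-contained Python sketch mirrors Algorithm 1, specialized (as a concretization of ours) to $\mathcal{X} = [0,1]$ with the dyadic tree of coverings and arms chosen as interval centers; rewards are assumed to lie in $[0,1]$. It is an illustrative implementation, not the reference one.

import math, random
from collections import defaultdict

class HOO:
    """Sketch of basic HOO on X = [0,1] with dyadic intervals."""

    def __init__(self, nu1=1.0, rho=0.5):
        self.nu1, self.rho = nu1, rho
        self.tree = {(0, 1)}                    # nodes already expanded
        self.T = defaultdict(int)               # T_{h,i}: visit counts
        self.mu = defaultdict(float)            # empirical means
        self.B = defaultdict(lambda: math.inf)  # B-values; +inf if unexplored
        self.n = 0

    @staticmethod
    def children(h, i):
        return (h + 1, 2 * i - 1), (h + 1, 2 * i)

    def select(self):
        """Follow highest B-values from the root (lines 2-15)."""
        node, path = (0, 1), [(0, 1)]
        while node in self.tree:
            left, right = self.children(*node)
            if self.B[left] > self.B[right]:
                node = left
            elif self.B[left] < self.B[right]:
                node = right
            else:                                # tie-breaking rule
                node = random.choice([left, right])
            path.append(node)
        return path, node

    def arm(self, node):
        h, i = node                              # e.g. the center of P_{h,i}
        return (i - 0.5) * 2.0 ** (-h)

    def update(self, path, leaf, reward):
        self.n += 1
        self.tree.add(leaf)                      # extend the tree (line 18)
        for node in path:                        # lines 19-22
            self.T[node] += 1
            self.mu[node] += (reward - self.mu[node]) / self.T[node]
        U = {}                                   # lines 23-25
        for (h, i) in self.tree:
            U[(h, i)] = (self.mu[(h, i)]
                         + math.sqrt(2 * math.log(self.n) / self.T[(h, i)])
                         + self.nu1 * self.rho ** h)
        # lines 28-33: process deeper nodes before their ancestors
        for node in sorted(self.tree, key=lambda hi: -hi[0]):
            left, right = self.children(*node)
            self.B[node] = min(U[node], max(self.B[left], self.B[right]))

    def round(self, pull):
        path, leaf = self.select()
        x = self.arm(leaf)
        y = pull(x)
        self.update(path, leaf, y)
        return x, y

For instance, running agent = HOO(nu1=1.0, rho=0.5) and calling agent.round(pull) for $n$ rounds reproduces the loop of lines 1–34; the backward pass over the whole stored tree is what makes each round cost time linear in the size of the tree, in line with the quadratic total running time discussed below.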

To facilitate the formal study of the algorithm, we shall need some more notation. In particular, we shall introduce time-indexed versions ($T_{h,i}(n)$, $\widehat{\mu}_{h,i}(n)$, $U_{h,i}(n)$, $B_{h,i}(n)$, $\mathcal{T}_n$, etc.) of the quantities used by the algorithm. The convention used is that the indexation by $n$ is used to indicate the value taken at the end of round $n$.

In particular, $\mathcal{T}_n$ is used to denote the finite subtree stored by the algorithm at the end of round $n$. Thus, the initial tree is $\mathcal{T}_0 = \bigl\{(0,1)\bigr\}$ and it is expanded round after round as
$$ \mathcal{T}_n = \mathcal{T}_{n-1} \cup \bigl\{(H_n, I_n)\bigr\}\,, $$
where $(H_n, I_n)$ is the node selected in line 15. We call $(H_n, I_n)$ the node played in round $n$. We use $X_n$ to denote the point selected by HOO in the region associated with the node played in round $n$, while $Y_n$ denotes the received reward.

Node selection works by comparing $B$–values and always choosing the node with the highest $B$–value. The $B$–value, $B_{h,i}(n)$, at node $(h, i)$ by the end of round $n$ is an estimated upper bound on the mean-payoff function at node $(h, i)$. To define it we first need to introduce the average of the rewards received in rounds when some descendant of node $(h, i)$ was chosen (by convention, each node is a descendant of itself):
$$ \widehat{\mu}_{h,i}(n) = \frac{1}{T_{h,i}(n)} \sum_{t=1}^{n} Y_t\, \mathbb{1}_{\{(H_t, I_t) \in \mathcal{C}(h,i)\}}\,. $$

Here, $\mathcal{C}(h, i)$ denotes the set of all descendants of a node $(h, i)$ in the infinite tree,
$$ \mathcal{C}(h, i) = \bigl\{(h, i)\bigr\} \cup \mathcal{C}(h+1,\, 2i-1) \cup \mathcal{C}(h+1,\, 2i)\,, $$

and $T_{h,i}(n)$ is the number of times a descendant of $(h, i)$ is played up to and including round $n$, that is,
$$ T_{h,i}(n) = \sum_{t=1}^{n} \mathbb{1}_{\{(H_t, I_t) \in \mathcal{C}(h,i)\}}\,. $$

A key quantity determining $B_{h,i}(n)$ is $U_{h,i}(n)$, an initial estimate of the maximum of the mean-payoff function in the region $\mathcal{P}_{h,i}$ associated with node $(h, i)$:

(2)$\qquad U_{h,i}(n) = \begin{cases} \widehat{\mu}_{h,i}(n) + \sqrt{\dfrac{2 \ln n}{T_{h,i}(n)}} + \nu_1 \rho^h & \text{if } T_{h,i}(n) > 0\,; \\[1ex] +\infty & \text{otherwise.} \end{cases}$

In the expression corresponding to the case $T_{h,i}(n) > 0$, the first term added to the average of rewards accounts for the uncertainty arising from the randomness of the rewards that the average is based on, while the second term, $\nu_1 \rho^h$, accounts for the maximum possible variation of the mean-payoff function over the region $\mathcal{P}_{h,i}$. The actual bound on the maxima used in HOO is defined recursively by
$$ B_{h,i}(n) = \begin{cases} \min\Bigl\{ U_{h,i}(n),\; \max\bigl\{ B_{h+1,\,2i-1}(n),\; B_{h+1,\,2i}(n) \bigr\} \Bigr\} & \text{if } (h, i) \in \mathcal{T}_n\,; \\[1ex] +\infty & \text{otherwise.} \end{cases} $$

The role of $B_{h,i}(n)$ is to put a tight, optimistic, high-probability upper bound on the best mean-payoff that can be achieved in the region $\mathcal{P}_{h,i}$. By assumption, $\mathcal{P}_{h,i} = \mathcal{P}_{h+1,\,2i-1} \cup \mathcal{P}_{h+1,\,2i}$. Thus, assuming that $B_{h+1,\,2i-1}(n)$ (resp., $B_{h+1,\,2i}(n)$) is a valid upper bound for region $\mathcal{P}_{h+1,\,2i-1}$ (resp., $\mathcal{P}_{h+1,\,2i}$), we see that $\max\bigl\{ B_{h+1,\,2i-1}(n),\, B_{h+1,\,2i}(n) \bigr\}$ must be a valid upper bound for region $\mathcal{P}_{h,i}$. Since $U_{h,i}(n)$ is another valid upper bound for region $\mathcal{P}_{h,i}$, we get a tighter (less overoptimistic) upper bound by taking the minimum of these bounds.

Obviously, for leaves $(h, i)$ of the tree $\mathcal{T}_n$, one has $B_{h,i}(n) = U_{h,i}(n)$, while close to the root one may expect that $B_{h,i}(n) < U_{h,i}(n)$; that is, the upper bounds close to the root are expected to be less biased than the ones associated with nodes farther away from the root.


Note that at the beginning of round $n$, the algorithm uses the values $B_{h,i}(n-1)$ to select the node to be played (since the values $B_{h,i}(n)$ will only be available at the end of round $n$). It does so by following a path from the root node to an inner node with only one child or a leaf, and finally considering a child of the latter; at each node of the path, the child with highest $B$–value is chosen, until a node with infinite $B$–value is reached.

Illustrations.

Figure 1 illustrates the computation done by HOO in round $n$, as well as the correspondence between the nodes of the tree constructed by the algorithm and their associated regions. Figure 2 shows trees built by running HOO for a specific environment.

Figure 1: Illustration of the node selection procedure in round $n$. The tree represents $\mathcal{T}_n$. In the illustration, $B_{1,2}(n) > B_{1,1}(n)$; therefore, the selected path included the node $(1,2)$ rather than the node $(1,1)$.
Figure 2: The trees (bottom figures) built by HOO after 1,000 (left) and 10,000 (right) rounds. The mean-payoff function (shown in the top part of the figure) is $f(x) = \frac{1}{2}\bigl(\sin(13x)\sin(27x) + 1\bigr)$ for $x \in [0,1]$; the corresponding payoffs are Bernoulli-distributed. The inputs of HOO are as follows: the tree of coverings is formed by all dyadic intervals, $\nu_1 = 1$ and $\rho = 1/2$. The tie-breaking rule is to choose a child at random (as shown in Algorithm 1), while the points in $\mathcal{X}$ to be played are chosen as the centers of the dyadic intervals. Note that the tree is extensively refined where the mean-payoff function is near-optimal, while it is much less developed in other regions.
Computational complexity.

At the end of round $n$, the size of the active tree $\mathcal{T}_n$ is at most $n$, making the storage requirements of HOO linear in $n$. In addition, the statistics and $B$–values of all nodes in the active tree need to be updated, which thus takes time $O(n)$. HOO thus runs in time $O(n)$ at each round $n$, making the algorithm’s total running time up to round $n$ quadratic in $n$. In Section 4.3 we modify HOO so that if the time horizon $n_0$ is known in advance, the total running time is $O(n_0 \ln n_0)$, while the modified algorithm will be shown to enjoy essentially the same regret bound as the original version.

4 Main results

We start by describing and commenting on the assumptions that we need to analyze the regret of HOO. This is followed by stating the first upper bound, followed by some improvements on the basic algorithm. The section is finished by the statement of our results on the minimax optimality of HOO.

4.1 Assumptions

The main assumption will concern the “smoothness” of the mean-payoff function. However, somewhat unconventionally, we shall use a notion of smoothness that is built around dissimilarity functions rather than distances, allowing us to deal with function classes of highly different smoothness degrees in a unified manner. Before stating our smoothness assumptions, we define the notion of a dissimilarity function and some associated concepts.

Definition 2 (Dissimilarity)

A dissimilarity $\ell$ over $\mathcal{X}$ is a non-negative mapping $\ell : \mathcal{X}^2 \to \mathbb{R}$ satisfying $\ell(x, x) = 0$ for all $x \in \mathcal{X}$.

Given a dissimilarity $\ell$, the diameter of a subset $A$ of $\mathcal{X}$ as measured by $\ell$ is defined by
$$ \operatorname{diam}(A) = \sup_{x, y \in A} \ell(x, y)\,, $$
while the $\ell$–open ball of $\mathcal{X}$ with radius $\varepsilon > 0$ and center $x \in \mathcal{X}$ is defined by
$$ \mathcal{B}(x, \varepsilon) = \bigl\{ y \in \mathcal{X} : \ell(x, y) < \varepsilon \bigr\}\,. $$

Note that the dissimilarity $\ell$ is only used in the theoretical analysis of HOO; the algorithm does not require $\ell$ as an explicit input. However, when choosing its parameters (the tree of coverings and the real numbers $\nu_1$ and $\rho$) for the (set of) two assumptions below to be satisfied, the user of the algorithm probably has in mind a given dissimilarity.

However, it is also natural to wonder what is the class of functions for which the algorithm (given a fixed tree) can achieve non-trivial regret bounds; a similar question for regression was investigated, e.g., by Yang07. We shall indicate below how to construct a subset of such a class, right after stating our assumptions connecting the tree, the dissimilarity, and the environment (the mean-payoff function). Of these, Assumption A2 will be interpreted, discussed, and equivalently reformulated below into (4), a form that might be more intuitive. The form (3) stated below will turn out to be the most useful one in the proofs.

  • Assumptions  Given the parameters of HOO, that is, the real numbers $\nu_1 > 0$ and $\rho \in (0,1)$ and the tree of coverings $(\mathcal{P}_{h,i})$, there exists a dissimilarity function $\ell$ such that the following two assumptions are satisfied.

    A1. There exists $\nu_2 > 0$ such that for all integers $h \geq 0$,

      1. $\operatorname{diam}(\mathcal{P}_{h,i}) \leq \nu_1 \rho^h$ for all $1 \leq i \leq 2^h$;

      2. for all $1 \leq i \leq 2^h$, there exists $x^{\circ}_{h,i} \in \mathcal{P}_{h,i}$ such that $\mathcal{B}_{h,i} = \mathcal{B}\bigl(x^{\circ}_{h,i},\, \nu_2 \rho^h\bigr) \subset \mathcal{P}_{h,i}$;

      3. $\mathcal{B}_{h,i} \cap \mathcal{B}_{h,j} = \emptyset$ for all $1 \leq i < j \leq 2^h$.

    A2. The mean-payoff function satisfies, for all $x, y \in \mathcal{X}$,

      (3)$\qquad f^{\,*} - f(y) \;\leq\; f^{\,*} - f(x) + \max\bigl\{ f^{\,*} - f(x),\; \ell(x, y) \bigr\}\,.$

We show next how a tree induces in a natural way first a dissimilarity and then a class of environments. For this, we need to assume that the tree of coverings –in addition to (1a) and (1b)– is such that the subsets $\mathcal{P}_{h+1,\,2i-1}$ and $\mathcal{P}_{h+1,\,2i}$ are disjoint whenever $h \geq 0$ and $1 \leq i \leq 2^h$, and that none of them is empty. Then, each $x \in \mathcal{X}$ corresponds to a unique path in the tree, which can be represented as an infinite binary sequence $x_0 x_1 x_2 \cdots$, where
$$ x_h = \begin{cases} 0 & \text{if } x \text{ belongs to the region associated with the first child of the depth-}h\text{ node containing } x\,, \\ 1 & \text{if it belongs to the region associated with the second child.} \end{cases} $$

For points $x \neq y$ with respective representations $x_0 x_1 \cdots$ and $y_0 y_1 \cdots$, we let
$$ \ell(x, y) = \nu_1\, \rho^{\min\{ h \,:\, x_h \neq y_h \}} \qquad \text{(and } \ell(x, x) = 0\text{)}. $$
It is not hard to see that this dissimilarity satisfies A1. Thus, the associated class of environments is formed by those with mean-payoff functions satisfying A2 with the so-defined dissimilarity. This is a “natural class” underlying the tree for which our tree-based algorithm can achieve non-trivial regret. (However, we do not know if this is the largest such class.)
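
A minimal Python sketch of this construction, under our reading that the induced dissimilarity equals $\nu_1 \rho^{h}$ with $h$ the depth at which the two binary expansions part ways (here for the dyadic tree on $[0,1)$, and with a finite depth cap for floating-point arithmetic):

def tree_dissimilarity(x, y, nu1=1.0, rho=0.5, depth=52):
    """Tree-induced dissimilarity on [0,1) with dyadic intervals:
    nu1 * rho**h, where h is the first depth at which the dyadic
    addresses of x and y disagree (the deepest common region)."""
    if x == y:
        return 0.0                 # dissimilarities vanish on the diagonal
    h = 0
    while h < depth and int(x * 2 ** (h + 1)) == int(y * 2 ** (h + 1)):
        h += 1                     # still in the same region at depth h+1
    return nu1 * rho ** h

Note that two points lying in the same region at depth $h$ are at dissimilarity at most $\nu_1 \rho^h$, consistently with part 1 of Assumption A1.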

In general, Assumption A1 ensures that the regions in the tree of coverings shrink exactly at a geometric rate. The following example shows how to satisfy A1 when the domain is a $D$–dimensional hyper-rectangle and the dissimilarity is some positive power of the Euclidean (or supremum) norm.

Example 1

Assume that $\mathcal{X}$ is a $D$–dimensional hyper-rectangle and consider the dissimilarity $\ell(x, y) = b\, \|x - y\|^a$, where $a > 0$ and $b > 0$ are real numbers and $\|\cdot\|$ is the Euclidean norm. Define the tree of coverings in the following inductive way: let $\mathcal{P}_{0,1} = \mathcal{X}$. Given a node $\mathcal{P}_{h,i}$, let $\mathcal{P}_{h+1,\,2i-1}$ and $\mathcal{P}_{h+1,\,2i}$ be obtained from the hyper-rectangle $\mathcal{P}_{h,i}$ by splitting it in the middle along its longest side (ties can be broken arbitrarily).

We now argue that Assumption A1 is satisfied. With no loss of generality we take $\mathcal{X} = [0,1]^D$. Then, for all integers $u \geq 0$ and $0 \leq k \leq D-1$, a region at depth $h = uD + k$ is a hyper-rectangle whose sides have length $2^{-u}$ or $2^{-(u+1)}$, so that
$$ \operatorname{diam}\bigl(\mathcal{P}_{uD+k,\,i}\bigr) \,\leq\, b\,\bigl(\sqrt{D}\; 2^{-u}\bigr)^a\,. $$

It is now easy to see that Assumption A1 is satisfied for the indicated dissimilarity, e.g., with the choice of the parameters $\rho = 2^{-a/D}$ and $\nu_1 = b\,\bigl(2\sqrt{D}\bigr)^a$ for HOO, and the value $\nu_2 = b\, 4^{-a}$.

Example 2

In the same setting, with the same tree of coverings over $\mathcal{X} = [0,1]^D$, but now with the dissimilarity $\ell(x, y) = \|x - y\|_\infty$, we get that for all integers $u \geq 0$ and $0 \leq k \leq D-1$,
$$ \operatorname{diam}\bigl(\mathcal{P}_{uD+k,\,i}\bigr) \,\leq\, 2^{-u}\,. $$

This time, Assumption A1 is satisfied, e.g., with the choice of the parameters $\rho = 2^{-1/D}$ and $\nu_1 = 2$ for HOO, and the value $\nu_2 = 1/4$.

The second assumption, A2, concerns the environment; when Assumption A2 is satisfied, we say that $f$ is weakly Lipschitz with respect to (w.r.t.) $\ell$. The choice of this terminology follows from the fact that if $f$ is $1$–Lipschitz w.r.t. $\ell$, i.e., for all $x, y \in \mathcal{X}$, one has $|f(x) - f(y)| \leq \ell(x, y)$, then it is also weakly Lipschitz w.r.t. $\ell$.

On the other hand, weak Lipschitzness is a milder requirement. It implies local (one-sided) $1$–Lipschitzness at any global maximum, since at any arm $x^*$ such that $f(x^*) = f^{\,*}$, the criterion (3) rewrites to $f(x^*) - f(y) \leq \ell(x^*, y)$. In the vicinity of other arms $x$, the constraint is milder as the arm gets worse (as $f^{\,*} - f(x)$ increases) since the condition (3) rewrites to

(4)$\qquad f(x) - f(y) \;\leq\; \max\bigl\{ f^{\,*} - f(x),\; \ell(x, y) \bigr\}\,.$

Here is another interpretation of these two facts; it will be useful when considering local assumptions in Section 4.4 (a weaker set of assumptions). First, concerning the behavior around global maxima, Assumption A2 implies that for any set $A \subset \mathcal{X}$ with $\sup_{x \in A} f(x) = f^{\,*}$,

(5)$\qquad f^{\,*} - \inf_{x \in A} f(x) \;\leq\; \operatorname{diam}(A)\,.$

Second, it can be seen that Assumption A2 is equivalent\footnote{That Assumption A2 implies (6) is immediate; for the converse, it suffices to consider, for each $x \in \mathcal{X}$, a sequence of values $\varepsilon$ decreasing to $\bigl(f^{\,*} - f(x)\bigr)_+$, where $(\,\cdot\,)_+$ denotes the nonnegative part.} to the following property: for all $\varepsilon > 0$, all $x \in \mathcal{X}_\varepsilon$ and all $y \in \mathcal{X}$,

(6)$\qquad f(x) - f(y) \;\leq\; \max\bigl\{ \varepsilon,\; \ell(x, y) \bigr\}\,,$

where
$$ \mathcal{X}_\varepsilon = \bigl\{ x \in \mathcal{X} : f(x) \geq f^{\,*} - \varepsilon \bigr\} $$
denotes the set of $\varepsilon$–optimal arms. This second property essentially states that there is no sudden and large drop in the mean-payoff function around the global maxima (note that this property can be satisfied even for discontinuous functions).
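
Since condition (3) is a pointwise inequality, it can at least be falsified numerically. The following Python helper is a Monte-Carlo check of (3) on sampled pairs of arms; it is a diagnostic sketch of ours, not part of the analysis.

import random

def check_weak_lipschitz(f, ell, f_star, sample, trials=10_000):
    """Monte-Carlo check of condition (3):
    f* - f(y) <= f* - f(x) + max(f* - f(x), ell(x, y)).
    Returns a violating pair (x, y) if one is found, else None."""
    for _ in range(trials):
        x, y = sample(), sample()
        bound = f_star - f(x) + max(f_star - f(x), ell(x, y))
        if f_star - f(y) > bound + 1e-12:   # tolerance for rounding
            return (x, y)
    return None

# Example: f(x) = 1 - |x - 0.3| is 1-Lipschitz w.r.t. ell(x, y) = |x - y|,
# hence weakly Lipschitz; the check should therefore return None.
u = lambda: random.random()
print(check_weak_lipschitz(lambda x: 1 - abs(x - 0.3),
                           lambda x, y: abs(x - y), 1.0, u))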

Figure 3 presents an illustration of the two properties discussed above.

Figure 3: Illustration of the property of weak Lipschitzness (on the real line and for the distance $\ell(x, y) = |x - y|$). Around the optimum $x^*$ the values of $f$ should be above $f^{\,*} - \ell(x^*, y)$. Around any $\varepsilon$–optimal point $x$ the values of $f$ should be larger than $f(x) - \varepsilon$ for $\ell(x, y) \leq \varepsilon$ and larger than $f(x) - \ell(x, y)$ elsewhere.

Before stating our main results, we provide a straightforward, though useful, consequence of Assumptions A1 and A2, which should be seen as an intuitive justification for the third term in (2).

For all nodes $(h, i)$, let
$$ f^{\,*}_{h,i} = \sup_{x \in \mathcal{P}_{h,i}} f(x) \qquad \text{and} \qquad \Delta_{h,i} = f^{\,*} - f^{\,*}_{h,i}\,; $$
$\Delta_{h,i}$ is called the suboptimality factor of node $(h, i)$. Depending on whether it is positive or not, a node is called suboptimal ($\Delta_{h,i} > 0$) or optimal ($\Delta_{h,i} = 0$).

Lemma 3

Under Assumptions A1 and A2, if the suboptimality factor $\Delta_{h,i}$ of a region $\mathcal{P}_{h,i}$ is bounded by $c\, \nu_1 \rho^h$ for some $c \geq 0$, then all arms in $\mathcal{P}_{h,i}$ are $\max\{2c,\, c+1\}\, \nu_1 \rho^h$–optimal, that is,
$$ f(x) \,\geq\, f^{\,*} - \max\{2c,\, c+1\}\, \nu_1 \rho^h \qquad \text{for all } x \in \mathcal{P}_{h,i}\,. $$

Proof  For all $\delta > 0$, we denote by $x^*_{h,i}(\delta)$ an element of $\mathcal{P}_{h,i}$ such that
$$ f\bigl(x^*_{h,i}(\delta)\bigr) \,\geq\, f^{\,*}_{h,i} - \delta \,=\, f^{\,*} - \Delta_{h,i} - \delta\,. $$

By the weak Lipschitz property (Assumption A2), it then follows that for all $y \in \mathcal{P}_{h,i}$,

(7)$\qquad f^{\,*} - f(y) \;\leq\; f^{\,*} - f\bigl(x^*_{h,i}(\delta)\bigr) + \max\Bigl\{ f^{\,*} - f\bigl(x^*_{h,i}(\delta)\bigr),\; \ell\bigl(x^*_{h,i}(\delta),\, y\bigr) \Bigr\} \;\leq\; \Delta_{h,i} + \delta + \max\bigl\{ \Delta_{h,i} + \delta,\; \operatorname{diam}(\mathcal{P}_{h,i}) \bigr\}\,.$

Letting $\delta \to 0$ and substituting the bounds on the suboptimality and on the diameter of $\mathcal{P}_{h,i}$ (Assumption A1) concludes the proof.  

4.2 Upper bound for the regret of HOO

AOS07 [AOS07, Assumption 2] observed that the regret of a continuum-armed bandit algorithm should depend on how fast the volumes of the sets of $\varepsilon$–optimal arms shrink as $\varepsilon \to 0$. Here, we capture this by defining a new notion, the near-optimality dimension of the mean-payoff function. The connection between these concepts, as well as with the zooming dimension defined by KSU08, will be further discussed in Section 5. We start by recalling the definition of packing numbers.

Definition 4 (Packing number)

The $\varepsilon$–packing number $\mathcal{N}(\mathcal{X}, \ell, \varepsilon)$ of $\mathcal{X}$ w.r.t. the dissimilarity $\ell$ is the size of the largest packing of $\mathcal{X}$ with disjoint $\ell$–open balls of radius $\varepsilon$. That is, $\mathcal{N}(\mathcal{X}, \ell, \varepsilon)$ is the largest integer $k$ such that there exist $k$ disjoint $\ell$–open balls with radius $\varepsilon$ contained in $\mathcal{X}$.
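
Packing numbers of a finite sample can be lower-bounded greedily. The Python sketch below is ours and is meaningful as stated only when $\ell$ behaves like a metric, where centers at pairwise dissimilarity at least $2\varepsilon$ have disjoint $\varepsilon$–balls; applied to a discretization of the sets $\mathcal{X}_{c\varepsilon}$ over a grid of values of $\varepsilon$, it gives a crude empirical read on the near-optimality dimension defined next.

def greedy_packing(points, ell, eps):
    """Greedy packing of a finite point cloud: keep centers at pairwise
    dissimilarity >= 2*eps, which (for a metric ell) makes the open
    eps-balls around them disjoint. Returns a lower bound on the
    eps-packing number of the cloud."""
    centers = []
    for p in points:
        if all(ell(p, c) >= 2 * eps for c in centers):
            centers.append(p)
    return len(centers)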

We now define the $(c, \varepsilon_0)$–near-optimality dimension, which characterizes the size of the sets $\mathcal{X}_{c\varepsilon}$ as a function of $\varepsilon$. It can be seen as some growth rate in $\varepsilon$ of the metric entropy (measured in terms of $\ell$ and with packing numbers rather than covering numbers) of the set of $c\varepsilon$–optimal arms.

Definition 5 (Near-optimality dimension)

For $c > 0$ and $\varepsilon_0 > 0$, the $(c, \varepsilon_0)$–near-optimality dimension of $f$ w.r.t. $\ell$ equals
$$ \inf\Bigl\{ d' \in [0, +\infty) \,:\, \exists\, C \text{ such that for all } \varepsilon \leq \varepsilon_0,\ \mathcal{N}\bigl(\mathcal{X}_{c\varepsilon},\, \ell,\, \varepsilon\bigr) \leq C\, \varepsilon^{-d'} \Bigr\} $$
(with the usual convention that $\inf \emptyset = +\infty$).

The following example shows that using a dissimilarity (rather than a metric, for instance) may sometimes allow for a significant reduction of the near-optimality dimension.

Example 3

Let $\mathcal{X} = [0,1]^D$ and let $f$ be defined by $f(x) = 1 - \|x\|^a$ for some $a \geq 1$ and some norm $\|\cdot\|$ on $\mathbb{R}^D$. Consider the dissimilarity $\ell$ defined by $\ell(x, y) = \|x - y\|^a$. We shall see in Example 4 that $f$ is weakly Lipschitz w.r.t. $\ell$ (in a sense however slightly weaker than the one given by (5) and (6) but sufficiently strong to ensure a result similar to the one of the main result, Theorem 6 below). Here we claim that the $(c, \varepsilon_0)$–near-optimality dimension (for any $c > 0$ and $\varepsilon_0 > 0$) of $f$ w.r.t. $\ell$ is $0$. On the other hand, the $(c, \varepsilon_0)$–near-optimality dimension (for any $c > 0$ and $\varepsilon_0 > 0$) of $f$ w.r.t. the dissimilarity $\ell'$ defined, for $0 < b < a$, by $\ell'(x, y) = \|x - y\|^b$ is $D\,(1/b - 1/a) > 0$. In particular, when $a = 2$ and $b = 1$, the $(c, \varepsilon_0)$–near-optimality dimension is $D/2$.


Proof  (sketch) Fix $\varepsilon > 0$. The set $\mathcal{X}_{c\varepsilon}$ is the $\|\cdot\|$–ball with center $0$ and radius $(c\varepsilon)^{1/a}$, that is, the $\ell$–ball with center $0$ and radius $c\varepsilon$. Its $\varepsilon$–packing number w.r.t. $\ell$ is bounded by a constant depending only on $D$, $c$ and $a$; hence, the value $0$ for the near-optimality dimension w.r.t. the dissimilarity $\ell$.
In case of $\ell'$, we are interested in the packing number of the $\|\cdot\|$–ball with center $0$ and radius $(c\varepsilon)^{1/a}$ w.r.t. $\ell'$–balls of radius $\varepsilon$, i.e., $\|\cdot\|$–balls of radius $\varepsilon^{1/b}$. The latter is of the order of
$$ \left( \frac{(c\varepsilon)^{1/a}}{\varepsilon^{1/b}} \right)^{\!D} = c^{D/a}\, \varepsilon^{-D\,(1/b - 1/a)}\,; $$
hence, the value $D\,(1/b - 1/a)$ for the near-optimality dimension in the case of the dissimilarity $\ell'$.
Note that in all these cases the $(c, \varepsilon_0)$–near-optimality dimension of $f$ is independent of the value of $c$.  

We can now state our first main result. The proof is presented in Section A.1.

Theorem 6 (Regret bound for HOO)

Consider HOO tuned with parameters such that Assumptions A1 and A2 hold for some dissimilarity $\ell$. Let $d$ be the $(4\nu_1/\nu_2,\, \nu_2)$–near-optimality dimension of the mean-payoff function $f$ w.r.t. $\ell$. Then, for all $d' > d$, there exists a constant $\gamma$ such that for all $n \geq 1$,
$$ \mathbb{E}\bigl[R_n\bigr] \;\leq\; \gamma\, n^{(d'+1)/(d'+2)}\, (\ln n)^{1/(d'+2)}\,. $$

Note that if $d$ is infinite, then the bound is vacuous. The constant $\gamma$ in the theorem depends on $d'$ and on all other parameters of HOO and of the assumptions, as well as on the bandit environment $M$. (The value of $\gamma$ is determined in the analysis; it is in particular proportional to $\nu_2^{-d'}$.) The next section will exhibit a refined upper bound with a more explicit value of $\gamma$ in terms of all these parameters.

Remark 7

The tuning of the parameters of HOO is critical for the assumptions to be satisfied, and thus for achieving a good regret; given some environment, one should select the parameters of HOO such that the near-optimality dimension of the mean-payoff function is minimized. Since the mean-payoff function is unknown to the user, this might be difficult to achieve. Thus, ideally, these parameters should be selected adaptively based on the observation of some preliminary sample. For now, the investigation of this possibility is left for future work.

4.3 Improving the running time when the time horizon is known

A deficiency of the basic HOO algorithm is that its computational complexity scales quadratically with the number of time steps. In this section we propose a simple modification to HOO that achieves essentially the same regret as HOO and whose computational complexity scales only log-linearly with the number of time steps. The needed amount of memory is still linear. We work out the case when the time horizon, $n_0$, is known in advance. The case of unknown horizon can be dealt with by resorting to the so-called doubling trick, see, e.g., [CL06, Section 2.3], which consists of periodically restarting the algorithm for regimes of lengths that double at each such fresh start, so that the $r$–th instance of the algorithm runs for $2^r$ rounds.
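
In code, the doubling trick is a thin wrapper around any horizon-tuned strategy. The Python sketch below assumes the strategy object exposes the round(pull) interface of the earlier HOO sketch; both the interface and the wrapper are illustrative choices of ours.

def doubling_trick(make_strategy, pull, total_rounds):
    """Anytime wrapper via the doubling trick: restart with a fresh
    horizon-tuned instance on regimes of lengths 1, 2, 4, ..., so that
    the r-th instance runs for 2**r rounds."""
    t, r, history = 0, 0, []
    while t < total_rounds:
        horizon = 2 ** r
        strategy = make_strategy(horizon)   # fresh instance tuned for 2**r
        for _ in range(min(horizon, total_rounds - t)):
            history.append(strategy.round(pull))
            t += 1
        r += 1
    return history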


We consider two modifications to the algorithm described in Section 3. First, the quantities $U_{h,i}(t)$ of (2) are redefined by replacing the factor $\ln t$ by $\ln n_0$, that is, now
$$ U_{h,i}(t) = \widehat{\mu}_{h,i}(t) + \sqrt{\frac{2 \ln n_0}{T_{h,i}(t)}} + \nu_1 \rho^h \qquad \text{whenever } T_{h,i}(t) > 0\,. $$

(This results in a policy which explores the arms with a slightly increased frequency.) The definition of the $B$–values in terms of the $U_{h,i}(t)$ is unchanged. A pleasant consequence of the above modification is that the $B$–value of a given node changes only when this node is part of a path selected by the algorithm. Thus at each round $t$, only the nodes along the chosen path need to be updated according to the obtained reward.

However, and this is the reason for the second modification, in the basic algorithm, a path at round $t$ may be of length linear in $t$ (because the tree could have a depth linear in $t$). This is why we also truncate the trees at a depth of the order of $\ln n_0$. More precisely, the algorithm now selects the node to pull at round $t$ by following a path in the tree $\mathcal{T}_{t-1}$, starting from the root and choosing at each node the child with the highest $B$–value (with the new definition above using $\ln n_0$), and stopping either when it encounters a node which has not been expanded before or a node at depth equal to
$$ D_{n_0} = \left\lceil \frac{(\ln n_0)/2}{\ln(1/\rho)} \right\rceil. $$

(It is assumed that $n_0 \geq 2$ so that $D_{n_0} \geq 1$.) Note that since no child of a node located at depth $D_{n_0}$ will ever be explored, its $B$–value at round $t$ simply equals its $U$–value.

We call this modified version of HOO the truncated HOO algorithm. The computational complexity of updating all $B$–values at each round $t$ is of the order of $D_{n_0}$ and thus of the order of $\ln n_0$. The total computational complexity up to round $n_0$ is therefore of the order of $n_0 \ln n_0$, as claimed in the introduction of this section.
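
Building on the HOO sketch given after Algorithm 1, the two modifications can be expressed in Python as follows. The exact truncation depth is set by the analysis, so the constant used below is illustrative only.

class TruncatedHOO(HOO):
    """Sketch of truncated HOO for a known horizon n0: (i) the exploration
    factor uses ln(n0) instead of ln(n), so only the selected path needs
    updating; (ii) paths are truncated at a depth growing logarithmically
    with n0 (the constant below is an illustrative choice)."""

    def __init__(self, n0, nu1=1.0, rho=0.5):
        super().__init__(nu1, rho)
        self.n0 = n0
        self.max_depth = max(1, int(math.log(n0) / (2 * math.log(1 / rho))))

    def select(self):
        node, path = (0, 1), [(0, 1)]
        while node in self.tree and node[0] < self.max_depth:
            left, right = self.children(*node)
            # highest B-value, random tie-break
            node = max((left, right), key=lambda c: (self.B[c], random.random()))
            path.append(node)
        return path, node

    def update(self, path, leaf, reward):
        self.n += 1
        self.tree.add(leaf)
        for node in path:                 # only the selected path changes
            self.T[node] += 1
            self.mu[node] += (reward - self.mu[node]) / self.T[node]
        for node in reversed(path):       # back up B-values along the path
            h, i = node
            u = (self.mu[node]
                 + math.sqrt(2 * math.log(self.n0) / self.T[node])
                 + self.nu1 * self.rho ** h)
            left, right = self.children(*node)
            self.B[node] = min(u, max(self.B[left], self.B[right]))

Each round now touches only the selected path, whose length is at most the truncation depth, which is how the log-linear total running time arises.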

As the next theorem indicates, this new procedure enjoys almost the same cumulative regret bound as the basic HOO algorithm.

Theorem 8 (Upper bound on the regret of truncated HOO)

Fix a horizon $n_0$ such that $D_{n_0} \geq 1$. Then, the regret bound of Theorem 6 still holds true at round $n_0$ for truncated HOO up to an additional additive $4\sqrt{n_0}$ factor.

4.4 Local assumptions

In this section we further relax the weak Lipschitz assumption and require it only to hold locally around the maxima. Doing so, we will be able to deal with an even larger class of functions and in fact we will show that the algorithm studied in this section achieves a $\widetilde{O}\bigl(\sqrt{n}\bigr)$ bound on the regret when it is used for functions that are smooth around their maxima (e.g., equivalent to $\|x - x^*\|^\alpha$ for some known smoothness degree $\alpha > 0$).

For the sake of simplicity and to derive exact constants we also state in a more explicit way the assumption on the near-optimality dimension. We then propose a simple and efficient adaptation of the HOO algorithm suited for this context.

4.4.1 Modified set of assumptions

  • Assumptions  Given the parameters of (the adaptation of) HOO, that is, the real numbers $\nu_1 > 0$ and $\rho \in (0,1)$ and the tree of coverings $(\mathcal{P}_{h,i})$, there exists a dissimilarity function $\ell$ such that Assumption A1 (for some $\nu_2 > 0$) as well as the following two assumptions hold.

    A2′. There exists $\varepsilon_0 > 0$ such that for all optimal subsets $A \subset \mathcal{X}$ (i.e., $\sup_{x \in A} f(x) = f^{\,*}$) with diameter $\operatorname{diam}(A) \leq \varepsilon_0$,
$$ f^{\,*} - \inf_{x \in A} f(x) \;\leq\; \operatorname{diam}(A)\,. $$

      Further, there exists $L > 0$ such that for all $\varepsilon \in (0, \varepsilon_0]$, all $x \in \mathcal{X}_\varepsilon$ and all $y \in \mathcal{X}$,
$$ f(x) - f(y) \;\leq\; L\, \max\bigl\{ \varepsilon,\; \ell(x, y) \bigr\}\,. $$

    A3. There exist $C > 0$ and $d \geq 0$ such that for all $\varepsilon \leq \varepsilon_0$,
$$ \mathcal{N}\bigl(\mathcal{X}_{c\varepsilon},\, \ell,\, \varepsilon\bigr) \;\leq\; C\, \varepsilon^{-d}\,, $$

      where $c = 4\nu_1/\nu_2$.

When $f$ satisfies Assumption A2′, we say that $f$ is $\varepsilon_0$–locally $L$–weakly Lipschitz w.r.t. $\ell$. Note that this assumption was obtained by weakening the characterizations (5) and (6) of weak Lipschitzness.

Assumption A3 is not a real assumption but merely a reformulation of the definition of near-optimality (with the small added ingredient that the limit can be achieved; see the second step of the proof of Theorem 6 in Section A.1).

Example 4

We consider again the domain $\mathcal{X}$ and function $f$ studied in Example 3 and prove (as announced beforehand) that $f$ is $\varepsilon_0$–locally $2^a$–weakly Lipschitz w.r.t. the dissimilarity $\ell$ defined by $\ell(x, y) = \|x - y\|^a$; this, in fact, holds for all $\varepsilon_0 > 0$.


Proof  Note that $x^* = 0$ is such that $f(x^*) = f^{\,*} = 1$. Therefore, for all $y \in \mathcal{X}$,
$$ f(x^*) - f(y) = \|y\|^a = \ell(x^*, y)\,, $$

which yields the first part of Assumption A2′. To prove that the second part is true for $L = 2^a$ and with no constraint on the considered $\varepsilon_0$, we first note that since $a \geq 1$, it holds by convexity that $(u + v)^a \leq 2^{a-1}\bigl(u^a + v^a\bigr)$ for all $u, v \geq 0$. Now, for all $\varepsilon > 0$ and $x \in \mathcal{X}_\varepsilon$, i.e., such that $\|x\|^a \leq \varepsilon$, and for all $y \in \mathcal{X}$,

(8)$\qquad f(x) - f(y) \,=\, \|y\|^a - \|x\|^a \,\leq\, \bigl( \|x\| + \|x - y\| \bigr)^a \,\leq\, 2^{a-1}\bigl( \|x\|^a + \|x - y\|^a \bigr) \,\leq\, 2^{a-1}\bigl( \varepsilon + \ell(x, y) \bigr) \,\leq\, 2^a \max\bigl\{ \varepsilon,\; \ell(x, y) \bigr\}\,,$

which concludes the proof of the second part of A2′.  

4.4.2 Modified HOO algorithm

We now describe the proposed modifications to the basic HOO algorithm.

We first consider, as a building block, the algorithm called $z$–HOO, which takes an integer $z \geq 0$ as an additional parameter to those of HOO. Algorithm $z$–HOO works as follows: it never plays any node with depth smaller than or equal to $z - 1$ and starts directly the selection of a new node at depth $z$. To do so, it first picks the node at depth $z$ with the best $B$–value, chooses a path and then proceeds as the basic HOO algorithm. Note in particular that the initialization of this algorithm consists (in the first $2^z$ rounds) in playing once each of the $2^z$ nodes located at depth $z$ in the tree (since by definition a node that has not been played yet has a $B$–value equal to $+\infty$). We note in passing that when $z = 0$, algorithm $z$–HOO coincides with the basic HOO algorithm.

Algorithm local-HOO employs the doubling trick in conjunction with consecutive instances of $z$–HOO. It works as follows. The integers $r \geq 1$ will index different regimes. The $r$–th regime starts at round $2^r$ and ends when the next regime starts; it thus lasts for $2^r$ rounds. At the beginning of regime $r$, a fresh copy of $z_r$–HOO, where $z_r = \lceil \log_2 r \rceil$, is initialized and is then used throughout the regime.

Note that each fresh start needs to pull each of the nodes located at depth $z_r$ at least once (the number of these nodes is $2^{z_r} \leq 2r$). However, since regime $r$ lasts for $2^r$ time steps (which is exponentially larger than the number of nodes to explore), the time spent on the initialization of $z_r$–HOO in any regime is greatly outnumbered by the time spent in the rest of the regime.
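
Schematically, local-HOO is a regime scheduler around $z$–HOO. In the Python sketch below the depth schedule is passed in as a parameter and make_z_hoo is a hypothetical constructor for a $z$–HOO instance; both are illustrative interfaces of ours.

def local_hoo(make_z_hoo, depth_schedule, pull, total_rounds):
    """Regime structure of local-HOO: regime r lasts 2**r rounds and runs
    a fresh z_r-HOO instance with z_r = depth_schedule(r). The schedule
    (e.g. logarithmic in r) is a tuning choice of the analysis."""
    t, r, history = 0, 1, []
    while t < total_rounds:
        algo = make_z_hoo(depth_schedule(r))   # hypothetical constructor
        for _ in range(min(2 ** r, total_rounds - t)):
            history.append(algo.round(pull))
            t += 1
        r += 1
    return history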

In the rest of this section, we propose first an upper bound on the regret of $z$–HOO (with exact and explicit constants). This result will play a key role in proving a bound on the performance of local-HOO.

4.4.3 Adaptation of the regret bound

In the following we write $\bar{h}$ for the smallest integer $h$ such that
$$ 2 \nu_1 \rho^{h} < \varepsilon_0 $$
and consider the algorithm $z$–HOO, where $z \geq \bar{h}$. In particular, when $z = \bar{h}$ is chosen, the obtained bound is the same as the one of Theorem 6, except that the constants are given in analytic forms.

Theorem 9 (Regret bound for $z$–HOO)

Consider $z$–HOO tuned with parameters $\nu_1$ and $\rho$ such that Assumptions A1, A2′ and A3 hold for some dissimilarity $\ell$ and the values $\varepsilon_0$, $L$, $C$, $d$. If, in addition, $z \geq \bar{h}$ and $n$ is large enough so that

where

then the following bound holds for the expected regret of $z$–HOO:

The proof, which is a modification of the proof of Theorem 6, can be found in Section A.3 of the Appendix. The main complication arises because the weakened assumptions do not allow one to reason about the smoothness at an arbitrary scale; this is essentially due to the threshold $\varepsilon_0$ used in the formulation of the assumptions. This is why in the proposed variant of HOO we discard nodes located too close to the root (at depth smaller than $z$). Note that in the bound the second term arises from playing in regions corresponding to the descendants of the “poor” nodes located at level $z$. In particular, this term disappears when $z = 0$, in which case we get a bound on the regret of HOO provided that $2\nu_1 < \varepsilon_0$ holds.

Example 5

We consider again the setting of Examples 2, 3, and 4. The domain is $\mathcal{X} = [0,1]^D$ and the mean-payoff function $f$ is defined by $f(x) = 1 - \|x\|_\infty^2$ (that is, $a = 2$ and the norm is the supremum norm). We assume that HOO is run with parameters $\rho = 2^{-2/D}$ and $\nu_1 = 4$. We already proved that Assumptions A1, A2′ and A3 are satisfied with the dissimilarity $\ell(x, y) = \|x - y\|_\infty^2$, the constants $\nu_2 = 1/16$, $L = 4$, $d = 0$, and\footnote{To compute $C$, one can first note that $c = 4\nu_1/\nu_2$; the question at hand for Assumption A3 to be satisfied is therefore to upper bound the number of balls of radius $\sqrt{\varepsilon}$ (w.r.t. the supremum norm $\|\cdot\|_\infty$) that can be packed in a ball of radius $\sqrt{c\,\varepsilon}$, giving rise to the bound $C = c^{D/2}$.} $C = c^{D/2}$, as well as any $\varepsilon_0 > 0$ (that is, with $\bar{h} = 0$). Thus, resorting to Theorem 9 (applied with $z = 0$), we obtain an explicit bound on $\mathbb{E}\bigl[R_n\bigr]$ and get, in particular,
$$ \mathbb{E}\bigl[R_n\bigr] = \widetilde{O}\bigl(\sqrt{n}\bigr)\,. $$
Under the prescribed assumptions, the rate of convergence is of order $\sqrt{n}$ no matter the ambient dimension $D$. Although the rate is independent of $D$, the latter impacts the performance through the multiplicative factor in front of the rate, which is exponential in $D$. This is, however, not an artifact of our analysis, since it is natural that exploration in a $D$–dimensional space comes at a cost exponential in $D$. (The exploration performed by HOO naturally combines an initial global search, which is bound to be exponential in $D$, and a local optimization, whose regret is of the order of $\sqrt{n}$.)

The following theorem is an almost straightforward consequence of Theorem 9 (the detailed proof can be found in Section A.4 of the Appendix). Note that local-HOO does not require the knowledge of the parameter $\varepsilon_0$ in Assumption A2′.

Theorem 10 (Regret bound for local-HOO)

Consider local-HOO and assume that its parameters are tuned such that Assumptions A1, A2′ and A3 hold for some dissimilarity $\ell$. Then the expected regret of local-HOO is bounded (in a distribution-dependent sense) as follows:
$$ \mathbb{E}\bigl[R_n\bigr] = \widetilde{O}\bigl(\sqrt{n}\bigr)\,. $$

4.5 Minimax optimality in metric spaces

In this section we provide two theorems showing the minimax optimality of HOO in metric spaces. The notion of packing dimension is key.

Definition 11 (Packing dimension)

The $\ell$–packing dimension of a set $\mathcal{X}$ (w.r.t. a dissimilarity $\ell$) is defined as
$$ \limsup_{\varepsilon \to 0} \frac{\ln \mathcal{N}(\mathcal{X}, \ell, \varepsilon)}{\ln(1/\varepsilon)}\,. $$
For instance, it is easy to see that whenever $\ell$ is a norm, compact subsets of