Multi-Armed Bandits with Metric Movement Costs

Tomer Koren, Google; tkoren@google.com
Roi Livni, Princeton University; rlivni@cs.princeton.edu
Yishay Mansour, Tel Aviv University and Google Research; mansour@cs.tau.ac.il
Abstract

We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions. The loss of the online learner has two components: the first is the usual loss of the selected actions, and the second is an additional loss due to switching between actions. Our main contribution gives a tight characterization of the expected minimax regret in this setting, in terms of a complexity measure $\mathcal{C}$ of the underlying metric which depends on its covering numbers. In finite metric spaces with $k$ actions, we give an efficient algorithm that achieves regret of the form $\widetilde{O}(\max\{\mathcal{C}^{1/3}T^{2/3}, \sqrt{kT}\})$, and show that this is the best possible. Our regret bound generalizes previously known regret bounds for some special cases: (i) the unit-switching cost regret $\widetilde{\Theta}(k^{1/3}T^{2/3})$, where $\mathcal{C} = \Theta(k)$, and (ii) the interval metric with regret $\widetilde{\Theta}(T^{2/3})$, where $\mathcal{C} = \Theta(1)$. For infinite metric spaces with Lipschitz loss functions, we derive a tight regret bound of $\widetilde{\Theta}(T^{(d+1)/(d+2)})$, where $d \ge 1$ is the Minkowski dimension of the space, which is known to be tight even when there are no switching costs.

1 Introduction

Multi-Armed Bandit (MAB) is perhaps one of the most well studied models of learning with limited feedback. In its simplest form, MAB can be thought of as a game between a learner and an adversary: First, the adversary fixes an arbitrary sequence of loss functions $\ell_1, \ldots, \ell_T$. Then, at each round $t$ the learner chooses an action $x_t$ from a finite set $\mathcal{K}$ of $k$ actions. At the end of each round, the learner gets to observe her loss $\ell_t(x_t)$, and only the loss of her chosen action. The objective of the learner is to minimize her (external) regret, defined as the expected difference between her cumulative loss, $\sum_{t=1}^{T} \ell_t(x_t)$, and the loss of the best action in hindsight, i.e., $\min_{x \in \mathcal{K}} \sum_{t=1}^{T} \ell_t(x)$.

One simplification in the MAB model is the assumption that the learner can switch between actions at no cost; this is in contrast to online algorithms that maintain a state and incur a cost for switching between states. A simple intermediate solution is to add further costs that penalize the learner for moving between actions. (Since we compare the learner to the single best action, the adversary does not move and hence incurs no movement cost.) This approach has been studied in the MAB with unit switching costs [2, 12], where the learner is not only penalized for her loss but also pays a unit cost each time she switches between actions. This simple penalty implicitly encourages algorithms that avoid frequent fluctuations in their decisions. Regulating switching has been successfully applied to many interesting instances such as buffering problems [16], limited-delay lossy coding [19] and dynamic pricing with patient buyers [15].

The unit switching cost assumes that any pair of actions has the same switching cost, which in many scenarios is far from true. For example, consider an ice-cream vendor on a beach, whose actions are to select a location and a price. Clearly, changing location comes at a cost, while changing prices might come with no cost. In this case we can define an interval metric (the coast line) and the movement cost is the distance. A more involved case is a hot-dog vendor in Manhattan, who needs to select a location and a price. Again, it makes sense to charge a switching cost between locations according to their distance, and in this case the Manhattan distance seems the most appropriate. Such settings are at the core of our model for MAB with movement costs. The authors of [24] considered a MAB problem equipped with an interval metric, i.e., the actions are points in $[0,1]$ and the movement cost is the distance between the actions. They proposed a new online algorithm, called the Slowly Moving Bandit (SMB) algorithm, that achieves the optimal regret bound for this setting, and applied it to a dynamic pricing problem with patient buyers to achieve a new tight regret bound.

The objective of this paper is to handle general metric spaces, both finite and infinite. We show how to generalize the SMB algorithm and its analysis to design optimal moving-cost algorithms for any metric over a finite decision space. Our main result identifies an intrinsic complexity measure of the metric space, which we call the covering/packing complexity, and gives a tight characterization of the expected movement regret in terms of the complexity of the underlying metric. In particular, in finite metric spaces of complexity $\mathcal{C}$ with $k$ actions, we give a regret bound of the form $\widetilde{O}(\max\{\mathcal{C}^{1/3}T^{2/3}, \sqrt{kT}\})$ and present an efficient algorithm that achieves it. We also give a matching lower bound that applies to any metric with complexity $\mathcal{C}$.

We extend our results to general continuous metric spaces. For such settings we clearly have to make some assumption about the losses, and we make the rather standard assumption that the losses are Lipschitz with respect to the underlying metric. In this setting our results depend on quite different complexity measures: the upper and lower Minkowski dimensions of the space, thus exhibiting a phase transition between the finite case (which corresponds to Minkowski dimension zero) and the infinite case. Specifically, we give an upper bound on the regret of $\widetilde{O}(T^{(d+1)/(d+2)})$, where $d \ge 1$ is the upper Minkowski dimension. When the upper and lower Minkowski dimensions coincide—which is the case in many natural spaces, such as normed vector spaces—the latter bound matches a lower bound of [10] that holds even when there are no switching costs. Thus, a surprising implication of our result is that in infinite action spaces (of bounded Minkowski dimension), adding movement costs does not add to the complexity of the MAB problem!

Our approach extends the techniques of [24] for the SMB algorithm, which was designed to optimize over an interval metric, which is equivalent to a complete binary Hierarchically well-Separated Tree (HST) metric space. By carefully balancing and regulating its sampling distributions, the SMB algorithm avoids switching between far-apart nodes in the tree, and thereby avoids incurring large movement costs with respect to the associated metric. We show that the SMB regret guarantees are much more general than just binary balanced trees, and give an analysis of the SMB algorithm when applied to general HSTs. As a second step, we show that any general metric can be upper-bounded by a metric from a rich class of trees on which the SMB algorithm can be applied. Finally, we reduce the case of an infinite metric space to the finite case via simple discretization, and show that this reduction gives rise to the Minkowski dimension as a natural complexity measure. All of these constructions turn out to be optimal (up to logarithmic factors), as demonstrated by our matching lower bounds.

1.1 Related Work

Perhaps the most well known classical algorithm for non-stochastic bandits is the Exp3 algorithm [4], which guarantees a regret of $O(\sqrt{kT \log k})$ without movement costs. However, general MAB algorithms come with no guarantee of slow movement between actions. In fact, it is known that in the worst case a large number of switches between actions is to be expected (see [12]).

A simple case of MAB with movement costs is the uniform metric, i.e., when the distance between any two distinct actions is the same. This setting has seen intensive study, both in terms of analyzing optimal regret rates [2, 12] and in terms of applications [16, 19, 15]. Our main technical tool for the lower bounds is the lower bound of Dekel et al. [12], which achieves such a bound for this special case. The general problem of bandits with movement costs was first introduced in [24], where the authors gave an efficient algorithm for a 2-HST binary balanced tree metric, as well as for evenly spaced points on the interval. The main contribution of this paper is a generalization of these results to general metric spaces.

There is a vast and vigorous study of MAB in continuous spaces [23, 11, 5, 10, 31]. These works relate the change in the payoff to the change in the action. Specifically, there has been extensive research on Lipschitz MAB with stochastic payoffs [22, 28, 29, 21, 25], where, roughly, the expected reward is Lipschitz. For applying our results in continuous spaces we too need to assume Lipschitz losses; however, our metric also defines the movement cost between actions, and does not only relate the losses of similar actions. Our general finding is that in Euclidean spaces, one can achieve the same regret bounds when movement costs are introduced. Thus, the SMB algorithm can achieve the optimal regret rate.

One can model our problem as a deterministic Markov Decision Process (MDP), where the states are the MAB actions and in every state there is an action that moves the MDP to a given state (which corresponds to switching actions). The payoff would be the payoff of the MAB action associated with the state plus the movement cost to the next state. The work of Ortner [27] studies deterministic MDPs where the payoffs are stochastic, and also allows for a fixed uniform switching cost. The work of Even-Dar et al. [13] and its extensions [26, 32] studies an MDP where the payoffs are adversarial but there is full information of the payoffs. Later, this work was extended to the bandit model by Neu et al. [26]. This line of work imposes various assumptions regarding the MDP and the benchmark policies; specifically, that the MDP is “mixing” and that the policies considered have full-support stationary distributions, assumptions that clearly fail in our very specific setting.

Bayesian MAB, such as the Gittins index (see [17]), assumes that the payoffs come from some stochastic process. It is known that when there are switching costs, the existence of an optimal index policy is not guaranteed [6]. There have been some works on special cases with a fixed uniform switching cost [1, 3]. The most relevant work is that of Guha and Munagala [18], which, for a general metric over the actions, gives a constant-factor approximation offline algorithm. For a survey of switching costs in this context see [20].

The MAB problem with movement costs is related to the literature on online algorithms and the competitive analysis framework [8]. A prototypical online problem is the Metrical Task System (MTS) presented by Borodin et al. [9]. In a metrical task system there is a collection of states and a metric over the states. Similarly to MAB, the online algorithm at each time step moves to a state, incurs a movement cost according to the metric, and suffers a loss that corresponds to that state. However, unlike MAB, in an MTS the online algorithm is given the loss prior to selecting the new state. Furthermore, competitive analysis has a much more stringent benchmark: the best sequence of actions in retrospect. Like most of the regret minimization literature, we use the best single action in hindsight as a benchmark, aiming for a vanishing average regret.

One of our main technical tools is an approximation from above of a metric via a metric tree (i.e., a 2-HST). HST metrics have been studied extensively in the online algorithms literature, starting with [7]. There, the main goal is to derive a simpler metric representation (using randomized trees) that both upper- and lower-bounds the given metric; the main result is a bound of $O(\log k)$ on the expected stretch of any edge, which is also the best possible [14]. It is noteworthy that for bandit learning, and in contrast with these works, an upper bound on the metric suffices to achieve the optimal regret rate. This is because in online learning we compete against the best static action in hindsight, which does not move at all and hence has zero movement cost. In contrast, in an MTS, where one competes against the best dynamic sequence of actions, one needs both an upper and a lower bound on the metric.

2 Problem Setup and Background

In this section we recall the setting of Multi-armed Bandit with Movement Costs introduced in [24], and review the necessary background required to state our main results.

2.1 Multi-armed Bandits with Movement Costs

In the Multi-armed Bandits (MAB) with Movement Costs problem, we consider a game between an online learner and an adversary continuing for $T$ rounds. There is a set $\mathcal{K}$, possibly infinite, of actions (or “arms”) that the learner can choose from. The set of actions is equipped with a fixed and known metric $\Delta$ that determines a cost $\Delta(x, y) \in [0, 1]$ for moving between any pair of actions $x, y \in \mathcal{K}$.

Before the game begins, an adversary fixes a sequence $\ell_1, \ldots, \ell_T$ of loss functions assigning loss values in $[0,1]$ to actions in $\mathcal{K}$ (in particular, we assume an oblivious adversary). Then, on each round $t = 1, \ldots, T$, the learner picks an action $x_t \in \mathcal{K}$, possibly at random. At the end of each round $t$, the learner gets to observe her loss (namely, $\ell_t(x_t)$) and nothing else. In contrast with the standard MAB setting, in addition to the loss the learner suffers an additional cost due to her movement between actions, which is determined by the metric and is equal to $\Delta(x_t, x_{t-1})$. Thus, the total cost at round $t$ is given by $\ell_t(x_t) + \Delta(x_t, x_{t-1})$.

The goal of the learner, over the course of $T$ rounds of the game, is to minimize her expected movement regret, which is defined as the difference between her (expected) total costs and the total costs of the best fixed action in hindsight (which incurs no movement costs); namely, the movement regret with respect to a sequence of loss functions $\ell_1, \ldots, \ell_T$ and a metric $\Delta$ equals

\[
\mathrm{Regret}_{\mathrm{MC}} \;=\; \mathbb{E}\Bigg[ \sum_{t=1}^{T} \ell_t(x_t) + \sum_{t=2}^{T} \Delta(x_t, x_{t-1}) \Bigg] \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} \ell_t(x) .
\]
Here, the expectation is taken with respect to the learner’s randomization in choosing the actions $x_1, \ldots, x_T$; notice that, as we assume an oblivious adversary, the loss functions are deterministic and cannot depend on the learner’s randomization.
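To make the cost accounting concrete, the following minimal sketch (our own illustration; the data and the switching-heavy strategy are placeholders) computes the movement regret of a fixed sequence of plays against the best fixed action:

```python
import numpy as np

def movement_regret(losses, dist, plays):
    """Movement regret of a play sequence against the best fixed action.

    losses: (T, k) array, losses[t, x] is the loss of action x at round t
    dist:   (k, k) array, the metric Delta over the k actions
    plays:  length-T sequence of chosen actions x_1, ..., x_T
    """
    T = len(plays)
    play_loss = sum(losses[t, plays[t]] for t in range(T))
    move_cost = sum(dist[plays[t - 1], plays[t]] for t in range(1, T))
    best_fixed = losses.sum(axis=0).min()  # the comparator never moves
    return play_loss + move_cost - best_fixed

# Tiny example: two actions at distance 1 (i.e., unit switching costs).
rng = np.random.default_rng(0)
losses = rng.random((100, 2))
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
plays = rng.integers(0, 2, size=100)  # a naive strategy that switches often
print(movement_regret(losses, dist, plays))
```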

2.2 Basic Definitions in Metric Spaces

We recall basic notions of metric spaces that govern the regret in the MAB with movement costs setting. Throughout, we assume a bounded metric space $(\mathcal{K}, \Delta)$, where for normalization we assume $\Delta(x, y) \le 1$ for all $x, y \in \mathcal{K}$. Given a point $x \in \mathcal{K}$ and a radius $r > 0$, we denote by $B(x, r) = \{ y \in \mathcal{K} : \Delta(x, y) \le r \}$ the ball of radius $r$ around $x$.

The following definitions are standard.

Definition 1 (Packing numbers).

A subset $P \subseteq \mathcal{K}$ in a metric space $(\mathcal{K}, \Delta)$ is an $\epsilon$-packing if the sets $\{ B(x, \epsilon) : x \in P \}$ are disjoint. The $\epsilon$-packing number of $\Delta$, denoted $N_p(\epsilon)$, is the maximum cardinality of any $\epsilon$-packing of $\mathcal{K}$.

Definition 2 (Covering numbers).

A subset $C \subseteq \mathcal{K}$ in a metric space $(\mathcal{K}, \Delta)$ is an $\epsilon$-covering if $\mathcal{K} \subseteq \bigcup_{x \in C} B(x, \epsilon)$. The $\epsilon$-covering number of $\Delta$, denoted $N_c(\epsilon)$, is the minimum cardinality of any $\epsilon$-covering of $\mathcal{K}$.
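As an illustration of the two definitions, a greedy $\epsilon$-net procedure on a finite metric produces both objects at once: its centers are pairwise more than $\epsilon$ apart (an $(\epsilon/2)$-packing) and their $\epsilon$-balls cover the space (an $\epsilon$-covering). A minimal sketch, assuming the metric is given as a distance matrix:

```python
import numpy as np

def greedy_epsilon_net(dist, eps):
    """Return the centers of a greedy eps-net of a finite metric space.

    dist: (k, k) symmetric distance matrix.
    The centers are pairwise > eps apart, so their (eps/2)-balls are
    disjoint (an (eps/2)-packing), and every point lies within eps of
    some center (an eps-covering).  In particular, the number of
    centers is sandwiched between N_c(eps) and N_p(eps/2).
    """
    k = dist.shape[0]
    centers = []
    uncovered = np.ones(k, dtype=bool)
    while uncovered.any():
        c = int(np.flatnonzero(uncovered)[0])  # any uncovered point will do
        centers.append(c)
        uncovered &= dist[c] > eps  # points within eps of c are now covered
    return centers

# Example: 11 evenly spaced points on [0, 1] with the line metric.
pts = np.linspace(0, 1, 11)
dist = np.abs(pts[:, None] - pts[None, :])
print(greedy_epsilon_net(dist, 0.25))
```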

Tree metrics and HSTs.

We recall the notion of a tree metric, and in particular, a metric induced by a Hierarchically well-Separated Tree (HST); see [7] for more details. Any weighted tree defines a metric over its vertices, by considering the shortest path between any two nodes. An HST (a 2-HST, to be precise) is a rooted weighted tree such that: 1) the edge weight from any node to each of its children is the same, and 2) the edge weights along any path from the root to a leaf decrease by a factor of 2 per edge. We will also assume that all leaves are at the same depth in the tree (this does not imply that the tree is complete).

Given a tree $\mathcal{T}$ we let $\mathrm{depth}(\mathcal{T})$ denote its height, which is the maximal length of a path from any leaf to the root. Let $\mathrm{lev}(v)$ be the level of a node $v$, where the level of the leaves is $0$ and the level of the root is $\mathrm{depth}(\mathcal{T})$. Given leaves $x, y$, let $\mathrm{LCA}(x, y)$ be their least common ancestor node in $\mathcal{T}$.

The metric we define next is equivalent (up to a constant factor) to the standard tree metric induced over the leaves by an HST. By a slight abuse of terminology, we will call it an HST metric:

Definition 3 (HST metric).

Let $\mathcal{K}$ be a finite set and let $\mathcal{T}$ be a tree whose leaves are at the same depth and are indexed by the elements of $\mathcal{K}$. Then the HST metric $\Delta_{\mathcal{T}}$ over $\mathcal{K}$ induced by the tree $\mathcal{T}$ is defined as follows: for all $x \ne y$,

\[
\Delta_{\mathcal{T}}(x, y) \;=\; 2^{\,\mathrm{lev}(\mathrm{LCA}(x, y)) - \mathrm{depth}(\mathcal{T})} ,
\]

and $\Delta_{\mathcal{T}}(x, x) = 0$.

For an HST metric $\Delta_{\mathcal{T}}$, observe that the packing and covering numbers are simple to characterize: for all $0 \le h \le \mathrm{depth}(\mathcal{T})$ we have that

\[
N_p(\epsilon) \;=\; N_c(\epsilon) \;=\; \big| \{ v \in \mathcal{T} : \mathrm{lev}(v) = h \} \big|
\qquad \text{for } 2^{\,h - \mathrm{depth}(\mathcal{T})} \le \epsilon < 2^{\,h + 1 - \mathrm{depth}(\mathcal{T})} .
\]
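For example (using the normalization above and our reconstruction of Definition 3), for a complete binary 2-HST of depth $H$ with $k = 2^H$ leaves there are $2^{H-h}$ nodes at level $h$, so

\[
\epsilon \, N_c(\epsilon) \;\approx\; 2^{\,h-H} \cdot 2^{\,H-h} \;=\; 1
\qquad \text{for } \epsilon \approx 2^{\,h-H} ,
\]

i.e., the complete binary tree has constant complexity in the sense of the definitions that follow; this is the regime of the interval metric discussed in the introduction.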

Complexity measures for finite metric spaces.

We next define the two notions of complexity that, as we will later see, govern the regret of MAB with metric movement costs.

Definition 4 (covering complexity).

The covering complexity of a metric space $(\mathcal{K}, \Delta)$, denoted $\mathcal{C}_c(\mathcal{K})$, is given by

\[
\mathcal{C}_c(\mathcal{K}) \;=\; \sup_{0 < \epsilon < 1} \, \epsilon \cdot N_c(\epsilon) .
\]

Definition 5 (packing complexity).

The packing complexity of a metric space $(\mathcal{K}, \Delta)$, denoted $\mathcal{C}_p(\mathcal{K})$, is given by

\[
\mathcal{C}_p(\mathcal{K}) \;=\; \sup_{0 < \epsilon < 1} \, \epsilon \cdot N_p(\epsilon) .
\]

For an HST metric, the two complexity measures coincide, as its packing and covering numbers are the same. Therefore, for an HST metric $\Delta_{\mathcal{T}}$ we will simply denote the complexity of $\mathcal{T}$ by $\mathcal{C}(\mathcal{T})$. In fact, it is known that in any metric space $N_p(2\epsilon) \le N_c(\epsilon) \le N_p(\epsilon)$ for all $\epsilon > 0$. Thus, for a general metric space we obtain that

\[
\tfrac{1}{2}\, \mathcal{C}_p(\mathcal{K}) \;\le\; \mathcal{C}_c(\mathcal{K}) \;\le\; \mathcal{C}_p(\mathcal{K}) . \tag{1}
\]
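As a sanity check of the definitions (as reconstructed above), consider the uniform metric over $k$ points, where $\Delta(x, y) = 1$ for all $x \ne y$. Every ball of radius $\epsilon < 1$ is a singleton, so

\[
N_c(\epsilon) = k \ \text{ for all } 0 < \epsilon < 1 ,
\qquad \text{hence} \qquad
\mathcal{C}_c(\mathcal{K}) = \sup_{0 < \epsilon < 1} \epsilon \cdot k = k ,
\]

which is the $\mathcal{C} = \Theta(k)$ regime of the unit-switching-cost bound mentioned in the abstract.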

Complexity measures for infinite metric spaces.

For infinite metric spaces, we require the following definition.

Definition 6 (Minkowski dimensions).

Let $(\mathcal{K}, \Delta)$ be a bounded metric space. The upper Minkowski dimension of $\mathcal{K}$, denoted $\overline{\dim}(\mathcal{K})$, is defined as

\[
\overline{\dim}(\mathcal{K}) \;=\; \limsup_{\epsilon \to 0} \, \frac{\log N_c(\epsilon)}{\log(1/\epsilon)} .
\]

Similarly, the lower Minkowski dimension is denoted by $\underline{\dim}(\mathcal{K})$ and is defined as

\[
\underline{\dim}(\mathcal{K}) \;=\; \liminf_{\epsilon \to 0} \, \frac{\log N_c(\epsilon)}{\log(1/\epsilon)} .
\]

We refer to [30] for more background on the Minkowski dimensions and related notions in metric spaces theory.
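For instance, for the unit cube $[0,1]^d$ with a norm-induced metric we have $N_c(\epsilon) = \Theta(\epsilon^{-d})$, so the two dimensions coincide:

\[
\overline{\dim}\big([0,1]^d\big) \;=\; \underline{\dim}\big([0,1]^d\big)
\;=\; \lim_{\epsilon \to 0} \frac{\log \Theta(\epsilon^{-d})}{\log(1/\epsilon)} \;=\; d .
\]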

3 Main Results

We now state the main results of the paper, which give a complete characterization of the expected regret in the MAB with movement costs problem.

3.1 Finite Metric Spaces

The following are the main results of the paper. Detailed proofs are provided in Appendix A.

Theorem 7 (Upper Bound).

Let $(\mathcal{K}, \Delta)$ be a finite metric space over $k$ elements with diameter at most $1$ and covering complexity $\mathcal{C}_c = \mathcal{C}_c(\mathcal{K})$. There exists an algorithm that for any sequence of loss functions $\ell_1, \ldots, \ell_T$ guarantees that

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{O}\Big( \max\big\{ \mathcal{C}_c^{1/3} T^{2/3} ,\ \sqrt{kT} \big\} \Big) .
\]

Theorem 8 (Lower Bound).

Let $(\mathcal{K}, \Delta)$ be a finite metric space over $k$ elements with diameter $\Omega(1)$ and packing complexity $\mathcal{C}_p = \mathcal{C}_p(\mathcal{K})$. For any algorithm there exists a sequence of loss functions $\ell_1, \ldots, \ell_T$ such that

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{\Omega}\Big( \max\big\{ \mathcal{C}_p^{1/3} T^{2/3} ,\ \sqrt{kT} \big\} \Big) .
\]

Recalling Eq. 1, we see that the regret bounds obtained in Theorems 8 and 7 match up to logarithmic factors. Notice that the tightness is achieved per instance; namely, for any given metric we are able to fully characterize the regret’s rate of growth as a function of the intrinsic properties of the metric. (In particular, this is substantially stronger than demonstrating a specific metric for which the upper bound cannot be improved.) Note that for the lower bound statement in Theorem 8 we require that the diameter of $\mathcal{K}$ is bounded away from zero, where for simplicity we assume a constant bound. Such an assumption is necessary to avoid degenerate metrics. Indeed, when the diameter is very small, the problem reduces to the standard MAB setting without any additional costs and we obtain a regret rate of $\widetilde{\Theta}(\sqrt{kT})$.

Notice how the above results extend known instances of the problem from previous work: for uniform movement costs (i.e., unit switching costs) over $k$ actions we have $\mathcal{C}_c = \Theta(k)$, so that the obtained bound is $\widetilde{\Theta}(k^{1/3} T^{2/3})$ (for $T \ge k$), which recovers the results in [2, 12]; and for a 2-HST binary balanced tree with $k$ leaves, we have $\mathcal{C} = \Theta(1)$ and the resulting bound is $\widetilde{\Theta}(\max\{ T^{2/3}, \sqrt{kT} \})$, which is identical to the bound proved in [24].
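Assuming the general bound as reconstructed above, both specializations follow by direct substitution:

\[
\mathcal{C} = \Theta(k): \quad \max\big\{ k^{1/3} T^{2/3} ,\, \sqrt{kT} \big\} = k^{1/3} T^{2/3} \ \text{ whenever } T \ge k ;
\qquad
\mathcal{C} = \Theta(1): \quad \max\big\{ T^{2/3} ,\, \sqrt{kT} \big\} = T^{2/3} \ \text{ whenever } T \ge k^3 .
\]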

The 2-HST regret bound in [24] was primarily used to obtain regret bounds for the action space $[0,1]$. In the next section we show how this technique extends to infinite metric spaces, yielding regret bounds that depend on the dimensionality of the action space.

3.2 Infinite Metric Spaces

When $\mathcal{K}$ is an infinite metric space, without additional constraints on the loss functions the problem becomes ill-posed, with a linear regret rate, even without movement costs. Therefore, one has to make additional assumptions on the loss functions in order to achieve sublinear regret. One natural assumption, which is common in previous work, is that the loss functions are all $1$-Lipschitz with respect to the metric $\Delta$. Under this assumption, we have the following result.

Theorem 9.

Let $(\mathcal{K}, \Delta)$ be a metric space with diameter at most $1$ and upper Minkowski dimension $d = \overline{\dim}(\mathcal{K})$, such that $d \ge 1$. There exists a strategy that for any sequence of loss functions $\ell_1, \ldots, \ell_T$, which are all $1$-Lipschitz with respect to $\Delta$, guarantees that

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{O}\big( T^{\frac{d+1}{d+2}} \big) .
\]

Again, we observe that the above result extends the case of $\mathcal{K} = [0,1]$, where $d = 1$. Indeed, for Lipschitz functions over the interval a tight regret bound of $\widetilde{\Theta}(T^{2/3})$ was achieved in [24], which is exactly the bound we obtain above.

We mention that a lower bound of $\Omega(T^{(d+1)/(d+2)})$ is known for MAB in metric spaces with Lipschitz cost functions—even without movement costs—where $d$ is the lower Minkowski dimension.

Theorem 10 (Bubeck et al. [10]).

Let $(\mathcal{K}, \Delta)$ be a metric space with diameter at most $1$ and lower Minkowski dimension $d = \underline{\dim}(\mathcal{K})$, such that $d \ge 1$. Then for any learning algorithm, there exists a sequence of loss functions $\ell_1, \ldots, \ell_T$, which are all $1$-Lipschitz with respect to $\Delta$, such that the regret (without movement costs) is $\Omega\big( T^{\frac{d+1}{d+2}} \big)$.

In many natural metric spaces in which the upper and lower Minkowski dimensions coincide (e.g., normed spaces), the bound of Theorem 9 is tight up to logarithmic factors in $T$. In particular, and quite surprisingly, we see that the movement costs do not add to the regret of the problem!

It is important to note that Theorem 9 holds only for metric spaces whose (upper) Minkowski dimension is at least $1$. Indeed, finite metric spaces are of Minkowski dimension zero, and as we demonstrated in Section 3.1 above, an $\widetilde{O}(\sqrt{T})$ regret bound (the rate the theorem would suggest for $d = 0$) is not achievable in general. Finite metric spaces are associated with a complexity measure that is very different from the Minkowski dimension (i.e., the covering/packing complexity). In other words, we exhibit a phase transition in the rate of growth of the regret induced by the metric between dimension zero and dimension one.

4 Algorithms

In this section we turn to prove Theorem 7. Our strategy is much inspired by the approach of [24], and we employ a two-step approach: first, we consider the case where the metric is an HST metric; we then turn to deal with general metrics, and show how to upper-bound any metric by an HST metric.

4.1 Tree Metrics: The Slowly-Moving Bandit Algorithm

In this section we analyze the simplest case of the problem, in which the metric is induced by an HST (whose leaves are associated with the actions in $\mathcal{K}$). In this case, our main tool is the Slowly-Moving Bandit (SMB) algorithm [24]: we demonstrate how it can be applied to general tree metrics, and analyze its performance in terms of intrinsic properties of the metric.

We begin by reviewing the SMB algorithm. In order to present the algorithm we require a few additional notations. The algorithm receives as input a tree structure over the set of actions $\mathcal{K}$, and its operation depends on this tree structure. We fix an HST tree $\mathcal{T}$ and let $H = \mathrm{depth}(\mathcal{T})$. For any level $0 \le h \le H$ and action $x \in \mathcal{K}$, let $A_h(x)$ be the set of leaves of $\mathcal{T}$ that share a common ancestor with $x$ at level $h$ (recall that level $0$ is the bottom-most level, corresponding to the singletons). In terms of the tree metric we have $A_h(x) = B(x, 2^{h-H})$.

The SMB algorithm is presented in Algorithm 1. The algorithm is based on the multiplicative update method, in the spirit of the Exp3 algorithm [4]. Similarly to Exp3, the algorithm computes at each round an estimator of the loss vector using the single loss value observed. In addition to being an (almost) unbiased estimate of the true loss vector, the estimator used by SMB has the additional property of inducing slowly-changing sampling distributions: this is achieved by choosing at random a level of the tree to be rebalanced (in terms of the weights maintained by the algorithm); as a result, the marginal probabilities of the subtrees above the chosen level do not change between consecutive rounds.

In turn, and in contrast with Exp3, the algorithm’s choice of action at the next round is not purely sampled from the new distribution, but rather conditioned on its last choice of level. This is informally justified by the fact that the two consecutive distributions agree on the relevant marginals; hence we can think of the level drawn at round $t$ as if it were drawn subject to the new distribution.

Input: A tree $\mathcal{T}$ with a finite set of leaves $\mathcal{K}$, and a step size $\eta > 0$.
Initialize: the weight vector and the initial sampling distribution $p_1$.
For $t = 1, \ldots, T$:
  1. Choose action $x_t \sim p_t$, and observe the loss $\ell_t(x_t)$.
  2. Choose a level $h_t$ uniformly at random, and form the importance-weighted loss estimate.
  3. Compute the estimate vectors recursively over the levels of the tree.
  4. Update the weights multiplicatively and rebalance level $h_t$ to obtain $p_{t+1}$.

Algorithm 1: The SMB algorithm.
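SMB builds on the Exp3 multiplicative-weights skeleton, modifying only the loss estimator and the sampling step. For reference, here is a minimal sketch of that underlying skeleton (vanilla Exp3, not the full SMB with its level-rebalancing logic):

```python
import numpy as np

def exp3(T, k, loss_fn, rng=None):
    """Vanilla Exp3 (Auer et al. [4]).  SMB keeps this structure, but
    replaces the estimator so that the sampling distribution, and hence
    the chosen arm, changes slowly from round to round."""
    rng = rng or np.random.default_rng(0)
    eta = np.sqrt(2.0 * np.log(k) / (T * k))  # standard step size
    weights = np.ones(k)
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()
        x = rng.choice(k, p=p)
        loss = loss_fn(t, x)       # only the chosen arm's loss is observed
        est = np.zeros(k)
        est[x] = loss / p[x]       # importance-weighted loss estimate
        weights *= np.exp(-eta * est)
        total_loss += loss
    return total_loss
```

Note that nothing in this skeleton discourages switching: the arm is resampled independently every round, which is exactly what SMB's conditional sampling is designed to avoid.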

A key observation is that by directly applying SMB to the HST metric $\Delta_{\mathcal{T}}$, we can achieve the following regret bound:

Theorem 11.

Let $(\mathcal{K}, \Delta_{\mathcal{T}})$ be a metric space defined by a 2-HST $\mathcal{T}$ with depth $H$ and complexity $\mathcal{C} = \mathcal{C}(\mathcal{T})$. Using the SMB algorithm we can achieve the following regret bound:

(2)

To show Theorem 11, we adapt the analysis of [24] (which applies only to complete binary HSTs) to handle more general HSTs. We defer this part of our analysis to the appendix, since it follows from a technical modification of the original proof; for the proof of Theorem 11, see Appendix B.

For a tree that is either too deep or too shallow, Eq. 2 may not lead to a sublinear regret bound, let alone an optimal one. The main idea behind achieving an optimal regret bound for a general tree is to modify the tree until one of two things happens: either we have optimized the depth so that the two terms on the right-hand side of Eq. 2 are of the same order, in which case we show that one can achieve a regret rate of order $\sqrt{kT}$; or, failing that, we show that the first term on the right-hand side is the dominant one, and it will be of order $\mathcal{C}^{1/3} T^{2/3}$.

For trees that are in some sense “well behaved” we have the following Corollary of Theorem 11 (for a proof see Section A.1).

Corollary 12.

Let $(\mathcal{K}, \Delta_{\mathcal{T}})$ be a metric space defined by a tree $\mathcal{T}$ over $k$ leaves with depth $H$ and complexity $\mathcal{C} = \mathcal{C}(\mathcal{T})$. Assume that $\mathcal{T}$ satisfies the following:

  1. ;

  2. One of the following is true:

    1. ;

    2. .

Then, the SMB algorithm can be used to attain $\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] = \widetilde{O}\big( \max\{ \mathcal{C}^{1/3} T^{2/3}, \sqrt{kT} \} \big)$.

The following establishes Theorem 7 for the special case of tree metrics (see Section A.2 for proofs).

Lemma 13.

For any tree $\mathcal{T}$ and time horizon $T$, there exists a tree $\mathcal{T}'$ (over the same set of leaves) that satisfies the conditions of Corollary 12, such that $\Delta_{\mathcal{T}'}$ dominates $\Delta_{\mathcal{T}}$ and $\mathcal{C}(\mathcal{T}') = O(\mathcal{C}(\mathcal{T}))$. Furthermore, $\mathcal{T}'$ can be constructed efficiently from $\mathcal{T}$ (i.e., in time polynomial in $k$ and $T$). Hence, applying SMB to the metric space $(\mathcal{K}, \Delta_{\mathcal{T}'})$ leads to $\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] = \widetilde{O}\big( \max\{ \mathcal{C}^{1/3} T^{2/3}, \sqrt{kT} \} \big)$.

4.2 General Finite Metrics

Finally, we obtain the general finite case as a corollary of the following.

Lemma 14.

Let $(\mathcal{K}, \Delta)$ be a finite metric space. There exists a tree metric $\Delta_{\mathcal{T}}$ over $\mathcal{K}$ (with diameter at most $1$) that dominates $\Delta$ (i.e., such that $\Delta_{\mathcal{T}}(x, y) \ge \Delta(x, y)$ for all $x, y \in \mathcal{K}$) and for which $\mathcal{C}(\mathcal{T}) = \widetilde{O}(\mathcal{C}_c(\mathcal{K}))$. Furthermore, $\mathcal{T}$ can be constructed efficiently.

Proof.

Let $H$ be such that the minimal distance in $\Delta$ is larger than $2^{-H}$. For each $h = 0, 1, \ldots, H$ we let $C_h$ be a covering of $\mathcal{K}$ using balls of radius $2^{-h}$. Note that finding a minimal set of balls of radius $2^{-h}$ that covers $\mathcal{K}$ is exactly the set cover problem. Hence, we can efficiently approximate it (to within an $O(\log k)$ factor) and construct the sets $C_h$.

We now construct a tree graph whose nodes are associated with the cover balls: the leaves correspond to singleton balls, and hence correspond to the action space. For each leaf $x$ we find a ball in $C_{H-1}$ whose center is within distance $2^{-(H-1)}$ of $x$; if there is more than one, we arbitrarily choose one, and we connect an edge between $x$ and that ball. We continue in this manner inductively over the levels: for every ball $B \in C_h$ we find a ball $B' \in C_{h-1}$ whose center is within distance $2^{-(h-1)}$ of the center of $B$, and we connect an edge from $B$ to $B'$.

We now claim that the metric induced by the tree graph dominates, up to a constant factor, the original metric. Let $x, y \in \mathcal{K}$ be leaves whose least common ancestor in the tree is a ball of $C_h$. By construction, there are sequences of ball centers connecting each of $x$ and $y$ to that common ancestor, in which the step from a level-$j$ node to its level-$(j-1)$ parent has length at most $2^{-(j-1)}$. Summing these distances along the two paths via the triangle inequality, the geometric series gives $\Delta(x, y) \le 4 \cdot 2^{-h}$, which is a constant times the tree distance between $x$ and $y$, as claimed.
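A minimal sketch of this construction (with greedy covers in place of the set-cover approximation; all names are ours):

```python
import numpy as np

def build_cover_tree(dist):
    """Sketch of Lemma 14: link greedy 2^{-h}-covers level by level.

    dist: (k, k) distance matrix, diameter <= 1.
    Returns the list of centers per level and, for each level h >= 1,
    the parent (a level h-1 center) of every level-h center.
    """
    k = dist.shape[0]
    min_dist = dist[dist > 0].min()
    H = int(np.ceil(np.log2(1.0 / min_dist))) + 1  # 2^{-H} < minimal distance

    def greedy_cover(radius):
        centers, uncovered = [], np.ones(k, dtype=bool)
        while uncovered.any():
            c = int(np.flatnonzero(uncovered)[0])
            centers.append(c)
            uncovered &= dist[c] > radius
        return centers

    levels = [greedy_cover(2.0 ** (-h)) for h in range(H + 1)]
    # levels[H] consists of all k points (balls below the minimal distance).
    parents = []
    for h in range(1, H + 1):
        # Attach each level-h center to the nearest level-(h-1) center;
        # since level h-1 is a 2^{-(h-1)}-cover, it lies within that radius.
        parents.append([min(levels[h - 1], key=lambda p, c=c: dist[p, c])
                        for c in levels[h]])
    return levels, parents
```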

4.3 Infinite Metric Spaces

Finally, we address infinite spaces by discretizing the space and reducing to the finite case. Recall that in this case we also assume that the loss functions are Lipschitz.

Proof of Theorem 9.

Given the definition of the upper Minkowski (covering) dimension $d = \overline{\dim}(\mathcal{K})$, it is straightforward that for some constant $c$ (that might depend on the metric $\Delta$) it holds that $N_c(\epsilon) \le c\, \epsilon^{-d}$ for all $\epsilon > 0$ (we suppress an arbitrarily small slack in the exponent). Fix some $\epsilon > 0$, and take a minimal $\epsilon$-covering $\mathcal{K}_\epsilon$ of $\mathcal{K}$, of size $N_c(\epsilon)$. Observe that by restricting the algorithm to pick actions from $\mathcal{K}_\epsilon$, we lose at most $\epsilon T$ in the regret, by the Lipschitz assumption. Also, since the covering is minimal, the distance between any two elements in $\mathcal{K}_\epsilon$ is at least $\epsilon$; thus the covering complexity of the space $\mathcal{K}_\epsilon$ satisfies

\[
\mathcal{C}_c(\mathcal{K}_\epsilon) \;=\; \sup_{0 < \epsilon' < 1} \epsilon' \, N_c(\mathcal{K}_\epsilon, \epsilon') \;=\; O\big( \epsilon^{1-d} \big) ,
\]

as we assume that $d \ge 1$. Hence, by Theorem 7 and the Lipschitz assumption, there exists an algorithm for which

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{O}\Big( \epsilon T + \max\big\{ \epsilon^{(1-d)/3} T^{2/3} ,\ \sqrt{\epsilon^{-d} T} \big\} \Big) .
\]

A simple computation reveals that $\epsilon = \Theta(T^{-1/(d+2)})$ optimizes the above bound, and leads to $\widetilde{O}\big( T^{(d+1)/(d+2)} \big)$ movement regret. ∎
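For completeness, the balancing computation behind the last step (using the bound as reconstructed above): with $\epsilon = T^{-1/(d+2)}$,

\[
\epsilon T = T^{\frac{d+1}{d+2}}, \qquad
\epsilon^{\frac{1-d}{3}} T^{\frac{2}{3}} = T^{\frac{d-1}{3(d+2)} + \frac{2}{3}} = T^{\frac{d+1}{d+2}}, \qquad
\sqrt{\epsilon^{-d} T} = T^{\frac{d}{2(d+2)} + \frac{1}{2}} = T^{\frac{d+1}{d+2}},
\]

so all three terms are of the same order, and no other choice of $\epsilon$ improves the maximum.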

Acknowledgements

RL is supported in part by funds from the Eric and Wendy Schmidt Foundation for strategic innovations. YM is supported in part by a grant from the Israel Science Foundation, a grant from the United States-Israel Binational Science Foundation (BSF), and the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11).

References

  • Agrawal et al. [1988] R. Agrawal, M. V. Hegde, and D. Teneketzis. Asymptotically efficient adaptive allocation rules for the multiarmed bandit problem with switching costs. IEEE Transactions on Automatic Control, 33(10):899–906, 1988.
  • Arora et al. [2012] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1503–1510, 2012.
  • Asawa and Teneketzis [1996] M. Asawa and D. Teneketzis. Multi-armed bandits with switching penalties. IEEE Transactions on Automatic Control, 41(3):328–348, 1996.
  • Auer et al. [2002] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
  • Auer et al. [2007] P. Auer, R. Ortner, and C. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. Proceedings of the 20th Annual Conference on Learning Theory, pages 454–468, 2007.
  • Banks and Sundaram [1994] J. S. Banks and R. K. Sundaram. Switching costs and the gittins index. Econometrica, 62:687–694, 1994.
  • Bartal [1996] Y. Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, Vermont, USA, 14-16 October, 1996, pages 184–193, 1996.
  • Borodin and El-Yaniv [1998] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
  • Borodin et al. [1992] A. Borodin, N. Linial, and M. E. Saks. An optimal on-line algorithm for metrical task system. Journal of the ACM (JACM), 39(4):745–763, 1992.
  • Bubeck et al. [2011] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. $\mathcal{X}$-armed bandits. Journal of Machine Learning Research, 12:1587–1627, 2011.
  • Cope [2009] E. Cope. Regret and convergence bounds for a class of continuum-armed bandit problems. IEEE Transactions on Automatic Control, 54(6):1243–1253, 2009.
  • Dekel et al. [2014] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: $T^{2/3}$ regret. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 459–467. ACM, 2014.
  • Even-Dar et al. [2009] E. Even-Dar, S. M. Kakade, and Y. Mansour. Online markov decision processes. Math. Oper. Res., 34(3):726–736, 2009.
  • Fakcharoenphol et al. [2004] J. Fakcharoenphol, S. Rao, and K. Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485–497, 2004.
  • Feldman et al. [2016] M. Feldman, T. Koren, R. Livni, Y. Mansour, and A. Zohar. Online pricing with strategic and patient buyers. In Annual Conference on Neural Information Processing Systems, 2016.
  • Geulen et al. [2010] S. Geulen, B. Vöcking, and M. Winkler. Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, pages 132–143, 2010.
  • Gittins et al. [2011] J. Gittins, K. Glazebrook, and R. Weber. Multi-Armed Bandit Allocation Indices, 2nd Edition. John Wiley, 2011.
  • Guha and Munagala [2009] S. Guha and K. Munagala. Multi-armed bandits with metric switching costs. In International Colloquium on Automata, Languages, and Programming, pages 496–507. Springer, 2009.
  • Gyorgy and Neu [2014] A. Gyorgy and G. Neu. Near-optimal rates for limited-delay universal lossy source coding. IEEE Transactions on Information Theory, 60(5):2823–2834, 2014.
  • Jun [2004] T. Jun. A survey on the bandit problem with switching costs. De Economist, 152(4):513–541, 2004.
  • Kleinberg and Slivkins [2010] R. Kleinberg and A. Slivkins. Sharp dichotomies for regret minimization in metric spaces. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms, pages 827–846. Society for Industrial and Applied Mathematics, 2010.
  • Kleinberg et al. [2008] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 681–690. ACM, 2008.
  • Kleinberg [2004] R. D. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In Advances in Neural Information Processing Systems, pages 697–704, 2004.
  • Koren et al. [2017] T. Koren, R. Livni, and Y. Mansour. Bandits with movement costs and adaptive pricing. In COLT, 2017.
  • Magureanu et al. [2014] S. Magureanu, R. Combes, and A. Proutiere. Lipschitz bandits: Regret lower bound and optimal algorithms. In COLT, pages 975–999, 2014.
  • Neu et al. [2014] G. Neu, A. György, C. Szepesvári, and A. Antos. Online markov decision processes under bandit feedback. IEEE Trans. Automat. Contr., 59(3):676–691, 2014.
  • Ortner [2010] R. Ortner. Online regret bounds for markov decision processes with deterministic transitions. Theor. Comput. Sci., 411(29-30):2684–2695, 2010.
  • Slivkins [2011] A. Slivkins. Multi-armed bandits on implicit metric spaces. In Advances in Neural Information Processing Systems, pages 1602–1610, 2011.
  • Slivkins et al. [2013] A. Slivkins, F. Radlinski, and S. Gollapudi. Ranked bandits in metric spaces: learning diverse rankings over large document collections. Journal of Machine Learning Research, 14(Feb):399–436, 2013.
  • Tao [2009] T. Tao. 245c, notes 5: Hausdorff dimension. http://terrytao.wordpress.com/2009/05/19/245c-notes-5-hausdorff-dimension-optional/, 2009.
  • Yu and Mannor [2011] J. Yu and S. Mannor. Unimodal bandits. In Proceedings of the 28th International Conference on Machine Learning, 2011.
  • Yu et al. [2009] J. Y. Yu, S. Mannor, and N. Shimkin. Markov decision processes with arbitrary reward processes. Math. Oper. Res., 34(3):737–757, Aug. 2009. ISSN 0364-765X.

Appendix A Proofs

A.1 Proof of Corollary 12

Corollary.

Let $(\mathcal{K}, \Delta_{\mathcal{T}})$ be a metric space defined by a tree $\mathcal{T}$ over $k$ leaves with depth $H$ and complexity $\mathcal{C} = \mathcal{C}(\mathcal{T})$. Assume that $\mathcal{T}$ satisfies the following:

  1. ;

  2. One of the following is true:

    1. ;

    2. .

Then, the SMB algorithm can be used to attain regret bounded as $\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] = \widetilde{O}\big( \max\{ \mathcal{C}^{1/3} T^{2/3}, \sqrt{kT} \} \big)$.

Proof.

Notice that by Item 1 and Theorem 11 we have the bound of Eq. 2 up to a constant factor. First, assume that Item 2a holds. We thus obtain:

(3)

The next case is that Item 2b holds. By reordering, we can rewrite Item 2b as:

(4)

which in turn implies:

(5)

Overall, we see that in both cases the regret is bounded by the maximum of the two terms in Eqs. 3 and 5. ∎

A.2 Proof of Lemma 13

Lemma.

For any tree $\mathcal{T}$ and time horizon $T$, there exists a tree $\mathcal{T}'$ (over the same set of leaves) that satisfies the conditions of Corollary 12, such that $\Delta_{\mathcal{T}'}$ dominates $\Delta_{\mathcal{T}}$ and $\mathcal{C}(\mathcal{T}') = O(\mathcal{C}(\mathcal{T}))$. Furthermore, $\mathcal{T}'$ can be constructed efficiently from $\mathcal{T}$ (i.e., in time polynomial in $k$ and $T$). Hence, applying SMB to the metric space $(\mathcal{K}, \Delta_{\mathcal{T}'})$ leads to $\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] = \widetilde{O}\big( \max\{ \mathcal{C}^{1/3} T^{2/3}, \sqrt{kT} \} \big)$.

Proof.

Let us call a tree well-behaved if it satisfies the conditions of Corollary 12. First, we construct a tree that satisfies Item 1. To do that, we simply add to each leaf a single child, which is a new leaf; we naturally identify each leaf of the new tree with an action from $\mathcal{K}$, by considering the parent of the leaf. One can see that, with the definition of the HST metric, we have not changed the distances, i.e., the induced metric is unchanged. In particular, we did not change the covering numbers or the complexity. (Note, however, that this change does affect the algorithm, as it depends on the tree representation and not directly on the metric.)

The aforementioned change did, however, increase the depth of the tree by one. We can repeat this step iteratively until Item 1 is satisfied. To avoid extra notation, we will simply assume that $\mathcal{T}$ satisfies Item 1.

Next, we prove the following statement by induction over the depth of $\mathcal{T}$: we assume that the statement holds for every tree of smaller depth that satisfies Item 1, and prove it for the given depth. In the base case, Item 2a holds.

Next, let $\mathcal{T}'$ be the tree obtained from $\mathcal{T}$ by connecting all the leaves to their grandparents (and removing their parents from the graph). The first observation is that we have only increased the distances between the leaves, so $\Delta_{\mathcal{T}'}$ dominates $\Delta_{\mathcal{T}}$. We also assume that $\mathcal{T}$ is not well-behaved, since otherwise the statement obviously holds with $\mathcal{T}' = \mathcal{T}$.

Given that, we next show that the complexity is preserved. Note that by construction, the covering numbers of the two trees agree at every scale except the lowest one. We also have, by assumption, and since any covering is smaller than the number of leaves, the corresponding bound at the remaining scale. Overall, by the definition of the complexity, the claim follows. Hence,

Next, we assume that $\mathcal{T}'$ does not satisfy Item 1. This implies that $\mathcal{T}$ satisfies Item 2b. Thus, either $\mathcal{T}'$ is well-behaved, or we can construct from it a tree of smaller depth that dominates the original metric and satisfies Item 1. The result now follows by the induction step. ∎

A.3 Proof of Theorem 7

Theorem.

Let $(\mathcal{K}, \Delta)$ be a finite metric space over $k$ elements with diameter at most $1$ and covering complexity $\mathcal{C}_c = \mathcal{C}_c(\mathcal{K})$. There exists an algorithm that for any sequence of loss functions $\ell_1, \ldots, \ell_T$ guarantees that

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{O}\Big( \max\big\{ \mathcal{C}_c^{1/3} T^{2/3} ,\ \sqrt{kT} \big\} \Big) .
\]

Proof.

Given a finite metric space $(\mathcal{K}, \Delta)$, by Lemma 14 there exists a tree $\mathcal{T}$ with complexity $\mathcal{C}(\mathcal{T}) = \widetilde{O}(\mathcal{C}_c(\mathcal{K}))$ whose metric dominates $\Delta$. We can then apply SMB as depicted in Lemma 13 over the sequence of losses to obtain

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{O}\Big( \max\big\{ \mathcal{C}_c^{1/3} T^{2/3} ,\ \sqrt{kT} \big\} \Big) . \qquad ∎
\]

A.4 Proof of Theorem 8

We next set out to prove the lower bound of Theorem 8. We begin by recalling the known lower bound for MAB with unit switching cost.

Theorem 15 (Dekel et al. [12]).

Let $(\mathcal{K}, \Delta)$ be a metric space over $k$ actions with $\Delta(x, y) = 1$ for every $x \ne y$. Then for any algorithm, there exists a sequence $\ell_1, \ldots, \ell_T$ such that

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{\Omega}\big( k^{1/3} T^{2/3} \big) .
\]

Note that for this uniform metric, covering the $k$ points with balls of radius $\epsilon < 1$ requires $k$ balls, hence $\mathcal{C}_p = \Theta(k)$. Thus we see that Theorem 15 already gives Theorem 8 for the special case of a unit-cost metric (up to logarithmic factors). The general case can be derived by embedding the lower bound construction in an action set that constitutes an $\epsilon$-packing of size $N_p(\epsilon)$.

Proof of Theorem 8 (sketch).

First, it is easy to see that the adversary can always force a regret of $\Omega(\sqrt{kT})$; indeed, this lower bound applies to the MAB problem even when there is no movement cost between actions [4]. We next show a regret lower bound of $\widetilde{\Omega}(\mathcal{C}_p^{1/3} T^{2/3})$. By definition, there exists $\epsilon > 0$ such that $\epsilon \, N_p(\epsilon) \ge \tfrac{1}{2} \mathcal{C}_p$. Let $B_1, \ldots, B_m$ be a set of balls of radius $\epsilon$ that form a maximal packing, with $m = N_p(\epsilon)$, and observe that the centers of distinct balls are at distance more than $\epsilon$ from one another. Since we assume the diameter of the metric space is bounded away from zero, we may also assume that $\epsilon$ is at most a constant. We can now use Theorem 15 to show that for any algorithm, one can construct a sequence of loss functions supported on the centers of the packing (and extend them to the entire domain by assigning the maximal loss of $1$ to any other action) such that the stated bound holds.
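To see how the scaling works out (a sketch, under the assumption that the lower bound of Theorem 15 scales linearly with the per-switch cost): running the construction on the $m = N_p(\epsilon)$ packed centers, where every switch between distinct centers costs more than $\epsilon$, gives

\[
\mathbb{E}[\mathrm{Regret}_{\mathrm{MC}}] \;=\; \widetilde{\Omega}\big( (\epsilon m)^{1/3} T^{2/3} \big) \;=\; \widetilde{\Omega}\big( \mathcal{C}_p^{1/3} T^{2/3} \big) ,
\]

using $\epsilon \, N_p(\epsilon) \ge \tfrac{1}{2} \mathcal{C}_p$.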

Appendix B Analysis of SMB for General HSTs

In this section, we extend the analysis given in [24] for the SMB algorithm (Algorithm 1) to general HST metrics over finite action sets, and prove the following theorem.

Theorem 16.

Assume that the metric $\Delta$ is specified by a tree $\mathcal{T}$ which is a 2-HST with depth $H$ and complexity $\mathcal{C} = \mathcal{C}(\mathcal{T})$. Then, for any sequence of loss functions, Algorithm 1 guarantees that

In particular, by setting the step size $\eta$ appropriately, the bound on the expected movement regret of the algorithm becomes

The main new ingredients in the generalized proof are bounds on the bias and the variance of the loss estimates used by Algorithm 1, which we give in the following two lemmas. In the proof of both, we require the following inequality:

\[
N_c\big( 2^{\,h-H} \big) \;=\; N_p\big( 2^{\,h-H} \big) \;\le\; \mathcal{C} \cdot 2^{\,H-h}
\qquad \text{for all } 0 \le h \le H . \tag{6}
\]

This follows from the fact that $N_c(2^{h-H})$ equals $N_p(2^{h-H})$ (both quantities are equal to the number of nodes in the $h$’th level of $\mathcal{T}$), and since $2^{\,h-H} N_c(2^{\,h-H}) \le \mathcal{C}$ by the definition of the (covering) complexity of $\mathcal{T}$.

We begin by bounding the bias of the estimator $\tilde{\ell}_t$ relative to the true loss vector $\ell_t$.

Lemma 17.

For all , we have and .

Proof.

The proof of the first inequality is identical to the one found in [24] and is thus omitted. To bound the second quantity, observe that