A Drifting-Games Analysis for Online Learning and Applications to Boosting

Haipeng Luo
Department of Computer Science
Princeton University
Princeton, NJ 08540
haipengl@cs.princeton.edu
Robert E. Schapire
Department of Computer Science
Princeton University
Princeton, NJ 08540
schapire@cs.princeton.edu
R. Schapire is currently at Microsoft Research in New York City.
Abstract

We provide a general mechanism to design online learning algorithms based on a minimax analysis within a drifting-games framework. Different online learning settings (Hedge, multi-armed bandit problems and online convex optimization) are studied by converting them into various kinds of drifting games. The original minimax analysis for drifting games is then used and generalized by applying a series of relaxations, starting from choosing a convex surrogate of the 0-1 loss function. With different choices of surrogates, we not only recover existing algorithms, but also propose new algorithms that are totally parameter-free and enjoy other useful properties. Moreover, our drifting-games framework naturally allows us to study high probability bounds without resorting to any concentration results, and also a generalized notion of regret that measures how good the algorithm is compared to all but the top $\epsilon$ fraction of candidates. Finally, we translate our new Hedge algorithm into a new adaptive boosting algorithm that is computationally faster, as shown in experiments, since it ignores a large number of examples on each round.

 


1 Introduction

In this paper, we study online learning problems within a drifting-games framework, with the aim of developing a general methodology for designing learning algorithms based on a minimax analysis.

To solve an online learning problem, it is natural to consider game-theoretically optimal algorithms which find the best solution even in worst-case scenarios. This is possible for some special cases ([7, 1, 3, 21]) but difficult in general. On the other hand, many other efficient algorithms with the optimal regret rate (but not exactly minimax optimal) have been proposed for different learning settings (such as the exponential weights algorithm [14, 15], and follow-the-perturbed-leader [18]). However, it is not always clear how to come up with these algorithms. Recent work by Rakhlin et al. [26] built a bridge between these two classes of methods by showing that many existing algorithms can indeed be derived from a minimax analysis followed by a series of relaxations.

In this paper, we provide a parallel way to design learning algorithms by first converting online learning problems into variants of drifting games, and then applying a minimax analysis and relaxations. Drifting games [28] (reviewed in Section 2) generalize Freund’s “majority-vote game” [13] and subsume some well-studied boosting and online learning settings. A nearly minimax optimal algorithm is proposed in [28]. It turns out that the connections between drifting games and online learning go far beyond what has been discussed previously. To show this, we consider variants of drifting games that capture different popular online learning problems. We then generalize the minimax analysis in [28] based on one key idea: relax a 0-1 loss function by a convex surrogate. Although this idea has been applied widely elsewhere in machine learning, we use it here in a new way to obtain a very general methodology for designing and analyzing online learning algorithms. Using this general idea, we not only recover existing algorithms, but also design new ones with special useful properties. A somewhat surprising result is that our new algorithms are totally parameter-free, which is usually not the case for algorithms derived from a minimax analysis. Moreover, a generalized notion of regret ($\epsilon$-regret, defined in Section 3) that measures how good the algorithm is compared to all but the top $\epsilon$ fraction of candidates arises naturally in our drifting-games framework. Below we summarize our results for a range of learning settings.

Hedge Settings: (Section 3) The Hedge problem [14] investigates how to cleverly bet across a set of actions. We show an algorithmic equivalence between this problem and a simple drifting game (DGv1). We then show how to relax the original minimax analysis step by step to reach a general recipe for designing Hedge algorithms (Algorithm 3). Three examples of appropriate convex surrogates of the 0-1 loss function are then discussed, leading to the well-known exponential weights algorithm and two other new ones, one of which (NormalHedge.DT in Section 3.3) bears some similarities with the NormalHedge algorithm [10] and enjoys a similar $\epsilon$-regret bound simultaneously for all $\epsilon$ and all horizons $T$. However, our regret bounds do not depend on the number of actions, and thus can be applied even when there are infinitely many actions. Our analysis is also arguably simpler and more intuitive than the one in [10], and is easily generalized to more general settings. Moreover, our algorithm is more computationally efficient since it does not require a numerical search step as in NormalHedge. Finally, we also derive high probability bounds for the randomized Hedge setting as a simple side product of our framework, without using any concentration results.

Multi-armed Bandit Problems: (Section 4) The multi-armed bandit problem [6] is a classic example of learning with incomplete information, where the learner can only obtain feedback for the actions taken. To capture this problem, we study a quite different drifting game (DGv2) where randomness and variance constraints are taken into account. Again the minimax analysis is generalized and the EXP3 algorithm [6] is recovered. Our results could be seen as a preliminary step toward answering the open question [2] on exact minimax optimal algorithms for the multi-armed bandit problem.

Online Convex Optimization: (Section 4) Building on the theory of convex optimization, online convex optimization [31] has become the foundation of modern online learning theory. The corresponding drifting game formulation is a continuous-space variant (DGv3). Fortunately, it turns out that all results from the Hedge setting carry over directly, recovering the continuous EXP algorithm [12, 17, 24] and also generalizing our new algorithms to this general setting. Besides the usual regret bounds, we also generalize the $\epsilon$-regret, which, as far as we know, is the first time it has been explicitly studied in this setting. Again, we emphasize that our new algorithms are adaptive in $\epsilon$ and the horizon $T$.

Boosting: (Section 4) Realizing that every Hedge algorithm can be converted into a boosting algorithm ([29]), we propose a new boosting algorithm (NH-Boost.DT) by converting NormalHedge.DT. The adaptivity of NormalHedge.DT then translates into training error and margin distribution bounds that the previous analysis in [29] using nonadaptive algorithms does not show. Moreover, our new boosting algorithm ignores a great many examples on each round, an appealing property useful for speeding up the weak learning algorithm. This is confirmed by our experiments.

Related work: Our analysis makes use of potential functions. Similar concepts have appeared widely in the literature [8, 5], but unlike our work, they are not related to any minimax analysis and might be hard to interpret. The existence of parameter-free Hedge algorithms for an unknown number of actions was shown in [11], but no concrete algorithms were given there. Boosting algorithms that ignore some examples on each round were studied in [16], where a heuristic was used to ignore examples with small weights, but no theoretical guarantee was provided.

2 Reviewing Drifting Games

We consider a simplified version of drifting games similar to the one described in [29, chap. 13] (also called chip games). This game proceeds through $T$ rounds, and is played between a player and an adversary who controls $N$ chips on the real line. The positions of these chips at the end of round $t$ are denoted by $\mathbf{s}_t \in \mathbb{R}^N$, with each coordinate $s_{t,i}$ corresponding to the position of chip $i$. Initially, all chips are at position $0$ so that $\mathbf{s}_0 = \mathbf{0}$. On every round $t = 1, \ldots, T$: the player first chooses a distribution $\mathbf{p}_t$ over the chips, then the adversary decides the movements of the chips $z_{t,i}$ so that the new positions are updated as $s_{t,i} = s_{t-1,i} + z_{t,i}$. Here, each $z_{t,i}$ has to be picked from a prespecified set $B \subseteq \mathbb{R}$, and more importantly, satisfy the constraint $\sum_i p_t(i)\, z_{t,i} \geq \beta$ for some fixed constant $\beta$.

At the end of the game, each chip is associated with a nonnegative loss defined by $L(s_{T,i})$ for some nonincreasing function $L$ mapping the final position of the chip to $\mathbb{R}_{\geq 0}$. The goal of the player is to minimize the chips’ average loss $\frac{1}{N}\sum_{i=1}^N L(s_{T,i})$ after $T$ rounds. So intuitively, the player aims to “push” the chips to the right by assigning appropriate weights on them so that the adversary has to move them to the right by $\beta$ in a weighted average sense on each round. This game captures many learning problems. For instance, binary classification via boosting can be translated into a drifting game by treating each training example as a chip (see [28] for details).

We regard a player’s strategy $\mathcal{D}$ as a function mapping from the history of the adversary’s decisions to a distribution that the player is going to play with, that is, $\mathbf{p}_t = \mathcal{D}(\mathbf{z}_{1:t-1})$ where $\mathbf{z}_{1:t-1}$ stands for $(\mathbf{z}_1, \ldots, \mathbf{z}_{t-1})$. The player’s worst case loss using this algorithm is then denoted by $L_T(\mathcal{D})$. The minimax optimal loss of the game is computed by the following expression: $\min_{\mathcal{D}} L_T(\mathcal{D}) = \min_{\mathbf{p}_1 \in \Delta_N} \max_{\mathbf{z}_1} \cdots \min_{\mathbf{p}_T \in \Delta_N} \max_{\mathbf{z}_T} \frac{1}{N}\sum_{i=1}^N L(s_{T,i})$, where $\Delta_N$ is the $N$-dimensional simplex, each $\mathbf{z}_t$ ranges over the adversary's allowed movements, and $B$ is assumed to be compact. A strategy that realizes the minimum is called a minimax optimal strategy. A nearly optimal strategy and its analysis were originally given in [28], and a derivation by directly tackling the above minimax expression can be found in [29, chap. 13]. Specifically, a sequence of potential functions of a chip’s position is defined recursively as follows:

$$\Phi_T(s) = L(s), \qquad \Phi_{t-1}(s) = \min_{w \geq 0}\; \max_{z \in B}\; \big(\Phi_t(s+z) + w(z - \beta)\big). \tag{1}$$

Let $w_t(s)$ be the weight that realizes the minimum in the definition of $\Phi_{t-1}(s)$, that is, $w_t(s) \in \arg\min_{w \geq 0} \max_{z \in B}\big(\Phi_t(s+z) + w(z-\beta)\big)$. Then the player’s strategy is to set $p_t(i) \propto w_t(s_{t-1,i})$. The key property of this strategy is that it assures that the sum of the potentials over all the chips never increases, connecting the player’s final loss with the potential at time $0$ as follows:

$$\frac{1}{N}\sum_{i=1}^N L(s_{T,i}) \;=\; \frac{1}{N}\sum_{i=1}^N \Phi_T(s_{T,i}) \;\leq\; \frac{1}{N}\sum_{i=1}^N \Phi_0(s_{0,i}) \;=\; \Phi_0(0). \tag{2}$$

It has been shown in [28] that this upper bound on the loss is optimal in a very strong sense.

Moreover, in some cases the potential functions have nice closed forms and thus the algorithm can be efficiently implemented. For example, in the boosting setting, $B$ is simply $\{-1, +1\}$, and one can verify $\Phi_{t-1}(s) = \frac{1+\beta}{2}\Phi_t(s+1) + \frac{1-\beta}{2}\Phi_t(s-1)$ and $w_t(s) = \frac{1}{2}\big(\Phi_t(s-1) - \Phi_t(s+1)\big)$. With the loss function being $L(s) = \mathbf{1}\{s \leq 0\}$, these can be further simplified and eventually give exactly the boost-by-majority algorithm [13].
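As an illustration of how such closed forms can be used, the following is a minimal sketch (ours, not code from the paper) that tabulates these boosting potentials and weights by dynamic programming, with the drift constant $\beta$ playing the role of the weak learners' edge.

```python
import numpy as np

def bbm_potentials(T, beta):
    """Tabulate Phi_t(s) for integer positions s in {-T, ..., T} under the recursion
    Phi_{t-1}(s) = (1+beta)/2 * Phi_t(s+1) + (1-beta)/2 * Phi_t(s-1),
    with Phi_T(s) = 1{s <= 0} (the 0-1 loss of the final majority vote)."""
    positions = np.arange(-T, T + 1)
    Phi = np.zeros((T + 1, len(positions)))
    Phi[T] = (positions <= 0).astype(float)
    for t in range(T, 0, -1):
        up = np.append(Phi[t][1:], Phi[t][-1])      # Phi_t(s+1), clamped at the right edge
        down = np.append(Phi[t][0], Phi[t][:-1])    # Phi_t(s-1), clamped at the left edge
        Phi[t - 1] = 0.5 * (1 + beta) * up + 0.5 * (1 - beta) * down
    return positions, Phi

def bbm_weight(Phi, positions, t, s):
    """w_t(s) = (Phi_t(s-1) - Phi_t(s+1)) / 2; the player sets p_t(i) proportional to this."""
    idx = int(np.searchsorted(positions, s))
    return 0.5 * (Phi[t][idx - 1] - Phi[t][idx + 1])
```

For instance, `bbm_potentials(T=10, beta=0.2)` produces the boost-by-majority potential table, and the entry of `Phi[0]` at position $0$ upper bounds the chips' average final loss as in Eq. (2).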

3 Online Learning as a Drifting Game

The connection between drifting games and some specific settings of online learning has been noticed before ([28, 23]). We aim to find deeper connections or even an equivalence between variants of drifting games and more general settings of online learning, and provide insights on designing learning algorithms through a minimax analysis. We start with a simple yet classic Hedge setting.

3.1 Algorithmic Equivalence

In the Hedge setting [14], a player tries to earn as much as possible (or lose as little as possible) by cleverly spreading a fixed amount of money to bet on a set of actions on each day. Formally, the game proceeds for $T$ rounds, and on each round $t$: the player chooses a distribution $\mathbf{p}_t$ over $N$ actions, then the adversary decides the actions’ losses $\boldsymbol{\ell}_t \in [0,1]^N$ (i.e. action $i$ incurs loss $\ell_{t,i}$), which are revealed to the player. The player suffers a weighted average loss $\mathbf{p}_t \cdot \boldsymbol{\ell}_t$ at the end of this round. The goal of the player is to minimize his “regret”, which is usually defined as the difference between his total loss $\sum_{t=1}^T \mathbf{p}_t \cdot \boldsymbol{\ell}_t$ and the loss of the best action $\min_i \sum_{t=1}^T \ell_{t,i}$. Here, we consider an even more general notion of regret studied in [20, 19, 10, 11], which we call $\epsilon$-regret. Suppose the actions are ordered according to their total losses after $T$ rounds from smallest to largest, and let $i_\epsilon$ be the index of the action that is the $\lceil \epsilon N \rceil$-th element in the sorted list ($0 < \epsilon \leq 1$). Now, $\epsilon$-regret is defined as $R^\epsilon_T = \sum_{t=1}^T \mathbf{p}_t \cdot \boldsymbol{\ell}_t - \sum_{t=1}^T \ell_{t, i_\epsilon}$. In other words, $\epsilon$-regret measures the difference between the player’s loss and the loss of the $\lceil \epsilon N\rceil$-th best action (recovering the usual regret when $\epsilon \leq 1/N$), and sublinear $\epsilon$-regret implies that the player’s loss is almost as good as all but the top $\epsilon$ fraction of actions. Similarly, $R^\epsilon_T(\mathcal{H})$ denotes the worst case $\epsilon$-regret for a specific algorithm $\mathcal{H}$. For convenience, when $\epsilon \leq 0$ or $\epsilon > 1$, we define the $\epsilon$-regret to be $+\infty$ or $-\infty$, respectively.
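For concreteness, here is a small sketch (ours) of how the $\epsilon$-regret would be computed from a record of the play:

```python
import math
import numpy as np

def eps_regret(player_losses, action_losses, eps):
    """player_losses: length-T array of the per-round losses p_t . l_t;
    action_losses: T x N matrix of l_{t,i}; 0 < eps <= 1."""
    totals = np.sort(action_losses.sum(axis=0))      # actions' total losses, smallest first
    k = max(math.ceil(eps * totals.size), 1)         # rank ceil(eps * N) in the sorted list
    return float(player_losses.sum() - totals[k - 1])
```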

Next we discuss how Hedge is highly related to drifting games. Consider a variant of drifting games where $B = [-1, 1]$ and $\beta = 0$. Additionally, we impose an extra restriction on the adversary: $|z_{t,i} - z_{t,j}| \leq 1$ for all $t$, $i$ and $j$. In other words, the difference between any two chips’ movements is at most $1$. We denote this specific variant of drifting games by DGv1 (summarized in Appendix A) and a corresponding algorithm by $\mathcal{D}_L$ to emphasize the dependence on the loss function $L$. The reductions in Algorithms 1 and 2 and Theorem 1 show that DGv1 and the Hedge problem are algorithmically equivalent (note that both conversions are valid). The proof is straightforward and deferred to Appendix B. By Theorem 1, it is clear that the minimax optimal algorithm for one setting is also minimax optimal for the other under these conversions.

Input : A Hedge algorithm $\mathcal{H}$
for $t = 1$ to $T$ do
      Query $\mathcal{H}$: $\mathbf{p}_t = \mathcal{H}(\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_{t-1})$.
      Set: $\mathcal{D}_L(\mathbf{z}_1, \ldots, \mathbf{z}_{t-1}) = \mathbf{p}_t$.
      Receive movements $\mathbf{z}_t$ from the adversary.
      Set: $\ell_{t,i} = z_{t,i} + c_t$, where $c_t$ is a constant chosen so that $\ell_{t,i} \in [0,1]$ (possible by the extra restriction of DGv1).
     
Algorithm 1 Conversion of a Hedge Algorithm to a DGv1 Algorithm
Input : A DGv1 algorithm $\mathcal{D}_L$
for $t = 1$ to $T$ do
      Query $\mathcal{D}_L$: $\mathbf{p}_t = \mathcal{D}_L(\mathbf{z}_1, \ldots, \mathbf{z}_{t-1})$.
      Set: $\mathcal{H}(\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_{t-1}) = \mathbf{p}_t$.
      Receive losses $\boldsymbol{\ell}_t$ from the adversary.
      Set: $z_{t,i} = \ell_{t,i} - \sum_j p_t(j)\,\ell_{t,j}$.
     
Algorithm 2 Conversion of a DGv1 Algorithm to a Hedge Algorithm
Theorem 1.

DGv1 and the Hedge problem are algorithmically equivalent in the following sense:
(1) Algorithm 1 produces a DGv1 algorithm whose average final loss $\frac{1}{N}\sum_i L(s_{T,i})$ is at most $\epsilon$ whenever the loss function satisfies $L(s) \leq \mathbf{1}\{s < -R^\epsilon_T(\mathcal{H})\}$ for all $s$.

(2) Algorithm 2 produces a Hedge algorithm $\mathcal{H}$ with $R^\epsilon_T(\mathcal{H}) \leq R$ for any $\epsilon$ and $R$ such that $L_T(\mathcal{D}_L) \leq \epsilon$ and $L(s) \geq \mathbf{1}\{s \leq -R\}$ for all $s$.

3.2 Relaxations

From now on we focus only on the direction of converting a drifting game algorithm into a Hedge algorithm. In order to derive a minimax Hedge algorithm, Theorem 1 tells us that it suffices to derive minimax DGv1 algorithms. Exact minimax analysis is usually difficult, and appropriate relaxations seem to be necessary. To make use of the existing analysis for standard drifting games, the first obvious relaxation is to drop the additional restriction in DGv1, that is, $|z_{t,i} - z_{t,j}| \leq 1$ for all $t$, $i$ and $j$. Doing this leads to the exact setting discussed in [23], where a near optimal strategy is proposed using the recipe in Eq. (1). It turns out that this relaxation is reasonable and does not give too much more power to the adversary. To see this, first recall that results from [23], written in our notation, bound $\Phi_0(0)$ by the probability that a sum of $T$ bounded random variables falls below $-R$, which, by Hoeffding’s inequality, is upper bounded by $\exp(-R^2/(2T))$. Second, statement (2) in Theorem 1 clearly remains valid if the input of Algorithm 2 is a drifting game algorithm for this relaxed version of DGv1. Therefore, by setting this bound equal to $\epsilon$ and solving for $R$, we obtain $R^\epsilon_T(\mathcal{H}) \leq \sqrt{2T\ln(1/\epsilon)}$ (and thus $O(\sqrt{T \ln N})$ for the usual regret), which is the known optimal regret rate for the Hedge problem, showing that we lose little due to this relaxation.

However, the algorithm proposed in [23] is not computationally efficient since the potential functions do not have closed forms. To get around this, we would want the minimax expression in Eq. (1) to be easily solved, just like the case when $B = \{-1, +1\}$. It turns out that convexity allows us to treat $B = [-1, 1]$ almost as $\{-1, +1\}$. Specifically, if each $\Phi_t(s)$ is a convex function of $s$, then due to the fact that the maximum of a convex function is always realized at the boundary of a compact region, we have

$$\min_{w \geq 0}\; \max_{z \in [-1,1]}\; \big(\Phi_t(s+z) + wz\big) \;=\; \min_{w \geq 0}\; \max_{z \in \{-1,1\}}\; \big(\Phi_t(s+z) + wz\big) \;=\; \tfrac{1}{2}\big(\Phi_t(s-1) + \Phi_t(s+1)\big), \tag{3}$$

with $w = \frac{1}{2}\big(\Phi_t(s-1) - \Phi_t(s+1)\big)$ realizing the minimum. Since the 0-1 loss function is not convex, this motivates us to find a convex surrogate of it. Fortunately, relaxing the equality constraints in Eq. (1) to inequalities does not affect the key property of Eq. (2), as we will show in the proof of Theorem 2. “Compiling out” the input of Algorithm 2, we thus have our general recipe (Algorithm 3) for designing Hedge algorithms with the following regret guarantee.

Input : A convex, nonincreasing, nonnegative function $\Phi_T(s)$.
for $t = T$ down to $1$ do
      Find a convex function $\Phi_{t-1}(s)$ s.t. $\Phi_{t-1}(s) \geq \frac{1}{2}\big(\Phi_t(s-1) + \Phi_t(s+1)\big)$ for all $s$.
      Set: $w_t(s) = \Phi_t(s-1) - \Phi_t(s+1)$.
for $t = 1$ to $T$ do
      Set: $\mathbf{p}_t$ s.t. $p_t(i) \propto w_t(s_{t-1,i})$.
      Receive losses $\boldsymbol{\ell}_t$ and set $s_{t,i} = s_{t-1,i} + \ell_{t,i} - \mathbf{p}_t \cdot \boldsymbol{\ell}_t$.
     
Algorithm 3 A General Hedge Algorithm
Theorem 2.

For Algorithm 3, if $\epsilon$ and $R$ are such that $\Phi_0(0) \leq \epsilon$ and $\Phi_T(s) \geq \mathbf{1}\{s \leq -R\}$ for all $s$, then $R^\epsilon_T(\mathcal{H}) \leq R$.

Proof. It suffices to show that Eq. (2) holds, so that the theorem follows by a direct application of statement (2) of Theorem 1. Let $w_{t,i} = \frac{1}{2}\big(\Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1)\big)$. Then $\sum_i w_{t,i} z_{t,i} = 0$ since $w_{t,i} \propto p_t(i)$ and $\sum_i p_t(i) z_{t,i} = 0$. On the other hand, by Eq. (3), we have $\Phi_t(s_{t-1,i} + z_{t,i}) + w_{t,i} z_{t,i} \leq \frac{1}{2}\big(\Phi_t(s_{t-1,i}-1) + \Phi_t(s_{t-1,i}+1)\big)$, which is at most $\Phi_{t-1}(s_{t-1,i})$ by Algorithm 3. This shows $\sum_i \Phi_t(s_{t,i}) \leq \sum_i \Phi_{t-1}(s_{t-1,i})$ and Eq. (2) follows. ∎

Theorem 2 tells us that if solving $\Phi_0(0) \leq \epsilon$ for $R$ gives $R \geq B(\epsilon, T)$ for some value $B(\epsilon, T)$, then the $\epsilon$-regret of Algorithm 3 is at most any such $R$, and hence at most $B(\epsilon, T)$.
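To make the recipe concrete, the following is a minimal sketch (ours) of Algorithm 3 with a pluggable potential; the function `potential(t, s)` stands for whichever surrogate $\Phi_t$ is chosen below, and the names are ours.

```python
import numpy as np

def general_hedge(loss_stream, N, potential):
    """Sketch of Algorithm 3. loss_stream yields length-N arrays of losses in [0, 1];
    potential(t, s) evaluates Phi_t at a vector of positions s."""
    s = np.zeros(N)                          # chip positions s_{t-1,i}
    total_player_loss = 0.0
    for t, losses in enumerate(loss_stream, start=1):
        w = np.maximum(potential(t, s - 1) - potential(t, s + 1), 0.0)   # difference of potentials
        p = w / w.sum() if w.sum() > 0 else np.full(N, 1.0 / N)
        player_loss = float(p @ losses)
        total_player_loss += player_loss
        s += losses - player_loss            # movement z_{t,i} = l_{t,i} - <p_t, l_t>
    return total_player_loss, s
```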

3.3 Designing Potentials and Algorithms

Now we are ready to recover existing algorithms and develop new ones by choosing an appropriate potential as Algorithm 3 suggests. We will discuss three different algorithms below, and summarize these examples in Table 1 (see Appendix C).

Exponential Weights (EXP) Algorithm.

Exponential loss is an obvious choice for $\Phi_T$ as it has been widely used as the convex surrogate of the 0-1 loss function in the literature. It turns out that this leads to the well-known exponential weights algorithm [14, 15]. Specifically, we pick $\Phi_T(s) = \exp(-\eta(s + R))$, which exactly upper bounds $\mathbf{1}\{s \leq -R\}$. To compute $\Phi_t$ for $t < T$, we simply let the inequality in Algorithm 3 hold with equality. Indeed, direct computations show that all the potentials share a similar form: $\Phi_t(s) = \big(\tfrac{e^\eta + e^{-\eta}}{2}\big)^{T-t} \exp(-\eta(s+R))$. Therefore, according to Algorithm 3, the player’s strategy is to set
$$p_t(i) \propto \Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1) \propto \exp(-\eta\, s_{t-1,i}),$$

which is exactly the same as EXP, since $s_{t-1,i}$ and the cumulative loss of action $i$ differ by a quantity independent of $i$ (and the multiplicative constant becomes irrelevant after normalization). To derive regret bounds, it suffices to require $\Phi_0(0) \leq \epsilon$, which is equivalent to $\big(\tfrac{e^\eta + e^{-\eta}}{2}\big)^{T} e^{-\eta R} \leq \epsilon$. By Theorem 2 and Hoeffding’s lemma (see [9, Lemma A.1]), we thus know $R^\epsilon_T(\mathcal{H}) \leq \tfrac{\ln(1/\epsilon)}{\eta} + \tfrac{\eta T}{2} = \sqrt{2T\ln(1/\epsilon)}$, where the last step is by optimally tuning $\eta$ to be $\sqrt{2\ln(1/\epsilon)/T}$. Note that this algorithm is not adaptive in the sense that it requires knowledge of $T$ and $\epsilon$ to set the parameter $\eta$.

We have thus recovered the well-known EXP algorithm and given a new analysis using the drifting-games framework. More importantly, as in [26], this derivation may shed light on why this algorithm works and where it comes from, namely, a minimax analysis followed by a series of relaxations, starting from a reasonable surrogate of the 0-1 loss function.
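As a quick sketch (ours), the exponential potential plugs into the recipe above and collapses to standard exponential weights; the shift by $R$ and the $t$-dependent factor are the same for every action and cancel under normalization.

```python
import numpy as np

def exp_potential(eta):
    # Phi_t(s) = c_t * exp(-eta * s); the t-dependent factor c_t is independent of i and
    # cancels after normalization, so it is omitted. (For very long horizons one would
    # subtract a common constant from s before exponentiating, for numerical stability.)
    return lambda t, s: np.exp(-eta * s)

# usage with the sketch above, with eta tuned for a target eps and horizon T:
#   general_hedge(loss_stream, N, exp_potential(eta=np.sqrt(2 * np.log(1 / eps) / T)))
```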

2-norm Algorithm.

We next move on to another simple convex surrogate, a truncated quadratic: $\Phi_T(s) = a\,[-s]_+^2$, where $a$ is some positive constant and $[x]_+ = \max\{x, 0\}$ represents a truncating operation. The following lemma shows that $\Phi_t$ for $t < T$ can also be described simply.

Lemma 1.

If $\Phi_t(s) = a\big([-s]_+^2 + 3(T-t)\big)$, then $\Phi_{t-1}(s) \geq \frac{1}{2}\big(\Phi_t(s-1) + \Phi_t(s+1)\big)$ holds for all $s$.

Thus, Algorithm 3 can again be applied. The resulting algorithm is extremely concise:
$$p_t(i) \propto \Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1) \propto [1 - s_{t-1,i}]_+^2 - [-1 - s_{t-1,i}]_+^2.$$

We call this the “2-norm” algorithm since it resembles the $p$-norm algorithm in the literature when $p = 2$ (see [9]). The difference is that the $p$-norm algorithm sets the weights proportional to the derivative of the potentials, instead of the difference of them as we are doing here. A somewhat surprising property of this algorithm is that it is totally adaptive and parameter-free (since $a$ disappears under normalization), a property that we usually do not expect to obtain from a minimax analysis. Direct application of Theorem 2 (with appropriate choices of $a$ and $R$) shows that its regret achieves the optimal dependence on the horizon $T$.

Corollary 1.

Algorithm 3 with the potential defined in Lemma 1 produces a Hedge algorithm such that $R^\epsilon_T \leq \sqrt{3T/\epsilon}$ holds simultaneously for all $\epsilon$ and $T$.
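A minimal sketch (ours) of the corresponding potential; both the constant $a$ and the additive $3(T-t)$ term are independent of the action index and vanish in the weight differences, which is why no parameter remains.

```python
import numpy as np

def two_norm_potential():
    # Phi_t(s) = a * ([-s]_+^2 + 3(T - t)), following Lemma 1; the constant a and the
    # additive 3(T - t) term cancel in the weight differences and are omitted, so the
    # rule needs neither a tuned parameter nor knowledge of T.
    relu = lambda x: np.maximum(x, 0.0)
    return lambda t, s: relu(-s) ** 2

# induced weights: p_t(i) ∝ [1 - s_{t-1,i}]_+^2 - [-1 - s_{t-1,i}]_+^2
# usage: general_hedge(loss_stream, N, two_norm_potential())
```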

NormalHedge.DT.

The regret of the 2-norm algorithm does not have the optimal dependence on $\epsilon$. An obvious follow-up question is whether it is possible to derive an adaptive algorithm that achieves the optimal rate simultaneously for all $\epsilon$ and $T$ using our framework. An even deeper question is: instead of choosing convex surrogates in a seemingly arbitrary way, is there a more natural way to find the right choice of $\Phi_T$?

To answer these questions, we recall that the reason why the 2-norm algorithm can get rid of the dependence on $\epsilon$ is that $\epsilon$ enters only through the multiplicative constant $a$, which plays no role after normalization. This motivates us to let $\Phi_t$ take the form $a \cdot f_t(s)$ for some $f_t$, so that $\epsilon$ enters only through $a$. On the other hand, from Theorem 2, we also want $\Phi_T(s)$ to upper bound the 0-1 loss function $\mathbf{1}\{s \leq -R\}$ for some constant $R$. Taken together, this is telling us that the right choice of $\Phi_t(s)$ should be of the form $a\big(b^{[-s]_+^2/(dt)} - 1\big)$ (similar potentials were also proposed in recent work [22, 25] for a different setting). Of course we still need to refine it to satisfy monotonicity and other properties. We define $\Phi_t$ formally and more generally as:

$$\Phi_t(s) = a\Big(b^{\frac{[-s]_+^2}{d\,t}} - 1\Big),$$
where $b > 1$ and $d > 0$ are some positive constants. This time it is more involved to verify that these potentials are valid for Algorithm 3. The following lemma addresses this issue (proof deferred to Appendix C).

Lemma 2.

If the constants $b$ and $d$ are chosen appropriately (and $\Phi_0$ is defined suitably), then $\Phi_{t-1}(s) \geq \frac{1}{2}\big(\Phi_t(s-1) + \Phi_t(s+1)\big)$ for all $t \geq 2$ and all $s$. Moreover, Eq. (2) still holds.

Note that even though the above inequality is not valid for $t = 1$ in general, Lemma 2 states that Eq. (2) still holds. Thus Algorithm 3 can indeed still be applied, leading to our new algorithm:
$$p_t(i) \propto \Phi_t(s_{t-1,i}-1) - \Phi_t(s_{t-1,i}+1) \propto b^{\frac{[1-s_{t-1,i}]_+^2}{d\,t}} - b^{\frac{[-1-s_{t-1,i}]_+^2}{d\,t}}.$$

Here, $b$ seems to be an extra parameter, but in fact, simply setting $b = e$ is good enough:

Corollary 2.

Algorithm 3 with the potential defined in Lemma 2 and $b = e$ produces a Hedge algorithm such that, simultaneously for all $\epsilon$ and $T$, the $\epsilon$-regret is of order $\sqrt{T\ln(1/\epsilon)}$ up to an almost-negligible lower-order term in $T$.

We have thus proposed a parameter-free adaptive algorithm with the optimal regret rate (ignoring the lower-order term) using our drifting-games framework. In fact, our algorithm bears a striking similarity to NormalHedge [10], the first algorithm to have this kind of adaptivity. We thus name our algorithm NormalHedge.DT (the suffix “DT” stands for “discrete time”). We include NormalHedge in Table 1 for comparison. One can see that the main differences are: 1) on each round, NormalHedge performs a numerical search to find the right parameter used in the exponents; 2) NormalHedge uses the derivative of the potentials as weights.

Compared to NormalHedge, the regret bound for NormalHedge.DT has no explicit dependence on $N$, but has a slightly worse (lower-order) dependence on $T$, which is almost negligible. We emphasize other advantages of our algorithm over NormalHedge: 1) NormalHedge.DT is more computationally efficient, especially when $N$ is very large, since it does not need a numerical search on each round; 2) our analysis is arguably simpler and more intuitive than the one in [10]; 3) as we will discuss in Section 4, NormalHedge.DT can be easily extended to deal with the more general online convex optimization problem, where the number of actions is infinitely large, while it is not clear how to do that for NormalHedge by generalizing the analysis in [10]. Indeed, the extra dependence on the number of actions in the regret of NormalHedge makes this generalization seem impossible. Finally, we will later see that NormalHedge.DT outperforms NormalHedge in experiments. Despite the differences, it is worth noting that both algorithms assign zero weight to some actions on each round, an appealing property when $N$ is huge. We will discuss this further in Section 4.
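The following sketch (ours) instantiates the recipe with this potential, taking $b = e$ and $d = 3$ as one concrete choice of constants; note that any action whose position satisfies $s_{t-1,i} \geq 1$, i.e. whose cumulative loss exceeds the player's by at least one, automatically receives zero weight.

```python
import numpy as np

def normalhedge_dt_potential():
    # Phi_t(s) = a * (exp([-s]_+^2 / (3t)) - 1), taking b = e and d = 3; a cancels.
    relu = lambda x: np.maximum(x, 0.0)
    return lambda t, s: np.exp(relu(-s) ** 2 / (3.0 * t)) - 1.0

# induced weights: p_t(i) ∝ exp([1 - s_{t-1,i}]_+^2 / (3t)) - exp([-1 - s_{t-1,i}]_+^2 / (3t)),
# which is exactly zero whenever s_{t-1,i} >= 1.
# usage: general_hedge(loss_stream, N, normalhedge_dt_potential())
```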

3.4 High Probability Bounds

We now consider a common variant of Hedge: on each round, instead of playing a distribution $\mathbf{p}_t$, the player has to randomly pick a single action $i_t$, while the adversary decides the losses $\boldsymbol{\ell}_t$ at the same time (without seeing $i_t$). For now we only focus on the player’s regret to the best action: $\sum_{t=1}^T \ell_{t,i_t} - \min_i \sum_{t=1}^T \ell_{t,i}$. Notice that the regret is now a random variable, and we are interested in a bound that holds with high probability. Using Azuma’s inequality, standard analysis (see for instance [9, Lemma 4.1]) shows that the player can simply draw $i_t$ according to $\mathbf{p}_t$, the output of a standard Hedge algorithm, and suffer regret at most the regret of that Hedge algorithm plus $O(\sqrt{T\ln(1/\delta)})$ with probability $1-\delta$. Below we recover similar results as a simple side product of our drifting-games analysis without resorting to concentration results, such as Azuma’s inequality.

For this, we only need to modify Algorithm 3 by setting $s_{t,i} = s_{t-1,i} + \ell_{t,i} - \ell_{t,i_t}$. The drift restriction is then relaxed to hold in expectation. Moreover, it is clear that Eq. (2) also still holds in expectation. On the other hand, by the definition of the regret and the union bound, one can show that the probability that the regret exceeds $R$ is at most $N\Phi_0(0)$. So setting $N\Phi_0(0) \leq \delta$ shows that the regret is smaller than $R$ with probability $1-\delta$. Therefore, for example, if EXP is used, then the regret would be at most $O(\sqrt{T\ln(N/\delta)})$ with probability $1-\delta$, giving basically the same bound as the standard analysis. One drawback is that EXP would need $\delta$ as a parameter. However, this can again be addressed by NormalHedge.DT for the exact same reason that NormalHedge.DT is independent of $\epsilon$. We have thus derived high probability bounds without using any concentration inequalities.
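A minimal sketch (ours) of this randomized variant; the only change to the recipe is that the realized loss of the sampled action replaces the expected loss in the position update.

```python
import numpy as np

def randomized_hedge_round(rng, p, losses, s):
    """p: current distribution from the potential-based rule; losses: length-N array l_t."""
    i_t = int(rng.choice(len(p), p=p))    # the player commits to a single action
    s += losses - losses[i_t]             # movement z_{t,i} = l_{t,i} - l_{t,i_t}
    return i_t, s

# rng = np.random.default_rng()
```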

4 Generalizations and Applications

Multi-armed Bandit (MAB) Problem: The only difference between Hedge (randomized version) and the non-stochastic MAB problem [6] is that on each round, after picking $i_t$, the player only sees the loss $\ell_{t,i_t}$ for this single action instead of the whole vector $\boldsymbol{\ell}_t$. The goal is still to compete with the best action. A common technique used in the bandit setting is to build an unbiased estimator for the losses, which in this case could be $\hat{\ell}_{t,i} = \frac{\ell_{t,i}}{p_t(i)}\,\mathbf{1}\{i = i_t\}$. Then algorithms such as EXP can be used by replacing $\boldsymbol{\ell}_t$ with $\hat{\boldsymbol{\ell}}_t$, leading to the EXP3 algorithm [6] with regret $O(\sqrt{TN\ln N})$.
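A minimal sketch (ours) of the standard importance-weighted estimator mentioned above:

```python
import numpy as np

def bandit_loss_estimate(p, i_t, observed_loss):
    # hat_l_{t,i} = l_{t,i} * 1{i = i_t} / p_t(i); unbiased since E[hat_l_{t,i}] = l_{t,i}
    lhat = np.zeros(len(p))
    lhat[i_t] = observed_loss / p[i_t]
    return lhat
```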

One might expect that Algorithm 3 would also work well by replacing $\boldsymbol{\ell}_t$ with $\hat{\boldsymbol{\ell}}_t$. However, doing so breaks an important property of the movements $z_{t,i}$: boundedness. Indeed, Eq. (3) no longer makes sense if $z_{t,i}$ could be infinitely large, even if in expectation it is still in $[-1,1]$ (note that $z_{t,i}$ is now a random variable). It turns out that we can address this issue by imposing a variance constraint on $z_{t,i}$. Formally, we consider a variant of drifting games where on each round, the adversary picks a random movement $z_{t,i}$ for each chip such that the expected weighted movement satisfies $\mathbb{E}\big[\sum_i p_t(i)\, z_{t,i}\big] \geq 0$ and a corresponding weighted variance is bounded. We call this variant DGv2 and summarize it in Appendix A. The standard minimax analysis and the derivation of potential functions need to be modified in a certain way for DGv2, as stated in Theorem 4 (Appendix D). Using the analysis for DGv2, we propose a general recipe for designing MAB algorithms in a similar way as for Hedge and also recover EXP3 (see Algorithm 4 and Theorem 5 in Appendix D). Unfortunately, so far we do not know other appropriate potentials due to some technical difficulties. We conjecture, however, that there is a potential function that could recover the poly-INF algorithm [4, 5] or give variants of it that achieve the optimal regret $O(\sqrt{TN})$.

Online Convex Optimization: We next consider the general online convex optimization setting [31]. Let $S$ be a compact convex set, and $\mathcal{F}$ be a set of convex functions on $S$ with range $[0,1]$. On each round $t$, the learner chooses a point $x_t \in S$, and the adversary chooses a loss function $f_t \in \mathcal{F}$ (knowing $x_t$). The learner then suffers loss $f_t(x_t)$. The regret after $T$ rounds is $\sum_{t=1}^T f_t(x_t) - \min_{x \in S}\sum_{t=1}^T f_t(x)$. There are two general approaches to OCO: one builds on convex optimization theory [30], and the other generalizes EXP to a continuous space [12, 24]. We will see how the drifting-games framework can recover the latter method and also lead to new ones.

To do so, we introduce a continuous variant of drifting games (DGv3, see Appendix A). There are now infinitely many chips, one for each point in $S$. On round $t$, the player needs to choose a distribution over the chips, that is, a probability density function $p_t(x)$ on $S$. Then the adversary decides the movements of the chips, that is, a function $z_t(x)$ on $S$ with range $[-1,1]$ (not necessarily convex or continuous), subject to the constraint $\int_S p_t(x)\, z_t(x)\, dx \geq 0$. At the end, each point $x$ is associated with a loss $L(s_T(x))$, where $s_T(x) = \sum_t z_t(x)$, and the player aims to minimize the average of these losses over $S$.

OCO can be converted into DGv3 by setting $z_t(x) = f_t(x) - f_t(x_t)$ and predicting $x_t = \mathbb{E}_{x \sim p_t}[x] \in S$. The constraint $\int_S p_t(x)\, z_t(x)\, dx \geq 0$ holds by the convexity of $f_t$ (Jensen’s inequality). Moreover, it turns out that the minimax analysis and potentials for DGv1 can readily be used here, and the notion of $\epsilon$-regret, now generalized to the OCO setting, measures the difference of the player’s loss and the loss of a best fixed point in a subset of $S$ that excludes the top $\epsilon$ fraction of points. With different potentials, we obtain versions of each of the three algorithms of Section 3 generalized to this setting, with the same $\epsilon$-regret bounds as before. Again, two of these methods are adaptive and parameter-free. To derive bounds for the usual regret, at first glance it seems that we have to set $\epsilon$ close to zero, leading to a meaningless bound. Nevertheless, this is addressed by Theorem 6 using techniques similar to those in [17], giving the usual regret bound. All details can be found in Appendix E.
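As an illustration only (ours), with a crude one-dimensional grid standing in for the continuous set $S$, the continuous-EXP-style prediction is the mean of a density proportional to the exponentiated cumulative loss:

```python
import numpy as np

def continuous_exp_1d(loss_fns, grid, eta):
    """grid: 1-D array of points discretizing the convex set S; loss_fns: callables on arrays."""
    cum = np.zeros_like(grid, dtype=float)
    predictions = []
    for f in loss_fns:
        w = np.exp(-eta * (cum - cum.min()))   # density p_t(x) ∝ exp(-eta * cumulative loss at x)
        p = w / w.sum()
        predictions.append(float(p @ grid))    # x_t = E_{p_t}[x], which lies in S by convexity
        cum += f(grid)
    return predictions
```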

Applications to Boosting: There is a deep and well-known connection between Hedge and boosting [14, 29]. In principle, every Hedge algorithm can be converted into a boosting algorithm; for instance, this is how AdaBoost was derived from EXP. In the same way, NormalHedge.DT can be converted into a new boosting algorithm that we call NH-Boost.DT. See Appendix F for details and further background on boosting. The main idea is to treat each training example as an “action”, and to rely on the Hedge algorithm to compute distributions over these examples which are used to train the weak hypotheses. Typically, it is assumed that each weak hypothesis has “edge” $\gamma$, meaning its accuracy on the training distribution is at least $\frac{1}{2} + \gamma$. The final hypothesis is a simple majority vote of the weak hypotheses. To understand the prediction accuracy of a boosting algorithm, we often study the training error rate and also the distribution of margins, a well-established measure of confidence (see Appendix F for formal definitions). Thanks to the adaptivity of NormalHedge.DT, we can derive bounds on both the training error and the distribution of margins after any number of rounds:

Theorem 3.

After $T$ rounds, the training error of NH-Boost.DT is of order $\tilde{O}\big(\exp(-c\,T\gamma^2)\big)$, and the fraction of training examples with margin at most $\theta$ (for $\theta < 2\gamma$) is of order $\tilde{O}\big(\exp(-c\,T(2\gamma - \theta)^2)\big)$, for a universal constant $c$.

Thus, the training error decreases at roughly the same rate as for AdaBoost. In addition, this theorem implies that the fraction of examples with margin smaller than $2\gamma$ eventually goes to zero as $T$ gets large, which means NH-Boost.DT converges to the optimal margin $2\gamma$; this is known not to be true for AdaBoost (see [29]). Also, like AdaBoost, NH-Boost.DT is an adaptive boosting algorithm that does not require $\gamma$ or $T$ as a parameter. However, unlike AdaBoost, NH-Boost.DT has the striking property that it completely ignores many examples on each round (by assigning zero weight), which is very helpful for the weak learning algorithm in terms of computational efficiency. To test this, we conducted experiments to compare the efficiency of AdaBoost, “NH-Boost” (an analogous boosting algorithm derived from NormalHedge) and NH-Boost.DT. All details are in Appendix G. Here we only briefly summarize the results. While the three algorithms have similar performance in terms of training and test error, NH-Boost.DT is always the fastest one in terms of running time for the same number of rounds. Moreover, the average fraction of examples with zero weight is significantly higher for NH-Boost.DT than for NH-Boost (see Table 3). On one hand, this explains why NH-Boost.DT is faster (besides the fact that it does not require a numerical search step). On the other hand, this also implies that NH-Boost.DT tends to achieve larger margins, since zero weight is assigned to examples with large margin. This is also confirmed by our experiments.
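To make the conversion concrete, here is a sketch (ours, not the paper's pseudocode for NH-Boost.DT, whose bookkeeping may differ in constants and centering) that uses the NormalHedge.DT weights over examples, with the Hedge loss of an example being the indicator that the weak hypothesis classifies it correctly:

```python
import numpy as np

def nh_boost_dt_sketch(X, y, weak_learner, T):
    """y in {-1, +1}; weak_learner(X, y, D) returns a hypothesis h with h(X) in {-1, +1}."""
    n = len(y)
    s = np.zeros(n)                                   # per-example "chip" positions
    relu = lambda v: np.maximum(v, 0.0)
    hypotheses = []
    for t in range(1, T + 1):
        w = np.exp(relu(1 - s) ** 2 / (3.0 * t)) - np.exp(relu(-1 - s) ** 2 / (3.0 * t))
        D = w / w.sum() if w.sum() > 0 else np.full(n, 1.0 / n)   # many entries are exactly zero
        h = weak_learner(X, y, D)
        hypotheses.append(h)
        correct = (h(X) == y).astype(float)           # Hedge loss of example i: 1{h_t correct on i}
        s += correct - float(D @ correct)             # drifting-game position update
    return lambda Xnew: np.sign(sum(h(Xnew) for h in hypotheses))   # simple majority vote
```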

Acknowledgements. Support for this research was provided by NSF Grant #1016029. The authors thank Yoav Freund for helpful discussions and the anonymous reviewers for their comments.

References

  • [1] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, 2008.
  • [2] Jacob Abernethy and Manfred K. Warmuth. Minimax games with bandits. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
  • [3] Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In Advances in Neural Information Processing Systems 23, 2010.
  • [4] Jean-Yves Audibert and Sébastien Bubeck. Regret bounds and minimax policies under partial monitoring. The Journal of Machine Learning Research, 11:2785–2836, 2010.
  • [5] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2014.
  • [6] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
  • [7] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, May 1997.
  • [8] Nicolò Cesa-Bianchi and Gábor Lugosi. Potential-based algorithms in on-line prediction and game theory. Machine Learning, 51(3):239–261, 2003.
  • [9] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
  • [10] Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu. A parameter-free hedging algorithm. Advances in Neural Information Processing Systems 22, 2009.
  • [11] Alexey Chernov and Vladimir Vovk. Prediction with advice of unknown number of experts. arXiv preprint arXiv:1006.0475, 2010.
  • [12] Thomas M. Cover. Universal portfolios. Mathematical Finance, 1(1):1–29, January 1991.
  • [13] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
  • [14] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
  • [15] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.
  • [16] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28(2):337–407, April 2000.
  • [17] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
  • [18] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
  • [19] Robert Kleinberg. Anytime algorithms for multi-armed bandit problems. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 928–936. ACM, 2006.
  • [20] Robert David Kleinberg. Online decision problems with large strategy sets. PhD thesis, MIT, 2005.
  • [21] Haipeng Luo and Robert E. Schapire. Towards Minimax Online Learning with Unknown Time Horizon. In Proceedings of the 31st International Conference on Machine Learning, 2014.
  • [22] H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Proceedings of the 27th Annual Conference on Learning Theory, 2014.
  • [23] Indraneel Mukherjee and Robert E. Schapire. Learning with continuous experts using drifting games. Theoretical Computer Science, 411(29):2670–2683, 2010.
  • [24] Hariharan Narayanan and Alexander Rakhlin. Random walk approach to regret minimization. In Advances in Neural Information Processing Systems 23, 2010.
  • [25] Francesco Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems 28, 2014.
  • [26] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and localize: From value to algorithms. In Advances in Neural Information Processing Systems 25, 2012. Full version available in arXiv:1204.0870.
  • [27] Lev Reyzin and Robert E. Schapire. How boosting the margin can also boost classifier complexity. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
  • [28] Robert E. Schapire. Drifting games. Machine Learning, 43(3):265–291, June 2001.
  • [29] Robert E. Schapire and Yoav Freund. Boosting: Foundations and Algorithms. MIT Press, 2012.
  • [30] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
  • [31] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.

Appendix A Summary of Drifting Game Variants

We study three different variants of drifting games throughout the paper, which correspond to the Hedge setting, the multi-armed bandit problem, and online convex optimization, respectively. The protocols of these variants are summarized below.

DGv1 Given: a nonincreasing, nonnegative loss function $L$. For $t = 1, \ldots, T$: The player chooses a distribution $\mathbf{p}_t$ over $N$ chips. The adversary decides the movement $z_{t,i} \in [-1,1]$ of each chip subject to $\sum_i p_t(i)\, z_{t,i} \geq 0$ and $|z_{t,i} - z_{t,j}| \leq 1$ for all $i$ and $j$. The player suffers loss $\frac{1}{N}\sum_i L\big(\sum_t z_{t,i}\big)$.
DGv2 Given: a nonincreasing, nonnegative loss function $L$. For $t = 1, \ldots, T$: The player chooses a distribution $\mathbf{p}_t$ over $N$ chips. The adversary randomly decides the movement $z_{t,i}$ of each chip subject to $\mathbb{E}\big[\sum_i p_t(i)\, z_{t,i}\big] \geq 0$ and a constraint bounding the (weighted) variance of the movements. The player suffers loss $\frac{1}{N}\sum_i L\big(\sum_t z_{t,i}\big)$.
DGv3 Given: a compact convex set $S$ and a nonincreasing, nonnegative loss function $L$. For $t = 1, \ldots, T$: The player chooses a density function $p_t$ on $S$. The adversary decides a function $z_t : S \to [-1,1]$ subject to $\int_S p_t(x)\, z_t(x)\, dx \geq 0$. The player suffers the loss $L\big(\sum_t z_t(x)\big)$ averaged over $x \in S$.

Appendix B Proof of Theorem 1

Proof.

We first show that both conversions are valid. In Algorithm 1, it is clear that $\mathbf{p}_t$ is a valid distribution. Also, $\ell_{t,i} \in [0,1]$ is guaranteed due to the extra restriction of DGv1. For Algorithm 2, $z_{t,i}$ lies in $[-1,1]$ since $\ell_{t,i} \in [0,1]$, and direct computation shows $\sum_i p_t(i)\, z_{t,i} = 0$ and $|z_{t,i} - z_{t,j}| = |\ell_{t,i} - \ell_{t,j}| \leq 1$ for all $t$, $i$ and $j$.

(1) Fix any choices of $\mathbf{z}_1, \ldots, \mathbf{z}_T$. Since $L$ is required to be nonnegative and is a nonincreasing function, the chips' average loss is bounded by the fraction of chips whose final position falls below the point where $L$ drops to zero. By Algorithm 1, $\ell_{t,i}$ equals $z_{t,i}$ shifted by a quantity independent of $i$, so $s_{T,i}$ equals the total loss of action $i$ minus a quantity that, by the drift constraint, is at most the total loss of the Hedge algorithm. Since the $\epsilon$-regret of $\mathcal{H}$ bounds the gap between the Hedge algorithm's total loss and that of the $\lceil \epsilon N \rceil$-th best action, we must have $s_{T,i} \geq -R^\epsilon_T(\mathcal{H})$ except possibly for the $\lceil \epsilon N \rceil - 1$ best actions, which means the average loss is at most $\epsilon$. This holds for any choices of $\mathbf{z}_t$, so the claim follows.

(2) By Algorithm 2 and the conditions $\frac{1}{N}\sum_i L(s_{T,i}) \leq \epsilon$ and $L(s) \geq \mathbf{1}\{s \leq -R\}$, we have
$$\textstyle \frac{1}{N}\,\big|\{i : s_{T,i} \leq -R\}\big| \;\leq\; \frac{1}{N}\sum_i L(s_{T,i}) \;\leq\; \epsilon,$$
which means there are at most $\epsilon N$ actions satisfying $\sum_t \ell_{t,i} \leq \sum_t \mathbf{p}_t \cdot \boldsymbol{\ell}_t - R$, and thus $\sum_t \ell_{t, i_\epsilon} > \sum_t \mathbf{p}_t \cdot \boldsymbol{\ell}_t - R$, that is, the $\epsilon$-regret is less than $R$. Since this holds for any choices of $\boldsymbol{\ell}_t$, we have $R^\epsilon_T(\mathcal{H}) \leq R$. ∎

Appendix C Summary of Hedge Algorithms and Proofs of Lemma 1, Lemma 2 and Corollary 2

Table 1: Different algorithms derived from Algorithm 3, and comparisons with NormalHedge. The columns are EXP, 2-norm, NormalHedge.DT and NormalHedge; for each algorithm the table lists its potential $\Phi_t(s)$ and its weights $p_t(i)$ (marked N/A for NormalHedge, whose weights involve a constant $c_t$ determined by a numerical search on each round), and whether it is adaptive (EXP: No; 2-norm, NormalHedge.DT and NormalHedge: Yes).
Proof of Lemma 1.

It suffices to show $[-s]_+^2 + 3 \geq \frac{1}{2}\big([1-s]_+^2 + [-1-s]_+^2\big)$ for all $s$. When $s \geq 0$, the right-hand side is at most $\frac{1}{2} \leq 3$. When $s \leq 0$, using $[x]_+^2 \leq x^2$, the right-hand side is at most $\frac{1}{2}\big((1-s)^2 + (1+s)^2\big) = s^2 + 1 \leq [-s]_+^2 + 3$. ∎

Proof of Lemma 2.

Let . It suffices to show

which is clearly true for the following 3 cases:

For the last case , if we can show that is increasing in this region, then the lemma follows. Below, we show this by proving is nonnegative when .

Let . can now be written as

where and . Next we apply (one-dimensional) Taylor expansion to and around , and around , leading to

Direct computation (see Lemma 3 below) shows that the two expressions share exactly the same form, only with different constants:

(4)

where and