The Computational Power of Optimization in Online Learning
We consider the fundamental problem of prediction with expert advice where the experts are “optimizable”: there is a black-box optimization oracle that can be used to compute, in constant time, the leading expert in retrospect at any point in time. In this setting, we give a novel online algorithm that attains vanishing regret with respect to $N$ experts in total computation time $\widetilde{O}(\sqrt{N})$. We also give a lower bound showing that this running time cannot be improved (up to log factors) in the oracle model, thereby exhibiting a quadratic speedup as compared to the standard, oracle-free setting where the required time for vanishing regret is $\widetilde{\Theta}(N)$. These results demonstrate an exponential gap between the power of optimization in online learning and its power in statistical learning: in the latter, an optimization oracle—i.e., an efficient empirical risk minimizer—allows one to learn a finite hypothesis class of size $N$ in time $O(\log N)$.
We also study the implications of our results for learning in repeated zero-sum games, in a setting where the players have access to oracles that compute, in constant time, their best response to any mixed strategy of their opponent. We show that the runtime required for approximating the minimax value of the game in this setting is $\widetilde{\Theta}(\sqrt{N})$, yielding again a quadratic improvement upon the oracle-free setting, where $\Theta(N)$ is known to be tight.
Prediction with expert advice is a fundamental model of sequential decision making and online learning in games. This setting is often described as the following repeated game between a player and an adversary: on each round, the player has to pick an expert from a fixed set of $N$ possible experts, the adversary then reveals an arbitrary assignment of losses to the experts, and the player incurs the loss of the expert he chose to follow. The goal of the player is to minimize his $T$-round average regret, defined as the difference between his average loss over $T$ rounds of the game and the average loss of the best expert in that period—the one having the smallest average loss in hindsight. Multiplicative weights algorithms (Littlestone and Warmuth, 1994; Freund and Schapire, 1997; see also Arora et al., 2012 for an overview) achieve this goal by maintaining weights over the experts and choosing which expert to follow by sampling proportionally to the weights; the weights are updated from round to round via a multiplicative update rule according to the observed losses.
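To make the scheme concrete, here is a minimal sketch of a Hedge-style multiplicative weights update; the function name, the learning rate `eta`, and the loss format are our own illustrative choices, not notation from the paper.

```python
import math
import random

def multiplicative_weights(n_experts, loss_rounds, eta=0.5, rng=random):
    """Hedge sketch: sample an expert proportionally to the weights,
    then update every weight multiplicatively from the observed losses."""
    weights = [1.0] * n_experts
    total_loss = 0.0
    for losses in loss_rounds:          # losses[i] in [0, 1] for expert i
        chosen = rng.choices(range(n_experts), weights=weights)[0]
        total_loss += losses[chosen]
        # multiplicative update: experts with larger loss are down-weighted
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss, weights
```

Note that each round touches all the weights, which is exactly the cost linear in the number of experts discussed next.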
While multiplicative weights algorithms are very general and provide regret guarantees that scale only logarithmically with the number of experts $N$, they need computation time that grows linearly with $N$ to achieve meaningful average regret. The number of experts is often exponentially large in applications (think of the number of all possible paths in a graph, or the number of different subsets of a certain ground set), motivating the search for more structured settings where efficient algorithms are possible. Assuming additional structure—such as linearity, convexity, or submodularity of the loss functions—one can typically minimize regret in total $\mathrm{poly}(\log N)$ time in many settings of interest (e.g., Zinkevich, 2003; Kalai and Vempala, 2005; Awerbuch and Kleinberg, 2008; Hazan and Kale, 2012). However, the basic multiplicative weights algorithm remains the most general and is still widely used.
The improvement in structured settings—most notably in the linear case (Kalai and Vempala, 2005) and in the convex case (Zinkevich, 2003)—often comes from a specialized reduction of the online problem to the offline version of the optimization problem. In other words, efficient online learning is made possible by providing access to an offline optimization oracle over the experts, which allows the player to quickly compute the best performing expert with respect to any given distribution over the adversary’s losses. However, in all of these cases, the regret and runtime guarantees of the reduction need the additional structure. Thus, it is natural to ask whether such a drastic improvement in runtime is possible for generic online learning. Specifically, we ask: What is the runtime required for minimizing regret given a black-box optimization oracle for the experts, without assuming any additional structure? Can one do better than time linear in $N$?
In this paper, we give a precise answer to these questions. We show that, surprisingly, an offline optimization oracle gives rise to a substantial, quadratic improvement in the runtime required for convergence of the average regret. We give a new algorithm that is able to minimize regret in total time $\widetilde{O}(\sqrt{N})$ (here and throughout, we use the $\widetilde{O}$ notation to hide constants and poly-logarithmic factors), and provide a matching lower bound confirming that this is, in general, the best possible. Thus, our results establish a tight characterization of the computational power of black-box optimization in online learning. In particular, unlike in many of the structured settings where $\mathrm{poly}(\log N)$ runtime is possible, without imposing additional structure a polynomial dependence on $N$ is inevitable.
Our results demonstrate an exponential gap between the power of optimization in online learning and its power in statistical learning. It is a simple and well-known fact that for a finite hypothesis class of size $N$ (which corresponds to a set of $N$ experts in the online setting), black-box optimization gives rise to a statistical learning algorithm—often called empirical risk minimization—that needs only $O(\log N)$ examples for learning. Thus, given an offline optimization oracle that optimizes in constant time, statistical learning can be performed in time $O(\log N)$; in contrast, our results show that the complexity of online learning using such an optimization oracle is $\widetilde{\Theta}(\sqrt{N})$. This dramatic gap is surprising in light of a long line of work in online learning suggesting that whatever can be done in an offline setting can also be done (efficiently) online.
Finally, we study the implications of our results for repeated game playing in two-player zero-sum games. The analogue of an optimization oracle in this setting is a best-response oracle for each of the players, which allows her to quickly compute the pure action that is the best response to any given mixed strategy of her opponent. In this setting, we consider the problem of approximately solving a zero-sum game—namely, finding a mixed strategy profile with payoff close to the minimax payoff of the game. We show that our new online learning algorithm above, if deployed by each of the players in an $N \times N$ zero-sum game, guarantees convergence to an approximate equilibrium in $\widetilde{O}(\sqrt{N})$ total time. This is, again, a quadratic improvement upon the best possible $\Theta(N)$ runtime in the oracle-free setting, as established by Grigoriadis and Khachiyan (1995) and Freund and Schapire (1999). Interestingly, it turns out that the quadratic improvement is tight for solving zero-sum games as well: we prove that any algorithm would require $\widetilde{\Omega}(\sqrt{N})$ time to approximate the value of a zero-sum game in general, even when given access to powerful best-response oracles.
1.1 Related Work
The most general reduction from regret minimization to optimization was introduced in the influential work of Kalai and Vempala (2005) as the Follow-the-Perturbed-Leader (FPL) methodology. This technique requires the problem at hand to be embeddable in a low-dimensional space and the cost functions to be linear in that space. (The extension to convex cost functions is straightforward; see, e.g., Hazan, 2014.) Subsequently, Kakade et al. (2009) reduced regret minimization to approximate linear optimization. For general convex functions, the Follow-the-Regularized-Leader (FTRL) framework (Zinkevich, 2003; see also Hazan, 2014) provides a general reduction from online to offline optimization that often gives dimension-independent convergence rates. Another general reduction was suggested by Kakade and Kalai (2006) for the related model of transductive online learning, where future data is partially available to the player (in the form of unlabeled examples).
Without a fully generic reduction from online learning to optimization, specialized online variants for numerous optimization scenarios have been explored. This includes efficient regret-minimization algorithms for online variance minimization (Warmuth and Kuzmin, 2006), routing in networks (Awerbuch and Kleinberg, 2008), online permutations and ranking (Helmbold and Warmuth, 2009), online planning (Even-Dar et al., 2009), matrix completion (Hazan et al., 2012), online submodular minimization (Hazan and Kale, 2012), contextual bandits (Dudík et al., 2011; Agarwal et al., 2014), and many more.
Computational tradeoffs in learning.
Tradeoffs between sample complexity and computation in statistical learning have been studied intensively in recent years (e.g., Agarwal, 2012; Shalev-Shwartz and Srebro, 2008; Shalev-Shwartz et al., 2012). However, the adversarial setting of online learning, which is our main focus in this paper, has not received similar attention. One notable exception is the seminal paper of Blum (1990), who showed that, under certain cryptographic assumptions, there exists a hypothesis class which is computationally hard to learn in the online mistake-bound model but is non-properly learnable in polynomial time in the PAC model. (Non-proper learning means that the algorithm is allowed to return a hypothesis outside of the hypothesis class it competes with.) In our terminology, Blum’s result shows that online learning might require $\mathrm{poly}(N)$ time even in a case where offline optimization can be performed in $\mathrm{poly}(\log N)$ time, albeit non-properly (i.e., the optimization oracle is allowed to return a prediction rule which is not necessarily one of the experts).
Solution of zero-sum games.
The computation of equilibria in zero-sum games is known to be equivalent to linear programming, as was first observed by von-Neumann (Adler, 2013). A basic and well-studied question in game theory is the study of rational strategies that converge to equilibria (see Nisan et al., 2007 for an overview). Freund and Schapire (1999) showed that in zero-sum games, no-regret algorithms converge to equilibrium. Hart and Mas-Colell (2000) studied convergence of no-regret algorithms to correlated equilibria in more general games; Even-Dar et al. (2009) analyzed convergence to equilibria in concave games. Grigoriadis and Khachiyan (1995) were the first to observe that zero-sum games can be solved in total time sublinear in the size of the game matrix.
Game dynamics that rely on best-response computations have been a topic of extensive research for more than half a century, since the early days of game theory. Within this line of work, perhaps the most prominent dynamic is the “fictitious play” algorithm, in which both players repeatedly follow their best-response to the empirical distribution of their opponent’s past plays. This simple and natural dynamic was first proposed by Brown (1951), shown to converge to equilibrium in two-player zero-sum games by Robinson (1951), and was extensively studied ever since (see e.g., Brandt et al., 2013; Daskalakis and Pan, 2014 and the references therein). Another related dynamic, put forth by Hannan (1957) and popularized by Kalai and Vempala (2005), is based on perturbed (i.e., noisy) best-responses.
We remark that since the early works of Grigoriadis and Khachiyan (1995) and Freund and Schapire (1999), faster algorithms for approximating equilibria in zero-sum games have been proposed (e.g., Nesterov, 2005; Daskalakis et al., 2011). However, the improvements there are in terms of the approximation parameter $\epsilon$ rather than the size of the game $N$. It is a simple folklore fact that using only value oracle access to the game matrix, any algorithm for approximating the equilibrium must run in time $\Omega(N)$; see, e.g., Clarkson et al. (2012).
2 Formal Setup and Statement of Results
We now formalize our computational oracle-based model for learning in games—a setting which we call “Optimizable Experts”. The model is essentially the classic online learning model of prediction with expert advice augmented with an offline optimization oracle.
Prediction with expert advice can be described as a repeated game between a player and an adversary, characterized by a finite set $\mathcal{X}$ of $N$ experts for the player to choose from, a set $\mathcal{Y}$ of actions for the adversary, and a loss function $\ell : \mathcal{X} \times \mathcal{Y} \to [0,1]$. First, before the game begins, the adversary picks an arbitrary sequence $y_1, y_2, \ldots$ of actions from $\mathcal{Y}$. (Such an adversary is called oblivious, since it cannot react to the decisions of the player as the game progresses. We henceforth assume an oblivious adversary, and relax this assumption later in Section 4.) On each round $t$ of the game, the player has to choose (possibly at random) an expert $x_t \in \mathcal{X}$, the adversary then reveals his action $y_t \in \mathcal{Y}$, and the player incurs the loss $\ell(x_t, y_t)$. The goal of the player is to minimize his expected average regret over $T$ rounds of the game, defined as

$$R_T \;=\; \mathbb{E}\!\left[\frac{1}{T}\sum_{t=1}^{T} \ell(x_t, y_t)\right] \;-\; \min_{x \in \mathcal{X}} \, \frac{1}{T}\sum_{t=1}^{T} \ell(x, y_t).$$
Here, the expectation is taken with respect to the randomness in the choices of the player.
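Spelled out in code, the average regret can be computed directly from the realized losses; the following small helper (names are ours, not the paper's) mirrors the definition above:

```python
def average_regret(player_losses, loss_matrix):
    """T-round average regret: the player's average loss minus the
    average loss of the best expert in hindsight.
    player_losses[t] = loss of the player's pick at round t;
    loss_matrix[t][i] = loss of expert i at round t."""
    T = len(player_losses)
    n = len(loss_matrix[0])
    best = min(sum(loss_matrix[t][i] for t in range(T)) / T for i in range(n))
    return sum(player_losses) / T - best
```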
In the optimizable experts model, we assume that the loss function $\ell$ is initially unknown to the player, and allow her to access it by means of two oracles: $\textsc{Val}$ and $\textsc{Opt}$. The first oracle simply computes, for each expert–action pair, the respective loss incurred by expert $x$ when the adversary plays the action $y$.
Definition (value oracle).
A value oracle is a procedure that, for any pair $x \in \mathcal{X}$, $y \in \mathcal{Y}$, returns the loss value $\ell(x, y)$ in time $O(1)$; that is, $\textsc{Val}(x, y) = \ell(x, y)$.
The second oracle is far more powerful, and allows the player to quickly compute the best performing expert with respect to any given distribution over actions from (i.e., any mixed strategy of the adversary).
Definition (optimization oracle).
An optimization oracle is a procedure that receives as input a distribution $q$ over $\mathcal{Y}$, represented as a list of atoms $\{(y_i, q_i)\}$, and returns a best performing expert with respect to $q$ (with ties broken arbitrarily), namely

$$\textsc{Opt}(q) \;\in\; \operatorname*{arg\,min}_{x \in \mathcal{X}} \; \mathbb{E}_{y \sim q}\big[\ell(x, y)\big].$$
The oracle runs in time $O(1)$ on any input.
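The two oracles can be modeled in code as the following interface; the class and method names are our own, and the brute-force `opt` below is only meant to specify the contract (the model itself assumes both calls take constant time):

```python
class OracleModel:
    """Illustrative wrapper exposing only the Val and Opt oracles of an
    (otherwise hidden) loss function over experts and adversary actions."""

    def __init__(self, loss, experts):
        self._loss = loss          # hidden from the learner
        self._experts = experts

    def val(self, x, y):
        # value oracle: the loss of expert x on adversary action y
        return self._loss(x, y)

    def opt(self, atoms):
        # optimization oracle: a best expert w.r.t. a mixed strategy of
        # the adversary, given as a list of (action, probability) atoms
        return min(self._experts,
                   key=lambda x: sum(q * self._loss(x, y) for y, q in atoms))
```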
Recall that our goal in this paper is to evaluate online algorithms by their runtime complexity. To this end, it is natural to consider the running time it takes for the average regret of the player to drop below some specified target threshold. (This is indeed the appropriate criterion in algorithmic applications of online learning methods.) Namely, for a given $\epsilon > 0$, we will be interested in the total computational cost (as opposed to the number of rounds) required for the player to ensure that $R_T \le \epsilon$, as a function of $N$ and $\epsilon$. Notice that the number of rounds $T$ required to meet the latter goal is implicit in this view, and only indirectly affects the total runtime.
2.1 Main Results
We can now state the main results of the paper: a tight characterization of the runtime required for the player to converge to $\epsilon$ expected average regret in the optimizable experts model. Our algorithm (Theorem 1 below) attains this goal in total computation time $\widetilde{O}(\sqrt{N})$, suppressing the dependence on $\epsilon$.
The dependence on the number of experts $N$ in the above result is tight, as the following theorem shows.
Any (randomized) algorithm in the optimizable experts model cannot guarantee an expected average regret smaller than $\epsilon$ in total time better than $\widetilde{\Omega}(\sqrt{N})$.
In other words, we exhibit a quadratic improvement in the total runtime required for the average regret to converge, as compared to standard multiplicative weights schemes that require $\widetilde{\Theta}(N)$ time, and this improvement is the best possible. Granted, the regret bound attained by the algorithm is inferior to those achieved by multiplicative weights methods, which depend on $N$ only logarithmically; however, when we consider the total computational cost required for convergence, the substantial improvement is evident.
Our upper bound actually applies to a model more general than the optimizable experts model, where instead of having access to an optimization oracle, the player receives information about the leading expert on each round of the game. Namely, in this model the player observes at the end of round $t$ the leader

$$x_t^{\star} \;\in\; \operatorname*{arg\,min}_{x \in \mathcal{X}} \, \sum_{s=1}^{t} \ell(x, y_s) \tag{1}$$

as part of the feedback. This is indeed a more general model, as the leader can be computed in the oracle model in $O(1)$ amortized time, simply by calling $\textsc{Opt}$ on the uniform distribution over the actions $y_1, \ldots, y_t$ played by the adversary so far. (The list of actions played by the adversary can be maintained in an online fashion in $O(1)$ time per round.) Our lower bound, however, applies even when the player has access to an optimization oracle in its full power.
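A sketch of this reduction, assuming a generic `opt_oracle` callable with the atom-list interface defined above (`brute_opt` is a hypothetical toy oracle used only to exercise the interface, not part of the model):

```python
def leader(opt_oracle, past_actions):
    """Leader in hindsight: a single call to the optimization oracle on
    the uniform distribution over the adversary's past actions."""
    t = len(past_actions)
    return opt_oracle([(y, 1.0 / t) for y in past_actions])

# Hypothetical toy oracle over three experts {0, 1, 2} with loss |x - y|.
def brute_opt(atoms, experts=(0, 1, 2)):
    return min(experts, key=lambda x: sum(q * abs(x - y) for y, q in atoms))
```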
Finally, we mention a simple corollary of Theorem 2: we obtain that the time required to attain vanishing average regret in online Lipschitz-continuous optimization in Euclidean space is exponential in the dimension, even when an oracle for the corresponding offline optimization problem is at hand. For the precise statement of this result, see Section 5.1.
2.2 Zero-sum Games with Best-response Oracles
In this section we present the implications of our results for repeated game playing in two-player zero-sum games. Before we can state the results, we first recall the basic notions of zero-sum games and describe the setting formally.
A two-player zero-sum game is specified by a matrix $G$, in which the rows correspond to the (pure) strategies of the first player, called the row player, while the columns correspond to strategies of the second player, called the column player. For simplicity, we restrict our attention to games in which both players have $N$ pure strategies to choose from; our results below can be readily extended to deal with games of general (finite) size. A mixed strategy of the row player is a distribution $p$ over the rows of $G$; similarly, a mixed strategy for the column player is a distribution $q$ over the columns. For players playing strategies $(p, q)$, the loss (respectively, payoff) suffered by the row (respectively, column) player is given by $p^{\top} G q$. A pair of mixed strategies $(p^{\star}, q^{\star})$ is said to be an approximate equilibrium if for both players there is almost no incentive to deviate from the strategies $p^{\star}$ and $q^{\star}$. Formally, $(p^{\star}, q^{\star})$ is an $\epsilon$-equilibrium if and only if

$$\min_{i} \; e_i^{\top} G q^{\star} \;\ge\; p^{\star\top} G q^{\star} - \epsilon
\qquad \text{and} \qquad
\max_{j} \; p^{\star\top} G e_j \;\le\; p^{\star\top} G q^{\star} + \epsilon.$$
Here and throughout, $e_i$ stands for the $i$'th standard basis vector, namely a vector with $1$ in its $i$'th coordinate and zeros elsewhere. The celebrated von-Neumann minimax theorem asserts that for any zero-sum game there exists an exact equilibrium (i.e., one with $\epsilon = 0$) and that it has a unique value, given by

$$\lambda^{\star} \;=\; \min_{p} \max_{q} \; p^{\top} G q \;=\; \max_{q} \min_{p} \; p^{\top} G q.$$
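As a sanity check, the $\epsilon$-equilibrium condition translates directly into code; the sketch below uses our sign convention that the row player minimizes $p^{\top} G q$ (function name is ours):

```python
def is_eps_equilibrium(G, p, q, eps):
    """Check the epsilon-equilibrium condition for mixed strategies p, q
    in the zero-sum game G: no pure deviation gains either player more
    than eps relative to the value p^T G q."""
    n = len(G)
    value = sum(p[i] * G[i][j] * q[j] for i in range(n) for j in range(n))
    # best pure deviations: row player minimizes, column player maximizes
    best_row = min(sum(G[i][j] * q[j] for j in range(n)) for i in range(n))
    best_col = max(sum(p[i] * G[i][j] for i in range(n)) for j in range(n))
    return best_row >= value - eps and best_col <= value + eps
```

For example, in matching pennies the uniform strategy pair is an exact equilibrium, while any pure row strategy against a uniform column is not.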
A repeated zero-sum game is an iterative process in which the two players simultaneously announce their strategies, and suffer loss (or receive payoff) accordingly. Given $\epsilon > 0$, the goal of the players in the repeated game is to converge, as quickly as possible, to an $\epsilon$-equilibrium; in this paper, we will be interested in the total runtime required for the players to reach an $\epsilon$-equilibrium, rather than the total number of game rounds required to do so.
We assume that the players do not know the game matrix in advance, and may only access it through two types of oracles, which are very similar to the ones we defined in the online learning model. The first and most natural oracle allows the player to query the payoff for any pair of pure strategies (i.e., a pure strategy profile) in constant time. Formally,
Definition (value oracle).
A value oracle for a zero-sum game described by a matrix $G$ is a procedure that accepts row and column indices $i, j$ as input and returns the game value for the pure strategy profile $(i, j)$, namely: $\textsc{Val}(i, j) = G_{i,j}$.
The value oracle runs in time $O(1)$ on any valid input.
The other oracle we consider is the analogue of an optimization oracle in the context of games. For each of the players, a best-response oracle is a procedure that computes the player’s best response (pure) strategy to any mixed strategy of his opponent, given as input.
Definition (best-response oracle).
A best-response oracle for the row player in a zero-sum game described by a matrix $G$ is a procedure that receives as input a distribution $q$ over the columns, represented as a list of atoms $\{(j, q_j)\}$, and computes

$$\textsc{BR}_{\mathrm{row}}(q) \;\in\; \operatorname*{arg\,min}_{i} \; e_i^{\top} G q,$$

with ties broken arbitrarily. Similarly, a best-response oracle for the column player accepts as input a distribution $p$ over the rows, represented as a list of atoms $\{(i, p_i)\}$, and computes

$$\textsc{BR}_{\mathrm{col}}(p) \;\in\; \operatorname*{arg\,max}_{j} \; p^{\top} G e_j.$$
Both best-response oracles return in time $O(1)$ on any input.
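A brute-force rendering of the two best-response contracts (illustrative only: these run in time linear in $N$, whereas the model assumes each oracle call costs $O(1)$; the function names are ours):

```python
def best_response_row(G, atoms):
    """Row player's best response: the pure strategy minimizing expected
    loss against a column mixed strategy given as (column, prob) atoms."""
    n = len(G)
    return min(range(n), key=lambda i: sum(q * G[i][j] for j, q in atoms))

def best_response_col(G, atoms):
    """Column player's best response: the pure strategy maximizing
    expected payoff against a row mixed strategy given as (row, prob) atoms."""
    n = len(G[0])
    return max(range(n), key=lambda j: sum(p * G[i][j] for i, p in atoms))
```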
Our main results regarding the runtime required to converge to an approximate equilibrium in zero-sum games with best-response oracles are the following.
Any (randomized) algorithm for approximating the equilibrium of zero-sum games with best-response oracles cannot guarantee, with constant probability, that the average payoff of the row player is at most $\epsilon$-away from its value at equilibrium in total time better than $\widetilde{\Omega}(\sqrt{N})$.
As indicated earlier, these results show that best-response oracles in repeated game playing give rise again to a quadratic improvement in the runtime required for solving zero-sum games, as compared to the best possible runtime for doing so without access to best-response oracles, which scales linearly with $N$ (Grigoriadis and Khachiyan, 1995; Freund and Schapire, 1999).
The algorithm deployed in Theorem 3 above is a very natural one: it simulates a repeated game where both players play a slight modification of the regret minimization algorithm of Theorem 1, and the best-response oracle of each player serves as the optimization oracle required for the online algorithm; see Section 4 for more details.
2.3 Overview of the Approach and Techniques
We now outline the main ideas leading to the quadratic improvement in runtime achieved by our online algorithm of Theorem 1. Intuitively, the challenge is to reduce the number of “effective” experts quadratically, from $N$ to roughly $\sqrt{N}$. Since we have an optimization oracle at our disposal, it is natural to focus on the set of “leaders”—those experts that have been best at some point in history—and try to reduce the complexity of the online problem to scale with the number of such leaders. This set is natural considering our computational concerns: the algorithm can obtain information on the leaders at almost no cost (using the optimization oracle, it can compute the leader on each round in only $\widetilde{O}(1)$ time per round), resulting in a potentially substantial advantage in terms of runtime.
First, suppose that there is a small number $k$ of leaders throughout the game, say $k = O(\sqrt{N})$. Then, intuitively, the problem we face is easy: if we knew the identity of those leaders in advance, our regret would scale with $k$ and be independent of the total number of experts $N$. As a result, using standard multiplicative weights techniques we would be able to attain vanishing regret in total time that depends linearly on $k$, and in the case $k = O(\sqrt{N})$ we would be done. When the leaders are not known in advance, one could appeal to various techniques that were designed to deal with experts problems in which the set of experts evolves over time (e.g., Freund et al., 1997; Blum and Mansour, 2007; Kleinberg et al., 2010; Gofer et al., 2013). However, the per-round runtime of all of these methods is linear in $N$, which is prohibitive for our purposes. We remark that the simple “follow the leader” algorithm, which simply chooses the most recent leader on each round of the game, is not guaranteed to perform well in this case: the regret of this algorithm scales with the number of times the leader switches—rather than the number of distinct leaders—which might grow linearly with $T$ even when there are few active leaders.
A main component in our approach is a novel online learning algorithm, called Leaders, that keeps track of the leaders in an online game and attains $\widetilde{O}(\sqrt{k/T})$ average regret in expectation with $\widetilde{O}(1)$ runtime per round, where $k$ is the number of distinct leaders. The algorithm, which we describe in detail in Section 3.1, queries the oracles only $\widetilde{O}(1)$ times per iteration and thus can be implemented efficiently. More formally,
The expected $T$-round average regret of the Leaders algorithm is upper bounded by $\widetilde{O}(\sqrt{k/T})$, where $k$ is an upper bound on the total number of distinct leaders throughout the game. The algorithm can be implemented in $\widetilde{O}(1)$ time per round in the optimizable experts model.
As far as we know, this technique is new to the theory of regret minimization and may be of independent interest. In a sense, it is a partial-information algorithm: it is allowed to use only a small fraction of the feedback signal (i.e., read a small fraction of the loss values) on each round, due to the time restrictions. Nevertheless, its regret guarantee can be shown to be optimal in terms of the number of leaders $k$, even when removing the computational constraints! The new algorithm is based on running in parallel a hierarchy of multiplicative-updates algorithms with varying look-back windows for keeping track of recent leaders.
But what happens if there are many leaders, say $k \gg \sqrt{N}$? In this case, we can incorporate random guessing: if we sample about $\sqrt{N}$ experts, with reasonably high probability one of them will be among the “top” $\sqrt{N}$ leaders. By competing with this small random set of experts, we can keep the regret under control up to the point in time where at most $\sqrt{N}$ leaders remain active (in the sense that they appear as leaders at some later time). In essence, this observation allows us to reduce the effective number of leaders back to the order of $\sqrt{N}$ and use the approach detailed above even when $k \gg \sqrt{N}$, putting the Leaders algorithm into action at the point in time where the top-$\sqrt{N}$ leader is encountered (without actually knowing when exactly this event occurs).
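The guessing step rests on a simple calculation: sampling on the order of $\sqrt{N}$ experts uniformly at random hits a fixed set of $\sqrt{N}$ “top” experts with constant probability, $1 - (1 - 1/\sqrt{N})^{\sqrt{N}} \approx 1 - 1/e$. A sketch of this calculation (helper name and defaults are ours):

```python
def hit_top_probability(n, sample_size=None, top=None):
    """Probability that at least one of `sample_size` uniform draws
    (with replacement) from n experts lands in a fixed set of `top`
    experts; the defaults model the sqrt(N) guessing step."""
    root = int(n ** 0.5)
    sample_size = sample_size if sample_size is not None else root
    top = top if top is not None else root
    return 1.0 - (1.0 - top / n) ** sample_size
```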
In order to apply our algorithm to repeated two-player zero-sum games and obtain Theorem 3, we first show how it can be adapted to minimize regret even when used against an adaptive adversary, that can react to the decisions of the algorithm (as is the case in repeated games). Then, via standard techniques (Freund and Schapire, 1999), we show that the quadratic speedup we achieved in the online learning setting translates to similar speedup in the solution of zero-sum games. In a nutshell, we let both players use our online regret-minimization algorithm for picking their strategies on each round of the game, where they use their best-response oracles to fill the role of the optimization oracle in the optimizable experts model.
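A minimal self-play sketch in the spirit of Freund and Schapire (1999): both players run a multiplicative-weights update against each other, and their average mixed strategies form an approximate equilibrium. (The function and parameter names are ours; this is the plain oracle-free dynamic, not the paper's accelerated algorithm.)

```python
import math

def hedge_self_play(G, T, eta):
    """Both players run Hedge against each other in the zero-sum game G
    (row player minimizes p^T G q); returns the players' average mixed
    strategies over T rounds."""
    n = len(G)
    wr, wc = [1.0] * n, [1.0] * n
    avg_p, avg_q = [0.0] * n, [0.0] * n
    for _ in range(T):
        p = [w / sum(wr) for w in wr]
        q = [w / sum(wc) for w in wc]
        for i in range(n):
            avg_p[i] += p[i] / T
            avg_q[i] += q[i] / T
        # row player minimizes loss, column player maximizes payoff
        row_loss = [sum(G[i][j] * q[j] for j in range(n)) for i in range(n)]
        col_gain = [sum(p[i] * G[i][j] for i in range(n)) for j in range(n)]
        wr = [w * math.exp(-eta * l) for w, l in zip(wr, row_loss)]
        wc = [w * math.exp(eta * g) for w, g in zip(wc, col_gain)]
    return avg_p, avg_q
```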
Our lower bounds (i.e., Theorems 4 and 2) are based on information-theoretic arguments, which can be turned into running time lower bounds in our oracle-based computational model. In particular, the lower bound for zero-sum games is based on a reduction to a problem investigated by Aldous (1983) and revisited years later by Aaronson (2006), and reveals interesting connections between the solution of zero-sum games and local-search problems. Aldous investigated the hardness of local-search problems and gave an explicit example of an efficiently-representable (random) function which is hard to minimize over its domain, even with access to a local improvement oracle. (A local improvement oracle improves upon a given solution by searching in its local neighborhood.) Our reduction constructs a zero-sum game in which a best-response query amounts to a local-improvement step, and translates Aldous’ query-complexity lower bound to a runtime lower bound in our model.
Interestingly, the connection to local-search problems is also visible in our algorithmic results: our algorithm for learning with optimizable experts (Algorithm 2) involves guessing a “top $\sqrt{N}$” solution (i.e., a leader) and making local-improvement steps to this solution (i.e., tracking the finalist leaders all the way to the final leader). This is reminiscent of a classical randomized algorithm for local search, pointed out by Aldous (1983).
3 Algorithms for Optimizable Experts
In this section we develop our algorithms for online learning in the optimizable experts model. Recall that we assume a more general setting where there is no optimization oracle, but instead the player observes after each round the identity of the leader (see Eq. 1) as part of the feedback on that round. Thus, in what follows we assume that the leader is known immediately after round with no additional computational costs, and do not require the oracle any further.
To simplify the presentation, we introduce the following notation. We fix a horizon $T$ and denote by $\ell_1, \ldots, \ell_T$ the sequence of loss functions induced by the actions chosen by the adversary, where $\ell_t(x) = \ell(x, y_t)$ for all $t$; notice that the resulting sequence is a completely arbitrary sequence of loss functions over $\mathcal{X}$, as both $\ell$ and the $y_t$'s are chosen adversarially. We also fix the set of experts to $\mathcal{X} = [N] = \{1, \ldots, N\}$, identifying each expert with its serial index.
3.1 The Leaders Algorithm
We begin by describing the main technique in our algorithmic results—the Leaders algorithm—which is key to proving Theorem 1. Leaders is an online algorithm designed to perform well in online learning problems with a small number of leaders, both in terms of average regret and computational costs. The algorithm makes use of the information on the leaders received as feedback to save computation time, and can be made to run in almost constant time per round (up to logarithmic factors).
The Leaders algorithm is presented in Algorithm 1. In the following theorem we state its guarantees; the theorem gives a slightly more general statement than the one presented earlier in Theorem 5, that we require for the proof of our main result.
Assume that Leaders is used for prediction with expert advice (with leaders feedback) against loss functions $\ell_1, \ldots, \ell_T$, and that the total number of distinct leaders during a certain time period, whose length is bounded by $T$, is at most $k$. Then, provided the numbers $T$ and $k$ are given as input, the algorithm obtains an expected average regret of $\widetilde{O}(\sqrt{k/T})$ over that period.
The algorithm can be implemented to run in $\widetilde{O}(1)$ time per round.
Algorithm 1 relies on two simpler online algorithms that we describe in detail later in this section (see Section 3.3, where we also discuss a third auxiliary variant). Both are variants of the standard multiplicative weights (MW) method for prediction with expert advice. The first is a rather simple adaptation of MW which is able to guarantee bounded regret in any time interval of predefined length:
Suppose that this first variant (Algorithm 3 below) is used for prediction with expert advice, against an arbitrary sequence of loss functions over $N$ experts. Then, for an appropriate choice of the step size $\eta$, its sequence of predictions attains bounded regret over any time interval of length at most a given bound $\tau$. The algorithm can be implemented to run in time linear in the number of experts per round.
The second algorithm is a “sliding window” version of the first that, given a parameter $B$, maintains a buffer of the $B$ experts that were most recently “activated”; in our context, an expert is activated on round $t$ if it is the leader at the end of that round. The algorithm competes (in terms of regret) with the most recently activated experts as long as they remain in the buffer. Formally,
Suppose that this sliding-window variant (Algorithm 5 below) is used for prediction with expert advice, against an arbitrary sequence of loss functions over $N$ experts. Assume that expert $x$ was activated on round $t_0$, and that from that point until round $t_1$ there were no more than $B$ different activated experts (including $x$ itself). Then, for an appropriate choice of the step size, the predictions of the algorithm attain bounded regret with respect to $x$ over any time interval of length at most a given bound. Furthermore, the algorithm can be implemented to run in time that depends only on the buffer size $B$ (up to logarithmic factors) per round.
For the analysis of Algorithm 1, we require a few definitions. We let $I = \{s, \ldots, s'\}$ denote the time interval under consideration. For all $t \in I$, we denote by $S_t$ the set of all leaders encountered since round $s$ up to and including round $t$; for completeness we also define $S_{s-1} = \emptyset$. The theorem's assumption then implies that $|S_{s'}| \le k$. For a set of experts $A$, we let $d(A)$ denote the last round in which one of the experts in $A$ occurs as a leader. In other words, after round $d(A)$, the leaders in $A$ have “died out” and no longer appear as leaders.
Next, we split $I$ into epochs $E_1, \ldots, E_M$, where the $m$'th epoch spans between rounds $\beta_{m-1} + 1$ and $\beta_m$, and is defined recursively by $\beta_0 = s - 1$ and $\beta_m = d(S_{\beta_{m-1}+1})$ for all $m \ge 1$. In words, $S_{\beta_{m-1}+1}$ is the set of leaders encountered by the beginning of epoch $m$, and this epoch ends once all leaders in this set have died out. Let $M$ denote the number of resulting epochs (notice that $M \le k$, as at least one leader dies out in each of the epochs). For each $m$, let $T_m$ denote the length of the $m$'th epoch, namely $T_m = \beta_m - \beta_{m-1}$, and let $x_m^{\star}$ be the leader at the end of epoch $m$. Finally, for each epoch $m$ we let $A_m$ denote the set of leaders that have died out during the epoch, and for technical convenience we also define $A_0 = \emptyset$; notice that $A_1, \ldots, A_M$ is a partition of the set of all leaders, so in particular $\sum_{m=1}^{M} |A_m| \le k$. See Fig. 1 for an illustration of the definitions.
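The epoch construction can be illustrated on a hypothetical sequence of per-round leaders (the helper name and 0-based indexing are ours): each epoch ends once every leader seen by its start has made its final appearance, so the number of epochs never exceeds the number of distinct leaders.

```python
def split_into_epochs(leaders):
    """Split a sequence of per-round leaders into (start, end) epochs:
    an epoch ends once every leader encountered by its first round has
    made its last appearance in the sequence (i.e., has 'died out')."""
    T = len(leaders)
    last_seen = {x: t for t, x in enumerate(leaders)}  # last round of each leader
    epochs, start = [], 0
    while start < T:
        known = set(leaders[:start + 1])
        end = max(last_seen[x] for x in known)
        epochs.append((start, end))
        start = end + 1
    return epochs
```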
Our first lemma states that minimizing regret in each epoch with respect to the leader at the end of the epoch, also guarantees low regret with respect to the overall leader . It is a variant of the “Follow The Leader, Be The Leader” lemma (Kalai and Vempala, 2005).
Following the epoch leaders yields no regret, in the sense that the total loss of playing, in every epoch, the leader at the end of that epoch is at most the cumulative loss of the leader at the end of the final epoch.
We prove, by induction on the number of epochs considered, that the total loss of following each epoch's leader during that epoch is at most the cumulative loss, over the same rounds, of the leader at the end of the last epoch considered. This holds since, by definition, the leader at the end of a prefix of epochs performs at least as well as any other expert, and in particular as well as the leader of the subsequent epoch, throughout that prefix. Adding the subsequent epoch's loss term to both sides of the inductive inequality, we obtain Eq. 2. ∎
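In hypothetical notation of our own (writing $\ell_t$ for the loss function on round $t$, $u_r$ for the leader at the end of epoch $r$, and $(\tau_{r-1}, \tau_r]$ for the rounds of epoch $r$), the inductive step can be reconstructed as

```latex
\sum_{r=1}^{s+1} \sum_{t=\tau_{r-1}+1}^{\tau_r} \ell_t(u_r)
  \;\le\; \sum_{t=1}^{\tau_s} \ell_t(u_s) + \sum_{t=\tau_s+1}^{\tau_{s+1}} \ell_t(u_{s+1})
  \;\le\; \sum_{t=1}^{\tau_{s+1}} \ell_t(u_{s+1}) ,
```

where the first inequality is the induction hypothesis and the second holds because $u_s$, being the leader at round $\tau_s$, has cumulative loss over rounds $1,\dots,\tau_s$ no larger than that of $u_{s+1}$.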
Next, we identify a key property of our partition into epochs.
For every epoch, the leader at the end of the epoch is among the leaders that die out during that epoch. In addition, any leader encountered during the lifetime of that leader (i.e., between its first and last appearances in the sequence of leaders) must have died out either in that epoch or in the one preceding it.
Consider an epoch and the leader at its end. To see that this leader dies out during the epoch, recall that the epoch ends right after all the leaders encountered by its beginning have died out, so the leader at the end of the epoch must be a member of that set, and its last appearance as leader is on the last round of the epoch. This also means that it was first encountered no earlier than the preceding epoch (in fact, not even on the first round of that epoch; see Fig. 1). In particular, throughout its lifetime as leader, only experts that died out during the preceding or current epoch could have appeared as leaders. ∎
We are now ready to analyze the regret in a given epoch with respect to the leader at its end. To this end, we consider the instance maintained by the algorithm that covers both the epoch and the one preceding it, together with the number of leaders that died out during these two epochs. The following lemma shows that the regret of the algorithm in the epoch can be bounded in terms of these quantities. Below, we refer to the decisions of this instance on the rounds of the epoch.
The cumulative expected regret of the algorithm throughout an epoch, with respect to the leader at the end of that epoch, is bounded in terms of the number of leaders that died out during that epoch and the preceding one, and of the lengths of these two epochs.
Recall that each such instance has a bounded buffer and a fixed step size. From Lemma 8 we know that the epoch's leader died out during the epoch, which means that it first appeared as leader no earlier than the preceding epoch. Also, the same lemma states that the number of distinct leaders encountered throughout the lifetime of the epoch's leader (including itself) is at most the number of leaders that died out during the two epochs, namely no more than the size of the instance's buffer. Hence, applying Lemma 13 to the epoch, we obtain a regret bound for the instance against the epoch's leader, in which the total number of experts is used to bound the logarithmic term. Finally, noting that the relevant time interval is covered by the two epochs and plugging their lengths into the above bound, we obtain the lemma. ∎
Our final lemma analyzes the top-level MW algorithm, and shows that it obtains low regret against the relevant instance in each epoch.
The difference between the expected cumulative loss of Algorithm 1 during an epoch and the expected cumulative loss of the epoch's instance during the same rounds is suitably bounded.
The top-level algorithm follows MW updates over the maintained instances as meta-experts. Thus, Lemma 11 gives a regret bound of the algorithm against any single instance on the epoch's rounds; using the total number of experts to bound the logarithmic term gives the result. ∎
We now turn to prove the theorem.
Proof of Theorem 6.
First, regarding the running time of the algorithm: note that on each round, Algorithm 1 has to update its maintained instances, where each such update costs a bounded amount of time according to Lemma 12. Hence, the overall runtime per round is as claimed.
where we have used the per-epoch bounds established in the lemmas above. In order to bound the sum on the right-hand side, we first notice that the per-epoch leader counts sum to at most twice the total number of distinct leaders, while the epoch lengths sum to the length of the interval. Hence, using the Cauchy–Schwarz inequality, the sum of the per-epoch terms is at most the square root of the product of these two totals. Combining this with the inequality above and our choice of the step size, and rearranging the left-hand side of the inequality, we obtain the stated regret bound, and the theorem follows. ∎
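For concreteness, the Cauchy–Schwarz step can be spelled out in hypothetical notation of our own: if $k_r$ denotes the leader count and $T_r$ the length associated with epoch $r$, with $\sum_r k_r = O(k)$ (the sets of dying leaders partition the $k$ distinct leaders) and $\sum_r T_r = T$, then

```latex
\sum_{r=1}^{R} \sqrt{k_r T_r}
  \;\le\; \sqrt{\Big( \sum_{r=1}^{R} k_r \Big) \Big( \sum_{r=1}^{R} T_r \Big)}
  \;=\; O\big(\sqrt{kT}\big) .
```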
3.2 Main Algorithm
We are now ready to present our main online algorithm: an algorithm for online learning with optimizable experts that guarantees vanishing expected average regret within the total computation time stated in Theorem 1. The algorithm is presented in Algorithm 2, and the following theorem gives its guarantees.
Theorem 1 (restated).
The expected average regret of Algorithm 2 on any sequence of loss functions over the experts is vanishing, and the algorithm can be implemented to run efficiently per round in the optimizable experts model.
Algorithm 2 relies on the Leaders algorithm and the mixed MW variant discussed below, as well as on yet another variant of the MW method: an amortized variant that is similar to the mixed one. The difference between the two variants lies in their running time per round: the mixed variant, like standard MW, performs an update for every expert on each round, whereas the amortized variant spreads this computation over time and performs only a constant amount of work per round, at the price of requiring proportionally more rounds to converge to the same average regret.
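One natural way to realize such an amortization, sketched below under our own assumptions (this is not the paper's Algorithm 4), is to refresh a single expert's weight per round in round-robin order; this is constant work per round provided the cumulative loss of an expert over a range of rounds can be queried in constant time:

```python
import math

class AmortizedMW:
    """Hypothetical sketch of an amortized MW update: instead of re-weighting
    all n experts every round, refresh one expert per round, round-robin."""

    def __init__(self, n, eta):
        self.n, self.eta = n, eta
        self.w = [1.0] * n
        self.synced = [0] * n  # round up to which each weight is current
        self.cursor = 0

    def step(self, t, cum_loss):
        """cum_loss(i, a, b) = total loss of expert i on rounds a..b-1,
        assumed to cost O(1) per call (e.g., via precomputed prefix sums)."""
        i = self.cursor
        self.w[i] *= math.exp(-self.eta * cum_loss(i, self.synced[i], t))
        self.synced[i] = t
        self.cursor = (i + 1) % self.n
```

Each weight lags at most n rounds behind the observed losses, which is one way to see why an amortized variant needs more rounds to match the regret of the eager update.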
Suppose that the amortized MW variant (see Algorithm 4) is used for prediction with expert advice, against an arbitrary sequence of loss functions over the experts. Then, for a suitable choice of the step size, its sequence of predictions attains low regret on any time interval of bounded length. Moreover, the algorithm can be implemented to run in constant time per round.
Given the Leaders algorithm, the overall idea behind Algorithm 2 is quite simple: first, guess a small set of experts uniformly at random, so that with good probability one of the “top” experts is picked, where experts are ranked according to the last round of the game in which they appear as leaders. (In particular, the best expert in hindsight is ranked first.) The first online algorithm, an instance of the amortized MW variant over the sampled experts, is designed to compete with this top expert up to the point in time where it appears as leader for the last time. At that point, the second algorithm, an instance of Leaders, comes into action and controls the regret until the end of the game. It is able to do so because in that time period there are only few different leaders, and, as we pointed out earlier, Leaders is designed to exploit this fact. The role of the third algorithm, an instance of MW executed on top of the former two as meta-experts, is to combine the two regret guarantees, each in its relevant time interval.
Proof of Theorem 1.
The fact that the algorithm can be implemented to run in the required time per round follows immediately from the running times of the MW variants and of Leaders, each of which meets this per-round budget with the parameters used in Algorithm 2.
We move on to analyze the expected regret. Rank each expert according to the last round of the game in which it appears as leader, ranking experts that are never leaders below all others. List the experts in decreasing order of rank (with ties broken arbitrarily). In words, the first expert on the list is the best expert in hindsight, the second is the expert leading right before the first becomes the sole leader, the third is the expert leading right before the first two become the only leaders, and so on. Using this definition, consider the set of top-ranked experts.
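Concretely, the ranking can be sketched as follows (our own helper, with a hypothetical name; rounds are numbered from 1, and experts that never appear as leaders are simply omitted here, whereas the text ranks them lowest):

```python
def top_leaders(leaders, m):
    """Rank experts by the last round on which they lead, and return the
    m highest-ranked ones in decreasing order of that round."""
    last = {}  # expert -> last round on which it is the leader
    for t, i in enumerate(leaders, start=1):
        last[i] = t
    order = sorted(last, key=lambda i: last[i], reverse=True)
    return order[:m]
```

For example, on the leader sequence `[1, 2, 1, 3]`, expert 3 is the best in hindsight and expert 1 is ranked second.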
First, consider the random set of sampled experts. We claim that with high probability, this set contains at least one of the top leaders. Indeed, a direct calculation shows that the sample intersects the set of top-ranked experts with at least a constant probability. As a result, it is enough to upper bound the expected regret of the algorithm for any fixed realization of the sample that intersects the top set: in the event that the intersection is empty, which occurs with the complementary probability, the regret can be at most the number of rounds, and thus ignoring these realizations can only affect the expected average regret by a small additive term. Hence, in what follows we fix an arbitrary realization of the sample that intersects the top set, and bound the expected regret of the algorithm.
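Assuming, for illustration, that the sample $S$ consists of $\lceil\sqrt{N}\rceil$ experts drawn independently and uniformly at random from the $N$ experts, and that the top set also has size $\lceil\sqrt{N}\rceil$ (our own instantiation of the calculation), we would get

```latex
\Pr\big[ S \cap \mathrm{TOP} = \emptyset \big]
  \;=\; \Big( 1 - \frac{|\mathrm{TOP}|}{N} \Big)^{|S|}
  \;\le\; \Big( 1 - \frac{1}{\sqrt{N}} \Big)^{\sqrt{N}}
  \;\le\; \frac{1}{e} ,
```

so the sample would hit the top set with probability at least $1 - 1/e$.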
Given such a realization, we can pick an expert in the intersection and consider the last round in which it appears as leader. Since this expert is among the sampled experts, the amortized MW instance running over the sample, with the parameters set in Algorithm 2, guarantees (recall Lemma 12) that the expected regret of its decisions against this expert, up to that round, is small.
On the other hand, observe that there are only few different leaders throughout the remaining time interval, which follows from the chosen expert's rank among the top experts. Thus, in light of Theorem 6, the decisions of the Leaders instance attain low regret on that interval.
for any fixed realization of the sample that intersects the top set. As we explained before, the overall expected regret is larger than the right-hand side of Eq. 8 by at most a small additive term, and dividing through by the number of rounds gives the theorem. ∎
3.3 Multiplicative Weights Algorithms
We end the section by presenting the variants of the Multiplicative Weights (MW) method used in our algorithms above. For an extensive survey of the basic MW method and its applications, see Arora et al. (2012).
3.3.1 Mixed MW
The first variant, the mixed MW algorithm, is designed so that its regret on any time interval of bounded length is controlled. The standard MW algorithm does not have this property: the weight it assigns to an expert may become exceedingly small if the expert performs badly, so that even if the expert later starts making good decisions, it cannot regain non-negligible weight quickly enough.
Our modification of the algorithm (see Algorithm 3) mixes a fixed additive weight into the update, for all experts on each round, so as to keep the weights bounded away from zero at all times. We note that this is not equivalent to the more standard modification of mixing the uniform distribution into the sampling distribution of the algorithm: in our variant, it is essential that the mixed weights are fed back into the update so as to control the weights themselves.
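The following minimal sketch illustrates the mixed update (the exact form of the mixed-in term in the paper's Algorithm 3 may differ; the function name and the constant `gamma / n` are our own choices): a multiplicative step followed by a fixed additive weight for every expert, fed back into the stored weights so that they stay bounded away from zero.

```python
import math

def mixed_mw_weights(losses, eta, gamma):
    """Run the mixed multiplicative-weights update on a loss sequence.
    Each round: multiply each weight by exp(-eta * loss), then add the
    fixed weight gamma/n, which is retained in the stored weights
    (not merely mixed into the sampling distribution)."""
    n = len(losses[0])
    w = [1.0] * n
    for loss in losses:
        w = [wi * math.exp(-eta * li) + gamma / n for wi, li in zip(w, loss)]
    return w
```

On each round the algorithm would sample an expert proportionally to these weights; since every weight is at least `gamma / n` after each update, an expert that starts performing well can regain non-negligible weight quickly.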