Fast rates with high probability in exp-concave statistical learning
Abstract
We present an algorithm for the statistical learning setting with a bounded exp-concave loss in $d$ dimensions that obtains excess risk with probability at least $1-\delta$. The core technique is to boost the confidence of recent in-expectation excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability $1-\delta$ the standard ERM method obtains excess risk . We further show that a regret bound for any online learner in this setting translates into a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present two high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. The first bound is achieved by a purely exponential weights type algorithm, obtains a nearly optimal rate, and has no explicit dependence on the Lipschitz continuity of the loss. The second bound requires Lipschitz continuity but obtains the optimal rate.
1 Introduction
In the statistical learning problem, a learning agent observes a sample of points drawn i.i.d. from an unknown distribution over an outcome space. The agent then seeks an action in an action space that minimizes its expected loss, or risk, as measured by a loss function. Several recent works have studied this problem in the situation where the loss is exp-concave and bounded and the action space is a convex subset of Euclidean space. Mahdavi et al. (2015) were the first to show that there exists a learner for which, with probability at least $1-\delta$, the excess risk decays at the rate . Via new algorithmic stability arguments applied to empirical risk minimization (ERM), Koren and Levy (2015) and Gonen and Shalev-Shwartz (2016) discarded the logarithmic factor to obtain a rate of , but their bounds only hold in expectation. All three works highlighted the open problem of obtaining a high probability excess risk bound at the same fast rate. Whether this is possible is far from a trivial question in light of a result of Audibert (2008): when learning over a finite class with bounded exp-concave losses, the progressive mixture rule (a Cesàro mean of pseudo-Bayesian estimators) with a suitable learning rate obtains expected excess risk , but, for any learning rate, these rules suffer from severe deviations of order $1/\sqrt{n}$.
This work resolves the high probability question: we present a learning algorithm with an excess risk bound (Corollary 5) which attains this rate with probability at least $1-\delta$. ERM also obtains excess risk, a fact that was apparently not widely known although it follows from results in the literature. To vanquish the logarithmic factor at a small price, it suffices to run a two-phase ERM method based on a confidence-boosting device. The key to our analysis is connecting exp-concavity to the central condition of van Erven et al. (2015), which in turn implies a Bernstein condition. We then exploit the variance control of the excess loss random variables afforded by the Bernstein condition to boost the "boosting the confidence" trick of Schapire (1990).
In the next section, we give a brief history of the work in this area. In Section 3, we formally define the setting and describe the previous in-expectation bounds. We present the results for standard ERM and our confidence-boosted ERM method in Sections 4 and 5 respectively. Section 6 extends the results of Kakade and Tewari (2009) to exp-concave losses, showing that under a bounded loss assumption a regret bound for any online exp-concave learner transfers to a high probability excess risk bound via an online-to-batch conversion. This extension comes at no additional technical price: it is a consequence of the variance control implied by exp-concavity, control leveraged by Freedman's inequality for martingales to obtain a fast rate with high probability. This result continues the line of work of Cesa-Bianchi et al. (2001) and Kakade and Tewari (2009) and accordingly concerns the generalization ability of online exp-concave learning algorithms. One powerful consequence of this result is a new guarantee for model selection aggregation: we present a method (Section 7) for the model selection aggregation problem over finite classes with exp-concave losses that obtains a nearly optimal rate with high probability, with no explicit dependence on the Lipschitz continuity of the loss function. All previous bounds of which we are aware have explicit dependence on the Lipschitz continuity of the problem. Moreover, the bound is a quantile-like bound in that it improves with the prior measure on a subclass of nearly optimal hypotheses.
2 A history of exp-concave learning
Learning under exp-concave losses with finite classes dates back to the seminal work of Vovk (1990) and the game of prediction with expert advice, with the first explicit treatment for exp-concave losses due to Kivinen and Warmuth (1999). Vovk (1990) showed that if a game is mixable (which is implied by exp-concavity), one can guarantee that the worst-case individual sequence regret against the best of experts is at most . An online-to-batch conversion then implies an in-expectation excess risk bound of the same order in the stochastic i.i.d. setting.
Audibert (2008) showed that when learning over a finite class with exp-concave losses, no progressive mixture rule can obtain a high probability excess risk bound of order better than $1/\sqrt{n}$. ERM fares even worse, with a lower bound of in expectation (Juditsky et al., 2008). Audibert (2008) overcame the deviations shortcoming of progressive mixture rules via his empirical star algorithm, which first runs ERM on the class, obtaining an initial estimator, and then runs ERM a second time on the star convex hull of the class with respect to this estimator. This algorithm achieves the fast rate with high probability; the rate was only proved for squared loss with targets and predictions in a bounded interval, but it was claimed that the result can be extended to general, bounded losses satisfying smoothness and strong convexity as a function of the predictions. Under similar assumptions, Lecué and Rigollet (2014) proved that another method, $Q$-aggregation, also obtains this rate but can further take into account a prior distribution.
For convex classes, such as the ones we consider here, Hazan et al. (2007) designed the Online Newton Step (ONS) and Exponentially Weighted Online Optimization (EWOO) algorithms. Both have logarithmic regret over rounds, which, after online-to-batch conversion, yields excess risk in expectation. Until recently, it was unclear whether one could obtain a similar high probability result; however, Mahdavi et al. (2015) showed that an online-to-batch conversion of ONS enjoys excess risk bounded by with high probability. While this resolved the statistical complexity of learning up to logarithmic factors, ONS (though efficient) can have a high computational cost even in simple cases like learning over the unit ball, and in general its complexity may be as high as per projection step (Hazan et al., 2007; Koren, 2013).
If one hopes to eliminate the logarithmic factor, the additional hardness of the online setting makes it unlikely that one can proceed via an online-to-batch conversion approach. Moreover, computational considerations suggest circumventing ONS anyway. In this vein, as we discuss in the next section, both Koren and Levy (2015) and Gonen and Shalev-Shwartz (2016) recently established in-expectation excess risk bounds, for a lightly penalized ERM algorithm and for ERM itself respectively, without resorting to an online-to-batch conversion. Notably, both works developed arguments based on algorithmic stability, thereby circumventing the typical reliance on chaining-based arguments to discard logarithmic factors. Table 1 summarizes what is known and our new results.
Table 1: Summary of known results and our new results.

Algorithm               | Convex (Expectation) | Convex (Probability) | Finite (Expectation) | Finite (Probability)
Progressive mixture     | —                    | —                    |                      |
Empirical star / Q-agg. | —                    | —                    |                      |
Online Newton Step      |                      |                      | —                    | —
EWOO                    |                      |                      | —                    | —
ERM                     |                      |                      |                      | —
Boosted ERM             | —                    |                      | —                    | —
3 Rate-optimal in-expectation bounds
We now describe the setting more formally.
In this work the class is always assumed to be convex, except in Section 7, which studies the model selection aggregation problem for countable classes.
We say a function has diameter if .
Assume for each that the loss map is exp-concave, i.e. is concave over .
We further assume, for each outcome , that the loss has diameter .
We adopt the notation .
Given a sample of points drawn i.i.d. from an unknown distribution over , our objective is to select a hypothesis that minimizes the excess risk . We assume that there exists satisfying ; this assumption was also made by Gonen and Shalev-Shwartz (2016) and Kakade and Tewari (2009).
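For readability, we collect the standing assumptions in one display. The symbols below ($\mathcal{F}$, $\mathcal{Z}$, $\ell$, $\alpha$, $B$, $R$, $f^*$) are our own shorthand for the objects just described, not necessarily the paper's original notation.

```latex
% Standing assumptions (assumed notation): F a convex subset of R^d, Z an outcome space,
% loss ell : F x Z -> R, P the unknown distribution over Z.
\begin{align*}
&\text{($\alpha$-exp-concavity)} && f \mapsto \exp\bigl(-\alpha\,\ell(f,z)\bigr) \text{ is concave on } \mathcal{F} \text{ for every } z \in \mathcal{Z},\\
&\text{(bounded diameter)}       && \sup_{f,g \in \mathcal{F}} \bigl|\ell(f,z) - \ell(g,z)\bigr| \le B \text{ for every } z \in \mathcal{Z},\\
&\text{(risk and excess risk)}   && R(f) = \mathbb{E}_{Z \sim P}\bigl[\ell(f,Z)\bigr], \qquad
   f^* \in \operatorname*{arg\,min}_{f \in \mathcal{F}} R(f), \qquad \text{excess risk of } f = R(f) - R(f^*).
\end{align*}
```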
Let be an algorithm, defined for a function class as a mapping ; we drop the subscript when it is clear from the context. Our starting point will be an algorithm which, when provided with a sample of i.i.d. points, satisfies an expected risk bound of the form
(1) 
Koren and Levy (2015) and Gonen and Shalev-Shwartz (2016) both established in-expectation bounds of the form (1) that obtain a rate of in the case when , each in a slightly different setting. Koren and Levy (2015) assume, for each outcome , that the loss has diameter and is smooth for some , i.e. for all , the gradient is Lipschitz:
They also use a 1-strongly convex regularizer with diameter . Under these assumptions, they show that ERM run with the weighted regularizer has expected excess risk at most
It is not known whether the smoothness assumption is necessary to eliminate the logarithmic factor.
Gonen and Shalev-Shwartz (2016) work in a slightly different setting that captures all known exp-concave losses. They assume that the loss is of the form , for . They further assume, for each , that the mapping is strongly convex and Lipschitz, but they do not assume smoothness. They show that standard, unregularized ERM has expected excess risk at most
where ; the purpose of the rightmost expression is to ensure that the loss is exp-concave. Although this bound is ostensibly independent of the loss's diameter , the dependence may be masked by : for logistic loss, , while squared loss admits the more favorable .
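To make the generalized linear structure concrete, one standard way to write it is sketched below; the symbols $\phi$, $w$, $x$, and $y$ are our own stand-ins rather than the notation of Gonen and Shalev-Shwartz (2016).

```latex
% Generalized linear form (illustrative, assumed notation):
\ell\bigl(w, (x, y)\bigr) \;=\; \phi\bigl(\langle w, x \rangle,\, y\bigr), \qquad w \in \mathcal{F} \subseteq \mathbb{R}^d,
```

where, for each outcome, the scalar map $t \mapsto \phi(t, y)$ is strongly convex and Lipschitz on the relevant bounded domain; logistic loss $\phi(t,y) = \log(1 + e^{-yt})$ and squared loss $\phi(t,y) = (t - y)^2$ are the usual examples.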
4 A high probability bound for ERM
As a warm-up to proving a high probability excess risk bound, we first show that ERM itself obtains excess risk with high probability; here and elsewhere, if is omitted the dependence is . That ERM satisfies such a bound was largely implicit in the literature, and so we make this result explicit. The closest such result, Theorem 1 of Mahdavi and Jin (2014), does not apply as it relies on an additional assumption (see their Assumption (I)). Our assumptions subtly differ from those elsewhere in this work. We assume that satisfies and that, for each outcome , the loss is Lipschitz and . The first two assumptions already imply the last for . All these assumptions were made by Mahdavi and Jin (2014) and Koren and Levy (2015), sometimes implicitly, and while Gonen and Shalev-Shwartz (2016) only make the Lipschitz assumption, for all known exp-concave losses the constant depends on (which itself typically will depend on ).
The first, critical observation is that exp-concavity implies good concentration properties of the excess loss random variable. This is easiest to see by way of the central condition, which the excess loss satisfies. This concept, studied by van Erven et al. (2015) and first introduced by van Erven et al. (2012) as "stochastic mixability", is defined as follows.
Definition (Central condition)
We say that satisfies the central condition for some if there exists a comparator such that, for all ,
Jensen's inequality implies that if this condition holds, the corresponding comparator must be a risk minimizer. It is known (van Erven et al., 2015, Section 4.2.2) that our setting satisfies the central condition.
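In the shorthand introduced above, the central condition of van Erven et al. (2015) can be written as follows; the parameter $\eta > 0$ is our own symbol for the constant in the definition.

```latex
% eta-central condition (standard form, assumed notation):
\exists\, f^* \in \mathcal{F} \ \text{such that, for all } f \in \mathcal{F}: \qquad
\mathbb{E}_{Z \sim P}\Bigl[\exp\bigl(\eta\,\bigl(\ell(f^*, Z) - \ell(f, Z)\bigr)\bigr)\Bigr] \;\le\; 1 .
```

Applying Jensen's inequality to this display gives $\mathbb{E}[\ell(f^*, Z)] \le \mathbb{E}[\ell(f, Z)]$ for every $f$, which is exactly the observation that the comparator must be a risk minimizer.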
Lemma
Let be convex. Take to be a loss function , and assume that, for each , the map is exp-concave. Then, for all distributions over , if there exists an that minimizes the risk under , then satisfies the central condition.
With the central condition in our grip, Theorem 7 of Mehta and Williamson (2014) directly implies an bound for ERM; however, a far simpler version of that result yields much smaller constants. The proof of the version below, in the appendix for completeness, only makes use of an net of in the norm, which induces an net of in the sup norm.
Theorem
Let be a convex set satisfying . Suppose, for all , that the loss is exp-concave and Lipschitz. Let . Then if , with probability at least $1-\delta$, ERM learns a hypothesis with excess risk bounded as
(2) 
5 Boosting the confidence for high probability bounds
The two existing excess risk bounds mentioned in Section 3 decay at the rate . A naïve application of Markov’s inequality unsatisfyingly yields excess risk bounds of order that hold with probability .
In this section, we present and analyze our meta-algorithm, ConfidenceBoost, which boosts these in-expectation bounds to hold with probability at least $1-\delta$ at the price of a multiplicative factor.
This method is essentially the "boosting the confidence" trick of Schapire (1990).
Our analysis of ConfidenceBoost actually applies more generally than the exp-concave learning setting, requiring only that satisfy an in-expectation bound of the form (1), the loss have bounded diameter for each , and the problem satisfy a Bernstein condition.
Definition (Bernstein condition)
We say that satisfies the Bernstein condition for some and if there exists a comparator such that, for all ,
Before getting to ConfidenceBoost, we first show that the exp-concave learning setting satisfies the Bernstein condition with the best exponent, and so is a special case of the more general setting we analyze. Recall from Lemma 4 that the central condition holds for . The next lemma, which adapts a result of van Erven et al. (2015), shows that the central condition, together with boundedness of the loss, implies that a Bernstein condition holds.
Lemma (Central to Bernstein)
Let be a random variable taking values in . Assume that . Then .
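In the same assumed notation, the Bernstein condition of the definition above and the qualitative content of the lemma take the following standard forms; the exponent $\beta \in (0,1]$, the constant $B'$, and the constant $c_{\eta,b}$ are placeholders rather than the paper's exact values.

```latex
% Bernstein condition with exponent beta and constant B' (assumed notation):
\mathbb{E}\Bigl[\bigl(\ell(f,Z) - \ell(f^*,Z)\bigr)^2\Bigr]
  \;\le\; B'\,\Bigl(\mathbb{E}\bigl[\ell(f,Z) - \ell(f^*,Z)\bigr]\Bigr)^{\beta}
  \qquad \text{for all } f \in \mathcal{F}.

% Central-to-Bernstein (shape of the lemma): if the excess loss X = ell(f,Z) - ell(f*,Z)
% takes values in [-b, b] and satisfies E[exp(-eta X)] <= 1, then its second moment is
% controlled by its mean, i.e. the Bernstein condition holds with the best exponent beta = 1:
\mathbb{E}\bigl[X^2\bigr] \;\le\; c_{\eta, b}\,\mathbb{E}[X], \qquad
c_{\eta, b} \ \text{a constant depending only on } \eta \text{ and } b.
```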
Boosting the boosting-the-confidence trick.
First, consider running on a sample of i.i.d. points. The excess risk random variable is nonnegative, and so Markov’s inequality and the expected excess risk being bounded by imply that
Now, let be independent samples, each of size . Running on each sample yields . Applying Markov’s inequality as above, combined with independence, implies that with probability at least there exists such that . Let us call this good event good.
Our quest is now to show that on event good, we can identify any of the hypotheses approximately satisfying , where by "approximately" we mean up to some slack that weakens the order of our resulting excess risk bound by a multiplicative factor of at most . As we will see, it suffices to run ERM over this finite subclass using a fresh sample. The proposed meta-algorithm is presented in Algorithm 1.
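Since the pseudocode of Algorithm 1 is not reproduced here, the following sketch illustrates the two-phase structure just described. The function names, the equal-fold split, and the half-and-half division of the sample between the two phases are our own illustrative choices, not the paper's exact parameter settings.

```python
import numpy as np

def confidence_boost(base_alg, erm, sample, k):
    """Two-phase 'boosting the confidence' meta-algorithm (illustrative sketch).

    Phase 1: split part of the data into k folds and run the in-expectation
             learner `base_alg` on each fold, producing k candidate hypotheses.
    Phase 2: run ERM over the finite set of candidates on held-out data.
    """
    n = len(sample)
    split = n - n // 2                      # reserve roughly half the data for phase 2 (illustrative)
    phase1_data, phase2_data = sample[:split], sample[split:]

    # Phase 1: k independent runs of the base algorithm on disjoint folds.
    folds = np.array_split(phase1_data, k)
    candidates = [base_alg(fold) for fold in folds]

    # Phase 2: ERM restricted to the finite candidate set, on fresh data.
    return erm(candidates, phase2_data)
```

The fresh phase-2 sample is what allows the Bernstein-condition argument below to push the resolution of the final ERM step beyond what a plain Hoeffding-based selection would give.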
Analysis.
From here on out, we treat the initial sample of size as fixed and unhat the estimators above, referring to them as . Without loss of generality, we further assume that they are sorted in order of increasing risk (breaking ties arbitrarily). Our goal now is to show that running ERM on the finite class yields low excess risk with respect to comparator . A typical analysis of the boosting-the-confidence trick would apply Hoeffding's inequality to select a risk minimizer optimal to resolution , but this is not good enough here. As a further boost to the trick, this time with respect to its resolution, we will establish that a Bernstein condition holds over a particular subclass of with high probability, which will in turn imply that ERM obtains excess risk over .
We first establish an approximate Bernstein condition for . Since for all , from the Bernstein condition,
where the last step follows because the map is concave and hence subadditive. We call this bound an approximate Bernstein condition because, on event good, for all :
Define the class as the set . Then with probability , the problem satisfies the Bernstein condition.
We now analyze the outcome of running ERM on using a fresh sample of points. The next lemma, a well-known result, shows that ERM performs favorably under a Bernstein condition.
Lemma
Let be a finite class of functions and assume without loss of generality that is a risk minimizer. Let be a subclass for which, for all :
and almost surely. Then, with probability at least , ERM run on will not select any function in whose excess risk satisfies
Applying Lemma 5 with and , with probability at least over the fresh sample, ERM selects a function falling in one of two cases:

;

(using ).
We now run ConfidenceBoost with on a sample of points, with and ; for simplicity, we assume that divides . Taking the failure probability for the ERM phase to be , ConfidenceBoost admits the following guarantee.
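Concretely, the boosting-the-confidence choice of the number of phase-1 runs grows only logarithmically in $1/\delta$: if each phase-1 run fails with probability at most $1/2$, then $k$ independent runs all fail with probability at most $2^{-k}$. The constants in the snippet below are illustrative only and reuse the hypothetical `confidence_boost` sketch from above.

```python
import math

delta = 0.05
# Enough independent phase-1 runs to drive the phase-1 failure probability below delta / 2.
k = max(1, math.ceil(math.log2(2.0 / delta)))
print(f"number of phase-1 runs k = {k} for delta = {delta}")

# final_hypothesis = confidence_boost(base_alg, erm, sample, k)   # base_alg, erm, sample supplied by the user
```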
Theorem
Let satisfy the Bernstein condition, and assume for all that the loss has diameter . Impose any necessary assumptions such that algorithm obtains a bound of the form (1). Then, with probability at least , ConfidenceBoost run with , , and learns a hypothesis with excess risk at most
(3) 
The next result for exp-concave learning is immediate.
Remarks.
As we saw from Lemmas 4 and 5, in the exp-concave setting a Bernstein condition holds for the class . A natural question is whether one could use this Bernstein condition to show directly a high probability fast rate of for ERM. Indeed, under strong convexity (which is strictly stronger than exp-concavity), Sridharan et al. (2009) show that a similar bound for ERM is possible; however, they use strong convexity to bound a localized complexity. It is unclear whether exp-concavity can be used to bound a localized complexity, and the Bernstein condition alone seems insufficient; such a bound may be possible via ideas from the local norm analysis of Koren and Levy (2015). While we think controlling a localized complexity from exp-concavity is a very interesting and worthwhile direction, we leave this to future work, and for now only conjecture that ERM also enjoys excess risk bounded by with high probability. This conjecture is by analogy to the empirical star algorithm of Audibert (2008), which for convex reduces to ERM itself; note that the conjectured effect of is additive rather than multiplicative.
6 Online-to-batch conversion
The present section's purpose is to show that if one is willing to accept an additional logarithmic factor in a high probability bound, then it is sufficient to use an online-to-batch conversion of an online exp-concave learner whose worst-case cumulative regret (over rounds) is logarithmic in . Using such a conversion, it is easy to get an excess risk bound with the additional factor that holds in expectation. The key difficulty is making such a bound hold with high probability. This result provides an alternative to the high probability result for ERM in Section 4.
Mahdavi et al. (2015) previously considered an online-to-batch conversion of ONS and established the first explicit high probability excess risk bound in the exp-concave statistical learning setting. Their analysis is elegant but seems to be intimately coupled to ONS; it is consequently unclear whether their analysis can be used to obtain excess risk bounds for online-to-batch conversions of other online exp-concave learners. This leads to our next point and a new path: it is possible to transfer regret bounds to high probability excess risk bounds via online-to-batch conversion for general online exp-concave learners. Our analysis builds strongly on the analysis of Kakade and Tewari (2009) in the strongly convex setting.
We first consider a different, related setting: online convex optimization (OCO) under a bounded, strongly convex loss that is Lipschitz with respect to the action. An OCO game unfolds over rounds. An adversary first selects a sequence of convex loss functions . In round , the online learner plays , the environment subsequently reveals cost function , and the learner suffers loss . Note that the adversary is oblivious, and so the learner does not necessarily need to randomize. Because we are interested in analyzing the statistical learning setting, we constrain the adversary to play a sequence of points , inducing cost functions .
Consider an online learner that sequentially plays actions in response to , so that depends on . The (cumulative) regret is defined as
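For reference, with $f_t$ denoting the learner's action in round $t$ and $z_t$ the point revealed in that round (our notation), the cumulative regret is the usual quantity

```latex
\mathrm{Regret}_n \;=\; \sum_{t=1}^{n} \ell(f_t, z_t) \;-\; \min_{f \in \mathcal{F}} \sum_{t=1}^{n} \ell(f, z_t).
```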
When the losses are bounded, strongly convex, and Lipschitz, Kakade and Tewari (2009) showed that if an online algorithm has regret on an i.i.d. sequence , online-to-batch conversion by simple averaging of the iterates admits the following guarantee.
Theorem (Cor. 5, Kakade and Tewari (2009))
For all , assume that is bounded by , strongly convex, and Lipschitz. Then with probability at least the action satisfies excess risk bound
Under various assumptions, there are OCO algorithms that obtain worst-case regret (under all sequences ) . For instance, Online Gradient Descent (Hazan et al., 2007) admits the regret bound , where is an upper bound on the gradient.
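The conversion itself is simple averaging of the online iterates over an i.i.d. sample. The sketch below is a generic wrapper: `online_learner` stands in for ONS, EWOO, or any other algorithm, and the predict/update interface is our own illustrative assumption rather than any particular library's API.

```python
import numpy as np

def online_to_batch(online_learner, sample):
    """Run an online learner over an i.i.d. sample and return the averaged iterate."""
    iterates = []
    for z in sample:
        f_t = online_learner.predict()   # action played in round t
        iterates.append(f_t)
        online_learner.update(z)         # point z_t revealed; the learner updates
    # Simple averaging of the iterates, as in the conversion of Kakade and Tewari (2009).
    return np.mean(np.asarray(iterates), axis=0)
```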
What if we relax strong convexity to exp-concavity? As we will see, it is possible to extend the analysis of Kakade and Tewari (2009) to exp-concave losses. Of course, such a regret-to-excess-risk bound conversion is useful only if we have online algorithms and regret bounds to start with. Indeed, at least two such algorithms and bounds exist, due to Hazan et al. (2007):

ONS, with , where is a bound on the gradient and is a bound on the diameter of the action space.

Exponentially Weighted Online Optimization (EWOO), with . The better regret bound comes at the price of not being computationally efficient. EWOO can be run in randomized polynomial time, but the regret bound then holds only in expectation (which is insufficient for an online-to-batch conversion); its play rule is recalled after this list.
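For completeness, EWOO's play rule is a continuous exponential-weights average over the action set; in our notation, with $\alpha$ the exp-concavity parameter,

```latex
% EWOO play at round t (Hazan et al., 2007), in assumed notation:
f_t \;=\; \frac{\displaystyle \int_{\mathcal{F}} f \, \exp\Bigl(-\alpha \sum_{s=1}^{t-1} \ell(f, z_s)\Bigr)\, df}
               {\displaystyle \int_{\mathcal{F}} \exp\Bigl(-\alpha \sum_{s=1}^{t-1} \ell(f, z_s)\Bigr)\, df}.
```

Computing these integrals exactly is what makes the method computationally inefficient; the randomized polynomial-time implementation mentioned above approximates them by sampling.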
We now show how to extend the analysis of Kakade and Tewari (2009) to exp-concave losses. While similar results can be obtained from the work of Mahdavi et al. (2015) for the specific case of ONS, our analysis is agnostic to the base algorithm. A particular consequence is that our analysis also applies to EWOO, which, although highly impractical, offers a better regret bound. Moreover, our analysis applies to any future online learning algorithms which may have improved guarantees and computational complexities. The key insight is that exp-concavity implies a variance inequality similar to Lemma 1 of Kakade and Tewari (2009), a pivotal result of that work that unlocks Freedman's inequality for martingales (Freedman, 1975). Let denote the sequence .
Lemma (Conditional variance control)
Define the martingale difference sequence
Then 
Proof
Observe that . Treating the sequence as fixed and also treating as a fixed parameter , the above conditional variance equals , where the randomness lies entirely in . Then, Lemma 5 implies that .
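The structure of the argument, written in our assumed notation, is the following; the constant is the one supplied by the central-to-Bernstein lemma and is left symbolic here.

```latex
% Martingale difference sequence underlying the proof (assumed notation):
X_t \;=\; \bigl(R(f_t) - R(f^*)\bigr) \;-\; \bigl(\ell(f_t, Z_t) - \ell(f^*, Z_t)\bigr),
\qquad \mathbb{E}\bigl[X_t \mid Z_{1:t-1}\bigr] = 0.

% Conditional variance control from exp-concavity (shape of the lemma):
\operatorname{Var}\bigl[X_t \mid Z_{1:t-1}\bigr]
  \;\le\; \mathbb{E}\Bigl[\bigl(\ell(f_t, Z_t) - \ell(f^*, Z_t)\bigr)^2 \,\Big|\, Z_{1:t-1}\Bigr]
  \;\le\; c_{\eta, b}\,\bigl(R(f_t) - R(f^*)\bigr).
```

Summing over $t$ and applying Freedman's inequality to $\sum_t X_t$ then converts the online learner's actual regret into a high probability bound on $\sum_t \bigl(R(f_t) - R(f^*)\bigr)$, which is the step retraced from Kakade and Tewari (2009).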
The next corollary follows by retracing the proof of Theorem 2 of Kakade and Tewari (2009).
Corollary
For all , let be bounded by and exp-concave with respect to the action . Then with probability at least , for any , the excess risk of is at most
In particular, an online-to-batch conversion of EWOO yields excess risk of order
By proceeding similarly one can get a guarantee for ONS, under the additional assumptions that has bounded diameter and that, for all , the gradient has bounded norm.
Obtaining excess risk.
The worst-case regret bounds in this online setting have a logarithmic factor, but when the environment is stochastic and the distribution satisfies some notion of easiness the actual regret can be . In such situations the excess risk similarly can be , because our excess risk bounds depend not on worst-case regret bounds but rather on the actual regret. We briefly explore one scenario where such an improvement is possible. Suppose that the loss is also smooth; then, in situations when the cumulative loss of is small, the analysis of Orabona et al. (2012, Theorem 1) for ONS yields a more favorable regret bound: they show a regret bound of order . As a simple example, consider the case when the problem is realizable in the sense that almost surely. Then the regret bound is constant and the rate with respect to for the excess risk in Corollary 6 is .
7 Model selection aggregation
In the model selection aggregation problem for exp-concave losses, we are given a countable class of functions from an input space to an output space and a loss ; for each , the mapping is exp-concave. The loss is a supervised loss, as in supervised classification and regression, unlike the more general loss functions used in the rest of the paper, which fit into Vapnik's general setting of the learning problem (Vapnik, 1995). The random points now decompose into an input-output pair . We often use the notation . The goal is the same as in the stochastic exp-concave optimization problem, but now the class fails to be convex (and the exp-concavity assumption differs slightly).
After Audibert (2008) showed that the progressive mixture rule cannot obtain fast rates with high probability, several works developed methods that departed from progressive mixture rules and gravitated instead toward ERM-style rules, starting with the empirical star algorithm of
Audibert (2008) and a subsequent method of Lecué and Mendelson (2009) which runs ERM over the convex hull of a data-dependent subclass. Lecué and Rigollet (2014) extended these results to take into account a prior on the class using their $Q$-aggregation procedure. All of these methods require Lipschitz continuity of the loss.
Since is countable and hence not convex, algorithms for stochastic exp-concave optimization do not directly apply. Our approach is to apply stochastic exp-concave optimization to the convex hull of a certain small-cardinality, data-dependent subset of . The first phase, which obtains this subset, makes use of the progressive mixture rule. We offer two variants for the second phase: PM-EWOO (Algorithm 2) and PM-CB (Algorithm 3). In the algorithms, and are online-to-batch conversions of the progressive mixture rule and EWOO respectively, is ConfidenceBoost, and is ERM.
Our interest in PM-EWOO is twofold: (i) it is a "purely" exponential weights type method in that it is based only on the progressive mixture rule and EWOO; (ii) it does not require any Lipschitz assumption on the loss function, unlike all previous work.
Theorem
Let be a countable class and a prior distribution over . Assume that for each the loss is exp-concave. Further assume that for all in the support of . Then with probability at least $1-\delta$, PM-EWOO run with , and learns a hypothesis satisfying
with 
Here, is the generalized Expected Bayesian Redundancy (Takeuchi and Barron, 1998; Grünwald, 2012), defined as
for the KL divergence. The bound can be rewritten as a quantile-like bound; for all :
where . This bound enjoys a quantile-like improvement when is small. For instance, if there is a set of large prior measure which has excess risk close to , then Theorem 7 pays for the complexity; in contrast, Theorem A of Lecué and Rigollet (2014) pays a higher complexity price of .
Lastly, we provide a simpler bound by specializing to the case of a prior concentrated entirely on . Then
Theorem 7 does not explicitly require Lipschitz continuity of the loss, but the rate is suboptimal due to the extra factor. The next result obtains the correct rate by using ConfidenceBoost for the second stage of the procedure.
Theorem
Take the assumptions of Theorem 7, but instead assume that for each the loss is strongly convex and Lipschitz (so exp-concavity holds). Then with probability at least $1-\delta$, PM-CB run with , and learns a hypothesis satisfying
with 
The proofs of these two theorems are nearly identical and are left to the appendix. We sketch a proof here, as it uses a novel reduction of the second phase to a low-dimensional stochastic exp-concave optimization problem. For simplicity, we restrict to the case of finite , uniform prior , and competing with . A naïve approach is to run a stochastic exp-concave optimization method on the convex hull of , but this suffers an excess risk bound scaling as rather than . We instead start with an initial procedure that drastically reduces the set of candidates to a small set. To this end, note that an online-to-batch conversion of the progressive mixture rule run on samples obtains expected excess risk at most . Hence, independent runs yield a hypothesis with the same bound inflated by a factor with probability at least (we assume hereafter that this high probability event holds). At this point, it may seem that we have merely replaced the original problem with an isomorphic one, as we do not know which run yields the desired candidate, and the corresponding subclass is still clearly non-convex. However, by taking the convex hull of this set of predictors and reparameterizing the problem, we arrive at a stochastic exp-concave optimization problem over a low-dimensional simplex; the best predictor in the convex hull is clearly at least as good as the best one in the candidate set. Thus, our analyses of EWOO and ConfidenceBoost apply and the results follow.
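A sketch of the reduction just described is given below, with all names our own: `pm_online_to_batch` stands for the online-to-batch conversion of the progressive mixture rule, and `aggregate_simplex` for EWOO (for PM-EWOO) or ConfidenceBoost (for PM-CB) applied to the reparameterized problem over the probability simplex on the candidates.

```python
import numpy as np

def two_phase_aggregation(pm_online_to_batch, aggregate_simplex, sample, k):
    """Two-phase model selection aggregation (illustrative sketch, not the paper's pseudocode)."""
    n = len(sample)
    split = n - n // 2                      # half for phase 1, half for phase 2 (illustrative)
    folds = np.array_split(sample[:split], k)

    # Phase 1: k independent runs of the progressive mixture rule (online-to-batch),
    # producing k candidate predictors, at least one of which is good with high probability.
    candidates = [pm_online_to_batch(fold) for fold in folds]

    # Phase 2: reparameterize over the simplex; a weight vector w corresponds to the
    # mixture predictor x -> sum_j w_j * candidates[j](x), whose loss is exp-concave in w.
    weights = aggregate_simplex(candidates, sample[split:])
    return lambda x: float(np.dot(weights, [h(x) for h in candidates]))
```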
8 Discussion and Open Problems
We presented the first high probability excess risk bound for exp-concave statistical learning that attains the rate of the known in-expectation bounds. The key to proving this bound was the connection between exp-concavity and the central condition, a connection which suggests that exp-concavity implies a low noise condition. Here, low noise can be interpreted either in terms of the central condition, via the exponential decay of the negative tail of the excess loss random variables, or in terms of the Bernstein condition, via the variance of the excess loss of a hypothesis being controlled by its excess risk. All our results for stochastic exp-concave optimization were based on this low noise interpretation of exp-concavity. In contrast, the previous in-expectation results of Koren and Levy (2015) and Gonen and Shalev-Shwartz (2016) used the geometric/convexity interpretation of exp-concavity, which we further boosted to high probability results using the low noise interpretation. It would be interesting to obtain a high probability result that proceeds purely from a low noise interpretation or purely from a geometric/convexity one.
Results flowing from algorithmic stability often only yield in-expectation bounds, with high probability bounds stemming either from (i) a post-hoc confidence-boosting procedure, typically involving Hoeffding's inequality, which "slows down" fast rate results; or (ii) quite strong stability notions: e.g., uniform stability allows one to apply McDiarmid's inequality to a single run of the algorithm (Bousquet and Elisseeff, 2002). Is it a limitation of algorithmic stability techniques that high probability fast rates seem to be out of reach without a post-hoc confidence-boosting procedure, or are we simply missing the right perspective? One reason to avoid a confidence-boosting procedure is that the resulting bounds suffer from a multiplicative factor rather than the lighter effect of an additive factor in bounds like Theorem 4. As we mentioned earlier, we conjecture that the basic ERM method obtains a high probability rate, and a potential path to showing this rate would be to control a localized complexity as done by Sridharan et al. (2009), but using a more involved argument based on exp-concavity rather than strong convexity.
We also developed high probability quantile-like risk bounds for model selection aggregation, one with an optimal rate and another with a slightly suboptimal rate but no explicit dependence on the Lipschitz continuity of the loss. However, our bound is not yet a full quantile-type bound; it degrades when the gap term is large, while the bound of Lecué and Rigollet (2014) does not have this problem. Yet, our bound provides an improvement when there is a neighborhood around with large prior mass, an improvement which the bound of Lecué and Rigollet does not provide. It is an open problem to obtain a bound with the best of both worlds.
Appendix A Proofs for Stochastic Exp-Concave Optimization
Proof (of Lemma 4)
The exp-concavity of for each implies that, for all and all distributions over :
It therefore holds that for all distributions over , for all distributions over , there exists (from convexity of ) satisfying
This condition is equivalent to stochastic mixability as well as the pseudo-probability convexity (PPC) condition, both defined by van Erven et al. (2015). To be precise, for stochastic mixability, in Definition 4.1 of van Erven et al. (2015), take their and both equal to our , their equal to , and