A note on a confidence bound of Kuzborskij and Szepesvári
In an interesting recent work, Kuzborskij and Szepesvári derived a confidence bound for functions of independent random variables, which is based on an inequality that relates concentration to squared perturbations of the chosen function. Kuzborskij and Szepesvári also established the PAC-Bayes-ification of their confidence bound. Two important aspects of their work are that the random variables may have unbounded range, and need not be identically distributed. The purpose of this note is to advertise/discuss these interesting results, with streamlined proofs. This expository note is written for persons who, metaphorically speaking, enjoy the ‘featured movie’ but prefer to skip the preview sequence.
In an interesting recent work, Kuzborskij and Szepesvári derived a confidence bound for the random variable
$$Z = f(S) - \mathbb{E}[f(S)],$$
where $S = (X_1,\dots,X_n)$ is a size-$n$ random sample composed of independent $\mathcal{X}$-valued random elements $X_1,\dots,X_n$, and $f : \mathcal{X}^n \to \mathbb{R}$ is a measurable function.
Notice, however, that the components $X_1,\dots,X_n$ are not required to be identically distributed: each $X_i$ may be distributed according to a different $P_i$.
Their confidence bound is based on an estimator of the variance of $f(S)$. Recall that McDiarmid’s inequality, which is based on the bounded differences property, relates concentration of $Z$ around zero (its mean) to the sensitivity of $f$ to coordinatewise perturbations (“first-order”). By contrast, the bound of Kuzborskij and Szepesvári relates concentration to squared perturbations (“second-order”), which leads to an inequality based on a variance estimator. The latter resembles a well-known estimator, recalled next.
The variance estimator used in the Efron-Stein inequality.
This is defined as follows:
$$\hat V_{\mathrm{ES}} = \sum_{i=1}^{n} \mathbb{E}\left[\left(f(S) - f(S^{(i)})\right)_+^2 \,\middle|\, S\right], \qquad (1)$$
where $(a)_+ = \max\{a, 0\}$ is the positive part, and the notation $S^{(i)}$ indicates that the $i$th element of $S$ is replaced with $X_i'$, where $X_i'$ is an independent copy of $X_i$. Further details about this estimator, with context and references, can be found in Boucheron et al. [2013].
Problem: In order to prove a confidence bound for $Z$ based on the Efron-Stein estimator, one needs a priori assumptions on the moments of $f(S)$. To avoid this limitation, Kuzborskij and Szepesvári used a modified variance estimator.
The variance estimator used in the Kuzborskij-Szepesvári inequality.
This is defined as follows:
$$\hat V = \sum_{i=1}^{n} \mathbb{E}\left[\left(f(S) - f(S^{(i)})\right)^2 \,\middle|\, X_1,\dots,X_i\right]. \qquad (2)$$
Kuzborskij and Szepesvári called it a “semi-empirical” estimator, because of its dependence on both the sample (through the conditioning on $X_1,\dots,X_i$) and the distribution of the sample (through the expectation over $X_i'$ and $X_{i+1},\dots,X_n$).
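As a minimal numerical sketch of the estimator in Eq. 2 (an illustration not in the note: here $f$ is taken to be the empirical mean and the coordinates are standard normal, so that the estimator admits a closed form to compare against):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 50_000  # sample size, Monte Carlo draws per coordinate

# Illustration only (not from the note): f is the empirical mean and the
# coordinates are independent N(0, 1), so the estimator can be cross-checked
# against a closed form.
S = rng.normal(size=n)

# Eq. 2-style estimator: for each i, average the squared perturbation over
# fresh draws of (X_i', X_{i+1}, ..., X_n), keeping X_1, ..., X_i fixed.
V_hat = 0.0
for i in range(n):
    head = np.broadcast_to(S[: i + 1], (m, i + 1))
    tail = rng.normal(size=(m, n - i - 1))        # fresh X_{i+1}, ..., X_n
    Smat = np.concatenate([head, tail], axis=1)   # m copies of S (tail resampled)
    Smat_i = Smat.copy()
    Smat_i[:, i] = rng.normal(size=m)             # S^(i): X_i replaced by X_i'
    V_hat += np.mean((Smat.mean(axis=1) - Smat_i.mean(axis=1)) ** 2)

# For f = mean, f(S) - f(S^(i)) = (X_i - X_i')/n, so hat V_i = (X_i^2 + 1)/n^2.
closed_form = float(np.sum(S**2 + 1)) / n**2
print(V_hat, closed_form)  # the two agree up to Monte Carlo error
```

For the empirical mean the perturbation $f(S) - f(S^{(i)})$ depends only on $(X_i, X_i')$, which is what makes the closed form available; for a general $f$ only the Monte Carlo route applies.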
The main result of Kuzborskij and Szepesvári is the following high-confidence bound: For any $\delta \in (0,1)$ and any $c > 0$, with probability at least $1-\delta$ one has
$$Z \le \sqrt{2\left(\hat V + c\right)\ln\left(\frac{1}{\delta}\sqrt{\frac{\hat V + c}{c}}\right)}. \qquad (3)$$
Inequality (3) does not require boundedness of the random variables $X_1,\dots,X_n$, nor of the function $f$; the only crucial assumption is independence of the elements in the sample $S$. Observe that inequality (3) basically depends on $\hat V$ and a positive free parameter $c$, which must be selected by the user. For instance, choosing $c = \mathbb{E}[\hat V]$ gives: For any $\delta \in (0,1)$, with probability at least $1-\delta$ one has
$$Z \le \sqrt{2\left(\hat V + \mathbb{E}[\hat V]\right)\ln\left(\frac{1}{\delta}\sqrt{1 + \frac{\hat V}{\mathbb{E}[\hat V]}}\right)}. \qquad (4)$$
Paraphrasing Kuzborskij and Szepesvári: With this particular choice of $c$, the resulting inequality shows a Bernstein-type behavior, in the sense that the upper bound is dominated by the lower-order term whenever $\hat V$ is small enough; and the price for such a simple choice of $c$ is in the logarithmic term.
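The bound can be stress-tested by simulation. A minimal sketch, under assumptions not in the note ($f$ the empirical mean of $n$ i.i.d. standard normal coordinates, for which the estimator has the closed form $\hat V = \sum_i (X_i^2+1)/n^2$), checking that the bound displayed above fails with frequency far below $\delta$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo stress test of the confidence bound, for the toy case
# (not from the note) f = empirical mean of n i.i.d. N(0, 1) coordinates.
# Then Z = mean(X) and the estimator has closed form sum_i (X_i^2 + 1)/n^2.
n, trials, delta = 30, 5000, 0.05
c = 2.0 / n  # free parameter; roughly E[hat V] for this toy case

X = rng.normal(size=(trials, n))
Z = X.mean(axis=1)                       # the gap f(S) - E[f(S)]
V_hat = (X**2 + 1).sum(axis=1) / n**2    # semi-empirical estimator
bound = np.sqrt(2 * (V_hat + c) * np.log(np.sqrt((V_hat + c) / c) / delta))
violations = float(np.mean(Z > bound))
print(violations)  # should sit far below delta = 0.05
```

That the empirical violation rate is far below $\delta$ reflects the slack accumulated in the proof (Jensen, symmetrization, and the Gaussian mixture step), not a flaw in the simulation.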
2 The main result and its proof
Theorem 1. Let $f : \mathcal{X}^n \to \mathbb{R}$ be a measurable function,
let $Z = f(S) - \mathbb{E}[f(S)]$ be the random gap with $S = (X_1,\dots,X_n)$ randomly chosen from a product distribution $P = P_1 \otimes \cdots \otimes P_n$,
and let $\hat V$ be the variance estimator defined in Eq. 2.
(i) For any $c > 0$,
$$\mathbb{E}\left[\sqrt{\frac{c}{\hat V + c}}\,\exp\left(\frac{Z^2}{2\left(\hat V + c\right)}\right)\right] \le 1.$$
(ii) For any $\delta \in (0,1)$, and any $c > 0$, with probability at least $1-\delta$,
$$Z \le \sqrt{2\left(\hat V + c\right)\ln\left(\frac{1}{\delta}\sqrt{\frac{\hat V + c}{c}}\right)}.$$
To discuss the proof of Theorem 1, the following definition will be convenient: A pair of random variables $(A, B)$ is called a canonical pair if $B \ge 0$ and
$$\mathbb{E}\left[\exp\left(\lambda A - \frac{\lambda^2}{2}B^2\right)\right] \le 1 \quad \text{for all } \lambda \in \mathbb{R}.$$
See de la Peña et al. [2009, Section 10.2] for further discussion on this condition, and its connection with the so-called self-normalized processes.
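For a concrete instance of the canonical condition: if $A \sim \mathcal{N}(0, \sigma^2)$ and $B = \sigma$ is a constant, then the Gaussian moment generating function $\mathbb{E}[e^{\lambda A}] = e^{\lambda^2\sigma^2/2}$ gives the condition with equality:

```latex
\mathbb{E}\left[\exp\left(\lambda A - \tfrac{\lambda^2}{2} B^2\right)\right]
= \exp\left(\tfrac{\lambda^2 \sigma^2}{2}\right)\exp\left(-\tfrac{\lambda^2 \sigma^2}{2}\right)
= 1 \quad \text{for all } \lambda \in \mathbb{R}.
```

In general the condition asks only for an inequality, which is what allows $B$ to be random, as in Lemma 2 below.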
A key step of the proof of Theorem 1 consists of establishing that $\left(Z, \sqrt{\hat V}\right)$ is a canonical pair. We state this as a lemma for convenient reference:
Lemma 2. $\left(Z, \sqrt{\hat V}\right)$ is a canonical pair.
The rest of the proof of Theorem 1 relies on the following technical result, which essentially gives subgaussian tail probabilities for some functions of a canonical pair (cf. de la Peña et al. [2009, Theorem 12.4 & Corollary 12.5]):
Lemma 3. Suppose $(A, B)$ is a canonical pair. Then:
(i) For any $c > 0$,
$$\mathbb{E}\left[\sqrt{\frac{c}{B^2 + c}}\,\exp\left(\frac{A^2}{2\left(B^2 + c\right)}\right)\right] \le 1.$$
(ii) For any $\delta \in (0,1)$ and $c > 0$, with probability at least $1-\delta$,
$$A \le \sqrt{2\left(B^2 + c\right)\ln\left(\frac{1}{\delta}\sqrt{\frac{B^2 + c}{c}}\right)}.$$
The proof of Theorem 1 then follows merely by combining Lemma 2 and Lemma 3. Hence, it remains to prove Lemma 2. This uses the martingale method, which is at the core of the proofs of McDiarmid’s and Azuma-Hoeffding’s inequalities.
Proof of Lemma 2.
Let $\mathbb{E}_i$ stand for $\mathbb{E}[\,\cdot\,\mid X_1,\dots,X_i]$, with $\mathbb{E}_0 = \mathbb{E}$. Using the martingale difference decomposition, the gap can be written as
$$Z = \sum_{i=1}^{n} D_i,$$
where $D_i = \mathbb{E}_i[f(S)] - \mathbb{E}_{i-1}[f(S)]$. Notice that $\mathbb{E}_{i-1}[D_i] = 0$, which follows from the elementary identity $\mathbb{E}_{i-1}\left[\mathbb{E}_i[\,\cdot\,]\right] = \mathbb{E}_{i-1}[\,\cdot\,]$.
The variance estimator (cf. Eq. 2) can be rewritten as
$$\hat V = \sum_{i=1}^{n} \hat V_i,$$
where $\hat V_i = \mathbb{E}_i\left[\left(f(S) - f(S^{(i)})\right)^2\right]$. This is just a convenient notation.
Assume for now that for every $i \in \{1,\dots,n\}$ the following holds:
$$\mathbb{E}_{i-1}\left[\exp\left(\lambda D_i - \frac{\lambda^2}{2}\hat V_i\right)\right] \le 1. \qquad (5)$$
Then, using a recursive argument and Eq. 5 (peeling off the terms one at a time, which is possible since $D_i$ and $\hat V_i$ are functions of $X_1,\dots,X_i$), we get
$$\mathbb{E}\left[\exp\left(\lambda Z - \frac{\lambda^2}{2}\hat V\right)\right] = \mathbb{E}\left[\exp\left(\sum_{i=1}^{n-1}\left(\lambda D_i - \frac{\lambda^2}{2}\hat V_i\right)\right)\,\mathbb{E}_{n-1}\left[\exp\left(\lambda D_n - \frac{\lambda^2}{2}\hat V_n\right)\right]\right] \le \cdots \le 1.$$
Thus, it remains to prove Eq. 5. Fix $i$ and let $X_i'$ be a random variable independent of $S$ such that $X_i' \sim P_i$. Let $W_i = f(S) - f(S^{(i)})$. Notice that $\lambda D_i - \frac{\lambda^2}{2}\hat V_i = \mathbb{E}_i\left[\lambda W_i - \frac{\lambda^2}{2}W_i^2\right]$, and by Jensen’s inequality
$$\exp\left(\lambda D_i - \frac{\lambda^2}{2}\hat V_i\right) \le \mathbb{E}_i\left[\exp\left(\lambda W_i - \frac{\lambda^2}{2}W_i^2\right)\right].$$
Let $\mathbb{E}_{\setminus i}$ denote conditioning on $(S, X_i')$ without $(X_i, X_i')$. Then we have
$$\mathbb{E}_{i-1}\left[\exp\left(\lambda D_i - \frac{\lambda^2}{2}\hat V_i\right)\right] \le \mathbb{E}_{i-1}\left[\exp\left(\lambda W_i - \frac{\lambda^2}{2}W_i^2\right)\right] = \mathbb{E}_{i-1}\left[\mathbb{E}_{\setminus i}\left[\cosh\left(\lambda |W_i|\right)\exp\left(-\frac{\lambda^2}{2}W_i^2\right)\right]\right].$$
The last equality follows from the assumption on the distributions, that is, given $X_1,\dots,X_{i-1}$ (and the remaining coordinates), the random variables $X_i$ and $X_i'$ are identically distributed, hence so are $f(S)$ and $f(S^{(i)})$; consequently $W_i$ is symmetric conditionally on $\mathbb{E}_{\setminus i}$, and averaging $e^{\lambda W_i}$ over the two equally likely signs of $W_i$ produces the $\cosh$ term. Since $\cosh(\lambda |W_i|) \le \exp(\frac{\lambda^2}{2}W_i^2)$ (for any $\lambda$), the innermost expectation in the last display is upper-bounded by one. ∎
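The pointwise inequality behind the final step of the proof can be checked on a grid (illustration only; $w$ plays the role of $|W_i|$):

```python
import numpy as np

# Pointwise form of the symmetrization step: for a symmetric W, conditioning on
# |W| = w gives E[exp(lam*W - lam^2*W^2/2)] = cosh(lam*w)*exp(-lam^2*w^2/2),
# which never exceeds 1 because cosh(x) <= exp(x^2/2).
lam = np.linspace(-6.0, 6.0, 241)
w = np.linspace(0.0, 6.0, 241)
L, W = np.meshgrid(lam, w)
vals = np.cosh(L * W) * np.exp(-(L**2) * (W**2) / 2)
print(float(vals.max()))  # 1.0, attained at lam = 0 or w = 0
```

The maximum equals one exactly, which is why no constant is lost at this step.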
The proof makes it clear that this inequality holds in the slightly more general setting in which $f : \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \to \mathbb{R}$ and $S = (X_1,\dots,X_n)$ has independent components, where each $X_i$ is an $\mathcal{X}_i$-valued random variable with distribution $P_i$.
We adapt the notation for $Z$ and $\hat V$ to make explicit their dependence on the underlying function, and see them as being defined over functions $f_h$ indexed by $h$ from some class $\mathcal{H}$:
$$Z(h) = f_h(S) - \mathbb{E}[f_h(S)], \qquad \hat V(h) = \sum_{i=1}^{n}\mathbb{E}\left[\left(f_h(S) - f_h(S^{(i)})\right)^2 \,\middle|\, X_1,\dots,X_i\right].$$
It might be convenient to make explicit the dependence of $Z$ and $\hat V$ on the sample as well; to do so, we may write $Z(S,h)$ and $\hat V(S,h)$. Recall that the distribution of the (size-$n$) random sample $S$ is $P = P_1 \otimes \cdots \otimes P_n$. Notice that for a fixed nonrandom $s \in \mathcal{X}^n$, the gap is
$$Z(s,h) = f_h(s) - \mathbb{E}_{S\sim P}[f_h(S)].$$
The expression for $\hat V(s,h)$ is longer to write, but easy to imagine. The point is that $Z$ and $\hat V$ are real-valued functions defined over $\mathcal{X}^n \times \mathcal{H}$.
Let $\{f_h : h \in \mathcal{H}\}$ be a parametric family of functions $f_h : \mathcal{X}^n \to \mathbb{R}$. For each $h \in \mathcal{H}$, define $Z(h)$ and $\hat V(h)$, the gap and the variance estimator for $f_h$. Then $\left(Z(h), \sqrt{\hat V(h)}\right)$ is a canonical pair, for each $h \in \mathcal{H}$, by Lemma 2.
Given a probability kernel $Q$ from $\mathcal{X}^n$ to $\mathcal{H}$ and $s \in \mathcal{X}^n$, we write expectations with respect to the distribution $Q_s = Q(s,\cdot)$ as $\mathbb{E}_{h\sim Q_s}$, and similarly $\mathbb{E}_{h\sim Q_S}$. If $S$ is the random sample, then expectations with respect to the random measure $Q_S$ are conditional expectations:
$$\mathbb{E}_{h\sim Q_S}\left[\phi(S,h)\right] = \mathbb{E}\left[\phi(S,h) \,\middle|\, S\right].$$
The joint distribution over $\mathcal{X}^n \times \mathcal{H}$ defined by $P$ and the probability kernel $Q$, denoted $P \otimes Q$, is so that choosing a random pair $(S,h) \sim P \otimes Q$ corresponds to choosing $S \sim P$ and then choosing $h \sim Q_S$. Accordingly, integrals under $P \otimes Q$ correspond to the ‘total expectation’ with respect to the random choice of $S$ and $h$. For instance,
$$\mathbb{E}_{(S,h)\sim P\otimes Q}\left[\phi(S,h)\right] = \mathbb{E}_{S\sim P}\left[\mathbb{E}_{h\sim Q_S}\left[\phi(S,h)\right]\right].$$
With a slight abuse of notation, we may write $h \sim Q$ instead of $h \sim Q_S$.
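The construction of $P \otimes Q$ and the iterated-expectation identity can be made concrete on a tiny discrete example (all spaces and numbers below are arbitrary illustrations, not from the note):

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete sketch of the joint law P (x) Q: first S ~ P over {0, 1, 2}, then
# h ~ Q_S over {0, 1}.
P = np.array([0.5, 0.3, 0.2])                        # law of S
Q = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])   # kernel: row s -> law of h
phi = rng.normal(size=(3, 2))                        # a test function phi(s, h)

# 'Total expectation' under P (x) Q versus the iterated form E_S E_{h ~ Q_S}.
total = sum(P[s] * Q[s, h] * phi[s, h] for s in range(3) for h in range(2))
iterated = float(np.dot(P, (Q * phi).sum(axis=1)))
print(np.isclose(total, iterated))  # True
```

The identity is exact by construction; the point of the notation is only to keep track of which expectation is conditional on $S$ and which is total.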
The ‘PAC-Bayes-ification’ of Theorem 1 is as follows.
Theorem 4. Fix an arbitrary ‘data-free’ probability distribution $\pi \in \mathcal{P}(\mathcal{H})$ over $\mathcal{H}$,
and an arbitrary probability kernel $Q$ from $\mathcal{X}^n$ to $\mathcal{H}$.
(i) For any $\delta \in (0,1)$, with probability at least $1-\delta$ we have
$$\mathbb{E}_{h\sim Q}[Z(h)] \le \sqrt{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + \mathbb{E}_{(S,h)\sim P\otimes Q}[\hat V(h)]\right)\left(\mathrm{KL}(Q\|\pi) + 2\ln\frac{1+\sqrt{2}}{\delta}\right)}. \qquad (6)$$
(ii) For all $\delta \in (0,1)$ and $c > 0$, with probability at least $1-\delta$ we have
$$\mathbb{E}_{h\sim Q}[Z(h)] \le \sqrt{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)\left(\mathrm{KL}(Q\|\pi) + \ln\left(\frac{1}{\delta}\sqrt{\frac{\mathbb{E}_{h\sim Q}[\hat V(h)] + c}{c}}\right)\right)}. \qquad (7)$$
The statement of this theorem uses the language of probability kernels for representing data-dependent distributions (cf. Rivasplata et al. [2020]).
In the remainder of this note, we switch back to the usual notation in terms of conditional expectations: $\mathbb{E}_{h\sim Q}[\phi(S,h)] = \mathbb{E}[\phi(S,h) \mid S]$ and $\mathrm{KL}(Q\|\pi) = \mathrm{KL}(Q_S\|\pi)$. Also recall that $\mathbb{E}$ is the total expectation, over the random choice of $S$ and $h$.
The proof of Theorem 4 is based on the following lemma.
Lemma 5. Under the same conditions as in Theorem 4:
(i) For all $\gamma \ge 0$,
$$\mathbb{E}\left[\exp\left(\gamma\sqrt{U}\right)\right] \le \left(1+\sqrt{2}\right)e^{\gamma^2}, \qquad (8)$$
where $U = \left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{\mathbb{E}_{h\sim Q}[\hat V(h)] + c} - 2\,\mathrm{KL}(Q\|\pi)\right)_+$ with $c = \mathbb{E}_{(S,h)\sim P\otimes Q}[\hat V(h)]$.
(ii) For any $c > 0$, we have
$$\mathbb{E}\left[\sqrt{\frac{c}{\mathbb{E}_{h\sim Q}[\hat V(h)] + c}}\,\exp\left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)} - \mathrm{KL}(Q\|\pi)\right)\right] \le 1. \qquad (9)$$
Proof of Lemma 5.
For convenience, we start with the proof of Eq. 9. Recall the following change of measure, which is the basis of the PAC-Bayesian analysis: Let $\rho$ and $\pi$ be probability measures on $\mathcal{Y}$, and let the induced expectation operators be $\mathbb{E}_{Y\sim\rho}$ and $\mathbb{E}_{Y\sim\pi}$, respectively. Let $Y$ be a $\mathcal{Y}$-valued random variable. Then, for any measurable function $g : \mathcal{Y} \to \mathbb{R}$ we have
$$\mathbb{E}_{Y\sim\rho}\left[g(Y)\right] \le \mathrm{KL}(\rho\|\pi) + \ln\mathbb{E}_{Y\sim\pi}\left[e^{g(Y)}\right].$$
Below we use this with $\mathcal{Y} = \mathcal{H}$, $\rho = Q_S$, and $g(h) = \lambda Z(S,h) - \frac{\lambda^2}{2}\hat V(S,h)$.
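The change-of-measure inequality can be sanity-checked numerically on a finite space (a minimal sketch; the two distributions and the function $g$ below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Change of measure (Donsker-Varadhan form) on a finite space:
#   E_rho[g] <= KL(rho || pi) + ln E_pi[exp(g)].
k = 6
rho = rng.dirichlet(np.ones(k))
pi = rng.dirichlet(np.ones(k))
g = rng.normal(size=k)

lhs = float(np.dot(rho, g))
kl = float(np.sum(rho * np.log(rho / pi)))
rhs = kl + float(np.log(np.dot(pi, np.exp(g))))
print(lhs, rhs)  # lhs <= rhs
```

Equality holds only when $\rho$ is the Gibbs tilting of $\pi$ by $g$; for generic $\rho$ there is a strict gap, which is the slack the PAC-Bayesian analysis trades for uniformity over posteriors.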
Let $\mathbb{E}_{h\sim Q}$ and $\mathbb{E}_{h\sim\pi}$ be the expectation with respect to $Q_S$ and $\pi$, respectively. Conditioning on the random sample we have:
$$\lambda\,\mathbb{E}_{h\sim Q}[Z(h)] - \frac{\lambda^2}{2}\,\mathbb{E}_{h\sim Q}[\hat V(h)] \le \mathrm{KL}(Q\|\pi) + \ln\mathbb{E}_{h\sim\pi}\left[\exp\left(\lambda Z(h) - \frac{\lambda^2}{2}\hat V(h)\right)\right].$$
Subtracting the KL term, and taking exponential on both sides gives
$$\exp\left(\lambda\,\mathbb{E}_{h\sim Q}[Z(h)] - \frac{\lambda^2}{2}\,\mathbb{E}_{h\sim Q}[\hat V(h)] - \mathrm{KL}(Q\|\pi)\right) \le \mathbb{E}_{h\sim\pi}\left[\exp\left(\lambda Z(h) - \frac{\lambda^2}{2}\hat V(h)\right)\right].$$
Then, taking expectation over the random sample on both sides, and keeping in mind that $\left(Z(h), \sqrt{\hat V(h)}\right)$ is a canonical pair for any fixed $h$, we have
$$\mathbb{E}_{S\sim P}\left[\exp\left(\lambda\,\mathbb{E}_{h\sim Q}[Z(h)] - \frac{\lambda^2}{2}\,\mathbb{E}_{h\sim Q}[\hat V(h)] - \mathrm{KL}(Q\|\pi)\right)\right] \le \mathbb{E}_{S\sim P}\,\mathbb{E}_{h\sim\pi}\left[e^{\lambda Z(h) - \frac{\lambda^2}{2}\hat V(h)}\right] = \mathbb{E}_{h\sim\pi}\,\mathbb{E}_{S\sim P}\left[e^{\lambda Z(h) - \frac{\lambda^2}{2}\hat V(h)}\right] \le 1.$$
The equality is by swapping the order of expectation, which is possible since $\pi$ is a data-free distribution (cf. Rivasplata et al. [2020]).
Next, multiplying both sides by $\sqrt{\frac{c}{2\pi}}\,e^{-\frac{c\lambda^2}{2}}$ for some fixed $c > 0$, integrating with respect to $\lambda$ over $\mathbb{R}$, and applying Fubini’s theorem, gives
$$\mathbb{E}\left[\int_{\mathbb{R}}\sqrt{\frac{c}{2\pi}}\,\exp\left(\lambda\,\mathbb{E}_{h\sim Q}[Z(h)] - \frac{\lambda^2}{2}\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right) - \mathrm{KL}(Q\|\pi)\right)\mathrm{d}\lambda\right] \le 1.$$
Carrying out the Gaussian integration we arrive at
$$\mathbb{E}\left[\sqrt{\frac{c}{\mathbb{E}_{h\sim Q}[\hat V(h)] + c}}\,\exp\left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)} - \mathrm{KL}(Q\|\pi)\right)\right] \le 1,$$
which finishes the proof of Eq. 9.
For the other part of the lemma, we consider the following:
Claim 6. Let $U$ be a non-negative random variable, and for $\eta > 0$ define $M_\eta = \mathbb{E}\left[\exp\left(\frac{\eta U}{2}\right)\right]$. Then, for any $\gamma \ge 0$, $\mathbb{E}\left[\exp\left(\gamma\sqrt{U}\right)\right] \le e^{\frac{\gamma^2}{2\eta}}\,M_\eta$.
The proof of this claim is as follows. Fix $\eta > 0$ and $\gamma \ge 0$. Using the inequality $2ab \le a^2 + b^2$ with $a = \frac{\gamma}{\sqrt{\eta}}$ and $b = \sqrt{\eta U}$ we have
$$\gamma\sqrt{U} \le \frac{\gamma^2}{2\eta} + \frac{\eta U}{2}.$$
Then take exponential on both sides, and take expectations.
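The elementary inequality $2ab \le a^2 + b^2$ in the instantiated form used here can be checked on a grid (illustration only):

```python
import numpy as np

# Grid check of the elementary step: gamma*sqrt(u) <= gamma^2/(2*eta) + eta*u/2
# for all gamma, u >= 0 and eta > 0 (it is (a - b)^2/2 >= 0 in disguise).
gammas = np.linspace(0.0, 10.0, 101)
etas = np.array([0.1, 0.5, 1.0, 3.0])
us = np.linspace(0.0, 50.0, 201)
G, E, U = np.meshgrid(gammas, etas, us, indexing="ij")
gap = G**2 / (2 * E) + E * U / 2 - G * np.sqrt(U)
print(float(gap.min()))  # >= 0 up to rounding
```

Equality holds along $\gamma = \eta u^{1/2}\sqrt{\eta}$... more precisely when $a = b$, i.e. $\gamma = \eta\sqrt{u}$, which is why the parameter $\eta$ can later be tuned without losing more than a constant.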
Next, we see the proof of part (i) of the lemma.
Consider the random variable
$$U = \left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{\mathbb{E}_{h\sim Q}[\hat V(h)] + c} - 2\,\mathrm{KL}(Q\|\pi)\right)_+,$$
and notice that Eq. 8 follows from the claim with $\eta = \frac{1}{2}$, provided that we show that $M_{1/2} = \mathbb{E}\left[\exp\left(\frac{U}{4}\right)\right] \le 1 + \sqrt{2}$. For this, consider an arbitrary $c > 0$, and consider the abbreviations
$$G = \exp\left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)} - \mathrm{KL}(Q\|\pi)\right), \qquad R = \sqrt{1 + \frac{\mathbb{E}_{h\sim Q}[\hat V(h)]}{c}}.$$
We need to upper-bound $\mathbb{E}\left[\exp\left(\frac{U}{4}\right)\right]$. Keeping in mind that $\exp\left(\frac{U}{4}\right) \le 1 + \sqrt{G}$ (in fact, $e^{(x)_+} \le 1 + e^{x}$ for any real $x$), by Cauchy-Schwarz,
$$\mathbb{E}\left[\sqrt{G}\right] = \mathbb{E}\left[\sqrt{\frac{G}{R}}\,\sqrt{R}\right] \le \sqrt{\mathbb{E}\left[\frac{G}{R}\right]\,\mathbb{E}\left[R\right]}.$$
Observe that $\mathbb{E}\left[\frac{G}{R}\right] \le 1$ by Eq. 9, and $\mathbb{E}[R] = \mathbb{E}\left[\sqrt{1 + \frac{\mathbb{E}_{h\sim Q}[\hat V(h)]}{c}}\right]$. Now, we have
$$\mathbb{E}\left[\exp\left(\frac{U}{4}\right)\right] \le 1 + \sqrt{\mathbb{E}[R]}.$$
Finally, by subadditivity of the square root function and Jensen’s inequality,
$$\mathbb{E}[R] \le 1 + \mathbb{E}\left[\sqrt{\frac{\mathbb{E}_{h\sim Q}[\hat V(h)]}{c}}\right] \le 1 + \sqrt{\frac{\mathbb{E}_{(S,h)\sim P\otimes Q}[\hat V(h)]}{c}} \le 2,$$
where the last inequality is by taking any $c \ge \mathbb{E}_{(S,h)\sim P\otimes Q}[\hat V(h)]$. Thus, $\mathbb{E}\left[\exp\left(\frac{U}{4}\right)\right] \le 1 + \sqrt{2}$ for the chosen $c$. Applying Claim 6 with $\eta = \frac{1}{2}$ completes the proof. ∎
To complete the argument, the proof of Theorem 4 is given next.
Proof of Theorem 4.
Applying Chernoff’s bounding technique with Eq. 8 gives
$$P\left(\sqrt{U} \ge t\right) \le \inf_{\gamma \ge 0}\,e^{-\gamma t}\,\mathbb{E}\left[e^{\gamma\sqrt{U}}\right] \le \left(1+\sqrt{2}\right)\inf_{\gamma \ge 0}\,e^{\gamma^2 - \gamma t}.$$
The infimum is $e^{-\frac{t^2}{4}}$. Thus, with probability at least $1-\delta$ we have
$$U \le 4\ln\frac{1+\sqrt{2}}{\delta}.$$
With some algebra, this event implies
$$\mathbb{E}_{h\sim Q}[Z(h)] \le \sqrt{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + \mathbb{E}_{(S,h)\sim P\otimes Q}[\hat V(h)]\right)\left(\mathrm{KL}(Q\|\pi) + 2\ln\frac{1+\sqrt{2}}{\delta}\right)},$$
which proves part (i).
Next, observe that for any $c > 0$ and any $\delta \in (0,1)$,
$$P\left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)} - \mathrm{KL}(Q\|\pi) - \frac{1}{2}\ln\frac{\mathbb{E}_{h\sim Q}[\hat V(h)] + c}{c} \ge \ln\frac{1}{\delta}\right) \le \delta\,\mathbb{E}\left[\sqrt{\frac{c}{\mathbb{E}_{h\sim Q}[\hat V(h)] + c}}\,\exp\left(\frac{\left(\mathbb{E}_{h\sim Q}[Z(h)]\right)^2}{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)} - \mathrm{KL}(Q\|\pi)\right)\right] \le \delta,$$
where the last two inequalities follow from Markov’s inequality and Eq. 9. This implies that for all $c > 0$, with probability at least $1-\delta$, one has
$$\mathbb{E}_{h\sim Q}[Z(h)] \le \sqrt{2\left(\mathbb{E}_{h\sim Q}[\hat V(h)] + c\right)\left(\mathrm{KL}(Q\|\pi) + \ln\left(\frac{1}{\delta}\sqrt{\frac{\mathbb{E}_{h\sim Q}[\hat V(h)] + c}{c}}\right)\right)},$$
which proves part (ii). ∎
Kuzborskij and Szepesvári deserve fair credit for showing that the pair $\left(Z, \sqrt{\hat V}\right)$ meets de la Peña et al.’s ‘canonical condition’ (Lemma 2), which enabled powerful tools for bounding exponential moments. Of course, this was possible thanks to their variance estimator $\hat V$. Apart from that, the main part of the work of Kuzborskij and Szepesvári is in the proofs of Lemma 5 and Theorem 4, which cleverly use the techniques of de la Peña et al. [2009]. In the next iteration of this note (provided that enough readers care about it) I intend to add discussions about Theorem 1 & Theorem 4, and applications.
- $\mathcal{P}(\mathcal{A})$ denotes the family of probability measures defined on a measurable space $\mathcal{A}$. When $\mathcal{A}$ is clear from the context, we write simply $\mathcal{P}$ for simplicity.
- This is inspired by the proof of [de la Peña et al., 2009, Theorem 12.4], which uses the method of mixtures with a Gaussian distribution.
- Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
- Victor H. de la Peña, Tze Leung Lai, and Qi-Man Shao. Self-Normalized Processes: Limit Theory and Statistical Applications. Springer, 2009.
- Ilja Kuzborskij and Csaba Szepesvári. Efron-Stein PAC-Bayesian Inequalities. arXiv:1909.01931, 2019.
- Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvári, and John Shawe-Taylor. PAC-Bayes Analysis Beyond the Usual Bounds. In Advances in Neural Information Processing Systems, 2020.