A note on a confidence bound of Kuzborskij and Szepesvári


Abstract

In an interesting recent work, Kuzborskij and Szepesvári derived a confidence bound for functions of independent random variables, based on an inequality that relates concentration to squared coordinatewise perturbations of the chosen function. Kuzborskij and Szepesvári also established the PAC-Bayes-ification of their confidence bound. Two important aspects of their work are that the random variables may have unbounded range, and they need not be identically distributed. The purpose of this note is to advertise and discuss these interesting results, with streamlined proofs. This expository note is written for readers who, metaphorically speaking, enjoy the ‘featured movie’ but prefer to skip the preview sequence.

1 Introduction

In an interesting recent work, Kuzborskij and Szepesvári derived a confidence bound for the random variable

$$Z := f(X) - \mathbb{E}[f(X)],$$

where $X = (X_1, \dots, X_n)$ is a size-$n$ random sample composed of independent $\mathcal{X}$-valued random elements $X_1, \dots, X_n$, and $f : \mathcal{X}^n \to \mathbb{R}$ is a measurable function. Notice, however, that the components are not required to be identically distributed: each $X_i$ may be distributed according to a different$^1$ $\mu_i \in \mathcal{M}_1(\mathcal{X})$. Accordingly, the distribution of the size-$n$ random sample $X$ is $\mu := \mu_1 \otimes \cdots \otimes \mu_n$.

Their confidence bound is based on an estimator of the variance. Recall that McDiarmid’s inequality, which is based on the bounded differences property, relates concentration of $Z$ around zero (its mean) to the sensitivity of $f$ to coordinatewise perturbations (“first-order”). By contrast, the bound of Kuzborskij and Szepesvári relates concentration to squared perturbations (“second-order”), which leads to an inequality based on a variance estimator. The latter resembles a well-known estimator, recalled next.

The variance estimator used in the Efron-Stein inequality.

This is defined as follows:

$$\widehat{V}_{\mathrm{ES}} := \sum_{i=1}^{n} \mathbb{E}\Big[ \big( f(X) - f(X^{(i)}) \big)_{+}^{2} \,\Big|\, X \Big], \qquad (1)$$

where $(\cdot)_{+}$ is the positive part, and the notation $X^{(i)}$ indicates that the $i$th element of $X$ is replaced with $X_i'$, where $X_i'$ is an independent copy of $X_i$. Further details about this estimator, with context and references, can be found in Boucheron et al. [2013].
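To make the definition concrete, here is a minimal Monte Carlo sketch of the Efron-Stein estimator as reconstructed in Eq. (1). The function `efron_stein_estimate`, the per-coordinate sampler `sample_coordinate`, and the toy choices of `f` and the distributions are illustrative placeholders of mine, not objects from the original note.

```python
import numpy as np

def efron_stein_estimate(f, x, sample_coordinate, n_mc=1000, seed=0):
    """Monte Carlo approximation of sum_i E[(f(X) - f(X^{(i)}))_+^2 | X],
    i.e. the (reconstructed) Efron-Stein estimator in Eq. (1)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    fx = f(x)
    total = 0.0
    for i in range(n):
        copies = sample_coordinate(i, n_mc, rng)   # independent copies X_i' ~ mu_i
        diffs = np.empty(n_mc)
        for k in range(n_mc):
            x_pert = x.copy()
            x_pert[i] = copies[k]                  # replace the i-th element by X_i'
            diffs[k] = fx - f(x_pert)              # f(X) - f(X^{(i)})
        total += np.mean(np.maximum(diffs, 0.0) ** 2)  # positive part, squared
    return total

# Toy usage: Gaussian coordinates and f = empirical mean (placeholder choices).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x_obs = rng.normal(size=10)
    f_mean = lambda z: float(np.mean(z))
    draw_i = lambda i, size, rng: rng.normal(size=size)  # here every mu_i = N(0, 1)
    print(efron_stein_estimate(f_mean, x_obs, draw_i))
```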

Problem: In order to prove a confidence bound for $Z$ based on $\widehat{V}_{\mathrm{ES}}$, one needs a priori assumptions on the moments involved. To avoid this limitation, Kuzborskij and Szepesvári used a modified variance estimator.

The variance estimator used in the Kuzborskij-Szepesvári inequality.

This is defined as follows:

$$\widehat{V} := \sum_{i=1}^{n} \mathbb{E}\Big[ \big( f(X) - f(X^{(i)}) \big)^{2} \,\Big|\, X_1, \dots, X_i \Big]. \qquad (2)$$

Kuzborskij and Szepesvári called it a “semi-empirical” estimator, because of its dependence on both the sample and the distribution of the sample.
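For comparison, here is an analogous Monte Carlo sketch of the semi-empirical estimator, under the reconstruction of Eq. (2) given above (the conditioning on $X_1, \dots, X_i$ is my reading of the definition). As before, `sample_coordinate` and the other names are illustrative placeholders.

```python
import numpy as np

def semi_empirical_estimate(f, x, sample_coordinate, n_mc=1000, seed=0):
    """MC approximation of sum_i E[(f(X) - f(X^{(i)}))^2 | X_1,...,X_i],
    i.e. the (reconstructed) semi-empirical estimator in Eq. (2)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    total = 0.0
    for i in range(n):
        sq_diffs = np.empty(n_mc)
        for k in range(n_mc):
            # Keep the observed prefix X_1,...,X_i fixed; redraw the suffix
            # X_{i+1},...,X_n and the copy X_i' from their distributions mu_j.
            x_full = x.copy()
            for j in range(i + 1, n):
                x_full[j] = sample_coordinate(j, 1, rng)[0]
            x_swap = x_full.copy()
            x_swap[i] = sample_coordinate(i, 1, rng)[0]      # X_i replaced by X_i'
            sq_diffs[k] = (f(x_full) - f(x_swap)) ** 2
        total += np.mean(sq_diffs)
    return total
```

Note how the suffix coordinates are integrated out rather than held fixed; this is exactly the “semi-empirical” character described above.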

The main result of Kuzborskij and Szepesvári is the following high-confidence bound: for any $\lambda > 0$ and $\delta \in (0,1)$, with probability at least $1-\delta$ one has

(3)

Remark:

Inequality (3) does not require boundedness of the random variables $X_1, \dots, X_n$, nor of the function $f$; the only crucial assumption is independence of the elements in the sample $X$. Observe that inequality (3) basically depends on $\widehat{V}$ and a positive free parameter $\lambda$, which must be selected by the user. For instance, a simple choice of $\lambda$ gives the following: for any $\delta \in (0,1)$, with probability at least $1-\delta$ one has

Paraphrasing Kuzborskij and Szepesvári: with this particular choice of $\lambda$, the resulting inequality shows a Bernstein-type behavior, in the sense that the upper bound is dominated by the lower-order term whenever the variance estimate is small enough; the price for such a simple choice of $\lambda$ is in the logarithmic term.

Remark:

In addition to inequality (3), Kuzborskij and Szepesvári showed a bound that does not involve the free parameter $\lambda$ and, in particular, is scale-free; it holds for any $\delta \in (0,1)$ with probability at least $1-\delta$.


The remainder of this note is organized as follows. The confidence bound of Kuzborskij and Szepesvári is presented and proved in Section 2, and the ‘PAC-Bayes-ified’ version of this bound is presented and proved in Section 3.

2 The main result and its proof

Theorem 1.

Let $f : \mathcal{X}^n \to \mathbb{R}$ be a measurable function, let $Z := f(X) - \mathbb{E}[f(X)]$ be the random gap with $X$ randomly chosen from the distribution $\mu = \mu_1 \otimes \cdots \otimes \mu_n$, and let $\widehat{V}$ be the variance estimator defined in Eq. 2.
(i) For any ,

(ii) For any , and any ,

To discuss the proof of Theorem 1, the following definition will be convenient: a pair of random variables $(A, B)$ is called a canonical pair if $B \ge 0$ and

$$\mathbb{E}\Big[ \exp\Big( \lambda A - \tfrac{\lambda^2}{2} B^2 \Big) \Big] \le 1 \quad \text{for all } \lambda \in \mathbb{R}. \qquad (4)$$

See de la Peña et al. [2009, Section 10.2] for further discussion of this condition and its connection with the so-called self-normalized processes.
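As a quick sanity check of the definition (this example is mine, not from the note): if $A \sim N(0, \sigma^2)$ and $B = \sigma$, then $\mathbb{E}[\exp(\lambda A - \lambda^2 B^2 / 2)] = 1$ for every $\lambda \in \mathbb{R}$, so $(A, B)$ is a canonical pair. A short Monte Carlo verification:

```python
import numpy as np

# Monte Carlo check that a centred Gaussian paired with its standard deviation
# satisfies the canonical condition (4) as reconstructed above.
rng = np.random.default_rng(0)
sigma = 2.0
a = rng.normal(scale=sigma, size=1_000_000)
for lam in (-1.0, -0.25, 0.25, 1.0):
    moment = np.mean(np.exp(lam * a - 0.5 * lam**2 * sigma**2))
    print(f"lambda = {lam:+.2f}: empirical exponential moment = {moment:.3f}")  # close to 1
```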

A key step of the proof of Theorem 1 consists of establishing that $(Z, \sqrt{\widehat{V}})$ is a canonical pair. We state this as a lemma for convenient reference:

Lemma 2.

$(Z, \sqrt{\widehat{V}})$ is a canonical pair.

The rest of the proof of Theorem 1 relies on the following technical result, which essentially gives subgaussian tail probabilities for some functions of a canonical pair (cf. de la Peña et al. [2009, Theorem 12.4 & Corollary 12.5]):

Lemma 3.

Suppose $(A, B)$ is a canonical pair. Then:
(i) For any ,

(ii) For any and ,

The proof of Theorem 1 then follows by combining Lemma 2 and Lemma 3. Hence, it remains to prove Lemma 2. This uses the martingale method, which is at the core of the proofs of McDiarmid’s and Azuma-Hoeffding’s inequalities.

Proof of Lemma 2.

Let $\mathbb{E}_i$ stand for $\mathbb{E}[\,\cdot \mid X_1, \dots, X_i]$ (with $\mathbb{E}_0 = \mathbb{E}$). Using the martingale difference decomposition, the gap $Z$ can be written as

$$Z = \sum_{i=1}^{n} D_i,$$

where $D_i := \mathbb{E}_i[f(X)] - \mathbb{E}_{i-1}[f(X)]$. Notice that $\mathbb{E}_{i-1}[D_i] = 0$, which follows from the elementary identity $\mathbb{E}_{i-1}[\mathbb{E}_i[\,\cdot\,]] = \mathbb{E}_{i-1}[\,\cdot\,]$.
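For completeness, here is the telescoping computation behind this decomposition, written out under the reconstructed definitions of $\mathbb{E}_i$ and $D_i$ above:

```latex
% Telescoping sum behind the martingale difference decomposition,
% using E_n[f(X)] = f(X) and E_0[f(X)] = E[f(X)].
\begin{align*}
\sum_{i=1}^{n} D_i
  = \sum_{i=1}^{n} \big( \mathbb{E}_i[f(X)] - \mathbb{E}_{i-1}[f(X)] \big)
  = \mathbb{E}_n[f(X)] - \mathbb{E}_0[f(X)]
  = f(X) - \mathbb{E}[f(X)] = Z .
\end{align*}
```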

The variance estimator (cf. Eq. 2) can be rewritten as

$$\widehat{V} = \sum_{i=1}^{n} \widehat{V}_i, \qquad \text{where } \widehat{V}_i := \mathbb{E}_i\Big[ \big( f(X) - f(X^{(i)}) \big)^2 \Big].$$

This is just a convenient notation.

Assume for now that for every $i \in \{1, \dots, n\}$ the following holds:

$$\mathbb{E}_{i-1}\Big[ \exp\Big( \lambda D_i - \tfrac{\lambda^2}{2} \widehat{V}_i \Big) \Big] \le 1 \quad \text{for all } \lambda \in \mathbb{R}. \qquad (5)$$

Then, using a recursive argument and Eq. 5, we get $\mathbb{E}\big[ \exp\big( \lambda Z - \tfrac{\lambda^2}{2} \widehat{V} \big) \big] \le 1$ for all $\lambda \in \mathbb{R}$, that is, $(Z, \sqrt{\widehat{V}})$ is a canonical pair.

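To see how the recursive argument unfolds (my own unpacking, under the reconstructed forms of Eq. 5, $D_i$, and $\widehat{V}_i$): condition on $X_1, \dots, X_{n-1}$ to peel off the last term, then repeat.

```latex
% Peeling off one term at a time using Eq. (5) and the fact that
% D_i and \widehat{V}_i are functions of X_1,...,X_i (hence fixed
% under the conditional expectation E_{n-1}).
\begin{align*}
\mathbb{E}\Big[ e^{\lambda Z - \frac{\lambda^2}{2}\widehat{V}} \Big]
 &= \mathbb{E}\Big[ e^{\sum_{i=1}^{n-1} (\lambda D_i - \frac{\lambda^2}{2}\widehat{V}_i)}
      \, \mathbb{E}_{n-1}\Big[ e^{\lambda D_n - \frac{\lambda^2}{2}\widehat{V}_n} \Big] \Big] \\
 &\le \mathbb{E}\Big[ e^{\sum_{i=1}^{n-1} (\lambda D_i - \frac{\lambda^2}{2}\widehat{V}_i)} \Big]
  \;\le\; \cdots \;\le\; 1 .
\end{align*}
```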
Thus, it remains to prove Eq. 5. Fix $i$ and let $X_i'$ be a random variable independent of $X$ such that $X_i' \sim \mu_i$. Let $\Delta_i := f(X) - f(X^{(i)})$. Notice that $D_i = \mathbb{E}_i[\Delta_i]$, and by Jensen’s inequality

$$\exp\Big( \lambda D_i - \tfrac{\lambda^2}{2} \widehat{V}_i \Big) = \exp\Big( \mathbb{E}_i\Big[ \lambda \Delta_i - \tfrac{\lambda^2}{2} \Delta_i^2 \Big] \Big) \le \mathbb{E}_i\Big[ \exp\Big( \lambda \Delta_i - \tfrac{\lambda^2}{2} \Delta_i^2 \Big) \Big].$$

Let $\mathbb{E}_{-i}$ denote conditioning on $X$ without $X_i$. Then we have

$$\mathbb{E}_{i-1}\Big[ \exp\Big( \lambda D_i - \tfrac{\lambda^2}{2} \widehat{V}_i \Big) \Big] \le \mathbb{E}_{i-1}\Big[ \mathbb{E}_{-i}\Big[ \exp\Big( \lambda \Delta_i - \tfrac{\lambda^2}{2} \Delta_i^2 \Big) \Big] \Big] = \mathbb{E}_{i-1}\Big[ \mathbb{E}_{-i}\Big[ \cosh(\lambda \Delta_i) \, e^{-\lambda^2 \Delta_i^2 / 2} \Big] \Big].$$

The last equality follows from the assumption on the distributions, that is, given the coordinates other than the $i$th one, the random variables $X_i$ and $X_i'$ are identically distributed, hence so are $f(X)$ and $f(X^{(i)})$, and $\Delta_i$ is symmetric. Since $\cosh(\lambda x) \le e^{\lambda^2 x^2 / 2}$ for all $x$ (that is, a symmetric sign is subgaussian for any $\lambda$), the innermost expectation in the last display is upper-bounded by one. ∎

Remark:

The proof makes it clear that this inequality holds in the slightly more general setting in which $f : \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \to \mathbb{R}$ and $X = (X_1, \dots, X_n)$ has independent components, where each $X_i$ is an $\mathcal{X}_i$-valued random variable with distribution $\mu_i$.

3 PAC-Bayes-ification

We adapt the notation for the gap and the variance estimator to make explicit their dependence on $f$, and see them as being defined over $f$’s from some function class $\mathcal{F}$:

$$\mathrm{gap}(f) := f(X) - \mathbb{E}[f(X)], \qquad (1')$$
$$\widehat{V}(f) := \sum_{i=1}^{n} \mathbb{E}\Big[ \big( f(X) - f(X^{(i)}) \big)^2 \,\Big|\, X_1, \dots, X_i \Big]. \qquad (2')$$

It might be convenient to make explicit the dependence of $\mathrm{gap}$ and $\widehat{V}$ on the sample as well; to do so, we may write $\mathrm{gap}(f, X)$ and $\widehat{V}(f, X)$. Recall that the distribution of the (size-$n$) random sample $X$ is $\mu = \mu_1 \otimes \cdots \otimes \mu_n$. Notice that for a fixed nonrandom $x \in \mathcal{X}^n$, the gap is

$$\mathrm{gap}(f, x) = f(x) - \mathbb{E}_{X \sim \mu}[f(X)].$$

The expression for $\widehat{V}(f, x)$ is longer to write, but easy to imagine. The point is that $\mathrm{gap}$ and $\widehat{V}$ are real-valued functions defined over $\mathcal{F} \times \mathcal{X}^n$.

Let $\{ f_\theta : \theta \in \Theta \}$ be a parametric family of functions $f_\theta : \mathcal{X}^n \to \mathbb{R}$. For each $\theta \in \Theta$, define $\mathrm{gap}(\theta) := \mathrm{gap}(f_\theta, X)$ and $\widehat{V}(\theta) := \widehat{V}(f_\theta, X)$, the gap and the variance estimator for $f_\theta$. Then $(\mathrm{gap}(\theta), \sqrt{\widehat{V}(\theta)})$ is a canonical pair, for each $\theta \in \Theta$, by Lemma 2.

Given a probability kernel $\rho$ from $\mathcal{X}^n$ to $\Theta$ and $x \in \mathcal{X}^n$, we write expectations with respect to the distribution $\rho(x)$ as $\mathbb{E}_{\theta \sim \rho(x)}[\,\cdot\,]$, and similarly for any other distribution over $\Theta$. If $X$ is the random sample, then expectations with respect to the random measure $\rho(X)$ are conditional expectations:

$$\mathbb{E}_{\theta \sim \rho(X)}[\,\cdot\,] = \mathbb{E}[\,\cdot \mid X].$$

The joint distribution over $\mathcal{X}^n \times \Theta$ defined by $\mu$ and the probability kernel $\rho$, denoted $\mu \otimes \rho$, is such that choosing a random pair $(X, \theta)$ corresponds to choosing $X \sim \mu$ and then choosing $\theta \sim \rho(X)$. Accordingly, integrals under $\mu \otimes \rho$ correspond to the ‘total expectation’ with respect to the random choice of $X$ and $\theta$. For instance,

$$\mathbb{E}_{(X, \theta) \sim \mu \otimes \rho}[\,\cdot\,] = \mathbb{E}_{X \sim \mu}\big[ \mathbb{E}_{\theta \sim \rho(X)}[\,\cdot\,] \big].$$

With a slight abuse of notation, we may write $\mathbb{E}_{\rho(X)}$ instead of $\mathbb{E}_{\theta \sim \rho(X)}$.
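As a small illustration of this two-stage sampling (the concrete choices of $\mu$ and of the kernel below are placeholders of mine, not the note’s):

```python
import numpy as np

# Sampling a pair under the joint distribution: first the sample X ~ mu,
# then theta ~ rho(X), a data-dependent distribution over Theta.
rng = np.random.default_rng(0)

def draw_sample(n):
    """X ~ mu: here n independent N(0, 1) coordinates (placeholder choice)."""
    return rng.normal(size=n)

def draw_theta(x):
    """theta ~ rho(X): here a Gaussian centred at the sample mean (placeholder)."""
    return rng.normal(loc=float(np.mean(x)), scale=1.0)

X = draw_sample(20)
theta = draw_theta(X)   # (X, theta) is one draw from the joint distribution
```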

The ‘PAC-Bayes-ification’ of Theorem 1 is as follows.

Theorem 4.

Fix an arbitrary ‘data-free’ probability distribution $\pi$ over $\Theta$, and an arbitrary probability kernel $\rho$ from $\mathcal{X}^n$ to $\Theta$. Then
(i) For any $\delta \in (0,1)$, with probability at least $1-\delta$ we have

(6)

(ii) For all $\lambda > 0$ and $\delta \in (0,1)$, with probability at least $1-\delta$ we have

(7)

The statement of this theorem uses the language of probability kernels for representing data-dependent distributions (cf. Rivasplata et al. [2020]).

In the remainder of this note, we switch back to the usual notation in terms of conditional expectations, writing $\mathbb{E}[\,\cdot \mid X]$ for $\mathbb{E}_{\theta \sim \rho(X)}[\,\cdot\,]$. Also recall that expectation under the joint distribution $\mu \otimes \rho$ is the total expectation.

The proof of Theorem 4 is based on the following lemma.

Lemma 5.

Under the same conditions as in Theorem 4.
(i) For all ,

(8)

(ii) For any , we have

(9)
Proof of Lemma 5.

For convenience, we start with the proof of Eq. 9. Recall the following change of measure inequality, which is the basis of the PAC-Bayesian analysis: let $\rho$ and $\pi$ be probability measures on $\Theta$, and let the induced expectation operators be $\mathbb{E}_{\theta \sim \rho}$ and $\mathbb{E}_{\theta \sim \pi}$, respectively. Then, for any measurable function $h : \Theta \to \mathbb{R}$ we have

$$\mathbb{E}_{\theta \sim \rho}[h(\theta)] \le \mathrm{KL}(\rho \,\|\, \pi) + \ln \mathbb{E}_{\theta \sim \pi}\big[ e^{h(\theta)} \big].$$

Below we use this with the data-dependent $\rho(X)$ in place of $\rho$, the data-free $\pi$ as the reference measure, and a suitable choice of $h$.
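A quick numerical check of the change of measure inequality, with arbitrary discrete $\rho$, $\pi$ and an arbitrary $h$ (illustrative only, not part of the proof):

```python
import numpy as np

# Verify E_rho[h] <= KL(rho || pi) + ln E_pi[exp(h)] on a toy discrete example.
rho = np.array([0.5, 0.3, 0.2])
pi  = np.array([0.2, 0.3, 0.5])
h   = np.array([1.0, -2.0, 0.7])

lhs = float(np.sum(rho * h))
kl  = float(np.sum(rho * np.log(rho / pi)))
rhs = kl + float(np.log(np.sum(pi * np.exp(h))))
print(f"{lhs:.3f} <= {rhs:.3f}")   # the inequality holds for any rho, pi, h
```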

Let $\mathbb{E}_{\rho(X)}$ and $\mathbb{E}_{\pi}$ denote expectation with respect to $\rho(X)$ and $\pi$, respectively. Conditioning on the random sample $X$ we have:

Subtracting the KL term and taking exponentials on both sides gives

Then, taking expectation over the random sample $X$ on both sides, and keeping in mind that $(\mathrm{gap}(\theta), \sqrt{\widehat{V}(\theta)})$ is a canonical pair for any fixed $\theta \in \Theta$, we have

The equality is by swapping the order of expectation, which is possible since $\pi$ is a data-free distribution (cf. Rivasplata et al. [2020]). Next, multiplying both sides by a Gaussian density in $\lambda$, for some fixed value of its variance, integrating with respect to $\lambda$, and applying Fubini’s theorem, gives$^2$

Carrying out the Gaussian integration we arrive at

which finishes the proof of Eq. 9.

For the other part of the lemma, we consider the following:

Claim 6.

Let be a non-negative random variable, and for define . Then, for any , .

The proof of this claim is as follows. Fix and . Using the inequality with and we have

Then take exponential on both sides, and take expectations.
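Since the displays in Claim 6 did not survive, here is one standard argument of this shape, reconstructed under the assumption that the claim trades the square root of $U$ for a linear term at the price of a $\gamma$-dependent factor; the constants may differ from the original.

```latex
% A plausible instantiation of Claim 6 (reconstruction; constants may differ).
% For gamma > 0 and u >= 0, the AM-GM inequality sqrt(ab) <= (a+b)/2 with
% a = gamma*u and b = 1/gamma gives sqrt(u) <= gamma*u/2 + 1/(2*gamma), hence
\begin{align*}
\mathbb{E}\big[ e^{\sqrt{U}} \big]
  \;\le\; \mathbb{E}\big[ e^{\frac{\gamma U}{2} + \frac{1}{2\gamma}} \big]
  \;=\; e^{\frac{1}{2\gamma}} \, \mathbb{E}\big[ e^{\frac{\gamma}{2} U} \big] .
\end{align*}
```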

Next, we see the proof of part (i) of the lemma.

Consider the random variable

and notice that Eq. 8 follows from the claim with , provided that we show that . For this, consider an arbitrary , and consider the abbreviations

We need to upper-bound . Keeping in mind that (in fact, ), by Cauchy-Schwarz,

Observe that by Eq. 9, and . Now, we have

Finally, by subadditivity of the square root function and Jensen’s inequality,

where the last inequality is by taking any . Thus, for the chosen . Applying Claim 6 with completes the proof. ∎

To complete the argument, the proof of Theorem 4 is given next.

Proof of Theorem 4.

Applying Chernoff’s bounding technique with Eq. 8 gives

The infimum can be evaluated in closed form. Thus, with probability at least $1-\delta$ we have

With some algebra, this event implies

The last display is equivalent to Eq. 6. Hence Theorem 4(i) is proved.

Next, observe that for any $\lambda > 0$ and any $\delta \in (0,1)$,

where the last two inequalities follow from Markov’s inequality and Eq. 9. This implies that for all $\lambda > 0$, with probability at least $1-\delta$, one has

Notice that may be replaced with , since is a free variable. Doing this replacement, and rearranging the terms, we get the equivalent of Eq. 7. Hence Theorem 4(ii) is proved. ∎

Closing remarks.

Kuzborskij and Szepesvári deserve fair credit for showing that the pair $(Z, \sqrt{\widehat{V}})$ meets de la Peña et al. [2009]’s ‘canonical condition’ (Lemma 2), which enabled powerful tools for bounding exponential moments. Of course, this was possible thanks to their variance estimator $\widehat{V}$. Apart from that, the main part of the work of Kuzborskij and Szepesvári is in the proofs of Lemma 5 and Theorem 4, which cleverly use the techniques of de la Peña et al. In the next iteration of this note (provided that enough readers care about it) I intend to add discussions of Theorem 1 and Theorem 4, and applications.

Footnotes

  1. $\mathcal{M}_1(\mathcal{X}, \Sigma)$ denotes the family of probability measures defined on a measurable space $(\mathcal{X}, \Sigma)$. When $\Sigma$ is clear from the context, we write simply $\mathcal{M}_1(\mathcal{X})$ for simplicity.
  2. This is inspired by the proof of [de la Peña et al., 2009, Theorem 12.4], which uses the method of mixtures with a Gaussian distribution.

References

  1. Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013.
  2. Victor H. de la Peña, Tze Leung Lai, and Qi-Man Shao. Self-Normalized Processes: Limit Theory and Statistical Applications. Springer, 2009.
  3. Ilja Kuzborskij and Csaba Szepesvári. Efron-Stein PAC-Bayesian Inequalities. arXiv:1909.01931, 2019.
  4. Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvári, and John Shawe-Taylor. PAC-Bayes Analysis Beyond the Usual Bounds. In Advances in Neural Information Processing Systems, 2020.