Stability Bounds for Stationary φ-mixing and β-mixing Processes


Mehryar Mohri (mohri@cims.nyu.edu)
Courant Institute of Mathematical Sciences
and Google Research
251 Mercer Street
New York, NY 10012

Afshin Rostamizadeh (rostami@cs.nyu.edu)
Department of Computer Science
Courant Institute of Mathematical Sciences
251 Mercer Street
New York, NY 10012
Abstract

Most generalization bounds in learning theory are based on some measure of the complexity of the hypothesis class used, independently of any algorithm. In contrast, the notion of algorithmic stability can be used to derive tight generalization bounds that are tailored to specific learning algorithms by exploiting their particular properties. However, as in much of learning theory, existing stability analyses and bounds apply only in the scenario where the samples are independently and identically distributed. In many machine learning applications, however, this assumption does not hold. The observations received by the learning algorithm often have some inherent temporal dependence.

This paper studies the scenario where the observations are drawn from a stationary φ-mixing or β-mixing sequence, a widely adopted assumption in the study of non-i.i.d. processes that implies a dependence between observations weakening over time. We prove novel and distinct stability-based generalization bounds for stationary φ-mixing and β-mixing sequences. These bounds strictly generalize the bounds given in the i.i.d. case and apply to all stable learning algorithms, thereby extending the use of stability bounds to non-i.i.d. scenarios.

We also illustrate the application of our φ-mixing generalization bounds to general classes of learning algorithms, including Support Vector Regression, Kernel Ridge Regression, and Support Vector Machines, as well as many other kernel regularization-based and relative entropy-based regularization algorithms. These novel bounds can thus be viewed as the first theoretical basis for the use of these algorithms in non-i.i.d. scenarios.


Editor: TBD

Keywords: Mixing Distributions, Algorithmic Stability, Generalization Bounds, Machine Learning Theory

1 Introduction

Most generalization bounds in learning theory are based on some measure of the complexity of the hypothesis class used, such as the VC-dimension, covering numbers, or Rademacher complexity. These measures characterize a class of hypotheses, independently of any algorithm. In contrast, the notion of algorithmic stability can be used to derive bounds that are tailored to specific learning algorithms and exploit their particular properties. A learning algorithm is stable if the hypothesis it outputs varies in a limited way in response to small changes made to the training set. Algorithmic stability has been used effectively in the past to derive tight generalization bounds (Bousquet and Elisseeff, 2001, 2002).

But, as in much of learning theory, existing stability analyses and bounds apply only in the scenario where the samples are independently and identically distributed (i.i.d.). In many machine learning applications, however, this assumption does not hold; in fact, the i.i.d. assumption is often adopted without being tested or derived from any analysis of the data. The observations received by the learning algorithm often have some inherent temporal dependence. This is clear in system diagnosis or time series prediction problems. Prices of different stocks on the same day, or of the same stock on different days, may clearly be dependent, and a less apparent time dependence may affect data sampled in many other tasks as well.

This paper studies the scenario where the observations are drawn from a stationary φ-mixing or β-mixing sequence, a widely adopted assumption in the study of non-i.i.d. processes that implies a dependence between observations weakening over time (Yu, 1994; Meir, 2000; Vidyasagar, 2003; Lozano et al., 2006). We prove novel and distinct stability-based generalization bounds for stationary φ-mixing and β-mixing sequences. These bounds strictly generalize the bounds given in the i.i.d. case and apply to all stable learning algorithms, thereby extending the usefulness of stability bounds to non-i.i.d. scenarios. Our proofs are based on the independent block technique described by Yu (1994) and attributed to Bernstein (1927), which is commonly used in such contexts. However, our analysis differs from previous uses of this technique in that the blocks of points considered are not of equal size.

For our analysis of stationary φ-mixing sequences, we make use of a generalized version of McDiarmid's inequality (Kontorovich and Ramanan, 2006) that holds for φ-mixing sequences. This leads to stability-based generalization bounds with the standard exponential form. Our generalization bounds for stationary β-mixing sequences cover a more general non-i.i.d. scenario and use the standard McDiarmid's inequality; however, unlike the φ-mixing case, the β-mixing bound presented here is not a purely exponential bound and contains an additive term depending on the mixing coefficient.

We also illustrate the application of our φ-mixing generalization bounds to general classes of learning algorithms, including Support Vector Regression (SVR) (Vapnik, 1998), Kernel Ridge Regression (Saunders et al., 1998), and Support Vector Machines (SVMs) (Cortes and Vapnik, 1995). Algorithms such as SVR (Vapnik, 1998; Schölkopf and Smola, 2002) have been used in the context of time series prediction, in which the i.i.d. assumption does not hold, some with good experimental results (Müller et al., 1997; Mattera and Haykin, 1999). To our knowledge, the use of these algorithms in non-i.i.d. scenarios has not been previously supported by any theoretical analysis. The stability bounds we give for SVR, SVMs, and many other kernel regularization-based and relative entropy-based regularization algorithms can thus be viewed as the first theoretical basis for their use in such scenarios.

The paper is organized as follows. In Section 2, we introduce the necessary definitions for the non-i.i.d. problems that we are considering and discuss the learning scenarios in that context. Section 3 gives our main generalization bounds for stationary φ-mixing sequences based on stability, and illustrates their application to general kernel regularization-based algorithms, including SVR, Kernel Ridge Regression (KRR), and SVMs, as well as to relative entropy-based regularization algorithms. Finally, Section 4 presents the first known stability bounds for the more general stationary β-mixing scenario.

2 Preliminaries

We first introduce some standard definitions for dependent observations in mixing theory (Doukhan, 1994) and then briefly discuss the learning scenarios in the non-i.i.d. case.

2.1 Non-i.i.d. Definitions

Definition 1

A sequence of random variables Z = {Z_t}_{t=-∞}^{+∞} is said to be stationary if for any t and any non-negative integers m and k, the random vectors (Z_t, ..., Z_{t+m}) and (Z_{t+k}, ..., Z_{t+m+k}) have the same distribution.

Thus, the index t, or time, does not affect the distribution of a variable Z_t in a stationary sequence. This does not imply independence, however. In particular, for i < j, Pr[Z_j | Z_i] may not equal Pr[Z_j]. The following is a standard definition giving a measure of the dependence of the random variables within a stationary sequence. There are several equivalent definitions of these quantities; we adopt here that of Yu (1994).

Definition 2

Let Z = {Z_t}_{t=-∞}^{+∞} be a stationary sequence of random variables. For any i, j ∈ ℤ ∪ {-∞, +∞}, let σ_i^j denote the σ-algebra generated by the random variables Z_k, i ≤ k ≤ j. Then, for any positive integer k, the β-mixing and φ-mixing coefficients of the stochastic process Z are defined as

$$\beta(k) = \sup_{n} \operatorname{E}_{B \in \sigma_{-\infty}^{n}} \Big[ \sup_{A \in \sigma_{n+k}^{+\infty}} \big| \Pr[A \mid B] - \Pr[A] \big| \Big], \qquad \varphi(k) = \sup_{n,\, A \in \sigma_{n+k}^{+\infty},\, B \in \sigma_{-\infty}^{n}} \big| \Pr[A \mid B] - \Pr[A] \big|. \tag{1}$$

Z is said to be β-mixing (φ-mixing) if β(k) → 0 (resp. φ(k) → 0) as k → ∞. It is said to be algebraically β-mixing (algebraically φ-mixing) if there exist real numbers β₀ > 0 (resp. φ₀ > 0) and r > 0 such that β(k) ≤ β₀/k^r (resp. φ(k) ≤ φ₀/k^r) for all k, and exponentially mixing if there exist real numbers β₀ (resp. φ₀) and β₁ (resp. φ₁) such that β(k) ≤ β₀ exp(−β₁ k) (resp. φ(k) ≤ φ₀ exp(−φ₁ k)) for all k.

Both β(k) and φ(k) measure the dependence of an event on those that occurred more than k units of time in the past. β-mixing is a weaker assumption than φ-mixing and thus covers a more general non-i.i.d. scenario.

This paper gives stability-based generalization bounds both in the φ-mixing and the β-mixing case. The β-mixing bounds cover a more general case, of course; however, the φ-mixing bounds are simpler and admit the standard exponential form. The φ-mixing bounds are based on a concentration inequality that applies to φ-mixing processes only. Except for the use of this concentration bound, all of the intermediate proofs and results needed to derive a φ-mixing bound in Section 3 are given in the more general case of β-mixing sequences.

It has been argued by Vidyasagar (2003) that β-mixing is “just the right” assumption for the analysis of weakly-dependent sample points in machine learning, in particular because several PAC-learning results then carry over to the non-i.i.d. case. Our β-mixing generalization bounds further contribute to the analysis of this scenario.¹

¹ Some results have also been obtained in the more general context of α-mixing, but they seem to require the stronger condition of exponential mixing (Modha and Masry, 1998).

We describe in several instances the application of our bounds in the case of algebraic mixing. Algebraic mixing is a standard assumption for mixing coefficients that has been adopted in previous studies of learning in the presence of dependent observations (Yu, 1994; Meir, 2000; Vidyasagar, 2003; Lozano et al., 2006).

Let us also point out that mixing assumptions can be checked in some cases such as with Gaussian or Markov processes (Meir, 2000) and that mixing parameters can also be estimated in such cases.
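As a concrete illustration of the preceding remark, the following minimal sketch simulates a stationary Gaussian first-order autoregressive (AR(1)) process, a standard example of a stationary sequence that is dependent yet mixing, with coefficients decaying exponentially fast when |a| < 1. The process and all parameter values are illustrative only and are not part of the analysis.

```python
import numpy as np

def simulate_ar1(m, a=0.8, sigma=1.0, seed=0, burn_in=1000):
    """Simulate m points of a stationary Gaussian AR(1) process:
    Z_t = a * Z_{t-1} + eps_t, with eps_t ~ N(0, sigma^2).

    For |a| < 1 the process is stationary and beta-mixing with
    exponentially decaying coefficients; consecutive points remain
    correlated, so the sample is not i.i.d.
    """
    rng = np.random.default_rng(seed)
    z = 0.0
    out = np.empty(m)
    # Burn-in so the chain is (approximately) started from its
    # stationary distribution N(0, sigma^2 / (1 - a^2)).
    for _ in range(burn_in):
        z = a * z + rng.normal(0.0, sigma)
    for t in range(m):
        z = a * z + rng.normal(0.0, sigma)
        out[t] = z
    return out

sample = simulate_ar1(5000)
# The lag-1 autocorrelation is close to a = 0.8, confirming dependence.
print(np.corrcoef(sample[:-1], sample[1:])[0, 1])
```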

Most previous studies use a technique originally introduced by Bernstein (1927) based on independent blocks of equal size (Yu, 1994; Meir, 2000; Lozano et al., 2006). This technique is particularly relevant when dealing with stationary β-mixing sequences. We will need a related but somewhat different technique since the blocks we consider may not have the same size. The following lemma is a special case of Corollary 2.7 from Yu (1994).

Lemma 3 (Yu, 1994, Corollary 2.7)

Let μ ≥ 1 and suppose that h is a measurable function, with absolute value bounded by M, on a product probability space $\big(\prod_{j=1}^{\mu} \Omega_j, \prod_{j=1}^{\mu} \sigma_{s_j}^{t_j}\big)$, where s_j ≤ t_j < s_{j+1} for all j. Let Q be a probability measure on the product space with marginal measures Q_j on (Ω_j, σ_{s_j}^{t_j}), and let Q^{j} be the marginal measure of Q on $\big(\prod_{l=j}^{\mu} \Omega_l, \prod_{l=j}^{\mu} \sigma_{s_l}^{t_l}\big)$, j = 2, ..., μ. Let β(Q) = sup_{1 ≤ j ≤ μ−1} β(k_j), where k_j = s_{j+1} − t_j, and let P = ∏_{j=1}^{μ} Q_j. Then,

$$\big| \operatorname{E}_{Q}[h] - \operatorname{E}_{P}[h] \big| \le (\mu - 1)\, M\, \beta(Q). \tag{2}$$

The lemma gives a measure of the difference between the distribution of μ blocks where the blocks are independent in one case and dependent in the other case. The distribution within each block is assumed to be the same in both cases. For a monotonically decreasing function β, we have β(Q) ≤ β(k), where k is the smallest gap between blocks.
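For instance, assuming the algebraic decay β(k) = β₀ k^{-r} of Definition 2 and μ = 4 blocks whose pairwise gaps k₁, k₂, k₃ are all at least k, the correction incurred by treating the blocks as independent in Lemma 3 is at most

$$(\mu - 1)\, M\, \beta(k) \;=\; 3\, M\, \beta_0\, k^{-r},$$

since β(Q) = sup_{1 ≤ j ≤ 3} β(k_j) ≤ β(k) for a monotonically decreasing β.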

2.2 Learning Scenarios

We consider the familiar supervised learning setting where the learning algorithm receives a sample of m labeled points S = (z_1, ..., z_m) = ((x_1, y_1), ..., (x_m, y_m)) ∈ (X × Y)^m, where X is the input space and Y the set of labels (Y ⊆ ℝ in the regression case), both assumed to be measurable.

For a fixed learning algorithm, we denote by h_S the hypothesis it returns when trained on the sample S. The error of a hypothesis on a pair z = (x, y) ∈ X × Y is measured in terms of a cost function c: Y × Y → ℝ₊. Thus, c(h(x), y) measures the error of a hypothesis h on a pair (x, y), e.g., c(h(x), y) = (h(x) − y)² in the standard regression cases. We will use the shorthand c(h, z) := c(h(x), y) for a hypothesis h and z = (x, y) ∈ X × Y and will assume that c is upper bounded by a constant M > 0. We denote by R̂(h_S) the empirical error of a hypothesis h_S for a training sample S = (z_1, ..., z_m):

$$\widehat{R}(h_S) = \frac{1}{m} \sum_{i=1}^{m} c(h_S, z_i). \tag{3}$$
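As a simple illustration of this definition, the following sketch computes the empirical error of a generic hypothesis under a user-supplied cost function; the function names are purely illustrative.

```python
def empirical_error(cost, hypothesis, sample):
    """Average cost of `hypothesis` over the training sample.

    `sample` is a sequence of (x, y) pairs and `cost(h, x, y)` returns
    the loss of hypothesis h on the labeled point (x, y).
    """
    return sum(cost(hypothesis, x, y) for x, y in sample) / len(sample)

# Example with the squared loss and a trivial constant predictor.
squared_loss = lambda h, x, y: (h(x) - y) ** 2
print(empirical_error(squared_loss, lambda x: 0.0, [(1.0, 2.0), (2.0, 1.0)]))
```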

In the standard machine learning scenario, the sample pairs z_1, ..., z_m are assumed to be i.i.d., a restrictive assumption that does not always hold in practice. We will consider here the more general case of dependent samples drawn from a stationary mixing sequence Z over X × Y. As in the i.i.d. case, the objective of the learning algorithm is to select a hypothesis with small error over future samples. But, here, we must distinguish two versions of this problem.

In the most general version, future samples depend on the training sample S and thus the generalization error or true error of the hypothesis h_S trained on S must be measured by its expected error conditioned on the sample S:

$$R(h_S) = \operatorname{E}_{z}\big[c(h_S, z) \mid S\big]. \tag{4}$$

This is the most realistic setting in this context, which matches time series prediction problems. A somewhat less realistic version is one where the samples are dependent, but the test points are assumed to be independent of the training sample S. The generalization error of the hypothesis h_S trained on S is then:

$$\widetilde{R}(h_S) = \operatorname{E}_{z}\big[c(h_S, z)\big], \tag{5}$$

where the test point z is assumed independent of S.

This setting seems less natural since, if samples are dependent, future test points must also depend on the training points, even if that dependence is relatively weak due to the time interval after which test points are drawn. Nevertheless, it is this somewhat less realistic setting that has been studied by all previous machine learning studies that we are aware of (Yu, 1994; Meir, 2000; Vidyasagar, 2003; Lozano et al., 2006), even when examining specifically a time series prediction problem (Meir, 2000). Thus, the bounds derived in these studies cannot be directly applied to the more general setting.

We will consider instead the most general setting, with the definition of the generalization error based on Eq. 4. Clearly, our analysis also applies to the less general setting just discussed.

Let us briefly discuss the more general scenario of non-stationary mixing sequences, that is, one where the distribution may change over time. In that general case, the generalization error of a hypothesis h_S, defined straightforwardly by

(6)

would depend on the time t, and it may be the case that the error at time t differs from that at time t′ for t ≠ t′, making the definition of the generalization error a more subtle issue. To remove the dependence on time, one could define a weaker notion of the generalization error based on an expected loss over all times:

(7)

It is not clear, however, whether this quantity can be easily computed or is useful. A stronger requirement would be to minimize the generalization error for any particular target time. Studies of this type have been conducted for smoothly changing distributions, such as in Zhou et al. (2008); however, to the best of our knowledge, the scenario of sequences that are both non-identically distributed and non-independent has not yet been studied.

3 φ-Mixing Generalization Bounds and Applications

This section gives generalization bounds for β̂-stable algorithms over a φ-mixing stationary distribution.² The first two subsections present our main proofs, which hold for β-mixing stationary distributions. In the third subsection, we briefly discuss concentration inequalities that apply to φ-mixing processes only. Then, in the final subsection, we present our main results.

² The standard variable used for the stability coefficient is β. To avoid confusion with the β-mixing coefficient, we will use β̂ instead.

The condition of β̂-stability is an algorithm-dependent property first introduced by Devroye and Wagner (1979) and Kearns and Ron (1997). It has later been used successfully by Bousquet and Elisseeff (2001, 2002) to show algorithm-specific stability bounds for i.i.d. samples. Roughly speaking, a learning algorithm is said to be stable if small changes to the training set do not produce large deviations in its output. The following gives the precise technical definition.

Definition 4

A learning algorithm is said to be (uniformly) β̂-stable if the hypotheses it returns for any two training samples S and S′ that differ by a single point satisfy

$$\forall z \in X \times Y, \quad \big| c(h_S, z) - c(h_{S'}, z) \big| \le \hat{\beta}. \tag{8}$$
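Definition 4 also suggests a simple numerical probe (not a proof) of stability: train on a sample and on copies that differ in a single point, and record the largest change in cost over a set of probe points. The sketch below is illustrative; since the definition involves a supremum over all samples and points, such a probe can only under-estimate the true coefficient.

```python
def estimate_stability(train, cost, sample, replacement, probe_points):
    """Lower estimate of the uniform stability coefficient.

    `train(sample)` returns a hypothesis; `cost(h, z)` is the loss of h
    on a point z. The true coefficient is a supremum over all samples,
    replacement points, and test points, so this only probes from below.
    """
    h = train(sample)
    worst = 0.0
    for i in range(len(sample)):
        perturbed = list(sample)
        perturbed[i] = replacement          # swap out the i-th training point
        h_i = train(perturbed)
        worst = max(worst,
                    max(abs(cost(h, z) - cost(h_i, z)) for z in probe_points))
    return worst
```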

The use of stability in conjunction with McDiarmid's inequality will allow us to produce generalization bounds. McDiarmid's inequality is an exponential concentration bound of the form

$$\Pr\big[ |\Phi(S) - \operatorname{E}[\Phi(S)]| \ge \epsilon \big] \le 2 \exp\!\Big( \frac{-2\epsilon^2}{m c^2} \Big),$$

where the probability is over a sample S of size m and c is the Lipschitz parameter of Φ (which is itself a function of m). Unfortunately, this inequality cannot be easily applied when the sample points are not distributed in an i.i.d. fashion. We will use the results of Kontorovich and Ramanan (2006) to extend McDiarmid's inequality to general mixing distributions (Theorem 9).

To obtain a stability-based generalization bound, we will apply this theorem to Φ(S) = R(h_S) − R̂(h_S). To do so, we need to show, as with the standard McDiarmid's inequality, that Φ is a Lipschitz function and, to make it useful, bound E[Φ(S)]. The next two subsections describe how we achieve both of these in this non-i.i.d. scenario.

Let us first take a brief look at the problem faced when attempting to give stability bounds for dependent sequences and give some idea of our solution for that problem. The stability proofs given by Bousquet and Elisseeff (2001) assume the i.i.d. property; thus, replacing an element in a sequence with another does not affect the expected value of a random variable defined over that sequence. In other words, the following equality holds,

$$\operatorname{E}_{S}\big[ V(z_1, \ldots, z_i, \ldots, z_m) \big] = \operatorname{E}_{S, z_i'}\big[ V(z_1, \ldots, z_i', \ldots, z_m) \big], \tag{9}$$

for a random variable V that is a function of the sequence of random variables z_1, ..., z_m, with z_i′ distributed as z_i. However, if the points in that sequence are dependent, this equality clearly may no longer hold.

The main technique to cope with this problem is based on the so-called “independent block sequence” originally introduced by Bernstein (1927). This consists of eliminating from the original dependent sequence several blocks of contiguous points, leaving us with some remaining blocks of points. Instead of these dependent blocks, we then consider independent blocks of points, each with the same size and the same distribution (within each block) as the dependent ones. By Lemma 3, for a -mixing distribution, the expected value of a random variable defined over the dependent blocks is close to the one based on these independent blocks. Working with these independent blocks brings us back to a situation similar to the i.i.d. case, with i.i.d. blocks replacing i.i.d. points.

Our use of this method somewhat differs from previous ones (see Yu, 1994; Meir, 2000), where many blocks of equal size are considered. We will be dealing with at most four blocks, typically of unequal sizes. More specifically, note that for Equation 9 to hold, we only need the variable z_i to be independent of the other points in the sequence. To achieve this, roughly speaking, we will be “discarding” some of the points in the sequence surrounding z_i. This results in a sequence of three blocks of contiguous points. If our algorithm is stable and we do not discard too many points, the hypothesis returned should not be greatly affected by this operation. In the next step, we apply the independent block lemma, which then allows us to treat each of these blocks as independent, modulo the addition of a mixing term. In particular, z_i becomes independent of all other points. Clearly, the number of points discarded is subject to a trade-off: removing too many points could excessively modify the hypothesis returned; removing too few would maintain the dependency between z_i and the remaining points, thereby producing a larger penalty when applying Lemma 3. This trade-off is made explicit in the following section, where an optimal solution is sought.
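The construction just described can be made concrete as follows; this illustrative, 0-indexed sketch returns the index sets of the three blocks that remain after discarding b points on each side of the replaced point.

```python
def three_blocks(m, i, b):
    """Indices of the blocks kept after discarding b points on each
    side of position i in a sequence of length m (0-indexed).

    Returns (left_block, middle_block, right_block); the middle block
    is the single point at position i. Blocks at the boundary may be
    empty if i is within b points of either end.
    """
    left = list(range(0, max(i - b, 0)))
    middle = [i]
    right = list(range(min(i + b + 1, m), m))
    return left, middle, right

# Example: m = 20 points, replaced point at i = 9, discard b = 3 on each side.
print(three_blocks(20, 9, 3))
# ([0, 1, 2, 3, 4, 5], [9], [13, 14, 15, 16, 17, 18, 19])
```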

3.1 Lipschitz Bound

As discussed in Section 2.2, in the most general scenario, test points depend on the training sample. We first present a lemma that relates the expected value of the generalization error in that scenario to the same expectation in the scenario where the test point is independent of the training sample. We denote by E[·] the expectation in the dependent case and by Ẽ[·] the expectation where the test point is assumed independent of the training sample, with S_b denoting a sequence similar to S but with the b last points removed. Figure 1(a) illustrates that sequence. The block kept is assumed to have exactly the same distribution as the corresponding block of the same size in S.

Lemma 5

Assume that the learning algorithm is β̂-stable and that the cost function c is bounded by M. Then, for any sample S of size m drawn from a β-mixing stationary distribution and for any b ∈ {0, ..., m}, the following holds:

(10)

Proof  The β̂-stability of the learning algorithm implies that

(11)

The application of Lemma 3 yields

(12)

The other side of the inequality of the lemma can be shown following the same steps.  
We can now prove a Lipschitz bound for the function Φ(S) = R(h_S) − R̂(h_S).

Lemma 6

Let S and S^i be two sequences drawn from a β-mixing stationary process that differ only in the i-th point, and let h_S and h_{S^i} be the hypotheses returned by a β̂-stable algorithm when trained on each of these samples. Then, for any b, the following inequality holds:

(13)

Proof  To prove this inequality, we first bound the difference of the empirical errors as in Bousquet and Elisseeff (2002), then the difference of the true errors. Bounding the difference of costs by β̂ on the points where the two samples agree and by M on the point where they disagree yields

Since both true errors are defined with respect to a (different) dependent point, we apply Lemma 5 to both generalization error terms and use β̂-stability. This then results in

The lemma's statement is obtained by combining the two inequalities above.

Figure 1: Illustration of the sequences derived from S that are considered in the proofs (panels (a)-(d)).

3.2 Bound on Expectation

As mentioned earlier, to obtain an explicit bound after application of a generalized McDiarmid's inequality, we also need to bound E[Φ(S)]. This is done by analyzing independent blocks using Lemma 3.

Lemma 7

Let h_S be the hypothesis returned by a β̂-stable algorithm trained on a sample S drawn from a stationary β-mixing distribution. Then, for all b ∈ {0, ..., m}, the following inequality holds:

(16)

Proof  Let S_b be defined as in the proof of Lemma 5. To deal with independent block sequences defined with respect to the same hypothesis, we will consider the sequence illustrated by Figure 1(c). This can result in as many as four blocks. As before, we will consider a sequence with a similar set of blocks, each with the same distribution as the corresponding block in the original sequence, but such that the blocks are independent.

Since three blocks of at most b points are removed from each hypothesis, by the β̂-stability of the learning algorithm, the following holds:

(17)
(18)

The application of Lemma 3 to the difference of two cost functions, each also bounded by M, as in the right-hand side, leads to

(19)

Now, since the two points considered are independent and the distribution is stationary, they have the same distribution and one can be replaced with the other in the empirical cost. Thus, we can write

where the last sequence is derived from the previous one by replacing one point with the other. The last inequality holds by the β̂-stability of the learning algorithm. The other side of the inequality in the statement of the lemma can be shown following the same steps.

3.3 φ-mixing Generalization Bounds

We are now prepared to make use of a concentration inequality to provide a generalization bound in the φ-mixing scenario. Several concentration inequalities have been shown in the φ-mixing case, e.g., Marton (1998); Samson (2000); Chazottes et al. (2007); Kontorovich and Ramanan (2006). We will use that of Kontorovich and Ramanan (2006), which is very similar to that of Chazottes et al. (2007), modulo the fact that the latter requires a finite sample space.

These concentration inequalities are generalizations of the following inequality of McDiarmid (1989), commonly used in the i.i.d. setting.

Theorem 8 (McDiarmid (1989), 6.10)

Let X_1, ..., X_m be a sequence of random variables, each taking values in a set 𝒳, and let Φ: 𝒳^m → ℝ be a measurable function that satisfies, for all i ∈ {1, ..., m} and all x_1, ..., x_i, x_i′ ∈ 𝒳,

$$\big| \operatorname{E}[\Phi \mid X_1 = x_1, \ldots, X_i = x_i] - \operatorname{E}[\Phi \mid X_1 = x_1, \ldots, X_{i-1} = x_{i-1}, X_i = x_i'] \big| \le c_i,$$

for constants c_1, ..., c_m > 0. Then, for all ε > 0,

$$\Pr\big[ \big| \Phi(X_1, \ldots, X_m) - \operatorname{E}[\Phi(X_1, \ldots, X_m)] \big| \ge \epsilon \big] \le 2 \exp\!\Big( \frac{-2\epsilon^2}{\sum_{i=1}^{m} c_i^2} \Big).$$

In the i.i.d. scenario, the requirement to produce the constants c_i simply translates into a Lipschitz condition on the function Φ. Theorem 5.1 of Kontorovich and Ramanan (2006) bounds precisely this quantity as follows:³

(20)

³ We should note that the original bound is expressed in terms of η-mixing coefficients. To simplify the presentation, we adapt it to the case of stationary φ-mixing sequences by using a straightforward inequality relating the two types of coefficients for a stationary process. Furthermore, the bound presented in Kontorovich and Ramanan (2006) holds when the sample space is countable; it is extended to the continuous case in Kontorovich (2007).

Given the bound in Equation 20, the concentration bound of McDiarmid can be restated as follows, making it applicable to φ-mixing distributions.

Theorem 9 (Kontorovich and Ramanan (2006))

Let Φ: 𝒳^m → ℝ be a measurable function. If Φ is c-Lipschitz with respect to the Hamming metric for some c > 0, then the following holds for all ε > 0:

(21)

where the Lipschitz constant c is scaled by the mixing-dependent factor bounded in Eq. 20.
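To give a sense of how the mixing coefficients enter such a bound, the sketch below evaluates a tail of the assumed form 2·exp(−2ε²/(m·c²·(1 + 2Σ_k φ(k))²)); the factor 1 + 2Σ_k φ(k) is one common reading of the quantity bounded in Eq. 20, and the exact constants should be taken from the original statements rather than from this illustration.

```python
import math

def mixing_factor(m, phi):
    """Factor 1 + 2 * sum_{k=1}^{m} phi(k) scaling the Lipschitz constant
    (an assumed reading of the bound in Eq. 20)."""
    return 1.0 + 2.0 * sum(phi(k) for k in range(1, m + 1))

def tail_bound(epsilon, m, c, phi):
    """Assumed tail of the form 2 * exp(-2 eps^2 / (m c^2 factor^2))."""
    factor = mixing_factor(m, phi)
    return 2.0 * math.exp(-2.0 * epsilon ** 2 / (m * c ** 2 * factor ** 2))

# Algebraically phi-mixing example: phi(k) = phi0 * k^(-r).
phi0, r = 0.5, 2.0
phi = lambda k: phi0 * k ** (-r)
print(mixing_factor(1000, phi))               # close to 1 + 2 * phi0 * zeta(2)
# c = 1/m mimics a Lipschitz constant of order 1/m, as for a stability-based Phi.
print(tail_bound(0.1, 1000, 1.0 / 1000, phi))
```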

It should be pointed out that the statement of the theorem in this paper is improved by a constant factor in the exponent over the one stated in Kontorovich and Ramanan (2006), Theorem 1.1. This can be achieved straightforwardly by following the same steps as in the proof of Kontorovich and Ramanan (2006) and making use of the general form of McDiarmid's inequality (Theorem 8) instead of Azuma's inequality.

This section presents several theorems that constitute the main results of this paper. The following theorem is constructed from the bounds shown in the previous three subsections.

Theorem 10 (General Non-i.i.d. Stability Bound)

Let h_S denote the hypothesis returned by a β̂-stable algorithm trained on a sample S drawn from a φ-mixing stationary distribution, and let c be a measurable non-negative cost function upper bounded by M > 0. Then, for any b ∈ {0, ..., m} and any ε > 0, the following generalization bound holds:

Proof  The theorem follows directly from the application of Lemma 6 and Lemma 7 to Theorem 9.
The theorem gives a general stability bound for φ-mixing stationary sequences. If we further assume that the sequence is algebraically φ-mixing, that is, φ(k) = φ₀ k^{−r} for all k > 0 for some constants φ₀, r > 0, then we can solve for the value of b to optimize the bound.

Theorem 11 (Non-i.i.d. Stability Bound for Algebraically Mixing Sequences)

Let h_S denote the hypothesis returned by a β̂-stable algorithm trained on a sample S drawn from an algebraically φ-mixing stationary distribution with r > 1, and let c be a measurable non-negative cost function upper bounded by M > 0. Then, for any ε > 0, the following generalization bound holds

where .

Proof  For an algebraically mixing sequence, the value of b minimizing the bound of Theorem 10 satisfies the corresponding first-order condition, which gives the optimal block size and the resulting mixing term. The following term can be bounded as

Using the assumption r > 1, we can upper bound this term and find that

Plugging in this value and the minimizing value of b in the bound of Theorem 10 yields the statement of the theorem.
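The trade-off on the block size b that this proof exploits can also be seen numerically. The sketch below minimizes a stand-in penalty of the form b·β̂ + φ₀·b^(−r), which mimics the competing terms discussed above without reproducing the exact expression of Theorem 10.

```python
def best_block_size(m, stability, phi0, r):
    """Numerically trade off the two competing terms: discarding b points
    costs about b * stability, while the residual dependence contributes
    about phi0 * b**(-r). This is a stand-in penalty, used only to
    illustrate why an intermediate block size is optimal."""
    def penalty(b):
        return b * stability + phi0 * b ** (-r)
    return min(range(1, m + 1), key=penalty)

# Example: m = 10000, stability of order 1/m, algebraic mixing with r = 2.
m = 10_000
print(best_block_size(m, stability=1.0 / m, phi0=1.0, r=2.0))
# An intermediate block size minimizes the penalty, here roughly
# (r * phi0 * m) ** (1 / (r + 1)), i.e., about 27.
```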
In the case of a zero mixing coefficient (φ = 0 and b = 0), the bounds of Theorem 10 coincide with the i.i.d. stability bound of Bousquet and Elisseeff (2002). In order for the right-hand side of these bounds to converge, we must have β̂ = o(1/√m), along with a corresponding condition on the mixing term. For several general classes of algorithms, β̂ = O(1/m) (Bousquet and Elisseeff, 2002). In the case of algebraically mixing sequences with r > 1, as assumed in Theorem 11, β̂ = O(1/m) suffices for convergence. The next section illustrates the application of Theorem 11 to several general classes of algorithms.

We now present the application of our stability bounds to several algorithms in the case of an algebraically mixing sequence. We make use of the stability analysis of Bousquet and Elisseeff (2002), which allows us to apply our bounds in the case of kernel regularized algorithms, k-local rules, and relative entropy regularization.

3.4 Applications

3.4.1 Kernel Regularized Algorithms

Here we apply our bounds to a family of algorithms minimizing a regularized objective function based on the norm ‖·‖_K in a reproducing kernel Hilbert space H, where K is a positive definite symmetric kernel:

$$h_S = \operatorname*{argmin}_{h \in H}\; \frac{1}{m} \sum_{i=1}^{m} c(h, z_i) + \lambda \|h\|_K^2. \tag{22}$$

The application of our bound is possible, under some general conditions, since kernel regularized algorithms are stable with β̂ = O(1/m) (Bousquet and Elisseeff, 2002). Here we briefly reproduce the proof of this β̂-stability for the sake of completeness; first, we introduce some needed terminology.

We will assume that the cost function c is σ-admissible, that is, there exists σ ∈ ℝ₊ such that for any two hypotheses h, h′ ∈ H and for all z = (x, y) ∈ X × Y,

$$\big| c(h, z) - c(h', z) \big| \le \sigma\, \big| h(x) - h'(x) \big|. \tag{23}$$

This assumption holds for the quadratic cost and most other cost functions when the hypothesis set and the set of output labels are bounded by some B ∈ ℝ₊: ∀h ∈ H, ‖h‖_∞ ≤ B and ∀y ∈ Y, |y| ≤ B. We will also assume that c is differentiable. This assumption is in fact not necessary and all of our results hold without it, but it makes the presentation simpler.

We denote by B_F the Bregman divergence associated to a convex and differentiable function F: B_F(f ‖ g) = F(f) − F(g) − ⟨f − g, ∇F(g)⟩. In what follows, it will be helpful to define F_S as the objective function of a general regularization-based algorithm,

$$F_S(h) = \widehat{R}_S(h) + \lambda N(h), \tag{24}$$

where R̂_S(h) is the empirical error of h as measured on the sample S, N: H → ℝ₊ is a regularization function, and λ > 0 is the usual trade-off parameter. Finally, we shall use the shorthand Δh = h′ − h.

Lemma 12 (Bousquet and Elisseeff (2002))

A kernel regularized learning algorithm (22), with a kernel K such that K(x, x) ≤ κ² < ∞ for all x ∈ X and a σ-admissible cost function, is β̂-stable with coefficient

$$\hat{\beta} \le \frac{\sigma^2 \kappa^2}{2 m \lambda}.$$

Proof  Let h and h′ be the minimizers of F_S and F_{S′} respectively, where S and S′ differ in the first coordinate (the choice of coordinate is without loss of generality); then,

(25)

To see this, we notice that since , and since a Bregman divergence is non-negative,

By the definition of h and h′ as the minimizers of F_S and F_{S′},

Finally, by the σ-admissibility of the cost function and the definition of S and S′,

which establishes (25).

Now, if we consider , we have , thus and by (25) and the reproducing kernel property,

Thus we obtain a bound on ‖Δh‖_K. Using the σ-admissibility of c and the kernel reproducing property, we get

Therefore,

which completes the proof.  

Three specific instances of kernel regularization algorithms are SVR, for which the cost function is based on the ε-insensitive cost:

$$c(h, z) = |h(x) - y|_{\epsilon} = \begin{cases} 0 & \text{if } |h(x) - y| \le \epsilon, \\ |h(x) - y| - \epsilon & \text{otherwise;} \end{cases} \tag{26}$$

Kernel Ridge Regression (Saunders et al., 1998), for which

$$c(h, z) = (h(x) - y)^2, \tag{27}$$

and finally Support Vector Machines with the hinge loss,

$$c(h, z) = \begin{cases} 0 & \text{if } y\, h(x) \ge 1, \\ 1 - y\, h(x) & \text{otherwise.} \end{cases} \tag{28}$$
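For concreteness, the following minimal sketch implements the Kernel Ridge Regression instance of objective (22) in its usual closed form; the Gaussian kernel and all parameter values are illustrative choices and not part of the analysis.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(X, y, lam, gamma=1.0):
    """Minimize (1/m) sum_i (h(x_i) - y_i)^2 + lam * ||h||_K^2 over the RKHS.

    With h = sum_j alpha_j K(x_j, .), the minimizer solves
    (K + lam * m * I) alpha = y.
    """
    m = len(y)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda X_new: gaussian_kernel(X_new, X, gamma) @ alpha

# Tiny usage example on one-dimensional data.
X = np.linspace(0, 3, 30).reshape(-1, 1)
y = np.sin(2 * X).ravel()
h = krr_fit(X, y, lam=0.1)
print(h(np.array([[1.5]])))
```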

We note that for kernel regularization algorithms, as pointed out in Bousquet and Elisseeff (2002, Lemma 23), a bound on the labels immediately implies a bound on the output of the hypothesis produced by equation (22). We formally state this lemma below.

Lemma 13

Let h_S be the solution of Equation (22), let c be a cost function, and let B be a real-valued function such that, for all hypotheses h and all points z = (x, y),

$$c(h, z) \le B(|h(x)|).$$

Then, the output of h_S is bounded as follows,

$$\forall x \in X, \quad |h_S(x)| \le \kappa \sqrt{\frac{B(0)}{\lambda}},$$

where λ is the regularization parameter and K(x, x) ≤ κ² for all x ∈ X.

Proof  Let h_S be the minimizer of (22) and let 0 be the zero hypothesis. Then, by the definition of h_S and of the objective (22),

$$\lambda \|h_S\|_K^2 \le \frac{1}{m} \sum_{i=1}^{m} c(h_S, z_i) + \lambda \|h_S\|_K^2 \le \frac{1}{m} \sum_{i=1}^{m} c(0, z_i) \le B(0).$$

Then, using the reproducing kernel property and the Cauchy-Schwarz inequality, we note that

$$\forall x, \quad |h_S(x)| = \langle h_S, K(x, \cdot) \rangle \le \|h_S\|_K \sqrt{K(x, x)} \le \kappa\, \|h_S\|_K.$$

Combining the two inequalities produces the result.
We note that a related bound is also stated in Bousquet and Elisseeff (2002); however, when it is later applied, it seems the authors use an incorrect upper bound function B, which we remedy in the following.

Corollary 14

Assume a bounded output, |y| ≤ B for all y ∈ Y and some B > 0, and assume that K(x, x) ≤ κ² for all x ∈ X and some κ > 0. Let h_S denote the hypothesis returned by the algorithm when trained on a sample S drawn from an algebraically φ-mixing stationary distribution. Then, for any δ > 0, with probability at least 1 − δ, the following generalization bounds hold for

  1. Support Vector Machines (SVM, with hinge-loss)

    where .

  2. Support Vector Regression (SVR):

    where .

  3. Kernel Ridge Regression (KRR):

    where .

Proof  For SVM, the hinge loss is 1-admissible, giving σ = 1, and the cost function is clearly bounded on the bounded domain considered.

Similarly, SVR has a loss function that is 1-admissible; thus, applying Lemma 12 gives the corresponding stability coefficient. Using Lemma 13, we can then bound the loss.

Finally, for KRR, the squared loss is σ-admissible on the bounded domain considered, and again applying Lemma 12 gives the corresponding stability coefficient. Lemma 13 can again be applied to bound the loss.

Plugging these values into the bound of Theorem 11 and setting the right-hand side to δ yields the statement of the corollary.

3.4.2 Relative Entropy Regularized Algorithms

In this section we apply Theorem 11 to algorithms that produce a hypothesis that is a convex combination of base hypotheses h_θ parameterized by θ ∈ Θ. Thus, we wish to learn a weighting function (a distribution) g over Θ that is a solution of the following optimization:

$$g_S = \operatorname*{argmin}_{g}\; \frac{1}{m} \sum_{i=1}^{m} c(g, z_i) + \lambda\, D(g \,\|\, g_0), \tag{29}$$

where the cost function c is defined in terms of a second internal cost function c′:

$$c(g, z) = \int_{\Theta} c'(h_{\theta}, z)\, g(\theta)\, d\theta,$$

and where D(g ‖ g₀) is the Kullback-Leibler divergence or relative entropy regularizer (with respect to some fixed distribution g₀):

$$D(g \,\|\, g_0) = \int_{\Theta} g(\theta) \log \frac{g(\theta)}{g_0(\theta)}\, d\theta.$$
It has been shown (Bousquet and Elisseeff, 2002, Theorem 24) that an algorithm satisfying Equation (29) with a bounded internal loss, c′ ≤ M, is β̂-stable with coefficient

The application of our bounds results in the following corollary.

Corollary 15

Let g_S be the hypothesis produced by the optimization in (29), with internal cost function bounded by M. Then, with probability at least 1 − δ,

where , , and .
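As an illustration of the optimization in (29), the following sketch evaluates the regularized objective for a finite family of base hypotheses; the discretization, the squared internal loss, and the reading of the mixture cost as a g-weighted average of the internal cost are illustrative assumptions.

```python
import numpy as np

def kl_objective(g, g0, base_preds, y, lam):
    """Regularized objective in the spirit of (29) for finitely many base hypotheses.

    g, g0      : mixture weights and fixed reference weights (each sums to 1)
    base_preds : array of shape (n_hypotheses, m) with predictions h_theta(x_i)
    y          : labels, shape (m,)
    lam        : trade-off parameter
    The internal cost is the squared loss, and the cost of g is its
    g-weighted average -- both illustrative choices.
    """
    internal = (base_preds - y[None, :]) ** 2      # c'(h_theta, z_i)
    empirical = np.mean(g @ internal)              # average over theta, then over the sample
    kl = np.sum(g * np.log(g / g0))                # relative entropy regularizer
    return empirical + lam * kl

rng = np.random.default_rng(0)
base_preds = rng.normal(size=(5, 20))
y = rng.normal(size=20)
g0 = np.full(5, 0.2)
print(kl_objective(g0, g0, base_preds, y, lam=0.1))   # KL term vanishes at g = g0
```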

3.5 Discussion

The results presented here are, to the best of our knowledge, the first stability-based generalization bounds for the class of algorithms just studied in a non-i.i.d. scenario. These bounds are non-trivial when the condition on the regularization parameter λ holds for all large values of m. This condition coincides with the i.i.d. condition, in the limit, as m tends to infinity. The next section gives stability-based generalization bounds that hold even in the scenario of β-mixing sequences.

4 β-Mixing Generalization Bounds

In this section, we prove a stability-based generalization bound that only requires the training sequence to be drawn from a stationary β-mixing distribution. The bound is thus more general and covers the φ-mixing case analyzed in the previous section. However, unlike the φ-mixing case, the β-mixing bound presented here is not a purely exponential bound. It contains an additive term, which depends on the mixing coefficient.

As in the previous section, Φ(S) is defined by