Generalization and Robustness of Batched Weighted Average Algorithm with V-geometrically Ergodic Markov Data

Abstract

We analyze the generalization and robustness of the batched weighted average (BWA) algorithm for V-geometrically ergodic Markov data. This algorithm is a good alternative to the empirical risk minimization (ERM) algorithm when the latter suffers from overfitting or when optimizing the empirical risk is hard. For generalization, we prove a PAC-style bound on the training sample size needed for the expected loss of the algorithm to converge to the optimal loss when training data form a V-geometrically ergodic Markov chain. For robustness, we show that if the values of the target variable in the training data contain bounded noise, then the generalization bound of the algorithm deviates by at most the range of the noise. Our results can be applied to the regression problem, the classification problem, and the case where there exists an unknown deterministic target hypothesis.

1 Introduction

The generalization ability of learning algorithms has been studied extensively in statistical learning theory [1]. One main assumption in traditional learning theory when studying this problem is that data, drawn from an unknown distribution, are independent and identically distributed (IID) [2]. Although this assumption is useful for proving theoretical results, it may not hold in applications such as speech recognition or market prediction where data are usually temporal in nature [3].

One attempt to relax this IID data assumption is to consider cases where training data form a Markov chain with certain mixing properties. A common algorithm that has been analyzed is the empirical risk minimization (ERM) algorithm, which tries to find the hypothesis minimizing the empirical loss on the training data. Generalization bounds of this well-known algorithm were proven for exponentially strongly mixing data [4], uniformly ergodic data [5], and V-geometrically ergodic data [6].

In this paper, we investigate another learning algorithm, the batched weighted average (BWA) algorithm, when training data form a V-geometrically ergodic Markov chain. This algorithm is a batch version of the online weighted average algorithm [7]. Given the training data and a set of real-valued hypotheses, the BWA algorithm learns a weight for each hypothesis based on the hypothesis's predictions on the training data. During testing, the algorithm predicts with the weighted average of all the hypotheses' predictions on the test example.

An advantage of the BWA algorithm over the ERM algorithm is that the former may suffer less from overfitting when the hypothesis space is large or complex [8]. The BWA algorithm is also a good alternative to the ERM algorithm when optimizing the empirical risk is hard.

We prove the generalization of the BWA algorithm by providing a PAC-style bound on the training sample size needed for the expected loss of the algorithm to converge to the optimal loss with high probability, assuming that training data are V-geometrically ergodic. The main idea of our proof is to bound the normalized weights of all the bad hypotheses, those whose expected loss is far from optimal. This idea comes from the observation that as more training data are seen, the normalized weights of the bad hypotheses are eventually dominated by those of the better hypotheses.

Using the same proof technique, we then prove the robustness of the BWA algorithm when training data form a V-geometrically ergodic Markov chain with noise. By robustness, we mean the ability of an algorithm to generalize when there is a small amount of noise in the training data. For the BWA algorithm, we show that if the training values of the target variable are allowed to contain bounded noise, then the generalization bound of the algorithm deviates by at most the range of the noise.

Our main results are proven mainly for the regression problem and the case where the pairs of observation and target variables’ values are V-geometrically ergodic. However, we also give two lemmas to show that the results can be easily applied to other common settings such as the classification problem and the case where there exists an unknown deterministic target hypothesis.

This paper chooses to analyze the BWA algorithm for data that are V-geometrically ergodic. Theoretically, V-geometrically ergodic Markov chains have many good properties that make them appealing for analyses. Firstly, they are “nice” general state space Markov chains as they mix geometrically fast [10]. Secondly, the fact that these chains can be defined on a general, possibly uncountable, state space makes their learning models more general than previous models which learn from finite or countable state space Markov chains [11]. Thirdly, the V-geometrically ergodic assumption is not too restrictive since it includes all uniformly ergodic chains as well as all ergodic chains on a finite state space [6]. Nevertheless, we emphasize that our proof idea can be applied to other types of mixing Markov chains if we have the uniform convergence rate of the empirical loss for these chains.
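To make the last point concrete, any ergodic chain on a finite state space is V-geometrically ergodic (with a constant function V), so such a chain already gives valid non-IID training data for our setting. The sketch below, with an arbitrary made-up transition matrix, simulates one such chain; its empirical state frequencies approach the stationary distribution geometrically fast:

```python
import numpy as np

def simulate_chain(P, x0, n, rng):
    """Simulate n steps of a finite-state Markov chain with transition matrix P."""
    states = [x0]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

# An arbitrary ergodic (irreducible, aperiodic) chain on 3 states; every
# such finite-state ergodic chain is V-geometrically ergodic with V constant.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
rng = np.random.default_rng(0)
chain = simulate_chain(P, x0=0, n=10_000, rng=rng)

# Empirical state frequencies converge to the stationary distribution pi
# (for this P, pi = (9/28, 12/28, 7/28)) geometrically fast in n.
freqs = np.bincount(chain, minlength=3) / len(chain)
```

Training pairs built along such a path are dependent, which is exactly the situation the analysis in this paper addresses.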

2 Related Work

The BWA algorithm considered in this paper is a batch version of the online weighted average algorithm [7]. The main differences are that the BWA algorithm uses an infinite real-valued hypothesis space and is trained on batch data. The original weighted average algorithm is a generalization of the weighted majority algorithm [13]. Both algorithms were analyzed in the online setting [7], and a variant of the weighted majority algorithm was analyzed for the classification problem with batched IID data [8]. However, to the best of our knowledge, there has been no rigorous treatment of the generalization and robustness of the BWA algorithm for non-IID data.

The proofs in our paper use a previous result on the uniform convergence rate of the empirical loss for V-geometrically ergodic Markov chains [6]. Convergence of the empirical loss is a fundamental problem in statistics and statistical learning theory, and it has been studied for other types of Markov chains such as -mixing [4], -mixing [16], -mixing [16], and uniformly ergodic [5] chains. These results can be used with our proof idea to prove generalization and robustness bounds of the BWA algorithm for those chains.

The robustness of learning algorithms in the presence of noise has been studied for Valiant’s PAC model with IID data [18]. Recently, Xu et al. [12] analyzed the generalization of learning algorithms based on their algorithmic robustness, the ability of an algorithm to achieve similar performance on similar training and testing data. Their analyses hold for both IID and uniformly ergodic Markov data. Another related concept is stability, the ability of an algorithm to return similar hypotheses when small changes are made to the training data [22]. Stability-based generalization bounds of learning algorithms were proven by Mohri et al. for φ-mixing and β-mixing data [22]. Our bounds, in contrast, are obtained without measuring the algorithmic robustness or stability of the BWA algorithm.

3 Preliminaries

We now introduce the V-geometrically ergodic Markov chains and the settings for our analyses. We will follow the definitions in [6]. We also review a result on the uniform convergence rate of the empirical loss for V-geometrically ergodic Markov data [6] which will be used in the subsequent sections.

3.1 V-geometrically Ergodic Markov Chain

Let be a measurable space, where is a compact subset of () and is a -algebra on . A Markov chain on is a sequence of random variables together with a set of transition probabilities , where denotes the probability that a chain starting from will be in after steps. By the Markov property,

where is the probability of an event. For any two probability measures and on , we define their total variation distance as . A V-geometrically ergodic Markov chain can be defined as follows.

A special case of a V-geometrically ergodic Markov chain is a uniformly ergodic Markov chain, which has V ≡ 1 (the constant function 1) [6]. So the results in this paper also hold for uniformly ergodic Markov data. Throughout our paper, we mostly consider the first elements of a V-geometrically ergodic Markov chain . For convenience, we will also call a V-geometrically ergodic Markov chain. Whenever we consider , , and of , we actually refer to those of .

3.2 Settings

We assume that the training data form a V-geometrically ergodic Markov chain on a state space , where is a compact subset of () and is a compact subset of . The variables ’s are usually called the observation variables and ’s are usually called the target variables.

Let be the set of all hypotheses, where a hypothesis is a function from to . Throughout this paper, we make the following assumption: is contained in a ball of a Hölder space for some , which is similar to the assumption in [6]. The Hölder space is the space of all continuous functions on with the following norm [6]:

where and is a metric defined on .

In this paper, we consider the -loss of a hypothesis on an example . Because of the boundedness of and , there exist and such that

and

For any data , we define the empirical loss of the hypothesis on as

and the expected loss of with respect to the stationary distribution of the Markov chain as
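As a concrete sketch of these definitions (assuming, purely for illustration, the absolute loss |h(x) − y|; the analysis only requires a loss that is bounded on the compact domain), the empirical loss averages the per-example losses over the sample path:

```python
import numpy as np

def empirical_loss(h, sample):
    """Average per-example loss of hypothesis h over a sample path z_1, ..., z_m.

    The absolute loss |h(x) - y| is used purely for illustration; the
    analysis needs only a loss that is bounded on the compact domain.
    """
    return float(np.mean([abs(h(x) - y) for x, y in sample]))

# Hypothetical sample: targets follow y = 2x plus small noise.  In the
# paper's setting the pairs (x, y) would come from a V-geometrically
# ergodic Markov chain rather than being drawn independently.
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, size=100)
sample = list(zip(xs, 2.0 * xs + 0.1 * rng.standard_normal(100)))

loss_good = empirical_loss(lambda x: 2.0 * x, sample)  # near the truth
loss_bad = empirical_loss(lambda x: 0.0, sample)       # constant predictor
```

The expected loss replaces this sample average with an expectation over a single example drawn from the stationary distribution of the chain.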

3.3 Uniform Convergence Rate of the Empirical Loss

We review a previous result [6] which gives a PAC-style bound on the training set size for the empirical loss to converge uniformly to the expected loss when training data are V-geometrically ergodic Markov chains. This result will be used to prove the generalization and robustness bounds for the BWA algorithm in subsequent sections. To state the result, we first need to define the covering number, the quantity for measuring the capacity of a hypothesis space.

Note that the covering number is defined with respect to the norm and thus is data independent. This is different from another type of covering number which is data dependent [24]. With the assumption that , there exists such that for every , we have (see [23]). Thus, the covering number is finite in our setting.

We also need a concept of effective sample size for a V-geometrically ergodic Markov chain. The effective sample size plays the same role in our analyses as the sample size in the IID case. This concept is usually used when the observations are not independent (e.g., hierarchical autocorrelated observations [25]).

For a V-geometrically ergodic Markov chain, as . The uniform convergence rate for the empirical loss when training data are V-geometrically ergodic Markov chains is stated in Lemma ? below. This lemma is a direct consequence of Theorem 2 in [6].

4 The Batched Weighted Average Algorithm

In this section, we introduce the BWA algorithm. In contrast to the ERM algorithm, which predicts using a single empirical-loss-minimizing hypothesis, the BWA algorithm predicts using the weighted average of the predictions of all hypotheses in the hypothesis space. The pseudocode for the BWA algorithm is given in Algorithm ?.

Inputs for the BWA algorithm are a parameter and a training data sequence , which is a V-geometrically ergodic Markov chain on the state space . The algorithm computes a weight for each hypothesis in the hypothesis space by:

Then, the weights of the hypotheses are normalized to obtain a probability density function with respect to the measure (probability mass function if is finite) over the hypothesis space:

We will call the normalized weight of . Given a new example , we use the normalized weights to compute the weighted average prediction of all the hypotheses on :
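A minimal end-to-end sketch of these steps for a finite hypothesis space is given below. The elided weight formula is assumed here to be exponential in the cumulative training loss, W(h) = exp(−η Σ_t |h(x_t) − y_t|), as in the online weighted average algorithm [7]; the parameter η and the absolute loss are stand-ins for the elided symbols:

```python
import numpy as np

def bwa_train(hypotheses, sample, eta=1.0):
    """Compute normalized hypothesis weights from the training data.

    Assumes exponential weights in the cumulative training loss,
    W(h) = exp(-eta * sum_t |h(x_t) - y_t|); eta > 0 stands in for
    the algorithm's elided input parameter.
    """
    losses = np.array([sum(abs(h(x) - y) for x, y in sample) for h in hypotheses])
    w = np.exp(-eta * (losses - losses.min()))  # shift for numerical stability
    return w / w.sum()                          # normalized weights

def bwa_predict(hypotheses, weights, x):
    """Weighted average prediction of all hypotheses on a new example x."""
    return float(sum(p * h(x) for p, h in zip(weights, hypotheses)))

# Hypothetical finite hypothesis space of linear predictors h_a(x) = a*x;
# the training targets follow the slope-2 hypothesis exactly.
hypotheses = [lambda x, a=a: a * x for a in (0.0, 1.0, 2.0, 3.0)]
sample = [(t / 10.0, 2.0 * t / 10.0) for t in range(10)]
weights = bwa_train(hypotheses, sample, eta=5.0)
prediction = bwa_predict(hypotheses, weights, x=0.5)
```

On this toy data the zero-loss hypothesis receives nearly all of the normalized weight, so the weighted average prediction is close to the target value 2 · 0.5 = 1.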

In the algorithm, we assume there exists a probability measure on such that . The measure plays a similar role to the prior distribution in Bayesian analysis [26]. It reflects our initial belief about the distribution of the hypotheses in . During the execution of the algorithm, we gradually update our belief, via the weights, based on the prediction of each hypothesis on the training data. The existence of such a measure was also assumed in [8] for averaged classifiers.

When is infinite, we usually cannot compute the value of exactly. In practice, we can apply the Markov Chain Monte Carlo method [27] to approximate it. For instance, we can sample hypotheses from the unnormalized density and approximate by .
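When sampling from the weighted density directly is hard, a simple stand-in for the MCMC scheme just described is self-normalized importance sampling: draw hypotheses from the prior measure and reweight them by their unnormalized training weights. The sketch below again assumes exponential weights in the cumulative absolute loss, and the uniform prior over slope-a hypotheses is made up for illustration:

```python
import numpy as np

def bwa_predict_sampled(sample_hypothesis, sample, x, n_samples=2000, eta=1.0, seed=0):
    """Approximate the BWA prediction over an infinite hypothesis space.

    Hypotheses are drawn from the prior measure (via sample_hypothesis) and
    reweighted by their unnormalized training weights -- self-normalized
    importance sampling, a simple alternative to a full MCMC scheme.
    """
    rng = np.random.default_rng(seed)
    hs = [sample_hypothesis(rng) for _ in range(n_samples)]
    losses = np.array([sum(abs(h(xi) - yi) for xi, yi in sample) for h in hs])
    w = np.exp(-eta * (losses - losses.min()))
    w /= w.sum()
    return float(sum(p * h(x) for p, h in zip(w, hs)))

# Hypothetical prior over hypotheses h_a(x) = a*x with slope a ~ Uniform(0, 4);
# the training targets follow the slope-2 hypothesis exactly.
sample_h = lambda rng: (lambda x, a=rng.uniform(0.0, 4.0): a * x)
sample = [(t / 10.0, 2.0 * t / 10.0) for t in range(10)]
approx = bwa_predict_sampled(sample_h, sample, x=1.0, eta=20.0)
```

With enough samples, the approximation concentrates on hypotheses with small training loss, so the prediction at x = 1 approaches the target value 2.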

5 Generalization Bound for BWA Algorithm

In this section, we prove the generalization bound for the BWA algorithm when training data are V-geometrically ergodic Markov chains. For the analyses to be valid, we assume the following sets are measurable with respect to :

Since Algorithm ? does not assume the existence of a perfect hypothesis in , we need to define the optimal expected loss of . Let , the optimal expected loss of is defined as . Note that always exists since and . For all , let be the volume of all the hypotheses with expected loss at most . By definition of , for all , we always have .

The idea of using was proposed in [8] to analyze generalization bounds of averaged classifiers in the IID case. The argument for considering is that when is uncountable, comparing the average hypothesis with a single best hypothesis is meaningless because a single hypothesis typically has measure zero. Hence, we should compare against a set of good hypotheses that has positive measure, as suggested in [8].

To prove the generalization bound, we need Lemma ? that bounds the normalized weights of all the bad hypotheses. Specifically, this lemma proves that if the effective sample size is large enough, the normalized weights of all the bad hypotheses are sufficiently small with high probability.

Denote and . We can write: . If the effective sample size satisfies

then by Lemma ?, with probability at least , we both have:

For all and , we also have . Therefore, with probability at least , for all and ,

Since , we have . Hence, . Note that this inequality holds for all and . Therefore,

Let , we have

Therefore, .

Using Lemma ?, we now prove the following generalization bound for the BWA algorithm.

We have

Notice that for all , we have: . On the other hand, from Lemma ?, if the effective sample size satisfies

then with probability at least , we have: .

Thus,

Note that when , we have . From the definition of the effective sample size, in order to ensure the previous condition for the sample size , it is sufficient to let

Hence, for

we have .

In Theorem ?, the convergence rate of the expected loss to the optimal loss depends not only on the covering number but also on . From the definition of , this value depends mostly on the distribution on . If gives higher probability to hypotheses with small expected loss, will be closer to and the convergence rate will be better. Thus, it is desirable for the BWA algorithm to choose a good distribution . This is analogous to the Bayesian setting where we also need to choose a good prior for the learning algorithm. When is finite, for sufficiently small . In this case, does not depend on , but only depends on .

The bound in Theorem ? and all the subsequent bounds depend on the values of , and . For one V-geometrically ergodic Markov chain, there may be many values of (, , ) satisfying Definition ?. Thus, to obtain good bounds, we need to choose a value of (, , ) that makes the bounds as tight as possible. This corresponds to selecting small values for these parameters.

When comparing various V-geometrically ergodic Markov chains, Theorem ? suggests that the convergence rate is better if , and are smaller. Small values of these parameters correspond to chains that converge quickly to the stationary distribution . This result is expected because the expected loss is defined with respect to a random example drawn from . In the limit when and , the chains become more IID-like and the effective sample size bound tends to .

From the discussion in Section 3.3, there exists such that for , we have . Therefore, we can deduce the following corollary of Theorem ? in which the bound does not depend on the covering number.

Since as , by the above corollary, we have for every . Hence, the BWA algorithm is consistent.

6 Robustness Bound for BWA Algorithm

In this section, we consider the robustness of the BWA algorithm when the values of the target variable in the training data contain a small amount of noise. In particular, instead of the settings in Section 3.2, we assume that the training data are now , where and form a V-geometrically ergodic Markov chain with stationary distribution . We further assume that the noise is bounded, i.e., for all . However, we make no assumption on the distribution of the noise.

With this setting, the BWA algorithm that we consider is essentially the same as Algorithm ?, except that now the algorithm does not have access to the true target variables ’s. Instead, it uses the noisy target variables and updates the hypothesis weights according to the following formula:

Hence, , where is the (noisy) empirical loss of the hypothesis on the noisy dataset :

For any hypothesis , the expected loss is defined as in Section 3.2 with respect to the stationary distribution of the Markov chain . We also let , and be the parameters satisfying Definition ? for the chain . The optimal expected loss is defined as in Section 5.

We now prove that, with this setting, the generalization bound of the BWA algorithm deviates by at most . The steps of the proof are similar to those in Section 5. First, we prove the following uniform convergence bound for V-geometrically ergodic Markov chains with bounded noise.

Let and be defined as in Section 3.2. For all ,

By Lemma ?, if the effective sample size satisfies

then . In this case, . Hence, Lemma ? holds.

Using Lemma ?, we can prove the following lemma, which is an analogue of Lemma ?.

The proof for this lemma uses the same technique as that of Lemma ?, except that we define and replace Lemma ? by Lemma ? with all and .

Using Lemma ?, we can prove the following robustness bound.

The proof for this theorem is essentially the same as that of Theorem ?, except that we partition into and after the first inequality and then apply Lemma ? instead of Lemma ?.

From Theorem ?, with high probability, the expected loss of is at most larger than the optimal loss when we allow noise with range in the training data. This shows that the BWA algorithm is robust in the sense that it does not perform too badly if the level of noise in the training data is small. In the noiseless case where , we recover Theorem ?. Thus, Theorem ? generalizes Theorem ? to the bounded-noise case.

7 Applications to Other Settings

Our results in Sections 5 and 6 are proven for the regression problem, where the pairs of observation and target variables are V-geometrically ergodic. We now show that our results can easily be applied to other common settings, such as the classification problem and the case where there exists an unknown deterministic target hypothesis. The discussion in Section 7.1 is for noiseless training data, while the discussion in Section 7.2 applies to both the noiseless and noisy cases. In this section, we let be the indicator function for the event .

7.1 The Classification Problem

For the classification problem, the training data satisfy for ; and during testing, we need to predict the label of a given data point . If the hypothesis space contains the hypotheses satisfying for all , we can apply Algorithm ? to compute and use its value to construct the following random classifier:

Let be the expected error of . The following lemma shows that is equal to the expected loss of . Thus, we can bound the probability using this lemma and Theorem ?.

Note that . Thus,
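Concretely, assuming the hypotheses take values in [0, 1] so that the weighted average prediction f(x) lies in [0, 1], the random classifier can be implemented by predicting label 1 with probability f(x); its error probability on a point with true label y is then the absolute deviation |f(x) − y|, which is what ties the expected error to the expected loss:

```python
import numpy as np

def random_classifier(f_value, rng):
    """Predict label 1 with probability f_value, else label 0.

    f_value is the BWA weighted average prediction on the point x,
    assumed to lie in [0, 1] because every hypothesis maps into [0, 1].
    """
    return int(rng.random() < f_value)

# If the weighted average prediction on x is 0.8 and the true label is 1,
# the classifier errs with probability |0.8 - 1| = 0.2, the absolute
# deviation of the averaged prediction from the label.
rng = np.random.default_rng(0)
predictions = [random_classifier(0.8, rng) for _ in range(100_000)]
error_rate = 1.0 - sum(predictions) / len(predictions)
```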

7.2 When a Target Hypothesis Exists

When there exists an unknown deterministic target hypothesis such that for all and the observation variables form a V-geometrically ergodic Markov chain, the following lemma shows that the chain is V-geometrically ergodic. Thus, our previous results can still be applied in this situation. Note that in our lemma, may not be in .

Let be the one-step transition probability of . It is easy to see that is a Markov chain on with the following one-step transition probability :

Intuitively, after taking the first step (from onwards), the new Markov chain on moves among the points in with the same probabilities as the transitions on . Thus, the new Markov chain has stationary distribution , where is the stationary distribution of . Let , , and be the parameters satisfying Definition ? for the chain , and consider the measurable function defined as follows:

We have . Furthermore, for any two points and in , the n-step transition probability from to satisfies:

Thus, for all , we have: . Hence, satisfies the V-geometrically ergodic definition with the same parameters , , and the function above.
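Filling in notation for the symbols elided above (P for the transition kernel of the observation chain, h* for the target hypothesis, and B, γ for the ergodicity constants; these names are assumptions), the construction in this proof can be written as:

```latex
% One-step transition kernel of the joint chain Z_t = (X_t, h^{*}(X_t)):
P'\big((x,y),\, A \times B_Y\big)
  = \int_{A} \mathbf{1}\{h^{*}(x') \in B_Y\}\, P(x, \mathrm{d}x').

% With V'(x,y) := V(x), the n-step marginals of the two chains coincide, so
\big\| (P')^{\,n}\big((x,y), \cdot\big) - \pi' \big\|_{TV}
  = \big\| P^{\,n}(x, \cdot) - \pi \big\|_{TV}
  \le B \gamma^{n} V(x)
  = B \gamma^{n} V'(x,y).
```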

8 Conclusion

A good property of the BWA algorithm is that the normalized weights of the good hypotheses eventually dominate those of the bad ones as more training data are obtained. This property enables us to obtain its generalization and robustness bounds for V-geometrically ergodic Markov data. The bounds can be applied to various settings, such as the regression problem, the classification problem, and the case where there exists a deterministic target hypothesis. Our results show that the BWA algorithm is consistent and robust for V-geometrically ergodic Markov data. So, when overfitting is a concern or when optimizing the empirical risk is hard, the BWA algorithm may be a good replacement for the ERM algorithm.

References

  1. Statistical learning theory.
    Vapnik, V.N.: (1998)
  2. A theory of the learnable.
    Valiant, L.: Communications of the ACM 27(11) (1984) 1134–1142
  3. Learning from dependent observations.
    Steinwart, I., Hush, D., Scovel, C.: Journal of Multivariate Analysis 100(1) (2009) 175–194
  4. The generalization performance of ERM algorithm with strongly mixing observations.
    Zou, B., Li, L., Xu, Z.: Machine learning 75(3) (2009) 275–295
  5. Learning from uniformly ergodic Markov chains.
    Zou, B., Zhang, H., Xu, Z.: Journal of Complexity 25(2) (2009) 188–200
  6. Generalization bounds of ERM algorithm with V-geometrically ergodic Markov chains.
    Zou, B., Xu, Z., Chang, X.: Advances in Computational Mathematics 36(1) (2012) 99–114
  7. Averaging expert predictions.
    Kivinen, J., Warmuth, M.K.: In: Computational Learning Theory. (1999) 153–167
  8. Generalization bounds for averaged classifiers.
    Freund, Y., Mansour, Y., Schapire, R.: Annals of Statistics (2004) 1698–1722
  9. Why averaging classifiers can protect against overfitting.
    Freund, Y., Mansour, Y., Schapire, R.E.: In: Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics. Volume 304. (2001)
  10. Markov chains and stochastic stability.
    Meyn, S., Tweedie, R.: Cambridge University Press (2009)
  11. Extension of the PAC framework to finite and countable Markov chains.
    Gamarnik, D.: IEEE Transactions on Information Theory 49(1) (2003) 338–345
  12. Robustness and generalization.
    Xu, H., Mannor, S.: Machine Learning (2012) 1–33
  13. The weighted majority algorithm.
    Littlestone, N., Warmuth, M.: In: IEEE Symposium on Foundations of Computer Science. (1989) 256–261
  14. Convergence of empirical means with alpha-mixing input sequences, and an application to PAC learning.
    Vidyasagar, M.: In: IEEE Conference on Decision and Control and European Control Conference. (2005) 560–565
  15. The performance bounds of learning machines based on exponentially strongly mixing sequences.
    Zou, B., Li, L.: Computers & Mathematics with Applications 53(7) (2007) 1050–1058
  16. Rates of convergence for empirical processes of stationary mixing sequences.
    Yu, B.: Annals of Probability (1994) 94–116
  17. Rademacher complexity bounds for non-iid processes.
    Mohri, M., Rostamizadeh, A.: In: Advances in Neural Information Processing Systems. (2009) 1097–1104
  18. Efficient noise-tolerant learning from statistical queries.
    Kearns, M.: Journal of the ACM 45(6) (1998) 983–1006
  19. Noise-tolerant learning, the parity problem, and the statistical query model.
    Blum, A., Kalai, A., Wasserman, H.: Journal of the ACM 50(4) (2003) 506–519
  20. General bounds on statistical query learning and PAC learning with noise via hypothesis boosting.
    Aslam, J.A., Decatur, S.E.: In: IEEE Symposium on Foundations of Computer Science. (1993) 282–291
  21. Can PAC learning algorithms tolerate random attribute noise?
    Goldman, S.A., Sloan, R.H.: Algorithmica 14(1) (1995) 70–84
  22. Stability bounds for stationary φ-mixing and β-mixing processes.
    Mohri, M., Rostamizadeh, A.: Journal of Machine Learning Research 11 (2010) 789–814
  23. Capacity of reproducing kernel spaces in learning theory.
    Zhou, D.: IEEE Transactions on Information Theory 49(7) (2003) 1743–1752
  24. Introduction to statistical learning theory.
    Bousquet, O., Boucheron, S., Lugosi, G.: In: Advanced Lectures on Machine Learning. (2004) 169–207
  25. Analysis of comparative data with hierarchical autocorrelation.
    Ané, C.: Annals of Applied Statistics 2(3) (2008) 1078–1102
  26. Bayesian methods for adaptive models.
    MacKay, D.: PhD thesis, California Institute of Technology (1992)
  27. Markov Chain Monte Carlo method and its application.
    Brooks, S.: Journal of the Royal Statistical Society: Series D (The Statistician) 47(1) (1998) 69–100