On the Relation between Realizable and Nonrealizable Cases of the Sequence Prediction Problem

\nameDaniil Ryabko \emaildaniil@ryabko.net
\addrINRIA Lille-Nord Europe,
40, avenue Halley,
Parc Scientifique de la Haute Borne
59650 Villeneuve d’Ascq, France
Abstract

A sequence $x_1,\dots,x_n,\dots$ of discrete-valued observations is generated according to some unknown probabilistic law (measure) $\mu$. After observing each outcome, one is required to give conditional probabilities of the next observation. The realizable case is when the measure $\mu$ belongs to an arbitrary but known class $\mathcal{C}$ of process measures. The non-realizable case is when $\mu$ is completely arbitrary, but the prediction performance is measured with respect to a given set $\mathcal{C}$ of process measures. We are interested in the relations between these problems and between their solutions, as well as in characterizing the cases when a solution exists and finding these solutions. We show that if the quality of prediction is measured using the total variation distance, then these problems coincide, while if it is measured using the expected average KL divergence, then they are different. For some of the formalizations we also show that when a solution exists, it can be obtained as a Bayes mixture over a countable subset of $\mathcal{C}$. We also obtain several characterizations of those sets $\mathcal{C}$ for which solutions to the considered problems exist. As an illustration of the general results obtained, we show that a solution to the non-realizable case of the sequence prediction problem exists for the set of all finite-memory processes, but does not exist for the set of all stationary processes. It should be emphasized that the framework is completely general: the process measures considered are not required to be i.i.d., mixing, stationary, or to belong to any parametric family.


Editor: Nicolò Cesa-Bianchi

Keywords: sequence prediction, time series, online prediction, realizable sequence prediction, non-realizable sequence prediction.

1 Introduction

A sequence $x_1,\dots,x_n,\dots$ of discrete-valued observations (where $x_i$ belong to a finite set $X$) is generated according to some unknown probabilistic law (measure) $\mu$. That is, $\mu$ is a probability measure on the space $\Omega=(X^\infty,\mathcal{F})$ of one-way infinite sequences (here $\mathcal{F}$ is the usual Borel $\sigma$-algebra). After each new outcome $x_n$ is revealed, one is required to predict conditional probabilities of the next observation $x_{n+1}=a$, $a\in X$, given the past $x_1,\dots,x_n$. Since a predictor $\rho$ is required to give conditional probabilities for all possible histories $x_1,\dots,x_n$, it defines itself a probability measure on the space $\Omega$ of one-way infinite sequences. In other words, a probability measure on $\Omega$ can be considered both as a data-generating mechanism and as a predictor.

Therefore, given a set $\mathcal{C}$ of probability measures on $\Omega$, one can ask two kinds of questions about $\mathcal{C}$. First, does there exist a predictor $\rho$ whose forecast probabilities converge (in a certain sense) to the $\mu$-conditional probabilities, if an arbitrary $\mu\in\mathcal{C}$ is chosen to generate the data? Here we assume that the “true” measure that generates the data belongs to the set $\mathcal{C}$ of interest, and we would like to construct a predictor that predicts all measures in $\mathcal{C}$. The second type of question is as follows: does there exist a predictor that predicts at least as well as any predictor $\rho\in\mathcal{C}$, if the measure that generates the data comes possibly from outside of $\mathcal{C}$? Thus, here we consider elements of $\mathcal{C}$ as predictors, and we would like to combine their predictive properties, if this is possible. Note that in this setting the two questions above concern the same object: a set $\mathcal{C}$ of probability measures on $\Omega$.
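To make the identification of predictors with measures concrete, here is a minimal Python sketch (illustrative only, not taken from the paper; the function names are hypothetical) showing how any rule mapping a history to a distribution over the next symbol induces, via the chain rule, a probability for every finite string, and hence a process measure.

```python
from typing import Callable, Dict, Sequence

# A predictor maps a history (tuple of past symbols) to a distribution
# over the next symbol: {symbol: probability}.
Predictor = Callable[[Sequence[int]], Dict[int, float]]

def uniform_predictor(history: Sequence[int]) -> Dict[int, float]:
    """A trivial predictor over the binary alphabet {0, 1}: always 1/2-1/2."""
    return {0: 0.5, 1: 0.5}

def prob_of_string(rho: Predictor, string: Sequence[int]) -> float:
    """Probability the predictor rho assigns to a finite string, obtained by
    the chain rule: rho(x_1..n) = prod_t rho(x_t | x_1..t-1)."""
    p = 1.0
    for t, symbol in enumerate(string):
        p *= rho(string[:t])[symbol]
    return p

# The probabilities of all strings of a fixed length sum to 1,
# so the predictor indeed defines a measure on sequences.
print(prob_of_string(uniform_predictor, (0, 1, 1)))  # 0.125
```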

Each of these two questions, the realizable and the non-realizable one, has enjoyed much attention in the literature; the setting for the non-realizable case is usually slightly different, which is probably why it has not (to the best of the author’s knowledge) been studied as another facet of the realizable case. The realizable case traces back to Laplace, who considered the problem of predicting outcomes of a series of independent tosses of a biased coin. That is, he considered the case when the set $\mathcal{C}$ is that of all i.i.d. process measures. Other classical examples studied are the set of all computable (or semi-computable) measures (Solomonoff, 1978), the set of $k$-order Markov and finite-memory processes (e.g., Krichevsky, 1993) and the set of all stationary processes (Ryabko, 1988). The general question of finding predictors for an arbitrary given set $\mathcal{C}$ of process measures has been addressed in (Ryabko and Hutter, 2007, 2008; Ryabko, 2010a); the latter work shows that when a solution exists it can be obtained as a Bayes mixture over a countable subset of $\mathcal{C}$.
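As an aside, Laplace's classical predictor for the i.i.d. case admits a one-line implementation; the following sketch (illustrative only, not part of the paper) predicts the next symbol over a finite alphabet by the rule of succession.

```python
from collections import Counter
from typing import Dict, Sequence

def laplace_predictor(history: Sequence[str], alphabet: Sequence[str]) -> Dict[str, float]:
    """Laplace's rule of succession:
    P(next = a | history) = (count of a in history + 1) / (len(history) + |alphabet|)."""
    counts = Counter(history)
    n, k = len(history), len(alphabet)
    return {a: (counts[a] + 1) / (n + k) for a in alphabet}

# Example: after observing 7 heads and 3 tails, predict the next toss.
print(laplace_predictor("HHTHHHTHHT", alphabet="HT"))  # {'H': 0.666..., 'T': 0.333...}
```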

The non-realizable case is usually studied in a slightly different, non-probabilistic, setting. We refer to (Cesa-Bianchi and Lugosi, 2006) for a comprehensive overview. It is usually assumed that the observed sequence of outcomes is an arbitrary (deterministic) sequence; the predictor is required to give not conditional probabilities, but just deterministic guesses (although these guesses can be selected using randomisation). Predictions result in a certain loss, which is required to be small as compared to the loss of a given set of reference predictors (experts) $\mathcal{C}$. The losses of the experts and the predictor are observed after each round. In this approach, it is mostly assumed that the set $\mathcal{C}$ is finite or countable. The main difference with the formulation considered in this work is that we require a predictor to give probabilities, and thus the loss is with respect to something never observed (probabilities, not outcomes). The loss itself is not completely observable in our setting. In this sense our non-realizable version of the problem is more difficult. Assuming that the data-generating mechanism is probabilistic, even if it is completely unknown, makes sense in such problems as, for example, game playing or market analysis. In these cases one may wish to assign smaller loss to those models or experts who give probabilities closer to the correct ones (which are never observed), even though different probability forecasts can often result in the same action. Aiming at predicting probabilities of outcomes also allows us to abstract from the actual use of the predictions (for example, making bets) and thus from considering losses in a general form; instead, we can concentrate on those forms of loss that are more convenient for the analysis. In this latter respect, the problems we consider are easier than those considered in prediction with expert advice. (However, in principle, nothing restricts us to considering the simple losses that we chose; they are just a convenient choice.) Notably, the probabilistic approach also makes the machinery of probability theory applicable, hopefully making the problem easier. A reviewer suggested the following summary explanation of the difference between the non-realizable problems of this work and prediction with expert advice: the latter is prequential (in the sense of Dawid, 1992), whereas the former is not.

In this work we consider two measures of the quality of prediction. The first one is the total variation distance, which measures the difference between the forecast and the “true” conditional probabilities of all future events (not just the probability of the next outcome). The second one is expected (over the data) average (over time) Kullback-Leibler divergence. Requiring that predicted and true probabilities converge in total variation is very strong; in particular, this is possible if (Blackwell and Dubins, 1962) and only if (Kalai and Lehrer, 1994) the process measure generating the data is absolutely continuous with respect to the predictor. The latter fact makes the sequence prediction problem relatively easy to analyse. Here we investigate what can be paralleled for the other measure of prediction quality (average KL divergence), which is much weaker, and thus allows for solutions for the cases of much larger sets of process measures (considered either as predictors or as data generating mechanisms).

Having introduced our measures of prediction quality, we can further break the non-realizable case into two problems. The first one is as follows. Given a set $\mathcal{C}$ of predictors, we want to find a predictor whose prediction error converges to zero if there is at least one predictor in $\mathcal{C}$ whose prediction error converges to zero; we call this problem simply the “non-realizable” case, or Problem 2 (leaving the name “Problem 1” to the realizable case). The second non-realizable problem is the “fully agnostic” problem: it is to make the prediction error asymptotically as small as that of the best (for the given process measure generating the data) predictor in $\mathcal{C}$ (we call this Problem 3). Thus, we now have three problems about a set $\mathcal{C}$ of process measures to address.

We show that if the quality of prediction is measured in total variation, then all three problems coincide: any solution to any one of them is a solution to the other two. For the case of expected average KL divergence, all three problems are different: the realizable case is strictly easier than the non-realizable case (Problem 2), which is, in turn, strictly easier than the fully agnostic case (Problem 3). We then analyse which results concerning prediction in total variation can be transferred to which of the problems concerning prediction in average KL divergence. It was shown in (Ryabko, 2010a) that, for the realizable case, if there is a solution for a given set of process measures $\mathcal{C}$, then a solution can also be obtained as a Bayesian mixture over a countable subset of $\mathcal{C}$; this holds both for prediction in total variation and in expected average KL divergence. Here we show that this result also holds true for the (non-realizable) case of Problem 2, for prediction in expected average KL divergence. We do not have an analogous result for Problem 3 (and, in fact, conjecture that the opposite statement holds true). However, for the fully agnostic case of Problem 3, we show that separability with respect to a certain topology given by KL divergence is a sufficient (though not a necessary) condition for the existence of a predictor. This is used to demonstrate that there is a solution to this problem for the set of all finite-memory process measures, complementing similar results obtained earlier in different settings. On the other hand, we show that there is no solution to this problem for the set of all stationary process measures, in contrast to a result of B. Ryabko (1988) that gives a solution to the realizable case of this problem (that is, a predictor whose expected average KL error goes to zero if any stationary process is chosen to generate the data). Finally, we also consider a modified version of Problem 3, in which the performance of predictors is only compared on individual sequences. For this problem, we obtain, using a result from (Ryabko, 1986), a characterisation of those sets $\mathcal{C}$ for which a solution exists in terms of Hausdorff dimension.

2 Notation and Definitions

Let $X$ be a finite set. The notation $x_{1..n}$ is used for $x_1,\dots,x_n$. We consider stochastic processes (probability measures) on $\Omega:=(X^\infty,\mathcal{F})$, where $\mathcal{F}$ is the sigma-field generated by the cylinder sets $[x_{1..n}]$, $x_i\in X$, $n\in\mathbb{N}$ ($[x_{1..n}]$ is the set of all infinite sequences that start with $x_{1..n}$). For a finite set $A$, $|A|$ denotes its cardinality. We use $\mathbb{E}_\mu$ for expectation with respect to a measure $\mu$.

Next we introduce the measures of the quality of prediction used in this paper. For two measures $\mu$ and $\rho$ we are interested in how different the $\mu$- and $\rho$-conditional probabilities are, given a data sample $x_{1..n}$. Introduce the (conditional) total variation distance

$$v(\mu,\rho,x_{1..n}) := \sup_{A\in\mathcal{F}} |\mu(A\,|\,x_{1..n}) - \rho(A\,|\,x_{1..n})|$$

if $\mu(x_{1..n})\ne 0$ and $\rho(x_{1..n})\ne 0$, and $v(\mu,\rho,x_{1..n}):=1$ otherwise.

Definition 1

We say that $\rho$ predicts $\mu$ in total variation if

$$v(\mu,\rho,x_{1..n}) \to 0 \quad \mu\text{-almost surely as } n\to\infty.$$

This convergence is rather strong. In particular, it means that $\rho$-conditional probabilities of arbitrarily far-off events converge to $\mu$-conditional probabilities. Moreover, $\rho$ predicts $\mu$ in total variation if (Blackwell and Dubins, 1962) and only if (Kalai and Lehrer, 1994) $\mu$ is absolutely continuous with respect to $\rho$. Denote by $\ge_{tv}$ the relation of absolute continuity (that is, $\rho\ge_{tv}\mu$ if $\mu$ is absolutely continuous with respect to $\rho$).

Thus, for a class $\mathcal{C}$ of measures there is a predictor $\rho$ that predicts every $\mu\in\mathcal{C}$ in total variation if and only if every $\mu\in\mathcal{C}$ has a density with respect to $\rho$. Although such sets of processes are rather large, they do not include even such basic examples as the set of all Bernoulli i.i.d. processes. That is, there is no $\rho$ that would predict in total variation every Bernoulli i.i.d. process measure with parameter $p\in[0,1]$, where $p$ is the probability of 0. Indeed, all these processes are singular with respect to one another; in particular, each of the non-overlapping sets of all sequences which have limiting fraction $p$ of 0s has probability 1 under one of the measures and 0 under all others; since there are uncountably many of these measures, there is no measure with respect to which they all would have a density (such a measure would have to assign positive probability to each of these uncountably many disjoint sets).

Therefore, perhaps for many (if not most) practical applications this measure of the quality of prediction is too strong, and one is interested in weaker measures of performance.

For two measures $\mu$ and $\rho$ introduce the expected cumulative Kullback-Leibler divergence (KL divergence) as

$$d_n(\mu,\rho) := \mathbb{E}_\mu \sum_{t=1}^{n} \sum_{a\in X} \mu(x_t=a\,|\,x_{1..t-1}) \log \frac{\mu(x_t=a\,|\,x_{1..t-1})}{\rho(x_t=a\,|\,x_{1..t-1})}. \qquad (1)$$

In words, we take the expected (over data) cumulative (over time) KL divergence between $\mu$- and $\rho$-conditional (on the past data) probability distributions of the next outcome.
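For concreteness, the following Python sketch (illustrative only; the function names are not from the paper) computes $d_n(\mu,\rho)$ of equation (1) exactly for small $n$, by enumerating all histories and weighting the per-step KL terms by their $\mu$-probability.

```python
import math
from itertools import product
from typing import Callable, Dict, Sequence

# A process measure is represented by its next-symbol conditional probabilities.
Process = Callable[[Sequence[int]], Dict[int, float]]

def prob(proc: Process, string: Sequence[int]) -> float:
    """Probability of a finite string under the process (chain rule)."""
    p = 1.0
    for t, s in enumerate(string):
        p *= proc(string[:t])[s]
    return p

def expected_cumulative_kl(mu: Process, rho: Process, n: int, alphabet=(0, 1)) -> float:
    """d_n(mu, rho): expected (over mu) cumulative KL divergence between the
    mu- and rho-conditional next-symbol distributions, as in equation (1)."""
    total = 0.0
    for t in range(1, n + 1):
        for past in product(alphabet, repeat=t - 1):
            w = prob(mu, past)            # mu-probability of the past
            if w == 0.0:
                continue
            mu_next, rho_next = mu(past), rho(past)
            total += w * sum(mu_next[a] * math.log(mu_next[a] / rho_next[a])
                             for a in alphabet if mu_next[a] > 0)
    return total

# Example: true process is Bernoulli(0.7), predictor is Bernoulli(0.5).
bern = lambda p: (lambda hist: {1: p, 0: 1 - p})
print(expected_cumulative_kl(bern(0.7), bern(0.5), n=5))  # grows linearly in n
```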

Definition 2

We say that $\rho$ predicts $\mu$ in expected average KL divergence if

$$\frac{1}{n}\, d_n(\mu,\rho) \to 0 \text{ as } n\to\infty.$$

This measure of performance is much weaker, in the sense that it requires good predictions only one step ahead, and not on every step but only on average; also the convergence is not with probability 1 but in expectation. With prediction quality so measured, predictors exist for relatively large classes of measures; most notably, Ryabko (1988) provides a predictor which predicts every stationary process in expected average KL divergence.

We will use the following well-known identity (introduced, in the context of sequence prediction, by Ryabko, 1988)

$$d_n(\mu,\rho) = -\sum_{x_{1..n}\in X^n} \mu(x_{1..n}) \log\frac{\rho(x_{1..n})}{\mu(x_{1..n})}, \qquad (2)$$

where on the right-hand side we have simply the KL divergence between the measures $\mu$ and $\rho$ restricted to the first $n$ observations.
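For completeness, a short derivation sketch of identity (2) (a standard chain-rule argument, not reproduced verbatim from the paper): writing the expectation in (1) as a sum over histories, extending each history to length $n$, and using the chain rule $\mu(x_{1..n})=\prod_{t=1}^n\mu(x_t|x_{1..t-1})$, the per-step log-ratios telescope:

$$
d_n(\mu,\rho)
= \sum_{t=1}^{n} \sum_{x_{1..t}\in X^t} \mu(x_{1..t}) \log\frac{\mu(x_t|x_{1..t-1})}{\rho(x_t|x_{1..t-1})}
= \sum_{x_{1..n}\in X^n} \mu(x_{1..n}) \sum_{t=1}^{n} \log\frac{\mu(x_t|x_{1..t-1})}{\rho(x_t|x_{1..t-1})}
= \sum_{x_{1..n}\in X^n} \mu(x_{1..n}) \log\frac{\mu(x_{1..n})}{\rho(x_{1..n})}.
$$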

Thus, the results of this work will be established with respect to two very different measures of prediction quality, one of which is very strong and the other rather weak. This suggests that the facts established reflect some fundamental properties of the problem of prediction, rather than those pertinent to particular measures of performance. On the other hand, it remains open to extend the results below to different measures of performance.

Definition 3

Consider the following classes of process measures: $\mathcal{P}$ is the set of all process measures, $\mathcal{D}$ is the set of all degenerate discrete process measures, $\mathcal{S}$ is the set of all stationary processes and $\mathcal{M}_k$ is the set of all stationary measures with memory not greater than $k$ ($k$-order Markov processes, with $\mathcal{M}_0$ being the set of all i.i.d. processes):

$$\mathcal{D} := \{\mu\in\mathcal{P} : \exists x\in X^\infty\ \mu(\{x\})=1\}, \qquad (3)$$
$$\mathcal{S} := \{\mu\in\mathcal{P} : \forall n,k\ge 1\ \forall a_{1..n}\in X^n\ \ \mu(x_{1..n}=a_{1..n}) = \mu(x_{k+1..n+k}=a_{1..n})\}, \qquad (4)$$
$$\mathcal{M}_k := \{\mu\in\mathcal{S} : \forall n\ge k\ \forall a\in X\ \forall a_{1..n}\in X^n\ \ \mu(x_{n+1}=a\,|\,x_{1..n}=a_{1..n}) = \mu(x_{n+1}=a\,|\,x_{n-k+1..n}=a_{n-k+1..n})\}. \qquad (5)$$

Abusing the notation, we will sometimes use elements of $\mathcal{D}$ and $X^\infty$ interchangeably (that is, we identify a deterministic sequence with the measure concentrated on it). The following (simple and well-known) statement will be used repeatedly in the examples.

Lemma 4

For every $\rho\in\mathcal{P}$ there exists $\mu\in\mathcal{D}$ such that $d_n(\mu,\rho) \ge n\log|X|$ for all $n\in\mathbb{N}$.

Proof  Indeed, for each $t$ we can select $x_t$ to be a value that minimizes $\rho(x_t\,|\,x_{1..t-1})$, so that $\rho(x_{1..n})\le|X|^{-n}$; taking $\mu$ to be the deterministic measure concentrated on the resulting sequence, from (2) we get $d_n(\mu,\rho)=-\log\rho(x_{1..n})\ge n\log|X|$.  
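The construction in the proof of Lemma 4 is easy to run; the following sketch (illustrative code, not from the paper) builds, for a given predictor, a deterministic sequence on which the predictor's cumulative log-loss, and hence $d_n$, grows at least as $n\log|X|$.

```python
import math
from typing import Callable, Dict, Sequence, Tuple

Predictor = Callable[[Sequence[int]], Dict[int, float]]

def adversarial_sequence(rho: Predictor, n: int, alphabet=(0, 1)) -> Tuple[Tuple[int, ...], float]:
    """At each step pick the symbol to which rho assigns the smallest probability.
    Returns the sequence and the cumulative log-loss -log rho(x_1..n), which equals
    d_n(mu, rho) for the deterministic measure mu concentrated on that sequence."""
    seq, log_loss = (), 0.0
    for _ in range(n):
        probs = rho(seq)
        worst = min(alphabet, key=lambda a: probs[a])
        log_loss += -math.log(probs[worst])   # each term is >= log |alphabet|
        seq = seq + (worst,)
    return seq, log_loss

# Example: even a "reasonable" predictor suffers loss >= n * log 2 on its adversarial sequence.
laplace = lambda hist: {a: (hist.count(a) + 1) / (len(hist) + 2) for a in (0, 1)}
seq, loss = adversarial_sequence(laplace, n=100)
print(loss, 100 * math.log(2))  # loss is at least 100 * log 2
```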

3 Sequence Prediction Problems

For the two notions of predictive quality introduced, we can now state formally the sequence prediction problems.
Problem 1 (realizable case). Given a set of probability measures $\mathcal{C}\subset\mathcal{P}$, find a measure $\rho$ such that $\rho$ predicts in total variation (expected average KL divergence) every $\mu\in\mathcal{C}$, if such a $\rho$ exists.

Thus, Problem 1 is about finding a predictor for the case when the process generating the data is known to belong to a given class $\mathcal{C}$. That is, the set $\mathcal{C}$ here is a set of measures that generate the data. Next let us formulate the questions about $\mathcal{C}$ as a set of predictors.

Problem 2 (non-realizable case). Given a set of process measures (predictors) $\mathcal{C}\subset\mathcal{P}$, find a process measure $\rho$ such that $\rho$ predicts in total variation (in expected average KL divergence) every measure $\nu\in\mathcal{P}$ such that there is $\mu\in\mathcal{C}$ which predicts (in the same sense) $\nu$.

While Problem 2 is already quite general, it does not yet address what can be called the fully agnostic case: if nothing at all is known about the process $\nu$ generating the data, there may be no $\mu\in\mathcal{C}$ such that $\mu$ predicts $\nu$, and then, even if we have a solution $\rho$ to Problem 2, we still do not know what the performance of $\rho$ is going to be on the data generated by $\nu$, compared to the performance of the predictors from $\mathcal{C}$. To address this fully agnostic case we have to introduce the notion of loss.

Definition 5

Introduce the almost sure total variation loss of $\rho$ with respect to $\mu$

$$l_{tv}(\mu,\rho) := \inf\{\alpha\in[0,1] : \limsup_{n\to\infty} v(\mu,\rho,x_{1..n}) \le \alpha\ \ \mu\text{-a.s.}\},$$

and the asymptotic KL loss

$$l_{KL}(\nu,\rho) := \limsup_{n\to\infty} \frac{1}{n}\, d_n(\nu,\rho).$$

We can now formulate the fully agnostic version of the sequence prediction problem.

Problem 3. Given a set of process measures (predictors) $\mathcal{C}\subset\mathcal{P}$, find a process measure $\rho$ such that $\rho$ predicts at least as well as any $\mu$ in $\mathcal{C}$, if any process measure $\nu\in\mathcal{P}$ is chosen to generate the data:

$$l(\nu,\rho) - l(\nu,\mu) \le 0 \qquad (6)$$

for every $\nu\in\mathcal{P}$ and every $\mu\in\mathcal{C}$, where $l$ is either $l_{tv}$ or $l_{KL}$.

The three problems just formulated represent different conceptual approaches to the sequence prediction problem. Let us illustrate the difference by the following informal example. Suppose that the set $\mathcal{C}$ is that of all (ergodic, finite-state) Markov chains. Markov chains being a familiar object in probability and statistics, we can easily construct a predictor $\rho$ that predicts every $\mu\in\mathcal{C}$ (for example, in expected average KL divergence, see Krichevsky, 1993). That is, if we know that the process generating the data is Markovian, we know that our predictor is going to perform well. This is the realizable case of Problem 1. In reality, rarely can we be sure that the Markov assumption holds true for the data at hand. We may believe, however, that it is still a reasonable assumption, in the sense that there is a Markovian model which, for our purposes (for the purposes of prediction), is a good model of the data. Thus we may assume that there is a Markov model (a predictor) that predicts well the process that we observe, and we would like to combine the predictive qualities of all these Markov models. This is the “non-realizable” case of Problem 2. Note that this problem is more difficult than the first one; in particular, a process generating the data may be singular with respect to any Markov process, and still be predicted well (in the sense of expected average KL divergence, for example) by some of them. Still, here we are making some assumptions about the process generating the data, and, if these assumptions are wrong, then we do not know anything about the performance of our predictor. Thus, we may ultimately wish to acknowledge that we do not know anything at all about the data; we still know a lot about Markov processes, and we would like to use this knowledge on our data. If there is anything at all Markovian in it (that is, anything that can be captured by a Markov model), then we would like our predictor to use it. In other words, we want to have a predictor that predicts any process measure whatsoever (at least) as well as any Markov predictor. This is the “fully agnostic” case of Problem 3.
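As an informal illustration of "combining the predictive qualities" of a family of models, the following sketch (illustrative only; this particular construction is not taken from the paper) mixes Laplace-style Markov predictors of several orders with positive prior weights. A fixed-weight mixture assigns each string at least a constant fraction of the probability any component assigns it, which is the basic mechanism behind Bayesian-mixture predictors.

```python
import math
from typing import Dict, Sequence

def markov_laplace(history: Sequence[int], order: int, alphabet=(0, 1)) -> Dict[int, float]:
    """Laplace-smoothed Markov predictor of a given order over a finite alphabet."""
    context = tuple(history[-order:]) if order > 0 else ()
    counts = {a: 1 for a in alphabet}  # add-one smoothing
    for t in range(order, len(history)):
        if tuple(history[t - order:t]) == context:
            counts[history[t]] += 1
    total = sum(counts.values())
    return {a: counts[a] / total for a in alphabet}

def mixture_predictor(history: Sequence[int], orders=(0, 1, 2), alphabet=(0, 1)) -> Dict[int, float]:
    """Bayesian mixture of the Markov predictors: component weights proportional to the
    probability each component assigned to the history observed so far."""
    log_evidence = {}
    for k in orders:
        ll = 0.0
        for t in range(len(history)):
            ll += math.log(markov_laplace(history[:t], k, alphabet)[history[t]])
        log_evidence[k] = ll - math.log(len(orders))   # uniform prior over orders
    m = max(log_evidence.values())
    weights = {k: math.exp(v - m) for k, v in log_evidence.items()}
    z = sum(weights.values())
    mix = {a: 0.0 for a in alphabet}
    for k, w in weights.items():
        pred = markov_laplace(history, k, alphabet)
        for a in alphabet:
            mix[a] += (w / z) * pred[a]
    return mix

print(mixture_predictor([0, 1, 0, 1, 0, 1, 0, 1]))  # favours the order-1 component
```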

Of course, Markov processes were just mentioned as an example, while in this work we are only concerned with the most general case of arbitrary (uncountable) sets of process measures.

The following statement is rather obvious.

Proposition 6

Any solution to Problem 3 is a solution to Problem 2, and any solution to Problem 2 is a solution to Problem 1.

Despite the conceptual differences in formulations, it may be somewhat unclear whether the three problems are indeed different. It appears that this depends on the measure of predictive quality chosen: for the case of prediction in total variation distance all the three problems coincide, while for the case of prediction in expected average KL divergence they are different.

4 Prediction in Total Variation

As it was mentioned, a measure $\mu$ is absolutely continuous with respect to a measure $\rho$ if and only if $\rho$ predicts $\mu$ in total variation distance. This reduces studying at least Problem 1 for total variation distance to studying the relation of absolute continuity, for which we use the notation $\ge_{tv}$ introduced in Section 2.

Let us briefly recall some facts we know about $\ge_{tv}$; details can be found, for example, in (Plesner and Rokhlin, 1946). Let $[\mathcal{P}]_{tv}$ denote the set of equivalence classes of $\mathcal{P}$ with respect to $\ge_{tv}$, and for $\mu\in\mathcal{P}$ let $[\mu]_{tv}$ denote the equivalence class that contains $\mu$. Two elements $\sigma_1,\sigma_2\in[\mathcal{P}]_{tv}$ (or $\mu_1,\mu_2\in\mathcal{P}$) are called disjoint (or singular) if there is no $\nu\in[\mathcal{P}]_{tv}$ such that $\sigma_1\ge_{tv}\nu$ and $\sigma_2\ge_{tv}\nu$; in this case we write $\sigma_1\perp_{tv}\sigma_2$. We write $[\mu_1]_{tv}+[\mu_2]_{tv}$ for $[\frac12(\mu_1+\mu_2)]_{tv}$. Every pair $\sigma_1,\sigma_2\in[\mathcal{P}]_{tv}$ has a supremum $\sup(\sigma_1,\sigma_2)=\sigma_1+\sigma_2$. Introducing into $[\mathcal{P}]_{tv}$ an extra element $0$ such that $\sigma\ge_{tv}0$ for all $\sigma\in[\mathcal{P}]_{tv}$, we can state that for every $\rho,\mu\in\mathcal{P}$ there exists a unique pair of elements $\sigma_a$ and $\sigma_s$ such that $[\mu]_{tv}=\sigma_a+\sigma_s$, $[\rho]_{tv}\ge_{tv}\sigma_a$ and $[\rho]_{tv}\perp_{tv}\sigma_s$. (This is a form of Lebesgue decomposition.) Moreover, $\sigma_a=\inf([\mu]_{tv},[\rho]_{tv})$. Thus, every pair of elements has a supremum and an infimum. Moreover, every bounded set of disjoint elements of $[\mathcal{P}]_{tv}$ is at most countable.

Furthermore, we introduce the (unconditional) total variation distance between process measures.

Definition 7 (unconditional total variation distance)

The (unconditional) total variation distance is defined as

$$v(\mu,\rho) := \sup_{A\in\mathcal{F}} |\mu(A)-\rho(A)|.$$

Known characterizations of those sets $\mathcal{C}$ that are bounded with respect to $\ge_{tv}$ can now be related to our prediction Problems 1-3 as follows.

Theorem 8

Let $\mathcal{C}\subset\mathcal{P}$. The following statements about $\mathcal{C}$ are equivalent.

  • (i) There exists a solution to Problem 1 in total variation.

  • (ii) There exists a solution to Problem 2 in total variation.

  • (iii) There exists a solution to Problem 3 in total variation.

  • (iv) $\mathcal{C}$ is upper-bounded with respect to $\ge_{tv}$.

  • (v) There exists a sequence $\mu_k\in\mathcal{C}$, $k\in\mathbb{N}$, such that for some (equivalently, for every) sequence of weights $w_k\in(0,1]$, $k\in\mathbb{N}$, with $\sum_{k\in\mathbb{N}} w_k=1$, the measure $\nu=\sum_{k\in\mathbb{N}} w_k\mu_k$ satisfies $\nu\ge_{tv}\mu$ for every $\mu\in\mathcal{C}$.

  • (vi) $\mathcal{C}$ is separable with respect to the total variation distance.

  • (vii) Let $\mathcal{C}^+:=\{\mu\in\mathcal{P} : \exists\rho\in\mathcal{C}\ \rho\ge_{tv}\mu\}$. Every disjoint (with respect to $\ge_{tv}$) subset of $\mathcal{C}^+$ is at most countable.

Moreover, every solution to any of the Problems 1-3 is a solution to the other two, as is any upper bound for $\mathcal{C}$. The sequence $\mu_k$, $k\in\mathbb{N}$, in statement (v) can be taken to be any dense (in the total variation distance) countable subset of $\mathcal{C}$ (cf. (vi)), or any maximal disjoint (with respect to $\ge_{tv}$) subset of $\mathcal{C}^+$ of statement (vii), in which every measure that is not in $\mathcal{C}$ is replaced by any measure from $\mathcal{C}$ that dominates it.

Proof  The implications are obvious (cf. Proposition 6). The implication is a reformulation of the result of Blackwell and Dubins (1962). The converse (and hence ) was established in (Kalai and Lehrer, 1994). follows from the equivalence and the transitivity of ; follows from the transitivity of and from Lemma 9 below: indeed, from Lemma 9 we have if and otherwise. From this and the transitivity of it follows that if then also for all . The equivalence of , , and was established in (Ryabko, 2010a). The equivalence of and was proven in (Plesner and Rokhlin, 1946). The concluding statements of the theorem are easy to demonstrate from the results cited above.  

The following lemma is an easy consequence of (Blackwell and Dubins, 1962).

Lemma 9

Let $\mu,\rho$ be two process measures. Then $v(\mu,\rho,x_{1..n})$ converges to either 0 or 1 with $\mu$-probability 1.


Proof  Assume that is not absolutely continuous with respect to (the other case is covered by (Blackwell and Dubins, 1962)). By Lebesgue decomposition theorem, the measure admits a representation where and the measures and are such that is absolutely continuous with respect to and is singular with respect to . Let be such a set that and . Note that we can take and . From (Blackwell and Dubins, 1962) we have -a.s., as well as -a.s. and -a.s. Moreover, so that -a.s. Furthermore,

and

We have -a.s. and hence -a.s., as well as -a.s. and hence -a.s. Thus, , which concludes the proof.  

Remark. Using Lemma 9 we can also define the expected (rather than almost sure) total variation loss of $\rho$ with respect to $\mu$, as the $\mu$-probability that $v(\mu,\rho,x_{1..n})$ converges to 1:

Then Problem 3 can be reformulated for this notion of loss. However, it is easy to see that for this reformulation Theorem 8 holds true as well.

Thus, we can see that, for the case of prediction in total variation, all the sequence prediction problems formulated reduce to studying the relation of absolute continuity for process measures and those families of measures that are absolutely continuous (have a density) with respect to some measure (a predictor). On the one hand, from a statistical point of view such families are rather large: the assumption that the probabilistic law in question has a density with respect to some (nice) measure is a standard one in statistics. It should also be mentioned that such families can easily be uncountable. (In particular, this means that they are large from a computational point of view.) On the other hand, even such a basic example as the set of all Bernoulli i.i.d. measures does not allow for a predictor that predicts every measure in total variation (as explained in Section 2).

That is why we have to consider weaker notions of prediction; among these, prediction in expected average KL divergence is perhaps one of the weakest. The goal of the next sections is to see which of the properties that we have for total variation can be transferred (and in which sense) to the case of expected average KL divergence.

5 Prediction in Expected Average KL Divergence

First of all, we have to observe that for prediction in KL divergence Problems 1, 2, and 3 are different, as the following theorem shows. While the examples provided in the proof are artificial, there is a very important example illustrating the difference between Problem 1 and Problem 3 for expected average KL divergence: the set of all stationary processes, given in Theorem 16 at the end of this section.

Theorem 10

For the case of prediction in expected average KL divergence, Problems 1, 2 and 3 are different: there exists a set of process measures for which there is a solution to Problem 1 but there is no solution to Problem 2, and there is a set of process measures for which there is a solution to Problem 2 but there is no solution to Problem 3.

Proof  We have to provide two examples. Fix the binary alphabet . For each deterministic sequence construct the process measure as follows: and for let , for all . That is, is Bernoulli i.i.d. 1/2 process measure strongly biased towards a specific deterministic sequence, . Let also for all , (the Bernoulli i.i.d. 1/2). For the set we have a solution to Problem 1: indeed, . However, there is no solution to Problem 2. Indeed, for each we have (that is, for every deterministic measure there is an element of which predicts it), while by Lemma 4 for every there exists such that for all (that is, there is no predictor which predicts every measure that is predicted by at least one element of ).

The second example is similar. For each deterministic sequence construct the process measure as follows: and for let , for all . It is easy to see that is a solution to Problem 2 for the set . Indeed, if is such that then we must have . From this and the fact that and coincide (up to ) on all other sequences we conclude . However, there is no solution to Problem 3 for . Indeed, for every we have . Therefore, if is a solution to Problem 3 then which contradicts Lemma 4.  

Thus, prediction in expected average KL divergence turns out to be a more complicated matter than prediction in total variation. The next idea is to try and see which of the facts about prediction in total variation can be generalized to some of the problems concerning prediction in expected average KL divergence.

First, observe that, for the case of prediction in total variation, the equivalence of Problems 1 and 2 was derived from the transitivity of the relation $\ge_{tv}$ of absolute continuity. For the case of expected average KL divergence, the relation “$\rho$ predicts $\mu$ in expected average KL divergence” is not transitive (and Problems 1 and 2 are not equivalent). However, for Problem 2 we are interested in the following relation: $\rho$ “dominates” $\mu$ if $\rho$ predicts every $\nu$ such that $\mu$ predicts $\nu$. Denote this relation by $\ge_{KL}$:

Definition 11 ($\ge_{KL}$)

We write $\rho\ge_{KL}\mu$ if for every $\nu\in\mathcal{P}$ the equality $\lim_{n\to\infty}\frac{1}{n}d_n(\nu,\mu)=0$ implies $\lim_{n\to\infty}\frac{1}{n}d_n(\nu,\rho)=0$.

The relation $\ge_{KL}$ has some similarities with $\ge_{tv}$. First of all, $\ge_{KL}$ is also transitive (as can be easily seen from the definition). Moreover, similarly to $\ge_{tv}$, one can show that for any $\mu_1,\mu_2$ any strictly convex combination $\alpha\mu_1+(1-\alpha)\mu_2$ is a supremum of $\{\mu_1,\mu_2\}$ with respect to $\ge_{KL}$. Next we will obtain a characterization of predictability with respect to $\ge_{KL}$ similar to one of those obtained for $\ge_{tv}$.

The key observation is the following. If there is a solution to Problem 2 for a set $\mathcal{C}$, then a solution can be obtained as a Bayesian mixture over a countable subset of $\mathcal{C}$. For total variation this is the statement of Theorem 8.

Theorem 12

Let $\mathcal{C}$ be a set of probability measures on $\Omega$. If there is a measure $\rho$ such that $\rho\ge_{KL}\mu$ for every $\mu\in\mathcal{C}$ ($\rho$ is a solution to Problem 2) then there is a sequence $\mu_k\in\mathcal{C}$, $k\in\mathbb{N}$, such that $\sum_{k\in\mathbb{N}} w_k\mu_k \ge_{KL}\mu$ for every $\mu\in\mathcal{C}$, where $w_k$ are some positive weights.

The proof is deferred to Section 7. An analogous result for Problem 1 was established in (Ryabko, 2009). (The proof of Theorem 12 is based on similar ideas, but is more involved.)

For the case of Problem 3, we do not have results similar to Theorem 12 (or the corresponding statement of Theorem 8); in fact, we conjecture that the opposite is true: there exists a (measurable) set $\mathcal{C}$ of measures such that there is a solution to Problem 3 for $\mathcal{C}$, but there is no Bayesian solution to Problem 3, meaning that there is no probability distribution on $\mathcal{C}$ (discrete or not) such that the mixture over $\mathcal{C}$ with respect to this distribution is a solution to Problem 3 for $\mathcal{C}$.

However, we can take a different route and extend another part of Theorem 8 to obtain a characterization of sets for which a solution to Problem 3 exists.

We have seen that, in the case of prediction in total variation, separability with respect to the topology of this distance is a necessary and sufficient condition for the existence of a solution to Problems 1-3. In the case of expected average KL divergence the situation is somewhat different, since, first of all, (asymptotic average) KL divergence is not a metric. While one can introduce a topology based on it, separability with respect to this topology turns out to be a sufficient but not a necessary condition for the existence of a predictor, as is shown in the next theorem.

Definition 13

Define the distance on process measures as follows

(7)

where we assume .

Clearly, is symmetric and satisfies the triangle inequality, but it is not exact. Moreover, for every we have

(8)

The distance measures the difference in behaviour of and on all individual sequences. Thus, using this distance to analyse Problem 3 is most close to the traditional approach to the non-realizable case, which is formulated in terms of predicting individual deterministic sequences.

Theorem 14
  • Let $\mathcal{C}$ be a set of process measures. If $\mathcal{C}$ is separable with respect to the distance of Definition 13, then there is a solution to Problem 3 for $\mathcal{C}$, for the case of prediction in expected average KL divergence.

  • There exists a set of process measures $\mathcal{C}$ such that $\mathcal{C}$ is not separable with respect to this distance, but there is a solution to Problem 3 for this set, for the case of prediction in expected average KL divergence.

Proof  For the first statement, let be separable and let be a dense countable subset of . Define , where are any positive summable weights. Fix any measure and any . We will show that . For every , find such a that . We have

From this, dividing by $n$ and taking the limit superior as $n\to\infty$ on both sides, we conclude

Since this holds for every the first statement is proven.

The second statement is proven by the following example. Let be the set of all deterministic sequences (measures concentrated on just one sequence) such that the number of 0s in the first symbols is less than , for all . Clearly, this set is uncountable. It is easy to check that implies for every , but the predictor , given by independently for different , predicts every in expected average KL divergence. Since all elements of are deterministic, is also a solution to Problem 3 for .  

Although simple, Theorem 14 can be used to establish the existence of a solution to Problem 3 for an important class of process measures: that of all processes with finite memory, as the next theorem shows. Results similar to Theorem 15 are known in different settings, e.g., (Ziv and Lempel, 1978; Ryabko, 1984; Cesa-Bianchi and Lugosi, 1999) and others.

Theorem 15

There exists a solution to Problem 3 for prediction in expected average KL divergence for the set of all finite-memory process measures $\mathcal{M}:=\bigcup_{k\in\mathbb{N}}\mathcal{M}_k$.

Proof  We will show that the set $\mathcal{M}$ is separable with respect to the distance of Definition 13. Then the statement will follow from Theorem 14. It is enough to show that each set $\mathcal{M}_k$ is separable with respect to this distance.

For simplicity, assume that the alphabet is binary ($X=\{0,1\}$; the general case is analogous). Observe that the family of $k$-order stationary binary-valued Markov processes is parametrized by $2^k$ $[0,1]$-valued parameters: the probability of observing 0 after observing $x_{1..k}$, for each of the $2^k$ possible values of $x_{1..k}$. Note that this parametrization is continuous (as a mapping from the parameter space with the Euclidean topology to $\mathcal{M}_k$ with the topology induced by the distance of Definition 13). Indeed, for any two such processes $\mu_1,\mu_2$ and every $x_{1..n}$ such that $\mu_i(x_{1..n})\ne 0$, $i=1,2$, it is easy to see that

(9)

so that the right-hand side of (9) also upper-bounds , implying continuity of the parametrization.

It follows that the set of all stationary $k$-order Markov processes with rational values of all the parameters is dense in $\mathcal{M}_k$, proving the separability of the latter set.  
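The density argument can be illustrated numerically. The sketch below (illustrative only; the parameter values are hypothetical) takes a first-order binary Markov predictor, rounds its transition probabilities to nearby rationals with a bounded denominator, and checks that the per-step KL divergence between the original and the rounded conditional distributions is uniformly small, which is the continuity property used in the proof.

```python
import math
from fractions import Fraction

def kl(p: float, q: float) -> float:
    """KL divergence between Bernoulli(p) and Bernoulli(q) (natural log)."""
    total = 0.0
    for a, b in ((p, q), (1 - p, 1 - q)):
        if a > 0:
            total += a * math.log(a / b)
    return total

# A first-order binary Markov process: P(next = 1 | previous symbol).
true_params = {0: 0.3141, 1: 0.7182}

# Round each parameter to a rational with a bounded denominator.
rational_params = {c: float(Fraction(p).limit_denominator(50)) for c, p in true_params.items()}

# The per-step KL between the two conditional distributions is small for every
# context, hence the expected average KL divergence is small as well.
worst = max(kl(true_params[c], rational_params[c]) for c in (0, 1))
print(rational_params, worst)  # worst-case per-step KL is tiny
```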

Another important example is the set $\mathcal{S}$ of all stationary process measures. This example also illustrates the difference between the prediction problems that we consider. For this set a solution to Problem 1 was given in (Ryabko, 1988). In contrast, here we show that there is no solution to Problem 3 for $\mathcal{S}$.

Theorem 16

There is no solution to Problem 3 for the set $\mathcal{S}$ of all stationary processes.

Proof  This proof is based on a construction similar to the one used in (Ryabko, 1988) to demonstrate the impossibility of consistent prediction of stationary processes without Cesaro averaging.

Let $(R_t)_{t\in\mathbb{N}}$ be a Markov chain with states $0,1,2,\dots$ and state transitions defined as follows. From each state $k$ the chain passes to the state $k+1$ with probability 2/3 and to the state 0 with probability 1/3. It is easy to see that this chain possesses a unique stationary distribution on the set of states (see, e.g., Shiryaev, 1996); taken as the initial distribution it defines a stationary ergodic process with values in $\{0,1,2,\dots\}$. Fix the ternary alphabet $X=\{a,b,c\}$. For each sequence $x=x_1x_2\dots\in\{a,b\}^\infty$ define the process $\mu_x$ as follows. It is a deterministic function of the chain $(R_t)$. If the chain is in the state 0 then the process outputs $c$; if the chain is in the state $k>0$ then the process outputs $x_k$. That is, we have defined a hidden Markov process which in the state 0 of the underlying Markov chain always outputs $c$, while in other states it outputs either $a$ or $b$ according to the sequence $x$.

To show that there is no solution to Problem 3 for , we will show that there is no solution to Problem 3 for the smaller set . Indeed, for any we have . Then if is a solution to Problem 3 for we should have for every , which contradicts Lemma 4.  

From the proof of Theorem 16 one can see that, in fact, the statement that is proven is stronger: there is no solution to Problem 3 for the set of all functions of stationary ergodic countable-state Markov chains. We conjecture that a solution to Problem 2 exists for the latter set, but not for the set of all stationary processes.

As we have seen in the statements above, the set $\mathcal{D}$ of all deterministic measures plays an important role in the analysis of the predictors in the sense of Problem 3. Therefore, an interesting question is to characterize those sets $\mathcal{C}$ of measures for which there is a predictor that predicts every individual sequence at least as well as any measure from $\mathcal{C}$. Such a characterization can be obtained in terms of Hausdorff dimension, using a result of Ryabko (1986), which shows that the Hausdorff dimension of a set characterizes the optimal prediction error that can be attained by any predictor.

For a set $A\subset X^\infty$ we denote by $\dim_H(A)$ its Hausdorff dimension (see, for example, (Billingsley, 1965) for its definition).

Theorem 17

Let $\mathcal{C}\subset\mathcal{P}$. The following statements are equivalent.

  • There is a measure $\rho$ that predicts every individual sequence at least as well as the best measure from $\mathcal{C}$: for every $\mu\in\mathcal{C}$ and every sequence $x_1,x_2,\dots\in X^\infty$ we have

    (10)
  • For every the Hausdorff dimension of the set of sequences on which the average prediction error of the best measure in is not greater than is bounded by :

    (11)

Proof  The implication follows directly from (Ryabko, 1986) where it is shown that for every measure one must have .

To show the opposite implication, we again refer to (Ryabko, 1986): for every set there is a measure such that

(12)

For each define By assumption, , so that from (12) for all we obtain

(13)

Furthermore, define , where is the set of rationals in and is any sequence of positive reals satisfying . For every let , be such a sequence that . Then, for every and every we have

From this and (13) we get

Since this holds for every , it follows that for all we have

which completes the proof of the implication .  

6 Discussion

It has been long realized that the so-called probabilistic and agnostic (adversarial, non-stochastic, deterministic) settings of the problem of sequential prediction are strongly related. This has been most evident from looking at the solutions to these problems, which are usually based on the same ideas. Here we have proposed a formulation of the agnostic problem as a non-realizable case of the probabilistic problem. While being very close to the traditional one, this setting allows us to directly compare the two problems. As a somewhat surprising result, we can see that whether the two problems are different depends on the measure of performance chosen: in the case of prediction in total variation distance they coincide, while in the case of prediction in expected average KL divergence they are different. In the latter case, the distinction becomes particularly apparent on the example of stationary processes: while a solution to the realizable problem has long been known, here we have shown that there is no solution to the agnostic version of this problem. This formalization also allowed us to introduce another problem that lies in between the realizable and the fully agnostic problems: given a class $\mathcal{C}$ of process measures, find a predictor whose predictions are asymptotically correct for every measure for which at least one of the measures in $\mathcal{C}$ gives asymptotically correct predictions (Problem 2). This problem is less restrictive than the fully agnostic one (in particular, it is not concerned with the behaviour of a predictor on every deterministic sequence) but at the same time the solutions to this problem have performance guarantees far outside the model class considered.

In essence, the formulation of Problem 2 suggests to assume that we have a set of models one of which is good enough to make predictions, with the goal of combining the predictive powers of these models. This is perhaps a good compromise between making modelling assumptions on the data (the data is generated by one of the models we have) and the fully agnostic, worst-case, setting.

Since the problem formulations presented here are mostly new (at least, in such a general form), it is not surprising that there are many questions left open. A promising route to obtain new results seems to be to first analyse the case of prediction in total variation, which amounts to studying the relation of absolute continuity and singularity of probability measures, and then to try and find analogues in less restrictive (and thus more interesting and difficult) cases of predicting only the next observation, possibly with Cesaro averaging. This is the approach that we took in this work. Here it is interesting to find properties common to all or most of the prediction problems (in total variation as well as with respect to other measures of performance), if this is at all possible. For example, the “countable Bayes” property of Theorem 12, which states that if there is a solution to a given sequence prediction problem for a set $\mathcal{C}$ then a solution can be obtained as a mixture over a suitable countable subset of $\mathcal{C}$, holds for Problems 1–3 in total variation, and for Problems 1 and 2 in KL divergence; however, we conjecture that it does not hold for Problem 3 in KL divergence.

It may also be interesting to study algebraic properties of the relation $\ge_{KL}$ that arises when studying Problem 2. We have shown that it shares some properties with $\ge_{tv}$, the relation of absolute continuity. Since the latter characterizes prediction in total variation and the former characterizes prediction in KL divergence (in the sense of Problem 2), which is much weaker, it would be interesting to see exactly which properties the two relations share.

Another direction for future research concerns finite-time performance analysis. In this work we have adopted the asymptotic approach to the prediction problem, ignoring the finite-time behaviour of predictors. While for prediction in total variation this is a natural choice, for other measures of performance, including average KL divergence, it is clear that Problems 1-3 admit non-asymptotic formulations. It would also be interesting to investigate the relations between the performance guarantees that can be obtained in non-asymptotic formulations of Problems 1–3.

7 Proof of Theorem 12

Proof  Define the sets as the set of all measures such that predicts in expected average KL divergence. Let . For each let be any (fixed) such that . In other words, is the set of all measures that are predicted by some of the measures in , and for each measure in we designate one “parent” measure from such that predicts .

Define the weights , for all .

Step 1. For each let be any monotonically increasing function such that and . Define the sets

(14)
(15)

and

(16)

We will upper-bound . First, using Markov’s inequality, we derive

(17)

Next, observe that for every and every set , using Jensen’s inequality we can obtain

(18)

Moreover,

where in the inequality we have used (15) for the first summand and (18) for the second. Thus,

(19)

From (16), (17) and (19) we conclude

(20)

Step 2n: a countable cover, time . Fix an . Define (since are finite all suprema are reached). Find any such that and let . For , let . If , let be any such that , and let ; otherwise let . Observe that (for each ) there is only a finite number of positive , since the set is finite; let be the largest index such that . Let

(21)

As a result of this construction, for every every and every using the definitions (16), (14) and (15) we obtain

(22)

Step 2: the resulting predictor. Finally, define

(23)

where is the i.i.d. measure with equal probabilities of all (that is, for every and every ). We will show that predicts every , and then in the end of the proof (Step r) we will show how to replace by a combination of a countable set of elements of (in fact, is just a regularizer which ensures that -probability of any word is never too close to 0).

Step 3: predicts every . Fix any