Learning Multiple Markov Chains via Adaptive Allocation

Mohammad Sadegh Talebi
SequeL Team, Inria Lille – Nord Europe
sadegh.talebi@inria.fr

Odalric-Ambrym Maillard
SequeL Team, Inria Lille – Nord Europe
Abstract

We study the problem of learning the transition matrices of a set of Markov chains from a single stream of observations on each chain. We assume that the Markov chains are ergodic but otherwise unknown. The learner can sample Markov chains sequentially to observe their states. The goal of the learner is to sequentially select various chains to learn transition matrices uniformly well with respect to some loss function. We introduce a notion of loss that naturally extends the squared loss for learning distributions to the case of Markov chains, and further characterize the notion of being uniformly good in all problem instances. We present a novel learning algorithm that efficiently balances exploration and exploitation intrinsic to this problem, without any prior knowledge of the chains. We provide finite-sample PAC-type guarantees on the performance of the algorithm. Further, we show that our algorithm asymptotically attains an optimal loss.

1 Introduction

We study a variant of the following sequential adaptive allocation problem: A learner is given a set of arms, where to each arm an unknown real-valued distribution with mean and variance is associated. At each round, the learner must select an arm and receives a sample drawn from its distribution. Given a total budget of pulls, the objective is to estimate the expected values of all distributions uniformly well. The quality of estimation is classically measured through the expected quadratic estimation error of the empirical mean estimate built from the samples received from each arm, and the performance of an allocation strategy is the maximal such error over arms. Using ideas from the Multi-Armed Bandit (MAB) literature, previous works (e.g., antos2008active ; carpentier2011upper ) have provided optimistic sampling strategies with near-optimal performance guarantees for this setup.
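To make this i.i.d. baseline concrete, the sketch below (in Python, with our own notation: sigma2 for the variances, budget for the number of pulls) shows the oracle allocation that the adaptive strategies cited above try to track; it is an illustration under our reading of the setup, not the cited algorithms.

import numpy as np

# A minimal sketch (not the cited algorithms): with known variances sigma2[k],
# pulling arm k roughly budget * sigma2[k] / sum(sigma2) times equalizes the
# per-arm errors sigma2[k] / n_k, which is the allocation that adaptive
# (UCB-based) strategies try to track without knowing the variances.
def oracle_allocation(sigma2, budget):
    sigma2 = np.asarray(sigma2, dtype=float)
    alloc = np.maximum(1, np.round(budget * sigma2 / sigma2.sum())).astype(int)
    max_error = float(np.max(sigma2 / alloc))  # maximal expected quadratic error
    return alloc, max_error

alloc, err = oracle_allocation([0.1, 0.5, 1.0], budget=1000)  # alloc ~ [62, 312, 625]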

This generic adaptive allocation problem is related to several application problems arising in optimal experiment design fedorov1972theory ; dror2008sequential , active learning cohn1996active , or Monte-Carlo methods etore2010adaptive ; we refer to antos2008active ; antos2010active ; carpentier2011upper ; carpentier2015adaptive and references therein for further motivation. We extend this line of work to the case where each process is a discrete Markov chain, hence introducing the problem of active bandit learning of Markov chains. More precisely, we no longer assume that are real-valued distributions, but we study the case where each is a discrete Markov process over a state space . The law of the observations on arm (or chain) is given by , where denotes the initial distribution of states, and is the transition function of the Markov chain. The goal of the learner is to learn the transition matrices uniformly well on the chains. Note that the chains are not controlled (we only decide which chain to advance, not the states it transits to).

Before discussing the challenges of the extension to Markov chains, let us further comment on the performance measure considered in bandit allocation for real-valued distributions. Using the expected quadratic estimation error on each arm makes sense since, when the allocations are deterministic, it coincides with the variance divided by the number of pulls, thus suggesting to pull the distributions proportionally to their variances. However, for a learning strategy, the number of pulls typically depends on all past observations. The analyses in this line of work rely on Wald’s second identity as the key technical device and depend heavily on the quadratic loss criterion, which prevents one from extending the approach to other distances. Another peculiarity arising when working with expectations is the order of the “max” and “expectation” operators. While it makes more sense to control the expected value of the maximum, the works cited above control the maximum of the expected value, which is more in line with a pseudo-loss than with the loss; indeed, extensions of these works consider a pseudo-loss instead of this performance measure. As we show, all of these difficulties can be avoided by resorting to a high-probability setup. Hence, in this paper, we deviate from using an expected loss criterion and rather use a high-probability control. We formally define our performance criterion in Section 2.3.

1.1 Related Work

On the one hand, our setup can be framed into the line of works on active bandit allocation, considered for the estimation of reward distributions in MABs as introduced in antos2008active ; antos2010active , and further studied in carpentier2011upper ; neufeld2014adaptive . This has been extended to stratified sampling for Monte-Carlo methods in carpentier2012adaptive ; carpentier2015adaptive , or to continuous mean functions in, e.g., carpentier2012online . On the other hand, our extension from real-valued distributions to Markov chains can be framed into the rich literature on Markov chain estimation; see, e.g., billingsley1961statistical ; kipnis1986central ; haviv1984perturbation ; welton2005estimation ; craig2002estimation ; meyn2012markov . This stream of works extends a wide range of results from the i.i.d. case to the Markov case. These include, for instance, the law of large numbers for (functions of) state values meyn2012markov , the central limit theorem for Markov sequences kipnis1986central (see also meyn2012markov ; rio2017asymptotic ), and Chernoff-type or Bernstein-type concentration inequalities for Markov sequences lezaud1998chernoff ; paulin2015concentration . Note that the majority of these results are available for ergodic Markov chains.

Another stream of research on Markov chains, which is more relevant to our work, investigates learning and estimation of the transition matrix (as opposed to its full law); see, e.g., craig2002estimation ; welton2005estimation ; wolfer2019minimax ; hao2018learning . Among the recent studies falling in this category, hao2018learning investigates learning of the transition matrix with respect to a loss function induced by -divergences in a minimax setup, thus extending kamath2015learning to the case of Markov chains. wolfer2019minimax derives a PAC-type bound for learning the transition matrix of an ergodic Markov chain with respect to the total variation loss. It further provides a matching lower bound. Among the existing literature on learning Markov chains, to the best of our knowledge, wolfer2019minimax is the closest to ours. There are however two aspects distinguishing our work: Firstly, the challenge in our problem resides in dealing with multiple Markov chains, which is present neither in wolfer2019minimax nor in the other studies cited above. Secondly, our notion of loss does not coincide with that considered in wolfer2019minimax , and hence, the lower bound of wolfer2019minimax does not apply to our case.

Among the results dealing with multiple chains, we may refer to learning in the Markovian bandits setup ortner2014regret ; tekin2012online ; dance2019optimal . Most of these studies address the problem of reward maximization over a finite time horizon. We also mention that in a recent study, tarbouriech2019active introduces the so-called active exploration in Markov decision processes, where the transition kernel is known, and the goal is rather to learn the mean reward associated with various states. To the best of our knowledge, none of these works address the problem of learning the transition matrix. Last, as we target high-probability performance bounds (as opposed to those holding in expectation), our approach is naturally linked to the Probably Approximately Correct (PAC) analysis. kearns1994learnability provides one of the first PAC bounds for learning discrete distributions. Since then, the problem of learning discrete distributions has been well studied; see, e.g., gamarnik2003extension ; jiao2015minimax ; kamath2015learning and references therein. We refer to kamath2015learning for a rather complete characterization of learning distributions in a minimax setting under a large class of smooth loss functions. We remark that except for very few studies (e.g., gamarnik2003extension ), most of these results are provided for discrete distributions.

1.2 Overview and Contributions

Our contributions are the following: (i) For the problem of learning Markov chains, we consider a notion of loss function that appropriately extends the loss function for learning distributions to the case of Markov chains. Our notion of loss is similar to that considered in hao2018learning (we refer to Section 2.3 for a comparison between our notion and the one in hao2018learning ). In contrast to existing works on similar bandit allocation problems, our loss function avoids technical difficulties faced when extending the squared loss function to this setup. We further characterize the notion of a “uniformly good algorithm” under the considered loss function for ergodic chains; (ii) We present an optimistic algorithm, called BA-MC, for active learning of Markov chains, which is simple to implement and does not require any prior knowledge of the chains. To the best of our knowledge, this constitutes the first algorithm for active bandit allocation for learning Markov chains; (iii) We provide non-asymptotic PAC-type bounds, as well as asymptotic bounds, on the loss incurred by BA-MC, indicating three regimes. In the first regime, which holds for any budget , we present (in Theorem 1) a high-probability bound on the loss scaling as , where hides factors. Here, and respectively denote the number of chains and the number of states in a given chain. This result holds for homogeneous Markov chains. We then characterize a cut-off budget (in Theorem 2) so that when , the loss behaves as , where denotes the sum of variances of all states and all chains, and where denotes the transition probability of chain . This latter bound constitutes the second regime, in view of the fact that equals the asymptotically optimal loss (see Section 2.4 for more details). Thus, this bound indicates that the pseudo-excess loss incurred by the algorithm vanishes at a rate (we refer to Section 4 for a more precise definition). Furthermore, we carefully characterize the constant . In particular, we discuss that does not deteriorate with the mixing times of the chains, which, we believe, is a strong feature of our algorithm. We also discuss how various properties of the chains, e.g., discrepancies between the stationary probabilities of various states of a given chain, may impact the learning performance. Finally, we demonstrate a third regime, the asymptotic one, when the budget grows large, in which we show (in Theorem 3) that the loss of BA-MC matches the asymptotically optimal loss . All proofs are provided in the supplementary material.

Markov chains have been successfully used for modeling a broad range of practical problems, and this success makes the problem studied in this paper practically relevant. There are applications in reinforcement learning (e.g., active exploration in MDPs tarbouriech2019active ) and in rested Markov bandits (e.g., channel allocation in wireless communication systems, where a given channel's state follows a Markov chain, for example in the Gilbert-Elliott channels mushkin1989capacity ), for which we believe our contributions could serve as a technical tool.

2 Preliminaries and Problem Statement

2.1 Preliminaries

Before describing our model, we recall some preliminaries on Markov chains; these are standard definitions and results, and can be found in, e.g., norris1998markov ; levin2009markov . Consider a Markov chain defined on a finite state space with cardinality . Let denote the collection of all row-stochastic matrices over . The Markov chain is specified by its transition matrix and its initial distribution : For all , denotes the probability of transition to if the current state is . In what follows, we may refer to a chain by just referring to its transition matrix.

We recall that a Markov chain is ergodic if (entry-wise) for some . If is ergodic, then it has a unique stationary distribution satisfying . Moreover . A chain is said to be reversible if its stationary distribution satisfies the detailed balance equations: For all , . Otherwise, is called non-reversible. For a Markov chain , the largest eigenvalue is (with multiplicity one). In a reversible chain , all eigenvalues belong to . We define the absolute spectral gap of a reversible chain as , where denotes the second largest (in absolute value) eigenvalue of . If is reversible, the absolute spectral gap controls the convergence rate of the state distributions of the chain towards the stationary distribution . If is non-reversible, the convergence rate is instead determined by the pseudo-spectral gap introduced in paulin2015concentration : defining the time reversal of through the stationary distribution, the pseudo-spectral gap is the maximum over of the spectral gap of the product of the -th power of the time reversal and the -th power of , divided by .
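For concreteness, the following sketch computes, for a reversible chain, the stationary distribution and the absolute spectral gap recalled above; the function names and the example matrix are ours, and the pseudo-spectral gap of non-reversible chains is not implemented here.

import numpy as np

# stationary_distribution extracts the left eigenvector of P associated with
# eigenvalue 1; absolute_spectral_gap returns 1 minus the second largest
# eigenvalue modulus (meaningful for reversible chains).
def stationary_distribution(P):
    evals, evecs = np.linalg.eig(P.T)
    idx = int(np.argmin(np.abs(evals - 1.0)))
    pi = np.real(evecs[:, idx])
    return pi / pi.sum()

def absolute_spectral_gap(P):
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - moduli[1]

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = stationary_distribution(P)    # ~ [2/3, 1/3]
gap = absolute_spectral_gap(P)     # = 0.3 here (eigenvalues 1 and 0.7)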

2.2 Model and Problem Statement

We are now ready to describe our model. We consider a learner interacting with a finite set of Markov chains indexed by . For ease of presentation, we assume that all Markov chains are defined on the same state space with cardinality (our algorithm and results extend straightforwardly to the case where the Markov chains are defined on different state spaces). The Markov chain , or chain for short, is specified by its transition matrix . In this work, we assume that all Markov chains are ergodic, which implies that any chain admits a unique stationary distribution, which we denote by . Moreover, the minimal element of is bounded away from zero: . The initial distributions of the chains are assumed to be arbitrary. Further, we let denote the absolute spectral gap of chain if is reversible, and its pseudo-spectral gap otherwise.

A related quantity in our results is the Gini index of the various states. For a chain , the Gini index for state is defined as

Note that . This upper bound follows from the fact that the maximal value of is achieved when for all (in view of the concavity of ). In this work, we assume that for all , . (We remark that there exist chains violating this assumption. In view of the definition of the Gini index, such chains are necessarily deterministic (or degenerate), namely their transition matrices belong to ; one example is a deterministic cycle with nodes. We note that such chains may fail to satisfy irreducibility or aperiodicity.) Another related quantity in our results is the sum (over states) of inverse stationary distributions: For a chain , we define . Note that . The quantity reflects the discrepancy between the individual elements of .
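As an illustration, the sketch below computes the two quantities just introduced, assuming the Gini index of a state is the usual sum over next states of p(1-p) (maximized, at 1 - 1/S, by a uniform row) and that the second quantity is the sum over states of the inverse stationary probabilities; both assumptions are our reading of the elided definitions.

import numpy as np

# gini_index(P)[s] = sum_{s'} P[s, s'] * (1 - P[s, s']), one value per state s;
# inverse_stationary_sum(pi) = sum_s 1 / pi[s], which is at least S^2 and grows
# when pi is unbalanced.
def gini_index(P):
    return np.sum(P * (1.0 - P), axis=1)

def inverse_stationary_sum(pi):
    return float(np.sum(1.0 / np.asarray(pi)))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
G = gini_index(P)                            # [0.18, 0.32]
K = inverse_stationary_sum([2 / 3, 1 / 3])   # 4.5 >= S^2 = 4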

The online learning problem.

The learner wishes to design a sequential allocation strategy to adaptively sample the various Markov chains so that all transition matrices are learnt uniformly well. The game proceeds as follows: Initially all chains are assumed to be non-stationary with arbitrary initial distributions chosen by the environment. At each step , the learner samples a chain , based on the past decisions and the observed states, and observes the state . The state of the selected chain evolves according to its transition matrix, while the states of the chains that are not selected do not change.

We introduce the following notation: Let denote the number of times chain is selected by the learner up to time : , where denotes the indicator function. Likewise, we let represent the number of observations of chain , up to time , for which the chain was in state . We note that the learner only controls (or equivalently, ), but not the number of visits to individual states. At each step , the learner maintains empirical estimates of the stationary distributions and of the transition probabilities of the various chains, based on the observations gathered up to . We define the empirical stationary distribution of chain at time as for all . For chain , we maintain the following smoothed estimate of the transition probabilities:

(1)

where is a positive constant. In the literature, the case of is usually referred to as the Laplace-smoothed estimator. The learner is given a budget of samples, and her goal is to obtain an accurate estimation of transition matrices of the Markov chains. The accuracy of the estimation is determined by some notion of loss, which will be discussed later. The learner adaptively selects various chains so that the minimal loss is achieved.
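A minimal sketch of the estimators described above follows. Since Eq. (1) is not reproduced in the text, we assume the common (count + alpha) / (visits + alpha * S) smoothing, with alpha = 1 recovering the Laplace-smoothed estimator, and we also compute the empirical state distribution used as weights later on; names are ours.

import numpy as np

# Assumed smoothing form: p_hat(s, s') = (count(s, s') + alpha) / (visits(s) + alpha * S);
# pi_hat is the empirical state distribution along the observed trajectory.
def estimate_chain(trajectory, S, alpha=1.0):
    counts = np.zeros((S, S))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1
    visits = counts.sum(axis=1)
    P_hat = (counts + alpha) / (visits[:, None] + alpha * S)
    pi_hat = np.bincount(np.asarray(trajectory), minlength=S) / len(trajectory)
    return P_hat, pi_hat

P_hat, pi_hat = estimate_chain([0, 0, 1, 0, 1, 1, 0], S=2)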

2.3 Performance Measures

We are now ready to provide a precise definition of our notion of loss, which would serve as the performance measure of a given algorithm. Given , we define the loss of an adaptive algorithm as:

The use of the -norm in the definition of loss is quite natural in the context of learning and estimation of distributions, as it is directly inspired by the quadratic estimation error used in active bandit allocation (e.g., carpentier2011upper ). Given a budget , the loss of an adaptive algorithm is a random variable, due to the evolution of the various chains as well as the possible randomization in the algorithm. Here, we aim at controlling this random quantity in a high probability setup as follows: Let . For a given algorithm , we wish to find such that

(2)
Remark 1

We remark that the empirical stationary distribution may differ from the stationary distribution associated with the smoothed estimator of the transition matrix. Our algorithm and results, however, do not rely on any relation between and , though one could also have used smoothed estimators for . The motivation behind using the empirical estimate of in is that it naturally corresponds to the occupancy of the various states along a given sample path.
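Under our reading of the loss described above (row-wise squared 2-norm errors weighted by the empirical state frequencies, maximized over chains), a minimal sketch of its computation is as follows; the function and variable names are ours.

import numpy as np

# chain_loss computes, for one chain, the squared 2-norm error of each estimated
# row weighted by the empirical state frequencies; algorithm_loss takes the
# maximum over chains.
def chain_loss(P_true, P_hat, pi_hat):
    row_errors = np.sum((P_hat - P_true) ** 2, axis=1)   # ||P_hat(s, .) - P(s, .)||_2^2
    return float(np.dot(pi_hat, row_errors))

def algorithm_loss(chains_true, chains_hat, pis_hat):
    return max(chain_loss(P, Ph, pih)
               for P, Ph, pih in zip(chains_true, chains_hat, pis_hat))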

Comparison with other losses.

We now turn our attention to the comparison between our loss function and some other possible notions. First, we compare ours to the loss function . Such a notion of loss might look more natural or simpler, since the weights are simply replaced with (equivalently, uniform weights). However, this means a strategy may incur a high loss for a part of the state space that is rarely visited, even though we have absolutely no control over the chain. For instance, in the extreme case when some states are reachable with a very small probability, may be arbitrarily small, thus resulting in a large loss for all algorithms, while it makes little sense to penalize an allocation strategy for these “virtual” states. Weighting the loss according to the empirical frequency of visits avoids such a phenomenon, and is thus more meaningful.

In view of the above discussion, it is also tempting to replace the empirical state distribution with its expectation , namely to define a pseudo-loss function of the form (as studied in, e.g., hao2018learning in a different setup). We recall that our aim is to derive performance guarantees on the algorithm’s loss that hold with high probability (for a fraction of the sample paths of the algorithm, for a given confidence parameter). To this end, (which uses ) is more natural and meaningful than , as penalizes the algorithm’s performance by the relative visit counts of various states in a given sample path (through ), and not by the expected value of these counts. This matters a lot in the small-budget regime, where could differ significantly from ; when is large enough, becomes well-concentrated around with high probability. To clarify further, let us consider the small-budget regime, and some state where is not small. In the case of , using we penalize the performance by the mismatch between and , weighted proportionally to the number of rounds the algorithm has actually visited . In contrast, in the case of , weighting the mismatch proportionally to does not seem reasonable, since in a given sample path the algorithm might not have visited enough even though is not small. We remark that our results in subsequent sections easily apply to the pseudo-loss , at the expense of an additive second-order term, which might depend on the mixing times.

Finally, we position the high-probability guarantee on , in the sense of Eq. (2), against those holding in expectation. Prior studies on bandit allocation, such as antos2010active ; carpentier2011upper , whose objectives involve a max operator, consider the expected squared distance. The analyses presented in this line of work rely on Wald’s second identity as the key technical device, which prevents one from extending the approach therein to other distances. Another peculiarity arising when working with expectations is the order of the “max” and “expectation” operators. While it makes more sense to control the expected value of the maximum, the works cited above look at the maximum of the expected value, which is more in line with a pseudo-loss definition than with the loss. All of these difficulties can be avoided by resorting to a high-probability setup (in the sense of Eq. (2)).

Further intuition and example.

We now provide an illustrative example to further clarify some of the above comments. Let us consider the following two-state Markov chain: , where . The stationary distribution of this Markov chain is . Let (resp. ) denote the state corresponding to the first (resp. second) row of the transition matrix. In view of , when , the chain tends to stay in (the lazy state) most of the time: Out of observations, one gets on average only observations of state , which means, for , essentially no observation of state . Hence, no algorithm can estimate the transitions from in such a setup, and all strategies would suffer a huge loss according to , no matter how samples are allocated to this chain. Thus, has limited value for distinguishing between good and bad sampling strategies. On the other hand, using enables one to better distinguish between allocation strategies, since the weight given to would be essentially in this case, thus focusing on the good estimation of (and the other chains) only.
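The following toy simulation illustrates the point; the two-state matrix below is our own choice, exhibiting the qualitative behavior described above (the exact instance in the text is not reproduced here).

import numpy as np

# For small eta the chain spends almost all its time in state 0, so the empirical
# weight of state 1 is of order eta and the frequency-weighted loss barely
# penalizes the (essentially unobservable) row of state 1.
eta = 1e-3
P = np.array([[1 - eta, eta],
              [0.5,     0.5]])

rng = np.random.default_rng(0)
n, s, visits = 10_000, 0, np.zeros(2)
for _ in range(n):
    visits[s] += 1
    s = rng.choice(2, p=P[s])
pi_hat = visits / n   # typically ~ [0.998, 0.002]: state 1 is almost never visited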

2.4 Static Allocation

In this subsection, we investigate the optimal loss asymptotically achievable by an oracle policy that is aware of some properties of the chains. To this aim, let us consider a non-adaptive strategy where sampling of various chains is deterministic. Therefore, are not random. The following lemma is a consequence of the central limit theorem:

Lemma 1

We have for any chain :

The proof of this lemma consists of two steps: First, we provide lower and upper bounds on in terms of the loss incurred by the learner had she used an empirical estimator (corresponding to in (1)). Second, we show that by the central limit theorem, .

Now, consider an oracle policy that is aware of for the various chains. In view of the above discussion, and taking into account the constraint , it would be asymptotically optimal to allocate samples to chain , where

The corresponding loss would satisfy . We shall refer to the quantity as the asymptotically optimal loss, which is a problem-dependent quantity. The coefficients characterize the discrepancy between the transition matrices of the various chains, and indicate that an algorithm needs to account for such discrepancy in order to achieve the asymptotically optimal loss (this allocation is illustrated in the sketch following Definition 1 below). Having characterized the asymptotically optimal loss, we are now ready to define the notion of a uniformly good algorithm:

Definition 1 (Uniformly Good Algorithm)

An algorithm is said to be uniformly good if, for any problem instance, it achieves the asymptotically optimal loss when grows large; that is, for all problem instances.
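To make the oracle allocation of this subsection concrete, here is a minimal sketch. It assumes, as suggested by Lemma 1, that the loss of a chain decays like the sum of its Gini indices divided by its number of samples, so that equalizing per-chain losses allocates the budget proportionally to these sums; the names and the example are ours.

import numpy as np

# Assumption: chain m's loss decays like Lambda_m / n_m, where Lambda_m is the sum
# of the Gini indices of its states. Equalizing per-chain losses under the budget
# constraint then allocates n_m proportionally to Lambda_m, and the resulting
# max-loss is sum_m Lambda_m / budget.
def oracle_static_allocation(chains, budget):
    lambdas = np.array([np.sum(P * (1.0 - P)) for P in chains])
    alloc = np.maximum(2, np.round(budget * lambdas / lambdas.sum())).astype(int)
    asymptotic_loss = float(lambdas.sum() / budget)
    return alloc, asymptotic_loss

chains = [np.array([[0.99, 0.01], [0.01, 0.99]]),   # near-deterministic rows: easy
          np.array([[0.5, 0.5], [0.5, 0.5]])]       # uniform rows: hard
alloc, loss = oracle_static_allocation(chains, budget=1000)   # alloc ~ [38, 962]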

3 The Ba-Mc Algorithm

In this section, we introduce an algorithm designed for adaptive bandit allocation over a set of Markov chains. It is designed based on the optimistic principle, as in MAB problems (e.g., lai1985asymptotically ; auer2002finite ), and relies on an index function. More precisely, at each time , the algorithm maintains an index function for each chain , which provides an upper confidence bound (UCB) on the loss incurred by at : with high probability, , where denotes the smoothed estimate of with some (see Eq. (1)). By sampling, at each time, a chain maximizing this index, the algorithm balances exploration and exploitation, selecting more often the chains with higher estimated losses or those with higher uncertainty in these estimates.

In order to specify the index function , let us choose (we motivate this choice of later on), and for each state , define the estimate of the Gini coefficient at time as . The index is then defined as

where , with being an arbitrary choice. In this paper, we choose .

We remark that the design of the index above comes from applying an empirical Bernstein concentration inequality for -smoothed estimators (see Lemma 4 in the supplementary) to the loss function . In other words, Lemma 4 guarantees that, with high probability, . Our concentration inequality (Lemma 4) is new, to our knowledge, and could be of independent interest.

Having defined the index function , we are now ready to describe our algorithm, which we call BA-MC (Bandit Allocation for Markov Chains). BA-MC receives as input a confidence parameter , a budget , as well as the state space . It initially samples each chain twice (hence, this phase lasts for rounds). Then, BA-MC simply consists in sampling the chain with the largest index at each round . Finally, it returns, after pulls, an estimate for each chain . We provide the pseudo-code of BA-MC in Algorithm 1. Note that BA-MC does not require any prior knowledge of the chains (neither the initial distribution nor the mixing time).

  Input: Confidence parameter , budget , state space ;
  Initialize: Sample each chain twice;
  for  do
     Sample chain ;
     Observe , and update and ;
  end for
Algorithm 1 BA-MC – Bandit Allocation for Markov Chains
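A self-contained sketch of the BA-MC loop, simulated on known chains for illustration, is given below. Since the exact index of the text is not reproduced here, we use a stand-in with the same structure: an optimistic, empirical-Bernstein-flavoured upper bound on each chain's loss built from the alpha-smoothed estimates and their Gini coefficients, with beta playing the role of the log(1/delta)-dependent confidence term; all names are ours.

import numpy as np

# Illustration only: not the exact index of the paper, but an index with the same
# structure (estimated loss plus a confidence width driven by the Gini estimates).
def ba_mc(chains, budget, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    M, S = len(chains), chains[0].shape[0]
    alpha = 1.0 / S
    counts = np.zeros((M, S, S))
    states = np.zeros(M, dtype=int)

    def advance(m):
        s = states[m]
        s_next = rng.choice(S, p=chains[m][s])
        counts[m, s, s_next] += 1
        states[m] = s_next

    def index(m):
        visits = counts[m].sum(axis=1)                               # per-state visit counts
        p_hat = (counts[m] + alpha) / (visits[:, None] + alpha * S)  # smoothed rows
        gini_hat = np.sum(p_hat * (1.0 - p_hat), axis=1)
        width = np.sqrt(2.0 * beta * gini_hat / (visits + 1)) + beta / (visits + 1)
        return np.sum(gini_hat + width) / max(visits.sum(), 1.0)

    for m in range(M):                        # initialization: sample each chain twice
        advance(m); advance(m)
    for _ in range(int(budget) - 2 * M):      # sample the chain with the largest index
        advance(int(np.argmax([index(m) for m in range(M)])))

    return [(counts[m] + alpha) / (counts[m].sum(axis=1, keepdims=True) + alpha * S)
            for m in range(M)]

estimates = ba_mc([np.array([[0.9, 0.1], [0.2, 0.8]]),
                   np.array([[0.5, 0.5], [0.5, 0.5]])], budget=500)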

In order to provide more insight into the design of BA-MC, let us remark that, as shown in Lemma 8 in the supplementary, the index also provides a high-probability UCB on the quantity . Hence, by sampling the index-maximizing chain at each time, in view of the discussion in Section 2.4, BA-MC tries to mimic an oracle algorithm that is aware of for the various chains.

We remark that our concentration inequality in Lemma 4 (of the supplementary) parallels the one presented in Lemma 8.3 of hsu2015mixing . In contrast to that result, our concentration lemma features the terms in the denominator, whereas Lemma 8.3 of hsu2015mixing features the terms in the denominator. This feature plays an important role in dealing with situations where some states have not been sampled up to time , that is, when for some .

4 Performance Bounds

We are now ready to study the performance bounds on the loss in both asymptotic and non-asymptotic regimes. We begin with a generic non-asymptotic bound as follows:

Theorem 1 (Ba-Mc, Generic Performance)

Let . Then, for any budget , with probability at least , the loss under satisfies

The proof of this theorem, provided in Section C of the supplementary, reveals the motivation for choosing : It verifies that to minimize the dependency of the loss on , one must choose . In particular, the proof does not rely on the ergodicity assumption:

Remark 2

Theorem 1 is valid even if the Markov chains are reducible or periodic.

In the following theorem, we state another non-asymptotic bound on the performance of BA-MC, which refines Theorem 1. To present this result, we recall the notation , and that for any chain , , , and

Theorem 2

Let , and assume that the budget is at least the cut-off value . Then, with probability at least ,

where

Recalling that the asymptotic loss of the oracle algorithm discussed in Section 2.4 equals , in view of the Bernstein concentration, the oracle would incur a loss of at most when the budget is finite. In this regard, we may look at the quantity as the pseudo-excess loss of (we refrain from calling this quantity the excess loss, as is not equal to the high-probability loss of the oracle). Theorem 2 implies that when is greater than the cut-off budget , the pseudo-excess loss under BA-MC vanishes at a rate . In particular, Theorem 2 characterizes the constant controlling the main term of the pseudo-excess loss: . This further indicates that the pseudo-excess loss is controlled by the quantity , which captures (i) the discrepancy among the values of the various chains, and (ii) the discrepancy between the various stationary probabilities . We emphasize that the dependency of the learning performance (through ) on is in line with the result obtained by wolfer2019minimax for the estimation of a single ergodic Markov chain.

The proof of this theorem, provided in Section D of the supplementary, shows that to determine the cut-off budget , one needs to determine the value of such that, with high probability, for any chain and state , the term approaches ; this is in turn controlled by (or by if is reversible) as well as by the minimal stationary distribution . This then allows us to show that, under BA-MC, the number of samples of any chain comes close to the quantity . Finally, we remark that the proof of Theorem 2 also reveals that the result in the theorem is valid for any constant .

In the following theorem, we characterize the asymptotic performance of BA-MC:

Theorem 3 (Ba-Mc, Asymptotic Regime)

Under BA-MC,

The above theorem asserts that, asymptotically, the loss under BA-MC matches the asymptotically optimal loss characterized in Section 2.4. We may thus conclude that BA-MC is uniformly good (in the sense of Definition 1). The proof of Theorem 3 (provided in Section E of the supplementary) proceeds as follows: It divides the estimation problem into two consecutive sub-problems, one with budget and the other with the remaining pulls. We then show that when , the number of samples of each chain at the end of the first sub-problem is lower bounded by , and as a consequence, the index is accurate enough: with high probability. This allows us to relate the allocation under BA-MC in the course of the second sub-problem to that of the oracle, and further to show that the difference vanishes as .

Below, we provide some further comments on the bounds presented in Theorems 1–3:

Various regimes.

Theorem 1 provides a non-asymptotic bound on the loss valid for any , while Theorem 3 establishes the optimality of BA-MC in the asymptotic regime. In view of the inequality , the bound in Theorem 1 is off by at least a factor of from the asymptotic loss . Theorem 2 bridges the two results, thereby establishing a third regime in which the algorithm enjoys the asymptotically optimal loss up to an additive pseudo-excess loss scaling as .

The effect of mixing.

It is worth emphasizing that the mixing times of the chains do not appear explicitly in our bounds; they only control (through the pseudo-spectral gap ) the cut-off budget beyond which the pseudo-excess loss vanishes at a rate . This is a strong aspect of our results and is due to our definition of loss, which employs the empirical estimates in lieu of . Specifically, as argued in hsu2015mixing , given the number of samples of the various states (akin to using in the loss definition), the convergence of the frequency estimates towards the true values is independent of the mixing time of the chain. We note that despite the dependence of the cut-off budget on the mixing times, BA-MC does not need to estimate them: even when , it still enjoys the loss guarantee of Theorem 1. We also mention that to define an index function for the loss function , one may have to derive confidence bounds on the mixing time and/or the stationary distribution as well.

More on the pseudo-excess loss.

We stress that the notion of pseudo-excess loss bears some similarity to the definition of regret for active bandit learning of distributions, as introduced in antos2010active ; carpentier2011upper (see Section 1). In the latter case, the regret typically decays as , similarly to the pseudo-excess loss in our case. An interesting question is whether the decay rate of the pseudo-excess loss, as a function of , can be improved, and, more importantly, whether a (problem-dependent) lower bound on the pseudo-excess loss can be established. These questions are open even for the simpler case of active learning of distributions in the i.i.d. setup; see, e.g., carpentier2014minimax ; carpentier2015adaptive ; carpentier2011upper . We plan to address them in future work.

5 Conclusion

In this paper, we addressed the problem of active bandit allocation in the case of discrete and ergodic Markov chains. We considered a notion of loss function that appropriately extends the loss function for learning distributions to the case of Markov chains. We further characterized the notion of a “uniformly good algorithm” under the considered loss function. We presented an algorithm for learning Markov chains, which we called BA-MC. Our algorithm is simple to implement and does not require any prior knowledge of the Markov chains. We provided non-asymptotic PAC-type bounds on the loss incurred by BA-MC, and showed that, asymptotically, it incurs an optimal loss. We further discussed that the (pseudo-excess) loss incurred by BA-MC in our bounds does not deteriorate with the mixing times of the chains. As future work, we plan to derive a (problem-dependent) lower bound on the pseudo-excess loss. Another interesting, yet very challenging, future direction is to devise adaptive learning algorithms for restless Markov chains, where the states of the various chains evolve at each round independently of the learner's decisions.

Acknowledgements

This work has been supported by CPER Nord-Pas-de-Calais/FEDER DATA Advanced data science and technologies 2015-2020, the French Ministry of Higher Education and Research, Inria, and the French Agence Nationale de la Recherche (ANR), under grant ANR-16-CE40-0002 (project BADASS).

References

  • (1) András Antos, Varun Grover, and Csaba Szepesvári. Active learning in multi-armed bandits. In International Conference on Algorithmic Learning Theory, pages 287–302. Springer, 2008.
  • (2) Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, and Peter Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In International Conference on Algorithmic Learning Theory, pages 189–203. Springer, 2011.
  • (3) Valerii Vadimovich Fedorov. Theory of optimal experiments. Elsevier, 1972.
  • (4) Hovav A. Dror and David M. Steinberg. Sequential experimental designs for generalized linear models. Journal of the American Statistical Association, 103(481):288–298, 2008.
  • (5) David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1996.
  • (6) Pierre Etoré and Benjamin Jourdain. Adaptive optimal allocation in stratified sampling methods. Methodology and Computing in Applied Probability, 12(3):335–360, 2010.
  • (7) András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712–2728, 2010.
  • (8) Alexandra Carpentier, Remi Munos, and András Antos. Adaptive strategy for stratified Monte Carlo sampling. Journal of Machine Learning Research, 16:2231–2271, 2015.
  • (9) James Neufeld, András György, Dale Schuurmans, and Csaba Szepesvári. Adaptive Monte Carlo via bandit allocation. In Proceedings of the 31st International Conference on International Conference on Machine Learning, pages 1944–1952, 2014.
  • (10) Alexandra Carpentier and Rémi Munos. Adaptive stratified sampling for Monte-Carlo integration of differentiable functions. In Advances in Neural Information Processing Systems, pages 251–259, 2012.
  • (11) Alexandra Carpentier and Odalric-Ambrym Maillard. Online allocation and homogeneous partitioning for piecewise constant mean-approximation. In Advances in Neural Information Processing Systems, pages 1961–1969, 2012.
  • (12) Patrick Billingsley. Statistical methods in Markov chains. The Annals of Mathematical Statistics, pages 12–40, 1961.
  • (13) Claude Kipnis and S. R. Srinivasa Varadhan. Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Communications in Mathematical Physics, 104(1):1–19, 1986.
  • (14) Moshe Haviv and Ludo Van der Heyden. Perturbation bounds for the stationary probabilities of a finite Markov chain. Advances in Applied Probability, 16(4):804–818, 1984.
  • (15) Nicky J. Welton and A. E. Ades. Estimation of Markov chain transition probabilities and rates from fully and partially observed data: uncertainty propagation, evidence synthesis, and model calibration. Medical Decision Making, 25(6):633–645, 2005.
  • (16) Bruce A. Craig and Peter P. Sendi. Estimation of the transition matrix of a discrete-time Markov chain. Health Economics, 11(1):33–42, 2002.
  • (17) Sean P. Meyn and Richard L. Tweedie. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
  • (18) Emmanuel Rio. Asymptotic theory of weakly dependent random processes. Springer, 2017.
  • (19) Pascal Lezaud. Chernoff-type bound for finite Markov chains. Annals of Applied Probability, pages 849–867, 1998.
  • (20) Daniel Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20, 2015.
  • (21) Geoffrey Wolfer and Aryeh Kontorovich. Minimax learning of ergodic Markov chains. In Algorithmic Learning Theory, pages 903–929, 2019.
  • (22) Yi Hao, Alon Orlitsky, and Venkatadheeraj Pichapati. On learning Markov chains. In Advances in Neural Information Processing Systems, pages 648–657, 2018.
  • (23) Sudeep Kamath, Alon Orlitsky, Dheeraj Pichapati, and Ananda Theertha Suresh. On learning distributions from their samples. In Conference on Learning Theory, pages 1066–1100, 2015.
  • (24) Ronald Ortner, Daniil Ryabko, Peter Auer, and Rémi Munos. Regret bounds for restless Markov bandits. Theoretical Computer Science, 558:62–76, 2014.
  • (25) Cem Tekin and Mingyan Liu. Online learning of rested and restless bandits. IEEE Transactions on Information Theory, 58(8):5588–5611, 2012.
  • (26) Christopher R. Dance and Tomi Silander. Optimal policies for observing time series and related restless bandit problems. Journal of Machine Learning Research, 20(35):1–93, 2019.
  • (27) Jean Tarbouriech and Alessandro Lazaric. Active exploration in Markov decision processes. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 974–982, 2019.
  • (28) Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert E. Schapire, and Linda Sellie. On the learnability of discrete distributions. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, pages 273–282. ACM, 1994.
  • (29) David Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Transactions on Information Theory, 49(1):338–345, 2003.
  • (30) Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of functionals of discrete distributions. IEEE Transactions on Information Theory, 61(5):2835–2885, 2015.
  • (31) Mordechai Mushkin and Israel Bar-David. Capacity and coding for the Gilbert-Elliott channels. IEEE Transactions on Information Theory, 35(6):1277–1290, 1989.
  • (32) James R. Norris. Markov chains. Cambridge University Press, 1998.
  • (33) David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times. American Mathematical Society, Providence, RI, 2009.
  • (34) Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
  • (35) Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
  • (36) Daniel J. Hsu, Aryeh Kontorovich, and Csaba Szepesvári. Mixing time estimation in reversible Markov chains from a single sample path. In Advances in Neural Information Processing Systems, pages 1459–1467, 2015.
  • (37) Alexandra Carpentier and Rémi Munos. Minimax number of strata for online stratified sampling: The case of noisy samples. Theoretical Computer Science, 558:77–106, 2014.
  • (38) Odalric-Ambrym Maillard. Mathematics of statistical sequential decision making. Habilitation à Diriger des Recherches, 2019.

Appendix A Concentration Inequalities

Lemma 2 ([38, Lemma 2.4])

Let be a sequence of random variables generated by a predictable process, and be its natural filtration. Let be a convex upper-envelope of the cumulant generating function of the conditional distributions with , and let denote its Legendre-Fenchel transform, that is:

where . Assume that contains an open neighborhood of . Let (resp. ) be its reverse map on (resp. ), that is

Let be an integer-valued random variable that is -measurable and almost surely bounded by . Then, for all and ,

Moreover, if is a possibly unbounded -valued random variable that is -measurable, then for all and ,

We provide an immediate consequence of this lemma for the case of sub-Gamma random variables:

Corollary 1

Let be a sequence of random variables generated by a predictable process, and be its natural filtration. Assume for all , and for some positive numbers and . Let be an integer-valued random variable that is -measurable and almost surely bounded by . Then, for all and ,

where , with being an arbitrary parameter.

Proof. The proof follows by an application of Lemma 2 for sub-Gamma random variables with parameters . Note that sub-Gamma random variables satisfy , for all , so that

Plugging these into the first statements of Lemma 2 completes the proof.

As a consequence of this corollary, we present the following lemma:

Lemma 3 (Bernstein-Markov Concentration)

Let be generated from an ergodic Markov chain defined on a finite state-space with transition matrix . Consider the smoothed estimator of defined as follows: For all ,

with . Then, for any , it holds that with probability at least , for all ,

where , with being an arbitrary parameter.

Proof. The proof follows steps similar to those of Lemma 8.3 in [36]. Consider a pair . We have

where . Hence,

(3)

To control , we define the sequence , with , and

Note that for all , almost surely. Moreover, denoting by the filtration generated by , we observe that is -measurable and . Hence, it is a martingale difference sequence with respect to , and it satisfies for all , and

Applying Corollary 1 yields

with probability at least . Plugging the above bound into (3) gives the announced result.

Lemma 4 (Empirical Bernstein-Markov Concentration)

Let be generated from an ergodic Markov chain defined on with transition matrix . Consider the smoothed estimator of as defined in Lemma 3. Then, with probability at least , for all ,

where , with being an arbitrary parameter, , and

Proof. Fix a pair . Recall from Lemma 3 that with probability at least ,