Abstract

We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or $\epsilon$-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.

 

Generalization and Exploration via Randomized Value Functions

 

Ian Osband iosband@stanford.edu

Benjamin Van Roy bvr@stanford.edu

Zheng Wen zhengwen207@gmail.com

Stanford University


Introduction

The design of reinforcement learning (RL) algorithms that explore intractably large state-action spaces efficiently remains an important challenge. In this paper, we propose randomized least-squares value iteration (RLSVI), which generalizes using a linearly parameterized value function. Prior RL algorithms that generalize in this way require, in the worst case, learning times exponential in the number of model parameters and/or the planning horizon. RLSVI aims to overcome these inefficiencies.

RLSVI operates in a manner similar to least-squares value iteration (LSVI) and also shares much of the spirit of other closely related approaches such as TD, LSTD, and SARSA (see, e.g., (Sutton & Barto, 1998; Szepesvári, 2010)). What fundamentally distinguishes RLSVI is that the algorithm explores by randomly sampling statistically plausible value functions, whereas the aforementioned alternatives are typically applied in conjunction with action-dithering schemes such as Boltzmann or $\epsilon$-greedy exploration, which lead to highly inefficient learning. The concept of exploring by sampling statistically plausible value functions is broader than any specific algorithm, and goes beyond our proposal and study of RLSVI. We view an important role of this paper as establishing this broad concept as a promising approach to tackling a critical challenge in RL: synthesizing efficient exploration and effective generalization.

We will present computational results comparing RLSVI to LSVI with action-dithering schemes. In our case studies, these algorithms generalize using identical linearly parameterized value functions but are distinguished by how they explore. The results demonstrate that RLSVI enjoys dramatic efficiency gains. Further, we establish a bound of $\tilde{O}(\sqrt{H^3 S A T})$ on the expected regret in an episodic tabula rasa learning context, where $S$ and $A$ denote the cardinalities of the state and action spaces, $T$ denotes time elapsed, and $H$ denotes the episode duration. This matches the worst-case lower bound for this problem up to logarithmic factors (Jaksch et al., 2010). It is interesting to contrast this against known bounds for other provably efficient tabula rasa RL algorithms (e.g., UCRL2 (Jaksch et al., 2010)) adapted to this context. To our knowledge, our results establish RLSVI as the first RL algorithm that is provably efficient in a tabula rasa context and also demonstrates efficiency when generalizing via linearly parameterized value functions.

There is a sizable literature on RL algorithms that are provably efficient in tabula rasa contexts (Brafman & Tennenholtz, 2002; Kakade, 2003; Kearns & Koller, 1999; Lattimore et al., 2013; Ortner & Ryabko, 2012; Osband et al., 2013; Strehl et al., 2006). The literature on RL algorithms that generalize and explore in a provably efficient manner is sparser. There is work on model-based RL algorithms (Abbasi-Yadkori & Szepesvári, 2011; Osband & Van Roy, 2014a; b), which apply to specific model classes and are computationally intractable. Value function generalization approaches have the potential to overcome those computational challenges and offer practical means for synthesizing efficient exploration and effective generalization. A relevant line of work establishes that efficient RL with value function generalization reduces to efficient KWIK online regression (Li & Littman, 2010; Li et al., 2008). However, it is not known whether the KWIK online regression problem can be solved efficiently. In terms of concrete algorithms, there is optimistic constraint propagation (OCP) (Wen & Van Roy, 2013), a provably efficient RL algorithm for exploration and value function generalization in deterministic systems, and C-PACE (Pazis & Parr, 2013), a provably efficient RL algorithm that generalizes using interpolative representations. These contributions represent important developments, but OCP is not suitable for stochastic systems and is highly sensitive to model mis-specification, and generalizing effectively in high-dimensional state spaces calls for methods that extrapolate. RLSVI advances this research agenda, leveraging randomized value functions to explore efficiently with linearly parameterized value functions. The only other work we know of involving exploration through random sampling of value functions is (Dearden et al., 1998). That work proposed an algorithm for tabula rasa learning; the algorithm does not generalize over the state-action space.

Episodic reinforcement learning

We consider a finite-horizon MDP $(\mathcal{S}, \mathcal{A}, H, P, R, \rho)$, where $\mathcal{S}$ is a finite state space, $\mathcal{A}$ is a finite action space, $H$ is the number of periods, $P$ encodes transition probabilities, $R$ encodes reward distributions, and $\rho$ is an initial state distribution. In each episode, the initial state $s_0$ is sampled from $\rho$, and, in period $h = 0, \ldots, H-1$, if the state is $s_h$ and an action $a_h$ is selected then a next state $s_{h+1}$ is sampled from $P_{s_h, a_h}$ and a reward $r_h$ is sampled from $R_{s_h, a_h}$. The episode terminates when state $s_H$ is reached and a terminal reward $r_H$ is sampled from $R_{s_H}$.

To represent the history of actions and observations over multiple episodes, we will often index variables by both episode and period. For example, $s_{l,h}$, $a_{l,h}$, and $r_{l,h}$ respectively denote the state, action, and reward observed during period $h$ of episode $l$.

A policy $\mu = (\mu_0, \ldots, \mu_{H-1})$ is a sequence of functions, each mapping $\mathcal{S}$ to $\mathcal{A}$. For each policy $\mu$, we define a value function for $h = 0, \ldots, H-1$:

$V^{\mu}_h(s) := \mathbb{E}\Big[\textstyle\sum_{j=h}^{H} r_j \,\Big|\, s_h = s,\ a_j = \mu_j(s_j) \text{ for } j \ge h\Big].$
The optimal value function is defined by $V^*_h(s) := \max_{\mu} V^{\mu}_h(s)$. A policy $\mu^*$ is said to be optimal if $V^{\mu^*} = V^*$. It is also useful to define a state-action optimal value function for $h = 0, \ldots, H-1$:

$Q^*_h(s, a) := \mathbb{E}\big[r_h + V^*_{h+1}(s_{h+1}) \,\big|\, s_h = s,\ a_h = a\big],$

where $V^*_H(s)$ denotes the expected terminal reward from state $s$.
A policy $\mu$ is optimal if and only if $\mu_h(s) \in \arg\max_{a \in \mathcal{A}} Q^*_h(s, a)$ for all $s \in \mathcal{S}$ and all $h$.

A reinforcement learning algorithm generates each action $a_{l,h}$ based on the observations made up to period $h$ of episode $l$. Over each episode, the algorithm realizes reward $\sum_{h=0}^{H} r_{l,h}$. One way to quantify the performance of a reinforcement learning algorithm is in terms of its expected cumulative regret over $L$ episodes, or time $T = LH$, defined by

$\mathrm{Regret}(T) := \mathbb{E}\Big[\textstyle\sum_{l=0}^{L-1}\big( V^*_0(s_{l,0}) - \sum_{h=0}^{H} r_{l,h} \big)\Big].$
Consider a scenario in which the agent models that, for each $h$, $Q^*_h = \Phi \theta_h$ for some $\theta_h \in \mathbb{R}^K$, where $\Phi \in \mathbb{R}^{SA \times K}$. With some abuse of notation, we use $S$ and $A$ to denote the cardinalities of the state and action spaces. We refer to the matrix $\Phi$ as a generalization matrix and use $\Phi(s,a)$ to denote the row of $\Phi$ associated with state-action pair $(s,a)$. For $k = 1, \ldots, K$, we write the $k$th column of $\Phi$ as $\phi_k$ and refer to $\phi_k$ as a basis function. We refer to contexts where the agent's belief is correct as coherent learning, and refer to the alternative as agnostic learning.
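To make this parameterization concrete, the following Python sketch (our own illustration, not code from the paper) builds a random generalization matrix for a small problem and selects a greedy action with respect to a weight vector; the helper names row and greedy_action are ours.

import numpy as np

S, A, K = 6, 2, 4                      # number of states, actions, basis functions
rng = np.random.default_rng(0)

# Generalization matrix Phi: one row per state-action pair, one column per basis function.
Phi = rng.standard_normal((S * A, K))

def row(s, a):
    """Row of Phi associated with state-action pair (s, a)."""
    return Phi[s * A + a]

theta = rng.standard_normal(K)         # weights for one period, so Q(s, a) ~= row(s, a) @ theta

def greedy_action(s):
    """Action maximizing the linearly parameterized Q-value in state s."""
    return int(np.argmax([row(s, a) @ theta for a in range(A)]))

print(greedy_action(0))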

Least-squares value iteration

LSVI can be applied at each episode to estimate the optimal value function from data gathered over previous episodes. To form an RL algorithm based on LSVI, we must specify how the agent selects actions. The most common scheme is to sometimes select actions at random; we call this approach dithering. The appendix presents RL algorithms resulting from combining LSVI with the most common schemes, $\epsilon$-greedy and Boltzmann exploration.

The literature on efficient RL shows that these dithering schemes can lead to regret that grows exponentially in the number of states and/or the planning horizon (Kearns & Singh, 2002; Brafman & Tennenholtz, 2002; Kakade, 2003). Provably efficient exploration schemes in RL require that exploration is directed towards potentially informative state-action pairs and consistent over multiple timesteps. This literature provides several more intelligent exploration schemes that are provably efficient, but most only apply to tabula rasa RL, where little prior information is available and learning is considered efficient even if the time required scales with the cardinality of the state-action space. In a sense, RLSVI represents a synthesis of ideas from efficient tabula rasa reinforcement learning and value function generalization methods.

To motivate some of the benefits of RLSVI, in Figure 1 we provide a simple example that highlights the failings of dithering methods. In this setting, LSVI with Boltzmann or $\epsilon$-greedy exploration requires exponentially many episodes to learn an optimal policy, even in a coherent learning context and even with a small number of basis functions.

This environment is made up of a long chain of $N$ states. At each step the agent can attempt to transition left or right. Actions left are deterministic, but actions right only succeed with some fixed probability; otherwise the agent moves left. All states have zero reward except for the far right, which yields a positive reward. Each episode has a fixed length and the agent begins each episode at the leftmost state. The optimal policy is to go right at every step, which yields a positive expected reward each episode; all other policies give no reward. Example 1 establishes that, for any choice of basis function, LSVI with any $\epsilon$-greedy or Boltzmann exploration will lead to regret that grows exponentially in the chain length. A similar result holds for policy gradient algorithms.

Figure 1: An MDP where dithering schemes are highly inefficient.
Example 1.

Let $\ell$ be the first episode during which the rewarding state is visited. It is easy to see that the agent's value estimates remain zero during all episodes prior to $\ell$. Furthermore, with either $\epsilon$-greedy or Boltzmann exploration, actions are then sampled uniformly at random during those episodes. Thus, in any episode prior to $\ell$, the red node will be reached with probability that is exponentially small in the chain length. It follows that the expected value of $\ell$, and hence the expected regret, grows exponentially in the chain length.
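To make the construction concrete, here is a minimal simulator for a chain of this kind (our own sketch; the success probability p_right and the unit reward are illustrative placeholders rather than the exact constants of Example 1). A dithering agent that has learned nothing selects actions uniformly at random, and so reaches the rewarding end only if every one of its N-1 moves is a successful step right, an event whose probability decays exponentially in N.

import numpy as np

def run_episode(N, policy, p_right=0.9, rng=None):
    """Simulate one episode of a chain MDP in the spirit of Example 1 (illustrative parameters).

    States are 0..N-1; the agent starts at state 0. Action 1 ("right") succeeds with
    probability p_right, otherwise the agent moves left; action 0 ("left") always moves
    left. Only reaching state N-1 yields reward 1. The episode lasts H = N - 1 steps.
    """
    rng = rng or np.random.default_rng()
    s = 0
    for _ in range(N - 1):
        a = policy(s)
        if a == 1 and rng.random() < p_right:
            s = min(s + 1, N - 1)
        else:
            s = max(s - 1, 0)
    return 1.0 if s == N - 1 else 0.0

# A dithering agent that has learned nothing acts uniformly at random:
N = 20
rng = np.random.default_rng(1)
uniform = lambda s: rng.integers(2)
hits = sum(run_episode(N, uniform, rng=rng) for _ in range(10_000))
print(f"uniform policy reached the reward in {hits:.0f} / 10000 episodes")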

Randomized least-squares value iteration

We now consider an alternative approach to exploration that involves randomly sampling value functions rather than actions. As a specific scheme of this kind, we propose randomized least-squares value iteration (RLSVI), which we present as Algorithm 1. (Note that when no data are available, the regression problem in Algorithm 1 is empty; in this case we simply sample from the prior.) To obtain an RL algorithm, we simply select actions greedily with respect to the sampled value function in each episode, as specified in Algorithm 2.

The manner in which RLSVI explores is inspired by Thompson sampling (Thompson, 1933), which has been shown to explore efficiently across a very general class of online optimization problems (Russo & Van Roy, 2013; 2014). In Thompson sampling, the agent samples from a posterior distribution over models, and selects the action that optimizes the sampled model. RLSVI similarly samples from a distribution over plausible value functions and selects actions that optimize resulting samples. This distribution can be thought of as an approximation to a posterior distribution over value functions. RLSVI bears a close connection to PSRL (Osband et al., 2013), which maintains and samples from a posterior distribution over MDPs and is a direct application of Thompson sampling to RL. PSRL satisfies regret bounds that scale with the dimensionality, rather than the cardinality, of the underlying MDP (Osband & Van Roy, 2014b; a). However, PSRL does not accommodate value function generalization without MDP planning, a feature that we expect to be of great practical importance.

Input: Data $\{(s_{i,h}, a_{i,h}, r_{i,h}) : i < l,\ h \le H\}$; Parameters $\lambda > 0$, $\sigma > 0$
Output: $\tilde{\theta}_{l,0}, \ldots, \tilde{\theta}_{l,H-1}$

1:  for $h = H-1, \ldots, 0$ do
2:     Generate regression problem $A$, $b$: the rows of $A$ are the features $\Phi(s_{i,h}, a_{i,h})$ of the visited state-action pairs and the targets are $b_i = r_{i,h} + \max_{a} \tilde{\theta}_{l,h+1}^\top \Phi(s_{i,h+1}, a)$
3:     Bayesian linear regression for the value function: compute the Gaussian posterior over the weights under the prior $N(0, \lambda^{-1} I)$ and observation noise variance $\sigma^2$
4:     Sample $\tilde{\theta}_{l,h}$ from the Gaussian posterior
5:  end for
Algorithm 1 Randomized Least-Squares Value Iteration
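The sketch below mirrors the structure of Algorithms 1 and 2 in Python (a minimal illustration under the notation introduced above, not the authors' implementation): working backwards over periods, it forms regression targets from the previously sampled next-period weights, computes the Gaussian posterior of a Bayesian linear regression with prior N(0, (1/lam) I) and noise variance sigma^2, and samples the weights used to act greedily.

import numpy as np

def rlsvi_sample(data, Phi, H, K, sigma, lam, rng):
    """One call of RLSVI (in the spirit of Algorithm 1): sampled weights theta[h] for h = 0..H-1.

    data[h] is a list of transitions (phi_sa, reward, next_state) observed at period h,
    where phi_sa is the feature row of the visited state-action pair. Phi[s] is the
    (A x K) feature matrix of state s, so Phi[s][a] is the row for pair (s, a).
    """
    theta = [np.zeros(K) for _ in range(H + 1)]          # theta[H] is a dummy terminal weight
    for h in reversed(range(H)):
        if not data[h]:
            theta[h] = rng.normal(0.0, 1.0 / np.sqrt(lam), size=K)   # sample from the prior
            continue
        A_mat = np.array([phi for (phi, _, _) in data[h]])
        # Regression targets: immediate reward plus maximal sampled next-period value.
        b = np.array([r + (0.0 if s_next is None
                           else np.max(Phi[s_next] @ theta[h + 1]))
                      for (_, r, s_next) in data[h]])
        precision = A_mat.T @ A_mat / sigma**2 + lam * np.eye(K)
        cov = np.linalg.inv(precision)
        mean = cov @ (A_mat.T @ b) / sigma**2
        theta[h] = rng.multivariate_normal(mean, cov)    # draw from the Gaussian posterior
    return theta[:H]

def greedy_action(Phi, theta_h, s):
    """Algorithm 2's action choice: greedy with respect to the sampled value function."""
    return int(np.argmax(Phi[s] @ theta_h))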

Input: Features $\Phi$; parameters $\sigma > 0$, $\lambda > 0$

1:  for episode $l = 0, 1, \ldots$ do
2:     Compute $\tilde{\theta}_{l,0}, \ldots, \tilde{\theta}_{l,H-1}$ using Algorithm 1
3:     Observe $s_{l,0}$
4:     for $h = 0, \ldots, H-1$ do
5:        Sample $a_{l,h} \in \arg\max_{a \in \mathcal{A}} \tilde{\theta}_{l,h}^\top \Phi(s_{l,h}, a)$
6:        Observe $r_{l,h}$ and $s_{l,h+1}$
7:     end for
8:     Observe $r_{l,H}$
9:  end for
Algorithm 2 RLSVI with greedy action
Provably efficient tabular learning

RLSVI is an algorithm designed for efficient exploration in large MDPs with linear value function generalization. So far, there are no algorithms with analytical regret bounds in this setting. In fact, most common methods are provably inefficient, as demonstrated in Example 1, regardless of the choice of basis function. In this section we establish an expected regret bound for RLSVI in a tabular setting without generalization, where the basis functions are indicators of individual state-action pairs (so the generalization matrix $\Phi$ is the identity).

The bound is on an expectation with respect to a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We define the MDP and all other random variables we will consider with respect to this probability space. We assume that $\mathcal{S}$, $\mathcal{A}$, $H$, and $\rho$ are deterministic and that the reward distributions and transition probabilities are drawn from a prior. Specifically, we assume that reward distributions are drawn from independent Dirichlet priors over a bounded set of values and that transition distributions are drawn from independent Dirichlet priors. Analytical techniques exist to extend similar results to general bounded distributions; see, for example, (Agrawal & Goyal, 2012).

Theorem 1.

If Algorithm 1 is executed with tabular basis functions ($\Phi = I$) and suitably chosen parameters $\sigma$ and $\lambda$, then:

$\mathbb{E}\big[\mathrm{Regret}(T)\big] \le \tilde{O}\big(\sqrt{H^3 S A T}\big). \qquad (1)$

Surprisingly, these scalings improve upon those of state-of-the-art optimistic algorithms specifically designed for efficient analysis (Jaksch et al., 2010). This is an important result, since it demonstrates that RLSVI can be provably efficient, in contrast to popular dithering approaches such as $\epsilon$-greedy, which are provably inefficient.

Stochastic optimism

Central to our analysis is the notion of stochastic optimism, which induces a partial ordering among random variables.

Definition 1.

For any two real-valued random variables $X$ and $Y$, we say that $X$ is stochastically optimistic for $Y$ if and only if $\mathbb{E}[u(X)] \ge \mathbb{E}[u(Y)]$ for every convex and increasing function $u: \mathbb{R} \to \mathbb{R}$.

We will use the notation $X \succeq_{SO} Y$ to express this relation.

It is worth noting that stochastic optimism is closely connected with second-order stochastic dominance: $X \succeq_{SO} Y$ if and only if $-Y$ second-order stochastically dominates $-X$ (Hadar & Russell, 1969). We reproduce the following result, which establishes such a relation involving Gaussian and Dirichlet random variables; its proof appears in the appendix.

Lemma 1.

Let $V \in [0, 1]^N$ and $\alpha \in \mathbb{R}^N_+$, and let $Y = P^\top V$ where $P \sim \mathrm{Dirichlet}(\alpha)$. If $X \sim N(\mu, \sigma^2)$ with mean $\mu$ at least the Dirichlet mean $\alpha^\top V / \alpha^\top \mathbf{1}$ and variance $\sigma^2$ sufficiently large relative to $1 / \alpha^\top \mathbf{1}$, then $X \succeq_{SO} Y$.
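The flavour of this relation can be checked numerically. The sketch below (our own illustration, not part of the paper's argument) compares E[u(X)] and E[u(Y)] for the convex increasing test functions u_c(x) = max(x - c, 0), with Y a Dirichlet-weighted average of fixed values in [0, 1] and X a Gaussian whose mean matches the Dirichlet mean and whose variance is chosen generously (the particular choice sigma^2 = 1 / alpha.sum() is ours, not the lemma's constant); the Gaussian dominates at every c, as stochastic optimism requires.

import numpy as np

rng = np.random.default_rng(0)
V = np.array([0.0, 0.3, 1.0])          # fixed values in [0, 1]
alpha = np.array([1.0, 2.0, 1.0])      # Dirichlet parameters

n = 200_000
P = rng.dirichlet(alpha, size=n)
Y = P @ V                              # Dirichlet-weighted average
mu = alpha @ V / alpha.sum()           # Gaussian mean matching the Dirichlet mean
sigma = 1.0 / np.sqrt(alpha.sum())     # a generous, illustrative variance choice
X = rng.normal(mu, sigma, size=n)

for c in np.linspace(-0.5, 1.5, 9):
    u = lambda z: np.maximum(z - c, 0.0)   # convex, increasing test function
    print(f"c={c:+.2f}   E[u(X)]={u(X).mean():.4f} >= E[u(Y)]={u(Y).mean():.4f}")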

Proof sketch

Let $\tilde{Q}_l$ and $\mu_l$ denote the value function and policy generated by RLSVI for episode $l$, and let $\tilde{V}_{l,h}(s) = \max_a \tilde{Q}_{l,h}(s, a)$. We can decompose the per-episode regret

$V^*_0(s_{l,0}) - V^{\mu_l}_0(s_{l,0}) = \underbrace{\big(V^*_0(s_{l,0}) - \tilde{V}_{l,0}(s_{l,0})\big)}_{\Delta^{\mathrm{opt}}_l} + \underbrace{\big(\tilde{V}_{l,0}(s_{l,0}) - V^{\mu_l}_0(s_{l,0})\big)}_{\Delta^{\mathrm{conc}}_l}.$

We will bound this regret by first showing that RLSVI generates optimistic estimates of $V^*$, so that $\Delta^{\mathrm{opt}}_l$ has nonpositive expectation given any history available prior to episode $l$. The remaining term $\Delta^{\mathrm{conc}}_l$ vanishes as the estimates generated by RLSVI concentrate around $V^{\mu_l}$.

Lemma 2.

Conditional on any history of data available prior to episode $l$, the Q-values generated by RLSVI are stochastically optimistic for the true Q-values: $\tilde{Q}_{l,h}(s, a) \succeq_{SO} Q^*_h(s, a)$ for all $(s, a)$ and all $h$.

Proof.

Fix any available data and use backwards induction on $h$. For any state-action pair we write $n(s, a)$ for the number of visits to that pair in the data, and we write the corresponding empirical mean reward and empirical transition probabilities. We can now write the posterior mean rewards and transitions:

Now, using the same notation, we can write the RLSVI updates in similar form. Note that, in the tabular setting, the regression matrix is diagonal, with each diagonal entry given by the corresponding visit count. In the case of the final period:

Using this relation, Lemma 1 implies that

Therefore, with the choices of $\lambda$ and $\sigma$ in Theorem 1, the lemma is satisfied for all state-action pairs at the final period.

For the inductive step, we assume that the result holds for all state-action pairs at period $h+1$; we now want to prove the result for all state-action pairs at period $h$. Once again, we can express the RLSVI estimate in closed form.

To simplify notation, we omit arguments where they are obvious from context. The posterior mean estimate for the next-step value, conditional on the data, is:

This holds as long as the conditions on $\sigma$ and $\lambda$ are satisfied. By our induction hypothesis, the sampled next-step values are stochastically optimistic, so that

We can conclude by Lemma 1, noting that the noise from rewards is dominated by one component of the Gaussian perturbation and the noise from transitions by the other. This requires the stated condition on $\sigma$. ∎

Lemma 2 shows that RLSVI generates stochastically optimistic Q-values for any history available prior to an episode. All that remains is to prove that the estimates concentrate around the true values as data accumulates. Intuitively this should be clear, since the size of the Gaussian perturbations decreases as more data is gathered. In the remainder of this section we sketch this result.

We now bound the concentration error $\Delta^{\mathrm{conc}}_l$. We decompose the value estimate explicitly:

where one term is the Gaussian noise added by RLSVI and the others are optimistic bias terms for RLSVI. These terms emerge since RLSVI shrinks estimates towards zero rather than towards the Dirichlet prior for rewards and transitions.

Next we note that, conditional on the data, we can rewrite the expected next-step value as the realized next-step value plus a martingale difference. This allows us to decompose the error in our policy into the estimation errors of the states and actions we actually visit. We also note that, conditional on the data, the true MDP is independent of the sampling process of RLSVI. This means that:

Once again, we can replace this transition term with a single sample and a martingale difference. Combining these observations allows us to reduce the concentration error

We can even write explicit expressions for these noise and bias terms.

The final details of this proof are technical but the argument is simple. Up to logarithmic factors, the noise and bias terms at a visited state-action pair scale with the inverse square root of its visit count. Summing over visits using a pigeonhole argument gives us an upper bound on the regret. We add a term to bound the effects of the prior mismatch in RLSVI arising from the bias terms; the associated constraint can only be violated twice for each state-action pair. Therefore, up to logarithmic factors:

This completes the proof of Theorem 1.

Experiments

Our analysis in the previous section shows that RLSVI with tabular basis functions acts as an effective Gaussian approximation to PSRL. This demonstrates a clear distinction between exploration via randomized value functions and dithering strategies such as those in Example 1. However, the motivation for RLSVI is not tabular environments, where several provably efficient RL algorithms already exist, but rather large systems that require generalization.

We believe that, under some conditions, it may be possible to establish polynomial regret bounds for RLSVI with value function generalization. To stimulate thinking on this topic, we present a conjecture of what may be possible in the appendix. For now, we present a series of experiments designed to test the applicability and scalability of RLSVI for exploration with generalization.

Our experiments are divided into three sections. First, we present a series of didactic chain environments similar to Figure 1. We show that RLSVI can effectively synthesize exploration with generalization, with both coherent and agnostic value functions, in problems that are intractable under any dithering scheme. Next, we apply our algorithm to learning to play Tetris. We demonstrate that RLSVI leads to faster learning, improved stability, and a superior learned policy in a large-scale video game. Finally, we consider a business application with a simple model for a recommendation system. We show that an RL algorithm can improve upon even the optimal myopic bandit strategy. RLSVI learns this optimal strategy when dithering strategies do not.

Didactic chain environments

We now consider a series of environments modelled on Example 1, where dithering strategies for exploration are provably inefficient. Importantly, and unlike the tabular setting of the previous section, our algorithm interacts with the MDP only through a set of basis functions which generalize across states. We examine the empirical performance of RLSVI and find that it does efficiently balance exploration and generalization in this didactic example.

Coherent learning with a random basis

In our first experiments, we generate a random set of basis functions. This basis is coherent, but the individual basis functions are not otherwise informative. We form a random linear subspace spanned by the true value function together with IID Gaussian random directions. We then form the basis functions by projecting onto this subspace and renormalizing each component to have equal 2-norm (for more details on this experiment see the appendix). Figure 2 presents the empirical regret for RLSVI and an $\epsilon$-greedy agent over 5 seeds (in this setting any choice of $\epsilon$ or Boltzmann temperature is equivalent).

Figure 2: Efficient exploration on a 50-chain; panels (a) and (b) show the initial episodes at two different time scales.

Figure 2 shows that RLSVI consistently learns the optimal policy within a relatively small number of episodes. Any dithering strategy would require exponentially many episodes to achieve this. The state-of-the-art upper bounds for the efficient optimistic algorithm UCRL, given by Appendix C.5 in (Dann & Brunskill, 2015), only kick in after far more suboptimal episodes. RLSVI is able to effectively exploit the generalization and prior structure of the basis functions to learn much faster.

We now examine how learning scales as we change the chain length $N$ and the number of basis functions $K$. We observe that RLSVI essentially maintains the optimal policy once it discovers the rewarding state. We use the number of episodes until 10 rewards as a proxy for learning time. We report the average over five random seeds.

Figure 3 examines the time to learn as we vary the chain length $N$ with a fixed number of basis functions. We include the dithering lower bound as a dashed line and a lower-bound scaling for tabular learning algorithms as a solid line (Dann & Brunskill, 2015); both grow far faster than the learning times we observe. RLSVI demonstrates scalable generalization and exploration that outperforms these bounds.

Figure 3: RLSVI learning time against chain length.

Figure 4 examines the time to learn as we vary the number of basis functions $K$ in a chain of fixed length. Learning time scales gracefully with $K$. Further, the marginal effect of an additional basis function decreases as $K$ approaches the maximum dimension of the problem. We include a local polynomial regression in blue to highlight this trend. Importantly, even for large $K$ the performance is far superior to the dithering and tabular bounds.

Figure 4: RLSVI learning time against number of basis features.

Figure 5 examines these same scalings on a logarithmic scale. We find the data for these experiments is consistent with polynomial learning, as hypothesized in the appendix. These results are remarkably robust over several orders of magnitude in both $\sigma$ and $\lambda$. We present a more detailed analysis of these sensitivities in the appendix.

Figure 5: Empirical support for polynomial learning in RLSVI.
Agnostic learning with a misspecified basis

Unlike the example above, practical RL problems will typically be agnostic: the true value function will not lie within the span of the basis functions. To examine RLSVI in this setting we generate basis functions by adding Gaussian noise to the true value function $Q^*$. A scale parameter determines the magnitude of this noise. With zero noise the problem is coherent, but with nonzero noise this will typically not be the case. We fix the chain length and the number of basis functions.

For each noise scale we run RLSVI for 10,000 episodes with a random seed. Figure 6 presents the number of episodes until 10 rewards for each value of the noise scale. For large noise scales, and an extremely misspecified basis, RLSVI is not effective. However, there is some region where learning remains remarkably stable even though the added noise is significant.

This simple example gives us some hope that RLSVI can be useful in the agnostic setting. In our remaining experiments we will demonstrate that RLSVI can achieve state-of-the-art results in more practical problems with agnostic features.

Figure 6: RLSVI is somewhat robust to model misspecification.
Tetris

We now turn our attention to learning to play the iconic video game Tetris. In this game, random blocks fall sequentially on a 2D grid with 20 rows and 10 columns. At each step the agent can move and rotate the object subject to the constraints of the grid. The game starts with an empty grid and ends when a square in the top row becomes full. However, when a row becomes full it is removed and all bricks above it move downward. The objective is to maximize the score attained (total number of rows removed) before the end of the game.

Tetris has been something of a benchmark problem for RL and approximate dynamic programming, with several papers on this topic (Gabillon et al., 2013). Our focus is not so much to learn a high-scoring Tetris player, but instead to demonstrate that RLSVI offers benefits over other forms of exploration with LSVI. Tetris is challenging for RL, with a huge state space of more than $2^{200}$ states. In order to tackle this problem efficiently we use 22 benchmark features: the height of each column, the absolute differences in height between adjacent columns, the maximum column height, the number of “holes”, and a constant. It is well known that far superior linear basis functions can be found, but we use these features to mirror prior approaches.
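For concreteness, the sketch below computes this style of feature vector from a binary board matrix (our own rendering of the standard 22-dimensional feature set described above; conventions such as feature ordering may differ from the implementation used in the paper).

import numpy as np

def tetris_features(board):
    """22 features of a 20 x 10 binary Tetris board (1 = occupied cell).

    Features: 10 column heights, 9 absolute differences between adjacent column
    heights, the maximum column height, the number of holes (empty cells below the
    top filled cell of their column), and a constant term.
    """
    rows, cols = board.shape
    heights = np.zeros(cols)
    holes = 0
    for c in range(cols):
        filled = np.flatnonzero(board[:, c])           # row indices of filled cells
        if filled.size:
            top = filled.min()                         # row 0 is the top of the board
            heights[c] = rows - top
            holes += int(np.sum(board[top:, c] == 0))  # empty cells below the column top
    diffs = np.abs(np.diff(heights))                   # 9 adjacent height differences
    return np.concatenate([heights, diffs, [heights.max(), holes, 1.0]])

board = np.zeros((20, 10), dtype=int)
board[19, :4] = 1                                      # a few filled cells on the bottom row
print(tetris_features(board).shape)                    # (22,)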

In order to apply RLSVI to Tetris, which does not have a fixed episode length, we made a few natural modifications to the algorithm. First, we approximate a time-homogeneous value function. We also keep only the most recent transitions to cap the linear growth in memory and computational requirements, similar to (Mnih, 2015). Details are provided in the appendix. In Figure 7 we present learning curves, averaged over 5 seeds, for RLSVI and LSVI with a tuned $\epsilon$-greedy exploration schedule (we found that we could not achieve good performance for any fixed $\epsilon$; we used an annealing exploration schedule tuned to give good performance; see the appendix). The results are significant in several ways.

First, both RLSVI and LSVI make significant improvements over the previous approach of LSPI with the same basis functions (Bertsekas & Ioffe, 1996). Both algorithms reach a higher final performance than the best level reported for LSPI. They also reach this performance after many fewer games and, unlike LSPI, do not “collapse” after finding their peak performance. We believe that these improvements are mostly due to the replay buffer, which stores a bank of recent past transitions, whereas LSPI is purely online.

Second, both RLSVI and LSVI learn from scratch, whereas LSPI required a scoring initial policy to begin learning. We believe this is due to improved exploration: LSPI is completely greedy, so it struggles to learn without an initial policy. LSVI with a tuned $\epsilon$-greedy schedule is much better. However, we see a significant further improvement through exploration via RLSVI even when compared to the tuned scheme. More details are available in the appendix.

Figure 7: Learning to play Tetris with linear Bertsekas features.
A recommendation engine

We will now show that efficient exploration and generalization can be helpful in a simple model of customer interaction. Consider an agent which sequentially recommends products from a set of products to a customer. The conditional probability that the customer likes a product depends on the product: some items are better than others. However, it also depends on what the user has observed, what she liked, and what she disliked. For each product, we record whether the customer has seen it and, if so, whether she liked or disliked it; products she has not observed are marked as such. We model the probability that the customer will like a new product by a logistic transformation that is linear in this preference vector:

(2)

Importantly, this model reflects that the customers’ preferences may evolve as their experiences change. For example, a customer may be much more likely to watch the second season of the TV show “Breaking Bad” if they have watched the first season and liked it.

The agent in this setting is the recommendation system, whose goal is to maximize the cumulative number of items liked over time for each customer. The agent does not know the model parameters initially, but can learn to estimate them through interactions across different customers. Each customer is modeled as an episode of fixed horizon with a “cold start” and no previously observed products. For our simulations we sample a random problem instance by drawing the model parameters independently.
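To make the setup concrete, here is a small simulator in the spirit of this model (our own sketch: the particular parameterization, a per-product bias plus a weight on each previously rated product, is an assumption standing in for equation (2), whose exact form is not reproduced here; the names simulate_customer, bias, and W are ours).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_customer(J, H, bias, W, policy, rng):
    """Interact with one customer (one episode) under a logistic preference model.

    x[j] is +1 if product j was seen and liked, -1 if seen and disliked, 0 if unseen.
    The probability of liking a recommended product a is a logistic function that is
    linear in x: sigmoid(bias[a] + W[a] @ x). This parameterization is illustrative.
    """
    x = np.zeros(J)
    likes = 0
    for _ in range(H):
        a = policy(x)                          # recommend a product given the context
        p_like = sigmoid(bias[a] + W[a] @ x)
        liked = rng.random() < p_like
        likes += int(liked)
        x[a] = 1.0 if liked else -1.0          # the customer's context evolves
    return likes

J, H = 10, 5
rng = np.random.default_rng(0)
bias, W = rng.normal(size=J), rng.normal(size=(J, J))
random_policy = lambda x: int(rng.integers(J))
print(simulate_customer(J, H, bias, W, random_policy, rng))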

Figure 8: RLSVI performs better than Boltzmann exploration.
Figure 9: RLSVI can outperform the optimal myopic policy.

Although this setting is simple, the number of possible states is exponential in the number of products. To learn in time less than exponential in the number of products, it is crucial that we exploit generalization between states, as per equation (2). For this problem we construct simple basis functions built from the per-product preference indicators. The dimension of our function class is exponentially smaller than the number of states. However, barring a freak event, this simple basis will lead to an agnostic learning problem.

Figures 8 and 9 show the performance of RLSVI compared to several benchmark methods. In Figure 8 we plot the cumulative regret of RLSVI when compared against LSVI with Boltzmann exploration and identical basis features. We see that RLSVI explores much more efficiently than Boltzmann exploration over a wide range of temperatures.

In Figure 9 we show that, using this efficient exploration method, the reinforcement learning policy is able to outperform not only benchmark bandit algorithms but even the optimal myopic policy (which knows the true model defined in equation (2) but does not plan over multiple timesteps). Bernoulli Thompson sampling does not learn much even after many episodes, since the algorithm does not take context into account. The linear contextual bandit outperforms RLSVI at first. This is not surprising, since learning a myopic policy is simpler than learning a multi-period policy. However, as more data is gathered, RLSVI eventually learns a richer policy which outperforms the myopic policy.

The appendix provides pseudocode for this computational study. The problem instances are small enough that we can solve each MDP exactly and so compute regret. Each result is averaged over several problem instances, and for each problem instance we repeat the simulation multiple times. The cumulative regret for both RLSVI and LSVI with Boltzmann exploration (over a variety of “temperature” settings) is plotted in Figure 8. RLSVI clearly outperforms LSVI with Boltzmann exploration.

Our simulations use an extremely simplified model. Nevertheless, they highlight the potential value of RL over multi-armed bandit approaches in recommendation systems and other customer interactions. An RL algorithm may outperform even an optimal myopic system, particularly where large amounts of data are available. In some settings, efficient generalization and exploration can be crucial.

Closing remarks

We have established a regret bound that affirms the efficiency of RLSVI in a tabula rasa learning context. However, the real promise of RLSVI lies in its potential as an efficient method for exploration in large-scale environments with generalization. RLSVI is simple and practical, and it explores efficiently in several environments where state-of-the-art approaches are ineffective.

We believe that this approach to exploration via randomized value functions represents an important concept beyond our specific implementation of RLSVI. RLSVI is designed for generalization with linear value functions, but many of the great successes in RL have come with highly nonlinear “deep” neural networks, from Backgammon (Tesauro, 1995) to Atari (Mnih, 2015); interestingly, recent work has been able to reproduce similar performance on Atari using linear value functions (Liang et al., 2015). The insights and approach behind RLSVI may still be useful in this nonlinear setting. For example, we might adapt RLSVI to take approximate posterior samples from a nonlinear value function via a nonparametric bootstrap (Osband & Van Roy, 2015).

References

  • Abbasi-Yadkori & Szepesvári (2011) Abbasi-Yadkori, Yasin and Szepesvári, Csaba. Regret bounds for the adaptive control of linear quadratic systems. Journal of Machine Learning Research - Proceedings Track, 19:1–26, 2011.
  • Agrawal & Goyal (2012) Agrawal, Shipra and Goyal, Navin. Further optimal regret bounds for Thompson sampling. arXiv preprint arXiv:1209.3353, 2012.
  • Bertsekas & Ioffe (1996) Bertsekas, Dimitri P and Ioffe, Sergey. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Lab. for Info. and Decision Systems Report LIDS-P-2349, MIT, Cambridge, MA, 1996.
  • Brafman & Tennenholtz (2002) Brafman, Ronen I. and Tennenholtz, Moshe. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.
  • Dann & Brunskill (2015) Dann, Christoph and Brunskill, Emma. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2800–2808, 2015.
  • Dearden et al. (1998) Dearden, Richard, Friedman, Nir, and Russell, Stuart J. Bayesian Q-learning. In AAAI/IAAI, pp. 761–768, 1998.
  • Gabillon et al. (2013) Gabillon, Victor, Ghavamzadeh, Mohammad, and Scherrer, Bruno. Approximate dynamic programming finally performs well in the game of tetris. In Advances in Neural Information Processing Systems, pp. 1754–1762, 2013.
  • Hadar & Russell (1969) Hadar, Josef and Russell, William R. Rules for ordering uncertain prospects. The American Economic Review, pp. 25–34, 1969.
  • Jaksch et al. (2010) Jaksch, Thomas, Ortner, Ronald, and Auer, Peter. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
  • Kakade (2003) Kakade, Sham. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
  • Kearns & Koller (1999) Kearns, Michael J. and Koller, Daphne. Efficient reinforcement learning in factored MDPs. In IJCAI, pp. 740–747, 1999.
  • Kearns & Singh (2002) Kearns, Michael J. and Singh, Satinder P. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
  • Lagoudakis et al. (2002) Lagoudakis, Michail, Parr, Ronald, and Littman, Michael L. Least-squares methods in reinforcement learning for control. In Second Hellenic Conference on Artificial Intelligence (SETN-02), 2002.
  • Lattimore et al. (2013) Lattimore, Tor, Hutter, Marcus, and Sunehag, Peter. The sample-complexity of general reinforcement learning. In ICML, 2013.
  • Levy (1992) Levy, Haim. Stochastic dominance and expected utility: survey and analysis. Management Science, 38(4):555–593, 1992.
  • Li & Littman (2010) Li, Lihong and Littman, Michael. Reducing reinforcement learning to KWIK online regression. Annals of Mathematics and Artificial Intelligence, 2010.
  • Li et al. (2008) Li, Lihong, Littman, Michael L., and Walsh, Thomas J. Knows what it knows: a framework for self-aware learning. In ICML, pp. 568–575, 2008.
  • Liang et al. (2015) Liang, Yitao, Machado, Marlos C., Talvitie, Erik, and Bowling, Michael H. State of the art control of atari games using shallow reinforcement learning. CoRR, abs/1512.01563, 2015. URL http://arxiv.org/abs/1512.01563.
  • Mnih (2015) Mnih, Volodymyr et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
  • Ortner & Ryabko (2012) Ortner, Ronald and Ryabko, Daniil. Online regret bounds for undiscounted continuous reinforcement learning. In NIPS, 2012.
  • Osband & Van Roy (2014a) Osband, Ian and Van Roy, Benjamin. Model-based reinforcement learning and the eluder dimension. In Advances in Neural Information Processing Systems, pp. 1466–1474, 2014a.
  • Osband & Van Roy (2014b) Osband, Ian and Van Roy, Benjamin. Near-optimal reinforcement learning in factored MDPs. In Advances in Neural Information Processing Systems, pp. 604–612, 2014b.
  • Osband & Van Roy (2015) Osband, Ian and Van Roy, Benjamin. Bootstrapped thompson sampling and deep exploration. arXiv preprint arXiv:1507.00300, 2015.
  • Osband et al. (2013) Osband, Ian, Russo, Daniel, and Van Roy, Benjamin. (More) efficient reinforcement learning via posterior sampling. In NIPS, pp. 3003–3011. Curran Associates, Inc., 2013.
  • Pazis & Parr (2013) Pazis, Jason and Parr, Ronald. PAC optimal exploration in continuous space Markov decision processes. In AAAI. Citeseer, 2013.
  • Russo & Van Roy (2013) Russo, Dan and Van Roy, Benjamin. Eluder dimension and the sample complexity of optimistic exploration. In NIPS, pp. 2256–2264. Curran Associates, Inc., 2013.
  • Russo & Van Roy (2014) Russo, Daniel and Van Roy, Benjamin. Learning to optimize via posterior sampling. Mathematics of Operations Research, 39(4):1221–1243, 2014.
  • Strehl et al. (2006) Strehl, Alexander L., Li, Lihong, Wiewiora, Eric, Langford, John, and Littman, Michael L. PAC model-free reinforcement learning. In ICML, pp. 881–888, 2006.
  • Sutton & Barto (1998) Sutton, Richard and Barto, Andrew. Reinforcement Learning: An Introduction. MIT Press, March 1998.
  • Szepesvári (2010) Szepesvári, Csaba. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
  • Tesauro (1995) Tesauro, Gerald. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58–68, 1995.
  • Thompson (1933) Thompson, W.R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
  • Wen & Van Roy (2013) Wen, Zheng and Van Roy, Benjamin. Efficient exploration and value function generalization in deterministic systems. In NIPS, pp. 3021–3029, 2013.

APPENDICES

LSVI with dithering exploration

The LSVI algorithm iterates backwards over time periods in the planning horizon, in each iteration fitting a value function to the sum of immediate rewards and value estimates of the next period. Each value function is fitted via regularized least squares: the weight vectors satisfy

$\hat{\theta}_{l,h} \in \arg\min_{\theta \in \mathbb{R}^K} \sum_{i < l} \Big( r_{i,h} + \max_{a \in \mathcal{A}} \hat{\theta}_{l,h+1}^\top \Phi(s_{i,h+1}, a) - \theta^\top \Phi(s_{i,h}, a_{i,h}) \Big)^2 + \lambda \|\theta\|_2^2. \qquad (3)$

Notice that in Algorithm 3, when no data are available, the regression matrix and vector are empty. In this case, we simply set the estimate to zero.

Input: Data $\{(s_{i,h}, a_{i,h}, r_{i,h}) : i < l,\ h \le H\}$
.         Parameter $\lambda > 0$
Output: $\hat{\theta}_{l,0}, \ldots, \hat{\theta}_{l,H-1}$

1:  $\hat{\theta}_{l,H} \leftarrow 0$
2:  for $h = H-1, \ldots, 0$ do
3:     Generate regression problem $A$, $b$: rows $\Phi(s_{i,h}, a_{i,h})$ and targets $b_i = r_{i,h} + \max_a \hat{\theta}_{l,h+1}^\top \Phi(s_{i,h+1}, a)$
4:     Linear regression for the value function: $\hat{\theta}_{l,h} \leftarrow (A^\top A + \lambda I)^{-1} A^\top b$
5:  end for
Algorithm 3 Least-Squares Value Iteration
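A single LSVI backup is a regularized least-squares fit, as in equation (3); the sketch below (our own, not the authors' code) shows the closed-form solve performed for each period.

import numpy as np

def lsvi_backup(A, b, lam):
    """Fit one period's weights by ridge regression: argmin ||A theta - b||^2 + lam ||theta||^2.

    A stacks the feature rows Phi(s, a) of observed state-action pairs and b holds the
    regression targets (immediate reward plus the estimated next-period value).
    """
    K = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)

# Toy usage with random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
b = rng.standard_normal(50)
theta_hat = lsvi_backup(A, b, lam=1.0)
print(theta_hat)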

RL algorithms produced by synthesizing Boltzmann exploration or $\epsilon$-greedy exploration with LSVI are presented as Algorithms 4 and 5. In these algorithms the “temperature” parameter $\eta$ in Boltzmann exploration and $\epsilon$ in $\epsilon$-greedy exploration control the degree to which random perturbations distort greedy actions.

Input: Features $\Phi$; temperature $\eta > 0$, regularization $\lambda > 0$

1:  for episode $l = 0, 1, \ldots$ do
2:     Compute $\hat{\theta}_{l,0}, \ldots, \hat{\theta}_{l,H-1}$ based on Algorithm 3
3:     Observe $s_{l,0}$
4:     for $h = 0, \ldots, H-1$ do
5:        Sample $a_{l,h}$ with probability proportional to $\exp\big(\hat{\theta}_{l,h}^\top \Phi(s_{l,h}, a_{l,h}) / \eta\big)$
6:        Observe $r_{l,h}$ and $s_{l,h+1}$
7:     end for
8:  end for
Algorithm 4 LSVI with Boltzmann exploration

Input: Features $\Phi$; exploration probability $\epsilon > 0$, regularization $\lambda > 0$

1:  for episode $l = 0, 1, \ldots$ do
2:     Compute $\hat{\theta}_{l,0}, \ldots, \hat{\theta}_{l,H-1}$ using Algorithm 3
3:     Observe $s_{l,0}$
4:     for $h = 0, \ldots, H-1$ do
5:        Sample $\xi \sim \mathrm{Uniform}(0, 1)$
6:        if $\xi < \epsilon$ then
7:           Sample $a_{l,h}$ uniformly from $\mathcal{A}$
8:        else
9:           Sample $a_{l,h} \in \arg\max_{a \in \mathcal{A}} \hat{\theta}_{l,h}^\top \Phi(s_{l,h}, a)$
10:        end if
11:        Observe $r_{l,h}$ and $s_{l,h+1}$
12:     end for
13:  end for
Algorithm 5 LSVI with $\epsilon$-greedy exploration
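Both dithering rules reduce to a few lines. The sketch below (our own illustration; q_values stands for the vector of estimated action values in the current state) implements the two sampling schemes used in Algorithms 4 and 5.

import numpy as np

def boltzmann_action(q_values, eta, rng):
    """Sample an action with probability proportional to exp(Q / eta)."""
    logits = q_values / eta
    probs = np.exp(logits - logits.max())      # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(q_values), p=probs))

def epsilon_greedy_action(q_values, eps, rng):
    """With probability eps take a uniformly random action, otherwise act greedily."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.5, 0.2])
print(boltzmann_action(q, eta=0.3, rng=rng), epsilon_greedy_action(q, eps=0.1, rng=rng))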
A conjecture for RLSVI with generalization

Our computational results suggest that, when coupled with generalization, RLSVI enjoys levels of efficiency far beyond what can be achieved by Boltzmann or $\epsilon$-greedy exploration. We leave establishing efficiency guarantees in such contexts as an open problem. To stimulate thinking on this topic, we put forth a conjecture.

Conjecture 1.

Consider a coherent learning context in which, for each $h$, there is a unique $\theta_h$ satisfying $Q^*_h = \Phi \theta_h$, and suppose that reward distributions have bounded support. Then, for appropriately chosen parameters $\sigma$ and $\lambda$, there exists a polynomial $\mathrm{poly}$ such that the expected regret of RLSVI is bounded by $\mathrm{poly}(K, H)\sqrt{T}$ up to logarithmic factors.

As one would hope for an RL algorithm that generalizes, this bound does not depend on the number of states or actions. Instead, there is a dependence on the number of basis functions. Later in the appendix we present empirical results that are consistent with this conjecture.

Chain experiment details

Generating a random coherent basis

We present full details for Algorithm 6, which generates the random coherent basis functions used in our chain experiments. In this algorithm we use standard notation for indexing matrix elements: for any matrix $M$ we write $M[i, j]$ for the element in the $i$th row and $j$th column, and use the placeholder $:$ to represent an entire axis, so that, for example, $M[:, 1]$ is the first column of $M$.

Input: ,    for
Output: for

1:  Sample
2:  Set
3:  Stack
4:  Set
5:  Form projection
6:  Sample
7:  Set
8:  Project
9:  Scale for
10:  Reshape
11:  Return for
Algorithm 6 Generating a random coherent basis

The reason we rescale the value function in step (9) of Algorithm 6 is so that the resulting random basis functions are on a similar scale to the true value function. This is a completely arbitrary choice, as any rescaling of the basis can be exactly replicated by corresponding rescalings of the parameters $\lambda$ and $\sigma$.
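A compact way to realize this construction is sketched below (our own rendering of the idea behind Algorithm 6, not a line-for-line transcription): we form an orthonormal basis for the span of the flattened true value function together with K-1 Gaussian directions, then rescale every basis function to a common norm.

import numpy as np

def random_coherent_basis(q_star, K, rng):
    """Return a (D x K) basis whose span contains the flattened true value function q_star.

    q_star has one entry per state-action pair (length D). The subspace is spanned by
    q_star together with K - 1 IID Gaussian vectors; each returned basis function is
    rescaled to have the same 2-norm as q_star.
    """
    D = q_star.size
    M = np.column_stack([q_star] + [rng.standard_normal(D) for _ in range(K - 1)])
    Q, _ = np.linalg.qr(M)                    # orthonormal basis for the same span
    scale = np.linalg.norm(q_star)
    return Q * scale                          # columns are the basis functions phi_k

rng = np.random.default_rng(0)
q_star = rng.standard_normal(30)              # e.g. a 15-state, 2-action problem, flattened
Phi = random_coherent_basis(q_star, K=6, rng=rng)
# Coherence check: q_star lies (numerically) in the span of the columns of Phi.
coeffs, *_ = np.linalg.lstsq(Phi, q_star, rcond=None)
print(np.allclose(Phi @ coeffs, q_star))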

Sensitivity to the parameters $\sigma$ and $\lambda$

In Figures 10 and 11 we present the cumulative regret over the first 10,000 episodes for several orders of magnitude of $\sigma$ and $\lambda$. For most combinations of parameters, learning remains remarkably stable.

Figure 10: Fixed , varying .
Figure 11: Fixed , varying .

We find that large values of $\sigma$ lead to slower learning, since the Bayesian posterior concentrates only very slowly with new data. However, in stochastic domains we found that choosing a $\sigma$ which is too small can cause the RLSVI posterior to concentrate too quickly and so fail to explore sufficiently. This is a similar insight to previous analyses of Thompson sampling (Agrawal & Goyal, 2012) and matches the flavour of Theorem 1.

Scaling with the number of basis functions

In Figure 4 we demonstrated that RLSVI seems to scale gracefully with the number of basis features on a chain of fixed length. In Figure 12 we reproduce these results for chains of several different lengths. To highlight the overall trend we present a local polynomial regression for each chain length.

Figure 12: Graceful scaling with number of basis functions.

Roughly speaking, for small numbers of features the number of episodes required for learning appears to increase linearly with the number of basis features. However, the marginal increase from an additional basis feature seems to decrease and almost plateau once the number of features reaches the maximum dimension of the problem.

Empirical support for polynomial learning

Our simulation results empirically demonstrate learning which appears to be polynomial in both the chain length and the number of basis features. Inspired by the results in Figure 5, we present the learning times for different chain lengths and numbers of features, together with a quadratic regression fitted separately for each chain length.

Figure 13: Learning times with quadratic regression fits for each chain length.

This is only one small set of experiments, but these results are not inconsistent with Conjecture 1. The quadratic model seems to fit the data reasonably well.

Tetris experiment details

Stationary RLSVI

In Algorithm 7 we present a natural adaptation of RLSVI to settings without a known episode length but with a regular episodic structure. This is the algorithm we use for our Tetris experiments. The corresponding LSVI variants are formed in the same way.

Input: Data (the buffer of most recent transitions)
.         Previous estimate $\tilde{\theta}$
.         Parameters $\lambda > 0$, $\sigma > 0$
Output: $\tilde{\theta}'$

1:  Generate regression problem $A$, $b$: rows are the features of the buffered state-action pairs and targets are the observed rewards plus the maximal estimated value of the next state under $\tilde{\theta}$
2:  Bayesian linear regression for the value function: compute the Gaussian posterior over the weights with prior $N(0, \lambda^{-1} I)$ and noise variance $\sigma^2$
3:  Sample $\tilde{\theta}'$ from the Gaussian posterior
Algorithm 7 Stationary RLSVI
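A sketch of this stationary variant appears below (our own illustration consistent with the description above, not the exact implementation used for Tetris): a capped buffer of recent transitions is regressed against targets formed with the previous weight sample, and a new weight vector is drawn from the resulting Gaussian posterior.

import numpy as np
from collections import deque

class StationaryRLSVI:
    """Time-homogeneous RLSVI with a capped replay buffer (illustrative sketch)."""

    def __init__(self, K, sigma, lam, buffer_size, rng):
        self.K, self.sigma, self.lam, self.rng = K, sigma, lam, rng
        self.buffer = deque(maxlen=buffer_size)      # keeps only the most recent transitions
        self.theta = np.zeros(K)                     # previous weight sample

    def observe(self, phi_sa, reward, phi_next_actions):
        """Store one transition; phi_next_actions is the feature matrix of the next state
        (one row per available action), or None at the end of a game."""
        self.buffer.append((phi_sa, reward, phi_next_actions))

    def resample(self):
        """Bayesian linear regression against bootstrapped targets, then sample new weights."""
        if not self.buffer:
            return self.theta
        A = np.array([phi for (phi, _, _) in self.buffer])
        b = np.array([r + (0.0 if nxt is None else np.max(nxt @ self.theta))
                      for (_, r, nxt) in self.buffer])
        precision = A.T @ A / self.sigma**2 + self.lam * np.eye(self.K)
        cov = np.linalg.inv(precision)
        mean = cov @ (A.T @ b) / self.sigma**2
        self.theta = self.rng.multivariate_normal(mean, cov)
        return self.theta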

Input: Features ;