Scalable Coordinated Exploration in Concurrent Reinforcement Learning


Abstract

We consider a team of reinforcement learning agents that concurrently operate in a common environment, and we develop an approach to efficient coordinated exploration that is suitable for problems of practical scale. Our approach builds on seed sampling (Dimakopoulou and Van Roy, 2018) and randomized value function learning (Osband et al., 2016a). We demonstrate that, for simple tabular contexts, the approach is competitive with previously proposed tabular model learning methods (Dimakopoulou and Van Roy, 2018). With a higher-dimensional problem and a neural network value function representation, the approach learns quickly with far fewer agents than alternative exploration schemes.



Maria Dimakopoulou (Stanford University, madima@stanford.edu), Ian Osband (Google DeepMind, iosband@google.com), Benjamin Van Roy (Stanford University, bvr@stanford.edu)

1 Introduction

Consider a farm of robots operating concurrently, learning how to carry out a task, as studied in Gu et al. (2016). There are benefits to scale, since a larger number of robots can gather and share larger volumes of data that enable each to learn faster. These benefits are most dramatic if the robots explore in a coordinated fashion, diversifying their learning goals and adapting appropriately as data is gathered. Web services present a similar situation, as considered in Silver et al. (2013). Each user is served by an agent, and the collective of agents can accelerate learning by intelligently coordinating how they experiment. Considering its importance, the problem of coordinated exploration in reinforcement learning has received surprisingly little attention; while Gu et al. (2016) and Silver et al. (2013) consider teams of agents that gather data in parallel, they do not address coordination of data gathering, though this can be key to team performance. Dimakopoulou and Van Roy (2018) recently identified properties that are essential to efficient coordinated exploration and proposed suitable tabular model learning methods based on seed sampling. Though this represents a conceptual advance, the methods do not scale to meet the needs of practical applications, which require generalization to address intractable state spaces. In this paper, we develop scalable reinforcement learning algorithms that aim to efficiently coordinate exploration, and we present computational results that establish their substantial benefit.

Work on coordinated exploration builds on a large literature that addresses efficient exploration in single-agent reinforcement learning (see, e.g., Kearns and Singh (2002); Jaksch et al. (2010); Szepesvári (2010)). A growing segment of this literature studies and extends posterior sampling for reinforcement learning (PSRL; Strens, 2000), which has led to statistically efficient and computationally tractable approaches to exploration (Osband et al., 2013; Osband and Van Roy, 2017a, b). The methods we will propose leverage this line of work, particularly the use of randomized value function learning (Osband et al., 2016b).

The problem we address is known as concurrent reinforcement learning (Silver et al., 2013; Pazis and Parr, 2013; Guo and Brunskill, 2015; Pazis and Parr, 2016; Dimakopoulou and Van Roy, 2018). A team of reinforcement learning agents interact with the same unknown environment, share data with one another, and learn in parallel how to operate effectively. To learn efficiently in such settings, the agents should coordinate their exploratory effort. Three properties essential to efficient coordinated exploration, identified in Dimakopoulou and Van Roy (2018), are real-time adaptivity to shared observations, commitment to carry through with action sequences that reveal new information, and diversity across learning opportunities pursued by different agents. That paper demonstrated that upper-confidence-bound (UCB) exploration schemes for concurrent reinforcement learning (concurrent UCRL), such as those discussed in Pazis and Parr (2013); Guo and Brunskill (2015); Pazis and Parr (2016), fail to satisfy the diversity property due to their deterministic nature. Further, a straightforward extension of PSRL to the concurrent multi-agent setting, in which each agent independently samples a new MDP at the start of each time period, as done in Kim (2017), fails to satisfy the commitment property because the agents are unable to explore the environment thoroughly (Russo et al., 2017). As an alternative, Dimakopoulou and Van Roy (2018) proposed seed sampling, which extends PSRL in a manner that simultaneously satisfies the three properties. The idea is that each concurrent agent independently samples a random seed, and a mapping from seeds to MDPs is determined by the prevailing posterior distribution. Independence among seeds diversifies exploratory effort among agents.
If the mapping is defined in an appropriate manner, the fact that each agent maintains a consistent seed ensures a sufficient degree of commitment, while the fact that the posterior adapts to new data allows each agent to react intelligently to new information.

Algorithms presented in Dimakopoulou and Van Roy (2018) are tabular and hence do not scale to address intractable state spaces. Further, computational studies carried out in Dimakopoulou and Van Roy (2018) focus on simple stylized problems designed to illustrate the benefits of seed sampling. In the next section, we demonstrate that observations made in these stylized contexts extend to a more realistic problem involving swinging up and balancing a pole. Subsequent sections extend the seed sampling concept to operate with generalizing randomized value functions (Osband et al., 2016b), leading to new algorithms such as seed temporal-difference learning (seed TD) and seed least-squares value iteration (seed LSVI). We show that on tabular problems, these scalable seed sampling algorithms perform as well as the tabular seed sampling algorithms of Dimakopoulou and Van Roy (2018). Finally, we present computational results demonstrating the effectiveness of one of our new algorithms, applied in conjunction with a neural network representation of the value function, on another pole balancing problem with a state space too large to be addressed by tabular methods. Our approach is able to explore efficiently, with agents learning to balance the pole quickly, while agents operating with an ε-greedy baseline fail to see any reward over any reasonable duration of interaction.

2 Seeding with Tabular Representations

This section shows that the advantages of seed sampling over alternative exploration schemes extend beyond the toy problems with known transition dynamics and a handful of unknown rewards considered in Dimakopoulou and Van Roy (2018). We consider a problem that is more realistic and complex, but of sufficiently small scale to be addressed by tabular methods, in which a group of agents learns to swing up and balance a pole. We demonstrate that seed sampling learns to achieve the goal quickly and with far fewer agents than other exploration strategies.

In the classic problem (Sutton and Barto, 2017), a pole is attached to a cart that moves on a frictionless rail. We modify the problem so that deep exploration is crucial to identifying rewarding states and thus learning the optimal policy. Unlike the traditional cartpole problem, where the interaction begins with the pole stood upright and the agent must learn to balance it, in our problem the interaction begins with the pole hanging down and the agent must learn to swing it up. The cart moves on an infinite rail. Concretely, the agent interacts with the environment through the state (θ, θ̇), where θ is the angle of the pole from the vertical, upright position and θ̇ is the respective angular velocity. The cart is of mass M and the pole has mass m and length l, with acceleration due to gravity g. At each timestep the agent can apply a horizontal force F to the cart. The dynamics of the system are governed by the second order differential equation in θ:

θ̈ = [ g sin θ − cos θ (F + m l θ̇² sin θ) / (M + m) ] / [ l (4/3 − m cos²θ / (M + m)) ]
We discretize the evolution of this second order differential equation with a fixed timescale and present the agent with a finite set of horizontal forces as actions at every timestep. At each timestep the agent pays a small cost for its action but can receive a large reward if the pole is balanced upright and steady in the middle. The interaction ends after a fixed number of actions. The environment is modeled as a time-homogeneous MDP, identified by M = (S, A, R, P, ρ), where S is the discretized state space, A is the action space, R is the reward model, P is the transition model and ρ is the initial state distribution.
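The discretized dynamics above can be sketched as a single Euler step of the classic cart-pole equation in θ. The parameter values and timescale below are illustrative assumptions, since the paper's constants are not reproduced here:

```python
import math

# Illustrative constants; the paper's actual values are not reproduced here.
M, m, l, g = 1.0, 0.1, 0.5, 9.8   # cart mass, pole mass, pole length, gravity
DT = 0.02                          # assumed discretization timescale

def step(theta, theta_dot, force):
    """One Euler step of the standard cart-pole dynamics in theta
    (Barto, Sutton & Anderson form); theta = 0 is the upright position."""
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    theta_acc = (g * sin_t - cos_t * (force + m * l * theta_dot**2 * sin_t) / (M + m)) \
                / (l * (4.0 / 3.0 - m * cos_t**2 / (M + m)))
    theta_dot = theta_dot + DT * theta_acc
    theta = theta + DT * theta_dot
    return theta, theta_dot
```

Starting from the hanging-down position (θ = π) with no force, the pole stays put, which is what makes deep exploration necessary in this variant.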

(a)
(b)
Figure 1: Performance of PSRL (no adaptivity), concurrent UCRL (no diversity), Thompson resampling (no commitment) and seed sampling in the tabular problem of learning to swing up and keep upright a pole attached to a cart that moves left and right on an infinite rail.

Consider a group of K agents, who explore and learn to operate in parallel in this common environment. Each agent k begins at a state whose components are drawn uniformly at random from a fixed range. Each agent takes an action at the arrival times of an independent Poisson process. At each such time, the agent takes an action, transitions to a new state and observes a reward. The agents are uncertain about the transition structure and share a common Dirichlet prior over the transition probabilities associated with each state-action pair. The agents are also uncertain about the reward structure and share a common Gaussian prior over the reward associated with each state-action pair. Agents share information in real time and update their posterior beliefs.
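The shared Gaussian reward posterior described above admits a standard conjugate update; every agent folds all teammates' observations into the same posterior. A minimal sketch, with precisions (inverse variances) as illustrative parameters:

```python
def update_gaussian(mu0, tau0, rewards, tau_r):
    """Conjugate update of a shared Gaussian reward posterior for one
    (state, action) pair. `rewards` pools every agent's observations for
    that pair; tau denotes precision (1/variance)."""
    tau_n = tau0 + len(rewards) * tau_r                   # posterior precision
    mu_n = (tau0 * mu0 + tau_r * sum(rewards)) / tau_n    # posterior mean
    return mu_n, tau_n
```

With no observations the prior is returned unchanged; as pooled observations accumulate, the posterior mean moves toward the empirical average and the precision grows.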

We compare seed sampling with three baselines: PSRL, concurrent UCRL and Thompson resampling. In PSRL, each agent samples an MDP from the common prior at the start of its interaction and computes the optimal policy with respect to this sample, which does not change throughout the agent's interaction with the environment. Therefore, the PSRL agents do not adapt to new information in real time.

On the other hand, in concurrent UCRL, Thompson resampling and seed sampling, at each time period the agent generates a new MDP based on the data gathered by all agents up to that time, computes the optimal policy for this MDP and takes an action according to the new policy. Concurrent UCRL is a deterministic approach in which all the parallel agents construct the same optimistic MDP conditioned on the common shared information up to that time. Therefore, the concurrent UCRL agents do not diversify their exploratory effort. Thompson resampling has each agent independently sample a new MDP at each time period from the common posterior distribution conditioned on the shared information up to that time. Resampling an MDP independently at each time period breaks the agent's intent to pursue a sequence of actions revealing the rare reward states. Therefore, the Thompson resampling agents do not commit.

Finally, in seed sampling, at the beginning of the experiment, each agent samples a random seed with two components that remain fixed throughout the experiment. The first component is a collection of sequences of independent and identically distributed exponential random variables; the second component is a collection of independent and identically distributed standard Gaussian random variables. At each time period, each agent maps the data gathered by all agents up to that time and its seed to an MDP by combining the Exponential-Dirichlet seed sampling and the standard-Gaussian seed sampling methods described in Dimakopoulou and Van Roy (2018). Independence among seeds diversifies exploratory effort among agents. The fact that each agent maintains a consistent seed leads to a sufficient degree of commitment, while the fact that the posterior adapts to new data allows each agent to react intelligently to new information.
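The seed-to-MDP mapping can be sketched as follows. This is a simplified reading of the two methods named above, not their exact formulation: transition samples are built from fixed Exp(1) seed sequences via the Gamma(k) = sum-of-k-Exp(1) construction (with a unit prior pseudo-count per outcome, an assumption), and reward samples combine the posterior mean and standard deviation with one fixed N(0, 1) seed draw:

```python
import random

class Seed:
    """An agent's fixed seed: per-outcome sequences of i.i.d. Exp(1) variables
    for the transition model and one standard Gaussian for the reward model.
    Sizes and structure here are illustrative."""
    def __init__(self, n_outcomes, max_count, rng):
        self.exp = [[rng.expovariate(1.0) for _ in range(max_count)]
                    for _ in range(n_outcomes)]
        self.gauss = rng.gauss(0.0, 1.0)

def seeded_transition(seed, counts):
    """Exponential-Dirichlet seed sampling (sketch): a Dirichlet draw built
    from the fixed Exp(1) seeds and the observed transition counts."""
    gammas = [sum(seed.exp[i][:counts[i] + 1]) for i in range(len(counts))]
    total = sum(gammas)
    return [gm / total for gm in gammas]

def seeded_reward(seed, mu, sigma):
    """Standard-Gaussian seed sampling: posterior mean/std plus the fixed seed."""
    return mu + sigma * seed.gauss
```

Because the seed is fixed, the same data always maps to the same sampled MDP (commitment), while independent seeds across agents yield different samples (diversity).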

After the end of the learning interaction, there is an evaluation of what the group of agents learned. The performance of each algorithm is measured with respect to the reward achieved during this evaluation, in which a greedy agent starts at the initial state, generates the expected MDP of the cartpole environment based on the posterior beliefs formed by the parallel agents at the end of their learning, and interacts with the cartpole as dictated by the optimal policy with respect to this MDP. Figure 1 plots the reward achieved by the evaluation agent for increasing numbers of PSRL, seed sampling, concurrent UCRL and Thompson resampling agents operating in parallel in the cartpole environment. As the number of parallel learning agents grows, seed sampling quickly increases its evaluation reward and attains a high reward within only 20 seconds of learning. On the other hand, the evaluation reward achieved by episodic PSRL (no adaptivity), concurrent UCRL (no diversity), and Thompson resampling (no commitment) does not improve at all, or improves at a much slower rate, as the number of parallel agents increases.

3 Seeding with Generalizing Representations

As we demonstrated in Section 2, seed sampling can offer a substantial advantage over other exploration schemes. However, our examples involved tabular learning, and the algorithms we considered do not scale gracefully to practical problems, which typically pose enormous state spaces. In this section, we propose an algorithmic framework that extends the seeding concept from tabular to generalizing representations. This framework supports scalable reinforcement learning algorithms with the degrees of adaptivity, commitment, and diversity required for efficient coordinated exploration.

We consider algorithms in which each agent is instantiated with a seed and then learns a parameterized value function over the course of operation. When data is insufficient, the seeds govern behavior. As data accumulates and is shared across agents, each agent perturbs each observation in a manner distinguished by its seed before training its value function on the data. The varied perturbations of shared observations result in diverse value function estimates and, consequently, diverse behavior. By maintaining a constant seed throughout learning, an agent does not change its interpretation of the same observation from one time period to the next, and this achieves the desired level of commitment, which can be essential in the presence of delayed consequences. Finally, by using parameterized value functions, agents can cope with intractably large state spaces. Section 3.1 offers a more detailed description of our proposed algorithmic framework, and Section 3.2 provides examples of algorithms that fit this framework.

3.1 Algorithmic Framework

There are K agents, indexed k = 1, …, K. The agents operate over time periods in identical environments, each with state space S and action space A. Denote by t_{k,m} the time at which agent k applies its mth action. The agents may progress synchronously (t_{1,m} = … = t_{K,m} for all m) or asynchronously. Each agent k begins at state s_{k,0}. At time t_{k,m}, agent k is at state s_{k,m}, takes action a_{k,m}, observes reward r_{k,m} and transitions to state s_{k,m+1}. In order for the agents to adapt their policies in real time, each agent has access to a buffer with observations of the form (s, a, r, s′). This buffer stores past observations of all agents. Denote by B_t the content of this buffer at time t. With value function learning, agent k uses a family of state-action value functions indexed by a set of parameters Θ. Each θ ∈ Θ defines a state-action value function Q_θ. The value Q_θ(s, a) could be, for example, the output of a neural network with weights θ in response to an input (s, a). Initially, the agents may have prior beliefs over the parameter θ, such as its expectation or its level of uncertainty.

Agents diversify their behavior through a seeding mechanism. Under this mechanism, each agent k is instantiated with a seed z_k. The seed is intrinsic to agent k and differentiates how agent k interprets the common history of observations in the buffer. One form of seeding is for each agent to independently and randomly perturb the observations in the buffer. For example, different agents k and k′ can add different noise terms, determined by seeds z_k and z_{k′} respectively, to the reward of the same ith observation in the buffer, as discussed in Osband et al. (2016b) for the single-agent setting. This induces diversity by creating modified training sets from the same history among the agents. Based on the prior distribution for the parameter θ, each agent can initialize its value function with a sample from this distribution, with the seed providing the source of randomness. These independent value function parameter samples diversify exploration in the initial stages of operation. The seed remains fixed throughout the course of learning. This induces a level of commitment in agent k, which can be important in reinforcement learning settings where delayed consequences are present.
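The reward-perturbation form of seeding can be sketched in a few lines. The noise scale is an illustrative parameter; the key point is that the perturbation is a deterministic function of the agent's fixed seed:

```python
import random

def perturbed_rewards(buffer, agent_seed, sigma=0.1):
    """Seed-based perturbation (sketch): an agent re-noises the shared buffer
    with noise drawn from its own fixed seed, so each agent trains on a
    different, but individually consistent, version of the common history."""
    rng = random.Random(agent_seed)  # fixed seed => identical perturbation on every refit
    return [(s, a, r + rng.gauss(0.0, sigma), s2) for (s, a, r, s2) in buffer]
```

One agent always sees the same perturbed training set (commitment), while different agents see different training sets (diversity).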

At time t_{k,m}, before taking its mth action, agent k fits its generalized representation on the history (or a subset thereof) of observations, perturbed by its noise seeds. The initial parameter seed can also play a role in stages of learning beyond the first time period by influencing the model fitting; for example, it can appear in a regularization term. By this model fitting, agent k obtains parameters θ_{k,m}. These parameters define a state-action value function, based on which a policy is computed. Based on the obtained policy and its current state s_{k,m}, the agent takes a greedy action a_{k,m}, observes reward r_{k,m} and transitions to state s_{k,m+1}. The agent stores this observation in the buffer so that all agents can access it the next time they fit their models. For learning problems with long learning periods, it may be practical to cap the common buffer at a certain capacity and, once this capacity is exceeded, to start overwriting observations at random. In this case, the way observations are overwritten can also differ across agents and be determined by each agent's seed (e.g. by a seed-defined random permutation of buffer indices).
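The capped buffer with seed-driven random overwriting can be sketched as follows; the class and its interface are assumptions of this sketch, not part of the paper:

```python
import random

class CappedBuffer:
    """Shared observation buffer with a capacity cap (sketch): once full, new
    observations overwrite existing ones at seed-determined random slots."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.rng = random.Random(seed)  # the slot sequence is driven by a seed
        self.data = []

    def add(self, obs):
        if len(self.data) < self.capacity:
            self.data.append(obs)
        else:
            self.data[self.rng.randrange(self.capacity)] = obs
```

With an agent-specific seed in place of the shared one, each agent's view of which old observations survive would differ, as the text suggests.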

The ability of the agents to make decisions in the high-dimensional environments of real systems, where the number of states is enormous or even infinite, is achieved through the value function representations, while coordination of the group's exploratory effort is achieved through the way the seeding mechanism controls the fitting of these generalized representations. As the number of parallel agents increases, this framework enables the agents to learn to operate and achieve high rewards in complex environments very quickly.

3.2 Examples of Algorithms

We now present examples of algorithms that fit the framework of Section 3.1.

3.2.1 Seed Least Squares Value Iteration (Seed LSVI)

LSVI computes a sequence of value function parameters reflecting optimal expected rewards over an expanding horizon based on observed data. In seed LSVI, each agent k's initial parameter is set to a sample θ_{k,0} drawn from the prior using its seed. Before its mth action, agent k uses the buffer of observations gathered by all agents up to that time, or a subset thereof, and its random noise terms z_{k,1}, z_{k,2}, … to carry out LSVI, initialized with Q_{θ̃_H} = 0, where H is the LSVI planning horizon:

θ̃_{h−1} ∈ argmin_θ Σ_{(i, s, a, r, s′) ∈ B} ( r + z_{k,i} + max_{a′ ∈ A} Q_{θ̃_h}(s′, a′) − Q_θ(s, a) )² + λ ψ(θ)

for h = H, …, 1, where λ ψ(θ) is a regularization penalty (e.g. a scaled squared distance from θ_{k,0}). After setting θ_{k,m} = θ̃_0, agent k applies action a_{k,m} ∈ argmax_{a ∈ A} Q_{θ_{k,m}}(s_{k,m}, a). Note that the agent's random seed can be viewed as z_k = (θ_{k,0}, z_{k,1}, z_{k,2}, …).
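With one-hot (tabular) features, each least-squares regression in the backup above collapses to a per-(state, action) average of perturbed targets shrunk by the regularizer. The sketch below uses that special case to keep the code self-contained; it is an illustration of the recursion, not the paper's exact formulation:

```python
def seed_lsvi(buffer, noise, n_states, n_actions, horizon, lam=1.0):
    """Seed LSVI specialized to one-hot features: each fit is a per-(s, a)
    average of perturbed backup targets, regularized toward zero by lam.
    `noise[i]` is this agent's fixed perturbation of observation i."""
    Q = [[0.0] * n_actions for _ in range(n_states)]   # Q at the horizon is zero
    for _ in range(horizon):
        sums = [[0.0] * n_actions for _ in range(n_states)]
        cnts = [[0] * n_actions for _ in range(n_states)]
        for i, (s, a, r, s2) in enumerate(buffer):
            sums[s][a] += r + noise[i] + max(Q[s2])    # perturbed backup target
            cnts[s][a] += 1
        Q = [[sums[s][a] / (cnts[s][a] + lam) for a in range(n_actions)]
             for s in range(n_states)]
    return Q
```

Different agents, holding different `noise` sequences, would obtain different Q estimates from the same buffer, which is exactly the diversity mechanism described above.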

3.2.2 Seed Temporal-Difference Learning (Seed TD)

When the dimension of θ is very large, significant computational time may be required to produce an estimate with LSVI, and using first-order algorithms in the vein of stochastic gradient descent, such as TD, can be beneficial. In seed TD, each agent k's initial parameter is set to a sample θ_{k,0} drawn from the prior using its seed. Before its mth action, agent k uses the buffer of observations gathered by all agents up to that time to carry out J iterations of stochastic gradient descent, initialized with θ̃_0 = θ_{k,m−1}:

θ̃_j = θ̃_{j−1} − α ∇_θ [ L(θ̃_{j−1}; s, a, r + z_{k,i}, s′) + λ ψ(θ̃_{j−1}) ]

for j = 1, …, J, where α is the TD learning rate, L(θ; s, a, r, s′) = ( r + γ max_{a′ ∈ A} Q_θ(s′, a′) − Q_θ(s, a) )² is the loss function, γ is the discount rate and λ ψ(θ) is a regularization penalty (e.g. a scaled squared distance from θ_{k,0}). After setting θ_{k,m} = θ̃_J, agent k applies action a_{k,m} ∈ argmax_{a ∈ A} Q_{θ_{k,m}}(s_{k,m}, a). Note that the agent's random seed can be viewed as z_k = (θ_{k,0}, z_{k,1}, z_{k,2}, …).
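A sweep of seed TD with a linear Q on one-hot features reduces to dictionary updates over (state, action) pairs. This sketch uses a semi-gradient step (the target is held fixed), and the action set and names are assumptions:

```python
def seed_td_sweep(theta, buffer, noise, actions, alpha=0.1, gamma=0.99):
    """One stochastic-gradient sweep of seed TD with Q represented as a dict
    over (state, action) pairs, i.e. a linear Q on one-hot features.
    `noise[i]` is the agent's fixed perturbation of observation i."""
    for i, (s, a, r, s2) in enumerate(buffer):
        target = r + noise[i] + gamma * max(theta.get((s2, b), 0.0) for b in actions)
        td_error = target - theta.get((s, a), 0.0)
        theta[(s, a)] = theta.get((s, a), 0.0) + alpha * td_error  # semi-gradient step
    return theta
```

Running the sweep repeatedly with an agent's fixed `noise` drives that agent's Q toward its own perturbed fixed point, giving the same commitment-with-diversity behavior as seed LSVI at lower per-update cost.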

3.2.3 Seed Ensemble

When the number of parallel agents K is large, instead of having each one of the K agents fit a separate value function model (e.g. separate neural networks), we can maintain an ensemble of M < K models to decrease computational requirements. Each model m is initialized with a parameter sample from the common prior belief, which is fixed and specific to model m of the ensemble. Moreover, model m is trained on the buffer of observations according to one of the methods of Section 3.2.1 or 3.2.2, with each observation perturbed by noise that is also fixed and specific to model m of the ensemble. Note that each agent's random seed is then simply a randomly drawn index associated with a model from the ensemble.
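The agent-to-model assignment is then a single seeded draw per agent, which can be sketched as:

```python
import random

def assign_agents_to_models(n_agents, n_models, master_seed=0):
    """Seed-ensemble sketch: each agent's seed is a random index into a shared
    ensemble of perturbed models, so only n_models value functions are trained
    instead of n_agents."""
    rng = random.Random(master_seed)
    return [rng.randrange(n_models) for _ in range(n_agents)]
```

Agents sharing an index behave identically, so the effective diversity is capped at the ensemble size; the experiments in Section 4.2 suggest a modest ensemble suffices.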

3.2.4 Extensions

The framework we propose is not necessarily constrained to value function approximation methods. For instance, one could use the same principles for policy function approximation, where each agent defines a parameterized policy function and, before its mth action, uses the buffer of observations gathered by all agents up to that time and its seed to perform a policy gradient step.

4 Computational Results

In this section, we present computational results that demonstrate the robustness and effectiveness of the approach we suggest in Section 3. In Section 4.1, we present results that serve as a sanity check for our approach. We show that in the tabular toy problems considered in Dimakopoulou and Van Roy (2018), seeding with generalized representations performs equivalently to the seed sampling algorithm proposed in that paper, which is specifically designed for tabular settings and can benefit from very informative priors. In Section 4.2, we scale up to a high-dimensional problem, which would be too difficult to address by any tabular approach. We use the concurrent reinforcement learning algorithms of Sections 3.2.2 and 3.2.3 with a neural network value function approximation, and we see that our approach explores quickly and achieves high rewards.

4.1 Sanity Checks

The authors of Dimakopoulou and Van Roy (2018) considered two toy problems that demonstrate the advantage of seed sampling over Thompson resampling or concurrent UCRL. We compare the performance of seed LSVI (Section 3.2.1) and seed TD (Section 3.2.2), which are designed for generalized representations, with seed sampling, Thompson resampling and concurrent UCRL, which are designed for tabular representations.

The first toy problem is the “bipolar chain” of Figure 2(a). The chain has an even number of vertices, N, and the endpoints are absorbing. From any inner vertex of the chain, there are two edges that lead deterministically to the left or to the right. The leftmost edge has weight θ_L and the rightmost edge has weight θ_R, such that the two have equal magnitude and exactly one is positive. All other edges have a common fixed weight. Each one of the K agents starts from the same inner vertex of the chain, and its goal is to maximize the accrued reward. We let the agents interact with the environment for a fixed number of time periods. As in Dimakopoulou and Van Roy (2018), seed sampling, Thompson resampling and concurrent UCRL know everything about the environment except whether θ_L > θ_R or θ_L < θ_R, and they share a common prior that assigns equal probability to either scenario. Once an agent reaches either of the endpoints, all agents learn the true values of θ_L and θ_R. Seed LSVI and seed TD use an N-dimensional one-hot encoding to represent the chain's states and a linear value function representation. Unlike the tabular algorithms, seed LSVI and seed TD start with a completely uninformative prior.

We run the algorithms with different numbers of parallel agents, K, operating on a chain with a fixed number of vertices. Figure 2(c) shows the mean reward per agent achieved as K increases. The “bipolar chain” example aims to highlight the importance of the commitment property. As explained in Dimakopoulou and Van Roy (2018), concurrent UCRL and seed sampling are expected to perform on par because they exhibit commitment, but Thompson resampling is detrimental to exploration because resampling an MDP in every time period leads the agents to oscillate around the start vertex. Seed LSVI and seed TD exhibit commitment and perform almost as well as seed sampling, which not only is designed for tabular problems but also starts with a significantly more informed prior.

The second toy problem is the “parallel chains” of Figure 2(b). Starting from a common source vertex, each of the K agents chooses one of C parallel chains, each of length L. Once a chain is chosen, the agent cannot switch to another chain. All the edges of each chain have zero weight, apart from the edge incoming to the last vertex of chain c, which has weight θ_c. The objective is to choose the chain with the maximum last-edge reward. As in Dimakopoulou and Van Roy (2018), seed sampling, Thompson resampling and concurrent UCRL know everything about the environment except the weights θ_c, on which they share a common, well-specified prior. Once an agent traverses the last edge of chain c, all agents learn θ_c. Seed LSVI and seed TD use a one-hot encoding to represent the environment's states and a linear value function representation. As before, seed LSVI and seed TD start with a completely uninformative prior.

We run the algorithms with different numbers of parallel agents, K, operating on a parallel-chain environment with fixed numbers of chains and chain length. Figure 2(d) shows the mean reward per agent achieved as K increases. The “parallel chains” example aims to highlight the importance of the diversity property. As explained in Dimakopoulou and Van Roy (2018), Thompson resampling and seed sampling are expected to perform on par because they diversify, but concurrent UCRL wastes the exploratory effort of the agents, because it sends all the agents who have not yet left the source to the same chain, the one with the most optimistic last-edge reward. Seed LSVI and seed TD exhibit diversity and perform identically to seed sampling, which again starts with a very informed prior.

(a) Bipolar chain environment
(b) Parallel chains environment
(c) Bipolar chain mean regret per agent
(d) Parallel chains mean regret per agent
Figure 2: Comparison of the scalable seed algorithms, seed LSVI and seed TD, with their tabular counterpart seed sampling and the tabular alternatives concurrent UCRL and Thompson resampling in the toy settings considered in Dimakopoulou and Van Roy (2018). This comparison serves as a sanity check for the proposed framework.

4.2 Scaling Up: Cartpole Swing-Up

In this section we extend the algorithms and insights we have developed in the rest of the paper to a complex non-linear control problem. We revisit a variant of the “cartpole” problem of Section 2, but we introduce two additional state variables, the horizontal distance of the cart from the center, x, and its velocity, ẋ. The second order differential equations governing the system become

ẍ = [ F + m l (θ̇² sin θ − θ̈ cos θ) ] / (M + m),
θ̈ = [ g sin θ − cos θ (F + m l θ̇² sin θ) / (M + m) ] / [ l (4/3 − m cos²θ / (M + m)) ].
We discretize the evolution of this system with the same timescale as before. The agent receives a positive reward only if the pole is balanced upright and steady and the cart is centered (precisely, when the angle, angular velocity, position and velocity all lie within small thresholds); otherwise the reward is zero. We evaluate performance over a fixed duration of interaction, or equivalently a fixed number of actions. For implementation, we make use of the DeepMind Control Suite, which imposes a rigid edge at the ends of the rail (Tassa et al., 2018).

Due to the curse of dimensionality, tabular approaches to seed sampling quickly become intractable as we introduce more state variables. For a practical approach to seed sampling in this domain, we apply the seed TD-ensemble algorithm of Sections 3.2.2 and 3.2.3, together with a neural network representation of the value function. We pass the neural network six features derived from the state. The value function is a (50, 50)-MLP with rectified linear units and a linear skip connection. We initialize each network's parameters with samples from the Glorot initialization (Glorot and Bengio, 2010). During learning we take gradient steps only with respect to the trainable parameters; the fixed, randomly initialized component plays a role similar to a prior. We sample the reward-perturbation noise to be used with the shared replay buffer.
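The additive-prior construction above can be sketched with a linear stand-in for the (50, 50)-MLP. The Glorot-style scale and the names below are assumptions of this sketch:

```python
import random

def make_value_fn(n_features, seed):
    """Additive-prior value function (sketch): q(x) = f_theta(x) + f_prior(x),
    where f_prior is randomly initialized from the agent's seed and never
    trained, playing a role similar to a prior. A linear stand-in for the
    paper's (50, 50)-MLP with skip connection."""
    rng = random.Random(seed)
    scale = 1.0 / n_features ** 0.5          # assumed Glorot-style scale
    prior = [rng.gauss(0.0, scale) for _ in range(n_features)]  # fixed component
    theta = [0.0] * n_features                                  # trainable component

    def q(x):
        return sum((t + p) * xi for t, p, xi in zip(theta, prior, x))
    return q, theta, prior
```

Gradient steps would update `theta` only, so early behavior is governed by the seed-drawn prior component, which is exactly how seeds drive initial exploration in the framework of Section 3.1.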

Figure 3: Comparison of seed sampling varying the number of agents, with a fixed model ensemble size. As a baseline, results from applying DQN with 100 agents using ε-greedy exploration are shown in black.

Figure 3 presents the results of our seed sampling experiments on this cartpole problem. Each curve is averaged over 10 random instances. As a baseline, we consider DQN with 100 parallel agents, each with ε-greedy action selection. With this approach, the agents fail to see any reward over any reasonable duration of their experience. By contrast, a seed sampling approach is able to explore efficiently, with agents learning to balance the pole remarkably quickly. The average reward per agent increases as we increase the number of parallel agents. To reduce compute time, we use a seed ensemble with fewer models than agents; this does not appear to significantly degrade performance relative to using one model per agent.

5 Closing Remarks

We have extended the concept of seeding from tabular representations, which are impractical at scale, to generalized representations, and we have proposed an approach for designing scalable concurrent reinforcement learning algorithms that can intelligently coordinate the exploratory effort of agents learning in parallel in potentially enormous state spaces. This approach allows the concurrent agents (1) to adapt to each other's high-dimensional observations via value function learning, (2) to diversify their experience collection via an intrinsic random seed that uniquely initializes each agent's generalized representation and uniquely interprets the common history of observations, and (3) to commit to sequences of actions revealing useful information by holding each agent's seed constant throughout learning. We envision multiple applications of practical interest in which a number of parallel agents that conform to the proposed framework can learn and achieve high rewards over short learning periods. Such application areas include web services, the management of a fleet of autonomous vehicles, and the management of a farm of networked robots, where each online user, vehicle, or robot, respectively, is controlled by an agent.

References

  • Dimakopoulou and Van Roy (2018) Maria Dimakopoulou and Benjamin Van Roy. Coordinated exploration in concurrent reinforcement learning. In ICML, 2018.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
  • Gu et al. (2016) Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. arXiv preprint, 2016.
  • Guo and Brunskill (2015) Z. Guo and E. Brunskill. Concurrent PAC RL. In AAAI Conference on Artificial Intelligence, pages 2624–2630, 2015.
  • Jaksch et al. (2010) Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
  • Kearns and Singh (2002) Michael J. Kearns and Satinder P. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
  • Kim (2017) Michael Jong Kim. Thompson sampling for stochastic control: The finite parameter case. IEEE Transactions on Automatic Control, 2017.
  • Osband and Van Roy (2017a) Ian Osband and Benjamin Van Roy. On optimistic versus randomized exploration in reinforcement learning. In The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, 2017a.
  • Osband and Van Roy (2017b) Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? In ICML, 2017b.
  • Osband et al. (2013) Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In NIPS, pages 3003–3011. Curran Associates, Inc., 2013.
  • Osband et al. (2016a) Ian Osband, Daniel Russo, Benjamin Van Roy, and Zheng Wen. Deep exploration via randomized value functions. arXiv preprint arXiv:1608.02731, 2016a.
  • Osband et al. (2016b) Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. In Proceedings of The 33rd International Conference on Machine Learning, pages 2377–2386, 2016b.
  • Pazis and Parr (2013) Jason Pazis and Ronald Parr. PAC optimal exploration in continuous space Markov decision processes. In AAAI. Citeseer, 2013.
  • Pazis and Parr (2016) Jason Pazis and Ronald Parr. Efficient PAC-optimal exploration in concurrent, continuous state mdps with delayed updates. In AAAI. Citeseer, 2016.
  • Russo et al. (2017) Daniel Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. A tutorial on Thompson sampling. arXiv preprint arXiv:1707.02038, 2017.
  • Silver et al. (2013) D. Silver, L. Newnham, D. Barker, S. Weller, and J. McFall. Concurrent reinforcement learning from customer interactions. In Proceedings of The 30th International Conference on Machine Learning, pages 924–932, 2013.
  • Strens (2000) Malcolm J. A. Strens. A Bayesian framework for reinforcement learning. In ICML, pages 943–950, 2000.
  • Sutton and Barto (2017) Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 2017.
  • Szepesvári (2010) Csaba Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
  • Tassa et al. (2018) Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.