Shared Learning: Enhancing Reinforcement in Q-Ensembles

Rakesh R Menon
College of Information and Computer Sciences
University of Massachusetts Amherst
rrmenon@cs.umass.edu &Balaraman Ravindran
Department of Computer Science and Engineering
Robert Bosch Centre for Data Science and AI
Indian Institute of Technology Madras
ravi@cse.iitm.ac.in
Abstract

Deep Reinforcement Learning has achieved remarkable successes in a variety of domains, from video games to continuous control, by maximizing the cumulative reward. However, most of these successes rely on algorithms that require large amounts of training data to reach human-level performance, which is not feasible if these systems are to be deployed on real-world tasks; this has spurred interest in data-efficient algorithms. To this end, we propose the Shared Learning framework, aimed at making Q-ensemble algorithms data-efficient. We draw on principles from transfer learning, which studies the benefits of information exchange across tasks in reinforcement learning, and adapt transfer to the learning of value function estimates in a novel manner. In this paper, we consider the special case of transfer between the value function estimates in the Q-ensemble architecture of BootstrappedDQN. We further demonstrate empirically, on a suite of Atari 2600 games, how our proposed framework can speed up learning in Q-ensembles with minimal computational overhead.

Introduction

Reinforcement Learning (RL) deals with learning how to act in an environment by maximizing the cumulative payoff (?). However, early approaches in RL did not scale well to environments with large state spaces. Recently, Deep Reinforcement Learning (DRL) has gained a great deal of interest because of its ability to map high-dimensional observations to actions using a neural network function approximator (????). With the development of many algorithms, researchers have been able to demonstrate the effectiveness of deep reinforcement learning on problems in complex domains. However, most of these architectures require a large amount of training data in order to get near human-level performance. This problem of data inefficiency has been well established previously in (?).

Prior attempts at improving data efficiency in reinforcement learning involved the use of an Experience Replay mechanism (?), which allowed the agent to replay trajectories from a memory buffer and learn more effectively from each sample trajectory. This idea was first introduced in deep Q-learning by (?). Later, (?) used Experience Replay, along with some other modifications, to create a sample-efficient actor-critic. Hindsight Experience Replay (?) adds samples to the replay memory that help the agent learn about both the desired and undesired states in the environment more efficiently. Another line of work involves auxiliary tasks that learn how to control the environment from high-dimensional visual observations (??). More recently, combining model-based and model-free methods has been shown to make learning data-efficient (?). We aim to develop an agent that can learn more efficiently from each sample in the Experience Replay.

The framework we propose builds upon ideas from the transfer in reinforcement learning literature (?). In particular, we focus on transfer through action advice (?) to perform online transfer. While online transfer has been done before in (?), our framework differs in that the action advice happens while learning from samples collected in the Experience Replay. A Q-ensemble is a suitable architecture for online transfer since we can take advantage of the independent value function estimates within the architecture. The BootstrappedDQN architecture (?) is one such Q-ensemble algorithm, which learns to explore complex environments more efficiently by planning over several timesteps. Experimentally, BootstrappedDQN was shown to perform better than Double DQN (?) on most Atari 2600 games. (?) proposed three algorithms, Ensemble Voting, UCB exploration and UCB+InfoGain exploration, inspired by concepts from Bayesian reinforcement learning and bandit algorithms. These algorithms were shown to perform better than BootstrappedDQN and Double DQN on many Atari 2600 games. In this paper, we present a new framework, Shared Learning, that shares knowledge between the value function estimates of BootstrappedDQN and Ensemble Voting, which allows for data-efficiency, and then show how our framework can be extended to any Q-ensemble algorithm.

The rest of the paper is structured as follows: we first look at related work in data-efficient reinforcement learning, Q-ensembles and transfer learning. We then touch upon some necessary background before moving on to the main section on Shared Learning, where we motivate and analyze how our framework helps using toy Markov Decision Process (MDP) chains. Finally, we show the efficacy of our proposed framework on Atari 2600 games in the Arcade Learning Environment (?).

Related Work

Data Efficiency

With the evolution of deep learning and RL, many algorithms were developed that could solve complex problems in domains like robotics and games. However, most of these algorithms suffer from some fundamental issues that were highlighted in (?), including data inefficiency, the brittle nature of learned policies and the inability to adapt to new tasks. Here, we focus on works that tackle the problem of data efficiency. The first work on deep Q-learning (?) introduced the idea of Experience Replay (?) to improve data-efficiency. By replaying the trajectories visited by the agent, the agent was able to decorrelate samples for training and also learn more robustly from each sample. (?) applied the idea of Experience Replay, along with some other modifications, to the on-policy actor-critic, making the whole architecture sample efficient. Recently, (?) proposed Hindsight Experience Replay, a method allowing the agent to learn as much about undesired outcomes as it would about desired ones. To do this, they take as input the observation and the goal state to be achieved (experiments were done on environments where the goal state was known) and train the neural network to act so as to reach the goal state. They further add samples to the Experience Replay that reward the agent upon reaching different states visited in the agent's trajectories. This allows for more information to be learned about the environment.

(?) introduced a method of learning multiple unsupervised auxiliary tasks on the visual data stream that allow the agent to learn to control and predict different aspects of the environment. The proposed agent, UNREAL, significantly outperformed previous baselines based on the on-line A3C (?) architecture. The algorithm also learned robust policies and was able to learn with much less data when compared to A3C. (?) predicts the actions that the agent should take, along with predictions of some game features, which leads to an increase in performance on VizDoom (?). (?) also reproduces the current state of the environment as an auxiliary task, along with the successor representation of the value function.

Model-based methods are known to be among the best approaches for data-efficient learning because they can perform planning. However, such algorithms hinge on having access to a model of the environment, which is not easy to obtain in the deep reinforcement learning setting. (?) achieved some success by combining model-based and model-free methods into a data-efficient learning algorithm: an imperfect model of the environment generates imagination-based trajectories, and the final policy and value function are provided by a combination of the model-based and model-free components.

Exploration Strategies

Exploration strategies in reinforcement learning have achieved near-optimal guarantees for many small domains, such as Markov Decision Process chains and puddle world, in the past (??). However, these near-optimal algorithms do not scale well to the large domains that deep reinforcement learning aims to solve. Most recently proposed exploration strategies have made use of pseudo-counts for state visitations (??), hashing (?), exploration bonuses (?) and intrinsic motivation (?). However, there was still a need for strategies that can perform deep exploration over multiple time-steps.

The Q-ensemble architecture was first introduced into DRL by (?) through the BootstrappedDQN architecture, which performs deep exploration along with some level of planning. The architecture derives most of its inspiration from Posterior Sampling for Reinforcement Learning (PSRL) (?). BootstrappedDQN maintains multiple Q-value estimates and was able to explore more efficiently than Double DQN (?) on a majority of the Atari games. Recently, (?) modified BootstrappedDQN to further improve exploration by adapting the UCB algorithm (?) to the Q-ensemble architecture and using the disagreement among the value function estimates to provide a reward-bonus signal. Additionally, the paper proposed an exploitation Q-ensemble algorithm called Ensemble Voting, in which the agent follows the policy given by the majority vote of the heads.

Transfer in DRL

Transfer for reinforcement learning has been studied to a great extent in the past (?). While several methods for performing transfer have been proposed, here we study action advice in reinforcement learning, in which a teacher advises the student on what action to take at each step. A2T (?) provides action advice from multiple trained experts to a new agent, capturing different skills from each teacher through positive policy transfer. Other methods (??) have involved knowledge transfer from one or multiple source tasks to a target task in order to speed up learning by learning from the transitions of the expert agents. These methods were also shown to learn more general representations of the environment on Atari 2600 games. However, these algorithms depend on an agent that has been trained completely on a task (the teacher) to perform knowledge transfer to a fresh agent (the student). (?) is one of the first papers to present online transfer between agents that are still learning how to act in the environment. The paper further shows that classical transfer is a special case of online transfer, and proves the convergence of Q-learning and SARSA using online transfer.

Background

RL Notations

A common approach to solving an RL problem is through value functions, which indicate the expected reward obtainable from each state. In this paper, we only concern ourselves with state-action value functions $Q(s, a)$, which give the return an agent expects to receive from a particular state $s$ upon taking an action $a$. One common approach for learning the value function is an off-policy TD algorithm called Q-learning (?). The optimal policy can be obtained by behaving greedily with respect to the learned state-action value function in each state. The update rule for Q-learning is as follows,

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right] \quad (1)$$

Here, $s_t$ is the state, $a_t$ is the action taken and $r_t$ is the reward obtained by taking that action at time $t$; $\alpha$ is the learning rate and $\gamma$ the discount factor. From here on, we will refer to $Q(s, a)$ as the value function.
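As a concrete illustration, the tabular update in Equation 1 can be sketched in a few lines. The state/action counts, learning rate and discount below are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative sizes and hyperparameters (not from the paper).
n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.99
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next, done=False):
    """One Q-learning step: move Q(s, a) toward the TD target."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Example transition: state 0, action 1, reward 1.0, next state 1.
q_update(0, 1, 1.0, 1)
```

Since `Q` starts at zero, this single step moves `Q[0, 1]` a fraction `alpha` of the way toward the reward.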

Deep Q-Networks (DQN)

In environments with large state spaces, it is not possible to learn values for every possible state-action pair. Generalizing from experience of a small subset of the state space to give useful approximations of the value function becomes a key issue (?). Neural networks, while attractive as potential value function approximators, were known to be unstable or even to diverge on reinforcement learning problems. Building on seminal work on representation learning with deep neural networks (?), researchers were able to create new learning algorithms that operate on high-dimensional visual observations. (?) successfully used deep neural networks to carry out Q-learning with the high-dimensional Atari 2600 (?) screen observations as input. Additionally, (?) overcomes the problem of instability in learning using two crucial ideas: replay memory and target networks. The parametrized value function $Q(s, a; \theta)$ is trained using the expected squared TD-error as the loss signal given to the neural network.

$$L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{B}} \left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta) \right)^2 \right] \quad (2)$$

Here, $\theta$ represents the weights of the online network and $\theta^-$ refers to the target network weights. The target network ensures that learning happens in a stable manner, while the replay memory lets the network learn from (approximately) independent and identically distributed (i.i.d.) samples, further stabilizing neural network training.

The max operator in Q-learning has been shown to produce an overestimation error in the value function (?). To overcome this overestimation error in DQNs, (?) introduced Double Deep Q-Networks (DDQN), a low-computational-overhead algorithm in which the greedy policy is evaluated using the online network while the value for the learning update is given by the target network. Thus the loss signal for DDQN becomes:

$$L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{B}} \left[ \left( r + \gamma Q\left(s', \operatorname*{argmax}_{a'} Q(s', a'; \theta); \theta^-\right) - Q(s, a; \theta) \right)^2 \right] \quad (3)$$
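To make the difference between the two targets concrete, the following sketch computes both for a batch of two transitions. The arrays are illustrative stand-ins for the online and target network outputs, not real network values.

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0])
q_online = np.array([[1.0, 3.0], [2.0, 0.5]])   # stand-in for Q(s', a'; theta)
q_target = np.array([[2.5, 2.0], [1.5, 1.0]])   # stand-in for Q(s', a'; theta^-)

# DQN target (Equation 2): max over the target network itself.
y_dqn = rewards + gamma * q_target.max(axis=1)

# Double DQN target (Equation 3): greedy action from the online
# network, value read from the target network.
greedy = q_online.argmax(axis=1)
y_ddqn = rewards + gamma * q_target[np.arange(2), greedy]
```

In the first transition the two targets disagree (the target network's own maximizer differs from the online network's greedy action), which is exactly the overestimation that the decoupling is meant to dampen.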

BootstrappedDQN

Figure 1: BootstrappedDQN architecture
(a) 40-state chain MDP
(b) 45-state chain MDP
(c) 50-state chain MDP
Figure 2: Comparison of the number of steps taken by the Shared Learning-Bootstrap, Bootstrap, Q-learning and Double Q-learning algorithms during each run (averaged over 50 runs). When an algorithm has converged, there is a zero-variance line corresponding to the fastest path to the goal state in an $n$-state MDP chain.

BootstrappedDQN (?) introduces a novel exploration strategy that performs deep exploration in environments with large state spaces. The idea mainly draws inspiration from two prior works, PSRL (?) and RLSVI (?). The Q-ensemble architecture maintains multiple parametrized value function estimates, or "heads". BootstrappedDQN depends on the independent initializations of the heads and on each head being trained with a different set of samples. However, it was shown that training each head on different samples is not as important a criterion as the independent initializations.

At the start of every episode, BootstrappedDQN samples a single value function estimate at random according to a uniform distribution. The agent then follows the greedy policy with respect to the selected estimate until the end of the episode. The authors propose that this is an adaptation of the Thompson sampling heuristic to RL that allows for temporally extended (or deep) exploration.
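A minimal sketch of this episode-level head sampling follows, with a hypothetical `q_values(head, state)` helper standing in for the real network heads (the dummy scoring inside it is purely illustrative):

```python
import random

K = 10  # number of heads (illustrative)

def q_values(head, state):
    # Hypothetical stand-in for Q_k(s, .); returns one value per action.
    return [(head * 7 + state + a) % 5 for a in range(4)]

def run_episode(env_states):
    head = random.randrange(K)           # one uniform head draw per episode
    actions = []
    for s in env_states:
        q = q_values(head, s)
        actions.append(q.index(max(q)))  # greedy w.r.t. the sampled head
    return head, actions
```

The key point mirrored here is that the head is drawn once at the episode start and kept fixed, giving temporally extended (deep) exploration rather than per-step dithering.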

BootstrappedDQN is implemented by adding multiple heads that branch out from the output of the convolutional layers, as shown in Figure 1. Suppose there are $K$ heads in this network. The outputs of the heads represent different independent estimates of the action-value function. Let $Q_k$ be the value estimate and $Q_k^-$ the target value estimate of the $k$-th head. The loss signal for the $k$-th head is given as follows,

$$L_k(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{B}} \left[ \left( r + \gamma Q_k\left(s', \operatorname*{argmax}_{a'} Q_k(s', a'; \theta); \theta^-\right) - Q_k(s, a; \theta) \right)^2 \right] \quad (4)$$

Each of the heads is updated this way and the gradients are aggregated and normalized at the convolutional layers.
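The per-head update can be sketched as follows. The arrays are illustrative stand-ins for the online and target head outputs, and averaging the squared TD errors mirrors the $1/K$ gradient normalization at the shared convolutional layers.

```python
import numpy as np

K, gamma, r = 3, 0.99, 1.0
q_sa = np.array([0.5, 0.7, 0.2])        # Q_k(s, a; theta), one per head
q_next_online = np.array([[1.0, 2.0],   # Q_k(s', .; theta)
                          [0.5, 0.1],
                          [1.5, 1.5]])
q_next_target = np.array([[0.8, 1.6],   # Q_k(s', .; theta^-)
                          [0.4, 0.2],
                          [1.2, 1.0]])

greedy = q_next_online.argmax(axis=1)               # per-head greedy a'
targets = r + gamma * q_next_target[np.arange(K), greedy]
td_errors = targets - q_sa
# The shared layers receive head gradients scaled by 1/K; averaging
# the squared TD errors here plays the same role.
loss = np.mean(td_errors ** 2)
```

Note that each head bootstraps only from its own estimate, which is what Shared Learning later relaxes.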

Shared Learning

A motivating example

Consider the task of solving an MDP chain environment as in Figure 3, with state space $\{s_1, \dots, s_n\}$ and action space {Jump to $s_2$, Right, Left, No-op}. In this task, the agent begins at state $s_2$ and has to reach state $s_n$ to get a reward of +10; if it goes to state $s_1$, it gets a reward of -10. The episode terminates when either state $s_1$ or $s_n$ is reached.

This is an environment that good exploration algorithms can solve very quickly, whereas algorithms like Q-learning and Double Q-learning (?) find it very hard. As shown in Figure 2, a tabular adaptation of BootstrappedDQN, which we call the Bootstrap algorithm here, is able to solve longer chains than Q-learning (?) and Double Q-learning. However, it takes a long time to converge to the goal state because every value function estimate has to learn from its own independent experience. We therefore need a method to transfer knowledge among the value function estimates. This is where Shared Learning steps in: our framework for Q-ensemble algorithms allows the ensemble estimates to learn from each other through action advice in the learning update. The results in Figure 2 indicate that our framework converges to the goal-state solution faster than Bootstrap.

Figure 3: MDP chain with $n$ states

Knowledge Sharing in Learning Update

The motivation for Shared Learning is to allow the different value function estimates to learn from each other so that the complete agent can solve a task much faster and more efficiently. To this end, knowledge must be transferred from an apparent expert to the other estimates. The transfer learning literature suggests that to ensure the other estimates can replicate the rewarding trajectories of the expert, we should perform action advice. Instead, we introduce a novel method of influencing each estimate to follow the direction of a chosen estimate through a minor modification to the learning update of BootstrappedDQN, as given in Equation 5.

$$L_k(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{B}} \left[ \left( r + \gamma Q_k\left(s', \operatorname*{argmax}_{a'} Q_j(s', a'; \theta); \theta^-\right) - Q_k(s, a; \theta) \right)^2 \right] \quad (5)$$

where $j \in \{1, \dots, K\}$ indexes the head providing the greedy action. Since our requirement is to follow directions of maximum reward, we can choose the estimate that gives the highest (best) expected reward to perform the knowledge sharing. So, we modify the above equation, replacing $j$ with the best head $B$, to obtain the following update rule,

$$L_k(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{B}} \left[ \left( r + \gamma Q_k\left(s', \operatorname*{argmax}_{a'} Q_B(s', a'; \theta); \theta^-\right) - Q_k(s, a; \theta) \right)^2 \right] \quad (6)$$

While (?) showed how action advice during learning can help the overall training process, we have shown, in a novel manner, how to use action advice in the learning update to achieve faster learning.
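A sketch of the Equation 6 target follows, assuming a best head $B$ has already been selected; the arrays are illustrative stand-ins for the head outputs.

```python
import numpy as np

K, gamma, r = 3, 0.99, 0.5
best_head = 2                                # B, chosen periodically
q_next_online = np.array([[1.0, 2.0],        # Q_k(s', .; theta)
                          [0.5, 0.1],
                          [3.0, 1.5]])
q_next_target = np.array([[0.8, 1.6],        # Q_k(s', .; theta^-)
                          [0.4, 0.2],
                          [2.5, 1.0]])

a_star = q_next_online[best_head].argmax()   # argmax_a' Q_B(s', a'; theta)
# Every head evaluates B's greedy action with its OWN target network:
targets = r + gamma * q_next_target[:, a_star]
```

Unlike the per-head update of Equation 4, all heads now bootstrap along the action the best head prefers, which is the "action advice in the learning update" the text describes.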

Robust Target Estimates

While proposing the use of Double Q-learning in DQNs, (?) introduced a low-computational-overhead modification that was shown to reduce the overestimation errors in DQNs by using the online network to provide the greedy policy while using the target network for the value estimate. But the target network, being a previous iteration of the online network, is coupled with the online network. This reduces the effectiveness of the Double Q-learning update, as reported in (?).

However, through Shared Learning we can ensure that, most of the time, the estimate providing the greedy action ($Q_B$) is not the same as the estimate providing the target value ($Q_k^-$). Hence, the effect of coupling is further reduced by our update and learning becomes more robust. This is a second advantage of our framework, and we believe it allows for better learning from each sample in the replay memory, leading to a better understanding of the world. Additionally, we note that the framework is expected to get better at reducing overestimations as the number of heads increases, since the chance that a given head provides the greedy action for its own target estimate decreases, and with it the coupling effect.

We have summarised the Shared Learning framework for any deep Q-ensemble algorithm X in Algorithm 1.

Through the combined ability of transferring knowledge about rewarding states and robust learning updates, Shared Learning is able to replay rewarding sequences more often and also gain a better understanding of rewarding states. Empirically, this can be observed in the results on the MDP chain environment in Table 1 and Figure 4, where the Shared Learning algorithm makes more goal-state visitations than the Bootstrap algorithm. Additionally, we can see the effect of the choice of "best" head, as Shared Learning reaches the goal state more often than a Random Head algorithm, in which a random head is chosen to give the target estimate. In some sense, Shared Learning can be viewed as developing a biased implicit curriculum towards rewarding states: simpler goals can be learnt much faster through our framework, allowing the agent to focus on more complex goals in the environment.

No. of states in MDP chain | No. of episodes per run | Bootstrap | Random Head | Shared Learning
40 | 150 | 56.62 | 58.28 | 60.36
50 | 300 | 74.22 | 89.96 | 93.18
60 | 700 | 146.48 | 166.88 | 184.46
70 | 1500 | 99.96 | 151.6 | 164.78
Table 1: Comparison of the number of visitations of the goal state made by each algorithm on an $n$-state MDP chain. The results have been averaged over 50 runs.
Figure 4: Comparison of the number of times the goal state is reached by Shared Learning, Bootstrap and Random Head algorithm on the MDP chain environment.
(a) Seaquest
(b) Enduro
(c) Kangaroo
(d) Pong
(e) Freeway
(f) Riverraid
Figure 5: Comparison of Shared Learning-Bootstrap vs BootstrappedDQN algorithm
(a) Seaquest
(b) Enduro
(c) Kangaroo
(d) Pong
(e) Freeway
(f) Riverraid
Figure 6: Comparison of Shared Learning-Ensemble Voting vs Ensemble Voting algorithm
1: Input: value function network with $K$ outputs $Q_1, \dots, Q_K$
2: Let B be a replay buffer storing experience for training, and select_best_int the interval at which the best head is selected.
3: numSteps ← 0
4: best_head ∼ Uniform{1, …, K}
5: for each episode do
6:     Obtain initial state $s_0$ from the environment
7:     for step $t$ until end of episode do
8:         Pick an action $a_t$ according to Q-ensemble algorithm X
9:         Take action $a_t$ and receive state $s_{t+1}$ and reward $r_t$
10:         Store transition ($s_t$, $a_t$, $r_t$, $s_{t+1}$) in B
11:         Sample a minibatch from B
12:         Train the network with the loss function given in Equation 6
13:         numSteps ← numSteps + 1
14:         if numSteps mod select_best_int == 0 then
15:             best_head ← argmax_k max_a $Q_k(s_{t+1}, a)$
16:         end if
17:     end for
18: end for
Algorithm 1 Shared Learning - X
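The periodic best-head selection in Algorithm 1 can be sketched as below. `estimate_head_value` is a hypothetical scoring function (for instance, an average greedy value over a minibatch); the exact ranking criterion and the environment interaction are elided.

```python
import random

K = 10                 # number of heads (illustrative)
select_best_int = 10_000

def estimate_head_value(k, minibatch):
    # Hypothetical per-head score; dummy arithmetic for illustration.
    return sum((k * 3 + s) % 7 for s in minibatch)

def training_steps(total_steps, minibatch):
    best_head = random.randrange(K)   # line 4: initial uniform draw
    history = []
    for step in range(1, total_steps + 1):
        # ... act, store the transition, train with the Equation 6 loss ...
        if step % select_best_int == 0:          # lines 14-16
            best_head = max(range(K),
                            key=lambda k: estimate_head_value(k, minibatch))
        history.append(best_head)
    return history
```

Keeping `best_head` fixed between re-selections is what keeps the overhead low: the only extra work is one argmax over heads every `select_best_int` steps.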

Experiments and Discussion

In this section, we try to answer the following questions:

  • Can Shared Learning make deep reinforcement learning algorithms data-efficient?

  • Can the framework be extended to exploitation Q-ensemble algorithms?

To this end, we test the efficacy of the Shared Learning framework on an exploration Q-ensemble algorithm and an exploitation Q-ensemble algorithm, namely BootstrappedDQN and Ensemble Voting (?) respectively. The hyperparameters were tuned on 6 Atari games (Seaquest, Riverraid, Kangaroo, Pong, Freeway and Enduro) and kept constant across both experiments.

In our experiments, the most important parameter is the frequency with which we choose the "best" head among the value function estimates. Here, we have chosen the frequency to be 10,000 steps, i.e., the head that performs the knowledge sharing is chosen once every 10,000 steps. The DQN code has been taken from OpenAI baselines (https://github.com/openai/baselines), and our algorithms are evaluated on the Atari games in OpenAI Gym (?), which is simulated by the Arcade Learning Environment (?), training for 40 million frames on each game. For the graphs, we have plotted the learning curves of the algorithms against the number of episodes played over 40 million frames. In doing so, we can see that our proposed framework plays fewer episodes within the same number of timesteps and still reaches a higher score, which shows that the framework improves data efficiency.

Shared Learning-BootstrappedDQN

The training curves for Shared Learning-BootstrappedDQN vs BootstrappedDQN are shown in Figure 5. We have also summarised the highest scores obtained during training on the different games in Table 2. We refer to BootstrappedDQN as BDQN and the application of Shared Learning to BDQN as SLBDQN. From the graph and the table, we can see that Shared Learning does as well as or better than BootstrappedDQN on 5 out of 6 games in terms of score, and it achieves these scores with less data. We claim this is because of the framework's ability to replay rewarding trajectories much more often, owing to the knowledge sharing between the value function estimates, making the framework data-efficient for the exploration algorithm. Further, the curves for Seaquest, Kangaroo and Enduro still have an increasing slope, which indicates that better scores could be obtained with more training.

Game | BDQN-MeanMax | SLBDQN-MeanMax | BDQN-EpMax | SLBDQN-EpMax
Pong | 20.53 | 20.74 | 21 | 21
Kangaroo | 3476 | 9120 | 10600 | 15000
Riverraid | 5307.9 | 10242.8 | 9280 | 15370
Enduro | 501.48 | 911.47 | 1007 | 1869
Freeway | 32.81 | 32.39 | 34 | 33
Seaquest | 4989.4 | 9086.4 | 14880 | 23970
Table 2: Experimental results for the Shared Learning-Bootstrap and BootstrappedDQN experiments. Here, MeanMax is the maximum of the mean curves in Figure 5 and EpMax is the maximum score achieved in a single episode by the algorithm.

Shared Learning-Ensemble Voting

The training curves for Shared Learning-Ensemble Voting vs Ensemble Voting are shown in Figure 6. We have also summarised the highest scores obtained during training on the different games in Table 3. We refer to Ensemble Voting as EV and the application of Shared Learning to EV as SLEV. From the graph and the table, we can see that Shared Learning does as well as or better than Ensemble Voting on 5 out of 6 games in terms of score, and it achieves these scores with less data, for the same reasons as in the previous subsection. However, we believe that the data efficiency and the score improvements are not as pronounced as with BootstrappedDQN because we are applying a biased curriculum to an exploitation algorithm.

Game | EV-MeanMax | SLEV-MeanMax | EV-EpMax | SLEV-EpMax
Pong | 20.6 | 20.87 | 21 | 21
Kangaroo | 9694 | 11215 | 14700 | 15200
Riverraid | 10443.1 | 12118.2 | 15770 | 16530
Enduro | 1098.5 | 1206.77 | 1882 | 1940
Freeway | 33.48 | 33.16 | 34 | 34
Seaquest | 6456.6 | 13176 | 20360 | 35180
Table 3: Experimental results for the Shared Learning-Ensemble Voting and Ensemble Voting experiments. Here, MeanMax is the maximum of the mean curves in Figure 6 and EpMax is the maximum score achieved in a single episode by the algorithm.

Conclusion & Future Work

We have proposed Shared Learning, a low-computational-overhead framework that makes Q-ensemble algorithms data-efficient. We have also shown that the framework is applicable to both exploration and exploitation algorithms, namely BootstrappedDQN and Ensemble Voting respectively. Our action-advice-in-the-learning-update framework shows significant improvements in game score on some of the Atari games, indicating faster learning through robust estimates.

Figure 7: On Freeway, using a 20,000-step interval for choosing the "best head" leads to faster initial learning than the 10,000-step interval. Analysis done for SLBDQN vs BDQN.

In this paper, we have kept the interval for choosing the "best" head to share knowledge among the other heads fixed. However, this hyperparameter could be tuned or made dynamic. For example, Figure 7 shows that changing the interval from 10,000 steps to 20,000 steps on Freeway makes the framework learn faster initially. We could also tune the number of heads sharing knowledge in order to improve the robustness of the learning rule. Since we have shown that our framework performs better learning updates than DoubleDQN, we could also test the effect of our learning update on masked BootstrappedDQN. We leave these modifications of Shared Learning to future work.

Acknowledgements

We would like to thank Manu Srinath Halvagal, EPFL, for some useful discussions, as well as Joe Eappen, IIT Madras, and Yash Chandak, UMass Amherst, for their valuable feedback on the work.

References

  • [Andrychowicz et al. 2017] Andrychowicz, M.; Wolski, F.; Ray, A.; Schneider, J.; Fong, R.; Welinder, P.; McGrew, B.; Tobin, J.; Abbeel, P.; and Zaremba, W. 2017. Hindsight Experience Replay. ArXiv e-prints.
  • [Auer, Cesa-Bianchi, and Fischer 2002] Auer, P.; Cesa-Bianchi, N.; and Fischer, P. 2002. Finite-time analysis of the multiarmed bandit problem. Machine learning 47(2-3):235–256.
  • [Bellemare et al. 2013] Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR) 47:253–279.
  • [Bellemare et al. 2016] Bellemare, M.; Srinivasan, S.; Ostrovski, G.; Schaul, T.; Saxton, D.; and Munos, R. 2016. Unifying Count-based Exploration and Intrinsic Motivation. In Advances in Neural Information Processing Systems, 1471–1479.
  • [Brafman and Tennenholtz 2002] Brafman, R. I., and Tennenholtz, M. 2002. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research 3(Oct):213–231.
  • [Brockman et al. 2016] Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. ArXiv e-prints.
  • [Chen et al. 2017] Chen, R. Y.; Sidor, S.; Abbeel, P.; and Schulman, J. 2017. UCB and InfoGain Exploration via Q-Ensembles. ArXiv e-prints.
  • [Hasselt 2010] Hasselt, H. V. 2010. Double Q-learning. In Advances in Neural Information Processing Systems, 2613–2621.
  • [Houthooft et al. 2016] Houthooft, R.; Chen, X.; Duan, Y.; Schulman, J.; De Turck, F.; and Abbeel, P. 2016. Vime: Variational Information Maximizing Exploration. In Advances in Neural Information Processing Systems, 1109–1117.
  • [Jaderberg et al. 2017] Jaderberg, M.; Mnih, V.; Czarnecki, W. M.; Schaul, T.; Leibo, J. Z.; Silver, D.; and Kavukcuoglu, K. 2017. Reinforcement learning with unsupervised auxiliary tasks.
  • [Kearns and Koller 1999] Kearns, M., and Koller, D. 1999. Efficient Reinforcement Learning in Factored MDPs. In IJCAI, volume 16, 740–747.
  • [Kempka et al. 2016] Kempka, M.; Wydmuch, M.; Runc, G.; Toczek, J.; and Jaśkowski, W. 2016. ViZDoom: A Doom-based AI Research Platform for Visual Reinforcement Learning. ArXiv e-prints.
  • [Kulkarni et al. 2016] Kulkarni, T. D.; Saeedi, A.; Gautam, S.; and Gershman, S. J. 2016. Deep Successor Reinforcement Learning. ArXiv e-prints.
  • [Lake et al. 2016] Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B.; and Gershman, S. J. 2016. Building machines that learn and think like people. Behavioral and Brain Sciences 1–101.
  • [Lample and Singh Chaplot 2016] Lample, G., and Singh Chaplot, D. 2016. Playing FPS Games with Deep Reinforcement Learning. ArXiv e-prints.
  • [LeCun, Bengio, and Hinton 2015] LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444.
  • [Lin 1992] Lin, L.-H. 1992. Self-improving reactive agents based on reinforcement learning, planning and teaching.
  • [Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level Control through Deep Reinforcement Learning. Nature 518(7540):529–533.
  • [Mnih et al. 2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T. P.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous Methods for Deep Reinforcement Learning. In International Conference on Machine Learning.
  • [Osband et al. 2016] Osband, I.; Blundell, C.; Pritzel, A.; and Van Roy, B. 2016. Deep Exploration via Bootstrapped DQN. In Advances In Neural Information Processing Systems, 4026–4034.
  • [Osband, Roy, and Wen 2016] Osband, I.; Roy, B. V.; and Wen, Z. 2016. Generalization and Exploration via Randomized Value Functions. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, 2377–2386.
  • [Osband, Russo, and Van Roy 2013] Osband, I.; Russo, D.; and Van Roy, B. 2013. (more) Efficient Reinforcement Learning via Posterior Sampling. In Advances in Neural Information Processing Systems, 3003–3011.
  • [Ostrovski et al. 2017] Ostrovski, G.; Bellemare, M. G.; van den Oord, A.; and Munos, R. 2017. Count-Based Exploration with Neural Density Models. ArXiv e-prints.
  • [Parisotto, Ba, and Salakhutdinov 2016] Parisotto, E.; Ba, J. L.; and Salakhutdinov, R. 2016. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In International Conference on Learning Representations.
  • [Rajendran et al. 2017] Rajendran, J.; Lakshminarayanan, A.; Khapra, M. M.; Prasanna, P.; and Ravindran, B. 2017. Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from Multiple Sources in the Same Domain. In International Conference on Learning Representations.
  • [Rusu et al. 2016] Rusu, A. A.; Colmenarejo, S. G.; Gulcehre, C.; Desjardins, G.; Kirkpatrick, J.; Pascanu, R.; Mnih, V.; Kavukcuoglu, K.; and Hadsell, R. 2016. Policy Distillation. In International Conference on Learning Representations.
  • [Schaul et al. 2015] Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2015. Prioritized Experience Replay. arXiv preprint arXiv:1511.05952.
  • [Stadie, Levine, and Abbeel 2015] Stadie, B. C.; Levine, S.; and Abbeel, P. 2015. Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models. CoRR abs/1507.00814.
  • [Sutton and Barto 1998] Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction, volume 1. MIT press Cambridge.
  • [Tang et al. 2016] Tang, H.; Houthooft, R.; Foote, D.; Stooke, A.; Chen, X.; Duan, Y.; Schulman, J.; Turck, F. D.; and Abbeel, P. 2016. #Exploration: A Study of Count-based Exploration for Deep Reinforcement Learning. CoRR abs/1611.04717.
  • [Taylor and Stone 2009] Taylor, M. E., and Stone, P. 2009. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research 10(Jul):1633–1685.
  • [Taylor et al. 2014] Taylor, M. E.; Carboni, N.; Fachantidis, A.; Vlahavas, I.; and Torrey, L. 2014. Reinforcement learning agents providing advice in complex video games. Connection Science 26(1):45–63.
  • [Van Hasselt, Guez, and Silver 2016] Van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep Reinforcement Learning with Double Q-Learning. In AAAI, 2094–2100.
  • [Wang et al. 2016] Wang, Z.; Schaul, T.; Hessel, M.; Van Hasselt, H.; Lanctot, M.; and De Freitas, N. 2016. Dueling Network Architectures for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning, 1995–2003. JMLR.org.
  • [Wang et al. 2017] Wang, Z.; Bapst, V.; Heess, N.; Mnih, V.; Munos, R.; Kavukcuoglu, K.; and de Freitas, N. 2017. Sample Efficient Actor-Critic with Experience Replay. In International Conference on Learning Representations.
  • [Watkins and Dayan 1992] Watkins, C. J., and Dayan, P. 1992. Q-learning. Machine learning 8(3-4):279–292.
  • [Weber et al. 2017] Weber, T.; Racanière, S.; Reichert, D. P.; Buesing, L.; Guez, A.; Jimenez Rezende, D.; Puigdomènech Badia, A.; Vinyals, O.; Heess, N.; Li, Y.; Pascanu, R.; Battaglia, P.; Silver, D.; and Wierstra, D. 2017. Imagination-Augmented Agents for Deep Reinforcement Learning. ArXiv e-prints.
  • [Zhan and Taylor 2015] Zhan, Y., and Taylor, M. E. 2015. Online transfer learning in reinforcement learning domains. arXiv preprint arXiv:1507.00436.