Exploration by Distributional Reinforcement Learning


Yunhao Tang, Shipra Agrawal
Columbia University IEOR
yt2541@columbia.edu, sa3305@columbia.edu
Abstract

We propose a framework based on distributional reinforcement learning and recent attempts to combine Bayesian parameter updates with deep reinforcement learning. We show that our proposed framework conceptually unifies multiple previous methods in exploration. We also derive a practical algorithm that achieves efficient exploration on challenging control tasks.


1 Introduction

Deep reinforcement learning (RL) has enjoyed numerous recent successes in various domains such as video games and robotics control [???]. Deep RL algorithms typically apply naive exploration strategies such as $\epsilon$-greedy [??]. However, such myopic strategies cannot lead to systematic exploration in hard environments [?].

We provide an exploration algorithm based on distributional RL [?] and recent attempts to combine Bayesian parameter updates with deep reinforcement learning. We show that the proposed algorithm provides a conceptual unification of multiple previous methods on exploration in the deep reinforcement learning setting. We also show that the algorithm achieves efficient exploration in challenging environments.

2 Background

2.1 Markov Decision Process and Value Based Reinforcement Learning

In a Markov Decision Process (MDP), at time step $t$, an agent is in state $s_t$, takes action $a_t$, receives reward $r_t$ and gets transitioned to the next state $s_{t+1} \sim p(\cdot|s_t, a_t)$. At time $t = 0$ the agent's state follows an initial distribution $s_0 \sim \rho_0$. A policy $\pi$ is a mapping from a state to a distribution over actions. The objective is to find a policy $\pi$ to maximize the discounted cumulative reward

$$J(\pi) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big], \tag{1}$$

where $\gamma \in (0,1)$ is a discount factor. In state $s$, the action-value function $Q^{\pi}(s,a)$ is defined as the expected cumulative reward that could be received by first taking action $a$ and following policy $\pi$ thereafter

$$Q^{\pi}(s,a) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s, a_0 = a\Big].$$

From the above definition, it can be shown that $Q^{\pi}(s,a)$ satisfies the Bellman equation

$$Q^{\pi}(s,a) = \mathbb{E}\big[r_0 + \gamma Q^{\pi}(s_1, a_1) \,\big|\, s_0 = s, a_0 = a\big].$$

Let $\pi^{\ast}$ be the optimal policy and $Q^{\ast}(s,a)$ its action-value function. $Q^{\ast}(s,a)$ satisfies the following Bellman equation

$$Q^{\ast}(s,a) = \mathbb{E}\big[r_0 + \gamma \max_{a^{\prime}} Q^{\ast}(s_1, a^{\prime}) \,\big|\, s_0 = s, a_0 = a\big].$$

The above equations illustrate the temporal consistency of the action-value functions that allows for the design of learning algorithms. Define the Bellman operator $\mathcal{T}$ as

$$(\mathcal{T}Q)(s,a) = \mathbb{E}\big[r_0 + \gamma \max_{a^{\prime}} Q(s_1, a^{\prime}) \,\big|\, s_0 = s, a_0 = a\big].$$

When $\gamma < 1$, starting from any $Q_0$, iteratively applying the operator $Q_{k+1} = \mathcal{T}Q_k$ leads to convergence $Q_k \rightarrow Q^{\ast}$ as $k \rightarrow \infty$.

In high dimensional cases, it is critical to use function approximation as a compact representation of action values. Let $Q_\theta(s,a)$ be such a function with parameter $\theta$ that approximates a table of action values with entries $Q(s,a)$. The aim is to find $\theta$ such that $Q_\theta \approx Q^{\ast}$. Let $\Pi$ be the operator that projects an arbitrary vector of action values onto the subspace spanned by the function class $\{Q_\theta\}$. Since the update of action values can now only take place in this subspace, the iterate is updated as $Q_{\theta_{k+1}} = \Pi \mathcal{T} Q_{\theta_k}$. In cases where $Q_\theta$ is linear in $\theta$, the above procedure can be shown to converge [?]. However, in cases where $Q_\theta$ is nonlinear (e.g. a neural network), the function approximation becomes more expressive at the cost of no convergence guarantee. Many deep RL algorithms are designed following the above formulation, such as Deep Q Network (DQN) [?].
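
For illustration, the following is a minimal NumPy sketch (not the implementation used in this paper) of the Bellman optimality operator on a small randomly generated tabular MDP, together with one regression-style update toward $\mathcal{T}Q$; the toy MDP, step size and all names are illustrative.

```python
# Minimal sketch: the Bellman optimality operator on a toy tabular MDP, plus one
# regression-style update toward T Q. The random MDP and step size are illustrative.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

def bellman_optimality_operator(Q):
    """(T Q)(s, a) = R(s, a) + gamma * E_{s'}[max_a' Q(s', a')]."""
    return R + gamma * P @ Q.max(axis=1)

# Iterating T from any initialization converges to Q* when gamma < 1.
Q = np.zeros((n_states, n_actions))
for _ in range(500):
    Q = bellman_optimality_operator(Q)

# With function approximation, one instead regresses the parametric Q_theta
# toward T Q_theta (the projected update); here theta is simply a table.
theta = rng.normal(size=(n_states, n_actions)) * 0.01
target = bellman_optimality_operator(theta)
theta += 0.1 * (target - theta)   # one regression step toward the target
```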

2.2 Distributional Reinforcement Learning

Following [?], instead of considering the action value $Q^{\pi}(s,a)$ under policy $\pi$, which is itself an expectation, consider the random return at $(s,a)$ obtained by following policy $\pi$, $Z^{\pi}(s,a) = \sum_{t=0}^{\infty} \gamma^t r_t$ with $s_0 = s, a_0 = a$. It follows that $Q^{\pi}(s,a) = \mathbb{E}[Z^{\pi}(s,a)]$. Let $p^{\pi}_{s,a}$ be the distribution of $Z^{\pi}(s,a)$. The Bellman equation for the random return is similar to that of the action-value functions

$$Z^{\pi}(s,a) \overset{D}{=} r_0 + \gamma Z^{\pi}(s_1, a_1),$$

where both sides are distributions and $\overset{D}{=}$ denotes equality in distribution.¹ Define the distributional Bellman operator $\mathcal{T}^{\pi}_d$ under policy $\pi$ as

$$(\mathcal{T}^{\pi}_d Z)(s,a) \overset{D}{=} r_0 + \gamma Z(s_1, a_1), \quad a_1 \sim \pi(\cdot|s_1).$$

Notice that $\mathcal{T}^{\pi}_d$ operates on distributions. Define the optimality variant $\mathcal{T}_d$ as follows

$$(\mathcal{T}_d Z)(s,a) \overset{D}{=} r_0 + \gamma Z(s_1, a^{\ast}), \quad a^{\ast} = \arg\max_{a^{\prime}} \mathbb{E}\big[Z(s_1, a^{\prime})\big].$$

When $\gamma < 1$, starting from any distribution $p^{(0)}$, applying the operator as $p^{(k+1)} = \mathcal{T}_d\, p^{(k)}$ leads to convergence in expectation, $\mathbb{E}[Z^{(k)}(s,a)] \rightarrow Q^{\ast}(s,a)$. However, the distribution itself may not weakly converge.

¹ In later notation, we replace $\overset{D}{=}$ by $=$ for simplicity.

To design a practical algorithm, one must use a parametric family of distributions $p_\theta(\cdot|s,a)$ with parameter $\theta$ to approximate $p_{s,a}$. Let $D(p, q)$ be a discrepancy measure between distributions $p$ and $q$. Define the projection operator $\Pi_D$ as follows

$$\Pi_D\, p = \arg\min_{p_\theta} D(p, p_\theta).$$

In other words, $\Pi_D$ projects a distribution $p$ into another distribution in the parametric family with smallest discrepancy from $p$. Hence the distribution is updated as $p^{(k+1)} = \Pi_D \mathcal{T}_d\, p^{(k)}$. In practice, the operator is applied to different entries $(s,a)$ asynchronously. For a given pair $(s,a)$ with next state $s^{\prime}$, one first selects a greedy action for the next state

$$a^{\ast} = \arg\max_{a^{\prime}} \mathbb{E}_{x \sim p_\theta(\cdot|s^{\prime}, a^{\prime})}[x],$$

then updates the distribution $p_\theta(\cdot|s,a)$ to match the target distribution $\mathcal{T}_d\, p_\theta(\cdot|s,a)$ (the distribution of $r + \gamma x$, $x \sim p_\theta(\cdot|s^{\prime}, a^{\ast})$) by minimizing the discrepancy

$$\min_\theta\; D\big(\mathcal{T}_d\, p_\theta(\cdot|s,a),\; p_\theta(\cdot|s,a)\big). \tag{2}$$

When only samples $\{x_i\}_{i=1}^{N}$ of the target distribution are available, let the empirical distribution be $\hat{p}_{s,a} = \frac{1}{N}\sum_{i=1}^{N}\delta_{x_i}$,² then (2) reduces to minimizing $D\big(\hat{p}_{s,a},\, p_\theta(\cdot|s,a)\big)$.

² $\delta_{x}$ is the Dirac distribution that assigns a point mass of probability $1$ at $x$.
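
As a small illustration of the sample-based projection step (2), the sketch below forms an empirical target distribution from sampled returns $r + \gamma x$ and fits a parametric model by maximum likelihood (which minimizes the KL divergence from the empirical distribution); the Gaussian family here is one illustrative choice, not the only one.

```python
# Sketch of the sample-based projection step (2): build an empirical target
# distribution of r + gamma * x and fit a parametric model to it by maximum
# likelihood (which minimizes the KL from the empirical distribution).
import numpy as np

gamma = 0.99
rng = np.random.default_rng(1)

def sample_next_returns(mu_next, sigma_next, n=32):
    """Draw x ~ p_theta(.|s', a*) from the current model at the greedy next action."""
    return rng.normal(mu_next, sigma_next, size=n)

def project_gaussian(r, next_samples):
    """Fit N(mu, sigma^2) to the empirical distribution of r + gamma * x."""
    target_samples = r + gamma * next_samples
    return target_samples.mean(), target_samples.std()   # Gaussian maximum-likelihood fit

mu_sa, sigma_sa = project_gaussian(r=1.0, next_samples=sample_next_returns(5.0, 2.0))
```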

3 Related Work

In reinforcement learning (RL), naive exploration strategies such as $\epsilon$-greedy [??] do not explore well because local perturbations of actions break the consistency between consecutive steps [?]. A number of prior works apply randomization to the parameter space [??] to preserve consistency in exploration, but their formulations are built on heuristics. Posterior sampling is a principled exploration strategy in the bandit setting [??], yet its extension to RL [?] is hard to scale to large problems. More recent works have formulated the exploration strategy as sampling randomized value functions and interpreted the algorithm as approximate posterior sampling [??]. Instead of modeling value functions, our formulation is built on modeling return distributions, which reduces to exact posterior sampling in the bandit setting.

Following similar ideas of randomized value functions, multiple recent works have combined approximate Bayesian inference [??] with Q-learning and justified the efficiency of exploration by relating it to posterior sampling [????]. Though their formulations are based on randomized value functions, we offer an alternate interpretation by modeling return distributions and provide a conceptual framework that unifies these previous methods (Section 5). We also provide a potential approach that extends the current framework to policy-based methods as in [?].

Modeling return distributions dates back to the early work of [???], where learning a return distribution instead of only its expectation presents a more statistically challenging task but provides more information during control. More recently, [?] applies a histogram to learn the return distribution and displays large performance gains over DQN [?]. Based on [?], we provide a more general distributional learning paradigm that combines return distribution learning and exploration based on approximate posterior sampling.

4 Exploration by Distributional Reinforcement Learning

4.1 Formulation

Recall that $p_{s,a}$ is the return distribution for the state-action pair $(s,a)$. In practice, we approximate this distribution by a parametric distribution $p_\theta(\cdot|s,a)$ with parameter $\theta$. Following [?], we take the discrepancy $D$ to be the KL divergence. Recall that $\hat{p}_{s,a} = \frac{1}{N}\sum_{i=1}^{N}\delta_{x_i}$ is the empirical distribution of samples $\{x_i\}_{i=1}^{N}$, hence the KL divergence reduces to

$$\mathbb{KL}\big[\hat{p}_{s,a} \,\|\, p_\theta(\cdot|s,a)\big] = -\frac{1}{N}\sum_{i=1}^{N} \log p_\theta(x_i|s,a) + \text{const}, \tag{3}$$

where we have dropped a constant in the last equality. Let $\theta$ follow a given distribution $q_\phi(\theta)$ with parameter $\phi$. We propose to minimize the following objective

$$\min_{\phi}\; \mathbb{E}_{\theta \sim q_\phi(\theta)}\Big[-\sum_{i=1}^{N} \log p_\theta(x_i|s,a)\Big] - \mathbb{H}\big[q_\phi(\theta)\big], \tag{4}$$

where $\mathbb{H}[q_\phi(\theta)]$ is the entropy of $q_\phi(\theta)$. Note that (3) corresponds to the projection step defined in (2), and the first term of (4) takes an expectation of the projection discrepancy (up to a constant scaling) over the distribution $q_\phi(\theta)$. The intuition behind (4) is that the first term encourages low expected discrepancy (which is equivalent to Bellman error) so as to learn optimal policies, while the second term serves as an exploration bonus that encourages a dispersed distribution over $\theta$ for better exploration during learning.

We now draw the connection between (4) and approximate Bayesian inference. First assign an improper uniform prior on $\theta$, i.e. $p(\theta) \propto 1$. The posterior is defined by Bayes rule given the data $\mathcal{D} = \{x_i\}_{i=1}^{N}$ as $p(\theta|\mathcal{D}) \propto p(\mathcal{D}|\theta)\, p(\theta)$, where $p(\mathcal{D}|\theta) = \prod_{i=1}^{N} p_\theta(x_i|s,a)$.³ Since by definition $\log p(\mathcal{D}|\theta) = \sum_{i=1}^{N} \log p_\theta(x_i|s,a)$, (4) is equivalent to

$$\min_{\phi}\; \mathbb{KL}\big[q_\phi(\theta) \,\|\, p(\theta|\mathcal{D})\big]. \tag{5}$$

Hence to minimize the objective (4) is to search for a parametric distribution $q_\phi(\theta)$ to approximate the posterior $p(\theta|\mathcal{D})$. From (5) we can see that the posterior is the exact minimizer of (4), which achieves the optimal balance between achieving low discrepancy and being as random as possible. The close resemblance between our formulation and posterior sampling partially justifies the potential strength of our exploration strategy.

³ We assume samples drawn from the next state distributions are i.i.d. as in [?].
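
For completeness, here is the short derivation of this equivalence under the improper uniform prior (absorbing all $\phi$-independent constants):

$$\mathbb{E}_{\theta\sim q_\phi}\big[-\log p(\mathcal{D}|\theta)\big] - \mathbb{H}\big[q_\phi(\theta)\big]
= \mathbb{E}_{\theta\sim q_\phi}\big[\log q_\phi(\theta) - \log p(\mathcal{D}|\theta)\big]
= \mathbb{KL}\big[q_\phi(\theta)\,\|\,p(\theta|\mathcal{D})\big] + \text{const},$$

where the constant collects $\log p(\mathcal{D})$ and the flat prior, neither of which depends on $\phi$.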

4.2 Generic Algorithm

A generic algorithm (Algorithm 1) can be derived from (5). We start with a proposed distribution $q_\phi(\theta)$ over the parameter $\theta$ and a distribution model $p_\theta(\cdot|s,a)$. During control, in state $s$, we sample a parameter $\theta \sim q_\phi(\theta)$ and choose action $a = \arg\max_{a^{\prime}} \mathbb{E}_{x \sim p_\theta(\cdot|s,a^{\prime})}[x]$. This is equivalent to taking an action based on the approximate posterior probability that it is optimal. During training, we sample targets from the one-step lookahead distribution of the greedy action, and update the parameter $\phi$ by optimizing (4).

1:  INPUT: generic return distribution $p_\theta(\cdot|s,a)$ with parameter $\theta$, parameter distribution $q_\phi(\theta)$ with parameter $\phi$.
2:  while not converged do
3:     // Control
4:     Sample $\theta \sim q_\phi(\theta)$.
5:     In state $s$, choose $a = \arg\max_{a^{\prime}} \mathbb{E}_{x \sim p_\theta(\cdot|s,a^{\prime})}[x]$, get transition $s^{\prime}$ and reward $r$.
6:     // Training
7:     Given the state-action pair $(s,a)$, choose the greedy one-step lookahead distribution of $r + \gamma x$, $x \sim p_\theta(\cdot|s^{\prime}, a^{\ast})$ with $a^{\ast} = \arg\max_{a^{\prime}} \mathbb{E}_{x \sim p_\theta(\cdot|s^{\prime}, a^{\prime})}[x]$.
8:     Sample $\{x_i\}_{i=1}^{N}$ from this distribution, let $\hat{p}_{s,a}$ be the empirical distribution of the samples, and update the parameter $\phi$ by minimizing objective (4).
9:  end while
Algorithm 1 Exploration by Distributional RL: Generic
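
To make the control/training loop of Algorithm 1 concrete, the following is a runnable skeleton on a toy tabular environment. The per-$(s,a)$ Gaussian return model, the Gaussian $q_\phi$ over its means, the stub environment and the crude update rule are all illustrative stand-ins for "update $\phi$ by minimizing objective (4)", not the implementation used in this paper.

```python
# Runnable skeleton of Algorithm 1 on a toy tabular environment. The return
# model is a per-(s, a) Gaussian, q_phi is a Gaussian over its means, and the
# update rule is a crude stand-in for minimizing objective (4).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, sigma_ret = 5, 2, 0.99, 1.0

class StubEnv:
    """Tiny deterministic chain-like environment used only to exercise the loop."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(n_states - 1, self.s + 1) if a == 1 else max(0, self.s - 1)
        return self.s, float(self.s == n_states - 1), self.s == n_states - 1

phi_mu = np.zeros((n_states, n_actions))    # mean of q_phi(theta)
phi_std = np.ones((n_states, n_actions))    # std of q_phi(theta)

env = StubEnv()
s = env.reset()
for step in range(500):
    theta = rng.normal(phi_mu, phi_std)                 # control: sample theta ~ q_phi
    a = int(np.argmax(theta[s]))                        # act greedily under the sample
    s_next, r, done = env.step(a)
    a_star = int(np.argmax(phi_mu[s_next]))             # greedy one-step lookahead action
    x = r + gamma * rng.normal(phi_mu[s_next, a_star], sigma_ret, size=16)
    # Training: move the sampled entry toward the empirical target mean and
    # shrink its uncertainty -- a crude stand-in for minimizing objective (4).
    phi_mu[s, a] += 0.1 * (x.mean() - phi_mu[s, a])
    phi_std[s, a] = max(0.05, 0.99 * phi_std[s, a])
    s = env.reset() if done else s_next
```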

4.3 Practical Algorithm: Gaussian Assumption

We turn Algorithm 1 into a practical algorithm by imposing an assumption on $p_\theta(\cdot|s,a)$. [?] assumes the return distribution to be Gaussian, based on the assumption that the chain is ergodic and $\gamma$ is close to $1$. We make this assumption here and let $p_\theta(\cdot|s,a)$ be a Gaussian with parametrized mean $Q_\theta(s,a)$ and fixed standard error $\sigma$. The objective (3) reduces to

$$\mathbb{KL}\big[\hat{p}_{s,a} \,\|\, p_\theta(\cdot|s,a)\big] = \frac{1}{2\sigma^2}\cdot\frac{1}{N}\sum_{i=1}^{N}\big(x_i - Q_\theta(s,a)\big)^2 + \text{const}. \tag{6}$$

We now have an analytical form for the likelihood $p(\mathcal{D}|\theta)$. The objective (4) reduces to

$$\min_{\phi}\; \mathbb{E}_{\theta \sim q_\phi(\theta)}\Big[\frac{1}{2\sigma^2}\sum_{i=1}^{N}\big(x_i - Q_\theta(s,a)\big)^2\Big] - \mathbb{H}\big[q_\phi(\theta)\big], \tag{7}$$

which, as in (5), is equivalent to the KL divergence between $q_\phi(\theta)$ and the corresponding Gaussian-likelihood posterior.
1:  INPUT: target parameter update period $\tau$; learning rate $\alpha$; Gaussian distribution parameter $\sigma$.
2:  INITIALIZE: parameters $\phi$, target parameters $\phi^{-} \leftarrow \phi$; replay buffer $B \leftarrow \{\}$; step counter $c \leftarrow 0$.
3:  for each episode do
4:     while episode not terminated do
5:        $c \leftarrow c + 1$.
6:        Sample $\theta \sim q_\phi(\theta)$.
7:        In state $s$, choose $a = \arg\max_{a^{\prime}} Q_\theta(s, a^{\prime})$, get transition $s^{\prime}$ and reward $r$.
8:        Save experience tuple $(s, a, r, s^{\prime})$ to buffer $B$.
9:        Sample parameters $\theta \sim q_\phi(\theta)$, $\theta^{-} \sim q_{\phi^{-}}(\theta)$ and sample a batch of tuples $\{(s_j, a_j, r_j, s_j^{\prime})\}$ from $B$.
10:        Sample target $x_j \sim r_j + \gamma\, p_{\theta^{-}}(\cdot|s_j^{\prime}, a_j^{\ast})$ for the $j$th tuple in the batch, where $a_j^{\ast}$ is greedy w.r.t. $Q_{\theta^{-}}(s_j^{\prime}, \cdot)$.
11:        Take the gradient $g$ of the KL divergence in (7) with respect to $\phi$.
12:        $\phi \leftarrow \phi - \alpha g$.
13:        if $c \bmod \tau = 0$ then
14:           Update target parameter $\phi^{-} \leftarrow \phi$.
15:        end if
16:     end while
17:  end for
Algorithm 2 Exploration by Distributional RL: Gaussian

Parallel to the principal network with parameter $\phi$, we maintain a target network with parameter $\phi^{-}$ to stabilize learning [?]. Samples for updates are generated by the target network. We also maintain a replay buffer $B$ to store off-policy data.
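
Below is a PyTorch sketch of a single training step of Algorithm 2, assuming a factorized-Gaussian $q_\phi$ over the weights of a small Q network and a separate target network; the module names, network sizes and constants are hypothetical, and the replay buffer is replaced by a random dummy batch for brevity.

```python
# Sketch of one training step of Algorithm 2 (Gaussian case): q_phi is a
# factorized Gaussian over the weights of a small Q network, targets come from
# a separate target network, and the loss is a Monte Carlo estimate of (7).
import copy, math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussLinear(nn.Module):
    """Linear layer whose weight is sampled from a factorized Gaussian q_phi."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # std = softplus(rho)
        self.bias = nn.Parameter(torch.zeros(n_out))
    def forward(self, x):
        std = F.softplus(self.rho)
        w = self.mu + std * torch.randn_like(std)   # reparameterized sample theta ~ q_phi
        return F.linear(x, w, self.bias)
    def entropy(self):
        std = F.softplus(self.rho)
        return (0.5 * math.log(2 * math.pi * math.e) + torch.log(std)).sum()

obs_dim, n_actions, gamma, sigma = 4, 2, 0.99, 1.0
q_net = nn.Sequential(GaussLinear(obs_dim, 32), nn.ReLU(), GaussLinear(32, n_actions))
target_net = copy.deepcopy(q_net)                    # parameters phi^-
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def train_step(s, a, r, s_next, done):
    with torch.no_grad():                            # sampled targets x = r + gamma * max_a' Q_target
        x = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    q = q_net(s).gather(1, a[:, None]).squeeze(1)    # one theta sample drawn inside the forward pass
    bellman = ((x - q) ** 2).sum() / (2 * sigma ** 2)             # expected-discrepancy term of (7)
    entropy = sum(m.entropy() for m in q_net.modules() if isinstance(m, GaussLinear))
    loss = bellman - entropy                                       # objective (7)
    opt.zero_grad(); loss.backward(); opt.step()

# Dummy batch standing in for a replay-buffer sample.
batch = (torch.randn(8, obs_dim), torch.randint(0, n_actions, (8,)),
         torch.randn(8), torch.randn(8, obs_dim), torch.zeros(8))
train_step(*batch)
```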

4.4 Randomized Value Function as Randomized Critic for Policy Gradient

In an off-policy optimization algorithm like Deep Deterministic Policy Gradient (DDPG) [?], a policy $\pi_w$ with parameter $w$ and a critic $Q_\theta$ with parameter $\theta$ are trained at the same time. The policy gradient of the reward objective (1) is

$$\nabla_w J(\pi_w) = \mathbb{E}_{s}\Big[\nabla_a Q_\theta(s,a)\big|_{a = \pi_w(s)}\, \nabla_w \pi_w(s)\Big], \tag{8}$$

where replacing the true $Q^{\pi_w}$ by a critic $Q_\theta$ introduces bias but largely reduces variance [?].

To extend the formulation of Algorithm 2 to policy-based methods, we can interpret $Q_\theta$ as a randomized critic with a distribution induced by $q_\phi(\theta)$. At each update we sample a parameter $\theta \sim q_\phi(\theta)$ and compute the policy gradient (8) through the sampled critic to update $w$. The distributional parameters $\phi$ are updated as in Algorithm 2, with the greedy actions replaced by actions produced by the policy $\pi_w$.

Policy gradients computed from a randomized critic may lead to better exploration directly in the policy space as in [?], since the uncertainties in the value function can be propagated into the policy via gradients through the uncertain value functions.
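
A minimal PyTorch sketch of such an actor update is given below: a critic parameter is sampled (here by perturbing the critic's mean weights with Gaussian noise, one simple way to realize $q_\phi$), and the deterministic policy gradient (8) is taken through the sampled critic; all names, sizes and scales are illustrative.

```python
# Sketch of a DDPG-style actor update through a randomized critic: critic
# weights are perturbed by Gaussian noise (a simple stand-in for sampling
# theta ~ q_phi) and the policy gradient (8) flows through the sampled critic.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, noise_scale = 4, 2, 0.05
policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 32), nn.Tanh(), nn.Linear(32, 1))
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def sampled_critic(s, a):
    """Evaluate Q_theta with theta drawn once per call around the critic's mean weights."""
    out = torch.cat([s, a], dim=1)
    for layer in critic:
        if isinstance(layer, nn.Linear):
            w = layer.weight + noise_scale * torch.randn_like(layer.weight)
            b = layer.bias + noise_scale * torch.randn_like(layer.bias)
            out = F.linear(out, w, b)
        else:
            out = layer(out)
    return out

s = torch.randn(16, obs_dim)
actor_loss = -sampled_critic(s, policy(s)).mean()   # ascend E[Q_theta(s, pi_w(s))], cf. (8)
policy_opt.zero_grad(); actor_loss.backward(); policy_opt.step()
```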

5 Connections with Previous Methods

We now argue that the above formulation provides a conceptual unification of multiple previous methods. We can recover the same objective functions as previous methods by properly choosing the parametric form of the return distribution $p_\theta(\cdot|s,a)$, the distribution $q_\phi(\theta)$ over the model parameter $\theta$, and the algorithm used to optimize the objective (5).

5.1 Posterior sampling for bandits

In the bandit setting, we only have a set of actions $\mathcal{A}$. Assume the underlying reward for each action is Gaussian distributed. To model the return distribution of action $a$, we set $p_\theta(\cdot|a)$ to be Gaussian with unknown mean parameter $\mu_a$, i.e. $p_\theta(\cdot|a) = \mathcal{N}(\mu_a, \sigma^2)$. We assume the distribution $q_\phi(\theta)$ over the parameter $\theta = \{\mu_a\}$ to be Gaussian as well. Due to the conjugacy between the improper uniform prior (assumed in Section 4.1) and the Gaussian likelihood, the posterior $p(\theta|\mathcal{D})$ is still Gaussian. We can minimize (5) exactly by setting $q_\phi(\theta) = p(\theta|\mathcal{D})$. During control, Algorithm 1 selects $a = \arg\max_{a^{\prime}} \mu_{a^{\prime}}$ with sampled $\theta \sim q_\phi(\theta)$, which is exact posterior sampling. This shows that our proposed algorithm reduces to exact posterior sampling for bandits. For general RL cases the equivalence is not exact, but this connection partially justifies that our algorithm can achieve very efficient exploration.
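
As a concrete illustration of this reduction, the sketch below runs posterior (Thompson) sampling on a Gaussian bandit with an improper uniform prior, where the posterior over each arm's mean is Gaussian in closed form; the arm means and noise level are illustrative.

```python
# Posterior (Thompson) sampling for a Gaussian bandit with an improper uniform
# prior: the posterior over each arm's mean is N(empirical mean, sigma^2 / count),
# and q_phi can match it exactly.
import numpy as np

rng = np.random.default_rng(0)
true_means, sigma = np.array([0.2, 0.5, 0.8]), 1.0
counts, sums = np.zeros(3), np.zeros(3)

for t in range(2000):
    post_mu = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    post_std = np.where(counts > 0, sigma / np.sqrt(np.maximum(counts, 1)), 1e3)  # wide before any pull
    theta = rng.normal(post_mu, post_std)     # sample theta from the exact posterior
    a = int(np.argmax(theta))                 # act greedily under the sampled theta
    r = rng.normal(true_means[a], sigma)
    counts[a] += 1
    sums[a] += r
```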

5.2 Deep Q Network with Bayesian updates

Despite minor algorithmic differences, Algorithm 2 has a very similar objective to Variational DQN [?], BBQ Networks [?] and Bayesian DQN [?]: all three algorithms can be interpreted as making a Gaussian assumption on the return distribution and proposing a Gaussian distribution over the parameters $\theta$. However, it is worth recalling that Algorithm 2 is formulated by modeling return distributions, while the previous methods are formulated by randomizing value functions.

If we interpret these three algorithms as instantiations of Algorithm 2, the difference lies in how they optimize (7). Variational DQN and BBQ apply variational inference to minimize the divergence between $q_\phi(\theta)$ and the posterior, while Bayesian DQN applies exact analytical updates (exact minimization of (7)) by using the conjugacy of the prior and likelihood distributions as discussed above. Algorithm 1 generalizes these variants of DQN with Bayesian updates by allowing for other parametric likelihood models $p_\theta(\cdot|s,a)$, though in practice the Gaussian distribution is very popular due to its simple analytical form.

To recover NoisyNet [?] from (7), we can rescale the objective (multiplying (7) by $2\sigma^2$) and take the limit $\sigma \rightarrow 0$, so that the rescaled entropy bonus vanishes. This implies that NoisyNet makes a less strict assumption on the return distribution (the Gauss parameter $\sigma$ does not appear in the objective) but does not explicitly encourage exploration by adding an entropy bonus; hence its exploration purely relies on the randomization of the parameter $\theta$. To further recover the objective of DQN [?], we set $q_\phi(\theta)$ to be a Dirac distribution. Finally, since DQN has no randomness in the parameter $\theta$, its exploration relies on $\epsilon$-greedy action perturbations.

5.3 Distributional RL

Distributional RL [?] models the return distribution using a categorical distribution and does not introduce parameter uncertainties. Since there is no distribution over the parameter $\theta$, Algorithm 1 recovers the exact objective of distributional RL from (4) by setting $q_\phi(\theta)$ to be a Dirac distribution and letting $p_\theta(\cdot|s,a)$ be a categorical distribution. As the number of atoms in the categorical distribution increases, the model becomes increasingly close to a non-parametric estimate. Though having more atoms makes the parametric distribution more expressive, it also poses a bigger statistical challenge during learning due to the larger number of parameters. As with general $p_\theta(\cdot|s,a)$, choosing a parametric form with appropriate representation power is critical for learning.
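
For illustration, the sketch below parameterizes the return distribution of a single $(s,a)$ pair as a categorical distribution over fixed atoms and takes one gradient step of the cross-entropy toward the empirical distribution of sampled target returns; assigning each sample to its nearest atom is a simplification of the projection used in [?], and the atom range and count are illustrative.

```python
# Categorical return model for one (s, a): a distribution over fixed atoms fit to
# sampled target returns by one cross-entropy gradient step.
import numpy as np

atoms = np.linspace(-10.0, 10.0, 51)          # fixed support of the categorical distribution
logits = np.zeros_like(atoms)

def fit_to_samples(logits, samples, lr=0.5):
    """One gradient step of the cross-entropy toward the empirical distribution."""
    target = np.zeros_like(atoms)
    idx = np.abs(samples[:, None] - atoms[None, :]).argmin(axis=1)
    np.add.at(target, idx, 1.0 / len(samples))   # empirical mass placed on nearest atoms
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return logits + lr * (target - probs)        # gradient of cross-entropy w.r.t. logits is probs - target

samples = np.random.default_rng(0).normal(3.0, 1.0, size=64)
logits = fit_to_samples(logits, samples)
```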

6 Experiments

In all experiments, we implement Algorithm 2 and refer to it as GE (Gauss exploration) in the following. We aim to answer the following questions:

  • In environments that require consistent exploration, does GE achieve more efficient exploration than conventional naive exploration strategies like $\epsilon$-greedy in DQN and direct parameter randomization in NoisyNet?

  • When a deterministic critic in an off-policy algorithm like DDPG [?] is replaced by a randomized critic, does the algorithm achieve better exploration?

6.1 Testing Environment

Chain MDP.

The chain MDP [?] (Figure 1) serves as a benchmark to test whether an algorithm entails consistent exploration. The environment consists of $N$ states $\{s_1, \ldots, s_N\}$ and each episode lasts a fixed number of time steps. At each intermediate state the agent has two actions (left and right), while states $s_1$ and $s_N$ are both absorbing. The transition is deterministic. At state $s_1$ the agent receives a small positive reward, at state $s_N$ the agent receives a much larger reward, and no reward anywhere else. The initial state is always at the left end of the chain, making it hard for the agent to escape the local optimum at $s_1$. If the agent explores uniformly at random, the expected number of time steps required to reach $s_N$ grows rapidly with $N$. For large $N$, it is almost impossible for a randomly exploring agent to reach $s_N$ in a single episode, so the optimal strategy of reaching $s_N$ will never be learned.

Figure 1: Chain MDP with $N$ states
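
A minimal implementation of a chain MDP in the spirit of Figure 1 is sketched below for reference; the specific reward values, chain length and episode horizon are illustrative and not the exact settings used in our experiments.

```python
# A chain MDP in the spirit of Figure 1: N states in a line, deterministic
# left/right transitions, absorbing end states, a small reward at the left end
# and a large reward at the right end. All constants are illustrative.
import numpy as np

class ChainMDP:
    def __init__(self, n_states=20, small_reward=0.01, large_reward=1.0):
        self.n, self.small, self.large = n_states, small_reward, large_reward
        self.horizon = n_states + 10
    def reset(self):
        self.s, self.t = 1, 0                  # start next to the small-reward end
        return self.s
    def step(self, action):                    # action 0 = left, 1 = right
        self.t += 1
        if 0 < self.s < self.n - 1:            # end states are absorbing
            self.s = self.s - 1 if action == 0 else self.s + 1
        if self.s == 0:
            r = self.small
        elif self.s == self.n - 1:
            r = self.large
        else:
            r = 0.0
        return self.s, r, self.t >= self.horizon

env, rng = ChainMDP(), np.random.default_rng(0)
s, done = env.reset(), False
while not done:                                # a uniformly random agent rarely reaches s_N
    s, r, done = env.step(int(rng.integers(2)))
```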

Sparse Reward Environments.

All RL agents require reward signals to learn good policies. In sparse reward environments, agents with naive exploration strategies randomly stumble around most of the time and require many more samples to learn good policies than agents that explore consistently. We modify the reward signals in OpenAI gym [?] and MuJoCo benchmark tasks [?] to be sparse as follows (a sketch of such a reward wrapper is given after the list).

  • MountainCar, Acrobot: a nonzero reward is given only when the episode terminates (the goal is reached), and zero reward otherwise.

  • CartPole, InvertedPendulum, InvertedDoublePendulum: a nonzero reward is given only when the episode terminates (the pole falls), and zero reward otherwise.
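
The sketch below shows one way to implement such a sparsification as a gym-style wrapper, assuming the classic gym API in which step returns a 4-tuple; the terminal reward value is a placeholder to be chosen per task.

```python
# Gym-style wrapper that sparsifies rewards: nonzero reward only at termination,
# zero elsewhere. Assumes the classic gym API (step returns a 4-tuple); the
# terminal reward value is a placeholder.
import gym

class SparseRewardWrapper(gym.Wrapper):
    def __init__(self, env, terminal_reward=1.0):
        super().__init__(env)
        self.terminal_reward = terminal_reward
    def step(self, action):
        obs, _, done, info = self.env.step(action)   # drop the dense shaping reward
        reward = self.terminal_reward if done else 0.0
        return obs, reward, done, info

# Example (hypothetical usage): env = SparseRewardWrapper(gym.make("MountainCar-v0"))
```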

6.2 Experiment Results

Exploration in Chain MDP.

In Figure 2 (a)-(c) we compare DQN vs NoisyNet vs GE in Chain MDP environments with different numbers of states $N$. When $N$ is small, all three algorithms can solve the task. For intermediate $N$, DQN cannot explore properly and makes no progress, while GE explores more efficiently and converges to the optimal policy faster than NoisyNet. For the largest $N$, both NoisyNet and DQN get stuck while GE makes progress more consistently. Compared to Bootstrapped DQN (BDQN) [?], GE has a higher variance for large $N$. This might be because BDQN represents the distribution using multiple heads and can approximate more complex distributions, enabling better exploration on this particular task. In general, however, our algorithm is much more computationally feasible than BDQN yet still achieves very efficient exploration.

Figure 2 (d) plots the state visit frequency for GE vs. DQN within the first 10 episodes of training. DQN mostly visits states near the initial state at the left end of the chain, while GE visits a much wider range of states. Such active exploration allows the agent to consistently visit $s_N$ and learn the optimal policy within a small number of iterations.

Figure 2: Comparison of DQN vs NoisyNet vs GE on Chain MDP environments with three different numbers of states (panels (a)-(c)). Panel (d) plots the state visit frequency within the first few iterations of training for GE vs. DQN in the Chain MDP. For each state $s$, set an indicator to $1$ if $s$ is ever visited in an episode and $0$ otherwise; the moving average of this indicator across multiple episodes gives the state visit frequency. Each iteration consists of multiple episodes.

Exploration in Sparse Reward Environments.

In Figure 3 (a)-(c) we present the comparison of the three algorithms in sparse reward environments. For each environment, we plot the rewards at a different scale. In CartPole, the plotted cumulative reward is the episode length; in MountainCar, the plotted cumulative reward is nonzero if the target is reached within one episode and zero otherwise; in Acrobot, the plotted cumulative reward is the negative of the episode length. In all sparse reward tasks, GE makes much faster progress than the other two algorithms. For example, in Sparse MountainCar, within the given number of iterations, DQN and NoisyNet never (or very rarely) reach the target, hence they make no (or little) progress in cumulative reward. On the other hand, GE reaches the target more frequently from an early stage of training, and makes progress more steadily.

In Figure 3 (d) we plot the state visit trajectories of GE vs. DQN in Sparse MountainCar. The vertical and horizontal axes of the plot correspond to two coordinates of the state space. The two panels of (d) correspond to an earlier and a later stage of training, respectively. As training proceeds, the state visits of DQN increasingly cluster on a small region of the state space and fail to explore efficiently. In contrast, GE maintains a widespread distribution over states and can explore more systematically.

Figure 3: Comparison of DQN vs NoisyNet vs GE on sparse reward environments: (a) Sparse CartPole, (b) Sparse MountainCar, (c) Sparse Acrobot. Each iteration corresponds to multiple episodes, and rewards are plotted using moving windows over episodes. Panel (d) plots state visit trajectories for GE vs. DQN in Sparse MountainCar; the left and right panels are taken after an earlier and a later number of training iterations, and the vertical and horizontal axes correspond to two coordinates of the state space.

Randomized Critic for Exploration.

We evaluate the performance of DDPG with different critics. When a standard DQN-style critic is used, the agent explores by injecting noise into the actions produced by the policy [?]. When the critic is a NoisyNet or a randomized DQN with GE, the agent explores by updating its parameters using policy gradients computed through the randomized critic, effectively injecting noise into the parameter space. In conventional continuous control tasks (Figure 4 (a) and (b)), randomized critics do not enjoy much advantage: for example, in a simple control task like InvertedPendulum, where exploration is not important, DDPG with action noise injection makes progress much faster (Figure 4 (a)), though DDPG with randomized critics seems to make progress in a steadier manner. In sparse reward environments (Figure 4 (c) and (d)), however, DDPG with randomized critics tends to make progress at a slightly higher rate than with action noise injection.

Figure 4: Comparison of the original Q function (DQN) vs NoisyNet vs GE as critics for DDPG on (a) InvertedPendulum, (b) InvertedDoublePendulum, (c) Sparse InvertedPendulum, (d) Sparse InvertedDoublePendulum.

Hyper-parameter.

In all experiments, we set $q_\phi(\theta)$ to be a factorized Gaussian. In GE, as in NoisyNet [?], each parameter in a fully connected layer (weight and bias) has two distributional parameters: the mean $\mu$ and the standard error $\sigma$. We set the initial standard error to a common value $\sigma_0$ and let $\sigma_0$ be the actual hyper-parameter to tune. If $\sigma_0$ is large, the distribution over $\theta$ is widespread and the agent can execute a larger range of policies before committing to a solution. For both NoisyNet and GE, we require all $\sigma_0$ to be the same across layers and set a fixed range for the grid search. A second hyper-parameter for GE is the Gauss parameter $\sigma$ in (7), which determines the balance between the expected Bellman error and the entropy. In our experiments, we tune $\sigma$ on a log scale.

We empirically find that both $\sigma_0$ and $\sigma$ are critical to the performance of GE. For each algorithm, we use a fairly exhaustive grid search to obtain the best hyper-parameters. Each experiment is performed multiple times, and the reward plots in Figures 2, 3 and 4 are averaged over five different seeds. In Figure 5, we plot the performance of GE under different $\sigma_0$ and $\sigma$ on Sparse CartPole. From Figure 5 we see that the performance is not monotonic in these hyper-parameters: settings that make the parameter distribution more widespread generally lead to more active exploration but may hinder fast convergence, and vice versa. One must strike a proper balance between exploration and exploitation to obtain good performance. In DQN, we set the $\epsilon$-greedy exploration constant to a fixed value. In all experiments, we also tune the learning rate.
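
For reference, the snippet below illustrates one common way to expose $\sigma_0$ as the tunable hyper-parameter: each weight has a mean $\mu$ and a standard error parameterized through $\rho$, with $\mathrm{softplus}(\rho)$ initialized to $\sigma_0$ so that $\sigma_0$ directly controls how widespread $q_\phi(\theta)$ is at the start of training. This inverse-softplus initialization is an assumption for illustration, not necessarily the exact parameterization used in our experiments.

```python
# One common parameterization (illustrative): std = softplus(rho), initialized
# so that softplus(rho) == sigma_0 for every weight.
import math
import torch
import torch.nn.functional as F

def init_factorized_gaussian(shape, sigma_0):
    mu = torch.nn.Parameter(torch.zeros(shape))
    rho = torch.nn.Parameter(torch.full(shape, math.log(math.expm1(sigma_0))))
    return mu, rho

mu, rho = init_factorized_gaussian((32, 4), sigma_0=0.1)
assert torch.allclose(F.softplus(rho), torch.tensor(0.1))
```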

Figure 5: Hyper-parameters for GE. Panels (a) and (b) plot the performance of GE on Sparse CartPole under different hyper-parameter settings.

7 Conclusion

We have provided a framework based on distributional RL that unifies multiple previous methods on exploration in reinforcement learning, including posterior sampling for bandits as well as recent efforts in Bayesian updates of DQN parameters. We have also derived a practical algorithm based on a Gaussian assumption on the return distribution, which allows for efficient control and parameter updates. We have observed that the proposed algorithm obtains good performance on challenging tasks that require consistent exploration. A further extension of our current algorithm is to relax the Gaussian assumption on return distributions. We leave it to future work whether a more flexible assumption can lead to better performance and whether it can be combined with model-based RL.

References

  • [Azizzadenesheli et al., 2017] Kamyar Azizzadenesheli, Emma Brunskill, and Animashree Anandkumar. Efficient exploration through Bayesian deep Q-networks. Symposium on Deep Reinforcement Learning, NIPS, 2017.
  • [Bellemare et al., 2017] Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. International Conference on Machine Learning, 2017.
  • [Blei et al., 2017] David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, Volume 112, Issue 518, 2017.
  • [Brockman et al., 2016] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
  • [Dearden et al., 1998] Richard Dearden, Nir Friedman, and Stuart Russell. Bayesian Q-learning. American Association for Artificial Intelligence (AAAI), 1998.
  • [Duan et al., 2016] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. International Conference on Machine Learning, 2016.
  • [Fortunato et al., 2017] Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Rémi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, and Shane Legg. Noisy networks for exploration. arXiv:1706.10295, 2017.
  • [Henderson et al., 2017] Peter Henderson, Thang Doan, Riashat Islam, and David Meger. Bayesian policy gradients via alpha divergence dropout. 2nd Workshop on Bayesian Deep Learning, NIPS, 2017.
  • [Levine et al., 2016] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 2016.
  • [Lillicrap et al., 2016] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. International Conference on Learning Representations, 2016.
  • [Lipton et al., 2016] Zachary C. Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, and Li Deng. Efficient dialogue policy learning with BBQ-networks. arXiv:1608.05081, 2016.
  • [Mnih et al., 2013] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. NIPS Workshop on Deep Learning, 2013.
  • [Moerland et al., 2017] Thomas M. Moerland, Joost Broekens, and Catholijn M. Jonker. Efficient exploration with double uncertain value networks. Symposium on Deep Reinforcement Learning, NIPS, 2017.
  • [Morimura et al., 2010] Tetsuro Morimura, Masashi Sugiyama, Hisashi Kashima, Hirotaka Hachiya, and Toshiyuki Tanaka. Nonparametric return distribution approximation for reinforcement learning. ICML, 2010.
  • [Morimura et al., 2012] Tetsuro Morimura, Masashi Sugiyama, Hisashi Kashima, Hirotaka Hachiya, and Toshiyuki Tanaka. Parametric return density estimation for reinforcement learning. UAI, 2012.
  • [Osband and Van Roy, 2015] Ian Osband and Benjamin Van Roy. Bootstrapped Thompson sampling and deep exploration. arXiv:1507.00300, 2015.
  • [Osband et al., 2013] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. arXiv:1306.0940, 2013.
  • [Osband et al., 2016] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. arXiv:1602.04621, 2016.
  • [Osband et al., 2017] Ian Osband, Daniel Russo, Zheng Wen, and Benjamin Van Roy. Deep exploration via randomized value functions. arXiv:1703.07608, 2017.
  • [Plappert et al., 2016] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. International Conference on Learning Representations, 2016.
  • [Ranganath et al., 2014] Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
  • [Russo, 2017] Daniel Russo. A tutorial on Thompson sampling. arXiv, 2017.
  • [Schulman et al., 2015] John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. International Conference on Machine Learning, 2015.
  • [Tang and Kucukelbir, 2017] Yunhao Tang and Alp Kucukelbir. Variational deep Q network. 2nd Workshop on Bayesian Deep Learning, NIPS, 2017.
  • [Thompson, 1933] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, Vol. 25, No. 3/4, 1933.
  • [Todorov et al., 2012] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. International Conference on Intelligent Robots and Systems, 2012.
  • [Tsitsiklis and Van Roy, 1996] John N. Tsitsiklis and Benjamin Van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 1996.