Learning Latent State Spaces for Planning through Reward Prediction

Abstract

Model-based reinforcement learning methods typically learn models for high-dimensional state spaces by aiming to reconstruct and predict the original observations. However, drawing inspiration from model-free reinforcement learning, we propose learning a latent dynamics model directly from rewards. In this work, we introduce a model-based planning framework which learns a latent reward prediction model and then plans in the latent state-space. The latent representation is learned exclusively from multi-step reward prediction, which we show to be the only information necessary for successful planning. With this framework, we benefit from the concise model-free representation while still enjoying the data-efficiency of model-based algorithms. We demonstrate our framework in multi-pendulum and multi-cheetah environments where several pendulums or cheetahs are shown to the agent, only one of which produces rewards. In these environments, it is important for the agent to construct a concise latent representation that filters out irrelevant observations. We find that our method successfully learns an accurate latent reward prediction model in the presence of the irrelevant information while existing model-based methods fail. Planning in the learned latent state-space yields strong performance and high sample efficiency relative to model-free and model-based baselines.

1 Introduction

Deep reinforcement learning (DRL) has demonstrated some of the most impressive results on several complex, high-dimensional sequential decision-making tasks such as video games (atari_nature, ) and board games (alphazero, ). Such algorithms are often model-free, i.e. they do not learn or exploit knowledge of the environment’s dynamics. Although these model-free schemes can achieve state-of-the-art performance, they often require millions of interactions with the environment. The high sample complexity restricts their applicability to real environments, where obtaining the required amount of interactions is prohibitively time-consuming. On the other hand, model-based approaches can achieve low sample complexity by first learning a model for the environment. The learned model can then be used for planning, thereby minimizing the number of environment interactions. However, model-based methods are typically limited to low-dimensional tasks or settings with certain assumptions imposed on the dynamics (li2004iterative, ; deisenroth2011pilco, ; gps, ; pigps, ).

Several recent works have learned dynamics models parameterized by deep neural networks (lenz2015deepmpc, ; finn2017deep, ; mppi, ), but such methods generally result in high-dimensional models which are unsuitable for planning. In order to efficiently apply model-based planning in high-dimensional spaces, a promising approach is to learn a low-dimensional latent state representation of the original high-dimensional space. A compact latent representation is often more conducive for planning, which can further reduce the number of environment interactions required to learn a good policy (watter2015embed, ; robustembedding, ; finn2016deep, ; pavone, ; zhang2018solar, ; planet, ). Typically, such a model has a variational autoencoder (VAE)-like  (Kingma2013AutoEncodingVB, ) structure that learns a latent representation. In particular, it learns this representation through reconstruction and prediction of the full-state from the latent state. In other words, the learned low-dimensional latent state must retain a sufficient amount of information from the original state to accurately predict the state in the high-dimensional space. Although the ability to perform full-state prediction is sufficient for planning in latent space, it is not strictly necessary. Predicting the full-state may result in a latent state representation which contains information that is irrelevant to planning. For example, consider a task of maze navigation, with a TV placed in the maze, displaying images (burda2018large, ). The TV content is irrelevant to the task, but the latent representation may learn features designed to predict the state of the TV.

In RL, the objective is to maximize the cumulative sum of rewards. Given this objective, it may be excessive to learn to perform full-state prediction, if aspects of the full state have no influence on the reward. In examples like the aforementioned TV in a maze, it may be more fruitful to instead optimize a latent model to be able to perform reward prediction. In some sense, value-based model-free methods can be seen as reward prediction models. However, they attempt to predict the optimal value which is policy dependent and requires a large number of samples to learn. Fortunately, model-based planning methods such as model predictive control (MPC) only maximize the cumulative sum of future rewards over the choices of action sequences. This implies that what a planning method requires is a latent reward prediction model: a model which predicts current and future rewards over a horizon from a latent state under different action sequences. In this work, we introduce a model-based DRL framework where we learn a latent dynamics model exclusively from the multi-step reward prediction criterion and then use MPC to plan directly in the latent state-space. Learning the latent reward prediction model benefits from the sample-efficiency of model-based methods, and the learned latent model is more useful for planning than its full-state prediction counterparts as discussed above.

The contributions of this paper are as follows. The proposed latent model is learned only from multi-step reward prediction. Optimizing exclusively for reward prediction allows us to circumvent the inefficiency stemming from learning irrelevant parts of the state space, while maintaining the model-based sample efficiency. We provide a performance guarantee on planning in the learned latent state-space without assumptions on the underlying task dynamics. Empirically, we demonstrate that our method outperforms model-free and model-based baselines on multi-pendulum and multi-cheetah environments where the agent must ignore a large amount of irrelevant information.

The remainder of this paper is structured as follows. In Section 2, we present related work in model-based DRL. In Section 3, we provide the requisite background on RL and MPC. In Section 4, we introduce our method. In Section 5, we present our experimental results, and conclude by indicating potential future directions of research.

2 Related work

Model-based DRL

Model-based works have aimed to increase sample efficiency and stability by utilizing a model during policy optimization. Methods like PILCO (deisenroth2011pilco, ), DeepMPC (lenz2015deepmpc, ), and MPPI (mppi, ) have exhibited promising performance on some control tasks using only a handful of episodes; however, they usually have difficulty scaling to high-dimensional observations. Guided policy search methods (gps, ; pigps, ) make use of local models to update a global deep policy, but these local assumptions can fail to capture global properties and suffer at points of discontinuity. For high-dimensional observations such as images, deep neural networks have been trained to directly predict the observations (finn2017deep, ), but the resulting high-dimensional models limit their applicability in efficient planning.

Learning latent state spaces for planning

To make model-based methods more scalable, several works have adopted an auto-encoder scheme to learn a reduced-dimensional latent state representation. One may then plan directly in the latent space using conventional planning methods such as the linear quadratic regulator or rapidly exploring random trees (watter2015embed, ; robustembedding, ; finn2016deep, ; pavone, ; zhang2018solar, ). However, these methods construct latent spaces based on state reconstruction/prediction, which may not be entirely relevant to the planning problem as discussed in the previous section. A recent work, PlaNet (planet, ), offered a latent-space planning approach that predicts multi-step observations and rewards. PlaNet has been shown to achieve good sample complexity and performance compared to state-of-the-art model-free methods, but its latent model might also suffer from the aforementioned issues with irrelevant observation predictions. In this work we use a planning algorithm similar to PlaNet's; however, removing the dependence on observation prediction provides a more concise latent representation for planning.

Reward-based representation learning for RL

The value prediction network (VPN) (oh2017value, ) is an RL framework which learns a model to predict future values rather than the full states of the environment. Although VPN shares a very similar idea to our framework in not predicting irrelevant information in the full state, it suffers from the sample complexity issue of learning the optimal value, as in model-free methods. Performance guarantees of latent models without full-state reconstruction or prediction were analyzed under the DeepMDP framework in a recent paper (deepmdp, ). Under Lipschitz assumptions, their work provides performance bounds for latent representations when the two prediction losses, for the current reward and the next latent state, are optimized. They also show improvements in model-free DRL by adding the DeepMDP losses as auxiliary objectives. However, with only the single-step prediction loss, DeepMDP may not be suitable for making the long-term reward predictions necessary for planning algorithms. In addition, the latent state prediction loss may be unnecessary and could result in the local minima observed in their paper. Our framework shares the same spirit as DeepMDP by learning latent dynamics only from reward sequences, but focusing on multi-step reward prediction allows us to achieve higher performance in model-based planning.

3 Preliminaries

3.1 Problem Setup

In this paper we consider a discrete-time nonlinear dynamical system with continuous state space $\mathcal{S}$, continuous action space $\mathcal{A}$, and a reward function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$. Then, given an admissible action $a_t \in \mathcal{A}$ at time $t$, the system state evolves according to the dynamics

$s_{t+1} = f(s_t, a_t)$   (1)

where $s_{t+1} \in \mathcal{S}$ is the next state at time $t+1$.

Our goal is to find a policy $\pi : \mathcal{S} \to \mathcal{A}$ that selects actions $a_t = \pi(s_t)$ so as to maximize the cumulative discounted rewards

$\max_{\pi} \; \sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)$   (2)

where the initial state $s_0$ is assumed to be fixed, and $\gamma \in (0, 1)$ is the discount factor.

Note that Equations (1) and (2) formulate an infinite-horizon optimal control problem. This problem can also be viewed as a deterministic Markov decision process (MDP) given by the tuple $(\mathcal{S}, \mathcal{A}, f, r, \gamma)$.

3.2 MPC Planning

When a dynamics model is available, model predictive control (MPC) is a powerful planning framework for optimal control problems. An MPC agent chooses its action by online optimization over a finite planning horizon $H$. More specifically, at each time $t$, the agent solves an $H$-horizon trajectory optimization problem:

$\max_{a_{t:t+H-1}} \; \sum_{k=0}^{H-1} r(\hat{s}_{t+k}, a_{t+k})$   (3a)
subject to $\hat{s}_{t+k+1} = f(\hat{s}_{t+k}, a_{t+k}), \quad k = 0, \dots, H-2$   (3b)
$\hat{s}_{t} = s_{t}$   (3c)

Suppose the optimal control sequence is $a^{*}_{t:t+H-1}$; then the MPC agent will select $a^{*}_{t}$ as the control action at time $t$. This method is termed model predictive control due to its use of the model $f$ to predict the future trajectory $\hat{s}_{t:t+H-1}$ from state $s_t$ and the intended action sequence $a_{t:t+H-1}$. Let $\pi_{\mathrm{MPC}}^{f, r}$ denote the MPC policy using the dynamics model $f$ and reward $r$. Then the MPC agent selects its action by $a_t = \pi_{\mathrm{MPC}}^{f, r}(s_t)$.

3.3 Sample-Based Trajectory Optimization

From the description in Section 3.2, an MPC agent needs to solve the trajectory optimization problem specified by Equation (3). In this work, we use the cross entropy method (CEM) (rubinstein1997optimization, ) to solve problem (3). CEM is a simple sampling-based planning method that relies only on a forward model and trajectory returns, so it can make direct use of our learned latent mappings, much like a simulator. An advantage of a method like CEM is that its sampling procedure can be easily parallelized directly in the learned latent space.
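To make the planning step concrete, the sketch below shows a minimal CEM planner over open-loop action sequences. It assumes only a caller-supplied `evaluate_returns` function that maps a batch of candidate action sequences to their predicted $H$-step returns (in our setting, a batch rollout of the learned latent model); the function name and hyperparameter values are illustrative, not the exact implementation used in the experiments.

```python
import numpy as np

def cem_plan(evaluate_returns, horizon, action_dim, action_low, action_high,
             num_samples=500, num_elites=50, num_iters=5):
    """Cross entropy method over open-loop action sequences.

    evaluate_returns: callable mapping an array of shape
        (num_samples, horizon, action_dim) to predicted H-step returns of
        shape (num_samples,); here this would be a batch latent-model rollout.
    """
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(num_iters):
        # Sample candidate action sequences and clip them to the admissible range.
        samples = mean + std * np.random.randn(num_samples, horizon, action_dim)
        samples = np.clip(samples, action_low, action_high)
        returns = evaluate_returns(samples)
        # Refit the sampling distribution to the highest-return (elite) sequences.
        elites = samples[np.argsort(returns)[-num_elites:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # in MPC, only mean[0] is executed before re-planning
```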

4 Latent reward prediction model for planning

In order to perform model-based planning directly from observations when no model of the environment is given, we need to learn a dynamics model. For high-dimensional problems, a common model-based DRL approach is to learn a latent representation and then conduct planning in the low-dimensional latent space. In this work, rather than predicting the full state/observation, we learn a latent state-space model which predicts only the current and future rewards conditioned on action sequences. We argue that observation reconstruction is not necessary as long as we can predict the rewards accurately over the planning horizon.

4.1 Learning a Latent Reward Prediction Model

Our model consists of three distinct components which can be trained in an end-to-end fashion. To do so, we first embed the state observation $s_t$ into the latent state $z_t$ with the parameterized encoder $e_{\theta} : \mathcal{S} \to \mathcal{Z}$. In order to propagate the latent state forward in latent space, we define the forward dynamics function $f_{\theta} : \mathcal{Z} \times \mathcal{A} \to \mathcal{Z}$, which maps $(z_t, a_t) \mapsto z_{t+1}$, corresponding to a discrete-time dynamical system in the latent state space. Finally, we require a reward function $r_{\theta} : \mathcal{Z} \times \mathcal{A} \to \mathbb{R}$ which provides an estimate $\hat{r}_t$ of the reward given a latent state $z_t$ and control action $a_t$. The full latent reward prediction model is depicted in Figure 1.

$z_{t} = e_{\theta}(s_{t}), \qquad z_{t+k+1} = f_{\theta}(z_{t+k}, a_{t+k}), \qquad \hat{r}_{t+k} = r_{\theta}(z_{t+k}, a_{t+k})$   (4)

Note that we do not require a “decoding” function since planning algorithms often only need to evaluate the rewards along a trajectory given an action sequence.

Figure 1: The latent reward prediction model $(e_{\theta}, f_{\theta}, r_{\theta})$. Blue and green circles represent the input and output variables. The squares indicate components of the prediction model and white circles represent the latent variables. The model takes in an initial state $s_t$ and a sequence of actions $a_{t:t+H-1}$ and returns the sequence of $H$-step predicted rewards $\hat{r}_{t:t+H-1}$.

To achieve the aforementioned reward prediction ability, we learn these functions with a multi-step reward prediction loss. Formally, given an $H$-step sequence of states and actions $(s_t, a_t, a_{t+1}, \dots, a_{t+H-1})$ with observed rewards $r_{t:t+H-1}$, the training objective is defined to be the mean-squared error between the original rewards and the multi-step latent reward predictions.

$\ell_{H}(\theta) = \frac{1}{H} \sum_{k=0}^{H-1} \left( r_{t+k} - \hat{r}_{t+k} \right)^{2}$   (5)
where $z_{t} = e_{\theta}(s_{t}), \quad z_{t+k+1} = f_{\theta}(z_{t+k}, a_{t+k}), \quad \hat{r}_{t+k} = r_{\theta}(z_{t+k}, a_{t+k})$   (6)

We later show that this $H$-step reward prediction loss is sufficient to bound the planning performance over the same horizon.
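To illustrate the training objective, the following sketch shows one way to implement the latent reward prediction model and the $H$-step loss of Equations (5)-(6) in PyTorch. The module structure mirrors $e_{\theta}$, $f_{\theta}$, and $r_{\theta}$, but the layer sizes and architectural details are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LatentRewardModel(nn.Module):
    """e_theta (encoder), f_theta (latent dynamics), r_theta (reward head).
    Hidden sizes are illustrative; the actual architecture may differ."""
    def __init__(self, obs_dim, action_dim, latent_dim, hidden=200):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, latent_dim))
        self.reward = nn.Sequential(nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def rollout_rewards(self, obs0, actions):
        """obs0: (B, obs_dim); actions: (B, H, action_dim) -> predicted rewards (B, H)."""
        z = self.encoder(obs0)
        preds = []
        for k in range(actions.shape[1]):
            za = torch.cat([z, actions[:, k]], dim=-1)
            preds.append(self.reward(za).squeeze(-1))  # r_theta(z_{t+k}, a_{t+k})
            z = self.dynamics(za)                      # propagate the latent state forward
        return torch.stack(preds, dim=1)

def h_step_reward_loss(model, obs0, actions, rewards):
    """Mean-squared H-step reward prediction error, as in Eqs. (5)-(6)."""
    return ((model.rollout_rewards(obs0, actions) - rewards) ** 2).mean()
```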

Note that we do not impose other losses such as the observation prediction loss or latent state prediction loss as in some previous latent model frameworks (deepmdp, ; planet, ). The reason is that other loss functions are not necessary for planning. Having those additional losses would increase the training complexity and may induce undesirable local minima.

4.2 MPC with the Latent Reward Prediction Model

Once a latent state-space model is learned, we can perform MPC using the latent embedding. Let $\pi_{\mathrm{MPC}}^{f_{\theta}, r_{\theta}}$ be the latent MPC policy with the latent reward model $(e_{\theta}, f_{\theta}, r_{\theta})$. Similar to Equation (3), the latent MPC agent at each time step solves an $H$-step trajectory planning problem.

$\max_{a_{t:t+H-1}} \; \sum_{k=0}^{H-1} r_{\theta}(\hat{z}_{t+k}, a_{t+k})$   (7a)
subject to $\hat{z}_{t+k+1} = f_{\theta}(\hat{z}_{t+k}, a_{t+k}), \quad k = 0, \dots, H-2$   (7b)
$\hat{z}_{t} = e_{\theta}(s_{t})$   (7c)

After solving the optimization problem, the latent MPC agent will choose its action by

$a_{t} = \pi_{\mathrm{MPC}}^{f_{\theta}, r_{\theta}}(s_{t}) = \bar{a}_{t}$   (8)
where $\bar{a}_{t:t+H-1} = \operatorname{arg\,max}_{a_{t:t+H-1}} \sum_{k=0}^{H-1} r_{\theta}(\hat{z}_{t+k}, a_{t+k})$ subject to (7b)-(7c)   (9)

Similar to other model-based methods, the latent reward prediction model can be used in an offline setting with a pre-collected dataset. Together with an exploration policy, e.g. $\epsilon$-greedy, our model can also be used in an online iterative scheme. See the appendix in the supplementary materials for descriptions of the offline and online iterative variants of the algorithm.
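As a usage sketch, the latent MPC policy of Equations (7)-(9) can be obtained by plugging a batch latent rollout into the CEM planner from Section 3.3. The snippet below assumes the illustrative `LatentRewardModel` and `cem_plan` sketches given earlier; it encodes the current observation once and sums the predicted rewards of each sampled action sequence entirely in latent space.

```python
import torch

def latent_mpc_action(model, obs, horizon, action_dim, action_low, action_high):
    """Select the first action of the CEM-optimized sequence (Eqs. (7)-(9))."""
    def evaluate_returns(action_sequences):
        with torch.no_grad():
            actions = torch.as_tensor(action_sequences, dtype=torch.float32)
            obs_batch = torch.as_tensor(obs, dtype=torch.float32) \
                             .unsqueeze(0).expand(actions.shape[0], -1)
            # Sum of H-step predicted rewards, computed fully in latent space.
            returns = model.rollout_rewards(obs_batch, actions).sum(dim=1)
        return returns.numpy()

    plan = cem_plan(evaluate_returns, horizon, action_dim, action_low, action_high)
    return plan[0]  # receding horizon: execute only the first planned action
```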

4.3 Planning Performance Guarantee

To evaluate the performance of MPC planning, define the optimal $k$-step Q-functions recursively by

$Q^{*}_{1}(s, a) = r(s, a), \qquad Q^{*}_{k}(s, a) = r(s, a) + \max_{a'} Q^{*}_{k-1}\big(f(s, a), a'\big), \quad k = 2, \dots, H.$

Then Bellman optimality shows that $\max_{a} Q^{*}_{H}(s_{t}, a)$ gives the optimal value for the MPC planning problem (3), i.e., the optimal value of (3a).

Similarly, define the optimal $k$-step Q-functions for the latent state-space model $(e_{\theta}, f_{\theta}, r_{\theta})$:

$\bar{Q}_{1}(z, a) = r_{\theta}(z, a), \qquad \bar{Q}_{k}(z, a) = r_{\theta}(z, a) + \max_{a'} \bar{Q}_{k-1}\big(f_{\theta}(z, a), a'\big), \quad k = 2, \dots, H.$

We have the following result connecting the $H$-step reward prediction loss and the $H$-step Q-functions.

Theorem 1.

Suppose the $H$-step reward prediction errors satisfy $|r(s_{t+k}, a_{t+k}) - \hat{r}_{t+k}| \leq \epsilon$ for $k = 0, \dots, H-1$ along any $H$-step trajectory. Then the $H$-step Q-functions satisfy

$\big| Q^{*}_{H}(s, a) - \bar{Q}_{H}\big(e_{\theta}(s), a\big) \big| \leq H\epsilon$

for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. Moreover, if $\bar{\pi}$ is optimal for the latent MPC planning problem (7), then it is also a $2H\epsilon$-optimal policy for the MPC planning problem (3).

This result provides a planning performance guarantee for the latent reward prediction model. It also confirms that other losses are not necessary to achieve the desired performance from the latent model. Note that this result does not require additional assumptions such as Lipschitz continuity of the environment. See the appendix for a proof of the theorem.

5 Experiments

In this section we introduce several experiments which investigate the representation ability, long-term prediction ability, and sample efficiency of our model. We implement the mappings $e_{\theta}$, $f_{\theta}$, and $r_{\theta}$ as deterministic feed-forward neural networks; extensions to stochastic and/or recurrent neural networks are potential future directions. To perform MPC, we use CEM with a fixed planning horizon and a fixed number of sampled trajectories, keeping the best (elite) trajectories to compute the control and re-planning after every action. CEM uses the model much like a simulator to collect samples in latent space (in our case, a batch forward pass through a deep network). We consider two sets of environments with high-dimensional irrelevant observations. Results for an additional image-based environment are also available in the appendix.

5.1 Baselines

Before introducing the experiments, we briefly introduce the baseline algorithms used for comparison. Throughout our experiments, we have several baselines, each serving to demonstrate the attributes of our reward model: concise representation, long-term prediction and sample efficiency.

State prediction model

To demonstrate the representation ability of our reward prediction model, we consider a latent model whose latent space is learned from state prediction only; a reward predictor is trained separately on top of this latent space. This model works well when the observations are low-dimensional, but, like many model-based methods, it fails to scale to high-dimensional observations.

DeepMDP

We use the DeepMDP model (deepmdp, ), which is essentially a one-step variant of the reward prediction model with an additional latent state prediction loss. We aim to show that our multi-step reward prediction-only loss improves the planning performance.

SAC

Finally, to demonstrate sample efficiency, we compare against the model-free Soft Actor-Critic (SAC) algorithm (sac, ), a popular DRL algorithm known for its sample efficiency and performance.

5.2 Multi-pendulum

In several RL tasks there may be irrelevant aspects of the state, e.g. the noisy TV problem (tv_noise, ), so predicting rewards rather than observations can be crucial for learning a model that is conducive and useful for planning. To verify this claim, we develop a multi-pendulum environment, an adaptation of the classic pendulum control task (gym, ), where the agent must swing up and stabilize a pendulum at the upright state under torque constraints. Although this task is simple, it involves nonlinear dynamics and requires long planning horizons in order to accumulate enough energy to swing up. In the multi-pendulum task, the agent operates in an environment with $N$ pendulums but is only rewarded for swinging up a single pendulum; the other pendulums serve as observational noise and are driven by uniform random actions. The state observation for the $N$-pendulum environment is the concatenation of the individual pendulum observations $(\cos\theta_{i}, \sin\theta_{i}, \dot{\theta}_{i})$ for $i = 1, \dots, N$.
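For concreteness, a minimal sketch of how such a multi-pendulum observation could be assembled from $N$ copies of the Gym pendulum environment is shown below. The wrapper and its details are our own illustration of the construction described above, not the exact environment code: only the first pendulum is controlled and rewarded, while the remaining pendulums are stepped with uniform random actions and contribute only distractor observations.

```python
import gym
import numpy as np

class MultiPendulum:
    """N pendulums; only pendulum 0 is controlled and rewarded, the rest are
    driven by uniform random actions and act as observational distractors."""
    def __init__(self, n_pendulums):
        self.envs = [gym.make("Pendulum-v0") for _ in range(n_pendulums)]

    def reset(self):
        return np.concatenate([env.reset() for env in self.envs])

    def step(self, action):
        obs, reward, done, _ = self.envs[0].step(action)
        all_obs = [obs]
        for env in self.envs[1:]:
            # Distractor pendulums evolve under uniform random torques.
            o, _, _, _ = env.step(env.action_space.sample())
            all_obs.append(o)
        return np.concatenate(all_obs), reward, done, {}
```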

Since the reward prediction model can be used in either a completely offline or an iterative online setting, we present two sets of experiments. First, we report the average final performance of our algorithm and the baselines after training completely offline on environment steps collected under time-correlated exploration noise. We vary the number of pendulums while keeping the latent space dimension fixed throughout. In the second experiment we train the model and baselines under the iterative scheme described in the supplementary appendix, consuming approximately the same number of samples. We also compare against the SAC algorithm as a state-of-the-art model-free baseline.

As shown in Figure 2, the reward prediction loss does not degrade quickly for our reward prediction model as the number of irrelevant pendulums increases, but degrades almost immediately for the state prediction model. DeepMDP struggles to predict well in all cases, due to only training on a single-step latent plus reward reconstruction loss. In Table 1, we further see that the performance of the reward prediction model is not very sensitive to the number of excess pendulums, whereas the state prediction model immediately fails to solve the task. Note that the SAC method requires a significantly greater number of samples to achieve similar performance.

For the iterative scheme, the reward prediction model clearly outperforms all baselines, as can be seen in Figure 3. Note that the DeepMDP model has a much higher variance than the other methods. This high variance could be due to local optima induced by the latent state prediction loss in DeepMDP. In contrast, our reward prediction model provides a stable, low-variance result.

Figure 2: Training curves for the $H$-step reward prediction loss. All experiments are run in the offline setting on data collected from the true environment under a random policy. The results of our reward prediction model are shown on the left, the state prediction model in the middle, and DeepMDP on the right.
Table 1: Statistics (mean and standard deviation of final return) on the multi-pendulum environment as the number of pendulums varies, for the reward prediction model, the state prediction model, DeepMDP, and SAC. The latent models are trained under the offline setting from environment steps collected with a random policy (excluding SAC). For each latent model, we use CEM for MPC planning, and each method is evaluated over multiple episodes.
Figure 3: Training curves for the iterative scheme in the multi-pendulum environment. We initialize each model's dataset with random trajectories, and then perform a number of training iterations after each episode collected with the current policy under $\epsilon$-greedy exploration. Each method is run with multiple seeds. The solid lines and shaded areas are the average training returns and their one-standard-deviation regions under the exploration policy. The corresponding dashed lines mark the average final evaluation performance.

5.3 Multi-cheetah

In order to demonstrate that our method scales to higher-dimensional environments, we use the DeepMind Control Suite "cheetah-run" environment (dm, ) to construct "multi-cheetah". The cheetah-run environment has a continuous 17-dimensional observation space and a 6-dimensional action space. Similar to multi-pendulum, we concatenate the observation vectors of each cheetah into one observation, where only the first cheetah returns the reward signal. Because a random policy is not sufficient to explore the state space in cheetah, the remaining cheetahs act according to an expert SAC policy. In multi-cheetah, we mainly compare our method with the model-free baseline SAC to highlight its strong performance and sample efficiency.

Figure 4: The average return training curves for SAC, DeepMDP, and our reward prediction model in the single-cheetah and multi-cheetah environments. The reward model is trained iteratively online with zero-mean Gaussian exploration noise. Solid lines indicate training return (averaged over multiple seeds, with shaded one-standard-deviation regions), and the dashed lines are the final evaluation returns without exploration noise.

As shown in Figure 4, the reward prediction model is able to learn and plan near-optimally using far fewer samples than SAC in the single-cheetah environment (i.e., the regular cheetah-run). Within a small number of episodes, the reward model achieves near-optimal running behavior in final evaluation for both the single-cheetah and multi-cheetah settings, a level of performance that takes SAC many more episodes to reach. With further training of single-cheetah (not shown in the figure), the reward model approaches SAC's maximum performance. When there are several irrelevant running cheetahs, the performance of SAC drops further. In comparison, our reward prediction model is largely agnostic to the irrelevant observations and maintains high performance and sample efficiency.

6 Conclusion

In this paper we introduced a method for learning a latent state representation from rewards. By constructing a latent space based on multi-step reward predictions, we obtained a concise representation similar to that of model-free DRL algorithms while maintaining high sample efficiency by planning with model-based algorithms. In this work, we used a sample-based CEM algorithm for MPC planning, but the use of other planning algorithms is also plausible. We demonstrated in the multi-pendulum and multi-cheetah environments that the latent reward prediction model is able to succeed in the presence of high-dimensional irrelevant information with both offline and iterative online schemes.

However, there are several aspects that still need to be investigated, such as more complex environments and sparse-reward settings, where a finite-horizon planner would typically struggle. We would also like to investigate how to incorporate model uncertainty through either stochastic dynamics or ensemble methods. One interesting open question is what the optimal observer is, or what combination of reward and state reconstruction is needed to perform well in a given task. We believe our minimal formulation is a good starting point for this question.

Appendix A Algorithm descriptions

  Inputs: planning horizon $H$, dataset $\mathcal{D}$, latent model $(e_{\theta}, f_{\theta}, r_{\theta})$.
  Train $(e_{\theta}, f_{\theta}, r_{\theta})$ using the loss function in (5) and data $\mathcal{D}$.
  Output: policy $\pi_{\mathrm{MPC}}^{f_{\theta}, r_{\theta}}$
Algorithm 1 Offline Training
  Inputs: planning horizon $H$, number of iterations $N$, latent model $(e_{\theta}, f_{\theta}, r_{\theta})$.
  Initialize dataset $\mathcal{D}$ with random trajectories.
  for $i = 1$ to $N$ do
     Train $(e_{\theta}, f_{\theta}, r_{\theta})$ using the loss function in (5) and data $\mathcal{D}$.
     Run latent MPC in the environment with $\epsilon$-greedy exploration and collect data $\mathcal{D}_{i}$.
     $\mathcal{D} \leftarrow \mathcal{D} \cup \mathcal{D}_{i}$
  end for
  Output: policy $\pi_{\mathrm{MPC}}^{f_{\theta}, r_{\theta}}$
Algorithm 2 Online Iterative Scheme
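A hedged Python rendering of the data-collection step used by Algorithm 2 is sketched below; it reuses the illustrative `latent_mpc_action` helper from Section 4.2, and the function signature is an assumption rather than the paper's exact code. Algorithm 2 then alternates between minimizing the loss in Equation (5) on the accumulated dataset (sliced into $H$-step windows) and appending the transitions returned by this routine.

```python
import numpy as np

def collect_episode(env, model, horizon, epsilon, action_low, action_high, action_dim):
    """One episode of epsilon-greedy latent MPC (the data-collection step of Algorithm 2)."""
    obs, done, transitions = env.reset(), False, []
    while not done:
        if np.random.rand() < epsilon:   # explore with a uniform random action
            action = np.random.uniform(action_low, action_high, size=action_dim)
        else:                            # otherwise act with the latent MPC policy
            action = latent_mpc_action(model, obs, horizon, action_dim,
                                       action_low, action_high)
        next_obs, reward, done, _ = env.step(action)
        transitions.append((obs, action, reward))
        obs = next_obs
    return transitions
```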

Appendix B Proof of Theorem 1

Proof.

Note that the $H$-step Q-function can be written as

$Q^{*}_{H}(s_{t}, a_{t}) = \sum_{k=0}^{H-1} r(s_{t+k}, a_{t+k}),$

where $s_{t+k+1} = f(s_{t+k}, a_{t+k})$ and $a_{t+k} = \pi^{*}(s_{t+k})$ for $k = 1, \dots, H-1.$

Here $\pi^{*}$ denotes the optimal policy for the MPC problem (3). Because $\bar{Q}_{H}$ is the optimal Q-function for the latent MPC problem (7), we have

$\bar{Q}_{H}\big(e_{\theta}(s_{t}), a_{t}\big) \geq \sum_{k=0}^{H-1} r_{\theta}(z_{t+k}, a_{t+k}), \quad \text{with } z_{t} = e_{\theta}(s_{t}), \; z_{t+k+1} = f_{\theta}(z_{t+k}, a_{t+k}),$

where the right-hand side evaluates the latent model along the same action sequence $a_{t:t+H-1}$. Adding and subtracting the true rewards $r(s_{t+k}, a_{t+k})$ and applying the assumption $|r(s_{t+k}, a_{t+k}) - r_{\theta}(z_{t+k}, a_{t+k})| \leq \epsilon$, we obtain

$\bar{Q}_{H}\big(e_{\theta}(s_{t}), a_{t}\big) \geq Q^{*}_{H}(s_{t}, a_{t}) - H\epsilon.$   (10)

Now let $\bar{\pi}$ be the optimal policy for the latent MPC problem in the latent space; then from the Q-function definition we have

$\bar{Q}_{H}\big(e_{\theta}(s_{t}), a_{t}\big) = \sum_{k=0}^{H-1} r_{\theta}(\bar{z}_{t+k}, \bar{a}_{t+k}),$

where $\bar{z}_{t} = e_{\theta}(s_{t})$, $\bar{a}_{t} = a_{t}$, and $\bar{z}_{t+k+1} = f_{\theta}(\bar{z}_{t+k}, \bar{a}_{t+k})$, $\bar{a}_{t+k} = \bar{\pi}(\bar{z}_{t+k})$ for $k = 1, \dots, H-1.$

The $H$-step Q-function of policy $\bar{\pi}$ in problem (3), i.e. the true return obtained by executing the action sequence $\bar{a}_{t:t+H-1}$ in the environment, is then bounded by the same argument:

$Q^{\bar{\pi}}_{H}(s_{t}, a_{t}) \geq \bar{Q}_{H}\big(e_{\theta}(s_{t}), a_{t}\big) - H\epsilon.$   (11)

Now combining (10) and (11) we get

$Q^{\bar{\pi}}_{H}(s_{t}, a_{t}) \geq Q^{*}_{H}(s_{t}, a_{t}) - 2H\epsilon,$

which means that $\bar{\pi}$ is a $2H\epsilon$-optimal policy. ∎

Appendix C Experiments with image observations

To demonstrate that this method can solve tasks with rather high-dimensional state observations, we solve the single-pendulum task from two grayscale images. Similar to the multi-pendulum experiments, we keep the latent space low-dimensional. We provide two images per state to ensure that velocity can be inferred and the fully-observable assumption is maintained. We find that the prediction results are fairly similar to those obtained with the kinematic state representation, but noisier due to the coarse image observations. We train the model mostly offline, using only a single additional iteration: we initialize the model with samples collected under random control, train it, and then train once more with additional samples collected using the new control policy. This policy is able to solve the task, producing prediction results similar to the kinematic state representation. It is interesting to note that the open-loop model predicts that the pendulum will continue to oscillate rather than stabilize. This could be evidence that the model overfits to the random training data; however, it would also require a very accurate model to predict stabilizing behavior far into the future.

Figure 5: Open-loop prediction results on the image-pendulum environment, which is successfully stabilized by the learned latent-reward CEM policy. The model is first trained on samples from a random policy, and then on additional samples collected under the new control policy.
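The paper does not specify the image encoder, but one natural choice is to replace the fully connected $e_{\theta}$ with a small convolutional encoder over the two stacked grayscale frames while keeping $f_{\theta}$ and $r_{\theta}$ unchanged. The sketch below is such an assumed architecture; all kernel sizes, channel counts, and the frame resolution are illustrative.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Maps two stacked grayscale frames to the latent state z_t.
    Kernel sizes, channel counts, and frame_size are illustrative assumptions."""
    def __init__(self, latent_dim, frame_size=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=4, stride=2), nn.ReLU(),  # 2 stacked frames in
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, 2, frame_size, frame_size)).shape[-1]
        self.head = nn.Linear(flat, latent_dim)

    def forward(self, frames):  # frames: (B, 2, H, W), pixel values scaled to [0, 1]
        return self.head(self.conv(frames))
```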

References

  1. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  2. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140–1144, 2018.
  3. Weiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222–229, 2004.
  4. Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465–472, 2011.
  5. Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning, pages 1–9, 2013.
  6. Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, and Sergey Levine. Path integral guided policy search. In 2017 IEEE international conference on robotics and automation (ICRA), pages 3381–3388. IEEE, 2017.
  7. Ian Lenz, Ross A Knepper, and Ashutosh Saxena. Deepmpc: Learning deep latent features for model predictive control. In Robotics: Science and Systems. Rome, Italy, 2015.
  8. Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786–2793. IEEE, 2017.
  9. Grady Williams, Nolan Wagener, Brian Goldfain, Paul Drews, James M Rehg, Byron Boots, and Evangelos A Theodorou. Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1714–1721. IEEE, 2017.
  10. Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in neural information processing systems, pages 2746–2754, 2015.
  11. Ershad Banijamali, Rui Shu, Mohammad Ghavamzadeh, Hung Bui, and Ali Ghodsi. Robust locally-linear controllable embedding. arXiv preprint arXiv:1710.05373, 2017.
  12. Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 512–519. IEEE, 2016.
  13. Brian Ichter and Marco Pavone. Robot motion planning in learned latent spaces. IEEE Robotics and Automation Letters, 4(3):2407–2414, 2019.
  14. Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew Johnson, and Sergey Levine. Solar: deep structured representations for model-based reinforcement learning. 2018.
  15. Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551, 2018.
  16. Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.
  17. Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros. Large-scale study of curiosity-driven learning. arXiv preprint arXiv:1808.04355, 2018.
  18. Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. In Advances in Neural Information Processing Systems, pages 6118–6128, 2017.
  19. Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G Bellemare. Deepmdp: Learning continuous latent space models for representation learning. In International Conference on Machine Learning, pages 2170–2179, 2019.
  20. Reuven Y Rubinstein. Optimization of computer simulation models with rare events. European Journal of Operational Research, 99(1):89–112, 1997.
  21. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.
  22. Yuri Burda, Harrison Edwards, Deepak Pathak, Amos J. Storkey, Trevor Darrell, and Alexei A. Efros. Large-scale study of curiosity-driven learning. CoRR, abs/1808.04355, 2018.
  23. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. CoRR, abs/1606.01540, 2016.
  24. Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. Deepmind control suite. CoRR, abs/1801.00690, 2018.