Have you experimented with alternative algorithms that are known to be more sample-efficient, e.g. PPO? Also, how much larger is the model than the baseline (in number of parameters)?

Deep Variational Reinforcement Learning for POMDPs

Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is a great need for reinforcement learning methods that can tackle such problems given only a stream of incomplete and noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari we show that our method outperforms previous approaches relying on recurrent neural networks to encode the past.
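To make the joint-training claim concrete, here is a minimal sketch of how an n-step ELBO estimate can be combined with an RL loss into a single objective. The function names, the particle-style log-weight estimator, and the `elbo_weight` coefficient are illustrative assumptions, not the paper's actual implementation:

```python
import math

def n_step_elbo(log_weights_per_step):
    # Hypothetical estimator: each timestep contributes the log of the
    # mean importance weight over K samples (log-sum-exp for stability).
    total = 0.0
    for lw in log_weights_per_step:
        m = max(lw)
        total += m + math.log(sum(math.exp(x - m) for x in lw) / len(lw))
    return total

def joint_loss(policy_loss, elbo, elbo_weight=1.0):
    # Joint objective: minimize the RL loss while maximizing the ELBO,
    # so both the policy and the generative model are trained together.
    return policy_loss - elbo_weight * elbo
```

With uniform weights (all log-weights zero) the ELBO estimate is zero and the joint loss reduces to the plain RL loss; in practice both terms would be backpropagated through a shared latent state.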

