The Dreaming Variational Autoencoder for Reinforcement Learning Environments

Per-Arne Andersen, Morten Goodwin, and Ole-Christoffer Granmo
Department of ICT, University of Agder, Grimstad, Norway
{per.andersen, morten.goodwin, ole.granmo}@uia.no
Abstract

Reinforcement learning has shown great potential in generalizing over raw sensory data using only a single neural network for value optimization. There are several challenges in the current state-of-the-art reinforcement learning algorithms that prevent them from converging toward the global optimum. It is likely that the solution to these problems lies in short- and long-term planning, exploration, and memory management for reinforcement learning algorithms. Games are often used to benchmark reinforcement learning algorithms as they provide a flexible, reproducible, and easy-to-control environment. Regardless, few games feature a state-space where results in exploration, memory, and planning are easily perceived. This paper presents The Dreaming Variational Autoencoder (DVAE), a neural network based generative modeling architecture for exploration in environments with sparse feedback. We further present Deep Maze, a novel and flexible maze engine that challenges DVAE in partial and fully-observable state-spaces, long-horizon tasks, and deterministic and stochastic problems. We show initial findings and encourage further work in reinforcement learning driven by generative exploration.

Keywords:
Deep Reinforcement Learning · Environment Modeling · Neural Networks · Variational Autoencoder · Markov Decision Processes · Exploration · Artificial Experience-Replay

1 Introduction

Reinforcement learning (RL) is a field of research that has quickly become one of the most promising branches of machine learning for solving artificial general intelligence [10, 12, 2, 16]. There have been several breakthroughs in reinforcement learning in recent years for relatively simple environments [14, 15, 6, 21], but no algorithms are capable of human performance in situations where complex policies must be learned. Due to this, a number of open research questions remain in reinforcement learning. It is possible that many of these problems can be resolved with algorithms that adequately account for planning, exploration, and memory at different time-horizons.

In current state-of-the-art RL algorithms, long-horizon RL tasks are difficult to master because there is as of yet no optimal exploration algorithm that is capable of proper state-space pruning. Exploration strategies such as ε-greedy are widely used in RL, but cannot find an adequate exploration/exploitation balance without significant hyperparameter tuning. Environment modeling is a promising exploration technique where the goal is for the model to imitate the behavior of the target environment. This limits the required interaction with the target environment, enabling nearly unlimited access to exploration without the cost of exhausting the target environment. In addition to environment modeling, a balance between exploration and exploitation must be accounted for, and it is therefore essential for the environment model to receive feedback from the RL agent.

By combining the ideas of variational autoencoders with deep RL agents, we find that it is possible for agents to learn optimal policies using only generated training data samples. The approach is presented as the dreaming variational autoencoder. We also show a new learning environment, Deep Maze, that aims to bring a vast set of challenges for reinforcement learning algorithms and is the environment used for testing the DVAE algorithm.

This paper is organized as follows. Section 3 briefly introduces the reader to preliminaries. Section 4 proposes The Dreaming Variational Autoencoder for environment modeling to improve exploration in RL. Section 5 introduces the Deep Maze learning environment for exploration, planning, and memory management research in reinforcement learning. Section 6 presents results for the Deep Line Wars environment and shows that RL agents can be trained to navigate the Deep Maze environment using only artificial training data.

2 Related Work

In machine learning, the goal is to create an algorithm that is capable of constructing an accurate model of some environment. There is, however, little research in game environment modeling at the scale we propose in this paper. The primary focus of recent RL research has been on the value and policy aspects of RL algorithms, while less attention has been put into perfecting environment modeling methods.

In 2016, the work in [3] proposed a method of deducing the Markov Decision Process (MDP) by introducing an adaptive exploration signal (pseudo-reward), obtained using a deep generative model. Their method computes the Jacobian of each state and uses it as the pseudo-reward when deep neural networks are used to learn the state generalization.

Xiao et al. proposed in [22] the use of generative adversarial networks (GAN) for model-based reinforcement learning. The goal was to utilize GANs for learning the dynamics of the environment over a short-horizon timespan and to combine this with the strength of far-horizon value iteration RL algorithms. The proposed GAN architecture produced near-authentic generated images, giving results comparable to [14].

In [9], Higgins et al. proposed DARLA, an architecture for modeling the environment using β-VAE [8]. The trained model was used to extract the optimal policy of the environment using algorithms such as DQN [15], A3C [13], and Episodic Control [4]. DARLA is, to the best of our knowledge, the first algorithm to properly introduce learning without access to the target environment during training.

Buesing et al. recently compared several methods of environment modeling, showing that it is far better to model the state-space than to utilize Monte-Carlo rollouts (RAR) [5]. The proposed architecture, state-space models (SSM), was significantly faster and produced acceptable results compared to auto-regressive (AR) methods.

Ha and Schmidhuber proposed in [7] World Models, a novel architecture for training RL algorithms using variational autoencoders. This paper showed that agents could successfully learn the environment dynamics and use this as an exploration technique requiring no interaction with the target domain.

3 Background

We base our work on the well-established theory of reinforcement learning and formulate the problem as an MDP [20]. An MDP is defined by the tuple (S, A, T, R) that models the environment. The state-space S represents all possible states, while the action-space A represents all available actions the agent can perform in the environment. T denotes the transition function T : S × A → S, which is a mapping from state s_t and action a_t to the future state s_{t+1}. After each performed action, the environment dispatches a reward signal, r_t = R(s_t, a_t).

We call a sequence of states and actions a trajectory, denoted τ = (s_0, a_0, s_1, a_1, …, s_T). The sequence is sampled through the use of a stochastic policy that predicts the optimal action in any state: a_t ∼ π_θ(a_t | s_t), where π is the policy and θ are its parameters. The primary goal of reinforcement learning is to reinforce good behavior. The algorithm should try to learn the policy that maximizes the total expected discounted reward, E[Σ_{t=0}^{T} γ^t r_t] [15].
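As a small concrete illustration of the discounted-return objective above, the following Python sketch accumulates rewards along one trajectory (the reward values and discount factor are arbitrary examples, not taken from the paper):

def discounted_return(rewards, gamma=0.99):
    """Compute the total discounted return G = sum_t gamma^t * r_t."""
    g = 0.0
    for t, r in enumerate(rewards):
        g += (gamma ** t) * r
    return g

# A sparse-reward trajectory with a single terminal reward:
print(discounted_return([0.0, 0.0, 0.0, 1.0], gamma=0.95))  # 0.857375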

4 The Dreaming Variational Autoencoder

The Dreaming Variational Autoencoder (DVAE) is an end-to-end solution for generating probable future states ŝ_{t+1} from an arbitrary state-space, using previously explored state-action pairs (s_t, a_t).

Figure 1: Illustration of the DVAE model. The model consumes state-action pairs (s_t, a_t), encoding the input into a latent-space z. The latent-space can then be decoded into a probable future state ŝ_{t+1}. The encoder maps the input to z, and the decoder maps z to the predicted state. DVAE can also use an LSTM to better learn longer sequences in continuous state-spaces.
1: Initialize replay memories D and D̂ to capacity N
2: Initialize policy π
3: function Run-Agent(π, D)
4:     for i = 0 to N_EPISODES do
5:         Observe starting state s_0
6:         while s_t not TERMINAL do
7:             a_t ← π(s_t)
8:             s_{t+1}, r_t ← env(a_t)
9:             Store experience (s_t, a_t, r_t, s_{t+1}) into replay buffer D
10:            t ← t + 1
11:        end while
12:    end for
13: end function
14: Initialize encoder q_φ
15: Initialize decoder p_θ
16: Initialize DVAE model ŝ_{t+1} = p_θ(q_φ(s_t, a_t))
17: function DVAE(D)
18:     for (s_t, a_t, r_t, s_{t+1}) in D do
19:         x_t ← (s_t, a_t) ▷ Expand replay buffer pair
20:         z_t ← q_φ(x_t) ▷ Encode into latent-space
21:         ŝ_{t+1} ← p_θ(z_t) ▷ Decode into probable future state
22:         Store experience (s_t, a_t, r_t, ŝ_{t+1}) into artificial replay buffer D̂
23:     end for
24:     return D̂
25: end function
Algorithm 1 The Dreaming Variational Autoencoder

The DVAE algorithm, seen in Figure 1, works as follows. First, the agent collects experiences for experience-replay in the Run-Agent function. At this stage, the agent explores the state-space guided by a Gaussian distributed policy. The agent acts, observes, and stores the observations into the experience-replay buffer D. After the agent reaches a terminal state, the DVAE algorithm encodes state-action pairs from the replay buffer into probable future states. These are stored in the replay buffer for artificial future-states, D̂.
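As a rough sketch of the model just described (not the authors' implementation: the dense encoder, layer sizes, and variable names are assumptions, and Section 6 notes that a convolutional encoder/decoder was used in practice), a VAE that consumes a state-action pair and reconstructs a probable successor state could look as follows in PyTorch:

import torch
import torch.nn as nn

class DVAE(nn.Module):
    """Sketch of a dreaming VAE: encodes (s_t, a_t), decodes a probable s_{t+1}."""
    def __init__(self, state_dim, action_dim, latent_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim), nn.Sigmoid())

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)   # concatenate flattened s_t and one-hot a_t
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)     # reparameterization trick
        next_state = self.decoder(z)             # probable future state ŝ_{t+1}
        return next_state, mu, logvar

The reparameterization trick keeps the sampling step differentiable, which is what allows the encoder and decoder to be trained end-to-end on stored transitions.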

Real States vs. Generated States (grid illustrations)
Table 1: The DVAE algorithm generating states with the learned model ŝ_{t+1} = p_θ(q_φ(s_t, a_t)) versus the real transition function T(s_t, a_t). First, a real state is collected from the replay-memory. DVAE can then produce new states from the current trajectory using the state-action pairs. θ and φ represent the trainable model parameters.

Table 1 illustrates how the algorithm can generate sequences of artificial trajectories using the encoder q_φ and the decoder p_θ. With state s_0 and action a_0 as input, the algorithm generates state ŝ_1, which in the table can be observed to be similar to the real state s_1. With the next input, (ŝ_1, a_1), the DVAE algorithm generates the next state ŝ_2, which again can be observed to be close to the real s_2. Note that this is without ever observing the real state s_1. Hence, the DVAE algorithm only needs to be initiated with a state, e.g. s_0, and a sequence of actions; it then generates (dreams) the next states ŝ_1, ŝ_2, …, ŝ_n.

The requirement is that the environment must be partially discovered so that the algorithm can learn to behave similarly to the target environment. To predict a trajectory of three timesteps, the algorithm nests its own predictions to generate the whole sequence: ŝ_{t+3} = DVAE(DVAE(DVAE(s_t, a_t), a_{t+1}), a_{t+2}). The algorithm does this well early on, but has difficulties with sequences longer than eight timesteps in continuous environments.
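The nested rollout above can be sketched as repeatedly feeding the model's own prediction back in as the next state input (dvae_step is a placeholder for one encode-decode pass of a trained model, not a function defined in the paper):

def dream_rollout(dvae_step, s0, actions):
    """Generate a trajectory of predicted states by nesting the model on its own output."""
    states = [s0]
    s = s0
    for a in actions:
        s = dvae_step(s, a)   # ŝ_{t+1} = p(q(s_t, a_t))
        states.append(s)
    return states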

5 Environments

The DVAE algorithm was tested on two game environments. The first is Deep Line Wars [1], a simplified real-time strategy game. The second is Deep Maze, a flexible environment that we introduce here, with a wide range of challenges suited for reinforcement learning research.

5.1 The Deep Maze Environment

(a) A Small, Fully Observable MDP
(b) A Large, Fully Observable MDP
(c) Partially Observable MDP having a vision distance of 3 tiles
(d) Partially Observable MDP having ray-traced vision
Figure 2: Overview of four distinct MDP scenarios using Deep Maze.

The Deep Maze is a flexible learning environment for controlled research in exploration, planning, and memory for reinforcement learning algorithms. Maze solving is a well-known problem and is used heavily throughout the RL literature [20], but it is often limited to small and fully-observable scenarios. The Deep Maze environment extends the maze problem to over 540 unique scenarios, including Partially-Observable Markov Decision Processes (POMDP). Figure 2 illustrates a small subset of the available environments for Deep Maze, ranging from small-scale MDPs to large-scale POMDPs. The Deep Maze further features custom game mechanics such as relocated exits and dynamically changing mazes.

The game engine is modularized and has an API that enables a flexible tool set for third-party scenarios. This extends the capabilities of Deep Maze to support nearly all possible scenario combinations in the realm of maze solving. The Deep Maze is open-source and publicly available at https://github.com/CAIR/deep-maze.
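To make the interaction loop concrete without guessing at the actual Deep Maze API, the toy grid-maze below sketches the kind of reset/step interface such an environment exposes (all names and mechanics here are illustrative only, not taken from the Deep Maze codebase):

import numpy as np

class ToyMaze:
    """A minimal grid-maze MDP (illustration only; not the Deep Maze engine)."""
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, size=5, goal=(4, 4)):
        self.size, self.goal = size, goal
        self.pos = (0, 0)

    def reset(self):
        self.pos = (0, 0)
        return self._state()

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        done = self.pos == self.goal
        return self._state(), (1.0 if done else 0.0), done

    def _state(self):
        # Raw state matrix: player marked with 1.0, goal with 0.5.
        s = np.zeros((self.size, self.size), dtype=np.float32)
        s[self.pos] = 1.0
        s[self.goal] = 0.5
        return s

An agent interacts with such an environment exactly as in Algorithm 1: observe a state, pick an action, and store the resulting transition in D.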

5.1.1 State Representation

RL agents depend on sensory input to evaluate and predict the best action at the current timestep. Preprocessing of the data is essential so that agents can extract features from the input. For this reason, Deep Maze has built-in state representations for RGB images, grayscale images, and raw state matrices.

5.1.2 Scenario Setup

The Deep Maze learning environment ships with four scenario modes: (1) Normal, (2) POMDP, (3) Limited POMDP, and (4) Timed Limited POMDP.

The first mode exposes a seed-based, randomly generated maze where the state-space is modeled as an MDP. The second mode narrows the state observation to a configurable area around the player. In addition to radius-based vision, the POMDP mode also features ray-traced vision that better mimics the sight of a physical agent. The third and fourth modes are intended for memory research, where the agent must find the goal within a limited number of time-steps. In addition, the agent is initially presented with the solution, which fades after a few time-steps; the objective is for the agent to remember the solution in order to find the goal. All scenario setups have a variable map size.
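For the radius-based POMDP mode, the observation can be thought of as a mask over the full state matrix centered on the player; the sketch below illustrates this idea (the exact distance metric and masking behavior used by Deep Maze are assumptions):

import numpy as np

def radius_observation(state, player_pos, radius=3):
    """Zero out all tiles farther than `radius` (Chebyshev distance) from the player."""
    obs = np.zeros_like(state)
    r, c = player_pos
    r0, r1 = max(r - radius, 0), min(r + radius + 1, state.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, state.shape[1])
    obs[r0:r1, c0:c1] = state[r0:r1, c0:c1]
    return obs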

5.2 The Deep Line Wars Environment

The Deep Line Wars environment was first introduced in [1]. Deep Line Wars is a real-time strategy environment that makes an extensive state-space reduction to enable swift research in reinforcement learning for RTS games.

Figure 3: The Graphical User Interface of the Deep Line Wars environment.

The game objective of Deep Line Wars is to invade the enemy player with mercenary units until all health points are depleted (see Figure 3). For every friendly unit that enters the far edge of the enemy base, the enemy health pool is reduced by one. When a player purchases a mercenary unit, it spawns at a random location inside the edge area of the buyer's base. Mercenary units automatically move towards the enemy base. To protect the base, players can construct towers that shoot projectiles at the opponent's mercenaries. When a mercenary dies, a fair percentage of its gold value is awarded to the opponent. When a player sends a unit, the income is increased by a percentage of the unit's gold value. As a part of the income system, players gain gold at fixed intervals.

6 Experiments

6.1 Deep Maze Environment Modeling using DVAE

The DVAE algorithm must be able to generalize over many similar states to model a vast state-space. DVAE aims to learn the transition function, bringing the state from s_t to s_{t+1}. We use the Deep Maze environment because it provides simple rules with a controllable state-space complexity, and we can omit the reward signal for some scenarios.

We trained the DVAE model on two No-Wall Deep Maze scenarios of different sizes. For the encoder and decoder, we used the same convolutional architecture as proposed by [17], training for 5000 epochs on one scenario and 1000 epochs on the other. For the encoding of actions and states, we concatenated the flattened state-space and action-space, followed by a fully-connected layer with ReLU activation before calculating the latent-space. We used the Adam optimizer [11] with a learning rate of 1e-08 to update the parameters.
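A condensed training loop in the spirit of the setup above is sketched below (the convolutional layers of [17] are replaced by the dense DVAE sketch from Section 4, and the loss weighting and buffer format are assumptions rather than the authors' exact configuration):

import torch
import torch.nn.functional as F

def train_dvae(model, replay_buffer, epochs=1000, lr=1e-8):
    """Fit the model on (s_t, a_t, s_{t+1}) triples drawn from the agent's replay buffer."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for s, a, s_next in replay_buffer:        # assumes batched tensors of matching size
            pred, mu, logvar = model(s, a)
            recon = F.binary_cross_entropy(pred, s_next, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            loss = recon + kld                     # standard VAE objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model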

(a) A Small, Fully Observable MDP
(b) A Large, Fully Observable MDP
Figure 4: The training loss for DVAE in the two No-Wall Deep Maze scenarios. The experiments ran for a total of 1000 and 5000 episodes, respectively. In one scenario the algorithm trains on only 50% of the state-space used to model the environment, while in the other the whole state-space is available for training.

Figure 4 illustrates the loss of the DVAE algorithm in the No-Wall Deep Maze scenarios. In one scenario, DVAE is trained on only 50% of the state-space, which results in noticeable graphic artifacts in the prediction of future states (see Figure 5). Because the other environment is fully explored, we see in Figure 6 that the artifacts are greatly reduced.

Figure 5: In this scenario, only 50% of the environment is explored, leaving artifacts on states where the model is uncertain of the transition function. In more extensive examples, the player disappears, teleports, or gets stuck in unexplored areas.
Algorithm   Avg. Performance (smaller maze)   Avg. Performance (larger maze)
DQN-D̂       @ 9314                            N/A
TRPO-D̂      @ 5320                            @ 7401
PPO-D̂       @ 3151                            @ 7195
DQN-D       @ 4314                            @ 8241
TRPO-D      @ 3320                            @ 4120
PPO-D       @ 2453                            @ 2904
Table 2: Results in the two Deep Maze environments, comparing DQN [15], TRPO [18], and PPO [19] trained on the real replay buffer D and the DVAE-generated buffer D̂. The optimal path yields a performance of 100%, while no solution yields 0%. Each algorithm ran 10000 episodes for both map sizes. The number following the @ sign denotes the episode at which the algorithm converged.
Figure 6: Results of Deep Maze modeling using the DVAE algorithm. To simplify the environment, no reward signal is received per iteration. The left caption describes the current state s_t, while the right caption is the action a_t performed to compute ŝ_{t+1}.

6.2 Using D̂ for RL Agents in Deep Maze

The goal of this experiment is to observe the performance of RL agents using the generated experience-replay D̂ from Figure 1 in Deep Maze environments of two sizes. In Table 2, we compare the performance of DQN [14], TRPO [18], and PPO [19] using the DVAE-generated D̂ to tune the parameters.
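Conceptually, the comparison in Table 2 only changes which buffer supplies transitions for parameter updates; a schematic sketch follows, where agent.update stands in for the DQN, TRPO, or PPO update rule and is not a real library call:

import random

def train_on_buffer(agent, buffer, updates=10000, batch_size=32):
    """Update an RL agent from stored transitions; `buffer` can be the real replay
    memory D or the DVAE-generated artificial memory D̂."""
    for _ in range(updates):
        batch = random.sample(buffer, batch_size)   # transitions (s, a, r, s')
        agent.update(batch)                          # placeholder for the algorithm's update step
    return agent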

Figure 7: A typical Deep Maze, where the lower-right square indicates the goal state, the dotted line indicates the optimal path, and the remaining square represents the player's current position in the state-space. The controlling agents are DQN, TRPO, and PPO (from left to right).

Figure 7 illustrates three maze variations where the agent has learned the optimal path. We see that the best performing algorithm, PPO [19], beats DQN and TRPO using either D or D̂. The DQN-D̂ agent did not converge in one of the environments, and it is likely that value-based algorithms struggle with the graphic artifacts generated by the DVAE algorithm. These artifacts significantly increase the state-space, so direct-policy algorithms could perform better.

6.3 Deep Line Wars Environment Modeling using DVAE

The DVAE algorithm also works in more complex environments, such as the Deep Line Wars game environment [1]. Here, we expand the DVAE algorithm with an LSTM to improve the interpretation of animations, as illustrated in Figure 1.
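One way to realize such an LSTM extension (an assumption about the architecture, not the authors' exact design) is to pass the per-timestep latent codes through a recurrent layer before decoding, so that motion across frames informs the predicted state:

import torch
import torch.nn as nn

class RecurrentLatent(nn.Module):
    """Sketch: an LSTM over per-timestep latent codes z_1..z_T before decoding."""
    def __init__(self, latent_dim=32, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z_seq):                 # z_seq: (batch, T, latent_dim)
        h_seq, _ = self.lstm(z_seq)
        return self.proj(h_seq)               # refined latents fed to the decoder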

Figure 8: The DVAE algorithm applied to the Deep Line Wars environment. Each epoch illustrates the quality of the generated states in the game, where the left image is the real state s_t and the right image is the generated state ŝ_t.

Figure 8 illustrates the state quality during training of DVAE for a total of 6000 episodes (epochs). Both players draw actions from a Gaussian-distributed policy. After only 50 epochs, the algorithm understands that player units can be located in any tile, and at 1000 epochs we observe that the algorithm makes a more accurate statement about the probability of unit locations (i.e., some units have increased intensity). At the end of training, the DVAE algorithm is to some degree capable of determining both tower and unit locations at any given time-step during a game episode.

7 Conclusion and Future Work

This paper introduces the Dreaming Variational Autoencoder (DVAE) as a neural network based generative modeling architecture to enable exploration in environments with sparse feedback. The DVAE shows promising results in modeling simple non-continuous environments. For continuous environments, such as Deep Line Wars, DVAE performs better using a recurrent neural network architecture (LSTM) while it is sufficient to use only a sequential feed-forward architecture to model non-continuous environments such as Chess, Go, and Deep Maze.

There are, however, several fundamental issues that limit DVAE from fully modeling environments. In some situations, exploration may be a costly act that makes it impossible to explore all parts of the environment in its entirety. DVAE cannot accurately predict the outcome of unexplored areas of the state-space, making the prediction blurry or false.

Reinforcement learning has many unresolved problems, and the hope is that the Deep Maze learning environment can be a useful tool for future research. For future work, we plan to extend the model to also capture the reward function using inverse reinforcement learning. DVAE is an ongoing research effort, and the goal is for reinforcement learning algorithms to utilize this form of dreaming to reduce the need for exploration in real environments.

References

  • [1] Andersen, P.A., Goodwin, M., Granmo, O.C.: Towards a deep reinforcement learning approach for tower line wars. In: Bramer, M., Petridis, M. (eds.) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). vol. 10630 LNAI, pp. 101–114 (2017)
  • [2] Arulkumaran, K., Deisenroth, M.P., Brundage, M., Bharath, A.A.: Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine 34(6), 26–38 (2017)
  • [3] Bangaru, S.P., Suhas, J., Ravindran, B.: Exploration for Multi-task Reinforcement Learning with Deep Generative Models. arxiv preprint arXiv:1611.09894 (nov 2016)
  • [4] Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J.Z., Rae, J., Wierstra, D., Hassabis, D.: Model-Free Episodic Control. arxiv preprint arXiv:1606.04460 (jun 2016)
  • [5] Buesing, L., Weber, T., Racaniere, S., Eslami, S.M.A., Rezende, D., Reichert, D.P., Viola, F., Besse, F., Gregor, K., Hassabis, D., Wierstra, D.: Learning and Querying Fast Generative Models for Reinforcement Learning. arxiv preprint arXiv:1802.03006 (feb 2018)
  • [6] Chen, K.: Deep Reinforcement Learning for Flappy Bird. cs229.stanford.edu p. 6 (2015)
  • [7] Ha, D., Schmidhuber, J.: World Models. arxiv preprint arXiv:1803.10122 (mar 2018)
  • [8] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., Lerchner, A.: beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. International Conference on Learning Representations (nov 2016)
  • [9] Higgins, I., Pal, A., Rusu, A., Matthey, L., Burgess, C., Pritzel, A., Botvinick, M., Blundell, C., Lerchner, A.: DARLA: Improving Zero-Shot Transfer in Reinforcement Learning. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 1480–1490. PMLR, International Convention Centre, Sydney, Australia (2017)
  • [10] Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research (apr 1996)
  • [11] Kingma, D.P., Ba, J.L.: Adam: A Method for Stochastic Optimization. Proceedings, International Conference on Learning Representations 2015 (2015)
  • [12] Li, Y.: Deep Reinforcement Learning: An Overview. arxiv preprint arXiv:1701.07274 (jan 2017)
  • [13] Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous Methods for Deep Reinforcement Learning. In: Balcan, M.F., Weinberger, K.Q. (eds.) Proceedings of The 33rd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 48, pp. 1928–1937. PMLR, New York, New York, USA (2016)
  • [14] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with Deep Reinforcement Learning. Neural Information Processing Systems (dec 2013)
  • [15] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (feb 2015)
  • [16] Mousavi, S.S., Schukat, M., Howley, E.: Deep Reinforcement Learning: An Overview. In: Bi, Y., Kapoor, S., Bhatia, R. (eds.) Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016. pp. 426–440. Springer International Publishing, Cham (2018)
  • [17] Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A., Carin, L.: Variational Autoencoder for Deep Learning of Images, Labels and Captions. In: Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems. pp. 2352–2360. Curran Associates, Inc. (2016)
  • [18] Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust Region Policy Optimization. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 1889–1897. PMLR, Lille, France (2015)
  • [19] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal Policy Optimization Algorithms. arxiv preprint arXiv:1707.06347 (jul 2017)
  • [20] Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, vol. 9. MIT Press (1998)
  • [21] Van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., Tsang, J.: Hybrid Reward Architecture for Reinforcement Learning. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems 30, pp. 5392–5402. Curran Associates, Inc. (2017)
  • [22] Xiao, T., Kesineni, G.: Generative Adversarial Networks for Model Based Reinforcement Learning with Tree Search. Tech. rep., University of California, Berkeley (2016)