
Procedural Level Generation
Improves Generality of Deep Reinforcement Learning

Niels Justesen
IT University of Copenhagen
Copenhagen, Denmark
Ruben Rodriguez Torrado
New York University
Brooklyn, USA
Philip Bontrager
New York University
Brooklyn, USA
Ahmed Khalifa
New York University
Brooklyn, USA
Julian Togelius
New York University
Brooklyn, USA
Sebastian Risi
IT University of Copenhagen
Copenhagen, Denmark

Over the last few years, deep reinforcement learning (RL) has shown impressive results in a variety of domains, learning directly from high-dimensional sensory streams. However, when networks are trained in a fixed environment, such as a single level in a video game, the learned policy will usually overfit and fail to generalize to new levels. When RL agents overfit, even slight modifications to the environment can result in poor agent performance. In this paper, we present an approach to prevent overfitting by generating more general agent controllers, through training the agent on a completely new and procedurally generated level each episode. The level generator generates levels whose difficulty slowly increases in response to the observed performance of the agent. Our results show that this approach can learn policies that generalize better to other procedurally generated levels, compared to policies trained on fixed levels.




Preprint. Work in progress.

1 Introduction

Deep reinforcement learning has shown remarkable results in a variety of domains, in particular learning policies for video games [12]. However, there is increasing evidence that suggests that agents easily overfit to their particular training environment, resulting in policies that do not generalize well to related problems or even different instances of the same problem. Even small game modifications can often lead to dramatically reduced performance, leading to the suspicion that these networks learn reactions to particular situations rather than general strategies [14, 31].

In this paper, we introduce a novel approach to training more general agents through deep RL. In our approach, the agent is evaluated on a completely new level every time a new episode begins. We are building on insights from the field of procedural content generation (PCG), where methods have been developed to generate diverse sets of levels (and other content) for games [27]. Importantly, methods exist for generating levels with varying difficulty. The method proposed here includes an adaptive difficulty scaling which facilitates a smooth learning curve; when a particular level is too difficult for the agent to solve, the level difficulty is decreased, whereas difficulty is increased if the agent is able to solve the level.

The results in this paper show that agents trained through our PCG approach are able to generalize better to other procedurally generated levels than agents trained on a fixed set of human-made levels. Additionally, increasing the difficulty to match the agent’s performance during training allows general behaviors to be learned in some games that are otherwise too difficult to learn.

2 Related Work

The idea of training agents on a set of progressively harder tasks is an old one and has been rediscovered several times within the wider machine learning context. Within evolutionary computation, this practice is known as incremental evolution [10]. For example, it has been shown that while evolving neural networks to drive a simulated car around a particular race track works well, the resulting network has learned only to drive that particular track; but by gradually adding more, and more difficult, levels to the fitness function, a network can be trained to drive any track well, even hard tracks that could not be learned from scratch [29]. Essentially the same idea has later been independently invented as curriculum learning [2]. Similar ideas have been formulated within a coevolutionary framework as well [4].

Within supervised learning, it is generally accepted that accuracy (and other metrics) are reported on a testing set that is separate from the training set. In contrast, in reinforcement learning research it is common to report results on the very same task a model was trained on.

Several learning-focused game-based AI competitions, such as the Visual Doom [15] AI Competition, the General Video Game AI Learning Track [16, 22] and the OpenAI Retro Contest evaluate the submitted controllers on levels that the participants did not have access to. However, none of them are based on procedurally generated levels. (The only game-based AI competition to prominently feature procedural level generation, the Mario AI Competition, did not have provisions for learning agents [30].)

Randomization of objects in simulated environments has shown to improve generality for robotic grasping to such a degree that the robotic arm could generalize to realistic settings as well [28]. Low-fidelity texture randomization during training in a simulated environment has also allowed for autonomous indoor flight in the real world [24]. Several approaches exist that manipulate the reward function instead of the structure of the environment to ease learning and ultimately improve generality, such as Hindsight Experience Replay [1] and Rarity of Events [13]. There are some suggestions for how to select training tasks in a more informed way than simply selecting them randomly, for example the POWERPLAY [26] algorithm and Teacher-Student Curriculum Learning [17].

A protocol for training reinforcement learning algorithms and evaluating generalization and overfitting, using large separate training and test sets, was proposed in [31]. Their experiments show that training on thousands of levels in a simple video game enables the agent to generalize to unseen levels. Our (contemporaneous) work differs by implementing an adaptive difficulty progression in the level generator, and by testing on several somewhat more complex games.

There has also been some work where a single network has been trained to play multiple games simultaneously, such as the IMPALA system [9]. In that work, the same games were used for training and testing, and it is in principle possible that the network simply learned state-to-action mappings for all of these games.

3 General Video Game AI Framework

To allow the procedural generation of different training levels, we are building on the General Video Game AI framework (GVG-AI), which is a flexible framework designed to facilitate the advance of general AI through video game playing [21]. There are currently over 160 2D games written for GVG-AI and the framework uses a high-level video game description language (VGDL) [25] to allow for rapid development of new games.

VGDL is a declarative video game language originally proposed by Ebner et al. [8]. The game definition specifies the types of objects and non-player characters that exist in the game. It also specifies how they interact and the types of rewards and effects they can have on the player. The level is then designed as an ASCII grid where each character represents an object in the game grid. This allows for quick development of games and levels making the framework ideal for different research use cases [20].
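Because a VGDL level is just a character grid, the mapping from grid to game objects can be sketched in a few lines. The grid and the character legend below are purely illustrative; they are not GVG-AI's actual character mapping.

```python
# Sketch: mapping a VGDL-style ASCII level grid to object positions.
# The legend ('A' = avatar, '+' = key, 'g' = door, 'w' = wall, '.' = floor)
# is a hypothetical example, not GVG-AI's real character assignment.
LEVEL = [
    "wwwwwww",
    "wA...+w",
    "w.www.w",
    "w....gw",
    "wwwwwww",
]

def parse_level(rows):
    """Return a dict mapping each non-floor character to its (row, col) cells."""
    objects = {}
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            if ch != ".":
                objects.setdefault(ch, []).append((r, c))
    return objects

objs = parse_level(LEVEL)
```

A game engine would then instantiate the object type registered for each character at the corresponding grid cell.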

The GVGAI framework has recently been integrated with the OpenAI Gym environment [22]. OpenAI Gym provides a unified RL interface across several different environments [5], as well as a set of baseline RL implementations to help standardize the benchmarking of algorithms [7]. While GVG-AI originally provides a forward model that allows agents to use search algorithms, the GVG-AI Gym only provides the pixels of each frame, the incremental reward, and whether the game is won or lost.

3.1 Parameterized Level Generator

Figure 1: Examples of procedurally generated levels for Zelda (row 1), Frogs (row 2), and Boulderdash (row 3) with various difficulties between 0 and 1. Row 4 shows human-designed levels for each game.

For this paper, constructive level generators were built for three games in GVG-AI: Boulderdash, Frogs, and Zelda. These three games were selected because most GVG-AI tree-search agents perform poorly on them, so they can be considered hard [3]. Constructive level generators [27] are widely used in game development because they are relatively fast and easy to debug. They incorporate game knowledge during the generation process to ensure that the output level is directly playable without testing. Our generators were designed after analyzing the core components of the human-designed levels for each game. All generators are parameterized by maximum width, maximum height, and a difficulty parameter. The games and their corresponding level generators are described in more detail next.

Boulderdash Level Generator: This game is a GVG-AI port of the “Boulder Dash” (First Star Software, 1984) game. Here the player tries to collect at least ten gems and then go to the exit door while avoiding getting killed by falling boulders and moving enemies. The level generation in Boulderdash works as follows: (1) Generate the layout of the map using Cellular Automata [11]. (2) Add the player to the map at a random location. (3) Add the exit door at a random location directly proportional to the distance from the player (locations that are further away from the player have a higher probability to be selected). (4) Add at least ten gems to the map at random locations. (5) Add enemies to the map at random locations in a similar manner to the third step.
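Step (1) above, the cellular-automaton layout [11], can be sketched as follows. The fill probability, number of smoothing steps, and the "at least five wall neighbours" threshold are illustrative defaults of the common 4-5 CA rule, not necessarily the parameters of the paper's generator.

```python
import random

def cave_layout(width, height, fill_prob=0.45, steps=4, seed=0):
    """Cave-like wall grid via a standard '4-5' cellular automaton.
    fill_prob, steps, and the threshold are illustrative choices."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill_prob for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        nxt = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                walls = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == 0 and dx == 0:
                            continue
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx < width:
                            walls += grid[ny][nx]
                        else:
                            walls += 1  # out-of-bounds counts as a wall
                nxt[y][x] = walls >= 5  # wall if at least 5 wall neighbours
        grid = nxt
    return grid
```

Steps (2)-(5) would then scatter the player, door, gems, and enemies over the resulting floor cells.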

Frogs Level Generator: Frogs is a GVG-AI port of the “Frogger” (Konami, 1981) game, but with more restrictive scoring. In Frogs, the player tries to move upwards towards the goal without getting killed, either by drowning in water or by getting run over by a car. The level generation in Frogs follows these steps: (1) Add the player at the lowest row in the level. (2) Add the goal at the highest row in the level. (3) Assign the intermediate rows either as roads, water, or forest. (4) Add cars to the roads and wood logs to the water.
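The row-assignment steps can be sketched as below. The mapping from the difficulty parameter to the fraction of hazardous rows is an assumption, and cars and logs (step 4) are omitted for brevity.

```python
import random

def frogs_level(width, height, difficulty, seed=0):
    """Sketch of the Frogs generator's steps (1)-(3): goal on top, player
    at the bottom, hazardous or safe intermediate rows in between."""
    rng = random.Random(seed)
    rows = [["goal"] * width]                # step (2): goal on the highest row
    for _ in range(height - 2):              # step (3): intermediate rows
        if rng.random() < difficulty:        # more hazards at higher difficulty
            rows.append([rng.choice(["road", "water"])] * width)
        else:
            rows.append(["forest"] * width)
    bottom = ["grass"] * width               # step (1): player on the lowest row
    bottom[width // 2] = "player"
    rows.append(bottom)
    return rows
```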

Zelda Level Generator: Zelda is a GVG-AI port of the dungeon system of “The Legend of Zelda” (Nintendo, 1986) game. In Zelda, the player needs to grab a key and go toward the exit without getting killed by enemies. The player can use their sword to kill enemies for a higher score. The level generation in Zelda works as follows: (1) Generate the map layout as a maze using Prim’s Algorithm [6]. (2) Remove some of the solid walls in the maze at random locations. (3) Add the player to a random empty tile. (4) Add the key and exit door at random locations far from the player. (5) Add enemies in the maze at random locations far away from the player.
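The maze layout of step (1) can be generated with randomized Prim's algorithm, sketched below; the paper's generator additionally removes random walls and places the player, key, door, and enemies afterwards.

```python
import random

def prim_maze(width, height, seed=0):
    """Maze layout via randomized Prim's algorithm; True marks a wall.
    Dimensions should be odd so the maze lattice fits the grid."""
    rng = random.Random(seed)
    maze = [[True] * width for _ in range(height)]
    maze[1][1] = False
    # Frontier entries are (carved_y, carved_x, candidate_y, candidate_x),
    # where the candidate cell lies two steps away from a carved cell.
    frontier = [(1, 1, 1 + dy, 1 + dx)
                for dy, dx in ((2, 0), (-2, 0), (0, 2), (0, -2))]
    while frontier:
        cy, cx, ny, nx = frontier.pop(rng.randrange(len(frontier)))
        if 0 < ny < height - 1 and 0 < nx < width - 1 and maze[ny][nx]:
            maze[(cy + ny) // 2][(cx + nx) // 2] = False  # open the wall between
            maze[ny][nx] = False                           # carve the new cell
            frontier += [(ny, nx, ny + dy, nx + dx)
                         for dy, dx in ((2, 0), (-2, 0), (0, 2), (0, -2))]
    return maze
```

Because Prim's algorithm builds a spanning tree over the cell lattice, every carved cell is reachable from the player's starting position.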

Table 1: The average number of possible generated levels with difficulty values from .1 to 1.0 for Boulderdash, Frogs, and Zelda.

We can control the difficulty of the three generators using a difficulty parameter in [0, 1] passed to the generator during the generation process. Figure 1 shows the effect of the difficulty parameter in Zelda, Frogs, and Boulderdash. Increasing the difficulty has three effects. First, the active level size increases, i.e., the area of the level that the player can move through. Second, the number of harmful objects (objects that kill the player) increases. Third, the layout of the level becomes more complex to navigate. Table 1 shows the average number of possible generated levels for different difficulty values. Difficult levels have more possible configurations as they typically contain more elements.

4 Procedural Level Generation for Deep RL

In a supervised learning setting, generality is obtained by training a model on a very large dataset, typically with thousands of examples. Similarly, the hypothesis in this paper is that RL algorithms should achieve generality if many environments are used during training, rather than just one. This paper presents a novel RL framework wherein a new level is generated whenever a new episode begins.

Since an agent does not see the same levels during training, it must learn general strategies in the game to improve. Learning a policy this way is more difficult than learning one for just a single level and it may be infeasible if the game rules and/or generated levels are hard. To ease the learning, we allow the agent to control the difficulty of the generated levels. In this way, the generator will initially create easy levels and then progressively increase the difficulty as the agent begins to play well.

Our proposed method, Progressive PCG (PPCG), uses a level generator with a difficulty setting between 0 and 1. Figure 2 shows the training setup. We implemented a very simple control mechanism: the agent initially requests levels of difficulty 0 (the minimum value), and the difficulty is then increased by a small fixed amount for a win and decreased by the same amount for a loss. We compare this approach to a simpler method where the agents play on generated levels with a constant difficulty. We refer to this approach as PCG N, where N is the fixed difficulty level.
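The difficulty update can be sketched as a small controller queried at the end of each episode. The step size of 0.01 below is a placeholder value, not necessarily the increment used in the experiments.

```python
class DifficultyController:
    """Sketch of the PPCG difficulty update: bump the difficulty on a win,
    lower it on a loss, clamped to [0, 1]. The step size is a placeholder."""

    def __init__(self, step=0.01):
        self.step = step
        self.difficulty = 0.0          # start with the easiest levels

    def update(self, won):
        delta = self.step if won else -self.step
        self.difficulty = min(1.0, max(0.0, self.difficulty + delta))
        return self.difficulty

ctrl = DifficultyController()
ctrl.update(True)    # a win raises the difficulty of the next level
ctrl.update(False)   # a loss lowers it again
```

At each episode boundary, the current `difficulty` value would be passed to the level generator to produce the next training level.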

Figure 2: Our proposed PCG-based RL framework, in which a level generator creates a new level for every training episode. We use the video game notion of a level as the structure of the environment, while the game rules remain intact. Importantly, based on the agent’s performance, a level with a specific difficulty can be requested, which enables a gradual increase in difficulty and steady learning progress.

5 Experiments

To evaluate our approach, we employ the reinforcement learning algorithm Advantage Actor-Critic (A2C) [18]. We use the A2C implementation from the OpenAI Baselines and run it on the GVG-AI Gym framework. The neural networks in this paper have the same architecture as the original DQN network from [19], with three convolutional layers and a single fully-connected layer. A2C uses 12 parallel workers, a fixed step size, no frame skipping as in [22], and a constant learning rate with the RMSProp optimizer [23]. The code for our experiments is available on GitHub.
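The DQN architecture from [19] uses three convolutional layers (32 8×8 filters with stride 4, then 64 4×4 with stride 2, then 64 3×3 with stride 1) on 84×84 inputs with 4 stacked frames; whether the GVG-AI frames are preprocessed identically is an assumption here. The size of the feature vector fed to the fully-connected layer can be checked as follows.

```python
def conv_out(size, kernel, stride):
    """Output spatial size of a valid (no-padding) convolution."""
    return (size - kernel) // stride + 1

# Standard DQN conv stack from Mnih et al. [19]: (filters, kernel, stride).
layers = [(32, 8, 4), (64, 4, 2), (64, 3, 1)]

size, channels = 84, 4   # 84x84 input with 4 stacked frames (DQN defaults;
                         # the exact GVG-AI preprocessing is an assumption)
for channels, kernel, stride in layers:
    size = conv_out(size, kernel, stride)

flat = size * size * channels   # features fed to the fully-connected layer
```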

We compare a total of four different approaches:

  1. Lvl X: Agents trained on a single human-designed level from GVG-AI. X denotes which of the five GVG-AI levels (numbered 0-4) was used.

  2. Lvl 0-3: Agents trained on several human-designed levels (level 0, 1, 2, and 3) which are sampled randomly during training.

  3. PCG X: PCG generated levels with a constant difficulty X.

  4. PPCG: Progressive PCG that generates levels that increase in difficulty depending on the performance of the agent.

Each training approach is repeated four times. The trained models are evaluated on eleven sets of ten pre-generated levels, one set per difficulty value from 0 to 1. Additionally, we evaluate them on the five human-designed levels. The training plots and the test results in Table 2 are averages across the four trained models, each tested 30 times on each test setup (thus a total of 120 runs per test). All four training approaches were tested on Zelda, while only PCG and PPCG were tested on Frogs and Boulderdash. The trained agents are also compared to a random policy.

Zelda        .0    .1    .2    .3    .4    .5    .6    .7    .8    .9   1.0   Mean
Random      2.00  2.00  1.82  1.12  0.86  0.60  0.64  0.35  0.24  0.12 -0.01  0.77
Level 0     1.67  1.60  1.30  0.44  0.02  0.35 -0.33 -0.33 -0.48 -0.55 -0.61  0.28
Level 4     0.65  0.99 -0.10 -0.57 -0.83 -0.80 -0.76 -0.74 -0.41 -0.77 -0.54 -0.35
Level 0-3   1.66  1.48  1.22  0.64  0.68  0.69  0.78  0.64  0.27  0.12 -0.15  0.73
PCG .3*     2.00  2.00  2.31  2.82  3.61  3.69  2.54  2.21  1.67  0.58  1.95  2.31
PCG .5      2.00  2.00  2.30  2.62  3.60  3.49  3.87  2.15  2.23  1.63  2.34  2.57
PCG .7      2.00  2.00  2.34  2.33  2.45  2.68  3.15  2.59  3.03  2.71  1.73  2.46
PCG 1       2.00  2.00  1.78  0.65  0.82  0.11 -0.21 -0.52 -0.25 -0.23  0.02  0.56
PPCG        2.00  2.00  2.23  2.36  2.75  2.65  3.50  3.35  3.45  3.61  3.06  2.81

Frogs        .0    .1    .2    .3    .4    .5    .6    .7    .8    .9   1.0   Mean
Random      1.00  0.25  0.10  0.00  0.00  0.02  0.00  0.00  0.00  0.00 -0.01  0.03
PCG 1       1.00  0.08  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.10
PPCG        1.00  0.96  0.88  0.81  0.85  0.82  0.55  0.30  0.61  0.48  0.30  0.69

Boulderdash  .0    .1    .2    .3    .4    .5    .6    .7    .8    .9   1.0   Mean
Random     19.69 17.62  8.72  8.91  6.42  7.17  3.88  4.17  5.03  4.54  4.70  7.14
PCG 1      19.78 19.30 14.49 15.89 14.04 12.76  9.90 12.54  9.05  9.67  8.34 13.25
PPCG       19.98 19.66 15.78 15.99 13.45 11.05  8.08  9.31  5.82  5.10  5.48 11.79
Table 2: Test results of A2C with several training settings: a single human-designed level (Level 0 and Level 4), several human-designed levels (Level 0-3), PCG, and PPCG. Random refers to the results of a random agent. PCG .3/.5/.7/1 refers to training on procedurally generated levels with a fixed difficulty of .3/.5/.7/1, while Progressive PCG (PPCG) initially trains on easy levels and progressively adapts the difficulty of the levels to match the agent’s performance. Each training setting was run four times and tested on eleven sets of ten pre-generated levels, each set with a different difficulty level between 0 and 1. The reported scores for each test set are the average across 30 playthroughs for each of the four trained models. The mean score is the average across the eleven test sets. The best result for each test set is highlighted in bold. *This training setup was only repeated three times due to technical issues.
Zelda      Level 0  Level 1  Level 2  Level 3  Level 4
Random        0.08     0.03    -0.09     0.04    -0.03
Level 0       6.65    -0.29    -0.09    -0.18    -0.34
Level 4       0.09     2.54    -0.21    -0.21     5.89
Level 0-3     6.16     6.47     5.67     7.32     1.36
PCG .3        0.62    -0.18     0.15    -0.15    -0.59
PCG .5        0.08     0.14     0.52    -0.15    -0.53
PCG .7        0.43     0.52     1.00     0.38    -0.09
PCG 1        -0.13    -0.26    -0.45    -0.33    -0.25
PPCG          0.69     0.37     1.14     0.87     1.01

Frogs      Level 0  Level 1  Level 2  Level 3  Level 4
Random        0.0      0.0      0.0      0.0      0.0
PCG 1         0.0      0.0      0.0      0.0      0.0
PPCG          0.0      0.0      0.0      0.0      0.09

Boulderdash Level 0 Level 1  Level 2  Level 3  Level 4
Random        1.79     1.74     2.28     2.02     2.0
PCG 1         5.37     9.01     7.09     4.70     7.80
PPCG          3.09     0.82     0.25     0.06     3.43
Table 3: Test results of A2C on the five human-designed levels with several training settings: a single human-designed level (Level 0 and Level 4), several human-designed levels (Level 0-3), PCG, and PPCG. Random refers to the results of a random agent. PCG .3/.5/.7/1 refers to training on procedurally generated levels with a fixed difficulty of .3/.5/.7/1, while Progressive PCG (PPCG) initially trains on easy levels and progressively adapts the difficulty to match the agent’s performance. Each training setting was trained and tested four times on each human-designed level. The reported scores for each level are the average across 30 playthroughs for each of the four trained models. The best result for each test level is highlighted in bold. Scores in red represent the performance of an agent playing on the level it was trained on.

6 Results

6.1 Training on a few Human-designed Levels

When the policies are trained on just one level in Zelda (Lvl 0 and Lvl 4 in Table 2 and Table 3), they reach high scores on the training level but perform poorly on all test levels. These policies are clearly prone to memorization and cannot adapt to new levels. The agents even perform worse than random on the simplest generated Zelda levels with difficulty 0, which do not include any enemies; a clear indication of overfitting.

In addition to training on just one level, agents were trained on multiple human-designed levels in Zelda to test whether this increases their generality. Human-designed levels 0, 1, 2, and 3 are randomly sampled at the beginning of each training episode. The results in Table 3 show that these agents achieve high scores on the training levels. On the held-out test level (level 4), they only reach rather low, but positive, scores on average. These results suggest that training on a handful of levels is not enough to learn a strong general policy.

6.2 Training on Procedurally Generated Levels

Agents trained on PCG levels with a fixed difficulty learned to generalize to other PCG levels with similar difficulties in Zelda, Frogs, and Boulderdash, reaching medium scores. Not surprisingly, they do not play as well on levels with very different difficulties. For example, agents trained with PCG .5 in Zelda are weak on difficult levels (0.7 – 1.0), and agents trained with PCG .7 are weak on easy levels with difficulty .3 – .4. Rodriguez Torrado et al. [22] have previously shown that DQN and A2C fail to learn Frogs on just one level using 1 million training steps. Here, we show that PCG allows the agent to play Frogs at a decent level across many difficulty settings. Note that the reported scores in Frogs correspond to win rates, while scores in Boulderdash are between 0 and 20.

Training an agent directly on PCG levels with high difficulties increases the learning time dramatically (see Figure 3). For that reason, the agents trained with PCG 1.0 in Zelda and Frogs reach scores similar to the random agent. In Boulderdash, the agents trained with PCG 1.0 reach decent scores (8.34 on average out of 20) on levels with difficulty 1.0.

In Zelda and Frogs, where PCG 1.0 failed to learn to play the most difficult levels, PPCG is more successful. PPCG has the best scores across the test sets with generated levels in Zelda and Frogs, achieving a respectable level of play on these hard levels. Figure 3 and Figure 4 show how both the difficulty and score increase during training. PPCG was, however, less successful in Boulderdash where the difficulty level got stuck just below 0.2. Thus, it was never trained on harder levels.

When observing the agents trained with PPCG play Zelda, it is clear that they learned a few general skills, such as striking down or avoiding enemies, collecting the key, and exiting through the door. However, they often have trouble navigating tricky mazes. Some of the procedurally generated levels allow the agent to get away with this weakness even at high difficulty settings.

6.3 Testing on Human-Designed Levels

It is clear that by using PCG and PPCG the agents learn to generalize to other generated levels. An interesting question, however, is whether they also generalize to the five human-designed levels.

Interestingly, in Zelda and Frogs, neither the PCG nor the PPCG agents generalize well to the five human-designed levels; PPCG is only slightly better than random in these two games. In Frogs level 4, the PPCG agents do, however, win 9% of the time, compared to the 30% win rate on the generated Frogs levels with difficulty 1.0. In Boulderdash, the PCG 1 agents achieve on average between 4–9 points (out of 20) on the five human-designed levels. The PPCG agents perform worse, as the difficulty got stuck just below 0.2 during training, while the human-designed levels arguably have a higher difficulty.

Figure 3: Mean scores during training across four repetitions with different PCG variations in Zelda. Left: PCG with five different fixed difficulty settings. Right: Progressive PCG (PPCG), which increases the difficulty of the training levels when the agent wins and decreases it when it loses. Notice that the score never increases when using PCG 1.0, while PPCG reached a difficulty of around 0.9. Shaded coloring shows one standard deviation.
Figure 4: Mean scores during training across five repetitions of Progressive PCG in Frogs (left) and Boulderdash (right). In Frogs, the difficulty almost reaches the highest level, and the score, which in this game corresponds to the win rate, goes up to around 0.6. In Boulderdash, the score begins to increase, but the agent needs 20 points to complete a level; thus, the difficulty rarely increases. Shaded coloring shows one standard deviation.

7 Discussion

The results of these experiments affirm our original concern with the way reinforcement learning research is often evaluated and reported. When it is reported that an agent has learned to play a game, it may simply mean that the policy model has memorized what action to take for a large number of observations that it has seen over and over again. This boils down to the network compressing a large number of image-action pairs without ever learning the general concepts of the game. Table 3 shows this in the huge disparity between the red and black numbers, i.e., the difference in performance on the training levels and the test levels.

While many researchers may be aware of this already, what the agent actually learns in these cases is often ambiguous. Our results show that agents trained on one or several levels do not learn how to generally react to game objects in the game. If the goal of the agent is to learn how to play a game, then this work shows that it is not enough to train in a fixed environment and then evaluate in the same setting. The agent must be evaluated in several test environments.

Our approach attempts to force the agent to learn a generalized behavior in a game by generating a new level to train on after each episode. When trained on procedurally generated levels the agent is able to generalize to other generated levels it has never seen before. This shows that some generalization is happening but since it cannot play well on levels made by a human designer it is not general to an extent that is truly desirable. In Zelda, the agent may never learn to properly navigate around walls and kill enemies at certain positions. If a truly general understanding of the game mechanics was learned, the agent would be able to recognize when two frames were semantically similar and take proper actions even on new types of level layouts. It is clear that our approach, in contrast, still only generalizes to a subset of levels.

Some interesting points can be observed when looking at the results for each game individually. In Zelda, in particular, we see how PCG enables generalization and how PPCG allows the agent to learn general behaviors even on difficult levels. These results also indicate that with PPCG the agents to some extent forget how to play the easier levels; their final performance on these levels is slightly lower than that of agents trained only on levels with a low difficulty. The agent trained with PPCG in Frogs was able to generalize fairly well across all the procedurally generated test levels, with a win rate between 30% and 100%. Considering that the difficult levels in Frogs are very challenging, these results demonstrate the strengths of PPCG when the level generator and difficulty setting work well. Finally, in Boulderdash, the learned agents show generalization as well, but the results also expose the weakness of relying on a win/loss metric for PPCG. As the agent rarely won once the difficulty reached around 0.2, it was never allowed to advance to more difficult levels. When using PCG with a fixed difficulty setting of 1.0, the agent also rarely won during training but still learned to obtain a decent number of points on these levels, thus achieving higher test scores than with PPCG. This highlights the importance of the design of the level generator and the difficulty adjustment mechanism.

8 Conclusion

We explored how policies learned with deep reinforcement learning generalize to levels that were not used during training. Our results demonstrate that agents trained on just one or a handful of levels fail to generalize to new levels. We have presented a new approach that incorporates a procedural level generator into the reinforcement learning framework, generating a new level for each episode and allowing the agent to generalize well to new levels from the same generator.

Training on a large set of levels requires much more data compared to training on just one level. Our experiments show that training on hard procedurally generated levels can be infeasible in some games. Our Progressive PCG approach increases the difficulty of the generated levels when the agent wins during training and decreases the difficulty when it loses. This enabled the agent to reach respectable scores on difficult procedurally generated test levels in the three selected games used in our experiments. We were not able to reach the same scores when using a fixed difficulty setting during training in two of the three games. Our approach learned policies that generalize well to other procedurally generated levels, even with some variation in difficulties, but unfortunately not as well on human-designed levels. Ensuring that the level generator covers a sufficiently large space of levels is a challenging problem that we will focus on in future work.


Niels Justesen was financially supported by the Elite Research travel grant from The Danish Ministry for Higher Education and Science. Ahmed Khalifa acknowledges the financial support from NSF grant (Award number 1717324 - "RI: Small: General Intelligence through Algorithm Invention and Selection.").


  • [1] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. P. Abbeel, and W. Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048–5058, 2017.
  • [2] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM, 2009.
  • [3] P. Bontrager, A. Khalifa, A. Mendes, and J. Togelius. Matching games and algorithms for general video game playing. In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference, pages 122–128, 2016.
  • [4] J. C. Brant and K. O. Stanley. Minimal criterion coevolution: a new approach to open-ended search. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 67–74. ACM, 2017.
  • [5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
  • [6] J. Buck. Mazes for Programmers: Code Your Own Twisty Little Passages. Pragmatic Bookshelf, 2015.
  • [7] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu. OpenAI Baselines, 2017.
  • [8] M. Ebner, J. Levine, S. M. Lucas, T. Schaul, T. Thompson, and J. Togelius. Towards a video game description language. 2013.
  • [9] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
  • [10] F. Gomez and R. Miikkulainen. Incremental evolution of complex general behavior. Adaptive Behavior, 5(3-4):317–342, 1997.
  • [11] L. Johnson, G. N. Yannakakis, and J. Togelius. Cellular automata for real-time generation of infinite cave levels. In Proceedings of the 2010 Workshop on Procedural Content Generation in Games, page 10. ACM, 2010.
  • [12] N. Justesen, P. Bontrager, J. Togelius, and S. Risi. Deep learning for video game playing. arXiv preprint arXiv:1708.07902, 2017.
  • [13] N. Justesen and S. Risi. Automated curriculum learning by rewarding temporally rare events. arXiv preprint arXiv:1803.07131, 2018.
  • [14] K. Kansky, T. Silver, D. A. Mély, M. Eldawy, M. Lázaro-Gredilla, X. Lou, N. Dorfman, S. Sidor, S. Phoenix, and D. George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. arXiv preprint arXiv:1706.04317, 2017.
  • [15] M. Kempka, M. Wydmuch, G. Runc, J. Toczek, and W. Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, pages 341–348, Santorini, Greece, Sep 2016. IEEE. The best paper award.
  • [16] J. Liu, D. Perez-Lebana, and S. M. Lucas. The single-player gvgai learning framework technical manual.
  • [17] T. Matiisen, A. Oliver, T. Cohen, and J. Schulman. Teacher-student curriculum learning. arXiv preprint arXiv:1707.00183, 2017.
  • [18] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016.
  • [19] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  • [20] D. Perez-Liebana, J. Liu, A. Khalifa, R. D. Gaina, J. Togelius, and S. M. Lucas. General video game ai: a multi-track framework for evaluating agents, games and content generation algorithms. arXiv preprint arXiv:1802.10363, 2018.
  • [21] D. Perez-Liebana, S. Samothrakis, J. Togelius, S. M. Lucas, and T. Schaul. General Video Game AI: Competition, Challenges and Opportunities. In Thirtieth AAAI Conference on Artificial Intelligence, pages 4335–4337, 2016.
  • [22] R. Rodriguez Torrado, P. Bontrager, J. Togelius, J. Liu, and D. Perez-Liebana. Deep reinforcement learning for general video game ai. In Computational Intelligence and Games (CIG), 2018 IEEE Conference on. IEEE, 2018.
  • [23] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
  • [24] F. Sadeghi and S. Levine. Cad2rl: Real single-image flight without a single real image. arXiv preprint arXiv:1611.04201, 2016.
  • [25] T. Schaul. A video game description language for model-based or interactive learning. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pages 1–8. IEEE, 2013.
  • [26] J. Schmidhuber. Powerplay: Training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Frontiers in psychology, 4:313, 2013.
  • [27] N. Shaker, J. Togelius, and M. J. Nelson. Procedural content generation in games. Springer, 2016.
  • [28] J. Tobin, W. Zaremba, and P. Abbeel. Domain randomization and generative models for robotic grasping. arXiv preprint arXiv:1710.06425, 2017.
  • [29] J. Togelius and S. M. Lucas. Evolving robust and specialized car racing skills. In Evolutionary Computation, 2006. CEC 2006. IEEE Congress on, pages 1187–1194. IEEE, 2006.
  • [30] J. Togelius, N. Shaker, S. Karakovskiy, and G. N. Yannakakis. The mario ai championship 2009-2012. AI Magazine, 34(3):89–92, 2013.
  • [31] C. Zhang, O. Vinyals, R. Munos, and S. Bengio. A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893, 2018.