Scaling MAP-Elites to Deep Neuroevolution


Quality-Diversity (QD) algorithms, and MAP-Elites (ME) in particular, have proven very useful for a broad range of applications including enabling real robots to recover quickly from joint damage, solving strongly deceptive maze tasks or evolving robot morphologies to discover new gaits. However, present implementations of ME and other QD algorithms seem to be limited to low-dimensional controllers with far fewer parameters than modern deep neural network models. In this paper, we propose to leverage the efficiency of Evolution Strategies (ES) to scale MAP-Elites to high-dimensional controllers parameterized by large neural networks. We design and evaluate a new hybrid algorithm called MAP-Elites with Evolution Strategies (ME-ES) for post-damage recovery in a difficult high-dimensional control task where traditional ME fails. Additionally, we show that ME-ES performs efficient exploration, on par with state-of-the-art exploration algorithms in high-dimensional control tasks with strongly deceptive rewards.


1. Introduction

The path to success is not always a straight line. This popular saying describes, in simple terms, a key problem of non-convex optimization. When optimizing a model for a non-convex objective function, an algorithm that greedily follows the gradient of the objective might get stuck in local optima. One can think of an agent in a maze with numerous walls. An algorithm that minimizes the distance between the position of the agent and the maze center will surely lead to the agent getting stuck in a corner. Based on this observation, Lehman and Stanley (2008) proposed a thought-provoking idea: ignore the objective and optimize for novelty instead. Novelty search (NS) continually generates new behaviors without considering any objective and, as such, is not subject to the local optima encountered by algorithms following fixed objectives. In cases like our maze example, NS can lead to better solutions than objective-driven optimization (Lehman and Stanley, 2008).

Despite the successes of NS, objectives still convey useful information for solving tasks. As the space of possible behaviors increases in size, NS can endlessly generate novel outcomes, few of which may be relevant to the task at hand. Quality-Diversity (QD) algorithms address this issue by searching for a collection of solutions that is both diverse and high-performing (Lehman and Stanley, 2011; Mouret and Clune, 2015; Cully and Demiris, 2017). MAP-Elites, in particular, was used to generate diverse behavioral repertoires of walking gaits on simulated and physical robots, which enabled them to recover quickly from joint damage (Cully et al., 2015).

In QD algorithms like MAP-Elites or Novelty Search with Local Competition (NSLC), a Genetic Algorithm (GA) is often used as the underlying optimization algorithm. A candidate controller is selected to be mutated and the resulting controller is evaluated in the environment, leading to a performance measure (fitness) and a behavioral characterization (BC), a low-dimensional representation of the agent's behavior. QD algorithms usually curate some form of behavioral repertoire, a collection of high-performing and/or diverse controllers experienced in the past (sometimes called an archive (Lehman and Stanley, 2011; Cully and Demiris, 2017) or a behavioral map (Mouret and Clune, 2015)). Each newly generated controller can thus be added to the behavioral repertoire if it meets some algorithm-dependent conditions. From a parallel line of work, Intrinsically Motivated Goal Exploration Processes (IMGEP) also curate behavioral repertoires and are often based on GA-like optimization (Baranes and Oudeyer, 2013; Forestier et al., 2017). Agents set their own goals in the behavioral space and try to reach them by combining controllers whose BCs came close to these goals. Uniform goal selection then triggers a novelty effect, where controllers reaching sparse areas are used more often, while learning-progress-based sampling implements a form of QD in which quality is defined as the ability to reliably achieve goals (Baranes and Oudeyer, 2013).

Thus far, the most successful demonstrations of QD algorithms have been on robotics problems with relatively simple, low-dimensional controllers (Lehman and Stanley, 2008, 2011; Mouret and Clune, 2015; Cully and Demiris, 2017). Modern robot controllers such as ones based on neural networks can have millions of parameters, which makes them difficult –though not impossible– to optimize with Evolutionary Algorithms (Such et al., 2017). Such large controllers are usually trained through Deep Reinforcement Learning (DRL), where an agent learns to perform a sequence of actions in an environment so as to maximize some notion of cumulative reward (Sutton and Barto, 2018). DRL is concerned with training deep neural networks (DNNs), typically via stochastic gradient descent (SGD), to facilitate learning in robot control problems. Unlike in supervised learning, training data in DRL is generated by having the agent interact with the environment. If the agent greedily takes actions to maximize reward –a phenomenon known as exploitation– it may run into local optima and fail to discover alternate strategies with larger payoffs. To avoid this, RL algorithms also need exploration. RL algorithms usually explore in the action space, adding random noise to a controller's selected actions (e.g. ε-greedy) (Mnih et al., 2013; Lillicrap et al., 2015; Fujimoto et al., 2018). More directed exploration techniques endow the agent with intrinsic motivations (Schmidhuber, 1991; Oudeyer et al., 2007; Barto, 2013). Most of the time, the reward is augmented with an exploration bonus, driving the agent to optimize for proxies of uncertainty such as novelty (Bellemare et al., 2016), prediction error (Pathak et al., 2017; Burda et al., 2018), model disagreement (Shyam et al., 2018), surprise (Achiam and Sastry, 2017) or expected information gain (Houthooft et al., 2016).

In contrast with QD algorithms however, DRL algorithms are usually concerned with training a single controller to solve tasks (Mnih et al., 2013; Lillicrap et al., 2015; Fujimoto et al., 2018; Pathak et al., 2017). The agent thus needs to rely on a single controller to adapt to new environments (Nichol et al., 2018; Portelas et al., 2019), tasks (Finn et al., 2017), or adversarial attacks (Gleave et al., 2019). Kume et al. (2017) outlines perhaps the first RL algorithm to form a behavioral repertoire by training multiple policies. Optimizing for performance, this algorithm relies on the instabilities of the underlying learning algorithm (Deep Deterministic Policy Gradient (Lillicrap et al., 2015)) to generate a diversity of behaviors that will be collected in a behavioral map. Although the performance optimization of this QD algorithm leverages DRL, exploration remains incidental.

In recent years, Deep Neuroevolution has emerged as a powerful competitor to SGD for training DNNs. Salimans et al. (2017), in particular, presents a scalable version of the Evolution Strategies (ES) algorithm, achieving performance comparable to state-of-the-art DRL algorithms on high-dimensional control tasks like Mujoco (Brockman et al., 2016) and the Atari suite (Bellemare et al., 2013). Similarly, Genetic Algorithms (GA) were also shown to be capable of training DNN controllers for the Atari suite, but failed to do so on the Mujoco suite (Such et al., 2017). ES, in contrast with GA, combines information from many perturbed versions of the parent controller to generate a new one. Doing so allows the computation of gradient estimates, leading to efficient optimization in high-dimensional parameter spaces like those of DNNs (Salimans et al., 2017). Recent implementations also utilize the resources of computing clusters by parallelizing the controller evaluations, making ES algorithms competitive with DRL in terms of training time (Salimans et al., 2017). Since Such et al. (2017) showed that even powerful, modern GAs using considerable amounts of computation could not solve continuous control tasks like those from the Mujoco suite, we propose to unlock QD for high-dimensional control tasks via a novel QD-ES hybrid algorithm. Doing so, we aim to benefit from the exploration abilities and resulting behavioral repertoires of QD algorithms, while leveraging the ability of ES to optimize large models.

NS-ES made a first step in this direction by combining NS with ES: replacing the performance objective of ES with a novelty objective (Conti et al., 2018). Two variants were proposed to incorporate the performance objective: NSR-ES, which mixes performance and novelty objectives evenly and NSRA-ES, which adaptively tunes the ratio between the two. These methods –like most RL exploration methods– use a mixture of exploitation and exploration objectives as a way to deal with the exploitation-exploration tradeoff. While this might work when the two objectives are somewhat aligned, it may be inefficient when they are not (Pugh et al., 2016). Several works have started to investigate this question and some propose to disentangle exploration and exploitation into distinct phases (Colas et al., 2018; Beyer et al., 2019; Zhang et al., 2019). QD presents a natural way of decoupling the optimization of exploitation (quality) and exploration (diversity) by looking for high-performing solutions in local niches of the behavioral space, leading to local competition between solutions instead of a global competition (Lehman and Stanley, 2011; Mouret and Clune, 2015; Cully and Demiris, 2017).

Contributions In this work, we present ME-ES, a version of the powerful QD algorithm MAP-Elites that scales to hard, high-dimensional control tasks by leveraging ES. Unlike NS-ES and its variants, which optimize a small population of DNN controllers (Conti et al., 2018), ME-ES builds a large repertoire of diverse controllers. Having a set of diverse, high-performing behaviors not only enables efficient exploration but also robustness to perturbations (either in the environment itself or in the agent's abilities). In behavioral repertoires, the burden of being robust to perturbations can be shared between different specialized controllers, one of which can solve the new problem at hand. Algorithms that train a single controller, however, need this controller to be robust to all potential perturbations. We present two applications of ME-ES. The first is damage recovery in a high-dimensional control task: after building a repertoire of behaviors, the agent is damaged (e.g. disabled joints) and must adapt to succeed. We show that ME-ES can discover adaptive behaviors that perform well despite the damage, while a previous implementation of MAP-Elites based on GA fails. The second application is exploration in environments with strongly deceptive rewards. We show that agents trained with ME-ES perform on par with state-of-the-art exploration algorithms (NS-ES and variants). Table 1 presents a classification of ME-ES and related algorithms presented in this paper along three dimensions: 1) whether they use pure exploration, pure exploitation or both, 2) whether they rely on gradient-based or mutation-based learning algorithms and 3) whether they couple or decouple the trade-off between exploration and exploitation (if applicable). As ES computes natural gradient estimates through Monte-Carlo approximations (Wierstra et al., 2008), we refer to ES-powered methods as gradient-based.

                                        Gradient-Based             Mutation-Based
Pure                                    ES                         GA
Exploration & Exploitation, Coupled     NSR-ES, NSRA-ES            ME-GA
Exploration & Exploitation, Decoupled   ME-ES                      —
Table 1. Classification of related algorithms

2. Background

2.1. MAP-Elites

MAP-Elites was first presented in Mouret and Clune (2015). In addition to an objective function –or fitness function– which measures the performance of an agent, MAP-Elites assumes the definition of a behavioral characterization (BC) mapping the state-action trajectory of an agent in its environment to a low-dimensional embedding lying in a behavioral space. MAP-Elites keeps track of an archive of individuals (controllers) tried in the past along with their associated fitness values and BCs. The aim is to curate a repertoire of behaviorally diverse and high-performing agents. To this end, the behavioral space is discretized into cells representing behavioral niches, where each niche maintains the highest performing individual whose BC falls into its cell. Individuals in each niche optimize for the fitness objective, yet implicitly experience a pressure for diversity, as they are driven towards empty and under-optimized cells, away from the selection pressure of cells containing highly optimized individuals.

After initializing the archive with a few randomly initialized controllers, MAP-Elites repeats the following steps:

  1. Select a populated cell at random,

  2. Mutate the cell’s controller using GA to obtain a new controller,

  3. Gather a trajectory of agent-environment interactions with the new controller and obtain its fitness and BC,

  4. Update the archive: add the controller to the cell where its BC falls if Rule 1) the cell is empty or Rule 2) the fitness is higher than that of the controller currently in the cell.
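The loop above can be sketched in a few lines of Python (an illustrative sketch, not the authors' implementation; here `evaluate` returns a fitness and a discretized cell index standing in for the BC, and `mutate` is a GA-style random perturbation):

```python
import random

def map_elites(init_controllers, evaluate, mutate, n_iters):
    """Minimal MAP-Elites sketch. archive maps cell_id -> (controller, fitness)."""
    archive = {}
    # initialize the archive with a few starting controllers
    for theta in init_controllers:
        f, cell = evaluate(theta)
        archive[cell] = (theta, f)
    for _ in range(n_iters):
        # 1. select a populated cell at random
        parent, _ = archive[random.choice(list(archive))]
        # 2. mutate the cell's controller to obtain a new controller
        child = mutate(parent)
        # 3. evaluate: fitness and behavioral cell of the new controller
        f, cell = evaluate(child)
        # 4. Rule 1 (empty cell) or Rule 2 (higher fitness) -> add to archive
        if cell not in archive or f > archive[cell][1]:
            archive[cell] = (child, f)
    return archive
```

Note that only step 4 ever writes to the archive, which is what makes the exploration (Rule 1) and exploitation (Rule 2) pressures local rather than global.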

The two rules guiding additions to the archive implement a decoupling between exploitation (quality) and exploration (diversity). Rule 1 implements exploration, as it ensures the preservation of controllers that exhibit novel behavior (i.e. mapped to empty cells). Rule 2 implements exploitation, as it enforces local competition and retains only the highest performing solution in each behavioral niche. In this manner, the exploration and exploitation objectives cannot contradict each other. To distinguish MAP-Elites based on GA from ME-ES, we call the traditional implementation of MAP-Elites ME-GA while MAP-Elites will encompass both ME-GA and ME-ES variants.

2.2. Evolution Strategies

Evolution Strategies (ES) is a class of black box optimization algorithms inspired by natural evolution (Rechenberg, 1973; Back et al., 1991). For each generation, an initial parameter vector (the parent), is mutated to generate a population of parameter vectors (the offspring). The fitness of each resultant offspring is evaluated and the parameters are combined such that individuals with high fitness have higher influence than others. As this process is repeated, the population tends towards regions of the parameter space with higher fitness, until a convergence point is reached.

The version of ES used in this paper belongs to the subcategory of Natural Evolution Strategies (NES) (Wierstra et al., 2008; Sehnke et al., 2010), in which the population of parameter vectors is represented by a distribution p_ψ(θ) parameterized by ψ. Given an objective function F, NES algorithms optimize the expected objective value E_{θ∼p_ψ}[F(θ)] using stochastic gradient ascent.

Recently, Salimans et al. (2017) proposed a version of the NES algorithm able to scale to the optimization of high-dimensional parameters. In their work, they address the RL problem, for which θ refers to the parameters of a controller while the fitness F(θ) is the reward obtained by the corresponding controller over an episode of environment interactions. Given the parent controller parameters θ, the offspring population (θ_i)_{i=1..λ} is sampled from the isotropic multivariate Gaussian distribution N(θ, σ²I) with fixed variance σ². As in reinforce (Williams, 1992), θ is updated according to the following gradient approximation:

∇_θ E_{θ'∼N(θ, σ²I)}[F(θ')] ≈ (1/λ) Σ_{i=1..λ} F(θ_i) (θ_i − θ) / σ²,

where λ is the offspring population size, usually large to compensate for the high variance of this estimate. In practice, any θ_i can be decomposed as θ_i = θ + σ ε_i with ε_i ∼ N(0, I), and the gradient is estimated by:

∇_θ E_{θ'∼N(θ, σ²I)}[F(θ')] ≈ (1/(λσ)) Σ_{i=1..λ} F(θ + σ ε_i) ε_i.

As in Salimans et al. (2017), we use virtual batch normalization of the controller's inputs and rank-normalize the fitness values F(θ_i). The version of NES used in this paper is strictly equivalent to the one proposed in Salimans et al. (2017); we simply refer to it as ES hereafter.
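One ES generation built on this estimator might look as follows (a sketch under stated assumptions: the rank-normalization scheme and all constants are illustrative, and mirrored sampling and virtual batch normalization are omitted for brevity):

```python
import numpy as np

def es_step(theta, fitness_fn, pop_size, sigma, lr, rng):
    """One ES generation: perturb theta, score offspring, ascend the
    Monte-Carlo gradient estimate (1/(pop_size*sigma)) * sum(F_i * eps_i)."""
    # sample offspring noise eps_i ~ N(0, I)
    eps = rng.standard_normal((pop_size, theta.size))
    # evaluate each offspring theta + sigma * eps_i
    fit = np.array([fitness_fn(theta + sigma * e) for e in eps])
    # rank-normalize fitness into [-0.5, 0.5] to reduce outlier influence
    ranks = fit.argsort().argsort().astype(np.float64)
    shaped = ranks / (pop_size - 1) - 0.5
    # gradient estimate and ascent step
    grad = (shaped[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * grad
```

Applied to a simple concave objective such as -||θ||², repeated calls drive θ towards the optimum, which is the behavior the estimator is designed to deliver in much higher dimensions.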

3. Methods

3.1. ME-ES

The ME-ES algorithm reuses the founding principles of MAP-Elites that lead to damage robustness and efficient exploration, while leveraging the optimization performance of ES. ME-ES curates a behavioral map (BM), an archive of neural network controllers parameterized by θ. Every n_optim_gens generations, a populated cell and its associated controller are sampled from the archive. This controller is then copied to θ, which is subsequently evolved for n_optim_gens generations using ES. At each generation, the offspring parameters θ_i are only used to compute an update for θ. Every updated θ is considered for addition to the BM. Algorithm 1 provides a detailed outline of the way ME-ES combines principles from MAP-Elites and ES.

1:  Input: n_gens, pop_size, σ, n_optim_gens, empty behavioral map BM, n_eval
2:  for g = 1 to n_gens do
3:      if g % n_optim_gens == 0 then
4:          mode ← explore_or_exploit()
5:          if mode == ‘explore’ then
6:              θ ← sample_explore_cell()
7:          else if mode == ‘exploit’ then
8:              θ ← sample_exploit_cell()
9:      θ ← ES_optim(θ, pop_size, σ, objective=mode)
10:     f, bc ← Evaluate(θ, n_eval)
11:     Update_BM(BM, θ, f, bc)
Algorithm 1 ME-ES Algorithm
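A minimal Python rendering of this outer loop, assuming helper callables with the same roles as in Algorithm 1 (the strict alternation of modes inside `explore_or_exploit` is an assumption; the paper leaves the scheduling to the variant in use):

```python
def me_es(n_gens, n_optim_gens, es_step_fn, evaluate,
          sample_explore_cell, sample_exploit_cell, update_bm):
    """ME-ES outer loop (sketch). Helper names mirror Algorithm 1 and are
    assumptions, not the authors' exact API."""
    theta, mode = None, None
    for g in range(n_gens):
        if g % n_optim_gens == 0:
            # switch objective and restart ES from a sampled cell's controller
            mode = 'explore' if (g // n_optim_gens) % 2 == 0 else 'exploit'
            theta = (sample_explore_cell() if mode == 'explore'
                     else sample_exploit_cell())
        # one ES generation against the current objective (fitness or novelty)
        theta = es_step_fn(theta, objective=mode)
        # evaluate the updated parent and consider it for addition to the map
        fitness, bc = evaluate(theta)
        update_bm(theta, fitness, bc)  # applies Rules 1 and 2 of MAP-Elites
    return theta
```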

ME-ES variants We define three variants of ME-ES that differ in the objective optimized by ES:

  • In ME-ES exploit, the objective is the fitness function F, computed by running the agent’s controller in the environment for one episode (directed exploitation).

  • In ME-ES explore, the objective is the novelty function N, which we define as the average Euclidean distance between a controller’s BC and its k nearest neighbors in an archive storing all previous BCs, regardless of their addition to the BM (one per generation). Because the novelty objective explicitly incentivizes agents to explore their environment, we call it a directed exploration objective.

  • Finally, ME-ES explore-exploit alternates between both objectives, thus implementing a decoupled exploration and exploitation procedure. Because it explicitly optimizes for both objectives, ME-ES explore-exploit conducts both directed exploration and directed exploitation.
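The novelty score used by the explore variants can be sketched in a few lines (the value of k is an assumption; the paper's exact setting is not reproduced here):

```python
import numpy as np

def novelty(bc, archive_bcs, k=10):
    """Average Euclidean distance from a BC to its k nearest neighbors
    in the archive of all past BCs (sketch)."""
    dists = np.linalg.norm(np.asarray(archive_bcs) - np.asarray(bc), axis=1)
    k = min(k, len(dists))  # guard against a small archive
    return float(np.sort(dists)[:k].mean())
```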

Note that ME-ES explore performs undirected exploitation; while the ES steps do not directly optimize for fitness, performance measures are still used to update the BM (Rule 2 of the map updates). In the same way, ME-ES exploit also performs undirected exploration, as the BM is updated with novel controllers (Rule 1 of the map updates). Like ME-GA, all versions of ME-ES thus perform forms of exploration and exploitation. Only optimization steps with ES, however, enable agents to perform the directed version of both exploitation and exploration.

Cell sampling The number of generations performed by ME-ES is typically orders of magnitude lower than for MAP-Elites, as each generation of ME-ES involves far more episodes of environment interaction than a generation of ME-GA. Therefore, the sampling of the cell from which to initiate the next n_optim_gens generations of ES is crucial (see Algorithm 1). Here, we move away from the uniform cell sampling used in MAP-Elites towards biased cell sampling. We propose two distinct sampling strategies, one for exploitation steps and one for exploration steps. For exploitation steps, we select from cells with high-fitness controllers, under the assumption that high-fitness cells may lead to even higher-fitness cells. For exploration steps, we select from cells with high novelty scores. By definition, novel controllers lie in under-explored areas of the search space, and mutating these controllers should lead to novel areas of the behavior space (Lehman and Stanley, 2008, 2011). In practice, as the number of populated cells increases with each generation, the bias towards selecting cells with a higher performance or novelty score diminishes: under novelty-proportional sampling, a controller that is far more novel than its single competitor is selected with high probability when only two cells are populated, but the same controller competing against hundreds of others is selected only rarely. To fight this phenomenon, we restrict the novelty-proportional selection to the five most novel cells. Similarly, for exploitation steps, we sample either uniformly from the two highest-fitness cells, which promotes exploitation from the current best cells, or uniformly from the two highest-fitness cells among the last five updated, which promotes exploitation from newly discovered cells.
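These two biased sampling strategies could be sketched as follows (an illustrative sketch: the even split between the two exploitation pools is an assumption, and `cells` is any iterable of cell identifiers with score lookups):

```python
import random

def sample_explore_cell(cells, novelty_of, n_best=5):
    """Exploration sampling: novelty-proportional selection restricted
    to the five most novel cells."""
    top = sorted(cells, key=novelty_of, reverse=True)[:n_best]
    weights = [novelty_of(c) for c in top]
    return random.choices(top, weights=weights, k=1)[0]

def sample_exploit_cell(cells, fitness_of, recent=None):
    """Exploitation sampling: pick uniformly among the two highest-fitness
    cells, either overall or among the last five updated cells (assumed
    50/50 split between the two pools)."""
    pool = list(cells)
    if recent and random.random() < 0.5:
        pool = recent[-5:]  # restrict to newly discovered cells
    best_two = sorted(pool, key=fitness_of, reverse=True)[:2]
    return random.choice(best_two)
```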

Hyperparameters We use fully connected neural network controllers with two hidden layers and a non-linearity between each layer. We use the Xavier initialization (Glorot and Bengio, 2010) and the Adam optimizer (Kingma and Ba, 2014) with l2 regularization. We run ES for n_optim_gens consecutive generations with a large population and a fixed noise parameter σ. After each step, the resulting θ is evaluated several times to estimate its average fitness f, its average behavioral characterization bc and its associated novelty score N(θ). Novelty is computed as the average distance between the controller’s bc and its k nearest neighbors in the archive of all past bcs, regardless of whether they were added to the BM.

3.2. Damage Recovery

The Intelligent Trial and Error (IT&E) algorithm integrates the evolution of a behavioral map using MAP-Elites with a subsequent procedure enabling agents to recover from damage by searching the BM for efficient recovery controllers (Cully et al., 2015). Here, we reuse this setting to compare the traditional implementation of MAP-Elites based on GA with our proposed ME-ES variants, see Fig. 1. Once the behavioral map has been filled by ME-GA or ME-ES, we can use it for damage adaptation (e.g. loss of control of one or several joints). As the adaptation procedure, we use the map-based Bayesian optimization algorithm (M-BOA), part of the original IT&E (Cully et al., 2015). Bayesian optimization is especially suitable for finding the maximum of an unknown objective function for which it is costly to obtain samples. Here, the objective function maps a bc to the fitness of the corresponding controller under the damage condition. M-BOA initializes a model of this objective using the behavioral map filled by MAP-Elites and updates the model using Gaussian Process (GP) regression. Data acquisition boils down to the evaluation of a controller in the perturbed environment, providing a new (bc, fitness) pair to update the model. The problem of sampling the next controller to evaluate can be framed as a bandit problem, where the value of a controller is its estimated performance under the current model, which integrates the initial performances contained in the BM and the evaluations of previously selected controllers in the damage condition. The decision model is implemented by a variant of the UCB algorithm (Lai and Robbins, 1985), balancing exploitation (selecting the controller associated with the highest expected value) and exploration (selecting controllers the model is uncertain about): the next controller maximizes μ(bc) + κ σ(bc), where μ(bc) is the expected performance, σ(bc) is the uncertainty of the model about bc, and κ is a hyperparameter tuning the exploitation-exploration balance. As in Cully et al. (2015), we use a Matérn kernel function and a noise prior on the stochasticity of controller performance evaluations.
Implementation details are provided in Cully et al. (2015).
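The UCB acquisition step at the heart of M-BOA can be sketched as follows, assuming the GP posterior mean and uncertainty have already been computed for every populated cell (κ here is an assumed value, not the paper's):

```python
import numpy as np

def select_next_controller(mu, sigma, kappa=0.05):
    """UCB acquisition over the map's cells: return the index of the cell
    maximizing mu + kappa * sigma, where mu and sigma are the GP posterior
    mean and standard deviation per cell (sketch)."""
    ucb = np.asarray(mu) + kappa * np.asarray(sigma)
    return int(np.argmax(ucb))
```

With κ = 0 this degenerates to pure exploitation of the model's current estimates; larger κ favors evaluating controllers the model is still uncertain about.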

Figure 1. Building repertoires for damage adaptation. Phase 1: MAP-Elites builds a repertoire of diverse and high-performing behaviors. Phase 2: M-BOA builds a model of the objective function under the perturbed conditions to find a controller robust to the damage.

3.3. Baselines and Controls

In this paper, we test ME-ES in two applications. The first is the construction of behavioral repertoires for damage adaptation and the second is exploration in deceptive reward environments (deep exploration). The two applications have different baselines and control treatments, which are described in the following sections.

MAP-Elites with Genetic Algorithm

We use the traditional implementation of MAP-Elites based on GA (ME-GA) as a baseline in the damage adaptation experiments in order to highlight the lift from using ES instead of GA to power MAP-Elites in high-dimensional control tasks. To ensure a fair comparison, it is important to keep the number of episodes constant between ME-GA and ME-ES. ME-ES requires a large number of episodes for each new controller (one evaluation per offspring, plus several evaluations of the final controller to obtain a robust performance estimate). As such, we allow ME-GA to add correspondingly more new controllers per generation, each evaluated several times.

We do not use ME-GA as a baseline in the deep exploration experiments because the task builds on the Humanoid-v1 environment, where Such et al. (2017) demonstrated that GAs were not competitive with ES, even with many times more environment interactions.

Novelty Search and Evolution Strategies

We compare ME-ES to NS-ES and its variants (Conti et al., 2018) for the deep exploration experiments. Since these baselines do not build behavioral repertoires, they are not used for the damage adaptation application. NS-ES replaces the performance objective of ES by the same novelty objective as ME-ES explore. Instead of a behavioral map, NS-ES and its variants keep a population of parent controllers optimized in parallel via ES. NS-ES never uses fitness for optimization and thus is a pure exploration algorithm. We also compare against two variants of NS-ES that include a fitness objective: (1) NSR-ES optimizes the average of the fitness and novelty objectives, while (2) NSRA-ES implements an adaptive mixture where the mixing weight is updated during learning. For NSRA-ES, the mixing weight leans towards exploration (novelty objective) when the best fitness stagnates and shifts back towards exploitation (fitness objective) when new behaviors are discovered that lead to higher performance.

The weight adaptation strategy of NSRA-ES was modified in this paper. In the original work, the weight starts at pure exploitation, decreases by a fixed increment whenever fitness has not improved for a set number of generations, and increases by the same increment at every improvement. In practice, we found this cadence too slow to adapt properly, as the mixing weight can only reach pure exploration after a large number of generations in the best case. Thus, we increase the update frequency and remove the initial bias towards exploitation by starting from a balanced mixing weight.
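A sketch of this style of adaptive mixing (the `t_wait` and `delta` values and the exact stagnation test are assumptions, not the settings used by NSRA-ES; `w` weights the fitness objective and `1 - w` the novelty objective):

```python
def adapt_mixing_weight(w, best_fitness, history, t_wait=20, delta=0.05):
    """Increase w on every new best fitness; decrease it when the best
    fitness has stagnated for t_wait generations (sketch)."""
    history.append(best_fitness)
    past = history[:-1]
    if past and best_fitness > max(past):
        # improvement: shift towards exploitation (fitness objective)
        w = min(1.0, w + delta)
    elif len(history) > t_wait and max(history[-t_wait:]) <= max(history[:-t_wait]):
        # stagnation: shift towards exploration (novelty objective)
        w = max(0.0, w - delta)
    return w
```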

4. Experiments

Our two experiments are presented in the next sections. Recall that controllers are implemented by fully-connected neural networks with two hidden layers. This results in search spaces that have orders of magnitude more dimensions than those used in traditional QD studies (Lehman and Stanley, 2008; Mouret and Clune, 2015; Lehman and Stanley, 2011; Cully and Demiris, 2017). For each treatment, we report the mean and standard deviation across seeds, except for ME-GA, for which we present fewer seeds: ME-GA is considerably slower to run and does not manage to learn a successful controller for the damage adaptation task.

4.1. Building Repertoires for Damage Adaptation

We compare the quality of the behavioral maps generated by the different versions of MAP-Elites for damage recovery. In the first phase, we build a behavioral map with each of the MAP-Elites algorithms; in the second phase, we use the behavioral map created in phase 1 to help the agent recover from damage to its joints via M-BOA (Fig. 1). The damages applied to the joints include:

  • One joint cannot be controlled.

  • One full leg cannot be controlled.


We evolve a repertoire of walking gaits in the Mujoco Ant-v2 environment (Brockman et al., 2016). The BC is a 4-D vector, where each dimension represents the proportion of episode steps during which the corresponding leg is in contact with the floor. This behavioral space is discretized into equally sized bins along each dimension. The fitness is a mixture of the distance traveled along the x-axis and a cost on the torque applied at each joint. This incentivizes the agent to move as far along the x-axis as possible without exerting too much energy.
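Computing this BC from logged contact data might look like the following (a sketch; the contact-logging format, a boolean per leg per timestep, is an assumption):

```python
import numpy as np

def ant_bc(contacts):
    """BC for the Ant task: `contacts` is a (T, 4) array of booleans, one
    column per leg; the BC is the fraction of the T episode steps during
    which each leg touches the floor (sketch)."""
    return np.asarray(contacts, dtype=float).mean(axis=0)
```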

Figure 2. Ant behavioral map. a: Map performance (best performance in map). b: Map coverage (# populated cells), log scale on the y-axis.

Results - Map Building

Fig. 2a shows the evolution of the map’s best performance and Fig. 2b shows the map’s coverage. As ME-GA adds many more controllers to the map per generation, its coverage is orders of magnitude higher than that of the ME-ES variants. However, the best performance found in the ME-GA map is quite low. ME-ES explore also shows poor performance across its map. In contrast, ME-ES exploit and ME-ES explore-exploit show high performance, on par with Twin Delayed Deep Deterministic policy gradient (TD3) (Fujimoto et al., 2018) but below Soft Actor-Critic (SAC) (Haarnoja et al., 2018), two state-of-the-art DRL algorithms for this task.

Results - Damage Adaptation

M-BOA is run on the behavioral maps from each experiment. Recovery controllers extracted from the ME-GA and ME-ES explore maps do not achieve high scores (Fig. 4a): their scores correspond to a motionless Ant (not moving forward, but not falling either). In contrast, ME-ES explore-exploit and ME-ES exploit are able to recover from joint damage and the Ant is able to move in all damage scenarios (Fig. 4a). In Fig. 4b, we highlight the difference in performance before and after adaptation. Pre-adaptation performance is computed by evaluating the highest performing pre-damage controller in each of the post-damage environments. Post-adaptation performance is computed by evaluating the recovery controller in each of the damaged environments. In most cases, ME-ES explore-exploit and ME-ES exploit significantly improve upon the pre-damage controller (Fig. 4b). In both figures, averages and standard errors are computed over different runs of MAP-Elites + M-BOA (different seeds), and the performance of a given recovery controller is averaged over several episodes.


ME-GA does not scale to the high-dimensional Ant-v2 task, probably because it relies on random parameter perturbations (i.e. a GA) for optimization, which do not scale to high-dimensional controllers (Such et al., 2017). ME-ES explore performs poorly because it does not target performance, only exploration of the behavioral space, most of which is not useful. The exploitation ability of ME-ES explore relies only on its directed exploration component finding higher-performing solutions by chance. Indeed, neither ME-GA nor ME-ES explore leverages directed exploitation. ME-ES exploit focuses only on exploitation, but still performs undirected exploration by retaining novel solutions that fill new cells. While ME-ES explore-exploit targets performance only half of the time, it still finds a behavioral map that is good for damage recovery: its exploration component enables better coverage of the behavioral space while its exploitation component ensures high-performing controllers are in the map. Note that good space coverage alone is a poor indicator of adaptation performance, whereas a high-performing, reasonably populated map is a good indicator. Directed exploitation seems to be required to achieve good performance in the Ant-v2 task. Undirected exploration –as performed by ME-ES exploit– is sufficient to build a behavioral map useful for damage recovery, as its recovery performance is similar to that of the variant using both directed exploration and directed exploitation (ME-ES explore-exploit).

4.2. Deep Exploration


We use two modified domains from OpenAI Gym based on Humanoid-v2 and Ant-v2 (Fig. 3). In Deceptive Humanoid, the humanoid robot faces a U-shaped wall (as in Conti et al. (2018)), while in Ant Maze the Ant is placed in a maze similar to that of Frans et al. (2017). Both environments have a strongly deceptive reward whose gradient leads the agent directly into a trap. For both environments, we use the final position of the agent as the BC. In Deceptive Humanoid, the fitness is defined as the cumulative reward, a mixture of forward velocity and costs for joint torques; see Brockman et al. (2016) for more details. In Ant Maze, the fitness is the negative Euclidean distance to the goal (green area). Controllers are run for a fixed number of timesteps in Ant Maze and up to a fixed horizon in Deceptive Humanoid, less if the agent falls over.

Figure 3. Domains for the deep exploration study. a: Deceptive Humanoid domain. b: Ant Maze domain. Here the goal is the green area and the trap is in the cul-de-sac to the right of the agent’s starting position.
Figure 4. Damage adaptation. a: Performance of the best recovery controller, for each damage condition identified by the broken joint or joints. The dashed line represents the score of an agent that does not move forward but does not fall. b: Difference in performance, post- vs. pre-behavioral adaptation.

Results - Deceptive Humanoid

Undirected exploration (ME-ES exploit) is insufficient to get out of the trap, and directed exploration alone (ME-ES explore) falls short as well (Fig. 5a). That said, like NS-ES, ME-ES explore eventually explores around the wall, as indicated by its performance rising above the trap level. Algorithms implementing both directed exploration and directed exploitation manage to go around the wall and achieve high performance (Fig. 5a), regardless of whether they use decoupled (ME-ES explore-exploit) or coupled (NSR-ES, NSRA-ES) exploration-exploitation. Surprisingly, ME-ES explore displays poor map coverage despite its focus on exploration (Fig. 5b).

Figure 5. Deceptive Humanoid behavioral map. a: Map performance (best performance in the map); being stuck in the trap corresponds to a characteristic performance plateau. b: Map coverage (# populated cells).

Results - Ant Maze

In the Ant Maze, both ME-ES exploit and NSR-ES get stuck in the deceptive trap, as indicated by a constant score corresponding to the trap's distance from the goal. All other methods (ME-ES explore, NS-ES, ME-ES explore-exploit, and NSRA-ES) are able to avoid the trap and obtain scores much closer to zero, the maximum, attained at the goal. Examining the exploitation ratio of NSRA-ES, we observe that all runs quickly move towards performing mostly exploration. That said, in some runs the ratio falls to pure exploration and stays there, whereas in others the algorithm manages to find a high-performing controller, which triggers an increase in the preference for performance (Fig. 7).

Figure 6. Ant Maze behavioral map. a: Map performance (highest negative distance to the goal area in the map); being stuck in the trap corresponds to a constant, strongly negative score. b: Map coverage (# populated cells).
Figure 7. NSRA-ES weight adaptation. Evolution of the NSRA-ES weight tuning the ratio between the performance (exploitation) and novelty (exploration) objectives, for different seeds. Higher values correspond to more exploitation and vice versa.


Deceptive Humanoid. In this environment, simply following the fitness objective can only lead the agent into the U-shaped trap. Because the environment is open-ended, pure exploration does not guarantee the discovery of a high-performing behavior either, as it is possible to explore endlessly in directions misaligned with performance, a phenomenon previously encountered with NS-ES in Conti et al. (2018). This explains the poor performance of both ME-ES explore and ME-ES exploit. Combining exploration and exploitation, however, provides the best of both worlds, and we observe that all algorithms that do so navigate around the wall (NSR-ES, NSRA-ES, ME-ES explore-exploit). As there is no need to follow gradients orthogonal to those of the performance objective to succeed, coupling exploration and exploitation (e.g., NSR-ES) is sufficient to achieve high fitness. The poor map coverage of ME-ES explore may be a result of it optimizing for performance only indirectly, via MAP-Elites map updates. As a consequence, it is never encouraged to learn to walk efficiently, and is thus unable to fill large portions of the map. In Conti et al. (2018), however, NS-ES manages to make the humanoid walk using pure directed exploration. This may be because NS-ES updates only a fixed population of controllers, whereas any controller in the behavioral map of ME-ES explore can be updated; a given ME-ES explore controller is thus updated less frequently than any of the NS-ES controllers. This dilution of updates across controllers might explain why ME-ES explore is less efficient at exploring than NS-ES in this environment.

Ant Maze. In the Ant Maze task, pure exploitation leads the agent into the deceptive trap (Fig. 3b). However, the environment is enclosed by walls, so the agent cannot explore endlessly; pure exploration will therefore eventually lead to the goal. This explains the success of the directed exploration algorithms (NS-ES, ME-ES explore) and the poor performance of the directed exploitation algorithm (ME-ES exploit). This environment requires a more extensive exploration procedure than Deceptive Humanoid: the agent needs to avoid two traps and make several turns to reach its goal, whereas in Deceptive Humanoid the agent can simply walk at a fixed angle to get around the wall and achieve high scores. As such, an even mix of the performance and novelty objectives (NSR-ES) fails, as it averages two contradictory forces. NSRA-ES has a better chance of getting out of the trap by adding more and more exploration to the mix until the agent escapes. In doing so, however, the mixing weight can fall all the way to pure exploration, losing sight of the goal (Fig. 7). ME-ES explore-exploit is able to escape the trap thanks to its decoupled objectives: exploration steps lead it through the environment until it reaches a point from which exploitation steps allow it to reach the goal.

5. Discussion and Conclusion

In this paper we present ME-ES, a new QD algorithm scaling the powerful MAP-Elites algorithm to controllers parameterized by large, deep neural networks by leveraging ES. We present three variants of this algorithm: (1) ME-ES exploit, where ES targets a performance objective, (2) ME-ES explore, where ES targets a novelty objective, and (3) ME-ES explore-exploit, where ES targets the performance and novelty objectives in an alternating fashion. Because the MAP-Elites behavioral map implicitly implements undirected exploration (update rule 1) and undirected exploitation (update rule 2), these variants implement directed exploitation with undirected exploration, directed exploration with undirected exploitation, and decoupled directed exploration and exploitation, respectively.
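The directed component shared by all three variants is the ES update itself: a finite-difference ascent on whichever objective the current generation targets (performance for exploit steps, novelty for explore steps). A minimal antithetic-sampling sketch is below; hyperparameters and names are illustrative, and the paper's actual implementation (e.g. rank normalization) may differ:

```python
import numpy as np

def es_step(theta, objective_fn, sigma=0.05, alpha=0.1, n=50, rng=None):
    """One ES ascent step on `objective_fn`, which is the performance
    objective during exploit generations and the novelty objective
    during explore generations."""
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal((n, theta.size))
    # Antithetic evaluation: each perturbation is tried in both directions.
    f_plus = np.array([objective_fn(theta + sigma * e) for e in eps])
    f_minus = np.array([objective_fn(theta - sigma * e) for e in eps])
    # Monte-Carlo estimate of the gradient of the smoothed objective.
    grad = ((f_plus - f_minus)[:, None] * eps).sum(axis=0) / (2 * n * sigma)
    return theta + alpha * grad
```

Switching between the ME-ES variants then amounts to switching which `objective_fn` is handed to this update between generations.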

All variants of ME-ES are evaluated in two applications. In the first application, we use ME-ES to curate an archive of diverse and high-performing controllers, which is subsequently used for damage adaptation. We show that ME-ES manages to recover from joint damage in a high-dimensional control task while ME-GA does not. In the second application, we use ME-ES for exploration in tasks characterized by strongly deceptive rewards. Here we show that ME-ES explore-exploit performs well in both open and closed domains, on par with the state-of-the-art exploration algorithm NSRA-ES.

Others have performed work similar to ours. Frans et al. (2017) tackle an environment similar to Ant Maze, but their method relies on hierarchical RL: it learns independent low-level controllers that are combined by a single high-level controller. The low-level controllers are pre-trained to crawl in various directions, while the high-level controller learns to pick directions to find the goal. On top of requiring some domain knowledge to train the low-level controllers, hierarchical RL has trouble composing sub-controllers, a problem they sidestep by resetting the agent to a default position between sub-sequences. In this work, however, we learn to solve the deep exploration problem by directly controlling joints.

Concurrently to our work, Fontaine et al. (2019) proposed a similar algorithm called CMA-ME, in which MAP-Elites is combined with CMA-ES, another algorithm from the ES family (Hansen, 2016). In addition to a performance objective, CMA-ME considers two others. The improvement objective selects for controllers that discover new cells or improve over already-populated cells. The random direction objective rewards controllers for minimizing the distance between their BC and a behavioral target vector randomly sampled from the behavioral space. Although the latter objective does encourage exploration, it is not explicitly directed towards novelty as in ME-ES explore, and it is not decoupled from performance. Additionally, although CMA-ES is generally recognized as a very powerful algorithm, in practice it is limited to comparatively low-dimensional parameter spaces due to the quadratic complexity of its covariance matrix updates.

Because ME-ES only affects the optimization procedure of MAP-Elites, it can be used for any application leveraging behavioral maps. In addition to the demonstrated applications for exploration (Section 4.2) and damage adaptation (Section 4.1 and Cully et al. (2015)), previous works have used MAP-Elites for maze navigation (Pugh et al., 2015) and for exploring the behavioral space of soft robotic arms and the design space of soft robots (Mouret and Clune, 2015). MAP-Elites has also been used to co-evolve a repertoire of diverse images matching each of the ImageNet categories (Nguyen et al., 2015) and to generate diverse collections of 3D objects (Lehman et al., 2016). The improved optimization efficiency of ME-ES in building repertoire maps might benefit each of these applications.

ME-ES could also leverage advances proposed for either MAP-Elites or ES. Vassiliades et al. (2017), for example, propose to use a centroidal Voronoi tessellation to scale MAP-Elites to high-dimensional behavioral spaces, while Chatzilygeroudis et al. (2018) improve on the behavioral adaptation procedure to enable adaptation without the need to reset the environment (or for humans to intervene in the case of real robots). Finally, Gajewski et al. (2019) recently proposed to replace the performance objective of ES with an evolvability objective, which aims to maximize the effect of parameter perturbations on controller behavior. Evolvability can make a good exploration objective, because perturbing evolvable controllers is likely to increase the speed at which a behavioral map can be explored. More generally, ME-ES could leverage a set of various objectives in parallel. As the bottleneck lies in the evaluation of the offspring, various objectives (novelty, performance, evolvability, etc.) can be computed and optimized for simultaneously from the same evaluations. Each resulting controller can then be evaluated and considered as a candidate for the behavioral map updates, at a negligible increase in cost.
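The observation that several objectives can be derived from the same rollouts can be sketched concretely: one evaluation yields a fitness and a BC, and both the performance and the novelty objectives are computed from that single pair. The novelty definition below is the standard novelty-search one (mean distance to the k nearest archived BCs); names and k are illustrative:

```python
import numpy as np

def novelty(bc, archive, k=3):
    """Novelty of a behavior characterization `bc`: mean Euclidean
    distance to its k nearest neighbors in the BC archive."""
    d = np.sort(np.linalg.norm(np.asarray(archive) - np.asarray(bc), axis=1))
    return d[:k].mean()

def score(fitness, bc, archive):
    """Several objectives derived from a single controller evaluation,
    so that additional objectives cost no additional rollouts."""
    return {"performance": fitness, "novelty": novelty(bc, archive)}
```

Any of the resulting scores can then drive the ES update, while the (fitness, BC) pair itself remains a candidate for the behavioral map.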

In the deep exploration experiment, ME-ES could also be extended to perform hierarchical evolution. In such an algorithm, solutions are chains of controllers, where each link is a controller run for a given amount of time. After sampling a cell and retrieving the associated chain, the algorithm can either mutate the last controller of the chain or add a new controller to it. This would implement the go-explore strategy proposed in Ecoffet et al. (2019), where the agent first returns to a behavioral cell before exploring further.
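The chain mutation described above admits a very small sketch: after retrieving a chain from a sampled cell, either its last link is mutated or a fresh controller is appended so that exploration continues from the cell the chain already reaches. All names and the extension probability below are hypothetical:

```python
import random

def mutate_chain(chain, mutate_controller, new_controller,
                 p_extend=0.5, rng=random):
    """Hierarchical-evolution sketch: a solution is a chain of
    controllers, each run for a fixed time budget. With probability
    `p_extend`, append a fresh controller (explore further from the
    reached cell, go-explore style); otherwise mutate the last link."""
    chain = list(chain)  # copy: the archived chain stays intact
    if rng.random() < p_extend:
        chain.append(new_controller())
    else:
        chain[-1] = mutate_controller(chain[-1])
    return chain
```

Keeping the archived chain unmodified matters here: the prefix of the chain is what "returns" the agent to the cell before exploration resumes.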


References


  1. Surprise-based intrinsic motivation for deep reinforcement learning. arXiv preprint arXiv:1703.01732. Cited by: §1.
  2. A survey of evolution strategies. In Proceedings of the fourth international conference on genetic algorithms, Vol. 2. Cited by: §2.2.
  3. Active learning of inverse models with intrinsically motivated goal exploration in robots. Robotics and Autonomous Systems 61 (1), pp. 49–73. Cited by: §1.
  4. Intrinsic motivation and reinforcement learning. In Intrinsically motivated learning in natural and artificial systems, pp. 17–47. Cited by: §1.
  5. The arcade learning environment: an evaluation platform for general agents. Journal of Artificial Intelligence Research 47, pp. 253–279. Cited by: §1.
  6. Unifying count-based exploration and intrinsic motivation. In Advances in neural information processing systems, pp. 1471–1479. Cited by: §1.
  7. MULEX: disentangling exploitation from exploration in deep rl. arXiv preprint arXiv:1907.00868. Cited by: §1.
  8. Openai gym. arXiv preprint arXiv:1606.01540. Cited by: §1, §4.1.1, §4.2.1.
  9. Exploration by random network distillation. arXiv preprint arXiv:1810.12894. Cited by: §1.
  10. Reset-free trial-and-error learning for robot damage recovery. Robotics and Autonomous Systems 100, pp. 236–250. Cited by: §5.
  11. Gep-pg: decoupling exploration and exploitation in deep reinforcement learning algorithms. arXiv preprint arXiv:1802.05054. Cited by: §1.
  12. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. In Advances in Neural Information Processing Systems, pp. 5027–5038. Cited by: §1, §1, §3.3.2, §4.2.1, §4.2.4.
  13. Robots that can adapt like animals. Nature 521 (7553), pp. 503. Cited by: §1, §3.2, §5.
  14. Quality and diversity optimization: a unifying modular framework. IEEE Transactions on Evolutionary Computation 22 (2), pp. 245–259. Cited by: §1, §1, §1, §1, §4.
  15. Go-explore: a new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995. Cited by: §5.
  16. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126–1135. Cited by: §1.
  17. Covariance matrix adaptation for the rapid illumination of behavior space. arXiv preprint arXiv:1912.02400. Cited by: §5.
  18. Intrinsically motivated goal exploration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190. Cited by: §1.
  19. Meta learning shared hierarchies. arXiv preprint arXiv:1710.09767. Cited by: §4.2.1, §5.
  20. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477. Cited by: §1, §1, §4.1.2.
  21. Evolvability es: scalable and direct optimization of evolvability. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 107–115. Cited by: §5.
  22. Adversarial policies: attacking deep reinforcement learning. arXiv preprint arXiv:1905.10615. Cited by: §1.
  23. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. Cited by: §3.1.
  24. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290. Cited by: §4.1.2.
  25. The cma evolution strategy: a tutorial. arXiv preprint arXiv:1604.00772. Cited by: §5.
  26. Vime: variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109–1117. Cited by: §1.
  27. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.1.
  28. Map-based multi-policy reinforcement learning: enhancing adaptability of robots by deep reinforcement learning. arXiv preprint arXiv:1710.06117. Cited by: §1.
  29. Asymptotically efficient adaptive allocation rules. Advances in applied mathematics 6 (1), pp. 4–22. Cited by: §3.2.
  30. Creative generation of 3d objects with deep learning and innovation engines. In Proceedings of the 7th International Conference on Computational Creativity, Cited by: §5.
  31. Exploiting open-endedness to solve problems through the search for novelty.. In ALIFE, pp. 329–336. Cited by: §1, §1, §3.1, §4.
  32. Evolving a diversity of virtual creatures through novelty search and local competition. In Proceedings of the 13th annual conference on Genetic and evolutionary computation, pp. 211–218. Cited by: §1, §1, §1, §1, §3.1, §4.
  33. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: §1, §1.
  34. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. Cited by: §1, §1.
  35. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909. Cited by: §1, §1, §1, §1, §2.1, §4, §5.
  36. Innovation engines: automated creativity and improved stochastic optimization via deep learning. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 959–966. Cited by: §5.
  37. Gotta learn fast: a new benchmark for generalization in rl. arXiv preprint arXiv:1804.03720. Cited by: §1.
  38. Intrinsic motivation systems for autonomous mental development. IEEE transactions on evolutionary computation 11 (2), pp. 265–286. Cited by: §1.
  39. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16–17. Cited by: §1, §1.
  40. Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments. arXiv preprint arXiv:1910.07224. Cited by: §1.
  41. Searching for quality diversity when diversity is unaligned with quality. In International Conference on Parallel Problem Solving from Nature, pp. 880–889. Cited by: §1.
  42. Confronting the challenge of quality diversity. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 967–974. Cited by: §5.
  43. Evolutionsstrategie—optimierung technischer systeme nach prinzipien der biologischen information. Stuttgart-Bad Cannstatt: Friedrich Frommann Verlag. Cited by: §2.2.
  44. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864. Cited by: §1, §2.2, §2.2.
  45. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the international conference on simulation of adaptive behavior: From animals to animats, pp. 222–227. Cited by: §1.
  46. Parameter-exploring policy gradients. Neural Networks 23 (4), pp. 551–559. Cited by: §2.2.
  47. Model-based active exploration. arXiv preprint arXiv:1810.12162. Cited by: §1.
  48. Deep neuroevolution: genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567. Cited by: §1, §1, §3.3.1, §4.1.4.
  49. Reinforcement learning: an introduction. MIT press. Cited by: §1.
  50. Using centroidal voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm. IEEE Transactions on Evolutionary Computation 22 (4), pp. 623–630. Cited by: §5.
  51. Natural evolution strategies. In 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3381–3387. Cited by: §1, §2.2.
  52. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8 (3-4), pp. 229–256. Cited by: §2.2.
  53. Scheduled intrinsic drive: a hierarchical take on intrinsically motivated exploration. arXiv preprint arXiv:1903.07400. Cited by: §1.