Evolutionary Reinforcement Learning

Shauharda Khadka   Kagan Tumer
Collaborative Robotics and Intelligent Systems Institute
Oregon State University
{khadkas,kagan.tumer}@oregonstate.edu
Abstract

Deep Reinforcement Learning (DRL) algorithms have been successfully applied to a range of challenging control tasks. However, these methods typically suffer from three core difficulties: temporal credit assignment with sparse rewards, lack of effective exploration, and brittle convergence properties that are extremely sensitive to hyperparameters. Collectively, these challenges severely limit the applicability of these approaches to real world problems. Evolutionary Algorithms (EAs), a class of black box optimization techniques inspired by natural evolution, are well suited to address each of these three challenges. However, EAs typically suffer from high sample complexity and struggle to solve problems that require optimization of a large number of parameters. In this paper, we introduce Evolutionary Reinforcement Learning (ERL), a hybrid algorithm that leverages the population of an EA to provide diversified data to train an RL agent, and reinserts the RL agent into the EA population periodically to inject gradient information into the EA. ERL inherits EA’s ability of temporal credit assignment with a fitness metric, effective exploration with a diverse set of policies, and stability of a population-based approach and complements it with off-policy DRL’s ability to leverage gradients for higher sample efficiency and faster learning. Experiments in a range of challenging continuous control benchmark tasks demonstrate that ERL significantly outperforms prior DRL and EA methods, achieving state-of-the-art performances.

 


1 Introduction

Reinforcement learning (RL) algorithms have been successfully applied in a number of challenging domains, ranging from arcade games Mnih et al. (2015, 2016) and board games Silver et al. (2016) to robotic control tasks Andrychowicz et al. (2017); Lillicrap et al. (2015). A primary driving force behind the explosion of RL in these domains is its integration with powerful non-linear function approximators like deep neural networks. This partnership with deep learning, often referred to as Deep Reinforcement Learning (DRL), has enabled RL to successfully extend to tasks with high-dimensional input and action spaces. However, widespread adoption of these techniques to real-world problems is still limited by three major challenges: temporal credit assignment with long time horizons and sparse rewards, lack of diverse exploration, and brittle convergence properties.

First, associating actions with returns when a reward is sparse (only observed after a series of actions) is difficult. This is a common occurrence in most real world domains and is often referred to as the temporal credit assignment problem Sutton and Barto (1998). Temporal Difference methods in RL use bootstrapping to address this issue but often struggle when the time horizons are long and the reward is sparse. Multi-step returns address this issue but are mostly effective in on-policy scenarios De Asis et al. (2017); Schulman et al. (2015a, b). Off-policy multi-step learning Mahmood et al. (2017); Sherstan et al. (2018) has been demonstrated to be stable in recent work but requires complementary correction mechanisms like importance sampling, Retrace Munos (2016); Wang et al. (2016) and V-trace Espeholt et al. (2018), which can be computationally expensive and limiting.

Secondly, RL relies on exploration to find good policies and avoid converging prematurely to local optima. Effective exploration remains a key challenge for DRL operating on high dimensional action and state spaces Plappert et al. (2017). Many methods have been proposed to address this issue, ranging from count-based exploration Ostrovski et al. (2017); Tang et al. (2017) and intrinsic motivation Bellemare et al. (2016) to curiosity Pathak et al. (2017) and variational information maximization Houthooft et al. (2016). A separate class of techniques emphasizes exploration by adding noise directly to the parameter space of agents Fortunato et al. (2017); Plappert et al. (2017). However, each of these techniques either relies on complex supplementary structures or introduces sensitive parameters that are task-specific. A general strategy for exploration that is applicable across domains and learning algorithms remains an active area of research.

Finally, DRL methods are notoriously sensitive to the choice of their hyperparameters Henderson et al. (2017); Islam et al. (2017) and often have brittle convergence properties Haarnoja et al. (2018). This is particularly true for off-policy DRL methods that utilize a replay buffer to store and reuse past experiences Bhatnagar et al. (2009). The replay buffer is a vital component in enabling sample-efficient learning, but pairing it with a deep non-linear function approximator leads to extremely brittle convergence properties Duan et al. (2016); Haarnoja et al. (2018).

One approach well suited to address these challenges in theory is evolutionary algorithms (EA) Fogel (2006); Spears et al. (1993). The use of a fitness metric that consolidates returns across an entire episode makes EAs invariant to sparse rewards with long time horizons Fortunato et al. (2017). EA’s population-based approach also has the advantage of enabling diverse exploration, particularly when combined with explicit diversity maintenance techniques Cully et al. (2015); Lehman and Stanley (2008). Additionally, the redundancy inherent in a population promotes robustness and stable convergence properties, particularly when combined with elitism Ahn and Ramakrishna (2003). A number of recent works have used EA as an alternative to DRL with some success Conti et al. (2017); Gangwani and Peng (2017); Salimans et al. (2017); Such et al. (2017). However, EAs typically suffer from high sample complexity and often struggle to solve high dimensional problems that require optimization of a large number of parameters. The primary reason behind this is EA’s inability to leverage powerful gradient descent methods which are at the core of the more sample-efficient DRL approaches.

Figure 1: High level schematic of ERL highlighting the incorporation of EA’s population-based learning with DRL’s gradient based optimization.

In this paper, we introduce Evolutionary Reinforcement Learning (ERL), a hybrid algorithm that incorporates EA’s population-based approach to generate diverse experiences to train an RL agent, and transfers the RL agent into the EA population periodically to inject gradient information into the EA. The key insight here is that an EA can be used to address the core challenges within DRL without losing out on the ability to leverage gradients for higher sample efficiency. ERL inherits EA’s ability to address temporal credit assignment by its use of a fitness metric that consolidates the return of an entire episode. ERL’s selection operator, which operates based on this fitness, exerts a selection pressure towards regions of the policy space that lead to higher episode-wide return. This process biases the state distribution towards regions that have higher long term returns. This is a form of implicit prioritization that is effective for domains with long time horizons and sparse rewards. Additionally, ERL inherits EA’s population based approach leading to redundancies that serve to stabilize the convergence properties and make the learning process more robust. ERL also uses the population to combine exploration in the parameter space with exploration in the action space, leading to diverse policies that explore the domain effectively.

Figure 1 illustrates ERL’s double layered learning approach where the same set of data (experiences) generated by the evolutionary population is used by the reinforcement learner. The recycling of the same data also enables maximal information extraction from individual experiences leading to improved sample efficiency. Experiments in a range of challenging continuous control benchmark tasks demonstrate that ERL achieves state-of-the-art performances, significantly outperforming prior DRL and EA methods.

2 Background

A standard reinforcement learning setting is formalized as a Markov Decision Process (MDP) and consists of an agent interacting with an environment E over a number of discrete time steps. At each time step $t$, the agent receives a state $s_t$ and maps it to an action $a_t$ using its policy $\pi$. The agent receives a scalar reward $r_t$ and moves to the next state $s_{t+1}$. The process continues until the agent reaches a terminal state, marking the end of an episode. The return $R_t$ is the total accumulated discounted reward from time step $t$ with discount factor $\gamma$. The goal of the agent is to maximize the expected return. The action-value function $Q^{\pi}(s,a)$ describes the expected return from state $s$ after taking action $a$ and subsequently following policy $\pi$.
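Written out explicitly in standard notation (with $T$ denoting the terminal time step of the episode), these quantities are

$$R_t = \sum_{k=t}^{T} \gamma^{\,k-t} r_k, \qquad \gamma \in (0, 1], \qquad Q^{\pi}(s, a) = \mathbb{E}\big[\, R_t \mid s_t = s,\; a_t = a,\; \pi \,\big].$$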

2.1 Deep Deterministic Policy Gradient (DDPG)

Policy gradient methods frame the goal of maximizing return as the minimization of a loss function $L(\theta)$, where $\theta$ parameterizes the agent. A widely used policy gradient method is Deep Deterministic Policy Gradient (DDPG) Lillicrap et al. (2015), a model-free RL algorithm developed for continuous, high dimensional action spaces. DDPG uses an actor-critic architecture Sutton and Barto (1998), maintaining a deterministic policy (actor) $\mu(s|\theta^{\mu})$ and an action-value function approximation (critic) $Q(s,a|\theta^{Q})$. The critic’s job is to approximate the actor’s action-value function $Q^{\mu}$. Both the actor and the critic are parameterized by (deep) neural networks with weights $\theta^{\mu}$ and $\theta^{Q}$, respectively. A separate copy of the actor and critic networks is kept as target networks for stability. These networks are updated periodically using the actor and critic networks, modulated by a weighting parameter $\tau$.

A behavioral policy is used to explore during training. The behavioral policy is simply a noisy version of the policy: $\mu'(s_t) = \mu(s_t|\theta^{\mu}) + \mathcal{N}_t$, where $\mathcal{N}_t$ is temporally correlated noise generated using the Ornstein-Uhlenbeck process Uhlenbeck and Ornstein (1930). The behavioral policy is used to generate experience in the environment. After each action, the tuple $(s_t, a_t, r_t, s_{t+1})$ containing the current state, actor’s action, observed reward and the next state, respectively, is saved into a cyclic replay buffer $R$. The actor and critic networks are updated by randomly sampling mini-batches of size $T$ from $R$. The critic is trained by minimizing the loss function:

$$L = \frac{1}{T} \sum_i \big( y_i - Q(s_i, a_i|\theta^{Q}) \big)^2, \quad \text{where } y_i = r_i + \gamma Q'\big(s_{i+1}, \mu'(s_{i+1}|\theta^{\mu'}) \,\big|\, \theta^{Q'}\big)$$

The actor is trained using the sampled policy gradient:

$$\nabla_{\theta^{\mu}} J \approx \frac{1}{T} \sum_i \nabla_a Q(s,a|\theta^{Q})\big|_{s=s_i,\, a=\mu(s_i)} \, \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})\big|_{s=s_i}$$

The sampled policy gradient with respect to the actor’s parameters $\theta^{\mu}$ is computed by backpropagation through the combined actor and critic network.
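To make these two updates concrete, the following is a minimal PyTorch sketch of a DDPG-style update step. The network sizes, activation choices, and the omission of terminal-state masking in the bootstrapped target are simplifications for illustration, not the exact implementation used here; the batch is assumed to be a tuple of (state, action, reward, next_state) tensors sampled from the replay buffer.

```python
# Minimal DDPG-style update sketch (illustrative sizes and hyperparameters).
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.Tanh(),
                                 nn.Linear(128, 128), nn.Tanh(),
                                 nn.Linear(128, action_dim), nn.Tanh())
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.Tanh(),
                                 nn.Linear(256, 1))
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.001):
    s, a, r, s_next = batch  # tensors sampled from the replay buffer
    # Critic: regress Q(s, a) onto y = r + gamma * Q'(s', mu'(s')).
    # (Terminal-state masking is omitted here for brevity.)
    with torch.no_grad():
        y = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()
    # Actor: ascend the sampled policy gradient, i.e. maximize Q(s, mu(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    # Soft-update the target networks towards the learned networks.
    for t_p, p in zip(target_actor.parameters(), actor.parameters()):
        t_p.data.mul_(1 - tau).add_(tau * p.data)
    for t_p, p in zip(target_critic.parameters(), critic.parameters()):
        t_p.data.mul_(1 - tau).add_(tau * p.data)
```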

2.2 Evolutionary Algorithm

Evolutionary algorithms (EAs) are a class of search algorithms with three primary operators: new solution generation, solution alteration, and selection Fogel (2006); Spears et al. (1993). These operations are applied on a population of candidate solutions to continually generate novel solutions while probabilistically retaining promising ones. The selection operation is generally probabilistic, where solutions with higher fitness values have a higher probability of being selected. Assuming higher fitness values are representative of good solution quality, the overall quality of solutions will improve with each passing generation. In this work, each individual in the evolutionary algorithm defines a deep neural network. Mutation represents random perturbations to the weights (genes) of these neural networks. The evolutionary framework used here is closely related to evolving neural networks, and is often referred to as neuroevolution Floreano et al. (2008); Lüders et al. (2017); Risi and Togelius (2017); Stanley and Miikkulainen (2002).
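As an illustration of how these operators look when each individual is a neural network, the sketch below implements Gaussian mutation of the weights and fitness-based tournament selection. The noise scale, mutation fraction and tournament size are illustrative choices, not the settings used in our experiments.

```python
# Sketch of neuroevolution operators: Gaussian weight mutation and
# tournament selection over a population of torch.nn.Module actors.
import copy
import random
import torch

def mutate(actor, noise_std=0.1, mut_frac=0.1):
    """Return a copy of the actor with a random fraction of each weight
    tensor perturbed by zero-mean Gaussian noise."""
    child = copy.deepcopy(actor)
    with torch.no_grad():
        for param in child.parameters():
            mask = (torch.rand_like(param) < mut_frac).float()
            param.add_(mask * torch.randn_like(param) * noise_std)
    return child

def tournament_select(population, fitnesses, tournament_size=3):
    """Pick the fittest member of a random subset of the population."""
    contenders = random.sample(range(len(population)), tournament_size)
    winner = max(contenders, key=lambda i: fitnesses[i])
    return population[winner]
```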

3 Motivating Example

Figure 2: Comparative performance of DDPG, EA and ERL in a (left) standard and (right) hard Inverted Double Pendulum Task. DDPG solves the standard task easily but fails at the hard task. Both tasks are equivalent for the EA. ERL is able to inherit the best of DDPG and EA, successfully solving both tasks similar to EA while leveraging gradients for greater sample efficiency similar to DDPG.

Consider the standard Inverted Double Pendulum task from OpenAI gym Brockman et al. (2016), a classic continuous control benchmark. Here, an inverted double pendulum starts in a random position, and the goal of the controller is to keep it upright. The task has low-dimensional state and action spaces and is a fairly easy problem to solve for most modern algorithms. Figure 2 (left) shows the comparative performance of DDPG, EA and our proposed approach: Evolutionary Reinforcement Learning (ERL), which combines the mechanisms within EA and DDPG. Unsurprisingly, both ERL and DDPG solve the task quickly. EA also solves the task eventually but is much less sample efficient, requiring substantially more episodes. ERL and DDPG are able to leverage gradients, which enables faster learning, while EA, without access to gradients, is slower.

We introduce the hard Inverted Double Pendulum by modifying the original task such that the reward is disbursed to the controller only at the end of the episode. During an episode, which can last up to 1000 timesteps, the controller gets a reward of 0 at each step except for the last one, where the cumulative reward is given to the agent. Since the agent does not get regular feedback on its actions but has to wait a long time for it, the task poses an extremely difficult temporal credit assignment challenge.
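The modification can be expressed as a thin environment wrapper that withholds the per-step reward and pays out the accumulated sum only on the final step. The sketch below assumes the classic Gym API returning (state, reward, done, info) tuples; the environment id in the usage comment is the standard Gym name and may differ by Gym version.

```python
# Sketch of the "hard" reward modification: 0 reward at every intermediate
# step, cumulative reward disbursed on the final step of the episode.
import gym

class DeferredRewardWrapper(gym.Wrapper):
    def reset(self, **kwargs):
        self._accumulated = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        self._accumulated += reward
        # Withhold the reward until the episode terminates.
        return state, (self._accumulated if done else 0.0), done, info

# Example (assumed environment id):
# env = DeferredRewardWrapper(gym.make("InvertedDoublePendulum-v2"))
```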

Figure 2 (right) shows the comparative performance of the three algorithms in the hard Inverted Double Pendulum task. Since EA does not use intra-episode interactions and computes fitness only from the cumulative reward of the episode, the hard Inverted Double Pendulum task is equivalent to its standard instance for an EA learner. EA retains its performance from the standard task and solves the hard variant just as it did the standard one. DDPG, on the other hand, fails to solve the task entirely. The deceptiveness and sparsity of the reward, where the agent has to wait until the end of the episode to receive a useful feedback signal, creates a difficult temporal credit assignment problem that DDPG is unable to deal with effectively. In contrast, ERL, which inherits the temporal credit assignment benefits of an encompassing fitness metric from EA, is able to successfully solve the task. Even though the reward is sparse and deceptive, ERL’s selection operator provides a selection pressure for policies with high episode-wide return (fitness). This biases the distribution of states stored in the buffer towards states with higher long term payoff, enabling ERL to successfully solve the task. Additionally, ERL is able to leverage gradients, which allows it to solve the task in far fewer episodes than EA requires. This result highlights the key capability of ERL: combining mechanisms within EA and DDPG to achieve the best of both approaches.

4 Evolutionary Reinforcement Learning

1: Initialize actor $\pi_{rl}$ and critic $Q_{rl}$ with weights $\theta^{\pi}$ and $\theta^{Q}$, respectively
2: Initialize target actor $\pi_{rl}'$ and target critic $Q_{rl}'$ with weights $\theta^{\pi'}$ and $\theta^{Q'}$, respectively
3: Initialize a population of $k$ actors $pop_{\pi}$ and an empty cyclic replay buffer R
4: Define an Ornstein-Uhlenbeck noise generator $\mathcal{O}$ and a random number generator $r() \in [0,1)$
5: for generation = 1, $\infty$ do
6:     for actor $\pi \in pop_{\pi}$ do
7:         fitness, R = Evaluate($\pi$, R, noise=None, $\xi$)
8:     end for
9:     Rank the population based on fitness scores
10:    Select the first $e$ actors $\pi \in pop_{\pi}$ as elites, where $e$ = int($\psi \cdot k$)
11:    Select $(k - e)$ actors $\pi$ from $pop_{\pi}$ to form Set $S$ using tournament selection
12:    for Actor $\pi \in$ Set $S$ do
13:        if $r() <$ mut_prob then
14:            Randomly sample and perturb weights in $\theta^{\pi}$ with Gaussian noise
15:        end if
16:    end for
17:    _, R = Evaluate($\pi_{rl}$, R, noise=$\mathcal{O}$, $\xi$=1)
18:    Sample a random minibatch of T transitions $(s_i, a_i, r_i, s_{i+1})$ from R
19:    Compute $y_i = r_i + \gamma Q_{rl}'\big(s_{i+1}, \pi_{rl}'(s_{i+1}|\theta^{\pi'}) \,\big|\, \theta^{Q'}\big)$
20:    Update $Q_{rl}$ by minimizing the loss: $L = \frac{1}{T}\sum_i \big(y_i - Q_{rl}(s_i, a_i|\theta^{Q})\big)^2$
21:    Update $\pi_{rl}$ using the sampled policy gradient:
22:        $\nabla_{\theta^{\pi}} J \approx \frac{1}{T}\sum_i \nabla_a Q_{rl}(s,a|\theta^{Q})\big|_{s=s_i,\, a=\pi_{rl}(s_i)} \, \nabla_{\theta^{\pi}} \pi_{rl}(s|\theta^{\pi})\big|_{s=s_i}$
23:    Soft update target networks: $\theta^{\pi'} \Leftarrow \tau\theta^{\pi} + (1-\tau)\theta^{\pi'}$ and $\theta^{Q'} \Leftarrow \tau\theta^{Q} + (1-\tau)\theta^{Q'}$
24:    if generation mod $\omega$ = 0 then
25:        Copy the RL actor into the population: $\pi_j \leftarrow \pi_{rl}$, for the weakest $\pi_j \in pop_{\pi}$
26:    end if
27: end for
Algorithm 1 Evolutionary Reinforcement Learning
1: procedure Evaluate($\pi$, R, noise, $\xi$)
2:     fitness = 0
3:     for i = 1:$\xi$ do
4:         Reset environment and get initial state $s_0$
5:         while env is not done do
6:             Select action $a_t = \pi(s_t|\theta^{\pi})$ + noise$_t$
7:             Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$
8:             Append transition $(s_t, a_t, r_t, s_{t+1})$ to R
9:             fitness $\leftarrow$ fitness + $r_t$ and $s_t \leftarrow s_{t+1}$
10:        end while
11:    end for
12:    Return fitness/$\xi$, R
13: end procedure
Algorithm 2 Function Evaluate

The principal idea behind Evolutionary Reinforcement Learning (ERL) is to incorporate EA’s population-based approach to generate a diverse set of experiences while leveraging powerful gradient based methods from DRL to learn from them. In this work, we instantiate ERL by combining a standard EA with DDPG but any off-policy reinforcement learner that utilizes an actor-critic architecture can be used.

The general flow of the ERL algorithm proceeds as follows: a population of actor networks is initialized with random weights. In addition to the population, one additional actor network (referred to as $\pi_{rl}$ henceforth) is initialized alongside a critic network. The population of actors ($\pi_{rl}$ excluded) is then evaluated in an episode of interaction with the environment. The fitness of each actor is computed as the cumulative sum of the rewards received over the timesteps in that episode. A selection operator then selects a portion of the population for survival with probability commensurate with their relative fitness scores. The actors in the population are then probabilistically perturbed by mutating their neural weights using zero-mean Gaussian noise to create the next generation of actors. A select portion of actors with the highest relative fitness are preserved as elites and are shielded from the mutation step.

EA → RL: The procedure up to this point is reminiscent of a standard EA. However, unlike an EA, which only learns between episodes using a coarse feedback signal (the fitness score), ERL additionally learns from the experiences within episodes. ERL stores each actor’s experiences, defined by the tuple (current state, action, next state, reward), in its replay buffer. This is done for every interaction, at every timestep, for every episode, and for each of its actors. The critic samples a random minibatch from this replay buffer and uses it to update its parameters using gradient descent. The critic, alongside the minibatch, is then used to train $\pi_{rl}$ using the sampled policy gradient. This is similar to the learning procedure for DDPG, except that the replay buffer has access to the experiences of the entire evolutionary population.

Data Reuse: The replay buffer is the central mechanism that enables the flow of information from the evolutionary population to the RL learner. In contrast to a standard EA, which would extract the fitness metric from these experiences and immediately discard them, ERL retains them in the buffer and engages $\pi_{rl}$ and the critic to learn from them repeatedly using powerful gradient based methods. This mechanism allows for maximal information extraction from each individual experience, leading to improved sample efficiency.

Temporal Credit Assignment: Since fitness scores capture the episode-wide return of an individual, the selection operator exerts a strong pressure to favor individuals with higher episode-wide returns. As the buffer is populated by the experiences collected by these individuals, this process biases the state distribution towards regions that have higher episode-wide return. This serves as a form of implicit prioritization that favors experiences leading to higher long term payoffs and is effective for domains with long time horizons and sparse rewards. An RL learner that learns from this state distribution (replay buffer) is biased towards learning policies that optimize for higher episode-wide return.

Diverse Exploration: A noisy version of $\pi_{rl}$, perturbed using the Ornstein-Uhlenbeck Uhlenbeck and Ornstein (1930) process, is used to generate additional experiences for the replay buffer. In contrast to the population of actors, which explore via noise in their parameter space (neural weights), $\pi_{rl}$ explores through noise in its action space. The two processes complement each other and collectively lead to an effective exploration strategy that is able to better explore the policy space.
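For reference, a minimal sketch of an Ornstein-Uhlenbeck noise generator of the kind used to perturb $\pi_{rl}$’s actions is given below; the $\theta$ and $\sigma$ values shown are common defaults rather than tuned settings.

```python
# Sketch of a temporally correlated Ornstein-Uhlenbeck noise process.
import numpy as np

class OUNoise:
    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.ones(action_dim) * mu

    def reset(self):
        self.state[:] = self.mu

    def sample(self):
        # dx = theta * (mu - x) + sigma * N(0, 1): noise is correlated over time.
        dx = (self.theta * (self.mu - self.state)
              + self.sigma * np.random.randn(*self.state.shape))
        self.state = self.state + dx
        return self.state
```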

RL → EA: Periodically, the $\pi_{rl}$ network’s weights are copied into the evolving population of actors, a step referred to as synchronization. The frequency of synchronization controls the flow of information from the RL learner to the evolutionary population. This is the core mechanism that enables the evolutionary framework to directly leverage the information learned through gradient descent. The process of infusing the policy learned by $\pi_{rl}$ into the population also serves to stabilize learning and make it more robust to deception. If the policy learned by $\pi_{rl}$ is good, it will be selected to survive and extend its influence to the population over subsequent generations. However, if it is bad, it will simply be selected against and discarded. This mechanism ensures that the flow of information from $\pi_{rl}$ to the evolutionary population is constructive rather than disruptive. This is particularly relevant for domains with sparse rewards and deceptive local minima, which gradient based methods can be highly susceptible to.
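A minimal sketch of this synchronization step, assuming the population is a list of actor networks and the fitness scores from the most recent generation are available:

```python
# Copy the RL actor's weights over the least-fit member of the population.
def synchronize(rl_actor, population, fitnesses):
    weakest = min(range(len(population)), key=lambda i: fitnesses[i])
    population[weakest].load_state_dict(rl_actor.state_dict())
```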

Algorithms 1 and 2 provide detailed pseudocode of the ERL algorithm using DDPG as its RL component. The Adam Kingma and Ba (2014) optimizer was used for both the actor $\pi_{rl}$ and the critic. The size of the population $k$ was set to 10, while the elite fraction $\psi$ varied from 0.1 to 0.3 across tasks. The number of trials conducted to compute a fitness score, $\xi$, ranged from 1 to 5 across tasks, and the synchronization period $\omega$ ranged from 1 to 10 across tasks. The discount rate $\gamma$ was set to 0.99. The remaining hyperparameters, including the learning rates, replay buffer size, batch size, target weight $\tau$ and mutation probability, were held fixed across tasks. A detailed description of the hyperparameters used for ERL is included in the appendix.
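Putting the pieces together, the following Python skeleton outlines one ERL generation at the level of Algorithm 1. It reuses the helpers sketched earlier (mutate, tournament_select, ddpg_update, OUNoise, synchronize) and assumes an evaluate() helper implementing Algorithm 2 (returning the averaged fitness while appending every transition to the shared replay buffer) as well as a cfg object holding the hyperparameters; it is a schematic of the control flow, not a complete implementation.

```python
# Schematic of one ERL generation, mirroring Algorithm 1.
def erl_generation(generation, population, rl_actor, critic,
                   target_actor, target_critic, actor_opt, critic_opt,
                   replay_buffer, env, ou_noise, cfg):
    # 1. Evaluate the population without action noise; all experiences are stored.
    fitnesses = [evaluate(actor, env, replay_buffer, noise=None, trials=cfg.xi)
                 for actor in population]

    # 2. Rank by fitness, preserve elites, and refill the rest of the population
    #    with mutated offspring of tournament-selected parents. (Here every
    #    offspring is mutated; the paper applies mutation probabilistically.)
    order = sorted(range(len(population)), key=lambda i: fitnesses[i], reverse=True)
    n_elites = int(cfg.elite_frac * len(population))
    elites = [population[i] for i in order[:n_elites]]
    offspring = [mutate(tournament_select(population, fitnesses))
                 for _ in range(len(population) - n_elites)]
    population[:] = elites + offspring

    # 3. Roll out the RL actor with Ornstein-Uhlenbeck action noise, then train it
    #    off-policy on minibatches drawn from the shared replay buffer.
    evaluate(rl_actor, env, replay_buffer, noise=ou_noise, trials=1)
    for _ in range(cfg.grad_steps):
        batch = replay_buffer.sample(cfg.batch_size)
        ddpg_update(rl_actor, critic, target_actor, target_critic,
                    actor_opt, critic_opt, batch, gamma=cfg.gamma, tau=cfg.tau)

    # 4. Periodically copy the RL actor over the weakest member of the population.
    if generation % cfg.omega == 0:
        synchronize(rl_actor, population, fitnesses)
```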

5 Experiments

(a) HalfCheetah
(b) Swimmer
(c) Reacher
(d) Ant
(e) Hopper
(f) Walker2D
Figure 3: Learning curves on Mujoco based continuous control benchmarks.

Domain: We evaluated the performance of ERL agents on continuous control tasks simulated using Mujoco Todorov et al. (2012). These are benchmarks used widely in the field Duan et al. (2016); Henderson et al. (2017); Such et al. (2017); Schulman et al. (2017) and are hosted through the OpenAI gym Brockman et al. (2016).

Compared Baselines: We compare the performance of ERL with a standard neuroevolutionary algorithm (EA), DDPG Lillicrap et al. (2015) and Proximal Policy Optimization (PPO) Schulman et al. (2017). DDPG and PPO are state-of-the-art deep reinforcement learning algorithms of the off-policy and on-policy variety, respectively. PPO builds on the Trust Region Policy Optimization (TRPO) algorithm Schulman et al. (2015a). ERL is implemented using PyTorch Paszke et al. (2017), while OpenAI Baselines Dhariwal et al. (2017) was used to implement PPO and DDPG. The hyperparameters for both algorithms were set to match the original papers, except that a larger batch size was used for DDPG, which was shown to improve performance in Islam et al. (2017).

Methodology for Reported Metrics: For DDPG and PPO, the actor network was periodically tested on a set of task instances without any exploratory noise. The average score was then logged as its performance. For ERL, during each training generation, the actor network with the highest fitness was selected as the champion. The champion was then tested on the same number of task instances, and the average score was logged. This protocol was implemented to shield the reported metrics from any bias of the population size. Note that all scores are compared against the number of steps taken in the environment. Each step is defined as an instance where the agent takes an action and receives a reward from the environment. To make the comparisons fair across single-agent and population-based algorithms, the steps taken by all actors in the population are counted cumulatively. For example, one episode of HalfCheetah consists of 1000 steps; for a population of 10 actors, each generation consists of evaluating every actor in an episode, which incurs 10,000 steps. We conduct five independent statistical runs with varying random seeds, and report the average with error bars logging the standard deviation.

Results: Figure 3 shows the comparative performance of ERL, EA, DDPG and PPO. The performances of DDPG and PPO were verified to match the ones reported in their original papers Lillicrap et al. (2015); Schulman et al. (2017) and subsequent works that implemented them Duan et al. (2016); Gu et al. (2017); Haarnoja et al. (2018); Islam et al. (2017). ERL significantly outperforms DDPG across all the benchmarks. Notably, ERL is able to learn on the 3D quadruped locomotion Ant benchmark, where DDPG normally fails to make any learning progress Duan et al. (2016); Gu et al. (2017); Haarnoja et al. (2018). ERL also consistently outperforms EA across all but the Swimmer environment, where the two algorithms perform approximately equivalently. Considering that ERL is built primarily from the subcomponents of these two algorithms, this is an important result. Additionally, ERL significantly outperforms PPO in 4 out of the 6 benchmark environments. The quantitative results obtained for ERL also compare very favorably with results reported for prior methods Duan et al. (2016); Gu et al. (2017); Haarnoja et al. (2018); Wu et al. (2017), indicating that ERL achieves state-of-the-art performance on these benchmarks (videos of learned policies are available at https://tinyurl.com/erl-mujoco).

Figure 4: Ablation experiments with the selection operator removed. NS indicates ERL without the selection operator.

The two exceptions are Hopper and Walker2D, where ERL eventually matches and exceeds PPO’s performance but is less sample efficient. A common theme in these two environments is early termination of an episode if the agent falls over. Both environments also disburse a constant small reward for each step of survival to encourage the agent to hold its balance. Since EA selects for episode-wide return, this reward structure creates a strong local minimum for a policy that simply survives by balancing while staying still. This is the exact behavior EA converges to in both environments. However, while ERL is initially confined by the local minimum’s strong basin of attraction, it eventually breaks free from it by virtue of its RL components: temporally correlated exploration in the action space and policy gradients based on experience batches sampled randomly from the replay buffer. This highlights the core aspect of ERL: incorporating the mechanisms within EA and policy gradient methods to achieve the best of both approaches.

Ablation Experiments: We use an ablation experiment to test the value of the selection operator, which is the core mechanism for experience selection within ERL. Figure 4 shows the comparative results in the HalfCheetah and Swimmer benchmarks. The performance for each benchmark was normalized by the best score achieved using the full ERL algorithm (Figure 3). The results demonstrate that the selection operator is a crucial part of ERL. Removing the selection operator (NS variants) leads to a significant degradation in learning performance (approximately 80%) across both benchmarks.

Note on runtime: On average, ERL incurred only a modest runtime overhead relative to DDPG. The majority of the added computation stems from the mutation operator, whose cost is minimal in comparison to gradient descent. Additionally, these comparisons are based on an implementation of ERL without any parallelization. We anticipate that a parallelized implementation of ERL would run significantly faster, as corroborated by previous work on population-based approaches Conti et al. (2017); Salimans et al. (2017); Such et al. (2017).

6 Related Work

Using evolutionary algorithms to complement reinforcement learning, and vice versa, is not a new idea. Stafylopatis and Blekas combined the two using a Learning Classifier System for autonomous car control Stafylopatis and Blekas (1998). Whiteson and Stone used NEAT Stanley and Miikkulainen (2002), an evolutionary algorithm that evolves both neural topology and weights, to optimize function approximators representing the value function in Q-learning Whiteson and Stone (2006). From an evolutionary perspective, combining RL with EA is closely related to the idea of incorporating learning with evolution Ackley and Littman (1991); Drugan (2018); Turney et al. (1996). Fernando et al. leveraged a similar idea to tackle catastrophic forgetting in transfer learning Fernando et al. (2017) and to construct differentiable pattern-producing networks capable of discovering CNN architectures automatically Fernando et al. (2016).

Recently, there has been a renewed push in the use of evolutionary algorithms as alternatives to (Deep) Reinforcement Learning Risi and Togelius (2017). Salimans et al. used a class of EAs called Evolution Strategies (ES) to achieve results competitive with DRL in Atari and robotic control tasks Salimans et al. (2017). The authors achieved significant improvements in clock time by using over a thousand parallel workers, highlighting the scalability of ES approaches. Similar scalability and competitive results were demonstrated by Such et al. using a genetic algorithm Such et al. (2017). A companion paper applied novelty search Lehman and Stanley (2008) and Quality Diversity Cully et al. (2015); Pugh et al. (2016) to ES to improve exploration Conti et al. (2017). EAs have also been widely used to optimize deep neural network architectures and hyperparameters Jaderberg et al. (2017); Liu et al. (2017). Conversely, ideas within RL have also been used to improve EAs. Gangwani and Peng devised a genetic algorithm using imitation learning and policy gradients as its crossover and mutation operators, respectively Gangwani and Peng (2017). These approaches are complementary to the ERL framework and can be readily combined with it for potentially further improved performance.

7 Discussion

We presented ERL, a hybrid algorithm that leverages the population of an EA to generate diverse experiences to train an RL agent, and periodically reinserts the RL agent into the EA population to inject gradient information into the EA. ERL inherits EA’s invariance to sparse rewards with long time horizons, ability for diverse exploration, and the stability of a population-based approach, and complements it with DRL’s ability to leverage gradients for lower sample complexity. Additionally, ERL recycles the data generated by the evolutionary population and leverages the replay buffer to learn from them repeatedly, allowing maximal information extraction from each experience and leading to improved sample efficiency. Results in a range of challenging continuous control benchmarks demonstrate that ERL outperforms state-of-the-art DRL algorithms including PPO and DDPG.

From a reinforcement learning perspective, ERL can be viewed as a form of ‘population-driven guide’ that biases exploration towards states with higher long-term returns, promotes diversity of explored policies, and introduces redundancies for stability. From an evolutionary perspective, ERL can be viewed as a Lamarckian mechanism that enables incorporation of powerful gradient based methods to learn at the resolution of an agent’s individual experiences. In general, RL methods learn from an agent’s life (individual experience tuples collected by the agent) whereas EA methods learn from an agent’s death (fitness metric accumulated over a full episode). The principal mechanism behind ERL is the capability to incorporate both modes of learning: learning directly from the high resolution of individual experiences while being aligned to maximize long term return by leveraging the low resolution fitness metric.

In this paper, we used a standard EA as the evolutionary component of ERL. Incorporating more complex evolutionary sub-mechanisms is an exciting area of future work. Some examples include incorporating crossover and better mutation operators Gangwani and Peng (2017), adaptive exploration noise Fortunato et al. (2017); Plappert et al. (2017), and explicit diversity maintenance techniques Conti et al. (2017); Cully et al. (2015); Lehman and Stanley (2008); Such et al. (2017). Other avenues of future work include incorporating implicit curriculum-based techniques like Hindsight Experience Replay Andrychowicz et al. (2017) and information-theoretic techniques Eysenbach et al. (2018); Haarnoja et al. (2018) to further improve exploration.

References

  • Ackley and Littman [1991] D. Ackley and M. Littman. Interactions between learning and evolution. Artificial life II, 10:487–509, 1991.
  • Ahn and Ramakrishna [2003] C. W. Ahn and R. S. Ramakrishna. Elitism-based compact genetic algorithms. IEEE Transactions on Evolutionary Computation, 7(4):367–385, 2003.
  • Andrychowicz et al. [2017] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. P. Abbeel, and W. Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048–5058, 2017.
  • Ba et al. [2016] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • Bellemare et al. [2016] M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471–1479, 2016.
  • Bhatnagar et al. [2009] S. Bhatnagar, D. Precup, D. Silver, R. S. Sutton, H. R. Maei, and C. Szepesvári. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pages 1204–1212, 2009.
  • Brockman et al. [2016] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
  • Conti et al. [2017] E. Conti, V. Madhavan, F. P. Such, J. Lehman, K. O. Stanley, and J. Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. arXiv preprint arXiv:1712.06560, 2017.
  • Cully et al. [2015] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret. Robots that can adapt like animals. Nature, 521(7553):503, 2015.
  • De Asis et al. [2017] K. De Asis, J. F. Hernandez-Garcia, G. Z. Holland, and R. S. Sutton. Multi-step reinforcement learning: A unifying algorithm. arXiv preprint arXiv:1703.01327, 2017.
  • Dhariwal et al. [2017] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu. Openai baselines. https://github.com/openai/baselines, 2017.
  • Drugan [2018] M. M. Drugan. Reinforcement learning versus evolutionary computation: A survey on hybrid algorithms. Swarm and Evolutionary Computation, 2018.
  • Duan et al. [2016] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pages 1329–1338, 2016.
  • Espeholt et al. [2018] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
  • Eysenbach et al. [2018] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.
  • Fernando et al. [2016] C. Fernando, D. Banarse, M. Reynolds, F. Besse, D. Pfau, M. Jaderberg, M. Lanctot, and D. Wierstra. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, pages 109–116. ACM, 2016.
  • Fernando et al. [2017] C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, and D. Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017.
  • Floreano et al. [2008] D. Floreano, P. Dürr, and C. Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1(1):47–62, 2008.
  • Fogel [2006] D. B. Fogel. Evolutionary computation: toward a new philosophy of machine intelligence, volume 1. John Wiley & Sons, 2006.
  • Fortunato et al. [2017] M. Fortunato, M. G. Azar, B. Piot, J. Menick, I. Osband, A. Graves, V. Mnih, R. Munos, D. Hassabis, O. Pietquin, et al. Noisy networks for exploration. arXiv preprint arXiv:1706.10295, 2017.
  • Gangwani and Peng [2017] T. Gangwani and J. Peng. Genetic policy optimization. arXiv preprint arXiv:1711.01012, 2017.
  • Gu et al. [2017] S. Gu, T. Lillicrap, R. E. Turner, Z. Ghahramani, B. Schölkopf, and S. Levine. Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning. In Advances in Neural Information Processing Systems, pages 3849–3858, 2017.
  • Haarnoja et al. [2018] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
  • Henderson et al. [2017] P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger. Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560, 2017.
  • Houthooft et al. [2016] R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. De Turck, and P. Abbeel. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pages 1109–1117, 2016.
  • Islam et al. [2017] R. Islam, P. Henderson, M. Gomrokchi, and D. Precup. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. arXiv preprint arXiv:1708.04133, 2017.
  • Jaderberg et al. [2017] M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
  • Kingma and Ba [2014] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Lehman and Stanley [2008] J. Lehman and K. O. Stanley. Exploiting open-endedness to solve problems through the search for novelty. In ALIFE, pages 329–336, 2008.
  • Lillicrap et al. [2015] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  • Liu et al. [2017] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436, 2017.
  • Lüders et al. [2017] B. Lüders, M. Schläger, A. Korach, and S. Risi. Continual and one-shot learning through neural networks with dynamic external memory. In European Conference on the Applications of Evolutionary Computation, pages 886–901. Springer, 2017.
  • Mahmood et al. [2017] A. R. Mahmood, H. Yu, and R. S. Sutton. Multi-step off-policy learning without importance sampling ratios. arXiv preprint arXiv:1702.03006, 2017.
  • Mnih et al. [2015] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  • Mnih et al. [2016] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016.
  • Munos [2016] R. Munos. Q(λ) with off-policy corrections. In Algorithmic Learning Theory: 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings, volume 9925, page 305. Springer, 2016.
  • Ostrovski et al. [2017] G. Ostrovski, M. G. Bellemare, A. v. d. Oord, and R. Munos. Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310, 2017.
  • Paszke et al. [2017] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
  • Pathak et al. [2017] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), volume 2017, 2017.
  • Plappert et al. [2017] M. Plappert, R. Houthooft, P. Dhariwal, S. Sidor, R. Y. Chen, X. Chen, T. Asfour, P. Abbeel, and M. Andrychowicz. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905, 2017.
  • Pugh et al. [2016] J. K. Pugh, L. B. Soros, and K. O. Stanley. Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI, 3:40, 2016.
  • Risi and Togelius [2017] S. Risi and J. Togelius. Neuroevolution in games: State of the art and open challenges. IEEE Transactions on Computational Intelligence and AI in Games, 9(1):25–41, 2017.
  • Salimans et al. [2017] T. Salimans, J. Ho, X. Chen, and I. Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
  • Schulman et al. [2015a] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889–1897, 2015a.
  • Schulman et al. [2015b] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.
  • Schulman et al. [2017] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
  • Sherstan et al. [2018] C. Sherstan, B. Bennett, K. Young, D. R. Ashley, A. White, M. White, and R. S. Sutton. Directly estimating the variance of the lambda-return using temporal-difference methods. arXiv preprint arXiv:1801.08287, 2018.
  • Silver et al. [2016] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Spears et al. [1993] W. M. Spears, K. A. De Jong, T. Bäck, D. B. Fogel, and H. De Garis. An overview of evolutionary computation. In European Conference on Machine Learning, pages 442–459. Springer, 1993.
  • Stafylopatis and Blekas [1998] A. Stafylopatis and K. Blekas. Autonomous vehicle navigation using evolutionary reinforcement learning. European Journal of Operational Research, 108(2):306–318, 1998.
  • Stanley and Miikkulainen [2002] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99–127, 2002.
  • Such et al. [2017] F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
  • Sutton and Barto [1998] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
  • Tang et al. [2017] H. Tang, R. Houthooft, D. Foote, A. Stooke, O. X. Chen, Y. Duan, J. Schulman, F. DeTurck, and P. Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. In Advances in Neural Information Processing Systems, pages 2750–2759, 2017.
  • Todorov et al. [2012] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
  • Turney et al. [1996] P. Turney, D. Whitley, and R. W. Anderson. Evolution, learning, and instinct: 100 years of the baldwin effect. Evolutionary Computation, 4(3):iv–viii, 1996.
  • Uhlenbeck and Ornstein [1930] G. E. Uhlenbeck and L. S. Ornstein. On the theory of the brownian motion. Physical review, 36(5):823, 1930.
  • Wang et al. [2016] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224, 2016.
  • Whiteson and Stone [2006] S. Whiteson and P. Stone. Evolutionary function approximation for reinforcement learning. Journal of Machine Learning Research, 7(May):877–917, 2006.
  • Wu et al. [2017] Y. Wu, E. Mansimov, R. B. Grosse, S. Liao, and J. Ba. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In Advances in neural information processing systems, pages 5285–5294, 2017.

Appendix A Experimental Details

This section details the hyperparameters used for Evolutionary Reinforcement Learning (ERL) across all benchmarks. The hyperparameters that were kept consistent across all tasks are listed below.

  • Population size k = 10
    This parameter controls the number of different individuals (actors) that are present in the evolutionary population at any given time. It also modulates the proportion of exploration carried out through noise in the actors' parameter space versus noise in the action space. For example, with a population size of 10, in every generation 10 actors explore through noise in their parameter space (mutation) while 1 actor ($\pi_{rl}$) explores through noise in its action space.

  • Target weight
    This parameter controls the magnitude of the soft update between the actor and critic networks and their target counterparts.

  • Actor Learning Rate =
    This parameter controls the learning rate of the actor network.

  • Critic Learning Rate =
    This parameter controls the learning rate of the critic network.

  • Discount Rate = 0.99
    This parameter controls the discount rate used to compute the TD-error.

  • Replay Buffer Size =
    This parameter controls the size of the replay buffer. After the buffer is filled, the oldest experiences are deleted in order to make room for new ones.

  • Batch Size =
    This parameter controls the batch size used to compute the gradients.

  • Actor Neural Architecture = [128, 128]
    The actor network consists of two hidden layers, each with 128 nodes. Layer normalization Ba et al. [2016] was used before each layer.

  • Critic Neural Architecture = [(200, 200), 300]
    The critic network consists of two hidden layers, with 400 and 300 nodes, respectively. However, the first hidden layer is not fully connected to the entirety of the network input. Unlike the actor, the critic takes in both state and action as input. The state and action vectors are each fully connected to a sub-hidden layer of 200 nodes. The two sub-hidden layers are then concatenated to form the first hidden layer of 400 nodes. Layer normalization Ba et al. [2016] was used before each layer.

Table 1 details the hyperparameters that were varied across tasks.

Parameter HalfCheetah Swimmer Reacher Ant Hopper Walker2D
Elite Fraction 0.1 0.1 0.2 0.3 0.3 0.2
Number of Trials 1 1 5 1 5 3
Synchronization Period 10 10 10 1 1 10
Table 1: Hyperparameters for ERL that were varied across tasks.
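For convenience, the same per-task settings can be expressed as a configuration dictionary; the layout is illustrative, while the values are those of Table 1.

```python
# Per-task ERL hyperparameters from Table 1:
# (elite fraction, number of trials xi, synchronization period omega)
ERL_TASK_CONFIG = {
    "HalfCheetah": (0.1, 1, 10),
    "Swimmer":     (0.1, 1, 10),
    "Reacher":     (0.2, 5, 10),
    "Ant":         (0.3, 1, 1),
    "Hopper":      (0.3, 5, 1),
    "Walker2D":    (0.2, 3, 10),
}
```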

Elite Fraction

The elite fraction controls the fraction of the population that are categorized as elites. Since an elite individual (actor) is shielded from the mutation step and preserved as it is, the elite fraction modulates the degree of exploration/exploitation within the evolutionary population. In general, tasks with more stochastic dynamics (correlating with more contact points) have a higher variance in fitness values. A higher elite fraction in these tasks helps in reducing the probability of losing good actors due to high variance in fitness, promoting stable learning.

Number of Trials

The number of trials (full episodes) conducted in an environment to compute a fitness score is given by $\xi$. For example, if $\xi$ is 5, each individual is tested on 5 full episodes of a task, and its cumulative score is averaged across the episodes to compute its fitness score. This is a mechanism to reduce the variance of the fitness score assigned to each individual (actor). In general, tasks with higher stochasticity are assigned a higher $\xi$ to reduce this variance. Note that all steps taken during each episode count towards the agent's total steps (the x-axis of the comparative results shown in the paper) for a fair comparison.

Synchronization Period

This parameter, denoted $\omega$, controls the frequency of information flow from $\pi_{rl}$ to the evolutionary population. A higher $\omega$ generally allows more time for expansive exploration by the evolutionary population, while a lower $\omega$ allows for a narrower search. The parameter also controls how frequently the exploration in action space ($\pi_{rl}$) shares information with the exploration in parameter space (the actors in the evolutionary population).
