Learning Self-Imitating Diverse Policies


Tanmay Gangwani
Dept. of Computer Science, UIUC
gangwan2@uiuc.edu

Qiang Liu
Dept. of Computer Science, UT Austin
lqiang@cs.utexas.edu

Jian Peng
Dept. of Computer Science, UIUC
jianpeng@uiuc.edu
Abstract

Deep reinforcement learning algorithms, including policy gradient methods and Q-learning, have been widely applied to a variety of decision-making problems. Their success has relied heavily on well-designed dense reward signals, and they therefore often perform poorly in sparse or episodic reward settings. Trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not exploit the temporal structure of the problem and often suffer from high sample complexity. Scaling RL algorithms to real-world problems with sparse or episodic rewards is therefore a pressing need. In this work, we present a new perspective on policy optimization and introduce a self-imitation learning algorithm that exploits and explores well in sparse and episodic reward settings. First, we view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem. Then, we show that, with the Jensen-Shannon divergence, this divergence minimization problem can be reduced to a policy-gradient algorithm with dense rewards learned from experience replays. Experimental results indicate that our algorithm performs comparably to existing algorithms in the dense reward setting, and significantly better in sparse and episodic settings. To encourage exploration, we further apply Stein variational policy gradient with a Jensen-Shannon kernel to learn multiple diverse policies and demonstrate its effectiveness on a number of challenging tasks.


1 Introduction

Deep reinforcement learning (RL) has recently demonstrated significant applicability and superior performance on many problems that were out of reach for traditional methods, such as computer and board games (Mnih et al., 2015; Silver et al., 2016), continuous control (Lillicrap et al., 2015), and robotics (Levine et al., 2016). With deep neural networks, such as convolutional neural networks, used as function approximators, many classical reinforcement learning algorithms have proven very effective in solving sequential decision problems. For example, a policy that selects actions given a state observation can be parameterized by a deep neural network that takes the current observation as input and outputs an action or a distribution over actions. Value functions that take both a state observation and an action as input and predict the expected future reward can also be parameterized as neural networks. In order to optimize such neural networks, policy gradient methods (Mnih et al., 2016; Schulman et al., 2015, 2017a) and Q-learning algorithms (Mnih et al., 2015) exploit the temporal structure of the sequential decision problem and decompose learning into single-timestep optimization, supervised by the immediate and discounted future reward from rollout data.

Unfortunately, when the reward signal becomes sparse or delayed, these RL algorithms may suffer from inferior performance and poor sample efficiency, mainly due to the scarcity of immediate supervision when training happens in a single-timestep manner. For instance, consider the Atari game Montezuma's Revenge – a reward is received only after collecting certain items or arriving at the final destination of the lowest level, while no reward is received as the agent works toward these goals. The sparsity of the reward makes neural network training very inefficient and also poses challenges for exploration. Many real-world problems similarly have sparse or even episodic rewards, where a non-zero reward is received only at the end of a trajectory.

In addition to policy gradient and Q-learning methods, alternative algorithms from global or stochastic optimization have recently been studied for policy search (Salimans et al., 2017; Such et al., 2017). These algorithms do not decompose trajectories into single timesteps, but instead apply zeroth-order finite-difference or gradient-free methods to learn policies based on the total reward received over each entire trajectory. Typically, random trajectories are first generated by running the current policy, and the policy parameters are then updated according to the reward values received along those trajectories. The cross-entropy method and evolution strategies are two notable examples. Although their sample efficiency is often not comparable to policy gradient methods in the dense reward setting, they are more widely applicable in sparse or episodic reward settings, since only the trajectory-based total reward is needed.

In this work, we introduce a new algorithm that exploits and explores well in sparse and episodic reward settings. First, we view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem between the current policy and a set of experience-replay trajectories with high returns. Then, we show that, with the Jensen-Shannon (JS) divergence, this divergence minimization problem can be reduced to a policy-gradient algorithm with dense rewards learned from these experience replays. The algorithm can be seen as self-imitation learning, in which the expert trajectories are collected from self-generated experience replays rather than from external demonstrations. It can also be viewed as a cross-entropy method, but in policy space instead of parameter space: different trajectories in the experience replay can be interpreted as representatives of different policies, and the best among these are combined to influence the current policy.

To encourage exploration, we further apply Stein variational policy gradient with a Jensen-Shannon kernel to simultaneously learn multiple diverse policies. Experimental results indicate that our algorithm performs comparably to existing algorithms in the dense reward setting and significantly better in sparse and episodic settings. We also demonstrate the effectiveness of this exploration scheme on a number of challenging tasks.

Related Work.   The cross-entropy method (CEM; Rubinstein & Kroese, 2016) is a stochastic optimization procedure that can be used to search the policy space without needing policy gradients or value functions. CEM has been shown to work on simple RL tasks such as grid-worlds (Mannor et al., 2003), but not on high-dimensional continuous-control tasks.

Learning from Demonstrations (LfD). The objective in LfD, or imitation learning, is to train a control policy that produces a trajectory distribution similar to the demonstrator's. Approaches for self-driving cars (Bojarski et al., 2016) and drone control (Ross et al., 2013) have used human-expert data together with the behavioral cloning algorithm to learn good control policies. Deep Q-learning has been combined with human demonstrations to achieve performance gains in Atari (Hester et al., 2017) and robotics tasks (Večerík et al., 2017; Nair et al., 2017). Human data has also been used in the maximum entropy IRL framework to learn cost functions under which the demonstrations are optimal (Ho & Ermon, 2016; Finn et al., 2016). Besides humans, other sources of expert supervision include planning-based approaches such as iLQR (Levine et al., 2016) and MCTS (Silver et al., 2016). Our algorithm departs from prior work in forgoing external supervision, instead using the past experiences of the learner itself as demonstration data.

Exploration and Diversity in RL. Count-based exploration methods utilize state-action visitation counts and award a bonus to rarely visited states (Strehl & Littman, 2008). In large state-spaces, approximation techniques (Tang et al., 2017) and the estimation of pseudo-counts with learned density models (Bellemare et al., 2016; Fu et al., 2017) have been studied. Intrinsic motivation has been shown to aid exploration, for instance by using information gain (Houthooft et al., 2016) or prediction error (Stadie et al., 2015) as a bonus. Hindsight Experience Replay (Andrychowicz et al., 2017) adds additional goals (and corresponding rewards) to a Q-learning algorithm. We also obtain additional rewards, but from a discriminator trained on past agent experiences, to accelerate a policy-gradient algorithm. Prior work has looked at training a diverse ensemble of agents with good exploratory skills (Liu et al., 2017; Conti et al., 2017; Florensa et al., 2017). To enjoy the benefits of diversity, we incorporate a modification of SVPG (Liu et al., 2017) in our final algorithm.

2 Main Methods

We start with a brief introduction to RL in Section 2.1, and then introduce our main self-imitation learning algorithm in Section 2.2. Section 2.3 further extends the method to learn multiple diverse policies using Stein variational policy gradient with the Jensen-Shannon kernel.

2.1 Reinforcement Learning Background

A typical RL setting involves an environment modeled as a Markov Decision Process with an unknown system dynamics model p(s'|s,a) and an initial state distribution ρ_0(s). An agent interacts sequentially with the environment in discrete time-steps using a policy π, which maps the current observation s_t to either a single action a_t (deterministic policy) or a distribution over the action space (stochastic policy). We consider the scenario of stochastic policies over high-dimensional, continuous state and action spaces. The agent receives a per-step reward r(s_t, a_t), and the RL objective involves maximization of the expected discounted sum of rewards, η(π) = E_π[Σ_{t≥0} γ^t r(s_t, a_t)], where γ is the discount factor. The action-value function is Q_π(s_t, a_t) = E_π[Σ_{l≥0} γ^l r(s_{t+l}, a_{t+l})]. We define the unnormalized γ-discounted state-visitation distribution for a policy π by ρ_π(s) = Σ_{t≥0} γ^t P(s_t = s | π), where P(s_t = s | π) is the probability of being in state s at time t when following π from a start state drawn from ρ_0. The expected policy return can then be written as η(π) = E_{(s,a)∼ρ_π}[r(s,a)], where ρ_π(s,a) = ρ_π(s)π(a|s) is the state-action visitation distribution. Using the policy gradient theorem (Sutton et al., 2000), we obtain the direction of ascent ∇_θ η(π_θ) = E_{(s,a)∼ρ_{π_θ}}[∇_θ log π_θ(a|s) Q_{π_θ}(s,a)].
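For concreteness, the sketch below illustrates this likelihood-ratio gradient estimator, with Monte-Carlo reward-to-go standing in for Q. It is a simplified illustration in PyTorch under our own assumptions (the experiments in this paper instead use the clipped-surrogate PPO objective); the function names are hypothetical.

```python
# Minimal sketch of the vanilla policy-gradient estimator
#   grad = E_{(s,a)~rho_pi}[ grad log pi(a|s) * Q(s,a) ]
# with discounted reward-to-go as the Q estimate.
import torch

def discounted_returns(rewards, gamma=0.99):
    """Q(s_t, a_t) estimates as discounted reward-to-go."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

def policy_gradient_loss(policy, states, actions, rewards, gamma=0.99):
    """Surrogate loss whose gradient is the policy-gradient estimate."""
    q_values = torch.tensor(discounted_returns(rewards, gamma))
    dist = policy(states)                    # e.g. a torch.distributions.Normal
    log_probs = dist.log_prob(actions).sum(-1)
    # Minimizing this loss ascends E[grad log pi(a|s) * Q(s,a)].
    return -(log_probs * q_values).mean()
```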

2.2 Policy Optimization as Divergence Minimization with Self-Imitation

Although the policy is given as a conditional distribution, its behavior is better characterized by the corresponding state-action visitation distribution ρ_π, which fully determines the expected return via η(π) = E_{(s,a)∼ρ_π}[r(s,a)]. Therefore, distance metrics on policies should be defined with respect to the visitation distribution ρ_π, and policy search should be viewed as finding policies whose visitation distributions yield high reward. Suppose we have access to a good policy π'; it is then natural to consider finding a policy π such that its visitation distribution ρ_π matches ρ_{π'}. To do so, we can define a divergence measure that captures the similarity between two distributions and minimize this divergence for policy improvement.

Assume there exists an expert policy π_E; policy optimization can then be framed as minimizing the divergence D(ρ_{π_θ}, ρ_{π_E}), that is, finding a policy that imitates π_E. In practice, however, we do not have access to any real guiding expert policy. Instead, we maintain a selected subset M of highly-rewarded trajectories from the previous roll-outs, and optimize the policy to minimize the divergence between ρ_{π_θ} and the empirical state-action pair distribution of M, which we denote ρ̃_{π_E} (treating the replay memory as samples from an implicit expert policy π_E):

\min_{\theta} \; D\big(\rho_{\pi_\theta}(s,a),\; \tilde{\rho}_{\pi_E}(s,a)\big)    (1)

Since it is not always possible to explicitly formulate ρ_{π_θ} even with the exact functional form of π_θ, we generate rollouts from π_θ in the environment and obtain an empirical estimate of ρ_{π_θ}. To measure the divergence between two empirical distributions, we use the Jensen-Shannon divergence (D_JS), with the following variational form, as exploited in generative adversarial nets (Goodfellow et al., 2014):

D_{JS}\big(\rho_{\pi_\theta}, \rho_{\pi_E}\big) \;=\; \log 2 \;+\; \frac{1}{2}\max_{\tilde{\rho}_{\pi_\theta},\,\tilde{\rho}_{\pi_E}} \Big\{ \mathbb{E}_{(s,a)\sim\rho_{\pi_\theta}}\Big[\log \frac{\tilde{\rho}_{\pi_\theta}(s,a)}{\tilde{\rho}_{\pi_\theta}(s,a)+\tilde{\rho}_{\pi_E}(s,a)}\Big] + \mathbb{E}_{(s,a)\sim\rho_{\pi_E}}\Big[\log \frac{\tilde{\rho}_{\pi_E}(s,a)}{\tilde{\rho}_{\pi_\theta}(s,a)+\tilde{\rho}_{\pi_E}(s,a)}\Big] \Big\}    (2)

where ρ̃_{π_θ} and ρ̃_{π_E} are empirical density estimators of ρ_{π_θ} and ρ_{π_E}, respectively. As shown in the following theorem, the gradient of D_JS w.r.t. the policy parameters is easy to approximate, enabling us to optimize the policy.

Theorem 1

Let ρ_{π_θ} and ρ_{π'} be the state-action visitation distributions induced by two policies π_θ and π', respectively, where π_θ is parameterized by θ. Then the gradient of D_JS(ρ_{π_θ}, ρ_{π'}) with respect to the policy parameters θ can be approximated as:

\nabla_\theta D_{JS}\big(\rho_{\pi_\theta}, \rho_{\pi'}\big) \;\approx\; -\,\mathbb{E}_{(s,a)\sim\rho_{\pi_\theta}}\Big[\nabla_\theta \log\pi_\theta(a|s)\, Q_{\tilde{r}}(s,a)\Big],

where Q_{r̃} is the action-value function computed with the per-step reward r̃(s,a) = log( ρ̃_{π'}(s,a) / ρ̃_{π_θ}(s,a) ), and ρ̃ denotes the density estimates from equation 2.

Next, we introduce a simple and inexpensive approach to construct the replay memory M from high-return past experiences during training. In this way, π_E can be seen as a mixture of deterministic policies, each representing a delta point mass in trajectory space, or equivalently a finite discrete visitation distribution over state-action pairs. At each iteration, we apply the current policy π_θ to sample a batch of trajectories. We hope to include in M the top-k trajectories (or trajectories with returns above a threshold) generated thus far during the training process. For this, we maintain M as a priority queue that keeps trajectories sorted by total trajectory reward. The reward of each newly sampled trajectory is compared with the current threshold of the priority queue, and M is updated accordingly. The frequency of updates is determined by the exploration capabilities of the agent and the stochasticity in the environment. We find that simply sampling noisy actions from Gaussian policies is sufficient for several locomotion tasks (Section 3). To handle the more challenging variants of these tasks, in the next sub-section, we augment our policy optimization procedure to explicitly encourage diverse policies.
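As an illustration, such a replay memory can be maintained with a small bounded min-heap; the helper class below is our own sketch with hypothetical names, not the exact implementation used in the experiments.

```python
import heapq
import itertools

class PriorityReplay:
    """Keeps the K highest-return trajectories seen so far.

    A min-heap is used so that the lowest-return stored trajectory sits at
    the root; a new trajectory replaces it only if its return is higher.
    """
    def __init__(self, capacity=10):
        self.capacity = capacity
        self._heap = []                    # entries: (total_reward, tie_breaker, trajectory)
        self._counter = itertools.count()  # tie-breaker so trajectories are never compared

    def maybe_add(self, trajectory, total_reward):
        entry = (total_reward, next(self._counter), trajectory)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif total_reward > self._heap[0][0]:   # beats the current threshold
            heapq.heapreplace(self._heap, entry)

    def state_actions(self):
        """All (s, a) pairs in memory: the empirical 'expert' distribution."""
        return [sa for _, _, traj in self._heap for sa in traj]
```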

This approach is closely related to the imitation learning paradigm, where expert demonstrations are available (from external sources) as samples from the visitation distribution ρ_{π_E} of an expert policy π_E. It can therefore be viewed as a self-imitation learning algorithm that uses its own experience replay as demonstrations. As noted in Theorem 1, minimizing D_JS(ρ_{π_θ}, ρ_{π_E}) by gradient descent amounts to a vanilla policy-gradient update in which the reward at each timestep is replaced by r̃(s,a) = log( ρ̃_{π_E}(s,a) / ρ̃_{π_θ}(s,a) ). It is therefore natural to interpolate the gradient of D_JS and the standard policy gradient:

\nabla_\theta \;=\; (1-\nu)\,\nabla_\theta\,\eta(\pi_\theta) \;-\; \nu\,\nabla_\theta\, D_{JS}\big(\rho_{\pi_\theta},\, \rho_{\pi_E}\big)    (3)

where π_E is the mixture policy represented by the samples in M. The reward r̃(s,a) = log( ρ̃_{π_E}(s,a) / ρ̃_{π_θ}(s,a) ) can be computed with parameterized networks for the densities ρ̃_{π_E} and ρ̃_{π_θ}, obtained by solving the optimization in equation 2 using the current rollouts and M; we let φ denote the parameters of these networks. Using Theorem 1, the interpolated gradient can be further simplified to:

\nabla_\theta \;=\; \mathbb{E}_{(s,a)\sim\rho_{\pi_\theta}}\Big[\nabla_\theta \log\pi_\theta(a|s)\,\big((1-\nu)\, Q_{r}(s,a) \;+\; \nu\, Q_{\tilde{r}}(s,a)\big)\Big]    (4)

where Q_{r̃} is the action-value function calculated using r̃ as the reward. This reward is high in regions of the state-action space frequented more by the expert than the learner, and low in regions visited more by the learner than the expert. The effective action-value in equation 4 is therefore an interpolation between Q_r, obtained with real environment rewards, and Q_{r̃}, obtained with rewards that are implicitly shaped to guide the learner towards expert behavior. In environments with sparse or deceptive rewards, where the signal from r is weak or sub-optimal, a higher value of ν enables successful learning, as we show in our experiments. Also, since ρ̃_{π_θ} and ρ̃_{π_E} are continuously evolving, the shaped reward r̃ is non-stationary.
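In practice, the ratio log(ρ̃_{π_E}/ρ̃_{π_θ}) can be realized with a single discriminator trained by logistic regression on (s, a) pairs from M versus the current rollouts, since the optimal discriminator's logit recovers exactly this log-ratio. The sketch below (PyTorch, with hypothetical class and function names, and network sizes that are our own assumption) shows the discriminator, its log-loss, and the interpolation of equation 4; it is an illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Classifier between replay (s,a) pairs (label 1) and policy (s,a) pairs (label 0)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))   # logit of D(s,a)

def discriminator_loss(disc, replay_sa, policy_sa):
    """Binary log-loss of Eq. (2): replay samples labeled 1, current-policy samples 0."""
    bce = nn.BCEWithLogitsLoss()
    loss_expert = bce(disc(*replay_sa), torch.ones(replay_sa[0].shape[0], 1))
    loss_policy = bce(disc(*policy_sa), torch.zeros(policy_sa[0].shape[0], 1))
    return loss_expert + loss_policy

def shaped_reward(disc, s, a):
    """logit(D) = log D - log(1-D), which approximates log(rho_E / rho_pi) at the optimum."""
    return disc(s, a).detach()

def interpolated_return(env_q, shaped_q, nu):
    """Per-timestep mixture used in the interpolated policy gradient (Eq. 4)."""
    return (1.0 - nu) * env_q + nu * shaped_q
```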

Discussion. Qualitatively, our approach can be related to self-training, a semi-supervised learning algorithm (Chapelle et al., 2009). In self-training, the input is a small amount of labeled data and a larger amount of unlabeled data. Iteratively, the most confident predictions of the model on the unlabeled data are added to the labeled repertoire as pseudo-labels. This provides increased supervision, leading to an even better model. In sparse RL environments, the per-timestep reward is zero for most state-action pairs. Similar to the self-generated pseudo-labels, our approach manufactures dense rewards from the trajectories in M. The dense rewards provide a stronger RL signal to the agent, further improving the data in M, which in turn makes the dense rewards more representative of the task goals. Algorithm 1 in the Appendix outlines the steps for self-imitation.

2.3 Improving Exploration with Stein Variational Gradient

Since the replay memory M is constructed only from past training rollouts, the quality of the trajectories in M hinges on good exploration by the agent. Consider a maze environment where the robot is rewarded only when it arrives at a goal G placed in a far-off corner. Unless the robot reaches G at least once, the trajectories in M always have a total reward of zero, and the learning signal from M is not useful. Another difficult situation arises when there are local optima in the policy optimization landscape that the agent can fall into; for example, assume the maze has a second goal G' in the opposite direction of G, but with a much smaller reward. With simple exploration, the agent may fill M with sub-optimal trajectories leading to G', and the reinforcement from M would drive it further toward G'.

One approach to achieve better exploration in challenging cases like the above is to simultaneously learn multiple diverse policies and enforce them to explore different parts of the high-dimensional space. This can be achieved based on the recent work of Liu et al. (2017) on Stein variational policy gradient (SVPG). The idea of SVPG is to find an optimal distribution q over the policy parameters θ which maximizes the expected policy return, along with an entropy regularization that enforces diversity in the parameter space, i.e.

\max_{q} \;\; \mathbb{E}_{\theta\sim q}\big[\eta(\pi_\theta)\big] \;+\; \alpha\,\mathcal{H}(q).

Without a parametric assumption on q, this is a challenging functional optimization problem. Stein variational gradient descent (SVGD, Liu & Wang (2016)) provides an efficient solution by approximating q with a set of particles {θ_i}, i.e. an ensemble of n policies, which are iteratively updated with

\theta_i \;\leftarrow\; \theta_i \;+\; \frac{\epsilon}{n}\sum_{j=1}^{n}\Big[\nabla_{\theta_j}\Big(\frac{1}{\alpha}\,\eta(\pi_{\theta_j})\Big)\, k(\theta_j, \theta_i) \;+\; \nabla_{\theta_j}\, k(\theta_j, \theta_i)\Big]    (5)

where k(·,·) is a positive definite kernel function. The first term in the update moves the policies toward regions with high expected return (exploitation), while the second term creates a repulsive pressure between policies in the ensemble and encourages diversity (exploration). The choice of kernel is critical. Liu et al. (2017) used a simple Gaussian RBF kernel k(θ_j, θ_i) = exp(−‖θ_j − θ_i‖² / h), with the bandwidth h dynamically adapted. This, however, assumes a flat Euclidean metric between θ_j and θ_i, ignoring the structure of the entities they define, which are probability distributions. A statistical distance, such as the JS divergence, serves as a better metric for comparing policies (Amari, 1998; Kakade, 2002). Motivated by this, we propose to improve SVPG using the JS kernel k(θ_j, θ_i) = exp(−D_JS(ρ_{π_{θ_j}}, ρ_{π_{θ_i}}) / T), where ρ_{π_{θ_i}} is the state-action visitation distribution obtained by running policy π_{θ_i}, and T is a temperature. The exploration term in SVPG involves the gradient of the kernel w.r.t. the policy parameters. With the JS kernel, this requires estimating the gradient of D_JS, which, as shown in Theorem 1, reduces to a vanilla policy gradient with an appropriately trained reward function.
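The following sketch spells out the ensemble update of equation 5 with the JS kernel. It assumes that the exploitation gradients, the pairwise JS estimates, and the gradients of JS w.r.t. the parameters (each obtainable via Theorem 1) are already computed; the NumPy formulation, names, and default values are our own.

```python
import numpy as np

def svpg_js_update(policy_grads, js_matrix, js_grads, alpha=1.0, temperature=1.0, lr=1e-3):
    """SVPG update (Eq. 5) with the JS kernel k = exp(-JS / T).

    policy_grads: (n, d) exploitation gradients  grad_j eta(pi_j)
    js_matrix:    (n, n) estimates of D_JS(rho_j, rho_i)
    js_grads:     (n, n, d), js_grads[j, i] ~= grad_{theta_j} D_JS(rho_j, rho_i)
    Returns (n, d) parameter updates, one per agent in the ensemble.
    """
    n, d = policy_grads.shape
    kernel = np.exp(-js_matrix / temperature)             # k(theta_j, theta_i)
    # grad_{theta_j} k(theta_j, theta_i) = -(k / T) * grad_{theta_j} D_JS(rho_j, rho_i)
    kernel_grads = -(kernel[..., None] / temperature) * js_grads
    updates = np.zeros_like(policy_grads)
    for i in range(n):
        driving = (kernel[:, i, None] * policy_grads / alpha).sum(axis=0)   # exploitation
        repulsion = kernel_grads[:, i, :].sum(axis=0)                       # exploration
        updates[i] = lr * (driving + repulsion) / n
    return updates
```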

Our full algorithm is summarized in Algorithm 2 in the Appendix. We also utilize state-value function networks as baselines to reduce the variance in sampled policy-gradients.

3 Experiments

Our goal in this section is to answer the following questions: 1) Can self-imitation help in scenarios where the agent doesn’t receive a reward at every timestep? 2) How well can the repulsion signal push apart policies in an ensemble, and does this help with exploration in difficult tasks, such as those with local optima? 3) Can a diverse policy ensemble be leveraged in other interesting ways?

We benchmark on high-dimensional, continuous-control locomotion tasks based on the MuJoCo physics simulator, extending the OpenAI Baselines (Dhariwal et al., 2017) framework. Our control policies π_θ are modeled as unimodal Gaussians. All feed-forward networks have two layers of 64 hidden units each with tanh non-linearity. For the policy gradient, we use the clipped-surrogate PPO algorithm (Schulman et al., 2017b). Further implementation details are in the Appendix.
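A sketch of such a policy network is given below (PyTorch is used purely for illustration; the actual implementation builds on OpenAI Baselines, and the state-independent log-std parameterization is our own assumption).

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Unimodal Gaussian policy with two hidden layers of 64 tanh units."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())
```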

Figure 1: Learning curves for PPO and Self-Imitation on tasks with episodic rewards. Mean and standard deviation over 5 random seeds are plotted.
                 Episodic rewards              Masked, sparsest    Masked, intermediate   Default (dense)
                 SI     PPO    CEM    ES       SI      PPO         SI      PPO            SI      PPO
Walker           2996   252    205    1200     2276    2047        3049    3364           3263    3401
Humanoid         3602   532    426    -        4136    1159        4296    3145           3339    4149
H-Standup        1.8e5  4.4e4  9.6e4  -        1.4e5   1.1e5       1.6e5   9.8e4          1.7e5   1.0e5
Hopper           2618   354    97     1900     2381    2264        2137    2132           2700    2252
Swimmer          173    21     17     -        52      37          127     56             106     68
Invd.Pendulum    8668   344    86     9000     8744    8826        8926    8968           8989    8694
Table 1: Performance of PPO (ν = 0) and Self-Imitation (SI, ν = 0.8) on tasks with episodic rewards, and on tasks with per-timestep rewards masked out in each episode with a fixed probability (three masking levels, from sparsest to the dense Gym default). All runs use 5M timesteps of interaction with the environment. ES performance at 5M timesteps is taken from Salimans et al. (2017).

3.1 Self-Imitation with Episodic or Sparse Rewards

We evaluate the performance of self-imitation with a single agent in this section; the combination with SVPG exploration for multiple agents is discussed in Section 3.2. The environments considered are standard Gym locomotion tasks with their default reward function, but with one important modification – rather than providing r(s_t, a_t) at each timestep of an episode, we provide the sum Σ_t r(s_t, a_t) at the last timestep of the episode, and zero reward at all other timesteps. This matches many practical settings where the reward function is hard to design, but scoring an entire trajectory, possibly by a human (Christiano et al., 2017), is feasible. In Figure 1, we plot the learning curves on three tasks with such episodic rewards. Recall that ν is the hyper-parameter controlling the weight distribution between gradients computed with environment rewards and gradients computed with the shaped reward r̃ (Equation 4). The baseline PPO agents use ν = 0, meaning that the entire learning signal comes from the environment. We compare them with self-imitating (SI) agents using a constant value ν = 0.8. The capacity of M is fixed at 10 trajectories. We did not observe our method to be particularly sensitive to the choice of ν or the capacity; further ablations on these two hyper-parameters can be found in the Appendix.
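The episodic-reward modification described above can be implemented as a thin environment wrapper; the sketch below is our own illustration and assumes the classic Gym step API that returns a 4-tuple.

```python
import gym

class EpisodicRewardWrapper(gym.Wrapper):
    """Accumulate per-step rewards and deliver the sum only at the final timestep."""
    def reset(self, **kwargs):
        self._episode_return = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._episode_return += reward
        if done:
            return obs, self._episode_return, done, info
        return obs, 0.0, done, info
```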

In Figure 1, we see that the PPO agents are unable to make any tangible progress on these tasks, possibly due to the difficulty of credit assignment – the lumped reward at the end of the episode cannot be properly attributed to the individual state-action pairs within the episode. With self-imitation, the agents receive dense, per-timestep rewards shaped according to the high-return trajectories in M. This makes credit assignment easier, leading to successful learning even on very high-dimensional control tasks such as Humanoid.

In Table 1, we measure the impact of self-imitation on further variants of the Gym environments, where each per-timestep reward in an episode is masked out with a fixed probability. Reward masking is done independently for every new episode; the agent therefore receives non-zero feedback at different (albeit only a few) timesteps in different episodes. We show the final score, averaged over 5 separate runs, for three masking levels ranging from sparse to dense (the Gym default of no masking). Even though these tasks are richer in RL signal than the episodic case, SI agents (ν = 0.8) achieve a higher average score than the baseline PPO agents (ν = 0) in the majority of the tasks at all masking levels. For the episodic case, we also show the performance of CEM and ES (Salimans et al., 2017), since these algorithms depend only on the total trajectory reward and do not exploit the temporal structure. CEM performs poorly in most cases. ES, while able to solve the tasks, is sample-inefficient. The ES numbers after 5M timesteps of training are taken from Salimans et al. (2017) for a fair comparison with our algorithm.
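The masked-reward variants amount to independently zeroing out each per-timestep reward with a fixed probability; a sketch of such a wrapper (our own illustration, again assuming the classic Gym API):

```python
import numpy as np
import gym

class MaskedRewardWrapper(gym.Wrapper):
    """Independently zero out each per-timestep reward with probability p_mask."""
    def __init__(self, env, p_mask=0.9, seed=0):
        super().__init__(env)
        self.p_mask = p_mask
        self._rng = np.random.RandomState(seed)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if self._rng.rand() < self.p_mask:
            reward = 0.0
        return obs, reward, done, info
```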

Figure 2: SI-independent and SI-interact-JS agents on the Maze environment. (a) SI-independent state-density. (b) SI-interact-JS state-density. (c) SI-independent kernel matrix. (d) SI-interact-JS kernel matrix.

3.2 Characterizing Ensemble of Diverse Self-Imitating Policies

All tasks used in the previous sub-section provide some reward in each episode, either at episode termination or sporadically throughout the episode. We now consider situations where the agent might not receive any useful reward signal in an entire episode, where usefulness is measured relative to the overall task objective. For instance, a reward that drives an agent to a local optimum is not considered useful. We show that independent SI agents are unable to sufficiently explore the state-space, and that augmenting the policy gradient with the repulsion term, as in the SVPG objective, produces diverse policies which solve the tasks.

For didactic purposes, we first consider a simple Maze environment. The start location of the agent (blue particle) is shown in Figure 2, along with two regions – the red region is closer to the agent's start location but yields a per-timestep reward of only 1 point when the agent hovers over it; the green region is on the other side of the wall but yields a per-timestep reward of 10 points. We refer to our algorithm for training a diverse ensemble of self-imitating agents as SI-interact-JS. It is composed of 8 SI agents which share information for gradient calculation. We use a constant temperature T for the JS kernel, and the weight on the exploration-facilitating repulsion term is linearly decayed over training. It is compared to an ensemble of 8 independent SI agents. In Figures 2(a) and 2(b), we plot the state-visitation density for the SI-independent and SI-interact-JS agents, respectively, obtained by sampling a few trajectories towards the end of training. While SI-independent clearly gets trapped in the local optimum, the SI-interact-JS agents explore wider portions of the maze, with multiple agents reaching the green zone of high reward. Figures 2(c) and 2(d) show the kernel matrices for the two ensembles at the end of training. Cell (i, j) in the matrix corresponds to the kernel value k(θ_i, θ_j). For SI-independent, many darker cells indicate that policies are close to one another (low JS). For SI-interact-JS, which explicitly tries to decrease k(θ_i, θ_j), the cells are lighter, an indication of dissimilar policies (high JS). The behavior of PPO-independent (ν = 0) is similar to that of SI-independent (ν = 0.8) on the Maze task.
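The kernel matrices in Figures 2(c) and 2(d) require pairwise estimates of D_JS between the visitation distributions of ensemble members. The sketch below estimates these from sampled (s, a) pairs, using a logistic-regression classifier as the density-ratio estimator purely for illustration; the implementation in this paper instead trains small neural density networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_js(sa_i, sa_j):
    """Estimate D_JS(rho_i, rho_j) from (s,a) samples of two policies."""
    X = np.vstack([sa_i, sa_j])
    y = np.concatenate([np.ones(len(sa_i)), np.zeros(len(sa_j))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_i = np.clip(clf.predict_proba(sa_i)[:, 1], 1e-6, 1 - 1e-6)   # D(s,a) on rho_i samples
    p_j = np.clip(clf.predict_proba(sa_j)[:, 1], 1e-6, 1 - 1e-6)
    # JS(p, q) ~= log 2 + 0.5 * (E_p[log D] + E_q[log(1 - D)])
    return np.log(2.0) + 0.5 * (np.mean(np.log(p_i)) + np.mean(np.log(1.0 - p_j)))

def js_kernel_matrix(sa_batches, temperature=1.0):
    """Pairwise kernel values k(theta_i, theta_j) = exp(-JS_ij / T)."""
    n = len(sa_batches)
    K = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            js = max(estimate_js(sa_batches[i], sa_batches[j]), 0.0)
            K[i, j] = K[j, i] = np.exp(-js / temperature)
    return K
```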

Figure 3: Learning curves for various ensembles on sparse locomotion tasks. Mean and standard-deviation over 3 random seeds are plotted.

The Gym locomotion environments from the previous sub-section do not necessitate a strong exploration strategy, since the agents receive helpful feedback every episode. To appreciate the benefit of the diverse policies produced by SI-interact-JS, we create more challenging versions of the Gym benchmarks as follows – SparseHalfCheetah, SparseHopper and SparseAnt yield a forward-velocity reward only when the center-of-mass of the corresponding bot is beyond a certain threshold distance. At all timesteps, there is an energy penalty for moving the joints, and a survival bonus for bots that can fall over and cause premature episode termination (Hopper, Ant). Figure 3 plots the performance of PPO-independent, SI-independent, SI-interact-JS and SI-interact-RBF (which uses the RBF kernel from Liu et al. (2017) instead of the JS kernel) on the 3 sparse environments. The results are averaged over 3 separate runs, where for each run, the best agent from the ensemble after training is selected.

The SI-independent agents rely solely on action-space noise from the Gaussian policy parameterization to find high-reward trajectories that can be added to M as demonstrations. This is mostly inadequate or slow in sparse environments. Indeed, all demonstrations in M for SparseHopper show the bot standing upright (or tilted) and gathering only the survival bonus, since action-space noise alone cannot discover hopping behavior; for SparseHalfCheetah, M contains trajectories of the bot haphazardly moving back and forth. For SI-interact-JS, the repulsion encourages the agents to explore the state-space much more effectively, leading to faster discovery of quality trajectories, which then provide good reinforcement through self-imitation. SI-interact-RBF does not perform as well, suggesting that the JS kernel is more effective for exploration. PPO-independent agents get stuck in a local optimum for SparseHopper and SparseHalfCheetah – the bot stands still after training, avoiding the energy penalty. For SparseAnt, the bot can cross our preset distance threshold using only action-space noise, but learning is slow due to undirected exploration.

3.3 Leveraging Diverse Policies

The diversity-promoting repulsion can be used for various purposes beyond aiding exploration in the sparse environments considered thus far. First, we consider the paradigm of hierarchical reinforcement learning, wherein multiple sub-policies (or skills) are managed by a high-level policy, which chooses the most apt sub-policy to execute at any given time. In Figure 4, we use the Swimmer environment from Gym and show that diverse skills (movements) can be acquired in a pre-training phase when repulsion is used. The skills can then be used in a difficult downstream task. During pre-training with SVPG, exploitation is done with policy gradients calculated using the norm of the velocity as a dense reward, while the exploration term uses the JS kernel. As before, we compare an ensemble of 8 interacting agents with 8 independent agents. Figures 4(a) and 4(b) depict the paths taken by the Swimmer after training with independent and interacting agents, respectively. The latter exhibit variety. Figure 4(c) shows the downstream task of Swimming+Gathering (Duan et al., 2016), where the bot has to swim and collect the green dots whilst avoiding the red ones. The utility of pre-training a diverse ensemble is shown in Figure 4(d), which plots the performance on this task while training a higher-level categorical manager policy over the pre-trained sub-policies.

Diversity can sometimes also help in learning a skill without any rewards from the environment, as observed by Eysenbach et al. (2018) in recent work. We consider a Hopper task with no rewards, requiring only weak supervision in the form of the length of each trajectory. Using policy gradients with the trajectory length as the reward, together with the repulsion term, we see the emergence of hopping behavior within an ensemble of 8 interacting agents. Videos of the acquired skills can be found at https://sites.google.com/site/tesr4t223424.

Figure 4: Using diverse agents for hierarchical reinforcement learning. (a) Independent agents' paths. (b) Interacting agents' paths. (c) Swimming+Gathering task. (d) Performance of the manager policy with two different pre-trained ensembles as sub-policies.

4 Conclusion

We approached policy optimization for deep RL from the angle of divergence minimization between state-action visitation distributions. This leads to a self-imitation algorithm which improves upon standard policy-gradient methods via the addition of a simple gradient term obtained from implicitly shaped dense rewards. We observe substantial performance gains over the baseline in sparse and episodic reward settings. When used in SVPG with the JS kernel, we demonstrate the emergence of diverse behaviors which enable efficient exploration and help policies avoid poor local optima.

References

  • Amari (1998) Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251–276, 1998.
  • Andrychowicz et al. (2017) Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 5048–5058, 2017.
  • Bellemare et al. (2016) Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471–1479, 2016.
  • Bojarski et al. (2016) Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
  • Chapelle et al. (2009) Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews]. IEEE Transactions on Neural Networks, 20(3):542–542, 2009.
  • Christiano et al. (2017) Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pp. 4302–4310, 2017.
  • Conti et al. (2017) Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. arXiv preprint arXiv:1712.06560, 2017.
  • Dhariwal et al. (2017) Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Openai baselines. https://github.com/openai/baselines, 2017.
  • Duan et al. (2016) Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pp. 1329–1338, 2016.
  • Eysenbach et al. (2018) Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.
  • Finn et al. (2016) Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pp. 49–58, 2016.
  • Florensa et al. (2017) Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. arXiv preprint arXiv:1704.03012, 2017.
  • Fu et al. (2017) Justin Fu, John Co-Reyes, and Sergey Levine. Ex2: Exploration with exemplar models for deep reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2574–2584, 2017.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
  • Hester et al. (2017) Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, et al. Deep q-learning from demonstrations. arXiv preprint arXiv:1704.03732, 2017.
  • Ho & Ermon (2016) Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565–4573, 2016.
  • Houthooft et al. (2016) Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109–1117, 2016.
  • Kakade (2002) Sham M Kakade. A natural policy gradient. In Advances in neural information processing systems, pp. 1531–1538, 2002.
  • Levine et al. (2016) Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
  • Lillicrap et al. (2015) Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  • Liu & Wang (2016) Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose bayesian inference algorithm. In Advances In Neural Information Processing Systems, pp. 2378–2386, 2016.
  • Liu et al. (2017) Yang Liu, Prajit Ramachandran, Qiang Liu, and Jian Peng. Stein variational policy gradient. arXiv preprint arXiv:1704.02399, 2017.
  • Mannor et al. (2003) Shie Mannor, Reuven Y Rubinstein, and Yohai Gat. The cross entropy method for fast policy search. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 512–519, 2003.
  • Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
  • Mnih et al. (2016) Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
  • Nair et al. (2017) Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. arXiv preprint arXiv:1709.10089, 2017.
  • Ross et al. (2013) Stéphane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas Wendel, Debadeepta Dey, J Andrew Bagnell, and Martial Hebert. Learning monocular reactive uav control in cluttered natural environments. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pp. 1765–1772. IEEE, 2013.
  • Rubinstein & Kroese (2016) Reuven Y Rubinstein and Dirk P Kroese. Simulation and the Monte Carlo method, volume 10. John Wiley & Sons, 2016.
  • Salimans et al. (2017) Tim Salimans, Jonathan Ho, Xi Chen, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
  • Schulman et al. (2015) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897, 2015.
  • Schulman et al. (2017a) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017a.
  • Schulman et al. (2017b) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017b.
  • Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484–489, 2016.
  • Stadie et al. (2015) Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
  • Strehl & Littman (2008) Alexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
  • Such et al. (2017) Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
  • Sutton et al. (2000) Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063, 2000.
  • Tang et al. (2017) Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2750–2759, 2017.
  • Večerík et al. (2017) Matej Večerík, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, and Martin Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817, 2017.

Appendix A Supplementary Material

A.1 Algorithm for Self-Imitation

Notation:
θ : policy parameters
φ : self-imitation discriminator parameters
r : environment reward
r̃ : shaped reward from the discriminator
M : replay memory
ν : interpolation coefficient

1   θ, φ ← initial parameters
2   M ← empty replay memory
3   for each iteration do
4       Generate a batch of trajectories with π_θ, recording two rewards for each transition: r and r̃
5       Update M using the priority-queue threshold
        /* Update policy */
6       for each minibatch do
7           Calculate gradient g_r with the PPO objective using reward r
8           Calculate gradient g_r̃ with the PPO objective using reward r̃
9           Update θ with (1 − ν)·g_r + ν·g_r̃ using ADAM
10      end for
        /* Update self-imitation discriminator */
11      for each epoch do
12          Sample a mini-batch of (s,a) from M
13          Sample a mini-batch of (s,a) from the current batch of trajectories
14          Update φ with the log-loss objective using the two mini-batches
15      end for
16  end for
Algorithm 1: Self-Imitation

A.2 Algorithm for Self-Imitating Diverse Policies

Notation (for MPI rank i):
θ_i : policy parameters for rank i
φ_i : self-imitation discriminator parameters for rank i
ψ_i : empirical density network parameters for rank i

/* This is run on every rank i */
1   θ_i, φ_i, ψ_i ← initial parameters
2   M_i ← empty replay memory local to rank i
3   for each iteration do
4       Generate a batch of trajectories with π_{θ_i}
5       Update M_i using the priority-queue threshold
        /* Update policy */
6       for each minibatch do
7           Calculate the self-imitation gradient g_i (as in Algorithm 1)
8           MPI send: g_i to the other ranks
9           MPI recv: g_j from the other ranks
10          Calculate the kernel values k(θ_j, θ_i) using the density networks ψ
11          Use the gradients {g_j}, the kernel values, and lines 8, 10, 11 of SVPG to get the SVPG gradient
12          Update θ_i with the SVPG gradient using ADAM
13      end for
        /* Update self-imitation discriminator */
14      for each epoch do
15          Sample a mini-batch of (s,a) from M_i
16          Sample a mini-batch of (s,a) from the current batch of trajectories
17          Update φ_i with the log-loss objective using the two mini-batches
18      end for
        /* Update state-action visitation (density) networks */
19      MPI send: sampled (s,a) pairs from π_{θ_i} to the other ranks
20      MPI send: the density network parameters ψ_i to the other ranks
21      MPI recv: sampled (s,a) pairs from the other ranks
22      MPI recv: the density network parameters ψ_j from the other ranks
23      Update ψ_i with the log-loss objective using the local and received (s,a) samples
24      Update the kernel values k(θ_i, θ_j) with the refreshed density networks
25  end for
Algorithm 2: Self-Imitating Diverse Policies

A.3 Ablation Studies

We show the sensitivity of self-imitation to ν and to the capacity of M, denoted |M|. The experiments in this subsection are done on the Humanoid and Hopper tasks with episodic rewards. The tables show the average performance over 5 random seeds. For the ablation on ν, |M| is fixed at 10; for the ablation on |M|, ν is fixed at 0.8. With episodic rewards, a higher value of ν helps boost performance, since the RL signal from the environment is weak. With ν fixed at 0.8, there isn't a single best choice for |M|, though all values of |M| give better results than baseline PPO (ν = 0).

Ablation on ν (|M| = 10):

           Humanoid   Hopper
           532        354
           395        481
           810        645
           3602       2618
           3891       2633

Ablation on |M| (ν = 0.8):

           Humanoid   Hopper
           2861       1736
           2946       2415
           3602       2618
           2667       1624
           4159       2301

A.4 Hyperparameters

  • Horizon (T) = 1000 (locomotion), 250 (Maze), 5000 (Swimming+Gathering)

  • Discount (γ) = 0.99

  • GAE parameter (λ) = 0.95

  • PPO internal epochs = 5

  • PPO learning rate = 1e-4

  • PPO mini-batch = 64
