OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning
Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.
A long-term goal of Inverse Reinforcement Learning (IRL) is to learn underlying reward functions and policies solely from human video demonstrations. We call such a case, where the demonstrations come from different contexts and the task must be performed in a novel environment, one-shot transfer learning. For example, given only demonstrations of a human walking on Earth, can an agent learn to walk on the moon?
However, such demonstrations would undoubtedly come from a wide range of settings and environments and may not conform to a single reward function. This proves detrimental to current methods, which may over-generalize and perform poorly. In forward RL, decomposing a policy into smaller specialized policy options has been shown to improve results for exactly such cases. Thus, we extend the options framework to IRL and decompose both the reward function and the policy. Our method is able to learn deep policies which specialize to the set of best-fitting experts. Hence, it excels at one-shot transfer learning where single-approximator methods waver.
To accomplish this, we make use of the Generative Adversarial Imitation Learning (GAIL) framework and formulate a method for learning joint reward-policy options with adversarial methods in IRL. As such, we call our method OptionGAN. This method can implicitly learn divisions in the demonstration state space and accordingly learn policy and reward options. Leveraging a correspondence between Mixture-of-Experts (MoE) and one-step options, we learn a decomposition of rewards and the policy-over-options in an end-to-end fashion. This decomposition is able to capture simple problems and learn any of the underlying rewards in one shot. This gives flexibility and benefits for a variety of future applications (both in reinforcement learning and standard machine learning).
We evaluate OptionGAN in the context of continuous control locomotion tasks, considering simulated MuJoCo locomotion environments from OpenAI Gym, modifications of these environments for task transfer, and a more complex Roboschool task. We show that the final policies learned using joint reward-policy options outperform a single reward approximator and policy network in most cases, and particularly excel at one-shot transfer learning.
One goal in robotics research is to create a system which learns how to accomplish complex tasks simply from observing an expert's actions (such as videos of humans performing actions). While IRL has been instrumental in working towards this goal, it has become clear that fitting a single reward function which generalizes across many domains is difficult. To this end, several works investigate decomposing the underlying reward functions of expert demonstrations and environments in both IRL and RL. For example, one approach decomposes reward functions into a set of subtasks by segmenting expert demonstration transitions (known state-action pairs) and analyzing the changes in "local linearity with respect to a kernel function". Similarly, another approach adopts techniques from information-similarity-based video editing to divide a video demonstration into distinct sections which can then be recombined into a differentiable reward function.
However, simply decomposing the reward function may not be enough; the policy must also be able to adapt to different tasks. Several works have investigated learning a latent dimension along with the policy for this purpose. The latent dimension allows multiple tasks to be learned by one policy and elicited via the latent variable. In contrast, our work focuses on one-shot transfer learning. In those works, the desired latent variable must be known and provided, whereas in our formulation the latent structure is encoded in an unsupervised manner. This is accomplished while learning to solve a task composed of a wide range of underlying reward functions and policies in a single framework. Overall, this work contains parallels to the aforementioned and other works emphasizing hierarchical policies, but specifically focuses on leveraging MoEs and reward decompositions to fit into the options framework for efficient one-shot transfer learning in IRL.
3 Preliminaries and Notation
Markov Decision Processes (MDPs)
MDPs consist of states $s \in \mathcal{S}$, actions $a \in \mathcal{A}$, a transition function $P(s_{t+1} \mid s_t, a_t)$, and a reward function $r(s_t, a_t)$. We formulate our methods in the space of continuous control tasks ($\mathcal{S} \subseteq \mathbb{R}^n$, $\mathcal{A} \subseteq \mathbb{R}^m$) using measure-theoretic assumptions. Thus we define a parameterized policy $\pi_\theta(a \mid s)$ as the probability distribution over actions conditioned on states, modeled by a Gaussian, where $\theta$ are the policy parameters. The value of a policy is defined as $V^{\pi}(s) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s\right]$ and the action-value is $Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a\right]$, where $\gamma \in [0, 1)$ is the discount factor.
The Options framework
In reinforcement learning, an option ($\omega \in \Omega$) can be defined by a triplet $(\mathcal{I}_\omega, \pi_\omega, \beta_\omega)$. In this definition, $\pi_\omega$ is called an intra-option policy, $\mathcal{I}_\omega \subseteq \mathcal{S}$ is an initiation set, and $\beta_\omega(s)$ is a termination function (i.e. the probability that an option ends at a given state). Furthermore, $\pi_\Omega$ is the policy-over-options. That is, $\pi_\Omega$ determines which option an agent picks to use until the termination function indicates that a new option should be chosen. Other works explicitly formulate call-and-return options, but we instead simplify to one-step options, where $\beta_\omega(s) = 1, \forall s \in \mathcal{S}$. One-step options have long been discussed as an alternative to temporally extended methods and often provide advantages in terms of optimality and value estimation. Furthermore, we find that our options still converge to temporally extended and interpretable actions.
The idea of a Mixture-of-Experts (MoE) was initially formalized to improve the learning of neural networks by dividing the input space among several networks and then combining their outputs through a soft weighted average. It has since come into prevalence for building extremely large neural networks. In our formulation of joint reward-policy options, we leverage a correspondence between Mixture-of-Experts and options. In the case of one-step options, the policy-over-options ($\pi_\Omega$) can be viewed as a specialized gating function over experts (intra-option policies $\pi_\omega$): $\pi(a \mid s) = \sum_{\omega \in \Omega} \pi_\Omega(\omega \mid s)\, \pi_\omega(a \mid s)$. Several works investigate convergence to a sparse and specialized Mixture-of-Experts. We leverage these works to formulate a Mixture-of-Experts which converges to one-step options.
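The gating correspondence above can be sketched in a few lines. This is an illustrative reconstruction: the linear experts and linear gate are stand-ins for the paper's networks, not its actual architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of gating logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mixture_action_mean(state, gate_w, option_ws):
    """One-step options as a Mixture-of-Experts: the policy-over-options
    (gating network) weights each intra-option policy's action mean."""
    g = softmax(gate_w @ state)                        # pi_Omega(omega | s)
    means = np.array([w @ state for w in option_ws])   # one action mean per option
    return g @ means                                   # soft weighted average action
```

If the gate saturates to a one-hot vector, the weighted average reduces to a single option's action, which is exactly the one-step-option view of the mixture.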
Policy gradient (PG) methods optimize a parameterized policy through stochastic gradient ascent. In the discounted setting, PG methods optimize $\rho(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$. The PG theorem states: $\frac{\partial \rho}{\partial \theta} = \sum_s d^{\pi}(s) \sum_a \frac{\partial \pi_\theta(a \mid s)}{\partial \theta} Q^{\pi}(s, a)$, where $d^{\pi}(s)$ is the discounted state visitation distribution under $\pi_\theta$. In Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), this update is constrained and transformed into the advantage estimation view such that the above becomes a constrained optimization: $\max_\theta \mathbb{E}\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)}\, \hat{A}(s, a)\right]$ subject to $\mathbb{E}\left[D_{\mathrm{KL}}\left(\pi_{\theta_{\text{old}}}(\cdot \mid s) \,\|\, \pi_\theta(\cdot \mid s)\right)\right] \le \delta$, where $\hat{A}$ is the generalized advantage function. In TRPO, this is solved as a constrained conjugate gradient problem, while in PPO the constraint is transformed into a penalty term or clipping objective.
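As a concrete reference for the clipping objective mentioned above, here is a minimal sketch of PPO's clipped surrogate over batches of probability ratios and advantage estimates. The 0.2 default is a commonly used value, not one taken from this paper.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: ratio = pi_new(a|s) / pi_old(a|s) at sampled
    state-action pairs. Clipping removes the incentive to push the ratio
    outside [1 - eps, 1 + eps]; the elementwise min keeps the bound
    pessimistic (a lower bound on the unclipped surrogate)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()
```

For a ratio of 2.0 with positive advantage the objective is capped at 1.2 times the advantage, so large policy steps gain nothing beyond the trust region.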
Inverse Reinforcement Learning
Inverse Reinforcement Learning was first formulated in the context of MDPs by Ng and Russell. In later work, a parametrization of the reward function is learned as a linear combination of state feature expectations so that the distance between the expert's and the novice's feature expectations is minimized. It has also been shown that a solution can be formulated using the maximum entropy principle, with the goal of matching feature expectations as well. Generative adversarial imitation learning (GAIL) makes use of adversarial techniques from generative adversarial networks to perform a similar feature expectation matching. In this case, a discriminator uses state-action pairs (transitions) from the expert demonstrations and novice rollouts to learn a binary classification probability distribution. The probability that a state belongs to an expert demonstration can then be used as the reward for a policy optimization step. However, unlike GAIL, we do not assume knowledge of the expert actions. Rather, we rely solely on observations in the discriminator problem. We therefore refer to our baseline approach as Generative Adversarial Inverse Reinforcement Learning (IRLGAN) as opposed to imitation learning. It is important to note that IRLGAN is GAIL without known actions; we adopt the different naming scheme to highlight this difference. As such, our adversarial game optimizes:
$$\operatorname*{argmin}_{\theta_N} \; \operatorname*{argmax}_{\theta_D} \; \mathbb{E}_{s \sim \pi_E}\left[\log D_{\theta_D}(s)\right] + \mathbb{E}_{s \sim \pi_N}\left[\log\left(1 - D_{\theta_D}(s)\right)\right]$$
where $\pi_N$ and $\pi_E$ are the policies of the novice and expert, parameterized by $\theta_N$ and $\theta_E$ respectively, and $D_{\theta_D}(s)$ is the discriminator probability that a sample state $s$ belongs to an expert demonstration (parameterized by $\theta_D$). We use this notation since in this case the discriminator approximates a reward function. Similarly to GAIL, we use TRPO during the policy optimization step for simple tasks. However, for complex tasks we adopt PPO. Figure 1 and Algorithm ? show an outline of the general IRLGAN process.
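A minimal sketch of the discriminator side of this game, using only states (no actions). The sign convention assumed here (D near 1 on expert states) matches the description above, but implementations vary.

```python
import numpy as np

def discriminator_loss(d_expert, d_novice):
    """Sigmoid cross-entropy pushing D toward 1 on expert states and 0 on
    novice states (one common convention; a small eps guards the logs)."""
    eps = 1e-8
    return -(np.log(d_expert + eps).mean() + np.log(1.0 - d_novice + eps).mean())

def novice_reward(d_novice):
    """Reward signal for the policy optimization step, derived from the
    discriminator output: higher when novice states look expert-like."""
    return -np.log(1.0 - d_novice + 1e-8)
```

A discriminator that separates expert from novice states attains a lower loss than a chance-level one, and the derived reward increases as the novice's states fool the discriminator.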
4 Reward-Policy Options Framework
Based on the need to infer a decomposition of underlying reward functions from a wide range of expert demonstrations in one-shot transfer learning, we extend the options framework for decomposing rewards as well as policies. In this way, intra-option policies, decomposed rewards, and the policy-over-options can all be learned in concert in a cohesive framework. In this case, an option is formulated by a tuple: $(\mathcal{I}_\omega, \pi_\omega, \beta_\omega, r_\omega)$. Here, $r_\omega$ is a reward option from which a corresponding intra-option policy $\pi_\omega$ is derived. That is, each policy option is optimized with respect to its own local reward option. The policy-over-options not only chooses the intra-option policy, but the reward option as well: $\pi_\Omega(\omega \mid s) \rightarrow (\pi_\omega, r_\omega)$. For simplicity, we refer to the policy-over-reward-options as $\pi_\Omega$ as well (in our formulation, the policy-over-reward-options and the policy-over-options coincide). There is a parallel to be drawn from this framework to Feudal RL, but here the intrinsic reward function is statically bound to each worker (policy option), whereas in that framework the worker dynamically receives a new intrinsic reward from the manager.
To learn joint reward-policy options, we present a method which fits into the framework of IRLGAN. We reformulate the discriminator as a Mixture-of-Experts and re-use the gating function when learning a set of policy options. We show that by properly formulating the discriminator loss function, the Mixture-of-Experts converges to one-step options. This formulation also allows us to use regularizers which encourage distribution of information, diversity, and sparsity in both the reward and policy options.
5 Learning Joint Reward-Policy Options
The use of one-step options allows us to learn a policy-over-options in an end-to-end fashion as a Mixture-of-Experts formulation. In the one-step case, selecting an option ($\omega$) using the policy-over-options ($\pi_\Omega$) can be viewed as a mixture of completely specialized experts such that: $\pi(a \mid s) = \sum_{\omega \in \Omega} \pi_\Omega(\omega \mid s)\, \pi_\omega(a \mid s)$. The reward for a given state is composed as: $\hat{r}(s) = \sum_{\omega \in \Omega} \pi_\Omega(\omega \mid s; \theta_\Omega)\, r_\omega(s; \theta_r)$, where $\theta_\Omega$, $\theta_\pi$, and $\theta_r$ are the parameters of the policy-over-options, policy options, and reward options, respectively. Thus, we reformulate our discriminator loss as a weighted mixture of completely specialized experts in Equation 1. This allows us to update the parameters of the policy-over-options and reward options together during the discriminator update.
Here, $\mathcal{L}_\omega$ is the sigmoid cross-entropy loss of the reward options (discriminators). $\mathcal{L}_{\mathrm{reg}}$, as will be discussed later on, is a penalty or set of penalties which can encourage certain properties of the policy-over-options or the overall reward signal. As can be seen in Algorithm ? and Figure 1, this loss function can fit directly into the IRLGAN framework.
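The structure of this weighted discriminator loss can be sketched as follows. The regularizers are omitted and the array layout is an assumption (one gate weight and one per-option discriminator output per state), not code from the paper.

```python
import numpy as np

def option_discriminator_loss(gates_e, d_e, gates_n, d_n):
    """Mixture of completely specialized discriminator losses: each reward
    option has its own sigmoid cross-entropy term, weighted by the
    policy-over-options. All arrays have shape (batch, n_options);
    gates_e/d_e are gate weights and discriminator outputs on expert
    states, gates_n/d_n the same on novice states."""
    eps = 1e-8
    expert_term = -(gates_e * np.log(d_e + eps)).sum(axis=1).mean()
    novice_term = -(gates_n * np.log(1.0 - d_n + eps)).sum(axis=1).mean()
    return expert_term + novice_term
```

Once the gate is near one-hot, only the selected reward option's cross-entropy term carries gradient, which is how the joint update reduces to per-option discriminator training.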
Having updated the parameters of the policy-over-options and reward options, standard PG methods can be used to optimize the parameters of the intra-option policies. This can be done by weighting the average of the intra-option policy actions with the policy-over-options $\pi_\Omega$. While it is possible to update each intra-option policy separately, as in the option-critic architecture, this Mixture-of-Experts formulation is equivalent, as discussed in the next section. Once the gating function specializes over the options, all gradients except for those related to the selected intra-option policy are weighted by zero. We find that this end-to-end parameter update formulation leads to easier implementation and smoother learning with constraint-based methods.
6 Mixture-of-Experts as Options
To ensure that our MoE formulation converges to options in the optimal case, we must properly formulate our loss function such that the gating function specializes over experts. While it may be possible to force a sparse selection of options through a top-$k$ choice, we find that this leads to instability since the top-$k$ operation is not differentiable. As specified in the original Mixture-of-Experts work, a loss function of the form $\mathcal{L} = \left\| y - \sum_i g_i o_i \right\|^2$ (where $y$ is the target, $o_i$ the output of expert $i$, and $g_i$ the gating weight on expert $i$) draws cooperation between experts, but a reformulation of the loss, $\mathcal{L} = \sum_i g_i \left\| y - o_i \right\|^2$, encourages specialization.
If we view our policy-over-options as a softmax (i.e. $g_i = e^{z_i} / \sum_j e^{z_j}$), then the derivative of the loss function with respect to the gating pre-activations becomes: $$\frac{\partial \mathcal{L}}{\partial z_i} = g_i \left( \left\| y - o_i \right\|^2 - \sum_j g_j \left\| y - o_j \right\|^2 \right)$$
This can intuitively be interpreted as encouraging the gating function to increase the likelihood of choosing an expert when its loss is less than the average loss of all the experts. The gating function will thus move toward deterministic selection of experts.
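The distinction between the cooperative and specialized loss forms can be checked numerically. This is a toy illustration with scalar experts, not the paper's discriminators.

```python
import numpy as np

def cooperative_loss(g, outs, target):
    """|| y - sum_i g_i o_i ||^2: experts are scored on their blend, so
    they learn to cooperate and cancel each other's errors."""
    blended = (g[:, None] * outs).sum(axis=0)
    return ((target - blended) ** 2).sum()

def specialized_loss(g, outs, target):
    """sum_i g_i || y - o_i ||^2: each expert is scored alone, so the gate
    is pushed toward the single best expert per input."""
    per_expert = ((target - outs) ** 2).sum(axis=1)
    return (g * per_expert).sum()
```

With a uniform gate, two experts outputting +1 and -1, and a target of +1, the cooperative loss scores the blend (0) and sees moderate error, while the specialized loss heavily penalizes the mismatched expert; the only way to reduce it is to shift gate weight onto the expert that matches, i.e. to specialize.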
As we can see in Equation 1, we formulate our discriminator loss in the same way, using each reward option and the policy-over-options as the experts and gating function, respectively. This ensures that the policy-over-options specializes over the state space and converges to a deterministic selection of experts. Hence, we can assume that in the optimal case, our formulation of an MoE-style policy-over-options is equivalent to one-step options. Our characterization of this notion of MoE-as-options is further backed by experimental results. Empirically, we still find temporal coherence across option activations despite not explicitly formulating call-and-return options.
Due to our formulation of Mixture-of-Experts as options, we can learn our policy-over-options in an end-to-end manner. This allows us to add additional terms to our loss function to encourage the appearance of certain target properties.
Sparsity and Variance Regularization
To ensure an even distribution of activation across the options, we look to conditional computation techniques that encourage sparsity and diversity in hidden layer activations and apply these to our policy-over-options $\pi_\Omega$. We borrow three penalty terms $L_b$, $L_e$, $L_v$ (adopting a similar notation). In the minibatch setting, over a batch $\mathcal{B}$ of states, these are formulated as: $$L_b = \sum_{\omega} \left( \mathbb{E}_{s \sim \mathcal{B}}\left[\pi_\Omega(\omega \mid s)\right] - \tau \right)^2, \qquad L_e = \mathbb{E}_{s \sim \mathcal{B}}\left[ \left( \frac{1}{|\Omega|} \sum_{\omega} \pi_\Omega(\omega \mid s) - \tau \right)^2 \right], \qquad L_v = -\sum_{\omega} \mathrm{Var}_{s \sim \mathcal{B}}\left\{ \pi_\Omega(\omega \mid s) \right\}$$
where $\tau$ is the target sparsity rate (which we set to the same value for all cases). Here, $L_b$ encourages the activation of the policy-over-options with target sparsity "in expectation over the data". Essentially, $L_b$ encourages a uniform distribution of options over the data while $L_e$ drives toward a target sparsity of activations per example (doubly encouraging our mixtures to be sparse). $L_v$ also encourages varied activations while discouraging uniform selection.
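The three penalties can be sketched over a minibatch of gate activations. These forms are a reconstruction from the description above and the cited conditional-computation penalties, not code from the paper.

```python
import numpy as np

def sparsity_penalties(gates, tau):
    """Minibatch regularizers on gate activations of shape (batch, n_options).
    L_b: each option's mean activation over the batch should be near tau.
    L_e: each example's mean activation should be near tau.
    L_v: negative variance, rewarding varied (non-uniform) activations."""
    L_b = ((gates.mean(axis=0) - tau) ** 2).sum()
    L_e = ((gates.mean(axis=1) - tau) ** 2).mean()
    L_v = -gates.var(axis=0).sum()
    return L_b, L_e, L_v
```

Note the tension these encode: a gate that outputs the same uniform vector everywhere satisfies $L_b$ and $L_e$ but is maximally penalized by $L_v$, while near-one-hot gates that vary across states satisfy all three.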
Mutual Information Penalty
To ensure the specialization of each option to a specific partition of the state space, a mutual information (MI) penalty is added. For a pair of variables, this is computed from their correlation as: $$I(r_i; r_j) = -\frac{1}{2} \ln\left(1 - \rho(r_i, r_j)^2\right)$$
where $r_i$ and $r_j$ are the outputs of reward options $i$ and $j$ respectively, and $\rho(r_i, r_j)$ is the correlation coefficient of $r_i$ and $r_j$, defined as $\rho(r_i, r_j) = \frac{\mathrm{cov}(r_i, r_j)}{\sigma_{r_i} \sigma_{r_j}}$.
The resulting loss term is thus computed over all pairs of reward options: $$L_{\mathrm{MI}} = \sum_{i \neq j} -\frac{1}{2} \ln\left(1 - \rho(r_i, r_j)^2\right)$$
Thus the overall regularization term becomes: $$\mathcal{L}_{\mathrm{reg}} = \lambda_b L_b + \lambda_e L_e + \lambda_v L_v + \lambda_{\mathrm{MI}} L_{\mathrm{MI}}$$
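The MI penalty can be sketched numerically using the Gaussian correlation identity above. This is a reconstruction; the clip guarding the log near $|\rho| = 1$ is an implementation detail assumed here.

```python
import numpy as np

def mutual_info_penalty(option_outputs):
    """Sum of pairwise Gaussian MI terms -0.5 * ln(1 - rho^2) between
    reward-option outputs, given as an array of shape
    (n_samples, n_options). Minimizing this decorrelates the options."""
    n = option_outputs.shape[1]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rho = np.corrcoef(option_outputs[:, i], option_outputs[:, j])[0, 1]
            rho = np.clip(rho, -0.999999, 0.999999)  # keep the log finite
            total += -0.5 * np.log(1.0 - rho ** 2)
    return total
```

Uncorrelated option outputs incur essentially zero penalty, while strongly correlated (redundant) options incur a large one, which is what pushes each reward option toward its own partition of the state space.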
To evaluate our method of learning joint reward-policy options, we investigate continuous control tasks. We divide our experiments into 3 settings: simple locomotion tasks, one-shot transfer learning, and complex tasks. We compare OptionGAN against IRLGAN in all scenarios, investigating whether dividing the reward and policy into options improves performance against the single approximator case.
All shared hyperparameters are held constant between IRLGAN and OptionGAN evaluation runs. All evaluations are averaged across 10 trials, each using a different random seed. We use the average return of the true reward function across 25 sample rollouts as the evaluation metric. Multilayer perceptrons are used for all approximators, as in prior work. For the OptionGAN intra-option policy and reward networks, we use shared hidden layers. That is, all intra-option policies share hidden layers and all reward options share hidden layers. We use separate parameters for the policy-over-options. Shared layers are used to ensure a fair comparison against a single network with the same number of hidden layers. For the simple settings all hidden layers are of the same size, with larger layers used in the complex experiments. For the 2-options case we set the regularization scaling factors based on a simple hyperparameter search and reported results from prior work. For the 4-options case we relax the regularizer that encourages a uniform distribution of options ($L_b$), reducing its scaling factor.
First, we investigate simple settings without transfer learning for a set of benchmark locomotion tasks provided in OpenAI Gym  using the MuJoCo simulator . We use the Hopper-v1, HalfCheetah-v1, and Walker2d-v1 locomotion environments. The results of this experiment are shown in Table ? and sample learning curves for Hopper and HalfCheetah can be found in Figure 3. We use 10 expert rollouts from a policy trained using TRPO for 500 iterations.
In these simple settings, OptionGAN converges to policies which perform as well as or better than the single-approximator setting. Importantly, even in these simple settings, the options which our policy selects have a notion of temporal coherence and interpretability despite not explicitly enforcing this in the form of a termination function. This can be seen in the two-option version of the Hopper-v1 task in Figure 2. We find that generally each option takes on two behaviour modes. The first option handles: (1) the rolling of the foot during the hopper's landing; (2) the folding in of the foot in preparation for floating. The second option handles: (1) the last part of take-off, where the foot is hyper-extended and the body flexed; (2) the part of air travel without any movement.
7.3 One-Shot Transfer Learning
We also investigate one-shot transfer learning. In this scenario, the novice is trained on a target environment, while expert demonstrations come from a similar task, but from environments with altered dynamics (i.e. one-shot transfer from varied expert demonstrations to a new environment). To demonstrate the effectiveness of OptionGAN in these settings, we use expert demonstrations from environments with varying gravity conditions. We vary the gravity (0.5, 0.75, 1.25, and 1.5 times Earth's gravity) and train experts using TRPO for each of these. We gather 10 expert trajectories from each gravity variation, for a total of 40 expert rollouts, to train a novice agent on the normal Earth-gravity environment (the default -v1 environment as provided in OpenAI Gym). We repeat this for Hopper-v1, HalfCheetah-v1, and Walker2d-v1.
These gravity tasks are selected because prior work demonstrates that learning sequentially on these varied gravity environments causes catastrophic forgetting of the policy on environments seen earlier in training. This suggests that the dynamics are varied enough that trajectories are difficult to generalize across, yet still share some state representations and task goals. As seen in Figure 3, using options can yield significant performance increases in this setting, but the gains can vary with the number of options and the regularization penalty, as seen in Table ?.
Lastly, we investigate slightly more complex tasks. We utilize the HopperSimpleWall-v0 environment provided by the gym-extensions framework and the RoboschoolHumanoidFlagrun-v1 environment. In the former, a wall is placed randomly in the path of the Hopper-v1 agent and simplified sensor readouts are added to the observations. In the latter, the goal is to run and reach a frequently changing target. This is an especially complex task with a highly varied state space. In both cases we use an expert trained with TRPO and PPO, respectively, to generate 40 expert rollouts. For the Roboschool environment, we find that TRPO does not allow enough exploration to perform adequately, and thus we switch our policy optimization method to the clipping-objective version of PPO.
Convergence of Mixtures to Options
To show that our formulation of Mixture-of-Experts decomposes to options in the optimal case, we investigate the distributions of our policy-over-options. We find that across 40 trials, 100% of activations fell within a reasonable error bound of deterministic selection across 1M samples. That is, in 40 total trials across 4 environments (Hopper-v1, HalfCheetah-v1, Walker2d-v1, RoboschoolHumanoidFlagrun-v1), policies were trained for 500 iterations (or 5k iterations in the case of RoboschoolHumanoidFlagrun-v1). We collected 25k samples at the end of each trial. Among the gating activations across the samples, we recorded the number of activations within a tolerance $\epsilon$ of a deterministic (0 or 1) selection. 100% fell within the looser of two tolerances, and 98.72% fell within the tighter one. Thus at convergence, both intuitively and empirically, we can refer to our gating function over experts as the policy-over-options and each of the experts as options.
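The convergence check described here amounts to counting near-deterministic gate activations, sketched below with an explicit tolerance parameter (the paper's exact tolerance values are not reproduced here).

```python
import numpy as np

def deterministic_fraction(gates, eps):
    """Fraction of gate activations within eps of 0 or 1, i.e. how close
    the mixture is to hard (one-step option) selection. gates is an
    array of softmax outputs, one row per sampled state."""
    near = (gates <= eps) | (gates >= 1.0 - eps)
    return near.mean()
```

A fraction near 1.0 at a small tolerance indicates the Mixture-of-Experts has collapsed to deterministic option selection, matching the MoE-as-options argument.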
Effect of Uniform Distribution Regularizer
We find that forcing a uniform distribution over options can potentially be harmful. This can be seen in the experiment in Figure 4, where we evaluate the 4-option case with the uniform-distribution regularizer at full strength. However, relaxing the uniform constraint results in rapid performance increases, particularly in HalfCheetah-v1, where we see increases in learning speed with 4 options.
There is an intuitive explanation for this. In the 4-option case, with a relaxed uniform distribution penalty, we allow options to drop out during training. In the case of Hopper and Walker tasks, generally 2 options drop out slowly over time, but in HalfCheetah, only one option drops out in the first 20 iterations with a uniform distribution remaining across the remaining options as seen in Figure 3. We posit that in the case of HalfCheetah there is enough mutually exclusive information in the environment state space to divide across 3 options, quickly causing a rapid gain in performance, while the Hopper tasks do not settle as quickly and thus do not see that large gain in performance.
Latent Structure in Expert Demonstrations
Another benefit of using options in the IRL transfer setting is that the underlying latent division of the original expert environments is learned by the policy-over-options. As seen in Figure 5, the expert demonstrations have a clear separation among options. We suspect that options further away from the target gravity are not as specialized due to the fact that their state spaces are covered significantly by a mixture of the closer options (see supplemental material for supporting projected state space mappings). This indicates that the policy-over-options specializes over the experts and is thus inherently beneficial for use in one-shot transfer learning.
We propose a direct extension of the options framework by adding joint reward-policy options. We learn these options in the context of generative adversarial inverse reinforcement learning and show that this method outperforms the single-policy case in a variety of tasks, particularly in transfer settings. Furthermore, the learned options demonstrate temporal cohesion and interpretability without a specified call-and-return termination function.
Our formulation of joint reward-policy options as a Mixture-of-Experts allows for: potential upscaling to extremely large networks as in sparsely-gated MoE layers, reward shaping in forward RL, and using similarly specialized MoEs in generative adversarial networks. This work presents an effective and extendable framework. Our optionated networks capture the problem structure effectively, which allows strong generalization in one-shot transfer learning. Moreover, as adversarial methods are now commonly used across a myriad of communities, we believe that embedding options within this methodology is an excellent delivery mechanism for exploiting the benefits of hierarchical RL in many new fields.
We thank CIFAR, NSERC, The Open Philanthropy Project, and the AWS Cloud Credits for Research Program for their generous contributions.
The expectation over the discriminator loss for the option case can be expanded:
As can the regularization terms:
The expert demonstration rollouts (state sequences) for all OpenAI Gym environments were obtained from policies trained for 1000 iterations using Trust Region Policy Optimization with generalized advantage estimation, using a fixed discount factor and batch size (rollout timesteps shared when updating the discriminator and policy). For the Roboschool Flagrun-v1 environment, the rollouts were obtained using the PPO pre-trained expert provided with Roboschool.
14 Experimental Setup and Hyperparameters
Observations are not normalized in all cases, as we found that normalization neither helped nor hurt performance. In all cases, for advantage estimation we use a value approximator which uses L-BFGS optimization with a mixing fraction; that is, the regression target is a convex combination of the actual discounted returns and the current prediction. This is identical to the original Trust Region Policy Optimization (TRPO) code as provided at: https://github.com/joschu/modular_rl/. We perform a maximum of 20 L-BFGS iterations per value function update. For all the environments, we let the agent act until the maximum allowed timesteps of the specific environment (as set by default in OpenAI Gym), gather the rollouts, and keep the desired number of timesteps per batch. For all policy optimization steps in both IRLGAN and OptionGAN, we use TRPO with parameters set to the same values as those used for expert collection, except for the Roboschool Flagrun experiment, where PPO was used instead, as explained in its respective section below.
14.1 Simple Tasks and Transfer Tasks
For simple tasks we use 10 expert rollouts while for transfer tasks we use 40 expert rollouts (10 from each environment variation).
We use a Gaussian multilayer perceptron policy with two 64-unit hidden layers and tanh hidden-layer activations. The output of the network gives the Gaussian mean, and the standard deviation is modeled by a single learned variable. Similarly, for our discriminator network, we use the same architecture with a sigmoid output and tanh hidden-layer activations, with a fixed learning rate for the discriminator. We do not use entropy regularization or weight regularization, as either resulted in worse performance. For every policy update we perform 3 discriminator updates, as we found the policy optimization step is able to handle this and it results in faster learning.
Aligning with the IRLGAN networks, we make use of a Gaussian multilayer perceptron policy with two 64-unit shared hidden layers and tanh hidden-layer activations. These hidden layers connect to options depending on the experiment (2 or 4). In this case the output of the network gives the Gaussian mean for each option, and the standard deviation is modeled by a single learned variable per option. The policy-over-options is also modeled by a 2-layer, 64-unit network with tanh activations and a softmax output corresponding to the number of options. For our discriminator, we use the same architecture with tanh hidden-layer activations and sigmoid outputs, one for each option. We use the policy-over-options to create a specialized mixture-of-experts model with a specialized loss function which converges to options, again with a fixed learning rate for the discriminator. As in IRLGAN, we do not make use of entropy regularization or weight regularization, as we found both to hurt performance. Instead, we use scaling factors for the regularization terms included in the loss, with different values for the 2-option and 4-option cases. Again, we perform 3 discriminator updates per policy update.
For Roboschool experiments we use proximal policy optimization (PPO) with a clipping objective and a fixed clipping parameter. We perform 5 Adam policy updates on the PPO clipping objective with a fixed learning rate. The value function and advantage estimation parameters from previous experiments are maintained, while our network architecture sizes are increased and use ReLU activations instead of tanh.
15 Decomposition of Rewards over Expert Demonstrations
We show that the trained policy-over-options network shows some intrinsic structure over the expert demonstrations.
Figure 8 shows the activations of the gating function across expert rollouts after training. We see that the underlying division in expert demonstrations is learned by the policy-over-options. This indicates that our training method induces the policy-over-options to learn a latent structure in the expert demonstrations, which benefits the transfer case since each option inherently specializes for use in different environments. We find that options specialize more clearly over the experts whose environments are closest to the normal-gravity environment, while the others use an even mixture of options. This is because the mixing specialized options are able to cover the state space of the non-specialized options, as we can observe from the state distribution of the expert demonstrations shown in Figure 9.
- While it may be simpler to use an entropy regularizer, we found that in practice it performs worse. Entropy regularization encourages exploration. In the OptionGAN setting, this results in unstable learning, while the mutual information term encourages diversity in the options while providing stable learning.
- Extended experimental details and results can be found in the supplemental. Code is located at:
Abbeel, P., and Ng, A. Y. Apprenticeship learning via inverse reinforcement learning.
Babes, M.; Marivate, V.; Subramanian, K.; and Littman, M. L. Apprenticeship learning about multiple intentions.
Bacon, P.-L.; Harb, J.; and Precup, D. The option-critic architecture.
Bengio, E.; Bacon, P.-L.; Pineau, J.; and Precup, D. Conditional computation in neural networks for faster models.
Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. OpenAI Gym.
Choi, J., and Kim, K.-E. Nonparametric Bayesian inverse reinforcement learning for multiple reward functions.
Christiano, P.; Shah, Z.; Mordatch, I.; Schneider, J.; Blackwell, T.; Tobin, J.; Abbeel, P.; and Zaremba, W. Transfer from simulation to real world through learning deep inverse dynamics model.
Daniel, C.; Neumann, G.; and Peters, J. R. Hierarchical relative entropy policy search.
Dayan, P., and Hinton, G. E. Feudal reinforcement learning.
Dietterich, T. G. Hierarchical reinforcement learning with the MAXQ value function decomposition.
Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. Generative adversarial nets.
Hausman, K.; Chebotar, Y.; Schaal, S.; Sukhatme, G.; and Lim, J. Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets.
Henderson, P.; Chang, W.-D.; Shkurti, F.; Hansen, J.; Meger, D.; and Dudek, G. Benchmark environments for multitask learning in continuous domains.
Ho, J., and Ermon, S. Generative adversarial imitation learning.
Jacobs, R. A.; Jordan, M. I.; Nowlan, S. J.; and Hinton, G. E. Adaptive mixtures of local experts.
Kingma, D. P., and Ba, J. Adam: A Method for Stochastic Optimization.
Krishnan, S.; Garg, A.; Liaw, R.; Miller, L.; Pokorny, F. T.; and Goldberg, K. Hirl: Hierarchical inverse reinforcement learning for long-horizon tasks with delayed rewards.
Li, Y.; Song, J.; and Ermon, S. InfoGAIL: Interpretable imitation learning from visual demonstrations.
Liu, Y., and Yao, X. Learning and evolution by minimization of mutual information.
Merel, J.; Tassa, Y.; Srinivasan, S.; Lemmon, J.; Wang, Z.; Wayne, G.; and Heess, N. Learning human behaviors from motion capture by adversarial imitation.
Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning.
Ng, A. Y., and Russell, S. J. Algorithms for inverse reinforcement learning.
Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; and Moritz, P. Trust region policy optimization.
Schulman, J.; Moritz, P.; Levine, S.; Jordan, M.; and Abbeel, P. High-dimensional continuous control using generalized advantage estimation.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. Proximal policy optimization algorithms.
Sermanet, P.; Xu, K.; and Levine, S. Unsupervised perceptual rewards for imitation learning.
Shazeer, N.; Mirhoseini, A.; Maziarz, K.; Davis, A.; Le, Q.; Hinton, G.; and Dean, J. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.
Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation.
Sutton, R. S.; Precup, D.; and Singh, S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning.
Todorov, E.; Erez, T.; and Tassa, Y. MuJoCo: A physics engine for model-based control.
van Seijen, H.; Fatemi, M.; Romoff, J.; Laroche, R.; Barnes, T.; and Tsang, J. Hybrid reward architecture for reinforcement learning.
Wang, Z.; Merel, J.; Reed, S.; Wayne, G.; de Freitas, N.; and Heess, N. Robust imitation of diverse behaviors.
Ziebart, B. D.; Maas, A.; Bagnell, J. A.; and Dey, A. K. Maximum entropy inverse reinforcement learning.