Learning Action-Transferable Policy with Action Embedding

Yu Chen1, Yingfeng Chen2*, Yu Yang3, Ying Li1, Jianwei Yin1, Changjie Fan2
1 College of Computer Science and Technology, Zhejiang University, Hangzhou, China
{cheny_17, cnliying, zjuyjw}@zju.edu.cn
2 NetEase Fuxi AI Lab, Hangzhou, China
{chenyingfeng1,fanchangjie}@corp.netease.com
3 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
yyu@lamda.nju.edu.cn
*Equal contribution. This work was done while Yu Chen was an intern at NetEase Fuxi AI Lab.
Abstract

Despite achieving great success on a variety of sequential decision tasks, deep reinforcement learning is extremely data inefficient. Many approaches have been proposed to improve data efficiency, e.g., transfer learning, which utilizes knowledge learned from related tasks to accelerate training. Previous research on transfer learning mostly attempts to learn a common feature space of states across related tasks to exploit as much knowledge as possible. However, semantic information of actions may be shared as well, even between tasks with different action space sizes. In this work, we first propose a method to learn action embeddings for discrete actions in RL from generated trajectories without any prior knowledge, and then leverage them to transfer policies across tasks with different state spaces and/or discrete action spaces. We validate our method on a set of gridworld navigation tasks, discretized continuous control tasks, and fighting tasks in a commercial video game. Our experimental results show that our method can effectively learn informative action embeddings and accelerate learning by policy transfer across tasks.

Introduction

Deep reinforcement learning (DRL), which combines reinforcement learning algorithms and deep neural networks, has achieved great success in many domains, such as playing Atari games [21], playing the game of Go [24] and robotic control [17]. Although DRL is viewed as one of the most promising paths toward general artificial intelligence, it is still criticized for its data inefficiency. Training an agent from scratch requires a considerable number of interactions with the environment for a very specific task. Moreover, some researchers point out that DRL algorithms tend to over-fit to the training environment [3]: once the configuration of the environment changes, the learned policy no longer works and needs to be re-trained. One approach to deal with this problem in the DRL domain is transfer learning (TL) [26], which makes use of the knowledge gained while solving one task and applies it to a different but related task, aiming to reduce the consumption of samples and improve performance.

Several methods have been proposed to transfer various kinds of knowledge across tasks [27, 9, 4]. However, all of these works assume that the source tasks and the target tasks share the same action and state spaces; specifically, the sizes of both the action and state spaces are consistent and well-aligned. Obviously, this assumption does not hold in many tasks. For example, in MOBA games different heroes have their own unique skills and state representations, so they are usually modeled by individual neural networks with different inputs (states) and outputs (policies or values). Under these circumstances, direct transfer between models with different network structures is not feasible.

To overcome the formulation discrepancy between source and target tasks, especially when the state and action spaces are different, many studies have proposed methods to transfer knowledge across tasks, such as manifold alignment [1, 11] and domain adaptation [6]. However, most of them focus on mapping the original state space into a common feature space and rarely consider the action space. In the previous MOBA game example, although the skill systems of different heroes are distinct, the effects of heroes' skills may in fact be similar. For most heroes, their skills can be classified into several categories, such as 'Damage Skill', 'Control Skill', 'Summoning Skill' and so on. Apparently, if the semantics of actions are learned explicitly, there is a chance to utilize this semantic information as a prior to transfer policies across different heroes.

One idea is inter-task mapping [25], which constructs inter-state and inter-action mappings to describe the relations between tasks. However, it can be difficult to learn such a mapping, because it requires prior knowledge about the semantics and ranges of state variables and actions. Inspired by recent work on action embedding [28, 7], we study the feasibility of leveraging action embeddings, which automatically capture the semantics of actions, to transfer policies across tasks with different state spaces and/or discrete action spaces.

The main difficulty in our problem is how to learn meaningful action embeddings that capture the semantics of actions. Additionally, in order to transfer the policy, we need to align the embeddings of the target task with the embeddings of the source task despite their different state spaces. To learn action embeddings in RL effectively, the work [28] uses a skip-gram model [20] with negative sampling to train action representations from action contexts in expert demonstrations. In contrast, our method builds on the idea that the semantics of an action should be reflected in its effect, and this effect manifests as state changes, which are governed by the state transition function of the RL problem. Thus, we can learn latent representations of actions by learning the dynamics with a transition model, which is quite similar to learning word embeddings from a language model [16], and it can learn from any generated trajectories. Further, the parameters of the transition model are frozen or used as initialization to keep the embeddings of the two tasks as close as possible, so that the policy can be transferred.

We test our method with the reinforcement learning algorithm Soft Actor-Critic (SAC) [12] on three sets of environments: a set of gridworld navigation tasks, a set of discretized continuous control tasks in Mujoco [29] and Roboschool, and a set of fighting tasks in a video game.

Our main contributions in this paper are summarized as follows:

  • We propose a method to learn action embeddings from interactions with environments, which may also be used for other purposes, such as automatic task decomposition.

  • We propose a transfer framework via action embeddings, making policy transfer across tasks with different action and state spaces possible.

  • Our experimental results show that our method can a) learn informative action embeddings and b) effectively transfer policies to reduce the time needed for convergence to near-optimal behavior. (The experimental code of this work is released anonymously at https://github.com/ActionEmbedding/ActionEmbedding.git for reproducibility.)

Related Work

In this section, we briefly review the literature about transfer learning in RL and action embedding respectively.

Transfer in Reinforcement Learning

Transfer learning has always been considered an important and challenging direction in reinforcement learning and has drawn more and more attention. The work [27] proposes Distral, a method that uses a shared distilled policy for joint training of multiple tasks. The work [9] introduces a generic meta-learning framework that achieves fast adaptation. Successor features and generalized policy improvement are also applied to transfer knowledge [4, 19]. All of these methods focus on tasks that differ only in reward functions.

To transfer across tasks with different state and action spaces, an inter-task mapping is manually constructed by [25], and a transferable action-value function is built on top of it. In [2], a common task subspace between the states of tasks is introduced and used to learn the state mapping between tasks. Further, unsupervised manifold alignment (UMA) is used to autonomously learn an inter-task mapping from source and target task trajectories [1]. The main difference from our work is that we do not learn a direct mapping but instead embed actions into a common space where similarity can be measured by distance. In a similar vein, the work [11] tries to learn invariant common features between different agents from a proxy skill and uses them as an auxiliary reward. The learning process, however, minimizes the distance between state embeddings of corresponding pairs of states, which may be difficult to obtain. Adversarial losses based on mutual alignment of visited state distributions between tasks are used by [30] as auxiliary rewards to train policies. The work [6] later adopts an adversarial autoencoder (AAE) to align the representation vectors of target and source states on Atari games.

All of these works focus on learning a state representation that contains transferable knowledge among tasks, while in this paper we leverage action representations to transfer the policy.

Action Embedding

Action embeddings were first studied by [8], aiming to solve the large discrete action space problem in RL, with optimal actions found using a k-nearest-neighbors approach. However, the action embeddings are assumed to be given as a prior. In [7], embeddings are used as part of the overall policy, and a mapping function is learned to map the embeddings to discrete actions. Act2Vec is introduced by [28], in which a skip-gram model is used to learn representations of actions from expert demonstrations, and the embeddings are transferred from a 2D navigation task to a 3D navigation task.

Different from the existing studies, we present a method that can autonomously capture the relations between actions and learn informative action embeddings without any prior knowledge.

Representation Learning

Representation learning aims to learn representations of the data that make it easier to extract useful information when building classifiers or other predictors [5], and has been applied in various domains, e.g., NLP [20, 23] and graphs [22]. In reinforcement learning, features are extracted from raw images by convolutional neural networks [21], sparse representations are learned for control [18], and the work [10] combines model-based and model-free approaches via a shared state abstraction. Moreover, in [14], skill representations are utilized to learn versatile skills in hierarchical reinforcement learning. In this paper, we investigate representations of discrete actions and use them to improve RL algorithms.

Background and Problem Formulation

In this section we present the background material that will serve as a foundation for the rest of the paper.

Reinforcement Learning

An RL problem is often modeled as a Markov Decision Process (MDP), defined as a tuple $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, P, R, \gamma \rangle$. $\mathcal{S}$ and $\mathcal{A}$ are the sets of states and actions, called the state space and action space respectively. In this work, we restrict our focus to discrete action spaces, and $|\mathcal{A}|$ denotes the size of the action set. $P(s' \mid s, a)$ is a state transition probability function describing the dynamics of the problem. $R(s, a)$ is a reward function measuring the performance of agents and $\gamma \in [0, 1)$ is a discount factor for future rewards. Further, a policy $\pi(a \mid s)$ can be defined, which is a conditional distribution over actions for each state. For any policy $\pi$, its corresponding state value function is $V^\pi(s) = \mathbb{E}_\pi\big[\sum_{k=0}^{\infty} \gamma^k r_{t+k} \mid s_t = s\big]$ and its state-action value function is $Q^\pi(s, a) = \mathbb{E}_\pi\big[\sum_{k=0}^{\infty} \gamma^k r_{t+k} \mid s_t = s, a_t = a\big]$ for all $s \in \mathcal{S}$ and $a \in \mathcal{A}$ at time step $t$. Given an MDP $\mathcal{M}$, the goal of an agent is to find an optimal policy $\pi^*$ that maximizes the expected discounted return $\mathbb{E}_\pi\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$.

In this work, we choose Soft Actor-Critic (SAC) [12], a model-free off-policy actor-critic method, as our RL algorithm. In SAC, the actor aims to maximize not only the expected reward but also the entropy of the stochastic policy. This changes the RL problem to:

$$\pi^* = \operatorname*{arg\,max}_{\pi} \; \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t} r(s_t, a_t) + \alpha \,\mathcal{H}\big(\pi(\cdot \mid s_t)\big)\Big],$$

where $\tau$ is the trajectory of state-action pairs, $\alpha$ is the trade-off coefficient, and $\mathcal{H}(\pi(\cdot \mid s_t))$ is the entropy of $a_t$ under its distribution $\pi(\cdot \mid s_t)$. It is noteworthy that the proposed method is not limited to SAC; it can be extended to any other RL algorithm with appropriate adaptation.

Problem Formulation

In this work, we consider the transfer problem between two tasks, denoted as a source MDP $\mathcal{M}_S = \langle \mathcal{S}_S, \mathcal{A}_S, P_S, R_S, \gamma \rangle$ and a target MDP $\mathcal{M}_T = \langle \mathcal{S}_T, \mathcal{A}_T, P_T, R_T, \gamma \rangle$. Generally, the state and action spaces of the two MDPs might be completely different, as might the dynamics and reward functions. In this paper, however, we assume there are some similarities between the reward functions. In particular, the goals of the two MDPs are similar, so that the optimal policy of the target MDP will resemble the optimal policy of the source MDP. Moreover, the dynamics are also assumed to be approximately similar, because we hope that the action embeddings of the target task can be learned quickly and aligned with those of the source task by using a single transition model. For example, in one of our experiments, $\mathcal{M}_S$ requires balancing a pole on a cart while in $\mathcal{M}_T$ there are two poles on the cart. The dimensionalities of the states are completely different, and we discretize the continuous actions into different numbers of discrete actions so that $|\mathcal{A}_S| \neq |\mathcal{A}_T|$. The two tasks both need to balance the pole(s) under gravity with a reward for staying alive. Therefore, the dynamics and reward functions share some similarities.

Methods

In this section, we discuss how action embeddings can be learned by means of a transition model. We then describe how the action embeddings can be combined with RL algorithms and used to transfer a policy to a new task.

Learning Action Embeddings

We aim to project discrete actions into a continuous feature space, in which the distance between two action representations is small if the effects of the actions are similar. It is straightforward to measure the effect of an action by the state changes it induces, i.e., by the state transition probabilities in RL. Formally, our goal is to learn an embedding matrix $E \in \mathbb{R}^{|\mathcal{A}| \times d}$, in which $e_a$ for each $a \in \mathcal{A}$ denotes the $a$-th row of $E$, such that

$$\min_{E, \theta} \; \mathbb{E}_{s, a}\Big[ D_{\mathrm{KL}}\big( P(\cdot \mid s, a) \,\big\|\, \hat{P}_\theta(\cdot \mid s, e_a) \big) \Big],$$

where $d$ is the dimension of the embedding vector $e_a$, $P(\cdot \mid s, a)$ is the state transition probability given state $s$ and action $a$, and $D_{\mathrm{KL}}$ is the KL divergence that measures the distance between two distributions. Therefore, action embeddings can be learned by training a transition model $\hat{P}_\theta$ that predicts the next state with parameters $\theta$.

Figure 1: Illustration of recurrent state transition model. Circles represent stochastic variables and squares deterministic variables. Solid lines denote the generative process and dashed lines denote that the variable is generated by lookup.
Figure 2: Illustration of the architecture of our method for tasks with different state spaces. The grids colored in gray are transferred as initialization for the target task. The blue dashed lines mean that the data are used across modules.

We adopt a recurrent model, e.g., an LSTM [15], to learn the transition. Given a sequence of state-action pairs $\{(s_t, a_t)\}_{t=1}^{T}$, the forward process of the model runs as follows:

$$h_t = \mathrm{LSTM}_\theta\big(h_{t-1}, [s_t, e_{a_t}]\big), \qquad t = 1, \dots, T, \tag{1}$$

where $T$ is the length of the trajectory, $h_t$ is an activation vector that allows the model to access all previous states at time step $t$, and $\theta$ denotes the parameters of the model. Further, the next state is computed by a multi-layered feed-forward network that conditions on $h_t$, i.e., $\hat{s}_{t+1} = g_\theta(h_t)$.

The recurrent state transition model can be understood as a deterministic process [13]. To cope with stochastic environments, we can introduce a latent variable $z_t$ as a stochastic component, just like in a variational autoencoder (VAE). The computation of the next state then becomes $\hat{s}_{t+1} = g_\theta(h_t, z_t)$, and each latent variable is sampled from a multivariate Gaussian distribution whose parameters are obtained by a non-linear transformation of the previous hidden state, i.e., $z_t \sim \mathcal{N}(\mu_t, \sigma_t^2)$ where $[\mu_t, \sigma_t] = q_\theta(h_{t-1})$.

The transition model is learned by minimizing the following loss function over the trajectory:

$$\mathcal{L}(\theta, E) = \sum_{t=1}^{T-1} \ell\big(\hat{s}_{t+1}, s_{t+1}\big), \tag{2}$$

where the loss function $\ell$ is the mean squared error (MSE) in this work. The action embeddings are learned at the same time. The whole transition model is illustrated in Figure 1.
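To make the construction concrete, the following is a minimal PyTorch sketch of the deterministic variant of this transition model, assuming fixed-size state vectors and integer-coded discrete actions; the class and argument names (`TransitionModel`, `embed_dim`, etc.) are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """Recurrent transition model that learns action embeddings (sketch of Eq. (1)-(2))."""
    def __init__(self, num_actions, state_dim, embed_dim=16, hidden_dim=128):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, embed_dim)          # embedding matrix E
        self.rnn = nn.LSTM(state_dim + embed_dim, hidden_dim, batch_first=True)
        self.predict_next = nn.Sequential(                              # feed-forward head g(h_t)
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, state_dim))

    def forward(self, states, actions):
        # states: (B, T, state_dim) float tensor; actions: (B, T) long tensor of action ids
        e = self.action_emb(actions)                        # (B, T, embed_dim)
        h, _ = self.rnn(torch.cat([states, e], dim=-1))     # h_t for every time step
        return self.predict_next(h)                         # predicted next state at each step

def transition_loss(model, states, actions):
    """MSE between predicted and true next states, as in Eq. (2)."""
    pred = model(states[:, :-1], actions[:, :-1])
    return ((pred - states[:, 1:]) ** 2).mean()
```

Minimizing `transition_loss` updates both the recurrent dynamics parameters and the embedding table, so the rows of `action_emb.weight` play the role of the learned action embeddings $e_a$.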

Action Generation

To utilize the learned embeddings, we first need to combine RL algorithms with the embeddings. [28] show that the Q-function can be approximated by the inner product of action embeddings and state representations. In this paper, we adopt the architecture proposed by [8], in which the policy outputs actions within a continuous space and a mapping function maps the output to the discrete action space, since the output may not exist in the discrete action space. Specifically, the policy parameterized by $\phi$ can be defined as $\pi_\phi : \mathcal{S} \rightarrow \mathbb{R}^d$. The policy outputs a proto-action $\hat{e} = \pi_\phi(s)$ for a given state $s$. Then the real action to perform is chosen by a nearest-neighbor search over the learned action embeddings:

$$a = \mathrm{NN}(\hat{e}) = \operatorname*{arg\,min}_{a' \in \mathcal{A}} \; \lVert e_{a'} - \hat{e} \rVert,$$

where $\mathrm{NN}$ is a mapping from the continuous space to the discrete space based on a nearest-neighbor algorithm. It returns the action in $\mathcal{A}$ whose embedding is closest to the proto-action $\hat{e}$ in distance.

Note that the proto-actions generated by the policy, rather than the embeddings of the performed actions, are stored in the replay buffer when training the policy. Otherwise, the space occupied by the discrete actions is very limited, which often leads to unstable training, especially for networks that predict Q-values from states and action representations.
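As a minimal sketch, the nearest-neighbor lookup described above could be implemented as follows, assuming the embedding table from the transition-model sketch; the function name `select_action` is illustrative.

```python
import torch

@torch.no_grad()
def select_action(proto_action, embedding_matrix):
    """Map a continuous proto-action to the closest discrete action by distance.

    proto_action:     (embed_dim,) tensor produced by the policy network
    embedding_matrix: (num_actions, embed_dim) tensor of learned action embeddings E
    """
    dists = torch.norm(embedding_matrix - proto_action, dim=-1)   # distance to every e_a
    return torch.argmin(dists).item()                             # id of the action to execute
```

The returned index is what the environment executes, while the proto-action itself is what gets stored in the replay buffer, as noted above.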

Policy Transfer

In this section, we discuss the procedure of transferring a policy from a source task to a target task. For a better understanding, we start from a simple setting in which the state spaces of the source and target tasks are the same while the action spaces differ. For example, a character carries different sets of skills to perform a task, and each set contains a different number of skills. Under this setting, the policy can be directly transferred: facing the same state, the agent should react similarly even when carrying different sets of skills, and the most relevant skill will be found by the nearest-neighbor algorithm in the embedding space. Meanwhile, the embeddings can be learned by the same transition model, which ensures that the actions are embedded into the same latent space.

When considering tasks that differ in both state and action spaces, the transition model cannot be reused directly, since the state dimensions differ between the source and target tasks. The premise of reusing the transition model is that the actions are embedded into the same or a similar space with the same state size. Thus, the input of the transition model becomes a sequence $\{(f_\psi(s_t), a_t)\}_{t=1}^{T}$, where $f_\psi$ denotes a non-linear function mapping the original state space into a common state space, called the state embedding. The function $f_\psi$ can be represented by a neural network and trained along with the policy. Note that, in this way, the two modules become interdependent: the transition model needs state embeddings as training input and the policy needs action embeddings to perform nearest-neighbor action selection. Therefore, we train the two modules together, which also increases data utilization. Overall, the training process on the source task is outlined in Algorithm 1, and the process of transferring to the target task is shown in Figure 2.

After training on the source task, the policy parameters $\phi$ and the transition model parameters $\theta$ are used as the initialization weights of the networks trained for the target task. The initialized networks are then trained according to lines 2-15 of Algorithm 1. The transfer process is shown in Algorithm 2. This can be seen as a kind of adaptation in which the state embedding is regularized by the policy and the transition model. There exist some promising methods, such as the adversarial autoencoder [6], to align the state representation function of the target task with that of the source task in a pre-training process; however, this is not the main point of this paper.

1:  Randomly initialize the policy $\pi_\phi$ with state embedding $f_\psi$ and the transition model $M_\theta$ with action embeddings $E$
2:  Initialize replay buffer $\mathcal{D}$
3:  for episode $= 1$ to $N$ do
4:     Receive initial state $s_0$ from the environment
5:     for step $t = 1$ to $T$ do
6:        Select proto-action $\hat{e}_t = \pi_\phi(f_\psi(s_t))$ and map it to action $a_t$ by nearest neighbor in the action embeddings $E$
7:        Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$
8:        Add tuple $(s_t, \hat{e}_t, r_t, s_{t+1})$ to $\mathcal{D}$
9:        Sample a random batch from $\mathcal{D}$
10:       Update $\pi_\phi$ and state embedding $f_\psi$ according to the RL loss
11:       Sample a random batch of trajectories from $\mathcal{D}$ and process the states with $f_\psi$
12:       Update $M_\theta$ and $E$ over Equation (2)
13:    end for
14:    Store the trajectory in $\mathcal{D}$
15: end for
16: return $\pi_\phi$, $f_\psi$, $M_\theta$, $E$
Algorithm 1 Training algorithm on source task

Input: Parameters $\phi$, $\theta$ from the source task

1:  Initialize the policy $\pi_{\phi'}$ and transition model $M_{\theta'}$, where $\phi' = \phi$ and $\theta' = \theta$, and randomly initialize the state embedding $f_{\psi'}$ and action embeddings $E'$
2:  Train the model according to lines 2-15 in Algorithm 1
3:  return $\pi_{\phi'}$, $f_{\psi'}$, $M_{\theta'}$, $E'$
Algorithm 2 Transfer algorithm on target task
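A minimal sketch of the initialization step of Algorithm 2 in PyTorch, reusing the `TransitionModel` sketch above; the module shapes and the two-layer state-embedding network are illustrative assumptions, not the authors' exact architecture.

```python
import copy
import torch.nn as nn

def init_target_modules(source_policy, source_transition, num_target_actions,
                        target_state_dim, common_state_dim, action_embed_dim=16):
    """Reuse source parameters as initialization; re-initialize task-specific embeddings."""
    policy = copy.deepcopy(source_policy)           # phi' <- phi
    transition = copy.deepcopy(source_transition)   # theta' <- theta
    # New action-embedding table E' for the target action set, learned during adaptation.
    transition.action_emb = nn.Embedding(num_target_actions, action_embed_dim)
    # New state embedding f_psi' mapping target states into the common state space.
    state_embed = nn.Sequential(
        nn.Linear(target_state_dim, 64), nn.ReLU(),
        nn.Linear(64, common_state_dim))
    return policy, state_embed, transition
```

Training then continues exactly as in lines 2-15 of Algorithm 1, so the freshly initialized embeddings adapt to the target task while the reused policy and transition-model weights keep the latent spaces roughly aligned.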

Experiment

To evaluate the feasibility of learning action embeddings with the dynamics model, we first conduct experiments on two domains to show the semantics of the learned embeddings. Next, we evaluate the transferability of our method, named AE-SAC, both between tasks in the same domain and between tasks in different domains.

For comparison, we select Soft Actor-Critic [12] for discrete action spaces, denoted by SAC. In all experiments, AE-SAC and SAC use the same hyperparameters. The results of all experiments are averaged over 10 individual runs.

Semantic of Embeddings

We first validate our method of learning action embeddings in a simple environment, in which the agent needs to reach a randomly assigned goal position in a gridworld. Basically, the environment has 4 atomic actions at each time step: Up, Down, Left and Right. We consider a $k$-step planning task here, and hence the number of actions becomes $4^k$. Setting $k = 3$, we sample 1000 trajectories with maximum length 20 according to a random policy to train the action embeddings. We project the embeddings of the actions into 2-D space for understandability; the result is shown in Figure 3(a). Actions with the same or similar effects are closely positioned, and the embeddings can be clustered into 16 separated groups. What is more, the embedding space can be divided along 4 axes and shows evident symmetry w.r.t. the four directions of the gridworld, which means our method effectively captures the semantics of actions. In word embeddings, relationships between words are often discussed, such as Paris - France + Italy = Rome [20]. Surprisingly, we observe analogous arithmetic relations between our action embeddings as well.
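For concreteness, here is a minimal sketch of how the $4^k$ composite actions and the random-trajectory training data could be generated, assuming a gridworld environment object with a gym-like `reset`/`step` interface; all names here are illustrative.

```python
import itertools
import random

PRIMITIVES = ["Up", "Down", "Left", "Right"]

def composite_actions(k):
    """Enumerate the 4**k k-step composite actions of the gridworld."""
    return list(itertools.product(PRIMITIVES, repeat=k))

def collect_random_trajectories(env, actions, num_traj=1000, max_len=20):
    """Sample trajectories with a uniform random policy for training the transition model."""
    trajectories = []
    for _ in range(num_traj):
        s = env.reset()
        traj = []
        for _ in range(max_len):
            a = random.randrange(len(actions))              # random composite-action id
            s_next, reward, done, _ = env.step(actions[a])
            traj.append((s, a, s_next))
            s = s_next
            if done:
                break
        trajectories.append(traj)
    return trajectories
```

With $k = 3$ this yields 64 composite actions, and the 16 clusters in Figure 3(a) match the 16 distinct net displacements those composites can produce.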

Figure 3: Plots showing the PCA results of the learned embeddings. Each dot in (a) represents an action embedding in the gridworld 3-step planning task. Each dot in (b) represents the action embedding of a skill in the video game.
Figure 4: The learning curves of the methods on the gridworld domains for $k = 1, 2, 3$ respectively. 'Transfer-from-seq$k$' denotes that the source task is the $k$-step planning task.
Figure 5: (a) shows the PCA result of the action embeddings of the source task (colored in blue) and the target task (colored in red). Circles in yellow indicate the ideal positions for the target task, and dashed lines denote the deviations after transfer training. (b) shows the PCA result of the action embeddings of the source task (mDP, colored in orange) and the target task (mP, colored in blue).

Further, we test our approach on a one-versus-one video game. In this game, heroes can carry different skills to fight against a rule-controlled role with fixed skills. There are 15 skills in total. Differently from the gridworld setting, we also take the opponent's actions into consideration in order to learn more reasonable representations, since the opponent's actions have a great influence on the states. The embeddings are trained on 5000 fighting records with maximum length 20. The result is shown in Figure 3(b): we notice that skills with special effects, such as Silence and Stun, are clearly distinguished, and damage skills are closer to each other. As annotated in the figure, the DoT damage skill and instant damage skills are recognized as well.

Gridworld

We first evaluate AE-SAC on same-domain transfer, where the state spaces are the same among tasks. Considering the $k$-step planning tasks in the gridworld described before, we conduct experiments on three settings with $k = 1, 2, 3$; thus, the action spaces of the tasks contain 4, 16 and 64 actions, respectively. The state of the task consists of the current position and the goal position. The agent receives -0.05 each step and +10 when it reaches the goal. We assess the performance of AE-SAC with policies transferred from other source tasks versus standard SAC for discrete actions by measuring the return of the target task averaged over 100 episodes against the number of episodes.
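As a rough illustration of this setup, the following is a minimal gridworld sketch with the reward scheme described above; the grid size, episode limit, and the choice to give the step penalty per composite action are illustrative assumptions.

```python
import random

class GoalGridworld:
    """Reach a randomly assigned goal; state = (agent position, goal position)."""
    MOVES = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

    def __init__(self, size=10, max_steps=20):
        self.size, self.max_steps = size, max_steps

    def reset(self):
        self.pos = (random.randrange(self.size), random.randrange(self.size))
        self.goal = (random.randrange(self.size), random.randrange(self.size))
        self.t = 0
        return (*self.pos, *self.goal)

    def step(self, composite_action):
        # A composite action is a tuple of k primitive moves, e.g. ("Up", "Up", "Left").
        for move in composite_action:
            dx, dy = self.MOVES[move]
            self.pos = (min(max(self.pos[0] + dx, 0), self.size - 1),
                        min(max(self.pos[1] + dy, 0), self.size - 1))
        self.t += 1
        done = self.pos == self.goal or self.t >= self.max_steps
        reward = 10.0 if self.pos == self.goal else -0.05   # +10 at goal, -0.05 per step
        return (*self.pos, *self.goal), reward, done, {}
```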

Figure 4 shows the results on the different tasks. As seen, the speed of training is greatly accelerated after transfer and is faster than SAC on all tasks, especially on the target tasks with larger action spaces. Note that there is no jump start in performance, because the action embeddings of the target tasks are randomly initialized first and then quickly adapted. To transfer a policy, the action embeddings of the target task should align with those of the source task so that the policy can perform well on the target task. Figure 5(a) displays the action embeddings of the source task and the target task. As we can see, the embeddings of the target task are shifted, since the parameters of the dynamics model are not frozen after transfer, but they still show the same symmetry within a single task. Nonetheless, transfer can still effectively accelerate training, because the policy only needs to amend the biases.

Figure 6: The learning curves of methods on Mujoco and Roboschool tasks.
Figure 7: The learning curves of methods on the fighting video game.

Mujoco and Roboschool

Next, we consider the more difficult problem of cross-domain transfer, where the action spaces as well as the state spaces are different. We conduct experiments on four tasks, InvertedPendulum and InvertedDoublePendulum in Mujoco [29] and Roboschool, denoted by mP, mDP, rP and rDP for short. In these tasks, the agent needs to control the cart to balance the pole(s) on it. For mP and rP, the agent receives +1 for each step it is not terminated. For mDP and rDP, apart from a +10 reward for staying alive, it is penalized for high velocity and for states far from the goal state. As for the dynamics, InvertedPendulum and InvertedDoublePendulum may share some similarities, since the former is contained in the latter. The dynamics of Mujoco and Roboschool might differ because the physics engines are quite different; however, both follow the laws of physics. To evaluate our method, we discretize the original 1-dimensional continuous control action space into $N$ equally spaced values, resulting in a discrete action space with $N$ actions. The details of the tasks are summarized in Table 1.

Task State Dim Act Dim Act Range Discretized Actions
mP 4 1 [-3, 3] 101
mDP 11 1 [-1, 1] 51
rP 5 1 [-1, 1] 91
rDP 9 1 [-1, 1] 71
Table 1: Details of the tasks
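A minimal sketch of this discretization, assuming the 1-dimensional action ranges and counts from Table 1; `numpy` and the wrapper function name are illustrative.

```python
import numpy as np

def discretize_action_space(low, high, num_actions):
    """Split a 1-D continuous action range into equally spaced discrete values."""
    return np.linspace(low, high, num_actions)

# Example: InvertedPendulum (mP) with action range [-3, 3] and 101 discrete actions.
mp_actions = discretize_action_space(-3.0, 3.0, 101)
torque = mp_actions[57]   # continuous value actually sent to the simulator
```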

We train models on these tasks and transfer between each of them; Figure 6 reports the results of cross-domain transfer. Overall, almost all transfer results learn faster than training from scratch, especially on the Mujoco tasks. In Figure 6(a), transferring from mDP achieves the fastest learning, and the other two are relatively close. As for rP (Figure 6(b)), there is only a small gap between the methods. The variance of transferring from mDP appears quite large at first, since these two tasks vary not only in the task itself but also in the underlying physics engine. In Figure 6(c), it is faster to transfer the policy from mP; rDP and rP are close at first, and the variance of both is quite large. All transfer results are about the same in Figure 6(d). Interestingly, in the previous experiments SAC always performs better than AE-SAC when training from scratch, since AE-SAC needs to learn action embeddings at the same time, which makes learning a policy more difficult; for the tasks mDP and rDP, however, AE-SAC outperforms SAC.

Further, we investigate the properties of action embeddings learned from sequences of state-embedding and action pairs. The relations among the action embeddings of these tasks ought to be linear, since the actions are discretized from a continuous action space. Figure 5(b) plots the action embeddings of the source task (mDP) and the target task (mP), using PCA to reduce them to 1 dimension together. As shown in Figure 5(b), although there are some local oscillations, the overall trend is linear. This shows that our method can still learn meaningful latent representations of actions based on state embeddings.

Fighting Video Game

To show the potential of our method on a more practical problem, we validate it on the aforementioned one-versus-one fighting commercial video game. In this scenario, the hero can select a subset of skills to fight against a rule-controlled opponent. The state of this environment is a 48-dimensional vector that consists of information about the hero and the opponent. The agent receives positive rewards for dealing damage and winning, and negative rewards for its own losses and for losing. In addition, the agent is punished for choosing skills that are not ready.

In this environment, we select three different sets of skills with 5, 6 and 7 skills respectively. Some skills are shared between them. We first train the policies individually and then transfer between each other. The performance of the methods is measured by the average winning rate over the 100 most recent episodes.

Figure 7 reports the results. As seen, for the task with 5 skills (Figure 7(a)), the policy transferred from 7 skills suits it well and the winning rate rises to 100% quickly, while the other one is close to SAC. This may be caused by the fact that the skills carried in the target task are quite different from the key skills used by the source policy. In Figure 7(b), the two transfer results are close and both outperform SAC. For the task with 7 skills (Figure 7(c)), the policy from 6 skills adapts better and achieves the best performance. Overall, all transfer results outperform AE-SAC and SAC to varying degrees.

Conclusion

In this paper, we investigate learning and leveraging action embeddings of discrete actions to transfer across tasks with different action spaces and/or state spaces in RL. We propose a method that can effectively learn meaningful action embeddings from any generated trajectories by training a transition model. Further, we train RL policies with action embeddings by using a nearest-neighbor search in the embedding space. The policy and the transition model are then leveraged as initializations for transfer to the target task, leading to quick learning of action representations in the target task and adaptation of the policy. Our method is evaluated on three sets of tasks, demonstrating that it is capable of improving the initial performance compared to a standard RL algorithm for discrete actions, even with different state spaces.

In the future, we will try to learn promising action embeddings for continuous action spaces and to align the state embeddings with additional constraints.

References

  • [1] H. B. Ammar, E. Eaton, P. Ruvolo, and M. E. Taylor (2015) Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
  • [2] H. B. Ammar and M. E. Taylor (2011) Reinforcement learning transfer via common subspaces. In International Workshop on Adaptive and Learning Agents, pp. 21–36.
  • [3] S. Banerjee, V. Dhiman, B. Griffin, and J. J. Corso (2018) Do deep reinforcement learning algorithms really learn to navigate?
  • [4] A. Barreto, D. Borsa, J. Quan, T. Schaul, D. Silver, M. Hessel, D. Mankowitz, A. Žídek, and R. Munos (2019) Transfer in deep reinforcement learning using successor features and generalised policy improvement. arXiv preprint arXiv:1901.10964.
  • [5] Y. Bengio, A. Courville, and P. Vincent (2013) Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828.
  • [6] T. Carr, M. Chli, and G. Vogiatzis (2019) Domain adaptation for reinforcement learning on the Atari. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1859–1861.
  • [7] Y. Chandak, G. Theocharous, J. Kostas, S. M. Jordan, and P. S. Thomas (2019) Learning action representations for reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, pp. 941–950.
  • [8] G. Dulac-Arnold, R. Evans, H. van Hasselt, P. Sunehag, T. Lillicrap, J. Hunt, T. Mann, T. Weber, T. Degris, and B. Coppin (2015) Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679.
  • [9] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135.
  • [10] V. François-Lavet, Y. Bengio, D. Precup, and J. Pineau (2019) Combined reinforcement learning via abstract representations. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3582–3589.
  • [11] A. Gupta, C. Devin, Y. Liu, P. Abbeel, and S. Levine (2017) Learning invariant feature spaces to transfer skills with reinforcement learning. arXiv preprint arXiv:1703.02949.
  • [12] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine (2018) Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290.
  • [13] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson (2018) Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551.
  • [14] K. Hausman, J. T. Springenberg, Z. Wang, N. Heess, and M. Riedmiller (2018) Learning an embedding space for transferable robot skills.
  • [15] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
  • [16] Q. Le and T. Mikolov (2014) Distributed representations of sentences and documents. In International Conference on Machine Learning, pp. 1188–1196.
  • [17] S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research 17 (1), pp. 1334–1373.
  • [18] V. Liu, R. Kumaraswamy, L. Le, and M. White (2019) The utility of sparse representations for control in reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4384–4391.
  • [19] C. Ma, J. Wen, and Y. Bengio (2018) Universal successor representations for transfer reinforcement learning. arXiv preprint arXiv:1804.03758.
  • [20] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • [21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529.
  • [22] F. Nie, W. Zhu, and X. Li (2017) Unsupervised large graph embedding. In Thirty-First AAAI Conference on Artificial Intelligence.
  • [23] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
  • [24] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484.
  • [25] M. E. Taylor, P. Stone, and Y. Liu (2007) Transfer learning via inter-task mappings for temporal difference learning. Journal of Machine Learning Research 8 (Sep), pp. 2125–2167.
  • [26] M. E. Taylor and P. Stone (2009) Transfer learning for reinforcement learning domains: a survey. Journal of Machine Learning Research 10 (Jul), pp. 1633–1685.
  • [27] Y. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu (2017) Distral: robust multitask reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4496–4506.
  • [28] G. Tennenholtz and S. Mannor (2019) The natural language of actions. In Proceedings of the 36th International Conference on Machine Learning, pp. 6196–6205.
  • [29] E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033.
  • [30] M. Wulfmeier, I. Posner, and P. Abbeel (2017) Mutual alignment transfer learning. arXiv preprint arXiv:1707.07907.