The Natural Language of Actions
Abstract
We introduce Act2Vec, a general framework for learning context-based action representations for reinforcement learning. Representing actions in a vector space helps reinforcement learning algorithms achieve better performance by grouping similar actions and utilizing relations between different actions. We show how prior knowledge of an environment can be extracted from demonstrations and injected into action vector representations that encode natural, compatible behavior. We then use these representations for augmenting state representations as well as improving function approximation of Q-values. We visualize and test action embeddings in three domains: a drawing task, a high-dimensional navigation task, and the large-action-space domain of StarCraft II.
1 Introduction
The question “What is language?” has implications in the fields of neuropsychology, linguistics, and philosophy. One definition tells us that language is “a purely human and non-instinctive method of communicating ideas, emotions, and desires by means of voluntarily produced symbols” (Sapir, 1921). Much as humans adopt languages to communicate, an agent's interaction with an environment can be viewed as a sophisticated language that conveys information. Inspired by this conceptual analogy, we adopt existing methods in natural language processing (NLP) to gain a deeper understanding of the “natural language of actions,” with the ultimate goal of solving reinforcement learning (RL) tasks.
In recent years, many advances have been made in the field of distributed representations of words (Mikolov et al., 2013b; Pennington et al., 2014; Zhao et al., 2017). These methods, called “distributional” methods, build on the hypothesis that words which occur in similar contexts tend to have similar meanings (Firth, 1957), i.e., the meaning of a word can be inferred from the distribution of words around it. Similarly, the context in which an action is executed holds vital information (e.g., prior knowledge) about the environment. This information can be transferred to a learning agent through distributed action representations across a manifold of diverse environments.
In this paper, actions are represented and characterized by the company they keep (i.e., their context). We assume the context in which actions reside is induced by a demonstrator policy (or set of policies). We use the celebrated continuous skip-gram model (Mikolov et al., 2013a) to learn high-quality vector representations of actions from large amounts of demonstrated trajectories. In this approach, each action or sequence of actions is embedded in a $d$-dimensional vector that characterizes knowledge of acceptable behavior in the environment.
As motivation, consider the problem of navigating a robot. In its basic form, the robot must select a series of primitive actions of going straight, backwards, left, or right. By reading the words “straight”, “backwards”, “left”, and “right” the reader already has a clear understanding of their implication in the physical world. Particularly, moving straight and then left is presumed to correlate more closely with moving left and then straight, while for the most part contrasting with moving right and then backwards. Moreover, it is well understood that a navigation solution should rarely take the actions straight and backwards successively, as doing so would postpone arrival at the desired location. Figure 1 shows a 2-dimensional schematic of these action pairs in an illustrated vector space. We later show (Section 4.2) that similarities, relations, and symmetries relating to such actions are present in context-based action embeddings.
Action embeddings can ease an agent’s learning process on several fronts. First, action representations can be used to improve the expressiveness of state representations, sometimes even replacing them altogether (Section 4.1). In this case, the policy space is augmented to include action histories. These policies map the current state and action history to actions, i.e., $\pi : \mathcal{S} \times \mathcal{A}^k \to \mathcal{A}$. Through vector action representations, these policies can be efficiently learned, improving an agent’s overall performance. A conceptual diagram of this approach is depicted in Figure 2. Second, similarity between actions can be leveraged to decrease redundant exploration through grouping of actions. In this paper, we show how similarity between actions can improve approximation of Q-values, as well as devise a cluster-based exploration strategy for efficient exploration in large action space domains (Sections 3, 4.2).
Our main contributions in this paper are as follows. (1) We generalize the use of context-based embedding to actions. We show that novel insights can be acquired, portraying our knowledge of the world via similarity, symmetry, and relations in the action embedding space. (2) We offer uses of action representations for state representation, function approximation, and exploration, and demonstrate the resulting performance advantage on drawing and navigation domains.
This paper is organized as follows. Section 2 describes the general setting of action embeddings. We describe the Skip-Gram with Negative Sampling (SGNS) model used for representing actions and its relation to the pointwise mutual information (PMI) of actions with their contexts. In Section 3 we show how embeddings can be used for approximating Q-values, and introduce a cluster-based exploration procedure for large action spaces. Section 4 includes empirical uses of action representations for solving reinforcement learning tasks, including drawing a square and a navigation task. We visualize semantic relations of actions in the said domains as well as the large action space domain of StarCraft II. We conclude the paper with related work (Section 5) and a short discussion on future directions (Section 6).
2 General Setting
In this section we describe the general framework of Act2Vec. We begin by defining the context assumptions from which we learn action vectors. We then illustrate that embedding actions based on their pointwise mutual information (PMI) with their contexts contributes favorable characteristics. One beneficial property of PMI-based embeddings is that actions that are close in embedding space yield similar outcomes. In Section 4 we visualize Act2Vec embeddings of several domains, showing that their inherent structure is aligned with our prior understanding of the tested domains.
A Markov Decision Process (MDP) is defined by a 5-tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a discrete set of actions, $P$ is a set of transition probabilities, $r$ is a scalar reward, and $\gamma$ is a discount factor. We consider the general framework of multi-task RL in which a task $T$ is sampled from a task distribution $P_\mathcal{T}$. In addition, we consider a corpus of trajectories, $D$, relating to demonstrated policies. We assume the demonstrated policies are optimal w.r.t. task MDPs sampled from $P_\mathcal{T}$. More specifically, trajectories in $D$ are sampled from a distribution of permissible policies $P_\Pi$, defined by
$$P_\Pi(\pi) = \mathbb{E}_{T \sim P_\mathcal{T}}\left[\frac{\mathbb{1}\{\pi \in \Pi^*(T)\}}{|\Pi^*(T)|}\right],$$
where $\Pi^*(T)$ is the set of optimal policies, which maximize the discounted return of $T$.
We consider a state-action tuple $(s_t, a_t)$ at time $t$, and define its context of width $w$ as the sequence of the $w$ state-action pairs preceding and following time $t$. We will sometimes refer to state-only contexts and action-only contexts as contexts containing only states or only actions, respectively. We denote the set of all possible contexts by $\mathcal{C}$. With abuse of notation, we write $(a, c)$ to denote the pair of an action and its context.
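Concretely, the (action, context) pairs used for training in the action-only case can be extracted with a sliding window. The sketch below is our own minimal illustration (a symmetric window of width $w$ over a list of actions), not the paper's implementation:

```python
# Turn a demonstrated trajectory into (action, context) training pairs,
# where the context of a_t is the w actions before and after it.
def action_contexts(actions, w):
    pairs = []
    for t, a in enumerate(actions):
        lo, hi = max(0, t - w), min(len(actions), t + w + 1)
        context = tuple(actions[lo:t] + actions[t + 1:hi])  # a_t itself excluded
        pairs.append((a, context))
    return pairs

trajectory = ["L", "U", "R", "D"]
print(action_contexts(trajectory, 1))
# [('L', ('U',)), ('U', ('L', 'R')), ('R', ('U', 'D')), ('D', ('R',))]
```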
In our work we will focus on the pointwise mutual information (PMI) of an action $a$ and its context $c$. Pointwise mutual information is an information-theoretic association measure between a pair of discrete outcomes $x$ and $y$, defined as
$$\mathrm{pmi}(x; y) = \log \frac{p(x, y)}{p(x)\,p(y)}.$$
We will also consider the conditional PMI given $z$, denoted by $\mathrm{pmi}(x; y \mid z) = \log \frac{p(x, y \mid z)}{p(x \mid z)\,p(y \mid z)}$.
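As a toy illustration of this definition, PMI can be estimated directly from empirical counts of (action, context) pairs; the corpus and names below are our own:

```python
import math
from collections import Counter

# Estimate pmi(x; y) = log[ p(x, y) / (p(x) p(y)) ] from a toy corpus
# of (action, context) pairs.
pairs = [("L", "U"), ("L", "U"), ("L", "D"), ("R", "D"), ("R", "D"), ("R", "U")]
joint = Counter(pairs)
px = Counter(x for x, _ in pairs)
py = Counter(y for _, y in pairs)
n = len(pairs)

def pmi(x, y):
    return math.log((joint[(x, y)] / n) / ((px[x] / n) * (py[y] / n)))

# "L" co-occurs with "U" more often than independence would predict:
print(round(pmi("L", "U"), 4))  # log(4/3) ≈ 0.2877
```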
2.1 Act2Vec
We summarize the Skip-Gram neural embedding model introduced in (Mikolov et al., 2013a) and trained using the Negative-Sampling procedure (SGNS) (Mikolov et al., 2013b). We use this procedure for representing actions (in contrast to words) using their contexts, and refer to it as Act2Vec.
Every action $a \in \mathcal{A}$ is associated with a vector $\vec{a} \in \mathbb{R}^d$. In the same manner, every context $c \in \mathcal{C}$ is associated with a vector $\vec{c} \in \mathbb{R}^d$. In SGNS, we ask, does the pair $(a, c)$ come from $D$? More specifically, we ask, what is the probability that $(a, c)$ came from $D$? This probability, denoted by $P(D = 1 \mid a, c)$, is modeled as
$$P(D = 1 \mid a, c) = \sigma(\vec{a} \cdot \vec{c}) = \frac{1}{1 + e^{-\vec{a} \cdot \vec{c}}}.$$
Here, $\vec{a}$ and $\vec{c}$ are the model parameters to be learned.
Negative Sampling (Mikolov et al., 2013b) tries to minimize $P(D = 1 \mid a, c)$ for randomly sampled “negative” examples, under the assumption that randomly selecting a context for a given action is likely to result in an unobserved pair. The local objective of every pair $(a, c)$ is thus given by
$$\ell(a, c) = \log \sigma(\vec{a} \cdot \vec{c}) + k \cdot \mathbb{E}_{c_N \sim P_\mathcal{C}}\left[\log \sigma(-\vec{a} \cdot \vec{c}_N)\right],$$
where $P_\mathcal{C}$ is the empirical unigram distribution of contexts in $D$, and $k$ is the number of negative samples.
The global objective can be written as a sum over the losses of all pairs in the corpus,
$$L = \sum_{a \in \mathcal{A}} \sum_{c \in \mathcal{C}} \#(a, c)\, \ell(a, c),$$
where $\#(a, c)$ denotes the number of times the pair $(a, c)$ appears in $D$.
Relation to PMI: Optimizing the global objective makes observed action-context pairs have similar embeddings, while scattering unobserved pairs. Intuitively, actions that appear in similar contexts should have similar embeddings. In fact, it was recently shown that SGNS implicitly factorizes the action-context matrix whose cells are the pointwise mutual information of the respective action and context pairs, shifted by a global constant (Levy & Goldberg, 2014). In what follows, we show that PMI is a useful measure for action representation in reinforcement learning.
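A minimal pure-Python sketch of the SGNS training loop described above, applied to action tokens (toy corpus, window of width 1, and hyperparameters of our own choosing; a practical implementation would use an optimized SGNS library):

```python
import math, random

# Minimal Act2Vec/SGNS sketch: embed actions from their neighbors via
# negative sampling. Illustrative only, not the paper's implementation.
random.seed(0)
DIM, LR, NEG, EPOCHS = 8, 0.1, 2, 300
corpus = ["A", "B", "A", "B", "A", "B", "X", "Y", "X", "Y", "X", "Y"] * 5
vocab = sorted(set(corpus))
A = {a: [random.uniform(-0.5, 0.5) for _ in range(DIM)] for a in vocab}  # action vectors
C = {a: [0.0] * DIM for a in vocab}                                      # context vectors

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(a, c):  # modeled probability that (a, c) is an observed pair
    return sig(sum(x * y for x, y in zip(A[a], C[c])))

def update(a, c, label):  # one SGD step on the SGNS log-loss
    g = LR * (label - score(a, c))
    A[a], C[c] = ([x + g * y for x, y in zip(A[a], C[c])],
                  [y + g * x for x, y in zip(A[a], C[c])])

for _ in range(EPOCHS):
    for t in range(1, len(corpus)):
        a, c = corpus[t], corpus[t - 1]          # context = previous action
        update(a, c, 1)                          # observed ("positive") pair
        for _ in range(NEG):
            update(a, random.choice(vocab), 0)   # negative sample

# "B" always follows "A" and never "X" in this corpus:
print(score("B", "A") > score("B", "X"))
```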
2.2 State-only context
State-only contexts provide us with information about the environment as well as predictions of optimal trajectories. Let us consider the next-state context $c = s_{t+1}$. More specifically, given a state-action pair $(s, a)$ we are interested in the measure $\mathrm{pmi}(a; s' \mid s)$, where $s'$ here is the random variable depicting the next state. The following lemma shows that when two actions have similar PMI w.r.t. (with respect to) their next-state context, they can be joined into a single action with a small change in value.
Lemma 1.
Let $a_1, a_2 \in \mathcal{A}$ s.t.
$$\left|\mathrm{pmi}(a_1; s' \mid s) - \mathrm{pmi}(a_2; s' \mid s)\right| \le \epsilon \quad \forall s, s' \in \mathcal{S},$$
where $\epsilon > 0$.
Let $\tilde{M}$ denote the MDP in which $a_1$ and $a_2$ are joined into a single action, and denote by $Q^*$ and $\tilde{Q}^*$ the optimal Q-functions of $M$ and $\tilde{M}$, respectively.
Then, $\|Q^* - \tilde{Q}^*\|_\infty = O(\epsilon)$.
A proof of the lemma can be found in the supplementary material. The lemma illustrates that neglecting differences between actions of high proximity in embedding space has little effect on a policy’s performance. While state contexts provide fruitful information, their PMI may be difficult to approximate in large state domains. For this reason, we turn to action-only contexts, as described next.
2.3 Action-only context
Action-only contexts provide us with meaningful information when they are sampled from $P_\Pi$. To see this, consider the following property of action-only contexts. If action $a_1$ is more likely to be optimal than $a_2$, then any context that has a larger PMI with $a_1$ than with $a_2$ will also be more likely to be optimal when chosen with $a_1$. Formally, let $a_1, a_2 \in \mathcal{A}$ such that
$$P(a_1) \ge P(a_2). \tag{1}$$
Let $c \in \mathcal{C}$. If
$$\mathrm{pmi}(a_1; c) \ge \mathrm{pmi}(a_2; c), \tag{2}$$
then
$$P(a_1, c) \ge P(a_2, c). \tag{3}$$
To show (3), we write the assumption in Equation (2) explicitly:
$$\log \frac{P(a_1, c)}{P(a_1)\,P(c)} \ge \log \frac{P(a_2, c)}{P(a_2)\,P(c)}.$$
Next, exponentiating and rearranging, we have that
$$P(a_1, c) \ge \frac{P(a_1)}{P(a_2)}\, P(a_2, c),$$
giving us
$$P(a_1, c) \ge \frac{P(a_1)}{P(a_2)}\, P(a_2, c) \ge P(a_2, c),$$
where the last step is due to our assumption in Equation (1).
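The derivation above can be checked numerically on toy probabilities of our own choosing; any numbers satisfying (1) and (2) would do:

```python
import math

# Numeric check of the property: if P(a1) >= P(a2) and
# pmi(a1; c) >= pmi(a2; c), then P(a1, c) >= P(a2, c).
P = {"a1": 0.6, "a2": 0.4}        # marginals satisfying (1)
Pc = 0.5                          # P(c)
joint = {"a1": 0.35, "a2": 0.15}  # joint probabilities P(a, c)

def pmi(a):
    return math.log(joint[a] / (P[a] * Pc))

assert P["a1"] >= P["a2"]         # condition (1)
assert pmi("a1") >= pmi("a2")     # condition (2)
print(joint["a1"] >= joint["a2"]) # conclusion (3): True
```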
In most practical settings, context-based embeddings that use state contexts are difficult to train using SGNS. In contrast, action-only contexts usually consist of orders of magnitude fewer elements. For this reason, in Section 4 we experiment with action-only contexts, showing that semantics can be learned even when states are ignored.
3 Act2Vec for Function Approximation
Lemma 1 gives us intuition as to why actions with similar contexts are, in essence, of similar importance to overall performance. We use this insight to construct algorithms that depend on similarity between actions in the latent space. Here, we consider a discrete set of actions $\mathcal{A}$. Equivalently, as we will see in Section 4.2, this set of actions can be augmented to the set of all action sequences of length $k$, $\mathcal{A}^k$. We are thus concerned with approximating the Q-value $Q(s, a)$ of state-action pairs $(s, a)$.
Q-Embedding: When implemented using neural networks, Q-learning with function approximation consists of approximating the Q-value of state-action pairs by
$$Q(s, a_i; \theta) = \phi_\theta(s)^\top w_i, \tag{4}$$
where $\phi_\theta(s)$ are the features learned by the network, and $w_i$ are the linear weights learned in the final layer of the network. When the number of actions is large, this process becomes impractical. In NLP domains, it was recently suggested to use word embeddings to approximate the Q-function (He et al., 2016) as
$$Q(s, a; \theta) = \phi_\theta(s)^\top g(\vec{a}), \tag{5}$$
where $g(\vec{a})$ are the learned features extracted from the embedding $\vec{a}$ of a word $a$. Similar words become close in embedding space, thereby outputting similar Q-values. This approach can also be applied to action embeddings trained using Act2Vec. These action representations adhere to inherent similarities, allowing one to approximate their Q-values while effectively obtaining a complexity of smaller dimension. We will refer to the approximation in Equation (5) as Q-Embedding.
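The contrast between the two approximations in Equations (4) and (5) can be sketched as follows. The dimensions and the plain linear map standing in for $g$ are our own illustrative choices; the point is that the learned parameter count in Eq. (5) no longer grows with the number of actions:

```python
import random

# Toy comparison of per-action weights (Eq. 4) vs. a shared map over
# frozen action embeddings (Eq. 5).
random.seed(1)
N_ACTIONS, STATE_DIM, EMB_DIM = 1000, 16, 4

def rand_vec(n):
    return [random.gauss(0, 1) for _ in range(n)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Eq. (4): one independent weight vector per action.
W_per_action = [rand_vec(STATE_DIM) for _ in range(N_ACTIONS)]

# Eq. (5): actions share fixed (pre-trained) embeddings; only a small
# linear map g: R^EMB_DIM -> R^STATE_DIM is learned.
act2vec = [rand_vec(EMB_DIM) for _ in range(N_ACTIONS)]  # frozen embeddings
G = [rand_vec(EMB_DIM) for _ in range(STATE_DIM)]        # learned map g

def q_eq4(state, a):
    return dot(state, W_per_action[a])

def q_eq5(state, a):
    return dot(state, [dot(row, act2vec[a]) for row in G])

params_eq4 = N_ACTIONS * STATE_DIM  # grows with |A|
params_eq5 = EMB_DIM * STATE_DIM    # independent of |A|
print(params_eq4, params_eq5)       # 16000 64
```

Under Eq. (5), actions with nearby embeddings necessarily receive nearby Q-values, which is the property exploited in Section 4.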
k-Exp: In Q-learning, the most fundamental exploration strategy consists of uniformly sampling an action. When the space of actions is large, this process becomes infeasible. In these cases, action representations can be leveraged to construct improved exploration strategies. We introduce a new method of exploration based on action embeddings, which we call k-Exp. k-Exp is a straightforward extension of uniform sampling. First, the action embedding space is divided into $k$ clusters using a clustering algorithm (e.g., k-means). The exploration process then follows two steps: (1) sample a cluster uniformly, and (2) given a cluster, uniformly sample an action within it. k-Exp ensures that actions with semantically different meanings are sampled uniformly, thereby improving the approximation of Q-values.
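The two-step sampling can be sketched as follows, assuming the clusters have already been computed (here hand-coded for illustration rather than produced by k-means over real embeddings):

```python
import random

# Cluster-based exploration: sample a cluster uniformly, then an action
# within it. Clusters are toy inputs of our own choosing.
random.seed(2)
clusters = {                      # cluster id -> actions in that cluster
    0: ["left", "left-up"],      # two near-duplicate "leftish" actions
    1: ["right"],                # a lone semantically distinct action
}

def cluster_exp_sample():
    cluster = random.choice(list(clusters))   # step (1): uniform over clusters
    return random.choice(clusters[cluster])   # step (2): uniform within cluster

counts = {a: 0 for acts in clusters.values() for a in acts}
for _ in range(30000):
    counts[cluster_exp_sample()] += 1

# "right" is drawn ~50% of the time despite being 1 of 3 actions, so the
# semantically distinct action is not drowned out by near-duplicates.
print(abs(counts["right"] / 30000 - 0.5) < 0.02)
```

With plain uniform sampling over actions, "right" would be drawn only about a third of the time.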
In Section 4.2 we compare Q-Embedding and k-Exp to Q-learning with uniform exploration, demonstrating the advantage of using action representations.
4 The Semantics of Actions
Word embeddings have been shown to capture large numbers of syntactic and semantic word relationships (Mikolov et al., 2013b). Motivated by this, as well as their relation to PMI, we demonstrate similar interpretations in several reinforcement learning environments. This section is divided into three parts. The first and second parts consider the tasks of drawing a square and navigating in 3D space. In both parts, we demonstrate the semantics captured by actions in their respective domains. We demonstrate the effectiveness of Act2Vec in representing a state using the sequence of previously taken actions (see Figure 2). We then demonstrate the use of Act2Vec with Q-Embedding and k-Exp. Finally, in the third part of this section we demonstrate the semantic nature of actions learned in the complex strategy game of StarCraft II.
4.1 Drawing
We undertook the task of teaching an agent to draw a square given a sparse reward signal. The action space consisted of 12 types of strokes: Left, Right, Up, Down, and all combinations of corners (e.g., Left+Up). The sparse reward provided the agent with feedback only once she had completed her drawing, with positive feedback only when the drawn shape was rectangular. Technical details of the environment can be found in the supplementary material. We trained Act2Vec with action-only context over a corpus of 70,000 human-made drawings in the “square” category of the Quick, Draw! (Cheema et al., 2012) dataset. Projections of these embeddings are depicted in Figure 3(a). The embedding space projection reflects our knowledge of these action strokes. The space is divided into 4 general regions, consisting of strokes in each of the main axis directions. Strokes relating to corners of the squares are centered in distinct clusters, each in proximity to an appropriate direction. The embedding space presents evident symmetry w.r.t. clockwise vs. counterclockwise drawings of squares.
The action space in this environment is relatively small. One way of representing the state is through the set of all previous actions, since in this case the drawn shape, and hence the state, is fully determined by the action history. The state was therefore represented as the vector equal to the sum of previous action vectors. We compared four types of action embeddings for representing states: Act2Vec, normalized Act2Vec (using the $\ell_2$ norm), one-hot, and randomized embeddings. Figure 3(b,c) shows results of these representations for different square sizes. Act2Vec proved to be superior on all tasks, especially with increased horizon, where the sparse reward signal drastically affected performance. We also note that normalized Act2Vec achieved similar results with higher efficiency. In addition, all methods but Act2Vec had high variance in their performance over trials, implying they were dependent on the network’s initialization. A detailed overview of the training process can be found in the supplementary material.
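A minimal sketch of this sum-of-action-vectors state representation, with hypothetical 2-D action vectors (real Act2Vec vectors are higher-dimensional and learned from data):

```python
# State = sum of the embedding vectors of all previously taken actions.
# The 2-D embeddings here are toy stand-ins for learned Act2Vec vectors.
emb = {"Up": (0.0, 1.0), "Down": (0.0, -1.0),
       "Left": (-1.0, 0.0), "Right": (1.0, 0.0)}

def state_of(history):
    x = y = 0.0
    for a in history:
        dx, dy = emb[a]
        x, y = x + dx, y + dy
    return (x, y)

# Two stroke orders producing the same partial square collapse to the
# same state vector, since the sum is order-invariant:
print(state_of(["Up", "Right"]) == state_of(["Right", "Up"]))  # True
```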
4.2 Navigation
In this section we demonstrate how sequences of actions can be embedded using Act2Vec. We then show how embeddings based on trajectories captured in a simple navigation domain can transfer knowledge to a more complex navigation domain, thus improving learning efficiency.
In physical domains, acceptable movements of objects in space are frequently characterized by smoothness of motion. As such, when we open a door, we move our hand smoothly through space until reaching the knob, after which we complete a smooth rotation of the doorknob. An agent learning in such an environment may tend to explore in a manner that does not adhere to patterns of such permissible behavior (e.g., by uniformly choosing an arbitrary action). Moreover, when inspecting individual tasks, actions incorporate various properties that are particular to the task at hand. Looking left and then right may be a useless sequence of actions when the objective is to reach a goal in space, while essential for tasks where information gathered by looking left contributes to the overall knowledge of an objective on the right (e.g., looking around the walls of a room). In both cases, looking left and immediately right is without question distinct from looking left and then looking left again. These semantics, when captured properly, can assist in solving any navigation task.
When studying the task of navigation, one is free to determine an action space of choice. In most applications, the primitive space of actions is either defined by fixed increments of movement and rotation or by physical forces. Let us consider the former case, and more specifically examine the action space consisting of moving forward (marked by $\uparrow$) and rotating our view to the left or to the right (marked by $\leftarrow$ and $\rightarrow$, respectively). These three actions are in essence sufficient for solving most navigation tasks. Nonetheless, semantic relations w.r.t. these actions become particularly evident when action sequences are used. For this case, we study the augmented action space consisting of all action sequences of length $k$, i.e., $\mathcal{A}^k = \{\uparrow, \leftarrow, \rightarrow\}^k$. For example, for the case of $k = 2$, the action sequence $\uparrow\uparrow$ would relate to taking the action $\uparrow$ twice, thereby moving two units forward in the world, whereas $\uparrow\leftarrow$ would relate to moving one unit forward in the world and then rotating our view one unit to the left. This augmented action space holds interesting features, as we see next.
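The augmented space is simply the $k$-fold Cartesian product of the primitive actions; ASCII names stand in for the arrow symbols above:

```python
from itertools import product

# All length-k sequences over the primitives {forward, left, right}.
primitives = ["F", "L", "R"]
k = 2
A_k = list(product(primitives, repeat=k))

print(len(A_k))           # 3^2 = 9 composite actions
print(("F", "F") in A_k)  # e.g., "move two units forward"
```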
We trained Act2Vec with action-only context on a corpus of 3000 actions taken from a 2D navigation domain consisting of randomly generated walls. Given a random goal location, we captured actions played by a human player in reaching the goal. Figure 4(a,b) shows Principal Component Analysis (PCA) projections of the resulting embeddings for action sequences of length $k$. Examining the resulting space, we find two interesting phenomena. First, the embedding space is divided into several logical clusters, each relating to an aspect of movement. In these, sequences are divided according to their forward momentum as well as direction. Second, we observe symmetry w.r.t. the vertical axis, relating to looking left and right. These symmetrical relations capture our understanding of the consequences of executing these action sequences. In the next part of this section we use these learned embeddings in a different navigation domain, teaching an agent how to navigate while avoiding unwanted objects.
Knowledge Transfer: We tested the Act2Vec embedding trained on the 2D navigation domain on the DeepMind Lab (Beattie et al., 2016) SeekAvoid environment. (While the tasks of navigation in the 2D domain and the SeekAvoid domain are different, the usage of their actions presents similar semantic knowledge, thereby incorporating transfer of knowledge between these domains.) Here, an agent must navigate in 3D space, learning to collect good apples while avoiding bad ones. The agent was trained using Q-learning with function approximation. We tested sequences of length $k$ using both methods of function approximation (Equations 4 and 5).
Results, as depicted in Figure 4(c), show the superiority of using embeddings to approximate Q-values. Action sequences of length 2 showed superiority over single actions with uniform exploration, with a marked increase in total reward. Sequences of length 3 did not exhibit an increase in performance. We speculate this is due to the uniform exploration process. To overcome this matter, we used k-means in order to find clusters in the action embedding space. We then evaluated k-Exp on the resulting clusters. While results were dependent on the initialization of clusters (due to the randomization of k-means), sequences of length 3 showed an increase in total reward. Additional technical details can be found in the supplementary material.
Regularization: Learning to navigate using sequences of actions led to smoothness of motion in the resulting tests. This property arose due to the unique clustering of sequences. In particular, the sequences $\leftarrow\rightarrow$ and $\rightarrow\leftarrow$ were discarded by the agent, allowing for a smooth flow in navigation. Consequently, this property indicates that action sequences act as a regularizer for the navigation task.
4.3 StarCraft II
StarCraft II is a popular video game that presents a very hard challenge for reinforcement learning. Its main difficulties include a huge state and action space, as well as a long time horizon of thousands of states. The consequences of any single action (in particular, early decisions) are typically only observed many frames later, posing difficulties in temporal credit assignment and exploration. Act2Vec offers an opportunity to mitigate some of these challenges by finding reliable similarities and relations in the action space.
We used a corpus of over a million game replays played by professional and amateur players. The corpus contained over 2 billion played actions, which on average are equivalent to 100 years of consecutive gameplay. The action space was represented by over 500 action functions, each with 13 types of possible arguments, as described in (Vinyals et al., 2017). We trained Act2Vec with action-only context to embed the action functions into action vectors of dimension $d$, ignoring any action arguments. t-SNE (Maaten & Hinton, 2008) projections of the resulting action embeddings are depicted in Figure 5.
In StarCraft II players choose to play one of three races: Terran, Protoss, or Zerg. Once a player chooses her race, she must defeat her opponent through strategic construction of buildings, training of units, movement of units, research of abilities, and more. While a myriad of strategies exist, expert players operate in conventional forms. Each race admits different strategies due to its unique buildings, units, and abilities. Embeddings depicted in Figure 5(a) show distinct clusters for the three different races. Moreover, actions that are commonly used by all three races are projected to a central position, equidistant from all race clusters. Figure 5(b) details a separate t-SNE projection of the Terran action space. Embeddings are clustered into regions with definite, distinct types, including training of units, construction of buildings, research, and effects. While these actions seem arbitrary in their raw form, Act2Vec, through context-based embedding, captures their relations through meaningful clusters.
Careful scrutiny of Figure 5(b) shows interesting captured semantics, which reveal common game strategies. As an example, let us analyze the cluster containing actions relating to training Marines, Marauders, Medivacs, and Widow Mines. Marines are an all-purpose infantry unit, while Marauders, being almost the opposite of Marines, are effective at the front of an engagement, taking damage for Marines. Medivacs are dual-purpose dropships and healers. They are common in combination with Marines and Marauders, as they can drop numbers of Marines and Marauders and then support them with healing. A Widow Mine is a light mechanical mine that deals damage to ground or air units. Widow Mines are used, as a standard opening move, in conjunction with Marines and Medivacs to zone out opposing mineral fields. Other examples of strategic clusters include the Ghost and the Nuke, which are commonly used together, as well as the Stimpack, Combat Shield, and Concussive Shells abilities, which are Marine and Marauder upgrades, all researched at the Tech Lab attached to a Barracks.
The semantic representation of actions in StarCraft II illustrates how low-dimensional information can be extracted from high-dimensional data through an elementary process. It further emphasizes that knowledge implicitly incorporated in Act2Vec embeddings can be compactly represented without the need to solve the (often challenging) task at hand.
5 Related Work
Action Embedding: Dulac-Arnold et al. (2015) proposed to embed discrete actions into a continuous space, then find optimal actions using a nearest-neighbor approach. They do not, however, offer a method by which such action representations can be found. Most related to our work is that of Chandak et al. (2019), in which an embedding is used as part of the policy’s structure in order to train an agent. Our work provides a complementary aspect, with an approach to directly inject prior knowledge from expert data. In addition, we are able to capture semantics without the need to solve the task at hand.
Representation Learning: Representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task (Goodfellow et al.). In particular, deep learning exploits this concept by its very nature (Mnih et al., 2015). Other work related to representation in RL includes Predictive State Representations (PSR) (Littman & Sutton, 2002), which capture the state as a vector of predictions of future outcomes, and the Heuristic Embedding of Markov Processes (HEMP) (Engel & Mannor, 2001), which learns to embed transition probabilities using an energy-based optimization problem. In contrast to these, actions are less likely to be affected by the “curse of dimensionality” that is inherent to states.
One of the most fundamental tasks in the field of NLP is word embedding (Mikolov et al., 2013b; Pennington et al., 2014; Zhao et al., 2017), where low-dimensional word representations are learned from unlabeled corpora. Among word embedding models, Word2Vec (Mikolov et al., 2013b) (trained using SGNS) gained its popularity due to its effectiveness and efficiency, achieving state-of-the-art performance on a range of linguistic tasks in a fraction of the time needed by previous techniques. In a similar fashion, Act2Vec represents actions by their context. It is able to capture meaningful relations between actions, which are used to improve RL agents in a variety of tasks.
Learning from Demonstration (LfD): Imitation learning is primarily concerned with matching the performance of a demonstrator (Schaal, 1997, 1999; Argall et al., 2009). Demonstrations typically consist of sequences of state-action pairs $\{(s_t, a_t)\}$, from which an agent must derive a policy that reproduces and generalizes the demonstrations. While we train Act2Vec using a similar corpus, we do not attempt to generalize the demonstrator’s mapping from states to actions. Our key claim is that vital information is present in the order in which actions are taken. More specifically, the context of an action encodes acceptable forms and manners of usage. These natural semantics cannot be generalized from state-to-action mappings, and can be difficult for reinforcement learning agents to capture. By using finite action contexts we are able to create meaningful representations that capture relations and similarities between actions.
Multi-Task RL: Multi-task learning learns related tasks in parallel with a shared representation, leveraging information in related tasks as an inductive bias to improve generalization and to help learning for all tasks (Caruana, 1997; Ruder, 2017; Taylor & Stone, 2009a, b; Gupta et al., 2017). In our setting, actions are represented using trajectories sampled from permissible policies. (In our setting, policies are given as optimal solutions to tasks. In practical settings, due to finite context widths, policies need not be optimal in order to capture relevant, meaningful semantics.) These representations advise on correct operations for learning new tasks, though they only incorporate local, relational information. They provide an approach to implicitly incorporate prior knowledge through representation. Act2Vec can thus be used to improve the efficiency of multi-task RL methods.
Skill Embedding: Concisely representing skills allows for efficient reuse when learning complex tasks (Pastor et al., 2012; Hausman et al.; Kroemer & Sukhatme, 2016). Many methods use latent variables and entropy constraints to decrease the uncertainty of identifying an option, allowing for more versatile solutions (Daniel et al., 2012; End et al., 2017; Gabriel et al., 2017). While these latent representations enhance efficiency, their creation process depends on the agent’s ability to solve the task at hand. The benefit of using data generated by human demonstrations is that it lets one learn expressive representations without the need to solve any task. Moreover, much of the knowledge that is implicitly acquired from human trajectories may be unattainable by an RL agent. As an example of such a scenario we depict action embeddings learned from human replays in StarCraft II (see Section 4.3, Figure 5). While up-to-date RL algorithms have yet to overcome the obstacles and challenges in such problems, Act2Vec efficiently captures evident, valuable relations between actions.
6 Discussion and Future Work
If we recognize actions as symbols of a natural language, and regard this language as “expressive,” we imply that it is an instrument of inner mental states. Even by careful introspection, we know little about these hidden mental states; but by regarding actions as thoughts, beliefs, and strategies, we limit our inquiry to what is objective. We therefore describe actions as modes of behavior in relation to the other elements in the context of the situation.
When provided with a structured embedding space, one can efficiently eliminate actions. When the number of actions is large, a substantial portion of actions can be eliminated due to their proximity to known suboptimal actions. When the number of actions is small, action sequences can be encoded instead, establishing an extended action space in which similar elimination techniques can be applied.
Recognizing elements of the input as segments of an expressive language allows us to create representations that exhibit unique structural characteristics. While the scope of this paper focused on action representation, distributional embedding techniques may also be used to efficiently represent states, policies, or rewards through appropriate contexts. Interpreting these elements relative to their context holds an array of possibilities for future research.
Lastly, we note that distributional representations can be useful for debugging RL algorithms and their learning process. By visualizing trajectories in the action embedding space, an overseer is able to monitor an agent’s progress, finding flaws and improving learning efficiency.
References
 Abbeel & Ng (2004) Abbeel, P. and Ng, A. Y. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twentyfirst international conference on Machine learning, pp. 1. ACM, 2004.
 Argall et al. (2009) Argall, B. D., Chernova, S., Veloso, M., and Browning, B. A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5):469–483, 2009.
 Beattie et al. (2016) Beattie, C., Leibo, J. Z., Teplyashin, D., Ward, T., Wainwright, M., Küttler, H., Lefrancq, A., Green, S., Valdés, V., Sadik, A., et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.
 Caruana (1997) Caruana, R. Multitask learning. Machine learning, 28(1):41–75, 1997.
 Chandak et al. (2019) Chandak, Y., Theocharous, G., Kostas, J., Jordan, S., and Thomas, P. S. Learning action representations for reinforcement learning. arXiv preprint arXiv:1902.00183, 2019.
 Cheema et al. (2012) Cheema, S., Gulwani, S., and LaViola, J. QuickDraw: improving drawing experience for geometric diagrams. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1037–1064. ACM, 2012.
 Daniel et al. (2012) Daniel, C., Neumann, G., and Peters, J. Hierarchical relative entropy policy search. In Artificial Intelligence and Statistics, pp. 273–281, 2012.
 Dulac-Arnold et al. (2015) Dulac-Arnold, G., Evans, R., van Hasselt, H., Sunehag, P., Lillicrap, T., Hunt, J., Mann, T., Weber, T., Degris, T., and Coppin, B. Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679, 2015.
 End et al. (2017) End, F., Akrour, R., Peters, J., and Neumann, G. Layered direct policy search for learning hierarchical skills. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 6442–6448. IEEE, 2017.
 Engel & Mannor (2001) Engel, Y. and Mannor, S. Learning embedded maps of Markov processes. In Proceedings of ICML 2001. Citeseer, 2001.
 Firth (1957) Firth, J. R. A synopsis of linguistic theory, 1930–1955. Studies in linguistic analysis, 1957.
 Gabriel et al. (2017) Gabriel, A., Akrour, R., Peters, J., and Neumann, G. Empowered skills. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 6435–6441. IEEE, 2017.
 Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. Deep learning, volume 1. MIT Press, 2016.
 Gupta et al. (2017) Gupta, A., Devin, C., Liu, Y., Abbeel, P., and Levine, S. Learning invariant feature spaces to transfer skills with reinforcement learning. ICLR, 2017.
 Hausman et al. (2018) Hausman, K., Springenberg, J. T., Wang, Z., Heess, N., and Riedmiller, M. Learning an embedding space for transferable robot skills. ICLR, 2018.
 He et al. (2016) He, J., Chen, J., He, X., Gao, J., Li, L., Deng, L., and Ostendorf, M. Deep reinforcement learning with a natural language action space. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 1621–1630, 2016.
 Kroemer & Sukhatme (2016) Kroemer, O. and Sukhatme, G. S. Learning relevant features for manipulation skills using metalevel priors. arXiv preprint arXiv:1605.04439, 2016.
 Levy & Goldberg (2014) Levy, O. and Goldberg, Y. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pp. 2177–2185, 2014.
 Littman & Sutton (2002) Littman, M. L. and Sutton, R. S. Predictive representations of state. In Advances in neural information processing systems, pp. 1555–1561, 2002.
 Maaten & Hinton (2008) Maaten, L. v. d. and Hinton, G. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579–2605, 2008.
 Mikolov et al. (2013a) Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
 Mikolov et al. (2013b) Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013b.
 Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Humanlevel control through deep reinforcement learning. Nature, 518(7540):529, 2015.
 Pastor et al. (2012) Pastor, P., Kalakrishnan, M., Righetti, L., and Schaal, S. Towards associative skill memories. In Humanoid Robots (Humanoids), 2012 12th IEEERAS International Conference on, pp. 309–315. IEEE, 2012.
 Pennington et al. (2014) Pennington, J., Socher, R., and Manning, C. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543, 2014.
 Ruder (2017) Ruder, S. An overview of multitask learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017.
 Sapir (1921) Sapir, E. An introduction to the study of speech. Citeseer, 1921.
 Schaal (1997) Schaal, S. Learning from demonstration. In Advances in neural information processing systems, pp. 1040–1046, 1997.
 Schaal (1999) Schaal, S. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6):233–242, 1999.
 Schulman et al. (2017) Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
 Taylor & Stone (2009a) Taylor, M. E. and Stone, P. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633–1685, 2009a.
 Vinyals et al. (2017) Vinyals, O., Ewalds, T., Bartunov, S., Georgiev, P., Vezhnevets, A. S., Yeo, M., Makhzani, A., Küttler, H., Agapiou, J., Schrittwieser, J., et al. StarCraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782, 2017.
 Zhao et al. (2017) Zhao, Z., Liu, T., Li, S., Li, B., and Du, X. Ngram2vec: Learning improved word representations from ngram co-occurrence statistics. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 244–253, 2017.
7 Supplementary Material
7.1 Proof of Lemma 1
We show a more general formulation of the lemma. More specifically, we show that it holds for
where is any (possibly adversarial) distribution over the set of actions in . In the case of the lemma, is the uniform distribution.
Writing the assumption explicitly we have that for all
which can be rewritten
giving us
(6) 
Next, let be the MDP defined by , and let be the MDP defined by , where
For all we have that
where in the last step we used Equation 6 and the fact that . For the remainder of the proof we will use the following result proven in (Abbeel & Ng, 2004):
Lemma 2.
Let be MDPs as defined above. If
then
By definition of and we have that . Then, for all
Writing the above explicitly we get
(7) 
Next denote the sets
Then,
By switching the roles and we get the opposite inequality:
Overall,
(8) 
Plugging (8) in (7) and using Lemma 2 we get
The proof follows immediately due to
for .
7.2 Experimental Details
7.2.1 Drawing
We used the Quick, Draw! (Cheema et al., 2012) dataset, consisting of 50 million drawings ranging over 345 categories. While the raw data describes strokes as relative positions of a user’s pen on the canvas, we simplified the action space to include four primitive actions: left, right, up, down, i.e., movement of exactly one pixel in each of these directions. This was done by connecting lines between any two consecutive points in a stroke and ignoring any drawings consisting of multiple strokes. Finally, we defined our action space to be any fixed-length sequence of primitive actions , which we hereinafter denote as an action stroke.
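A minimal sketch of this preprocessing step follows. The paper does not specify its exact line-drawing routine, so we make the simplifying assumption of emitting all horizontal unit moves before the vertical ones for each segment:

```python
def primitives(points):
    # Convert a stroke (list of (x, y) pen positions) into 1-pixel
    # primitive moves by connecting consecutive points. Here we use
    # the simple approximation of emitting the horizontal moves and
    # then the vertical moves for each segment.
    actions = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        actions += ["right" if dx > 0 else "left"] * abs(dx)
        actions += ["up" if dy > 0 else "down"] * abs(dy)
    return actions

print(primitives([(0, 0), (2, 0), (2, -1)]))  # ['right', 'right', 'down']
```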
For our task, we were concerned with drawing a square. We used the 70,000 human-drawn squares in the “square” category of the Quick, Draw! dataset. Scanning the corpus of square drawings, we trained Act2Vec on 12 types of action strokes:

Strokes of length 20 pixels in each of the axis directions: Left, Right, Up, Down.

Strokes of length 20 pixels consisting of corners in each of the possible directions: Left+Up, Up+Right, Right+Down, Down+Left, Right+Up, Up+Left, Left+Down, Down+Right.
Act2Vec was trained with SGNS over epochs using a window size of , embedding dimension , and negative samples.
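To illustrate the data fed to SGNS, the following sketch extracts (action, context) pairs from an action-stroke trajectory using a symmetric window; the trajectory and window size are illustrative, not the paper’s (elided) hyperparameters:

```python
def context_pairs(trajectory, window):
    # For each action, collect the surrounding actions within the
    # window as its "context", as in skip-gram training.
    pairs = []
    for i, a in enumerate(trajectory):
        lo, hi = max(0, i - window), min(len(trajectory), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((a, trajectory[j]))
    return pairs

traj = ["Left", "Left+Up", "Up", "Up+Right"]
print(context_pairs(traj, 1))
```

Each pair contributes one positive example to the skip-gram objective; negative samples are then drawn from the action vocabulary.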
The reward function was given by , where denotes the length of a side of the drawn shape, and is the desired length of each side. When the drawn shape had more or fewer than four sides, a reward of was given. In our case we tested values of .
The state was represented as the sum of embeddings of previously taken actions. That is,
where are the used embeddings. We tested four types of embeddings: Act2Vec, Normalized Act2Vec (by norm), onehot embedding, and random embedding. The onehot embedding is defined as the unit vector of length , with zeros everywhere except for position , corresponding to action . Random embeddings were initialized uniformly in .
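The sum-of-embeddings state can be sketched as follows, shown with the one-hot embedding for concreteness; the action names and dimension are illustrative:

```python
def state_from_history(history, embeddings, dim):
    # Represent the state as the sum of embeddings of previously
    # taken actions.
    s = [0.0] * dim
    for a in history:
        s = [x + y for x, y in zip(s, embeddings[a])]
    return s

# One-hot embedding: the unit vector corresponding to each action.
one_hot = {"left": [1, 0, 0, 0], "right": [0, 1, 0, 0],
           "up":   [0, 0, 1, 0], "down":  [0, 0, 0, 1]}
print(state_from_history(["left", "left", "up"], one_hot, 4))
# [2.0, 0.0, 1.0, 0.0]
```

With Act2Vec embeddings, the same aggregation is applied to the learned vectors instead of the unit vectors.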
The agent was trained using Proximal Policy Optimization (PPO) (Schulman et al., 2017) over million iterations for the case of and million for , using asynchronous threads. PPO was run using the following parameters: learning rate = , , , experience buffer passes, entropy coefficient = , clipping range = . The value and policy networks were both represented as feedforward neural networks with two hidden layers of units each. Each of the state representation methods was tested over a set of 15 trials, except for random embeddings, which were tested over a set of 100 trials. A different random embedding was sampled for each trial.
7.2.2 Navigation
We start by describing the 2D environment in which the action embeddings were learned. An image of the environment can be seen in Figure 5(a). The environment was divided into a 25×25 grid, in which 300 walls were randomly placed, as well as 5 numbers relating to goals. The player was initialized in the center of the grid. The player’s movement was not constrained to the grid, but was rather more continuous, similar to that of the SeekAvoid environment. The player was given the following three actions: move forward one unit, rotate left 25 degrees, and rotate right 25 degrees. The player was tasked with moving from its initial position and reaching each of the goal numbers (in order).
Actions of a demonstrator player were recorded for different initializations of the 2D environment, yielding a corpus of 3000 actions. Act2Vec was trained with SGNS over epochs, using different window sizes of , embedding dimensions , and negative samples.
The 3D environment used to test the DQN agent was the SeekAvoid lab (Beattie et al., 2016). An image of the environment can be seen in Figure 5(b). The input image was resized to a resolution of pixels with channels (RGB). All experiments used the same state representation: a convolutional neural network with 3 layers: (1) a 2D convolution with 8 channels, window size of and stride of , (2) a 2D convolution with 16 channels, window size of , and stride of 2, and (3) a fully connected layer with hidden features, . For the traditional DQN, the network output was , where were the network’s parameters learned for each action. For the Embedding case, we used another network to approximate as , where is the embedding of action , and is a feedforward neural network with one hidden layer of 64 units and activation function, and 128 outputs. In this case, the network output was given by , for each of the actions.
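The Embedding variant’s Q-function can be sketched as an inner product between the state features f(s) and g(e_a); below, f(s) is a hand-picked 3-feature vector and g a fixed linear map standing in for the paper’s trained networks, so both are illustrative stand-ins:

```python
def q_value(state_features, action_embedding, g):
    # Q(s, a) is approximated as the inner product between the state
    # network's output f(s) and g(e_a), where e_a is the embedding
    # of action a.
    ga = g(action_embedding)
    return sum(x * y for x, y in zip(state_features, ga))

def g(e):
    # Stand-in for the trained one-hidden-layer network over e_a.
    return [2 * x for x in e]

f_s = [0.5, -1.0, 2.0]          # stand-in for the CNN output f(s)
print(q_value(f_s, [1.0, 0.0, 0.5], g))  # 3.0
```

At decision time, this inner product is evaluated once per action, reusing the same f(s).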
The action space consisted of: move forward one unit, turn view 25 degrees to the right, and turn view 25 degrees to the left. Every action was repeated for a sequence of 10 frames. In the case of action sequences, every action in the sequence was repeated for 10 frames, i.e., a sequence of actions was executed for frames.
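The frame-repetition scheme amounts to the following; `frames_per_action=10` matches the text, while the action names are illustrative:

```python
def repeat_actions(action_sequence, frames_per_action=10):
    # Expand an action sequence so that each action is repeated for
    # a fixed number of frames.
    frames = []
    for a in action_sequence:
        frames.extend([a] * frames_per_action)
    return frames

frames = repeat_actions(["forward", "left"])
print(len(frames))  # 20
```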
The agent used a replay memory of one million transitions and a training batch of transitions. The agent was trained over 60,000 iterations with a learning rate of 0.00025 and a discount factor of . For uniform and Exp exploration, a decaying was used, starting at a value of and decaying linearly to a value of once iteration was reached.
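Such a linear decay schedule can be sketched as below; since the schedule’s start value, end value, and decay horizon are elided above, the numbers in the example are illustrative only:

```python
def epsilon(step, eps_start, eps_end, decay_until):
    # Linear schedule: start at eps_start and decay linearly to
    # eps_end, staying there once `decay_until` steps are reached.
    if step >= decay_until:
        return eps_end
    frac = step / decay_until
    return eps_start + frac * (eps_end - eps_start)

print(epsilon(0, 1.0, 0.1, 10000))      # 1.0
print(epsilon(5000, 1.0, 0.1, 10000))   # 0.55
print(epsilon(20000, 1.0, 0.1, 10000))  # 0.1
```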