Learning Shared Representations for Value Functions in Multi-task Reinforcement Learning


Diana Borsa    Thore Graepel    John Shawe-Taylor
Abstract

We investigate a paradigm in multi-task reinforcement learning (MT-RL) in which an agent is placed in an environment and needs to learn to perform a series of tasks within this space. Since the environment does not change, there is potentially a lot of common ground amongst tasks, and learning to solve them individually seems extremely wasteful. In this paper, we explicitly model and learn this shared structure as it arises in the state-action value space. We show how one can jointly learn optimal value functions by modifying the popular value-iteration and policy-iteration procedures to accommodate this shared-representation assumption and to leverage the power of multi-task supervised learning. Finally, we demonstrate that the proposed model and training procedures are able to infer good value functions even in low-sample regimes. In addition to this data efficiency, our analysis shows that learning abstractions of the state space jointly across tasks leads to more robust, transferable representations, with the potential for better generalization.

Keywords: reinforcement learning, representation learning, value functions, multi-task learning, transfer

University College London, Dept. of Computer Science, CSML, London WC1E 6EA, UK


1 Introduction

Reinforcement learning (RL) has gained a lot of popularity and has seen remarkable successes in recent years, exploiting and benefiting greatly from developments in general function approximators such as neural networks (Mnih et al., 2015). At least part of this success seems to be linked to the ability of these universal function approximators to distill meaningful representations (Bengio, 2009) from high-dimensional input states. These enabled RL to scale up to more complex environments and scenarios that were previously out of reach or required a great amount of feature engineering, as shown in (Mnih et al., 2015), (Silver et al., 2016). Thus, learning a good abstraction of a given environment and the agent's role in it seems to be a key component in developing complex and optimal control mechanisms.

While a lot of progress has been made in improving learning on individual tasks, there has been considerably less work on re-using or efficiently transferring information from one task to another (Taylor & Stone, 2009b). Nevertheless, it is natural to assume that the different tasks an agent needs to learn during its life share a lot of structure and in-built redundancy, and potentially this could be leveraged to speed up learning. In this work we propose a way to address this aspect, by learning robust, transferable abstractions of the environment that generalize over a set of tasks.

Value functions are a central idea in reinforcement learning (Sutton & Barto, 1998) and have been successfully used in conjunction with function approximators to generalize over large state-action spaces. They are a concise way to readily assess the "goodness" of a state and can be learnt efficiently even in an off-policy fashion. This enables us to decouple the data gathering and the learning process, but most importantly it allows us to re-use past experiences collected under arbitrary or exploratory policies (Sutton & Barto, 1998). More recently, value functions have been shown to exhibit a very nice compositional structure with respect to the state space and goal states (Schaul et al., 2015). This is consistent with earlier studies (Sutton et al., 2011) suggesting that value functions can capture and represent knowledge beyond their current goal that can be leveraged or re-used. Similar structures have been identified in the hierarchical reinforcement learning literature (Dietterich, 2000), (Sutton et al., 1999). All of these motivated our choice of explicitly modelling the presence of this shared structure in the state-action value space.

Using a multi-task RL formulation and following the recent work in (Calandriello et al., 2014), we first outline two general ways of learning RL tasks jointly and sharing knowledge across them, by extending two of the most popular procedures for learning value functions, Fitted Q-Iteration (Ernst et al., 2005) and Fitted Policy Iteration (Antos et al., 2007), to accommodate this shared-structure assumption. Furthermore, taking advantage of the multi-task methods developed in supervised settings, we extend the work in (Calandriello et al., 2014) to account for task-specific components.

We also show empirically that these lead to an overall improvement in the policies inferred, as well as a decrease in the number of samples per task needed to achieve good performance. We explore the nature of the representation learnt and its potential transferability to new, but related tasks. We show that this learning is able to infer a compressed structure that nevertheless captures a lot of transferable knowledge, similar to option-like transition models (Sutton et al., 1999) – without us ever specifying a partition of desirable states or subgoals. Finally, we argue that this way of learning leads to more robust and refined representations, which are deemed crucial for learning and planning in complex environments.

2 Proposed Model

2.1 Background and Notation

We define a Markov Decision Process (MDP) as a tuple $M = (\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions (in this work, a finite set), $P(s' \mid s, a)$ is the transition dynamics, which provides a probability over the next state $s'$, $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a reward signal, which is assumed to be bounded ($\exists R_{\max}$ s.t. $|r(s,a)| \le R_{\max}$), and $\gamma \in [0, 1)$ is a discount factor.

Given an MDP and any policy $\pi$, we define the (state-action) value function $Q^{\pi}(s, a)$ as the discounted cumulative reward an agent is expected to collect when starting from state $s$, taking action $a$ and then acting according to policy $\pi$:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\Big[\, \sum_{k \ge 0} \gamma^{k}\, r(s_k, a_k) \,\Big|\, s_0 = s,\ a_0 = a \Big] \qquad (1)$$

The expectation is over all trajectories starting in $(s, a)$ and obtained by interacting with the environment (via the transition kernel $P$) while following behaviour policy $\pi$.

Our goal is to learn an optimal behaviour with respect to this expected cumulative reward. Thus we are looking for $\pi^{*}$ s.t.

$$Q^{\pi^{*}}(s, a) \ge Q^{\pi}(s, a), \quad \forall \pi,\ \forall (s, a) \in \mathcal{S} \times \mathcal{A}. \qquad (2)$$

We will denote this optimal value function as $Q^{*} = Q^{\pi^{*}}$. Note that finding $Q^{*}$ automatically gives us an optimal policy, obtained by acting greedily with respect to these values. In the following, we denote this greedy operation by $\pi^{*}(s) \in \arg\max_{a} Q^{*}(s, a)$.
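As a concrete illustration, extracting a greedy policy from a tabular estimate of $Q$ is a one-line operation; the sketch below (Python/NumPy, with hypothetical array names and sizes) is only meant to make it explicit.

    import numpy as np

    def greedy_policy(Q):
        """Given a |S| x |A| array of state-action values, return the greedy
        policy as one action index per state."""
        return np.argmax(Q, axis=1)

    # Hypothetical example: 100 states, 4 actions. Acting greedily w.r.t. an
    # estimate of Q* recovers an optimal policy whenever the estimate is exact.
    Q_hat = np.random.rand(100, 4)
    pi_greedy = greedy_policy(Q_hat)   # pi_greedy[s] = argmax_a Q_hat[s, a]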

2.2 Problem formulation

We consider the scenario in which an agent resides (or is placed) in an environment in which it needs to perform a series of tasks; the overall goal is to learn how to succeed at all these tasks. The environment is described by a state-action space $\mathcal{S} \times \mathcal{A}$ and a transition kernel $P$, and the tasks can be specified by different reward signals $r_t$, one for each task $t \in \{1, \dots, T\}$. This formally gives rise to $T$ MDPs $M_t = (\mathcal{S}, \mathcal{A}, P, r_t, \gamma)$ which share a lot of structure. Thus, if we can find a way to leverage this structure, we expect it to aid the learning process and lead to better generalization (Taylor & Stone, 2009a).

2.3 A shared (value function) representation

We propose to model the shared structure found in the above-defined MDPs as a shared embedding of the state-action space, on which we can build the individual optimal value functions for all considered tasks and potentially new ones.

Figure 1: Proposed Model: Enforcing a shared representation $\phi$ of the state-action space, to be used in modelling all value functions $Q_1, \dots, Q_T$ across a set of tasks.

Thus in this paper we are interested in learning this shared embedding as well as ultimately the optimal behaviour for each of the tasks considered. In the following, we will present how one can extend two of the most popular paradigms of learning value functions, Fitted Q-Iteration and Fitted Policy Iteration, to incorporate this shared structure assumption. This will come down to employing a multi-task learning procedure in the target-fitting step of Q-Iteration and in the policy evaluation step of Policy-Iteration.

3 Multi-task (Fitted) Value Iteration

In this section we outline a general framework for using approximate value iteration to infer the optimal $Q$-values (and optimal policies) for a set of tasks in a given environment, following the MT-RL setup previously introduced. The proposed algorithm is an extension of Fitted Q-Iteration (FQI) that allows for joint learning and transfer across tasks. Following the recipe of FQI, at each step in the iteration loop and for each sample in our experience set $\mathcal{D}_t$, we compute the one-step TD target based on our current estimate of the value function. Then, treating these estimates as ground truth, we obtain a regression problem from the state-action space onto the TD targets (which are really place-holders for the true value function). In the case of MT-RL, we obtain one such regression problem for each task $t$. We could, in principle, solve all these regression problems independently for each task, which would amount to applying FQI individually to each task. But our assumption is that there is shared structure between tasks, and we would like to make use of this common ground to aid the learning process and arrive at more robust abstractions of the input space. Thus we propose solving the regression problems jointly, accounting for and building upon a common representation. A detailed description of the proposed procedure is outlined in Algorithm 1.

Input: $\{\mathcal{D}_t\}_{t=1}^{T}$ - set of experiences/episodes for each task $t$
  Initialize $\{w_t^{0}\}_{t=1}^{T}$ (parameters), $k = 0$
  while not converged  do
     Compute Targets: $y_i^{t} = r_i^{t} + \gamma \max_{a'} Q_t(s_i', a'; w_t^{k})$, for all $(s_i, a_i, r_i^{t}, s_i') \in \mathcal{D}_t$ and all tasks $t$
     Multi-task Learning: $\{w_t^{k+1}\}_{t} = \mathrm{MTL}\big(\{(\phi(s_i, a_i), y_i^{t})\}_{i, t}\big)$
     $k \leftarrow k + 1$
  end while
  Return: $\{Q_t(\cdot, \cdot; w_t^{k})\}_{t=1}^{T}$
Algorithm 1 Multi-task Fitted Q-Iteration (MT-FQI)

Note that, in the spirit of generality, we do not specify a particular algorithm for the multi-task learning step (MTL in Algorithm 1). There is extensive literature on how to deal with multi-task inference and exploit shared structure between tasks in purely supervised settings, and we will look at some instantiations of this step throughout this work.
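To make the structure of Algorithm 1 concrete, here is a minimal sketch in Python/NumPy, assuming a linear parametrization $Q_t(s,a) = \langle w_t, \phi(s,a)\rangle$ (as in Section 5) and a generic `mtl_regression` callable standing in for the MTL step; the names, the fixed iteration count and the independent ridge fallback are illustrative assumptions, not the exact implementation used in the paper.

    import numpy as np

    def mt_fqi(experience, phi, n_actions, gamma=0.95, n_iters=50,
               mtl_regression=None):
        """Multi-task Fitted Q-Iteration sketch.

        experience[t] : list of (s, a, r, s_next) transitions for task t
        phi(s, a)     : d-dimensional feature vector (shared across tasks)
        mtl_regression: maps a list of per-task (X, y) regression problems to
                        a list of weight vectors; any multi-task regression
                        method (e.g. MTFL or ASO, Section 5) can be plugged in.
        """
        T = len(experience)
        d = phi(experience[0][0][0], 0).shape[0]
        W = [np.zeros(d) for _ in range(T)]           # one weight vector per task

        def q_values(w, s):                            # Q(s, .) over all actions
            return np.array([phi(s, a) @ w for a in range(n_actions)])

        for _ in range(n_iters):
            problems = []
            for t, D in enumerate(experience):
                X = np.stack([phi(s, a) for (s, a, r, s2) in D])
                # one-step TD targets based on the current estimate
                y = np.array([r + gamma * q_values(W[t], s2).max()
                              for (s, a, r, s2) in D])
                problems.append((X, y))
            if mtl_regression is not None:
                W = mtl_regression(problems)           # joint, shared-representation fit
            else:                                      # fallback: independent per-task ridge (plain FQI)
                W = [np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ y)
                     for (X, y) in problems]
        return W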

4 Multi-task (Fitted) Policy Iteration

By a similar argument to the one presented in the previous section for MT-FQI, we can extend the framework of general policy iteration to the MT-RL scenario. Policy iteration algorithms alternate between a policy evaluation step and a policy improvement step. We extend this framework to the multi-task case by defining a (current) set of policies $\{\pi_t\}_{t=1}^{T}$, one for each task, and then evolving this set of policies jointly at each iteration. An outline of the proposed procedure is given in Algorithm 2. For now, we implement the policy improvement step by acting greedily with respect to our current estimates of the value function. This step is done individually for each task.

Input: $\{\mathcal{D}_t\}_{t=1}^{T}$, set of experiences for each task
  Initialize policies $\{\pi_t^{0}\}_{t=1}^{T}$, $k = 0$
  while convergence not reached  do
     Policy Evaluation: Compute $\{\hat{Q}^{\pi_t^{k}}\}_{t} = $ MT-PE$\big(\{\pi_t^{k}\}_{t}, \{\mathcal{D}_t\}_{t}\big)$ via Algorithm 3
     Policy Improvement: $\pi_t^{k+1}(s) = \arg\max_{a} \hat{Q}^{\pi_t^{k}}(s, a)$ for each task $t$; $k \leftarrow k + 1$
  end while
  Return: $\{\hat{Q}^{\pi_t^{k}}\}_{t}$ and policies $\{\pi_t^{k}\}_{t}$
Algorithm 2 Multi-Task Policy Iteration (MT-PI)

On the other hand, we allow joint learning and sharing of knowledge in the policy evaluation step. This gives rise to a general procedure we will call Multi-task Policy Evaluation (MT-PE) – see Algorithm 3. In MT-PE, we are given a set of policies $\{\pi_t\}_{t=1}^{T}$, one for each task, and a collection of experiences $\{\mathcal{D}_t\}_{t=1}^{T}$. The aim of the algorithm is then to approximate the corresponding value functions $Q^{\pi_t}$ associated with acting out policy $\pi_t$ for task $t$. Note that, in general, this step (policy evaluation) requires on-policy data for each policy $\pi_t$ and for each task $t$. This could be quite demanding and inefficient data-wise as the number of tasks grows, not to mention that this is just the inner loop of another iterative algorithm (MT-PI). In this work, we opt for an implementation of the policy evaluation step that circumvents this problem. Making use of the Bellman expectation equation, we can compute regression targets for approximating $Q^{\pi_t}$ by only using previously collected experience of the form $(s, a, r_t, s')$, as we did in the case of Fitted Q-Iteration:

$$y^{t} = r_t(s, a) + \gamma\, \hat{Q}^{\pi_t}\big(s', \pi_t(s')\big) \qquad (3)$$

Therefore, we have now reduced the original problem to a set of regression problems that can be solved jointly, under a shared input-space representation. This is very similar to the multi-task learning step employed in MT-FQI, but now the shared structure is learnt to model the input set of policies $\{\pi_t\}_t$, rather than the optimal ones. Nevertheless, by constantly improving the set of policies that are presented to the MT-PE step, we should eventually converge to the optimal policies, and at this point the policy evaluation step should be able to recover the shared structure amongst optimal value functions.

Input: $\{\mathcal{D}_t\}_{t=1}^{T}$, set of experiences for each task; $\{\pi_t\}_{t=1}^{T}$, a set of policies, one for each task, that need to be evaluated
  Initialize $\{w_t^{0}\}_{t=1}^{T}$, $k = 0$
  while convergence not reached  do
     Compute Targets: $y_i^{t} = r_i^{t} + \gamma\, \hat{Q}^{\pi_t}(s_i', \pi_t(s_i'); w_t^{k})$, for all $(s_i, a_i, r_i^{t}, s_i') \in \mathcal{D}_t$ and all tasks $t$
     Multi-task Learning: $\{w_t^{k+1}\}_{t} = \mathrm{MTL}\big(\{(\phi(s_i, a_i), y_i^{t})\}_{i, t}\big)$
     $k \leftarrow k + 1$
  end while
  Return: $\{\hat{Q}^{\pi_t}(\cdot, \cdot; w_t^{k})\}_{t=1}^{T}$
Algorithm 3 Multi-Task Policy Evaluation (MT-PE)
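The only structural difference from MT-FQI is the target computation, which bootstraps with the action chosen by the policy being evaluated rather than the maximizing action. A minimal sketch of that step for a single task (Python/NumPy, reusing the hypothetical `phi` and weight conventions of the MT-FQI sketch above):

    import numpy as np

    def policy_evaluation_targets(D, w, pi, phi, gamma=0.95):
        """One-step regression targets for evaluating policy pi on one task (Eq. 3).

        D   : list of off-policy transitions (s, a, r, s_next)
        w   : current weights, Q(s, a) ~ phi(s, a) . w
        pi  : callable, pi(s) -> action chosen by the policy being evaluated
        """
        X = np.stack([phi(s, a) for (s, a, r, s2) in D])
        y = np.array([r + gamma * (phi(s2, pi(s2)) @ w) for (s, a, r, s2) in D])
        return X, y   # the per-task (X, y) pairs are then fed to the MTL step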

5 Multi-task and Representation Learning

In this section we look at a couple of methods we can plug into the above algorithms in the multi-task learning (MTL) step. For this we assume a linear parametrization of the state-action value space, i.e. we assume a feature map $\phi$ s.t. all value functions of interest can be well approximated by a linear combination of this set of features. In the case of fitted value iteration we want this set of features to fit well the intermediate targets $y^{t}$, but ultimately we are interested in a set of features that fits well the optimal value functions, and we will see that this turns out to be a very small subspace of the original feature space.

$$Q_t(s, a) \approx \langle w_t, \phi(s, a) \rangle = \sum_{k=1}^{d} w_{t,k}\, \phi_k(s, a), \quad \forall t \in \{1, \dots, T\} \qquad (4)$$

In the case of policy iteration, at each evaluation step we are interested in having a feature space that approximates well the value functions corresponding to our current policies $\{\pi_t\}_t$. Thus we are looking for $w_t$ s.t. $Q^{\pi_t}(s, a) \approx \langle w_t, \phi(s, a) \rangle$. As policies improve, we end up fitting optimal or near-optimal value functions. Certainly, if the regression step can be done perfectly (no approximation error), policy iteration will continue to improve the policies and in the limit will converge to the optimal value functions. Thus, the representation that comes out of this learning procedure should be similar to the ones learned by value-iteration procedures. Consequently, what we ultimately want in terms of representation is a (low-dimensional) feature space that spans the optimal value functions of interest.
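Under this linear parametrization, each target-fitting step is an ordinary regression problem per task. The sketch below (Python/NumPy, ridge-regularized for numerical stability; the names and regularization constant are assumptions) shows the single-task fit that the multi-task methods of the next subsections replace with a joint one:

    import numpy as np

    def fit_linear_q(X, y, reg=1e-3):
        """Fit w s.t. Q(s, a) ~ phi(s, a) . w by ridge regression, where X stacks
        the feature vectors phi(s_i, a_i) and y the corresponding targets."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)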

5.1 Multi-task feature learning

In terms of planning, the joint problem we are trying to solve can be formalized as inferring the weight matrix $W = [w_1, \dots, w_T]$ that minimizes the joint regression loss plus a regularizer $\Omega(W)$ on the weight vectors that encourages feature sharing. At the same time, we wish to learn a more compact abstraction of the state-action space, that will be shared among tasks. To make this a bit more formal, let $\phi = (\phi_1, \dots, \phi_d)$; then our assumption can be expressed as: there exists a small set of features $\{\langle u_k, \phi \rangle\}_{k=1}^{s}$, with $s \ll d$, such that $Q_t(s, a) \approx \sum_{k=1}^{s} \alpha_{t,k} \langle u_k, \phi(s, a) \rangle$, where the $u_k$-s form a basis for the relevant low-dimensional subspace.

Thus, across all tasks $t$, we are trying to solve jointly the following optimization problem:

$$\min_{U, A}\ \sum_{t=1}^{T} \sum_{i=1}^{n_t} \big(y_i^{t} - \langle \alpha_t, U^{\top} \phi(s_i, a_i) \rangle\big)^2 + \lambda\, \|A\|_{2,1}^{2}, \quad \text{s.t. } U^{\top} U = I, \qquad (5)$$

where $A = [\alpha_1, \dots, \alpha_T]$ collects the per-task coefficients and the $(2,1)$-norm encourages row sparsity, i.e. feature sharing.

In (Argyriou et al., 2008) this was shown to be equivalent to Eq. 6 and can be solved efficiently by an alternating minimization procedure:

$$\min_{W, D}\ \sum_{t=1}^{T} \sum_{i=1}^{n_t} \big(y_i^{t} - \langle w_t, \phi(s_i, a_i) \rangle\big)^2 + \lambda \sum_{t=1}^{T} \langle w_t, D^{+} w_t \rangle, \quad \text{s.t. } D \succeq 0,\ \mathrm{tr}(D) \le 1 \qquad (6)$$

where $W = U A$, and $A$ is assumed to be row-sparse, so that only a small number of the learnt features $\langle u_k, \phi \rangle$ are active across tasks; in the alternating procedure, the minimizing $D$ for a fixed $W$ can be taken in closed form as $D = (W W^{\top})^{1/2} / \mathrm{tr}\big((W W^{\top})^{1/2}\big)$.
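A compact sketch of this alternating minimization is given below (Python/NumPy). For fixed $D$, each task reduces to a generalized ridge regression; for fixed $W$, $D$ is updated with the closed form above. The epsilon smoothing, iteration count and regularization strength are implementation assumptions, not tuned values from the paper.

    import numpy as np
    from scipy.linalg import sqrtm

    def mtfl_regression(problems, lam=1.0, n_iters=20, eps=1e-6):
        """Multi-task feature learning via alternating minimization of Eq. 6.

        problems : list of per-task (X_t, y_t) regression problems sharing the
                   same d-dimensional feature space.
        Returns the d x T weight matrix W, whose columns tend to lie in a
        common low-dimensional subspace.
        """
        d = problems[0][0].shape[1]
        T = len(problems)
        D = np.eye(d) / d                              # feasible start: tr(D) = 1
        W = np.zeros((d, T))
        for _ in range(n_iters):
            D_inv = np.linalg.pinv(D + eps * np.eye(d))
            # W-step: per-task generalized ridge regression with penalty w^T D^+ w
            for t, (X, y) in enumerate(problems):
                W[:, t] = np.linalg.solve(X.T @ X + lam * D_inv, X.T @ y)
            # D-step: closed-form minimizer (W W^T)^{1/2} / tr((W W^T)^{1/2})
            root = np.real(sqrtm(W @ W.T + eps * np.eye(d)))
            D = root / np.trace(root)
        return W

The shared features $u_k$ can then be read off as the leading eigenvectors of $D$ (equivalently, of $W W^{\top}$).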

5.2 Allowing for task specificity

The above procedure can be used to construct very informative shared features – as shown in (Calandriello et al., 2014) and in our experimental section. However, in a lot of scenarios tasks can benefit from having a small and sparse set of features that represent the particularities of each individual task, on top of a low-dimensional shared subspace. This is definitely the case in many practical applications and has been observed in purely supervised settings as well – it is simply too restrictive to constrain all tasks to use a single shared structure. Thus researchers have come up with various ways of incorporating task-specific components – see (Zhou et al., 2011), (Jalali et al., 2010), (Chen et al., 2012) and references therein – and have shown that modelling these explicitly can improve both the learning (accuracy and speed) and the interpretability of the resulting representations. In this work, we choose one of these formulations, introduced in (Ando & Zhang, 2005), where we learn a low-dimensional shared representation as before, as well as a task-specific weight vector $u_t$, on which we place a strong sparsity constraint so that common features are still identified and shared.

$$Q_t(s, a) \approx \langle u_t, \phi(s, a) \rangle + \langle v_t, \Theta\, \phi(s, a) \rangle, \quad \text{with } u_t \text{ sparse}, \qquad (7)$$

where $\Theta$ projects the original features onto the shared low-dimensional subspace and $v_t$ are the task weights on these shared features.

Note that if $\Theta$ is a zero matrix, we treat the tasks as completely independent; on the other hand, if $u_t$ is zero for all tasks, we recover the previous formulation. Furthermore, we can place an orthogonality condition on the set of shared features inferred by enforcing $\Theta \Theta^{\top} = I$. The resulting optimization problem has the form:

$$\min_{\Theta, \{u_t, v_t\}_t}\ \sum_{t=1}^{T} \Big[ \sum_{i=1}^{n_t} \big(y_i^{t} - \langle u_t + \Theta^{\top} v_t,\ \phi(s_i, a_i) \rangle \big)^2 + \lambda \|u_t\|_1 \Big], \quad \text{s.t. } \Theta \Theta^{\top} = I \qquad (8)$$

and can be solved by Alternating Structure Optimization (ASO) – see (Ando & Zhang, 2005).
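A rough sketch of such an alternating scheme is given below (Python/NumPy). It alternates per-task updates of $(u_t, v_t)$ – ridge regression on the projected features for $v_t$ and a single proximal-gradient (soft-thresholding) pass for the sparse $u_t$ – with an SVD-based update of $\Theta$. This illustrates the structure of ASO-style updates under the assumptions above; it is not the exact solver of (Ando & Zhang, 2005), and all names and constants are placeholders.

    import numpy as np

    def soft_threshold(x, thr):
        """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

    def aso_regression(problems, h=5, lam=0.1, reg=1e-3, n_iters=20):
        """ASO-style alternating sketch for Eq. 8.

        problems : list of per-task (X_t, y_t) regression problems (shared d).
        h        : dimension of the shared subspace (number of rows of Theta);
                   assumes the number of tasks is at least h.
        Returns Theta (h x d) and per-task weights U (sparse) and V.
        """
        d = problems[0][0].shape[1]
        T = len(problems)
        Theta = np.eye(d)[:h]                      # arbitrary orthonormal start
        U = np.zeros((d, T))                       # sparse task-specific parts
        V = np.zeros((h, T))                       # weights on shared features
        for _ in range(n_iters):
            for t, (X, y) in enumerate(problems):
                # v-step: ridge regression on the projected features Theta phi
                Z = X @ Theta.T
                V[:, t] = np.linalg.solve(Z.T @ Z + reg * np.eye(h),
                                          Z.T @ (y - X @ U[:, t]))
                # u-step: one proximal-gradient pass on the L1-penalized residual
                resid = y - X @ (Theta.T @ V[:, t]) - X @ U[:, t]
                step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)
                U[:, t] = soft_threshold(U[:, t] + step * (X.T @ resid), step * lam)
            # Theta-step: top-h left singular vectors of the combined weight matrix
            W = Theta.T @ V + U
            left, _, _ = np.linalg.svd(W, full_matrices=False)
            Theta = left[:, :h].T
        return Theta, U, V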

6 Experiments

We assess the performance and behaviour of our proposed model and learning procedures in the four-room navigation task (Sutton et al., 1999). The state space is described by all valid positions an agent might take – any position in the grid except the walls – and the agent has access to four actions, one per cardinal direction. We consider deterministic dynamics in those directions and all walls are considered elastic – bumping into a wall has no effect on the agent's state. Tasks are specified as target locations in the environment the agent needs to navigate to. These are sampled at random from the valid states in the environment. We do not specify a starting state – agents need to learn to navigate to the selected goal position from any part of the environment. When the agent transitions into the goal state, it collects a positive reward. No other reward signal is provided.

Since all of the proposed methods can be run off-policy, thus decoupling the experience gathering and the learning, we sample a modest amount of experience up front for each of the considered tasks. This can be done, in principle, by any behaviour policy, but in all our data-gathering we employ uniformly random exploration.
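For concreteness, a minimal sketch of this data-gathering phase on a grid world is given below (Python/NumPy). The transition function, the use of independently sampled transitions rather than full episodes, and the unit goal reward are illustrative assumptions rather than the exact experimental configuration.

    import numpy as np

    def collect_experience(step_fn, valid_states, goal, n_samples,
                           n_actions=4, seed=0):
        """Collect off-policy transitions for one task with a uniformly random
        behaviour policy.

        step_fn(s, a) -> s_next implements the (task-independent) deterministic
        dynamics with elastic walls; the reward depends only on this task's goal.
        """
        rng = np.random.default_rng(seed)
        D = []
        for _ in range(n_samples):
            s = valid_states[rng.integers(len(valid_states))]   # random restart
            a = int(rng.integers(n_actions))                    # uniform exploration
            s_next = step_fn(s, a)
            r = 1.0 if s_next == goal else 0.0                  # reward only at the goal
            D.append((s, a, r, s_next))
        return D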

Once data is gathered or provided, we can proceed with the learning. All experiments were conducted under a restrictive sample budget per task. Firstly, we would like to compare the proposed joint-representation learning with its single-task counterparts, FQI and FPI, to see what effects, if any, enforcing and learning a shared representation has. We assess the quality of the inferred greedy policies by the amount of reward they are able to produce under random starts – an empirical proxy for the true value of the policy at those starting states.

Figure 2: Quality of the inferred greedy policies when trained individually and jointly, with a fixed sample budget per task. We show the average cumulative reward achieved by the agent from random initial positions – the values are normalized with respect to the optimal cumulative reward achievable from the same starting positions. We can see that in most cases the single-task learning struggles under this sample regime, whereas the joint-learning methods are able to discover much better policies and even recover the optimal ones.

Depending on the selection of starting states, the difficulty of the tasks, and thus the amount of reward achievable, may vary. To ease interpretation, we report the normalized value of the above estimate with respect to the optimal value function at the starting states. Example results for the first training tasks are displayed in Figure 2. These were obtained by training on randomly sampled tasks with a fixed number of samples of experience per task. We can see that the joint-learning procedures manage to learn good policies, quite close to the optimal ones, that substantially outperform the single-task learning. Please note that our proposed extension allowing task-specific features in most cases improves performance, even when considering a very small set of common features – which also gives us much faster convergence in the shared subspace. Indeed, this behaviour seems to be consistent at lower sample sizes, although it is worth mentioning that divergence does occur more often in these extreme conditions (very few samples), and regularization parameters that might ensure convergence (Calandriello et al., 2014) provide a solution that is often worse than even the single-task one. Outside those extreme cases, policy- and value-iteration methods perform very similarly and, as we can see from Figures 3-4, they tend to converge to the same solution.

Figure 3: Convergence to the optimal value functions $Q^{*}_t$, as assessed by the Euclidean distance $\|\hat{Q}_t - Q^{*}_t\|_2$, for the different sample complexities: 500 samples/task (top), 750 samples/task (middle), 1000 samples/task (bottom), for the different methods proposed. We report an average over tasks, and the shaded area corresponds to the variance below and above the mean. Note that at the higher sample budgets, all joint-representation-learning algorithms converge to the true optimal value functions for all tasks. Also note that, between the joint-learning methods, the second method, allowing for task specificity (red lines – AFPI-ASO, AFQI-ASO), yields the better approximations.
Figure 4: To evaluate the quality of the policies learnt, we produce an empirical estimate of their value for different sample complexities: 500 samples/task (top), 750 samples/task (middle), 1000 samples/task (bottom), for the different methods proposed. As seen from the convergence plot above, at 1000 samples the multi-task methods reliably recover the optimal value functions and implicitly the optimal policies, but so will the single-task methods on a lot of these tasks. At the same time, for half that budget (500 samples), the multi-task learning is already able to recover the optimal policies, while the single-task methods converge to a suboptimal value function (blue lines on the plot).

To get a better idea of the average task performance we obtain, and how that changes during training, we can look at the average distance between our estimates of the value functions at iteration $k$ and the optimal ones, $Q^{*}_t$. For this small environment, these can be computed analytically. Results for the 500, 750 and 1000 samples/task budgets are displayed in Figure 3. We observe quite a big difference between the single-task and multi-task procedures in terms of recovering the true optimal value functions. Convergence to a better MSE happens much faster and we even get an asymptotically superior solution. Nevertheless, closeness to the optimal value functions in Euclidean space may not necessarily imply the same relation in policy space. A plot of the quality of the policies as a function of value/policy iterations is available in Figure 4. Here, we report the normalized average regret. We can see that the policies in general converge much faster than the value functions, when comparing with the Q-value convergence in Figure 3. Please also note that the multi-task Fitted Policy Iteration procedures inherit the same speedy convergence present in their single-task counterpart.

6.1 Learnt shared representations

Probably the most interesting phenomenon encountered in learning these shared representations is the nature of the low-dimensional representations inferred. We visualize the inferred set of shared features (Figure 5) and their respective weights in the value functions (Figure 6).

Figure 5: [To be read row-wise] The three most relevant shared features – corresponding to the top three eigenvalues – learnt via AFPI-MTFL on tasks randomly sampled in the four rooms. Please note that these already enable navigation between any pair of rooms.
Figure 6: Weighting coefficients of the above three most prominent shared features, for each task. We can see from these values that the first feature clearly dominates in all tasks. Bottom: Rescaled version of the coefficients such that we can see the activation of the other two prominent features. Blue corresponds to negative activations and red to positive ones. Given the nature of the features, one can readily read out, just by looking at the sign of the weights, in which room that task's goal state lies. For instance, if we look at the second task: negative activation for both the north side and the west side of the environment – the goal is indeed located in the top-left room.

These were produced via MT-FQI with ASO, with the constraint that the shared subspace has a small, fixed maximal dimension. Even this seems to be too permissive, as we actually obtain strong activations only for the top 3 features inferred – presented in Figure 5. Thus the learnt representation is very low dimensional, but at the same time expressive enough to effectively approximate optimal value functions.

6.2 Transferring knowledge to new tasks

The learnt representations resemble option-like features (Sutton et al., 1999) that essentially inform the agent, across tasks, how to navigate efficiently between rooms and negotiate the narrow hallways. These are indeed easily transferable 'skills' that can be used in learning a new task. We test this hypothesis by augmenting the representation for the new task with this shared subspace. We investigate the benefits of having learnt a shared subspace over a set of training tasks in terms of transferring that knowledge when optimizing for a new task. We augment the feature space for the new task with the learnt shared features and then assess the effect this modification has on learning the new task. In Figure 7, we present an empirical evaluation of the cumulative regret the agent incurs under the inferred (greedy) policy, when trained on the original representation versus the augmented representation, after seeing a varying amount of samples. We can see that the augmented representation is able to produce good performance under smaller sample sizes. In general, learning based on the transferred representation is able to produce a policy that is equivalent to the ones we could learn without transfer using twice as much data. This behaviour is consistent until convergence.
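Concretely, the augmentation amounts to concatenating the original features with their projection onto the learnt shared subspace before running single-task FQI on the new task. A minimal sketch (Python/NumPy; `phi` and `Theta` follow the hypothetical conventions of the earlier sketches):

    import numpy as np

    def make_augmented_features(phi, Theta):
        """Return a feature map [phi(s, a); Theta phi(s, a)] that appends the
        learnt shared features to the original representation."""
        def phi_aug(s, a):
            f = phi(s, a)
            return np.concatenate([f, Theta @ f])
        return phi_aug

    # Usage (hypothetical): run single-task FQI on the new task with phi_aug
    # in place of phi, re-using the Theta learnt on the training tasks.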

Figure 7: Average performance on a set of 10 new tasks with and without transfer of shared features, as assessed by the normalized average cumulative reward collected over 50 random starts in the environment. The value functions for the new tasks were produced by (single-task) FQI on the original feature space (no transfer) and on the augmented feature space (transfer), respectively.

6.3 Connection to Options

As noted previously, the learnt shared representation seems to account for the general topology and dynamics of the environment in the value functions. The features nicely partition the environment into relevant regions, facilitating global navigation to a local neighbourhood of the goal. Some of these features are characteristic of the options (Sutton et al., 1999), skills (Konidaris & Barto, 2007) and macro-actions (Dietterich, 2000) literature, and have the potential to drastically improve the efficiency and scalability of RL methods (Barto & Mahadevan, 2003), (Hengst, 2002). In the following we investigate this connection further.

Following the formulation in (Sutton et al., 1999), an option $o = (\mathcal{I}, \pi_o, \beta)$ is a generalization of primitive actions to a temporally extended course of action: $\mathcal{I} \subseteq \mathcal{S}$ is the initiation set from which the option is available, $\pi_o$ is the policy we are going to follow once the option is triggered, and $\beta(s)$ is the probability of termination in state $s$. In this case, the value function of task $t$ for executing option $o$ takes the form:

$$Q_t(s, o) = \mathbb{E}\Big[ r_t(s_0, a_0) + \gamma\, r_t(s_1, a_1) + \dots + \gamma^{k-1} r_t(s_{k-1}, a_{k-1}) + \gamma^{k}\, V_t(s_k) \ \Big|\ s_0 = s,\ \pi_o,\ \beta \Big],$$

where $s_k$ is the termination state of option $o$. We denote $P_o(s' \mid s) = \sum_{k \ge 1} \gamma^{k}\, p_o(s', k \mid s)$, where $p_o(s', k \mid s)$ is the probability that option $o$ will terminate in state $s'$ after exactly $k$ steps. Note that this term accounts for the transition dynamics, the policy of the option and its termination criterion, all of which, for us, are task-invariant. Moreover, note that for us the reward collected while executing the option is generally zero, unless the option happens to hit the goal. Thus the above equation simplifies to:

$$Q_t(s, o) \approx \sum_{s' \in \mathcal{T}_o} P_o(s' \mid s)\, V_t(s'),$$

where $\mathcal{T}_o$ denotes the termination set of the option.

This is a linear combination between the option transition models $P_o(\cdot \mid s)$ evaluated on the termination set (the subgoals of the option) – which are independent of the task and of the policy employed afterwards; they depend only on the option – weighted by the value functions of the termination states for each of the tasks – which incorporate the dependency on the task and on the individual policy employed after the option has terminated. This is very similar to the parametrization we assumed in Eq. 4. It suggests that the learnt representation is able to capture and represent efficiently some option-like transition models, without specifying any subgoals, policies or initiation states. We hypothesize that the learnt shared space is actually a compressed basis for these option-transition models. In order to test this hypothesis, we consider an intuitive set of options (like navigation to a particular room) and test whether this learnt basis can span $Q_t(\cdot, o)$ for such an option $o$ and can successfully represent an option policy.

We define an option to be navigating to a specific room, say the north-west (NW) room. The initiation set is the set of states outside the room and the termination set is any state in the desired room. We can also define an MDP that maintains the same transition dynamics, state and action space, but where the reward signal is now zero outside the target room and a constant positive reward in any of the desired termination states (the NW room). Note that the value function corresponding to this newly defined semi-MDP (Sutton et al., 1999) recovers the option value above (this is actually true only under the mild assumption that, once inside, the agent will not leave the room, which is where all the reward is). In this semi-MDP we run FQI and indeed see that we are able to construct a value function, based solely on the learnt low-dimensional shared feature space, that successfully completes the specified task. Results for all such navigation options are available in Figure 8.
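Setting up this check only requires redefining the reward of the grid world and running single-task FQI restricted to the shared features. A short sketch, under the same placeholder conventions as the earlier snippets:

    def make_room_option_task(step_fn, target_room_states, reward=1.0):
        """Surrogate MDP for a 'navigate to this room' option: same dynamics,
        reward only inside the target room."""
        target = set(target_room_states)
        def step_and_reward(s, a):
            s_next = step_fn(s, a)
            r = reward if s_next in target else 0.0
            return s_next, r
        return step_and_reward

    # Transitions from this surrogate MDP are fed to single-task FQI using only
    # the shared features Theta phi(s, a), testing whether the learnt subspace
    # can represent the option's value function and its greedy policy.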

Figure 8: Learned greedy policies (as indicated by the arrows) and value functions enabling navigation to any of the four rooms, based only on the shared feature subspace discovered in the multi-task value-function learning over goals randomly sampled in the environment. The value functions were learnt using (single-task) FQI on top of the shared features and required 200 samples to recover option-like policies that enable the agent to reach the desired room.

Please note that the above-defined options are quite extended ones. Simpler ones would include making your way out of a particular room – these are along the lines of the options defined in (Sutton et al., 1999) and (Stolle & Precup, 2002) – and these can be easily recovered as well. In fact, for these simpler options we require very few samples to obtain the desired behaviour (10-30 samples), although they might not be optimal – please consult the Supplementary Material for details. The fact that we are able to express a whole variety of such intuitively defined options – far more than the dimensionality of the common subspace we are building on – is a clear indication of the expressiveness of this shared representation and its potential transferability in aiding the learning of new tasks within the same environment.

7 Related work

There is a good collection of methods that tackle various aspects of multi-task reinforcement learning (Lazaric, 2012), (Taylor & Stone, 2009a). As with our approach, these methods try to learn jointly either value functions or policies over a set of tasks (Lazaric & Ghavamzadeh, 2010), (Dimitrakakis & Rothkopf, 2011), but under different structure and environment assumptions. A more recent study, (Konidaris et al., 2012), also employs the idea of a shared feature space, but both the learning procedure and the proposed way of transferring this tuned knowledge are very different from ours. The main novel idea this work introduces is explicitly modelling a shared abstraction of the state-action space that can be refined throughout the learning process while optimizing for the value functions. The ability to change the representation throughout the learning process to model the improving set of policies is crucial. This is the only way option-like features could emerge – these already incorporate both the transition model and good policies that generalize over tasks, as shown in the previous section. One of the methods investigated in (Calandriello et al., 2014), a study of sparsity in multi-task RL, is very closely related to our learning procedure, and this work can be seen as a generalization of that method, although the focus and model assumptions are quite different. Perhaps the most relevant prior work that shares our vision and some of our modelling assumptions is the approach in (Schaul et al., 2015), which models a shared state representation across goals and assumes a linear factorization between this state embedding and a task/goal embedding.

8 Conclusion and Future work

In this work, we investigated the problem of representation learning in multi-task/multi-goal reinforcement learning. We introduced the multi-task RL paradigm and showed how two of the most popular classes of planning algorithms, fitted Q-Iteration and approximate Policy Iteration, can be extended to learn from multiple tasks jointly. Focusing on linear parametrizations of the $Q$-function, we showed at least two ways in which one can harness the power of well-established multi-task learning and transfer algorithms developed in supervised settings and apply them to inferring a joint structure over optimal value functions, and implicitly over policies. As argued before and shown in these preliminary experiments, RL can benefit a lot from a joint treatment of goals and from exploiting commonality between tasks. This ought to lead to more efficient learning and better generalization. Although these are very encouraging results, this paradigm does need more investigation to assess convergence behaviour, scalability to more complex tasks, and the use of other multi-task or representation learning procedures, and we hope this work will serve as a starting point.

References

  • Ando & Zhang (2005) Ando, Rie Kubota and Zhang, Tong. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853, 2005.
  • Antos et al. (2007) Antos, András, Szepesvári, Csaba, and Munos, Rémi. Value-iteration based fitted policy iteration: learning with a single trajectory. In Approximate Dynamic Programming and Reinforcement Learning, 2007. ADPRL 2007. IEEE International Symposium on, pp. 330–337. IEEE, 2007.
  • Argyriou et al. (2008) Argyriou, Andreas, Evgeniou, Theodoros, and Pontil, Massimiliano. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
  • Barto & Mahadevan (2003) Barto, Andrew G and Mahadevan, Sridhar. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379, 2003.
  • Bengio (2009) Bengio, Yoshua. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
  • Calandriello et al. (2014) Calandriello, Daniele, Lazaric, Alessandro, and Restelli, Marcello. Sparse multi-task reinforcement learning. In Advances in Neural Information Processing Systems, pp. 819–827, 2014.
  • Chen et al. (2012) Chen, Jianhui, Liu, Ji, and Ye, Jieping. Learning incoherent sparse and low-rank patterns from multiple tasks. ACM Transactions on Knowledge Discovery from Data (TKDD), 5(4):22, 2012.
  • Dietterich (2000) Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research (JAIR), 13:227–303, 2000.
  • Dimitrakakis & Rothkopf (2011) Dimitrakakis, Christos and Rothkopf, Constantin A. Bayesian multitask inverse reinforcement learning. In Recent Advances in Reinforcement Learning, pp. 273–284. Springer, 2011.
  • Ernst et al. (2005) Ernst, Damien, Geurts, Pierre, and Wehenkel, Louis. Tree-based batch mode reinforcement learning. In Journal of Machine Learning Research, pp. 503–556, 2005.
  • Hengst (2002) Hengst, Bernhard. Discovering hierarchy in reinforcement learning with HEXQ. In ICML, volume 2, pp. 243–250, 2002.
  • Jalali et al. (2010) Jalali, Ali, Sanghavi, Sujay, Ruan, Chao, and Ravikumar, Pradeep K. A dirty model for multi-task learning. In Advances in Neural Information Processing Systems, pp. 964–972, 2010.
  • Konidaris & Barto (2007) Konidaris, George and Barto, Andrew G. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pp. 895–900, 2007.
  • Konidaris et al. (2012) Konidaris, George, Scheidwasser, Ilya, and Barto, Andrew G. Transfer in reinforcement learning via shared features. The Journal of Machine Learning Research, 13(1):1333–1371, 2012.
  • Lazaric (2012) Lazaric, Alessandro. Transfer in reinforcement learning: a framework and a survey. In Reinforcement Learning, pp. 143–173. Springer, 2012.
  • Lazaric & Ghavamzadeh (2010) Lazaric, Alessandro and Ghavamzadeh, Mohammad. Bayesian multi-task reinforcement learning. In ICML-27th International Conference on Machine Learning, pp. 599–606. Omnipress, 2010.
  • Mnih et al. (2015) Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
  • Schaul et al. (2015) Schaul, Tom, Horgan, Daniel, Gregor, Karol, and Silver, David. Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1312–1320, 2015.
  • Silver et al. (2016) Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, van den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Stolle & Precup (2002) Stolle, Martin and Precup, Doina. Learning options in reinforcement learning. In SARA, pp. 212–223. Springer, 2002.
  • Sutton & Barto (1998) Sutton, Richard S and Barto, Andrew G. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
  • Sutton et al. (1999) Sutton, Richard S, Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1):181–211, 1999.
  • Sutton et al. (2011) Sutton, Richard S, Modayil, Joseph, Delp, Michael, Degris, Thomas, Pilarski, Patrick M, White, Adam, and Precup, Doina. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761–768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
  • Taylor & Stone (2009a) Taylor, Matthew E and Stone, Peter. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009a.
  • Taylor & Stone (2009b) Taylor, Matthew E and Stone, Peter. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009b.
  • Zhou et al. (2011) Zhou, Jiayu, Chen, Jianhui, and Ye, Jieping. Clustered multi-task learning via alternating structure optimization. In Advances in neural information processing systems, pp. 702–710, 2011.
Figure 9: Learned greedy policies (as indicated by the arrows) and value functions (colouring indicates the value) enabling navigation to any of the four rooms, based only on the shared feature subspace discovered in the multi-task value-function learning over goals randomly sampled in the environment. The value functions were learnt using (single-task) FQI on top of the shared features; we show the results when using increasing numbers of samples from the option-defined MDP.