Experience Replay Using Transition Sequences

Thommen George Karimpanal, Roland Bouffanais thommen_george@mymail.sutd.edu.sg, bouffanais@sutd.edu.sg Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372
Abstract

Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent’s experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.

keywords:
Experience Replay, Q-learning, Off-Policy, Multi-task Reinforcement Learning, Probabilistic Policy Reuse

1 Introduction

Real-world artificial agents ideally need to be able to learn as much as possible from their interactions with the environment. This is especially true for mobile robots operating within the reinforcement learning (RL) framework, where the cost of acquiring information from the environment through exploration generally exceeds the computational cost of learning  (Wang et al., 2016; Adam et al., 2012; Schaul et al., 2016).

Experience replay (Lin, 1992) is a technique that reuses information gathered from past experiences to improve the efficiency of learning. In order to replay stored experiences using this approach, an off-policy (Sutton and Barto, 2011; Geist et al., 2014) setting is a prerequisite. In off-policy learning, the policy that dictates the agent's control actions is referred to as the behavior policy. Other policies corresponding to the value/action-value functions of different tasks that the agent aims to learn are referred to as target policies. Off-policy algorithms utilize the agent's behavior policy to interact with the environment, while simultaneously updating the value functions associated with the target policies. These algorithms can hence be used to parallelize learning and thus gather as much knowledge as possible from real experiences (Sutton et al., 2011; White et al., 2012; Modayil et al., 2014). However, when the behavior and target policies differ considerably from each other, the actions executed by the behavior policy may only seldom correspond to those recommended by the target policy. This could lead to poor estimates of the corresponding value function. Such cases could arise in multi-task scenarios where multiple tasks are learned in an off-policy manner. More generally, in environments where desirable experiences are rare occurrences, experience replay can be employed to improve the estimates by storing transitions (states, actions and rewards) and replaying them from time to time.
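As a concrete illustration of this off-policy setting, the following minimal tabular sketch updates the action-value tables of a primary and a secondary task from the same stream of transitions generated by a single behavior policy. It is not the authors' implementation; the environment interface, the reward signals and all names are assumptions made purely for illustration.

import random
from collections import defaultdict

ACTIONS = list(range(4))
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def q_update(q, s, a, r, s_next):
    # Standard off-policy Q-learning backup for one task.
    target = r + GAMMA * max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (target - q[(s, a)])

def run_episode(env, q_primary, q_secondary):
    # A single epsilon-greedy behavior policy (greedy with respect to the
    # primary task) generates the experience; the value functions of both
    # tasks are updated from it in parallel.
    s = env.reset()
    done = False
    while not done:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: q_primary[(s, b)])
        # Assumed interface: the environment returns both tasks' rewards.
        s_next, r_primary, r_secondary, done = env.step(a)
        q_update(q_primary, s, a, r_primary, s_next)
        q_update(q_secondary, s, a, r_secondary, s_next)
        s = s_next

q_primary, q_secondary = defaultdict(float), defaultdict(float)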

Although most experience replay approaches store and reuse individual transitions, replaying sequences of transitions could offer certain advantages. For instance, if a value function update following a particular transition results in a relatively large change in the value of the corresponding state or state-action pair, this change will have a considerable influence on the bootstrapping targets of states or state-action pairs that led to this transition. Hence, the effects of this change should ideally be propagated to these states or state-action pairs. If sequences of transitions are replayed instead of individual transitions, this propagation can be achieved in a straightforward manner. Our approach aims to improve the efficiency of learning by replaying transition sequences in this manner. The sequences are selected on the basis of the magnitudes of the temporal difference (TD) errors (Sutton and Barto, 2011) associated with them. We hypothesize that selecting sequences that contain transitions associated with higher magnitudes of TD errors allows considerable learning progress to take place. This is enabled by the propagation of the effects of these errors to the values associated with other states or state-action pairs in the transition sequence.

Replaying a larger variety of such sequences would result in a more efficient propagation of the mentioned effects to other regions in the state/state-action space. Hence, in order to aid the propagation in this manner, other sequences that could have occurred are artificially constructed by comparing the state trajectories of previously observed sequences. These virtual transition sequences are appended to the replay memory, and they help bring about learning progress in other regions of the state/state-action space when replayed.

Figure 1: Structure of the proposed algorithm in contrast to the traditional off-policy structure. Q and r denote the action-value function and the reward, respectively.

The generated transition sequences are virtual in the sense that they may have never occurred in reality, but are constructed from sequences that have actually occurred in the past. The additional replay updates corresponding to the mentioned transition sequences supplement the regular off-policy value function updates that follow the real-world execution of actions, thereby making the most out of the agent’s interactions with the environment.

2 Related Work

The problem of learning from limited experience is not new in the field of RL (Thrun, 1992a; Thomas and Brunskill, 2016). Generally, learning speed and sample efficiency are critical factors that determine the feasibility of deploying learning algorithms in the real world. Particularly for robotics applications, these factors are even more important, as exploration of the environment is typically time and energy expensive (Bakker et al., 2006; Kober et al., 2013). It is thus important for a learning agent to be able to gather as much relevant knowledge as possible from whatever exploratory actions occur.

Off-policy algorithms are well suited to this need, as they enable multiple value functions to be learned in parallel. When the behavior and target policies vary considerably from each other, importance sampling (Rubinstein and Kroese, 2016; Sutton and Barto, 2011) is commonly used in order to obtain more accurate estimates of the value functions. Importance sampling reduces the variance of the estimate by taking into account the distributions associated with the behavior and target policies, and making modifications to the off-policy update equations accordingly. However, the estimates are still unlikely to be close to their optimal values if the agent receives very little experience relevant to a particular task.

This issue is partially addressed with experience replay, in which information contained in the replay memory is used from time to time in order to update the value functions. As a result, the agent is able to learn from uncorrelated historical data, and the sample efficiency of learning is greatly improved. This approach has received a lot of attention in recent years due to its utility in deep RL applications (Mnih et al., 2015, 2016, 2013; Adam et al., 2012; de Bruin et al., 2015). Recent works (Schaul et al., 2016; Narasimhan et al., 2015) have revealed that certain transitions are more useful than others. Schaul et al. (Schaul et al., 2016) prioritized transitions on the basis of their associated TD errors. They also briefly mentioned the possibility of replaying transitions in a sequential manner. The experience replay framework developed by Adam et al. (Adam et al., 2012) involved some variants that replayed sequences of experiences, but these sequences were drawn randomly from the replay memory. More recently, Isele and Cosgun (Isele and Cosgun, 2018) reported a selective experience replay approach aimed at performing well in the context of lifelong learning (Thrun, 1996). The authors of this work proposed a long-term replay memory in addition to the conventionally used one. Certain bases for designing this long-term replay memory, such as favoring transitions associated with high rewards and high absolute TD errors, are similar to the ones described in the present work. However, the approach does not explore the replay of sequences, and its fundamental purpose is to shield against catastrophic forgetting (Goodfellow et al., 2013) when multiple tasks are learned in sequence. The replay approach described in the present work focuses on enabling more sample-efficient learning in situations where positive rewards occur rarely. Addressing the same problem, Andrychowicz et al. (Andrychowicz et al., 2017) proposed a hindsight experience replay approach, in which each episode is replayed with a goal that is different from the original goal of the agent. The authors reported significant improvements in the learning performance in problems with sparse and binary rewards. These improvements were essentially brought about by allowing the learned value estimates (which would otherwise remain mostly unchanged due to the sparsity of rewards) to undergo significant change under the influence of an arbitrary goal. The underlying idea behind our approach also involves modification of the values in reward-sparse regions of the state-action space. The modifications, however, are not based on arbitrary goals, and are selectively performed on state-action pairs belonging to successful transition sequences that are associated with high absolute TD errors. Nevertheless, the hindsight replay approach is orthogonal to our proposed approach, and hence could be used in conjunction with it.

Much like in Schaul et al. (Schaul et al., 2016), TD errors have been frequently used as a basis for prioritization in other RL problems (White et al., 2014; Thrun, 1992b; Schaul et al., 2016). In particular, the model-based approach of prioritized sweeping (Moore and Atkeson, 1993; van Seijen and Sutton, 2013) prioritizes backups that are expected to result in a significant change in the value function. The algorithm we propose here uses a model-free architecture, and it is based on the idea of selectively reusing previous experience. However, we describe the reuse of sequences of transitions based on the TD errors observed when these transitions take place. Replaying sequences of experiences also seems to be biologically plausible (Ólafsdóttir et al., 2015; Buhry et al., 2011). In addition, it is known that animals tend to remember experiences that lead to high rewards (Singer and Frank, 2009). This is an idea reflected in our work, as only those transition sequences that lead to high rewards are considered for being stored in the replay memory. In filtering transition sequences in this manner, we simultaneously address the issue of determining which experiences are to be stored.

In addition to selecting transition sequences, we also generate virtual sequences of transitions which the agent could have possibly experienced, but in reality, did not. This virtual experience is then replayed to improve the agent's learning. Some early approaches in RL, such as the Dyna architecture (Sutton, 1990), also made use of simulated experience to improve the value function estimates. However, unlike the approach proposed here, the simulated experience was generated based on models of the reward function and transition probabilities which were continuously updated based on the agent's interactions with the environment. In this sense, the virtual experience generated in our approach is more grounded in reality, as it is based directly on the data collected through the agent-environment interaction. In more recent work, Fonteneau et al. describe an approach to generate artificial trajectories and use them to find policies with acceptable performance guarantees (Fonteneau et al., 2013). However, this approach is designed for batch RL, and the generated artificial trajectories are not constructed on the basis of TD errors. Our approach also recognizes the real-world limitations of replay memory (de Bruin et al., 2015), and stores only a certain amount of information at a time, specified by memory parameters. The selected and generated sequences are stored in the replay memory in the form of libraries which are continuously updated so that the agent is equipped with transition sequences that are most relevant to the task at hand.

3 Methodology

The idea of selecting appropriate transition sequences for replay is relatively straightforward. In order to improve the agent's learning, we simply keep track of the states, actions, rewards and absolute values of the TD errors associated with each transition. Generally, in difficult learning environments, high rewards occur rarely. So, when such an event is observed, we consider storing the corresponding sequence of transitions in a replay library $\mathcal{L}$. In this manner, we use the reward information as a means to filter transition sequences. The approach is similar to that used by Narasimhan et al. (Narasimhan et al., 2015), where transitions associated with positive rewards are prioritized for replay.

Among the transition sequences considered for inclusion in the library $\mathcal{L}$, those containing transitions with high absolute TD errors are considered to be the ones with high potential for learning progress, and they are accordingly prioritized for replay. The key idea is that when the TD error associated with a particular transition is large in magnitude, it generally implies a proportionately greater change in the value of the corresponding state/state-action pair. Such large changes have the potential to influence the values of the states/state-action pairs leading to it, which implies a high potential for learning. Hence, prioritizing such sequences of transitions for replay is likely to bring about greater learning progress. Transition sequences associated with large magnitudes of TD error are retained in the library, while those with lower magnitudes are removed and replaced with superior alternatives. In reality, such transition sequences may be very long, and hence impractical to store. Due to such practical considerations, we store only a portion of the sequence, based on a predetermined memory parameter. The library is continuously updated as the agent-environment interaction takes place, such that it eventually contains the sequences associated with the highest absolute TD errors.

As described earlier, replaying suitable sequences allows the effects of large changes in value functions to be propagated throughout the sequence. In order to propagate this information even further, to other regions of the state/state-action space, we use the sequences in $\mathcal{L}$ to construct additional transition sequences which could have possibly occurred. These virtual sequences are stored in another library, $\mathcal{L}_v$, and are later used for experience replay.

Figure 2: (a) Trajectories corresponding to two hypothetical behavior policies are shown. A portion of the trajectory associated with a high reward (and stored in $\mathcal{L}$) is highlighted. (b) The virtual trajectory constructed from the two behavior policies is highlighted. The states, actions and rewards associated with this trajectory constitute a virtual transition sequence.

In order to intuitively describe our approach of artificially constructing sequences, we consider the hypothetical example shown in Figure 2(a), where an agent executes behavior policies that help it learn to navigate from the start location to a primary goal location. However, using off-policy learning, we aim to learn value functions corresponding to the policy that helps the agent navigate towards a different, secondary goal location.

The trajectories shown in Figure 2(a) correspond to hypothetical actions dictated by the behavior policy midway through the learning process, during two separate episodes. Both trajectories begin at the start location and terminate at the primary goal location. However, the first trajectory also happens to pass through the secondary goal location, at which point the agent receives a high reward with respect to the secondary task. This triggers the transition sequence storage mechanism described earlier, and we assume that some portion of the sequence (shown by the highlighted portion of the trajectory in Figure 2(a)) is stored in the library $\mathcal{L}$. The second trajectory takes the agent directly from the start location towards the primary goal location, where it terminates. As the agent moves along this trajectory, it intersects the state trajectory corresponding to the sequence stored in $\mathcal{L}$. Using this intersection, it is possible to artificially construct additional trajectories (and their associated transition sequences) that are successful with respect to the task of navigating to the secondary goal location. The highlighted portions of the trajectories corresponding to the two behavior policies in Figure 2(b) show such a state trajectory, constructed using information about the intersection of portions of the two previously observed trajectories. The state, action and reward sequences associated with this highlighted trajectory form a virtual transition sequence.

Such artificially constructed transition sequences present the possibility of considerable learning progress. This is because, when replayed, they help propagate the large learning potential (characterized by large magnitudes of TD errors) associated with sequences in $\mathcal{L}$ to other regions of the state/state-action space. These replay updates supplement the off-policy value function updates that are carried out in parallel, thus accelerating the learning of the task in question. This outlines the basic idea behind our approach.

Fundamentally, our approach can be decomposed into three steps:

  1. Tracking and storage of relevant transition sequences

  2. Construction of virtual transition sequences using the stored transition sequences

  3. Replaying the transition sequences

These steps are explained in detail in Sections 3.1, 3.2 and 3.3.

3.1 Tracking and Storage of Relevant Transition Sequences

As described, a virtual transition sequence is constructed by joining together two transition sequences. One of them, denoted $T_h$ and composed of $n_h$ transitions, is historically successful: it has experienced a high reward with respect to the task, and is part of the library $\mathcal{L}$. The other sequence, $T_r$, is simply the sequence of the $n_r$ latest transitions executed by the agent.

If the agent starts at state $s_{t-n_r}$ and moves through intermediate states $s_{t-n_r+1}, \dots, s_{t-1}$ and eventually to $s_t$ (the most recent state) by executing a series of actions $a_{t-n_r}, \dots, a_{t-1}$, it receives rewards $r_{t-n_r+1}, \dots, r_t$ from the environment. These transitions comprise the transition sequence $T_r$:

$T_r = \{S_r, A_r, R_r\}$   (1)

where:

$S_r = [s_{t-n_r} \; s_{t-n_r+1} \; \dots \; s_t]$,
$A_r = [a_{t-n_r} \; a_{t-n_r+1} \; \dots \; a_{t-1}]$,
$R_r = [r_{t-n_r+1} \; r_{t-n_r+2} \; \dots \; r_t]$.

We respectively refer to $S_r$, $A_r$ and $R_r$ as the state, action and reward transition sequences corresponding to a series of agent-environment interactions, indexed from $t-n_r$ to $t$ (the current time step).

For the transition sequences considered for storage, we keep track of the sequence of absolute TD errors observed as well. If a high reward is observed in the transition ending at state $s_t$, the $n_h$ most recent transitions are stored as a candidate sequence $T_h$:

$T_h = \{S_h, A_h, R_h\}$   (2)

where $S_h = [s_{t-n_h} \; \dots \; s_t]$, $A_h = [a_{t-n_h} \; \dots \; a_{t-1}]$ and $R_h = [r_{t-n_h+1} \; \dots \; r_t]$, and where $\Delta_h = [|\delta_{t-n_h+1}| \; \dots \; |\delta_t|]$ denotes the corresponding sequence of absolute TD errors.

The memory parameters $n_r$ and $n_h$ are chosen based on the memory constraints of the agent. They determine how much of the recent agent-environment interaction history is to be stored in memory.
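A minimal container for such sequences is sketched below. It is only an illustration of the bookkeeping implied by the text: the field names, the trimming rule and the use of the peak absolute TD error as a priority score are assumptions, not the authors' code.

from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class TransitionSequence:
    # Transition k is the triad (states[k], actions[k], rewards[k]), leading
    # to states[k + 1]; td_errors[k] stores |delta| for that transition.
    states: List[Any] = field(default_factory=list)
    actions: List[Any] = field(default_factory=list)
    rewards: List[float] = field(default_factory=list)
    td_errors: List[float] = field(default_factory=list)

    def append(self, s, a, r, s_next, td_error, max_len):
        # Record one transition and trim the sequence to the memory
        # parameter (n_r for the recent history, n_h for stored sequences).
        if not self.states:
            self.states.append(s)
        self.actions.append(a)
        self.rewards.append(r)
        self.td_errors.append(abs(td_error))
        self.states.append(s_next)
        if len(self.actions) > max_len:
            del self.states[0], self.actions[0], self.rewards[0], self.td_errors[0]

    def priority(self):
        # The priority of a sequence is the largest |TD error| it contains.
        return max(self.td_errors) if self.td_errors else 0.0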

It is possible that the agent encounters a number of transitions associated with high rewards while executing the behavior policy. Corresponding to these transitions, a number of successful transition sequences would also exist. These sequences are maintained in the library $\mathcal{L}$ in a manner similar to the Policy Library through Policy Reuse (PLPR) algorithm (Fernández and Veloso, 2005). To decide whether to include a new transition sequence $T_h$ in the library $\mathcal{L}$, we determine the maximum value of the absolute TD error sequence $\Delta_h$ corresponding to $T_h$ and check whether it is $\eta$-close (the parameter $\eta$ determines the exclusivity of the library) to the maximum of the corresponding values associated with the transition sequences already in $\mathcal{L}$. If this is the case, then $T_h$ is included in $\mathcal{L}$. Since the transition sequences are filtered based on the maximum of the absolute values of TD errors among all the transitions in a sequence, this approach should be able to mitigate problems stemming from low magnitudes of TD errors associated with local optima (Baird, 1999; Tutsoy and Brown, 2016b). Using the absolute TD error as a basis for selection, we maintain a fixed number ($N_{\mathcal{L}}$) of transition sequences in the library $\mathcal{L}$. This ensures that the library is continuously updated with the latest transition sequences associated with the highest absolute TD errors. The complete procedure is illustrated in Algorithm 1.

1:Inputs:
2: $\eta$: parameter that determines the exclusivity of the library
3: $N_{\mathcal{L}}$: parameter that determines the number of transition sequences allowed in the library
4: $\Delta_h$: sequence of absolute TD errors corresponding to the new transition sequence
5: $\mathcal{L}$: a library of transition sequences ($\mathcal{L} = \{T_1, \dots, T_k\}$)
6: $T_h$: new transition sequence to be evaluated
7:
8:for each sequence $T_j$ in $\mathcal{L}$ do
9:   $W_j \leftarrow \max(\Delta_j)$
10:end for
11:if $\max(\Delta_h)$ is $\eta$-close to $\max_j W_j$ then
12:   $\mathcal{L} \leftarrow \mathcal{L} \cup \{T_h\}$
13:   $k \leftarrow$ number of transition sequences in $\mathcal{L}$
14:   if $k > N_{\mathcal{L}}$ then
15:      Remove from $\mathcal{L}$ the sequence with the lowest maximum absolute TD error
16:   end if
17:end if
Algorithm 1 Maintaining a replay library of transition sequences
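A compact Python rendering of this procedure is sketched below, reusing the TransitionSequence container from the earlier sketch. The numerical interpretation of the $\eta$-closeness test (admitting a candidate whose peak absolute TD error is at least a fraction eta of the current best), the eviction rule and the default parameter values are assumptions.

def update_library(library, new_seq, eta=0.9, max_size=50):
    # library : list of TransitionSequence objects (the library L)
    # new_seq : candidate sequence observed around a high-reward transition
    # eta     : exclusivity parameter; larger values make the library more exclusive
    # max_size: maximum number of sequences retained (N_L)
    best = max((seq.priority() for seq in library), default=0.0)
    if new_seq.priority() >= eta * best:
        library.append(new_seq)
        if len(library) > max_size:
            # Evict the sequence with the smallest peak absolute TD error.
            library.remove(min(library, key=lambda seq: seq.priority()))
    return library

With eta below 1, a run of mediocre sequences cannot displace a clearly superior one, while the eviction rule keeps the memory footprint bounded by max_size.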

3.2 Virtual Transition Sequences

Once the recent transition sequence $T_r$ is available and a library $\mathcal{L}$ of successful transition sequences has been obtained, we use this information to construct a library of virtual transition sequences, $\mathcal{L}_v$. The virtual transition sequences are constructed by first finding points of intersection between the state sequence of $T_r$ and the state sequences of the stored sequences $T_h$ in $\mathcal{L}$.

Let us consider the recent transition sequence $T_r = \{S_r, A_r, R_r\}$ and a stored transition sequence $T_h = \{S_h, A_h, R_h\}$ from $\mathcal{L}$. Let $S_h$ be the sub-matrix of $T_h$ containing its state sequence:

$S_h = [s_{h_0} \; s_{h_1} \; \dots \; s_{h_{n_h}}]$.   (3)

Now, if $\mathcal{S}_r$ and $\mathcal{S}_h$ are sets containing all the elements of the sequences $S_r$ and $S_h$ respectively, and if $s_x \in \mathcal{S}_r \cap \mathcal{S}_h$, then $s_x$ is a point of intersection of the two state trajectories.

Once a point of intersection has been obtained as described above, each of the two sequences $T_r$ and $T_h$ is decomposed into two subsequences at the point of intersection, such that:

$T_r = \{T_r^{(1)}, T_r^{(2)}\}$   (4)

where $T_r^{(1)}$ contains the states, actions and rewards of $T_r$ up to and including the intersection state $s_x$, and $T_r^{(2)}$ contains those of the remaining transitions. Similarly,

$T_h = \{T_h^{(1)}, T_h^{(2)}\}$   (5)

where $T_h^{(1)}$ contains the states, actions and rewards of $T_h$ up to the intersection state $s_x$, and $T_h^{(2)}$ contains those of the transitions from $s_x$ up to the final, high-reward transition of $T_h$.

The virtual transition sequence is then simply the concatenation:

$T_v = \{T_r^{(1)}, T_h^{(2)}\}$.   (6)

We perform the above procedure for each transition sequence in $\mathcal{L}$ to obtain the corresponding virtual transition sequences. These virtual transition sequences are stored in a library $\mathcal{L}_v = \{T_{v_1}, \dots, T_{v_m}\}$, where $m$ denotes the number of virtual transition sequences in $\mathcal{L}_v$, subject to the constraint $m \leq N_{\mathcal{L}}$.

The overall process for constructing and storing virtual transition sequences is summarized in Algorithm 2. Once the library $\mathcal{L}_v$ has been constructed, we replay the sequences contained in it to improve the estimates of the value function. The details of this are discussed in Section 3.3.

1:Inputs:
2: $T_r$: sequence of the $n_r$ latest transitions
3: $\mathcal{L}$: library containing stored transition sequences
4: $\mathcal{L}_v$: library for storing virtual transition sequences
5:for each sequence $T_h$ in $\mathcal{L}$ do
6:   Extract the state sequence $S_h$ from $T_h$ (Equation 3)
7:   Find the set $\mathcal{I}$ of states corresponding to the intersection of the state trajectories of $T_r$ and $T_h$
8:   if $\mathcal{I}$ is not empty, then
9:      for each state $s_x$ in $\mathcal{I}$ do
10:         Treat $s_x$ as the intersection point and decompose $T_r$ and $T_h$ as per Equations 4 and 5
11:      end for
12:      Choose the intersection point $s_x$ from $\mathcal{I}$ such that the number of transitions in $T_r^{(1)}$ is maximized
13:      Use the selected $s_x$ to construct the virtual transition sequence $T_v$ as per Equation 6
14:      Store the constructed sequence in the library $\mathcal{L}_v$ ($\mathcal{L}_v \leftarrow \mathcal{L}_v \cup \{T_v\}$)
15:   end if
16:end for
Algorithm 2 Constructing virtual transition sequences
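The sketch below is one possible Python rendering of this construction, reusing the TransitionSequence container from the earlier sketch. It assumes hashable (tabular) states, and the tie-breaking rule for multiple intersection points follows the reading given above, so it should be treated as an assumption rather than the authors' implementation.

def construct_virtual_sequence(recent, stored):
    # Build one virtual sequence from the recent sequence T_r and a stored
    # sequence T_h (cf. Algorithm 2). Returns None if the state trajectories
    # do not intersect.
    common = set(recent.states) & set(stored.states)
    if not common:
        return None
    # Intersection point furthest down the recent trajectory, so that the
    # first segment of the virtual sequence is as long as possible.
    i = max(idx for idx, s in enumerate(recent.states) if s in common)
    s_x = recent.states[i]
    j = stored.states.index(s_x)
    virtual = TransitionSequence()
    # Segment T_r^(1): recent transitions up to the intersection state,
    # followed by segment T_h^(2): stored transitions from that state onwards.
    virtual.states = recent.states[: i + 1] + stored.states[j + 1 :]
    virtual.actions = recent.actions[:i] + stored.actions[j:]
    virtual.rewards = recent.rewards[:i] + stored.rewards[j:]
    virtual.td_errors = recent.td_errors[:i] + stored.td_errors[j:]
    return virtual

def build_virtual_library(recent, library, max_size=50):
    # At most one virtual sequence per stored sequence, where an intersection exists.
    virtual_library = []
    for stored in library:
        v = construct_virtual_sequence(recent, stored)
        if v is not None:
            virtual_library.append(v)
    return virtual_library[:max_size]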

3.3 Replaying the Transition Sequences

In order to make use of the transition sequences described above, each of the state-action-reward triads in a virtual transition sequence $T_v$ is replayed as if the agent had actually experienced it.

Similarly, sequences in $\mathcal{L}$ are also replayed from time to time. Replaying sequences from $\mathcal{L}$ and $\mathcal{L}_v$ in this manner causes the effects of large absolute TD errors originating from further up in the sequence to propagate through the respective transitions, ultimately leading to more accurate estimates of the value function. The transitions are replayed as per the standard Q-learning update equation shown below:

$Q(s_j, a_j) \leftarrow Q(s_j, a_j) + \alpha [r_{j+1} + \gamma \max_{a' \in \mathcal{A}} Q(s_{j+1}, a') - Q(s_j, a_j)]$   (7)

where $s_j$ and $a_j$ refer to the state and action at transition $j$ of the replayed sequence, and $Q$ and $r$ represent the action-value function and reward corresponding to the task. The variable $a'$ is a bound variable that represents any action in the action set $\mathcal{A}$. The learning rate and discount factor are represented by $\alpha$ and $\gamma$ respectively.

The sequence $T_h^{(2)}$ in Equation (6) is a subset of $T_h$, which is in turn part of the library $\mathcal{L}$ and is thus associated with a high absolute TD error. When $T_v$ is replayed, the effects of the high absolute TD errors propagate from the values of the state/state-action pairs in $T_h^{(2)}$ to those in $T_r^{(1)}$. Hence, in case of multiple points of intersection, we consider points that are furthest down $T_r$. In other words, the intersection point is chosen to maximize the length of $T_r^{(1)}$. In this manner, a larger number of state-action values experience the improvements brought about by replaying the transition sequences.

1:Inputs:
2: $\alpha$: learning rate
3: $\gamma$: discount factor
4: $\mathcal{L}_v$: a library of virtual transition sequences, containing $m$ sequences
5:for $i = 1, \dots, m$ do
6:   $l \leftarrow$ number of state-action-reward triads in $T_{v_i}$
7:   $j \leftarrow 1$
8:   while $j \leq l$ do
9:      Update $Q(s_j, a_j)$ as per Equation (7)
10:      $j \leftarrow j + 1$
11:   end while
12:end for
Algorithm 3 Replay of virtual transition sequences from library $\mathcal{L}_v$
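A minimal Python sketch of this replay step is given below, reusing the TransitionSequence container and the tabular Q dictionary of the earlier sketches; the default hyperparameter values and the forward sweep order are assumptions.

def replay_sequence(q, seq, actions, alpha=0.1, gamma=0.95):
    # Replay every (state, action, reward) triad of the sequence with the
    # Q-learning update of Equation (7). Sweeping in reverse order
    # (reversed(range(...))) is a possible variant that propagates a large
    # terminal TD error backwards in a single pass.
    for k in range(len(seq.actions)):
        s, a, r = seq.states[k], seq.actions[k], seq.rewards[k]
        s_next = seq.states[k + 1]
        target = r + gamma * max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (target - q[(s, a)])

def replay_libraries(q, library, virtual_library, actions, alpha=0.1, gamma=0.95):
    # Sequences from both the stored and the virtual libraries are replayed
    # from time to time, supplementing the regular off-policy updates.
    for seq in library + virtual_library:
        replay_sequence(q, seq, actions, alpha, gamma)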

4 Results and Discussion

We demonstrate our approach on modified versions of two standard reinforcement learning tasks. The first is a multi-task navigation/puddle-world problem (Figure 3), and the second is a multi-task mountain car problem (Figure 6). In both these problems, behavior policies are generated to solve a given task (which we refer to as the primary task) relatively greedily, while the value function for another task of interest (which we refer to as the secondary task) is simultaneously learned in an off-policy manner. The secondary task is intentionally made more difficult by making appropriate modifications to the environment. Such adverse multi-task settings best demonstrate the effectiveness of our approach and emphasize its advantages over other experience replay approaches. We characterize the difficulty of the secondary task with a difficulty ratio, which is the fraction of the executed behavior policies that experience a high reward with respect to the secondary task. A low value of this ratio indicates that achieving the secondary task under the given behavior policy is difficult. In both tasks, the Q-values are initialized randomly, and once the agent encounters the goal state of the primary task, the episode terminates.

4.1 Navigation/Puddle-World Task

Figure 3: Navigation environment used to demonstrate the approach of replaying transition sequences

In the navigation environment, the simulated agent is assigned tasks of navigating to certain locations in its environment. We consider two locations, which represent the primary and secondary task locations respectively. The environment is set up such that the location corresponding to high rewards with respect to the secondary task lies far away from that of the primary task (see Figure 3). In addition to this, the accessibility of the secondary task location is deliberately limited by surrounding it with obstacles on all but one side. These modifications contribute towards a low value of the difficulty ratio, especially when the agent operates with a greedy behavior policy with respect to the primary task.

The agent is assumed to be able to sense its location in the environment accurately, and can detect when it 'bumps' into an obstacle. It can move around in the environment at a maximum speed of 1 unit per time step by executing actions that take it forwards, backwards, sideways or diagonally forwards or backwards to either side. In addition to these actions, the agent can choose to hold its current position. However, the transitions resulting from these actions are probabilistic in nature. The intended movements occur only 80% of the time; for the remaining 20%, the x- and y-coordinates may deviate from their intended values by 1 unit. Also, the agent's location does not change if the chosen action forces it to run into an obstacle.
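One possible reading of these dynamics is sketched below as a hypothetical step function; the action encoding, the treatment of the 20% noise and the obstacle representation are assumptions made for illustration.

import random

# Hold position plus the eight surrounding moves (forwards, backwards,
# sideways and diagonals), each of at most one unit per time step.
MOVES = {0: (0, 0), 1: (1, 0), 2: (-1, 0), 3: (0, 1), 4: (0, -1),
         5: (1, 1), 6: (1, -1), 7: (-1, 1), 8: (-1, -1)}

def step(pos, action, obstacles, noise=0.2):
    # With probability `noise`, each coordinate of the intended move may
    # deviate by one unit; moves into obstacles leave the position unchanged.
    dx, dy = MOVES[action]
    if random.random() < noise:
        dx += random.choice([-1, 0, 1])
        dy += random.choice([-1, 0, 1])
    candidate = (pos[0] + dx, pos[1] + dy)
    return pos if candidate in obstacles else candidate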

The agent employs Q-learning with a relatively greedy (low-$\epsilon$) $\epsilon$-greedy policy that attempts to maximize the expected sum of primary rewards. The reward structure for both tasks is such that the agent receives a high positive reward for visiting the respective goal locations, and a high penalty for bumping into an obstacle in the environment. In addition to this, the agent is assigned a small living penalty for each action that fails to result in the goal state. In all simulations, the discount factor $\gamma$, the learning rate $\alpha$ and the exclusivity parameter $\eta$ mentioned in Algorithm 1 are held fixed. Although various approaches exist to optimize the values of the learning hyperparameters (Even-Dar and Mansour, 2003; Tutsoy and Brown, 2016a; Garcia and Ndiaye, 1998), these values were chosen arbitrarily, such that satisfactory performances were obtained for both the navigation and the mountain-car environments.

Figure 4: Comparison of the average secondary returns using different experience replay approaches as well as Q-learning without experience replay in the navigation environment, averaged over multiple runs. The standard errors are small. For the different experience replay approaches, the number of replay updates is controlled to be the same.

In the environment described, the agent executes actions to learn the primary task. Simultaneously, the approach described in Section 3 is employed to learn the value functions associated with the secondary task. At each episode of the learning process, the agent's performance with respect to the secondary task is evaluated. In order to compute the average return for an episode, we allow the agent to execute a fixed number $N_g$ of greedy actions from a randomly chosen starting point, and record the accumulated reward. The process is repeated for $K$ trials, and the average return for the episode is reported as the average accumulated reward per trial. The average return corresponding to each episode in Figure 4 is computed in this way. The mean of these average returns over all the episodes is reported as the average secondary return per episode in Table 1. That is, the average return corresponding to episode $e$ is given by:

$\bar{G}_e = \frac{1}{K} \sum_{k=1}^{K} \sum_{j=1}^{N_g} r_{j,k}$   and   $\bar{G} = \frac{1}{E} \sum_{e=1}^{E} \bar{G}_e$,

where $r_{j,k}$ is the reward obtained by the agent at step $j$, corresponding to the greedy action taken at that step, in trial $k$, and $E$ is the maximum number of episodes.
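The evaluation loop implied by this protocol might look like the following sketch; the greedy-step and trial counts, the environment interface and the use of the secondary reward are assumptions.

def evaluate_secondary(q, env, actions, n_greedy=100, n_trials=10):
    # Average (undiscounted) secondary return per trial for one evaluation:
    # from a random starting point, take n_greedy greedy actions with respect
    # to the secondary Q-function and accumulate the secondary rewards.
    total = 0.0
    for _ in range(n_trials):
        s = env.reset_random()                    # assumed: random start state
        for _ in range(n_greedy):
            a = max(actions, key=lambda b: q[(s, b)])
            s, r_secondary = env.step_secondary(a)    # assumed interface
            total += r_secondary
    return total / n_trials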

Figure 4 shows the average return for the secondary task, plotted over multiple runs of the learning episodes using different learning approaches. The low average value of the difficulty ratio indicated in Figure 4 reflects the relatively high difficulty of the secondary task under the behavior policy being executed. As observed in Figure 4, an agent that replays transition sequences manages to accumulate high average returns at a much faster rate as compared to regular Q-learning. The approach also performs better than other experience replay approaches for the same number of replay updates. These replay approaches are applied independently of each other for the secondary task. In Figure 4, the prioritization exponent for prioritized experience replay is kept fixed.

(a)
Parameter value | Average secondary return per episode
10 | 1559.7
100 | 2509.7
1000 | 2610.4
(b)
Parameter value | Average secondary return per episode
10 | 1072.5
100 | 1159.2
1000 | 2610.4
(c)
Parameter value | Average secondary return per episode
10 | 2236.6
50 | 2610.4
100 | 2679.5
  • With regular Q-learning (without experience replay), the corresponding average return is considerably lower.

Table 1: Average secondary returns accumulated per episode using different values of the memory parameters in the navigation environment

Table 1 shows the average return for the secondary task accumulated per episode during runs of the navigation task for different values of the memory parameters used in our approach (the lengths of the recent and stored transition sequences, and the size of the library). Each of the parameters is varied separately while keeping the other parameters fixed at their default values. The default values correspond to the entries common to Tables 1(a), 1(b) and 1(c), namely 1000, 1000 and 50 respectively.

Application to the Primary Task

In the simulations described thus far, the performance of our approach was evaluated on a secondary task, while the agent executed actions relatively greedily with respect to a primary task. Such a setup was chosen in order to ensure a greater sparsity of high rewards for the secondary task. However, the proposed approach of replaying sequences of transitions can also be applied to the primary task in question. In particular, when a less greedy exploration strategy is employed (that is, when the exploration parameter $\epsilon$ is high), such conditions of reward sparsity can be recreated for the primary task. Figure 5 shows the performance of different experience replay approaches when applied to the primary task, for different values of $\epsilon$. As expected, for more exploratory behavior policies, which correspond to lower probabilities of obtaining high rewards, the approach of replaying transition sequences is significantly beneficial, especially at the early stages of learning. However, as the episodes progress, the effects of drastically large absolute TD errors have already penetrated into other regions of the state-action space, and the agent ceases to benefit as much from replaying transition sequences. Hence, other forms of replay, such as experience replay with uniform random sampling or prioritized experience replay, were found to be more useful after the initial learning episodes.

Figure 5: The performance of different experience replay approaches on the primary task in the navigation environment for different values of the exploration parameter $\epsilon$, averaged over multiple runs. For these results, the memory parameters are held fixed across the different approaches and values of $\epsilon$.

4.2 Mountain Car Task

Figure 6: Mountain car environment used to demonstrate off-policy learning using virtual transition sequences

In the mountain car task, the agent, an under-powered vehicle represented by the circle in Figure 6, is assigned a primary task of getting out of the trough and visiting the primary goal point. Visiting a second goal point is treated as the secondary task. The agent is assigned a high reward for fulfilling the respective objectives, and a living penalty is assigned in all other situations. At each time step, the agent can choose from three possible actions: (1) accelerating in the positive direction, (2) accelerating in the negative direction, and (3) applying no control. The environment is discretized such that a finite number of unique positions and unique velocity values are possible.

The mountain profile is shaped such that the secondary goal point is higher than the primary one, and the average slope leading to it is steeper than that leading to the primary goal point. In addition to this, the agent is set to be relatively greedy with respect to the primary task, with a small exploration parameter $\epsilon$. These factors make the secondary task more difficult, resulting in a low value of the difficulty ratio under the policy executed.
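Since the environment is discretized, a tabular Q-function can be used directly; a hypothetical discretization is sketched below. The position and velocity ranges follow the classic mountain-car formulation and the bin counts are arbitrary assumptions, since the modified profile and resolution used here are not reproduced.

import numpy as np

POS_BINS = np.linspace(-1.2, 0.6, 120)    # assumed position range and resolution
VEL_BINS = np.linspace(-0.07, 0.07, 120)  # assumed velocity range and resolution

def discretize(position, velocity):
    # Map the continuous (position, velocity) pair to a discrete state index
    # pair, suitable for indexing a tabular action-value function.
    return (int(np.digitize(position, POS_BINS)),
            int(np.digitize(velocity, VEL_BINS)))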

Figure 7 shows the average secondary task returns over multiple runs of the learning episodes. It is seen that, especially during the initial phase of learning, the agent accumulates rewards at a higher rate as compared to the other learning approaches. As in the navigation task, the number of replay updates is restricted to be the same while comparing the different experience replay approaches in Figure 7. Analogous to Table 1, Table 2 shows the average secondary returns accumulated per episode in the mountain-car environment for different values of the memory parameters. The default values of the memory parameters are the same as those used in the navigation environment.

Figure 7: Comparison of the average secondary returns using different experience replay approaches as well as Q-learning without experience replay in the mountain-car environment, averaged over multiple runs. The standard errors are small. For the different experience replay approaches, the number of replay updates is controlled to be the same.
(a)
Parameter value | Average secondary return per episode
10 | 221.0
100 | 225.1
1000 | 229.9
(b)
Parameter value | Average secondary return per episode
10 | 129.9
100 | 190.5
1000 | 229.9
(c)
Parameter value | Average secondary return per episode
10 | 225.6
50 | 229.9
100 | 228.4
  • With regular Q-learning (without experience replay), the corresponding average return is considerably lower.

Table 2: Average secondary returns accumulated per episode using different values of the memory parameters in the mountain car environment

From Figures 4 and 7, the agent is seen to be able to accumulate significantly higher average secondary returns per episode when experiences are replayed. Among the experience replay approaches, the approach of replaying transition sequences is superior for the same number of replay updates. This is especially true in the navigation environment, where visits to regions associated with high secondary task rewards are much rarer, as indicated by the low value of the difficulty ratio. In the mountain car problem, the visits are more frequent, and the differences between the different experience replay approaches are less significant. The value of the prioritization exponent used here is the same as that used in the navigation task. The approach of replaying sequences of transitions also offers noticeable performance improvements when applied to the primary task (as seen in Figure 5), especially during the early stages of learning and when highly exploratory behavior policies are used. In both the navigation and mountain-car environments, the performances of the approaches that replay individual transitions (experience replay with uniform random sampling and prioritized experience replay) are found to be nearly equivalent. We have not observed a significant advantage of using the prioritized approach, as reported in previous studies (Schaul et al., 2016; Hessel et al., 2017) using deep RL. This perhaps indicates that the improvements brought about by the prioritized approach are much more pronounced in deep RL applications.

The approach of replaying transition sequences seems to be particularly sensitive to one of the sequence-length memory parameters, with higher average returns being achieved for larger values of this parameter. A possible explanation for this could simply be that larger values correspond to longer sequences, which allow a larger number of replay updates to occur in more regions of the state/state-action space. The influence of the other sequence-length parameter is similar in nature, but its impact on the performance is less emphatic. This could be because longer sequences have a greater chance of their state trajectories intersecting with those of the sequences they are matched against, thus improving the chances of virtual transition sequences being discovered, and of the agent's value functions being updated using virtual experiences. However, the parameter $N_{\mathcal{L}}$, associated with the size of the library, does not seem to have a noticeable influence on the performance of this approach. This is probably due to the fact that the library $\mathcal{L}$ (and consequently $\mathcal{L}_v$) is continuously updated with new, suitable transition sequences (successful sequences associated with higher magnitudes of TD errors) as and when they are observed. Hence, the storage of a large number of transition sequences in the libraries becomes largely redundant.

Although the method of constructing virtual transition sequences is most naturally applicable to the tabular case, it could also be extended to approaches with linear and non-linear function approximation. However, soft intersections between state trajectories would have to be considered instead of exact intersections. That is, while comparing the state trajectories of $T_r$ and $T_h$, a state could be treated as a point of intersection if it is close to elements of both trajectories within some specified tolerance limit. Such modifications could allow the approach described here to be applied to deep RL. Transitions belonging to the selected and constructed sequences could then be selectively replayed, thereby bringing about improvements in the sample efficiency. However, the experience replay approaches (implemented with the mentioned modifications) applied to the environments described in Section 4 did not seem to bring about significant performance improvements when a neural network function approximator was used. The performance of the corresponding deep Q-network (DQN) was approximately the same even without any experience replay. This perhaps reveals that the performance of the proposed approach needs to be evaluated on more complex problems such as the Atari domain (Mnih et al., 2015). Reliably extending virtual transition sequences to the function approximation case could be a future area of research. One of the limitations of constructing virtual transition sequences is that in higher dimensional spaces, intersections in the state trajectories generally become less frequent. However, other sequences in the library $\mathcal{L}$ can still be replayed. If appropriate sequences have not yet been discovered or constructed, and are thus not available for replay, other experience replay approaches that replay individual transitions can be used to accelerate learning in the meanwhile.
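A soft intersection test of the kind described above could look like the following sketch; the Euclidean metric and the tolerance value are assumptions.

import numpy as np

def soft_intersections(recent_states, stored_states, tol=0.1):
    # Return index pairs (i, j) of states from the two trajectories that lie
    # within a Euclidean distance `tol` of each other; a drop-in replacement
    # for the exact set intersection when states are continuous vectors.
    pairs = []
    for i, s_r in enumerate(recent_states):
        for j, s_h in enumerate(stored_states):
            if np.linalg.norm(np.asarray(s_r) - np.asarray(s_h)) <= tol:
                pairs.append((i, j))
    return pairs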

Perhaps another limitation of the approach described here is that constructing the library requires some notion of a goal state associated with high rewards. By tracking the statistical properties such as the mean and variance of the rewards experienced by an agent in its environment in an online manner, the notion of what qualifies as a high reward could be automated using suitable thresholds (Karimpanal and Wilhelm, 2017). In addition to this, other criteria such as the returns or average absolute TD errors of a sequence could also be used to maintain the library.
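The automation mentioned above might, for instance, track a running mean and variance of the observed rewards and flag rewards that lie several standard deviations above the mean; the sketch below uses Welford's online algorithm, and the threshold of three standard deviations is an arbitrary assumption.

class RewardTracker:
    # Online mean/variance of observed rewards (Welford's algorithm); a reward
    # is flagged as 'high' when it exceeds the running mean by k standard
    # deviations.
    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, r):
        self.n += 1
        delta = r - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (r - self.mean)

    def is_high(self, r):
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return r > self.mean + self.k * std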

Figure 8: The variation of computational time per episode with sequence length for the two environments, computed over multiple runs.

It is worth adding that the memory parameters have been set arbitrarily in the examples described here. Selecting appropriate values for these parameters as the agent interacts with its environment could be a topic for further research. Figure 8 shows the mean and standard deviation of the computation time per episode for different sequence lengths, over multiple runs. The figure suggests that the computation time increases as longer transition sequences are used, and the trend can be approximated as linear. These results could also be used to inform the choice of values for the sequence-length parameters in a given application. The values shown in Figure 8 were obtained from running simulations on a computer with an Intel i7 processor running at 2.7 GHz, using 8 GB of RAM, and running the Windows 7 operating system.

The approach of replaying transition sequences has direct applications in multi-task RL, where agents are required to learn multiple tasks in parallel. Certain tasks could be associated with the occurrence of relatively rare events when the agent operates under specific behavior policies, and the replay of virtual transition sequences could further improve the learning of such tasks. This is especially relevant in fields such as robotics, where exploration of the state/state-action space is typically expensive in terms of time and energy. By reusing the agent-environment interactions in the manner described here, reasonable estimates of the value functions corresponding to multiple tasks can be maintained, thereby improving the efficiency of exploration.

5 Conclusion

In this work, we described an approach to replay sequences of transitions to accelerate the learning of tasks in an off-policy setting. Suitable transition sequences are selected and stored in a replay library based on the magnitudes of the TD errors associated with them. Using these sequences, we showed that it is possible to construct virtual experiences in the form of virtual transition sequences, which can be replayed to improve an agent's learning, especially in environments where desirable events occur rarely. We demonstrated the benefits of this approach by applying it to versions of standard reinforcement learning tasks such as the puddle-world and mountain-car tasks, where the behavior policy was deliberately made drastically different from the target policy. In both tasks, a significant improvement in learning speed was observed compared to regular Q-learning as well as other forms of experience replay. Further, the influence of the different memory parameters used was described and evaluated empirically, and possible extensions to this work were briefly discussed. Characterized by controllable memory parameters and the potential to significantly improve the efficiency of exploration at the expense of some increase in computation, the approach of replaying transition sequences could be especially useful in fields such as robotics, where these factors are of prime importance. The extension of this approach to the cases of linear and non-linear function approximation could find significant utility, and is currently being explored.

Acknowledgements

This work is supported by the President’s graduate fellowship (MOE, Singapore) and TL@SUTD under the Systems Technology for Autonomous Reconnaissance & Surveillance (STARS-Autonomy & Control) program. The authors thank Richard S. Sutton from the University of Alberta for his feedback and many helpful discussions during the development of this work.

References

  • S. Adam, L. Busoniu, and R. Babuska (2012) Experience replay for real-time reinforcement learning control. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42 (2), pp. 201–212. Cited by: §1, §2.
  • M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. P. Abbeel, and W. Zaremba (2017) Hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 5048–5058. Cited by: §2.
  • L. C. Baird (1999) Reinforcement learning through gradient descent. Robotics Institute, pp. 227. Cited by: §3.1.
  • B. Bakker, V. Zhumatiy, G. Gruener, and J. Schmidhuber (2006) Quasi-online reinforcement learning for robots. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pp. 2997–3002. Cited by: §2.
  • L. Buhry, A. H. Azizi, and S. Cheng (2011) Reactivation, replay, and preplay: how it might all fit together. Neural plasticity 2011, pp. 203462. Cited by: §2.
  • T. de Bruin, J. Kober, K. Tuyls, and R. Babuška (2015) The importance of experience replay database composition in deep reinforcement learning. In Deep Reinforcement Learning Workshop, NIPS, Cited by: §2, §2.
  • E. Even-Dar and Y. Mansour (2003) Learning rates for q-learning. Journal of Machine Learning Research 5 (Dec), pp. 1–25. Cited by: §4.1.
  • F. Fernández and M. Veloso (2005) Building a library of policies through policy reuse. Technical report Technical Report CMU-CS-05-174, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA. Cited by: §3.1.
  • R. Fonteneau, S. A. Murphy, L. Wehenkel, and D. Ernst (2013) Batch mode reinforcement learning based on the synthesis of artificial trajectories. Annals of operations research 208 (1), pp. 383–416. Cited by: §2.
  • F. Garcia and S. M. Ndiaye (1998) A learning rate analysis of reinforcement learning algorithms in finite-horizon. In Proceedings of the 15th International Conference on Machine Learning (ML-98, Cited by: §4.1.
  • M. Geist, B. Scherrer, et al. (2014) Off-policy learning with eligibility traces: a survey.. Journal of Machine Learning Research 15 (1), pp. 289–333. Cited by: §1.
  • I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio (2013) An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211. Cited by: §2.
  • M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver (2017) Rainbow: combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298. Cited by: §4.2.
  • D. Isele and A. Cosgun (2018) Selective experience replay for lifelong learning. arXiv preprint arXiv:1802.10269. Cited by: §2.
  • T. G. Karimpanal and E. Wilhelm (2017) Identification and off-policy learning of multiple objectives using adaptive clustering. Neurocomputing 263 (), pp. 39 – 47. Note: Multiobjective Reinforcement Learning: Theory and Applications External Links: ISSN 0925-2312, Document Cited by: §4.2.
  • J. Kober, J. A. Bagnell, and J. Peters (2013) Reinforcement learning in robotics: a survey. The International Journal of Robotics Research 32 (11), pp. 1238–1274. Cited by: §2.
  • L. Lin (1992) Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning 8 (3-4), pp. 293–321. Cited by: §1.
  • V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, Cited by: §2.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013) Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. Cited by: §2.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533. Cited by: §2, §4.2.
  • J. Modayil, A. White, and R. S. Sutton (2014) Multi-timescale nexting in a reinforcement learning robot. Adaptive Behavior 22 (2), pp. 146–160. Cited by: §1.
  • A. W. Moore and C. G. Atkeson (1993) Prioritized sweeping: reinforcement learning with less data and less time. Machine learning 13 (1), pp. 103–130. Cited by: §2.
  • K. Narasimhan, T. D. Kulkarni, and R. Barzilay (2015) Language understanding for text-based games using deep reinforcement learning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, L. Màrquez, C. Callison-Burch, J. Su, D. Pighin, and Y. Marton (Eds.), pp. 1–11. External Links: Link Cited by: §2, §3.
  • H. F. Ólafsdóttir, C. Barry, A. B. Saleem, D. Hassabis, and H. J. Spiers (2015) Hippocampal place cells construct reward related sequences through unexplored space. Elife 4, pp. e06063. Cited by: §2.
  • R. Y. Rubinstein and D. P. Kroese (2016) Simulation and the monte carlo method. John Wiley & Sons. Cited by: §2.
  • T. Schaul, J. Quan, I. Antonoglou, and D. Silver (2016) Prioritized experience replay. In International Conference on Learning Representations, Puerto Rico, pp. 1. Cited by: §1, §2, §2, §4.2.
  • A. C. Singer and L. M. Frank (2009) Rewarded outcomes enhance reactivation of experience in the hippocampus. Neuron 64 (6), pp. 910–921. Cited by: §2.
  • R. S. Sutton and A. G. Barto (2011) Reinforcement learning: an introduction. Cambridge Univ Press. Cited by: §1, §1, §2.
  • R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup (2011) Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761–768. Cited by: §1.
  • R. S. Sutton (1990) Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh Int. Conf. on Machine Learning, pp. 216–224. Cited by: §2.
  • P. S. Thomas and E. Brunskill (2016) Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, Cited by: §2.
  • S. B. Thrun (1992a) Efficient exploration in reinforcement learning. Technical report Carnegie Mellon University, Pittsburgh, PA, USA. Cited by: §2.
  • S. B. Thrun (1992b) Efficient exploration in reinforcement learning. Technical report . Cited by: §2.
  • S. Thrun (1996) Is learning the n-th thing any easier than learning the first?. In Advances in neural information processing systems, pp. 640–646. Cited by: §2.
  • O. Tutsoy and M. Brown (2016a) An analysis of value function learning with piecewise linear control. Journal of Experimental & Theoretical Artificial Intelligence 28 (3), pp. 529–545. Cited by: §4.1.
  • O. Tutsoy and M. Brown (2016b) Chaotic dynamics and convergence analysis of temporal difference algorithms with bang-bang control. Optimal Control Applications and Methods 37 (1), pp. 108–126. Cited by: §3.1.
  • H. van Seijen and R. S. Sutton (2013) Planning by Prioritized Sweeping with Small Backups. In Proceedings of the 30th International Conference on Machine Learning, Cycle 3, JMLR Proceedings, Vol. 28, pp. 361–369. Cited by: §2.
  • Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas (2016) Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224. Cited by: §1.
  • A. White, J. Modayil, and R. S. Sutton (2012) Scaling life-long off-policy learning. In Development and Learning and Epigenetic Robotics (ICDL), 2012 IEEE International Conference on, pp. 1–6. Cited by: §1.
  • A. White, J. Modayil, and R. S. Sutton (2014) Surprise and curiosity for big data robotics. In AAAI-14 Workshop on Sequential Decision-Making with Big Data, Quebec City, Quebec, Canada, Cited by: §2.