Accelerated Reinforcement Learning

K. Lakshmanan
Department of Computer Science and Engineering
Indian Institute of Technology (BHU), Varanasi, India
Email: lakshmanank.cse@itbhu.ac.in
Abstract

Policy gradient methods are widely used in reinforcement learning algorithms to search for better policies in the parameterized policy space. They perform gradient search in the policy space and are known to converge very slowly. Nesterov developed an accelerated gradient search algorithm for convex optimization problems, which has recently been extended to non-convex and stochastic optimization. We use Nesterov's acceleration for policy gradient search in the well-known actor-critic algorithm and show its convergence using the ODE method. We tested this algorithm on a scheduling problem in which an incoming job is scheduled into one of four queues based on the queue lengths. The experimental results show that the algorithm using Nesterov's acceleration performs significantly better than the algorithm without acceleration. To the best of our knowledge, this is the first time Nesterov's acceleration has been used with an actor-critic algorithm.

1 Introduction

In Reinforcement Learning (RL), an intelligent agent has to choose actions depending on the state it is in. There are costs associated with each state and action. The goal is to choose actions so that the accumulated cost is minimized. This type of learning is different from both supervised and unsupervised learning. RL algorithms have been successfully applied to problems such as control in computer networks, planning in robotics, and autonomous driving. In these problems we make the Markov assumption, i.e., given the present state and action, the future state the agent moves to does not depend on past states and actions. Hence we can model the system using Markov Decision Processes (MDPs), which are the standard framework for RL algorithms [1, 14].

The state transition probability gives the probability of moving to the next state given that we are in a particular state and pick a particular action. If these probabilities are known, we use model-based algorithms; otherwise, there are RL algorithms which do not require these probabilities, called model-free algorithms.

A policy is a mapping from the state space to the action space. It tells us what action to pick when in a particular state. Policies can also be randomized, by which we mean a mapping from the state space to a probability distribution over the action space. For a given policy (deterministic or randomized) we can define a value function, which is the total accumulated cost. This can be calculated in different ways; in this paper we consider the infinite horizon discounted cost criterion, formally defined in the next section.

RL algorithms have polynomial dependence on the sizes of the state and action spaces of the underlying MDP. In real-world applications the state and action spaces are very large, since they grow exponentially with parameters associated with the problem. This makes implementing the algorithms a challenge. One way to overcome this is to approximate the value function, as explained in the next section. In this paper we consider the well-known actor-critic algorithm with a linear function approximation architecture [2]. The algorithm has two parts, the actor and the critic. The critic evaluates the policy, while the actor updates the policy based on the critic's evaluation. In our algorithm the actor performs gradient search in a parameterized policy space and the critic evaluates the policy using the temporal difference technique.

In convex optimization, Nesterov's accelerated gradient search is well known [9, 10]. It achieves a convergence rate of $O(1/k^2)$ compared to the $O(1/k)$ rate of plain gradient search. Recently it was extended to non-convex and stochastic problems [5]. We use Nesterov's accelerated gradient search in the actor part of the algorithm to speed up convergence. In a related work, Meyer et al. [8] proposed accelerated gradient temporal difference learning, where they use Nesterov's method to accelerate a residual-gradient-based TD algorithm. Our work is different, as we use a multi-timescale actor-critic algorithm rather than minimizing the Bellman residual error as in their paper. We see from experiments on a scheduling problem that this accelerated actor-critic algorithm has much better performance than the regular actor-critic algorithm.

The rest of the paper is organized as follows: the next section describes the framework and problem definition. Section 3 gives a brief introduction to Nesterov's accelerated gradient search. In Section 4 we describe our algorithm. In Section 5 we briefly present the convergence analysis, and finally we present the experimental results.

2 Framework and Problem Definition

Let $S$ be the set of states the agent can be in and $A$ be the set of actions which the agent can take. We assume the transition to the next state depends only on the previous state and action taken (controlled Markov property). Formally, if $s_n$ and $a_n$ denote the state and action at time $n$, then

$P(s_{n+1} = s' \mid s_n = s, a_n = a, \ldots, s_0, a_0) = p(s, a, s'),$

where the transition probability of moving from state $s$ to state $s'$ given an action $a$ is given by $p(s, a, s')$. Note that we have $\sum_{s' \in S} p(s, a, s') = 1$. We incur a cost $c(s, a)$ when we pick action $a$ at state $s$. This can be calculated from the single-stage cost $c_n$ at time $n$ as

$c(s, a) = E\left[c_n \mid s_n = s, a_n = a\right].$

We assume the costs $c_n$ to be non-negative, uniformly bounded and mutually independent random variables. Our goal is to choose actions over time so that the total accumulated cost is minimized. How we choose our actions at a state and time is determined by a policy $\pi = \{\mu_0, \mu_1, \ldots\}$, where $\mu_n : S \to A$. By abuse of terminology we also call $\mu_n$ a policy. If $\mu_n = \mu$ for all $n$, then the policy is called stationary. A randomized policy is specified via a probability distribution over actions; in other words, it is a mapping from $S$ to the set of probability distributions over $A$, written as $\pi(s, \cdot)$. It can be seen that the states $\{s_n\}$ form a Markov chain under any stationary deterministic or randomized policy. We make the following assumption.

Assumption 1.

The Markov chain under any stationary randomized policy (SRP) is irreducible.

Since the set of states is finite, it follows from the assumption that the Markov chain is positive recurrent under any SRP.

Let us define the initial distribution over states as $\beta = (\beta(s), s \in S)$. Our aim is to find a policy $\pi$ that minimizes

$J(\pi) = \sum_{s \in S} \beta(s)\, V_\pi(s),$   (1)

where

$V_\pi(s) = E\left[\sum_{n=0}^{\infty} \gamma^n c(s_n, a_n) \,\middle|\, s_0 = s\right].$   (2)

Note that we discount the future costs by a factor $\gamma$ when summing. We have $0 < \gamma < 1$.
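Since the single-stage costs are uniformly bounded, say by a constant $\bar c$ (introduced here only for this remark), the sum in (2) is finite:

$V_\pi(s) \leq \sum_{n=0}^{\infty} \gamma^n\, \bar c = \frac{\bar c}{1 - \gamma} < \infty.$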

It is well known that the class of SRPs is complete (Chapter 6 of [11]), i.e., it is enough to find the policy that minimizes equation (1) among SRPs. From now on, we denote the set of SRPs as $\Pi$.

Let us denote the optimal cost by $V^*$,

$V^*(s) = \min_{\pi \in \Pi} V_\pi(s), \quad s \in S.$

For a discounted MDP, the optimal cost satisfies the Bellman equation ([11]) for all $s \in S$:

$V^*(s) = \min_{a \in A}\left[c(s, a) + \gamma \sum_{s' \in S} p(s, a, s')\, V^*(s')\right].$

We approximate the value function using a linear function approximation architecture. When the initial distribution $\beta = e_s$, the $s$-th unit vector (i.e., $\beta(s) = 1$ and $\beta(s') = 0$ for $s' \neq s$), we approximate $V_\pi(s)$ as $v^{\top} f_s$, where $v \in \mathbb{R}^d$ are parameters which determine the value function and $f_s$ is a fixed $d$-dimensional vector called the feature vector associated with state $s$.

We have

$J(\pi) \approx \sum_{s \in S} \beta(s)\, v^{\top} f_s.$   (3)

Let $\Phi$ be the $|S| \times d$ matrix whose $k$-th column ($k = 1, \ldots, d$) is $(f_s(k), s \in S)^{\top}$. The following is a standard requirement for showing convergence, see for instance [15].

Assumption 2.

The basis functions $(f_s(k), s \in S)^{\top}$, $k = 1, \ldots, d$, are linearly independent. Further, $d \leq |S|$ and $\Phi$ has full rank.
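As a small illustrative aside (not from the paper), Assumption 2 can be checked numerically for a candidate feature matrix:

```python
import numpy as np

def check_feature_matrix(Phi):
    """Check Assumption 2 for a feature matrix Phi of shape (|S|, d):
    d <= |S| and the columns of Phi are linearly independent (full rank)."""
    num_states, d = Phi.shape
    return d <= num_states and np.linalg.matrix_rank(Phi) == d

# Hypothetical example: 4 states, 2 features per state.
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [0.0, 1.0],
                [1.0, 2.0]])
assert check_feature_matrix(Phi)
```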

3 Nesterov’s Acceleration for Gradient Search

Consider the following problem:

$\min_{x \in \mathbb{R}^m} f(x),$

where $f$ is a smooth convex function and $x$ is the variable. This problem can be solved using the well-known gradient descent algorithm, which achieves a convergence rate of $O(1/k)$. In a seminal paper published in 1983, Nesterov proposed an accelerated gradient search algorithm which achieves a convergence rate of $O(1/k^2)$ (see [9], [10]). This rate is optimal among methods using only gradients of $f$ at consecutive iterates. Though originally for convex optimization problems, this method has recently been extended by Ghadimi and Lan [5] to non-convex and stochastic programming.

Nesterov's accelerated gradient search algorithm is simple to describe. We follow the version given in [13]. The updates are

$x_k = y_{k-1} - s\, \nabla f(y_{k-1}),$   (4)
$y_k = x_k + \frac{k-1}{k+2}\left(x_k - x_{k-1}\right),$   (5)

starting from $x_0 = y_0$. We denote the minimizer of $f$ by $x^*$ and write $f^* = f(x^*)$. Here $s$ is the step-size. Let $L$ be the Lipschitz constant for $\nabla f$. For any step-size $s$ with $s \leq 1/L$, this algorithm has a convergence rate of

$f(x_k) - f^* \leq O\!\left(\frac{\|x_0 - x^*\|^2}{s\, k^2}\right).$
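To make the updates concrete, here is a minimal Python sketch of (4)-(5) applied to a simple quadratic objective; the objective and step-size choice are our own illustration, not taken from the paper.

```python
import numpy as np

def nesterov_agd(grad_f, x0, step, num_iters):
    """Nesterov's accelerated gradient method, following updates (4)-(5)."""
    x_prev = np.array(x0, dtype=float)  # x_{k-1}
    y = x_prev.copy()                   # y_{k-1}
    for k in range(1, num_iters + 1):
        x = y - step * grad_f(y)                      # equation (4)
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)  # equation (5)
        x_prev = x
    return x_prev

# Example: minimize the smooth convex quadratic f(x) = 0.5 * x^T A x - b^T x.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad_f = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()                    # Lipschitz constant of grad f
x_min = nesterov_agd(grad_f, x0=[0.0, 0.0], step=1.0 / L, num_iters=200)
print(x_min, np.linalg.solve(A, b))                # the two should be close
```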

4 Actor-critic Algorithm

Actor-critic algorithms are well-known reinforcement learning algorithms. In these algorithms there are two components, called the actor and the critic. The critic evaluates the value function, while the actor improves the policy based on the evaluation by the critic. In our algorithm we use Temporal Difference (TD(0)) learning to evaluate the value function in the critic. TD learning (see [14]), originally proposed by Sutton, is model-free, i.e., it does not require knowledge of the transition probabilities. Similar to dynamic programming, it updates the estimate of the value function at a particular state from previously computed estimates at other states.

In the actor part we do gradient search in the policy parameter space. Both the actor and the critic use function approximation. Actor-critic algorithms with linear function approximation have been studied extensively in the literature, see for example [7], [2].

The actor-critic algorithm in [2] is for a constrained MDP. We use the same algorithm without considering the constraints and include Nesterov's acceleration in the policy gradient search. Instead of using two loops corresponding to the critic (inner) and actor (outer) updates, we use an algorithm with two time scales as in [2]. The algorithm we consider has two time scales corresponding to the actor and the critic. The step-size sequences $\{a_n\}$ (critic) and $\{b_n\}$ (actor) associated with the two time-scales satisfy the following assumption:

$\sum_n a_n = \sum_n b_n = \infty, \qquad \sum_n \left(a_n^2 + b_n^2\right) < \infty.$   (6)

The slower timescale update, using step-size $b_n$, corresponds to the outer loop and the faster one, using step-size $a_n$, to the inner loop. The step-size which goes to zero faster corresponds to the slower time-scale. Thus the step-sizes corresponding to the two time scales also satisfy

$\frac{b_n}{a_n} \to 0 \text{ as } n \to \infty.$   (7)

In our algorithm, the critic is updated on the faster time scale and the actor on the slower time scale. This is because the critic corresponds to the inner loop, as it evaluates the value function which we use in the gradient search for the actor update. The book by Borkar (Chapter 6 of [4]) has more details on two-time-scale algorithms.
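For concreteness, one admissible choice of step-size sequences satisfying (6) and (7) is sketched below; the particular exponents are our own illustration, not prescribed by the paper.

```python
def step_sizes(n):
    """Illustrative two-timescale step-sizes: a_n (critic, faster), b_n (actor, slower).

    Both sequences are non-summable but square-summable, and b_n / a_n -> 0,
    so conditions (6) and (7) hold.
    """
    a_n = 1.0 / (n + 1) ** 0.6   # critic step-size
    b_n = 1.0 / (n + 1)          # actor step-size
    return a_n, b_n
```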

4.1 Policy Gradient and Stochastic Approximation

We use policy gradient methods to search for better policies. In these methods the assumption is that the policy depends on a parameter $\theta$ (for example Boltzmann policies, see [3]) taking values in a compact subset $C$ of $\mathbb{R}^N$. When considering parameterized policies, our problem becomes that of finding the optimal parameter $\theta$. Let $\{\pi_\theta, \theta \in C\}$ be the parameterized class of SRPs. We assume the set $C$ to be convex and compact. We will now indicate the parameterized policy $\pi_\theta$ itself by $\theta$. We make the following standard assumption [2] for policy gradient methods.

Assumption 3.

For any $s \in S$ and $a \in A$, $\pi_\theta(s, a)$ is continuously differentiable in $\theta$.

We can see that the objective $J(\theta) := J(\pi_\theta)$ is then continuously differentiable in $\theta$.

We use Simultaneous Perturbation Stochastic Approximation (SPSA), developed by Spall ([12]), to estimate the gradient $\nabla_\theta J(\theta)$. While finite difference SA requires $2N$ simulations to estimate the gradient of a stochastic function, SPSA requires only 2 simulations. In this paper, we use the one-simulation variant of SPSA (see Section 10.2 of [4]).

Let $\Delta = (\Delta(1), \ldots, \Delta(N))^{\top}$ with $\Delta(i)$, $i = 1, \ldots, N$, being independent random variables taking values plus or minus 1 with equal probability 1/2. Let the policy be governed by the parameter $\theta + \delta\Delta$, with $\delta > 0$ being a positive constant. From Taylor's expansion we can see that

$\frac{J(\theta + \delta\Delta)}{\delta\,\Delta(i)} = \frac{J(\theta)}{\delta\,\Delta(i)} + \nabla_i J(\theta) + \sum_{j \neq i} \frac{\Delta(j)}{\Delta(i)}\, \nabla_j J(\theta) + o(\delta).$   (8)

Taking expectation, the first and the third terms become zero since $\Delta(i)$ is $\pm 1$ with equal probability and is independent of $\Delta(j)$, $j \neq i$. Hence we take $\frac{J(\theta + \delta\Delta)}{\delta\,\Delta(i)}$ to be the estimator of $\nabla_i J(\theta)$. Using the approximation of the value function we have

$\nabla_i J(\theta) \approx \frac{\sum_{s \in S} \beta(s)\, v^{\top} f_s}{\delta\,\Delta(i)},$   (9)

where $v$ is the critic parameter obtained under the policy governed by $\theta + \delta\Delta$.
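As an illustration, the one-simulation estimator in (9) can be written in a few lines of Python; the numerical values below are hypothetical.

```python
import numpy as np

def spsa_gradient_estimate(J_perturbed, delta, Delta):
    """One-simulation SPSA gradient estimate, as in equation (9).

    J_perturbed : scalar estimate of J(theta + delta * Delta); in the actor-critic
                  algorithm this is sum_s beta(s) * v^T f_s under the perturbed policy.
    delta       : small positive constant.
    Delta       : vector of i.i.d. +/-1 perturbations.
    """
    return J_perturbed / (delta * Delta)   # i-th component: J / (delta * Delta(i))

# Hypothetical usage.
N = 3
Delta = np.random.choice([-1.0, 1.0], size=N)
grad_hat = spsa_gradient_estimate(J_perturbed=2.5, delta=0.1, Delta=Delta)
```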

4.2 Temporal Difference Evaluation

TD prediction is well known in reinforcement learning; for an introduction one can refer to Chapters 6 and 7 of the book by Sutton and Barto [14]. We assume the state sequence $\{s_n\}$ in the algorithm to be governed by the parameterized policy. At time step $n$ the policy is determined by the parameter $\theta_n + \delta\Delta_n$, where $\Delta_n$ is the perturbation at time step $n$. Let $\{a_n\}$ be the sequence of actions chosen according to the policy and $v_n$ be the $n$th update of the weight parameter. Recall that $c_n$ is the random cost incurred at time $n$. We have

$V(s_n) = E\left[c_n + \gamma\, V(s_{n+1}) \mid s_n\right].$

The TD(0) error at time $n$ is given by $c_n + \gamma V(s_{n+1}) - V(s_n)$. We use the TD(0) algorithm to update the estimate of the value function, i.e., to get the new estimate we add to the previous estimate the TD error multiplied by the step-size. Since we use linear function approximation to approximate $V$, we also approximate the TD error, which can now be written as

$\delta_n = c_n + \gamma\, v_n^{\top} f_{s_{n+1}} - v_n^{\top} f_{s_n}.$   (10)
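The following short Python sketch shows this update with linear features; the function names are our own, and the critic step anticipates recursion (11) of the next subsection.

```python
import numpy as np

def td0_update(v, f_s, f_s_next, cost, gamma, a_n):
    """TD(0) critic update with linear function approximation.

    Computes the TD error of equation (10) and moves the weight vector v
    in its direction with step-size a_n.
    """
    td_error = cost + gamma * np.dot(v, f_s_next) - np.dot(v, f_s)  # equation (10)
    v_new = v + a_n * td_error * f_s                                # critic step
    return v_new, td_error
```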

4.3 Accelerated Actor-Critic Algorithm

Now we can give the complete actor-critic algorithm with Nesterov's acceleration. For $n \geq 0$ and $i = 1, \ldots, N$,

$v_{n+1} = v_n + a_n\, \delta_n\, f_{s_n},$   (11)

$\theta_{n+1}(i) = \Gamma_i\!\left(y_n(i) - b_n\, \frac{\sum_{s \in S} \beta(s)\, v_n^{\top} f_s}{\delta\, \Delta_n(i)}\right),$   (12)

$y_{n+1} = \theta_{n+1} + \frac{n}{n+3}\left(\theta_{n+1} - \theta_n\right),$   (13)

where $\Gamma = (\Gamma_1, \ldots, \Gamma_N)^{\top}$ denotes the projection onto the compact convex set $C$. Equation (11) corresponds to the critic, where $\delta_n$ is the TD error defined in equation (10). Equations (12) and (13) are the actor update, where the accelerated gradient search corresponding to equations (4) and (5) of Section 3 is performed in the parameterized policy space. Note that the gradient estimate in (9) is used in equation (12).

As mentioned before, $\beta$ corresponds to the initial distribution of states. If we are interested in the performance of the algorithm starting in some subset of states, we let only those states take positive values. In our experiments on queueing networks, we start the system with empty queues, so $\beta$ assigns value 1 to this state and 0 to all other states.
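The following Python sketch assembles the recursions as reconstructed above; the environment interface, the policy function, the box projection used for $\Gamma$, and the step-size choices are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def accelerated_actor_critic(env, policy_action, features, beta_states, N, d,
                             gamma=0.9, delta=0.1, num_iters=50000,
                             theta_box=(-10.0, 10.0)):
    """Sketch of the accelerated actor-critic recursions (11)-(13).

    env           : object with reset() -> state and step(action) -> (next_state, cost)
    policy_action : function (state, theta) -> action (a parameterized randomized policy)
    features      : function mapping a state to its d-dimensional feature vector f_s
    beta_states   : states given positive weight by the initial distribution beta
    """
    v = np.zeros(d)          # critic weights v_n
    theta = np.zeros(N)      # actor parameter theta_n
    y = theta.copy()         # Nesterov auxiliary sequence y_n
    s = env.reset()

    for n in range(num_iters):
        a_n = 1.0 / (n + 1) ** 0.6        # faster (critic) step-size
        b_n = 1.0 / (n + 1)               # slower (actor) step-size
        Delta = np.random.choice([-1.0, 1.0], size=N)

        # Act with the perturbed parameter theta_n + delta * Delta_n.
        action = policy_action(s, theta + delta * Delta)
        s_next, cost = env.step(action)

        # Critic: TD(0) update, equations (10)-(11).
        td_error = cost + gamma * v @ features(s_next) - v @ features(s)
        v = v + a_n * td_error * features(s)

        # Actor: one-simulation SPSA estimate (9) with Nesterov momentum, (12)-(13).
        J_hat = np.mean([v @ features(sb) for sb in beta_states])  # proxy for sum_s beta(s) v^T f_s
        grad_hat = J_hat / (delta * Delta)
        theta_new = np.clip(y - b_n * grad_hat, *theta_box)        # projection Gamma onto a box C
        y = theta_new + n / (n + 3.0) * (theta_new - theta)
        theta = theta_new
        s = s_next

    return theta, v
```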

5 Convergence Analysis

The convergence analysis of the accelerated actor-critic algorithm is straightforward. From equation (7) we have $b_n = o(a_n)$. The theory of multi-timescale stochastic approximation (Chapter 6 of [4]) allows us to treat $\theta_n$ and $\Delta_n$ as constants $\theta$ and $\Delta$ while analyzing equation (11). The policies can therefore be treated as time-invariant or stationary policies.

Let $P_\theta$ be the transition probability matrix with elements

$p_\theta(s, s') = \sum_{a \in A} \pi_\theta(s, a)\, p(s, a, s').$

Let $d_\theta = (d_\theta(s), s \in S)^{\top}$ denote the stationary distribution under policy $\theta$. Further let $D_\theta$ denote the diagonal matrix with diagonal elements $d_\theta(s)$, $s \in S$. We denote the Euclidean norm by $\|\cdot\|$.

Define $T_\theta : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ as

$T_\theta(J)(s) = \sum_{a \in A} \pi_\theta(s, a)\left[c(s, a) + \gamma \sum_{s' \in S} p(s, a, s')\, J(s')\right]$   (14)

for all $s \in S$. Let $\Phi v$ denote the column vector $(v^{\top} f_s,\ s \in S)^{\top}$.

We have the following theorem from [2].

Theorem 1.

Under Assumptions 1-2, with $\theta_n \equiv \theta$ and $\Delta_n \equiv \Delta$ (for given $\theta$ and $\Delta$), the iterates $v_n$ governed by recursion (11) satisfy $v_n \to \bar v$ with probability one. The quantity $\bar v$ is the unique solution to

$\Phi^{\top} D_{\theta + \delta\Delta}\, \Phi\, \bar v = \Phi^{\top} D_{\theta + \delta\Delta}\, T_{\theta + \delta\Delta}(\Phi \bar v).$   (15)

In particular the following is satisfied:

$\Pi_{\theta + \delta\Delta}\, T_{\theta + \delta\Delta}(\Phi \bar v) = \Phi \bar v,$   (16)

where $\Pi_{\theta + \delta\Delta} = \Phi\left(\Phi^{\top} D_{\theta + \delta\Delta} \Phi\right)^{-1} \Phi^{\top} D_{\theta + \delta\Delta}$ denotes the projection onto the span of the features with respect to the weighted norm induced by $D_{\theta + \delta\Delta}$.
Proof.

Since the temporal difference error in (10) and the critic recursion in equation (11) are the same as the corresponding equations in [2], the proof follows from the proof of Theorem 1 in [2]. ∎

Lemma 1.

Under Assumptions 1-2, the solution $\bar v_\theta$ to equation (15) is continuously differentiable in $\theta$.

Proof.

Refer to the proof of Lemma 1 in [2]. ∎

By abuse of notation, we refer to $\sum_{s \in S} \beta(s)\, \bar v_\theta^{\top} f_s$ as $J(\theta)$. Now consider the recursion (13). Let $\bar\Gamma(\cdot)$ be a vector field on the set $C$ in which $\theta$ takes values. Let

$\bar\Gamma(h(\theta)) = \lim_{\eta \downarrow 0}\left(\frac{\Gamma(\theta + \eta\, h(\theta)) - \theta}{\eta}\right)$

for any continuous $h : C \to \mathbb{R}^N$. Let

$K = \{\theta \in C : \bar\Gamma(-\nabla_\theta J(\theta)) = 0\},$

and for $\eta > 0$ let $K^\eta = \{\theta \in C : \|\theta - \theta'\| < \eta \text{ for some } \theta' \in K\}$ denote its $\eta$-neighborhood.
We now have the main theorem, which states that the policy parameter converges to a local minimum of the function $J$. The proof of this theorem uses the result on Nesterov's accelerated method in [13], where a differential equation is used to model the algorithm.

Theorem 2.

Let Assumptions 1-2 hold, and assume further that the iterates $\theta_n$ are stable. Then given $\eta > 0$, there exists $\delta_0 > 0$ such that for all $\delta \in (0, \delta_0]$, the iterates $\theta_n$ obtained according to equations (12), (13) satisfy $\theta_n \to K^\eta$ as $n \to \infty$, with probability one.

Proof.

From the theory of two-time-scale stochastic approximation (Chapter 6 of [4]), it can be seen that while considering the slower recursion for $\theta_n$, the faster recursion for $v_n$ converges to $\bar v_{\theta_n + \delta\Delta_n}$ (Theorem 1). Hence the recursion in equation (12) can be replaced by

$\theta_{n+1}(i) = \Gamma_i\!\left(y_n(i) - b_n\left(\frac{J(\theta_n + \delta\Delta_n)}{\delta\,\Delta_n(i)} + \xi_n\right)\right),$   (17)

where $\xi_n \to 0$ as $n \to \infty$. By assumption the recursion is stable.

We have, as $\delta \to 0$,

$\frac{J(\theta_n + \delta\Delta_n)}{\delta\,\Delta_n(i)} = \frac{J(\theta_n)}{\delta\,\Delta_n(i)} + \nabla_i J(\theta_n) + \sum_{j \neq i} \frac{\Delta_n(j)}{\Delta_n(i)}\, \nabla_j J(\theta_n) + o(\delta),$   (18)

where the $o(\delta)$ term vanishes as $\delta \to 0$ by Taylor's expansion. The first and third terms vanish in expectation since $\Delta_n(i)$ is $\pm 1$ with equal probability and is independent of $\Delta_n(j)$, $j \neq i$. This is a one-simulation Simultaneous Perturbation Stochastic Approximation (SPSA) scheme for estimating the gradient, requiring only one function evaluation (see for example page 120 of [4]).

Next we have the ODE for the Nesterov scheme, i.e., equations (12) and (13) (see equations 1 and 3 of [13]):

$\ddot{\theta}(t) + \frac{3}{t}\,\dot{\theta}(t) = \bar\Gamma\!\left(-\nabla_\theta J(\theta(t))\right).$   (19)

This can be converted to a first-order ODE by taking $Z = \dot\theta$. Now we have $\dot\theta = Z$ and $\dot Z = -\frac{3}{t}\, Z + \bar\Gamma\!\left(-\nabla_\theta J(\theta)\right)$. Letting $W = (\theta^{\top}, Z^{\top})^{\top}$, we obtain a first-order system in $W$.

The asymptotically stable equilibria of the ODE form the set where $Z = 0$ and $\bar\Gamma(-\nabla_\theta J(\theta)) = 0$, which corresponds to the set $K$ within the set $C$. These can be seen to correspond to the local minima of the function $J$.

Now let $E(t)$ be defined according to

$E(t) = t^2\left(J(\theta(t)) - J(\theta^*)\right) + 2\left\|\theta(t) + \frac{t}{2}\,\dot\theta(t) - \theta^*\right\|^2,$

where $\theta^*$ is an $N$-dimensional vector in $K$. It can be seen that $E(t) \geq 0$, and the second term is similar to the Lyapunov function used in Theorem 2 of [2]. Then, corresponding to the ODE (19), we have $\frac{dE(t)}{dt} \leq 0$, with equality only at the equilibria. Thus $E$ is a strict Lyapunov function for the ODE (19). The claim follows from Theorem 1, pp. 339 of [6]. ∎

6 Experimental Results

We consider a simple problem of scheduling jobs into four queues with exponential service times and the FCFS queueing discipline. Jobs arrive at the scheduler according to a Poisson process and are instantaneously scheduled to one of the four queues. The service rates at the queues are different and the scheduler does not know them. It schedules the jobs based on the queue lengths.

Figure 1: Scheduling jobs between four parallel asymmetric queues

At time $n$, the action $a_n$ is the decision of which queue to send the arriving job to. Given an action, the queue lengths form a Markov process. Thus the state of the MDP is the vector of the four queue lengths. The single-stage cost $c_n$ is the average queue length of the four queues.

We need to choose actions (schedule jobs) such that the long-run discounted value of this cost is minimized.
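To make the setup concrete, the following Python sketch simulates such a system between consecutive arrivals using competing exponential clocks; the arrival and service rates shown are placeholders, not the values used in the experiments.

```python
import numpy as np

class FourQueueScheduler:
    """Sketch of the scheduling MDP: Poisson arrivals, four parallel exponential servers (FCFS).

    The state is the vector of queue lengths, an action sends the arriving job to one
    of the four queues, and the cost is the average queue length.
    """
    def __init__(self, arrival_rate=1.0, service_rates=(0.4, 0.3, 0.2, 0.1), seed=0):
        self.lam = arrival_rate
        self.mu = np.array(service_rates)
        self.rng = np.random.default_rng(seed)
        self.queues = np.zeros(4, dtype=int)

    def reset(self):
        self.queues[:] = 0            # start with empty queues
        return self.queues.copy()

    def step(self, action):
        """Send the arriving job to queue `action`, then simulate until the next arrival."""
        self.queues[action] += 1
        t_next_arrival = self.rng.exponential(1.0 / self.lam)
        t = 0.0
        while True:
            busy = self.queues > 0
            total_rate = self.mu[busy].sum()
            if total_rate == 0.0:
                break
            dt = self.rng.exponential(1.0 / total_rate)
            if t + dt > t_next_arrival:
                break
            t += dt
            # A departure occurs; choose the finishing server proportionally to its rate.
            probs = np.where(busy, self.mu, 0.0) / total_rate
            i = self.rng.choice(4, p=probs)
            self.queues[i] -= 1
        cost = self.queues.mean()     # average queue length
        return self.queues.copy(), cost
```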

We used a threshold on the queue lengths, and the feature used in the function approximation is as follows:

(20)

We tested the actor-critic and the accelerated actor-critic algorithms on this problem. The algorithms were run for 50000 iterations. The results, averaged over 100 runs, are shown in Table 1. We compared the two algorithms using the total mean service time and the mean queue length for different discount factors. The accelerated actor-critic algorithm performed significantly better, i.e., it had lower mean and variance for both of these quantities for all discount factors, as can be seen from the table.

                   Accelerated Actor-Critic                    Actor-Critic
Discount factor    Mean service time   Mean queue length       Mean service time   Mean queue length
0.9                5.32 ± 0.424        15.11 ± 0.094           7.01 ± 3.096        22.86 ± 4.822
0.8                5.61 ± 0.398        15.57 ± 0.061           7.11 ± 4.31         19.89 ± 4.864
0.7                5.59 ± 0.337        15.65 ± 0.054           7.11 ± 3.695        19.67 ± 4.16
0.6                5.69 ± 0.386        15.7 ± 0.062            7.35 ± 4.267        20.37 ± 8.469
0.5                5.76 ± 0.397        15.72 ± 0.071           7.86 ± 4.024        20.41 ± 7.741
Table 1: Expected service time and queue length (mean ± standard deviation over 100 runs) for different discount factors

7 Conclusion

We have used Nesterov's accelerated gradient method to search the parameterized policy space and have shown convergence using the ODE method. The resulting accelerated actor-critic algorithm is seen to have much better performance than the one without acceleration. We plan to use Nesterov's method in other RL algorithms and to test it in other applications.

References

  • [1] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA., 1996.
  • [2] S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. Syst. Control Lett., 59:760–766, 2010.
  • [3] S. Bhatnagar and K. Lakshmanan. An online actor-critic algorithm with function approximation for constrained Markov decision processes. J. Optim. Theory Appl., 153(3):688–708, 2012.
  • [4] V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Cambridge University Press, 2008.
  • [5] S. Ghadimi and G. Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Prog, 156(1):59–99, 2016.
  • [6] M. W. Hirsch. Convergent activation dynamics in continuous time networks. Neural Netw., 2:331–349, 1989.
  • [7] V. R. Konda and J. N. Tsitsiklis. On actor-critic algorithms. SIAM J. Control Optim., 42(4):1143–1166, 2003.
  • [8] D. Meyer, R. Degenne, A. Omrane, and H. Shen. Accelerated gradient temporal difference learning. In IEEE ADPRL, 2014.
  • [9] Y. Nesterov. A method for solving a convex programming problem with convergence rate $O(1/k^2)$. Soviet Math. Dokl., 27(2):372–376, 1983.
  • [10] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Kluwer Academic Press, Boston, MA, 2004.
  • [11] M. L. Puterman. Markov Decision Processes : Discrete Stochastic Dynamic Programming. John Wiley, New York, 1994.
  • [12] J. C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Trans. Autom. Control, 37(3):332–341, 1992.
  • [13] W. Su, S. Boyd, and E. J. Candes. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. J. Mach. Learn. Res., 17:1–43, 2016.
  • [14] R. S. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
  • [15] J.N. Tsitsiklis and B. Van Roy. An analysis of temporal difference learning with function approximation. IEEE Trans. Autom. Control, 42(5):674–690, 1997.