Target-Based Temporal-Difference Learning

Donghwan Lee and Niao He

D. Lee is with the Coordinated Science Laboratory (CSL), University of Illinois at Urbana-Champaign, IL 61801, USA (donghwan@illinois.edu). N. He is with the Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, IL 61801, USA (niaohe@illinois.edu).
Abstract

The use of target networks has been a popular and key component of recent deep Q-learning algorithms for reinforcement learning, yet little is known about it from the theory side. In this work, we introduce a new family of target-based temporal difference (TD) learning algorithms and provide a theoretical analysis of their convergence. In contrast to standard TD-learning, target-based TD algorithms maintain two separate learning parameters: the target variable and the online variable. In particular, we introduce three members of the family, called averaging TD, double TD, and periodic TD, where the target variable is updated in an averaging, symmetric, or periodic fashion, respectively, mirroring the techniques used in deep Q-learning practice.

We establish asymptotic convergence analyses for both averaging TD and double TD and a finite-sample analysis for periodic TD. In addition, we provide simulation results showing the potentially superior convergence of these target-based TD algorithms compared to standard TD-learning. While this work focuses on linear function approximation and the policy evaluation setting, we consider it a meaningful step towards the theoretical understanding of deep Q-learning variants with target networks.

1 Introduction

Deep Q-learning [Mnih et al., 2015] has recently attracted significant attention in the reinforcement learning (RL) community for outperforming humans in several challenging tasks. Besides the effective use of deep neural networks as function approximators, the success of deep Q-learning also owes much to the use of a separate target network for calculating the target values at each iteration. In practice, using target networks has proven to substantially improve the performance of Q-learning algorithms, and it has gradually been adopted as a standard technique in modern implementations of Q-learning.

To be more specific, the update of Q-learning with a target network can be viewed as follows:

$$\theta_{k+1} = \theta_k + \alpha_k \left( r_k + \gamma \max_{a'} Q_{\theta_k'}(s_k', a') - Q_{\theta_k}(s_k, a_k) \right) \nabla_{\theta} Q_{\theta_k}(s_k, a_k),$$

where $\alpha_k > 0$ is a step-size, $\theta_k$ is the online variable, and $\theta_k'$ is the target variable. Here the state-action value function $Q_{\theta}(s, a)$ is parameterized by $\theta$. The update of the online variable resembles a stochastic gradient descent step. The term $r_k$ stands for the intermediate reward of taking action $a_k$ in state $s_k$, and $r_k + \gamma \max_{a'} Q_{\theta_k'}(s_k', a')$ stands for the target value under the target variable $\theta_k'$. When the target variable is set to be the same as the online variable at each iteration, this reduces to the standard Q-learning algorithm [Watkins & Dayan, 1992], which is known to be unstable with nonlinear function approximation. Several choices of target networks have been proposed in the literature to overcome such instability: (i) periodic update, i.e., the target variable is copied from the online variable every fixed number of steps, as used in deep Q-learning [Mnih et al., 2015, Wang et al., 2016, Mnih et al., 2016, Gu et al., 2016]; (ii) symmetric update, i.e., the target variable is updated symmetrically with the online variable; this was first introduced in double Q-learning [Hasselt, 2010, Van Hasselt et al., 2016]; and (iii) Polyak averaging update, i.e., the target variable takes a weighted average over the past values of the online variable; this is used in deep deterministic policy gradient [Lillicrap et al., 2015, Heess et al., 2015], for example. In the following, we simply refer to these as target-based Q-learning algorithms.
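To make the three strategies concrete, the following minimal sketch (our illustrative helper functions with hypothetical parameters such as `copy_period`, `tau`, and `alpha`; not the implementation of any cited algorithm) shows how the target parameters evolve relative to the online parameters in each case.

```python
def periodic_target(theta, theta_target, k, copy_period=100):
    # (i) Periodic: copy the online parameters into the target every `copy_period` steps.
    return theta.copy() if k % copy_period == 0 else theta_target

def polyak_target(theta, theta_target, tau=0.01):
    # (iii) Polyak averaging: the target is an exponentially weighted average
    # of past online parameters; `tau` controls the tracking speed.
    return (1.0 - tau) * theta_target + tau * theta

def symmetric_step(theta_a, theta_b, grad_fn, alpha=0.1):
    # (ii) Symmetric (double-learning style): both parameter vectors take the
    # same kind of step, each using the other as its target.
    theta_a_new = theta_a - alpha * grad_fn(theta_a, theta_b)
    theta_b_new = theta_b - alpha * grad_fn(theta_b, theta_a)
    return theta_a_new, theta_b_new
```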

While the integration of Q-learning with target networks has proven successful in practice, its theoretical convergence analysis remains a largely open yet challenging question. As an intermediate step towards the answer, in this work we first study target-based temporal difference (TD) learning algorithms and establish their convergence analysis. TD algorithms [Sutton, 1988, Sutton et al., 2009a, b] are designed to evaluate a given policy and are fundamental building blocks of many RL algorithms. Comprehensive surveys and comparisons among TD-based policy evaluation algorithms can be found in Dann et al. [2014]. Motivated by the target-based Q-learning algorithms [Mnih et al., 2015, Wang et al., 2016], we introduce a target variable into the TD framework and develop a family of target-based TD algorithms with different updating rules for the target variable. In particular, we propose three members of the family, averaging TD, double TD, and periodic TD, where the target variable is updated in an averaging, symmetric, or periodic fashion, respectively. Meanwhile, similar to the standard TD-learning, the online variable takes stochastic gradient steps on the Bellman residual loss function while the target variable is frozen. As the target variable changes slowly compared to the online variable, target-based TD algorithms are likely to improve the stability of learning, especially when large neural networks are used, although this work focuses mainly on TD with linear function approximation.

Theoretically, we prove the asymptotic convergence of both averaging TD and double TD. We also provide a finite-sample analysis for the periodic TD algorithm. Practically, we run simulations showing the superior convergence of the proposed target-based TD algorithms compared to standard TD-learning. In particular, our empirical case studies demonstrate that the target-based TD-learning algorithms outperform standard TD-learning in the long run, with smaller errors and lower variances, despite their slower convergence at the very beginning. Moreover, our analysis reveals an important connection between TD-learning and target-based TD-learning. We consider this work a meaningful step towards the theoretical understanding of deep Q-learning with general nonlinear function approximation.

Related work. The first target-based reinforcement learning algorithm was proposed in [Mnih et al., 2015] for policy optimization problems with nonlinear function approximation, where only empirical results were given. To the best of our knowledge, target-based reinforcement learning for policy evaluation has not been specifically studied before. A somewhat related family of algorithms is the gradient TD (GTD) learning algorithms [Sutton et al., 2009a, b, Mahadevan et al., 2014, Dai et al., 2017], which minimize the projected Bellman residual through primal-dual algorithms. The GTD algorithms share some similarities with the proposed target-based TD-learning algorithms in that they also maintain two separate variables, the primal and dual variables, to minimize the objective. Apart from this connection, the GTD algorithms are fundamentally different from the averaging TD and double TD algorithms that we propose. The proposed periodic TD algorithm can be viewed as approximately solving least-squares problems across cycles, making it closely related to two families of algorithms, the least-squares TD (LSTD) learning algorithms [Bertsekas, 1995, Bradtke & Barto, 1996] and least-squares policy evaluation (LSPE) [Bertsekas & Yu, 2009, Yu & Bertsekas, 2009]. However, they are also distinct from one another in terms of the subproblems and subroutines used in the algorithms. In particular, periodic TD executes stochastic gradient descent steps, while LSTD uses the least-squares parameter estimation method to minimize the projected Bellman residual. On the other hand, LSPE directly solves the subproblems without successive projected Bellman operator iterations. Moreover, the proposed periodic TD algorithm enjoys a simple finite-sample analysis based on existing results on stochastic approximation.

2 Preliminaries

In this section, we briefly review the basics of the TD-learning algorithm with linear function approximation. We first list the notation that will be used throughout the paper.

Notation

The following notation is adopted: for a closed convex set $\mathcal{X} \subseteq \mathbb{R}^n$, $\Pi_{\mathcal{X}}(x)$ is the projection of $x$ onto the set $\mathcal{X}$, i.e., $\Pi_{\mathcal{X}}(x) := \arg\min_{y \in \mathcal{X}} \|x - y\|_2$; $\mathrm{diam}(\mathcal{X}) := \sup_{x, y \in \mathcal{X}} \|x - y\|_2$ is the diameter of the set $\mathcal{X}$; $\|x\|_D := \sqrt{x^{\top} D x}$ for any positive-definite matrix $D$; and $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum eigenvalues of a symmetric matrix $A$, respectively.

2.1 Markov Decision Process (MDP)

In general, a (discounted) Markov decision process is characterized by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ is a finite state space, $\mathcal{A}$ is a finite action space, $P(s' \mid s, a)$ represents the (unknown) state transition probability from state $s$ to $s'$ given action $a$, $r(s, a)$ is a uniformly bounded stochastic reward, and $\gamma \in (0, 1)$ is the discount factor. If action $a$ is selected at the current state $s$, then the state transits to $s'$ with probability $P(s' \mid s, a)$ and incurs a random reward $r(s, a)$ with expectation $R(s, a)$. A stochastic policy $\pi(a \mid s)$ is a distribution representing the probability of taking action $a$ at state $s$; $P^{\pi}$ denotes the transition matrix whose $(s, s')$ entry is $\sum_{a \in \mathcal{A}} \pi(a \mid s) P(s' \mid s, a)$, and $d^{\pi}$ denotes the stationary distribution of the state under policy $\pi$, i.e., $(d^{\pi})^{\top} P^{\pi} = (d^{\pi})^{\top}$. The following assumption is standard in the literature.

Assumption 1.

We assume that $d^{\pi}(s) > 0$ for all $s \in \mathcal{S}$.

We also define $r^{\pi}(s)$ and $R^{\pi}(s)$ as the stochastic reward and its expectation given the policy $\pi$ and the current state $s$, i.e.,

$$R^{\pi}(s) := \mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ R(s, a) \right].$$

The infinite-horizon discounted value function given policy $\pi$ is

$$V^{\pi}(s) := \mathbb{E}\left[ \sum_{k=0}^{\infty} \gamma^{k} r^{\pi}(s_k) \,\middle|\, s_0 = s \right], \qquad s \in \mathcal{S},$$

where $\mathbb{E}$ stands for the expectation taken with respect to the state-action-reward trajectories generated under $\pi$.
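For later reference, the value function can be written in vector form as the unique solution of the Bellman equation, a standard identity stated here in the notation above:

$$V^{\pi} = R^{\pi} + \gamma P^{\pi} V^{\pi} \quad \Longleftrightarrow \quad V^{\pi} = (I - \gamma P^{\pi})^{-1} R^{\pi},$$

where $V^{\pi}, R^{\pi} \in \mathbb{R}^{|\mathcal{S}|}$ stack the values $V^{\pi}(s)$ and $R^{\pi}(s)$, and the inverse exists because $\gamma < 1$ and $P^{\pi}$ is a stochastic matrix.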

2.2 Linear Function Approximation

Given pre-selected basis (or feature) functions $\phi_1, \ldots, \phi_m : \mathcal{S} \to \mathbb{R}$, the linear approximation of the value function is defined as

$$V_{\theta}(s) := \phi(s)^{\top} \theta, \qquad s \in \mathcal{S}.$$

Here $m$ is a positive integer, $\theta \in \mathbb{R}^m$ is the weight vector, and $\phi(s) := [\phi_1(s), \ldots, \phi_m(s)]^{\top} \in \mathbb{R}^m$ is the feature vector. Stacking the feature vectors row-wise gives the feature matrix $\Phi \in \mathbb{R}^{|\mathcal{S}| \times m}$. It is standard to assume that the columns of $\Phi$ do not have any redundancy up to linear combinations. We make the following assumption.

Assumption 2.

$\Phi$ has full column rank.

2.3 Reinforcement Learning (RL) Problem

In this paper, the goal of RL with linear function approximation is to find the weight vector $\theta$ such that $V_{\theta}$ approximates the true value function $V^{\pi}$. This is typically done by minimizing the mean-square Bellman error loss function [Sutton et al., 2009a]

$$\min_{\theta \in \mathbb{R}^m} \; J(\theta) := \frac{1}{2} \left\| R^{\pi} + \gamma P^{\pi} \Phi \theta - \Phi \theta \right\|_{D}^{2}, \qquad (1)$$

where $D$ is defined as the diagonal matrix with diagonal entries equal to the stationary state distribution $d^{\pi}$ under the policy $\pi$. Note that due to Assumption 1, $D \succ 0$. In the typical RL setting, the model is unknown, and only samples of the state-action-reward tuples are observed. Therefore, the problem can only be solved in a stochastic way using the observations. In order to formally analyze the sample complexity, we consider the following assumption on the samples.

Assumption 3.

There exists a Sampling Oracle (SO) that takes the state-action pair $(s, a)$ as input and generates a new state $s'$ with probability $P(s' \mid s, a)$ and a stochastic reward $r(s, a)$.

This oracle model allows us to draw i.i.d. samples of the state-action-reward-next-state tuples across iterations. While such an i.i.d. assumption may not necessarily hold in practice, it is commonly adopted for the complexity analysis of RL algorithms in the literature [Sutton et al., 2009a, b, Bhandari et al., 2018, Dalal et al., 2018]. It is worth mentioning that several recent works also provide complexity analyses when only assuming Markovian noise or exponentially fast mixing properties of the samples [Antos et al., 2008, Bhandari et al., 2018, Dai et al., 2018, Srikant & Ying., 2019]. For the sake of simplicity, this paper focuses only on the i.i.d. sampling case.
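For concreteness, a minimal mock of the kind of sampling oracle assumed here can be written as follows (the tabular arrays `P_pi`, `R_pi`, and `d_pi` are hypothetical placeholders; the illustrative sketches in later sections reuse this interface):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_oracle(P_pi, R_pi, d_pi, noise_std=0.1):
    """Mock sampling oracle for a tabular MDP under a fixed policy.
    P_pi: (n, n) state-transition matrix under the policy; R_pi: (n,) expected
    rewards; d_pi: (n,) stationary distribution. All placeholders."""
    n = P_pi.shape[0]
    def sample_state():
        return rng.choice(n, p=d_pi)                     # s ~ d^pi (i.i.d. draws)
    def sample_transition(s):
        s_next = rng.choice(n, p=P_pi[s])                # s' ~ P^pi(. | s)
        r = R_pi[s] + noise_std * rng.standard_normal()  # stochastic reward
        return r, s_next
    return sample_state, sample_transition
```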

A naive idea for solving (1) is to apply stochastic gradient descent steps, $\theta_{k+1} = \theta_k - \alpha_k \hat{\nabla}_{\theta} J(\theta_k)$, where $\alpha_k > 0$ is a step-size and $\hat{\nabla}_{\theta} J(\theta_k)$ is a stochastic estimator of the true gradient of $J$ at $\theta_k$,

$$\nabla_{\theta} J(\theta) = \Phi^{\top} (\gamma P^{\pi} - I)^{\top} D \left( R^{\pi} + \gamma P^{\pi} \Phi \theta - \Phi \theta \right).$$

This approach is called the residual method [Baird, 1995]. Its main drawback is the double sampling issue [Bertsekas & Tsitsiklis, 1996, Lemma 6.10, pp. 364]: to obtain an unbiased stochastic estimate of the gradient, we need two independent samples of the next state for any given current state-action pair. This is possible under Assumption 3, but hardly implementable in most real applications.
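To see where double sampling enters, one can write the gradient sample-wise (a standard computation in the notation above):

$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim d^{\pi}} \Big[ \big( \gamma \,\mathbb{E}[\phi(s') \mid s] - \phi(s) \big) \, \mathbb{E}\big[ r + \gamma \phi(s')^{\top}\theta - \phi(s)^{\top}\theta \,\big|\, s \big] \Big],$$

so the integrand is a product of two conditional expectations over the next state $s'$. Replacing both factors by the same single sample of $s'$ yields an estimate of $\mathbb{E}[XY]$ rather than $\mathbb{E}[X]\,\mathbb{E}[Y]$, hence a biased gradient estimate; an unbiased one requires two independent draws of $s'$ from the same $s$.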

2.4 Standard TD-Learning

In the standard TD-learning [Sutton, 1988], the gradient term involving the next-state feature, $\gamma \phi(s')$, is omitted [Bertsekas & Tsitsiklis, 1996, pp. 369]. The resulting update rule is

$$\theta_{k+1} = \theta_k + \alpha_k \left( r_k + \gamma \phi(s_k')^{\top} \theta_k - \phi(s_k)^{\top} \theta_k \right) \phi(s_k).$$
While the algorithm avoids the double sampling problem and is simple to implement, a key issue here is that the stochastic gradient does not correspond to the true gradient of the loss function or of any other objective function, making the theoretical analysis rather subtle. Asymptotic convergence of TD-learning was given in the original paper [Sutton, 1988] for the tabular case and in Tsitsiklis & Van Roy [1997] for linear function approximation. Finite-time convergence analyses were recently established in Bhandari et al. [2018], Dalal et al. [2018], Srikant & Ying. [2019].

Remark.

TD-learning can also be interpreted as minimizing, at each iteration, the modified loss function

$$\ell(\theta; \theta') := \frac{1}{2} \left\| R^{\pi} + \gamma P^{\pi} \Phi \theta' - \Phi \theta \right\|_{D}^{2},$$

where $\theta$ stands for an online variable and $\theta'$ stands for a target variable. At each iteration $k$, it sets the target variable to the value of the current online variable, $\theta_k' = \theta_k$, and performs the stochastic gradient step

$$\theta_{k+1} = \theta_k + \alpha_k \left( r_k + \gamma \phi(s_k')^{\top} \theta_k' - \phi(s_k)^{\top} \theta_k \right) \phi(s_k).$$
A full algorithm is described in Algorithm 1.

1:Initialize $\theta_0$ randomly and set $\theta_0' = \theta_0$.
2:for iteration $k = 0, 1, \ldots$ do
3:     Sample $s_k \sim d^{\pi}$
4:     Sample $a_k \sim \pi(\cdot \mid s_k)$
5:     Sample $s_k'$ and $r_k$ from SO
6:     Let $\delta_k = r_k + \gamma \phi(s_k')^{\top} \theta_k' - \phi(s_k)^{\top} \theta_k$
7:     Update $\theta_{k+1} = \theta_k + \alpha_k \delta_k \phi(s_k)$
8:     Update $\theta_{k+1}' = \theta_{k+1}$
9:end for
Algorithm 1 Standard TD-Learning
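For illustration, the following minimal Python sketch implements Algorithm 1 with linear features under the i.i.d. sampling model; the interface (`phi`, `sample_state`, `sample_transition`) matches the hypothetical mock oracle sketched in Section 2.3, and the step-size schedule is an arbitrary choice, not a tuned one.

```python
import numpy as np

def standard_td(phi, sample_state, sample_transition, gamma, num_iters, alpha0=0.1):
    """Minimal sketch of Algorithm 1 (standard TD-learning with linear features).

    phi(s) -> numpy feature vector; sample_state() -> s ~ stationary distribution;
    sample_transition(s) -> (r, s_next) from the sampling oracle (hypothetical interface).
    """
    m = phi(sample_state()).shape[0]
    theta = np.zeros(m)          # online variable
    theta_tgt = theta.copy()     # target variable (copied every step here)
    for k in range(num_iters):
        alpha = alpha0 / (k + 1)                      # diminishing step-size
        s = sample_state()
        r, s_next = sample_transition(s)
        delta = r + gamma * phi(s_next) @ theta_tgt - phi(s) @ theta
        theta = theta + alpha * delta * phi(s)        # online update
        theta_tgt = theta.copy()                      # target tracks the online variable exactly
    return theta
```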

Inspired by the recent target-based deep Q-learning algorithms [Mnih et al., 2015], we consider several alternative updating rules for the target variable that are less aggressive and more general. This leads to the so-called target-based TD-learning. One potential benefit is that, by slowing down the update of the target variable, we can reduce the correlation with the target value, or the variance in the gradient estimation, which would then improve the stability of the algorithm. To this end, we introduce three variants of target-based TD: averaging TD, double TD, and periodic TD, each of which corresponds to a different strategy for the target update. In the following sections, we discuss these algorithms in detail and provide their convergence analysis.

3 Averaging TD-Learning (A-TD)

We start by integrating TD-learning with the Polyak averaging strategy for the target variable update. This is motivated by recent deep Q-learning [Mnih et al., 2015] and DDPG [Lillicrap et al., 2015]. It is worth pointing out that such a strategy has been commonly used in the deep Q-learning framework, but to the best of our knowledge a convergence analysis is still absent. Here we first study this strategy for TD-learning. The basic idea is to minimize the modified loss, $\ell(\theta; \theta')$, with respect to $\theta$ while freezing $\theta'$, and then enforce $\theta' \to \theta$ (target tracking). Roughly speaking, the tracking step, $\theta' \to \theta$, is executed together with the online update as

$$\theta_{k+1} = \theta_k + \alpha_k g_k(\theta_k, \theta_k'), \qquad \theta_{k+1}' = \theta_k' + \beta_k \left( \theta_k - \theta_k' \right), \qquad (2)$$

where $\beta_k > 0$ is the parameter used to adjust the update speed of the target variable and $g_k(\theta_k, \theta_k')$ is a stochastic estimation of $-\nabla_{\theta} \ell(\theta_k; \theta_k')$. A full algorithm is summarized in Algorithm 2, which is called averaging TD (A-TD).

Compared to the standard TD-learning in Algorithm 1, the only difference comes from the target variable update in the last line of Algorithm 2. In particular, if we set $\beta_k = 1$ and replace $\theta_k$ with $\theta_{k+1}$ in the second update, then it reduces to the standard TD-learning.

1:Initialize $\theta_0$ and $\theta_0'$ randomly.
2:for iteration $k = 0, 1, \ldots$ do
3:     Sample $s_k \sim d^{\pi}$
4:     Sample $a_k \sim \pi(\cdot \mid s_k)$
5:     Sample $s_k'$ and $r_k$ from SO
6:     Let $\delta_k = r_k + \gamma \phi(s_k')^{\top} \theta_k' - \phi(s_k)^{\top} \theta_k$
7:     Update $\theta_{k+1} = \theta_k + \alpha_k \delta_k \phi(s_k)$
8:     Update $\theta_{k+1}' = \theta_k' + \beta_k (\theta_k - \theta_k')$
9:end for
Algorithm 2 Averaging TD-Learning (A-TD)
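A minimal sketch of one A-TD step, reusing the hypothetical interface of the earlier sketches, is shown below; only the target update differs from Algorithm 1.

```python
def atd_step(theta, theta_tgt, s, r, s_next, phi, gamma, alpha, beta):
    # One A-TD iteration: TD step on the online variable with a frozen target,
    # followed by a slow Polyak-style tracking update of the target.
    delta = r + gamma * phi(s_next) @ theta_tgt - phi(s) @ theta
    theta_new = theta + alpha * delta * phi(s)                # online update
    theta_tgt_new = theta_tgt + beta * (theta - theta_tgt)    # slow target tracking
    return theta_new, theta_tgt_new
```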

Next, we prove its convergence under certain assumptions. The convergence proof is based on the ODE (ordinary differential equation) approach [Bhatnagar et al., 2012], which is a standard technique used in the RL literature [Sutton et al., 2009b]. In this approach, a stochastic recursive algorithm is converted to the corresponding ODE, and the stability of the ODE is used to prove convergence. The ODE associated with A-TD is as follows:

$$\dot{\theta}_t = \Phi^{\top} D \left( R^{\pi} + \gamma P^{\pi} \Phi \theta_t' - \Phi \theta_t \right), \qquad \dot{\theta}_t' = \theta_t - \theta_t'. \qquad (3)$$

We arrive at the following convergence result.

Theorem 1.

Assume that, with the fixed policy $\pi$, the Markov chain induced by $P^{\pi}$ is ergodic and the step-sizes satisfy

$$\sum_{k=0}^{\infty} \alpha_k = \infty, \quad \sum_{k=0}^{\infty} \alpha_k^2 < \infty, \quad \sum_{k=0}^{\infty} \beta_k = \infty, \quad \sum_{k=0}^{\infty} \beta_k^2 < \infty. \qquad (4)$$

Then, $\theta_k \to \theta^*$ and $\theta_k' \to \theta^*$ as $k \to \infty$ with probability one, where

$$\theta^* := \left( \Phi^{\top} D (I - \gamma P^{\pi}) \Phi \right)^{-1} \Phi^{\top} D R^{\pi}. \qquad (5)$$
Remark 1.

Note that $\theta^*$ in (5) is not identical to the optimal solution of the original problem in (1). Instead, it is the solution of the projected Bellman equation defined as

$$\Phi \theta^* = \Pi T^{\pi} (\Phi \theta^*),$$

where $\Pi T^{\pi}$ is the projected Bellman operator, with the Bellman operator defined by

$$T^{\pi}(V) := R^{\pi} + \gamma P^{\pi} V,$$

and $\Pi$ is the projection onto the range space of $\Phi$, denoted by $\mathcal{R}(\Phi) := \{ \Phi \theta : \theta \in \mathbb{R}^m \}$: $\Pi(x) := \arg\min_{y \in \mathcal{R}(\Phi)} \|x - y\|_D$. The projection can be performed by matrix multiplication: we write $\Pi(x) = \Pi x$, where $\Pi := \Phi (\Phi^{\top} D \Phi)^{-1} \Phi^{\top} D$.
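For completeness, the closed form in (5) follows from the projected Bellman equation by a short calculation: multiplying both sides on the left by $\Phi^{\top} D$ and using $\Phi^{\top} D \Pi = \Phi^{\top} D$ gives

$$\Phi^{\top} D \Phi \theta^* = \Phi^{\top} D \left( R^{\pi} + \gamma P^{\pi} \Phi \theta^* \right) \;\Longrightarrow\; \theta^* = \left( \Phi^{\top} D (I - \gamma P^{\pi}) \Phi \right)^{-1} \Phi^{\top} D R^{\pi},$$

where the inverse exists under Assumptions 1 and 2.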

Theorem 1 implies that both the target and online variables of A-TD converge to $\theta^*$, which solves the projected Bellman equation. The proof of Theorem 1 is provided in Appendix A and is based on the stochastic approximation approach, where we apply the Borkar and Meyn theorem [Bhatnagar et al., 2012, Appendix D]. Alternatively, the multi-time-scale stochastic approximation [Bhatnagar et al., 2012, pp. 23] can be used with slightly different step-size rules. Due to the introduction of the target variable updates, deriving a finite-sample analysis for the modified TD-learning is far from straightforward [Dalal et al., 2018, Bhandari et al., 2018]. We leave this for future investigation.

4 Double TD-Learning (D-TD)

In this section, we introduce a natural extension of A-TD, which has a more symmetric form. The algorithm mirrors double Q-learning [Van Hasselt et al., 2016], but with a notable difference. Here, both the online variable and the target variable are updated in the same fashion with their roles switched. To enforce $\theta' \to \theta$, we also add a correction term to the gradient update. The algorithm is summarized in Algorithm 3 and is referred to as double TD-learning (D-TD).

1:Initialize $\theta_0$ and $\theta_0'$ randomly.
2:for iteration $k = 0, 1, \ldots$ do
3:     Sample $s_k \sim d^{\pi}$
4:     Sample $a_k \sim \pi(\cdot \mid s_k)$
5:     Sample $s_k'$ and $r_k$ from SO
6:     Let $\delta_k = r_k + \gamma \phi(s_k')^{\top} \theta_k' - \phi(s_k)^{\top} \theta_k$
7:     Let $\delta_k' = r_k + \gamma \phi(s_k')^{\top} \theta_k - \phi(s_k)^{\top} \theta_k'$
8:     Update $\theta_{k+1} = \theta_k + \alpha_k \left( \delta_k \phi(s_k) + \eta (\theta_k' - \theta_k) \right)$
9:     Update $\theta_{k+1}' = \theta_k' + \alpha_k \left( \delta_k' \phi(s_k) + \eta (\theta_k - \theta_k') \right)$
10:end for
Algorithm 3 Double TD-Learning (D-TD)
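A minimal sketch of one D-TD step, consistent with the symmetric updates above, is given below. The correction coefficient `eta` is our notation for the linking term, and the sketch is an illustrative reading of Algorithm 3 rather than a definitive implementation.

```python
def dtd_step(theta, theta_tgt, s, r, s_next, phi, gamma, alpha, eta):
    # Both parameter vectors take the same kind of TD step with roles switched,
    # plus a correction term (coefficient `eta`) linking the two.
    delta = r + gamma * phi(s_next) @ theta_tgt - phi(s) @ theta
    delta_sym = r + gamma * phi(s_next) @ theta - phi(s) @ theta_tgt
    theta_new = theta + alpha * (delta * phi(s) + eta * (theta_tgt - theta))
    theta_tgt_new = theta_tgt + alpha * (delta_sym * phi(s) + eta * (theta - theta_tgt))
    return theta_new, theta_tgt_new
```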

We provide the convergence of D-TD with linear function approximation below. The proof is similar to that of Theorem 1 and is contained in Appendix B. Note that asymptotic convergence of double Q-learning has been established in Hasselt [2010] for the tabular case, but no result is yet known when linear function approximation is used.

Theorem 2.

Assume that, with the fixed policy $\pi$, the Markov chain induced by $P^{\pi}$ is ergodic and the step-sizes satisfy (4). Then, $\theta_k \to \theta^*$ and $\theta_k' \to \theta^*$ as $k \to \infty$ with probability one.

If D-TD uses identical initial values for the target and online variables, then the two iterates remain identical, i.e., $\theta_k = \theta_k'$ for all $k \geq 0$. In this case, D-TD is equivalent to TD-learning with a variant of the step-size rule. In practice, this issue can also be resolved by using different samples for each update, and the convergence result still applies to this variation of D-TD.

Compared to the corresponding form of double Q-learning [Hasselt, 2010], D-TD has two modifications. First, we introduce an additional correction term, $\eta (\theta_k' - \theta_k)$ or $\eta (\theta_k - \theta_k')$, linking the target and online parameters to enforce a smooth update of the target parameter. This covers double Q-learning as a special case by setting $\eta = 0$. Moreover, D-TD updates both the target and online parameters in parallel instead of randomly choosing one of them. This approach makes more efficient use of the samples at a slight sacrifice in computational cost. The convergence of the randomized version can be proved with a slight modification of the corresponding proof (see Appendix C for details).

5 Periodic TD-Learning (P-TD)

In this section, we propose another version of the target-based TD-learning algorithm, which more closely resembles the scheme used in deep Q-learning [Mnih et al., 2015]. It corresponds to a periodic update of the target variable, in contrast to the strategies of the previous sections. Roughly speaking, the target variable is only periodically updated as follows:

$$\theta_{k+1} = \theta_k - \alpha_k g_k(\theta_k; \theta_k'), \qquad \theta_{k+1}' = \begin{cases} \theta_{k+1}, & \text{if } (k+1) \bmod L = 0, \\ \theta_k', & \text{otherwise}, \end{cases}$$

where $g_k(\theta_k; \theta_k')$ is a stochastic estimator of the gradient $\nabla_{\theta} \ell(\theta_k; \theta_k')$ and $L$ is the update period. The standard TD-learning is recovered by setting $L = 1$.

Alternatively, one can interpret every $L$ iterations of the update as contributing to minimizing the modified loss function

$$\ell(\theta; \theta') = \frac{1}{2} \left\| R^{\pi} + \gamma P^{\pi} \Phi \theta' - \Phi \theta \right\|_{D}^{2}$$

while freezing the target variable $\theta'$. In other words, the above subproblem is approximately solved through $L$ steps of stochastic gradient descent within each period. We formally present the algorithmic idea in a more general way in Algorithm 4 and call it the periodic TD algorithm (P-TD).

1:Initialize $\theta_0$ randomly and set $\theta_0' = \theta_0$.
2:Set a positive integer $T$ and the subroutine iteration steps, $N_i$, for $i = 0, 1, \ldots, T-1$.
3:Set step-sizes, $\{\alpha_k\}_{k \geq 0}$, for the subproblem.
4:for iteration $i = 0, 1, \ldots, T-1$ do
5:     Update $\theta_{i+1} = \mathrm{SGD}(\theta_i', \{\alpha_k\}, N_i)$
such that $\mathbb{E}\left[ \ell(\theta_{i+1}; \theta_i') - \ell(\hat{\theta}_{i+1}; \theta_i') \right] \leq \varepsilon_i$,
where $\hat{\theta}_{i+1} := \arg\min_{\theta \in \mathbb{R}^m} \ell(\theta; \theta_i')$.
6:     Update $\theta_{i+1}' = \theta_{i+1}$
7:end for
8:Return $\theta_T$
9:
10:procedure SGD($\theta'$, $\{\alpha_k\}$, $N$)
11: Subroutine: stochastic gradient descent steps
12:     Initialize $\theta_0 = \theta'$.
13:     for iteration $k = 0, 1, \ldots, N-1$ do
14:         Sample $s_k \sim d^{\pi}$
15:         Sample $a_k \sim \pi(\cdot \mid s_k)$
16:         Sample $s_k'$ and $r_k$ from SO
17:         Let $\delta_k = r_k + \gamma \phi(s_k')^{\top} \theta' - \phi(s_k)^{\top} \theta_k$
18:         Update $\theta_{k+1} = \theta_k + \alpha_k \delta_k \phi(s_k)$
19:     end for
20:     Return $\theta_N$
21:end procedure
Algorithm 4 Periodic TD-Learning (P-TD)

For P-TD, given a fixed target variable $\theta_i'$, the subroutine, SGD, runs stochastic gradient descent steps $N_i$ times in order to approximately solve the subproblem $\min_{\theta} \ell(\theta; \theta_i')$, for which an unbiased stochastic gradient estimator is obtained from the observations. Upon approximately solving the subproblem after $N_i$ steps, the target variable is replaced with the new online variable. This makes P-TD similar to the original deep Q-learning [Mnih et al., 2015], as the target update is periodic when $N_i$ is set to a constant. Moreover, P-TD is also closely related to the TD-learning in Algorithm 1. In particular, if $N_i = 1$ for all $i$, then P-TD corresponds to the standard TD.
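A compact sketch of the P-TD outer/inner loop structure is given below, using the same hypothetical environment interface as the earlier sketches; the loop counts and step-size schedule are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def periodic_td(phi, sample_state, sample_transition, gamma,
                num_cycles=50, inner_steps=200, alpha0=0.5):
    # Outer loop: freeze the target, approximately solve the subproblem by SGD,
    # then replace the target with the resulting online variable.
    m = phi(sample_state()).shape[0]
    theta_tgt = np.zeros(m)
    for i in range(num_cycles):
        theta = theta_tgt.copy()              # warm-start the subproblem
        for k in range(inner_steps):          # SGD subroutine on the frozen target
            alpha = alpha0 / (k + 1)
            s = sample_state()
            r, s_next = sample_transition(s)
            delta = r + gamma * phi(s_next) @ theta_tgt - phi(s) @ theta
            theta = theta + alpha * delta * phi(s)
        theta_tgt = theta.copy()              # periodic target replacement
    return theta_tgt
```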

Based on the standard results in Bottou et al. [2018, Theorem 4.7], the subroutine converges to the optimal solution of the subproblem. However, since we only apply a finite number of steps of SGD, the subroutine returns an approximate solution with a certain error bound in expectation.

In the following, we establish a finite-time convergence analysis of P-TD. We first present a result bounding the expected error of the solution, together with a high-probability bound.

Theorem 3.

Consider Algorithm 4. We have

Moreover,

The second result implies that P-TD achieves an $\epsilon$-optimal solution with arbitrarily high probability by appropriately controlling the error bounds of the subproblems. In particular, if the subproblem error bounds are held uniform over all cycles, then

One can see that the error is essentially decomposed into two terms: one from the approximation errors induced by the SGD procedures, and one from the contraction property of solving the subproblems, which can also be viewed as solving the projected Bellman equations. Full details of the proof can be found in Appendix D.

To further analyze the approximation error from the SGD procedure, existing convergence results in Bottou et al. [2018, Theorem 4.7] can be applied with slight modifications.

Proposition 1.

Suppose that the SGD method in Algorithm 4 is run with a stepsize sequence such that, for all ,

for some and such that

Then, for all , the expected optimality gap satisfies

(6)

where

and

Proposition 1 ensures that the subroutine iterate converges to the solution of the subproblem at a rate of $\mathcal{O}(1/k)$ in expectation. Combining Proposition 1 with Theorem 3, the overall sample complexity is derived in the following proposition. We defer the proofs to Appendix E and Appendix F.

Proposition 2 (Sample Complexity).

An $\epsilon$-optimal solution is obtained by Algorithm 4 with the number of SO calls at most as given below, where

and the remaining constants are defined in Proposition 1.

As a result, the overall sample complexity of P-TD is bounded as stated in Proposition 2. As mentioned earlier, non-asymptotic analyses of even the standard TD algorithm have only recently been developed in a few works [Dalal et al., 2018, Bhandari et al., 2018, Srikant & Ying., 2019]. Our sample complexity result for P-TD, which is a target-based TD algorithm, matches that developed in Bhandari et al. [2018] with a similar decaying step-size sequence, up to a log factor. Yet, our analysis is much simpler and builds directly upon existing results on stochastic gradient descent. Moreover, from the computational perspective, although P-TD runs in two loops, it is as efficient as the standard TD.

P-TD also shares some similarity with the least-squares temporal difference algorithm (LSTD, Bradtke & Barto [1996]) and its stochastic approximation variant (fLSTD-SA, Prashanth et al. [2014]). LSTD is a batch algorithm that directly estimates the optimal solution in (5) from samples, which can also be viewed as exactly computing the solution of a least-squares subproblem. fLSTD-SA alleviates the computational burden by applying stochastic gradient descent (the same as the TD update) to solve the subproblems. The key difference between fLSTD-SA and P-TD lies in that the objective of P-TD is adjusted by the target variable across cycles. Lastly, P-TD is also closely related to, and can be viewed as a special case of, least-squares fitted Q-iteration [Antos et al., 2008]. Both of them solve similar least-squares problems using target values. However, for P-TD, we are able to directly apply stochastic gradient descent to solve the subproblems to near-optimality.

6 Simulations

In this section, we provide preliminary numerical simulation results showing the efficiency of the proposed target-based TD algorithms. We stress that the main goal of this paper is to introduce the family of target-based TD algorithms with linear function approximation and provide theoretical convergence analyses, as an intermediate step towards the understanding of target-based Q-learning algorithms. Hence, our numerical experiments focus on testing the convergence of these target-based algorithms, their sensitivity to the tuning parameters, and the effect of using target variables as opposed to the standard TD-learning.

Figure 1: Error evolution of the standard TD-learning (blue line) and of the proposed A-TD (red line), shown over two iteration intervals in panels (a) and (b). The shaded areas depict empirical variances obtained over several realizations.
Figure 2: Error evolution of the standard TD-learning (blue line) and of the proposed D-TD (red line), shown over two iteration intervals in panels (a) and (b). The shaded areas depict empirical variances obtained over several realizations.
Figure 3: Error evolution of the standard TD-learning (blue line) and of the proposed P-TD (red line), shown over two iteration intervals in panels (a) and (b). The shaded areas depict empirical variances obtained over several realizations.

6.1 Convergence of A-TD and D-TD

In this example, we consider a small MDP in which the rewards are drawn from a uniform distribution and depend on the policy and the current state; the action space and policy are not explicitly defined here. For the linear function approximation, we consider a feature vector built from radial basis functions [Geramifard et al., 2013].
Simulation results are given in Figure 1, which illustrates the error evolution of the standard TD-learning (blue line) and of the proposed A-TD (red line). The design parameters of both approaches are set by trial and error to demonstrate roughly the best performance of each. Additional simulation results in Appendix G provide comparisons for several different parameters. Figure 1(b) provides the results in the same plot over a different iteration interval. The results suggest that although A-TD initially shows slower convergence, it eventually converges faster than the standard TD with lower variances after a certain number of iterations. With the same setting, comparative results for D-TD are given in Figure 2.
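As an illustration of the kind of radial-basis-function features used in these experiments, the following sketch constructs Gaussian RBF features over a small set of state indices; the centers and width here are hypothetical choices, not the paper's values.

```python
import numpy as np

def rbf_features(s, centers, width=2.0):
    """Gaussian radial-basis features for a scalar state index s.
    The centers and width are illustrative, not the paper's values."""
    return np.exp(-(float(s) - centers) ** 2 / (2.0 * width ** 2))

# Example: 5 evenly spaced centers over hypothetical state indices 0..9.
centers = np.linspace(0.0, 9.0, 5)
phi = lambda s: rbf_features(s, centers)
```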

6.2 Convergence of P-TD

In this section, we provide an empirical comparative analysis of P-TD and the standard TD-learning. The convergence results of both approaches are quite sensitive to the design parameters, such as the step-size rules and the total number of iterations of the subproblem. We consider the same example as above but with an alternative linear function approximation whose feature vector consists of radial basis functions.
From our own experience, applying the same step-size rule to every subproblem yields unstable fluctuations of the error in some cases. For details, the reader is referred to Appendix G, which provides comparisons with different design parameters. The results motivate us to apply an adaptive step-size rule for the subproblem of P-TD, so that smaller and smaller step-sizes are applied as the outer-loop step increases. With this adaptive step-size rule, the corresponding simulation results are given in Figure 3, where P-TD outperforms the standard TD whose step-size is best tuned for comparison. Figure 3(b) provides the results of Figure 3 over a later interval, which clearly demonstrates that the error of P-TD is smaller with lower variances.

7 Conclusion

In this paper, we propose a new family of target-based TD-learning algorithms, including averaging TD, double TD, and periodic TD, and provide a theoretical analysis of their convergence. The proposed TD algorithms are largely inspired by the recent success of deep Q-learning using target networks and mirror several of the practical strategies used for updating target networks in the literature. Simulation results show that integrating target variables into TD-learning can also help stabilize the convergence by reducing the variance of, and correlations with, the target. Our convergence analysis provides some theoretical understanding of target-based TD algorithms. We hope this will also shed some light on the theoretical analysis of target-based Q-learning algorithms and nonlinear RL frameworks.

Possible future topics include (1) developing a finite-time convergence analysis for A-TD and D-TD; (2) extending the analysis of target-based TD-learning to the Q-learning case with or without function approximation; and (3) generalizing the target-based framework to other variations of TD-learning and Q-learning algorithms.

References

  • Antos et al. [2008] Antos, A., Szepesvári, C., and Munos, R. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, Apr 2008.
  • Antsaklis & Michel [2007] Antsaklis, P. J. and Michel, A. N. A linear systems primer. 2007.
  • Baird [1995] Baird, L. Residual algorithms: reinforcement learning with function approximation. In Machine Learning Proceedings, pp. 30–37. 1995.
  • Bertsekas [1995] Bertsekas, D. P. Dynamic programming and optimal control. Athena Scientific Belmont, MA, 1995.
  • Bertsekas & Tsitsiklis [1996] Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-dynamic programming. Athena Scientific Belmont, MA, 1996.
  • Bertsekas & Yu [2009] Bertsekas, D. P. and Yu, H. Projected equation methods for approximate solution of large linear systems. Journal of Computational and Applied Mathematics, 227(1):27–50, 2009.
  • Bhandari et al. [2018] Bhandari, J., Russo, D., and Singal, R. A finite time analysis of temporal difference learning with linear function approximation. arXiv preprint arXiv:1806.02450, 2018.
  • Bhatnagar et al. [2012] Bhatnagar, S., Prasad, H. L., and Prashanth, L. A. Stochastic recursive algorithms for optimization: simultaneous perturbation methods, volume 434. Springer, 2012.
  • Bottou et al. [2018] Bottou, L., Curtis, F. E., and Nocedal, J. Optimization methods for large-scale machine learning. Siam Review, 60(2):223–311, 2018.
  • Boyd & Vandenberghe [2004] Boyd, S. and Vandenberghe, L. Convex optimization. Cambridge University Press, 2004.
  • Bradtke & Barto [1996] Bradtke, S. J. and Barto, A. G. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22(1):33–57, Mar 1996.
  • Bubeck et al. [2015] Bubeck, S. et al. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015.
  • Chen [1995] Chen, C.-T. Linear System Theory and Design. Oxford University Press, Inc., 1995.
  • Dai et al. [2017] Dai, B., He, N., Pan, Y., Boots, B., and Song, L. Learning from conditional distributions via dual embeddings. In Artificial Intelligence and Statistics, pp. 1458–1467, 2017.
  • Dai et al. [2018] Dai, B., Shaw, A., Li, L., Xiao, L., He, N., Liu, Z., Chen, J., and Song, L. SBEED: Convergent reinforcement learning with nonlinear function approximation. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1125–1134. PMLR, 10–15 Jul 2018.
  • Dalal et al. [2018] Dalal, G., Szörényi, B., Thoppe, G., and Mannor, S. Finite sample analyses for TD(0) with function approximation. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Dann et al. [2014] Dann, C., Neumann, G., and Peters, J. Policy evaluation with temporal differences: A survey and comparison. Journal of Machine Learning Research, 15(1):809–883, 2014.
  • Geramifard et al. [2013] Geramifard, A., Walsh, T. J., Tellex, S., Chowdhary, G., Roy, N., How, J. P., et al. A tutorial on linear function approximators for dynamic programming and reinforcement learning. Foundations and Trends® in Machine Learning, 6(4):375–451, 2013.
  • Gu et al. [2016] Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. Continuous deep q-learning with model-based acceleration. In International Conference on Machine Learning, pp. 2829–2838, 2016.
  • Hasselt [2010] Hasselt, H. V. Double Q-learning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010.
  • Heess et al. [2015] Heess, N., Hunt, J. J., Lillicrap, T. P., and Silver, D. Memory-based control with recurrent neural networks. arXiv preprint arXiv:1512.04455, 2015.
  • Lillicrap et al. [2015] Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  • Mahadevan et al. [2014] Mahadevan, S., Liu, B., Thomas, P., Dabney, W., Giguere, S., Jacek, N., Gemp, I., and Liu, J. Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces. arXiv preprint arXiv:1405.6757, 2014.
  • Mnih et al. [2015] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  • Mnih et al. [2016] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928–1937, 2016.
  • Prashanth et al. [2014] Prashanth, L. A., Korda, N., and Munos, R. Fast lstd using stochastic approximation: Finite time analysis and application to traffic control. In Calders, T., Esposito, F., Hüllermeier, E., and Meo, R. (eds.), Machine Learning and Knowledge Discovery in Databases, pp. 66–81. Springer Berlin Heidelberg, 2014.
  • Srikant & Ying. [2019] Srikant, R. and Ying., L. Finite-time error bounds for linear stochastic approximation and TD learning. arXiv preprint arXiv:1902.00923, 2019.
  • Sutton [1988] Sutton, R. S. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9–44, 1988.
  • Sutton et al. [2009a] Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., and Wiewiora, E. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 993–1000, 2009a.
  • Sutton et al. [2009b] Sutton, R. S., Maei, H. R., and Szepesvári, C. A convergent temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in neural information processing systems, pp. 1609–1616, 2009b.
  • Tsitsiklis & Van Roy [1997] Tsitsiklis, J. N. and Van Roy, B. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
  • Van Hasselt et al. [2016] Van Hasselt, H., Guez, A., and Silver, D. Deep reinforcement learning with double Q-learning. In AAAI, volume 2, pp.  5. Phoenix, AZ, 2016.
  • Wang et al. [2016] Wang, Z., Schaul, T., Hessel, M., Hasselt, H., Lanctot, M., and Freitas, N. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning, pp. 1995–2003, 2016.
  • Watkins & Dayan [1992] Watkins, C. J. C. H. and Dayan, P. Q-learning. Machine learning, 8(3-4):279–292, 1992.
  • Yu & Bertsekas [2009] Yu, H. and Bertsekas, D. P. Convergence results for some temporal difference methods based on least squares. IEEE Transactions on Automatic Control, 54(7):1515–1531, 2009.

Appendix

Appendix A Proof of Theorem 1

The proof is based on the analysis of the general stochastic recursion

$$x_{k+1} = x_k + \alpha_k \left( f(x_k) + \varepsilon_{k+1} \right),$$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ is a mapping and $\varepsilon_{k+1}$ is a noise term. If only asymptotic convergence is our concern, the ODE (ordinary differential equation) approach [Bhatnagar et al., 2012] is a convenient tool. Before starting the main proof, we review essential knowledge from linear system theory [Chen, 1995].

Definition 1 (Chen [1995, Definition 5.1]).

The ODE $\dot{x}_t = A x_t$, $x_0 \in \mathbb{R}^n$, where $A \in \mathbb{R}^{n \times n}$ and $x_t \in \mathbb{R}^n$, is asymptotically stable if, for every finite initial state $x_0$, $x_t \to 0$ as $t \to \infty$.

Definition 2 (Hurwitz matrix).

A complex square matrix $A$ is Hurwitz if all eigenvalues of $A$ have strictly negative real parts.

Lemma 1 (Chen [1995, Theorem 5.4]).

The ODE $\dot{x}_t = A x_t$, $x_0 \in \mathbb{R}^n$, is asymptotically stable if and only if $A$ is Hurwitz.

Lemma 2 (Lyapunov theorem [Chen, 1995, Theorem 5.5]).

A complex square matrix $A$ is Hurwitz if and only if there exists a positive definite matrix $M \succ 0$ such that $A^{H} M + M A \prec 0$, where $A^{H}$ is the complex conjugate transpose of $A$.

Lemma 3 (Schur complement [Boyd & Vandenberghe, 2004, pp. 651]).

For any Hermitian block matrix $\begin{bmatrix} A & B \\ B^{H} & C \end{bmatrix}$, we have

$$\begin{bmatrix} A & B \\ B^{H} & C \end{bmatrix} \prec 0 \quad \Longleftrightarrow \quad A \prec 0 \ \text{ and } \ C - B^{H} A^{-1} B \prec 0.$$
The convergence of many RL algorithms relies on ODE approaches [Bhatnagar et al., 2012]. One of the most popular approaches is based on the Borkar and Meyn theorem [Bhatnagar et al., 2012, Appendix D]. The basic technical assumptions are given below.

Assumption 4.

  1. The mapping $f : \mathbb{R}^n \to \mathbb{R}^n$ is globally Lipschitz continuous and there exists a function $f_{\infty} : \mathbb{R}^n \to \mathbb{R}^n$ such that

$$\lim_{c \to \infty} \frac{f(cx)}{c} = f_{\infty}(x), \qquad \forall x \in \mathbb{R}^n.$$

  2. The origin in $\mathbb{R}^n$ is an asymptotically stable equilibrium for the ODE $\dot{x}_t = f_{\infty}(x_t)$.

  3. There exists a unique globally asymptotically stable equilibrium $x^{e} \in \mathbb{R}^n$ for the ODE $\dot{x}_t = f(x_t)$, i.e., $x_t \to x^{e}$ as $t \to \infty$.

  4. The sequence $\{\varepsilon_k\}_{k \geq 0}$ with $\mathcal{F}_k := \sigma(x_0, \varepsilon_1, \ldots, \varepsilon_k)$ is a martingale difference sequence, i.e., $\mathbb{E}[\varepsilon_{k+1} \mid \mathcal{F}_k] = 0$. In addition, there exists a constant $C_0 < \infty$ such that for any initial $x_0 \in \mathbb{R}^n$, we have $\mathbb{E}[\|\varepsilon_{k+1}\|^2 \mid \mathcal{F}_k] \leq C_0 (1 + \|x_k\|^2)$ for all $k \geq 0$.

  5. The step-sizes satisfy (4).

Lemma 4 (Borkar and Meyn theorem).

Suppose that Assumption 4 holds. Then, for any initial $x_0 \in \mathbb{R}^n$, $\sup_{k \geq 0} \|x_k\| < \infty$ with probability one. In addition, $x_k \to x^{e}$ as $k \to \infty$ with probability one.

Based on these technical results, we are now in a position to prove Theorem 1.

Proof of Theorem 1: The ODE (3) can be expressed as a linear system with an affine term,

$$\dot{x}_t = A x_t + b, \qquad x_t := \begin{bmatrix} \theta_t \\ \theta_t' \end{bmatrix},$$

where

$$A := \begin{bmatrix} -\Phi^{\top} D \Phi & \gamma \Phi^{\top} D P^{\pi} \Phi \\ I & -I \end{bmatrix}, \qquad b := \begin{bmatrix} \Phi^{\top} D R^{\pi} \\ 0 \end{bmatrix}.$$

Therefore, the mapping $f$, defined by $f(x) := A x + b$, is globally Lipschitz continuous. Moreover, we have

$$f_{\infty}(x) := \lim_{c \to \infty} \frac{f(cx)}{c} = A x.$$
Therefore, the first condition in Assumption 4 holds. To meet the second condition of Assumption 4, by Lemma 1, it suffices to prove that $A$ is Hurwitz. The reason is explained below. Suppose that $A$ is Hurwitz. If $A$ is Hurwitz, it is invertible, and there exists a unique equilibrium point $x^{e}$ for the ODE such that $A x^{e} + b = 0$, i.e., $x^{e} = -A^{-1} b$. Due to the constant term $b$, it is not immediately clear whether such an equilibrium point, $x^{e}$, is globally asymptotically stable. From [Antsaklis & Michel, 2007, pp. 143], by letting $y_t := x_t - x^{e}$, the ODE can be transformed to $\dot{y}_t = A y_t$, for which the origin is the globally asymptotically stable equilibrium point since $A$ is Hurwitz. Therefore, $x^{e}$ is the globally asymptotically stable equilibrium point of $\dot{x}_t = A x_t + b$, and the third condition of Assumption 4 is satisfied. It therefore remains to prove that $A$ is Hurwitz. We first provide a simple analysis and prove that there exists a threshold such that, for all parameter values above it, $A$ is Hurwitz. To this end, we use the property of the similarity transformation [Antsaklis & Michel, 2007, pp. 88], i.e., $A$ is Hurwitz if and only if $T A T^{-1}$ is Hurwitz for any invertible matrix $T$. Choosing a suitable invertible matrix $T$, one gets

To prove that this matrix is Hurwitz, we apply Lemma 2 and check the sufficient condition

(7)

To check the above matrix inequality, note that $\Phi^{\top} D (\gamma P^{\pi} - I) \Phi$ is negative definite [Bertsekas & Tsitsiklis, 1996, Lemma 6.6, pp. 300]. By using the Schur complement in Lemma 3, (7) holds if and only if