Target-Based Temporal-Difference Learning
Abstract
The use of target networks has been a popular and key component of recent deep Q-learning algorithms for reinforcement learning, yet little is known from the theory side. In this work, we introduce a new family of target-based temporal difference (TD) learning algorithms and provide theoretical analysis of their convergence. In contrast to the standard TD-learning, target-based TD algorithms maintain two separate learning parameters: the target variable and the online variable. In particular, we introduce three members of the family, called the averaging TD, double TD, and periodic TD, where the target variable is updated in an averaging, symmetric, or periodic fashion, mirroring the techniques used in deep Q-learning practice.
We establish asymptotic convergence analyses for both averaging TD and double TD and a finite-sample analysis for periodic TD. In addition, we provide simulation results showing the potentially superior convergence of these target-based TD algorithms compared to the standard TD-learning. While this work focuses on linear function approximation and the policy evaluation setting, we consider it a meaningful step towards the theoretical understanding of deep Q-learning variants with target networks.
1 Introduction
Deep Q-learning [Mnih et al., 2015] has recently attracted significant attention in the reinforcement learning (RL) community for outperforming humans in several challenging tasks. Besides the effective use of deep neural networks as function approximators, the success of deep Q-learning also relies critically on the use of a separate target network for calculating target values at each iteration. In practice, using target networks has been shown to substantially improve the performance of Q-learning algorithms, and it has gradually been adopted as a standard technique in modern implementations of Q-learning.
To be more specific, the update of Q-learning with a target network can be written as follows:
$\theta_{t+1} = \theta_t + \alpha_t \big(r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta'_t) - Q(s_t, a_t; \theta_t)\big)\, \nabla_\theta Q(s_t, a_t; \theta_t),$
where $\alpha_t > 0$ is a step-size, $\theta_t$ is the online variable, and $\theta'_t$ is the target variable. Here the state-action value function $Q(s, a; \theta)$ is parameterized by $\theta$. The update of the online variable resembles a stochastic gradient descent step. The term $r_t$ stands for the immediate reward of taking action $a_t$ in state $s_t$, and $r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta'_t)$ stands for the target value under the target variable $\theta'_t$. When the target variable is set to be the same as the online variable at each iteration, this reduces to the standard Q-learning algorithm [Watkins & Dayan, 1992], which is known to be unstable with nonlinear function approximation. Several choices of target networks have been proposed in the literature to overcome such instability: (i) periodic update, i.e., the target variable is copied from the online variable every fixed number of steps, as used in deep Q-learning [Mnih et al., 2015, Wang et al., 2016, Mnih et al., 2016, Gu et al., 2016]; (ii) symmetric update, i.e., the target variable is updated symmetrically with the online variable; this was first introduced in double Q-learning [Hasselt, 2010, Van Hasselt et al., 2016]; and (iii) Polyak averaging update, i.e., the target variable takes a weighted average over the past values of the online variable; this is used in deep deterministic policy gradient [Lillicrap et al., 2015, Heess et al., 2015], for example. In the following, we refer to these simply as target-based Q-learning algorithms.
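To make these three strategies concrete, the following sketch (our own illustration, not the paper's pseudocode; the vector dimension, the `tau` and `period` values, and the stand-in learning update are arbitrary assumptions) shows how a target parameter vector can be maintained next to an online one under periodic copying and Polyak averaging, with the symmetric scheme noted in a comment.

```python
import numpy as np

# Illustrative sketch (not the paper's pseudocode): two common ways to maintain
# a target parameter vector alongside an online parameter vector.
rng = np.random.default_rng(0)
dim = 4
theta = rng.normal(size=dim)      # online variable
theta_target = theta.copy()       # target variable

def periodic_update(theta, theta_target, step, period=100):
    """Copy the online variable into the target every `period` steps (deep Q-learning style)."""
    return theta.copy() if step % period == 0 else theta_target

def polyak_update(theta, theta_target, tau=0.01):
    """Move the target a small fraction toward the online variable (DDPG style)."""
    return (1.0 - tau) * theta_target + tau * theta

# Symmetric (double-Q style) schemes instead keep two estimators and swap their
# online/target roles across updates rather than copying one into the other.
for step in range(1, 1001):
    theta = theta + 0.01 * rng.normal(size=dim)          # stand-in for a learning update
    theta_target = polyak_update(theta, theta_target)    # or periodic_update(theta, theta_target, step)

print(np.linalg.norm(theta - theta_target))
```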
While the integration of Q-learning with target networks has turned out to be successful in practice, its theoretical convergence analysis remains a largely open and challenging question. As an intermediate step towards the answer, in this work we first study target-based temporal difference (TD) learning algorithms and establish their convergence analysis. TD algorithms [Sutton, 1988, Sutton et al., 2009a, b] are designed to evaluate a given policy and are the fundamental building blocks of many RL algorithms. Comprehensive surveys and comparisons among TD-based policy evaluation algorithms can be found in Dann et al. [2014]. Motivated by the target-based Q-learning algorithms [Mnih et al., 2015, Wang et al., 2016], we introduce a target variable into the TD framework and develop a family of target-based TD algorithms with different updating rules for the target variable. In particular, we propose three members of the family, the averaging TD, double TD, and periodic TD, where the target variable is updated in an averaging, symmetric, or periodic fashion, respectively. Meanwhile, similar to the standard TD-learning, the online variable takes stochastic gradient steps on the Bellman residual loss function while freezing the target variable. As the target variable changes slowly compared to the online variable, target-based TD algorithms tend to improve the stability of learning, especially when large neural networks are used, although this work focuses mainly on TD with linear function approximation.
Theoretically, we prove the asymptotic convergence of both averaging TD and double TD. We also provide a finite-sample analysis for the periodic TD algorithm. Practically, we run simulations showing superior convergence of the proposed target-based TD algorithms compared to the standard TD-learning. In particular, our empirical case studies demonstrate that the target-based TD-learning algorithms outperform the standard TD-learning in the long run, with smaller errors and lower variances, despite their slower convergence at the very beginning. Moreover, our analysis reveals an important connection between the TD-learning and the target-based TD-learning. We consider this work a meaningful step towards the theoretical understanding of deep Q-learning with general nonlinear function approximation.
Related work. The first target-based reinforcement learning algorithm was proposed in [Mnih et al., 2015] for policy optimization problems with nonlinear function approximation, where only empirical results were given. To the best of our knowledge, target-based reinforcement learning for policy evaluation has not been specifically studied before. A somewhat related family of algorithms is the gradient TD (GTD) learning algorithms [Sutton et al., 2009a, b, Mahadevan et al., 2014, Dai et al., 2017], which minimize the projected Bellman residual through primal-dual algorithms. The GTD algorithms share some similarities with the proposed target-based TD-learning algorithms in that they also maintain two separate variables, the primal and dual variables, to minimize the objective. Apart from this connection, the GTD algorithms are fundamentally different from the averaging TD and double TD algorithms that we propose. The proposed periodic TD algorithm can be viewed as approximately solving least-squares problems across cycles, making it closely related to two families of algorithms, the least-squares TD (LSTD) learning algorithms [Bertsekas, 1995, Bradtke & Barto, 1996] and the least-squares policy evaluation (LSPE) algorithms [Bertsekas & Yu, 2009, Yu & Bertsekas, 2009]. But they are also distinct from each other in terms of the subproblems and subroutines used in the algorithms. In particular, the periodic TD executes stochastic gradient descent steps, while LSTD uses least-squares parameter estimation to minimize the projected Bellman residual. On the other hand, LSPE directly solves the subproblems without successive projected Bellman operator iterations. Moreover, the proposed periodic TD algorithm enjoys a simple finite-sample analysis based on existing results on stochastic approximation.
2 Preliminaries
In this section, we briefly review the basics of the TD-learning algorithm with linear function approximation. We first list the notation that will be used throughout the paper.
Notation
The following notation is adopted: for a closed convex set $\mathcal{C} \subseteq \mathbb{R}^n$, $\Pi_{\mathcal{C}}(x) := \arg\min_{y \in \mathcal{C}} \lVert x - y \rVert_2$ is the projection of $x$ onto the set $\mathcal{C}$; $\mathrm{diam}(\mathcal{C}) := \sup_{x, y \in \mathcal{C}} \lVert x - y \rVert_2$ is the diameter of the set $\mathcal{C}$; $\lVert x \rVert_D := \sqrt{x^\top D x}$ for any positive-definite matrix $D$; and $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum eigenvalues of a symmetric matrix $A$, respectively.
2.1 Markov Decision Process (MDP)
In general, a (discounted) Markov decision process is characterized by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ is a finite state space, $\mathcal{A}$ is a finite action space, $P(s' \mid s, a)$ represents the (unknown) state transition probability from state $s$ to $s'$ given action $a$, $r(s, a)$ is a uniformly bounded stochastic reward, and $\gamma \in (0, 1)$ is the discount factor. If action $a$ is selected in the current state $s$, then the state transits to $s'$ with probability $P(s' \mid s, a)$ and a random reward $r(s, a)$ with expectation $R(s, a)$ is incurred. A stochastic policy $\pi(a \mid s)$ is a distribution representing the probability of taking action $a$ in state $s$; $P^\pi$ denotes the transition matrix whose $(s, s')$ entry is $\sum_{a \in \mathcal{A}} \pi(a \mid s) P(s' \mid s, a)$, and $\mu^\pi$ denotes the stationary distribution of the state under policy $\pi$, i.e., $(\mu^\pi)^\top P^\pi = (\mu^\pi)^\top$. The following assumption is standard in the literature.
Assumption 1.
We assume that $\mu^\pi(s) > 0$ for all $s \in \mathcal{S}$.
We also define $r^\pi(s)$ and $R^\pi(s)$ as the stochastic reward and its expectation given the policy $\pi$ and the current state $s$, i.e., $R^\pi(s) := \mathbb{E}[r^\pi(s)] = \sum_{a \in \mathcal{A}} \pi(a \mid s) R(s, a)$.
The infinite-horizon discounted value function given policy $\pi$ is
$V^\pi(s) := \mathbb{E}\Big[\sum_{k=0}^{\infty} \gamma^k r^\pi(s_k) \,\Big|\, s_0 = s\Big],$
where $s \in \mathcal{S}$ and the expectation is taken with respect to the state-action-reward trajectories generated under $\pi$.
2.2 Linear Function Approximation
Given pre-selected basis (or feature) functions $\phi_1, \ldots, \phi_m : \mathcal{S} \to \mathbb{R}$, the feature matrix $\Phi \in \mathbb{R}^{|\mathcal{S}| \times m}$ is defined as the matrix whose $s$-th row is the feature vector $\phi(s)^\top := [\phi_1(s), \ldots, \phi_m(s)]$.
Here $m$ is a positive integer and $\phi(s) \in \mathbb{R}^m$ is the feature vector of state $s$; the value function is approximated linearly as $\Phi\theta$ for a parameter $\theta \in \mathbb{R}^m$. It is standard to assume that the columns of $\Phi$ do not have any redundancy up to linear combinations. We make the following assumption.
Assumption 2.
$\Phi$ has full column rank.
2.3 Reinforcement Learning (RL) Problem
In this paper, the goal of RL with linear function approximation is to find the weight vector $\theta$ such that $\Phi\theta$ approximates the true value function $V^\pi$. This is typically done by minimizing the mean-square Bellman error loss function [Sutton et al., 2009a]
$\min_{\theta \in \mathbb{R}^m} \; L(\theta) := \frac{1}{2}\lVert R^\pi + \gamma P^\pi \Phi\theta - \Phi\theta \rVert_D^2, \qquad (1)$
where $D$ is defined as a diagonal matrix with diagonal entries equal to the stationary state distribution $\mu^\pi$ under the policy $\pi$. Note that, due to Assumption 1, $D \succ 0$. In a typical RL setting, the model is unknown, and only samples of the state, action, and reward are observed. Therefore, the problem can only be solved in a stochastic way using the observations. In order to formally analyze the sample complexity, we consider the following assumption on the samples.
Assumption 3.
There exists a Sampling Oracle (SO) that takes a state $s \in \mathcal{S}$ as input and generates a new state $s'$ with probability $P^\pi(s' \mid s)$ and a stochastic reward $r^\pi(s)$.
This oracle model allows us to draw i.i.d. samples of the transitions and rewards. While such an i.i.d. assumption may not necessarily hold in practice, it is commonly adopted for the complexity analysis of RL algorithms in the literature [Sutton et al., 2009a, b, Bhandari et al., 2018, Dalal et al., 2018]. It is worth mentioning that several recent works also provide complexity analysis when only assuming Markovian noise or exponentially mixing properties of the samples [Antos et al., 2008, Bhandari et al., 2018, Dai et al., 2018, Srikant & Ying., 2019]. For the sake of simplicity, this paper focuses only on the i.i.d. sampling case.
A naive idea for solving (1) is to apply stochastic gradient descent steps, $\theta_{k+1} = \theta_k - \alpha_k \hat{g}_k$, where $\alpha_k > 0$ is a step-size and $\hat{g}_k$ is a stochastic estimator of the true gradient of $L$ at $\theta_k$,
$\nabla L(\theta) = (\gamma P^\pi \Phi - \Phi)^\top D \big(R^\pi + \gamma P^\pi \Phi\theta - \Phi\theta\big).$
This approach is called the residual method [Baird, 1995]. Its main drawback is the double sampling issue [Bertsekas & Tsitsiklis, 1996, Lemma 6.10, pp. 364]: to obtain an unbiased stochastic estimate of $\nabla L(\theta)$, we need two independent samples of the next state given the current state. This is possible under Assumption 3, but hardly implementable in most real applications.
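To illustrate the double-sampling issue numerically, here is a small sketch on a two-state toy chain (our own example with made-up numbers, not from the paper): reusing one sampled next state in both factors of the residual gradient gives a biased estimate, while two independent next-state samples, as Assumption 3 would permit, give an unbiased one.

```python
import numpy as np

# Toy illustration of the double-sampling issue: the residual gradient at a state
# is a product of two factors, each containing an expectation over the next state.
rng = np.random.default_rng(1)
gamma = 0.9
P = np.array([[0.5, 0.5], [0.2, 0.8]])     # transition matrix under the policy
R = np.array([1.0, 0.0])                    # expected rewards
Phi = np.eye(2)                             # one-hot features for 2 states
theta = np.array([0.3, -0.2])
s, n = 0, 100000

s1 = rng.choice(2, size=n, p=P[s])          # first batch of next-state samples
s2 = rng.choice(2, size=n, p=P[s])          # independent second batch
delta = R[s] + gamma * Phi[s1] @ theta - Phi[s] @ theta                       # Bellman error samples
grad_single = np.mean(delta[:, None] * (gamma * Phi[s1] - Phi[s]), axis=0)    # reuses s1: biased
grad_double = np.mean(delta[:, None] * (gamma * Phi[s2] - Phi[s]), axis=0)    # independent s2: unbiased

exact_delta = R[s] + gamma * P[s] @ Phi @ theta - Phi[s] @ theta
grad_exact = exact_delta * (gamma * P[s] @ Phi - Phi[s])
print(grad_single, grad_double, grad_exact)
```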
2.4 Standard TD-Learning
In the standard TD-learning [Sutton, 1988], the term $\gamma P^\pi \Phi$ in the gradient factor $(\gamma P^\pi \Phi - \Phi)^\top$ above is omitted [Bertsekas & Tsitsiklis, 1996, pp. 369]. The resulting update rule is
$\theta_{k+1} = \theta_k + \alpha_k \big(r^\pi(s_k) + \gamma \phi(s_k')^\top \theta_k - \phi(s_k)^\top \theta_k\big)\phi(s_k),$
where $(s_k, r^\pi(s_k), s_k')$ is the sample observed at iteration $k$.
While the algorithm avoids the double sampling problem and is simple to implement, a key issue is that the stochastic gradient no longer corresponds to the true gradient of the loss function (1) or of any other objective function, making the theoretical analysis rather subtle. Asymptotic convergence of the TD-learning was given in the original paper [Sutton, 1988] for the tabular case and in Tsitsiklis & Van Roy [1997] with linear function approximation. Finite-time convergence analyses were recently established in Bhandari et al. [2018], Dalal et al. [2018], Srikant & Ying. [2019].
Remark.
The TD-learning can also be interpreted as minimizing, at each iteration, the modified loss function
$\ell(\theta; \theta') := \frac{1}{2}\lVert R^\pi + \gamma P^\pi \Phi\theta' - \Phi\theta \rVert_D^2,$
where $\theta$ stands for the online variable and $\theta'$ stands for the target variable. At each iteration step $k$, it sets the target variable to the value of the current online variable, $\theta_k' = \theta_k$, and performs a stochastic gradient step on $\ell(\,\cdot\,; \theta_k')$ with respect to the online variable.
A full algorithm is described in Algorithm 1.
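For concreteness, the following is a minimal sketch of standard TD(0) with linear function approximation on a randomly generated toy chain (the environment, features, and step-size schedule are our own assumptions, not Algorithm 1 verbatim), together with a direct computation of the TD fixed point for comparison.

```python
import numpy as np

# Minimal TD(0) with linear function approximation on a toy chain; the environment,
# features, and step sizes are illustrative assumptions.
rng = np.random.default_rng(0)
n_states, n_features, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random ergodic transition matrix
R = rng.uniform(size=n_states)                         # expected rewards
Phi = rng.normal(size=(n_states, n_features))          # feature matrix, rows phi(s)^T

theta = np.zeros(n_features)
s = 0
for k in range(1, 50001):
    s_next = rng.choice(n_states, p=P[s])
    r = R[s] + rng.normal(scale=0.1)                    # noisy reward sample
    td_error = r + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    alpha = 0.1 / (1.0 + k / 10000)                     # diminishing step size
    theta += alpha * td_error * Phi[s]                  # semi-gradient TD(0) update
    s = s_next

# TD fixed point of the projected Bellman equation, for comparison.
mu = np.linalg.matrix_power(P, 1000)[0]                 # approximate stationary distribution
D = np.diag(mu)
theta_star = np.linalg.solve(Phi.T @ D @ (Phi - gamma * P @ Phi), Phi.T @ D @ R)
print(theta, theta_star)
```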
Inspired by the recent target-based deep Q-learning algorithms [Mnih et al., 2015], we consider several alternative updating rules for the target variable that are less aggressive and more general. This leads to the so-called target-based TD-learning. One of the potential benefits is that, by slowing down the update of the target variable, we can reduce the correlation with the target value, or the variance in the gradient estimation, which in turn improves the stability of the algorithm. To this end, we introduce three variants of target-based TD: averaging TD, double TD, and periodic TD, each of which corresponds to a different strategy for the target update. In the following sections, we discuss these algorithms in detail and provide their convergence analysis.
3 Averaging TD-Learning (ATD)
We start by integrating TD-learning with the Polyak averaging strategy for the target variable update. This is motivated by the recent deep Q-learning [Mnih et al., 2015] and DDPG [Lillicrap et al., 2015]. It is worth pointing out that such a strategy has been commonly used in the deep Q-learning framework, but its convergence analysis remains absent to the best of our knowledge. Here we first study this strategy for the TD-learning. The basic idea is to minimize the modified loss $\ell(\theta; \theta')$ with respect to the online variable $\theta$ while freezing the target variable $\theta'$, and then enforce $\theta' \approx \theta$ (target tracking). Roughly speaking, the tracking step is executed with the update
(2) 
where the step parameter adjusts the update speed of the target variable and the update direction is a stochastic estimate of the tracking term. A full algorithm is summarized in Algorithm 2, which is called averaging TD (ATD).
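The following is a minimal sketch of an averaging-style target update in the spirit of Algorithm 2 (our own illustration; the exact form of update (2) and the step-size coupling in the paper may differ): the online variable takes a TD-like step whose target value uses the frozen target variable, while the target variable slowly tracks the online one.

```python
import numpy as np

# Sketch of an averaging-TD style update with linear features (illustrative, not a
# verbatim transcription of Algorithm 2).
rng = np.random.default_rng(0)
n_states, n_features, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)
R = rng.uniform(size=n_states)
Phi = rng.normal(size=(n_states, n_features))

theta = np.zeros(n_features)          # online variable
theta_target = np.zeros(n_features)   # target variable
s = 0
for k in range(1, 50001):
    s_next = rng.choice(n_states, p=P[s])
    alpha = 0.1 / (1.0 + k / 10000)    # online step size
    beta = 0.1 * alpha                 # slower target step size (assumption)
    # Online update: the target value uses the frozen target variable.
    delta = R[s] + gamma * Phi[s_next] @ theta_target - Phi[s] @ theta
    theta += alpha * delta * Phi[s]
    # Target update: averaging-style tracking of the online variable.
    theta_target += beta * (theta - theta_target)
    s = s_next
print(theta, theta_target)
```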
Compared to the standard TD-learning in Algorithm 1, the only difference comes from the target variable update in the last line of Algorithm 2. In particular, if the target variable is overwritten by the online variable in the second update at every iteration, then ATD reduces to the TD-learning.
Next, we prove its convergence under certain assumptions. The convergence proof is based on the ODE (ordinary differential equation) approach [Bhatnagar et al., 2012], which is a standard technique in the RL literature [Sutton et al., 2009b]. In this approach, a stochastic recursive algorithm is converted to a corresponding ODE, and the stability of the ODE is used to prove the convergence. The ODE associated with ATD is as follows:
(3) 
We arrive at the following convergence result.
Theorem 1.
Assume that under the fixed policy $\pi$, the induced Markov chain is ergodic and the step-sizes $\{\alpha_k\}$ satisfy
$\alpha_k > 0, \qquad \sum_{k=0}^{\infty} \alpha_k = \infty, \qquad \sum_{k=0}^{\infty} \alpha_k^2 < \infty. \qquad (4)$
Then, $\theta_k \to \theta^*$ and $\theta'_k \to \theta^*$ as $k \to \infty$ with probability one, where
$\theta^* := \big(\Phi^\top D (\Phi - \gamma P^\pi \Phi)\big)^{-1} \Phi^\top D R^\pi. \qquad (5)$
Remark 1.
Note that $\theta^*$ in (5) is not identical to the optimal solution of the original problem in (1). Instead, it is the solution of the projected Bellman equation defined as
$\Phi\theta^* = \Pi T^\pi(\Phi\theta^*),$
where $\Pi T^\pi$ is the projected Bellman operator defined by
$\Pi T^\pi(V) := \Pi\big(R^\pi + \gamma P^\pi V\big),$
and $\Pi$ is the projection onto the range space of $\Phi$, denoted by $\mathcal{R}(\Phi)$: $\Pi(x) := \arg\min_{y \in \mathcal{R}(\Phi)} \lVert x - y \rVert_D$. The projection can be performed by a matrix multiplication: we write $\Pi(x) = \Pi x$, where $\Pi := \Phi(\Phi^\top D \Phi)^{-1}\Phi^\top D$.
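As a quick numerical sanity check of these relations (a toy example with randomly generated quantities, not tied to the paper's experiments), one can verify that the matrix above is the D-weighted projection and that the fixed point in (5) satisfies the projected Bellman equation.

```python
import numpy as np

# Numerical sanity check on a toy chain: Pi = Phi (Phi^T D Phi)^{-1} Phi^T D projects
# onto the range of Phi, and theta* satisfies Phi theta* = Pi (R + gamma P Phi theta*).
rng = np.random.default_rng(0)
n_states, n_features, gamma = 6, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)
R = rng.uniform(size=n_states)
Phi = rng.normal(size=(n_states, n_features))
mu = np.linalg.matrix_power(P, 2000)[0]          # approximate stationary distribution
D = np.diag(mu)

Pi = Phi @ np.linalg.inv(Phi.T @ D @ Phi) @ Phi.T @ D
theta_star = np.linalg.solve(Phi.T @ D @ (Phi - gamma * P @ Phi), Phi.T @ D @ R)
lhs = Phi @ theta_star
rhs = Pi @ (R + gamma * P @ Phi @ theta_star)
print(np.allclose(lhs, rhs))                     # True up to numerical error
```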
Theorem 1 implies that both the target and online variables of ATD converge to $\theta^*$, which solves the projected Bellman equation. The proof of Theorem 1 is provided in Appendix A based on the stochastic approximation approach, where we apply the Borkar and Meyn theorem [Bhatnagar et al., 2012, Appendix D]. Alternatively, the multi-time-scale stochastic approximation [Bhatnagar et al., 2012, pp. 23] can be used with slightly different step-size rules. Due to the introduction of target variable updates, deriving a finite-sample analysis for the modified TD-learning is far from straightforward [Dalal et al., 2018, Bhandari et al., 2018]. We leave this for future investigation.
4 Double TD-Learning (DTD)
In this section, we introduce a natural extension of ATD, which has a more symmetric form. The algorithm mirrors the double Q-learning [Van Hasselt et al., 2016], but with a notable difference. Here, both the online variable and the target variable are updated in the same fashion by switching roles. To enforce consensus between the two variables, we also add a correction term to the gradient update. The algorithm is summarized in Algorithm 3, and referred to as the double TD-learning (DTD).
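The following is only a schematic sketch of a symmetric, double-TD style scheme as described above (the precise correction term and coefficients of Algorithm 3 are not reproduced here, and the coupling strength is an assumption): two parameter vectors are updated in parallel, each using the other as its target, plus a term pulling them together.

```python
import numpy as np

# Schematic double-TD style updates (illustrative; the exact correction term and
# coefficients of Algorithm 3 may differ). Different initializations keep the two
# iterates from coinciding forever.
rng = np.random.default_rng(0)
n_states, n_features, gamma, eta = 5, 3, 0.9, 0.5   # eta: coupling strength (assumption)
P = rng.dirichlet(np.ones(n_states), size=n_states)
R = rng.uniform(size=n_states)
Phi = rng.normal(size=(n_states, n_features))

theta_a = rng.normal(size=n_features)
theta_b = np.zeros(n_features)
s = 0
for k in range(1, 50001):
    s_next = rng.choice(n_states, p=P[s])
    alpha = 0.1 / (1.0 + k / 10000)
    delta_a = R[s] + gamma * Phi[s_next] @ theta_b - Phi[s] @ theta_a  # A's target is B
    delta_b = R[s] + gamma * Phi[s_next] @ theta_a - Phi[s] @ theta_b  # B's target is A
    new_a = theta_a + alpha * (delta_a * Phi[s] + eta * (theta_b - theta_a))
    new_b = theta_b + alpha * (delta_b * Phi[s] + eta * (theta_a - theta_b))
    theta_a, theta_b = new_a, new_b
    s = s_next
print(theta_a, theta_b)
```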
We provide the convergence of DTD with linear function approximation below. The proof is similar to the proof of Theorem 1, and is contained in Appendix B. Note that asymptotic convergence of double Q-learning has been established in Hasselt [2010] for the tabular case, but no result is yet known when linear function approximation is used.
Theorem 2.
Assume that under the fixed policy $\pi$, the induced Markov chain is ergodic and the step-sizes satisfy (4). Then, $\theta_k \to \theta^*$ and $\theta'_k \to \theta^*$ as $k \to \infty$ with probability one.
If DTD uses identical initial values for the target and online variables, then the two iterates remain identical, i.e., $\theta_k = \theta'_k$ for all $k \geq 0$. In this case, DTD is equivalent to the TD-learning with a variant of the step-size rule. In practice, this issue can also be resolved by using different samples for each update, and the convergence result still applies to this variation of DTD.
Compared to the corresponding form of the double Q-learning [Hasselt, 2010], DTD has two modifications. First, we introduce an additional correction term linking the target and online parameters so as to enforce a smooth update of the target parameter. This covers double Q-learning as a special case when the coefficient of the correction term is set to zero. Moreover, DTD updates both the target and online parameters in parallel instead of at random. This approach makes more efficient use of the samples at a slight sacrifice in computational cost. The convergence of the randomized version can be proved with a slight modification of the corresponding proof (see Appendix C for details).
5 Periodic TD-Learning (PTD)
In this section, we propose another version of the target-based TD-learning algorithm, which more closely resembles the scheme used in deep Q-learning [Mnih et al., 2015]. It corresponds to a periodic update of the target variable, which differs from the previous sections. Roughly speaking, the target variable is only periodically updated as follows:
where the update direction is a stochastic estimator of the gradient of the modified loss. The standard TD-learning is recovered when the target variable is updated at every step.
Alternatively, one can interpret every cycle of updates as contributing to minimizing the modified loss function
while freezing the target variable. In other words, the above subproblem is approximately solved within each cycle through a number of stochastic gradient descent steps. We formally present the algorithmic idea in a more general way as depicted in Algorithm 4 and call it the periodic TD algorithm (PTD).
For PTD, given a fixed target variable, the SGD subroutine runs a number of stochastic gradient descent steps in order to approximately solve the subproblem, for which an unbiased stochastic gradient estimator is obtained using observations from the sampling oracle. Upon (approximately) solving the subproblem, the target variable is replaced with the current online variable. This makes PTD similar to the original deep Q-learning [Mnih et al., 2015], as it is periodic if the number of inner steps is set to a constant. Moreover, PTD is also closely related to the TD-learning in Algorithm 1: if only a single inner step is taken within each cycle, then PTD corresponds to the standard TD.
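The following is a minimal sketch of the periodic scheme (our own illustration; the inner iteration count, step-size rule, and subproblem details are assumptions rather than the settings of Algorithm 4): the target variable is frozen for a block of SGD steps on the induced least-squares-like subproblem and then overwritten with the online variable.

```python
import numpy as np

# Sketch of a periodic-TD style scheme (illustrative parameters, not Algorithm 4's
# settings): freeze the target, run a block of SGD steps, then copy online -> target.
rng = np.random.default_rng(0)
n_states, n_features, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)
R = rng.uniform(size=n_states)
Phi = rng.normal(size=(n_states, n_features))

theta = np.zeros(n_features)          # online variable
theta_target = np.zeros(n_features)   # target variable
n_cycles, inner_steps = 50, 1000      # outer target updates and inner SGD steps (assumptions)
s = 0
for i in range(n_cycles):
    for k in range(1, inner_steps + 1):
        s_next = rng.choice(n_states, p=P[s])
        # The target value is fixed within the cycle, so each inner step is plain
        # SGD on a least-squares objective in theta.
        target_value = R[s] + gamma * Phi[s_next] @ theta_target
        alpha = 0.1 / (1.0 + (i * inner_steps + k) / 10000)   # diminishing across cycles
        theta += alpha * (target_value - Phi[s] @ theta) * Phi[s]
        s = s_next
    theta_target = theta.copy()       # periodic target update
print(theta)
```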
Based on the standard results in Bottou et al. [2018, Theorem 4.7], the subroutine converges to the optimal solution of the subproblem. But as we only apply a finite number of SGD steps, the subroutine will return an approximate solution with a certain error bound in expectation.
In the following, we establish a finite-time convergence analysis of PTD. We first present a result in terms of the expected error of the solution and its bounds with high probability.
Theorem 3.
The second result implies that PTD achieves the optimal solution with arbitrarily high probability by controlling the error bounds of the subproblems. In particular, when the subproblem errors are uniformly bounded over all cycles, the overall error bound takes an explicit form.
One can see that the error essentially decomposes into two terms: one from the approximation errors induced by the SGD procedure and one from the contraction property of solving the subproblems, which can also be viewed as solving the projected Bellman equations. Full details of the proof can be found in Appendix D.
To further analyze the approximation error from the SGD procedure, existing convergence results in Bottou et al. [2018, Theorem 4.7] can be applied with slight modifications.
Proposition 1.
Suppose that the SGD method in Algorithm 4 is run with a step-size sequence such that, for all inner iterations,
for some constants such that
Then, for all inner iterations, the expected optimality gap satisfies
(6) 
where
and
Proposition 1 ensures that the subroutine iterates converge to the solution of the subproblem at a rate of $O(1/k)$ in the number of SGD steps. Combining Proposition 1 with Theorem 3, the overall sample complexity is derived in the following proposition. We defer the proofs to Appendix E and Appendix F.
Proposition 2 (Sample Complexity).
An approximation of the optimal solution $\theta^*$ to a prescribed accuracy is obtained by Algorithm 4 with a number of SO calls at most as stated below, where
the remaining constants are defined in Proposition 1.
As a result, the overall sample complexity of PTD is bounded as above. As mentioned earlier, non-asymptotic analyses for even the standard TD algorithm have only recently been developed in a few works [Dalal et al., 2018, Bhandari et al., 2018, Srikant & Ying., 2019]. Our sample complexity result for PTD, which is a target-based TD algorithm, matches the one developed in Bhandari et al. [2018] with a similar decaying step-size sequence, up to a log factor. Yet, our analysis is much simpler and builds directly upon existing results on stochastic gradient descent. Moreover, from the computational perspective, although PTD runs in two loops, it is as efficient as the standard TD.
PTD also shares some similarity with the least-squares temporal difference learning (LSTD, Bradtke & Barto [1996]) and its stochastic approximation variant (fLSTD-SA, Prashanth et al. [2014]). LSTD is a batch algorithm that directly estimates the optimal solution described in (5) through samples, which can also be viewed as exactly computing the solution to a least-squares subproblem. fLSTD-SA alleviates the computational burden by applying stochastic gradient descent (the same as the TD update) to solve the subproblems. The key difference between fLSTD-SA and PTD lies in that the objective for PTD is adjusted by the target variables across cycles. Lastly, PTD is also closely related to, and can be viewed as a special case of, the least-squares fitted Q-iteration [Antos et al., 2008]. Both of them solve similar least-squares problems using target values. However, for PTD, we are able to directly apply stochastic gradient descent to address the subproblems to near-optimality.
6 Simulations
In this section, we provide some preliminary numerical simulation results showing the efficiency of the proposed target-based TD algorithms. We stress that the main goal of this paper is to introduce the family of target-based TD algorithms with linear function approximation and provide theoretical convergence analysis for them, as an intermediate step towards the understanding of target-based Q-learning algorithms. Hence, our numerical experiments simply focus on testing the convergence, the sensitivity with respect to the tuning parameters of these target-based algorithms, and the effects of using target variables as opposed to the standard TD-learning.
6.1 Convergence of ATD and DTD
In this example, we consider a finite MDP where the reward given the policy and the current state is drawn from a uniform distribution. The action space and policy are not explicitly defined here. For the linear function approximation, we consider a feature vector built from radial basis functions [Geramifard et al., 2013].
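As an illustration of this kind of feature construction (the centers, width, and dimensions below are our own assumptions, not the values used in the experiment), Gaussian radial basis features over a one-dimensional state index can be computed as follows.

```python
import numpy as np

# Illustrative Gaussian radial-basis features over a one-dimensional state index;
# the centers, width, and number of basis functions are assumptions.
def rbf_features(s, centers, sigma=1.0):
    """Return phi(s) with entries exp(-(s - c_i)^2 / (2 sigma^2))."""
    return np.exp(-((s - centers) ** 2) / (2.0 * sigma ** 2))

n_states, n_features = 10, 4
centers = np.linspace(0, n_states - 1, n_features)
Phi = np.vstack([rbf_features(s, centers) for s in range(n_states)])  # feature matrix
print(Phi.shape, Phi[0])
```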
Simulation results are given in Figure 1, which illustrates the error evolution of the standard TD-learning (blue line) and the proposed ATD (red line) with their respective step-size and averaging parameters. The design parameters of both approaches were tuned by trial and error to demonstrate roughly the best performance. Additional simulation results in Appendix G provide comparisons for several different parameters. Figure 1(b) shows the same results over a later interval of iterations. The results suggest that although ATD initially shows slower convergence, it eventually converges faster than the standard TD with lower variance after a certain number of iterations. With the same setting, comparative results for DTD are given in Figure 2.
6.2 Convergence of PTD
In this section, we provide an empirical comparative analysis of PTD and the standard TD-learning. The convergence behavior of both approaches is quite sensitive to the design parameters to be determined, such as the step-size rules and the total number of iterations for the subproblem. We consider the same example as above but with an alternative linear function approximation whose feature vector consists of radial basis functions.
From our own experience, applying the same step-size rule for every subproblem yields unstable fluctuations of the error in some cases. For details, the reader is referred to Appendix G, which provides comparisons with different design parameters. The results motivate us to apply an adaptive step-size rule for the subproblems of PTD so that smaller and smaller step-sizes are applied as the outer-loop step increases. In particular, we employ such an adaptive step-size rule for PTD, and the corresponding simulation results are given in Figure 3, where PTD outperforms the standard TD with its step-size best tuned for comparison. Figure 3(b) shows the results of Figure 3 over a later interval, which clearly demonstrates that the error of PTD is smaller with lower variance.
7 Conclusion
In this paper, we propose a new family of target-based TD-learning algorithms, including the averaging TD, double TD, and periodic TD, and provide theoretical analysis of their convergence. The proposed TD algorithms are largely inspired by the recent success of deep Q-learning using target networks and mirror several of the practical strategies used for updating target networks in the literature. Simulation results show that integrating target variables into TD-learning can also help stabilize the convergence by reducing the variance of, and the correlation with, the target. Our convergence analysis provides some theoretical understanding of target-based TD algorithms. We hope this will also shed some light on the theoretical analysis of target-based Q-learning algorithms and nonlinear RL frameworks.
Possible future topics include (1) developing finite-time convergence analysis for ATD and DTD; (2) extending the analysis of the target-based TD-learning to the Q-learning case w/o function approximation; and (3) generalizing the target-based framework to other variations of TD-learning and Q-learning algorithms.
References
 Antos et al. [2008] Antos, A., Szepesvári, C., and Munos, R. Learning nearoptimal policies with Bellmanresidual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, Apr 2008.
 Antsaklis & Michel [2007] Antsaklis, P. J. and Michel, A. N. A linear systems primer. 2007.
 Baird [1995] Baird, L. Residual algorithms: reinforcement learning with function approximation. In Machine Learning Proceedings, pp. 30–37. 1995.
 Bertsekas [1995] Bertsekas, D. P. Dynamic programming and optimal control. Athena Scientific Belmont, MA, 1995.
 Bertsekas & Tsitsiklis [1996] Bertsekas, D. P. and Tsitsiklis, J. N. Neurodynamic programming. Athena Scientific Belmont, MA, 1996.
 Bertsekas & Yu [2009] Bertsekas, D. P. and Yu, H. Projected equation methods for approximate solution of large linear systems. Journal of Computational and Applied Mathematics, 227(1):27–50, 2009.
 Bhandari et al. [2018] Bhandari, J., Russo, D., and Singal, R. A finite time analysis of temporal difference learning with linear function approximation. arXiv preprint arXiv:1806.02450, 2018.
 Bhatnagar et al. [2012] Bhatnagar, S., Prasad, H. L., and Prashanth, L. A. Stochastic recursive algorithms for optimization: simultaneous perturbation methods, volume 434. Springer, 2012.
 Bottou et al. [2018] Bottou, L., Curtis, F. E., and Nocedal, J. Optimization methods for largescale machine learning. Siam Review, 60(2):223–311, 2018.
 Boyd & Vandenberghe [2004] Boyd, S. and Vandenberghe, L. Convex optimization. Cambridge University Press, 2004.
 Bradtke & Barto [1996] Bradtke, S. J. and Barto, A. G. Linear leastsquares algorithms for temporal difference learning. Machine Learning, 22(1):33–57, Mar 1996.
 Bubeck et al. [2015] Bubeck, S. et al. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015.
 Chen [1995] Chen, C.T. Linear System Theory and Design. Oxford University Press, Inc., 1995.
 Dai et al. [2017] Dai, B., He, N., Pan, Y., Boots, B., and Song, L. Learning from conditional distributions via dual embeddings. In Artificial Intelligence and Statistics, pp. 1458–1467, 2017.
 Dai et al. [2018] Dai, B., Shaw, A., Li, L., Xiao, L., He, N., Liu, Z., Chen, J., and Song, L. SBEED: Convergent reinforcement learning with nonlinear function approximation. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1125–1134. PMLR, 10–15 Jul 2018.
 Dalal et al. [2018] Dalal, G., Szörényi, B., Thoppe, G., and Mannor, S. Finite sample analyses for TD(0) with function approximation. In ThirtySecond AAAI Conference on Artificial Intelligence, 2018.
 Dann et al. [2014] Dann, C., Neumann, G., and Peters, J. Policy evaluation with temporal differences: A survey and comparison. Journal of Machine Learning Research, 15(1):809–883, 2014.
 Geramifard et al. [2013] Geramifard, A., Walsh, T. J., Tellex, S., Chowdhary, G., Roy, N., How, J. P., et al. A tutorial on linear function approximators for dynamic programming and reinforcement learning. Foundations and Trends® in Machine Learning, 6(4):375–451, 2013.
 Gu et al. [2016] Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning, pp. 2829–2838, 2016.
 Hasselt [2010] Hasselt, H. V. Double Qlearning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010.
 Heess et al. [2015] Heess, N., Hunt, J. J., Lillicrap, T. P., and Silver, D. Memorybased control with recurrent neural networks. arXiv preprint arXiv:1512.04455, 2015.
 Lillicrap et al. [2015] Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
 Mahadevan et al. [2014] Mahadevan, S., Liu, B., Thomas, P., Dabney, W., Giguere, S., Jacek, N., Gemp, I., and Liu, J. Proximal reinforcement learning: A new theory of sequential decision making in primaldual spaces. arXiv preprint arXiv:1405.6757, 2014.
 Mnih et al. [2015] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Humanlevel control through deep reinforcement learning. Nature, 518(7540):529, 2015.
 Mnih et al. [2016] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928–1937, 2016.
 Prashanth et al. [2014] Prashanth, L. A., Korda, N., and Munos, R. Fast LSTD using stochastic approximation: Finite time analysis and application to traffic control. In Calders, T., Esposito, F., Hüllermeier, E., and Meo, R. (eds.), Machine Learning and Knowledge Discovery in Databases, pp. 66–81. Springer Berlin Heidelberg, 2014.
 Srikant & Ying. [2019] Srikant, R. and Ying., L. Finitetime error bounds for linear stochastic approximation and TD learning. arXiv preprint arXiv:1902.00923, 2019.
 Sutton [1988] Sutton, R. S. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9–44, 1988.
 Sutton et al. [2009a] Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., and Wiewiora, E. Fast gradientdescent methods for temporaldifference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 993–1000, 2009a.
 Sutton et al. [2009b] Sutton, R. S., Maei, H. R., and Szepesvári, C. A convergent temporaldifference algorithm for offpolicy learning with linear function approximation. In Advances in neural information processing systems, pp. 1609–1616, 2009b.
 Tsitsiklis & Van Roy [1997] Tsitsiklis, J. N. and Van Roy, B. An analysis of temporaldifference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
 Van Hasselt et al. [2016] Van Hasselt, H., Guez, A., and Silver, D. Deep reinforcement learning with double Qlearning. In AAAI, volume 2, pp. 5. Phoenix, AZ, 2016.
 Wang et al. [2016] Wang, Z., Schaul, T., Hessel, M., Hasselt, H., Lanctot, M., and Freitas, N. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning, pp. 1995–2003, 2016.
 Watkins & Dayan [1992] Watkins, C. J. C. H. and Dayan, P. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
 Yu & Bertsekas [2009] Yu, H. and Bertsekas, D. P. Convergence results for some temporal difference methods based on least squares. IEEE Transactions on Automatic Control, 54(7):1515–1531, 2009.
Appendix
Appendix A Proof of Theorem 1
The proof is based on the analysis of the general stochastic recursion
$x_{k+1} = x_k + \alpha_k\big(h(x_k) + \varepsilon_{k+1}\big),$
where $h$ is a mapping $h : \mathbb{R}^n \to \mathbb{R}^n$ and $\varepsilon_{k+1}$ is a noise term. If only the asymptotic convergence is our concern, the ODE (ordinary differential equation) approach [Bhatnagar et al., 2012] is a convenient tool. Before starting the main proof, we review essential results from linear system theory [Chen, 1995].
Definition 1 (Chen [1995, Definition 5.1]).
The ODE $\dot{x}(t) = A x(t)$, $t \geq 0$, where $x(t) \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times n}$, is asymptotically stable if for every finite initial state $x(0)$, $x(t) \to 0$ as $t \to \infty$.
Definition 2 (Hurwitz matrix).
A complex square matrix $A$ is Hurwitz if all eigenvalues of $A$ have strictly negative real parts.
Lemma 1 (Chen [1995, Theorem 5.4]).
The ODE $\dot{x}(t) = A x(t)$, $t \geq 0$, is asymptotically stable if and only if $A$ is Hurwitz.
Lemma 2 (Lyapunov theorem [Chen, 1995, Theorem 5.5]).
A complex square matrix $A$ is Hurwitz if and only if there exists a positive definite matrix $M \succ 0$ such that $A^H M + M A \prec 0$, where $A^H$ is the complex conjugate transpose of $A$.
Lemma 3 (Schur complement [Boyd & Vandenberghe, 2004, pp. 651]).
For any complex block matrix $\begin{bmatrix} A & B \\ B^H & C \end{bmatrix}$ with $A \succ 0$, we have $\begin{bmatrix} A & B \\ B^H & C \end{bmatrix} \succ 0$ if and only if the Schur complement $C - B^H A^{-1} B \succ 0$.
The convergence of many RL algorithms relies on ODE approaches [Bhatnagar et al., 2012]. One of the most popular approaches is based on the Borkar and Meyn theorem [Bhatnagar et al., 2012, Appendix D]. Basic technical assumptions are given below.
Assumption 4.
1. The mapping $h : \mathbb{R}^n \to \mathbb{R}^n$ is globally Lipschitz continuous and there exists a function $h_\infty : \mathbb{R}^n \to \mathbb{R}^n$ such that $\lim_{c \to \infty} h(cx)/c = h_\infty(x)$ for all $x \in \mathbb{R}^n$.
2. The origin in $\mathbb{R}^n$ is an asymptotically stable equilibrium for the ODE $\dot{x}(t) = h_\infty(x(t))$.
3. There exists a unique globally asymptotically stable equilibrium $x^e \in \mathbb{R}^n$ for the ODE $\dot{x}(t) = h(x(t))$, i.e., $x(t) \to x^e$ as $t \to \infty$.
4. The sequence $\{\varepsilon_k\}_{k \geq 1}$, with $\mathcal{F}_k := \sigma(x_0, \varepsilon_1, \ldots, \varepsilon_k)$, is a martingale difference sequence. In addition, there exists a constant $C_0 < \infty$ such that for any initial $x_0 \in \mathbb{R}^n$, we have $\mathbb{E}[\lVert \varepsilon_{k+1} \rVert^2 \mid \mathcal{F}_k] \leq C_0(1 + \lVert x_k \rVert^2)$ for all $k \geq 0$.
5. The step-sizes satisfy (4).
Lemma 4 (Borkar and Meyn theorem).
Suppose that Assumption 4 holds. Then, for any initial $x_0 \in \mathbb{R}^n$, $\sup_{k \geq 0} \lVert x_k \rVert < \infty$ with probability one. In addition, $x_k \to x^e$ as $k \to \infty$ with probability one.
Based on these technical results, we are now in a position to prove Theorem 1.
Proof of Theorem 1: The ODE (3) can be expressed as a linear system with an affine term
where
Therefore, the mapping $h$, defined by this affine function, is globally Lipschitz continuous. Moreover, we have
Therefore, the first condition in Assumption 4 holds. To meet the second condition of Assumption 4, by Lemma 1, it suffices to prove that the system matrix is Hurwitz. The reason is explained below. Suppose that the system matrix is Hurwitz. Then it is invertible, and there exists a unique equilibrium for the ODE, obtained by setting the right-hand side to zero. Due to the constant affine term, it is not immediately clear whether such an equilibrium point is globally asymptotically stable. From [Antsaklis & Michel, 2007, pp. 143], by shifting the state by the equilibrium, the ODE can be transformed into a linear ODE for which the origin is the globally asymptotically stable equilibrium point, since the system matrix is Hurwitz. Therefore, the equilibrium is the globally asymptotically stable equilibrium point of the original ODE, and the third condition of Assumption 4 is satisfied. Thus, it remains to prove that the system matrix is Hurwitz. We first provide a simple analysis and prove that there exists a threshold such that, for all parameter values beyond it, the system matrix is Hurwitz. To this end, we use the property of the similarity transformation [Antsaklis & Michel, 2007, pp. 88], i.e., a matrix is Hurwitz if and only if its similarity transform is Hurwitz for any invertible transformation matrix. Choosing the transformation matrix appropriately, one gets
To prove that this matrix is Hurwitz, we use Lemma 2 with a particular choice of the Lyapunov matrix and check the sufficient condition
(7) 