
# Zap Q-Learning for Optimal Stopping Time Problems

Shuhang Chen, Adithya M. Devraj, Ana Bušić, and Sean Meyn. S.C. is with the Department of Mathematics, University of Florida, Gainesville. A.D. is with the Department of ECE, University of Florida. A.B. is with Inria and DI ENS, École Normale Supérieure, CNRS, PSL Research University, Paris, France. S.M. is with the Department of Electrical and Computer Engineering, University of Florida, and Inria International Chair, Paris. Acknowledgements: Financial support from ARO grant W911NF1810334 is gratefully acknowledged. Additional support from EPCN 1609131 & CPS 1646229, and French National Research Agency grant ANR-16-CE05-0008.
###### Abstract

The objective in this paper is to obtain fast-converging reinforcement learning algorithms that approximate solutions to the problem of discounted-cost optimal stopping in an irreducible, uniformly ergodic Markov chain evolving on a compact state space. We build on the dynamic programming approach taken by Tsitsiklis and Van Roy, wherein they propose a Q-learning algorithm to estimate the optimal state-action value function, which then defines an optimal stopping rule. We provide insights as to why the convergence rate of this algorithm can be slow, and propose a fast-converging alternative, the “Zap-Q-learning” algorithm, designed to achieve the optimal rate of convergence. For the first time, we prove convergence of the Zap-Q-learning algorithm in a linear function approximation setting. The proof is based on ODE analysis, and the optimal asymptotic variance property of the algorithm is reflected in fast convergence in a finance example.

## I Introduction

Consider a discrete-time Markov chain {X_n : n ≥ 0} evolving on a general state space X. The goal in optimal stopping time problems is to minimize, over all stopping times τ, the associated expected cost:

$$\mathbb{E}\Bigl[\,\sum_{n=0}^{\tau-1}\beta^{n}c(X_n)+\beta^{\tau}c_s(X_\tau)\Bigr] \tag{1}$$

where c denotes the per-stage cost, c_s the terminal cost, and β ∈ (0,1) is the discount factor. Examples of such problems arise mostly in financial applications such as derivatives analysis (see Section V), timing of a purchase or sale of an asset, and more generally in problems that involve sequential analysis.

In this work, the optimal decision rule is approximated using reinforcement learning techniques. We propose and analyze an optimal variance algorithm to approximate the value function associated with the optimal stopping rule.

### I-a Definitions & Problem Setup

We assume that the state space X is compact, and we let ℬ(X) denote the associated Borel σ-algebra. The time-homogeneous Markov chain is defined on a probability space (Ω, ℱ, Pr), and its dynamics are determined by an initial distribution and a transition kernel P: for each x ∈ X and A ∈ ℬ(X),

 P(x,A)=Pr(Xn+1∈A∣Xn=x)

It is assumed that the chain is uniformly ergodic: there exists a unique invariant probability measure π, a constant D < ∞, and ρ ∈ (0,1), such that, for all x ∈ X and A ∈ ℬ(X),

$$|P^n(x,A)-\pi(A)|\le D\rho^{n},\qquad n\ge 0 \tag{2}$$

Denote by {ℱ_n} the filtration associated with the chain. The Markov property asserts that for bounded measurable functions h,

 E[h(Xn+1)∣Fn,Xn=x]=∫P(x,dy)h(y)

In this paper, a stopping time τ is a random variable taking on values in the non-negative integers, with the defining property {τ ≤ n} ∈ ℱ_n for each n ≥ 0. A stationary policy is defined to be a measurable function ϕ: X → {0,1} that defines a stopping time:

 τϕ=min{n≥0:ϕ(Xn)=1} (3)

The optimal value function h* is defined as the infimum of (1) over all stopping times: for any x ∈ X,

$$h^*(x):=\inf_\tau \mathbb{E}\Bigl[\,\sum_{n=0}^{\tau-1}\beta^{n}c(X_n)+\beta^{\tau}c_s(X_\tau)\,\Big|\,X_0=x\Bigr] \tag{4}$$

Similarly, the associated Q-function is defined as

 Q∗(x):=c(x)+βE[h∗(X1)∣X0=x]

It follows that Q* solves the associated Bellman equation [15]: for each x ∈ X,

 Q∗(x)=c(x)+βE[min(cs(X1),Q∗(X1))|X0=x] (5)

and the optimal stopping rule is defined by the corresponding stationary policy,

 ϕ∗(x)=I{cs(x)≤Q∗(x)} (6)

where 𝕀{·} denotes the indicator function. Using the general definition (3), an optimal stopping time is obtained as τ* = τ_{ϕ*}.

The Bellman equation (5) can be expressed as the functional fixed point equation FQ* = Q*, where F denotes the dynamic programming operator: for any function Q and x ∈ X,

 FQ(x):=c(x)+βE[min(cs(X1),Q(X1))|X0=x] (7)
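When transitions can be sampled, the operator F can be approximated by Monte-Carlo; the sketch below is illustrative only, and the arguments `Q`, `c`, `c_s`, and `sample_next` are hypothetical stand-ins for the problem data:

```python
import numpy as np

def bellman_op(Q, c, c_s, beta, sample_next, x, n_samples=1000, rng=None):
    """Monte-Carlo estimate of FQ(x) = c(x) + beta * E[min(c_s(X1), Q(X1)) | X0 = x].

    `sample_next(x, rng)` draws X1 from the transition kernel P(x, .);
    `Q`, `c`, `c_s` are vectorized callables (assumptions, not the paper's code).
    """
    rng = np.random.default_rng(rng)
    ys = np.array([sample_next(x, rng) for _ in range(n_samples)])
    # average of min(terminal cost, continuation value) over sampled next states
    return c(x) + beta * np.mean(np.minimum(c_s(ys), Q(ys)))
```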

Analysis is framed in the usual Hilbert space L₂(π) of real-valued measurable functions on X with inner product:

 ⟨f,g⟩π=E[f(X)g(X)], (8)

and norm:

 ∥f∥π=√⟨f,f⟩π, (9)

where the expectation in (8) is with respect to the steady-state distribution π. It is assumed throughout that the cost functions c and c_s are in L₂(π).

### I-B Objective

The goal in this work is to approximate Q* using a parameterized family of functions {Q^θ : θ ∈ ℝ^d}, where θ denotes the parameter vector. We restrict to a linear parameterization throughout, so that:

$$Q^\theta(x):=\theta^{\mathsf T}\psi(x),\qquad x\in X \tag{10}$$

where ψ(x) = (ψ₁(x), …, ψ_d(x))ᵀ, with ψ_i ∈ L₂(π) for 1 ≤ i ≤ d, denotes the vector of basis functions. For any parameter vector θ, we denote the Bellman error

$$B^{\theta} := FQ^\theta - Q^\theta.$$

It is assumed that the basis functions are linearly independent: the covariance matrix Σψ is full rank, where

 Σψ(i,j)=⟨ψi,ψj⟩π,1≤i,j≤d (11)

In a finite state-space setting, it is possible to construct a consistent algorithm that computes the Q-function exactly [8]. The Q-learning algorithm of Watkins [16, 17] can be used in this case (see [18] for a discussion).

In a function approximation setting, we need to relax the goal of solving (5). As in previous works [15, 6, 18], the goal in this paper is to obtain the solution to a Galerkin relaxation of (5): find θ* ∈ ℝ^d such that,

$$\mathbb{E}\bigl[B^{\theta^*}(X_n)\,\psi_i(X_n)\bigr]=0,\qquad 1\le i\le d, \tag{12}$$

or equivalently,

 ⟨FQθ∗−Qθ∗,ψi⟩π=0,1≤i≤d. (13)

In [15], the authors show that the solution θ* to the fixed point equations in (13) satisfies (see [15, Theorem 2]):

$$\|Q^{\theta^*}-Q^*\|_\pi \le \frac{1}{1-\beta^{2}}\,\min_\theta \|Q^\theta-Q^*\|_\pi.$$

### I-C Literature Survey

Obtaining an approximate solution to the original problem (5) using a modified objective (13) was first considered in [15]. The authors propose an extension of the TD() algorithm of [12, 14], and obtain convergence results under the assumption of a finite state-action space setting.

Though it is not obvious at first sight, the algorithm in [15] is more closely connected to Watkins’ Q-learning algorithm [16, 17] than to the TD(λ) algorithm. This is due to the minimum term that appears in (13) (see the definition of F in (7)), similar to what appears in Q-learning. This is important to note, because Q-learning algorithms are known to have convergence issues in function approximation settings, due to the fact that the dynamic programming operator may not be a contraction in general [1]. The operator F defined in (7) is quite special in this sense: it can be shown to be a contraction with respect to the L₂(π)-norm [15]:

 ∥FQ−FQ′∥π≤β∥Q−Q′∥π,for all Q,Q′∈L2(π)

Since [15], many other algorithms have been proposed to improve the convergence rate. In [6] the authors propose a matrix gain variant of the algorithm presented in [15], improving the rate of convergence in numerical studies. In [18], the authors take a “least squares” approach, and propose the least squares Q-learning algorithm, which closely resembles the least squares policy evaluation (LSPE(λ)) algorithm of [10]. The authors recognize the high computational complexity of the algorithm, and propose alternative variants. In the prior works [6] and [18], though a function approximation setting is considered, the state space is assumed finite.

More recently, in [8, 7], the authors propose the Zap Q-learning algorithm to solve for a solution to a fixed point equation similar to (but more general than) (5). The proof of convergence is provided only for the finite state-action space setting, and, more restrictively, a tabular basis is assumed (wherein the basis functions span all possible functions).

### I-D Contributions

We make the following contributions in this work:

• We extend the convergence analysis of the Zap-Q-learning algorithm of [7] to the problem of optimal stopping (1) in a linear function approximation setting (the authors consider only a ‘tabular’ basis in [7]).

• The algorithm and analysis presented in this work are superior to previous works on optimal stopping [15, 6, 18] in two ways: first, the analysis in previous works concerns only a finite state-action space setting; more importantly, the algorithm we propose has optimal asymptotic variance, implying better convergence rates (see Section III for a discussion and Section V for numerical results).

The extension of the work [7] to the current setting is not trivial: the tabular case is much simpler to analyze owing to its special structure, and, in general, a convergence theory for Q-learning algorithms in function approximation settings does not exist. Furthermore, the ODE analysis obtained in this paper (cf. Theorem III.5) provides insight into the behavior of the Zap-Q algorithm, even in a linear function approximation setting.

The remainder of the paper is organized as follows: Section II contains the approximation architecture, and introduces the Zap-Q-learning algorithm. The assumptions and main results are contained in Section III. Section IV provides a high-level proof of the results, numerical results are collected together in Section V, and conclusions in Section VI. Full proofs are available in the extended version of this paper, available on arXiv [4].

## Ii Q-learning for Optimal Stopping

### Ii-a Notation

The following notation is useful for the convergence analysis. For each θ ∈ ℝ^d, we denote by ϕ_θ the corresponding policy:

 ϕθ(x):=I{cs(x)≤Qθ(x)} (14)

For any function f with domain X, the operators S_θ and S^c_θ are defined as the simple products,

$$S_\theta f(x) := \mathbb{I}\{Q^\theta(x) < c_s(x)\}\,f(x), \qquad S^c_\theta f(x) := \mathbb{I}\{c_s(x)\le Q^\theta(x)\}\,f(x) \tag{15}$$

Observe that for each θ, S_θ f + S^c_θ f = f.

The objective (13) can then be expressed:

 A(θ∗)θ∗+β¯¯cs(θ∗)+b∗=0, (16)

where, for each θ, A(θ) is a d × d matrix, and b* and c̄_s(θ) are d-dimensional vectors:

$$A(\theta) := \mathbb{E}\bigl[\psi(X_n)\,\beta S_\theta\psi^{\mathsf T}(X_{n+1})-\psi(X_n)\psi^{\mathsf T}(X_n)\bigr] \tag{17}$$

$$b^* := \mathbb{E}\bigl[\psi(X_n)\,c(X_n)\bigr] \tag{18}$$

$$\bar c_s(\theta) := \mathbb{E}\bigl[\psi(X_n)\,S^c_\theta c_s(X_{n+1})\bigr] \tag{19}$$

### Ii-B Zap Q-Learning

Before we introduce our main algorithm, it is useful to first consider a more general class of “matrix gain” Q-learning algorithms. Given a matrix gain sequence {G_n} and a scalar step-size sequence {α_n}, the corresponding matrix gain Q-learning algorithm for optimal stopping is given by the following recursion:

 θn+1 =θn+αn+1Gn+1ψ(Xn)dn+1 (20)

with {d_n} denoting the “temporal difference” sequence:

 dn+1:=c(Xn)+βmin(cs(Xn+1),Qθn(Xn+1))−Qθn(Xn)

The algorithm proposed in [15] is (20) with G_n ≡ I (the identity matrix). This is similar to the TD(λ) algorithm [14, 12].
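A single step of the recursion (20) can be sketched as follows; this is an illustration, with `psi`, `c`, and `c_s` assumed stand-ins for the basis and cost data:

```python
import numpy as np

def matrix_gain_q_step(theta, G, x, x_next, psi, c, c_s, beta, alpha):
    """One step of the matrix-gain recursion (20) with TD term d_{n+1}."""
    Q = lambda y: theta @ psi(y)                           # Q_theta(y) = theta' psi(y)
    d = c(x) + beta * min(c_s(x_next), Q(x_next)) - Q(x)   # temporal difference
    return theta + alpha * G @ psi(x) * d
```

Setting `G = np.eye(d)` recovers the algorithm of [15].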

The fixed point Kalman filter algorithm of [6] can also be written as a special case of (20): we have G_n = [Σ̂ψ_n]†, where M† denotes the pseudo-inverse of a matrix M, and Σ̂ψ_n is an estimate of the covariance matrix Σψ defined in (11); the estimate can be obtained using the standard Monte-Carlo recursion:

$$\hat\Sigma^\psi_{n+1}=\hat\Sigma^\psi_n+\alpha_{n+1}\bigl[\psi(X_n)\psi^{\mathsf T}(X_n)-\hat\Sigma^\psi_n\bigr] \tag{21}$$
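One step of the recursion (21) might look like the following sketch, with `psi_x` standing in for ψ(X_n):

```python
import numpy as np

def sigma_psi_step(Sigma_hat, psi_x, alpha):
    """One step of the Monte-Carlo recursion (21) estimating Sigma_psi."""
    return Sigma_hat + alpha * (np.outer(psi_x, psi_x) - Sigma_hat)
```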

In the Zap-Q algorithm, the matrix gain sequence is designed so that the asymptotic covariance of the resulting algorithm is minimized (see Section III for details). It uses the matrix gain G_n = −Â_n⁻¹, with Â_n an estimate of A(θ_n), and A(θ) defined in (17).

The term inside the expectation in (17), following the substitution θ = θ_n, is denoted

$$A_{n+1}:=\psi(X_n)\bigl[\beta S_{\theta_n}\psi(X_{n+1})-\psi(X_n)\bigr]^{\mathsf T} \tag{22}$$

Using (22), the matrix A(θ_n) is recursively estimated via stochastic approximation in the Zap-Q algorithm:
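The two coupled recursions — the matrix estimate (23) and the parameter update (24) with gain −Â_{n+1}⁻¹ — can be sketched as below. This is an illustrative reconstruction, not the paper's Algorithm 1 verbatim; `psi`, `c`, and `c_s` are hypothetical stand-ins:

```python
import numpy as np

def zap_q_step(theta, A_hat, x, x_next, psi, c, c_s, beta, alpha, gamma):
    """One Zap-Q iteration: fast recursion tracks A(theta_n),
    slow recursion updates theta with the matrix gain -A_hat^{-1}."""
    Q = lambda y: theta @ psi(y)
    d = c(x) + beta * min(c_s(x_next), Q(x_next)) - Q(x)   # TD term
    cont = 1.0 if Q(x_next) < c_s(x_next) else 0.0         # S_theta indicator, eq. (15)
    A_n = np.outer(psi(x), beta * cont * psi(x_next) - psi(x))  # eq. (22)
    A_hat = A_hat + gamma * (A_n - A_hat)                  # fast time-scale
    theta = theta - alpha * np.linalg.solve(A_hat, psi(x) * d)  # slow time-scale
    return theta, A_hat
```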

### Ii-C Discussion

Algorithm 1 belongs to a general class of algorithms known as two-time-scale stochastic approximation [2]: the recursion (24) on the slower time-scale estimates the parameter vector θ*, and, for each n, the recursion (23) on the faster time-scale estimates the mean A(θ_n). The step-size sequences {α_n} and {γ_n} have to satisfy the standard requirements for separation of time-scales [2]: for ϱ ∈ (0.5, 1), we choose

 αn=1/n,γn=1/nϱ (25)

For each θ ∈ ℝ^d, consider the following terms:

$$b(\theta) = -A(\theta)\theta-\beta\bar c_s(\theta) \tag{26a}$$

$$c^\theta(x) = Q^\theta(x)-\mathbb{E}\bigl[\beta\min\bigl(c_s(X_{n+1}),Q^\theta(X_{n+1})\bigr)\mid X_n=x\bigr] \tag{26b}$$

The vector b(θ) is analogous to b* in (16), and (26b) recalls the Bellman equation (5). The following Prop. II.1 is immediate from these definitions. It shows that b(θ) is the “projection” of the cost function c^θ, similar to how b* is related to c through (18).

###### Proposition II.1

For each θ ∈ ℝ^d, we have:

 b(θ)=E[cθ(Xn)ψ(Xn)] (27)

where the expectation is in steady state. In particular,

 b∗=b(θ∗)

## Iii Assumptions and Main Results

### Iii-a Preliminaries

We first summarize preliminary results here that will be used to establish the main results in the following sections. The proofs of all the technical results are contained in the Appendix of [4].

We start with the contraction property of the operator F defined in (7). The following result is obtained directly from [15].

###### Lemma III.1

The dynamic programming operator F defined in (7) satisfies:

 ∥FQ−FQ′∥≤β∥Q−Q′∥,Q,Q′∈L2(π).

Furthermore, Q* is the unique fixed point of F in L₂(π).

Recall that Q^θ is defined in (10). Similar to the operator F, for each θ we define operators H_θ and F_θ that operate on functions Q ∈ L₂(π) as follows:

$$H_\theta Q(x) = \begin{cases} Q(x), & \text{if } Q^\theta(x) < c_s(x)\\ c_s(x), & \text{otherwise} \end{cases} \tag{28}$$

$$F_\theta Q(x) := c(x)+\beta\,\mathbb{E}\bigl[H_\theta Q(X_{n+1})\mid X_n=x\bigr] \tag{29}$$

The following Lemma is a slight extension of Lemma III.1.

###### Lemma III.2

For each θ ∈ ℝ^d, the operator F_θ satisfies:

 ∥FθQ−FθQ′∥≤β∥Q−Q′∥,Q,Q′∈L2(π)

The next result is a direct consequence of Lemma III.2, and establishes the invertibility of the matrix A(θ) for any θ:

###### Lemma III.3

For each θ ∈ ℝ^d,

1. The matrix A(θ) defined in (17) satisfies:

$$-v^{\mathsf T}A(\theta)v\ge(1-\beta)\,v^{\mathsf T}\Sigma_\psi v, \tag{30}$$

for each v ∈ ℝ^d, with Σψ defined in (11).

2. The real parts of the eigenvalues of A(θ) are uniformly bounded away from 0, and ∥A(θ)⁻¹∥ is uniformly bounded.

Prop. II.1 implies a Lipschitz bound on the function b(·) defined in (26a):

###### Lemma III.4

The mapping b: ℝ^d → ℝ^d is Lipschitz: for some ℓ₁ < ∞, and each θ₁, θ₂ ∈ ℝ^d,

 ∥b(θ1)−b(θ2)∥≤ℓ1∥θ1−θ2∥

### Iii-B Assumptions & Main Result

The following assumptions are made throughout:

Assumption A1: The Markov chain is uniformly ergodic on the compact state space X, with a unique invariant probability measure π (cf. (2)).

Assumption A2: There exists a unique solution θ* to the objective (13).

Assumption A3.1: The conditional distribution of X_{n+1} given X_n = x has a density, p(·∣x). This density is also assumed to have a uniformly bounded likelihood ratio with respect to a fixed Gaussian density.

Assumption A3.2: It is assumed moreover that the terminal cost c_s is in the span of the basis {ψ_i}.

Assumption A4: The parameter sequence {θ_n} is bounded a.s..

Assumption A3 consists of technical conditions required for the proof of convergence. The density assumption is imposed to ensure that conditional expectations of functions such as S_θψ(X_{n+1}), given X_n = x, are smooth as a function of θ. Furthermore, it implies that Σψ is positive definite.

Assumption A4 is a standard assumption in much of the recent stochastic approximation literature. We conjecture that the boundedness can be established via an extension of the results in [3, 2]. The “ODE at infinity” posed there is stable as required, but the extension of the results to the current setting of two time-scale stochastic approximation with Markovian noise is the only challenge.

The main result of this paper establishes convergence of iterates obtained using Algorithm 1:

###### Theorem III.5

Suppose that Assumptions A1-A4 hold. Then,

• The parameter sequence {θ_n} obtained using the Zap-Q algorithm converges to θ* a.s., where θ* satisfies (13).

• An ODE approximation holds for the sequences {θ_n} and {b(θ_n)}, by continuous-time functions (w(t), b(t)) satisfying

$$\frac{d}{dt}b(t)=-b(t)+b^*,\qquad b(t)=-A(w(t))\,w(t)-\beta\bar c_s(w(t)) \tag{31}$$

The term ODE approximation is standard in the SA literature: for s ≥ 0, let w^s denote the solution to:

 ddtws(t)=ξ(ws(t)),ws(s)=¯¯¯¯w(s) (32)

where ξ denotes the vector field of the ODE, and w̄ denotes the continuous-time process constructed from the sequence via linear interpolation. We say that the ODE approximation holds for the sequence if the following is true for any T > 0:

 lims→∞supt∈[s,s+T]∥¯¯¯¯w(t)−ws(t)∥=0,a.s.

Details are made precise in Section IV-B. The optimality of the algorithm in terms of the asymptotic variance is discussed next.

### Iii-C Asymptotic Variance

The asymptotic covariance of any algorithm is defined to be the following limit:

$$\Sigma_\Theta=\lim_{n\to\infty} n\,\mathbb{E}\bigl[\tilde\theta_n\tilde\theta_n^{\mathsf T}\bigr],\qquad \tilde\theta_n:=\theta_n-\theta^* \tag{33}$$

Consider the matrix gain Q-learning algorithm (20), and suppose the matrix sequence is constant: G_n ≡ G. Also, suppose that all eigenvalues λ of GA(θ*) satisfy Re(λ) < −1/2. Following standard analysis (see Section 2.2 of [8] and references therein), it can be shown that, under general assumptions, the asymptotic covariance of the algorithm (20) can be obtained as the solution Σ_Θ to the Lyapunov equation:

$$\Bigl(GA(\theta^*)+\tfrac{1}{2}I\Bigr)\Sigma_\Theta+\Sigma_\Theta\Bigl(GA(\theta^*)+\tfrac{1}{2}I\Bigr)^{\mathsf T}+G\Sigma_E G^{\mathsf T}=0 \tag{34}$$

where Σ_E is the “noise covariance matrix”, defined as follows.

A “noise sequence” is defined as

 En:=~An+1θ∗+~bn+1+~An+1~θn (35)

where Ã_{n+1} and b̃_{n+1} denote the centered sequences A_{n+1} − A(θ_n) and b_{n+1} − E[b_{n+1}], and θ̃_n := θ_n − θ*, with A_{n+1} defined in (22), A(θ) defined in (17),

$$b_{n+1}:=\psi(X_n)\bigl[c(X_n)+\beta S^c_{\theta_n}c_s(X_{n+1})\bigr] \tag{36}$$

and b* defined in (18). The noise covariance matrix Σ_E is then defined as the limit

$$\Sigma_E=\lim_{T\to\infty}\frac{1}{T}\,\mathbb{E}\bigl[S_T S_T^{\mathsf T}\bigr] \tag{37}$$

in which S_T := E_1 + ⋯ + E_T, and the expectation is in steady state.

#### Optimality of the asymptotic covariance

The asymptotic covariance can be obtained as a solution to (34) only when all eigenvalues λ of GA(θ*) satisfy Re(λ) < −1/2. If there exists at least one eigenvalue with Re(λ) ≥ −1/2, then, under general conditions, it can be shown that the asymptotic covariance is not finite [8]. This implies that the rate of convergence of θ_n to θ* is slower than 1/√n.

It is possible to optimize the covariance over all matrix gains using (34). Specifically, it can be shown that letting G = G* := −A(θ*)⁻¹ results in the minimum asymptotic covariance Σ*, where

$$\Sigma^*=A(\theta^*)^{-1}\,\Sigma_E\,\bigl(A(\theta^*)^{-1}\bigr)^{\mathsf T} \tag{38}$$

That is, for any other gain G, denoting Σ_Θ(G) to be the asymptotic covariance of the algorithm (20) obtained as a solution to the Lyapunov equation (34), the difference Σ_Θ(G) − Σ* is positive semi-definite. This is in particular true for the algorithms proposed in [15] and [6].
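Numerically, the Lyapunov equation (34) can be solved with a standard solver, and the optimality of the gain G = −A(θ*)⁻¹ checked directly; a sketch, with the matrices assumed given as inputs:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def asymptotic_cov(G, A, Sigma_E):
    """Solve (G A + I/2) X + X (G A + I/2)^T + G Sigma_E G^T = 0 for X,
    valid when all eigenvalues of G A have real part < -1/2 (eq. (34))."""
    M = G @ A + 0.5 * np.eye(A.shape[0])
    return solve_continuous_lyapunov(M, -G @ Sigma_E @ G.T)
```

With G = −A⁻¹ the matrix M reduces to −I/2 and the solution collapses to Σ* in (38).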

The Zap Q algorithm is specifically designed to achieve the optimal asymptotic covariance. A full proof of optimality will require extra effort. Thm. III.5 tells us that we have the required convergence for this algorithm. Provided we can obtain additional tightness bounds for the scaled error √n(θ_n − θ*), we obtain a functional Central Limit Theorem with optimal covariance [2]. Minor additional bounds ensure convergence of the limit (33) to the optimal covariance Σ*.

The next section is dedicated to the proof of Thm. III.5.

## Iv Proof of Theorem iii.5

### Iv-a Overview of the Proof

Unlike the martingale-difference assumptions of standard stochastic approximation, the noise in our algorithm is Markovian. The first part of this section establishes that our noise sequence satisfies the so-called ODE-friendly property [13]: a vector-valued sequence of random variables {E_k} will be called ODE-friendly if it admits the decomposition,

 Ek=Δk+Tk−Tk−1+εk,k≥1 (39)

in which:

1. {Δ_k} is a martingale-difference sequence satisfying a uniform bound a.s. for all k;

2. {T_k} is a bounded sequence;

3. The final sequence {ε_k} is bounded and satisfies:

$$\sum_{k=1}^{\infty}\gamma_k\|\varepsilon_k\|<\infty\quad\text{a.s.} \tag{40}$$

Intuitively, if an error sequence satisfies the above properties, its asymptotic effect on the parameter update is zero. This allows us to argue that the matrix gain estimate Â_n is close to the mean A(θ_n) in the fast time-scale recursion (23). We then consider the slow time-scale recursion (24), and obtain the ODE approximations for {θ_n} and the expected projected cost {b(θ_n)}. The fact that these ODEs are stable (with a unique stationary point) then establishes the convergence of the algorithm.

### Iv-B ODE Analysis

The remainder of this section is dedicated to the proof of the ODE approximation (31). The construction of an approximating ODE involves first defining a continuous-time process w̄. Denote

$$t_n=\sum_{i=1}^{n}\alpha_i,\qquad n\ge 1,\quad t_0=0, \tag{41}$$

and define w̄(t_n) = θ_n at these time points, with the definition extended to all t ≥ 0 via linear interpolation.

Along with the piecewise linear continuous-time process w̄, denote by b̄ the piecewise linear continuous-time process defined similarly, with b̄(t_n) = b(θ_n), n ≥ 0. Furthermore, for each t, denote

$$\bar b_t\equiv b(\bar w_t):=-A(\bar w_t)\,\bar w_t-\beta\bar c_s(\bar w_t)$$

To construct an ODE, it is convenient first to obtain an alternative and suggestive representation for the pair of equations (23,24).

Lemma IV.1 establishes that the error sequences that appear in the updates for and are “ODE friendly”.

###### Lemma IV.1

The pair of equations (23, 24) can be expressed,

$$\theta_{n+1}=\theta_n-\alpha_{n+1}\hat A_{n+1}^{-1}\bigl[A(\theta_n)\theta_n+\beta\bar c_s(\theta_n)+b^*+\mathcal{E}^A_{n+1}\theta_n+\mathcal{E}^\theta_{n+1}\bigr] \tag{42a}$$

$$\hat A_{n+1}=\hat A_n+\gamma_{n+1}\bigl[A(\theta_n)-\hat A_n+\mathcal{E}^A_{n+1}\bigr] \tag{42b}$$

in which the sequences {𝓔^A_n} and {𝓔^θ_n} are ODE-friendly.

The following result establishes that the matrix Â_n, recursively obtained by (23), approximates the mean A(θ_n):

###### Lemma IV.2

Suppose the sequence {𝓔^A_n} is ODE-friendly. Then,

1. Â_n − A(θ_n) → 0 as n → ∞, a.s. Consequently, Â_n fails to be invertible only finitely often, and the sequences {Â_n} and {Â_n⁻¹} are bounded.

With the definition of ODE approximation below (32), we have:

###### Lemma IV.3

The ODE approximation for w̄ holds: with probability one, the piecewise linear continuous function w̄ asymptotically tracks the ODE:

$$\frac{d}{dt}w_t=A(w_t)^{-1}\bigl[b(w_t)-b^*\bigr] \tag{43}$$

For a fixed (but arbitrary) time horizon T, we define two families of uniformly bounded, uniformly Lipschitz-continuous functions, obtained as time-shifts of w̄ and b̄ restricted to [0, T]. Sub-sequential limits of these families are denoted w and b respectively.

We recast the ODE limit of the projected cost as follows:

###### Lemma IV.4

For any sub-sequential limits (w, b),

1. they satisfy b_t = −A(w_t)w_t − βc̄_s(w_t);

2. for a.e. t,

$$\frac{d}{dt}b_t=-A(w_t)\frac{d}{dt}w_t=-b_t+b^* \tag{44}$$

##### Proof of Thm. iii.5

Boundedness of the sequences {Â_n} and {Â_n⁻¹} is established in Lemma IV.2. Together with the boundedness assumption on {θ_n} (Assumption A4), the ODE approximation is established in Lemma IV.4. Result (i) then follows from these two results using standard arguments from [2].

## V Numerical Results

In this section we illustrate the performance of the Zap Q-learning algorithm in comparison with existing techniques, on a finance problem that has been studied in prior work [6, 15]. We observe that the Zap algorithm performs very well, despite the fact that some of the technical assumptions made in Section III do not hold.

### V-a Finance model

The following finance example is used in [6, 15] to evaluate the performance of their algorithms for optimal stopping. The reader is referred to these references for complete details of the problem set-up.

The Markovian state process considered is a vector of price ratios, in which the underlying process is a geometric Brownian motion (derived from an exogenous price process). This uncontrolled Markov chain is positive Harris recurrent on its state space, which is not compact; the chain is nonetheless uniformly ergodic.

The “time to exercise” is modeled as a stopping time τ. The associated expected reward is defined as E[β^τ r(X_τ)], with the reward function r and discount factor β ∈ (0,1) fixed. The objective of finding a policy that maximizes the expected reward is modeled as an optimal stopping time problem.

The value function is defined to be the infimum (4), with c ≡ 0 and c_s = −r (the objective in Section I is to minimize the expected cost, while here the objective is to maximize the expected reward). The associated Q-function is defined using (5), and the associated optimal policy using (6).

When the Q-function is linearly approximated using (10), for a fixed parameter vector θ, the associated value function can be expressed:

$$h_{\phi_\theta}(x):=\mathbb{E}\bigl[\beta^{\tau_\theta}r(X_{\tau_\theta})\mid X_0=x\bigr], \tag{45}$$

where,

$$\tau_\theta:=\min\{n\ge 0:\phi_\theta(X_n)=1\},\qquad \phi_\theta(x):=\mathbb{I}\{r(x)\ge Q^\theta(x)\} \tag{46}$$

Given a parameter estimate θ and an initial state x, the corresponding average reward (45) was estimated using Monte-Carlo simulation in the numerical experiments that follow.
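Such a Monte-Carlo estimate of (45) can be sketched as follows; `r`, `psi`, and `sample_next` are assumed stand-ins for the model, not the authors' exact experimental code:

```python
import numpy as np

def mc_reward(theta, psi, r, beta, sample_next, x0, n_paths=1000, horizon=500, rng=0):
    """Monte-Carlo estimate of h_{phi_theta}(x0): run the chain until the
    policy stops (r(x) >= Q_theta(x), eq. (46)), accumulate beta^tau r(X_tau)."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for n in range(horizon):
            if r(x) >= theta @ psi(x):        # stopping rule phi_theta
                total += beta ** n * r(x)
                break
            x = sample_next(x, rng)
    return total / n_paths
```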

### V-B Approximation & Algorithms

Along with the Zap Q-learning algorithm, we also implement the fixed point Kalman filter algorithm of [6] to estimate θ*. This algorithm is given by the update equations (20) and (21). The computational as well as storage complexities of the least squares Q-learning algorithm (and its variants) [18] are too high for practical implementation.

### V-C Implementation Details

The experimental setting of [6, 15] is used to define the set of basis functions and other parameters. We choose the dimension d of the parameter vector and the basis functions as defined in [6]. The objective here is to compare the performance of the fixed point Kalman filter algorithm with that of the Zap-Q learning algorithm in terms of the resulting average reward (45).

Recall that the step-sizes for the Zap Q-learning algorithm are given in (25). We fix γ_n for all implementations of the Zap algorithm but, similar to what is done in [6], experiment with different choices for α_n. Specifically, in addition to α_n = 1/n, we let:

$$\alpha_n=\frac{g}{b+n} \tag{47}$$

with b and g fixed across several choices. In addition, we also implement Zap with a step-size choice for which, based on the discussion in Section III-C, we expect infinite asymptotic variance.

In the implementation of the fixed point Kalman filter algorithm, as suggested by the authors, we choose the step-size α_n = 1/n for the matrix gain update rule in (21), and step-sizes of the form (47) for the parameter update in (20), for several choices of g and b.

The number of iterations is fixed to be the same for each of the algorithms.

### V-D Experimental Results

The average reward histograms were obtained by the following steps: we run many parallel simulations of each of the algorithms to obtain as many estimates of θ*. Each of these estimates defines a policy ϕ_θ, defined in (46). We then estimate the corresponding average reward (45) using Monte-Carlo.

Along with the average discounted rewards, we also plot histograms to visualize the asymptotic variance (33) for each of the algorithms. The theoretical values of the covariance matrices Σ_Θ and Σ* were estimated through the following steps: the matrices A(θ*) and Σψ (the limit of the matrix gain used in [6]) were estimated via Monte-Carlo. This requires an estimate of θ*, obtained using the Zap-Q algorithm with a long run. This estimate of θ* was also used to estimate the covariance matrix Σ_E defined in (37) using the batch means method. The matrices Σ_Θ and Σ* were then obtained using (34) and (38), respectively.
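For reference, a minimal batch-means estimator of Σ_E in (37) splits the noise path into batches, scales each batch sum by 1/√T, and averages the outer products. A sketch (`S` is an assumed array holding sampled noise vectors E_n row-wise):

```python
import numpy as np

def batch_means_cov(S, n_batches=20):
    """Batch-means estimate of Sigma_E = lim (1/T) E[S_T S_T^T], eq. (37)."""
    T = len(S) // n_batches
    chunks = S[: n_batches * T].reshape(n_batches, T, -1)
    sums = chunks.sum(axis=1) / np.sqrt(T)          # S_T / sqrt(T) per batch
    return np.einsum('bi,bj->bij', sums, sums).mean(axis=0)
```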

Fig. 1 contains the histograms of the average rewards obtained using the above algorithms. Fig. 2 contains the histograms of along with a plot of the theoretical prediction.

It was observed that the eigenvalues of the matrix A(θ*) have a wide spread: the condition number is large. Despite the badly conditioned matrix gain, it is observed in Fig. 1 that the average rewards of the Zap-Q algorithms are better than those of its competitors. It is also observed that the algorithm is robust to the choice of step-sizes. In Fig. 2 we observe that the asymptotic behavior of the algorithms is a close match to the theoretical predictions. In particular, the large variance of Zap-Q with the single-time-scale step-size confirms that the asymptotic variance is very large (in the limit, infinite) when the eigenvalue condition of Section III-C fails.

## Vi Conclusion

In this paper, we extend the theory for the Zap Q-learning algorithm to a linear function approximation setting, with application to optimal stopping. We prove convergence of the algorithm using ODE analysis, and also observe that it achieves the optimal asymptotic variance. The extension of the previous analysis to the current setting is not trivial: analysis of Zap-Q in the tabular case is much simpler owing to its special structure, and, in general, a convergence theory for Q-learning algorithms in function approximation settings does not exist. More importantly, we believe that the ODE analysis obtained in this paper provides important insight into the behavior of the Zap-Q algorithm, even in a function approximation setting. This may be a starting point for the analysis of Q-learning algorithms in general function approximation settings, which is ongoing work.

## References

• [1] D. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
• [2] V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Hindustan Book Agency and Cambridge University Press (jointly), Delhi, India and Cambridge, UK, 2008.
• [3] V. S. Borkar and S. P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM J. Control Optim., 38(2):447–469, 2000. (also presented at the IEEE CDC, December, 1998).
• [4] S. Chen, A. M. Devraj, A. Bušić, and S. P. Meyn. Zap Q-learning for optimal stopping time problems. arXiv preprint, 2019.
• [5] G. Cheng. Note on some upper bounds for the condition number. Journal of Mathematical Inequalities, 8(2):369–374, 2014.
• [6] D. Choi and B. Van Roy. A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning. Discrete Event Dynamic Systems: Theory and Applications, 16(2):207–239, 2006.
• [7] A. M. Devraj and S. Meyn. Zap Q-learning. In Advances in Neural Information Processing Systems, pages 2235–2244, 2017.
• [8] A. M. Devraj and S. P. Meyn. Fastest convergence for Q-learning. ArXiv e-prints, July 2017.
• [9] S. P. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Cambridge University Press, Cambridge, second edition, 2009. Published in the Cambridge Mathematical Library. 1993 edition online.
• [10] A. Nedic and D. Bertsekas. Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems: Theory and Applications, 13(1-2):79–110, 2003.
• [11] A. Shwartz and A. Makowski. On the Poisson equation for Markov chains: existence of solutions and parameter dependence. Technical Report, Technion—Israel Institute of Technology, Haifa 32000, Israel., 1991.
• [12] R. S. Sutton. Learning to predict by the methods of temporal differences. Mach. Learn., 3(1):9–44, 1988.
• [13] V. B. Tadic and S. P. Meyn. Asymptotic properties of two time-scale stochastic approximation algorithms with constant step sizes. In Proceedings of the 2003 American Control Conference, 2003., volume 5, pages 4426–4431. IEEE, 2003.
• [14] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Trans. Automat. Control, 42(5):674–690, 1997.
• [15] J. N. Tsitsiklis and B. Van Roy. Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives. IEEE Trans. Automat. Control, 44(10):1840–1851, 1999.
• [16] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King’s College, Cambridge, Cambridge, UK, 1989.
• [17] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
• [18] H. Yu and D. P. Bertsekas. Q-learning algorithms for optimal stopping based on least squares. In 2007 European Control Conference (ECC), pages 2368–2375. IEEE, 2007.

## Vii Appendix

##### Proof of Lemma iii.2

Based on the definition (29), we have:

$$\|F_\theta Q-F_\theta Q'\|=\beta\|PH_\theta Q-PH_\theta Q'\|\le\beta\|H_\theta Q-H_\theta Q'\|\le\beta\|Q-Q'\|,$$

where the first inequality follows from the fact that ∥P∥ ≤ 1 (with ∥·∥ here the induced operator norm in L₂(π)). The last inequality is true because:

 HθQ(x)−HθQ′(x)=Sθ(Q−Q′)(x),x∈X

##### Proof of Lemma iii.3

To show result (i), we rewrite A(θ) as the difference of two matrices, A(θ) = A_NL(θ) − A_L, denoting A_NL(θ) to be the part of the matrix that depends on θ and A_L the part that is independent of θ:

$$A_{NL}(\theta):=\mathbb{E}\bigl[\psi(X_n)\,\beta S_\theta\psi^{\mathsf T}(X_{n+1})\bigr],\qquad A_L:=\mathbb{E}\bigl[\psi(X_n)\psi^{\mathsf T}(X_n)\bigr]$$

Proving (30) is equivalent to proving:

$$v^{\mathsf T}A_{NL}(\theta)v-v^{\mathsf T}A_Lv\le(\beta-1)\,v^{\mathsf T}\Sigma_\psi v,\qquad v\in\mathbb{R}^d.$$

The proof is easier to follow if we suppose that the vector v is a difference of two parameter vectors, v = θ₁ − θ₂. Expanding the left-hand side of the above inequality:

$$v^{\mathsf T}A_{NL}(\theta)v=\bigl\langle Q^{\theta_1}-Q^{\theta_2},\,\beta P\bigl(H_\theta Q^{\theta_1}-H_\theta Q^{\theta_2}\bigr)\bigr\rangle_\pi$$

Next, using the Cauchy-Schwarz inequality and the fact that ∥P∥ ≤ 1,

$$v^{\mathsf T}A_{NL}(\theta)v\le\beta\|Q^{\theta_1}-Q^{\theta_2}\|\,\|H_\theta Q^{\theta_1}-H_\theta Q^{\theta_2}\|\le\beta\|Q^{\theta_1}-Q^{\theta_2}\|^{2}=\beta\,v^{\mathsf T}A_Lv=(\beta-1)\,v^{\mathsf T}A_Lv+v^{\mathsf T}A_Lv.$$

Rearranging the terms, and noting that A_L = Σψ, the statement of the Lemma follows:

$$v^{\mathsf T}A(\theta)v=v^{\mathsf T}A_{NL}(\theta)v-v^{\mathsf T}A_Lv\le(\beta-1)\,v^{\mathsf T}\Sigma_\psi v \tag{48}$$

Next, for a fixed matrix A(θ) with eigenvalue-eigenvector pair (λ_A, v), v = a + bi, we consider

$$v^*A(\theta)v=(a^{\mathsf T}-b^{\mathsf T}i)A(\theta)(a+bi)=a^{\mathsf T}A(\theta)a+b^{\mathsf T}A(\theta)b+\bigl[a^{\mathsf T}A(\theta)b-b^{\mathsf T}A(\theta)a\bigr]i$$

where v* denotes the conjugate transpose of v. With A(θ)v = λ_A v, it follows that

$$\mathrm{Re}\{\lambda_A\}\,v^*v=\mathrm{Re}\{v^*A(\theta)v\}=a^{\mathsf T}A(\theta)a+b^{\mathsf T}A(\theta)b$$

Let λψ > 0 be the smallest eigenvalue of Σψ; by the inequality (48), the following relation holds:

$$\mathrm{Re}\{\lambda_A\}\,v^*v=a^{\mathsf T}A(\theta)a+b^{\mathsf T}A(\theta)b\le(\beta-1)\lambda_\psi\bigl[a^{\mathsf T}a+b^{\mathsf T}b\bigr]=(\beta-1)\lambda_\psi\,v^*v$$

Therefore, Re{λ_A} is negative and bounded above by (β−1)λψ. For the last part, ∥A(θ)⁻¹∥ is bounded using an inequality from [5]:

 (49)

where d is the dimension. Given the bound on the eigenvalues of A(θ) and the compactness of the state space X, there exists a constant C < ∞ such that

 (50)

The claim of uniform boundedness of ∥A(θ)⁻¹∥ then follows.

##### Proof of Lemma iii.4

For any two parameter vectors θ₁, θ₂ ∈ ℝ^d, we have:

 ∥b(θ1)−b(θ2)∥ =∥ψ(−FQθ1+FQθ2+Qθ1−Qθ2)∥ ≤∥ψ(FQθ1−FQθ