Convergence Guarantees of Policy Optimization Methods for Markovian Jump Linear Systems

Abstract

Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. In this paper, we investigate the convergence of policy optimization for quadratic control of Markovian jump linear systems (MJLS). First, we study the optimization landscape of direct policy optimization for MJLS, and, in particular, show that despite the non-convexity of the resultant problem the unique stationary point is the global optimal solution. Next, we prove that the Gauss-Newton method and the natural policy gradient method converge to the optimal state feedback controller for MJLS at a linear rate if initialized at a controller which stabilizes the closed-loop dynamics in the mean square sense. We propose a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, we present a numerical example to support our theory. Our work brings new insights for understanding the performance of policy learning methods on controlling unknown MJLS.

I Introduction

Recently, reinforcement learning (RL) [1] has achieved impressive performance on a class of continuous control problems including locomotion [2] and robot manipulation [3]. Policy-based optimization is the main engine behind these RL applications [4]. Specifically, the natural policy gradient method [5] and several related methods including TRPO [6], natural AC [7], and PPO [8] are among the most popular RL algorithms for continuous control tasks. These methods enable flexible policy parameterizations, and are end-to-end in the sense that the control performance metrics are directly optimized.

Despite the empirical successes of policy optimization methods, how to choose these algorithms for a specific control task is still more of an art than a science [9, 10]. This motivates a recent research trend focusing on understanding the performances of RL algorithms on simplified benchmarks. Specifically, significant research has recently been conducted to understand the performance of various model-free or model-based RL algorithms on the classic Linear Quadratic Regulator (LQR) problem [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. In [11], it is shown that despite the non-convexity in the objective function, policy gradient methods can still provably learn the optimal LQR controller. This provides a good sanity check for policy optimization on further control applications.

Built upon the recent progress on understanding RL for the LQR problem, this paper moves one step further and studies policy optimization for Markov Jump Linear Systems (MJLS) [22] from a theoretical perspective. MJLS form an important class of systems that arise in many control applications [23, 24, 25, 26, 27, 28]. Recently, stochastic methods in machine learning have also been modeled as jump systems [29, 30]. Research on MJLS has great practical value and, at the same time, provides many interesting new theoretical problems. In the classic LQR problem, one aims to control a linear time-invariant (LTI) system whose state/input matrices do not change over time. In contrast, the state/input matrices of a Markov jump linear system are functions of a jump parameter that is sampled from an underlying Markov chain. Consequently, the behaviors of MJLS are very different from those of LTI systems. Controlling unknown MJLS poses many new challenges over traditional LQR due to the appearance of this Markov jump parameter. For example, in a model-based approach, one has to learn both the state/input matrices and the transition probabilities of the jump parameter; here, it is the coupling between the state/input matrices and the jump parameter distribution that causes the main difficulty. Therefore, the quadratic control of MJLS is a meaningful benchmark for further understanding of RL algorithms.

Obviously, studying policy optimization on MJLS control problems is important for further understanding of policy-based RL algorithms. In this paper, we present various convergence guarantees for policy optimization methods on the quadratic control of MJLS. First, we study the optimization landscape of direct policy optimization for MJLS, and demonstrate that despite the non-convexity of the resultant problem, the unique stationary point is the global optimal solution. Next, we prove that the Gauss-Newton method and the natural policy gradient method converge to the optimal state feedback controller for MJLS at a linear rate if a stabilizing initial controller is used. We introduce a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, numerical simulations are provided to support our theory.

The most relevant reference for our paper is [11]. Our results generalize the convergence theory of the Gauss-Newton method and the natural policy gradient method in [11] to the MJLS case. This extension is non-trivial. Specifically, one key issue in the convergence proof is to ensure that the iterates never wander into the region of instability. In [11], the system is LTI and the stability argument can be made using properties of the spectral radius of the closed-loop state matrix. For MJLS, one cannot directly make such arguments anymore due to the stochastic nature of the system. Instead, we propose a novel Lyapunov argument to show that the resultant controller always stabilizes the MJLS in the mean square sense along the optimization trajectories of the Gauss-Newton method and the natural policy gradient method, provided the learning rates are chosen properly.

II Background and Preliminaries

II-A Notation

We denote the set of real numbers by $\mathbb{R}$. For a square matrix $X$, we use the notation $X^T$, $\|X\|$, $\mathrm{tr}(X)$, and $\sigma_{\min}(X)$ to denote its transpose, spectral norm, trace, and minimum singular value, respectively. We write $X \succ 0$ and $X \succeq 0$ to indicate that $X$ is positive definite and positive semidefinite, respectively. Given matrices $\{D_i\}_{i=1}^{N}$, let $\mathrm{diag}(D_1, \ldots, D_N)$ denote the block diagonal matrix whose $i$-th diagonal block is $D_i$. Given a function $f$, we use $df$ to denote its total derivative.

II-B Quadratic Control of Markovian Jump Linear Systems

A Markovian jump linear system is governed by the following discrete-time state-space model

$x_{t+1} = A_{\omega(t)} x_t + B_{\omega(t)} u_t$,    (1)

where $x_t \in \mathbb{R}^d$ is the system state at time $t$, and $u_t \in \mathbb{R}^k$ corresponds to the control action at time $t$. The initial state $x_0$ is assumed to have a distribution $\mathcal{D}$. The system matrices $A_{\omega(t)} \in \mathbb{R}^{d \times d}$ and $B_{\omega(t)} \in \mathbb{R}^{d \times k}$ depend on the switching parameter $\omega(t)$, which takes values in $\Omega := \{1, 2, \ldots, N_s\}$ for each $t$. Obviously, we have $A_{\omega(t)} \in \{A_1, \ldots, A_{N_s}\}$ and $B_{\omega(t)} \in \{B_1, \ldots, B_{N_s}\}$ for all $t$. The jump parameter $\{\omega(t)\}_{t \ge 0}$ forms a discrete-time Markov chain taking values in $\Omega$. The transition probabilities and initial distribution of $\omega(t)$ are given by

$\mathbb{P}\big(\omega(t+1) = j \mid \omega(t) = i\big) = p_{ij}, \qquad \mathbb{P}\big(\omega(0) = i\big) = \pi_i$,    (2)

respectively. The transition probabilities satisfy $p_{ij} \ge 0$ and $\sum_{j=1}^{N_s} p_{ij} = 1$ for each $i \in \Omega$. The initial distribution satisfies $\sum_{i=1}^{N_s} \pi_i = 1$.

In this paper, we focus on the quadratic control problem whose objective is to choose the control actions to minimize the following cost function

$C = \mathbb{E}\left[\sum_{t=0}^{\infty} \left(x_t^T Q_{\omega(t)} x_t + u_t^T R_{\omega(t)} u_t\right)\right]$,    (3)

where it is assumed that $Q_i \succ 0$ and $R_i \succ 0$ for each $i \in \Omega$. This problem can be viewed as the MJLS counterpart of the standard LQR problem, and hence is termed the “MJLS LQR problem.” The optimal controller for this MJLS LQR problem, defined by the dynamics (1), the cost (3), and the switching probabilities (2), can be computed by solving a system of coupled algebraic Riccati equations [31], which we now describe. First, it is known that the optimal cost can be achieved by a linear state feedback of the form

$u_t = -K_{\omega(t)} x_t$,    (4)

with $K_i \in \mathbb{R}^{k \times d}$ for each $i \in \Omega$. One can solve for the gains $\{K_i\}_{i \in \Omega}$ as follows. Let $\mathcal{E}_i(P) := \sum_{j=1}^{N_s} p_{ij} P_j$. Formally, let $\{P_i\}_{i \in \Omega}$ be the unique positive definite solution to the following coupled algebraic Riccati equations:

$P_i = Q_i + A_i^T \mathcal{E}_i(P) A_i - A_i^T \mathcal{E}_i(P) B_i \left(R_i + B_i^T \mathcal{E}_i(P) B_i\right)^{-1} B_i^T \mathcal{E}_i(P) A_i$.    (5)

It can be shown that the linear state feedback controller that minimizes the cost function (3) is given by

$K_i = \left(R_i + B_i^T \mathcal{E}_i(P) B_i\right)^{-1} B_i^T \mathcal{E}_i(P) A_i$.    (6)

We remark that if the jump parameter deterministically cycles through the modes, i.e. $p_{ij} = 1$ for $j = (i \bmod N_s) + 1$ and $p_{ij} = 0$ otherwise, then the system is periodic with period $N_s$. Linear periodic systems have been widely studied [32, 33] and are just a special case of MJLS. If $N_s = 1$, then the MJLS reduces to a linear time-invariant (LTI) system.
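Numerically, the coupled AREs (5) and the gains (6) can be obtained by a Riccati-like fixed-point iteration. The sketch below is a minimal implementation under the assumption that the iteration converges for the given data (which holds under standard mean-square stabilizability and detectability conditions, but is not guaranteed in general); the function name and tolerances are our own choices.

```python
import numpy as np

def solve_coupled_are(A, B, Q, R, P_trans, iters=2000, tol=1e-10):
    """Fixed-point iteration on the coupled AREs (5); returns {P_i} and the gains (6)."""
    Ns, d = len(A), A[0].shape[0]
    P = [np.eye(d) for _ in range(Ns)]
    for _ in range(iters):
        # E_i(P) = sum_j p_ij P_j couples the modes through the Markov chain.
        EP = [sum(P_trans[i, j] * P[j] for j in range(Ns)) for i in range(Ns)]
        P_new = []
        for i in range(Ns):
            G = B[i].T @ EP[i]                       # B_i^T E_i(P)
            W = R[i] + G @ B[i]                      # R_i + B_i^T E_i(P) B_i
            P_new.append(Q[i] + A[i].T @ EP[i] @ A[i]
                         - A[i].T @ G.T @ np.linalg.solve(W, G @ A[i]))
        if max(np.linalg.norm(P_new[i] - P[i]) for i in range(Ns)) < tol:
            P = P_new
            break
        P = P_new
    EP = [sum(P_trans[i, j] * P[j] for j in range(Ns)) for i in range(Ns)]
    K = [np.linalg.solve(R[i] + B[i].T @ EP[i] @ B[i], B[i].T @ EP[i] @ A[i])
         for i in range(Ns)]
    return P, K
```

The returned gains follow the sign convention $u_t = -K_{\omega(t)} x_t$ of (4).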

II-C Policy Optimization for Quadratic Control of LTI Systems

Before proceeding to policy optimization of MJLS, here we review policy gradient methods for the quadratic control of LTI systems [11]. Consider the LTI system $x_{t+1} = A x_t + B u_t$ with an initial state distribution $x_0 \sim \mathcal{D}$ and a static state feedback controller $u_t = -K x_t$. We adopt a standard quadratic cost function which can be calculated as

$C(K) = \mathbb{E}_{x_0 \sim \mathcal{D}}\left[\sum_{t=0}^{\infty} x_t^T \left(Q + K^T R K\right) x_t\right]$.    (7)

Obviously, the cost in (7) can be computed as $C(K) = \mathbb{E}_{x_0 \sim \mathcal{D}}\left[x_0^T P_K x_0\right]$, where $P_K$ is the solution to the Lyapunov equation $P_K = Q + K^T R K + (A - BK)^T P_K (A - BK)$. It is also well known [34, 11] that the gradient of (7) with respect to $K$ can be calculated as

$\nabla C(K) = 2\left[\left(R + B^T P_K B\right) K - B^T P_K A\right] \Sigma_K$,

where $\Sigma_K$ is the state correlation matrix, i.e. $\Sigma_K = \mathbb{E}_{x_0 \sim \mathcal{D}} \sum_{t=0}^{\infty} x_t x_t^T$. Based on this gradient formula, one can optimize (7) using the (deterministic) policy gradient method $K' = K - \eta \nabla C(K)$, the natural policy gradient method $K' = K - \eta \nabla C(K) \Sigma_K^{-1}$, or the Gauss-Newton method $K' = K - \eta \left(R + B^T P_K B\right)^{-1} \nabla C(K) \Sigma_K^{-1}$. More explanations for these methods can be found in [11].

In [11], it is shown that there exists a unique optimal policy $K^*$ such that $C(K^*) = \min_K C(K)$ if $\mathbb{E}_{x_0 \sim \mathcal{D}}\left[x_0 x_0^T\right]$ is full rank. In addition, all of the above methods are shown to converge to $K^*$ at a linear rate if a stabilizing initial policy is used.
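A compact sketch of the three LTI updates reviewed above is given below; the helper names are ours, and the infinite sums defining $P_K$ and $\Sigma_K$ are truncated at a finite horizon, which is accurate only when $A - BK$ is Schur stable.

```python
import numpy as np

def lti_quantities(A, B, Q, R, K, X0, horizon=5000):
    """Compute P_K, Sigma_K and grad C(K) for the stabilizing LTI policy u = -K x."""
    Acl = A - B @ K
    P = np.zeros_like(Q)
    Sigma = np.zeros_like(Q)
    St = X0.copy()                                 # E[x_t x_t^T], starting from E[x_0 x_0^T]
    for _ in range(horizon):
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl      # Lyapunov recursion for P_K
        Sigma = Sigma + St                         # accumulate the state correlation matrix
        St = Acl @ St @ Acl.T
    grad = 2 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
    return P, Sigma, grad

def policy_gradient_step(K, grad, eta):
    return K - eta * grad

def natural_pg_step(K, grad, Sigma, eta):
    return K - eta * grad @ np.linalg.inv(Sigma)

def gauss_newton_step(K, grad, Sigma, P, B, R, eta):
    return K - eta * np.linalg.solve(R + B.T @ P @ B, grad @ np.linalg.inv(Sigma))
```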

III Policy Gradient and Optimization Landscape

Now we focus on the policy optimization of the MJLS LQR problem. Since we know the optimal cost can be achieved by a linear state feedback, it is reasonable to restrict the policy search to the class of linear state feedback controllers. Specifically, we can set $K := \{K_1, K_2, \ldots, K_{N_s}\}$, where each component $K_i$ is the feedback gain for the corresponding mode and the control action is given by $u_t = -K_{\omega(t)} x_t$. With this notation, we consider the following policy optimization problem whose decision variable is $K$.

Problem 1: Policy Optimization for MJLS.
minimize: $C(K) = \mathbb{E}\left[\sum_{t=0}^{\infty} \left(x_t^T Q_{\omega(t)} x_t + u_t^T R_{\omega(t)} u_t\right)\right]$
subject to: $x_{t+1} = A_{\omega(t)} x_t + B_{\omega(t)} u_t$, $u_t = -K_{\omega(t)} x_t$.

In this section, we present an explicit formula for the policy gradient and discuss the optimization landscape of the above problem. We want to emphasize that the above problem is indeed a constrained optimization problem. The feasible set consists of all $K$ stabilizing the closed-loop dynamics in the mean square sense (and hence yielding a finite $C(K)$). We denote this feasible set as $\mathcal{K}$. For $K \notin \mathcal{K}$, the cost in (3) can blow up to infinity, and differentiability is also an issue. For $K \in \mathcal{K}$, the cost is finite and differentiable. To obtain the formula for $\nabla C(K)$, we first rewrite the quadratic cost (3) as

$C(K) = \mathbb{E}_{x_0 \sim \mathcal{D},\, \omega(0) \sim \pi}\left[x_0^T P^K_{\omega(0)} x_0\right]$,    (8)

where $\{P^K_i\}_{i \in \Omega}$ is defined to be the solution to the coupled Lyapunov equations

$P^K_i = Q_i + K_i^T R_i K_i + (A_i - B_i K_i)^T \mathcal{E}_i(P^K) (A_i - B_i K_i)$    (9)

for all $i \in \Omega$. Recall that we have $\mathcal{E}_i(P^K) = \sum_{j=1}^{N_s} p_{ij} P^K_j$.

We will denote $\Sigma_i(t) := \mathbb{E}\left[x_t x_t^T \mathbf{1}_{\{\omega(t) = i\}}\right]$. This matrix also satisfies the recurrence [22]:

$\Sigma_j(t+1) = \sum_{i=1}^{N_s} p_{ij}\, (A_i - B_i K_i)\, \Sigma_i(t)\, (A_i - B_i K_i)^T$,

with $\Sigma_i(0) = \pi_i\, \mathbb{E}_{x_0 \sim \mathcal{D}}\left[x_0 x_0^T\right]$ for all $i \in \Omega$.
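The quantities in (8), (9), and the covariance recurrence above can be evaluated numerically. The sketch below truncates the infinite sums at a finite horizon (an approximation that is accurate when $K$ is mean-square stabilizing) and uses the fact that, if $x_0$ is independent of $\omega(0)$, the cost (8) equals $\sum_i \pi_i \mathrm{tr}(P^K_i X_0)$ with $X_0 = \mathbb{E}[x_0 x_0^T]$. The helper name is our own.

```python
import numpy as np

def mjls_cost_and_covariances(A, B, K, Q, R, P_trans, pi0, X0, horizon=5000):
    """Return ({P^K_i}, {Sigma_{K_i}}, C(K)) for a mean-square stabilizing policy K."""
    Ns, d = len(A), A[0].shape[0]
    Gam = [A[i] - B[i] @ K[i] for i in range(Ns)]          # closed-loop matrices
    # Coupled Lyapunov equations (9), solved by fixed-point iteration from zero.
    P = [np.zeros((d, d)) for _ in range(Ns)]
    for _ in range(horizon):
        EP = [sum(P_trans[i, j] * P[j] for j in range(Ns)) for i in range(Ns)]
        P = [Q[i] + K[i].T @ R[i] @ K[i] + Gam[i].T @ EP[i] @ Gam[i] for i in range(Ns)]
    # Covariance recurrence Sigma_j(t+1) = sum_i p_ij Gam_i Sigma_i(t) Gam_i^T and its sum.
    S = [pi0[i] * X0 for i in range(Ns)]                   # Sigma_i(0) = pi_i E[x0 x0^T]
    Sigma = [np.zeros((d, d)) for _ in range(Ns)]
    for _ in range(horizon):
        Sigma = [Sigma[i] + S[i] for i in range(Ns)]
        S = [sum(P_trans[i, j] * Gam[i] @ S[i] @ Gam[i].T for i in range(Ns))
             for j in range(Ns)]
    cost = sum(pi0[i] * np.trace(P[i] @ X0) for i in range(Ns))   # C(K) from (8)
    return P, Sigma, cost
```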

We also make the following technical assumptions.

Assumptions.

Along with the standard assumption that $Q_i$ and $R_i$ are positive definite for all $i \in \Omega$, we assume that $\pi_i > 0$ for all $i \in \Omega$ and that $\mathbb{E}_{x_0 \sim \mathcal{D}}\left[x_0 x_0^T\right]$ is full rank. The first condition indicates that there is a positive probability of starting from any mode $i \in \Omega$; the second states that the expected covariance of the initial state is full rank.

Now we are ready to present an explicit formula for the policy gradient $\nabla C(K)$.

Lemma 1.

Given $K \in \mathcal{K}$, the gradient of the cost function (8) with respect to the policy $K$ is given mode-wise by

$\nabla_{K_i} C(K) = 2 E_{K_i} \Sigma_{K_i}$,    (10)

where

$E_{K_i} := \left(R_i + B_i^T \mathcal{E}_i(P^K) B_i\right) K_i - B_i^T \mathcal{E}_i(P^K) A_i$,    (11)

and

$\Sigma_{K_i} := \sum_{t=0}^{\infty} \Sigma_i(t) = \sum_{t=0}^{\infty} \mathbb{E}\left[x_t x_t^T \mathbf{1}_{\{\omega(t) = i\}}\right]$.    (12)
Proof.

The differentiability of $C(K)$ can be proved using the implicit function theorem, and this step is similar to the proof of Lemma 3.1 in [35]. Now we derive the gradient formula by modifying the total derivative arguments in [35, 34]. Start by denoting $\Gamma_i := A_i - B_i K_i$. Then, we can take the total derivative of (9) to show the following relation for each $i \in \Omega$:

$dP^K_i = dK_i^T E_{K_i} + E_{K_i}^T dK_i + \Gamma_i^T \mathcal{E}_i(dP^K) \Gamma_i$.

Hence, the total derivative of the cost (8) can be obtained by unrolling this recursion along the closed-loop trajectory (as in [35]):

$dC(K) = \mathbb{E}\left[x_0^T\, dP^K_{\omega(0)}\, x_0\right] = \sum_{i=1}^{N_s} \sum_{t=0}^{\infty} \mathbb{E}\left[\mathbf{1}_{\{\omega(t) = i\}}\, x_t^T \left(dK_i^T E_{K_i} + E_{K_i}^T dK_i\right) x_t\right] = 2 \sum_{i=1}^{N_s} \mathrm{tr}\left(dK_i^T E_{K_i} \Sigma_{K_i}\right)$.

This leads to the desired result. ∎

Optimization Landscape for MJLS. LTI systems are just a special case of MJLS. Since policy optimization for the quadratic control of LTI systems is non-convex, the same is true for the MJLS case. However, from our gradient formula in Lemma 1, we can see that as long as $\mathbb{E}_{x_0 \sim \mathcal{D}}\left[x_0 x_0^T\right]$ is full rank and $\pi_i > 0$ for all $i \in \Omega$ (so that $\Sigma_{K_i} \succ 0$ for every $i$), any stationary point, i.e. any $K \in \mathcal{K}$ with $\nabla_{K_i} C(K) = 0$ for all $i$, must satisfy

$K_i = \left(R_i + B_i^T \mathcal{E}_i(P^K) B_i\right)^{-1} B_i^T \mathcal{E}_i(P^K) A_i$.

Substituting the above equation into the coupled Lyapunov equations (9) leads to the global solution defined by the coupled algebraic Riccati equations (5). Therefore, the only stationary point is the global optimal solution. Overall, the optimization landscape for the MJLS case is quite similar to the classic LQR case if we allow the initial mode to be sufficiently random, i.e. $\pi_i > 0$ for all $i \in \Omega$. Based on this similarity, it is reasonable to expect that local search procedures (e.g. policy gradient) will be able to find the unique global minimum for MJLS despite the non-convex nature of the problem. Compared with the LTI case, the characterization of the feasible set $\mathcal{K}$ is more complicated for MJLS. Hence the main technical issue is how to show that gradient-based methods can handle the feasibility constraint without using projection. We will use a Lyapunov argument to tackle this issue.
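As a numerical sanity check on this landscape discussion, one can evaluate the mode-wise gradient (10)-(11) at a candidate policy: at the gains returned by the coupled AREs (5)-(6), the matrices $E_{K_i}$ below should vanish up to numerical error. The helper below assumes the lists $\{P^K_i\}$ and $\{\Sigma_{K_i}\}$ have already been computed (e.g. as in the earlier sketch); the function name is ours.

```python
import numpy as np

def mjls_policy_gradient(A, B, K, R, P, Sigma, P_trans):
    """Mode-wise gradient (10): grad_i = 2 E_{K_i} Sigma_{K_i}, with E_{K_i} from (11).
    P is the list of coupled-Lyapunov solutions P^K_i; Sigma is the list Sigma_{K_i}."""
    Ns = len(A)
    EP = [sum(P_trans[i, j] * P[j] for j in range(Ns)) for i in range(Ns)]
    E = [(R[i] + B[i].T @ EP[i] @ B[i]) @ K[i] - B[i].T @ EP[i] @ A[i] for i in range(Ns)]
    grad = [2 * E[i] @ Sigma[i] for i in range(Ns)]
    return E, grad
```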

IV Main Convergence Results

As reviewed in Section II-C, the natural policy gradient method for the LTI case iterates as $K' = K - \eta \nabla C(K) \Sigma_K^{-1}$. For the MJLS case, the natural policy gradient method adopts a similar mode-wise update rule and iterates as

$K_i^{n+1} = K_i^n - \eta\, \nabla_{K_i} C(K^n)\, \Sigma_{K_i^n}^{-1} = K_i^n - 2\eta\, E_{K_i^n}, \quad \forall i \in \Omega$.    (13)

The initial policy is denoted as $K^0$. The Gauss-Newton method uses the following update rule:

$K_i^{n+1} = K_i^n - \eta\, W_i^{-1}\, \nabla_{K_i} C(K^n)\, \Sigma_{K_i^n}^{-1} = K_i^n - 2\eta\, W_i^{-1} E_{K_i^n}, \quad \forall i \in \Omega$,    (14)

where $W_i := R_i + B_i^T \mathcal{E}_i(P^{K^n}) B_i$. In this section, we focus on the convergence guarantees of (13) and (14), and show that both converge to the global optimal solution at a linear rate if they are initialized at a policy in $\mathcal{K}$.
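Given the gradient ingredients $E_{K_i^n}$ and $W_i$ (computable as in the sketches of Section III), one iteration of (13) and (14) only takes a few lines; the factor of 2 below reflects the convention $\nabla_{K_i} C(K) = 2 E_{K_i} \Sigma_{K_i}$, and the function names are ours.

```python
import numpy as np

def natural_pg_update(K, E, eta):
    """Natural policy gradient (13): K_i <- K_i - eta * grad_i * Sigma_{K_i}^{-1} = K_i - 2 eta E_{K_i}."""
    return [K[i] - 2.0 * eta * E[i] for i in range(len(K))]

def gauss_newton_update(K, E, W, eta):
    """Gauss-Newton (14): K_i <- K_i - 2 eta W_i^{-1} E_{K_i}, where W_i = R_i + B_i^T E_i(P^K) B_i."""
    return [K[i] - 2.0 * eta * np.linalg.solve(W[i], E[i]) for i in range(len(K))]
```

Note that with $\eta = 1/2$ the Gauss-Newton update returns $K_i^{n+1} = W_i^{-1} B_i^T \mathcal{E}_i(P^{K^n}) A_i$, i.e. a policy-iteration step on the coupled Riccati equations.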

To state the main convergence result, it is helpful to introduce several problem-dependent constants (formed from the matrices $\{Q_i, R_i, B_i\}$, the optimal covariances $\{\Sigma_{K^*_i}\}$, and the initial state covariance); the admissible step sizes and the convergence rates below are stated in terms of these quantities.

Theorem 2.

Suppose the initial policy $K^0 \in \mathcal{K}$, so that $C(K^0)$ is finite.

  • Gauss-Newton case: For any step size $\eta \le 1/2$, the Gauss-Newton method (14) converges to the global minimum $K^*$ linearly as follows

    (15)
  • Natural policy gradient case: For any sufficiently small step size $\eta$ (upper bounded by a constant depending on the problem data and $C(K^0)$), the natural policy gradient method (13) converges to the global minimum $K^*$ linearly as follows

    (16)
Proof Sketch.

We briefly outline the main proof steps for the Gauss-Newton case. The proof for the natural policy gradient case is similar. The detailed proofs are presented in the appendix.

  1. Show that one step of the Gauss-Newton update gives a policy that stabilizes the closed-loop dynamics in the mean square sense and hence yields a finite cost.

  2. Apply the so-called “almost smoothness” condition (Lemma 5) to show that one step of the Gauss-Newton update decreases the cost by a factor proportional to the current optimality gap $C(K^n) - C(K^*)$.

  3. Use induction to show the final convergence result.

It is worth noting that the proof steps for the MJLS and LTI cases are structurally similar. We can modify the proof arguments for the LTI case in [11] to complete the second and third steps. The main challenge for the MJLS case is the first step, since one cannot directly adapt the spectral radius argument in [11] due to the stochastic nature of MJLS. We develop a novel Lyapunov argument to address this issue. We will only present the details for the first step here, since that is the only part requiring new proof techniques. The remaining steps of the proof for both cases are deferred to the appendix.

How do the policy optimization methods ensure a finite cost along the iteration path? We need to show that for every $K \in \mathcal{K}$, we can choose a step size such that the new controller obtained from one step of the Gauss-Newton method or the natural policy gradient method (which is denoted as $K'$) also stabilizes the closed-loop dynamics in the mean-square sense. This step carries the main technical novelty compared with the argument for the LTI case in [11]. Notice that the “almost smoothness” condition is required in the second step of the proof outline, as it gives a useful upper bound for $C(K') - C(K)$ in terms of the policy change $K' - K$. However, to apply such a condition, one needs to ensure that both $K$ and $K'$ are stabilizing controllers such that $C(K)$ and $C(K')$ are finite in the first place. Hence, one has to prove that the iterates never wander into the region of instability before applying the “almost smoothness” condition.

To show that every controller computed by the Gauss-Newton method or the natural policy gradient method is stabilizing, we propose the following Lyapunov argument. The main idea is that the value function at the current step, specified by the matrices $\{P^K_i\}$, naturally serves as a Lyapunov function for the next move due to the positive definiteness of $\{Q_i\}$ and $\{R_i\}$. This positive definiteness guarantees that there is a stability margin around every point along the optimization trajectory. The result for the Gauss-Newton case is formally stated below.

Lemma 3.

Suppose $K \in \mathcal{K}$, i.e. $K$ stabilizes the MJLS (1) in the mean square sense. Then the one-step update $K'$ obtained from the Gauss-Newton method (14) will also be stabilizing if the step size satisfies $\eta \le 1/2$.

Proof.

Recall from [36] that the controller $K'$ stabilizes (1) in the mean-square sense if and only if there exist matrices $S_i \succ 0$, $i \in \Omega$, such that

$(A_i - B_i K'_i)^T \mathcal{E}_i(S) (A_i - B_i K'_i) - S_i \prec 0, \quad \forall i \in \Omega$.    (17)

We will show that the above condition can be satisfied by setting $S_i = P^K_i$, where $\{P^K_i\}$ solves the MJLS coupled Lyapunov equations (9) associated with the current policy $K$. Notice that the existence of $P^K$ is guaranteed by the assumption $K \in \mathcal{K}$. Denote $\Delta_i := K'_i - K_i$ and $W_i := R_i + B_i^T \mathcal{E}_i(P^K) B_i$. The Lyapunov equation (9) for $P^K_i$ can be rewritten as $(A_i - B_i K_i)^T \mathcal{E}_i(P^K)(A_i - B_i K_i) - P^K_i = -Q_i - K_i^T R_i K_i$. From this, together with $A_i - B_i K'_i = (A_i - B_i K_i) - B_i \Delta_i$ and the definition (11) of $E_{K_i}$, we can directly obtain

$(A_i - B_i K'_i)^T \mathcal{E}_i(P^K) (A_i - B_i K'_i) - P^K_i = -Q_i - K_i'^T R_i K'_i + \left(\Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i\right) + \Delta_i^T W_i \Delta_i$.

Since $Q_i$ and $R_i$ are all positive definite, the sum of the first two terms on the right-hand side is negative definite. We only need the sum of the last two terms to be negative semidefinite. Note that the Gauss-Newton update (14) gives $\Delta_i = -2\eta\, W_i^{-1} E_{K_i}$. We have

$-\left(\Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i\right) - \Delta_i^T W_i \Delta_i = 4\eta(1-\eta)\, E_{K_i}^T W_i^{-1} E_{K_i}$,

which is positive semidefinite under the condition $\eta \le 1/2$. ∎

For the natural gradient method, we have a similar result.

Lemma 4.

Suppose $K \in \mathcal{K}$, i.e. $K$ stabilizes the MJLS (1) in the mean square sense. Then the one-step update $K'$ obtained from the natural policy gradient method (13) will also be stabilizing if the step size satisfies $\eta \le \left(\max_{i \in \Omega} \left\|R_i + B_i^T \mathcal{E}_i(P^K) B_i\right\|\right)^{-1}$.

Proof.

The proof starts with the same steps as the proof of Lemma 3. We will show that the condition (17) can be met by setting $S_i = P^K_i$, where $\{P^K_i\}$ solves the MJLS coupled Lyapunov equations (9) associated with the controller $K$. For the natural policy gradient method (13), we have $\Delta_i := K'_i - K_i = -2\eta E_{K_i}$, and, as in the proof of Lemma 3, it remains to show that the last two terms, $\left(\Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i\right) + \Delta_i^T W_i \Delta_i$ with $W_i := R_i + B_i^T \mathcal{E}_i(P^K) B_i$, sum to a negative semidefinite matrix. To show this, we make the following calculation:

$-\left(\Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i\right) - \Delta_i^T W_i \Delta_i = 4\eta\, E_{K_i}^T \left(I - \eta W_i\right) E_{K_i}$.

Clearly, the above term is guaranteed to be positive semidefinite if $\eta$ satisfies $I - \eta W_i \succeq 0$ for every $i \in \Omega$. Lastly, notice that this holds whenever $\eta \left\|R_i + B_i^T \mathcal{E}_i(P^K) B_i\right\| \le 1$ for all $i \in \Omega$. This leads to the desired conclusion. ∎

From the above proofs, we can clearly see that $P^K$ can be used to construct a Lyapunov function certifying the mean-square stability of $K'$ as long as the step size $\eta$ satisfies the stated bound. This leads to a novel proof of stability along the natural policy gradient iteration path. The idea can even be extended beyond the linear quadratic control case. Very recently, a similar idea has been used to establish convergence properties of policy optimization methods for the mixed $\mathcal{H}_2/\mathcal{H}_\infty$ control problem, where the cost function may not blow up to infinity on the boundary of the feasible set [37].
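The mean-square stability property invoked in Lemmas 3 and 4 can also be checked numerically through the standard spectral-radius characterization of the closed-loop second-moment operator for MJLS [22]; the sketch below builds the vectorized operator explicitly, which is practical only for moderate state and mode dimensions, and the helper name is ours.

```python
import numpy as np

def is_mean_square_stable(A, B, K, P_trans, tol=1e-9):
    """Check rho(L) < 1 for the operator L(S)_j = sum_i p_ij (A_i - B_i K_i) S_i (A_i - B_i K_i)^T,
    which propagates the mode-indexed second moments of the closed loop."""
    Ns, d = len(A), A[0].shape[0]
    Gam = [A[i] - B[i] @ K[i] for i in range(Ns)]
    L = np.zeros((Ns * d * d, Ns * d * d))
    for j in range(Ns):
        for i in range(Ns):
            # vec(Gam_i S_i Gam_i^T) = (Gam_i kron Gam_i) vec(S_i)
            L[j*d*d:(j+1)*d*d, i*d*d:(i+1)*d*d] = P_trans[i, j] * np.kron(Gam[i], Gam[i])
    return np.max(np.abs(np.linalg.eigvals(L))) < 1 - tol
```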

V Simulation Results

Consider a system with 100 states, 20 inputs, and 100 modes. The system matrices $A_i$ and $B_i$ were generated using drss in MATLAB in order to guarantee that the system would have a finite cost under the initial policy $K^0$. The probability transition matrix was sampled from a Dirichlet distribution. We also assumed that we had equal probability of starting in any initial mode. For simplicity we set $Q_i = I$ and $R_i = I$ for all $i \in \Omega$.

Fig. 1: Percent error from optimal cost for controllers computed using both policy optimization methods. The Gauss-Newton method converges faster.

In Figure 1 we can see that both policy optimization methods converge to the optimal solution. As expected, Gauss-Newton converges much faster than the natural policy gradient method. The step size of the natural policy gradient method depends on various system parameters and requires some tuning effort for each problem instance.
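For readers who wish to reproduce a smaller version of this experiment, the sketch below generates a random MJLS instance in Python. The paper used MATLAB's drss for the mode dynamics; the spectral-scaling construction here is our own substitute, and individually Schur-stable modes do not by themselves guarantee mean-square stability of the jump system, so any initial policy should still be checked (e.g. with the stability test sketched in Section IV). The Dirichlet parameter is arbitrary.

```python
import numpy as np

def random_mjls(num_states, num_inputs, num_modes, alpha=1.0, seed=0):
    """Generate a random MJLS instance with Schur-stable modes and a Dirichlet transition matrix."""
    rng = np.random.default_rng(seed)
    A, B = [], []
    for _ in range(num_modes):
        M = rng.standard_normal((num_states, num_states))
        M *= 0.95 / np.max(np.abs(np.linalg.eigvals(M)))   # scale each mode to be Schur stable
        A.append(M)
        B.append(rng.standard_normal((num_states, num_inputs)))
    P_trans = rng.dirichlet(alpha * np.ones(num_modes), size=num_modes)  # one Dirichlet draw per row
    pi0 = np.ones(num_modes) / num_modes                   # equal probability of each initial mode
    Q = [np.eye(num_states) for _ in range(num_modes)]     # Q_i = I
    R = [np.eye(num_inputs) for _ in range(num_modes)]     # R_i = I
    return A, B, Q, R, P_trans, pi0

A, B, Q, R, P_trans, pi0 = random_mjls(num_states=10, num_inputs=4, num_modes=5)
print(P_trans.sum(axis=1))   # each row of the transition matrix sums to 1
```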

VI Conclusion

In this paper we have studied policy optimization for the quadratic control of Markovian jump linear systems. We developed an exact formula for computing the gradient of the cost with respect to a given policy, and presented convergence guarantees for the Gauss-Newton method and the natural policy gradient method. The results include a novel Lyapunov argument to prove the stability of the iterations along the optimization trajectories.

The results obtained further suggest that one could use model-free methods, such as zeroth-order optimization or the REINFORCE algorithm, to learn the optimal controller from data. Such model-free techniques would allow us to learn controllers for unknown MJLS without dealing with system identification. This would be particularly useful for large-scale systems, where the computational complexity grows as the system size increases. We will work on such extensions in the future.

Appendix

This appendix provides the detailed proofs of the convergence rate results presented in this paper. We first prove a few helper lemmas. Then we provide upper bounds on the cost after the one-step progress of each method. Lastly, we show that both algorithms converge to the optimal policy. Most steps mimic their LTI counterparts in [11].

Lemma 5 (“Almost smoothness”).

Suppose $K \in \mathcal{K}$ and $K' \in \mathcal{K}$. The cost function defined in (3) satisfies

$C(K') - C(K) = \sum_{i=1}^{N_s} \mathrm{tr}\left(\Sigma_{K'_i}\left[\Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i + \Delta_i^T \left(R_i + B_i^T \mathcal{E}_i(P^K) B_i\right) \Delta_i\right]\right)$,

where $\Delta_i := K'_i - K_i$ and $\Sigma_{K'_i}$ denotes the state correlation matrix (12) associated with the policy $K'$.

Proof.

To simplify the notation, we use $\Delta_i := K'_i - K_i$ and $W_i := R_i + B_i^T \mathcal{E}_i(P^K) B_i$, and we let $\{x'_t\}$ denote the state trajectory generated by the policy $K'$ (with the same initial state distribution and jump process). By definition, we have

$C(K') - C(K) = \mathbb{E}\left[\sum_{t=0}^{\infty} x_t'^T \left(Q_{\omega(t)} + K_{\omega(t)}'^T R_{\omega(t)} K'_{\omega(t)}\right) x'_t\right] - \mathbb{E}\left[x_0^T P^K_{\omega(0)} x_0\right]$.    (18)

Now we develop a formula for the per-step difference. Based on (9), we have, for each $i \in \Omega$,

$Q_i + K_i'^T R_i K'_i + (A_i - B_i K'_i)^T \mathcal{E}_i(P^K) (A_i - B_i K'_i) - P^K_i = \Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i + \Delta_i^T W_i \Delta_i$.

Using this, we can directly show that, conditioned on $x'_t$ and $\omega(t) = i$, the expected value of $x_t'^T \left(Q_i + K_i'^T R_i K'_i\right) x'_t - x_t'^T P^K_i x'_t + x_{t+1}'^T P^K_{\omega(t+1)} x'_{t+1}$ equals $x_t'^T \left(\Delta_i^T E_{K_i} + E_{K_i}^T \Delta_i + \Delta_i^T W_i \Delta_i\right) x'_t$. Now we can substitute the above formula into (18), telescope over $t$ (which is justified since $K'$ is mean-square stabilizing), and sum over the modes to get the desired result. ∎
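Since the identity in Lemma 5 drives the rest of the analysis, it is easy (and reassuring) to verify it numerically on a tiny instance. The script below compares $C(K') - C(K)$, computed from the coupled Lyapunov solutions, against the trace expression of Lemma 5; all numerical values and helper names are our own, and the infinite sums are truncated at a finite horizon.

```python
import numpy as np

def coupled_lyap(A, B, K, Q, R, Pt, horizon=4000):
    Ns, d = len(A), A[0].shape[0]
    Gam = [A[i] - B[i] @ K[i] for i in range(Ns)]
    P = [np.zeros((d, d)) for _ in range(Ns)]
    for _ in range(horizon):
        EP = [sum(Pt[i, j] * P[j] for j in range(Ns)) for i in range(Ns)]
        P = [Q[i] + K[i].T @ R[i] @ K[i] + Gam[i].T @ EP[i] @ Gam[i] for i in range(Ns)]
    return P

def covariances(A, B, K, Pt, pi0, X0, horizon=4000):
    Ns, d = len(A), A[0].shape[0]
    Gam = [A[i] - B[i] @ K[i] for i in range(Ns)]
    S = [pi0[i] * X0 for i in range(Ns)]
    Sig = [np.zeros((d, d)) for _ in range(Ns)]
    for _ in range(horizon):
        Sig = [Sig[i] + S[i] for i in range(Ns)]
        S = [sum(Pt[i, j] * Gam[i] @ S[i] @ Gam[i].T for i in range(Ns)) for j in range(Ns)]
    return Sig

# Tiny two-mode, scalar-state example (all values illustrative).
A = [np.array([[0.5]]), np.array([[0.8]])]
B = [np.array([[1.0]]), np.array([[0.5]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
Pt = np.array([[0.7, 0.3], [0.4, 0.6]])
pi0 = np.array([0.5, 0.5])
X0 = np.eye(1)
K  = [np.array([[0.1]]), np.array([[0.2]])]    # current policy K
Kp = [np.array([[0.15]]), np.array([[0.05]])]  # perturbed policy K'

P   = coupled_lyap(A, B, K, Q, R, Pt)
Pp  = coupled_lyap(A, B, Kp, Q, R, Pt)
Sgp = covariances(A, B, Kp, Pt, pi0, X0)
cost = lambda PL: sum(pi0[i] * np.trace(PL[i] @ X0) for i in range(2))

EP = [sum(Pt[i, j] * P[j] for j in range(2)) for i in range(2)]
rhs = 0.0
for i in range(2):
    W_i = R[i] + B[i].T @ EP[i] @ B[i]
    E_i = W_i @ K[i] - B[i].T @ EP[i] @ A[i]
    D_i = Kp[i] - K[i]
    rhs += np.trace(Sgp[i] @ (D_i.T @ E_i + E_i.T @ D_i + D_i.T @ W_i @ D_i))

print(cost(Pp) - cost(P), rhs)   # the two numbers should agree up to truncation error
```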

Next, we show that $C(K)$ is gradient dominated. Recall that we have $\nabla_{K_i} C(K) = 2 E_{K_i} \Sigma_{K_i}$.

Lemma 6 (Gradient Domination).

Suppose $K \in \mathcal{K}$, and let $K^*$ be the optimal policy. Given the definitions in Lemma 1, the following sequence of inequalities always holds:

$C(K) - C(K^*) \le \sum_{i=1}^{N_s} \mathrm{tr}\left(\Sigma_{K^*_i} E_{K_i}^T W_i^{-1} E_{K_i}\right) \le \left(\max_{i \in \Omega} \left\|\Sigma_{K^*_i}\right\|\right) \sum_{i=1}^{N_s} \mathrm{tr}\left(E_{K_i}^T W_i^{-1} E_{K_i}\right) \le \frac{\max_{i \in \Omega} \|\Sigma_{K^*_i}\|}{\min_{i \in \Omega} \sigma_{\min}(R_i)} \sum_{i=1}^{N_s} \mathrm{tr}\left(E_{K_i}^T E_{K_i}\right)$,

where $W_i := R_i + B_i^T \mathcal{E}_i(P^K) B_i$.

Proof.

For readability, we denote $W_i := R_i + B_i^T \mathcal{E}_i(P^K) B_i$. From Lemma 5, we can complete the squares to show