
# Convergence rate for a Gauss collocation method applied to constrained optimal control ††thanks: July 10, 2016. Revised December 16, 2017. The authors gratefully acknowledge support by the Office of Naval Research under grant N00014-15-1-2048, by the National Science Foundation under grant DMS-1522629, and by the U.S. Air Force Research Laboratory under contract FA8651-08-D-0108/0054.

William W. Hager hager@ufl.edu, http://people.clas.ufl.edu/hager/, PO Box 118105, Department of Mathematics, University of Florida, Gainesville, FL 32611-8105. Phone (352) 294-2308. Fax (352) 392-8357.    Jun Liu juliu@siue.edu, Department of Mathematics and Statistics, Southern Illinois University Edwardsville, Edwardsville, IL 62026. Phone (618) 650-2220.    Subhashree Mohapatra subha@ufl.edu, Department of Mathematics, University of Florida, Gainesville, FL 32611.    Anil V. Rao anilvrao@ufl.edu, http://www.mae.ufl.edu/rao, Department of Mechanical and Aerospace Engineering, P.O. Box 116250, Gainesville, FL 32611-6250. Phone (352) 392-0961. Fax (352) 392-7303.    Xiang-Sheng Wang xswang@louisiana.edu, Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70503. Phone (337) 482-5281.
###### Abstract

A local convergence rate is established for a Gauss orthogonal collocation method applied to optimal control problems with control constraints. If the Hamiltonian possesses a strong convexity property, then the theory yields convergence for problems whose optimal state and costate possess two square integrable derivatives. The convergence theory is based on a stability result for the sup-norm change in the solution of a variational inequality relative to a 2-norm perturbation, and on a Sobolev space bound for the error in interpolation at the Gauss quadrature points and the additional point $\tau_0=-1$. The tightness of the convergence theory is examined using a numerical example.

Key words. Gauss collocation method, convergence rate, optimal control, orthogonal collocation

AMS subject classifications. 49M25, 49M37, 65K05, 90C30

## 1 Introduction

In earlier work [24, 25, 26], we analyze the convergence rate for orthogonal collocation methods applied to unconstrained control problems. In this analysis, it is assumed that the problem solution is smooth, in which case the theory implies that the discrete approximations converge to the solution of the continuous problem at potentially an exponential rate. But when control constraints are present, the solution often possesses limited regularity. The convergence theory developed in the earlier work for unconstrained problems required that the optimal state had at least four derivatives, while for constrained problems, the optimal state may have only two derivatives, at best [4, 7, 20, 28]. The earlier convergence theory was based on a stability analysis for a linearization of the unconstrained control problem; the theory showed that the sup-norm change in the solution was bounded relative to the sup-norm perturbation in the linear system. Here we introduce a convex control constraint, in which case the linearized problem is a variational inequality, or equivalently a differential inclusion, not a linear system. We obtain a bound for the sup-norm change in the solution relative to a 2-norm perturbation in the variational inequality. By using the 2-norm for the perturbation rather than the sup-norm, we are able to avoid both Lebesgue constants and the Markov bound [34] for the sup-norm of the derivative of a polynomial relative to the sup-norm of the original polynomial. Using best approximation results in Sobolev spaces [3, 13], we obtain convergence when the optimal state and costate have only two square integrable derivatives, which implies that the theory is applicable to a class of control constrained problems for which the optimal control is Lipschitz continuous.

The specific collocation scheme analyzed in this paper, presented in [2, 18], is based on collocation at the Gauss quadrature points, or equivalently, at the roots of a Legendre polynomial. Other sets of collocation points that have been studied in the literature include the Lobatto quadrature points [11, 14, 19], the Chebyshev quadrature points [12, 15], the Radau quadrature points [16, 17, 33, 36], and extrema of Jacobi polynomials [39]. Kang [31, 32] obtains a convergence rate for the Lobatto scheme applied to control systems in feedback linearizable normal form by inserting bounds in the discrete problem for the states, the controls, and certain Legendre polynomial expansion coefficients. In our approach, the discretized problem is obtained by simply collocating at the Gauss quadrature points.

Our approximation to the control problem uses a global polynomial defined on the problem domain. Earlier work, including [6, 8, 9, 10, 22, 30, 37], utilizes a piecewise polynomial approximation, in which case convergence is achieved by letting the mesh spacing approach zero, while keeping the polynomial degree fixed. For an orthogonal collocation scheme based on global polynomials, convergence is achieved by letting the degree of the polynomials tend to infinity. Our results show that even when control constraints are present, and a solution possesses limited regularity, convergence can still be achieved with global polynomials.

We consider control problems of the form

$$\begin{array}{cl}
\mbox{minimize} & C(x(1))\\
\mbox{subject to} & \dot{x}(t)=f(x(t),u(t)),\quad u(t)\in\mathcal{U},\quad t\in\Omega,\\
& x(-1)=x_0,\quad (x,u)\in\mathcal{C}^1(\Omega;\mathbb{R}^n)\times\mathcal{C}^0(\Omega;\mathbb{R}^m),
\end{array}\tag{1.1}$$

where $C:\mathbb{R}^n\rightarrow\mathbb{R}$, the control constraint set $\mathcal{U}\subset\mathbb{R}^m$ is closed and convex with nonempty interior, the state $x(t)\in\mathbb{R}^n$, $\dot{x}(t)$ denotes the derivative of $x$ with respect to $t$, $x_0$ is the initial condition, which we assume is given, $f:\mathbb{R}^n\times\mathbb{R}^m\rightarrow\mathbb{R}^n$, $\Omega=[-1,+1]$, and $\mathcal{C}^k(\Omega;\mathbb{R}^n)$ denotes the space of $k$ times continuously differentiable functions mapping $\Omega$ to $\mathbb{R}^n$. It is assumed that $f$ and $C$ are at least continuous.

Let $\mathcal{P}_N$ denote the space of polynomials of degree at most $N$, and let $\mathcal{P}_N^n$ denote the $n$-fold Cartesian product $\mathcal{P}_N\times\cdots\times\mathcal{P}_N$. We analyze the discretization of (1.1) given by

$$\begin{array}{cl}
\mbox{minimize} & C(x(1))\\
\mbox{subject to} & \dot{x}(\tau_i)=f(x(\tau_i),u_i),\quad u_i\in\mathcal{U},\quad 1\le i\le N,\\
& x(-1)=x_0,\quad x\in\mathcal{P}_N^n.
\end{array}\tag{1.2}$$

The polynomial $x\in\mathcal{P}_N^n$ used to approximate the state should satisfy the dynamics exactly at the collocation points $\tau_i$, $1\le i\le N$. The parameter $u_i$ represents an approximation to the control at time $\tau_i$. The dimension of $\mathcal{P}_N^n$ is $n(N+1)$, while there are $n(N+1)$ equations in (1.2) corresponding to the collocated dynamics at $N$ points and the initial condition. We collocate at the Gauss quadrature points, which are symmetric about $t=0$ and which satisfy

$$-1<\tau_1<\tau_2<\cdots<\tau_N<+1.$$

The analysis also makes use of the two noncollocated points

$$\tau_0=-1\quad\mbox{and}\quad\tau_{N+1}=+1.$$
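As a quick numerical illustration (our own sketch, not part of the paper), the Gauss collocation points for a given $N$ can be generated with NumPy's `leggauss`, and their symmetry about $t=0$ and strict containment in $(-1,+1)$ are easy to confirm:

```python
import numpy as np

# Gauss (Legendre) collocation points: the roots of the degree-N Legendre
# polynomial, all strictly inside (-1, +1).
def gauss_nodes(N):
    tau, _ = np.polynomial.legendre.leggauss(N)
    return np.sort(tau)

tau = gauss_nodes(5)
# The nodes are symmetric about 0 ...
assert np.allclose(tau, -tau[::-1])
# ... and the noncollocated points tau_0 = -1 and tau_{N+1} = +1 lie outside them.
assert -1 < tau[0] and tau[-1] < 1
```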

For $x\in\mathcal{C}^0(\Omega;\mathbb{R}^n)$, we use the sup-norm given by

$$\|x\|_\infty=\sup\{|x(t)|:t\in\Omega\},$$

where $|\cdot|$ is the Euclidean norm. Given $\rho>0$, the ball with center $c\in\mathbb{R}^k$ and radius $\rho$ is denoted $$\mathcal{B}_\rho(c)=\{x\in\mathbb{R}^k:|x-c|\le\rho\}.$$

The following regularity assumption is assumed to hold throughout the paper.

Smoothness. The problem (1.1) has a local minimizer $(x^*,u^*)$ in $\mathcal{C}^1(\Omega;\mathbb{R}^n)\times\mathcal{C}^0(\Omega;\mathbb{R}^m)$. There exists an open set $\mathcal{O}\subset\mathbb{R}^n\times\mathbb{R}^m$ and $\rho>0$ such that

$$\mathcal{B}_\rho(x^*(t),u^*(t))\subset\mathcal{O}\quad\mbox{for all } t\in\Omega.$$

Moreover, the first two derivatives of $f$ and $C$ are Lipschitz continuous on the closure of $\mathcal{O}$ and on $\mathcal{B}_\rho(x^*(1))$, respectively.

Let $\lambda^*$ denote the solution of the linear costate equation

$$\dot{\lambda}^*(t)=-\nabla_x H(x^*(t),u^*(t),\lambda^*(t)),\quad \lambda^*(1)=\nabla C(x^*(1)),\tag{1.3}$$

where $H$ is the Hamiltonian defined by $H(x,u,\lambda)=\lambda^{\mathsf{T}}f(x,u)$ and $\nabla$ denotes gradient. From the first-order optimality conditions (Pontryagin's minimum principle), it follows that

$$-\nabla_u H(x^*(t),u^*(t),\lambda^*(t))\in N_{\mathcal{U}}(u^*(t))\quad\mbox{for all } t\in\Omega,\tag{1.4}$$

where $N_{\mathcal{U}}(u)$ denotes the normal cone. For any $u\in\mathcal{U}$,

$$N_{\mathcal{U}}(u)=\{w\in\mathbb{R}^m:w^{\mathsf{T}}(v-u)\le 0\mbox{ for all } v\in\mathcal{U}\},$$

while $N_{\mathcal{U}}(u)=\emptyset$ if $u\notin\mathcal{U}$.
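For intuition, when $\mathcal{U}$ is a box the normal cone condition can be tested through the projection identity $w\in N_{\mathcal{U}}(u)\Leftrightarrow u=P_{\mathcal{U}}(u+w)$, which holds for any closed convex set. A small sketch (the box bounds and sample vectors here are illustrative, not from the paper):

```python
import numpy as np

def in_normal_cone_box(w, u, lo, hi, tol=1e-12):
    """Check w in N_U(u) for the box U = [lo, hi]^m via the projection
    identity: for closed convex U, w in N_U(u) iff u = P_U(u + w)."""
    proj = np.clip(u + w, lo, hi)   # P_U is componentwise clipping for a box
    return bool(np.all(np.abs(proj - u) <= tol))

lo, hi = -1.0, 1.0
u = np.array([-1.0, 0.3, 1.0])      # lower bound active, interior, upper bound active
w = np.array([-2.0, 0.0, 5.0])      # w_i <= 0 at lower bound, 0 interior, >= 0 at upper
assert in_normal_cone_box(w, u, lo, hi)
# A nonzero component of w at an interior component violates the condition:
assert not in_normal_cone_box(np.array([0.0, 0.1, 0.0]), u, lo, hi)
```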

Since the collocation problem (1.2) is finite dimensional, the first-order optimality conditions, or Karush-Kuhn-Tucker conditions, hold when a constraint qualification [35] is satisfied. We show in Proposition 2.1 that the first-order optimality conditions are equivalent to the existence of $\lambda\in\mathcal{P}_N^n$ such that

$$\begin{aligned}
\dot{\lambda}(\tau_i) &= -\nabla_x H(x(\tau_i),u_i,\lambda(\tau_i)),\quad 1\le i\le N, & &(1.5)\\
\lambda(1) &= \nabla C(x(1)), & &(1.6)\\
N_{\mathcal{U}}(u_i) &\ni -\nabla_u H(x(\tau_i),u_i,\lambda(\tau_i)),\quad 1\le i\le N. & &(1.7)
\end{aligned}$$

The following assumptions are utilized in the convergence analysis.

• (A1) For some $\alpha>0$, the smallest eigenvalues of the Hessian matrices $\nabla^2 C(x^*(1))$ and $\nabla^2 H(x^*(t),u^*(t),\lambda^*(t))$, the Hessian of the Hamiltonian with respect to $(x,u)$, are greater than $\alpha$, uniformly for $t\in\Omega$.

• (A2) For some $\beta<1/2$, the Jacobian $\nabla_x f$ of the dynamics satisfies

$$\|\nabla_x f(x^*(t),u^*(t))\|_\infty\le\beta\quad\mbox{and}\quad\|\nabla_x f(x^*(t),u^*(t))^{\mathsf{T}}\|_\infty\le\beta$$

for all $t\in\Omega$, where $\|\cdot\|_\infty$ is the matrix sup-norm (largest absolute row sum), and the Jacobian $\nabla_x f$ is an $n$ by $n$ matrix whose $i$-th row is $(\nabla_x f_i)^{\mathsf{T}}$.

The condition (A2) ensures (see Section 5) that in the discrete linearized problem, it is possible to solve for the discrete state in terms of the discrete control. As shown in [24], this property holds in an $hp$-collocation framework when the domain is partitioned into $K$ mesh intervals with $K$ large enough that

$$\|\nabla_x f(x^*(t),u^*(t))\|_\infty/K\le\beta\quad\mbox{and}\quad\|\nabla_x f(x^*(t),u^*(t))^{\mathsf{T}}\|_\infty/K\le\beta$$

for all $t\in\Omega$.

The coercivity assumption (A1) is not only a sufficient condition for the local optimality of a feasible point of (1.1), but it also yields the stability of the discrete linearized problem (see Section 6). One would hope that (A1) could be weakened to require coercivity only relative to a subspace associated with the linearized dynamics, similar to what is done in [6]. To formulate this weakened condition, we introduce the following six matrices:

$$\begin{array}{lll}
A(t)=\nabla_x f(x^*(t),u^*(t)), & B(t)=\nabla_u f(x^*(t),u^*(t)), & Q(t)=\nabla_{xx}H(x^*(t),u^*(t),\lambda^*(t)),\\
S(t)=\nabla_{ux}H(x^*(t),u^*(t),\lambda^*(t)), & R(t)=\nabla_{uu}H(x^*(t),u^*(t),\lambda^*(t)), & T=\nabla^2 C(x^*(1)).
\end{array}$$

With this notation and with $\langle\cdot,\cdot\rangle$ denoting the $L^2(\Omega)$ inner product, the weaker version of (A1) is that

$$x(1)^{\mathsf{T}}Tx(1)+\langle x,Qx\rangle+\langle u,Ru\rangle+2\langle x,Su\rangle\ge\alpha\langle u,u\rangle,$$

whenever $(x,u)$ satisfies $\dot{x}(t)=A(t)x(t)+B(t)u(t)$ with $x(-1)=0$, for some $x\in H^1(\Omega;\mathbb{R}^n)$ and $u\in L^2(\Omega;\mathbb{R}^m)$ satisfying $u(t)+u^*(t)\in\mathcal{U}$ for almost every $t\in\Omega$. For the Euler integration scheme, we show in [6, Lem. 11] that this weaker condition implies an analogous coercivity property for the discrete problem. The extension of this result from the Euler scheme to orthogonal collocation schemes remains an open problem.

Let $D$ be the $N$ by $N+1$ matrix defined by

$$D_{ij}=\dot{L}_j(\tau_i),\quad\mbox{where } L_j(\tau):=\prod_{\substack{l=0\\ l\ne j}}^{N}\frac{\tau-\tau_l}{\tau_j-\tau_l},\quad 1\le i\le N\mbox{ and } 0\le j\le N.\tag{1.8}$$

The matrix $D$ is a differentiation matrix in the sense that $(D\mathbf{p})_i=\dot{p}(\tau_i)$, $1\le i\le N$, whenever $p\in\mathcal{P}_N$ is the polynomial that satisfies $p(\tau_j)=\mathbf{p}_j$ for $0\le j\le N$. The submatrix $D_{1:N}$, consisting of the trailing $N$ columns of $D$, has the following properties which are utilized in the analysis:

• (P1) $D_{1:N}$ is invertible and $\|D_{1:N}^{-1}\|_\infty\le 2$.

• (P2) If $W$ is the diagonal matrix containing the Gauss quadrature weights $\omega_i$, $1\le i\le N$, on the diagonal, then the rows of the matrix $D_{1:N}^{-1}W^{-1/2}$ have Euclidean norm bounded by $\sqrt{2}$.

The invertibility of $D_{1:N}$ is proved in [18, Prop. 1]. The bound for the inverse appearing in (P1) is established in Appendix 1. (P2) has been checked numerically for $N$ up to 300 in [26]. Some intuition concerning the general validity of (P2) is as follows: It is observed numerically that the last row of the matrix $D_{1:N}^{-1}W^{-1/2}$ has the largest Euclidean norm among all the rows. Based on the formula for $D_{1:N}^{-1}$ given in [18, Sect. 4.1.2], the $i$-th element in the last row of $D_{1:N}^{-1}$ approaches $\omega_i$ as $N$ tends to infinity. Hence, the $i$-th element in the last row of $D_{1:N}^{-1}W^{-1/2}$ approaches $\sqrt{\omega_i}$ as $N$ tends to infinity. Since the quadrature weights sum to 2, the Euclidean norm of the last row of $D_{1:N}^{-1}W^{-1/2}$ should be close to $\sqrt{2}$. Despite the strong numerical evidence for (P2), a proof of (P2) for general $N$ is still missing.

The properties (P1) and (P2) are stated separately since they are used in different ways in the analysis. However, (P2) implies (P1) by the Schwarz inequality. That is, if $r$ is a row from $D_{1:N}^{-1}$, then we have

$$\sum_{i=1}^N |r_i|=\sum_{i=1}^N \sqrt{\omega_i}\,\big(|r_i|/\sqrt{\omega_i}\big)\le\Big(\sum_{i=1}^N \omega_i\Big)^{1/2}\Big(\sum_{i=1}^N r_i^2/\omega_i\Big)^{1/2}\le 2$$

since the quadrature weights sum to 2 and, when (P2) holds, the Euclidean norm of a row from $D_{1:N}^{-1}W^{-1/2}$ is at most $\sqrt{2}$.
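Properties (P1) and (P2) are easy to probe numerically. The sketch below builds the matrix $D$ of (1.8) from the derivatives of the Lagrange basis on $\{\tau_0,\tau_1,\ldots,\tau_N\}$ (the barycentric construction is a standard device, not taken from the paper) and checks both bounds for a range of small $N$:

```python
import numpy as np

def diff_matrix(nodes):
    """Full differentiation matrix on the given nodes: entry (i, j) is the
    derivative of the j-th Lagrange basis polynomial at node i."""
    n = len(nodes)
    # Barycentric weights w_j = 1 / prod_{l != j} (t_j - t_l)
    w = np.array([1.0 / np.prod([nodes[j] - nodes[l] for l in range(n) if l != j])
                  for j in range(n)])
    Dfull = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                Dfull[i, j] = (w[j] / w[i]) / (nodes[i] - nodes[j])
        Dfull[i, i] = -np.sum(Dfull[i, :])   # rows sum to zero (derivative of 1 is 0)
    return Dfull

for N in range(2, 26):
    tau, omega = np.polynomial.legendre.leggauss(N)
    nodes = np.concatenate(([-1.0], tau))        # tau_0 = -1 plus the Gauss points
    D = diff_matrix(nodes)[1:, :]                # rows at the collocation points
    D1N_inv = np.linalg.inv(D[:, 1:])            # D_{1:N}: trailing N columns
    # (P1): sup-norm (largest absolute row sum) of the inverse is at most 2
    assert np.max(np.sum(np.abs(D1N_inv), axis=1)) <= 2 + 1e-9
    # (P2): rows of D_{1:N}^{-1} W^{-1/2} have Euclidean norm at most sqrt(2)
    rows = D1N_inv / np.sqrt(omega)              # divides column j by sqrt(omega_j)
    assert np.max(np.linalg.norm(rows, axis=1)) <= np.sqrt(2) + 1e-9
```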

If $x^N\in\mathcal{P}_N^n$ is a solution of (1.2) associated with the discrete controls $u_i$, $1\le i\le N$, and if $\lambda^N\in\mathcal{P}_N^n$ satisfies (1.5)–(1.7), then we define

$$\begin{array}{ll}
X^N=[x^N(-1),x^N(\tau_1),\ldots,x^N(\tau_N),x^N(+1)], & X^*=[x^*(-1),x^*(\tau_1),\ldots,x^*(\tau_N),x^*(+1)],\\
U^N=[u_1,\ldots,u_N], & U^*=[u^*(\tau_1),\ldots,u^*(\tau_N)],\\
\Lambda^N=[\lambda^N(-1),\lambda^N(\tau_1),\ldots,\lambda^N(\tau_N),\lambda^N(+1)], & \Lambda^*=[\lambda^*(-1),\lambda^*(\tau_1),\ldots,\lambda^*(\tau_N),\lambda^*(+1)].
\end{array}$$

The following convergence result relative to the vector $\infty$-norm (largest absolute element) is established. Here $H^\eta(\Omega;\mathbb{R}^n)$ denotes the Sobolev space of functions with square integrable derivatives through order $\eta$, with norm denoted $\|\cdot\|_{H^\eta(\Omega;\mathbb{R}^n)}$.

###### Theorem 1.1

Suppose $(x^*,u^*)$ is a local minimizer for the continuous problem (1.1) with $x^*,\lambda^*\in H^\eta(\Omega;\mathbb{R}^n)$ for some $\eta\ge 2$. If both (A1)–(A2) and (P1)–(P2) hold, then for $N$ sufficiently large, the discrete problem (1.2) has a local minimizer $x^N\in\mathcal{P}_N^n$ and $U^N\in\mathcal{U}^N$, and an associated multiplier $\lambda^N\in\mathcal{P}_N^n$ satisfying (1.5)–(1.7); moreover, there exists a constant $c$, independent of $N$ and $\eta$, such that

$$\max\{\|X^N-X^*\|_\infty,\ \|U^N-U^*\|_\infty,\ \|\Lambda^N-\Lambda^*\|_\infty\}\le\Big(\frac{c}{N}\Big)^{p-3/2}\big(\|x^*\|_{H^p(\Omega;\mathbb{R}^n)}+\|\lambda^*\|_{H^p(\Omega;\mathbb{R}^n)}\big),\quad p:=\min\{\eta,N+1\}.\tag{1.9}$$

This result was established in [26] for unconstrained control problems, but with the exponent 3/2 replaced by 3 and with $\eta\ge 4$. Hence, the present analysis extends the theory to control constrained problems, and the exponent of $N$ in the convergence estimate is improved by 1.5. Since typical control constrained problems have regularity at most $\eta=2$ when (A1) holds, the previous estimate gives no guarantee of convergence for such problems.

The paper is organized as follows. In Section 2 the discrete optimization problem (1.2) is reformulated as a differential inclusion obtained from the first-order optimality conditions, and a general approach to convergence analysis is presented. We also establish the connection between the Karush-Kuhn-Tucker conditions and the polynomial conditions (1.5)–(1.7). In Section 3 we use results from [3] to bound the derivative of the interpolation error in $L^2(\Omega)$. Section 4 estimates how closely the solution to the continuous problem satisfies the first-order optimality conditions for the discrete problem, while Section 5 establishes the invertibility of the linearized dynamics for the discrete problem. Section 6 proves a Lipschitz property for the linearized optimality conditions, which yields a proof of Theorem 1.1. A numerical example given in Section 7 indicates the potential for further improvements to the convergence rate exponent. Appendix 2 contains a result of Yvon Maday concerning the error in best approximation relative to an $L^2$ norm with a singular weight function.

Notation. We let $\mathcal{P}_N$ denote the space of polynomials of degree at most $N$, while $\mathcal{P}_N^0$ is the subspace consisting of polynomials in $\mathcal{P}_N$ that vanish at $-1$ and $+1$. The Gauss collocation points $\tau_i$, $1\le i\le N$, are the roots of the Legendre polynomial $P_N$ of degree $N$. The associated Gauss quadrature weights $\omega_i$, $1\le i\le N$, are given by

$$\omega_i=\frac{2}{(1-\tau_i^2)\,P_N'(\tau_i)^2}.\tag{1.10}$$

For any $p\in\mathcal{P}_{2N-1}$, we have [38, Thm. 3.6.24]

$$\int_\Omega p(t)\,dt=\sum_{i=1}^N \omega_i p(\tau_i).\tag{1.11}$$
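Both the weight formula (1.10) and the exactness property (1.11) are easy to confirm numerically; a sketch using NumPy's Legendre utilities (the degree $N=7$ and the random polynomial are illustrative choices):

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import Legendre, leggauss

N = 7
tau, omega = leggauss(N)                 # Gauss nodes and weights
# (1.10): omega_i = 2 / ((1 - tau_i^2) * P_N'(tau_i)^2)
dPN = Legendre.basis(N).deriv()          # derivative of the degree-N Legendre polynomial
assert np.allclose(omega, 2.0 / ((1 - tau**2) * dPN(tau)**2))

# (1.11): the rule integrates any polynomial of degree at most 2N-1 exactly
rng = np.random.default_rng(0)
p = Polynomial(rng.standard_normal(2 * N))        # random polynomial of degree 2N-1
exact = p.integ()(1.0) - p.integ()(-1.0)          # exact integral over [-1, 1]
assert np.allclose(exact, np.dot(omega, p(tau)))
```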

Derivatives with respect to $t$ are denoted with either a dot above the function, as in $\dot{x}$, which is common in the optimal control literature, or with an accent, as in $x'$, which is common in the numerical analysis literature. The meaning of the norm $\|\cdot\|_\infty$ is based on context. If $x\in\mathcal{C}^0(\Omega;\mathbb{R}^n)$, then $\|x\|_\infty$ denotes the maximum of $|x(t)|$ over $t\in\Omega$, where $|\cdot|$ is the Euclidean norm. For a vector $v\in\mathbb{R}^m$, $\|v\|_\infty$ is the maximum of $|v_i|$ over $1\le i\le m$. If $A$ is a matrix, then $\|A\|_\infty$ is the largest absolute row sum (the matrix norm induced by the vector sup-norm). We often partition a vector $X$ into subvectors $X_i$, $0\le i\le N+1$. Similarly, if $U\in\mathbb{R}^{mN}$, then $U_i\in\mathbb{R}^m$, $1\le i\le N$. The dimension of the identity matrix $I$ is often clear from context; when necessary, the dimension of $I$ is specified by a subscript. For example, $I_n$ is the $n$ by $n$ identity matrix. The gradient is denoted $\nabla$, while $\nabla^2$ denotes the Hessian; subscripts indicate the differentiation variables. Throughout the paper, $c$ is a generic constant which is independent of the polynomial degree $N$ and the smoothness $\eta$, and which may have different values in different equations. The vector $\mathbf{1}$ has all entries equal to one, while the vector $\mathbf{0}$ has all entries equal to zero; again, their dimension should be clear from context. If $D$ is the differentiation matrix introduced in (1.8), then $D_j$ is the $j$-th column of $D$ and $D_{i:j}$ is the submatrix formed by columns $i$ through $j$. We let $\otimes$ denote the Kronecker product. If $A\in\mathbb{R}^{p\times q}$ and $B\in\mathbb{R}^{r\times s}$, then $A\otimes B$ is the $pr$ by $qs$ matrix composed of $p\times q$ blocks; the $(i,j)$ block is $a_{ij}B$. We let $L^2(\Omega)$ denote the usual space of functions square integrable on $\Omega$, while $H^p(\Omega)$ is the Sobolev space consisting of functions with square integrable derivatives through order $p$. The norm in $H^p(\Omega)$ is denoted $\|\cdot\|_{H^p(\Omega)}$. The seminorm in $H^p(\Omega)$ corresponding to the $L^2(\Omega)$ norm of the $p$-th derivative is denoted $|\cdot|_{H^p(\Omega)}$. The subspace of $H^1(\Omega)$ corresponding to functions that vanish at $-1$ and $+1$ is denoted $H^1_0(\Omega)$. We let $H^p(\Omega;\mathbb{R}^n)$ denote the $n$-fold Cartesian product $H^p(\Omega)\times\cdots\times H^p(\Omega)$.

## 2 Abstract Setting

In the introduction, we formulated the discrete optimization problem (1.2) and the necessary conditions (1.5)–(1.7) in polynomial spaces. However, to prove Theorem 1.1, we reformulate the first-order optimality conditions in Cartesian space. Given a feasible point $x\in\mathcal{P}_N^n$ and $u_i\in\mathcal{U}$, $1\le i\le N$, for the discrete problem (1.2), define $X_i=x(\tau_i)$, $0\le i\le N+1$, and $U_i=u_i$, $1\le i\le N$. As noted earlier, $D$ is a differentiation matrix in the sense that

$$\sum_{j=0}^N D_{ij}X_j=\dot{x}(\tau_i),\quad 1\le i\le N.$$

Since $\dot{x}\in\mathcal{P}_{N-1}^n$, it follows from the exactness result (1.11) for Gaussian quadrature that when $x$ satisfies the dynamics of (1.2), we have

$$x(1)=x(-1)+\int_\Omega \dot{x}(t)\,dt=x_0+\sum_{j=1}^N \omega_j f(x(\tau_j),u_j).\tag{2.1}$$

Hence, the discrete problem (1.2) can be reformulated as the nonlinear programming problem

$$\begin{array}{cl}
\mbox{minimize} & C(X_{N+1})\\
\mbox{subject to} & \displaystyle\sum_{j=0}^N D_{ij}X_j=f(X_i,U_i),\quad U_i\in\mathcal{U},\quad 1\le i\le N,\\
& \displaystyle X_0=x_0,\quad X_{N+1}=X_0+\sum_{j=1}^N \omega_j f(X_j,U_j).
\end{array}$$
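To make the reformulation concrete, here is a minimal numerical sketch on a toy problem of our own (not an example from the paper): minimize $C(x(1))=x(1)^2$ subject to $\dot{x}=u$, $u(t)\in[-1,1]$, $x(-1)=1$. Because $f(x,u)=u$, the cost reduces to $(x_0+\sum_j\omega_j U_j)^2$, the control is updated by projected gradient steps on the box, and the interior state values are recovered from the collocated dynamics by one linear solve with $D_{1:N}$:

```python
import numpy as np

def diff_matrix(nodes):
    """Derivatives of the Lagrange basis polynomials at the nodes."""
    n = len(nodes)
    w = np.array([1.0 / np.prod([nodes[j] - nodes[l] for l in range(n) if l != j])
                  for j in range(n)])
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, j] = (w[j] / w[i]) / (nodes[i] - nodes[j])
        M[i, i] = -np.sum(M[i, :])
    return M

N = 5
tau, omega = np.polynomial.legendre.leggauss(N)
D = diff_matrix(np.concatenate(([-1.0], tau)))[1:, :]   # rows at collocation points
x0 = 1.0

# Toy dynamics f(x, u) = u: cost = (x0 + omega . U)^2, smooth and convex in U;
# minimize over the box U_i in [-1, 1] by projected gradient iterations.
U = np.zeros(N)
step = 1.0 / (2.0 * np.dot(omega, omega))
for _ in range(200):
    r = x0 + np.dot(omega, U)                   # current value of x(1)
    U = np.clip(U - step * 2.0 * r * omega, -1.0, 1.0)

X_end = x0 + np.dot(omega, U)                   # X_{N+1}
# Recover the interior state values X_1, ..., X_N from the collocated dynamics:
X = np.linalg.solve(D[:, 1:], U - D[:, 0] * x0)
assert abs(X_end) < 1e-8                        # the cost x(1)^2 is driven to ~0
```

Since the target $x(1)=0$ is reachable without activating the control bounds, the projected gradient iteration drives the cost essentially to zero; a problem with active bounds would leave some components of `U` clipped at $\pm 1$.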

To prove Theorem 1.1, we analyze the existence and stability of solutions to the first-order optimality conditions associated with the nonlinear programming problem.

We introduce multipliers $\mu_i\in\mathbb{R}^n$, $0\le i\le N+1$, corresponding to each of the constraints in the nonlinear program. The first-order optimality conditions correspond to stationary points of the Lagrangian

$$C(X_{N+1})+\sum_{i=1}^N\Big\langle\mu_i,\,f(X_i,U_i)-\sum_{j=0}^N D_{ij}X_j\Big\rangle+\langle\mu_0,\,x_0-X_0\rangle+\Big\langle\mu_{N+1},\,X_0-X_{N+1}+\sum_{i=1}^N \omega_i f(X_i,U_i)\Big\rangle.$$

The stationarity conditions for the Lagrangian appear below.

$$\begin{aligned}
X_0 &\ \Rightarrow\ \mu_{N+1}=\mu_0+\sum_{i=1}^N D_{i0}\mu_i, & &(2.2)\\
X_j &\ \Rightarrow\ \sum_{i=1}^N D_{ij}\mu_i=\nabla_x H(X_j,U_j,\mu_j+\omega_j\mu_{N+1}),\quad 1\le j\le N, & &(2.3)\\
X_{N+1} &\ \Rightarrow\ \mu_{N+1}=\nabla C(X_{N+1}), & &(2.4)\\
U_i &\ \Rightarrow\ -\nabla_u H(X_i,U_i,\mu_i+\omega_i\mu_{N+1})\in N_{\mathcal{U}}(U_i),\quad 1\le i\le N. & &(2.5)
\end{aligned}$$

Since there are no state constraints, the conditions (2.2)–(2.4) are obtained by setting to zero the derivative of the Lagrangian with respect to the indicated variables. The condition (2.5) corresponds to stationarity of the Lagrangian with respect to the control. The relation between multipliers $\mu$ satisfying (2.2)–(2.5) and $\lambda\in\mathcal{P}_N^n$ satisfying (1.5)–(1.7) is as follows.

###### Proposition 2.1

The multipliers $\mu_i$, $0\le i\le N+1$, satisfy (2.2)–(2.5) if and only if the polynomial $\lambda\in\mathcal{P}_N^n$ satisfying the interpolation conditions $\lambda(\tau_i)=\mu_i/\omega_i+\mu_{N+1}$, $1\le i\le N$, and $\lambda(1)=\mu_{N+1}$, is a solution of (1.5)–(1.7) and $\lambda(-1)=\mu_0$.

Proof. We start with multipliers $\mu$ satisfying (2.2)–(2.5) and show that the polynomial $\lambda\in\mathcal{P}_N^n$ satisfying the interpolation conditions $\lambda(\tau_i)=\Lambda_i$, $1\le i\le N+1$, is a solution of (1.5)–(1.7) with $\lambda(-1)=\mu_0$. The converse follows by reversing all the steps in the derivation. Define $\Lambda_i=\mu_i/\omega_i+\mu_{N+1}$ for $1\le i\le N$, and $\Lambda_{N+1}=\mu_{N+1}$. Hence, we have $\mu_i=\omega_i(\Lambda_i-\Lambda_{N+1})$ for $1\le i\le N$. In (2.5) we divide by $\omega_i$ and substitute $\mu_i+\omega_i\mu_{N+1}=\omega_i\Lambda_i$. In (2.3) we divide by $\omega_j$, and substitute $\mu_i=\omega_i(\Lambda_i-\Lambda_{N+1})$ and

$$D_{ij}=-\Big(\frac{\omega_j}{\omega_i}\Big)D^\dagger_{ji},\qquad D^\dagger_{i,N+1}=-\sum_{j=1}^N D^\dagger_{ij},\quad 1\le i\le N.\tag{2.6}$$
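The change of variables hinges on the algebraic relation (2.6) between $D$ and the matrix $D^\dagger$ of (2.10), the differentiation matrix built on the nodes $\tau_1,\ldots,\tau_{N+1}$ and evaluated at the collocation points. A numerical sanity check of both identities (our own sketch, using a standard barycentric construction of the Lagrange derivatives):

```python
import numpy as np

def lagrange_diff(nodes, rows):
    """Matrix of derivatives of the Lagrange basis on `nodes`,
    evaluated at the node indices listed in `rows`."""
    n = len(nodes)
    w = np.array([1.0 / np.prod([nodes[j] - nodes[l] for l in range(n) if l != j])
                  for j in range(n)])
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, j] = (w[j] / w[i]) / (nodes[i] - nodes[j])
        M[i, i] = -np.sum(M[i, :])
    return M[rows, :]

N = 8
tau, omega = np.polynomial.legendre.leggauss(N)
D = lagrange_diff(np.concatenate(([-1.0], tau)), rows=list(range(1, N + 1)))
Ddag = lagrange_diff(np.concatenate((tau, [1.0])), rows=list(range(N)))

# First identity in (2.6): D_{ij} = -(omega_j / omega_i) * Ddag_{ji}, 1 <= i, j <= N
lhs = D[:, 1:]
rhs = -np.outer(1.0 / omega, omega) * Ddag[:, :N].T
assert np.allclose(lhs, rhs)
# Second identity: Ddag_{i,N+1} = -sum_j Ddag_{ij} (each row of Ddag sums to zero)
assert np.allclose(Ddag[:, N], -Ddag[:, :N].sum(axis=1))
```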

With these modifications, (2.3)–(2.5) become

$$\begin{aligned}
\sum_{j=1}^{N+1} D^\dagger_{ij}\Lambda_j &= -\nabla_x H(X_i,U_i,\Lambda_i), & &(2.7)\\
\Lambda_{N+1} &= \nabla C(X_{N+1}), & &(2.8)\\
N_{\mathcal{U}}(U_i) &\ni -\nabla_u H(X_i,U_i,\Lambda_i), & &(2.9)
\end{aligned}$$

for $1\le i\le N$. In [18, Thm. 1] it is shown that if $\lambda\in\mathcal{P}_N^n$ is a polynomial that satisfies the conditions $\lambda(\tau_j)=\Lambda_j$ for $1\le j\le N+1$, then

$$\sum_{j=1}^{N+1} D^\dagger_{ij}\Lambda_j=\dot{\lambda}(\tau_i),\quad 1\le i\le N.\tag{2.10}$$

This identity coupled with (2.7)–(2.9) implies that (1.5)–(1.7) hold.

Now let us consider the final term in (2.2). Since the polynomial that is identically equal to 1 has derivative 0 and since $D$ is a differentiation matrix, we have $\sum_{j=0}^N D_{ij}=0$, which implies that $D_0=-\sum_{j=1}^N D_j$, where $D_j$ is the $j$-th column of $D$. Hence, the final term in (2.2) can be written

$$\sum_{i=1}^N \mu_i D_{i0}=-\sum_{i=1}^N\sum_{j=1}^N \mu_i D_{ij}=-\sum_{i=1}^N\sum_{j=1}^N \omega_j\Big(\frac{\mu_i}{\omega_i}\Big)\Big(\frac{\omega_i D_{ij}}{\omega_j}\Big)=\sum_{i=1}^N\sum_{j=1}^N \omega_i D^\dagger_{ij}(\Lambda_j-\Lambda_{N+1})=\sum_{i=1}^N\sum_{j=1}^{N+1} \omega_i D^\dagger_{ij}\Lambda_j.\tag{2.11}$$

Again, if $\lambda\in\mathcal{P}_N^n$ is the interpolating polynomial that satisfies $\lambda(\tau_j)=\Lambda_j$ for $1\le j\le N+1$, then by (2.10), (2.11), and the exactness of Gaussian quadrature for polynomials in $\mathcal{P}_{2N-1}$, we have

$$\sum_{i=1}^N \mu_i D_{i0}=\sum_{i=1}^N \omega_i\dot{\lambda}(\tau_i)=\int_\Omega \dot{\lambda}(\tau)\,d\tau=\lambda(1)-\lambda(-1).\tag{2.12}$$

Since $\lambda(1)=\Lambda_{N+1}=\mu_{N+1}$, we deduce from (2.2) and (2.12) that $\lambda(-1)=\mu_0$.

In the proof of Proposition 2.1, $\Lambda_0:=\lambda(-1)=\mu_0$ and $\Lambda_{N+1}=\mu_{N+1}$. We combine (2.2), (2.7), and (2.11) to obtain

$$\Lambda_{N+1}=\Lambda_0-\sum_{i=1}^N \omega_i\nabla_x H(X_i,U_i,\Lambda_i).\tag{2.13}$$

Based on Proposition 2.1, the optimality conditions (2.2)–(2.5) are equivalent to (1.5)–(1.7), which are equivalent to (2.7)–(2.9) and (2.13). This latter formulation, which we refer to as the transformed adjoint system in our earlier work [22], is most convenient for the subsequent analysis. This leads us to write the first-order optimality conditions for (1.2) as an inclusion involving the mapping $\mathcal{T}$, where

$$(\mathcal{T}^0,\mathcal{T}^1,\ldots,\mathcal{T}^6)(X,U,\Lambda)\in\mathbb{R}^n\times\mathbb{R}^{nN}\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^{nN}\times\mathbb{R}^n\times\mathbb{R}^{mN}.$$

The 7 components of are defined as

$$\begin{aligned}
\mathcal{T}^0(X,U,\Lambda) &= X_0-x_0,\\
\mathcal{T}^1_i(X,U,\Lambda) &= \Big(\sum_{j=0}^N D_{ij}X_j\Big)-f(X_i,U_i),\quad 1\le i\le N,\\
\mathcal{T}^2(X,U,\Lambda) &= X_{N+1}-X_0-\sum_{j=1}^N \omega_j f(X_j,U_j),\\
\mathcal{T}^3(X,U,\Lambda) &= \Lambda_{N+1}-\Lambda_0+\sum_{i=1}^N \omega_i\nabla_x H(X_i,U_i,\Lambda_i),\\
\mathcal{T}^4_i(X,U,\Lambda) &= \Big(\sum_{j=1}^{N+1} D^\dagger_{ij}\Lambda_j\Big)+\nabla_x H(X_i,U_i,\Lambda_i),\quad 1\le i\le N,
\end{aligned}$$

$\mathcal{T}^5$