Convergence Rate for a Radau hp Collocation Method Applied to Constrained Optimal Control

October 24, 2017. The authors gratefully acknowledge support by the Office of Naval Research under grants N00014-11-1-0068 and N00014-15-1-2048, and by the National Science Foundation under grants DMS-1522629 and CBET-1404767.

William W. Hager hager@ufl.edu, http://people.clas.ufl.edu/hager/, PO Box 118105, Department of Mathematics, University of Florida, Gainesville, FL 32611-8105. Phone (352) 294-2308. Fax (352) 392-8357.    Hongyan Hou hongyan388@gmail.com, Chemical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213.    Subhashree Mohapatra subha@ufl.edu, Department of Mathematics, University of Florida, Gainesville, FL 32611.    Anil V. Rao anilvrao@ufl.edu, http://www.mae.ufl.edu/rao, Department of Mechanical and Aerospace Engineering, P.O. Box 116250, Gainesville, FL 32611-6250. Phone (352) 392-0961. Fax:(352) 392-7303.
Abstract

For control problems with control constraints, a local convergence rate is established for an $hp$-method based on collocation at the Radau quadrature points in each mesh interval of the discretization. If the continuous problem has a sufficiently smooth solution and the Hamiltonian satisfies a strong convexity condition, then the discrete problem possesses a local minimizer in a neighborhood of the continuous solution, and as either the number of collocation points or the number of mesh intervals increases, the discrete solution converges to the continuous solution in the sup-norm. The convergence is exponentially fast with respect to the degree of the polynomials on each mesh interval, while the error is bounded by a polynomial in the mesh spacing. An advantage of the $hp$-scheme over global polynomials is that there is a convergence guarantee when the mesh spacing is sufficiently small, while the convergence result for global polynomials requires that a norm of the linearized dynamics be sufficiently small. Numerical examples explore the convergence theory.

Key words. hp collocation, Radau collocation, convergence rate, optimal control, orthogonal collocation

AMS subject classifications. 49M25, 49M37, 65K05, 90C30

1 Introduction

A convergence rate is established for an $hp$-orthogonal collocation method applied to a constrained control problem of the form

$$\begin{array}{cl}
\mbox{minimize} & C(x(1))\\[2pt]
\mbox{subject to} & \dot{x}(t) = f(x(t), u(t)), \quad u(t) \in \mathcal{U}, \quad t \in [0,1],\\[2pt]
& x(0) = x_0,
\end{array} \qquad (1.1)$$

where $C: \mathbb{R}^n \to \mathbb{R}$, the control constraint set $\mathcal{U} \subset \mathbb{R}^m$ is closed and convex with nonempty interior, the state $x(t) \in \mathbb{R}^n$, $\dot{x}$ denotes the derivative of $x$ with respect to $t$, $f: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, and $x_0$ is the initial condition, which we assume is given; $\mathcal{C}^\ell$ denotes the space of $\ell$ times continuously differentiable functions mapping $[0,1]$ to $\mathbb{R}^d$ for some $d$. The value of $d$ should be clear from context; states and costates always have $n$ components and controls have $m$ components. It is assumed that $C$ and $f$ are at least continuous. When the dynamics in (1.1) can be solved for the state $x$ as a function of the control $u$, the control problem reduces to a constrained minimization over $u$.

The development of $hp$-techniques in the context of finite element methods for boundary-value problems began with the work of Babuška and Gui in [24, 25, 26], and Babuška and Suri in [1, 2, 3]. In the $hp$-collocation approach that we develop for (1.1), the time domain is initially partitioned into a mesh. To simplify the discussion, we focus on a uniform mesh consisting of $K$ intervals $[t_{k-1}, t_k]$, $1 \le k \le K$, defined by the mesh points $t_k = kh$, where $h = 1/K$. The dynamics of (1.1) are reformulated using a change of variables. Let $\bar{t}_k$ be the midpoint of the mesh interval $[t_{k-1}, t_k]$. We make the change of variables $t = \bar{t}_k + (h/2)\tau$, where $h/2$ is half the width of the mesh interval and $\tau \in [-1, 1]$; let us define $x^k$ by $x^k(\tau) = x(\bar{t}_k + (h/2)\tau)$. Thus $x^k$ corresponds to the restriction of $x$ to the mesh interval $[t_{k-1}, t_k]$. Similarly, we define a control $u^k$ corresponding to the restriction of $u$ to the $k$-th mesh interval. In the new variables, the control problem reduces to finding state-control pairs $(x^k, u^k)$, $1 \le k \le K$, each pair defined on the interval $[-1, 1]$, to solve the problem

$$\begin{array}{cl}
\mbox{minimize} & C(x^K(1))\\[2pt]
\mbox{subject to} & \dfrac{dx^k}{d\tau}(\tau) = \dfrac{h}{2}\, f(x^k(\tau), u^k(\tau)), \quad u^k(\tau) \in \mathcal{U}, \quad \tau \in [-1,1], \quad 1 \le k \le K,\\[6pt]
& x^k(-1) = x^{k-1}(1), \quad 1 \le k \le K.
\end{array} \qquad (1.2)$$
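To see where the factor $h/2$ in (1.2) comes from, note that by the chain rule applied to the change of variables above,

$$\frac{dx^k}{d\tau}(\tau) = \frac{h}{2}\,\dot{x}\Big(\bar{t}_k + \frac{h}{2}\tau\Big) = \frac{h}{2}\, f\big(x^k(\tau), u^k(\tau)\big), \qquad \tau \in [-1,1],$$

so the dynamics on each transformed mesh interval carry the scaling $h/2$.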

Since the function $x^0$ does not exist (there is no 0-th mesh interval), we simply define $x^0(1) = x_0$, the initial condition. The condition

$$x^k(-1) = x^{k-1}(1), \qquad 1 \le k \le K, \qquad (1.3)$$

in (1.2) corresponds to the initial condition when $k = 1$ and to continuity of the state across a mesh interval boundary when $k > 1$. Throughout the paper, (1.3) is referred to as the continuity condition.

In the $hp$-scheme developed in this paper, the dynamics for $x^k$ are approximated by the Radau collocation scheme developed in [9, 21, 22, 30]. Let $\mathcal{P}_N$ denote the space of polynomials of degree at most $N$ defined on the interval $[-1,1]$, and let $\mathcal{P}_N^n$ denote the $n$-fold Cartesian product $\mathcal{P}_N \times \cdots \times \mathcal{P}_N$. We analyze a discrete approximation to (1.2) of the form

$$\begin{array}{cl}
\mbox{minimize} & C(x^K(1))\\[2pt]
\mbox{subject to} & \dot{x}^k(\tau_i) = \dfrac{h}{2}\, f(x^k(\tau_i), U^k_i), \quad 1 \le i \le N, \quad 1 \le k \le K,\\[6pt]
& x^k(-1) = x^{k-1}(1), \quad U^k_i \in \mathcal{U}, \quad x^k \in \mathcal{P}_N^n, \quad 1 \le i \le N, \quad 1 \le k \le K.
\end{array} \qquad (1.4)$$

Note that there is no polynomial associated with the control; $U^k_i$ corresponds to the value of the control at $\tau_i$ in mesh interval $k$. In (1.4) the dimension of $\mathcal{P}_N$ is $N+1$ and there are $K$ mesh intervals, so a component of the state variable is chosen from a space of dimension $K(N+1)$. Similarly, there are $K(N+1)$ equations in (1.4) corresponding to the collocated dynamics at $KN$ points and the $K$ continuity conditions.

For simplicity in the analysis, the same degree polynomials are used in each mesh interval, while in practical implementations of the $hp$-scheme [9, 10, 36, 38], polynomials of different degrees are often used on different intervals. On intervals where the solution is smooth, high degree polynomials are employed, while on intervals where the solution is nonsmooth, low degree polynomials are used.

We focus on a collocation scheme based on the $N$ Radau quadrature points satisfying

$$-1 < \tau_1 < \tau_2 < \cdots < \tau_N = 1.$$

The interior abscissas $\tau_i$, $1 \le i \le N-1$, are the zeros of the degree $N-1$ Jacobi polynomial associated with the weight $1-\tau$. These quadrature points are sometimes called the flipped Radau points, while the standard Radau points are $-\tau_i$, $1 \le i \le N$. The analysis is the same for either set of points, while the notation is a little cleaner for the flipped points. Besides the collocation points, our analysis also utilizes the noncollocated point $\tau_0 = -1$.
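As a purely illustrative numerical sketch (not part of the paper's analysis), the following Python code computes the flipped Radau points and weights as just described — interior abscissas from the zeros of the degree $N-1$ Jacobi polynomial for the weight $1-\tau$, with $\tau_N = 1$ appended — and checks the quadrature on a random polynomial of degree $2N-2$; the function name flipped_radau is ours.

```python
# Illustrative sketch (not from the paper): flipped Radau points and weights.
import numpy as np
from scipy.special import roots_jacobi

def flipped_radau(N):
    """Return the N flipped Radau points -1 < tau_1 < ... < tau_N = 1 and their weights."""
    # Interior abscissas: zeros of the degree N-1 Jacobi polynomial for the weight 1 - tau.
    interior, _ = roots_jacobi(N - 1, 1.0, 0.0)
    tau = np.append(np.sort(interior), 1.0)
    # Weights from the moment equations: sum_i w_i tau_i^j = int_{-1}^{1} tau^j dtau, j < N.
    V = np.vander(tau, N, increasing=True).T
    moments = np.array([2.0 / (j + 1) if j % 2 == 0 else 0.0 for j in range(N)])
    return tau, np.linalg.solve(V, moments)

if __name__ == "__main__":
    N = 5
    tau, w = flipped_radau(N)
    # N-point Radau quadrature should integrate polynomials of degree 2N-2 exactly.
    p = np.polynomial.Polynomial(np.random.default_rng(0).standard_normal(2 * N - 1))
    exact = p.integ()(1.0) - p.integ()(-1.0)
    print(tau, w, abs(w @ p(tau) - exact))   # quadrature error near machine precision
```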

As pointed out in [31], in a global collocation scheme where $K = 1$, it may not be possible to solve for the discrete state in terms of the discrete controls for certain choices of the dynamics $f$. In this paper, we show that locally, it is possible to solve for the discrete state in terms of the discrete control when $K$ is sufficiently large, or equivalently, when $h$ is sufficiently small, regardless of the choice of $f$. In this respect, the $hp$-collocation approach is more robust than a global scheme.

Other global collocation schemes that have been presented in the literature are based on the Lobatto quadrature points [16, 19], on the Chebyshev quadrature points [17, 20], on the Gauss quadrature points [4, 22], and on the extrema of Jacobi polynomials [43]. Kang [34, 35] considers control systems in feedback linearizable normal form, and shows that when the Lobatto discretized control problem is augmented with bounds on the states and control, and on certain Legendre polynomial expansion coefficients, then the objectives in the discrete problem converge to the optimal objective of the continuous problem at an exponential rate. Kang’s analysis does not involve coercivity assumptions for the continuous problem, but instead imposes bounds in the discrete problem. Also, in [23] a consistency result is established for a scheme based on Lobatto collocation.

Any of the global schemes could be developed into an $hp$-collocation scheme. Our rationale for basing our $hp$-scheme on the Radau collocation points was the following: In numerical experiments such as those in [22], there is often not much difference between the convergence speed of approximations based on either Gauss or Radau collocation, while the Lobatto scheme often converged much more slowly; and in some cases, the Lobatto costate approximation did not converge due to a null space that arises in the first-order optimality conditions – see [22]. On the other hand, the implementation of an $hp$-scheme based on the Radau quadrature points was much simpler than the implementation based on the Gauss quadrature points. The Gauss points lie in the interior of each mesh interval, which requires the introduction of the state value at the mesh points. Since one of the Radau points is a mesh point, there is no need to introduce an additional noncollocated point. The implementation ease of Chebyshev quadrature should be similar to that of Gauss and was not pursued. The $hp$-collocation scheme analyzed in this paper corresponds to the scheme implemented in the popular GPOPS-II software package [40] for solving optimal control problems. This paper, in essence, provides a theoretical justification for the algorithm implemented in the software.

For $x \in \mathcal{C}^0([0,1]; \mathbb{R}^n)$, we use the sup-norm given by

$$\|x\|_\infty = \sup\{ |x(t)| : t \in [0,1] \},$$

where $|\cdot|$ is the Euclidean norm. Given $y$ and $\rho > 0$, the ball with center $y$ and radius $\rho$ is denoted

$$\mathcal{B}_\rho(y) = \{ x : |x - y| \le \rho \}.$$

The following regularity assumption is assumed to hold throughout the paper.

Smoothness. The problem (1.1) has a local minimizer $(x^*, u^*)$ in $\mathcal{C}^1([0,1];\mathbb{R}^n) \times \mathcal{C}^0([0,1];\mathbb{R}^m)$. There exists an open set $\Omega$ and $\rho > 0$ such that

$$\mathcal{B}_\rho(x^*(t), u^*(t)) \subset \Omega \quad \mbox{for every } t \in [0,1].$$

Moreover, the first two derivatives of $f$ and $C$ are Lipschitz continuous on the closure of $\Omega$ and on $\mathcal{B}_\rho(x^*(1))$, respectively.

Let $\lambda^*$ denote the solution of the linear costate equation

$$\dot{\lambda}^*(t) = -\nabla_x H(x^*(t), u^*(t), \lambda^*(t)), \qquad \lambda^*(1) = \nabla C(x^*(1)), \qquad (1.5)$$

where $H$ is the Hamiltonian defined by $H(x, u, \lambda) = \lambda^{\mathsf{T}} f(x, u)$ and $\nabla$ denotes gradient. By the first-order optimality conditions (Pontryagin's minimum principle), we have

$$-\nabla_u H(x^*(t), u^*(t), \lambda^*(t)) \in N_{\mathcal{U}}(u^*(t)) \quad \mbox{for all } t \in [0,1]. \qquad (1.6)$$

For any $u \in \mathcal{U}$,

$$N_{\mathcal{U}}(u) = \{ w \in \mathbb{R}^m : w^{\mathsf{T}}(v - u) \le 0 \mbox{ for all } v \in \mathcal{U} \},$$

while $N_{\mathcal{U}}(u) = \{0\}$ if $u$ lies in the interior of $\mathcal{U}$.
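For a concrete illustration (not taken from the paper), suppose the control set is a box, $\mathcal{U} = \{ u \in \mathbb{R}^m : a \le u \le b \}$ componentwise with $a < b$. Then the normal cone has the explicit description

$$N_{\mathcal{U}}(u) = \{ w \in \mathbb{R}^m : w_j \ge 0 \mbox{ if } u_j = b_j, \ \ w_j \le 0 \mbox{ if } u_j = a_j, \ \ w_j = 0 \mbox{ if } a_j < u_j < b_j \},$$

so the minimum principle reduces to a componentwise sign condition on $\nabla_u H$ along the optimal trajectory.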

We will show that the first-order optimality conditions (Karush–Kuhn–Tucker conditions) for (1.4) are equivalent to the existence of $\lambda^k \in \mathcal{P}_{N-1}^n$, $1 \le k \le K$, such that

(1.7)–(1.9)

Since the mesh interval $K+1$ does not exist, (1.8) includes a definition for $\lambda^{K+1}$. As we will see in Proposition 2.1, the costate value at the left end of each mesh interval is the multiplier associated with the continuity condition (1.3). Throughout the paper, $\omega_i$, $1 \le i \le N$, is the Radau quadrature weight associated with $\tau_i$. By [41, Eq. (3.134b)],

$$\omega_N = \frac{2}{N^2}.$$
Szegő in [42, Thm. 8.9.1] provides tight estimates for both the zeros and the derivatives of the Jacobi polynomial at those zeros, which yield an upper bound on the remaining quadrature weights.

By [41, Thm. 3.26], $N$-point Radau quadrature is exact for polynomials of degree at most $2N-2$:

$$\int_{-1}^{1} p(\tau)\, d\tau = \sum_{i=1}^{N} \omega_i\, p(\tau_i) \quad \mbox{for every } p \in \mathcal{P}_{2N-2}.$$

Notice that the system (1.7)–(1.9) for the costate approximation does not contain a continuity condition as in the primal discretization (1.4), so the costate approximation could be discontinuous across the mesh points. Since $\mathcal{P}_{N-1}$ has dimension $N$ and there are $K$ mesh intervals, the approximation to a component of the costate has dimension $NK$, while (1.7)–(1.8) provides an equal number of equations. Hence, if a continuity condition for the costate were imposed at the mesh points, the system of equations (1.7)–(1.9) along with the continuity condition would be overdetermined.

The following two assumptions are utilized in the convergence analysis.

  • (A1) For some $\alpha > 0$, the smallest eigenvalue of the Hessian matrices $\nabla^2 C$ and $\nabla^2_{(x,u)} H$ is greater than or equal to $\alpha$, uniformly in a neighborhood of the optimal solution.

  • (A2) $K$ is large enough, or equivalently $h$ is small enough, that the two smallness conditions involving the quantities defined below hold, where

    (1.10)

    Here $\|\cdot\|_\infty$ is the matrix sup-norm (largest absolute row sum).

The coercivity assumption (A1) ensures that the solution of the discrete problem is a local minimizer. The condition (A2) enters into the analysis of stability for the perturbed dynamics; as we will see, it ensures that we can solve the linearized dynamics for the discrete state in terms of the discrete control. In [31, p. 804], where we analyze a Gauss collocation scheme on a single interval, there is no $h$ in the analogue of (A2). Hence, the convergence theory in [31] only applies to problems for which a norm of the linearized dynamics is sufficiently small. Consequently, the convergence theory for the $hp$-scheme is more robust since it applies to a broader class of problems.

In addition to the two assumptions, the analysis utilizes four properties of the Radau collocation scheme. Let $D$ be the $N$ by $N+1$ matrix defined by

$$D_{ij} = \dot{L}_j(\tau_i), \quad 1 \le i \le N, \quad 0 \le j \le N, \qquad \mbox{where } L_j(\tau) = \prod_{\substack{l=0 \\ l \ne j}}^{N} \frac{\tau - \tau_l}{\tau_j - \tau_l}. \qquad (1.11)$$

The matrix $D$ is a differentiation matrix in the sense that $(D\mathbf{p})_i = \dot{p}(\tau_i)$, $1 \le i \le N$, whenever $p \in \mathcal{P}_N$ is the polynomial that satisfies $p(\tau_j) = \mathbf{p}_j$ for $0 \le j \le N$. The submatrix $D_{1:N}$, consisting of the trailing $N$ columns of $D$, has the following properties:

  • (P1) $D_{1:N}$ is invertible and $\|D_{1:N}^{-1}\|_\infty \le 2$.

  • (P2) If $W$ is the diagonal matrix containing the Radau quadrature weights $\omega_1, \ldots, \omega_N$ on the diagonal, then the rows of the matrix $D_{1:N}^{-1} W^{-1/2}$ have Euclidean norm bounded by $\sqrt{2}$.

The fact that $D_{1:N}$ is invertible is established in [21, Prop. 1], while a proof for the bound on the inverse in (P1) is given in the appendix of [33]; however, there is still no proof for (P2). Numerically, we have checked the rows of $D_{1:N}^{-1} W^{-1/2}$ for $N$ up to 300 and found that the last row always had the largest Euclidean norm. From the formula given in [22, Eq. (53)], the elements in the last row of $D_{1:N}^{-1}$ are the quadrature weights $\omega_1, \ldots, \omega_N$. Hence, the elements in the last row of $D_{1:N}^{-1} W^{-1/2}$ are the square roots of the Radau quadrature weights, and the Euclidean norm of the last row is $(\sum_{i=1}^{N} \omega_i)^{1/2} = \sqrt{2}$, the constant appearing in (P2). Nonetheless, there is still no proof of (P2) for general $N$.
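The following Python sketch (ours, not from the paper) reproduces this kind of numerical check under the definitions given above: it assembles the differentiation matrix $D$ from the Lagrange basis at $\tau_0 = -1, \tau_1, \ldots, \tau_N$ and evaluates the norms appearing in (P1) and (P2) for a given $N$.

```python
# Illustrative numerical check of (P1)-(P2) (our sketch, using the definitions above).
import numpy as np
from scipy.special import roots_jacobi

def radau_points_weights(N):
    interior, _ = roots_jacobi(N - 1, 1.0, 0.0)      # zeros of the Jacobi polynomial, weight 1 - tau
    tau = np.append(np.sort(interior), 1.0)          # flipped Radau points, tau_N = 1
    V = np.vander(tau, N, increasing=True).T
    moments = np.array([2.0 / (j + 1) if j % 2 == 0 else 0.0 for j in range(N)])
    return tau, np.linalg.solve(V, moments)

def radau_differentiation_matrix(N):
    """Rows i = 1..N, columns j = 0..N of D_ij = L_j'(tau_i), Lagrange basis at tau_0..tau_N."""
    tau, w = radau_points_weights(N)
    pts = np.insert(tau, 0, -1.0)                    # prepend the noncollocated point tau_0 = -1
    c = np.array([np.prod(pts[k] - np.delete(pts, k)) for k in range(N + 1)])
    A = np.empty((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i == j:
                A[i, j] = np.sum(1.0 / (pts[i] - np.delete(pts, i)))
            else:
                A[i, j] = (c[i] / c[j]) / (pts[i] - pts[j])
    return A[1:, :], w                               # drop the row associated with tau_0

if __name__ == "__main__":
    N = 20
    D, w = radau_differentiation_matrix(N)
    Dinv = np.linalg.inv(D[:, 1:])                   # inverse of the trailing N columns
    print(np.linalg.norm(Dinv, np.inf))              # observed to stay below 2        (P1)
    M = Dinv @ np.diag(1.0 / np.sqrt(w))             # the matrix appearing in (P2)
    print(np.linalg.norm(M, axis=1).max())           # observed to stay below sqrt(2)  (P2)
```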

There is a related matrix $D^\dagger$ that enters into the convergence analysis of the $hp$-scheme. Let $D^\dagger$ be the $N$ by $N$ matrix defined by

(1.12)

The matrix $D^\dagger$ arises in the analysis of the costate equation. In Section 4.2.1 of [22], we introduce a matrix $D^\ddagger$ which is a differentiation matrix for the collocation points $\tau_i$, $1 \le i \le N$. That is, if $p$ is a polynomial of degree at most $N-1$ and $\mathbf{p}$ is the vector with components $p(\tau_i)$, $1 \le i \le N$, then $(D^\ddagger \mathbf{p})_i = \dot{p}(\tau_i)$. The matrix $D^\dagger$ only differs from $D^\ddagger$ in a single entry. As a result,

(1.13)

If $D^\dagger \mathbf{p} = \mathbf{0}$, then $\dot{p}$ vanishes at $N-1$ of the collocation points by the first equality in (1.13). Since $\dot{p}$ has degree at most $N-2$ and it vanishes at $N-1$ points, $\dot{p}$ is identically zero and $p$ is constant. By the final equation in (1.13), this constant vanishes when $D^\dagger \mathbf{p} = \mathbf{0}$, which implies that $p$ is identically zero. This shows that $D^\dagger$ is invertible. We find that $D^\dagger$ has the following properties:

  • (P3) $D^\dagger$ is invertible and $\|(D^\dagger)^{-1}\|_\infty \le 2$.

  • (P4) The rows of the matrix $(D^\dagger)^{-1} W^{-1/2}$ have Euclidean norm bounded by $\sqrt{2}$.

In a proposition at the end of the paper, an explicit formula is given for the inverse of $D^\dagger$. However, it is not clear from the formula that $\|(D^\dagger)^{-1}\|_\infty$ is bounded by 2. Numerically, we find that the norms in (P3) and (P4) achieve their maximum in the first row of the matrix, and in the Appendix, we observe that for $N$ up to 300, these norms increase monotonically towards the given bounds. Again, a proof of (P3) and (P4) for general $N$ is missing. Properties (P1)–(P4) differ from the assumptions (A1)–(A2) in the sense that (A1)–(A2) only hold for certain control problems; in contrast, (P1) has been established for general $N$, while (P2), (P3), and (P4) hold for all choices of $N$ that we have checked. The properties (P1)–(P4) are stated as four separate properties since they are used in different ways in the analysis; however, (P2) implies (P1) and (P4) implies (P3) by the Schwarz inequality, as pointed out in the introduction of [33].

In the analysis of the Gauss scheme [31], properties (P3) and (P4) follow immediately from (P1) and (P2) since the analogue of $D^\dagger$ in [31] is related to $D_{1:N}$ through an exchange operation. However, due to the asymmetry of the Radau collocation points and the lower degree of the polynomials in the discrete adjoint system (1.7)–(1.9), a corresponding relationship between $D^\dagger$ and $D_{1:N}$ in the Radau scheme does not seem to hold. Nonetheless, the bounds in (P3) and (P4) are observed to be the same as the bounds in (P1) and (P2).

Given a local minimizer $(x^*, u^*)$ of (1.1), let $x^{*k}$, $u^{*k}$, and $\lambda^{*k}$ be the state, control, and costate associated with the mesh interval $k$ and the change of variables $t = \bar{t}_k + (h/2)\tau$. The domain of $x^{*k}$, $u^{*k}$, or $\lambda^{*k}$ is $[-1,1]$, where $\tau = -1$ corresponds to $t = t_{k-1}$ and $\tau = +1$ corresponds to $t = t_k$. We define the following related discrete variables:

$$X^{*k}_i = x^{*k}(\tau_i), \qquad U^{*k}_i = u^{*k}(\tau_i), \qquad \Lambda^{*k}_i = \lambda^{*k}(\tau_i).$$

Suppose that $x^k \in \mathcal{P}_N^n$, $1 \le k \le K$, is a polynomial which is a stationary point of (1.4) for some discrete controls $U^k_i$, and suppose that $\lambda^k$, $1 \le k \le K$, satisfy (1.7)–(1.9). We define the following related discrete variables:

$$X^k_i = x^k(\tau_i), \qquad \Lambda^k_i = \lambda^k(\tau_i).$$

Thus capital letters always refer to discrete variables. As noted earlier, the costate polynomials associated with the discrete problem are typically discontinuous across the mesh points, so that in general $\lambda^k(1) \ne \lambda^{k+1}(-1)$.

The convergence analysis only involves the smoothness of the optimal state and associated costate on the interior of each mesh interval. Let $H^p$ denote the Sobolev space of functions with square integrable derivatives through order $p$. We also use the space of continuous functions whose restrictions to the mesh intervals $[t_{k-1}, t_k]$ are contained in $H^p$ for each $k$ between 1 and $K$ (piecewise $H^p$). The norm on the piecewise space is the same as the norm on $H^p$ except that the integral is computed over the interior of each mesh interval. In this paper, the error bounds are expressed in terms of a seminorm which only involves the $p$-th order derivative.

The following convergence result relative to the vector sup-norm (largest absolute element) will be established.

Theorem 1.1

If $(x^*, u^*)$ is a local minimizer for the continuous problem whose state and associated costate are piecewise $H^p$ for some $p$, and (A1), (A2), and (P1)–(P4) hold, then for $N$ sufficiently large or for $h$ sufficiently small, the discrete problem has a local minimizer and associated multiplier lying in a sup-norm neighborhood of the continuous solution, and we have

(1.14)

where $c$ is independent of $h$, $K$, and $N$.

In a recent paper [33], where we analyze a Gauss collocation scheme on a single interval, the analogous error bound has a somewhat different form. The differences between Radau and Gauss collocation are due to the asymmetry of the Radau points and the corresponding asymmetry in the Radau first-order optimality conditions.

In the proof of Theorem 1.1, we need to make the right side of (1.14) sufficiently small to establish the existence of the claimed solution to the discrete problem. The conditions in the statement of the theorem ensure that the terms on the right side of (1.14) tend to zero both as $h$ goes to zero and as $N$ tends to infinity.

Since the discrete costate could be discontinuous across a mesh point, Theorem 1.1 implies convergence of the discrete costate on either side of the mesh point to the continuous costate at the mesh point. The discrete problem provides an estimate for the optimal control at the collocation points in the continuous problem, but not at the initial time $t = 0$ since $\tau_0 = -1$ is not a collocation point. Due to the strong convexity assumption (A1), an estimate for the control at $t = 0$ can be obtained from the minimum principle (1.6) since the initial state is given, while we have an estimate for the associated costate at $\tau_0 = -1$. Alternatively, polynomial interpolation could be used to obtain an estimate for the optimal control at $t = 0$.

The paper is organized as follows. In Section 2 the discrete optimization problem (1.4) is reformulated as a nonlinear system obtained from the first-order optimality conditions, and a general approach to convergence analysis is presented. In Section 3 we obtain a bound for the error in interpolation at both the collocation points and $\tau_0 = -1$. Section 4 obtains an estimate for how closely the solution to the continuous problem satisfies the first-order optimality conditions for the discrete problem. Section 5 establishes the invertibility of the linearized dynamics in the sup-norm. Section 6 shows the invertibility of the linearized differential inclusion, while Section 7 establishes Lipschitz stability of the inverse. This Lipschitz stability property is the basis of Theorem 1.1. Numerical examples are studied in Section 8.

Notation. We work with both the time interval $[0,1]$ of the control problem and the interval $[-1,1]$ associated with each transformed mesh interval. Let $\mathcal{P}_N$ denote the space of polynomials of degree at most $N$, and we also use the subspace of polynomials in $\mathcal{P}_N$ that vanish at the endpoints of the interval. The meaning of the norm $\|\cdot\|_\infty$ is based on context. If $x$ is a continuous function, then $\|x\|_\infty$ denotes the maximum of $|x(t)|$ over its domain, where $|\cdot|$ is the Euclidean norm. For a vector $v$, $\|v\|_\infty$ is the maximum of the absolute values of its components. If $A$ is a matrix, then $\|A\|_\infty$ is the largest absolute row sum (the matrix norm induced by the $\infty$ vector norm), and $\|A\|_2$ denotes the matrix norm induced by the Euclidean vector norm. Throughout the paper, the index $k$ is used for the mesh interval, while the indices $i$ and $j$ are associated with collocation points. For a function $x$ defined on $[-1,1]$, the corresponding boldface vector refers to the vector with components $x(\tau_i)$, $1 \le i \le N$. The dimension of the identity matrix $I$ is often clear from context; when necessary, the dimension of $I$ is specified by a subscript. For example, $I_n$ is the $n$ by $n$ identity matrix. The gradient is denoted $\nabla$, while $\nabla^2$ denotes the Hessian; subscripts indicate the differentiation variables. Throughout the paper, $c$ is a generic constant which has different values in different equations; the value of $c$ is always independent of $h$, $K$, and $N$. The vector $\mathbf{1}$ has all entries equal to one, while the vector $\mathbf{0}$ has all entries equal to zero; again, their dimension should be clear from context. If $D$ is the differentiation matrix introduced in (1.11), then $D_j$ is the $j$-th column of $D$ and $D_{i:j}$ is the submatrix formed by columns $i$ through $j$. We let $\otimes$ denote the Kronecker product; if $A$ is $m$ by $n$ and $B$ is $p$ by $q$, then $A \otimes B$ is the $mp$ by $nq$ block matrix whose $(i,j)$ block is $a_{ij}B$. We let $L^2$ denote the usual space of square integrable functions, while $H^p$ is the Sobolev space consisting of functions with square integrable derivatives through order $p$. The seminorm in $H^p$, corresponding to the $L^2$ norm of the $p$-th order derivatives, is denoted $|\cdot|_{H^p}$. We also use the subspace of $H^p$ corresponding to functions that vanish at the endpoints of the interval.

2 Abstract setting

Given a feasible point for the discrete problem (1.4), define $X^k_i = x^k(\tau_i)$ for $0 \le i \le N$ and $1 \le k \le K$, and let $U^k_i$ denote the associated discrete controls. As noted earlier, $D$ is a differentiation matrix in the sense that

$$\dot{x}^k(\tau_i) = \sum_{j=0}^{N} D_{ij} X^k_j, \qquad 1 \le i \le N.$$

Hence, the discrete problem (1.4) can be reformulated as

$$\begin{array}{cl}
\mbox{minimize} & C(X^K_N)\\[2pt]
\mbox{subject to} & \displaystyle\sum_{j=0}^{N} D_{ij} X^k_j = \dfrac{h}{2}\, f(X^k_i, U^k_i), \quad 1 \le i \le N, \quad 1 \le k \le K,\\[8pt]
& X^k_0 = X^{k-1}_N, \quad U^k_i \in \mathcal{U}, \quad 1 \le i \le N, \quad 1 \le k \le K,
\end{array} \qquad (2.1)$$

where $X^0_N = x_0$, the starting condition.
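As a schematic illustration of how the constraints in (2.1) might be assembled in code (our sketch; the array layout and function names here are hypothetical and not from the paper or any particular software), the collocated dynamics and continuity conditions can be evaluated as one residual vector:

```python
import numpy as np

def collocation_residuals(X, U, D, h, f, x0):
    """Residuals of the constraints in (2.1) for one candidate point (sketch).

    X  : (K, N+1, n) array, X[k, j] plays the role of X_j^{k+1} (value at tau_j)
    U  : (K, N, m)   array, U[k, i-1] plays the role of U_i^{k+1}
    D  : (N, N+1) Radau differentiation matrix, h : mesh width
    f  : dynamics, mapping (state, control) -> R^n, x0 : initial state
    """
    K, Np1, _ = X.shape
    N = Np1 - 1
    res = []
    for k in range(K):
        F = np.array([f(X[k, i], U[k, i - 1]) for i in range(1, N + 1)])
        res.append(D @ X[k] - 0.5 * h * F)          # collocated dynamics
        left = x0 if k == 0 else X[k - 1, N]
        res.append(X[k, 0] - left)                  # continuity / initial condition
    return np.concatenate([r.ravel() for r in res])
```

A nonlinear programming solver would then drive this residual to zero while minimizing $C(X^K_N)$ and enforcing $U^k_i \in \mathcal{U}$.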

We introduce multipliers associated with the constraints in (2.1) and write the Lagrangian as

The first-order optimality conditions for (2.1) lead to the following relations (we show the variable with which we differentiate followed by the associated condition):

(2.2)–(2.6)

We first relate the KKT multipliers to the polynomials satisfying (1.7)–(1.9).

Proposition 2.1

The multipliers satisfy (2.2)–(2.6) if and only if the costate polynomials $\lambda^k$, $1 \le k \le K$, constructed from them satisfy (1.7)–(1.9). Moreover, the multiplier associated with the continuity condition (1.3) is the costate value at the corresponding mesh point.

Proof. We start with multipliers satisfying (2.2)–(2.6). From these multipliers we define, on each mesh interval, an associated costate polynomial and its values at the collocation points. In terms of these quantities and the differentiation matrix, the equations (2.3), (2.5), and (2.6) become

(2.7)–(2.9)

Since the polynomial that is identically equal to one has derivative zero and since $D$ is a differentiation matrix, we have $D\mathbf{1} = \mathbf{0}$, which implies that $D_0 = -(D_1 + \cdots + D_N)$, where $D_j$ is the $j$-th column of $D$. Hence, the first definition in (2.2) can be written

(2.10)–(2.11)

where the last equality follows from the preceding relations.

As noted in (1.13),

(2.12)–(2.13)

This substitution yields

(2.14)

Since the integrands are polynomials of degree at most $2N-2$ and $N$-point Radau quadrature is exact for these polynomials, we have

(2.15)

Combine (2.14) and (2.15) to obtain

(2.16)

Let be the polynomial that satisfies