Solving Nonlinear Optimal Control Problems With State and Control Delays by Shooting Methods Combined with Numerical Continuation on the Delays

Riccardo Bonalli Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France and Onera - The French Aerospace Lab, F - 91761 Palaiseau, France (riccardo.bonalli@onera.fr)    Bruno Hérissé Onera - The French Aerospace Lab, F - 91761 Palaiseau, France (bruno.herisse@onera.fr).    Emmanuel Trélat Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France (emmanuel.trelat@upmc.fr, https://www.ljll.math.upmc.fr/trelat/).
Abstract

In this paper we introduce a new procedure to solve nonlinear optimal control problems with delays, which exploits indirect methods combined with numerical homotopy procedures. It is known that solving this kind of problem via indirect methods (which arise from the Pontryagin Maximum Principle) is complex and computationally demanding, because their implementation faces two main difficulties: the extremal equations involve forward and backward terms, and, besides, the related shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, delays are introduced by numerical continuation. This creates a sequence of optimal delayed solutions that converges to the desired solution. We establish a convergence theorem ensuring the continuous dependence w.r.t. the delay of the optimal state, of the optimal control (in a weak sense) and of the corresponding adjoint vector. The convergence of the adjoint vector is the most challenging step to prove and is crucial for the well-posedness of the proposed homotopy procedure. Two numerical examples are proposed and analyzed to show the efficiency of this approach.

Key words. Optimal control, time-delayed systems, indirect methods, shooting methods, numerical homotopy methods, numerical continuation methods.

AMS subject classifications. 49J15, 65H20.

1 Introduction

1.1 Delayed Optimal Control Problems

Let , be positive integers, a positive real number, a measurable subset and define an initial state function and an initial control function . For every and every positive final time , consider the following nonlinear control system on with constant delays

(1)

where and , are of class (at least) w.r.t. their second and third variables. Control systems (LABEL:dynDelay) play an important role in describing many phenomena in physics, biology and economics (see, e.g., [1]).

Let be a subset of . Assume that is reachable from for the control system (LABEL:dynDelay), that is, for every , there exists a final time and a control , such that the trajectory , solution of (LABEL:dynDelay) on , satisfies . Such a control is called admissible and we denote by the set of all admissible controls of (LABEL:dynDelay) defined on taking their values in while denotes the set of all admissible controls of (LABEL:dynDelay) defined on taking their values in . Then, and .

Given a couple of delays, we consider the Optimal Control Problem with Delays (OCP) consisting in steering the control system (LABEL:dynDelay) to , while minimizing the cost function

(2)

where and , are of class (at least) w.r.t. their second and third variables. The final time may be fixed or not.
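For orientation, a standard way of writing such a delayed control system and cost, under assumed notation (state x, control u, delay pair τ = (τ¹, τ²) acting respectively on the state and on the control, dynamics f, running cost f⁰, final time t_f, and prescribed initial functions x⁰, u⁰ on [−Δ, 0]), is the following sketch; it is given only as a point of reference and is not a verbatim restatement of (1)-(2):

    \dot{x}(t) = f\bigl(t, x(t), x(t-\tau^1), u(t), u(t-\tau^2)\bigr),   t \in [0, t_f],
    x(t) = x^0(t)  for  t \in [-\Delta, 0],    u(t) = u^0(t)  for  t \in [-\Delta, 0),
    C_\tau(t_f, u) = \int_0^{t_f} f^0\bigl(t, x(t), x(t-\tau^1), u(t), u(t-\tau^2)\bigr)\, dt .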

The literature abounds with numerical methods for solving (OCP). Most of them rely on direct methods, which basically consist in discretizing all the variables concerned and in reducing (OCP) to a finite-dimensional problem. The works [2, 3, 4, 5, 6] develop several numerical techniques to convert the optimal control problem with delays into nonlinear constrained optimization problems. On the other hand, [7, 8, 9, 10, 11] propose different approaches that approximate the solution of (OCP) by truncated orthogonal series and reduce the optimal control problem with delays to a system of algebraic equations. Other contributions (see, e.g., [12]) propose an approximating sequence of non-delayed optimal control problems whose solutions converge to the optimal solution of (OCP). However, the dimension of the problems induced by these approaches grows as the discretization is refined.

Some applications, many of which arise in aerospace engineering, such as atmospheric reentry and satellite launching (see, e.g., [13]), require high accuracy, which can be reached by indirect methods. Moreover, the computational load of indirect methods remains small compared to that required to obtain an accurate solution with direct approaches. It is therefore of interest to solve (OCP) efficiently via these procedures.

1.2 Indirect Methods Applied to (Ocp)

The core of indirect methods consists in solving, by means of Newton-like algorithms, the two-point or multi-point boundary value problem arising from the necessary optimality conditions given by the Pontryagin Maximum Principle (PMP) [14].

The paper [15] was the first to provide a maximum principle for optimal control problems with a constant state delay, while [16] obtains the same conditions by a simple substitution-like method. In [17] a similar result is achieved for control problems with pure control delays. In [18, 19], necessary conditions are obtained for optimal control problems with multiple constant delays in the state and control variables. Moreover, [20] derives a maximum principle for control systems with a time-dependent delay in the state variable. Finally, [21] provides necessary conditions for optimal control problems with multiple constant delays and mixed control-state constraints.

The advantages of indirect methods, whose most basic version is known as the shooting method, are their very good numerical accuracy and the fact that, when they converge, convergence is very fast. Indeed, since they rely on the Newton method, they inherit its convergence properties. Nevertheless, their main drawback is their difficult initialization (see for example [13]). This becomes apparent as soon as the necessary optimality conditions for (OCP) are written down.

It is known that (see, e.g. [14, 22, 21]), if , is an optimal solution of (OCP) with optimal final time , there exist a nonpositive scalar and an absolutely continuous mapping called adjoint vector, with , such that the so-called extremal satisfies

(3)

where is the Hamiltonian, and the maximization condition

holds almost everywhere on . Moreover, if the final time is free and, without loss of generality, one supposes that and are points of continuity of ,

(5)

(see [22] for a more general condition using the concept of Lebesgue approximate continuity). The extremal is said to be normal whenever , and in that case it is usual to normalize the adjoint vector so that ; otherwise it is said to be abnormal.
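To fix ideas, with the assumed notation introduced above and writing H(t, x, y, u, v, p, p⁰) = ⟨p, f(t, x, y, u, v)⟩ + p⁰ f⁰(t, x, y, u, v) for the Hamiltonian, the adjoint equation and the maximization condition of the delayed maximum principle found in the literature cited above take a form of the following type (the indicator functions generate the forward, i.e. advanced, terms). This is a sketch under assumed notation, not a verbatim restatement of (LABEL:dynDual)-(LABEL:maxCond):

    \dot{p}(t) = -\frac{\partial H}{\partial x}(t) - \mathbf{1}_{[0, t_f-\tau^1]}(t)\, \frac{\partial H}{\partial y}(t+\tau^1),
    u(t) \in \arg\max_{w} \Bigl[ H\bigl(t, x(t), x(t-\tau^1), w, u(t-\tau^2), p(t), p^0\bigr)
        + \mathbf{1}_{[0, t_f-\tau^2]}(t)\, H\bigl(t+\tau^2, x(t+\tau^2), x(t+\tau^2-\tau^1), u(t+\tau^2), w, p(t+\tau^2), p^0\bigr) \Bigr].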

Assuming that is known as a function of and (by the maximization condition (LABEL:maxCond)), each iteration of a shooting method consists in solving the coupled dynamics (LABEL:dynDual), where a value of is provided. This means that one has to solve a Differential-Difference Boundary Value Problem (DDBVP) in which both forward and backward time terms appear within mixed-type differential equations. The difficulty then lies in the lack of global information, which prevents a purely local integration by the usual iterative methods for ODEs. Some techniques to solve mixed-type differential equations have been developed. For example, [23] proposes an analytical decomposition of the solutions as sums of forward solutions and backward solutions, while [24] provides a numerical solution scheme. However, these approaches either treat only linear cases or require the inversion of matrices whose dimension grows as the numerical accuracy is increased.
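One simple workaround, in the spirit of what is exploited later in this paper, is to iterate forward and backward sweeps, each sweep reading the advanced and retarded terms from the trajectories stored at the previous sweep. The sketch below is purely illustrative: it uses an assumed scalar problem (minimize the integral of x² + u² subject to dx/dt = x(t − τ) + u, x ≡ 1 on [−τ, 0], fixed final time), explicit Euler steps and a deliberately short horizon so that the fixed-point iteration contracts; it is not the scheme of the paper.

    import numpy as np

    # Assumed illustrative problem (not from the paper):
    #   minimize  int_0^T (x^2 + u^2) dt   s.t.   dx/dt = x(t - tau) + u(t),  x = 1 on [-tau, 0].
    # Normal extremal (p0 = -1):  u = p/2,
    #   dp/dt = 2 x(t) - p(t + tau) on [0, T - tau],  dp/dt = 2 x(t) on (T - tau, T],  p(T) = 0.
    T, tau, N = 0.5, 0.1, 200
    h = T / N
    d = int(round(tau / h))                  # delay expressed in grid steps
    x = np.ones(N + 1)                       # state iterate
    p = np.zeros(N + 1)                      # adjoint iterate

    for sweep in range(500):
        x_old, p_old = x.copy(), p.copy()
        # forward pass: state, with the control u = p/2 taken from the previous sweep
        x[0] = 1.0
        for k in range(N):
            x_delayed = 1.0 if k < d else x[k - d]
            x[k + 1] = x[k] + h * (x_delayed + 0.5 * p_old[k])
        # backward pass: adjoint, with the advanced term read from the previous sweep
        p[N] = 0.0
        for k in range(N, 0, -1):
            p_advanced = p_old[k + d] if k + d <= N else 0.0
            p[k - 1] = p[k] - h * (2.0 * x[k] - p_advanced)
        if max(np.abs(x - x_old).max(), np.abs(p - p_old).max()) < 1e-9:
            break

    print("sweeps:", sweep + 1, " x(T):", x[-1], " p(0):", p[0])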

In order to initialize a shooting method for (LABEL:dynDual) correctly, a guess of the final value of the adjoint vector is not sufficient; rather, a good numerical guess of the whole adjoint function must be provided for the procedure to converge. This is an additional difficulty with respect to the usual shooting method, and it requires a global discretization of (LABEL:dynDual).

This topic seems to have received little attention in the literature. The paper [25] proposes a collocation method to solve the DDBVP arising from (LABEL:dynDual), which proves successful on several optimal control problems with delays. However, as a consequence of the collocation approach, the degree of the interpolating polynomials grows quickly for hard problems. Moreover, the points where the solution of (LABEL:dynDual) has a discontinuous derivative must be precomputed to make the whole approach feasible, which increases the amount of numerical computation.

1.3 Numerical Homotopy Approach

The basic idea of homotopy methods is to solve a difficult problem step by step, starting from a simpler problem, by parameter deformation. The theory and practice of continuation methods are well known (see, e.g., [26]). Combined with the shooting problem derived from the PMP, a homotopy method consists in deforming the problem into a simpler one (that can be easily solved) and then in solving a series of shooting problems step by step to come back to the original problem. The main difficulty of homotopy methods lies in the choice of a sufficiently regular deformation that ensures the convergence of the method. The starting problem should be easy to solve, and the path between this starting problem and the original problem should be easy to parametrize. This path is parametrized by a parameter and, when the homotopic parameter is a real number and the path is linear in it, the homotopy method is rather called a continuation method.
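As a minimal, self-contained illustration of the continuation principle on a toy problem unrelated to (OCP) (a hypothetical scalar equation with a hand-rolled Newton solver), the sketch below follows the root of F(z, λ) = z³ + z − 3λ from the trivial solution at λ = 0 up to λ = 1, each Newton solve being warm-started with the solution of the previous step:

    def F(z, lam):
        return z**3 + z - 3.0 * lam

    def dF(z):
        return 3.0 * z**2 + 1.0

    def newton(z, lam, tol=1e-12, itmax=50):
        # Newton iterations on z -> F(z, lam), started from the supplied guess
        for _ in range(itmax):
            step = F(z, lam) / dF(z)
            z -= step
            if abs(step) < tol:
                return z
        raise RuntimeError("Newton did not converge")

    z, lam, dlam = 0.0, 0.0, 0.1          # starting point: F(0, 0) = 0
    while lam < 1.0:
        lam = min(lam + dlam, 1.0)
        z = newton(z, lam)                # previous solution used as the initial guess
    print(lam, z)                         # z(1) solves z^3 + z = 3, i.e. z ~ 1.213

In Algorithm 1 below, the role of the homotopy parameter is played by the delay and the role of F by the shooting function.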

Consider the Optimal Control Problem without Delays (OCP), which consists in steering the control system

(6)

while minimizing the cost function

(7)

In many situations, exploiting the non-delayed version of the PMP combined with other techniques (such as geometric control, dynamical systems theory applied to mission design, etc.; we refer the reader to [13] for a survey of these procedures), one is able to initialize a shooting method on (OCP) efficiently. Thus, it is legitimate to wonder whether one may solve (OCP) by indirect methods by starting a homotopy method in which represents the deformation parameter and (OCP) is taken as the starting problem. This approach is a way to address the flaw of indirect methods applied to (OCP): on the one hand, the global adjoint vector of (OCP) could be used to initialize efficiently a shooting method on (LABEL:dynDual) (for small enough) and, on the other hand, we could solve (LABEL:dynDual) via usual iterative methods for ODEs (for example, by using the global state solution at the previous iteration).

However, one should be careful when using homotopy methods. As pointed out previously, the existence of a sufficiently regular deformation curve of the delays, along which the method converges, must be ensured. In [13], it was proved that, in the case of unconstrained optimal control problems without delays where , the existence of a parameter deformation curve is equivalent to requiring that neither abnormal minimizers nor conjugate points occur along the homotopy path. Similar assumptions must be made in order to apply this procedure successfully to solve (OCP) by indirect methods.

1.4 Main Contribution and Paper Structure

The idea proposed in this paper consists in introducing a general method that makes it possible to solve (OCP) successfully using indirect methods combined with homotopy procedures, with the delay as deformation parameter, starting from the solution of its non-delayed version (OCP).

The main contribution of the paper is a convergence theorem that ensures the continuous dependence w.r.t. the delay of the optimal state, of the optimal control (in a weak sense) and of the corresponding adjoint vector of (OCP). This makes it possible to reach the optimal solution of (OCP) starting from the optimal solution of (OCP) iteratively, by travelling along a sequence of delays converging to . The most challenging and most important nontrivial conclusion is the continuous dependence of the adjoint vectors of (OCP) w.r.t. . This last fact is crucial because it allows indirect methods to solve (OCP) by starting a homotopy method on the delay with (OCP) as initial problem.

The article is structured as follows. Section 2 presents the assumptions and the statement of the convergence theorem; moreover, a practical algorithm to solve (OCP) by homotopy is provided. In Section 3 the efficiency of this approach is illustrated by testing the proposed algorithm on two examples. Finally, in the appendices, the technical details of the proof of the main result are provided.

2 Convergence Theorem for (Ocp)

In the proposed convergence result, it is crucial to distinguish the case in which a delay in the control variable appears from the case involving only pure state delays. The context of control delays turns out to be more complex, especially in proving the existence of optimal controls for (OCP). Indeed, a standard approach to prove existence would rely on the usual Filippov assumptions (as in the classical reference [27]), which, in the case of control delays, must be extended. In particular, using Guinn's reduction (see, e.g., [16]), the control system with delays turns out to be equivalent to a non-delayed system with a larger number of variables, depending on the value of . Such an extension was used in [28]. However, it is not difficult to see that the usual assumption concerning the convexity of the epigraph of the extended dynamics is not sufficient to prove Lemma 2.1 in [28]. More details are provided in Remark LABEL:remarkGuinn, in Section LABEL:existenceSect.

2.1 Main Result

We make the following assumptions.

Common assumptions:

  1. is a compact and convex subset of and is a compact subset of ;

  2. (OCP) has a unique solution defined on a neighborhood of ;

  3. The optimal trajectory has a unique extremal lift (up to a multiplicative scalar) defined on , which is normal, denoted , solution of the Maximum Principle;

  4. There exists a positive real number such that, for every and every , denoting the related trajectory arising from the control system (LABEL:dynDelay) with final time , we have

In case of pure state delays:

  1. For every delay , every optimal control of (OCP) is continuous;

  2. The sets

    are convex for every and every , where .

In case of delays both in state and control variables:

  1. For every delay , every optimal control of (OCP) takes its values at extremal points of . Moreover, the optimal final time and are points of continuity of ;

  2. The vector field and the cost function are locally Lipschitz w.r.t. i.e., for every there exist a neighborhood of and a continuous function , such that

    for every (the same statement holds for );

  3. The sets

    are convex for every and every .

Some remarks on these assumptions are in order.

First of all, assumptions and on the uniqueness of the solution of (OCP) and on the uniqueness of its extremal lift are related to the differentiability properties of the value function (see, e.g., [29, 30, 31]). They are standard in optimization and are made only to keep the statement simple (see Theorem LABEL:theoMain). These assumptions can be weakened as follows. If we replace and with the assumption "every extremal lift of every solution of (OCP) is normal", then the conclusion provided in Theorem LABEL:theoMain hereafter still holds, except that the convergence properties must be written in terms of closure points. The proof of this fact follows the same guidelines as the proof of Theorem LABEL:theoMain and we do not report the details.

Assumptions and play a complementary role in proving the convergence property for the adjoint vectors. Moreover, Assumption also becomes crucial to ensure the convergence of optimal controls and trajectories when considering delays in both the state and control variables. Without this assumption of nonsingular controls, proving these last convergences becomes a hard task. The issue is related to the following fact: let , be Banach spaces and a continuous map, and suppose that is a sequence such that and for some . Then, in general, we cannot ensure that . A way to overcome this flaw is to ensure the equivalence between weak convergence and strong convergence under some additional assumptions, and, in our main result, this is achieved thanks to (see, e.g., [32]).
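A classical example of this phenomenon, given here for illustration with explicit (assumed) symbols: take the continuous map G(u) = \|u\|_{L^2(0,2\pi)}^2 and the sequence u_k(t) = \sin(kt). By the Riemann-Lebesgue lemma, u_k converges weakly to 0 in L^2(0,2\pi), yet

    G(u_k) = \int_0^{2\pi} \sin^2(kt)\, dt = \pi   for every k \geq 1,

so G(u_k) does not converge to G(0) = 0: continuity of G alone does not allow passing to the limit along weakly converging sequences.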

Theorem 2.1.

Assume that assumptions , , and hold.

Consider first the context of pure state delays, i.e., problems (OCP) such that and , and assume that assumptions and hold. Then, there exists such that, for every , (OCP) has at least one solution whose arc is defined on , every extremal lift of which is normal. Let be such a normal extremal lift. Then, up to continuous extensions on , as tends to 0,

  • converges to ;

  • converges uniformly to ;

  • converges uniformly to ;

  • converges to in for the weak star topology.

If the final time of (OCP) is fixed, then for every .

On the other hand, consider general problems (OCP) with delays in both state and control variables. If one assumes that, for every , if (OCP) is controllable then it admits an optimal solution, then, under assumptions , and , there exists such that, for every , the same conclusions on the convergences given above hold and, in addition, as tends to 0, converges to almost everywhere in . Moreover, if dynamics and cost are affine w.r.t. in the two control variables, the existence of an optimal solution for every is ensured.

Finally, for every , by extending to the delay all the previous assumptions, we have that the optimal solutions of (OCP) (or in the case of pure state delays) and their related adjoint vectors are continuous w.r.t. at for the above topologies.

The proof of Theorem LABEL:theoMain is technical and lengthy. We report it in Appendix LABEL:appProof.

The last statement of Theorem LABEL:theoMain (the continuous dependence w.r.t. , for every ) is the most general conclusion achieved and extends the first part of the theorem. The proof of this generalization follows the same guidelines as the proof of continuity at and we do not report the details.

We stress that the continuous dependence w.r.t. of the adjoint vectors of (OCP) is the most challenging and the most important result achieved by Theorem LABEL:theoMain. It is the essential step that allows the proposed homotopy method to converge robustly for every small enough couple of delays . The proof of this fact is not easy: an accurate analysis of the convergence of the Pontryagin cones in the delayed version of the PMP is required.

Remark 2.2.

Theorem LABEL:theoMain can be extended to obtain stronger convergence conclusions, by using weaker assumptions, in the particular case of dynamics that are affine in the two control variables, and costs of type

where are constants. Indeed, considering assumptions - and either or , the convergence properties established in Theorem LABEL:theoMain for and still hold and, moreover, converges to in for the weak topology, as tends to 0. The proof of this fact follows easily by adapting the scheme in Appendix LABEL:appProof. For the sake of brevity, we do not give these technical details.

2.2 The Related Algorithm and Its Convergence

Exploiting the statement of Theorem LABEL:theoMain, we may conceive a general algorithm, based on indirect methods, capable of solving (OCP) by applying homotopy procedures on parameter , starting from the solution of its non-delayed version (OCP).

As explained in the previous sections, the critical issue arising from this approach is the integration of the mixed-type equations coming from System (LABEL:dynDual). The previous convergence result suggests that we may solve (LABEL:dynDual) via usual iterative methods for ODEs, for example, by using the global state solution from the previous iteration. Moreover, the global adjoint vector of (OCP) can be used to initialize the whole shooting from the beginning.

  Set , , and fix an integer . Solve (OCP) by indirect methods and denote its numerical extremal solution.
  while  and  do
     A.1) Compute by predictor-corrector methods and steplength adaptation;
     A.2) Solve recursively (OCP) by indirect methods initialized by i.e., fixing , an integer , a tolerance and setting , , do
     while , and  do
        B.1) Guess from (via Newton-like algorithms);
        B.2) Using the PMP, express as function of ;
        B.3) Solve
Then, with as solution of the previous system, solve
        B.4) and .
     end while
     A.3) and .
  end while
Algorithm 1 - Indirect Numerical Homotopy Algorithm for solving (OCP)

These considerations lead us to Algorithm LABEL:alg. To prove the convergence of Algorithm LABEL:alg we apply Theorem LABEL:theoMain. We focus on the case of general state and control delays, highlighting that the same conclusion holds for problems with pure state delays provided that optimal controls can be expressed as continuous functions of the state and the adjoint vector (by using the maximality condition on the Hamiltonian).

Suppose that assumptions , , , , , and hold and that the delay considered is such that . Then, we know that for every in the open ball , (OCP) has at least one optimal solution with a normal extremal lift. The first consequence is that, referring to Algorithm LABEL:alg, we can put for all integers , . Thanks to Theorem LABEL:theoMain, , and as soon as . Then, the indirect method inside Algorithm LABEL:alg turns out to be well defined and well initialized by the adjoint vector of (OCP). Indeed, the algorithm necessarily travels backward along one of the subsequences converging to the solution of (OCP). Since, for every sequence converging to the prescribed delay, the related extremal lifts of (OCP) converge to the one of (OCP) (for the obvious topologies), every homotopy method on the delay leads to the same optimal solution of (OCP).
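For concreteness, a high-level sketch of this continuation-on-the-delay strategy is given below. All helper names (solve_undelayed, shooting_residual, integrate_extremal_system) and the attribute shooting_unknowns are hypothetical placeholders, not part of any actual implementation: the residual is understood to be evaluated by integrating the delayed extremal equations with an ordinary ODE scheme, reading the advanced and retarded terms from the trajectories stored at the previous continuation step, as in steps B.1)-B.4) of Algorithm 1.

    from scipy.optimize import root

    def solve_ocp_with_delay(tau_target, n_steps=10):
        # Hypothetical sketch of Algorithm 1: linear continuation on the delay.
        sol = solve_undelayed()                      # extremal of the non-delayed problem (e.g. AMPL/IPOPT + shooting)
        for k in range(1, n_steps + 1):
            tau = (k / n_steps) * tau_target         # current delay on the continuation path
            guess = sol.shooting_unknowns            # warm start: adjoint data of the previous step
            res = root(lambda z: shooting_residual(z, tau, sol), guess, method="hybr", tol=1e-10)
            if not res.success:
                raise RuntimeError("shooting failed: refine the continuation step")
            sol = integrate_extremal_system(res.x, tau, sol)   # store state/adjoint for the next step
        return sol

The method "hybr" of scipy.optimize.root wraps the same MINPACK hybrd routine used in Section 3 to solve the shooting problems.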

Remark 2.3.

It is interesting to remark that, at least formally, there is no difficulty in applying Algorithm LABEL:alg to more general problems (OCP) with locally bounded varying delays that are functions of the time and the state, i.e., . In this context, relations close to (LABEL:dynDual)-(LABEL:freeT) are still available (see, e.g., [33]), so that the proposed numerical continuation scheme remains well defined.

3 Numerical Examples

In order to demonstrate the effectiveness and robustness of our approach, we test it on two examples. Following standard practice in the analysis of numerical approaches for optimal control problems with delays, we follow the guidelines provided by [6]. The first test is an academic example, while the second considers the nontrivial problem of a continuous nonlinear two-stage stirred tank reactor (CSTR) system, proposed by [34] and [35].

We stress that, in this paper, we are interested in solving an optimal control problem with delays (OCP) starting from its non-delayed version (OCP), without addressing how (OCP) is solved. Even though we are aware that this task is far from easy, here we focus our attention on the performance achieved once the solution of (OCP) is known. However, as suggested in Section LABEL:secIntro, in many situations one is able to initialize a shooting method on (OCP) correctly (see [13]).

3.1 Setting Preliminaries

The proposed numerical examples are solved by applying Algorithm LABEL:alg verbatim. Good solutions are obtained using a basic linear continuation on . Moreover, an explicit second-order Runge-Kutta method is used to solve all the ODEs coming from the dual formulation, while the routine hybrd [36] is used to solve the shooting problem. The procedure is initialized using the solution of (OCP) provided by the optimization software AMPL [37] combined with the interior point solver IPOPT [38].

We stress that one has to be careful when passing the numerical approximation of the extremals at step B.3) of the previous algorithm. Indeed, it is known that, when using collocation methods such as Runge-Kutta schemes, the error between the solution and its numerical approximation remains bounded and decreases with , where is the time step and is the order of the method, only if this numerical approximation is obtained by interpolating the numerical values within each subinterval of integration with a polynomial of order . From this remark, it follows that the dimension of the shooting problem considered in Algorithm LABEL:alg not only increases with but is also proportional to . In the particular case of an explicit second-order Runge-Kutta method, the dimension of the shootings is bounded above by (where is the dimension of the state).
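The interpolation requirement can be illustrated by the following sketch (illustrative only, with assumed scalar dynamics): an explicit second-order Runge-Kutta (Heun) step is stored together with a quadratic interpolant built on every subinterval from the left value, the left slope and the right value, so that delayed terms such as x(t − τ) can be evaluated at off-grid times without degrading the second-order accuracy of the scheme.

    import numpy as np

    def heun_with_dense_output(f, x0, t0, t1, n):
        # Explicit 2nd-order Runge-Kutta (Heun) with a quadratic interpolant per step.
        t = np.linspace(t0, t1, n + 1)
        h = t[1] - t[0]
        x = np.empty(n + 1); x[0] = x0
        coeffs = []                               # (a, b, c): x(t_k + s) ~ a + b*s + c*s^2
        for k in range(n):
            k1 = f(t[k], x[k])
            k2 = f(t[k] + h, x[k] + h * k1)
            x[k + 1] = x[k] + 0.5 * h * (k1 + k2)
            a, b = x[k], k1
            c = (x[k + 1] - a - b * h) / h**2
            coeffs.append((a, b, c))
        def eval_at(s):
            # piecewise-quadratic evaluation, used e.g. for the retarded term x(s - tau)
            k = min(int((s - t0) / h), n - 1)
            a, b, c = coeffs[k]
            d = s - t[k]
            return a + b * d + c * d**2
        return t, x, eval_at

    # example: dx/dt = -x, x(0) = 1; compare the interpolant with exp(-t) at an off-grid time
    t, x, interp = heun_with_dense_output(lambda t, x: -x, 1.0, 0.0, 1.0, 50)
    print(interp(0.377), np.exp(-0.377))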

The numerical computations are performed on an Intel(R) Xeon(R) CPU E5-1607 v2 machine at 3.00 GHz with 8 GB of RAM.

3.2 Analytical Example

Consider the Optimal Control Problem with Delays (OCP) which consists in minimizing the cost subject to

Since no terminal conditions are imposed, this particular (OCP) can only have normal extremals. Then, the Hamiltonian is and the adjoint equation is , with . Finally, we infer from the maximization condition (LABEL:maxCond) that the optimal controls are given by . The paper [6] shows that the optimal synthesis of (OCP) can be obtained analytically. In particular, one has

(8)

Considering Remark LABEL:remarkExample, we apply Algorithm LABEL:alg to solve (OCP) with Runge-Kutta time steps, a tolerance of and maximal iterations for the hybrd routine. Using Simpson's rule, the optimal value is obtained in just one iteration of the continuation scheme. Moreover, global errors in the sup norm between (LABEL:contEx1) and their numerical approximations, respectively of for the control and of for the state, are obtained. Figure LABEL:figEx1 shows the optimal quantities for (OCP), its non-delayed version and an intermediate solution when .

Figure 1: a) Optimal states , b) Optimal controls , c) Optimal adjoints for (OCP).

3.3 A Nonlinear Chemical Tank Reactor Model

Let us consider a two-stage nonlinear continuous stirred tank reactor (CSTR) system with a first-order irreversible chemical reaction occurring in each tank. The system was studied by [35] and subsequently by [34] in the framework of dynamic programming. This Optimal Control Problem with Delays (OCP) consists in minimizing the cost subject to

where, now, we have a fixed scalar delay which is chosen in the interval and acts on the state only, the final time is fixed and the functions , are given by , . Since no terminal conditions are imposed, (OCP) has only normal extremals. The Hamiltonian is