
# Galerkin approximations for the optimal control of nonlinear delay differential equations

Mickaël D. Chekroun, Department of Atmospheric & Oceanic Sciences, University of California, Los Angeles, CA 90095-1565, USA

Axel Kröner, Institut für Mathematik, Humboldt-Universität Berlin, 10099 Berlin, Germany; INRIA and CMAP, École Polytechnique, CNRS, Université Paris Saclay, 91128 Palaiseau, France

Honghu Liu, Department of Mathematics, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061, USA
###### Abstract.

Optimal control problems of nonlinear delay differential equations (DDEs) are considered, for which we propose a general Galerkin approximation scheme built from Koornwinder polynomials. Error estimates for the resulting Galerkin-Koornwinder approximations to the optimal control and the value function are derived for a broad class of cost functionals and nonlinear DDEs. The approach is illustrated on a delayed logistic equation set not far away from its Hopf bifurcation point in parameter space. In this case, we show that low-dimensional controls for a standard quadratic cost functional can be efficiently computed from Galerkin-Koornwinder approximations, reducing at a nearly optimal cost the oscillation amplitude displayed by the DDE's solution. Optimal controls computed from Pontryagin's maximum principle (PMP) and from the Hamilton-Jacobi-Bellman (HJB) equation associated with the corresponding ODE systems are shown to provide numerical solutions in good agreement. It is finally argued that the value function computed from the corresponding reduced HJB equation provides a good approximation of that obtained from the full HJB equation.

###### Key words and phrases:
Delay differential equations, Galerkin approximations, Hamilton-Jacobi-Bellman equation, Hopf bifurcation, Koornwinder polynomials, Optimal control

## 1. Introduction

Delay differential equations (DDEs) are widely used in many fields such as the biosciences [20, 41, 54, 45], climate dynamics [15, 24, 40, 56, 58], and chemistry and engineering [26, 43, 46, 55, 38]. The inclusion of time-lag terms is aimed at accounting for delayed responses of the modeled systems to internal or external factors. Examples of such factors include the incubation period of infectious diseases [41], wave propagation [10, 56], or time lags arising in engineering [38], to name a few.

In contrast to ordinary differential equations (ODEs), the phase space associated with even a scalar DDE is infinite-dimensional: the presence of delay terms implies that a function, namely the initial history, must be provided as initial datum to solve the Cauchy problem [18, 26]; cf. Section 2. The infinite-dimensional nature of the state space renders the related optimal control problems more challenging to solve than in the ODE case. It is thus natural to seek low-dimensional approximations to alleviate the inherent computational burden. The need for such approximations is particularly relevant when (nearly) optimal controls in feedback form are sought, in order to avoid solving the infinite-dimensional Hamilton-Jacobi-Bellman (HJB) equation associated with the optimal control problem of the original DDE.

For the optimal control of linear DDEs, averaging methods as well as spline approximations are often used for the design of (sub)optimal controls in either open-loop or feedback form; see e.g. [1, 4, 34, 42] and references therein. The use of spline approximations in the case of open-loop control for nonlinear DDEs has also been considered [44], but not in a systematic way. Furthermore, due to the locality of the underlying basis functions, the use of e.g. spline approximations for the design of feedback controls leads, especially in the nonlinear case, to surrogate HJB equations of too high a dimension to be of practical interest.

In this article, we bring together a recent approach dealing with the Galerkin approximations for optimal control problems of general nonlinear evolution equations in a Hilbert space [11], with techniques for the finite-dimensional and analytic approximations of nonlinear DDEs based on Legendre-type polynomials, namely the Koornwinder polynomials [9]. Within the resulting framework, we adapt ideas from [11, Sect. 2.6] to derive—for a broad class of cost functionals and nonlinear DDEs—error estimates for the approximations of the value function and optimal control, associated with the Galerkin-Koornwinder (GK) systems; see Theorem 2.1 and Corollary 2.1. These error estimates are formulated in terms of residual energy, namely the energy contained in the controlled DDE solutions as projected onto the orthogonal complement of a given Galerkin subspace.

Our approach is then applied to a Wright equation (an analogue of a logistic equation with delay; see Sect. 3.1) with the purpose of reducing (optimally) the amplitude of the oscillations displayed by the DDE's solution subject to a quadratic cost; see Sect. 3. For this model, the oscillations emerge through a supercritical Hopf bifurcation as the delay parameter $\tau$ crosses a critical value, $\tau_c$, from below. We show, for $\tau$ above and close to $\tau_c$, that a suboptimal controller at a nearly optimal cost can be synthesized from a 2D projection of the 6D GK system; see Sect. 3.3. The projection is made onto the space spanned by the first two eigenvectors of the linear part of the 6D GK system; the sixth dimension constitutes, for this example, the minimal dimension (using Koornwinder polynomials) required to resolve accurately the linear contribution to the oscillation frequency of the DDE's solution; see (3.29) in Sect. 3.3.

Using the resulting 2D ODE system, the (sub)optimal controls obtained either from application of Pontryagin's maximum principle (PMP) or by solving the associated HJB equation are shown to provide nearly identical numerical solutions. Given the good control skills obtained from the controller synthesized from the reduced HJB equation, one can reasonably infer that the corresponding "reduced" value function thus provides a good approximation of the "full" value function associated with the DDE optimal control problem; see Sect. 4.

The article is organized as follows. In Sect. 2, we first introduce the class of nonlinear DDEs considered in this article and recall in Sect. 2.1 how to recast such DDEs into infinite-dimensional evolution equations in a Hilbert space. We then summarize in Sect. 2.2 the main tools from [9] for the analytic determination of GK systems approximating DDEs. In Sect. 2.3, we derive error estimates for the approximations of the value function and optimal control obtained from the GK systems. Our approach is applied to the Wright equation in Sect. 3. Section 3.1 introduces the optimal control problem associated with this equation and its abstract functional formulation following Sect. 2.1. The explicit GK approximations of the corresponding optimal control problem are derived in Sect. 3.2. Numerical results based on the PMP from a projected 2D GK system are then presented in Sect. 3.3. In Sect. 4, we show that by numerically solving the associated reduced HJB equation, a (sub)optimal control in feedback form can be synthesized at a nearly optimal cost. Finally, directions to derive reduced systems of even lower dimension from GK approximations are outlined in Sect. 5.

## 2. Optimal control of DDEs: Galerkin-Koornwinder approximations

In this article, we are concerned with optimal control problems associated with nonlinear DDEs of the form

 (2.1) $\dfrac{dm(t)}{dt} = a\,m(t) + b\,m(t-\tau) + c\displaystyle\int_{t-\tau}^{t} m(s)\,ds + F\Big(m(t),\, m(t-\tau),\, \displaystyle\int_{t-\tau}^{t} m(s)\,ds\Big),$

where $a$, $b$, and $c$ are real numbers, $\tau > 0$ is the delay parameter, and $F$ is a nonlinear function. We refer to Sect. 2.3 for assumptions on $F$. To simplify the presentation, we restrict ourselves to this class of scalar DDEs, but nonlinear systems of DDEs involving several delays are allowed by the approximation framework of [9] adopted in this article.

Note that even for scalar DDEs such as given by Eq. (2.1), the associated state space is infinite-dimensional. This is due to the presence of time-delay terms, which require providing initial data over an interval of length $\tau$, where $\tau$ is the delay. It is often desirable, though, to have low-dimensional ODE systems that capture qualitative features of the DDE dynamics, as well as approximate certain of its quantitative aspects.

The approach adopted here consists of approximating the infinite-dimensional equation (2.1) by a finite-dimensional ODE system built from Koornwinder polynomials following [9], and then solving reduced-order optimal control problems aimed at approximating a given optimal control problem associated with Eq. (2.1).

To justify this approach, Eq. (2.1) needs first to be recast into an evolution equation, in order to apply in a second step, recent results dealing with the Galerkin approximations to optimal control problems governed by general nonlinear evolution equations in Hilbert spaces, such as presented in [11].

As a cornerstone, rigorous Galerkin approximations to Eq. (2.1) are crucial, and are recalled hereafter. We first recall how a DDE such as Eq. (2.1) can be recast into an infinite-dimensional evolution equation in a Hilbert space.

### 2.1. Recasting a DDE into an infinite–dimensional evolution equation

The reformulation of a system of DDEs into an infinite-dimensional ODE is classical. For this purpose, two types of function spaces are typically used as state space: the space of continuous functions $C([-\tau,0];\mathbb{R})$, see [26], and the Hilbert space $L^2([-\tau,0);\mathbb{R})\times\mathbb{R}$, see [18]. The Banach space setting of continuous functions has been extensively used in the study of bifurcations arising from DDEs, see e.g. [8, 17, 19, 21, 37, 47, 60], while the Hilbert space setting is typically adopted for the approximation of DDEs or their optimal control; see e.g. [1, 2, 3, 9, 36, 33, 35, 42, 22, 28].

Being concerned with the optimal control of scalar DDEs of the form (2.1), we adopt here the Hilbert space setting and consider in that respect our state space to be

 (2.2) $H := L^2([-\tau,0);\mathbb{R}) \times \mathbb{R},$

endowed with the following inner product, defined for any $(f_1,\gamma_1)$ and $(f_2,\gamma_2)$ in $H$ by:

 (2.3) $\langle (f_1,\gamma_1), (f_2,\gamma_2)\rangle_H := \dfrac{1}{\tau}\displaystyle\int_{-\tau}^{0} f_1(\theta)\,f_2(\theta)\,d\theta + \gamma_1\gamma_2.$
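Numerically, the inner product (2.3) can be approximated by quadrature once the function part of a state is supplied as a callable. The sketch below is our own illustration (the name `inner_H` and the Gauss-Legendre discretization are our choices, not from the paper):

```python
import numpy as np

def inner_H(f1, g1, f2, g2, tau, n_quad=200):
    """Approximate the inner product (2.3) on H = L^2([-tau,0);R) x R:
    (1/tau) * int_{-tau}^{0} f1(th) f2(th) dth  +  g1 * g2.
    The integral is evaluated by Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    theta = (nodes - 1.0) * tau / 2.0   # map [-1, 1] -> [-tau, 0]
    w = weights * tau / 2.0             # rescaled quadrature weights
    return (1.0 / tau) * np.sum(w * f1(theta) * f2(theta)) + g1 * g2
```

For instance, taking both function parts constant equal to 1 and both scalar parts equal to 1 gives the value 2, independently of $\tau$.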

We will also sometimes make use of the following subspace of $H$:

 (2.4) $W := H^1([-\tau,0);\mathbb{R}) \times \mathbb{R},$

where $H^1([-\tau,0);\mathbb{R})$ denotes the standard Sobolev subspace of $L^2([-\tau,0);\mathbb{R})$.

Let us denote by $m_t$ the time evolution of the history segments of a solution $m$ to Eq. (2.1), namely

 (2.5) $m_t(\theta) := m(t+\theta), \quad t \ge 0, \; \theta \in [-\tau,0].$

Now, by introducing the new variable

 (2.6) $y(t) := (m_t, m(t)) = (m_t, m_t(0)), \quad t \ge 0,$

Eq. (2.1) can be rewritten as the following abstract ODE in the Hilbert space $H$:

 (2.7) $\dfrac{dy}{dt} = \mathcal{A}y + \mathcal{F}(y),$

where the linear operator $\mathcal{A}$ is defined as

 (2.8) $[\mathcal{A}\Psi](\theta) := \begin{cases} \dfrac{d^{+}\Psi^{D}}{d\theta}, & \theta \in [-\tau,0), \\[2mm] a\,\Psi^{S} + b\,\Psi^{D}(-\tau) + c\displaystyle\int_{-\tau}^{0}\Psi^{D}(s)\,ds, & \theta = 0, \end{cases}$

with the domain $D(\mathcal{A})$ of $\mathcal{A}$ given by (cf. e.g. [33, Prop. 2.6])

 (2.9) $D(\mathcal{A}) = \big\{(\Psi^{D},\Psi^{S}) \in H : \Psi^{D} \in H^1([-\tau,0);\mathbb{R}), \; \lim_{\theta\to 0^-}\Psi^{D}(\theta) = \Psi^{S}\big\}.$

The nonlinearity $\mathcal{F}$ is here given, for all $\Psi = (\Psi^{D},\Psi^{S})$ in $H$, by

 (2.10) $[\mathcal{F}(\Psi)](\theta) := \begin{cases} 0, & \theta \in [-\tau,0), \\[1mm] F\Big(\Psi^{S}, \Psi^{D}(-\tau), \displaystyle\int_{-\tau}^{0}\Psi^{D}(s)\,ds\Big), & \theta = 0. \end{cases}$

With $D(\mathcal{A})$ such as given in (2.9), the operator $\mathcal{A}$ generates a linear $C_0$-semigroup on $H$, so that the Cauchy problem associated with the linear equation $\dot{y} = \mathcal{A}y$ is well-posed in Hadamard's sense; see e.g. [18, Thm. 2.4.6]. The well-posedness problem for the nonlinear equation (2.7) depends obviously on the nonlinear term $\mathcal{F}$, and we refer to [9] and references therein for a discussion of this problem; see also [59].

For later usage, we recall that a mild solution to Eq. (2.7) over $[0,T]$, with initial datum $y(0) = x$ taken in $H$, is an element $y$ in $C([0,T];H)$ that satisfies the integral equation

 (2.11) $y(t) = T(t)x + \displaystyle\int_0^t T(t-s)\,\mathcal{F}(y(s))\,ds, \quad \forall\, t \in [0,T],$

where $(T(t))_{t\ge 0}$ denotes the $C_0$-semigroup generated by the operator $\mathcal{A}$. This notion of mild solution extends naturally when a control term is added to the RHS of Eq. (2.7); in such a case the definition of a mild solution is amended by adding the corresponding integral term to the RHS of Eq. (2.11).

### 2.2. Galerkin-Koornwinder approximation of DDEs

Once a DDE is reframed into an infinite-dimensional ODE in $H$, the approximation problem by finite-dimensional ODEs can be addressed in various ways. In that respect, different basis functions have been proposed to decompose the state space $H$; these include, among others, step functions [1, 36], splines [2, 3], and orthogonal polynomial functions, such as Legendre polynomials [33, 28]. Compared with step functions or splines, the use of orthogonal polynomials typically leads to ODE approximations of lower dimension, for a given precision [2, 28]. On the other hand, classical polynomial basis functions do not live in the domain of the linear operator underlying the DDE, which leads to technical complications in establishing convergence results [33, 28]; see [9, Remark 2.1-(iii)].

One of the main contributions of [9] consisted in identifying that Koornwinder polynomials [39] lie in the domain of linear operators such as $\mathcal{A}$ given in (2.8), allowing in turn for a classical Trotter-Kato (TK) approximation approach from $C_0$-semigroup theory [25, 49] to deal with the ODE approximation of DDEs such as Eq. (2.1). The TK approximation approach can be viewed as the functional-analytic, operator version of the Lax equivalence principle: if "consistency" and "stability" are satisfied, then "convergence" holds, and reciprocally. The work [9] thus positions the approximation problem of DDEs within a well-defined territory. In particular, as pointed out in [11] and discussed in Sect. 2.3 below, the optimal control of DDEs benefits from the TK approximation approach.

In this section, we focus on another important feature pointed out in [9] for applications, namely, Galerkin approximations of DDEs built from Koornwinder polynomials can be efficiently computed via simple analytic formulas; see [9, Sections 5-6 and Appendix C]. We recall below the main elements to do so referring to [9] for more details.

First, let us recall that Koornwinder polynomials are obtained from Legendre polynomials according to the relation

 (2.12) $K_n(s) := -(1+s)\dfrac{d}{ds}L_n(s) + (n^2+n+1)L_n(s), \quad s \in [-1,1], \; n \in \mathbb{N};$

see [39, Eq. (2.1)]; here $L_n$ denotes the Legendre polynomial of degree $n$.
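Since (2.12) only involves Legendre polynomials and their derivatives, Koornwinder polynomials are easy to generate in practice. The helper below is our own sketch built on `numpy.polynomial`; it can be used to check the normalization $K_n(1) = 1$ and the orthogonality relation (2.14):

```python
import numpy as np
from numpy.polynomial import legendre as lg

def koornwinder(n):
    """Build K_n via (2.12): K_n(s) = -(1+s) L_n'(s) + (n^2+n+1) L_n(s),
    returned as a numpy Polynomial in the power basis."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                       # Legendre-basis coefficients of L_n
    Ln = lg.Legendre(coeffs).convert(kind=np.polynomial.Polynomial)
    one_plus_s = np.polynomial.Polynomial([1.0, 1.0])   # the factor (1 + s)
    return -one_plus_s * Ln.deriv() + (n * n + n + 1) * Ln
```

For example, `koornwinder(1)` yields $2s - 1$, and evaluating the weighted inner product of $K_1$ and $K_2$ by Gauss-Legendre quadrature reproduces the orthogonality (2.14) to machine precision.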

Koornwinder polynomials are known to form an orthogonal set for the following weighted inner product with a point-mass at $s = 1$,

 (2.13) $\mu(ds) = \dfrac{1}{2}\,ds + \delta_1,$

where $\delta_1$ denotes the Dirac point-mass at the right endpoint $s = 1$; see [39]. In other words,

 (2.14) $\displaystyle\int_{-1}^{1} K_n(s)K_p(s)\,d\mu(s) = \dfrac{1}{2}\int_{-1}^{1} K_n(s)K_p(s)\,ds + K_n(1)K_p(1) = 0, \quad \text{if } p \ne n.$

It is also worthwhile noting that the sequence given by

 (2.15) $\{\mathcal{K}_n := (K_n, K_n(1)) : n \in \mathbb{N}\}$

forms an orthogonal basis of the product space

 (2.16) $E := L^2([-1,1);\mathbb{R}) \times \mathbb{R},$

where $E$ is endowed with the following inner product:

 (2.17) $\langle (f,a), (g,b)\rangle_E = \dfrac{1}{2}\displaystyle\int_{-1}^{1} f(s)g(s)\,ds + ab, \quad (f,a),(g,b) \in E.$

The norm induced by this inner product will be denoted hereafter by $\|\cdot\|_E$.

From the original Koornwinder basis given on the interval $[-1,1]$, orthogonal polynomials on the interval $[-\tau,0]$ for the inner product (2.3) can now be easily obtained by using the simple linear transformation $\mathcal{T}$ defined by:

 (2.18) $\mathcal{T}: [-\tau,0] \to [-1,1], \quad \theta \mapsto 1 + \dfrac{2\theta}{\tau}.$

Indeed, for $K_n$ given by (2.12), let us define the rescaled polynomial $K_n^\tau$ by

 (2.19) $K_n^\tau: [-\tau,0] \to \mathbb{R}, \quad \theta \mapsto K_n\Big(1 + \dfrac{2\theta}{\tau}\Big), \quad n \in \mathbb{N}.$

As shown in [9], the sequence

 (2.20) $\{\mathcal{K}_n^\tau := (K_n^\tau, K_n^\tau(0)) : n \in \mathbb{N}\}$

forms an orthogonal basis for the space $H$ endowed with the inner product given in (2.3). Note that since $K_n(1) = 1$ [9, Prop. 3.1], we have

 (2.21) $K_n^\tau(0) = 1.$

As shown in [9, Sect. 5 & Appendix C], (rescaled) Koornwinder polynomials allow for analytical Galerkin approximations of general nonlinear systems of DDEs. In the case of a nonlinear scalar DDE such as Eq. (2.1), the Galerkin-Koornwinder (GK) approximation, $y_N$, is obtained as

 (2.22) $y_N(t) = \displaystyle\sum_{j=0}^{N-1}\xi_j^N(t)\,\mathcal{K}_j^\tau, \quad t \ge 0,$

where the $\xi_j^N$ solve the $N$-dimensional ODE system

 (2.23) $\dfrac{d\xi_j^N}{dt} = \dfrac{1}{\|\mathcal{K}_j\|_E^2}\displaystyle\sum_{n=0}^{N-1}\Big(a + b\,K_n(-1) + c\,\tau\,(2\delta_{n,0}-1) + \dfrac{2}{\tau}\sum_{k=0}^{n-1} a_{n,k}\big(\delta_{j,k}\,\|\mathcal{K}_j\|_E^2 - 1\big)\Big)\xi_n^N(t) + \dfrac{1}{\|\mathcal{K}_j\|_E^2}\,F\Big(\displaystyle\sum_{n=0}^{N-1}\xi_n^N(t),\; \sum_{n=0}^{N-1}\xi_n^N(t)\,K_n(-1),\; \tau\,\xi_0^N(t) - \tau\sum_{n=1}^{N-1}\xi_n^N(t)\Big), \quad 0 \le j \le N-1.$

Here the Kronecker symbol $\delta_{j,k}$ has been used, and the coefficients $a_{n,k}$ are obtained by solving a triangular linear system in which the right-hand side has explicit coefficients; see [9, Prop. 5.1].

In practice, an approximation $m_N(t)$ of the solution $m(t)$ of Eq. (2.1) is obtained as the state part (at time $t$) of $y_N(t)$ given by (2.22), which, thanks to the normalization property given in (2.21), reduces to

 (2.24) $m_N(t) = \displaystyle\sum_{j=0}^{N-1}\xi_j^N(t).$

For later usage, we rewrite the above GK system into the following compact form:

 (2.25) $\dfrac{d\xi^N}{dt} = M\xi^N + G(\xi^N),$

where $M\xi^N$ denotes the linear part of Eq. (2.23) and $G(\xi^N)$ its nonlinear part. Namely, $M$ is the $N \times N$ matrix whose elements are given by

 (2.26) $(M)_{j,n} = \dfrac{1}{\|\mathcal{K}_j\|_E^2}\Big(a + b\,K_n(-1) + c\,\tau\,(2\delta_{n,0}-1) + \dfrac{2}{\tau}\displaystyle\sum_{k=0}^{n-1} a_{n,k}\big(\delta_{j,k}\,\|\mathcal{K}_j\|_E^2 - 1\big)\Big),$

where $0 \le j, n \le N-1$, and the nonlinear vector field $G: \mathbb{R}^N \to \mathbb{R}^N$ is given component-wise by

 (2.27) $G_j(\xi^N) = \dfrac{1}{\|\mathcal{K}_j\|_E^2}\,F\Big(\displaystyle\sum_{n=0}^{N-1}\xi_n^N(t),\; \sum_{n=0}^{N-1}\xi_n^N(t)\,K_n(-1),\; \tau\,\xi_0^N(t) - \tau\sum_{n=1}^{N-1}\xi_n^N(t)\Big),$

for each $0 \le j \le N-1$.

For rigorous convergence results of the GK systems (2.25) for Eq. (2.1) when $F$ does not depend on the discrete delay term $m(t-\tau)$, we refer to [9, Sect. 4]. When $F$ does depend on $m(t-\tau)$, convergence is also believed to hold, as supported by the striking numerical results shown in [9, Sect. 6].
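For experimentation, the compact form (2.25) can be integrated with any standard ODE scheme once $M$ and $G$ are assembled. The sketch below is our own fixed-step RK4 integrator, with $M$ and $G$ supplied as placeholders; it also recovers the state-part approximation (2.24):

```python
import numpy as np

def integrate_gk(M, G, xi0, t_final, dt=1e-3):
    """Integrate the compact GK system (2.25), d(xi)/dt = M xi + G(xi),
    with a fixed-step RK4 scheme; returns (times, trajectory).
    M: (N, N) array; G: callable R^N -> R^N; xi0: initial coefficients."""
    n_steps = int(round(t_final / dt))
    xi = np.array(xi0, dtype=float)
    traj = [xi.copy()]
    rhs = lambda x: M @ x + G(x)
    for _ in range(n_steps):
        k1 = rhs(xi)
        k2 = rhs(xi + 0.5 * dt * k1)
        k3 = rhs(xi + 0.5 * dt * k2)
        k4 = rhs(xi + dt * k3)
        xi = xi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(xi.copy())
    t = np.linspace(0.0, n_steps * dt, n_steps + 1)
    return t, np.array(traj)

def m_approx(traj):
    """State-part approximation (2.24): m_N(t) = sum_j xi_j^N(t)."""
    return traj.sum(axis=1)
```

With a diagonal $M$ and $G \equiv 0$ the output can be checked against the exact exponential solution, which is a convenient sanity test before plugging in the actual GK matrices.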

### 2.3. Galerkin-Koornwinder approximation of nonlinear optimal control problems associated with DDEs

#### 2.3.1. Preliminaries

In this section, we outline how the recent results of [11], concerned with the Galerkin approximations to optimal control problems governed by nonlinear evolution equations in Hilbert spaces, apply to the context of nonlinear DDEs when these approximations are built from Koornwinder polynomials. We thus provide here further elements regarding the research program outlined in [11, Sect. 4].

We consider here, given a finite horizon $T > 0$, the following controlled version of Eq. (2.7):

 (2.28) $\dfrac{dy}{ds} = \mathcal{A}y + \mathcal{F}(y) + \mathfrak{C}(u(s)), \quad s \in (t,T], \; u \in \mathcal{U}_{ad}[t,T], \qquad y(t) = x \in H.$

The (possibly nonlinear) mapping $\mathfrak{C}$ is assumed to take values in $H$, and the control $u(s)$ lies in a bounded subset $U$ of a separable Hilbert space $V$, possibly different from $H$. Assumptions about the set, $\mathcal{U}_{ad}[t,T]$, of admissible controls are made precise below; see (2.36).

Here our Galerkin subspaces are spanned by the rescaled Koornwinder polynomials, namely

 (2.29) $H_N = \mathrm{span}\{\mathcal{K}_0^\tau, \cdots, \mathcal{K}_{N-1}^\tau\}, \quad N = 1, 2, \cdots,$

where $\mathcal{K}_n^\tau$ is defined in (2.20). Denoting by $\Pi_N$ the projector of $H$ onto $H_N$, the corresponding GK approximation of Eq. (2.28) is then given by

 (2.30) $\dfrac{dy_N}{ds} = \mathcal{A}_N y_N + \Pi_N\mathcal{F}(y_N) + \Pi_N\mathfrak{C}(u(s)), \quad s \in (t,T], \; u \in \mathcal{U}_{ad}[t,T], \qquad y_N(t) = \Pi_N x \in H_N,$

where $\mathcal{A}_N := \Pi_N\mathcal{A}\Pi_N$.

Given a cost functional $J_{t,x}$ assessed along a solution $y_{t,x}(\cdot;u)$ of Eq. (2.28) driven by $u$,

 (2.31) $J_{t,x}(u) := \displaystyle\int_t^T \big[\mathcal{G}(y_{t,x}(s;u)) + \mathcal{E}(u(s))\big]\,ds, \quad t \in [0,T), \; u \in \mathcal{U}_{ad}[t,T],$

(with $\mathcal{G}$ and $\mathcal{E}$ to be defined), the focus of [11] was to identify simple checkable conditions that guarantee, in (but not limited to) such a context, the convergence of the corresponding value functions, namely

 (2.32) $\displaystyle\lim_{N\to\infty}\,\sup_{t\in[0,T]}|v_N(t,\Pi_N x) - v(t,x)| = 0,$

where

 (2.33a) $v(t,x) := \displaystyle\inf_{u\in\mathcal{U}_{ad}[t,T]} J_{t,x}(u), \quad (t,x) \in [0,T)\times H, \quad \text{and} \quad v(T,x) = 0,$
 (2.33b) $v_N(t,x_N) := \displaystyle\inf_{u\in\mathcal{U}_{ad}[t,T]} J^N_{t,x_N}(u), \quad (t,x_N) \in [0,T)\times H_N, \quad \text{and} \quad v_N(T,x_N) = 0,$

where $J^N_{t,x_N}$ denotes the cost functional assessed along the Galerkin approximation $y_N$ to (2.28) driven by $u$ and emanating at time $t$ from

 (2.34) $x_N := \Pi_N x.$
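To fix ideas on the definitions (2.33), the value function of a toy finite-dimensional problem can be approximated by brute force over a finite family of piecewise-constant controls. The scalar dynamics, running cost, and discretization below are illustrative assumptions of ours, not the DDE problem of this article:

```python
import itertools
import numpy as np

def value_function(x, t=0.0, T=1.0, n_steps=3, controls=(-1.0, 0.0, 1.0)):
    """Brute-force illustration of (2.33): approximate
    v(t, x) = inf_u J_{t,x}(u) for the toy scalar dynamics
    dy/ds = -y + u with running cost y^2 + u^2, minimizing over
    piecewise-constant controls on a uniform grid (explicit Euler)."""
    dt = (T - t) / n_steps
    best = np.inf
    for seq in itertools.product(controls, repeat=n_steps):
        y, cost = x, 0.0
        for u in seq:
            cost += (y * y + u * u) * dt   # running cost G(y) + E(u)
            y += (-y + u) * dt             # explicit Euler step
        best = min(best, cost)
    return best
```

Enlarging the control set can only decrease the computed value, mirroring the infimum over the admissible set in (2.33); real solvers of course replace this enumeration by PMP or HJB techniques, as done later in the article.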

As shown in [11], conditions ensuring the convergence (2.32) can be grouped into three categories. The first set of conditions deals with the operator $\mathcal{A}$ and its Galerkin approximation $\mathcal{A}_N$. Essentially, it boils down to showing that $\mathcal{A}$ generates a $C_0$-semigroup on $H$, and that the Trotter-Kato approximation conditions are satisfied; see Assumptions (A0)-(A2) in [11, Section 2.1]. The latter conditions are, as mentioned earlier, satisfied in the case of DDEs and GK approximations, as shown by [9, Lemmas 4.2 and 4.3].

The second group of conditions identified in [11] is also not restrictive, in the sense that only local Lipschitz conditions on the nonlinear mappings are required (note however that in the case of DDEs, nonlinearities that include discrete delays may complicate the verification of a local Lipschitz property in $H$ as given by (2.2); the case of nonlinearities depending on distributed delays and/or the current state can in general be handled more easily; see [9, Sect. 4.3.2]), as well as continuity of the cost terms and compactness of $U$, where $\mathcal{U}_{ad}[t,T]$ is taken to be

 (2.35) $\mathcal{U}_{ad}[t,T] := \{u|_{[t,T]} : u \in \mathcal{U}_{ad}\},$

with

 (2.36) $\mathcal{U}_{ad} := \{u \in L^q(0,T;V) : u(s) \in U \text{ for a.e. } s \in [0,T]\}, \quad q \ge 1.$

Also required to hold are a priori bounds, uniform in $u$ in $\mathcal{U}_{ad}$, for the solutions of Eq. (2.28) as well as for their Galerkin approximations. Depending on the specific form of Eq. (2.28), such bounds can be derived for a broad class of DDEs; in that respect, the proofs of [9, Estimates (4.75)] and [9, Corollary 4.3] can be easily adapted to the case of controlled DDEs.

Finally, the last condition to ensure (2.32) requires that the residual energy of the solution $y$ of (2.28) (with $t = 0$) driven by $u$ satisfies

 (2.37) $\|\Pi_N^\perp y(s;x,u)\|_H \longrightarrow 0 \quad \text{as } N \to \infty,$

uniformly with respect to both the control $u$ in $\mathcal{U}_{ad}$ and the time $s$ in $[0,T]$; see Assumption (A7) in [11, Section 2.4]. Here,

 (2.38) $\Pi_N^\perp := \mathrm{Id}_H - \Pi_N.$

Easily checkable conditions ensuring such a doubly uniform convergence are identified in [11, Section 2.7] for a broad class of evolution equations and their Galerkin approximations, but unfortunately they do not apply to the case of the GK approximations considered here. The scope of this article does not allow for an investigation of such conditions in the case of DDEs, and results along this direction will be communicated elsewhere.

Nevertheless, error estimates can be still derived in the case of DDEs by adapting elements provided in [11, Section 2.6]. We turn now to the formulation of such error estimates.
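The residual energy appearing in (2.37) can be evaluated numerically for a given state $y = (f, f(0))$ by expanding it against the rescaled Koornwinder basis. The sketch below is our own quadrature-based illustration (function names are ours); it subtracts the projected energy from the total energy, using the orthogonality of the basis:

```python
import numpy as np
from numpy.polynomial import legendre as lg

def koornwinder(n):
    """K_n via (2.12), as a numpy Polynomial in the power basis."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    Ln = lg.Legendre(coeffs).convert(kind=np.polynomial.Polynomial)
    return -np.polynomial.Polynomial([1.0, 1.0]) * Ln.deriv() + (n * n + n + 1) * Ln

def residual_energy(f, tau, N, n_quad=60):
    """||Pi_N^perp y||_H^2 for y = (f, f(0)), with Pi_N the H-orthogonal
    projector onto span{K_0^tau, ..., K_{N-1}^tau}."""
    s, w = np.polynomial.legendre.leggauss(n_quad)   # nodes on [-1, 1]
    theta = (s - 1.0) * tau / 2.0                    # mapped to [-tau, 0]
    def ip(u_vals, u0, v_vals, v0):                  # inner product (2.3)
        return 0.5 * np.sum(w * u_vals * v_vals) + u0 * v0
    f_vals, f0 = f(theta), f(0.0)
    energy = ip(f_vals, f0, f_vals, f0)
    for j in range(N):
        Kj = koornwinder(j)
        Kv, K0 = Kj(s), Kj(1.0)                      # K_j^tau(theta) = K_j(s)
        nrm2 = ip(Kv, K0, Kv, K0)
        cj = ip(f_vals, f0, Kv, K0) / nrm2
        energy -= cj**2 * nrm2
    return energy
```

For a smooth history such as $f = \cos$, the residual energy decays rapidly with $N$, which is the kind of behavior the estimates of Sect. 2.3.2 translate into error bounds for the value function and the optimal control.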

#### 2.3.2. Error estimates

Our aim is to derive error estimates for GK approximations to the following type of optimal control problem associated with the DDE (2.1) framed into the abstract form (2.7):

 (P) $\min J(x,u) \quad \text{s.t.} \quad (y,u) \in L^2(0,T;H) \times \mathcal{U}_{ad} \text{ solves (2.28) with } t = 0,$

in which $J := J_{0,x}$ is given by (2.31) with $t = 0$, and $y$ solves Eq. (2.28) with $y(0) = x$ in $H$.

To do so, we consider the following set of assumptions:

• (i) The mappings $\mathcal{F}$ and $\mathcal{G}$ are locally Lipschitz, and $\mathcal{E}$ is continuous.

• (ii) The linear operator $\mathcal{A}_N$ is diagonalizable over $\mathbb{C}$, and there exists $\alpha$ in $\mathbb{R}$ such that for each $N$,

 (2.39) $\mathrm{Re}\,\lambda_j^N \le \alpha, \quad \forall\, j \in \{1,\cdots,N\},$

where $\{\lambda_j^N\}_{j=1}^N$ denotes the set of (complex) eigenvalues of $\mathcal{A}_N$.

• (iii) Let $\mathcal{U}_{ad}$ be given by (2.36) with $U$ bounded in $V$. For each $x$ in $H$ and $u$ in $\mathcal{U}_{ad}$, the problem (2.28) (with $t = 0$) admits a unique mild solution $y(\cdot;x,u)$ in $C([0,T];H)$, and for each $N$, its GK approximation (2.30) admits a unique solution $y_N(\cdot;x_N,u)$. Moreover, there exists a constant $C > 0$ such that

 (2.40) $\|y(t;x,u)\|_H \le C, \quad \forall\, t \in [0,T], \; u \in \mathcal{U}_{ad},$ and $\|y_N(t;x_N,u)\|_H \le C, \quad \forall\, t \in [0,T], \; N \in \mathbb{N}^*, \; u \in \mathcal{U}_{ad}.$
• (iv) The mild solution $y(\cdot;x,u)$ belongs to $L^2(0,T;D(\mathcal{A}))$ if $x$ lives in $D(\mathcal{A})$.

###### Remark 2.1.
• Compared with the case of eigen-subspaces considered in [11, Section 2.6], the main difference here lies in the regularity assumption (iv) above. This assumption is used to handle the term $\Pi_N\mathcal{A}\Pi_N^\perp y$ arising in the error estimates; see e.g. (2.48) below. Note that this term is zero when $\mathcal{A}$ is self-adjoint and $H_N$ is an eigen-subspace of $\mathcal{A}$, which is the setting considered in [11, Section 2.6].

• A way to ensure Condition (iv) to hold consists of proving that the mild solution possesses the required extra regularity; for conditions on the nonlinearity ensuring such regularity, see e.g. [6, Theorem 3.2].

Finally, we introduce the cost functional $J^N$ assessed along the Galerkin approximation $y_N$ solving (2.30) with $t = 0$, namely

 (2.41) $J^N(x_N,u) := \displaystyle\int_0^T \big[\mathcal{G}(y_N(s;x_N,u)) + \mathcal{E}(u(s))\big]\,ds,$

along with the optimal control problem

 (PN) $\min J^N(x_N,u) \quad \text{s.t.} \quad (y_N,u) \in L^2(0,T;H_N) \times \mathcal{U}_{ad} \text{ solves (2.30) with } y_N(0) = x_N \in H_N.$

We are now in a position to derive error estimates as formulated in Theorem 2.1 below. For that purpose, Table 1 provides a list of the main symbols necessary for a good reading of the statement and proof of this theorem. The proof is largely inspired by that of [11, Theorem 2.3]; the main amendments are specified within the proof.

###### Theorem 2.1.

Assume that assumptions (i)-(iv) hold. Assume furthermore that for each $x$ in $D(\mathcal{A})$, there exists a minimizer $u^*_{t,x}$ (resp. $u^{N,*}_{t,x_N}$) for the value function $v$ (resp. $v_N$) defined in (2.33).

Then for any $x$ in $D(\mathcal{A})$, it holds that

 (2.42) $|v(t,x) - v_N(t,x_N)| \le \mathrm{Lip}(\mathcal{G}|_{\mathcal{B}})\big[\sqrt{T-t} + (T-t)\sqrt{f(T)}\big]\,R\big(u^*_{t,x}, u^{N,*}_{t,x_N}\big),$

with

 (2.43) $f(T) := \big(\mathrm{Lip}(\mathcal{F}|_{\mathcal{B}}) + \|\mathcal{A}\|^2_{op}\big)\exp\Big(2\Big[\alpha + \dfrac{3}{2}\mathrm{Lip}(\mathcal{F}|_{\mathcal{B}}) + \dfrac{1}{2}\Big]T\Big),$

and

 (2.44) $R\big(u^*_{t,x}, u^{N,*}_{t,x_N}\big) := \|\Pi_N^\perp y_{t,x}(\cdot;u^*_{t,x})\|_{L^2(t,T;D(\mathcal{A}))} + \|\Pi_N^\perp y_{t,x}(\cdot;u^{N,*}_{t,x_N})\|_{L^2(t,T;D(\mathcal{A}))}.$
###### Proof.

We provide here the main elements of the proof that need to be amended from [11, Theorem 2.3]. First note that mild solutions to (2.28) lie in $L^2(t,T;D(\mathcal{A}))$, due to Assumption (iv) and since the initial datum is taken in $D(\mathcal{A})$.

We first prove that for each $N$, for any $u$ in $\mathcal{U}_{ad}[t,T]$ and $x$ in $D(\mathcal{A})$,

 (2.45) $\|\Pi_N y_{t,x}(s;u) - y^N_{t,x_N}(s;u)\|^2_H \le f(T)\displaystyle\int_t^s \|\Pi_N^\perp y_{t,x}(s';u)\|^2_{D(\mathcal{A})}\,ds', \quad s \in [t,T].$

Let us introduce

 (2.46) $w(s) := \Pi_N y_{t,x}(s;u) - y^N_{t,x_N}(s;u).$

By applying $\Pi_N$ to both sides of Eq. (2.28), we obtain that $\Pi_N y_{t,x}(\cdot;u)$ (denoted by $\Pi_N y$ to simplify) satisfies:

 $\dfrac{d\Pi_N y}{ds} = \mathcal{A}_N\Pi_N y + \Pi_N\mathcal{A}\Pi_N^\perp y + \Pi_N\mathcal{F}(\Pi_N y + \Pi_N^\perp y) + \Pi_N\mathfrak{C}(u(s)).$

This together with (2.30) implies that $w$ satisfies the following problem:

 (2.47) $\dfrac{dw}{ds} = \mathcal{A}_N w + \Pi_N\mathcal{A}\Pi_N^\perp y + \Pi_N\mathcal{F}(\Pi_N y + \Pi_N^\perp y) - \Pi_N\mathcal{F}(y_N), \qquad w(t) = 0.$

By taking the $H$-inner product of both sides of (2.47) with $w$, we obtain:

 (2.48) $\dfrac{1}{2}\dfrac{d\|w\|^2_H}{ds} = \langle\mathcal{A}_N w, w\rangle_H + \langle\Pi_N\mathcal{A}\Pi_N^\perp y, w\rangle_H + \langle\Pi_N\big(\mathcal{F}(\Pi_N y + \Pi_N^\perp y) - \mathcal{F}(y_N)\big), w\rangle_H.$

We now estimate the first term on the RHS of the above equation using Assumption (ii). For this purpose, note that for any $p$ in $H_N$, the following identity holds:

 (2.49) $\langle\mathcal{A}_N p, p\rangle_H = \langle M\xi, \xi\rangle_{\mathbb{R}^N},$

where $M$ is the matrix representation of $\mathcal{A}_N$ in the Koornwinder basis, given by (2.26), and $\xi$ denotes the column vector whose entries are given by

 $\xi_j := \dfrac{1}{\|\mathcal{K}_j\|_E}\langle p, \mathcal{K}_j^\tau\rangle_H, \quad j = 0,\cdots,N-1.$

Note that the matrix $M$ has the same eigenvalues as $\mathcal{A}_N$. Thanks to Assumption (ii), $M$ is diagonalizable over $\mathbb{C}$. Denoting by $f_j$ the normalized eigenvector of $M$ corresponding to each eigenvalue $\lambda_j^N$, we have

 $M\xi = M\displaystyle\sum_{j=1}^{N}\tilde{\xi}_j f_j = \sum_{j=1}^{N}\lambda_j^N\,\tilde{\xi}_j f_j,$

where the $\tilde{\xi}_j$ denote the coordinates of $\xi$ in the eigenbasis $\{f_j\}$. It then follows that

 $\langle M\xi,\xi\rangle_{\mathbb{R}^N} = \Big\langle\displaystyle\sum_{j=1}^{N}\lambda_j^N\,\tilde{\xi}_j f_j,\, \xi\Big\rangle_{\mathbb{C}^N} = \sum_{j=1}^{N}\lambda_j^N\,|\tilde{\xi}_j|^2 = \sum_{j=1}^{N}\mathrm{Re}\,\lambda_j^N\,|\tilde{\xi}_j|^2,$

for which the latter equality holds since $\langle M\xi,\xi\rangle_{\mathbb{R}^N}$ is real-valued. Due to the bound (2.39), we thus have

 (2.50) $\langle M\xi,\xi\rangle_{\mathbb{R}^N} \le \alpha\displaystyle\sum_{j=1}^{N}|\tilde{\xi}_j|^2.$

On the other hand, by noting that

 (2.51) $\displaystyle\sum_{j=1}^{N}|\tilde{\xi}_j|^2 = \langle\tilde{\xi},\tilde{\xi}\rangle_{\mathbb{C}^N} = \langle\xi,\xi\rangle_{\mathbb{R}^N} = \langle p,p\rangle_H,$

we obtain from (2.49)-(2.51), by taking $p = w(s)$, that

 (2.52) $\langle\mathcal{A}_N w(s), w(s)\rangle_H \le \alpha\|w(s)\|^2_H, \quad s \in [t,T].$

We infer from (2.48) that

 $\dfrac{1}{2}\dfrac{d\|w(s)\|^2_H}{ds} \le \Big(\alpha + \dfrac{3}{2}\mathrm{Lip}(\mathcal{F}|_{\mathcal{B}}) + \dfrac{1}{2}\Big)\|w(s)\|^2_H + \dfrac{1}{2}\|\mathcal{A}\|^2_{op}\|\Pi_N^\perp y\|^2_{D(\mathcal{A})} + \dfrac{1}{2}\mathrm{Lip}(\mathcal{F}|_{\mathcal{B}})\|\Pi_N^\perp y\|^2_H,$

in which we have also used Assumption (iii) and the local Lipschitz property of $\mathcal{F}$ on the closed ball $\mathcal{B}$ in $H$ centered at the origin, with radius the constant $C$ given by (2.40).

By Gronwall's lemma, and recalling that $w(t) = 0$, we thus obtain

 $\|\Pi_N y_{t,x}(s;u) - y^N_{t,x_N}(s;u)\|^2_H \le f(T)\displaystyle\int_t^s \|\Pi_N^\perp y_{t,x}(s';u)\|^2_{D(\mathcal{A})}\,ds',$

with $f(T)$ given by (2.43), which proves (2.45).

Then by noting that

 $|J_{t,x}(u) - J^N_{t,x_N}(u)| \le \mathrm{Lip}(\mathcal{G}|_{\mathcal{B}})\displaystyle\int_t^T \Big(\|\Pi_N y_{t,x}(s;u) - y^N_{t,x_N}(s;u)\|_H + \|\Pi_N^\perp y_{t,x}(s;u)\|_{D(\mathcal{A})}\Big)\,ds,$

we have

 (2.53) $|J_{t,x}(u) - J^N_{t,x_N}(u)| \le \mathrm{Lip}(\mathcal{G}|_{\mathcal{B}})\big[\sqrt{T-t} + (T-t)\sqrt{f(T)}\big]\|\Pi_N^\perp y_{t,x}(\cdot;u)\|_{L^2(t,T;D(\mathcal{A}))},$

for all $u$ in $\mathcal{U}_{ad}[t,T]$.

Finally by noting that

 $v(t,x) = J_{t,x}(u^*_{t,x}) \le J_{t,x}(u^{N,*}_{t,x_N}) \quad\text{and}\quad v_N(t,x_N) = J^N_{t,x_N}(u^{N,*}_{t,x_N}),$

we obtain

 $v(t,x) - v_N(t,x_N) \le J_{t,x}(u^{N,*}_{t,x_N}) - J^N_{t,x_N}(u^{N,*}_{t,x_N}).$

It follows then from (2.53) that

 (2.54) $v(t,x) - v_N(t,x_N) \le \mathrm{Lip}(\mathcal{G}|_{\mathcal{B}})\big[\sqrt{T-t} + (T-t)\sqrt{f(T)}\big]\|\Pi_N^\perp y_{t,x}(\cdot;u^{N,*}_{t,x_N})\|_{L^2(t,T;D(\mathcal{A}))}.$

Similarly,

 (2.55) $v_N(t,x_N) - v(t,x) \le \mathrm{Lip}(\mathcal{G}|_{\mathcal{B}})\big[\sqrt{T-t} + (T-t)\sqrt{f(T)}\big]\|\Pi_N^\perp y_{t,x}(\cdot;u^*_{t,x})\|_{L^2(t,T;D(\mathcal{A}))}.$

The estimate (2.42) then results from (2.54) and (2.55).

We conclude this section with the following corollary, which provides an error estimate between the optimal control and the one obtained from a GK approximation.

###### Corollary 2.1.

Assume that the conditions given in Theorem 2.1 hold. Given $x$ in $D(\mathcal{A})$, let us introduce the notations $u^* := u^*_{0,x}$ and $u^*_N := u^{N,*}_{0,x_N}$. Assume furthermore that there exists $\sigma > 0$ such that the following local growth condition is satisfied for the cost functional $J = J_{0,x}$ (with $t = 0$) given by (2.31):

 (2.56) $\sigma\|u^* - v\|^q_{L^q(0,T;V)} \le J(x,v) - J(x,u^*),$

for all $v$ in some neighborhood of $u^*$ in $\mathcal{U}_{ad}$, with $\mathcal{U}_{ad}$ given by (2.36). Assume finally that $u^*_N$ lies in this neighborhood. Then,

 (2.57) $\|u^* - u^*_N\|^q_{L^q(0,T;V)} \le \dfrac{1}{\sigma}\mathrm{Lip}(\mathcal{G}|_{\mathcal{B}})\big[\sqrt{T} + T\sqrt{f(T)}\big]\Big(\|\Pi_N^\perp y^*(\cdot;u^*)\|_{L^2(0,T;D(\mathcal{A}))} + 2\|\Pi_N^\perp y(\cdot;u^*_N)\|_{L^2(0,T;D(\mathcal{A}))}\Big),$

where the constant $f(T)$ is given by (2.43).

###### Proof.

By the assumptions, we have

 ∥u∗−u∗N