
# Convergence of the shooting algorithm for singular optimal control problems

## Abstract

In this article we propose a shooting algorithm for optimal control problems governed by systems that are affine in one part of the control variable. Finitely many equality constraints on the initial and final state are considered. We recall a second order sufficient condition for weak optimality, and show that it guarantees the local quadratic convergence of the algorithm. We present an example and solve it numerically.

Keywords: optimal control, singular control, second order optimality condition, weak optimality, shooting algorithm, Gauss-Newton method

## 1 Introduction

We investigate optimal control problems governed by ordinary differential equations that are affine in one part of the control variable. This class of systems includes both the totally affine and the totally nonlinear cases. This study is motivated by many models that are found in practice. Among them we can cite the following: the Goddard problem analyzed in Martinon et al. [4, 5, 16], other models concerning the motion of a rocket in Lawden [15], Bell and Jacobson [3], Goh [13], Oberle [19], and an optimal production process in Cho et al. [6].

We can find shooting-like methods applied to the numerical solution of partially affine problems in, for instance, Oberle [18, 20] and Oberle-Taubert [21], where the authors use a generalization of the algorithm that Maurer [16] suggested for totally affine systems. These works present interesting implementations of a shooting-like algorithm, but they do not link the convergence of the method with sufficient conditions of optimality as it is done in this article.

In this paper we propose a shooting algorithm that can also be used to solve problems with bounds on the controls. We give theoretical support to this method by showing that a second order sufficient condition for optimality, proved in Aronna [1], ensures the local quadratic convergence of the algorithm.

The article is organized as follows. In Section 2 we give the statement of the problem, the main definitions and assumptions, and a first order optimality condition. The shooting algorithm is described in Section 3. In Section 4 we recall a second order sufficient condition for weak optimality. We state the main result of the article in Section 5. In Section 6 we work out an example and solve it numerically.

NOTATIONS. We denote by $h_t$ the value of a function $h$ at time $t$ if $h$ depends only on $t$, and by $h_{i,t}$ the $i$-th component of $h$ evaluated at $t$. Partial derivatives of a function $h$ of $(t,x)$ are referred to as $D_t h$ or $\dot h$ for the derivative in time, and $D_x h$ or $h_x$ for the differentiations with respect to space variables. The same convention is extended to higher order derivatives. By $L^p(0,T;\mathbb{R}^k)$ we mean the Lebesgue space of functions with domain equal to the interval $[0,T]$ and with values in $\mathbb{R}^k$. The notation $W^{q,s}(0,T;\mathbb{R}^k)$ refers to the Sobolev spaces.

## 2 Statement of the Problem

We study the optimal control problem (P) given by

$$J := \varphi_0(x_0,x_T) \to \min, \tag{1}$$
$$\dot x_t = F(x_t,u_t,v_t) = \sum_{i=0}^m v_{i,t}\,f_i(x_t,u_t), \quad \text{a.e. on } [0,T], \tag{2}$$
$$\eta_j(x_0,x_T) = 0, \quad \text{for } j = 1,\dots,d_\eta. \tag{3}$$

Here $f_i \colon \mathbb{R}^n \times \mathbb{R}^{n_u} \to \mathbb{R}^n$ for $i = 0,\dots,m$, and we put, for the sake of simplicity of notation, $v_0 \equiv 1$, which is not a variable. The nonlinear control $u$ belongs to $\mathcal{U} := L^\infty(0,T;\mathbb{R}^{n_u})$, while $\mathcal{V} := L^\infty(0,T;\mathbb{R}^m)$ denotes the space of affine controls and $\mathcal{X} := W^{1,\infty}(0,T;\mathbb{R}^n)$ refers to the state space. When needed, we write $w = (x,u,v)$ for a point in $\mathcal{W} := \mathcal{X}\times\mathcal{U}\times\mathcal{V}$. Assume throughout the article that the data functions $\varphi_0$, $f_i$ and $\eta$ have Lipschitz-continuous second derivatives. A trajectory is an element $w \in \mathcal{W}$ that satisfies the state equation (2). If, in addition, the constraints in (3) hold, we say that $w$ is a feasible trajectory of problem (P).

Set $\mathcal{P}$ the space of Lipschitz-continuous functions with values in $\mathbb{R}^{n*}$, the $n$-dimensional space of row-vectors with real components. Consider an element $\lambda := (\beta,p) \in \mathbb{R}^{d_\eta} \times \mathcal{P}$ and define the pre-Hamiltonian function

$$H[\lambda](x,u,v,t) := p_t\,F(x,u,v), \tag{4}$$

the initial-final Lagrangian function

$$\ell[\lambda](\zeta_0,\zeta_T) := \varphi_0(\zeta_0,\zeta_T) + \sum_{j=1}^{d_\eta} \beta_j\,\eta_j(\zeta_0,\zeta_T), \tag{5}$$

and the Lagrangian function

$$\mathcal{L}[\lambda](w) := \ell[\lambda](x_0,x_T) + \int_0^T p_t\,\big(F(x_t,u_t,v_t) - \dot x_t\big)\,dt.$$

Throughout the article we study a nominal feasible trajectory $\hat w = (\hat x,\hat u,\hat v)$ that we assume to be smooth. We present now a hypothesis for the endpoint constraints. Consider the mapping

$$G \colon \mathbb{R}^n \times \mathcal{U} \times \mathcal{V} \to \mathbb{R}^{d_\eta}, \qquad (x_0,u,v) \mapsto \eta(x_0,x_T),$$

where $x$ is the solution of (2) associated with $(x_0,u,v)$.

###### Assumption 1

The derivative of $G$ at $(\hat x_0,\hat u,\hat v)$ is onto.

The latter hypothesis is usually known as qualification of the endpoint equality constraints.

###### Definition 2.1

It is said that the feasible trajectory $\hat w$ is a weak minimum of problem (P) if there exists $\varepsilon > 0$ such that $\hat w$ is a minimum in the set of feasible trajectories $w = (x,u,v)$ satisfying

$$\|x - \hat x\|_\infty < \varepsilon, \qquad \|u - \hat u\|_\infty < \varepsilon, \qquad \|v - \hat v\|_\infty < \varepsilon.$$

The following first order necessary condition holds for $\hat w$. See the book by Pontryagin et al. [22] for a proof.

###### Theorem 2.1

Let $\hat w$ be a weak solution satisfying Assumption 1. Then there exists a unique $\hat\lambda = (\hat\beta,\hat p) \in \mathbb{R}^{d_\eta}\times\mathcal{P}$ such that $\hat p$ is a solution of the costate equation

$$-\dot{\hat p}_t = D_x H[\hat\lambda](\hat x_t,\hat u_t,\hat v_t,t), \quad \text{a.e. on } [0,T], \tag{6}$$

with transversality conditions

$$\hat p_0 = -D_{x_0}\ell[\hat\lambda](\hat x_0,\hat x_T), \tag{7}$$
$$\hat p_T = D_{x_T}\ell[\hat\lambda](\hat x_0,\hat x_T), \tag{8}$$

and the stationarity condition

$$H_u[\hat\lambda](\hat x_t,\hat u_t,\hat v_t,t) = 0, \qquad H_v[\hat\lambda](\hat x_t,\hat u_t,\hat v_t,t) = 0, \quad \text{a.e. on } [0,T], \tag{9}$$

is verified.

Throughout this article, $\hat w$ is considered to be a weak solution and thus it satisfies (9) for its unique associated multiplier $\hat\lambda$. Furthermore, note that since $v$ appears linearly in $H$, we have $H_{vv} \equiv 0$, and hence the Hessian of the pre-Hamiltonian with respect to the controls is a singular matrix on $[0,T]$. Therefore, $\hat w$ is a singular solution (as defined in [3] and [5]).

## 3 The shooting algorithm

The purpose of this section is to present an appropriate numerical scheme to solve the problem (P). More precisely, we investigate the formulation and the convergence of an algorithm that approximates an optimal solution provided an initial estimate exists.

### 3.1 Optimality system

In what follows we use the first order optimality conditions (9) to provide a set of equations from which we can determine $\hat w$. We obtain an optimality system in the form of a two-point boundary value problem (TPBVP).

Throughout the rest of the article we assume, for the sake of simplicity, that whenever some argument of $F$, $H$, $\ell$ or their derivatives is omitted, it is evaluated at $\hat w$ and $\hat\lambda$.

Let us recall that in the case where all the control variables appear nonlinearly ($m = 0$), the classical technique is to use the stationarity equation

$$H_u[\hat\lambda](\hat w) = 0, \tag{10}$$

to write $\hat u$ as a function of $(\hat x,\hat p)$. This procedure is also detailed in [17] and [24]. One is able to do this by assuming, for instance, the strengthened Legendre-Clebsch condition

$$H_{uu}[\hat\lambda](\hat w) \succ 0. \tag{11}$$

In case (11) holds, due to the Implicit Function Theorem, we can write $\hat u_t = U(\hat x_t,\hat p_t)$, with $U$ being a smooth function. Hence, replacing the occurrences of $\hat u$ by $U(\hat x,\hat p)$ in the state and costate equations yields a two-point boundary value problem.

On the other hand, when the system is affine in all the control variables ($n_u = 0$), the control cannot be eliminated from the stationarity equation $H_v = 0$ and, therefore, a different technique is employed (see e.g. [16, 2, 24]). The idea is to consider an index $1 \le i \le m$ and to take the lowest order time derivative of $H_{v_i}$ in which $v$ appears with a coefficient that is not identically zero. Goh [11, 10], Kelley et al. [14] and Robbins [23] proved that this order is even. This implies that the control does not appear the first time we derive $H_v$ with respect to time, i.e. $\dot H_v$ depends only on $\hat x$ and $\hat p$, and consequently it is differentiable in time. Thus the expression

$$\ddot H_v[\hat\lambda](\hat w) = 0 \tag{12}$$

is well-defined. The control can be retrieved from (12) provided that, for instance, the strengthened generalized Legendre-Clebsch condition

$$-\frac{\partial \ddot H_v}{\partial v}[\hat\lambda](\hat w) \succ 0 \tag{13}$$

holds (see Goh [10, 12, 13]). In this case, we can write $\hat v_t = V(\hat x_t,\hat p_t)$, with $V$ being differentiable. By replacing $\hat v$ by $V(\hat x,\hat p)$ in the state-costate equations, we get an optimality system in the form of a boundary value problem.

In the problem studied here, where both control types are present, we aim to use both equations (10) and (12) to retrieve the controls $(\hat u,\hat v)$ as functions of the state $\hat x$ and the costate $\hat p$. We next describe a procedure to achieve this elimination that was proposed in Goh [12, 13]. First let us recall a necessary condition proved in Goh [9] and in [1, Lemma 3.10 and Corollary 5.2]. Define $[f_i,f_j]_x := f_{j,x}f_i - f_{i,x}f_j$, which is referred to as the Lie bracket in the variable $x$ of $f_i$ and $f_j$.

###### Lemma 3.1 (Necessary conditions for weak optimality)

If $\hat w$ is a smooth weak minimum for (P) satisfying Assumption 1, then

$$H_{uv} \equiv 0, \tag{14}$$
$$\hat p\,[f_i,f_j]_x = 0, \quad \text{for } i,j = 1,\dots,m. \tag{15}$$

Let us show that $H_v$ can be differentiated twice with respect to the time variable, as was done in the totally affine case. Observe that (10) may be used to write $\dot{\hat u}$ as a function of $(\hat x,\hat u,\hat v,\hat p)$. In fact, in view of Lemma 3.1, the coefficient of $\dot{\hat v}$ in $\dot H_u$, which is $H_{uv}$, is zero. Consequently,

$$\dot H_u = \dot H_u[\hat\lambda](\hat x,\hat u,\hat v,\dot{\hat u}) = 0 \tag{16}$$

and, if the strengthened Legendre-Clebsch condition (11) holds, $\dot{\hat u}$ can be eliminated from (16), yielding

$$\dot{\hat u} = \Gamma[\hat\lambda](\hat x,\hat u,\hat v). \tag{17}$$

Take now an index $i \in \{1,\dots,m\}$ and observe that

$$0 = \dot H_{v_i} = \frac{d}{dt}\big(\hat p\,\hat f_i\big) = \hat p\sum_{j=0}^m \hat v_j\,[f_j,f_i]_x + H_{v_i u}\,\dot{\hat u} = \hat p\,[f_0,f_i]_x,$$

where Lemma 3.1 is used in the last equality. Therefore, $\dot H_{v_i} = \hat p\,[f_0,f_i]_x$. We can then differentiate one more time, replace the occurrence of $\dot{\hat u}$ by $\Gamma[\hat\lambda](\hat x,\hat u,\hat v)$, and obtain (12), as desired. See that (12), together with the boundary conditions

$$H_v[\hat\lambda](\hat w_T) = 0, \tag{18}$$
$$\dot H_v[\hat\lambda](\hat w_0) = 0, \tag{19}$$

guarantee the second identity in the stationarity condition (9).

Notation: Denote by (OS) the set of equations consisting of (2)-(3), (6)-(8), (10), (12) and the boundary conditions (18)-(19).

###### Remark 3.1

Instead of (18)-(19), we could choose another pair of endpoint conditions among the four possible ones, $H_v[\hat\lambda](\hat w_0) = 0$, $H_v[\hat\lambda](\hat w_T) = 0$, $\dot H_v[\hat\lambda](\hat w_0) = 0$ and $\dot H_v[\hat\lambda](\hat w_T) = 0$, always including at least one of order zero. The choice we made will simplify the presentation of the results afterwards.

Observe now that the derivative with respect to $(u,v)$ of the mapping $(u,v) \mapsto \big(H_u[\hat\lambda](\hat w),\,-\ddot H_v[\hat\lambda](\hat w)\big)$ is given by

$$J := \begin{pmatrix} H_{uu} & H_{uv} \\[4pt] -\dfrac{\partial \ddot H_v}{\partial u} & -\dfrac{\partial \ddot H_v}{\partial v} \end{pmatrix}. \tag{20}$$

On the other hand, if (11) and (13) are verified, $J$ is positive definite along $\hat w$ and, consequently, it is nonsingular. In this case we may write $\hat u$ and $\hat v$ as functions of $(\hat x,\hat p)$ from (10) and (12). Thus (OS) can be regarded as a TPBVP whenever the following hypothesis is verified.

###### Assumption 2

$\hat w$ satisfies (11) and (13).

Summing up we get the following result.

###### Proposition 3.1 (Elimination of the control)

If $\hat w$ is a smooth weak minimum verifying Assumptions 1 and 2, then

$$\hat u = U[\hat\lambda](\hat x), \qquad \hat v = V[\hat\lambda](\hat x),$$

with $U$ and $V$ smooth functions.

###### Remark 3.2

When the linear and nonlinear controls are uncoupled, this elimination of the controls is much simpler. An example is shown in Oberle [20] where a nonlinear control variable can be eliminated by the stationarity of the pre-Hamiltonian, and the remaining problem has two uncoupled controls, one linear and one nonlinear. Another example is the one presented in Section 6.

### 3.2 The algorithm

The aim of this section is to present a numerical scheme to solve system (OS). In view of Proposition 3.1 we can define the following mapping.

###### Definition 3.1

Let $S$ be the shooting function given by

$$\nu := (x_0,p_0,\beta) \mapsto S(\nu) := \begin{pmatrix} \eta(x_0,x_T) \\ p_0 + D_{x_0}\ell[\lambda](x_0,x_T) \\ p_T - D_{x_T}\ell[\lambda](x_0,x_T) \\ H_v[\lambda](w_T) \\ \dot H_v[\lambda](w_0) \end{pmatrix},$$

where $(x,p)$ is a solution of (2), (6), (10), (12) with initial conditions $x_0$ and $p_0$, and where the occurrences of $u$ and $v$ were replaced by the functions $U[\lambda](x)$ and $V[\lambda](x)$ of Proposition 3.1.

Note that solving (OS) consists of finding $\hat\nu$ such that

$$S(\hat\nu) = 0. \tag{21}$$

Since the number of equations in (21) is greater than the number of unknowns, the Gauss-Newton method is a suitable approach to solve it. The shooting algorithm we propose here consists of solving the equation (21) by the Gauss-Newton method.

### 3.3 The Gauss-Newton Method

This algorithm solves the equivalent least squares problem

$$\min_{\nu}\ \big\|S(\nu)\big\|^2.$$

At each iteration $k$, given the approximate value $\nu^k$, it looks for the step $\Delta^k$ that gives the minimum of the linear approximation of the problem

$$\min_{\Delta \in D(S)} \big\|S(\nu^k) + S'(\nu^k)\,\Delta\big\|^2. \tag{22}$$

Afterwards it updates

$$\nu^{k+1} \leftarrow \nu^k + \Delta^k.$$

In order to solve the linear approximation (22) at each iteration, we look for $\Delta^k$ in the kernel of the derivative of the objective function of (22), i.e. satisfying

$$S'(\nu^k)^\top S'(\nu^k)\,\Delta^k + S'(\nu^k)^\top S(\nu^k) = 0.$$

Hence, to compute the direction $\Delta^k$, the matrix $S'(\nu^k)^\top S'(\nu^k)$ must be nonsingular. Thus, the Gauss-Newton method will be applicable provided that $S'(\hat\nu)^\top S'(\hat\nu)$ is invertible, where $\hat\nu := (\hat x_0,\hat p_0,\hat\beta)$. It follows easily that $S'(\hat\nu)^\top S'(\hat\nu)$ is nonsingular if and only if $S'(\hat\nu)$ is one-to-one.

Furthermore, since the right-hand side of system (21) is zero, it can be proved that the Gauss-Newton algorithm converges locally quadratically if the function $S$ has a Lipschitz-continuous derivative. The latter holds true here given the regularity hypotheses on the data functions. This convergence result is stated in the proposition below. See e.g. Fletcher [8] for a proof.
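The iteration described above can be sketched in a few lines. The sketch below is illustrative only (it is not the implementation used for the tests reported later): it uses a forward-difference approximation of $S'$, solves the $2\times 2$ normal equations in closed form, and runs on a hypothetical zero-residual example $S\colon\mathbb{R}^2\to\mathbb{R}^3$ whose root is $(0,0)$ and whose derivative there is injective.

```python
import math

def S(nu):
    # hypothetical test residual: S(0, 0) = (0, 0, 0), S'(0, 0) one-to-one
    a, b = nu
    return [math.exp(a) - 1.0, a + b, a * b]

def jacobian(f, nu, h=1e-7):
    # forward-difference approximation of f'(nu); J[j] is column j
    base = f(nu)
    J = []
    for j in range(len(nu)):
        pert = list(nu)
        pert[j] += h
        col = f(pert)
        J.append([(col[i] - base[i]) / h for i in range(len(base))])
    return base, J

def gauss_newton(f, nu0, iters=15):
    # minimize |f(nu)|^2: at each step solve J^T J d = -J^T r (2 unknowns,
    # so the normal equations are solved in closed form by Cramer's rule)
    nu = list(nu0)
    for _ in range(iters):
        r, J = jacobian(f, nu)
        n = len(r)
        a11 = sum(v * v for v in J[0])
        a12 = sum(J[0][i] * J[1][i] for i in range(n))
        a22 = sum(v * v for v in J[1])
        b1 = -sum(J[0][i] * r[i] for i in range(n))
        b2 = -sum(J[1][i] * r[i] for i in range(n))
        det = a11 * a22 - a12 * a12
        nu[0] += (b1 * a22 - b2 * a12) / det
        nu[1] += (a11 * b2 - a12 * b1) / det
    return nu
```

Starting e.g. from `(0.5, -0.3)`, the iterates approach the root `(0, 0)`; since the residual vanishes at the root, the fast local convergence discussed above is observed.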

###### Proposition 3.2

If $S'(\hat\nu)$ is one-to-one, then the shooting algorithm is locally quadratically convergent.

## 4 Second order sufficient condition

In this section we present a sufficient condition for optimality proved in [1], and we state in Section 5 afterwards that this condition guarantees the local quadratic convergence of the shooting algorithm proposed above.

Given $\bar w = (\bar x,\bar u,\bar v)$, consider the linearized state equation

$$\dot{\bar x}_t = F_{x,t}\,\bar x_t + F_{u,t}\,\bar u_t + F_{v,t}\,\bar v_t, \quad \text{a.e. on } [0,T], \tag{23}$$
$$\bar x(0) = \bar x_0, \tag{24}$$

where $F_{x,t}$ refers to the partial derivative of $F$ with respect to $x$ evaluated along the nominal trajectory, i.e. $F_{x,t} := D_x F(\hat x_t,\hat u_t,\hat v_t)$, and equivalent notations hold for the other involved derivatives. Take an element $\bar w = (\bar x,\bar u,\bar v)$ and define the second variation of the Lagrangian function

$$\Omega(\bar w) := \tfrac12\,D^2\mathcal{L}[\hat\lambda](\hat w)\,\bar w^{\,2}.$$

It can be proved that $\Omega$ can be written as

$$\Omega(\bar x,\bar u,\bar v) = \tfrac12\,D^2\ell(\hat x_0,\hat x_T)(\bar x_0,\bar x_T)^2 + \int_0^T \Big[\tfrac12\,\bar x^\top H_{xx}\bar x + \bar u^\top H_{ux}\bar x + \bar v^\top H_{vx}\bar x + \tfrac12\,\bar u^\top H_{uu}\bar u + \bar v^\top H_{vu}\bar u\Big]\,dt.$$

Note that this mapping does not contain a quadratic term in $\bar v$, since $H_{vv} \equiv 0$. Hence, one cannot state a sufficient condition in terms of the uniform positivity of $\Omega$ on the set of critical directions, as is done in the totally nonlinear case. Therefore, we use a change of variables introduced by Goh in [11] and transform $\Omega$ into a quadratic mapping that may result uniformly positive on an associated transformed set of critical directions.

Consider hence the linear differential system in (23) and the change of variables

$$\bar y_t := \int_0^t \bar v_s\,ds, \qquad \bar\xi_t := \bar x_t - F_{v,t}\,\bar y_t, \quad \text{for } t \in [0,T]. \tag{25}$$

This change of variables can be performed in any linear system of differential equations, and it is often called Goh's transformation. Observe that $\bar\xi$ defined in this way satisfies the linear equation

$$\dot{\bar\xi} = F_x\,\bar\xi + F_u\,\bar u + B\,\bar y, \qquad \bar\xi_0 = \bar x_0, \tag{26}$$

where $B := F_x F_v - \dot F_v$.
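For completeness, the dynamics of $\bar\xi$ follow by differentiating the second relation in (25) and substituting (23) (all derivatives of $F$ are evaluated along the nominal trajectory):

```latex
\begin{aligned}
\dot{\bar\xi} &= \dot{\bar x} - \dot F_v\,\bar y - F_v\,\bar v \\
              &= F_x\bar x + F_u\bar u + F_v\bar v - \dot F_v\,\bar y - F_v\,\bar v \\
              &= F_x\bar\xi + F_u\bar u + \bigl(F_x F_v - \dot F_v\bigr)\bar y,
\end{aligned}
```

which is (26) with $B = F_x F_v - \dot F_v$.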

### 4.1 Critical cones

We define now the sets of critical directions associated with $\hat w$. Even if we are working with control variables in $L^\infty$, and hence the control perturbations are naturally taken in $L^\infty$, the second order analysis involves quadratic mappings and it is useful to extend them continuously to $L^2$. Given $\bar w$ satisfying (23)-(24), consider the linearization of the endpoint constraints and cost function,

$$D\eta_j(\hat x_0,\hat x_T)(\bar x_0,\bar x_T) = 0, \quad \text{for } j = 1,\dots,d_\eta, \tag{27}$$
$$D\varphi_0(\hat x_0,\hat x_T)(\bar x_0,\bar x_T) \le 0. \tag{28}$$

Define the critical cone in $\mathcal{W}_2$ by

$$C_2 := \big\{\bar w \in \mathcal{W}_2 : \text{(23)-(24) and (27)-(28) hold}\big\}. \tag{29}$$

Since we aim to state an optimality condition in terms of the variables after Goh's transformation, we transform the equations defining $C_2$. Let $\bar w \in C_2$ be a critical direction. Define $(\bar\xi,\bar y)$ by transformation (25) and set $\bar h := \bar y_T$. Then the transformed version of (27)-(28) is

$$D\eta_j(\hat x_0,\hat x_T)(\bar\xi_0,\,\bar\xi_T + F_{v,T}\bar h) = 0, \quad \text{for } j = 1,\dots,d_\eta, \tag{30}$$
$$D\varphi_0(\hat x_0,\hat x_T)(\bar\xi_0,\,\bar\xi_T + F_{v,T}\bar h) \le 0. \tag{31}$$

Consequently, the transformed critical cone is given by

$$P_2 := \big\{(\bar\xi,\bar u,\bar y,\bar h) \in \mathcal{W}_2 \times \mathbb{R}^m : \text{(26) and (30)-(31) hold}\big\}. \tag{32}$$

### 4.2 Second variation

Next we state that if $\hat w$ is a weak minimum, then the transformation of $\Omega$ yields the quadratic mapping

$$\bar\Omega(\bar\xi,\bar u,\bar y,\bar h) := g(\bar\xi_0,\bar\xi_T,\bar h) + \int_0^T \Big(\tfrac12\,\bar\xi^\top H_{xx}\bar\xi + \bar u^\top H_{ux}\bar\xi + \bar y^\top M\bar\xi + \tfrac12\,\bar u^\top H_{uu}\bar u + \bar y^\top J\bar u + \tfrac12\,\bar y^\top R\bar y\Big)\,dt, \tag{33}$$

with

$$M := F_v^\top H_{xx} - \dot H_{vx} - H_{vx}F_x, \qquad J := F_v^\top H_{ux}^\top - H_{vx}F_u,$$
$$S := \tfrac12\big(H_{vx}F_v + (H_{vx}F_v)^\top\big), \qquad V := \tfrac12\big(H_{vx}F_v - (H_{vx}F_v)^\top\big),$$
$$R := F_v^\top H_{xx}F_v - \big(H_{vx}B + (H_{vx}B)^\top\big) - \dot S,$$
$$g(\zeta_0,\zeta_T,h) := \tfrac12\,\ell''(\zeta_0,\zeta_T + F_{v,T}h)^2 + h^\top\big(H_{vx,T}\,\zeta_T + \tfrac12\,S_T h\big).$$

Easy computations show that $V_{ij} = \tfrac12\,\hat p\,[f_i,f_j]_x$ for $i,j = 1,\dots,m$. Thus, in view of Lemma 3.1, one has $V \equiv 0$ if $\hat w$ is a weak minimum. Furthermore, we get the following result, which also uses [1, Theorem 4.4].

###### Theorem 4.1

If $\hat w$ is a smooth weak minimum, then

$$\Omega(\bar x,\bar u,\bar v) = \bar\Omega(\bar\xi,\bar u,\bar y,\bar y_T),$$

for all $(\bar x,\bar u,\bar v)$, with $(\bar\xi,\bar y)$ given by (25).

### 4.3 The sufficient condition

We state now a second order sufficient condition for strict weak optimality.

Define the quadratic order $\bar\gamma$ by

$$\bar\gamma(\bar\zeta_0,\bar u,\bar y,\bar h) := |\bar\zeta_0|^2 + |\bar h|^2 + \int_0^T \big(|\bar u_t|^2 + |\bar y_t|^2\big)\,dt,$$

for $(\bar\zeta_0,\bar u,\bar y,\bar h)$. It can also be considered as a function of $(\bar\zeta_0,\bar u,\bar v)$ by setting

$$\gamma(\bar\zeta_0,\bar u,\bar v) := \bar\gamma(\bar\zeta_0,\bar u,\bar y,\bar y_T), \tag{34}$$

with $\bar y$ being the primitive of $\bar v$ defined in (25).

###### Definition 4.1

[growth] We say that $\hat w$ satisfies the $\gamma$-growth condition in the weak sense if there exist $\varepsilon, \rho > 0$ such that

$$J(w) \ge J(\hat w) + \rho\,\gamma(x_0 - \hat x_0,\,u - \hat u,\,v - \hat v), \tag{35}$$

for every feasible trajectory $w = (x,u,v)$ with $\|w - \hat w\|_\infty < \varepsilon$.

###### Theorem 4.2 (Sufficient condition for weak optimality)

Let $\hat w$ be a smooth feasible trajectory such that Assumption 1 is satisfied. Then the following assertions hold.

• Assume that there exists $\rho > 0$ such that

$$\bar\Omega(\bar\xi,\bar u,\bar y,\bar h) \ge \rho\,\bar\gamma(\bar\xi_0,\bar u,\bar y,\bar h), \quad \text{on } P_2. \tag{36}$$

Then $\hat w$ is a weak minimum satisfying $\gamma$-growth in the weak sense.

• Conversely, if $\hat w$ is a weak solution satisfying $\gamma$-growth in the weak sense, then (36) holds for some $\rho > 0$.

## 5 Main result: Convergence of the shooting algorithm

The main result of this article is the theorem below that gives a condition guaranteeing the quadratic convergence of the shooting method near an optimal local solution.

###### Theorem 5.1

Suppose that $\hat w$ is a smooth weak minimum satisfying Assumptions 1 and 2, and such that (36) holds. Then the shooting algorithm is locally quadratically convergent.

###### Remark 5.1

The complete proof of this theorem can be found in [1]. The idea of the proof is to show that (36) yields the injectivity of $S'(\hat\nu)$, and then to use Proposition 3.2. In order to prove that (36) implies that $S'(\hat\nu)$ is one-to-one, the following elements are employed: the linearization of (OS), which gives an expression of the derivative $S'(\hat\nu)$; the Goh transform of this linearized system; and an associated linear-quadratic optimal control problem in the transformed variables, involving (26) and (33).

###### Remark 5.2 (Bang-singular solutions)

Finally we claim that the formulation of the shooting algorithm above and the proof of its local convergence (Theorem 5.1) can be done also for problems where the controls are subject to bounds of the type

$$0 \le u_t \le 1, \qquad 0 \le v_t \le 1, \quad \text{a.e. on } [0,T]. \tag{37}$$

More precisely, it holds for solutions for which each control component is a concatenation of bang and singular arcs, i.e. arcs saturating the corresponding inequality in (37), and arcs in the interior of the constraint. This extension follows from a transformation of the problem to one without bounds, and it is detailed in [2, Section 8] for the totally-affine case.

## 6 An example

Consider the following optimal control problem treated in Dmitruk and Shishov [7]:

$$J := -2x_{1,1}\,x_{2,1} + x_{3,1} \to \min,$$
$$\dot x_1 = x_2 + u, \qquad \dot x_2 = v, \qquad \dot x_3 = x_1^2 + x_2^2 + 10\,x_2 v + u^2,$$
$$x_{1,0} = 0, \qquad x_{2,0} = 0, \qquad x_{3,0} = 0. \tag{38}$$

Here, Assumption 1 holds since no final constraints are considered. The pre-Hamiltonian function associated with (38) is, omitting arguments,

$$H = p_1(x_2 + u) + p_2 v + p_3\big(x_1^2 + x_2^2 + 10\,x_2 v + u^2\big).$$

We can easily deduce that $p_3 \equiv 1$. The equations (10) and (12) for this problem give

$$H_u = p_1 + 2u, \qquad \ddot H_v = -2v + 2x_1, \tag{39}$$

and, therefore, Assumption 2 holds true. In agreement with Proposition 3.1, the controls can be eliminated from (39). This yields

$$u = -p_1/2, \qquad v = x_1.$$

We can then write the optimality system (OS) related to (38). The state and costate equations are

$$\dot x_1 = x_2 - p_1/2, \qquad \dot x_2 = x_1, \qquad \dot x_3 = x_1^2 + x_2^2 + 10\,x_1 x_2 + p_1^2/4,$$
$$\dot p_1 = -2x_1, \qquad \dot p_2 = -2x_2 - 10x_1 - p_1, \tag{40}$$

where we do not include the equation for $p_3$, since $p_3$ is constantly equal to 1. The boundary conditions are

$$x_{1,0} = 0, \qquad x_{2,0} = 0, \qquad x_{3,0} = 0,$$
$$p_{1,1} = -2x_{2,1}, \qquad p_{2,1} = -2x_{1,1},$$
$$H_{v,1} = p_{2,1} + 10x_{2,1} = 0, \qquad \dot H_{v,1} = -2x_{2,1} - p_{1,1} = 0. \tag{41}$$

Observe that the last line in (41) can be removed since it is implied by the first equation in the second line. Here the shooting function is given by

$$S \colon \mathbb{R}^2 \to \mathbb{R}^3, \qquad (p_{1,0},\,p_{2,0}) \mapsto \begin{pmatrix} p_{1,1} + 2x_{2,1} \\ p_{2,1} + 2x_{1,1} \\ p_{2,1} + 10x_{2,1} \end{pmatrix}. \tag{42}$$

In [7] it was checked that the second order sufficient condition (36) holds for the control $(\hat u,\hat v) \equiv (0,0)$. The solution associated with this control has $\hat x \equiv 0$ and $(\hat p_1,\hat p_2) \equiv (0,0)$. In view of Theorem 5.1, we know that the shooting algorithm converges quadratically for appropriate initial values of $(p_{1,0},p_{2,0})$.

We solved (38) numerically by applying the Gauss-Newton method to the equation $S(\nu) = 0$, for $S$ defined in (42). We used the implicit Euler scheme for the numerical integration of the differential equations. For arbitrary initial guesses of $\nu = (p_{1,0},p_{2,0})$, the algorithm converged to $\hat\nu = (0,0)$ on every occasion. The tests were done with Scilab.
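As an illustration of this computation (it is not the Scilab implementation used for the tests above, and classical Runge-Kutta integration replaces the implicit Euler scheme), the shooting function (42) and the Gauss-Newton iteration can be sketched as follows. The component $x_3$ is omitted from the integration since it does not enter (42).

```python
def rhs(z):
    # state-costate system (40); z = (x1, x2, p1, p2), x3 omitted
    x1, x2, p1, p2 = z
    return [x2 - p1 / 2.0, x1, -2.0 * x1, -2.0 * x2 - 10.0 * x1 - p1]

def shoot(nu, steps=400):
    # integrate (40) on [0, 1] by RK4 from x1 = x2 = 0, (p1, p2) = nu,
    # then evaluate the shooting function (42) at the final time
    z, h = [0.0, 0.0, nu[0], nu[1]], 1.0 / steps
    for _ in range(steps):
        k1 = rhs(z)
        k2 = rhs([z[i] + 0.5 * h * k1[i] for i in range(4)])
        k3 = rhs([z[i] + 0.5 * h * k2[i] for i in range(4)])
        k4 = rhs([z[i] + h * k3[i] for i in range(4)])
        z = [z[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(4)]
    x1, x2, p1, p2 = z
    return [p1 + 2 * x2, p2 + 2 * x1, p2 + 10 * x2]

def gauss_newton(f, nu0, iters=8, h=1e-6):
    # Gauss-Newton with a forward-difference Jacobian (2 unknowns, 3 residuals)
    nu = list(nu0)
    for _ in range(iters):
        r = f(nu)
        J = []
        for j in range(2):
            pert = list(nu)
            pert[j] += h
            col = f(pert)
            J.append([(col[i] - r[i]) / h for i in range(3)])
        a11 = sum(v * v for v in J[0])
        a12 = sum(J[0][i] * J[1][i] for i in range(3))
        a22 = sum(v * v for v in J[1])
        b1 = -sum(J[0][i] * r[i] for i in range(3))
        b2 = -sum(J[1][i] * r[i] for i in range(3))
        det = a11 * a22 - a12 * a12
        nu[0] += (b1 * a22 - b2 * a12) / det
        nu[1] += (a11 * b2 - a12 * b1) / det
    return nu
```

Since (40) is linear in $(x_1,x_2,p_1,p_2)$, the residual is linear in $\nu$ and the iteration reaches $\hat\nu = (0,0)$ essentially in one step, consistent with the behavior reported above.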

## 7 Conclusions

We investigated optimal control problems with systems that are affine in some components of the control variable and that have finitely many equality endpoint constraints. For a Mayer problem of this kind, we proposed a numerical indirect method for approximating a weak solution. For qualified solutions, we proved that the local convergence of the method is guaranteed by a second order sufficient condition for optimality proved previously by the author.

We presented an example, in which we showed how to eliminate the controls by using the optimality conditions, proposed a shooting formulation, and solved it numerically. The tests converged, as expected in view of the theoretical result.

## Acknowledgment

Part of this work was done under the supervision of Frédéric Bonnans during my Ph.D. studies. I thank him for his great guidance.

I also thank Xavier Dupuis for his careful reading, and the three anonymous reviewers for their useful remarks.

This work is supported by the European Union under the 7th Framework Programme FP7-PEOPLE-2010-ITN Grant agreement number 264735-SADCO.

### References

1. M.S. Aronna. Singular solutions in optimal control: second order conditions and a shooting algorithm. INRIA Research Report Nr. 7764, also arXiv:1210.7425, 2011.
2. M.S. Aronna, J.F. Bonnans, and P. Martinon. A shooting algorithm for problems with singular arcs. J. Optim. Theory Appl., published as 'Online First', 2013.
3. D.J. Bell and D.H. Jacobson. Singular Optimal Control Problems. Academic Press, 1975.
4. F. Bonnans, J. Laurent-Varin, P. Martinon, and E. Trélat. Numerical study of optimal trajectories with singular arcs for an Ariane 5 launcher. J. Guidance, Control, and Dynamics, 32(1):51–55, 2009.
5. A.E. Bryson, Jr. and Y.C. Ho. Applied optimal control. Hemisphere Publishing Corp. Washington, D. C., 1975. Optimization, estimation, and control, Revised printing.
6. D.I. Cho, P.L. Abad, and M. Parlar. Optimal production and maintenance decisions when a system experience age-dependent deterioration. Optimal Control Appl. Methods, 14(3):153–167, 1993.
7. A. V. Dmitruk and K. K. Shishov. Analysis of a Quadratic Functional with a Partly Singular Legendre Condition. Moscow University Computational Mathematics and Cybernetics, 34(1):16–25, 2010.
8. R. Fletcher. Practical methods of optimization. Vol. 1. John Wiley & Sons Ltd., Chichester, 1980. Unconstrained optimization, A Wiley-Interscience Publication.
9. B.S. Goh. Necessary conditions for singular extremals involving multiple control variables. J. SIAM Control, 4:716–731, 1966.
10. B.S. Goh. Necessary Conditions for the Singular Extremals in the Calculus of Variations. University of Canterbury, 1966.
11. B.S. Goh. The second variation for the singular Bolza problem. J. SIAM Control, 4(2):309–325, 1966.
12. B.S. Goh. Compact forms of the generalized Legendre-Clebsch conditions and the computation of singular control trajectories. In Proceedings of the American Control Conference, volume 5, pages 3410–3413, 1995.
13. B.S. Goh. Optimal singular rocket and aircraft trajectories. In Control and Decision Conference, 2008. CCDC 2008, pages 1531 –1536, 2008.
14. H.J. Kelley, R.E. Kopp, and H.G. Moyer. Singular extremals. In Topics in Optimization, pages 63–101. Academic Press, New York, 1967.
15. D. F. Lawden. Optimal trajectories for space navigation. Butterworths, London, 1963.
16. H. Maurer. Numerical solution of singular control problems using multiple shooting techniques. J. Optim. Theory Appl., 18(2):235–257, 1976.
17. H. Maurer and W. Gillessen. Application of multiple shooting to the numerical solution of optimal control problems with bounded state variables. Computing, 15(2):105–126, 1975.
18. H.J. Oberle. Numerische Behandlung singulärer Steuerungen mit der Mehrzielmethode am Beispiel der Klimatisierung von Sonnenhäusern. PhD thesis. Technische Universität München, 1977.
19. H.J. Oberle. On the numerical computation of minimum-fuel, Earth-Mars transfer. J. Optimization Theory Appl., 22(3):447–453, 1977.
20. H.J. Oberle. Numerical computation of singular control functions in trajectory optimization problems. J. Guidance Control Dynam., 13(1):153–159, 1990.
21. H.J. Oberle and K. Taubert. Existence and multiple solutions of the minimum-fuel orbit transfer problem. J. Optim. Theory Appl., 95(2):243–262, 1997.
22. L. Pontryagin, V. Boltyanski, R. Gamkrelidze, and E. Michtchenko. The Mathematical Theory of Optimal Processes. Wiley Interscience, New York, 1962.
23. H.M. Robbins. A generalized Legendre-Clebsch condition for the singular case of optimal control. IBM J. of Research and Development, 11:361–372, 1967.
24. E. Trélat. Optimal Control and Applications to Aerospace: Some Results and Challenges. J. Optim. Theory Appl., 154(3):713–758, 2012.