Analytical Initialization of a Continuation-Based Indirect Method for Optimal Control of Endo-Atmospheric Launch Vehicle Systems
In this paper, we propose a strategy to solve endo-atmospheric launch vehicle optimal control problems using indirect methods. More specifically, we combine shooting methods with an adequate continuation algorithm, taking advantage of the knowledge of an analytical solution of a simpler problem. This procedure consists of two main steps. We first simplify the physical dynamics to obtain an analytical guidance law, which is used as an initial guess for a shooting method. Then, a continuation procedure makes the problem converge to the complete dynamics, leading to the optimal solution of the original system. Numerical results are presented.
Riccardo Bonalli, Bruno Hérissé, Emmanuel Trélat
Keywords: Optimal control, Real-time control, Continuation methods, Guidance, navigation and control of vehicles.
Guidance of autonomous launch vehicles towards rendezvous points is a complex task often considered in missile applications, mainly for the interception of targets. It represents an optimal control problem, whose aim is to find a control law enabling the vehicle to reach a final point in 3D space under prescribed constraints as well as performance criteria. The rendezvous point may be a static point, for example an intermediate point in the middle of the flight mission, as well as a moving point if, for example, the task consists in intercepting a maneuvering target. This requires not only high numerical precision from the algorithms involved but also real-time processing of optimal trajectories.
The most common approach to this kind of task relies on analytical guidance laws. These correct errors coming from perturbations and from imperfect knowledge of the system. For interceptor missile applications, in , ,  guidance laws are designed that maximize the final velocity, and thus the lethality, of the interceptor. Varying velocity is also considered for the design of an energy-optimal guidance law in . However, the trajectories induced by guidance laws are not optimal, because of the approximations made in their derivation.
The optimality of trajectories can be ensured by exploiting direct methods. These techniques consist in discretizing each component of the optimal control problem (the state, the control, etc.) in order to reduce the whole mathematical representation to a nonlinear constrained optimization problem. Since they are quite robust, they are widely used , , . However, these methods are computationally demanding and can often be used only offline.
In order to efficiently manage real-time processing of optimal control sequences for launch vehicle systems, one may consider indirect methods. These use a mathematical study of the system (exploiting the Pontryagin Maximum Principle) to determine necessary conditions for optimality. Indirect methods converge much faster than direct methods, with better precision. Since the problem is equivalent to searching for the zeros of a function, the main difficulty remains their initialization. In ,  the initialization problem is bypassed using finite-difference algorithms and multiple-shooting methods, respectively. By contrast, in  the authors use second-order conditions and conjugate point theory for a multistage launch vehicle problem; the initialization of the numerical method is based on the particle swarm algorithm. However, most of these approaches remain computationally demanding and are not easily applicable in view of real-time processing.
In this paper, we propose to solve endo-atmospheric launch vehicle optimal control problems using indirect methods, addressing the initialization issue by combining an analytical guidance law with a continuation method. Continuation procedures have been shown to be reliable and robust for problems such as coplanar orbit transfer and atmospheric reentry , . This combination preserves both precision and fast numerical computation. The proposed approach consists of two main steps. We first simplify the physical dynamics to obtain an analytical guidance law, which is used as an initial guess for a shooting method. Then, a continuation procedure makes the problem converge to the complete dynamics, leading to the optimal solution of the original system.
The paper is structured as follows. Section 2 introduces the dynamics of a general endo-atmospheric launch vehicle system, the optimal control formulation and the related numerical approach. Section 3 is devoted to the construction of a simplified dynamics able to successfully initialize a shooting method and to make the final algorithm converge efficiently to the complete dynamics. In Section 4, the continuation method used to recover the original dynamics is presented, together with its indirect formulation and numerical tests. Finally, Section 5 contains conclusions and perspectives.
2 Dynamics, Optimal Control Problem and Numerical Method
2.1 Physical Model
Let be an inertial frame centered at the center of the planet , be the NED frame and be the velocity frame. The endo-atmospheric launch vehicle system is modeled as an axisymmetric thrust propelled rigid body of mass . The coordinates ( is the latitude, is the longitude, is the path angle and is the azimuth angle) are used to represent the position of the center of mass of the vehicle and its velocity .
Neglecting the wind velocity, the Coriolis and the centripetal force (this is legitimate because of the short length of the considered trajectories), the dynamics takes the following form
where is the modulus of the gravity, is an aerodynamic efficiency factor, is the angle of attack while is the angle of bank, stands for the normalized lift coefficient while is the mass flow and represents the modulus of the propulsion which depends directly on .
Based on a standard model of flight dynamics , , coefficients and are approximated by and where is the reference area, is the maximal value of the lift coefficient and is the drag coefficient for , which is considered constant; finally, is the air density for which an exponential model is considered, where is a reference altitude. Since is a predefined function of time, in this paper controls are only and .
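As a concrete illustration of this kind of aerodynamic model, the sketch below evaluates an exponential atmosphere and the resulting lift and drag moduli. All numerical values (sea-level density, reference altitude, reference area, lift/drag coefficients) and the quadratic induced-drag term are placeholder assumptions for illustration, not the paper's data:

```python
import math

def air_density(h, rho0=1.225, h_ref=7500.0):
    """Exponential atmosphere model: rho(h) = rho0 * exp(-h / h_ref).

    rho0 and h_ref are illustrative reference values (SI units)."""
    return rho0 * math.exp(-h / h_ref)

def aero_forces(h, v, u, S_ref=1.0, cl_max=2.0, cd0=0.05, k=0.1):
    """Lift and drag moduli for a normalized lift command u in [0, 1].

    Drag is modeled with a constant zero-lift coefficient cd0 plus a
    standard induced-drag term k * (cl_max * u)**2 (our assumption)."""
    q = 0.5 * air_density(h) * v**2          # dynamic pressure
    lift = q * S_ref * cl_max * u
    drag = q * S_ref * (cd0 + k * (cl_max * u)**2)
    return lift, drag
```

For instance, at sea level with v = 100 m/s and zero lift command, only the zero-lift drag term contributes.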
2.2 Optimal Control Problem and Maximum Principle
Consider now the Optimal Control Problem (OCP)
where , , is the mapping defined by dynamics (2.1), , are smooth submanifolds in and the transfer time is not fixed. Finally,
where  is constant. By definition,  takes its values in . However, we do not impose any bounds on , preferring to penalize it, using , within the cost.
The Pontryagin Maximum Principle (PMP)  states that, if is optimal with response defined on , and shortly denoted , then there exists and such that a.e. on
where is the Hamiltonian and satisfies the transversality conditions
Treating (OCP) by indirect methods consists in solving
with an appropriate value for .
2.3 Shooting and Continuation Method
It is known  that finding a solution of (6) can be reduced to solving  ( is called the shooting function) using Newton-type methods. This is the content of the well-known shooting method in optimal control , , . Its advantage is its extremely good numerical accuracy, which is relevant for aerospace applications . Since it relies on the Newton method, it inherits its very fast convergence properties. The main drawback of the shooting method is that it may be difficult to initialize.
To overcome this difficulty, one can rely on the robustness of the continuation method. It consists in deforming the problem into a simpler one that we are able to solve, and then in solving a series of shooting problems, step by step by parameter deformation, to recover the original problem . This approach increases the efficiency of the shooting method because it relaxes its initialization. The continuation parameter may be one (or several) physical parameter(s) of the problem, or an artificial one. The path consists of a convex combination of the simpler problem and of the original one, with .
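The two ingredients just described, a shooting method warm-started by an analytical solution of a simple problem, followed by a linear continuation towards the full problem, can be sketched on a toy scalar two-point problem. Everything below is an illustrative assumption, not the paper's launch vehicle model: the deformed dynamics x' = (1 − λ)x + λ sin x, the target x(1) = 1, the analytical guess x(0) = 1/e for the simple case λ = 0, and the continuation step of 0.1:

```python
import math

def rk4(f, x0, t1, n=200):
    """Integrate the scalar ODE x' = f(x) from t = 0 to t1 with classical RK4."""
    x, h = x0, t1 / n
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def shooting_fn(x0, lam, target=1.0):
    """Mismatch at t = 1 for the deformed dynamics x' = (1-lam)*x + lam*sin(x)."""
    f = lambda x: (1.0 - lam) * x + lam * math.sin(x)
    return rk4(f, x0, 1.0) - target

def newton_scalar(g, z, tol=1e-10):
    """Scalar Newton iteration with a finite-difference derivative."""
    for _ in range(50):
        gz = g(z)
        if abs(gz) < tol:
            return z
        dz = 1e-7
        z -= gz * dz / (g(z + dz) - gz)
    return z

# Step 1: analytical guess from the simple problem (lam = 0): x(1) = x0 * e.
x0 = 1.0 / math.e
# Step 2: continuation on lam, each shooting problem warm-started by the last.
for lam in [0.1 * k for k in range(11)]:
    x0 = newton_scalar(lambda z: shooting_fn(z, lam), x0)
```

At λ = 0 the shooting function is available in closed form, so the Newton iteration is trivially initialized; each continuation step then warm-starts the next one, and the final iterate solves the fully nonlinear problem λ = 1.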
The main algorithm then consists in first finding a solution of a simplified version of (OCP) and, from this, solving the original formulation (OCP) by continuation.
3 New Guidance Law as Good Estimate for Continuation Method
Continuation methods allow us to solve iteratively (OCP) once the solution of some (usually) simpler optimal control problem is known. Here, we introduce and treat an efficient simpler problem coming from a modification of (2.1).
This modified version of (OCP) is designed with the hope that, on the one hand, the shooting method can be easily initialized and, on the other hand, the continuation is feasible. It is at this stage that the intuition of the aerospace engineer is of primary importance, in order to design simplified problems and homotopy parameters resulting in a meaningful continuation procedure.
3.1 Simplified Problem and Analytical Smooth Controls
If one ignores the contributions of the curvature of the Earth, of gravity and of propulsion within (2.1), introducing the curvilinear abscissa  and a new variable , the following simplification of (OCP) is obtained (the equation of  can be neglected because it no longer influences the dynamics)
where , and with initial, final conditions , .
The next step consists in finding an analytical solution of (7). This is achieved by exploiting the PMP formulation, as shown in the following.
The associated Hamiltonian function is
It can be remarked that, since the transfer time is not fixed and formulation (7) is autonomous, (8) vanishes for all times . Since no constraints are considered on , , applying the weaker version of the PMP  yields explicitly
Simple calculations show that
A further analysis streamlines this formulation, letting one consider only normal trajectories.
Assume that a.e. in . Then, no abnormal trajectories arise in (7), i.e. .
If , then from (9) it follows that  in . Using ,  and  yields a homogeneous linear system of equations in  whose associated matrix has determinant , which is nonzero almost everywhere in  by assumption. Then  on , which contradicts the PMP.
The assumption  a.e. in  is not restrictive because, if the trajectory attains the value , a change of coordinates such as  can be performed on the dynamics during the numerical integration to avoid this singularity. Moreover, other changes of coordinates that optimize the computational speed can be taken into account . Thus, by Proposition 1 and the PMP, we can substitute  within the previous simplified formulation.
Proceeding formally, under a smoothness assumption on the optimal controls of this simplified problem, analyzing the second derivatives of  and  allows one to derive some relations. Indeed, using (9) and (10), it can easily be shown that
where  is a polynomial of degree greater than one in ,  and their first derivatives. Since , the previous expression can be approximated. Iterating the same procedure on , one is led to solve ( again denotes a polynomial of degree greater than one in ,  and their first derivatives)
It is clear that, if additional assumptions allow one to neglect the contribution of , , the equations of system (11) can be solved independently under the assumption that the parameters ,  and  are almost constant along the trajectory; this approximation is justified because scenarios with limited altitude are considered to solve the simplified problem (7). This is not limiting because, once the shooting method on the simplified problem has converged, a continuation procedure on the final point can be started, changing the initial scenario.
3.2 Local Controllability of the Simplified System
An evident sufficient condition that allows one to neglect the contribution of ,  within system (11) is to deal with controls that are small (in the sense of the  norm) and have small derivatives. This is achieved by studying the local controllability around the control  in
where is the first distributional derivative of . The set of admissible controls on is denoted by where denotes the initial point of the state variable .
where the ’s are (at least)  vector fields. The following standard result holds (its proof will be reported in )
All these results can be applied to the dynamics of (7), giving conditions under which (11) can easily be solved. The idea is to compute the trajectory of (7) given by the control  and to consider scenarios with final point close to . If the linearized system along  is controllable from  in time , then, thanks to Theorem 1, the dynamics of (7) is locally controllable at  from  in time  with small controls in . In other words, if the considered scenarios have a final point close to , we can neglect the contribution of ,  within system (11) and solve it analytically. It must be noted that considering scenarios with final point close to  in order to solve (7) is not restrictive because, as said before, a continuation procedure on the final point can be started, changing the initial scenario considerably.
The linearized system of (7) has the form
where is the matrix
This system is controllable in any . To show this, we consider the well-known fact  that, for a system with no constraint , where  and  are of class , if, having defined the sequence , for , there exists  such that
then the system is controllable in time . Denoting , , it is easy to show that
which is different from zero. Then, the claim follows.
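In the time-invariant case (constant A and B, so that the derivative term in the sequence vanishes and the sequence reduces to B, AB, A²B, …), the criterion above becomes the classical Kalman rank condition, which can be checked numerically as follows. The double-integrator pair used here is an illustrative example, not the paper's linearized system:

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1} B].

    Full rank n implies the time-invariant pair (A, B) is controllable;
    this is the constant-matrix specialization of the B_i sequence
    criterion, since Bdot = 0 gives B_{i+1} = A B_i."""
    n = A.shape[0]
    blocks, Bi = [], B
    for _ in range(n):
        blocks.append(Bi)
        Bi = A @ Bi
    return np.linalg.matrix_rank(np.hstack(blocks))

# Toy double integrator: x1' = x2, x2' = u -- controllable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
```

Here `kalman_rank(A, B)` returns 2, the full state dimension, so the pair is controllable; dropping the coupling (e.g. taking A = 0) makes the rank drop.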
This controllability result validates the previous approach. Initial conditions for the controls and their first derivatives must be sought in order to solve (11) (where we then impose ). Two conditions are given by the last two equations of (7). The remaining ones can be investigated by first carrying out an analysis of the Line Of Sight (LOS).
3.3 LOS Analysis
The line of sight is the vector  joining the current position  to the desired final point . It is useful to define its modulus  and the associated unit vector . We denote by  the orthonormal frame where ,  is the unit vector in the plane perpendicular to  and oriented by , and . Frames ,  are, by definition , functions of , locally according to
where ,  are Euler angles. We denote by ,  the associated transition matrices, from frame  to frame  and from frame  to frame  respectively. The coordinates of a vector  within a frame  are denoted by . The following analysis seeks approximate relations between the derivatives of ,  and the angles ,  under appropriate assumptions.
First-Order Approximation 1 (FOA)
Along the considered trajectories, only weak variations from the LOS direction are allowed, i.e. , . Moreover, the contribution of the rotation of frame around frame is neglected, i.e. .
It must be remarked that this assumption is not a strong one, because the initial scenario lives in a neighborhood of the null control and the considered trajectories are short enough to make this choice effective. It allows one to manipulate the transition maps and easily obtain the desired results.
Under (FOA), the transition matrix from frame to frame is
Moreover, we have
Matrix  is easily obtained by approximating  within the change-of-variable rotation . The second relation follows by multiplying matrix  by .
The next step consists in making explicit the coordinates of the derivative of the unit vector  within frame . This requires two lemmas.
It holds . Moreover, under (FOA), one has
Differentiating the definition of one has
The second expression follows by substituting within the previous equation.
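For a fixed final point, the coordinate-free identities underlying this kind of computation (with d = p_f − p and ḋ = −v, one gets ṙ = −⟨u, v⟩ and u̇ = −(v − ⟨u, v⟩u)/r) can be checked with a short numerical sketch. The fixed-target assumption and the notation are ours, chosen for illustration:

```python
import math

def los(p, p_f):
    """Modulus r and unit vector u of the line of sight from the
    current position p to the final point p_f (3-vectors)."""
    d = [b - a for a, b in zip(p, p_f)]
    r = math.sqrt(sum(c * c for c in d))
    return r, [c / r for c in d]

def los_rates(p, v, p_f):
    """Time derivatives of r and u when p_f is fixed:
    rdot = -<u, v>  and  udot = -(v - <u, v> u) / r."""
    r, u = los(p, p_f)
    uv = sum(ui * vi for ui, vi in zip(u, v))
    rdot = -uv
    udot = [-(vi - uv * ui) / r for ui, vi in zip(u, v)]
    return rdot, udot
```

For instance, flying straight at the target gives u̇ = 0 and ṙ = −|v|, while flying perpendicular to the LOS gives ṙ = 0 and a nonzero rotation of u.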
Under (FOA), we have
Some calculations lead to
Gathering all the previous results, it is straightforward to show the following final result
Relations (13) turn out to be useful for obtaining initial conditions for the analytical controls, as shown in what follows.
3.4 New Approximated Analytical Guidance Law
The previous results can be exploited to obtain an approximate solution of (11) and, consequently, new approximate analytical optimal controls ,  for (7). We impose that ,  and  be constant along the trajectory (thanks to the considerations of Section 3.1). We start by considering control . From (11), we have
We set . Assumption (FOA) leads to . Plugging (14) into equation and integrating, one obtains
Then, since under the usual assumption , it follows that . Integrating this last equation, using (15) and noticing that , one has
where and are gain parameters defined as
Since , the following guidance law for control is deduced
By the same argument, exploiting again (13), the following guidance law for control  can easily be derived