
# High order variational integrators in the optimal control of mechanical systems

Cédric M. Campos IMUVA, Universidad de Valladolid, 47011 Valladolid, Spain; Instituto de Ciencias Matemáticas, 28049 Madrid, Spain Sina Ober-Blöbaum Department of Mathematics, University of Paderborn, 33098 Paderborn, Germany  and  Emmanuel Trélat Sorbonne Universités, UPMC Univ. Paris 06, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, Institut Universitaire de France, F-75005, Paris, France
###### Abstract.

In recent years, much effort in designing numerical methods for the simulation and optimization of mechanical systems has been put into schemes which are structure preserving. One particular class are variational integrators, which are momentum preserving and symplectic. In this article, we develop two high order variational integrators which distinguish themselves in the dimension of the underlying space of approximation, and we investigate their application to finite-dimensional optimal control problems posed with mechanical systems. The convergence of state and control variables of the approximated problem is shown. Furthermore, by analyzing the adjoint systems of the optimal control problem and its discretized counterpart, we prove that, for these particular integrators, dualization and discretization commute.

###### Key words and phrases:
optimal control, mechanical systems, geometric integration, variational integrator, high order, Runge-Kutta, direct methods, commutation property
###### 1991 Mathematics Subject Classification:
Primary 65P10, Secondary 65L06, 65K10, 49Mxx

## 1. Introduction

In practice, solving an optimal control problem requires the a priori choice of a numerical method. Many approaches exist, either based on a direct discretization of the optimal control problem, resulting in a nonlinear programming problem, or based on the application of the Pontryagin Maximum Principle, reducing it to a boundary value problem. The first class of approaches are called direct, and the second ones, based on the preliminary use of the Pontryagin maximum principle, are called indirect. It can be noted that indirect methods, although very precise, suffer from an important drawback: Unless one already has a good knowledge of the optimal solution, they are very difficult to initialize since they are extremely sensitive to the initial guess. Although many solutions exist in order to carefully initialize a shooting method (see [50, 51]), in most engineering applications direct approaches are preferred due to their simplicity and robustness. Roughly speaking, direct methods consist of

1. discretizing first the cost and the differential system, in order to reduce the optimal control problem to a usual nonlinear minimization problem with constraints, also called nonlinear programming problem (NLP), with dimension inversely proportional to the smoothness of the discretization;

2. and then dualizing, by applying for instance a Lagrange-Newton method to the NLP, deriving the Karush-Kuhn-Tucker equations (KKT), also called discrete adjoint system, and applying a Newton method to solve the resulting optimality system.

Many variants exist (see e.g. [3]). In contrast, indirect methods consist of

1. first dualizing the optimal control problem to derive the adjoint system, by applying the Pontryagin Maximum Principle (PMP) (or, equivalently, the Lagrange multipliers necessary condition for optimality in infinite dimension),

2. and then discretizing, by applying a shooting method (that is, a Newton method composed with a numerical integration method).

In short, direct methods consist of 1) discretize, 2) dualize, and indirect methods consist of the converse: 1) dualize, 2) discretize. It is natural to wonder whether this diagram is commutative or not, under usual approximation assumptions.

Since the pioneering works of [17, 42], it is by now well known that, despite the usual assumptions ensuring consistency and stability, direct methods may diverge. In other words, discretization and dualization do not commute in general. This is due to a complex interplay with the mesh and to the appearance of spurious high frequencies ultimately causing the divergence (see [17] for very simple finite-dimensional linear-quadratic problems and [53] for infinite-dimensional wave equations).

Several remedies and solutions exist, from which [4, 16, 17, 43] are a representative sample. For instance, the results of [17] assert the convergence of optimal control problems under specific smoothness and coercivity assumptions provided that the underlying discretization method be based on a Runge-Kutta method. However, the convergence order of the optimal control solution, which depends on the convergence order of the state and the resulting adjoint scheme, is reduced compared to the order of the Runge-Kutta method applied to the state system. Indeed, the discrete state and adjoint system constitutes a symplectic partitioned Runge-Kutta method for which order conditions on the Runge-Kutta coefficients are derived to preserve the convergence rates. Whereas in [17] the symplectic partitioned Runge-Kutta scheme for state and adjoint is explicitly derived, in a recent overview article [44] a proof is given based on a relation between quadratic invariants (that are also naturally preserved by symplectic partitioned Runge-Kutta integrators) and the fulfillment of the KKT equations. The preservation of convergence rates is referred to as the Covector Mapping Principle (CMP) (see e.g. [16]). More precisely, the CMP is satisfied if there exists an order-preserving map between the adjoint variables corresponding to the dualized discrete problem (KKT) and the discretized dual problem (discretized PMP). For the class of Legendre pseudospectral methods the CMP is proven if additional closure conditions are satisfied (see [16, 43]), whereas for Runge-Kutta methods the CMP holds if the order conditions on the Runge-Kutta coefficients derived in [17] are satisfied. For a detailed discussion on the commutation properties we refer to [42].

While for general dynamical systems many studies of discretizations of optimal control problems are based on Runge-Kutta methods (see e.g. [12, 13, 18, 21, 52]), particularly for mechanical systems, much effort in designing numerical methods for the simulation and optimization of such systems has been put into schemes which are structure preserving in the sense that important qualitative features of the original dynamics are preserved in its discretization (for an overview on structure preserving integration methods see e.g. [19]). One special class of structure preserving integrators is the class of variational integrators, introduced in [36, 47], which are symplectic and momentum-preserving and have an excellent long-time energy behavior. Variational integrators are based on a discrete variational formulation of the underlying system, e.g. based on a discrete version of Hamilton’s principle or of the Lagrange-d’Alembert principle for conservative [29, 30] or dissipative [24] mechanical systems, respectively. They have been further extended to different kinds of systems and applications, e.g. towards constrained [11, 25, 31], nonsmooth [14], multirate and multiscale [32, 46, 48], Lagrangian PDE systems [28, 35] and electric circuits [41]. In the cited works, typically quadrature rules of first or second order are used in order to approximate the action functional of the system. To design high order variational integrators, higher order quadrature rules based on polynomial collocation can be employed. Such so-called Galerkin variational integrators have been introduced in [36] and further studied in [6, 20, 27, 38, 40].

For numerically solving optimal control problems by means of a direct method, the use of variational integrators for the discretization of the problem has been proposed in [8, 33, 39]. This approach, denoted by DMOC (Discrete Mechanics and Optimal Control), yields a finite-difference type discretization of the dynamical constraints of the problem which by construction preserves important structural properties of the system, like the evolution of momentum maps associated to symmetries of the Lagrangian or the energy behavior. For one class of Galerkin variational integrators, which is equivalent to the class of symplectic partitioned Runge-Kutta methods, the adjoint system and its convergence rates are analyzed in [39]. It is shown that, in contrast to a discretization based on standard Runge-Kutta methods in [17], the convergence order of the discrete adjoint system is the same as for the state system due to the symplecticity of the discretization scheme. In particular, one obtains the same symplectic-momentum scheme for both the state and the adjoint system, which means that discretization and dualization commute for this class of symplectic schemes and the CMP is satisfied. For general classes of variational integrators, the commutation property is still an open question. The contribution of this work is twofold:

1. We derive two different kinds of high order variational integrators based on different dimensions of the underlying polynomial approximation (Section 3). While the first, well-known integrator is equivalent to a symplectic partitioned Runge-Kutta method, the second integrator, denoted as symplectic Galerkin integrator, yields a “new” method which, in general, cannot be written as a standard symplectic Runge-Kutta scheme.

2. For the application of high order variational integrators to finite-dimensional optimal control problems posed with mechanical systems, we show the convergence of state and control variables and prove the commutation of discretization and dualization (Sections 4 and 5).

The paper is structured as follows: In Section 2 the optimal control problem for a mechanical system is introduced. Its discrete version is formulated in Section 3 based on the derivation of two different kinds of high order variational integrators. The first main result is stated in Section 4: Under specific assumptions on the problem setting we prove the convergence of the primal variables for an appropriate choice of the discrete controls (Theorem 4.1). Along the lines of [18], we demonstrate the influence of the control discretization on the convergence behavior by means of several examples. In Section 5 the discrete adjoint system for the symplectic Galerkin method is derived. It turns out that the reformulation of the variational scheme in Section 3 simplifies the analysis. Whereas commutation of discretization and dualization for symplectic partitioned Runge-Kutta methods has already been shown in [39], we prove the same commutation property for the symplectic Galerkin discretization (Theorem 5.2), which is the second main result of this work. In contrast to the discretization with Legendre pseudospectral methods or classical Runge-Kutta methods, no additional closure conditions (see [16]) or conditions on the Runge-Kutta coefficients (see [17]) are required to satisfy the CMP, respectively. Furthermore, using the high order variational integrators presented here, not only the order but also the discretization scheme itself is preserved, i.e. one yields the same schemes for the state and the adjoint system. We conclude with a summary of the results and an outlook for future work in Section 6.

## 2. Optimal control for mechanical systems

### 2.1. Lagrangian dynamics

One of the main subjects of Geometric Mechanics is the study of dynamical systems governed by a Lagrangian. Typically, one considers a mechanical system with configuration manifold $Q$ together with a Lagrangian function $L\colon TQ\to\mathbb{R}$, where the associated state space $TQ$ describes the position and velocity of a particle moving in the system. Usually, the Lagrangian takes the form of kinetic minus potential energy, $L(q,\dot q)=\frac12\dot q^{T}M\dot q-V(q)$, for some (positive definite) mass matrix $M$.

A consequence of the principle of least action, also known as Hamilton’s principle, establishes that the natural motions of the system are characterized by stationary solutions of the action, thus, motions satisfying

 (1) $\delta\int_0^T L(q(t),\dot q(t))\,dt=0$

for zero initial and final variations $\delta q(0)=\delta q(T)=0$. The resulting equations of motion are the Euler-Lagrange equations (refer to [1]),

 (2) $\frac{d}{dt}\frac{\partial L}{\partial\dot q}-\frac{\partial L}{\partial q}=0.$

When the Lagrangian is regular, that is, when the velocity Hessian matrix $\partial^2L/\partial\dot q\,\partial\dot q$ is non-degenerate, the Lagrangian induces a well defined map, the Lagrangian flow $F_L^t\colon TQ\to TQ$, by $F_L^t(q_0,\dot q_0):=(q(t),\dot q(t))$, where $q$ is the unique solution of the Euler-Lagrange equation (2) with initial condition $(q_0,\dot q_0)$. By means of the Legendre transform $\mathrm{leg}\colon TQ\to T^*Q$, where $T^*Q$ is the phase space of positions and momenta $(q,p)$, one may transform the Lagrangian flow into the Hamiltonian flow $F_H^t:=\mathrm{leg}\circ F_L^t\circ\mathrm{leg}^{-1}$.

Moreover, different preservation laws are present in these systems. For instance, the Hamiltonian flow preserves the natural symplectic structure of $T^*Q$ and the total energy of the system, typically $H(q,p)=\frac12 p^{T}M^{-1}p+V(q)$ (here the first term still denotes the kinetic energy, but depending on $p$ rather than on $\dot q$). Also, if the Lagrangian possesses Lie group symmetries, then Noether’s theorem asserts that the associated momentum maps are conserved, like for instance the linear momentum and/or the angular momentum.

If external (non conservative) forces are present in the system, Hamilton’s principle (1) is replaced by the Lagrange-d’Alembert principle seeking for curves that satisfy the relation

 (3) $\delta\int_0^T L(q,\dot q)\,dt+\int_0^T F(q,\dot q)\cdot\delta q\,dt=0$

for zero boundary variations $\delta q(0)=\delta q(T)=0$, where the second term is denoted as virtual work. This principle leads to the forced Euler-Lagrange equations

 (4) $\frac{d}{dt}\frac{\partial L}{\partial\dot q}-\frac{\partial L}{\partial q}=F(q,\dot q).$

The forced version of Noether’s theorem (see e.g. [36]) states that if the force acts orthogonal to the symmetry action, then momentum maps are still preserved by the flow. Otherwise, the change in momentum maps and energy is determined by the amount of forcing in the system.

### 2.2. Optimal control problem

Since we are interested in optimally controlling Lagrangian systems, we assume that the mechanical system may be driven by means of some time dependent control parameter $u\colon[0,T]\to U$, with $U\subseteq\mathbb{R}^m$ being the control set. Typically, the control appears as an extra variable in the external force, such that in the following we consider forces of the form $F(q,\dot q,u)$ and we replace the right-hand side of (4) by the control dependent force term $F(q,\dot q,u)$.

An optimal control problem for a mechanical system reads (also denoted as Lagrangian optimal control problem in [39])

###### Problem 2.1 (Lagrangian optimal control problem (LOCP)).
 (5a) $\min_{q,\dot q,u}\; J(q,\dot q,u)=\int_0^T C(q(t),\dot q(t),u(t))\,dt+\Phi(q(T),\dot q(T))$
 subject to
 (5b) $\delta\int_0^T L(q(t),\dot q(t))\,dt+\int_0^T F(q(t),\dot q(t),u(t))\cdot\delta q(t)\,dt=0,$
 (5c) $(q(0),\dot q(0))=(q_0,\dot q_0),$

with minimization over the curves $q$, the velocities $\dot q$ and the controls $u$. The interval length $T$ may either be fixed, or appear as a degree of freedom in the optimization problem. Since any optimal control problem with free final time can easily be transformed into a problem with fixed final time (see e.g. [15]), we assume the final time $T$ to be fixed from now on. The control set $U$ is assumed to be closed and convex, and the density cost function $C$ and the final cost function $\Phi$ are continuously differentiable and, moreover, bounded from below.

Henceforth we shall assume that the Lagrangian is regular, i.e. there is a (local) one-to-one correspondence between the velocity $\dot q$ and the momentum $p$ via the Legendre transform and its inverse

 $p=\frac{\partial L}{\partial\dot q}(q,\dot q)\quad\text{and}\quad\dot q=\Big(\frac{\partial L}{\partial\dot q}\Big)^{-1}(q,p).$

Thus, the forced Euler-Lagrange equations (4) can be transformed into the partitioned system

 (6) $\dot q(t)=f(q(t),p(t)),\qquad\dot p(t)=g(q(t),p(t),u(t))$

with

 (7) $f(q,p)=\Big(\frac{\partial L}{\partial\dot q}\Big)^{-1}(q,p)\quad\text{and}\quad g(q,p,u)=\frac{\partial L}{\partial q}(q,f(q,p))+F(q,f(q,p),u).$

With some abuse of notation, we denote the force and the cost functions, now defined in terms of $(q,p)$ rather than $(q,\dot q)$, again by $F$, $C$ and $\Phi$, such that Problem 2.1 can be formulated as an optimal control problem for the partitioned system (6).
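
For a concrete sense of the transformation (6)-(7), consider a Lagrangian of kinetic-minus-potential form. The following sketch (the choices of $M$, $V$ and the force $F$ are our own illustrative assumptions, not taken from the paper) builds the partitioned right-hand sides $f$ and $g$:

```python
import numpy as np

# A hypothetical planar system: constant mass matrix M and potential V.
M = np.diag([2.0, 1.0])
Minv = np.linalg.inv(M)

def grad_V(q):
    """Gradient of the potential V(q) = |q|^2 / 2 (an illustrative choice)."""
    return q

def F(q, qdot, u):
    """Control-dependent external force: here the control enters directly."""
    return u

def f(q, p):
    """qdot = f(q, p): inverse Legendre transform; for this L, qdot = M^{-1} p."""
    return Minv @ p

def g(q, p, u):
    """pdot = g(q, p, u) = dL/dq + F = -grad V(q) + F(q, M^{-1} p, u)."""
    return -grad_V(q) + F(q, f(q, p), u)
```

Since $L$ is quadratic in $\dot q$, the Legendre transform is globally invertible and the regularity assumption above holds automatically.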

###### Problem 2.2 (Optimal control problem (OCP)).
 (8a) $\min_{q,p,u}\; J(q,p,u)=\int_0^T C(q(t),p(t),u(t))\,dt+\Phi(q(T),p(T))$
 subject to
 (8b) $\dot q(t)=f(q(t),p(t)),\qquad q(0)=q_0,$
 (8c) $\dot p(t)=g(q(t),p(t),u(t)),\qquad p(0)=p_0,$

with minimization over $q$, $p$ and $u$; the functions $f$ and $g$ are assumed to be Lipschitz continuous.

The first order necessary optimality conditions can be derived by means of the Hamiltonian for the optimal control problem given by

 (9) $H(q,p,u,\lambda,\psi,\rho_0)=\rho_0\,C(q,p,u)+\lambda\cdot f(q,p)+\psi\cdot g(q,p,u)$

with multiplier $\rho_0\in\mathbb{R}$, and where $\lambda$ and $\psi$ are covectors in $\mathbb{R}^n$.

###### Theorem 2.3 (Minimum Principle, e.g.[15]).

Let $(q^*,p^*,u^*)$ be an optimal solution to Problem 2.2. Then there exist absolutely continuous functions $\lambda$ and $\psi$ on $[0,T]$ and a constant $\rho_0\geq 0$, satisfying $(\lambda(t),\psi(t),\rho_0)\neq(0,0,0)$ for all $t\in[0,T]$, such that

 (10a) $H(q^*(t),p^*(t),u^*(t),\lambda(t),\psi(t),\rho_0)=\min_{u\in U}H(q^*(t),p^*(t),u,\lambda(t),\psi(t),\rho_0)$ for $t\in[0,T]$, and $(\rho_0,\lambda,\psi)$ solves the following terminal value problem:
 (10b) $\dot\lambda=-\nabla_qH(q^*,p^*,u^*,\lambda,\psi,\rho_0),\qquad\lambda(T)=\rho_0\nabla_q\Phi(q^*(T),p^*(T)),$
 (10c) $\dot\psi=-\nabla_pH(q^*,p^*,u^*,\lambda,\psi,\rho_0),\qquad\psi(T)=\rho_0\nabla_p\Phi(q^*(T),p^*(T)).$

The vectors $\lambda$ and $\psi$ are the costate or adjoint variables of the Hamiltonian equations of optimal control. The scalar $\rho_0$ is called the abnormal multiplier. In the abnormal case it holds $\rho_0=0$, and otherwise the multiplier can be normalized to $\rho_0=1$. Since no final constraint on the state is present in the optimal control problem, the above principle holds true with $\rho_0=1$ (as proved for instance in [50]).

###### Remark 2.4.

If the Hamiltonian is affine w.r.t. the control, then weaker topologies can be chosen for the controls and for the state and costate variables, and the PMP would still be valid for these classes. Besides, optimal control problems where the optimal control is bounded but not continuous are very rare. For instance, if one is able to express the optimal control as a function of the state and adjoint variables, as under the assumptions of Theorem 5.2, then the optimal control is clearly continuous.

## 3. Discretization

Since we are interested in solving optimal control problems by some kind of direct method, a discrete approximation of Problem 2.2 is required. To this end, we first introduce two different variational integrators that we employ for the approximation of the control system given in (8b)-(8c). Based on these discrete schemes, we derive the discrete approximations of the optimal control problem that can be solved by standard numerical optimization methods. The controls play no role in the derivations of the variational integrators; therefore we will temporarily omit the dependence of the external force $F$ on $u$, which lightens the notation. The discrete schemes including the approximation of the controls are given in Section 3.3.

### 3.1. Discrete Mechanics and Variational Integrators

Discrete Mechanics is, roughly speaking, a discretization of Geometric Mechanics theory. As a result, one obtains a set of discrete equations corresponding to the Euler-Lagrange equation (4) above but, instead of a direct discretization of the ODE, the latter are derived from a discretization of the base objects of the theory: the state space $TQ$, the Lagrangian $L$, etc. In fact, one seeks a sequence $q_0,q_1,\dots,q_N$ that approximates the actual trajectory of the system ($q_k\approx q(kh)$), for a constant time-step $h>0$.

A variational integrator is an iterative rule that outputs this sequence, and it is derived in an analogous manner to the continuous framework. Given a discrete Lagrangian $L_d\colon Q\times Q\to\mathbb{R}$ and discrete forces $F_d^\pm$, which are in principle thought to approximate the continuous Lagrangian action and the virtual work, respectively, over a short time $h$,

 (11a) $L_d(q_k,q_{k+1})\approx\int_{t_k}^{t_{k+1}}L(q(t),\dot q(t))\,dt,$
 (11b) $F_d^-(q_k,q_{k+1})\cdot\delta q_k+F_d^+(q_k,q_{k+1})\cdot\delta q_{k+1}\approx\int_{t_k}^{t_{k+1}}F(q(t),\dot q(t))\cdot\delta q(t)\,dt,$

one applies a variational principle to derive the well-known forced discrete Euler-Lagrange (DEL) equation,

 (12) $D_1L_d(q_k,q_{k+1})+D_2L_d(q_{k-1},q_k)+F_d^-(q_k,q_{k+1})+F_d^+(q_{k-1},q_k)=0,$

for $k=1,\dots,N-1$, where $D_i$ stands for the partial derivative with respect to the $i$-th component. The equation defines an integration rule of the type $(q_{k-1},q_k)\mapsto q_{k+1}$; however, if we define the pre- and post-momenta (also denoted as discrete Legendre transforms)

 (13a) $p_k^-:=-D_1L_d(q_k,q_{k+1})-F_d^-(q_k,q_{k+1}),\quad k=0,\dots,N-1,$ and
 (13b) $p_k^+:=D_2L_d(q_{k-1},q_k)+F_d^+(q_{k-1},q_k),\quad k=1,\dots,N,$

the discrete Euler-Lagrange equation (12) reads as the momentum matching $p_k^-=p_k^+=:p_k$ and defines an integration rule of the type $(q_k,p_k)\mapsto(q_{k+1},p_{k+1})$.
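
As an illustration of the scheme (12), the following sketch (our own minimal example, not taken from the paper) integrates a harmonic oscillator using the midpoint-rule discrete Lagrangian $L_d(q_k,q_{k+1})=h\,L\big(\frac{q_k+q_{k+1}}2,\frac{q_{k+1}-q_k}h\big)$ with $F_d^\pm=0$, solving the discrete Euler-Lagrange equation for $q_{k+1}$ at each step by Newton iteration with finite-difference derivatives:

```python
import numpy as np

h = 0.1  # constant time-step

def L(q, qdot):
    """Harmonic oscillator: kinetic minus potential energy."""
    return 0.5 * qdot**2 - 0.5 * q**2

def Ld(x, y):
    """Midpoint-rule discrete Lagrangian approximating (11a)."""
    return h * L(0.5 * (x + y), (y - x) / h)

def D(f, i, x, y, eps=1e-6):
    """Central finite-difference partial derivative D_i f(x, y), i in {1, 2}."""
    if i == 1:
        return (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    return (f(x, y + eps) - f(x, y - eps)) / (2 * eps)

def del_step(q_prev, q_cur, iters=5):
    """Solve the DEL equation (12) (with F_d = 0) for q_next by Newton iteration."""
    q_next = 2 * q_cur - q_prev  # initial guess: linear extrapolation
    for _ in range(iters):
        r = D(Ld, 1, q_cur, q_next) + D(Ld, 2, q_prev, q_cur)
        eps = 1e-6  # numerical derivative of the residual w.r.t. q_next
        dr = (D(Ld, 1, q_cur, q_next + eps) - D(Ld, 1, q_cur, q_next - eps)) / (2 * eps)
        q_next -= r / dr
    return q_next

# Seed with the exact solution q(t) = cos(t) at t = 0 and t = h
traj = [1.0, np.cos(h)]
for _ in range(200):
    traj.append(del_step(traj[-2], traj[-1]))
```

For this quadratic Lagrangian the DEL equation is linear in $q_{k+1}$, so Newton converges immediately; the generic solver is kept to show the structure of the method. The resulting scheme is second order and, being variational, exhibits the bounded energy behavior mentioned below.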

The nice part of the story is that the integrators derived in this way naturally preserve (or nearly preserve) the quantities that are preserved in the continuous framework: the symplectic form, the total energy (for conservative systems) and, in the presence of symmetries, the linear and/or angular momentum (for more details, see [36]). Furthermore, other aspects of the continuous theory can be “easily” adapted: symmetry reduction [7, 10, 22], constraints [23, 25], control forces [8, 39], etc.

### 3.2. High Order Variational Integrators

High order variational integrators for time dependent or independent systems (HOVI[t]) are a class of integrators that, by using a multi-stage approach, aim at high order accuracy in the computation of the natural trajectories of a mechanical system while preserving some intrinsic properties of such systems. In particular, symplectic-partitioned Runge-Kutta methods (spRK) and, what we call here, symplectic Galerkin methods (sG) are $s$-stage variational integrators of order up to $2s$.

The derivation of these methods follows the general scheme that comes next; the specifics of each particular case are detailed in the following subsections. For a fixed time step $h$, one considers a series of points $q_0,q_1,\dots,q_N$, referred to as macro-nodes. Between each couple of macro-nodes $(q_k,q_{k+1})$, one also considers a set of micro-data, the stages: For the particular cases of sG and spRK methods, we consider micro-nodes $Q_i$ and micro-velocities $\dot Q_i$, respectively. Both macro-nodes and micro-data (micro-nodes or micro-velocities) are required to satisfy a variational principle, giving rise to a set of equations which, properly combined, define the final integrator.

Here and after, we will use the following notation: Let $c_1<\dots<c_s\in[0,1]$ denote a set of collocation points and consider the associated Lagrange polynomials and nodal weights, that is,

 $l_j(t):=\prod_{i\neq j}\frac{t-c_i}{c_j-c_i}\quad\text{and}\quad b_j:=\int_0^1 l_j(t)\,dt,$

respectively. Note that the pairs $(b_j,c_j)$ define a quadrature rule and that, for appropriate $c_j$’s, this rule may be a Gaussian-like quadrature, for instance Gauss-Legendre, Gauss-Lobatto, Radau or Chebyshev.

Now, for the sake of simplicity and independently of the method, we will use the same notation for the nodal coefficients. We define, for spRK and sG respectively,

 (14) $a_{ij}:=\int_0^{c_i}l_j(t)\,dt\quad\text{and}\quad a_{ij}:=\frac{dl_j}{dt}\Big|_{c_i}.$

Moreover, for spRK, we will also use the nodal weights and coefficients given by Equation (22) and, for sG, the source and target coefficients

 $\alpha_j:=l_j(0)\quad\text{and}\quad\beta_j:=l_j(1).$
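
These quantities are straightforward to compute numerically. The sketch below (our own illustration; the helper names are not from the paper) evaluates the Lagrange polynomials $l_j$, the weights $b_j$, both families of coefficients $a_{ij}$ from (14), and the boundary values $\alpha_j$, $\beta_j$, here for the two Gauss-Legendre collocation points on $[0,1]$:

```python
import numpy as np

def lagrange_basis(c):
    """Return the Lagrange polynomials l_j (as np.poly1d) for nodes c."""
    s = len(c)
    basis = []
    for j in range(s):
        p = np.poly1d([1.0])
        for i in range(s):
            if i != j:
                p *= np.poly1d([1.0, -c[i]]) / (c[j] - c[i])
        basis.append(p)
    return basis

def nodal_coefficients(c):
    """Weights b_j = int_0^1 l_j, spRK a_ij = int_0^{c_i} l_j,
    sG a_ij = l_j'(c_i), and boundary values alpha_j, beta_j."""
    s = len(c)
    ell = lagrange_basis(c)
    b = np.array([p.integ()(1.0) - p.integ()(0.0) for p in ell])
    a_sprk = np.array([[ell[j].integ()(c[i]) - ell[j].integ()(0.0)
                        for j in range(s)] for i in range(s)])
    a_sg = np.array([[ell[j].deriv()(c[i]) for j in range(s)]
                     for i in range(s)])
    alpha = np.array([p(0.0) for p in ell])
    beta = np.array([p(1.0) for p in ell])
    return b, a_sprk, a_sg, alpha, beta

# Gauss-Legendre nodes on [0, 1] for s = 2: c = (3 -+ sqrt(3)) / 6
c = np.array([0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6])
b, a_sprk, a_sg, alpha, beta = nodal_coefficients(c)
```

Since $\sum_j l_j\equiv 1$, the rows of the spRK coefficients sum to $c_i$ and the rows of the sG coefficients sum to zero, which gives a quick sanity check on the computation.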

Finally, if $L$ denotes a Lagrangian from $TQ$ to $\mathbb{R}$ coupled with an external force $F$, then we define

 $P_i:=\frac{\partial L}{\partial\dot q}\Big|_i=\frac{\partial L}{\partial\dot q}\Big|_{(Q_i,\dot Q_i)}\quad\text{and}\quad\dot P_i:=\frac{\partial L}{\partial q}\Big|_i+F_i=\frac{\partial L}{\partial q}\Big|_{(Q_i,\dot Q_i)}+F(Q_i,\dot Q_i),$

where $(Q_i,\dot Q_i)$ are couples of micro-nodes and micro-velocities given by each method. Besides, $D_j$ will stand for the partial derivative with respect to the $j$-th component.

#### 3.2.1. Symplectic-Partitioned Runge-Kutta Methods

Although the variational derivation of spRK methods in the framework of Geometric Mechanics is an already known fact (see [36] for an “intrinsic” derivation, like the present one, or [19] for a “constrained” one), both based on the original works of [47, 45], we present it here again in order to ease the understanding of, and the comparison with, the sG methods below.

Given a point $q_0$ and vectors $\dot Q_1,\dots,\dot Q_s$, we define the polynomial curves

 $\dot Q(t):=\sum_{j=1}^s l_j(t/h)\,\dot Q_j\quad\text{and}\quad Q(t):=q_0+h\sum_{j=1}^s\int_0^{t/h}l_j(\tau)\,d\tau\;\dot Q_j.$

We have

 (15) $\dot Q_i=\dot Q(h\cdot c_i)\quad\text{and}\quad Q_i:=Q(h\cdot c_i)=q_0+h\sum_{j=1}^s a_{ij}\dot Q_j.$

Note that the polynomial curve $Q$ is uniquely determined by $q_0$ and $\dot Q_1,\dots,\dot Q_s$. In fact, it is the unique polynomial curve of degree $s$ such that $Q(0)=q_0$ and $\dot Q(h\cdot c_i)=\dot Q_i$. However, if we define the configuration point

 (16) $q_1:=Q(h\cdot 1)=q_0+h\sum_{j=1}^s b_j\dot Q_j$

and consider it fixed, then $Q$ is uniquely determined by $q_0$, $q_1$ and all the $\dot Q_j$’s but one. Namely, take any $i_0$ such that $b_{i_0}\neq0$ and fix it; then

 $\dot Q_{i_0}=\Big(\frac{q_1-q_0}{h}-\sum_{j\neq i_0}b_j\dot Q_j\Big)\Big/\,b_{i_0}.$

We now define the multi-vector discrete Lagrangian

 (17) $L_d(\dot Q_1,\dots,\dot Q_s):=h\sum_{i=1}^s b_i\,L(Q_i,\dot Q_i)$

and the multi-vector discrete force

 $F_d(\dot Q_1,\dots,\dot Q_s)\cdot(\delta Q_1,\dots,\delta Q_s):=h\sum_{i=1}^s b_i\,F(Q_i,\dot Q_i)\cdot\delta Q_i.$

Although not explicitly stated, they both depend also on $q_0$. If we write the micro-node variations in terms of the micro-velocity variations (by definition (15)), we have that the multi-vector discrete force is

 $F_d(\dot Q_1,\dots,\dot Q_s)\cdot(\delta Q_1,\dots,\delta Q_s)=h^2\sum_{j=1}^s\sum_{i=1}^s b_i\,a_{ij}\,F(Q_i,\dot Q_i)\cdot\delta\dot Q_j.$

The two-point discrete Lagrangian is then

 (18) $L_d(q_0,q_1):=\operatorname*{ext}_{P^s([0,h],\mathbb{R}^n,q_0,q_1)}L_d(\dot Q_1,\dots,\dot Q_s)$

where $P^s([0,h],\mathbb{R}^n,q_0,q_1)$ is the space of polynomials of degree $s$ from $[0,h]$ to $\mathbb{R}^n$ such that $Q(0)=q_0$ and $Q(h)=q_1$, and the vectors $\dot Q_j$ determine such polynomials as discussed above. The so called “extremal” is realized by a polynomial such that

 (19) $\delta L_d(\dot Q_1,\dots,\dot Q_s)\cdot(\delta\dot Q_1,\dots,\delta\dot Q_s)+F_d(\dot Q_1,\dots,\dot Q_s)\cdot(\delta Q_1,\dots,\delta Q_s)=0$

for any variations $(\delta\dot Q_1,\dots,\delta\dot Q_s)$, taking into account that $\delta q_0=\delta q_1=0$ and that $\delta\dot Q_{i_0}=-\sum_{j\neq i_0}(b_j/b_{i_0})\,\delta\dot Q_j$. For convenience, the previous equation (19) is developed afterwards.

The two-point discrete forward and backward forces are then

 (20) $F_d^\pm(q_0,q_1)\cdot\delta q_\pm:=h\sum_{i=1}^s b_i\,F(Q_i,\dot Q_i)\cdot\frac{\partial Q_i}{\partial q_\pm}\,\delta q_\pm,$

where $q_-=q_0$ and $q_+=q_1$. Using the previous relations, we may write

 $F_d^-=h\sum_{i=1}^s b_i\,(1-a_{ii_0}/b_{i_0})\,F_i\quad\text{and}\quad F_d^+=h\sum_{i=1}^s b_i\,(a_{ii_0}/b_{i_0})\,F_i.$

By the momenta-matching rule (13), we have that

 $-p_0=-D_{i_0}L_d(\dot Q_1,\dots,\dot Q_s)/(h b_{i_0})+D_{q_0}L_d(\dot Q_1,\dots,\dot Q_s)+F_d^-,$
 $p_1=D_{i_0}L_d(\dot Q_1,\dots,\dot Q_s)/(h b_{i_0})+F_d^+,$

where $D_{q_0}$ stands for the partial derivative with respect to $q_0$. Combining both equations, we obtain that

 $D_{i_0}L_d+h^2\sum_{i=1}^s b_i\,a_{ii_0}F_i=h\,b_{i_0}p_1\quad\text{and}\quad p_1=p_0+D_{q_0}L_d+h\sum_{i=1}^s b_iF_i.$

Coming back to Equation (19), we have that

 $0=\delta L_d(\dot Q_1,\dots,\dot Q_s)\cdot(\delta\dot Q_1,\dots,\delta\dot Q_s)+F_d(\dot Q_1,\dots,\dot Q_s)\cdot(\delta Q_1,\dots,\delta Q_s)=\sum_{j\neq i_0}\Big(D_jL_d+h^2\sum_{i=1}^s b_ia_{ij}F_i+\frac{\partial\dot Q_{i_0}}{\partial\dot Q_j}\Big(D_{i_0}L_d+h^2\sum_{i=1}^s b_ia_{ii_0}F_i\Big)\Big)\,\delta\dot Q_j.$

Therefore, for $j\neq i_0$, we have that

 $D_jL_d+h^2\sum_{i=1}^s b_ia_{ij}F_i=\frac{b_j}{b_{i_0}}\Big(D_{i_0}L_d+h^2\sum_{i=1}^s b_ia_{ii_0}F_i\Big).$

Thus, the integrator is defined by

 (21a) $D_jL_d(\dot Q_1,\dots,\dot Q_s)+h^2\sum_{i=1}^s b_ia_{ij}F_i=h\,b_jp_1,$
 (21b) $q_1=q_0+h\sum_{j=1}^s b_j\dot Q_j,$
 (21c) $p_1=p_0+D_{q_0}L_d(\dot Q_1,\dots,\dot Q_s)+h\sum_{i=1}^s b_iF_i.$

Besides, using the definition of the discrete Lagrangian, we have

 $D_jL_d(\dot Q_1,\dots,\dot Q_s)+h^2\sum_{i=1}^s b_ia_{ij}F_i=h^2\sum_{i=1}^s b_ia_{ij}\dot P_i+h\,b_jP_j,$
 $D_{q_0}L_d(\dot Q_1,\dots,\dot Q_s)+h\sum_{i=1}^s b_iF_i=h\sum_{i=1}^s b_i\dot P_i.$

Therefore, we may write

 $P_j=p_0+h\sum_{i=1}^s b_i(1-a_{ij}/b_j)\dot P_i=p_0+h\sum_{i=1}^s\bar a_{ji}\dot P_i,$
 $p_1=p_0+h\sum_{i=1}^s b_i\dot P_i=p_0+h\sum_{i=1}^s\bar b_i\dot P_i,$

where $\bar a_{ij}$ and $\bar b_i$ are given by

 (22) $b_i\bar a_{ij}+\bar b_ja_{ji}=b_i\bar b_j,\qquad b_i=\bar b_i.$

In summary, the equations that define the spRK integrator (with forces) are, together with (22),

 (23a) $q_1=q_0+h\sum_{j=1}^s b_j\dot Q_j,\qquad p_1=p_0+h\sum_{j=1}^s\bar b_j\dot P_j,$
 (23b) $Q_i=q_0+h\sum_{j=1}^s a_{ij}\dot Q_j,\qquad P_i=p_0+h\sum_{j=1}^s\bar a_{ij}\dot P_j,$
 (23c) $P_i=\frac{\partial L}{\partial\dot q}(Q_i,\dot Q_i),\qquad\dot P_i=\frac{\partial L}{\partial q}(Q_i,\dot Q_i)+F(Q_i,\dot Q_i).$
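
To make the scheme (23) concrete, here is a minimal sketch (our own example; the fixed-point solver is one simple choice, not prescribed by the paper) of one spRK step for the unforced harmonic oscillator $L=\frac12\dot q^2-\frac12 q^2$, using the 2-stage Gauss coefficients, for which (22) gives $\bar b=b$ and $\bar a$ as computed below:

```python
import numpy as np

# 2-stage Gauss-Legendre collocation data on [0, 1]
c = np.array([0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6])
b = np.array([0.5, 0.5])
a = np.array([[0.25, 0.25 - np.sqrt(3) / 6],
              [0.25 + np.sqrt(3) / 6, 0.25]])
# Conjugate coefficients from (22) with b_bar = b: a_bar_ij = b_j (1 - a_ji / b_i)
abar = b[None, :] * (1.0 - a.T / b[:, None])

def sprk_step(q0, p0, h, iters=50):
    """One step of the spRK scheme (23) for L = qdot^2/2 - q^2/2 (no force),
    for which (23c) gives P_i = Qdot_i and Pdot_i = -Q_i."""
    dQ = np.full(2, p0)    # initial guess for the micro-velocities Qdot_i
    dP = np.full(2, -q0)   # initial guess for Pdot_i
    for _ in range(iters):  # fixed-point iteration on the implicit system (23b)
        dQ = p0 + h * abar @ dP    # Qdot_i = P_i = p0 + h sum_j abar_ij Pdot_j
        dP = -(q0 + h * a @ dQ)    # Pdot_i = -Q_i = -(q0 + h sum_j a_ij Qdot_j)
    return q0 + h * b @ dQ, p0 + h * b @ dP  # macro-update (23a)

q, p, h = 1.0, 0.0, 0.1
for _ in range(100):
    q, p = sprk_step(q, p, h)
# after t = 10 the exact solution is q = cos(10), p = -sin(10)
```

For Gauss coefficients the conjugate pair coincides with the original one ($\bar a=a$), so the scheme is the classical Gauss collocation method of order $2s=4$; being symplectic, it conserves the quadratic energy of this oscillator up to the solver tolerance.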

#### 3.2.2. Symplectic Galerkin Methods

Galerkin methods are a class of methods to transform a problem given by a continuous operator (such as a differential operator) into a discrete problem. As such, spRK methods fall into the scope of this technique and could also be classified as “symplectic Galerkin” methods. However, we want to stress the difference between what is called spRK in the literature and what we refer to here as sG. The terminology should not be confused with the one used in [36].

Given points $Q_1,\dots,Q_s$, we define the polynomial curves

 $Q(t):=\sum_{j=1}^s l_j(t/h)\,Q_j\quad\text{and}\quad\dot Q(t):=\frac1h\sum_{j=1}^s\dot l_j(t/h)\,Q_j.$

We have

 $Q_i=Q(h\cdot c_i)\quad\text{and}\quad\dot Q_i:=\dot Q(h\cdot c_i)=\frac1h\sum_{j=1}^s a_{ij}Q_j.$

Note that the polynomial curve $Q$ is uniquely determined by the points $Q_1,\dots,Q_s$. In fact, it is the unique polynomial curve of degree $s-1$ such that $Q(h\cdot c_i)=Q_i$. However, if we define the configuration points

 (24) $q_0:=Q(h\cdot 0)=\sum_{j=1}^s\alpha_jQ_j\quad\text{and}\quad q_1:=Q(h\cdot 1)=\sum_{j=1}^s\beta_jQ_j$

and consider them fixed, then $Q$ is uniquely determined by $q_0$, $q_1$ and all the $Q_j$’s but a couple. For instance, we may consider $Q_1$ and $Q_s$ as functions of the others, since the relations (24) define a system of linear equations whose coefficient matrix has determinant $\gamma:=\alpha_1\beta_s-\alpha_s\beta_1$, which we assume to be nonzero. More precisely,

 $\begin{pmatrix}Q_1\\ Q_s\end{pmatrix}=\frac1\gamma\begin{pmatrix}\beta_s&-\alpha_s\\-\beta_1&\alpha_1\end{pmatrix}\begin{pmatrix}q_0-\sum_{j=2}^{s-1}\alpha_jQ_j\\[2pt] q_1-\sum_{j=2}^{s-1}\beta_jQ_j\end{pmatrix}.$
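
This reconstruction of the boundary micro-nodes is easy to check numerically. The sketch below (our own verification with an arbitrary illustrative choice of collocation points, not taken from the paper) forms $q_0$, $q_1$ by (24) from given micro-nodes and then recovers $Q_1$, $Q_s$ through the inverse matrix above:

```python
import numpy as np

def lagrange_eval(c, j, t):
    """Evaluate the Lagrange basis polynomial l_j for nodes c at t."""
    out = 1.0
    for i in range(len(c)):
        if i != j:
            out *= (t - c[i]) / (c[j] - c[i])
    return out

c = np.array([0.2, 0.5, 0.8])  # illustrative interior collocation points, s = 3
alpha = np.array([lagrange_eval(c, j, 0.0) for j in range(3)])  # alpha_j = l_j(0)
beta = np.array([lagrange_eval(c, j, 1.0) for j in range(3)])   # beta_j = l_j(1)
gamma = alpha[0] * beta[2] - alpha[2] * beta[0]                 # determinant

# Pick arbitrary micro-nodes and form q0, q1 by (24) ...
Q = np.array([1.0, -2.0, 0.5])
q0, q1 = alpha @ Q, beta @ Q

# ... then recover Q_1 and Q_s from q0, q1 and the interior node Q_2
rhs = np.array([q0 - alpha[1] * Q[1], q1 - beta[1] * Q[1]])
inv = np.array([[beta[2], -alpha[2]], [-beta[0], alpha[0]]]) / gamma
Q1_rec, Qs_rec = inv @ rhs
```

The recovery is exact (up to round-off) whenever $\gamma\neq 0$, which holds for these nodes.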

We now define the multi-point discrete Lagrangian

 (25) $L_d(Q_1,\dots,Q_s):=h\sum_{i=1}^s b_i\,L(Q_i,\dot Q_i)$

and the multi-point discrete force

 $F_d(Q_1,\dots,Q_s)\cdot(\delta Q_1,\dots,\delta Q_s):=h\sum_{i=1}^s b_i\,F(Q_i,\dot Q_i)\cdot\delta Q_i.$

The two-point discrete Lagrangian is then

 (26) $L_d(q_0,q_1):=\operatorname*{ext}_{P^s([0,h],\mathbb{R}^n,q_0,q_1)}L_d(Q_1,\dots,Q_s)$

where $P^s([0,h],\mathbb{R}^n,q_0,q_1)$ is the space of polynomials of degree $s-1$ from $[0,h]$ to $\mathbb{R}^n$ satisfying (24), and the points $Q_j$ determine such polynomials as discussed above. The so called “extremal” is realized by a polynomial such that

 (27) $\delta L_d(Q_1,\dots,Q_s)\cdot(\delta Q_1,\dots,\delta Q_s)+F_d(Q_1,\dots,Q_s)\cdot(\delta Q_1,\dots,\delta Q_s)=0$

for any variations $(\delta Q_1,\dots,\delta Q_s)$, taking into account that $\delta q_0=\delta q_1=0$ and that $Q_1$, $Q_s$ depend on the remaining points through (24). For convenience, the previous equation (27) is developed afterwards.

The two-point discrete forward and backward forces are then formally defined by Equation (20). Using the previous relations, we may write

 $F_d^-=h(\beta_sb_1F_1-\beta_1b_sF_s)/\gamma\quad\text{and}\quad F_d^+=h(\alpha_1b_sF_s-\alpha_sb_1F_1)/\gamma.$

By the momenta-matching rule (13), we have that

 $-p_0=\beta_s/\gamma\cdot(D_1L_d+hb_1F_1)-\beta_1/\gamma\cdot(D_sL_d+hb_sF_s)$  and
 $p_1=-\alpha_s/\gamma\cdot(D_1L_d+hb_1F_1)+\alpha_1/\gamma\cdot(D_sL_d+hb_sF_s).$