# Discrete-Time Models for Implicit Port-Hamiltonian Systems

###### Abstract

Implicit representations of finite-dimensional port-Hamiltonian systems are studied from the perspective of their use in numerical simulation and control design. Implicit representations arise when a system is modeled in Cartesian coordinates and the system constraints are applied in the form of additional algebraic equations (the system model then takes a differential-algebraic equation form). Such representations lend themselves better to sampled-data approximations. Given an implicit representation of a port-Hamiltonian system, it is shown how to construct a sampled-data model that preserves the port-Hamiltonian (PH) structure under sample and hold.

###### Keywords

Port-Hamiltonian systems, nonlinear implicit systems, symplectic integration, sampled-data systems

## 1 Introduction

The class of Hamiltonian systems has a prominent role in many disciplines. It was recently extended in [6] to include open systems, i.e., systems that interact with the environment via a set of inputs and outputs (called ports), giving rise to PH systems. Such extended models immediately reveal the passive properties of the underlying systems, making them particularly well suited for designing passivity-based control (PBC) laws. Two types of model representations of Hamiltonian systems are in widespread use: the explicit representation, stated in the form of an ordinary differential equation (ODE) on an abstract manifold [28, 29, 25, 26], and the implicit representation, stated in the form of a differential-algebraic equation (DAE) usually evolving in a Euclidean space [4]. While explicit representations are usually preferred in the context of analytical mechanics (see [1, 8, 19]), the implicit DAE models lend themselves better to numerical computation as they lead to simpler expressions for the Hamiltonian functions. The formal relations between the two representations and their equivalence can be established if the system’s configuration space is regarded as an embedded submanifold of the Euclidean space; see [4] for a full development.

Of principal interest here will be the construction of sampled-data (discrete-time) models of PH systems that best approximate their continuous-time counterparts. Sampled-data models are important in digital control implementations and permit simpler design of PBC laws directly in discrete time. In this context, the notion of “best approximation” deserves clarification.

For linear systems, exact sampled-data models can be constructed by requiring that the solutions of the sampled-data and continuous-time systems coincide at the discretization points. Point-wise model matching is usually impossible in the case of nonlinear systems, short of explicitly deriving analytical expressions for their solutions. For general dynamic systems, it is thus the choice of an integration method for generating an approximate solution (such as Euler or Runge–Kutta) that determines the compromise between complexity and the order of approximation of the given continuous-time system by its sampled-data counterpart. In the special case of Hamiltonian systems, structure preservation is usually the main criterion for choosing a numerical method; see, e.g., [11] for more details on structure-preserving numerical schemes. Structure-preserving methods maintain qualitatively correct behavior over long simulation horizons.

It is known that autonomous Hamiltonian systems conserve two quantities: the Hamiltonian function (i.e., the energy or storage function) and a certain two-form, called the symplectic form. Numerical integration algorithms can conserve either the Hamiltonian function [14, 15] or the symplectic form [27], but not both. Conservation of the symplectic form is usually preferred over conservation of the Hamiltonian as the symplectic form unambiguously defines the class of Hamiltonian vector fields; see Theorem 2.3 and Remark 1.

### Contribution

Adopting a symplectic-form-preserving approach, a sampled-data model for a given continuous-time PH system is developed here. The approach employs implicit representations of PH systems as the latter lead to simpler expressions for the Hamiltonian functions. Specifically, the Hamiltonian functions arising from implicit representations lend themselves well to the application of flow-splitting numerical integration methods; see [27] for a splitting method that applies to autonomous Hamiltonian systems (Hamiltonian systems without ports). Additionally, the discrete-time modeling approach presented here preserves passivity of PH systems (Theorem 4). The passive structure is preserved in the sense that the discrete model is the exact representation of another, possibly non affine, continuous-time PH system which, up to an approximation error of order two with respect to the sampling interval, has the same storage function and the same output function.

### Paper structure

Section 2 introduces the implicit representations of PH systems. This section recalls the results from [4] and serves mainly to introduce the notation and to state the main assumptions (1 and 2). Also, the definition of PH systems is extended to the case of non affine control systems. Splitting and symplectic integration methods for autonomous systems are recalled in § 3. Discrete-time models for implicit PH systems based on vector splitting methods are developed in § 4 and their asymptotic properties are analyzed. An illustrative example is provided confirming the findings.

## 2 Implicit representations of port-Hamiltonian systems

We begin this section by defining the configuration and phase spaces of Hamiltonian systems in implicit form. Conservation properties are also discussed. As in the case of explicit representations of Hamiltonian systems, both the Hamiltonian function and the symplectic two-form of the implicit representation are invariant under the flow of the system. Following [6], input and output ports are added to the implicit representation, ensuring passivity of the resulting PH system. Finally, we extend the definition of PH systems to the case of non affine control systems. This will be needed when performing backward error analysis later in § 4, since time discretization entails the loss of affinity.

### 2.1 The configuration space

The class of systems considered here is restricted to mechanical finite-dimensional systems with scleronomic constraints that evolve in continuous time. Systems of this type typically consist of rigid bodies held together as one structure by the action of constraint forces. The position of each of the rigid bodies can be unambiguously described in terms of the position of its center of mass and its orientation in space. The configuration space of a spatial system as a whole can thus be viewed as a subset, , of a Cartesian space of dimension that is expressed in terms of smooth, independent constraint functions , with , as the level set

(1)

where is the inverse image of . Functional independence of the constraints is expressed in terms of the following rank condition.

###### Assumption 1

The rank of is equal to at all points of the set .

Recalling that the rank of a mapping is the rank of its tangent map (pushforward by ), in any coordinates the condition requires that the Jacobian has full rank, i.e., at every point of . The value is then called a regular value of , the level set is a regular level set of , and is said to be a defining map for ; see [16], pp. 113-114. It hence follows that can be given the differentiable structure of a closed embedded submanifold of ; see [16], Corollary 5.24, p. 114. The Constant-Rank Level Set Theorem ([16], Theorem 5.22, p. 113) further specifies its dimension as .
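Assumption 1 can be spot-checked numerically. The sketch below, assuming a hypothetical defining map from a four-dimensional ambient space to two rigid-link constraints (the bar lengths `L1 = 1` and `L2 = 0.7` are illustrative values, not taken from the text), verifies the full-rank condition at a point of the level set:

```python
import numpy as np

# Hypothetical defining map g: R^4 -> R^2 for two rigid-link constraints,
# g1 = x1^2 + y1^2 - L1^2 and g2 = (x2-x1)^2 + (y2-y1)^2 - L2^2.
L1, L2 = 1.0, 0.7

def g(z):
    x1, y1, x2, y2 = z
    return np.array([x1**2 + y1**2 - L1**2,
                     (x2 - x1)**2 + (y2 - y1)**2 - L2**2])

def jacobian_g(z):
    x1, y1, x2, y2 = z
    return np.array([[2*x1, 2*y1, 0.0, 0.0],
                     [-2*(x2 - x1), -2*(y2 - y1), 2*(x2 - x1), 2*(y2 - y1)]])

# Check the rank condition at a point of the level set g(z) = 0
z0 = np.array([0.0, -L1, 0.0, -(L1 + L2)])
assert np.allclose(g(z0), 0.0)        # z0 lies on the constraint set
rank = np.linalg.matrix_rank(jacobian_g(z0))
print(rank)  # 2, i.e., full rank: 0 is a regular value at z0
```

A symbolic rank check would be needed to certify the condition on the whole level set; the numerical test above only certifies it at the sampled point.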

Because is an embedded submanifold, it can also be regarded as an abstract -dimensional manifold with local coordinates (called generalized coordinates in mechanics). It is then also possible to exhibit an injective inclusion map: (embedding), with , which is a homeomorphism onto the image and such that and . This inclusion serves as a map between local coordinates and global Cartesian coordinates .

### 2.2 Hamiltonian equations

Let be the cotangent bundle of and let be local coordinates. A system is said to be Hamiltonian if its trajectories are integral curves of the Hamiltonian vector field ,

where is the sum of the system’s kinetic and potential energies and , respectively; see [1] for more details and a coordinate-free definition.

The implicit model for a Hamiltonian system is defined as follows. Let be the cotangent bundle of and let be global coordinates. The implicit Hamiltonian vector field takes the form

(2)

with

(3)

the unconstrained part of the Hamiltonian vector field and is again the sum of the kinetic and potential energies, but expressed in global coordinates. That is, and ; see [11, 27, 3] for details on the derivation of this equation. The vector field unravels as the DAE system

(4a)

(4b)

By applying to both sides of the constraint equations , one obtains the so-called hidden constraints

(5)

Thus, the system evolves on the closed submanifold

(6)

and we have . A more rigorous construction of the phase space is given in [4], where it is regarded as an embedding of in . The map as well as the formal relation between and can also be found in [4].

The Lagrange multipliers are defined implicitly by (2). Precisely, application of to the hidden constraints makes appear:

(7)

Thus, if the matrix

(8)

is non-singular on , then there are unique satisfying (7) and forcing the integral curves to stay on .

###### Assumption 2

The Hessian matrix is positive definite for all so is convex in .

### 2.3 Energy conservation and symplecticity

It is well known that the flows generated by Hamiltonian vector fields in explicit form preserve the Hamiltonian function, i.e. the total energy of the system is conserved during the evolution of the system [32, 23, 24]. As the explicit and implicit system representations are equivalent, the conservation also holds for the implicit Hamiltonian vector field (2), as is easily confirmed by computing the Lie derivative of along the flow generated by [11]. Indeed,

(9)

where the third equality holds because and because of the definition of given in (5). Eq. (9) shows that (cf. Eq. (6)), so remains constant along the system trajectories.
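The vanishing derivative in (9) can be spot-checked numerically. The sketch below uses a single planar pendulum in Cartesian coordinates (unit mass and bar length, an assumed simplification of the setting above) and verifies that the directional derivative of the energy along the constrained vector field vanishes at a state satisfying the hidden constraint:

```python
import numpy as np

# Planar pendulum in Cartesian coordinates (assumed toy instance):
# H(q, p) = |p|^2/2 + gz*q_y with unit mass and the constraint |q|^2 = 1.
gz = 9.81
gradV = np.array([0.0, gz])

# A state on the constrained phase space: |q| = 1 and q . p = 0
theta = 0.3
q = np.array([np.sin(theta), -np.cos(theta)])
p = 2.0 * np.array([np.cos(theta), np.sin(theta)])

# Multiplier enforcing the hidden constraint d/dt (q . p) = 0, i.e.,
# |p|^2 - q . gradV - lam * |q|^2 = 0
lam = (p @ p - q @ gradV) / (q @ q)
qdot = p
pdot = -gradV - lam * q

# Lie derivative of H along the constrained vector field:
# dH/dt = p . pdot + gradV . qdot = -lam * (q . p) = 0
dHdt = p @ pdot + gradV @ qdot
print(abs(dHdt) < 1e-12)  # True: the energy is constant along the flow
```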

It is also well known that, besides the 0-form , Hamiltonian flows preserve a certain 2-form called the symplectic form. In the case of implicit representations the symplectic form is given by the formula (see [1] for a coordinate-free definition)

(10)

which acts on vectors of , with Einstein’s summation convention implied.

A differentiable mapping is called symplectic if

(11)

Here, denotes the pull-back map associated with , defined as follows. Let be a smooth map of manifolds and let . The pull-back map associated with is the dual map to the push-forward map and is characterized by

Applying this definition, (11) can be rewritten in the equivalent form

(12)

The conservation of along can be established by showing that the Lie derivative is equal to zero, the proof being similar to that of the conservation of ; see [4] for the explicit derivation. Here, we cite the result.

###### Theorem 2.3

[11, 17] Let be twice continuously differentiable. The flow of governed by (2) is a symplectic transformation on , i.e.,

for every for which is defined.

###### Remark 1

The converse statement, that every symplectic flow solves Hamilton’s equations for some , is also true, so symplecticity is a characteristic property of Hamiltonian systems [1]. This does not translate to the case of energy conservation, i.e., while every Hamiltonian system conserves energy, not every energy-conserving system is Hamiltonian.

### 2.4 Port-Hamiltonian systems

In the presence of external forces and dissipation it is convenient to represent (2) as an input-output system equipped with a pair of port variables , giving rise to a PH system; see [20, 32, 23] for the original definition as stated for Hamiltonian systems in explicit form. Extending this definition to Hamiltonian systems in implicit form, a port-Hamiltonian system is described in terms of the vector field :

(13)

where is the control or input variable and is the output variable, which satisfies

(14)

and where are maps from to .

The vector field (13) and the output unravel as the DAE system

By analogy with the results described in Sec. 2.2, one can determine the Lagrange multipliers explicitly. The constraints imply that

(15)

from which it follows that, as long as (8) is non-singular, there are unique (in general dependent on as well as on and ) such that and such that the integral curve stays on .

It can be readily seen that an implicit PH system described by (13) no longer preserves . The Lie derivative of is now

which establishes the power balance

(16)

Since the product is equal to the rate of change in energy, we say that is a power-conjugated pair of port variables. If, in addition, the restriction of to is bounded from below, i.e., if the image of under is bounded from below, then (13) is called passive, or more precisely, lossless. Boundedness of can be easily assessed using the following proposition.

[4] If the potential energy is lower semi-continuous and is compact, then restricted to is bounded from below (hence, the vector field (13) describes a lossless system).
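The power balance (16) can likewise be checked numerically once an input is present. The following sketch augments the Cartesian pendulum toy model with a hypothetical input map `B` (a horizontal force; `B`, the input value, and the sampled state are all assumptions for illustration) and verifies that the rate of change of the energy equals the product of the port variables:

```python
import numpy as np

# Cartesian pendulum with a force input through a hypothetical input map B;
# unit mass and length, constraint g(q) = (|q|^2 - 1)/2 = 0.
gz = 9.81
gradV = np.array([0.0, gz])
B = np.array([[1.0], [0.0]])      # assumed input map, not from the text

theta = 0.3
q = np.array([np.sin(theta), -np.cos(theta)])        # |q| = 1
p = 2.0 * np.array([np.cos(theta), np.sin(theta)])   # q . p = 0
u = np.array([0.7])               # arbitrary constant input value

# The multiplier now depends on u as well (cf. (15))
lam = (p @ p - q @ gradV + q @ (B @ u)) / (q @ q)
pdot = -gradV - lam * q + B @ u
y = B.T @ p                       # power-conjugate output

# Power balance (16): dH/dt = y^T u
dHdt = p @ pdot + gradV @ p
print(np.isclose(dHdt, y @ u))  # True
```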

With the inclusion of the control variable , it can no longer be expected that the flow of (13) be symplectic. Indeed, it is not hard to see that the Lie derivative of along is in general different from zero.

#### Example: A double planar pendulum

Let us recall an example from [4], on which we will elaborate when discussing sampled-data models.

Consider the model of a double planar pendulum shown in Fig. 1 that comprises a pair of point masses and whose coordinate positions are and , respectively. The massless bars are of fixed lengths and , which gives rise to the two holonomic constraints:

(18)

with , , , . The rank of the constraint Jacobian is full as

(19)

for all . Therefore, is a regular value of and is an embedded submanifold of .

The total energy is

(20)

where and is the standard gravity.

Substituting (18) and (20) in (4) gives

(21a)

(21b)

which, together with (18), constitutes a set of DAEs describing the motion of the double pendulum in implicit form. The multipliers and are the magnitudes of the tension along the two bars.
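For readers who want to experiment with the implicit model, the following sketch evaluates the constrained vector field of the double pendulum with assumed parameter values (unit masses, bar lengths 1 and 0.7, standard gravity) and with the multipliers obtained from the construction of (7)-(8); it is an illustrative instantiation, not the paper's exact equations. At the hanging equilibrium the constraint forces exactly balance gravity:

```python
import numpy as np

# Double pendulum in implicit (Cartesian) form, with assumed parameters:
# unit masses, bar lengths 1 and 0.7, gravity 9.81.
m1 = m2 = 1.0
l1, l2 = 1.0, 0.7
gz = 9.81
M = np.diag([m1, m1, m2, m2])     # constant, diagonal inertia matrix
Minv = np.linalg.inv(M)

def grad_V(q):
    # V = m1*gz*y1 + m2*gz*y2 is linear, so its gradient is constant
    return np.array([0.0, m1 * gz, 0.0, m2 * gz])

def G(z):
    # Jacobian of the holonomic constraints; it is linear in z, so its
    # time derivative along a motion is simply G(qdot)
    x1, y1, x2, y2 = z
    return np.array([[2*x1, 2*y1, 0.0, 0.0],
                     [-2*(x2-x1), -2*(y2-y1), 2*(x2-x1), 2*(y2-y1)]])

def multipliers(q, p):
    # Solve (G Minv G^T) lam = Gdot qdot - G Minv grad_V, the condition
    # that keeps the motion on the constraint set (cf. (7)-(8))
    qdot = Minv @ p
    A = G(q) @ Minv @ G(q).T
    b = G(qdot) @ qdot - G(q) @ Minv @ grad_V(q)
    return np.linalg.solve(A, b)

def vector_field(q, p):
    lam = multipliers(q, p)
    return Minv @ p, -grad_V(q) - G(q).T @ lam

# At the hanging equilibrium the constraint forces balance gravity exactly
q0 = np.array([0.0, -l1, 0.0, -(l1 + l2)])
p0 = np.zeros(4)
qd, pd = vector_field(q0, p0)
print(np.allclose(qd, 0.0), np.allclose(pd, 0.0))  # True True
```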

Now, assume that the double pendulum is actuated by application of torques and to the joints that correspond to the angles and , respectively. The resulting linear forces are then and with

(22)

The manifold defined by (18) is compact and the potential energy is continuous, which confirms that the double pendulum is passive with passive outputs .

An explicit model for the double pendulum as an ODE can be derived by choosing the generalized coordinates on as and , as motivated by Fig. 1. The associated embedding satisfying is readily exhibited as

(23)

In local coordinates, the total energy is

(24)

with

The motion of the system is described by

(25)

(see [4] for more details).

Two representations for the same system were derived, one in the form of an ODE (25) and the other as a DAE (21). The main point of this example can be summarized in the following remark.

###### Remark 2

The implicit DAE representation of the Hamiltonian system yields a separable Hamiltonian function (20), i.e., the kinetic energy does not depend on and the inertia matrix is a constant diagonal matrix. Moreover, the potential energy is linear, which results in a constant potential-energy gradient. On the other hand, it is easily verified that the explicit Hamiltonian representation does not share such fortunate properties. The explicit Hamiltonian is no longer separable and the inertia matrix depends on the generalized coordinates .

The cost of a simpler Hamiltonian function is the higher-dimensional implicit model representation and the appearance of the Lagrange multipliers. As will soon be seen, however, implicit representations are particularly advantageous for the purposes of system discretization.

### 2.5 Non affine port-Hamiltonian systems

It was assumed in (13) that the control variables enter the vector field affinely. Many physical systems exhibit this property so, from a modeling point of view, this assumption is not too restrictive. However, for the purpose of performing backward error analysis in § 4, we will need to consider PH systems for which the control might enter in a non affine way. Motivated by the fact that when , we propose the following extended definition of a PH system.

A controlled vector field is said to be port-Hamiltonian if implies that the generated flow is symplectic.

Passivity of a non affine port-Hamiltonian system can be established by redefining the passive output.

A (not necessarily affine) smooth PH system described by a vector field can always be decomposed as , where is a Hamiltonian vector field and are the input vector fields. Hence, satisfies the power balance for some real-valued function and real-valued output functions . (The output functions may now depend directly on as well as on and .)

Following [18], we first show that a smooth control vector field can be split into a drift and a set of vector fields having factored out. Let us define the drift as and let us define the vector fields by the equations

It follows from the chain rule that

Upon integration on both sides of the equation we arrive at

Therefore, we have

(26)

where the input vector fields are defined by

It follows from the hypothesis that generates a symplectic flow, so it is a Hamiltonian vector field and satisfies for some real-valued function . Applying to shows that with .

###### Remark 3

For an affine PH system, the formulae of this lemma recover the output functions (14) with , that is, .

## 3 Sampled-data models for autonomous Hamiltonian systems

Computing a sampled-data model of a dynamical system basically amounts to computing an approximate solution of the differential equations during a small interval of time. This problem has been studied extensively in the literature of numerical analysis, from which we borrow some results and terminology. In numerical analysis, a sampled-data model is the central component of an integration method or a numerical integrator, up to the point that these terms are used interchangeably.

Mathematical models for sampled-data systems arise in diverse circumstances. In the direct approach to digital control, i.e., as opposed to the emulation of continuous control laws, the design of the controller is performed in discrete time, with the designer working directly with a sampled-data model. When designing directly in discrete time, the controller can be implemented directly on a digital device. Also, it is possible to exploit the advantages of switched controls or, e.g., multirate control techniques [21].

Computing the sampled-data model for a given nonlinear system relies on the computation of a solution of the corresponding ODE or DAE, which is in general impossible to do analytically, so one has to settle for an approximate solution.

When simulating the behavior of dynamic systems, a discrete-time model of the continuous system is also used for computing a numerical solution to the initial-value problem. Many different integration methods (or methods for short) can be found in the literature. Let us first focus on integration methods for autonomous systems and further restrict our attention to one-step methods defined by a transformation

where the constant step-size is regarded as a parameter of the method. (For a more general method, the value need not depend only on , but may also depend on the previous values, as in a multistep method; the step-size need not be constant either.) For a given initial condition in the phase space, is applied recursively to generate a discrete flow that approximates the true flow , of a given vector field at time instants . In this sense, the map is a discrete-time approximation of (or a sampled-data model of ).

A one-step method has order if the local error satisfies (we use big-O notation when quantifying approximation errors, i.e., for a given pair of functions , , we write as shorthand for )

(27)

uniformly in . A one-step method is said to be consistent if .
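The order condition (27) can be observed empirically. The sketch below measures the local error of the explicit Euler method (a first-order method, used purely as an illustration) on the scalar test equation with right-hand side minus the state, whose exact flow is known in closed form:

```python
import numpy as np

# Local error of explicit Euler (order 1) on x' = -x; exact flow: x*exp(-h).
def euler_step(x, h):
    return x + h * (-x)

x0 = 1.0
hs = np.array([1e-1, 1e-2, 1e-3])
local_err = np.abs(euler_step(x0, hs) - x0 * np.exp(-hs))

# For a method of order p the local error is O(h^(p+1)); here p = 1, so
# dividing h by 10 should divide the local error by roughly 100.
ratios = local_err[:-1] / local_err[1:]
print(ratios)  # both ratios are close to 100
```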

Let us now discuss some important properties of numerical integrators, such as symplecticity and symmetry.

### 3.1 Symplectic methods

If a sampled-data model approximates the discrete-time behavior of a Hamiltonian system, one could hope for it to inherit the fundamental qualitative properties: energy conservation and symplecticity. Unfortunately, it is not possible to preserve both simultaneously, unless the method agrees with the exact flow up to a reparametrization of time [7]. For this reason, one has to choose in favor of one invariant or the other. (For particular Hamiltonians there might be other invariants, such as momentum or angular momentum, but in general there need not be.) Energy-conserving methods have received some attention [9, 31, 10, 12, 13], but in light of Remark 1, most of the literature focuses on symplectic integration algorithms (see [11, 17, 5] and references therein). A comparison between both approaches is carried out in [30] for the rigid body.

A theoretical advantage of constructing a symplectic one-step method is that, even though it only approximates the exact flow up to the th order, it coincides exactly (if one disregards convergence issues) with the flow of another Hamiltonian system, a modified Hamiltonian system described by a modified differential equation.

###### Theorem 3.1

[11, p. 352] A symplectic method for the constrained Hamiltonian system (4) has a modified equation that is locally of the form

(28a)

(28b)

with Furthermore,

for all and all . Note that the actual values of the Lagrange multipliers differ in general from those obtained for the original system (4).

In other words, for an initial condition , is equal to the solution of (28) at time . Note that (27) provides information about the difference between the actual flow of and the approximate discrete flow . This is the kind of information that forward error analysis aims at. While certainly useful as an indicator of the quality of the approximation, Eq. (27) only evaluates the behavior of the approximate flow on the first iteration and says nothing about its long-time behavior. From (27) alone we cannot infer anything about the error when is large, so we do not know whether errors accumulate or average out to zero.

On the other hand, Theorem 3.1 tells us that if is symplectic, then there exists a modified continuous system whose flow coincides exactly with the discrete flow generated by . The modified system (28) preserves the Hamiltonian structure of the original system (4) and is ‘close’ to it in the sense that for a method of order . In other words, a symplectic integration method preserves the original 2-form and a different (but close) Hamiltonian function. This property guarantees that the good behavior of the integration scheme is maintained over many iterations, giving a global nature to the local property (27). This observation is at the center of backward error analysis [11].

### 3.2 Splitting methods

A practical advantage of symplectic schemes is that they lend themselves well to the application of splitting methods. To illustrate the idea, consider again the unconstrained or, otherwise, explicit Hamiltonian vector field on an abstract manifold . If the Hamiltonian function is separable, i.e., if it can be written as , then the vector field can be split into two Hamiltonian vector fields

with . Notice that, taken separately, each vector field can be trivially integrated exactly. For (we use the subscript to refer to an element in a sequence, not a particular coordinate) we have

and

A first-order symplectic method can be easily constructed by performing the composition

(29)

Indeed, the maps and are symplectic because they are the exact flows of Hamiltonian vector fields. Since the composition of two symplectic maps is again symplectic, is symplectic.
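As a concrete illustration of the composition (29), the sketch below composes the two exact sub-flows of a separable Hamiltonian (a harmonic oscillator with unit mass and stiffness, an assumed toy example) and checks that the energy error of the resulting first-order symplectic method stays bounded over many steps:

```python
# Splitting scheme for H(q, p) = p^2/2 + q^2/2 (assumed toy example):
# compose the exact flows of the kinetic and potential parts, as in (29).
def phi_kinetic(q, p, h):
    return q + h * p, p           # exact flow of H_kin = p^2/2

def phi_potential(q, p, h):
    return q, p - h * q           # exact flow of H_pot = q^2/2

def psi(q, p, h):
    return phi_potential(*phi_kinetic(q, p, h), h)

H = lambda q, p: 0.5 * (p * p + q * q)

q, p, h = 1.0, 0.0, 0.1
H0 = H(q, p)
drift = 0.0
for _ in range(10_000):
    q, p = psi(q, p, h)
    drift = max(drift, abs(H(q, p) - H0))

# The energy error stays bounded over 10^4 steps instead of growing
# secularly, as expected of a symplectic method (here, symplectic Euler).
print(drift < 0.1)  # True
```

An explicit, non-symplectic method of the same order would instead show a steadily growing energy error on this problem.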

Many simple Hamiltonian systems with phase space are not governed by separable Hamiltonians, so splitting methods cannot be applied directly. However, the Hamiltonian function of many mechanical systems becomes separable if the phase space is embedded in (see, e.g., Remark 2). An interesting symplectic method that is particularly well suited for this class of systems was proposed in [27]. Roughly speaking, the idea is to compute a symplectic method for the unconstrained Hamiltonian vector field , i.e., a symplectic map approximating the solution of the ODE (notice the absence of the constraint equations)

at time .

If is separable, can be readily found. The method for the original constrained Hamiltonian vector field is then constructed by taking the image of and applying a correction term that ensures that the value of belongs to , so that the constraints are satisfied. The correction is done in a careful way so that the resulting map is still symplectic (see also [17]). Depending on the accuracy of , the resulting can be of first or second order (see § 4 for details).
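The 'unconstrained step plus symplectic correction' idea can be sketched for the simplest constrained system, a planar pendulum in Cartesian coordinates (unit mass and length, assumed values). The RATTLE-type step below only illustrates the principle and is not claimed to reproduce the exact scheme of [27]:

```python
import numpy as np

# RATTLE-type step for a planar pendulum in Cartesian coordinates:
# H(q, p) = |p|^2/2 + gz*q_y, constraint g(q) = (|q|^2 - 1)/2 = 0.
gz = 9.81
gradV = lambda q: np.array([0.0, gz])

def rattle_step(q, p, h):
    # Unconstrained half kick + drift; the scalar multiplier lam is chosen
    # so that the new position satisfies the constraint exactly.
    a = q + h * (p - 0.5 * h * gradV(q))
    c = 0.5 * h * h
    # Solve |a - c*lam*q|^2 = 1 for lam (quadratic; take the small root)
    A = c * c * (q @ q)
    B = -2.0 * c * (a @ q)
    C = a @ a - 1.0
    lam = (-B - np.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)
    q_new = a - c * lam * q
    p_half = p - 0.5 * h * (gradV(q) + lam * q)
    # Second half kick; mu enforces the hidden constraint q_new . p_new = 0
    p_tmp = p_half - 0.5 * h * gradV(q_new)
    mu = 2.0 * (q_new @ p_tmp) / (h * (q_new @ q_new))
    p_new = p_tmp - 0.5 * h * mu * q_new
    return q_new, p_new

# Start at the hanging position with a horizontal push
q, p, h = np.array([0.0, -1.0]), np.array([1.0, 0.0]), 0.01
H = lambda q, p: 0.5 * (p @ p) + gz * q[1]
H0 = H(q, p)
for _ in range(5000):
    q, p = rattle_step(q, p, h)

print(abs(q @ q - 1.0) < 1e-9,   # constraint satisfied
      abs(q @ p) < 1e-9,         # hidden constraint satisfied
      abs(H(q, p) - H0) < 1e-2)  # energy error remains small
```

Both constraints hold to machine precision after every step because they are enforced exactly by the two multipliers, while the energy error stays bounded as expected of a symplectic method.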

### 3.3 Symmetric methods

For each for which the solution is defined, the flow of an autonomous differential equation defines a transformation on the phase space. It follows from the group property of the flow [2] that the inverse of the transformation can be obtained simply by reversing time, that is, . Needless to say, this property does not hold in general for a discrete approximation , which motivates the following definition.

The adjoint method of a method is the inverse map of the original method with reversed time step , i.e.,

In other words, is implicitly defined by . A method for which is called symmetric.

From a theoretical point of view, an approximate discrete-time flow should be symmetric because actual continuous flows are. But symmetry is important from a practical point of view too. It has been proved in [34] that all symmetric methods are of even order, a fact that can be exploited to construct high-order methods from simple lower-order methods. For example, one can take a first-order non-symmetric method, compute its adjoint and construct a symmetric method

(30)

We know that is at least first order, but since is symmetric, we also know that the order has to be even, so we conclude that the method is actually of second order.

The scheme (30) works particularly well for splitting methods. Take, e.g., the integration scheme (29). The maps and are symmetric (because they are exact solutions of a differential equation), but their composition is not symmetric in general. To remedy this, one can compute the adjoint method

(31)

and, using (29), (31) and (30), construct

which is a second-order symmetric method.
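The order-raising effect of the composition can be verified empirically. Using the harmonic oscillator as an assumed test problem, the sketch below compares a first-order splitting method with the symmetric composition of its half-step and the half-step adjoint:

```python
import numpy as np

# First-order splitting method psi, its adjoint, and a symmetric
# composition, tested on H = (p^2 + q^2)/2, whose exact flow is a rotation.
def psi(q, p, h):           # drift in q, then kick in p (first order)
    q = q + h * p
    p = p - h * q
    return q, p

def psi_adjoint(q, p, h):   # inverse of psi with time step -h
    p = p - h * q
    q = q + h * p
    return q, p

def psi2(q, p, h):          # symmetric composition of the two half steps
    return psi(*psi_adjoint(q, p, h / 2), h / 2)

def exact(q, p, t):
    c, s = np.cos(t), np.sin(t)
    return c * q + s * p, -s * q + c * p

def err(method, h):
    approx = np.array(method(1.0, 0.0, h))
    return float(np.hypot(*(approx - np.array(exact(1.0, 0.0, h)))))

# Halving h divides the local error of psi by ~4 (local error O(h^2)) and
# that of psi2 by ~8 (local error O(h^3)): psi2 is second order.
r1 = err(psi, 0.1) / err(psi, 0.05)
r2 = err(psi2, 0.1) / err(psi2, 0.05)
print(round(r1), round(r2))  # 4 8
```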

### 3.4 Modified vector fields and exponential representations

Consider a vector-valued function and a vector field , both defined on . If and are analytic, then the composition of and the generated flow can be expanded in a Taylor series around ,

where , , etc. (see [22, 33] for details). In particular, if is taken as the identity function , one obtains the flow . Since an -order method for coincides with the flow of a modified vector field [11, p. 340], it is also possible to expand in a Taylor series, . This exponential notation is a convenient way to express the relationship between a vector field and the flow generated by it, as well as to analyze the composition of flows.

## 4 A splitting method for implicit port-Hamiltonian systems

Suppose that there is a sequence of commands . Each command in the sequence arrives at the discrete instants of time , where is a positive real number; such commands could be generated, e.g., by a computer program. Suppose further that a zero-order hold transforms this sequence into piece-wise constant controls

(32)

which are fed into a PH system. Let be the integral curve of the non-autonomous vector field (13) with control (32) and passing through at . Let

be the corresponding outputs and let with be the sequence obtained by sampling them at discrete instants of time (see Fig. 2). We call the resulting system a sampled-data port-Hamiltonian system.
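A zero-order hold as in (32) is straightforward to mimic in code. The sketch below, with an assumed sampling interval and a hypothetical command sequence, maps a discrete sequence to a piecewise-constant signal:

```python
# Zero-order hold: u(t) = u_k for t in [k*delta, (k+1)*delta).
delta = 0.5                     # assumed sampling interval
u_seq = [0.0, 1.0, -1.0, 0.5]   # hypothetical command sequence

def u_zoh(t):
    k = min(int(t // delta), len(u_seq) - 1)
    return u_seq[k]

print(u_zoh(0.0), u_zoh(0.49), u_zoh(0.5), u_zoh(1.2))  # 0.0 0.0 1.0 -1.0
```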

The goal here is to develop a method for deriving discrete-time (or sampled-data) models for PH systems given by implicit vector fields. The underlying idea is to split the PH vector field into two components: a vector field describing an unconstrained system with state space equal to the whole , and a vector field containing the Lagrange multipliers, which maintains the trajectories on the submanifold . Splitting the vector field simplifies the computation of the sampled-data model by decomposing the problem into two simpler subproblems.

Now we extend the results of [27] to the PH case. We show that, with a straightforward modification, the method presented in [27], originally intended as an integration scheme for autonomous Hamiltonian systems, can be used to compute sampled-data models that preserve the main properties of a PH system.

Consider again the implicit vector field