
# A Convex Formulation of the H∞-Optimal Controller Synthesis Problem for Multi-Delay Systems and Implementation using SOS

Matthew M. Peet, Member, IEEE, M. Peet is with the School for the Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ, 85298 USA. e-mail: mpeet@asu.edu
###### Abstract

In this paper, we show how the problem of designing optimal state-feedback controllers for distributed-parameter systems can be formulated as a Linear Operator Inequality - a form of convex optimization with operator variables. Next, we apply this framework to linear systems with multiple delays, parameterizing a class of "complete quadratic" operators using positive matrices, and expressing the result as a Linear Matrix Inequality (LMI). Given the solution to the LMI, finding the optimal controller gains requires an analytic formula for operator inversion. Our next result, then, is such an analytic formula - along with additional methods and guidance for real-time implementation. Finally, we show that the resulting LMIs and controllers are accurate to several decimal places, as measured by the minimal achievable closed-loop norm on several test cases and as compared against high-order Padé approximations. We also illustrate the approach and simulate the closed-loop response on a scalable numerical example from smart-building design.



###### Index Terms

• Delay Systems, LMIs, Controller Synthesis.

## I Introduction

To control systems with delay, we must account for the transportation and flow of information. Although solutions to equations of the form

$$\dot{x}(t) = A_0 x(t) + A_1 x(t-\tau) + B u(t)$$

appear to be functions of time, they are better understood as functions of both time and space:

$$\dot{x}(t) = A_0 x(t) + A_1 v(t,-\tau) + B u(t), \qquad \partial_t v(t,s) = \partial_s v(t,s), \quad v(t,0) = x(t).$$

That is, instead of being lost, the state information, $x(t)$, is preserved as $v(t,s)$, transported through a hidden process ($\partial_t v(t,s) = \partial_s v(t,s)$), moving at fixed velocity ($1$), through a pipe of fixed length ($\tau$), emerges a fixed time later ($v(t,-\tau) = x(t-\tau)$), and influences the evolution at that future time (through the term $A_1 v(t,-\tau)$).
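This transport interpretation can be simulated directly: store the hidden state $v(t,\cdot)$ on a grid and shift it at unit velocity each time step. The sketch below uses an assumed scalar example ($a_0 = -1$, $a_1 = -0.5$, $\tau = 1$), chosen only for illustration:

```python
import numpy as np

# Simulate x'(t) = a0*x(t) + a1*x(t - tau) by transporting the hidden
# state v(t,s) = x(t+s), s in [-tau, 0], through a discretized "pipe".
a0, a1, tau, dt = -1.0, -0.5, 1.0, 0.01
v = np.ones(int(tau / dt) + 1)   # v[k] ~ x(t - tau + k*dt); history x = 1
for _ in range(int(10.0 / dt)):
    x = v[-1]                    # boundary condition: v(t, 0) = x(t)
    xdot = a0 * x + a1 * v[0]    # delayed term emerges at s = -tau
    v[:-1] = v[1:]               # transport at unit velocity (shift by dt)
    v[-1] = x + dt * xdot        # updated state re-enters the pipe at s = 0
print(v[-1])                     # the stable example decays toward 0
```

Here the shift `v[:-1] = v[1:]` is exactly the method-of-characteristics update for $\partial_t v = \partial_s v$ with boundary condition $v(t,0)=x(t)$.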

The implication is that controlling a system with delay requires us to regulate both the visible part of the state, $x(t)$, and the hidden process, $v(t,s)$. This concept is well-established and is expressed efficiently in the use of Lyapunov-Krasovskii (LK) functionals - a concept dating back to at least 1959 [1]. LK functionals map $\mathbb{R}^n \times L_2[-\tau,0]$ to $\mathbb{R}$ and offer a method for combining the states, both current ($x(t)$) and hidden ($v(t,\cdot)$), into a single energy metric.

While the concept of an LK functional may seem obvious, this same logic has been relatively neglected in the design of controllers for time-delay systems. That is, a controller should not only account for the present state, $x(t)$, but should also react to the hidden state $v(t,s)$.

The reason for the relative neglect of the hidden state lies in the development of LMI methods for control in the mid-1990s. Specifically, Riccati equations and later LMIs were shown to be reliable and practical computational tools for designing optimal and robust controllers for finite-dimensional systems. As a result, research on stability and control of time-delay systems focused on developing clever ways to suppress the infinite-dimensional nature of the hidden state and apply LMIs to a resulting problem in $\mathbb{R}^n$ - the setting for which these tools were originally designed. For example, model transformations were used in [2, 3, 4], resulting in a Lyapunov function of the form $z(t)^T P z(t)$, where

$$z(t) = x(t-\tau) + \int_{t-\tau}^{t} \big(A_0 x(s) + A_1 x(s-\tau)\big)\, ds.$$

More recently, Jensen's inequality and free-weighting matrices have been used to parameterize ever more complex Lyapunov functions by projecting the distributed hidden state, $v(t,s)$, onto a finite-dimensional vector using variations of

$$z(t) = \int_{-\tau}^{0} v(t,s)\, ds.$$

Indeed, this approach was recently formalized and made infinitely scalable in [5] using a projection-based approach so that, for any set of basis functions, $L_i(s)$, we may define an expanded finite-dimensional vector

$$z_i(t) = \int_{-\tau}^{0} L_i(s)\, v(t,s)\, ds,$$

so that the resulting Lyapunov function again takes the form $z(t)^T P z(t)$, now with expanded $z(t)$.
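Each projection $z_i(t)$ is simply a quadrature of the hidden state against a basis function. A minimal sketch, assuming shifted Legendre polynomials for the $L_i$ and a sampled hidden state (both choices are illustrative, not prescribed by the paper):

```python
import numpy as np

# z_i(t) = \int_{-tau}^0 L_i(s) v(t,s) ds, with L_i shifted Legendre
# polynomials and the integral approximated by trapezoid quadrature.
tau = 1.0
s = np.linspace(-tau, 0.0, 201)
w = np.full_like(s, s[1] - s[0]); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid
v = np.cos(s)                              # a sample hidden state v(t, .)
u = 2.0 * s / tau + 1.0                    # map [-tau, 0] onto [-1, 1]
z = [float(np.sum(w * np.polynomial.legendre.Legendre.basis(i)(u) * v))
     for i in range(3)]
print(z)                                   # z[0] = sin(0) - sin(-1)
```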

Given that LMIs were developed for finite-dimensional systems, the desire to project the hidden state, $v(t,s)$, onto a finite-dimensional vector space is understandable. However, this approach severely limits our ability to perform controller synthesis. Specifically, these projections from an infinite-dimensional space to $\mathbb{R}^n$ are not invertible. This is problematic, since standard methods for controller synthesis require the state transformation $\mathcal{P}$ to be invertible (from primal state $x$ to dual state $z = \mathcal{P}^{-1} x$). In this approach, the controllers are then designed for the dual state and then implemented on the original state using the inverse transformation $\mathcal{P}^{-1}$.

By contrast, in this paper and its companion [6], we initially ignore the limitations of the LMI framework and directly formulate convex controller synthesis conditions on an infinite-dimensional space. Specifically, in [6], we formulated convex stabilizing controller synthesis conditions directly in terms of the existence of an invertible state transformation $\mathcal{P}$ and a dual control operator $\mathcal{Z}$. In Section II, these results are extended to provide a convex formulation of the optimal full-state feedback controller synthesis problem for a general class of Distributed Parameter Systems (DPS).

Having developed a convex formulation of the controller synthesis problem, the question becomes how to test feasibility of these conditions using LMIs - a tool developed for optimization of positive matrix variables (NOT positive operators). As discussed above, a natural approach is to find a way to project these operators onto a finite-dimensional state space (wherein they become matrices) and, indeed, one can view the work of [7, 8] (or, in the PDE case, [9]) as an attempt to do exactly this. However, these works were unable to recover controller gains and, furthermore, the feasibility conditions proposed in [6] and in Theorem 2 explicitly prohibit such an approach, as they require the positive operator $\mathcal{P}$ to be coercive, and a projected operator will necessarily have a non-trivial null-space.

Because projection is not an option, in this paper and in [6], we have proposed to reverse the dominant paradigm: rather than narrowing the control problem to a finite-dimensional space (where we can apply LMIs), we expand the LMI toolset to explicitly allow for parametrization and optimization of operator variables. To understand how this works, let us now discard ODE-type Lyapunov functions of the form $z(t)^T P z(t)$ and instead focus on LK functions of the form

$$V(x,v) := \int_{-\tau}^{0} v(s)^T P\, v(s)\, ds$$

where the LK function is positive if $P > 0$. Now, following the same logic presented above, we increase the complexity of the Lyapunov function by replacing $v(s)$ with $z(s)$ defined as

$$z(s) = \begin{bmatrix} x \\ Z(s)\, v(s) \\ \int_{-\tau}^{0} Z(s,\theta)\, v(\theta)\, d\theta \end{bmatrix}$$

where $Z(s)$ and $Z(s,\theta)$ are vectors of functions which increase the dimension of $z(s)$ and hence the complexity of the LK function - resulting in the well-known class of "complete-quadratic" functions. The advantage of this approach, then, is that the resulting LK function can also be represented as

$$V(x,v) := \int_{-\tau}^{0} \begin{bmatrix} x \\ v(s) \end{bmatrix}^T \left( \mathcal{P} \begin{bmatrix} x \\ v(\cdot) \end{bmatrix} \right)(s)\, ds$$

where

$$\left( \mathcal{P} \begin{bmatrix} x \\ v(\cdot) \end{bmatrix} \right)(s) = \begin{bmatrix} P x + \int_{-\tau}^{0} Q(\theta)\, v(\theta)\, d\theta \\ Q(s)^T x + S(s)\, v(s) + \int_{-\tau}^{0} R(s,\theta)\, v(\theta)\, d\theta \end{bmatrix}$$

for some $P$, $Q(s)$, $S(s)$, and $R(s,\theta)$ (defined in Theorem 6). In this way, positive matrices represent not just positive LK functions (of the complete-quadratic type) but also positive operators in a standardized form - denoted $\mathcal{P}_{\{P,Q,S,R\}}$. This means that if we assume our operators to have this standard form, we can enforce positivity using LMI constraints. Furthermore, linear constraints on the functions $Q$, $S$, and $R$ translate to linear constraints on the elements of the associated positive matrix.

The contribution of Section III, then, is to assume all operators have the form $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$ and to state conditions on the functions $Q_i$, $S_i$, and $R_{ij}$ such that the resulting operators satisfy the conditions of Theorem 2. This result is then formulated as an LMI in Section VII.

One of the drawbacks of the proposed approach is that the resulting controllers are expressed as operators - of the form $u(t) = \mathcal{Z}\mathcal{P}^{-1}\mathbf{x}(t)$. The solution to the LMI yields numerical values of $\mathcal{Z}$ and of $P$, $Q_i$, $S_i$, and $R_{ij}$. However, in order to compute the controller gains,

$$u(t) = K_1 x(t) + K_2 v(t,-\tau) + \int_{-\tau}^{0} K_3(s)\, v(t,s)\, ds,$$

we need to find $K_1$, $K_2$, and $K_3$ such that $\mathcal{K} = \mathcal{Z}\mathcal{P}^{-1}$. This problem is solved in Section V (which is a generalization of the result in [10]) by derivation of an analytic expression for $K_1$, $K_2$, and $K_3$ in terms of the operators $\mathcal{Z}$ and $\mathcal{P}$. Finally, practical implementation requires an efficient numerical scheme for calculating $u(t)$ in real-time. This issue is resolved in Section VI.
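Once $K_1$, $K_2$, and $K_3$ are available, evaluating the control law is a finite-dimensional quadrature over the stored history. A minimal sketch with hypothetical scalar gains (placeholders, not gains produced by the LMI):

```python
import numpy as np

# u(t) = K1 x(t) + K2 v(t,-tau) + \int_{-tau}^0 K3(s) v(t,s) ds,
# with the integral approximated by trapezoid quadrature.
tau = 1.0
K1, K2 = -2.0, 0.5                      # hypothetical gains
K3 = lambda s: np.exp(s)                # hypothetical gain kernel
s = np.linspace(-tau, 0.0, 101)
w = np.full_like(s, s[1] - s[0]); w[0] *= 0.5; w[-1] *= 0.5
x = 1.0                                 # current state x(t)
v = np.cos(s)                           # stored history v(t,s) = x(t+s)
u = K1 * x + K2 * v[0] + float(np.sum(w * K3(s) * v))
print(u)                                # ~ -2 + 0.2702 + 0.5554
```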

To make the results of this paper more broadly useful, we have developed efficient implementations, supporting subroutines, and numerical implementation tools for the controller. These are available online at [11]. In Section VIII, the results are shown to be non-conservative to several decimal places by computing the minimal achievable closed-loop $H_\infty$-norm bound for several systems and comparing to results obtained using high-order Padé approximations of the same systems. The results presented in this paper are significantly better than any known algorithm for controller synthesis with provable performance metrics. Furthermore, these results can be extended in obvious ways to robust control with uncertainty in system parameters or in delay.

As a final note, the reader should be aware that although the discussion here is for a single delay, the results developed are for multiple delays - a case which requires additional mathematical formalism.

### I-a Notation

Shorthand notation used throughout this paper includes the Hilbert spaces $L_2^n[a,b]$ of square integrable functions from $[a,b]$ to $\mathbb{R}^n$ and $W_2^n[a,b] := \{x \,:\, x, \dot{x} \in L_2^n[a,b]\}$. We use $L_2^n$ and $W_2^n$ when domains are clear from context. We also use the extensions $L_2^{n \times m}[a,b]$ and $W_2^{n \times m}[a,b]$ for matrix-valued functions. $C[a,b]$ denotes the continuous functions on $[a,b]$. $\mathbb{S}^n$ denotes the symmetric matrices. We say an operator $\mathcal{P} : Z \to Z$ is positive on a subset $X$ of Hilbert space $Z$ if $\langle x, \mathcal{P} x \rangle_Z \ge 0$ for all $x \in X$. $\mathcal{P}$ is coercive on $X$ if $\langle x, \mathcal{P} x \rangle_Z \ge \epsilon \|x\|^2$ for some $\epsilon > 0$ and for all $x \in X$. Given an operator $\mathcal{P} : Z \to Z$ and a set $X \subset Z$, we use the shorthand $\mathcal{P}(X)$ to denote the image of $\mathcal{P}$ on subset $X$. $I_n$ denotes the $n \times n$ identity matrix. $0_{n \times m}$ is the $n \times m$ matrix of zeros with shorthand $0_n := 0_{n \times n}$. We will occasionally denote the intervals $T_i := [-\tau_i, 0]$ and $T := [-\tau_K, 0]$. For a natural number, $K \in \mathbb{N}$, we adopt the index shorthand notation $[K]$, which denotes the set $\{1, \dots, K\}$.

## II A Convex Formulation of the Controller Synthesis Problem for Distributed Parameter Systems

Consider the generic distributed-parameter system

$$\dot{\mathbf{x}}(t) = \mathcal{A}\mathbf{x}(t) + \mathcal{B}_1 w(t) + \mathcal{B}_2 u(t), \qquad y(t) = \mathcal{C}\mathbf{x}(t) + \mathcal{D}_1 w(t) + \mathcal{D}_2 u(t) \tag{1}$$

where $\mathcal{A}$, $\mathcal{B}_1$, $\mathcal{B}_2$, $\mathcal{C}$, $\mathcal{D}_1$, and $\mathcal{D}_2$ are linear operators on the appropriate spaces.

We begin with the following mathematical result on duality, which is a reduced version of Theorem 3 in [6].

###### Theorem 1

Suppose $\mathcal{P} : Z \to Z$ is a bounded, coercive linear operator with $\mathcal{P}(X) = X$ and which is self-adjoint with respect to the $Z$ inner product. Then $\mathcal{P}^{-1} : Z \to Z$: exists; is bounded; is self-adjoint; satisfies $\mathcal{P}^{-1}(X) = X$; and is coercive.

Using Theorem 1, we give a convex formulation of the optimal full-state feedback controller synthesis problem. This result combines: a) a relatively simple extension of the Schur complement Lemma to infinite dimensions; with b) the dual synthesis condition in [6]. We note that the ODE equivalent of this theorem is necessary and sufficient, and the proof structure can be credited to, e.g., [12].

###### Theorem 2

Suppose there exists an $\epsilon > 0$, an operator $\mathcal{P} : Z \to Z$ which satisfies the conditions of Theorem 1, and an operator $\mathcal{Z}$ such that

$$\begin{aligned}
&\langle v, (\mathcal{C}\mathcal{P} + \mathcal{D}_2\mathcal{Z}) z \rangle + \langle (\mathcal{C}\mathcal{P} + \mathcal{D}_2\mathcal{Z}) z, v \rangle + \langle v, \mathcal{D}_1 w \rangle + \langle \mathcal{D}_1 w, v \rangle - \gamma \langle v, v \rangle \\
&\quad + \langle z, (\mathcal{A}\mathcal{P} + \mathcal{B}_2\mathcal{Z}) z \rangle + \langle (\mathcal{A}\mathcal{P} + \mathcal{B}_2\mathcal{Z}) z, z \rangle + \langle z, \mathcal{B}_1 w \rangle + \langle \mathcal{B}_1 w, z \rangle - \gamma \langle w, w \rangle \le -\epsilon \|z\|_Z^2
\end{aligned}$$

for all $z \in X$, $w$, and $v$. Then for any $w \in L_2$, if $\mathbf{x}(0) = 0$ and $\mathbf{x}$ and $y$ satisfy

$$\dot{\mathbf{x}}(t) = (\mathcal{A} + \mathcal{B}_2\mathcal{Z}\mathcal{P}^{-1})\mathbf{x}(t) + \mathcal{B}_1 w(t), \qquad y(t) = (\mathcal{C} + \mathcal{D}_2\mathcal{Z}\mathcal{P}^{-1})\mathbf{x}(t) + \mathcal{D}_1 w(t) \tag{2}$$

for all $t \ge 0$, then $\|y\|_{L_2} \le \gamma \|w\|_{L_2}$.

Proof: By Theorem 1, $\mathcal{P}^{-1} : Z \to Z$ exists; is bounded; is self-adjoint; satisfies $\mathcal{P}^{-1}(X) = X$; and is coercive.

For $w \in L_2$, let $\mathbf{x}$ and $y$ be a solution of

$$\dot{\mathbf{x}}(t) = (\mathcal{A} + \mathcal{B}_2\mathcal{Z}\mathcal{P}^{-1})\mathbf{x}(t) + \mathcal{B}_1 w(t), \qquad y(t) = (\mathcal{C} + \mathcal{D}_2\mathcal{Z}\mathcal{P}^{-1})\mathbf{x}(t) + \mathcal{D}_1 w(t)$$

such that $\mathbf{x}(t) \in X$ for any finite $t \ge 0$.

Define the storage function $V(t) := \langle \mathbf{x}(t), \mathcal{P}^{-1}\mathbf{x}(t) \rangle_Z$. Then, since $\mathcal{P}^{-1}$ is coercive, $V(t) \ge \alpha \|\mathbf{x}(t)\|_Z^2$ for some $\alpha > 0$. Define $z(t) := \mathcal{P}^{-1}\mathbf{x}(t)$. Differentiating the storage function in time, we obtain

$$\begin{aligned}
\dot{V}(t) &= \langle \mathcal{P}^{-1}\mathbf{x}(t), \mathcal{A}\mathbf{x}(t) \rangle + \langle \mathcal{P}^{-1}\mathbf{x}(t), \mathcal{B}_2\mathcal{Z}\mathcal{P}^{-1}\mathbf{x}(t) \rangle + \langle \mathcal{P}^{-1}\mathbf{x}(t), \mathcal{B}_1 w(t) \rangle \\
&\quad + \langle \mathcal{A}\mathbf{x}(t), \mathcal{P}^{-1}\mathbf{x}(t) \rangle + \langle \mathcal{B}_2\mathcal{Z}\mathcal{P}^{-1}\mathbf{x}(t), \mathcal{P}^{-1}\mathbf{x}(t) \rangle + \langle \mathcal{B}_1 w(t), \mathcal{P}^{-1}\mathbf{x}(t) \rangle \\
&= \langle z(t), \mathcal{A}\mathcal{P} z(t) \rangle + \langle \mathcal{B}_2\mathcal{Z} z(t), z(t) \rangle + \langle z(t), \mathcal{B}_1 w(t) \rangle + \langle \mathcal{A}\mathcal{P} z(t), z(t) \rangle + \langle z(t), \mathcal{B}_2\mathcal{Z} z(t) \rangle + \langle \mathcal{B}_1 w(t), z(t) \rangle \\
&\le \gamma \langle w(t), w(t) \rangle - \langle v(t), \mathcal{C}\mathcal{P} z(t) \rangle - \langle \mathcal{C}\mathcal{P} z(t), v(t) \rangle - \langle v(t), \mathcal{D}_2\mathcal{Z} z(t) \rangle - \langle \mathcal{D}_2\mathcal{Z} z(t), v(t) \rangle \\
&\quad - \langle v(t), \mathcal{D}_1 w(t) \rangle - \langle \mathcal{D}_1 w(t), v(t) \rangle + \gamma \langle v(t), v(t) \rangle - \epsilon \|z(t)\|_Z^2 \\
&= \gamma \langle w(t), w(t) \rangle - \langle v(t), (\mathcal{C} + \mathcal{D}_2\mathcal{Z}\mathcal{P}^{-1})\mathbf{x}(t) + \mathcal{D}_1 w(t) \rangle - \langle (\mathcal{C} + \mathcal{D}_2\mathcal{Z}\mathcal{P}^{-1})\mathbf{x}(t) + \mathcal{D}_1 w(t), v(t) \rangle + \gamma \langle v(t), v(t) \rangle - \epsilon \|z(t)\|_Z^2 \\
&= \gamma \langle w(t), w(t) \rangle - \langle v(t), y(t) \rangle - \langle y(t), v(t) \rangle + \gamma \langle v(t), v(t) \rangle - \epsilon \|z(t)\|_Z^2
\end{aligned}$$

for any $v$ and all $t \ge 0$. Choosing $v(t) = \frac{1}{\gamma} y(t)$, we get

$$\dot{V}(t) \le \gamma \langle w(t), w(t) \rangle - \frac{1}{\gamma} \langle y(t), y(t) \rangle - \epsilon \|z(t)\|_Z^2.$$

Since $\mathcal{P}$ is bounded, there exists a $\sigma > 0$ such that

$$V(t) = \langle \mathbf{x}(t), \mathcal{P}^{-1}\mathbf{x}(t) \rangle_Z = \langle z(t), \mathcal{P} z(t) \rangle_Z \le \sigma \|z(t)\|_Z^2.$$

We conclude, therefore, that

$$\dot{V}(t) \le -\frac{\epsilon}{\sigma} V(t) + \gamma \|w(t)\|^2 - \frac{1}{\gamma} \|y(t)\|^2.$$

Therefore, since $V(t) \ge 0$ and $V(0) = 0$, we may conclude by Gronwall-Bellman that $V$ remains bounded on any finite interval. Integrating this expression forward in time, and using $V(0) = 0$ and $V(T) \ge 0$, we obtain

$$\frac{1}{\gamma} \|y\|_{L_2}^2 \le \gamma \|w\|_{L_2}^2,$$
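The integration step can be spelled out: dropping the nonpositive term $-\frac{\epsilon}{\sigma}V(t)$ and integrating over $[0,T]$ gives

```latex
0 \le V(T) = V(0) + \int_0^T \dot V(t)\,dt
  \le \int_0^T \Big( \gamma \|w(t)\|^2 - \tfrac{1}{\gamma}\|y(t)\|^2 \Big)\,dt,
```

so that, using $V(0) = 0$ and letting $T \to \infty$, $\frac{1}{\gamma}\|y\|_{L_2}^2 \le \gamma\|w\|_{L_2}^2$.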

which concludes the proof.

## III Theorem 2 Applied to Multi-Delay Systems

Theorem 2 gives a convex formulation of the controller synthesis problem for a general class of distributed-parameter systems. In this section and the next, we apply Theorem 2 to the case of systems with multiple delays. Specifically, we consider solutions to the system of equations given by

$$\begin{aligned}
\dot{x}(t) &= A_0 x(t) + \sum_{i=1}^{K} A_i x(t - \tau_i) + B_1 w(t) + B_2 u(t) \\
y(t) &= C_0 x(t) + \sum_{i=1}^{K} C_i x(t - \tau_i) + D_1 w(t) + D_2 u(t)
\end{aligned} \tag{3}$$

where $w(t) \in \mathbb{R}^m$ is the disturbance input, $u(t) \in \mathbb{R}^p$ is the controlled input, $y(t) \in \mathbb{R}^q$ is the regulated output, $x(t) \in \mathbb{R}^n$ are the state variables, and $\tau_i$ for $i \in [K]$ are the delays, ordered by increasing magnitude. We assume $\tau_i < \tau_{i+1}$ for $i \in [K-1]$.

Our first step, then, is to express System (3) in the abstract form of (1). Following the mathematical formalism developed in [6], we define the inner-product space $Z_{m,n,K} := \mathbb{R}^m \times L_2^n[-\tau_1, 0] \times \cdots \times L_2^n[-\tau_K, 0]$ and, for $\{x, \phi_1, \dots, \phi_K\} \in Z_{m,n,K}$, we define the following shorthand notation

$$\begin{bmatrix} x \\ \phi_i \end{bmatrix} := \{x, \phi_1, \cdots, \phi_K\},$$

which allows us to simplify expression of the inner product on $Z_{m,n,K}$, which we define to be

$$\left\langle \begin{bmatrix} y \\ \psi_i \end{bmatrix}, \begin{bmatrix} x \\ \phi_i \end{bmatrix} \right\rangle_{Z_{m,n,K}} = \tau_K y^T x + \sum_{i=1}^{K} \int_{-\tau_i}^{0} \psi_i(s)^T \phi_i(s)\, ds.$$
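Numerically, this inner product is a weighted Euclidean term plus $K$ quadratures. A minimal sketch for the scalar case ($n = m = 1$), with illustrative grids and data:

```python
import numpy as np

# <y, x>_Z = tau_K * y^T x + sum_i \int_{-tau_i}^0 psi_i(s)^T phi_i(s) ds,
# approximated with trapezoid quadrature on each interval [-tau_i, 0].
def z_inner(y, psis, x, phis, taus):
    val = taus[-1] * float(np.dot(y, x))
    for psi, phi, tau in zip(psis, phis, taus):
        s = np.linspace(-tau, 0.0, len(phi))
        w = np.full(len(phi), s[1] - s[0])
        w[0] *= 0.5; w[-1] *= 0.5             # trapezoid weights
        val += float(np.sum(w * psi * phi))
    return val

taus = [0.5, 1.0]                              # tau_1 < tau_2 = tau_K
x = np.array([1.0])
phis = [np.ones(101), np.ones(101)]            # phi_i = 1 on [-tau_i, 0]
print(z_inner(x, phis, x, phis, taus))         # 1.0*1 + 0.5 + 1.0 = 2.5
```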

When $m = n$, we simplify the notation using $Z_{n,K} := Z_{n,n,K}$. The state-space for System (3) is defined as

$$X := \left\{ \begin{bmatrix} x \\ \phi_i \end{bmatrix} \in Z_{n,K} \;:\; \phi_i \in W_2^n[-\tau_i, 0] \text{ and } \phi_i(0) = x \text{ for all } i \in [K] \right\}.$$

Note that $X$ is a subspace of $Z_{n,K}$ and inherits the norm of $Z_{n,K}$. We furthermore extend this notation to say

$$\begin{bmatrix} x \\ \phi_i \end{bmatrix}(s) = \begin{bmatrix} y \\ f(s,i) \end{bmatrix}$$

if $x = y$ and $\phi_i(s) = f(s,i)$ for all $s \in [-\tau_i, 0]$ and $i \in [K]$.

We now represent the infinitesimal generator, $\mathcal{A}$, of Eqn. (3) as

$$\left( \mathcal{A} \begin{bmatrix} x \\ \phi_i \end{bmatrix} \right)(s) := \begin{bmatrix} A_0 x + \sum_{i=1}^{K} A_i \phi_i(-\tau_i) \\ \dot{\phi}_i(s) \end{bmatrix}.$$

Furthermore, the operators $\mathcal{B}_1$, $\mathcal{B}_2$, $\mathcal{C}$, $\mathcal{D}_1$, and $\mathcal{D}_2$ are defined as

$$(\mathcal{B}_1 w)(s) := \begin{bmatrix} B_1 w \\ 0 \end{bmatrix}, \quad (\mathcal{B}_2 u)(s) := \begin{bmatrix} B_2 u \\ 0 \end{bmatrix}, \quad \mathcal{C} \begin{bmatrix} \psi \\ \phi_i \end{bmatrix} := C_0 \psi + \sum_i C_i \phi_i(-\tau_i), \quad \mathcal{D}_1 w := D_1 w, \quad \mathcal{D}_2 u := D_2 u.$$

Having defined these operators, we note that for any solution $x(t)$ of Eqn. (3), using the above notation, if we define

$$\mathbf{x}(t)(s) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}(s) = \begin{bmatrix} x(t) \\ x(t+s) \end{bmatrix},$$

then $\mathbf{x}(t)$ satisfies Eqn. (1) using the operator definitions given above. The converse statement is also true.

### III-A A Parametrization of Operators for the Controller Synthesis Problem

We now introduce a class of operators $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$, parameterized by a matrix $P$ and matrix-valued functions $Q_i$, $S_i$, $R_{ij}$, as

$$\left( \mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}} \begin{bmatrix} x \\ \phi_i \end{bmatrix} \right)(s) := \begin{bmatrix} P x + \sum_{i=1}^{K} \int_{-\tau_i}^{0} Q_i(s)\, \phi_i(s)\, ds \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\, \phi_i(s) + \sum_{j=1}^{K} \int_{-\tau_j}^{0} R_{ij}(s,\theta)\, \phi_j(\theta)\, d\theta \end{bmatrix}.$$

For this class of operators, the following lemma combines Lemmas 3 and 4 in [6] and gives conditions under which $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$ satisfies the conditions of Theorem 1.

###### Lemma 3

Suppose that $P = P^T$, and $Q_i \in W_2^{n \times n}[-\tau_i, 0]$, $S_i(s) = S_i(s)^T$, and $R_{ij}(s,\theta) = R_{ji}(\theta,s)^T$ for all $i,j \in [K]$. Further suppose $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$ is coercive on $X$. Then $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$: is a self-adjoint bounded linear operator with respect to the inner product defined on $Z_{n,K}$; maps $X \to X$; and satisfies $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}(X) = X$.

Starting in Section IV, we will assume $Q_i$, $S_i$, and $R_{ij}$ are polynomial and give LMI conditions for positivity of operators of the form $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$.

### III-B The Controller Synthesis Problem for Multiple-Delay Systems

Theorem 2 gives a convex formulation of the controller synthesis problem, where the data is the operators $\mathcal{A}$, $\mathcal{B}_1$, $\mathcal{B}_2$, $\mathcal{C}$, $\mathcal{D}_1$, and $\mathcal{D}_2$ and the variables are the operators $\mathcal{P}$ and $\mathcal{Z}$. For multi-delay systems, we have defined the 6 operators and parameterized the decision variable $\mathcal{P}$ using $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$. We now likewise parameterize the decision variable $\mathcal{Z}$ using matrices $Z_0$, $Z_{1i}$ and functions $Z_{2i}$ as

$$\mathcal{Z} \begin{bmatrix} \psi \\ \phi_i \end{bmatrix} := Z_0 \psi + \sum_{i} Z_{1i}\, \phi_i(-\tau_i) + \sum_{i} \int_{-\tau_i}^{0} Z_{2i}(s)\, \phi_i(s)\, ds.$$

The following theorem gives convex constraints on the variables $P$, $Q_i$, $S_i$, $R_{ij}$, $Z_0$, $Z_{1i}$, and $Z_{2i}$ under which the conditions of Theorem 2 are satisfied when $\mathcal{A}$, $\mathcal{B}_1$, $\mathcal{B}_2$, $\mathcal{C}$, $\mathcal{D}_1$, and $\mathcal{D}_2$ are as defined above.

###### Theorem 4

Suppose that there exist $\epsilon > 0$, $P \in \mathbb{S}^n$, and functions $Q_i$, $S_i$, $R_{ij}$ such that $S_i(s) = S_i(s)^T$ and $R_{ij}(s,\theta) = R_{ji}(\theta,s)^T$ for all $s, \theta$ and $i,j \in [K]$, such that $\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}$ is coercive on $X$, and matrices $Z_0$, $Z_{1i}$ and functions $Z_{2i}$ such that the operator $-\mathcal{P}_{\{D, E_i, \dot{S}_i, G_{ij}\}}$ is positive on $Z_{q+m+n(K+1),n,K}$, where

$$L_0 := A_0 P + \sum_{i=1}^{K} \left( \tau_K A_i Q_i(-\tau_i)^T + \tfrac{1}{2} S_i(0) \right),$$

$$D = \begin{bmatrix}
-\frac{\gamma}{\tau_K} I & \frac{1}{\tau_K} D_1 & \frac{1}{\tau_K} C_0 P + \sum_i C_i Q_i(-\tau_i)^T + \frac{1}{\tau_K} D_2 Z_0 & C_1 S_1(-\tau_1) + \frac{1}{\tau_K} D_2 Z_{11} & \cdots & C_K S_K(-\tau_K) + \frac{1}{\tau_K} D_2 Z_{1K} \\
*^T & -\frac{\gamma}{\tau_K} I & B_1^T & 0 & \cdots & 0 \\
*^T & *^T & B_2 Z_0 + Z_0^T B_2^T + L_0 + L_0^T & \tau_K A_1 S_1(-\tau_1) + B_2 Z_{11} & \cdots & \tau_K A_K S_K(-\tau_K) + B_2 Z_{1K} \\
*^T & *^T & *^T & -S_1(-\tau_1) & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
*^T & *^T & *^T & *^T & \cdots & -S_K(-\tau_K)
\end{bmatrix}$$

$$E_i(s) = \frac{1}{\tau_K} \begin{bmatrix} C_0 Q_i(s) + \sum_j C_j R_{ji}(-\tau_j, s) + D_2 Z_{2i}(s) \\ 0 \\ \tau_K \left( A_0 Q_i(s) + \dot{Q}_i(s) + \sum_{j=1}^{K} A_j R_{ji}(-\tau_j, s) + B_2 Z_{2i}(s) \right) \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

$$G_{ij}(s,\theta) := \frac{\partial}{\partial s} R_{ij}(s,\theta) + \frac{\partial}{\partial \theta} R_{ji}(s,\theta)^T, \qquad i,j \in [K].$$

Then if

$$u(t) = \mathcal{Z}\mathcal{P}_{\{P, Q_i, S_i, R_{ij}\}}^{-1} \begin{bmatrix} x(t) \\ x(t+s) \end{bmatrix},$$

where

$$\mathcal{Z} \begin{bmatrix} x \\ \phi_i \end{bmatrix} := Z_0 x + \sum_{i=1}^{K} Z_{1i}\, \phi_i(-\tau_i) + \sum_{i=1}^{K} \int_{-\tau_i}^{0} Z_{2i}(s)\, \phi_i(s)\, ds,$$

then for any $w \in L_2$, if $x$ and $y$ satisfy Eqn. (3), $\|y\|_{L_2} \le \gamma \|w\|_{L_2}$.

Proof: For any $w \in L_2$, using the definitions of $\mathbf{x}(t)$ and of $\mathcal{A}$, $\mathcal{B}_1$, $\mathcal{B}_2$, $\mathcal{C}$, $\mathcal{D}_1$, and $\mathcal{D}_2$ given above, $x$ and $y$ satisfy Eqn. (3) if and only if $\mathbf{x}$ and $y$ satisfy Eqn. (1). Therefore, by Theorem 2, it suffices to show that the operator inequality of Theorem 2 holds for all $z \in X$, $w$, and $v$. The rest of the proof is lengthy but straightforward. We simply show that if we define

$$f = \begin{bmatrix} z_{2,1}(-\tau_1)^T & \cdots & z_{2,K}(-\tau_K)^T \end{bmatrix}^T,$$

then

$$\left\langle \begin{bmatrix} \begin{bmatrix} v \\ w \\ z_1 \\ f \end{bmatrix} \\ z_{2i} \end{bmatrix}, \mathcal{P}_{\{D, E_i, \dot{S}_i, G_{ij}\}} \begin{bmatrix} \begin{bmatrix} v \\ w \\ z_1 \\ f \end{bmatrix} \\ z_{2i} \end{bmatrix} \right\rangle_{Z_{q+m+n(K+1),n,K}} \le -\epsilon \left\| \begin{bmatrix} z_1 \\ z_{2i} \end{bmatrix} \right\|_{Z_{n,K}}^2 = -\epsilon \|z\|_{Z_{n,K}}^2. \tag{4}$$

Before we begin, for convenience and efficiency of presentation, we will denote

$$h := \begin{bmatrix} v^T & w^T & z_1^T & f^T \end{bmatrix}^T.$$

It may also be helpful to note that the quadratic form defined by a $\mathcal{P}_{\{D, E_i, \dot{S}_i, G_{ij}\}}$ operator expands out as

$$= \tau_K h^T D h + \tau_K \sum_{i=1}^{K} \int_{-\tau_i}^{0} h^T E_i(s)\, z_{2i}(s)$$