A Convex Formulation of the Optimal Controller Synthesis Problem for Multi-Delay Systems and Implementation using SOS
Abstract
In this paper, we show how the problem of designing optimal state-feedback controllers for distributed-parameter systems can be formulated as a Linear Operator Inequality, a form of convex optimization with operator variables. Next, we apply this framework to linear systems with multiple delays, parameterizing a class of "complete quadratic" operators using positive matrices and expressing the result as a Linear Matrix Inequality (LMI). Given the solution to the LMI, finding the optimal controller gains requires an analytic formula for operator inversion. Our next result, then, is such an analytic formula, along with additional methods and guidance for real-time implementation. Finally, we show that the resulting LMIs and controllers are accurate to several decimal places, as measured by the minimal achievable closed-loop norm on several test cases and as compared against high-order Padé approximations. We also illustrate the approach and simulate the closed-loop response on a scalable numerical example from smart-building design.
Matthew M. Peet, Member, IEEE,
M. Peet is with the School for the Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ 85298 USA. E-mail: mpeet@asu.edu
Index Terms

Delay Systems, LMIs, Controller Synthesis.
I Introduction
To control systems with delay, we must account for the transportation and flow of information. Although solutions to equations of the form
ẋ(t) = A₀x(t) + A₁x(t − τ), τ > 0,
appear to be functions of time, they are better understood as functions of both time and space: x(t + s), s ∈ [−τ, 0].
That is, instead of being lost, the state information, x(t), is preserved as x(t + s): transported through a hidden process (a transport equation), moving at fixed velocity (1), through a pipe of fixed length (τ), emerging a fixed time later (τ) as x(t − τ), and influencing the evolution at that future time (t + τ).
The implication is that controlling a system with delay requires us to regulate both the visible part of the state, x(t), and the hidden process, x(t + s). This concept is well-established and is expressed efficiently through the use of Lyapunov-Krasovskii (LK) functionals, a concept dating back to at least 1959 [1]. LK functionals map the state history to a scalar and thereby offer a method for combining the states, both current (x(t)) and hidden (x(t + s)), into a single energy metric.
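For concreteness, a simple LK functional of this type for a single delay τ has the familiar form (an illustrative textbook example, not a construction specific to this paper):

```latex
V(x_t) = x(t)^\top P\, x(t) + \int_{-\tau}^{0} x(t+s)^\top Q\, x(t+s)\, ds,
\qquad P \succ 0,\; Q \succ 0,
```

which assigns a single energy value to the pair of current and hidden states.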
While the concept of an LK functional may seem obvious, this same logic has been relatively neglected in the design of controllers for time-delay systems. That is, a controller should not only account for the present state, x(t), but should also react to the hidden state, x(t + s).
The reason for the relative neglect of the hidden state lies in the development of LMI methods for control in the mid-1990s. Specifically, Riccati equations and, later, LMIs were shown to be reliable and practical computational tools for designing optimal and robust controllers for finite-dimensional systems. As a result, research on stability and control of time-delay systems focused on developing clever ways to suppress the infinite-dimensional nature of the hidden state and to apply LMIs to a resulting problem in ℝⁿ, the setting for which these tools were originally designed. For example, model transformations were used in [2, 3, 4], resulting in a Lyapunov function defined on a transformed, finite-dimensional state.
More recently, Jensen's inequality and free-weighting matrices have been used to parameterize ever more complex Lyapunov functions by projecting the distributed hidden state, x(t + s), onto a finite-dimensional vector using variations of the integral ∫_{−τ}^{0} x(t + s) ds.
Indeed, this approach was recently formalized and made infinitely scalable in [5] using a projection-based approach, so that for any set of basis functions we may define an expanded finite-dimensional vector
so that the resulting Lyapunov function becomes a quadratic form in this expanded vector.
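As a concrete illustration of such a projection (a minimal standard-library sketch; the Legendre basis choice and the names `legendre` and `project_history` are ours, not those of [5]):

```python
def legendre(k, u):
    """Legendre polynomial P_k(u) on [-1, 1] via the three-term recurrence."""
    p0, p1 = 1.0, u
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2*n + 1)*u*p1 - n*p0) / (n + 1)
    return p1

def project_history(history, tau, K):
    """Project a sampled history {s_i: x(t+s_i)} on [-tau, 0] onto the
    first K Legendre basis functions, using the trapezoid rule."""
    ss = sorted(history)
    zeta = []
    for k in range(K):
        acc = 0.0
        for a, b in zip(ss, ss[1:]):
            fa = legendre(k, 2*a/tau + 1) * history[a]
            fb = legendre(k, 2*b/tau + 1) * history[b]
            acc += 0.5*(b - a)*(fa + fb)
        zeta.append(acc)
    return zeta
```

With a constant history, only the zeroth coefficient is nonzero, reflecting the orthogonality of the basis.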
Given that LMIs were developed for finite-dimensional systems, the desire to project the hidden state, x(t + s), onto a finite-dimensional vector space is understandable. However, this approach severely limits our ability to perform controller synthesis. Specifically, these projections from the infinite-dimensional state space are not invertible. This is problematic, since standard methods for controller synthesis require an invertible state transformation (from the primal state to the dual state). In this approach, the controllers are first designed for the dual state and then implemented on the original state using the inverse transformation.
By contrast, in this paper and its companion [6], we initially ignore the limitations of the LMI framework and directly formulate convex controller synthesis conditions on an infinite-dimensional space. Specifically, in [6], we formulated convex stabilizing-controller synthesis conditions directly in terms of the existence of an invertible state transformation and a dual control operator. In Section II, these results are extended to provide a convex formulation of the optimal full-state feedback controller synthesis problem for a general class of Distributed-Parameter Systems (DPS).
Having developed a convex formulation of the controller synthesis problem, the question becomes how to test feasibility of these conditions using LMIs, a tool developed for the optimization of positive matrix variables (not positive operators). As discussed above, a natural approach is to find a way to project these operators onto a finite-dimensional state space (wherein they become matrices), and indeed, one can view the work of [7, 8] (or, in the PDE case, [9]) as an attempt to do exactly this. However, these works were unable to recover controller gains; furthermore, the feasibility conditions proposed in [6] and in Theorem 2 explicitly prohibit such an approach, as they require the positive operator to be coercive, and a projected operator will necessarily have a nontrivial nullspace.
Because projection is not an option, in this paper and in [6], we propose to reverse the dominant paradigm: rather than narrowing the control problem to a finite-dimensional space (where we can apply LMIs), we expand the LMI toolset to explicitly allow for the parametrization and optimization of operator variables. To understand how this works, let us now discard ODE-based LK functions of the form V(x(t)) = x(t)ᵀPx(t) and instead focus on LK functionals of the form
where the LK functional is positive if the defining matrix is positive semidefinite. Now, following the same logic presented above, we increase the complexity of the Lyapunov functional by replacing the constant kernel with a parameterized kernel, defined as
where the kernels are constructed from vectors of basis functions; increasing the dimension of these vectors increases the complexity of the LK functional, resulting in the well-known class of "complete-quadratic" functionals. The advantage of this approach, then, is that the resulting LK functional can also be represented as
where
for some matrix and matrix-valued functions (defined in Theorem 6). In this way, positive matrices represent not just positive LK functionals (of the complete-quadratic type) but also positive operators in a standardized form. This means that if we assume our operators to have this standard form, we can enforce positivity using LMI constraints. Furthermore, linear constraints on the defining functions translate to linear constraints on the elements of the matrix.
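To illustrate what such a "standard form" typically looks like, related work on delay systems uses multiplier-plus-integral operators of roughly the following shape (the symbols below are illustrative and need not match those of Theorem 6):

```latex
\left( \mathcal{P} \begin{bmatrix} x \\ \phi \end{bmatrix} \right)(s)
= \begin{bmatrix}
P\,x + \int_{-\tau}^{0} Q(s)\, \phi(s)\, ds \\[4pt]
Q(s)^{\top} x + S(s)\, \phi(s) + \int_{-\tau}^{0} R(s,\theta)\, \phi(\theta)\, d\theta
\end{bmatrix},
```

so that the quadratic form defined by such an operator expands into exactly a complete-quadratic functional.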
The contribution of Section III, then, is to assume all operators have this standard form and to state conditions on the defining functions such that the resulting operators satisfy the conditions of Theorem 2. This result is then formulated as an LMI in Section VII.
One of the drawbacks of the proposed approach is that the resulting controllers are expressed as a composition of operators: the dual control operator applied to the inverse of the state transformation. The solution to the LMI yields numerical values for the matrices and functions which define these two operators. However, in order to compute the controller gains, we need the matrix and functions which define the inverse operator explicitly. This problem is solved in Section V (which is a generalization of the result in [10]) by derivation of an analytic expression for the defining matrix and functions of the inverse in terms of those of the original operator. Finally, practical implementation requires an efficient numerical scheme for calculating the control input in real time. This issue is resolved in Section VI.
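To sketch what real-time evaluation of such an operator-valued control law involves, consider a scalar distributed-delay feedback u(t) = K₀x(t) + ∫_{−τ}^{0} K₁(s)x(t+s) ds, evaluated with a history buffer and the trapezoid rule (a hypothetical minimal example; `K0` and `K1` are illustrative gains, not the gains derived in Section V):

```python
from collections import deque

def make_controller(K0, K1, tau, dt):
    """Real-time evaluation of the distributed-delay state feedback
        u(t) = K0*x(t) + integral_{-tau}^{0} K1(s)*x(t+s) ds
    for a scalar state, via a fixed-length history buffer."""
    n = int(round(tau/dt))
    hist = deque([0.0]*(n + 1), maxlen=n + 1)  # x(t-tau), ..., x(t)
    def step(x):
        hist.append(x)          # push the newest sample, drop the oldest
        u = K0*x
        # trapezoid rule over the grid s_i = -tau + i*dt
        for i in range(n + 1):
            w = dt if 0 < i < n else dt/2
            u += w * K1(-tau + i*dt) * hist[i]
        return u
    return step
```

Each call to `step` costs O(τ/dt) operations, which is the main real-time consideration.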
To make the results of this paper more broadly useful, we have developed efficient implementations, supporting subroutines, and numerical implementation tools for the controller. These are available online at [11]. In Section VIII, the results are shown to be non-conservative to several decimal places by computing the minimal achievable closed-loop norm bound for several systems and comparing against results obtained using high-order Padé approximations of the same systems. These results are significantly better than those of any algorithm known to us for controller synthesis with provable performance metrics. Furthermore, these results can be extended in obvious ways to robust control with uncertainty in the system parameters or in the delays.
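The Padé baseline mentioned above can be reproduced from the closed-form coefficients of the diagonal Padé approximant of a pure delay; below is a small sketch (our own helper, not code from [11]):

```python
from math import factorial

def pade_delay(T, n):
    """Coefficients (num, den) of the diagonal Pade(n, n) approximation
    of the delay transfer function exp(-s*T), in increasing powers of s."""
    den = [factorial(2*n - k)*factorial(n)
           / (factorial(2*n)*factorial(k)*factorial(n - k)) * T**k
           for k in range(n + 1)]
    num = [(-1)**k * c for k, c in enumerate(den)]  # num(s) = den(-s)
    return num, den
```

For n = 1 and delay T this returns the familiar first-order approximation (1 − sT/2)/(1 + sT/2).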
As a final note, the reader should be aware that although the discussion here is for a single delay, the results developed are for multiple delays, a case which requires additional mathematical formalism.
I-A Notation
Shorthand notation used throughout this paper includes the Hilbert space of square-integrable functions L₂[a, b] from [a, b] to ℝⁿ, along with the corresponding Sobolev space. We omit the domains when they are clear from context. We also use the corresponding extensions for matrix-valued functions. C[a, b] denotes the continuous functions on [a, b]. 𝕊ⁿ denotes the symmetric n×n matrices. We say an operator 𝒫 is positive on a subset X of a Hilbert space if ⟨x, 𝒫x⟩ ≥ 0 for all x ∈ X. 𝒫 is coercive on X if ⟨x, 𝒫x⟩ ≥ ε‖x‖² for some ε > 0 and for all x ∈ X. Given an operator 𝒫 and a set X, we use the shorthand 𝒫(X) to denote the image of 𝒫 on the subset X. Iₙ denotes the n×n identity matrix. 0_{n×m} is the matrix of zeros, with shorthand 0ₙ := 0_{n×n}. We will occasionally use shorthand for the intervals between consecutive delays. For a natural number, K, we adopt the index shorthand [K] := {1, …, K}.
II A Convex Formulation of the Controller Synthesis Problem for Distributed-Parameter Systems
Consider the generic distributed-parameter system
ẋ(t) = 𝒜x(t) + ℬ₁w(t) + ℬ₂u(t), x(0) = 0,
z(t) = 𝒞₁x(t) + 𝒟₁₁w(t) + 𝒟₁₂u(t),   (1)
where w(t) is the disturbance input, u(t) is the controlled input, z(t) is the regulated output, and x(t) evolves on a Hilbert space; 𝒜 is the (generally unbounded) generator, and ℬ₁, ℬ₂, 𝒞₁, 𝒟₁₁, and 𝒟₁₂ are bounded linear operators.
We begin with the following mathematical result on duality, which is a reduced version of Theorem 3 in [6].
Theorem 1
Suppose 𝒫 is a bounded, coercive linear operator which maps the state space onto itself and which is self-adjoint with respect to the inner product. Then: 𝒫⁻¹ exists; 𝒫⁻¹ is bounded; 𝒫⁻¹ is self-adjoint; 𝒫⁻¹ maps the state space onto itself; and 𝒫⁻¹ is coercive.
Using Theorem 1, we give a convex formulation of the optimal full-state feedback controller synthesis problem. This result combines: a) a relatively simple extension of the Schur complement lemma to infinite dimensions; with b) the dual synthesis condition in [6]. We note that the ODE equivalent of this theorem is necessary and sufficient, and the proof structure can be credited to, e.g., [12].
Theorem 2
Suppose there exist γ, ε > 0, an operator 𝒫 which satisfies the conditions of Theorem 1, and an operator 𝒵 such that
for all x, w, and u in the appropriate spaces. Then, for any w ∈ L₂, if x and u satisfy Eqn. (1) and
(2)  u(t) = 𝒵𝒫⁻¹x(t)
for all t ≥ 0, then ‖z‖_{L₂} ≤ γ‖w‖_{L₂}.
Proof: By Theorem 1, 𝒫⁻¹ exists; is bounded; is self-adjoint; maps the state space onto itself; and is coercive.
For given w ∈ L₂, let x and u be a solution of
such that the solution remains bounded on any finite time interval.
Define the storage function V(t) := ⟨x(t), 𝒫⁻¹x(t)⟩. Then V(t) ≥ α‖x(t)‖² for some α > 0, by coercivity of 𝒫⁻¹. Define the transformed state x̄(t) := 𝒫⁻¹x(t). Differentiating the storage function in time, we obtain
for any w and all t ≥ 0. Choosing u(t) = 𝒵x̄(t), we get
Since 𝒫⁻¹ is bounded, there exists a c > 0 such that
We conclude, therefore, that
Therefore, since x(0) = 0 implies V(0) = 0, we may conclude by Gronwall-Bellman that the storage function remains bounded on any finite interval. Integrating this expression forward in time, and using V(t) ≥ 0, we obtain
which concludes the proof.
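For intuition, the final integration step in bounded-real arguments of this kind typically proceeds as follows (a generic sketch of the standard argument, with γ the closed-loop gain bound):

```latex
\frac{d}{dt} V(t) \le \gamma^2 \|w(t)\|^2 - \|z(t)\|^2
\;\Longrightarrow\;
0 \le V(T) - V(0) \le \int_0^T \big( \gamma^2 \|w(t)\|^2 - \|z(t)\|^2 \big)\, dt,
```

so that, with V(0) = 0 and letting T → ∞, one recovers ‖z‖_{L₂} ≤ γ‖w‖_{L₂}.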
III Theorem 2 Applied to Multi-Delay Systems
Theorem 2 gives a convex formulation of the controller synthesis problem for a general class of distributed-parameter systems. In this section and the next, we apply Theorem 2 to the case of systems with multiple delays. Specifically, we consider solutions to the system of equations given by
ẋ(t) = A₀x(t) + ∑_{i=1}^{K} Aᵢx(t − τᵢ) + B₁w(t) + B₂u(t),
z(t) = C₀x(t) + ∑_{i=1}^{K} Cᵢx(t − τᵢ) + D₁w(t) + D₂u(t),   (3)
where w(t) is the disturbance input, u(t) is the controlled input, z(t) is the regulated output, x(t) ∈ ℝⁿ are the state variables, and τᵢ for i ∈ [K] are the delays, ordered by increasing magnitude. We assume τᵢ < τᵢ₊₁ for i ∈ [K − 1].
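For readers who wish to reproduce time-domain responses, a fixed-step forward-Euler scheme with a stored history is the simplest way to simulate such a system; below is a scalar sketch (illustrative only; the function name and the scalar restriction are ours):

```python
def simulate_delay(a0, delays_gains, x0, dt, T):
    """Forward-Euler simulation of the scalar multi-delay system
        xdot(t) = a0*x(t) + sum_i a_i*x(t - tau_i)
    with constant pre-history x(s) = x0 for s <= 0.
    delays_gains is a list of (tau_i, a_i) pairs."""
    steps = int(round(T/dt))
    xs = [x0]  # xs[k] approximates x(k*dt)
    for k in range(steps):
        xdot = a0*xs[k]
        for tau, ai in delays_gains:
            j = k - int(round(tau/dt))
            xdot += ai*(xs[j] if j >= 0 else x0)  # clamp into pre-history
        xs.append(xs[k] + dt*xdot)
    return xs
```

With no delay terms this reduces to the Euler scheme for ẋ = a₀x, which is a convenient sanity check.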
Our first step, then, is to express System (3) in the abstract form of Eqn. (1). Following the mathematical formalism developed in [6], we define the inner-product space ℝⁿ × L₂ⁿ[−τ₁, 0] × ⋯ × L₂ⁿ[−τ_K, 0] and, for its elements, we define the following shorthand notation
which allows us to simplify the expression of the inner product on this space, which we define to be
When the dimensions coincide, we simplify the notation accordingly. The state space for System (3) is defined as
Note that the state space is a subspace of the inner-product space defined above and inherits its norm. We furthermore extend this notation in the natural way to elements whose components satisfy the corresponding compatibility conditions.
We now represent the infinitesimal generator, 𝒜, of Eqn. (3) as
Furthermore, the remaining operators of Eqn. (1) are defined as
Having defined these operators, we note that for any solution x of Eqn. (3), using the above notation, if we define
then the resulting trajectory satisfies Eqn. (1) using the operator definitions given above. The converse statement is also true.
III-A A Parametrization of Operators for the Controller Synthesis Problem
We now introduce a class of operators, parameterized by a matrix and a set of matrix-valued functions, as
For this class of operators, the following lemma combines Lemmas 3 and 4 in [6] and gives conditions under which such an operator satisfies the conditions of Theorem 1.
Lemma 3
Suppose that the defining matrix and matrix-valued functions are continuous and bounded on their respective domains, and that the resulting operator is coercive on the state space. Then that operator: is a self-adjoint bounded linear operator with respect to the inner product defined above; maps the state space to itself; and satisfies the conditions of Theorem 1.
Starting in Section IV, we will assume the defining functions are polynomial and give LMI conditions for positivity of operators of this form.
III-B The Controller Synthesis Problem for Multiple-Delay Systems
Theorem 2 gives a convex formulation of the controller synthesis problem, where the data is the set of six operators defined above and the variables are the operators 𝒫 and 𝒵. For multi-delay systems, we have defined the six data operators and parameterized the decision variable 𝒫 using a matrix and matrix-valued functions. We now likewise parameterize the decision variable 𝒵 using matrices and functions as
The following theorem gives convex constraints on these matrix and function variables under which the conditions of Theorem 2 are satisfied when the system operators are as defined above.
Theorem 4
Suppose that there exist a matrix and matrix-valued functions which satisfy the conditions of Lemma 3, and matrices and matrix-valued functions which parameterize the controller, such that
for all points in the relevant intervals, where
Proof: Using the definitions of the operators and of the parameterized variables given above, x and u satisfy Eqn. (3) if and only if the corresponding abstract state and input satisfy Eqn. (1). Therefore, the conditions of Theorem 2 are satisfied if
for all admissible arguments. The rest of the proof is lengthy but straightforward. We simply show that if we define
then
(4)  
Before we begin, for convenience and efficiency of presentation, we introduce the following shorthand:
It may also be helpful to note that the quadratic form defined by an operator of this class expands out as
(5) 
Our task, therefore, is simply to write all the terms we find in (4) in the form of Equation (5) for an appropriate choice of matrix and matrix-valued functions. Fortunately, the most complicated part of this operation has already been completed. Indeed, from Theorem 5 in [6], the first two terms can be represented as
where and