
# Structure Preserving H-infinity Optimal PI Control

Anders Rantzer, Carolina Lidström and Richard Pates, Lund University, Sweden (e-mail: rantzer@control.lth.se, carolina.lidstrom@control.lth.se, richard.pates@control.lth.se).
###### Abstract

A multi-variable PI (proportional integrating) controller is proved to be optimal for an important class of control problems where performance is specified in terms of frequency weighted H-infinity norms. The problem class includes networked systems with a subsystem in each node and control action along each edge. For such systems, the optimal PI controller is decentralized in the sense that control action along a given network edge is entirely determined by states at nodes connected by that edge.

Keywords: Distributed control, decentralized control, linear systems, robust control.

## 1 Introduction

Classical theory for multi-variable control synthesis suffers from a severe lack of scalability. Not only does the computational cost for Riccati equations and LMIs grow rapidly with the state dimension, but also implementation of the resulting controllers becomes unmanageable in large networks due to requirements for communication, computation and memory. Because of this situation, considerable research efforts have recently been devoted to development of scalable and structure preserving methods for design and implementation of networked control systems, building on early contributions by Bamieh et al. (2002), D’Andrea and Dullerud (2003) and Rotkowitz and Lall (2002).

In practice, most scalable control architectures are built on layering. For example, control systems in the process industry are often organized in a hierarchical manner, where scalar PID controllers are used at the lowest level and the reference values of these controllers are computed on a slower time scale by centralized optimization algorithms. This approach often works well provided that the scalar loops are reasonably decoupled and that the coordination dynamics are comparatively slow. Decentralized and layered control architectures have also been applied successfully in power systems and Internet traffic control.

A scalable approach to control synthesis has recently been developed based on the notion of positive systems (Rantzer (2015)) and the nonlinear counterpart monotone systems (Dirr et al. (2015)). Such systems are characterized by the existence of Lyapunov functions and other performance certificates whose complexity grows only linearly with the size of the system. Interestingly, the maximal gain is always attained at zero frequency. Restricting attention to closed loop positive systems, distributed static controllers were optimized subject to $H_\infty$ performance in Tanaka and Langbort (2011), while $L_1$- and $L_\infty$-performance was considered in Briat (2013).

The powerful synthesis methods for positive systems have raised an important question: How restrictive is a demand for closed loop positivity in optimal control? It was therefore a remarkable step forward when Lidström and Rantzer (2016) showed that for a large class of networked control systems with “diffusive” dynamics (i.e. symmetric state matrix), it is not restrictive at all. Instead, controllers defined by a simple closed form expression achieve the same level of performance as centralized controllers derived using Riccati equations or LMIs. In particular, the following problem was considered:

Given a graph $(V,E)$ and the system

$$\dot{x}_i = a_i x_i + \sum_{(i,j)\in E}\left(u_{ij}-u_{ji}\right) + w_i, \qquad i \in V, \tag{1}$$

find a control law of the form $u = Kx$ that minimizes the $H_\infty$ norm of the closed loop transfer function from the disturbance $w$ to the controlled output.

Lidström and Rantzer (2016) proved that when $a_i < 0$ for every $i \in V$, an optimal control law is given by

$$u_{ij} = x_i/a_i - x_j/a_j \tag{2}$$

and the closed loop from $w$ to $x$ is a positive system. This control law is trivial to compute and decentralized in the sense that the control action on edge $(i,j)$ is entirely determined by the states in node $i$ and node $j$.
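To make the structure concrete, the control law (2) can be written in matrix form as $u = B^TA^{-1}x$, with $A$ diagonal and $B$ the edge-incidence matrix of the graph. The following sketch in Python with NumPy checks the decentralized structure and the closed loop stability numerically; the 3-node path graph and the values of $a_i$ are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-node path graph with edges (1,2) and (2,3); a_i < 0.
a = np.array([-1.0, -2.0, -3.0])
edges = [(0, 1), (1, 2)]          # 0-indexed node pairs

A = np.diag(a)
# Column for edge (i, j): +1 at node i, -1 at node j, matching (1).
B = np.zeros((3, len(edges)))
for col, (i, j) in enumerate(edges):
    B[i, col] = 1.0
    B[j, col] = -1.0

# Control law (2): u_ij = x_i/a_i - x_j/a_j, i.e. u = B^T A^{-1} x.
K = B.T @ np.linalg.inv(A)

# The controller for edge (1,2) only uses the states of nodes 1 and 2.
assert np.allclose(K[0], [1 / a[0], -1 / a[1], 0.0])

# The closed loop x' = (A + B K) x is stable.
assert max(np.linalg.eigvals(A + B @ K).real) < 0
```

Each row of $K$ has nonzero entries only at the two nodes of its own edge, which is exactly the decentralization property described above.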

The expressions above define a rare but important class of systems where decentralized controllers are known to achieve the same performance as the best centralized ones. Still, the setting is insufficient for many practical applications. In particular, like all proportional controllers, the control law (2) is unable to remove static errors in the presence of constant disturbances. The purpose of this paper is to remove this deficiency, by modifying the performance criterion and optimizing dynamic controllers with integral action.

The structure of the paper is as follows: After introducing some basic notation in section 2, we prove the main result in section 3 and consider some basic applications in sections 4 and 5, before making conclusions in section 6. A well known matrix optimization result is included for completeness as a short appendix.

## 2 Notation

Let $\mathbf{R}$ be the set of proper (bounded at infinity) rational functions with real coefficients. The set of $k\times l$ matrices with elements in $\mathbf{R}$ is denoted $\mathbf{R}^{k\times l}$. Given $P \in \mathbf{R}^{l\times k}$, we say that $K \in \mathbf{R}^{k\times l}$ is stabilizing provided that

$$\begin{bmatrix} I \\ K \end{bmatrix}\left[I+PK\right]^{-1}\begin{bmatrix} I & P \end{bmatrix}$$

has no poles in the closed right half plane, i.e. belongs to $\mathbf{RH}_\infty^{(k+l)\times(k+l)}$. Furthermore, $\|G\|_\infty = \sup_{\omega} \|G(i\omega)\|$ for $G \in \mathbf{RH}_\infty$. For a matrix $M$, we denote the pseudo-inverse by $M^\dagger$ and the spectral norm by $\|M\|$. For a square symmetric matrix $M$, the notation $M \succ 0$ means that $M$ is positive definite, while $M \prec 0$ means that $M$ is negative definite.

## 3 Main result

**Theorem 1.** Let $P(s) = (sI-A)^{-1}B$ with $A \in \mathbb{R}^{n\times n}$ symmetric negative definite and $B \in \mathbb{R}^{n\times m}$. Assume that $\tau \ge \|A^{-1}\|\,\|A^{-1}B\|$. Then the problem to

$$\text{Minimize } \left\|(I+KP)^{-1}K\right\|_\infty \quad\text{subject to}\quad \left\|\tfrac{1}{s}P(I+KP)^{-1}\right\|_\infty \le \tau,$$

over stabilizing $K$, is solved by

$$\hat{K}(s) = k\left(B^TA^{-2} - \tfrac{1}{s}B^TA^{-1}\right),$$

where $k = \|(A^{-1}B)^\dagger\|/\tau$. The minimal value of the objective is $\|(A^{-1}B)^\dagger\|$.

**Proof.** Define $F_K = (I+KP)^{-1}K$ and $\gamma = \|(A^{-1}B)^\dagger\|$, and factorize $B = GH^T$, where $G$ and $H$ have full column rank. Then

$$F_{\hat K}(s) = k\left(sI+kB^TA^{-2}B\right)^{-1}B^TA^{-2}(sI-A) = kH\left(sI+kG^TA^{-2}GH^TH\right)^{-1}G^TA^{-2}(sI-A),$$

so the poles of $F_{\hat K}$ are the eigenvalues of $-kG^TA^{-2}GH^TH$, which are equal to the non-zero eigenvalues of $-kB^TA^{-2}B$. In particular, $F_{\hat K}$ is stable and, since $P$ is stable as well, $\hat K$ is stabilizing.

We also have

$$I \preceq \tau^2k^2\,B^TA^{-2}B, \qquad B^T\left(\omega^2I+A^2\right)^{-1}B \preceq \tau^2\left[\omega^2I+k^2\left(B^TA^{-2}B\right)^2\right],$$

so

$$\tau \;\ge\; \left\|(i\omega I-A)^{-1}B\left(i\omega I+kB^TA^{-2}B\right)^{-1}\right\| \;=\; \left\|\tfrac{1}{i\omega}P(i\omega)\left(I+\hat K(i\omega)P(i\omega)\right)^{-1}\right\|.$$

In general

$$PF_KP = P - P(I+KP)^{-1},$$

so the constraint $\|\frac{1}{s}P(I+KP)^{-1}\|_\infty \le \tau$ requires $P(I+KP)^{-1}$ to vanish at $s=0$ and therefore gives

$$P(0)F_K(0)P(0) = P(0).$$

Hence consider the minimization problem at $\omega = 0$, i.e. to minimize $\|F_K(0)\|$ subject to $P(0)F_K(0)P(0) = P(0)$. Standard calculations (Lemma A in the appendix) give that the minimal value $\|P(0)^\dagger\| = \|(A^{-1}B)^\dagger\| = \gamma$ is attained by $F_K(0) = P(0)^\dagger$. In particular, since $F_{\hat K}(0) = -(B^TA^{-2}B)^{-1}B^TA^{-1} = P(0)^\dagger$, it follows that $\hat K$ is optimal at $\omega = 0$.

The inequality $\|F_{\hat K}\|_\infty \le \gamma$ can be rewritten as

$$F_{\hat K}(i\omega)^*F_{\hat K}(i\omega) \preceq \gamma^2I \;\Longleftrightarrow\; A^{-2}B\left[\omega^2k^{-2}I+\left(B^TA^{-2}B\right)^2\right]^{-1}B^TA^{-2} \preceq \gamma^2\left(\omega^2I+A^2\right)^{-1}.$$

The last inequality holds trivially for $\omega = 0$ and it holds for all other $\omega$ provided that $k\|A^{-1}\|\,\|A^{-1}B\| \le \gamma$, which is equivalent to the assumption on $\tau$. Thus $\|F_{\hat K}\|_\infty$ takes the minimal value $\gamma$ and the proof is complete.
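The conclusion of the theorem can be sanity-checked numerically by gridding the frequency axis. The following Python sketch uses the formulas above on a randomly generated instance; the instance, the value of $\tau$ and the frequency grid are arbitrary illustrative choices, not from the paper:

```python
import numpy as np

# Random instance: symmetric negative definite A, full-column-rank B.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = -(M @ M.T + 4 * np.eye(4))
B = rng.standard_normal((4, 2))
tau = 10.0                        # large enough for the theorem's assumption

Ai = np.linalg.inv(A)
gamma = np.linalg.norm(np.linalg.pinv(Ai @ B), 2)   # claimed optimal value
k = gamma / tau

def Khat(s):
    return k * (B.T @ Ai @ Ai - B.T @ Ai / s)

def P(s):
    return np.linalg.solve(s * np.eye(4) - A, B)

obj, con = 0.0, 0.0
for w in np.logspace(-3, 3, 400):
    s = 1j * w
    Ps, Ks = P(s), Khat(s)
    T = np.linalg.solve(np.eye(2) + Ks @ Ps, Ks)      # (I+KP)^{-1} K
    S = Ps @ np.linalg.inv(np.eye(2) + Ks @ Ps) / s   # (1/s) P (I+KP)^{-1}
    obj = max(obj, np.linalg.norm(T, 2))
    con = max(con, np.linalg.norm(S, 2))

# Constraint satisfied, and the objective never exceeds gamma on the grid
# (gamma is approached as the frequency tends to zero).
assert con <= tau + 1e-6
assert obj <= gamma + 1e-6
```

This only verifies the claim on one instance and a finite grid, of course; the proof above covers the general case.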

## 4 Control on Networks

Theorem 1 can be applied to the following problem:

Given a graph $(V,E)$, suppose that

$$\begin{cases} \dot{x}_i = a_ix_i + b_i(u_i+v_i) + \sum_{(i,j)\in E}\left(u_{ij}+v_{ij}-u_{ji}-v_{ji}\right) \\ e_i = r_i - x_i \end{cases} \tag{3}$$

for $i \in V$, where $a_i < 0$ and $b_i \in \mathbb{R}$. The signals $u$, $v$, $r$ and $e$ can be viewed as input-, disturbance-, reference- and error-signals respectively. The problem is to find a control law that minimizes the $L_2$-gain from $r$ to $u$, while keeping the gain from the integrated disturbance $\frac{1}{s}v$ to $e$ bounded by $\tau$ when $r = 0$.

The optimal controller has a distributed realization with one integrator at every node:

$$\begin{cases} \dot{z}_i = e_i \\ u_{ij} = k\left(e_i/a_i^2 - z_i/a_i - e_j/a_j^2 + z_j/a_j\right) \\ u_i = kb_i\left(e_i/a_i^2 - z_i/a_i\right) \end{cases} \tag{4}$$

Just like (2), this control law is decentralized in the sense that the control action on edge $(i,j)$ is entirely determined by the errors at nodes $i$ and $j$. The closed loop map from the aggregated disturbance $Bv$ to $x$, including the low frequency weight $\frac{1}{s}$, has the transfer matrix

$$(sI-A)^{-1}\left(sI+kBB^TA^{-2}\right)^{-1}$$

and non-negative impulse response. It should be noted that unless the controller realization is minimal, the controller will have integrators that are not stabilized in closed loop. For this reason, it is necessary that $b_i \neq 0$ for at least one node in every connected component of the graph.
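The distributed realization (4) can be exercised in simulation. The sketch below, written in matrix form as $\dot z = e$, $u = k(B^TA^{-2}e - B^TA^{-1}z)$, uses a hypothetical 3-node line graph with $b_i = 1$, $\tau = 2$ and forward-Euler integration (all of these are illustrative choices) to show the integral action removing static errors:

```python
import numpy as np

a = np.array([-1.0, -2.0, -3.0])
A = np.diag(a)
# Columns: node inputs u_1, u_2, u_3 (b_i = 1), then edge inputs u_12, u_23.
B = np.array([[1.0, 0.0, 0.0,  1.0,  0.0],
              [0.0, 1.0, 0.0, -1.0,  1.0],
              [0.0, 0.0, 1.0,  0.0, -1.0]])
Ai = np.linalg.inv(A)
tau = 2.0
k = np.linalg.norm(np.linalg.pinv(Ai @ B), 2) / tau

r = np.array([1.0, 0.5, -0.5])                  # constant references
v = np.array([0.3, -0.2, 0.1, 0.1, -0.1])       # constant input disturbances

# Controller (4) in matrix form: z' = e, u = k (B^T A^{-2} e - B^T A^{-1} z).
x, z = np.zeros(3), np.zeros(3)
dt = 1e-3
for _ in range(60_000):
    e = r - x
    u = k * (B.T @ Ai @ Ai @ e - B.T @ Ai @ z)
    x = x + dt * (A @ x + B @ (u + v))
    z = z + dt * e

# Integral action removes the static error despite the constant disturbances.
assert np.max(np.abs(r - x)) < 1e-3
```

Since every $b_i$ is nonzero, the realization is minimal and all integrators are stabilized, in line with the remark above.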

## 5 Example

Consider the system depicted in Figure 1. The dynamics of the levels in the two buffers marked 1 and 2, around some steady state, are given by

$$\begin{aligned} \dot{x}_1 &= -x_1 + u_1 - u_{12}, & e_1 &= r_1 - x_1, \\ \dot{x}_2 &= -2x_2 + u_{12}, & e_2 &= r_2 - x_2, \end{aligned} \tag{5}$$

where $x_1$ is the level in buffer 1 and $x_2$ is the level in buffer 2. The transfer function of (5) is given by $P(s) = (sI-A)^{-1}B$ with

$$A = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}. \tag{6}$$

The disturbance rejection criterion is given by (7) below and the optimal control law from Theorem 1 is

$$\begin{aligned} u_1(t) &= ke_1 + k\int_0^t e_1(\sigma)\,d\sigma, \\ u_{12}(t) &= -ke_1 + \frac{k}{4}e_2 - k\int_0^t\Big(e_1(\sigma) - \frac{1}{2}e_2(\sigma)\Big)\,d\sigma, \end{aligned}$$

for $k = \|(A^{-1}B)^\dagger\|/\tau$. Notice that each control input only uses the state(s) it affects through the matrix $B$, i.e., the controller has the same zero-block structure as $B^T$. Thus, the controller only considers local information, where locality is determined by the interconnection structure specified by $B$.
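The proportional and integral gain matrices of this controller can be checked directly against the formula $\hat K(s) = k(B^TA^{-2} - \frac{1}{s}B^TA^{-1})$. A small numerical sketch (the value $\tau = 1$ is an arbitrary illustration):

```python
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0, -1.0], [0.0, 1.0]])
Ai = np.linalg.inv(A)

tau = 1.0                                      # hypothetical choice of tau
k = np.linalg.norm(np.linalg.pinv(Ai @ B), 2) / tau

Kp = k * B.T @ Ai @ Ai        # proportional part of K(s)
Ki = -k * B.T @ Ai            # integral part: K(s) = Kp + Ki / s

# u_1  = k e_1                + k * integral of e_1
# u_12 = k(-e_1 + e_2/4)      - k * integral of (e_1 - e_2/2)
assert np.allclose(Kp, k * np.array([[1.0, 0.0], [-1.0, 0.25]]))
assert np.allclose(Ki, k * np.array([[1.0, 0.0], [-1.0, 0.5]]))
```

The zero entries in the first rows of both gain matrices reflect the zero-block structure of $B^T$ noted above: $u_1$ never uses $e_2$.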

The optimization objective stated in Theorem 1 concerns the transfer function

$$(I+KP)^{-1}K,$$

which maps the reference value $r$ to the control input $u$, as depicted in Figure 2. Thus, the first objective is to minimize the control effort needed to follow the reference value $r$.

$$\left\|\tfrac{1}{s}P(I+KP)^{-1}\right\|_\infty \le \tau \tag{7}$$

specifies the control quality in terms of disturbance rejection. The impact of a low frequency process disturbance should be attenuated by the feedback loop. The parameter $\tau$ is a time constant that determines the bandwidth of the control loop. The impact of $\tau$ will be illustrated below.

Given

$$P(s) = (sI-A)^{-1}B, \qquad \hat{K}(s) = \frac{\|(A^{-1}B)^\dagger\|}{\tau}\left(B^TA^{-2} - \tfrac{1}{s}B^TA^{-1}\right),$$

Figure 3 plots the norms of $\frac{1}{s}P(I+\hat KP)^{-1}$ and $(I+\hat KP)^{-1}\hat K$ against frequency for three different values of $\tau$.

The first diagram clearly shows that the disturbance rejection becomes increasingly effective as the time constant $\tau$ is reduced. However, as shown in the second diagram, this comes at the price of larger control signals at higher frequencies, while the gain at $\omega = 0$ remains unchanged. In the step response diagrams, a smaller value of $\tau$ results in larger control values for small $t$, while the steady state values remain unchanged.
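This trade-off can be reproduced numerically for the two-buffer example without the figure. A Python sketch with three illustrative values of $\tau$ (not necessarily those used in Figure 3):

```python
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0, -1.0], [0.0, 1.0]])
Ai = np.linalg.inv(A)
gamma = np.linalg.norm(np.linalg.pinv(Ai @ B), 2)

def gain_r_to_u(w, tau):
    """Norm of (I+KP)^{-1}K at frequency w for the optimal controller."""
    k = gamma / tau
    s = 1j * w if w > 0 else 1e-9j          # approach DC from above
    K = k * (B.T @ Ai @ Ai - B.T @ Ai / s)
    P = np.linalg.solve(s * np.eye(2) - A, B)
    return np.linalg.norm(np.linalg.solve(np.eye(2) + K @ P, K), 2)

for tau in (0.5, 1.0, 2.0):
    # The gain at omega = 0 equals gamma regardless of tau ...
    assert abs(gain_r_to_u(0.0, tau) - gamma) < 1e-3
# ... while a smaller tau costs more control action at high frequency.
assert gain_r_to_u(100.0, 0.5) > gain_r_to_u(100.0, 2.0)
```

This matches the description of the second diagram: the zero-frequency gain is fixed at $\|(A^{-1}B)^\dagger\|$, and tightening the bandwidth constraint only inflates the high-frequency control effort.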

## 6 Conclusions

This paper has formulated a class of dynamic state feedback control problems for which a structure preserving PI controller is optimal. An explicit expression for the optimal gain has been given, which clarifies the relationship between plant structure and achievable performance.

## Acknowledgements

This work was supported by the Swedish Research Council through the LCCC Linnaeus Center. The authors are members of the LCCC Linnaeus Center and the eLLIIT Excellence Center at Lund University.

## References

• Bamieh et al. (2002) Bassam Bamieh, Fernando Paganini, and Munther A. Dahleh. Distributed control of spatially invariant systems. IEEE Transactions on Automatic Control, 47(7), July 2002.
• Briat (2013) Corentin Briat. Robust stability and stabilization of uncertain linear positive systems via integral linear constraints: $L_1$-gain and $L_\infty$-gain characterization. International Journal of Robust and Nonlinear Control, 23:1932–1954, 2013.
• D’Andrea and Dullerud (2003) Raffaello D’Andrea and Geir E Dullerud. Distributed control design for spatially interconnected systems. IEEE Transactions on automatic control, 48(9):1478–1495, 2003.
• Dirr et al. (2015) Gunther Dirr, Hiroshi Ito, Anders Rantzer, and Björn Rüffer. Separable Lyapunov functions for monotone systems: Constructions and limitations. Discrete and Continuous Dynamical Systems. Series B, 20(8), 2015.
• Lidström and Rantzer (2016) Carolina Lidström and Anders Rantzer. Optimal H-infinity state feedback for systems with symmetric and Hurwitz state matrix. In Proceedings of American Control Conference, Boston, July 2016.
• Rantzer (2015) Anders Rantzer. Scalable control of positive systems. European Journal of Control, 24:72–80, 2015.
• Rotkowitz and Lall (2002) Michael Rotkowitz and Sanjay Lall. Decentralized control information structures preserved under feedback. In Proceedings of the IEEE Conference on Decision and Control, December 2002.
• Tanaka and Langbort (2011) Takashi Tanaka and Cédric Langbort. The bounded real lemma for internally positive systems and H-infinity structured static state feedback. IEEE Transactions on Automatic Control, 56(9):2218–2223, September 2011.

## Appendix A

**Lemma A.** Let $A \in \mathbb{C}^{m\times n}$. Then

$$\min_{X\in\mathbb{C}^{n\times m}} \|X\| \quad \text{subject to} \quad AXA = A \tag{8}$$

has the minimal value $\|A^\dagger\|$, attained by $X = A^\dagger$.

**Proof.** Let $y$ be a unit vector in the range of $A$ and let $X$ be any feasible point for (8). Consider

$$\min_{x\in\mathbb{C}^n} |x| \quad \text{subject to} \quad y = Ax \text{ and } x = Xy. \tag{9}$$

Observe that because $X$ is feasible for (8), an optimal solution of (9) always exists. The value equals $|Xy|$, giving a lower bound for $\|X\|$. Relaxing (9) by removing the second constraint gives a least squares problem with solution $x = A^\dagger y$ and value $|A^\dagger y|$. Maximizing over $y$ gives $\|X\| \ge \|A^\dagger\|$. The result follows because $X = A^\dagger$ achieves this lower bound.
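Lemma A is easy to test numerically: the pseudo-inverse is feasible for (8), and moving away from it within the feasible set can only increase the norm. A Python sketch on a random instance (the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))       # full row rank almost surely
Adag = np.linalg.pinv(A)

# The pseudo-inverse is feasible for (8): A X A = A.
assert np.allclose(A @ Adag @ A, A)

# Another feasible point: X = A^† + N with A N = 0, built by projecting
# a random matrix onto the null space of A.
N = (np.eye(4) - Adag @ A) @ rng.standard_normal((4, 3))
X = Adag + N
assert np.allclose(A @ X @ A, A)

# Its spectral norm is never smaller than that of the pseudo-inverse.
assert np.linalg.norm(X, 2) >= np.linalg.norm(Adag, 2) - 1e-9
```

The norm comparison holds because the columns of $N$ lie in the null space of $A$, orthogonal to the range of $A^\dagger$, so the perturbation can only add energy to $Xy$.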
