
# An Algebraic Approach to the Control of Decentralized Systems

Laurent Lessard      Sanjay Lall
###### Abstract

Optimal decentralized controller design is notoriously difficult, but recent research has identified large subclasses of such problems that may be convexified and thus are amenable to solution via efficient numerical methods. One recently discovered sufficient condition for convexity is quadratic invariance (QI). Despite the simple algebraic characterization of QI, which relates the plant and controller maps, proving convexity of the set of achievable closed-loop maps requires tools from functional analysis. In this work, we present a new formulation of quadratic invariance that is purely algebraic. While our results are similar in flavor to those from traditional QI theory, they do not follow from that body of work. Furthermore, they are applicable to new types of systems that are difficult to treat using functional analysis. Examples discussed include rational transfer matrices, systems with delays, and multidimensional systems.


## I Introduction

The problem of designing control systems where multiple controllers are interconnected over a network to control a collection of interconnected plants is long-standing and difficult [4, 31]. Quadratic invariance is a mathematical condition which, when it holds, allows one to bring to bear the tools of Youla parameterization to find optimal controllers [23, 24]. A network system has the requisite quadratic invariance under a surprisingly wide set of circumstances. These include cases where the controllers can communicate more quickly than the plant dynamics propagate through the network [22].

There is a large and diverse body of literature addressing decentralized control theory and specifically conditions that make a problem more tractable in some sense. The seminal work of Ho and Chu [9] develops the partial nestedness condition under which there exists an optimal decentralized controller that is linear. More recently, Qi, Salapaka, et al. identified many different decentralized control architectures that may be cast as tractable optimization problems [18]. LMI formulations of distributed control problems are developed in [6, 14]. Stabilization was fully characterized for all QI problems in [25]. Explicit state-space solutions were also found for classes of delayed problems [12], poset-causal problems [26], and two-player output-feedback problems [17].

There have been relatively few works treating decentralized control from a purely algebraic perspective. One recent example is the work of Shin and Lall [27], where elimination theory is used to express solutions to decentralized control problems as projections of semialgebraic sets. Quadratic invariance was first treated using an analytic framework in [23, 24]. The aim of this paper is to address an algebraic treatment of quadratic invariance. The consequence of this is not only a new proof of existing results in some cases but also an extension of these results to a significantly different class of models. Instead of requiring analytic properties of our system model, we will require algebraic ones. For example, [24] requires that the set of allowable controllers be a closed inert subspace, whereas in this work we require that it be a module. The class of systems covered in this paper includes multidimensional systems, which are not covered in existing works. This is discussed in Section VI-C.

Many topics in control have historically been treated from both analytic and algebraic viewpoints. As early as 1965, Kalman proposed the use of modules as the natural framework in which to represent linear state-space systems [10]. When systems are viewed as maps on signal spaces, one has many choices. If one represents systems as transfer functions, then one can either consider the generality of transfer functions in a Hardy space and use analytic methods to prove results, or one can consider formal power series or rational functions and use algebraic methods. Often, the two frameworks use very different proof techniques, which provide different insights and ranges of applicability. This is a fundamental choice in how we represent the basic objects [11]. This dichotomy exists in many facets of the control systems literature. For example, spectral factorization is easily considered from an algebraic perspective. The Riesz-Fejér theorem states that a trigonometric polynomial which is nonnegative on the circle may be factored into the product of two polynomials, one of which is holomorphic inside the disc and the other outside [19]. This is the fundamental algebraic version of the discrete-time SISO spectral factorization result. For comparison, the analytic version of this result is commonly known as Wiener-Hopf factorization [30]. Of course, the same choice of frameworks exists beyond factorization. The theory of stabilization as introduced by Youla [32] was developed both algebraically [29] and analytically [28]. The idea of algebraic representation has also proven useful in areas such as realization theory [2], model reduction [3], and nonlinear systems theory [7].

The work in this paper is based on preliminary results that first appeared in [16, 15]. Unlike these early works, all invariance results in the present work include both necessary and sufficient conditions, and all the proofs are purely algebraic. The paper is organized as follows. The remainder of the introduction gives an overview of quadratic invariance and existing analytic results. Invariance results are proven and discussed for matrices, rings and fields, and rational functions in Sections II, IV, and V, respectively. We present some illustrative examples in Section VI and we summarize our contributions in Section VIII.

We have adopted the notation convention from [23, 24] to make the works readily comparable. Given a plant G, which is a map from an input space U to an output space Y, we seek to design a controller K that achieves desirable performance when connected in feedback with G. The main object of interest is the function h given by

 h(K) = −K(I − GK)^{−1}

Here, the domain of h is the set M of maps K such that (I − GK) is invertible. The image of h is again M because h is an involution. That is, h(h(K)) = K for all K ∈ M. We will be more specific shortly about the nature of the spaces U and Y (and consequently the maps G and K).
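For intuition, the involution property is easy to verify numerically. The following snippet (our own illustration, using randomly generated real matrices in place of operators) checks that h(h(K)) = K:

```python
import numpy as np

rng = np.random.default_rng(0)

def h(G, K):
    # h(K) = -K (I - G K)^{-1}; defined whenever I - G K is invertible
    m = G.shape[0]
    return -K @ np.linalg.inv(np.eye(m) - G @ K)

G = rng.standard_normal((3, 4))
K = rng.standard_normal((4, 3))

Q = h(G, K)                      # image point h(K)
assert np.allclose(h(G, Q), K)   # h(h(K)) = K: h is an involution
```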

The motivation for studying h is that it is a linear fractional transformation that occurs in feedback control. Consider for example the four-block plant of Figure 1.

For simplicity, assume for now that G = P22. In Figure 1, the set of achievable closed-loop maps subject to K belonging to some set S is given by

 C = { P11 − P12 h(K) P21 | K ∈ S }

Selecting a controller K ∈ S that optimizes some closed-loop performance metric is equivalent to selecting the best achievable closed-loop map in C and then finding the K that yields this map.

Roughly, the works [23, 24] give a necessary and sufficient condition under which h(S) = S. If this condition holds, then C = { P11 − P12 Q P21 | Q ∈ S }, and so the set of achievable closed-loop maps is affine and easily searchable. The condition is called quadratic invariance, and a generic definition is given below.

###### Definition 1.

We say that the set S is quadratically invariant (QI) under G if for all K ∈ S, we have KGK ∈ S.

In [23], the input and output spaces are Banach spaces, and G and K are bounded linear operators. In [24], extended signal spaces are used instead, and the associated maps are then continuous linear operators. In this work we use different spaces still, but the generic definition of quadratic invariance remains the same.

We now state the main results from [23, 24]. Additional notation and terminology are defined after each theorem statement.

###### Theorem 2 (see [23]).

Suppose U and Y are Banach spaces, G ∈ L(U, Y), and S ⊆ L(Y, U) is a closed subspace. Further suppose that S ⊆ M̃. Then

 S is QI with respect to G ⟺ h(S ∩ M) = S ∩ M

In Theorem 2, L(·, ·) denotes the set of bounded linear operators from the first argument to the second. Also, M̃ is the set of K such that 1 lies in the unbounded connected component of the resolvent set of GK. The condition that S ⊆ M̃ is admittedly technical in nature, but the result of the theorem is very simple; quadratic invariance is equivalent to S ∩ M being invariant under h.

###### Theorem 3 (see [24]).

Suppose G is a continuous linear map on an extended signal space and S is an inert closed subspace of such maps. Then

 S is QI with respect to G ⟺ h(S) = S

In Theorem 3, the maps in question are continuous linear maps from one extended space to another. The requirement that S be inert means that the impulse response matrix of K must be entry-wise bounded over every finite time interval for all K ∈ S. Among other things, this technical condition guarantees that (I − GK) is always invertible, and so h is well-defined over all of S. An analogous result to Theorem 3 for a different choice of extended space is also provided in [24].

It is interesting to note that both sides of the equivalences proved in Theorems 2 and 3 are purely algebraic statements. In other words, they can be stated in terms of a finite number of algebraic operations (addition, multiplication, inversion). This seems at odds with the technical assumptions required in the theorems. For example, S being a closed subspace means that S should contain all of its limit points. This is an analytic concept requiring an underlying norm or a topology at the very least.

This observation is the starting point for this work, where we show that invariance results akin to Theorems 2 and 3 can be obtained in a purely algebraic setting without requiring anything more than well-defined addition, multiplication, and inversion. This makes rings and fields the natural objects to work with, and we will discuss them at greater length in Section III. In Section VI we give three specific settings where these algebraic tools offer a natural framework for modeling control systems. These are the cases of sparse controllers, networks with delays, and multidimensional systems.

## II The matrix case

The real matrix case is an example that illustrates when quadratic invariance may be treated either analytically or algebraically. In this section, we present the invariance result in the matrix case, and give an outline of the proof using both the existing analytic approach [23, 24] and the algebraic approach that is expanded upon in more detail in later sections of this work. We present the proofs in sufficient detail to highlight the mathematical machinery being used, but we skip over less relevant details in the interest of clarity.

###### Theorem 4 (QI for matrices).

Suppose G ∈ ℝ^{m×n} and S ⊆ ℝ^{n×m} is a subspace. Then the following holds.

 S is QI with respect to G ⟺ h(S ∩ M) = S ∩ M
###### Proof.

We outline a proof of the forward direction (⟹). If S is QI with respect to G, then by definition we have KGK ∈ S for all K ∈ S. The first step is to show that K(GK)^i ∈ S for all i ≥ 1 as well. This can be proven by induction using the identity

 K(GK)^{i+1} = ½ ((K + K(GK)^i) G (K + K(GK)^i) − KGK − (K(GK)^i) G (K(GK)^i))

Next, we examine the function h when K ∈ S ∩ M. It suffices to show that h(S ∩ M) ⊆ S ∩ M, since h is involutive. We prove this result first via an analytic approach similar to the one used in [23], and then using an algebraic approach.

The analytic approach is to use an infinite series expansion. For α sufficiently small, we have the following convergent series.

 K(I − αGK)^{−1} = ∑_{i=0}^{∞} K(GK)^i α^i   for |α| < 1/∥GK∥ (1)

Since K(GK)^i ∈ S for all i, and S is a finite-dimensional subspace and therefore closed, the infinite sum converges to an element of S. Using an analytic continuation argument [23], one can show that K(I − αGK)^{−1} ∈ S for all α such that (I − αGK) is invertible. Setting α = 1, it follows that h(K) ∈ S for all K ∈ S ∩ M, as required.

The algebraic approach is to use a finite series expansion. Pick some K ∈ S such that (I − GK) is invertible. By the Cayley-Hamilton theorem, there exist p0, …, p_{m−1} ∈ ℝ such that

 (I − GK)^{−1} = p0 I + p1 (I − GK) + ⋯ + p_{m−1} (I − GK)^{m−1}

Expanding and collecting like powers of GK, we find that

 K(I − GK)^{−1} = q0 K + q1 KGK + ⋯ + q_{m−1} K(GK)^{m−1}

for some q0, …, q_{m−1} ∈ ℝ. Once again, K(GK)^i ∈ S for all i, so every term in this finite sum belongs to the subspace S and therefore h(K) ∈ S. The difference is that we did not require an analytic continuation, nor did we make use of the fact that S is closed.

The converse direction (⟸) can also be proven either using an analytic argument as in [23], or using an algebraic argument as we will develop in Section IV.

Notice that we give two proofs of the forward direction of Theorem 4, one analytic and the other algebraic. The analytic approach is based on convergence and hence depends on the topology. Since this particular result is only stated for matrices, the choice of topology does not matter, but for more general convolution operators on infinite signal spaces the topology has a significant effect on the applicability of the result and the technical machinery required to effect the proof. However, the algebraic approach is much simpler, and only relies on addition, multiplication, and inversion.
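The finite-series argument can be replayed numerically. The following sketch (our own illustration with random real matrices) inverts I − GK using only its Cayley-Hamilton coefficients, mirroring the algebraic proof above:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((3, 2))
K = rng.standard_normal((2, 3))
M = np.eye(3) - G @ K

# Coefficients of det(x I - M): c[0] = 1 (leading), c[m] = constant term
c = np.poly(M)
m = M.shape[0]

# Cayley-Hamilton gives M^{-1} as a polynomial in M:
# M^{-1} = -(M^{m-1} + c[1] M^{m-2} + ... + c[m-1] I) / c[m]
Minv = -sum(c[i] * np.linalg.matrix_power(M, m - 1 - i) for i in range(m)) / c[m]

assert np.allclose(Minv, np.linalg.inv(M))
```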

The main results of this paper, given in Sections IV and V, generalize and expand upon the algebraic approach used above to matrices with entries that belong to a commutative ring R.

## III Algebraic preliminaries

The fundamental algebraic properties we wish to capture are simply addition and multiplication. This leads naturally to rings and fields, which are a fundamental building block of abstract algebra. These concepts also provide the framework which is commonly used to state many widely-used results in control theory. For example, the set of real-rational transfer functions is a field, and the subset of proper ones is a ring. This viewpoint can be extremely useful, for example, when parameterizing all stabilizing controllers [29]. We refer the reader to [13] for an introduction to these concepts.

We now explain some of the conventions used throughout this paper. The integers, reals, rationals, and complex numbers are denoted by ℤ, ℝ, ℚ, and ℂ, respectively. We use R to denote an arbitrary commutative ring with identity, and F to denote an arbitrary field. The additive and multiplicative identity elements of R are denoted 0_R and 1_R, respectively, but we will often omit the subscript when it is clear by context. An invertible element of R is called a unit, and the set of units is written as R^×. We write R[x] and F(x) to respectively denote the ring of polynomials with coefficients in R and the field of rational functions with coefficients in F, in the indeterminate x. If I ⊆ R is an ideal, we write R/I for the associated quotient ring. If I is a maximal ideal, the quotient ring R/I is a field, and is called the residue field. Finally, we make use of the notion of an R-module, which is the generalization of a vector space when the scalars belong to a ring R rather than a field F.

In this paper, we consider finite matrices with elements that belong to R. Much of the familiar linear algebra theory carries over to this more general setting. We refer the reader to [5] for an introduction to abstract linear algebra. We write R^{m×n} to mean the set of m × n matrices with entries in R. Matrix multiplication between matrices of compatible dimensions is defined in the standard way. When the matrices are square, we write R^{n×n}, which is a ring. The identity matrix is denoted I_R, that is, the matrix whose diagonal and off-diagonal entries are 1_R and 0_R, respectively.

Many concepts from matrix theory carry over to the more general setting. Specifically, if A ∈ R^{n×n}, the determinant det A ∈ R is defined by the conventional Laplace expansion. The adjugate (or classical adjoint) adj A also makes sense, as it is defined in terms of determinants of submatrices. The fundamental identity for adjugates holds as well, namely

 A adj A = (adj A) A = (det A) I_R (2)

The matrix A is invertible if and only if det A is a unit of R. In this case, the inverse is unique, and is given by

 A^{−1} = (det A)^{−1} adj A
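As a concrete sanity check (our own illustration, using exact integer arithmetic in sympy), the adjugate identity and the unit-determinant criterion can be verified directly; the example matrix below is invertible over ℚ but not over ℤ:

```python
import sympy as sp

A = sp.Matrix([[2, 0, 1],
               [1, 3, 0],
               [0, 1, 4]])

adjA = A.adjugate()

# Fundamental identity: A * adj(A) = adj(A) * A = det(A) * I
assert A * adjA == A.det() * sp.eye(3)
assert adjA * A == A.det() * sp.eye(3)

# det(A) = 25 is not a unit of Z, so A has no inverse with integer
# entries, even though it is invertible over the rationals.
assert A.det() == 25
```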

For an introduction to the adjugate and associated results, we refer the reader to [20]. We now state a fundamental result.

###### Proposition 5.

Suppose A ∈ R^{n×n}. Let p_A be the characteristic polynomial of A, given by p_A(x) = det(A − I_R x). Suppose p_A has the form

 p_A(x) = p0 + p1 x + ⋯ + pn x^n

Then the following equations hold in R^{n×n}.

1. p0 I + p1 A + ⋯ + pn A^n = 0

2. adj A = −(p1 I + p2 A + ⋯ + pn A^{n−1})

###### Proof.

See Section VII.

Item (i) in Proposition 5 is commonly known as the Cayley-Hamilton theorem. This result plays an important role in our approach because it enables us to express quantities such as (I − GK)^{−1} and adj(I − GK) as finite sums.
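Both items of Proposition 5 can be confirmed on a small example. The following sympy sketch (our own check; the matrix A is arbitrary) computes the coefficients of det(A − Ix) and verifies (i) and (ii):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[1, 2], [3, 4]])
n = A.shape[0]

# Coefficients p_k of p_A(x) = det(A - I x) = p0 + p1 x + ... + pn x^n
p = sp.Poly((A - x * sp.eye(n)).det(), x).all_coeffs()[::-1]

# (i) Cayley-Hamilton: p0 I + p1 A + ... + pn A^n = 0
assert sum((p[k] * A**k for k in range(n + 1)), sp.zeros(n, n)) == sp.zeros(n, n)

# (ii) adj A = -(p1 I + p2 A + ... + pn A^{n-1})
assert A.adjugate() == -sum((p[k] * A**(k - 1) for k in range(1, n + 1)), sp.zeros(n, n))
```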

## IV Invariance for rings and fields

In this section, we take the notion of quadratic invariance discussed in the introduction and show how it fits into the framework of matrices over commutative rings or fields. These results generalize the algebraic invariance result for real matrices from Section II. Complete proofs for all the results of this section are given in Section VII.

Our first main invariance result holds over matrices whose entries belong to an arbitrary commutative ring with identity. Terminology is explained after the theorem statement.

###### Theorem 6 (QI for rings).

Suppose G ∈ R^{m×n} and S ⊆ R^{n×m} is an R-module.

1. If 2_R = 1_R + 1_R is a unit of R, then

 S is QI with respect to G ⟹ K adj(I_R − GK) ∈ S for all K ∈ S
2. If every residue field of R has at least m + 1 elements, then

 S is QI with respect to G ⟸ K adj(I_R − GK) ∈ S for all K ∈ S

The notion of R-module is analogous to that of a subspace. That is, S contains all linear combinations of its elements, where the linear combinations have coefficients in R.

Theorem 6 contains several technical conditions which we will now explain. For the first result, the condition means that 2_R = 1_R + 1_R must be a unit. We will now show that this condition is necessary by providing a counterexample. The condition is satisfied for example in the ring of rationals ℚ, but not in the ring of integers ℤ. Consider therefore the following integer example.

It is straightforward to check that S is a ℤ-module, and that S is quadratically invariant with respect to G. Now consider the following particular element of S.

 K0 = [0 0 1; 0 1 0; 1 0 0] ∈ S

One can verify that K0 adj(I − GK0) ∉ S, and so the conclusion of the first part of Theorem 6 fails when 2 is not a unit of R. For more details on why this is so, refer to the proof of Theorem 6 in Section VII.

For the second result, a residue field is the field obtained by taking the quotient R/I for some maximal ideal I. For example, in the ring of integers ℤ, the maximal ideals are the sets pℤ for p prime. So for p = 2, the associated ideal is the set of even integers, and the residue field is ℤ/2ℤ, the integers modulo 2. This field only has two elements and so the conditions of the theorem would not be satisfied for matrices with at least two rows or two columns.

We now specialize the above invariance result to fields. Axiomatically, a field is simply a ring for which every nonzero element is a unit. The results of this section hold for an arbitrary field F. We begin by stating the invariance result, and then we explain the differences between the field and ring cases. In particular, several concepts become simpler when the ring in question is a field.

###### Theorem 7 (QI for fields).

Suppose G ∈ F^{m×n} and S ⊆ F^{n×m} is a subspace over F. Further suppose that F contains at least m + 1 distinct elements and that char F ≠ 2. Then

 S is QI with respect to G ⟺ h(S ∩ M) = S ∩ M

where M = {K ∈ F^{n×m} : (I − GK) is invertible}.

The characteristic of a field F, denoted char F, is the smallest n ≥ 1 such that 1 + 1 + ⋯ + 1 (n terms) = 0. When there is no such n, we say that char F = 0. Note that requiring char F ≠ 2 is the same as the condition that 2_R be a unit when the ring R is specialized to the field F.

In the real-number case F = ℝ, both technical assumptions are always satisfied because ℝ has infinitely many elements and char ℝ = 0. We then precisely recover Theorem 4. Note that Theorem 7 may also be applied to finite fields such as F_p, the field of integers modulo a prime p.

## V Invariance for rationals

In this section, we specialize the ring and field invariance results of Section IV to rational functions in one or several variables. This leads to quadratic invariance results without any technical requirement on S such as closure or the existence of limits. As we shall see in Section VI, this framework can accommodate systems with delays or spatiotemporal systems.

Let ℝ(s) be the field of rational functions in the indeterminate s with real coefficients. Because ℝ(s) is a field, we may apply Theorem 7. We obtain the following result.

###### Theorem 8 (QI for rationals).

Suppose G ∈ ℝ(s)^{m×n}, and S ⊆ ℝ(s)^{n×m} is an ℝ(s)-module.

 S is QI with respect to G ⟺ h(S ∩ M) = S ∩ M

Theorem 8 is the simplest algebraic result for quadratic invariance of rational functions, and provides the technical basis for the remainder of this paper. However, it is not directly applicable to most control systems, because for physical models one typically has the constraint that the system is proper. This applies for example if G is a transfer function that represents a causal time-invariant system, and we seek a controller K that is also causal and time-invariant.

Let ℝ(s)_p be the set of proper rational functions in the indeterminate s, and let ℝ(s)_sp denote the strictly proper rational functions. Note that the proper rationals form a ring rather than a field, because the inverse of a proper rational function is generally not proper. The result is given below.

###### Theorem 9 (QI for proper rationals).

Suppose G ∈ (ℝ(s)_sp)^{m×n}, and S ⊆ (ℝ(s)_p)^{n×m} is an ℝ(s)_p-module.

 S is QI with respect to G ⟺ h(S) = S
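As a scalar illustration of why h maps proper controllers to proper closed-loop maps when the plant is strictly proper (our own example, not from the original text):

```python
import sympy as sp

s = sp.symbols('s')
g = 1 / s                  # strictly proper plant
k = (s + 1) / (s + 2)      # proper controller

hk = sp.cancel(-k / (1 - g * k))   # h(k) = -k (1 - g k)^{-1}
num, den = sp.fraction(hk)

# h(k) is again proper: numerator degree does not exceed denominator degree
assert sp.degree(num, s) <= sp.degree(den, s)
```

Since g is strictly proper, 1 − gk has a nonzero limit at s = ∞ for any proper k, which is what keeps h well-defined on all of S in Theorem 9.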

We now extend the rational results of Theorems 8 and 9 to rational functions of multiple variables with mixed properness constraints. This will allow this approach to be applied to two additional classes of systems. The first is networks of linear systems interconnected by delays, and the second is multidimensional systems. These are discussed in Section VI. Specifically, define the sets of indeterminates s = (s1, …, sk) and t = (t1, …, tl). We are interested in the ring of multivariate rational functions in which we have imposed a properness constraint on each of the si. Note that this is not the same as the rational function itself being proper. For example,

 g = (s1 s2 s3)/(s1^2 + 2 s2 + s3)

is proper in each of the variables s1, s2, s3, but is not proper by the standard definition since the degree of the numerator is larger than that of the denominator. We will use the subscripts p or sp to indicate properness or strict properness individually in each of the si while ignoring the tj. Therefore, g ∈ ℝ(s1, s2, s3)_p. The multivariate rational invariance result is given below.

###### Theorem 10 (QI for multivariate rationals).

Suppose that G ∈ (ℝ(s, t)_sp)^{m×n}, and S ⊆ (ℝ(s, t)_p)^{n×m} is an ℝ(s, t)_p-module.

 S is QI with respect to G ⟺ h(S) = S

In Section VI, we give an example of a class of systems that can be represented by a multivariate rational function with mixed properness constraints such as those used in Theorem 10.

The invariance results of Section IV only rely on the algebraic properties of the objects involved, so our results may be applied to a variety of examples beyond the ones mentioned in this section. As a simple example, we may replace ℝ by ℚ or ℂ in Theorems 8-10.

## VI Examples

In this section, we show some examples of problems that can be modeled using our algebraic framework. The purpose is to illustrate that the constraint that S be an R-module occurs frequently and in a variety of different situations.

### VI-A Sparse controllers

The simplest class of systems that we can analyze are systems with rational transfer functions subject to controllers with sparsity constraints. If every nonzero entry in the controller is required to be a proper rational function in s, it is clear that the set of admissible controllers is an ℝ(s)_p-module.
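For example, a lower-triangular (nested) sparsity pattern is quadratically invariant whenever the plant shares that pattern, since products of lower-triangular matrices are again lower triangular. A small sympy check (our own illustration; the particular transfer functions are arbitrary):

```python
import sympy as sp

s = sp.symbols('s')

# Lower-triangular plant and controller (a nested information structure)
G = sp.Matrix([[1/(s + 1), 0],
               [1/(s + 2), 1/(s + 3)]])
K = sp.Matrix([[1, 0],
               [s/(s + 4), 2/(s + 5)]])

KGK = sp.simplify(K * G * K)

# The (1,2) entry of K G K remains zero, so the pattern is preserved:
# this is exactly the quadratic invariance condition for this K.
assert KGK[0, 1] == 0
```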

### Vi-B Network with delays

Consider a distributed system in which the subsystems affect one another through delayed interconnections. We wish to design a decentralized controller subject to communication delay constraints between subcontrollers.

Consider the simple example of two plants, each with their own controller. We represent the plants by the transfer functions G1 and G2, and the controllers by the transfer functions Kij, where Kij maps measurement yj to input ui. Suppose the controllers communicate with each other using a bilateral network that taxes all transmissions with a delay, which we represent by the indeterminate d. The example is illustrated in Figure 2.

For simplicity, suppose each controller transmits all the measurements it receives to the other controller. Then the global plant and controller are characterized by the maps

 [y1; y2] = [G1 0; 0 G2] [u1; u2]   and   [u1; u2] = [K11 K12 d; K21 d K22] [y1; y2]

Such an architecture is QI, and more detailed topologies of this type were studied in [24, 22]. In general, the algebraic framework allows us to treat scenarios where the plant and controller are rational functions in s and d. The constraint K ∈ ℝ(s, d)_p, where properness is enforced on s and d independently, naturally guarantees that negative delays are forbidden, thus enforcing causality.

Define the delay of a transfer function in ℝ(s, d) as the difference between the degree of d in its denominator and the degree of d in its numerator. For example,

 delay(1/(sd + 2)) = 1   and   delay((s + d^2)/(s^2 d + d^5)) = 3

As a convention, delay(0) = +∞.

 S = {K ∈ ℝ(s, d)_p : delay(Kij) ≥ aij}

where aij is the minimum delay (in multiples of the unit delay d) between subcontrollers i and j. One can verify that S is an ℝ(s, d)_p-module, and so we may apply Theorem 10 to deduce an invariance condition. Similar results proved using very different methods can be found in [22].
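The delay function defined above is straightforward to compute symbolically. In the following sketch (our own; `delay` is a hypothetical helper implementing the definition above), we verify the two worked examples:

```python
import sympy as sp

s, d = sp.symbols('s d')

def delay(k):
    # delay = deg_d(denominator) - deg_d(numerator), after cancellation
    num, den = sp.fraction(sp.cancel(k))
    if num == 0:
        return sp.oo          # convention: delay(0) = +infinity
    return sp.degree(den, d) - sp.degree(num, d)

assert delay(1 / (s * d + 2)) == 1
assert delay((s + d**2) / (s**2 * d + d**5)) == 3
```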

### Vi-C Multidimensional systems

In many cases, we are interested in modeling and control of systems whose states, inputs or outputs may be functions of a spatial independent variable, such as in the control of continuum mechanical models like fluids or elastic solids. While the dynamics may be readily described by linear operator models, analysis of such models has typically been performed using the tools of semigroup theory [1]. Taking Fourier transforms spatially and Laplace transforms temporally, one arrives at an algebraic formulation of such systems [2, 21], where a linear system is described by a rational function of two or more independent frequency variables. The properness requirement that arises from causality is then only required with respect to the temporal frequency variable. This notion of multivariate transfer functions is used to represent spatiotemporal dynamics in a variety of important papers [21, 8, 3, 6]. Using this framework, we are able to circumvent the analytical requirements and still explicitly construct the set of closed-loop maps for such systems. This class of systems is not addressed by existing results on quadratic invariance, and in particular it is not covered by the results of [24]. Our approach allows one to simply describe the set of achievable closed-loop maps for such systems in both the centralized and decentralized cases. In general, our framework allows us to consider transfer functions in two sets of variables where we impose properness on the temporal variables si but not on the spatial variables tj.

It is however worth noting that synthesis of the optimal controller for such multidimensional systems is a challenging problem, even in the centralized case. Therefore one cannot simply combine the results in this paper with existing exact synthesis formulae or tools from the centralized case to synthesize decentralized multidimensional controllers.

## VII Proofs of main results

Proof of Proposition 5. Apply the identity (2) to (A − I x), which we view as an element of R[x]^{n×n}, and obtain

 (A − Ix) adj(A − Ix) = det(A − Ix) I = p_A(x) I (3)

The adjugate is defined in terms of minors, so each entry of adj(A − Ix) is an element of R[x] of degree at most n − 1. Because of the ring isomorphism R[x]^{n×n} ≅ R^{n×n}[x] (see [5, §III.C] for a proof), we may write adj(A − Ix) = B0 + B1 x + ⋯ + B_{n−1} x^{n−1} for some B0, …, B_{n−1} ∈ R^{n×n}. Substituting into (3) and viewing the result as a polynomial identity in R^{n×n}[x], we obtain

 (A − Ix)(B0 + B1 x + ⋯ + B_{n−1} x^{n−1}) = p0 I + p1 I x + ⋯ + pn I x^n (4)

Expanding (4) and comparing coefficients, we obtain

 A B0 = p0 I
 A Bk − B_{k−1} = pk I   for k = 1, …, n−1
 −B_{n−1} = pn I

Left-multiplying the equation corresponding to x^k by A^k and summing all equations, we obtain (i). Similarly, left-multiplying the equation corresponding to x^k by A^{k−1} for k = 1, …, n and summing, we obtain

 adj A = B0 = −(p1 I + p2 A + ⋯ + pn A^{n−1})

which is the statement of (ii), since B0 = adj(A − Ix) evaluated at x = 0.

### VII-A Invariance for rings

Throughout this subsection, R is an arbitrary commutative ring with identity. We begin with a lemma that allows us to conclude that if S is quadratically invariant and K ∈ S, then K(GK)^i ∈ S for all i ≥ 1.

###### Lemma 11.

Suppose G ∈ R^{m×n} and S ⊆ R^{n×m} is an R-module. Further suppose that 2_R = 1_R + 1_R is a unit of R. If S is quadratically invariant with respect to G, then for all K ∈ S:

 K(GK)^i ∈ S   for i = 1, 2, …
###### Proof.

The result follows by induction, using the identity:

 K(GK)^{i+1} = 2_R^{−1} ((K + K(GK)^i) G (K + K(GK)^i) − KGK − (K(GK)^i) G (K(GK)^i))

where 2_R^{−1} is the multiplicative inverse of 2_R, which exists by assumption.

Note that Lemma 11 requires that 2_R be a unit. As shown in the first example of Section IV, this requirement is necessary. If we strengthen the notion of quadratic invariance to instead require that K1 G K2 ∈ S for all K1, K2 ∈ S, then the conclusion of Lemma 11 is trivial and the assumption is no longer required.

We will require an important property of polynomials. Specifically, we will need conditions under which a polynomial is uniquely specified by the values it takes at finitely many points. For a real polynomial f, the result is well-known: if f(r) = 0 for all r ∈ ℝ, then f = 0. In other words, all coefficients of f are zero and thus f is the zero polynomial. The same result does not hold when we replace ℝ by a general field F or by a commutative ring R. As a simple example, consider f(x) = x^2 + x, a polynomial with coefficients in the integers modulo 2. Then clearly f(r) = 0 for all r, yet f ≠ 0.
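This failure is easy to see computationally (our own toy check):

```python
# f(x) = x^2 + x over F_2: evaluates to zero everywhere, yet f is not
# the zero polynomial (its coefficients of x^2 and x are both 1).
def f(x):
    return (x * x + x) % 2

assert [f(x) for x in (0, 1)] == [0, 0]

# More generally, x^p - x vanishes on all of F_p (Fermat's little theorem):
assert all((x**5 - x) % 5 == 0 for x in range(5))
```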

Polynomials are closely related to Vandermonde matrices, which we now define. The Vandermonde matrix generated by r1, …, rN ∈ R is defined as

 V = [1 r1 … r1^{n−1}; 1 r2 … r2^{n−1}; ⋮; 1 rN … rN^{n−1}] ∈ R^{N×n} (5)

So if f(x) = a0 + a1 x + ⋯ + a_{n−1} x^{n−1}, then the Vandermonde matrix (5) relates the values of f at r1, …, rN to the polynomial coefficients via

 [f(r1); ⋮; f(rN)] = V [a0; ⋮; a_{n−1}]

If there exists a left-invertible Vandermonde matrix V, then the coefficients a0, …, a_{n−1} are uniquely determined by the values f(r1), …, f(rN). When does such a matrix exist? If R is a field, we may assume without loss of generality that the Vandermonde matrix is square. The answer is given by the following proposition.

###### Proposition 12.

Suppose R = F is a field. The following statements are equivalent.

1. The field F contains at least n distinct elements.

2. There exists an invertible n × n Vandermonde matrix with entries in F.

3. There exists a left-invertible Vandermonde matrix V ∈ F^{N×n} for some N.

The proof follows from the following well-known formula for the determinant of a square Vandermonde matrix.

 det V = ∏_{1≤i<j≤n} (rj − ri)
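The determinant formula can be confirmed symbolically; the following sympy snippet (our own check, for n = 4) compares det V against the product of pairwise differences:

```python
import sympy as sp

n = 4
r = sp.symbols(f'r1:{n + 1}')      # r1, r2, r3, r4

# Square Vandermonde matrix generated by r1, ..., r4
V = sp.Matrix([[r[i]**j for j in range(n)] for i in range(n)])

product = sp.prod(r[j] - r[i] for i in range(n) for j in range(i + 1, n))
assert sp.expand(V.det() - product) == 0
```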

The result is more complicated if R is a commutative ring with identity. In reference to Proposition 12, there are cases where (iii) holds but (ii) does not. As an example, consider the ring R = ℤ[β] in which β satisfies β^2 = β − 7. It is easy to check that the only units of R are ±1. There are no invertible 3 × 3 Vandermonde matrices in this ring. To see why, note that if V is generated by r1, r2, r3 then det V = (r2 − r1)(r3 − r1)(r3 − r2), which is a unit if and only if each factor is a unit, that is, each factor is ±1. But r3 − r1 = (r2 − r1) + (r3 − r2), so adding the factors together yields ±1 as a sum of two terms that are each ±1, a contradiction. However, there exists a left-invertible Vandermonde matrix in this ring. For example, the Vandermonde matrix generated by 0, 1, 2, β,

 V = [1 0 0; 1 1 1; 1 2 4; 1 β β^2] ∈ R^{4×3}

is left-invertible. Indeed, the 3 × 3 submatrices formed from rows 1, 2, 3 and rows 1, 2, 4 have determinants 2 and β(β − 1) = −7 respectively, and 4·2 + 1·(−7) = 1, so a left inverse exists by the Cauchy-Binet argument used in the proof of Lemma 13 below.

The following lemma gives a complete characterization of left-invertibility for Vandermonde matrices in a general commutative ring with identity.

###### Lemma 13.

Suppose R is a commutative ring with identity. The following statements are equivalent.

1. Every residue field of R has at least n elements.

2. The ideal generated by the determinants of all n × n Vandermonde matrices is equal to R, the unit ideal.

3. For some N, there exists a left-invertible Vandermonde matrix V ∈ R^{N×n}.

###### Proof.

We begin by showing (i) ⟺ (ii). If (i) is false, there exists some maximal ideal I such that the quotient ring R/I contains fewer than n elements. Therefore, given any n elements r1, …, rn ∈ R, we must have ri ≡ rj (mod I) for some i ≠ j. It follows that if V is the Vandermonde matrix generated by r1, …, rn then it satisfies

 det V = ∏_{1≤i<j≤n} (rj − ri) ∈ I

Therefore det V ∈ I for every n × n Vandermonde matrix. So the ideal generated by all Vandermonde determinants is contained in I, a proper ideal, and so cannot be the unit ideal. This shows that (ii) is false. Conversely, if (ii) is false then the ideal generated by all Vandermonde determinants must be proper, and so is contained in some maximal ideal I. In particular, det V ∈ I for every Vandermonde matrix V with entries in R. Therefore, det V = 0 for all Vandermonde matrices V with entries in the residue field R/I. Because R/I is a field, every nonzero element is a unit. So det V is a unit of R/I if and only if V is generated by distinct elements. We conclude that R/I must have fewer than n distinct elements and so (i) is false.

The implication (iii) ⟹ (ii) follows from the Cauchy-Binet formula, which gives the following expansion for det(LV), where L ∈ R^{n×N} and V ∈ R^{N×n} are not necessarily square but have a square product.

 det(LV) = ∑_{s ⊆ {1,…,N}, |s| = n} det(L_{:,s}) det(V_{s,:})

Here, the sum is taken over all subsets s of {1, …, N} with n elements. The corresponding columns and rows are extracted from L and V respectively and the associated determinants are multiplied together. Since each det(V_{s,:}) is an n × n Vandermonde determinant, if L is a left-inverse for V, then 1 = det(LV) = ∑_s det(L_{:,s}) det(V_{s,:}), so the ideal generated by the Vandermonde determinants contains 1, and (ii) follows. For the converse implication (ii) ⟹ (iii), a left-inverse can be explicitly constructed; see [20] for a proof.
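The Cauchy-Binet expansion is easy to check on a small instance (our own example, with n = 2, N = 4, and V a Vandermonde matrix generated by 1, 2, 3, 4):

```python
import sympy as sp
from itertools import combinations

n, N = 2, 4
L = sp.Matrix([[1, 2, 0, -1],
               [3, 1, 1, 0]])
V = sp.Matrix([[1, 1],
               [1, 2],
               [1, 3],
               [1, 4]])   # Vandermonde generated by 1, 2, 3, 4

# Cauchy-Binet: det(LV) = sum over n-element subsets s of
#               det(L[:, s]) * det(V[s, :])
total = sum(L[:, list(s)].det() * V[list(s), :].det()
            for s in combinations(range(N), n))
assert (L * V).det() == total
```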

###### Lemma 14.

Suppose R is a commutative ring with identity, and M is an R-module generated by v_1,…,v_n. Consider the following statements.

1. Every residue field of R has at least n elements.

2. M is also generated by the set { v_1 + rv_2 + ⋯ + r^{n−1}v_n : r ∈ R }.

Then (i) ⟹ (ii). If the v_i are also a basis for M, then (i) ⟺ (ii).

###### Proof.

To prove (ii), it suffices to show that each v_k is a linear combination of terms of the form v_1 + rv_2 + ⋯ + r^{n−1}v_n. If (i) holds, then by Lemma 13 there is some left-invertible N×n Vandermonde matrix V. Suppose V is generated by r_1,…,r_N, and suppose L is a left-inverse of V. Then it is straightforward to check that

 v_k = ∑_{j=1}^N L_{kj} (v_1 + r_jv_2 + ⋯ + r_j^{n−1}v_n),   k = 1,…,n (6)

as required. For the converse, suppose (ii) holds. Then an equation of the form (6) must hold for some N and some coefficients L_{kj} ∈ R. If the v_i form a basis for M, then after collecting terms, the coefficients corresponding to each v_i in (6) must vanish. Therefore,

 ∑_{j=1}^N L_{ij} r_j^{k−1} = { 1 if i = k, 0 if i ≠ k }   for i, k = 1,…,n

In other words, LV = I, where V is the (left-invertible) N×n Vandermonde matrix generated by r_1,…,r_N.
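Equation (6) can be illustrated concretely. The sketch below (made-up data, not from the paper) takes R = ℤ and M = ℤ² with basis v_1, v_2, and recovers each basis vector as an integer combination of the generators w_r = v_1 + rv_2, using an integer left-inverse of the Vandermonde matrix at the points r = 0, 1, 2.

```python
# Basis of the Z-module M = Z^2 (so n = 2).
v = [(1, 0), (0, 1)]

def w(r):
    """Generator w_r = v_1 + r * v_2."""
    return (v[0][0] + r * v[1][0], v[0][1] + r * v[1][1])

# Integer left-inverse of the 3 x 2 Vandermonde matrix with rows (1, r)
# for r = 0, 1, 2; the assertion checks L V = I over the integers.
points = [0, 1, 2]
V = [[1, r] for r in points]
L = [[1, 0, 0],
     [-1, 1, 0]]
assert all(sum(L[i][j] * V[j][k] for j in range(3)) == (i == k)
           for i in range(2) for k in range(2))

# Equation (6): v_k = sum_j L[k][j] * w(r_j).
recovered = [tuple(sum(L[k][j] * w(points[j])[t] for j in range(3))
                   for t in range(2))
             for k in range(2)]
assert recovered == v
```

Here v_1 = w(0) and v_2 = w(1) − w(0), exactly the combination encoded by the rows of L.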

Lemma 14 may be specialized to polynomials, and thus yields a sufficient condition under which p(r) = 0 for all r ∈ R implies that p is the zero polynomial.

###### Corollary 15.

Suppose R is a commutative ring with identity and every residue field of R has at least n elements. Suppose p(r) = a_1 + a_2r + ⋯ + a_nr^{n−1}, where the coefficients a_i belong to an R-module. If p(r) = 0 for all r ∈ R, then a_1 = ⋯ = a_n = 0.

###### Proof.

Let M be the module generated by a_1,…,a_n. Applying Lemma 14, we conclude that M is generated by the set { p(r) : r ∈ R }. But p(r) = 0 for all r ∈ R, therefore M = 0, and so a_1 = ⋯ = a_n = 0.
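The residue-field hypothesis in Corollary 15 cannot simply be dropped. A brute-force sketch (illustrative, not from the paper) shows that over the two-element field 𝔽_2 the nonzero polynomial r + r² vanishes at every point, while over 𝔽_5, which has enough elements for n = 3 coefficients, no such degree-2 example exists.

```python
from itertools import product

def vanishing_nonzero_polys(q, degree):
    """Nonzero polynomials over Z/q (coefficient tuples (a_0, ..., a_degree))
    that evaluate to 0 at every element of Z/q."""
    found = []
    for coeffs in product(range(q), repeat=degree + 1):
        if any(coeffs) and all(
                sum(a * r ** i for i, a in enumerate(coeffs)) % q == 0
                for r in range(q)):
            found.append(coeffs)
    return found

# Over F_2 the polynomial r + r^2 (coefficients (0, 1, 1)) vanishes
# everywhere, yet it is not the zero polynomial.
assert (0, 1, 1) in vanishing_nonzero_polys(2, 2)

# Over F_5 no nonzero polynomial of degree <= 2 vanishes everywhere:
# a degree-2 polynomial has at most 2 roots, but F_5 has 5 points.
assert vanishing_nonzero_polys(5, 2) == []
```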

We now have the tools we need to prove our main invariance result for rings.

Proof of Theorem 6. The result is trivial if GK is scalar or KG is scalar. In this case, writing (say) KG = c with c ∈ R, we have KGK = cK ∈ S for every K ∈ S because S is an R-module, so every S is QI with respect to every G. Further, the 1×1 adjugate satisfies adj(I − KG) = 1, so K adj(I − GK) = adj(I − KG)K = K ∈ S, and the right-hand side always holds as well. We assume from now on that GK and KG are both of size at least 2×2.

Suppose S is QI with respect to G, and let K ∈ S. Let m denote the size of GK. Using Proposition 5, write

 K adj(I − GK) = c_0K + c_1K(GK) + ⋯ + c_{m−1}K(GK)^{m−1}

where the coefficients c_i ∈ R are obtained by expanding each term and collecting like powers of GK. If K ∈ S, then by Lemma 11 all terms in the sum are in S. Since S is an R-module, it follows that K adj(I − GK) ∈ S.

Conversely, suppose that S is not QI with respect to G. Therefore, there exists some K_0 ∈ S such that K_0GK_0 ∉ S. We proceed by way of contradiction. Suppose that K adj(I − GK) ∈ S for all K ∈ S. In particular, it must hold for K = rK_0 with r ∈ R. Therefore, we conclude that rK_0 adj(I − rGK_0) ∈ S for all r ∈ R.

Because each entry of the adjugate matrix is the determinant of an (m−1)×(m−1) minor, where m is the size of GK_0, it follows that adj(I − rGK_0) is a polynomial in r of degree at most m−1 with coefficients in R^{m×m}. Letting adj(I − rGK_0) = B_0 + rB_1 + ⋯ + r^{m−1}B_{m−1}, we obtain

 (K_0B_0)r + (K_0B_1)r^2 + ⋯ + (K_0B_{m−1})r^m ∈ S   for all r ∈ R

If every residue field of R has at least m+1 elements, we conclude from Lemma 14 that K_0B_i ∈ S for i = 0,…,m−1. Recall the identity (2), and let x be an indeterminate. We then obtain

 (I − xGK_0)(B_0 + xB_1 + ⋯ + x^{m−1}B_{m−1}) = det(I − xGK_0) I = (q_0 + q_1x + ⋯ + q_mx^m) I

where the q_i only depend on G and K_0. Because of the ring isomorphism (R^{m×m})[x] ≅ (R[x])^{m×m}, the equation above is a polynomial identity in x. So we may collect like powers of x and match the coefficients on both sides. For the first two coefficients, we have B_0 = q_0I and B_1 − GK_0B_0 = q_1I. Note also that q_0 = 1, which follows because setting x = 0 gives det(I) = 1. Therefore, B_1 = q_1I + GK_0. Based on our earlier conclusion that K_0B_i ∈ S, we have K_0B_1 ∈ S. Since K_0 ∈ S by assumption and S is an R-module, it follows that K_0GK_0 = K_0B_1 − q_1K_0 ∈ S, a contradiction. If we use the identity K adj(I − GK) = adj(I − KG)K, we now have r(B_0′K_0) + r^2(B_1′K_0) + ⋯ + r^n(B_{n−1}′K_0) ∈ S for all r ∈ R, where adj(I − rK_0G) = B_0′ + rB_1′ + ⋯ + r^{n−1}B_{n−1}′ and n is the size of K_0G. Carrying out a similar argument, we deduce that the result also holds when every residue field of R has at least n+1 elements, thus completing the proof.
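The coefficient matching used above can be verified numerically. The sketch below (with made-up 2×2 integer data for G and K_0) reads off B_0 and B_1 from the adjugate of I − xGK_0, which for 2×2 matrices is linear in x, and confirms B_0 = I, B_1 = q_1I + GK_0, and hence K_0GK_0 = K_0B_1 − q_1K_0.

```python
def adj2(M):
    """Adjugate of a 2x2 matrix: adj(M) @ M = det(M) * I."""
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

I2 = [[1, 0], [0, 1]]
G = [[1, 2], [0, 3]]   # made-up plant data
K0 = [[2, 1], [1, 0]]  # made-up controller data
GK0 = matmul(G, K0)

# adj(I - x*GK0) = B0 + x*B1 is linear in x for 2x2 matrices, so the
# coefficients follow from evaluations at x = 0 and x = 1.
B0 = adj2(I2)                          # x = 0
B1 = matsub(adj2(matsub(I2, GK0)), B0)  # (x = 1) minus (x = 0)

# q1 is the x-coefficient of det(I - x*GK0), namely -trace(GK0).
q1 = -(GK0[0][0] + GK0[1][1])

assert B0 == I2
assert B1 == [[q1 + GK0[0][0], GK0[0][1]],
              [GK0[1][0], q1 + GK0[1][1]]]  # B1 = q1*I + GK0

# Hence K0*G*K0 = K0*B1 - q1*K0, the combination used in the proof.
lhs = matmul(K0, GK0)
rhs = matsub(matmul(K0, B1), [[q1 * e for e in row] for row in K0])
assert lhs == rhs
```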

The counterexample given in Section IV is an example for which the residue fields of R are too small. In that case, Theorem 6 fails because Lemma 11 fails. One way to avoid this technical requirement on the residue fields is to strengthen the notion of quadratic invariance. For example, if we require that K_1GK_2 ∈ S for all K_1, K_2 ∈ S, then Lemma 11 follows without any requirement on the residue fields. Thus, we would obtain a weaker version of the first part of Theorem 6. The proof of Theorem 7 uses similar machinery to that used in the proof of Theorem 6.

Proof of Theorem 7. As in Theorem 6, the scalar cases are trivial, so we assume GK and KG are both of size at least 2×2, and we write m for the size of GK. If K ∈ M, that is, if det(I − GK) ≠ 0, then I − GK is invertible. Furthermore, F contains sufficiently many distinct elements by hypothesis, so we may apply both parts of Theorem 6. Therefore, S being quadratically invariant with respect to G is equivalent to K adj(I − GK) ∈ S for all K ∈ S. If we restrict to K ∈ S ∩ M, then K(I − GK)^{−1} = det(I − GK)^{−1} K adj(I − GK) ∈ S. So we deduce that

 h(S ∩ M) ⊆ S

Since h maps M into M, it follows that h(S ∩ M) ⊆ S ∩ M, and by the involutive property of h, we deduce that h(S ∩ M) = S ∩ M.

Conversely, suppose that h(S ∩ M) = S ∩ M. Then it follows that K(I − GK)^{−1} ∈ S for all K ∈ S ∩ M. Suppose S is not quadratically invariant with respect to G, so there is some K_0 ∈ S such that K_0GK_0 ∉ S. Proceeding as in the proof of Theorem 6, let K = rK_0 and obtain

 (K_0B_0)r + (K_0B_1)r^2 + ⋯ + (K_0B_{m−1})r^m ∈ S   for all r ∈ F such that rK_0 ∈ M

In other words, the inclusion holds whenever det(I − rGK_0) ≠ 0. This is a polynomial in r of degree at most m, so it has at most m roots. Therefore, the constraint that rK_0 ∈ M excludes at most m elements of F. Therefore, F must contain at least 2m+1 elements so that there are at least m+1 elements remaining after all roots of det(I − rGK_0) have been excluded. As in the proof of Theorem 6, we may apply a similar argument with KG in place of GK to conclude that it also suffices for F to contain at least 2n+1 elements, where n is the size of KG, and the proof is complete.
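Although this section works over abstract rings and fields, the defining condition KGK ∈ S for all K ∈ S is easy to test mechanically in the familiar case where S is a sparsity constraint. The sketch below implements the classical sparsity specialization of quadratic invariance (cf. [24]); it is an illustration alongside this proof, not a construction used in it.

```python
def is_qi_sparsity(S, G):
    """S and G are 0/1 sparsity patterns (lists of lists); S[i][j] = 1 means
    controllers may have a nonzero (i, j) entry.  Checks the sparsity form of
    quadratic invariance: whenever S[i][j], G[j][k], and S[k][l] are all
    allowed, the product entry (i, l) of K*G*K must be allowed too."""
    n, m = len(S), len(S[0])   # K is n x m, so G must be m x n
    return all(S[i][j] * G[j][k] * S[k][l] * (1 - S[i][l]) == 0
               for i in range(n) for j in range(m)
               for k in range(n) for l in range(m))

# Lower-triangular controllers against a lower-triangular plant: QI holds,
# because products of lower-triangular patterns stay lower-triangular.
assert is_qi_sparsity([[1, 0], [1, 1]], [[1, 0], [1, 1]])

# A fully decentralized (diagonal) controller against a dense plant is not
# QI: K*G*K picks up off-diagonal entries forbidden by the pattern.
assert not is_qi_sparsity([[1, 0], [0, 1]], [[1, 1], [1, 1]])
```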

### VII-B Invariance for rationals

The rational functions fundamentally form a field, so our first invariance result follows immediately from our invariance result for fields.

Proof of Theorem 8. Note that R(s) is a field, so S is a subspace over R(s). Furthermore, R(s) has infinitely many elements, so the result follows from Theorem 7.

The rationals become a ring when we impose a properness constraint. Furthermore, the strictly proper rationals R(s)_sp form an ideal of the proper rationals R(s)_p. We can also check that this ideal is maximal, because the proper rationals that are not strictly proper are precisely the units of R(s)_p. Indeed, R(s)_sp is the unique maximal ideal of R(s)_p, so R(s)_p is a local ring.
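These structural facts are easy to state computationally. Representing a rational function by its numerator and denominator coefficient lists, the relative degree deg(den) − deg(num) classifies it as proper (≥ 0), strictly proper (> 0), or a unit of the proper rationals (= 0, i.e., biproper). A minimal sketch, assuming each polynomial is a coefficient list with nonzero leading term:

```python
def rel_degree(num, den):
    """Relative degree of num/den, with polynomials given as coefficient
    lists [a_0, a_1, ...] whose last entry is nonzero."""
    return (len(den) - 1) - (len(num) - 1)

def is_proper(num, den):
    return rel_degree(num, den) >= 0

def is_strictly_proper(num, den):
    return rel_degree(num, den) > 0

def is_unit_of_proper(num, den):
    # Biproper: proper with a proper inverse, i.e. relative degree exactly 0.
    return rel_degree(num, den) == 0

# 1/(s+1) is strictly proper; (s+2)/(s+1) is biproper, hence a unit of the
# proper rationals; its inverse (s+1)/(s+2) is again proper.
assert is_strictly_proper([1], [1, 1])
assert is_unit_of_proper([2, 1], [1, 1])
assert is_proper([1, 1], [2, 1])
```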

We may use these structural facts to prove the following lemma, which gives a condition that guarantees the invertibility of I − GK.

###### Lemma 16.

Suppose G ∈ (R(s)_p)^{m×n} and K ∈ (R(s)_sp)^{n×m}. Then I − GK is invertible, and (I − GK)^{−1} ∈ (R(s)_p)^{m×m}.

###### Proof.

Since R(s)_sp is maximal and every entry of GK is strictly proper, we have

 det(I − GK) ≡ det(I) ≡ 1 (mod R(s)_sp)

It follows that det(I − GK) is not strictly proper, and is therefore a unit of R(s)_p. So I − GK is invertible, with (I − GK)^{−1} = det(I − GK)^{−1} adj(I − GK) having proper entries.
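Numerically, Lemma 16 says that det(I − GK) is biproper with value 1 at s = ∞. A quick illustration with made-up transfer functions (a proper G and a strictly proper K, not from the paper), evaluating along s → ∞:

```python
def G(s):
    # A made-up proper plant: every entry has relative degree >= 0.
    return [[(s + 1) / (s + 2), 1 / (s + 1)],
            [2 / (s + 3), (s - 1) / (s + 4)]]

def K(s):
    # A made-up strictly proper controller: entries vanish as s -> infinity.
    return [[1 / (s + 1), 0],
            [3 / (s + 2) ** 2, 2 / (s + 5)]]

def det_I_minus_GK(s):
    g, k = G(s), K(s)
    m = [[(1 if i == j else 0) - sum(g[i][t] * k[t][j] for t in range(2))
          for j in range(2)] for i in range(2)]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# As s grows, det(I - G(s)K(s)) approaches det(I) = 1, consistent with
# det(I - GK) being a unit of the proper rationals.
values = [det_I_minus_GK(10.0 ** p) for p in range(1, 7)]
assert abs(values[-1] - 1.0) < 1e-5
```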

The motivation for choosing a strictly proper K and a proper G is inspired by classical feedback control. If we think of the proper rationals as transfer functions, a strictly proper K means that the controller has no direct feedthrough term, a common assumption that ensures well-posedness of the closed-loop interconnection. We now present the proof of the invariance result for proper rationals.

Proof of Theorem 9. Note that R(s)_p is a commutative ring with identity and S is an R(s)_p-module. Because R(s)_p has a unique maximal ideal R(s)_sp, the only residue field is R(s)_p/R(s)_sp, which has infinitely many elements. Applying Theorem 6, we conclude that S is QI with respect to G if and only if K adj(I − GK) ∈ S for all K ∈ S. However, I − GK is always invertible by Lemma 16, so K adj(I − GK) ∈ S if and only if K(I − GK)^{−1} ∈ S. By the involutive property of h, this is equivalent to h(S) = S.

Proof of Theorem 10. First, note that (R(x))(s)_p ≅ ((R(x))(s ∖ s_i)_p)(s_i)_p for each variable s_i, so we may think of the multivariate transfer functions as proper transfer functions in s_i with coefficients that are rational functions of x and the remaining variables. As in Theorem 9, we still have a commutative ring with identity, but there is no longer a unique maximal ideal. Indeed, the maximal ideals are the sets

 M_i = { f ∈ (R(x))(s)_p : f is strictly proper in s_i } = ((R(x))(s ∖ s_i)_p)(s_i)_sp

and it is easy to check that the corresponding residue fields each have infinitely many elements. Furthermore, det(I − GK) ∉ M_i for each i, as in the proof of Lemma 16. So det(I − GK) does not belong to any maximal ideal and must therefore be a unit. The rest follows as in the proof of Theorem 9.

## VIII Summary

In this paper, we give an algebraic treatment of quadratic invariance, the well-known condition under which decentralized control synthesis may be reduced to a convex optimization problem. Our results hold for commutative rings with identity, and in particular specialize to the natural system-theoretic case of proper rational functions in one variable, as well as multidimensional rational functions. This formulation has the advantage of avoiding some of the technicalities in analytic treatments. In particular, notions of topology, limits, or norms are not required. Thus, quadratic invariance may be viewed as a purely algebraic concept.

## Acknowledgments

The proof of Lemma 13 is due to Thomas Goodwillie, and the noninvertible Vandermonde example in Section VII-A is due to David Speyer.

## References

• [1] B. Bamieh, F. Paganini, and M. Dahleh. Distributed control of spatially invariant systems. IEEE Transactions on Automatic Control, 47(7):1091–1107, 2002.
• [2] C. Beck. On formal power series representations for uncertain systems. IEEE Transactions on Automatic Control, 46(2):314–319, 2001.
• [3] C. L. Beck, J. Doyle, and K. Glover. Model-reduction of multidimensional and uncertain systems. IEEE Transactions on Automatic Control, 41(10):1466–1477, 1996.
• [4] V. D. Blondel and J. N. Tsitsiklis. A survey of computational complexity results in systems and control. Automatica, 36(9):1249–1274, 2000.
• [5] M. L. Curtis. Abstract linear algebra. Springer, 1990.
• [6] R. D’Andrea and G. E. Dullerud. Distributed control design for spatially interconnected systems. IEEE Transactions on Automatic Control, 48(9):1478–1495, 2003.
• [7] M. Fliess, M. Lamnabhi, and F. Lamnabhi-Lagarrigue. An algebraic approach to nonlinear functional expansions. IEEE Transactions on Circuits and Systems, 30(8):554–570, 1983.
• [8] E. Fornasini and G. Marchesini. Doubly-indexed dynamical systems: State-space models and structural properties. Theory of Computing Systems, 12(1):59–72, 1978.
• [9] Y.-C. Ho and K.-C. Chu. Team decision theory and information structures in optimal control problems—Part I. IEEE Transactions on Automatic Control, 17(1):15–22, 1972.
• [10] R. E. Kalman. Algebraic structure of linear dynamical systems, I. The module of Σ. Proceedings of the National Academy of Sciences of the United States of America, 54(6):1503–1508, 1965.
• [11] D. Klarner. Algebraic theory for difference and differential equations. The American Mathematical Monthly, 76(4):366–373, 1969.
• [12] A. Lamperski and L. Lessard. Optimal state-feedback control under sparsity and delay constraints. In 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages 204–209, 2012.
• [13] S. Lang. Undergraduate algebra. Springer, 2005.
• [14] C. Langbort, R. S. Chandra, and R. D’Andrea. Distributed control design for systems interconnected over an arbitrary graph. IEEE Transactions on Automatic Control, 49(9):1502–1519, 2004.
• [15] L. Lessard. Tractability of complex control systems. PhD thesis, Stanford University, 2011.
• [16] L. Lessard and S. Lall. An algebraic framework for quadratic invariance. In IEEE Conference on Decision and Control, pages 2698–2703, 2010.
• [17] L. Lessard and S. Lall. A state-space solution to the two-player decentralized optimal control problem. In Allerton Conference on Communication, Control, and Computing, pages 1559–1564, 2011.
• [18] X. Qi, M. V. Salapaka, P. G. Voulgaris, and M. Khammash. Structured optimal and robust control with multiple criteria: a convex solution. IEEE Transactions on Automatic Control, 49(10):1623–1640, 2004.
• [19] F. Riesz and B. Sz.-Nagy. Functional analysis. Dover Publications, 1990.
• [20] D. W. Robinson. The classical adjoint. Linear Algebra and its Applications, 411:254–276, 2005.
• [21] R. Roesser. A discrete state-space model for linear image processing. IEEE Transactions on Automatic Control, 20(1):1–10, 1975.
• [22] M. Rotkowitz, R. Cogill, and S. Lall. Convexity of optimal control over networks with delays and arbitrary topology. International Journal of Systems, Control and Communication, 2(1):30–54, 2010.
• [23] M. Rotkowitz and S. Lall. Decentralized control information structures preserved under feedback. In IEEE Conference on Decision and Control, pages 569–575, 2002.
• [24] M. Rotkowitz and S. Lall. A characterization of convex problems in decentralized control. IEEE Transactions on Automatic Control, 51(2):274–286, 2006.
• [25] Ş. Sabău and N. Martins. Necessary and sufficient conditions for stabilizability subject to quadratic invariance. In IEEE Conference on Decision and Control, pages 2459–2466, 2011.
• [26] P. Shah and P. A. Parrilo. H₂-optimal decentralized control over posets: A state-space solution for state-feedback. IEEE Transactions on Automatic Control, 58(12):3084–3096, 2013.
• [27] H. Shin and S. Lall. Decentralized control via Gröbner bases and variable elimination. IEEE Transactions on Automatic Control, 57(4):1030–1035, 2012.
• [28] M. C. Smith. On stabilization and the existence of coprime factorizations. IEEE Transactions on Automatic Control, 34(9):1005–1007, 1989.
• [29] M. Vidyasagar. Control system synthesis: a factorization approach. MIT Press, 1985.
• [30] N. Wiener. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. MIT press, 1964.
• [31] H. S. Witsenhausen. A counterexample in stochastic optimum control. SIAM Journal on Control, 6(1):131–147, 1968.
• [32] D. Youla, H. Jabr, and J. Bongiorno Jr. Modern Wiener-Hopf design of optimal controllers–Part II: The multivariable case. IEEE Transactions on Automatic Control, 21(3):319–338, 1976.