An Algebraic Approach to the Control of Decentralized Systems
Abstract
Optimal decentralized controller design is notoriously difficult, but recent research has identified large subclasses of such problems that may be convexified and thus are amenable to solution via efficient numerical methods. One recently discovered sufficient condition for convexity is quadratic invariance (QI). Despite the simple algebraic characterization of QI, which relates the plant and controller maps, proving convexity of the set of achievable closed-loop maps requires tools from functional analysis. In this work, we present a new formulation of quadratic invariance that is purely algebraic. While our results are similar in flavor to those from traditional QI theory, they do not follow from that body of work. Furthermore, they are applicable to new types of systems that are difficult to treat using functional analysis. Examples discussed include rational transfer matrices, systems with delays, and multidimensional systems.
Laurent Lessard and Sanjay Lall 
I Introduction
The problem of designing control systems where multiple controllers are interconnected over a network to control a collection of interconnected plants is longstanding and difficult [4, 31]. Quadratic invariance is a mathematical condition which, when it holds, allows one to bring to bear the tools of Youla parameterization to find optimal controllers [23, 24]. A network system has the requisite quadratic invariance under a surprisingly wide set of circumstances. These include cases where the controllers can communicate more quickly than the plant dynamics propagate through the network [22].
There is a large and diverse body of literature addressing decentralized control theory and specifically conditions that make a problem more tractable in some sense. The seminal work of Ho and Chu [9] develops the partial nestedness condition, under which there exists an optimal decentralized controller that is linear. More recently, Qi, Salapaka, et al. identified many different decentralized control architectures that may be cast as tractable optimization problems [18]. LMI formulations of distributed control problems are developed in [6, 14]. Stabilization was fully characterized for all QI problems in [25]. Explicit state-space solutions were also found for classes of delayed problems [12], poset-causal problems [26], and two-player output-feedback problems [17].
There have been relatively few works treating decentralized control from a purely algebraic perspective. One recent example is the work of Shin and Lall [27], where elimination theory is used to express solutions to decentralized control problems as projections of semi-algebraic sets. Quadratic invariance was first treated using an analytic framework in [23, 24]. The aim of this paper is to give an algebraic treatment of quadratic invariance. The consequence is not only a new proof of existing results in some cases, but also an extension of these results to a significantly different class of models. Instead of requiring analytic properties of our system model, we require algebraic ones. For example, [24] requires that the set of allowable controllers be a closed inert subspace, whereas in this work we require that it be a module. The class of systems covered in this paper includes multidimensional systems, which are not covered in existing works. This is discussed in Section VI-C.
Many topics in control have historically been treated from both analytic and algebraic viewpoints. As early as 1965, Kalman proposed the use of modules as the natural framework in which to represent linear state-space systems [10]. When systems are viewed as maps on signal spaces, one has many choices. If one represents systems as transfer functions, then one can either consider the generality of transfer functions in a Hardy space and use analytic methods to prove results, or one can consider formal power series or rational functions and use algebraic methods. Often, the two frameworks use very different proof techniques, which provide different insights and ranges of applicability. This is a fundamental choice in how we represent the basic objects [11]. This dichotomy exists in many facets of the control systems literature. For example, spectral factorization is easily considered from an algebraic perspective. The Riesz-Fejér theorem states that a trigonometric polynomial which is nonnegative on the circle may be factored into the product of two polynomials, one of which is holomorphic inside the disc and the other outside [19]. This is the fundamental algebraic version of the discrete-time SISO spectral factorization result. For comparison, the analytic version of this result is commonly known as Wiener-Hopf factorization [30]. Of course, the same choice of frameworks exists beyond factorization. The theory of stabilization as introduced by Youla [32] was developed both algebraically [29] and analytically [28]. The idea of algebraic representation has also proven useful in areas such as realization theory [2], model reduction [3], and nonlinear systems theory [7].
The work in this paper is based on preliminary results that first appeared in [16, 15]. Unlike these early works, all invariance results in the present work include both necessary and sufficient conditions, and all the proofs are purely algebraic. The paper is organized as follows. The remainder of the introduction gives an overview of quadratic invariance and existing analytic results. Invariance results are proven and discussed for matrices, rings and fields, and rational functions in Sections II, IV, and V, respectively. We present some illustrative examples in Section VI and we summarize our contributions in Section VIII.
I-A Quadratic invariance
We have adopted the notation convention from [23, 24] to make the works readily comparable. Given a plant $G$, which is a map from an input space $\mathcal{U}$ to an output space $\mathcal{Y}$, we seek to design a controller $K$ that achieves desirable performance when connected in feedback with $G$. The main object of interest is the function $h$ given by
$$h(G, K) = -K(I - GK)^{-1}.$$
Here, the domain is the set of maps $K$ such that $I - GK$ is invertible. The image of $h(G, \cdot)$ is again this same set because $h(G, \cdot)$ is an involution. That is, $h(G, h(G, K)) = K$. We will be more specific shortly about the nature of the spaces $\mathcal{U}$ and $\mathcal{Y}$ (and consequently the maps $G$ and $K$).
The motivation for studying $h$ is that it is a linear fractional transformation that occurs in feedback control. Consider for example the four-block plant of Figure 1.
For simplicity, assume for now that $G = P_{22}$. In Figure 1, the set of achievable closed-loop maps subject to the controller belonging to some set $S$ is given by
$$\left\{\, P_{11} + P_{12} K (I - G K)^{-1} P_{21} \;:\; K \in S \,\right\} = \left\{\, P_{11} - P_{12}\, h(G, K)\, P_{21} \;:\; K \in S \,\right\}.$$
Selecting a controller $K$ that optimizes some closed-loop performance metric is equivalent to selecting the best achievable closed-loop map and then finding the $K$ that yields it.
Roughly, the works [23, 24] give a necessary and sufficient condition under which $h(G, S) = S$. If this condition holds, then the achievable closed-loop maps are exactly $\{P_{11} - P_{12} Q P_{21} : Q \in S\}$, and so the set of achievable closed-loop maps is affine and easily searchable. The condition is called quadratic invariance, and a generic definition is given below.
Definition 1 (Quadratic invariance).
We say that the set $S$ is quadratically invariant (QI) under $G$ if for all $K \in S$, we have $KGK \in S$.
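As a concrete illustration (not part of the formal development), the definition can be checked numerically in the classical lower-triangular case. The matrices below are hypothetical stand-ins for $G$ and elements of $S$; the containment holds structurally because products of lower-triangular matrices are lower-triangular.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
G = np.tril(rng.standard_normal((n, n)))      # hypothetical lower-triangular plant

def random_K():
    """Draw a random element of S, here the subspace of lower-triangular matrices."""
    return np.tril(rng.standard_normal((n, n)))

# Definition 1: S is QI under G when K G K lands back in S for every K in S.
for _ in range(100):
    K = random_K()
    KGK = K @ G @ K
    assert np.allclose(KGK, np.tril(KGK))     # K G K is again lower-triangular
print("sampled QI checks passed")
```

A handful of random samples is of course not a proof, but it makes the quadratic nature of the condition tangible.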
In [23], the input and output spaces are Banach spaces, and $G$ and $K$ are bounded linear operators. In [24], extended signal spaces are used and the associated maps are then continuous linear operators. In this work we use different spaces still, but the generic definition of quadratic invariance remains the same.
We now state the main results from [23, 24]. Additional notation and terminology are defined after each theorem statement.
Theorem 2 (see [23]).
Suppose $\mathcal{U}$ and $\mathcal{Y}$ are Banach spaces, and $S \subseteq \mathcal{B}(\mathcal{Y}, \mathcal{U})$ is a closed subspace. Further suppose that for every $K \in S$, the point $1$ lies in the unbounded connected component of the resolvent set of $GK$. Then $S$ is quadratically invariant under $G$ if and only if $h(G, S) = S$.
In Theorem 2, $\mathcal{B}(\cdot\,, \cdot)$ denotes the set of bounded linear operators from the first argument to the second. The resolvent condition is admittedly technical in nature, but the result of the theorem is very simple: quadratic invariance is equivalent to $S$ being invariant under the map $h(G, \cdot)$.
Theorem 3 (see [24]).
Suppose $G \in \mathcal{L}(\mathcal{U}, \mathcal{Y})$ and $S \subseteq \mathcal{L}(\mathcal{Y}, \mathcal{U})$ is an inert closed subspace. Then $S$ is quadratically invariant under $G$ if and only if $h(G, S) = S$.
In Theorem 3, $\mathcal{L}(\cdot\,, \cdot)$ denotes the set of continuous linear maps from the first argument to the second. The requirement that $S$ be inert means that the impulse response matrix of $K$ must be entrywise bounded over every finite time interval for all $K \in S$. Among other things, this technical condition guarantees that $I - GK$ is always invertible, and so $h(G, \cdot)$ is well-defined over all of $S$. An analogous result, with the technical assumptions modified accordingly, is also provided in [24].
It is interesting to note that both sides of the equivalences proved in Theorems 2 and 3 are purely algebraic statements. In other words, they can be stated in terms of a finite number of algebraic operations (addition, multiplication, inversion). This seems at odds with the technical assumptions required in the theorems. For example, $S$ being a closed subspace means that $S$ should contain all of its limit points. This is an analytic concept requiring an underlying norm, or a topology at the very least.
This observation is the starting point for this work, where we show that invariance results akin to Theorems 2 and 3 can be obtained in a purely algebraic setting without requiring anything more than well-defined addition, multiplication, and inversion. This makes rings and fields the natural objects to work with, and we will discuss them at greater length in Section III. In Section VI we give three specific settings where these algebraic tools offer a natural framework for modeling control systems. These are the cases of sparse controllers, networks with delays, and multidimensional systems.
II The matrix case
The real matrix case is an example that illustrates when quadratic invariance may be treated either analytically or algebraically. In this section, we present the invariance result in the matrix case, and give an outline of the proof using both the existing analytic approach [23, 24] and the algebraic approach that is expanded upon in more detail in later sections of this work. We present the proofs in sufficient detail to highlight the mathematical machinery being used, but we skip over less relevant details in the interest of clarity.
Theorem 4 (QI for matrices).
Suppose $G \in \mathbb{R}^{n \times m}$ and $S \subseteq \mathbb{R}^{m \times n}$ is a subspace. Then $S$ is quadratically invariant under $G$ if and only if $h(G, K) \in S$ for every $K \in S$ such that $I - GK$ is invertible.
Proof.
We outline a proof of the forward direction. If $S$ is QI with respect to $G$, then by definition we have $KGK \in S$ for all $K \in S$. The first step is to show that $K(GK)^k \in S$ for all $k \geq 0$ as well. This can be proven by induction using the identity
$$K(GK)^{k+1} = \tfrac{1}{2}\Big[\big(K + K(GK)^k\big)\,G\,\big(K + K(GK)^k\big) - KGK - K(GK)^k\, G\, K(GK)^k\Big].$$
Next, we examine the function $h(G, K) = -K(I - GK)^{-1}$ when $K \in S$. It suffices to show that $h(G, K) \in S$ for each such $K$, since $h(G, \cdot)$ is involutive. We prove this result first via an analytic approach similar to the one used in [23], and then using an algebraic approach.
The analytic approach is to use an infinite series expansion. For $K$ such that the spectral radius of $GK$ is less than one, we have the following convergent series:
$$K(I - GK)^{-1} = \sum_{k=0}^{\infty} K(GK)^k. \tag{1}$$
Since $K(GK)^k \in S$ for all $k$, and $S$ is a finite-dimensional subspace and therefore closed, the infinite sum converges to an element of $S$. Using an analytic continuation argument [23], one can show that the same conclusion holds for all $K \in S$ such that $I - GK$ is invertible, as required.
The algebraic approach is to use a finite series expansion. Pick some $K \in S$ such that $I - GK$ is invertible. By the Cayley-Hamilton theorem, there exist $\alpha_0, \dots, \alpha_{n-1} \in \mathbb{R}$ such that
$$(I - GK)^{-1} = \alpha_0 I + \alpha_1 (GK) + \dots + \alpha_{n-1}(GK)^{n-1}.$$
Expanding and collecting like powers of $GK$, we find that
$$K(I - GK)^{-1} = \sum_{k=0}^{n-1} \alpha_k\, K(GK)^k$$
for some $\alpha_k \in \mathbb{R}$. Once again, $K(GK)^k \in S$ for all $k$, so every term in this finite sum belongs to the subspace and therefore $K(I - GK)^{-1} \in S$. The difference is that we did not require an analytic continuation, nor did we make use of the fact that $S$ is closed.
Notice that we give two proofs of the forward direction of Theorem 4, one analytic and the other algebraic. The analytic approach is based on convergence and hence depends on the topology. Since this particular result is only stated for matrices, the choice of topology does not matter, but for more general convolution operators on infinite signal spaces the topology has a significant effect on the applicability of the result and the technical machinery required to effect the proof. The algebraic approach, by contrast, is much simpler, and relies only on addition, multiplication, and inversion.
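The two proof strategies can be mirrored numerically. The sketch below uses randomly generated lower-triangular matrices as a hypothetical QI pair, compares the closed-loop map against a truncated version of the infinite series, and confirms that the result inherits the structure of $S$; the scaling factor is chosen only so that the series converges.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Lower-triangular plant and controller: a structurally QI pair.
G = 0.1 * np.tril(rng.standard_normal((n, n)))   # scaled so the series converges
K = 0.1 * np.tril(rng.standard_normal((n, n)))

I = np.eye(n)
M = K @ np.linalg.inv(I - G @ K)                 # K (I - GK)^{-1}

# Analytic view: M is the limit of the series sum_k K (GK)^k.
series = sum(K @ np.linalg.matrix_power(G @ K, k) for k in range(60))
assert np.allclose(M, series)

# Invariance prediction: M stays lower-triangular, i.e. in S.
assert np.allclose(M, np.tril(M))
print("series matches and structure is preserved")
```

The algebraic proof replaces the 60-term truncation with an exact finite expansion of length $n$, which is why no convergence (and hence no topology) is needed.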
III Algebraic preliminaries
The fundamental algebraic properties we wish to capture are simply addition and multiplication. This leads naturally to rings and fields, which are fundamental building blocks of abstract algebra. These concepts also provide the framework commonly used to state many widely used results in control theory. For example, the set of real-rational transfer functions is a field, and the subset of proper ones is a ring. This viewpoint can be extremely useful, for example when parameterizing all stabilizing controllers [29]. We refer the reader to [13] for an introduction to these concepts.
We now explain some of the conventions used throughout this paper. The integers, reals, rationals, and complex numbers are denoted by $\mathbb{Z}$, $\mathbb{R}$, $\mathbb{Q}$, and $\mathbb{C}$, respectively. We use $R$ to denote an arbitrary commutative ring with identity, and $F$ to denote an arbitrary field. The additive and multiplicative identity elements of $R$ are denoted $0_R$ and $1_R$, respectively, but we will often omit the subscript when it is clear from context. An invertible element of $R$ is called a unit, and the set of units is written $U(R)$. We write $R[z]$ and $F(z)$ to respectively denote the ring of polynomials and the field of rational functions in the indeterminate $z$. If $I \subseteq R$ is an ideal, we write $I \trianglelefteq R$. If $I$ is a maximal ideal, the associated quotient ring $R/I$ is a field, called the residue field. Finally, we make use of the notion of an $R$-module, which is the generalization of a vector space in which the scalars belong to a ring $R$ rather than a field.
In this paper, we consider finite matrices with elements that belong to $R$. Much of the familiar linear algebra theory carries over to this more general setting. We refer the reader to [5] for an introduction to abstract linear algebra. We write $R^{m \times n}$ to mean the set of $m \times n$ matrices with entries in $R$. Matrix multiplication between matrices of compatible dimensions is defined in the standard way. When the matrices are square, we write $R^{n \times n}$, which is a ring. The identity matrix is denoted $I$; that is, the matrix whose diagonal and off-diagonal entries are $1$ and $0$, respectively.
Many concepts from matrix theory carry over to the more general setting. Specifically, if $A \in R^{n \times n}$, the determinant $\det A$ is defined by the conventional Laplace expansion. The adjugate (or classical adjoint) $\operatorname{adj} A$ also makes sense, as it is defined in terms of determinants of submatrices. The fundamental identity for adjugates holds as well, namely
$$\operatorname{adj}(A)\, A = A\, \operatorname{adj}(A) = \det(A)\, I. \tag{2}$$
The matrix $A$ is invertible if and only if $\det A \in U(R)$. In this case, the inverse is unique and is given by $A^{-1} = \det(A)^{-1}\operatorname{adj}(A)$.
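To make identity (2) concrete, the sketch below verifies $\operatorname{adj}(A)A = \det(A)I$ for an integer matrix using only ring operations (no division), so the computation is valid over $\mathbb{Z}$ itself; the particular matrix is an arbitrary example.

```python
from itertools import permutations

def det(A):
    """Leibniz-formula determinant; uses only + and *, so it stays in the ring."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign * prod
    return total

def adjugate(A):
    """Classical adjoint: transpose of the cofactor matrix."""
    n = len(A)
    def minor(i, j):
        return [[A[r][c] for c in range(n) if c != j] for r in range(n) if r != i]
    return [[(-1) ** (i + j) * det(minor(j, i)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# This matrix is invertible over the rationals but not over the integers
# (its determinant, 8, is not a unit of Z), yet adj(A) A = det(A) I still holds.
A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
d = det(A)
assert matmul(adjugate(A), A) == [[d, 0, 0], [0, d, 0], [0, 0, d]]
print(d)
```

Note that the identity holds entrywise in the ring regardless of whether $\det A$ is a unit; invertibility only enters when we try to divide by the determinant.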
For an introduction to the adjugate and associated results, we refer the reader to [20]. We now state a fundamental result.
Proposition 5.
Suppose $A \in R^{n \times n}$. Let $p$ be the characteristic polynomial of $A$, given by $p(t) = \det(tI - A)$. Suppose $p$ has the form
$$p(t) = t^n + c_{n-1}t^{n-1} + \dots + c_1 t + c_0.$$
Then the following equations hold in $R^{n \times n}$.
(i) $A^n + c_{n-1}A^{n-1} + \dots + c_1 A + c_0 I = 0$.
(ii) $(-1)^{n-1}\operatorname{adj}(A) = A^{n-1} + c_{n-1}A^{n-2} + \dots + c_2 A + c_1 I$.
Proof.
See Section VII.
IV Invariance for rings and fields
In this section, we take the notion of quadratic invariance discussed in the introduction and show how it fits into the framework of matrices over commutative rings or fields. These results generalize the algebraic invariance result for real matrices from Section II. Complete proofs for all the results of this section are given in Section VII.
Our first main invariance result holds over matrices whose entries belong to an arbitrary commutative ring with identity. Terminology is explained after the theorem statement.
Theorem 6 (QI for rings).
Suppose $G \in R^{n \times m}$ and $S \subseteq R^{m \times n}$ is an $R$-module.

(i) If $2 \in U(R)$, then quadratic invariance of $S$ under $G$ implies that $h(G, K) \in S$ for every $K \in S$ such that $I - GK$ is invertible.

(ii) If every residue field of $R$ has sufficiently many elements (a bound that depends on the matrix dimensions), then the converse implication holds as well.
The notion of module is analogous to that of a subspace. That is, $S$ contains all linear combinations of its elements, where the linear combinations have coefficients in $R$.
Theorem 6 contains several technical conditions which we will now explain. For the first result, the condition $2 \in U(R)$ means that $2 = 1 + 1$ must be a unit. We will now show that this condition is necessary by providing a counterexample. The condition is satisfied for example in the ring of rationals $\mathbb{Q}$, but not in the ring of integers $\mathbb{Z}$. Consider therefore the following integer example.
It is straightforward to check that $S$ is a module and is quadratically invariant with respect to $G$. Now consider the following particular element of $S$.
and so
Therefore the first part of Theorem 6 does not hold without the assumption $2 \in U(R)$. For more details on why this is so, refer to the proof of Theorem 6 in Section VII.
For the second result, a residue field is the field obtained by taking the quotient $R/I$ for some maximal ideal $I$. For example, in the ring of integers $\mathbb{Z}$, the maximal ideals are the sets $p\mathbb{Z}$ for some prime $p$. So for $p = 2$, the associated ideal is the set of even integers, and the residue field is $\mathbb{Z}/2\mathbb{Z}$, the integers modulo 2. This field has only two elements, and so the conditions of the theorem are not satisfied for matrices with at least two rows or two columns.
We now specialize the above invariance result to fields. Axiomatically, a field is simply a ring for which every nonzero element is a unit. The results of this section hold for an arbitrary field . We begin by stating the invariance result, and then we explain the differences between the field and ring cases. In particular, several concepts become simpler when the ring in question is a field.
Theorem 7 (QI for fields).
Suppose $G \in F^{n \times m}$ and $S \subseteq F^{m \times n}$ is a subspace over $F$. Further suppose that $F$ contains sufficiently many distinct elements (a bound depending on the matrix dimensions) and $\operatorname{char}(F) \neq 2$. Then $S$ is quadratically invariant under $G$ if and only if
$$h(G, S \cap M) = S \cap M,$$
where $M = \{K \in F^{m \times n} : I - GK \text{ is invertible}\}$.
The characteristic of a field $F$, denoted $\operatorname{char}(F)$, is the smallest integer $k \geq 1$ such that $1 + \dots + 1$ ($k$ times) equals $0$. When there is no such $k$, we say $\operatorname{char}(F) = 0$. Note that requiring $\operatorname{char}(F) \neq 2$ is the same as the condition $2 \in U(R)$ when the ring $R$ is specialized to the field $F$.
V Invariance for rationals
In this section, we specialize the ring and field invariance results of Section IV to rational functions in multiple variables. This leads to quadratic invariance results without any technical requirement on such as closure or the existence of limits. As we shall see in Section VI, this framework can accommodate systems with delays or spatiotemporal systems.
Let $\mathbb{R}(\lambda)$ be the set of rational functions in the indeterminate $\lambda$ with coefficients in $\mathbb{R}$. Because $\mathbb{R}(\lambda)$ is a field, we may apply Theorem 7. We obtain the following result.
Theorem 8 (QI for rationals).
Suppose $G \in \mathbb{R}(\lambda)^{n \times m}$, and $S \subseteq \mathbb{R}(\lambda)^{m \times n}$ is an $\mathbb{R}(\lambda)$-module. Then $S$ is quadratically invariant under $G$ if and only if $h(G, S \cap M) = S \cap M$, where $M = \{K : I - GK \text{ is invertible}\}$.
Theorem 8 is the simplest algebraic result for quadratic invariance of rational functions, and provides the technical basis for the remainder of this paper. However, it is not directly applicable to most control systems, because for physical models one typically has the constraint that the system is proper. This applies for example if $G$ is a transfer matrix that represents a causal time-invariant system, and we seek a controller $K$ that is also causal and time-invariant.
Let $\mathbb{R}_p(\lambda)$ be the set of proper rational functions in the indeterminate $\lambda$, and let $\mathbb{R}_{sp}(\lambda)$ denote the strictly proper rationals. Note that the proper rationals form a ring rather than a field, because the inverse of a proper rational function is generally not proper. The result is given below.
Theorem 9 (QI for proper rationals).
Suppose $G \in \mathbb{R}_{sp}(\lambda)^{n \times m}$, and $S \subseteq \mathbb{R}_p(\lambda)^{m \times n}$ is an $\mathbb{R}_p(\lambda)$-module. Then $S$ is quadratically invariant under $G$ if and only if $h(G, S) = S$.
We now extend the rational results of Theorems 8 and 9 to rational functions of multiple variables with mixed properness constraints. This will allow the approach to be applied to two additional classes of systems. The first is networks of linear systems interconnected by delays, and the second is multidimensional systems. These are discussed in Section VI. Specifically, define the sets of indeterminates $\lambda = (\lambda_1, \dots, \lambda_k)$ and $\mu = (\mu_1, \dots, \mu_\ell)$. We are interested in the ring of multivariate rationals where we have imposed a properness constraint on each of the $\lambda_i$. Note that this is not the same as the rational function itself being proper. For example,
$$\frac{\lambda_1 \lambda_2 \lambda_3}{1 + \lambda_1 + \lambda_2 + \lambda_3}$$
is proper in each of the variables $\lambda_1$, $\lambda_2$, $\lambda_3$ individually, but is not proper by the standard definition, since the total degree of the numerator is larger than that of the denominator. We will use the subscripts p or sp to apply individually to each of the $\lambda_i$ while ignoring the $\mu_j$. Therefore, $\mathbb{R}_p(\lambda, \mu)$ denotes the multivariate rationals that are proper in each $\lambda_i$ separately, with no constraint on the $\mu_j$. The multivariate rational invariance result is given below.
Theorem 10 (QI for multivariate rationals).
Suppose that $G \in \mathbb{R}_{sp}(\lambda, \mu)^{n \times m}$, and $S \subseteq \mathbb{R}_p(\lambda, \mu)^{m \times n}$ is an $\mathbb{R}_p(\lambda, \mu)$-module. Then $S$ is quadratically invariant under $G$ if and only if $h(G, S) = S$.
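The bookkeeping behind mixed properness constraints is simple enough to sketch in code. Below, multivariate rationals are represented by dictionaries mapping exponent tuples to coefficients; the particular numerator and denominator are hypothetical illustrations of a function that is proper in each variable separately but not proper overall.

```python
def degree_in(poly, var):
    """Degree in one variable; poly maps exponent tuples to coefficients."""
    return max((mono[var] for mono in poly), default=0)

def total_degree(poly):
    return max((sum(mono) for mono in poly), default=0)

def proper_in_each(num, den, nvars):
    """Proper with respect to each variable taken separately."""
    return all(degree_in(num, v) <= degree_in(den, v) for v in range(nvars))

# Hypothetical example: num = l1*l2*l3, den = 1 + l1 + l2 + l3.
num = {(1, 1, 1): 1}
den = {(0, 0, 0): 1, (1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1}

print(proper_in_each(num, den, 3))              # proper variable-by-variable
print(total_degree(num) <= total_degree(den))   # but not proper overall
```

The first check passes while the second fails, which is exactly the distinction the mixed-properness ring is designed to capture.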
VI Examples
In this section, we show some examples of problems that can be modeled using our algebraic framework. The purpose is to illustrate that the constraint that the controller set $S$ be a module occurs frequently and in a variety of different situations.
VI-A Sparse controllers
The simplest class of systems that we can analyze are systems with rational transfer matrices subject to controllers with sparsity constraints. If every nonzero entry of the controller is required to be a proper rational function in $\lambda$, it is clear that the set of admissible controllers is an $\mathbb{R}_p(\lambda)$-module.
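For sparsity constraints, quadratic invariance can be tested with a purely combinatorial computation on the binary support patterns: fill the support of $S$ with ones and check that the support of $KGK$ is contained in it. This reduction is standard in the QI literature; the patterns below are hypothetical examples.

```python
import numpy as np

def qi_sparsity(K_pattern, G_pattern):
    """Check QI for a sparsity-constrained controller set via binary patterns."""
    K = K_pattern.astype(bool).astype(int)
    G = G_pattern.astype(bool).astype(int)
    KGK = (K @ G @ K) > 0            # support of the product K G K
    return bool(np.all(KGK <= K))    # containment of supports in that of S

# Lower-triangular controller with lower-triangular plant: QI.
Kp = np.array([[1, 0], [1, 1]])
Gp = np.array([[1, 0], [1, 1]])
print(qi_sparsity(Kp, Gp))

# Diagonal controller with a fully coupled plant: not QI.
print(qi_sparsity(np.eye(2, dtype=int), np.ones((2, 2), dtype=int)))
```

The first call returns True and the second returns False, matching the intuition that triangular information structures are benign while fully coupled plants with diagonal controllers are not.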
VI-B Networks with delays
Consider a distributed system where the subsystems affect one another via delay constraints. We wish to design a decentralized controller subject to communication delay constraints between subcontrollers.
Consider the simple example of two plants, each with its own controller. We represent the plants and their associated controllers by transfer functions. Suppose the controllers communicate with each other using a bilateral network that taxes all transmissions with a delay $\tau$. The example is illustrated in Figure 2.
For simplicity, suppose each controller transmits all the measurements it receives to the other controller. Then the global plant and controller are characterized by the maps
Such an architecture is QI, and more detailed topologies of this type were studied in [24, 22]. In general, the algebraic framework allows us to treat scenarios where the plant and controller are rational functions in a dynamics variable and a delay variable. The constraint that properness be enforced on each variable independently naturally guarantees that negative delays are forbidden, thus enforcing causality.
Define the delay of a transfer function as the difference between the degrees of its denominator and numerator in the delay variable. For example, a pure $k$-step transmission has delay $k$. As a convention, the zero transfer function is assigned a delay of $+\infty$. We can impose delay constraints on the controller using a set of the form
where $d_{ij}$ is the minimum delay (in multiples of $\tau$) between subcontrollers $i$ and $j$. One can verify that this set is a module, and so we may apply Theorem 10 to deduce an invariance condition. Similar results, proved using very different methods, can be found in [22].
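The delay bookkeeping above is easy to mechanize. The sketch below computes delays from coefficient lists (highest power first) and checks a hypothetical 2x2 controller against a matrix of minimum-delay requirements; all transfer functions and delay bounds are invented for illustration.

```python
import numpy as np

def delay(num, den):
    """Delay of a SISO transfer function: deg(den) - deg(num) in the delay
    variable.  The zero function gets delay +infinity, as in the text."""
    num = np.trim_zeros(np.asarray(num, dtype=float), "f")
    if num.size == 0:
        return np.inf
    den = np.trim_zeros(np.asarray(den, dtype=float), "f")
    return (den.size - 1) - (num.size - 1)

def satisfies_delay_constraints(K, Dmin):
    """K[i][j] is a (num, den) coefficient pair; Dmin[i][j] the minimum delay."""
    return all(delay(*K[i][j]) >= Dmin[i][j]
               for i in range(len(K)) for j in range(len(K[0])))

# Hypothetical controller: diagonal entries act with no network hop (delay 0),
# off-diagonal entries must wait at least one hop (delay >= 1).
K = [[([1], [1, 1]),    ([1], [1, 2, 1])],
     [([1], [1, 0, 1]), ([1, 0], [1, 1])]]
Dmin = [[0, 1], [1, 0]]
print(satisfies_delay_constraints(K, Dmin))
```

Because the delay of a sum or product never drops below the corresponding combination of the entry delays, sets of this form are closed under the module operations, which is what Theorem 10 requires.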
VI-C Multidimensional systems
In many cases, we are interested in modeling and control of systems whose states, inputs, or outputs may be functions of a spatial independent variable, such as in control of continuum mechanical models like fluids or elastic solids. While the dynamics may be readily described by linear operator models, analysis of such models has typically been performed using the tools of semigroup theory [1]. Taking Fourier transforms spatially and Laplace transforms temporally, one arrives at an algebraic formulation of such systems [2, 21], where a linear system is described by a rational function of two or more independent frequency variables. The properness requirement that arises from causality is then only required with respect to the temporal frequency variable. This notion of multivariate transfer functions is used to represent spatiotemporal dynamics in a variety of important papers [21, 8, 3, 6]. Using this framework, we are able to circumvent the analytic requirements and still explicitly construct the set of closed-loop maps for such systems. This class of systems is not addressed by existing results on quadratic invariance, and in particular it is not covered by the results of [24]. Our approach allows one to simply describe the set of achievable closed-loop maps for such systems in both the centralized and decentralized cases. In general, our framework allows us to consider transfer functions in two sets of variables where we impose properness on the temporal variables but not on the spatial ones.
It is however worth noting that synthesis of the optimal controller for such multidimensional systems is a challenging problem, even in the centralized case. Therefore one cannot simply combine the results in this paper with existing exact synthesis formulae or tools from the centralized case to synthesize decentralized multidimensional controllers.
VII Proofs of main results
Proof of Proposition 5. Apply the identity (2) to $tI - A$, which we view as an element of $R[t]^{n \times n}$, and obtain
$$\operatorname{adj}(tI - A)\,(tI - A) = \det(tI - A)\, I = p(t)\, I. \tag{3}$$
The adjugate is defined in terms of minors, so each entry of $\operatorname{adj}(tI - A)$ is an element of $R[t]$ of degree at most $n - 1$. Because of the ring isomorphism $R[t]^{n \times n} \cong R^{n \times n}[t]$ (see [5, §III.C] for a proof), we may write $\operatorname{adj}(tI - A) = B_{n-1}t^{n-1} + \dots + B_1 t + B_0$ for some $B_k \in R^{n \times n}$. Substituting into (3) and viewing the result as a polynomial identity in $R^{n \times n}[t]$, we obtain
$$\big(B_{n-1}t^{n-1} + \dots + B_1 t + B_0\big)(tI - A) = \big(t^n + c_{n-1}t^{n-1} + \dots + c_0\big) I. \tag{4}$$
Expanding (4) and comparing coefficients, we obtain
$$B_{n-1} = I, \qquad B_{k-1} - B_k A = c_k I \ \text{ for } k = 1, \dots, n-1, \qquad -B_0 A = c_0 I.$$
Left-multiplying each equation by the appropriate power $A^k$ and summing all equations, the $B_k$ terms telescope and we obtain (i). Similarly, left-multiplying each equation by $A^{k-1}$ for $k = 1, \dots, n-1$ and summing, we obtain
$$B_0 = A^{n-1} + c_{n-1}A^{n-2} + \dots + c_1 I,$$
which, together with $B_0 = \operatorname{adj}(-A) = (-1)^{n-1}\operatorname{adj}(A)$, is the statement of (ii).
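Over the reals, statement (i) of Proposition 5 specializes to the familiar Cayley-Hamilton theorem. A quick numerical sanity check (using `numpy.poly` for the characteristic polynomial; the matrix is an arbitrary example) is sketched below.

```python
import numpy as np

# Cayley-Hamilton: a matrix satisfies its own characteristic polynomial.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
coeffs = np.poly(A)            # characteristic polynomial, highest power first
n = A.shape[0]
P = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(P, np.zeros_like(A))
print("p(A) = 0, as predicted")
```

The same telescoping computation carried out in the proof above is what makes this identity valid over any commutative ring, not just the reals.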
VII-A Invariance for rings
Throughout this subsection, $R$ is an arbitrary commutative ring with identity. We begin with a lemma that allows us to conclude that if $K \in S$ and $2 \in U(R)$, then we have $K(GK)^k \in S$ for all $k \geq 0$.
Lemma 11.
Suppose $G \in R^{n \times m}$ and $S \subseteq R^{m \times n}$ is an $R$-module. Further suppose that $2 \in U(R)$. If $S$ is quadratically invariant with respect to $G$, then for all $K \in S$ and all integers $k \geq 0$:
$$K(GK)^k \in S.$$
Proof.
The result follows by induction, using the identity:
$$K(GK)^{k+1} = 2^{-1}\Big[\big(K + K(GK)^k\big)\,G\,\big(K + K(GK)^k\big) - KGK - K(GK)^k\, G\, K(GK)^k\Big],$$
where $2^{-1}$ is the multiplicative inverse of $2$, which exists by assumption.
Note that Lemma 11 requires that $2 \in U(R)$. As shown in the first example of Section IV, this requirement is necessary. If we strengthen the notion of quadratic invariance to instead require that $K_1 G K_2 \in S$ for all $K_1, K_2 \in S$, then the conclusion of Lemma 11 is trivial and the assumption is no longer required.
We will require an important property of polynomials. Specifically, we will need conditions under which a polynomial is uniquely specified by the values it takes at finitely many points. For a real polynomial $p$, the result is well-known. If we have $p(x) = 0$ for all $x \in \mathbb{R}$, then all coefficients of $p$ are zero and thus $p$ is the zero polynomial. The same result does not hold when we replace $\mathbb{R}$ by a general field $F$ or by a commutative ring $R$. As a simple example, consider $p(z) = z^2 + z$, a polynomial with coefficients in the integers modulo 2. Then clearly $p$ evaluates to zero at both elements of the field, yet $p \neq 0$.
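The modulo-2 phenomenon can be spelled out in two lines: the nonzero polynomial $z^2 + z$ vanishes at every point of the field.

```python
# Over the integers modulo 2, the nonzero polynomial p(z) = z^2 + z vanishes
# at every point: a field with too few elements cannot distinguish polynomials
# by their values alone.
p = lambda z: (z * z + z) % 2
values = [p(z) for z in (0, 1)]
assert values == [0, 0]        # p vanishes identically on the field
print(values)
```

This is precisely the failure mode that the residue-field conditions in the results below are designed to exclude.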
Polynomials are closely related to Vandermonde matrices, which we now define. The Vandermonde matrix generated by $x_1, \dots, x_m \in R$ is defined as
$$V = \begin{bmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_m & \cdots & x_m^{n-1} \end{bmatrix}. \tag{5}$$
So if $p(z) = c_0 + c_1 z + \dots + c_{n-1}z^{n-1}$, then the Vandermonde matrix (5) relates the values of $p$ evaluated at $x_1, \dots, x_m$ to the polynomial coefficients via
$$\begin{bmatrix} p(x_1) \\ \vdots \\ p(x_m) \end{bmatrix} = V \begin{bmatrix} c_0 \\ \vdots \\ c_{n-1} \end{bmatrix}.$$
If there exists a left-invertible Vandermonde matrix, then the coefficients $c_k$ are uniquely determined by the values $p(x_1), \dots, p(x_m)$. When does such a matrix exist? If $R$ is a field, we may assume without loss of generality that the Vandermonde matrix is square, and the answer is given by the following proposition.
Proposition 12.
Suppose $F$ is a field. The following statements are equivalent.

(i) The field $F$ contains at least $n$ distinct elements.

(ii) There exists an invertible $n \times n$ Vandermonde matrix.

(iii) There exists a left-invertible $m \times n$ Vandermonde matrix for some $m \geq n$.
The proof follows from the well-known formula for the determinant of a square Vandermonde matrix:
$$\det V = \prod_{1 \leq i < j \leq n} (x_j - x_i).$$
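The determinant formula is easy to sanity-check numerically; the generating points below are arbitrary.

```python
import numpy as np
from itertools import combinations

def vandermonde(xs):
    """Square Vandermonde matrix with rows (1, x, x^2, ...)."""
    return np.array([[x ** k for k in range(len(xs))] for x in xs])

xs = [2, 3, 5, 7]
V = vandermonde(xs)

# det V = product over i < j of (x_j - x_i)
prod = 1
for i, j in combinations(range(len(xs)), 2):
    prod *= xs[j] - xs[i]

assert round(np.linalg.det(V)) == prod
print(prod)
```

Since the determinant is a product of pairwise differences, it is a unit exactly when every difference is a unit, which over a field simply means the generators are distinct.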
The result is more complicated if $R$ is a commutative ring with identity. In reference to Proposition 12, there are cases where (iii) holds but (ii) does not. As an example, consider a ring whose only units are $1$ and $-1$. There are no invertible $3 \times 3$ Vandermonde matrices over such a ring. To see why, note that if $V$ is generated by $x_1, x_2, x_3$ then $\det V = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2)$, which is a unit if and only if each factor is a unit. Adding the first and last factors together yields $x_3 - x_1 = \pm 1 \pm 1$, which is not a unit, a contradiction. However, in a suitably chosen ring of this type there exists a left-invertible Vandermonde matrix. For example,
The following lemma gives a complete characterization of left-invertibility for Vandermonde matrices in a general commutative ring with identity.
Lemma 13.
Suppose $R$ is a commutative ring with identity. The following statements are equivalent.

(i) Every residue field of $R$ has at least $n$ elements.

(ii) The ideal generated by the determinants of all $n \times n$ Vandermonde matrices is equal to $R$, the unit ideal.

(iii) For some $m$, there exists a left-invertible $m \times n$ Vandermonde matrix.
Proof.
We begin by showing (i) $\Leftrightarrow$ (ii). If (i) is false, there exists some maximal ideal $I$ such that the quotient ring $R/I$ contains fewer than $n$ elements. Therefore, given any elements $x_1, \dots, x_n \in R$, we must have $x_i - x_j \in I$ for some $i \neq j$. It follows that if $V$ is the Vandermonde matrix generated by $x_1, \dots, x_n$ then it satisfies
$$\det V = \prod_{i < j}(x_j - x_i) \in I.$$
Therefore $\det V \in I$ for every Vandermonde matrix. So the ideal generated by all Vandermonde determinants is contained in $I$, a proper ideal, and so cannot be the unit ideal. This shows that (ii) is false. Conversely, if (ii) is false then the ideal generated by all Vandermonde determinants must be proper, and so is contained in some maximal ideal $I$. In particular, $\det V \in I$ for every Vandermonde matrix $V$. Therefore, $\det \bar{V} = 0$ for all Vandermonde matrices $\bar{V}$ with entries in $R/I$. Because $R/I$ is a field, every nonzero element is a unit. So $\det \bar{V}$ is a unit if and only if $\bar{V}$ is generated by distinct elements. We conclude that $R/I$ must have fewer than $n$ distinct elements and so (i) is false.
The implication (iii) $\Rightarrow$ (ii) follows from the Cauchy-Binet formula, which gives the following expansion for $\det(LV)$, where $L$ and $V$ are not necessarily square but have a square product:
$$\det(LV) = \sum_{T} \det L_{:,T}\, \det V_{T,:}.$$
Here, the sum is taken over all subsets $T \subseteq \{1, \dots, m\}$ with $n$ elements. The corresponding columns and rows are extracted from $L$ and $V$ respectively, and the associated determinants are multiplied together. Since each $\det V_{T,:}$ is an $n \times n$ Vandermonde determinant, if $L$ is a left-inverse for $V$, then $1 = \det(LV) = \sum_T \det L_{:,T}\, \det V_{T,:}$, and (ii) follows. For the converse implication (ii) $\Rightarrow$ (iii), a left-inverse can be explicitly constructed; see [20] for a proof.
Lemma 14.
Suppose $R$ is a commutative ring with identity and $n \geq 1$. The following statements are equivalent.
(i) Every residue field of $R$ has at least $n$ elements.
(ii) For every $R$-module and every choice of elements $b_0, \dots, b_{n-1}$ in it, the module generated by $b_0, \dots, b_{n-1}$ equals the module generated by the evaluations $\{\, b_0 + b_1 x + \dots + b_{n-1}x^{n-1} : x \in R \,\}$.
Proof.
To prove (i) $\Rightarrow$ (ii), it suffices to show that each $b_k$ is a linear combination of terms of the form $b_0 + b_1 x + \dots + b_{n-1}x^{n-1}$. If (i) holds, then by Lemma 13 there is some left-invertible Vandermonde matrix $V$. Suppose $V$ is generated by $x_1, \dots, x_m$, and suppose $L$ is a left-inverse of $V$. Then it is straightforward to check that
$$b_k = \sum_{i=1}^{m} L_{ki}\left(b_0 + b_1 x_i + \dots + b_{n-1}x_i^{n-1}\right) \tag{6}$$
as required. For the converse, suppose (ii) holds. Then an equation of the form (6) must hold for some $L_{ki}$ and $x_i$. If the $b_j$ form a basis for a free module, then the coefficients corresponding to each $b_j$ in the difference of the two sides of (6) must vanish. Therefore,
$$\sum_{i=1}^{m} L_{ki}\, x_i^{\,j} = \delta_{kj}.$$
In other words, $LV = I$, where $V$ is the (left-invertible) Vandermonde matrix generated by $x_1, \dots, x_m$.
Lemma 14 may be specialized to polynomials, and thus yields a sufficient condition under which $p(x) = 0$ for all $x \in R$ implies that $p$ is the zero polynomial.
Corollary 15.
Suppose $R$ is a commutative ring with identity and every residue field of $R$ has at least $n$ elements. Suppose $p \in R[z]$ with $\deg p \leq n - 1$. If $p(x) = 0$ for all $x \in R$, then $p = 0$.
Proof.
Suppose $p(z) = c_0 + c_1 z + \dots + c_{n-1}z^{n-1}$. Applying Lemma 14 to the module generated by $c_0, \dots, c_{n-1}$, we conclude that it is generated by the evaluations $p(x)$ for $x \in R$. But $p(x) = 0$ for all $x \in R$, therefore the module is zero, and so $p = 0$.
We now have the tools we need to prove our main invariance result for rings.
Proof of Theorem 6. The result is trivial if $m = 1$ or $n = 1$. In this case, either $GK$ or $KG$ is scalar, so $KGK$ is a ring element times $K$, and every $S$ is QI with respect to every $G$. Further, using $K(I - GK)^{-1} = (I - KG)^{-1}K$, the map $h(G, K)$ is also a ring element times $K$, so the right-hand side always holds as well. We assume from now on that $m, n \geq 2$.
Suppose $S$ is QI with respect to $G$, and let $K \in S$ be such that $I - GK$ is invertible. Using Proposition 5, write:
$$K(I - GK)^{-1} = \beta_0 K + \beta_1 K(GK) + \dots + \beta_{n-1} K(GK)^{n-1},$$
where the $\beta_k \in R$ are obtained by expanding each term and collecting like powers of $GK$. If $2 \in U(R)$, then by Lemma 11 all terms in the sum are in $S$. Since $S$ is an $R$-module, it follows that $h(G, K) \in S$.
Conversely, suppose that $S$ is not QI with respect to $G$. Then there exists some $K \in S$ such that $KGK \notin S$. We proceed by way of contradiction: suppose that $h(G, \tilde{K}) \in S$ for all $\tilde{K} \in S$ such that $I - G\tilde{K}$ is invertible. In particular, this must hold for $\tilde{K} = tK$ whenever $t \in R$ satisfies $\det(I - tGK) \in U(R)$. Since $S$ is a module and $(I - tGK)^{-1} = \det(I - tGK)^{-1}\operatorname{adj}(I - tGK)$, we conclude that
$$tK \operatorname{adj}(I - tGK) \in S \quad \text{whenever } \det(I - tGK) \in U(R).$$
Because each entry of the adjugate matrix is the determinant of an $(n-1) \times (n-1)$ minor, it follows that $tK\operatorname{adj}(I - tGK)$ is a polynomial in $t$ of degree at most $n$ with coefficients in $R^{m \times n}$. Writing
$$tK\operatorname{adj}(I - tGK) = b_0 + b_1 t + \dots + b_n t^n,$$
every admissible evaluation of this polynomial lies in $S$. If every residue field of $R$ has sufficiently many elements, we conclude from Lemma 14 that $b_k \in S$ for every $k$. Recall the identity (2), and let $p(t) = \det(I - tGK)$. We then obtain
$$\operatorname{adj}(I - tGK)\,(I - tGK) = p(t)\, I,$$
where the coefficients of $p$ depend only on $G$ and $K$. Because of the ring isomorphism $R[t]^{n \times n} \cong R^{n \times n}[t]$, the equation above is a polynomial identity in $t$. So we may collect like powers of $t$ and compare coefficients. Doing so shows that the constant coefficient of $\operatorname{adj}(I - tGK)$ is $I$ and the next coefficient is $GK + p_1 I$, where $p_1 \in R$ is the linear coefficient of $p$. Therefore, $b_1 = K$ and $b_2 = KGK + p_1 K$. Based on our earlier conclusion that $b_2 \in S$, and since $K \in S$ by assumption and $S$ is an $R$-module, it follows that $KGK = b_2 - p_1 K \in S$, a contradiction. If we instead use the identity $K(I - tGK)^{-1} = (I - tKG)^{-1}K$ and repeat the argument with $\operatorname{adj}(I - tKG)$, the relevant polynomial has degree at most $m$, and we deduce that the result also holds under the corresponding residue-field condition on $m$, thus completing the proof.
The counterexample given in Section IV is an example for which $2 \notin U(R)$. In that case, Theorem 6 fails because Lemma 11 fails. One way to avoid the technical requirement that $2 \in U(R)$ is to strengthen the notion of quadratic invariance. For example, if we require that $K_1 G K_2 \in S$ for all $K_1, K_2 \in S$, then Lemma 11 follows without the requirement that $2 \in U(R)$. Thus, we would obtain a weaker version of the first part of Theorem 6. The proof of Theorem 7 uses similar machinery to that used in the proof of Theorem 6.
Proof of Theorem 7. As in Theorem 6, the cases m = 1 and n = 1 are trivial, so we assume m, n ≥ 2. If det(I - GK) ≠ 0, then det(I - GK) is a unit because F is a field, which means I - GK is invertible. Furthermore, F contains at least 2 max(m, n) + 1 distinct elements, so we may apply both parts of Theorem 6. Therefore, S being quadratically invariant with respect to G is equivalent to K(I - GK)^{-1} ∈ S for all K ∈ S such that I - GK is invertible. If we restrict the map h(K) = -K(I - GK)^{-1} to S, then h(K) ∈ S wherever h is defined. So we deduce that h(S) ⊆ S.
Since h(h(K)) = K wherever h is defined, it follows that S = h(h(S)) ⊆ h(S), and by the involutive property of h, we deduce that h(S) = S.
Conversely, suppose that h(S) = S, where h(K) = -K(I - GK)^{-1} wherever I - GK is invertible. Then it follows that h(K) ∈ S for all K ∈ S for which h(K) is defined. Suppose S is not quadratically invariant with respect to G, so there is some K ∈ S such that KGK ∉ S. Proceeding as in the proof of Theorem 6, let p(t) = det(I - tGK) and obtain

tK(I - tGK)^{-1} ∈ S whenever p(t) ≠ 0.
In other words, the inclusion holds whenever t is not a root of p. Now p is a polynomial of degree at most n, so it has at most n roots. Therefore, the constraint that p(t) ≠ 0 excludes at most n elements of F, and F must contain at least 2n + 1 elements so that there are at least n + 1 elements remaining after all roots of p have been excluded. As in the proof of Theorem 6, we may apply a similar argument using det(I - tKG), which requires that F contain at least 2m + 1 elements, and the proof is complete.
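The counting argument can be illustrated with concrete (hypothetical) numbers: over the 7-element field GF(7), a nonzero cubic has at most 3 roots, so at least 7 - 3 = 4 field elements survive the exclusion.

```python
# Root counting over GF(7) for the hypothetical cubic x^3 + 2x + 1
# (illustration only; any nonzero polynomial of degree 3 behaves the same).
p = 7
roots = [a for a in range(p) if (a**3 + 2*a + 1) % p == 0]
non_roots = [a for a in range(p) if (a**3 + 2*a + 1) % p != 0]

# A nonzero degree-3 polynomial over a field has at most 3 roots, ...
assert len(roots) <= 3
# ... so a field with 2*3 + 1 = 7 elements retains at least 3 + 1 = 4 points.
assert len(non_roots) >= p - 3
```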
VII-B Invariance for rationals
The rational functions form a field, so our first invariance result follows immediately from our invariance result for fields.
Proof of Theorem 8. Note that the rational functions form a field F, so S is a subspace over F. Furthermore, F has infinitely many elements, so the cardinality requirement is satisfied and the result follows from Theorem 7.
The rationals become a ring R_p when we impose a properness constraint. Furthermore, the strictly proper rationals form an ideal R_sp ⊆ R_p. We can also check that this ideal is maximal, because the proper rationals that are not strictly proper are precisely the units of R_p. Indeed, R_sp is the unique maximal ideal of R_p, so R_p is a local ring.
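These structural facts are easy to test symbolically. The following sketch (my own, with hypothetical example functions) checks that a proper rational that is not strictly proper has a proper inverse, i.e. is a unit, while a strictly proper one does not:

```python
import sympy as sp

s = sp.symbols('s')

def is_proper(f):
    """Proper: numerator degree <= denominator degree (after cancellation)."""
    num, den = sp.fraction(sp.cancel(f))
    return sp.degree(num, s) <= sp.degree(den, s)

def is_strictly_proper(f):
    num, den = sp.fraction(sp.cancel(f))
    return sp.degree(num, s) < sp.degree(den, s)

f = (s + 1) / (s + 2)   # proper, not strictly proper
g = 1 / (s + 2)         # strictly proper

assert is_proper(f) and not is_strictly_proper(f)
assert is_proper(1 / f)        # f is a unit: its inverse is again proper
assert is_strictly_proper(g)
assert not is_proper(1 / g)    # g is not a unit: 1/g = s + 2 is improper
```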
We may use these structural facts to prove the following lemma, which gives a condition that guarantees the invertibility of I - GK.
Lemma 16.
Suppose G ∈ R_p^{n x m} and K ∈ R_sp^{m x n}. Then I - GK is invertible, and (I - GK)^{-1} ∈ R_p^{n x n}.
Proof.
Since R_sp is maximal and GK ∈ R_sp^{n x n}, we have

det(I - GK) ≡ det(I) = 1 (mod R_sp).
It follows that det(I - GK) ∉ R_sp, and det(I - GK) is therefore a unit. So I - GK is invertible, and (I - GK)^{-1} = det(I - GK)^{-1} adj(I - GK) has entries in R_p.
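A hypothetical numerical sketch of the lemma (the example matrices are mine, not from the paper): with a strictly proper K and a proper G, det(I - GK) tends to 1 as s -> oo, so it is a unit among the proper rationals and the inverse stays proper.

```python
import sympy as sp

s = sp.symbols('s')

def is_proper(f):
    num, den = sp.fraction(sp.cancel(f))
    return sp.degree(num, s) <= sp.degree(den, s)

# Hypothetical example: strictly proper controller K, proper plant G.
K = sp.Matrix([[1/(s + 1), 0], [1/(s + 2), 2/(s + 1)]])
G = sp.Matrix([[s/(s + 3), 1], [0, (s + 1)/(s + 4)]])

M = sp.eye(2) - G * K
d = sp.cancel(M.det())

# det(I - GK) = 1 (mod strictly proper): it tends to 1 at s = infinity,
# so it is a unit among the proper rationals.
assert sp.limit(d, s, sp.oo) == 1

# Hence I - GK is invertible over the proper rationals: every entry of
# the inverse is again proper.
Minv = M.inv().applyfunc(sp.cancel)
assert all(is_proper(e) for e in Minv)
```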
The motivation for choosing a strictly proper K and a proper G is inspired by classical feedback control. If we think of the proper rationals as transfer functions, a strictly proper K means that the controller has no direct feedthrough term, a common assumption that ensures well-posedness of the closed-loop interconnection. We now present the proof of the invariance result for proper rationals.
Proof of Theorem 9. Note that R_p is a ring and S ⊆ R_sp^{m x n} is an R_p-module. Because R_p has a unique maximal ideal R_sp, the only residue field is R_p/R_sp, which is isomorphic to the reals and thus has infinitely many elements. Applying Theorem 6, we conclude that S is QI with respect to G if and only if K(I - GK)^{-1} ∈ S for all K ∈ S such that I - GK is invertible. However, I - GK is always invertible by Lemma 16, so this holds if and only if h(S) ⊆ S. By the involutive property of h, this is equivalent to h(S) = S.
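The involutive property invoked here can be verified symbolically. In the sketch below, the sign convention h(K) = -K(I - GK)^{-1} and the example matrices are my assumptions:

```python
import sympy as sp

s = sp.symbols('s')
I = sp.eye(2)

def h(K, G):
    # Closed-loop map; the sign convention is an assumption for this sketch.
    return -K * (I - G * K).inv()

# Hypothetical strictly proper controller and proper plant.
K = sp.Matrix([[1/(s + 1), 0], [1/(s + 2), 1/(s + 3)]])
G = sp.Matrix([[s/(s + 1), 0], [1, (s + 2)/(s + 5)]])

HK = h(K, G).applyfunc(sp.cancel)
HHK = h(HK, G).applyfunc(sp.cancel)

# h is an involution: h(h(K)) = K.
assert (HHK - K).applyfunc(sp.cancel) == sp.zeros(2, 2)
```

The same computation goes through for any K, G for which the inverses exist, since I - G h(K) = (I - GK)^{-1} is automatically invertible.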
Proof of Theorem 10. First, note that a rational function of s_1, ..., s_d may be regarded as a rational function of s_d whose coefficients are rational in s_1, ..., s_{d-1}, so we may think of the multivariate transfer functions as proper transfer functions in s_d with coefficients that are rational functions of s_1, ..., s_{d-1}. As in Theorem 9, we still have a commutative ring acting on the module S, but there is no longer a unique maximal ideal. Indeed, the maximal ideals are the sets

m = { f : the iterated limit of f is zero as s_1, ..., s_d tend to infinity in some fixed order },

one such ideal for each ordering of the variables, and it is easy to check that the corresponding residue fields each have infinitely many elements. Furthermore, det(I - GK) is congruent to 1 modulo each such maximal ideal, as in the proof of Lemma 16. So det(I - GK) does not belong to any maximal ideal and must therefore be a unit. The rest follows as in the proof of Theorem 9.
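A two-variable sanity check (hypothetical example matrices, mirroring Lemma 16): sending the variables to infinity in either order drives det(I - GK) to 1, so it escapes every such maximal ideal.

```python
import sympy as sp

s, z = sp.symbols('s z')

# Hypothetical 2-D example: K strictly proper in each variable, G proper.
K = sp.Matrix([[1/((s + 1)*(z + 1)), 0], [0, 1/(s + z + 2)]])
G = sp.Matrix([[s/(s + 2), 1], [0, z/(z + 3)]])

M = sp.eye(2) - G * K
d = sp.cancel(M.det())

# det(I - GK) tends to 1 when the variables are sent to infinity in either
# order, so it avoids every maximal ideal described above and is a unit.
assert sp.limit(sp.limit(d, s, sp.oo), z, sp.oo) == 1
assert sp.limit(sp.limit(d, z, sp.oo), s, sp.oo) == 1
```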
VIII Summary
In this paper, we give an algebraic treatment of quadratic invariance, the well-known condition under which decentralized control synthesis may be reduced to a convex optimization problem. Our results hold for commutative rings with identity, and in particular specialize to the natural system-theoretic case of proper rational functions in one variable, as well as multidimensional rational functions. This formulation has the advantage of avoiding some of the technicalities in analytic treatments. In particular, notions of topology, limits, or norms are not required. Thus, quadratic invariance may be viewed as a purely algebraic concept.
Acknowledgments
References
 [1] B. Bamieh, F. Paganini, and M. Dahleh. Distributed control of spatially invariant systems. IEEE Transactions on Automatic Control, 47(7):1091–1107, 2002.
 [2] C. Beck. On formal power series representations for uncertain systems. IEEE Transactions on Automatic Control, 46(2):314–319, 2001.
 [3] C. L. Beck, J. Doyle, and K. Glover. Model reduction of multidimensional and uncertain systems. IEEE Transactions on Automatic Control, 41(10):1466–1477, 1996.
 [4] V. D. Blondel and J. N. Tsitsiklis. A survey of computational complexity results in systems and control. Automatica, 36(9):1249–1274, 2000.
 [5] M. L. Curtis. Abstract linear algebra. Springer, 1990.
 [6] R. D’Andrea and G. E. Dullerud. Distributed control design for spatially interconnected systems. IEEE Transactions on Automatic Control, 48(9):1478–1495, 2003.
 [7] M. Fliess, M. Lamnabhi, and F. Lamnabhi-Lagarrigue. An algebraic approach to nonlinear functional expansions. IEEE Transactions on Circuits and Systems, 30(8):554–570, 1983.
 [8] E. Fornasini and G. Marchesini. Doubly-indexed dynamical systems: State-space models and structural properties. Theory of Computing Systems, 12(1):59–72, 1978.
 [9] Y.C. Ho and K.C. Chu. Team decision theory and information structures in optimal control problems—Part I. IEEE Transactions on Automatic Control, 17(1):15–22, 1972.
 [10] R. E. Kalman. Algebraic structure of linear dynamical systems, I. The module of Σ. Proceedings of the National Academy of Sciences of the United States of America, 54(6):1503–1508, 1965.
 [11] D. Klarner. Algebraic theory for difference and differential equations. The American Mathematical Monthly, 76(4):366–373, 1969.
 [12] A. Lamperski and L. Lessard. Optimal state-feedback control under sparsity and delay constraints. In 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages 204–209, 2012.
 [13] S. Lang. Undergraduate algebra. Springer, 2005.
 [14] C. Langbort, R. S. Chandra, and R. D’Andrea. Distributed control design for systems interconnected over an arbitrary graph. IEEE Transactions on Automatic Control, 49(9):1502–1519, 2004.
 [15] L. Lessard. Tractability of complex control systems. PhD thesis, Stanford University, 2011.
 [16] L. Lessard and S. Lall. An algebraic framework for quadratic invariance. In IEEE Conference on Decision and Control, pages 2698–2703, 2010.
 [17] L. Lessard and S. Lall. A state-space solution to the two-player decentralized optimal control problem. In Allerton Conference on Communication, Control, and Computing, pages 1559–1564, 2011.
 [18] X. Qi, M. V. Salapaka, P. G. Voulgaris, and M. Khammash. Structured optimal and robust control with multiple criteria: a convex solution. IEEE Transactions on Automatic Control, 49(10):1623–1640, 2004.
 [19] F. Riesz and B. Sz.-Nagy. Functional analysis. Dover Publications, 1990.
 [20] D. W. Robinson. The classical adjoint. Linear Algebra and its Applications, 411:254–276, 2005.
 [21] R. Roesser. A discrete state-space model for linear image processing. IEEE Transactions on Automatic Control, 20(1):1–10, 1975.
 [22] M. Rotkowitz, R. Cogill, and S. Lall. Convexity of optimal control over networks with delays and arbitrary topology. International Journal of Systems, Control and Communication, 2(1):30–54, 2010.
 [23] M. Rotkowitz and S. Lall. Decentralized control information structures preserved under feedback. In IEEE Conference on Decision and Control, pages 569–575, 2002.
 [24] M. Rotkowitz and S. Lall. A characterization of convex problems in decentralized control. IEEE Transactions on Automatic Control, 51(2):274–286, 2006.

 [25] Ş. Sabău and N. C. Martins. Youla-like parametrizations subject to QI subspace constraints. IEEE Transactions on Automatic Control, 59(6):1411–1422, 2014.
 [26] P. Shah and P. A. Parrilo. H2-optimal decentralized control over posets: A state-space solution for state-feedback. IEEE Transactions on Automatic Control, 58(12):3084–3096, 2013.
 [27] H. Shin and S. Lall. Decentralized control via Gröbner bases and variable elimination. IEEE Transactions on Automatic Control, 57(4):1030–1035, 2012.
 [28] M. C. Smith. On stabilization and the existence of coprime factorizations. IEEE Transactions on Automatic Control, 34(9):1005–1007, 1989.
 [29] M. Vidyasagar. Control system synthesis: a factorization approach. MIT Press, 1985.
 [30] N. Wiener. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. MIT Press, 1964.
 [31] H. S. Witsenhausen. A counterexample in stochastic optimum control. SIAM Journal on Control, 6(1):131–147, 1968.
 [32] D. Youla, H. Jabr, and J. Bongiorno Jr. Modern Wiener-Hopf design of optimal controllers–Part II: The multivariable case. IEEE Transactions on Automatic Control, 21(3):319–338, 1976.