Solving Large-Scale Robust Stability Problems by Exploiting the Parallel Structure of Polya’s Theorem
Abstract
In this paper, we propose a distributed computing approach to solving large-scale robust stability problems on the simplex. Our approach is to formulate the robust stability problem as an optimization problem with polynomial variables and polynomial inequality constraints. We use Polya’s theorem to convert the polynomial optimization problem to a set of highly structured Linear Matrix Inequalities (LMIs). We then use a slight modification of a common interior-point primal-dual algorithm to solve the structured LMI constraints. This yields a set of extremely large yet structured computations. We then map the structure of the computations to a decentralized computing environment consisting of independent processing nodes with a structured adjacency matrix. The result is an algorithm which can solve the robust stability problem with the same per-core complexity as the deterministic stability problem, with a conservatism which is only a function of the number of processors available. Numerical tests on cluster computers and supercomputers demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors and analyze systems with a 100+ dimensional state-space. The proposed algorithms can be extended to perform stability analysis of nonlinear systems and robust controller synthesis.
I Introduction
This paper addresses the problem of stability of large-scale systems with several unknown parameters. Control system theory, when applied in practical situations, often involves the use of large state-space models, typically due to the inherent complexity of the system, the interconnection of subsystems, or the reduction of an infinite-dimensional or PDE model to a finite-dimensional approximation. One approach to dealing with such large-scale models has been to use model reduction techniques such as balanced truncation [1]. However, model reduction techniques are not necessarily robust and can result in arbitrarily large errors. In addition to a large state-space, practical problems often contain uncertainty in the model due to modeling errors, linearization, or fluctuation in the operating conditions. The problem of stability and control of systems with uncertainty has been widely studied. See, e.g., the texts [2, 3, 4, 5]. However, a limitation of existing computational methods for analysis and control of systems with uncertainty is their high complexity. This is a consequence of the fact that the problem of robust stability of systems with parametric uncertainty is known to be NP-hard [6, 7]. The result is that for systems with parametric uncertainty and with hundreds of states, existing algorithms will fail, with the primary point of failure usually being lack of unallocated memory.
In this paper, we seek to distribute the computation laterally over an array of processors within the context of existing computational resources. Specifically, we seek to utilize cluster-computing, supercomputing and Graphics Processing Unit (GPU) computing architectures. When designing algorithms to run in a parallel computing environment, one must synchronize computational tasks among the processors while also minimizing communication overhead between them. This can be difficult, as each architecture has a specific communication graph. We account for communication by explicitly modeling the required communication graph between processors. This communication graph is then mapped to the processor architecture using the Message-Passing Interface (MPI) [8]. While there are many algorithms for robust stability analysis and control of linear systems, ours is the first which explicitly accounts for the processing architecture in the emerging multi-core computing environment.
Our approach to robust stability is based on the well-established use of parameter-dependent Quadratic-In-The-State (QITS) Lyapunov functions. The use of parameter-dependent QITS Lyapunov functions eliminates the conservatism associated with, e.g., quadratic stability [9], at the cost of requiring some restriction on the rate of parameter variation. Specifically, our QITS Lyapunov variables are polynomials in the vector of uncertain parameters. This is a generalization of the use of QITS Lyapunov functions with affine parameter dependence as in [10] and expanded in, e.g., [11, 12, 13, 14]. The use of polynomial QITS Lyapunov variables can be motivated by [15], wherein it is shown that any feasible parameter-dependent LMI with parameters inside a compact set has a polynomial solution, or by [16], wherein it is shown that local stability of a nonlinear vector field implies the existence of a polynomial Lyapunov function.
There are several results which use polynomial QITS Lyapunov functions to prove robust stability. In most cases, the stability problem is reduced to the general problem of optimization of polynomial variables subject to LMI constraints, an NP-hard problem [17]. To avoid NP-hardness, the polynomial optimization problem is usually solved in an asymptotic manner by posing a sequence of sufficient conditions of increasing accuracy and decreasing conservatism. For example, building on the result in [15], [18] provides a sequence of increasingly precise LMI conditions for robust stability analysis of linear systems with affine dependence on uncertain parameters on the complex unit ball. Necessary and sufficient stability conditions for linear systems with one uncertain parameter are derived in [19], providing an explicit bound on the degree of the polynomial-type Lyapunov function. The result is extended to multi-parameter-dependent linear systems in [20]. Another important approach to optimization of polynomials is the Sum of Squares (SOS) methodology, which replaces the polynomial positivity constraint with the constraint that the polynomial admits a representation as a sum of squares of polynomials [21, 22, 23, 24]. A version of this theorem for polynomials with matrix coefficients can be found in [23]. While we have worked extensively with the SOS methodology, we have not, as of yet, been able to adapt algorithms for solving the resulting LMI conditions to a parallel-computing environment. Finally, there have been several results in recent years on the use of Polya’s theorem to solve polynomial optimization problems [25] on the simplex. An extension of Polya’s theorem for uncertain parameters on the multi-simplex or hypercube can be found in [26]. The approach presented in this paper is an extension of the use of Polya’s theorem for solving polynomial optimization problems in a parallel computing environment.
The goal of this project is to create algorithms which explicitly map computation, communication and storage to existing parallel processing architectures. This goal is motivated by the failure of existing general-purpose Semi-Definite Programming (SDP) solvers to efficiently utilize platforms for large-scale computation. Specifically, it is well-established that linear programming and semi-definite programming both belong to the complexity class P-Complete, also known as the class of inherently sequential problems. Although there have been several attempts to map certain SDP solvers to a parallel computing environment [27, 28], certain critical steps cannot be distributed. The result is that as the number of processors increases, certain bottleneck computations dominate, leading to a saturation in the computational speed of these solvers (Amdahl’s law [29]). We avoid these bottleneck computations and communications by exploiting the particular structure of the LMI conditions associated with Polya’s theorem. Note that, in principle, a perfectly designed general-purpose SDP algorithm could identify the structure of the SDP, as we have, and map the communication, computation and memory constraints to the parallel architecture. Indeed, there has been a great deal of research on creating programming languages which attempt to do just this [30, 31]. However, at present such languages are mostly theoretical and have certainly not been incorporated into existing SDP solvers.
In addition to parallel SDP solvers, there have been some efforts to exploit structure in certain polynomial optimization algorithms to reduce the size and complexity of the resulting LMIs. For example, in [32] symmetry was used to reduce the size of the SDP variables. Specific sparsity structure was used in [33, 34, 35] to reduce the complexity of the linear algebra calculations. Generalized approaches to the use of sparsity in SDP algorithms can be found in [34]. Groebner basis techniques [36, 37] have been used by [33] to simplify the formulation of the SDPs associated with SOS decomposition problems.
The paper is organized around two independent problems: setting up the sequence of structured SDPs associated with Polya’s theorem and solving them. Note that the problem of decentralizing the setup algorithm is significant in that, for large-scale systems, the instantiation of the problem may be beyond the memory and computational capacity of a single processing node. For the setup problem, the algorithm that we propose has no centralized memory or computational requirements whatsoever. Furthermore, if a sufficient number of processors are available, the number of messages does not change with the size of the state-space or the number of Polya’s iterations. In addition, the ideal communication architecture for the setup algorithm does not correspond to the communication structure of GPU computing or supercomputing. In the second problem, we propose a variant of a standard SDP primal-dual algorithm and map the computational, memory and communication requirements to a parallel computing environment. Unlike the setup algorithm, the primal-dual algorithm does have a small centralized component corresponding to the update of the set of dual variables. However, we have structured the algorithm so that the size of this dual computation is solely a function of the degree of the polynomial QITS Lyapunov function and does not depend on the number of Polya’s iterations, meaning that the sequence of algorithms has fixed centralized computational and communication complexity. In addition, there is no communication between processors, which means that the algorithm is well suited to most parallel computing architectures. A graph representation of the communication architecture of both the setup and SDP algorithms is provided in the relevant sections.
Combining the setup and SDP components and testing the result of both in cluster computing environments, we demonstrate the capability of robust analysis and control of systems with 100+ states and several uncertain parameters. Specifically, we ran a series of numerical experiments using a local Linux cluster and the Blue Gene supercomputer (with a 200-processor allocation). First, we applied the algorithm to a current problem in robust stability analysis of magnetic confinement fusion using a discretized PDE model. Next, we examine the accuracy of the algorithm as Polya’s iterations progress and compare this accuracy with the SOS approach. We show that, unlike the general-purpose parallel SDP solver SDPARA [28], the speed-up (the increase in processing speed per additional processor) of our algorithm shows no evidence of saturation. Finally, we calculate the envelope of the algorithm on the Linux cluster in terms of the maximum state-space dimension, number of processors and Polya’s iterations.
NOTATION
We represent $l$-variate monomials as $\alpha^\gamma = \prod_{i=1}^{l} \alpha_i^{\gamma_i}$, where $\alpha \in \mathbb{R}^l$ is the vector of variables, $\gamma \in \mathbb{N}^l$ is the vector of exponents and $\sum_{i=1}^{l} \gamma_i = d$ is the degree of the monomial. We define $W_d$ as the totally ordered set of the exponents of $l$-variate monomials of degree $d$, where the ordering is lexicographic. In lexicographical ordering, $\gamma$ precedes $\eta$ if the left-most non-zero entry of $\gamma - \eta$ is positive. The lexicographical index $\langle\gamma\rangle$ of every $\gamma \in W_d$ can be calculated using the map defined in [38]
(1)
where, as in [39],
$f(l, d) := \operatorname{card}(W_d) = \dfrac{(d + l - 1)!}{d!\,(l - 1)!}$  (2)
is the cardinality of $W_d$, i.e., the number of $l$-variate monomials of degree $d$. For convenience, we also define the index of a monomial $\alpha^\gamma$ to be $\langle\gamma\rangle$. We represent $l$-variate homogeneous polynomials of degree $d_p$ as
$P(\alpha) = \sum_{\gamma \in W_{d_p}} P_{\langle\gamma\rangle}\,\alpha^\gamma,$  (3)
where $P_{\langle\gamma\rangle}$ is the matrix coefficient of the monomial $\alpha^\gamma$. We denote the element corresponding to the $i$th row and $j$th column of a matrix $A$ as $[A]_{i,j}$. The subspace of symmetric matrices in $\mathbb{R}^{n \times n}$ is denoted by $\mathbb{S}^n$. We define a basis for $\mathbb{S}^n$ as
(4) 
where
(5) 
Note that this choice of basis is arbitrary; any other basis could be used, although any change in basis would require modifications to the formulae defined in this paper. The canonical basis for $\mathbb{R}^n$ is denoted by $e_i$ for $i = 1, \dots, n$, where $[e_i]_j = 1$ if $i = j$ and $[e_i]_j = 0$ otherwise. The vector with all entries equal to one is denoted by $\mathbf{1}$. The trace of a matrix $A$ is denoted by $\operatorname{tr}(A)$. The block-diagonal matrix with diagonal blocks $X_1, \dots, X_m$ is denoted $\operatorname{diag}(X_1, \dots, X_m)$. The identity and zero matrices are denoted by $I$ and $\mathbf{0}$, respectively.
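As a concrete illustration of the ordering just defined, the sketch below enumerates $W_d$ in lexicographic order and recovers the index $\langle\gamma\rangle$ by direct search. Since the closed-form index map (1) is not reproduced in this copy, `lex_index` is a brute-force stand-in for it; `card_W` implements the count in (2) via the equivalent binomial form.

```python
from math import comb

def monomials_lex(l, d):
    """All exponent vectors of l-variate monomials of degree d, in
    lexicographic order: gamma precedes eta when the left-most non-zero
    entry of gamma - eta is positive."""
    def rec(prefix, remaining, slots):
        if slots == 1:
            yield prefix + (remaining,)
            return
        for k in range(remaining, -1, -1):  # largest leading exponent first
            yield from rec(prefix + (k,), remaining - k, slots - 1)
    return list(rec((), d, l))

def card_W(l, d):
    """Cardinality of W_d, i.e. (d + l - 1)! / (d! (l - 1)!)."""
    return comb(d + l - 1, l - 1)

def lex_index(gamma):
    """1-based lexicographical index <gamma>, found by enumeration;
    a stand-in for the closed-form map (1)."""
    l, d = len(gamma), sum(gamma)
    return monomials_lex(l, d).index(tuple(gamma)) + 1
```

For example, `monomials_lex(2, 2)` yields $(2,0), (1,1), (0,2)$, and the length of the enumeration always agrees with `card_W`.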
II Preliminaries
Consider the linear system
$\dot{x}(t) = A(\alpha)\,x(t),$  (6)
where $A(\alpha) \in \mathbb{R}^{n \times n}$ and $\alpha$ is a vector of uncertain parameters. In this paper, we consider the case where $A(\alpha)$ is a homogeneous polynomial in $\alpha$ and where $\alpha \in \Delta_l \subset \mathbb{R}^l$, where $\Delta_l$ is the unit simplex, i.e.,
$\Delta_l = \Big\{ \alpha \in \mathbb{R}^l : \sum_{i=1}^{l} \alpha_i = 1,\ \alpha_i \geq 0 \Big\}.$  (7)
If $A(\alpha)$ is not homogeneous, we can obtain an equivalent homogeneous representation in the following manner. Suppose $A(\alpha)$ is a non-homogeneous polynomial with $\alpha \in \Delta_l$, is of degree $d_a$ and has $N_a$ monomials with non-zero coefficients. Define $D = \{g_1, g_2, \dots, g_{N_a}\}$, where $g_i$ is the degree of the $i$th monomial of $A(\alpha)$ according to lexicographical ordering. Now define the polynomial $B(\alpha)$ as per the following.

Let $B(\alpha) = A(\alpha)$.

For $i = 1, \dots, N_a$, multiply the $i$th monomial of $B(\alpha)$, according to lexicographical ordering, by $\big(\sum_{j=1}^{l} \alpha_j\big)^{d_a - g_i}$.

Then, since $\sum_{j=1}^{l} \alpha_j = 1$, $B(\alpha) = A(\alpha)$ for all $\alpha \in \Delta_l$ and hence all properties of $\dot{x} = A(\alpha)x$ are retained by the homogeneous system $\dot{x} = B(\alpha)x$.
1) Example: Construction of the homogeneous system $\dot{x} = B(\alpha)x$.
Consider a non-homogeneous polynomial $A(\alpha)$ of degree $d_a$, where $\alpha \in \Delta_l$. Using the above procedure, the homogeneous polynomial $B(\alpha)$ can be constructed as
(8) 
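The specific polynomial of this example is not reproduced in this copy; as a hypothetical stand-in, the sketch below homogenizes $2\alpha_1 + 1$ (degree 1, two monomials, $l = 2$), representing a polynomial as a map from exponent tuples to coefficients. Matrix-valued coefficients would be handled identically, since only scalar multiples of each coefficient are accumulated.

```python
from math import factorial

def compositions(d, l):
    """All weak compositions of d into l non-negative parts."""
    if l == 1:
        yield (d,)
        return
    for k in range(d + 1):
        for rest in compositions(d - k, l - 1):
            yield (k,) + rest

def multinomial(d, parts):
    """Multinomial coefficient d! / (parts_1! ... parts_l!)."""
    c = factorial(d)
    for p in parts:
        c //= factorial(p)
    return c

def homogenize(poly, l):
    """Homogenize `poly` (exponent tuple -> coefficient) on the unit
    simplex: each monomial of degree g_i is multiplied by
    (alpha_1 + ... + alpha_l)^(d_a - g_i), which equals 1 on the simplex,
    so the polynomial is unchanged there."""
    d_a = max(sum(g) for g in poly)
    out = {}
    for gamma, coeff in poly.items():
        deficit = d_a - sum(gamma)
        for extra in compositions(deficit, l):
            eta = tuple(g + e for g, e in zip(gamma, extra))
            out[eta] = out.get(eta, 0) + coeff * multinomial(deficit, extra)
    return out
```

For instance, `homogenize({(1, 0): 2, (0, 0): 1}, 2)` returns the degree-1 homogeneous polynomial $3\alpha_1 + \alpha_2$, which agrees with $2\alpha_1 + 1$ wherever $\alpha_1 + \alpha_2 = 1$.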
The following is a stability condition [25] for System (6).

Theorem II. System (6) is stable if and only if there exists a polynomial matrix $P(\alpha)$ such that $P(\alpha) \succ 0$ and
$A^T(\alpha)P(\alpha) + P(\alpha)A(\alpha) \prec 0$  (9)
for all $\alpha \in \Delta_l$. A similar condition also holds for discrete-time linear systems. The conditions associated with Theorem II are infinite-dimensional LMIs, meaning they must hold at an infinite number of points. Such problems are known to be NP-hard [17]. In this paper we derive a sequence of polynomial-time algorithms whose outputs converge to the solution of the infinite-dimensional LMI. Key to this result is Polya’s Theorem [40]. A variation of this theorem for matrices is given as follows.
(Polya’s Theorem) The homogeneous polynomial $F(\alpha) \succ 0$ for all $\alpha \in \Delta_l$ if and only if for all sufficiently large $e$,
$\Big(\sum_{i=1}^{l} \alpha_i\Big)^{e} F(\alpha)$  (10)
has all positive definite coefficients.
Upper bounds for Polya’s exponent $e$ can be found as in [41]. However, these bounds are based on the properties of $F$ and are difficult to determine a priori. In this paper, we show that applying Polya’s Theorem to the robust stability problem, i.e., to the inequalities in Theorem II, yields a semi-definite programming condition with an efficiently distributable structure. This is discussed in the following section.
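A minimal scalar illustration of Polya’s Theorem: $F(\alpha) = 2\alpha_1^2 - \alpha_1\alpha_2 + 2\alpha_2^2$ is positive on $\Delta_2$ but has a negative coefficient; a single multiplication by $(\alpha_1 + \alpha_2)$ already yields all-positive coefficients. The polynomial and the exponent $e = 1$ are illustrative choices of ours, not from the source; in the matrix case the coefficients are matrices and positivity becomes positive definiteness.

```python
def polya_step(poly, l):
    """One Polya's iteration: multiply `poly` (a dict mapping length-l
    exponent tuples to coefficients) by (alpha_1 + ... + alpha_l)."""
    out = {}
    for gamma, c in poly.items():
        for j in range(l):
            # bump the j-th exponent by one and accumulate the coefficient
            eta = tuple(g + (k == j) for k, g in enumerate(gamma))
            out[eta] = out.get(eta, 0) + c
    return out

# F(a) = 2 a1^2 - a1 a2 + 2 a2^2: positive on the simplex, one negative
# coefficient.  One iteration gives 2 a1^3 + a1^2 a2 + a1 a2^2 + 2 a2^3.
F = {(2, 0): 2, (1, 1): -1, (0, 2): 2}
F1 = polya_step(F, 2)
```

Repeated calls to `polya_step` realize larger exponents $e$; the iteration stops once every coefficient is positive (definite).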
III Problem Set-Up
In this section, we show how Polya’s theorem can be used to determine the robust stability of an uncertain system using linear matrix inequalities with a distributable structure.
III-A Polya’s Algorithm
We consider the stability of the system described by Equation (6). We are interested in finding a $P(\alpha)$ which satisfies the conditions of Theorem II. According to Polya’s theorem, the constraints of Theorem II are satisfied if, for some sufficiently large $d_1$ and $d_2$, the polynomials
$\Big(\sum_{i=1}^{l} \alpha_i\Big)^{d_1} P(\alpha)$  (11)
$-\Big(\sum_{i=1}^{l} \alpha_i\Big)^{d_2} \big(A^T(\alpha)P(\alpha) + P(\alpha)A(\alpha)\big)$  (12)
have all positive definite coefficients.
Let $P(\alpha)$ be a homogeneous polynomial of degree $d_p$ which can be represented as
$P(\alpha) = \sum_{\gamma \in W_{d_p}} P_{\langle\gamma\rangle}\,\alpha^\gamma,$  (13)
where the coefficients $P_{\langle\gamma\rangle} \in \mathbb{S}^n$ and where we recall that $W_{d_p}$ is the set of the exponents of all $l$-variate monomials of degree $d_p$. Since $A(\alpha)$ is a homogeneous polynomial of degree $d_a$, we can write it as
$A(\alpha) = \sum_{\gamma \in W_{d_a}} A_{\langle\gamma\rangle}\,\alpha^\gamma,$  (14)
where the coefficients $A_{\langle\gamma\rangle} \in \mathbb{R}^{n \times n}$. By substituting (13) and (14) into (11) and (12) and defining $d_{pa} := d_p + d_a$ as the degree of $P(\alpha)A(\alpha)$, the conditions of Theorem II can be represented in the form
$\sum_{h \in W_{d_p}} \beta_{\langle h\rangle,\,q}\, P_{\langle h\rangle} \succ 0, \quad q = 1, \dots, \operatorname{card}(W_{d_p + d_1}),$  (15)
$\sum_{h \in W_{d_p}} \big( H^T_{\langle h\rangle,\,q}\, P_{\langle h\rangle} + P_{\langle h\rangle}\, H_{\langle h\rangle,\,q} \big) \prec 0, \quad q = 1, \dots, \operatorname{card}(W_{d_{pa} + d_2}).$  (16)
Here $\beta_{\langle h\rangle,\,q}$ is defined to be the scalar coefficient which multiplies $P_{\langle h\rangle}$ in the $q$-th monomial of the homogeneous polynomial (11) using the lexicographical ordering. Likewise $H_{\langle h\rangle,\,q}$ is the term which left or right multiplies $P_{\langle h\rangle}$ in the $q$-th monomial of (12) using the lexicographical ordering. For an intuitive explanation as to how these $\beta$ and $H$ terms are calculated, we consider a simple example. Precise formulae for these terms follow the example.
1) Example: Calculating the and coefficients.
Consider and . By expanding Equation (11) for we have
(17) 
The terms are then extracted as
(18) 
Next, by expanding Equation (12) for we have
(19) 
The terms are then extracted as
(20) 
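The extraction of the $\beta$ terms in this example can be mimicked by brute-force expansion: repeatedly multiplying by $(\alpha_1 + \cdots + \alpha_l)$ and collecting, for each resulting monomial, the factor that multiplies each coefficient of $P(\alpha)$. The $1 \times 1$ coefficients $P_1$, $P_2$ below are hypothetical stand-ins for the elided example data.

```python
import numpy as np

def polya_product(P, l, e):
    """Coefficients of (alpha_1 + ... + alpha_l)^e * P(alpha), where P
    maps exponent tuples to matrix coefficients.  The scalar multiple
    accumulated on each P_h in a given output monomial is exactly the
    corresponding beta coefficient."""
    for _ in range(e):
        out = {}
        for gamma, M in P.items():
            for j in range(l):
                eta = tuple(g + (k == j) for k, g in enumerate(gamma))
                out[eta] = out.get(eta, 0) + M
        P = out
    return P

# Hypothetical P(alpha) = P1*a1 + P2*a2 with 1x1 "matrix" coefficients.
P1, P2 = np.array([[1.0]]), np.array([[5.0]])
Q = polya_product({(1, 0): P1, (0, 1): P2}, l=2, e=1)
# (a1 + a2)(P1 a1 + P2 a2) = P1 a1^2 + (P1 + P2) a1 a2 + P2 a2^2,
# so the beta coefficients multiplying P1 and P2 in the a1*a2 monomial
# are both 1.
```

Tracking which source coefficient each accumulated term came from (rather than summing the matrices, as this sketch does) yields the $\beta_{\langle h\rangle, q}$ table itself; the same bookkeeping on $A^T(\alpha)P(\alpha) + P(\alpha)A(\alpha)$ yields the $H$ terms.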
2) General Formula: The $\beta$ coefficients can be formally defined recursively as follows. Let the initial values be defined as
(21) 
Then, iterating for $i = 1, \dots, d_1$, we let
(22) 
Finally, we set $\beta$ equal to the final iterate. To obtain the $H$ coefficients, set the initial values as
(23) 
Then, iterating for $i = 1, \dots, d_2$, we let
(24) 
Finally, set $H$ equal to the final iterate.
For the case of large-scale systems, computing and storing the $\beta$ and $H$ coefficients is a significant challenge due to the sheer number of these coefficients. Specifically, the number of terms increases with $l$ (the number of uncertain parameters in system (6)), $d_p$ (the degree of $P(\alpha)$), $d_a$ (the degree of $A(\alpha)$) and $d_1, d_2$ (Polya’s exponents) as follows.
3) Number of $\beta$ coefficients: For given $l$, $d_p$ and $d_1$, since $h \in W_{d_p}$ and $q \in \{1, \dots, \operatorname{card}(W_{d_p + d_1})\}$, the number of $\beta$ coefficients is the product of $\operatorname{card}(W_{d_p})$ and $\operatorname{card}(W_{d_p + d_1})$. Recall that $\operatorname{card}(W_{d_p})$ is the number of all $l$-variate monomials of degree $d_p$ and can be calculated using (2) as follows.
$\operatorname{card}(W_{d_p}) = f(l, d_p) = \dfrac{(d_p + l - 1)!}{d_p!\,(l - 1)!}$  (25)
Likewise, $\operatorname{card}(W_{d_p + d_1})$, i.e., the number of all $l$-variate monomials of degree $d_p + d_1$, is calculated using (2) as follows.
$\operatorname{card}(W_{d_p + d_1}) = f(l, d_p + d_1) = \dfrac{(d_p + d_1 + l - 1)!}{(d_p + d_1)!\,(l - 1)!}$  (26)
The number of $\beta$ coefficients is therefore $\operatorname{card}(W_{d_p}) \cdot \operatorname{card}(W_{d_p + d_1})$.
4) Number of $H$ coefficients: For given $l$, $d_p$, $d_a$ and $d_2$, since $h \in W_{d_p}$ and $q \in \{1, \dots, \operatorname{card}(W_{d_p + d_a + d_2})\}$, the number of $H$ coefficients is the product of $\operatorname{card}(W_{d_p})$ and $\operatorname{card}(W_{d_p + d_a + d_2})$. By using (2), we have
$\operatorname{card}(W_{d_p + d_a + d_2}) = f(l, d_p + d_a + d_2) = \dfrac{(d_p + d_a + d_2 + l - 1)!}{(d_p + d_a + d_2)!\,(l - 1)!}$  (27)
The number of $H$ coefficients is therefore $\operatorname{card}(W_{d_p}) \cdot \operatorname{card}(W_{d_p + d_a + d_2})$.
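The counts above are easy to evaluate numerically; the sketch below implements them via the binomial form of (2), together with a rough memory estimate for the $H$ coefficients. The assumption that each $H$ coefficient is stored as an $n \times n$ block of 8-byte doubles is ours, not from the source.

```python
from math import comb

def card_W(l, d):
    """Number of l-variate monomials of degree d: (d+l-1)! / (d! (l-1)!)."""
    return comb(d + l - 1, l - 1)

def num_beta(l, d_p, d1):
    """Count of beta coefficients, per (25)-(26)."""
    return card_W(l, d_p) * card_W(l, d_p + d1)

def num_H(l, d_p, d_a, d2):
    """Count of H coefficients, per (27)."""
    return card_W(l, d_p) * card_W(l, d_p + d_a + d2)

def H_memory_bytes(l, d_p, d_a, d2, n):
    """Rough storage for the H coefficients, assuming each is an n x n
    block of 8-byte doubles (a storage-layout assumption of this sketch)."""
    return num_H(l, d_p, d_a, d2) * n * n * 8
```

Evaluating these for even moderate $l$ and Polya exponents shows the combinatorial growth that motivates the Terabyte-range figures reported in Fig. 2.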
The number of $\beta$ and $H$ coefficients, and the required memory to store these coefficients, are shown in Figs. 1 and 2 in terms of the number of uncertain parameters and for different Polya’s exponents.
It is observed from Fig. 2 that, even for a small number of uncertain parameters, the required memory is in the Terabyte range. In [38], we proposed a decentralized computing approach to the calculation of the $\beta$ coefficients on large cluster computers. In the present work, we extend this method to the calculation of the $H$ coefficients and the SDP elements, which will be discussed in the following section. We express the LMIs associated with conditions (15) and (16) as an SDP in both primal and dual forms. We also discuss the structure of the primal and dual SDP variables and the constraints.
III-B SDP Problem Elements
A semi-definite programming problem can be stated either in primal or dual format. Given $C \in \mathbb{S}^m$, $B_i \in \mathbb{S}^m$ and $a \in \mathbb{R}^K$, the primal problem is of the form
$\max_{X}\ \operatorname{tr}(CX) \quad \text{subject to} \quad A(X) = a,\ X \succeq 0,$  (28)
where the linear operator $A : \mathbb{S}^m \to \mathbb{R}^K$ is defined as
$A(X) = \big[\operatorname{tr}(B_1 X)\ \ \operatorname{tr}(B_2 X)\ \ \cdots\ \ \operatorname{tr}(B_K X)\big]^T.$  (29)
$X \in \mathbb{S}^m$ is the primal variable. Given a primal SDP, the associated dual problem is
$\min_{y,\,S}\ a^T y \quad \text{subject to} \quad A^T(y) - C = S,\ S \succeq 0,$  (30)
where $A^T : \mathbb{R}^K \to \mathbb{S}^m$ is the transpose operator and is given by
$A^T(y) = \sum_{i=1}^{K} y_i B_i,$  (31)
and where $y \in \mathbb{R}^K$ and $S \in \mathbb{S}^m$ are the dual variables. The elements $C$, $B_i$ and $a$ of the SDP problem associated with the LMIs in (15) and (16) are defined as follows. We define the element $C$ as
(32) 
where
(33) 
where we recall that $n$ is the dimension of system (6) and $l$ is the number of uncertain parameters; the block dimensions are determined by the numbers of monomials in (11) and (12), and the construction involves a small positive parameter.
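A minimal sketch of the primal and dual operators (29) and (31), using dense numpy matrices; the test exercises the adjoint identity $\operatorname{tr}(A^T(y)X) = y^T A(X)$, which holds for any symmetric $X$ and is a convenient correctness check. The matrices below are arbitrary illustrative data, not problem data from this paper.

```python
import numpy as np

def A_op(X, B):
    """Primal operator (29): A(X) = [tr(B1 X), ..., tr(BK X)]^T."""
    return np.array([np.trace(Bi @ X) for Bi in B])

def A_adj(y, B):
    """Dual (transpose) operator (31): A^T(y) = sum_i y_i * B_i."""
    return sum(yi * Bi for yi, Bi in zip(y, B))

# Arbitrary illustrative data: K = 2 constraint matrices in S^2.
B = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
X = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric primal variable
y = np.array([0.5, -1.0])                # dual variable
```

In the structured SDP of this section, the $B_i$ are sparse block-diagonal, so a practical implementation would exploit that structure rather than dense products as here.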
For , define elements as
(34) 
where $K$ is the number of dual variables in (30) and is equal to the product of the number of upper-triangular elements in each $P_{\langle\gamma\rangle}$ (the coefficients of $P(\alpha)$) and the number of coefficients in $P(\alpha)$ (i.e. the cardinality of $W_{d_p}$). Since there are $\operatorname{card}(W_{d_p})$ coefficients in $P(\alpha)$ and each coefficient has $\tfrac{n(n+1)}{2}$ upper-triangular elements, we find
$K = \operatorname{card}(W_{d_p})\,\dfrac{n(n+1)}{2}.$  (35)
To define the $B_i$ blocks, we first define the function
(36)
which maps each dual variable to a basis matrix, where we recall that the basis for $\mathbb{S}^n$ was defined in (4). Note that a different choice of basis would require a different function. Then for $i = 1, \dots, K$,
(37) 
Finally, to complete the SDP problem associated with Polya’s algorithm, set
(38) 
III-C Parallel Setup Algorithm
In this section, we propose a decentralized, iterative algorithm for calculating the terms defined in (22), (24), (32) and (34). The algorithm has been implemented in C++, using MPI (Message Passing Interface), and is available at: www.sites.google.com/a/asu.edu/kamyar/software. We present an abridged description of this algorithm in Algorithm 1, parameterized by the number of available processors.
(39) 
(40) 
Note that we have only addressed the problem of robust stability analysis, using the polynomial inequality
for $\alpha \in \Delta_l$. However, we can generalize the decentralized setup algorithm to consider a more general class of feasibility problems, i.e.,
(41) 
for $\alpha \in \Delta_l$. One motivation behind the development of such a generalized setup algorithm is that the parameter-dependent versions of the LMIs associated with $H_2$ and $H_\infty$ synthesis problems in [42, 43] can be formulated in the form of (41).
III-D Setup Algorithm: Complexity Analysis
Since checking the positive definiteness of all representatives of a square matrix with parameters on proper real intervals is intractable [7], the question of feasibility of (9) is also intractable. To address this inherent intractability, we establish a trade-off between accuracy and complexity: we develop a sequence of decentralized polynomial-time algorithms whose solutions converge to the exact solution of the NP-hard problem. In other words, the translation of the polynomial optimization problem to a sequence of LMI problems is the main source of complexity. This high complexity is unavoidable and, in fact, is the reason we seek parallel algorithms.
Algorithm 1 distributes the computation and storage of the $\beta$ and $H$ coefficients among the processors and their dedicated memories, respectively. In the ideal case, where the number of available processors is sufficiently large (equal to the number of monomials in (12)), only one monomial (one coefficient of (11) and one of (12)) is assigned to each processor.
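When fewer processors than monomials are available, each processor must be assigned several monomials. The sketch below shows one simple way to do this, a contiguous block partition that keeps the per-processor load within one item of balance; it illustrates the load-balancing idea only and is not the authors' MPI implementation.

```python
def partition(num_items, num_procs):
    """Contiguous block partition of `num_items` work items (e.g. monomial
    indices) over `num_procs` processors.  The remainder items go to the
    first few processors, so loads differ by at most one."""
    base, rem = divmod(num_items, num_procs)
    blocks, start = [], 0
    for p in range(num_procs):
        size = base + (1 if p < rem else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks
```

Contiguous blocks are a natural choice here because neighboring monomials in lexicographic order share most of their exponent structure, which keeps the communication pattern between processors local.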
1) Computational complexity analysis: The most computationally expensive part of the setup algorithm is the calculation of the blocks in (37). Considering that the cost of multiplying two $n \times n$ matrices is $O(n^3)$, the cost of calculating each block scales accordingly. According to (34) and (37), the total number of blocks is fixed by the problem dimensions. Hence, as per Algorithm 1, each processor processes an equal share of the blocks, inversely proportional to the number of available processors. Thus the per-processor computational cost of calculating the blocks at each Polya’s iteration is
(42) 
By substituting for from (35), card from (25), from (26) and from (27), the per processor computation cost at each iteration is
(43) 
assuming that and . For example, for the case of largescale systems (large and ), the computation cost per processor at each iteration is having processors, having processors and having processors. Thus for the case where , the number of operations grows more slowly in than in .
2) Communication complexity analysis: Communication between processors can be modeled by a directed graph, where the set of nodes is the set of indices of the available processors and the set of edges is the set of all pairs of processors that communicate with each other. For every directed graph we can define an adjacency matrix: if processor $i$ communicates with processor $j$, then the $(i, j)$ entry is $1$; otherwise it is $0$. In this section, we only define the adjacency matrix for the part of the algorithm that performs Polya’s iterations on (11). For Polya’s iterations on (12), the adjacency matrix can be defined in a similar manner. For simplicity, we assume that at each iteration, the number of available processors is equal to the number of monomials in