Inner approximations for polynomial matrix inequalities and robust stability regions
Abstract
Following a polynomial approach, many robust fixed-order controller design problems can be formulated as optimization problems whose set of feasible solutions is modelled by parametrized polynomial matrix inequalities (PMI). These feasibility sets are typically nonconvex. Given a parametrized PMI set, we provide a hierarchy of linear matrix inequality (LMI) problems whose optimal solutions generate inner approximations modelled by a single polynomial superlevel set. These inner approximations converge in a well-defined analytic sense to the nonconvex original feasible set, with asymptotically vanishing conservatism. One may also impose the hierarchy of inner approximations to be nested or convex. In the latter case they no longer converge to the feasible set, but they can be used in a convex optimization framework at the price of some conservatism. Finally, we show that the specific geometry of nonconvex polynomial stability regions can be exploited to improve convergence of the hierarchy of inner approximations.
Keywords: polynomial matrix inequality, linear matrix inequality, robust optimization, robust fixed-order controller design, moments, positive polynomials.
1 Introduction
Linear system stability can be formulated semialgebraically in the space of coefficients of the characteristic polynomial. The region of stability is generally nonconvex in this space, and this is a major obstacle when solving fixed-order and/or robust controller design problems. Using the Hermite stability criterion, these problems can be formulated as parametrized polynomial matrix inequalities (PMIs) where parameters account for uncertainties and the decision variables are controller coefficients. Recent results in real algebraic geometry and generalized problems of moments can be used to build up a hierarchy of convex linear matrix inequality (LMI) outer approximations of the region of stability, with asymptotic convergence to its convex hull, see e.g. [6] for a software implementation and examples, and see [7] for an application to PMI problems arising from static output feedback design.
If outer approximations of nonconvex semialgebraic sets can be readily constructed with these LMI relaxations, inner approximations are much harder to obtain. However, for controller design purposes, inner approximations are essential since they correspond to sufficient conditions and hence guarantees of stability or robust stability. In the robust systems control literature, convex inner approximations of the stability region have been proposed in the form of polytopes [14], ellipsoids [4] or more general LMI regions [5, 9] derived from polynomial positivity conditions. Interval analysis can also be used in this context, see e.g. [18].
In this paper we provide a numerical scheme for approximating from inside the feasible set of a parametrized PMI (for some matrix polynomial ), that is, the set of points such that for all values of the parameter in some specified domain (assumed to be a basic compact semialgebraic set, that is, a set defined by intersecting a finite number of polynomial superlevel sets). This includes as a special case the approximation of the stability region (and the robust stability region) of linear systems. The particular case where is affine in covers parametrized LMIs with many applications in robust control, as surveyed e.g. in [15].
Given a compact set containing , this numerical scheme consists of building up a sequence of inner approximations , , which fulfils two essential conditions:

The approximation converges in a well-defined analytic sense;

Each set is defined in a simple manner, as a superlevel set of a single polynomial. To our mind, this feature is essential for a successful implementation in practical applications.
More precisely, we provide a hierarchy of inner approximations of , where each is a basic semialgebraic set for some polynomial of degree . The vector of coefficients of the polynomial is an optimal solution of an LMI problem. When increases, the convergence of to is very strong. Indeed, the Lebesgue volume of converges to the Lebesgue volume of . In fact, on any (a priori fixed) compact set , the sequence converges for the norm on to the function where is the minimum eigenvalue of the matrix polynomial associated with the PMI. Consequently, in (Lebesgue) measure on , and almost everywhere and almost uniformly on , for a subsequence . In addition, if one defines the piecewise polynomial , then almost everywhere, almost uniformly and in (Lebesgue) measure on .
In addition, we can easily enforce that the inner approximations are nested and/or convex. Of course, for the latter convex approximations, convergence to is lost if is not convex. However, on the other hand, having a convex inner approximation of may prove to be very useful, e.g., for optimization purposes.
On the practical and computational sides, the quality of the approximation of depends heavily on the chosen set on which to make the approximation of the function . The smaller , the better the approximation. In particular, it is worth emphasizing that when the set to approximate is the stability or robust stability region of a linear system, then its particular geometry can be exploited to construct a tight bounding set . Therefore, a good approximation of is obtained significantly faster than with an arbitrary set containing .
Finally, let us emphasize that the main goal of the paper is to show that it is possible to provide a tight and explicit inner approximation, with no quantifier, of nonconvex feasible sets described with quantifiers. This new feasible set can then be used for optimization purposes, and we are facing two cases:

the convex case: if and are convex polynomials, and then the optimization problem is polynomially solvable. Indeed, functions , , are polynomially computable, of polynomial growth, and the feasible set is polynomially bounded. Then polynomial solvability of the problem follows from [3, Theorem 5.3.1].

the nonconvex case: if is not convex then notice first that we still have an optimization problem with no quantifier, a nontrivial improvement. Second, we are now faced with a polynomial optimization problem with a single polynomial constraint and possibly bound constraints . One may then apply the hierarchy of convex LMI relaxations described in [11, Chapter 5]. Of course, in general, polynomial optimization is NP-hard. However, if the size of the problem is relatively small and the degree of is small, practice seems to reveal that the problem is solved exactly with a few relaxations in many cases, see [11, §5.3.3]. In addition, if some structured sparsity is present in the data then one may even solve problems of potentially large size by using an appropriate sparse version of these LMI relaxations as described in [17], see also [11, §4.6].
The outline of the paper is as follows. In Section 2 we formally state the problem to be solved. In Section 3 we describe our hierarchy of inner approximations. In Section 4, we show that the specific geometry of the stability region can be exploited, as illustrated on several standard problems of robust control. The final section collects technical results and the proofs.
2 Problem statement
Let denote the ring of real polynomials in the variables , and let be the vector space of real polynomials of degree at most . Similarly, let denote the convex cone of real polynomials that are sums of squares (SOS) of polynomials, and its subcone of SOS polynomials of degree at most . Denote by the space of real symmetric matrices. For a given matrix , the notation means that is positive semidefinite, i.e., all its eigenvalues are real and nonnegative.
Let be a matrix polynomial, i.e. a matrix whose entries are scalar multivariate polynomials of the vector indeterminates and . Then
(1) 
defines a parametrized polynomial matrix inequality (PMI) set, where is a vector of decision variables, is a vector of uncertain parameters belonging to a compact semialgebraic set
(2) 
described by given polynomials , and is a given symmetric polynomial matrix of size . As is compact, without loss of generality we assume that for some , , where is sufficiently large.
We also assume that is bounded and that we are given a compact set with explicitly known moments , , of the Lebesgue measure on , i.e.
(3) 
where . Typical choices for are a box or a ball. To fix ideas, let
for some polynomials . Again, with no loss of generality, we may and will assume that for some , , where is sufficiently large. Finally, denote by the Lebesgue volume of any Borel set .
We are now ready to state our polynomial inner approximation problem.
Problem 1
(Inner Approximations) Given set , build up a sequence of basic closed semialgebraic sets , for some , such that
In addition, we may want the sequence of inner approximations to satisfy additional nesting or convexity conditions.
Problem 2
(Nested Inner Approximations) Solve Problem 1 with the additional constraint
Problem 3
(Convex Inner Approximations) Given set , build up a sequence of nested basic closed convex semialgebraic sets , for some , such that
3 A hierarchy of semialgebraic inner approximations
Given a polynomial matrix which defines the set in (1), and polynomials which define the uncertain set in (2), let denote the Euclidean unit sphere of and let be the function:
(4) 
referred to as the robust minimum eigenvalue function of . Function is continuous but not necessarily differentiable. It allows us to define set alternatively as the superlevel set
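In practice the robust minimum eigenvalue function can be evaluated approximately by enumerating a finite grid of the uncertainty set. A coarse numerical sketch, where the matrix polynomial `P` below is a hypothetical example of ours, not one from the paper:

```python
import numpy as np

def robust_min_eig(P, x, u_grid):
    """Approximate the robust minimum eigenvalue function
    f(x) = min over u in the uncertainty set of lambda_min(P(x, u))
    by enumerating a finite grid of uncertainty values u."""
    return min(np.linalg.eigvalsh(P(x, u)).min() for u in u_grid)

# A hypothetical 2x2 symmetric matrix polynomial with scalar
# uncertainty u ranging over [0, 1]:
def P(x, u):
    x1, x2 = x
    return np.array([[1.0 - x1**2 - u, x2],
                     [x2, 1.0 - x2**2]])

u_grid = np.linspace(0.0, 1.0, 101)
f0 = robust_min_eig(P, (0.0, 0.0), u_grid)   # worst case attained at u = 1
```

For this example, at the origin the matrix is diagonal with entries 1 - u and 1, so the robust minimum eigenvalue is 0, attained at u = 1; a point belongs to the robust PMI set exactly when this value is nonnegative.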
3.1 Primal SOS SDP problems
Let be the constant polynomial . Let , and consider the hierarchy of convex optimization problems indexed by the parameter , :
(5) 
where the decision variables are the coefficients of polynomials , and the coefficients of SOS polynomials , , and , . Note in particular that the degrees of the polynomials should be such that , and . Since higher-degree terms may cancel, the degrees can be chosen strictly greater than these lower bounds. However, in the experiments described later in the paper, we systematically chose the lowest possible degrees.
For each fixed, the associated optimization problem (5) is a semidefinite programming (SDP) problem. Indeed, stating that the two polynomials on both sides of the equation in (5) are identical translates into linear equalities between the coefficients of the polynomials, and stating that some of them are SOS translates into semidefiniteness of appropriate symmetric matrices. For more details, the interested reader is referred to e.g. [11, Chapter 2].
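The coefficient-matching step can be made concrete in the univariate case: a polynomial p of degree 4 is SOS exactly when p(x) = z(x)' Q z(x) for some positive semidefinite Gram matrix Q in the monomial basis z(x) = [1, x, x^2]. A sketch (our own illustration, not the paper's solver setup) showing the linear map from Gram matrix entries to polynomial coefficients:

```python
import numpy as np

def gram_to_coeffs(Q):
    """Coefficients of the polynomial z(x)' Q z(x) in the basis
    1, x, ..., x^(2n-2), where z(x) = [1, x, ..., x^(n-1)].
    The coefficient of x^k is the sum of Q[i, j] over i + j == k:
    these are exactly the linear equality constraints of the SOS SDP."""
    n = Q.shape[0]
    c = np.zeros(2 * n - 1)
    for i in range(n):
        for j in range(n):
            c[i + j] += Q[i, j]
    return c

# A Gram matrix certifying that p(x) = 1 + x^4 is a sum of squares:
Q = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 2.0,  0.0],
              [-1.0, 0.0,  1.0]])
coeffs = gram_to_coeffs(Q)                     # coefficients of 1 + x^4
psd = np.linalg.eigvalsh(Q).min() >= -1e-9     # Q is positive semidefinite
```

The SDP solver searches over all symmetric Q satisfying the linear equalities encoded by `gram_to_coeffs`, subject to the semidefiniteness constraint.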
3.2 Dual moment SDP problems
To define the dual to SDP problem (5) we must introduce some notations.
With a sequence , , let be the linear functional
With , the moment matrix of order associated with is the real symmetric matrix with rows and columns indexed in , and defined by
(6) 
A sequence has a representing measure if there exists a finite Borel measure on , such that for every .
With as above and , the localizing matrix of order associated with and is the real symmetric matrix with rows and columns indexed by , and whose entry is given by
(7) 
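Both matrices are easy to assemble once the moment sequence is known. A univariate sketch (our own illustration) using the moments of the Lebesgue measure on the interval [-1, 1]: since the sequence has a representing measure supported on the set where g(x) = 1 - x^2 is nonnegative, both the moment matrix and the localizing matrix come out positive semidefinite.

```python
import numpy as np

def moment_1d(k):
    """k-th moment of the Lebesgue measure on [-1, 1]."""
    return 0.0 if k % 2 else 2.0 / (k + 1)

def moment_matrix(d):
    """Moment matrix of order d: rows and columns are indexed by the
    monomials 1, x, ..., x^d, and entry (i, j) is the moment y_{i+j}."""
    return np.array([[moment_1d(i + j) for j in range(d + 1)]
                     for i in range(d + 1)])

def localizing_matrix(d, g):
    """Localizing matrix of order d for g(x) = sum_k g[k] x^k:
    entry (i, j) is sum_k g[k] * y_{i+j+k}."""
    return np.array([[sum(gk * moment_1d(i + j + k) for k, gk in enumerate(g))
                      for j in range(d + 1)] for i in range(d + 1)])

M2 = moment_matrix(2)                          # basis 1, x, x^2
L1 = localizing_matrix(1, [1.0, 0.0, -1.0])    # g(x) = 1 - x^2
```

The semidefiniteness of these matrices for all orders is precisely the constraint appearing in the dual moment SDP problem.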
3.3 Convergence
Before stating our main results, let us recall some standard notions of functional analysis. Let be a function of , and let denote a sequence of functions of indexed by . Lebesgue space is the Banach space of integrable functions on equipped with the norm
Regarding sequence , we use the following notions of convergence in when :

in norm means ;

in Lebesgue measure means that for every ,

almost everywhere means that pointwise except possibly for with ;

almost uniformly means that for every given , there is a set such that and uniformly on ;

finally, with the notation we mean that while satisfying for all .
For more details on these related notions of convergence, see [1, §2.5].
Lemma 1
For every , SDP problem (5) has an optimal solution and
(9) 
A detailed proof of Lemma 1 can be found in §6.1. In particular we prove that there is no duality gap between SOS SDP problem (5) and moment SDP problem (8), i.e. .
For every , let be the piecewise polynomial
(10) 
We are now in a position to prove our main result.
Theorem 1
Let be an optimal solution of SDP problem (5) and consider the associated sequence for . Then:
(a) in norm and in Lebesgue measure;
(b) almost everywhere, almost uniformly and in Lebesgue measure.
3.4 Polynomial and piecewise polynomial inner approximations
Corollary 1
A proof can be found in §6.3.
3.5 Nested polynomial inner approximations
We now consider Problem 2 where is constrained to be a polynomial instead of a piecewise polynomial. We need to slightly modify SDP problem (5). Suppose that at step in the hierarchy we have already obtained an optimal solution , such that on , for all . At step we now solve SDP problem (5) with the additional constraint
(14) 
with unknown SOS polynomials and .
Corollary 2
For a proof see §6.4.
3.6 Convex nested polynomial inner approximations
Finally, for , denote by the Hessian matrix of at , and consider SDP problem (5) with the additional constraint
(15) 
for some SOS polynomials , and .
Corollary 3
The proof follows along the same lines as the proof of Corollary 2.
3.7 Example
Consider the nonconvex planar PMI set
which is Example IIE in [7] scaled to fit within the unit box
whose moments (3) are readily given by
In Figure 1 we represent the degree two and degree four solutions to SDP problem (5), modelled with YALMIP 3 and solved with SeDuMi 1.3 under a Matlab environment. We see in particular that the degree four polynomial superlevel set is somewhat smaller than expected. This is due to the fact that the objective function in problem (5) is the integral of over the whole box , not only over the PMI set . A significant role is played by the components of the integral on the complement set , and this deteriorates the inner approximation.
This issue can be addressed partly by embedding in a tighter set , for example here the unit disk
whose moments (3) are given by
where is the gamma function such that for integer . See [12, Theorem 3.1] for the general expression of the moments of the unit disk in (note however that there is an incorrect factor in the right-hand side of equation (3.3) in [12]).
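The moments of the Lebesgue measure on the unit disk admit a closed form obtained in polar coordinates; a minimal sketch (the function name `disk_moment` is ours):

```python
from math import gamma, pi

def disk_moment(a, b):
    """Moment of the Lebesgue measure on the unit disk: the integral
    over {x^2 + y^2 <= 1} of x^a y^b dx dy. It is zero when a or b is
    odd; otherwise, in polar coordinates it evaluates to
    2 * Gamma((a+1)/2) * Gamma((b+1)/2) / ((a+b+2) * Gamma((a+b)/2 + 1))."""
    if a % 2 or b % 2:
        return 0.0
    return 2.0 * gamma((a + 1) / 2) * gamma((b + 1) / 2) \
        / ((a + b + 2) * gamma((a + b) / 2 + 1))
```

Sanity checks: the zeroth moment recovers the disk's area pi, and the second-order moment of x^2 equals pi/4.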
In Figure 2 we represent the degree two and degree four solutions to SDP problem (5). Comparing with Figure 1, we see that the approximations embedded in the unit disk are much tighter than the approximations embedded in the unit box. Finally, in Figure 3 we represent the tighter degree six and degree eight inner approximations within the unit disk.
4 Geometry of control problems
As explained in the introduction, inner approximations of the stability regions are essential for fixed-order controller design. The PMI regions arising from parametric stability conditions have a specific geometry that can be exploited to improve the convergence of the hierarchy of inner approximations. In this section, we first recall Hermite's PMI formulation of (discrete-time) stability conditions. Then we recall that the PMI stability region is the image of a unit box through a multiaffine mapping, which allows us to derive explicit expressions for the moments of the full-dimensional stability region, as well as tight polytopic outer approximations of low-dimensional affine sections of the stability region. Numerical examples illustrate these techniques for fixed-order nominal and robustly stabilizing controller design.
4.1 Hermite’s PMI
Derived by the French mathematician Charles Hermite in 1854, the Hermite matrix criterion is a symmetric version of the Routh-Hurwitz criterion for assessing stability of a polynomial. Originally it was derived for locating the roots of a polynomial in the open upper half of the complex plane, but with a fractional transform it can be readily transposed to the open unit disk and discrete-time stability. The criterion says that a polynomial has all its roots in the open unit disk if and only if its Hermite matrix is positive definite, where
are by upper-right triangular Toeplitz matrices, see e.g. the entrywise formulas of [2, Theorem 3.13] or the construction explained in [4]. The Hermite matrix is by , symmetric, and quadratic in the coefficients , so that the interior of the PMI set
is the parametric stability domain which is bounded, connected but nonconvex for . Optimal controller design amounts to optimizing over semialgebraic set .
4.2 Multiaffine mapping of the unit box
As explained e.g. in [14] or [16, §3.5] and references therein, stability domain can also be constructed as the image of the unit box (in the space of so-called reflection coefficients) through a multiaffine mapping. More explicitly where and multiaffine mapping is defined by
in the case . The general expression of for other values of is not given here for space reasons, but it follows readily from the construction outlined above.
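This multiaffine parametrization can be checked numerically in the lowest-order case. In the sketch below we use the degree n = 2 instance of the map, which we reconstruct from the Schur-Cohn / Levinson recursion (an assumption of ours, not the paper's notation): reflection coefficients (k1, k2) in the open unit box map to coefficients (a1, a2) = (k1(1 + k2), k2) of a Schur-stable monic polynomial z^2 + a1 z + a2.

```python
import numpy as np

def reflection_to_coeffs(k1, k2):
    """Multiaffine map from reflection coefficients in the open unit
    box to the coefficients (a1, a2) of the monic polynomial
    z^2 + a1 z + a2 (degree n = 2 case, reconstructed from the
    Schur-Cohn / Levinson recursion)."""
    return k1 * (1.0 + k2), k2

# Sample the open unit box and verify that every image polynomial
# has both roots strictly inside the unit disk.
rng = np.random.default_rng(0)
stable = []
for _ in range(200):
    k1, k2 = rng.uniform(-0.999, 0.999, size=2)
    a1, a2 = reflection_to_coeffs(k1, k2)
    roots = np.roots([1.0, a1, a2])
    stable.append(bool(np.all(np.abs(roots) < 1.0)))
```

Every sampled point of the box maps to a stable polynomial, consistent with the claim that the stability domain is the image of the unit box under a multiaffine map.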
Using this mapping we can obtain moments (3) of analytically:
(16) 
where is the determinant of the Jacobian of , in the case . For space reasons we do not give here the explicit value of as a function of , but it can be obtained by integration by parts.
Finally, let us mention a well-known geometric property of : its convex hull is a polytope whose vertices correspond to the polynomials with roots equal to or . For example, when , we have
(17) 
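The vertices of this polytope can be enumerated directly: they are the coefficient vectors of the monic polynomials whose roots are all +1 or -1. A sketch (our own illustration) using numpy to expand the root products:

```python
import numpy as np
from itertools import combinations_with_replacement

def vertex_polynomials(n):
    """Coefficient vectors of the monic degree-n polynomials whose
    roots all equal +1 or -1: the candidate vertices of the convex
    hull of the Schur stability domain in coefficient space."""
    verts = []
    for roots in combinations_with_replacement([-1.0, 1.0], n):
        # np.poly expands prod (z - r_i) into monic coefficients,
        # highest degree first
        verts.append(np.poly(roots))
    return np.array(verts)

V3 = vertex_polynomials(3)   # 4 vertices, e.g. (z+1)^3 and (z-1)^3
```

For n = 3 there are four such polynomials, including (z + 1)^3 with coefficients [1, 3, 3, 1] and (z - 1)^3 with coefficients [1, -3, 3, -1].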
4.3 Third degree stability region
Consider the problem of approximating from the inside the nonconvex stability region of a discrete-time third-degree polynomial . An ellipsoidal inner approximation was proposed in [4]. The Hermite polynomial matrix defining as in (1) is given by
The boundary of consists of two triangles and a hyperbolic paraboloid. The convex hull of is the simplex described in (17). We have analytic expressions (16) for the moments (3) of .
4.4 Fixedorder controller design
Consider the linear discrete-time system with characteristic polynomial depending affinely on two real design parameters and . It follows from Hermite's stability criterion that this polynomial has its roots in the open unit disk if and only if
is positive definite. As recalled in §4.2, the convex hull of the four-dimensional stability domain of a degree-four polynomial is the simplex with vertices , , , , corresponding to the five polynomials with zeros equal to or . Using elementary linear algebra, we find that the image of this simplex through the affine mapping parametrized by is the two-dimensional simplex
The (closure of the) stability region is therefore included in , whose moments (3) are readily obtained e.g. by the explicit formulas of [10].
In Figures 7 and 8 we represent the degree two, four, six and eight inner approximations to , corresponding to stability regions for the linear system. We observe that the approximations become tight rather quickly. This is due to the fact that is a good outer approximation of with known moments. Tighter outer approximations would result in tighter inner approximations of , but then the moments of can be hard to compute, see [8].
4.5 Robust controller design
Now consider the uncertain polynomial , with uncertain Hermite matrix and the corresponding parametrized PMI stability region in (1). Let us use the same bounding set as in §4.4.
In Figure 9 we represent the degree two and degree four inner approximations to , corresponding to robust stability regions for the linear system. Comparing with Figure 7 we see that the approximations are smaller, and in particular they do not touch the stability boundary, to cope with the robustness requirements.
5 Conclusion
We have constructed a hierarchy of inner approximations of feasible sets defined by parametrized or uncertain polynomial matrix inequalities (PMI). Each inner approximation is computed by solving a convex linear matrix inequality (LMI) problem. The hierarchy converges in a well-defined analytic sense, so that the conservatism of the approximation is guaranteed to vanish asymptotically. In addition, the inner approximations are simple polynomial or piecewise-polynomial superlevel sets, so that optimization over these sets is significantly simpler than optimization over the original parametrized PMI set. In particular, we remove the possibly complicated dependence of the problem data on the uncertain parameters.
One may also impose the hierarchy of inner approximations to be nested. Finally, one may also impose the inner approximations to be convex. In this latter case they do not converge any more to the feasible set but, on the other hand, optimization over the parametrized PMI set can be reformulated as a convex polynomial optimization problem (of course at the price of some conservatism). Ideally, beyond convexity, we may also want the inner convex approximation to be semidefinite representable (as an explicit affine projection of an affine section of the SDP cone), and deriving such a representation may be an interesting research direction.
The tradeoff to be found is between the tightness of the inner approximation and the degree of the defining polynomials. In the context of robust control design, a satisfactory inner approximation can possibly be computed offline, and then used online in a feedback control setup.
Our methodology is valid for general parametrized PMI problems. However, in the case of parametrized PMI problems coming from fixed-order robust controller design problems, geometric insight can be exploited to improve convergence of the hierarchy. The key information is the knowledge of the moments of the Lebesgue measure on a compact set which tightly contains the parametrized PMI set we want to approximate from the inside. It turns out that for robust control problems this knowledge is easily available, as illustrated in the paper by several examples.
The main limitation of the approach lies in the ability to solve primal moment and dual polynomial sum-of-squares LMI problems. State-of-the-art general-purpose semidefinite programming solvers can currently address problems of relatively moderate dimensions, but problem structure and data sparsity can be exploited for larger problems.
Acknowledgements
The first author acknowledges support by project No. 103/10/0628 of the Grant Agency of the Czech Republic. We are grateful to Luca Zaccarian for pointing out a mistake in a previous version of this paper.
6 Appendix
6.1 Proof of Lemma 1
Proof: The dual of polynomial SOS SDP problem (5) is moment SDP problem (8). Slater's condition cannot hold for (8) because has empty interior in . However, it turns out that Slater's condition holds for an equivalent version of SDP problem (8), i.e., the latter has a strictly feasible solution . Indeed, let be the ideal generated by the polynomial , so that the real variety associated with is just the unit sphere . It turns out that the real radical of is itself, that is, (where for , denotes the vanishing ideal); indeed, is Zariski dense in so that , and being irreducible, is a prime ideal, so . And after embedding in , we still have .
Let , and let . The monomials , , form a basis of the quotient space . Moreover, for every ,
for some real coefficients , and some . This is because every time one sees a monomial with , one uses to reduce this monomial modulo . For instance
etc. Therefore, for every ,
(18) 
for some real coefficients , and some .
So because of the constraints , the semidefinite program (8) is equivalent to the semidefinite program:
(19) 
where the smaller moment matrix is the submatrix of obtained by looking only at rows and columns indexed in the monomial basis , , instead of . Similarly, the smaller localizing matrix is the submatrix of obtained by looking only at rows and columns indexed in the monomial basis , , instead of ; and similarly for .
Indeed, in view of (18) and using , every column of associated with is a linear combination of columns associated with . And similarly for and . Hence, , and
for all , .
Next, let be the sequence of moments of the (product) measure uniformly distributed on , and scaled so that for all