Convex inner approximations of nonconvex semialgebraic sets applied to fixed-order controller design. A preliminary version of this work was presented during the International Symposium on Mathematical Theory of Networks and Systems, Budapest, Hungary, 5-9 July 2010.
Abstract
We describe an elementary algorithm to build convex inner approximations of nonconvex sets. Both input and output sets are basic semialgebraic sets given as lists of defining multivariate polynomials. Even though no optimality guarantees can be given (e.g. in terms of volume maximization for bounded sets), the algorithm is designed to preserve convex boundaries as much as possible, while removing regions with concave boundaries. In particular, the algorithm leaves invariant a given convex set. The algorithm is based on GloptiPoly 3, a public-domain Matlab package solving nonconvex polynomial optimization problems with the help of convex semidefinite programming (optimization over linear matrix inequalities, or LMIs). We illustrate how the algorithm can be used to design fixed-order controllers for linear systems, following a polynomial approach.
Keywords: polynomials; nonconvex optimization; LMI; fixed-order controller design
1 Introduction
The set of controllers stabilizing a linear system is generally nonconvex in the parameter space, and this is an essential difficulty faced by numerical algorithms for computer-aided control system design, see e.g. [4] and references therein. It follows from the derivation of the Routh-Hurwitz stability criterion (or its discrete-time counterpart) that the set of stabilizing controllers is real basic semialgebraic, i.e. it is the intersection of sublevel sets of given multivariate polynomials. A convex inner approximation of this nonconvex semialgebraic stability region was obtained in [4] in the form of linear matrix inequalities (LMI) derived from univariate polynomial positivity conditions, see also [9]. Convex polytopic inner approximations were also obtained in [16], for discrete-time stability, using reflection coefficients. Convex inner approximations make it possible to design stabilizing controllers with the help of convex optimization techniques, at the price of losing optimality with respect to closed-loop performance criteria (H₂ norm, H∞ norm or the like).
Generally speaking, the technical literature abounds with convex outer approximations of nonconvex semialgebraic sets. In particular, such approximations form the basis of many branch-and-bound global optimization algorithms [15]. By construction, Lasserre's hierarchy of LMI relaxations for polynomial programming is a sequence of embedded convex outer approximations which are semidefinite representable, i.e. which are obtained by projecting affine sections of the convex cone of positive semidefinite matrices, at the price of introducing lifting variables [6].
After some literature search, we could not locate any systematic constructive procedure to generate convex inner approximations of nonconvex semialgebraic sets, which contrasts sharply with the many convex outer approximations mentioned above. In the context of fixed-order controller design, inner approximations correspond to a guarantee of stability, at the price of losing optimality. No such stability guarantee can be ensured with outer approximations.
The main contribution of this paper is therefore an elementary algorithm, readily implementable in Matlab, that generates convex inner approximations of nonconvex sets. Both input and output sets are basic semialgebraic sets given as lists of defining multivariate polynomials. Even though no optimality guarantees can be given in terms of volume maximization for bounded sets, the algorithm is designed to preserve convex boundaries as much as possible, while removing regions with concave boundaries. In particular, the algorithm leaves invariant a given convex set. The algorithm is based on GloptiPoly 3, a public-domain Matlab package solving nonconvex polynomial optimization problems with the help of convex LMIs [7]. Even though the algorithm can be useful on its own, e.g. for testing convexity of semialgebraic sets, we illustrate how it can be used to design fixed-order controllers for linear systems, following a polynomial approach.
2 Convex inner approximation
Given a basic closed semialgebraic set

(1) X = {x ∈ Rⁿ : pₖ(x) ≤ 0, k = 1, …, N}

where the pₖ are given multivariate polynomials, we are interested in computing another basic closed semialgebraic set

(2) Y = {x ∈ Rⁿ : qₖ(x) ≤ 0, k = 1, …, M}

which is a valid convex inner approximation of X, in the sense that Y ⊂ X and Y is convex.
Ideally, we would like to find the tightest possible approximation, in the sense that the complement set X \ Y is as small as possible. Mathematically we may formulate the problem as the volume minimization problem

min vol(X \ Y)

but since set X is not necessarily bounded we should make sure that this integral makes sense. Moreover, computing the volume of a given semialgebraic set is a difficult task in general [8], so we expect that optimizing such a quantity is at least as difficult. In practice, in this paper, we will content ourselves with an inner approximation that removes the nonconvex parts of the boundary and keeps the convex parts as much as possible.
3 Detecting nonconvexity
Before describing the method, let us recall some basic definitions on polynomials and differential geometry. Let p be a multivariate polynomial of total degree d in the indeterminate x ∈ Rⁿ. Let

∇p(x) = [∂p(x)/∂xₖ], k = 1, …, n

be its gradient vector and

∇²p(x) = [∂²p(x)/∂xⱼ∂xₖ], j, k = 1, …, n

its (symmetric) Hessian polynomial matrix. Define the optimization problem

(3) μ* = min yᵀ∇²p(x)y  s.t.  p(x) = 0, ∇p(x)ᵀy = 0, yᵀy = 1

with global minimizers x* and y*.
Let us make the following nondegeneracy assumption on defining polynomials :
Assumption 1
There is no point x such that, for some k = 1, …, N, both pₖ(x) and ∇pₖ(x) vanish simultaneously while pⱼ(x) ≤ 0 for all j = 1, …, N, j ≠ k.
Since the polynomial system pₖ(x) = 0, ∇pₖ(x) = 0 involves n + 1 equations for n unknowns, Assumption 1 is satisfied generically. In other words, in the Euclidean space of coefficients of the polynomials pₖ, the instances violating Assumption 1 belong to a variety of Lebesgue measure zero, and an arbitrarily small perturbation of the coefficients generates a perturbed set satisfying Assumption 1.
Theorem 1
Let Assumption 1 hold. Then the surface p(x) = 0 is locally convex around the minimizer x* (i.e. the sublevel set {x : p(x) ≤ 0} is locally convex around x*) if and only if the optimum μ* of problem (3) is nonnegative.

Proof: The boundary of set X consists of points x such that pₖ(x) = 0 for some k, and pⱼ(x) ≤ 0 for j ≠ k. In the neighborhood of such a point x*, consider the Taylor series

(4) p(x* + u) = p(x*) + ∇p(x*)ᵀu + ½ uᵀ∇²p(x*)u + o(‖u‖²)

where o(‖u‖²) denotes terms of degree 3 or higher in the entries of vector u, the local coordinates. By Assumption 1, the gradient does not vanish along the boundary, and hence convexity of the boundary is inferred from the quadratic term in expression (4). More specifically, when ∇p(x*)ᵀu = 0, vector u belongs to the hyperplane tangent to the variety p(x) = 0 at point x*. Let U be a matrix whose columns span this linear subspace of dimension n − 1, so that u = Uv for some vector v. The quadratic form vᵀUᵀ∇²p(x*)Uv can be diagonalised with a congruence transformation (Schur decomposition) v = Vw, and hence uᵀ∇²p(x*)u = Σₖ λₖwₖ², k = 1, …, n − 1. The eigenvalues λₖ are, up to a positive normalization, the principal curvatures of the surface. Problem (3) then amounts to finding the minimum curvature, which is nonnegative when the surface is locally convex around x*.
In the case of three-dimensional surfaces (n = 3), the ideas of tangent plane, local coordinates and principal curvatures used in the proof of Theorem 1 are standard notions of differential geometry; see e.g. Section 3.3 in [2] for connections between principal curvatures and eigenvalues of the local Hessian form (called the second fundamental form, once suitably normalized).
As an example illustrating the proof of Theorem 1, consider the hyperboloid of one sheet {x ∈ R³ : p(x) = x₁² + x₂² − x₃² − 1 = 0}, with gradient ∇p(x) = 2[x₁, x₂, −x₃]ᵀ and Hessian ∇²p(x) = 2 diag(1, 1, −1).
At the point x* = [1, 0, 0]ᵀ, the tangent plane is {x : x₁ = 1}, and the restriction of yᵀ∇²p(x*)y to this plane, namely 2(y₂² − y₃²), is a bivariate quadratic form with eigenvalues 2 and −2, corresponding respectively to a convex section (positive curvature) and a concave hyperbolic section (negative curvature), see Figure 1.
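This computation can be checked numerically; the following pure-Python sketch assumes the reconstruction p(x) = x₁² + x₂² − x₃² − 1 with x* = [1, 0, 0]ᵀ used above, and restricts the Hessian to the tangent plane:

```python
# Restrict the Hessian of p(x) = x1^2 + x2^2 - x3^2 - 1 to the tangent plane
# at x* = (1, 0, 0). There grad p = (2, 0, 0), so the tangent plane is
# spanned by the unit vectors e2 and e3.
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, -2.0]]
U = [(0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]   # basis of the tangent plane

# restricted 2x2 form U' H U (diagonal here, so its entries are eigenvalues)
R = [[sum(U[i][k] * H[k][l] * U[j][l] for k in range(3) for l in range(3))
      for j in range(2)] for i in range(2)]
eigs = (R[0][0], R[1][1])   # mixed signs: one convex, one concave direction
```

The mixed signs of `eigs` reproduce the positive and negative principal curvatures discussed in the example.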
Theorem 1 can be exploited in an algorithmic way to generate a convex inner approximation of a semialgebraic set.
Algorithm 1
(Convex inner approximation)

Input: Polynomials pₖ, k = 1, …, N, defining set X as in (1). Small nonnegative scalar ε.

Output: Polynomials qₖ, k = 1, …, M, defining set Y as in (2).

Step 1: Let k = 1.

Step 2: If deg pₖ ≤ 1 then go to Step 5.

Step 3: If deg pₖ ≥ 2, solve optimization problem (3) for polynomial pₖ, with optimum μ* and minimizers x*, y*. If μ* ≥ 0, go to Step 5.

Step 4: If μ* < 0, then select one of the minimizers x* and let p_{N+1}(x) = ∇pₖ(x*)ᵀ(x − x*) + ε. Then let N = N + 1, and go to Step 3.

Step 5: Let k = k + 1. If k ≤ N then go to Step 2.

Step 6: Return M = N and qₖ = pₖ, k = 1, …, M.
The idea behind the algorithm is as follows. At Step 3, by solving the polynomial optimization problem of Theorem 1 we identify a point of minimal curvature along algebraic varieties defining the boundary of . If the minimal curvature is negative, then we separate the point from the set with a gradient hyperplane, and we iterate on the resulting semialgebraic set. At the end, we obtain a valid inner approximation.
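This loop can be sketched in pure Python for a planar example. The sketch is an illustration only: the polynomial optimization of problem (3) is replaced here by naive boundary sampling, whereas the paper relies on GloptiPoly and LMI relaxations, and the region x₁x₂ ≤ 1 is a hypothetical test case:

```python
import math

def min_curvature_cut(grad, hess, boundary_pts, eps=1e-3):
    """Return (mu, cut): mu is the sampled minimal tangent curvature and
    cut(x) <= 0 the separating gradient half-plane (None if mu >= 0)."""
    best = None
    for x in boundary_pts:
        gx, gy = grad(x)
        n = math.hypot(gx, gy)          # Assumption 1: nonzero gradient
        y = (-gy / n, gx / n)           # unit tangent direction
        H = hess(x)
        mu = H[0][0]*y[0]**2 + 2*H[0][1]*y[0]*y[1] + H[1][1]*y[1]**2
        if best is None or mu < best[0]:
            best = (mu, x, (gx, gy))
    mu, xs, g = best
    if mu >= 0:
        return mu, None
    # Step 4: separating half-plane grad'(x - x*) + eps <= 0
    return mu, lambda x: g[0]*(x[0]-xs[0]) + g[1]*(x[1]-xs[1]) + eps

# nonconvex region x1*x2 <= 1; sample the boundary branch x2 = 1/x1, x1 > 0
pts = [(t, 1.0/t) for t in [0.25*k for k in range(1, 41)]]
mu, cut = min_curvature_cut(lambda x: (x[1], x[0]),
                            lambda x: [[0.0, 1.0], [1.0, 0.0]],
                            pts)
```

After this first cut, the algorithm would proceed identically on the remaining boundary pieces; the added constraint is affine, so Step 2 skips it at later iterations.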
Note that Step 2 checks if the boundary is affine, in which case the minimum curvature is zero and there is no optimization problem to be solved.
The key parameter of the algorithm is the small positive scalar ε used at Step 4 for strictly separating a point of minimal curvature, so that the algorithm does not identify it again at the next iteration. Moreover, at Step 4, one must select a minimizer arbitrarily. We will discuss this issue later in this paper.
Finally, as pointed out to us by a referee, the ordering of the sequence of input polynomials pₖ has an impact on the sequence of output polynomials qₖ, and especially on the size of the convex inner approximation Y. However, it seems very difficult to design an optimal ordering policy a priori.
4 Matlab code and geometric examples
At each step of Algorithm 1 we have to solve a potentially nonconvex polynomial optimization problem. For that purpose, we use GloptiPoly 3, a public-domain Matlab package [7]. The methodology consists in building and solving a hierarchy of embedded linear matrix inequality (LMI) relaxations of the polynomial optimization problem, see the survey [12]. The LMI problems are solved numerically with the help of any semidefinite programming solver (by default GloptiPoly 3 uses SeDuMi). Under the assumption that our original semialgebraic set is compact, the sequence of bounds obtained by solving the LMI relaxations is ensured to converge monotonically to the global optimum. Under the additional assumption that the global optima live on a zero-dimensional variety (i.e. there is a finite number of them), GloptiPoly 3 eventually extracts some of them (not necessarily all, but at least one) using numerical linear algebra. The LMI problems in the hierarchy have a growing number of variables and constraints, and the main issue is that we cannot predict in advance how large the LMI problem has to be to guarantee global optimality. In practice, however, we observe that it is not necessary to go very deep into the hierarchy to obtain a numerical certificate of global optimality.
4.1 Hyperbola
Let us first deal with the elementary example of an unbounded nonconvex hyperbolic region X = {x ∈ R² : p(x) = x₁x₂ − 1 ≤ 0}, for which optimization problem (3) reads

min 2y₁y₂  s.t.  x₁x₂ = 1, x₂y₁ + x₁y₂ = 0, y₁² + y₂² = 1.

Necessary optimality conditions immediately yield the global minimizers x* = (1, 1), y* = (1/√2, −1/√2) and x* = (−1, −1), y* = (1/√2, −1/√2), with μ* = −1, and hence two additional (normalized) affine constraints x₁ + x₂ − 2 ≤ 0 and −x₁ − x₂ − 2 ≤ 0 defining the slab Y = {x ∈ R² : −2 ≤ x₁ + x₂ ≤ 2}, which is indeed a valid inner approximation of X.
4.2 Egg quartic
Now we show that Algorithm 1 can be used to detect convexity of a semialgebraic set. Consider the smooth quartic sublevel set X = {x ∈ R² : p₁(x) = x₁⁴ + x₂⁴ + x₁² + x₂ ≤ 0} represented on Figure 2. Assumption 1 is ensured since the gradient does not vanish at any real point of the variety p₁(x) = 0.
A Matlab implementation of the first steps of the algorithm can be easily written using Gloptipoly 3:
% problem data
mpol x 2
mpol y 2
p1 = x(1)^4+x(2)^4+x(1)^2+x(2);
g1 = diff(p1,x); % gradient
H1 = diff(g1,x); % Hessian
% LMI relaxation order
order = 3;
% build LMI relaxation
P = msdp(min(y'*H1*y), p1==0, ...
    g1*y==0, y'*y==1, order);
% solve LMI relaxation
[status,obj] = msol(P)
Notice that we solve the LMI relaxation of order 3 (i.e. moments of degree 6) of problem (3). In GloptiPoly, an LMI relaxation is solved with the command msol, which returns two output arguments, status and obj. Argument status can take the following values:

−1 if the LMI relaxation is infeasible or could not be solved for numerical reasons;

0 if the LMI relaxation could be solved but it is impossible to detect global optimality and to extract global optimizers, in which case obj is a lower (resp. upper) bound on the global minimum (resp. maximum) of the original optimization problem;

+1 if the LMI relaxation could be solved, global optimality is certified and global minimizers are extracted, in which case obj is the global optimum of the original optimization problem.
Running the above script, Gloptipoly returns obj = 2.0000 and status = 1, certifying that the minimal curvature is strictly positive, and hence that the polynomial sublevel set is convex.
Note that in this simple case, convexity of set X follows directly from positive semidefiniteness of the Hessian ∇²p₁(x) = diag(12x₁² + 2, 12x₂²) for all real x, yet Algorithm 1 can systematically detect convexity in more complicated cases.
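This direct Hessian check can even be scripted without GloptiPoly; a pure-Python sketch for the polynomial p1 of the script above:

```python
# The Hessian of p1(x) = x1^4 + x2^4 + x1^2 + x2 is diag(12*x1^2 + 2, 12*x2^2):
# diagonal, so its eigenvalues are the diagonal entries, nonnegative for all x.
def hessian_eigs(x1, x2):
    return (12.0 * x1 * x1 + 2.0, 12.0 * x2 * x2)

# sample a grid and confirm positive semidefiniteness everywhere on it
psd_on_grid = all(min(hessian_eigs(0.5 * i, 0.5 * j)) >= 0.0
                  for i in range(-10, 11) for j in range(-10, 11))
```

Of course the grid check is only illustrative; the diagonal closed form shows positive semidefiniteness for every real x.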
4.3 Waterdrop quartic
Consider the quartic sublevel set X = {x ∈ R² : p₁(x) ≤ 0} represented on Figure 3, which has a singular point at the origin, hence violating Assumption 1.
Applying Algorithm 1, the LMI relaxation of order 4 (moments of degree 8) yields a negative globally minimal curvature achieved at 2 points. With the two additional affine constraints generated at these points, the resulting set Y has a positive globally minimal curvature certified at the LMI relaxation of order 4, and therefore it is a valid convex inner approximation of X, see Figure 3.
This example illustrates that Algorithm 1 can work even when Assumption 1 is violated. Here the singularity is removed by the additional affine constraints. This example also shows that symmetry of the problem can be exploited, since two global minimizers are found (distinct points with the same minimal curvature) to remove two nonconvex parts of the boundary simultaneously.
4.4 Singular quartic
Consider now another quartic sublevel set X = {x ∈ R² : p₁(x) ≤ 0}, also with a singular point at the origin, hence violating Assumption 1.
Running Algorithm 1, we obtain a sequence of negative lower bounds on the minimum curvature for increasing LMI relaxation orders. GloptiPoly is not able to certify global optimality, so we can only speculate that the global minimum is zero and hence that set X is convex, see Figure 4. We may say that set X is numerically convex.
Indeed, if we strengthen the constraint p₁(x) = 0 into p₁(x) + ε = 0 for a small positive ε, then GloptiPoly 3 certifies global optimality and convexity with obj = 4.0627e−7 at the 4th LMI relaxation. On the other hand, if we relax the constraint into p₁(x) + ε = 0 with a negative ε, then GloptiPoly 3 certifies global optimality and nonconvexity with obj = −0.22313 at the 4th LMI relaxation. We can conclude that the optimum of problem (3) is sensitive, or ill-conditioned, with respect to the problem data, namely the coefficients of p₁. The reason behind this ill-conditioning is the singularity of X at the origin, see Figure 5 which represents the effect of perturbing the constraint around the singularity.
5 Control applications
In this section we focus on control applications of Algorithm 1, which is used to generate convex inner approximations of stability regions in the parameter space.
5.1 Thirdorder discretetime stability region
Algorithm 1 can lend insight into the (nonconvex) geometry of the stability region. Consider the simplest nontrivial case of a third-order discrete-time polynomial q(z) = z³ + x₃z² + x₂z + x₁, which is stable (roots within the open unit disk) if and only if the parameter vector x ∈ R³ lies within the interior of a compact region X. Stability region X is nonconvex, delimited by the two planes 1 + x₁ + x₂ + x₃ = 0 and 1 − x₁ + x₂ − x₃ = 0 and the hyperbolic paraboloid x₂ = 1 + x₁x₃ − x₁², see e.g. [1, Example 11.4].
Optimization problem (3) corresponding to the convexity check of the hyperbolic paraboloid p(x) = x₁² − x₁x₃ + x₂ − 1 ≤ 0 reads as follows:

(5) min yᵀ∇²p(x)y  s.t.  1 + x₁ + x₂ + x₃ ≥ 0, 1 − x₁ + x₂ − x₃ ≥ 0, p(x) = 0, ∇p(x)ᵀy = 0, yᵀy = 1.

The objective function and the last constraint depend only on y, and necessary optimality conditions obtained by differentiating the Lagrangian with respect to y yield the symmetric pencil equation

(∇²p(x) − λI₃) y = 0.

From the determinant of the above 3-by-3 matrix, equal to −λ(λ² − 2λ − 1), we conclude that multiplier λ can be equal to 0, 1 − √2, or 1 + √2. The choice λ = 0 implies y = ±[0, 1, 0]ᵀ, which is inconsistent with the last but one constraint in (5). The choice λ = 1 + √2 yields y₂ = 0, y₃ = (1 − √2)y₁, with x₃ = (1 + √2)x₁ and the objective function equal to 1 + √2. The choice λ = 1 − √2 yields y₂ = 0, y₃ = (1 + √2)y₁, with x₃ = (1 − √2)x₁, and the objective function equal to 1 − √2, a negative minimum curvature. Therefore region X is indeed nonconvex.
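The determinant claim can be checked numerically; the sketch below uses the Hessian of the saddle polynomial p(x) = x₁² − x₁x₃ + x₂ − 1, the assumed form of the hyperbolic-paraboloid constraint in this presentation:

```python
import math

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

H = [[2.0, 0.0, -1.0],
     [0.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0]]   # Hessian of x1^2 - x1*x3 + x2 - 1

# det(H - lam*I) = -lam*(lam^2 - 2*lam - 1) vanishes at 0 and 1 +/- sqrt(2)
candidates = (0.0, 1.0 - math.sqrt(2.0), 1.0 + math.sqrt(2.0))
dets = [det3([[H[i][j] - (lam if i == j else 0.0) for j in range(3)]
              for i in range(3)]) for lam in candidates]
```

All three determinants vanish up to rounding, confirming the three candidate multipliers.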
From the remaining constraints in (5), we conclude that the minimal curvature points can be found along the portion of the parabola x₃ = (1 − √2)x₁, x₂ = 1 − √2x₁² included in the half-spaces 1 + x₁ + x₂ + x₃ ≥ 0 and 1 − x₁ + x₂ − x₃ ≥ 0. Any plane tangent to the hyperbolic paraboloid at a point along this parabola can be used to generate a valid inner approximation of the stability region. For example, with the choice x* = [0, 1, 0]ᵀ, we generate the gradient half-space x₂ ≤ 1.
More generally, for discrete-time polynomials of degree n, stability region X is the image of the unit box (of so-called reflection coefficients) through a multiaffine mapping, see e.g. [16] and references therein. The boundary of X consists of ruled surfaces, and the convex hull of X is generated by the images of the vertices of the box through the multiaffine mapping. It would be interesting to investigate whether this particular geometry can be exploited to systematically generate a convex inner approximation of maximum volume of the stability region X.
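For illustration, such a multiaffine mapping can be sketched with a classical Levinson-type recursion (a pure-Python sketch; the exact parametrization used in [16] may differ, so the coefficient convention below is an assumption):

```python
def poly_from_reflection(ks):
    """Map reflection coefficients k_1..k_n (each in (-1,1)) to the
    coefficients (ascending powers, monic) of a Schur stable polynomial,
    via the recursion a_m(z) = z*a_{m-1}(z) + k_m * z^m * a_{m-1}(1/z)."""
    c = [1.0]
    for k in ks:
        shifted = [0.0] + c                  # z * a_{m-1}(z)
        rev = list(reversed(c)) + [0.0]      # z^m * a_{m-1}(1/z)
        c = [s + k * r for s, r in zip(shifted, rev)]
    return c

# degree 3: each output coefficient is multiaffine in (k1, k2, k3)
coeffs = poly_from_reflection([0.5, -0.3, 0.2])
```

Each coefficient is an affine function of every individual kᵢ with the others fixed, which is the multiaffine structure mentioned above.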
5.2 Fixedorder controller design
Consider an open-loop discrete-time system depending on a scalar plant parameter, in negative closed-loop configuration with a controller with two parameters x = (x₁, x₂). The characteristic polynomial is Schur stable (all roots in the open unit disk) if and only if p₁(x) ≤ 0, …, p₅(x) ≤ 0 and p₆(x) ≤ 0, where p₁, …, p₅ are affine and p₆ is cubic.
The affine inequalities p₁(x) ≤ 0, …, p₅(x) ≤ 0 define a polytope in the controller parameter plane (x₁, x₂), and the inequality p₆(x) ≤ 0 defines a cubic region.
For a first value of the plant parameter, we use the following GloptiPoly 3 implementation of Steps 1-3 of Algorithm 1:
mpol x 2
mpol y 2
p1 = x(1)+x(2);
p2 = 6*x(1)+4*x(2);
p3 = 10*x(2)-4;
p4 = 8+4*x(2)+6*x(1);
p5 = 4+x(1)+x(2);
p6 = 6*x(1)^2*x(2)+3*x(1)^2-10*x(1)*x(2)-2*x(1)-3*x(2)^3+6*x(2)^2+x(2);
g6 = diff(p6,x); % gradient
H6 = diff(g6,x); % Hessian
% LMI relaxation order
order = input('LMI relaxation order = ');
% build LMI relaxation
P = msdp(min(y'*H6*y), p6==0, p1<=0, p2<=0, p3<=0, p4<=0, p5<=0, ...
    g6*y==0, y'*y==1, order);
% solve LMI relaxation
[status,obj] = msol(P)
we obtain a negative lower bound obj = −3.5583 at the 2nd LMI relaxation, which is inconclusive. At the 3rd LMI relaxation, we obtain a positive lower bound obj = 0.8973 which certifies convexity of the stability region, see Figure 6.
Since the stability region is convex, we can optimize over it with standard techniques of convex optimization. More specifically, a recent result in [11] indicates that any limit point of any sequence of admissible stationary points of the logarithmic barrier function is a Karush-Kuhn-Tucker point satisfying the first-order optimality conditions. In particular, the gradient of the barrier function vanishes at the analytic center of the set. Using Maple (or a numerical local optimization method) we can readily obtain the analytic center (whose coordinates are five-digit approximations of algebraic numbers of degree 17), corresponding to a controller well inside the stability region. Such a controller can be considered as non-fragile, in the sense that some uncertainty on its coefficients will not threaten closed-loop stability.
Now, for a second value of the plant parameter, we carry on our study of convexity of the stability region with the help of a similar GloptiPoly script. At the 2nd LMI relaxation we obtain a negative lower bound obj = −385.14 which is inconclusive. At the 3rd LMI relaxation, we obtain a negative lower bound obj = −380.88 which is also inconclusive. Eventually, at the 4th LMI relaxation, we obtain the negative bound obj = −380.87 which is certified to be the global minimum with status = 1. The point at which the minimum curvature is achieved is a vertex of the stability region, and the tangent at this point to the nonconvex part of the boundary is used to generate a valid inner approximation Y, see Figure 7. Any point chosen in this triangular region corresponds to a stabilizing controller.
We see that here the choice of the point of minimum curvature is not optimal in terms of maximizing the area of Y. A point chosen elsewhere along the negatively curved part of the boundary would be likely to generate a larger convex inner approximation.
5.3 Optimal control with semialgebraic constraints
In Model Predictive Control (MPC), an optimal control problem is solved repeatedly. The resolution is usually based on direct methods, which derive a nonlinear program from the optimal control problem by discretizing the dynamics and the path constraints. Since embedded software has strict specifications on algorithm complexity and real-time computation, convexity of the program is a key feature [17]. In this context, our convex inner approximations of the admissible space become valuable to speed up the computation, even at the price of some conservatism.
In open-loop control design, convexity of the problem is a matter of concern especially when the optimal control problem is part of an MPC procedure. In this case, the optimal control problem is mostly solved using direct methods that transform it into a parametric optimization problem. Convexity limits the complexity of the resolution and thus reduces the time needed to compute an optimal solution. Unfortunately, in dynamic inversion techniques based on differential flatness, the generally convex constraints on the states and inputs are replaced by nonconvex admissible sets in the flat output space, see [17] and references therein for details. Thus, in such a method, it is necessary to design a convex inner approximation of the admissible subset to obtain a tractable algorithm [18].
Consider the following optimal control problem
The objective of this problem is to steer the linear system from an initial state to a final state in a fixed time while remaining inside the admissible state subset X defined, e.g., by the waterdrop quartic of Section 4.3:

(6) X = {x ∈ R² : p₁(x) ≤ 0}.
We describe hereafter a classical methodology for solving the previous optimal control problem using flatness-based dynamic inversion, see [13, 14, 19] for other examples. As the dynamics are linear and fully actuated, dynamic inversion can be used to develop an efficient algorithm for the considered problem [19]. Thus, the system trajectory can be parametrized by a user-specified, sufficiently smooth function of time, from which the state and input trajectories are recovered by differentiation. This function is classically described by a chosen basis and an associated vector of weighting coefficients.
In order to derive a finite-dimensional program, the admissible set constraint is discretized and enforced at a finite number N of time instants spanning the fixed time horizon. Since X is nonconvex, we obtain a finite-dimensional nonlinear nonconvex program.
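The constraint-discretization step can be sketched as follows (a pure-Python illustration; the quartic p and the two trajectories are hypothetical stand-ins, not the paper's data):

```python
def violated_instants(traj, p, N, t0=0.0, tf=1.0):
    """Indices i where the discretized state constraint p(x(t_i)) <= 0 fails
    on the grid t_i = t0 + (tf - t0)*i/N, i = 0..N."""
    return [i for i in range(N + 1)
            if p(traj(t0 + (tf - t0) * i / N)) > 0.0]

p = lambda x: x[0]**2 - x[1]**3 + x[1]**4     # hypothetical quartic, X: p <= 0
inside  = lambda t: (0.0, 0.2 + 0.6 * t)      # trajectory staying in X
leaving = lambda t: (t, 0.5)                  # trajectory exiting X

ok  = violated_instants(inside,  p, 20)
bad = violated_instants(leaving, p, 20)
```

In the actual program the same pointwise constraints are imposed on the basis-function expansion of the trajectory, so each discretized constraint becomes a constraint on the weighting coefficients.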
The inner approximation Y calculated previously in Section 4.3 is obtained by appending to the defining quartic the two affine constraints built from its gradients at the two minimal-curvature points. Using the inner approximation Y as the admissible subset leads to a convex program.
In the following, we fix the initial and final states and the time horizon. The time function is a 5-segment piecewise polynomial of the 4th order (degree 3) defined on a B-spline basis. We run both programs for different values of N. In Table 1 we compare the computation times and optimal costs; see Figure 8 for the state trajectories.
Table 1: computation times and optimal costs of the convex and nonconvex programs.

N                        10     20     50     100    200    500    1000
CPU time [s], convex     0.029  0.045  0.056  0.058  0.102  0.225  0.498
CPU time [s], nonconvex  0.109  0.195  0.199  0.409  0.513  0.836  1.46
Optimal cost, convex     1.65   1.67   1.68   1.69   1.68   1.68   1.68
Optimal cost, nonconvex  1.52   1.52   1.52   1.52   1.52   1.52   1.52
For this example, we observe the positive effect that convexity has on the reduction of the computational burden, balanced by the relatively small loss of performance.
6 Conclusion
We have presented a generalpurpose computational algorithm to generate a convex inner approximation of a given basic semialgebraic set. The inner approximation is not guaranteed to be of maximum volume, but the algorithm has the favorable features of leaving invariant a convex set, and preserving convex boundaries while removing nonconvex regions by enforcing linear constraints at points of minimum curvature. Even though our initial motivation was to construct convex inner approximations of stability regions for fixedorder controller design, our algorithm can be used on its own for checking convexity of semialgebraic sets.
Each step of the algorithm consists in solving a potentially nonconvex polynomial optimization problem with the help of a hierarchy of convex LMI relaxations. For this we use GloptiPoly 3, unfortunately with no a priori guarantee on the computational burden, even though in practice it is observed that global optimality is certified at a moderate cost as soon as the dimension of the ambient space is small. Numerical experiments indicate that the approach may be practical for ambient dimensions up to 4 or 5. For larger problems, one could rely on more sophisticated nonlinear or global optimization codes [15], even though this possibility has not been investigated in this paper. Indeed, our main driving force is to contribute a readily available Matlab implementation.
Our algorithm returns a sequence of polynomials such that the intersection of their sublevel sets is geometrically convex. However, the individual polynomials (of degree two or more) are not necessarily convex functions. One may therefore question the relevance of applying a relatively complex algorithm to obtain a convex inner approximation in the form of a list of defining polynomials which are not necessarily individually convex. A recent result of [11] indicates however that any local optimization method based on standard first-order optimality conditions for logarithmic barrier functions will generate a sequence of iterates converging to the global minimum of a convex function over such a convex set. In other words, geometric convexity seems to be more important than convexity of the individual defining polynomials.
While convexity of the inner approximation is guaranteed in the presented work, convexity of the defining polynomials would additionally allow the use of constant multipliers to certify optimality in a nonlinear optimization framework. Without such a guarantee on the defining polynomials, the geometric property of convexity of the set is more delicate to exploit efficiently by optimization algorithms.
Finally, let us emphasize that it is conjectured in [3] that all convex semialgebraic sets are semidefinite representable, see also [10]. It may then become possible to fully exploit the geometric convexity of our inner approximation through an explicit representation as a projection of an affine section of the semidefinite cone. For example, in our target application domain, this would allow the use of semidefinite programming to find a suboptimal stabilizing fixed-order controller.
Acknowledgements
The first author is grateful to J. W. Helton for many discussions and ideas leading to Theorem 1.
References
[1] J. Ackermann et al. Robust control: the parameter space approach. Springer, 2nd edition, 2002.
[2] M. P. do Carmo. Differential geometry of curves and surfaces. Prentice Hall, 1976.
[3] J. W. Helton, J. Nie. Sufficient and necessary conditions for semidefinite representability of convex hulls and sets. SIAM J. Optimization, 20(2):759-791, 2009.
[4] D. Henrion, M. Šebek, V. Kučera. Positive polynomials and robust stabilization with fixed-order controllers. IEEE Trans. Autom. Control, 48(7):1178-1186, 2003.
[5] D. Henrion, D. Peaucelle, D. Arzelier, M. Šebek. Ellipsoidal approximation of the stability domain of a polynomial. IEEE Trans. Autom. Control, 48(12):2255-2259, 2003.
[6] D. Henrion, J. B. Lasserre. Solving nonconvex optimization problems - how GloptiPoly is applied to problems in robust and nonlinear control. IEEE Control Systems Magazine, 24(3):72-83, 2004.
[7] D. Henrion, J. B. Lasserre, J. Löfberg. GloptiPoly 3: moments, optimization and semidefinite programming. Optimization Methods and Software, 24(4-5):761-779, 2009.
[8] D. Henrion, J. B. Lasserre, C. Savorgnan. Approximate volume and integration for basic semialgebraic sets. SIAM Review, 51(4):722-743, 2009.
[9] A. Karimi, H. Khatibi, R. Longchamp. Robust control of polytopic systems by convex optimization. Automatica, 43(6):1395-1402, 2007.
[10] J. B. Lasserre. Convex sets with semidefinite representation. Math. Programming, 120:457-477, 2009.
[11] J. B. Lasserre. On convex optimization without convex representation. Optimization Letters, 5:549-556, 2011.
[12] M. Laurent. Sums of squares, moment matrices and optimization over polynomials. In: M. Putinar, S. Sullivant (Eds.), Emerging applications of algebraic geometry, IMA Vol. Math. Appl., 149:157-270, Springer, 2009.
[13] M. B. Milam, K. Mushambi, R. M. Murray. A new computational approach to real-time trajectory generation for constrained mechanical systems. Proc. IEEE Conference on Decision and Control, 2000.
[14] N. Petit, M. B. Milam, R. M. Murray. Inversion based constrained trajectory optimization. Proc. IFAC Symposium on Nonlinear Control Systems, 2001.
[15] A. Neumaier. Complete search in continuous global optimization and constraint satisfaction. In: A. Iserles (Ed.), Acta Numerica, Cambridge Univ. Press, 271-369, 2004.
[16] U. Nurges. Robust pole assignment via reflection coefficients of polynomials. Automatica, 42(7):1223-1230, 2006.
[17] I. M. Ross, F. Fahroo. Issues in the real-time computation of optimal control. Mathematical and Computer Modelling, 43:1172-1188, 2005.
[18] C. Louembet, F. Cazaurang, A. Zolghadri, C. Pittet, C. Charbonnel. Path planning for satellite slew manoeuvres: a combined flatness and collocation-based approach. IET Control Theory & Applications, 3(4):481-491, 2009.
[19] J. Lévine, D. V. Nguyen. Flat output characterisation for linear systems using polynomial matrices. Systems and Control Letters, 48:69-75, 2003.