Irreducible infeasible subsystems of semidefinite systems
Abstract.
Given real symmetric matrices $A_0, A_1, \dots, A_m$, let $A(x)$ denote the linear matrix pencil $A_0 + \sum_{i=1}^{m} x_i A_i$. Farkas' lemma for semidefinite programming then characterizes feasibility of the system $A(x) \succeq 0$ in terms of an alternative spectrahedron. In the well-studied special case of linear programming, a theorem by Gleeson and Ryan states that the index sets of irreducible infeasible subsystems correspond exactly to the vertices of the corresponding alternative polyhedron.
We show that one direction of this theorem can be generalized to the nonlinear situation of extreme points of general spectrahedra. The reverse direction, however, is not true in general, which we show by means of counterexamples. On the positive side, an irreducible infeasible block subsystem is obtained whenever the extreme point has minimal block support. Motivated by results from sparse recovery, we provide a criterion for the uniqueness of solutions of semidefinite block systems.
1. Introduction
The structure of infeasible linear inequality systems is quite well understood. In particular, Farkas' Lemma, also called the Theorem of the Alternative, gives a characterization of infeasibility (see, e.g., [19]). Moreover, the basic building blocks are so-called Irreducible Infeasible Systems (IISs, also called Irreducible Inconsistent Systems), i.e., infeasible subsystems such that every proper subsystem is feasible. An extension of the Theorem of the Alternative due to Gleeson and Ryan [12] states that the IISs of an infeasible linear inequality system correspond exactly to the vertices of a so-called alternative polyhedron (see Theorem 3.4). These IISs provide a means to analyze infeasibilities of a system, see, e.g., [4, 6, 22] and the book [5]. Today, standard optimization software can compute (hopefully) small IISs. Further investigations include the mixed-integer case [13] and the application within Benders' decomposition [7].
In this article, we consider infeasible systems in spectrahedral form
\[
  A(x) \;=\; A_0 + \sum_{i=1}^{m} x_i A_i \;\succeq\; 0,
\]
where $A_0, \dots, A_m$ are symmetric matrices and "$\succeq 0$" denotes that a matrix is positive semidefinite (psd). There are well-known generalizations of the Theorem of the Alternative to this setting (see, e.g., [21]), although one has to be more careful, since feasibility might only be attained in the limit; see Proposition 2.1 for a more precise statement. As in the linear case, solutions of certain alternative systems give a certificate of the (weak) infeasibility of $A(x) \succeq 0$.
In this context, the following natural questions arise: How can infeasible semidefinite systems be analyzed? What can be said about the structure of irreducible infeasible semidefinite systems? Moreover, is there a generalization of the theorem of Gleeson and Ryan to this setting?
These questions are motivated by solving mixed-integer semidefinite programs using branch-and-bound, in which an SDP is solved in every node (see, e.g., [11]). It then often happens that these SDPs turn out to be infeasible. One would like to learn from this infeasibility in order to strengthen the relaxations of other nodes. This is done in mixed-integer and SAT solvers, see, e.g., [1, 25].
To come up with an appropriate definition of an IIS for a semidefinite system, it appears very natural to consider block systems. An IIS is then given by an inclusion-minimal set of infeasible block subsystems (see Definition 2.6). We will show in Section 3 that one direction of the above-mentioned connection can be generalized: there always exists an extreme point of the alternative system that corresponds to a given IIS, see Theorem 3.5. The reverse direction, however, is not true in general, which we show and discuss via two counterexamples, see Examples 3.6 and 3.7. On the positive side, whenever an extreme point has (inclusion-wise) minimal block support, the corresponding subsystem indeed forms an IIS. This leads to the general task of computing such points.
In the particular case in which the alternative semidefinite system has a unique solution, this algorithmic challenge simplifies to solving one semidefinite program. Motivated by results from sparse recovery, we provide a criterion for the uniqueness of solutions of semidefinite block systems. In Section 4, we generalize the results in [9, 14, 23, 24] to give unique recovery characterizations for a block semidefinite system in Theorem 4.1.
Notation. In the paper, we use the following notation. Let $\mathcal{S}^n$ be the set of all (real) symmetric $n \times n$ matrices. For a matrix $A \in \mathcal{S}^n$ and $I \subseteq [n] := \{1, \dots, n\}$, let $A_I$ be the submatrix containing the rows and columns of $A$ indexed by $I$. For $A, B \in \mathcal{S}^n$, we denote the inner product by
\[
  \langle A, B \rangle \;=\; \operatorname{trace}(AB),
\]
where $\operatorname{trace}(\cdot)$ denotes the trace of a matrix. Moreover, $\|A\|$ denotes the operator norm $\max\{|\lambda_1|, \dots, |\lambda_n|\}$, where $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $A$.
2. Infeasible systems and block structure
Let $A_0, A_1, \dots, A_m \in \mathcal{S}^n$. For $x \in \mathbb{R}^m$, we consider the linear (matrix) pencil
\[
  A(x) \;=\; A_0 + \sum_{i=1}^{m} x_i A_i
\]
and the linear matrix inequality (LMI) $A(x) \succeq 0$. With respect to infeasibility, we will use the following result, where $I_n$ denotes the identity matrix.
Proposition 2.1.
Either $A(x) + \varepsilon I_n \succeq 0$ is feasible for every $\varepsilon > 0$, or there exists $U \in \mathcal{S}^n$ with $U \succeq 0$, $\langle A_i, U \rangle = 0$ for $i \in [m]$, and $\langle A_0, U \rangle \le -1$.
This statement is equivalent to Sturm’s Farkas’ Lemma for semidefinite programming (see [15, Lemmas 3.1.1 and 3.1.2]) and a variation of [21, Thm. 2.21], and its proof is provided for completeness.
Proof.
Consider the following dual pair of semidefinite programs (SDPs):
(2.1)  $\inf\,\{\varepsilon : A(x) + \varepsilon I_n \succeq 0,\ x \in \mathbb{R}^m,\ \varepsilon \ge 0\}$
(2.2)  $\sup\,\{-\langle A_0, U \rangle : \langle A_i, U \rangle = 0 \text{ for } i \in [m],\ \langle I_n, U \rangle \le 1,\ U \succeq 0\}$
Setting $x = 0$ and $\varepsilon > \max\{0, -\lambda_{\min}(A_0)\}$ shows that (2.1) has a Slater point. Moreover, $U = 0$ is feasible for (2.2). The strong duality theorem (see, e.g., [21, Thm. 2.14]) implies that (2.2) attains its optimal value and the two optimal values coincide.
Suppose that no $U \succeq 0$ with $\langle A_i, U \rangle = 0$ for $i \in [m]$ and $\langle A_0, U \rangle < 0$ exists. By scaling, this implies that no such $U$ exists with $\langle A_0, U \rangle \le -1$. Since $U = 0$ is feasible for (2.2), the optimal value of (2.2) is 0. By the strong duality theorem, the optimal value of (2.1) is also 0. Either (2.1) attains this value, so that $A(x) \succeq 0$ itself is feasible, or there exists a sequence $(x^{(k)}, \varepsilon_k)$ with $A(x^{(k)}) + \varepsilon_k I_n \succeq 0$ and $\varepsilon_k \searrow 0$. In both cases, $A(x) + \varepsilon I_n \succeq 0$ is feasible for every $\varepsilon > 0$, which implies the claim. ∎
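To make the alternative in Proposition 2.1 concrete, consider a toy instance of ours (not from the paper): the diagonal, hence linear, system $x - 1 \ge 0$, $-x \ge 0$, encoded by the pencil $A(x) = A_0 + x A_1$ with $A_0 = \operatorname{diag}(-1, 0)$ and $A_1 = \operatorname{diag}(1, -1)$. The identity matrix serves as a certificate $U$ in the sense of the proposition:

```python
import numpy as np

A0 = np.diag([-1.0, 0.0])   # constant term of the pencil
A1 = np.diag([1.0, -1.0])   # coefficient of x
U = np.eye(2)               # candidate infeasibility certificate

inner = lambda X, Y: np.trace(X @ Y)

# U is psd, orthogonal to A1, and <A0, U> <= -1:
assert np.all(np.linalg.eigvalsh(U) >= 0)
assert abs(inner(A1, U)) < 1e-9
assert inner(A0, U) <= -1

# Consequence: for every x, <A(x), U> = <A0, U> = -1 < 0,
# while A(x) >= 0 would force <A(x), U> >= 0 -- so the LMI is infeasible.
for x in [-2.0, 0.0, 0.5, 3.0]:
    assert inner(A0 + x * A1, U) == inner(A0, U)
```

Since $\langle A(x), U \rangle = \langle A_0, U \rangle = -1 < 0$ for every $x$, while $A(x) \succeq 0$ would force $\langle A(x), U \rangle \ge 0$, the system is infeasible.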
Remark 2.2.
In slight deviation from parts of the literature, we call $A(x) \succeq 0$ weakly feasible if for every $\varepsilon > 0$ the system $A(x) + \varepsilon I_n \succeq 0$ is feasible; compare this, for instance, to the definition in [21], which requires $A(x) + E \succeq 0$ to be feasible for some symmetric $E$ with $\|E\| \le \varepsilon$. Moreover, $A(x) \succeq 0$ is weakly infeasible if it is not weakly feasible. Note the slight inaccuracy of this naming convention, which should, however, not lead to confusion in the present paper.
Corollary 2.3.
Assume that there exists with , . Then either is feasible or there exists with , , and .
Proof.
Our subsequent definition of an alternative spectrahedron will allow us to handle structured semidefinite systems. To motivate this viewpoint, consider a simple example where the goal is to check whether two given half-planes $H_1, H_2 \subseteq \mathbb{R}^2$ and a given disc $D$ in the Euclidean plane with center $c$ have a common point. The smallest LMI representation (w.r.t. matrix size) of $D$ is a $2 \times 2$ linear matrix inequality, and thus the existence of a point in $H_1 \cap H_2 \cap D$ is equivalent to the feasibility of the LMI
(2.3) 
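For concreteness, a disc $D$ with center $c = (c_1, c_2)$ and radius $r$ (generic parameters; the specific values of the example are not reproduced here) admits the $2 \times 2$ LMI representation

```latex
\[
  x \in D
  \;\Longleftrightarrow\;
  \begin{pmatrix}
    r + (x_1 - c_1) & x_2 - c_2\\
    x_2 - c_2       & r - (x_1 - c_1)
  \end{pmatrix}
  \;\succeq\; 0,
\]
```

since the determinant of this matrix is $r^2 - (x_1 - c_1)^2 - (x_2 - c_2)^2$ and its trace is $2r$. Stacking this $2 \times 2$ block together with the two half-plane inequalities as $1 \times 1$ blocks yields a block-diagonal pencil of the form (2.3).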
In order to capture such natural structure within semidefinite systems, one arrives at block systems. In particular, already in this simple example, this then allows us to consider the subsystem of the disc as an entity. Formally, this yields the following.
Definition 2.4.
Let $B_1, \dots, B_r$ be a partition of the set $[n]$. A linear pencil $A(x)$ is in block-diagonal form with blocks $B_1, \dots, B_r$ if each $A_i$ is 0 outside of the blocks, i.e., $(A_i)_{k,l} = 0$ for all $i \in \{0, 1, \dots, m\}$ and all $k, l$ that do not lie in a common block $B_j$. Note that the blocks might be decomposable, i.e., at least one block consists of blocks of smaller size while still retaining the block structure of $A(x)$.
Assumption 2.5.
To avoid trivial infeasibilities, we will assume that for each block $B_j$, $j \in [r]$, the block subsystem $A(x)_{B_j} \succeq 0$ is weakly feasible.
Definition 2.6.
Let $A(x)$ be in block-diagonal form with blocks $B_1, \dots, B_r$.

(i) For $J \subseteq [r]$, the block subsystem of $A(x) \succeq 0$ with respect to $J$ is given by $A(x)_{I_J} \succeq 0$ for the index set $I_J = \bigcup_{j \in J} B_j$. By convention, $I_\emptyset = \emptyset$, and $A(x)_\emptyset \succeq 0$ is a feasible system.

(ii) A block subsystem with respect to some $J \subseteq [r]$ is an irreducible infeasible subsystem (IIS) if it is weakly infeasible, but the block subsystem with respect to $J'$ is weakly feasible for all $J' \subsetneq J$.

(iii) Given a matrix $U \in \mathcal{S}^n$, its block support is defined as
\[
  \operatorname{supp}_B(U) \;=\; \{\, j \in [r] : U_{B_j} \neq 0 \,\}.
\]
Remark 2.7.
Linear inequality systems arise if all matrices $A_0, \dots, A_m$ are diagonal. In this case, each inequality is of the form
\[
  (A_0)_{kk} + \sum_{i=1}^{m} (A_i)_{kk}\, x_i \;\ge\; 0, \qquad k \in [n].
\]
If this system is written as $Cx \ge d$, then IISs correspond to infeasible subsystems of $Cx \ge d$ such that each proper subsystem is feasible.
The linear case arises, in particular, if the block system satisfies $|B_j| = 1$ for all $j \in [r]$ (and hence $r = n$); then the blocks are not decomposable. However, it is also possible that the blocks are decomposable. In this case, the system consists of linear inequality systems $C^{(1)} x \ge d^{(1)}, \dots, C^{(r)} x \ge d^{(r)}$, each defining a polyhedron. If the intersection of these polyhedra is empty, the original LMI is infeasible; see Example 3.6 below.
Remark 2.8.
An alternative way to define IISs would be to consider subsets $I \subseteq [n]$ such that $A(x)_I \succeq 0$ is (weakly) infeasible, but $A(x)_{I'} \succeq 0$ is (weakly) feasible for every proper subset $I'$ of $I$. However, this definition would not retain the structure within semidefinite systems such as (2.3).
3. Alternative systems
In view of Proposition 2.1, we define the following, where abbreviating the LMI $A(x) \succeq 0$ by $\mathcal{A}$ will allow for a convenient notation. For general background on spectrahedra, we refer to [3, 20].
Definition 3.1.
The alternative spectrahedron for $\mathcal{A}$ is
\[
  \{\, U \in \mathcal{S}^n : U \succeq 0,\ \langle A_i, U \rangle = 0 \text{ for } i \in [m],\ \langle A_0, U \rangle \le -1 \,\}.
\]
Assumption 3.2.
By standard polarity theory, a block structure of the system can also be assumed for the matrices in the alternative spectrahedron. We therefore only consider matrices $U$ in block-diagonal form, where the blocks are indexed by $[r]$.
The definition of the alternative spectrahedron immediately implies:
Lemma 3.3.
Let $A(x) \succeq 0$ be a weakly infeasible semidefinite system with blocks $B_1, \dots, B_r$.

(i) For any $U$ in the alternative spectrahedron, there exists an infeasible subsystem of $A(x) \succeq 0$ whose block support is contained in the block support of $U$.

(ii) For any $U$ in the alternative spectrahedron with inclusion-minimal block support, the block support of $U$ defines an IIS of $A(x) \succeq 0$.
As mentioned in the introduction, in the linear case there exists a characterization of IISs:
Theorem 3.4 (Gleeson and Ryan [12]).
Consider an infeasible system $Cx \ge d$ with $C \in \mathbb{R}^{k \times m}$ and $d \in \mathbb{R}^k$. The index sets of the IISs of $Cx \ge d$ are exactly the support sets of the vertices of the alternative polyhedron
\[
  \{\, y \in \mathbb{R}^k : y \ge 0,\ C^{T} y = 0,\ d^{T} y = 1 \,\}.
\]
A proof can be found in [12] and [17]. Note that in the non-decomposable linear case, the alternative polyhedron is equivalent to the alternative spectrahedron.
One goal of this paper is to investigate whether/how far Theorem 3.4 generalizes to the spectrahedral situation. We can show that one of the directions can be generalized.
Theorem 3.5.
Let $A(x) \succeq 0$ be a weakly infeasible LMI with blocks $B_1, \dots, B_r$. For each index set $J \subseteq [r]$ of an IIS, there exists an extremal point of the alternative spectrahedron with block support $J$.
The following proof proceeds by revealing the convex-geometric structure of the alternative spectrahedron.
Proof.
Without loss of generality, we can assume that $J = [t]$ for some $t \in [r]$. By Proposition 2.1, the alternative spectrahedron contains a point $U$ supported exactly on the blocks indexed by $J$. In order to show that the alternative spectrahedron contains an extremal point with block support $J$, we first observe that it has at least one extremal point. This follows from the fact that the positive semidefinite cone is pointed, and thus the intersection of this cone with an affine subspace cannot have a nontrivial lineality space either.
By [18, Theorem 18.5], the alternative spectrahedron can be written in the form
\[
  \operatorname{conv}(E) + \operatorname{cone}(D),
\]
where $E$ is the set of its extremal points and $D$ is the set of extremal directions of the alternative spectrahedron. Hence, by a general version of Carathéodory's Theorem (see [18, Theorem 17.1]), there exist finitely many extremal points $U_1, \dots, U_k$ and extremal directions $R_1, \dots, R_l$ of the alternative spectrahedron such that
\[
  U \;=\; \sum_{p=1}^{k} \lambda_p U_p + \sum_{q=1}^{l} \mu_q R_q
\]
with $\lambda_p > 0$, $\sum_{p=1}^{k} \lambda_p = 1$, and $\mu_q \ge 0$. Since all $U_p$ and $R_q$ are positive semidefinite, the block support of each $U_p$ and $R_q$ must be contained in the block support of $U$. Due to the minimality of $J$, all $U_p$ must have the same block support. Hence, the block support of $U_1$ is exactly $J$, so that it is an extremal point with the desired property. ∎
We also provide the following shorter proof, which, however, reveals less structural insights.
Alternative proof.
Consider the intersection
\[
  \mathcal{F} \;=\; \{\, U \text{ in the alternative spectrahedron} : U_{B_j} = 0 \text{ for all } j \notin J \,\}.
\]
Then $\mathcal{F}$ has an extreme point $U^*$, since it is the intersection of the pointed positive semidefinite cone with an affine space and is therefore also pointed. Now let $S$ be the block support of $U^*$. Then $S \subseteq J$ by construction. If $S = J$, we are done, since $U^*$ is an extreme point of the alternative spectrahedron as well: Assume $U^* = \lambda V + (1 - \lambda) W$, $\lambda \in (0, 1)$, would be a strict convex combination of two other points $V$ and $W$ of the alternative spectrahedron such that w.l.o.g. $V$ has support in a block outside of $J$, say $V_{B_j} \neq 0$ for some $j \notin J$. Then
\[
  0 \;=\; (U^*)_{B_j} \;=\; \lambda V_{B_j} + (1 - \lambda) W_{B_j}
\]
together with $V_{B_j}, W_{B_j} \succeq 0$ would force $V_{B_j} = 0$, a contradiction.
Moreover, if $S \subsetneq J$, then $U^*$ certifies that the block subsystem with respect to $S$ is weakly infeasible. Thus, $J$ would not be minimal. ∎
The converse of this theorem is, however, not true in general; it may already fail in the presence of blocks of size 2. We demonstrate this by two counterexamples. The first one is linear, but decomposable. The second one is not decomposable, but nonlinear.
Example 3.6.
Let , and
The blocks are , , , and this example corresponds to the three polyhedra
see Figure 1 for an illustration. In this case, only the diagonal elements of the points in the alternative spectrahedron are relevant, which can be formulated as the polyhedron
is a one-dimensional polytope with the two vertices
For the vertex of , we have . However, this does not correspond to an IIS, since gives a proper subsystem that is infeasible.
To come up with non-decomposable blocks, the next counterexample deals with a deformed version.
Example 3.7.
For , consider the linear matrix pencil given by
and the matrices of Example 3.6. For the value 0 of the deformation parameter, the system specializes to Example 3.6; for positive values, the two lines in Figure 1 indexed by 3 and 4 deform to a quadratic curve; see Figure 2. Note that the quadratic curve has a second component corresponding to the lower right block being negative definite.
The alternative spectrahedron is given by the set of symmetric block matrices
satisfying
In coordinates, the alternative spectrahedron is the set bounded by the ellipse in Figure 3. For the undeformed system, the ellipse becomes a circle. Independent of the deformation parameter, there are two distinguished extreme points, corresponding to the matrices
in . The diagonals of these matrices are exactly the two vertices of the alternative polyhedron as in Example 3.6. While the right matrix corresponds to an IIS, the left matrix does not.
These two examples motivate the question of how to compute IISs. By Lemma 3.3, it would suffice to compute a solution with minimal block support. This can be obtained by a greedy approach in which one iteratively solves semidefinite programs and fixes blocks to 0. Note, however, that computing an IIS with minimal cardinality of the block support is NP-hard already in the linear case, see [2].
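In the diagonal (LP) special case, this greedy scheme can be sketched without an SDP solver: one iteratively tries to fix blocks of the certificate to zero, keeping each fixing whenever the alternative system stays feasible. The following sketch (our illustration; `scipy.optimize.linprog` stands in for the SDP oracle, and rows play the role of blocks) operates on the alternative polyhedron $\{y \ge 0 : C^T y = 0,\ d^T y = 1\}$ of an infeasible linear system $Cx \ge d$, so that by Theorem 3.4 a support-minimal point yields an IIS.

```python
import numpy as np
from scipy.optimize import linprog

def alt_feasible(C, d, zero_rows):
    """Feasibility of {y >= 0 : C^T y = 0, d^T y = 1, y_j = 0 for j in zero_rows}."""
    k = C.shape[0]
    A_eq = np.vstack([C.T, d.reshape(1, -1)])
    b_eq = np.concatenate([np.zeros(C.shape[1]), [1.0]])
    bounds = [(0.0, 0.0) if j in zero_rows else (0.0, None) for j in range(k)]
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.status == 0

def greedy_iis(C, d):
    """Greedily shrink the support of an alternative-polyhedron point; by
    Gleeson--Ryan, a minimal support is the index set of an IIS."""
    zero_rows = set()
    for j in range(C.shape[0]):
        if alt_feasible(C, d, zero_rows | {j}):
            zero_rows.add(j)   # row j is not needed in any remaining certificate
    return sorted(set(range(C.shape[0])) - zero_rows)

# Infeasible toy system: x >= 1, -x >= 0, x >= -5 (third row redundant)
C = np.array([[1.0], [-1.0], [1.0]])
d = np.array([1.0, 0.0, -5.0])
print(greedy_iis(C, d))  # -> [0, 1]
```

On this toy system, the redundant third row is eliminated and the IIS $\{0, 1\}$ remains.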
In the particular case in which the alternative semidefinite system has a unique solution, this algorithmic challenge simplifies to solving one semidefinite program. In the next section we discuss universal conditions under which the alternative semidefinite system has a unique solution.
4. Universal unique solutions of alternative semidefinite systems
For a block matrix $U$ with blocks $B_1, \dots, B_r$, denote by $n_+(U)$ the number of blocks $U_{B_j}$ with at least one positive eigenvalue and by $n_-(U)$ the number of blocks with at least one negative eigenvalue. Note that in case of a positive semidefinite matrix $U$, the value $n_+(U)$ coincides with the number of nonzero blocks, i.e., the size of the block support of $U$.
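These counts are straightforward to evaluate; the snippet below (our illustration) takes a block matrix as a list of its symmetric diagonal blocks and returns $(n_+, n_-)$.

```python
import numpy as np

def n_plus_minus(blocks, tol=1e-9):
    """Given symmetric blocks U_{B_1}, ..., U_{B_r}, return (n_+, n_-):
    the number of blocks with at least one positive / negative eigenvalue."""
    n_plus = n_minus = 0
    for Uj in blocks:
        ev = np.linalg.eigvalsh(Uj)
        n_plus += int(np.any(ev > tol))
        n_minus += int(np.any(ev < -tol))
    return n_plus, n_minus

# A block with eigenvalues of both signs counts toward n_+ and n_-:
blocks = [np.diag([1.0, -1.0]),                   # indefinite block
          np.array([[2.0, 0.0], [0.0, 1.0]]),     # positive definite block
          np.zeros((2, 2))]                       # zero block counts in neither
print(n_plus_minus(blocks))  # -> (2, 1)
```

For a positive semidefinite input, `n_plus` equals the number of nonzero blocks, matching the observation above.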
The following statement is a generalization of [23, Theorem 1] to the case of block semidefinite systems. See also [14] for a variant for linear systems and [24, Theorem 5] for a different (and nonblock) generalization of that theorem to the semidefinite case.
Theorem 4.1.
For any psd block matrix $\bar{U}$ with $n_+(\bar{U}) = s$, the set
\[
  \{\, U \succeq 0 \text{ in block-diagonal form} : \langle A_i, U \rangle = \langle A_i, \bar{U} \rangle \text{ for } i \in [m] \,\}
\]
is a singleton if and only if for all symmetric block matrices $V \neq 0$ with $\langle A_i, V \rangle = 0$, $i \in [m]$, we have $n_+(V) \ge s + 1$ and $n_-(V) \ge s + 1$.
Proof.
Assume w.l.o.g. that there exists a symmetric block matrix $V$ with $\langle A_i, V \rangle = 0$ for $i \in [m]$, $V \neq 0$, and $n_-(V) \le s$. The case $n_+(V) \le s$ is analogous, since the mapping $V \mapsto -V$ exchanges positive and negative eigenvalues and we have $\langle A_i, -V \rangle = 0$, $i \in [m]$, as well. For simplicity we further assume $n_-(V) = s$. Then there exists a decomposition
\[
  V \;=\; Q \operatorname{Diag}(\lambda_1, \dots, \lambda_n)\, Q^{T},
\]
where $Q$ is a regular block matrix (with respect to the blocks $B_1, \dots, B_r$) and where $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $V$. In fact, $Q$ can be assumed to be orthonormal ($Q^{T} Q = I_n$) by performing a principal axis transformation for each block and combining the parts.
By reordering we can assume that the negative eigenvalues appear in the first blocks. We then define the diagonal matrices , with
Then . We now obtain the block matrices
with , , , and . By construction for all . Moreover, for , we have
Since , this implies
Hence, the set also contains and is thus not a singleton.
Conversely, assume that there exists a psd matrix $\bar{U}$ with $n_+(\bar{U}) = s$ such that the set
\[
  \{\, U \succeq 0 : \langle A_i, U \rangle = \langle A_i, \bar{U} \rangle \text{ for } i \in [m] \,\}
\]
is not a singleton. That is, there exists a matrix $U' \succeq 0$ with
\[
  \langle A_i, U' \rangle \;=\; \langle A_i, \bar{U} \rangle \quad \text{for } i \in [m]
\]
and $U' \neq \bar{U}$.
and . By the principal axis transformation, can be written as
with an orthonormal block matrix (w.r.t. the blocks ) and , for and for . Setting , we have and .
The block matrix then satisfies and
Then in only the first blocks can have negative eigenvalues. Since the transformation matrix respects the block structure, we have . ∎
Remark 4.2.
Example 4.3.
Consider again the matrices from Example 3.6. Setting
yields and . In this case . For the corresponding system of equations to have as the unique solution, we would need and for all symmetric with . However,
satisfies the equality constraints, but has , which is in accordance with Example 3.6, in which two extreme point solutions arise.
Example 4.4.
Let be even and consider the linear system of equations
(4.1) 
Then form a symmetric matrix . Let , , be appropriate symmetric matrices such that
is equivalent to . Without loss of generality, these are block matrices with respect to the blocks , …, . In the notation of Theorem 4.1, and with for all is equivalent to with .
Then . We can assume w.l.o.g. (by possible multiplication with ) that . Then will be positive, while will be negative. Thus, any solution to , , satisfies . By Theorem 4.1, the system , , has the unique (symmetric) solution if . Note that the rank of the matrix is , which shows that the system has infinitely many solutions if is an arbitrary matrix.
Remark 4.5.
Consider the condition on in Theorem 4.1. The total number of blocks is at most , and if a block contributes both to and , the block has to have at least size 2. Therefore, . This implies that the largest for which and can hold is . Example 4.4 shows that this bound is tight (if is odd, one can ignore a single variable in and use the construction on the remaining part). Note that for even this bound can only be attained in the LP-case, i.e., if all matrices are diagonal.
Example 4.6.
Let be divisible by 3, define , and consider the blocks , …, . Take the same linear system of equations as in Example 4.4 and fill in the variables of a solution into the symmetric block matrix as follows:
Let , , be symmetric matrices such that
is equivalent to from (4.1). We can assume that the are block matrices for the above blocks. As in Example 4.4, assuming that , the equations imply that , while . Thus, denoting , each block has the following structure:
In both cases, the eigenvalues are . Therefore, each block is counted both in and in . Thus, any solution to , , will satisfy . By Theorem 4.1, the system , , has the unique (symmetric) solution if .
Remark 4.7.
Note that the uniqueness conditions of Example 4.6 include matrices with negative entries, which would not be allowed in the LP-case (as in Example 4.4); for example, if blocks of consist of the positive definite matrices
and 0-blocks otherwise. This shows that while the size of is possibly smaller than in the LP-case, the general spectrahedral case allows for a wider range of cases in which uniqueness appears.
5. Conclusion and outlook
We have shown that one direction of the Gleeson–Ryan theorem for infeasible linear systems generalizes to infeasible block semidefinite systems, but the other direction does not. To nevertheless identify IISs, we have provided a criterion under which particular semidefinite block systems have a unique feasible solution. If this particular situation does not arise, it is an open question whether one can obtain an IIS by solving a single semidefinite program.
By Lemma 3.3, it would suffice to find solutions of minimal block support. For a matrix $U$, the number of nonzero blocks can be written as
\[
  \big\| \big( \|U_{B_1}\|, \dots, \|U_{B_r}\| \big) \big\|_0,
\]
where $\|y\|_0$ denotes the number of nonzeros in a vector $y$. Thus, it would suffice to solve the following problem to find an IIS:
(5.1)  $\min \big\{ \big\| \big( \|U_{B_1}\|, \dots, \|U_{B_r}\| \big) \big\|_0 : U \text{ in the alternative spectrahedron} \big\}$
Unfortunately, the "norm" $\|\cdot\|_0$ is nonconvex and thus hard to handle; for instance, (5.1) is NP-hard. However, for linear systems recent developments, see, e.g., [8, 10, 16], suggest to replace $\|\cdot\|_0$ by the $\ell_1$-norm, i.e., to replace the objective by $\sum_{j=1}^{r} \|U_{B_j}\|$,
which leads to the following convex optimization problem:
(5.2)  $\min \big\{ \sum_{j=1}^{r} \|U_{B_j}\| : U \text{ in the alternative spectrahedron} \big\}$
Lemma 5.1.
Problem (5.2) can be formulated as an SDP.
Proof.
Use a second-order cone condition to represent $\|U_{B_j}\| \le t_j$ with new variables $t_1, \dots, t_r$, and minimize the objective function $\sum_{j=1}^{r} t_j$. It is well known that second-order cone conditions are special cases of semidefinite conditions (see, e.g., [21]). Since $U \succeq 0$ is already a positive semidefinite condition, this concludes the proof. ∎
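For positive semidefinite blocks, the epigraph of the operator norm is even directly semidefinite-representable: for $U_{B_j} \succeq 0$, one has $\|U_{B_j}\| \le t_j$ if and only if $t_j I - U_{B_j} \succeq 0$. This is a standard fact rather than a claim about the representation intended above; the following snippet (ours) checks the equivalence numerically on random psd matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def op_norm(U):
    """Operator norm = largest |eigenvalue| of a symmetric matrix."""
    return np.max(np.abs(np.linalg.eigvalsh(U)))

for _ in range(100):
    G = rng.standard_normal((3, 3))
    U = G @ G.T                       # random psd matrix
    for t in [0.5 * op_norm(U), op_norm(U) + 0.1]:
        # t*I - U psd  <=>  ||U|| <= t   (for psd U)
        tI_minus_U_psd = np.min(np.linalg.eigvalsh(t * np.eye(3) - U)) >= -1e-9
        assert tI_minus_U_psd == (op_norm(U) <= t + 1e-9)
```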
References
 [1] T. Achterberg. Conflict analysis in mixed integer programming. Discrete Opt., 4(1):4–20, 2007.
 [2] E. Amaldi, M. E. Pfetsch, and L. E. Trotter, Jr. On the maximum feasible subsystem problem, IISs, and IIS-hypergraphs. Math. Program., 95(3):533–554, 2003.
 [3] G. Blekherman, P. A. Parrilo, and R. R. Thomas. Semidefinite Optimization and Convex Algebraic Geometry. SIAM, Philadelphia, PA, 2013.
 [4] J. W. Chinneck. Finding a useful subset of constraints for analysis in an infeasible linear program. INFORMS J. Comput., 9(2):164–174, 1997.
 [5] J. W. Chinneck. Feasibility and infeasibility in optimization: algorithms and computational methods, volume 118 of International Series in Operations Research and Management Sciences. Springer, 2008.
 [6] J. W. Chinneck and E. W. Dravnieks. Locating minimal infeasible constraint sets in linear programs. ORSA J. Comput., 3(2):157–168, 1991.
 [7] G. Codato and M. Fischetti. Combinatorial Benders’ cuts. In D. Bienstock and G. Nemhauser, editors, Proc. 10th International Conference on Integer Programming and Combinatorial Optimization (IPCO), New York, volume 3064 of LNCS, pages 178–195. Springer-Verlag, Berlin Heidelberg, 2004.
 [8] Y. C. Eldar, P. Kuppinger, and H. Bölcskei. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. Signal Process., 58(6):3042–3054, 2010.
 [9] E. Elhamifar and R. Vidal. Block-sparse recovery via convex optimization. IEEE Trans. Signal Process., 60(8):4094–4107, 2012.
 [10] E. Elhamifar and R. Vidal. Block-sparse recovery via convex optimization. IEEE Trans. Signal Process., 60(8):4094–4107, 2012.
 [11] T. Gally, M. E. Pfetsch, and S. Ulbrich. A framework for solving mixedinteger semidefinite programs. To appear in Optimization Methods and Software.
 [12] J. Gleeson and J. Ryan. Identifying minimally infeasible subsystems of inequalities. ORSA J. Comput., 2(1):61–63, 1990.
 [13] O. Guieu and J. W. Chinneck. Analyzing infeasible mixed-integer and integer linear programs. INFORMS J. Comput., 11(1):63–77, 1999.
 [14] M. A. Khajehnejad, A. G. Dimakis, W. Xu, and B. Hassibi. Sparse recovery of nonnegative signals with minimal expansion. IEEE Trans. Signal Processing, 59(1):196–208, 2011.
 [15] I. Klep and M. Schweighofer. An exact duality theory for semidefinite programming based on sums of squares. Math. of Oper. Res., 38(3):569–590, 2013.
 [16] J. H. Lin and S. Li. Block sparse recovery via mixed $\ell_2/\ell_1$ minimization. Acta Mathematica Sinica, English Series, 29(7):1401–1412, 2013.
 [17] M. E. Pfetsch. The Maximum Feasible Subsystem Problem and Vertex-Facet Incidences of Polyhedra. PhD thesis, TU Berlin, 2003.
 [18] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1997.
 [19] A. Schrijver. Theory of Linear and Integer Programming. WileyInterscience Series in Discrete Mathematics. John Wiley & Sons, Ltd., Chichester, 1986.
 [20] T. Theobald. Some recent developments in spectrahedral computation. In G. Böckle, W. Decker, and G. Malle, editors, Algorithmic and Experimental Methods in Algebra, Geometry, and Number Theory, pages 717–739. Springer, 2017.
 [21] L. Tunçel. Polyhedral and Semidefinite Programming Methods in Combinatorial Optimization. Fields Institute Monographs. American Mathematical Society, 2010.
 [22] J. N. M. van Loon. Irreducibly inconsistent systems of linear inequalities. Eur. J. Oper. Res., 8(3):283–288, 1981.
 [23] M. Wang and A. Tang. Conditions for a unique nonnegative solution to an underdetermined system. In 47th Annual Allerton Conf. on Communication, Control, and Computing, Monticello IL, 2009.
 [24] M. Wang, W. Xu, and A. Tang. A unique “nonnegative” solution to an underdetermined system: From vectors to matrices. IEEE Trans. Signal Processing, 59(3):1007–1016, 2011.
 [25] J. Witzig, T. Berthold, and S. Heinz. Experiments with conflict analysis in mixed integer programming. In Integration of AI and OR Techniques in Constraint Programming, volume 10335, pages 211–222, 2017.