
# Irreducible infeasible subsystems of semidefinite systems

Kai Kellner Frankfurt am Main, Germany Marc E. Pfetsch Department of Mathematics, TU Darmstadt, Dolivostr. 15, 64293 Darmstadt, Germany  and  Thorsten Theobald Goethe-Universität, FB 12 – Institut für Mathematik, Postfach 11 19 32, 60054 Frankfurt am Main, Germany
July 27, 2019
###### Abstract.

Given real symmetric $n \times n$-matrices $A_0, \dots, A_m$, let $A(y)$ denote the linear matrix pencil $A_0 - \sum_{i=1}^m y_i A_i$. Farkas’ lemma for semidefinite programming then characterizes feasibility of the system $A(y) \succeq 0$ in terms of an alternative spectrahedron. In the well-studied special case of linear programming, a theorem by Gleeson and Ryan states that the index sets of irreducible infeasible subsystems are exactly the vertices of the corresponding alternative polyhedron.

We show that one direction of this theorem can be generalized to the nonlinear situation of extreme points of general spectrahedra. The reverse direction, however, is not true in general, which we show by means of counterexamples. On the positive side, an irreducible infeasible block subsystem is obtained whenever the extreme point has minimal block support. Motivated by results from sparse recovery, we provide a criterion for the uniqueness of solutions of semidefinite block systems.

## 1. Introduction

The structure of infeasible linear inequality systems is quite well understood. In particular, Farkas’ Lemma, also called Theorem of the Alternative, gives a characterization of infeasibility (see, e.g., [19]). Moreover, the basic building blocks are so-called Irreducible Infeasible Systems (IISs, also called Irreducible Inconsistent Systems), i.e., infeasible subsystems such that every proper subsystem is feasible. An extension of the Theorem of the Alternative due to Gleeson and Ryan [12] states that the IISs of an infeasible linear inequality system correspond exactly to the vertices of a so-called alternative polyhedron (see Theorem 3.4). These IISs provide a means to analyze infeasibilities of a system, see, e.g., [4, 6, 22] and the book [5]. Today, standard optimization software can compute (hopefully) small IISs. Further investigations include the mixed-integer case [13] and the application within Benders’ decomposition [7].

In this paper, we consider semidefinite systems of the form

$$A(y) \coloneqq A_0 - \sum_{i=1}^m y_i A_i \succeq 0,$$

where $A_0, \dots, A_m$ are symmetric matrices and “$\succeq 0$” denotes that a matrix is positive semidefinite (psd). There are well-known generalizations of the Theorem of the Alternative to this setting (see, e.g., [21]), although one has to be more careful, since feasibility might only be attained in the limit – see Proposition 2.1 for a more precise statement. As in the linear case, solutions of certain alternative systems give a certificate of the (weak) infeasibility of $A(y) \succeq 0$.

In this context, the following natural questions arise: How can infeasible semidefinite systems be analyzed? What can be said about the structure of irreducible infeasible semidefinite systems? Moreover, is there a generalization of the theorem of Gleeson and Ryan to this setting?

These questions are motivated by solving mixed-integer semidefinite programs using branch-and-bound in which an SDP is solved in every node (see, e.g., [11]). Then it often happens that these SDPs turn out to be infeasible. One would now like to learn from this infeasibility in order to strengthen the relaxations of other nodes. This is done in mixed-integer and SAT solvers, see, e.g., [1, 25].

To come up with an appropriate definition of an IIS for a semidefinite system it appears to be very natural to consider block systems. Then an IIS is given by an inclusion-minimal set of infeasible block subsystems (see Definition 2.6). We will show in Section 3 that one direction of the above-mentioned connection can be generalized: there always exists an extreme point of the alternative system that corresponds to a given IIS, see Theorem 3.5. The reverse direction, however, is not true in general, which we show and discuss via two counterexamples, see Examples 3.6 and 3.7. On the positive side, whenever an extreme point has (inclusionwise) minimal block support, the corresponding subsystem forms indeed an IIS. This leads to the general task of computing such points.

In the particular case in which the alternative semidefinite system has a unique solution, this algorithmic challenge simplifies to solving one semidefinite program. Motivated by results from sparse recovery, we provide a criterion for the uniqueness of solutions of semidefinite block systems. In Section 4, we generalize the results in [9, 14, 23, 24] to give unique recovery characterizations for a block semidefinite system in Theorem 4.1.

Notation. In the paper, we use the following notation. Let $\mathcal{S}^n$ be the set of all (real) symmetric $n \times n$ matrices. For a matrix $A \in \mathcal{S}^n$ and $I \subseteq [n]$, let $A_I$ be the submatrix containing the rows and columns of $A$ indexed by $I$. For $A, B \in \mathcal{S}^n$, we denote the inner product by

$$A \bullet B = \operatorname{tr}(A^\top B) = \sum_{i,j=1}^n A_{ij} B_{ij},$$

where $\operatorname{tr}(\cdot)$ denotes the trace. Moreover, $\|A\|$ denotes the operator norm $\max\{|\lambda_1(A)|, \dots, |\lambda_n(A)|\}$, where $\lambda_1(A), \dots, \lambda_n(A)$ are the eigenvalues of $A$.
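These two notions can be checked numerically; the following small sketch (assuming NumPy is available) computes the inner product and the operator norm for concrete symmetric matrices.

```python
import numpy as np

# Two small symmetric matrices
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 4.0]])

# Inner product A • B = tr(A^T B) = sum of entrywise products
inner = np.trace(A.T @ B)
assert np.isclose(inner, np.sum(A * B))

# Operator norm of a symmetric matrix: largest absolute eigenvalue
op_norm = np.max(np.abs(np.linalg.eigvalsh(A)))
```

For the matrix $A$ above, the eigenvalues are $(5 \pm \sqrt{5})/2$, so the operator norm equals $(5 + \sqrt{5})/2$.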

## 2. Infeasible systems and block structure

Let $A_0$, …, $A_m \in \mathcal{S}^n$. For $y \in \mathbb{R}^m$, we consider the linear (matrix) pencil

$$A(y) \coloneqq A_0 - \sum_{i=1}^m y_i A_i$$

and the linear matrix inequality (LMI) $A(y) \succeq 0$. With respect to infeasibility, we will use the following result, where $\mathds{I}$ denotes the identity matrix.

###### Proposition 2.1.

Either $A(y) + \varepsilon \mathds{I} \succeq 0$ is feasible for every $\varepsilon > 0$ or there exists $X \succeq 0$ with $A_i \bullet X = 0$, $i \in [m]$, and $A_0 \bullet X = -1$.

This statement is equivalent to Sturm’s Farkas’ Lemma for semidefinite programming (see [15, Lemmas 3.1.1 and 3.1.2]) and a variation of [21, Thm. 2.21], and its proof is provided for completeness.

###### Proof.

Consider the following dual pair of semidefinite programs (SDPs):

(2.1)  $\inf\{\eta : A(y) + \eta \mathds{I} \succeq 0,\ \eta \ge 0\}$,

(2.2)  $\sup\{-A_0 \bullet X : A_i \bullet X = 0,\ i \in [m],\ \operatorname{tr}(X) \le 1,\ X \succeq 0\}$.

Setting $y = 0$ and $\eta$ sufficiently large shows that (2.1) has a Slater point. Moreover, $X = 0$ is feasible for (2.2). The strong duality theorem (see, e.g., [21, Thm. 2.14]) implies that (2.2) attains its optimal value and the objective values are the same.

Suppose that no $X \succeq 0$ with $A_i \bullet X = 0$, $i \in [m]$, and $A_0 \bullet X = -1$ exists. By scaling, this implies that no such $X$ exists with $A_0 \bullet X < 0$. And since the zero matrix is feasible for (2.2), the optimal value of (2.2) is 0. By the strong duality theorem, the optimal value of (2.1) is also 0. Either (2.1) attains this value and we are done, or there exists a sequence $(y^{(t)}, \eta_t)_{t \ge 1}$ of feasible solutions of (2.1) such that $\eta_t \to 0$. This implies the theorem. ∎

###### Remark 2.2.

In slight deviation from parts of the literature, we call $A(y) \succeq 0$ weakly feasible if for every $\varepsilon > 0$ the system $A(y) + \varepsilon \mathds{I} \succeq 0$ is feasible; compare this, for instance, to the slightly different definition in [21]. Moreover, $A(y) \succeq 0$ is weakly infeasible if it is not weakly feasible. Note the slight inaccuracy of this naming convention, which should, however, not lead to confusion in the present paper.

###### Corollary 2.3.

Assume that there exists $\bar X \succ 0$ with $A_i \bullet \bar X = 0$, $i \in [m]$. Then either $A(y) \succeq 0$ is feasible or there exists $X \succeq 0$ with $A_i \bullet X = 0$, $i \in [m]$, and $A_0 \bullet X = -1$.

###### Proof.

By scaling $\bar X$ to satisfy $\operatorname{tr}(\bar X) < 1$, the assumption guarantees that (2.2) above has a Slater point and therefore that the optimal value of (2.1) is attained, see, e.g., [21, Cor. 2.17]. The remaining part of the proof is as for the one of Proposition 2.1. ∎

Our subsequent definition of an alternative spectrahedron will allow us to handle structured semidefinite systems. To motivate this viewpoint, consider a simple example where the goal is to check whether two given halfplanes $H_j = \{y \in \mathbb{R}^2 : \alpha_j y_1 + \beta_j y_2 + \gamma_j \ge 0\}$ ($j = 1, 2$) and a given disc $D$ in the Euclidean plane with center $c = (c_1, c_2)^\top$ and radius $r$ have a common point. The smallest LMI-representation (w.r.t. matrix size) of $D$ is

$$K(r,c;y) \coloneqq \begin{pmatrix} r + c_1 - y_1 & y_2 - c_2 \\ y_2 - c_2 & r - c_1 + y_1 \end{pmatrix} \succeq 0,$$

and thus the existence of a point in $H_1 \cap H_2 \cap D$ is equivalent to the feasibility of the LMI

(2.3)  $A(y) = \operatorname{diag}\big(\alpha_1 y_1 + \beta_1 y_2 + \gamma_1,\ \alpha_2 y_1 + \beta_2 y_2 + \gamma_2,\ K(r,c;y)\big) \succeq 0.$

In order to capture such natural structure within semidefinite systems, one arrives at block systems. In particular, already in the simple example this allows us to consider the $2 \times 2$-subsystem of the disc as an entity. Formally, this yields the following.

###### Definition 2.4.

Let $k \le n$ and let $B_1 \mathbin{\dot\cup} \cdots \mathbin{\dot\cup} B_k$ be a partition of the set $[n]$. A linear pencil $A(y)$ is in block-diagonal form with blocks $B_1, \dots, B_k$ if each $A_i$ is 0 outside of the blocks $B_1, \dots, B_k$, i.e., $(A_i)_{st} = 0$ for all $s \in B_p$, $t \in B_q$ with $p \ne q$ and all $i \in \{0, 1, \dots, m\}$. Note that the blocks might be decomposable, i.e., at least one block consists of blocks of smaller size while still retaining the block structure of $A(y)$.

###### Assumption 2.5.

To avoid trivial infeasibilities, we will assume that for each block $B_j$, $j \in [k]$, the block subsystem $A(y)_{B_j} \succeq 0$ is weakly feasible.

###### Definition 2.6.

Let $A(y)$ be in block-diagonal form with blocks $B_1, \dots, B_k$.

1. For $I \subseteq [k]$, the block subsystem of $A(y) \succeq 0$ with respect to $I$ is given by $A(y)_{B(I)} \succeq 0$ for the index set $B(I) \coloneqq \bigcup_{i \in I} B_i$. By convention, $B(\emptyset) = \emptyset$ and the corresponding block subsystem is a feasible system.

2. A block subsystem with respect to some $I \subseteq [k]$ is an irreducible infeasible subsystem (IIS) if it is weakly infeasible, but the block subsystem with respect to $I \setminus \{i\}$ is weakly feasible for all $i \in I$.

3. Given a matrix $X \in \mathcal{S}^n$, its block support is defined as

$$\mathrm{BS}(X) \coloneqq \{i \in [k] : X_{B_i} \ne 0\}.$$
###### Remark 2.7.

Linear inequality systems arise if all matrices of $A(y)$ are diagonal. In this case, each inequality is of the form

$$(A_0)_{jj} - \sum_{i=1}^m y_i (A_i)_{jj} \ge 0, \qquad j \in [n].$$

If this system is written as a system of linear inequalities, then IISs correspond to infeasible subsystems such that each proper subsystem is feasible.

The linear case arises, in particular, if the block system satisfies $k = n$ (and hence $|B_i| = 1$ for all $i \in [k]$); then the blocks are not decomposable. However, it is also possible that the blocks are decomposable. In this case, the system consists of linear inequality systems defining polyhedra $P_1$, …, $P_k$. If the intersection of these polyhedra is empty, the original LMI is infeasible; see Example 3.6 below.

###### Remark 2.8.

An alternative way to define IISs would be to consider subsets $J \subseteq [n]$ such that $A(y)_J \succeq 0$ is (weakly) infeasible, but $A(y)_{J'} \succeq 0$ is (weakly) feasible for every proper subset $J'$ of $J$. However, this definition would not retain the structure within semidefinite systems such as (2.3).

## 3. Alternative systems

In view of Proposition 2.1, we define the following, where the abbreviation $\Sigma$ for the LMI $A(y) \succeq 0$ will allow for a convenient notation. For general background on spectrahedra, we refer to [3, 20].

###### Definition 3.1.

The alternative spectrahedron for $\Sigma$ is

$$S(\Sigma) \coloneqq \{X \succeq 0 : A_i \bullet X = 0,\ i \in [m],\ A_0 \bullet X = -1\}.$$
###### Assumption 3.2.

By standard polarity theory, a block structure of the system $\Sigma$ can also be assumed for the elements of $S(\Sigma)$. We therefore only consider matrices $X \in S(\Sigma)$ in block-diagonal form, where the blocks are indexed by $[k]$.

The definition of the alternative spectrahedron immediately implies:

###### Lemma 3.3.

Let $\Sigma$ be a weakly infeasible semidefinite system with blocks $B_1, \dots, B_k$.

1. For any $X \in S(\Sigma)$, there exists an infeasible subsystem of $\Sigma$ with block support contained in $\mathrm{BS}(X)$.

2. For any $X \in S(\Sigma)$ with inclusion-minimal block support, the index set $\mathrm{BS}(X)$ defines an IIS of $\Sigma$.

As mentioned in the introduction, in the linear case there exists a characterization of IISs:

###### Theorem 3.4 (Gleeson and Ryan [12]).

Consider an infeasible system $Ax \le b$, where $A \in \mathbb{R}^{m \times d}$ and $b \in \mathbb{R}^m$. The index sets of the IISs of $Ax \le b$ are exactly the support sets of the vertices of the alternative polyhedron

$$P(\Sigma) = \{y \in \mathbb{R}^m : y^\top A = 0,\ y^\top b = -1,\ y \ge 0\}.$$

A proof can be found in [12] and [17]. Note that in the non-decomposable linear case, the alternative polyhedron $P(\Sigma)$ is equivalent to the alternative spectrahedron $S(\Sigma)$.

One goal of this paper is to investigate whether/how far Theorem 3.4 generalizes to the spectrahedral situation. We can show that one of the directions can be generalized.

###### Theorem 3.5.

Let $\Sigma$ be a weakly infeasible LMI with blocks $B_1, \dots, B_k$. For each index set $I \subseteq [k]$ of an IIS, there exists an extremal point of $S(\Sigma)$ with block support $I$.

The following proof proceeds by revealing the convex-geometric structure of the alternative spectrahedron.

###### Proof.

Without loss of generality, we can assume that $I = [l]$ for some $l \le k$. By Proposition 2.1, the alternative spectrahedron contains a feasible point $X$ whose block support is contained in $I$; by Lemma 3.3 and the minimality of the IIS $I$, the block support is exactly $I$. In order to show that the alternative spectrahedron contains an extremal point with block support $I$, we first observe that $S(\Sigma)$ has at least one extremal point. This follows from the fact that the positive semidefinite cone is pointed and thus any slice of this cone by an affine subspace cannot have a nontrivial lineality space either.

By [18, Theorem 18.5], the alternative spectrahedron can be written in the form

$$S(\Sigma) = \operatorname{conv}(E \cup F),$$

where $E$ is the set of its extremal points and $F$ is the set of extremal directions of $S(\Sigma)$. Hence, by a general version of Carathéodory’s Theorem (see [18, Theorem 17.1]), there exist $r \ge 1$, $s \ge 0$, extremal points $V^{(1)}, \dots, V^{(r)}$ and extremal rays $W^{(1)}, \dots, W^{(s)}$ of the alternative spectrahedron such that

$$X = \sum_{i=1}^r \lambda_i V^{(i)} + \sum_{j=1}^s \mu_j W^{(j)}$$

with $\lambda_1, \dots, \lambda_r > 0$, $\mu_1, \dots, \mu_s \ge 0$ and $\sum_{i=1}^r \lambda_i = 1$. Since $V^{(i)}$ and $W^{(j)}$ are positive semidefinite and $\lambda_i, \mu_j \ge 0$, the block support of each $V^{(i)}$, $W^{(j)}$ must be contained in the block support of $X$. Due to the minimality of $I$, all $V^{(i)}$ must have the same block support. Hence, the block support of $V^{(1)}$ is exactly $I$, so that it is an extremal point with the desired property. ∎

We also provide the following shorter proof, which, however, reveals less structural insights.

###### Alternative proof.

Consider the intersection

$$S' \coloneqq S(\Sigma) \cap \{X : X_{B_i} = 0,\ i \notin I\}.$$

Then $S'$ has an extreme point $X'$, since it is the intersection of the pointed positive semidefinite cone with an affine space and therefore also pointed. Now let $I'$ be the block support of $X'$. Then $I' \subseteq I$ by construction. If $I' = I$, we are done, since $X'$ is an extreme point of $S(\Sigma)$ as well: Assume $X' = \lambda Z + (1-\lambda) Y$, $\lambda \in (0,1)$, would be the strict convex combination of two other feasible points $Z$ and $Y$ of $S(\Sigma)$, such that w.l.o.g. $Z$ has a support in a block $B$ outside of $I$. Then

 tr(X′B)=0=λtr(ZB)>0+(1−λ)tr(YB)≥0,

Moreover, if $I' \subsetneq I$, the point $X'$ shows that the block subsystem with respect to $I'$ is infeasible. Thus, $I$ would not be minimal. ∎

The converse of this theorem is, however, not true in general. This direction may already fail in the presence of blocks of size 2. We will demonstrate this by two counterexamples. The first one is linear, but decomposable. The second one is not decomposable, but nonlinear.

###### Example 3.6.

Let $m = 2$, $n = 4$, and

$$A_0 = \operatorname{diag}(0, -1, -1, -2), \qquad A_1 = \operatorname{diag}(1, -1, -1, -1), \qquad A_2 = \operatorname{diag}(0, -1, 1, 0).$$

The blocks are $B_1 = \{1\}$, $B_2 = \{2\}$, $B_3 = \{3, 4\}$, and this example corresponds to the three polyhedra

$$P_1 \coloneqq \{y \in \mathbb{R}^2 : y_1 \le 0\}, \qquad P_2 \coloneqq \{y \in \mathbb{R}^2 : y_1 + y_2 \ge 1\}, \qquad P_3 \coloneqq \{y \in \mathbb{R}^2 : -y_1 + y_2 \le -1,\ y_1 \ge 2\},$$

see Figure 1 for an illustration. In this case, only the diagonal elements of the points in the alternative spectrahedron are relevant, which can be formulated as the polyhedron

is a one-dimensional polytope with the two vertices

$$(1, \tfrac12, \tfrac12, 0)^\top \qquad \text{and} \qquad (\tfrac12, 0, 0, \tfrac12)^\top.$$

For the vertex $v = (1, \tfrac12, \tfrac12, 0)^\top$ of this polytope, we have $\mathrm{BS}(v) = \{1, 2, 3\}$. However, this does not correspond to an IIS, since $\{1, 3\}$ gives a proper subsystem that is infeasible.
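The claims of this example can be verified numerically. The following sketch (assuming NumPy; the vectors are the diagonals of $A_0, A_1, A_2$, since all matrices here are diagonal) checks that both vertices satisfy the defining equations of the alternative spectrahedron.

```python
import numpy as np

# Diagonals of A0, A1, A2 from Example 3.6 (all three matrices are diagonal)
a0 = np.array([0.0, -1.0, -1.0, -2.0])
a1 = np.array([1.0, -1.0, -1.0, -1.0])
a2 = np.array([0.0, -1.0, 1.0, 0.0])

# The two claimed vertices of the alternative polyhedron
v1 = np.array([1.0, 0.5, 0.5, 0.0])
v2 = np.array([0.5, 0.0, 0.0, 0.5])

for v in (v1, v2):
    assert np.all(v >= 0)            # X >= 0 (diagonal psd)
    assert np.isclose(a1 @ v, 0.0)   # A1 • X = 0
    assert np.isclose(a2 @ v, 0.0)   # A2 • X = 0
    assert np.isclose(a0 @ v, -1.0)  # A0 • X = -1

# The support {1, 2, 3} of v1 is not an IIS: already the blocks of P1 and P3
# are inconsistent, since y1 <= 0 contradicts y1 >= 2.
```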

To come up with non-decomposable blocks, the next counterexample deals with a deformed version.

###### Example 3.7.

For $\varepsilon \ge 0$, consider the linear matrix pencil given by

$$A_0 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & \varepsilon \\ 0 & 0 & \varepsilon & -2 \end{pmatrix}$$

and the matrices $A_1$ and $A_2$ of Example 3.6. For $\varepsilon = 0$, the system specializes to Example 3.6. For $\varepsilon > 0$ the two lines in Figure 1 indexed by 3 and 4 deform to a quadratic curve; see Figure 2. Note that the quadratic curve has a second component corresponding to the lower right block being negative definite.

The alternative spectrahedron is given by the set of symmetric block matrices

$$X = \operatorname{diag}\left( [X_{11}],\ [X_{22}],\ \begin{pmatrix} X_{33} & X_{34} \\ X_{34} & X_{44} \end{pmatrix} \right)$$

satisfying

$$\begin{aligned} X_{11} &= 1 - X_{44} + 2\varepsilon X_{34} \ge 0, \\ X_{22} = X_{33} &= \tfrac12 - X_{44} + \varepsilon X_{34} \ge 0, \\ X_{44} &\ge 0, \\ \big(\tfrac12 - X_{44} + \varepsilon X_{34}\big) \cdot X_{44} - X_{34}^2 &\ge 0. \end{aligned}$$

In $(X_{34}, X_{44})$-coordinates, $S(\Sigma)$ is the set bounded by the ellipse in Figure 3 (for $\varepsilon > 0$). For $\varepsilon = 0$, the ellipse becomes a circle. Independent of the deformation, i.e., for any $\varepsilon \ge 0$, there are two distinguished extreme points, namely $(X_{34}, X_{44}) = (0, 0)$ and $(X_{34}, X_{44}) = (0, \tfrac12)$, corresponding to the matrices

$$\operatorname{diag}\big(1, \tfrac12, \tfrac12, 0\big) \qquad \text{and} \qquad \operatorname{diag}\big(\tfrac12, 0, 0, \tfrac12\big)$$

in $S(\Sigma)$. The diagonals of these matrices are exactly the two vertices of the alternative polyhedron as in Example 3.6. While the right matrix corresponds to an IIS, the left matrix does not.

These two examples motivate the question of how to compute IISs. By Lemma 3.3 it would suffice to compute a solution $X \in S(\Sigma)$ with inclusion-minimal block support. This can be obtained by a greedy approach in which one iteratively solves semidefinite programs and fixes blocks to 0. Note, however, that computing an IIS with minimal cardinality block support is NP-hard already in the linear case, see [2].
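The greedy approach can be sketched as follows. The oracle `find_point` is a hypothetical placeholder: in practice it would solve one SDP over the alternative spectrahedron with the given blocks constrained to zero; here a mock oracle encoding the data of Example 3.6 stands in for the solver.

```python
def greedy_minimal_block_support(blocks, find_point):
    """Greedy computation of an inclusion-minimal block support of a point
    of the alternative spectrahedron.

    blocks:     the block indices 1, ..., k (any iterable)
    find_point: oracle standing in for one SDP solve; given a set of blocks
                that must be zero, it returns the block support of some point
                of the restricted alternative spectrahedron, or None if the
                restricted spectrahedron is empty.
    """
    fixed = set()
    if find_point(fixed) is None:
        return None  # the original system is not (weakly) infeasible
    for b in blocks:
        # Try to additionally force block b to zero; keep the restriction
        # whenever the restricted alternative spectrahedron stays nonempty.
        if find_point(fixed | {b}) is not None:
            fixed.add(b)
    # No remaining block can be removed, so by Lemma 3.3 the resulting
    # inclusion-minimal block support indexes an IIS.
    return set(blocks) - fixed


def mock_find_point(zero_blocks):
    """Mock oracle for Example 3.6: every point of the alternative polyhedron
    has blocks 1 and 3 in its support, while block 2 can be zeroed
    (vertex (1/2, 0, 0, 1/2))."""
    if zero_blocks <= {2}:
        return {1, 3} if 2 in zero_blocks else {1, 2, 3}
    return None
```

With these definitions, `greedy_minimal_block_support([1, 2, 3], mock_find_point)` returns `{1, 3}`, i.e., the IIS formed by the blocks of $P_1$ and $P_3$. The correctness of the single pass rests on monotonicity: fixing more blocks to zero only shrinks the restricted spectrahedron, so a failed removal never becomes possible later.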

In the particular case in which the alternative semidefinite system has a unique solution, this algorithmic challenge simplifies to solving one semidefinite program. In the next section we discuss universal conditions under which the alternative semidefinite system has a unique solution.

## 4. Universal unique solutions of alternative semidefinite systems

For a block matrix $V$ with blocks $B_1, \dots, B_k$, denote by $n^+(V)$ the number of blocks $V_{B_i}$ with at least one positive eigenvalue and by $n^-(V)$ the number of blocks with at least one negative eigenvalue. Note that in case of a positive semidefinite matrix $X$, the value $n^+(X)$ coincides with $|\mathrm{BS}(X)|$.

The following statement is a generalization of [23, Theorem 1] to the case of block semidefinite systems. See also [14] for a variant for linear systems and [24, Theorem 5] for a different (and non-block) generalization of that theorem to the semidefinite case.

###### Theorem 4.1.

For any psd block matrix $X_0$ with $|\mathrm{BS}(X_0)| \le l$, the set

$$\{X \succeq 0 : A_i \bullet X = A_i \bullet X_0,\ i \in [m]\}$$

is a singleton if and only if for all symmetric $V \ne 0$ with $A_i \bullet V = 0$, $i \in [m]$, we have $n^+(V) \ge l+1$ and $n^-(V) \ge l+1$.

###### Proof.

Assume w.l.o.g. that there exists a symmetric $V \ne 0$ with $A_i \bullet V = 0$, $i \in [m]$, and $n^-(V) \le l$. The case $n^+(V) \le l$ is analogous, since the mapping $V \mapsto -V$ exchanges positive and negative eigenvalues and we have $A_i \bullet (-V) = 0$, $i \in [m]$, as well. For simplicity we further assume $n^-(V) = l$. Then there exists a decomposition

$$V = S^\top D S,$$

where $S$ is a regular block matrix (with respect to the blocks $B_1$, …, $B_k$) and where $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ with the eigenvalues $\lambda_1, \dots, \lambda_n$ of $V$. In fact, $S$ can be assumed to be orthonormal ($S^\top S = \mathds{I}$) by performing a principal axis transformation for each block and combining the parts.

By reordering we can assume that the negative eigenvalues appear in the first $l$ blocks. We then define the diagonal matrices $D^1$, $D^2$ with

$$D^1_{ii} = \begin{cases} -\lambda_i & \text{if } \lambda_i < 0, \\ 0 & \text{otherwise,} \end{cases} \qquad D^2_{ii} = \begin{cases} \lambda_i & \text{if } \lambda_i > 0, \\ 0 & \text{otherwise,} \end{cases} \qquad i \in [n].$$

Then $D = D^2 - D^1$. We now obtain the block matrices

$$X^1 = S^\top D^1 S, \qquad X^2 = S^\top D^2 S,$$

with $X^1 \succeq 0$, $X^2 \succeq 0$, $V = X^2 - X^1$, and $|\mathrm{BS}(X^1)| = n^-(V) = l$. By construction, $X^1_{B_j} = 0$ for all $j > l$. Moreover, for $i \in [m]$, we have

$$A_i \bullet X^1 = A_i \bullet (S^\top D^1 S) = A_i \bullet (S^\top (D^2 - D) S).$$

Since $A_i \bullet V = A_i \bullet (S^\top D S) = 0$, this implies

$$A_i \bullet X^1 = A_i \bullet (S^\top D^2 S) = A_i \bullet X^2.$$

Hence, for $X_0 = X^1$ the set also contains $X^2 \ne X^1$ and is thus not a singleton.

Conversely, assume that there exists a psd matrix $X_0$ with $|\mathrm{BS}(X_0)| \le l$ such that

$$\{X \succeq 0 : A_i \bullet X = A_i \bullet X_0,\ i \in [m]\}$$

is not a singleton. That is, there exists a matrix $\bar X \succeq 0$ with

$$A_i \bullet \bar X = A_i \bullet X_0, \qquad i \in [m],$$

and $\bar X \ne X_0$. By the principal axis transformation, $X_0$ can be written as

$$X_0 = S^\top D^0 S$$

with an orthonormal block matrix $S$ (w.r.t. the blocks $B_1, \dots, B_k$) and a diagonal matrix $D^0$ with $D^0_{ii} \ge 0$ for all $i$ and, after reordering, $D^0_{ii} = 0$ for all $i$ outside the first $l$ blocks. Setting $V \coloneqq \bar X - X_0$, we have $V \ne 0$ and $A_i \bullet V = 0$, $i \in [m]$.

The block matrix $\bar Y \coloneqq S \bar X S^\top$ then satisfies $\bar Y \succeq 0$ and

$$V = S^\top (\bar Y - D^0) S.$$

Then in $\bar Y - D^0$ only the first $l$ blocks can have negative eigenvalues. Since the transformation matrix $S$ respects the block structure, we have $n^-(V) \le l$. ∎

###### Remark 4.2.

In the special case in which all blocks have size 1, Theorem 4.1 can be stated as follows: In this case all matrices $X$, $A_i$ are diagonal. Let $x \coloneqq \operatorname{diag}(X)$, $a_i \coloneqq \operatorname{diag}(A_i)$, and let $A$ be the matrix formed by the rows $a_i^\top$, $i \in [m]$. Then $A_i \bullet X = A_i \bullet X_0$, $i \in [m]$, is equivalent to $Ax = Ax_0$. The condition states that for all $v \ne 0$ with $Av = 0$ we have $n^+(v) \ge l+1$ and $n^-(v) \ge l+1$, which is [23, Theorem 1].

###### Example 4.3.

Consider again the matrices from Example 3.6. Setting

$$X_0 = \operatorname{diag}\big(\tfrac12, 0, 0, \tfrac12\big)$$

yields $A_1 \bullet X_0 = 0$ and $A_2 \bullet X_0 = 0$. In this case $l = |\mathrm{BS}(X_0)| = 2$. For the corresponding system of equations to have $X_0$ as the unique solution, we would need $n^+(V) \ge 3$ and $n^-(V) \ge 3$ for all symmetric $V \ne 0$ with $A_i \bullet V = 0$, $i \in [2]$. However,

$$V = \operatorname{diag}(1, 1, 1, -1)$$

satisfies the equality constraints, but has $n^-(V) = 1$, which is in accordance with Example 3.6, in which two extreme point solutions arise.

###### Example 4.4.

Let $n$ be even and consider the linear system of equations

(4.1)  $Dv \coloneqq \begin{pmatrix} 1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 & 1 \end{pmatrix} v = 0.$

Then $v_1, \dots, v_n$ form a symmetric matrix $V = \operatorname{diag}(v_1, \dots, v_n)$. Let $A_i$, $i \in [n-1]$, be appropriate symmetric matrices such that

$$A_i \bullet V = 0, \qquad i = 1, \dots, n-1,$$

is equivalent to $Dv = 0$. Without loss of generality, these are block matrices with respect to the blocks $B_1 = \{1\}$, …, $B_n = \{n\}$. In the notation of Theorem 4.1, $X \succeq 0$ with $A_i \bullet X = 0$ for all $i \in [n-1]$ is equivalent to $v \coloneqq \operatorname{diag}(X) \ge 0$ with $Dv = 0$.

Then $v = t \cdot (1, -1, 1, -1, \dots)^\top$ for some $t \in \mathbb{R}$. We can assume w.l.o.g. (by possible multiplication with $-1$) that $t > 0$. Then the entries $v_i$ with odd $i$ will be positive, while the entries with even $i$ will be negative. Thus, any nonzero solution $V$ to $A_i \bullet V = 0$, $i \in [n-1]$, satisfies $n^+(V) = n^-(V) = \frac{n}{2}$. By Theorem 4.1, the system $A_i \bullet X = A_i \bullet X_0$, $i \in [n-1]$, $X \succeq 0$, has the unique (symmetric) solution $X_0$ if $|\mathrm{BS}(X_0)| \le \frac{n}{2} - 1$. Note that the rank of the matrix $D$ is $n-1$, which shows that the system has infinitely many solutions if $X$ is an arbitrary symmetric matrix.
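The structure of the kernel of $D$ can be checked numerically; the following sketch (assuming NumPy) builds the bidiagonal matrix from (4.1) for $n = 8$ and verifies its rank and the alternating sign pattern of the kernel vector.

```python
import numpy as np

n = 8  # any even n works the same way
# (n-1) x n matrix D from (4.1): row i is e_i^T + e_{i+1}^T
D = np.eye(n - 1, n) + np.eye(n - 1, n, k=1)

# Rank n-1, hence a one-dimensional kernel
assert np.linalg.matrix_rank(D) == n - 1

# The kernel is spanned by the alternating vector (1, -1, 1, -1, ...)
v = np.array([(-1.0) ** i for i in range(n)])
assert np.allclose(D @ v, 0)

# Hence n^+(V) = n^-(V) = n/2 for V = diag(v) with blocks of size 1
assert np.sum(v > 0) == n // 2 and np.sum(v < 0) == n // 2
```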

###### Remark 4.5.

Consider the condition on $V$ in Theorem 4.1. The total number of blocks is at most $n$, and if a block contributes both to $n^+(V)$ and $n^-(V)$, the block has to have at least size 2. Therefore, $n^+(V) + n^-(V) \le n$. This implies that the largest $l$ for which $n^+(V) \ge l+1$ and $n^-(V) \ge l+1$ can hold is $l = \lfloor n/2 \rfloor - 1$. Example 4.4 shows that this bound is tight (if $n$ is odd, one can ignore a single variable in $v$ and use the construction on the remaining part). Note that for even $n$ this bound can only be attained in the LP-case, i.e., if all matrices are diagonal.

###### Example 4.6.

Let $n$ be divisible by 3, define $k \coloneqq n/3$, and consider the blocks $B_1 = \{1, 2\}$, …, $B_k = \{2k-1, 2k\}$. Take the same linear system of equations as in Example 4.4 and fill in the variables of a solution $v$ into the symmetric block matrix $V$ as follows:

$$V = \operatorname{diag}\left( \begin{pmatrix} v_1 & v_3 \\ v_3 & v_2 \end{pmatrix},\ \begin{pmatrix} v_4 & v_6 \\ v_6 & v_5 \end{pmatrix},\ \dots,\ \begin{pmatrix} v_{3k-2} & v_{3k} \\ v_{3k} & v_{3k-1} \end{pmatrix} \right).$$

Let $A_i$, $i \in [n-1]$, be symmetric matrices such that

$$A_i \bullet V = 0, \qquad i = 1, \dots, n-1,$$

is equivalent to $Dv = 0$ from (4.1). We can assume that the $A_i$ are block matrices for the above blocks. As in Example 4.4, assuming that $v_1 > 0$, the equations imply that $v_i > 0$ for odd $i$, while $v_i < 0$ for even $i$. Thus, denoting $\lambda \coloneqq v_1$, each block has the following structure:

$$\begin{pmatrix} \lambda & \lambda \\ \lambda & -\lambda \end{pmatrix} \qquad \text{or} \qquad \begin{pmatrix} -\lambda & -\lambda \\ -\lambda & \lambda \end{pmatrix}.$$

In both cases, the eigenvalues are $\pm\sqrt{2}\,\lambda$. Therefore, each block is counted both in $n^+(V)$ and in $n^-(V)$. Thus, any nonzero solution $V$ to $A_i \bullet V = 0$, $i \in [n-1]$, will satisfy $n^+(V) = n^-(V) = k$. By Theorem 4.1, the system $A_i \bullet X = A_i \bullet X_0$, $i \in [n-1]$, $X \succeq 0$, has the unique (symmetric) solution $X_0$ if $|\mathrm{BS}(X_0)| \le k - 1$.
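The eigenvalue claim for the $2 \times 2$ blocks can be verified directly; a short NumPy sketch with $\lambda = 1$:

```python
import numpy as np

lam = 1.0
block = np.array([[lam, lam], [lam, -lam]])
# trace 0 and determinant -2*lam^2 force the eigenvalues ±sqrt(2)*lam
eigs = np.linalg.eigvalsh(block)
assert np.allclose(np.sort(eigs), [-np.sqrt(2) * lam, np.sqrt(2) * lam])
```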

###### Remark 4.7.

Note that the uniqueness conditions of Example 4.6 include matrices $X_0$ with negative entries, which would not be allowed in the LP-case (as in Example 4.4); for example, if blocks of $X_0$ consist of the positive definite matrix

$$\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$$

and 0-blocks otherwise. This shows that while the size of $\mathrm{BS}(X_0)$ is possibly smaller than in the LP-case, the general spectrahedron case allows for a wider range of matrices $X_0$ for which uniqueness appears.

## 5. Conclusion and outlook

We have shown that one direction of the Gleeson–Ryan Theorem for infeasible linear systems generalizes to infeasible block semidefinite systems, but the other direction does not. To nevertheless identify IISs in this situation, we have provided a criterion for particular semidefinite block systems to have a unique feasible solution. If this particular situation does not arise, it is an open question whether one can obtain an IIS by solving a single semidefinite program.

By Lemma 3.3 it would suffice to find solutions of minimal block support. For a matrix $X$ the number of nonzero blocks can be written as

$$\|X\|_{2,0} \coloneqq \big\| \big( \|X_{B_1}\|_2, \dots, \|X_{B_k}\|_2 \big) \big\|_0,$$

where $\|\cdot\|_0$ denotes the number of nonzeros in a vector. Thus, it would suffice to solve the following problem to find an IIS:

(5.1)  $\min\{\|X\|_{2,0} : X \in S(\Sigma)\}.$

Unfortunately, the “norm” $\|\cdot\|_{2,0}$ is nonconvex and thus hard to handle; for instance, (5.1) is NP-hard. However, recent developments for linear systems, see, e.g., [8, 10, 16], suggest to replace $\|\cdot\|_{2,0}$ by

$$\|X\|_{2,1} \coloneqq \big\| \big( \|X_{B_1}\|_2, \dots, \|X_{B_k}\|_2 \big) \big\|_1 = \sum_{i=1}^k \|X_{B_i}\|_2,$$

which leads to the following convex optimization problem:

(5.2)  $\min\{\|X\|_{2,1} : X \in S(\Sigma)\}.$
###### Lemma 5.1.

Problem (5.2) can be formulated as an SDP.

###### Proof.

Use the second-order cone condition

$$\Big\{(x, t) : \Big(\sum_i x_i^2\Big)^{1/2} \le t\Big\}$$

to represent $\|X_{B_i}\|_2 \le t_i$ with new variables $t_1, \dots, t_k$ and minimize the objective function $\sum_{i=1}^k t_i$. It is well-known that second-order cone conditions are special cases of semidefinite conditions (see, e.g., [21]). Since $X \in S(\Sigma)$ is already a positive semidefinite condition (together with linear equations), this concludes the proof. ∎
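Spelled out, a sketch of the resulting formulation (with auxiliary variables $t_1, \dots, t_k$ for the norm bounds, and writing $x_i$ for a vectorization of the block $X_{B_i}$) reads:

```latex
\begin{aligned}
\min_{X,\,t} \quad & \sum_{i=1}^{k} t_i \\
\text{s.t.} \quad  & X \in S(\Sigma), \\
                   & \begin{pmatrix} t_i & x_i^\top \\ x_i & t_i \mathds{I} \end{pmatrix} \succeq 0,
                     \qquad i \in [k].
\end{aligned}
```

Here the last constraints are the standard semidefinite representations of the second-order cone conditions $\|x_i\|_2 \le t_i$: by the Schur complement, the arrow matrix is psd if and only if $t_i \ge 0$ and $t_i^2 \ge \|x_i\|_2^2$.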

An interesting line of future research would investigate conditions under which (5.2) provides an optimal solution for (5.1), which would try to generalize the above mentioned results from the linear to the block semidefinite case.

## References

• [1] T. Achterberg. Conflict analysis in mixed integer programming. Discrete Opt., 4(1):4–20, 2007.
• [2] E. Amaldi, M. E. Pfetsch, and L. E. Trotter, Jr. On the maximum feasible subsystem problem, IISs, and IIS-hypergraphs. Math. Program., 95(3):533–554, 2003.
• [3] G. Blekherman, P. A. Parrilo, and R. R. Thomas. Semidefinite Optimization and Convex Algebraic Geometry. SIAM, Philadelphia, PA, 2013.
• [4] J. W. Chinneck. Finding a useful subset of constraints for analysis in an infeasible linear program. INFORMS J. Comput., 9(2):164–174, 1997.
• [5] J. W. Chinneck. Feasibility and infeasibility in optimization: algorithms and computational methods, volume 118 of International Series in Operations Research and Management Sciences. Springer, 2008.
• [6] J. W. Chinneck and E. W. Dravnieks. Locating minimal infeasible constraint sets in linear programs. ORSA J. Comput., 3(2):157–168, 1991.
• [7] G. Codato and M. Fischetti. Combinatorial Benders’ cuts. In D. Bienstock and G. Nemhauser, editors, Proc. 10th International Conference on Integer Programming and Combinatorial Optimization (IPCO), New York, volume 3064 of LNCS, pages 178–195. Springer-Verlag, Berlin Heidelberg, 2004.
• [8] Y. C. Eldar, P. Kuppinger, and H. Bölcskei. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Transactions on Signal Processing, 58(6):3042–3054, 2010.
• [9] E. Elhamifar and R. Vidal. Block-sparse recovery via convex optimization. IEEE Trans. Signal Process., 60(8):4094–4107, 2012.
• [10] E. Elhamifar and R. Vidal. Block-sparse recovery via convex optimization. IEEE Trans. Signal Process., 60(8):4094–4107, 2012.
• [11] T. Gally, M. E. Pfetsch, and S. Ulbrich. A framework for solving mixed-integer semidefinite programs. To appear in Optimization Methods and Software.
• [12] J. Gleeson and J. Ryan. Identifying minimally infeasible subsystems of inequalities. ORSA J. Comput., 2(1):61–63, 1990.
• [13] O. Guieu and J. W. Chinneck. Analyzing infeasible mixed-integer and integer linear programs. INFORMS J. Comput., 11(1):63–77, 1999.
• [14] M. A. Khajehnejad, A. G. Dimakis, W. Xu, and B. Hassibi. Sparse recovery of nonnegative signals with minimal expansion. IEEE Trans. Signal Processing, 59(1):196–208, 2011.
• [15] I. Klep and S. Schweighofer. An exact duality theory for semidefinite programming based on sums of squares. Math. of Oper. Res., 38(3):569–590, 2013.
• [16] J. H. Lin and S. Li. Block sparse recovery via mixed / minimization. Acta Mathematica Sinica, English Series, 29(7):1401–1412, 2013.
• [17] M. E. Pfetsch. The Maximum Feasible Subsystem Problem and Vertex-Facet Incidences of Polyhedra. PhD thesis, TU Berlin, 2003.
• [18] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1997.
• [19] A. Schrijver. Theory of Linear and Integer Programming. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons, Ltd., Chichester, 1986.
• [20] T. Theobald. Some recent developments in spectrahedral computation. In G. Böckle, W. Decker, and G. Malle, editors, Algorithmic and Experimental Methods in Algebra, Geometry, and Number Theory, pages 717–739. Springer, 2017.
• [21] L. Tunçel. Polyhedral and Semidefinite Programming Methods in Combinatorial Optimization. Fields Institute Monographs. American Mathematical Society, 2010.
• [22] J. N. M. van Loon. Irreducibly inconsistent systems of linear inequalities. Eur. J. Oper. Res., 8(3):283–288, 1981.
• [23] M. Wang and A. Tang. Conditions for a unique non-negative solution to an underdetermined system. In 47th Annual Allerton Conf. on Communication, Control, and Computing, Monticello IL, 2009.
• [24] M. Wang, W. Xu, and A. Tang. A unique “nonnegative” solution to an underdetermined system: From vectors to matrices. IEEE Trans. Signal Processing, 59(3):1007–1016, 2011.
• [25] J. Witzig, T. Berthold, and S. Heinz. Experiments with conflict analysis in mixed integer programming. In Integration of AI and OR Techniques in Constraint Programming, volume 10335, pages 211–222, 2017.