
# A linear complementarity based characterization of the weighted independence number and the independent domination number in graphs

Parthe Pandit, Ankur A. Kulkarni. Parthe and Ankur are with the Systems and Control Engineering group at the Indian Institute of Technology Bombay, Mumbai, India 400076. They can be contacted at parthe.pandit@iitb.ac.in and kulkarni.ankur@iitb.ac.in, respectively.
###### Abstract

The linear complementarity problem is a continuous optimization problem that generalizes convex quadratic programming, Nash equilibria of bimatrix games and several such problems. This paper presents a continuous optimization formulation for the weighted independence number of a graph by characterizing it as the maximum weighted norm over the solution set of a linear complementarity problem (LCP). The minimum norm of solutions of this LCP is a lower bound on the independent domination number of the graph. Unlike the case of the maximum norm, this lower bound is in general weak, but we show it to be tight if the graph is a forest. Using methods from the theory of LCPs, we obtain a few graph theoretic results. In particular, we provide a stronger variant of the Lovász theta of a graph. We then provide sufficient conditions for a graph to be well-covered, i.e., for all maximal independent sets to also be maximum. This condition is also shown to be necessary for well-coveredness if the graph is a forest. Finally, the reduction of the maximum independent set problem to a linear program with (linear) complementarity constraints (LPCC) shows that LPCCs are hard to approximate.

## 1 Introduction

This paper concerns a new continuous optimization formulation for the independence number of a graph. An undirected graph is given by the pair G = (V, E), where V is a finite set of vertices and E is a set of unordered pairs of vertices called edges. Two vertices are said to be connected if there exists an edge between them. Connected vertices are also called neighbours. An independent set of G is a set of pairwise disconnected vertices, and an independent set of largest cardinality is called a maximum independent set. The cardinality of the maximum independent set is called the independence number of G, denoted α(G).

Closely related are the concepts of maximality and domination. An independent set is said to be maximal if it is not a subset of any larger independent set. Clearly a maximum independent set is maximal, but the converse is not true in general. A set D ⊆ V is a dominating set if every v ∈ V ∖ D has a neighbour in D. One can show that every vertex not in a maximal independent set has at least one neighbour in the set, whereby maximal independent sets are also dominating sets.

Computing the independence number of a general graph is NP-complete, although it is known to be solvable in polynomial time for some subclasses, such as claw-free graphs and perfect graphs [16, 6]. Computing the independence number is clearly a discrete optimization problem. However there are several continuous optimization formulations for this quantity. Perhaps the most well known amongst them is the result by Motzkin and Straus [17] which shows that for a graph G with n vertices,

 1/α(G) = min{ x⊤(A+I)x ∣ e⊤x = 1, x ≥ 0 },

where e is the vector of 1's in ℝⁿ (throughout this paper, vectors are column vectors), A is the adjacency matrix of G (i.e., a_ij = 1 if (i,j) ∈ E and a_ij = 0 otherwise), and I is the n × n identity matrix. Among other continuous formulations, the ones by Harant are noteworthy [9, 8]. Specifically, [8, Theorem 7] shows,

 α(G) = max{ e⊤x − (1/2) x⊤Ax ∣ 0 ≤ x ≤ e }.
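Since Harant's objective is affine in each coordinate (the diagonal of A is zero), its maximum over the box is attained at a binary point, so both quantities can be sanity-checked by brute force on a small graph. A sketch, assuming numpy; the graph, the helper name and the enumeration are illustrative, not from the paper:

```python
import itertools
import numpy as np

# 5-cycle C5, whose independence number is 2.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

def alpha_bruteforce(A):
    """alpha(G) by enumerating all vertex subsets and keeping independent ones."""
    n = len(A)
    best = 0
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        if x @ A @ x == 0:          # no edge inside the chosen subset
            best = max(best, int(x.sum()))
    return best

# Harant's objective e^T x - (1/2) x^T A x, restricted to binary x:
vals = [sum(b) - 0.5 * (np.array(b) @ A @ np.array(b))
        for b in itertools.product([0, 1], repeat=n)]
assert max(vals) == alpha_bruteforce(A) == 2
```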

For a given weight vector w ∈ ℝⁿ, the weight of a set S ⊆ V is the quantity ∑_{i∈S} w_i. The weighted independence number, denoted α_w(G), is the maximum of the weights over all the independent sets, i.e.,

 α_w(G) := max{ ∑_{i∈S} w_i ∣ S ⊆ V is independent }.

Clearly α(G) is α_e(G). This paper characterizes the weighted independence number of a graph in terms of the linear complementarity problem (LCP). Given a matrix M ∈ ℝ^{n×n} and a vector q ∈ ℝⁿ, LCP(M, q) is the following problem:

 Find x ∈ ℝⁿ such that  x ≥ 0,  Mx + q ≥ 0,  x⊤(Mx + q) = 0.

Notice that due to the nonnegativity of x and Mx + q, the last condition is equivalent to requiring x_i(Mx + q)_i = 0 for all i. This requirement is referred to as the complementarity condition. A vector x is said to be a solution of LCP(M, q) if it satisfies the above three conditions. LCPs arise naturally in the characterization of equilibria of bimatrix games and several other problems in operations research. We discuss this problem class later in this paper.

For a simple graph G (we consider only simple graphs, i.e., graphs without self loops, which means a_ii = 0 for all i ∈ V) with n vertices, consider the LCP(A+I, −e), i.e.,

 LCP(G):  Find x ∈ ℝⁿ such that  x ≥ 0,  (A+I)x ≥ e,  x⊤((A+I)x − e) = 0.

We refer to this as LCP(G) and to its solution set as SOL(G). Let the characteristic vector of a set S ⊆ V be denoted by 1_S; it is the vector in {0,1}ⁿ whose iᵗʰ element is 1 iff i ∈ S. It is easy to show that if S is a maximum independent set in G, then 1_S solves LCP(G). Consequently, we always have,

 α(G) ≤ max{ e⊤x ∣ x solves LCP(G) }. (1)

Our main result in this paper shows that the inequality in (1) is always tight, even for the weighted independence number.

###### Theorem 1

For any simple graph G and weight vector w ≥ 0,

 α_w(G) = max{ w⊤x ∣ x solves LCP(A+I, −e) },

where A is the adjacency matrix of G, I is the identity matrix and e is the vector of 1's in ℝⁿ.

To note why the above result is not obvious, consider the quantity β(G), defined as the smallest size of a maximal independent set in G (also known as the independent domination number). One can show that the characteristic vector of every maximal independent set solves LCP(G) (Lemma 4 in the next section). Hence, analogous to (1),

 β(G)≥min{e⊤x∣x solves LCP(A+I,−e)}. (2)

We show in Section 3.2 that this inequality is in general strict; however, equality is achieved when the graph is a forest (i.e., a graph that is a union of disjoint trees). Indeed we have the following theorem.

###### Theorem 2

For a forest G,

 β(G) = min{ e⊤x ∣ x solves LCP(A+I, −e) },

where A is the adjacency matrix of G, I is the identity matrix and e is the vector of 1's in ℝⁿ.

### 1.1 Contributions

Our main contributions in this paper are centered around Theorem 1 and its consequences. We also consider the analogous problem of the minimum ℓ1 norm of points in SOL(G) and its relation to β(G). As mentioned above, unlike for the maximum (i.e., Theorem 1), the inequality in (2) is in general strict; in fact the right hand side in (2) need not even be an integer. However, as indicated by Theorem 2, we show that this inequality is tight for forest graphs.

We perform a semidefinite programming (SDP) relaxation of the optimization problem resulting from Theorem 1 to give a tighter version of the Lovász theta [15]. The optimization problem in Theorem 1 results in a more compact integer linear program (ILP) formulation for α(G) than previous edge-based ILP formulations. The feasible lattice of this new ILP characterizes only the maximal independent sets of the graph. An application of lift-and-project relaxations gives our improved Lovász theta variant. Numerically we have verified that our variant is in general stronger than other Lovász theta variants that employ the same number of constraints.

Graphs for which all maximal independent sets are of the same cardinality are called well-covered graphs [20]. Using Theorem 1 and Theorem 2 we derive a new characterization of well-coveredness for forests: specifically, a forest G is well-covered if and only if e⊤x is constant over x ∈ SOL(G) (here too the “if” direction is easy to see; the “only if” needs a proof).

Theorem 1 gives a characterization of the weighted independence number of a graph via a linear program with (linear) complementarity constraints (LPCC). An LPCC in its most general form is written as,

 LPCC:  maximize_{x,y}  c⊤x + d⊤y
        subject to      Bx + Cy ≥ b,
                        Mx + Ny + q ≥ 0,  x ≥ 0,
                        x⊤(Mx + Ny + q) = 0.

Notice that a feasible pair (x, y) for the LPCC comprises an x that solves LCP(M, Ny + q), and another variable y that parametrizes this LCP; the pair must also satisfy an additional affine constraint Bx + Cy ≥ b. Clearly, taking B, C, N, b and d to be 0 vectors or matrices of appropriate dimension gives a special case of the LPCC in which a linear function is maximized over the solution set of an LCP. This is precisely the structure of Theorem 1.

LPCCs generalize several problem classes including linear programming, and finding sparse (minimum norm) solutions of linear equations. Their study is gathering momentum in the operations research literature [11, 12] as new applications get discovered. Theorem 1 reveals weighted independence number as another application. LCPs on the other hand are a widely and deeply studied problem class; see, e.g., [18] and [4]. Theorem 1 brings in the possibility of using results from the theory of LCPs and LPCCs to develop algorithms or bounds on the independence number. Indeed our results are obtained by appealing to properties of LCPs.

Finally, the reduction of the independence number problem to an LPCC shows that for an LPCC with n variables, it is NP-hard to approximate its optimal value within a factor of n^{1−ε} for any ε > 0 (assuming P ≠ NP), even for a strong class of problems with only binary data. This follows from the fact that approximating the independence number within this factor is NP-hard, due to a result by Håstad [10].

### 1.2 Organization of the paper

The rest of the paper is organized as follows. Section 2 elaborates on a few properties of LCP(A+I, −e) and recounts some background about LCPs. It is followed by the proof of Theorem 1 in Section 3. Section 4 derives results pertaining to our SDP relaxation, well-covered graphs and the complexity of LPCCs. The paper concludes in Section 5.

## 2 Preliminaries

### 2.1 Background on LCPs

Much of what follows is standard and well-documented [4]; we recount it here for the benefit of the reader. Linear complementarity problems arise naturally through the modeling of several problems in optimization and allied areas. As an example, consider a convex quadratic program:

 QP:  minimize_x   (1/2) x⊤Qx + c⊤x
      subject to   Ax ≥ b,   (multipliers λ)
                   x ≥ 0,

where Q is a symmetric positive semidefinite matrix, and A and b are a matrix and a vector of appropriate dimensions. If λ denotes the vector of Lagrange multipliers corresponding to the constraint “Ax ≥ b”, from the Karush-Kuhn-Tucker conditions it is easy to derive that x solves QP if and only if there exists λ such that,

 (x, λ) ≥ 0,   (Qx + c − A⊤λ, Ax − b) ≥ 0,   (x, λ)⊤ (Qx + c − A⊤λ, Ax − b) = 0.

This is clearly an LCP in the (x, λ)-space.

Another, famous, example comes from Nash equilibria of two person games. Consider a simultaneous move game with two players with loss matrices A, B ∈ ℝ^{n×m}. A Nash equilibrium [19] is a pair of vectors (x∗, y∗) ∈ Δ_n × Δ_m such that,

 (x∗)⊤Ay∗ ≤ x⊤Ay∗, ∀ x ∈ Δ_n,    (x∗)⊤By∗ ≤ (x∗)⊤By, ∀ y ∈ Δ_m,

where Δ_k is the probability simplex in ℝ^k. Assuming A and B have positive entries, by suitable transformations (see, e.g., [4, p. 6]), it can be shown that if (x∗, y∗) is a Nash equilibrium, then (x′, y′), where,

 x′ = x∗/((x∗)⊤By∗),    y′ = y∗/((x∗)⊤Ay∗),

solves LCP(M, q) with,

 M = [ 0  A ; B⊤  0 ],    q = −e,

where e denotes a vector of 1's in ℝ^{n+m}. Conversely, if (x′, y′) solves LCP(M, q), then the pair obtained by normalizing x′ and y′ to lie in the respective probability simplices is a Nash equilibrium. More generally, certain equilibria of games involving coupled constraints [14] also reduce to LCPs. For more applications, we refer the reader to [4].

LCPs may have unique, finitely many, infinitely many or no solutions. In the case where it has a solution, we say that the LCP is solvable. LCPs with rational inputs are known to be NP-complete [2] (Theorem 1 also yields this as a corollary). Without the complementarity condition, i.e., the third requirement in the definition of LCP(M, q), an LCP amounts only to finding a feasible point for a set of linear inequalities. Since the complementarity condition is equivalent to asking that ‘for all i, either x_i = 0 or (Mx + q)_i = 0’, one can equivalently reformulate the LCP as asking for an x that is feasible for at least one out of 2ⁿ systems of linear inequalities. Specifically, if for some subset of indices S ⊆ {1, …, n}, x satisfies the following linear inequalities,

 x ≥ 0,  y = Mx + q ≥ 0,  x_j = 0 ∀ j ∉ S,  and  y_j = 0 ∀ j ∈ S,

then clearly x solves LCP(M, q). Conversely, if x solves LCP(M, q), then one may take S to be the support of x to verify the above inequalities. The hardness of an LCP arises from the exponential number of possible choices for S. This also demonstrates that although an LCP is ostensibly a continuous optimization problem, it implicitly encodes a problem of combinatorial character.
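The subset-enumeration view above translates directly into an exponential-time solver. A minimal sketch, assuming numpy; the function name and the tiny example graph are illustrative, not from the paper:

```python
import itertools
import numpy as np

def solve_lcp_by_patterns(M, q):
    """For each guess S of the support of x, set x_j = 0 off S and solve
    y_S = 0, i.e. M[S,S] x_S = -q_S; keep x if it is feasible
    (x >= 0, Mx + q >= 0). Exponential in n: 2^n patterns."""
    M = np.asarray(M, float)
    q = np.asarray(q, float)
    n = len(q)
    sols = []
    for r in range(n + 1):
        for S in map(list, itertools.combinations(range(n), r)):
            x = np.zeros(n)
            if S:
                try:
                    x[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue        # singular block: skip this pattern guess
            y = M @ x + q
            if (x >= -1e-9).all() and (y >= -1e-9).all():
                sols.append(x)
    return sols

# LCP(G) for the path 0-1-2: the solutions found are 1_{1} and 1_{0,2},
# the characteristic vectors of the two maximal independent sets.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
sols = solve_lcp_by_patterns(A + np.eye(3), -np.ones(3))
```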

Results on LCPs concern questions such as existence, uniqueness, and boundedness of solutions, and their stability to changes in the vector q, in addition to computation. A typical line of attack has been to characterize classes of matrices M and vectors q for which the LCP(M, q) has the desired properties. A vast variety of matrix classes have been analyzed; we refer the reader to [4] for more on this topic.

### 2.2 LCP(A+I,−e) and its properties

For a graph G with n vertices we now study a few properties of LCP(G), i.e., LCP(A+I, −e), where A is the adjacency matrix of G, I is the identity matrix, and e is the vector of 1's. We define the support of a vector x ∈ ℝⁿ as

 σ(x):={i∈V∣xi>0}.

For S, T ⊆ V, we denote by N_T(S) the neighbourhood of the set S relative to T, i.e., the set of vertices in T that have a neighbour in S. For a singleton {i} we denote it by N_T(i), and the subscript is dropped if T = V. For x ∈ ℝⁿ and i ∈ V, we denote by x_i the iᵗʰ component of x. We call C_i(x) the sum of the closed neighbourhood of i with respect to x. Clearly,

 C_i(x) := x_i + ∑_{j∈V} a_ij x_j = x_i + ∑_{j∈N(i)} x_j. (2)

Observe that a vector x solving LCP(G) is equivalent to the following three conditions:

 x ≥ 0  ⇔  x_i ≥ 0, ∀ i ∈ V, (3)
 (A+I)x ≥ e  ⇔  C_i(x) ≥ 1, ∀ i ∈ V, (4)
 x⊤((A+I)x − e) = 0  ⇔  x_i(C_i(x) − 1) = 0, ∀ i ∈ V. (5)

For the rest of the paper, the constraint x_i(C_i(x) − 1) = 0 is called the complementarity constraint for vertex i. Note that LCP(G) may have fractional solutions. For example, if G is regular with degree d, then x = e/(d+1) solves LCP(G). We now study a few additional properties of the structure of SOL(G). For a graph G, let V(G) denote the vertex set of G, and for a set U ⊆ V, let G_U denote the subgraph induced by U. For a vector x of size |V|, denote by x_U the corresponding subvector of x indexed by vertices in U.
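The fractional solution for regular graphs mentioned above is easy to verify numerically against conditions (3)-(5). A small sketch, assuming numpy; the helper name and the choice of the 5-cycle are illustrative, not from the paper:

```python
import numpy as np

def is_lcp_solution(A, x, tol=1e-9):
    """Check conditions (3)-(5) for LCP(G): x >= 0, (A+I)x >= e,
    and complementarity x^T((A+I)x - e) = 0."""
    y = (A + np.eye(len(x))) @ x - 1.0      # (A+I)x - e
    return bool((x >= -tol).all() and (y >= -tol).all() and abs(x @ y) < tol)

# 5-cycle C5 (degree d = 2): x = e/(d+1) = e/3 satisfies (A+I)x = e exactly.
n, d = 5, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
assert is_lcp_solution(A, np.full(n, 1 / (d + 1)))
```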

###### Lemma 3

Consider a graph G and the LCP(A+I, −e), where A is the adjacency matrix of G, I is the identity matrix, and e is the vector of 1's. Then,

1. 0 ∉ SOL(G).

2. x ≤ e for every x ∈ SOL(G).

3. If x ∈ SOL(G) and x_i = 0 for some i ∈ V, then ∑_{j∈N(i)} x_j ≥ 1.

4. If a graph G is a disjoint union of graphs G_1 and G_2, then SOL(G) = SOL(G_1) × SOL(G_2).

5. For a graph G, if x ∈ SOL(G), then σ(x) is a dominating set of G.

6. For a graph G, if x ∈ SOL(G), then x_{σ(x)} ∈ SOL(G_{σ(x)}) and e⊤x_{σ(x)} = e⊤x.

Proof :

1. Clearly, for x = 0 we have C_i(x) = 0 < 1 for all i ∈ V, which violates (4). Thus 0 ∉ SOL(G).

2. Let x ∈ SOL(G); then x_i(C_i(x) − 1) = 0 for all i ∈ V. By definition C_i(x) = x_i + ∑_{j∈N(i)} x_j ≥ x_i. Hence x_i ≤ 1 for all i ∈ V, with equality occurring only when ∑_{j∈N(i)} x_j = 0.

3. Let x ∈ SOL(G); then C_i(x) ≥ 1 for all i ∈ V. Now suppose x_i = 0 for some i; then C_i(x) = ∑_{j∈N(i)} x_j, and if this sum were less than 1, (4) would be violated. Hence ∑_{j∈N(i)} x_j ≥ 1.

4. Let A, A_1 and A_2 be the adjacency matrices of G, G_1 and G_2 respectively. Let x ∈ ℝⁿ and, for i = 1, 2, let x_i and e_i respectively denote the subvectors of x and e indexed by vertices in V(G_i). Observe that since G is a disjoint union of two graphs, A is a block diagonal matrix with A_1 and A_2 as diagonal blocks.

Since x ∈ SOL(G), we have x ≥ 0, (A+I)x ≥ e, and x⊤((A+I)x − e) = 0. This means for i = 1, 2, x_i ≥ 0, (A_i+I)x_i ≥ e_i, and x_i⊤((A_i+I)x_i − e_i) = 0, whereby x_i ∈ SOL(G_i). Conversely, if x_i ∈ SOL(G_i) for i = 1, 2, then (x_1, x_2) ∈ SOL(G). This proves the claim.

5. If x ∈ SOL(G), then by Lemma 3 3 we have ∑_{j∈N(i)} x_j ≥ 1 for every i with x_i = 0. Hence for all i ∉ σ(x), N(i) ∩ σ(x) ≠ ∅. This means that every vertex not in σ(x) has at least one neighbour in σ(x). This proves that σ(x) is a dominating set.

6. Let x ∈ SOL(G) and U = σ(x), whereby C_i(x) = 1 for all i ∈ U. Hence for the graph G_U, x_U > 0. Moreover, notice that for a vertex i in U, the sum of closed neighbourhoods with respect to G_U, denoted C_i^U(x_U), equals C_i(x), since all vertices j ∉ U have x_j = 0. Hence for i in U we have C_i^U(x_U) = 1, x_U ≥ 0, and x_i(C_i^U(x_U) − 1) = 0. Hence x_U ∈ SOL(G_U).

We now study a property associated with the integer solutions of LCP(G).

###### Lemma 4

For a graph G, a vector is an integral solution of LCP(G) if and only if it is the characteristic vector of a maximal independent set of G.

Proof : From Lemma 3 2, we know that an integer solution to LCP(G) is necessarily a binary vector and hence it is the characteristic vector of some set contained in V.

Consider such a binary vector 1_S for some set S ⊆ V. It always satisfies (3). We first show that 1_S satisfying the complementarity constraint (5) is equivalent to S being an independent set. Next, we show that, if S is independent, then 1_S satisfying (4) is equivalent to S being a maximal independent set. These claims together complete the proof.

First we note that S is an independent set if and only if the sum ∑_{i∈S} ∑_{j∈S} a_ij vanishes: If S is an independent set then a_ij = 0 for all i, j ∈ S and hence this sum is 0. Conversely, if this sum vanishes, then all the terms appearing in it, being non-negative, are necessarily zero, whereby S is an independent set. Observe that this sum is in fact 1_S⊤A1_S. Since 1_S is binary, 1_S⊤1_S = 1_S⊤e and hence 1_S⊤((A+I)1_S − e) = 1_S⊤A1_S. Hence,

 S is independent  ⟺  1_S⊤((A+I)1_S − e) = 0  ⟺  1_S satisfies (5).

Finally, if S is an independent set, then C_i(1_S) = 1 for all i ∈ S. Moreover, C_i(1_S) ≥ 1 for all i ∉ S means every vertex not in S has at least one neighbour in S. Recall that this is a property of maximal independent sets. Hence,

 If S is independent, then (A+I)1_S ≥ e  ⟺  S is a maximal independent set.

This concludes the proof of the lemma.

As a consequence of Lemma 4, we have,

 α(G) = max{ e⊤x ∣ x ∈ {0,1}ⁿ ∩ SOL(G) }   and   β(G) = min{ e⊤x ∣ x ∈ {0,1}ⁿ ∩ SOL(G) }.
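These two identities can be checked by enumerating binary vectors and keeping those solving LCP(G), which by Lemma 4 are exactly the maximal independent sets. A brute-force sketch, assuming numpy; the function name and the star example are illustrative, not from the paper:

```python
import itertools
import numpy as np

def alpha_beta_via_lcp(A):
    """alpha(G) and beta(G) as the max/min l1 norm of binary solutions of
    LCP(A+I, -e). Exponential brute force, for tiny graphs only."""
    n = len(A)
    M = A + np.eye(n)
    sizes = []
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, float)
        y = M @ x - 1.0                      # (A+I)x - e
        if (y >= -1e-9).all() and abs(x @ y) < 1e-9:
            sizes.append(int(x.sum()))
    return max(sizes), min(sizes)

# Star K_{1,3}: the maximal independent sets are the centre alone and the
# three leaves together, so alpha = 3 and beta = 1.
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1
assert alpha_beta_via_lcp(A) == (3, 1)
```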

The next lemma provides an upper bound on the ℓ1 norm of a solution of LCP(G) if the support of the solution contains a maximal independent set.

###### Lemma 5

If a maximal independent set of a graph G is contained in the support of a solution to LCP(G), then the ℓ1 norm of the solution is upper bounded by the cardinality of the set.

Proof : Let x be a solution of the LCP(G) such that S ⊆ σ(x), where S is a maximal independent set. We have to show e⊤x ≤ |S|. Let U = σ(x) ∖ S. Then, for i ∈ σ(x),

 C_i(x) = ∑_{j∈V} a_ij x_j + x_i = 1.

Summing over i ∈ σ(x) gives,

 e⊤x + ∑_{i∈σ(x)} ∑_{j∈V} a_ij x_j = |σ(x)| = |S| + |U|.

Thus,

 |S| − e⊤x (a)= ∑_{i∈S} ∑_{j∈U} a_ij x_j + ∑_{i∈U} ∑_{j} a_ij x_j − |U|
           (b)= ∑_{j∈U} |N_S(j)| x_j + ∑_{i∈U} ∑_{j} a_ij x_j − |U|
           (c)= ∑_{j∈U} |N_S(j)| x_j − ∑_{i∈U} x_i.

The equality in (a) follows from splitting the first summation in the previous display and applying x_j = 0 for all j ∉ σ(x). To obtain the equality in (b), the order of summation in the first term is interchanged. Equality (c) is obtained by adding the constraints C_i(x) = 1 over i ∈ U.

Recall that for a maximal independent set S, every vertex not in S has at least 1 neighbour in S. Hence |N_S(j)| ≥ 1 for all j ∈ U, whereby ∑_{j∈U} |N_S(j)| x_j − ∑_{i∈U} x_i = ∑_{j∈U} (|N_S(j)| − 1) x_j ≥ 0. Hence we have,

 e⊤x ≤ |S|,  ∀ S ⊆ σ(x) such that S is a maximal independent set.

This proves the lemma.

Lemma 5 describes an upper bound for solutions containing a maximal independent set in their support. If the graph is a forest, i.e., a collection of trees, then for every solution of the LCP(G) there exists a maximal independent set in its support. This is proved later in Lemma 9.

The following lemma states a few results regarding SOL(G) when G belongs to a few specific classes of graphs, namely regular graphs, cliques and trees.

###### Lemma 6
1. For a complete graph K_n over n vertices, SOL(K_n) = {x ∈ ℝⁿ ∣ x ≥ 0, e⊤x = 1}.

2. For a forest G, if x ∈ SOL(G) and σ(x) = V(G), then G is a disjoint union of K_1's or K_2's.

3. For a forest G, if x ∈ SOL(G), then G_{σ(x)} is a union of K_1's and K_2's.

4. For a regular graph R_{n,d} over n vertices with degree d, β(R_{n,d}) ≥ n/(d+1).

Proof :

1. For a complete graph K_n, the matrix A+I is the matrix of all ones. Let x ∈ SOL(K_n); then the complementarity constraint simplifies to (e⊤x)(e⊤x − 1) = 0. Since x ≥ 0 and, by Lemma 3 1, x ≠ 0, we have e⊤x > 0. This implies e⊤x = 1.

Observe that if the graph is K_n, C_i(x) = e⊤x for all i ∈ V. Let x ≥ 0 with e⊤x = 1; then C_i(x) = 1 for all i, whereby x ∈ SOL(K_n). Hence SOL(K_n) = {x ∈ ℝⁿ ∣ x ≥ 0, e⊤x = 1}.

2. We first show that if the graph is a tree and there exists a solution to the LCP with full support, then the tree must be either K_1 or K_2. Consider a tree T other than K_1 or K_2, and let x ∈ SOL(T) with σ(x) = V(T), i.e., x_i > 0 for all i. Then C_i(x) = 1 for all i due to the complementarity constraint. Consider a leaf vertex u of T and its neighbour v. Since T is neither K_1 nor K_2, the degree of v is at least 2. Hence the closed neighbourhood of u is a strict subset of that of v, whereby C_v(x) > C_u(x) = 1, which is a contradiction. This proves the claim.

Now consider a forest G, and let G_i be its iᵗʰ connected component. Let x ∈ SOL(G) be such that σ(x) = V(G), and let x_i denote the subvector of x indexed by vertices in G_i. By Lemma 3 4, we know that x_i ∈ SOL(G_i). Moreover σ(x_i) = V(G_i) for all components G_i. Hence each G_i is either K_1 or K_2, and 2 stands proven.

3. For a forest G, let x ∈ SOL(G). From Lemma 3 6 we have x_{σ(x)} ∈ SOL(G_{σ(x)}) and σ(x_{σ(x)}) = V(G_{σ(x)}). Observe that G_{σ(x)} is also a forest. Hence it follows from Lemma 6 2 that G_{σ(x)} is a union of K_1s and K_2s.

4. For a regular graph, (d, e) is an eigenvalue-eigenvector pair of the adjacency matrix. Hence e/(d+1) ∈ SOL(R_{n,d}) with C_i(e/(d+1)) = 1 for all i and σ(e/(d+1)) = V. Using Lemma 5 proves 4, since β(R_{n,d}) is the cardinality of the smallest maximal independent set of R_{n,d} and every maximal independent set is contained in σ(e/(d+1)) = V.

## 3 Main results

### 3.1 Proof of Theorem 1

For a vector of non-negative weights w (it can be easily shown that for an unconstrained weight vector, α_w(G) equals the weighted independence number of the subgraph of G over the vertices with non-negative weights; thus we only consider non-negative weight vectors for the rest of the paper), let,

 M_w(G) := max{ w⊤x ∣ x solves LCP(A+I, −e) }. (6)

To reiterate the statement of the theorem: for a simple graph G,

 α_w(G) = M_w(G).

Proof of Theorem 1: We prove Theorem 1 by showing inequalities in both directions. From Lemma 4, for a simple graph G, the characteristic vector of every maximal independent set is a solution to LCP(G). The maximum weighted independent set S∗ being a maximal independent set (this is true only since w ≥ 0; one can easily construct a graph with unconstrained vertex weights such that the maximum weighted independent set is not a maximal independent set) gives a feasible vector 1_{S∗} for the maximization problem (6). Hence,

 α_w(G) = w⊤1_{S∗} ≤ M_w(G).

We show α_w(G) ≥ M_w(G) by induction on the number of vertices of G. For the graph consisting of a single vertex, the adjacency matrix is the scalar 0 and SOL(G) = {1}. Thus the statement holds for the base case n = 1.

Let us assume the induction hypothesis for all graphs with fewer than n vertices, i.e.,

 α_w(G) ≥ M_w(G),  ∀ G = (V, E) such that |V| < n.

Let G∗ be a graph with n vertices labelled 1, …, n. Let x∗ be the maximizer of (6), i.e., M_w(G∗) = w⊤x∗. We consider two cases based on the support of x∗.

Case I: σ(x∗) = V. Let the maximum weighted independent set be S, i.e., α_w(G∗) = w⊤1_S. Let S^c = V ∖ S be its complement. The complementarity constraint on x∗ dictates C_i(x∗) = 1 for all i ∈ V, i.e.,

 C_i(x∗) = ∑_{j∈V} a_ij x∗_j + x∗_i = 1. (7)

Hence,

 M_w(G∗) = ∑_{i∈S^c} w_i x∗_i + ∑_{i∈S} w_i x∗_i (d)= ∑_{i∈S^c} w_i x∗_i + ∑_{i∈S} w_i − ∑_{i∈S} ∑_{j∈V} w_i a_ij x∗_j, (8)
 (e)= α_w(G∗) + ∑_{j∈S^c} w_j x∗_j − ∑_{i∈S} ∑_{j∈S^c} w_i a_ij x∗_j,
 = α_w(G∗) − ∑_{j∈S^c} x∗_j ( ∑_{i∈S} w_i a_ij − w_j ).

Here (d) is obtained by multiplying each equation (7) by w_i, adding these equations over i ∈ S, and then substituting the resulting expression for ∑_{i∈S} w_i x∗_i. (e) follows from using a_ij = 0 for i, j ∈ S and ∑_{i∈S} w_i = α_w(G∗).

We now show an intermediate inequality: ∑_{i∈S} w_i a_ij ≥ w_j for all j ∈ S^c. To prove this, suppose the contrary holds for some j ∈ S^c, i.e., ∑_{i∈S} w_i a_ij < w_j, and consider the set S′ = (S ∖ N(j)) ∪ {j}. Clearly S′ is an independent set. Moreover, the weight of S′ is greater than the weight of S by w_j − ∑_{i∈S} w_i a_ij, a positive quantity by assumption. This contradicts that S is a maximum weighted independent set. Hence ∑_{i∈S} w_i a_ij ≥ w_j for all j ∈ S^c, whereby, from (8),

 Mw(G∗)≤αw(G∗),

as required.
Case II: σ(x∗) ⊊ V, a strict subset. Let U = σ(x∗) without loss of generality. Let G∗_U be the subgraph of G∗ induced by U. Let y and w̃ be vectors in ℝ^{|U|} such that y_i = x∗_i and w̃_i = w_i for all i ∈ U. Clearly y > 0. Also for all i ∈ U,

 C_i(y) = ∑_{j∈U} a_ij y_j + y_i = ∑_{j∈U} a_ij x∗_j + x∗_i = ∑_{j∈V} a_ij x∗_j + x∗_i = C_i(x∗) ≥ 1.

Moreover, for all i ∈ U,

 y_i(C_i(y) − 1) = x∗_i(C_i(x∗) − 1) (f)= 0,

where (f) follows since x∗ ∈ SOL(G∗). Hence y ∈ SOL(G∗_U) and we have,

 M_w(G∗) = w⊤x∗ = w̃⊤y ≤ M_{w̃}(G∗_U).

The inequality above holds since y is a feasible vector for the maximization program defining M_{w̃}(G∗_U). Now since G∗_U is a graph with fewer than n vertices, the induction hypothesis dictates that M_{w̃}(G∗_U) ≤ α_{w̃}(G∗_U). Moreover, since G∗_U is a subgraph of G∗, every independent set in G∗_U is an independent set in G∗ and hence we have α_{w̃}(G∗_U) ≤ α_w(G∗). Hence we have,

 Mw(G∗)≤αw(G∗).

Having considered two exhaustive cases, the inequality α_w(G∗) ≥ M_w(G∗) is proved. This concludes the proof of Theorem 1.
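Theorem 1 can be sanity-checked on a small weighted graph by comparing the support-enumeration value of max w⊤x over SOL against a direct computation of α_w. A brute-force sketch, assuming numpy; the weighted star, the weights and the function name are illustrative, not from the paper:

```python
import itertools
import numpy as np

# Weighted star K_{1,3}: centre 0 has weight 5, each leaf has weight 1, so
# alpha_w = 5 (the centre alone beats the three leaves, whose total is 3).
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1
w = np.array([5.0, 1.0, 1.0, 1.0])
M = A + np.eye(4)

def max_w_over_sol(M, w):
    """max w^T x over SOL(G) by enumerating complementarity patterns:
    guess the support S, solve M[S,S] x_S = e_S, keep feasible points."""
    q = -np.ones(len(w))
    best = -np.inf
    for r in range(len(w) + 1):
        for S in map(list, itertools.combinations(range(len(w)), r)):
            x = np.zeros(len(w))
            if S:
                try:
                    x[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue
            y = M @ x + q
            if (x >= -1e-9).all() and (y >= -1e-9).all():
                best = max(best, float(w @ x))
    return best

assert abs(max_w_over_sol(M, w) - 5.0) < 1e-6   # matches alpha_w
```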

The uniformly weighted version of Theorem 1 is stated below.

###### Theorem 7

For any simple graph G,

 α(G) = max{ e⊤x ∣ x ∈ SOL(G) } = max_{x∈Zⁿ} { e⊤x ∣ x ∈ SOL(G) },

where A is the adjacency matrix of G, I is the identity matrix and e is the vector of 1's in ℝⁿ.

Recall from Lemma 4 that, for a graph G with adjacency matrix A, the integer solutions of LCP(A+I, −e) are characteristic vectors of maximal independent sets of G, whereby α(G) is the maximum ℓ1 norm of binary vectors in SOL(G). The next section discusses the minimum ℓ1 norm of points in SOL(G).

### 3.2 Minimum ℓ1 norm solution of LCP(G) and the independent domination number

We now study a few properties of the minimum ℓ1 norm of vectors in SOL(G). We define,

 m(G):=min{e⊤x∣x∈SOL(G)}. (9)

Interestingly, unlike the maximum ℓ1 norm, the minimum ℓ1 norm is not necessarily an integer. Recall that for a simple graph G, β(G) is the size of the smallest maximal independent set. Hence by Lemma 4 we have,

 β(G)=min{e⊤x∣x∈SOL(G)∩{0,1}n}≥m(G). (10)

The inequality above is in general strict, and we show that the gap persists even for bipartite and regular graphs. However, equality is guaranteed if the graph is a forest (this is the claim of Theorem 2, which we prove below).
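For a concrete gap, consider the 4-cycle C4, which is both bipartite and 2-regular: the fractional point e/3 lies in SOL(C4) with ℓ1 norm 4/3, while β(C4) = 2. A quick check, assuming numpy; the helper and the enumeration of binary solutions are illustrative, not from the paper:

```python
import itertools
import numpy as np

# 4-cycle C4: vertices 0-1-2-3-0.
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
M = A + np.eye(n)

# Fractional point x = e/3 solves LCP(C4): (A+I)x = e exactly,
# so x >= 0, (A+I)x >= e, and complementarity all hold.
x = np.full(n, 1 / 3)
assert np.allclose(M @ x, 1.0)           # e^T x = 4/3, hence m(C4) <= 4/3

def is_binary_solution(v):
    y = M @ v - 1.0
    return (y >= -1e-9).all() and abs(v @ y) < 1e-9

# beta(C4) via Lemma 4: smallest binary solution of LCP(C4).
beta = min(sum(b) for b in itertools.product([0, 1], repeat=n)
           if is_binary_solution(np.array(b, float)))
assert beta == 2                          # strict gap: 4/3 < 2
```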

###### Lemma 8

For a regular graph R_{n,d} with n vertices and degree d,

 m(R_{n,d}) = n/(d+1).

Proof : Observe that n/(d+1) is the ℓ1 norm of the vector e/(d+1). Recall that if A is the adjacency matrix of R_{n,d}, then (d, e) is an eigenvalue-eigenvector pair and hence e/(d+1) ∈ SOL(R_{n,d}). Hence,