A Generalization of Nemhauser and Trotter's Local Optimization Theorem
Abstract.
The Nemhauser-Trotter local optimization theorem applies to the NP-hard Vertex Cover problem and has applications in approximation as well as parameterized algorithmics. We present a framework that generalizes Nemhauser and Trotter's result to vertex deletion and graph packing problems, introducing novel algorithmic strategies based on purely combinatorial arguments (not referring to linear programming, as the Nemhauser-Trotter result originally did).
We exhibit our framework using a generalization of Vertex Cover, called Bounded-Degree Deletion, that has promise to become an important tool in the analysis of gene and other biological networks. For some fixed d ≥ 0, Bounded-Degree Deletion asks to delete as few vertices as possible from a graph in order to transform it into a graph with maximum vertex degree at most d. Vertex Cover is the special case d = 0. Our generalization of the Nemhauser-Trotter theorem implies that Bounded-Degree Deletion has a problem kernel with a linear number of vertices for every constant d. We also outline an application of our extremal combinatorial approach to the problem of packing stars with a bounded number of leaves. Finally, charting the border between (parameterized) tractability and intractability for Bounded-Degree Deletion, we provide a W[2]-hardness result for Bounded-Degree Deletion in the case of unbounded d-values.
Key words and phrases:
Algorithms, computational complexity, NP-hard problems, W[2]-completeness, graph problems, combinatorial optimization, fixed-parameter tractability, kernelization.
Michael R. Fellows, Jiong Guo, Hannes Moser, Rolf Niedermeier
1. Introduction

Nemhauser and Trotter [20] proved a famous theorem in combinatorial optimization. In terms of the NP-hard Vertex Cover problem (given an undirected graph, find a minimum-cardinality set of vertices such that each edge has at least one endpoint in this set), it can be formulated as follows:
NT-Theorem [20, 4]. For an undirected graph G = (V, E) one can compute in polynomial time two disjoint vertex subsets A and B, such that the following three properties hold:
1. If S' is a vertex cover of the induced subgraph G[V \ (A ∪ B)], then A ∪ S' is a vertex cover of G.
2. There is a minimum-cardinality vertex cover S of G with A ⊆ S.
3. Every vertex cover of the induced subgraph G[V \ (A ∪ B)] has size at least |V \ (A ∪ B)| / 2.
In other words, the NT-Theorem provides a polynomial-time data reduction for Vertex Cover: for the vertices in A it can already be decided in polynomial time to put them into the solution set, and the vertices in B can be ignored when searching for a solution. The NT-Theorem is very useful for approximating Vertex Cover, since the search for an approximate solution can be restricted to the induced subgraph G[V \ (A ∪ B)]. In particular, the NT-Theorem directly delivers a factor-2 approximation for Vertex Cover by choosing A ∪ (V \ (A ∪ B)) as the vertex cover. Chen et al. [7] first observed that the NT-Theorem directly yields a 2k-vertex problem kernel for Vertex Cover, where the parameter k denotes the size of the solution set. Indeed, this is in a sense an "ultimate" kernelization result in parameterized complexity analysis [10, 11, 21], because there is good reason to believe that 2k is a matching lower bound for the kernel size [16].
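The classical route to the NT partition goes through the half-integral LP relaxation of Vertex Cover, which can be solved via maximum matching in a bipartite "double cover" of the graph. The following Python sketch (function name and graph representation are ours, and this is the LP-based construction, not the combinatorial method developed in this paper) computes the partition (A, B, C) with C = V \ (A ∪ B):

```python
def nt_decomposition(n, edges):
    """Nemhauser-Trotter partition (A, B, C) of an n-vertex graph.

    Solves the half-integral LP relaxation of Vertex Cover via a minimum
    vertex cover of the bipartite double cover (Koenig's theorem): vertex v
    gets LP value x_v = (#copies of v in the bipartite cover) / 2, and then
    A = {v : x_v = 1}, B = {v : x_v = 0}, C = {v : x_v = 1/2}.
    """
    adj = [[] for _ in range(n)]        # left copy of u -> right copies
    for u, v in edges:                  # edge uv yields uL-vR and vL-uR
        adj[u].append(v)
        adj[v].append(u)

    match_r = [-1] * n                  # right vertex -> matched left vertex

    def augment(u, seen):               # standard augmenting-path matching
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                if match_r[w] == -1 or augment(match_r[w], seen):
                    match_r[w] = u
                    return True
        return False

    for u in range(n):
        augment(u, [False] * n)

    match_l = [-1] * n
    for w, u in enumerate(match_r):
        if u != -1:
            match_l[u] = w

    # Koenig: Z = vertices reachable by alternating paths from free left
    # vertices; a minimum cover of the double cover is (L \ Z) union (R in Z).
    z_l = [match_l[u] == -1 for u in range(n)]
    z_r = [False] * n
    stack = [u for u in range(n) if z_l[u]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if not z_r[w]:
                z_r[w] = True
                if match_r[w] != -1 and not z_l[match_r[w]]:
                    z_l[match_r[w]] = True
                    stack.append(match_r[w])

    in_l = [not z_l[u] for u in range(n)]     # left copy in the cover?
    in_r = z_r                                # right copy in the cover?
    A = {v for v in range(n) if in_l[v] and in_r[v]}          # x_v = 1
    B = {v for v in range(n) if not in_l[v] and not in_r[v]}  # x_v = 0
    C = set(range(n)) - A - B                                 # x_v = 1/2
    return A, B, C
```

For a triangle, all LP values are 1/2, so A and B are empty; for a star, the center receives value 1 and the leaves value 0.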
Since its publication, numerous authors have referred to the importance of the NT-Theorem from the viewpoint of polynomial-time approximation algorithms (e.g., [4, 17]) as well as from the viewpoint of parameterized algorithmics (e.g., [1, 7, 9]). The relevance of the NT-Theorem stems both from its practical usefulness in solving the Vertex Cover problem and from its theoretical depth, which has led to numerous further studies and follow-up work [1, 4, 9]. In this work, our main contribution is to provide a more general and more widely applicable version of the NT-Theorem. The corresponding algorithmic strategies and proof techniques, however, are not achieved by a generalization of known proofs of the NT-Theorem; they are completely different and are based on extremal combinatorial arguments. Vertex Cover can be formulated as the problem of finding a minimum-cardinality set of vertices whose deletion makes a graph edge-free, that is, the remaining vertices have degree 0. Our main result is a generalization of the NT-Theorem that helps in finding a minimum-cardinality set of vertices whose deletion leaves a graph of maximum degree d, for arbitrary but fixed d ≥ 0. Clearly, d = 0 is the special case of Vertex Cover.
Motivation. Since the NP-hard Bounded-Degree Deletion problem (given a graph and two nonnegative integers k and d, find at most k vertices whose deletion leaves a graph of maximum vertex degree d) stands in the center of our considerations, some more explanations about its relevance follow. Bounded-Degree Deletion (or its dual problem) already appears in some theoretical work, e.g., [6, 18, 22], but so far it has received considerably less attention than Vertex Cover, one of the best studied problems in combinatorial optimization [17]. To advocate and justify more research on Bounded-Degree Deletion, we describe an application in computational biology. In the analysis of genetic networks based on microarray data, a clique-centric approach has recently shown great success [3, 8]. Roughly speaking, finding cliques or near-cliques (called paracliques [8]) has been a central tool. Since finding cliques is computationally hard (also with respect to approximation), Chesler et al. [8, page 241] state that "cliques are identified through a transformation to the complementary dual Vertex Cover problem and the use of highly parallel algorithms based on the notion of fixed-parameter tractability." More specifically, in these Vertex Cover-based algorithms, polynomial-time data reduction (such as the NT-Theorem) plays a decisive role [19] (also see [1]) for efficiently solving the given real-world data. However, since biological and other real-world data typically contain errors, the demand for finding cliques (that is, fully connected subgraphs) often seems overly restrictive, and somewhat relaxed notions of cliques are more appropriate. For instance, Chesler et al. [8] introduced paracliques, which are obtained by greedily extending the found cliques by vertices that are connected to almost all (para)clique vertices.
An elegant mathematical concept of "relaxed cliques" is that of s-plexes, introduced in 1978 by Seidman and Foster [24] in the context of social network analysis; recently, this concept has again found increased interest [2, 18]. Here one demands that each s-plex vertex is connected not necessarily to all, but to all but s − 1 of the other vertices in the s-plex. Thus, cliques are 1-plexes. The corresponding problem of finding maximum-cardinality s-plexes in a graph is basically as computationally hard as clique detection [2, 18]. However, as Vertex Cover is the dual problem of clique detection, Bounded-Degree Deletion is the dual problem of s-plex detection: an n-vertex graph has an s-plex of size n − k iff its complement graph has a size-k solution set for Bounded-Degree Deletion with d = s − 1, and the solution sets can directly be computed from each other. The Vertex Cover polynomial-time data reduction algorithm has played an important role in the practical success story of analyzing real-world genetic and other biological networks [3, 8]. Our new polynomial-time data reduction algorithms for Bounded-Degree Deletion have the potential to play a similar role.
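The claimed duality is easy to verify programmatically on small graphs. A minimal Python sketch (helper names are ours): S is an s-plex of G exactly when, in the complement graph, the subgraph induced by S has maximum degree at most s − 1.

```python
def is_splex(adj, S, s):
    """S is an s-plex of G: every v in S is adjacent to all but at most
    s - 1 of the other vertices of S (so a 1-plex is a clique).
    adj is a list of neighbor sets indexed by vertex."""
    return all(len(adj[v] & S) >= len(S) - s for v in S)

def complement(adj):
    """Adjacency sets of the complement graph (no self-loops)."""
    n = len(adj)
    return [set(range(n)) - adj[v] - {v} for v in range(n)]

def max_degree_within(adj, S):
    """Maximum degree of the subgraph induced by S."""
    return max((len(adj[v] & S) for v in S), default=0)
```

For instance, on the path 0-1-2-3, the set {0, 1, 2} is a 2-plex but not a clique, and in the complement graph its induced subgraph has maximum degree 1 = s − 1, matching the duality.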
Our results. Our main theorem can be formulated as follows.
BDD-DR-Theorem (Theorem 2). For an undirected n-vertex and m-edge graph G = (V, E), we can compute two disjoint vertex subsets A and B in polynomial time, such that the following three properties hold:
1. If S' is a solution set for Bounded-Degree Deletion of the induced subgraph G[V \ (A ∪ B)], then A ∪ S' is a solution set for Bounded-Degree Deletion of G.
2. There is a minimum-cardinality solution set for Bounded-Degree Deletion of G that contains A.
3. Every solution set for Bounded-Degree Deletion of the induced subgraph G[V \ (A ∪ B)] has size at least |V \ (A ∪ B)| / (d³ + 4d² + 6d + 4).
In terms of parameterized algorithmics, this gives a problem kernel for Bounded-Degree Deletion whose number of vertices is linear in k for constant d, thus joining a number of other recent "linear kernelization results" [5, 12, 14, 15]. Our general result specializes to a linear-vertex problem kernel for Vertex Cover (the NT-Theorem provides a 2k-vertex problem kernel), but applies to a larger class of problems. For instance, a slightly modified version of the BDD-DR-Theorem (with essentially the same proof) yields a linear-vertex problem kernel for the problem of packing at least k vertex-disjoint paths of three vertices each, giving the same size bound as shown in work focussing on this problem [23]. (Very recently, Wang et al. [25] further improved this bound; we claim that our kernelization based on the BDD-DR-Theorem can easily be adapted to also deliver the improved bound.) For the Star Packing problem, where, given an undirected graph, one seeks a set of at least k vertex-disjoint stars (a star is a tree in which all but one of the vertices are leaves) of the same constant size, we show that a kernel with a linear number of vertices can be achieved, improving the best previous quadratic kernelization [23]. We emphasize that our data reduction technique is based on extremal combinatorial arguments; the resulting combinatorial kernelization algorithm has practical potential, and implementation work is underway. Note that for d = 0 our algorithm computes the same type of structure as the "crown decomposition" kernelization for Vertex Cover (see, for example, [1]). However, for d ≥ 1 the structure returned by our algorithm is much more complicated; in particular, unlike for Vertex Cover crown decompositions, in the BDD-DR-Theorem the set A is not necessarily a separator and the set B does not necessarily form an independent set.
Exploring the borders of parameterized tractability of Bounded-Degree Deletion for arbitrary values of the degree bound d, we show the following.
Theorem 1.
For unbounded d (given as part of the input), Bounded-Degree Deletion is W[2]-complete with respect to the parameter k denoting the number of vertices to delete.
In other words, there is no hope for fixed-parameter tractability with respect to the parameter k in the case of unbounded d-values. Due to the lack of space, the proof of Theorem 1 and several proofs of lemmas needed to show Theorem 2 are omitted.
2. Preliminaries
A bdd-set for a graph G = (V, E) is a vertex subset whose removal from G yields a graph in which each vertex has degree at most d. The central problem of this paper is
Bounded-Degree Deletion
Input: An undirected graph G = (V, E), and nonnegative integers k and d.
Question: Does there exist a bdd-set of size at most k for G?
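For checking small examples against this definition, Bounded-Degree Deletion can be decided by exhaustive search over all candidate deletion sets. A minimal Python sketch (exponential time, for illustration only; the representation is ours):

```python
from itertools import combinations

def has_bdd_set(adj, k, d):
    """Brute-force decision for Bounded-Degree Deletion: is there a set of
    at most k vertices whose removal leaves maximum degree at most d?
    adj is a list of neighbor sets; runtime is exponential in n."""
    n = len(adj)
    for size in range(k + 1):
        for S in combinations(range(n), size):
            rest = set(range(n)) - set(S)
            if all(len(adj[v] & rest) <= d for v in rest):
                return True
    return False
```

For d = 0 this is exactly the Vertex Cover decision problem; e.g., a triangle needs k = 2 deletions for d = 0, while a star with center of degree 3 needs a single deletion for d = 1.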
In this paper, for a graph G = (V, E) and a vertex set S ⊆ V, let G[S] be the subgraph of G induced by S, and let G − S := G[V \ S]. The open neighborhood of a vertex v or a vertex set S in G is denoted as N(v) and N(S), respectively; the closed neighborhood is denoted as N[v] and N[S]. We write V(G) and E(G) to denote the vertex and edge set of G, respectively. A packing of a graph G is a set of pairwise vertex-disjoint subgraphs of G. A graph has maximum degree d when every vertex in the graph has degree at most d. A graph property is called hereditary if every induced subgraph of a graph with this property has the property as well.
Parameterized algorithmics [10, 11, 21] is an approach to finding optimal solutions for NP-hard problems. A common method in parameterized algorithmics is to provide polynomial-time executable data reduction rules that lead to a problem kernel [13]; this is the most important concept for this paper. Given a parameterized problem instance (I, k), a data reduction rule replaces (I, k) by an instance (I', k') in polynomial time such that |I'| ≤ |I|, k' ≤ k, and (I, k) is a yes-instance if and only if (I', k') is a yes-instance. A parameterized problem is said to have a problem kernel, or, equivalently, a kernelization, if, after the exhaustive application of the data reduction rules, the resulting reduced instance has size g(k) for a function g depending only on k. Roughly speaking, the kernel size plays a role in problem kernelization similar to the one the approximation factor plays for approximation algorithms.
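As a concrete illustration of data reduction rules and problem kernels, the classic Buss kernelization for Vertex Cover can be sketched as follows (a textbook example, not the reduction developed in this paper): a vertex of degree greater than k must belong to every size-at-most-k cover, and once no such vertex remains, more than k² remaining edges rule out a solution.

```python
def buss_kernel(edge_list, k):
    """Buss' kernelization for Vertex Cover, a simple example of data
    reduction rules: a vertex of degree > k is forced into the cover
    (decrement k); isolated vertices are irrelevant.  Returns a reduced
    (edges, k) pair, or None if the instance is a no-instance."""
    edges = {frozenset(e) for e in edge_list}
    while k >= 0:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, dv in deg.items() if dv > k]
        if not high:
            break
        v = high[0]                          # v is in every size-<=k cover
        edges = {e for e in edges if v not in e}
        k -= 1
    if k < 0 or len(edges) > k * k:          # k vertices of degree <= k can
        return None                          # cover at most k^2 edges
    return edges, k
```

After exhaustive application, a yes-instance has at most k² edges, a problem kernel with g(k) = O(k²).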
3. A Local Optimization Algorithm for Bounded-Degree Deletion

The main result of this section is the following generalization of the Nemhauser-Trotter-Theorem [20] for Bounded-Degree Deletion with constant d.
Theorem 2 (BDD-DR-Theorem).
For an n-vertex and m-edge graph G = (V, E), we can compute two disjoint vertex subsets A and B in polynomial time, such that the following three properties hold:
1. If S' is a bdd-set of G[V \ (A ∪ B)], then A ∪ S' is a bdd-set of G.
2. There is a minimum-cardinality bdd-set of G that contains A.
3. Every bdd-set of G[V \ (A ∪ B)] has size at least |V \ (A ∪ B)| / (d³ + 4d² + 6d + 4).
The first two properties are called the local optimality conditions. The remainder of this section is dedicated to the proof of this theorem. More specifically, we present an algorithm called compute_AB (see Figure 1) which outputs two sets A and B fulfilling the three properties given in Theorem 2. The core of this algorithm is the polynomial-time procedure find_extremal (see Figure 2). This procedure returns two disjoint vertex subsets A' and B' that, among others, satisfy the local optimality conditions. The procedure is called iteratively by compute_AB. The overall output sets A and B then are the union of the outputs of all applications of find_extremal. More precisely, find_extremal searches for two disjoint sets A' and B' satisfying the following two conditions:

(C1) Each vertex in B' has degree at most d in G − A', and
(C2) A' is a minimum-cardinality bdd-set for G[A' ∪ B'].
It is not hard to see that these two conditions are stronger than the local optimality conditions of Theorem 2:
Lemma 1. If two disjoint vertex subsets A' and B' satisfy conditions C1 and C2, then they also satisfy the local optimality conditions of Theorem 2.
Lemma 1 will be used in the proof of Theorem 2; it helps to make the description of the underlying algorithm and the corresponding correctness proofs more accessible. As a direct application of Theorem 2, we get the following corollary.
Corollary 1.
Bounded-Degree Deletion with constant d admits a problem kernel with at most (d³ + 4d² + 6d + 4)·k vertices, which is computable in polynomial time.
We use the following easy-to-verify forbidden subgraph characterization of bounded-degree graphs: a graph G has maximum degree d if and only if there is no (d+1)-star in G.
Definition 0.1.
For s ≥ 1, the graph K_{1,s}, that is, a star with s leaves, is called an s-star. The degree-s vertex is called the center of the star; the other vertices are the leaves of the star. Where we simply speak of a star, we mean a (d+1)-star.
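This characterization is just a degree test: a (d+1)-star centered at v exists exactly when v has degree at least d + 1. A one-line check in Python (the function name is ours):

```python
def contains_d_plus_1_star(adj, d):
    """A graph contains a (d+1)-star iff some vertex has degree >= d + 1,
    i.e., iff the maximum degree exceeds d. adj is a list of neighbor sets."""
    return any(len(adj[v]) >= d + 1 for v in range(len(adj)))
```

For example, a path on four vertices contains a 2-star (its inner vertices have degree 2) but no 3-star.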
Due to this forbidden subgraph characterization of bounded-degree graphs, we can also derive a linear kernelization for the s-Star Packing problem. In this problem, given an undirected graph, one seeks at least k vertex-disjoint s-stars for a constant s. With a slight modification of the proof of Theorem 2, we get the following corollary.
Corollary 2.
s-Star Packing admits a problem kernel with a number of vertices linear in k for constant s, which is computable in polynomial time.
For s-Star Packing, the best known kernelization result was a quadratic kernel [23]. Note that the special case of s-Star Packing with s = 2 is also called P2-Packing, a problem well-studied in the literature, see [23, 25]. Corollary 2 gives a linear-vertex problem kernel for this special case as well; the currently best bound is due to Wang et al. [25]. Their improvement over the formerly best bound [23] is achieved by improving a properly defined witness structure via local modifications. This trick also works with our approach, that is, we can show that the NT-like approach also yields a problem kernel for P2-Packing matching the improved bound.
3.1 The Algorithm

We start with an informal description of the algorithm. As stated in the introduction of this section, the central part is Algorithm compute_AB shown in Figure 1.
Using the characterization of bounded-degree graphs by forbidden large stars, in line 2 compute_AB starts with computing two vertex sets, a witness X and a residual R: First, with a straightforward greedy algorithm, compute a maximal star packing of G, that is, a set of vertex-disjoint (d+1)-stars that cannot be extended by adding another (d+1)-star. Let X be the set of vertices of the star packing. Since the number of stars in the packing is a lower bound for the size of a minimum bdd-set, X is a factor-(d+2) approximate bdd-set. Greedily remove vertices from X as long as X remains a bdd-set, and finally set R := V \ X. We call X the witness and R the corresponding residual.
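The witness computation just described can be prototyped in a few lines of Python; the function below (names and data representation are ours, and it is only a sketch of this greedy step, not of find_extremal) packs vertex-disjoint (d+1)-stars greedily and then shrinks the packed vertex set while it remains a bdd-set:

```python
def greedy_witness(adj, d):
    """Sketch of the witness computation: greedily pack vertex-disjoint
    (d+1)-stars; the packed vertex set X is a bdd-set, and every bdd-set
    must hit each packed star, so X is a factor-(d+2) approximation.
    Afterwards X is shrunk greedily while it stays a bdd-set."""
    n = len(adj)
    X, free = set(), set(range(n))
    grew = True
    while grew:                              # maximal (d+1)-star packing
        grew = False
        for v in range(n):
            if v not in free:
                continue
            nbrs = [u for u in adj[v] if u in free and u != v]
            if len(nbrs) >= d + 1:           # a (d+1)-star centered at v
                star = {v, *nbrs[:d + 1]}
                X |= star
                free -= star
                grew = True

    def is_bdd(S):
        rest = set(range(n)) - S
        return all(len(adj[v] & rest) <= d for v in rest)

    for v in sorted(X):                      # shrink while still a bdd-set
        if is_bdd(X - {v}):
            X.remove(v)
    return X
```

By maximality of the packing, every unpacked vertex has at most d unpacked neighbors, so the returned set is indeed a bdd-set; each packed star certifies one necessary deletion, which yields the factor d + 2.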
If the residual is too big (condition in line 3), the sets X and R are passed in line 4 to the procedure find_extremal in Figure 2, which computes two sets A' and B' satisfying conditions C1 and C2. Computing X and R represents the first step towards finding a subset pair satisfying condition C1: Since no vertex has degree more than d in G − X (due to the fact that X is a bdd-set), the search can be limited to those subset pairs where A' is a subset of the witness X and B' is a subset of the residual R.
Algorithm compute_AB calls find_extremal iteratively until the sets A and B, which are constructed as the union of the outputs of all applications of find_extremal (see line 5), satisfy the third property of Theorem 2. In the following, we describe the basic ideas behind find_extremal informally.
To construct the set B' from the residual, we compute again a star packing P, this time with the centers of the stars stemming from X and the leaves stemming from R. On the one hand, we relax the requirement that the stars in the packing have exactly d + 1 leaves, that is, the packing may also contain stars with fewer leaves. On the other hand, P should have a maximum number of edges. The rough idea behind the requirement for a maximum number of edges is to maximize the number of (d+1)-stars in P in the course of the algorithm. Moreover, we can observe that, by setting A' equal to the center set of the stars in P and B' equal to the leaf set of the stars in P, the set A' is a minimum bdd-set of G[A' ∪ B'] (condition C2). We call such a packing a maximum-edge center star packing. For computing P, the algorithm constructs an auxiliary bipartite graph H with X as one vertex subset and R as the other. The edge set of H consists of the edges in G with exactly one endpoint in X. See line 1 of Figure 2. Obviously, a maximum-edge center star packing of G corresponds one-to-one to a maximum-edge packing of stars in H that have their centers in X and have at most d + 1 leaves in the other vertex subset. Then, the star packing can be computed using techniques for computing maximum matchings in bipartite graphs (in the following, let starpacking denote an algorithm that computes a maximum-edge center star packing on the bipartite graph H).
The most involved part of find_extremal in Figure 2 is to guarantee that the output subsets in line 4 fulfill condition C1. To this end, one uses an iterative approach to compute the star packing P. Roughly speaking, in each iteration, if the subsets A' and B' do not fulfill condition C1, then those vertices that themselves, or whose neighbors, violate this condition are excluded from further iterations. See lines 2 to 15 of Figure 2 for more details of the iterative computation. Herein, two sets, the first of which is initialized with the empty set and the second computed from the first, store the vertices excluded from computing P in the i-th iteration. To find the vertices that themselves cause the violation of the condition, one uses an augmenting path computation in lines 7 to 11 to obtain in line 12 subsets A' and B' such that the vertices in B' do not themselves violate the condition. Roughly speaking, the existence of a violating edge incident to B' would imply that the star packing is not maximum, witnessed by an augmenting path (in principle, this idea is also used for finding crown decompositions, cf. [1]). The vertices whose neighbors cause the violation of condition C1 are all vertices in B' that have neighbors which themselves violate the degree bound. These neighbors and the corresponding vertices in B' are excluded in line 4 and line 18. We will see that the number of all excluded vertices is bounded; thus, in total, we do not exclude too many vertices with this iterative method. The formal proof of correctness is given in the following subsection.
3.2 Running Time and Correctness

Now we show that compute_AB in Figure 1 computes, in the claimed time, two vertex subsets A and B that fulfill the three properties given in Theorem 2.
3.2.1 Running Time of find_extremal. We begin with the proof of the running time of the procedure find_extremal in Figure 2, which uses the following lemmas.
Lemma 2.
Procedure starpacking in Figure 2 runs in polynomial time.
The next lemma is also used for the correctness proof; in particular, it guarantees the termination of the algorithm.
Lemma 3.
Proof.
In lines 4 and 5 of Figure 2, the excluded vertices and their neighbors are left out of the star packing in the i-th iteration of the outer loop. Moreover, these vertices are excluded from the set constructed in line 6. Therefore, such a vertex cannot be added to B' in line 12. Thus the exclusion set assigned in line 15 contains its predecessor from the previous iteration. Moreover, this containment is proper, as otherwise the condition in line 13 would be true.
Lemma 4.
Procedure find_extremal runs in polynomial time.
3.2.2 Correctness of find_extremal. The correctness proof for find_extremal in Figure 2 is more involved than its running time analysis. The following lemmas provide some needed properties of the star packing P.
Lemma 5.
Proof.
(Sketch) To prove (1), first of all we show that the claimed containment holds, since otherwise we could get an augmenting path ending outside of the packing. An augmenting path is a path in which edges of the packing P and edges not in P alternate, and the first and the last edge are not in P. This augmenting path can be constructed in an inductive way by simulating the construction in lines 6 to 11 of Figure 2. From this augmenting path, we can then construct a center star packing that has more edges than P, contradicting the fact that P has a maximum number of edges. Second, every vertex in A' is the center of a star due to the definition of the packing and Procedure starpacking. Finally, if a vertex is the center of a star with fewer than d + 1 leaves, then again we get an augmenting path.
Lemma 6.
For each i, there is no edge in G between the two corresponding vertex sets of the i-th iteration.
Proof.
The next lemma shows that the output of find_extremal fulfills the local optimality conditions.
Lemma 7.
Proof.
Clearly, the output consists of two disjoint sets. The algorithm returns in line 14 or in line 19 of Figure 2. If it returns in line 19, then the first output set is empty and the second contains only vertices that have sufficiently large distance to the excluded vertices: the condition in line 3 implies that the exclusion set contains all vertices within this distance. Since the witness is a bdd-set of G, all vertices in the output set and their neighbors have degree at most d. This implies that both conditions hold for the output returned in this line. It remains to consider the output returned in line 14.
To show that condition C1 holds, recall that G − X, where X denotes the witness, has maximum degree d, and that B' is a subset of the residual. Therefore, if for a vertex v in B' we have N(v) ∩ X ⊆ A', then v has degree at most d in G − A'. Thus, to show that each vertex in B' has degree at most d in G − A', it suffices to prove that N(B') ∩ X ⊆ A'. We prove this containment in two separate steps.
The assignment in line 8 and the until-condition in line 11 directly give the first containment. Due to Lemma 6, there is no edge in G between the two relevant vertex sets (the if-condition in line 13 has to be satisfied for the procedure to return in line 14). From this it follows that the vertices in B' have no neighbor in the remaining part of the witness outside of A', which gives the second containment. Therefore, N(B') ∩ X ⊆ A'.
3.2.3 Running Time and Correctness of compute_AB. To prove the running time and correctness of compute_AB, we have to show that the output of find_extremal contains sufficiently many vertices. To this end, the following lemma plays a decisive role.
Lemma 8.
Proof.
The proof is by induction on i. The claim trivially holds for i = 0. Assume that the claim is true for i − 1. Since the exclusion sets grow properly (Lemma 3), we have
We first bound the size of the first set. Since it was set at the end of the (i−1)-th iteration of the outer loop (line 15), its vertices were not excluded from computing the packing (line 5) of the i-th iteration. Moreover, these vertices are covered by the star packing computed in the i-th iteration since, otherwise, the set in line 6 would contain one of them and, then, line 8 would include it into B', which would contradict the assignment in line 15. Due to property 2 in Lemma 5, the leaves of every star in the packing with center in A' are vertices of the residual and, thus, the vertices in question are leaves of stars with centers in A'. Since each star has at most d + 1 leaves, the set has size at most (d + 1)·|A'|. The remaining part is easy to bound: since all the vertices involved have degree at most d, we get
With the induction hypothesis, we get that
Lemma 9.
Procedure find_extremal always finds two sets and such that .
Proof.
If find_extremal terminates, then consider the graph resulting from removing the returned sets. By Lemma 8, it follows immediately that the claimed bound holds.
Therefore, if the residual is sufficiently large, then find_extremal always returns two sets A' and B' whose union is not empty.
Lemma 10.
Algorithm compute_AB runs in polynomial time.
Lemma 11.
The sets A and B computed by compute_AB fulfill the three properties given in Theorem 2.
Proof.
Since every pair of sets output by find_extremal in line 4 of compute_AB in Figure 1 fulfills conditions C1 and C2 (Lemma 7), the pair (A, B) output in line 3 of compute_AB fulfills conditions C1 and C2 and, therefore, also the local optimality conditions (Lemma 1). It remains to show that (A, B) fulfills the size condition.
Let X and R be the last computed witness and residual, respectively. Since the condition in line 3 is true, we know that the residual is small. Recall that X is a factor-(d + 2) approximate bdd-set for G. Thus, every bdd-set of G has size at least |X| / (d + 2). Since the output sets A and B fulfill the local optimality conditions and the bounded-degree property is hereditary, every bdd-set of G[V \ (A ∪ B)] has size at least
The inequality (*) follows from the fact that the residual R is small.
4. Conclusion

Our main result generalizes the Nemhauser-Trotter-Theorem, which applies to the Bounded-Degree Deletion problem with d = 0 (that is, Vertex Cover), to the general case with arbitrary d. In particular, in this way we contribute problem kernels with a number of vertices linear in the solution size k for all constant values of d for Bounded-Degree Deletion. To this end, we developed a new algorithmic strategy that is based on extremal combinatorial arguments. The original NT-Theorem [20] was proven using linear programming relaxations; we see no way how this could have been generalized to Bounded-Degree Deletion. By way of contrast, we presented a purely combinatorial data reduction algorithm, which is also completely different from known combinatorial data reduction algorithms for Vertex Cover (see [1, 4, 9]). Finally, Baldwin et al. [3, page 175] remarked that, with respect to practical applicability in the case of Vertex Cover kernelization, combinatorial data reduction algorithms are more powerful than "slower methods that rely on linear programming relaxation". Hence, we expect that benefits similar to those derived from Vertex Cover kernelization for biological network analysis (see the motivation part of our introductory discussion) may be provided by Bounded-Degree Deletion kernelization.
References
 [1] F. N. Abu-Khzam, M. R. Fellows, M. A. Langston, and W. H. Suters. Crown structures for vertex cover kernelization. Theory Comput. Syst., 41(3):411–430, 2007.
 [2] B. Balasundaram, S. Butenko, I. V. Hicks, and S. Sachdeva. Clique relaxations in social network analysis: The maximum k-plex problem. Manuscript, 2008.
 [3] N. Baldwin, E. Chesler, S. Kirov, M. Langston, J. Snoddy, R. Williams, and B. Zhang. Computational, integrative, and comparative methods for the elucidation of genetic coexpression networks. Journal of Biomedicine and Biotechnology, 2(2005):172–180, 2005.
 [4] R. Bar-Yehuda and S. Even. A local-ratio theorem for approximating the weighted vertex cover problem. Ann. of Discrete Math., 25:27–45, 1985.
 [5] H. L. Bodlaender and E. Penninkx. A linear kernel for planar feedback vertex set. In Proc. 3rd IWPEC, volume 5018 of LNCS, pages 160–171. Springer, 2008.
 [6] H. L. Bodlaender and B. van Antwerpen-de Fluiter. Reduction algorithms for graphs of small treewidth. Inform. and Comput., 167(2):86–119, 2001.
 [7] J. Chen, I. A. Kanj, and W. Jia. Vertex cover: Further observations and further improvements. J. Algorithms, 41(2):280–301, 2001.
 [8] E. J. Chesler, L. Lu, S. Shou, Y. Qu, J. Gu, J. Wang, H. C. Hsu, J. D. Mountz, N. E. Baldwin, M. A. Langston, D. W. Threadgill, K. F. Manly, and R. W. Williams. Complex trait analysis of gene expression uncovers polygenic and pleiotropic networks that modulate nervous system function. Nature Genetics, 37(3):233–242, 2005.
 [9] M. Chlebík and J. Chlebíková. Crown reductions for the minimum weighted vertex cover problem. Discrete Appl. Math., 156:292–312, 2008.
 [10] R. G. Downey and M. R. Fellows. Parameterized Complexity. Springer, 1999.
 [11] J. Flum and M. Grohe. Parameterized Complexity Theory. Springer, 2006.
 [12] J. Guo. A more effective linear kernelization for cluster editing. Theor. Comput. Sci., 2008. To appear.
 [13] J. Guo and R. Niedermeier. Invitation to data reduction and problem kernelization. ACM SIGACT News, 38(1):31–45, 2007.
 [14] J. Guo and R. Niedermeier. Linear problem kernels for NPhard problems on planar graphs. In Proc. 34th ICALP, volume 4596 of LNCS, pages 375–386. Springer, 2007.
 [15] I. A. Kanj, M. J. Pelsmajer, G. Xia, and M. Schaefer. On the induced matching problem. J. Comput. System Sci., 2009. To appear.
 [16] S. Khot and O. Regev. Vertex cover might be hard to approximate to within 2 − ε. J. Comput. System Sci., 74(3):335–349, 2008.
 [17] S. Khuller. The Vertex Cover problem. ACM SIGACT News, 33(2):31–33, 2002.
 [18] C. Komusiewicz, F. Hüffner, H. Moser, and R. Niedermeier. Isolation concepts for enumerating dense subgraphs. In Proc. 13th COCOON, volume 4598 of LNCS, pages 140–150. Springer, 2007.
 [19] M. A. Langston, 2008. Personal communication.
 [20] G. L. Nemhauser and L. E. Trotter. Vertex packings: Structural properties and algorithms. Math. Program., 8:232–248, 1975.
 [21] R. Niedermeier. Invitation to Fixed-Parameter Algorithms. Oxford University Press, 2006.
 [22] N. Nishimura, P. Ragde, and D. M. Thilikos. Fast fixed-parameter tractable algorithms for nontrivial generalizations of Vertex Cover. Discrete Appl. Math., 152(1–3):229–245, 2005.
 [23] E. Prieto and C. Sloper. Looking at the stars. Theor. Comput. Sci., 351(3):437–445, 2006.
 [24] S. B. Seidman and B. L. Foster. A graphtheoretic generalization of the clique concept. Journal of Mathematical Sociology, 6:139–154, 1978.
 [25] J. Wang, D. Ning, Q. Feng, and J. Chen. An improved parameterized algorithm for a generalized matching problem. In Proc. 5th TAMC, volume 4978 of LNCS, pages 212–222. Springer, 2008.