A Generalization of Nemhauser and Trotter's Local Optimization Theorem
The Nemhauser-Trotter local optimization theorem applies to the NP-hard Vertex Cover problem and has applications in approximation as well as parameterized algorithmics. We present a framework that generalizes Nemhauser and Trotter’s result to vertex deletion and graph packing problems, introducing novel algorithmic strategies based on purely combinatorial arguments (not referring to linear programming as the Nemhauser-Trotter result originally did).
We exhibit our framework using a generalization of Vertex Cover, called Bounded-Degree Deletion, that has promise to become an important tool in the analysis of gene and other biological networks. For some fixed d ≥ 0, Bounded-Degree Deletion asks to delete as few vertices as possible from a graph in order to transform it into a graph with maximum vertex degree at most d. Vertex Cover is the special case of d = 0. Our generalization of the Nemhauser-Trotter theorem implies that Bounded-Degree Deletion has a problem kernel with a linear number of vertices for every constant d. We also outline an application of our extremal combinatorial approach to the problem of packing stars with a bounded number of leaves. Finally, charting the border between (parameterized) tractability and intractability for Bounded-Degree Deletion, we provide a W-hardness result for Bounded-Degree Deletion in the case of unbounded d-values.
Key words and phrases: Algorithms, computational complexity, NP-hard problems, W-completeness, graph problems, combinatorial optimization, fixed-parameter tractability, kernelization
Michael R. Fellows
Jiong Guo, Hannes Moser, and Rolf Niedermeier
Introduction

Nemhauser and Trotter proved a famous theorem in combinatorial optimization. In terms of the NP-hard Vertex Cover problem (given an undirected graph G = (V, E), find a minimum-cardinality set S of vertices such that each edge has at least one endpoint in S), it can be formulated as follows: for a graph G = (V, E), two disjoint vertex subsets A and B can be computed in polynomial time such that the following three properties hold:
If S′ is a vertex cover of the induced subgraph G[V \ (A ∪ B)], then A ∪ S′ is a vertex cover of G.
There is a minimum-cardinality vertex cover S of G with A ⊆ S.
Every vertex cover of the induced subgraph G[V \ (A ∪ B)] has size at least |V \ (A ∪ B)| / 2.
In other words, the NT-Theorem provides a polynomial-time data reduction for Vertex Cover: for the vertices in A it can already be decided in polynomial time that they belong to the solution set, and the vertices in B can be ignored for finding a solution. The NT-Theorem is very useful for approximating Vertex Cover, since the search for an approximate solution can be restricted to the induced subgraph G[V \ (A ∪ B)]. In particular, the NT-Theorem directly delivers a factor-2 approximation for Vertex Cover by choosing A ∪ (V \ (A ∪ B)) as the vertex cover. Chen et al. first observed that the NT-Theorem directly yields a 2k-vertex problem kernel for Vertex Cover, where the parameter k denotes the size of the solution set. Indeed, this is in a sense an "ultimate" kernelization result in parameterized complexity analysis [10, 11, 21], because there is good reason to believe that there is a matching lower bound for the kernel size unless P = NP.
Since its publication, numerous authors have referred to the importance of the NT-Theorem from the viewpoint of polynomial-time approximation algorithms (e.g., [4, 17]) as well as from the viewpoint of parameterized algorithmics (e.g., [1, 7, 9]). The relevance of the NT-Theorem stems both from its practical usefulness in solving the Vertex Cover problem and from its theoretical depth, which has led to numerous further studies and follow-up work [1, 4, 9]. In this work, our main contribution is to provide a more general and more widely applicable version of the NT-Theorem. The corresponding algorithmic strategies and proof techniques, however, are not achieved by generalizing known proofs of the NT-Theorem but are completely different, being based on extremal combinatorial arguments. Vertex Cover can be formulated as the problem of finding a minimum-cardinality set of vertices whose deletion makes a graph edge-free, that is, the remaining vertices have degree 0. Our main result is a generalization of the NT-Theorem that helps in finding a minimum-cardinality set of vertices whose deletion leaves a graph of maximum degree d, for arbitrary but fixed d ≥ 0. Clearly, d = 0 is the special case of Vertex Cover.
Motivation. Since the NP-hard Bounded-Degree Deletion problem—given a graph and two positive integers k and d, find at most k vertices whose deletion leaves a graph of maximum vertex degree d—stands in the center of our considerations, some more explanations about its relevance follow. Bounded-Degree Deletion (or its dual problem) already appears in some theoretical work, e.g., [6, 18, 22], but so far it has received considerably less attention than Vertex Cover, one of the best studied problems in combinatorial optimization. To advocate and justify more research on Bounded-Degree Deletion, we describe an application in computational biology. In the analysis of genetic networks based on micro-array data, a clique-centric approach has recently shown great success [3, 8]. Roughly speaking, finding cliques or near-cliques (called paracliques) has been a central tool. Since finding cliques is computationally hard (also with respect to approximation), Chesler et al. [8, page 241] state that "cliques are identified through a transformation to the complementary dual Vertex Cover problem and the use of highly parallel algorithms based on the notion of fixed-parameter tractability." More specifically, in these Vertex Cover-based algorithms, polynomial-time data reduction (such as the NT-Theorem) plays a decisive role for the efficient solvability of the given real-world data. However, since biological and other real-world data typically contain errors, the demand for finding cliques (that is, fully connected subgraphs) often seems overly restrictive, and somewhat relaxed notions of cliques are more appropriate. For instance, Chesler et al. introduced paracliques, which are obtained by greedily extending the found cliques by vertices that are connected to almost all (para)clique vertices. An elegant mathematical concept of "relaxed cliques" is that of s-plexes, introduced in 1978 by Seidman and Foster in the context of social network analysis.
Recently, this concept has again found increased interest [2, 18]. Here, one demands that each s-plex vertex is connected not necessarily to all other vertices in the s-plex, but to all but at most s − 1 of them. Thus, cliques are 1-plexes. The corresponding problem of finding maximum-cardinality s-plexes in a graph is basically as computationally hard as clique detection [2, 18]. However, just as Vertex Cover is the dual problem of clique detection, Bounded-Degree Deletion is the dual problem of s-plex detection: an n-vertex graph has an s-plex of size k iff its complement graph has a solution set of size n − k for Bounded-Degree Deletion with d = s − 1, and the solution sets can directly be computed from each other. The Vertex Cover polynomial-time data reduction algorithm has played an important role in the practical success story of analyzing real-world genetic and other biological networks [3, 8]. Our new polynomial-time data reduction algorithms for Bounded-Degree Deletion have the potential to play a similar role.
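To make the duality concrete, the following small Python sketch (function names are ours, not from the paper; graphs are adjacency sets) checks the s-plex condition and the complementary degree bound on a 4-cycle:

```python
def is_splex(adj, subset, s):
    """`subset` is an s-plex iff every vertex in it is adjacent to at
    least len(subset) - s of the other vertices in it."""
    return all(len(adj[v] & subset) >= len(subset) - s for v in subset)

def complement(adj):
    """Complement graph on the same vertex set."""
    vs = set(adj)
    return {v: (vs - {v}) - adj[v] for v in vs}

def max_degree(adj, subset):
    return max((len(adj[v] & subset) for v in subset), default=0)

# A 4-cycle: every vertex misses exactly one other, so all of V is a 2-plex.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
V = set(adj)
assert is_splex(adj, V, 2)
# Duality: V is an s-plex in G iff, in the complement graph, V induces a
# subgraph of maximum degree at most d = s - 1.  The complement of the
# 4-cycle is a perfect matching, so its maximum degree is 1 = 2 - 1.
assert max_degree(complement(adj), V) == 1
```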
Our results. Our main theorem can be formulated as follows.
BDD-DR-Theorem (Theorem 2). For an undirected n-vertex and m-edge graph G = (V, E), we can compute two disjoint vertex subsets A and B in polynomial time, such that the following three properties hold:
If S′ is a solution set for Bounded-Degree Deletion of the induced subgraph G[V \ (A ∪ B)], then A ∪ S′ is a solution set for Bounded-Degree Deletion of G.
There is a minimum-cardinality solution set S for Bounded-Degree Deletion of G with A ⊆ S.
Every solution set for Bounded-Degree Deletion of the induced subgraph G[V \ (A ∪ B)] has size at least |V \ (A ∪ B)| / c(d), where c(d) is a constant depending only on d.
In terms of parameterized algorithmics, this gives a problem kernel with a number of vertices linear in the parameter k for constant d-values, thus joining a number of other recent "linear kernelization results" [5, 12, 14, 15]. Our general result specializes to a linear-vertex problem kernel for Vertex Cover, with a worse constant than the size-2k problem kernel provided by the NT-Theorem, but it applies to a much larger class of problems. For instance, a slightly modified version of the BDD-DR-Theorem (with essentially the same proof) yields a 15k-vertex problem kernel for the problem of packing at least k vertex-disjoint length-2 paths of an input graph, giving the same bound as shown in work focussing on this problem. (Very recently, Wang et al. improved the 15k bound to a 7k bound. We claim that our kernelization method based on the BDD-DR-Theorem can be easily adapted to also deliver the 7k bound.) For the s-Star Packing problem, where, given an undirected graph, one seeks a set of at least k vertex-disjoint stars (a star is a tree in which all vertices but one are leaves) of the same constant size, we show that a kernel with a linear number of vertices can be achieved, improving the best previous quadratic kernelization. We emphasize that our data reduction technique is based on extremal combinatorial arguments; the resulting combinatorial kernelization algorithm has practical potential, and implementation work is underway. Note that for d = 0 our algorithm computes the same type of structure as in the "crown decomposition" kernelization for Vertex Cover (see, for example, [1]). However, for d ≥ 1 the structure returned by our algorithm is much more complicated; in particular, unlike for Vertex Cover crown decompositions, in the BDD-DR-Theorem the set A is not necessarily a separator and the set B does not necessarily form an independent set.
Exploring the borders of parameterized tractability of Bounded-Degree Deletion for arbitrary values of the degree bound d, we show the following.
For unbounded d (given as part of the input), Bounded-Degree Deletion is W[2]-complete with respect to the parameter k denoting the number of vertices to delete.
In other words, there is no hope for fixed-parameter tractability with respect to the parameter k in the case of unbounded d-values. Due to the lack of space, the proof of Theorem 1 and several proofs of lemmas needed to show Theorem 2 are omitted.
A bdd-d-set for a graph G = (V, E) is a vertex subset S ⊆ V whose removal from G yields a graph in which each vertex has degree at most d. The central problem of this paper is Bounded-Degree Deletion:
Input: An undirected graph G = (V, E), and two integers k ≥ 0 and d ≥ 0.
Question: Does there exist a bdd-d-set S of size at most k for G?
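To make the definition concrete, here is a self-contained Python sketch (our own naming; graphs as adjacency sets) that decides Bounded-Degree Deletion by brute force. It is exponential in k and meant only as a sanity check on tiny instances:

```python
from itertools import combinations

def is_bdd_set(adj, S, d):
    """S is a bdd-d-set iff the graph minus S has maximum degree <= d."""
    rest = set(adj) - set(S)
    return all(len(adj[v] & rest) <= d for v in rest)

def bdd_brute_force(adj, k, d):
    """Decide Bounded-Degree Deletion by trying all vertex subsets of
    size at most k.  Exponential; only a sanity check for tiny graphs."""
    for size in range(k + 1):
        for S in combinations(adj, size):
            if is_bdd_set(adj, S, d):
                return set(S)
    return None  # No-instance

# A star with five leaves: deleting the center leaves an edgeless graph,
# so (k, d) = (1, 0) is a Yes-instance.
adj = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
assert bdd_brute_force(adj, 1, 0) == {0}
```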
In this paper, for a graph G = (V, E) and a vertex set S ⊆ V, let G[S] be the subgraph of G induced by S, and let G − S := G[V \ S]. The open neighborhood of a vertex v or a vertex set S in a graph is denoted by N(v) and N(S), respectively. The closed neighborhood is denoted by N[v] and N[S]. We write V(G) and E(G) to denote the vertex and edge set of G, respectively. A packing of a graph G is a set of pairwise vertex-disjoint subgraphs of G. A graph has maximum degree d when every vertex in the graph has degree at most d. A graph property is called hereditary if every induced subgraph of a graph with this property has the property as well.
Parameterized algorithmics [10, 11, 21] is an approach to finding optimal solutions for NP-hard problems. A common method in parameterized algorithmics is to provide polynomial-time executable data reduction rules that lead to a problem kernel; this is the most important concept for this paper. Given a parameterized problem instance (I, k), a data reduction rule replaces (I, k) by an instance (I′, k′) in polynomial time such that |I′| ≤ |I|, k′ ≤ k, and (I, k) is a Yes-instance if and only if (I′, k′) is a Yes-instance. A parameterized problem is said to have a problem kernel (equivalently, a kernelization) if, after the exhaustive application of the data reduction rules, the resulting reduced instance has size g(k) for a function g depending only on k. Roughly speaking, the kernel size plays a role for problem kernelization similar to the role the approximation factor plays for approximation algorithms.
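As a simple illustration of such a data reduction rule (our own folklore-style example, not a rule taken from this paper), the following Python sketch applies a high-degree rule to Bounded-Degree Deletion, generalizing the classic high-degree rule for Vertex Cover: a vertex that keeps more than d plus the remaining budget many neighbors must be deleted, since otherwise more than the budget of its neighbors would have to go.

```python
def high_degree_rule(adj, k, d):
    """Apply the high-degree rule exhaustively: a vertex with more than
    d + (remaining budget) undeleted neighbors must belong to every
    solution of size at most k.  Returns the set of forced vertices;
    if it ever exceeds k, the instance is a No-instance."""
    forced = set()
    changed = True
    while changed:
        changed = False
        budget = k - len(forced)
        for v in adj:
            if v not in forced and len(adj[v] - forced) > d + budget:
                forced.add(v)
                changed = True
    return forced

# Star with four leaves, k = 1, d = 1: the center has degree 4 > d + k = 2,
# so it is forced into the solution.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
assert high_degree_rule(adj, 1, 1) == {0}
```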
A Local Optimization Algorithm for Bounded-Degree Deletion

The main result of this section is the following generalization of the Nemhauser-Trotter-Theorem for Bounded-Degree Deletion with constant d.
Theorem 2 (BDD-DR-Theorem).
For an n-vertex and m-edge graph G = (V, E), we can compute two disjoint vertex subsets A and B in polynomial time, such that the following three properties hold:
If S′ is a bdd-d-set of G[V \ (A ∪ B)], then A ∪ S′ is a bdd-d-set of G.
There is a minimum-cardinality bdd-d-set S of G with A ⊆ S.
Every bdd-d-set of G[V \ (A ∪ B)] has size at least |V \ (A ∪ B)| / c(d), for a constant c(d) depending only on d.
The first two properties are called the local optimality conditions. The remainder of this section is dedicated to the proof of this theorem. More specifically, we present an algorithm called compute_AB (see Figure 1) which outputs two sets A and B fulfilling the three properties given in Theorem 2. The core of this algorithm is the procedure find_extremal (see Figure 2), which runs in polynomial time. This procedure returns two disjoint vertex subsets A′ and B′ that, among others, satisfy the local optimality conditions. The procedure is iteratively called by compute_AB. The overall output sets A and B then are the unions of the outputs of all applications of find_extremal. Actually, find_extremal searches for two disjoint sets A′ and B′ satisfying the following two conditions:
(C1) Each vertex in N[B′] has degree at most d in G − A′, and
(C2) A′ is a minimum-cardinality bdd-d-set for G[A′ ∪ B′].
It is not hard to see that these two conditions are stronger than the local optimality conditions of Theorem 2.
Lemma 1 will be used in the proof of Theorem 2—it helps to make the description of the underlying algorithm and the corresponding correctness proofs more accessible. As a direct application of Theorem 2, we get the following corollary.
Bounded-Degree Deletion with constant d admits a problem kernel with a number of vertices linear in k, computable in polynomial time.
We use the following easy-to-verify forbidden subgraph characterization of bounded-degree graphs: a graph G has maximum degree d if and only if there is no (d+1)-star in G.
For ℓ ≥ 1, the star graph with ℓ leaves is called an ℓ-star. The vertex adjacent to all other vertices is called the center of the star; the remaining ℓ vertices are the leaves of the star. A ≤ℓ-star is an ℓ′-star with 1 ≤ ℓ′ ≤ ℓ.
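The characterization is straightforward to check algorithmically; the following Python sketch (our own helper names, adjacency-set graphs) either certifies maximum degree at most d or exhibits a forbidden (d+1)-star:

```python
def has_max_degree_at_most(adj, d):
    """Max degree <= d iff no vertex is the center of a (d+1)-star,
    i.e., no vertex has d+1 or more neighbors."""
    return all(len(neigh) <= d for neigh in adj.values())

def find_forbidden_star(adj, d):
    """Return a (d+1)-star (center, leaves) witnessing degree > d, or None."""
    for v, neigh in adj.items():
        if len(neigh) > d:
            return v, sorted(neigh)[:d + 1]
    return None

# A path on 4 vertices has maximum degree 2 ...
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert has_max_degree_at_most(path, 2)
assert find_forbidden_star(path, 2) is None
# ... but violates the bound d = 1, witnessed by a 2-star.
center, leaves = find_forbidden_star(path, 1)
assert center in (1, 2) and len(leaves) == 2
```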
Due to this forbidden subgraph characterization of bounded-degree graphs, we can also derive a linear kernelization for the s-Star Packing problem. In this problem, given an undirected graph, one seeks at least k vertex-disjoint s-stars for a constant s. With a slight modification of the proof of Theorem 2, we get the following corollary.
s-Star Packing admits a problem kernel with a number of vertices linear in k, computable in polynomial time.
For general s, the best previously known kernelization result was a quadratic kernel. Note that the special case of s-Star Packing with s = 2 is also called P2-Packing, a problem well studied in the literature, see [23, 25]. Corollary 2 gives a linear-vertex problem kernel for this case as well. The best-known bound is 7k. However, the improvement from the formerly best 15k bound is achieved by improving a properly defined witness structure by local modifications. This trick also works with our approach, that is, we can show that the NT-like approach also yields a 7k-vertex problem kernel for P2-Packing.
The Algorithm

We start with an informal description of the algorithm. As stated in the introduction of this section, the central part is Algorithm compute_AB shown in Figure 1.
Using the characterization of bounded-degree graphs by forbidden large stars, in line 2 compute_AB starts with computing two vertex sets X and Y: First, with a straightforward greedy algorithm, compute a maximal (d+1)-star packing of G, that is, a set of vertex-disjoint (d+1)-stars that cannot be extended by adding another (d+1)-star. Let X be the set of vertices of the star packing. Since the number of stars in the packing is a lower bound for the size of a minimum bdd-d-set, and each (d+1)-star has d + 2 vertices, X is a factor-(d+2) approximate bdd-d-set. Greedily remove vertices from X such that X is still a bdd-d-set, and finally set Y := V \ X. We call X the witness and Y the corresponding residual.
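The witness computation just described can be sketched as follows. This is a simplified illustration under our own naming conventions (the paper's Figure 1 may differ in details); it greedily packs vertex-disjoint (d+1)-stars and then shrinks the resulting vertex set while it remains a bdd-d-set:

```python
def greedy_witness(adj, d):
    """Sketch of the witness computation: greedily pack vertex-disjoint
    (d+1)-stars, take all their vertices as X, then greedily shrink X
    while it stays a bdd-d-set."""
    def is_bdd_set(S):
        rest = set(adj) - S
        return all(len(adj[u] & rest) <= d for u in rest)
    used, X, stars = set(), set(), 0
    for v in adj:  # greedy maximal (d+1)-star packing
        if v in used:
            continue
        free = sorted(u for u in adj[v] if u not in used)
        if len(free) >= d + 1:  # v can serve as the center of a (d+1)-star
            star = {v, *free[:d + 1]}
            used |= star
            X |= star
            stars += 1
    for v in sorted(X):  # shrink: drop vertices while X stays feasible
        if is_bdd_set(X - {v}):
            X -= {v}
    return X, stars  # |X| <= (d+2)*stars, and stars lower-bounds the optimum

# A small example with d = 1: vertex 0 together with two of its neighbors
# forms a 2-star; after shrinking, the witness is just {0}.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
X, lower_bound = greedy_witness(adj, 1)
assert X == {0} and lower_bound == 1
```

Since deleting X destroys all (d+1)-stars (by maximality of the packing), X is indeed a bdd-d-set, and its size is at most (d+2) times the packing size, which in turn lower-bounds the optimum.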
If the residual is too big (the condition checked in line 3), the sets X and Y are passed in line 4 to the procedure find_extremal in Figure 2, which computes two sets A′ and B′ satisfying conditions C1 and C2. Computing X and Y represents the first step towards finding a subset pair satisfying condition C1: since no vertex in Y has degree more than d in G − X (due to the fact that X is a bdd-d-set), the search can be limited to those subset pairs where A′ is a subset of the witness X and B′ is a subset of the residual Y.
Algorithm compute_AB calls find_extremal iteratively until the sets A and B, which are constructed as the unions of the outputs of all applications of find_extremal (see line 5), satisfy the third property in Theorem 2. In the following, we intuitively describe the basic ideas behind find_extremal.
To construct the sets A′ and B′, we compute another star packing P, with the centers of the stars taken from X and the leaves taken from Y. We relax, on the one hand, the requirement that the stars in the packing have exactly d + 1 leaves, that is, the packing might contain ≤(d+1)-stars. On the other hand, P should have a maximum number of edges. The rough idea behind the requirement for a maximum number of edges is to maximize the number of (d+1)-stars in P in the course of the algorithm. Moreover, we can observe that, by setting A′ equal to the center set of the (d+1)-stars in P and B′ equal to the leaf set of the (d+1)-stars in P, the set A′ is a minimum bdd-d-set of G[A′ ∪ B′] (condition C2). We call such a packing a maximum-edge X-center ≤(d+1)-star packing. For computing P, the algorithm constructs an auxiliary bipartite graph H with X as one vertex subset and Y as the other. The edge set of H consists of the edges in G with exactly one endpoint in X; see line 1 of Figure 2. Obviously, a maximum-edge X-center ≤(d+1)-star packing of G corresponds one-to-one to a maximum-edge packing of stars in H that have their centers in X and have at most d + 1 leaves in the other vertex subset. Then, the star packing can be computed by using techniques for computing maximum matchings (in the following, let star-packing(H, X, Y, d+1) denote an algorithm that computes a maximum-edge X-center ≤(d+1)-star packing on the bipartite graph H).
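One standard way to compute a maximum-edge packing of this kind is a reduction to maximum flow. The following Python sketch is our own illustration (not necessarily the paper's procedure star-packing): the source feeds each center with capacity ℓ, each edge of the auxiliary bipartite graph gets capacity 1, and each leaf sends capacity 1 to the sink, so the maximum flow value equals the maximum number of edges in an X-center ≤ℓ-star packing.

```python
from collections import deque

def max_star_packing_edges(adj, X, ell):
    """Maximum number of edges in an X-center <=ell-star packing,
    computed as a maximum flow (unit-augmenting Edmonds-Karp)."""
    Y = set(adj) - X
    S, T = "source", "sink"
    cap = {}
    def add(u, v, c):  # forward arc u->v with capacity c, residual arc 0
        cap.setdefault(u, {})
        cap.setdefault(v, {})
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)
    for x in X:
        add(S, x, ell)            # a center may carry at most ell leaves
        for y in adj[x] & Y:
            add(x, y, 1)          # each X-Y edge usable at most once
    for y in Y:
        add(y, T, 1)              # each leaf belongs to at most one star
    flow = 0
    while True:                   # BFS for a shortest augmenting path
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            return flow
        v = T                     # all residual capacities are >= 1:
        while v != S:             # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# Center set X = {0}: vertex 0 has three neighbors, but with ell = 2 at
# most two of them may serve as leaves, so the packing has 2 edges.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert max_star_packing_edges(adj, {0}, 2) == 2
```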
The most involved part of find_extremal in Figure 2 is to guarantee that the output subsets in line 4 fulfill condition C1. To this end, one uses an iterative approach to compute the star packing P. Roughly speaking, in each iteration, if the subsets A′ and B′ do not fulfill condition C1, then one excludes from further iterations those vertices of Y that, either themselves or through their neighbors, violate this condition. See lines 2 to 15 of Figure 2 for more details of the iterative computation. Herein, two bookkeeping sets, one initialized with the empty set and the other computed from it, store the vertices excluded from computing P. To find the vertices that themselves cause the violation of the condition, one uses an augmenting path computation in lines 7 to 11 to obtain, in line 12, subsets A′ and B′ such that the vertices in B′ do not themselves violate the condition. Roughly speaking, an edge from some vertex in B′ to an excluded vertex would imply that the ≤(d+1)-star packing is not maximum, witnessed by an augmenting path (in principle, this idea is also used for finding crown decompositions). The vertices whose neighbors cause the violation of condition C1 and the corresponding neighbors are excluded in line 4 and line 18. We will see that the number of all excluded vertices is bounded, so that, in total, we do not exclude too many vertices with this iterative method. The formal proof of correctness is given in the following subsection.
Running Time and Correctness

Now we show that compute_AB in Figure 1 computes, in polynomial time, two vertex subsets A and B that fulfill the three properties given in Theorem 2.
Running Time of find_extremal. We begin with the proof of the running time of the procedure find_extremal in Figure 2, which uses the following lemmas.
Procedure star-packing in Figure 2 runs in polynomial time.
The next lemma is also used for the correctness proof; in particular, it guarantees the termination of the algorithm.
In lines 4 and 5 of Figure 2, the previously excluded vertices and their neighbors are excluded from the star packing in each iteration of the outer loop. Moreover, further vertices are excluded from consideration in line 6. Therefore, such a vertex cannot be added to B′ in line 12. Thus the exclusion set (updated in line 15) contains its predecessor from the previous iteration. Moreover, this containment is proper, as otherwise the condition in line 13 would be true.
Procedure find_extremal runs in polynomial time.
Correctness of find_extremal. The correctness proof for find_extremal in Figure 2 is more involved than its running time analysis. The following lemmas provide some needed properties of the star packing P.
(Sketch) To prove (1), we first show that no vertex of B′ has a neighbor outside the packing, since otherwise we could get a P-augmenting path from some element of B′. A P-augmenting path is a path on which the edges in P and the edges not in P alternate, and whose first and last edges are not in P. This P-augmenting path can be constructed in an inductive way by simulating the construction of B′ in lines 6 to 11 of Figure 2. From this P-augmenting path, we can then construct an X-center ≤(d+1)-star packing that has more edges than P, contradicting the fact that P has a maximum number of edges. Second, every vertex in A′ is the center of a star due to the definition of A′ and Procedure star-packing. Finally, if a vertex in A′ were the center of a star with fewer than d + 1 leaves, then again we would get a P-augmenting path from some element of B′.
In each iteration, there is no edge in G between B′ and the excluded vertices.
The next lemma shows that the output of find_extremal fulfills the local optimality conditions.
Clearly, the output consists of two disjoint sets. The algorithm returns in line 14 or 19 of Figure 2. If it returns in line 19, then the output set A′ is empty and B′ contains only vertices of sufficiently large distance to the vertices excluded so far: the condition in line 3 guarantees that all vertices of too small distance have been excluded. Since X is a bdd-d-set of G, all vertices in B′ and their neighbors have degree at most d. This implies that both conditions hold for the output returned in this line. It remains to consider the output returned in line 14.
To show that condition C1 holds, recall that G − X has maximum degree d. Therefore, if a vertex in N[B′] has no neighbor in X \ A′, then it has degree at most d in G − A′. Thus, to show that each vertex in N[B′] has degree at most d in G − A′, it suffices to prove that N[B′] has no neighbors in X \ A′. We show the two required containments separately.
The assignment in line 8 and the until-condition in line 11 directly give the first containment. Due to Lemma 6 there is no edge in G between B′ and the excluded vertices (using the if-condition in line 13, which has to be satisfied for the procedure to return in line 14). From this it follows that the vertices in B′ have no excluded vertex as neighbor, and the second containment follows as well.
Running Time and Correctness of compute_AB. To prove the running time and correctness of compute_AB, we have to show that the output of find_extremal contains sufficiently many vertices. To this end, the following lemma plays a decisive role.
The proof is by induction on the iteration number i. The claim trivially holds for i = 0. Assume that the claim is true for i − 1. Using Lemma 3, we have
We first bound the size of the newly excluded vertex set. Since the exclusion set was updated at the end of the previous iteration of the outer loop (line 15), its vertices were not excluded from computing the packing (line 5) of that iteration. Moreover, these vertices are covered by the star packing computed in that iteration, since, otherwise, the set in line 6 would contain one of them and, then, line 8 would include it into B′, contradicting the update in line 15. Due to property 2 in Lemma 5, the leaves of every star in P with center in A′ belong to B′, and thus the vertices in question are leaves of stars in P with centers in A′. Since each star has at most d + 1 leaves, the set under consideration is small enough. The remaining part is easy to bound: since all the vertices involved have degree at most d, we get
With the induction hypothesis, we get that
Procedure find_extremal always finds two sets A′ and B′ of sufficiently large total size.
If find_extremal terminates, then the degree bound holds for the graph that results from removing A′ ∪ B′ from G. Combining the containments established above with Lemma 8, it follows immediately that the claimed size bound holds.
Therefore, if the residual is large enough, then find_extremal always returns two sets A′ and B′ whose union is not empty.
Algorithm compute_AB runs in polynomial time.
The sets A and B computed by compute_AB fulfill the three properties given in Theorem 2.
Since every pair output by find_extremal in line 4 of compute_AB in Figure 1 fulfills conditions C1 and C2 (Lemma 7), the pair (A, B) output in line 3 of compute_AB fulfills conditions C1 and C2 and, therefore, also the local optimality conditions (Lemma 1). It remains to show that (A, B) fulfills the size condition.
Let X and Y be the last computed witness and residual, respectively. Since the condition in line 3 is true, we know that the residual is large. Recall that X is a factor-(d+2) approximate bdd-d-set for G. Thus, every bdd-d-set of G has size at least |X|/(d+2). Since the output sets A and B fulfill the local optimality conditions and the bounded-degree property is hereditary, every bdd-d-set of G[V \ (A ∪ B)] has size at least
The inequality (*) follows from the fact that the remaining vertex set is small relative to the witness.
Conclusion

Our main result generalizes the Nemhauser-Trotter-Theorem, which applies to the Bounded-Degree Deletion problem with d = 0 (that is, Vertex Cover), to the general case with arbitrary d. In particular, in this way we obtain problem kernels with a number of vertices linear in the solution size for all constant values of d for Bounded-Degree Deletion. To this end, we developed a new algorithmic strategy based on extremal combinatorial arguments. The original NT-Theorem has been proven using linear programming relaxations; we see no way in which that approach could have been generalized to Bounded-Degree Deletion. By way of contrast, we presented a purely combinatorial data reduction algorithm which is also completely different from known combinatorial data reduction algorithms for Vertex Cover (see [1, 4, 9]). Finally, Baldwin et al. [3, page 175] remarked that, with respect to practical applicability in the case of Vertex Cover kernelization, combinatorial data reduction algorithms are more powerful than "slower methods that rely on linear programming relaxation". Hence, we expect that benefits similar to those derived from Vertex Cover kernelization for biological network analysis (see the motivation part of our introductory discussion) may be provided by Bounded-Degree Deletion kernelization.
-  F. N. Abu-Khzam, M. R. Fellows, M. A. Langston, and W. H. Suters. Crown structures for vertex cover kernelization. Theory Comput. Syst., 41(3):411–430, 2007.
-  B. Balasundaram, S. Butenko, I. V. Hicks, and S. Sachdeva. Clique relaxations in social network analysis: The maximum s-plex problem. Manuscript, 2008.
-  N. Baldwin, E. Chesler, S. Kirov, M. Langston, J. Snoddy, R. Williams, and B. Zhang. Computational, integrative, and comparative methods for the elucidation of genetic coexpression networks. Journal of Biomedicine and Biotechnology, 2(2005):172–180, 2005.
-  R. Bar-Yehuda and S. Even. A local-ratio theorem for approximating the weighted vertex cover problem. Ann. of Discrete Math., 25:27–45, 1985.
-  H. L. Bodlaender and E. Penninkx. A linear kernel for planar feedback vertex set. In Proc. 3rd IWPEC, volume 5018 of LNCS, pages 160–171. Springer, 2008.
-  H. L. Bodlaender and B. van Antwerpen-de Fluiter. Reduction algorithms for graphs of small treewidth. Inform. and Comput., 167(2):86–119, 2001.
-  J. Chen, I. A. Kanj, and W. Jia. Vertex cover: Further observations and further improvements. J. Algorithms, 41(2):280–301, 2001.
-  E. J. Chesler, L. Lu, S. Shou, Y. Qu, J. Gu, J. Wang, H. C. Hsu, J. D. Mountz, N. E. Baldwin, M. A. Langston, D. W. Threadgill, K. F. Manly, and R. W. Williams. Complex trait analysis of gene expression uncovers polygenic and pleiotropic networks that modulate nervous system function. Nature Genetics, 37(3):233–242, 2005.
-  M. Chlebík and J. Chlebíková. Crown reductions for the minimum weighted vertex cover problem. Discrete Appl. Math., 156:292–312, 2008.
-  R. G. Downey and M. R. Fellows. Parameterized Complexity. Springer, 1999.
-  J. Flum and M. Grohe. Parameterized Complexity Theory. Springer, 2006.
-  J. Guo. A more effective linear kernelization for cluster editing. Theor. Comput. Sci., 2008. To appear.
-  J. Guo and R. Niedermeier. Invitation to data reduction and problem kernelization. ACM SIGACT News, 38(1):31–45, 2007.
-  J. Guo and R. Niedermeier. Linear problem kernels for NP-hard problems on planar graphs. In Proc. 34th ICALP, volume 4596 of LNCS, pages 375–386. Springer, 2007.
-  I. A. Kanj, M. J. Pelsmajer, G. Xia, and M. Schaefer. On the induced matching problem. J. Comput. System Sci., 2009. To appear.
-  S. Khot and O. Regev. Vertex cover might be hard to approximate to within 2 − ε. J. Comput. System Sci., 74(3):335–349, 2008.
-  S. Khuller. The Vertex Cover problem. ACM SIGACT News, 33(2):31–33, 2002.
-  C. Komusiewicz, F. Hüffner, H. Moser, and R. Niedermeier. Isolation concepts for enumerating dense subgraphs. In Proc. 13th COCOON, volume 4598 of LNCS, pages 140–150. Springer, 2007.
-  M. A. Langston, 2008. Personal communication.
-  G. L. Nemhauser and L. E. Trotter. Vertex packings: Structural properties and algorithms. Math. Program., 8:232–248, 1975.
-  R. Niedermeier. Invitation to Fixed-Parameter Algorithms. Oxford University Press, 2006.
-  N. Nishimura, P. Ragde, and D. M. Thilikos. Fast fixed-parameter tractable algorithms for nontrivial generalizations of Vertex Cover. Discrete Appl. Math., 152(1–3):229–245, 2005.
-  E. Prieto and C. Sloper. Looking at the stars. Theor. Comput. Sci., 351(3):437–445, 2006.
-  S. B. Seidman and B. L. Foster. A graph-theoretic generalization of the clique concept. Journal of Mathematical Sociology, 6:139–154, 1978.
-  J. Wang, D. Ning, Q. Feng, and J. Chen. An improved parameterized algorithm for a generalized matching problem. In Proc. 5th TAMC, volume 4978 of LNCS, pages 212–222. Springer, 2008.