Exploiting Reduction Rules and Data Structures: Local Search for Minimum Vertex Cover in Massive Graphs

Abstract

The Minimum Vertex Cover (MinVC) problem is a well-known NP-hard problem. Recently there has been great interest in solving this problem on real-world massive graphs. For such graphs, local search is a promising approach to finding optimal or near-optimal solutions. In this paper we propose a local search algorithm that exploits reduction rules and data structures to solve the MinVC problem in such graphs. Experimental results on a wide range of real-world massive graphs show that our algorithm finds better covers than state-of-the-art local search algorithms for MinVC. We also present interesting results about the complexities of some well-known heuristics.



Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

The Minimum Vertex Cover (MinVC) problem is a well-known NP-hard problem (?) with many real-world applications (?). Given a simple undirected graph G = (V, E), where V is the vertex set and E is the edge set, an edge is a 2-element subset e = {u, v} of V, and we say that u and v are endpoints of e. A vertex cover of a graph is a subset S ⊆ V s.t. for each edge e ∈ E, at least one of e's endpoints is in S. The size of a vertex cover is the number of vertices in it. The MinVC problem is to find a vertex cover of minimum size.

With growing interest in social networks, scientific computation networks, wireless sensor networks, etc., the MinVC problem has re-emerged with even greater significance and complexity, so solving this problem in massive graphs has become an active research agenda. In this paper we are concerned with finding a vertex cover whose size is as small as possible.

It is hard to approximate MinVC within any factor smaller than 1.3606 (?). Over the last decades there have been many works on local search for MinVC, such as (??). Recently, FastVC (?) made a breakthrough on massive graphs by striking a balance between the time efficiency and the guidance effectiveness of its heuristics. However, FastVC exploits very little structural information, and in order to achieve satisfactory time efficiency, it sacrifices guidance effectiveness.

The aim of this work is to develop a local search MinVC solver that deals with massive graphs with strong structures. The basic framework is as follows. First, we exploit reduction rules to construct good starting vertex covers. Then we use local search to find better covers. In both the construction stage and the local search stage, we exploit a novel data structure called alternative partitions to pursue time efficiency without sacrificing the quality of the heuristics. Since we focus here on the impact of the reduction rules and the data structures, we use naive local search strategies, so our solver may be too greedy. In future work, we will exploit strategies to diversify our local search.

Our solver constructs starting vertex covers by incorporating reduction rules. In our experiments, our construction heuristic performs close to or even better than FastVC on a large portion of the graphs. Moreover, it outputs a cover typically within 10 seconds. Hence, it provides a good starting point for the later search. Furthermore, for a small portion of the graphs, our heuristic guarantees that it has found an optimal cover, thanks to the power of reduction rules. So far as we know, this is the first time reduction rules have been applied in a local search MinVC solver, although they have been widely discussed in the theoretical computer science community.

We also propose a brand new data structure to achieve time efficiency. The main idea is to partition the vertices w.r.t. their scores, i.e., two vertices are in the same partition iff they have the same score. Thanks to this data structure, (1) in the construction stage, the complexity of two important construction heuristics is lowered to linear time; (2) in the local search stage, the complexity of the best-picking heuristic is lowered from O(|C|) to O(d̄) on average, where C is the candidate vertex set maintained by local search and d̄ is the average degree. Later in this paper we prove these results rigorously. We applied these theoretical results in our solver, so we call it LinCom (Linear-Complexity-Heuristic Solver).

We tested LinCom and FastVC on the standard benchmark of massive graphs from the Network Data Repository (http://www.graphrepository.com./networks.php) (?). Our experiments show that among all the 12 classes of instances in this benchmark, LinCom falls behind FastVC in only one class. Moreover, LinCom finds smaller covers for a considerable portion of the graphs. This improvement is big, since such gains rarely happen in the literature (?).

Preliminaries

Basic Notations

If {u, v} is an edge of G, we say that u and v are neighbors. We define N(v) as {u ∈ V | {u, v} ∈ E}. The degree of a vertex v, denoted by d(v), is defined as |N(v)|. We use d̄(G) and Δ(G) to denote the average degree and the maximum degree of graph G respectively, suppressing G if understood from the context. An edge e = {u, v} is covered by a vertex set S if at least one of its endpoints is in S, i.e., u ∈ S or v ∈ S (or both). Otherwise it is uncovered by S.
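To make the notation concrete, the following C++ sketch (ours, not the solver's code) shows the adjacency-list representation these definitions suggest, together with the computation of Δ and d̄; the small example graph is purely illustrative.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Graph {
    int n;                              // |V|; vertices are 0..n-1
    std::vector<std::vector<int>> adj;  // adj[v] = N(v)
    explicit Graph(int n) : n(n), adj(n) {}
    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }
    int degree(int v) const { return (int)adj[v].size(); }
};

int main() {
    Graph g(4);
    g.addEdge(0, 1); g.addEdge(1, 2); g.addEdge(1, 3);
    int maxDeg = 0, sumDeg = 0;
    for (int v = 0; v < g.n; ++v) {
        sumDeg += g.degree(v);                  // sum of degrees = 2|E|
        maxDeg = std::max(maxDeg, g.degree(v));
    }
    double avgDeg = (double)sumDeg / g.n;       // average degree = 2|E|/|V|
    std::printf("Delta = %d, average degree = %.2f\n", maxDeg, avgDeg);
    return 0;
}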

Local Search for MinVC

Most local search algorithms solve the MinVC problem by iteratively solving its decision version: given a positive integer k, search for a k-sized vertex cover. A general framework is shown in Algorithm 1. We denote the current candidate solution by C, which is a set of vertices selected for covering.

1 C ← InitVC(G);
2 while not reach terminate condition do
3       if C covers all edges then
4             C* ← C;
5             remove a vertex from C;
6       exchange a pair of vertices;
return C*
Algorithm 1 A Local Search Framework for MinVC

Algorithm 1 consists of two stages: the construction stage (Line 1) and the local search stage (Lines 2 to 6). At the beginning, an initial vertex cover is constructed by the InitVC procedure. Throughout this paper, this initial cover is called the starting vertex cover. Besides the cover, InitVC returns another parameter, a flag which takes the value optimal-guaranteed or optimal-not-guaranteed (see Algorithm 2).

In the local search stage, each time a k-sized cover is found (Line 3), the algorithm removes a vertex from C (Line 5) and begins to search for a (k−1)-sized cover, until some termination condition is reached (Line 2). The move to a neighboring candidate solution consists of exchanging a pair of vertices (Line 6): a vertex u is removed from C, and a vertex v is added into C. Such an exchanging procedure is also called a step by convention. Thus the local search moves step by step in the search space to find better vertex covers. When the algorithm terminates, it outputs the smallest vertex cover that has been found.

For a vertex v ∈ C, the loss of v, denoted by loss(v), is defined as the number of covered edges that will become uncovered by removing v from C. For a vertex v ∉ C, the gain of v, denoted by gain(v), is defined as the number of uncovered edges that will become covered by adding v into C. Both loss and gain are scoring properties of vertices. In any step, a vertex has two possible states: inside C and outside C, and we use age(v) to denote the number of steps that have been performed since the last time v's state changed.
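Both scoring properties can be computed directly from their definitions. The sketch below is our own illustration (the solver maintains these scores incrementally instead); inC[v] indicates membership of v in C.

#include <vector>

// loss(v), for v in C: covered edges that become uncovered if v is removed,
// i.e. edges {v,u} whose only covering endpoint is v.
int loss(const std::vector<std::vector<int>>& adj,
         const std::vector<bool>& inC, int v) {
    int l = 0;
    for (int u : adj[v]) if (!inC[u]) ++l;
    return l;
}

// gain(v), for v outside C: uncovered edges {v,u} that become covered if v
// is added, i.e. those whose other endpoint u is also outside C.
int gain(const std::vector<std::vector<int>>& adj,
         const std::vector<bool>& inC, int v) {
    int g = 0;
    for (int u : adj[v]) if (!inC[u]) ++g;
    return g;
}

Note that both quantities count v's neighbors outside C; whether the count reads as loss or gain depends only on which side of C the vertex v lies.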

The Construction Stage

Previous procedures construct a starting vertex cover from an empty set mainly as below:

  1. Max-gain: select a vertex v with the maximum gain(v) and add v into C, breaking ties randomly. Repeat this procedure until C becomes a cover. (?)

  2. Min-gain: select a vertex v with the minimum positive gain(v) and add all of v's neighbors into C, breaking ties randomly. Repeat this procedure until C becomes a cover. Redundant vertices (vertices whose loss is 0) in C are then removed. (??)

  3. Edge-greedy: select an uncovered edge e, and add the endpoint of e with the higher degree into C. Repeat this procedure until C becomes a cover. Redundant vertices in C are then removed by a one-pass procedure (see the sketch below). (?)
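As an illustration of heuristic 3, here is a sketch of the edge-greedy construction under our reading of the description above, including a one-pass removal of redundant vertices; it is not the referenced implementation.

#include <utility>
#include <vector>

std::vector<bool> edgeGreedyCover(int n,
        const std::vector<std::pair<int,int>>& edges,
        const std::vector<std::vector<int>>& adj) {
    std::vector<bool> inC(n, false);
    for (auto [u, v] : edges)
        if (!inC[u] && !inC[v])                  // edge still uncovered
            inC[adj[u].size() >= adj[v].size() ? u : v] = true;
    // Remove redundant vertices (loss(v) == 0): every edge of v is also
    // covered by its other endpoint, so v can safely leave the cover.
    for (int v = 0; v < n; ++v) {
        if (!inC[v]) continue;
        bool redundant = true;
        for (int u : adj[v]) if (!inC[u]) { redundant = false; break; }
        if (redundant) inC[v] = false;
    }
    return inC;
}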

Reduction Rules for MinVC

Our solver will incorporate the following reduction rules in the InitVC procedure to handle vertices of small degrees.

Degree-1 Rule: If G contains a degree-1 vertex v whose neighbor is u, then there is a minimum vertex cover of G that contains u.

The two rules below are from (?).

Degree-2 with Triangle Rule: If G contains a degree-2 vertex v whose neighbors u and w are adjacent, then there is a minimum vertex cover of G that contains both u and w.

Degree-2 with Quadrilateral Rule: If G contains two non-adjacent degree-2 vertices u and v s.t. N(u) = N(v) = {w, z}, then there is a minimum vertex cover of G that contains both w and z.

Since we are to develop a local search solver, we now rewrite these rules in the terminology of local search, using the candidate solution C and the gain function.

Degree-1 Rule: If gain(v) = 1 and u is a neighbor of v s.t. the edge {u, v} is uncovered, then put u into the candidate solution C.

Degree-2 with Triangle Rule: If gain(v) = 2, and u and w are both v's neighbors s.t. the edges {v, u} and {v, w} are uncovered and {u, w} ∈ E, then put both u and w into the candidate solution C.

Degree-2 with Quadrilateral Rule: If gain(u) = gain(v) = 2, and w and z are both neighbors shared by u and v s.t. the edges {u, w}, {u, z}, {v, w} and {v, z} are uncovered, then put both w and z into the candidate solution C.
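To illustrate how such a rule fires during construction, the following sketch applies the Degree-1 Rule in its local-search form; gain values are assumed to be maintained elsewhere, and addToC stands for whatever (hypothetical here) procedure puts a vertex into C and updates the scores.

#include <vector>

// Returns true iff the Degree-1 Rule fired at v: v is outside C and exactly
// one incident edge {v,u} is uncovered, so u goes into the cover.
template <typename AddToC>
bool applyDegree1Rule(const std::vector<std::vector<int>>& adj,
                      const std::vector<bool>& inC,
                      const std::vector<int>& gain,
                      int v, AddToC addToC) {
    if (inC[v] || gain[v] != 1) return false;
    for (int u : adj[v])
        if (!inC[u]) {        // {v,u} is the single uncovered edge at v
            addToC(u);        // some minimum cover contains u
            return true;
        }
    return false;
}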

Incorporating Reduction Rules

We incorporate reduction rules in order to: (1) construct smaller starting vertex covers; (2) help confirm optimality.

Constructing A Vertex Cover with Reductions

Like (?), our InitVC procedure also consists of an extending phase (Lines 3 to 10 of Algorithm 2) and a shrinking phase (Line 11). Notice that if we construct a cover by only using reduction rules, then it must be optimal. So we employ a predicate max_gain_used s.t. max_gain_used = true if Line 10 has been executed, and max_gain_used = false otherwise.

input : A graph G
output : A cover C and whether-optimal-guaranteed
1 C ← ∅;
2 max_gain_used ← false;
3 while there exist uncovered edges do
4       repeatedly apply the Degree-2 with Triangle Rule until it is not applicable;
5       repeatedly apply the Degree-2 with Quadrilateral Rule until it is not applicable;
6       repeatedly apply the Degree-1 Rule until it is not applicable;
7       if any rule above is applicable then continue;
8       if all edges are covered then break;
9       max_gain_used ← true;
10      pick a vertex v with the maximum gain(v) (ties are broken randomly), and put it into C;
11 eliminateRedundantVertices(C);
12 if max_gain_used = true then
13       return (C, optimal-not-guaranteed);
14 else
15       return (C, optimal-guaranteed);
Algorithm 2 InitVC

In Line 1, we initialize C to be an empty set. Then we extend C into a vertex cover of G by iteratively adding vertices into C. Lines 4 to 6 apply reduction rules to put vertices into C. Line 7 ensures that no reduction rule is applicable before the max-gain heuristic (Line 10) is used. After the extending phase (Lines 3 to 10), Line 11 removes the redundant vertices from C, just as (?) did.

Fixing Vertices in the Starting Vertex Cover

When Algorithm 2 constructs a starting vertex cover, some of the vertices are put into C based on pure reductions, that is, they were put into C while max_gain_used = false. Hence, there exists a minimum vertex cover which contains all such vertices, and we call them inferred vertices. In local search we can fix the inferred vertices in C s.t. they are never allowed to be removed from C. It seems that such a procedure is able to reduce the search space and speed up the search.

So we employ an array fixed, whose elements are indicators for the vertices. During the execution of Algorithm 2, we maintain the array as below:

  1. Rule 1: Before the extending phase, fixed[v] is set to false for each vertex v.

  2. Rule 2: When putting a vertex v into C, we check whether max_gain_used = false. If so, fixed[v] is set to true.

Thus when Algorithm 2 is completed, fixed[v] = true if v is an inferred vertex, and fixed[v] = false otherwise. So later, when we are doing local search, we can forbid v from being removed from C if fixed[v] = true, as shown in Lines 6 and 7 of Algorithm 3.

A Local Search MinVC Solver

input : A graph G, the cutoff time
output : A vertex cover of G
1 (C, flag) ← InitVC(G);
2 if flag = optimal-guaranteed then return C;
3 while elapsed time < cutoff do
4       if C covers all edges then
5             C* ← C;
6             remove a vertex u s.t. fixed[u] = false with the minimum loss(u) from C, breaking ties randomly;
7       remove a vertex u s.t. fixed[u] = false with the minimum loss(u) from C, breaking ties randomly;
8       e ← a random uncovered edge;
9       add the endpoint of e with the greater gain into C, breaking ties in favor of the older one;
return C*;
Algorithm 3 LinCom(G, cutoff)

Our solver LinCom is outlined in Algorithm 3. At first a vertex cover is constructed. If the returned cover is guaranteed to be optimal, the algorithm returns it immediately.

Then at each step, the algorithm first removes a vertex u that is not an inferred vertex (i.e., fixed[u] = false) and has the minimum loss(u), breaking ties randomly. Then the algorithm picks a random uncovered edge e, chooses the endpoint of e with the greater gain and adds it into C, breaking ties in favor of the older one.
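For exposition, the removal choice can be written as a naive scan that honors the fixed array; this sketch is ours and costs O(|C|) per step, which is exactly what the alternative partitions introduced below avoid.

#include <vector>

// C lists the vertices currently in the candidate solution. Ties are broken
// by first occurrence here; the solver breaks them randomly.
int pickMinLossNonFixed(const std::vector<int>& C,
                        const std::vector<int>& loss,
                        const std::vector<bool>& fixed) {
    int best = -1;
    for (int v : C)
        if (!fixed[v] && (best == -1 || loss[v] < loss[best]))
            best = v;
    return best;    // -1 if every vertex in C is fixed
}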

Data Structures

In order to lower the complexities, we exploit an efficient data structure named alternative partitions (see Figure 1).

Alternative Partitions

We use the loss-k (resp. gain-k) partition to denote the partition that contains the vertices in C (resp. outside C) whose loss (resp. gain) is k (Figure 1). All the loss-k partitions are shown as dark regions, and all the gain-k partitions are shown as light ones. Since the dark and the light regions are distributed alternately, we call them alternative partitions. Obviously we have

Proposition 1
  1. 0 ≤ loss(v) ≤ d(v), where v ∈ C.
  2. 0 ≤ gain(v) ≤ d(v), where v ∈ V \ C.

Then we use Algorithm 4 to find a vertex in C with the minimum loss.

input : A sequence of alternative partitions
output : A random vertex with minimum loss
1 k ← 0;
2 while the loss-k partition is empty do k ← k + 1;
3 return a random vertex in the loss-k partition;
Algorithm 4 randomMinLossVertex

In this algorithm we first check whether there are any vertices whose loss is 0. If so, we randomly return one of them. Otherwise, we go on to check whether there are any vertices whose loss is 1, 2, and so on, until we find a non-empty partition. Then we randomly return a vertex in that partition. So we have,

Proposition 2

The complexity of Algorithm 4 is O(k* + 1), where k* is the minimum loss among the vertices in C.

Similarly we have

Proposition 3

The complexity of finding the partition with the maximum/minimum gain is O(Δ + 1).
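Algorithm 4 can be rendered as follows; for brevity this sketch stores each partition as a bucket, which simplifies the array-plus-pointers layout described in the next subsection.

#include <cstdlib>
#include <vector>

// lossPartition[k] holds the vertices in C whose loss equals k. Assumes C is
// non-empty, so some partition is non-empty and the scan terminates.
int randomMinLossVertex(const std::vector<std::vector<int>>& lossPartition) {
    int k = 0;
    while (lossPartition[k].empty()) ++k;  // first non-empty = minimum loss
    const std::vector<int>& p = lossPartition[k];
    return p[std::rand() % p.size()];      // random vertex of minimum loss
}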

Implementations

Given a graph G and a candidate solution C, we implement the alternative partitions on an array in which each position holds a vertex (see Figure 1). Besides, we maintain two additional arrays of pointers, each of which points to the beginning of a specific partition. Imagine the vertex array as a book of vertices and the pointer arrays as the indexes of the book.
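Under our reading, the layout can be declared as below; the posOf map is an assumption of ours, needed so that a vertex can be located within its partition in O(1) time.

#include <vector>

struct AlternativePartitions {
    std::vector<int> vertexAt;   // the book: a permutation of all vertices
    std::vector<int> posOf;      // posOf[v] = index of v within vertexAt
    std::vector<int> lossStart;  // lossStart[k] = first slot of the loss-k partition
    std::vector<int> gainStart;  // gainStart[k] = first slot of the gain-k partition
};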

Initializing the Partitions

At first, when C is empty, there are no dark regions in our data structure, so initializing the partitions is equivalent to sorting the vertices into monotonically nondecreasing order of gain. Notice that at this time the gain of any vertex is equal to its degree, so we need to sort the vertices by degree. By Proposition 1, the keys lie between 0 and Δ < |V|, so this satisfies the assumption of counting sort, which runs in linear time (?). Thus we have,

Proposition 4

Initializing the partitions takes O(|V|) time.
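A sketch of this initialization via counting sort, under the layout assumed above and ignoring the (still empty) loss partitions:

#include <algorithm>
#include <vector>

void initPartitions(const std::vector<int>& degree,  // d(v); gain(v) = d(v) here
                    std::vector<int>& vertexAt,
                    std::vector<int>& posOf,
                    std::vector<int>& gainStart) {
    int n = (int)degree.size();
    int maxDeg = 0;
    for (int d : degree) maxDeg = std::max(maxDeg, d);
    std::vector<int> count(maxDeg + 2, 0);
    for (int d : degree) ++count[d];                  // histogram of degrees
    gainStart.assign(maxDeg + 2, 0);
    for (int k = 1; k <= maxDeg + 1; ++k)             // prefix sums give the
        gainStart[k] = gainStart[k - 1] + count[k - 1];  // partition starts
    vertexAt.assign(n, 0);
    posOf.assign(n, 0);
    std::vector<int> next(gainStart);                 // next free slot per partition
    for (int v = 0; v < n; ++v) {
        int i = next[degree[v]]++;
        vertexAt[i] = v;
        posOf[v] = i;
    }
}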

Maintaining the Partitions

After initialization, there are two cases in which a particular vertex, say v, has to be moved from one partition to another: (1) adding v into (resp. removing v from) C; (2) increasing/decreasing gain(v) or loss(v) by 1. Thus the core operation is to move a vertex to an adjacent partition.

Figure 1: Adding a vertex into C (a)
Figure 2: Adding a vertex into C (b)
Figure 3: Adding a vertex into C (c)

Now we show how to do this with an example (see Figures 1 to 3), writing u for the vertex to be added into C. Initially u and another vertex w are in the gain-52 partition, and thus their gain is 52 (Figure 1). Notice that after being added, u's loss will become 52, i.e., u should be in the loss-52 partition. Thus the operation is performed like this: (1) u is swapped with w so that u lies at the boundary of its partition (Figure 2); (2) the boundary pointer P is moved past u, so u's slot now belongs to the adjacent loss-52 partition (Figure 3).

We define placeVertexIntoC(v) as the procedure that moves v from its gain-k partition to the respective loss-k partition, puts it into C and updates its score. And we define gainMinusMinus(v) as the procedure that moves v from its gain-k partition to the respective gain-(k−1) partition and updates its score. Analogously we define placeVertexOutFromC(v), lossMinusMinus(v), gainPlusPlus(v), and lossPlusPlus(v). Then we have

Proposition 5

All these procedures are of O(1) complexity.
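The following sketch shows one such O(1) procedure, gainMinusMinus, under the layout assumed above (for brevity it treats the gain partitions as contiguous, ignoring the interleaved loss partitions): to move v from the gain-k partition to the gain-(k−1) partition, swap v with the first vertex of its partition and shift the boundary pointer by one slot, exactly the swap-then-move step of the figures.

#include <utility>
#include <vector>

void gainMinusMinus(int v,
                    std::vector<int>& vertexAt, std::vector<int>& posOf,
                    std::vector<int>& gainStart, std::vector<int>& gainOf) {
    int k = gainOf[v];
    int first = gainStart[k];                // first slot of the gain-k partition
    int i = posOf[v];
    std::swap(vertexAt[i], vertexAt[first]); // v moves to the partition boundary
    posOf[vertexAt[i]] = i;
    posOf[vertexAt[first]] = first;
    ++gainStart[k];                          // the slot joins the gain-(k-1) side
    gainOf[v] = k - 1;
}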

Complexity Analysis

In this section, we evaluate the complexities of the best-picking and the vertex cover construction heuristics.

Complexity of The Best-picking Heuristic

Along with adding/removing a vertex v, we have to move this vertex and all its neighbors to other partitions. Thus by Proposition 5, maintaining the partitions takes O(1) time for v itself plus an amount of time proportional to d(v). Thus,

Proposition 6

When a vertex v is added/removed, the complexity of maintaining the partitions at each step is O(d(v) + 1).

By Propositions 2 and 6, and since the minimum loss found by Algorithm 4 equals loss(u) ≤ d(u) for the removed vertex u, we have

Proposition 7

The best-picking heuristic in Algorithm 3 can be done in O(d(u) + 1) complexity, where u is the removed vertex.

In the local search stage, by Proposition 1, we have

Theorem 8

Suppose that each vertex has equal probability of being added or removed; then the average complexity of the best-picking heuristic in Algorithm 3 is O(d̄).

This result is nice because (?) stated that the best-picking heuristic was of O(|C|) complexity. Since most real-world graphs are sparse (???), we have d̄ ≪ |C|.

Complexity of The Max-gain/Min-gain Heuristics

input : A graph G
output : A cover C and whether-optimal-guaranteed
1 C ← ∅; U ← E; initialize the partitions;
2 while U ≠ ∅ do
3       k ← 1;
4       while the gain-k partition is empty do k ← k + 1;
5       v ← a random vertex in the gain-k partition;
6       foreach u ∈ N(v) do
7             if u ∈ C then continue;
8             placeVertexIntoC(u); remove u's incident edges from U;
9             foreach w ∈ N(u) do
10                   if w ∈ C then lossMinusMinus(w);
11                   else gainMinusMinus(w);
return (C, optimal-not-guaranteed);
Algorithm 5 minGainConstructVC

(?) formally proved a superlinear worst-case complexity for the max-gain heuristic. Moreover, both (?) and (?) rigorously proved a superlinear worst-case complexity for the min-gain heuristic. Yet with the alternative partitions, we have

Theorem 9

The min-gain/max-gain heuristic constructs a starting vertex cover C in O(|V| + |E|) complexity.

Proof: We use U to denote the set of uncovered edges.

  1. We prove the case for min-gain by Algorithm 5. By Proposition 4, Line 1 has a complexity of O(|V|).

    In any cycle of the outer while-loop, if the condition in Line 4 is tested k times, then gain(v) = k, and thus k neighbors of v will be put into C. That is, in any cycle, the number of tests done in Line 4 is equal to the number of vertices that will be put into C. So that condition will be tested exactly |C| times during the algorithm.

    Given v in Line 5, the algorithm tests in Line 7 whether each of v's neighbors is in C. Considering the case in which we have to test every neighbor of every vertex, the total number of tests done is the sum of all degrees, i.e., 2|E|. Thus the condition in Line 7 will be tested at most 2|E| times.

    After putting a vertex u into C in Line 8, we have to update the information about its neighbors (Lines 9 to 11). Again considering the extreme case above, the total number of updates (gainMinusMinus or lossMinusMinus) will be at most 2|E|. By Proposition 5, the time spent in Lines 9 to 11 during the algorithm is O(|E|). To conclude, the overall complexity is O(|V| + |E|).

  2. We prove the case for max-gain by Algorithm 6.

    In Line 2 of Algorithm 6, we initialize k to be Δ, which is equal to the maximum gain at this time. Notice that the value of the maximum gain never increases in the construction stage. So during the execution, whenever we find that there are no vertices whose gain is k, we go on to check whether there are any vertices whose gain is k − 1. Thus, during the execution, k is always the value of the maximum gain.

    When the condition in Line 4 of Algorithm 6 is tested, there are two cases: (1) if the test succeeds, then k is decreased by 1; (2) if it fails, then one vertex is put into C. So the number of tests done in Line 4 is exactly |C| plus the total number of decrements of k, which is at most Δ. Similarly, the overall complexity is O(|V| + |E|).
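The amortization in this argument can be made explicit with a small sketch (bucket representation assumed, as before): the cursor k only moves downward over the whole construction, so all lookups together cost O(Δ + |C|) rather than O(Δ) each.

#include <vector>

struct MaxGainCursor {
    int k;                                    // current bound on the maximum gain
    explicit MaxGainCursor(int maxDeg) : k(maxDeg) {}
    // Returns the index of the non-empty partition with maximum gain.
    int find(const std::vector<std::vector<int>>& gainPartition) {
        while (k > 0 && gainPartition[k].empty()) --k;  // never reset upward
        return k;
    }
};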

 

input : A graph G
output : A cover C, whether-optimal-guaranteed
1 C ← ∅; U ← E; initialize the partitions;
2 k ← Δ;
3 while U ≠ ∅ do
4       while the gain-k partition is empty do k ← k − 1;
5       v ← a random vertex in the gain-k partition;
6       placeVertexIntoC(v); remove v's incident edges from U;
7       foreach u ∈ N(v) do
8             if u ∈ C then lossMinusMinus(u);
9             else gainMinusMinus(u);
return (C, optimal-not-guaranteed);
Algorithm 6 maxGainConstructVC

Besides, we compared Algorithm 6 with the traditional implementation (?) through experiments. Moreover, as to the min-gain heuristic, we implemented it ourselves in two ways: Algorithm 5 and the previous way. The experiments show that our methods are faster than the traditional ones by orders of magnitude on large instances, so the experimental results are completely consistent with the theoretical expectations. So far we have not derived the complexity of Algorithm 2, but we believe that it is also linear, because our InitVC procedure typically outputs a vertex cover within 10 seconds.

Because the max-gain heuristic was proposed about three decades ago (?), and (?) still proved a superlinear complexity for it, our result is surprising. Note that partitioning is a general method and can also be applied to huge instances of other problems.

Experimental Evaluation

In this section, we carry out extensive experiments to evaluate LinCom on massive graphs, compared against the state-of-the-art local search MinVC algorithm FastVC. To show the individual impacts, we also present the performance of our construction procedure (named InitVC in the tables).

Benchmarks

We downloaded all 139 instances (http://lcs.ios.ac.cn/~caisw/Resource/realworld%20graphs.tar.gz). They were originally online (http://www.graphrepository.com./networks.php) and then transformed into the DIMACS graph format. We excluded three extremely large ones, since they run out of memory for all the algorithms considered here. Thus we tested all the solvers on the remaining 136 instances. Some of them have recently been used in testing parallel algorithms for Maximum Clique and Coloring problems (??).

Experiment Setup

All the solvers were compiled by g++ 4.6.3 with the '-O3' option. For FastVC (http://lcs.ios.ac.cn/~caisw/Code/FastVC.zip), we adopt the parameter setting reported in (?). The experiments were conducted on a cluster equipped with Intel(R) Xeon(R) X5650 CPUs @2.67GHz with 8GB RAM, running Red Hat Santiago OS.

All the algorithms were executed on each instance with a time limit of 1000 seconds, with seeds from 1 to 100. For each algorithm on each instance, we report the minimum size and the averaged size (in parentheses) of the vertex covers found. To make the comparisons clearer, we also report the difference Δ between the minimum size of the vertex cover found by FastVC and that found by LinCom. A positive Δ means that LinCom finds a smaller vertex cover, while a negative Δ means that FastVC finds a smaller vertex cover. We omit the numbers of vertices of these graphs; readers may refer to (?) or the download website.

Experimental Results

We show the main experimental results in Tables 1 and 2. For the sake of space, we do not report the results on graphs with fewer than 1000 vertices. Furthermore, we do not report the results on graphs where LinCom and FastVC return exactly the same minimum size and average size.

Graph FastVC min (avg) InitVC min (avg) LinCom min (avg) Δ
ca-AstroPh 11483 (11483) 11483 (11483.36) 11483 (11483.01) 0
ca-citeseer 129193 (129193) 129193 (129193.82) 129193 (129193.36) 0
ca-coauthors-dblp 472179 (472179) 472234 (472242.19) 472179 (472179.02) 0
ca-CondMat 12480 (12480) 12481 (12481.25) 12480 (12480.06) 0
ca-dblp-2010 121969 (121969) 121970 (121971.02) 121969 (121969.64) 0
ca-dblp-2012 164949 (164949) 164949 (164950.88) 164949 (164950.35) 0
ca-hollywood-2009 864052 (864052) 864052 (864053.9) 864052 (864052.01) 0
ca-MathSciNet 139951 (139951) 139951 (139952.45) 139951 (139952.23) 0
socfb-A-anon 375231 (375232.94) 375230 (375230.82) 375230 (375230.82) 1
socfb-B-anon 303048 (303048.93) 303048 (303048) 303048 (303048) 0
socfb-Berkeley13 17209 (17212.18) 17280 (17290.32 ) 17210 (17215.93) -1
socfb-CMU 4986 (4986.72) 5002 (5007.41) 4986 (4987.24) 0
socfb-Duke14 7683 (7683.05) 7707 (7712.34) 7683 (7684.98) 0
socfb-Indiana 23313 (23317.19) 23426 (23439.12) 23319 (23323.79) -6
socfb-MIT 4657 (4657) 4663 (4669.13) 4657 (4657.56) 0
socfb-OR 36547 (36549.44 ) 36586 (36594.26) 36548 (36549.50) -1
socfb-Penn94 31161 (31164.95) 31299 (31313.34) 31165 (31170.78) -4
socfb-Stanford3 8517 (8517.89) 8534 (8540.01) 8518 (8518.35) -1
socfb-Texas84 28166 (28171.54) 28306 (28317.76) 28169 (28178.98) -3
socfb-UCLA 15222 (15224.41) 15279 (15294.25) 15224 (15228.85) -2
socfb-UConn 13230 (13231.60) 13287 (13300.16) 13232 (13235.99) -2
socfb-UCSB37 11261 (11262.88) 11310 (11316.65) 11262 (11265.54) -1
socfb-UF 27305 (27309.04) 27440 (27453.23) 27310 (27316.25) -5
socfb-UIllinois 24090 (24093.97) 24209 (24222.07) 24095 (24101.18) -5
socfb-Wisconsin87 18383 (18385.46) 18468 (18483.70) 18384 (18390.13) -1
ia-enron-large 12781 (12781) 12781 (12781.2) 12781 (12781.2) 0
inf-power 2203 (2203) 2203 (2203.01) 2203 (2203.01) 0
inf-roadNet-CA 1001254 (1001325.29) 1007098 (1007362.34) 1001058 (1001139.61) 196
inf-roadNet-PA 555203 (555248.74) 558206 (558343.72) 555035 (555107.22) 168
rec-amazon 47606(47606.01) 47605 (47611.64) 47605 (47605.62) 1
rt-retweet-crawl 81044 (81047.81) 81040 (81040) 81040 (81040) 4
Table 1: Experimental results on collaboration networks, facebook networks, interaction networks, infrastructure networks, recommendation networks and retweet networks
Graph FastVC min (avg) InitVC min (avg) LinCom min (avg) Δ
sc-ldoor 856754 (856757.36) 858142 (858173.08) 856755 (856757.18) -1
sc-msdoor 381558 (381559.23) 382102 (382120.66) 381559 (381559.86) -1
sc-nasasrb 51242 (51247.27) 51575 (51605.64) 51243 (51249.23) -1
sc-pkustk11 83911 (83912.97) 84124 (84146.02) 83911 (83913.52) 0
sc-pkustk13 89217 (89220.46) 89625 (89652.49) 89219 (89222.95) -2
sc-pwtk 207711 (207720.22) 208713 (208760.96) 207698 (207711.11) 13
sc-shipsec1 117305 (117338.65) 118727 (118788.57) 117278 (117319.88 ) 27
sc-shipsec5 147140 (147179.12) 147656 (147710.75) 146991 (147022.95) 149
soc-BlogCatalog 20752 (20752) 20752 (20752.01) 20752 (20752.01) 0
soc-brightkite 21190 (21190) 21190 (21190.09) 21190 (21190.09) 0
soc-buzznet 30625 (30625) 30613 (30613) 30613 (30613) 12
soc-delicious 85660 (85696.77) 85343 (85364.83) 85319 (85333.75) 341
soc-digg 103243 (103244.72) 103234 (103234.01) 103234 (103234.01) 9
soc-epinions 9757 (9757) 9757 (9757.02) 9757 (9757.02) 0
soc-flickr 153272 (153272.03) 153271 (153274.09) 153271 (153271.45) 1
soc-flixster 96317 (96317) 96317 (96317.02) 96317 (96317.02) 0
soc-FourSquare 90108 (90109.09) 90108 (90108.13) 90108 (90108.13) 0
soc-gowalla 84222 (84222.36) 84222 (84224.28) 84222 (84222.07) 0
soc-livejournal 1869044 (1869054.64) 1868997(1869010.13) 1868924 (1868932.92) 120
soc-pokec 843419 (843432.58) 843768 (843783.01) 843344 (843347.38) 75
soc-youtube 146376 (146376.13) 146376 (146376.35) 146376 (146376.1) 0
soc-youtube-snap 276945 (276945) 276945 (276945.21) 276945 (276945.21) 0
tech-as-skitter 527161 (527204.59) 525132 (525149.68) 525086 (525099.14) 2075
tech-RL-caida 74924 (74940.83 ) 74618 (74625.67) 74607 (74615.25) 317
scc_infect-dublin 9104 (9104) 9110 (9112.56 ) 9103 (9103) 1
scc_retweet-crawl 8419 (8419) 8419 (8419.02) 8419 (8419.02) 0
web-arabic-2005 114425 (114427.28) 114431 (114435.40) 114420 (114420.67) 5
web-BerkStan 5384 (5384) 5388 (5388.13) 5384 (5384.13) 0
web-it-2004 414671 (414675.12) 414854 (414874.98) 414646 (414649) 25
web-spam 2298 (2298.01) 2297 (2298.07) 2297 (2297.26) 1
web-wikipedia2009 648315 (648321.83) 648385 (648401.24) 648300 (648312.39) 15
Table 2: Experimental results on scientific computation networks, social networks, technological networks, temporal reachability networks and web link networks

From the results in Tables 1 and 2, we observe that:

1) LinCom attains the best known solutions for most instances, and makes significant progress. In fact, over all the 136 tested instances, LinCom has found covers with 26 fewer vertices on average. This improvement is big, since it rarely happens in the literature that a new solver finds better solutions (?).

2) LinCom is more robust. Out of the 12 classes, LinCom outperforms FastVC on 7 classes, while FastVC outperforms LinCom on only 1 class (the facebook networks). It seems that our local search is too greedy and not as effective as FastVC for facebook networks.

3) There are quite a few instances (e.g., soc-delicious) where InitVC alone outperforms FastVC. This illustrates that our InitVC procedure generates desirable starting vertex covers.

Furthermore, the solutions for the following 9 instances are guaranteed to be optimal: ca-CSphd, ca-Erdos992, ia-email-EU, ia-reality, ia-wiki-Talk, soc-douban, soc-LiveMocha, soc-twitter-follows, tech-internet-as. So our InitVC procedure is sometimes complete in practice.

Conclusions and Future Work

In this paper, we have developed a local search algorithm for MinVC called LinCom, based on reduction rules and data structures. The reduction rules help generate a better quality starting vertex cover, while the data structures lower the complexities of the heuristics.

The main contributions are twofold: (1) at the theoretical level, we have lowered the complexities of two vertex cover construction heuristics and of the best-picking heuristic, based on the score-based alternative partitions; (2) we applied these results together with reduction rules to develop a local search solver which outperforms the state-of-the-art.

As for future work, we will utilize various diversification strategies in our solver. Also, we will apply reduction rules to select vertices for exchanging during local search.

References

  • [Barabasi and Albert 1999] Barabasi, A.-L., and Albert, R. 1999. Emergence of scaling in random networks. Science 286(5439):509–512.
  • [Cai et al. 2013] Cai, S.; Su, K.; Luo, C.; and Sattar, A. 2013. NuMVC: An efficient local search algorithm for minimum vertex cover. J. Artif. Intell. Res. (JAIR) 46:687–716.
  • [Cai 2015] Cai, S. 2015. Balance between complexity and quality: Local search for minimum vertex cover in massive graphs. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, 747–753.
  • [Chen, Kanj, and Jia 2001] Chen, J.; Kanj, I. A.; and Jia, W. 2001. Vertex cover: Further observations and further improvements. J. Algorithms 41(2):280–301.
  • [Chung and Lu 2006] Chung, F., and Lu, L. 2006. Complex Graphs and Networks. Number 107 in CBMS Regional Conference Series in Mathematics. American Mathematical Society.
  • [Cormen et al. 2009] Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; and Stein, C. 2009. Introduction to Algorithms (3. ed.). MIT Press.
  • [Dinur and Safra 2004] Dinur, I., and Safra, S. 2004. On the hardness of approximating label-cover. Inf. Process. Lett. 89(5):247–254.
  • [Eubank et al. 2004] Eubank, S.; Kumar, V. S. A.; Marathe, M. V.; Srinivasan, A.; and Wang, N. 2004. Structural and algorithmic aspects of massive social networks. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2004, New Orleans, Louisiana, USA, January 11-14, 2004, 718–727.
  • [Johnson and Trick 1996] Johnson, D. S., and Trick, M. A., eds. 1996. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, Workshop, October 11-13, 1993. Boston, MA, USA: American Mathematical Society.
  • [Karp 1972] Karp, R. M. 1972. Reducibility among combinatorial problems. In Proceedings of a symposium on the Complexity of Computer Computations, held March 20-22, 1972, at the IBM Thomas J. Watson Research Center, Yorktown Heights, New York., 85–103.
  • [Kettani, Ramdani, and Tadili 2013] Kettani, O.; Ramdani, F.; and Tadili, B. 2013. Article: A heuristic approach for the vertex cover problem. International Journal of Computer Applications 82(4):9–11. Full text available.
  • [Papadimitriou and Steiglitz 1982] Papadimitriou, C. H., and Steiglitz, K. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice Hall, New York, USA.
  • [Richter, Helmert, and Gretton 2007] Richter, S.; Helmert, M.; and Gretton, C. 2007. A stochastic local search approach to vertex cover. In KI 2007: Advances in Artificial Intelligence, 30th Annual German Conference on AI, KI 2007, Osnabrück, Germany, September 10-13, 2007, Proceedings, 412–426.
  • [Rossi and Ahmed 2014] Rossi, R. A., and Ahmed, N. K. 2014. Coloring large complex networks. Social Netw. Analys. Mining 4(1):228.
  • [Rossi and Ahmed 2015] Rossi, R. A., and Ahmed, N. K. 2015. The network data repository with interactive graph analytics and visualization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
  • [Rossi et al. 2014] Rossi, R. A.; Gleich, D. F.; Gebremedhin, A. H.; and Patwary, M. M. A. 2014. Fast maximum clique algorithms for large graphs. In 23rd International World Wide Web Conference, WWW ’14, Seoul, Republic of Korea, April 7-11, 2014, Companion Volume, 365–366.
  • [Ugurlu 2012] Ugurlu, O. 2012. New heuristic algorithm for unweighted minimum vertex cover. In Problems of Cybernetics and Informatics (PCI), 2012 IV International Conference, 1–4.