Variations on Memetic Algorithms for Graph Coloring Problems

Laurent Moalic, laurent.moalic@utbm.fr, Univ. Bourgogne Franche-Comté, UTBM, OPERA, Belfort, France
Alexandre Gondran, alexandre.gondran@enac.fr, ENAC, French Civil Aviation University, Toulouse, France

Abstract - Graph vertex coloring with a given number of colors is a well-known and much-studied NP-complete problem. The most effective methods for solving this problem have proved to be hybrid algorithms such as memetic algorithms or quantum annealing. These hybrid algorithms use a powerful local search inside a population-based algorithm. This paper presents a new memetic algorithm based on one of the most effective algorithms: the Hybrid Evolutionary Algorithm (HEA) of Galinier and Hao (1999). The proposed algorithm, denoted HEAD (for HEA in Duet), works with a population of only two individuals. Moreover, HEAD brings a new way of managing diversity. These two main differences greatly improve the results, both in terms of solution quality and computational time. HEAD has produced several good results on the popular DIMACS benchmark graphs, such as 222-colorings for <dsjc1000.9>, 81-colorings for <flat1000_76_0> and even 47-colorings for <dsjc500.5> and 82-colorings for <dsjc1000.5>.

Keywords - Combinatorial optimization, Metaheuristics, Coloring, Graph, Evolutionary

1 Introduction

Given an undirected graph G = (V, E) with a set V of vertices and a set E of edges, graph vertex coloring involves assigning a color to each vertex so that two adjacent vertices (linked by an edge) have different colors. The Graph Vertex Coloring Problem (GVCP) consists in finding the minimum number of colors, called the chromatic number χ(G), required to color the graph while respecting these binary constraints. The GVCP is a well-documented and much-studied problem because this simple formalization can be applied to various issues such as frequency assignment problems [1, 2], scheduling problems [3, 4, 5] and flight level allocation problems [6, 7]. Most problems that involve sharing a scarce resource (colors) between different operators (vertices) can be modeled as a GVCP. The GVCP is NP-hard [8].

Given a positive integer k corresponding to the number of colors, a k-coloring of a given graph G = (V, E) is a function c that assigns to each vertex an integer between 1 and k as follows:

c : V → {1, 2, ..., k}, v ↦ c(v)

The value c(v) is called the color of vertex v. The vertices assigned to the same color i define a color class, denoted V_i. An equivalent view is to consider a k-coloring as a partition of V into k subsets of vertices: V = V_1 ∪ V_2 ∪ ... ∪ V_k.

We recall some definitions :

  • a k-coloring is called a legal or proper k-coloring if it respects the following binary constraints: c(u) ≠ c(v) for every edge (u, v) ∈ E. Otherwise the k-coloring is called non-legal or non-proper; edges (u, v) ∈ E such that c(u) = c(v) are called conflicting edges, and u and v conflicting vertices.

  • A k-coloring is a complete coloring because a color is assigned to every vertex; if some vertices may remain uncolored, the coloring is said to be partial.

  • An independent set or stable set is a set of vertices, no two of which are adjacent. It is possible to assign the same color to all the vertices of an independent set without producing any conflicting edge. Finding a partition of the graph into a minimum number of independent sets is therefore equivalent to the GVCP.

The k-coloring problem - finding a proper k-coloring of a given graph - is NP-complete [9] for k ≥ 3. The best performing exact algorithms are generally not able to find a proper k-coloring of random graphs in reasonable time once the number of vertices becomes large [10, 11, 12]; exact approaches succeed on only a few larger graphs [13]. In the general case, for large graphs, one uses heuristics that partially explore the search-space to occasionally find a proper k-coloring in a reasonable time frame. However, this partial search does not guarantee that a better solution does not exist. Heuristics find only an upper bound of χ(G) by successively solving the k-coloring problem with decreasing values of k.

This paper proposes two versions of a hybrid metaheuristic algorithm, denoted HEAD’ and HEAD, integrating a tabu search procedure with an evolutionary algorithm for the -coloring problem. This algorithm is built on the well-known Hybrid Evolutionary Algorithm (HEA) of Galinier and Hao [14]. However, HEAD is characterized by two original aspects: the use of a population of only two individuals and an innovative way to manage the diversity. This new simple approach of memetic algorithms provides excellent results on DIMACS benchmark graphs.

The organization of this paper is as follows. Section 2 reviews related work and methods proposed in the literature for graph coloring, and focuses on some heuristics reused in HEAD. Section 3 describes our memetic algorithm HEAD for the graph k-coloring problem. The experimental results are presented in Section 4. Section 5 analyzes why HEAD obtains significantly better results than HEA and investigates some of the impacts of diversification. Finally, we present the conclusions of this study and discuss possible future research in Section 6.

2 Related works

Comprehensive surveys on the GVCP can be found in [15, 16, 17]. The first two of these studies classify heuristics according to the chosen search-space. The Variable Space Search of [18] is innovative and didactic because it works with three different search-spaces. Another, more classical, means of classifying the different methods is to consider how they explore the search-space; three types of heuristics are then defined: constructive methods, local searches and population-based approaches.

We recall some important mechanisms of the TabuCol and HEA algorithms because our algorithm HEAD shares common features with them. Moreover, we briefly present some aspects of the new Quantum Annealing algorithm for graph coloring, denoted QA-col [19, 20, 21], which has produced, since 2012, most of the best known colorings on the DIMACS benchmark.

2.1 TabuCol

In 1987, Hertz and de Werra [22] presented the TabuCol algorithm, one year after Fred Glover introduced tabu search. This algorithm, which solves k-coloring problems, was enhanced in 1999 by [14] and in 2008 by [18]. The three basic features of this local search algorithm are as follows:

  • Search-Space and Objective Function: the algorithm follows a k-fixed penalty strategy. This means that the number of colors k is fixed and non-proper colorings are taken into account. The aim is to find a coloring that minimizes the number of conflicting edges under the constraints that the number of colors is fixed and the coloring is complete (see [16] for more details on the different strategies used in graph coloring).

  • Neighborhood: a k-coloring is a neighbor of another k-coloring if they differ by the color of exactly one conflicting vertex. This move is called a critical 1-move. A 1-move is characterized by an integer couple (v, i) where v is the vertex whose color changes and i is its new color. Therefore the neighborhood size depends on the number of conflicting vertices.

  • Move Strategy: the move strategy is the standard tabu search strategy. At each iteration, one of the best neighbors that is not inside the tabu list is chosen, even if it worsens the objective function. Note that the whole neighborhood is explored. If there are several best moves, one of them is chosen at random. The tabu list does not store every already-visited solution, because this would be computationally expensive. It is more efficient to place only the reverse moves inside the tabu list. Indeed, the aim is to prevent returning to previous solutions, and this goal can be reached by forbidding the reverse moves during a given number of iterations (the tabu tenure). The tabu tenure is dynamic: it depends on the neighborhood size. A basic aspiration criterion is also implemented: it accepts a tabu move to a k-coloring which has a better objective function than the best k-coloring encountered so far.

Data structures have a major impact on algorithm efficiency, constituting one of the main differences between the Hertz and de Werra version of TabuCol [22] and the Galinier and Hao version [14]. Checking whether a 1-move is tabu and updating the tabu list are operations performed in constant time. TabuCol also uses an incremental evaluation [23]: the objective function of the neighbors is not computed from scratch; only the difference between the two solutions is computed. This is a very important feature for local search efficiency. Finding the best 1-move then corresponds to finding the maximum value of an integer matrix. An efficient implementation of these incremental data structures is well explained in [15].

Another benefit of this version of TabuCol is that it has only two parameters, α and A, to adjust in order to control the tabu tenure, tt, by:

tt = α · nc(s) + Random(A)

where nc(s) is the number of conflicting vertices in the current solution s and Random(A) is a random integer drawn uniformly from {0, ..., A − 1}. Moreover, [14] demonstrated on a very large number of instances that, with the same setting (a random integer inside [0, 9], i.e. A = 10, and α = 0.6), TabuCol obtains very good k-colorings. Indeed, one of the main disadvantages of heuristics is that the number of parameters to set is high and difficult to adjust. This version of TabuCol is very robust. Thus we retained the setting of [14] in all our tests.
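The sketch below (ours, not the authors' implementation) illustrates how the incremental structures and the dynamic tabu tenure recalled above can be maintained; the member names (gamma, tabu) and the constants A = 10 and α = 0.6 follow the description in the text, everything else is illustrative.

    // Hedged sketch of TabuCol's core structures:
    // gamma[v][c] = number of neighbours of v currently coloured c (incremental evaluation),
    // tabu[v][c]  = first iteration at which re-assigning colour c to v is allowed again.
    #include <vector>
    #include <random>

    struct TabuColSketch {
        int n = 0, k = 0;
        std::vector<std::vector<int>> adj;        // adjacency lists
        std::vector<int> col;                     // current (possibly non-proper) k-coloring
        std::vector<std::vector<int>> gamma;      // gamma[v][c]
        std::vector<std::vector<long long>> tabu; // tabu[v][c]
        std::mt19937 rng{12345};

        // Conflicts of v are read directly from gamma: no recomputation from scratch.
        int conflictsOf(int v) const { return gamma[v][col[v]]; }

        // Apply the 1-move (v, c): mark the reverse move tabu with the dynamic tenure
        // tt = Random(10) + 0.6 * (number of conflicting vertices), then update gamma.
        void doMove(int v, int c, long long iter, int nbConflictingVertices) {
            int old = col[v];
            std::uniform_int_distribution<int> rnd(0, 9);                            // A = 10
            long long tenure = rnd(rng) + (long long)(0.6 * nbConflictingVertices);  // alpha = 0.6
            tabu[v][old] = iter + tenure;
            for (int u : adj[v]) { --gamma[u][old]; ++gamma[u][c]; }                 // incremental update
            col[v] = c;
        }
    };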

2.2 Memetic Algorithms for graph coloring and HEA

Memetic Algorithms (MAs) [24] are hybrid metaheuristics using a local search algorithm inside a population-based algorithm. They can also be viewed as specific Evolutionary Algorithms (EAs) in which all individuals of the population are local minima (of a specific neighborhood). In an MA, the mutation of the EA is replaced by a local search algorithm. It is very important to note that most of the running time of an MA is spent in the local search. These hybridizations combine the benefits of population-based methods, which are better for diversification by means of a crossover operator, and local search methods, which are better for intensification.

In graph coloring, the Hybrid Evolutionary Algorithm (HEA) of Galinier and Hao [14] is an MA; the mutation of the EA is replaced by the tabu search TabuCol. HEA is one of the best algorithms for solving the GVCP; from 1999 until 2012, it provided most of the best results for DIMACS benchmark graphs [25], particularly for difficult graphs such as <dsjc500.5> and <dsjc1000.5> (see Table 1). These results were obtained with a population of 10 individuals.

The crossover used in HEA is called the Greedy Partition Crossover (GPX). The two main principles of GPX are: 1) a coloring is a partition of vertices into color classes and not an assignment of colors to vertices, and 2) large color classes should be transmitted to the child. Figure 1 gives an example of GPX for a problem with three colors (red, blue and green) and 10 vertices (A, B, C, D, E, F, G, H, I and J). The first step is to transmit to the child the largest color class of the first parent. If there are several largest color classes, one of them is chosen at random. After withdrawing those vertices from the second parent, one proceeds to step 2, where the largest color class of the second parent is transmitted to the child. This process is repeated, alternating between the parents, until k color classes have been built. Most probably, some vertices of the child solution are still uncolored. The final step is to add those vertices to the color classes at random. Notice that GPX is asymmetrical: the order of the parents is important; starting the crossover with parent 1 or parent 2 can produce very different offspring. Notice also that GPX is a random crossover: applying GPX twice to the same parents does not produce the same offspring. The final step is very important because it produces many conflicts. Indeed, if the two parents have very different structures (in terms of color classes), then a large number of vertices remain uncolored before the final step, and there are many conflicting edges in the offspring (cf. figure 4). We investigate some modifications of GPX in section 5.

Figure 1: An example of GPX crossover for a graph of 10 vertices (A, B, C, D, E, F, G, H, I and J) and three colors (red, blue and green). This example comes from [14].
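For concreteness, here is a hedged sketch of the GPX construction described above; the Partition type and the tie-breaking rule (first largest class rather than a random one) are simplifications of ours, not the authors' implementation.

    // Sketch of GPX: alternately transmit the largest remaining colour class of each parent,
    // then place the still-uncoloured vertices into random classes.
    #include <vector>
    #include <set>
    #include <random>

    using Partition = std::vector<std::set<int>>;   // Partition[c] = vertices of colour class c

    Partition gpx(Partition a, Partition b, int k, int n, std::mt19937& rng) {
        Partition child(k);
        std::vector<bool> colored(n, false);
        Partition* parents[2] = {&a, &b};
        for (int step = 0; step < k; ++step) {
            Partition& donor = *parents[step % 2];           // alternate between the two parents
            int best = 0;                                    // largest remaining class (ties: first one)
            for (int c = 1; c < k; ++c)
                if (donor[c].size() > donor[best].size()) best = c;
            child[step] = donor[best];                       // transmit the class to the child
            for (int v : child[step]) colored[v] = true;
            for (auto* p : parents)                          // withdraw those vertices from both parents
                for (auto& cls : *p)
                    for (int v : child[step]) cls.erase(v);
        }
        std::uniform_int_distribution<int> randomClass(0, k - 1);
        for (int v = 0; v < n; ++v)                          // final step: random classes, creating conflicts
            if (!colored[v]) child[randomClass(rng)].insert(v);
        return child;
    }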

2.3 QA-col: Quantum Annealing for graph coloring

In 2012 Olawale Titiloye and Alan Crispin [19, 20, 21] proposed a Quantum Annealing algorithm for graph coloring, denoted QA-col. QA-col produces most of the best-known colorings for the DIMACS benchmark. In order to achieve this level of performance, QA-col is based on parallel computing. We briefly present some aspects of this new type of algorithm.

In a standard Simulated Annealing algorithm (SA), the probability of accepting a candidate solution is managed through a temperature criterion. The value of the temperature decreases during the SA iterations. Like an MA, a Quantum Annealing (QA) algorithm is a population-based algorithm, but it does not perform crossovers and its local search is an SA. The only interaction between the individuals of the population occurs through a specific local attraction-repulsion process. The SA used in the QA-col algorithm follows a k-fixed penalty strategy like TabuCol: the individuals are non-proper k-colorings. The objective function of each SA minimizes a linear combination of the number of conflicting edges and a given population diversity criterion, as detailed below. The neighborhood used is defined by critical 1-moves, as in TabuCol. More precisely, in QA-col, the k-colorings of the population are arbitrarily ordered in a ring topology: each k-coloring has two neighbors associated with it. The second term of the objective function (called Hamiltonian, cf. equation (1) of [19]) can be seen as a diversity criterion based on a specific distance applicable to partitions. Given two k-colorings (i.e. partitions) c1 and c2, the distance between c1 and c2, which we call the pairwise partition distance, is the following:

d(c1, c2) = Σ over all pairs of distinct vertices {u, v} of [ [c1(u) = c1(v)] ⊕ [c2(u) = c2(v)] ]

where ⊕ is the XOR operation and [P] is the Iverson bracket: [P] = 1 if P is true and 0 otherwise. Then, given one k-coloring c of the population, the diversity criterion is defined as the sum of the pairwise partition distances between c and its two neighbors in the ring topology; this sum is zero only when c and its two neighbors are identical partitions. The value of this diversity is integrated into the objective function of each SA. As with the temperature, if the distance increases, there will be a higher probability that the solution will be accepted (attractive process). If the distance decreases, then there will be a lower probability that the solution will be accepted (repulsive process). Therefore, in QA-col, the only interaction between the k-colorings of the population is realized through this distance process.
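A minimal sketch of this pairwise partition distance, under the assumption (ours, reconstructed above) that it counts vertex pairs grouped together in one k-coloring but separated in the other:

    // Count vertex pairs that are in the same class in exactly one of the two k-colorings.
    #include <vector>
    #include <cstddef>

    long long pairwisePartitionDistance(const std::vector<int>& c1, const std::vector<int>& c2) {
        long long d = 0;
        for (std::size_t u = 0; u < c1.size(); ++u)
            for (std::size_t v = u + 1; v < c1.size(); ++v) {
                bool same1 = (c1[u] == c1[v]);
                bool same2 = (c2[u] == c2[v]);
                if (same1 != same2) ++d;   // XOR of the two Iverson brackets
            }
        return d;
    }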

Although these approaches are very efficient, the reasons for this efficiency are difficult to assess. They use many parameters and several intensification and diversification operators, and thus the benefit of each ingredient is not easily evaluated. Our approach has been to identify which elements of HEA are the most significant in order to define a more efficient algorithm.

3 Head: Hybrid Evolutionary Algorithm in Duet

The basic components of HEA are the TabuCol algorithm, which is a very powerful local search for intensification, and the GPX crossover, which adds some diversity. The intensification/diversification balance is difficult to achieve. In order to simplify the numerous parameters involved in EAs, we have chosen to consider a population with only two individuals. We present two versions of our algorithm denoted HEAD’ and HEAD for HEA in Duet.

3.1 First hybrid algorithm: HEAD’

Algorithm 1 describes the pseudo code of the first version of the proposed algorithm, denoted HEAD’.

Input: k, the number of colors; IterTC, the number of TabuCol iterations.
Output: the best k-coloring found: best
1   p1, p2 ← init()   /* initialize p1 and p2 with random k-colorings */
2   best ← p1
3   do
4       c1 ← GPX(p1, p2)
5       c2 ← GPX(p2, p1)
6       p1 ← TabuCol(c1, IterTC)
7       p2 ← TabuCol(c2, IterTC)
8       best ← saveBest(p1, p2, best)
9   while nbConflicts(best) > 0 and p1 ≠ p2
Algorithm 1 - HEAD': first version of HEAD (HEA in Duet) for the k-coloring problem

This algorithm can be seen as two parallel TabuCol algorithms which periodically interact by crossover.

After randomly initializing the two solutions (with the init() function), the algorithm repeats a loop of instructions until a stop criterion occurs. First, we introduce some diversity with the crossover operator, then the two offspring c1 and c2 are improved by means of the TabuCol algorithm. Next, we record the best solution and we systematically replace the parents by the two children. An iteration of this algorithm is called a generation. The main parameter of TabuCol is IterTC, the number of iterations performed by the algorithm; the other TabuCol parameters, α and A, are used to define the tabu tenure and are considered fixed in our algorithm. Algorithm 1 stops either because a legal k-coloring is found (nbConflicts(best) = 0) or because the two k-colorings are equal in terms of the set-theoretic partition distance (cf. Section 5).

A major risk is a premature convergence of HEAD'. Algorithm 1 sometimes stops too quickly: the two individuals become equal before a legal coloring is found. It is then necessary to reintroduce diversity into the population. In conventional EAs, search-space exploration is largely provided by the size of the population: the greater the size, the greater the search diversity. In the next section we propose an alternative to the population size in order to reinforce diversification.

3.2 Improved hybrid algorithm: HEAD

Input: k, the number of colors; IterTC, the number of TabuCol iterations; IterCycle, the number of generations in one cycle.
Output: the best k-coloring found: best
1   p1, p2, elite1, elite2 ← init()   /* initialize with random k-colorings */
2   best ← p1
3   do
4       c1 ← GPX(p1, p2)
5       c2 ← GPX(p2, p1)
6       p1 ← TabuCol(c1, IterTC)
7       p2 ← TabuCol(c2, IterTC)
8       elite1 ← saveBest(p1, p2, elite1)   /* best k-coloring of the current cycle */
9       best ← saveBest(elite1, best)
10      if the current cycle of IterCycle generations is over then
11          p1 ← elite2   /* best k-coloring of the previous cycle */
12          elite2 ← elite1
13          elite1 ← init()
14  while nbConflicts(best) > 0 and p1 ≠ p2
Algorithm 2 - HEAD: second version of HEAD with two extra elite solutions

Algorithm 2 summarizes the second version of our algorithm, simply denoted HEAD. We add two other candidate solutions (similar to elite solutions), elite1 and elite2, in order to reintroduce some diversity into the duet. Indeed, after a given number of generations, the two individuals of the population become increasingly similar within the search-space. To maintain the population diversity, the idea is to replace one of the two candidate solutions by a solution previously encountered by the algorithm. We define one cycle as a fixed number of generations (IterCycle). Solution elite1 is the best solution found during the current cycle and solution elite2 the best solution found during the previous cycle. At the end of each cycle, the solution elite2 replaces one of the population individuals. Figure 2 presents a graphic view of Algorithm 2.

Figure 2: Diagram of HEAD

This elitist mechanism provides relevant behavior to the algorithm, as can be observed in the computational results of section 4.2. Indeed, elite solutions have the best fitness value of each cycle, which is clearly interesting in terms of intensification. Moreover, when the elite solution is reintroduced, it is generally different enough from the other individuals to be relevant in terms of diversification. In the next section, we show how the use of this elitist mechanism can enhance the results.

4 Experimental Results

In this section we present the results obtained with the two versions of the proposed memetic algorithm. To validate the proposed approach, the results of HEAD are compared with the results obtained by the best methods currently known.

4.1 Instances and Benchmarks

Test instances are selected among the most studied graphs since the 1990s, which are known to be very difficult (the second DIMACS challenge of 1992-1993 [25], ftp://dimacs.rutgers.edu/pub/challenge/graph/benchmarks/color/).

During this study we focus on some families of graphs from the DIMACS benchmark: <dsjc>, <dsjr>, <flat>, <r>, <le> and <C>, which are randomly or quasi-randomly generated graphs. <dsjcn.d> and <Cn.d> graphs are random graphs with n vertices in which each vertex is connected to an average of d·n other vertices; d is the graph density. The chromatic number of these graphs is unknown. Likewise for <rn.d[c]> and <dsjrn.d[c]> graphs, which are geometric random graphs with n vertices and a density equal to d; [c] denotes the complement of such a graph. <flat> and <le> graphs have another structure: they are built with a known chromatic number. The <flatn_k_0> or <len_k[abcd]> graph has n vertices and k is its chromatic number.

4.2 Computational Results

HEAD and HEAD' were coded in C++. The results were obtained with an Intel Core i5 3.30GHz processor (4 cores) and 16GB of RAM. Note that the RAM size has no impact on the calculations: even for large graphs such as <dsjc1000.9> (1000 vertices and a high density of 0.9), memory usage does not exceed 125 MB. The decisive characteristic is the processor speed.

As shown in Section 3, the proposed algorithms make two successive calls to the local search (lines 6 and 7 of Algorithms 1 and 2), one for each child of the current generation. Almost all of the running time is spent performing the local search. The two local searches can be parallelized on a multi-core processor architecture. This is what we have done using the OpenMP API (Open Multi-Processing), which has the advantage of being cross-platform (Linux, Windows, MacOS, etc.) and simple to use. Thus, when an execution time of 15 minutes is reported, the required CPU time is actually close to 30 minutes when using only one processing core.
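The following minimal sketch (ours, with TabuCol stubbed out) illustrates this parallelization of the two TabuCol calls of one generation with OpenMP sections; compile with -fopenmp.

    #include <vector>

    // Stub standing in for the real tabu search; only the parallel structure matters here.
    std::vector<int> TabuCol(std::vector<int> coloring, long iterTC) { return coloring; }

    // One generation: the two children c1 and c2 are improved concurrently on two cores.
    void improveChildrenInParallel(std::vector<int>& p1, std::vector<int>& p2,
                                   std::vector<int> c1, std::vector<int> c2, long iterTC) {
        #pragma omp parallel sections num_threads(2)
        {
            #pragma omp section
            { p1 = TabuCol(c1, iterTC); }   // line 6 of Algorithms 1 and 2
            #pragma omp section
            { p2 = TabuCol(c2, iterTC); }   // line 7 of Algorithms 1 and 2
        }
    }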

Table 1 presents the results of the principal methods known to date for 19 difficult graphs. For each graph, the lowest number of colors found by each algorithm is indicated (an upper bound of χ(G)). For TabuCol [22] the reported results are from [18] (2008), which are better than those of 1987. The most recent algorithms, QA-col (Quantum Annealing for graph coloring [21]) and IECOL (Improving the Extraction and Expansion method for large graph COLoring [26]), provide the best results, but QA-col is based on a cluster of PCs using 10 processing cores simultaneously and IECOL is tailored to large graphs. Note that HEA [14], AmaCol [27], MACOL [28], EXTRACOL [29] and IECOL are also population-based algorithms using TabuCol and the GPX crossover or an improvement of GPX (a multi-parent GPX for MACOL and EXTRACOL; in AmaCol the GPX process is replaced by a selection of color classes among a very large pool of color classes). Only QA-col has another approach, based on several parallel simulated annealing algorithms interacting through an attractive/repulsive process (cf. section 2.3).

Graphs           HEAD   TabuCol     HEA    AmaCol   MACOL   EXTRACOL   IECOL   QA-col
                        [22, 18]    [14]   [27]     [28]    [29]       [26]    [21]
                        1987/2008   1999   2008     2010    2011       2012    2012
dsjc250.5        28     28          28     28       28      -          -       28
dsjc500.1        12     13          -      12       12      -          -       -
dsjc500.5        47     49          48     48       48      -          -       47
dsjc500.9        126    127         -      126      126     -          -       126
dsjc1000.1       20     -           20     20       20      20         20      20
dsjc1000.5       82     89          83     84       83      83         83      82
dsjc1000.9       222    227         224    224      223     222        222     222
r250.5           65     -           -      -        65      -          -       65
r1000.1c         98     -           -      -        98      101        98      98
r1000.5          245    -           -      -        245     249        245     234
dsjr500.1c       85     85          -      86       85      -          -       85
le450_25c        25     26          26     26       25      -          -       25
le450_25d        25     26          -      26       25      -          -       25
flat300_28_0     31     31          31     31       29      -          -       31
flat1000_50_0    50     50          -      50       50      50         50      -
flat1000_60_0    60     60          -      60       60      60         60      -
flat1000_76_0    81     88          83     84       82      82         81      81
C2000.5          146    -           -      -        148     146        145     145
C4000.5          266    -           -      -        272     260        259     259
Table 1: Best colorings found. HEAD is this work; TabuCol is a local search (LS); the other columns are hybrid algorithms, listed with their reference and year.

Table 2 presents the results obtained with HEAD', the first version of HEAD (without elites). This simplest version finds the best known results for most of the studied graphs (13 out of 19); only QA-col (and IECOL for <C> graphs) occasionally finds a solution with fewer colors. The column IterTC indicates the number of iterations of the TabuCol algorithm (this is the stop criterion of TabuCol). This parameter has been determined for each graph after an empirical analysis to find the most suitable value. The column GPX refers to the GPX variant used inside HEAD'. Indeed, in section 5, we define two modifications of the standard GPX (Std): the unbalanced GPX (noted U) and the random GPX (noted R). Note that the choice of the unbalanced or the random crossover is based on the study of the algorithm in the standard mode (standard GPX). If the algorithm needs too many generations to converge, we introduce the unbalanced GPX. Conversely, if the algorithm converges quickly without finding any legal k-coloring, we introduce the random crossover. Section 5 details the modifications of the GPX crossover (section 5.2.1 for the random GPX and section 5.2.2 for the unbalanced GPX).

The column Success evaluates the robustness of the method by providing the success rate: success_runs/total_runs. A successful run is one that finds a legal k-coloring. The average number of generations, or crossovers, performed during one successful run is given by the Cross value. Since each generation performs two TabuCol calls of IterTC iterations each, the total average number of TabuCol iterations performed during one successful HEAD' run is approximately 2 × IterTC × Cross. The column Time indicates the average CPU time, in minutes, of successful runs.
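As an illustration of this accounting (using the figures of Table 2 and the approximation above), the <dsjc1000.5> row with IterTC = 40000 and Cross = 1723 corresponds to roughly 2 × 40000 × 1723 ≈ 1.4 × 10^8 TabuCol iterations for an average successful run of about 10 minutes.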

The HEAD' success rate is rarely 100%, but in case of success the running time is generally very short. The main drawback of HEAD' is that it sometimes converges too quickly: in such cases it cannot find a legal solution before the two individuals of a generation become identical. A first option to correct this rapid convergence is to increase the number of iterations of each TabuCol. A second option is to use the random GPX instead of the standard one (section 5.2.1). However, these options are not sufficient on their own. The second version, HEAD, adds more diversity while also playing an intensifying role.

Instances        k      IterTC      GPX       Success   Cross    Time
dsjc250.5        28     6000        Std       17/20     79       0.01 min
dsjc500.1        12     8000        Std       15/20     158      0.03 min
dsjc500.5        48     8000        Std       9/20      334      0.2 min
dsjc500.9        126    25000       Std       10/20     517      1 min
dsjc1000.1       20     7000        Std       7/20      588      0.2 min
dsjc1000.5       83     40000       Std       16/20     1723     10 min
dsjc1000.9       222    60000       Std       1/20      3711     33 min
dsjc1000.9       223    30000       Std       4/20      1114     5 min
r250.5           65     12000       Std       1/20      33828    12 min
r250.5           65     2000        R(20)     6/20      132773   10 min
r1000.1c         98     65000       Std       1/20      18       0.1 min
r1000.1c         98     25000       R(98)     20/20     130      0.4 min
r1000.5          245    360000      Std       20/20     3636     135 min
r1000.5          245    240000      U(0.98)   17/20     1352     39 min
dsjr500.1c       85     4200000     Std       1/20      1        0.2 min
dsjr500.1c       85     1000        R(85)     13/20     279      0.02 min
le450_25c        25     21000000    Std       20/20     57       38 min
le450_25c        25     300000      U(0.98)   10/20     477      2.4 min
le450_25d        25     21000000    Std       20/20     135      64 min
le450_25d        25     340000      U(0.98)   10/20     317      2 min
flat300_28_0     31     4000        Std       20/20     117      0.02 min
flat1000_50_0    50     130000      Std       20/20     4        0.3 min
flat1000_60_0    60     130000      Std       20/20     9        0.5 min
flat1000_76_0    81     40000       Std       1/20      18577    137 min
flat1000_76_0    82     40000       Std       18/20     1969     11 min
C2000.5          148    140000      Std       10/10     6308     794 min
C4000.5          275    140000      Std       8/10      4091     3496 min
Table 2: Results of HEAD', the first version of the HEAD algorithm (without elites). IterTC is the number of TabuCol iterations per call, GPX the crossover variant, Success the success rate, Cross the average number of generations (crossovers) of successful runs, and Time their average CPU time.

Table 3 shows the results obtained with HEAD. For all the studied graphs except four (<flat300_28_0>, <r1000.5>, <C2000.5> and <C4000.5>), HEAD finds the best known results. Only the Quantum Annealing algorithm, using ten CPU cores simultaneously, and IECOL for large graphs, achieve this level of performance. In particular, <dsjc500.5> is solved with only 47 colors and <flat1000_76_0> with 81 colors.

Instances        k      IterTC      GPX       Success   Cross    Time
dsjc250.5        28     6000        Std       20/20     77       0.01 min
dsjc500.1        12     4000        Std       20/20     483      0.1 min
dsjc500.5        47     8000        Std       2/10000   1517     0.8 min
dsjc500.5        48     8000        Std       20/20     479      0.2 min
dsjc500.9        126    15000       Std       13/20     970      1.2 min
dsjc1000.1       20     3000        Std       20/20     567      0.2 min
dsjc1000.5       82     60000       Std       3/20      8366     48 min
dsjc1000.5       83     40000       Std       20/20     1200     6 min
dsjc1000.9       222    50000       Std       2/20      11662    86 min
dsjc1000.9       223    30000       Std       19/20     2107     10 min
r250.5           65     10000       Std       1/20      34898    13 min
r250.5           65     4000        R(20)     20/20     48918    6.3 min
r1000.1c         98     45000       Std       3/20      42       0.2 min
r1000.1c         98     25000       R(98)     20/20     78       0.24 min
r1000.5          245    360000      Std       20/20     6491     244 min
r1000.5          245    240000      U(0.98)   20/20     1104     25 min
dsjr500.1c       85     4200000     Std       1/20      1        0.2 min
dsjr500.1c       85     400         R(85)     20/20     408      0.02 min
le450_25c        25     22000000    Std       20/20     62       30 min
le450_25c        25     220000      U(0.98)   20/20     885      5 min
le450_25d        25     21000000    Std       20/20     161      90 min
le450_25d        25     220000      U(0.98)   20/20     534      2 min
flat300_28_0     31     4000        Std       20/20     120      0.02 min
flat1000_50_0    50     130000      Std       20/20     5        0.3 min
flat1000_60_0    60     130000      Std       20/20     9        0.5 min
flat1000_76_0    81     60000       Std       3/20      8795     60 min
flat1000_76_0    82     40000       Std       20/20     1052     5 min
C2000.5          146    140000      Std       8/10      6358     281 min
C2000.5          147    140000      Std       10/10     2595     124 min
C4000.5          266    140000      Std       4/10      9034     1923 min
C4000.5          267    140000      Std       8/10      5723     1433 min
Table 3: Results of HEAD, the second version of the algorithm (with elites), with the same column meanings as in Table 2, including the indication of CPU time.

The computation time of HEAD is generally close to that of HEAD', but HEAD is more robust, with a success rate of almost 100%. In particular, the two graphs <dsjc500.5> and <dsjc1000.1>, with 48 and 20 colors respectively, are solved every time, and in less than one CPU minute on average (CPU 3.3GHz). Using a multicore CPU, these instances are solved in less than 30 seconds on average, often in less than 10 seconds. As a comparison, the shortest time reported in the literature for <dsjc1000.1> is 32 minutes for QA-col [20] (2011) with a 3GHz processor, 65 minutes for IECOL (2012) with a 2.8GHz processor, 93 minutes for EXTRACOL [29] (2011) with a 2.8GHz processor and 108 minutes for MACOL [28] (2010) with a 3.4GHz processor.

5 Analysis of diversification

HEAD shares common features with HEA, but it obtains significantly better results with respect to both solution quality and computing time. It is worth analyzing why the new mechanisms introduced with HEAD give rise to such a large change.

A first answer concerns computing time. Almost all of the running time of HEA, AmaCol, MACOL and HEAD is spent in the TabuCol local search. Maintaining a population of 10 k-colorings in the case of HEA (and comparable or larger populations for AmaCol and MACOL) therefore requires more local search time per generation than maintaining only two individuals in HEAD.

In our view, HEAD is not a standard MA, but rather two separate TabuCol algorithms that periodically exchange information. After a given number of iterations, instead of stopping the two TabuCol searches, we reintroduce diversity with the crossover operator GPX. The difficulty is to reintroduce the correct dose of diversity. Indeed, the danger of the crossover is that of completely destroying the solution structure. GPX is a powerful crossover operator compared to others [23] because it transmits the biggest color classes of the two parents, thus keeping a large part of the parents' structures. Very interesting and relevant studies about how to manage diversity for graph coloring heuristics can be found in [30, 31].

In this section we present an analysis of the GPX crossover indicating that it is preferable to have parents that are not too far apart in the search-space, according to the distance presented below (section 5.1).

Several tests are also performed in this section in order to analyze the role of diversification in the HEAD algorithm. The two main mechanisms leading to diversification in HEAD are the GPX crossover and the population update process. In a first set of tests (section 5.2), we slightly modify the dose of diversification in the GPX crossover and analyze the results. In a second set of tests (section 5.3), we focus on the population update process: in HEAD, the two produced children systematically replace both parents, even if they have worse fitness values than their parents. If the replacement is not systematic, the diversification decreases.

5.1 Distance between parents and GPX crossover

GPX crossover is a diversification operator: it generates solutions in numerous uncharted regions of the search-space. However, there is a risk of providing too much diversity, and thus breaking the structure of the current solution. This is the principal failing of basic crossovers used before GPX [23].

An interesting feature of GPX is its ability to explore new areas of the search-space without breaking the structures of the current k-colorings. Many parameters affect the dose of diversity of GPX. One of the most easily identifiable is the distance between the two parents.

The set-theoretic partition distance [14, 32] between two k-colorings c1 and c2 is defined as the least number of 1-move steps (i.e. color changes of single vertices) needed to transform c1 into c2. This distance has to be independent of the permutation of the color classes; therefore, before counting the number of 1-moves, we have to match each color class of c1 with the nearest color class of c2. This problem is a maximum weighted bipartite matching if we consider the color classes of c1 and c2 as the vertices of a bipartite graph: an edge links a color class of c1 with a color class of c2, with a weight equal to the number of vertices shared by those classes. The set-theoretic partition distance is then calculated as D(c1, c2) = n − s(c1, c2), where n is the number of vertices of the initial graph and s(c1, c2) is the value of the matching, i.e. the maximal total number of vertices lying in matched classes of c1 and c2. Figure 3 gives an example of the computation of this distance between two k-colorings. The possible values range from 0 to at most n − n/k; indeed, it is not possible to have two totally different k-colorings.

Figure 3: A graph with 10 vertices (A, B, C, D, E, F, G, H, I and J), three colors (red, blue and green) and two 3-colorings: coloring 1 and coloring 2. We build the weighted bipartite graph whose edge weights are the numbers of vertices shared by the color classes of coloring 1 and coloring 2. The bold lines correspond to the maximum weighted bipartite matching, whose value s(coloring 1, coloring 2) is the maximal total number of vertices lying in matched classes. The set-theoretic partition distance between the two colorings is then D(coloring 1, coloring 2) = 10 − s(coloring 1, coloring 2). This distance is independent of the permutation of the color classes.
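A minimal sketch of this computation, usable for the small example of Figure 3 (k = 3): the k × k overlap matrix is built and the matching value is found by brute force over the k! class permutations; for realistic values of k an exact assignment algorithm (e.g. Hungarian) would replace the permutation loop.

    #include <vector>
    #include <algorithm>
    #include <numeric>

    // c1[v], c2[v] in {0, ..., k-1}; returns D(c1, c2) = n - s(c1, c2).
    int partitionDistance(const std::vector<int>& c1, const std::vector<int>& c2, int k) {
        int n = (int)c1.size();
        std::vector<std::vector<int>> share(k, std::vector<int>(k, 0));
        for (int v = 0; v < n; ++v) ++share[c1[v]][c2[v]];     // overlap of class i of c1 and class j of c2
        std::vector<int> perm(k);
        std::iota(perm.begin(), perm.end(), 0);
        int best = 0;
        do {                                                   // try every matching of colour classes
            int s = 0;
            for (int i = 0; i < k; ++i) s += share[i][perm[i]];
            best = std::max(best, s);
        } while (std::next_permutation(perm.begin(), perm.end()));
        return n - best;                                       // least number of 1-moves from c1 to c2
    }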

If we pick two k-colorings that have very low objective functions but that are very different (in terms of the distance D), then they have a high probability of producing, after crossover, a child with a very high objective function. The danger of the crossover is that of completely destroying the k-coloring structure. On the other hand, two very close k-colorings (in terms of the distance D) produce a child with an almost identical objective function. Figure 4 shows the correlation between the distance separating two k-colorings having the same number of conflicting edges (objective function equal to 40) and the number of conflicting edges of the child produced by the GPX crossover. This chart is obtained on the <dsjc500.5> graph. More precisely, it results from the following steps: 1) First, 100 non-legal k-colorings, called parents, are randomly generated with a fitness (that is, a number of conflicting edges) equal to 40; the TabuCol algorithm is used to generate these 100 parents (TabuCol is stopped when exactly 40 conflicting edges are reached). 2) A GPX crossover is performed on all possible pairs of parents, generating for each pair two new non-legal k-colorings, called children; indeed, GPX is asymmetrical, so the order of the parents matters. In this way, 100 × 99 = 9900 children are generated. 3) We perform steps 1) and 2) twice, so the total number of generated children is 19800. Each point of the chart corresponds to one child. The ordinate axis indicates the fitness of the child. The abscissa axis indicates the distance in the search-space between the two parents of the child. There is a quasi-linear correlation between these two quantities (Pearson correlation coefficient equal to 0.973).

Figure 4: Each point of the chart corresponds to one non-legal k-coloring (a child solution) of the <dsjc500.5> graph, produced by the GPX crossover of two other non-legal k-colorings (the parent solutions). Both parents are randomly generated with the same fitness value (the same number of conflicting edges). Their distance in the search-space is indicated on the abscissa axis. The number of conflicting edges of the child is indicated on the ordinate axis.

Moreover, Figure 4 shows that a crossover never improves a k-coloring. As stated in section 2.2, the last step of GPX produces many conflicts. Indeed, if the two parents are far apart in terms of the distance D, then a large number of vertices remain uncolored at the final step of GPX. Those vertices are then randomly added to the color classes, producing many conflicting edges in the offspring. This explains why, in an MA, a local search always follows a crossover operator.

Figure 5 presents the evolution of the objective function (i.e. the number of conflicting edges) of the two k-colorings of the population, p1 and p2, at each generation of HEAD. It also indicates the distance between the two k-colorings. This figure is obtained by considering one typical run that finds a k-coloring of the <dsjc500.5> graph. The objective functions of the two k-colorings remain very close during the whole run: the average difference over the generations is very small, with a low variance. Figure 5 also shows a significant correlation between the quality of the two k-colorings (in terms of fitness values) and the distance between them before the GPX crossover: the Pearson correlation coefficient equals 0.927 (respectively 0.930) between the fitness of p1 (resp. p2) and the distance D(p1, p2). These plots give the main key to understanding why HEAD is more effective than HEA: with only two individuals of approximately equal objective function values, the quality of the search and the distance between the individuals remain strongly coupled throughout the run. The same level of correlation cannot be obtained with a population of 10 individuals using HEA, except with a sophisticated sharing process.

Figure 5: The red and green lines indicate the number of conflicting edges of the two k-colorings of the population, p1 and p2 (left ordinate axis), at each generation of one run of HEAD applied to the <dsjc500.5> graph (abscissa axis). The blue line indicates the distance D(p1, p2) between the two k-colorings during the run (right ordinate axis).

Diversity is necessary when an algorithm is trapped in a local minimum, but should be avoided otherwise. The next subsections analyze several levers that can be used to increase or decrease the diversity in HEAD.

5.2 Dose of diversification in the GPX crossover

Some modifications are performed on the GPX crossover in order to increase (as for the first test) or decrease (as for the second test) the dose of diversification within this operator.

5.2.1 Test on GPX with increased randomness: random draw of a number of color classes

In order to increase the level of randomness within the GPX crossover, we randomize the GPX. It should be remembered (cf. section 2.2) that at each step of GPX, the selected parent transmits its largest color class to the child. In this test, we begin by randomly transmitting p color classes chosen from the parents to the child; after those p steps, we return to alternately transmitting the largest color class of each parent (p is the random level). If p = 0, the crossover is the same as the initial GPX. As p increases, the randomness and the diversity also increase. To evaluate this modification of the crossover, we count the cumulative number of TabuCol iterations that one HEAD run requires in order to find a legal k-coloring. For each p-value, the algorithm is run ten times in order to produce more robust results. For this test, we consider the k-coloring problem on the <dsjc500.5> graph of the DIMACS benchmark. Figure 6 shows in abscissa the random level p and in ordinate the average number of iterations required to find a legal k-coloring; a sketch of the modified class-selection rule is given after the discussion of Figure 6.

Figure 6: Average number of iterations required to find a legal k-coloring for the <dsjc500.5> graph as a function of the randomness level; abscissa: p, the randomness level; ordinate: average number of iterations.

Note that p is at most k, where k is the number of colors, but we stop the computation before this limit because, beyond a certain random level, the algorithm no longer finds a k-coloring within an acceptable computing-time limit. This means that when we introduce too much diversification, the algorithm cannot find a legal solution. Indeed, for a high p-value, the crossover no longer transmits the good features of the parents, and the child behaves like a random initial solution. Below this threshold, the algorithm finds a legal coloring in more or less 10 million iterations, and it is not easy to decide which p-value obtains the quickest result. However, this parameter enables an increase of diversity in HEAD. This version of GPX is called the random GPX and is noted R(p) in Tables 2 and 3. It is used for three graphs, <r250.5>, <r1000.1c> and <dsjr500.1c>, because the standard GPX does not operate effectively on them. The fact that these three graphs are more structured than the others may explain why the random GPX works better.
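A hedged sketch of the corresponding class-selection rule, reusing the Partition type of the GPX sketch in section 2.2. Whether the donor parent of the random phase is itself drawn at random is not specified in the text; here the alternation between parents is kept and only the class choice is randomized.

    #include <random>
    #include <set>
    #include <vector>

    using Partition = std::vector<std::set<int>>;   // as in the GPX sketch of section 2.2

    // Returns the index of the class that the donor parent transmits at the given step:
    // a random non-empty class during the first p steps, the largest class afterwards.
    int pickClassRandomGPX(const Partition& donor, int step, int p, std::mt19937& rng) {
        if (step < p) {                               // random phase (assumes a non-empty class remains)
            std::uniform_int_distribution<int> rc(0, (int)donor.size() - 1);
            int c;
            do { c = rc(rng); } while (donor[c].empty());
            return c;
        }
        int best = 0;                                 // standard GPX rule: largest class
        for (int c = 1; c < (int)donor.size(); ++c)
            if (donor[c].size() > donor[best].size()) best = c;
        return best;
    }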

5.2.2 Test on GPX with decreased randomness: imbalanced crossover

In the standard GPX, the roles of the two parents are balanced: they alternately transmit their largest color class to the child. Of course, the parent that transmits its largest class first has more importance than the other; this is why GPX is an asymmetric crossover. In this test, we give a higher importance to one of the parents. At each step of the crossover, we randomly draw the parent that transmits its largest color class, with a different probability for each parent. We introduce q, the probability of selecting the first parent; 1 − q is the probability of selecting the second parent. For example, if q = 0.75, then, at each step of GPX, parent 1 has a 3 in 4 chance of being selected to transmit its largest color class (parent 2 has a 1 in 4 chance). If q = 0.5, both parents have an equal probability (a fifty-fifty chance of being chosen); this almost corresponds to the standard GPX. If q = 1, the child is a copy of parent 1; there are no more crossovers, and HEAD becomes a TabuCol with two initial solutions. As q moves away from 0.5, the randomness and the diversity brought by the crossover decrease. Figure 7 shows in abscissa the probability q and in ordinate the average number of iterations required to find a legal k-coloring (as in the previous test); a sketch of the modified parent-selection rule is given at the end of this subsection.

Figure 7: Average number of iterations required to find a legal k-coloring for the <dsjc500.5> graph with the imbalanced crossover; abscissa: q, the probability of selecting the first parent at each step of GPX; ordinate: average number of iterations.

It can first be remarked that the results are clearly symmetrical with respect to q = 0.5. The best results are obtained away from q = 0.5; the setting retained in Tables 2 and 3 is U(0.98). The impact of this parameter is weaker than that of the previous one: the control of the reduction in diversification is finer. This version of GPX is called the unbalanced GPX and is noted U(q) in Tables 2 and 3. It is used for three graphs, <le450_25c>, <le450_25d> and <r1000.5>, since the standard GPX does not operate effectively on them.
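A minimal sketch of the corresponding parent-selection rule, plugging into the GPX sketch of section 2.2 (function and variable names are illustrative):

    #include <random>

    // Unbalanced GPX U(q): at each step, parent 1 donates its largest class with probability q,
    // parent 2 with probability 1 - q (q = 0.5 almost corresponds to the standard GPX).
    int pickDonorUnbalancedGPX(double q, std::mt19937& rng) {
        std::bernoulli_distribution firstParent(q);
        return firstParent(rng) ? 0 : 1;   // index of the donating parent
    }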

5.3 Test on parent replacement: systematic or not

In HEAD, the two children systematically replace both parents, even if they have worse fitness values than their parents. This replacement rule is modified in this test. If the fitness value of a child is lower than that of its parent, the child automatically replaces the parent. Otherwise, we introduce a probability r corresponding to the probability of replacing the parent even though the child is worse. If r = 1, the replacement is systematic, as in standard HEAD; if r = 0, the replacement is performed only if the child is better (lower fitness value). When the r-value decreases, the diversity also decreases. Figure 8 shows in abscissa the parent replacement probability r and in ordinate the average number of iterations required to find a legal k-coloring (as in the previous tests); a sketch of the rule follows.
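A minimal sketch of this acceptance rule (names are ours):

    #include <random>

    // Returns true when the child should replace its parent: always when it is strictly better,
    // otherwise only with probability r (r = 1 reproduces the systematic replacement of HEAD).
    bool acceptChild(int childConflicts, int parentConflicts, double r, std::mt19937& rng) {
        if (childConflicts < parentConflicts) return true;
        std::bernoulli_distribution accept(r);
        return accept(rng);
    }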

Figure 8: Average number of iterations required to find a legal k-coloring for the <dsjc500.5> graph as a function of the parent replacement policy; abscissa: r, the parent replacement probability; ordinate: average number of iterations.

If the parent replacement probability r is zero or very low, then more time is required to produce the results: the absence or lack of diversification penalizes the search. However, over a large range of higher r-values, it is not possible to single out a best policy for this criterion. The dramatic change in the behavior of HEAD occurs very quickly at low r-values.

These studies enable a clearer understanding of the role of the diversification operators (crossover and parent updating).

The criteria presented here, such as the randomness level of the crossover or its imbalance level, have shown their efficiency on some graphs. These modifications could be successfully applied in future algorithms in order to manage diversity dynamically.

6 Conclusion

We proposed a new algorithm for the graph coloring problem, called HEAD. This memetic algorithm combines the local search algorithm TabuCol, as an intensification operator, with the crossover operator GPX, as a way to escape from local minima. Its originality is that it works with a population of only two individuals. In order to prevent premature convergence, the proposed approach introduces an innovative way of managing diversification based on elite solutions.

The computational experiments, carried out on a set of challenging DIMACS graphs, show that HEAD produces excellent results, such as 222-colorings for <dsjc1000.9>, 81-colorings for <flat1000_76_0> and even 47-colorings for <dsjc500.5> and 82-colorings for <dsjc1000.5>, which had until now only been found by quantum annealing [21] with massive multi-CPU parallelism. The results achieved by HEAD suggest that this scheme could be successfully applied to other problems for which a stochastic or asymmetric crossover can be defined.

We performed an in-depth analysis of the crossover operator in order to better understand its role in the diversification process. Some interesting criteria have been identified, such as the crossover's levels of randomness and imbalance. These criteria pave the way for further research.

References

  • [1] K. Aardal, S. Hoesel, A. Koster, C. Mannino, A. Sassano, Models and solution techniques for frequency assignment problems, Quarterly Journal of the Belgian, French and Italian Operations Research Societies 1 (4) (2003) 261–317.
  • [2] M. Dib, A. Caminada, H. Mabed, Frequency management in Radio military Networks, in: INFORMS Telecom 2010, 10th INFORMS Telecommunications Conference, Montreal, Canada, 2010.
  • [3] F. T. Leighton, A Graph Coloring Algorithm for Large Scheduling Problems, Journal of Research of the National Bureau of Standards 84 (6) (1979) 489–506.
  • [4] N. Zufferey, P. Amstutz, P. Giaccari, Graph Colouring Approaches for a Satellite Range Scheduling Problem, Journal of Scheduling 11 (4) (2008) 263 – 277.
  • [5] D. C. Wood, A Technique for Coloring a Graph Applicable to Large-Scale Timetabling Problems, Computer Journal 12 (1969) 317–322.
  • [6] N. Barnier, P. Brisset, Graph Coloring for Air Traffic Flow Management, Annals of Operations Research 130 (1-4) (2004) 163–178.
  • [7] C. Allignol, N. Barnier, A. Gondran, Optimized Flight Level Allocation at the Continental Scale, in: International Conference on Research in Air Transportation (ICRAT 2012), Berkeley, California, USA, 22-25/05/2012, 2012.
  • [8] M. R. Garey, D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, CA, USA, 1979.
  • [9] R. Karp, Reducibility among combinatorial problems, in: R. E. Miller, J. W. Thatcher (Eds.), Complexity of Computer Computations, Plenum Press, New York, USA, 1972, pp. 85–103.
  • [10] D. S. Johnson, C. R. Aragon, L. A. McGeoch, C. Schevon, Optimization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning, Operations Research 39 (3) (1991) 378–406.
  • [11] N. Dubois, D. de Werra, Epcot: An efficient procedure for coloring optimally with Tabu Search, Computers & Mathematics with Applications 25 (10–11) (1993) 35–45.
  • [12] E. Malaguti, M. Monaci, P. Toth, An Exact Approach for the Vertex Coloring Problem, Discrete Optimization 8 (2) (2011) 174–190.
  • [13] S. Held, W. Cook, E. Sewell, Safe lower bounds for graph coloring, Integer Programming and Combinatoral Optimization (2011) 261–273.
  • [14] P. Galinier, J.-K. Hao, Hybrid evolutionary algorithms for graph coloring, Journal of Combinatorial Optimization 3 (4) (1999) 379–397.
  • [15] P. Galinier, A. Hertz, A survey of local search methods for graph coloring, Computers & Operations Research 33 (2006) 2547–2562.
  • [16] P. Galinier, J.-P. Hamiez, J.-K. Hao, D. C. Porumbel, Recent Advances in Graph Vertex Coloring, in: I. Zelinka, V. Snásel, A. Abraham (Eds.), Handbook of Optimization, Vol. 38 of Intelligent Systems Reference Library, Springer, 2013, pp. 505–528.
  • [17] E. Malaguti, P. Toth, A survey on vertex coloring problems, International Transactions in Operational Research 17 (1) (2010) 1–34.
  • [18] A. Hertz, M. Plumettaz, N. Zufferey, Variable Space Search for Graph Coloring, Discrete Applied Mathematics 156 (13) (2008) 2551 – 2560.
  • [19] O. Titiloye, A. Crispin, Graph Coloring with a Distributed Hybrid Quantum Annealing Algorithm, in: J. O’Shea, N. Nguyen, K. Crockett, R. Howlett, L. Jain (Eds.), Agent and Multi-Agent Systems: Technologies and Applications, Vol. 6682 of Lecture Notes in Computer Science, Springer Berlin / Heidelberg, 2011, pp. 553–562.
  • [20] O. Titiloye, A. Crispin, Quantum annealing of the graph coloring problem, Discrete Optimization 8 (2) (2011) 376–384.
  • [21] O. Titiloye, A. Crispin, Parameter Tuning Patterns for Random Graph Coloring with Quantum Annealing, PLoS ONE 7 (11) (2012) e50060.
  • [22] A. Hertz, D. de Werra, Using Tabu Search Techniques for Graph Coloring, Computing 39 (4) (1987) 345–351.
  • [23] C. Fleurent, J. Ferland, Genetic and Hybrid Algorithms for Graph Coloring, Annals of Operations Research 63 (1996) 437–464.
  • [24] J.-K. Hao, Memetic Algorithms in Discrete Optimization, in: F. Neri, C. Cotta, P. Moscato (Eds.), Handbook of Memetic Algorithms, Vol. 379 of Studies in Computational Intelligence, Springer, 2012, pp. 73–94.
  • [25] D. S. Johnson, M. Trick (Eds.), Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, 1993, Vol. 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, American Mathematical Society, Providence, RI, USA, 1996.
  • [26] J.-K. Hao, Q. Wu, Improving the extraction and expansion method for large graph coloring, Discrete Applied Mathematics 160 (16–17) (2012) 2397–2407.
  • [27] P. Galinier, A. Hertz, N. Zufferey, An adaptive memory algorithm for the k-coloring problem, Discrete Applied Mathematics 156 (2) (2008) 267–279.
  • [28] Z. Lü, J.-K. Hao, A memetic algorithm for graph coloring, European Journal of Operational Research 203 (1) (2010) 241–250.
  • [29] Q. Wu, J.-K. Hao, Coloring large graphs based on independent set extraction, Computers & Operations Research 39 (2) (2012) 283–290.
  • [30] D. C. Porumbel, J.-K. Hao, P. Kuntz, An evolutionary approach with diversity guarantee and well-informed grouping recombination for graph coloring, Computers & Operations Research 37 (2010) 1822–1832.
  • [31] R. Lewis, Graph Coloring and Recombination, in: J. Kacprzyk, W. Pedrycz (Eds.), Handbook of Computational Intelligence, Springer Berlin Heidelberg, Berlin, Heidelberg, 2015, pp. 1239–1254.
  • [32] D. Gusfield, Partition-distance: A problem and class of perfect graphs arising in clustering, Information Processing Letters 82 (3) (2002) 159–164.