A Coevolutionary Variable Neighborhood Search Algorithm for Discrete Multitasking (CoVNS): Application to Community Detection over Graphs



The main goal of the multitasking optimization paradigm is to solve multiple concurrent optimization tasks simultaneously through a single search process. To attain promising results, potential complementarities and synergies between tasks are exploited, so that tasks help each other by virtue of the exchange of genetic material. This paper focuses on Evolutionary Multitasking, a perspective for dealing with multitasking optimization scenarios by embracing concepts from Evolutionary Computation. This work contributes to this field by presenting a new multitasking approach, the Coevolutionary Variable Neighborhood Search Algorithm, which draws its inspiration from both the Variable Neighborhood Search metaheuristic and coevolutionary strategies. The second contribution of this paper is the application field: the optimal partitioning of graph instances whose connections among nodes are directed and weighted. This paper pioneers the simultaneous solving of this kind of task. Two different multitasking scenarios are considered, each comprising 11 graph instances. Results obtained by our method are compared to those issued by a parallel Variable Neighborhood Search and by independent executions of the basic Variable Neighborhood Search. The discussion of these results supports our hypothesis that the proposed method is a promising scheme for simultaneously solving community detection problems over graphs.

Transfer Optimization, Evolutionary Multitasking, Variable Neighborhood Search, Community Detection.

I Introduction

Transfer Optimization is an incipient research stream within the general field of optimization. Currently, this area is gathering significant momentum in the related community, leading to an intense scientific production over the last years [26]. The main inspiration behind this paradigm is to exploit what has been learned through the optimization of one problem or task for the solving of another related or unrelated task. Owing to its relative youth, efforts dedicated to the transferability of knowledge among optimization problems were not remarkable until recent years, when this concept became a priority for a wider research community. Arguably, the ever-growing complexity and dimensionality of optimization scenarios has made researchers turn their attention to methods that allow efficiently harnessing knowledge acquired beforehand.

In this regard, three different categories can be distinguished in Transfer Optimization [16]: sequential transfer [11], multitasking [17] and multiform optimization. In this paper, we put our attention on the second of these categories. In a nutshell, multitasking is devoted to the simultaneous tackling of different tasks of equal priority by dynamically exploiting existing complementarities and synergies among them.

More concretely, the present paper is focused on Evolutionary Multitasking (EM, [27]), which deals with multitasking optimization scenarios by embracing concepts, operators and search strategies from the area of Evolutionary Computation [1, 7]. Within this branch, a particular flavor of EM has shown remarkable performance when dealing with multitasking environments: the Multifactorial Optimization strategy (MFO, [15]). Until now, MFO has been successfully adopted for solving different continuous, discrete, multi- and single-objective optimization tasks [34, 12, 37, 14]. Furthermore, a specific method has garnered most of the literature around this concept: the Multifactorial Evolutionary Algorithm (MFEA, [15]). Unfortunately, alternative methods populating the EM community are still scarce.

This lack of competitive EM methods is one of the main motivations for the development of this research work. Specifically, this paper proposes a novel EM metaheuristic algorithm based on the well-known Variable Neighborhood Search (VNS, [23]) for solving discrete multitasking environments. The Coevolutionary Variable Neighborhood Search Algorithm (CoVNS) herein presented takes a step further beyond the state of the art in two different directions. Firstly, we contribute to the EM field by proposing a new competitive algorithm which, unlike most works published so far in this specific topic, does not hinge on the MFO paradigm. Secondly, CoVNS is a pioneering attempt at exploring the applicability of VNS to the Transfer Optimization paradigm.

Besides the novelty of the method itself, a second contribution of this work relates to the scenario to which it is applied. It is relevant to first underscore that we focus on discrete optimization. In particular, the problem tackled in this work is the detection of communities in weighted directed graphs [31], namely, the optimal partitioning of graph instances whose connections among nodes are directed and weighted. This scenario has been less addressed in the literature than networks of simpler nature [21, 28]. This being said, to the best of our knowledge this study is the first of its kind to deal with multitasking for solving several community detection problems at the same time. To this end, the discovery of optimal partitions is formulated as an optimization problem, driven by a measure of modularity adapted to the directional and weighted nature of the edges of the network [24, 25]. Results from an extensive experimental setup are presented and discussed to show that the proposed CoVNS excels at solving such multitasking scenarios, outperforming non-multitasking variants of the same algorithm and, hence, providing informed evidence of the benefits of knowledge exchange among tasks.

The remainder of the article is organized as follows. Section II provides background and related work. Section III poses the mathematical formulation of the community detection problems in weighted directed networks. Next, Section IV exposes in detail the main features of the proposed CoVNS. The experimentation setup and discussion of the results are given in Section V. Finally, Section VI concludes the paper with an outlook towards further research.

II Background

In order to contextualize this work and properly assess its scientific contribution, this section provides a short overview of the EM research area. In recent years, this scientific branch has emerged as a competitive paradigm for tackling simultaneous optimization tasks. The adoption of evolutionary computation concepts for multitasking (giving rise to EM) has become the de facto search strategy: by designing a unified search space, these population-based algorithms allow for an inherently parallel evolution of the whole set of tasks, and for the transfer of genetic material among individuals to exploit inter-task synergies [26, 15].

There is a solid consensus that, until late 2017, EM was materialized only through the perspective of MFO [6]. Since then, this incipient research field has gathered a notable corpus of literature focused on new algorithmic schemes, such as the multitasking multi-swarm optimization introduced in [33], the coevolutionary multitasking scheme proposed in [5] or the coevolutionary bat algorithm detailed in [29]. Further alternatives to MFEA have also emerged, partly inspired by the concepts of this influential method. Some examples are the multifactorial differential evolution proposed in [10], the multifactorial cellular genetic algorithm in [30], the particle swarm optimization-firefly hybridization introduced in [35], or the multifactorial brain storm optimization algorithm presented in [38]. Although in this work the EM environment under consideration is not addressed by using the MFO strategy, we refer interested readers to [3, 36, 39] for a recent overview of these methods.

We can mathematically formulate an EM scenario as an environment comprised of $K$ concurrent problems or tasks $\{T_1, \dots, T_K\}$, which must be simultaneously optimized. Thus, the scenario is characterized by the existence of as many search spaces as tasks. Furthermore, each of the $K$ problems to be solved has a fitness function (objective) $f_k: \Omega_k \to \mathbb{R}$, where $\Omega_k$ denotes the search space of task $T_k$. Assuming minimization without loss of generality, we define the main objective of EM as the discovery of a group of solutions $\{\mathbf{x}_1^*, \dots, \mathbf{x}_K^*\}$ such that $\mathbf{x}_k^* = \operatorname{arg\,min}_{\mathbf{x} \in \Omega_k} f_k(\mathbf{x})$ for each $k \in \{1, \dots, K\}$.

An aspect of paramount importance for adequately understanding the above formulation, and the EM paradigm itself, is that each solution in the population is evolved over a unified search space $\Omega^U$, which relates to each $\Omega_k$ via an encoding/decoding function $\xi_k: \Omega^U \to \Omega_k$. For this reason, each individual in $\Omega^U$ should be decoded to yield a task-specific solution for each of the $K$ tasks.

III Problem Statement

We now proceed by defining the community detection problem over weighted directed graphs. First, we model the network as a graph $G = (V, E, w)$, where $V = \{1, \dots, n\}$ represents the group of nodes or vertices of the network, $E \subseteq V \times V$ stands for the set of edges connecting pairs of vertices, and $w: E \to \mathbb{R}^+$ is a function assigning a non-negative weight to each edge. Furthermore, we consider that $w(i, i) = 0$ for all $i \in V$ (i.e., no self-loops), and that $w(i, j) = 0$ if nodes $i$ and $j$ are not linked. For notation purposes we define $w_{ij} \doteq w(i, j)$, yielding an $n \times n$ adjacency matrix $\mathbf{A} = \{w_{ij}\}$ fulfilling $\mathrm{tr}(\mathbf{A}) = 0$, with $\mathrm{tr}(\cdot)$ denoting the trace of a matrix. Lastly, the directed characteristic of the graph is guaranteed by not imposing any requirement on the symmetry of the adjacency matrix, that is, $w_{ij}$ is not necessarily equal to $w_{ji}$ for any $i, j \in V$.

Using this notation, the task of detecting communities in a network can be defined as the partition of the vertex set $V$ into a number of disjoint, arbitrarily-sized, non-empty groups. Let us denote as $M$ the number of such groups in a partition $\mathcal{C} = \{C_1, \dots, C_M\}$, such that $\bigcup_{m=1}^{M} C_m = V$ and $C_m \cap C_{m'} = \emptyset$ for $m \neq m'$ (i.e., no overlapping communities). Under this formulation, the community to which node $i$ belongs can be represented as $c_i$.
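To make this notation concrete, the following sketch (ours, with made-up toy values rather than benchmark data) encodes a small weighted directed graph as an adjacency matrix and a partition as a label vector:

```python
import numpy as np

# A weighted directed graph on n = 4 nodes: A[i][j] = w_ij >= 0,
# tr(A) = 0 (no self-loops), and no symmetry requirement (w_ij may differ from w_ji).
A = np.array([
    [0.0, 0.8, 0.0, 0.1],
    [0.5, 0.0, 0.0, 0.0],
    [0.0, 0.2, 0.0, 0.9],
    [0.0, 0.0, 0.7, 0.0],
])

# A partition of the vertex set into disjoint, non-empty communities,
# stored as a label vector: c[i] is the (0-based) community of node i.
c = np.array([0, 0, 1, 1])  # two communities: {0, 1} and {2, 3}

assert np.trace(A) == 0.0          # no self-loops
assert np.all(A >= 0)              # non-negative weights
assert len(set(c)) == c.max() + 1  # community labels are contiguous
```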

With all this, we should bear in mind that the weighted and directed nature of the graphs used in this paper enforces a reformulation of the in-degree and out-degree values that participate in conventional modularity formulations. A way to redefine such measures is to formulate the so-called input and output strengths of node $i$, which are given by:

$$s_i^{in} = \sum_{j \in V} w_{ji}, \qquad s_i^{out} = \sum_{j \in V} w_{ij},$$

that is, as the sum of the weights of the incident (outgoing) edges to (from) node $i$. It is worth noting here that these values capture both the directivity and the weighted nature of the adjacency matrix $\mathbf{A}$. Therefore, these two quantities are of paramount importance for properly redefining the concept of communities, in a way analogous to the role played by in- and out-degree values when clustering undirected, unweighted networks.
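In matrix terms, the input and output strengths are simply the column and row sums of the adjacency matrix. A minimal sketch (the function name is ours):

```python
import numpy as np

def strengths(A):
    """Return (s_in, s_out) for a weighted directed adjacency matrix A:
    s_in[i] sums the weights of edges incident to node i (column sum),
    s_out[i] sums the weights of edges leaving node i (row sum)."""
    A = np.asarray(A, dtype=float)
    return A.sum(axis=0), A.sum(axis=1)
```

On a 3-node cycle with weights 2, 3 and 1, the resulting strengths are easy to verify by hand against the definition above.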

Bearing all the above formulation in mind, a quality measure for a given partition $\mathcal{C}$ can be furnished from the definition of the classical modularity for undirected graphs introduced in [25, 21]. By defining a binary function $\delta(c_i, c_j)$ such that $\delta(c_i, c_j) = 1$ if nodes $i$ and $j$ belong to the same community as per the partition set by $\mathcal{C}$ (and $0$ otherwise), the modularity in weighted directed networks can be calculated as:

$$Q(\mathcal{C}) = \frac{1}{W} \sum_{i \in V} \sum_{j \in V} \left( w_{ij} - \frac{s_i^{out} \, s_j^{in}}{W} \right) \delta(c_i, c_j), \tag{2}$$

where $W = \sum_{i \in V} \sum_{j \in V} w_{ij}$ represents the sum of the weights of every edge of the graph [4]. Thus, detecting a high-quality partition of a weighted directed network can be defined as finding:

$$\mathcal{C}^* = \operatorname{arg\,max}_{\mathcal{C} \in \mathcal{P}_V} Q(\mathcal{C}),$$

where $\mathcal{P}_V$ stands for the whole set of possible partitions of the $n$ elements of $V$ into non-empty subsets. It is interesting to point out that the cardinality of this set is given by the $n$-th Bell number [18]. Even a small graph amounts to an astronomically large number of possible partitions: assuming that the computation of the modularity in (2) takes just one microsecond, a practitioner would need more than six months to exhaustively evaluate all the partitions of a graph with only a few tens of nodes. This example illustrates the convenience of using heuristics and metaheuristics for efficiently solving this complex combinatorial problem, and of adopting multitasking approaches when solving several instances of the problem at the same time.
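Both the modularity measure defined above and the Bell-number growth of the search space can be sketched in a few lines. This is our own illustrative implementation, not the authors' code, and it assumes a 0-based label vector c with c[i] giving the community of node i:

```python
import numpy as np

def directed_weighted_modularity(A, c):
    """Modularity of a partition (label vector c) of the weighted directed
    graph with adjacency matrix A, following the formula in the text:
    Q = (1/W) * sum_ij (w_ij - s_out_i * s_in_j / W) * delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c)
    W = A.sum()                                 # total edge weight
    s_in, s_out = A.sum(axis=0), A.sum(axis=1)  # input/output strengths
    delta = c[:, None] == c[None, :]            # same-community indicator
    return float(((A - np.outer(s_out, s_in) / W) * delta).sum() / W)

def bell_numbers(n_max):
    """Bell numbers B_0..B_n_max via the Bell triangle; B_n is the number
    of partitions of n elements into non-empty subsets."""
    row, bells = [1], [1]
    for _ in range(n_max):
        new_row = [row[-1]]
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])
    return bells
```

On a toy 4-node graph made of two reciprocal pairs, grouping each pair together yields Q = 0.5, while splitting the pairs yields a negative modularity; and bell_numbers(20)[-1] = 51,724,158,235,372 shows that a 20-node graph already admits over 5×10^13 partitions.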

IV Proposed Variable Neighborhood Search for Discrete Multitasking

Inspired by concepts from previous solvers [5, 29], one of the remarkable features of the proposed CoVNS is its multi-population nature. Thus, CoVNS comprises a fixed number $K$ of subpopulations or demes [22], each composed of the same number of candidates. The number of subpopulations is equal to the number of tasks to be solved. Furthermore, each deme $P^k$ is devoted to the optimization of a specific task $T_k$, meaning that individuals belonging to subpopulation $P^k$ are only evaluated on task $T_k$ as per its objective $f_k$.

The coevolutionary strategy of CoVNS implies the migration of individuals across subpopulations. Therefore, the consideration of a unified representation becomes necessary. To realize this, the same philosophy as in MFEA has been adopted. Nonetheless, one of the main innovative features of CoVNS is that each deme has its own partial view (often restricted by the problem size) of the common search space, potentially requiring a size adjustment when different subpopulations share their individuals.

Let us focus on the community finding problem to exemplify this size adjustment. First, we encode each individual using a label-based representation [19]. In this way, each solution $\mathbf{x}$ belonging to a subpopulation is denoted as a combination of integers from the range $[1, n]$, where $n$ represents the number of nodes in the graph. The value of the $i$-th component of $\mathbf{x}$ represents the cluster label to which node $i$ belongs. For instance, for a network composed of 6 nodes, a possible individual could be $\mathbf{x} = (1, 1, 2, 2, 3, 3)$, representing the communities $\mathcal{C} = \{C_1, C_2, C_3\}$, where $C_1 = \{1, 2\}$, $C_2 = \{3, 4\}$ and $C_3 = \{5, 6\}$. Furthermore, the use of this encoding strategy requires a repairing procedure to avoid ambiguities in the representation. To this end, we design a procedure similar to the repairing function proposed in [9]: ambiguities such as those present in $(2, 2, 1, 1)$ and $(1, 1, 2, 2)$ (both representing the same partition) are resolved by standardizing the solution, i.e., by renumbering community labels in their order of first appearance.
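The repairing step can be implemented as a canonical relabeling that renumbers community labels in order of first appearance (a sketch of our own, inspired by the grouping-GA repair of [9]):

```python
def repair(labels):
    """Canonicalize a label-based encoding: relabel communities (starting
    from 1) in order of first appearance, so that all equivalent encodings
    of the same partition collapse to a single representative."""
    mapping = {}
    repaired = []
    for lab in labels:
        if lab not in mapping:
            mapping[lab] = len(mapping) + 1
        repaired.append(mapping[lab])
    return repaired
```

For example, both (2, 2, 1, 1) and (1, 1, 2, 2) repair to (1, 1, 2, 2), so duplicate representations of the same partition cannot coexist in a deme.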

Turning our attention again to the unified representation used in CoVNS, we denote the dimension of each task $T_k$ (i.e., the number of nodes of its graph) as $D_k$. Thus, once an individual $\mathbf{x}$ is about to be migrated to a deme in which the dimension of the task to be optimized is $D_l < D_k$, only the first $D_l$ elements are considered, reducing in this fashion the phenotype of the solution. In the opposite case, i.e., if $D_l > D_k$, the reverse procedure is carried out: taking into account that a solution transferred to another subpopulation replaces another individual $\mathbf{y}$, all elements of $\mathbf{y}$ from positions $D_k + 1$ to $D_l$ are appended to $\mathbf{x}$, respecting their order in $\mathbf{y}$.
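This genotype adjustment amounts to truncation when moving to a smaller task, and to tail-completion from the replaced individual when moving to a larger one. A sketch (function and argument names are ours):

```python
def fit_migrant(migrant, target_dim, replaced):
    """Adapt a migrating label vector to the dimension of the target task:
    truncate it if the target task is smaller; otherwise, complete it with
    the trailing genes of the individual it replaces, preserving their order."""
    if target_dim <= len(migrant):
        return migrant[:target_dim]
    return list(migrant) + list(replaced[len(migrant):target_dim])
```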

Randomly generate the individuals of the initial population
Evaluate each individual on all the tasks
Arrange the subpopulations (demes)
Set the iteration counter t = 0
while termination criterion not met do
    Update iteration counter: t = t + 1
    for each deme P^k do
        for each individual x in the subpopulation do
            Generate a new solution x' by applying a randomly chosen successor function
            if x' improves x on task T_k then
                Accept the new solution x'
    if t marks the end of a migration period then
        for each deme k do
            for each individual to be migrated do
                Replace the worst solution in a randomly chosen deme by the best solution in deme k
Return the best individual in each deme for its corresponding task
Algorithm 1: Proposed CoVNS multitasking solver

With all this, Algorithm 1 shows the pseudo-code of the proposed CoVNS. As can be seen in this high-level description, in the initialization phase all individuals are randomly generated. Then, each solution is assessed over all the considered tasks. After this evaluation phase, each subpopulation is built by choosing the best individuals for the task at hand. This means that the same solution can be chosen to be part of different demes. Once all subpopulations are built, each one evolves independently by following the main concepts of a basic discrete VNS. More concisely, at each iteration each individual undergoes a successor generation procedure in which one of the movement operators is applied on a random basis. These operators were introduced in previous studies [28]. For each of these functions, a subscript indicates the number of randomly chosen nodes, which are extracted from their assigned communities. In the first class of operators, the chosen elements are re-inserted into already existing communities, whereas in the second they can also be introduced into newly generated partitions.
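The exact successor functions are those of [28] and are not reproduced in this excerpt; the sketch below only illustrates the general mechanism described above, i.e., extracting r random nodes and re-inserting them into existing communities or, optionally, into a newly created one (all names are ours):

```python
import random

def move_operator(labels, r, allow_new_community=False, rng=random):
    """Generic VNS movement: pick r random nodes, remove them from their
    communities and re-insert each into an existing community or, if
    allow_new_community is set, possibly into a brand-new partition."""
    labels = list(labels)
    nodes = rng.sample(range(len(labels)), r)
    existing = sorted(set(labels))
    for node in nodes:
        choices = list(existing)
        if allow_new_community:
            choices.append(max(existing) + 1)  # open a new community
        labels[node] = rng.choice(choices)
    return labels
```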

Furthermore, at regular intervals, each deme transfers a number of its individuals to a randomly chosen subpopulation. The migration period is set proportionally to the number of objective function evaluations granted to each execution, while the number of migrated individuals is set proportionally to the population size; both values were fixed as a result of a thorough empirical tuning process. Moreover, the individuals chosen to be migrated are the best ones of the source deme, and they replace the worst ones of the destination subpopulation. Lastly, CoVNS completes its search process after exhausting its budget of objective function evaluations, after which the best individual of each deme is returned.

V Experimental Setup and Results

To properly gauge the performance of the proposed CoVNS, an extensive set of experiments has been conducted, as detailed in this section. First, in Section V-A we elaborate on the benchmark problems used for evaluating the proposed algorithm, along with the remaining details of the experimental setup. Next, in Section V-B we examine and discuss the results of such experiments.

V-a Benchmark Problems and Experimentation Setup

As mentioned in preceding sections, the benefits of the proposed method are showcased by considering, as tasks, the optimal partitioning of weighted and directed graphs. Accordingly, the performance of CoVNS has been tested over two multitasking scenarios, each composed of 11 different graph instances. In order to assess the advantage of exchanging genetic material between demes, the performance of our method has been compared to that yielded by two approaches: a separated VNS (sVNS) and a parallel VNS (pVNS). The first approach solves each problem separately by using a single VNS search; for these executions, a fair configuration has been applied for the operators and parameters. The second approach is a parallel implementation of VNS with no coevolution strategy (each subpopulation evolves independently). Even though no relevant algorithmic differences exist between sVNS and pVNS, the consideration of the parallel approach makes it possible to quantify the contribution of the exchange of knowledge among demes to the convergence of the overall solver.

Parameter Value Parameter Value Parameter Value
Population size 1110 Population size 1110 Population size 10
Successor functions , , Successor functions , , Successor functions
Function evaluations 10111000 Function evaluations 10111000 Function evaluations 101000
TABLE I: Parameter values set for CoVNS, pVNS and sVNS.

Having said that, each multitasking scenario is composed of 11 synthetically generated network instances, which must be optimized in a simultaneous fashion by the three aforementioned methods. Specifically, both benchmarks consist of networks with sizes ranging from 50 to 100 nodes. Each graph has a number of ground-truth communities, which are modeled by first creating a partition of the network (with random sizes for its constituent communities), and then connecting nodes within every community with a given intra-community probability and nodes of different communities with a given inter-community probability. Weights for every link are modeled as uniformly distributed random variables, with different supports for intra- and inter-community edges.
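The generation process just described can be sketched as follows; since the exact probabilities and weight supports are not given in this excerpt, they are left as arguments (all names are ours):

```python
import random

def generate_instance(community_sizes, p_in, p_out, w_in, w_out, seed=0):
    """Synthesize a weighted directed graph with ground-truth communities:
    nodes of the same community are linked with probability p_in, nodes of
    different communities with p_out; edge weights are drawn uniformly from
    the (lo, hi) supports w_in and w_out, respectively."""
    rng = random.Random(seed)
    labels = [k for k, size in enumerate(community_sizes) for _ in range(size)]
    n = len(labels)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # no self-loops
            same = labels[i] == labels[j]
            if rng.random() < (p_in if same else p_out):
                lo, hi = w_in if same else w_out
                A[i][j] = rng.uniform(lo, hi)
    return A, labels
```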

The first environment is called ordered incremental (OI); all the tasks included in this scenario are named OI_V_M, where V is the number of nodes populating the graph and M the number of underlying communities as per the ground-truth partition of the network at hand. The same values of the intra- and inter-community connection probabilities are assigned to all datasets. The main characteristic of this OI scenario is that instances have been generated in an incremental and ordered way. In other words, each new instance is built by extending the preceding smaller instance while respecting the predecessor's graph structure and node identifiers. For instance, all the nodes belonging to the instance OI_60_8 are also present in the subsequent OI_65_8 instance, in identical order, and the 5 new nodes are added at positions 61 to 65 of the adjacency matrix. By imposing these conditions our intention is to maintain the order of nodes in the adjacency matrix, guaranteeing that the best solutions (partitions) of the instances share most of their structure.

The second scenario has been coined unordered incremental (UI), with its instances named UI_V_M following the same criterion as in the previous OI environment and keeping the same connection probabilities. The main difference between the two environments is that the new incremental nodes in UI are inserted at the first positions: the 5 new nodes introduced in UI_65_8 with respect to UI_60_8 are added at positions 1 to 5. This apparently slight modification significantly alters the adjacency matrix, and thereby the structure of the best solution corresponding to each incrementally generated graph instance.

The rationale behind this experimental setup follows from influential works [40, 13], which emphasize that one of the most critical aspects when dealing with EM environments is the analysis of the mutual information among the optimized tasks. In fact, it is widely acknowledged that this synergy between tasks is of crucial importance for achieving profitable genetic material exchanges. For this reason, exploring which features and characteristics different tasks should share in order to be synergistic is also valuable in this research context. Therefore, these experiments will help gain a deeper understanding of the conditions that should be met, and of the performance boundaries, when opting for Transfer Optimization in the context of community detection over graphs.

Finally, 20 independent executions have been carried out for each test case, aiming at shedding light on the statistical significance of eventually discovered performance gaps. Regarding the ending criterion, every run of each method ends after a number of objective function evaluations proportional to the number of individuals per subpopulation. Using this criterion, we ensure fairness in the comparisons between CoVNS, pVNS and sVNS, dedicating the same amount of computational resources to each approach [20]. To support the replicability of this work, the parameters employed for the implemented techniques are shown in Table I.

V-B Results and Discussion

Table II depicts the results obtained by CoVNS, pVNS and sVNS. The outcomes obtained for each dataset and test case (OI and UI) are given in terms of average fitness, best solution found and standard deviation. It should be mentioned here that the measure used for comparison is the modularity value attained by the solvers (as described in Section III). In addition, we ease the visualization of the outcomes by highlighting the best average results in bold. Furthermore, in order to ascertain the statistical relevance of the differences among algorithms, two different hypothesis tests have been carried out for both the OI and UI environments [8]; their results can be analyzed in Table III. First, Friedman's non-parametric test for multiple comparisons permits assessing whether the differences in performance among the techniques can be deemed statistically significant. Thus, the first column of Table III depicts the mean rank returned by this test for each of the compared methods in both test cases (the lower the rank, the better the performance). Furthermore, to assess the statistical significance of the best performing method (CoVNS in both test cases), a Holm's post-hoc test has been performed using our proposal as control solver. The resulting unadjusted and adjusted p-values are included in the second and third columns of Table III.
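The relation between the unadjusted and adjusted columns of Table III follows Holm's step-down rule: the i-th smallest p-value is multiplied by (m - i), with monotonicity enforced over the sorted sequence. A compact sketch (ours):

```python
def holm_adjust(p_values):
    """Holm step-down adjustment of m raw p-values: sort ascending,
    multiply the i-th smallest (0-based) by (m - i), enforce monotonicity,
    and cap each adjusted value at 1."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * p_values[idx]))
        adjusted[idx] = running_max
    return adjusted
```

Applied to the two unadjusted OI p-values reported in Table III, this reproduces the adjusted column: holm_adjust([0.000051, 0.002838]) yields [0.000102, 0.002838].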

Several interesting conclusions can be drawn from Table II. To begin with, CoVNS stands out as the best performing method in all the instances that compose the OI multitasking environment. Furthermore, Table III supports the significance of these results at a 99% confidence level, taking into account that all the p-values of the Holm's post-hoc test are lower than 0.01. These findings statistically confirm that solving OI instances simultaneously while sharing knowledge among different subpopulations contributes to reaching better results. More specifically, since CoVNS has proven to be statistically superior to pVNS, we can confirm that the mere simultaneous solving of the tasks is not enough for attaining higher performance: the competitive advantage arises from the efficient sharing of genetic material among individuals belonging to synergistic tasks. As expected, pVNS and sVNS perform similarly, as the only difference between them is the parallelization of the search process (at the level of each deme and of the entire search process, respectively).

The second important fact is that the structure of the networks is of paramount importance for leveraging genetic transfer. This conclusion becomes evident in the results attained for the UI multitasking environment. In this test case, CoVNS performs best in 6 out of 11 instances, although the overall performance gap is not statistically significant, as observed in Table III. These outcomes bring us to the conclusion that sharing genetic material among non-complementary instances does not provide any competitive advantage to the search process. We recall at this point that, as opposed to OI instances, in UI tasks the structure of the incrementally generated graphs changes considerably as more nodes are added. Therefore, we conclude that although CoVNS seemingly outperforms both pVNS and sVNS, there is no statistical evidence that the sharing of knowledge leads to significantly better outcomes.

In fact, this analysis leads to the two main conclusions of this paper. The first one regards the composition of complementary graphs: as observed in this experimentation, for materializing positive genetic transfer among tasks, network instances should share their structure in an incremental way, as explained in the case of OI, so as to enforce a degree of overlap between their optimal partitions. Secondly, CoVNS has proven to be a promising method for simultaneously solving community detection problems over graphs, obtaining significant competitive advantages whenever the networks are interrelated.


Ordered Incremental

CoVNS 0.330 0.322 0.342 0.311 0.291 0.301 0.276 0.256 0.247 0.252 0.230
0.365 0.354 0.379 0.348 0.324 0.328 0.302 0.282 0.272 0.283 0.271
0.022 0.025 0.026 0.023 0.024 0.021 0.017 0.015 0.015 0.020 0.022
pVNS 0.322 0.294 0.280 0.252 0.224 0.224 0.200 0.179 0.172 0.165 0.157
0.360 0.337 0.305 0.302 0.243 0.255 0.218 0.198 0.197 0.191 0.174
0.022 0.024 0.021 0.024 0.010 0.015 0.010 0.008 0.013 0.011 0.009
sVNS 0.319 0.286 0.290 0.260 0.229 0.226 0.205 0.189 0.169 0.172 0.160
0.344 0.307 0.318 0.284 0.254 0.271 0.221 0.202 0.198 0.193 0.171
0.014 0.016 0.024 0.018 0.132 0.020 0.012 0.010 0.012 0.009 0.008

Unordered Incremental

CoVNS 0.299 0.279 0.287 0.251 0.227 0.231 0.205 0.180 0.169 0.168 0.164
0.325 0.325 0.316 0.270 0.257 0.262 0.247 0.200 0.193 0.186 0.194
0.015 0.025 0.020 0.015 0.015 0.017 0.021 0.012 0.010 0.010 0.012
pVNS 0.323 0.282 0.270 0.243 0.226 0.222 0.201 0.183 0.167 0.169 0.163
0.369 0.317 0.293 0.284 0.245 0.259 0.228 0.203 0.196 0.193 0.183
0.019 0.019 0.016 0.020 0.009 0.018 0.010 0.015 0.012 0.010 0.014
sVNS 0.322 0.295 0.280 0.258 0.219 0.217 0.201 0.203 0.166 0.162 0.152
0.375 0.340 0.317 0.299 0.270 0.250 0.223 0.224 0.189 0.177 0.177
0.027 0.025 0.019 0.021 0.021 0.019 0.013 0.011 0.012 0.010 0.012
TABLE II: Results obtained by CoVNS, pVNS and sVNS for both test environments. Best average results are highlighted in bold. Each (algorithm, instance) cell indicates the average (top), best (middle) and standard deviation (bottom) of the modularity fitness computed over 20 independent runs.
Friedman's Test / Holm's Post Hoc
Method    Rank    Unadjusted p    Adjusted p

Ordered Incremental (OI)
CoVNS    1.0000    -    -
pVNS    2.7273    0.000051    0.000102
sVNS    2.2727    0.002838    0.002838

Unordered Incremental (UI)
CoVNS    1.7273    -    -
pVNS    2.0455    0.240955    0.481909
sVNS    2.2273    0.455545    0.481909
TABLE III: Mean ranks of the Friedman's non-parametric tests, and unadjusted and adjusted p-values obtained through the application of Holm's post-hoc procedure using CoVNS as control algorithm.

VI Conclusions and Future Work

This paper has elaborated on the design, implementation and validation of a novel Coevolutionary Variable Neighborhood Search (CoVNS) algorithm for dealing with evolutionary multitasking scenarios. The proposed method relies on a discrete adaptation of the VNS heuristic, incorporating further elements from coevolutionary multitasking algorithms [5, 29]. In addition to the method itself, an equally important contribution of this work is the first attempt at applying Transfer Optimization to community detection over weighted and directed graphs. In this regard, we have compared the results attained by CoVNS over two test cases, each composed of 11 datasets, with those furnished by a parallel (non-coevolutionary) VNS and by independent executions of the basic VNS. The obtained results validate our hypothesis: the knowledge sharing that lies at the heart of CoVNS is crucial for reaching better results when simultaneously solving complementary tasks.

Several research lines are envisioned as future work. In the short term, we plan to evaluate the scalability of the proposed method by analyzing its computational efficiency when simultaneously dealing with a large number of tasks. We will also explore the adaptation of the method to other combinatorial optimization problems stemming from other research fields [32]. In the longer term, we plan to endow the method with enhanced adaptive mechanisms so as to automatically define the optimal strategy for sharing knowledge according to the detected level of relationship among tasks. To this end, we plan to design schemes for automatically detecting the level of synergy among the graphs being optimized during the search process, in order to autonomously boost the transfer of knowledge. We expect that these mechanisms, currently under active investigation in related works [2], will help the solver adaptively harness positive knowledge transfers while staying resilient against negative (hence counterproductive) genetic exchanges.


The authors would like to thank the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI, Ministry of Science and Innovation) for its support through the "Red Cervera" Programme (AI4ES project), as well as the Basque Government for its support through the EMAITEK and ELKARTEK (ref. 3KIA) funding grants. J. Del Ser also acknowledges funding support from the Department of Education of the Basque Government (Consolidated Research Group MATHMODE, IT1294-19).


  1. T. Bäck, D. B. Fogel and Z. Michalewicz (1997) Handbook of evolutionary computation. CRC Press. Cited by: §I.
  2. Y. Bai, H. Ding, S. Bian, T. Chen, Y. Sun and W. Wang (2019) Simgnn: a neural network approach to fast graph similarity computation. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 384–392. Cited by: §VI.
  3. K. K. Bali, Y. Ong, A. Gupta and P. S. Tan (2019) Multifactorial evolutionary algorithm with online transfer parameter estimation: mfea-ii. IEEE Transactions on Evolutionary Computation. Cited by: §II.
  4. T. Chakraborty, A. Dalmia, A. Mukherjee and N. Ganguly (2017) Metrics for community analysis: a survey. ACM Computing Surveys (CSUR) 50 (4), pp. 1–37. Cited by: §III.
  5. M. Cheng, A. Gupta, Y. Ong and Z. Ni (2017) Coevolutionary multitasking for concurrent global optimization: with case studies in complex engineering design. Engineering Applications of Artificial Intelligence 64, pp. 13–24. Cited by: §II, §IV, §VI.
  6. B. Da, Y. Ong, L. Feng, A. K. Qin, A. Gupta, Z. Zhu, C. Ting, K. Tang and X. Yao (2017) Evolutionary multitasking for single-objective continuous optimization: benchmark problems, performance metric, and baseline results. Note: arXiv:1706.03470 Cited by: §II.
  7. J. Del Ser, E. Osaba, D. Molina, X. Yang, S. Salcedo-Sanz, D. Camacho, S. Das, P. N. Suganthan, C. A. C. Coello and F. Herrera (2019) Bio-inspired computation: where we stand and what’s next. Swarm and Evolutionary Computation 48, pp. 220–250. Cited by: §I.
  8. J. Derrac, S. García, D. Molina and F. Herrera (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation 1 (1), pp. 3–18. Cited by: §V-B.
  9. E. Falkenauer (1998) Genetic algorithms and grouping problems. John Wiley & Sons, Inc. Cited by: §IV.
  10. L. Feng, W. Zhou, L. Zhou, S. Jiang, J. Zhong, B. Da, Z. Zhu and Y. Wang (2017) An empirical study of multifactorial PSO and multifactorial DE. In IEEE Congress on Evolutionary Computation, pp. 921–928. Cited by: §II.
  11. L. Feng, Y. Ong, A. Tan and I. W. Tsang (2015) Memes as building blocks: a case study on evolutionary optimization+ transfer learning for routing problems. Memetic Computing 7 (3), pp. 159–180. Cited by: §I.
  12. M. Gong, Z. Tang, H. Li and J. Zhang (2019) Evolutionary multitasking with dynamic resource allocating strategy. IEEE Transactions on Evolutionary Computation 23 (5), pp. 858–869. Cited by: §I.
  13. A. Gupta, Y. Ong, B. Da, L. Feng and S. D. Handoko (2016) Landscape synergy in evolutionary multitasking. In IEEE Congress on Evolutionary Computation (CEC), pp. 3076–3083. Cited by: §V-A.
  14. A. Gupta, Y. Ong, L. Feng and K. C. Tan (2016) Multiobjective multifactorial optimization in evolutionary multitasking. IEEE Transactions on Cybernetics 47 (7), pp. 1652–1665. Cited by: §I.
  15. A. Gupta, Y. Ong and L. Feng (2015) Multifactorial evolution: toward evolutionary multitasking. IEEE Transactions on Evolutionary Computation 20 (3), pp. 343–357. Cited by: §I, §II.
  16. A. Gupta, Y. Ong and L. Feng (2017) Insights on transfer optimization: because experience is the best teacher. IEEE Transactions on Emerging Topics in Computational Intelligence 2 (1), pp. 51–64. Cited by: §I.
  17. A. Gupta and Y. Ong (2016) Genetic transfer or population diversification? deciphering the secret ingredients of evolutionary multitask optimization. In IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–7. Cited by: §I.
  18. J. M. Harris, J. L. Hirst and M. J. Mossinghoff (2008) Combinatorics and graph theory. Vol. 2, Springer. Cited by: §III.
  19. E. R. Hruschka, R. J. Campello and A. A. Freitas (2009) A survey of evolutionary algorithms for clustering. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39 (2), pp. 133–155. Cited by: §IV.
  20. A. LaTorre, D. Molina, E. Osaba, J. Del Ser and F. Herrera (2020) Fairness in bio-inspired optimization research: a prescription of methodological guidelines for comparing meta-heuristics. Note: arXiv:2004.09969 Cited by: §V-A.
  21. E. A. Leicht and M. E. Newman (2008) Community structure in directed networks. Physical Review Letters 100 (11), pp. 118703. Cited by: §I, §III.
  22. G. Luque and E. Alba (2011) Parallel genetic algorithms: theory and real world applications. Vol. 367, Springer. Cited by: §IV.
  23. N. Mladenović and P. Hansen (1997) Variable neighborhood search. Computers & Operations Research 24 (11), pp. 1097–1100. Cited by: §I.
  24. M. E. Newman and M. Girvan (2004) Finding and evaluating community structure in networks. Physical Review E 69 (2), pp. 026113. Cited by: §I.
  25. M. E. Newman (2004) Analysis of weighted networks. Physical Review E 70 (5), pp. 056131. Cited by: §I, §III.
  26. Y. Ong and A. Gupta (2016) Evolutionary multitasking: a computer science view of cognitive multitasking. Cognitive Computation 8 (2), pp. 125–142. Cited by: §I, §II.
  27. Y. Ong (2016) Towards evolutionary multitasking: a new paradigm in evolutionary computation. In Computational Intelligence, Cyber Security and Computational Models, pp. 25–26. Cited by: §I.
  28. E. Osaba, J. Del Ser, D. Camacho, M. N. Bilbao and X. Yang (2020) Community detection in networks using bio-inspired optimization: latest developments, new results and perspectives with a selection of recent meta-heuristics. Applied Soft Computing 87, pp. 106010. Cited by: §I, §IV.
  29. E. Osaba, J. Del Ser, X. Yang, A. Iglesias and A. Galvez (2020) COEBA: a coevolutionary bat algorithm for discrete evolutionary multitasking. In International Conference on Computational Science, pp. 244–256. Cited by: §II, §IV, §VI.
  30. E. Osaba, A. D. Martinez, J. L. Lobo, J. Del Ser and F. Herrera (2020) Multifactorial cellular genetic algorithm (MFCGA): algorithmic design, performance comparison and genetic transferability analysis. In IEEE Congress on Evolutionary Computation, pp. 1–8. Cited by: §II.
  31. C. Pizzuti (2017) Evolutionary computation for community detection in networks: a review. IEEE Transactions on Evolutionary Computation 22 (3), pp. 464–483. Cited by: §I.
  32. R. Precup and R. David (2019) Nature-inspired optimization algorithms for fuzzy controlled servo systems. Butterworth-Heinemann. Cited by: §VI.
  33. H. Song, A. Qin, P. Tsai and J. Liang (2019) Multitasking multi-swarm optimization. In IEEE Congress on Evolutionary Computation (CEC), pp. 1937–1944. Cited by: §II.
  34. C. Wang, H. Ma, G. Chen and S. Hartmann (2019) Evolutionary multitasking for semantic web service composition. Note: arXiv:1902.06370 Cited by: §I.
  35. H. Xiao, G. Yokoya and T. Hatanaka (2019) Multifactorial PSO-FA hybrid algorithm for multiple car design benchmark. In IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 1926–1931. Cited by: §II.
  36. J. Yi, J. Bai, H. He, W. Zhou and L. Yao (2020) A multifactorial evolutionary algorithm for multitasking under interval uncertainties. IEEE Transactions on Evolutionary Computation. Cited by: §II.
  37. Y. Yu, A. Zhu, Z. Zhu, Q. Lin, J. Yin and X. Ma (2019) Multifactorial differential evolution with opposition-based learning for multi-tasking optimization. In IEEE Congress on Evolutionary Computation (CEC), pp. 1898–1905. Cited by: §I.
  38. X. Zheng, Y. Lei, M. Gong and Z. Tang (2016) Multifactorial brain storm optimization algorithm. In International Conference on Bio-Inspired Computing: Theories and Applications, pp. 47–53. Cited by: §II.
  39. L. Zhou, L. Feng, K. C. Tan, J. Zhong, Z. Zhu, K. Liu and C. Chen (2020) Toward adaptive knowledge transfer in multifactorial evolutionary computation. IEEE Transactions on Cybernetics. Cited by: §II.
  40. L. Zhou, L. Feng, J. Zhong, Z. Zhu, B. Da and Z. Wu (2018) A study of similarity measure between tasks for multifactorial evolutionary algorithm. In Proceedings of the ACM Genetic and Evolutionary Computation Conference Companion, pp. 229–230. Cited by: §V-A.