Weight-based Fish School Search algorithm for Many-Objective Optimization


I.M.C. Albuquerque*, J.B. Monteiro-Filho, F.B. Lima Neto
Computational Intelligence Research Group, Polytechnic School of Pernambuco, Benfica 455, Recife-PE, Brazil

Optimization problems with more than one objective are a very attractive topic for researchers due to their applicability to real-world situations. Over the years, research effort in the Computational Intelligence field has resulted in algorithms able to achieve good results on problems with more than one conflicting objective. However, these techniques do not exhibit the same performance as the number of objectives increases beyond 3. This paper proposes an adaptation of the metaheuristic Fish School Search to solve optimization problems with many objectives. This adaptation is based on the division of the school into clusters, each specialized in solving a single-objective problem generated by the decomposition of the original problem. For this, we used concepts and ideas often found in the literature and applied in state-of-the-art algorithms, namely: (i) reference points and lines in the objective space; (ii) a clustering process; and (iii) the Penalty-based Boundary Intersection decomposition technique. The proposed algorithm was compared with two state-of-the-art bio-inspired algorithms. Results show competitiveness, as well as the need to improve the performance of the proposed technique on multi-modal many-objective problems.

1 Introduction

Real-world problems, more often than not, involve optimizing more than one objective. In many cases these objectives conflict with each other, and there might not be a single solution capable of optimizing all goals at the same time. Such problems are known as multi-objective optimization problems. In order to solve them, candidate algorithms should provide as answers a set of good “trade-off" solutions so that a decision maker can choose one or more [21]. Mathematically, multi-objective optimality can be defined by means of Pareto-dominance [6].

Not rarely, real-world problems require many (i.e. more than 3) objectives to be considered [7]. In those cases, solving the optimization problem is even more difficult because the increase in the number of objectives makes it harder to distinguish a good solution from a not-so-good one [24]. These many-objective problems appear in several areas, such as software engineering [20] and automotive engine calibration [16].

Early research on evolutionary and swarm intelligence algorithms for solving many-objective optimization problems featured straightforward applications of existing multi-objective algorithms designed for problems with 2 or 3 objectives. Despite their success in solving problems with few objectives, these algorithms face many difficulties in solving many-objective optimization problems. Their most important challenge, especially for the Pareto-based algorithms [6], is the deterioration of search ability as the number of objectives increases: almost all solutions in the population become non-dominated, i.e. equally good [13]. In those cases, evolutionary algorithms lose their selection pressure and swarm intelligence algorithms lose their exploitation ability; that is, the algorithms' capability to converge to good solutions deteriorates. Thus, multi-objective algorithms scale up poorly on many-objective optimization problems [5]. In order to tackle this issue, researchers in the field have proposed many different solutions, such as adopting new preference relations like fuzzy Pareto-dominance [10] and ε-dominance [14].

Another approach used by researchers to deal with the loss of convergence capability is the decomposition of the many-objective problem into simpler single-objective problems, as is done in MOEA/D [25]. Algorithms based on this idea have shown good performance in solving many-objective optimization problems [23]. Another advantageous practice in the research on bio-inspired algorithms for many-objective optimization is the use of reference points in the objective space to guide the search process and preserve diversity among the solutions returned by an algorithm. This is an important factor because many algorithms tend to favor dominance resistant solutions [12], i.e. solutions with high performance in at least one of the objectives but especially poor performance in the others. An example of successful use of reference points is the NSGA-III algorithm [7]. In recent years, some researchers have dedicated their attention to combining the decomposition-based approach with reference points to enhance the performance of many-objective optimization algorithms. Examples of state-of-the-art algorithms that use both techniques are the Many-objective Evolutionary Algorithm Based on Dominance and Decomposition (MOEA/DD) [15] and the θ-Dominance Evolutionary Algorithm (θ-DEA) [24].

This work incorporates the decomposition-based approach and reference points into the swarm intelligence algorithm Fish School Search (FSS) [1] in order to adapt it to solve many-objective optimization problems. FSS versions produced so far have already presented good results in optimizing multi-modal functions [4] (e.g. multi-plateau functions [17]) and multi-objective problems [2]. Also, when compared to other swarm intelligence algorithms, one of FSS's control mechanisms, the weights of the fishes, natively provides a way to record how good the solutions found by the algorithm are. This is important because most other swarm intelligence techniques proposed to solve many-objective problems use an external archive to store the good solutions that guide the search process. The schemes applied to manage such archives are a current focus of researchers in the field [21], since the methods chosen to prune, update and select solutions within the archive are crucial for the algorithms to achieve good performance [11].

The central idea of this work is to extend a multi-modal version of FSS, the weight-based Fish School Search (wFSS) [4], to solve problems with many objectives without the need of storing good solutions in an external archive. The proposed algorithm, named weight-based many-objective Fish School Search (wmoFSS), has the same operators as the original version of FSS, plus a new clustering operator included to promote diversity during the search process. This clustering process is responsible for splitting the swarm into sub-groups. Moreover, in wmoFSS the many-objective optimization problem (MaOP) is decomposed into sub-problems and each one is assigned to one sub-group that tries to solve it. Hence, the whole swarm solves all sub-problems at the same time. In comparison with the original version of FSS, the most important difference in wmoFSS is that each fish now has a vector w of weights in which each component represents the weight of the fish with respect to one objective individually.

This paper is organized as follows: Section 2 describes the main concepts within wmoFSS; Section 3 presents the novel algorithm; in Section 4 the experimental design is detailed and all results are presented and analyzed; Section 5 presents conclusions related to this work as well as future work.

2 Background

2.1 Fish School Search algorithm

The Fish School Search (FSS) algorithm is a population-based metaheuristic inspired by the behavior of schooling fish that expand and contract while looking for food. The n-dimensional position of each fish represents a possible solution of the optimization problem. The algorithm makes use of a feature named weight, attributed to every fish, which represents a cumulative account of how successful the search of that fish has been.

FSS is composed of the Feeding operator and the movement operators, the latter divided into three sub-components: Individual, Collective-instinctive and Collective-volitive. The Individual component of the movement allows every fish in the school to perform a random local search looking for promising regions in the search space. This component is computed using the following equation:

x_i(t+1) = x_i(t) + rand(-1, 1) * step_ind,     (1)

where x_i(t) and x_i(t+1) represent the position of fish i before and after the movement caused by the Individual component, respectively. rand(-1, 1) is an n-dimensional vector whose components are uniformly distributed random numbers varying from -1 up to 1. step_ind is a parameter responsible for setting the maximum displacement of this movement. A new position is only accepted if the fitness of the fish improves with the position change. If that is not the case, the fish remains in the same position and its displacement Δx_i = 0.
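As a concrete illustration, the greedy acceptance rule of the Individual component can be sketched as follows (a minimal sketch for a minimization problem; function and parameter names are ours, not taken from the original implementation):

```python
import numpy as np

def individual_movement(x, fitness, step_ind, rng):
    """One Individual-component step for a single fish (minimization).

    A candidate position is drawn within +/- step_ind of the current one
    and accepted only if it improves the fitness; otherwise the fish
    stays put and its displacement is zero.
    """
    candidate = x + rng.uniform(-1.0, 1.0, size=x.shape) * step_ind
    if fitness(candidate) < fitness(x):
        return candidate, candidate - x   # accepted: new position, displacement
    return x, np.zeros_like(x)            # rejected: same position, zero delta
```

The returned displacement is what feeds the collective components described next.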

The Collective-instinctive component of the movement is a weighted average of the Individual movements of all fishes in the school. A vector I representing this weighted average of displacements is calculated according to:

I(t) = [ Σ_{i=1}^{N} Δx_i Δf_i ] / [ Σ_{i=1}^{N} Δf_i ],     (2)

where N is the size of the school, Δx_i the displacement of fish i due to the Individual movement and Δf_i its fitness improvement. The displacement represented by vector I is defined in such a way that fishes which experienced a higher improvement attract the other fishes towards their position. After vector I is computed, every fish moves according to:

x_i(t+1) = x_i(t) + I(t).     (3)
The last movement component, the Collective-volitive, is used to regulate the school's exploration/exploitation ability during the search process. First, the barycenter B of the school is calculated based on the position x_i and the weight w_i of each fish, as described in equation 4:

B(t) = [ Σ_{i=1}^{N} x_i(t) w_i(t) ] / [ Σ_{i=1}^{N} w_i(t) ].     (4)

Then, if the total school weight has increased from the last iteration to the current one, fishes are attracted towards the barycenter; if the total school weight has not improved, fishes are spread away from it.
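The contract/expand rule above can be sketched as follows (a hedged illustration for a whole school at once; names and the unit-direction formulation are ours):

```python
import numpy as np

def volitive_movement(positions, weights, prev_total_weight, step_vol, rng):
    """Collective-volitive step: contract toward the barycenter when the
    total school weight grew (exploitation), expand away from it otherwise
    (exploration).
    """
    B = np.average(positions, axis=0, weights=weights)    # barycenter (Eq. 4)
    direction = positions - B
    dist = np.linalg.norm(direction, axis=1, keepdims=True) + 1e-12
    step = step_vol * rng.uniform(0.0, 1.0, size=(len(positions), 1))
    sign = -1.0 if weights.sum() > prev_total_weight else 1.0
    return positions + sign * step * direction / dist
```

Note how a single scalar comparison of total weights switches the whole school between the two regimes.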

Besides the movement operators, the Feeding operator is used to update the weights of all fishes according to:

w_i(t+1) = w_i(t) + Δf_i / max(|Δf|),     (5)

where w_i is the weight of fish i, Δf_i is its fitness variation between the last and the new position, and max(|Δf|) represents the maximum absolute value of fitness variation among all fishes in the school. w_i is only allowed to vary from 1 up to w_scale, which is a user-defined attribute. The weights of all fishes are initialized with the value w_scale/2.
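A vectorized sketch of this update (illustrative names; clipping enforces the stated [1, w_scale] range):

```python
import numpy as np

def feed(weights, delta_f, w_scale):
    """Feeding operator: update each fish's weight by its fitness gain
    normalized by the largest absolute gain in the school, then clip to
    the allowed range [1, w_scale].
    """
    max_abs = np.max(np.abs(delta_f))
    if max_abs > 0.0:                       # avoid division by zero when
        weights = weights + delta_f / max_abs  # no fish moved this iteration
    return np.clip(weights, 1.0, w_scale)
```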

2.2 Weight-based Fish School Search Algorithm

Introduced in the work of Lima Neto and Lacerda [4], wFSS is a weight-based niching version of FSS intended to provide multiple solutions in a single run for multi-modal optimization problems. The niching strategy is based on a new operator called Link Formation, which is responsible for defining leaders for the fishes in order to form sub-swarms. It works as follows: each fish a randomly chooses another fish b in the school. If b is heavier than a, then a creates a link with b and follows it (i.e. b leads a). Otherwise, nothing happens. However, if a already has a leader c and the combined weight of b and its followers is higher than that of c, then a stops following c and starts to follow b. In each iteration, if a becomes heavier than its leader, the link is broken.

In addition to the inclusion of the Link Formation operator, some modifications were made to the computation of the collective components of the movement in order to emphasize the leaders' influence on each sub-swarm. The Collective-volitive component is also modified in the sense that the barycenter is now calculated for each fish with respect to its leader. If a fish does not have a leader, its barycenter is its own current position.

3 Weight-based Many-Objective Fish School Search Algorithm

The wmoFSS algorithm incorporates a series of modifications into FSS and some of its previously proposed versions (FSS-SAR [17] and FSS-NF [18]) in order to adapt it to solve many-objective optimization problems. In this section we describe the framework of wmoFSS in detail and highlight the differences between this algorithm and the original version of FSS.

3.1 Core Idea

The main idea of wmoFSS is to split up the whole swarm into sub-swarms, as in wFSS, and also to decompose the many-objective optimization problem into scalar sub-problems, as in MOEA/D. Unlike wFSS, wmoFSS substitutes the Link Formation operator with a Clustering operator similar to the one proposed by Deb and Jain [7]. Each sub-swarm is responsible for solving one sub-problem, and all sub-swarms work simultaneously. Thus, the algorithm is able to provide multiple solutions in a single run. From the other versions of FSS, wmoFSS uses the Stagnation Avoidance Routine [17] in the Individual component of the movement and a Feeding operator similar to the one used in the NF versions [18]. The wmoFSS algorithm also incorporates the reference-point generation scheme and normalization procedure used in NSGA-III, and a comparison criterion based on the θ-dominance utilized in the θ-DEA algorithm.

The following pseudocode describes the main framework of wmoFSS. First, a set of reference points in the objective space is generated, and the swarm is randomly initialized following a uniform distribution on the decision space and split into clusters. We used the two-layer reference-point generation method proposed by Deb and Jain [7]. Then, until a stopping criterion is met, the operators are applied as follows: first the fishes perform a greedy local search, then the Feeding operator is applied to each fish, and the collective components of the movement are calculated considering each cluster individually. At last, the algorithm sorts each cluster based on the Pareto-dominance criterion [6] and returns the non-dominated solutions of each cluster.

1:  Create reference points;
2:  Initialize fishes randomly;
3:  Run Clustering operator;
4:  while Stopping condition is not met do
5:     for Each fish on the school do
6:        Move each fish according to its Individual component;
7:        Run Feeding operator;
8:        Leaders definition;
9:        Move each fish according to its Collective-instinctive component;
10:        Move each fish according to its Collective-volitive component;
11:     end for
12:  end while
13:  Sort each cluster based on Pareto-dominance;
14:  Return non-dominated solutions from each cluster.
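The reference points created in step 1 follow the Das–Dennis simplex-lattice construction on which NSGA-III's two-layer scheme is built. A sketch of the single-layer construction (the two-layer variant combines one such outer set with an inward-shrunk inner set; the enumeration below uses the stars-and-bars correspondence):

```python
from itertools import combinations
import numpy as np

def das_dennis(m, p):
    """All m-dimensional points with components in {0, 1/p, ..., 1}
    summing to 1, i.e. a uniform lattice on the unit simplex with
    C(p + m - 1, m - 1) points.
    """
    points = []
    for bars in combinations(range(p + m - 1), m - 1):
        prev, coords = -1, []
        for b in bars:
            coords.append(b - prev - 1)  # gap between consecutive bars
            prev = b
        coords.append(p + m - 2 - prev)  # remainder after the last bar
        points.append(np.array(coords, dtype=float) / p)
    return np.array(points)
```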

3.2 Clustering Operator

The Clustering operator of wmoFSS is responsible for splitting the population into sub-swarms or clusters, just as the Link Formation operator does in wFSS. This operator divides the school into clusters according to the perpendicular Euclidean distance between a fish and a set of reference lines in the objective space. A reference line is defined as a line that goes from the ideal point to a reference point generated by the approach previously explained. The assignment process executed by the Clustering operator is similar to the one used by the NSGA-III algorithm. Considering a set of k reference points and a school of N fishes, the N/k fishes closest to a reference line are assigned to it. If N is not a multiple of k, each remaining fish is assigned to its closest reference line. With this clustering procedure, we wanted to avoid the possibility that a reference line has no fish associated with it, in which case some regions of the Pareto front (PF) might not be reached.
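The distance computation behind this assignment can be sketched as follows. This is a simplification: it assigns each fish to its nearest reference line, whereas the balanced scheme described above caps each cluster at N/k fishes; names are illustrative.

```python
import numpy as np

def perpendicular_distances(F, ref_points):
    """Perpendicular distance from each row of F (objective vectors already
    translated so the ideal point is the origin) to each reference line
    through the origin and a reference point.
    """
    L = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)
    proj = F @ L.T                                    # scalar projections
    d2 = np.sum(F ** 2, axis=1, keepdims=True) - proj ** 2
    return np.sqrt(np.maximum(d2, 0.0))               # clamp tiny negatives

def assign_clusters(F, ref_points):
    """Nearest-line assignment (unbalanced simplification)."""
    return np.argmin(perpendicular_distances(F, ref_points), axis=1)
```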

3.3 Individual Component of the Movement

As in FSS and wFSS, the Individual component of the movement in wmoFSS is responsible for each fish's greedy local search, as described by Equation 1. However, in wmoFSS the aggregated weight (defined further in this section) is the attribute considered in the comparison between the current and candidate positions.

3.4 Feeding Operator

In wmoFSS the weight of a fish is a vector with one component per objective, each corresponding to the fish's success with respect to that objective. The Feeding operator is then responsible for calculating these components and also for aggregating them to provide a unique score for each solution in the swarm.

Its first task is to normalize the solutions in the objective space. This step is necessary because the objectives might not vary in the same range, and we want to avoid giving preference to one objective over the others during the aggregation procedure. We decided to consider an adaptive normalization process because the extreme objective values found by the fishes vary during the execution.

Our normalization process is straightforward. Each component of the objective vector of each fish is normalized into an interval whose extremes are the minimum and maximum values found so far for that objective. It is worth remembering that the vector whose coordinates are the best values of the objectives is the ideal point, represented as z*. If the ideal point of the problem is known, we use it. The vector whose coordinates are the worst values found for each objective is an approximation of the nadir point (nadir objective vector), denoted z^nad. This vector is constructed from the worst values of each objective considering just solutions of the Pareto front (PF) [6]. However, we use an approximation of z^nad because it is difficult to estimate [8].
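The adaptive min-max normalization just described can be sketched as (names are illustrative):

```python
import numpy as np

def normalize_objectives(F, z_ideal=None, z_nadir=None):
    """Min-max normalization of objective vectors (rows of F). The ideal
    and nadir estimates default to the extreme values observed so far;
    a known ideal point can be passed instead.
    """
    z = F.min(axis=0) if z_ideal is None else np.asarray(z_ideal, float)
    znad = F.max(axis=0) if z_nadir is None else np.asarray(z_nadir, float)
    return (F - z) / np.maximum(znad - z, 1e-12)   # guard degenerate ranges
```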

For a fish with objective vector f(x_i), the j-th component of its weight vector, corresponding to the j-th objective function, is computed according to equation 6:

w_ij = (f_j^max - f_j(x_i)) / (f_j^max - z_j*),     (6)

where f_j^max is the maximum value found for the j-th objective up to the current iteration and z_j* is the j-th coordinate of the known ideal point, or of the best value found by the algorithm so far.

After the calculation of the weight vector of all fishes in the school, the Feeding operator computes the aggregated weight of each fish. In order to do this, we define that the aggregated weight of a fish i, with weight vector w_i, belonging to the cluster associated with the reference line λ_k, is the value of the Penalty-based Boundary Intersection (PBI) achievement scalarizing function (ASF) obtained for this fish, which is calculated by the following equation [25]:

PBI(w_i, λ_k) = d_1 + θ d_2,     (7)

where θ is a user-defined parameter, here chosen as θ = 5 [15]. d_2 is the Euclidean distance between w_i and its orthogonal projection onto λ_k, and represents a measure of how far w_i is from the reference line associated with its cluster. d_1 is the Euclidean distance between the orthogonal projection of w_i onto λ_k and the ideal point.

It is important to highlight that an ASF can be seen as a way of representing the “success" obtained by a solution on a given problem, and there are many ways to define such functions. Among all possible choices of achievement scalarizing function, such as the often used Weighted-sum and Tchebycheff functions, we selected the PBI because this ASF has been shown to provide good results when used to solve problems from the DTLZ test suite [7], the set of problems used to evaluate wmoFSS in this work. Furthermore, this function allows the user to control the balance between convergence and diversity by changing the value of θ.
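The PBI scalarization, with the ideal point at the origin of the normalized objective space, can be sketched as:

```python
import numpy as np

def pbi(w, lam, theta=5.0):
    """PBI value of a normalized vector w for reference direction lam:
    d1 (distance along the line) plus theta times d2 (perpendicular
    distance to the line). Smaller is better.
    """
    w, lam = np.asarray(w, float), np.asarray(lam, float)
    unit = lam / np.linalg.norm(lam)
    d1 = float(np.dot(w, unit))               # projection onto the line
    d2 = float(np.linalg.norm(w - d1 * unit)) # residual off the line
    return d1 + theta * d2
```

Raising theta penalizes solutions far from their reference line more heavily, trading convergence for diversity.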

3.5 Leaders Definition

After splitting the population into clusters, feeding the fishes and obtaining the values of their aggregated weights, the leader of each cluster is defined. We used a criterion inspired by the θ-dominance of the θ-DEA algorithm [24] to assign leadership in each sub-swarm. This dominance relation only considers individuals from the same cluster; it is based on the PBI aggregation method and takes as quality measure the value of the PBI achievement scalarizing function of a normalized objective vector in all reference directions considered by the algorithm. However, in wmoFSS we want each sub-swarm to be specialized in solving its respective sub-problem, so we choose to consider only the reference direction associated with the cluster of a fish. Hence, according to this modified θ-dominance, a fish a dominates a fish b if and only if: (i) a and b belong to the same cluster k; (ii) PBI(w_a, λ_k) < PBI(w_b, λ_k).

Based on the aforementioned definitions, the leader assignment process consists in sorting each cluster according to this modified θ-dominance and assigning leadership to all non-dominated solutions. Information regarding clusters, leaders and aggregated weights is used to compute the collective components of the movement.
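Within a cluster, this relation is a total preorder on the PBI values, so the non-dominated set reduces to the fishes attaining the cluster's minimum aggregated weight. A sketch of the selection (names are illustrative):

```python
import numpy as np

def select_leaders(cluster_ids, agg_weights):
    """Mark as leader every fish attaining its cluster's minimum
    aggregated (PBI) weight, i.e. the non-dominated fish under the
    modified dominance relation; ties yield multiple leaders.
    """
    leaders = np.zeros(len(cluster_ids), dtype=bool)
    for c in np.unique(cluster_ids):
        idx = np.where(cluster_ids == c)[0]
        leaders[idx[agg_weights[idx] == agg_weights[idx].min()]] = True
    return leaders
```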

3.6 Collective-Instinctive Component of the Movement Operator

The Collective-instinctive component of the movement in wmoFSS is computed considering only the fishes in the same cluster. The variation of the aggregated weight is used to measure the increase of success achieved by each fish, as in the FSS-NF versions. This measure is defined as Δw̄_i = w̄_i(t) - w̄_i(t+1) for minimization problems and Δw̄_i = w̄_i(t+1) - w̄_i(t) for maximization problems. For a fish i, its position after the collective-instinctive movement is computed according to:

x_i(t+1) = x_i(t) + (1 - ρ_i) [ Σ_{j∈C_i} Δx_j Δw̄_j ] / [ Σ_{j∈C_i} Δw̄_j ],     (8)

where C_i is the cluster of fish i and the parameter ρ_i is equal to 1 if i is a cluster leader and 0 otherwise. As in the FSS-SAR versions, only the fishes which improved their aggregated weight are allowed to contribute to the computation of this component.

3.7 Collective-Volitive Component of the Movement Operator

As in the Collective-instinctive component computation, in the Collective-volitive component of the movement only fishes in the same cluster are considered to determine the value of the barycenter; that is, instead of calculating the school's barycenter, in wmoFSS a barycenter is computed for each cluster. Given a fish i, the barycenter of its cluster C_i is computed as follows:

B_i(t) = [ Σ_{j∈C_i} x_j(t) w̄_j(t) ] / [ Σ_{j∈C_i} w̄_j(t) ].     (9)

The movement towards the barycenter is computed exactly as in FSS. As for the Collective-instinctive component, cluster leaders are not affected by this operator.

4 Experimental Design

To evaluate the performance of wmoFSS, we selected the first four problems of the DTLZ test suite [9], as they appear very often in the literature [22]. We compared the Inverted Generational Distance (IGD) metric [3] obtained by wmoFSS with the results of two state-of-the-art algorithms: the Many-objective Particle Swarm Optimization (MaOPSO) [11] and NSGA-III. We chose these algorithms because: (i) wmoFSS incorporates many aspects of NSGA-III, such as the use of reference points and the clustering of the population; (ii) MaOPSO is a swarm intelligence technique very different from wmoFSS, which uses an external archive to guide the search. The IGD values used for comparison were provided by the MaOPSO authors and are the same used in [11].

Before comparing wmoFSS with the state-of-the-art algorithms, we performed a parameter selection step to choose the values of the step parameters and the number of fishes, and also the clustering and normalization methods. In this preliminary step, we adopted a methodology inspired by the Factorial Analysis method [19]. After this step, the median, maximum and minimum values of IGD obtained by wmoFSS on the DTLZ1, 2, 3 and 4 problems with 3, 5 and 10 objectives were compared with those of the NSGA-III algorithm [7] and MaOPSO [11].

4.1 Results

We show in Table 1 the results obtained in the experiments involving wmoFSS, NSGA-III and MaOPSO.

Problem   m    IGD      wmoFSS    NSGA-III  MaOPSO
DTLZ1     3    median   3.63E-02  1.51E-03  6.98E-04
               maximum  7.27E-02  1.74E-03  6.99E-04
               minimum  1.66E-02  1.37E-03  6.98E-04
          5    median   1.85E-02  1.59E-03  8.25E-04
               maximum  2.27E-02  1.88E-03  8.25E-04
               minimum  9.78E-03  1.50E-03  8.24E-04
          10   median   1.22E-02  1.42E-03  1.43E-03
               maximum  2.32E-02  2.62E-03  1.44E-03
               minimum  8.21E-03  1.38E-03  1.42E-03
DTLZ2     3    median   4.44E-03  3.77E-03  2.27E-03
               maximum  4.67E-03  4.11E-03  2.27E-03
               minimum  4.24E-03  3.55E-03  2.27E-03
          5    median   4.71E-03  1.45E-02  2.94E-03
               maximum  4.80E-03  1.53E-02  2.94E-03
               minimum  4.62E-03  1.10E-02  2.94E-03
          10   median   6.07E-03  1.23E-02  4.95E-03
               maximum  6.13E-03  1.25E-02  4.97E-03
               minimum  5.97E-03  9.93E-03  4.94E-03
DTLZ3     3    median   1.04E+00  3.89E-03  2.27E-03
               maximum  1.41E+00  4.36E-03  2.67E-01
               minimum  5.29E-01  3.52E-03  2.27E-03
          5    median   4.67E-01  1.47E-02  2.94E-03
               maximum  5.72E-01  1.91E-02  2.95E-03
               minimum  3.12E-01  4.88E-03  2.94E-03
          10   median   1.69E-01  1.20E-02  4.95E-03
               maximum  2.66E-01  1.36E-02  7.54E-03
               minimum  8.83E-02  6.45E-03  4.93E-03
DTLZ4     3    median   8.21E-03  3.80E-03  2.45E-03
               maximum  9.29E-03  4.14E-03  2.59E-03
               minimum  7.54E-03  3.60E-03  2.36E-03
          5    median   6.15E-03  5.00E-03  3.88E-03
               maximum  6.58E-03  5.24E-03  4.09E-03
               minimum  5.86E-03  4.83E-03  3.51E-03
          10   median   6.33E-03  5.10E-03  4.92E-03
               maximum  6.50E-03  5.20E-03  4.95E-03
               minimum  6.22E-03  5.02E-03  4.90E-03
Table 1: Results of median, maximum and minimum IGD obtained on the DTLZ1, 2, 3 and 4 test problems for wmoFSS, NSGA-III and MaOPSO with 3, 5 and 10 objectives.

From Table 1, one can notice that, except for the multi-modal problems, wmoFSS shows competitive performance in comparison with NSGA-III and MaOPSO. Regarding NSGA-III, wmoFSS achieved results of the same order for the 3-objective DTLZ2 and outperformed this algorithm on the 5- and 10-objective DTLZ2. For the DTLZ4 problem, wmoFSS did not outperform NSGA-III in any of the cases considered, but achieved median, maximum and minimum IGD of the same order as NSGA-III. In comparison with MaOPSO, wmoFSS did not achieve better results for either DTLZ2 or DTLZ4, but the median, maximum and minimum IGD obtained have the same order of magnitude. From these results, it is also possible to see that wmoFSS did not achieve good results on the DTLZ1 and DTLZ3 problems. These problems are multi-modal instances of the DTLZ test suite and have multiple local Pareto fronts. We attribute this difficulty to the limited capability of a greedy search process to escape local optima.

To provide a graphical analysis of results, we show in Figures 1 and 2 two scatter plots of the set of solutions returned by wmoFSS and MaOPSO for the 3-objective DTLZ2 problem. In both cases the best IGD results were selected. A random sample of the true PF is represented by the smaller dots and the set returned by each algorithm is represented by the bigger dots.

Figure 1: Scatter plot of the solutions returned by wmoFSS on the 3-objective DTLZ2 problem
Figure 2: Scatter plot of the solutions returned by MaOPSO on the 3-objective DTLZ2 problem

One can notice that, despite the fact that wmoFSS has a worse IGD on this instance, the set of solutions returned by wmoFSS is able to cover some regions of the PF that MaOPSO was not able to reach.

5 Conclusion

In this work we conceived, implemented and tested the weight-based Many-Objective Fish School Search algorithm (wmoFSS), an adaptation of the metaheuristic Fish School Search to solve many-objective optimization problems. Our goal was to propose a competitive approach that encompasses the concepts readily applied by state-of-the-art algorithms.

When comparing wmoFSS with state-of-the-art algorithms, we concluded that the proposed technique achieved competitive results on the uni-modal DTLZ instances, as it achieved IGD values of the same order of magnitude as the other algorithms, and outperformed NSGA-III on the 5- and 10-objective DTLZ2.

As the proposed algorithm does not yet present good results when solving multi-modal instances of the considered test suite, we propose as future work to apply other search mechanisms in the Individual component of the movement. We expect that this will increase the search ability of wmoFSS. Also, we plan to test wmoFSS on other test suites, such as the WFG set of problems.


  • [1] C. J. Bastos Filho, F. B. de Lima Neto, A. J. Lins, A. I. Nascimento, and M. P. Lima. A novel search algorithm based on fish school behavior. In Systems, Man and Cybernetics, 2008. SMC 2008. IEEE International Conference on, pages 2646–2651. IEEE, 2008.
  • [2] C. J. Bastos-Filho and A. C. Guimarães. Multi-objective fish school search. International Journal of Swarm Intelligence Research (IJSIR), 6(1):23–40, 2015.
  • [3] P. A. Bosman and D. Thierens. The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE transactions on evolutionary computation, 7(2):174–188, 2003.
  • [4] F. Buarque De Lima Neto and M. Gomes Pereira de Lacerda. Weight based fish school search. In Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, pages 270–277. IEEE, 2014.
  • [5] O. R. Castro, R. Santana, and A. Pozo. C-multi: A competent multi-swarm approach for many-objective problems. Neurocomputing, 180:68–78, 2016.
  • [6] K. Deb. Multi-objective optimization using evolutionary algorithms, volume 16. John Wiley & Sons, 2001.
  • [7] K. Deb and H. Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4):577–601, 2014.
  • [8] K. Deb, K. Miettinen, and S. Chaudhuri. Toward an estimation of nadir objective vector using a hybrid of evolutionary and local search approaches. IEEE Transactions on Evolutionary Computation, 14(6):821–841, 2010.
  • [9] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scalable multi-objective optimization test problems. In Evolutionary Computation, 2002. CEC’02. Proceedings of the 2002 Congress on, volume 1, pages 825–830. IEEE, 2002.
  • [10] M. Farina and P. Amato. A fuzzy definition of “optimality" for many-criteria optimization problems. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 34(3):315–326, 2004.
  • [11] E. Figueiredo, T. Ludermir, and C. Bastos-Filho. Many objective particle swarm optimization. Information Sciences, 374:115–134, 2016.
  • [12] K. Ikeda, H. Kita, and S. Kobayashi. Failure of pareto-based moeas: does non-dominated really mean near to optimal? In Evolutionary Computation, 2001. Proceedings of the 2001 Congress on, volume 2, pages 957–962. IEEE, 2001.
  • [13] H. Ishibuchi, N. Tsukamoto, and Y. Nojima. Evolutionary many-objective optimization: A short review. In IEEE congress on evolutionary computation, pages 2419–2426, 2008.
  • [14] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler. Combining convergence and diversity in evolutionary multiobjective optimization. Evolutionary computation, 10(3):263–282, 2002.
  • [15] K. Li, K. Deb, Q. Zhang, and S. Kwong. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Transactions on Evolutionary Computation, 19(5):694–716, 2015.
  • [16] R. J. Lygoe, M. Cary, and P. J. Fleming. A real-world application of a many-objective optimisation complexity reduction process. In International Conference on Evolutionary Multi-Criterion Optimization, pages 641–655. Springer, 2013.
  • [17] J. B. Monteiro Filho, I. M. C. Albuquerque, F. B. d. L. Neto, and F. V. Ferreira. Optimizing multi-plateau functions with fss-sar (stagnation avoidance routine). In IEEE-Symposium Series on Computational Intelligence, pages 1–23, 2016.
  • [18] J. B. Monteiro Filho, I. M. C. Albuquerque, F. B. L. Neto, and F. V. S. Ferreira. Improved search mechanisms for the fish school search algorithm. In International Conference on Intelligent Systems Design and Applications, pages 362–371. Springer, 2016.
  • [19] D. C. Montgomery. Design and analysis of experiments. John Wiley & Sons, 2008.
  • [20] K. Praditwong, M. Harman, and X. Yao. Software module clustering as a multi-objective search problem. IEEE Transactions on Software Engineering, 37(2):264–282, 2011.
  • [21] M. Reyes-Sierra and C. C. Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. International journal of computational intelligence research, 2(3):287–308, 2006.
  • [22] C. von Lücken, B. Barán, and C. Brizuela. A survey on multi-objective evolutionary algorithms for many-objective problems. Computational Optimization and Applications, 58(3):707–756, 2014.
  • [23] Y. Yuan and H. Xu. Multiobjective flexible job shop scheduling using memetic algorithms. IEEE Transactions on Automation Science and Engineering, 12(1):336–353, 2015.
  • [24] Y. Yuan, H. Xu, B. Wang, and X. Yao. A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 20(1):16–37, 2016.
  • [25] Q. Zhang and H. Li. Moea/d: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on evolutionary computation, 11(6):712–731, 2007.