Non-elitist Evolutionary Multi-objective Optimizers Revisited

Abstract.

Since around 2000, it has been widely believed that elitist evolutionary multi-objective optimization algorithms (EMOAs) always outperform non-elitist EMOAs. This paper revisits the performance of non-elitist EMOAs for bi-objective continuous optimization when using an unbounded external archive. We examine the performance of EMOAs with two elitist and one non-elitist environmental selection. The performance of the EMOAs is evaluated on the bi-objective BBOB problem suite provided by the COCO platform. In contrast to conventional wisdom, the results show that non-elitist EMOAs with particular crossover methods perform remarkably well on the bi-objective BBOB problems with many decision variables when using the unbounded external archive. This paper also analyzes the properties of the non-elitist selection.

Evolutionary multi-objective optimization, continuous optimization, non-elitist environmental selections

1. Introduction

Since no solution can simultaneously minimize multiple conflicting objective functions in general, the ultimate goal of multi-objective optimization problems (MOPs) is to find a Pareto optimal solution preferred by a decision maker (Miettinen, 1998). When the decision maker’s preference information is unavailable a priori, “a posteriori” decision making is performed: the decision maker selects the final solution from a solution set that approximates the Pareto front in the objective space.

An evolutionary multi-objective optimization algorithm (EMOA) is frequently used to find an approximation of the Pareto front for “a posteriori” decision making (Deb, 2001). A number of EMOAs have been proposed in the literature. Classical EMOAs include VEGA (Schaffer, 1985), MOGA (Fonseca and Fleming, 1993), and NSGA (Srinivas and Deb, 1994), proposed in the 1980s and 1990s. They are non-elitist EMOAs, which do not have a mechanism to maintain non-dominated solutions in the population. Several elitist EMOAs (e.g., SPEA (Zitzler and Thiele, 1999), SPEA2 (Zitzler et al., 2001), and NSGA-II (Deb et al., 2002a)) were proposed in the late 1990s and early 2000s. Elitist EMOAs explicitly keep non-dominated solutions found during the search process.

Some EMOAs store non-dominated solutions found so far in an unbounded or bounded external archive independently from the population. For example, MOGLS (Ishibuchi and Murata, 1998), proposed in the mid-1990s, does not maintain elite solutions in the population but stores all non-dominated solutions found so far in the unbounded external archive. ε-MOEA (Deb et al., 2005) stores non-dominated solutions in the population and ε-nondominated solutions in the unbounded external archive. PESA (Corne et al., 2000) uses the non-elitist population and the elitist bounded external archive. The external archive in these EMOAs (e.g., MOGLS, ε-MOEA, and PESA) plays two roles. The first role is to provide non-dominated solutions found so far to the decision maker. The performance of these types of EMOAs is also evaluated based on solutions in the external archive, rather than the population. The second role is to perform an elitist search. For example, parents for mating are selected from the external archive in PESA. Some elitist individuals in the external archive can enter the population in MOGLS. Since these types of EMOAs explicitly exploit elitist solutions as explained above, they can be categorized into elitist EMOAs.

Apart from algorithm development, the external archive has been used only for the first role (e.g., (Fonseca and Fleming, 1993; López-Ibáñez et al., 2011; Bringmann et al., 2014; Brockhoff et al., 2015; Wessing et al., 2017)). As pointed out in (Bringmann et al., 2014), good potential solutions found so far are likely to be discarded from the population. The external archive that stores all non-dominated solutions independently from EMOAs can address this issue. The external archive for the first role can be incorporated into all EMOAs without any changes in their algorithmic behavior. The external archive is useful for real-world problems where the evaluation of each solution is expensive, i.e., the total number of examined solutions is limited, and the archive maintenance cost is relatively small in comparison with the solution evaluation cost. If the decision maker wants to examine a small number of non-dominated solutions, solution selection methods are available such as hypervolume indicator-based selection methods (e.g., (Bringmann et al., 2014)) and distance-based selection methods (e.g., (Singh et al., 2019)).

This paper revisits non-elitist EMOAs with the unbounded external archive used only for the first role (performance evaluation). When the performance of EMOAs is evaluated based on solutions in the external archive as in (López-Ibáñez et al., 2011; Bringmann et al., 2014; Brockhoff et al., 2015; Wessing et al., 2017), the role of EMOAs is only to find non-dominated solutions of high quality. Thus, EMOAs do not need to maintain non-dominated solutions found so far in the current population of size μ. We investigate three environmental selections: best-all (BA), best-family (BF), and best-children (BC). While BA and BF are elitist selections, BC is a non-elitist selection. Although BA is a traditional (μ+λ)-selection, BF and BC restrict the selection to parents and children only. Thus, unlike the traditional (μ+λ)- and (μ,λ)-selections, non-parents do not directly participate in the selection process in BF and BC. In BC, all parents are removed from the population regardless of their quality. Then, only the top-ranked children enter the population. Subsection 2.3 explains BA, BF, and BC in detail. We examine the performance of EMOAs with the three selections on the bi-objective BBOB problem suite (Tusar et al., 2016). We use five crossover methods and four ranking methods from representative EMOAs.

Our contributions in this paper are at least threefold:

  • We demonstrate that the non-elitist BC selection performs remarkably well on the bi-objective BBOB problems with many decision variables when using the unbounded external archive. Although most EMOAs proposed in the 2000s are elitist EMOAs, our results indicate that efficient non-elitist EMOAs could be designed. Thus, our results significantly expand the design possibilities of EMOAs.

  • We demonstrate that the restricted replacements in BF and BC are suitable for crossover methods with the preservation of statistics (Kita et al., 1998) (e.g., the property that the covariance matrix of the children equals that of their parents), such as SPX (Tsutsui et al., 1999) and REX (Akimoto et al., 2009).

  • We discuss why the simple BA selection performs worse than the restricted BF and BC selections. We also analyze the properties of the non-elitist BC selection.

The rest of this paper is organized as follows. Section 2 provides some preliminaries of this paper, including the definition of MOPs, the five crossover methods, and the three environmental selections. Section 3 describes the experimental setup. Section 4 examines the performance of the three environmental selections. Section 5 concludes this paper with discussions on future research directions.

Figure 1. Distribution of children generated by the five crossover methods: (a) SBX, (b) BLX, (c) PCX, (d) SPX, and (e) REX. Large red points are their parents.

2. Preliminaries

2.1. Definition of continuous MOPs

A continuous MOP is to find a solution x ∈ X that minimizes a given objective function vector f(x) = (f_1(x), ..., f_m(x)). Here, X ⊆ R^n is the n-dimensional solution space, and R^m is the m-dimensional objective space. n is the number of decision variables, and m is the number of objective functions.

A solution x1 is said to dominate x2 iff f_i(x1) ≤ f_i(x2) for all i ∈ {1, ..., m} and f_i(x1) < f_i(x2) for at least one index i. If x* is not dominated by any other solution in X, x* is a Pareto optimal solution. The set of all Pareto optimal solutions is the Pareto optimal solution set, and the set of their objective vectors f(x*) is the Pareto front. The goal of MOPs for “a posteriori” decision making is to find a non-dominated solution set that approximates the Pareto front in the objective space.
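To make the dominance relation concrete, the following short Python sketch (our own illustration; the function name dominates does not come from the original paper) checks whether one objective vector dominates another under minimization:

def dominates(f1, f2):
    # f1 dominates f2 (minimization) iff f1 is no worse in every
    # objective and strictly better in at least one objective.
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better

# Example with m = 2 objectives:
# dominates((1.0, 3.0), (2.0, 3.0))  -> True
# dominates((1.0, 3.0), (2.0, 1.0))  -> False (the two vectors are incomparable)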

2.2. Crossover methods in real-coded GAs

We use the following five crossover methods in real-coded GAs: simulated binary crossover (SBX) (Deb and Agrawal, 1995), blend crossover (BLX) (Eshelman and Schaffer, 1992), parent-centric crossover (PCX) (Deb et al., 2002b), simplex crossover (SPX) (Tsutsui et al., 1999), and real-coded ensemble crossover (REX) (Akimoto et al., 2009). Here, we briefly explain the five crossover methods.

Traditional GAs use two variation operators: crossover and mutation. In contrast, real-coded GAs with BLX, PCX, SPX, and REX do not need the mutation operator because they can generate diverse children by adjusting their control parameters (e.g., the expansion rate in SPX). However, the polynomial mutation (PM) (Deb and Agrawal, 1995) is applied to the two children generated by SBX in most studies; in other words, SBX and PM are usually treated as a pair. For this reason, we apply PM only to children generated by SBX. We refer to “SBX and PM” as “SBX” for simplicity.

Table 1 shows the properties of the five crossover methods. While SBX and BLX are traditional two-parent crossover methods, PCX, SPX, and REX are multi-parent crossover methods. PCX, SPX, and REX are rotationally invariant; the performance of EMOAs with rotationally invariant operators does not depend on the coordinate system. While PCX and REX use a Normal probability distribution, BLX and SPX use a uniform probability distribution. The probability distribution used in SBX is unclear. Although the center of the distribution of children is the mean vector of the parents in BLX, SPX, and REX, it is one of the parents in SBX and PCX. SPX and REX have a property called the “preservation of statistics” proposed in (Kita et al., 1998): a crossover method with this property generates children that inherit the statistics (e.g., the mean vector and the covariance matrix) of their parents.

       Cent.    Prob.   Rot.   Sta.   Parameters
SBX    parent   ?       no     no     η_c (and η_m for PM)
BLX    mean     U       no     no     α
PCX    parent   N       yes    no     μ_p, σ_ζ, σ_η
SPX    mean     U       yes    yes    μ_p, ε
REX    mean     N       yes    yes    μ_p, σ_ξ
Table 1. Properties of the five crossover methods: the center of the distribution of children (parent or mean), the type of probability distribution (U: uniform or N: Normal), rotational invariance (Rot.), preservation of statistics (Sta.), and the main control parameters.

Figure 1 shows the distribution of children generated by the five crossover methods. SBX simulates the working principle of the single-point crossover in binary-coded GAs. Since SBX is a variable-wise operator, most children are generated along the coordinate axes. The distribution of children is controlled by η_c in SBX (and η_m in PM). In BLX, the i-th element (i ∈ {1, ..., n}) of a child is uniformly randomly selected from the range [min(x1_i, x2_i) − α d_i, max(x1_i, x2_i) + α d_i]. Here, d_i = |x1_i − x2_i|, x1 and x2 are the two parents, and α is the expansion factor.
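As an illustration only, a minimal Python sketch of the BLX sampling rule described above might look as follows (the function name and the default α = 0.5 are our assumptions, not settings taken from the paper):

import numpy as np

def blx(x1, x2, alpha=0.5, rng=None):
    # Each element of the child is drawn uniformly from
    # [min(x1_i, x2_i) - alpha*d_i, max(x1_i, x2_i) + alpha*d_i],
    # where d_i = |x1_i - x2_i|.
    rng = rng or np.random.default_rng()
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    lo, hi = np.minimum(x1, x2), np.maximum(x1, x2)
    d = np.abs(x1 - x2)
    return rng.uniform(lo - alpha * d, hi + alpha * d)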

PCX is a parent-centric version of UNDX-m (Kita et al., 1999), which is a multi-parent extension of the unimodal normal distribution crossover (UNDX) (Ono and Kobayashi, 1997). While the center of the distribution of children is the mean vector of the parents in UNDX-m, it is one of the parents in PCX. PCX requires two parameters σ_ζ and σ_η that control the variances of two Normal distributions. SPX can be viewed as a rotationally invariant version of BLX. SPX uniformly generates children inside an expanded simplex formed by μ_p parents. The theoretical analysis presented in (Higuchi et al., 2000) shows that SPX with μ_p = n + 1 parents and the expansion factor ε = sqrt(n + 2) satisfies the preservation of statistics. REX is a generalized version of UNDX-m. REX using a zero-mean Normal distribution with an appropriately scaled variance satisfies the preservation of statistics (Akimoto, 2010).
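The following hedged Python sketch illustrates SPX sampling inside the expanded simplex, using the standard recursive formulation and the default ε = sqrt(n + 2) mentioned above (the function name and structure are ours, not the authors' jMetal implementation):

import numpy as np

def spx(parents, eps=None, rng=None):
    # parents: array of shape (n + 1, n), i.e., n + 1 parents in n dimensions.
    # eps defaults to sqrt(n + 2), the expansion factor for which SPX
    # satisfies the preservation of statistics.
    rng = rng or np.random.default_rng()
    parents = np.asarray(parents, float)
    mu_p, n = parents.shape
    eps = np.sqrt(n + 2) if eps is None else eps
    g = parents.mean(axis=0)              # centroid of the parents
    p = g + eps * (parents - g)           # vertices of the expanded simplex
    c = np.zeros(n)
    for k in range(1, mu_p):
        r = rng.random() ** (1.0 / (k + 1))
        c = r * (p[k - 1] - p[k] + c)
    return p[mu_p - 1] + c                # one child, uniform in the expanded simplex

# Usage: child = spx(np.random.default_rng(0).standard_normal((4, 3)))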

2.3. Environmental selections

We consider a “simple” EMOA shown in Algorithm 1. After the initialization of the population P of size μ (line 1), the following operations are repeatedly performed until a termination condition is satisfied. First, μ_p parents are randomly selected from P such that their indices are different from each other (line 3). Let Q be the set of the μ_p parents. Then, λ children are generated by applying a crossover method to the same parents λ times (line 4). (Since SBX generates two children in a single operation, SBX is performed λ/2 times.) To effectively exploit the neighborhood of the parents, the same parents are generally used to generate multiple children in GAs for single-objective optimization (Akimoto, 2010). Let R be the set of the λ children. At the end of each iteration, the environmental selection is performed using P, Q, and R (line 5).

Below, we explain the three environmental selections: best-all (BA), best-family (BF), and best-children (BC). Note that our main contribution in this paper is the analysis of BA, BF, and BC in Section 4, not the proposal of BA, BF, and BC. Algorithms 2, 3, and 4 show BA, BF, and BC, respectively. While BA and BF are elitist selections, BC is a non-elitist selection. The three selections require a method of ranking individuals based on their quality. Similar to MO-CMA-ES (Igel et al., 2007), BA, BF, and BC can be combined with any ranking method. In this paper, we use the four ranking methods of NSGA-II (Deb et al., 2002a), SMS-EMOA (Beume et al., 2007), SPEA2 (Zitzler et al., 2001), and IBEA with the additive ε-indicator (Zitzler and Künzli, 2004). We denote their ranking methods as “NS”, “SM”, “SP”, and “IB”, respectively. Individuals are ranked based on their non-domination levels in NS and SM. The tie-breakers are the crowding distance in NS and the hypervolume contribution in SM. In SP and IB, individuals are sorted based on their so-called fitness values in descending order. In this paper, X-Y-Z denotes the EMOA (Algorithm 1) with an environmental selection X, a crossover method Y, and a ranking method Z. For example, BA-SBX-NS is the EMOA with BA, SBX, and NS.

1 t ← 0, initialize the population P;
2 while the termination criteria are not met do
3        Randomly select μ_p parents from P and store them in Q;
4        Generate λ children by applying the crossover method to Q and store them in R;
5        P ← EnvironmentalSelection(P, Q, R);
6        t ← t + 1;
Algorithm 1 The simple EMOA
1 Assign ranks to all individuals in P ∪ R;
2 A ← P ∪ R and P ← ∅;
3 for i ∈ {1, ..., μ} do
4        Select the best ranked individual a from A;
5        P ← P ∪ {a} and A ← A \ {a};
Algorithm 2 BA (the elitist selection)
1 Assign ranks to all individuals in P ∪ R;
2 A ← Q ∪ R and P ← P \ Q;
3 for i ∈ {1, ..., μ_p} do
4        Select the best ranked individual a from A;
5        P ← P ∪ {a} and A ← A \ {a};
Algorithm 3 BF (the elitist restricted selection)
1 P ← P \ Q;
2 Assign ranks to all individuals in P ∪ R;
3 for i ∈ {1, ..., μ_p} do
4        Select the best ranked individual a from R;
5        P ← P ∪ {a} and R ← R \ {a};
Algorithm 4 BC (the non-elitist restricted selection)

In BA (Algorithm 2), the top-ranked μ individuals are selected from the union of P and R. BA is the traditional elitist (μ+λ)-selection used in most EMOAs (e.g., NSGA-II and SPEA2). It should be noted that BA-SBX-NS is not identical to NSGA-II. The differences between BA-SBX-NS and NSGA-II are the random parent selection and the generation of children: the same μ_p parents are used to generate all λ children in BA. For the same reason, BA-SBX-SP, BA-SBX-SM, and BA-SBX-IB are not identical to SPEA2, SMS-EMOA, and IBEA, respectively.

In BF (Algorithm 3), the environmental selection is performed only within the so-called “family”, which consists of the children in R and the parents in Q. After all individuals in the union of P and R have been ranked, only the parents in Q are removed from P. Then, the best μ_p individuals are selected from the union of Q and R. Although non-parents in P do not directly participate in the selection process, they contribute to assigning ranks to the individuals in the union of P and R. While the maximum number of individuals replaced by children is μ in BA, it is μ_p in BF. Since only parents can be replaced by children in BF, non-parents survive to the next iteration without any comparison. Selections among families as in BF are used in GAs for single-objective optimization (e.g., the deterministic crowding (Mahfoud, 1992)).

In BC (Algorithm 4), the environmental selection is performed only among the children in R. We assume that λ ≥ μ_p. After the parents in Q have been removed from P, all individuals in the union of P and R are ranked. Then, the best μ_p individuals are selected from R. Since all parents are deleted from P regardless of their quality, BC does not maintain non-dominated individuals in P. Thus, BC is a non-elitist selection, in contrast to the elitist BA and BF selections. While all individuals in the population are replaced with children in most classical (μ,λ)-EMOAs (e.g., MOGA), only the μ_p parents in Q are replaced with the best μ_p out of λ children in BC. Thus, BC is different from the traditional (μ,λ)-selection.

BC can be viewed as an extension of just generation gap (JGG) (Akimoto, 2010) to multi-objective optimization. JGG is an environmental selection in GAs for single-objective continuous optimization. The only difference between BC and JGG is how ranks are assigned to individuals: individuals are ranked based on their objective values in JGG and on their objective vectors in BC. The results presented in (Akimoto, 2010) show that non-elitist GAs with JGG significantly outperform elitist GAs on single-objective test problems (especially multimodal problems) when using crossover methods with the preservation of statistics.
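To summarize the three selections, the following self-contained Python sketch mirrors Algorithms 2–4 over lists of objective vectors. The ranking function below (a plain domination-count rank) is only a toy stand-in for the NS/SM/SP/IB ranking methods, and all names are ours for illustration:

def dominates(f1, f2):
    # Minimization: no worse everywhere and strictly better somewhere.
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def toy_ranks(pool):
    # Toy stand-in for a ranking method: rank = number of dominating individuals.
    return [sum(dominates(y, x) for y in pool) for x in pool]

def ba(P, R, mu, ranks=toy_ranks):
    # Best-all: keep the mu best of P u R (elitist (mu+lambda)-selection).
    pool = list(P) + list(R)
    r = ranks(pool)
    order = sorted(range(len(pool)), key=r.__getitem__)
    return [pool[i] for i in order[:mu]]

def bf(P, parent_idx, R, ranks=toy_ranks):
    # Best-family: rank P u R, then replace only the parents (indices into P)
    # with the best mu_p individuals of parents u children.
    pool = list(P) + list(R)
    r = ranks(pool)
    pset = set(parent_idx)
    family = sorted(list(parent_idx) + list(range(len(P), len(pool))),
                    key=r.__getitem__)
    survivors = [P[i] for i in range(len(P)) if i not in pset]
    return survivors + [pool[i] for i in family[:len(parent_idx)]]

def bc(P, parent_idx, R, ranks=toy_ranks):
    # Best-children: delete the parents unconditionally, rank the remaining
    # population together with the children, and add the best mu_p children.
    pset = set(parent_idx)
    survivors = [P[i] for i in range(len(P)) if i not in pset]
    pool = survivors + list(R)
    r = ranks(pool)
    child_idx = sorted(range(len(survivors), len(pool)), key=r.__getitem__)
    return survivors + [pool[i] for i in child_idx[:len(parent_idx)]]

# Example: P holds objective vectors; parents P[1] and P[3] are always replaced in BC.
# P = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0)]
# R = [(1.5, 1.5), (3.0, 3.5)]
# bc(P, parent_idx=[1, 3], R=R)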

3. Experimental settings

We conducted all experiments using the Comparing Continuous Optimizers (COCO) platform (Hansen et al., 2016). COCO is the standard platform used in the black-box optimization benchmarking (BBOB) workshops held at GECCO (2009–present). We used the latest COCO software (version 2.2.2) downloaded from https://github.com/numbbo/coco. COCO provides six types of BBOB problem suites, including the single-objective BBOB noiseless problem suite (Hansen et al., 2009). The bi-objective BBOB problem suite (Tusar et al., 2016) consists of 55 bi-objective test problems designed based on the idea presented in (Brockhoff et al., 2015). Each bi-objective BBOB problem is constructed by combining two single-objective BBOB problems. For example, one problem uses the Sphere function and the rotated Rastrigin function as its first and second objective functions, respectively. For details of the 55 bi-objective test problems, see (Tusar et al., 2016). For each problem, multiple independent runs were performed with a fixed budget of function evaluations, adhering to the analysis procedure adopted by the GECCO BBOB community.

COCO also provides a post-processing tool that aggregates experimental data. COCO automatically stores all non-dominated solutions found by an optimizer in the unbounded external archive. The performance indicator used in COCO (Brockhoff et al., 2016) is mainly based on the hypervolume value of the non-dominated solutions in the unbounded external archive. When no solution in the external archive dominates a predefined reference point in the normalized objective space, the indicator value is calculated based on the distance to the so-called region of interest. For details of the indicator, see (Brockhoff et al., 2016).
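COCO handles this archive internally; purely as an illustration of the bookkeeping involved, an unbounded external archive of non-dominated objective vectors can be maintained as follows (a sketch under our own naming, not COCO's implementation):

def dominates(f1, f2):
    # Minimization: no worse everywhere and strictly better somewhere.
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def update_archive(archive, f_new):
    # Keep only mutually non-dominated objective vectors.
    if any(dominates(a, f_new) or tuple(a) == tuple(f_new) for a in archive):
        return archive                                  # nothing new to store
    return [a for a in archive if not dominates(f_new, a)] + [f_new]

# Usage: the archive grows without bound and is only read for performance evaluation.
# archive = []
# for f in [(2.0, 2.0), (1.0, 3.0), (3.0, 1.0), (2.5, 2.5)]:
#     archive = update_archive(archive, f)   # -> [(2.0, 2.0), (1.0, 3.0), (3.0, 1.0)]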

We implemented all algorithms using jMetal (Durillo and Nebro, 2011). The source code of all algorithms is available at https://sites.google.com/view/nemorgecco2019/. For the crossover methods other than PCX, we used the control parameters recommended in the literature, as shown in Table 1. Since PCX with its recommended number of parents performed poorly in our preliminary study, we set μ_p in PCX to the same value as in SPX and REX. For comparison, we evaluated the performance of the original NSGA-II, SPEA2, SMS-EMOA, and IBEA. SBX and PM with their commonly used control parameter values were used in the original EMOAs. The population size μ was set as in (Tusar and Filipic, 2016). The number of children λ was set based on our preliminary results and on studies of GAs for single-objective optimization (e.g., (Akimoto et al., 2009; Akimoto, 2010)).

Figure 2. Results of the original NSGA-II, BA-SPX-NS, BF-SPX-NS, and BC-SPX-NS on all 55 bi-objective BBOB test problems (higher is better); panels (a)–(c) correspond to different numbers of decision variables. For the notation X-Y-Z, see Subsection 2.3.

4. Results

This section presents an analysis of the three environmental selections (BA, BF, and BC). Since SPX is suitable for BF and BC, we mainly discuss the results of EMOAs with SPX. Although the results of EMOAs with REX are similar to those with SPX, we do not show them here due to space constraints. As shown in Subsection 4.4, SBX, BLX, and PCX are not suitable for BA, BF, and BC.

Subsection 4.1 shows a comparison among BA-SPX-NS, BF-SPX-NS, BC-SPX-NS, and the original NSGA-II. Subsection 4.2 investigates why BA performs poorly. Subsection 4.3 analyzes the advantages and disadvantages of the non-elitist BC compared with the elitist BF. Subsection 4.4 examines the performance of BA, BF, and BC with other crossover methods (SBX, BLX, PCX, and REX). Subsection 4.5 presents a comparison of BA, BF, and BC with other ranking methods (SP, SM, and IB).

4.1. Comparison of BA, BF, and BC

Figure 2 shows the results of the original NSGA-II, BA-SPX-NS, BF-SPX-NS, and BC-SPX-NS on all 55 BBOB problems. Due to space constraints, results for the remaining numbers of decision variables are not shown, but they are similar to those in Figure 2. In this section, we use the SPX crossover and the NS ranking method. In Figure 2, “best 2016” is a virtual algorithm portfolio constructed from the performance data of 15 algorithms participating in the GECCO BBOB 2016 workshop. Note that “best 2016” does not mean the best optimizer among the 15 algorithms.

Figure 2 shows the bootstrapped empirical cumulative distribution function (ECDF) of the number of function evaluations (FEvals) divided by the number of decision variables n (FEvals/n) for 58 target indicator values on all 55 BBOB problems, for each number of decision variables. We used the COCO software to generate all ECDF figures in this paper. In Figure 2, the vertical axis indicates the proportion of target indicator values that a given optimizer reaches within the specified number of function evaluations. For example, in Figure 2 (b), BF-SPX-NS reaches about 40 percent of all 58 target indicator values within the given budget on all 55 problems over all runs. If an optimizer finds all Pareto optimal solutions of all 55 problems in all runs, the vertical value becomes 1. More detailed explanations of the ECDF (including illustrative examples) can be found in (Brockhoff et al., 2015, 2016).
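As a rough mental model only (COCO's actual post-processing additionally bootstraps simulated restarts), the proportion plotted at a given budget can be thought of as the fraction of (problem, target) pairs whose target was reached within that budget:

def ecdf_proportion(runtimes, budget):
    # runtimes: for every (problem, target indicator value) pair, the number of
    # function evaluations needed to reach that target, or None if never reached.
    reached = sum(1 for t in runtimes if t is not None and t <= budget)
    return reached / len(runtimes)

# Example: ecdf_proportion([120, 4500, None, 800], budget=1000) -> 0.5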

Statistical significance was also tested with the rank-sum test for given target values using the COCO software. However, the statistical test results are almost consistent with the ECDF figures, and the space of this paper is limited. For these reasons, we show only ECDF figures. The statistical test results and further ECDF figures are available at https://sites.google.com/view/nemorgecco2019/.

Figure 2 shows that BA-SPX-NS performs the best in the early stage for the smaller numbers of decision variables. However, its performance deteriorates as n increases, and the evolution of BA-SPX-NS clearly stagnates for larger n. The original NSGA-II is the best performer in the early stage for larger n. BF-SPX-NS and BC-SPX-NS perform better than NSGA-II and BA-SPX-NS in the later stage for all n. Interestingly, the non-elitist BC-SPX-NS performs the best in the later stage for larger n. Although it has been believed for about two decades that elitist EMOAs always outperform non-elitist EMOAs, our results show that the non-elitist BC-SPX-NS performs better than the elitist NSGA-II, BA-SPX-NS, and BF-SPX-NS on the bi-objective BBOB problems with many decision variables when using the unbounded external archive.

Note that BC-SPX-NS is not always the best optimizer on each of the 55 BBOB problems. Figure 3 shows results on two representative problems: BF-SPX-NS outperforms BC-SPX-NS on one of them, while BC-SPX-NS outperforms BF-SPX-NS on the other. As in Figure 3, the best optimizer differs depending on the test problem. We attempted to clarify on which problem groups BC performs the best (e.g., whether BC has the best performance on multimodal problems with weak global structure). Unfortunately, we could not find such a clear pattern. An in-depth analysis is needed to understand on which problems BC performs well or poorly.

Figure 3. Results of NSGA-II, BA-SPX-NS, BF-SPX-NS, and BC-SPX-NS on two representative problems: (a) a problem where BF outperforms BC, and (b) a problem where BC outperforms BF.

4.2. Why does BA perform poorly?

Here, we discuss the poor performance of BA-SPX-NS observed in Subsection 4.1. The biased distribution of children is likely the cause. As shown in Figure 1 (d), SPX generates children inside a simplex formed by the parents. If the parents are close to each other in the solution space, their children are likely to be concentrated in a local area. If non-parents in the population are ranked worse than these children, the non-parents are replaced with the children in BA. This means that non-parents in not-well-explored areas cannot survive to the next iteration. Thus, BA-SPX-NS is likely to lose diversity in the solution and objective spaces as the search progresses.

One may think that the above-mentioned issue caused by the biased distribution of children can be addressed by setting λ to a small value. Figure 4 shows the results of BA-SPX-NS with various λ values on all 55 BBOB problems. In Figure 4, the configuration with the original λ value is identical to BA-SPX-NS in Figure 2. Figure 4 also shows the results of NSGA-II, BF-SPX-NS, and BC-SPX-NS taken from Figure 2. Figure 4 shows that the performance of BA-SPX-NS can be improved by setting λ to a small value. However, BA-SPX-NS with any λ value is outperformed by NSGA-II, BF-SPX-NS, and BC-SPX-NS in the later stage.

In general, a sufficiently large number of children is necessary to find better solutions in the current search area (Akimoto, 2010). Thus, BA faces a dilemma. A large λ value helps BA exploit the current search area, but it causes premature convergence. A small λ value can prevent premature convergence, but it is not large enough to exploit the current search area. In addition to SPX, we observed the same issue with the other crossover methods (except for SBX).

In contrast to BA, only parents can be replaced with children in BF and BC. This restricted replacement helps the population maintain diversity. Even if non-parents in not-well-explored areas are dominated by the children, the non-parents survive to the next iteration without any comparison. Thus, BF and BC can resolve BA’s dilemma. In fact, BF-SPX-NS and BC-SPX-NS perform significantly better than BA-SPX-NS.

Figure 4. Results of BA-SPX-NS with various λ values.

4.3. Advantages and disadvantages of BC

As shown in Subsection 4.1, the non-elitist BC performs better than the elitist BF on problems with many decision variables. Here, we discuss the advantages and disadvantages of BC compared with BF.

Figure 5 (a) shows the raw indicator values of the population in BF-SPX-NS and BC-SPX-NS on a problem consisting of two rotated Rastrigin function instances. Among the 55 BBOB test problems, it can be viewed as a representative multimodal problem. We slightly modified the COCO software to calculate the indicator value of the population (not the external archive). A lower raw indicator value is better. The range of the vertical axis in Figure 5 (a) is limited in order to focus on the interesting behavior of BC-SPX-NS. Although the indicator value of the elitist BF-SPX-NS decreases almost monotonically as the search progresses (the monotonic improvement of the hypervolume value over time is guaranteed only when using the unbounded external archive (López-Ibáñez et al., 2011)), that of the non-elitist BC-SPX-NS is unstable. Since BC does not maintain the best-so-far non-dominated solutions in the population, its indicator value sometimes deteriorates compared with the previous iteration.

Figure 5 (b) shows the cumulative number of parents replaced by children. In BF-SPX-NS, this number clearly stagnates after a certain number of function evaluations. This result means that BF-SPX-NS rarely generates children that are better than their parents. In fact, the raw indicator value of BF-SPX-NS is not significantly improved afterwards, as shown in Figure 5 (a). Since BC-SPX-NS always replaces the parents with the best μ_p out of λ children in every iteration, the cumulative number of replacements increases linearly. Thus, the replacement of individuals in BC occurs more frequently than in BF. This property of BC is helpful for exploration of the search space.

The above observations indicate that BC has an advantage similar to simulated annealing (Kirkpatrick et al., 1983), which can move to a worse search point. As pointed out by Deb and Goel (Deb and Goel, 2001), if an elitist EMOA prematurely converges to local Pareto optimal solutions, it is very likely to stagnate. Unless the elitist EMOA finds better solutions far from the current search area, it cannot escape from the local Pareto optimal solutions. In contrast, the non-elitist BC always replaces parents with children regardless of the quality of the parents. While most elitist environmental selections accept only “downhill” moves on minimization problems, the non-elitist BC can accept “uphill” moves as in simulated annealing. The uphill moves in BC help the population escape from local Pareto optimal solutions on some multimodal problems.

Figure 5. (a) Raw indicator values of the population on the problem with two rotated Rastrigin function instances (lower is better). (b) Cumulative number of parents replaced by children. Results of a single run are shown.

However, BC has at least two disadvantages compared with the elitist BF. First, as discussed in Subsection 4.1, BC performs worse than BF on some problems even with many decision variables. Second, as reported in Subsection 4.1, BC performs worse than BF in the early stage. Since BC can accept “uphill” moves as in simulated annealing, the exploitative ability of BC is worse than that of BF. A deterministic or adaptive method of switching between BC and BF may be a promising way to exploit their complementary advantages.

(a) SBX
(b) BLX
(c) PCX
(d) REX
Figure 6. Results of BA, BF, and BC with (a) SBX, (b) BLX, (c) PCX, and (d) REX on all 55 BBOB problems. Results of the original NSGA-II are also shown.
(a) SPEA2
(b) SMS-EMOA
(c) IBEA
Figure 7. Results of BA, BF, and BC with the ranking methods in (a) SPEA2, (b) SMS-EMOA, and (c) IBEA on all 55 BBOB problems. Results of the original SPEA2, SMS-EMOA, and IBEA are also shown.

4.4. Which crossover methods are suitable for the non-elitist BC?

The results in Subsection 4.1 show that BC-SPX-NS outperforms BA-SPX-NS, BF-SPX-NS, and NSGA-II on problems with many decision variables. Here, we examine which crossover methods are suitable for BC. We are not interested in which crossover method is the best overall. Even if BC-SPX-NS outperforms BC-PCX-NS, it does not mean that SPX performs better than PCX in general; it only means that SPX is more suitable for BC than PCX.

Figure 6 shows the results of the three selections with SBX, BLX, PCX, and REX on all 55 BBOB problems. Due to space constraints, results for only one number of decision variables are shown here. The NS ranking method is used in BA, BF, and BC. Figure 6 (a) shows that BA-SBX-NS outperforms BF-SBX-NS and BC-SBX-NS. This good performance of BA-SBX-NS is inconsistent with the results in Subsection 4.1. Since SBX can generate children far from their parents, as shown in Figure 1, the biased distribution of children discussed in Subsection 4.2 does not significantly influence the performance of BA. However, BA-SBX-NS performs worse than NSGA-II. Figures 6 (b) and (c) show similar results: the evolution of the three selections with BLX and PCX clearly stagnates. Figure 6 (d) shows that the results with REX are consistent with those with SPX. BC-REX-NS is the best optimizer in the later stage, and BF-REX-NS also performs better than NSGA-II.

In summary, SPX and REX are suitable for BC and BF, while SBX, BLX, and PCX are not. These results indicate that crossover methods with the preservation of statistics are suitable for BC (and BF). As shown in Table 1, only SPX and REX satisfy the preservation of statistics among the five crossover methods. The results presented in (Akimoto, 2010) show that SPX and UNDX-m (a special version of REX) are suitable for JGG (a selection similar to BC) in GAs for single-objective continuous optimization. Interestingly, our results on continuous MOPs are consistent with those on single-objective continuous optimization problems. A similarity analysis between single-objective optimizers and multi-objective optimizers as in (Wessing et al., 2017) may be an interesting direction.

4.5. Comparison of BA, BF, and BC with other ranking methods

We used the NS ranking method in Subsection 4.1. We investigate whether similar results can be obtained when using the SP, SM, and IB ranking methods (see Subsection 2.3).

Figure 7 shows the comparison of BA, BF, and BC with SP, SM, and IB. We do not show results for the other numbers of decision variables, but they are similar to the results in Subsection 4.1. SPX is used as the crossover method. Figures 7 (a), (b), and (c) also show the results of the original SPEA2, SMS-EMOA, and IBEA, respectively.

Figure 7 shows that the results with SP, SM, and IB are consistent with those with NS. BF and BC outperform the original SPEA2, SMS-EMOA, and IBEA in the later stage, and BC is the best optimizer in the later stage. The poor performance of BA can also be observed in Figure 7. Our results show that the relative performance of BA, BF, and BC does not significantly depend on the choice of a ranking method.

5. Conclusion

We examined the effectiveness of the two elitist selections (BA and BF) and the non-elitist selection (BC) on the bi-objective BBOB problem suite, using five crossover methods and four ranking methods. For about two decades, it has been considered that elitist EMOAs always outperform non-elitist EMOAs. Interestingly, our results show that the non-elitist BC performs better than the two elitist selections and the four original EMOAs (NSGA-II, SPEA2, SMS-EMOA, and IBEA) on the bi-objective BBOB problems with many decision variables when using the unbounded external archive and a crossover method with the preservation of statistics (i.e., SPX or REX). The choice of a ranking method does not significantly influence the relative performance of BC. We also analyzed the advantages and disadvantages of the non-elitist BC selection.

A number of interesting directions for future work remain. Although mainly elitist EMOAs have been studied since the 2000s, our results indicate that efficient non-elitist EMOAs can be realized. Designing non-elitist versions of MO-ES (Wessing et al., 2017) and MO-CMA-ES (Igel et al., 2007) based on BC may be promising.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 61876075), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), Shenzhen Peacock Plan (Grant No. KQTD2016112514355531), the Science and Technology Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), and the Program for University Key Laboratory of Guangdong Province (Grant No. 2017KSYS008).


References

  1. Adaptation of expansion rate for real-coded crossovers. In GECCO, pp. 739–746.
  2. Design of Evolutionary Computation for Continuous Optimization. Ph.D. Thesis, Tokyo Institute of Technology.
  3. SMS-EMOA: multiobjective selection based on dominated hypervolume. EJOR 181 (3), pp. 1653–1669.
  4. Generic Postprocessing via Subset Selection for Hypervolume and Epsilon-Indicator. In PPSN, pp. 518–527.
  5. Benchmarking Numerical Multiobjective Optimizers Revisited. In GECCO, pp. 639–646.
  6. Biobjective Performance Assessment with the COCO Platform. CoRR abs/1605.01746.
  7. The Pareto Envelope-Based Selection Algorithm for Multi-objective Optimisation. In PPSN, pp. 839–848.
  8. Simulated Binary Crossover for Continuous Search Space. Complex Systems 9 (2).
  9. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE TEVC 6 (2), pp. 182–197.
  10. A Computationally Efficient Evolutionary Algorithm for Real-Parameter Optimization. Evol. Comput. 10 (4), pp. 345–369.
  11. Controlled Elitist Non-dominated Sorting Genetic Algorithms for Better Convergence. In EMO, pp. 67–81.
  12. Evaluating the epsilon-Domination Based Multi-Objective Evolutionary Algorithm for a Quick Computation of Pareto-Optimal Solutions. Evol. Comput. 13 (4), pp. 501–525.
  13. Multi-objective optimization using evolutionary algorithms. John Wiley & Sons.
  14. jMetal: A Java framework for multi-objective optimization. Adv. Eng. Softw. 42 (10), pp. 760–771.
  15. Real-Coded Genetic Algorithms and Interval-Schemata. In FOGA, pp. 187–202.
  16. Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization. In ICGA, pp. 416–423.
  17. COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting. CoRR abs/1603.08785.
  18. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. Technical Report RR-6829, INRIA.
  19. Theoretical Analysis of Simplex Crossover for Real-Coded Genetic Algorithms. In PPSN, pp. 365–374.
  20. Covariance Matrix Adaptation for Multi-objective Optimization. Evol. Comput. 15 (1), pp. 1–28.
  21. A multi-objective genetic local search algorithm and its application to flowshop scheduling. IEEE Trans. SMC, Part C 28 (3), pp. 392–403.
  22. Optimization by simulated annealing. Science 220 (4598), pp. 671–680.
  23. Theoretical Analysis of the Unimodal Normal Distribution Crossover for Real-coded Genetic Algorithms. In IEEE CEC, pp. 529–534.
  24. Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In IEEE CEC, pp. 1581–1587.
  25. On Sequential Online Archiving of Objective Vectors. In EMO, pp. 46–60.
  26. Crowding and Preselection Revisited. In PPSN, pp. 27–36.
  27. Nonlinear multiobjective optimization. Springer.
  28. A Real Coded Genetic Algorithm for Function Optimization Using Unimodal Normal Distributed Crossover. In GECCO, pp. 246–253.
  29. Multiple objective optimization with vector evaluated genetic algorithms. In ICGA, pp. 93–100.
  30. Distance based subset selection for benchmarking in evolutionary multi/many-objective optimization. IEEE TEVC.
  31. Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evol. Comput. 2 (3), pp. 221–248.
  32. Multi-parent Recombination with Simplex Crossover in Real Coded Genetic Algorithms. In GECCO, pp. 657–664.
  33. COCO: The Bi-objective Black Box Optimization Benchmarking (bbob-biobj) Test Suite. CoRR abs/1604.00359.
  34. Performance of the DEMO Algorithm on the Bi-objective BBOB Test Suite. In GECCO, pp. 1249–1256.
  35. Toward Step-Size Adaptation in Evolutionary Multiobjective Optimization. In EMO, pp. 670–684.
  36. Indicator-based selection in multiobjective search. In PPSN, pp. 832–842.
  37. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Technical report, ETH Zurich.
  38. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE TEVC 3 (4), pp. 257–271.