A Review of Evolutionary Multi-modal Multi-objective Optimization



Multi-modal multi-objective optimization aims to find all Pareto optimal solutions, including ones that overlap in the objective space. It has been investigated in the evolutionary computation community since 2005. However, it is difficult to survey existing studies in this field because they have been conducted independently and do not explicitly use the term “multi-modal multi-objective optimization”. To address this issue, this paper reviews existing studies of evolutionary multi-modal multi-objective optimization, including studies published under names different from “multi-modal multi-objective optimization”. Our review also clarifies open issues in this research area.

Multi-modal multi-objective optimization, evolutionary algorithms, test problems, performance indicators


I Introduction

A multi-objective evolutionary algorithm (MOEA) is an efficient optimizer for a multi-objective optimization problem (MOP) [1]. MOEAs aim to find a non-dominated solution set that approximates the Pareto front in the objective space. The set of non-dominated solutions found by an MOEA is usually used in an “a posteriori” decision-making process [2]. A decision maker selects a final solution from the solution set according to her/his preference.

Fig. 1: Illustration of a situation where the four solutions are identical or close to each other in the objective space but are far from each other in the solution space (a minimization problem).

Since the quality of a solution set is usually evaluated in the objective space, the distribution of solutions in the solution space has not received much attention in the evolutionary multi-objective optimization (EMO) community. However, the decision maker may want to compare the final solution to other dissimilar solutions that have an equivalent quality or a slightly inferior quality [3, 4]. Fig. 1 shows a simple example. In Fig. 1, the four solutions $a$, $b$, $c$, and $d$ are far from each other in the solution space but close to each other in the objective space. $a$ and $b$ have the same objective vector, $b$ and $c$ are similar in the objective space, and $d$ is dominated by these solutions. This kind of situation can be found in a number of real-world problems, including functional brain imaging problems [3], diesel engine design problems [5], distillation plant layout problems [6], rocket engine design problems [7], and game map generation problems [8].

If multiple diverse solutions with similar objective vectors, like $a$, $b$, $c$, and $d$ in Fig. 1, are obtained, the decision maker can select the final solution according to her/his preference in the solution space. For example, if $a$ in Fig. 1 becomes unavailable for some reason (e.g., material shortages, mechanical failures, traffic accidents, or law revisions), the decision maker can select a substitute from $b$, $c$, and $d$.

A practical example is given in [4], which deals with two-objective space mission design problems. In [4], Schütze et al. considered two dissimilar solutions $x^A$ and $x^B$ for a minimization problem. Although $x^A$ dominates $x^B$, the difference between their objective vectors is small. The first design variable is the departure time from the Earth (in days), and the departure times of $x^A$ and $x^B$ differ substantially. If the decision maker accepts $x^B$, whose quality is only slightly inferior, in addition to $x^A$, two launch plans with very different departure times can be considered. If $x^A$ is not realizable for some reason, $x^B$ can be the final solution instead of $x^A$. As explained here, multiple solutions with almost equivalent quality support a reliable decision-making process. If these solutions have a large diversity in the solution space, they can also provide insightful information for engineering design [3, 5].

A multi-modal multi-objective optimization problem (MMOP) involves finding all solutions that are equivalent to Pareto optimal solutions [3, 9, 10]. Below, we explain the difference between MOPs and MMOPs using the two-objective and two-variable Two-On-One problem [11]. Figs. 2 (a) and (b) show the Pareto front $F$ and the Pareto optimal solution set $X^*$ of Two-On-One, respectively. Two-On-One has two equivalent Pareto optimal solution subsets $X^*_1$ and $X^*_2$ that are symmetrical with respect to the origin, where $X^* = X^*_1 \cup X^*_2$. Figs. 2 (c) and (d) show $X^*_1$ and $X^*_2$, respectively. In Two-On-One, the three solution sets $X^*$, $X^*_1$, and $X^*_2$ (Figs. 2 (b), (c), and (d)) are mapped to the same Pareto front $F$ (Fig. 2 (a)) by the objective functions. On the one hand, the goal of MOPs is generally to find a solution set that approximates the Pareto front in the objective space. Since $X^*_1$ and $X^*_2$ are mapped to the same $F$ in the objective space, it is sufficient for MOPs to find either $X^*_1$ or $X^*_2$. On the other hand, the goal of MMOPs is to find the entire equivalent Pareto optimal solution set $X^*$ in the solution space. In contrast to MOPs, it is necessary to find both $X^*_1$ and $X^*_2$ in MMOPs. Since most MOEAs (e.g., NSGA-II [12] and SPEA2 [13]) do not have mechanisms to maintain the solution space diversity, they cannot be expected to work well on MMOPs. Thus, multi-modal multi-objective evolutionary algorithms (MMEAs) that handle the solution space diversity are necessary for MMOPs.

This paper presents a review of evolutionary multi-modal multi-objective optimization. This topic is not new and has been studied for more than ten years. Early studies include [14, 5, 15, 3, 11, 16]. Unfortunately, most existing studies were independently conducted and did not use the term “MMOPs” (i.e., they are not tagged). For this reason, it is difficult to survey existing studies of MMOPs despite their significant contributions. In this paper, we review related studies of MMOPs including those published under names that were different from “multi-modal multi-objective optimization”. We also clarify open issues in this field. Multi-modal single-objective optimization problems (MSOPs) have been well studied in the evolutionary computation community [10]. Thus, useful clues to address some issues in studies of MMOPs may be found in studies of MSOPs. We discuss what can be learned from the existing studies of MSOPs.

Fig. 2: (a) The Pareto front $F$ and (b) the Pareto optimal solution set $X^*$ of Two-On-One [11]. Figs. (c) and (d) show the two Pareto optimal solution subsets $X^*_1$ and $X^*_2$, respectively.

This paper is organized as follows. Section II gives definitions of MMOPs. Section III describes MMEAs. Section IV presents test problems for multi-modal multi-objective optimization. Section V explains performance indicators for benchmarking MMEAs. Section VI concludes this paper.

II Definitions of MMOPs

Definition of MOPs

A continuous MOP involves finding a solution $x \in S \subseteq \mathbb{R}^D$ that minimizes a given objective function vector $f(x) = (f_1(x), \dots, f_M(x))^{\mathrm{T}}$. Here, $S$ is the $D$-dimensional solution space, and $\mathbb{R}^M$ is the $M$-dimensional objective space.

A solution $x^1$ is said to dominate $x^2$ iff $f_i(x^1) \leq f_i(x^2)$ for all $i \in \{1, \dots, M\}$ and $f_i(x^1) < f_i(x^2)$ for at least one index $i$. If $x^*$ is not dominated by any other solution, it is called a Pareto optimal solution. The set of all $x^*$ is the Pareto optimal solution set $X^*$, and the set of all $f(x^*)$ is the Pareto front $F$. The goal of MOPs is generally to find a non-dominated solution set that approximates the Pareto front in the objective space.
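The dominance relation above translates directly into code; a minimal sketch (the function names are ours) that checks dominance between two objective vectors and filters a set down to its non-dominated members:

```python
from typing import Sequence

def dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    """Return True iff objective vector f1 Pareto-dominates f2 (minimization)."""
    return all(a <= b for a, b in zip(f1, f2)) and \
           any(a < b for a, b in zip(f1, f2))

def nondominated(vectors: list) -> list:
    """Keep only the objective vectors not dominated by any other vector."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]
```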

Definitions of MMOPs

The term “MMOP” was first coined in [14, 3] in 2005. However, “MMOP” was not used in most studies from 2007 to 2012, and terms that represent MMOPs were not explicitly defined in those studies. For example, MMOPs were referred to as problems of obtaining a diverse solution set in the solution space in [17]. The term “multi-modal multi-objective optimization” appears to have come back into use around 2016. Apart from these instances, MMOPs were denoted as “multi-objective multi-global optimization” in [18] and as “multi-modal multi-objective wicked problems” in [19].

Although MMOPs have been addressed for more than ten years, the definition of an MMOP is still controversial. In this paper, we define an MMOP using a relaxed equivalency introduced by Rudolph and Preuss [17] as follows:

Definition 1.

An MMOP involves finding all solutions that are equivalent to Pareto optimal solutions.

Definition 2.

Two different solutions $x^1$ and $x^2$ are said to be equivalent iff $\|f(x^1) - f(x^2)\| \leq \delta$,

where $\|\cdot\|$ is an arbitrary norm, and $\delta$ is a non-negative threshold value given by the decision maker. If $\delta = 0$, the MMOP should find all equivalent Pareto optimal solutions. If $\delta > 0$, the MMOP should find all equivalent Pareto optimal solutions and dominated solutions with acceptable quality. The main advantage of our definition of an MMOP is that the decision maker can adjust the goal of the MMOP by changing the $\delta$ value. Most existing studies (e.g., [9, 20, 21]) assume MMOPs with $\delta = 0$. MMOPs with $\delta > 0$ were discussed in [3, 4, 22, 19]. For example, $a$, $b$, and $c$ in Fig. 1 should be found for MMOPs with $\delta = 0$. In addition, the non-Pareto optimal solution $d$ should be found for MMOPs with $\delta > 0$ if the difference between $f(d)$ and the other objective vectors is within $\delta$.

Although there is room for discussion, MMOPs with $\delta > 0$ may be more practical in real-world applications. This is because the solution set of an MMOP with $\delta > 0$ can provide more options for the decision maker than that of an MMOP with $\delta = 0$. While it is usually assumed in the EMO community that the final solution is selected from non-dominated solutions, the decision maker may also be interested in some dominated solutions in practice [3, 4]. Below, we use the term “MMOP” regardless of the $\delta$ value for simplicity.
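Definition 2 can likewise be written down directly; a minimal sketch using the Euclidean norm (the choice of norm and the function name are ours):

```python
import math

def is_equivalent(f1, f2, delta=0.0):
    """Two solutions are equivalent iff ||f(x1) - f(x2)|| <= delta.
    With delta > 0, dominated-but-acceptable solutions also qualify."""
    return math.dist(f1, f2) <= delta
```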


III Multi-modal multi-objective evolutionary algorithms

This section describes 12 dominance-based MMEAs, 3 decomposition-based MMEAs, 2 set-based MMEAs, and a post-processing approach. MMEAs need the following three abilities: (1) the ability to find solutions with high quality, (2) the ability to find diverse solutions in the objective space, and (3) the ability to find diverse solutions in the solution space. MOEAs need abilities (1) and (2) to find a solution set that approximates the Pareto front in the objective space. Multi-modal single-objective optimizers need abilities (1) and (3) to find a set of global optimal solutions. In contrast, MMEAs need all three abilities (1)–(3). Below, we mainly describe the mechanisms each type of MMEA uses to realize (1)–(3).

Pareto dominance-based MMEAs

The most representative MMEA is Omni-optimizer [14, 9], which is an NSGA-II-based generic optimizer applicable to various types of problems. The differences between Omni-optimizer and NSGA-II are fourfold: the Latin hypercube sampling-based population initialization, the so-called restricted mating selection, the $\epsilon$-dominance-based non-dominated sorting, and the alternative crowding distance. In the restricted mating selection, an individual $p$ is randomly selected from the population. Then, $p$ and its nearest neighbor in the solution space are compared based on their non-domination levels and crowding distance values. The winner of this comparison is selected as a parent.

The crowding distance measure in Omni-optimizer takes into account both the objective and solution spaces. For the $i$-th individual in each non-dominated front $R$, the crowding distance $d^{\mathrm{obj}}_i$ in the objective space is calculated in a similar manner to NSGA-II. In contrast, the crowding distance value of the $i$-th individual in the solution space is calculated in a different manner. First, a “variable-wise” crowding distance value $d_{i,j}$ of the $i$-th individual in the $j$-th decision variable is calculated as follows:

$$d_{i,j} = \begin{cases} 2\,\dfrac{x_{1,j}-x_{2,j}}{x_j^{\max}-x_j^{\min}} & \text{if } i = 1\\[4pt] \dfrac{x_{i-1,j}-x_{i+1,j}}{x_j^{\max}-x_j^{\min}} & \text{if } 1 < i < |R|\\[4pt] 2\,\dfrac{x_{|R|-1,j}-x_{|R|,j}}{x_j^{\max}-x_j^{\min}} & \text{if } i = |R| \end{cases} \qquad (1)$$
where we assume that all individuals in the front $R$ are sorted based on their $j$-th decision variable values in descending order. In (1), $x_j^{\max}$ and $x_j^{\min}$ denote the maximum and minimum values of the $j$-th decision variable in $R$, respectively. Unlike the crowding distance in the objective space, an infinitely large value is not given to a boundary individual; it instead receives twice the normalized distance to its single neighbor.

Then, an “individual-wise” crowding distance value is calculated as follows: $d^{\mathrm{sol}}_i = \frac{1}{D}\sum_{j=1}^{D} d_{i,j}$. The average value of all individual-wise crowding distance values is also calculated as follows: $d^{\mathrm{sol}}_{\mathrm{avg}} = \frac{1}{|R|}\sum_{i=1}^{|R|} d^{\mathrm{sol}}_i$. Finally, the crowding distance value $d_i$ of the $i$-th individual is obtained as follows:

$$d_i = \begin{cases} \max\{d^{\mathrm{obj}}_i,\, d^{\mathrm{sol}}_i\} & \text{if } d^{\mathrm{obj}}_i > d^{\mathrm{obj}}_{\mathrm{avg}} \text{ or } d^{\mathrm{sol}}_i > d^{\mathrm{sol}}_{\mathrm{avg}}\\[2pt] \min\{d^{\mathrm{obj}}_i,\, d^{\mathrm{sol}}_i\} & \text{otherwise} \end{cases} \qquad (2)$$
where $d^{\mathrm{obj}}_{\mathrm{avg}}$ is the average value of all crowding distance values in the objective space. As shown in (2), $d_i$ in Omni-optimizer is a combination of $d^{\mathrm{obj}}_i$ and $d^{\mathrm{sol}}_i$. Due to this alternative crowding distance, the results presented in [9] showed that Omni-optimizer finds more diverse solutions than NSGA-II.
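The solution-space part of this crowding measure can be sketched as follows. This is a simplified reimplementation, not the authors' code; in particular, we assume that a boundary individual receives twice the normalized distance to its single neighbor:

```python
def variable_wise_cd(front):
    """front: list of solutions (each a list of D variable values).
    Returns the solution-space crowding distance of each solution,
    i.e., the average of its variable-wise crowding distances.
    (The sort direction does not affect the distances.)"""
    n, D = len(front), len(front[0])
    cd = [0.0] * n
    for j in range(D):
        order = sorted(range(n), key=lambda i: front[i][j])
        span = (front[order[-1]][j] - front[order[0]][j]) or 1.0
        for pos, i in enumerate(order):
            if pos == 0:                       # boundary: doubled neighbor gap
                d = 2.0 * (front[order[1]][j] - front[i][j]) / span
            elif pos == n - 1:
                d = 2.0 * (front[i][j] - front[order[-2]][j]) / span
            else:                              # interior: neighbor-to-neighbor gap
                d = (front[order[pos + 1]][j] - front[order[pos - 1]][j]) / span
            cd[i] += d / D                     # individual-wise: average over variables
    return cd
```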

In addition to Omni-optimizer, two extensions of NSGA-II for MMOPs have been proposed. DNEA [23] is similar to Omni-optimizer but uses two sharing functions in the objective and solution spaces. DNEA requires fine-tuning of two sharing niche parameters for the objective and solution spaces. The secondary criterion of DN-NSGA-II [24] is based on the crowding distance only in the solution space. DN-NSGA-II uses a solution distance-based mating selection.

The following are other dominance-based MMEAs. An MMEA proposed in [25] utilizes DBSCAN [26] and rake selection [27]. DBSCAN, a clustering method, groups individuals based on their distribution in the solution space. Rake selection, a reference vector-based selection method similar to NSGA-III [28], is applied to the individuals belonging to each niche for the environmental selection. SPEA2+ [5, 15] uses two archives to maintain diverse non-dominated individuals in the objective and solution spaces, respectively. While the environmental selection in the objective-space archive is based on the density of individuals in the objective space as in SPEA2 [13], that in the solution-space archive is based on the density of individuals in the solution space. For the mating selection in SPEA2+, neighboring individuals in the objective space are selected from the objective-space archive.

$P_{Q,\epsilon}$-MOEA [4], 4D-Miner [3, 29], and MNCA [19] are capable of handling dominated solutions for MMOPs with $\delta > 0$. $P_{Q,\epsilon}$-MOEA uses the $\epsilon$-dominance relation [30] so that an unbounded archive can maintain individuals with acceptable quality according to the decision maker. Unlike other MMEAs, $P_{Q,\epsilon}$-MOEA does not have an explicit mechanism to maintain the solution space diversity. 4D-Miner was specially designed for functional brain imaging problems [3]. Its population is initialized by a problem-specific method, and it maintains dissimilar individuals in an external archive whose size is ten times larger than the population size. The environmental selection in 4D-Miner is based on a problem-specific metric. Similar to DIOP [22] (explained later), MNCA simultaneously evolves multiple subpopulations. In MNCA, the primary subpopulation aims to find an approximation of the Pareto front that provides a target front for the other subpopulations. While the primary subpopulation is updated with the same selection mechanism as in NSGA-II, the other subpopulations are updated with a more involved method that takes both the objective and solution spaces into account.

Although the above-mentioned MMEAs use genetic variation operators (e.g., the SBX crossover and the polynomial mutation [12]), the following MMEAs are based on other approaches. Niching-CMA [20] is an extension of CMA-ES [31] to MMOPs that introduces a niching mechanism. The number of niches and the niche radius are adaptively adjusted in Niching-CMA. An aggregate distance metric in the objective and solution spaces is used to group individuals into multiple niches. For each niche, individuals with better non-domination levels survive to the next iteration. MO_Ring_PSO_SCD [21], a PSO algorithm for MMOPs, uses a diversity measure similar to that of Omni-optimizer. However, MO_Ring_PSO_SCD handles boundary individuals in the objective space in an alternative manner. In addition, an index-based ring topology is used to create niches.

Two extensions of artificial immune systems [32] have been proposed for MMOPs: omni-aiNet [18] and cob-aiNet [33]. Both methods use a modified version of the polynomial mutation [12]. The primary and secondary criteria of omni-aiNet are based on $\epsilon$-nondomination levels [30] and a grid operation, respectively. In addition, omni-aiNet uses suppression and insertion operations: while the suppression operation deletes an inferior individual, the insertion operation adds new individuals to the population. The population size is therefore not constant. The primary and secondary criteria of cob-aiNet are based on the fitness assignment method of SPEA2 [13] and a diversity measure with a sharing function in the solution space, respectively. A maximum population size is introduced in cob-aiNet.

Decomposition-based MMEAs

A three-phase multi-start method is proposed in [16]. First, a single-objective evolution strategy (ES) is carried out on each objective function multiple times to obtain best-so-far solutions. Then, an unsupervised clustering method is applied to these solutions to detect the number of equivalent Pareto optimal solution subsets. Finally, multiple runs of the ES are performed on each single-objective subproblem decomposed by the Tchebycheff function. The initial individual of each run is determined in a chained manner: the best solution found for one subproblem becomes an initial individual of the ES for the next subproblem. It is expected that equivalent solutions are found for each decomposed subproblem.
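The Tchebycheff scalarizing function used for this decomposition has the standard form $g(x \mid w, z^*) = \max_i w_i\,|f_i(x) - z^*_i|$, where $w$ is a weight vector and $z^*$ is a reference (ideal) point; a minimal sketch:

```python
def tchebycheff(f, w, z_star):
    """Tchebycheff scalarizing function g(x|w,z*) = max_i w_i * |f_i(x) - z*_i|.
    f: objective vector of x; the subproblem minimizes this value."""
    return max(wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z_star))
```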

Two variants of MOEA/D [34] for MMOPs are proposed in [35, 36]. MOEA/D decomposes an $M$-objective problem into $N$ single-objective subproblems using a set of $N$ weight vectors, assigning a single individual to each subproblem. Then, MOEA/D simultaneously evolves the $N$ individuals. Unlike MOEA/D, the following two methods assign one or more individuals to each subproblem to handle the equivalency.

The MOEA/D variant presented in [35] assigns $K$ individuals to each subproblem. The selection is conducted based on a fitness value combining the PBI function value [34] and two distance values in the solution space, so that $K$ dissimilar individuals are likely to be assigned to each subproblem. The main drawback of the methods in [35, 16] is the difficulty of setting a proper value for $K$, because it is problem dependent. MOEA/D-AD [36] does not need such a parameter but requires a relative neighborhood size $L$. For each iteration, a child $u$ is assigned to the $j$-th subproblem whose weight vector is closest to $f(u)$, with respect to the perpendicular distance. Let $X_j$ be the set of individuals already assigned to the $j$-th subproblem. If an individual $x$ in $X_j$ is within the $L$ nearest individuals from the child $u$ in the solution space, $x$ and $u$ are compared based on their scalarizing function values $g(x)$ and $g(u)$. If $g(u) \leq g(x)$, $x$ is deleted from the population and $u$ enters the population. The child $u$ also enters the population when no $x$ in $X_j$ is in its neighborhood in the solution space.

Set-based MMEAs

DIOP [22] is a set-based MMEA that can maintain dominated solutions in the population. In the set-based optimization framework [37], a single solution in the upper level represents a set of solutions of the lower-level (i.e., the original) problem. DIOP simultaneously evolves an archive $A$ and a target population $T$. While $A$ approximates only the Pareto front and is not shown to the decision maker, $T$ obtains diverse solutions with acceptable quality by maximizing a weighted sum of two terms: a performance indicator in the objective space and a diversity measure in the solution space. In [22], these two terms were specified by the hypervolume indicator [38] and the Solow-Polasky diversity measure [39], respectively. Meta-individuals in $T$ that are $\epsilon$-dominated by any meta-individual in $A$ are excluded from the calculation of the metric. At the end of the search, $T$ is likely to contain meta-individuals (i.e., solution sets of the original problem) that are $\epsilon$-nondominated by meta-individuals in $A$.
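The Solow-Polasky measure used as DIOP's diversity term is the sum of the entries of the inverse of a correlation matrix built from pairwise distances; a self-contained sketch (theta is the usual decay parameter):

```python
import math

def solow_polasky(points, theta=1.0):
    """Solow-Polasky diversity: the sum of all entries of M^-1, where
    M[i][j] = exp(-theta * ||x_i - x_j||). The value ranges from 1
    (all points coincide) up to n (all points far apart)."""
    n = len(points)
    M = [[math.exp(-theta * math.dist(p, q)) for q in points] for p in points]
    # Gauss-Jordan inversion of M (augment with the identity matrix)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return sum(aug[i][n + j] for i in range(n) for j in range(n))
```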

Another set-based MMEA is presented in [40]. Unlike DIOP, this method evolves only a single population. Whereas DIOP maximizes the weighted sum of the quality indicator and the diversity measure, the method in [40] treats them as two meta objectives. NSGA-II is used to simultaneously maximize both meta objectives in [40].

A post-processing approach

As pointed out in [17], it is not always necessary to locate all Pareto optimal solutions. Suppose that a set of non-dominated solutions $A$ has already been obtained by an MOEA (e.g., NSGA-II), rather than by an MMEA (e.g., Omni-optimizer). After the decision maker has selected the final solution $x^s$ from $A$ according to her/his preference in the objective space, it is sufficient to search for solutions whose objective vectors are equivalent to $f(x^s)$.

A post-processing approach is proposed in [17] to handle this problem. The approach formulates a meta constrained two-objective minimization problem with two meta objectives $g_1$ and $g_2$ and a distance constraint. The meta objective functions $g_1(x)$ and $g_2(x)$ represent the distance between a solution $x$ and the selected solution $x^s$ in the objective space and the negated distance in the solution space, respectively. Thus, smaller $g_1$ and $g_2$ values indicate that $x$ is similar to $x^s$ in the objective space and far from $x^s$ in the solution space, respectively. The constraint prevents $g_2$ from becoming an infinitely small value in unbounded problems. NSGA-II is used as the meta-optimizer in [17].
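The meta problem can be sketched as follows; the exact constraint form in [17] may differ, so the bound r below is an assumption:

```python
import math

def meta_objectives(x, x_s, f, r):
    """Meta two-objective problem for post-processing (a sketch).
    x_s is the solution chosen by the decision maker; f maps a solution
    to its objective vector. Both meta objectives are to be minimized.
    The bound r is an assumed constraint form keeping g2 finite."""
    g1 = math.dist(f(x), f(x_s))        # small => similar in the objective space
    g2 = -math.dist(x, x_s)             # small => far away in the solution space
    feasible = math.dist(x, x_s) <= r
    return g1, g2, feasible
```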

Type MMEAs Year $N$ $\mathrm{FE}^{\max}$ $\delta > 0$ U
SPEA2+ [5, 15] 2004
Omni-optimizer [14, 9] 2005
4D-Miner [3, 29] 2005
omni-aiNet [18] 2006
Niching-CMA [20] 2009
Dominance A method in [25] 2010 Not clearly reported
$P_{Q,\epsilon}$-MOEA [4] 2011
cob-aiNet [33] 2011
MNCA [19] 2013
DN-NSGA-II [24] 2016
MO_Ring_PSO_SCD [21] 2017
DNEA [23] 2018
Decomp. A method in [16] 2007
A method in [35] 2018
MOEA/D-AD [36] 2018
Set DIOP [22] 2010
A method in [40] 2012
P. A method in [17] 2009
TABLE I: Properties of the 18 MMEAs. $N$ and $\mathrm{FE}^{\max}$ denote the population size and the maximum number of evaluations used in each paper, respectively. “$\delta > 0$” indicates whether each method can handle MMOPs with $\delta > 0$, and “U” indicates whether each method has an unbounded population/archive. Initial $N$ values are reported for omni-aiNet, cob-aiNet, $P_{Q,\epsilon}$-MOEA, and MOEA/D-AD. The $N$ and $\mathrm{FE}^{\max}$ used in the post-processing step are shown for the method in [17].

Open issues

Table I summarizes the properties of the 18 MMEAs reviewed in this section.

While some MMEAs require an extra parameter (e.g., the neighborhood size $L$ in MOEA/D-AD), Omni-optimizer does not require such a parameter. This parameter-less property is an advantage of Omni-optimizer. However, Omni-optimizer is a Pareto dominance-based MMEA. Since dominance-based MOEAs perform poorly on most MOPs with more than three objectives [28], Omni-optimizer is unlikely to handle a large number of objectives well.

In addition to MMEAs, some MOEAs handling the solution space diversity have been proposed, such as GDEA [41], DEMO [42], DIVA [43], “MMEA” [44], DCMMMOEA [45], and MOEA/D-EVSD [46]. Note that solution space diversity management in these MOEAs aims to efficiently approximate the Pareto front for MOPs. Since these methods were not designed for MMOPs, they are likely to perform poorly for MMOPs. For example, “MMEA”, which stands for a model-based multi-objective evolutionary algorithm, cannot find multiple equivalent Pareto optimal solutions [44]. Nevertheless, helpful clues for designing an efficient MMEA can be found in these MOEAs.

The performance of MMEAs has not been well analyzed. The post-processing method may perform better than MMEAs when the objective functions of a real-world problem are computationally expensive. However, an in-depth investigation is necessary to determine which approach is more practical. Whereas the population size $N$ and the maximum number of evaluations $\mathrm{FE}^{\max}$ were set to large values in some studies, they were set to small values in others. For example, Table I shows that $N$ and $\mathrm{FE}^{\max}$ were set to large values for Omni-optimizer but to much smaller values for Niching-CMA. It is unclear whether an MMEA designed with large $N$ and $\mathrm{FE}^{\max}$ values works well with small ones. While MMOPs with four or more objectives appear in real-world applications (e.g., five-objective rocket engine design problems [7]), most MMEAs have been applied only to two-objective MMOPs. A large-scale benchmarking study is necessary to address the above-mentioned issues.

The decision maker may want to examine diverse dominated solutions. As explained in Section I, dominated solutions found by $P_{Q,\epsilon}$-MOEA support the decision making in space mission design problems [4]. The results presented in [29] showed that diverse solutions found by 4D-Miner help neuroscientists analyze brain imaging data. Although most MMEAs assume MMOPs with $\delta = 0$ as shown in Table I, MMEAs that can handle MMOPs with $\delta > 0$ may be more practical. Since most MMEAs (e.g., Omni-optimizer) remove dominated individuals from the population, they are unlikely to find diverse dominated solutions. Specific mechanisms are necessary to handle MMOPs with $\delta > 0$ (e.g., the multiple-subpopulation schemes in DIOP and MNCA).

As explained at the beginning of this section, MMEAs need the three abilities (1)–(3). While the abilities (1) and (2) are needed to approximate the Pareto front, the ability (3) is needed to find equivalent Pareto optimal solutions. Most existing studies (e.g., [9, 20, 21, 36]) report that the abilities (1) and (2) of MMEAs are worse than those of MOEAs. For example, the results presented in [36] showed that Omni-optimizer, MO_Ring_PSO_SCD, and MOEA/D-AD perform worse than NSGA-II in terms of IGD [47] (explained in Section V). If the decision maker is not interested in the distribution of solutions in the solution space, it would be better to use MOEAs rather than MMEAs. The poor performance of MMEAs for multi-objective optimization is mainly due to the ability (3), which prevents MMEAs from directly approximating the Pareto front. This undesirable performance regarding the abilities (1) and (2) is an issue in MMEAs.

What to learn from MSOPs: An online data repository (https://github.com/mikeagn/CEC2013) that provides results of optimizers on the CEC2013 problem suite [48] is available for MSOPs. This repository makes the comparison of optimizers easy, facilitating constructive algorithm development. A similar data repository is needed for studies of MMOPs.

The number of maintainable individuals in the population/archive strongly depends on the population/archive size. However, it is usually impossible to know the number of equivalent Pareto optimal solutions of an MMOP a priori. The same issue can be found in MSOPs. To address it, the latest optimizers (e.g., dADE [49] and RS-CMSA [50]) have an unbounded archive that maintains solutions found during the search process. In contrast to modern optimizers for MSOPs, Table I shows that only three MMEAs have such a mechanism. The adaptive population sizing mechanisms in omni-aiNet, $P_{Q,\epsilon}$-MOEA, and MOEA/D-AD are advantageous. A general strategy of using an unbounded (external) archive could improve the performance of MMEAs.

IV Multi-modal multi-objective test problems

This section describes test problems for benchmarking MMEAs. Unlike multi-objective test problems (e.g., the DTLZ test suite [51]), multi-modal multi-objective test problems are explicitly designed to have multiple equivalent Pareto optimal solution subsets. The two-objective and two-variable SYM-PART1 problem [16] is one of the most representative test problems for benchmarking MMEAs: $f_1(x) = (\hat{x}_1 + a)^2 + \hat{x}_2^2$ and $f_2(x) = (\hat{x}_1 - a)^2 + \hat{x}_2^2$. Here, $\hat{x}_1$ and $\hat{x}_2$ are translated values of $x_1$ and $x_2$: $\hat{x}_1 = x_1 - t_1 (c + 2a)$ and $\hat{x}_2 = x_2 - t_2 b$. In SYM-PART1, $a$ controls the region of Pareto optimal solutions, and $b$ and $c$ specify the positions of the Pareto optimal solution subsets. The so-called tile identifiers $t_1$ and $t_2$ take values in $\{-1, 0, 1\}$. Fig. 3(a) shows the shape of the Pareto optimal solutions of SYM-PART1. As shown in Fig. 3(a), the equivalent Pareto optimal solution subsets lie on nine line segments in SYM-PART1.
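A sketch of this formulation follows; the parameter values and the tile translation reflect our reading of [16] and are illustrative only:

```python
def sym_part1(x1, x2, a=1.0, b=10.0, c=8.0):
    """SYM-PART1 sketch: each of the 9 tiles holds one equivalent Pareto
    optimal subset (the segment x2 = 0, -a <= x1 <= a, translated).
    Tile identifiers t1, t2 in {-1, 0, 1} are recovered from x.
    Parameter values a, b, c are assumed defaults."""
    t1 = max(-1, min(1, round(x1 / (c + 2 * a))))
    t2 = max(-1, min(1, round(x2 / b)))
    x1t = x1 - t1 * (c + 2 * a)   # translated coordinates
    x2t = x2 - t2 * b
    f1 = (x1t + a) ** 2 + x2t ** 2
    f2 = (x1t - a) ** 2 + x2t ** 2
    return f1, f2
```

Note how the tile centers all map to the same objective vector, which is exactly the equivalency an MMEA must capture.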

Other test problems include the Two-On-One problem [11], the Omni-test problem [9], the SYM-PART2 and SYM-PART3 problems [16], the Superspheres problem [52], the EBN problem [53], the two SSUF problems [24], and the Polygon problems [54]. Fig. 3 also shows the distribution of their Pareto optimal solutions. Since the EBN problem has an infinite number of Pareto optimal solutions, we do not show them. Source codes of the ten problems can be downloaded from the supplementary website (https://sites.google.com/view/emmo/). In Omni-test, the equivalent Pareto optimal solution subsets are regularly located. SYM-PART2 is a rotated version of SYM-PART1, and SYM-PART3 is a transformed version of SYM-PART2 obtained by a distortion operation. The Superspheres problem has six equivalent Pareto optimal solution subsets in the setting examined in [52], but the number of subsets is unknown in general. EBN can be considered a real-coded version of the so-called binary one-zero max problem; all solutions in its solution space are Pareto optimal. SSUF1 and SSUF3 are extensions of the UF problems [55] to MMOPs, each with two symmetrical Pareto optimal solution subsets. Polygon is an extension of the distance minimization problems [56] to MMOPs, where the equivalent Pareto optimal solution subsets are inside regular polygons.

Fig. 3: Distribution of the Pareto optimal solutions for the eight problems, including (d) Two-On-One, (e) Omni-test, (f) Superspheres, (g) SSUF1, (h) SSUF3, and (i) Polygon. Only $x_1$ and $x_2$ are shown for Omni-test.

In addition, the eight MMF problems are presented in [21]. Similar to SSUF1 and SSUF3, the MMF problems are derived from the idea of designing a problem that has multiple equivalent Pareto optimal solution subsets by mirroring an original one. A bottom-up framework for generating scalable test problems with any number of equivalent Pareto optimal solution subsets is proposed in [57]. The equivalent subsets lie in hyper-rectangles in the solution space, similar to the SYM-PART problems. While the first few variables play the role of “position” parameters in the solution space, the remaining variables represent “distance” parameters. The six HPS problem instances were constructed using this framework in [57].

If a given problem has a multi-modal fitness landscape, it may have multiple non-Pareto fronts whose shapes are similar to the true Pareto front. Such a problem (e.g., ZDT4 [58]) is referred to as a multi-frontal test problem [59]. If the $\delta$ value (defined in Subsection II-2) is sufficiently large, a multi-frontal test problem can be regarded as a multi-modal multi-objective test problem. In fact, ZDT4 was used in [19] as a test problem. The Kursawe problem [60] is a multi-modal and nonseparable test problem with a disconnected Pareto front. The Kursawe problem has two fronts in the objective space, similar to multi-frontal problems, and can thus also be used as a multi-modal multi-objective test problem.

Test problems $M$ $D$ $P$ Irregularity
SYM-PART problems [16] 2 2 9
Two-On-One problem [11] 2 2 2
Omni-test problem [9] 2 Any
Superspheres problem [52] 2 Any Unknown
EBN problem [53] 2 Any
Polygon problems [54] Any 2 Any
SSUF problems [24] 2 2 2
MMF suite [21] 2 2 2 or 4
HPS suite [57] 2 Any Any
TABLE II: Properties of multi-modal multi-objective test problems, where $M$, $D$, and $P$ denote the number of objectives, design variables, and equivalent Pareto optimal solution subsets, respectively. If a problem has irregularity, the shapes of its multiple equivalent Pareto optimal solution subsets differ from each other.

Open issues

Table II summarizes the properties of the multi-modal multi-objective test problems reviewed here. In Table II, the $P$ value of Omni-test adheres to [22].

Table II indicates that no existing test problem is scalable in all of $M$, $D$, and $P$. Although the SYM-PART problems have some desirable properties (e.g., their adjustable and straightforward Pareto optimal solution shapes), $M$, $D$, and $P$ are constant in these problems. Only Polygon is scalable in $M$. While most test problems have only two design variables, Omni-test and HPS are scalable in $D$. Unfortunately, $P$ increases exponentially with $D$ in Omni-test due to the combinatorial nature of the variables. Although the idea of designing SYM-PART and Polygon problems scalable in $D$ is presented in [61, 62], those problems have issues similar to Omni-test. Although the HPS problems do not have such an issue, it is questionable whether there exists a real-world problem in which most design variables affect only the distance between the objective vectors and the Pareto front. Only SYM-PART3 has irregularity. Since the shapes of the Pareto optimal solution subsets may differ from each other in real-world problems, we believe that test problems with irregularity are necessary to evaluate the performance of MMEAs. The performance of an MMEA with an absolutely defined niching radius (e.g., DNEA) is likely to be overestimated on test problems without irregularity.

In addition, the relation between synthetic test problems and real-world problems has not been discussed. The idea of designing a Polygon problem based on a real-world map is presented in [63]. However, this does not mean that such a Polygon problem is an actual real-world problem.

What to learn from MSOPs: Some construction methods for multi-modal single-objective test problems are available, such as the software framework proposed in [64], the construction method for various problem types [65], and Ahrari and Deb’s method [66]. Borrowing ideas from such sophisticated construction methods is a promising way to address the above-mentioned issues of multi-modal multi-objective test problems. In [64], Rönkkönen et al. present eight desirable properties for multi-modal single-objective problem generators, such as scalability in D, control of the number of global and local optima, and regular and irregular distributions of optima. These eight properties can serve as a useful guideline for designing multi-modal multi-objective problem generators.

V Performance indicators for MMEAs

Performance indicators play an important role in quantitatively evaluating the performance of MOEAs as well as MMEAs. Since performance indicators for MOEAs consider only the distribution of objective vectors (e.g., the hypervolume, GD, and IGD indicators [38, 47]), they cannot be used to assess the ability of MMEAs to find multiple equivalent Pareto optimal solutions. For this reason, some indicators have been specially designed for MMEAs. Performance indicators for MMEAs can be classified into two categories: simple extensions of existing performance indicators for MOEAs and specific indicators based on the distributions of solutions.

IGDX [4, 44] is a representative example of the first approach. Let A be a set of solutions obtained by an MMEA and A* be a set of reference solutions in the Pareto optimal solution set. The IGD and IGDX indicators are given as follows:

IGD(A) = (1 / |A*|) Σ_{z ∈ A*} min_{a ∈ A} ED(f(a), f(z)),

IGDX(A) = (1 / |A*|) Σ_{z ∈ A*} min_{a ∈ A} ED(a, z),

where ED(x1, x2) denotes the Euclidean distance between x1 and x2, and f(a) denotes the objective vector of a. While A with a small IGD value is a good approximation of the Pareto front, A with a small IGDX value approximates the Pareto optimal solution set well. Other indicators in the first category include GDX [4], the Hausdorff distance indicator [67] in the solution space [4], CR [21], and PSP [21]. GDX is a GD indicator in the solution space, similar to IGDX. CR is an alternative version of the maximum spread [38] to measure the spread of A. PSP is a combination of IGDX and CR.
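As a concrete sketch of these definitions, IGD and IGDX can be computed with a few lines of NumPy. This is an illustrative implementation, not code from any of the cited works; it assumes the solution sets and their objective vectors are stored as 2-D arrays (one row per solution).

```python
import numpy as np

def igd(ref_objs, objs):
    """IGD: mean distance from each reference objective vector to the
    nearest objective vector in the obtained set (smaller is better)."""
    d = np.linalg.norm(ref_objs[:, None, :] - objs[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igdx(ref_sols, sols):
    """IGDX: the same computation carried out in the solution space,
    i.e., mean distance from each reference Pareto optimal solution
    to its nearest obtained solution."""
    d = np.linalg.norm(ref_sols[:, None, :] - sols[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

The two functions are structurally identical; the only difference is whether the rows passed in are objective vectors (IGD) or design vectors (IGDX), which mirrors how the two indicators differ in the equations above.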

Performance indicators in the second category include the mean of the pairwise distances between solutions [20], CS [16], SPS [16], the Solow-Polasky diversity measure [39] used in [40, 22], and PSV [57]. CS is the number of Pareto optimal solution subsets covered by at least one individual. SPS is the standard deviation of the number of solutions close to each Pareto optimal solution subset. PSV is the percentage of the volume of the obtained solution set within the volume of the Pareto optimal solution set in the solution space.
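To make the second category more concrete, the Solow-Polasky measure [39] can be sketched as follows. Given n points, it forms a correlation matrix C with C_ij = exp(-θ d_ij), where d_ij is the pairwise distance, and returns the sum of the entries of C⁻¹; the result ranges from 1 (all points coincide) up to n (all points far apart). The choice of θ and the NumPy formulation below are illustrative assumptions, not taken from the cited implementations.

```python
import numpy as np

def solow_polasky(points, theta=1.0):
    """Solow-Polasky diversity: sum of entries of C^{-1},
    where C_ij = exp(-theta * ||x_i - x_j||)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    c = np.exp(-theta * d)
    return float(np.linalg.inv(c).sum())
```

For example, two widely separated points yield a value close to 2, while a single point (or duplicated points) yields a value close to 1, which is why the measure rewards spreading solutions over distinct equivalent Pareto subsets.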

Indicators Conv. Div. Unif. Spr. Ref. Dif.
GDX [4]
IGDX [4, 44]
Hausdorff distance [4]
CR [21]
PSP [21]
Pairwise distance [20]
CS [16]
SPS [16]
Solow-Polasky [39]
PSV [57]
TABLE III: Properties of performance indicators for MMEAs (convergence to Pareto optimal solution subsets, diversity, uniformity, spread, the use of reference solution sets, and possibility to compare solution sets with different sizes).
Fig. 4: Comparison of solution sets A and B for SYM-PART1: (a) A in the solution space, (b) B in the solution space, (c) A in the objective space, (d) B in the objective space.

Open issues

Table III shows the properties of performance indicators for MMEAs reviewed in this section, where the properties are assessed based on the description of each indicator. While the properties of the performance indicators for MOEAs have been examined (e.g., [38, 67]), those for MMEAs have not been well analyzed.

Performance indicators for MMEAs should be able to evaluate the three abilities (1)–(3) explained in Section III. Although IGDX is frequently used, it should be noted that IGDX does not evaluate the distribution of solutions in the objective space. Fig. 4 shows the distribution of two solution sets A and B for SYM-PART1 in the solution and objective spaces, where |A| and |B| are 27. While the solutions in A are evenly distributed on only one of the nine Pareto optimal solution subsets, the solutions in B are evenly distributed on all nine of them. Thus, although A has 27 objective vectors that cover the Pareto front, B has only 3 distinct objective vectors (each shared by 9 equivalent solutions). A set of Pareto optimal solutions was used as the reference set for IGD and IGDX. Comparing the resulting indicator values shows that, although B has a worse distribution in the objective space than A, its IGDX value is significantly better than that of A. As demonstrated here, IGDX can evaluate the abilities (1) and (3) but cannot evaluate the ability (2) to find diverse solutions in the objective space. Since the other indicators in Table III, like IGDX, do not take the distribution of objective vectors into account, they are likely to have the same undesirable property. For a fair performance comparison, it is desirable to use the indicators for MOEAs (e.g., hypervolume and IGD) in addition to the indicators for MMEAs in Table III.
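This phenomenon — a set covering all equivalent Pareto subsets scoring better on IGDX but worse on IGD — can be reproduced on a much simpler toy problem than SYM-PART1. The one-variable problem, set sizes, and reference set below are illustrative assumptions constructed for this sketch: the Pareto set is [0, 1] ∪ [2, 3], and x and x + 2 map to the same objective vector (t, 1 − t) with t = x mod 2.

```python
import numpy as np

def igd_like(ref, found):
    # Mean distance from each reference point to its nearest found point.
    d = np.linalg.norm(ref[:, None] - found[None, :], axis=-1)
    return d.min(axis=1).mean()

def objs(x):
    # Two equivalent 1-D Pareto subsets, [0, 1] and [2, 3], sharing
    # the same objective vectors (f1, f2) = (x mod 2, 1 - x mod 2).
    t = np.mod(x, 2.0)
    return np.stack([t, 1.0 - t], axis=-1)

ref_x = np.concatenate([np.linspace(0, 1, 50), np.linspace(2, 3, 50)])
set_a = np.linspace(0, 1, 6)                    # one subset, well-spread front
set_b = np.concatenate([np.linspace(0, 1, 3),   # both subsets, but only
                        np.linspace(2, 3, 3)])  # 3 distinct objective vectors

igdx_a = igd_like(ref_x[:, None], set_a[:, None])
igdx_b = igd_like(ref_x[:, None], set_b[:, None])
igd_a = igd_like(objs(ref_x), objs(set_a))
igd_b = igd_like(objs(ref_x), objs(set_b))
# set_b wins on IGDX (it covers both subsets) but loses on IGD
# (its front coverage is sparse) -- mirroring A and B in Fig. 4.
```

Running this confirms igdx_b < igdx_a while igd_a < igd_b, i.e., neither indicator alone ranks the two sets consistently, which is the argument for reporting both.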

What to learn from MSOPs: It is desirable that indicators for multi-modal single-objective optimizers evaluate a solution set without knowledge of the fitness landscape, such as the positions and objective values of the optima [68]. The same is true for indicators for MMEAs. Table III shows that most indicators (e.g., IGDX) require a reference Pareto optimal solution set. Since such a reference set is usually unavailable in real-world problems, it is desirable that indicators for MMEAs evaluate a solution set without it.

Since the archive size in modern multi-modal single-objective optimizers is unbounded so that a large number of local optima can be stored [10], most indicators in this field can handle solution sets with different sizes (e.g., the peak ratio and the success rate [48]). For the same reason, it is desirable that indicators for MMEAs evaluate solution sets with different sizes in a fair manner. However, it is difficult to directly use indicators designed for multi-modal single-objective optimizers to evaluate MMEAs.

VI Conclusion

The contributions of this paper are threefold. The first contribution is that we reviewed studies in this field in terms of definitions of MMOPs, MMEAs, test problems, and performance indicators. It was difficult to survey the existing studies of MMOPs for the reasons described in Section I. Our review helps to elucidate the current progress on evolutionary multi-modal multi-objective optimization. The second contribution is that we clarified open issues in this field. In contrast to multi-modal single-objective optimization, multi-modal multi-objective optimization has not received much attention despite its practical importance. Thus, some critical issues remain. The third contribution is that we pointed out an issue associated with performance indicators for MMEAs. Reliable performance indicators are necessary for the advancement of MMEAs. We hope that this paper will encourage researchers to work in this research area, which is not well explored.


Acknowledgment

This work was supported by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), Shenzhen Peacock Plan (Grant No. KQTD2016112514355531), the Science and Technology Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), the Program for University Key Laboratory of Guangdong Province (Grant No. 2017KSYS008), and the National Natural Science Foundation of China (Grant No. 61876075).


  1. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms.   John Wiley & Sons, 2001.
  2. K. Miettinen, Nonlinear Multiobjective Optimization.   Springer, 1998.
  3. M. Sebag, N. Tarrisson, O. Teytaud, J. Lefèvre, and S. Baillet, “A Multi-Objective Multi-Modal Optimization Approach for Mining Stable Spatio-Temporal Patterns,” in IJCAI, 2005, pp. 859–864.
  4. O. Schütze, M. Vasile, and C. A. C. Coello, “Computing the Set of Epsilon-Efficient Solutions in Multiobjective Space Mission Design,” JACIC, vol. 8, no. 3, pp. 53–70, 2011.
  5. T. Hiroyasu, S. Nakayama, and M. Miki, “Comparison study of SPEA2+, SPEA2, and NSGA-II in diesel engine emissions and fuel economy problem,” in IEEE CEC, 2005, pp. 236–242.
  6. M. Preuss, C. Kausch, C. Bouvy, and F. Henrich, “Decision Space Diversity Can Be Essential for Solving Multiobjective Real-World Problems,” in MCDM, 2008, pp. 367–377.
  7. F. Kudo, T. Yoshikawa, and T. Furuhashi, “A study on analysis of design variables in Pareto solutions for conceptual design optimization problem of hybrid rocket engine,” in IEEE CEC, 2011, pp. 2558–2562.
  8. J. Togelius, M. Preuss, and G. N. Yannakakis, “Towards multiobjective procedural map generation,” in PCGames, 2010.
  9. K. Deb and S. Tiwari, “Omni-optimizer: A generic evolutionary algorithm for single and multi-objective optimization,” EJOR, vol. 185, no. 3, pp. 1062–1087, 2008.
  10. X. Li, M. G. Epitropakis, K. Deb, and A. P. Engelbrecht, “Seeking Multiple Solutions: An Updated Survey on Niching Methods and Their Applications,” IEEE TEVC, vol. 21, no. 4, pp. 518–538, 2017.
  11. M. Preuss, B. Naujoks, and G. Rudolph, “Pareto Set and EMOA Behavior for Simple Multimodal Multiobjective Functions,” in PPSN, 2006, pp. 513–522.
  12. K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE TEVC, vol. 6, no. 2, pp. 182–197, 2002.
  13. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the Strength Pareto Evolutionary Algorithm,” ETHZ, Tech. Rep., 2001.
  14. K. Deb and S. Tiwari, “Omni-optimizer: A Procedure for Single and Multi-objective Optimization,” in EMO, 2005, pp. 47–61.
  15. M. Kim, T. Hiroyasu, M. Miki, and S. Watanabe, “SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2,” in PPSN, 2004, pp. 742–751.
  16. G. Rudolph, B. Naujoks, and M. Preuss, “Capabilities of EMOA to Detect and Preserve Equivalent Pareto Subsets,” in EMO, 2007, pp. 36–50.
  17. G. Rudolph and M. Preuss, “A multiobjective approach for finding equivalent inverse images of pareto-optimal objective vectors,” in MCDM, 2009, pp. 74–79.
  18. G. P. Coelho and F. J. V. Zuben, “omni-aiNet: An Immune-Inspired Approach for Omni Optimization,” in ICARIS, 2006, pp. 294–308.
  19. E. M. Zechman, M. H. G., and M. E. Shafiee, “An evolutionary algorithm approach to generate distinct sets of non-dominated solutions for wicked problems,” Eng. Appl. of AI, vol. 26, no. 5-6, pp. 1442–1457, 2013.
  20. O. M. Shir, M. Preuss, B. Naujoks, and M. T. M. Emmerich, “Enhancing Decision Space Diversity in Evolutionary Multiobjective Algorithms,” in EMO, 2009, pp. 95–109.
  21. C. Yue, B. Qu, and J. Liang, “A Multi-objective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multi-objective Problems,” IEEE TEVC, 2018 (in press).
  22. T. Ulrich, J. Bader, and L. Thiele, “Defining and Optimizing Indicator-Based Diversity Measures in Multiobjective Search,” in PPSN, 2010, pp. 707–717.
  23. Y. Liu, H. Ishibuchi, Y. Nojima, N. Masuyama, and K. Shang, “A Double-Niched Evolutionary Algorithm and Its Behavior on Polygon-Based Problems,” in PPSN, 2018, pp. 262–273.
  24. J. J. Liang, C. T. Yue, and B. Y. Qu, “Multimodal multi-objective optimization: A preliminary study,” in IEEE CEC, 2016, pp. 2454–2461.
  25. O. Kramer and H. Danielsiek, “DBSCAN-based multi-objective niching to approximate equivalent pareto-subsets,” in GECCO, 2010, pp. 503–510.
  26. M. Ester, H. Kriegel, J. Sander, and X. Xu, “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise,” in KDD, 1996, pp. 226–231.
  27. O. Kramer and P. Koch, “Rake Selection: A Novel Evolutionary Multi-Objective Optimization Algorithm,” in KI, 2009, pp. 177–184.
  28. K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints,” IEEE TEVC, vol. 18, no. 4, pp. 577–601, 2014.
  29. V. Krmicek and M. Sebag, “Functional Brain Imaging with Multi-objective Multi-modal Evolutionary Optimization,” in PPSN, 2006, pp. 382–391.
  30. M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining Convergence and Diversity in Evolutionary Multiobjective Optimization,” Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002.
  31. N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evol. Comput., vol. 9, no. 2, pp. 159–195, 2001.
  32. D. Dasgupta, S. Yu, and F. Niño, “Recent Advances in Artificial Immune Systems: Models and Applications,” Appl. Soft Comput., vol. 11, no. 2, pp. 1574–1587, 2011.
  33. G. P. Coelho and F. J. V. Zuben, “A Concentration-Based Artificial Immune Network for Multi-objective Optimization,” in EMO, 2011, pp. 343–357.
  34. Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE TEVC, vol. 11, no. 6, pp. 712–731, 2007.
  35. C. Hu and H. Ishibuchi, “Incorporation of a decision space diversity maintenance mechanism into MOEA/D for multi-modal multi-objective optimization,” in GECCO (Companion), 2018, pp. 1898–1901.
  36. R. Tanabe and H. Ishibuchi, “A Decomposition-Based Evolutionary Algorithm for Multi-modal Multi-objective Optimization,” in PPSN, 2018, pp. 249–261.
  37. E. Zitzler, L. Thiele, and J. Bader, “On Set-Based Multiobjective Optimization,” IEEE TEVC, vol. 14, no. 1, pp. 58–79, 2010.
  38. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE TEVC, vol. 7, no. 2, pp. 117–132, 2003.
  39. A. R. Solow and S. Polasky, “Measuring biological diversity,” Environ. Ecol. Stat., vol. 1, no. 2, pp. 95–103, 1994.
  40. H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, “Two-objective solution set optimization to maximize hypervolume and decision space diversity in multiobjective optimization,” in SCIS, 2012, pp. 1871–1876.
  41. A. Toffolo and E. Benini, “Genetic Diversity as an Objective in Multi-Objective Evolutionary Algorithms,” Evol. Comput., vol. 11, no. 2, pp. 151–167, 2003.
  42. T. Robič and B. Filipič, “DEMO: differential evolution for multiobjective optimization,” in EMO, 2005, pp. 520–533.
  43. T. Ulrich, J. Bader, and E. Zitzler, “Integrating decision space diversity into hypervolume-based multiobjective search,” in GECCO, 2010, pp. 455–462.
  44. A. Zhou, Q. Zhang, and Y. Jin, “Approximating the Set of Pareto-Optimal Solutions in Both the Decision and Objective Spaces by an Estimation of Distribution Algorithm,” IEEE TEVC, vol. 13, no. 5, pp. 1167–1189, 2009.
  45. H. Xia, J. Zhuang, and D. Yu, “Combining Crowding Estimation in Objective and Decision Space With Multiple Selection and Search Strategies for Multi-Objective Evolutionary Optimization,” IEEE Trans. Cyber., vol. 44, no. 3, pp. 378–393, 2014.
  46. J. C. Castillo, C. Segura, A. H. Aguirre, G. Miranda, and C. León, “A multi-objective decomposition-based evolutionary algorithm with enhanced variable space diversity control,” in GECCO (Companion), 2017, pp. 1565–1571.
  47. C. A. C. Coello and M. R. Sierra, “A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm,” in MICAI, 2004, pp. 688–697.
  48. X. Li, A. Engelbrecht, and M. G. Epitropakis, “Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization,” RMIT Univ., Tech. Rep., 2013.
  49. M. G. Epitropakis, X. Li, and E. K. Burke, “A dynamic archive niching differential evolution algorithm for multimodal optimization,” in IEEE CEC, 2013, pp. 79–86.
  50. A. Ahrari, K. Deb, and M. Preuss, “Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations,” Evol. Comput., vol. 25, no. 3, pp. 439–471, 2017.
  51. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable Test Problems for Evolutionary Multi-Objective Optimization,” in Evolutionary Multiobjective Optimization. Theoretical Advances and Applications.   Springer, 2005, pp. 105–145.
  52. M. T. M. Emmerich and A. H. Deutz, “Test problems based on lamé superspheres,” in EMO, 2006, pp. 922–936.
  53. N. Beume, B. Naujoks, and M. T. M. Emmerich, “SMS-EMOA: multiobjective selection based on dominated hypervolume,” EJOR, vol. 181, no. 3, pp. 1653–1669, 2007.
  54. H. Ishibuchi, Y. Hitotsuyanagi, N. Tsukamoto, and Y. Nojima, “Many-Objective Test Problems to Visually Examine the Behavior of Multiobjective Evolution in a Decision Space,” in PPSN, 2010, pp. 91–100.
  55. Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, “Multiobjective optimization Test Instances for the CEC 2009 Special Session and Competition,” Univ. of Essex, Tech. Rep., 2008.
  56. M. Köppen and K. Yoshida, “Substitute Distance Assignments in NSGA-II for Handling Many-objective Optimization Problems,” in EMO, 2007, pp. 727–741.
  57. B. Zhang, K. Shafi, and H. A. Abbass, “On Benchmark Problems and Metrics for Decision Space Performance Analysis in Multi-Objective Optimization,” IJCIA, vol. 16, no. 1, pp. 1–18, 2017.
  58. E. Zitzler, K. Deb, and L. Thiele, “Comparison of Multiobjective Evolutionary Algorithms: Empirical Results,” Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000. [Online]. Available: http://dx.doi.org/10.1162/106365600568202
  59. S. Huband, P. Hingston, L. Barone, and R. L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE TEVC, vol. 10, no. 5, pp. 477–506, 2006.
  60. F. Kursawe, “A Variant of Evolution Strategies for Vector Optimization,” in PPSN, 1990, pp. 193–197.
  61. V. L. Huang, A. K. Qin, K. Deb, E. Zitzler, P. N. Suganthan, J. J. Liang, M. Preuss, and S. Huband, “Problem Definitions for Performance Assessment on Multi-objective Optimization Algorithms,” NTU, Tech. Rep., 2007.
  62. H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, “Many-objective and many-variable test problems for visual examination of multiobjective search,” in IEEE CEC, 2013, pp. 1491–1498.
  63. H. Ishibuchi, N. Akedo, and Y. Nojima, “A many-objective test problem for visually examining diversity maintenance behavior in a decision space,” in GECCO, 2011, pp. 649–656.
  64. J. Rönkkönen, X. Li, V. Kyrki, and J. Lampinen, “A framework for generating tunable test functions for multimodal optimization,” Soft Comput., vol. 15, no. 9, pp. 1689–1706, 2011.
  65. B. Y. Qu, J. J. Liang, Z. Y. Wang, Q. Chen, and P. N. Suganthan, “Novel benchmark functions for continuous multimodal optimization with comparative results,” SWEVO, vol. 26, pp. 23–34, 2016.
  66. A. Ahrari and K. Deb, “A Novel Class of Test Problems for Performance Evaluation of Niching Methods,” IEEE TEVC, vol. 22, no. 6, pp. 909–919, 2018.
  67. O. Schütze, X. Esquivel, A. Lara, and C. A. C. Coello, “Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization,” IEEE TEVC, vol. 16, no. 4, pp. 504–522, 2012.
  68. J. Mwaura, A. P. Engelbrecht, and F. V. Nepocumeno, “Performance measures for niching algorithms,” in IEEE CEC, 2016, pp. 4775–4784.