
# Scalable and Customizable Benchmark Problems for Many-Objective Optimization

## Abstract

Solving many-objective problems (MaOPs) is still a significant challenge in the multi-objective optimization (MOO) field. One way to measure algorithm performance is through the use of benchmark functions (also called test functions or test suites), which are artificial problems with a well-defined mathematical formulation, known solutions and a variety of features and difficulties. In this paper we propose a parameterized generator of scalable and customizable benchmark problems for MaOPs. It is able to generate problems that reproduce features present in other benchmarks and also problems with some new features. We propose here the concept of generative benchmarking, in which one can generate an infinite number of MOO problems, by varying parameters that control specific features that the problem should have: scalability in the number of variables and objectives, bias, deceptiveness, multimodality, robust and non-robust solutions, shape of the Pareto front, and constraints. The proposed Generalized Position-Distance (GPD) tunable benchmark generator uses the position-distance paradigm, a basic approach to building test functions, used in other benchmarks such as Deb, Thiele, Laumanns and Zitzler (DTLZ), Walking Fish Group (WFG) and others. It includes scalable problems in any number of variables and objectives and it presents Pareto fronts with different characteristics. The resulting functions are easy to understand and visualize, easy to implement, fast to compute and their Pareto optimal solutions are known.

###### keywords:
Benchmark Functions, Scalable Test Functions, Many-Objective Optimization, Evolutionary Algorithms

## 1 Introduction

One significant challenge in the multi-objective optimization (MOO) field is related to solving many-objective problems (MaOPs), which are usually defined when the number of objective functions is greater than three Jain and Deb (2014); Cheng et al. (2017b); Zhou et al. (2017); Maltese et al. (2018). The increase in the number of objectives poses a number of challenges to the methods designed for MOO, in terms of convergence to the Pareto optimal solutions, dimensionality of the Pareto front, visualization of solutions and decision-making. An efficient way of obtaining an approximation set of the solutions to these problems is through stochastic heuristic algorithms, particularly Multi-Objective Evolutionary Algorithms (MOEA) Li et al. (2015); Ishibuchi et al. (2008). Assuming no prior preference provided by the decision-maker (DM), MOEAs are designed to find an unbiased, well-distributed approximation of the entire Pareto front, a task that becomes harder in MaOPs Meneghini et al. (2019). It is also possible to use preference information in MOEAs Meneghini et al. (2019); Cheng et al. (2016); Goulart and Campelo (2016); Thiele et al. (2009). Nonetheless, although focusing the search on a given region of interest, the scalability issue remains a challenge for preference-based MOEA.

Several factors characterize a good approximation set of solutions to a MOO problem: convergence to the true Pareto front, representativeness of the set (also involving the concept of diversity) and coverage of the obtained approximation in the objective space. Coverage is generally understood as the extension of the set, or how well the set of solutions covers the extreme points of the Pareto front. Representativeness is the presence and good distribution of solutions along the Pareto front surface, giving the DM a good representation of the Pareto front and the potential to analyze different trade-offs among the objectives. Convergence means that the solutions obtained should be as close as possible to the Pareto front. Another factor that may be desirable in many practical cases is the robustness of the solutions found. In real-world problems the presence of noise, disturbances and variability is the rule, not the exception. In addition to being locally Pareto optimal, it is desirable that the solutions offered to the DM are less sensitive to the impact of uncertainties or unforeseen scenarios. Thus a sub-optimal solution (locally Pareto optimal) that shows little variation under noise and uncertainty may be considered better than an optimal solution that is highly sensitive to these effects.

One way to measure algorithm performance is through the use of benchmark functions (also called test functions or test suites in the literature), which are artificial problems with a well-defined mathematical formulation, known solutions and a variety of features and difficulties. Benchmarking allows one to test algorithms in the task of obtaining approximations to the Pareto fronts of these test functions, such that the quality of these approximations, and hence the performance of the algorithm, can be measured. The key assumption behind this is that an optimization algorithm that performs well on those problems would also perform well on real-world problems. Additionally, by analyzing the performance of a given algorithm or group of algorithms on problems with different features, the designer can better understand the strengths and weaknesses of each method. The use of benchmarking drives research and development in computational intelligence, optimization and machine learning, allowing the weaknesses and strengths of MOEAs to be identified more comprehensively Wang et al. (2019b).

There are many benchmark problems available for assessing the performance of MOEA, such as those described by Cheng et al. (2017b) and Tian et al. (2017). Some of the well known and widely used test problems are DTLZ Deb et al. (2005), WFG Huband et al. (2006b, 2005), ZDT Zitzler et al. (2000), CTP Deb et al. (2001) among others, see Tian et al. (2017). Some of these benchmark problems have been repeatedly used for demonstrating difficulties of the Pareto dominance based MOEA algorithms in MaOPs, see for instance Ishibuchi et al. (2017). More recently, many studies have identified weaknesses with the most well-known benchmark problems, pointing out that new test problems are desirable in the literature to drive research and development of MOEA Cheng et al. (2017b); Wang et al. (2019b). Some studies have tried to identify desirable characteristics of benchmark problems in MaOPs, see for instance Zapotecas-Martínez et al. (2019), many of which are not present in most benchmarks in the literature. More recent benchmark problems have been proposed in Wang et al. (2019b); Matsumoto et al. (2019); Yue et al. (2019); Jiang et al. (2019); Yu et al. (2019); Ma and Wang (2019); Weise and Wu (2018). Benchmark problems are discussed in Section 2 and the desirable characteristics in Section 3.1.

Nowadays, with the variety of benchmarks available in the literature, researchers have reached a point where any new MOEA should be tested on a wide variety of benchmark test suites, with dozens of different problems. Instead of proposing yet another set of fixed benchmark problems, this paper takes a different stand. We propose a parameterized generator of scalable and customizable benchmark problems for MaOPs, able to generate problems that reproduce features present in other benchmarks as well as problems with some new features. Some years ago, the software engineering community developed the concept of generative testing. In short, generative testing allows one to specify properties the software should have; the testing library then generates test cases in a smart way Dias Neto et al. (2007); Pires and e Abreu (2018). Following this idea, we propose here the concept of generative benchmarking, a similar approach for benchmarking MOEAs: we develop in this paper a test generator, able to generate an infinite number of MOO problems, by varying a number of parameters that control specific features that the problem should have.

With this generative testing approach, one can generate scalable and customizable benchmark problems by controlling a number of features, such as scalability in the number of variables and objectives, bias, robustness, deceptiveness, multimodality, shape of the Pareto front, and constraints. These features are discussed in more detail in Section 3. The instance generator (see Section 4) offers the possibility of precise control over the spatial location of points in the objectives space, which is an essential characteristic to verify the efficiency of methods that propose to find solutions in specific regions of the space of the objectives, mainly in MaOPs. This work opens up a new perspective in benchmarking and testing MOEA. A number of test cases, combining different characteristics, can be randomly and automatically generated and the competing algorithms are then executed over each test case. Nonparametric statistical analysis for multiple comparison can then be used, following the best practices in experimental comparison of stochastic algorithms García et al. (2010); Derrac et al. (2011); Trawiński et al. (2012).

The paper is organized as follows: Section 2 presents an overview of benchmark problems in the literature as well as their limitations. Section 3 describes desirable characteristics for test problems and introduces a new set of benchmark functions that pose more challenges for MOEA. Section 4 describes the parameterized generator of scalable and customizable benchmark problems. At the end, Section 5 describes the conclusions and discusses some possible directions for future research.

## 2 An Overview of Test Functions

In this section, we review multi-objective benchmark problems from the literature: the Zitzler, Deb and Thiele (ZDT) test suite Zitzler et al. (2000), the Deb, Thiele, Laumanns and Zitzler (DTLZ) test suite Deb et al. (2005), and the Walking Fish Group (WFG) toolkit Huband et al. (2006b). A common feature of these problems is that the vector of optimization variables can always be written as $\mathbf{x} = (\mathbf{x}_p, \mathbf{x}_d)$, where $\mathbf{x}_p$ is a sub-vector whose elements control the position of its image in the objective space and $\mathbf{x}_d$ is a sub-vector whose coordinates control the distance of its image to the Pareto front. Recent benchmarks in the literature are discussed in Section 2.4. Finally, in Section 2.5 we discuss their main limitations.

### 2.1 ZDT benchmark

Zitzler, Deb and Thiele (ZDT) Zitzler et al. (2000) proposed the family of functions ZDT1-ZDT6 in order to compare the effectiveness of different MOEAs. Although this benchmark considers only problems with two objectives, it introduced the idea of having a subset of variables responsible for the position and a subset responsible for the distance in the objective space. This constructive approach is used in later works on benchmarking for MaOPs.

The ZDT problems have real-valued variables in the range $[0, 1]$, except for ZDT5, which has a binary domain Huband et al. (2006b). In all functions, $\mathbf{x}_p$ is composed of only one coordinate, $x_1$, which corresponds to the first objective value $f_1(\mathbf{x}) = x_1$. The second objective is defined by means of different expressions involving auxiliary functions $g$ and $h$, responsible for the convergence and the distribution of the population on the Pareto front in the different problems. The vector $\mathbf{x}_d$ has a variable size (29 for ZDT1 to ZDT3, 9 for ZDT4 and ZDT6, and 10 for ZDT5). The functions present Pareto fronts that are convex (ZDT1 and ZDT4), concave (ZDT2 and ZDT6) and disconnected (ZDT3) Huband et al. (2006b). However, the ZDT benchmark is limited to two objectives and its main focus is on the convergence of the solutions towards the Pareto front.
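To make the position-distance split concrete, the sketch below implements ZDT1 following its standard formulation ($f_1 = x_1$, $f_2 = g(1 - \sqrt{f_1/g})$); the function name and vector layout are merely illustrative.

```python
import numpy as np

def zdt1(x):
    """ZDT1: x[0] is the single position variable, x[1:] are distance variables."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]                                     # first objective: the position part
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)  # distance function g(x_d)
    f2 = g * (1.0 - np.sqrt(f1 / g))              # h(f1, g) = 1 - sqrt(f1 / g)
    return np.array([f1, f2])

# On the Pareto set all distance variables are 0, so g = 1 and f2 = 1 - sqrt(f1):
print(zdt1([0.25] + [0.0] * 29))  # -> f1 = 0.25, f2 = 0.5
```

Moving any of the 29 distance variables away from zero increases $g$, pushing the image away from the front without changing which region of the front it maps to.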

### 2.2 DTLZ benchmark

Deb, Thiele, Laumanns, and Zitzler (DTLZ) Deb et al. (2005) presented the DTLZ1-DTLZ9 problems, which are scalable to any number of objectives. Scalability is a desirable feature which makes these test functions suitable for testing MOEA in MaOPs Deb et al. (2005); Huband et al. (2006b). These problems have a well-defined solution, in which the distance variables assume a fixed value (e.g., $x_i = 0.5$ for all $x_i \in \mathbf{x}_d$ in DTLZ1-DTLZ5). In all problems, the Pareto front is located in the first orthant of the objective space and its shape is quite simple: either a sphere, a curve or a simplex.

DTLZ1 is an M-objective problem with a simple linear Pareto front. As pointed out by the authors Deb et al. (2005), the only difficulty provided by this problem is the convergence towards the Pareto front: the search space contains ($11^k - 1$) local Pareto-optimal fronts to attract the MOEAs. DTLZ2 and DTLZ3 use a function based on the spherical coordinate system to determine the position of the points in the objective space; the image of this function corresponds to the surface of a sphere in the first orthant of the objective space. The geometric characteristics of this surface make the objectives conflicting, and an adequate distribution of the position vectors guarantees a good distribution of the points on the Pareto front. In DTLZ4, each position variable $x_i$ is replaced by $x_i^{\alpha}$, where $\alpha$ is a bias parameter, in order to make the spatial distribution of the points in the objective space harder; $\alpha = 100$ is suggested in Deb et al. (2005). In DTLZ5 and DTLZ6, a slight modification in the auxiliary function turns the Pareto front into a curve contained in a sphere in the first orthant of the objective space in problems with three objectives. The DTLZ7 to DTLZ9 problems do not use the spherical coordinate system in the $M$-dimensional space. DTLZ7 presents a simple formulation, $f_i(\mathbf{x}) = x_i$ for $i = 1, \ldots, M-1$; the last objective is the only one dependent on the other variables of the problem, and its Pareto front has disconnected regions. DTLZ8 and DTLZ9 are the only problems in this family that present inequality constraints: the former presents $M$ constraints and the latter $M-1$. The Pareto front of the DTLZ8 problem with three objectives is composed of a straight segment and a triangular flat surface, and the Pareto front of the DTLZ9 problem is quite similar to that of DTLZ5.

In the problems DTLZ1 to DTLZ7, the decision space has $n = M + k - 1$ variables. The first $M - 1$ variables give the spatial location of the points in the objective space, while the remaining $k$ variables are responsible for the convergence of points to the Pareto front. Thus, in these problems a vector in the decision space can be written as $\mathbf{x} = (\mathbf{x}_p, \mathbf{x}_d)$, where $\mathbf{x}_p$ is the portion responsible for the spatial location of the points in the objective space and $\mathbf{x}_d$ is responsible for convergence. In DTLZ8 and DTLZ9, $n = 10M$ variables are suggested Deb et al. (2005). The suggested values of $k$ are $5$ for DTLZ1, $10$ for DTLZ2-DTLZ6 and $20$ for DTLZ7. The Pareto-optimal set of DTLZ1-DTLZ7 is obtained with $x_i = 0.5$ for $x_i \in \mathbf{x}_d$ in DTLZ1-DTLZ5, and with $x_i = 0$ for $x_i \in \mathbf{x}_d$ in DTLZ6 and DTLZ7. Lastly, the solution for the DTLZ8 and DTLZ9 problems is not presented.
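The decomposition $\mathbf{x} = (\mathbf{x}_p, \mathbf{x}_d)$ can be sketched for DTLZ2; this is an illustrative implementation of the standard formulation, with the helper name `dtlz2` chosen here for clarity.

```python
import numpy as np

def dtlz2(x, M=3):
    """DTLZ2: the first M-1 variables are the position part x_p,
    the remaining k variables are the distance part x_d."""
    x = np.asarray(x, dtype=float)
    xp, xd = x[:M - 1], x[M - 1:]
    g = np.sum((xd - 0.5) ** 2)          # g = 0 exactly when x_d = (0.5, ..., 0.5)
    f = np.full(M, 1.0 + g)
    for i in range(M):
        f[i] *= np.prod(np.cos(xp[:M - 1 - i] * np.pi / 2))
        if i > 0:                        # every objective but the first ends in a sine
            f[i] *= np.sin(xp[M - 1 - i] * np.pi / 2)
    return f

# With x_d = 0.5 the image lies on the unit sphere: sum of f_i^2 equals 1
f = dtlz2(np.array([0.3, 0.7] + [0.5] * 10))
print(np.sum(f ** 2))  # -> 1.0 (up to floating-point error)
```

Changing only $\mathbf{x}_d$ scales the whole objective vector by $(1 + g)$, which is why convergence and position can be controlled independently.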

### 2.3 WFG Benchmark

Huband et al. (2006b) divided the desirable characteristics into those related to the fitness landscape and those related to the Pareto optimal front geometry. They also analyzed the scalability and the separability of MOO problems. According to the authors, the problem should be well defined for any number of objectives and be scalable, since a problem with more decision variables than objectives in general presents more difficulties to the optimizer. For a vector $\mathbf{x}$ in the decision space, each variable $x_i$ is classified in one of two ways: $x_i$ is a distance parameter if its variation produces a new vector $\mathbf{x}'$ that changes the dominance relationship between $F(\mathbf{x})$ and $F(\mathbf{x}')$; otherwise, $x_i$ is a position parameter. If a variable has the same optimal value regardless of the values of the other decision variables, then this variable is separable; otherwise, it is non-separable. If every variable of an objective is separable, then this objective is separable. Consequently, if all objectives are separable, then the problem is separable. The solutions of the problem in the decision space are also classified according to the location of the optimal value of $x_i$ within the interval where this variable is defined. If it is close to one of the extremes of the interval, then $x_i$ is an extremal parameter. Otherwise, if it is located near the center of the interval, then $x_i$ is a medial (central) parameter.

With reference to the convergence in the objective space, the authors pointed out that a problem can be classified as unimodal or multimodal, deceptiveness being a specific type of multimodality. In a deceptive problem, the sub-optimal solutions lead the population of the MOEA to a region far from the one where the global optimum is located. A final aspect is the Pareto front. In a problem with $M$ objectives, the Pareto front is, in general, a surface of dimension $M - 1$. If the Pareto front dimension is less than $M - 1$, then the problem is degenerate. This surface can be concave, convex, flat or a mixture of these formats. This surface can also be connected, i.e. given any two points on the Pareto front, there is always a path contained in the front connecting these points. Otherwise, the Pareto front is disconnected.

Huband et al. (2006b) also indicated several recommendations for the construction of test problems, such as: a) extremal or medial parameters should not be used; b) the dimensions of the decision and objective spaces should be adjustable; c) the search and objective spaces should be dissimilar, i.e., the variables in the search space should have intervals of different sizes (dissimilar parameter domains), as should the solutions in the objective space (dissimilar tradeoff ranges); d) the optimal solution should be known; and e) the suite must present different shapes of the Pareto front.

Based on these recommendations, the authors presented a toolkit of nine functions, WFG1-WFG9. Starting from a vector of parameters $\mathbf{z}$, a sequence of transformations is applied in order to obtain another vector $\mathbf{t}$ that adds the desired characteristics; the problem is then defined by minimizing objectives computed from this transformed vector. The vector $\mathbf{z}$ has $k + l$ positions, with the first $k$ variables determining the position of its image in the objective space and the last $l$ variables responsible for the distance from this image to the Pareto front of the problem. The transformed vector $\mathbf{t}$ has $M$ positions, the first $M - 1$ coordinates being responsible for the position in the objective space and the last coordinate responsible for the distance to the Pareto front. In this way, this class of problems can be represented by a sequence of applications $Z \rightarrow T \rightarrow \mathbb{R}^M$, where $Z$ is the decision space, $T$ is the space of the transformed parameters and $\mathbb{R}^M$ is the objective space.

### 2.4 Other Test Suites

Since benchmark test problems are of great significance for the development of MOO algorithms, new test functions have been created to introduce new features and difficulties to compare the ability of the several MOEAs.

More recently, Wang et al. (2019b) proposed a test problem generator that enables the design of MOO problems with complex Pareto front boundaries. The generator allows the researcher to control the features of the boundaries, consequently varying the difficulty for the MOEAs in achieving uniformity-diversity and breadth-diversity. Matsumoto et al. (2019) examined the influence of the shape of the Pareto front as well as the shape of the feasible region. Since scalability alone is not enough to impose difficulties on the MOEAs, the authors proposed a set of seven test problems with hexagonal and triangular types of Pareto fronts. The results indicated differences between the algorithms used, given the different curvatures of the functions. Yue et al. (2019) proposed a novel family of 12 scalable multimodal MOO problems with different characteristics, such as scalability, presence of local Pareto optimal solutions, non-uniformly distributed Pareto shapes and discrete Pareto fronts, all of them continuous optimization problems. Helbig and Engelbrecht (2014) and Jiang et al. (2019) focused on dynamic MOO (DMOO) problems. The former described a set of characteristics of an ideal set of DMOO benchmarking functions and proposed different problems for each characteristic. The latter proposed 15 scalable problems challenging the current dynamic algorithms to solve them. Ma and Wang Ma and Wang (2019) designed a test suite consisting of 14 problem instances for constrained multi-objective optimization, which tries to model characteristics extracted from real-world applications. Yu, Jin and Olhofer Yu et al. (2019), in turn, proposed a set of test problems whose Pareto fronts consist of complex knee regions, an important geometric feature of the Pareto-optimal front, “where it requires an unfavorably large sacrifice in one objective to gain a small amount in other objectives” Yu et al. (2019).
Weise and Wu (2018) proposed a benchmark suite tunable towards different difficulty features for bit-string based problems. Although that work is not applicable to MaOPs and belongs to the discrete domain, it shows that the idea of a tunable benchmark suite is of interest to optimization in general.

These recent works bring important contributions to the research and development of optimization algorithms in different ways, especially by modeling features from practical problems and adding new aspects to the performance evaluation of MOEAs. At the same time, however, this brings a notable drawback to researchers developing new methods: one has to work with 3 to 6 benchmarks from the literature in order to compare different competing MOEAs.

### 2.5 Limitations

The main limitation of the ZDT family is that its problems are restricted to two objectives. Nevertheless, because it presents a large number of variables responsible for population convergence and is easy to implement, this set is still useful for evaluating algorithms on problems with two objectives.

In the DTLZ family, the most used problems are DTLZ1 to DTLZ4. They are scalable to any number of objectives but have a small variety of Pareto front shapes, namely a plane for DTLZ1 and a sphere for the others. In addition, the number of variables related to the positioning of solutions in the objective space is restricted to $M - 1$. The authors suggest a way to increase the difficulty of the problems by replacing the nominal value of each position variable $x_i$ by the meta-variable $y_i$ given by the mean value of $q$ variables, using Eq. (1)

$$y_i = \frac{1}{q}\sum_{k=(i-1)q+1}^{iq} x_k \qquad (1)$$

which makes the spatial location of the points in the objective space depend on a larger number of variables. However, this strategy is not challenging enough for the current MOEAs.
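The block-averaging of Eq. (1) can be sketched as follows (a minimal Python illustration; the helper name `meta_variables` is ours):

```python
import numpy as np

def meta_variables(x, q, M):
    """Meta-variable of Eq. (1): y_i is the mean of the i-th block
    of q consecutive decision variables (blocks do not overlap)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean(x[(i - 1) * q:i * q]) for i in range(1, M)])

# With q = 3 and M = 3, each of the two meta-variables averages 3 variables:
print(meta_variables([0.2, 0.4, 0.6, 1.0, 1.0, 1.0], q=3, M=3))  # -> y = (0.4, 1.0)
```

Because the blocks are disjoint, each meta-variable can still be set independently of the others, which is one reason this strategy adds little real difficulty.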

DTLZ5 and DTLZ6 have a degenerate Pareto front, but present inconsistencies in problems with more than three objectives Huband et al. (2006a). DTLZ7 has a very simple formulation, $f_i(\mathbf{x}) = x_i$ for $i = 1, \ldots, M - 1$, with only the last objective having a more elaborate algebraic expression. In addition, its optimal solution is an extremal value ($x_i = 0$ for $x_i \in \mathbf{x}_d$). The DTLZ8 and DTLZ9 problems are partially degenerate and their results are difficult to interpret in high dimensions, besides not presenting a known solution set for validation of results.

Another limitation also present in the DTLZ family is the lack of formulations related to robust optimization, as well as the absence of inequality and equality constraints. DTLZ8 and DTLZ9 problems, as mentioned, present inequality constraints, but without any possibility of customization.

The WFG toolkit, in turn, was carefully crafted, building on a list of virtues and failures identified after an extensive and rigorous analysis of the test suites proposed until then. Although WFG offers a number of advantages, it also has some significant limitations. For example, a characteristic present in WFG1 is the idea of flat regions in the objective space, i.e. regions where small perturbations in the variables do not affect the value of the objectives in a straightforward way. A global optimum in a flat region may lead to similar results by MOEAs with different performance. The lack of geometric meaning of the transformations also makes it difficult to analyze the results obtained, restricting the analysis to the existing evaluation metrics.

The characteristics of the most classical test suites are also discussed in a number of works in the literature, see Ishibuchi et al. (2016), Huband et al. (2006b) and Zapotecas-Martínez et al. (2019). It is important to realize that even for the most popular test suites, such as ZDT, DTLZ and WFG, well-diversified approximate solutions can be easily attained by MOEAs. The most recent research has focused either on specific types of problems, such as DMOO Jiang et al. (2019); Yu et al. (2019), or on variations of the Pareto front shape Matsumoto et al. (2019). However, the problems remain fixed and without any customization, and the properties of deceptiveness and robustness are not included.

## 3 A Customizable Family of Benchmark Problems

### 3.1 Desirable Characteristics

In principle, test functions should consist of a variety of problems capable of capturing a wide variety of desirable characteristics. To list the main characteristics, we consider in Table 1 some recommendations already made in the literature, combined with our own ideas. The related references are also indicated.

Trying to cover these directives, this section introduces a new set of scalable and customizable benchmark problems for MaOPs. The proposed test suite uses the bottom-up approach Deb et al. (2005); Zapotecas-Martínez et al. (2019), in which the Pareto optimal front, the objective space, and the decision space are constructed separately, facilitating the design of MOO problems. We propose the minimization problem described in Eq. (2), using either a multiplicative or an additive approach.

$$\min F(\mathbf{x}),\quad F:\mathbb{R}^N\to\mathbb{R}^M \quad \text{subject to} \quad \begin{cases} \phi_i(\mathbf{x}) \le A_i, & i = 1\ldots k_1 \\ \xi_j(\mathbf{x}) = I_j, & j = 1\ldots k_2,\ I_j \in \{1\ldots M\} \end{cases} \qquad (2)$$

where $F(\mathbf{x}) = F_p(\mathbf{x}) \cdot F_d(\mathbf{x})$ (multiplicative approach) or $F(\mathbf{x}) = F_p(\mathbf{x}) + F_d(\mathbf{x})$ (additive approach). In this problem, a vector in the decision space can be written as $\mathbf{x} = (\mathbf{x}_p, \mathbf{x}_d)$, where $\mathbf{x}_p$ is the sub-vector responsible for the positioning of the points in the objective space and $\mathbf{x}_d$ is the sub-vector responsible for the convergence of $F(\mathbf{x})$ towards the Pareto front. $\phi_i(\mathbf{x})$ and $\xi_j(\mathbf{x})$ are constraints. The details of the functions $F_p$ and $F_d$ are discussed next.

### 3.2 Function Fp(x)

The function $F_p(\mathbf{x})$ is responsible for the relative position of the points in the objective space and for the conflict among the objectives. The basic definition is given in Eq. (7), which describes, in spherical coordinates, the surface of a hypersphere of radius 1 in the first orthant of the objective space, and depends on at least $M - 1$ variables. The preference for this equation is due to the fact that it describes the spatial location with simplicity and precision. Two modifications are presented that aim to increase the number of variables and the diversity of shapes of this surface.

One way to increase the difficulty presented to the optimizer in finding the best distribution of points in the objective space is to replace each variable of $\mathbf{x}_p$ by the meta-variable $y_i$ defined in Eq. (3). It corresponds to the absolute value of the mean of $q + t$ values in a window of variables from the original decision space.

$$y_i = \frac{1}{q+t}\left|\sum_{j=(i-1)q+1}^{iq+t} x_j\right|, \quad -1 \le x_j \le 1 \qquad (3)$$

Note that the difference between the meta-variables presented in Eqs. (1) and (3) is the addition of $t$ to the upper bound of the summation. Unlike the meta-variable (1) proposed for the DTLZ family, in Eq. (3) each sub-interval over which the average is calculated shares $t$ elements with the previous sub-interval and $t$ elements with the next sub-interval, increasing the dependency between the decision space variables. However, it is necessary that $t < q$ in each sub-interval, guaranteeing the presence of at least one variable independent of the neighbouring intervals and, consequently, that there is at least one solution reaching any region of the Pareto front. Thus, the subspace $\mathbf{x}_p$ responsible for positioning the points in the decision space will have $(M-1)q + t$ variables. The distribution of the meta-variables in the parameter vector is presented in Eq. (4).

$$y_1 = \frac{1}{q+t}\left|\sum_{j=1}^{q+t} x_j\right|, \quad y_2 = \frac{1}{q+t}\left|\sum_{j=q+1}^{2q+t} x_j\right|, \quad \ldots, \quad y_{M-1} = \frac{1}{q+t}\left|\sum_{j=(M-2)q+1}^{(M-1)q+t} x_j\right| \qquad (4)$$

with $\mathbf{x} = (x_1, \ldots, x_q, [x_{q+1}, \ldots, x_{q+t}], x_{q+t+1}, \ldots, x_{2q}, [x_{2q+1}, \ldots, x_{2q+t}], \ldots, x_{(M-1)q+t})$, where each bracketed group of $t$ variables is shared by two consecutive averaging windows.
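The overlapping windows of Eqs. (3)-(4) can be sketched in Python (an illustrative helper; the 0-based slice `x[(i-1)q : iq + t]` corresponds to the 1-based indices $(i-1)q+1, \ldots, iq+t$ of Eq. (3)):

```python
import numpy as np

def overlapping_meta_variables(x, q, t, M):
    """Meta-variable of Eq. (3): consecutive averaging windows of length q + t
    share t variables; the absolute value maps each y_i into [0, 1]
    when x_j is in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    return np.array([abs(np.mean(x[(i - 1) * q:i * q + t])) for i in range(1, M)])

# q = 2, t = 1, M = 3: x has (M-1)q + t = 5 variables and both windows share x_3:
print(overlapping_meta_variables([0.3, 0.3, 0.3, 0.9, 0.9], q=2, t=1, M=3))
# -> y = (0.3, 0.7)
```

With $t = 0$ the windows become disjoint and the expression reduces to the block mean of Eq. (1), up to the absolute value.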

Figure 1 illustrates the effect of using the meta-variables on a MOO problem. The DTLZ2 problem with three objectives was selected, using the meta-variable of Eq. (3). Figure 1(a) presents the solution obtained with $t = 0$, which corresponds to the use of the meta-variable defined by Eq. (1), whereas in Figure 1(b) a parameter $t > 0$ was used. For both, the NSGA-III algorithm Deb and Jain (2014) available in the PlatEMO platform Tian et al. (2017) was used with the same hyper-parameters. When comparing Figures 1(a) and 1(b), we immediately notice the degradation effect on the spatial distribution of the solutions caused by the proposed meta-variables.

The use of this meta-variable has two important advantages:

1. Considering the decomposition $\mathbf{x} = (\mathbf{x}_p, \mathbf{x}_d)$ of a vector in the decision space, the component $\mathbf{x}_p$ generally has $M - 1$ components. Thus, increasing the number of decision space variables usually means increasing only the size of the $\mathbf{x}_d$ component. The meta-variable allows scaling the $\mathbf{x}_p$ vector to more than $M - 1$ variables. This more flexible form of decision space scaling allows the use of this class of test problems with large-scale MOO algorithms Cheng et al. (2017a).

2. In general, the vector $\mathbf{x}$ is randomly initialized according to a uniform distribution over a range $[a, b]$. This distribution has mean $(a+b)/2$ and variance $(b-a)^2/12$, making the initial population symmetrically distributed in the central region of this range. The meta-variable is initialized in the same way. However, after the transformation of Eq. (3), the initial population accumulates non-symmetrically at the beginning of the interval $[0, 1]$. This biased configuration poses a new challenge to optimization algorithms.

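The bias described in item 2 can be checked empirically with a small Monte Carlo sketch (the sample sizes and seed below are arbitrary, and the window size of 10 variables is just an example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(100_000, 10))  # uniform initial population, 10 vars per window
y = np.abs(x.mean(axis=1))                      # meta-variable of Eq. (3): |mean| of each window

# The raw variables are centred at 0, but the transformed meta-variables
# pile up near 0, the beginning of the interval [0, 1]:
print(round(float(x.mean()), 3))  # close to 0 (symmetric initialization)
print(round(float(y.mean()), 3))  # well below 0.5, the midpoint of [0, 1]
```

The concentration effect grows with the window size, since the mean of many independent variables has a small standard deviation before the absolute value is applied.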

The shape of this surface can be controlled by changing the norm of its points. If $\mathbf{x}$ is a point in $\mathbb{R}^M$, the $p$-norm (also called $\ell_p$ norm) of $\mathbf{x}$ is given by Eq. (5)

$$\|\mathbf{x}\|_p = \left(\sum_{i=1}^{M} |x_i|^p\right)^{1/p} \qquad (5)$$

where $p > 0$ and $|x_i|$ is the absolute value of $x_i$ (note that the absolute value is itself the norm on $\mathbb{R}$). If $p = 2$ the $p$-norm is called the Euclidean norm, and if $p = 1$ it is the Manhattan (or taxicab) norm. The infinity norm (or Tchebycheff norm) of the vector $\mathbf{x}$, denoted by $\|\mathbf{x}\|_\infty$, is defined as $\max_i |x_i|$; consequently $\lim_{p\to\infty} \|\mathbf{x}\|_p = \|\mathbf{x}\|_\infty$. If $0 < p < 1$, expression (5) does not define a norm, but a quasi-norm, since the triangle inequality is not satisfied Rudin (1991). Nonetheless, any value of $p$ greater than zero can be used in Eq. (5) to produce constant-norm (or quasi-norm) surfaces. Figure 2 presents constant-norm curves with different $p$ values in $\mathbb{R}^2$.
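Eq. (5) and the quasi-norm behavior for $p < 1$ can be verified directly (a minimal sketch; the helper name `p_norm` is ours):

```python
import numpy as np

def p_norm(x, p):
    """p-norm of Eq. (5); for 0 < p < 1 this is only a quasi-norm."""
    return float(np.sum(np.abs(np.asarray(x, dtype=float)) ** p) ** (1.0 / p))

v = [3.0, 4.0]
print(p_norm(v, 2))            # Euclidean norm -> 5.0
print(p_norm(v, 1))            # Manhattan norm -> 7.0
print(max(abs(c) for c in v))  # Tchebycheff norm, the limit p -> infinity: 4.0

# For p < 1 the triangle inequality fails, so Eq. (5) is only a quasi-norm:
print(p_norm(v, 0.5) > p_norm([3.0, 0.0], 0.5) + p_norm([0.0, 4.0], 0.5))  # -> True
```

Varying $p$ in this expression is what bends the constant-norm surfaces of Figure 2 from convex ($p > 1$) through flat ($p = 1$) to concave-looking quasi-norm shapes ($p < 1$).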

In this way, the function $F_p(\mathbf{x})$ can be defined by:

$$F_p(\mathbf{x}) = T(\mathbf{x})\,h(\mathbf{x}) \qquad (6)$$

where $T(\mathbf{x})$, defined by Eq. (7) and dependent on the meta-variables $y_1, \ldots, y_{M-1}$, is responsible for the distribution of the points in the objective space, and $h(\mathbf{x})$ controls the shape of the resulting surface.

$$T(\mathbf{x}) = \begin{cases} t_1(\mathbf{x}) = \cos(y_1\pi/2)\cos(y_2\pi/2)\cdots\cos(y_{M-2}\pi/2)\cos(y_{M-1}\pi/2) \\ t_2(\mathbf{x}) = \cos(y_1\pi/2)\cos(y_2\pi/2)\cdots\cos(y_{M-2}\pi/2)\sin(y_{M-1}\pi/2) \\ t_3(\mathbf{x}) = \cos(y_1\pi/2)\cos(y_2\pi/2)\cdots\sin(y_{M-2}\pi/2) \\ \quad\vdots \\ t_{M-1}(\mathbf{x}) = \cos(y_1\pi/2)\sin(y_2\pi/2) \\ t_M(\mathbf{x}) = \sin(y_1\pi/2) \end{cases} \qquad (7)$$
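The spherical map of Eq. (7) can be sketched as follows (an illustrative implementation; the function name `T` mirrors the notation above):

```python
import numpy as np

def T(y):
    """Spherical map of Eq. (7): M-1 meta-variables in [0, 1] are mapped
    to a point on the unit sphere in the first orthant of R^M."""
    a = np.asarray(y, dtype=float) * np.pi / 2
    M = len(a) + 1
    t = np.ones(M)
    for i in range(M):
        t[i] = np.prod(np.cos(a[:M - 1 - i]))  # t_1 uses all cosines ...
        if i > 0:
            t[i] *= np.sin(a[M - 1 - i])       # ... t_M is just sin(y_1 * pi / 2)
    return t

# Every image has unit Euclidean norm, which is what makes the objectives conflict:
pt = T([0.2, 0.8])
print(np.sum(pt ** 2))  # -> 1.0 (up to floating-point error)
```

Since all coordinates are non-negative for $y_i \in [0, 1]$, the image always falls in the first orthant, and sweeping the $y_i$ sweeps the whole spherical patch.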

Another desirable characteristic is the dissimilarity of the objectives. In general, each objective of benchmark problems for multiobjective optimization has its optimal values limited to the range $[0, 1]$, but this hardly reflects real-world problems. Some authors, such as Huband et al. (2006a) and Cheng et al. (2017b), presented functions with dissimilar objectives, the range of each objective being scaled by a power or a multiple of 2. However, these problems only present objectives with non-negative values. An optimization problem with dissimilar objectives assuming both positive and negative values is presented in Eq. (8)

 (8)

where $f_i$ represents the $i$-th objective value obtained after the evaluation of $F(\mathbf{x})$.

Since each original objective is bounded, the transformed objectives assume both negative and positive values. Note that this transformation merely translates the origin of the objective space to a new point; it does not change the surface’s concavity.

### 3.3 Function Fd(x)

This function is responsible for the convergence of the points towards the Pareto front and, in some cases, for the shape of the Pareto front. It makes use of the auxiliary functions $\phi(\mathbf{x})$ and $g(\mathbf{x})$.

#### Auxiliary Function ϕ(x)

The auxiliary function ϕ(x) aims to incorporate information about the position of Fp(x) relative to the hyper-diagonal (or another reference vector d) in the objective space. For a point x in the decision space, the point Fp(x) lies on a surface of constant p-norm (quasi-norm) in the objective space, with ||Fp(x)||p = 1. The angle φ(x) between the hyper-diagonal d and Fp(x) is calculated by means of

 φ(x) = arccos( d⋅Fp(x)ᵀ / (|d| |Fp(x)|) ) (9)

This value must be normalized into the range [0, 1]. For the hyper-diagonal vector d, the angle is maximal when the vector Fp(x) is aligned with some vector of the canonical basis {e1, …, eM}, that is, when Fp(x) = ei for some i. In this case, the maximum angle is φmax = arccos(1/√M), and the value of the normalized distance function is given by

 ϕ(x) = φ(x)/φmax (10)
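A numerical sketch of Eqs. (9)-(10) for the hyper-diagonal reference follows (our naming; in the first orthant the maximum angle to the hyper-diagonal d = (1, …, 1) is arccos(1/√M), attained on the canonical axes):

```python
import math

def phi(F):
    """Normalized angular distance of Eqs. (9)-(10) between F and the
    hyper-diagonal d = (1, ..., 1)."""
    M = len(F)
    d = [1.0] * M
    dot = sum(di * fi for di, fi in zip(d, F))
    nd = math.sqrt(M)                                  # |d| for the hyper-diagonal
    nf = math.sqrt(sum(fi * fi for fi in F))
    ang = math.acos(max(-1.0, min(1.0, dot / (nd * nf))))
    ang_max = math.acos(1.0 / math.sqrt(M))            # angle d vs. a canonical axis
    return ang / ang_max

print(phi([1.0, 1.0, 1.0]))  # 0.0 on the hyper-diagonal
print(phi([1.0, 0.0, 0.0]))  # 1.0 on a canonical axis
```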

#### Auxiliary Function g(x)

The auxiliary function g(x) is responsible for the convergence of the points in the objective space. In this work we present two versions: a deceptive one, and a multi-modal one with brittle global optima and stable local optima. Other versions of this function can be incorporated into this proposal in order to satisfy special needs.

The first version, a parameterized deceptive function, is characterized by the presence of a global optimum and two local minima. Another relevant characteristic of this function is the influence of the relative position of the point in the objective space, given by the function ϕ(x) (Eq. (10)). The topology of this problem favors the sub-optimal solutions, making the global optimum difficult to achieve. The proposed deceptive function g(x) is defined by Eq. (13). It presents, for each variable xi, two local minima (at the beginning and at the end of the domain) and one global optimum located in a deep valley whose width is controlled by the parameter r.

Figure (a) presents the construction details of the function z. Outside the valley, the function is a line connecting the end points of the domain to the borders of the valley. Inside the valley, z is a complete cycle of the cosine trigonometric function connecting its borders. The v and r parameters correlate, respectively, the position and distance of a point in the objective space, and introduce a bias in the decision space. They are defined as follows:

 v(xp, xi) = (1.2 + sin(2π(1 − ϕ(xp)) 1.05i)) / 2.4 (11)
 r(xi) = 0.015 cos(2kπ ϕ(xp)) + 0.025 (12)

This special format is motivated by hiding the location of the optimal value behind a small opening of the valley determined by the parameter r, leading the population to the local minima located at the beginning and the end of the range. Note that v and r use the normalized function ϕ(x). As seen before, this function estimates the relative position of a point in the objective space, controlling the range of the valley (segment in Figure (a)). In Eq. (12), the cosine factor produces two large ranges and one narrow range; in a general way, large valleys and narrow valleys will be produced. Figure (d) shows the contour lines of z for the first variable of the vector xd.

The proposed deceptive function is defined by

 g(x) = ∑_{xi∈xd} z(xp, xi) (13)

where

 z(xi) = 5(xi + r − v)/(v − r) + 10,  if 0 ≤ xi < v − r
 ⋮ (14)

Figure (b) presents the function z for a single variable with the parameter r equal to 0.1, 0.05 and 0.01. Figure (c) presents the variation of z when the first variable of x varies from 0 to 1. Figures (a) to (d) show the contour lines of g for different parameter values.

The second version of the function g(x) is a special multi-modal distance function, with a brittle global optimum and several local optima. The global optimum and some of the local optima near it are sensitive to the presence of noise on their optimal values in the decision space. However, there is a region with stable local optima, whose values present little variation when this same noise is added to the optimal values in the decision space. This type of function is appropriate for the validation of algorithms designed for robust optimization Petersen and Tempo (2014); Meneghini et al. (2016); He et al. (2018); Wang and Fang (2018), whose goal is to find stable solutions, that is, solutions that, when evaluated, show little variation when some noise is added in their neighborhood. The multi-modal function is a combination of logistic and trigonometric functions. It has no extremal variables, and the robust solutions are located in a stable range of the domain. The global optimal solution (precisely at xi = 0.6) is sensitive to noise in its neighborhood. The robust version of g(x) is defined by Eq. (15).

 g(x) = ∑_{xi∈xd} [ −w(xi)(y(xi) − z(xi)) + (y(xi) − 1)/2 + e^{−60xi} + 0.631 ] (15)

where

 y(xi) = 1/(1 + e^{−20(xi − 0.6)}) (16)
 z(xi) = 1/(1 + e^{−20(xi − 0.7)}) (17)
 w(xi) = cos(40π xi) (18)

The function is illustrated in Figure (a) for a single variable xi. Figure (b) shows the effect caused by the presence of noise in the variables of the function. The figure shows solutions obtained with the optimal value (blue dots) and the stable local optima value (red dots) in the DTLZ2 problem with two objectives. To verify the robustness of these solutions, a noise with uniform distribution was added to the global optimum and to the stable local optimum. It is possible to see that when these solutions were re-evaluated, there were no changes in the distribution and convergence of the stable solutions (black dots), whereas the optimal solutions presented unsatisfactory performance in the presence of noise (yellow dots). In this sense, the stable local optimal solution is robust, while the global optimal solution is not. Note that using this function it is possible to create test problems where the robustness of the solutions can be tested in all objectives. Recent work on robust optimization uses test functions where robustness can be controlled for only one objective, or uses common test problems Gaspar-Cunha et al. (2013); Meneghini et al. (2016); He et al. (2018); Mirjalili et al. (2018).
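The behavior described here can be probed with a direct implementation of Eqs. (15)-(18). The grouping of terms below is our reading of the formula; under it, the brittle global optimum sits at xi = 0.6 (where the summand is approximately zero), while values near the upper end of the domain are stable at roughly 0.62:

```python
import math

def y(x):  # logistic term of Eq. (16)
    return 1.0 / (1.0 + math.exp(-20 * (x - 0.6)))

def z(x):  # logistic term of Eq. (17)
    return 1.0 / (1.0 + math.exp(-20 * (x - 0.7)))

def w(x):  # trigonometric term of Eq. (18)
    return math.cos(40 * math.pi * x)

def g_term(x):
    """One summand of Eq. (15), as we read it."""
    return -w(x) * (y(x) - z(x)) + (y(x) - 1.0) / 2.0 + math.exp(-60 * x) + 0.631

print(round(g_term(0.60), 3))  # ~0.0  : brittle global optimum
print(round(g_term(0.61), 3))  # jumps : very sensitive to a small perturbation
print(round(g_term(0.95), 3))  # ~0.625: stable (robust) region
```

The jump between x = 0.60 and x = 0.61 illustrates the brittleness: a perturbation of 0.01 moves the summand from nearly zero to about 0.28, while the same perturbation in the stable region barely changes the value.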

### 3.4 Equality and Inequality Constraints

The proposed optimization problem easily allows the incorporation of equality and inequality constraints through the manipulation of the function ϕ(x) defined previously in Eq. (10) and a reference vector d. Since this function gives the spatial location of points in the objective space, the main idea is to select solutions with special values of ϕ(x) for a particular vector d. Since ϕ(x) ∈ [0, 1], we just select some thresholds A, B, with 0 ≤ A ≤ B ≤ 1, and define the following constraints:

 ϕ(x) ≥ A (19)
 ϕ(x) ≤ A (20)
 A ≤ ϕ(x) ≤ B (21)

In addition to the constraints presented in Eqs. (19), (20) and (21), it is possible to select large regions in the objective space in the following way: consider a problem where the objective space is located in the first orthant. In this case, consider the angles between the point Fp(x) and the vectors of the canonical basis ei, with i = 1, …, M. Let j be the index of the objective associated with the minimal angle. The function ξ(x) associates the vector x to the objective j whose axis has the smallest angular distance to Fp(x). It is easy to see that ξ is not an injective function, because different points can be associated with the same objective; moreover, ties can occur (points on the hyper-diagonal, for example, are equidistant from all axes). In case of ties, take the smallest such index j and define the constraint represented in Eq. (22).

 ξ(x) = j (22)

In this way, using this classification it is possible to select one or more regions that should be included in or excluded from the Pareto front. All the possibilities listed here can be incorporated into the MOP explicitly as constraints or as penalties in the distance function Fd(x). Figure 6 illustrates some examples. Figures (a) and (b) present the Pareto front of a three-objective problem obtained by applying the constraint defined in Eq. (19). Instead of a single reference vector d, each of the canonical basis vectors e1, e2 and e3 was used as reference. Figure (a) uses the same threshold for all vectors, while Figure (c) uses thresholds equal to 0.5, 0.3 and 0.1 for the vectors e1, e2 and e3, respectively. Notice that the resulting Pareto front is similar to an inverted Pareto front, but it is generated as a result of adding constraints. The same procedure is used in a problem with five objectives, using the five canonical basis vectors. The Pareto front is shown in Figure (b) using the RadViz visualization tool Hoffman (2000). In this figure, the red points represent the reference vectors. These examples illustrate how the constraint set can be used to obtain a rich variety of shapes for the Pareto front. Figure (d) illustrates the use of the constraint defined by Eq. (21), which can produce a disconnected Pareto front. Lastly, Figure (e) shows the constraint defined by Eq. (22) in a problem with eight objectives, using the CAP-vis tool Meneghini et al. (2018); Koochaksaraei et al. (2017). In this example, the constraint in Eq. (22) was defined as ξ(x) = 2, so the Pareto front consists only of points near the axis of the second objective in terms of angular distance. For more details about reading this chart, see Meneghini et al. (2018).
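The ξ(x) classification can be sketched as follows (our reconstruction: each point is assigned the objective axis with the smallest angular distance, with ties resolved by the smallest index). In the first orthant the angle to axis ei is arccos(Fi/|F|), so the smallest angle corresponds to the largest coordinate:

```python
import math

def xi(F):
    """Index (1-based) of the canonical axis angularly closest to F, as in Eq. (22).
    Assumes F lies in the first orthant (all coordinates non-negative)."""
    nf = math.sqrt(sum(f * f for f in F))
    # arccos is decreasing, so the minimal angle is the maximal F_i / |F|;
    # max() returns the first (smallest) index on ties.
    best = max(range(len(F)), key=lambda i: F[i] / nf)
    return best + 1

print(xi([0.9, 0.1, 0.1]))  # 1: closest to the first axis
print(xi([0.2, 0.2, 0.9]))  # 3: closest to the third axis
```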

## 4 Proposed Generator of benchmark problems

This section presents a new test function generator for scalable and customizable benchmark problems in MaOPs. This new set uses the bottom-up approach and allows the creation of scalable problems for any number of objectives, presenting Pareto fronts with different shapes and topologies.

The auxiliary functions previously defined allow the formulation of several customizable optimization problems. We refer to them as the Generalized Position-Distance (GPD) test functions. The generated functions are all scalable and use either the deceptive version, Eq. (13), or the multimodal version, Eq. (15), of the function g(x). The dissimilarity of the objectives can also be controlled. In addition, each GPD test problem can present any of the constraints described in Section 3.4.

All the functions generated use the bottom-up approach in the multiplicative or additive form, as described in Eq. (4).

 min F(x), F: R^N → R^M
 subject to ϕi(x) ≤ Ai, i = 1…k1
            ξj(x) = Ij, j = 1…k2, Ij ∈ {1…M} (23)

The decision space is separable, with the vector xp responsible for the relative position of F(x) in the objective space (position parameters) and xd for the convergence of points to the Pareto front (distance parameters). The position vector xp has (M − 1)q + t coordinates, where M is the number of objectives and q and t are the parameters of the meta-variable defined by Eq. (3). If q = 1 and t = 0 then yi = |xi| and Eq. (7) gives the usual multidimensional polar coordinates. Then, to define any problem instance, it is necessary to specify the number of objectives M. The number of variables of the decision space is given by N = (M − 1)q + t + nd, where q and t are the meta-variable parameters and nd is the number of variables used in the distance functions.

The function Fp(x) defines the relative position of points in the objective space and the p-norm (or quasi-norm) of the Pareto front. This function uses the parameter p (previously defined) and the function h(x), the latter being used to normalize the Pareto front, affecting its shape.

Points on a constant norm surface in a high-dimensional space have unbalanced coordinates. For example, in a space with M dimensions, a canonical basis vector ei has p-norm equal to 1 for any value of p. Points located on the edges of the first orthant of the space also have constant p-norm equal to 1. On the other hand, in the extreme case, a vector parallel to the hyper-diagonal with p-norm equal to 1 has the following coordinates:

 v = d/||d||p (24)
   = (M^{−1/p}, …, M^{−1/p}) (25)

For a fixed p value, the coordinate M^{−1/p} decreases very quickly as the dimension grows. Figure 7 exemplifies the evolution of this value with respect to the space dimension M. It is possible to see that it decreases with M, reaching values close to zero in high dimensionality. This can be handled easily by determining an ideal value for p. As an example, using p = log2(M) yields coordinates equal to 1/2 for any M. In this way, for MaOPs, we suggest larger values of p.
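The decay of the hyper-diagonal coordinate M^{−1/p}, and how raising p compensates for it, can be seen in a short sketch (the choice p = log2(M), which pins the coordinate at exactly 1/2, is one illustrative option):

```python
import math

# Coordinate of the unit p-norm hyper-diagonal vector in M dimensions: M^(-1/p)
for M in (3, 10, 50, 100):
    fixed_p = M ** (-1 / 2)               # p = 2: vanishes as M grows
    adaptive_p = M ** (-1 / math.log2(M)) # p = log2(M): always 0.5
    print(M, round(fixed_p, 4), round(adaptive_p, 4))
```

The adaptive choice works because M^(−1/log2 M) = 2^(−log2 M / log2 M) = 2^(−1) = 1/2, independently of M.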

The function Fd(x) establishes the radial distance between the points in the objective space and the Pareto front using some auxiliary function. The literature presents many examples of such functions, with distinct characteristics Wang et al. (2019b, a); Deb et al. (2005). The auxiliary functions introduced in this paper present the following peculiarities:

1. Use of the relative position of a point in the objective space, through the function ϕ(x), correlating position and distance. With this feature, changing the relative position (by changing the position variables xp) changes the distance function value even for the same distance variables xd;

2. Deceptiveness: a large portion of the decision space leads to a suboptimal distance value while optimal distance values are restricted to a small region of this space;

3. Robustness: an optimal distance solution is sensitive to small disturbances, while suboptimal solutions have more stability in the presence of noise in the distance variables. The proposed auxiliary function yields a robust multi-objective optimization problem where the robustness can be analyzed on all objectives. In a recent paper, He et al. (2018) presented a robust multi-objective evolutionary algorithm, but the test function used enables analysis of robustness of solutions in just one objective.

The deceptive auxiliary function uses a parameter k in Eq. (12) that defines the number of large and narrow valleys. The robust auxiliary function has no parameters. Since the smallest value of each of its components in both functions is equal to zero, the corresponding distance function is Fd(x) = g(x) in the additive approach and Fd(x) = 1 + g(x) in the multiplicative case.

In addition to the functions listed above, it is possible to create other functions by manipulating the proposed ϕ(x) and g(x) functions. For example,

 Fd(x) = ϕ(x)^5/2 + g(x) + 0.5 (26)

presents a Pareto front with convex and concave regions, which are symmetrical with respect to the hyper-diagonal of the first orthant of the objective space. Figures (a) and (b) show the Pareto front of this problem with two and three objectives.

A function with disconnected Pareto front and symmetric with the hyper-diagonal of the first orthant in the objective space can be produced by using:

 Fd(x) = cos(3πϕ(x))^2/10 + g(x) + 1 (27)

Figures (c) and (d) show the Pareto front of this problem with two and three objectives.

In addition to the possibilities listed above, the constraints presented in Eqs. (19) to (22) can be added, as well as the dissimilarity of objectives presented in Eq. (8).

This proposal for generating test problem instances focuses on the combination of the parameters presented for the composition of the meta-variable, the position function Fp(x) and the distance function Fd(x), as well as the combined use of one or more constraints. The composition of these parameters is capable of producing an unlimited number of multi-objective, large-scale optimization benchmark problems that can be multimodal, deceptive, dissimilar, constrained, among other features.

Table 2 presents the main parameters of the proposed instance generator. Suppose as an example that a robust problem with the following characteristics is required:

1. Two objectives and several decision variables;

2. Convex, dissimilar objectives and disconnected Pareto front;

3. Multiplicative approach.

This specific problem can be produced by setting the following parameters:

1. Set M = 2 and use the meta-variables in Eq. (3), choosing the parameters q and t to obtain several decision variables, generating the minimization problem min F(x), F: R^N → R^2.

2. Set the parameter p and use it in the Fp(x) function. Include the ϕ(x) ≤ 0.3 and ϕ(x) ≥ 0.7 constraints to obtain a disconnected Pareto front. Other threshold values can be used instead of 0.3 and 0.7, as well as another reference vector d, and any other value greater than 1 for the parameter p.

3. Define Fd(x) = 1 + g(x), where g(x) is the robust function defined by Eq. (15).

A deceptive MaOP for which the nominal values of the objectives are not discrepant, but in which the first objective is always higher than the others, can be created by using the deceptive auxiliary function, Eq. (13), together with the ξ(x) = 1 constraint.

The following is a quick roadmap for building test function instances.

Input:

Number of objectives M, number of distance variables nd, reference vector d, a list of parameters (p, q, t, k, and the thresholds A and B), and other features that should be imposed on the problem (deceptiveness, dissimilarity, separability, constraints etc.);

Decision Space design:

Initialize the decision space. If the meta-variable is used, set np = (M − 1)q + t, else set np = M − 1. Then generate the decision vector x = (xp, xd), where xp ∈ R^{np} and xd ∈ R^{nd}.

Evaluate the position function Fp(x):

If the meta-variable is used, evaluate Eq. (3)

 yi = (1/(q + t)) |∑_{j=(i−1)q+1}^{iq+t} xj|,  −1 ≤ xj ≤ 1

Otherwise set yi = xi. Then project y into the objective space using the spherical coordinates in Eq. (7)

 t1(x) = cos(y1π/2) cos(y2π/2) … cos(yM−2π/2) cos(yM−1π/2)
 t2(x) = cos(y1π/2) cos(y2π/2) … cos(yM−2π/2) sin(yM−1π/2)
 t3(x) = cos(y1π/2) cos(y2π/2) … sin(yM−2π/2)
 ⋮
 tM−1(x) = cos(y1π/2) sin(y2π/2)
 tM(x) = sin(y1π/2)

Evaluate the norm in Eq. (5)

 ||T(x)||p = h(x) = (∑_{i=1}^{M} |ti(x)|^p)^{1/p}

and normalize T(x) by defining Eq. (6)

 Fp(x) = T(x)/h(x)

Additionally, if the problem is dissimilar, calculate Eq. (8) and update Fp(x) accordingly.
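The meta-variable step above can be sketched as a literal transcription of Eq. (3) (a minimal Python version; the window indices follow the bounds (i−1)q+1 … iq+t shown in the equation):

```python
def meta_variables(x, M, q, t):
    """Meta-variables y_1..y_{M-1} of Eq. (3); each x_j lies in [-1, 1]."""
    y = []
    for i in range(1, M):
        # 0-based slice for the 1-based range (i-1)q+1 .. iq+t
        window = x[(i - 1) * q : i * q + t]
        y.append(abs(sum(window)) / (q + t))
    return y

# With q = 1, t = 0 each meta-variable reduces to y_i = |x_i|.
print(meta_variables([0.5, -0.25], M=3, q=1, t=0))  # [0.5, 0.25]
```

For q > 1 or t > 0, each yi aggregates a window of q + t decision variables, which is what makes the position part of the problem non-separable and scalable in the number of variables.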

Evaluate the distance function Fd(x):

If the function ϕ(x) is used in any step, evaluate the angle between the vector d and Fp(x) (Eq. (9)) and the normalized angular distance (Eq. (10)) using

 φ(x) = arccos( d⋅Fp(x)ᵀ / (|d| |Fp(x)|) )
 ϕ(x) = φ(x)/φmax

Note that, in the first orthant, the maximum angle φmax is the angle between the vector d and a canonical basis vector ei. In the case of a dissimilar problem, this angle must be calculated before applying the transformation of Eq. (8). Select one auxiliary function (Eqs. (13), (15), (26), (27) or any other appropriate equation of your choice) and build the appropriate distance function Fd(x).

Define the constraints:

Use one or more inequality or equality constraints, e.g., Eqs. (19), (20), (21) or (22).
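The roadmap above can be assembled end-to-end. The sketch below is a minimal, self-contained instance (our assembly, with q = 1, t = 0, no constraints, and a simple sphere term standing in for g(x); the multiplicative composition F = Fp(x)(1 + g(x)) is our assumption):

```python
import math

def evaluate(xp, xd, p=2.0):
    """One GPD-style evaluation: position projection, normalization,
    and a placeholder distance term (minimum 0 at xd = 0)."""
    M = len(xp) + 1
    y = [abs(v) for v in xp]                       # meta-variables with q = 1, t = 0
    # Spherical projection of Eq. (7)
    t = []
    for i in range(M):
        prod = 1.0
        for j in range(M - 1 - i):
            prod *= math.cos(y[j] * math.pi / 2)
        if i > 0:
            prod *= math.sin(y[M - 1 - i] * math.pi / 2)
        t.append(prod)
    h = sum(abs(ti) ** p for ti in t) ** (1.0 / p)  # norm of Eq. (5)
    Fp = [ti / h for ti in t]                        # normalization of Eq. (6)
    g = sum(v ** 2 for v in xd)                      # placeholder sphere distance term
    return [fi * (1.0 + g) for fi in Fp]             # multiplicative composition

F = evaluate([0.5, 0.5], [0.0, 0.0])
print([round(f, 4) for f in F])  # a Pareto-optimal point with ||F||_2 = 1
```

With xd at its optimum (g = 0) every evaluated point lies exactly on the unit p-norm surface, so the Pareto front of this toy instance is known in closed form, which is the property the generator is designed to preserve.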

The proposed instance generator has great flexibility. Its modular structure allows for multipurpose problem creation. Its basic structure can be extended by adding new auxiliary functions ϕ(x) and g(x), as well as new constraints or other meta-variables.