sKPNSGA-II: Knee point based MOEA with self-adaptive angle for Mission Planning Problems

Abstract

Real-world, complex problems usually have many objective functions that must be optimized simultaneously. Over the last decades, Multi-Objective Evolutionary Algorithms (MOEAs) have been designed to solve this kind of problem. Nevertheless, some problems have many objectives, which leads to a large number of non-dominated solutions obtained by the optimization algorithms. A large set of non-dominated solutions hinders the selection of the most appropriate solution by the decision maker. This paper presents a new algorithm designed to obtain the most significant solutions from the Pareto Optimal Frontier (POF). This approach is based on cone-domination applied to MOEAs, which can find the knee point solutions. In order to obtain the best cone angle, we propose a hypervolume-distribution metric, which is used to self-adapt the angle during the evolutionary process. The new algorithm has been applied to a real-world Unmanned Aerial Vehicle (UAV) Mission Planning Problem. The experimental results show a significant improvement of the algorithm performance in terms of hypervolume, number of solutions, and number of generations required to converge.

Evolutionary Computation, NSGA-II algorithm, Knee Point, Unmanned Aerial Vehicles, Mission Planning.

I Introduction

Real-world optimization problems often deal with multiple objectives that must be met simultaneously. In most cases, the objectives are conflicting, so one objective usually cannot be improved unless another is worsened. Such problems are called Multi-Objective Optimization Problems (MOPs), whose solution is a set of solutions representing different performance trade-offs between the objectives.

Most existing algorithms focus on approximating the POF in terms of convergence and distribution, but always with a fixed population size, which, for complex problems with many solutions, ends up returning a number of individuals equal or close to that population size. Nevertheless, when this approximation of the Pareto set comprises a large number of solutions, selecting one appropriate solution becomes a difficult task for the decision maker. Sometimes, the decision maker provides a priori information about his/her preferences, which can be used in the optimization process [1, 2]. However, very often this information is not provided, and it is necessary to consider other approaches to filter the number of solutions. In opposition to the common trend of returning a more or less fixed number of solutions, the added difficulty that every additional solution creates for decision making should be taken into account, since such a solution may not be optimal enough in comparison with the others. So, for complex real-world problems, it is worth providing a smaller set of solutions while preserving as much as possible of the convergence and distribution of the POF.

In recent years, finding "knee points" [3] has been used in several algorithms [4, 5] to deal with large POFs in convex problems when the decision maker does not provide preferences about the MOP. In this work, a new MOEA focused on the search for knee points is presented. This new algorithm changes the concept of domination to cone-domination, where a larger portion of the objective space (a cone region) than under the typical domination criterion is considered when the solution frontier is generated. The main novelty of this paper with respect to a previous approach [6] lies in a new adaptive technique to find the right angle for the cone-domination, which focuses on reducing the number of solutions as much as possible while preserving the convergence and distribution of the POF.

We apply the proposed algorithm to the real-world Mission Planning Problem, in which a team of UAVs must perform several tasks in a geodesic scenario within a specific time while being controlled by several Ground Control Stations (GCSs). In this context, several variables influence the selection of the most appropriate plan, such as the makespan of the mission, its cost or its risk. Some experiments have been designed to evaluate the reduction in the number of solutions obtained, while maintaining the most significant ones.

This paper is organized as follows. Section II provides some basics on MOPs and an introduction to the main concepts of cone-domination. Section III presents the novel knee-point based evolutionary multi-objective optimization approach. In Section IV this new algorithm is evaluated using a set of real Mission Planning Problems and compared against our previous approach. Finally, Section V presents several conclusions drawn from this work.

II Background

This section provides the background and related works concerning cone-domination and the metrics upon which our proposed algorithm is based.

II-A Multi-Objective Optimization

In most MOPs, it is not possible to find a single optimal solution that could be selected as the best one; instead, there is usually a set of solutions that represent several trade-offs between the given criteria. Any minimization MOP can be formally defined as:

$$\min F(x) = (f_1(x), f_2(x), \dots, f_m(x)), \quad x \in \Omega \qquad (1)$$

where $x = (x_1, \dots, x_n)$ represents a vector of decision variables, taken from the decision space $\Omega$; $F: \Omega \rightarrow \mathbb{R}^m$, where $F$ represents a set of $m$ objective functions, defines a mapping from the $n$-dimensional decision space $\Omega$ to the $m$-dimensional objective space $\mathbb{R}^m$.

Definition 1

Given two decision vectors $x, y \in \Omega$, $x$ is said to Pareto dominate $y$, denoted by $x \prec y$, iff:

$$\forall i \in \{1, \dots, m\}: f_i(x) \leq f_i(y) \;\land\; \exists j \in \{1, \dots, m\}: f_j(x) < f_j(y) \qquad (2)$$
Definition 2

A decision vector $x^* \in \Omega$ is Pareto optimal if $\nexists\, x \in \Omega$ such that $x \prec x^*$.

Definition 3

The Pareto set, $PS$, is defined as:

$$PS = \{x \in \Omega \mid x \text{ is Pareto optimal}\} \qquad (3)$$
Definition 4

The Pareto front, $PF$, is defined as:

$$PF = \{F(x) \mid x \in PS\} \qquad (4)$$
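As a minimal sketch, Definition 1 can be expressed directly in code (the objective vectors below are hypothetical; minimization is assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization, Def. 1):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical objective vectors:
print(dominates((1.0, 2.0), (1.5, 2.0)))  # True: better in f1, equal in f2
print(dominates((1.0, 2.0), (0.5, 3.0)))  # False: a trade-off, mutually non-dominated
```

A decision vector is then Pareto optimal (Definition 2) exactly when no other vector in the population dominates it.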

The goal of a MOEA is to find non-dominated objective vectors that are as close as possible to the $PF$ (convergence) and evenly spread along the $PF$ (diversity). NSGA-II has been one of the most popular algorithms in this field over the last decade [7]. This algorithm uses a non-dominated ranking to pursue convergence, and the crowding distance to ensure the diversity of the solutions. Other popular algorithms are SPEA2 [8], MOEA/D [9] and NSGA-III [10]. The last two have become very popular in recent years for their good performance on many-objective optimization problems.

II-B Knee Points and Cone Domination

In the last decade, several MOEAs have been proposed to search for non-dominated solutions located close to a given reference point that incorporates the preference of the decision maker [11]. However, some a priori knowledge is required to set the reference point, which the decision maker often does not have. With the aim of obtaining significant solutions when no a priori knowledge is provided, the concept of finding "knee points" [12] can be used. In this way, we reduce the size of the non-dominated set and provide the decision maker with a small set of so-called knee point solutions. When distinct knee points are present in the Pareto front, most decision makers would prefer the solutions at these points, because selecting a solution near a knee point (to slightly improve one objective) would cause a large worsening in at least one of the other objectives. An example showing the difference between a knee point (blue) and points that are not knee points (red) is shown in Figure 1.

Fig. 1: A synthetic simple example of a Pareto front in a bi-objective minimization problem, where B is a knee point, while A and C, which have worse trade-offs than B, are not.

The concept of using knee points has been studied before. Branke et al. [3] presented a modification of NSGA-II where the crowding distance criterion is computed using angle-based and utility-based measures to focus on knee points. Schütze et al. [13] presented two different update methods, based on maximal convex bulges, that allow the algorithm search to focus on the knee points. Bechikh et al. [14] extended the reference point NSGA-II so that the normal boundary intersection method is used to emphasize knee-like points. Zhang et al. [5] designed KnEA, an elitist Pareto-based algorithm that uses knee point neighbouring as a secondary selection criterion in addition to the dominance relationship.

In this paper, an angle-based measure is used to guide and focus the search process on the knee points. For this purpose, the domination criterion has been changed to cone-domination. Cone-domination is defined through a set of weighted functions over the objectives [12]. This concept can be formally described as:

$$\Omega_i(f(x)) = f_i(x) + \sum_{j=1, j \neq i}^{m} a_{ij} f_j(x), \quad i = 1, \dots, m \qquad (5)$$

where $a_{ij}$ is the amount of gain in the $j$-th objective function for a loss of one unit in the $i$-th objective function. The matrix $a$, composed of these values, and with 1 values in its diagonal elements, has to be provided in order to apply the above equations.

Definition 5

A solution $x$ is said to cone-dominate a solution $y$, denoted by $x \prec_{cone} y$, if:

$$\forall i: \Omega_i(f(x)) \leq \Omega_i(f(y)) \;\land\; \exists j: \Omega_j(f(x)) < \Omega_j(f(y)) \qquad (6)$$

In a bi-objective problem ($m = 2$), the two objective weighted functions can be defined as follows:

$$\Omega_1(f(x)) = f_1(x) + a_{12} f_2(x) \qquad (7)$$
$$\Omega_2(f(x)) = a_{21} f_1(x) + f_2(x) \qquad (8)$$

The previous equations can also be formalized in matrix form as:

$$\Omega(f(x)) = \begin{pmatrix} 1 & a_{12} \\ a_{21} & 1 \end{pmatrix} f(x) \qquad (9)$$

Figure 2(b) shows the contour lines of the two previous linear functions when they pass through a solution $P$ in the objective space. The set of solutions inside those contour lines (the "cone-dominated region") is dominated by $P$ according to the new definition of domination. It is worth remarking that when the standard definition of domination is used (see Figure 2(a)), only the region between the horizontal and vertical lines is dominated by $P$. Therefore, from both figures it can be concluded that the cone-domination definition yields larger regions (as the cone angle is greater than $90°$), so one solution dominates more solutions than under the traditional definition. Consequently, using cone-domination, the whole Pareto optimal front (in the traditional sense of domination) may no longer be non-dominated according to this new definition.

(a) Pareto Dominated Region.
(b) Cone Dominated Region.
Fig. 2: Regions dominated by a solution P (2.a) using the original definition of domination in a 2-objectives problem, and cone-dominated (2.b) by the same solution when the concept of cone domination is used.

Besides, in Figure 2(b) it can be observed that the values $a_{12}$ and $a_{21}$ expand the angle $\varphi$ modifying the dominated region (whose value in the original definition of Pareto dominance is $90°$). In this example, the vertical axis is rotated by the angle $\varphi_1$, whereas the horizontal axis is rotated by $\varphi_2$. As shown in this figure, both angles are related to the $a_{12}$ and $a_{21}$ values, respectively, as follows:

$$\tan \varphi_1 = a_{12} \qquad (10)$$
$$\tan \varphi_2 = a_{21} \qquad (11)$$

Using the previous equations, the new angle of the dominated region of point $P$ will be $\varphi = 90° + \varphi_1 + \varphi_2$. If the values of the objectives are normalized, to make the dominated region symmetric and thus equalize the rotation of both horizontal and vertical axes (i.e. $\varphi_1 = \varphi_2$), both variables of the matrix $a$ must also be equalized: $a_{12} = a_{21}$. Then, cone-domination considers angles $\varphi \in [90°, 180°]$, where the 90 degree case is common Pareto dominance, and 180 degrees is equivalent to a weighted-sum multi-objective optimization where all weights are the same (i.e. a single-objective optimization using the sum of all objectives as fitness function). In this last case, all the solutions lie on the same cone-domination line, and the matrix considered is filled with 1 ($a_{ij} = 1 \;\forall i, j$).

In the other cases, for a given angle $\varphi$, the matrix values can be formally defined as:

$$a_{12} = a_{21} = \tan\left(\frac{\varphi - 90°}{2}\right) \qquad (12)$$

With the aim of comparing the convergence and diversity of the knee points obtained, in contrast with the Pareto front from the original approach, a study of several values of the angle $\varphi$ must be carried out.
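For the bi-objective case, Definition 5 together with the symmetric angle relation can be sketched as follows (a hedged illustration: the function name and sample points are hypothetical, and the objectives are assumed normalized to [0, 1]):

```python
import math

def cone_dominates_2d(u, v, phi_deg):
    """Cone-domination in a 2-objective minimization problem.
    phi_deg in [90, 180]; 90 reduces to plain Pareto dominance.
    Symmetric case: a12 = a21 = tan((phi - 90) / 2)."""
    a = math.tan(math.radians((phi_deg - 90.0) / 2.0))
    wu = (u[0] + a * u[1], a * u[0] + u[1])   # Omega_1, Omega_2 for u
    wv = (v[0] + a * v[1], a * v[0] + v[1])   # Omega_1, Omega_2 for v
    return all(x <= y for x, y in zip(wu, wv)) and any(x < y for x, y in zip(wu, wv))

# Pareto-incomparable points can become comparable once the cone widens:
print(cone_dominates_2d((0.2, 0.2), (0.1, 0.9), 90))   # False: plain Pareto trade-off
print(cone_dominates_2d((0.2, 0.2), (0.1, 0.9), 150))  # True: inside the wider cone
```

This illustrates why wider angles shrink the non-dominated set: the balanced point (0.2, 0.2) cone-dominates the extreme point (0.1, 0.9) once the angle grows, although the two are Pareto-incomparable.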

Now, let us extend this concept to higher dimensions. For three dimensions ($m = 3$), having a $3 \times 3$ matrix $a$, the cone-domination function is expressed as:

$$\begin{aligned} \Omega_1(f(x)) &= f_1(x) + a_{12} f_2(x) + a_{13} f_3(x) \\ \Omega_2(f(x)) &= a_{21} f_1(x) + f_2(x) + a_{23} f_3(x) \\ \Omega_3(f(x)) &= a_{31} f_1(x) + a_{32} f_2(x) + f_3(x) \end{aligned} \qquad (13)$$

Figure 3(a) shows the 3D contour corresponding to the cone-dominated region for a solution $A$ in the objective space, where the bold lines converging at $A$ represent the edges of the cone region. The dashed lines show the edges used in the normal definition of Pareto domination for $A$. As in the 2D case, the modified definition of domination allows a larger region to become dominated by any solution than the usual definition.

(a) Cone Dominated Region.
(b) Projection in the f1-f2 plane.
(c) Projection in the f1-f3 plane.
(d) Projection in the f2-f3 plane.
Fig. 3: Regions cone-dominated by solution A in a 3-objective problem

Besides, Figures 3(b), 3(c) and 3(d) show the projections of this 3D region onto the $f_1$-$f_2$, $f_1$-$f_3$ and $f_2$-$f_3$ planes, respectively. These projections show a similarity with the cone-domination region for 2-objective problems (see Figure 2(b)). In Figure 3(b) it can be observed that the values $a_{12}$ and $a_{21}$ change the dominated region by expanding the angle $\varphi_{12}$, rotating the $f_1$ and $f_2$ axes. Similarly, Figure 3(c) shows how the values $a_{13}$ and $a_{31}$ expand the angle $\varphi_{13}$, rotating the $f_1$ and $f_3$ axes; and Figure 3(d) shows how the values $a_{23}$ and $a_{32}$ expand the angle $\varphi_{23}$, rotating the $f_2$ and $f_3$ axes. As can be seen, similarly to the 2D cone-domination, the angles described and the values of matrix $a$ can be formulated as follows:

$$a_{12} = a_{21} = \tan\left(\frac{\varphi_{12} - 90°}{2}\right) \qquad (14)$$
$$a_{13} = a_{31} = \tan\left(\frac{\varphi_{13} - 90°}{2}\right) \qquad (15)$$
$$a_{23} = a_{32} = \tan\left(\frac{\varphi_{23} - 90°}{2}\right) \qquad (16)$$

Following this reasoning, in 2-objective problems the cone-dominated region is defined by expanding one angle, while in 3-objective problems it is defined by expanding the three angles of the faces defining the 3D cone. When considering a higher dimension $N$, a hypercone region with $N(N-1)/2$ faces (and therefore, angles) is generated. This number is the result of the pairwise combinations of the $N$ objective functions. In this case, cone-domination is expressed as:

$$\Omega_i(f(x)) = f_i(x) + \sum_{j=1, j \neq i}^{N} a_{ij} f_j(x), \quad i = 1, \dots, N \qquad (17)$$

where each value $a_{ij}$ with $i \neq j$ is related to the angle $\varphi_{ij}$ of the face $f_i$-$f_j$ of the hypercone region, which can hence be calculated as $\varphi_{ij} = 90° + 2 \arctan(a_{ij})$.

Again, in order to make the cone-dominated region symmetric for every objective, and supposing that the values of the objectives are normalized, it is necessary to equalize the angles of every face of the hypercone region to a single angle $\varphi$. The definition of this angle leads to the setting of the values of matrix $a$:

$$a_{ij} = \tan\left(\frac{\varphi - 90°}{2}\right) \quad \forall i \neq j, \qquad a_{ii} = 1 \qquad (18)$$
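A sketch of how the symmetric weight matrix $a$ of Equation (18) could be built from a single angle $\varphi$ and used for the $N$-objective test (function names are illustrative; objectives assumed normalized):

```python
import math

def cone_matrix(n, phi_deg):
    """n x n weight matrix for a symmetric hypercone: 1 on the diagonal,
    tan((phi - 90) / 2) elsewhere; phi = 90 yields the identity matrix."""
    off = math.tan(math.radians((phi_deg - 90.0) / 2.0))
    return [[1.0 if i == j else off for j in range(n)] for i in range(n)]

def cone_dominates(u, v, A):
    """Cone-domination in n objectives (minimization): compare the weighted
    sums Omega_i(f) = sum_j A[i][j] * f_j component-wise."""
    wu = [sum(A[i][j] * u[j] for j in range(len(u))) for i in range(len(A))]
    wv = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
    return all(x <= y for x, y in zip(wu, wv)) and any(x < y for x, y in zip(wu, wv))
```

With $\varphi = 90°$ the matrix is the identity and the test coincides with plain Pareto dominance; with $\varphi = 180°$ every entry is 1 and the test collapses to comparing the sum of the objectives.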

II-C Hypervolume and distribution of solutions

In order to evaluate the convergence and distribution of the non-dominated solutions obtained with MOEAs, several metrics have been proposed over the last decades [15]. One of the most popular is the hypervolume (HV) [16], which measures the volume of the objective space that is dominated by the solution set and bounded by some reference point. Another frequently used metric is the Inverted Generational Distance (IGD) [17], where a set of reference points is provided as an approximation of the Pareto front, and the IGD is computed as the distance from each reference point to the nearest solution in the solution set.

When the number of non-dominated solutions is large, the subsequent decision making process becomes really complex. To avoid this, the best outcome of the algorithm would be a small set of solutions that keeps the HV or IGD value as high as possible. As the IGD is costly to compute due to the need for a reference set, which in real problems is often difficult to provide, the HV metric has been used to design a new metric that takes the number of solutions into account. We propose the hypervolume-distribution (HDist) metric, designed to evaluate this trade-off between HV and the number of solutions. This metric is defined as follows:

(19)

where $PF$ is the Pareto optimal front of the problem, $S$ is the set of non-dominated solutions to be evaluated, and $HV(\cdot)$ represents the hypervolume of a set. In this metric, the Pareto set is needed in order to normalize the number of solutions and the hypervolume, which are then combined. The higher the value of this metric, the better distributed the solutions while maintaining a good hypervolume.
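The hypervolume ingredient of HDist is cheap to obtain in the bi-objective case with a single sweep over the sorted front (a sketch with a hypothetical front; the combination with the number of solutions follows the definition above):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a mutually non-dominated 2-objective minimization
    front w.r.t. reference point ref (e.g. (1, 1) after normalization):
    sum of the rectangle slices swept while walking the front by f1."""
    pts = sorted(front)              # ascending f1, hence descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([(0.5, 0.3), (0.2, 0.6)], (1.0, 1.0)))  # ~0.47 (0.32 + 0.15)
```

For more objectives, exact hypervolume computation grows expensive, which is one reason approximate or dimension-sweep algorithms are used in practice.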

Using this metric, our main goal is to self-adapt the cone-domination angle during the evolutionary process of the MOEA, so that the best set of non-dominated solutions according to the HDist metric is obtained.

III sKPNSGA-II: a self-adaptive knee-point based extension of NSGA-II

In order to reduce the number of solutions obtained for a MOP, we propose an extension of NSGA-II designed to search for knee points instead of general non-dominated solutions. To reach this goal, the cone-domination concept described in Section II-B is used instead of standard Pareto domination: the non-dominated ranking used by NSGA-II is changed to a non-cone-dominated ranking with a specific angle $\varphi$. In a previous work [6], a first approach called KPNSGA-II, using cone-domination with a fixed cone angle, was proposed. In this work, we propose sKPNSGA-II, which self-adapts the cone angle according to the metric proposed in the previous section. This self-adaptation is performed with the golden section search [18], a technique that finds the maximum value of the metric through a successive narrowing of the range of values in which the maximum point is located.

sKPNSGA-II is presented in Algorithm 1. This novel approach, after randomly generating the initial population (Line 1), initializes the convergence factors (Lines 2-4). Following this step, the maximum and minimum values for each objective are initialized to a zero-vector (Line 5) and to the vector of maximum objective values (Line 6), respectively. Every time the solutions are evaluated, these values are updated. They are used to normalize the objective values.

Input: A problem . The set of -objectives and their upper bounds . And a set of positive parameters: (elitism), (population size), , (stopping criteria limit), and (maximum number of generations). is the golden ratio.
Output: The Knee-Point Frontier generated
1 set of individuals randomly generated while  do
2       for  to  do
3            
4       if  then
5            
6       for  to  do
7             )
8      
return kpof
Algorithm 1 Self-adaptive Knee-Point based NSGA-II.

The fitness function (Lines 10-14) used in the evaluation of the individuals computes the multi-objective values of the solutions, which are stored inside the fitness. Moreover, as previously mentioned, the maximum and minimum objective values are updated with the new evaluated solutions (Lines 12-13).

Based on the NSGA-II algorithm, the new offspring is updated with the buildArchive function (Algorithm 1, Line 15), which is presented in Algorithm 2. This function creates an array of vectors, or fronts, storing the solutions grouped by their level of non-cone-dominance. This is done using the assignFrontRanks function (see Algorithm 3). In this levelled array, the first front is composed of the non-cone-dominated solutions of the population; the second front contains the non-cone-dominated solutions among the rest of the population without considering the solutions of the first front; the third front is then composed of the non-cone-dominated solutions of the population without considering the solutions of the first and second fronts, and so on.
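The front-peeling just described can be sketched as follows (a minimal illustration; `dominates` may be plain Pareto dominance or the cone-domination check of Section II-B):

```python
def front_ranks(pop, dominates):
    """Split pop into successive fronts: front 0 is non-(cone-)dominated in
    the whole population, front 1 is non-dominated once front 0 is removed,
    and so on (a sketch of the assignFrontRanks step; assumes distinct points)."""
    remaining = list(pop)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Plain Pareto dominance (minimization) as the predicate:
pareto = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
print(front_ranks([(1, 1), (2, 2), (0, 3)], pareto))  # [[(1, 1), (0, 3)], [(2, 2)]]
```

Passing a cone-domination predicate with a wide angle makes the first front smaller, which is precisely the effect the algorithm exploits.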

Input: Vector containing the current population. Population size . Angle for every face of the cone. Vector stores the maximum values found for the objectives. Vector stores the minimum values found for all objectives.
Output: Updated vector
1 for  to  do
2       if  then
3            
4      else
5            
6      
return newS
Algorithm 2 BuildArchive()
Input: Vector containing the current population. Angle for every face of the cone. Vector stores the maximum values found for the objectives. Vector stores the minimum values found for all objectives.
Output: List of vectors containing the solutions with same rank values.
1 while  do
2       for  to  do
3            
4      
return
Algorithm 3 assignFrontRanks()

In order to create the array of ranked fronts, the kneeFront function is used (see Algorithm 4). This function is similar to the classical one used in NSGA-II to generate the Pareto front from a population. Nevertheless, it has been changed so that, instead of the non-dominated solutions, the function considers the non-cone-dominated solutions, as described in Section II-B. The cone-domination check is described in detail in Algorithm 5. The kneeFront function requires a value indicating the angle of the cone-domination. First, the objective vectors are normalized with the maximum and minimum values. Then, the cone-domination function is computed using Equations 5 and 18 for each objective, and the function checks whether the second solution is cone-dominated by the first.

Input: Vector containing the current population. Angle for every face of the cone. Vector stores the maximum values found for the objectives. Vector stores the minimum values found for all objectives.
Output: Knee-Point Frontier based on the Cone-Domination with angle .
1 for  to  do
2       for  to  do
3             if  then
4                  
5            else if  then
6                  
7            
8      if  then
9            
10      
return
Algorithm 4 kneeFront()
Input: Solutions A and B, used to check for cone-domination. The angle (in degrees) for every face of the cone. Vectors and store the maximum and minimum values found for all objectives.
Output: TRUE if A dominates B, FALSE otherwise.
1 FALSE for  to  do
2       for  to  do
3            
4      if  then
5             TRUE
6      if  then
7             return FALSE
8      
return
Algorithm 5 ConeDom()

Once the array of vectors containing the ranked solutions is created, in a similar way to the NSGA-II algorithm, a sparsity value (based on the crowding distance) is assigned to each solution in every vector, through the assignSparsity function in Algorithm 2.

In order to self-adapt the cone angle according to the metric, the golden section search has been used (Line 19). This technique finds the maximum of the metric iteratively as the main algorithm evolves. It is described in Algorithm 6, and is similar to the bisection search for the root of an equation. Specifically, if in the neighbourhood of the maximum we can find three points $x_1 < x_2 < x_3$ such that $f(x_1) < f(x_2) > f(x_3)$, then there exists a maximum between the points $x_1$ and $x_3$. To search for this maximum, we can choose another point $x_4$ between $x_2$ and $x_3$, as shown in Figure 4. Then, depending on the value of $f(x_4)$, the new triplet may become $(x_1, x_2, x_4)$ if $f(x_4) < f(x_2)$, or $(x_2, x_4, x_3)$ if $f(x_4) > f(x_2)$. The process is repeated iteratively until an error tolerance is reached. In order to compute the points $x_2$ and $x_4$, the golden ratio $\phi$ is used, where each point is separated from the corner points by the distance between these corner points divided by $\phi$.

Fig. 4: Diagram of the Golden Section Search.

In sKPNSGA-II, the golden section search starts working once the front has a large number of solutions or the hypervolume does not show a considerable increase with respect to previous generations. Then, one candidate cone angle is tested in the following generation, and the other candidate in the next one. After testing both, they are compared as previously described, the triplet is updated, and the process continues until the stopping criterion is met.
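The narrowing step can be sketched as a stand-alone maximizer (an illustrative sketch: here f is an ordinary function evaluated on a toy objective, whereas in sKPNSGA-II each evaluation corresponds to running a generation with the candidate angle and measuring HDist):

```python
import math

INV_PHI = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618

def golden_section_max(f, lo, hi, tol=1e-3):
    """Locate the maximizer of a unimodal f on [lo, hi] by golden-ratio
    narrowing of the bracket, as in the classic golden section search."""
    x1 = hi - INV_PHI * (hi - lo)
    x2 = lo + INV_PHI * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                  # maximum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + INV_PHI * (hi - lo)
            f2 = f(x2)
        else:                        # maximum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - INV_PHI * (hi - lo)
            f1 = f(x1)
    return (lo + hi) / 2.0

# Toy unimodal stand-in with its maximum at 120 degrees:
best = golden_section_max(lambda x: -(x - 120.0) ** 2, 90.0, 180.0)
print(round(best, 2))  # 120.0
```

Only one new function evaluation is needed per narrowing step because one interior point is always reused, which matters here since each evaluation costs a full generation of the evolutionary algorithm.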

Input: Vector containing the actual knee front. Angle being used. Angles used in the golden section search , , and .
1 if  then
2       if   then
3            
4      
5else
6       if  then
7            
8      else
9             if  then
10                  
11            else
12                  
13            
14      
Algorithm 6 goldenSection()

Following Algorithm 1, a tournament selection (Line 23) is used to choose the individuals to which the genetic operators are applied. The crossover operator (Line 24) and the mutation operator (Lines 25-26) are then applied.

Finally, the stopping criterion of this algorithm compares the non-dominated solutions obtained at the end of each generation with the solutions from the previous generation (Lines 17-18). When the solutions obtained remain unchanged for a specific number of generations, the algorithm stops and returns the set of solutions found as the best approximation of the POF.

IV Experimental evaluation

In order to test the proposed algorithm, a real complex problem has to be considered in which decision makers actually care about the number of solutions presented for the decision making process. In these experiments, several real Mission Planning Problems have been designed for this purpose. Mission Planning [19] is a complex problem that involves the assignment of several tasks to the vehicles performing them, along with the assignment of those vehicles to the GCSs controlling them. Some tasks are performed by just one vehicle, while others may be performed by several vehicles, reducing the time needed for the task (e.g. taking a photo, monitoring a target…). There are several issues to take into account, such as the paths followed by the UAVs when there are No-Flight Zones (NFZs) in the scenario, the sensors to be used by the vehicles for each task, the flight time or the fuel consumption, among others. In a previous work [20], this problem was modelled as a Constraint Satisfaction Problem (CSP), considering the different constraints of the problem (sensors, path, time, fuel…), and solved using a standard NSGA-II algorithm.

This problem is also a multi-objective optimization problem, as there exist several objectives that influence the selection of the most appropriate plan. Seven objectives have been considered: the total cost of the vehicles for completing the mission; the makespan, or end time when all vehicles have returned and the mission has ended; the risk of the mission, calculated as an average percentage that indicates how hazardous the mission is (e.g. UAVs that end up with low fuel, UAVs that fly near the ground or UAVs that fly close to each other); the number of UAVs used in the mission; the total fuel consumption; the total flight time; and the total distance traversed.

The fitness function used for this problem checks that all of the constraints considered are fulfilled for a given solution. If not, it stores inside its fitness the number of constraints fulfilled by the solution. When all constraints are fulfilled, the fitness will work as a multi-objective function minimizing the problem objectives.

The encoding considered here takes into account the different variables of the CSP model, which include: the assignments of UAVs to tasks, the order of the tasks, the assignments of GCSs to UAVs, the flight profiles used in each path and in the return to the base, and the sensors used to perform each task. For this encoding, proper crossover and mutation operators have been designed, where a concrete operator is applied to each allele of the individuals. For more details about the encoding and the CSP model, see previous works [19, 20].

In these experiments, we tested the newly implemented sKPNSGA-II with 12 different scenarios, represented in Figure 5. In these figures, the green zones represent tasks, while the red zones represent NFZs. There are also some point tasks represented with an icon, such as photographing, tracking or fire extinguishing. These scenarios are composed of an increasing number of tasks, multi-UAV tasks, UAVs, GCSs, NFZs and temporal dependencies between tasks (see Table I).

(a) Mission 1.
(b) Mission 2.
(c) Mission 3.
(d) Mission 4.
(e) Mission 5.
(f) Mission 6.
(g) Mission 7.
(h) Mission 8.
(i) Mission 9.
(j) Mission 10.
(k) Mission 11.
(l) Mission 12.
Fig. 5: Mission Scenarios considered.
Mission Id. Tasks Multi-UAV Tasks UAVs GCSs NFZs Time Dependencies
1 5 0 3 1 0 0
2 6 1 3 1 1 0
3 6 1 4 2 2 1
4 7 1 5 2 1 2
5 8 2 5 2 3 1
6 9 2 5 2 0 2
7 9 2 6 2 2 2
8 10 2 6 2 3 3
9 11 3 6 2 3 2
10 12 3 7 3 0 2
11 12 3 8 3 2 3
12 13 4 7 3 4 4
TABLE I: Main features (number of UAVs, GCSs, NFZs, etc.) for the different missions designed.

In the experiments, we solved these missions with the sKPNSGA-II algorithm developed in this work. In order to test the self-adaptation of the cone angle, the missions were also solved using NSGA-II and the fixed-angle implementation KPNSGA-II [6], using 120°, 135° and 150° angles (these approaches are named KPNSGA-II-120, KPNSGA-II-135 and KPNSGA-II-150, respectively).

Each experiment has been executed 30 times, and the mean and standard deviation are presented in all the tables. The remaining parameters have been set as follows: the population size to 200, the maximum number of generations to 300, the mutation probability to , and the stopping criteria to 10.

IV-A Experimental results

To compare the results obtained, we computed the hypervolume with the normalized objectives for each solution set, taking the maximum point as the reference point. These results are shown in Table II. Table III shows the number of solutions obtained with each approach.

Id. NSGA-II KPNSGA-II-120 KPNSGA-II-135 KPNSGA-II-150 sKPNSGA-II
1
2
3
4
5
6
7
8
9
10
11
12
TABLE II: Mean and standard deviation of the hypervolume obtained from the solutions given the different approaches for the different mission problems.
Id. NSGA-II KPNSGA-II-120 KPNSGA-II-135 KPNSGA-II-150 sKPNSGA-II
1
2
3
4
5
6
7
8
9
10
11
12
TABLE III: Mean and standard deviation of the number of solutions obtained from the different approaches for the different mission problems.

In these results, it can be seen that the hypervolume decreases with larger angles, and so does the number of solutions. It can also be seen that NSGA-II gets comparatively worse results as the complexity of the problem grows (its hypervolume difference with respect to sKPNSGA-II decreases), due to the large number of solutions composing the POF.

In order to measure the hypervolume and the number of solutions together, the HDist metric (see Section II-C) is used. The values of this metric for each result are presented in Table IV. Here we can clearly see that sKPNSGA-II obtains the best results for this metric, as it has been optimized during the evolutionary process. In addition, we have computed the Wilcoxon test [21], comparing sKPNSGA-II with the rest of the approaches. The test succeeded in all problems, with a .

Id. NSGA-II KPNSGA-II-120 KPNSGA-II-135 KPNSGA-II-150 sKPNSGA-II
1
2
3
4
5
6
7
8
9
10
11
12
TABLE IV: Mean and standard deviation of the HDist metric obtained from the solutions of the different approaches for the different mission problems.

The results also show that the HDist metric presents a larger standard deviation for sKPNSGA-II than for the rest of the approaches. This can be better seen in the HDist graphic in Figure 6. It is especially noticeable in the most complex problems, and it is due to the early start of the golden section search triggered by the condition on the high number of solutions (see Algorithm 6, Line 7). Removing this condition would improve the convergence of the approach, but at the expense of increasing the number of generations needed to converge and, consequently, the runtime of the algorithm.

Fig. 6: Comparison of hypervolume, number of solutions obtained, HDist metric and number of generations needed to converge for the different approaches in each problem.

Table V shows the number of generations needed to converge for the different missions and algorithms. It shows that the runtime of the algorithm is also reduced with sKPNSGA-II compared to NSGA-II, which in most cases was not even able to converge within the maximum number of generations defined. On the other hand, the runtime of sKPNSGA-II is larger than that of the approaches where the angle is fixed. Concretely, the higher the cone angle, the faster the algorithm.

Id. NSGA-II KPNSGA-II-120 KPNSGA-II-135 KPNSGA-II-150 sKPNSGA-II
TABLE V: Mean and standard deviation of the number of generations needed to converge in the different approaches for the different mission problems.

In order to observe how \glsskpnsga2 evolves, Figure 7 shows the evolution of the cone angle, the hypervolume, the number of solutions obtained and the HDist metric by generation in Mission 4, comparing them with the fixed-angle approaches. It can be seen that the cone angle varies sharply when the golden section search starts, but rapidly converges to the optimum value. The HDist graphic shows that \glsskpnsga2 starts with a worse HDist than the other approaches, as it has not determined its cone angle yet, but once it does, it obtains the best result.

(a) Cone angle.
(b) Hypervolume.
(c) Number of solutions.
(d) HDist metric.
Fig. 7: Evolution of metrics by generation for mission 4.
(a) Parallel plot.
(b) Radial plot.
Fig. 8: Visualization of solutions for mission 4.

On the other hand, Figure 8 shows the parallel and radial plots of the solutions obtained by each approach. It can be seen that the solutions obtained by \glsskpnsga2 are spread across all the optimization variables, indicating that they form a significant sample of the solutions of the \glspof.

V Conclusion

In this work, we have presented an extension of the \glsnsga2 algorithm based on knee points, designed to guide and focus the search process towards significant solutions. To do so, we have used the concept of cone-domination, which substitutes the standard domination concept in the algorithm. The cone angle is self-adapted during the run using the golden section search.
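As a hedged sketch of the cone-domination concept, following the weighted formulation of Branke et al. [3] (the exact transform used by \glsskpnsga2 may differ), each objective is augmented with a weighted sum of the others before applying the standard Pareto dominance test. The angle-to-weight relation below is the bi-objective one from that formulation:

```python
import math

def cone_transform(objs, angle_deg):
    """Augment each (minimization) objective with a weighted sum of the
    others; a = tan((angle - 90) / 2) widens the dominance cone beyond
    the usual 90 degrees (bi-objective relation from Branke et al.)."""
    a = math.tan(math.radians((angle_deg - 90.0) / 2.0))
    total = sum(objs)
    return tuple(f + a * (total - f) for f in objs)

def cone_dominates(x, y, angle_deg):
    """True if x cone-dominates y: Pareto dominance (minimization)
    applied to the transformed objective vectors."""
    tx = cone_transform(x, angle_deg)
    ty = cone_transform(y, angle_deg)
    return all(a <= b for a, b in zip(tx, ty)) and any(a < b for a, b in zip(tx, ty))

# At 90 degrees the weight a is zero, so this reduces to ordinary
# Pareto dominance: a knee-like point (1, 1) and the extreme (0.9, 5)
# are mutually non-dominated. With a 150-degree cone, the knee-like
# point cone-dominates the extreme one.
print(cone_dominates((1.0, 1.0), (0.9, 5.0), 90.0))
print(cone_dominates((1.0, 1.0), (0.9, 5.0), 150.0))
```

Widening the cone enlarges the region each solution dominates, so extreme trade-off solutions become dominated by knee solutions and the retained front concentrates around the knee regions.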

This new approach has been tested with real multi-UAV Mission Planning Problems, which are complex and have a large number of solutions. In these problems, the mission operator has to select the best solution among all those obtained, so reducing the number of solutions and presenting just the most significant ones will reduce his/her workload.

In the experimental phase, the approach has been compared against \glsnsga2 and against non-self-adapting approaches with three different cone angles (120, 135 and 150 degrees). The results showed that \glsskpnsga2 adapts the angle according to the HDist metric, which is clearly maximized when compared to the other approaches. The number of solutions returned is quite small, while most of the hypervolume is maintained compared to \glsnsga2.

The results obtained from the experimental phase also showed that the number of generations needed to converge is improved by the new algorithm compared to \glsnsga2, although the fixed-angle approaches converge earlier. In the most complex problems, \glsnsga2 could not find the complete \glspof, while \glsskpnsga2 converged.

In future research, and in order to improve the decision-making process for the operator, we will also develop a ranking algorithm for the solutions returned by \glsskpnsga2, allowing human operators to easily select the most interesting and relevant solutions.

Acknowledgment

This work has been supported by Airbus Defence & Space (under grants FUAM-076914 and FUAM-076915), and by the following research projects: DeepBio (TIN2017-85727-C4-3-P), funded by the Spanish Ministry of Economy and Competitiveness (MINECO), and CYNAMON (CAM grant S2018/TCS-4566), under the European Regional Development Fund (FEDER). The authors would like to acknowledge the support obtained from the team at Airbus Defence & Space; especially, we would like to acknowledge the Savier Open Innovation project members: Gemma Blasco, César Castro, and José Insenser.

References

  1. L. Thiele, K. Miettinen, P. J. Korhonen, and J. Molina, “A Preference-Based Evolutionary Algorithm for Multi-Objective Optimization,” Evolutionary Computation, vol. 17, no. 3, pp. 411–436, 2009.
  2. F. Goulart and F. Campelo, “Preference-guided evolutionary algorithms for many-objective optimization,” Information Sciences, vol. 329, pp. 236–255, 2016.
  3. J. Branke, K. Deb, H. Dierolf, and M. Osswald, “Finding knees in multi-objective optimization,” in Parallel Problem Solving from Nature - PPSN VIII. PPSN 2004. Lecture Notes in Computer Science, X. Yao, Ed., vol. 3242.   Springer, Berlin, Heidelberg, 2004, pp. 722–731.
  4. Y. Setoguchi, K. Narukawa, and H. Ishibuchi, “A Knee-Based EMO Algorithm with an Efficient Method to Update Mobile Reference Points,” in Evolutionary Multi-Criterion Optimization. EMO 2015. Lecture Notes in Computer Science, A. Gaspar-Cunha, C. Henggeler Antunes, and C. Coello Coello, Eds., vol. 9018.   Springer, Cham, 2015, pp. 202–217.
  5. X. Zhang, Y. Tian, and Y. Jin, “A knee point-driven evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.
  6. C. Ramirez-Atencia, S. Mostaghim, and D. Camacho, “A Knee Point based Evolutionary Multi-objective Optimization for Mission Planning Problems,” in Genetic and Evolutionary Computation Conference (GECCO 2017).   ACM, 2017, pp. 1216–1223.
  7. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  8. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the Strength Pareto Evolutionary Algorithm,” in Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems (EUROGEN 2001), K. C. Giannakoglou, D.T.Tsahalis, J.Periaux, and T. Fogarty, Eds.   International Center for Numerical Methods in Engineering (CIMNE), 2002, pp. 95–100.
  9. Q. Zhang and H. Li, “MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
  10. H. Jain and K. Deb, “An Improved Adaptive Approach for Elitist Nondominated Sorting Genetic Algorithm for Many-Objective Optimization,” in Evolutionary Multi-Criterion Optimization. EMO 2013. Lecture Notes in Computer Science, vol. 7811.   Springer, Berlin, Heidelberg, 2013, pp. 307–321.
  11. R. C. Purshouse, K. Deb, M. M. Mansor, S. Mostaghim, and R. Wang, “A review of hybrid evolutionary multiple criteria decision making methods,” in 2014 IEEE congress on evolutionary computation (CEC).   IEEE, 2014, pp. 1147–1154.
  12. K. Deb, P. Zope, and A. Jain, “Distributed computing of pareto-optimal solutions with evolutionary algorithms,” in International Conference on Evolutionary Multi-Criterion Optimization, vol. 2632.   Springer, 2003, pp. 534–549.
  13. O. Schütze, M. Laumanns, and C. A. Coello Coello, “Approximating the knee of an MOP with stochastic search algorithms,” in Parallel Problem Solving from Nature – PPSN X. PPSN 2008. Lecture Notes in Computer Science, G. Rudolph, T. Jansen, N. Beume, S. Lucas, and C. Poloni, Eds., vol. 5199.   Springer Berlin Heidelberg, 2008, pp. 795–804.
  14. S. Bechikh, L. B. Said, and K. Ghédira, “Searching for knee regions of the Pareto front using mobile reference points,” Soft Computing, vol. 15, no. 9, pp. 1807–1823, 2011.
  15. S. Jiang, Y. S. Ong, J. Zhang, and L. Feng, “Consistencies and contradictions of performance metrics in multiobjective optimization,” IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2391–2404, 2014.
  16. C. Fonseca, L. Paquete, and M. Lopez-Ibanez, “An Improved Dimension-Sweep Algorithm for the Hypervolume Indicator,” in 2006 IEEE International Conference on Evolutionary Computation.   IEEE, 2006, pp. 1157–1163.
  17. H. Ishibuchi, H. Masuda, Y. Tanigaki, and Y. Nojima, “Difficulties in specifying reference points to calculate the inverted generational distance for many-objective optimization problems,” in IEEE SSCI 2014 - 2014 IEEE Symposium Series on Computational Intelligence - MCDM 2014: 2014 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making.   IEEE, 2015, pp. 170–177.
  18. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Golden Section Search in One Dimension, 3rd ed.   Cambridge University Press, 2007, no. 535, ch. 10.2, pp. 492–496.
  19. C. Ramirez-Atencia and D. Camacho, “Constrained multi-objective optimization for multi-UAV planning,” Journal of Ambient Intelligence and Humanized Computing, vol. 10, no. 6, pp. 2467–2484, 2019.
  20. C. Ramirez-Atencia, J. Del Ser, and D. Camacho, “Weighted strategies to guide a multi-objective evolutionary algorithm for multi-UAV mission planning,” Swarm and Evolutionary Computation, vol. 44, pp. 480–495, 2018.
  21. M. Hollander, D. A. Wolfe, and E. Chicken, Nonparametric statistical methods, 3rd ed.   John Wiley & Sons, Inc., 2014.