Pontogammarus Maeoticus Swarm Optimization: A Metaheuristic Optimization Algorithm


Benyamin Ghojogh, Saeed Sharifian
Benyamin Ghojogh's e-mail: ghojogh_benyamin@ee.sharif.edu. Saeed Sharifian's e-mail: sharifian_s@aut.ac.ir. Both authors are with the Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran. This article was written in fall 2015 and revised in fall 2017.
Abstract

Nowadays, metaheuristic optimization algorithms are used to find the global optimum in difficult search spaces. Pontogammarus Maeoticus Swarm Optimization (PMSO) is a metaheuristic algorithm imitating aquatic nature and foraging behavior. Pontogammarus Maeoticus, also called Gammarus for short, is a tiny creature found mostly on the coast of the Caspian Sea in Iran. In this algorithm, the global optimum is modeled as the sea edge (coast), toward which Gammarus creatures move in order to rest from the sea waves and forage in the sand. Sea waves provide exploration, and foraging models exploitation. The strength of a sea wave is determined according to the distance of the Gammarus from the sea edge. The angles of the waves applied to several particles are set randomly, helping the algorithm not get stuck in local bests. Meanwhile, the neighborhoods of the particles change adaptively, resulting in more efficient progress of the search. The proposed algorithm, although applicable to any optimization problem, is also experimented on partially shaded solar PV arrays. Experiments on the CEC05 benchmarks, as well as on a solar PV array, show the effectiveness of this optimization algorithm.

Keywords: Pontogammarus Maeoticus, Gammarus swarm, metaheuristic optimization.

I Introduction

Metaheuristic algorithms for optimization have gained as much attention as classic mathematical solutions. The reason can be sought in the fact that metaheuristic methods are based on soft computing and can perform better in complicated problems, especially in problems with many local solutions. On the other hand, mathematical optimization algorithms are usually harder to use since they may get stuck in local solutions [1, 2]. Metaheuristic algorithms are one of the branches of artificial intelligence and, like classical algorithms, are mostly used to optimize a cost function [3].

There are many metaheuristic algorithms proposed for different applications; several well-known ones are introduced here. The Genetic Algorithm (GA), proposed by Holland [4] and Goldberg [5], is a genetics-inspired algorithm dealing with mutation and cross-over of chromosomes simulating the possible solutions. Genetic programming, proposed by Koza [6], is also a chromosome-based algorithm with mutations and cross-overs; however, it is mostly used for solving tree-based problems. This algorithm was first introduced to extend artificial intelligence to programming goals, and it is usually used to model the problem with a mathematical function. Evolutionary programming, proposed by Fogel et al. [7], De Jong [8], and Koza [9], is a chromosome-based algorithm with a mutation operator and a strategy parameter. Tabu search, proposed by Glover [10], maintains a tabu list of forbidden search paths. Simulated annealing, proposed by Kirkpatrick et al. [11], simulates the annealing process to solve an optimization problem; temperature is a parameter of this algorithm, which decreases over the iterations. Particle Swarm Optimization (PSO), proposed by Kennedy and Eberhart [12], is inspired by the behavior of bird or fish swarms and simulates the migration of swarm particles. The particles search the landscape, and the global and local bests affect the direction and speed of the particles' movements.

Many metaheuristic algorithms have simulated the behavior of aquatic creatures. As an example, the krill herd algorithm, proposed by Gandomi and Alavi [13], can be mentioned, in which three factors affect the movement of the krill individuals: (I) induced movement, (II) foraging, and (III) random diffusion. There are other papers improving this algorithm, such as [14] and [15]. The artificial algae algorithm [16] is another example of aquatic-inspired algorithms. In this article, a new metaheuristic optimization algorithm, Pontogammarus Maeoticus Swarm Optimization (PMSO), is proposed, which is also inspired by an aquatic creature named Pontogammarus Maeoticus, or Gammarus for short.

In the PMSO algorithm, the behavior of a Pontogammarus Maeoticus swarm is simulated as a metaheuristic optimization algorithm. Pontogammarus Maeoticus, or Gammarus, is a tiny sea creature found especially at sea edges (coasts). In this algorithm, exploitation is satisfied by modeling the local foraging of Gammarus individuals in their neighborhoods. They are willing to reach the sea edge to settle there, at peace from the sea waves. Hence, the global optimum is modeled as the sea edge, which the particles search to reach. In order to explore different parts of the search space, every Gammarus is shaken by a sea wave whose strength is related to the distance of the Gammarus from the best answer found so far. The sea waves applied to Gammarus individuals which are close to the sea edge are directed toward the sea edge, helping better exploitation around the global best found so far. On the other hand, the particles far from the sea edge are moved randomly in order to better explore unseen parts of the search space and escape local optima. The neighborhoods of the Gammarus creatures also change adaptively, resulting in better progress in searching for the global optimum.

As an example of its application, the proposed PMSO algorithm is also used for the problem of partial shading in solar arrays in order to maximize the output power. Solar PV arrays consist of several PhotoVoltaic (PV) cells which are connected to each other in different configurations. The simplest structure of a solar PV array is the TCT configuration. This configuration, however, cannot cancel the effect of partial shadow on the array, and therefore the output power is not optimal. In order to cancel the effect of shadow as much as possible, other configurations have been proposed over the years. For instance, the Su Do Ku structure is based on the Su Do Ku puzzle arrangement [17]. However, the Su Do Ku configuration has several drawbacks [18]. Hence, researchers have recently worked on the solar PV array problem to enhance the output power. Although there are many non-heuristic works on this problem, such as [19, 20], several studies use metaheuristic optimization for it. Paper [18] can be mentioned as an example of recent work on solar arrays utilizing metaheuristic algorithms; in [18], the Genetic Algorithm is used to improve the performance of the solar PV array.

The remainder of the paper is organized as follows. Section II introduces the Gammarus creature from a non-mathematical perspective. The different parts of the PMSO optimization algorithm and its natural inspirations are explained in detail in Section III. In Section IV, the convergence, time complexity, and space complexity of the proposed algorithm, as well as a sample scenario of the algorithm, are analyzed. Experimental results on standard optimization benchmarks are reported in Section V. Using the PMSO algorithm for partially shaded solar PV arrays is proposed in Section VI. Finally, Section VII concludes the paper.

II Introducing Gammarus

A special type of hard-shelled creature (Malacostraca) is divided into two categories: (I) Amphipoda and (II) Isopoda [21]. Amphipoda is mostly found in sea beds, fresh waters, salty waters, wells, oceans, lakes, and beels. It usually lives in sediments, on sea rocks, or at sea edges [22].

Gammarus belongs to the Amphipoda category. Several pictures of Gammarus are illustrated in figure 1. Various types of Gammarus exist; one of the well-known types is found on the southern coast of the Caspian Sea in Iran. There are four types of Gammarus found on the Caspian coast of Iran: (I) Pontogammarus Maeoticus, (II) Pontogammarus borcea, (III) Obesogammarus crassus, and (IV) Gammarus aequicauda [23].

(a)
(b)
Fig. 1: (a) Gammarus creature, (b) Gammarus body shape (credit of image: [24]).

The body of Gammarus consists of 48.4% protein and 5.33% lipid [25]. Gammarus usually reproduces by spawning around September; its percentages of protein and lipid increase and decrease right before and after spawning, respectively [23]. Gammarus has an average length of 12 millimeters and an average thickness of three millimeters [26, 27]. Its food is mostly organic material and the dead bodies of other creatures. Gammarus has a very important role in the food chain of aquatic animals such as fish, and its dried body is used as food for aquarium fish [27]. Gammarus has two antennas and several legs named gnathopods, pereopods, pleopods, and uropods. Its body shape is convex [28]. See figure 2 for a better visualization of its body parts.

Fig. 2: Different parts of Gammarus body (image taken from [28] with some modifications).
Fig. 3: Flow chart of PMSO algorithm.
Symbol Meaning
U(a, b) Uniform random number in the range of [a, b]
NG Number of Gammarus individuals
x_i Location of the i-th Gammarus in the landscape
x Location in landscape (determined by a vector from the origin)
D Number of dimensions of landscape (search space)
x^(d) The d-th dimension of a location in the landscape
IG Number of global iterations
IL Number of iterations of local search
N_i Neighborhood of the i-th Gammarus
δ_IL Step of decrementing number of iterations of local search
IL_min Lower bound of decrementing number of iterations of local search
δ_N Step of incrementing/decrementing neighborhood
N_min Lower bound of decrementing neighborhood
N_max Upper bound of incrementing neighborhood
W_i The wave vector affecting the i-th Gammarus
θ_i Angle vector of wave, affecting the i-th Gammarus
GB Global best, found so far
LB_i Local best in each global iteration, for the i-th Gammarus
B Length of queue buffer
F Fraction of distance from GB to be considered as neighborhood
s_i Status of the i-th Gammarus for neighborhood adaptation
N_GB Initial neighborhood of founder of GB at the start of every global iteration
TABLE I: Symbols used in PMSO algorithm

III PMSO Algorithm

In the PMSO algorithm, several Gammarus individuals are used as particles in a swarm searching for the global optimum. The flowchart of the PMSO algorithm is shown in figure 3, and its pseudo-code can be seen in Algorithm 1. In the following, the details and different steps of this algorithm are explained, and the natural inspirations are addressed as well. The symbols and notation used in this algorithm are summarized in table I.

III-A Initialization

III-A1 Random exploration

As seen in figure 3 and lines 2-4 of the algorithm, the Gammarus individuals are first scattered randomly in the landscape. Denoting the location of the i-th Gammarus by x_i = (x_i^(1), ..., x_i^(D)), the random initialization of the Gammarus individuals can be formulated as,

x_i^(d) = U(b_min^(d), b_max^(d)),   i = 1, ..., NG,   d = 1, ..., D,        (1)

where x_i^(d) is the location of the i-th Gammarus in the d-th dimension of the landscape, U(a, b) is a uniform random number in the range [a, b], b_min^(d) and b_max^(d) are the bounds of the search space in dimension d, and NG denotes the population of the Gammarus swarm.
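The random exploration step above can be sketched in Python; `initialize_population` and its parameter names are our own illustrative choices, assuming `bounds` holds one (lower, upper) pair per dimension.

```python
import random

def initialize_population(ng, bounds):
    """Uniformly scatter ng Gammarus individuals inside the search bounds.

    bounds: list of (low, high) pairs, one per dimension (illustrative names).
    Returns a list of position vectors, as in equation (1).
    """
    return [[random.uniform(low, high) for (low, high) in bounds]
            for _ in range(ng)]
```

Each individual is drawn independently and uniformly per dimension, matching the uniform initialization described above.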

1: Initialize NG, IL, IG, the initial N_i, δ_IL, IL_min, δ_N, N_min, N_max, B, F, and N_GB
2: for i = 1 to NG do
3:     while collision occurred do
4:         x_i ← random location, equation (1)
5: while stop criterion is not reached do
6:     for i = 1 to NG do
7:         if it is not the first iteration then
8:             |W_i| ← equation (4)
9:             if x_i is close to sea edge then
10:                θ_i ← equation (5)
11:            else
12:                θ_i ← equation (6)
13:            W_i ← equations (7) and (8)
14:            if this Gammarus is founder of GB then
15:                x_i ← GB
16:            else
17:                x_i ← x_i + W_i
18:        N_i ← equation (9)
19:        for j = 1 to IL do
20:            s_i ← NoChange
21:            buffer ← Local Search within N_i
22:            if j > B then
23:                if LB_i is not found in latest B local searches then
24:                    if s_i is NoChange or Decrease then
25:                        s_i ← Increase
26:                        N_i ← min(N_i + δ_N, N_max)
27:                    else if s_i is Increase then
28:                        s_i ← Decrease
29:                        N_i ← max(N_i - δ_N, N_min)
30:        if LB_i is better than GB then
31:            GB ← LB_i
32:    IL ← max(IL - δ_IL, IL_min)
Algorithm 1 PMSO Algorithm
Fig. 4: Local search by Gammarus for the sake of foraging. The red dots are the nutrients (local bests).

III-A2 Collision check

After locating the Gammarus individuals randomly in the landscape, their collisions are checked, and if a collision occurs, their locations are set again. For better time efficiency, this collision check is performed after locating every Gammarus individual rather than after locating all particles and re-locating them all in case of collision (see lines 3-4 of the algorithm). As a measure of collision, the neighborhood of each Gammarus is defined as a distance from it, which can be a surrounding hyper-sphere or hyper-cube in the D-dimensional space. The distance of two locations p and q in the landscape can be calculated using the Euclidean distance,

d(p, q) = sqrt( Σ_{d=1}^{D} (p^(d) - q^(d))^2 ).        (2)

The initial neighborhoods of all Gammarus particles are set to the same value, which is a hyper-parameter and should be set according to the problem.

The benefit of this collision check is better initial exploration of the landscape, searching as many different parts as possible. Moreover, the avoidance of collision is inspired by the fact that Gammarus individuals do not like to get very close to each other. We examined this ourselves with a container of water containing several Gammarus creatures resting in sand. When we shook the container, each Gammarus came out of the sand, and after we put the container down, they tried to find a place in the sand to rest. However, whenever one landed at the place of another Gammarus, it went up again and tried to find a new place in the sand.
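The incremental collision check described above can be sketched as follows; the helper names and the `max_tries` safeguard are our own assumptions, with `radius` playing the role of the initial neighborhood used as the collision measure.

```python
import math
import random

def euclidean(p, q):
    # Distance of two locations in the landscape, as in equation (2).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def place_without_collision(population, bounds, radius, max_tries=1000):
    """Draw a new location, re-drawing while it collides with an already
    placed Gammarus (i.e., lies closer than `radius` to one).

    `max_tries` is a practical safeguard of ours, not part of the paper.
    """
    for _ in range(max_tries):
        candidate = [random.uniform(low, high) for (low, high) in bounds]
        if all(euclidean(candidate, other) >= radius for other in population):
            return candidate
    return candidate  # give up after max_tries as a practical fallback
```

Placing particles one by one, as above, avoids re-locating the whole population whenever a single collision occurs.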

III-B Foraging

III-B1 Natural inspiration

As the next step, for the sake of exploitation, each Gammarus searches its neighborhood locally for several iterations. The local search of each Gammarus in its neighborhood is inspired by its foraging, for which it searches around in the sand it has landed on. Figure 4 depicts the local search for foraging in sand. As can be seen in this figure, there are two pieces of nutrients in the sand which the Gammarus is looking for. At first, the Gammarus falls on the region to search (figure 4(a)). After the Gammarus searches within its neighborhood for several iterations (figure 4(b)), it finds out that no nutrient exists in that region, so it increases its neighborhood and finds a nutrient (figure 4(c)). The nutrients are modeled by the local bests found by the Gammarus individuals. The Gammarus keeps searching with the same increased neighborhood for several iterations; however, it does not find any nutrient again (figure 4(d)). So it guesses that it might be better to search in a specific direction, where it might luckily find a nutrient with less effort (figure 4(e)).

III-B2 Exploitation

Notice that two types of iterations exist in this algorithm: (I) the global iteration, which is the iteration over the whole algorithm (line 5 of the algorithm), and (II) the local iteration, which is the iteration of the local search of each Gammarus in its neighborhood (line 19). The number of local iterations in every global iteration is denoted by IL. The aim of the local searches performed by the Gammarus individuals is exploitation at the locations of the landscape on which the Gammarus particles are located in a global iteration. The local search of a particle is inspired by the foraging of a Gammarus, which looks for nutrients in the sand on which it has landed for a moment before facing a sea wave.

III-B3 Adaptive neighborhood

Inspired by the explained natural foraging behavior, the neighborhoods of the Gammarus individuals are set adaptively. A status label, denoted by s_i, is defined for each Gammarus. This status is initially set to NoChange in the local search iterations within every global iteration (see line 20 of the algorithm), and the Gammarus searches within its initial neighborhood. Note that this initial neighborhood is subject to change in every global iteration; setting the initial neighborhood for each Gammarus is addressed in the next sections. A queue buffer of length B is defined for each Gammarus, in which the results of the latest B local searches of that Gammarus are stored (line 21 of the algorithm). This buffer is emptied at the start of each global iteration. If the local best of the Gammarus in that global iteration, LB_i, is found among the latest B searches stored in the buffer, the search is progressing, so the Gammarus does not change its status and searches within the previously set neighborhood. Otherwise, it changes its status to Decrease or Increase if its previous status is Increase or Decrease, respectively (see lines 22-29 of the algorithm). Accordingly, the neighborhood of the Gammarus individual is decreased or increased, respectively. Lower and upper bounds, denoted by N_min and N_max respectively, are also imposed when decreasing and increasing the neighborhood in order to prevent unbounded changes.

It is worth mentioning that the adaptive neighborhood has a strong impact especially in high-dimensional search spaces. Therefore, in problems with a small number of dimensions, the neighborhood can be kept fixed over the iterations for the sake of simplicity.
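One reading of the buffer-driven neighborhood adaptation is sketched below; `improved` stands for "the local best appears among the latest B buffered searches", and all names are illustrative rather than from the paper.

```python
def adapt_neighborhood(status, radius, improved, step, r_min, r_max):
    """One adaptation step for a Gammarus neighborhood radius.

    status:   "NoChange", "Increase", or "Decrease"
    improved: True if the local best was found in the last B buffered
              searches (progressive search), in which case nothing changes.
    Returns the new (status, radius) pair, clamped to [r_min, r_max].
    """
    if improved:
        return status, radius                    # progressive: keep searching
    if status in ("NoChange", "Decrease"):
        return "Increase", min(radius + step, r_max)
    else:                                        # previous status: Increase
        return "Decrease", max(radius - step, r_min)
```

The clamps implement the lower and upper bounds that prevent unbounded shrinking or growth of the neighborhood.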

III-B4 Number of iterations in local search

It should be considered that as the algorithm proceeds, we expect it to get closer to the actual global best; therefore, the number of local iterations can be decreased in every global iteration,

IL ← max(IL - δ_IL, IL_min),        (3)

where δ_IL and IL_min are, respectively, the step and the lower bound of changing IL (see line 32 of the algorithm). This step is performed to improve the speed of the algorithm. The inspiration for this step is that as a Gammarus gets closer to the sea edge, which represents GB, the sand gets drier and harder to burrow into. The Gammarus is willing to go inside the sand to forage in it; thus, by getting closer to the sea edge, it can go less deeply into the sand because of its dryness. Moreover, there are more nutrients in the sand of the sea edge, and thus the Gammarus does not need to search a lot.
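The shrinking local-search budget can be written as a one-line helper; the names `step` and `il_min` correspond to the step and lower bound described above and are our own.

```python
def next_local_iterations(il, step, il_min):
    # Equation (3): shrink the local-search budget, but never below il_min.
    return max(il - step, il_min)
```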

Fig. 5: Sea waves, sea edge, sea bed, and Gammarus creatures in sea and sea edge.

III-C Sea waves

III-C1 Natural inspiration

Figure 5 illustrates a sea with its edge and bed. As can be seen in this figure, the farther the Gammarus is from the sea edge, the stronger the sea wave it faces. In the PMSO algorithm, the sea edge is modeled by the global best found so far, denoted by GB. Hence, a Gammarus which has found GB has reached the sea edge and can rest in peace from the sea waves and seek the nutrients at the sea edge. The fact that many Gammarus creatures can be seen in some particular regions of the Caspian sea edge supports the claim that Gammarus individuals want to reach the sea edge and rest from the sea waves.

Moreover, as can be seen in figure 5, the Gammarus creatures which are close to the sea edge are affected by relatively weak sea waves directed toward the sea edge. However, far enough from the sea edge, the sea currents behave randomly in direction, although their strengths are impacted by the strength of the sea waves. Therefore, the Gammarus particles far away from the sea edge are moved randomly, but with strength proportional to their distance from the sea edge.

III-C2 Exploration

In every global iteration except the first one, in order to update the locations of the Gammarus population and to explore, the particles should be shaken in the landscape. Different metaheuristic algorithms perform this step in different ways. In the PMSO algorithm, this step is inspired by the occurrence of sea waves, as explained before. The strength of the wave is determined according to the distance of each Gammarus from the global best found so far; the lower the distance, the less the strength of the wave (see line 8 of the algorithm),

|W_i| = U(0, 1) · d(x_i, GB),        (4)

where U(0, 1) is a uniform random number in [0, 1] and d(x_i, GB) is the Euclidean distance of the i-th Gammarus from GB, as in equation (2). This moves the more distant Gammarus individuals more, which is desirable because the Gammarus individuals which are far from GB should be shaken more than the particles close to GB.

As explained before, inspired by nature, a number of Gammarus individuals which are close to GB are affected by waves directed toward GB. The number of particles considered to be close to GB, denoted by NC, is a hyper-parameter and should be determined according to the optimization problem. To determine whether a Gammarus is close to GB or not, the distances of all Gammarus individuals from GB are computed and then sorted in ascending order. The first NC particles are considered close to GB, and the rest are far from it.

The wave angles (directions) affecting the Gammarus individuals which are close to GB are set as follows. For the i-th Gammarus, suppose v_i is the vector connecting x_i to GB, i.e., v_i = GB - x_i, and v_i^(d) denotes the component of this vector in the d-th dimension. Notice that a direction in a D-dimensional space is determined by D - 1 angles. The angles of this vector in the different dimensions (the angles of the sea wave, denoted by θ_i) can be computed as [29, 30],

θ_i^(d) = arctan( sqrt( Σ_{k=d+1}^{D} (v_i^(k))^2 ) / v_i^(d) ),   d = 1, ..., D - 1.        (5)

On the other hand, the Gammarus individuals which are far away from GB are affected by sea waves with random directions, as their natural inspiration was mentioned previously. Hence, for these particles, the angles of the sea wave are computed as,

θ_i^(d) = U(0, 2π),   d = 1, ..., D - 1.        (6)

This gives the more distant Gammarus individuals random directions, which is beneficial because the Gammarus individuals which are far from GB are better used for exploring other parts of the landscape rather than moving toward GB.

After computing the strength and direction of the sea wave affecting the i-th Gammarus, the sea wave vector W_i is found as [29, 30],

W_i^(d) = |W_i| cos(θ_i^(d)) Π_{k=1}^{d-1} sin(θ_i^(k)),   d = 1, ..., D - 1,        (7)

W_i^(D) = |W_i| Π_{k=1}^{D-1} sin(θ_i^(k)).        (8)

This wave affects all Gammarus individuals except the one which has found GB so far (see lines 14-17 of the algorithm). The Gammarus which is the founder of GB is put exactly at GB in order to better search around GB for a possibly better solution. This has biological support because GB models the sea edge, where the Gammarus creature rests from the sea waves.

It is worth mentioning that randomly selecting the angles for distant Gammarus individuals helps escape local bests, in some cases much better than PSO, in which all particles are moved toward their global and local bests. That is because the particles are given a chance to explore more of the landscape and perhaps find the global best in outlying positions.
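A sketch of the sea-wave move for a single Gammarus is given below. Instead of the angle parameterization of equations (5)-(8), it uses unit direction vectors, which is an equivalent way to combine a strength with a direction; the uniform random factor in the strength is our assumption, and all names are illustrative.

```python
import math
import random

def wave_vector(x, gb, toward_gb):
    """One sea-wave displacement for a Gammarus at x, given global best gb.

    toward_gb=True models particles close to the sea edge (pushed toward GB);
    toward_gb=False models far particles (random direction). In both cases
    the strength is proportional to the distance from GB, so the wave never
    overshoots past GB by more than that distance.
    """
    dim = len(x)
    v = [g - xi for g, xi in zip(gb, x)]               # vector from x to GB
    dist = math.sqrt(sum(c * c for c in v))
    strength = random.random() * dist                   # weaker when closer
    if toward_gb and dist > 0:
        direction = [c / dist for c in v]               # unit vector to GB
    else:
        direction = [random.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in direction)) or 1.0
        direction = [c / norm for c in direction]       # random unit vector
    return [strength * c for c in direction]
```

The particle's new location would then be `x + wave_vector(...)` componentwise, with the founder of GB left unmoved.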

III-D Initial neighborhood

III-D1 Natural inspiration

According to figure 5, the farther from the sea edge a Gammarus lands on the sea bed, the softer the sand is, and thus the more freedom the Gammarus has to search for food. Therefore, the neighborhood of a Gammarus creature on the sea bed is larger at far distances from the sea edge.

III-D2 Initial neighborhood determination

At the start of every global iteration, the neighborhood of each Gammarus individual should be set for the local search, which was explained in the previous sections. In the first global iteration, all particles are given a fixed pre-defined neighborhood. This initial neighborhood is used for both the collision check and the local search in the first iteration. In the succeeding global iterations, however, the neighborhoods of the Gammarus population are determined differently for the local search: at the start of every global iteration, the neighborhood of each Gammarus is set according to its distance from GB. Inspired by nature, as explained, the more distant the Gammarus is from GB, the larger its initial neighborhood should be. Hence, the initial neighborhood is determined as,

N_i = F · d(x_i, GB),        (9)

where the hyper-parameter F is the fraction of the distance from GB to be considered as the neighborhood (see line 18 of the algorithm). This step is performed after applying the sea waves and moving the particles, as shown in figure 3. After being initialized at the start of each global iteration and after applying the sea waves, the neighborhood changes adaptively during the local search according to Section III-B3.

Notice that, as shown in figure 3, the initial neighborhoods of all particles except the founder of GB are determined by equation (9). The initial neighborhood of the founder of GB at the start of each global iteration is set to N_GB, a pre-defined hyper-parameter. Moreover, note that, similar to the adaptive neighborhood property, determining the initial neighborhood according to the distance from GB has more impact in high-dimensional search spaces. Thus, if the number of dimensions of the search space is not very high, the initial neighborhoods of the particles can be kept fixed.
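The initial-neighborhood rule, including the special case of the founder of GB, can be sketched as a small helper; `founder_radius` stands for the pre-defined founder neighborhood, and the names are our own.

```python
def initial_neighborhood(dist_to_gb, fraction, founder_radius, is_founder):
    """Initial neighborhood at the start of a global iteration.

    Equation (9) as we read it: a fraction F of the distance to GB,
    except for the founder of GB, which gets a fixed pre-defined radius.
    """
    return founder_radius if is_founder else fraction * dist_to_gb
```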

III-E Updating global best & checking stop criterion

At the end of every global iteration, the local best of each Gammarus is compared to the global best so far (GB). If a better answer has been obtained in this global iteration, GB is updated (lines 30-31 of the algorithm). Thereafter, the stop criterion is checked; if it is reached, the algorithm terminates and the best answer found so far (GB) is returned as the global optimum. The stop criterion can be one of these conditions: (I) a convergence criterion, i.e., whether the difference between the last two global best answers is small enough, (II) a time-out criterion, (III) reaching the bound on the number of global iterations, or (IV) reaching the maximum number of allowed function evaluations.
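Putting the pieces of Section III together, a compact, simplified PMSO-style minimizer is sketched below. It keeps the main ingredients (random initialization, distance-proportional waves toward GB for near particles, random directions for far particles, distance-proportional local-search radii, and a shrinking local budget) but omits the collision check and the buffer-based neighborhood adaptation for brevity; all parameter names and defaults are our own illustrative choices, not values from the paper.

```python
import math
import random

def pmso_minimize(f, bounds, ng=20, ig=60, il=25, n_close=5,
                  frac=0.3, il_step=1, il_min=5, seed=0):
    """Simplified PMSO-style minimization sketch (illustrative, not the
    paper's exact procedure)."""
    rng = random.Random(seed)
    dim = len(bounds)

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # Random exploration, as in equation (1)
    pop = [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(ng)]
    gb = min(pop, key=f)

    for _ in range(ig):
        # Sea waves: the n_close nearest particles are pushed toward GB,
        # the rest move in random directions; strength shrinks with distance.
        order = sorted(range(ng), key=lambda i: dist(pop[i], gb))
        for rank, i in enumerate(order):
            d = dist(pop[i], gb)
            if d == 0:
                continue                        # founder rests at GB
            strength = rng.random() * d
            if rank < n_close:
                direction = [(g - v) / d for g, v in zip(gb, pop[i])]
            else:
                direction = [rng.gauss(0.0, 1.0) for _ in range(dim)]
                n = math.sqrt(sum(c * c for c in direction)) or 1.0
                direction = [c / n for c in direction]
            pop[i] = clip([v + strength * c
                           for v, c in zip(pop[i], direction)])
        # Foraging: local search within a distance-proportional radius
        for i in range(ng):
            radius = frac * dist(pop[i], gb) or frac
            best = pop[i]
            for _ in range(il):
                cand = clip([v + rng.uniform(-radius, radius) for v in best])
                if f(cand) < f(best):
                    best = cand
            pop[i] = best
            if f(best) < f(gb):
                gb = best
        il = max(il - il_step, il_min)          # equation (3)
    return gb
```

On a smooth test function such as the sphere, this sketch steadily refines GB because near particles search with radii that shrink along with their distance to GB.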

Fig. 6: A part of a 2D sample scenario in PMSO algorithm.

IV Analysis of algorithm

IV-A Convergence of particles

It can be proved that if the best answer found so far is the actual global optimum, then even the particles that do not move toward it (especially the ones which are distant from GB and are shaken randomly) will converge to it after infinitely many global iterations, because as they get farther from it, the strength of the wave gets larger.

Theorem 1.

If the global best found so far, denoted by GB, is actually the global best, all Gammarus individuals converge to it after an infinite number of iterations.

Proof.

For the Gammarus individuals which are close to the sea edge, the claim is obvious because they are moved toward GB. For the distant Gammarus individuals, the following proof by contradiction is proposed. As GB is the actual global best, it will not be replaced anymore by another location as the global best found so far. Suppose that a Gammarus individual does not converge to GB after an infinite number of iterations. Denoting by d_t the distance of that Gammarus from GB in the t-th global iteration, this means that d_{t+1} ≥ d_t in all iterations; in other words, the particle gets farther and farther from GB. However, the direction (angle) of the sea wave is determined randomly and thus will eventually move the Gammarus toward GB in some iteration, say the t-th one. Moreover, as the strength of the wave is proportional to d_t, the particle will get close to GB in iteration t + 1. Thereafter, d_{t+1} is very small, so the wave affecting that particle will not be strong anymore, and the Gammarus will remain around GB. This means the Gammarus has converged to GB, which contradicts the assumption. ∎

Note that the finite number of global iterations in practice is not a worry because, as a new viewpoint, there is no need to gather all the particles at the best answer found so far. The particle which has found GB can search around it without the help of all the other particles; having several particles close to GB to help it search around GB suffices. Notice that the founder particle of the best answer so far faces a weak wave because it is near the best answer (the sea edge), so it can search around the best answer better. In other words, the particle which has found GB, together with the particles close to GB, locally searches to fine-tune GB, while the other, far particles search far from it in the landscape in the hope of finding a possibly better answer which has not been found yet. This delicate fact is not considered in algorithms such as PSO and, to the best of our knowledge, it is not thoroughly analyzed and tackled by other metaheuristic algorithms either. In other words, different roles are given to the particles of the swarm, and the particles cooperate in various roles toward the goal of optimization.

IV-B A sample scenario of algorithm

A sample scenario is proposed in this section in order to analyze and describe the PMSO algorithm. Figure 6 depicts a part of this sample scenario in the 2D case for the sake of better visualization. First, the Gammarus individuals are randomly scattered in the landscape and, having the same pre-defined neighborhoods, they start to search locally with adaptive neighborhoods. Assume that the local optimum at the top-right corner of the landscape (shown in figure 6(a)) is found as GB in the first global iteration. Figure 6(a) shows the start of the second global iteration. As shown in this figure, the distance of each Gammarus individual from GB is calculated.

Afterwards, the sea waves are applied to the Gammarus particles as shown in figure 6(b). It can be seen in this figure that the Gammarus which is the founder of GB is put exactly on GB and is not affected by a sea wave. The other two Gammarus individuals which are close to GB are affected by waves toward GB; however, the three far away from GB are moved in random directions. The strengths of all sea waves, however, are determined according to the particles' distances from GB.

Figure 6(c) shows the step after the sea waves are applied. As can be seen in this figure, the neighborhoods of the Gammarus particles are set as a fraction of their distances from GB. The farther particles have wider neighborhoods, so they can search more distantly in the landscape, hoping to find a better answer. It can be seen in figure 6(c) that the very far Gammarus at the left has the chance to find the actual global best because it lies within its neighborhood. Changing the neighborhood adaptively will help this particle find the actual global best more easily. Adaptation of the neighborhood is not shown in figure 6 for the sake of brevity. When the left Gammarus finds the actual global best, that point becomes GB thereafter, and the sea waves and neighborhoods are then determined according to that point.

As can be seen in figure 6, if the particles were all moved toward GB (or a combination of local and global bests), the actual global best might not be found at all. The random wave directions for far particles, proposed in the PMSO algorithm, give the algorithm a chance not to be trapped in local bests.

Fig. 7: 2D versions of CEC05 uni-modal benchmarks (F1-F5).
Fig. 8: 2D versions of CEC05 multi-modal benchmarks (F6-F14).
Fig. 9: 2D versions of CEC05 hybrid benchmarks (F15-F22, F24, F25).

IV-C Time complexity

Theorem 2.

The time complexity of the PMSO algorithm is O(NG · IG · IL) in the worst case, where NG, IG, and IL denote the number of Gammarus individuals, the number of global iterations, and the number of local iterations, respectively.

Proof.

According to figure 3 and Algorithm 1, the PMSO algorithm consists of at most IG global iterations; the algorithm might terminate sooner by the stop criterion. In every global iteration, each Gammarus searches locally in its neighborhood. The number of local iterations IL is decremented in every global iteration; however, as an upper bound, we do not consider this decrement. Hence, the time complexity for every particle is O(IG · IL). This procedure is performed by every Gammarus individual, and as the population includes NG particles, the time complexity of the PMSO algorithm is O(NG · IG · IL). ∎

IV-D Space complexity

Theorem 3.

The space complexity of the PMSO algorithm is O(NG + B) in the worst case, where NG and B, respectively, denote the number of Gammarus individuals and the size of the buffer. Therefore, the required space for the PMSO algorithm is linear in the size of the input.

Proof.

According to figure 3 and Algorithm 1, the PMSO algorithm needs O(NG) space for saving the locations of the particles. Moreover, every particle requires additional space for its auxiliary variables, of which the buffer of size B is the largest, i.e., O(B) space per particle. Note that each buffer entry needs only constant space because a particle can have three possible cases, i.e., NoChange, Increase, Decrease. Notice that, as the algorithm iterates serially over all the particles in every global iteration, it does not need to hold these spaces simultaneously for all particles; in other words, these spaces can be overwritten by the particles across the iterations. Therefore, the auxiliary variables require merely a constant multiple of B in total. Other parameters, such as GB, need only constant space to be stored. Thus, the total space needed for the PMSO algorithm is O(NG + B). ∎
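The overwrite argument can be illustrated by tracking live auxiliary cells when a single scratch buffer is reused serially across particles (names are illustrative, not from the paper):

```python
def peak_auxiliary_cells(n_particles, buffer_size):
    """Track the peak number of live auxiliary cells when one scratch
    buffer of size B is allocated once and overwritten by each particle
    in turn, instead of keeping NG separate buffers."""
    scratch = [None] * buffer_size   # allocated once: B cells
    peak = len(scratch)
    for p in range(n_particles):
        for i in range(buffer_size): # overwritten, never re-allocated
            scratch[i] = p
        peak = max(peak, len(scratch))
    return peak
```

The peak stays at B cells no matter how many particles pass through, which is why the auxiliary cost does not multiply by NG.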

TABLE II: CEC05 uni-modal benchmarks used for experiments (columns: Benchmark, Function, Bounds, Initialization; mathematical expressions omitted here).
TABLE III: CEC05 multi-modal benchmarks used for experiments (columns: Benchmark, Function, Bounds, Initialization; one of the benchmarks has no bounds).
TABLE IV: CEC05 hybrid benchmarks used for experiments (columns: Benchmark, Function, Bounds, Initialization). The compositions use the basic functions Rastrigin, Weierstrass, Griewank, Ackley, and Sphere (with different linear transformation matrices); Ackley, Rastrigin, Sphere, Weierstrass, and Griewank (with differences in weights and in shifting of optima); Rotated Expanded Scaffer, Rastrigin, F13, Weierstrass, and Griewank (with different linear transformation matrices); and Weierstrass, Scaffer, F13, Ackley, Rastrigin, and Griewank (one variant with no bounds).

V Experimental results of the algorithm

V-A Benchmarks

For evaluation, verification, and comparison of the proposed optimization algorithm with other state-of-the-art methods, the CEC05 benchmarks [31], which are standard benchmarks for metaheuristic optimization, are used. (The CEC 2005 benchmark functions are available online: http://www.ntu.edu.sg/home/EPNSugan/.) This benchmark bank includes different types of benchmark functions, i.e., uni-modal, multi-modal, and hybrid. These benchmarks are based on several real-world optimization problems [31].

Uni-modal and multi-modal benchmarks include one and multiple optima, respectively. Hybrid functions, however, are composed of several different uni-modal and multi-modal functions with different weights and coverage of the domain. Tables II, III, and IV respectively report the mathematical expressions and properties of the uni-modal, multi-modal, and hybrid benchmark functions in the CEC05 benchmark bank [31]. The 2D versions of the uni-modal, multi-modal, and hybrid benchmarks are shown in figures 7, 8, and 9, respectively. Note that F23 is not considered among the hybrid benchmarks because it is quantized and not continuous.

In tables II, III, and IV, z_i is the i-th component of the vector z. This vector is z = x − o for functions F1, F2, F4, F9, is z = x − o + 1 for functions F6, F13, and is z = (x − o)M for functions F3, F7, F8, F10, F11, F14, where o is the shifted global optimum and M denotes an orthogonal transformation matrix. In F5, A and B are a D × D matrix and a D-dimensional vector, respectively, where D denotes the dimension of the benchmark. However, in F12, both A and B are D × D matrices. F13 is an expanded composition of Griewank's and Rosenbrock's functions, and F14 is an expanded Scaffer's F6 function. On the other hand, for the hybrid functions listed in table IV, the benchmark functions are composed as a weighted summation of basic functions according to the formula F(x) = Σ_i w_i f_i((x − o_i)/λ_i), where w_i and λ_i, respectively, denote the weight and the stretch/compress amount of the i-th basic function in the composition. The reader is referred to [31] for more details on the benchmark functions and their parameters.
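As an illustration of this composition scheme, the following sketch evaluates a two-function composition; the basic functions, weights, shifts, and stretch factors are illustrative placeholders, not the actual CEC05 parameters (which also involve rotation matrices and biases):

```python
import math

def sphere(z):
    return sum(v * v for v in z)

def rastrigin(z):
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in z)

def hybrid(x, basics, weights, shifts, lambdas):
    """Weighted composition F(x) = sum_i w_i * f_i((x - o_i) / lambda_i),
    where w_i is the weight and lambda_i the stretch/compress amount of
    the i-th basic function (rotations and biases omitted)."""
    total = 0.0
    for f, w, o, lam in zip(basics, weights, shifts, lambdas):
        z = [(xi - oi) / lam for xi, oi in zip(x, o)]
        total += w * f(z)
    return total

# Both basics are shifted to share the optimum at (1, 1) in this toy setup.
val = hybrid([1.0, 1.0],
             basics=[sphere, rastrigin],
             weights=[0.7, 0.3],
             shifts=[[1.0, 1.0], [1.0, 1.0]],
             lambdas=[1.0, 2.0])
```

At the shared shifted optimum both basics contribute zero, so the composition is zero there and positive elsewhere.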

F1 F2 F3 F4 F5
Best 0.000E00 0.000E00 5.066E07 0.000E00 1.453E01
Mean 7.504E06 8.143E06 3.143E02 9.646E06 5.242E01
Std. 1.781E05 2.060E05 6.663E02 2.767E05 2.709E01
TABLE V: Mean, best and standard deviation values of solutions achieved for CEC05 uni-modal benchmarks taken over 30 runs at 2 dimensions.
F6 F7 F8 F9 F10 F11 F12 F13 F14
Best 0.000E00 7.400E03 1.120E02 0.000E00 0.000E00 0.000E00 0.000E00 0.000E00 0.000E00
Mean 7.161E00 1.586E01 4.367E00 6.630E02 8.130E02 3.290E02 2.440E02 1.120E03 1.240E02
Std. 1.870E01 2.745E01 3.776E00 2.524E01 2.610E01 5.000E02 4.660E02 3.100E03 9.500E03
TABLE VI: Mean, best and standard deviation values of solutions achieved for CEC05 multi-modal benchmarks taken over 30 runs at 2 dimensions.

V-B Experiments on 2D benchmarks

The PMSO algorithm is tested on the 2-dimensional CEC05 benchmarks in order to experimentally verify the effectiveness and correctness of this new optimization algorithm. The population size is set to 40 as in [32], and the number of runs for each benchmark function is 30 as in [16]. According to [31], the stop criterion is a maximum number of function evaluations, which is 10^4 × D, where D is the dimension of the benchmark.
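The protocol can be sketched as a small harness; here a plain random search stands in for PMSO purely to illustrate the evaluation budget and the statistics reported in the tables (runs are reduced to 5 to keep the sketch quick; all names are assumptions):

```python
import random
import statistics

def random_search(f, dim, max_fes, bounds=(-5.0, 5.0), seed=0):
    """Placeholder optimizer honoring a fixed budget of function evaluations."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_fes):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        best = min(best, f(x))
    return best

def run_benchmark(f, dim, runs=5):
    max_fes = 10_000 * dim   # CEC05 stop criterion: 1e4 * D evaluations
    results = [random_search(f, dim, max_fes, seed=r) for r in range(runs)]
    return min(results), statistics.mean(results), statistics.stdev(results)

sphere = lambda x: sum(v * v for v in x)
best, mean, std = run_benchmark(sphere, dim=2)
```

The three returned values correspond to the Best, Mean, and Std. rows of tables V and VI.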

The results of these experiments are reported in tables V and VI for the uni-modal and multi-modal benchmarks, respectively. As can be seen in these tables, the PMSO algorithm works properly for the goal of optimization and its performance is acceptable. The errors on all the different benchmarks, except perhaps F3, are very small, as expected. These results experimentally support the proof of correctness of the PMSO algorithm, and show that it can be applied to different optimization problems and benchmarks.

PMSO BA ABC HS ACO AAA DE
F1 Best 5.900E03 2.997E00 0.000E00 0.000E00 0.000E00 0.000E00 0.000E00
Mean 6.010E02 4.233E03 1.895E15 0.000E00 2.274E14 0.000E00 0.000E00
Std. 3.450E02 3.812E03 1.038E14 0.000E00 3.533E14 0.000E00 0.000E00
F2 Best 9.400E03 4.257E00 3.077E01 4.607E03 0.000E00 1.541E08 5.013E00
Mean 1.024E01 4.898E03 3.322E00 2.774E01 1.478E13 1.184E06 1.403E01
Std. 5.800E02 3.547E03 2.893E00 3.481E01 1.263E13 1.757E06 5.389E00
F3 Best 1.651E04 3.169E05 1.860E05 3.230E04 8.126E04 5.199E04 3.482E05
Mean 2.866E05 1.528E07 7.224E05 2.196E05 2.238E06 3.111E05 1.753E06
Std. 2.208E05 1.959E07 3.527E05 1.723E05 2.689E06 2.441E05 6.862E05
F4 Best 1.072E01 1.597E03 2.894E02 8.339E02 5.684E14 4.191E05 5.814E01
Mean 1.981E03 8.885E03 1.327E03 5.989E01 3.752E13 8.199E03 1.483E02
Std. 1.610E03 4.665E03 6.884E02 1.074E02 3.221E13 9.187E03 6.227E01
F5 Best 6.038E01 8.165E02 8.169E00 1.855E10 3.456E01 0.000E00 1.206E01
Mean 1.929E03 8.168E03 7.750E01 5.555E01 3.938E02 1.601E11 3.384E00
Std. 1.486E03 3.946E03 8.828E01 1.058E02 4.884E02 3.433E11 3.175E00
TABLE VII: Mean, best and standard deviation values of solutions achieved for CEC05 uni-modal benchmarks taken over 30 runs at 10 dimensions.

V-C Comparison on 10D benchmarks

For the sake of comparing the proposed optimization algorithm, 10-dimensional uni-modal, multi-modal, and hybrid benchmarks are utilized. Several well-known state-of-the-art methods, most of which are inspired by the foraging behavior of living organisms, as PMSO is, are used for comparison. These algorithms are the Bees Algorithm (BA) [33, 32], Artificial Bee Colony (ABC) [34, 35, 32], Population-based Harmony Search (HS) [36], Ant Colony Optimization (ACO) [37, 32], Artificial Algae Algorithm (AAA) [16], and Differential Evolution (DE) [38, 32].

The results of experimenting with the PMSO algorithm, as well as the state-of-the-art methods, on the uni-modal, multi-modal, and hybrid benchmarks are reported in tables VII, VIII, and IX, respectively. The results of the other state-of-the-art methods reported in these tables are taken from [16]. The settings of the experiments are the same as mentioned for the 2D experiments. As is evident in these three tables, PMSO performs well on the different benchmarks. It strongly outperforms the BA algorithm [33, 32] on all benchmarks. PMSO outperforms the ABC algorithm [34, 35, 32] on benchmarks F2, F3, F7, F8, and F25, and nearly matches its performance on benchmark F14. Moreover, it can be seen that the PMSO algorithm outperforms the Population-based Harmony Search algorithm [36] on benchmarks F2, F7, F8, F11, and F21. In comparison to the ACO algorithm [37, 32], PMSO outperforms it on benchmarks F3, F8, F11, F12, F14, F15, F16, F21, F24, and F25, and its performance is close to ACO's on benchmarks F17, F18, F19, F20, and F22. In addition, it can be observed that the PMSO method outperforms the AAA algorithm [16] on benchmarks F3 and F25, and performs closely to it on function F8. Finally, PMSO has better performance than the DE algorithm [38, 32] on benchmarks F2, F3, F8, and F25, and is close to it on benchmark function F14.

The results show the effectiveness and correctness of the proposed metaheuristic optimization algorithm, which can be used in different problems and on various benchmarks. The performance of this method on different types of benchmarks, including uni-modal, multi-modal, and hybrid ones, shows that the PMSO algorithm is applicable to optimization problems with different characteristics. The next section demonstrates one of the applications of this algorithm.

PMSO BA ABC HS ACO AAA DE
F6 Best 8.488E00 7.899E02 5.465E02 5.533E01 3.661E06 1.623E06 3.020E01
Mean 2.870E03 2.936E08 1.491E00 2.709E01 8.611E01 1.219E00 3.033E00
Std. 3.270E03 4.188E08 2.487E00 2.839E01 4.412E02 1.491E00 2.213E00
F7 Best 1.336E00 1.560E02 1.267E03 1.267E03 3.161E01 1.267E03 2.363E01
Mean 4.290E01 1.645E03 1.267E03 1.267E03 8.581E01 1.267E03 4.106E01
Std. 5.009E01 7.414E02 2.313E13 1.094E02 2.913E01 3.288E02 9.131E02
F8 Best 2.006E01 2.017E01 2.019E01 2.018E01 2.014E01 2.002E01 2.021E01
Mean 2.028E01 2.034E01 2.033E01 2.033E01 2.035E01 2.016E01 2.040E01
Std. 9.470E02 7.941E02 7.863E02 5.667E02 8.296E02 8.695E02 6.267E02
F9 Best 1.492E01 1.766E01 0.000E00 0.000E00 2.985E00 0.000E00 0.000E00
Mean 3.678E01 5.135E01 0.000E00 2.801E07 7.735E00 0.000E00 0.000E00
Std. 1.232E01 1.820E01 0.000E00 1.534E06 3.603E00 0.000E00 0.000E00
F10 Best 2.290E01 3.757E01 1.008E01 1.751E01 7.091E00 4.975E00 1.144E01
Mean 4.913E01 7.113E01 2.518E01 2.193E01 2.340E01 1.501E01 1.911E01
Std. 1.518E01 2.512E01 7.635E00 2.371E00 7.573E00 5.593E00 3.599E00
F11 Best 3.974E00 6.003E00 4.175E00 8.113E00 4.981E00 1.499E00 4.875E00
Mean 7.106E00 9.360E00 5.415E00 9.221E00 8.604E00 3.667E00 6.102E00
Std. 1.496E00 1.518E00 7.297E01 4.890E01 9.727E01 9.292E01 6.214E01
F12 Best 4.444E02 6.907E03 8.504E01 4.466E01 1.349E04 2.439E00 1.715E02
Mean 3.558E03 3.014E04 3.070E02 3.168E03 2.923E04 5.435E02 4.341E02
Std. 4.139E03 8.858E03 1.634E02 3.135E03 6.722E03 7.766E02 1.850E02
F13 Best 8.125E01 4.722E00 3.125E02 2.048E01 8.360E01 1.228E01 7.506E02
Mean 2.974E00 9.636E00 2.241E01 8.897E01 1.692E00 4.231E01 2.936E01
Std. 1.717E00 3.494E00 8.985E02 4.433E01 5.347E01 1.329E01 1.176E01
F14 Best 2.867E00 3.256E00 2.992E00 1.170E00 3.219E00 2.698E00 3.174E00
Mean 3.564E00 3.940E00 3.412E00 2.485E00 3.800E00 3.296E00 3.459E00
Std. 2.668E01 2.307E01 1.441E01 6.736E01 2.861E01 2.823E01 1.299E01
TABLE VIII: Mean, best and standard deviation values of solutions achieved for CEC05 multi-modal benchmarks taken over 30 runs at 10 dimensions.
PMSO BA ABC HS ACO AAA DE
F15 Best 1.223E02 4.100E02 0.000E00 0.000E00 1.026E02 0.000E00 4.867E01
Mean 4.108E02 5.921E02 7.329E02 2.782E02 4.355E02 3.311E01 1.653E01
Std. 1.144E02 9.820E01 3.227E01 1.786E02 1.876E02 3.778E01 1.813E01
F16 Best 1.166E02 1.385E02 1.238E02 1.159E02 1.066E02 9.222E01 1.107E02
Mean 1.987E02 3.212E02 1.476E02 1.380E02 2.055E02 1.293E02 1.443E02
Std. 5.011E01 7.830E01 1.384E01 1.030E01 1.178E02 1.596E01 1.475E01
F17 Best 1.133E02 1.659E02 1.404E02 1.285E02 1.215E02 9.984E01 1.384E02
Mean 2.065E02 3.442E02 1.694E02 1.507E02 1.890E02 1.353E02 1.730E02
Std. 4.990E01 8.931E01 1.529E01 1.146E01 5.524E01 1.825E01 1.422E01
F18 Best 4.911E02 9.685E02 4.035E02 6.269E02 7.794E02 3.000E02 5.114E02
Mean 9.453E02 1.104E03 5.086E02 8.541E02 9.317E02 4.864E02 7.459E02
Std. 1.360E02 5.856E01 7.031E01 1.013E02 9.863E01 2.242E02 1.012E02
F19 Best 6.150E02 9.810E02 4.349E02 3.002E02 8.001E02 3.000E02 3.740E02
Mean 9.844E02 1.111E03 5.213E02 8.265E02 9.552E02 4.529E02 6.980E02
Std. 1.046E02 6.753E01 7.592E01 1.545E02 6.679E01 1.993E02 1.363E02
F20 Best 6.066E02 8.019E02 5.000E02 5.375E02 7.244E02 3.000E02 5.007E02
Mean 9.610E02 1.097E03 5.430E02 8.674E02 9.396E02 4.842E02 7.536E02
Std. 1.303E02 8.884E01 1.075E02 1.019E02 9.376E01 2.135E02 1.110E02
F21 Best 2.000E02 5.037E02 2.035E02 5.000E02 5.000E02 3.000E02 2.270E02
Mean 9.687E02 1.266E03 3.459E02 1.038E03 1.088E03 5.233E02 4.642E02
Std. 3.105E02 1.574E02 9.939E01 1.990E02 2.130E02 1.006E02 9.040E01
F22 Best 4.573E02 9.150E02 2.008E02 7.574E02 5.318E02 3.000E02 7.760E02
Mean 8.337E02 1.025E03 7.084E02 7.873E02 8.315E02 7.347E02 7.941E02
Std. 9.306E01 6.621E01 1.905E02 2.429E01 1.010E02 1.196E02 7.699E00
F24 Best 2.000E02 1.110E03 2.000E02 2.000E02 3.744E02 2.000E02 2.000E02
Mean 3.386E02 1.264E03 2.000E02 2.400E02 6.038E02 2.000E02 2.000E02
Std. 2.200E02 5.919E01 0.000E00 1.038E02 2.904E02 0.000E00 1.201E02
F25 Best 2.000E02 1.303E03 6.176E02 2.000E02 3.727E02 8.120E02 8.183E02
Mean 5.651E02 1.390E03 7.689E02 3.196E02 5.708E02 8.170E02 8.272E02
Std. 2.963E02 4.070E01 9.427E01 1.165E02 2.967E02 2.464E00 3.619E00
TABLE IX: Mean, best and standard deviation values of solutions achieved for CEC05 hybrid benchmarks taken over 30 runs at 10 dimensions.
Fig. 10: A solar PV array in TCT configuration.

VI Using PMSO in partially shaded solar PV array

VI-A Different configurations of solar PV array

The proposed optimization algorithm can be applied to various types of real-world problems. In this section, as an example, it is used to solve the problem of configuring a partially shaded solar PV array for the maximum possible power. Two well-known configurations of PV arrays are the TCT (Total-Cross-Tied) and Su Do Ku configurations. In all configurations of solar PV arrays, each cell can be moved merely within its own column.

In the TCT configuration, the solar cells are connected to each other in a cross-tied variant of the Series-Parallel (SP) scheme. For instance, figure 10 illustrates a solar PV array in the TCT configuration. As can be seen in this figure, the cells in a row are parallel to each other and the cells in a column are connected serially. In this figure, I_ij denotes the current of the cell in the i-th row and the j-th column. V_m is the voltage of every row and V is the total obtained voltage, which is 9V_m here.

However, if a shadow falls on the solar array partially, the TCT configuration does not disperse the shadow uniformly over the array [18] and therefore the output power is not optimal. Hence, the Su Do Ku arrangement was proposed [17], which disperses the shadow by relocating cells according to a Su Do Ku puzzle pattern. Yet, the Su Do Ku configuration does not yield the maximum possible power and also has several drawbacks [18].

Hence, recent research has applied heuristic optimization to the problem of partial shading in solar arrays. As an example, the Genetic Algorithm is used for optimization in [18]. In that work, the cells are relocated within every column. Every chromosome is two-dimensional and each cell plays the role of a gene. The mutation and cross-over are performed so that every cell can be moved (relocated) only within its own column.
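A minimal sketch of such a column-constrained mutation (function and variable names are assumptions, not taken from [18]):

```python
import random

def mutate_in_column(array, rng=random):
    """Swap two cells within one randomly chosen column, so every cell
    stays in its own column (the movement constraint of the PV problem)."""
    rows, col = len(array), rng.randrange(len(array[0]))
    i, k = rng.sample(range(rows), 2)
    array[i][col], array[k][col] = array[k][col], array[i][col]
    return array

# 3x3 toy array; each column's set of cells must be preserved by mutation.
grid = [[10 * r + c for c in range(3)] for r in range(3)]
before_cols = [sorted(row[c] for row in grid) for c in range(3)]
mutate_in_column(grid, random.Random(1))
after_cols = [sorted(row[c] for row in grid) for c in range(3)]
```

The swap changes the arrangement of rows but leaves the multiset of cells in every column intact, which is exactly the feasibility constraint of the array.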

VI-B Calculations of solar PV array

Suppose that I_m is the current generated by a cell which receives the standard irradiance G_0. G_0 is taken to be 1000 W/m² [18]. If the irradiance on a cell is denoted as G (in W/m²), the irradiance factor is obtained as k = G / G_0.

According to Kirchhoff's current law, the total current of the i-th row is obtained as,

I_Ri = Σ_{j=1}^{C} k_ij I_ij,     (10)

where R and C are, respectively, the number of rows and columns of the array, which are both nine in the mentioned example. Also, k_ij is the irradiance factor of the cell in the i-th row and the j-th column, and I_ij is the current obtained from the corresponding cell at the standard irradiation. As the cells are supposed to be identical, it can be assumed that all I_ij's are the same and equal to I_m. Therefore, equation (10) can be rewritten as,

I_Ri = I_m Σ_{j=1}^{C} k_ij.     (11)

In all the configurations, the row currents are sorted from the smallest to the largest value. Thereafter, the rows are bypassed one by one, from the first sorted row to the last, which reduces the output voltage V. At one of these bypass levels, the output power is the maximum, which is the desired operating point. Hence, that bypass is performed and the maximum power of that configuration is obtained. In the following sections, the results of bypassing rows using different methods, including PMSO, are reported.
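The procedure can be sketched as follows, with V_m and I_m normalized to 1; the power at bypass level b is modeled as (R − b) active rows times the smallest remaining row current, which is an assumed reading of the bypass rule:

```python
def row_currents(k, i_m=1.0):
    """Equation (11): the row current is I_m times the sum of irradiance
    factors k_ij over the columns of that row."""
    return [i_m * sum(row) for row in k]

def max_power(k, v_m=1.0, i_m=1.0):
    """Sort row currents ascending, bypass rows one by one, and keep the
    bypass level maximizing P = (remaining rows) * V_m * I_limit, where
    I_limit is the smallest non-bypassed row current (assumed model)."""
    currents = sorted(row_currents(k, i_m))
    r = len(currents)
    return max((r - b) * v_m * currents[b] for b in range(r))

# 3x3 example with one heavily shaded row: bypassing it raises the power.
k = [[1.0, 1.0, 1.0],
     [0.2, 0.2, 0.2],
     [1.0, 1.0, 1.0]]
p = max_power(k)
```

In this example, keeping all three rows yields power 3 × 0.6 = 1.8, while bypassing the shaded row yields 2 × 3.0 = 6.0, so the bypass at level one is chosen.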

VI-C PMSO algorithm for solar PV array

The proposed PMSO algorithm can be utilized for any arbitrary optimization problem. Here, we use this algorithm for the partially shaded solar PV array as an example. Because of the technical calculations and configurations of the solar PV array, some slight changes to the algorithm are required, which are detailed in the following.

The pseudo-code of the proposed PMSO algorithm for this problem is shown in Algorithm 2. As can be seen, it is similar to the originally proposed algorithm with some differences. In this algorithm, every Gammarus individual is two-dimensional: R is the number of rows and C is the number of columns of the array, and both are nine in the mentioned example. For every particle, the distance between the found global best GB and the location of the particle is calculated (line 9 of Algorithm 2). In this calculation, the numerator of the distance is the second norm of a vector whose j-th component is found as,

(12)

Also, the denominator of the distance is the second norm of a vector of the same size with all elements equal to one. Hence, the calculated distance is normalized and is always less than or equal to one.

1:Initialize NG, IL, IG, and the initial population
2:for g = 1 to NG do
3:     while collision occurred do
4:         Randomly swap cells in their columns      
5:while stop criterion is not reached do
6:     for g = 1 to NG do
7:         if it is not the first iteration then
8:              for j = 1 to C do
9:                  Distance ← normalized distance of particle g from GB (equation (12))
10:                  for i = 1 to R do
11:                       if rand() ≤ Distance then
12:                            w ← round(R × Distance)
13:                            θ ← random number in the range [−w, w]
14:                            if i + θ > R then
15:                                 Swap cells (i, j) and (i + θ − R, j)
16:                            else if i + θ < 1 then
17:                                 Swap cells (i, j) and (i + θ + R, j)
18:                            else
19:                                 Swap cells (i, j) and (i + θ, j)                                                                                             
20:         for l = 1 to IL do
21:              do Local Search          
22:         if fitness of particle g is better than GB then
23:              GB ← location of particle g               
24:     IL ← max(IL − 1, 1)
Algorithm 2 PMSO for Partially Shaded Solar PV Array

Afterwards, by iterating over all the cells of a Gammarus particle (lines 8 and 10 of Algorithm 2), every cell of the Gammarus is moved in its own column with probability Distance (line 11 of Algorithm 2). If it is moved, the amount of movement, which is the wave size w, is calculated by rounding the product of R and Distance (line 12 of Algorithm 2); therefore, the wave size is at most the number of rows because Distance is normalized. Then, the wave direction is a random number in the range [−w, w] (line 13 of Algorithm 2). This models the angle of the wave in the original algorithm. Thereafter, the cell is moved in its own column under the assumption that the first and last cells of every column are connected to each other as a ring (lines 14-19 of Algorithm 2). The rest of the algorithm is the same as the original one, except that the initial and adaptive neighborhoods are not used here because the Gammarus particles are 2D and, as was previously explained, in low-dimensional problems these steps can be omitted for the sake of simplicity.
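The ring movement within a column (lines 14-19 of Algorithm 2) can be sketched compactly with modular arithmetic on 0-based row indices; this is an illustrative reading, not the paper's exact pseudo-code:

```python
def wave_move_in_column(column, i, theta):
    """Swap the cell at row i with the cell theta rows away, treating the
    first and last cells of the column as connected in a ring
    (0-based rows; theta may be negative)."""
    target = (i + theta) % len(column)   # ring wrap-around
    column[i], column[target] = column[target], column[i]
    return column

col = ["a", "b", "c", "d"]
wave_move_in_column(col, 3, 2)    # wraps past the end: swaps rows 3 and 1
col2 = ["a", "b", "c"]
wave_move_in_column(col2, 0, -1)  # negative wave wraps to the last row
```

The modulo operation replaces the three explicit wrap-around branches of the pseudo-code while preserving the same ring behavior.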

At every step of algorithm, each Gammarus particle presents a possible configuration of the solar array. Here, the fitness of every Gammarus particle is the maximum output power of the configuration the Gammarus presents. The maximum output power is obtained as was explained in Section VI-B.

Sorted cells to be bypassed
TCT configuration
Su Do Ku configuration