
# Estimating Approximation Errors of Elitist Evolutionary Algorithms

Cong Wang (School of Science, Wuhan University of Technology, Wuhan 430070, China)
Yu Chen (School of Science, Wuhan University of Technology, Wuhan 430070, China)
Jun He (School of Science and Technology, Nottingham Trent University, Nottingham NG11 8NS, UK)
Chengwang Xie (School of Computer and Information Engineering, Nanning Normal University, Nanning 530299, China)
###### Abstract

When evolutionary algorithms (EAs) are unlikely to locate precise global optimal solutions with satisfactory performance, it is important to substitute an alternative theoretical routine for the analysis of hitting time/running time. In order to narrow the gap between theory and application, this paper performs an analysis of the approximation error of EAs. First, we propose a general result on the upper and lower bounds of approximation errors. Then, several case studies are performed to present the routine of error analysis, and the theoretical results show close connections between approximation errors and the eigenvalues of transition matrices. The analysis validates the applicability of error analysis, demonstrates the significance of the estimation results, and exhibits its potential for the theoretical analysis of elitist EAs.

###### Keywords:
Evolutionary Algorithm · Approximation Error · Markov Chain · Budget Analysis.

## 1 Introduction

For theoretical analysis, the convergence performance of evolutionary algorithms (EAs) is widely evaluated by the expected first hitting time (FHT) and the expected running time (RT), which quantify the respective numbers of iterations and function evaluations (FEs) needed to hit the global optimal solutions. General methods for the estimation of FHT/RT have been proposed via the theory of Markov chains [2, 3], drift analysis [4, 5], switch analysis [6], and their applications with the partition of fitness levels, etc.

Although popularly employed in theoretical analysis, simple application of FHT/RT is not practical when the optimal solutions are difficult to hit. One of these “difficult” cases is the optimization of continuous problems. The optimal sets of continuous optimization problems are usually zero-measure sets, which cannot be hit by generally designed EAs in finite time, and so FHT/RT could be infinite in most cases. A remedy for this difficulty is to take a positive-measure set as the destination of the population iteration, and it is then natural to take an approximation set for a given precision as the hitting set of FHT/RT estimation [8, 9, 10, 11]. Another “difficult” case is the optimization of NP-complete (NPC) problems, which cannot be solved by EAs in polynomial FHT/RT. For this case, it is much more interesting to investigate the quality of the approximate solutions obtained in polynomial FHT/RT. In this way, researchers have estimated the approximation ratios of the solutions that EAs can obtain for various NPC combinatorial optimization problems in polynomial expected FHT/RT [12, 13, 14, 15, 16, 17].

However, the aforementioned methods could be impractical once we have little information about the global optima of the investigated problems, in which case it is difficult to “guess” what threshold can result in polynomial FHT/RT. Since the approximation error after a given iteration number is usually employed to numerically compare the performance of EAs, some researchers have tried to analyze EAs by theoretically estimating the expected approximation error. Rudolph [18] proved that, under appropriate conditions, the error sequence converges in mean geometrically to zero. He and Lin [19] studied the geometric average convergence rate of the error sequence, from which a geometric upper bound on the expected approximation error follows directly.

A work close to the analysis of approximation error is the fixed budget analysis proposed by Jansen and Zarges [20, 21], who aimed to bound the fitness value within a fixed time budget. However, Jansen and Zarges did not present general results for an arbitrary time budget: in fixed budget analysis, a bound on the approximation error holds for some small budget but might be invalid for a large one. He [22] made a first attempt to obtain an analytic expression of the approximation error for a class of elitist EAs: if the transition matrix associated with an EA is an upper triangular matrix with unique diagonal entries, then for any $t$, the approximation error can be expressed as a linear combination of the $t$-th powers of the eigenvalues of the transition matrix. He et al. [24] also demonstrated the possibility of approximation estimation by estimating the one-step convergence rate; however, this was not sufficient to validate its applicability to other problems, because only two cases with trivial convergence rates were investigated.

This paper is dedicated to the estimation of the approximation error depending on an arbitrary iteration number $t$. We make a first attempt to perform a general error analysis of EAs and demonstrate its feasibility by case studies. The rest of this paper is organized as follows. Section 2 presents some preliminaries. In Section 3, a general result on the upper and lower bounds of the approximation error is proposed, and some case studies are performed in Section 4. Finally, Section 5 concludes this paper.

## 2 Preliminaries

In this paper, we consider a combinatorial optimization problem

$$\max f(\mathbf{x}), \qquad (1)$$

where $f(\mathbf{x})$ takes only finitely many values. Denote its optimal solution as $\mathbf{x}^*$ and the corresponding objective value as $f^* = f(\mathbf{x}^*)$. The quality of a feasible solution $\mathbf{x}$ is quantified by its approximation error $e(\mathbf{x}) = f^* - f(\mathbf{x})$. Since problem (1) has only finitely many solutions, there are finitely many feasible values of $e(\mathbf{x})$, denoted as $e_0 < e_1 < \cdots < e_n$. Obviously, the minimum value $e_0$ is the approximation error of the optimal solution $\mathbf{x}^*$, and so $e_0$ takes the value 0. We say that $\mathbf{x}$ is located at status $i$ if $e(\mathbf{x}) = e_i$. Then, there are in total $n+1$ statuses for all feasible solutions. Status 0 consists of all optimal solutions and is called the optimal status; the other statuses are the non-optimal statuses.

Suppose that a feasible solution of problem (1) is coded as a bit-string, and the elitist EA described in Algorithm 1 is employed to solve it. When one-bit mutation is employed, the algorithm is called a random local search (RLS); if bitwise mutation is used, it is named a (1+1) evolutionary algorithm ((1+1)EA). Then, the status sequence of the iterated solutions is a Markov chain. Together with the initial probability distribution $(q_0, q_1, \dots, q_n)^T$ of the individual status, the evolution process of the (1+1) elitist EA can be depicted by the transition probability matrix

$$\mathbf{P} = \begin{pmatrix} p_{0,0} & p_{0,1} & \cdots & p_{0,n} \\ \vdots & \vdots & & \vdots \\ p_{n,0} & p_{n,1} & \cdots & p_{n,n} \end{pmatrix}, \qquad (2)$$

where $p_{i,j}$ is the probability of transferring from status $j$ to status $i$.

Since elitist selection is employed, the probability of transferring from a status $j$ to a worse status $i$ is zero; that is, $p_{i,j} = 0$ when $i > j$. Then, the transition probability matrix is upper triangular, and we can partition it as

$$\mathbf{P} = \begin{pmatrix} p_{0,0} & \vec{p}_0 \\ \mathbf{0} & \mathbf{R} \end{pmatrix}, \qquad (3)$$

where $p_{0,0} = 1$, $\vec{p}_0 = (p_{0,1}, \dots, p_{0,n})$, $\mathbf{0} = (0, \dots, 0)^T$, and

$$\mathbf{R} = \begin{pmatrix} p_{1,1} & p_{1,2} & \dots & p_{1,n} \\ & p_{2,2} & \dots & p_{2,n} \\ & & \ddots & \vdots \\ & & & p_{n,n} \end{pmatrix}. \qquad (4)$$

Thus, the expected approximation error at iteration $t$ is

$$e^{[t]} = \mathbf{e}\mathbf{R}^t\mathbf{q}, \qquad (5)$$

where $\mathbf{e} = (e_1, \dots, e_n)$, $\mathbf{q} = (q_1, \dots, q_n)^T$, and $\mathbf{R}$ is the sub-matrix representing the transition probabilities between the non-optimal statuses. Because the sum of each column of $\mathbf{P}$ is equal to 1, the first row of $\mathbf{P}$ is determined by $\mathbf{R}$, and in the following we only consider the transition submatrix $\mathbf{R}$ for the estimation of the approximation error. According to the shape of $\mathbf{R}$, we can further divide the search processes of elitist EAs into two categories.
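The computation behind (5) can be sketched directly: evolve the status distribution with the column-wise transition submatrix, then average the status errors. The 3-status instance below (matrix entries, errors, and initial distribution) is a hypothetical example chosen for illustration, not taken from the paper:

```python
def mat_vec(R, v):
    # multiply the transition submatrix R (list of rows) by a column vector v
    return [sum(R[i][j] * v[j] for j in range(len(v))) for i in range(len(R))]

def expected_error(e, R, q, t):
    # e[t] = e R^t q: evolve the status distribution for t steps, then
    # average the status errors e_i under the resulting distribution
    v = list(q)
    for _ in range(t):
        v = mat_vec(R, v)
    return sum(ei * vi for ei, vi in zip(e, v))

# hypothetical upper-triangular submatrix over statuses 1..3; column j lists
# the probabilities of moving from status j to statuses 1..3, and the missing
# mass of each column is the jump to the optimal status 0
R = [[0.5, 0.3, 0.2],
     [0.0, 0.6, 0.3],
     [0.0, 0.0, 0.5]]
e = [1.0, 2.0, 3.0]   # status errors, e_1 <= e_2 <= e_3
q = [0.3, 0.3, 0.4]   # initial distribution over the non-optimal statuses
```

Because elitist selection only moves probability mass toward smaller errors, `expected_error` is non-increasing in `t`.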

1. Step-by-step Search: If the transition probabilities satisfy

$$\begin{cases} p_{i,j} = 0, & \text{if } i \neq j-1, j, \\ p_{j-1,j} + p_{j,j} = 1, & j = 1, \dots, n, \end{cases} \qquad (6)$$

it is called a step-by-step search. Then, the transition submatrix is

$$\mathbf{R} = \begin{pmatrix} p_{1,1} & p_{1,2} & & \\ & \ddots & \ddots & \\ & & p_{n-1,n-1} & p_{n-1,n} \\ & & & p_{n,n} \end{pmatrix}, \qquad (7)$$

which means that the elitist EA cannot transfer between non-optimal statuses that are not adjacent to each other;

2. Multi-step Search: If there exists some $i < j - 1$ such that $p_{i,j} > 0$, we call it a multi-step search. A multi-step search can transfer between inconsecutive statuses, which endows it with better global exploration ability and, probably, better convergence speed.

Note that this classification is problem-dependent because the statuses depend on the problem to be optimized. So, the RLS could be either a step-by-step search or a multi-step search. However, the (1+1)EA is necessarily a multi-step search, because bitwise mutation can jump between any two statuses. When $\vec{p}_0$ in (3) is non-zero, some column sums of $\mathbf{R}$ are less than 1, which means the search could jump from at least one non-optimal status directly to the optimal status. In particular, a step-by-step search represented by (7) must satisfy

$$p_{j-1,j} + p_{j,j} = 1, \quad \forall\, j \in \{1, \dots, n\}.$$

## 3 Estimation of General Approximation Bounds

### 3.1 General Bounds of the Step-by-step Search

Let $\mathbf{R}_S$ be the submatrix of a step-by-step search. Its eigenvalues are

$$\lambda_i = p_{i,i}, \quad i = 1, \dots, n, \qquad (8)$$

each of which represents the probability of remaining at the present status after one iteration. Then, it is natural to expect that the greater the eigenvalues are, the slower the step-by-step search converges. Inspired by this idea, we can estimate general bounds for a step-by-step search by enlarging or reducing the eigenvalues. The general bounds rest on the following lemma.

###### Lemma 1

Denote

$$f_t(\mathbf{e}, \lambda_1, \dots, \lambda_n) = \big(f_{t,1}(\mathbf{e}, \lambda_1, \dots, \lambda_n), \dots, f_{t,n}(\mathbf{e}, \lambda_1, \dots, \lambda_n)\big) = \mathbf{e}\mathbf{R}^t. \qquad (9)$$

Then, $f_{t,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n)$ is monotonically increasing in $\lambda_j$, $\forall\, i, j \in \{1, \dots, n\}$.

###### Proof

This lemma could be proved by mathematical induction.

1. When $t = 1$, we have

$$f_{1,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n) = \begin{cases} e_1\lambda_1, & i = 1, \\ e_{i-1}(1 - \lambda_i) + e_i\lambda_i, & i = 2, \dots, n. \end{cases} \qquad (10)$$

Note that $\lambda_i$ is not greater than 1 because it is an element of the probability transition matrix $\mathbf{P}$. Then, from the fact that $e_{i-1} \le e_i$, we conclude that $f_{1,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n)$ is monotonically increasing in $\lambda_j$, $\forall\, i, j \in \{1, \dots, n\}$. Meanwhile, (10) also implies that

$$0 \le f_{1,1}(\mathbf{e}, \lambda_1, \dots, \lambda_n) \le e_1 \le f_{1,2}(\mathbf{e}, \lambda_1, \dots, \lambda_n) \le e_2 \le \cdots \le f_{1,n}(\mathbf{e}, \lambda_1, \dots, \lambda_n) \le e_n. \qquad (11)$$
2. Suppose that when $t = k$, $f_{k,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n)$ is monotonically increasing in $\lambda_j$ for all $i, j \in \{1, \dots, n\}$, and that

$$0 \le f_{k,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n) \le f_{k,i+1}(\mathbf{e}, \lambda_1, \dots, \lambda_n), \quad \forall\, i \in \{1, \dots, n-1\}. \qquad (12)$$

First, the induction hypothesis of monotonicity implies that

$$\frac{\partial}{\partial \lambda_j} f_{k,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n) \ge 0, \quad \forall\, i, j \in \{1, \dots, n\}. \qquad (13)$$

Meanwhile, according to equation (9) we know $f_{k+1}(\mathbf{e}, \lambda_1, \dots, \lambda_n) = f_k(\mathbf{e}, \lambda_1, \dots, \lambda_n)\,\mathbf{R}$, that is,

$$f_{k+1,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n) = \begin{cases} f_{k,1}(\mathbf{e}, \lambda_1, \dots, \lambda_n)\,\lambda_1, & i = 1, \\ f_{k,i-1}(\mathbf{e}, \lambda_1, \dots, \lambda_n)(1 - \lambda_i) + f_{k,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n)\,\lambda_i, & i = 2, \dots, n. \end{cases}$$

So, $\forall\, i, j \in \{1, \dots, n\}$,

$$\frac{\partial}{\partial \lambda_j} f_{k+1,i} = \begin{cases} \dfrac{\partial}{\partial \lambda_j} f_{k,1}\,\lambda_1 + f_{k,1}\dfrac{\partial \lambda_1}{\partial \lambda_j}, & i = 1, \\[2mm] \dfrac{\partial}{\partial \lambda_j} f_{k,i-1}\,(1 - \lambda_i) + \dfrac{\partial}{\partial \lambda_j} f_{k,i}\,\lambda_i + \big(f_{k,i} - f_{k,i-1}\big)\dfrac{\partial \lambda_i}{\partial \lambda_j}, & i = 2, \dots, n, \end{cases} \qquad (14)$$

where the arguments $(\mathbf{e}, \lambda_1, \dots, \lambda_n)$ are omitted for brevity.

Combining (12), (13) and (14), we know that

$$\frac{\partial}{\partial \lambda_j} f_{k+1,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n) \ge 0, \quad \forall\, i, j \in \{1, \dots, n\},$$

which means $f_{k+1,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n)$ is monotonically increasing in $\lambda_j$ for all $i, j$.

In conclusion, $f_{t,i}(\mathbf{e}, \lambda_1, \dots, \lambda_n)$ is monotonically increasing in $\lambda_j$, $\forall\, i, j \in \{1, \dots, n\}$. ∎

Denote

$$\mathbf{R}(\lambda) = \begin{pmatrix} \lambda & 1-\lambda & & \\ & \ddots & \ddots & \\ & & \lambda & 1-\lambda \\ & & & \lambda \end{pmatrix}. \qquad (15)$$

If we enlarge or shrink all eigenvalues of $\mathbf{R}_S$ to the maximum value and the minimum value, respectively, we get two transition submatrices $\mathbf{R}(\lambda_{\max})$ and $\mathbf{R}(\lambda_{\min})$, where $\lambda_{\max} = \max_{1 \le i \le n} \lambda_i$ and $\lambda_{\min} = \min_{1 \le i \le n} \lambda_i$. Then, $\mathbf{R}(\lambda_{\max})$ depicts a search process converging more slowly than the one $\mathbf{R}_S$ represents, and $\mathbf{R}(\lambda_{\min})$ is the transition submatrix of a process converging faster than the one $\mathbf{R}_S$ represents.

###### Theorem 3.1

The expected approximation error of a step-by-step search represented by $\mathbf{R}_S$ and $\mathbf{q}$ is bounded by

$$\mathbf{e}\mathbf{R}^t(\lambda_{\min})\mathbf{q} \le e^{[t]} \le \mathbf{e}\mathbf{R}^t(\lambda_{\max})\mathbf{q}. \qquad (16)$$
###### Proof

Note that

$$e^{[t]} = \mathbf{e}\mathbf{R}_S^t\mathbf{q} = f_t(\mathbf{e}, \lambda_1, \dots, \lambda_n)\,\mathbf{q},$$

where $\mathbf{q}$ is a non-zero vector with non-negative components. Then, by Lemma 1 we conclude that $e^{[t]}$ is also monotonically increasing in $\lambda_j$, $\forall\, j \in \{1, \dots, n\}$. So, we get the result that

$$\mathbf{e}\mathbf{R}^t(\lambda_{\min})\mathbf{q} \le e^{[t]} \le \mathbf{e}\mathbf{R}^t(\lambda_{\max})\mathbf{q}. \qquad ∎$$

Theorem 3.1 provides a general result on the upper and lower bounds of the approximation error. From the above arguments we see that the lower and upper bounds are achieved once the transition submatrix $\mathbf{R}_S$ degenerates to $\mathbf{R}(\lambda_{\min})$ and $\mathbf{R}(\lambda_{\max})$, respectively. That is to say, they are indeed the “best” results achievable for general bounds. Recall that $\lambda_i = p_{i,i}$. Starting from status $i$, $\lambda_i$ is the probability that the (1+1) elitist EA stays at status $i$ after one iteration. Then, the greater $\lambda_i$ is, the harder it is for the step-by-step search to transfer to the sub-level status $i-1$. So, the performance of a step-by-step search depicted by $\mathbf{R}_S$ would, in the worst case, not be worse than that of $\mathbf{R}(\lambda_{\max})$; meanwhile, it would not be better than that of $\mathbf{R}(\lambda_{\min})$, which constitutes a bottleneck for improving the performance of the step-by-step search.
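The sandwich property of Theorem 3.1 can be probed numerically. The sketch below builds the bidiagonal submatrix of a step-by-step search from its diagonal entries and compares the three chains; the particular values of `lams`, `e`, and `q` are illustrative assumptions:

```python
def step_matrix(lams):
    # bidiagonal submatrix of a step-by-step search: column j keeps mass
    # lams[j] at its status and moves 1 - lams[j] to the adjacent status below
    n = len(lams)
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = lams[j]
        if j > 0:
            R[j - 1][j] = 1.0 - lams[j]
    return R

def expected_error(e, R, q, t):
    # e[t] = e R^t q via t matrix-vector products
    v = list(q)
    for _ in range(t):
        v = [sum(R[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
    return sum(ei * vi for ei, vi in zip(e, v))

lams = [0.9, 0.5, 0.7]                 # hypothetical diagonal entries p_{i,i}
e, q = [1.0, 2.0, 3.0], [0.2, 0.3, 0.5]
lo, hi = min(lams), max(lams)          # lambda_min and lambda_max
```

Replacing every diagonal entry by `lo` (resp. `hi`) yields the fast (resp. slow) comparison chain of the theorem.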

### 3.2 General Bounds of the Multi-step Search

Denoting the transition submatrix of a multi-step search as

$$\mathbf{R}_M = \begin{pmatrix} p_{1,1} & p_{1,2} & \dots & p_{1,n-1} & p_{1,n} \\ & \ddots & \ddots & & \vdots \\ & & & p_{n-1,n-1} & p_{n-1,n} \\ & & & & p_{n,n} \end{pmatrix}, \qquad (17)$$

we can bound its approximation error by defining two transition matrices

$$\mathbf{R}_{S_u} = \begin{pmatrix} p_{1,1} & \sum_{k=0}^{1} p_{k,2} & & \\ & \ddots & \ddots & \\ & & p_{n-1,n-1} & \sum_{k=0}^{n-1} p_{k,n} \\ & & & p_{n,n} \end{pmatrix} \qquad (18)$$

and

$$\mathbf{R}_{S_l} = \mathrm{diag}(p_{1,1}, \dots, p_{n,n}). \qquad (19)$$
###### Lemma 2

Let $\mathbf{R}_M$, $\mathbf{R}_{S_u}$ and $\mathbf{R}_{S_l}$ be the transition matrices defined by (17), (18) and (19), respectively. Given any nonnegative vector $\mathbf{e} = (e_1, \dots, e_n)$ satisfying $e_1 \le e_2 \le \cdots \le e_n$ and the corresponding initial distribution $\mathbf{q}$, it holds that

$$\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q} \le \mathbf{e}\mathbf{R}_M^t\mathbf{q} \le \mathbf{e}\mathbf{R}_{S_u}^t\mathbf{q}, \quad \forall\, t > 0. \qquad (20)$$
###### Proof

The first inequality is straightforward to prove. Because $\mathbf{R}_{S_l}$ retains part of the non-negative elements of $\mathbf{R}_M$ and sets the others to zero, $\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q}$ is a partial sum of $\mathbf{e}\mathbf{R}_M^t\mathbf{q}$. Since all the elements involved are nonnegative, it holds that $\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q} \le \mathbf{e}\mathbf{R}_M^t\mathbf{q}$.

Moreover, the second inequality can be proved by mathematical induction. Denote

$$\mathbf{a} = (a_1, \dots, a_n) = \mathbf{e}\mathbf{R}_M, \qquad (21)$$

$$\mathbf{b} = (b_1, \dots, b_n) = \mathbf{e}\mathbf{R}_{S_u}, \qquad (22)$$

where $b_j$ lumps all the probability of transferring from status $j$ to better statuses into the adjacent status $j-1$. Combining the definitions of $\mathbf{R}_M$ and $\mathbf{R}_{S_u}$ with the fact that $0 \le e_1 \le \cdots \le e_n$, we know that

$$0 \le a_i \le b_i, \quad i = 1, \dots, n. \qquad (23)$$
1. When $t = 1$, (21), (22) and (23) imply that

$$\mathbf{e}\mathbf{R}_M\mathbf{q} = \sum_{i=1}^{n} a_i q_i \le \sum_{i=1}^{n} b_i q_i = \mathbf{e}\mathbf{R}_{S_u}\mathbf{q}.$$
2. Assume that (20) holds when $t = k$. Then, (23) implies that

$$\mathbf{e}\mathbf{R}_M^{k+1}\mathbf{q} = \mathbf{e}\mathbf{R}_M\mathbf{R}_M^k\mathbf{q} = \mathbf{a}\mathbf{R}_M^k\mathbf{q} \le \mathbf{b}\mathbf{R}_M^k\mathbf{q}. \qquad (24)$$

Meanwhile, because $\mathbf{b}$ is nonnegative, the assumption implies that

$$\mathbf{b}\mathbf{R}_M^k\mathbf{q} \le \mathbf{b}\mathbf{R}_{S_u}^k\mathbf{q}.$$

Combining it with (24), we can conclude that

$$\mathbf{e}\mathbf{R}_M^{k+1}\mathbf{q} \le \mathbf{b}\mathbf{R}_M^k\mathbf{q} \le \mathbf{b}\mathbf{R}_{S_u}^k\mathbf{q} = \mathbf{e}\mathbf{R}_{S_u}^{k+1}\mathbf{q}.$$

So, the result also holds for $t = k + 1$.

In conclusion, it holds that $\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q} \le \mathbf{e}\mathbf{R}_M^t\mathbf{q} \le \mathbf{e}\mathbf{R}_{S_u}^t\mathbf{q}$, $\forall\, t > 0$. ∎
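Lemma 2 can likewise be checked on a toy multi-step chain. The sketch builds $\mathbf{R}_{S_l}$ and $\mathbf{R}_{S_u}$ from a hypothetical $\mathbf{R}_M$ (entries are illustrative assumptions; each full column of $\mathbf{P}$, including the implicit jump to the optimal status, sums to 1, so the lumped superdiagonal entry of column $j$ equals $1 - p_{j,j}$):

```python
def bound_matrices(RM):
    # RM: upper-triangular multi-step submatrix over statuses 1..n
    n = len(RM)
    # lower-bound chain (19): keep only the diagonal
    RSl = [[RM[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
    # upper-bound chain (18): lump all improving mass of column j,
    # i.e. 1 - p_{j,j}, into the adjacent status j-1
    RSu = [[0.0] * n for _ in range(n)]
    for j in range(n):
        RSu[j][j] = RM[j][j]
        if j > 0:
            RSu[j - 1][j] = 1.0 - RM[j][j]
    return RSl, RSu

def expected_error(e, R, q, t):
    # e[t] = e R^t q via t matrix-vector products
    v = list(q)
    for _ in range(t):
        v = [sum(R[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
    return sum(ei * vi for ei, vi in zip(e, v))

# hypothetical multi-step submatrix (column sums of RM may be < 1)
RM = [[0.6, 0.1, 0.05],
      [0.0, 0.7, 0.15],
      [0.0, 0.0, 0.7]]
e, q = [1.0, 2.0, 3.0], [0.2, 0.3, 0.5]   # e is nonnegative and increasing
RSl, RSu = bound_matrices(RM)
```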

###### Theorem 3.2

The approximation error of the multi-step search defined by (17) is bounded by

$$\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q} \le e^{[t]} \le \mathbf{e}\mathbf{R}^t(\lambda_{\max})\mathbf{q}, \qquad (25)$$

where $\lambda_{\max} = \max\{p_{i,i},\ i = 1, \dots, n\}$.

###### Proof

From Lemma 2 we know that

$$\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q} \le \mathbf{e}\mathbf{R}_M^t\mathbf{q} \le \mathbf{e}\mathbf{R}_{S_u}^t\mathbf{q}, \quad \forall\, t > 0. \qquad (26)$$

Moreover, since $\mathbf{R}_{S_u}$ is the submatrix of a step-by-step search, Theorem 3.1 implies that

$$\mathbf{e}\mathbf{R}_{S_u}^t\mathbf{q} \le \mathbf{e}\mathbf{R}^t(\lambda_{\max})\mathbf{q}. \qquad (27)$$

Combining (26) and (27), we get the theorem proved. ∎

### 3.3 Analytic Expressions of General Bounds

Theorems 3.1 and 3.2 show that the computation of general bounds for approximation errors relies on the computability of $\mathbf{e}\mathbf{R}^t(\lambda)\mathbf{q}$ and $\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q}$, where $\mathbf{R}(\lambda)$ and $\mathbf{R}_{S_l}$ are defined by (15) and (19), respectively.

1. Analytic Expression of $\mathbf{e}\mathbf{R}^t(\lambda)\mathbf{q}$: The submatrix $\mathbf{R}(\lambda)$ can be split as $\mathbf{R}(\lambda) = \mathbf{\Lambda} + \mathbf{B}$, where

$$\mathbf{\Lambda} = \begin{pmatrix} \lambda & & & \\ & \ddots & & \\ & & \lambda & \\ & & & \lambda \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} 0 & 1-\lambda & & \\ & \ddots & \ddots & \\ & & 0 & 1-\lambda \\ & & & 0 \end{pmatrix}.$$

Because the multiplication of $\mathbf{\Lambda}$ and $\mathbf{B}$ is commutative, the binomial theorem holds and we have

$$\mathbf{R}^t(\lambda) = (\mathbf{\Lambda} + \mathbf{B})^t = \sum_{i=0}^{t} C_t^i \mathbf{\Lambda}^{t-i}\mathbf{B}^i, \qquad (28)$$

where

$$\mathbf{\Lambda}^{t-i} = \mathrm{diag}\{\lambda^{t-i}, \dots, \lambda^{t-i}\}. \qquad (29)$$

Note that $\mathbf{B}$ is a nilpotent matrix of index $n$ (in linear algebra, a nilpotent matrix is a square matrix $\mathbf{N}$ such that $\mathbf{N}^k = \mathbf{0}$ for some positive integer $k$; the smallest such $k$ is called the index of $\mathbf{N}$), and

$$\mathbf{B}^i = \begin{pmatrix} 0 & \cdots & (1-\lambda)^i & & \\ & \ddots & & \ddots & \\ & & \ddots & & (1-\lambda)^i \\ & & & \ddots & \vdots \\ & & & & 0 \end{pmatrix}, \quad i < n, \qquad (30)$$

where $(1-\lambda)^i$ lies on the $i$-th superdiagonal, and $\mathbf{B}^i = \mathbf{0}$ for $i \ge n$.

Then, from (28), (29) and (30) we know:

1. if $t < n$,

$$\mathbf{e}\mathbf{R}^t(\lambda)\mathbf{q} = \sum_{j=1}^{t}\sum_{i=1}^{j} e_i C_t^{j-i}\lambda^{t-(j-i)}(1-\lambda)^{j-i}q_j + \sum_{j=t+1}^{n}\sum_{i=j-t}^{j} e_i C_t^{j-i}\lambda^{t-(j-i)}(1-\lambda)^{j-i}q_j; \qquad (31)$$

2. if $t \ge n$,

$$\mathbf{e}\mathbf{R}^t(\lambda)\mathbf{q} = \sum_{j=1}^{n}\sum_{i=1}^{j} e_i C_t^{j-i}\lambda^{t-(j-i)}(1-\lambda)^{j-i}q_j. \qquad (32)$$
2. Analytic Expression of $\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q}$: For the diagonal matrix $\mathbf{R}_{S_l}$, it holds that

$$\mathbf{e}\mathbf{R}_{S_l}^t\mathbf{q} = \sum_{i=1}^{n} e_i p_{i,i}^t q_i = \sum_{i=1}^{n} e_i \lambda_i^t q_i. \qquad (33)$$
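The closed forms (31) and (32) can be cross-checked against a direct step-by-step computation of $\mathbf{e}\mathbf{R}^t(\lambda)\mathbf{q}$. The instance below ($\lambda$, $\mathbf{e}$, $\mathbf{q}$ values) is a hypothetical example:

```python
from math import comb

def err_power(e, lam, q, t):
    # e R(lambda)^t q computed by evolving the distribution step by step
    n = len(q)
    v = list(q)
    for _ in range(t):
        w = [0.0] * n
        for j in range(n):
            w[j] += lam * v[j]                    # stay at the current status
            if j > 0:
                w[j - 1] += (1.0 - lam) * v[j]    # move one status down
        v = w
    return sum(ei * vi for ei, vi in zip(e, v))

def err_closed_form(e, lam, q, t):
    # formulas (31)/(32); i and j are 1-based status indices as in the text,
    # and the inner index runs over i = max(1, j - t), ..., j
    n = len(q)
    total = 0.0
    for j in range(1, n + 1):
        for i in range(max(1, j - t), j + 1):
            total += (e[i - 1] * comb(t, j - i) * lam ** (t - (j - i))
                      * (1.0 - lam) ** (j - i) * q[j - 1])
    return total

e = [1.0, 2.0, 3.0, 4.0]
q = [0.1, 0.2, 0.3, 0.4]
```

The two routines agree for both regimes $t < n$ and $t \ge n$.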

## 4 Case-by-case Estimation of Approximation Error

In Section 3, general bounds on the approximation error were obtained by ignoring most of the elements of the sub-matrix $\mathbf{R}$. Thus, these bounds are very general but not necessarily tight. In this section, we perform several case-by-case studies to demonstrate a feasible routine of error analysis, where the RLS and the (1+1)EA are employed to solve the popular OneMax problem and the Needle-in-Haystack problem.

###### Problem 1

(OneMax)

$$\max f(\mathbf{x}) = \sum_{i=1}^{n} x_i, \quad \mathbf{x} = (x_1, \dots, x_n) \in \{0,1\}^n.$$
###### Problem 2

(Needle-in-Haystack)

$$\max f(\mathbf{x}) = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} x_i = 0, \\ 0, & \text{otherwise}, \end{cases} \quad \mathbf{x} = (x_1, \dots, x_n) \in \{0,1\}^n.$$

### 4.1 Error Estimation for the OneMax Problem

Application of the RLS to the unimodal OneMax problem generates a step-by-step search, where the status $i$ is the number of 0-bits in $\mathbf{x}$, and the transition submatrix is

$$\mathbf{R}_S = \begin{pmatrix} 1-\frac{1}{n} & \frac{2}{n} & & & \\ & 1-\frac{2}{n} & \frac{3}{n} & & \\ & & \ddots & \ddots & \\ & & & \frac{1}{n} & 1 \\ & & & & 0 \end{pmatrix}. \qquad (34)$$

The eigenvalues and corresponding eigenvectors of $\mathbf{R}_S$ are

$$\begin{array}{ll} \lambda_1 = 1-\frac{1}{n}, & \eta_1 = (C_1^1, 0, \dots, 0)^T, \\ \lambda_2 = 1-\frac{2}{n}, & \eta_2 = (-C_2^1, C_2^2, 0, \dots, 0)^T, \\ \quad\vdots & \quad\vdots \\ \lambda_n = 0, & \eta_n = ((-1)^{n+1}C_n^1, (-1)^{n+2}C_n^2, \dots, (-1)^{2n}C_n^n)^T. \end{array} \qquad (35)$$
###### Theorem 4.1

The expected approximation error of the RLS for the OneMax problem is

$$e^{[t]} = \frac{n}{2}\left(1 - \frac{1}{n}\right)^t. \qquad (36)$$
###### Proof

Denote $\mathbf{Q} = (\eta_1, \dots, \eta_n)$. Then we know that

$$\mathbf{Q}^{-1} = (q'_{i,j})_{n \times n} = \begin{pmatrix} C_1^1 & C_2^1 & C_3^1 & \cdots & C_n^1 \\ & C_2^2 & C_3^2 & \cdots & C_n^2 \\ & & \ddots & & \vdots \\ & & & & C_n^n \end{pmatrix}. \qquad (37)$$

$\mathbf{R}_S$ has $n$ distinct eigenvalues, and so it can be diagonalized as $\mathbf{\Lambda} = \mathbf{Q}^{-1}\mathbf{R}_S\mathbf{Q} = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then, we have

$$e^{[t]} = \mathbf{e}\mathbf{R}_S^t\mathbf{q} = \mathbf{e}\mathbf{Q}\mathbf{\Lambda}^t\mathbf{Q}^{-1}\mathbf{q} = \mathbf{a}\mathbf{\Lambda}^t\mathbf{b}, \qquad (38)$$

where $\mathbf{a} = \mathbf{e}\mathbf{Q} = (a_1, \dots, a_n)$, $\mathbf{b} = \mathbf{Q}^{-1}\mathbf{q} = (b_1, \dots, b_n)^T$, and, with $e_i = i$ and the uniform initialization $q_j = C_n^j/2^n$,

$$a_k = \sum_{i=1}^{k} e_i q_{i,k} = \sum_{i=1}^{k} i(-1)^{i+k}C_k^i = \begin{cases} 1, & k = 1, \\ 0, & k = 2, \dots, n, \end{cases} \qquad (39)$$

$$b_k = \sum_{j=k}^{n} q'_{k,j} q_j = \sum_{j=k}^{n} C_j^k C_n^j \frac{1}{2^n} = \frac{C_n^k}{2^k}, \quad k = 1, \dots, n.$$

Substituting (39) into (38), we get the result

$$e^{[t]} = a_1\lambda_1^t b_1 = \frac{n}{2}\left(1 - \frac{1}{n}\right)^t. \qquad ∎$$
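Theorem 4.1 is exact for every $t$, so it can be recomputed by iterating the chain (34) directly. The sketch below (function name ours) assumes, as in the proof, status $i$ = number of 0-bits and uniform random initialization:

```python
from math import comb

def rls_onemax_error(n, t):
    # exact e[t] for RLS on OneMax; status i = number of 0-bits,
    # initial distribution q_i = C(n, i)/2^n (uniform random initialization)
    q = [comb(n, i) / 2 ** n for i in range(1, n + 1)]
    for _ in range(t):
        w = [0.0] * n
        for j in range(n):
            i = j + 1                        # status index of slot j
            w[j] += (1 - i / n) * q[j]       # mutation misses every 0-bit
            if j > 0:
                w[j - 1] += (i / n) * q[j]   # one of the i 0-bits is flipped
        q = w
    # expected error: status i contributes error e_i = i
    return sum((j + 1) * q[j] for j in range(n))
```

The returned value matches the closed form $\frac{n}{2}(1-\frac{1}{n})^t$ of (36) up to floating-point error.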

###### Theorem 4.2

The expected approximation error of the (1+1)EA for the OneMax problem is bounded from above by

$$e^{[t]} \le \frac{n}{2}\left[1 - \frac{1}{ne}\right]^t. \qquad (40)$$
###### Proof

According to the definition of population status, the status index $i$ is the number of 0-bits in $\mathbf{x}$. Once one of the 0-bits is flipped to 1 and all other bits keep unchanged, the generated solution will be accepted, and the status transfers from $i$ to $i-1$. Recalling that the probability of this event is $\frac{i}{n}(1-\frac{1}{n})^{n-1}$, we know that

$$p_{i-1,i} \ge \frac{i}{n}\left(1 - \frac{1}{n}\right)^{n-1} \ge \frac{i}{ne}, \quad i = 1, \dots, n.$$

Denote

$$\mathbf{R}_S = \begin{pmatrix} 1-\frac{1}{ne} & \frac{2}{ne} & & & \\ & \ddots & \ddots & & \\ & & 1-\frac{n-1}{ne} & 1 & \\ & & & & 0 \end{pmatrix},$$

and we know that

$$e^{[t]} \le \mathbf{e}\mathbf{R}_S^t\mathbf{q}. \qquad (41)$$

With $n$ distinct eigenvalues, $\mathbf{R}_S$ can be diagonalized:

$$\mathbf{P}^{-1}\mathbf{R}_S\mathbf{P} = \mathbf{\Lambda}, \qquad (42)$$

where $\mathbf{\Lambda} = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ and $\mathbf{P} = (\eta_1, \dots, \eta_n)$. Here $\lambda_i$ and $\eta_i$ are the eigenvalues and the corresponding eigenvectors:

$$\begin{array}{ll} \lambda_1 = 1-\frac{1}{ne}, & \eta_1 = (C_1^1, 0, \dots, 0)^T, \\ \lambda_2 = 1-\frac{2}{ne}, & \eta_2 = (-C_2^1, C_2^2, 0, \dots, 0)^T, \\ \quad\vdots & \quad\vdots \\ \lambda_n = 0, & \eta_n = ((-1)^{n+1}C_n^1, (-1)^{n+2}C_n^2, \dots, (-1)^{2n}C_n^n)^T. \end{array} \qquad (43)$$

It is obvious that $\mathbf{P}$ is invertible, and its inverse is

$$\mathbf{P}^{-1} = \begin{pmatrix} C_1^1 & C_2^1 & C_3^1 & \cdots & C_n^1 \\ & C_2^2 & C_3^2 & \cdots & C_n^2 \\ & & \ddots & & \vdots \\ & & & & C_n^n \end{pmatrix}. \qquad (44)$$

Similar to the result illustrated in (39), we know that

$$\mathbf{e}\mathbf{P} = (1, 0, \dots, 0), \qquad \mathbf{P}^{-1}\mathbf{q} = \left(\frac{C_n^1}{2}, \frac{C_n^2}{2^2}, \dots, \frac{C_n^{n-1}}{2^{n-1}}, \frac{1}{2^n}\right)^T. \qquad (45)$$

Combining (41), (42), (43), (44) and (45), we know that

$$e^{[t]} \le \mathbf{e}\mathbf{P}\mathbf{\Lambda}^t\mathbf{P}^{-1}\mathbf{q} = \frac{n}{2}\left[1 - \frac{1}{ne}\right]^t. \qquad ∎$$

### 4.2 Error Estimation for the Needle-in-Haystack Problem

The landscape of the Needle-in-Haystack problem is a plateau where all solutions have the same function value 0, and only the global optimum has the non-zero function value 1. For this problem, the status $i$ is defined as the total number of 1-bits in a solution $\mathbf{x}$.

###### Theorem 4.3

The expected approximation error of the RLS for the Needle-in-Haystack problem is bounded by

$$\left(1 - \frac{1}{n}\right)^t + 1 - \frac{n+1}{2^n} \le e^{[t]} \le \left(1 - \frac{1}{en}\right)^t + 1 - \frac{n+1}{2^n}. \qquad (46)$$
###### Proof

When the RLS is employed to solve the Needle-in-Haystack problem, the transition submatrix is

$$\mathbf{R}_S = \mathrm{diag}\left(1 - \frac{1}{n}\left(1 - \frac{1}{n}\right)^{n-1},\ 1, \dots, 1\right). \qquad (47)$$

Then,

$$e^{[t]} = \mathbf{e}\mathbf{R}_S^t\mathbf{q} = \sum_{i=1}^{n} e_i p_{i,i}^t q_i = \left[1 - \frac{1}{n}\left(1 - \frac{1}{n}\right)^{n-1}\right]^t + \sum_{i=2}^{n} \frac{C_n^i}{2^n}. \qquad (48)$$

Since

$$\left(1 - \frac{1}{n}\right)^t \le \left[1 - \frac{1}{n}\left(1 - \frac{1}{n}\right)^{n-1}\right]^t \le \left(1 - \frac{1}{en}\right)^t, \qquad \sum_{i=2}^{n} \frac{C_n^i}{2^n} = 1 - \frac{C_n^0}{2^n} - \frac{C_n^1}{2^n} = 1 - \frac{n+1}{2^n},$$

we can conclude that

$$\left(1 - \frac{1}{n}\right)^t + 1 - \frac{n+1}{2^n} \le e^{[t]} \le \left(1 - \frac{1}{en}\right)^t + 1 - \frac{n+1}{2^n}. \qquad ∎$$

Theorem 4.3 indicates that both the upper bound and the lower bound converge to the positive constant $1 - \frac{n+1}{2^n}$ as $t \to \infty$, which implies that the RLS cannot converge in mean to the global optimal solution of the Needle-in-Haystack problem. Because the RLS only searches adjacent statuses and only better solutions can be accepted, it cannot converge to the optimal status once the initial solution is not located at status 0 or status 1.

###### Theorem 4.4

The expected approximation error of the (1+1)EA for the Needle-in-Haystack problem is bounded by

$$\frac{n}{2}\left(1 - \frac{1}{n}\right)^t \le e^{[t]} \le \frac{n}{2}\left(1 - \frac{1}{n^n}\right)^t. \qquad (49)$$
###### Proof

When the (1+1)EA is employed to solve the Needle-in-Haystack problem, the transition probability submatrix is

$$\mathbf{R}_S = \mathrm{diag}\left(1 - \frac{1}{n}\left(1 - \frac{1}{n}\right)^{n-1},\ \dots,\ 1 - \left(\frac{1}{n}\right)^n\right). \qquad (50)$$

Then,

$$e^{[t]} = \mathbf{e}\mathbf{R}_S^t\mathbf{q} = \sum_{i=1}^{n} e_i p_{i,i}^t q_i = \sum_{i=1}^{n} i\left[1 - \left(\frac{1}{n}\right)^i\left(1 - \frac{1}{n}\right)^{n-i}\right]^t \frac{C_n^i}{2^n}. \qquad (51)$$

Since

$$\sum_{i=1}^{n} i\left[1 - \left(\frac{1}{n}\right)^i\left(1 - \frac{1}{n}\right)^{n-i}\right]^t \frac{C_n^i}{2^n} \ge \left(1 - \frac{1}{n}\right)^t \sum_{i=1}^{n} i\,\frac{C_n^i}{2^n} = \frac{n}{2}\left(1 - \frac{1}{n}\right)^t,$$

$$\sum_{i=1}^{n} i\left[1 - \left(\frac{1}{n}\right)^i\left(1 - \frac{1}{n}\right)^{n-i}\right]^t \frac{C_n^i}{2^n} \le \left(1 - \frac{1}{n^n}\right)^t \sum_{i=1}^{n} i\,\frac{C_n^i}{2^n} = \frac{n}{2}\left(1 - \frac{1}{n^n}\right)^t,$$

we can conclude that

$$\frac{n}{2}\left(1 - \frac{1}{n}\right)^t \le e^{[t]} \le \frac{n}{2}\left(1 - \frac{1}{n^n}\right)^t. \qquad ∎$$
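Because the submatrix (50) is diagonal, the expected error of Theorem 4.4 is cheap to evaluate directly. The sketch below (function name ours) recomputes (51), including its weighting $e_i = i$, and allows a numerical check against the bounds (49):

```python
from math import comb

def ea_needle_error(n, t):
    # e[t] for the (1+1)EA on Needle-in-Haystack: the submatrix (50) is
    # diagonal, so e[t] = sum_i e_i p_{i,i}^t q_i as in (51)
    total = 0.0
    for i in range(1, n + 1):
        stay = 1 - (1 / n) ** i * (1 - 1 / n) ** (n - i)   # p_{i,i}
        total += i * stay ** t * comb(n, i) / 2 ** n
    return total
```

Since $(1/n)^n \le (1/n)^i(1-1/n)^{n-i} \le 1/n$, each diagonal entry lies between $1-1/n$ and $1-1/n^n$, which is exactly how the two bounds arise.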

## 5 Conclusion

To make theoretical results more instructive for algorithm development and application, this paper investigates the performance of EAs by estimating the approximation error for an arbitrary iteration budget $t$. The general bounds in Theorems 3.1 and 3.2 demonstrate that the performance bottlenecks of EAs are determined by the maximum and minimum eigenvalues of the transition submatrix $\mathbf{R}$. Meanwhile, Theorems 4.1, 4.2, 4.3 and 4.4 present estimates of the approximation error of the RLS and the (1+1)EA for two benchmark problems, which shows that our analysis scheme is applicable to elitist EAs regardless of the shapes of their transition matrices. Moreover, the estimation results demonstrate that approximation errors are closely related to the eigenvalues of the transition matrices, which provides useful information for the performance improvement of EAs. Our future work is to perform error analysis on real combinatorial problems to further show its applicability in the theoretical analysis of EAs.

## Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 61303028 and 61763010, in part by the Guangxi “BAGUI Scholar” Program, and in part by the Science and Technology Major Project of Guangxi under Grant AA18118047.

## References

1. Oliveto, P., He, J., Yao, X.: Time complexity of evolutionary algorithms for combinatorial optimization: A decade of results. International Journal of Automation and Computing 4(3), 281–293 (2007)
2. He, J., Yao, X.: Towards an analytic framework for analysing the computation time of evolutionary algorithms. Artificial Intelligence 145(1-2), 59–97 (2003)
3. Ding, L., Yu, J.: Some techniques for analyzing time complexity of evolutionary algorithms. Transactions of the Institute of Measurement and Control 34(6), 755–766 (2012)
4. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artificial Intelligence 127(1), 57–85 (2001)
5. Doerr, B., Johannsen, D., Winzen, C.: Multiplicative drift analysis. Algorithmica 64(4), 673–697 (2012)
6. Yu, Y., Qian, C., Zhou, Z.H.: Switch analysis for running time analysis of evolutionary algorithms. IEEE Transactions on Evolutionary Computation 19(6), 777–792 (2014)
7. Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theoretical Computer Science 276(1-2), 51–81 (2002)
8. Chen, Y., Zou, X., He, J.: Drift conditions for estimating the first hitting times of evolutionary algorithms. International Journal of Computer Mathematics 88(1), 37–50 (2011)
9. Huang, H., Xu, W., Zhang, Y., Lin, Z., Hao, Z.: Runtime analysis for continuous (1+1) evolutionary algorithm based on average gain model. Scientia Sinica Informationis 44(6), 811–824 (2014)
10. Zhang, Y., Huang, H., Hao, Z., Hu, G.: First hitting time analysis of continuous evolutionary algorithms based on average gain. Cluster Computing 19(3), 1323–1332 (2016)
11. Akimoto, Y., Auger, A., Glasmachers, T.: Drift theory in continuous search spaces: expected hitting time of the (1+1)-ES with 1/5 success rule. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 801–808. ACM (2018)
12. Yu, Y., Yao, X., Zhou, Z.H.: On the approximation ability of evolutionary optimization with application to minimum set cover. Artificial Intelligence 180–181, 20–33 (2012)
13. Lai, X., Zhou, Y., He, J., Zhang, J.: Performance analysis of evolutionary algorithms for the minimum label spanning tree problem. IEEE Transactions on Evolutionary Computation 18(6), 860–872 (2014)
14. Zhou, Y., Lai, X., Li, K.: Approximation and parameterized runtime analysis of evolutionary algorithms for the maximum cut problem. IEEE Transactions on Cybernetics 45(8), 1491–1498 (2015)
15. Zhou, Y., Zhang, J., Wang, Y.: Performance analysis of the (1+1) evolutionary algorithm for the multiprocessor scheduling problem. Algorithmica 73(1), 21–41 (2015)
16. Xia, X., Zhou, Y., Lai, X.: On the analysis of the (1+1) evolutionary algorithm for the maximum leaf spanning tree problem. International Journal of Computer Mathematics 92(10), 2023–2035 (2015)
17. Peng, X., Zhou, Y., Xu, G.: Approximation performance of ant colony optimization for the TSP(1,2) problem. International Journal of Computer Mathematics 93(10), 1683–1694 (2016)
18. Rudolph, G.: Convergence rates of evolutionary algorithms for a class of convex objective functions. Control and Cybernetics 26, 375–390 (1997)
19. He, J., Lin, G.: Average convergence rate of evolutionary algorithms. IEEE Transactions on Evolutionary Computation 20(2), 316–321 (2016)
20. Jansen, T., Zarges, C.: Fixed budget computations: A different perspective on run time analysis. In: Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation. pp. 1325–1332. ACM (2012)
21. Jansen, T., Zarges, C.: Performance analysis of randomised search heuristics operating with a fixed budget. Theoretical Computer Science 545, 39–58 (2014)
22. He, J.: An analytic expression of relative approximation error for a class of evolutionary algorithms. In: Proceedings of 2016 IEEE Congress on Evolutionary Computation (CEC 2016). pp. 4366–4373 (July 2016)
23. He, J., Jansen, T., Zarges, C.: Unlimited budget analysis. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 427–428. ACM (2019)
24. He, J., Chen, Y., Zhou, Y.: A theoretical framework of approximation error analysis of evolutionary algorithms. arXiv preprint arXiv:1810.11532 (2018)
25. Aigner, M.: Combinatorial Theory. Springer Science & Business Media (2012)
26. Herstein, I.N.: Topics in Algebra. John Wiley & Sons (2006)
27. Lay, D.C.: Linear Algebra and Its Applications. Pearson Education (2003)