Acceleration of Univariate Global Optimization Algorithms Working with Lipschitz Functions and Lipschitz First Derivatives
Abstract
This paper deals with two kinds of one-dimensional global optimization problems over a closed finite interval: (i) the objective function $f(x)$ satisfies the Lipschitz condition with a constant $L$; (ii) the first derivative of $f(x)$ satisfies the Lipschitz condition with a constant $M$. In the paper, six algorithms are presented for case (i) and six algorithms for case (ii). In both cases, auxiliary functions are constructed and adaptively improved during the search: in case (i), piecewise linear functions are constructed, and in case (ii) smooth piecewise quadratic functions are used. The constants $L$ and $M$ are either taken as values known a priori or are dynamically estimated during the search. A recent technique that adaptively estimates the local Lipschitz constants over different zones of the search region is used to accelerate the search. A new technique called the local improvement is introduced in order to accelerate the search in both cases (i) and (ii). The algorithms are described in a unified framework, their properties are studied from a general viewpoint, and convergence conditions for the proposed algorithms are given. Numerical experiments executed on 120 test problems taken from the literature show quite a promising performance of the new accelerating techniques.
Keywords: global optimization, Lipschitz functions, Lipschitz derivatives, balancing local and global information, acceleration.
AMS subject classifications: 90C26, 65K05
1 Introduction
Let us consider the one-dimensional global optimization problem of finding a point $x^*$ belonging to a finite interval $[a, b]$ and the value $f^*$ such that

(1.1) $f^* = f(x^*) = \min_{x \in [a,b]} f(x),$

where either the objective function $f(x)$ or its first derivative $f'(x)$ satisfies the Lipschitz condition, i.e., either

(1.2) $|f(x') - f(x'')| \le L\,|x' - x''|, \quad x', x'' \in [a, b],$

or

(1.3) $|f'(x') - f'(x'')| \le M\,|x' - x''|, \quad x', x'' \in [a, b],$

with constants $0 < L < \infty$, $0 < M < \infty$.
Problems of this kind are worthy of great attention for at least two reasons. First, there exists a large number of real-life applications where it is necessary to solve univariate global optimization problems stated in various ways (see, e.g., [2, 3, 4, 5, 8, 10, 16, 18, 23, 24, 26, 28, 29, 30, 31, 32, 35, 36]). Problems of this kind are often encountered in scientific and engineering applications (see, e.g., [9, 14, 15, 21, 24, 30, 31, 33, 36]) and, in particular, in electrical engineering optimization problems (see, e.g., [5, 6, 17, 20, 27, 33]). On the other hand, it is important to study one-dimensional methods proposed to solve the problems (1.1), (1.2) and (1.1), (1.3) because they can be successfully extended to the multidimensional case by numerous schemes (see, for example, one-point-based, diagonal, simplicial, space-filling curves, and other popular approaches in [7, 12, 13, 19, 21, 30, 33]).
In the literature, there exist several methods for solving the problems (1.1), (1.2) and (1.1), (1.3) (see, for example, [12, 13, 21, 30, 33]). For solving the problem (1.1), (1.2), Piyavskii (see [23]) has proposed a popular method that requires an a priori overestimate of the Lipschitz constant $L$ of the function $f(x)$: in the course of its work, the algorithm constructs piecewise linear support functions for $f(x)$ over every subinterval $[x_{i-1}, x_i]$, $2 \le i \le k$, where the points $x_i$ are points previously produced by the algorithm (see Fig. 1) at which the objective function has been evaluated, i.e., $z_i = f(x_i)$.
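To make the construction concrete, the core loop of Piyavskii's method can be sketched as follows. This is a minimal Python sketch of the standard form of the method, not the implementation tested in this paper; the function name `piyavskii` and the parameter defaults are ours:

```python
def piyavskii(f, a, b, L, eps=1e-4, max_iter=10000):
    """Minimal sketch of Piyavskii's method with a known Lipschitz constant L."""
    pts = [(a, f(a)), (b, f(b))]               # sorted trial points (x, f(x))
    for _ in range(max_iter):
        # characteristic R_i: minimum of the piecewise linear minorant on [x_{i-1}, x_i]
        t, R_t = None, float("inf")
        for i in range(1, len(pts)):
            (xl, fl), (xr, fr) = pts[i - 1], pts[i]
            R = (fl + fr) / 2 - L * (xr - xl) / 2
            if R < R_t:
                t, R_t = i, R
        (xl, fl), (xr, fr) = pts[t - 1], pts[t]
        if xr - xl <= eps:                     # stopping rule on the selected interval
            break
        # new trial at the minimizer of the minorant on the selected interval
        x_new = (xl + xr) / 2 - (fr - fl) / (2 * L)
        pts.insert(t, (x_new, f(x_new)))       # x_new lies inside, so order is kept
    return min(pts, key=lambda p: p[1])        # best trial found so far
```

For example, for $f(x) = (x - 0.3)^2$ on $[0, 1]$ with the overestimate $L = 2$ the sketch returns a point close to the global minimizer $x^* = 0.3$.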
In the present paper, to solve the problem (1.1), (1.2) we consider Piyavskii's method and algorithms that dynamically estimate the Lipschitz information for the entire region or for its subregions. This is done because the precise information about the value $L$ that Piyavskii's method requires for its correct work is often hard to get in practice. Thus, we use two different procedures to obtain information about the constant $L$: the first one estimates the global constant during the search (the word "global" means that the same estimate is used over the whole region $[a, b]$), and the second one, called the "local tuning technique", adaptively estimates the local Lipschitz constants in different subintervals of the search region during the course of the optimization process.
Then, in order to accelerate the search, we propose a new acceleration tool, called “local improvement”, that can be used together with all three ways described above to obtain the Lipschitz information in the framework of the Lipschitz algorithms. The new approach forces the global optimization method to make a local improvement of the best approximation of the global minimum immediately after a new approximation better than the current one is found.
The proposed local improvement technique is of particular interest for the following reasons. First, in global optimization methods the local search phases are usually separated from the global ones: it is necessary to introduce a rule that stops the global phase and starts the local one, and then stops the local phase and restarts the global one. It can happen (see, e.g., [12, 13, 21, 30, 33]) that the global search and the local one are realized by different algorithms, and the global search is not able to use all the evaluations of $f(x)$ made during the local search, thus losing important information about the objective function that has already been obtained. The local improvement technique introduced in this paper does not have this defect and allows the global search to use all the information obtained during the local phases. In addition, it can work without any usage of the derivatives, and this is a valuable asset when one solves the problem (1.1), (1.2) because, clearly, Lipschitz functions can be nondifferentiable.
Let us consider now the problem (1.1), (1.3). For this case, using the fact that the first derivative $f'(x)$ of the objective function satisfies the Lipschitz condition (1.3), Breiman and Cutler (see [1]) have suggested an approach that constructs at each iteration piecewise quadratic nondifferentiable support functions for the function $f(x)$ over $[a, b]$ using an a priori given overestimate of the constant $M$ from (1.3). Gergel (see [8]) has independently proposed a global optimization method that constructs similar auxiliary functions (see Fig. 2) and estimates $M$ dynamically during the search.
If we suppose that the Lipschitz constant $M$ from (1.3) is known, then (see [1, 8]), at an iteration $k$, the support functions are constructed for every interval $[x_{i-1}, x_i]$, $2 \le i \le k$ (see Fig. 2), as follows:

(1.4) $\phi_i(x) = \max\{\pi_{i-1}(x), \pi_i(x)\}, \quad x \in [x_{i-1}, x_i],$

where

$\pi_j(x) = z_j + z'_j (x - x_j) - \frac{M}{2}(x - x_j)^2,$

and $z_j = f(x_j)$, $z'_j = f'(x_j)$.
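A minimal sketch of evaluating this nonsmooth piecewise quadratic minorant on one interval, assuming the standard Breiman–Cutler form of the endpoint parabolas (names are ours):

```python
def bc_minorant(x, xl, fl, dl, xr, fr, dr, M):
    """Breiman-Cutler-Gergel lower bound on [xl, xr], built from trial data
    (f, f') at both endpoints and an overestimate M of the constant in (1.3)."""
    # each trial point x_j yields f(x) >= f(x_j) + f'(x_j)(x - x_j) - (M/2)(x - x_j)^2
    left = fl + dl * (x - xl) - 0.5 * M * (x - xl) ** 2
    right = fr + dr * (x - xr) - 0.5 * M * (x - xr) ** 2
    return max(left, right)   # the tighter of the two parabolic lower bounds
```

For instance, for $f(x) = x^2$ with $M = 2$ the bound at the midpoint of $[0, 1]$ equals $-0.25 \le f(0.5) = 0.25$.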
It can be noticed that, in spite of the fact that $f(x)$ is smooth, the support functions (1.4) are not smooth. This defect has been eliminated in [25], where three methods constructing smooth support functions that are closer to the objective function than the nonsmooth ones have been introduced.
In this paper, for solving the problem (1.1), (1.3) we describe six different algorithms where smooth support functions are used. As in the case of the problem (1.1), (1.2), the local tuning and the local improvement techniques are applied to accelerate the search.
The paper has the following structure. In Section 2, we describe algorithms for solving the problem (1.1), (1.2); in Section 3, we describe methods that use smooth support functions in order to solve the problem (1.1), (1.3). The convergence conditions to the global minimizers for the introduced methods are established in both sections. In Section 4, numerical results are presented and discussed. Finally, Section 5 concludes the paper.
2 Six methods constructing piecewise linear auxiliary functions for solving problems with the Lipschitz objective function
In this section, we study the problem (1.1) with the objective function $f(x)$ satisfying the Lipschitz condition (1.2). First, we present a general scheme describing in a compact form all the methods considered in this section, and then, by specifying STEP 2 and STEP 4 of the scheme, we introduce six different algorithms. In this section, by the term trial we denote the evaluation of the function $f(x)$ at a point $x$ that is called the trial point.
General Scheme (GS) describing algorithms working with piecewise linear auxiliary functions.
 STEP 0.

The first two trials are performed at the points $x^1 = a$ and $x^2 = b$. The point $x^{k+1}$, $k \ge 2$, of the current $(k+1)$-th iteration is chosen as follows.
 STEP 1.

Renumber the trial points $x^1, \dots, x^k$ of the previous iterations by subscripts so that

(2.1) $a = x_1 < x_2 < \dots < x_k = b.$

 STEP 2.

Compute in a certain way the values $l_i$ being estimates of the Lipschitz constant of $f(x)$ over the intervals $[x_{i-1}, x_i]$, $2 \le i \le k$. The way to calculate the values $l_i$ will be specified in each concrete algorithm described below.
 STEP 3.

Calculate for each interval $[x_{i-1}, x_i]$, $2 \le i \le k$, its characteristic

(2.2) $R_i = \frac{z_i + z_{i-1}}{2} - l_i\,\frac{x_i - x_{i-1}}{2},$

where the values $z_i = f(x_i)$, $1 \le i \le k$.
 STEP 4.
 STEP 4.

Find an interval $[x_{t-1}, x_t]$ where the next trial will be executed. The way to choose such an interval will be specified in each concrete algorithm described below.
 STEP 5.

If

(2.3) $x_t - x_{t-1} > \varepsilon,$

where $\varepsilon > 0$ is a given search accuracy, then execute the next trial at the point

(2.4) $x^{k+1} = \frac{x_t + x_{t-1}}{2} - \frac{z_t - z_{t-1}}{2 l_t}$

and go to STEP 1. Otherwise, take as an estimate of the global minimum $f^*$ from (1.1) the value

$f_k^* = \min\{ z_i : 1 \le i \le k \}$

and a point

$x_k^* = \arg\min\{ z_i : 1 \le i \le k \}$

as an estimate of the global minimizer $x^*$; after executing these operations STOP.
Let us make some observations with regard to the scheme introduced above. During the course of the $k$-th iteration, a method following this scheme constructs an auxiliary piecewise linear function

$C_k(x) = c_i(x), \quad x \in [x_{i-1}, x_i], \quad 2 \le i \le k,$

where

$c_i(x) = \max\{ z_{i-1} - l_i (x - x_{i-1}),\; z_i - l_i (x_i - x) \},$

and the characteristic $R_i$ from (2.2) represents the minimum of the auxiliary function $c_i(x)$ over the interval $[x_{i-1}, x_i]$.
If the constants $l_i$, $2 \le i \le k$, are equal to or larger than the Lipschitz constant $L$, then it follows from (1.2) that the auxiliary piecewise linear function, denoted $C_k(x)$, is a lower bounding function for $f(x)$ over the interval $[a, b]$, i.e., for every interval $[x_{i-1}, x_i]$, $2 \le i \le k$, we have

$C_k(x) \le f(x), \quad x \in [x_{i-1}, x_i].$

Moreover, if $l_i = L$, $2 \le i \le k$, we obtain the Piyavskii support functions (see Fig. 1).
In order to obtain from the general scheme a concrete global optimization algorithm, it is necessary to define STEP 2 and STEP 4 of the scheme. This section proposes six specific algorithms executing these operations in different ways. In STEP 2, we can make three different choices of computing the constants $l_i$ that lead to three different procedures, called STEP 2.1, STEP 2.2, and STEP 2.3, respectively. The first way to define STEP 2 is the following.
STEP 2.1.
Set

(2.5) $l_i = L, \quad 2 \le i \le k.$

Here the exact value of the a priori given Lipschitz constant $L$ is used. Obviously, this rule gives us the Piyavskii algorithm.
If the constant $L$ is not available (a situation very often encountered in practice), it is necessary to look for an approximation of $L$ during the course of the search. Thus, as the second way to define STEP 2 of the scheme, we use an adaptive estimate of the global Lipschitz constant (see [30, 33]) at each iteration $k$. More precisely, we have:
STEP 2.2.
Set

(2.6) $l_i = r \max\{\xi, H^k\}, \quad 2 \le i \le k,$

where $\xi > 0$ is a small number that takes into account our hypothesis that $f(x)$ is not constant over the interval $[a, b]$, and $r > 1$ is a reliability parameter. The value $H^k$ is calculated as follows:

(2.7) $H^k = \max\{ H_i : 2 \le i \le k \},$

with

(2.8) $H_i = \frac{|z_i - z_{i-1}|}{x_i - x_{i-1}}, \quad 2 \le i \le k.$
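Assuming the form of (2.6)–(2.8) given above, the adaptive global estimate can be sketched as follows (a Python sketch with our own names; `r` and `xi` play the roles of the reliability parameter and of the small constant $\xi$):

```python
def global_estimate(pts, r=1.5, xi=1e-8):
    """Adaptive global Lipschitz estimate r * max(xi, H^k), where H^k is the
    largest observed slope |z_i - z_{i-1}| / (x_i - x_{i-1}) over the current
    intervals; pts is the sorted list of trials (x, f(x))."""
    H = max(abs(fr - fl) / (xr - xl) for (xl, fl), (xr, fr) in zip(pts, pts[1:]))
    return r * max(xi, H)
```

For three trials with slopes 2 and 0 and $r = 1.5$, the estimate is $1.5 \cdot 2 = 3$.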
In both cases, STEP 2.1 and STEP 2.2, at each iteration all the quantities $l_i$ assume the same value over the whole search region $[a, b]$. However, both the a priori given exact constant $L$ and its global estimate (2.6) can provide poor information about the behavior of the objective function over a small subinterval $[x_{i-1}, x_i]$. In fact, when the local Lipschitz constant related to the interval $[x_{i-1}, x_i]$ is significantly smaller than the global constant $L$, the methods using only this global constant or its estimate (2.6) can work slowly over such an interval (see [24, 30, 33]).
In order to overcome this difficulty, we consider a recent approach (see [24, 30, 33]) called the local tuning that adaptively estimates the values of the local Lipschitz constants related to the intervals $[x_{i-1}, x_i]$ (note that other techniques using different kinds of local information in global optimization can be found in [21, 33, 34]). The auxiliary function is then constructed by using these local estimates for each interval $[x_{i-1}, x_i]$, $2 \le i \le k$. This technique is described below as the rule STEP 2.3.
STEP 2.3.
Set

(2.9) $l_i = r \max\{\lambda_i, \gamma_i, \xi\},$

with

(2.10) $\lambda_i = \max\{H_{i-1}, H_i, H_{i+1}\},$

where $H_i$ is from (2.8); when $i = 2$ and $i = k$, only $H_2, H_3$ and $H_{k-1}, H_k$ should be considered, respectively. The value

(2.11) $\gamma_i = H^k\,\frac{x_i - x_{i-1}}{X^{\max}},$

where $H^k$ is from (2.7) and

$X^{\max} = \max\{ x_i - x_{i-1} : 2 \le i \le k \}.$

The parameter $r > 1$ has the same sense as in STEP 2.2.
Note that in (2.9) we consider two different components, $\lambda_i$ and $\gamma_i$, that take into account, respectively, the local and the global information obtained during the previous iterations. When the interval $[x_{i-1}, x_i]$ is large, the local information is not reliable and the global part $\gamma_i$ has a decisive influence on $l_i$ thanks to (2.9) and (2.11). When $[x_{i-1}, x_i]$ is small, the local information becomes relevant, $\gamma_i$ is small (see (2.11)), and the local component $\lambda_i$ assumes the key role. Thus, STEP 2.3 automatically balances the global and the local information available at the current iteration. It has been proved for a number of global optimization algorithms that the usage of the local tuning can accelerate the search significantly (see [24, 25, 26, 30, 31, 32, 33]).
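Under the same assumptions about the form of (2.9)–(2.11), the local tuning rule can be sketched as follows (our names; a sketch rather than the tested implementation):

```python
def local_tuning(pts, r=1.5, xi=1e-8):
    """Per-interval estimates l_i = r * max(lambda_i, gamma_i, xi), balancing the
    slopes of neighbouring intervals (local) against the global slope H^k.
    pts: sorted trials (x, f(x)); returns one estimate per interval."""
    H = [abs(fr - fl) / (xr - xl) for (xl, fl), (xr, fr) in zip(pts, pts[1:])]
    H_k = max(H)                                         # global slope estimate
    X_max = max(xr - xl for (xl, _), (xr, _) in zip(pts, pts[1:]))
    estimates = []
    for i in range(len(H)):
        lam = max(H[max(0, i - 1): i + 2])               # local component
        gam = H_k * (pts[i + 1][0] - pts[i][0]) / X_max  # global component
        estimates.append(r * max(lam, gam, xi))
    return estimates
```

Note how a short interval far from the steep part of the data receives a smaller estimate than the intervals adjacent to large slopes, which is exactly the intended balancing effect.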
Let us introduce now possible ways to fix STEP 4 of the scheme. At this step, we select an interval $[x_{t-1}, x_t]$ where a new trial will be executed. We consider both the traditional rule used, for example, in [23] and [33], and a new one that we shall call the local improvement technique. The traditional way to choose the interval for the next trial is the following.

STEP 4.1. Select the interval $[x_{t-1}, x_t]$ corresponding to the minimal characteristic, i.e., such that

(2.12) $t = \arg\min\{ R_i : 2 \le i \le k \}.$

This rule used together with STEP 2.1 gives us Piyavskii's algorithm. In this case, the new trial point $x^{k+1}$ from (2.4) is chosen in such a way that the auxiliary function attains its minimal value $R_t$ at $x^{k+1}$.
The new way to fix STEP 4 is introduced below.
STEP 4.2 (the local improvement technique).

flag is a parameter initially equal to zero; $i_{\min}$ is the index corresponding to the current estimate of the minimal value of the function, that is, $z_{i_{\min}} \le z_i$, $1 \le i \le k$; $z^k$ is the result of the last trial corresponding to a point $x_j$ in the line (2.1), i.e., $z^k = z_j$.

IF (flag=1) THEN
IF $z^k < z_{i_{\min}}$ THEN $i_{\min} = j$.
Local improvement: Alternate the choice of the interval $[x_{t-1}, x_t]$ among $t = i_{\min}+1$ and $t = i_{\min}$ (if $i_{\min} = 1$ or $i_{\min} = k$, take $t = 2$ or $t = k$, respectively) in such a way that for $\delta > 0$ it follows

(2.13) $x_t - x_{t-1} > \delta.$

ELSE (flag=0)
Select the interval $[x_{t-1}, x_t]$ following the rule (2.12).
ENDIF
flag=NOTFLAG(flag)
The motivation for the introduction of STEP 4.2 is the following. In STEP 4.1, at each iteration we continue the search at the interval corresponding to the minimal value of the characteristic $R_i$, $2 \le i \le k$ (see (2.12)). With this choice it can happen that the search goes on for a certain finite (but possibly high) number of iterations in subregions of the domain that are "distant" from the best found approximation to the global solution, and only subsequently concentrates trials at the interval containing a global minimizer. However, very often it is of crucial importance to find a good approximation of the global minimum in the lowest number of iterations. For this reason, in STEP 4.2 we keep the rule (2.12) used in STEP 4.1 and related to the minimal characteristic, but we alternate it with a new selection method that forces the algorithm to continue the search in the part of the domain close to the best value of the objective function found so far. The parameter "flag", assuming values 0 or 1, allows us to alternate the two selection methods.
More precisely, in STEP 4.2 we start by identifying the index $i_{\min}$ corresponding to the current minimum among the found values of the objective function, and then we select either the interval located to the right of the best current point $x_{i_{\min}}$, i.e., $[x_{i_{\min}}, x_{i_{\min}+1}]$, or the interval to its left, i.e., $[x_{i_{\min}-1}, x_{i_{\min}}]$. STEP 4.2 keeps working alternately on the right and on the left of the current best point until a new trial point with a value less than $z_{i_{\min}}$ is found. The search thus moves from the right to the left of the best found approximation trying to improve it. However, since we are not sure that the found best approximation is really located in the neighborhood of a global minimizer $x^*$, the local improvement is alternated in STEP 4.2 with the usual rule (2.12), thus providing the global search of new subregions possibly containing the global solution $x^*$. The parameter $\delta$ defines the width of the intervals that can be subdivided during the phase of the local improvement. Note that the trial points produced during the phases of the local improvement (obviously, there can be more than one such phase in the course of the search) are used during the further iterations of the global search in the same way as the points produced during the global phases.
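The alternation between the global rule and the local improvement phase can be sketched as follows. This is a simplified Python sketch with our own names; the update of $i_{\min}$ after each trial is kept outside the function:

```python
def select_interval(pts, R, i_min, flag, delta):
    """Sketch of STEP 4.2: alternate the global rule (2.12) with a local phase
    that subdivides an interval adjacent to the best trial point pts[i_min].
    pts: sorted trials (x, f); R[i]: characteristic of [pts[i], pts[i+1]].
    Returns (index of the selected interval, flag value for the next call)."""
    if flag:  # local phase: try the interval to the right, then to the left
        for i in (i_min, i_min - 1):
            if 0 <= i < len(pts) - 1 and pts[i + 1][0] - pts[i][0] > delta:
                return i, 0
    # global phase (also the fallback when both local intervals are too short):
    # the interval with the minimal characteristic, rule (2.12)
    return min(range(len(pts) - 1), key=R.__getitem__), 1
```

When both intervals around the best point are already shorter than $\delta$, the sketch falls back to the global rule, which mirrors the convergence argument given for the algorithms with local improvement below.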
Let us consider now possible combinations of the different choices of STEP 2 and STEP 4 allowing us to construct the following six algorithms.

PKC: GS with STEP 2.1 and STEP 4.1 (Piyavskii's method with the a priori Known Constant $L$).

GE: GS with STEP 2.2 and STEP 4.1 (the method using the Global Estimate of the Lipschitz constant $L$).

LT: GS with STEP 2.3 and STEP 4.1 (the method executing the Local Tuning on the local Lipschitz constants).

PKC_LI: GS with STEP 2.1 and STEP 4.2 (Piyavskii's method with the a priori Known Constant $L$ enriched by the Local Improvement technique).

GE_LI: GS with STEP 2.2 and STEP 4.2 (the method using the Global Estimate of $L$ enriched by the Local Improvement technique).

LT_LI: GS with STEP 2.3 and STEP 4.2 (the method executing the Local Tuning on the local Lipschitz constants enriched by the Local Improvement technique).
Let us consider convergence properties of the introduced algorithms by studying an infinite trial sequence $\{x^k\}$ generated by an algorithm belonging to the general scheme for solving the problem (1.1), (1.2). We remind that the algorithm PKC is Piyavskii's method, whose convergence properties have been studied in [23]. In order to start, we need the following definition.
Definition 2.1
Convergence to a point $x' \in (a, b)$ is said to be bilateral if there exist two infinite subsequences of $\{x^k\}$ converging to $x'$: one from the left, the other from the right.
Theorem. Assume that the objective function $f(x)$ satisfies the condition (1.2), and let $x'$ be any limit point of $\{x^k\}$ generated by the GE or by the LT algorithm. Then the following assertions hold:

1. convergence to $x'$ is bilateral, if $x' \in (a, b)$;

2. $f(x^k) \ge f(x')$ for all trial points $x^k$, $k \ge 1$;

3. if there exists another limit point $x''$, then $f(x'') = f(x')$;

4. if the function $f(x)$ has a finite number of local minima in $[a, b]$, then the point $x'$ is locally optimal;

5. (Sufficient conditions for convergence to a global minimizer.) Let $x^*$ be a global minimizer of $f(x)$. If there exists an iteration number $k^*$ such that for all $k > k^*$ the inequality

(2.14) $l_{j(k)} \ge L_{j(k)}$

holds, where $L_{j(k)}$ is the Lipschitz constant for the interval $[x_{j(k)-1}, x_{j(k)}]$ containing $x^*$, and $l_{j(k)}$ is its estimate (see (2.6) and (2.9)), then the set of limit points of the sequence $\{x^k\}$ coincides with the set of global minimizers of the function $f(x)$.
Proof. The proofs of assertions 1–5 are analogous to the proofs of Theorems 4.1–4.2 and Corollaries 4.1–4.4 from [33].
Theorem. Assertions 1–5 of the preceding theorem hold for the algorithms PKC_LI, GE_LI, and LT_LI for a fixed finite $\delta$ with $\delta \ge \varepsilon$, where $\delta$ is from (2.13) and $\varepsilon$ is from (2.3).

Proof. Since $\delta \ge \varepsilon$ and $\varepsilon > 0$, the algorithms PKC_LI, GE_LI, and LT_LI use the local improvement only while the selected interval $[x_{t-1}, x_t]$ is longer than $\delta$. When $x_t - x_{t-1} \le \delta$, the interval cannot be divided by the local improvement technique and the selection criterion (2.12) is used. Thus, since the one-dimensional search region $[a, b]$ has a finite length and $\delta$ is a fixed finite number, there exists a finite iteration number such that at all subsequent iterations only the selection criterion (2.12) will be used. As a result, for the remaining part of the search, the methods PKC_LI, GE_LI, and LT_LI behave as the algorithms PKC, GE, and LT, respectively. This consideration concludes the proof.
The next theorem ensures the existence of values of the parameter $r$ satisfying condition (2.14), thus ensuring that all global minimizers of $f(x)$ will be located by the four proposed methods that do not use the a priori known Lipschitz constant.
Theorem. For any function $f(x)$ satisfying (1.2) with $L < \infty$ there exists a value $r^*$ such that, for all $r > r^*$, condition (2.14) holds for the four algorithms GE, LT, GE_LI, and LT_LI.

Proof. It follows from (2.6), (2.9), and the positivity of $\xi$ that the approximations of the Lipschitz constant $L$ in the four methods are always greater than zero. Since $L < \infty$ in (1.2) and any positive value of the parameter $r$ can be chosen in the scheme, it follows that there exists an $r^*$ such that condition (2.14) will be satisfied for all global minimizers for $r > r^*$. This fact, due to the two preceding theorems, proves the theorem.
3 Six methods constructing smooth piecewise quadratic auxiliary functions for solving problems with the Lipschitz first derivative
In this section, we study the algorithms for solving the problem (1.1) with the Lipschitz condition (1.3) holding for the first derivative $f'(x)$ of the objective function. In this section, by the term trial we denote the evaluation of both the function $f(x)$ and its first derivative $f'(x)$ at a point $x$ that is called the trial point.
We consider the smooth support functions described in [25]. This approach is based on the fact, observed in [25], that over each interval $[x_{i-1}, x_i]$ (see Fig. 3) the curvature of the objective function is determined by the Lipschitz constant $M$ from (1.3). In particular, over the interval $[x_{i-1}, x_i]$ it should be $f(x) \ge \pi_i(x)$, where

(3.1) $\pi_i(x) = \frac{M}{2}x^2 + b_i x + c_i.$

This means that over the interval $[x_{i-1}, x_i]$ both the objective function $f(x)$ and the parabola $\pi_i(x)$ are strictly above the Breiman–Cutler–Gergel's function from (1.4), where the unknowns $b_i$ and $c_i$ can be determined following the considerations made in [25].
These results from [25] allow us to construct the following smooth support function $\psi_i(x)$ for $f(x)$ over $[x_{i-1}, x_i]$:

(3.2) 

where the first derivative $\psi'_i(x)$ exists for all $x \in [x_{i-1}, x_i]$. This function is shown in Fig. 4. The points $y'_i$, $y''_i$ and the vertex of the parabola can be found (see [25] for the details) as follows:
(3.3) 
(3.4) 
(3.5) 
where and .
In order to construct global optimization algorithms by applying the same methodology used in the previous section, for each interval $[x_{i-1}, x_i]$ we should calculate its characteristic $R_i$. For the smooth auxiliary functions it can be calculated as the minimum of $\psi_i(x)$ over $[x_{i-1}, x_i]$. Three different cases can take place, depending on the location of the minimizer of $\psi_i(x)$ within the interval.
We are ready now to introduce the general scheme for the methods working with smooth piecewise quadratic auxiliary functions. As it has been done in the previous Section, six different algorithms will be then constructed by specifying STEP 2 and STEP 4 of the general scheme.
General Scheme describing algorithms working with the first Derivatives and constructing smooth piecewise quadratic auxiliary functions (GSD).
 STEP 0.

The first two trials are performed at the points $x^1 = a$ and $x^2 = b$. The point $x^{k+1}$, $k \ge 2$, of the current $(k+1)$-th iteration is chosen as follows.
 STEP 1.

Renumber the trial points $x^1, \dots, x^k$ of the previous iterations by subscripts so that

(3.8) $a = x_1 < x_2 < \dots < x_k = b.$

 STEP 2.

Compute in a certain way the values $m_i$ being estimates of the Lipschitz constant of $f'(x)$ over the intervals $[x_{i-1}, x_i]$, $2 \le i \le k$. The way to calculate the values $m_i$ will be specified in each concrete algorithm described below.
 STEP 3.

Initiate the index sets. Set the index of the current interval $i = 2$ and go to STEP 3.1.
 STEP 3.1.

If for the current interval $[x_{i-1}, x_i]$ the following inequality

(3.9) 

does not hold (where $\pi'_i$ is the derivative of the parabola (3.1)), then go to STEP 3.2. Otherwise go to STEP 3.3.
 STEP 3.2.

Calculate for the interval $[x_{i-1}, x_i]$ its characteristic $R_i$ using (3.6). Include the index $i$ in the corresponding index set and go to STEP 3.4.
 STEP 3.3.

Calculate for the interval $[x_{i-1}, x_i]$ its characteristic $R_i$ using (3.7). If the corresponding condition is satisfied, then include the index $i$ in one of the two remaining index sets and go to STEP 3.4. Otherwise, include $i$ in the other set and go to STEP 3.4.
 STEP 3.4.

If $i < k$, set $i = i + 1$ and go to STEP 3.1. Otherwise go to STEP 4.
 STEP 4.

Find the interval $[x_{t-1}, x_t]$ for the next possible trial. The way to do it will be specified in each concrete algorithm described below.
 STEP 5.

If

(3.10) $x_t - x_{t-1} > \varepsilon,$

where $\varepsilon > 0$ is a given search accuracy, then execute the next trial at the point

(3.11) 

and go to STEP 1. Otherwise, take as an estimate of the global minimum $f^*$ from (1.1) the value

$f_k^* = \min\{ z_i : 1 \le i \le k \}$

and a point

$x_k^* = \arg\min\{ z_i : 1 \le i \le k \}$

as an estimate of the global minimizer $x^*$; after executing these operations STOP.
Let us make just two comments upon the introduced scheme GSD. First, in STEPS 3.1–3.4 the characteristics $R_i$, $2 \le i \le k$, are calculated by taking into account the different cases i–iii of the location of the minimizer of $\psi_i(x)$ described above. Second, note that the index sets have been introduced in order to correctly calculate the new trial point $x^{k+1}$ in STEP 5. In fact, the vertex of the parabola can be outside the interior of the interval $[x_{t-1}, x_t]$, in which case an endpoint of the interval is selected as the new trial point $x^{k+1}$.
Let us show now how it is possible to specify STEP 2 and STEP 4 of the scheme GSD. As it has been done in the previous section for the scheme GS, we first describe three different choices of the values $m_i$ that should be made at STEP 2 and then consider two selection rules that can be used to fix STEP 4 for choosing the point $x^{k+1}$. The first possible way to assign values to $m_i$ is the following:

STEP 2.1. Set $m_i = M$, $2 \le i \le k$.

In this case, the exact value of the a priori given Lipschitz constant $M$ for the first derivative is used. As a result, the auxiliary functions $\psi_i(x)$ from (3.2) are support functions for $f(x)$ over the intervals $[x_{i-1}, x_i]$, $2 \le i \le k$. Since it is difficult to know the exact value $M$ in practice, the choices made in the following STEPS 2.2 and 2.3 (as it was for the methods working with Lipschitz objective functions) describe how to estimate dynamically the global constant $M$ (STEP 2.2) and the local constants related to each interval $[x_{i-1}, x_i]$ (STEP 2.3).
STEP 2.2
Set

(3.13) $m_i = r \max\{\xi, H^k\}, \quad 2 \le i \le k,$

where $\xi > 0$ reflects the supposition that $f(x)$ is not constant over the interval $[a, b]$ and $r > 1$ has the same sense as in STEP 2.2 of the scheme GS. The value $H^k$ is computed as

(3.14) $H^k = \max\{ v_i : 2 \le i \le k \},$

where

(3.15) 

and

(3.16) 
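As an illustration only, a simplified global estimate for this case can be sketched as below. It uses only the observed slopes of $f'$, while the exact formulae (3.15)–(3.16) of [25] are more elaborate and also involve the function values; all names here are ours:

```python
def derivative_global_estimate(trials, r=1.5, xi=1e-8):
    """Crude global estimate of the Lipschitz constant of f', in the spirit of
    (3.13)-(3.14): r * max(xi, largest observed slope of f').
    trials: sorted list of (x, f(x), f'(x))."""
    H = max(abs(dr - dl) / (xr - xl)
            for (xl, _, dl), (xr, _, dr) in zip(trials, trials[1:]))
    return r * max(xi, H)
```

For two trials with derivative values 0 and 2 at distance 1, the slope of $f'$ is 2 and the estimate with $r = 1.5$ equals 3.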
If an algorithm uses the exact value $M$ of the Lipschitz constant (see STEP 2.1 above), then it is ensured by construction that the points $y'_i$, $y''_i$ from (3.3) and (3.4) belong to the interval $[x_{i-1}, x_i]$. In the case when an estimate of $M$ is used, it can happen that, if the value $M$ is underestimated, the points $y'_i$ and $y''_i$ are obtained outside the interval $[x_{i-1}, x_i]$, which would lead to an error in the work of the algorithm using such an underestimate. It has been proved in [25] that the choice (3.13)–(3.16) makes this unpleasant situation impossible. More precisely, the following theorem holds.

Theorem. If the values $m_i$ in the GSD are determined by formulae (3.13)–(3.16), then the points $y'_i$, $y''_i$ from (3.3), (3.4) belong to the interval $[x_{i-1}, x_i]$ and the following estimates take place:
Let us introduce now STEP 2.3, which shows how the local tuning technique works in the situation where the first derivative of the objective function can be calculated.
STEP 2.3
Set

(3.17) $m_i = r \max\{\lambda_i, \gamma_i, \xi\},$

where $r > 1$ and $\xi > 0$ have the same sense as before, and

(3.18) $\lambda_i = \max\{v_{i-1}, v_i, v_{i+1}\},$

where the values $v_i$ are calculated following (3.15); when $i = 2$ and $i = k$, we consider only $v_2, v_3$ and $v_{k-1}, v_k$, respectively. The value $\gamma_i$ is computed as follows:

(3.19) $\gamma_i = H^k\,\frac{x_i - x_{i-1}}{X^{\max}},$

where $H^k$ is from (3.14) and

$X^{\max} = \max\{ x_i - x_{i-1} : 2 \le i \le k \}.$
As in STEP 2.3 of the scheme GS from the previous section, the local tuning technique balances the local and the global information to get the estimates $m_i$ on the basis of the local and the global estimates $\lambda_i$ and $\gamma_i$. Note also that the fact that $y'_i$ and $y''_i$ belong to the interval $[x_{i-1}, x_i]$ can be proved in complete analogy with the theorem above.
Let us consider now STEP 4 of the scheme GSD. At this step, we should select an interval $[x_{t-1}, x_t]$ containing the next trial point $x^{k+1}$. As we have already done in Section 2, we consider two strategies: the rule selecting the interval corresponding to the minimal characteristic and the local improvement technique. Thus, STEP 4.1 and STEP 4.2 of the scheme GSD correspond exactly to STEP 4.1 and STEP 4.2 of the scheme GS from Section 2. The obvious difference consists in the fact that the characteristics $R_i$, $2 \le i \le k$, are calculated according to STEPS 3.1–3.4 of the scheme GSD.
Thus, by specifying STEP 2 and STEP 4 we obtain from the general scheme GSD the following six algorithms:

DKC: GSD with STEP 2.1 and STEP 4.1 (the method using the first Derivatives and the a priori Known Lipschitz Constant $M$).

DGE: GSD with STEP 2.2 and STEP 4.1 (the method using the first Derivatives and the Global Estimate of the Lipschitz constant $M$).

DLT: GSD with STEP 2.3 and STEP 4.1 (the method using the first Derivatives and the Local Tuning).

DKC_LI: GSD with STEP 2.1 and STEP 4.2 (the method using the first Derivatives, the a priori Known Lipschitz Constant $M$, and the Local Improvement technique).

DGE_LI: GSD with STEP 2.2 and STEP 4.2 (the method using the first Derivatives, the Global Estimate of the Lipschitz constant $M$, and the Local Improvement technique).

DLT_LI: GSD with STEP 2.3 and STEP 4.2 (the method using the first Derivatives, the Local Tuning, and the Local Improvement technique).
Let us consider now infinite trial sequences $\{x^k\}$ generated by methods belonging to the general scheme GSD and study convergence properties of the six algorithms introduced above.

Theorem. Assume that the objective function $f(x)$ satisfies condition (1.3), and let $x'$ ($x' \ne a$, $x' \ne b$) be any limit point of $\{x^k\}$ generated by the DKC, the DGE, or the DLT method. If the values $m_i$, $2 \le i \le k$, are bounded as follows:

(3.20) $v_i \le m_i < \infty, \quad 2 \le i \le k,$

where $v_i$ is from (3.15), then the following assertions hold:

1. convergence to $x'$ is bilateral, if $x' \in (a, b)$;

2. $f(x^k) \ge f(x')$ for all trial points $x^k$, $k \ge 1$;

3. if there exists another limit point $x''$, then $f(x'') = f(x')$;

4. if the function $f(x)$ has a finite number of local minima in $[a, b]$, then the point $x'$ is locally optimal;

5. (Sufficient conditions for convergence to a global minimizer.) Let $x^*$ be a global minimizer of $f(x)$ and $[x_{j(k)-1}, x_{j(k)}]$ be an interval containing this point during the course of the $k$-th iteration of one of the algorithms DKC, DGE, or DLT. If there exists an iteration number $k^*$ such that for all $k > k^*$ the inequality

(3.21) $m_{j(k)} \ge M_{j(k)}$

takes place for the interval $[x_{j(k)-1}, x_{j(k)}]$, where $M_{j(k)}$ is the Lipschitz constant of $f'(x)$ over this interval, and (3.20) holds for all the other intervals, then the set of limit points of the sequence $\{x^k\}$ coincides with the set of global minimizers of the function $f(x)$.
Proof. The proofs of assertions 1–5 are analogous to the proofs of Theorems 5.1–5.5 and Corollaries 5.1–5.6 from [25].
The fulfillment of the sufficient conditions for convergence to a global minimizer, i.e., (3.21), is evident for the algorithm DKC. For the methods DGE and DLT, its fulfillment depends on the choice of the reliability parameter $r$, and a theorem similar to the corresponding theorem of Section 2 can be proved for them in complete analogy. However, there exist particular cases where the structure of the objective function itself ensures that (3.21) holds. In the following theorem, sufficient conditions providing the fulfillment of (3.21) for the methods DGE and DLT are established for a particular class of objective functions. The theorem states that if $f(x)$ is quadratic in a neighborhood of the global minimizer $x^*$, then to ensure the global convergence it is sufficient that the methods place one trial point on the left of $x^*$ and one trial point on the right of $x^*$.

Theorem. If the objective function $f(x)$ is such that there exists a neighborhood of a global minimizer $x^*$ where
(3.22) 

where the coefficients of the quadratic are finite constants and $M$ is from (1.3), and trials have been executed at two points of this neighborhood lying on the left and on the right of $x^*$, then condition (3.21) holds for the algorithms DGE and DLT, and $x^*$ is a limit point of the trial sequences generated by these methods, provided that (3.20) is fulfilled for all the other intervals.

Proof. The proof is analogous to the proof of Theorem 5.6 from [25].
4 Numerical experiments
In this section, we present the results of numerical experiments executed on 120 test functions taken from the literature in order to compare the performance of the six algorithms described in Section 2 and the six algorithms from Section 3.
Two series of experiments have been executed. In both of them, the choice of the reliability parameter $r$ has been made with the step 0.1 starting from $r = 1.1$ (i.e., $r = 1.1$, $r = 1.2$, etc.) in order to ensure convergence to the global solution for all the functions taken into consideration in each series. It is well known (see detailed discussions on the choice of $r$ and its influence on the speed of Lipschitz global optimization methods in [21, 30, 33]) that, in general, for higher values of $r$ methods of this kind are more reliable but slower. It can be seen from the results of the experiments (see Tables 1–6) that the tested methods were able to find the global solution already for very low values of $r$. Then, since there is no sense in making a local improvement with an accuracy higher than the final required accuracy $\varepsilon$, in all the algorithms using the local improvement technique the accuracy $\delta$ from (2.13) has been fixed to the value $\varepsilon$. Finally, the technical parameter $\xi$ (used only when at the initial iterations a method executes trials at points with equal values) has been fixed to the same small value for all the methods using it.
In the first series of experiments, a set of 20 functions described in [11] has been considered. In Tables 1 and 2, we present numerical results for the six methods proposed to work with the problem (1.1), (1.2). In particular, Table 1 contains the numbers of trials executed by the algorithms with the accuracy $\varepsilon$ from (2.3); Table 2 presents the results obtained with a smaller value of $\varepsilon$. This choice of the reliability parameter $r$ was sufficient for the algorithms GE, LT, GE_LI, and LT_LI, while the exact values of the Lipschitz constant $L$ of the functions have been used in the methods PKC and PKC_LI.
Problem  PKC  GE  LT  PKC_LI  GE_LI  LT_LI 

1  149  158  37  37  35  35 
2  155  127  36  33  35  35 
3  195  203  145  67  25  41 
4  413  322  45  39  39  37 
5  151  142  46  151  145  53 
6  129  90  84  39  41  41 
7  153  140  41  41  33  35 
8  185  184  126  55  41  29 
9  119  132  44  37  37  35 
10  203  180  43  43  37  39 
11  373  428  74  47  43  37 
12  327  99  71  45  33  35 
13  993  536  73  993  536  75 
14  145  108  43  39  25  27 
15  629  550  62  41  37  37 
16  497  588  79  41  43  41 
17  549  422  100  43  79  81 
18  303  257  44  41  39  37 
19  131  117  39  39  31  33 
20  493  70  70  41  37  33 
Average  314.60  242.40  65.10  95.60  68.55  40.80 
Problem  PKC  GE  LT  PKC_LI  GE_LI  LT_LI 

1  1681  1242  60  55  55  57 
2  1285  1439  58  53  61  57 