Differential Evolution Using Opposite Point for Global Numerical Optimization

The Differential Evolution (DE) algorithm is arguably one of the most powerful stochastic optimization algorithms and has been widely applied in various fields. Global numerical optimization is an important and extremely difficult task in the optimization domain, and it is in great demand in many practical applications. This paper proposes an opposition-based DE algorithm for global numerical optimization, called GNO2DE. In GNO2DE, first, the opposite point method is employed to exploit the existing search space and improve the convergence speed. Second, two candidate DE strategies, "DE/rand/1/bin" and "DE/current to best/2/bin", are chosen at random so as to combine their respective advantages and enhance the search ability. In order to reduce the number of control parameters, the algorithm uses an adaptive crossover rate that is dynamically tuned during the evolutionary process. Finally, GNO2DE is validated on a set of benchmark test functions for global numerical optimization. Compared with several existing algorithms, the performance of GNO2DE is superior to or no worse than that of these algorithms in terms of final accuracy, convergence speed, and robustness. In addition, we specifically compare the opposition-based DE algorithm with the DE algorithm without the opposite point method, and with the DE algorithms using only "DE/rand/1/bin" or "DE/current to best/2/bin", respectively.


Introduction
Global numerical optimization problems arise in almost every field, such as industry and engineering design, applied and social science, and statistics and business. The aim of global numerical optimization is to find the global optima of a generic objective function. In this paper, we are most interested in the following global numerical minimization problem [1,2]:

minimize f(x), x = (x_1, x_2, ..., x_n),

where f(x) is the objective function to be minimized, x is the real-parameter variable vector, and l = (l_1, ..., l_n) and u = (u_1, ..., u_n) are the lower and upper bounds of the variables, respectively, such that x_i ∈ [l_i, u_i]. Many real-world global numerical optimization problems have objective functions that are non-differentiable, non-continuous, non-linear, noisy, flat, or random, or that have many local minima, multiple dimensions, etc. The major challenge of global numerical optimization is that the problems to be optimized have many local optima and multiple dimensions. Such problems are extremely difficult to optimize, and finding reliable global optima is hard [3,4]. Therefore, increasing requirements for solving global numerical optimization in various application domains have encouraged many researchers to seek a reliable global numerical optimization algorithm. However, over the last decades this problem has remained intractable, at least theoretically [5].
In global numerical optimization, the traditional methods can usually be classified into two main categories [5,6]: deterministic and probabilistic global numerical optimization methods. During the global numerical optimization process, the first stage is usually to find specific heuristic information about the problem. Most deterministic methods rely on this heuristic information to escape from local minima. On the other hand, most probabilistic methods rely on a probability to determine whether or not the search should depart from the neighborhood of a local minimum. Evolutionary algorithms (including the genetic algorithm (GA) [7], evolution strategy (ES) [8], genetic programming (GP) [9], and evolutionary programming (EP) [10]) are inspired by the evolution of nature and are relatively recent optimization methods. These algorithms have the potential to overcome the limitations of traditional global numerical optimization methods, mainly in terms of unknown system parameters, multiple local minima, non-differentiability, or multiple dimensions, etc. [5,11].
Lately, some new methods for global numerical optimization have gradually been introduced. Particle Swarm Optimization (PSO) was originally proposed by J. Kennedy as a simulation of social behavior, and it was initially introduced as an optimization method in 1995 [12]. PSO has become a member of the wide category of Swarm Intelligence methods for solving global numerical optimization problems [13-15]. Differential Evolution (DE) was introduced by Storn and Price in 1995 and developed to optimize real-parameter functions [3,16,17]. DE mainly uses distance and direction information from the current population to guide its further search, and it has three main advantages: 1) finding the true global minimum regardless of the initial parameter values; 2) fast convergence; 3) using few control parameters. In addition, DE is simple, fast, easy to use, very easily adaptable, and useful for optimizing multimodal search spaces [18-22]. Recently, DE has been shown to produce superior performance, and to perform better than GA and PSO on some global numerical optimization problems [13,14]. Therefore, DE is very promising for solving global numerical optimization problems.
This paper proposes an opposition-based DE algorithm for global numerical optimization (GNO2DE). This algorithm employs the opposite point method to exploit the existing search space and speed up convergence [21-24]. Usually, different problems require different settings of the control parameters. Generally, introducing adaptation into an evolutionary algorithm can improve its ability to solve a general class of problems without user interaction. In order to improve adaptation and reduce the number of control parameters, GNO2DE uses a dynamic mechanism to tune the crossover rate CR during the evolutionary process. Moreover, GNO2DE enhances the search ability by randomly selecting a candidate from the strategies "DE/rand/1/bin" and "DE/current to best/2/bin". Numerical experiments clearly show that GNO2DE is feasible and effective.
The remainder of this paper is organized as follows. Section 2 briefly introduces the basic idea of the DE algorithm. Section 3 describes the proposed GNO2DE algorithm in detail. Section 4 presents the experimental setup adopted and provides an analysis of the experimental results obtained from our empirical study. Finally, our conclusions and some possible paths for future research are provided in Section 5.

The Classical DE Algorithm
The DE algorithm is a population-based stochastic optimization algorithm that, like many evolutionary algorithms such as genetic algorithms, uses three similar genetic operators: crossover, mutation, and selection [7]. The main difference in generating better solutions is that genetic algorithms mainly rely on crossover, while DE mainly relies on the mutation operation. The DE algorithm uses the mutation operation as a search mechanism and the selection operation to direct the search toward the prospective regions of the search space. The DE algorithm also uses a non-uniform crossover that can take child vector parameters from one parent more often than from others. By using the components of the existing population members to generate trial vectors, the recombination (i.e., crossover) operator efficiently shuffles information about successful combinations, enabling the search for better solutions [3,16,17].
A global numerical optimization problem consisting of n parameters can be represented by an n-dimensional vector. In DE, a population of NP solution vectors is randomly created at the start, where NP ≥ 4. The population is successively improved by applying the mutation, crossover, and selection operators [13,25,26].

Randomly Initializing Population
Like many other evolutionary algorithms, the DE algorithm starts with an initial population, which is randomly generated when no preliminary knowledge about the solution is available. In DE, let x_{i,G} stand for the i-th individual of population P_G (population size NP) at generation G. The initial population is generated as

x_{i,j,0} = l_j + rand_j[0,1] · (u_j − l_j), i = 1, ..., NP, j = 1, ..., n,

where NP is the population size, n is the number of variables, rand_j is a uniformly distributed random number in the range [0,1], and x_{i,j,0} is the j-th variable of the i-th individual at the initial generation, which is initialized within the j-th range [l_j, u_j].
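As an illustration, the initialization formula above can be sketched in Python with NumPy (a minimal sketch; the function and parameter names are ours, not the paper's):

```python
import numpy as np

def init_population(NP, lower, upper, rng=None):
    """Random initialization: x_{i,j,0} = l_j + rand_j[0,1) * (u_j - l_j)."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # One uniform draw per individual and per variable, scaled into [l_j, u_j]
    return lower + rng.random((NP, lower.size)) * (upper - lower)
```

Each row of the returned array is one individual of the initial population P_0.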

Mutation Operation
In the mutation phase, DE randomly selects three distinct individuals from the current population. For each target vector x_{i,G}, the i-th mutant vector is generated from the three selected individuals as follows:

v_{i,G} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G}),

where r1, r2, r3 ∈ {1, 2, ..., NP} are randomly chosen integers, mutually different and also different from the running index i, so that NP must be greater than or equal to four to allow for this condition. The scaling factor F is a control parameter of the DE algorithm, which controls the amplification of the differential variation (x_{r2,G} − x_{r3,G}). F is a real constant in the range [0, 2] and is often set to 0.5 in real applications [27].
The above strategy is called "DE/rand/1/bin"; it is not the only variant of DE mutation that has been proven useful for real-valued optimization. In order to classify the variants of DE mutation, the notation DE/x/y/z is introduced, where 1) x specifies the vector to be mutated, which currently can be "rand" (a randomly chosen population vector) or "best" (the vector of lowest cost in the current population); 2) y is the number of difference vectors used; 3) z denotes the crossover scheme; two crossover schemes are often used, namely "bin" (i.e., binomial recombination) and "exp" (i.e., exponential recombination). The following differential DE schemes are often used in global optimization [3]:

"DE/best/1/bin": v_{i,G} = x_{best,G} + F · (x_{r1,G} − x_{r2,G});
"DE/current to best/2/bin": v_{i,G} = x_{i,G} + F · (x_{best,G} − x_{i,G}) + F · (x_{r1,G} − x_{r2,G});
"DE/best/2/bin": v_{i,G} = x_{best,G} + F · (x_{r1,G} − x_{r2,G}) + F · (x_{r3,G} − x_{r4,G});
"DE/rand/2/bin": v_{i,G} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G}) + F · (x_{r4,G} − x_{r5,G});

where x_{best,G} is the best individual of the current population P_G, and the scaling factor F is the control parameter of the DE algorithm.
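The mutation schemes above can be gathered into a single helper (a sketch; the scheme strings and function name are our own labels, and index selection assumes NP ≥ 6 so that five distinct donors exist):

```python
import numpy as np

def mutate(pop, i, best, F, scheme, rng):
    """Return a mutant vector for target index i under common DE schemes."""
    NP = len(pop)
    # Donor indices: mutually different and different from the running index i
    idx = rng.permutation([k for k in range(NP) if k != i])
    r1, r2, r3, r4, r5 = idx[:5]
    if scheme == "DE/rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if scheme == "DE/best/1":
        return pop[best] + F * (pop[r1] - pop[r2])
    if scheme == "DE/current-to-best/2":
        return pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
    if scheme == "DE/best/2":
        return pop[best] + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if scheme == "DE/rand/2":
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    raise ValueError(f"unknown scheme: {scheme}")
```

Note that with F = 0, "DE/current-to-best/2" degenerates to the target vector itself, which is a quick sanity check on the formula.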

Crossover Operation
In order to increase the diversity of the perturbed parameter vectors, the crossover operator is introduced. The new trial individual is generated by recombining the mutant vector and the original target vector according to the following formula:

u_{i,j,G} = v_{i,j,G} if rand_j[0,1] ≤ CR or j = j_rand; otherwise u_{i,j,G} = x_{i,j,G},

where rand_j[0,1] is a uniform random number and j_rand ∈ {1, ..., n} is a randomly chosen index that guarantees the trial vector differs from the target vector in at least one component. The crossover rate CR is a real constant in the range [0,1] and one of the control parameters of the DE algorithm. After crossover, if one or more of the variables in the new solution are outside their boundaries, a repair rule is applied to bring them back into the feasible region (see the boundary handling described in Section 3) [25].
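The binomial ("bin") crossover above can be sketched as follows (names are ours):

```python
import numpy as np

def binomial_crossover(target, mutant, CR, rng):
    """Binomial crossover: take each component from the mutant with
    probability CR; component j_rand is always taken from the mutant,
    so the trial differs from the target in at least one component."""
    n = target.size
    j_rand = rng.integers(n)
    mask = rng.random(n) <= CR
    mask[j_rand] = True
    return np.where(mask, mutant, target)
```

With CR = 0 the trial inherits exactly one mutant component (at j_rand); with CR = 1 the trial equals the mutant.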

Selection Operation
After mutation and crossover, the selection operation decides whether the new individual survives into the next generation (G + 1):

x_{i,G+1} = u_{i,G} if f(u_{i,G}) ≤ f(x_{i,G}); otherwise x_{i,G+1} = x_{i,G},

where f(·) is the fitness function; that is, the trial vector replaces the target vector in generation G + 1 only if it is no worse.
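The greedy selection rule is a one-liner (minimization is assumed, matching the problem statement in the Introduction):

```python
def select(target, trial, f):
    """Greedy DE selection: the trial replaces the target only if it is
    no worse under the objective f (minimization)."""
    return trial if f(trial) <= f(target) else target
```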

The General Framework of the DE Algorithm
The above operations (i.e., mutation, crossover, and selection) are repeated NP (population size) times to generate the next population from the current population. Successive generations are produced in this way until the predefined termination criterion is satisfied. The main steps of the DE algorithm are given in Figure 1.
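Putting initialization, mutation, crossover, and selection together, the classical DE/rand/1/bin loop of Figure 1 can be sketched as follows (a simplified sketch: boundary handling is done by clipping here, which is one of the repair options discussed later; all names are ours):

```python
import numpy as np

def de(f, lower, upper, NP=20, F=0.5, CR=0.9, T=200, seed=0):
    """Classical DE/rand/1/bin for minimizing f over box bounds (sketch)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    n = lower.size
    pop = lower + rng.random((NP, n)) * (upper - lower)   # initialization
    fit = np.array([f(x) for x in pop])
    for _ in range(T):
        for i in range(NP):
            # mutation: three mutually distinct donors, all different from i
            r1, r2, r3 = rng.permutation([k for k in range(NP) if k != i])[:3]
            v = pop[r1] + F * (pop[r2] - pop[r3])
            v = np.clip(v, lower, upper)                  # simple boundary repair
            # binomial crossover with a forced component j_rand
            mask = rng.random(n) <= CR
            mask[rng.integers(n)] = True
            u = np.where(mask, v, pop[i])
            # greedy selection
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    b = fit.argmin()
    return pop[b], fit[b]
```

On a small sphere function this loop converges to near zero well within the default budget.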

The Proposed GNO2DE Algorithm
Similar to all population-based optimization algorithms, two main steps are distinguishable in DE: population initialization, and producing new generations by evolutionary operations such as selection, crossover, and mutation. GNO2DE enhances both steps using the opposite point method. The opposite point method has been proven to be an effective addition to evolutionary algorithms for solving global numerical problems. When evaluating a point for a given problem, simultaneously computing its opposite point provides another chance of finding a point closer to the global optimum. The opposite point is defined as follows [21-24]: let x_{i,G} = (x_{i,1,G}, ..., x_{i,n,G}) be the i-th point of population P_G (population size NP) at generation G in the n-dimensional space. Its opposite point x̆_{i,G} is completely defined by its components

x̆_{i,j,G} = l_j + u_j − x_{i,j,G}, j = 1, ..., n,

where l_j and u_j are the lower and upper limits of the variable x_{i,j,G}, respectively.

Generating the Initial Population Using the Opposite Point Method
Generally, population-based evolutionary algorithms randomly generate the initial population within the boundaries of the parameter variables. In order to improve the quality of the initial population, we can obtain fitter starting candidate solutions by utilizing opposite points, even when there is no a priori knowledge about the solution(s). The procedure for generating the initial population using the opposite point method is given as follows: Step 1: Randomly initialize the starting population P_0 (population size NP).
Step 2: Calculate the opposite population of P_0 using the opposite point method, and obtain the opposite population OP_0.
Step 3: Select the NP fittest individuals from P_0 ∪ OP_0 as the initial population P_0.
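Steps 1-3 can be sketched as follows (NumPy; function names are ours):

```python
import numpy as np

def opposite(pop, lower, upper):
    """Opposite points, computed component-wise: x̆_j = l_j + u_j − x_j."""
    return lower + upper - pop

def opposition_init(f, NP, lower, upper, rng):
    """Opposition-based initialization: random population, its opposite
    population, then keep the NP fittest of the union (minimization)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    P0 = lower + rng.random((NP, lower.size)) * (upper - lower)
    union = np.vstack([P0, opposite(P0, lower, upper)])
    fit = np.array([f(x) for x in union])
    return union[np.argsort(fit)[:NP]]
```

Note that the opposite of a point at the center of the box is the point itself, and the union always contains 2·NP candidates, so the selected population is at least as fit as a purely random one.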

Evolving the Population Using the Opposite Point Method
By applying a similar approach to the current population, the evolutionary process can be forced to jump to new candidate solutions, which may be fitter than the current ones. After the new population is generated by mutation, crossover, and selection, the opposite population is calculated, and the NP fittest individuals are selected from the union of the current population and the opposite population. The following steps describe the procedure: Step 1: The offspring population P_{G+1} of the current population P_G is generated by performing the successive DE operations (i.e., mutation, crossover, and selection).
Step 2: Calculate the opposite population of P_{G+1} using the opposite point method, and obtain the opposite population OP_{G+1}.
Step 3: Select the NP fittest individuals from P_{G+1} ∪ OP_{G+1} as the next population P_{G+1}.
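This generation-jumping step can be sketched under the same conventions as the initialization sketch (names are ours):

```python
import numpy as np

def opposition_jump(f, pop, lower, upper):
    """After the DE operators produce the offspring population, evaluate its
    opposite population and keep the NP fittest of the union (minimization)."""
    NP = len(pop)
    union = np.vstack([pop, lower + upper - pop])   # offspring + opposites
    fit = np.array([f(x) for x in union])
    return union[np.argsort(fit)[:NP]]
```

Because the selection is over the union, the surviving population can never be worse than the offspring population alone.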

Adaptive Crossover Rate CR
In DE, the aim of crossover is to improve the diversity of the population, and the control parameter CR (i.e., the crossover rate) controls this diversity. Too little diversity easily results in premature convergence, while too much diversity reduces the convergence speed. In conventional DE, the crossover rate CR is a constant value in the range [0,1]. Inspired by nonuniform mutation, this paper introduces an adaptive crossover rate CR, defined as follows [28]:

CR = 1 − r^((1 − t/T)^b),

where r is a uniform random number from [0,1], and t and T are the current generation number and the maximal generation number, respectively. The parameter b is a shape parameter determining the degree of dependency on the iteration number and is usually set to 2 or 3; in this study, b is set to 3. This property of CR causes the crossover operator to search the solution space uniformly when t is small, and very locally when t is large; it increases the probability of generating a new number close to its successor relative to a purely random choice. Therefore, at the early stage, GNO2DE uses a bigger crossover rate CR to search the solution space, preserving the diversity of solutions and preventing premature convergence; at the later stage, GNO2DE employs a smaller crossover rate CR, enhancing the local search and preventing the fitter solutions already found from being destroyed. The relation of generation vs. crossover rate CR is plotted in Figure 2.
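The schedule can be sketched as follows. Since the printed formula did not survive reproduction here, this sketch uses the nonuniform-mutation-style form CR = 1 − r^((1 − t/T)^b), which matches the behavior described (roughly uniform in [0,1] early in the run, shrinking to 0 as t approaches T); treat the exact form as an assumption:

```python
import random

def adaptive_cr(t, T, b=3, rng=random):
    """Adaptive crossover rate: large on average early in the run and
    shrinking toward 0 as t -> T. NOTE: reconstructed form, not verbatim
    from the paper."""
    r = rng.random()                      # uniform random number in [0, 1)
    return 1.0 - r ** ((1.0 - t / T) ** b)
```

At t = T the exponent is 0 and CR is exactly 0, so late-stage crossover changes only the forced component j_rand of each trial vector.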

Adaptive Mutation Strategies
In Subsection 2.2, we described several useful mutation schemes, of which "DE/rand/1/bin" and "DE/current to best/2/bin" are the most often used in practical applications, mainly due to their good performance [17,19]. To overcome their respective disadvantages and exploit their complementary advantages, GNO2DE randomly chooses a mutation scheme from the two candidates "DE/rand/1/bin" (i.e., Equation (3)) and "DE/current to best/2/bin" (i.e., Equation (5)); the new mutant vector v_{i,G} is generated according to the following rule [27]: if rand[0,1] ≤ 0.5, Equation (3) is applied; otherwise, Equation (5) is applied, where rand[0,1] is a uniform random number from the range [0,1].
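A sketch of this random strategy selection (the uniform 0.5 threshold corresponds to choosing between the two candidates with equal probability; names are ours):

```python
import numpy as np

def mutant_gno2de(pop, i, best, F, rng):
    """Per-mutation random choice between the two candidate schemes:
    DE/rand/1 and DE/current-to-best/2."""
    r1, r2, r3 = rng.permutation([k for k in range(len(pop)) if k != i])[:3]
    if rng.random() <= 0.5:
        # DE/rand/1
        return pop[r1] + F * (pop[r2] - pop[r3])
    # DE/current-to-best/2
    return pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
```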

Approaching of Boundaries
In the given optimization problem, it has to be ensured that variable values do not lie outside their limits. Several possibilities exist for this task: 1) positions beyond the boundaries are regenerated until they fall within the boundaries; 2) boundary-exceeding values are replaced by random numbers in the feasible region; 3) the boundary is approached asymptotically by setting the boundary-offending value to the middle between the old position and the boundary [29]. In GNO2DE, after crossover, if one or more of the variables in the new vector w_{i,G+1} are outside their boundaries, each violated variable value w_{i,j,G+1} is either reflected back from the violated boundary (i.e., replaced by 2·l_j − w_{i,j,G+1} when the lower bound is violated, or by 2·u_j − w_{i,j,G+1} when the upper bound is violated) or set to the corresponding boundary value (l_j or u_j), according to the repair rule of [30]; the choice is made by comparing a uniformly distributed random number in the range [0,1] with a probability p.
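A sketch of this mixed reflect-or-clamp repair rule (the exact printed rule did not survive reproduction here, so the mixing probability p and the final safety clip are our assumptions):

```python
import numpy as np

def repair(w, lower, upper, p=0.5, rng=None):
    """Repair out-of-bound components: with probability p, reflect the value
    back off the violated boundary; otherwise set it to that boundary.
    NOTE: p and the final clip are assumptions, not the paper's exact rule."""
    rng = np.random.default_rng() if rng is None else rng
    w = w.copy()
    for j in range(w.size):
        if w[j] < lower[j]:
            w[j] = 2 * lower[j] - w[j] if rng.random() < p else lower[j]
        elif w[j] > upper[j]:
            w[j] = 2 * upper[j] - w[j] if rng.random() < p else upper[j]
    # Guard against a reflection overshooting the opposite boundary
    return np.clip(w, lower, upper)
```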

The Framework of the GNO2DE Algorithm
DE creates new candidate solutions by combining the parent individual with several other individuals of the same population. A candidate replaces the parent only if it has a better fitness value. The initial population is selected randomly in a uniform manner between the lower and upper bounds defined for each variable; these bounds are specified by the user according to the nature of the problem. After initialization, DE performs mutation, crossover, selection, etc., in an evolutionary process. The general framework of the GNO2DE algorithm is described in Figure 3.
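Combining the pieces above, the overall GNO2DE loop of Figure 3 can be sketched as follows (a simplified sketch: clipping is used for boundary repair, the CR schedule is the reconstruction discussed earlier, and all names are ours):

```python
import numpy as np

def gno2de(f, lower, upper, NP=100, F=0.5, T=500, b=3, seed=0):
    """Sketch of GNO2DE: opposition-based initialization, a randomly chosen
    mutation scheme per individual, an adaptive CR, and opposition-based
    generation jumping. Minimization over box bounds."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    n = lower.size

    def keep_fittest(P):
        union = np.vstack([P, lower + upper - P])       # P and its opposites
        fit = np.array([f(x) for x in union])
        order = np.argsort(fit)[:NP]
        return union[order], fit[order]

    pop, fit = keep_fittest(lower + rng.random((NP, n)) * (upper - lower))
    for t in range(T):
        CR = 1.0 - rng.random() ** ((1.0 - t / T) ** b)  # reconstructed schedule
        best = fit.argmin()
        new = pop.copy()
        for i in range(NP):
            r1, r2, r3 = rng.permutation([k for k in range(NP) if k != i])[:3]
            if rng.random() <= 0.5:                      # DE/rand/1
                v = pop[r1] + F * (pop[r2] - pop[r3])
            else:                                        # DE/current-to-best/2
                v = pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
            v = np.clip(v, lower, upper)
            mask = rng.random(n) <= CR                   # binomial crossover
            mask[rng.integers(n)] = True
            u = np.where(mask, v, pop[i])
            if f(u) <= fit[i]:                           # greedy selection
                new[i] = u
        pop, fit = keep_fittest(new)                     # generation jumping
    return pop[fit.argmin()], fit.min()
```

Even with a small budget, this sketch reliably minimizes a low-dimensional sphere function.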

Benchmark Functions
In order to test the robustness and effectiveness of GNO2DE, we use a well-known test set of 23 benchmark functions [1,2,31-33]. This relatively large set is necessary in order to reduce bias in evaluating algorithms. The complete description of all these functions and the corresponding parameters is given in Table 1 and the Appendix. The functions fall into three categories of differing complexity: 1) unimodal functions (f1–f7), which are relatively easy to optimize, although the difficulty increases with the dimensionality of the problem (see Figure 4); 2) multimodal functions with many local minima (f8–f13), which represent the most difficult class of problems for many optimization algorithms (see Figure 5); 3) multimodal functions containing only a few local optima (f14–f23) (see Figure 6). It is interesting to note that some functions have unique features: f6 is a discontinuous step function with a single optimum, and f7 is a noisy function involving a uniformly distributed random variable within the range [0,1). For unimodal functions, the convergence rate is our main interest, as optimizing them is not a hard problem. For multimodal functions, the quality of the final results is more important because it reflects the ability of the designed algorithm to escape from local optima.
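For reference, a few representative members of this suite in their commonly used forms (the exact definitions used in the paper are those of Table 1 and the Appendix; the indices below follow the common 23-function numbering and should be checked against Table 1):

```python
import numpy as np

def f1_sphere(x):
    """Unimodal sphere function; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f6_step(x):
    """Discontinuous step function with a single optimum region."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.floor(x + 0.5) ** 2))

def f9_rastrigin(x):
    """Multimodal function with many local minima; global minimum 0 at 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```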

Discussion of Parameter Settings
In order to set up the parameters, we first discuss the convergence characteristic of each function of dimensionality 30 or lower (the parameter settings used are listed under Figure 6). From Figure 8, we know that function f8 approximately requires 300,000 FES to achieve convergence, and the convergence speed on function f5 is relatively slow under the above parameters. Therefore, in order to investigate the effect of the control parameter F on convergence, some experimental results are given in Figures 13-18. First, the control parameter F is set to the values 0.4, 0.5, 0.6, and 0.7 on functions f1 and f2, and the convergence curves are presented in Figures 13 and 14. From these figures, we can observe that GNO2DE achieves convergence for each of the above values of F within 100,000 FES, while the convergence speed is fastest when F is set to 0.5. For function f5, we set the control parameter F to 0.5, 0.6, 0.7, and 0.8, respectively; the convergence graph is given in Figure 15.
From Figure 15, it is clearly shown that the convergence speed is fastest when F is set to 0.6. In addition, we present the convergence graphs for functions f8, f13, and f20 in Figures 16-18, respectively, with the control parameter F set to 0.5, 0.6, 0.7, and 0.8. From these figures, we can intuitively see that the convergence speed is fastest when F is set to 0.5. Therefore, for most functions, GNO2DE shows good performance when F is set to 0.5 or 0.6. According to the above discussion and analysis, we set up the corresponding experimental parameters in Tables 2-4. Table 2 presents the parameters used by GNO2DE, GNO2DE-A, and GNO2DE-B for functions of dimensionality 30 or lower, where GNO2DE-A and GNO2DE-B employ only the DE scheme "DE/rand/1/bin" or "DE/current to best/2/bin", respectively. Table 3 presents the parameters used by GNODE (the variant without the opposite point method) for functions of dimensionality 30 or lower. Table 4 presents the parameter settings used by GNO2DE for functions of dimensionality 100.

Comparison of GNO2DE with GNODE
In this section, we compare GNO2DE with GNODE in terms of several performance indices, using the parameter settings presented in Tables 2 and 3. The experimental results are summarized in detail in Tables 5 and 6, with the better results highlighted in boldface. The optimized objective function values over 30 independent runs are arranged in ascending order, and the 15th value in the list is taken as the median optimized function value.
According to Tables 5 and 6, we find that for most test functions the performance of GNO2DE is superior to or no worse than that of GNODE in terms of the min value (i.e., the best result), the median value (i.e., the median result), the max value (i.e., the worst result), the mean value (i.e., the mean result), and the std value (i.e., the standard deviation result), while the FES actually consumed by GNO2DE is essentially less than that of GNODE, although both are nominally allotted the same 100,000 FES. In addition, the global optimum of function f18 found by GNO2DE is f18(x) = 2.99999999999992, at x = (0.00000000061668, −0.99999999932877).
According to Table 5, for f5, the performance of GNO2DE is obviously better than that of GNODE in terms of the max, mean, and std values, while the performance of GNODE is slightly better than that of GNO2DE in terms of the min and median values. For f8, the median, max, mean, and std values of GNODE are better than those of GNO2DE, while the min value of GNODE is close to that of GNO2DE. For f12, the min, median, max, and mean values of GNO2DE are better than those of GNODE, while the std value of GNO2DE is worse than that of GNODE; the reason is that GNO2DE fails to find the optimal solution in a very few of the 30 runs. For function f13, the min, median, max, mean, and std values of GNO2DE are slightly worse than those of GNODE.
As shown in Table 6, for f16, the min and median values of GNO2DE are similar to those of GNODE, while the max, mean, and std values of GNO2DE are somewhat worse than those of GNODE; this is because GNO2DE fails to obtain the min value in one or two of the 30 runs. For f20, the min and max values obtained by GNO2DE are the same as those of GNODE, while the median value obtained by GNO2DE is worse than that of GNODE; accordingly, the mean and std values of GNO2DE are also worse than those of GNODE. Both GNO2DE and GNODE have a tendency to get stuck in local optima here. The global optimum of function f20 found by GNO2DE is f20(x) = −3.33539215295525, at x = (0.20085810809731, 0.15013171771783, 0.47865329178970, 0.27652528463205, 0.31191293322300, 0.65702016661775).
In conclusion, the performance of GNO2DE is relatively stable and better than or at least no worse than that of GNODE. The reason is that GNO2DE uses the opposite point method to provide another chance of finding a solution closer to the global numerical optimum, without much additional time.

Comparison of GNO2DE with GNO2DE-A, and GNO2DE-B
In this section, we compare GNO2DE ("DE/rand/1/bin" and "DE/current to best/2/bin") with GNO2DE-A ("DE/rand/1/bin") and GNO2DE-B ("DE/current to best/2/bin") in terms of the best result (i.e., the min value), the mean result (i.e., the mean value), and the standard deviation result (i.e., the std value). The parameter settings of GNO2DE, GNO2DE-A, and GNO2DE-B are given in Table 2. Table 7 shows that for each function of f1–f4, the optimal solution can be found by all three algorithms, while the mean and std values of GNO2DE differ slightly from those of GNO2DE-A and GNO2DE-B. For f5, the mean and std values of GNO2DE are obviously better than those of GNO2DE-A and GNO2DE-B, while the min value of GNO2DE-A is the worst among the three algorithms. For f7, the min value of GNO2DE-A is the best among the three algorithms, while its mean and std values are worse than or no better than those of GNO2DE and GNO2DE-B. For f8, the min, mean, and std values of GNO2DE-B are obviously worse than those of GNO2DE and GNO2DE-A, while GNO2DE-A obtains better mean and std values than GNO2DE. For f12, the min value of GNO2DE-A is the worst among the three algorithms, while the std value of GNO2DE is the best. For f13, the min value of GNO2DE-A is the worst among the three algorithms, while the mean and std values of GNO2DE are the best.
Table 8 shows that for f15, the min and mean values are comparable among the three algorithms, while the std value of GNO2DE is the best, that of GNO2DE-B the second best, and that of GNO2DE-A the worst. For f16, the min and mean values are similar among the three algorithms, while the std value of GNO2DE-B is the best, that of GNO2DE the second best, and that of GNO2DE-A the worst.
Therefore, from the above analysis, we know that the performance of GNO2DE is more stable than that of GNO2DE-A and GNO2DE-B. This is because GNO2DE employs the two schemes "DE/rand/1/bin" and "DE/current to best/2/bin" to search the solution space. On the whole, GNO2DE improves the search ability.

Comparison of GNO2DE with Some State-of-the-Art Algorithms
In this section, we compare GNO2DE with DE [3], ODE/2 [2], SOA [31], FEP [32], opt-IA [1], and CLPSO [15] in terms of the mean result (i.e., the mean value), the standard deviation result (i.e., the std value), and the number of fitness evaluations (FES); the detailed comparison is summarized in Table 10. In sum, the mean and standard deviation results of GNO2DE are not worse than or superior to those of DE, ODE/2, SOA, FEP, opt-IA, and CLPSO on the test set of benchmark functions. GNO2DE uses the opposite point method, employs the two DE schemes "DE/rand/1/bin" and "DE/current to best/2/bin", and introduces a non-uniform crossover rate. These techniques are beneficial to enhancing the performance of GNO2DE.

Experimental Results of 100-Dimensional Functions
In this section, the statistical results of GNO2DE on 100-dimensional functions are given in Table 11, using the parameter setup of Table 4. The optimized objective function values over 30 independent runs are arranged in ascending order, and the 15th value in the list is taken as the median optimized function value. Table 11 clearly shows that GNO2DE can find the optimum or a near optimum of each 100-dimensional function of f1–f13, and that GNO2DE obtains stable performance on each function of f1–f7 and f9–f11, while it performs slightly worse on f8, f12, and f13. Therefore, GNO2DE also performs well when used for solving high-dimensional global numerical optimization problems.

Conclusion and Future Work
This paper introduces an opposition-based DE algorithm for global numerical optimization (GNO2DE). GNO2DE uses opposition-based learning to exploit the existing search space and improve the convergence speed, and employs adaptive DE schemes and non-uniform crossover to enhance the adaptive search ability. Numerical results show that GNO2DE outperforms some state-of-the-art algorithms. However, there are still some possible directions for future work: 1) to further improve the self-adaptation of the control parameters, such as the scaling factor F; 2) to test GNO2DE on higher dimensional global numerical optimization problems; 3) to introduce some local search and heuristic techniques to speed up the convergence and escape from local optima, etc.

Figure 1. The generic framework of the DE algorithm.

Figure 2. The graph of generation vs. crossover rate CR.

Figure 5. Graph of one multimodal function with many local minima.

Figure 6. Graph of one multimodal function containing only few local optima.

The parameters used by GNO2DE for functions of dimensionality 30 or lower are listed as follows: the control parameter F = 0.5, the population size NP = 100, and the maximal generation number T = 500 for functions f1–f4 and f21–f23, and T = 1500 for functions f5–f20, respectively. For convenience of illustration, we plot the convergence graphs for the benchmark test functions f1–f23 in Figures 7-12. These figures clearly show that GNO2DE achieves good convergence for each function of f1–f4, f6–f7, f9–f20, and f21–f23 when evaluated for 100,000 FES (the number of fitness evaluations).

Figure 14. Convergence curve of f2 for each F value.
Figure 15. Convergence curve of f5 for each F value.

Figure 16. Convergence curve of f8 for each F value.

Figure 17. Convergence curve of f13 for each F value.
Figure 18. Convergence curve of f20 for each F value.

Table 2. The parameter settings used by GNO2DE, GNO2DE-A, and GNO2DE-B for functions of dimensionality 30 or lower.

Table 10. Comparison between GNO2DE, DE, ODE/2, SOA, FEP, opt-IA, and CLPSO on the benchmark functions.
For f5, compared with DE, GNO2DE is slightly worse in terms of the mean and std values, while the 200,000 FES of DE is twice that of GNO2DE, and the performance of GNO2DE is clearly better than that of SOA, FEP, opt-IA, and CLPSO. For f6, the mean and std values of GNO2DE are similar to those of DE, SOA, FEP, opt-IA, and CLPSO, while the 100,000 FES used by GNO2DE is the least among these methods. For f8, the mean, std, and FES values of ODE/2 are not worse than or better than those of the other methods.

Table 11. Experimental results of 100-dimensional functions.