A New Filled Function with One Parameter to Solve Global Optimization

In this paper, a new filled function with only one parameter is proposed. The main advantages of the new filled function are that it can be analyzed easily and that it can be approximated uniformly by a continuously differentiable function. Thus, a minimizer of the proposed filled function can be obtained easily by a local optimization algorithm. The obtained minimizer is then taken as the initial point for minimizing the objective function, and a better minimizer is found. By repeating this process, a global minimizer is eventually found. The results of numerical experiments show that the proposed filled function method is effective.


Introduction
Global optimization methods have wide applications in many fields, such as engineering, finance, management and decision science. The task of global optimization is to find a solution with the smallest or largest objective function value. In this paper, we mainly discuss methods for finding the global minimizer of the objective function. For problems with only one minimizer, many local optimization methods are available, for instance, the steepest descent method, the Newton method and the trust region method. However, many problems have multiple local minimizers, and most existing local methods are not applicable to them.
The difficulty in global optimization is to escape from the current local minimizer to a better one. One of the most efficient methods to deal with this issue is the filled function method, which was proposed by Ge [1] [2]. The generic framework of the filled function method can be described as follows:
1) An arbitrary point is taken as an initial point to minimize the objective function by a local optimization method, and a minimizer of the objective function is obtained.
2) Based on the current minimizer of the objective function, a filled function is designed, and a point near the current minimizer is used as an initial point to minimize the filled function. As a result, a minimizer of the filled function is found; it falls into a better region (called a basin) of the original objective function.
3) The minimizer of the filled function obtained in step 2 is taken as an initial point to minimize the objective function, and a better minimizer of the objective function is found.
4) By repeating steps 2 and 3, the number of remaining local minimizers is gradually reduced, and a global minimizer is eventually found.
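The four-step framework above can be sketched in code. The following sketch is a toy illustration only: the one-dimensional objective, the projected gradient descent routine, and in particular the filled function -(x - x_k*)^2 + A*min(f(x) - f(x_k*), 0)^2 are assumptions made for this demonstration, not the filled function proposed in this paper.

```python
import math

# Illustrative 1-D multimodal objective on the box [-10, 10]: its global
# minimizer is x = 0 (value -1) and it has higher local minimizers near
# x = +/-2.99 (value about -0.06).
def f(x):
    return 0.1 * x * x - math.cos(2.0 * x)

def fgrad(x):
    return 0.2 * x + 2.0 * math.sin(2.0 * x)

def descend(grad, x, lr, iters, lo=-10.0, hi=10.0):
    # Plain projected gradient descent as a stand-in for the local solver.
    for _ in range(iters):
        x = min(hi, max(lo, x - lr * grad(x)))
    return x

def filled_method(x0, A=1000.0, delta=0.1):
    xk = descend(fgrad, x0, 1e-2, 5000)                # step 1: local minimizer
    while True:
        fk = f(xk)

        def ffgrad(x):
            # Gradient of the illustrative filled function
            # FF(x) = -(x - xk)^2 + A * min(f(x) - fk, 0)^2:
            # it pushes away from xk until a lower basin is entered.
            d = min(f(x) - fk, 0.0)
            return -2.0 * (x - xk) + 2.0 * A * d * fgrad(x)

        best = xk
        for direction in (-1.0, 1.0):                   # step 2: minimize FF
            xp = descend(ffgrad, xk + direction * delta, 1e-4, 100000)
            if f(xp) < f(best):
                best = xp
        if best == xk:                                  # no lower basin found
            return xk
        xk = descend(fgrad, best, 1e-2, 5000)           # step 3; step 4: repeat

x_found = filled_method(3.0)   # start near the higher local minimizer
```

Starting near the shallow local minimizer at x ≈ 2.99, the filled phase escapes its basin and the method ends at the global minimizer x = 0.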
Although the filled function method is an efficient global optimization method and many filled functions have been proposed, the existing filled functions have drawbacks, such as containing more than one parameter to be controlled, being sensitive to the parameters, and ill-conditioning. For example, the filled functions proposed in [1] [2] contain exponential or logarithmic terms, which cause ill-conditioning; the filled functions proposed in [3] [4] are non-smooth, so the usual classical local optimization methods cannot be applied to them; and the filled functions proposed in [1] [5] [6] have more than one parameter, which is difficult to adjust. To overcome these shortcomings, a new filled function with only one parameter is presented. Although it is not a smooth function, it can be approximated uniformly by a continuously differentiable function, so its minimizer can be obtained easily. Based on this new filled function, a new filled function method is proposed.
The remainder of this paper is organized as follows. Related concepts of the filled function method are given in Section 2. In Section 3, a new filled function is proposed and its properties are analyzed; furthermore, an approximate function of the proposed filled function is given, and the method for avoiding numerical difficulty is presented. In Section 4, a new filled function method is proposed and numerical experiments on several test problems are reported. Finally, some concluding remarks are drawn in Section 5.

The Related Concepts
Consider the following global optimization problem with a box constraint:

(P)  min f(x), s.t. x ∈ Ω,

where Ω is a box in R^n. Generally, we assume that f(x) has only a finite number of minimizers and that the set of minimizers in Ω is denoted {x_i*, i = 1, 2, ..., I} (I is the number of minimizers of f(x)). Some useful concepts and notations are introduced as follows. x_1*: the current local minimizer of f(x). Assumption: all local minimizers of f(x) fall into the interior of Ω.

Definition 1
The basin [7] of f(x) at an isolated minimizer x_1* is a connected domain B(x_1*) containing x_1*, in which the steepest descent sequences of f(x) starting from any point converge to x_1*, while the minimization sequences of f(x) starting from any point outside B(x_1*) do not converge to x_1*. If x_1* is an isolated maximizer of f(x), the corresponding domain is defined as the hill of f(x) at x_1*. If there is another minimizer x_2* of f(x) with f(x_2*) < f(x_1*) (or f(x_2*) > f(x_1*)), then the basin B(x_2*) of f(x) at x_2* is said to be lower (or higher) than B(x_1*). The first concept of the filled function was introduced by Ge [1] [2], and since then different filled functions have been given (e.g., [8] [9]). A concept of the filled function that is easier to understand was presented in [8]; it can be described as follows.
Definition 2. A function FF(x) is said to be a filled function of f(x) at x_1* if it satisfies the following properties:
1) x_1* is a strict local maximizer of FF(x) on Ω;
2) FF(x) has no stationary point in the set S = {x ∈ Ω : f(x) ≥ f(x_1*), x ≠ x_1*};
3) if x_1* is not a global minimizer of f(x), then FF(x) has a minimizer in the set {x ∈ Ω : f(x) < f(x_1*)}.

A New Filled Function and Its Properties
Assume that a local minimizer x_1* of f(x) has been found. For problem (P), consider the filled function FF(x) defined in formula (1), where A is a parameter.
The following theorems show that the function in formula (1) is a filled function satisfying Definition 2.
Theorem 1. Suppose x_1* is a local minimizer of f(x). Then x_1* is a strict local maximizer of FF(x).
Theorem 2. If the set S = {x ∈ Ω : f(x) ≥ f(x_1*), x ≠ x_1*} is not empty, then FF(x) has no stationary point in S.
Theorem 3. Suppose x_1* is a local minimizer of f(x) but not a global minimizer. Then there exists another local minimizer x_2* of f(x) with f(x_2*) < f(x_1*), and by Theorems 1 and 2, FF(x) has a minimizer x′ with f(x′) < f(x_1*). Consequently, Theorem 3 is true. □
From Theorems 1, 2 and 3, we know that if there is a local minimizer x_2* of f(x) better than x_1*, then there exists a point x′ which is a minimizer of FF(x) and falls into a lower basin of f(x). Since FF(x) contains the non-smooth term max(0, ·), we approximate max(0, t) by (1/p) ln(1 + exp(pt)), where p is a positive parameter. It is obvious that

max(0, t) ≤ (1/p) ln(1 + exp(pt)) ≤ max(0, t) + (ln 2)/p,

so the inequality holds uniformly in t and the approximation error tends to zero as p tends to infinity. The resulting smoothed function, denoted FF_p(x), is continuously differentiable, so its minimizer can be obtained by classical local optimization methods. By doing so, the existing shortcomings can be overcome.
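The uniform approximation of max(0, t) can be checked numerically. The sketch below, with an illustrative choice p = 50, verifies that the gap between (1/p) ln(1 + exp(pt)) and max(0, t) never exceeds (ln 2)/p:

```python
import math

def smooth_max0(t, p):
    # Smooth approximation of max(0, t): (1/p) * ln(1 + exp(p*t)).
    return math.log(1.0 + math.exp(p * t)) / p

p = 50.0
for t in [i / 10.0 for i in range(-20, 21)]:
    err = smooth_max0(t, p) - max(0.0, t)
    # The gap is always in [0, ln(2)/p], so it vanishes uniformly
    # as p tends to infinity.
    assert 0.0 <= err <= math.log(2.0) / p + 1e-15
```

The worst case occurs at t = 0, where the gap equals (ln 2)/p exactly; away from zero the gap decays exponentially.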

A New Filled Function Algorithm
Based on the theorems and discussions in the previous section, a new filled function algorithm for finding a global minimizer of f(x) is now proposed, followed by some explanations. The details are as follows.
Step 1 (Initialization). Choose an initial value of A, a lower bound of A (denoted Lba), a sufficiently large p, and r. Some search directions d_i, i = 1, 2, ..., 2n, are also given in advance, where n is the dimension of the optimization problem. Set k := 1.
Step 2. Minimize f(x) starting from an initial point x_k ∈ Ω and obtain a minimizer x_k* of f(x).
1) For the minimization of f(x) and FF_p(x), a local optimization method needs to be selected first; in the proposed algorithm, the trust region method is employed.
2) In Step 4, δ needs to be selected carefully; in our algorithm, δ is selected to guarantee that it is greater than a threshold (e.g., the threshold is taken as 10^-3).

3) Step 5 means that if a local minimizer x′ of FF_p(x) is found in Ω with f(x′) < f(x_k*), then a better local minimizer of f(x) will be obtained by using x′ as the initial point to minimize f(x).

Numerical Experiment
In this section, the proposed algorithm is tested on some benchmark problems taken from the literature. Problem 1 (Two-dimensional function). The search domain is 0 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 10. Here c_i is the i-th element of the vector C, and a_ij and p_ij are the elements in the i-th row and the j-th column of the matrices A_n and P_n, respectively.
The known global minimizer is (0.2016, 0.1501, 0.4769, 0.2753, 0.3117, 0.6573)^T. Generally, in order to illustrate the performance of a filled function method, it is necessary to record the total number of evaluations of f(x) and FF_p(x) until the algorithm terminates. The numerical results of the proposed algorithm are summarized in Table 1 for the above 7 problems.
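The minimizer quoted above matches the classical six-dimensional Hartmann benchmark. The sketch below uses the standard published constants for that function, which are assumed here since the paper's own A_n, P_n and C are not reproduced in the text:

```python
import math

# Standard six-dimensional Hartmann test function (assumed constants).
ALPHA = (1.0, 1.2, 3.0, 3.2)
A = ((10, 3, 17, 3.5, 1.7, 8),
     (0.05, 10, 17, 0.1, 8, 14),
     (3, 3.5, 1.7, 10, 17, 8),
     (17, 8, 0.05, 10, 0.1, 14))
P = ((0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886),
     (0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991),
     (0.2348, 0.1451, 0.3522, 0.2883, 0.3047, 0.6650),
     (0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381))

def hartmann6(x):
    # f(x) = -sum_i alpha_i * exp(-sum_j a_ij * (x_j - p_ij)^2)
    total = 0.0
    for alpha_i, a_i, p_i in zip(ALPHA, A, P):
        inner = sum(a_ij * (x_j - p_ij) ** 2
                    for a_ij, x_j, p_ij in zip(a_i, x, p_i))
        total += alpha_i * math.exp(-inner)
    return -total

# The global minimizer quoted in the text (value about -3.322).
x_star = (0.2016, 0.1501, 0.4769, 0.2753, 0.3117, 0.6573)
```

Evaluating hartmann6 at x_star gives approximately -3.322, the known global minimum value of this benchmark.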
Additionally, the proposed algorithm is compared with the algorithm presented in [11]. The series of minimizers obtained by the two algorithms are recorded in Tables 2-14 for all testing problems. Some symbols used in the following tables are defined first. FABZ: the algorithm proposed in reference [11].
According to Tables 2-13, our algorithm is effective, and its behavior is affected by the initial value of A and by the selection of Lba. The larger the initial value of A, the fewer local minimizers are found and the lower the computational cost; meanwhile, if the function value of the current local minimizer is close to that of the global minimizer, a sufficiently small Lba is necessary, while a relatively large initial value of A causes the number of iterations to increase. Therefore, the initial value of A and Lba need to be selected carefully. The selection of Lba ensures the accuracy of the global minimizer, so a sufficiently small Lba and an appropriately small initial A should be selected, or the factor ρ in the algorithm should be taken as small as possible.

Concluding Remarks
The filled function method is an efficient approach for global optimization. However, existing filled functions have some drawbacks: some are non-differentiable, some contain more than one adjustable parameter, and some contain ill-conditioned terms. These drawbacks may cause the algorithm to fail or struggle in searching for the global optimal solution. To overcome these shortcomings, a new filled function with only one parameter is proposed in this paper. Although the proposed filled function is non-differentiable at some points, it can be approximated uniformly by a continuously differentiable function. The inherent shortcomings of the approximate function can be eliminated by a simple treatment. The effectiveness of the new filled function method is demonstrated by numerical experiments on several test optimization problems.
As p tends to infinity, the approximation error tends to zero. Therefore, by selecting a sufficiently large p, the corresponding term in the minimization will be large enough. However, if the value of p is too large, it will cause overflow of the function values FF_p(x). To prevent this situation, a shrinkage factor r is introduced into FF_p(x) as follows. First of all, it is necessary to estimate the magnitude U of the exponent involved; in order to prevent numerical difficulty, a large factor r = 10^(pU) can be taken. Finally, FF_p(x) can be rewritten with this factor.
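The overflow and its remedy can be illustrated on the smoothing term itself. In the sketch below, the naive evaluation of (1/p) ln(1 + exp(pt)) overflows for large p, while an algebraically equivalent form, which factors the large exponential out of the logarithm in the spirit of the shrinkage factor r, does not:

```python
import math

def ffp_term_naive(t, p):
    # Direct evaluation of (1/p) * ln(1 + exp(p*t)): exp(p*t) overflows
    # double precision once p*t exceeds roughly 709.
    return math.log(1.0 + math.exp(p * t)) / p

def ffp_term_scaled(t, p):
    # Equivalent rewriting: the large exponential is factored out of the
    # logarithm before evaluation, so no overflow can occur.
    z = p * t
    if z > 0:
        return t + math.log1p(math.exp(-z)) / p
    return math.log1p(math.exp(z)) / p

p = 1e4
try:
    ffp_term_naive(1.0, p)
    overflowed = False
except OverflowError:
    overflowed = True
```

With p = 10^4 the naive form raises OverflowError already at t = 1, while the factored form returns the correct value.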

Step 4. Use x = x_k* + δ d_i as an initial point for the minimization of FF_p(x). If no minimizer is found in Ω, go back to Step 4 with the next direction; otherwise, a minimizer x′ of FF_p(x) is obtained; go to Step 5.
Step 5. If f(x′) < f(x_k*), use x′ as an initial point to minimize f(x), obtain a better minimizer x_{k+1}*, set k := k + 1, and go back to Step 2; otherwise, go to Step 6.
Step 6. If A ≤ Lba, the algorithm stops and x_k* is taken as the global minimizer of f(x); otherwise, decrease A and go back to Step 3.
Before we go to the experiments, we have to give some explanations on the above filled function algorithm.
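The control flow of the steps can be summarized in a short skeleton. The local solver, the filled-function minimizer, the parameter reset, and the update A := ρ·A below are illustrative stand-ins, since the source does not fully specify them; only the loop structure follows the steps above.

```python
def filled_algorithm(f, local_min, filled_min, x0, A0=1.0, Lba=1e-6,
                     rho=0.1, delta=0.1, directions=(-1.0, 1.0)):
    # Skeleton of the algorithm's control flow (Steps 1-6).
    A = A0                                 # Step 1: initialization
    xk = local_min(f, x0)                  # Step 2: first local minimizer
    while True:
        improved = None
        for d in directions:               # Steps 3-4: minimize FF_p from
            xp = filled_min(f, xk, A, xk + delta * d)  # xk + delta * d_i
            if f(xp) < f(xk):
                improved = xp
                break
        if improved is not None:           # Step 5: better basin found
            xk = local_min(f, improved)
            A = A0                         # reset the parameter (assumed)
            continue
        if A <= Lba:                       # Step 6: stopping rule
            return xk
        A = rho * A                        # decrease A and try again

# Minimal smoke usage with stub solvers on f(x) = (x - 2)^2:
fq = lambda x: (x - 2.0) ** 2
result = filled_algorithm(fq,
                          local_min=lambda f, x: 2.0,        # exact solver stub
                          filled_min=lambda f, xk, A, x: x,  # no-op stub
                          x0=0.0)
```

With the stubs, no improvement is ever found, so the skeleton repeatedly decreases A until A ≤ Lba and returns the current minimizer.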

The global minimum solution satisfies f(x*) = 0 for all c. Problem 2 (Three-hump camel-back function). This function has 760 minimizers in total. The global minimum value is 0.
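For concreteness, the three-hump camel-back function is commonly written as f(x_1, x_2) = 2x_1^2 - 1.05x_1^4 + x_1^6/6 + x_1 x_2 + x_2^2 (standard form assumed). The sketch below shows plain gradient descent reaching the global minimizer from one start and getting trapped at a higher local minimizer (value about 0.2986) from another, which is exactly the situation the filled function method addresses:

```python
def camel3(x1, x2):
    # Three-hump camel-back function in its standard form.
    return 2.0*x1**2 - 1.05*x1**4 + x1**6/6.0 + x1*x2 + x2**2

def camel3_grad(x1, x2):
    return (4.0*x1 - 4.2*x1**3 + x1**5 + x2, x1 + 2.0*x2)

def gradient_descent(x1, x2, lr=0.01, iters=20000):
    # Plain gradient descent: it converges to whichever local minimizer's
    # basin contains the starting point.
    for _ in range(iters):
        g1, g2 = camel3_grad(x1, x2)
        x1, x2 = x1 - lr*g1, x2 - lr*g2
    return x1, x2

x_global = gradient_descent(0.2, 0.2)    # basin of the global minimizer (0, 0)
x_local = gradient_descent(1.8, -0.9)    # basin of a higher local minimizer
```

The second run stalls near (1.7476, -0.8738) with value about 0.2986, well above the global minimum 0.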

x_0: the initial point, which satisfies x_0 ∈ Ω.
x*: an approximate global minimizer obtained by the proposed algorithm.
Iter: the total number of function evaluations of f(x) and FF_p(x) until the algorithm terminates. The initial value of A is taken as 1 for all problems.
k: the iteration number in finding the k-th local minimizer of the objective function.
x_k*: the k-th local minimizer.
f_k*: the function value at x_k*.
FABO: the algorithm proposed in this paper.

Table 1 .
Numerical results of all testing problems.

Table 2 .
Computational results for problem 1 with c = 0.2.

Table 5 .
Computational results for problem 2 with initial point ( )

Table 6 .
Computational results for problem 2 with initial point ( )

Table 7 .
Computational results for problem 3 with initial point ( )

Table 8 .
Computational results for problem 3 with initial point ( )

Table 9 .
Computational results for problem 3 with initial point ( )

Table 10 .
Computational results for problem 4.

Table 11 .
Computational results for problem 5.

Table 12 .
Computational results for problem 6.

Table 13 .
Computational results for problem 7.

Table 14 .
Computational results for problem 7 with n = 6.