Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization

In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.


Introduction
Consider the unconstrained optimization problem

(UP)  $\min_{x \in \mathbb{R}^n} f(x)$,  (1)

where $f: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. In general, the iterative algorithms for solving (UP) take the form

$x_{k+1} = x_k + \alpha_k d_k$,  (2)

where $x_k$, $\alpha_k$ and $d_k$ are the current iterate, a positive step length and a search direction, respectively. For simplicity, we denote $f_k = f(x_k)$ and $g_k = \nabla f(x_k)$. The main task in the iterative formula (2) is to choose the search direction $d_k$ and to determine the step length $\alpha_k$ along that direction. There are many classic choices of search direction, for instance the conjugate gradient direction $d_k = -g_k + \beta_k d_{k-1}$, where $\beta_k$ is a parameter (see [2] [3] [4]). The step length $\alpha_k$ is usually determined by a line search procedure, such as exact line search, Wolfe line search or Armijo line search. However, these line search procedures may involve extensive evaluation of the objective function and its gradient, which often becomes a significant burden for large-scale problems. Evidently, it is a good idea to avoid the line search procedure in algorithm design so as to reduce the number of function and gradient evaluations. Based on this consideration, some authors have studied algorithms without line search. Recently, several conjugate gradient algorithms without line search have been investigated. In [5], Sun and Zhang studied some well-known conjugate gradient methods without line search, for instance the Fletcher-Reeves, Hestenes-Stiefel, Dai-Yuan, Polak-Ribière and Conjugate Descent methods. In [6], Chen and Sun studied a two-parameter family of conjugate gradient methods without line search. In [7] [8], Wang and Zhu put forward conjugate gradient path methods without line search. Shi, Shen and Zhou proposed descent methods without line search in [9] and [10], respectively. Further, Zhou presented a steepest descent algorithm without line search in [11].
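As a minimal illustration of the generic scheme (2) without a line search, the following Python sketch uses the steepest descent direction and a fixed-formula step length $\alpha_k = -g_k^T d_k / (L \|d_k\|^2)$; both choices are assumptions made purely for illustration and are not the rules of Algorithm 2.1 below.

```python
import numpy as np

def descent_no_line_search(grad, x1, L=100.0, eps=1e-6, max_iter=10000):
    """Generic iteration x_{k+1} = x_k + alpha_k d_k with a
    fixed-formula step length (illustrative; no line search)."""
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:  # stopping test ||g_k|| < eps
            break
        d = -g                       # steepest descent direction (assumption)
        # Illustrative fixed step length: alpha_k = -g_k^T d_k / (L ||d_k||^2);
        # for d = -g this reduces to 1/L, with L an estimate of the
        # Lipschitz constant of the gradient.
        alpha = -g.dot(d) / (L * d.dot(d))
        x = x + alpha * d
    return x
```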
Inspired by the above literature, in this paper we extend the descent algorithm without line search of [10] to a more general case and discuss its global convergence. The rest of this paper is organized as follows. In Section 2, we describe the extended descent algorithm without line search. In Section 3, we analyze its global convergence; further, we generalize the search direction to a more general form and obtain the global convergence of the corresponding algorithm. Finally, numerical results are reported in Section 4.

Extended Descent Algorithm
To proceed, we first make the following assumptions [2].
(H1) The function $f$ is bounded below on the level set $\mathcal{L} = \{x \in \mathbb{R}^n : f(x) \le f(x_1)\}$, where $x_1$ is the starting point.
(H2) The gradient $g$ is Lipschitz continuous in an open convex set $\mathcal{B}$ that contains $\mathcal{L}$, i.e., there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in \mathcal{B}$.
Now we give the extended algorithm.
Algorithm 2.1. Given a starting point $x_1$, a positive constant $\epsilon$, and three parameters $\mu_1$, $\mu_2$ and $\rho$. Set $k := 1$.
Step 1. If $\|g_k\| < \epsilon$, then stop; otherwise go to Step 2.

Step 2. Compute $s_k$ by (5).
Step 3. Set the search direction $d_k$ by (6).
Step 4. Compute the step length $\alpha_k$ by the following rule, where $L_k$ satisfies that $\{L_k\}$ is a positive sequence with a sufficiently large upper bound.
Step 5. Set the next iterate $x_{k+1} = x_k + \alpha_k d_k$.
Step 6. Set $k := k + 1$, and go to Step 1.
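Since the displayed formulas (5)-(6) for $s_k$ and $d_k$ and the $L_k$-based step rule are specific to Algorithm 2.1, the following sketch fixes only the control flow of Steps 1-6; the callables `direction_rule` and `step_rule` are hypothetical placeholders that a caller would supply with the paper's formulas.

```python
import numpy as np

def algorithm_2_1(grad, x1, direction_rule, step_rule, eps=1e-6, max_iter=10000):
    """Control-flow skeleton of Algorithm 2.1 (Steps 1-6).

    direction_rule(k, x, g) -> d_k : stands in for formulas (5)-(6)
        defining s_k and d_k (hypothetical signature).
    step_rule(k, x, g, d) -> alpha_k : stands in for the L_k-based
        step-length rule of Step 4 (hypothetical signature).
    """
    x = np.asarray(x1, dtype=float)
    for k in range(1, max_iter + 1):
        g = grad(x)
        if np.linalg.norm(g) < eps:   # Step 1: stop when ||g_k|| < eps
            break
        d = direction_rule(k, x, g)   # Steps 2-3: search direction
        alpha = step_rule(k, x, g, d) # Step 4: step length, no line search
        x = x + alpha * d             # Step 5: x_{k+1} = x_k + alpha_k d_k
        # Step 6: k := k + 1 (handled by the loop)
    return x
```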
Remark 2.1. Note that the formulas of $s_k$ and $d_k$ in Algorithm 2.1, given by (5) and (6), are generalized forms of those in [10].

Global Convergence
This completes the proof.
Lemma 3.2 (Mean value theorem, see [1]). Suppose that the objective function $f$ is continuously differentiable. Then for any $x, d \in \mathbb{R}^n$ and $\alpha > 0$,
$f(x + \alpha d) = f(x) + \alpha g(x + \theta \alpha d)^T d$,
where $\theta \in (0, 1)$.
Using the induction principle and noting (13), (4), Lemma 3.1 and Lemma 3.3, the sequence $\{x_k\}$ is contained in $\mathcal{L}$ by (H1), and there exists a constant $f^*$ such that the sequence in (17) has an upper bound; hence (17) holds. On the other hand, by (9) and Lemma 3.1, the same analysis as in the above proof shows that (18) holds. The proof is completed.
Lemma 3.4 (see [12]). Suppose that the conditions in Theorem 3.1 hold.
In the following, we carry out our proofs in two cases.
Case 1. We complete the proof by reduction to absurdity. Suppose that (22) does not hold.
Case 2. The proof is the same as that in [10] and is omitted here.
It follows from the proofs of Case 1 and Case 2 that (22) holds. This completes the proof.
Remark 3.1. The search direction of Algorithm 2.1 can be extended to the more general form (26), where the function $\varphi(\alpha)$ satisfies the following conditions (see [10]): a) it is continuous and strictly monotonically increasing when $\alpha \ge 0$; etc. (see [10]). We denote by Algorithm 3.1 the version of Algorithm 2.1 in which $d_k$ is determined by (26). By using the proof technique of Theorem 3.2 above, it is easy to obtain its convergence theorem.
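As one concrete instance of condition a), the choice $\varphi(t) = t$ is continuous and strictly increasing for $t \ge 0$. The sketch below implements one plausible reading of a $\varphi$-scaled direction, $d_k = -\varphi(\|g_k\|)\, g_k / \|g_k\|$; this specific form of (26) is an assumption made for illustration, not necessarily the paper's exact formula.

```python
import numpy as np

def generalized_direction(g, phi=lambda t: t):
    """One illustrative reading of a phi-scaled direction (assumption):
    d_k = -phi(||g_k||) * g_k / ||g_k||.  With phi(t) = t, which is
    continuous and strictly increasing on [0, inf) as in condition a),
    this reduces to the steepest descent direction d_k = -g_k."""
    norm_g = np.linalg.norm(g)
    if norm_g == 0.0:
        return np.zeros_like(g)
    return -phi(norm_g) * g / norm_g
```

With $\varphi(t) = t$ the function returns $-g_k$, so the steepest descent direction is one member of this family.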

Numerical Results
In this section, we report some preliminary numerical experiments.The test problems and their initial values are drawn from [13].
In the numerical experiments, we take the parameter $L_k = 100$, and stop the iteration when the inequality $\|g_k\| < \epsilon$ of Step 1 holds. The numerical results are listed in Table 1, in which NI, NF and NG denote the total number of iterations, the total number of function evaluations and the total number of gradient evaluations, respectively. From Table 1, we can see that the extended algorithm has good numerical results.
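As a hedged usage illustration of the `algorithm_2_1` skeleton sketched in Section 2 (the test problems of [13] are not reproduced here), the following snippet runs it on a simple quadratic with an assumed steepest descent direction and an assumed constant step length $\alpha_k = 1/L_k$ with $L_k = 100$, matching the parameter value reported above.

```python
import numpy as np

# Stand-in test problem f(x) = 0.5 ||x||^2, so grad(x) = x
# (the actual test problems come from [13] and are not reproduced here).
grad = lambda x: x

x_star = algorithm_2_1(
    grad,
    x1=np.array([10.0, -7.0]),
    direction_rule=lambda k, x, g: -g,         # assumed: steepest descent
    step_rule=lambda k, x, g, d: 1.0 / 100.0,  # assumed: alpha_k = 1/L_k, L_k = 100
)
print(np.linalg.norm(grad(x_star)))  # below the 1e-6 stopping tolerance
```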

Conclusion
In this paper, we extended the descent algorithm without line search of [10] to a more general case and established its global convergence. Compared with [10], the extended algorithm has more effective numerical performance.
In the future, we will further study descent algorithms without line search and try to obtain new descent algorithms without line search that not only converge globally but also perform well numerically.

