A hybrid conjugate gradient method for optimization problems

A hybrid of the Polak-Ribière-Polyak (PRP) method and the Wei-Yao-Liu (WYL) method is proposed for unconstrained optimization problems. The hybrid method possesses the following properties: (i) it inherits an important property of the well-known PRP method, namely the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, which prevents a sequence of tiny steps from occurring; (ii) the scalar $\beta_k \ge 0$ holds automatically; (iii) global convergence under a suitable line search rule is established for nonconvex functions. Numerical results show that the method is effective on the test problems.


INTRODUCTION
We consider the unconstrained optimization problem
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1.1)$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. It is well known that there are many methods for solving optimization problems (see [24,26,28-32,34], etc.). Among them, the conjugate gradient (CG) method is a powerful line search method because of its simplicity and its very low memory requirement, especially for large-scale optimization problems [22,23,27]: like the steepest descent method, it avoids the computation and storage of matrices associated with the Hessian of the objective function. The nonlinear CG method generates iterates for (1.1) by
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$$
where $x_k$ is the current iterate, $\alpha_k > 0$ is a steplength, and $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases}$$
where $\beta_k \in \mathbb{R}$ is a scalar that determines the different conjugate gradient methods [4,5,8,9,12,13,15,16,18,20,21,25,33], etc., and $g_k$ denotes the gradient of $f(x)$ at the point $x_k$. From the computational point of view, the well-known formula for $\beta_k$ is the PRP choice
$$\beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2},$$
where $g_k$ and $g_{k-1}$ are the gradients of $f(x)$ at the points $x_k$ and $x_{k-1}$, respectively, and $\|\cdot\|$ denotes the Euclidean norm of vectors.
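As an illustration, the PRP update $\beta_k^{PRP} = g_k^T(g_k - g_{k-1})/\|g_{k-1}\|^2$ and the direction recursion can be sketched in a few lines of Python (the function names are ours, not from the paper):

```python
import numpy as np

def beta_prp(g_k, g_prev):
    """PRP parameter: g_k^T (g_k - g_prev) / ||g_prev||^2."""
    return float(g_k @ (g_k - g_prev)) / float(g_prev @ g_prev)

def next_direction(g_k, d_prev, beta):
    """CG search direction d_k = -g_k + beta * d_{k-1}."""
    return -g_k + beta * d_prev
```

Note that when the step $x_k - x_{k-1}$ is tiny, $g_k \approx g_{k-1}$, so $\beta_k^{PRP} \approx 0$ and the direction reduces to $-g_k$: this is exactly the restart behaviour discussed below.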
Throughout this paper, we also denote $f(x_k)$ by $f_k$. Polak and Ribière [18] proved that this method with the exact line search is globally convergent when the objective function is convex. Powell [19] gave a counterexample showing that there exist nonconvex functions on which the PRP method does not converge globally even when the exact line search is used, and he suggested that $\beta_k$ should not be less than zero. Considering this suggestion, Gilbert and Nocedal [10] established the global convergence of the PRP method with $\beta_k = \max\{\beta_k^{PRP}, 0\}$ under suitable line search conditions. Wei, Yao, and Liu proposed the following modification of the PRP formula:
$$\beta_k^{WYL} = \frac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{\|g_{k-1}\|^2}.$$
The numerical results show that this method is competitive with the PRP method on the test problems of [17]. Under the sufficient descent condition, this method is globally convergent with the WWP line search. These observations indicate that the sufficient descent condition
$$g_k^T d_k \le -c \|g_k\|^2 \quad \text{for all } k,$$
where $c > 0$ is a constant, is very important for ensuring global convergence [1,2,10,14], and that the scalar $\beta_k \ge 0$ also plays a very important role [10,19]. This motivates us to propose a hybrid method combining the PRP method and the WYL method. The hybrid method possesses some of the better properties of the PRP method and the WYL method: (i) the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from occurring; (ii) the scalar $\beta_k \ge 0$ holds automatically.
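The nonnegativity of the WYL parameter $\beta_k^{WYL} = g_k^T(g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1})/\|g_{k-1}\|^2$ can be checked directly: by the Cauchy-Schwarz inequality $g_k^T g_{k-1} \le \|g_k\|\|g_{k-1}\|$, so the numerator satisfies $\|g_k\|^2 - \frac{\|g_k\|}{\|g_{k-1}\|} g_k^T g_{k-1} \ge 0$. A small numerical check of this reasoning (function name is ours):

```python
import numpy as np

def beta_wyl(g_k, g_prev):
    """WYL parameter: g_k^T (g_k - (||g_k||/||g_prev||) g_prev) / ||g_prev||^2.
    By the Cauchy-Schwarz inequality the numerator is nonnegative,
    hence beta_wyl >= 0 always holds."""
    nk, nprev = np.linalg.norm(g_k), np.linalg.norm(g_prev)
    return float(g_k @ (g_k - (nk / nprev) * g_prev)) / (nprev ** 2)
```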
The global convergence of the presented method with the WWP line search is established for nonconvex objective functions. Numerical results show that the given method is competitive with the PRP method and the WYL method. This paper is organized as follows. In the next section, the algorithm is stated. The global convergence is proved in Section 3, and the numerical results are reported in Section 4. The last section gives a conclusion.

ALGORITHM
Now we describe the given algorithm as follows. Here we call it Algorithm 1.
Algorithm 1 (The hybrid algorithm of the PRP method and the WYL method)
Step 0: Choose an initial point $x_0 \in \mathbb{R}^n$ and a tolerance $\epsilon > 0$; set $d_0 = -g_0$ and $k := 0$.
Step 1: If $\|g_k\| \le \epsilon$, then stop; otherwise go to the next step.
Step 2: Compute the step size $\alpha_k$ by some line search rule.
Step 3: Set $x_{k+1} = x_k + \alpha_k d_k$, compute $\beta_{k+1}$ by the new hybrid formula, and let $d_{k+1} = -g_{k+1} + \beta_{k+1} d_k$.
Step 4: Set $k := k + 1$ and go to Step 1.
i) The given method inherits the better property of the PRP method: if a small step is generated away from the solution, the search direction tends to the steepest descent direction, which prevents a sequence of tiny steps from happening.
ii) By the definition of the new formula, the scalar $\beta_k \ge 0$ holds automatically.
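The overall loop of Algorithm 1 can be sketched as follows. Since the exact hybrid formula is not reproduced in this excerpt, the choice $\beta_k = \max\{0, \min\{\beta_k^{PRP}, \beta_k^{WYL}\}\}$ and the Armijo backtracking line search below are our stand-in assumptions, not the paper's exact rules; the stand-in does satisfy both stated properties ($\beta_k \ge 0$, and $\beta_k \to 0$ as the step shrinks):

```python
import numpy as np

def hybrid_cg(f, grad, x0, eps=1e-6, max_iter=500):
    """Sketch of Algorithm 1 with a hypothetical hybrid beta:
    beta_k = max(0, min(beta_PRP, beta_WYL)). Armijo backtracking
    stands in for the paper's unspecified line search rule."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                      # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:            # Step 1: stopping test
            break
        if g @ d >= 0:                          # safeguard: restart if not descent
            d = -g
        gd = float(g @ d)
        alpha, delta = 1.0, 1e-4                # Step 2: Armijo backtracking
        while alpha > 1e-12 and f(x + alpha * d) > f(x) + delta * alpha * gd:
            alpha *= 0.5
        x_new = x + alpha * d                   # Step 3: new iterate
        g_new = grad(x_new)
        denom = float(g @ g)
        b_prp = float(g_new @ (g_new - g)) / denom
        r = np.linalg.norm(g_new) / np.linalg.norm(g)
        b_wyl = float(g_new @ (g_new - r * g)) / denom
        beta = max(0.0, min(b_prp, b_wyl))      # hypothetical hybrid choice
        d = -g_new + beta * d
        x, g = x_new, g_new                     # Step 4
    return x
```

On a simple strictly convex quadratic the sketch converges to the minimizer, which is enough to illustrate the structure of the iteration.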

THE GLOBAL CONVERGENCE
The following assumptions are often needed to prove the convergence of the nonlinear conjugate gradient methods (see [5,9,10,20,21] etc.).
Assumption 3.1. i) The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded, where $x_0$ is a given point. ii) In an open convex set $\Omega_0$ that contains $\Omega$, $f$ is differentiable and its gradient $g$ is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\| \quad \text{for all } x, y \in \Omega_0.$$

The global Convergence with the Weak Wolfe-Powell Line Search
The weak Wolfe-Powell (WWP) search rule is to find a step length $\alpha_k$ satisfying
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k$$
and
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k,$$
where $0 < \delta < \sigma < 1$. This line search technique is often used to study the convergence of conjugate gradient algorithms [6,27,34]. Summing up this inequality from $k = 0$ to $\infty$, and using Assumption 3.1 i), we can obtain this lemma. This completes the proof.
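A step length satisfying the two WWP conditions above can be found by the standard bracketing/bisection scheme; a minimal sketch in Python (function name and the bisection strategy are ours):

```python
import numpy as np

def wwp_step(f, grad, x, d, delta=0.1, sigma=0.9, max_iter=100):
    """Find alpha satisfying the weak Wolfe-Powell conditions
    f(x+a d) <= f(x) + delta*a*g^T d  and  g(x+a d)^T d >= sigma*g^T d,
    with 0 < delta < sigma < 1, by bracketing and bisection."""
    gd0 = float(grad(x) @ d)
    assert gd0 < 0, "d must be a descent direction"
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > f(x) + delta * alpha * gd0:
            hi = alpha                      # sufficient-decrease fails: shrink
        elif float(grad(x + alpha * d) @ d) < sigma * gd0:
            lo = alpha                      # curvature condition fails: grow
        else:
            return alpha                    # both WWP conditions hold
        alpha = (lo + hi) / 2.0 if np.isfinite(hi) else 2.0 * lo
    return alpha
```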
We will prove the global convergence of Algorithm 1 by contradiction: we assume that there exists a positive constant $\epsilon_0 > 0$ such that $\|g_k\| \ge \epsilon_0$ holds for all $k$, and then use (3.5) to deduce a contradiction, which yields our conclusion. Combining this with (3.7), we get this lemma. The proof is complete.
The following property (*) was introduced by Gilbert and Nocedal [10]; it pertains to $\beta_k$ under the sufficient descent condition. The WYL formula also has this property. Now we show that property (*) pertains to our method as well.
Property (*). Suppose that $0 < \gamma \le \|g_k\| \le \bar{\gamma}$ holds for all $k$. The method is said to have property (*) if there exist constants $b > 1$ and $\lambda > 0$ such that $|\beta_k| \le b$ for all $k$, and $\|s_{k-1}\| \le \lambda$ implies $|\beta_k| \le \frac{1}{2b}$, where $s_{k-1} = x_k - x_{k-1}$. It follows from (3.12) and the definitions of $b$ and $\lambda$ that the formula $\beta_k^{WYL}$ also has property (*).
Using the definition of $\beta_k$, it is not difficult to prove the following result; we state it here and omit the proof. Taking the norm on both sides of the above equality and using (3.13), we obtain the desired bound. Therefore, the conclusion of this theorem holds. This completes the proof.

NUMERICAL RESULTS
In this section, we report some numerical experiments. In Figure 1, "WYL", "PRP", and "MPRP-WYL" stand for the WYL method, the PRP method, and the new method, respectively. Figure 1 shows the performance of these methods relative to the total number of function and gradient evaluations (NFN, listed in Table 1), using the performance profiles of Dolan and Moré [7]. It is easy to see that MPRP-WYL is predominant among these three methods, and the new method solves about 99% of the test problems successfully. The PRP method is better than the WYL method for $1 \le t \le 1.2$, and the WYL method is better than the PRP method for $1.2 \le t \le 6$. Moreover, the PRP method solves about 98% of the test problems successfully and the WYL method about 99%, respectively. In a word, the given method is competitive with the other two methods and the hybrid formula is notable.
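The performance profiles of Dolan and Moré used above compare solvers by the fraction of problems each solves within a factor $\tau$ of the best cost. A minimal sketch of this computation (function name is ours; failures are encoded as infinite cost):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile. T[i, s] is the cost (e.g. NFN) of
    solver s on problem i, with np.inf marking a failure. Returns an array
    rho[s, j] = fraction of problems where solver s is within factor
    taus[j] of the best solver on that problem."""
    best = T.min(axis=1, keepdims=True)         # best cost per problem
    ratios = T / best                           # performance ratios r_{i,s}
    return np.array([[float(np.mean(ratios[:, s] <= tau)) for tau in taus]
                     for s in range(T.shape[1])])
```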

CONCLUSION
This paper gives a hybrid conjugate gradient method for solving unconstrained optimization problems. Global convergence for nonconvex functions with the WWP line search is established. The numerical results show that the given method is competitive with the other standard conjugate gradient methods on the test problems.
For further research, we should study the convergence of the new algorithm under other line search rules. Moreover, more numerical experiments and testing environments (such as [3]) for large practical problems should be carried out in the future.