In this study, we derive a new scale parameter φ for the conjugate gradient (CG) method for solving large-scale unconstrained optimization problems. The resulting method satisfies the sufficient descent condition, and its global convergence is proved under the strong Wolfe line search conditions. Our numerical results show that the proposed method is effective and robust compared with some well-known algorithms.

In unconstrained optimization, we minimize an objective function that depends on real variables with no restrictions at all on the values of these variables. The unconstrained optimization problem is stated as:

\min_{x \in \mathbb{R}^n} f(x) \quad (1)

where x ∈ R^n is a real vector with n ≥ 1 components and f : R^n → R is a smooth function whose gradient g is available. Starting from an initial point, the iterates are generated by:

x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots \quad (2)

where α_k is a positive step size obtained by carrying out a one-dimensional search, known as the line search. In this work we use the strong Wolfe conditions:

f(x_k + \alpha_k d_k) \le f(x_k) + \sigma \alpha_k g_k^T d_k, \quad (3)

|g(x_k + \alpha_k d_k)^T d_k| \le \delta |g_k^T d_k| \quad (4)

where 0 < σ < δ < 1. The goal is to find an approximation of α_k such that the descent property is satisfied, without searching too far along the direction when x_k is far from the solution. Thus, with the strong Wolfe line search conditions we inherit the advantages of the exact line search at a low computational cost.
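As an illustration, a step size satisfying (3) and (4) can be found by a simple bracketing/bisection scheme. This is a minimal sketch in Python; the function name `strong_wolfe`, the default constants, and the bracketing strategy are our assumptions, not part of the proposed method.

```python
import numpy as np

def strong_wolfe(f, grad, x, d, sigma=1e-4, delta=0.4, max_iter=60):
    """Search for a step size alpha satisfying (3) and (4).

    sigma and delta play the roles of the constants in (3)-(4),
    with 0 < sigma < delta < 1; d must be a descent direction.
    """
    g0d = grad(x) @ d                       # g_k^T d_k, assumed negative
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > f(x) + sigma * alpha * g0d:
            hi = alpha                      # condition (3) fails: step too long
        elif abs(grad(x + alpha * d) @ d) > delta * abs(g0d):
            if grad(x + alpha * d) @ d < 0:
                lo = alpha                  # slope still very negative: too short
            else:
                hi = alpha                  # overshot the one-dimensional minimizer
        else:
            return alpha                    # both (3) and (4) hold
        alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha                            # fallback after max_iter bisections
```

On a quadratic along the steepest descent direction, the accepted step lands within a δ-neighborhood of the exact one-dimensional minimizer, which is what makes (4) a useful substitute for exact line search.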

The search direction d k is generated by:

d_k = \begin{cases} -g_k, & k = 1 \\ -g_k + \beta_k d_{k-1}, & k > 1 \end{cases} \quad (5)

where g_k is the gradient of f at the point x_k and β_k is the conjugate gradient coefficient. Different choices of the parameter β_k correspond to different conjugate gradient methods. The most popular formulas for β_k are the Hestenes-Stiefel (HS), Fletcher-Reeves (FR), Polak-Ribiere-Polyak (PRP), Conjugate Descent (CD), Liu-Storey (LS), and Dai-Yuan (DY) methods.
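For concreteness, the classical coefficients listed in the table below can all be computed from the same four vectors. An illustrative Python sketch; the helper name `beta_classic` is ours, and y_k = g_{k+1} − g_k, s_k = x_{k+1} − x_k as in the text.

```python
import numpy as np

def beta_classic(method, g_new, g_old, d_old, s_old):
    """Classical CG coefficients; y_k = g_{k+1} - g_k."""
    y = g_new - g_old
    if method == "FR":
        return (g_new @ g_new) / (g_old @ g_old)     # Fletcher-Reeves
    if method == "PRP":
        return (g_new @ y) / (g_old @ g_old)         # Polak-Ribiere-Polyak
    if method == "HS":
        return (g_new @ y) / (y @ s_old)             # Hestenes-Stiefel
    if method == "DY":
        return (g_new @ g_new) / (y @ s_old)         # Dai-Yuan
    if method == "CD":
        return -(g_new @ g_new) / (g_old @ d_old)    # Conjugate Descent
    if method == "LS":
        return -(g_new @ y) / (g_old @ s_old)        # Liu-Storey
    raise ValueError(f"unknown method: {method}")
```

Note that all six formulas coincide when consecutive gradients are orthogonal and exact line searches are used, which is the quadratic case discussed next.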

These methods are identical when f is a strongly convex quadratic function and the line search is exact, since the gradients are mutually orthogonal and the parameters β_k in these methods coincide. When applied to general nonlinear functions with inexact line searches, however, the behavior of these methods is markedly different.

An important class of conjugate gradient methods is the class of hybrid conjugate gradient algorithms. These hybrid computational schemes often perform better than the classical conjugate gradient methods. They are defined by (2) and (5), where the parameter β_k is computed as a projection or as a convex combination of different conjugate gradient methods.

Some well-known hybrid conjugate gradient methods are summarized in the tables below.

We propose a new hybrid CG method based on a combination of the MMWU parameter

\beta_k^{MMWU} = \frac{\|g_{k+1}\|^2}{\|d_k\|^2} \quad (6)

and

\beta_k^{RMAR} = \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} \quad (7)

No. | Formula | Authors
---|---|---
1 | \beta_k^{HS} = \dfrac{g_{k+1}^T y_k}{y_k^T s_k} | Hestenes and Stiefel (HS)
2 | \beta_k^{FR} = \dfrac{g_{k+1}^T g_{k+1}}{g_k^T g_k} | Fletcher and Reeves (FR)
3 | \beta_k^{PRP} = \dfrac{g_{k+1}^T y_k}{g_k^T g_k} | Polak-Ribiere-Polyak (PRP)
4 | \beta_k^{CD} = -\dfrac{g_{k+1}^T g_{k+1}}{g_k^T d_k} | Conjugate Descent (CD)
5 | \beta_k^{LS} = -\dfrac{g_{k+1}^T y_k}{g_k^T s_k} | Liu and Storey (LS)
6 | \beta_k^{DY} = \dfrac{g_{k+1}^T g_{k+1}}{y_k^T s_k} | Dai and Yuan (DY)
7 | \beta_k^{new} = \dfrac{g_{k+1}^T y_k}{d_k^T y_k} - \alpha_k^2 \dfrac{d_k^T g_k}{y_k^T y_k} | Al-Naemi and Hamed

No. | Formula | Authors
---|---|---
1 | \beta_k^{C} = (1-\theta_k)\beta_k^{HS} + \theta_k \beta_k^{DY} | Andrei
2 | \beta_k^{Ac} = (1-\theta_k)\beta_k^{PRP} + \theta_k \beta_k^{DY} | Yan
3 | \beta_k^{N} = (1-\theta_k)\beta_k^{FR} + \theta_k \beta_k^{MMWU} | Li and Sun
4 | \beta_k^{hyb} = (1-\theta_k)\beta_k^{LS} + \theta_k \beta_k^{DY} | Liu, J.K. and Li, S.J.
5 | \beta_k^{hyb} = (1-\theta_k)\beta_k^{LS} + \theta_k \beta_k^{FR} | Djordjević
6 | \beta_k^{hyb} = (1-\theta_k)\beta_k^{HS} + \theta_k \beta_k^{FR} | Djordjević
7 | \beta_k^{hyb} = (1-\theta_k)\beta_k^{LS} + \theta_k \beta_k^{FR} | Djordjević
8 | \beta_k^{C} = (1-\theta_k)\beta_k^{HS} + \theta_k \beta_k^{CD} | Xiuyun et al.
9 | \beta_k^{LSDY} = (1-\gamma_k)\beta_k^{LS} + \gamma_k \beta_k^{DY} | Abdullahi and Ahmad
10 | \beta_k^{HCG} = \lambda_k \beta_k^{DY} + (1-\lambda_k)\beta_k^{HS} | Livieris, Tampakas and Pintelas

We define the parameter β_k in the proposed method by:

\beta_k^{FG} = (1-\varphi_k)\beta_k^{MMWU} + \varphi_k \beta_k^{RMAR} \quad (8)

Observe that if φ_k = 0, then β_k^{FG} = β_k^{MMWU}, and if φ_k = 1, then β_k^{FG} = β_k^{RMAR}.
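A direct transcription of (6)-(8) in Python; the function names are ours, and the weight φ is passed in explicitly (its computation is discussed in the next section).

```python
import numpy as np

def beta_mmwu(g_new, d_old):
    # (6): ||g_{k+1}||^2 / ||d_k||^2
    return (g_new @ g_new) / (d_old @ d_old)

def beta_rmar(g_new, d_old):
    # (7): (||g_{k+1}||^2 - (||g_{k+1}||/||d_k||) g_{k+1}^T d_k) / ||d_k||^2
    gn, dn = np.linalg.norm(g_new), np.linalg.norm(d_old)
    return (gn**2 - (gn / dn) * (g_new @ d_old)) / dn**2

def beta_fg(phi, g_new, d_old):
    # (8): convex combination with weight phi in [0, 1]
    return (1.0 - phi) * beta_mmwu(g_new, d_old) + phi * beta_rmar(g_new, d_old)
```

By the Cauchy-Schwarz inequality, β_k^{RMAR} always lies between 0 and 2β_k^{MMWU}, which is the bound (19) used later in the descent analysis.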

By choosing an appropriate value of the parameter φ_k in the convex combination, the search direction d_k of our algorithm is not only the Newton direction, but also satisfies the well-known conjugacy condition proposed by Dai and Liao.

This paper is organized as follows. In Section 2 we introduce our new hybrid conjugate gradient method (HFG), derive the parameter φ_k, and state the algorithm. In Section 3 we prove that the generated directions satisfy the sufficient descent condition under the strong Wolfe line search conditions. The global convergence of the proposed method is established in Section 4. Numerical results are reported in Section 5.

In this section, we describe the new proposed hybrid conjugate gradient method. To obtain a sufficient descent direction, we compute φ_k as follows: we combine β_k^{MMWU} and β_k^{RMAR} in a convex combination in order to obtain an efficient algorithm for unconstrained optimization.

The direction d k + 1 is generated by the rule

d_{k+1} = -g_{k+1} + \beta_k^{HFG} d_k \quad (9)

where β_k^{HFG} is defined in (8). The iterates x_1, x_2, x_3, ⋯ of our method are computed by the recurrence (2), where the step size α_k is determined according to the strong Wolfe conditions (3) and (4).

The scale parameter φ_k satisfies 0 ≤ φ_k ≤ 1 and will be determined in a specific way described below. Observe that if φ_k = 0, then β_k^{HFG} = β_k^{MMWU}, and if φ_k = 1, then β_k^{HFG} = β_k^{RMAR}. On the other hand, if 0 < φ_k < 1, then β_k^{HFG} is a proper convex combination of β_k^{MMWU} and β_k^{RMAR}.

From (8) and (9) it is obvious that:

d_{k+1} = \begin{cases} -g_{k+1}, & k = 1 \\ -g_{k+1} + (1-\varphi_k)\dfrac{\|g_{k+1}\|^2}{\|d_k\|^2} d_k + \varphi_k \dfrac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} d_k, & k > 1 \end{cases} \quad (10)

Our motivation is to select the parameter φ_k in such a manner that the direction d_{k+1} given in (10) equals the Newton direction d_{k+1}^N = -\nabla^2 f(x_{k+1})^{-1} g_{k+1}. Therefore

-\nabla^2 f(x_{k+1})^{-1} g_{k+1} = -g_{k+1} + (1-\varphi_k)\frac{\|g_{k+1}\|^2}{\|d_k\|^2} d_k + \varphi_k \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} d_k \quad (11)

Now, multiplying (11) by s_k^T ∇²f(x_{k+1}) from the left, we get

-s_k^T g_{k+1} = -s_k^T \nabla^2 f(x_{k+1}) g_{k+1} + (1-\varphi_k)\frac{\|g_{k+1}\|^2}{\|d_k\|^2} s_k^T \nabla^2 f(x_{k+1}) d_k + \varphi_k \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} s_k^T \nabla^2 f(x_{k+1}) d_k

Therefore, in order to obtain an algorithm suitable for large-scale problems, we assume that the pair (s_k, y_k) satisfies the secant equation

\nabla^2 f(x_{k+1}) s_k = y_k. \quad (12)

From (12), we get

s_k^T \nabla^2 f(x_{k+1}) = y_k^T.

Writing φ_k^{FG} for the value of φ_k so obtained, we get

-s_k^T g_{k+1} = -y_k^T g_{k+1} + \frac{\|g_{k+1}\|^2}{\|d_k\|^2} y_k^T d_k - \varphi_k^{FG} \frac{\|g_{k+1}\| (g_{k+1}^T d_k)}{\|d_k\|^3} (y_k^T d_k)

after some algebra, we get

\varphi_k^{FG} = \frac{(s_k^T g_{k+1} - y_k^T g_{k+1}) \|d_k\|^3 + \|g_{k+1}\|^2 \|d_k\| (y_k^T d_k)}{\|g_{k+1}\| (g_{k+1}^T d_k)(y_k^T d_k)} \quad (13)
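Formula (13), together with the truncation to [0, 1] used later in Step 5 of Algorithm HFG, can be sketched as follows. The function name and the safeguard against a vanishing denominator are our additions.

```python
import numpy as np

def phi_fg(g_new, g_old, d_old, s_old):
    """phi_k from (13), truncated to [0, 1] as in Step 5 of Algorithm HFG."""
    y = g_new - g_old
    dn = np.linalg.norm(d_old)
    gn = np.linalg.norm(g_new)
    num = (s_old @ g_new - y @ g_new) * dn**3 + gn**2 * dn * (y @ d_old)
    den = gn * (g_new @ d_old) * (y @ d_old)
    if abs(den) < 1e-30:
        # (13) is undefined when g_{k+1}^T d_k or y_k^T d_k vanishes;
        # falling back to 0 selects the pure MMWU direction (our choice).
        return 0.0
    return min(1.0, max(0.0, num / den))
```
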

Now we state the complete hybrid conjugate gradient method (HFG), which possesses some nice properties of both the conjugate gradient method and the Newton method.

Algorithm HFG

Step 1: Select x_0 ∈ R^n and ε > 0; set k = 0. Compute f(x_0) and g_0 = ∇f(x_0); set d_0 = −g_0.

Step 2: Test the stopping criterion: if ‖g_k‖ ≤ ε, then stop.

Step 3: Compute α_k by the strong Wolfe line search conditions (3) and (4).

Step 4: Compute x_{k+1} = x_k + α_k d_k and g_{k+1} = g(x_{k+1}); compute s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k.

Step 5: Compute φ_k by (13); if φ_k ≥ 1, set φ_k = 1, and if φ_k ≤ 0, set φ_k = 0.

Step 6: Compute β_k^{FG} by (8).

Step 7: Generate d = −g_{k+1} + β_k^{FG} d_k.

Step 8: If the Powell restart criterion |g_{k+1}^T g_k| ≥ 0.2‖g_{k+1}‖² is satisfied, then set d_{k+1} = −g_{k+1}; otherwise set d_{k+1} = d.

Step 9: Set k = k + 1 and go to Step 2.
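The nine steps above can be assembled into a minimal sketch (Python with NumPy). The bisection line search and the extra descent safeguard in Step 8 are our simplifications for illustration, not part of the paper's FORTRAN implementation.

```python
import numpy as np

def hfg_minimize(f, grad, x0, eps=1e-6, max_iter=1000, sigma=1e-4, delta=0.4):
    """Sketch of Algorithm HFG (Steps 1-9); helper names are ours."""

    def line_search(x, d):
        # Bisection search for a step satisfying the strong Wolfe conditions (3)-(4).
        g0d = grad(x) @ d
        lo, hi, a = 0.0, np.inf, 1.0
        for _ in range(60):
            if f(x + a * d) > f(x) + sigma * a * g0d:
                hi = a                              # Armijo (3) fails
            elif abs(grad(x + a * d) @ d) > delta * abs(g0d):
                if grad(x + a * d) @ d < 0:
                    lo = a                          # curvature (4): step too short
                else:
                    hi = a                          # curvature (4): overshot
            else:
                return a
            a = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        return a

    x = x0.astype(float)
    g = grad(x)
    d = -g                                          # Step 1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                # Step 2
            break
        a = line_search(x, d)                       # Step 3
        x_new = x + a * d                           # Step 4
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        gn, dn = np.linalg.norm(g_new), np.linalg.norm(d)
        # Step 5: phi_k from (13), truncated to [0, 1]
        den = gn * (g_new @ d) * (y @ d)
        if abs(den) < 1e-30:
            phi = 0.0
        else:
            num = (s @ g_new - y @ g_new) * dn**3 + gn**2 * dn * (y @ d)
            phi = min(1.0, max(0.0, num / den))
        # Step 6: beta from (8), built from (6) and (7)
        b_mmwu = gn**2 / dn**2
        b_rmar = (gn**2 - (gn / dn) * (g_new @ d)) / dn**2
        beta = (1.0 - phi) * b_mmwu + phi * b_rmar
        d_new = -g_new + beta * d                   # Step 7
        # Step 8: Powell restart |g_{k+1}^T g_k| >= 0.2 ||g_{k+1}||^2;
        # the second clause is our safeguard to keep d_new a descent direction.
        if abs(g_new @ g) >= 0.2 * gn**2 or g_new @ d_new >= 0:
            d_new = -g_new
        x, g, d = x_new, g_new, d_new               # Step 9
    return x
```

On a small strictly convex quadratic the iterates converge to the unique minimizer, which is a quick sanity check for the implementation.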

In this section, we apply the following theorem to show that the search direction d_k generated by the hybrid FG method satisfies the sufficient descent condition, which plays a vital role in the global convergence analysis.

For further considerations we need the following assumptions.

Assumption 2.1. The level set S = {x ∈ R^n : f(x) ≤ f(x_0)} is bounded.

Assumption 2.2. In a neighborhood N of S, the function f is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a constant L > 0 such that

\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|, \quad \forall x, y \in N

Under these assumptions, there exist positive constants γ, γ̄, ω and ω̄ such that

\bar{\gamma} \le \|g_{k+1}\| \le \gamma \quad \text{and} \quad \bar{\omega} \le \|g_k\| \le \omega, \quad \forall x \in S.

Theorem.

Let the sequences {g_k} and {d_k} be generated by the hybrid FG method. Then the search direction d_{k+1} satisfies the sufficient descent condition:

g_{k+1}^T d_{k+1} \le -\mu \|g_{k+1}\|^2 \quad (14)

for some constant μ > 0, namely μ = 1 − (E_3 − E_4) with 0 < E_3 − E_4 < 1.

Proof. We shall show that d_{k+1} satisfies the sufficient descent condition. For k = 0 the claim is trivial, since d_0 = −g_0 and so g_0^T d_0 = −‖g_0‖². Now we have

d_{k+1} = -g_{k+1} + \beta_k^{FG} d_k,

i.e.

d_{k+1} = -g_{k+1} + \left[(1-\varphi_k)\beta_k^{MMWU} + \varphi_k \beta_k^{RMAR}\right] d_k

We can rewrite the above direction in the following manner:

d_{k+1} = -\left(\varphi_k g_{k+1} + (1-\varphi_k) g_{k+1}\right) + \left((1-\varphi_k)\beta_k^{MMWU} + \varphi_k \beta_k^{RMAR}\right) d_k.

So,

d_{k+1} = \varphi_k \left(-g_{k+1} + \beta_k^{RMAR} d_k\right) + (1-\varphi_k)\left(-g_{k+1} + \beta_k^{MMWU} d_k\right),

After some arrangement, we get

d_{k+1} = \varphi_k d_{k+1}^{RMAR} + (1-\varphi_k) d_{k+1}^{MMWU} \quad (15)

Multiplying (15) by g_{k+1}^T from the left, we get

g_{k+1}^T d_{k+1} = \varphi_k g_{k+1}^T d_{k+1}^{RMAR} + (1-\varphi_k) g_{k+1}^T d_{k+1}^{MMWU}

Firstly, if φ_k = 0, then d_{k+1} = d_{k+1}^{MMWU}, and we prove that the sufficient descent condition holds for the MMWU method in the presence of the strong Wolfe line search conditions,

i.e.

g_{k+1}^T d_{k+1}^{MMWU} = -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2}{\|d_k\|^2} g_{k+1}^T d_k \quad (16)

Since,

g_{k+1}^T d_k \le y_k^T d_k \quad \text{and} \quad y_k^T d_k \le \alpha_k L \|d_k\|^2 \quad (17)

Applying (17) in (16), we get

g_{k+1}^T d_{k+1}^{MMWU} \le -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2}{\|d_k\|^2} \alpha_k L \|d_k\|^2 = -(1-\alpha_k L)\|g_{k+1}\|^2 = -E_1 \|g_{k+1}\|^2 \quad (18)

where E_1 = 1 − α_k L > 0, since 0 < α_k L < 1.

So it is proved that d_{k+1}^{MMWU} satisfies the sufficient descent condition.

Now let φ_k = 1; then d_{k+1} = d_{k+1}^{RMAR}, and we prove that the sufficient descent condition holds for the RMAR method in the presence of the strong Wolfe line search conditions. We have

d_{k+1}^{RMAR} = -g_{k+1} + \beta_k^{RMAR} d_k

Multiplying the above equation from the left by g_{k+1}^T, we get

g_{k+1}^T d_{k+1}^{RMAR} = -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} g_{k+1}^T d_k.

It can be shown that

0 \le \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} \le \frac{2\|g_{k+1}\|^2}{\|d_k\|^2} \quad (19)

Using (17) and (19), the direction satisfies

g_{k+1}^T d_{k+1}^{RMAR} \le -\|g_{k+1}\|^2 + 2\alpha_k L \|g_{k+1}\|^2 = -(1-2\alpha_k L)\|g_{k+1}\|^2 = -E_2 \|g_{k+1}\|^2 \quad (20)

where E_2 = 1 − 2α_k L > 0, with 0 < 2α_k L < 1.

So it is proved that d_{k+1}^{RMAR} satisfies the sufficient descent condition.

Now we prove that the direction satisfies the sufficient descent condition when 0 < φ_k < 1. Firstly, for the MMWU part:

(1-\varphi_k)\beta_k^{MMWU} g_{k+1}^T d_k = \frac{\|g_{k+1}\|^2}{\|d_k\|^2} g_{k+1}^T d_k - \frac{(s_k^T g_{k+1} - y_k^T g_{k+1})\|d_k\|^3 + \|g_{k+1}\|^2 \|d_k\| (y_k^T d_k)}{\|g_{k+1}\| (g_{k+1}^T d_k)(y_k^T d_k)} \cdot \frac{\|g_{k+1}\|^2}{\|d_k\|^2} g_{k+1}^T d_k

From the Lipschitz condition we have g_{k+1}^T d_k \le y_k^T d_k and

-(1-\sigma)\|g_k\|^2 \le y_k^T d_k \le \alpha_k L \|d_k\|^2

With some calculation, we get

(1-\varphi_k)\beta_k^{MMWU} g_{k+1}^T d_k \le \left[\alpha_k L + \frac{L\|s_k\|^2\|d_k\| - \alpha_k L\|d_k\|^3 + \|g_{k+1}\|^2 \alpha_k L \|d_k\|}{(1-\sigma)\|g_{k+1}\|\|g_k\|^2}\right]\|g_{k+1}\|^2 \le \left[\alpha_k L + \frac{LAB - \alpha_k L B^3 + \gamma^2 \alpha_k L B}{(1-\sigma)\bar{\gamma}\bar{\omega}^2}\right]\|g_{k+1}\|^2

where ‖s_k‖² ≤ A and ‖d_k‖ ≤ B, by the boundedness of S. Let

E_3 = \alpha_k L + \frac{LAB - \alpha_k L B^3 + \gamma^2 \alpha_k L B}{(1-\sigma)\bar{\gamma}\bar{\omega}^2}

Then

(1-\varphi_k)\beta_k^{MMWU} g_{k+1}^T d_k \le E_3 \|g_{k+1}\|^2 \quad (21)

Secondly, for the RMAR part:

\varphi_k \beta_k^{RMAR} g_{k+1}^T d_k = \frac{(s_k^T g_{k+1} - y_k^T g_{k+1})\|d_k\|^3 + \|g_{k+1}\|^2 \|d_k\|(y_k^T d_k)}{\|g_{k+1}\|(g_{k+1}^T d_k)(y_k^T d_k)} \cdot \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|} g_{k+1}^T d_k}{\|d_k\|^2} \, g_{k+1}^T d_k

From (19), the Lipschitz condition s_k^T g_{k+1} \le y_k^T s_k \le L\|s_k\|^2, and s_k = \alpha_k d_k, we get

\varphi_k \beta_k^{RMAR} g_{k+1}^T d_k \le 2\left[\frac{L\|s_k\|^2\|d_k\| - \|y_k\|\|g_{k+1}\|\|d_k\|^3 + \alpha_k L\|g_{k+1}\|^2\|d_k\|^2}{\|g_{k+1}\|^2\left(-(1-\sigma)\right)\|g_k\|^2\|d_k\|^2}\right]\|g_{k+1}\|^2

Since ‖y_k‖ ≤ ‖g_{k+1}‖ + ‖g_k‖, we obtain

\varphi_k \beta_k^{RMAR} g_{k+1}^T d_k \le -\frac{2}{1-\sigma}\left[\frac{L\|s_k\|^2\|d_k\| - 0.8\|g_{k+1}\|^2\|d_k\| + \alpha_k L\|g_{k+1}\|^2\|d_k\|}{\|g_{k+1}\|\|g_k\|^2}\right]\|g_{k+1}\|^2 \le -\frac{2B}{1-\sigma}\left[\frac{LA - 0.8\gamma^2 + \alpha_k L\omega^2}{\bar{\gamma}\bar{\omega}^2}\right]\|g_{k+1}\|^2

where

E_4 = \frac{2B}{1-\sigma}\left[\frac{LA - 0.8\gamma^2 + \alpha_k L\omega^2}{\bar{\gamma}\bar{\omega}^2}\right],

so that

\varphi_k \beta_k^{RMAR} g_{k+1}^T d_k \le -E_4 \|g_{k+1}\|^2 \quad (22)

From (18), (20), (21) and (22) we get

g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^2 + E_3\|g_{k+1}\|^2 - E_4\|g_{k+1}\|^2 = -\left[1-(E_3-E_4)\right]\|g_{k+1}\|^2 = -E\|g_{k+1}\|^2

with E = 1 − (E_3 − E_4) and 0 < E_3 − E_4 < 1.

So it is proved that d_{k+1} satisfies the sufficient descent condition. ∎

Let Assumptions 2.1 and 2.2 hold. We first recall the following lemma.

Lemma. Let Assumptions 2.1 and 2.2 hold. Consider the method (2) and (5), where d_k is a descent direction and α_k is obtained from the strong Wolfe line search. If

\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty,

Then

\liminf_{k \to \infty} \|g_k\| = 0.

Suppose that Assumptions 2.1 and 2.2 hold. Consider the algorithm HFG, where 0 ≤ φ_k ≤ 1, α_k is obtained by the strong Wolfe line search, and d_{k+1} is a descent direction. Then

\liminf_{k \to \infty} \|g_k\| = 0.

Proof. Because the descent condition holds, the remaining estimates bound ‖d_{k+1}‖ from above using (4), the bound (19) on β_k^{RMAR}, and the constants of Assumptions 2.1 and 2.2. Consequently \sum_{k \ge 1} 1/\|d_k\|^2 = \infty, and the lemma yields \liminf_{k \to \infty} \|g_k\| = 0. ∎

In this section we selected a set of well-known test functions to evaluate the proposed method.

All codes are written in double-precision FORTRAN and compiled with Visual F90 (default compiler settings) on an Intel Pentium 4 workstation.

We selected 26 large-scale unconstrained optimization problems in the extended or generalized form.

No. | Test Function | Dimension (N) | NOI | NOF | NOI | NOF | NOI | NOF
---|---|---|---|---|---|---|---|---
1 | Beale | 1000/5000/10,000 | 12/12/12 | 29/29/29 | 12/12/12 | 29/29/29 | 12/12/12 | 29/29/29
2 | Biggsb1 | 1000/5000/10,000 | F/32/241 | F/71/511 | F/32/241 | F/71/511 | F/32/240 | F/71/506
3 | Cosine | 1000/5000/10,000 | 10/11/11 | 22/27/28 | 10/11/11 | 22/27/28 | 10/11/11 | 22/27/28
4 | Cubic | 1000/5000/10,000 | 16/16/16 | 45/45/45 | 16/16/16 | 45/45/45 | 16/16/16 | 45/45/45
5 | Denschnb | 1000/5000/10,000 | 6/6/6 | 15/15/15 | 6/6/6 | 15/15/15 | 6/6/6 | 15/15/15
6 | Denschnf | 1000/5000/10,000 | 12/13/15 | 26/28/31 | 12/13/15 | 26/28/31 | 12/13/15 | 26/28/31
7 | Diagonal1 | 1000/5000/10,000 | 32/F/93 | 71/F/242 | 32/52/F | 71/123/F | 32/51/92 | 71/121/236
8 | Diagonal3 | 1000/5000/10,000 | 24/54/84 | 49/110/184 | 24/54/84 | 49/110/175 | 24/53/83 | 49/108/170
9 | Diagonal4 | 1000/5000/10,000 | 2/2/2 | 6/6/6 | 2/2/2 | 6/6/6 | 2/2/2 | 6/6/6
10 | Dixmaan A | 1000/5000/10,000 | 6/6/5 | 15/15/13 | 6/6/5 | 15/15/13 | 6/6/5 | 15/15/13
11 | Dixmaan E | 1000/5000/10,000 | 43/68/115 | 112/193/335 | 43/68/116 | 112/193/338 | 43/68/111 | 112/193/305
12 | Dixmaan I | 1000/5000/10,000 | 43/68/111 | 117/191/327 | 43/F/F | 117/F/F | 43/65/110 | 117/173/322
13 | Dqdrtic | 1000/5000/10,000 | 32/32/32 | 65/65/65 | 32/32/32 | 65/65/65 | 32/32/32 | 65/65/65
14 | Extended EP1 | 1000/5000/10,000 | 4/4/4 | 10/10/10 | 4/4/4 | 10/10/10 | 4/4/4 | 10/10/10
15 | Extended Cliff | 1000/5000/10,000 | 6/6/6 | 29/29/29 | 6/6/6 | 29/29/29 | 6/6/6 | 29/29/29
16 | Extended Himmelblau | 1000/5000/10,000 | 26/8/8 | 276/1138/390 | 26/8/8 | 276/416/400 | 24/7/7 | 268/382/278
17 | Extended Tri2 | 1000/5000/10,000 | 49/57/44 | 150/1372/235 | 46/50/58 | 129/314/935 | 45/46/41 | 103/339/340
18 | Extended Wood | 1000/5000/10,000 | 248/210/207 | 503/427/421 | 220/200/204 | 447/407/416 | 161/166/171 | 329/339/349
19 | Hager | 1000/5000/10,000 | 26/29/77 | 54/59/5360 | 26/29/F | 53/62/F | 26/29/70 | 54/59/263
20 | Helical | 1000/5000/10,000 | 65/68/68 | 134/140/140 | 58/58/58 | 121/121/121 | 43/43/43 | 90/90/90
21 | Miele | 1000/5000/10,000 | 134/141/145 | 510/549/569 | 146/150/160 | 521/543/593 | 108/120/108 | 368/419/369
22 | Nond | 1000/5000/10,000 | 30/30/30 | 78/78/78 | 30/30/30 | 78/78/78 | 30/30/30 | 78/78/78
23 | OSP | 1000/5000/10,000 | 197/329/401 | 758/1159/1353 | 195/298/386 | 714/1041/1342 | 149/297/383 | 540/1011/1318
24 | Powell 3 | 1000/5000/10,000 | 31/32/32 | 66/68/68 | 27/28/28 | 58/61/61 | 26/27/27 | 56/58/58
25 | Powell 4 | 1000/5000/10,000 | F/F/F | F/F/F | 212/293/293 | 485/660/660 | 197/230/230 | 483/530/530
26 | Wood | 1000/5000/10,000 | 204/266/246 | 415/539/499 | 266/237/243 | 539/481/493 | 175/177/191 | 357/361/389

Measure | | |
---|---|---|---
NOI | 100% | 99.2% | 71.3%
NOF | 100% | 92.4% | 60.0%

Each problem was tested three times with a gradually increasing number of variables: N = 1000, 5000 and 10,000. All algorithms implemented the strong Wolfe line search conditions (3) and (4).

In some cases, the computation stopped due to the failure of the line search to find a positive step size; such cases are considered failures and denoted by (F).

For the purpose of our comparisons, we record the number of iterations (NOI), the number of function evaluations (NOF), and the dimension of each test problem (N).
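The relative percentages reported in the summary table can be obtained by totalling NOI (or NOF) over all problems for each algorithm and normalizing by the first algorithm's total. A sketch with hypothetical counts, not the paper's data:

```python
def relative_totals(per_algorithm_counts):
    """Express each algorithm's total count as a percentage of the first
    (baseline) algorithm's total, rounded to one decimal place.
    The input numbers used below are hypothetical, for illustration only."""
    totals = [sum(counts) for counts in per_algorithm_counts]
    return [round(100.0 * t / totals[0], 1) for t in totals]

# per-problem NOI for three algorithms (hypothetical values)
noi = [[12, 30, 10], [12, 29, 10], [12, 20, 8]]
print(relative_totals(noi))  # baseline first, best method last
```

A lower percentage means fewer iterations (or function evaluations) in total, so in the paper's summary table the proposed method, at 71.3% NOI and 60.0% NOF, is the most economical of the three.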


The authors declare no conflicts of interest regarding the publication of this paper.

Al-Namat, F.N. and Al-Naemi, G.M. (2020) Global Convergence Property with Inexact Line Search for a New Hybrid Conjugate Gradient Method. Open Access Library Journal, 7: e6048. https://doi.org/10.4236/oalib.1106048