Global Convergence Property with Inexact Line Search for a New Hybrid Conjugate Gradient Method

Abstract

In this study, we derive a new scale parameter $\varphi$ for a hybrid conjugate gradient (CG) method for solving large scale unconstrained optimization problems. With this parameter the method satisfies the sufficient descent condition, and its global convergence is proved under the strong Wolfe line search conditions. Our numerical results show that the proposed method is effective and robust compared with some known algorithms.

Al-Namat, F. and Al-Naemi, G. (2020) Global Convergence Property with Inexact Line Search for a New Hybrid Conjugate Gradient Method. Open Access Library Journal, 7, 1-14. doi: 10.4236/oalib.1106048.

1. Introduction

In unconstrained optimization, we minimize an objective function that depends on real variables, with no restrictions on the values of these variables. The unconstrained optimization problem is stated as:

$$\min_{x \in \mathbb{R}^n} f(x) \qquad (1)$$

where $x \in \mathbb{R}^n$ is a real vector with $n \ge 1$ components and $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth function whose gradient $g$ is available [1]. A nonlinear conjugate gradient method generates a sequence $\{x_k\}$, starting from an initial guess $x_0 \in \mathbb{R}^n$, using the recurrence

$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots \qquad (2)$$

where $\alpha_k$ is the positive step size obtained by carrying out a one dimensional search, known as the line search [2]. Among the available conditions, the so-called strong Wolfe line search conditions require that [3] [4]

$$f(x_k + \alpha_k d_k) \le f(x_k) + \sigma\,\alpha_k\, g_k^T d_k, \qquad (3)$$

$$\left| g(x_k + \alpha_k d_k)^T d_k \right| \le \delta\, \left| g_k^T d_k \right|, \qquad (4)$$

where $0 < \sigma < \delta < 1$. The aim is to find an approximation of $\alpha_k$ for which the descent property is satisfied, without continuing the search along the direction when $x_k$ is far from the solution. Thus, by the strong Wolfe line search conditions we inherit the advantages of the exact line search at an inexpensive, low computational cost [5].
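To make conditions (3) and (4) concrete, the following minimal Python sketch checks whether a trial step size satisfies them; the function names and the sample values of $\sigma$ and $\delta$ are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, sigma=1e-4, delta=0.1):
    """Check the strong Wolfe conditions (3)-(4) for a trial step alpha.

    f, grad: callables returning f(x) and the gradient g(x);
    sigma, delta: assumed sample values with 0 < sigma < delta < 1.
    """
    g = grad(x)
    x_new = x + alpha * d
    # Condition (3): sufficient decrease along d
    armijo = f(x_new) <= f(x) + sigma * alpha * np.dot(g, d)
    # Condition (4): the directional derivative at the new point is small in magnitude
    curvature = abs(np.dot(grad(x_new), d)) <= delta * abs(np.dot(g, d))
    return armijo and curvature
```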

The search direction $d_k$ is generated by:

$$d_k = \begin{cases} -g_k, & k = 1, \\ -g_k + \beta_k d_{k-1}, & k > 1, \end{cases} \qquad (5)$$

where $g_k$ and $\beta_k$ are, respectively, the gradient of $f(x)$ at the point $x_k$ and the conjugate gradient coefficient. Different choices of the parameter $\beta_k$ correspond to different conjugate gradient methods. The most popular formulas for $\beta_k$ are those of the Hestenes-Stiefel (HS), Fletcher-Reeves (FR), Polak-Ribiere-Polyak (PRP), Conjugate Descent (CD), Liu-Storey (LS), and Dai-Yuan (DY) methods, among others.

These methods are identical when $f$ is a strongly convex quadratic function and the line search is exact, since the gradients are mutually orthogonal and the parameters $\beta_k$ in these methods are equal. When applied to general nonlinear functions with inexact line searches, however, the behavior of these methods is markedly different [1]. We summarize some well known conjugate gradient methods in Table 1.

An important class of conjugate gradient methods is the class of hybrid conjugate gradient algorithms. These hybrid computational schemes often perform better than the classical conjugate gradient methods. They are defined by (2) and (5), where the parameter $\beta_k$ is computed as a projection or as a convex combination of different conjugate gradient methods [14].

We summarize some well known hybrid conjugate gradient methods in Table 2.

We propose a new hybrid CG method, based on a combination of the MMWU [24] and RMAR [25] conjugate gradient methods, for solving unconstrained optimization problems under suitable conditions. The corresponding conjugate gradient parameters are

$$\beta_k^{MMWU} = \frac{\|g_{k+1}\|^2}{\|d_k\|^2} \qquad (6)$$

and

$$\beta_k^{RMAR} = \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2} \qquad (7)$$

Table 1. Some well known conjugate gradient coefficients.

Table 2. Hybrid conjugate gradient methods.

We define the parameter $\beta_k$ in the proposed method by:

$$\beta_k^{HFG} = (1-\varphi_k)\,\beta_k^{MMWU} + \varphi_k\,\beta_k^{RMAR} \qquad (8)$$

Observe that if $\varphi_k = 0$, then $\beta_k^{HFG} = \beta_k^{MMWU}$, and if $\varphi_k = 1$, then $\beta_k^{HFG} = \beta_k^{RMAR}$.
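As an illustration of (6)-(8), the short Python sketch below evaluates the two coefficients and their convex combination for given vectors; the helper names are ours, and a nonzero $d_k$ is assumed.

```python
import numpy as np

def beta_mmwu(g_new, d):
    # Equation (6): ||g_{k+1}||^2 / ||d_k||^2
    return np.dot(g_new, g_new) / np.dot(d, d)

def beta_rmar(g_new, d):
    # Equation (7): (||g_{k+1}||^2 - (||g_{k+1}||/||d_k||) g_{k+1}^T d_k) / ||d_k||^2
    gnorm, dnorm = np.linalg.norm(g_new), np.linalg.norm(d)
    return (gnorm**2 - (gnorm / dnorm) * np.dot(g_new, d)) / dnorm**2

def beta_hfg(g_new, d, phi):
    # Equation (8): convex combination with 0 <= phi <= 1
    return (1.0 - phi) * beta_mmwu(g_new, d) + phi * beta_rmar(g_new, d)
```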

By choosing an appropriate value of the parameter $\varphi_k$ in the convex combination, the search direction $d_{k+1}$ of our algorithm not only matches the Newton direction but also satisfies the well-known Dai-Liao (DL) conjugacy condition [26]. Under the strong Wolfe line search conditions, we prove the global convergence of our algorithm. The numerical results also show the feasibility and effectiveness of our algorithm.

This paper is organized as follows. In Section 2, we introduce our new hybrid conjugate gradient method (HFG) and derive the parameter $\varphi_k$, which leads to a specific algorithm. In Section 3, we prove that the method generates directions satisfying the sufficient descent condition under the strong Wolfe line search conditions. The global convergence property of the proposed method is established in Section 4. Some numerical results are reported in Section 5.

2. A New Hybrid Conjugate Gradient Method

In this section, we describe the new proposed hybrid conjugate gradient method. In order to obtain a sufficient descent direction, we compute $\varphi_k$ as follows. We combine $\beta_k^{MMWU}$ and $\beta_k^{RMAR}$ in a convex combination in order to obtain a good algorithm for unconstrained optimization.

The direction $d_{k+1}$ is generated by the rule

$$d_{k+1} = -g_{k+1} + \beta_k^{HFG} d_k \qquad (9)$$

where $\beta_k^{HFG}$ is defined in (8). The iterates $x_1, x_2, x_3, \ldots$ of our method are computed by means of the recurrence (2), where the step size $\alpha_k$ is determined according to the strong Wolfe conditions (3) and (4).

The scale parameter $\varphi_k$ satisfies $0 \le \varphi_k \le 1$ and will be determined in a specific way described later. Observe that if $\varphi_k = 0$, then $\beta_k^{HFG} = \beta_k^{MMWU}$, and if $\varphi_k = 1$, then $\beta_k^{HFG} = \beta_k^{RMAR}$. On the other hand, if $0 < \varphi_k < 1$, then $\beta_k^{HFG}$ is a proper convex combination of $\beta_k^{MMWU}$ and $\beta_k^{RMAR}$.

From (8) and (9) it is obvious that:

$$d_{k+1} = \begin{cases} -g_{k+1}, & k = 1, \\ -g_{k+1} + (1-\varphi_k)\dfrac{\|g_{k+1}\|^2}{\|d_k\|^2}\, d_k + \varphi_k\, \dfrac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2}\, d_k, & k > 1. \end{cases} \qquad (10)$$

Our motivation is to select the parameter $\varphi_k$ in such a manner that the direction $d_{k+1}$ given in (10) is equal to the Newton direction $d_{k+1}^N = -\nabla^2 f(x_{k+1})^{-1} g_{k+1}$. Therefore,

$$-\nabla^2 f(x_{k+1})^{-1} g_{k+1} = -g_{k+1} + (1-\varphi_k)\frac{\|g_{k+1}\|^2}{\|d_k\|^2}\, d_k + \varphi_k\, \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2}\, d_k \qquad (11)$$

Now, multiplying (11) by $s_k^T \nabla^2 f(x_{k+1})$ from the left, we get

$$-s_k^T g_{k+1} = -s_k^T \nabla^2 f(x_{k+1})\, g_{k+1} + (1-\varphi_k)\frac{\|g_{k+1}\|^2}{\|d_k\|^2}\, s_k^T \nabla^2 f(x_{k+1})\, d_k + \varphi_k\, \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2}\, s_k^T \nabla^2 f(x_{k+1})\, d_k.$$

Therefore, in order to have an algorithm suitable for large scale problems, we assume that the pair $(s_k, y_k)$ satisfies the secant equation

$$\nabla^2 f(x_{k+1})\, s_k = y_k. \qquad (12)$$

From (12), we get

$$s_k^T \nabla^2 f(x_{k+1}) = y_k^T.$$

Denoting $\varphi_k^{FG} = \varphi_k$, we get

$$-s_k^T g_{k+1} = -y_k^T g_{k+1} + \frac{\|g_{k+1}\|^2}{\|d_k\|^2}\,(y_k^T d_k) - \varphi_k^{FG}\, \frac{\|g_{k+1}\|\,(g_{k+1}^T d_k)}{\|d_k\|^3}\,(y_k^T d_k),$$

after some algebra, we get

$$\varphi_k^{FG} = \frac{(s_k^T g_{k+1} - y_k^T g_{k+1})\,\|d_k\|^3 + \|g_{k+1}\|^2\,\|d_k\|\,(y_k^T d_k)}{\|g_{k+1}\|\,(g_{k+1}^T d_k)\,(y_k^T d_k)}. \qquad (13)$$
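For illustration, (13) together with the clipping used in Step 5 of Algorithm HFG (stated below) can be coded as the following Python sketch; the function name is ours, and a nonzero denominator is assumed.

```python
import numpy as np

def phi_fg(g_new, d, s, y):
    """Scale parameter of equation (13), clipped to [0, 1] as in Step 5 of Algorithm HFG."""
    dnorm, gnorm = np.linalg.norm(d), np.linalg.norm(g_new)
    ytd = np.dot(y, d)
    num = (np.dot(s, g_new) - np.dot(y, g_new)) * dnorm**3 + gnorm**2 * dnorm * ytd
    den = gnorm * np.dot(g_new, d) * ytd
    return min(max(num / den, 0.0), 1.0)
```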

Now, we specify the complete hybrid conjugate gradient method (HFG), which possesses some nice properties of both the conjugate gradient and Newton methods.

Algorithm HFG

Step 1: Select $x_0 \in \mathbb{R}^n$ and $\varepsilon > 0$; set $k = 0$. Compute $f(x_0)$ and $g_0 = \nabla f(x_0)$; set $d_0 = -g_0$.

Step 2: Test the stopping criterion, i.e., if $\|g_k\| \le \varepsilon$, then stop.

Step 3: Compute $\alpha_k$ by the strong Wolfe line search conditions (3) and (4).

Step 4: Compute $x_{k+1} = x_k + \alpha_k d_k$ and $g_{k+1} = g(x_{k+1})$. Compute $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.

Step 5: Compute $\varphi_k$ by (13); if $\varphi_k \ge 1$, set $\varphi_k = 1$, and if $\varphi_k \le 0$, set $\varphi_k = 0$.

Step 6: Compute $\beta_k^{HFG}$ by (8).

Step 7: Generate $d = -g_{k+1} + \beta_k^{HFG} d_k$.

Step 8: If the Powell restart criterion $|g_{k+1}^T g_k| \ge 0.2\,\|g_{k+1}\|^2$ is satisfied, then set $d_{k+1} = -g_{k+1}$; otherwise define $d_{k+1} = d$.

Step 9: Set $k = k + 1$ and continue with Step 2.
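The following Python sketch illustrates Steps 1-9 under the stated assumptions; it is not the authors' FORTRAN implementation. SciPy's line_search routine is assumed here as a stand-in for a strong Wolfe line search, and all helper names are ours.

```python
import numpy as np
from scipy.optimize import line_search  # Wolfe-type line search, used here as a stand-in

def hfg(f, grad, x0, eps=1e-6, max_iter=10000):
    """Illustrative sketch of Algorithm HFG (Steps 1-9); not the authors' code."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                               # Step 1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                     # Step 2: stopping criterion
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]     # Step 3: line search
        if alpha is None:                                # line-search failure, reported as (F) in Section 5
            break
        x_new = x + alpha * d                            # Step 4
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        dn, gn = np.linalg.norm(d), np.linalg.norm(g_new)
        ytd = np.dot(y, d)
        # Step 5: phi from (13), clipped to [0, 1]
        phi = (((np.dot(s, g_new) - np.dot(y, g_new)) * dn**3 + gn**2 * dn * ytd)
               / (gn * np.dot(g_new, d) * ytd))
        phi = min(max(phi, 0.0), 1.0)
        # Step 6: beta from (8), using (6) and (7)
        beta = ((1.0 - phi) * (gn**2 / dn**2)
                + phi * (gn**2 - (gn / dn) * np.dot(g_new, d)) / dn**2)
        d_new = -g_new + beta * d                        # Step 7
        if abs(np.dot(g_new, g)) >= 0.2 * gn**2:         # Step 8: Powell restart
            d_new = -g_new
        x, g, d = x_new, g_new, d_new                    # Step 9
    return x
```

For example, the routine could be applied to any smooth test function with an available gradient from the collections cited in Section 5.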

3. The Sufficient Descent Condition

In this section, we apply the following theorem to show that the search direction $d_k$ obtained by the hybrid HFG method satisfies the sufficient descent condition, which plays a vital role in the global convergence analysis.

For further considerations we need the following assumptions.

3.1. Assumption

The level set $S = \{ x \in \mathbb{R}^n : f(x) \le f(x_0) \}$ is bounded.

3.2. Assumption

In a neighborhood $N$ of $S$, the function $f$ is continuously differentiable and its gradient is Lipschitz continuous; i.e., there exists a constant $L > 0$ such that

$$\|\nabla f(x) - \nabla f(y)\| \le L\, \|x - y\|, \quad \forall x, y \in N.$$

Under these assumptions, there exist positive constants $\gamma$, $\bar{\gamma}$, $\omega$ and $\bar{\omega}$ such that

$$\bar{\gamma} \le \|g_{k+1}\| \le \gamma \quad \text{and} \quad \bar{\omega} \le \|g_k\| \le \omega, \quad \forall x \in S \ [27].$$

Theorem.

Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by the hybrid HFG method. Then the search direction $d_{k+1}$ satisfies the sufficient descent condition:

$$g_{k+1}^T d_{k+1} \le -\mu\, \|g_{k+1}\|^2, \quad \mu > 0, \qquad (14)$$

where $\mu = 1 - (E_4 - E_3)$, with $0 < (E_4 - E_3) < 1$.

Proof. We shall show that $d_{k+1}$ satisfies the sufficient descent condition. For $k = 0$ the proof is trivial, since $d_0 = -g_0$ and so $g_0^T d_0 = -\|g_0\|^2$. Now we have

$$d_{k+1} = -g_{k+1} + \beta_k^{HFG} d_k,$$

i.e.

$$d_{k+1} = -g_{k+1} + \left[(1-\varphi_k)\,\beta_k^{MMWU} + \varphi_k\,\beta_k^{RMAR}\right] d_k.$$

We can rewrite the above direction in the following manner:

$$d_{k+1} = -\left(\varphi_k\, g_{k+1} + (1-\varphi_k)\, g_{k+1}\right) + \left((1-\varphi_k)\,\beta_k^{MMWU} + \varphi_k\,\beta_k^{RMAR}\right) d_k.$$

So,

$$d_{k+1} = \varphi_k\left(-g_{k+1} + \beta_k^{RMAR} d_k\right) + (1-\varphi_k)\left(-g_{k+1} + \beta_k^{MMWU} d_k\right).$$

After some rearrangement, we get

$$d_{k+1} = \varphi_k\, d_{k+1}^{RMAR} + (1-\varphi_k)\, d_{k+1}^{MMWU}. \qquad (15)$$

Multiplying (15) by $g_{k+1}^T$ from the left, we get

$$g_{k+1}^T d_{k+1} = \varphi_k\, g_{k+1}^T d_{k+1}^{RMAR} + (1-\varphi_k)\, g_{k+1}^T d_{k+1}^{MMWU}.$$

Firstly, if $\varphi_k = 0$, then $d_{k+1} = d_{k+1}^{MMWU}$. We prove that the sufficient descent condition holds for the MMWU method under the strong Wolfe line search conditions; in [24] this method was proved to satisfy the sufficient descent condition only with the exact line search.

i.e.

$$g_{k+1}^T d_{k+1}^{MMWU} = -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2}{\|d_k\|^2}\, g_{k+1}^T d_k. \qquad (16)$$

Since,

$$g_{k+1}^T d_k \le y_k^T d_k \quad \text{and} \quad y_k^T d_k \le \alpha_k L\, \|d_k\|^2, \qquad (17)$$

applying (17) in (16), we get

$$g_{k+1}^T d_{k+1}^{MMWU} \le -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2}{\|d_k\|^2}\,\alpha_k L\,\|d_k\|^2 = -(1 - \alpha_k L)\,\|g_{k+1}\|^2 = -E_1\,\|g_{k+1}\|^2, \qquad (18)$$

where $E_1 = (1 - \alpha_k L) > 0$, with $0 < \alpha_k L < 1$.

So, it is proved that $d_{k+1}^{MMWU}$ satisfies the sufficient descent condition.

Now let $\varphi_k = 1$; then $d_{k+1} = d_{k+1}^{RMAR}$. We prove that the sufficient descent condition holds for the RMAR method under the strong Wolfe line search conditions; in [25] this method was proved to satisfy the sufficient descent condition only with the exact line search. We have

$$d_{k+1}^{RMAR} = -g_{k+1} + \beta_k^{RMAR} d_k.$$

Multiplying the above equation from the left by $g_{k+1}^T$, we get

$$g_{k+1}^T d_{k+1}^{RMAR} = -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2}\, g_{k+1}^T d_k.$$

In [25], they proved that

$$0 \le \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2} \le \frac{2\,\|g_{k+1}\|^2}{\|d_k\|^2}. \qquad (19)$$

Using (17) and (19), the above becomes

$$g_{k+1}^T d_{k+1}^{RMAR} \le -\|g_{k+1}\|^2 + 2\,\alpha_k L\,\|g_{k+1}\|^2 = -(1 - 2\alpha_k L)\,\|g_{k+1}\|^2 = -E_2\,\|g_{k+1}\|^2, \qquad (20)$$

where $E_2 = (1 - 2\alpha_k L) > 0$, with $0 < 2\alpha_k L < 1$ and $0 < L < \tfrac{1}{2}$.

So, it is proved that $d_{k+1}^{RMAR}$ satisfies the sufficient descent condition.

Now we prove that the direction satisfies the sufficient descent condition when $0 < \varphi_k < 1$. Firstly, consider

$$(1-\varphi_k)\,\beta_k^{MMWU}\, g_{k+1}^T d_k = \frac{\|g_{k+1}\|^2}{\|d_k\|^2}\, g_{k+1}^T d_k - \left[\frac{(s_k^T g_{k+1} - y_k^T g_{k+1})\,\|d_k\|^3 + \|g_{k+1}\|^2\,\|d_k\|\,(y_k^T d_k)}{\|g_{k+1}\|\,(g_{k+1}^T d_k)\,(y_k^T d_k)}\right] \frac{\|g_{k+1}\|^2}{\|d_k\|^2}\, g_{k+1}^T d_k.$$

From the Lipschitz condition we have $g_{k+1}^T d_k \le y_k^T d_k$ and

$$(1-\sigma)\,\|g_k\|^{2} \le y_k^T d_k \le \alpha_k L\,\|d_k\|^{2},$$

and with a mathematical calculation, we get

$$(1-\varphi_k)\,\beta_k^{MMWU}\, g_{k+1}^T d_k \le \left[\alpha_k L + \frac{L A B}{\alpha_k L B^{3}} + \frac{\gamma^{2}\,\alpha_k L\, B}{(1-\sigma)\,\bar{\gamma}\,\bar{\omega}^{2}}\right] \|g_{k+1}\|^{2}.$$

Let

$$E_3 = \alpha_k L + \frac{L A B}{\alpha_k L B^{3}} + \frac{\gamma^{2}\,\alpha_k L\, B}{(1-\sigma)\,\bar{\gamma}\,\bar{\omega}^{2}};$$

then

$$(1-\varphi_k)\,\beta_k^{MMWU}\, g_{k+1}^T d_k \le E_3\, \|g_{k+1}\|^{2}. \qquad (21)$$

Secondly, consider

$$\varphi_k\,\beta_k^{RMAR}\, g_{k+1}^T d_k = \left[\frac{(s_k^T g_{k+1} - y_k^T g_{k+1})\,\|d_k\|^3 + \|g_{k+1}\|^2\,\|d_k\|\,(y_k^T d_k)}{\|g_{k+1}\|\,(g_{k+1}^T d_k)\,(y_k^T d_k)}\right]\left[\frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|d_k\|}\, g_{k+1}^T d_k}{\|d_k\|^2}\right] g_{k+1}^T d_k.$$

From (19), the Lipschitz condition $s_k^T g_{k+1} \le y_k^T s_k \le L\,\|s_k\|^2$, the relation $s_k = \alpha_k d_k$, and the bound $\|y_k\| \le \|g_{k+1}\| + \|g_k\|$, with some mathematical calculation we obtain

$$\varphi_k\,\beta_k^{RMAR}\, g_{k+1}^T d_k \le \frac{2B}{1-\sigma}\left[\frac{L A}{0.8\,\gamma^{2}} + \frac{\alpha_k L\,\omega^{2}}{\bar{\gamma}\,\bar{\omega}^{2}}\right] \|g_{k+1}\|^{2}.$$

Let

$$E_4 = \frac{2B}{1-\sigma}\left[\frac{L A}{0.8\,\gamma^{2}} + \frac{\alpha_k L\,\omega^{2}}{\bar{\gamma}\,\bar{\omega}^{2}}\right];$$

then

$$\varphi_k\,\beta_k^{RMAR}\, g_{k+1}^T d_k \le E_4\, \|g_{k+1}\|^{2}. \qquad (22)$$

From (18), (20), (21) and (22) we get

$$g_{k+1}^T d_{k+1} \le -\|g_{k+1}\|^{2} - E_3\,\|g_{k+1}\|^{2} + E_4\,\|g_{k+1}\|^{2} = -\left[1 - (E_4 - E_3)\right] \|g_{k+1}\|^{2} = -E\, \|g_{k+1}\|^{2},$$

with $E = 1 - (E_4 - E_3)$ and $0 < E_4 - E_3 < 1$.

So, it is proved that $d_{k+1}$ satisfies the sufficient descent condition.
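As a practical sanity check (our addition, not part of the paper's argument), one can monitor the ratio $g_{k+1}^T d_{k+1} / \|g_{k+1}\|^2$ along the iterations; condition (14) requires it to stay below some $-\mu < 0$.

```python
import numpy as np

def descent_ratio(g_new, d_new):
    # Condition (14) asks g_{k+1}^T d_{k+1} <= -mu * ||g_{k+1}||^2, i.e. this ratio <= -mu < 0.
    return np.dot(g_new, d_new) / np.dot(g_new, g_new)
```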

4. Convergence Analysis

Let Assumptions 3.1 and 3.2 hold. In [26] it is proved that the following result holds for any conjugate gradient method with the strong Wolfe line search conditions.

4.1. Lemma

Let Assumptions 3.1 and 3.2 hold. Consider the method (2) and (5), where $d_k$ is a descent direction and $\alpha_k$ is obtained by the strong Wolfe line search. If

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty,$$

Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

4.2. Theorem

Suppose that Assumptions 3.1 and 3.2 hold. Consider the Algorithm HFG, where $0 \le \varphi_k \le 1$, $\alpha_k$ is obtained by the strong Wolfe line search and $d_{k+1}$ is a descent direction. Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Because the sufficient descent condition (14) holds, we have $g_{k+1}^T d_{k+1} \le -\mu\,\|g_{k+1}\|^2$. So, using Lemma 4.1, it is sufficient to prove that $\|d_{k+1}\|$ is bounded above. From (10) and (15),

$$\|d_{k+1}\| \le \varphi_k\, \|d_{k+1}^{RMAR}\| + (1-\varphi_k)\, \|d_{k+1}^{MMWU}\|.$$

Upper bounds for $\|d_{k+1}^{MMWU}\|$ and $\|d_{k+1}^{RMAR}\|$ were proved in [24] and [25], respectively. Combining these bounds with the strong Wolfe condition (4), the Lipschitz condition and some mathematical calculation, it follows that $\|d_{k+1}\|$ is bounded above, so that $\sum_{k \ge 1} 1/\|d_k\|^2 = \infty$, and Lemma 4.1 gives

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

5. Numerical Experiments

In this section, we selected some test functions from the CUTE library (listed in Table 3), along with other large scale optimization problems presented in Andrei [28] [29] and Bongartz et al. [30].

All codes are written in double precision FORTRAN and compiled with Visual F90 (default compiler settings) on an Intel Pentium 4 workstation. The step size is always computed by a cubic fitting procedure.

We selected 26 large scale unconstrained optimization problems in the extended or generalized form. Each problem was tested three times for a gradually increasing number of variables: N = 1000, 5000 and 10,000. All algorithms implemented the strong Wolfe line search conditions (3) and (4), with the same parameter values and the same stopping criterion.

Table 3. Comparison in terms of NOI and NOF between the compared methods and the proposed method.

Table 4. The percentage performance of the proposed method.

Figure 1. The comparison between the three methods.

In some cases, the computation stopped due to the failure of the line search to find a positive step size; this was counted as a failure, denoted by (F).

For the purposes of our comparison, we record the number of iterations (NOI), the number of function evaluations (NOF), and the dimension of each test problem (N).

Table 3 gives the comparison, in terms of NOI and NOF, between the compared methods and the proposed method.

Table 4 gives the percentage performance of the proposed method against the other methods. One comparison shows savings of (NOI 0.8%, NOF 7.6%) and the other shows savings of (NOI 28.7%, NOF 40.0%).

Figure 1 gives the comparison between the three methods using the well-known Wood test function.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Liu, A.J. and Li, S. (2014) New Hybrid Conjugate Gradient Method for Unconstrained Optimization. Applied Mathematics and Computation, 245, 36-43.
https://www.sciencedirect.com/science/article/abs/pii/S0096300314010637
https://doi.org/10.1016/j.amc.2014.07.096
[2] Salleh, Z. and Alhawarat, A. (2016) An Efficient Modification of the Hestenes-Stiefel Nonlinear Conjugate Gradient Method with Restart Property. Journal of Inequalities and Applications, 2016, Article No. 110.
https://link.springer.com/article/10.1186/s13660-016-1049-5
https://doi.org/10.1186/s13660-016-1049-5
[3] Wolfe, P. (1969) Convergence Condition for Ascent Methods. SIAM Review, 11, 226-235. https://doi.org/10.1137/1011036
[4] Wolfe, P. (1971) Convergence Conditions for Ascent Methods II: Some Corrections. SIAM Review, 13, 185-188. https://doi.org/10.1137/1013035
[5] Alhawarat, A., Mamat, M., Rivaie, M. and Salleh, Z. (2015) An Efficient Hybrid Conjugate Gradient Method with the Strong Wolfe-Powell Line Search. Mathematical Problems in Engineering, 2015, Article ID: 103517.
https://www.hindawi.com/journals/mpe/2015/103517
https://doi.org/10.1155/2015/103517
[6] Hestenes, M.R. and Stiefel, E. (1952) Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49, 409-436. https://nvlpubs.nist.gov/nistpubs/jres/049/jresv49n6p409_A1b.pdf
https://doi.org/10.6028/jres.049.044
[7] Fletcher, R. and Reeves, C. (1964) Function Minimization by Conjugate Gradients. The Computer Journal, 7, 149-154. https://doi.org/10.1093/comjnl/7.2.149
https://academic.oup.com/comjnl/article/7/2/149/335311
[8] Polak, E. and Ribie’re, G. (1969) Note sur la convergence de me’thodes de directions conjugue’s. Revue Francsis d’Infermatique et de recherché Operationnelle, 3, 35-43.
http://www.numdam.org/item?id=M2AN_1969__3_1_35_0
https://doi.org/10.1051/m2an/196903R100351
[9] Polyak, B.T. (1969) The Conjugate Gradient Method in Extreme Problems. USSR Computational Mathematics and Mathematical Physics, 9, 94-112.
https://www.researchgate.net/publication/222365587
https://doi.org/10.1016/0041-5553(69)90035-4
[10] Fletcher, R. (1987) Practical Methods of Optimization. 2nd Edition, John Wiley & Sons, Inc., Hoboken. https://www.wiley.com/en-ba
[11] Liu, Y. and Storey, C. (1991) Efficient Generalized Conjugate Gradient Algorithms. Part 1: Theory. Journal of Optimization Theory and Applications, 69, 129-137.
https://link.springer.com/article/10.1007/BF00940464
https://doi.org/10.1007/BF00940464
[12] Dai, Y.H. and Yuan, Y. (1999) A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization, 10, 177-182.
https://doi.org/10.1137/S1052623497318992
[13] Al-Naemi, Gh.M. and Hamed, E.T. (2013) New Conjugate Method with Wolfe Type Line Searches for Nonlinear Programming. Australian Journal of Basic and Applied Sciences, 7, 622-632. https://www.researchgate.net/publication/330686295
[14] Andrei, N. (2010) Acceleration Hybrid Conjugate Gradient Algorithm with Modified Secant Condition for Unconstrained Optimization. Numerical Algorithms, 54, 23-46. https://link.springer.com/article/10.1007/s11075-009-9321-0
https://doi.org/10.1007/s11075-009-9321-0
[15] Andrei, N. (2008) Another Nonlinear Conjugate Gradient Algorithm for Unconstrained Optimization. Optimization Methods and Software, 24, 89-104.
https://www.tandfonline.com/doi/abs/10.1080/10556780802393326
https://doi.org/10.1007/s11075-007-9152-9
[16] Yan, H., Chen, L. and Jiao, B. (2009) HS-LS-CD Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization. 2nd International Workshop on Computer Science and Engineering, Vol. 1, Qingdao, 28-30 October 2009, 264-268.
https://ieeexplore.ieee.org/document/5403315
https://doi.org/10.1109/WCSE.2009.667
[17] Li, S. and Sun, Z.B. (2010) A New Hybrid Conjugate Gradient Method and Its Global Convergence for Unconstrained Optimization. International Journal of Pure and Applied Mathematics, 63, 84-93.
https://ijpam.eu/contents/2010-63-3/4/index.html
[18] Djordjevic, S.S. (2019) New Hybrid Conjugate Gradient Method as a Convex Combination of LS and CD Methods. Filomat, 31, 1813-1825.
https://doi.org/10.2298/FIL1706813D
[19] Djordjevic, S.S. (2018) New Hybrid Conjugate Gradient Method as a Convex Combination of HS and FR Conjugate Gradient Methods. Journal of Applied Mathematics and Computation, 2, 366-378. https://doi.org/10.26855/jamc.2018.09.002
https://www.researchgate.net/publication/328125868
[20] Djordjević, S.S. (2019) New Hybrid Conjugate Gradient Method as a Convex Combination of LS and FR Conjugate Gradient Methods. Acta Mathematica Scientia, 39, 214-228.
[21] Zheng, X.Y., Dong, X.L., Shi, J.R. and Yang, W. (2019) Further Comment on Another Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization by Andrei. Numerical Algorithm, 1-6. https://doi.org/10.1007/s11075-019-00771-1
[22] Abdullahi, I. and Ahmad, R. (2016) Global Convergence Analysis of a Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems. Indian Journal of Science and Technology, 9, 1-9.
http://www.indjst.org/index.php/indjst/article/view/90175/74779
https://doi.org/10.17485/ijst/2016/v9i41/90175
[23] Livieris, I.E., Tampakas, V. and Pintelas, P. (2018) A Descent Hybrid Conjugate Gradient Method Based on the Memoryless BFGS Update. Numerical Algorithms, 79, 1169-1185. https://link.springer.com/article/10.1007/s11075-018-0479-1
https://doi.org/10.1007/s11075-018-0479-1
[24] Mandara, A.V., Mamat, M., Waziri, M.Y., Mohamed, M.A. and Yakubu, U.A. (2018) A New Conjugate Gradient Coefficient with Exact Line Search for Unconstrained Optimization. Far East Journal of Mathematical Sciences, 105, 193-206.
https://www.researchgate.net/publication/329522075
https://doi.org/10.17654/MS105020193
[25] Liu, J.K. and Li, S.J. (2014) New Hybrid Conjugate Gradient Method for Unconstrained Optimization. Applied Mathematics and Computation, 245, 36-43.
https://doi.org/10.1016/j.amc.2014.07.096
https://www.sciencedirect.com/science/article/abs/pii/S0096300314010637
[26] Yunus, R.B., Mamat, M. and Abashar, A. (2018) Comparative Study of Some New Conjugate Gradient Methods. UniSZA Research Conference (URC 2015), Kuala Terengganu, 14-16 April 2015, 616-621.
https://www.researchgate.net/publication/326301618
[27] Al-Naemi, Gh.M. and Ahmed, H.I. (2013) Modified Nonlinear Conjugate Gradient Algorithms with Application in Neural Networks. LAP Lambert Academic Publishing, Saarbrucken.
[28] Andrei, N. (2008) An Unconstrained Optimization Test Functions Collection. Advanced Modeling and Optimization, 10, 147-161.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.665.3152&rep=rep1&type=pdf
[29] Andrei, N. (2014) Test Functions for Unconstrained Global Optimization. 3-5.
[30] Bongartz, I., Conn, A.R., Gould, N. and Toint, P.L. (1995) CUTE: Constrained and Unconstrained Testing Environment. ACM Transactions on Mathematical Software, 21, 123-160. https://doi.org/10.1145/200979.201043
