Several New Line Search Methods and Their Convergence

Abstract

In this paper, we propose several new line search rules for solving unconstrained minimization problems. These rules accept a wider range of step sizes than the corresponding original rules and supply an adequate initial step size at each iteration. We prove that the resulting line search algorithms are globally convergent under mild conditions. We also show that the search direction plays the key role in line search methods, while the step-size rule mainly guarantees global convergence in general cases. The convergence rate of these methods is investigated as well. Numerical results show that the new line search algorithms are effective in practical computation.
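The abstract describes new step-size rules without reproducing them. For context only, below is a minimal Python sketch of the classical Armijo backtracking rule, one of the standard rules that line search rules of this kind generalize; the function name armijo_backtracking and the parameters alpha0, rho, and c1 are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, rho=0.5, c1=1e-4, max_iter=50):
    """Classical Armijo backtracking rule (illustrative sketch only).

    Shrinks a trial step alpha until the sufficient-decrease condition
        f(x + alpha * d) <= f(x) + c1 * alpha * grad_f(x)^T d
    holds; d is assumed to be a descent direction (grad_f(x)^T d < 0).
    """
    fx = f(x)
    slope = grad_f(x) @ d              # directional derivative, negative for descent
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * slope:
            return alpha               # sufficient decrease achieved
        alpha *= rho                   # reject and shrink the trial step
    return alpha                       # fallback after max_iter shrinkages

# Usage: one steepest-descent step on a simple quadratic f(x) = 0.5 * ||x||^2.
f = lambda x: 0.5 * (x @ x)
grad_f = lambda x: x
x = np.array([3.0, -4.0])
d = -grad_f(x)                         # steepest-descent direction
alpha = armijo_backtracking(f, grad_f, x, d)
x_new = x + alpha * d                  # accepted iterate
```

Per the abstract, the paper's new rules widen the interval of acceptable step sizes relative to classical rules of this kind and also prescribe the initial trial step at each iteration.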

Share and Cite:

Z. Shi, K. Kendricks, Z. Xu and Y. Tang, "Several New Line Search Methods and Their Convergence," American Journal of Operations Research, Vol. 3, No. 5, 2013, pp. 421-430. doi: 10.4236/ajor.2013.35040.

Conflicts of Interest

The authors declare no conflicts of interest.

