A Modified Limited SQP Method For Constrained Optimization

Abstract

In this paper, a modified variation of the Limited SQP method is presented for constrained optimization. The proposed method uses not only gradient information but also function-value information. Moreover, it requires no additional function or derivative evaluations and hardly any extra storage or arithmetic operations. Under suitable conditions, global convergence is established.
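For intuition, the sketch below illustrates one common way function-value information can be folded into a quasi-Newton update without any extra function or derivative evaluations: a modified secant pair in the spirit of the Zhang-Deng-Chen and Wei-Li-Qi modified quasi-Newton equations, followed by a standard BFGS update. This is a minimal sketch under those assumptions, not the authors' Limited SQP scheme for the constrained problem; the function names and the small quadratic test data are hypothetical.

```python
import numpy as np

def modified_secant_pair(s, y, f_old, f_new, g_old, g_new):
    """Fold function-value information into the usual secant pair (s, y),
    in the spirit of modified quasi-Newton equations such as those of
    Zhang-Deng-Chen and Wei-Li-Qi.  Every quantity here is already
    available from the current and previous iterates, so no extra
    function or derivative evaluations are required."""
    theta = 2.0 * (f_old - f_new) + (g_new + g_old) @ s
    return y + (theta / (s @ s)) * s

def bfgs_update(B, s, y_star):
    """Standard BFGS update of the Hessian approximation B applied to the
    modified pair (s, y_star); the update is skipped unless the curvature
    condition s^T y_star > 0 holds, which keeps B positive definite."""
    sy = s @ y_star
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y_star):
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y_star, y_star) / sy

# Tiny illustration on a convex quadratic f(x) = 0.5 x^T A x - b^T x
# (purely hypothetical data, used only to exercise the two routines).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
g = lambda x: A @ x - b

x_old, x_new = np.zeros(2), np.array([0.3, 0.2])
s, y = x_new - x_old, g(x_new) - g(x_old)
y_star = modified_secant_pair(s, y, f(x_old), f(x_new), g(x_old), g(x_new))
B_new = bfgs_update(np.eye(2), s, y_star)
print(B_new)
```

On an exactly quadratic objective the correction term theta vanishes and the modified pair reduces to the ordinary (s, y), which is the expected behavior of this family of updates.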

Share and Cite:

Yuan, G., Lu, S. and Wei, Z. (2010) A Modified Limited SQP Method For Constrained Optimization. Applied Mathematics, 1, 8-17. doi: 10.4236/am.2010.11002.

Conflicts of Interest

The authors declare no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.