Intelligent Information Management, 2010, 2, 40-45
doi:10.4236/iim.2010.21005 Published Online January 2010 (http://www.scirp.org/journal/iim)
New Variants of Newton’s Method for Nonlinear
Unconstrained Optimization Problems
V. KANWAR1, Kapil K. SHARMA2, Ramandeep BEHL2
1University Institute of Engineering and Technology, Panjab University, Chandigarh 160 014, India
2Department of Mathematics, Panjab University, Chandigarh 160 014, India
Email: vmithil@yahoo.co.in, kapilks@pu.ac.in, ramanbehl87@yahoo.in
Abstract
In this paper, we propose new variants of Newton's method based on quadrature formulae and power means for
solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the
proposed family is three. Numerical comparisons are made to show the performance of the presented methods.
Furthermore, numerical experiments demonstrate that the logarithmic mean Newton's method outperforms
the classical Newton's method and other variants of Newton's method. MSC: 65H05.
Keywords: Unconstrained Optimization, Newton’s Method, Order of Convergence, Power Means, Initial Guess
1. Introduction
The celebrated Newton's method

$$x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)} \qquad (1)$$
used to approximate the optimum of a function is one of the most fundamental tools in computational
mathematics, operations research, optimization and control theory. It has many applications in management
science, industrial and financial research, chaos and fractals, dynamical systems, stability analysis,
variational inequalities and even equilibrium-type problems. Its role in optimization theory cannot be
overestimated, as the method is the basis for the most effective procedures in linear and nonlinear
programming. For a more detailed survey, one can refer to [1–4] and the references cited therein. The idea
behind Newton's method is to approximate the objective function locally by a quadratic function which agrees
with the function at a point. The process can then be repeated at the point that optimizes the approximate
function. Recently, many new modified Newton-type methods and their variants have been reported in the
literature [5–8].
One of the reasons for discussing one-dimensional optimization is that some of the iterative methods for
higher-dimensional problems involve steps of searching for extrema along certain directions in $\mathbb{R}^n$ [8].
Finding the step size $\lambda_n$ along the direction vector $d_n$ involves solving the subproblem of
minimizing $f(x_{n+1}) = f(x_n + \lambda_n d_n)$, which is a one-dimensional search problem in $\lambda_n$ [9].
Hence the one-dimensional methods are indispensable, and the efficiency of any method partly depends on
them [10].
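To make the role of this one-dimensional subproblem concrete, here is a minimal Python sketch (an illustration, not part of the paper; the quadratic objective, the fixed search direction and the finite-difference derivatives are assumptions chosen for demonstration) that minimizes $f(x_n + \lambda d_n)$ over $\lambda$ by Newton steps:

```python
import numpy as np

def line_search_newton(f, x, d, lam=0.0, tol=1e-12, max_iter=50, h=1e-5):
    """Minimize phi(lam) = f(x + lam*d) with Newton's method in lam.

    phi' and phi'' are approximated by central differences; in practice
    analytic derivatives would be preferred.
    """
    phi = lambda t: f(x + t * d)
    for _ in range(max_iter):
        d1 = (phi(lam + h) - phi(lam - h)) / (2 * h)                  # phi'(lam)
        d2 = (phi(lam + h) - 2 * phi(lam) + phi(lam - h)) / h**2      # phi''(lam)
        step = d1 / d2
        lam -= step
        if abs(step) < tol:
            break
    return lam

# Illustrative quadratic objective and a fixed direction (assumed data).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda z: 0.5 * z @ A @ z - b @ z
print(line_search_newton(f, np.zeros(2), np.array([1.0, 2.0])))
```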
The purpose of this work is to provide some alternative derivations through power means and to revisit some
well-known methods for solving nonlinear unconstrained optimization problems.
2. Review of Definition of Various Means
For a given nonzero finite real number $\beta$, the $\beta$th power mean $m_\beta$ of positive scalars $a$ and $b$ is defined as follows [11]:

$$m_\beta = \left(\frac{a^\beta + b^\beta}{2}\right)^{1/\beta}. \qquad (2)$$

It is easy to see that

for $\beta = -1$, $\quad m_{-1} = \dfrac{2ab}{a+b}$ (harmonic mean), (3)

for $\beta = \tfrac{1}{2}$, $\quad m_{1/2} = \left(\dfrac{\sqrt{a}+\sqrt{b}}{2}\right)^{2}$, (4)

for $\beta = 1$, $\quad m_{1} = \dfrac{a+b}{2}$ (arithmetic mean). (5)

For $\beta \to 0$, we have

$$\lim_{\beta \to 0} m_\beta = \sqrt{ab}, \qquad (6)$$

which is the so-called geometric mean of $a$ and $b$, and may be denoted by $m_g$.

For given positive scalars $a$ and $b$, some other well-known means are defined as

$$N = \frac{a + \sqrt{ab} + b}{3} \quad \text{(Heronian mean)}, \qquad (7)$$

$$C = \frac{a^2 + b^2}{a + b} \quad \text{(contra-harmonic mean)}, \qquad (8)$$

$$T = \frac{2\left(a^2 + ab + b^2\right)}{3\left(a + b\right)} \quad \text{(centroidal mean)}, \qquad (9)$$

and

$$L = \frac{a - b}{\log a - \log b} \quad \text{(logarithmic mean)}. \qquad (10)$$
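For reference, the means (2)–(10) are simple to evaluate. The following Python sketch (the helper names are introduced here for illustration and are not from the paper) implements them:

```python
import math

def power_mean(a, b, beta):
    """beta-th power mean of positive a, b; the beta -> 0 limit is the geometric mean."""
    if beta == 0:
        return math.sqrt(a * b)                               # geometric mean, Eq. (6)
    return ((a**beta + b**beta) / 2.0) ** (1.0 / beta)        # Eq. (2)

def heronian(a, b):        return (a + math.sqrt(a * b) + b) / 3.0            # Eq. (7)
def contra_harmonic(a, b): return (a**2 + b**2) / (a + b)                     # Eq. (8)
def centroidal(a, b):      return 2.0 * (a**2 + a*b + b**2) / (3.0 * (a + b)) # Eq. (9)
def logarithmic(a, b):                                                        # Eq. (10)
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

a, b = 2.0, 8.0
print(power_mean(a, b, -1), power_mean(a, b, 0), power_mean(a, b, 1))  # harmonic, geometric, arithmetic
print(heronian(a, b), contra_harmonic(a, b), centroidal(a, b), logarithmic(a, b))
```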
3. Variants of Newton’s Method for
Nonlinear Equations
Recently, some modified Newton's methods with cubic convergence have been developed by considering
different quadrature formulae for computing the integral arising in Newton's theorem [12]:

$$f(x) = f(x_n) + \int_{x_n}^{x} f'(t)\,dt. \qquad (11)$$
Weerakoon and Fernando [13] re-derived the classical Newton's method by approximating the integral in (11)
by the rectangular rule. Suppose that $x = x_{n+1}$ is the root of the equation $f(x) = 0$; we then put the
left side $f(x_{n+1}) = 0$ in the identity (11) and approximate the integral by the rectangular rule,
according to which

$$\int_{x_n}^{x_{n+1}} f'(t)\,dt \approx \left(x_{n+1} - x_n\right) f'(x_n). \qquad (12)$$
Therefore, from (11) and (12), they obtain the well-known Newton's method. Weerakoon and Fernando [13]
further used the trapezoidal approximation to the definite integral, according to which

$$\int_{x_n}^{x_{n+1}} f'(t)\,dt \approx \frac{1}{2}\left(x_{n+1} - x_n\right)\left[f'(x_n) + f'(x_{n+1})\right], \qquad (13)$$

to obtain the modified Newton's method given by

$$x_{n+1} = x_n - \frac{2 f(x_n)}{f'(x_n) + f'\!\left(x^{*}_{n+1}\right)}, \qquad (14)$$

where

$$x^{*}_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \qquad (15)$$

is the Newton iterate. In contrast to the classical Newton's method, this method uses the arithmetic mean of
$f'(x_n)$ and $f'\!\left(x^{*}_{n+1}\right)$ instead of $f'(x_n)$. Therefore, it is called the arithmetic
mean Newton's method [13]. By using different approximations to the integral in Newton's theorem (11),
different iterative formulas can be obtained for solving nonlinear equations [14].
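A minimal Python sketch of the arithmetic mean Newton's method (14)–(15) for a nonlinear equation $f(x)=0$ may look as follows (an illustration assuming analytic derivatives are available; the test equation and starting point are arbitrary choices, not from the paper):

```python
import math

def arithmetic_mean_newton_root(f, df, x, tol=1e-15, max_iter=100):
    """Weerakoon-Fernando iteration (14)-(15) for f(x) = 0."""
    for _ in range(max_iter):
        x_star = x - f(x) / df(x)                          # classical Newton predictor, Eq. (15)
        x_new = x - 2.0 * f(x) / (df(x) + df(x_star))      # arithmetic mean corrector, Eq. (14)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative equation: cos(x) - x = 0.
print(arithmetic_mean_newton_root(lambda t: math.cos(t) - t,
                                  lambda t: -math.sin(t) - 1.0,
                                  1.0))
```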
4. Variants of Newton’s Method for
Unconstrained Optimization Problems
Now we shall extend this idea to the case of unconstrained optimization problems. Suppose that the function
$f(x)$ is sufficiently differentiable and let $x^{*}$ be an extremum point of $f(x)$; then $x^{*}$ is a root of

$$f'(x) = 0. \qquad (16)$$

Extending Newton's theorem (11), we have

$$f'(x) = f'(x_n) + \int_{x_n}^{x} f''(t)\,dt. \qquad (17)$$

Approximating the integral in (17) by the rectangular rule, according to which

$$\int_{x_n}^{x_{n+1}} f''(t)\,dt \approx f''(x_n)\left(x_{n+1} - x_n\right), \qquad (18)$$

and using $f'(x_{n+1}) = 0$, we get

$$x^{N}_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}, \qquad f''(x_n) \neq 0. \qquad (19)$$

This is the well-known quadratically convergent Newton's method for unconstrained optimization problems.
Approximating the integral in (17) by the trapezoidal approximation

$$\int_{x_n}^{x_{n+1}} f''(t)\,dt \approx \frac{1}{2}\left(x_{n+1} - x_n\right)\left[f''(x_n) + f''(x_{n+1})\right], \qquad (20)$$

in combination with the approximation $f''(x_{n+1}) \approx f''\!\left(x_n - \dfrac{f'(x_n)}{f''(x_n)}\right) = f''\!\left(x^{N}_{n+1}\right)$ and $f'(x_{n+1}) = 0$, we get the following arithmetic mean Newton's method

$$x_{n+1} = x_n - \frac{2 f'(x_n)}{f''(x_n) + f''\!\left(x^{N}_{n+1}\right)} \qquad (21)$$

for unconstrained optimization problems. This formula was also derived independently by Kahya [5].
If we use the midpoint rule of integration for the integral in (20) (the Gauss-Legendre formula with one
knot) [15], then we obtain a new formula given by
$$x_{n+1} = x_n - \frac{f'(x_n)}{f''\!\left(\dfrac{x_n + x^{N}_{n+1}}{2}\right)}. \qquad (22)$$

This formula may be called the midpoint Newton's formula for unconstrained optimization problems. Now, for
generalization, approximating the functions in the correction factor of (21) by a power mean of
$a = f''(x_n)$ and $b = f''\!\left(x^{N}_{n+1}\right)$, we have

$$x_{n+1} = x_n - \frac{f'(x_n)}{\operatorname{sign}\!\left(f''(x_n)\right)\left[\dfrac{\left|f''(x_n)\right|^{\beta} + \left|f''\!\left(x^{N}_{n+1}\right)\right|^{\beta}}{2}\right]^{1/\beta}}, \qquad \beta \neq 0. \qquad (23)$$

This family may be called the $\beta$th power mean iterative family of Newton's method for unconstrained
optimization problems.

Special cases:
It is interesting to note that for different specific values of $\beta$, various new methods can be deduced
from Formula (23) as follows:

1) For $\beta = 1$ (arithmetic mean), Formula (23) corresponds to the cubically convergent arithmetic mean
Newton's method

$$x_{n+1} = x_n - \frac{2 f'(x_n)}{f''(x_n) + f''\!\left(x^{N}_{n+1}\right)}. \qquad (24)$$

2) For $\beta = -1$ (harmonic mean), Formula (23) corresponds to a cubically convergent harmonic mean
Newton's method

$$x_{n+1} = x_n - \frac{f'(x_n)}{2}\left[\frac{1}{f''(x_n)} + \frac{1}{f''\!\left(x^{N}_{n+1}\right)}\right]. \qquad (25)$$

3) For $\beta = 0$ (geometric mean), Formula (23) corresponds to a new cubically convergent geometric mean
Newton's method

$$x_{n+1} = x_n - \frac{f'(x_n)}{\operatorname{sign}\!\left(f''(x_n)\right)\sqrt{f''(x_n)\,f''\!\left(x^{N}_{n+1}\right)}}. \qquad (26)$$

4) For $\beta = \tfrac{1}{2}$, Formula (23) corresponds to a new cubically convergent method

$$x_{n+1} = x_n - \frac{4 f'(x_n)}{f''(x_n) + f''\!\left(x^{N}_{n+1}\right) + 2\operatorname{sign}\!\left(f''(x_n)\right)\sqrt{f''(x_n)\,f''\!\left(x^{N}_{n+1}\right)}}. \qquad (27)$$

5) For $\beta = 2$ (root mean square), Formula (23) corresponds to a new cubically convergent root mean
square Newton's method

$$x_{n+1} = x_n - \frac{f'(x_n)}{\operatorname{sign}\!\left(f''(x_n)\right)\sqrt{\dfrac{\left(f''(x_n)\right)^{2} + \left(f''\!\left(x^{N}_{n+1}\right)\right)^{2}}{2}}}. \qquad (28)$$

Some other new third-order iterative methods based on the Heronian mean, contra-harmonic mean, centroidal
mean, logarithmic mean, etc. can also be obtained from Formula (21):

6) A new cubically convergent iteration method based on the Heronian mean is

$$x_{n+1} = x_n - \frac{3 f'(x_n)}{f''(x_n) + \operatorname{sign}\!\left(f''(x_n)\right)\sqrt{f''(x_n)\,f''\!\left(x^{N}_{n+1}\right)} + f''\!\left(x^{N}_{n+1}\right)}. \qquad (29)$$

7) A new cubically convergent iteration method based on the contra-harmonic mean is

$$x_{n+1} = x_n - \frac{f'(x_n)\left[f''(x_n) + f''\!\left(x^{N}_{n+1}\right)\right]}{\left(f''(x_n)\right)^{2} + \left(f''\!\left(x^{N}_{n+1}\right)\right)^{2}}. \qquad (30)$$

8) A new cubically convergent iteration method based on the centroidal mean is

$$x_{n+1} = x_n - \frac{3 f'(x_n)\left[f''(x_n) + f''\!\left(x^{N}_{n+1}\right)\right]}{2\left[\left(f''(x_n)\right)^{2} + f''(x_n)\,f''\!\left(x^{N}_{n+1}\right) + \left(f''\!\left(x^{N}_{n+1}\right)\right)^{2}\right]}. \qquad (31)$$
9) A new cubically convergent iteration method based on the logarithmic mean is

$$x_{n+1} = x_n - \frac{f'(x_n)\left[\log f''(x_n) - \log f''\!\left(x^{N}_{n+1}\right)\right]}{f''(x_n) - f''\!\left(x^{N}_{n+1}\right)}. \qquad (32)$$
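The family (23) and its relatives (24)–(32) share one structure: replace $f''(x_n)$ in (19) by a chosen mean of $f''(x_n)$ and $f''\!\left(x^{N}_{n+1}\right)$. The Python sketch below implements the $\beta$th power mean family; the function names and the test derivatives are illustrative assumptions, not the authors' code, and the named-mean variants (29)–(32) follow the same pattern with the corresponding mean substituted:

```python
import math

def power_mean_newton(df, d2f, x, beta=1.0, tol=1e-15, max_iter=100):
    """beta-th power mean Newton family (23) for solving f'(x) = 0."""
    for _ in range(max_iter):
        a = d2f(x)
        x_newton = x - df(x) / a                        # predictor, Eq. (19)
        b = d2f(x_newton)
        s = math.copysign(1.0, a)                       # sign(f''(x_n)) in Eq. (23)
        if beta == 0.0:
            mean = s * math.sqrt(a * b)                 # geometric mean limit, Eq. (26)
        else:
            mean = s * ((abs(a) ** beta + abs(b) ** beta) / 2.0) ** (1.0 / beta)
        x_new = x - df(x) / mean
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example 2 of Table 1: f(x) = exp(x) - 3x^2, so f'(x) = exp(x) - 6x and f''(x) = exp(x) - 6.
df  = lambda t: math.exp(t) - 6.0 * t
d2f = lambda t: math.exp(t) - 6.0
for beta in (-1.0, 0.0, 0.5, 1.0, 2.0):
    print(beta, power_mean_newton(df, d2f, 1.0, beta))
```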
5. Convergence Analysis
Theorem 5.1. Let $x^{*} \in I$ be an optimum point of a sufficiently differentiable function
$f : I \subseteq \mathbb{R} \to \mathbb{R}$ for an open interval $I$. If the initial guess $x_0$ is
sufficiently close to $x^{*}$, then for $\beta \in \mathbb{R}$, the family of methods defined by (23) has
cubic convergence with the following error equation:

$$e_{n+1} = \left[\frac{9\left(1+\beta\right)}{2}\,c_3^{2} + 2c_4\right] e_n^{3} + O\!\left(e_n^{4}\right), \qquad (33)$$

where $e_n = x_n - x^{*}$ and $c_k = \dfrac{f^{(k)}(x^{*})}{k!\,f''(x^{*})}$, $k = 3, 4, \ldots$.
Proof: Let $x^{*}$ be an extremum of the function $f(x)$ (i.e. $f'(x^{*}) = 0$ and $f''(x^{*}) \neq 0$).
Since $f(x)$ is a sufficiently differentiable function, expanding $f(x_n)$, $f'(x_n)$ and $f''(x_n)$ about
$x^{*}$ by means of Taylor's expansion, we have

$$f(x_n) = f(x^{*}) + \frac{1}{2!} f''(x^{*}) e_n^{2} + \frac{1}{3!} f'''(x^{*}) e_n^{3} + O\!\left(e_n^{4}\right), \qquad (34)$$

$$f'(x_n) = f''(x^{*})\left[e_n + 3c_3 e_n^{2} + 4c_4 e_n^{3} + O\!\left(e_n^{4}\right)\right], \qquad (35)$$

$$f''(x_n) = f''(x^{*})\left[1 + 6c_3 e_n + 12 c_4 e_n^{2} + O\!\left(e_n^{3}\right)\right]. \qquad (36)$$

Using (35) and (36) in (19), we get

$$f''\!\left(x^{N}_{n+1}\right) = f''(x^{*})\left[1 + 18 c_3^{2} e_n^{2} + 6\left(8 c_3 c_4 - 18 c_3^{3}\right) e_n^{3} + O\!\left(e_n^{4}\right)\right]. \qquad (37)$$
Case 1) For $\beta \in \mathbb{R}\setminus\{0\}$, Formula (23) may be written as

$$x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}\left[\frac{1 + \left(\dfrac{f''\!\left(x^{N}_{n+1}\right)}{f''(x_n)}\right)^{\beta}}{2}\right]^{-1/\beta}. \qquad (38)$$

Using the binomial theorem and the Formulae (35), (36) and (37) in (38), we finally obtain

$$e_{n+1} = \left[\frac{9\left(1+\beta\right)}{2}\,c_3^{2} + 2c_4\right] e_n^{3} + O\!\left(e_n^{4}\right). \qquad (39)$$
Case 2) For $\beta = 0$, Formula (23) can be written as

$$x_{n+1} = x_n - \frac{f'(x_n)}{\operatorname{sign}\!\left(f''(x_n)\right)\sqrt{f''(x_n)\,f''\!\left(x^{N}_{n+1}\right)}}. \qquad (40)$$

Upon using (35), (36) and (37), we obtain

$$\frac{f'(x_n)}{\operatorname{sign}\!\left(f''(x_n)\right)\sqrt{f''(x_n)\,f''\!\left(x^{N}_{n+1}\right)}} = e_n - \left(\frac{9}{2}\,c_3^{2} + 2c_4\right) e_n^{3} + O\!\left(e_n^{4}\right). \qquad (41)$$

From (40) and (41), we obtain

$$e_{n+1} = \left(4.5\,c_3^{2} + 2c_4\right) e_n^{3} + O\!\left(e_n^{4}\right). \qquad (42)$$
Therefore, it can be concluded that for all $\beta \in \mathbb{R}$, the $\beta$th power mean iterative
family (23) for unconstrained optimization problems converges cubically. Along similar lines, we can prove
the convergence of the remaining Formulae (29)–(32).
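The cubic order can also be corroborated numerically by estimating the computational order of convergence $\rho \approx \ln\left(|e_{n+1}|/|e_n|\right)/\ln\left(|e_n|/|e_{n-1}|\right)$ from successive errors. The Python sketch below (an illustrative check, not part of the proof) does this for the geometric mean variant (26) on Example 2 of Table 1, with the reference optimum computed by the classical iteration:

```python
import math

df  = lambda t: math.exp(t) - 6.0 * t          # f'(x) for f(x) = exp(x) - 3x^2 (Example 2, Table 1)
d2f = lambda t: math.exp(t) - 6.0              # f''(x)

def gm_step(x):
    """One step of the geometric mean variant (26)."""
    xN = x - df(x) / d2f(x)                    # predictor, Eq. (19)
    return x - df(x) / (math.copysign(1.0, d2f(x)) * math.sqrt(d2f(x) * d2f(xN)))

# Reference optimum obtained by running the classical iteration (19) to machine precision.
x_ref = 1.0
for _ in range(50):
    x_ref -= df(x_ref) / d2f(x_ref)

x = 0.5
errors = [abs(x - x_ref)]
for _ in range(2):
    x = gm_step(x)
    errors.append(abs(x - x_ref))

# Computational order of convergence; the printed value should be close to 3.
rho = math.log(errors[2] / errors[1]) / math.log(errors[1] / errors[0])
print(rho)
```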
6. Further Modifications of the Family (23)
The two main practical deficiencies of Newton's method, the need for analytic derivatives and the possible
failure to converge to the solution from poor starting points, are the key issues in unconstrained
optimization problems. Family (23) and the other methods (29)–(32) are also variants of Newton's method and
will fail miserably if, at any stage of the computation, the second-order derivative of the function is
either zero or very close to zero. This defect of Newton's method can easily be eliminated by a simple
modification of the iteration process. Applying Newton's method (19) to the modified function

$$e^{p\left(x - x_n\right)} f'(x), \qquad (43)$$

where $p \in \mathbb{R}$, which has better behavior as well as the same optimum point as $f(x)$, we get the
modified Newton's method given by

$$x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n) + p\,f'(x_n)}. \qquad (44)$$
This is a one-parameter family of Newton's iteration methods for unconstrained optimization problems and,
unlike Newton's method, it does not fail if $f''(x_n) = 0$. In order to obtain quadratic convergence, the
sign of the entity $p$ should be chosen so that the denominator is largest in magnitude. Along similar
lines, we can also modify some of the above-mentioned cubically convergent variants of Newton's method for
unconstrained optimization problems. Kahya [5] has also derived a similar formula by using a different
approach based on the ideas of Mamta et al. [16]. Similar approaches for finding the simple root of a
nonlinear equation or system of equations have been used by Ben-Israel [17] and Kanwar and Tomar [18].
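A Python sketch of the modified iteration (44), choosing the sign of $p$ at every step so that the denominator is largest in magnitude (an illustration under assumed derivative routines and test data, not the authors' implementation):

```python
import math

def modified_newton(df, d2f, x, p=1.0, tol=1e-15, max_iter=100):
    """One-parameter modified Newton iteration (44); the sign attached to p is
    selected at every step so that the denominator is largest in magnitude."""
    for _ in range(max_iter):
        denom_plus  = d2f(x) + p * df(x)
        denom_minus = d2f(x) - p * df(x)
        denom = denom_plus if abs(denom_plus) >= abs(denom_minus) else denom_minus
        x_new = x - df(x) / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = cos x + (x - 2)^2 (Example 3 of Table 1); here f''(x) = 2 - cos x never
# vanishes, but the iteration remains well defined even where it would.
df  = lambda t: 2.0 * (t - 2.0) - math.sin(t)
d2f = lambda t: 2.0 - math.cos(t)
print(modified_newton(df, d2f, 3.0))
```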
7. Numerical Results
In this section, we shall present the numerical results obtained by employing the iterative methods, namely
Newton's method (NM), the arithmetic mean Newton's method (AMN), the harmonic mean Newton's method (HMN),
the geometric mean Newton's method (GMN), method (27), the root mean square Newton's method (RMSNM), the
Heronian mean Newton's method (HERNM), the contra-harmonic mean Newton's method (CNM), the centroidal mean
Newton's method (TMNM) and the logarithmic mean Newton's method (LMNM), respectively, to compute the
extremum of the functions given in Table 1. We use the same functions as Kahya [6,7]. The results are
summarized in Table 2. We use $\epsilon = 10^{-15}$ as the tolerance, and computations have been performed
in double precision arithmetic. The following stopping criteria are used for the computer programs:

1) $\left|x_{n+1} - x_n\right| < \epsilon$,   2) $\left|f'(x_{n+1})\right| < \epsilon$.

Table 1. Test problems

No.  Example                                    Initial guesses   Optimum point
1    x^4 - 8.5x^3 - 31.0625x^2 - 7.59x + 45     7.0, 10.0         8.278462409973145
2    e^x - 3x^2                                 -1.0, 1.0         0.204481452703476
3    cos x + (x - 2)^2                          1.0, 3.0          2.35424280166260
4    10.2/x + 6.2x^3,  x > 0                    0.5, 2.0          0.860054146289254
5    3774.522/x + 2.27x - 181.529,  x > 0       32.0, 45.0        40.777259826660156

Table 2. Comparison table (number of iterations; two rows per example, one for each initial guess in Table 1)

Example  NM  AMN  HMN  GMN  Method (27)  RMSNM  HERNM  CNM  TMNM  LMNM
1         5    4    3    5       4          4      4     4     4     3
          5    4    3    4       4          4      4     4     4     3
2         5    3    4    5       3          4      3     4     3     3
          5    3    4    5       4          4      4     3     3     2
3         5    4    4    5       4          4      4     4     4     2
          4    3    3    4       3          3      3     3     3     2
4         6    4    4    6       4          4      4     5     4     3
          5    4    4    5       4          4      4     4     4     3
5         5    4    4    5       4          4      4     4     4     3
          5    4    3    5       4          4      4     4     4     3
8. Conclusions
It is well known that quadrature formulae play an important and significant role in the evaluation of
definite integrals. We have shown that these quadrature formulae can also be used to develop iterative
methods for solving unconstrained optimization problems. Further, this work proposes a family of Newton-type
methods based on nonlinear means which can be used to solve unconstrained optimization problems efficiently.
The proposed family (23) unifies some of the best-known third-order methods for unconstrained optimization
problems and also provides many other new methods. All of the proposed third-order methods require three
function evaluations per iteration, so that their efficiency index is $3^{1/3} \approx 1.44$, which is
better than that of Newton's method, $2^{1/2} \approx 1.41$. Finally, the experimental results showed that
the harmonic mean and logarithmic mean Newton's methods are the best among the proposed iteration methods.
9. Acknowledgement
We would like to thank the anonymous referee for his or
her valuable comments on improving the original manu-
script. We are also thankful to Professor S.K. Tomar,
Department of Mathematics, Panjab University, Chandi-
garh for several useful suggestions. Ramandeep Behl
further acknowledges the financial support of CSIR,
New Delhi.
10. References
[1] B. T. Polyak, “Newton's method and its use in optimization,” European Journal of Operational Research,
Vol. 181, pp. 1086–1096, 2007.
[2] Y. P. Laptin, “An approach to the solution of nonlinear
unconstrained optimization problems (brief communica-
tions),” Cybernetics and System Analysis, Vol. 45, No. 3,
pp. 497–502, 2009.
[3] G. Gundersen & T. Steihaug, “On large-scale unconstrained optimization problems and higher order
methods,” Optimization Methods & Software, DOI: 10.1080/10556780903239071, pp. 1–22, 2009.
[4] H. B. Zhang, “On the Halley class of methods for unconstrained optimization problems,” Optimization
Methods & Software, DOI: 10.1080/10556780902951643, pp. 1–10, 2009.
[5] E. Kahya, “Modified secant-type methods for uncon-
strained optimization,” Applied Mathematics and Com-
putation, Vol. 181, No. 2, pp. 1349–1356, 2007.
[6] E. Kahya & J. Chen, “A modified Secant method for
unconstrained optimization,” Applied Mathematics and
Computation, Vol. 186, No. 2, pp. 1000–1004, 2007.
[7] E. Kahya, “A class of exponential quadratically conver-
gent iterative formulae for unconstrained optimization,”
Applied Mathematics and Computation, Vol. 186, pp.
1010–1017, 2007.
[8] C. L. Tseng, “A Newton-type univariate optimization algorithm for locating nearest extremum,” European
Journal of Operational Research, Vol. 105, pp. 236–246, 1998.
[9] E. Kahya, “A new unidimensional search method for
optimization: Linear interpolation method,” Applied
Mathematics and Computation, Vol. 171, No. 2, pp. 912–
926, 2005.
[10] M. V. C. Rao & N. D. Bhat, “A new unidimensional search scheme for optimization,” Computers & Chemical
Engineering, Vol. 15, No. 9, pp. 671–674, 1991.
[11] P. S. Bullen, “The power means,” in Handbook of Means and Their Inequalities, Kluwer Academic
Publishers, Dordrecht, Netherlands, 2003.
[12] J. E. Dennis & R. B. Schnabel, “Numerical methods for unconstrained optimization and nonlinear
equations,” Prentice-Hall, New York, 1983.
[13] S. Weerakoon & T. G. I. Fernando, “A variant of Newton's method with accelerated third-order
convergence,” Applied Mathematics Letters, Vol. 13, pp. 87–93, 2000.
[14] M. Frontini & E. Sormani, “Some variants of Newton's method with third-order convergence,” Applied
Mathematics and Computation, Vol. 140, pp. 419–426, 2003.
[15] W. Gautschi, “Numerical analysis: an introduction,” Birk-
häuser, Boston, Inc., Boston, 1997.
[16] Mamta, V. Kanwar, V. K. Kukreja & S. Singh, “On a class of quadratically convergent iteration
formulae,” Applied Mathematics and Computation, Vol. 166, pp. 633–637, 2005.
[17] A. Ben-Israel, “Newton's method with modified functions,” Contemporary Mathematics, Vol. 204,
pp. 39–50, 1997.
[18] V. Kanwar & S. K. Tomar, “Exponentially fitted variants
of Newton’s method with quadratic and cubic conver-
gence,” International Journal of Computer Mathematics,
Vol. 86, No. 9, pp. 603–611, 2009.