Modified Piyavskii's Global One-Dimensional Optimization of a Differentiable Function

Piyavskii's algorithm maximizes a univariate function satisfying a Lipschitz condition. We propose a modified Piyavskii's sequential algorithm which maximizes a univariate differentiable function f by iteratively constructing an upper bounding piecewise concave function Φ of f and evaluating f at a point where Φ reaches its maximum. We compare the number of iterations needed by the modified Piyavskii's algorithm (n_C) to obtain a bounding piecewise concave function Φ whose maximum is within ε of the globally optimal value f_opt with the number required by the reference sequential algorithm (n_ref). The main result is that n_C ≤ 2 n_ref + 1, and this bound is sharp. We also show that the number of iterations needed by the modified Piyavskii's algorithm to obtain a globally ε-optimal value together with a corresponding point (n_B) satisfies n_B ≤ 4 n_ref + 1. Lower and upper bounds for n_ref are obtained as functions of f(x), ε, M_1 and M_0, where M_0 is the constant defined by M_0 = sup_{x ∈ [a, b]} |f″(x)| and M_1 ≥ M_0 is an estimate of M_0.


Introduction
We consider the following general global optimization problem for a function f defined on a compact set X ⊂ R^m:

(P) Find f_opt = max_{x ∈ X} f(x), together with a point x_opt ∈ X such that f(x_opt) ≥ f_opt − ε, where ε is a small positive constant.
Many recent papers and books propose approaches for the numerical resolution of problem (P) and give classifications of the problems and of their methods of resolution. For instance, the book of Horst and Tuy [1] provides a general discussion of deterministic algorithms. Piyavskii [2,3] proposes a deterministic sequential method which solves (P) by iteratively constructing an upper bounding function F of f and evaluating f at a point where F reaches its maximum. Shubert [4], Basso [5,6], Schoen [7], Shen and Zhu [8] and Horst and Tuy [9] study special aspects of its application through examples involving functions satisfying a Lipschitz condition and propose other formulations of Piyavskii's algorithm. Sergeyev [10] uses smooth auxiliary functions to build an upper bounding function, and Jones et al.
[11] consider global optimization without the Lipschitz constant. Multidimensional extensions are proposed by Danilin and Piyavskii [12], Mayne and Polak [13], Mladineo [14] and Meewella and Mayne [15]; Evtushenko [16], Galperin [17] and Hansen, Jaumard and Lu [18,19] propose other algorithms for problem (P) or its multidimensional extension. Hansen and Jaumard [20] summarize and discuss the algorithms proposed in the literature and present them in a simplified and uniform way in a high-level computer language. Another aspect of the application of Piyavskii's algorithm has been developed by Brent [21]; there, the requirement is that the function is defined on a compact interval and has a bounded second derivative. Jacobsen and Torabi [22] assume that the differentiable function is the sum of a convex and a concave function. Recently, a multivariate extension was proposed by Breiman and Cutler [23], who use the Taylor development of f to build an upper bounding function of f; Baritompa and Cutler [24] give a variation and an acceleration of Breiman and Cutler's method. In this paper, we suggest a modified Piyavskii's sequential algorithm which maximizes a univariate differentiable function f. The theoretical study of the number of iterations of Piyavskii's algorithm was initiated by Danilin [25]; Danilin's result was improved by Hansen, Jaumard and Lu [26]. In the same way, we develop a reference algorithm in order to study the relationship between n_B and n_ref, where n_B denotes the number of iterations used by the modified Piyavskii's algorithm to obtain an ε-optimal value, and n_ref denotes the number of iterations used by a reference algorithm to find an upper bounding function whose maximum is within ε of the maximum of f. Our main result is that n_B ≤ 4 n_ref + 1. The last purpose of this paper is to derive bounds for n_ref. Lower and upper bounds for n_ref are obtained as functions of f(x), ε, M_1 and M_0, where M_0 is the constant defined by M_0 = sup_{x ∈ [a, b]} |f″(x)| and M_1 ≥ M_0 is an estimate of M_0.

Upper Bounding Piecewise Concave Function
Theorem 1. Let f be a differentiable function on [a, b] and suppose that there is a positive constant M such that

|f′(x) − f′(y)| ≤ M |x − y| for all x, y in [a, b]. (1)

Let

Ψ(x) = ((b − x) f(a) + (x − a) f(b)) / (b − a) + (M/2)(x − a)(b − x).

Then Ψ is a concave function and f(x) ≤ Ψ(x) for all x in [a, b].

If the function f is not differentiable, we generalize the above result by the following one:

Theorem 2. Let f be continuous on [a, b] and suppose that there is a positive constant M such that for every h > 0 and every x with x − h and x + h in [a, b],

f(x + h) − 2 f(x) + f(x − h) ≤ M h². (2)

Then the conclusion of Theorem 1 still holds.

Proof. Without loss of generality, we assume that f(a) = f(b) = 0 and M = 0; otherwise, we consider the function f(x) − Ψ(x) instead of the function f(x). It then suffices to prove that f(x) ≤ 0 on [a, b]. Suppose by contradiction that f* = sup_{x ∈ [a, b]} f(x) > 0. Since M = 0, this contradicts the hypothesis (2). Hence f* ≤ 0. ∎
Remark 2. Since the above algorithm is based entirely on the result of Theorem 1, it is clear that the same algorithm can be adopted for functions satisfying condition (2).
If f is twice continuously differentiable, conditions (1) and (2) are equivalent.
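As a quick illustration, the following sketch (ours, not from the paper; it assumes the interpolation form of Ψ stated in Theorem 1, and the function name psi is illustrative) checks numerically that Ψ dominates f on a sample interval:

```python
import numpy as np

def psi(x, a, b, fa, fb, M):
    """Concave bound of Theorem 1: linear interpolation of f between a and b
    plus the concave correction term (M/2)(x - a)(b - x)."""
    return fa + (fb - fa) * (x - a) / (b - a) + 0.5 * M * (x - a) * (b - x)

# f(x) = sin(x) on [0, 3] satisfies condition (1) with M = 1, since |f''| <= 1.
a, b, M = 0.0, 3.0, 1.0
x = np.linspace(a, b, 1001)
assert np.all(psi(x, a, b, np.sin(a), np.sin(b), M) >= np.sin(x) - 1e-12)
```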

Modified Piyavskii's Algorithm
We call a subproblem the set of information associated with a subinterval [a_i, b_i]: the endpoints a_i and b_i, the function values f(a_i) and f(b_i), and the bounding concave function Φ_i. The algorithm that we propose memorizes all the subproblems and, in particular, stores the maximum Φ_i^max of each bounding concave function Φ_i over [a_i, b_i] in a structure called a heap. A heap is a data structure that allows access to the maximum of the k values that it stores in constant time and is updated in O(log₂ k) time.
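For example, Python's heapq module realizes exactly this behavior; since it is a min-heap, one stores negated bounds to retrieve the largest Φ_i^max first (this fragment is illustrative only):

```python
import heapq

H = []
heapq.heappush(H, (-1.7, "subproblem 1"))   # O(log k) insertion
heapq.heappush(H, (-2.3, "subproblem 2"))
print(-H[0][0])                              # O(1) access to the maximum: 2.3
neg_bound, name = heapq.heappop(H)           # O(log k) extraction of the max
print(-neg_bound, name)                      # -> 2.3 subproblem 2
```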
In the following discussion, we denote the heap by H. Modified Piyavskii's algorithm can be described as follows. At each iteration, the subproblem with the largest value Φ_n^max is retrieved from H, the function f is evaluated at a point x_n where Φ_n reaches its maximum over [a_n, b_n], and the partial upper bounding function spanning [a_n, b_n] is deleted and replaced by two partial upper bounding functions, the first one spanning [a_n, x_n], denoted by Φ_nl(x), and the second one spanning [x_n, b_n], denoted by Φ_nr(x) (Figure 1). Let x̂_n denote the best evaluation point found after n iterations; the sequences of upper bounds Φ_n^max and of gaps are nonincreasing, and f(x̂_n) is nondecreasing.

Proposition 1. For n ≥ 2, the upper bounding function Φ_n(x) is easily deduced from Φ_{n−1}(x).

Proof. In the case where the point considered is in [a_n, x_n], then from Remark 1 the maximum of Φ_nl over [a_n, x_n] is obtained by substitution in expression (3), which gives the result. We show in the same way the corresponding expression for the maximum of Φ_nr over [x_n, b_n]. ∎

Modified Piyavskii's algorithm obtains an upper bound on f within ε of the maximum value of f(x) when the gap Φ_n^max − f_opt is not larger than ε; however, f_opt and x_opt are unknown. So modified Piyavskii's algorithm stops only after a solution whose value is within ε of the upper bound has been found, i.e., when the error Φ_n^max − f(x̂_n) is not larger than ε. The number of iterations needed to satisfy each of these conditions is studied below.
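To make the scheme concrete, here is a minimal self-contained sketch (ours, not the authors' implementation) of the heap-driven iteration, assuming the quadratic bound Ψ of Theorem 1 and the stopping rule "error not larger than ε"; the names modified_piyavskii and subproblem are illustrative:

```python
import heapq

def modified_piyavskii(f, a, b, M, eps=1e-6, max_iter=100000):
    """Maximize f on [a, b] under condition (1): |f'(x) - f'(y)| <= M|x - y|, M > 0.

    Each subinterval [l, r] carries the concave quadratic bound of Theorem 1,
        Psi(x) = linear interpolation of f between l and r + (M/2)(x - l)(r - x),
    and the heap H stores -max Psi so heappop returns the subproblem with
    the largest upper bound first.
    """
    def subproblem(l, fl, r, fr):
        s = (fr - fl) / (r - l)                    # interpolation slope
        x = min(max((l + r) / 2.0 + s / M, l), r)  # maximizer of Psi, clamped
        bound = fl + s * (x - l) + 0.5 * M * (x - l) * (r - x)
        return (-bound, l, fl, r, fr, x)

    fa, fb = f(a), f(b)
    best_x, best_f = (a, fa) if fa >= fb else (b, fb)
    H = [subproblem(a, fa, b, fb)]
    for _ in range(max_iter):
        neg_bound, l, fl, r, fr, x = heapq.heappop(H)
        if -neg_bound - best_f <= eps:             # error <= eps: stop
            break
        fx = f(x)
        if fx > best_f:
            best_x, best_f = x, fx
        # Split [l, r] at the evaluation point x into the two new subproblems
        # carrying the partial upper bounding functions Phi_nl and Phi_nr.
        if x > l:
            heapq.heappush(H, subproblem(l, fl, x, fx))
        if x < r:
            heapq.heappush(H, subproblem(x, fx, r, fr))
    return best_x, best_f
```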

Convergence of the Algorithm
We now study the error and the gap of Φ_n in modified Piyavskii's algorithm as a function of the number n of iterations. The following proposition shows how they decrease as the number of iterations increases and also provides a relationship between them.
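One relationship between the two quantities is immediate from the definitions, since f(x̂_n) ≤ f_opt (this observation is ours, stated here for clarity):

```latex
\underbrace{\Phi_n^{\max} - f_{\mathrm{opt}}}_{\text{gap}}
\;\le\;
\underbrace{\Phi_n^{\max} - f(\hat{x}_n)}_{\text{error}} .
```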

Proposition 2. Let Φ_n^max − f(x̂_n) and Φ_n^max − f_opt denote, respectively, the error and the gap after n iterations of modified Piyavskii's algorithm. Then both quantities are nonincreasing, and the gap never exceeds the error.

Proof. Suppose the subproblem attaining the maximum Φ_n^max has been obtained by splitting; the evaluation point is then one of the endpoints of the interval over which the new partial upper bounding function is defined, and from Proposition 1 we get the expressions of the maxima of the two new pieces. Moreover, every subproblem present after n iterations must belong to the second generation for the first time after no more than another n − 1 iterations of the modified Piyavskii's algorithm, i.e., at an iteration m with m ≤ 2n − 1. To prove the first point of 1) we consider two cases; in each case we obtain an inequality, and we deduce the claim from these two inequalities. We follow the same steps to prove the second point of 1). ∎


Modified Piyavskii's algorithm is convergent, i.e., either it terminates in a finite number of iterations or the error tends to zero as the number of iterations tends to infinity.

Proof. This is an immediate consequence of the definition of the error and of i) of Proposition 2. ∎

Description of the Reference Algorithm

The reference algorithm is then defined as follows (see Figures 2 and 3).

Minimum Number of Evaluation Points

Proposition 4 characterizes, through relation (4), the minimum number of evaluation points needed to obtain a cover whose maximum is within ε of f_opt.

Proof. The proof is by induction on n_ref. If n_ref = 2, (4) holds; assume now that (4) holds for n_ref = k ≥ 2 and consider n_ref = k + 1. The modified Piyavskii's algorithm then has to be implemented on the two subintervals [x_1, x_3] and [x_3, x_2], and there are two cases which need to be discussed separately:

• Case 1 (see Figure 5). There is a subinterval containing all the n_ref evaluation points of the best bounding function. In this case the induction hypothesis applies to that subinterval, and hence the result holds.

• Case 2: y_1 < b (see Figure 4). This situation would imply that n_ref = 1, which is not the case according to the assumption n_ref ≥ 2. Therefore, this case never occurs. ∎

As noticed in Remark 3, the modified Piyavskii's algorithm does not necessarily stop as soon as a cover is found as described in Proposition 4, but only when the error does not exceed ε. We now study the number of evaluation points necessary for this to happen.
Proposition 5. For a given tolerance ε, let n_ref be the number of evaluation points in a reference bounding function of f and n_B the number of iterations after which the modified Piyavskii's algorithm stops. We then have

n_B ≤ 4 n_ref + 1. (5)

Proof. After at most 4 n_ref + 1 iterations the error is not larger than ε, which shows that the termination criterion of modified Piyavskii's algorithm is satisfied. This proves (5). ∎

Bounds on the Number of Evaluation Points
In the previous section, we compared the number of function evaluations in the modified Piyavskii's algorithm and in the reference algorithm. We now evaluate the number n_B of function evaluations of the modified Piyavskii's algorithm itself. To achieve this, we derive bounds on n_ref, from which bounds on n_B are readily obtained.

Proposition 6. Suppose f satisfies condition (1) on [a, b]. Then the number n_ref of evaluation points in a reference cover Φ_ref using the constant M_1 ≥ M_0 is bounded by the inequalities (7).

Figure 6. None of the subintervals contains all the n_ref evaluation points.

Proof. We suppose that the reference cover Φ_ref has n_ref − 1 partial upper bounding functions defined by the evaluation points y_1, …, y_{n_ref}.
We consider an arbitrary partial upper bounding function and the corresponding subinterval [y_i, y_{i+1}]. To simplify, we move the origin to the point (y_i, f(y_i)), and we assume z ≥ 0 (see Figure 7).

Figure 7. The reference cover Φ_ref has n_ref − 1 partial upper bounding functions.
Let Φ_i be the partial upper bounding function defined on [y_i, y_{i+1}] and x the point where it reaches its maximum. Let g_1 be the function defined on [y_i, y_{i+1}] accordingly, and let x_1 and x_2 be the roots of the equation F_1(x) = 0. Writing F_1 in factored form in terms of x_1 and x_2, and noting that θ ≥ 0 and that g_1 reaches its minimum at the point θ, we obtain a lower bound on the length of the subinterval. Now consider the function G, defined and continuous on [y_i, y_{i+1}]; its expression implies (8), which proves the first inequality of (7). For the second inequality, suppose we are in case 2 and let X_1 and X_2 be the roots of the corresponding equation.

Computational Experiences
In this section, we report the results of computational experiments performed on fourteen test functions (see Tables 1 and 2). Most of these functions are drawn from the Lipschitz optimization literature (see Hansen and Jaumard [20]). For the first three test functions, we observe that the influence of the parameter M is not very important, since the number of function evaluations does not increase appreciably for the same precision ε.

Consider the function f(x) = x defined on [0, 1], where the constant M is equal to 4. For ε = …, we have n_C = n_ref + 1 (see Figure 6). The performance of the modified Piyavskii's algorithm is measured in terms of N_C, the number of function evaluations. The number N_C is compared with n_ref, the number of function evaluations required by the reference sequential algorithm. We observe that N_C is on average only about 1.35 times larger than n_ref. More precisely, we have the following estimation.
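For illustration, the modified_piyavskii sketch given in the algorithm section can be run on a classical test function drawn from the Lipschitz optimization literature; this run is ours and is not one of the paper's tabulated experiments:

```python
import math

# Classical test function: f(x) = sin(x) + sin(10x/3), here maximized on [2.7, 7.5].
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)

# |f''(x)| = |sin(x) + (100/9) sin(10x/3)| <= 1 + 100/9, so M = 1 + 100/9 is valid.
x_best, f_best = modified_piyavskii(f, 2.7, 7.5, M=1.0 + 100.0 / 9.0, eps=1e-4)
print(x_best, f_best)
```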
