Parallel Minimax Searching Algorithm for Extremum of Unimodal Unbounded Function

In this paper we consider a parallel algorithm that detects the maximizer of a unimodal function f(x) computable at every point of the unbounded interval (0, ∞). The algorithm consists of two modes: scanning and detecting. Search diagrams are introduced as a way to describe parallel searching algorithms on unbounded intervals. Dynamic programming equations, combined with a series of linear programming problems, describe relations between the results of every pair of successive parallel evaluations of f. Properties of optimal search strategies are derived from these equations. The worst-case complexity analysis shows that, if the maximizer is located on an a priori unknown interval (n − 1, n], then it can be detected after a number of parallel evaluations of f that depends on n and on the number p of processors.

Introduction and Problem Statement
Design of modern systems like planes, submarines, cars, aircraft carriers, drugs, spacecraft, communication networks, etc. is an expensive and time-consuming process. In many cases the designers must run and repeat expensive experiments, each of which is a lengthy process. The goals of these experiments can be either to maximize performance parameters (speed, carried load, degree of survivability, healing effect, reliability, etc.) or to minimize fuel consumption, undesirable side effects of drugs, the system's cost, etc.
Performance parameters that are to be maximized (or minimized) are functions of other design parameters (wing span in aircraft; aerodynamic characteristics of cars, planes, helicopters, racing cars, antennas; or hydrodynamic profiles of submarines, ships, etc.). At best, these functions are computable if CAD is applicable. Otherwise, a multitude of wind-tunnel experiments is required for aero- or hydrodynamic evaluations. Analogously, numerous statistical experiments on animals, and later on different groups of people, are necessary if a new drug is the subject of design. Such experiments may last months or even years. For instance, in the USA the development of a new drug takes on average ten to twelve years. In view of all the factors listed above, it is natural to minimize the number of experiments in attempts to design a system with the best parameters. In most cases, we can estimate an upper bound on the design parameter under consideration; yet this is not always possible, especially if the cost of experiments keeps increasing or a wrong choice can even be fatal, as in the design of new drugs.
The unbounded search problem, as described in [1], is a search for a key in a sorted unbounded sequence. The goal of the optimal unbounded search is to find this key with a minimal number of comparisons in the worst case. The authors describe an infinite series of sequential algorithms (i.e., using a single processor), where each algorithm is more accurate than the previous one. In [2] the unbounded search problem is interpreted as the following two-player game. Player A chooses an arbitrary positive integer n. Player B may ask whether an integer x is less than n. The "cost" of the searching algorithm is the number of guesses that B must use in order to determine n. The goal of player B is to use a minimal number of these guesses in the worst case.
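This game already exhibits the two-mode structure studied below. The doubling strategy in the following sketch is a minimal illustration of our own (not one of the near-optimal algorithms of [1,2]): player B doubles the query point until the answer to "is x < n?" becomes negative, and then binary-searches the bracketed range, spending about 2 log2 n comparisons in the worst case.

```python
def unbounded_search(is_less_than_n):
    """Determine a hidden positive integer n using only queries
    "is x < n?".  Doubling (scanning) phase, then binary search
    (detecting) phase: about 2*log2(n) queries in the worst case."""
    queries = 0

    def ask(x):
        nonlocal queries
        queries += 1
        return is_less_than_n(x)

    # Scanning: double until 2**k >= n, i.e. "2**k < n?" is answered "no".
    hi = 1
    while ask(hi):                      # hi < n, keep doubling
        hi *= 2
    lo = hi // 2 + 1 if hi > 1 else 1   # now n lies in [lo, hi]
    # Detecting: binary search on [lo, hi] with the same comparison.
    while lo < hi:
        mid = (lo + hi) // 2
        if ask(mid):                    # mid < n
            lo = mid + 1
        else:
            hi = mid
    return lo, queries
```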
This number is a function c(n). The author of that paper provides lower and upper bounds on the value of c(n). More results on nearly optimal algorithms are provided in [3], and these results are then generalized to a transfinite case.
As pointed out in [5], the problem formulated in [2] is equivalent to the search for a maximizer of a unimodal function f(x), where x ∈ (0, ∞). The goal of the search is to minimize the number of required evaluations of the function f(x) (probes, for short) in the worst case, if the maximizer is located on an a priori unspecified interval (n − 1, n], where n is a positive integer. In [5], the authors consider the unbounded discrete unimodal sequential search for a maximizer. Employing an elaborate apparatus of Kraft's inequality, the inverse Fibonacci function, Ackermann's function and, finally, a repeated diagonalization, they construct a series of algorithms that eventually approach lower bounds on the function c(n).
The general theory of optimal algorithms is provided in [6,7]. The problems where f is a unimodal function defined on a finite interval are analyzed by many authors, and optimal algorithms are provided and analyzed in [8-12]. Optimal parallel algorithms searching for a maximum of a unimodal function on a finite interval are discussed in [13-15]. The case where f is a bimodal function is discussed and analyzed in [16-18]. The case where additional information is available is studied in [19,20]. The optimal search algorithm for the maximum of a unimodal function on a finite interval is generalized to the case of a multi-extremal function in [21-23]. In all these papers, the optimal algorithms are based on the fact that a maximizer (minimizer) is located on an a priori known finite interval K (called the interval of uncertainty). The algorithms employ a procedure that shortens the interval of uncertainty after every probe. The complexities of the related problems are functions of the size of the interval K. Search algorithms for two-dimensional and multidimensional unimodal functions are considered in [24] and [25], respectively.

In this paper we consider a parallel algorithm finding a maximizer (or a minimizer) of a function f defined on an unbounded interval I of R and computable at every point x ∈ I. Without loss of generality, we assume that I = (0, ∞). It is easy to see that an algorithm that detects a maximizer cannot employ the same or analogous strategies as in the finite case, since the interval of uncertainty is infinite.

Definition 1.1. A unimodal function f has the following properties: 1) there exists a positive number s such that f is increasing for x < s and decreasing for x > s; 2) f is not constant on any subinterval of I. The point s is called a maximizer of the function f(x). It is not required that f be a smooth or even a continuous function. The goal of this paper is to describe and analyze an algorithm that 1) detects an interval of length t (t-interval, for short) within which a maximizer of f is located; 2) uses a minimal number of parallel probes (p-probes, for short) in the worst case for the t-detection.
Definition 1.2. An algorithm is called balanced if it requires an equal number of probes for both stages (scanning and detection).
Definition 1.3. The algorithm described in this paper is minimax (optimal) in the following sense. Let F be the set of all unimodal functions f defined on I; let S_t be the set of all possible strategies s_t detecting a t-interval that contains a maximizer s of a function f ∈ F; and let N(f, s_t) be the number of p-probes required for detection of the maximizer on a t-interval using strategy s_t. Then a minimax strategy s_t* detects the maximizer with a minimal number of p-probes in the worst case over the unbounded functions f, [17].
Definition 1.3 implies that N(s_t*) = min over s_t ∈ S_t of max over f ∈ F of N(f, s_t). Although s is a priori unknown to the algorithm designer, it is assumed in this paper that its value is fixed. Otherwise, the algorithm designer would not be able to provide any algorithm for t-detection of s. Indeed, the adversary could generate a function f that is increasing on any finite subinterval (0, v) ⊂ I.

Choice of Next Evaluation Point
Suppose that f is evaluated at two arbitrary points L and R satisfying 0 < L < R. If f(L) = f(R), then s is greater than L and smaller than R, i.e., s ∈ (L, R), [10,26]. The proof follows immediately from the unimodality of the function f. If f(L) > f(R), then a maximizer s is detected on a finite interval, i.e., s ∈ (0, R). Therefore, for t-detection of the maximizer s we can employ Kiefer's algorithm for sequential search [10,26] or the algorithm of [25] for parallel search.
Suppose that f is evaluated at two points q_i := L and q_j := R. If f(L) < f(R), then the next evaluation point must be larger than R;

Possible Outputs in the Worst Case
For both the AC and AD scenarios, f(x) and s can be selected in such a way that the outcomes of all the probes made so far remain unchanged. Hence, taking into account that we are dealing with the worst case, all further evaluations must be done outside this interval.

Optimal Unbounded Search Algorithm as a Two-Player Game with Referee

At the first stage, player B on his move selects points q_1, q_2, …, q_i, …. The referee terminates the first stage if there are points q_j and q_k such that s > q_j and s ≤ q_k. The second stage begins from the state (u_1, v_1, w_1), where u_1 < v_1 < w_1. At the second stage, B selects points and A selects intervals. Let the game be in the state (u, v, w). Then A eliminates the leftmost or the rightmost subinterval, [16]. The goal of player B is to minimize the number of points required to terminate the game. The goal of player A is to maximize the number of these points. The adversarial approach to interpretation of optimal search algorithms is also considered in [2,27].
Remark 3.1. It is easy to see that B is an algorithm designer and A is a user that selects the function f and the required accuracy t.

Multiple-Processor Search (p ≥ 2)
An optimal parallel search algorithm with p processors has an analogous interpretation. In this case, at the first phase of the game, player B on his move selects p distinct positive points. The referee terminates the first phase if there are at least two points to the right of s. At the beginning of the second phase, player A selects any two adjacent subintervals and eliminates all other subintervals. In general, at the second phase, B selects points and A selects intervals. More specifically, player B on his move selects p distinct points on the interval, and player A on her move selects any two adjacent subintervals and eliminates all the other subintervals. The goal of the game is the same as in the single-processor case. It is obvious that at the first stage of the game player B must select all points in increasing order from one p-probe to another.
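One round of this second phase can be sketched as follows; the equally spaced probe placement below is an illustrative choice of our own, not the optimal placement analyzed later in the paper. Unimodality lets player B discard everything except the two subintervals adjacent to the best probe, so each round shrinks the interval of uncertainty by a factor of roughly (p + 1)/2.

```python
def parallel_detect(f, a, b, p, t):
    """Second (detecting) phase with p processors, sketched with
    equally spaced probes.  Each round evaluates f at p interior
    points "in parallel" (sequentially here) and keeps only the two
    subintervals adjacent to the best probe."""
    while b - a > t:
        qs = [a + (b - a) * i / (p + 1) for i in range(1, p + 1)]
        vals = [f(q) for q in qs]            # one p-probe
        j = max(range(p), key=vals.__getitem__)
        a_new = qs[j - 1] if j > 0 else a    # left neighbour of best probe
        b_new = qs[j + 1] if j < p - 1 else b
        a, b = a_new, b_new
    return (a + b) / 2
```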
Remark 3.2. At first, we will describe the optimal unbounded searching algorithm with one processing element (PE, for short). Subsequently, we will describe and discuss the parallel minimax unbounded search with p PEs. As will be demonstrated, the case where p is an even integer is simpler than the case where p is odd.

Structure of Unbounded Sequential Search
Consider a finite interval K with length |K| < ∞. If it is known a priori that s ∈ K, then we can t-detect the maximizer s with at most a fixed number of probes determined by the ratio |K|/t, [10,26]. However, the situation is more complicated if f is defined on the unbounded interval I. In this case we divide the entire interval I into an infinite set of finite subintervals of uncertainty I_1 = (0, q_1), I_2, ….
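On a finite interval the t-detection itself is classical. The sketch below is the golden-section variant of Kiefer's search [10,26] in its textbook form, not the exact probe schedule used in this paper; each probe after the first shrinks the interval of uncertainty by the constant factor 1/φ ≈ 0.618, so the number of probes grows only logarithmically in |K|/t.

```python
import math

def golden_section_max(f, a, b, t):
    """t-detect the maximizer of a unimodal f on the finite interval
    [a, b] using golden-section probes (the limiting form of Kiefer's
    Fibonacci scheme)."""
    inv_phi = (math.sqrt(5) - 1) / 2        # 1/phi, about 0.618
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > t:
        if f1 < f2:                         # s cannot lie in (a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
        else:                               # s cannot lie in [x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
    return (a + b) / 2                      # approximation a: |s - a| <= t
```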
In this paper it is demonstrated that a minimax search algorithm consists of two major modes: a scanning (expanding) mode and a detecting (contracting) mode. Let us assume, for simplicity of notation, that q_k denotes the k-th evaluation point for all integers k, i.e., f(q_k) is the result of the k-th probe. A search algorithm is in the scanning mode while the function f satisfies the inequalities f(q_1) < f(q_2) < ⋯ < f(q_k).
In the scanning mode we probe intervals until this mode terminates.
We say that a search algorithm is in the l-th state of the scanning mode if the interval I_l is to be eliminated and q_{l+1} is the next evaluation point. If f(q_l) < f(q_{l+1}), then the search moves to the next, (l+1)-th, state of the scanning mode. As a result, the interval of uncertainty I_l is eliminated, and the interval I_{l+2} is the next to be examined. Since at any state the search can switch into the detecting mode, the dilemma is whether to select the interval I_{l+2} as small as possible (in preparation for this switch) or, if the search continues to stay in the scanning mode, to select I_{l+2} as large as possible. The dilemma indicates that there must be an optimal choice of I_{l+2}. In the detecting mode, we can use an optimal strategy, [10,16,26], which locates s on a t-interval. To design a minimax search algorithm, we must select all the points q_1, q_2, …, q_{k+1}, … in such a way that the total number of required probes over both modes is minimal in the worst case.
Definition 4.3. We say that a set of points (q_i, q_j, q_k) with q_i < q_j < q_k is a detecting triplet if f(q_i) ≤ f(q_j) and f(q_j) ≥ f(q_k). If (q_i, q_j, q_k) is a detecting triplet and f is a unimodal function, then the maximizer s satisfies the inequality q_i < s < q_k, [17]. Definition 4.4. In the following consideration, N(n) means the minimal total number of probes required for both modes in order to detect the maximizer s in the worst case if s ∈ (n − 1, n]. In the following discussion, we assume that t = 1, unless it is specified otherwise.
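The scanning mode's termination test of Definition 4.3 can be sketched as follows; `scan_for_triplet` is an illustrative helper of our own, and `points` stands for any increasing sequence of candidate probe points.

```python
def scan_for_triplet(f, points):
    """Probe an increasing sequence of points until a detecting
    triplet (Definition 4.3) appears: three consecutive probes whose
    middle value is largest.  Returns (q_i, q_j, q_k) bracketing the
    maximizer, or None if the sequence is exhausted first."""
    vals = []
    for k, q in enumerate(points):
        vals.append(f(q))
        if k >= 2 and vals[-2] >= vals[-3] and vals[-2] >= vals[-1]:
            return points[k - 2], points[k - 1], points[k]
    return None
```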
Proposition 4.1. If f is a unimodal function and s ∈ (n − 1, n], but this is a priori unknown, then for all n the maximizer s is detectable after a certain minimal number of probes in the worst case, where part of the probes are used in the scanning mode and the rest in the detecting mode. Proof (by induction). We will demonstrate that in the scanning mode the optimal evaluation points are q_1, q_2, …, q_{k+1}, …. If s ∈ (0, 1], then, in the worst case, two probes are not sufficient for detection of s on a t-interval. Indeed, the adversary can select s and f satisfying suitable inequalities; hence, in that case, s is not t-detectable on the interval after two probes.
If s ∈ (n − 1, n], then k probes were used in the scanning mode, and these probes were taken at the points q_1, q_2, …, q_k. In this case the corresponding inequalities hold, (q_{k−1}, q_k, q_{k+1}) is a detecting triplet and, as a result, the search is in the detecting state. Then, from [8,10,26], using the optimal search algorithm, we can detect s with accuracy t = 1 after an additional k evaluations of f. Hence the minimal total number of probes required for both modes equals 2k.

The Algorithm
Assign a required accuracy t; {the scanning mode of the algorithm begins}; probe the scanning points until a detecting triplet appears; {the following steps describe the optimal detecting algorithm; see [16]}; equations (5.3) and/or (5.4) generate a sequence of detecting states; {a is the approximation of the maximizer: |s − a| ≤ t}; stop.
The algorithm described above is called the V-algorithm.
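For illustration, here is a complete two-mode search in the spirit of the V-algorithm. The probe schedules (doubling in the scanning mode and repeated thirds in the detecting mode) are deliberately naive placeholders of our own, not the optimal schedules derived above, but the scanning/detecting structure is the same.

```python
def unbounded_unimodal_max(f, t):
    """Two-mode search for the maximizer s of a unimodal f on (0, inf):
    scan doubling points 1, 2, 4, ... until the probe values stop
    increasing (a detecting triplet appears), then contract the
    bracketed interval by ternary search until its length is <= t."""
    # Scanning (expanding) mode.
    lo = 0.0
    q_prev, f_prev = 1.0, f(1.0)
    q, fq = 2.0, f(2.0)
    while fq > f_prev:                  # still ascending: s > q_prev
        lo, q_prev, f_prev = q_prev, q, fq
        q, fq = 2 * q, f(2 * q)
    a, b = lo, q                        # (lo, q_prev, q) is a detecting triplet
    # Detecting (contracting) mode: ternary search on [a, b].
    while b - a > t:
        m1 = a + (b - a) / 3
        m2 = b - (b - a) / 3
        if f(m1) < f(m2):
            a = m1                      # s > m1
        else:
            b = m2                      # s < m2
    return (a + b) / 2
```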

Optimality of Sequential Search
Proposition 5.1. The number of probes required for the t-detection of a maximizer described in Proposition 3.1 is minimal in the worst case.
Proof. The algorithm consists of the scanning and detecting modes. In the scanning mode (SM) the search is sequentially in the probing states (5.5). On the other hand, in the detecting mode (DM) the algorithm is in the detecting states. The strategies {F_1, F_2} are optimal (there are no other strategies that can t-detect s with a smaller number of probes). At the same time the entire SM is a mirror image of the DM. Indeed, from the beginning to the end of the SM the search goes from the scanning state {F_1, F_2} to the final scanning state, while in the DM the search goes from the first detecting state back to the detecting state {F_1, F_2}. Thus, both modes (scanning and detecting) are optimal; therefore, the entire algorithm is optimal.

Complexity of Minimax Sequential Search
Let us compare the optimal search algorithms for two cases: 1) the maximizer s ∈ (b − 1, b], but this is a priori unknown; here b is a positive integer; 2) it is known a priori that s ∈ (0, b). Let the minimal total numbers of probes required for t-detection of s in the worst case in these two cases be given by (6.1) and (6.2), respectively. From [11] it follows that, if s ∈ (0, b), then (6.3) holds. Now (6.1) and (6.3) imply that for all b the relation (6.4) holds, where the quantity appearing there is defined in (4.1).
Equality (6.5) follows from the facts stated above. The complexities (6.1) and (6.5) can be further reduced if any prior information is available, [6,17], or if the searching algorithm is based on a randomized approach, [19,30]. Proposition 6.1. Let N be the minimal number of probes required in the worst case to detect s on an a priori unknown interval; then (6.8) holds. Proof. First of all, the relations (6.1), (6.2), (6.5) and (6.8) are based on the previously made assumption that t := 1. From this assumption it follows that the maximizer is detectable on an interval of length two. In order to find the complexity of the algorithm for a finer accuracy, the scale of the search must be decreased twice, i.e., we must select t := 1/2. Two cases must be considered:

The first case occurs if the maximizer lies in the left half of the interval. In this case the left half of the interval is retained and, as a result, fewer probes are required for t-detection of s. However, in the worst case, the maximizer may be in the right half of the interval; hence the same bound holds for both cases. For an illustration, see Table 6.1 below.

Estimated Interval of Uncertainty
In many applications, an upper bound Q on the maximizer s can be estimated from a feasibility study. In this case it is easy to check that the corresponding estimate follows from (6.7). Preliminary results on the analysis of the optimal algorithm searching for the maximum of a multimodal function on an infinite interval are provided in [28].

Parallel Search: Basic Properties
If several processors are available, then, as indicated in [29], the algorithm can be executed in a parallel mode. To the author's knowledge, [13,15] are the earliest papers on parallel search for a maximum of a unimodal function of one variable on a finite interval. Although the optimal search strategies in both papers are in essence identical, the formulations of the problem are different. The proof of optimality of the search is more detailed in [15]. An idea of a parallel algorithm searching for a maximum of a unimodal function on an unbounded interval, based on an application of the Kraft's inequality formalism, is provided in [2]. The authors indicate that the approach they used to construct an infinite series of near-optimal algorithms for the unbounded search with a single processor can be extended to the multiprocessor case. However, no details are provided.
The search algorithm described in this paper is based on the following properties.
Proposition 8.1. Let us consider p arbitrary points q_1, …, q_p that satisfy the inequalities 0 < q_1 < q_2 < ⋯ < q_p. If f(q_1) < f(q_2) < ⋯ < f(q_p), then the maximizer s is greater than q_{p−1}; if f(q_1) > f(q_2), then s is smaller than q_2; and if f(q_{j−1}) < f(q_j) ≥ f(q_{j+1}), then s is greater than q_{j−1} and does not exceed q_{j+1}. The proof follows immediately from the unimodality of the function f.
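Proposition 8.1 translates directly into a bracketing rule; the helper below is a sketch with names of our own choosing, and breaking ties toward the leftmost maximal probe still yields a valid (if not tight) bracket.

```python
def bracket_from_probes(f, a, b, qs):
    """Given probes a < q_1 < ... < q_p < b of a unimodal f, the
    maximizer lies strictly between the neighbours of the probe with
    the largest value (Proposition 8.1)."""
    vals = [f(q) for q in qs]
    j = max(range(len(qs)), key=vals.__getitem__)   # leftmost best probe
    left = qs[j - 1] if j > 0 else a
    right = qs[j + 1] if j < len(qs) - 1 else b
    return left, right
```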

Search on Finite Interval: Principle of Optimality
The proofs of the following Propositions 9.2-9.5 are provided in the Appendix.

Odd Number of Processors

Proposition 9.3. If p is odd, then the corresponding defining relations, including (9.8), hold.

Even Number of Processors

If p is even, then (9.12) holds. Both of these rules, for p odd and for p even, can be generalized in a uniform way. The following two examples and Table 9.1 show how the optimal search steps change for various numbers of processors and probe points q_1, …, q_p. If the maximizer s lies between the corresponding probe points, then s will be detected on a finite interval after k p-probes.

Detecting Mode
Suppose the search is in the k-th detecting state; this means that at most k p-probes remain for the t-detection of the maximizer s. Each p-probe divides the larger interval into p + 1 subintervals. In the k-th detecting state, the search will be either in the {d_k, c_k} state or in the {c_k, d_k} state. Both of these states are equivalent, i.e., they require the same number of p-probes for the t-detection, and the symmetrical search uses symmetrical probes. Schematically, the search can be described by the corresponding diagram. In general, for any even p we consider a detecting state in which c_k and d_k satisfy recursive relations for all k, with the following defining rules: c_0 := 0, d_0 := 1, c_1 := 1, d_1 := 1. It is easy to demonstrate by induction that c_k ≤ d_k for all k ≥ 1.
Thus, all c_k and d_k can be computed using a closed-form formula, where u and w are the roots of the characteristic equation of the recursion, and the coefficients α and β satisfy the corresponding pair of linear equations; taking this into account yields (9.29).
Proof. Since the algorithm designer's goal is, for a given number of p-probes, to maximize the interval on which s can be located to within a unit interval, it is natural to select all the intervals in such a way that any two adjacent intervals have the same sum. The rest of the proof is analogous to the proof of Theorem 9.4.
of the second stage. This stage terminates if w_j − u_j ≤ t. Otherwise B selects a point x_j ≠ v_j such that u_j < x_j < w_j.

If a detecting triplet appears, then the search switches into the detecting mode with the corresponding initial state.

(the worst-case interval of uncertainty containing the maximizer s that can be detected after m p-probes if the search starts from the given detecting state; the effectiveness of p-probes);

Defining Rules of Optimal Detecting States

Definition. If there exists a pair of positive numbers c and d such that c > d and the corresponding defining conditions hold for all non-negative numbers u.

Proposition 9.5. If p is odd, then the analogous relations hold. From the diagram (10.1) one can see the following notation: the interval added before the k-th p-probe is computed; g_k, the smaller interval on the k-th scanning state of the search; the larger interval on the k-th scanning state of the search; h_k, the interval of uncertainty eliminated from the search after the k-th p-probe; w_k, the total interval added before the k-th p-probe is performed; and t_k, the total interval eliminated from the search as a result of k p-probes.
This allows us to eliminate the second and the third terms in the upper branch of the functional equation (A.24).

Proof. Since, in the worst case, the adversary can select two adjacent intervals with the largest sum, from the optimality point of view one must select the intervals in such a way that every two adjacent intervals have equal sums. This implies that the alternating intervals must have equal lengths. On the other hand, the first term of the upper branch in (A.24) can itself be eliminated. Thus there are p + 1 ways to represent the intervals u and v as sums in (A.27). The following dynamic programming equation describes recursive relations between the detecting states; all control variables must satisfy the constraints (A.27). Considering the worst case, a user can select such a function f that the algorithm must select a pair of adjacent intervals with the largest sum.

2.1. Sequential Search: Single-Processor Case

Proposition 2.1. Let us consider two arbitrary points, L and R, that satisfy the inequalities 0 < L < R.

Table 6.1. Total number of probes as a function of n.