Some Remarks on Application of Sandwich Methods in the Minimum Cost Flow Problem

In this paper, two new sandwich algorithms for convex curve approximation are introduced. Proofs of the linear convergence property of the first method and of the quadratic convergence property of the second method are given. The methods are applied to approximate the efficient frontier of the stochastic minimum cost flow problem with the moment bicriterion. Two numerical examples are given, including a comparison of the proposed algorithms with two other derivative-free methods from the literature.


Introduction
Network cost flow problems, which model many real-life problems, have been studied extensively in the Operations Research literature. Among the basic problems in this field are bicriteria optimization problems. Although there exist exact methods for computing the analytic solution sets of bicriteria linear and quadratic cost flow problems (see e.g. [1,2]), Ruhe [3] and Zadeh [4] have shown that the determination of these sets may be very hard, because the number of extreme nondominated objective vectors on the efficient frontier of the considered problems can be exponential. The fact that the efficient frontiers of bicriteria linear and quadratic cost flow problems are convex curves in $\mathbb{R}^2$ allows one to apply sandwich methods for convex curve approximation in this field of optimization (see e.g. [5-8]). However, some of these algorithms require derivative information. A derivative-free method was first introduced by Yang and Goh in [8], who applied it to bicriteria quadratic minimum cost flow problems. The efficient frontiers of these problems are approximated by two piecewise linear functions, called approximation bounds in the sequel, whose construction requires solving a number of one-dimensional minimum cost flow problems. Unfortunately, the method introduced by Yang and Goh works under the assumption that the change of direction of the tangents of the approximated function is less than or equal to $\pi/2$. Siem et al. in [7] proposed an algorithm based only on function value evaluations, with the interval bisection partition rule and two new iterative strategies for the determination of the new input data point in each iteration. The authors gave a proof of linear convergence of their algorithm.
In this paper we consider the generalized bicriteria minimum cost flow problem. We are interested in minimizing two cost functions which satisfy some additional assumptions. Two sandwich methods for the approximation of the efficient frontier of this problem are presented. In the first method, based on the algorithm proposed by Siem et al. [7], new points on the efficient frontier are computed according to the chord rule or the maximum error rule by solving appropriate convex network problems. In the second method, we modify the lower approximation function discussed in [8], which decreases the Hausdorff distance between the upper and lower bounds. We give proofs of the linear convergence property of the first method, called the Simple Triangle Algorithm, and of the quadratic convergence property of the second method, called the Trapezium Algorithm.
The paper is organized as follows. In Section 2, we state a nonlinear bicriteria optimization problem that can be treated as a generalized minimum cost flow problem. In Section 3, two new sandwich methods for approximating the efficient frontier of the stated problem are presented, and in Subsection 3.5 the corresponding algorithms are formulated. In Section 4, we discuss the convergence of these algorithms. Section 5 explains how to use the methodology from Section 2 in the case of the stochastic minimum cost flow problem with the moment bicriterion. To illustrate the discussed methods in comparison with the algorithms presented in [7] and [8], two numerical examples are given. Finally, Section 6 contains the conclusions and future research directions. Proofs of the lemmas and of Theorem 2 are given in the Appendix.

Problem Statement
Let G be a directed network with n nodes and m arcs. We consider the generalized minimum cost flow problem (GMCFP) defined as follows:
$$\min_{x \in X} \left( f_1(x), f_2(x) \right), \qquad X = \{ x \in \mathbb{R}^m : Ax = b,\; 0 \le x \le u \}, \tag{1}$$
where A is the node-arc incidence matrix of G, b is the vector of node supplies and demands, and u is the vector of arc capacities. Using these definitions, in the field of bicriteria programming a feasible solution $x \in X$ is called an efficient solution of problem (1) if there does not exist a feasible solution $y \in X$ with $f_i(y) \le f_i(x)$ for $i = 1, 2$ and $f(y) \ne f(x)$. The set of all efficient solutions and the image of this set under the objective functions are called the efficient set and the efficient frontier, respectively.
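To make the network data concrete, the following minimal Python sketch builds the node-arc incidence matrix for a small hypothetical network; the function name and the toy data are ours, not the paper's.

```python
import numpy as np

def incidence_matrix(n, arcs):
    """Node-arc incidence matrix A: A[i, k] = +1 if arc k leaves
    node i and -1 if arc k enters node i (0 otherwise)."""
    A = np.zeros((n, len(arcs)))
    for k, (i, j) in enumerate(arcs):
        A[i, k] = 1.0
        A[j, k] = -1.0
    return A

# 4 nodes, 5 arcs; b encodes supplies (+) and demands (-):
# flow conservation reads A x = b with 0 <= x <= u.
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
A = incidence_matrix(4, arcs)
b = np.array([2.0, 0.0, 0.0, -2.0])  # send 2 units from node 0 to node 3
```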
Note that the efficient set $E_X$ of problem (1) coincides with the efficient set of the related problem (3). Moreover, from this equivalence and from the convexity of the objective function, the next lemma follows.

Lemma 1
The efficient frontier of problem (3) is a convex curve in $\mathbb{R}^2$.
Proof: See Appendix.

The New Sandwich Methods
In this section, we introduce two new sandwich methods for the approximation of the efficient frontier of problem (3).

Initial Set of Points
Suppose that $P = \{x^1, \ldots, x^r\}$ is a set of r given points on the efficient frontier of problem (1) such that $x^1$ and $x^r$ are the lexicographic minima for the first and the second criterion, respectively. Although we need only three given points on the efficient frontier to start the first method, called the Simple Triangle Method (STM), and two points to start the second method, called the Trapezium Method (TM), the described methodologies work for any number r of initial points, which may be obtained by solving scalarization problems corresponding to problem (3). Another possibility is to find the lexicographic minima of problem (3) and then to solve r convex programming problems with additional equality constraints. This method gives r points on the efficient frontier ordered with respect to the first criterion.
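As an illustration of the scalarization route to initial frontier points, here is a minimal self-contained sketch under our own assumptions: a linear first criterion, a separable quadratic second criterion, and a weighted-sum scalarization solved with SciPy's SLSQP; all data and names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: incidence matrix A, supplies b, capacities u,
# linear cost c1 (first criterion) and quadratic weights q (second).
A = np.array([[ 1,  1,  0,  0,  0],
              [-1,  0,  1,  1,  0],
              [ 0, -1, -1,  0,  1],
              [ 0,  0,  0, -1, -1]], dtype=float)
b = np.array([2.0, 0.0, 0.0, -2.0])
u = np.full(5, 2.0)
c1 = np.array([1.0, 4.0, 1.0, 5.0, 1.0])
q = np.array([2.0, 1.0, 3.0, 1.0, 2.0])
f1 = lambda x: float(c1 @ x)
f2 = lambda x: float(q @ (x * x))

def frontier_point(lam):
    """Weighted-sum scalarization: min lam*f1 + (1-lam)*f2 over X."""
    res = minimize(lambda x: lam * f1(x) + (1 - lam) * f2(x),
                   x0=np.full(5, 0.5), method="SLSQP",
                   bounds=[(0.0, ui) for ui in u],
                   constraints=[{"type": "eq", "fun": lambda x: A @ x - b}])
    return f1(res.x), f2(res.x)

# lam near 1 and near 0 approximate the two lexicographic minima.
initial_points = sorted(frontier_point(lam) for lam in (0.95, 0.5, 0.05))
```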

Upper Bound
Suppose that the initial set $P = \{x^1, \ldots, x^r\}$ of points on the efficient frontier is given and that the points are ordered according to the first criterion. On each interval between consecutive points the upper approximation bound is the chord joining them; by Lemma 1 this chord lies on or above the efficient frontier.
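A minimal sketch of evaluating the resulting upper bound by piecewise-linear interpolation through the ordered points (the function name is ours):

```python
def upper_bound(points, y):
    """Evaluate the piecewise-linear upper bound at y: on each
    interval the chord between consecutive frontier points lies on
    or above a convex frontier, so it is a valid upper bound."""
    pts = sorted(points)
    for (y1, z1), (y2, z2) in zip(pts, pts[1:]):
        if y1 <= y <= y2:
            t = 0.0 if y2 == y1 else (y - y1) / (y2 - y1)
            return (1 - t) * z1 + t * z2
    raise ValueError("y lies outside the approximated range")
```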

Lower Bounds
In the algorithms proposed at the end of this section we will use two different definitions of the lower approximation functions, called the lower bounds.

Definition 1
According to [7], the straight lines defined by the points of P may be used to construct the lower bound: on each interior interval the lower bound is the pointwise maximum of the two neighbouring chords extended into the interval, with its kink at the point of intersection of the two corresponding linear functions $l$ and $u$. Moreover, we define the lower approximation bound on the leftmost and the rightmost interval analogously, using the single available neighbouring chord. If we compute new points on the efficient frontier according to the chord rule (see the next subsection), then definition (9) may be modified, following Rote [9], so that the lower bound is constructed from the tangents at the points of P, with the kink on each interval at the point of intersection of the corresponding two tangent lines.
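One reading of this derivative-free construction, sketched in Python under our own naming: on an interior interval the lower bound is the pointwise maximum of the two neighbouring chords extended into the interval.

```python
def chord(p, q):
    """Slope and intercept of the line through points p and q."""
    (y1, z1), (y2, z2) = p, q
    a = (z2 - z1) / (y2 - y1)
    return a, z1 - a * y1

def lower_bound_interior(pts, k, y):
    """Derivative-free lower bound on [pts[k], pts[k+1]] in the
    spirit of Definition 1: for a convex frontier, each neighbouring
    chord extended into the interval underestimates the frontier,
    so their pointwise maximum is a valid lower bound. Requires
    1 <= k <= len(pts) - 3 (an interior interval), pts sorted."""
    aL, bL = chord(pts[k - 1], pts[k])      # left neighbour, extended right
    aR, bR = chord(pts[k + 1], pts[k + 2])  # right neighbour, extended left
    return max(aL * y + bL, aR * y + bR)
```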

Definition 2
A simple modification of the definition presented in [8] leads to the corresponding form of the lower approximation bound. Similarly to the previous case, we define the lower approximation bound on the leftmost and the rightmost interval separately. Moreover, these definitions may be modified as in (12), using the tangents at the points of P. If the approximation bounds are constructed on each interval, then we define the upper approximation function and the lower approximation function piecewise, interval by interval. We note that, after a small modification of the definition of the lower approximation function on the rightmost interval, any convex function may be approximated by the lower and upper bounds defined in this subsection.
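When one-sided derivative information is available, the tangent-based modification mentioned above admits an equally short sketch (again under our own naming); it relies only on the fact that any tangent of a convex function is a global underestimator.

```python
def lower_bound_tangents(pts, slopes, k, y):
    """Tangent-based lower bound in the spirit of (12): on the
    interval [pts[k], pts[k+1]] take the pointwise maximum of the
    tangents at the two endpoints; slopes[i] is a subgradient of
    the (convex) frontier at pts[i]."""
    (yk, zk), (yk1, zk1) = pts[k], pts[k + 1]
    return max(zk + slopes[k] * (y - yk),
               zk1 + slopes[k + 1] * (y - yk1))
```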

Error Analysis
Suppose that the approximation bounds have been built and let $\delta_k$ denote the approximation error on the k-th interval. If the error measure does not satisfy the desired accuracy, we choose the interval on which the maximum error is attained and determine a new point on the efficient frontier. New points on the efficient frontier may be computed according to the chord rule or the maximum error rule, that is, by solving the optimization problem (16) or the maximum error rule problem (28), in which the point of intersection of the two linear functions forming the lower bound plays the central role. Note that if we construct the lower bound according to definition (15), then the chord rule problem (16) has already been solved.
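For the error measures used below we give a small grid-based numerical stand-in (the exact formulas (25)-(27) are not reproduced here; names are ours). The Maximum error is the largest vertical gap between the bounds, the Uncertainty area is the area enclosed between them, and the Hausdorff distance is bounded above by the Maximum error.

```python
import numpy as np

def maximum_error(lower, upper, y_grid):
    """Largest vertical gap between the bounds on the grid."""
    return max(upper(y) - lower(y) for y in y_grid)

def uncertainty_area(lower, upper, y_grid):
    """Area enclosed between the bounds (trapezoidal quadrature)."""
    y = np.asarray(y_grid, dtype=float)
    gap = np.array([upper(v) - lower(v) for v in y])
    return float(0.5 * np.sum((gap[1:] + gap[:-1]) * np.diff(y)))
```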
After the determination of the new point, we rebuild the set P of given points on the efficient frontier by inserting the new point between the endpoints of the subdivided interval, preserving the ordering with respect to the first criterion; then we construct new upper and lower bounds and repeat the procedure until we obtain an error $\delta$ smaller than the prescribed accuracy.
The next lemma describes the relation between the approximation bounds of the efficient frontiers of problems (1) and (3).
Lemma 2. Let $l$ and $u$ be the lower and upper approximation bounds of the efficient frontier of problem (3) built according to definitions (8) and (9) or (8) and (15). Then the corresponding transformed functions are the lower and upper approximation bounds of the efficient frontier of problem (1).
Moreover, the corresponding inequality is satisfied for all efficient solutions.

Algorithms
Below we state the two algorithms corresponding to the methods described in this section.

The Simple Triangle Algorithm (STA)
Input: Introduce an accuracy parameter $\varepsilon$ and an initial set of points $P = \{x^1, \ldots, x^r\}$ on the efficient frontier.
Step 1. Calculate the lower and upper bounds $l$, $u$ and the error $\delta$.
Step 2. Choose the interval on which the maximum error is attained. Solve problem (16) or (28) to obtain the new point. Update the set P, the lower and upper bounds and the error $\delta$.
Step 3. Check: if $\delta > \varepsilon$, then go to Step 2; otherwise stop.
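A compact sketch of the sandwich loop, with a simplification we flag explicitly: instead of solving problem (16) or (28), the sketch evaluates the frontier function g directly and bisects the worst interval (Siem et al.'s partition rule); in the network setting each evaluation of g would be replaced by one convex flow subproblem.

```python
def sandwich_loop(g, a, b, eps, max_iter=100):
    """Refine the interval on which the chord deviates most from the
    convex function g, until the maximum (vertical) error is <= eps.
    Returns the list of evaluated points (y, g(y))."""
    ys = [a, 0.5 * (a + b), b]
    zs = [g(y) for y in ys]
    for _ in range(max_iter):
        # chord-minus-function gap at each interval midpoint
        gaps = []
        for (y1, z1), (y2, z2) in zip(zip(ys, zs), zip(ys[1:], zs[1:])):
            m = 0.5 * (y1 + y2)
            gaps.append((0.5 * (z1 + z2) - g(m), m))
        err, m = max(gaps)
        if err <= eps:
            break
        i = next(j for j, y in enumerate(ys) if y > m)  # keep ys sorted
        ys.insert(i, m)
        zs.insert(i, g(m))
    return list(zip(ys, zs))

# Example: approximate g(y) = y**2 on [0, 2] to within 1e-3.
pts = sandwich_loop(lambda y: y * y, 0.0, 2.0, 1e-3)
```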

The Trapezium Algorithm (TA)
Input: Introduce an accuracy parameter $\varepsilon$ and an initial set of points $P = \{x^1, x^2\}$ on the efficient frontier.
Step 1. Solve problem (16) and calculate the lower and upper bounds $l$, $u$ and the error $\delta$.
Step 2. Choose the interval on which the maximum error is attained. Compute the new point, update the set P, solve problem (13), and recalculate the lower and upper bounds and the error $\delta$.
Step 3. Check: if $\delta > \varepsilon$, then go to Step 2; otherwise stop.
Figures 3 and 4 illustrate STA and TA, whose lower bounds are built according to definitions (9), (12) and (15), (19), respectively.
In Section 4 we study the convergence of the described algorithms.

Convergence of the Algorithms
In this section we present convergence results for the presented algorithms, based on the proofs given in Rote [9] and in Yang and Goh [8]. First, we formulate the following two remarks, which show the relation between the considered error measures.
Remark 1. Suppose that the lower and upper approximation bounds on the interval have been built according to definitions (8) and (9) or (15); then the corresponding relation between the error measures holds.

Remark 2
Suppose that the lower and upper approximation bounds on the interval have been built according to definitions (8) and (9) or (15); then the corresponding relation between the error measures holds. Now, suppose that the efficient frontier of problem (3) is given as a convex function g. The following theorem, based on Remark 3 and Theorem 1 in [8], Theorem 2 in [9], and Remarks 1 and 2, shows the quadratic convergence property of TA.
Theorem 1. The number H of optimization problems (16) which have to be solved in order to make the Hausdorff distance between the upper and lower bounds in TA smaller than or equal to $\varepsilon$ satisfies the following inequality. A point on the efficient frontier with property (34) has to be chosen if we want to avoid problems with the leftmost interval, on which the Hausdorff distance between the approximation bounds equals the maximum error measure between them. The explanation of how to determine a point with property (34) is given in the proof of Theorem 2.
Corollary 1. The number M of optimization problems (16) which have to be solved in order to make the Maximum error between the upper and lower bounds in TA smaller than or equal to $\varepsilon$ satisfies the following inequality.


Corollary 2. The number U of optimization problems (16) which have to be solved in order to make the Uncertainty area error between the upper and lower bounds in TA smaller than or equal to $\varepsilon$ satisfies the following inequality, where the remaining quantities are the lengths of the corresponding intervals.
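The "quadratic convergence" here is meant in the sandwich sense: with N subintervals the error behaves like $O(1/N^2)$. A quick empirical check on a convex function of our own choosing (not the paper's example):

```python
import math

# Maximum chord error for g(y) = exp(y) on [0, 2] with N uniform
# subintervals: the midpoint gap is about g''(y) * h**2 / 8, so the
# error divides by roughly 4 each time N doubles -- the O(1/N^2) rate.
g = math.exp
for N in (4, 8, 16, 32):
    ys = [2.0 * k / N for k in range(N + 1)]
    err = max(0.5 * (g(y1) + g(y2)) - g(0.5 * (y1 + y2))
              for y1, y2 in zip(ys, ys[1:]))
    print(N, err)
```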
The next theorem, based on Lemma 5, Theorem 2 from [9] and Lemma 3, establishes the linear convergence property of STA. Yang and Goh [8] noticed that the right directional derivative of a convex function exists at every interior point of its domain.

  
For STA we may also find an upper bound on the number of optimization problems (16) or (28) which have to be solved. First, let us formulate the following lemma.

Lemma 3
Suppose that the convex function g and the angles of its one-sided tangents at the endpoints a and b are given (see Figure 6). Then the number H of convex optimization problems ((16) or (28)) which have to be solved in order to make the Hausdorff distance between the upper and lower bounds in STA with the chord rule or the maximum error rule smaller than or equal to $\varepsilon$ satisfies the following inequality. Analogous corollaries bound the number M of optimization problems needed to make the Maximum error, and the number U needed to make the Uncertainty area error, between the upper and lower bounds smaller than or equal to $\varepsilon$.

Remark 3. The theorem is true for every convex function g such that the one-sided derivative $g'(a)$ can be evaluated. If we do not have the derivative information, using TA with the maximum error rule gives the linear convergence property of this procedure.
The following theorem, due to Rote [9], establishes the quadratic convergence property of STA with the modified lower bound as in (12).
Theorem 3. The number H of convex optimization problems (16) which have to be solved in order to make the Hausdorff distance between the upper and lower bounds in STA and TA with the modified lower bounds and the chord rule smaller than or equal to $\varepsilon$ satisfies the following inequality.

Examples
Recall that the objective functions in problem (45) are linear and convex, respectively; hence, using Lemma 1, we find that the efficient frontier of problem (45) is a convex curve in $\mathbb{R}^2$. The following two examples include a comparison of the results obtained by STA and TA with Yang and Goh's method [8] and with Siem et al.'s algorithm [7].

Example 1
We consider a simple stochastic minimum cost flow problem. Table 1 reports the results of STA when new points are computed according to the maximum error rule combined with the Maximum error measure (STAM), and when new points are computed according to the chord rule combined with the Hausdorff measure, with the lower bounds defined by Equation (9) (STAH). Table 1 also contains the results of subsequent calls of TA and of Yang and Goh's method (YG) for the problem described in Example 1.

Table 2 contains the results of Siem et al.'s algorithm described in [7], which uses the interval bisection method for computing new points with the Maximum error measure. After each step of each algorithm we present the maximum values of three error measures: the Maximum error, the Hausdorff distance and the Uncertainty area. As we can notice, TA and TA1 perform better than the other algorithms, giving in each step the smallest values of the Hausdorff distance and the Uncertainty area measures. Moreover, from Table 2 we can conclude that STA with the chord rule and the Hausdorff distance gives smaller values of the Hausdorff measure in each step than the two other algorithms, which give comparable results.

Example 2
We consider the stochastic minimum cost flow problem on a network with 12 nodes and 17 arcs. We would like to minimize the mean value and the variance of the total cost of the flow through the network, that is, we solve the following problem, where C is the random cost vector such that $C_i$ and $C_j$ are mutually independent for all $i, j \in A$ with $i \neq j$.
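Under the stated independence assumption, the two moment criteria reduce to deterministic convex functions of the flow: the mean of the total cost is linear and its variance is separable quadratic. A minimal sketch (data and names are ours):

```python
import numpy as np

def moment_bicriterion(c_mean, c_var):
    """For mutually independent arc costs C_i with means c_mean and
    variances c_var, the random total cost C @ x satisfies
    E[C @ x] = c_mean @ x   and   Var(C @ x) = sum(c_var * x**2),
    so both criteria are convex deterministic functions of x."""
    f1 = lambda x: float(c_mean @ x)
    f2 = lambda x: float(c_var @ (x * x))
    return f1, f2

f1, f2 = moment_bicriterion(np.array([1.0, 2.0]), np.array([0.5, 0.1]))
print(f1(np.ones(2)), f2(np.ones(2)))  # 3.0 0.6
```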

 
The values of the parameters have been chosen accordingly. Only TA and Yang and Goh's method are considered, since it follows from Example 1 that these two algorithms work faster than Siem et al.'s method. Table 3 contains the comparison of TA with the method presented in [8]. After each step of the considered methods we present the values of the Hausdorff distance, the Maximum error and the Uncertainty area measures, together with the newly evaluated point. As we can notice, TA performs better in comparison with Yang and Goh's algorithm, giving in each step the smallest value of the Hausdorff distance between the upper and lower approximation bounds.

Conclusions
Two sandwich algorithms, the Simple Triangle Algorithm and the Trapezium Algorithm, for the approximation of the efficient frontier of the generalized bicriteria minimum cost flow problem have been introduced.
The Simple Triangle Algorithm uses the lower bound proposed by Siem et al. in [7] with the maximum error rule or the chord rule, which causes a faster decrease of the Maximum error measure and of the Hausdorff distance measure and, as a result, reduces the number of steps of the algorithm in comparison with Siem et al.'s method. We have established the linear convergence property of this algorithm with both partition rules. If the lower bound in the Simple Triangle Algorithm is defined as in [9], according to definition (12), and new points of the efficient frontier are computed according to the chord rule, then the algorithm has the quadratic convergence property.
From the numerical examples it follows that the Trapezium Algorithm performs better in comparison with all of the mentioned derivative-free algorithms (Siem et al. [7] and Yang and Goh [8]).

Appendix
Proof of Lemma 1. Let two points be given on the efficient frontier of problem (3), and suppose that a third point between them also lies on the efficient frontier of problem (3). Due to the convexity of the objective function, the required inequality between the values at these points follows, which proves the lemma.
Proof of Theorem 2. Similarly to [9], we prove the theorem by induction on the number of additional evaluations.

The induction basis, corresponding to 0 additional evaluations, is equivalent to Lemma 1 from [5], which also holds for the lower approximation function built according to definition (9). Suppose now that the claim holds after a given number of evaluations. If after one step of STA the required accuracy is reached, then we have had only one additional evaluation and the thesis is true. Otherwise, the argument uses the point of intersection of the bound function and the corresponding constant function.

The lower approximation bounds on the leftmost and the rightmost interval are redefined accordingly; see the Appendix. From Lemma 2 it follows that, in order to obtain a Maximum error between the upper and lower approximation bounds of problem (1) smaller than or equal to the accuracy parameter $\varepsilon$, we need to build approximation bounds $l$ and $u$ of problem (3) for which the Maximum error is smaller than or equal to a correspondingly adjusted tolerance.


Figure 1. Lower and upper bounds built according to STA with the chord rule.
This means that we have to introduce a new point into the efficient frontier and determine new corresponding lower and upper bounds, which is illustrated in Figure 1(b). Similar considerations are illustrated in Figures 2(a) and 2(b).

Figure 2. Lower and upper bounds built according to TA.

Figure 5. Illustration of the error measures defined by (25)-(27).

In the proof of Theorem 1, using the fact that the Hausdorff distance is invariant under rotation, it is convenient to consider the efficient frontier rotated so that the segment joining its endpoints is horizontal; the lines intersecting the given points are defined in the Appendix. Note that the relevant quantities are the differences of the slopes of the lower approximation functions.

Figure 6. Illustration of the functions and angles considered in Lemma 3.
In this section we consider the stochastic minimum cost flow problem with the moment bicriterion and present two numerical examples which illustrate the algorithms presented in Section 3. Similarly to the classic bicriteria network cost flow problem, we consider a directed network G with n nodes and m arcs, with the node-arc incidence matrix A. We assume that each random variable $C_i$ has a positive expected value $E(C_i) = c_i$, and we let C be the corresponding random vector. The lexicographic minima with respect to the first and the second criterion of problem (46) can be found, and a point with the required property can be obtained as a solution of a convex programming problem with one additional constraint, similarly to problem (6). From condition (39) one of two cases follows.

In the other case, let the newly computed point and the slope differences of the corresponding linear functions be defined as above.

Table 2. The results for the problem described in Example 1. In the case of Yang and Goh's method the errors are measured in the rotated coordinate system.