On Optimal Non-Overlapping Segmentation and Solutions of Three-Dimensional Linear Programming Problems through the Super Convergent Line Series
1. Introduction
Linear Programming (LP) problems belong to a class of constrained convex optimization problems that has been widely discussed by several authors: see [1] [2] [3]. The commonly used algorithms for solving Linear Programming problems are the Simplex method, which requires the use of artificial variables and surplus or slack variables, and the active set method, which requires the use of artificial constraints and variables. Over the years, a variety of line search algorithms have been employed in locating the local optimizer of response surface methodology (RSM) problems: see [4] and [5]. Similarly, the active set and simplex methods available for solving linear programming problems also belong to the class of line search exchange algorithms.
The line search algorithm built around the concept of super convergence has several points of departure from the classical, gradient-based line series. These gradient-based line searches often fail to converge to the optimum, but the Super Convergent Line Series (SCLS), which is also a gradient-based technique, locates the global optimum of response surfaces with certainty. The Super Convergent Line Series was introduced by [6], and later used by [7] and [8]. [9] modified the SCLS and used it to solve Linear Programming problems, [10] applied the Quick Convergent Inflow Algorithm to solve constrained Linear Programming problems on a segmented region, and [11] modified the Quick Convergent Inflow Algorithm and used it to solve Linear Programming problems based on the variance of the predicted response. In [12], it was verified and established that, for non-overlapping segmentation of the response surface, the best number of segments is two (S = 2) for Linear Programming problems, four (S = 4) for Quadratic Programming problems, and eight (S = 8) for Cubic Programming problems. The above algorithms compared favourably with other line search algorithms that utilize the principles of experimental design.
Other recent studies on line search algorithms for optimization problems include [13], in which a modified version of line search for global optimization was proposed; the line search there uses a technique for determining randomly generated values for the direction and step-length of the search. Numerical experiments were performed using popular optimization functions involving fifty dimensions, and comparisons with standard line search, genetic algorithms and differential evolution were made. Empirical results illustrate that the modified line search algorithm performs better than the standard line search and the other techniques for three or four of the test functions considered. [14] focused on line search algorithms for solving large-scale unconstrained optimization problems, such as quasi-Newton methods, truncated Newton and conjugate gradient. [15] proposed a line search algorithm based on the Majorize-Minimize principle; here, a tangent majorant function is built to approximate a scalar criterion containing a barrier function, which leads to a simple line search ensuring the convergence of several classical descent optimization strategies, including the most classical variants of non-linear conjugate gradient. [16] presented the fundamental ideas, concepts and theorems of a basic line search algorithm for solving linear programming problems, which can be regarded as an extension of the Simplex method; the basic line search algorithm can find an optimal solution with only one iteration. [17] presented the performance of a one-dimensional search algorithm for solving general high-dimensional optimization problems, which uses line search algorithms as subroutines.
None of the aforementioned works has gone beyond solving problems in two-dimensional spaces with segmentation. This paper therefore focuses on obtaining optimal solutions and segmentation of Linear Programming problems in the three-dimensional space of a cuboidal region.
2. Preliminaries
2.1. Three Dimensional Non-Overlapping Segmentation of the Response Surface
The space, $\bar{X}$ (the shape of a cube), is partitioned into subspaces called segments. These segments are non-overlapping with common boundaries. The space, $\bar{X}$, is partitioned into S non-overlapping segments as follows:
In Figure 1(a), the cube (experimental space) is partitioned into two segments, S1 and S2, while in Figure 1(b) and Figure 1(c), the cubes are partitioned into three and four segments, respectively. From these figures and their respective segments, support points are picked to form the respective design matrices. According to [18], the number of support points per segment should not exceed $\frac{1}{2}p(p+1)$, where p is the number of parameters of the regression model under consideration; this gives the maximum number of support points per segment.

Figure 1. (a): A vertical line, Ƨ, drawn through the middle of a cube [Two Segments (S = 2)]. (b): A vertical line, Ʈ, and a horizontal line, ƥ, drawn through the middle of a cube [Three Segments (S = 3)]. (c): A vertical line, Δ, and a horizontal line, Ԓ, drawn through the middle of a cube [Four Segments (S = 4)].

The number of support points per segment, $N_k$, is given by [6] in terms of n, the number of variables in the model. The support points per segment are arbitrarily chosen, provided they satisfy the constraint equations and do not lie outside the feasible region.
2.2. Rationale of the Segmentation
Design matrices are formed from the support points obtained from each of the segments created above. The segmentation of the response surface according to [6] is a rapid way of improving the average information matrix and obtaining the optimum direction vector. This is achieved by obtaining the linear combination of the information matrices from the different segments. The improved average information matrix (resultant matrix) is used to compute the optimum direction vector, which locates the optimum direction and the optimizer in a very short period or with one iteration. Without segmentation, information leading to the optimizer would have been obtained from only a fraction of the entire response surface.
With segmentation, more support points are available at the boundary of the feasible region. [18] [19] [20] have shown that a design formed with support points taken at the boundary of the feasible region is better than any other design with support points taken at the interior of the feasible region.
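The boundary-versus-interior claim can be illustrated with a small determinant comparison in the D-optimality sense. The one-factor model and both designs below are illustrative, not taken from [18] [19] [20].

```python
import numpy as np

# For the model f(x) = a0 + a1*x on [-1, 1], a design with support points on
# the boundary yields a larger information determinant |X'X| (hence smaller
# parameter variance) than a design with interior support points.

def det_information(points):
    """|X'X| for the two-parameter model (columns: intercept, x)."""
    X = np.column_stack([np.ones(len(points)), points])
    return np.linalg.det(X.T @ X)

boundary = det_information([-1.0, -1.0, 1.0, 1.0])   # support on the boundary
interior = det_information([-0.5, -0.5, 0.5, 0.5])   # support in the interior

assert boundary > interior   # the boundary design is more informative
```

Here the boundary design gives determinant 16 against 4 for the interior design, in line with the cited result.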
Theorem: The average information matrix resulting from pooling the segments using matrices of coefficients of convex combination is

$\bar{M}(\xi_N) = \sum_{k=1}^{S} H_k M_k(\xi_k)$.

Proof: Let $H_k$ be the matrix of coefficients of convex combination, let

$M_k(\xi_k) = X_k' X_k$

be the information matrix of the kth segment, where $X_k$ is the design matrix formed from the support points of that segment, and let

$\sum_{k=1}^{S} H_k = I$.

Thus, each segment contributes the term $H_k M_k(\xi_k)$ to the pooled matrix, so that

$\bar{M}(\xi_N) = H_1 M_1(\xi_1) + H_2 M_2(\xi_2) + \dots + H_S M_S(\xi_S)$.

Therefore, since $\sum_{k=1}^{S} H_k = I$, the pooled matrix is a convex combination of the segment information matrices:

$\bar{M}(\xi_N) = \sum_{k=1}^{S} H_k M_k(\xi_k)$. ∎
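The pooling in the theorem above can be sketched numerically. The two 4-point designs below are hypothetical, and the diagonal, inverse-variance form of the coefficient matrices is an assumption of this sketch (weighting each segment inversely to the variances of its inverse information matrix so that the coefficients sum to the identity).

```python
import numpy as np

def information_matrix(X):
    """Information matrix M = X'X of a design matrix X."""
    return X.T @ X

def convex_coefficients(M1, M2):
    """Diagonal H1, H2 built from the variances (diagonal entries) of the
    inverse information matrices, chosen so that H1 + H2 = I."""
    v1 = np.diag(np.linalg.inv(M1))
    v2 = np.diag(np.linalg.inv(M2))
    H1 = np.diag(v2 / (v1 + v2))   # lower-variance segment gets more weight
    H2 = np.diag(v1 / (v1 + v2))
    return H1, H2

# Two hypothetical 4-point designs in three variables (columns: x1, x2, x3)
X1 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
X2 = np.array([[2., 0., 1.], [1., 2., 0.], [0., 1., 2.], [2., 2., 2.]])

M1, M2 = information_matrix(X1), information_matrix(X2)
H1, H2 = convex_coefficients(M1, M2)
M_bar = H1 @ M1 + H2 @ M2          # pooled (average) information matrix

assert np.allclose(H1 + H2, np.eye(3))   # coefficients form a convex combination
```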
3. Methodology
3.1. The Theory of Super Convergent Line Series
3.1.1. Definitions and Preliminaries
The Super Convergent Line Series (SCLS) is defined by [6] as

$x^* = \bar{x}^* - \rho^* d^*$, (1.1)

where:

$x^*$ is the vector of the optimal values;

$\bar{x}^*$ is the vector of optimal starting points;

$d^*$ is the direction vector, defined by $d_i = \underline{m}_i' z$, where z is an n-component vector of responses and $\underline{m}_i'$ is the ith row of $\bar{M}^{-1}(\xi_N)$, the inverse of the average information matrix, so that $d = \bar{M}^{-1}(\xi_N) z$;

$\rho^*$ is the step-length, defined as $\rho^* = \dfrac{a' \bar{x}^* - b}{a' d^*}$, where $d^*$ is the direction vector, a is the vector which represents the parameters of the linear inequalities, $\bar{x}^*$ is the starting point and b is a scalar of the linear inequalities;

$\xi_N$ is an N-point design measure whose support points may or may not have equal weights;

support points are pairs of points marked on the boundary and interior of the partitioned space which are picked to form design matrices;

$\bar{X}$ is the experimental space of the response surface, which can be partitioned into segments such that every pair of support points in a segment is a subset of $\bar{X}$;

$M(\xi)$ is the information matrix and $M^{-1}(\xi)$ is the inverse information matrix;

S1 is segment 1 and S2 is segment 2;

$|M(\xi)|$ is the determinant of the information matrix;
$H_i$ is the matrix of the coefficients of convex combination. With i = 1, 2 segments, the coefficients of convex combination, $H_i$, of the segments are:

$H_1 = \mathrm{diag}\!\left( \dfrac{V_{211}}{V_{111}+V_{211}},\ \dfrac{V_{222}}{V_{122}+V_{222}},\ \dfrac{V_{233}}{V_{133}+V_{233}} \right)$ (1.2)

for the inverse information matrix in segment 1, and

$H_2 = \mathrm{diag}\!\left( \dfrac{V_{111}}{V_{111}+V_{211}},\ \dfrac{V_{122}}{V_{122}+V_{222}},\ \dfrac{V_{133}}{V_{133}+V_{233}} \right)$ (1.3)

for the inverse information matrix in segment 2, where $V_{111}$, $V_{122}$, $V_{133}$ are the variances (diagonal elements) of the inverse information matrix of segment 1 and $V_{211}$, $V_{222}$, $V_{233}$ are the variances of the inverse information matrix of segment 2; note that $H_1 + H_2 = I$.
The average information matrix, $\bar{M}(\xi_N)$, is the sum of the products of the k information matrices and the k matrices of the coefficients of convex combinations, thus

$\bar{M}(\xi_N) = \sum_{k=1}^{S} H_k M_k(\xi_k)$; see [6]. (1.4)

Segmentation is the partitioning of the experimental space, $\bar{X}$, into segments. Segmentation can be non-overlapping or overlapping, and support points are selected from each segment to form design matrices.

An unbiased response function is defined by

$f(x) = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3$. (1.5)
3.1.2. Algorithm for Super Convergent Line Series
The algorithm follows this sequence of steps:

1) Partition the experimental space (cube) into S segments and select $N_k$ support points from the kth segment; hence, make up an N-point design, $\xi_N$.

2) Compute the vectors $\bar{x}^*$ and $d^*$, and the step-length $\rho^*$.

3) Move to the point $x^* = \bar{x}^* - \rho^* d^*$.

4) Is $f(x^*) = f(x_{\mathrm{opt}})$ (where $x_{\mathrm{opt}}$ is the optimizer of f(x))? Yes: stop. No: go back to step 1 until the optimal solution is obtained.

5) Identify the segment in which the optimal solution is obtained.
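The steps above can be sketched end-to-end for a single move. Everything numerical here is hypothetical (the two segment designs, the response vector, the starting point and the single constraint a'x ≤ b), and both the inverse-variance form of the coefficient matrices and the step-length rule ρ = (a'x̄ − b)/(a'd) are assumptions of this sketch rather than values from the paper.

```python
import numpy as np

def scls_move(X1, X2, z, a, b, x_bar):
    """One move x* = x_bar - rho * d of the line series (steps 1-3)."""
    M1, M2 = X1.T @ X1, X2.T @ X2                     # segment information matrices
    v1 = np.diag(np.linalg.inv(M1))                   # variances, segment 1
    v2 = np.diag(np.linalg.inv(M2))                   # variances, segment 2
    H1, H2 = np.diag(v2 / (v1 + v2)), np.diag(v1 / (v1 + v2))
    M_bar = H1 @ M1 + H2 @ M2                         # average information matrix
    d = np.linalg.solve(M_bar, z)                     # direction vector
    d /= np.linalg.norm(d)                            # normalize so d'd = 1
    rho = (a @ x_bar - b) / (a @ d)                   # step-length
    return x_bar - rho * d                            # the move

# Hypothetical two-segment designs in three variables and problem data
X1 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
X2 = np.array([[2., 0., 1.], [1., 2., 0.], [0., 1., 2.], [2., 2., 2.]])
z = np.array([3., 5., 4.])                # hypothetical responses
a, b = np.array([1., 1., 1.]), 6.0        # hypothetical constraint x1+x2+x3 <= 6
x_bar = np.array([1., 1., 1.])            # hypothetical starting point

x_star = scls_move(X1, X2, z, a, b, x_bar)
assert np.isclose(a @ x_star, b)          # the move lands on the constraint boundary
```

By construction, the step-length cancels the constraint slack exactly, so the new point sits on the boundary a'x = b, which is where LP optima lie.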
3.2. The Average Information Matrix, the Direction Vector, the Starting Point and the Step-Length
3.2.1. The Average Information Matrix
The average information matrix, $\bar{M}(\xi_N)$, is the sum of the products of the k information matrices and the k matrices of the coefficients of convex combinations, given by

$\bar{M}(\xi_N) = \sum_{k=1}^{S} H_k M_k(\xi_k)$;

for two segments, the average information matrix is

$\bar{M}(\xi_N) = H_1 M_1(\xi_1) + H_2 M_2(\xi_2)$.
3.2.2. The Direction Vector
The direction vector defined in Section 3.1.1 is computed as follows. If f(x) is the response function, then the response vector, Z, is given by $Z = (z_1, z_2, \dots, z_N)'$, where $z_i = f(\underline{x}_i)$ is the response at the ith support point. Hence, the direction vector is computed as

$d = \bar{M}^{-1}(\xi_N) Z$.

By normalizing such that $d^{*\prime} d^* = 1$, we have

$d^* = \dfrac{d}{\sqrt{d'd}}$,

where the intercept component $d_0 = 1$ is discarded.
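A minimal numerical sketch of this computation follows; the design matrix standing in for the average information matrix and the response vector are hypothetical.

```python
import numpy as np

# Direction vector: solve M_bar d = Z, normalize to unit length, then discard
# the intercept component d0 as described above.

X = np.array([[1., 0., 0., 0.],
              [1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [1., 1., 1., 1.],
              [1., 2., 0., 1.]])          # columns: intercept, x1, x2, x3
M_bar = X.T @ X                           # stands in for the average information matrix
Z = np.array([1., 3., 5., 4.])            # hypothetical response vector

d = np.linalg.solve(M_bar, Z)             # unnormalized direction (d0, d1, d2, d3)
d_unit = d / np.sqrt(d @ d)               # normalize so that d'd = 1
d_star = d_unit[1:]                       # discard the intercept component
```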
3.2.3. Optimal Starting Point
The optimal starting point, as defined in Section 3.1.1, is obtained from the design matrices of the segments considered, using a 4-point design matrix in each segment.
3.2.4. The Step-Length
The step-length is defined by

$\rho^* = \dfrac{a' \bar{x}^* - b}{a' d^*}$,

where $\rho^*$ is the optimal step-length, $d^*$ is the normalized direction vector, a is the vector which represents the parameters of the linear inequalities, $\bar{x}^*$ is the starting point and b is a scalar of the linear inequalities.
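When a problem has several inequalities $a_i'x \le b_i$, a natural choice is the smallest positive step at which some constraint becomes binding when moving from a feasible $\bar{x}$ along $-d$. That multi-constraint rule, and all the numbers below, are assumptions of this sketch.

```python
import numpy as np

def step_length(A, b, x_bar, d):
    """Smallest positive rho at which x_bar - rho*d hits a constraint a_i'x = b_i."""
    rhos = []
    for a_i, b_i in zip(A, b):
        denom = a_i @ d
        if denom < 0:                       # moving along -d approaches this constraint
            rhos.append((a_i @ x_bar - b_i) / denom)
    return min(rhos)                        # first constraint to become binding

A = np.array([[1., 1., 1.],                 # hypothetical: x1 + x2 + x3 <= 6
              [2., 1., 0.]])                #               2x1 + x2     <= 5
b = np.array([6., 5.])
x_bar = np.array([1., 1., 1.])              # feasible starting point
d = np.array([-1., -1., -1.]) / np.sqrt(3)  # unit direction; -d increases all x_i

rho = step_length(A, b, x_bar, d)
x_star = x_bar - rho * d
assert np.all(A @ x_star <= b + 1e-9)       # x* remains feasible
```

Here the second constraint binds first, so the move stops at $x^* = (5/3, 5/3, 5/3)$ with $2x_1 + x_2 = 5$.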
4. Results and Discussion
4.1. Comparison of Results Obtained Using the Segmentation Procedure with Existing Results in the Literature
Problem 1: [[21], Problem 7.2B, Question 2b, p. 304]
Maximize
Subject to
Support points are picked from the boundaries of the partitioned segments (Figure 2), provided they do not violate the constraint equations. Thus, the designs $X_1$ and $X_2$ are obtained from S1 and S2, respectively. The design and inverse matrices are formed from Figure 2; the direction vector d is computed and normalized to $d^*$ (see Section 3.2.2), the optimal starting point $\bar{x}^*$ is obtained, and the step-length $\rho^*$ gives the optimal point $x^*$.
Therefore, Max Z = 5.46.
With S = 2 (2 segments), the value Max Z = 5.46 was obtained in one iteration, which is close to the optimal value obtained by [21], Problem 7.2B, Question 2b, p. 304, of Max Z = 5.00 (in 3 iterations). The maximum values of Z for this problem using 3 and 4 segments are 5.81 for (x1, x2, x3) = (1.265, 0.7891, 1.2431) and 5.77 for (x1, x2, x3) = (1.0008, 0.6606, 1.5523). These values are not optimal because they do not compare favourably with the existing solution obtained by [21] using the simplex method.
Problem 2: [[22], Ex. 2.4, Q. 14(ii), p. 215]
Maximize
Subject to
Support points are picked from the boundaries of the partitioned segments (Figure 3), provided they do not violate the constraint equations. Thus, the designs $X_1$ and $X_2$ are obtained from S1 and S2, respectively. The design and inverse matrices are formed from Figure 3; the direction vector d is computed and normalized to $d^*$ (see Section 3.2.2), the optimal starting point $\bar{x}^*$ is obtained, and the step-length $\rho^*$ gives the optimal point $x^*$.
Therefore, Max Z = 98.96.
With S = 2 (2 segments), the value Max Z = 98.96 was obtained in one iteration, which is close to the optimal value obtained by [22], Ex. 2.4, Q. 14(ii), p. 215, of Max Z = 98.80 (in two iterations) using the simplex method. The maximum values of Z for this problem using 3 and 4 segments are 99.06 for (x1, x2, x3) = (6.0213, 3.725, 8.2537) and 99.15 for (x1, x2, x3) = (6.0746, 3.675, 8.2503).
4.2. Illustrative Problem and Application
A producer of leather shoes makes three types of shoes, X, Y and Z, which are processed on three machines, K1, K2 and K3. The daily capacities of the machines are given in Table 1 as follows.
The profit gained from shoe X is ₦3 per unit, shoe Y is ₦5 per unit and shoe Z is ₦4 per unit. What is the maximum profit for the three types of shoe produced?
Solution: Let X1 be the unit of type X, X2 be the unit of type Y and X3 be the unit of type Z.
Maximize
Subject to
In a similar manner, the design and inverse matrices are formed from the support points in Figure 4.

Table 1. The daily capacity of the machines.
The direction vector d is computed and normalized to $d^*$; the step-length $\rho^*$ then gives the optimal point $x^*$. Therefore, the maximum value of Z is ₦18.88. This value, obtained in one iteration, is close to the optimum value obtained using the simplex method (in three iterations). When 3 and 4 segments were used, the maximum values of Z for this problem were ₦21.25 with corresponding values (X1, X2, X3) = (1.37, 2.08, 1.68) and ₦21.03 with corresponding values (X1, X2, X3) = (1.43, 2.01, 1.67). These values are not optimal because they do not compare favourably with the simplex method solution, Max Z = ₦18.66.
5. Conclusion
Three-dimensional Linear Programming problems have been solved using the line search equation, $x^* = \bar{x}^* - \rho^* d^*$, of the Super Convergent Line Series, by segmenting the cuboidal response surface into 2, 3 and 4 segments. A real-life problem was also used to achieve the desired result. It was found that the optimal solution is attained at 2 segments (S = 2) and in one iteration or move, even though up to 4 segments (S = 4) were considered. The simplex method, by comparison, required 2 to 3 iterations to obtain a close result. Hence, as the name implies, the Super Convergent Line Series (SCLS) locates the optimizer in one iteration, and better still with segmentation.