
The solutions of Linear Programming Problems were obtained by segmenting the cuboidal response surface through the Super Convergent Line Series methodologies. The cuboidal response surface was segmented into up to four segments and explored. It was verified that the number of segments, S, for which optimal solutions are obtained is two (S = 2). Illustrative examples and a real-life problem were also given and solved.

Linear Programming (LP) problems belong to a class of constrained convex optimization problems which have been widely discussed by several authors: see [

The line search algorithm, which is built around the concept of super convergence, has several points of departure from the classical gradient-based line series. Gradient-based line series often fail to converge to the optimum, whereas the Super Convergent Line Series (SCLS), which are also gradient-based techniques, locate the global optimum of response surfaces with certainty. The Super Convergent Line Series (SCLS) was introduced by [

Other recent studies on line search algorithms for optimization problems are: [

In all the aforementioned works, none has gone beyond solving problems in two-dimensional spaces with segmentation. This paper focuses on obtaining optimal solutions of Linear Programming Problems by segmentation in the three-dimensional space of a cuboidal region.

The space, $\tilde{X}$ (the shape of a cube), is partitioned into subspaces called segments. These segments are non-overlapping with common boundaries. The space $\tilde{X}$ is partitioned into S non-overlapping segments as follows:

In the two-segment partition, the segments are S_{1} and S_{2}, each with a maximum number of support points per segment. The number of support points per segment is as given by [, where N_{k} denotes the number of support points in the kth segment. The support points per segment are chosen arbitrarily, provided they satisfy the constraint equations and do not lie outside the feasible region.

Design matrices are formed from the support points obtained from each of the segments created above. The segmentation of the response surface according to [

With segmentation, more support points are available at the boundary of the feasible region. [

Theorem: The average information matrix resulting from pooling the segments using the matrices of coefficients of convex combinations is

$$M(\zeta_N) = \sum_{k=1}^{S} H_k X_k^T X_k H_k^T,$$

Proof:

$$\underline{X}^T \underline{X} = \operatorname{diag}\{X_1^T X_1, X_2^T X_2, \cdots, X_S^T X_S\} = \begin{bmatrix} X_1^T X_1 & 0 & \cdots & 0 \\ 0 & X_2^T X_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & X_S^T X_S \end{bmatrix},$$

where $H_k$ is the matrix of coefficients of convex combination and $X_k^T X_k$ is the information matrix of the kth segment,

$$H_k = \begin{bmatrix} h_{0k} & 0 & \cdots & 0 \\ 0 & h_{1k} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{nk} \end{bmatrix}, \quad H_k^T = \begin{bmatrix} h_{0k} & 0 & \cdots & 0 \\ 0 & h_{1k} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{nk} \end{bmatrix} \quad \text{and}$$

$$H_k H_k^T = \begin{bmatrix} h_{0k}^2 & 0 & \cdots & 0 \\ 0 & h_{1k}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{nk}^2 \end{bmatrix}.$$

Thus,

$$\sum_{k=1}^{S} H_k H_k^T = H_1 H_1^T + H_2 H_2^T + \cdots + H_S H_S^T = \begin{bmatrix} h_{01}^2 & 0 & \cdots & 0 \\ 0 & h_{11}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{n1}^2 \end{bmatrix} + \begin{bmatrix} h_{02}^2 & 0 & \cdots & 0 \\ 0 & h_{12}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{n2}^2 \end{bmatrix} + \cdots + \begin{bmatrix} h_{0S}^2 & 0 & \cdots & 0 \\ 0 & h_{1S}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{nS}^2 \end{bmatrix} = \begin{bmatrix} \sum_{k=1}^{S} h_{0k}^2 & 0 & \cdots & 0 \\ 0 & \sum_{k=1}^{S} h_{1k}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_{k=1}^{S} h_{nk}^2 \end{bmatrix}$$

Therefore, $\sum_{k=1}^{S} H_k H_k^T = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I$, since $\sum_{k=1}^{S} h_{ik}^2 = 1$ for each $i = 0, 1, \cdots, n$.

Therefore, $M(\zeta_N) = \sum_{k=1}^{S} H_k X_k^T X_k H_k^T$.
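The key step of the proof, that the convex-combination matrices pool to the identity, can be checked numerically. The sketch below uses arbitrary illustrative coefficients (not values from the paper): raw positive $h_{ik}$ are normalized so that $\sum_{k} h_{ik}^2 = 1$ for each $i$, and the pooled sum is compared to $I$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 3, 2                       # n + 1 parameters, s segments (illustrative sizes)
h = rng.random((n + 1, s))        # raw positive coefficients h_ik
h /= np.linalg.norm(h, axis=1, keepdims=True)   # enforce sum_k h_ik^2 = 1 for each i
H = [np.diag(h[:, k]) for k in range(s)]        # H_k = diag(h_0k, ..., h_nk)

# sum_k H_k H_k^T should be the (n+1)x(n+1) identity matrix, as in the proof
pooled = sum(Hk @ Hk.T for Hk in H)
print(np.allclose(pooled, np.eye(n + 1)))       # True
```

Because each $H_k H_k^T$ is diagonal with entries $h_{ik}^2$, the sum is diagonal with entries $\sum_k h_{ik}^2 = 1$, which is exactly the identity matrix used in the theorem.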

The Super Convergent Line Series (SCLS) is defined by [

$$\underline{X} = \bar{\underline{X}} - \rho \underline{d} \quad (1.1)$$

$\underline{X}$ is the vector of the optimal values,

$\bar{\underline{X}} = \sum_{m=1}^{N} w_m x_m$ is the optimal starting point, where $w_m > 0$; $\sum_{m=1}^{N} w_m = 1$; $w_m = \dfrac{a_m^{-1}}{\sum_{m=1}^{N} a_m^{-1}}$,

$a_m = x_m^T x_m$, $m = 1, 2, \cdots, N$.

$\underline{d}$ is the direction vector defined as $\underline{d} = M_A^{-1}(\zeta_N) \underline{Z}(\cdot)$, where $\underline{Z}(\cdot) = (Z_0, Z_1, \cdots, Z_n)^T$ is an $(n+1)$-component vector of responses; $Z_i = f(m_i)$, where $m_i$ is the $i$th row of the average information matrix $M_A(\zeta_N)$, and $M_A^{-1}(\zeta_N)$ is the inverse of the average information matrix;

$\rho$ is the step-length defined as $\rho = \min\left\{ \dfrac{\underline{C}_i^T \bar{\underline{X}} - b_i}{\underline{C}_i^T \underline{d}} \right\}$, where $\underline{d}$ is the direction vector, $\underline{C}_i^T$ is the vector of parameters of the $i$th linear inequality, $\bar{\underline{X}}$ is the starting point, and $b_i$ is the scalar of the $i$th linear inequality;

ζ N is an N-point design measure whose support points may or may not have equal weights;

Support points are points marked on the boundary and interior of the partitioned space which are picked to form design matrices;

$\tilde{X}$ is the experimental space of the response surface that can be partitioned into segments such that every pair of support points in a segment is a subset of $\tilde{X}$;

$M(\zeta_{n_k}) = X_k^T X_k$ is the information matrix, and $M^{-1}(\zeta_{n_k}) = (X_k^T X_k)^{-1}$ is the inverse information matrix;

S_{1} is segment 1, S_{2} is segment 2,

$\det M(\zeta_{n_k})$ is the determinant of the information matrix;

H_{i} is the matrix of the coefficients of convex combination and is defined as

$$H_i = \operatorname{diag}(h_{i1}, h_{i2}, \cdots, h_{i,n+1}), \quad i = 1, 2, \cdots, k;$$

With i = 1, 2 segments, the coefficients of convex combinations, H_{i}, of the segments are:

$$H_1 = \operatorname{diag}\left\{ \frac{V_{111}}{V_{111}+V_{211}}, \frac{V_{122}}{V_{122}+V_{222}}, \frac{V_{133}}{V_{133}+V_{233}} \right\} = \operatorname{diag}\{h_{11}, h_{12}, h_{13}\} \quad (1.2)$$

for the inverse information matrix in segment 1,

$$H_2 = \operatorname{diag}\left\{ \frac{V_{211}}{V_{111}+V_{211}}, \frac{V_{222}}{V_{122}+V_{222}}, \frac{V_{233}}{V_{133}+V_{233}} \right\} = \operatorname{diag}\{h_{21}, h_{22}, h_{23}\} \quad (1.3)$$

for the inverse information matrix in segment 2,

where V_{111}, V_{122}, V_{133} are the variances (diagonal elements) of the inverse information matrix of segment 1, and V_{211}, V_{222}, V_{233} are the variances of the inverse information matrix of segment 2.

The average information matrix, M A ( ζ N ) , is the sum of the product of the k information matrices and the k matrices of the coefficients of convex combinations, thus

$$M_A(\zeta_N) = \sum_{k=1}^{S} H_k X_k^T X_k H_k^T; \text{ see } [

Segmentation is the partitioning of the experimental space, X ˜ , into segments. Segmentation can be non-overlapping and overlapping, and support points are selected from each segment to form design matrices.

An unbiased response function is defined by

$$f(x_1, x_2) = a_{00} + a_{10} x_1 + a_{20} x_2 \quad (1.5)$$

The algorithm proceeds through the following sequence of steps:

1) Partition the experimental space (cube) into $k = 1, 2, \cdots, s$ segments and select $N_k$ support points from the kth segment; hence, make up an N-point design,

$$\zeta_N^{(1)} = \begin{Bmatrix} \underline{x}_1, \underline{x}_2, \cdots, \underline{x}_N \\ w_1, w_2, \cdots, w_N \end{Bmatrix}; \quad N = \sum_{k=1}^{s} N_k.$$

2) Compute the vectors $\bar{\underline{X}}^*$, $\underline{d}^*$ and the step-length $\rho^*$.

3) Move to the point $\underline{X}^* = \bar{\underline{X}}^* - \rho^* \underline{d}^*$.

4) Is $X^* = X_f^*$? (where $X_f^*$ is the optimizer of $f(\cdot)$).

Yes: stop,

No: then go back to 1) above until the optimal solution is obtained.

5) Identify the segment in which the optimal solution is obtained.
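The steps above can be sketched in code. The function name, argument layout, and response-function interface below are assumptions made for illustration, not the authors' implementation; the formulas inside, however, follow the definitions in Section 3.1.1.

```python
import numpy as np

def scls_move(designs, H_mats, response, C, b):
    """One move of the SCLS algorithm (a sketch; names and interfaces
    are assumptions, not the authors' code).

    designs  : list of segment design matrices X_k, first column all ones
    H_mats   : list of diagonal convex-combination matrices H_k
    response : function applied to a row of the average information matrix
    C, b     : linear constraints C x <= b
    """
    # average information matrix M = sum_k H_k X_k^T X_k H_k^T
    M = sum(H @ X.T @ X @ H.T for X, H in zip(designs, H_mats))
    # response vector z_i = f(row i of M); direction d = M^{-1} z
    z = np.array([response(row) for row in M])
    d = np.linalg.solve(M, z)[1:]        # discard d_0 (intercept component)
    d /= np.linalg.norm(d)               # normalize so that d*^T d* = 1
    # optimal starting point: weights w_m proportional to 1 / (x_m^T x_m)
    pts = np.vstack([X[:, 1:] for X in designs])
    a = np.einsum("ij,ij->i", pts, pts)
    w = (1.0 / a) / (1.0 / a).sum()
    x_bar = w @ pts
    # step length rho = min_i (C_i^T x_bar - b_i) / (C_i^T d)
    rho = np.min((C @ x_bar - b) / (C @ d))
    # move to x* = x_bar - rho d
    return x_bar - rho * d
```

In practice one would then test step 4 (is $X^* = X_f^*$?) and, if not, re-partition and repeat; support points at the origin would need special handling, since $a_m = x_m^T x_m$ must be nonzero for the inverse weights to exist.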

The average information matrix, $M(\zeta_N)$, is the sum of the products of the k information matrices and the k matrices of the coefficients of convex combinations, given by $M(\zeta_N) = \sum_{k=1}^{s} H_k X_k^T X_k H_k^T$;

for two segments, the average information matrix is
$$M_A(\zeta_N) = H_1^* X_1^T X_1 H_1^{*T} + H_2^* X_2^T X_2 H_2^{*T} = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}.$$

The direction vector defined in Section 3.1.1 is computed as follows:

If f(x) is the response function, then the response vector, Z, is given by

$$Z = \begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ \vdots \\ z_n \end{pmatrix}, \text{ where}$$

$$z_0 = f(m_{12}, m_{13}, \cdots, m_{1,n+1}), \quad z_1 = f(m_{22}, m_{23}, \cdots, m_{2,n+1}), \quad \cdots, \quad z_n = f(m_{n+1,2}, m_{n+1,3}, \cdots, m_{n+1,n+1}).$$

Hence, the direction vector defined in Section 3.1.1 is computed as

$$\underline{d} = M_A^{-1}(\zeta_N) Z = \begin{pmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}.$$

By normalizing such that $d^{*T} d^* = 1$, we have
$$d^* = \begin{pmatrix} d_1 / \sqrt{d_1^2 + d_2^2 + \cdots + d_n^2} \\ d_2 / \sqrt{d_1^2 + d_2^2 + \cdots + d_n^2} \\ \vdots \\ d_n / \sqrt{d_1^2 + d_2^2 + \cdots + d_n^2} \end{pmatrix},$$

where d_{0} = 1 is discarded.

The optimal starting point, defined in Section 3.1.1, is obtained from the design matrices of the segments considered, as follows:

$$\bar{\underline{X}} = \sum_{m=1}^{N} w_m x_m; \quad w_m > 0; \quad \sum_{m=1}^{N} w_m = 1; \quad w_m = \frac{a_m^{-1}}{\sum_{m=1}^{N} a_m^{-1}}, \quad m = 1, 2, \cdots, N.$$

$$a_m = x_m^T x_m, \quad m = 1, 2, \cdots, N.$$

Using two 4-point design matrices (N = 8), $\bar{\underline{X}} = \sum_{m=1}^{8} w_m x_m$, $w_m = \dfrac{a_m^{-1}}{\sum_{m=1}^{8} a_m^{-1}}$, $m = 1, 2, \cdots, 8$.
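A minimal sketch of the weight and starting-point computation, using four hypothetical support points chosen only for illustration:

```python
import numpy as np

# four illustrative support points x_m (rows); values are hypothetical
pts = np.array([[0.0, 1.0, 0.0],
                [0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0],
                [0.5, 0.0, 0.0]])

a = np.einsum("ij,ij->i", pts, pts)      # a_m = x_m^T x_m
w = (1.0 / a) / (1.0 / a).sum()          # w_m = a_m^{-1} / sum_m a_m^{-1}
x_bar = w @ pts                          # optimal starting point X-bar

print(np.isclose(w.sum(), 1.0))          # weights form a convex combination: True
```

Points nearer the origin (smaller $a_m$) receive larger weights, so the starting point is pulled toward the interior of the feasible region.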

The step-length is defined by

$$\rho^* = \min\left\{ \frac{\underline{C}_i^T \bar{\underline{X}}^* - b_i}{\underline{C}_i^T \underline{d}^*} \right\},$$
where $\rho^*$ is the optimal step-length, $\underline{d}^*$ is the normalized direction vector, $\underline{C}_i^T$ is the vector of parameters of the $i$th linear inequality, $\bar{\underline{X}}^*$ is the starting point, and $b_i$ is the scalar of the $i$th linear inequality.

Results in the Literature

Problem 1: [ [

Maximize $Z = 2x_1 + x_2 + 2x_3$

Subject to $4x_1 + 3x_2 + 8x_3 \le 12$

$4x_1 + x_2 + 12x_3 \le 8$

$4x_1 - x_2 + 3x_3 \le 8$

$x_1, x_2, x_3 \ge 0$

Support points are picked from the boundaries of the partitioned segments (

Thus $X_1 = \{(0,1,0), (0,1,1), (0,0,1), (1/2,0,0), \cdots, (1/4,0,0)\}$ and

$X_2 = \{(1,0,1), (0,0,1/2), (1,0,0), (1,1/2,0), (1,1,0), \cdots, (1/2,0,0)\}$,

where X_{1} and X_{2} are obtained from S_{1} and S_{2}, respectively.

Thus, the design and inverse matrices are given as follows (from

$$X_1 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1/2 \\ 1 & 0 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \end{pmatrix}; \quad X_2 = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \\ 1 & 0 & 0 & 1/2 \end{pmatrix},$$

$$(X_1^T X_1)^{-1} = \begin{pmatrix} 5 & -10 & -6 & -10 \\ -10 & 24 & 12 & 20 \\ -6 & 12 & 8 & 12 \\ -10 & 20 & 12 & 24 \end{pmatrix}; \quad (X_2^T X_2)^{-1} = \begin{pmatrix} 9 & -14 & 6 & -18 \\ -14 & 24 & -12 & 28 \\ 6 & -12 & 8 & -12 \\ -18 & 28 & -12 & 40 \end{pmatrix}$$

The direction vector, $\underline{d} = (2.000, 1.000, 2.000)^T$; by normalizing $\underline{d}$, we get $\underline{d}^* = (0.8944, 0.4472, 0.8944)^T$,

(See Section 3.2.2)

$\bar{\underline{X}}^* = \sum_{i=1}^{N} w_i x_i = (0.2990, 0.2758, 0.1516)^T$; the step-length $\rho^* = -1.1396$; $\underline{X}^* = \bar{\underline{X}}^* - \rho^* \underline{d}^* = (1.3180, 0.7854, 1.1709)^T$.
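The reported step-length and line-search move can be reproduced directly from the constraint data and the starting point and direction vector quoted in the text (numpy is used here purely as an arithmetic check, not as part of the original computation):

```python
import numpy as np

# constraint data of Problem 1: C x <= b
C = np.array([[4.0, 3.0, 8.0],
              [4.0, 1.0, 12.0],
              [4.0, -1.0, 3.0]])
b = np.array([12.0, 8.0, 8.0])

x_bar = np.array([0.2990, 0.2758, 0.1516])   # starting point from the text
d_star = np.array([0.8944, 0.4472, 0.8944])  # direction vector from the text

rho = np.min((C @ x_bar - b) / (C @ d_star)) # step length, about -1.1396
x_star = x_bar - rho * d_star                # about (1.318, 0.7854, 1.1709)
print(rho, x_star)
```

The minimum is attained on the third constraint, which reproduces the reported $\rho^* = -1.1396$ and the reported $\underline{X}^*$.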

Therefore, Max Z = 5.46.

With S = 2 (two segments), the value of Z is Max Z = 5.46 (in one iteration), which is close to the optimal value obtained by [ (x_{1}, x_{2}, x_{3}) = (1.265, 0.7891, 1.2431), and 5.77 for (x_{1}, x_{2}, x_{3}) = (1.0008, 0.6606, 1.5523). These values are not optimal because they do not compare favourably with the existing solution obtained by [
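Since the comparison value is elided in the extracted text, the LP optimum of Problem 1 can be checked independently with an off-the-shelf solver (an external cross-check, not part of the SCLS method):

```python
from scipy.optimize import linprog

# Problem 1: maximize 2x1 + x2 + 2x3, so minimize the negated objective
c = [-2.0, -1.0, -2.0]
A_ub = [[4.0, 3.0, 8.0],
        [4.0, 1.0, 12.0],
        [4.0, -1.0, 3.0]]
b_ub = [12.0, 8.0, 8.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)   # optimizer and maximum value of Z
```

`linprog` minimizes by default, so the objective is negated and the maximum of Z is recovered as `-res.fun`.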

Problem 2: [ [

Maximize $Z = 5x_1 + 3x_2 + 7x_3$

Subject to $x_1 + x_2 + 2x_3 \le 26$

$3x_1 + 2x_2 + x_3 \le 26$

$x_1 + x_2 + x_3 \le 18$

$x_1, x_2, x_3 \ge 0$

Support points are picked from the boundaries of the partitioned segments (from

Thus $X_1 = \{(0,1,0), (0,1,1), (0,0,1), (1/2,0,0), \cdots, (1/4,0,0)\}$ and

$X_2 = \{(1,0,1), (0,0,1/2), (1,1,1), (1,1/2,0), (1,1,0), \cdots, (1/2,0,0)\}$,

Thus, the design and inverse matrices are given as follows (from

$$X_1 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1/4 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}; \quad X_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \end{pmatrix},$$

$$(X_1^T X_1)^{-1} = \begin{pmatrix} 3 & -12 & -2 & -2 \\ -12 & 64 & 20 & 8 \\ -2 & 8 & 2 & 1 \\ -2 & 8 & 1 & 2 \end{pmatrix}; \quad (X_2^T X_2)^{-1} = \begin{pmatrix} 9 & -14 & 6 & 5 \\ -14 & 24 & -12 & -10 \\ 6 & -12 & 8 & 6 \\ 5 & -10 & 6 & 6 \end{pmatrix}$$

The direction vector, $\underline{d} = (5.0002, 2.9998, 6.9999)^T$; by normalizing $\underline{d}$, we get $\underline{d}^* = (0.5488, 0.3293, 0.7683)^T$.

(See Section 3.2.2.) $\bar{\underline{X}}^* = \sum_{i=1}^{N} w_i x_i = (0.3812, 0.3318, 0.2787)^T$; the step-length $\rho^* = -10.3306$; $\underline{X}^* = \bar{\underline{X}}^* - \rho^* \underline{d}^* = (6.0506, 3.7337, 8.2157)^T$.

Therefore, Max Z = 98.96.

With S = 2 (two segments), the value of Z is Max Z = 98.96 (in one iteration), which is close to the optimal value obtained by [ (x_{1}, x_{2}, x_{3}) = (6.0213, 3.725, 8.2537) and 99.15 for (x_{1}, x_{2}, x_{3}) = (6.0746, 3.675, 8.2503).

A producer of leather shoes makes three types of shoes, X, Y and Z, which are processed on three machines, K_{1}, K_{2} and K_{3}. The daily capacities of the machines are given in

The profit gained from shoe X is ₦3 per unit, from shoe Y ₦5 per unit, and from shoe Z ₦4 per unit. What is the maximum profit from the three types of shoes produced?

Solution: Let X_{1} be the unit of type X, X_{2} be the unit of type Y and X_{3} be the unit of type Z.

Maximize $Z = 3x_1 + 5x_2 + 4x_3$

Subject to $2x_1 + 3x_2 \le 8$

$2x_2 + 5x_3 \le 10$

$3x_1 + 2x_2 + 4x_3 \le 15$

$x_1, x_2, x_3 \ge 0$

In a similar manner, the design and inverse matrices are given as follows [from

Types of shoes (units of each shoe type processed)

| Machines | X | Y | Z | Hours available per day |
|---|---|---|---|---|
| K_{1} | 2 | 3 | 0 | 8 |
| K_{2} | 0 | 2 | 5 | 10 |
| K_{3} | 3 | 2 | 4 | 15 |

$$X_1 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1/4 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}; \quad X_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \end{pmatrix},$$

$$(X_1^T X_1)^{-1} = \begin{pmatrix} 3 & -12 & -2 & -2 \\ -12 & 64 & 8 & 8 \\ -2 & 8 & 2 & 1 \\ -2 & 8 & 1 & 2 \end{pmatrix}; \quad (X_2^T X_2)^{-1} = \begin{pmatrix} 9 & -14 & 6 & 5 \\ -14 & 24 & -12 & -10 \\ 6 & -12 & 8 & 6 \\ 5 & -10 & 6 & 6 \end{pmatrix}.$$

The direction vector, $\underline{d} = (3, 5, 4)^T$; by normalizing $\underline{d}$, we get $\underline{d}^* = (0.4243, 0.7071, 0.5657)^T$; $\bar{\underline{X}}^* = \sum_{i=1}^{N} w_i x_i = (0.3027, 0.2953, 0.2483)^T$; the step-length $\rho^* = -2.1916$; $\underline{X}^* = \bar{\underline{X}}^* - \rho^* \underline{d}^* = (1.2326, 1.8450, 1.4881)^T$. Therefore, the maximum value of Z is

₦18.88. This value, obtained in one iteration, is close to the optimum value obtained using the simplex method approach (in three iterations). When 3 and 4 segments were used, the maximum values of Z for this problem were ₦21.25 with corresponding values (x_{1}, x_{2}, x_{3}) = (1.37, 2.08, 1.68) and ₦21.03 with corresponding values (x_{1}, x_{2}, x_{3}) = (1.43, 2.01, 1.67). These values are not optimal because they do not compare favourably with the simplex method solution, which is Max Z = ₦18.66.
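The simplex value quoted above can be reproduced with an off-the-shelf solver (scipy is used here purely as a cross-check of the stated optimum, not as part of the method):

```python
from scipy.optimize import linprog

# Shoe problem: maximize 3x1 + 5x2 + 4x3, so minimize the negated objective
c = [-3.0, -5.0, -4.0]
A_ub = [[2.0, 3.0, 0.0],
        [0.0, 2.0, 5.0],
        [3.0, 2.0, 4.0]]
b_ub = [8.0, 10.0, 15.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(round(-res.fun, 2))   # maximum profit, about 18.66
```

The solver confirms the simplex result of roughly ₦18.66 quoted in the text.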

Three-dimensional Linear Programming problems have been solved using the line search equation, $\bar{\underline{X}}^* - \rho^* \underline{d}^*$, of the Super Convergent Line Series, by segmenting the cuboidal response surface into 2, 3 and 4 segments. A real-life problem was also used to achieve the desired result. It was found that the optimal solution is attained at 2 segments (S = 2) and in one iteration, even though up to 4 segments (S = 4) were considered. Comparing with the simplex method's result, which required 2 and 3 iterations, a close solution was obtained. Hence, as the name implies, the Super Convergent Line Series (SCLS) locates the optimizer in one iteration, and better still with segmentation.

Ugbe, T. and Chigbu, P. (2017) On Optimal Non-Overlapping Segmentation and Solutions of Three-Dimen- sional Linear Programming Problems through the Super Convergent Line Series. American Journal of Operations Research, 7, 225-238. https://doi.org/10.4236/ajor.2017.73015