
In this work, a new method is presented for determining the binding constraints of a general linear maximization problem. The method uses only objective function values at points determined by simple vector operations, so its computational cost is lower than the corresponding cost of matrix manipulation and/or inversion. It builds on a recently proposed notion for addressing such problems: the weighted average associated with each constraint. Identifying the binding constraints decreases the complexity and the dimension of the problem, resulting in a significant decrease of the computational cost compared to Simplex-like methods. The new method is highly useful when dealing with very large linear programming (LP) problems, where only a relatively small percentage of constraints are binding at the optimal solution, as in many transportation, management and economic problems, since it reduces the size of the problem. The method has been implemented and tested on a large number of LP problems. In LP problems without superfluous constraints, the algorithm was 100% successful in identifying binding constraints, while on a set of large-scale LP test problems that included superfluous constraints, the power of the algorithm, considered as a statistical tool for identifying binding constraints, was up to 90.4%.

It is well known that large linear programming (LP) problems typically contain a significant number of redundant constraints and variables. Simplex method [

In this direction, many researchers including Andersen and Andersen [

In this work, a new method is presented for determining the binding constraints of a general linear maximization problem. The method deals with the general n-dimensional linear maximization problem in standard form with more constraints than decision variables, i.e. with less-than-or-equal functional constraints and non-negativity constraints. It is based on the notion of a weighted average of the decision variables associated with each constraint boundary. This average is the common coordinate of the point at which the bisector line of the first angle of the axes (the line whose points have equal coordinates) intersects the hyperplane defined by the constraint boundary equation.

The proposed method is always applicable to linear programming problems defined with no superfluous constraints, and it is sometimes applicable even when superfluous constraints are present. It uses only objective function values at points determined by simple vector operations, so its computational cost is lower than the corresponding cost of matrix manipulation and/or inversion. The new method has been implemented and tested, and results are presented.

This paper is organized as follows: Section 2 discusses the weighted average associated with each constraint and its relation to the optimum; Section 3 presents the new method through a geometrical approach; Section 4 states the proposed algorithm and Section 5 analyzes its computational cost; Section 6 presents the results of testing the algorithm on random LP problems; finally, Section 7 concludes with some remarks on the proposed algorithm and possible extensions.

Many real-life problems consist of maximizing or minimizing a certain quantity subject to several constraints. Specifically, linear programming (LP) involves optimization (maximization or minimization) of a linear objective function on several decision variables subject to linear constraints.

In mathematical notation, a normal form of an LP multidimensional problem can be expressed as follows:

$$\max\ z(X) = C^{T}X \quad \text{subject to} \quad AX \le b,\quad X \ge 0 \qquad (1.1)$$

where:

X is the n-dimensional vector of decision variables,

$$A = [a_{ij}]_{m\times n},\qquad b = [b_i]_{m},\qquad C = [c_j]_{n}$$

with $a_{ij}\in\mathbb{R}$, $c_j\in\mathbb{R}$ and $b_i > 0$ for $i = 1, 2, \cdots, m$ and $j = 1, 2, \cdots, n$ the coefficients of the LP problem, and $z(X) = C^{T}X$ the objective function.

The feasible region of this problem is the set of all possible feasible points, i.e. points in ℝ^{n} that satisfy all constraints in (1.1). The feasible region in n dimensions is a hyper-polyhedron. More specifically, in two dimensions, the boundaries are formed by line segments, and the feasible region is a polygon. An optimal solution is the feasible solution with the largest objective function value (for a maximization problem). In this paper we consider linear problems for which the feasible region forms a convex nonempty set.

The extreme points or vertices of a feasible region are those boundary points that are intersections of the straight-line boundary segments of the region. If a linear programming problem has a solution, it must occur at a vertex of the set of feasible solutions. If the problem has more than one solution, then at least one of them must occur at an extreme point. In either case, the optimal value of the objective function is unique.

Definition 1: A constraint is called “binding” or “active” if it is satisfied as an equality at the optimal solution, i.e. if the optimal solution lies on the surface having the corresponding equation (plane of the constraint). Otherwise the constraint is called “redundant”.

Definition 2: If the plane of a redundant constraint does not contain feasible points (it is beyond the feasible region) then the constraint is called superfluous.

Both types of redundant constraints are included in the problem formulation, since it is not known a priori whether they are superfluous or merely nonbinding.
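Once an optimal solution is available, Definition 1 can be checked mechanically by measuring the slack of each constraint. A minimal Python sketch (the paper's implementation is in R; the function name and tolerance below are ours):

```python
import numpy as np

def binding_constraints(A, b, x_opt, tol=1e-9):
    """Return 1-based indices of constraints satisfied as equalities at x_opt."""
    slack = b - A @ x_opt          # zero slack <=> the constraint is binding
    return [i + 1 for i, s in enumerate(slack) if abs(s) <= tol]

# x1 + x2 <= 4 is binding at (1, 3); x1 + 2*x2 <= 10 has slack 3 there.
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, 10.0])
print(binding_constraints(A, b, np.array([1.0, 3.0])))  # [1]
```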

Definition 3: The zero-level planes together with the planes of all the constraints except the superfluous define the boundary of the feasible region. We refer to these constraints as “functional”.

Any redundant constraint may be dropped from the problem, and the information lost by dropping this constraint (the amount of excess associated with the constraint) can be easily computed once the optimum solution has been found.

The existence of superfluous constraints does not alter the optimal solutions, but it may require many additional iterations and increases the computational cost.

Most of the methods proposed to date for identification of redundant and nonbinding constraints are not warranted in practice because of the excessive computations required to implement them.

Unfortunately, there is no easy method of assuring a convex solution path, except in two-dimensional problems, nor is there any way of preventing degeneracy [

We assume that the feasible region is convex and is formed by the whole set of constraints, so that each constraint corresponds to a facet of the polyhedral feasible region. Hence the problem should not contain superfluous constraints.

Consider the LP problem (1.1). Its constraints may be binding or redundant; nevertheless, as mentioned above, no superfluous constraints are included in the considered problem. For the completeness of the present work, some necessary hypotheses are made and definitions are given.

Let $X^* = (x_1^*, x_2^*, \cdots, x_n^*)$ be the optimal solution of problem (1.1). Since the binding constraints hold with equality in the system of inequalities of problem (1.1), $X^*$ satisfies the boundary equation of each binding constraint.

Definition 4: Consider the LP problem (1.1).

Let

$$\lambda_i = \frac{\sum_{j=1}^{n} a_{ij}\,x_j}{\sum_{j=1}^{n} a_{ij}} = \frac{b_i}{\sum_{j=1}^{n} a_{ij}}$$

be the weighted average of an arbitrary constraint $i$, the first equality holding on the constraint boundary, where $\sum_{j=1}^{n} a_{ij}x_j = b_i$.

Then, the n-dimensional vector $\lambda_i^* = (\lambda_i, \lambda_i, \cdots, \lambda_i)$ is a solution of the boundary equation of this constraint. In this way the n-dimensional vector $\lambda_i^*$, $i = 1, 2, \cdots, m$, is defined by the weighted average of the i-th constraint. Geometrically, $\lambda_i^*$ is the point at which the bisector line of the first angle of the axes (the line whose points have equal coordinates) intersects the hyperplane defined by the boundary equation of constraint $i$.

In most cases the $\lambda_i$, $i = 1, 2, \cdots, m$, differ from each other, but two or more $\lambda_i$ may coincide even though the corresponding constraints are totally different.

The m-dimensional vector that gathers the weighted averages of all constraints is:

$$\lambda = (\lambda_1, \lambda_2, \cdots, \lambda_m). \qquad (1.2)$$

Then, the weighted average of the k-th binding constraint of the LP problem (1.1) is given as:

$$\lambda_k = b_k \Big/ \sum_{j=1}^{n} a_{kj}. \qquad (1.3)$$
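Equation (1.3) is a simple vector operation; the following Python sketch (the paper's implementation is in R, and the function name is ours) computes all the $\lambda_i$ and selects $\lambda_{\min}$, using the data of Example (1.9) from Section 6:

```python
import numpy as np

def weighted_averages(A, b):
    """lambda_i = b_i / sum_j a_ij for every constraint, as in Eq. (1.3)."""
    return b / A.sum(axis=1)

# Data of Example (1.9): four constraints, three variables.
A = np.array([[3.0, 2.0, 5.0], [2.0, 1.0, 1.0], [1.0, 1.0, 3.0], [5.0, 2.0, 4.0]])
b = np.array([55.0, 26.0, 30.0, 57.0])
lam = weighted_averages(A, b)   # [5.5, 6.5, 6.0, 5.1818...]
k = int(np.argmin(lam))
print(k + 1, round(lam[k], 3))  # constraint 4 attains lambda_min = 5.182
```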

Since constraint k is binding, it participates in defining the optimal solution, and then the corresponding $\lambda_k^*$ can be considered as an estimator of the optimal solution of the LP problem (1.1) [

As presented in this section, the $\lambda_i^*$, $i = 1, 2, \cdots, m$, have been considered as solutions of potential equality constraints, so the main question is how to identify the necessary constraints using them. For this purpose, the $\lambda_i^*$, $i = 1, 2, \cdots, m$, are used to induce a probabilistic ordering of the constraints' importance in the LP problem (1.1).

In this direction, the solutions $\lambda_i^*$, $i = 1, 2, \cdots, m$, are sorted. The point $M(\lambda_{\min}, \lambda_{\min}, \cdots, \lambda_{\min})$, where $\lambda_{\min} = \min\{\lambda_i : i = 1, 2, \cdots, m\}$, is a point of the constraint having this specific solution $\lambda_{\min}^* = (\lambda_{\min}, \lambda_{\min}, \cdots, \lambda_{\min})$.

Since the feasible region is bounded by the planes of the convex polyhedron, point M is feasible. Point M is actually the only point at which the bisector line of the first angle of the axes intersects the boundary of the feasible region, i.e. all the other $\lambda_i^*$, apart from $\lambda_{\min}^*$, lie beyond the feasible region of the problem. Moreover, point M lies on the face "opposite the origin".

The proposed method starts by setting all variables equal to $\lambda_{\min}$, so the starting point is the point M, which is a feasible point lying both on a constraint and on the bisector line. Moreover, point M is located across from the origin, which is the Simplex starting point.

The Simplex method starts by setting the value of the variables equal to zero and then proceeds to find the optimal value of the objective function; its starting point is therefore the origin, which is relatively far from the optimum.

Our method starts from the point M, which is in general a point across from the origin. Point M is a boundary point of the feasible region, and its coordinates are the weighted average of a constraint of the problem, so point M lies on that constraint. In case the constraint is binding, it participates in defining the optimal solution, and the coordinates of point M can be considered as an estimator of the optimal solution of the LP problem. Since the main goal of optimization algorithms in LP problems is to reach the optimum, point M is certainly a better estimator of the optimum than the origin.

The algorithm checks for binding constraints considering one decision variable each time. Using the weighted average and the intercepts of the constraints with the zero level hyperplane of the variable under consideration, the algorithm moves from a constraint to an adjacent one until it locates a binding constraint.

The 2-dimensional case

For an LP problem with two decision variables the feasible region is, as mentioned, a convex polygon with m + 2 edges in general (including the two axes). The maximum of the objective function is located at a vertex of this polygon, so it is defined by two binding constraints (one or both of them can be non-negativity constraints when the solution is degenerate). Our goal is to locate these two binding constraints.

Each axis defines two opposite directions. If we slide the objective function line along one of them the objective function value increases, while along the opposite one it decreases. To recognize which is which, it suffices to compare the objective function value at two points, which can be chosen to lie on the edge defined by a constraint. Specifically, we choose the intersection of the constraint with the bisector and its intersection with the other axis. Thus we can easily locate the increasing direction for any axis.

Consider a random LP problem with two decision variables and five constraints.

Given a constraint, to locate its adjacent constraint in a specific direction, say with respect to $x_1$, it suffices, due to the convexity of the feasible region, to check the distance from the origin of the intersection of each constraint with the $x_2$-axis. Specifically, if the increasing direction is the one decreasing the $x_1$ value, then the constraint adjacent to a constraint j is the one whose intersection with the $x_2$-axis has the next smaller distance from the origin. The result for the opposite direction is symmetric.

We conclude that locating the two binding constraints is the result of a directional search with respect to only one of the two variables.

The n-dimensional case

The above results can be easily generalized to the n-dimensional problem, since any planar projection of the feasible region is a convex polygon.

The adjacent constraint is determined with the aid of the distance from the origin of the cuts of the constraints with the plane $x_i = 0$, where i is the search direction. The only issue is that there can be several constraints adjacent to a given one, so there is no guarantee that an adjacent constraint is not overlooked. Also, since the adjacent constraints lie in several directions, the procedure must be repeated for more than one variable. During the search for binding constraints along each direction, an opposition of increasing directions identifies one binding constraint.

Hence, for LP problems whose feasible region is a convex polyhedron and which contain no superfluous constraints, the following theorem has been proved geometrically:

Theorem: Consider problem (1.1) and assume that the following are satisfied:

• The feasible region is convex

• There are no superfluous constraints

Then, the constraint that is located by the proposed algorithm is binding.

Consider the LP problem (1.1). The proposed algorithm is based on the geometrical approach of Section 3. We have to calculate the values of the objective function at specific points: point M and the points given by the nonzero coordinates of the cut points of constraint i with axis j, $i = 1, 2, \cdots, m$, $j = 1, 2, \cdots, n$.

First, we choose the constraint with the smallest weighted average $\lambda_i$, $i = 1, 2, \cdots, m$, as defined in (1.3). Let this be the k-th constraint and set $\lambda_{\min} = \lambda_k$.

The value of the objective function at the point $(\lambda_{\min}, \cdots, \lambda_{\min})$ is:

$$z(\lambda_k) = \sum_{j=1}^{n} c_j\,\lambda_k. \qquad (1.4)$$

Then, we calculate the distance of the cuts of the constraints with the x i = 0 plane from the origin.

The nonzero coordinate of the cut point of constraint i with axis j, $i = 1, 2, \cdots, m$, $j = 1, 2, \cdots, n$, is calculated as:

$$b_{x_{ij}} = b_i / a_{ij} \qquad (1.5)$$

and, since each such cut point has a single nonzero coordinate, its objective function value is:

$$z(b_{x_{kj}}) = c_j\, b_{x_{kj}}. \qquad (1.6)$$

Start the search with respect to $x_j$, $j = 1, 2, \cdots, n$. During the search for binding constraints, an opposition of increasing directions identifies one binding constraint.

The increasing direction of the objective function is determined by the index $r_0$, defined in the following equation:

$$r_0 = \frac{z(b_{x_{kj}}) - z(\lambda_k)}{b_{x_{kj}} - \lambda_k}. \qquad (1.7)$$
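As a concrete check of (1.7), take the fourth constraint of Example (1.9) in Section 6 and the first variable: there $\lambda_4 = 57/11$, $b_{x_{41}} = 57/5 = 11.4$, $z(\lambda_4) = 45 \cdot 57/11$ and $z(b_{x_{41}}) = 20 \cdot 11.4 = 228$. A short Python sketch (the function name is ours):

```python
def r0(z_bx, z_lam, bx, lam):
    """Direction index of Eq. (1.7); positive means z increases as x_j grows."""
    return (z_bx - z_lam) / (bx - lam)

# Fourth constraint of Example (1.9), first variable:
lam4 = 57 / 11                        # weighted average of constraint 4
r = r0(20 * 11.4, 45 * lam4, 11.4, lam4)
print(round(r, 3))  # -0.833: search in the direction of decreasing x_1
```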

If $r_0 > 0$ then the increasing direction is the positive one, i.e. the direction in which $x_j$ increases; otherwise it is the direction in which $x_j$ decreases. Then find the adjacent constraint in the chosen direction, as described below.

The distance from the origin of the cut of constraint i with the hyperplane $x_j = 0$ is given as:

$$d_{x_{ij}} = b_i \Big/ \sqrt{\sum_{k=1,\,k\neq j}^{n} a_{ik}^{2}}. \qquad (1.8)$$
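Equations (1.5) and (1.8) can be tabulated up front, or lazily as the paper suggests. A Python sketch with the same 10000000-style sentinel for undefined entries (the function name is ours); reading (1.8) as a point-to-hyperplane distance reproduces the $d_x$ values of the worked example in Section 6:

```python
import numpy as np

BIG = 1e7  # sentinel used when an intercept or a distance is undefined

def intercepts_and_distances(A, b):
    """bx[i,j] = b_i/a_ij (Eq. 1.5); dx[i,j] = b_i/sqrt(sum_{k!=j} a_ik^2) (Eq. 1.8)."""
    m, n = A.shape
    bx = np.full((m, n), BIG)
    dx = np.full((m, n), BIG)
    for i in range(m):
        for j in range(n):
            if A[i, j] != 0:
                bx[i, j] = b[i] / A[i, j]
            s = np.sum(A[i] ** 2) - A[i, j] ** 2
            if s > 0:
                dx[i, j] = b[i] / np.sqrt(s)
    return bx, dx

# First constraint of Example (1.9): 3x1 + 2x2 + 5x3 <= 55.
A = np.array([[3.0, 2.0, 5.0], [2.0, 1.0, 1.0], [1.0, 1.0, 3.0], [5.0, 2.0, 4.0]])
b = np.array([55.0, 26.0, 30.0, 57.0])
bx, dx = intercepts_and_distances(A, b)
print(round(dx[0, 0], 3))  # 10.213 = 55 / sqrt(2^2 + 5^2)
```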

To continue, we move from constraint to constraint following the chosen direction with respect to the distances $d_{x_{ij}}$, for $i = 1, 2, \cdots, m$.

The adjacent constraint in the chosen direction is the one having the next bigger (respectively smaller) distance when the direction is increasing (respectively decreasing). If there is no bigger (respectively smaller) distance in this direction, then the current constraint is binding; otherwise compute the $r_0$ index for the next constraint.

If the $r_0$ index of the next constraint has a different sign from the previous one, then the increasing direction has changed and this last constraint is binding. If no constraint with a different direction exists, then the corresponding variable is considered zero and the current constraint is binding. Repeat the process for the next decision variable $x_{j+1}$.

Stop the procedure after finding n binding constraints or after having exhausted the n variables.

The steps of the proposed algorithm are described below:

The proposed algorithm

algorithm bi.co.fi

input: the number of decision variables (n), the number of constraints (m), the coefficient matrix of the problem (A), the vector of right-hand-side coefficients (b), the vector of objective coefficients (C).

output: the vector of binding constraints v = (v_1, v_2, ⋯, v_n)

Compute λ_i for all i = 1, 2, ⋯, m using Equation (1.3)
Set λ_min = λ_k, where k is the constraint attaining λ_min
    (if more than one constraint attains λ_min, choose one of them at random)
Compute z(λ_k) using Equation (1.4)
for all i, j (i = 1, ⋯, m; j = 1, ⋯, n):
    if a_ij = 0 then set b_x_ij = 10000000
    else compute b_x_ij using Equation (1.5)
Compute z(b_x_kj) for all j = 1, 2, ⋯, n using Equation (1.6)
for all i, j (i = 1, ⋯, m; j = 1, ⋯, n):
    if Σ_{k≠j} a_ik² = 0 then set d_x_ij = 10000000
    else compute d_x_ij using Equation (1.8)
for j = 1, 2, ⋯, n do
    dx_Order ← the constraints ordered by d_x_ij
    place ← the position of constraint k in dx_Order
    if z(b_x_kj) − z(λ_k) = 0 or b_x_kj − λ_k = 0 then print 0
    compute r_0 at position place using Equation (1.7)
    if sign of r_0 = +1 then sequence(start = place, end = m, step = 1)
    else sequence(start = place, end = 1, step = −1)
    for t = start, ⋯, end do
        v_j ← 0
        new_place ← t + sign of r_0
        if new_place = 0 or new_place = m + 1 then
            v_j ← the constraint at position t    (no adjacent constraint remains, so the current one is binding)
            break
        end if
        r_prev ← r_0
        compute r_0 at new_place using Equation (1.7)
        if r_prev · r_0 < 0 then
            v_j ← the constraint at new_place
            break
        end if
    end for
end for
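The steps above can be ported to a few dozen lines of Python (the authors' implementation is in R; everything below, including names and the handling of sentinel values, is our sketch and omits the zero-denominator guards):

```python
import numpy as np

BIG = 1e7  # sentinel for undefined intercepts and distances

def bicofi(A, b, c):
    """Python sketch of the bi.co.fi search (our port; the paper used R).
    Returns the 1-based indices of the constraints marked as binding."""
    m, n = A.shape
    lam = b / A.sum(axis=1)                        # weighted averages, Eq. (1.3)
    z_lam = c.sum() * lam                          # z at (lam_i,...,lam_i), Eq. (1.4)
    bx = np.where(A != 0, b[:, None] / np.where(A != 0, A, 1.0), BIG)  # Eq. (1.5)
    sq = (A ** 2).sum(axis=1)[:, None] - A ** 2
    dx = np.where(sq > 0, b[:, None] / np.sqrt(np.where(sq > 0, sq, 1.0)), BIG)  # Eq. (1.8)
    k = int(np.argmin(lam))                        # starting constraint (lambda_min)

    def r(i, j):
        # Direction index, Eq. (1.7); the cut point has one nonzero coordinate,
        # so its objective value is c_j * bx[i, j].
        return (c[j] * bx[i, j] - z_lam[i]) / (bx[i, j] - lam[i])

    binding = set()
    for j in range(n):
        order = list(np.argsort(dx[:, j]))         # constraints by distance, ascending
        cur = order.index(k)
        prev = r(k, j)
        step = 1 if prev > 0 else -1               # next bigger vs next smaller distance
        while True:
            nxt = cur + step
            if nxt < 0 or nxt >= m:                # no adjacent constraint remains:
                binding.add(int(order[cur]) + 1)   # the current constraint is binding
                break
            i = int(order[nxt])
            rn = r(i, j)
            if prev * rn < 0:                      # the increasing direction flipped:
                binding.add(i + 1)
                break
            prev, cur = rn, nxt
    return sorted(binding)

# Example (1.9): the worked example of Section 6 finds constraints 1, 2 and 4 binding.
A = np.array([[3.0, 2.0, 5.0], [2.0, 1.0, 1.0], [1.0, 1.0, 3.0], [5.0, 2.0, 4.0]])
b = np.array([55.0, 26.0, 30.0, 57.0])
c = np.array([20.0, 10.0, 15.0])
print(bicofi(A, b, c))  # [1, 2, 4]
```

On Example (1.9) this reproduces the binding set found by the worked example in Section 6.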

Consider an LP problem of n variables and m constraints, with m > n and both large (so quantities such as m or n are negligible in comparison to m^{2} or mn). The array needed to solve this problem using Simplex is ( m + 1 ) × ( n + 1 ). With the proposed algorithm, only an n × n array is needed to solve the same problem, so the dimension reduction is obvious. The number of iterations using the Simplex method is on average ( n + m ) / 2 [

The number of iterations of the proposed algorithm is at most n, the number of variables. Sometimes even fewer iterations are needed (for example in 2-variable problems).

The number of multiplications (or divisions) required in each iteration of the Simplex algorithm is ( m + 1 ) ( n + 1 ) , ignoring the cost of the ratio test, while the number of additions (or subtractions) is roughly the same [

To find the total number of operations of the proposed algorithm, the computational costs of $r_0$, $\lambda_i$, $b_{x_{ij}}$ and $d_{x_{ij}}$, $i = 1, 2, \cdots, m$, $j = 1, 2, \cdots, n$, must be added. The total cost in multiplications is on average $mn^2 + 4mn + 2n + m + 2$ and the total cost in additions is on average $mn(n-1)$. Hence, the total number of operations using the proposed algorithm is $2mn^2 + 3mn + 2n + m + 2$.

It is obvious that for m > n not only is the number of iterations of the new method smaller than the corresponding number for Simplex, but the total computational cost is also significantly lower. Even if one adds the cost of solving a system of equations for the binding constraints found ($2n^3/3$ operations) to compute the optimal solution (so that the results of the two methods are comparable), the total computational cost remains significantly lower (a gain of order at least $n^2$). The gain in computational cost increases as m becomes much bigger than n.
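Taking the averages quoted above at face value, the two operation counts are easy to compare; a small sketch (the formulas are transcribed from this section, the sample sizes are ours):

```python
def simplex_ops(m, n):
    # ~ (n + m)/2 iterations, each costing ~ (m + 1)(n + 1) multiplications
    # plus roughly as many additions.
    return (n + m) // 2 * 2 * (m + 1) * (n + 1)

def proposed_ops(m, n):
    # Total count from this section, plus 2n^3/3 operations to solve the
    # reduced n x n system once the binding constraints are known.
    return 2 * m * n ** 2 + 3 * m * n + 2 * n + m + 2 + 2 * n ** 3 // 3

for m, n in [(1000, 10), (10000, 50)]:
    print(m, n, simplex_ops(m, n), proposed_ops(m, n))
```

For m = 1000, n = 10 the ratio is already around 50 in favour of the proposed method, and it grows as m/n grows.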

In this section, the numerical results of the proposed algorithm are presented. In the first part, an illustrative LP problem is solved. In the second part, the comparative results of the proposed algorithm and the Simplex method for identifying binding constraints in indicative LP problems are presented. Finally, the algorithm was considered as a statistical tool for correctly identifying binding constraints in random linear programming problems, and the results of this statistical approach are presented in the third part of the section. The proposed algorithm was implemented in the R language [

The following numerical example serves to illustrate the proposed algorithm. Consider the LP maximization problem (1.9) with three decision variables and four constraints:

$$\begin{aligned}
\max\ z(X) ={}& 20x_1 + 10x_2 + 15x_3 \\
\text{subject to}\quad & 3x_1 + 2x_2 + 5x_3 \le 55 \\
& 2x_1 + x_2 + x_3 \le 26 \\
& x_1 + x_2 + 3x_3 \le 30 \\
& 5x_1 + 2x_2 + 4x_3 \le 57 \\
& X \ge 0
\end{aligned} \qquad (1.9)$$

where X is the 3-dimensional vector of decision variables

$$X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix},\qquad A = \begin{pmatrix} 3 & 2 & 5 \\ 2 & 1 & 1 \\ 1 & 1 & 3 \\ 5 & 2 & 4 \end{pmatrix},\qquad b = \begin{pmatrix} 55 \\ 26 \\ 30 \\ 57 \end{pmatrix},\qquad C = \begin{pmatrix} 20 \\ 10 \\ 15 \end{pmatrix}$$

and z ( X ) = C T X is the objective function.

In this problem, according to the Simplex method, the third constraint is redundant, while the first, the second and the fourth constraints are binding. It is also known that the third constraint is not superfluous.

According to (1.2), $\lambda^{T} = (5.5,\ 6.5,\ 6,\ 5.\overline{18})$. The smallest $\lambda_i$, $i = 1, 2, 3, 4$, is $\lambda_4 = 5.\overline{18}$, referring to the fourth constraint. Thus, $\lambda_{\min} = \lambda_4$.

The matrices $b_x = [b_{x_{ij}}]_{4\times 3}$ and $d_x = [d_{x_{ij}}]_{4\times 3}$ are given below; however, not all of their elements need to be calculated in advance. Each element is calculated whenever it is needed:

$$b_x = \begin{pmatrix} 18.\overline{3} & 27.5 & 11 \\ 13 & 26 & 26 \\ 30 & 30 & 10 \\ 11.4 & 28.5 & 14.25 \end{pmatrix}, \qquad d_x = \begin{pmatrix} 10.213 & 9.432 & 15.254 \\ 18.385 & 11.628 & 11.627 \\ 9.487 & 9.487 & 21.213 \\ 12.745 & 8.902 & 10.585 \end{pmatrix}$$

We start from the fourth constraint and the first variable; according to (1.4) and (1.6) respectively, we find $z(\lambda_4) = 233.\overline{18}$ and $z(b_{x_{41}}) = 228$. Then according to (1.7) we obtain $r_0 = -0.8\overline{3} < 0$, so the search direction is the decreasing one. The next smaller distance $d_{x_{i1}}$, $i = 1, 2, 3, 4$, is $d_{x_{11}} \cong 10.213$, referring to the first constraint. For the first constraint, $z(b_{x_{11}}) = 366.\overline{6}$ and $r_{11} = 9.2857 > 0$. Since $r_{11} \cdot r_0 < 0$, the first constraint is binding. This process is illustrated in

Starting again from the fourth constraint and the second variable, we obtain $z(\lambda_4) = 233.\overline{18}$ and $z(b_{x_{42}}) = 285$. In this case, according to (1.7), $r_0 = 2.\overline{2} > 0$. The next bigger distance $d_{x_{i2}}$, $i = 1, 2, 3, 4$, is $d_{x_{12}} \cong 9.432$, referring to the first constraint. For the first constraint, (1.6) gives $z(b_{x_{12}}) = 275$ and (1.7) gives $r_{12} = 1.25 > 0$, so $r_{12} \cdot r_0 > 0$ and the search must continue. The next bigger distance is $d_{x_{32}} \cong 9.487$, referring to the third constraint. We set $r_0 = r_{12}$.

For the third constraint, $z(b_{x_{32}}) = 300$ and $r_{32} = 1.25 > 0$, so again $r_{32} \cdot r_0 > 0$. The next bigger distance $d_{x_{i2}}$, $i = 1, 2, 3, 4$, is $d_{x_{22}} \cong 11.628$, referring to the second constraint. We set $r_0 = r_{32}$.

For the second constraint, (1.6) gives $z(b_{x_{22}}) = 260$ and (1.7) gives $r_{22} = -1.\overline{6} < 0$. So $r_{22} \cdot r_0 < 0$ and therefore the second constraint is binding (

Starting from the fourth constraint again and the third variable, we obtain $z(\lambda_4) = 233.\overline{18}$ and $z(b_{x_{43}}) = 213.75$. Calculating $r_0$ according to (1.7), we obtain $r_0 = -2.143 < 0$. Then, according to (1.8), the next smaller distance $d_{x_{i3}}$, $i = 1, 2, 3, 4$, would be the one below $d_{x_{43}} \cong 10.585$, which refers to the fourth constraint itself. Since there is no smaller distance among the $d_{x_{i3}}$, $i = 1, 2, 3, 4$, the fourth constraint is binding (

According to the new algorithm, the binding constraints are the first, the second and the fourth. The proposed algorithm successfully identified the three binding constraints of the problem.

These results are briefly presented in the tables below.

The comparative results of the proposed algorithm and the Simplex method for identifying binding constraints in indicative LP problems are presented in the following table.

Weighted average computations:

| i-constraint | λ | z(λ) |
|---|---|---|
| 1 | 5.5 | 247.5 |
| 2 | 6.5 | 292.5 |
| 3 | 6 | 270 |
| 4 | 5.182 | 233.182 |

| i-constraint | Dx_{i1} value | order | Dx_{i2} value | order | Dx_{i3} value | order |
|---|---|---|---|---|---|---|
| 1 | 10.213 | 2 | 9.432 | 2 | 15.254 | 3 |
| 2 | 18.385 | 4 | 11.628 | 4 | 11.627 | 2 |
| 3 | 9.487 | 1 | 9.487 | 3 | 21.213 | 4 |
| 4 | 12.745 | 3 | 8.902 | 1 | 10.585 | 1 |

| constraint | r_{0} (1st variable) | sign | order | r_{0} (2nd variable) | sign | order | r_{0} (3rd variable) | sign | order |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 9.286 (stop) | + | 2 | 1.25 | + | 2 | | | |
| 2 | | | | −1.667 (stop) | − | 4 | | | |
| 3 | | | | 1.25 | + | 3 | | | |
| 4 | −0.833 | − | 3 | 2.222 | + | 1 | −2.143 (stop) | − | 1 |

The algorithm was applied to a large number of random LP problems to check its efficiency in identifying binding constraints. However, for random LP problems it is not known a priori whether superfluous constraints exist. Since there was no information about the constraints in these random problems, the proposed algorithm was treated as a statistical tool for correctly identifying binding constraints, and a statistical approach was used to check its efficiency.

For this purpose, three sets of 1000 different random non-negative LP problems in normal form (small, medium and large scale) were generated for the numerical experiments on identifying binding constraints, using the R language.

The problems were created using a jitter function. At first, a vector that was considered as a solution of the problem was chosen. Then, linear problems were

No. of binding constraints, and which constraints are binding (in parentheses), identified by each method:

| Example no. | No. of constraints | No. of variables | Simplex method | Proposed algorithm (total) | Proposed algorithm (without duplicates) |
|---|---|---|---|---|---|
| 1 | 4 | 3 | 3 (1,2,4) | 3 (1,2,4) | 3 (1,2,4) |
| 2 | 4 | 3 | 2 (1,2) | 3 (1,2,1) | 2 (1,2) |
| 3 | 5 | 3 | 2 (1,2) | 3 (2,1,1) | 2 (2,1) |
| 4 | 4 | 3 | 2 (2,3) | 3 (2,2,3) | 2 (2,3) |
| 5 | 4 | 3 | 2 (3,4) | 3 (4,3,3) | 2 (4,3) |
| 6 | 4 | 3 | 2 (2,3) | 3 (2,2,3) | 2 (2,3) |
| 7 | 4 | 3 | 2 (3,4) | 3 (4,4,3) | 2 (4,3) |
| 8 | 4 | 3 | 2 (1,2,3) | 3 (2,1,3) | 2 (2,1,3) |
| 9 | 5 | 4 | 3 (1,4,5) | 4 (4,1,5,1) | 3 (4,1,5) |
| 10 | 9 | 6 | 3 (1,2,3) | 6 (3,3,2,1,2,2) | 3 (3,2,1) |
| 11 (a) | 4 | 5 | 2 (1,4) | 5 (1,1,1,1,4) | 2 (1,4) |

a. See references.

formed: the $a_{ij}$ ($i = 1, 2, \cdots, m$, $j = 1, 2, \cdots, n$) coefficients were generated independently and randomly from the uniform distribution.

Then, we have the coefficient matrix A = [ a i j ] m × n .

To form vector b we multiplied the above matrix A with the considered solution and added random noise to this vector. To form the objective coefficients vector C, the c j , j = 1 , 2 , ⋯ , n coefficients were generated independently and randomly from the uniform distribution.
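The construction just described can be sketched as follows (the specific ranges and noise scale are our assumptions; the paper only states "uniform distribution" and "random noise"):

```python
import numpy as np

def random_lp(m, n, noise=0.1, seed=0):
    """Random LP in normal form built around a chosen solution vector."""
    rng = np.random.default_rng(seed)
    x_sol = rng.uniform(1.0, 10.0, size=n)            # the vector taken as a solution
    A = rng.uniform(0.0, 1.0, size=(m, n))            # a_ij ~ uniform, independent
    b = A @ x_sol + rng.uniform(0.0, noise, size=m)   # jitter the right-hand side
    c = rng.uniform(0.0, 1.0, size=n)                 # objective coefficients ~ uniform
    return A, b, c

A, b, c = random_lp(20, 5)
print(A.shape, b.shape, c.shape)  # (20, 5) (20,) (5,)
```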

Using the above formulation, three samples of 1000 small, 1000 medium and 1000 large scale problems were formed. These problems had redundant, binding and superfluous constraints.

The trials were performed for different training data sizes, and the observations of the trial sets are independent. In these problems, constraints were characterized as binding according to the Simplex algorithm. Since the algorithm is considered as a statistical tool, we calculate the rate of incorrect rejection of a true null hypothesis and the rate of failure to reject a false null hypothesis.

Then, for each sample, consider the following assumptions to conduct the hypotheses test.

Assumptions for the observed data, referring to constraint characterization according to Simplex:

H_{0}: The constraint is binding according to Simplex

H_{a}: The constraint is not binding according to Simplex

And

Assumptions for the predicted data, referring to constraint characterization according to the proposed algorithm:

H_{0}: The constraint is considered as binding according to the proposed algorithm

H_{a}: The constraint cannot be considered as binding according to the proposed algorithm

Let

P_{1}: The probability that there are binding constraints (according to Simplex method) among the constraints that characterized as binding by the proposed algorithm.

P_{2}: The probability a constraint can’t be characterized as binding according to the proposed algorithm even though the constraint is binding according to Simplex method.

P_{3}: The probability a constraint that is not binding according to Simplex method is characterized as binding according to the proposed algorithm.

P_{4}: The probability a constraint that can’t be characterized as binding by the proposed algorithm isn’t binding according to Simplex method.

P_{5}: The probability that binding constraints according to Simplex method are included among the constraints that are characterized as binding by the proposed algorithm.
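Given the Simplex characterization and the algorithm's characterization of one problem's constraints, rates in the spirit of P_{2}–P_{4} can be computed from the two index sets; a sketch (the formulas are our reading of the definitions above, not necessarily the paper's exact estimators):

```python
def rates(binding_simplex, binding_algo, m):
    """Misclassification rates between the two characterizations of m constraints."""
    S, P = set(binding_simplex), set(binding_algo)
    not_binding = set(range(1, m + 1)) - S
    p2 = len(S - P) / len(S)             # binding per Simplex but missed (cf. P2)
    p3 = len(P - S) / len(not_binding)   # not binding per Simplex, flagged anyway (cf. P3)
    p4 = 1 - p3                          # not binding and not flagged (cf. P4)
    return p2, p3, p4

# Toy case: Simplex says {1, 2, 4} bind among 5 constraints; the algorithm finds {1, 2}.
print(rates([1, 2, 4], [1, 2], 5))  # constraint 4 is missed: p2 = 1/3, p3 = 0, p4 = 1
```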

The probabilities for small, medium and large scale problems are presented in

In small scale problems, the algorithm fails to correctly identify the binding constraints in 22% of cases. Also, the algorithm does not fail to correctly identify the binding constraints of the problem in 63.6% of cases, and the probability that binding constraints are identified correctly by the algorithm is 86.9%. In small scale problems, constraints that are not binding are not characterized as binding by the proposed algorithm in 76.6% of cases.

In medium scale problems, the algorithm fails to correctly identify the binding constraints in 15.9% of cases. Also, the algorithm does not fail to correctly identify the binding constraints of the problem in 70.3% of cases, and the probability that binding constraints are identified correctly by the algorithm is 89%. Constraints

| Probabilities | Mean | Standard Error | 95% CI Lower Bound | 95% CI Upper Bound | Standard deviation | Median |
|---|---|---|---|---|---|---|
| P_{1} | 0.22 | 0.05 | 0.21 | 0.23 | 0.164 | 0.2 |
| P_{2} | 0.13 | 0.002 | 0.126 | 0.136 | 0.086 | 0.125 |
| P_{3} | 0.233 | 0.032 | 0.227 | 0.239 | 0.1 | 0.227 |
| P_{4} | 0.766 | 0.032 | 0.76 | 0.773 | 0.1 | 0.773 |
| P_{5} | 0.869 | 0.003 | 0.864 | 0.875 | 0.085 | 0.875 |

| Probabilities | Mean | Standard Error | 95% CI Lower Bound | 95% CI Upper Bound | Standard deviation | Median |
|---|---|---|---|---|---|---|
| P_{1} | 0.159 | 0.003 | 0.154 | 0.165 | 0.095 | 0.137 |
| P_{2} | 0.11 | 0.002 | 0.107 | 0.113 | 0.052 | 0.103 |
| P_{3} | 0.187 | 0.002 | 0.183 | 0.192 | 0.076 | 0.175 |
| P_{4} | 0.813 | 0.002 | 0.808 | 0.817 | 0.076 | 0.825 |
| P_{5} | 0.89 | 0.002 | 0.887 | 0.893 | 0.052 | 0.897 |

| Probabilities | Mean | Standard Error | 95% CI Lower Bound | 95% CI Upper Bound | Standard deviation | Median |
|---|---|---|---|---|---|---|
| P_{1} | 0.067 | 0.001 | 0.064 | 0.07 | 0.043 | 0.052 |
| P_{2} | 0.058 | 0.001 | 0.056 | 0.06 | 0.033 | 0.047 |
| P_{3} | 0.096 | 0.002 | 0.092 | 0.099 | 0.059 | 0.087 |
| P_{4} | 0.904 | 0.002 | 0.9 | 0.908 | 0.059 | 0.913 |
| P_{5} | 0.942 | 0.001 | 0.939 | 0.944 | 0.033 | 0.953 |

that are not binding in medium scale problems are not characterized as binding by the proposed algorithm in 81.3% of cases.

In large scale problems, the algorithm fails to correctly identify the binding constraints in 6.7% of cases. Also, the algorithm does not fail to correctly identify the binding constraints of the problem in 84.6% of cases, and the probability that binding constraints are identified correctly by the algorithm is 94.2%. In these problems, constraints that are not binding are not characterized as binding by the proposed algorithm in 90.4% of cases.

In this paper, a method is proposed to reduce an LP problem's dimension, which is highly useful when dealing with very large LP problems, where only a relatively small percentage of constraints are binding at the optimal solution. A new algorithm was presented, implemented and tested on a large number of linear programming problems with impressive results. In fact, in all the well-conditioned problems, the algorithm behaved successfully. The number of operations required by the proposed method is small compared to other known algorithms.

In convex LP problems without superfluous constraints, the algorithm succeeds in finding the binding constraints. Specifically, the constraints that are found are definitely binding, so the dimension of the problem is reduced. In particular, in problems with two variables, only one iteration is required. Even when the problem had several superfluous constraints, the algorithm succeeded in finding most of the binding constraints. If one binding constraint is missed by the procedure, it is likely the constraint having the smallest λ.

For very large LP problems, only a relatively small percentage of constraints are binding at the optimal solution. Even so, in these problems the algorithm fails to correctly identify the binding constraints in only 6.7% of cases. In large scale problems, the algorithm also does not fail to correctly identify the constraints of the problem in 84.6% of cases, and the probability that binding constraints are identified correctly by the algorithm is 94.2%. Using the proposed algorithm, the reduction of the dimension of the problem is obvious, even though there is a small chance of not identifying the binding constraints correctly. However, Telgen [

The proposed method can easily be modified for minimization problems and for problems with different types of constraints. Future research involves applications of the algorithm to dual LP problems, integer programming problems, nonlinear programming problems and multi-objective optimization problems. Since the rate of successful identification of binding constraints did not account for the number of degenerate solutions, research remains to be done on the topic of identifying degenerate solutions.

The authors declare no conflicts of interest regarding the publication of this paper.

Nikolopoulou, E.I., Manoussakis, G.E. and Androulakis, G.S. (2019) Locating Binding Constraints in LP Problems. American Journal of Operations Research, 9, 59-78. https://doi.org/10.4236/ajor.2019.92004