Relating Optimization Problems to Systems of Inequalities and Equalities

Abstract

In quantitative decision analysis, an analyst applies mathematical models to make decisions. Frequently these models involve an optimization problem to determine the values of the decision variables, a system S of possibly nonlinear inequalities and equalities to restrict these variables, or both. In this note, we relate a general nonlinear programming problem to such a system S in such a way as to provide a solution of either by solving the other, with certain limitations. We start with S and generalize phase 1 of the two-phase simplex method to either solve S or establish that a solution does not exist. The conclusion is reached by minimizing a sum of artificial variables subject to the system S as constraints. Using examples, we illustrate how this approach can give the core of a cooperative game and an equilibrium for a noncooperative game, as well as solve both linear and nonlinear goal programming problems. Similarly, we start with a general nonlinear programming problem and present an algorithm to solve it as a series of systems S by generalizing the sliding objective function method for two-dimensional linear programming. An example is presented to illustrate the geometric nature of this approach.

Share and Cite:

Corley, H. and Dwobeng, E. (2020) Relating Optimization Problems to Systems of Inequalities and Equalities. American Journal of Operations Research, 10, 284-298. doi: 10.4236/ajor.2020.106016.

1. Introduction

Quantitative decision analysis involves notions of comparison and optimality. The result is that the mathematical models used to make decisions frequently involve an optimization problem to determine the values of the decision variables, a system S of possibly nonlinear inequalities and equalities to restrict these variables, or both. The solution of such a system S and optimization problems is thus essential to decision analysis. In this note we relate a general nonlinear programming problem to a system S to provide a solution of either by solving the other—with certain limitations. In particular, we present a method for either obtaining a solution for S or else establishing that a solution does not exist by using existing computational techniques. Our method generalizes phase 1 of the two-phase linear programming simplex method to nonlinear programming. We also present an algorithm to solve a general nonlinear programming problem as a series of such systems S by generalizing the “sliding objective” method for geometrically solving a two-dimensional linear programming problem.

As background, we note that systems of linear equations have been considered for at least three millennia. The ancient Chinese, for example, organized linear systems of equations in a matrix-like form and solved them with a procedure equivalent to Gaussian elimination [1]. In the third century BCE, Archimedes [2] formulated his well-known cattle problem as a system of linear equations that required an integer solution. In the seventeenth century Descartes introduced systems of linear equations in geometry, and later that century Leibniz developed a systematic method of finding solutions using determinants [3]. In the eighteenth century Cramer used Leibniz's work to establish a way of obtaining explicit solutions via his eponymous Cramer's rule, and in the nineteenth century Grassmann began to synthesize these developments into what is now called linear algebra [4].

The history of the theory of linear inequalities is more recent and developed through interactions between mathematics and other disciplines [5]. In the nineteenth century Fourier proposed the idea of constructing a mathematical theory for systems of linear inequalities. Shortly thereafter, Farkas developed a theory of systems of linear inequalities with respect to analytical mechanics that led to Farkas' lemma. At the end of the nineteenth century, Minkowski, independently of Farkas, derived a theory of linear inequalities with respect to convexity [5]. In the twentieth century Lovitt introduced the preferential voting problem as a set of inequalities and sought a geometric solution. Later, Dines approached the voting problem algebraically [5]. The ideas behind his solution method led to the development of a theory of systems of linear inequalities [6]. He also examined the relation between the matrix of the coefficients of a system of linear inequalities, the existence of solutions, and the characteristics of those solutions. Further studies by Kuhn and Tucker [7] [8], for example, refined these results but did not relate optimization problems to linear systems as done here in the general case.

An efficient computational method to solve a system of linear inequalities and equalities did not exist until Dantzig [9] suggested a phase 1 involving artificial variables to start the simplex method. Subsequent work involving optimization has focused on the solvability of convex inequality constraints in a convex programming problem [11], solving systems of linear interval inequalities [12] [13], or considering variational inequalities [14] [15], for example.

Here we generalize Dantzig's approach to systems of nonlinear inequalities and equalities S by considering an associated nonlinear programming problem. We then extend the geometric "sliding objective function method" [9] for solving a two-variable linear programming problem to solving a general nonlinear programming problem. Our approach requires determining the solvability of a nonlinear system S at each iteration.

The paper is organized as follows. In Section 2, a correspondence is established between the solvability of a nonlinear system S and an associated nonlinear programming minimization problem. We then present an algorithm for solving a general nonlinear programming minimization problem, to any degree of accuracy, as a series of systems S. In Section 3, examples are given. Conclusions are stated in Section 4.

2. Basic Results

For real-valued functions f, g, h, consider the system S of inequalities and equalities (1)-(3) and the minimization problem T below, where X is a set in which $(x_1, \ldots, x_m)$ is required to satisfy further restrictions. For example, if nonnegativity restrictions $x_i \ge 0$, $i = 1, \ldots, m$, are automatically applied by the solver to be used, then X could be the set $\{(x_1, \ldots, x_m) : x_i \ge 0, i = 1, \ldots, m\}$. X could also be the set $X = \{(x_1, \ldots, x_m) : x_i \in W, i = 1, \ldots, m\}$, where $W = \{0, 1, 2, 3, \ldots\}$ is the set of nonnegative integers. We note that each equality in (2) could be replaced by two inequalities in opposite directions, so that S without (2) remains a general formulation.

(S) $g_i(x_1, \ldots, x_m) \le b_i, \quad i = 1, \ldots, n$ (1)

$h_j(x_1, \ldots, x_m) = d_j, \quad j = 1, \ldots, p$ (2)

$(x_1, \ldots, x_m) \in X$ (3)

(T) Minimize $f(x_1, \ldots, x_m, s_1, \ldots, s_n, a_1, \ldots, a_{n+p}) = \sum_{k=1}^{n+p} a_k$

subject to

$g_i(x_1, \ldots, x_m) + s_i + a_i = b_i, \quad i = 1, \ldots, n$ (4)

$h_j(x_1, \ldots, x_m) + a_{n+j} = d_j, \quad j = 1, \ldots, p$ (5)

$s_i \ge 0, \quad i = 1, \ldots, n$ (6)

$a_k \ge 0, \quad k = 1, \ldots, n+p$ (7)

$x_l \ge 0, \quad l = 1, \ldots, m$ (8)

The variables $s_1, \ldots, s_n$, which appear in T but not in S, are called slack variables, as in linear programming. They represent the nonnegative difference between the two sides of (1). Similarly, the variables $a_k$, $k = 1, \ldots, n+p$, are called artificial variables and must each have the value 0 if (1)-(2) are to hold. The main result relating S and T is now stated.
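The construction of T from S can be sketched computationally: at an optimum of T, each artificial variable equals the amount by which its constraint is violated, so minimizing their sum is the same as driving the total constraint violation to zero. A minimal pure-Python sketch on a hypothetical toy system of our own (a coarse grid search stands in for a nonlinear solver such as BARON):

```python
# Phase-1 idea of Proposition 1: S is solvable iff the minimum total
# "artificial" violation is zero.  Hypothetical toy system S:
#   g1(x) = x1^2 + x2^2 <= 4,   h1(x) = x1 + x2 = 2.
ineqs = [(lambda x: x[0]**2 + x[1]**2, 4.0)]   # pairs (g_i, b_i)
eqs   = [(lambda x: x[0] + x[1], 2.0)]         # pairs (h_j, d_j)

def total_violation(x):
    """Objective of T at the optimal artificial values: max(g_i(x) - b_i, 0)
    for each inequality and |h_j(x) - d_j| for each equality."""
    v = sum(max(g(x) - b, 0.0) for g, b in ineqs)
    v += sum(abs(h(x) - d) for h, d in eqs)
    return v

# Crude grid search over [-3, 3] x [-3, 3] in steps of 0.5.
grid = [i * 0.5 for i in range(-6, 7)]
best = min(((x1, x2) for x1 in grid for x2 in grid), key=total_violation)
```

A zero violation at `best` certifies that this toy S is solvable; a strictly positive minimum would certify, per Corollary 1, that it is not.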

Proposition 1. System S has a solution $(x_1, \ldots, x_m)$ if and only if problem T has a solution $(x_1, \ldots, x_m, s_1, \ldots, s_n, a_1, \ldots, a_{n+p})$ for which $a_k = 0$, $k = 1, \ldots, n+p$. In particular, a solution $(x_1, \ldots, x_m, s_1, \ldots, s_n, a_1, \ldots, a_{n+p})$ to T for which $a_k = 0$, $k = 1, \ldots, n+p$, determines the solution $(x_1, \ldots, x_m)$ to S.

Proof. Suppose that system S has a solution $(x_1^*, \ldots, x_m^*)$. Then $(x_1^*, \ldots, x_m^*)$ satisfies (1)-(3). In particular,

$g_i(x_1^*, \ldots, x_m^*) \le b_i, \quad i = 1, \ldots, n$ (9)

$h_j(x_1^*, \ldots, x_m^*) = d_j, \quad j = 1, \ldots, p$ (10)

It follows from (9) that for each $i = 1, \ldots, n$ there exists $s_i \ge 0$ such that $g_i(x_1^*, \ldots, x_m^*) + s_i = b_i$. Set $a_k = 0$, $k = 1, \ldots, n+p$. Then $(x_1^*, \ldots, x_m^*, s_1, \ldots, s_n, a_1, \ldots, a_{n+p})$ satisfies (4)-(8) and thus solves T since $\sum_{k=1}^{n+p} a_k = 0$, which is the minimum possible value of the objective function of T. Next suppose that $(x_1^*, \ldots, x_m^*, s_1, \ldots, s_n, a_1, \ldots, a_{n+p})$ solves T with $a_k = 0$, $k = 1, \ldots, n+p$. Then (4) reduces to $g_i(x_1^*, \ldots, x_m^*) + s_i = b_i$ with $s_i \ge 0$, so (1) holds, and (5) reduces to (2); hence $(x_1^*, \ldots, x_m^*)$ solves S, and the proof is complete.

Proposition 1 has two immediate corollaries.

Corollary 1. System S has no solution if and only if problem T has no solution for which $a_k = 0$, $k = 1, \ldots, n+p$.

Corollary 2. Proposition 1 remains true under any one or more of the following modifications to T:

1) Any number $r = 1, \ldots, n+p$ of artificial variables is added in (4)-(5), not necessarily one for each equation;

2) The coefficient of an added artificial variable in (4)-(5) is any nonzero scalar;

3) The objective function of T is the sum of the added artificial variables with any positive scalar coefficients.

Corollary 1 is simply an equivalent restatement of Proposition 1 in terms of a necessary and sufficient condition for the nonexistence of a solution to S. Proposition 1 and Corollary 1 together fully address the solvability of S. Corollary 2 generalizes the problem T under which Proposition 1 is valid. Corollary 2 is established with a proof similar to that of Proposition 1.

Observe that the efficiency of solving the problem T in either Proposition 1 or Corollary 2 depends both on the nature of T and on the computational method used to solve it. For example, Proposition 1 can be applied to a Diophantine equation, a polynomial equation with integer coefficients, usually in two or more variables, for which integer solutions are required. It is well known that there are undecidable Diophantine equations [10]; that is, no algorithm can determine in finite time whether a solution exists. It follows that a problem T associated with a system S consisting of a single Diophantine equation may be undecidable.

Now consider the following system S′ of inequalities and equalities (11)-(14) and the minimization problem T′, where z is a variable representing the scalar $f(x_1, \ldots, x_m)$.

(S′) $f(x_1, \ldots, x_m) - z \le 0$ (11)

$g_i(x_1, \ldots, x_m) \le 0, \quad i = 1, \ldots, n$ (12)

$h_j(x_1, \ldots, x_m) = 0, \quad j = 1, \ldots, p$ (13)

$(x_1, \ldots, x_m) \in X$ (14)

(T′) Minimize $f(x_1, \ldots, x_m)$

subject to

$g_i(x_1, \ldots, x_m) \le 0, \quad i = 1, \ldots, n$ (15)

$h_j(x_1, \ldots, x_m) = 0, \quad j = 1, \ldots, p$ (16)

$(x_1, \ldots, x_m) \in X$ (17)

Assume that $(x_1^*, \ldots, x_m^*)$ solves T′ with $f(x_1^*, \ldots, x_m^*) = z^*$. Then $(x_1^*, \ldots, x_m^*, z^*)$ is obviously a solution to system S′. On the other hand, a solution $(x_1^{**}, \ldots, x_m^{**}, z^{**})$ to S′ is not necessarily a solution to T′ since it is possible that $z^{**} > z^*$. However, the minimum value $z^*$ of the objective function $f(x_1, \ldots, x_m)$ of T′ can be obtained to any degree of accuracy with the following algorithm by solving a sequence of systems S′, each with a different value of z.

Algorithm 1 is an extension of the "sliding objective function method" for solving a two-variable linear programming problem [9]. More generally, it is a level-set method since $\{(x_1, \ldots, x_m) : f(x_1, \ldots, x_m) \le z\}$ is a sublevel set [16] of f, a concept used extensively in quasiconvex minimization [17]. The difficulty in solving T′ by Algorithm 1 is that one must solve a series of systems S′ via Proposition 1 or Corollary 2. But advancing computer techniques [18] may allow a computer to "visualize" the m-dimensional sublevel sets of Algorithm 1 and thus determine at least an approximate solution to T′ geometrically, in a manner analogous to finding the zeros of a real-valued function with graphing software.
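The sliding-objective iteration can be sketched as follows (our reading, based on the worked run in Section 3.6): start from the objective value of any feasible point, lower z by a step δ, test whether S′ remains solvable, and stop when it does not; the minimum is then bracketed. The feasibility oracle and the toy integer problem below are our own illustrations, not from the paper:

```python
def sliding_objective(z_start, delta, solvable, max_iter=1000):
    """Lower z by delta until S' (f(x) <= z plus the constraints) has no
    solution; return the bracket (z_fail, z_last] containing the minimum."""
    z = z_start
    for _ in range(max_iter):
        z_next = z - delta
        if not solvable(z_next):
            return (z_next, z)      # minimum of f lies in (z_next, z]
        z = z_next
    raise RuntimeError("no lower bound found; f may be unbounded below")

# Hypothetical toy problem: minimize f(x) = x**2 over integers x >= 2.
# S' asks: is there an integer x >= 2 with x**2 <= z?
def solvable(z):
    return any(x * x <= z for x in range(2, 50))

lo, hi = sliding_objective(z_start=9, delta=1, solvable=solvable)
```

Here the bracket comes out as (3, 4], and since f is integer-valued on X the minimum is 4; shrinking δ refines the bracket to any desired accuracy in the general case.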

Algorithm 1 may also be construed as an inverse, for nonlinear problems, of the linear active-set constraint selection methods in m dimensions described in [19] [20] [21] [22]. There, for a minimization problem, z increases at each iteration as more active constraints are added until all constraints are active. In contrast, z decreases in Algorithm 1 until there are no solutions to S′. In the former case, the additional constraints and resulting smaller feasible region cause z to increase. In the latter case, a smaller z gives a smaller feasible region for S′. The significance of this comparison is that an approach related to Algorithm 1 has been efficiently implemented for large-scale linear programming problems.

3. Applications

3.1. Example of Solving Nonlinear Inequalities and Equalities

Consider the following system S of inequalities and equalities:

$2x_1 x_2 + 3x_1^2 + 2x_3 \le 3$ (18)

$4x_1 + 2x_1 x_2 - x_1 x_3 \le 1$ (19)

$3x_1^2 + x_2 x_3 + 4x_3^2 = 6$ (20)

$x_1^2 + 2x_2^2 - x_2 x_3 = 2.$ (21)

To find a solution of S or else determine that no solution exists for (18)-(21), we apply Proposition 1. The associated problem T is

(P1) Minimize $f(x_1, x_2, x_3, s_1, s_2, a_1, a_2, a_3, a_4) = \sum_{k=1}^{4} a_k$

subject to

$2x_1 x_2 + 3x_1^2 + 2x_3 + s_1 + a_1 = 3$

$4x_1 + 2x_1 x_2 - x_1 x_3 + s_2 + a_3 = 1$

$3x_1^2 + x_2 x_3 + 4x_3^2 + a_2 = 6$

$x_1^2 + 2x_2^2 - x_2 x_3 + a_4 = 2$

$s_i \ge 0, \quad i = 1, 2$

$a_j \ge 0, \quad j = 1, 2, 3, 4.$

Problem P1 is then solved by the nonlinear programming solver BARON in the General Algebraic Modeling System (GAMS) [23], which gives, to two decimal places, $x_1 = -0.14$, $x_2 = -0.73$, $x_3 = 1.31$, $s_1 = 0.11$, $s_2 = 1.17$, $a_1 = a_2 = a_3 = a_4 = 0$ with an objective function value of 0. It follows from Proposition 1 that $(-0.14, -0.73, 1.31)$ solves (18)-(21).

We note that applying the same approach to the system

$x^2 + y^2 \ge 1$ (22)

$x^2 y - y + x \le 7$ (23)

$x^3 - x y^4 - y^2 \le x^2 + y^2 - 2$ (24)

$x^3 - x y^4 - y^2 = 3$ (25)

gives a nonzero optimal objective function value after putting (22) in the standard inequality direction (1). Hence no solution exists for system (22)-(25).

3.2. Example of Finding the Core of a Cooperative Game

In cooperative game theory, the solution concept called the core of a game [24] reduces to the solution of a system of linear inequalities and equalities. A cooperative game is one in which players form coalitions, coordinate their strategies, and share the payoffs. Given a cooperative n-person game, let $N = \{1, \ldots, n\}$ denote the set of players. For each subset S of N, the characteristic function $\nu$ of the game gives the amount $\nu(S)$ that the members of S can be certain of receiving if they form a coalition. A reward vector $x = (x_1, \ldots, x_n)$ stipulates the amount $x_i$ that player i receives. If for $i = 1, \ldots, n$ a reward vector x satisfies

$\nu(N) = \sum_{i=1}^{n} x_i$ (group rationality)

$x_i \ge \nu(\{i\})$ (individual rationality)

then x is called an imputation. The core of an n-person game is the set of all undominated imputations. An imputation x is in the core of an n-person game if and only if for each subset S of N the sum of its players' rewards is at least $\nu(S)$.
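This membership test is a finite list of linear conditions, one per coalition, and can be sketched directly. A minimal illustration of our own for three players, with the characteristic function encoded as a dict keyed by coalition tuples (the encoding and function names are ours):

```python
from itertools import combinations

def in_core(x, v, n=3):
    """Check whether reward vector x lies in the core: group rationality
    (rewards sum to v(N)) and, for every coalition S, total reward >= v(S)."""
    players = range(1, n + 1)
    if sum(x) != v[tuple(players)]:          # group rationality
        return False
    for r in range(1, n + 1):
        for S in combinations(players, r):   # every nonempty coalition
            if sum(x[i - 1] for i in S) < v[S]:
                return False
    return True

# Characteristic function of the game G1 discussed in this section.
v = {(): 0, (1,): 0, (2,): 0, (3,): 0, (2, 3): 0,
     (1, 2): 1_000_000, (1, 3): 1_000_000, (1, 2, 3): 1_000_000}
```

For G1, `in_core((1_000_000, 0, 0), v)` returns True, while an imputation such as (500_000, 500_000, 0) fails because coalition {1, 3} would receive less than $\nu(\{1,3\})$.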

The following example is adapted from [24]. Let G1 be a 3-player game with characteristic function $\nu(\emptyset) = \nu(\{1\}) = \nu(\{2\}) = \nu(\{3\}) = \nu(\{2,3\}) = 0$ and $\nu(\{1,2\}) = \nu(\{1,3\}) = \nu(\{1,2,3\}) = \$1{,}000{,}000$. Then a reward vector $(x_1, x_2, x_3)$ is an imputation if and only if

$x_1 \ge 0$ (26)

$x_2 \ge 0$ (27)

$x_3 \ge 0$ (28)

$x_1 + x_2 + x_3 = 1{,}000{,}000$ (29)

An imputation $(x_1, x_2, x_3)$ will be in the core if and only if it also satisfies

$x_1 + x_2 \ge 1{,}000{,}000$ (30)

$x_1 + x_3 \ge 1{,}000{,}000$ (31)

$x_2 + x_3 \ge 0$ (32)

$x_1 + x_2 + x_3 \ge 1{,}000{,}000$ (33)

To find the core defined by (26)-(33), we remove the redundant constraint (33). In addition, we let $X = \{(x_1, x_2, x_3) : x_i \ge 0, i = 1, 2, 3\}$ to avoid adding slack and artificial variables to the nonnegativity restrictions. We then find the solutions of the system

$-x_1 - x_2 \le -1{,}000{,}000$ (34)

$-x_1 - x_3 \le -1{,}000{,}000$ (35)

$-x_2 - x_3 \le 0$ (36)

$x_1 + x_2 + x_3 = 1{,}000{,}000$ (37)

$(x_1, x_2, x_3) \in X$ (38)

in the standard form (1)-(3) for S.

The associated minimization problem T for (34)-(38) is

(P2) Minimize $f(x_1, x_2, x_3, s_1, s_2, s_3, a_1, a_2, a_3, a_4) = \sum_{k=1}^{4} a_k$

subject to

$-x_1 - x_2 + s_1 + a_1 = -1{,}000{,}000$

$-x_1 - x_3 + s_2 + a_2 = -1{,}000{,}000$

$-x_2 - x_3 + s_3 + a_3 = 0$

$x_1 + x_2 + x_3 + a_4 = 1{,}000{,}000$

$x_i \ge 0, \quad i = 1, 2, 3$

$s_j \ge 0, \quad j = 1, 2, 3$

$a_k \ge 0, \quad k = 1, 2, 3, 4$

Solving P2 with the CPLEX solver in GAMS, we obtain the unique solution $x_1 = 1{,}000{,}000$, $x_2 = 0$, $x_3 = 0$, $s_1 = s_2 = s_3 = a_1 = a_2 = a_3 = a_4 = 0$. Thus the system (34)-(38) has the solution $(1{,}000{,}000, 0, 0)$ according to Proposition 1, and the core of G1 is $\{(1{,}000{,}000, 0, 0)\}$.

3.3. Example of Solving a Linear Goal Program

Consider the following goal programming advertising model adapted from [25]. There are four resource constraints and two goals, where $x_1$ and $x_2$ are the nonnegative numbers of minutes of radio and television ads, respectively, to be bought for advertising some product. The resource constraints impose limitations on developing the ads. The first goal is that the ads should reach at least 45 million people, while the second goal is that the total cost of both ads should be no more than 100 thousand dollars. The explicit resource and goal constraints are given as the system

$x_1 + 2x_2 \le 10$ (resource constraint) (39)

$x_1 \le 6$ (resource constraint) (40)

$x_1 \ge 0$ (resource constraint) (41)

$x_2 \ge 0$ (resource constraint) (42)

$4x_1 + 8x_2 \ge 45$ (goal constraint) (43)

$8x_1 + 24x_2 \le 100$ (goal constraint) (44)

We apply Corollary 2 to solve (39)-(44) for $(x_1, x_2)$ or else to determine that both goals cannot be satisfied. We include the nonnegativity constraints (41)-(42) in the set $X = \{(x_1, x_2) : x_i \ge 0, i = 1, 2\}$ to avoid adding slack variables to them. We then change the direction of (43) to $\le$ as in (1) and add slack variables to all inequalities, but add artificial variables only to the goal constraints. We do not distinguish between the relative importance of the goals. Problem T in Corollary 2 thus becomes

(P3) Minimize $f(x_1, x_2, s_1, s_2, s_3, s_4, a_1, a_2) = \sum_{k=1}^{2} a_k$

subject to

$x_1 + 2x_2 + s_1 = 10$

$x_1 + s_2 = 6$

$-4x_1 - 8x_2 + s_3 + a_1 = -45$

$8x_1 + 24x_2 + s_4 + a_2 = 100$

$x_i \ge 0, \quad i = 1, 2$

$a_j \ge 0, \quad j = 1, 2$

$s_k \ge 0, \quad k = 1, 2, 3, 4$

The CPLEX solver in GAMS shows that P3 has no solution and hence that (39)-(44) cannot be jointly satisfied. However, a slight modification of P3 yields further information. We now subtract the artificial variables instead of adding them in the goal constraints of P3, as allowed by Corollary 2. In this case, we get a solution $x_1 = 5$, $x_2 = 2.5$, $s_2 = 1$, $a_1 = 5$, $s_1 = s_3 = s_4 = a_2 = 0$. The conclusion from Proposition 1 is again that (39)-(44) cannot be satisfied. But now $a_1 = 5$ is the amount by which (43) falls short for $x_1 = 5$, $x_2 = 2.5$. This point satisfies (44), however, since $a_2 = 0$. Such information is available if artificial variables are used only in the goal constraints of (4) and are subtracted rather than added. In that case, the slack and artificial variables in a goal constraint act as a pair of deviational variables [26].
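The deviational reading of the modified P3 can be mimicked with a coarse grid search of our own (CPLEX plays this role in the text): treat (39), (40), and, for simplicity, the cost goal (44) as hard constraints, and minimize the shortfall on the audience goal (43). The point found by the solver satisfies (44) anyway since $a_2 = 0$, so this simplification does not change the answer here.

```python
# Shortfall on goal (43) over points satisfying (39), (40), (44),
# mirroring P3 with the artificial variable subtracted (a1 = shortfall).
def shortfall(x1, x2):
    if x1 + 2 * x2 > 10 or x1 > 6 or 8 * x1 + 24 * x2 > 100:
        return None                      # violates a hard constraint
    return max(45 - (4 * x1 + 8 * x2), 0.0)

step = 0.25                              # grid resolution chosen by us
pts = [(i * step, j * step) for i in range(25) for j in range(21)]
vals = [s for s in (shortfall(x1, x2) for x1, x2 in pts) if s is not None]
best = min(vals)                         # minimum shortfall on goal (43)
```

The grid search reproduces the solver's deviation of 5 (attained, e.g., at $x_1 = 5$, $x_2 = 2.5$): the audience goal cannot be met, and 40 million is the best reach the hard constraints allow.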

Reference [26] also discusses weighting the artificial variables differently in the objective function of T. Such a weighting can account for the relative importance of the different goals as well as normalize the goal constraints to a comparable scale. T would then provide a more accurate model.

3.4. Example of Solving a Nonlinear Goal Program

A nonlinear goal programming problem has either a nonlinear goal or a nonlinear resource constraint. Consider the following nonlinear two-goal programming problem, where $X = \{(x_1, x_2, x_3) : x_i \in W, i = 1, 2, 3\}$ with $W = \{0, 1, 2, 3, \ldots\}$:

$x_1 + x_3^2 \le 5$ (resource constraint) (45)

$x_2 x_3 - x_1 x_3 \le 8$ (resource constraint) (46)

$2x_1^2 - 3x_1 x_3 + x_2 \ge 3$ (goal constraint) (47)

$4x_1 - x_3^2 + 3x_2 \le 12$ (goal constraint) (48)

$(x_1, x_2, x_3) \in X$ (49)

After taking the negative of (47) to add a slack variable and put (47) into the standard form (1), we use Proposition 1 to formulate (45)-(49) as a minimization problem and obtain

(P4) Minimize $f(x_1, x_2, x_3, a_1, a_2, a_3, a_4, s_1, s_2, s_3, s_4) = \sum_{k=1}^{4} a_k$

subject to

$x_1 + x_3^2 + s_1 + a_1 = 5$

$x_2 x_3 - x_1 x_3 + s_2 + a_2 = 8$

$-2x_1^2 + 3x_1 x_3 - x_2 + s_3 + a_3 = -3$

$4x_1 - x_3^2 + 3x_2 + s_4 + a_4 = 12$

$s_j \ge 0, \quad j = 1, 2, 3, 4$

$a_k \ge 0, \quad k = 1, 2, 3, 4$

$(x_1, x_2, x_3) \in X$

The BARON solver in GAMS gives the solution $x_1 = 0$, $x_2 = 3$, $x_3 = 1$, $s_1 = 4$, $s_2 = 5$, $s_4 = 4$, and $a_1 = a_2 = a_3 = a_4 = s_3 = 0$ with an objective function value of 0. The zero objective function value indicates that $(0, 3, 1)$ satisfies (45)-(49). Weighting the artificial variables in the objective function of problem T is also possible for nonlinear goal programming.
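Because the variables and all coefficients are integers, the reported point can be substituted back into (45)-(48) and checked exactly (a quick sanity check of our own):

```python
# Reported solution for (45)-(49); integer data makes the check exact.
x1, x2, x3 = 0, 3, 1

checks = [
    x1 + x3**2 <= 5,                      # (45): 1 <= 5, slack s1 = 4
    x2 * x3 - x1 * x3 <= 8,               # (46): 3 <= 8, slack s2 = 5
    2 * x1**2 - 3 * x1 * x3 + x2 >= 3,    # (47): 3 >= 3, tight (s3 = 0)
    4 * x1 - x3**2 + 3 * x2 <= 12,        # (48): 8 <= 12, slack s4 = 4
]
all_ok = all(checks)
```

The slacks recovered in the comments match the solver's reported values, corroborating the zero objective of P4.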

3.5. Example of Finding a Nash Equilibrium (NE)

Let G2 be the two-person nonzero-sum game with the payoff matrix of Table 1. According to [27], the following system (50)-(58) is both necessary and sufficient for $(x_1, x_2)$ and $(y_1, y_2)$ to be a mixed NE for Players 1 and 2, respectively. Let $X = \{(x_1, x_2, y_1, y_2) : x_i \ge 0, y_i \ge 0, i = 1, 2\}$. The auxiliary variables $\alpha$ and $\beta$ are needed in these conditions but not required to express the NE of G2.

$2y_1 - y_2 - \alpha \le 0$ (50)

$-y_1 + y_2 - \alpha \le 0$ (51)

$x_1 - x_2 - \beta \le 0$ (52)

$-x_1 + 2x_2 - \beta \le 0$ (53)

$2x_1 y_1 - x_2 y_1 - x_1 y_2 + x_2 y_2 - \alpha = 0$ (54)

$x_1 y_1 - x_2 y_1 - x_1 y_2 + 2x_2 y_2 - \beta = 0$ (55)

$x_1 + x_2 - 1 = 0$ (56)

$y_1 + y_2 - 1 = 0$ (57)

$(x_1, x_2, y_1, y_2) \in X$ (58)

Table 1. Payoff matrix for G2.

Thus problem T of Proposition 1 is now

(P5) Minimize $f(x_1, x_2, y_1, y_2, s_1, \ldots, s_4, a_1, \ldots, a_8) = \sum_{k=1}^{8} a_k$

subject to

$2y_1 - y_2 - \alpha + s_1 + a_1 = 0$

$-y_1 + y_2 - \alpha + s_2 + a_2 = 0$

$x_1 - x_2 - \beta + s_3 + a_3 = 0$

$-x_1 + 2x_2 - \beta + s_4 + a_4 = 0$

$2x_1 y_1 - x_2 y_1 - x_1 y_2 + x_2 y_2 - \alpha + a_5 = 0$

$x_1 y_1 - x_2 y_1 - x_1 y_2 + 2x_2 y_2 - \beta + a_6 = 0$

$x_1 + x_2 - 1 + a_7 = 0$

$y_1 + y_2 - 1 + a_8 = 0$

$x_i \ge 0, \quad i = 1, 2$

$y_j \ge 0, \quad j = 1, 2$

$s_k \ge 0, \quad k = 1, \ldots, 4$

$a_l \ge 0, \quad l = 1, \ldots, 8.$

The BARON solver in GAMS gives the solution $y_1 = 0.4$, $y_2 = 0.6$, $x_1 = 0.6$, $x_2 = 0.4$, $\alpha = 0.2$, $\beta = 0.2$, with all the $s_j$ and $a_k$ equal to 0. It is well known that a mixed NE always exists [28] for a noncooperative game with a finite number of players, each having a finite number of strategies. This fact is confirmed here, and the mixed strategies of Players 1 and 2 for G2 are (0.6, 0.4) and (0.4, 0.6), respectively.
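The payoff bimatrix of G2 can be read off the bilinear forms in (54)-(55), and the solver's mixed NE can then be confirmed exactly with rational arithmetic (our own check; a strategy pair is an NE when no pure-strategy deviation beats the mixed value):

```python
from fractions import Fraction as F

# Bimatrix read off (54)-(55): A holds Player 1's payoffs, B Player 2's.
A = [[F(2), F(-1)], [F(-1), F(1)]]
B = [[F(1), F(-1)], [F(-1), F(2)]]
x = [F(3, 5), F(2, 5)]                   # Player 1's mixed strategy (0.6, 0.4)
y = [F(2, 5), F(3, 5)]                   # Player 2's mixed strategy (0.4, 0.6)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

alpha = sum(x[i] * dot(A[i], y) for i in range(2))   # x'Ay, Player 1's value
beta  = sum(x[i] * dot(B[i], y) for i in range(2))   # x'By, Player 2's value

# NE check: every pure strategy does no better than the mixed value.
p1_ok = all(dot(A[i], y) <= alpha for i in range(2))             # rows of A
p2_ok = all(dot([B[0][j], B[1][j]], x) <= beta for j in range(2))  # cols of B
```

Both values come out to exactly 1/5, i.e., $\alpha = \beta = 0.2$ as reported, and neither player has a profitable pure deviation.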

For $n \ge 3$, the necessary and sufficient conditions given in [29] can similarly be used to find an NE. Likewise, Berge [30] and more general equilibria can be found from the necessary and sufficient conditions stated in [31]. The reasoning behind these sets of conditions is that any equilibrium in noncooperative game theory is implicitly defined by an optimization problem with inequality and equality constraints. This fact results from the properties associated with the general meaning of an equilibrium. The system (50)-(58), for example, is just a way to simplify this optimization problem by writing it as a system of inequalities and equalities with auxiliary variables.

3.6. Example of Solving an Optimization Problem with Algorithm 1

Consider the following problem with nonnegative integer variables.

(P6) Minimize $f(x_1, x_2) = -3x_1 - x_2$

subject to

$g(x_1, x_2) = 3x_1^2 + 2x_2^2 \le 18$

$(x_1, x_2) \in X$

where $X = \{(x_1, x_2) : x_i \in W, i = 1, 2\}$. Associated with the problem T′ = P6 of Algorithm 1 is the system S′

$-3x_1 - x_2 \le z$ (59)

$3x_1^2 + 2x_2^2 \le 18$ (60)

$(x_1, x_2) \in X$ (61)

We apply Algorithm 1 to (59)-(61) with the following steps.

Step 1. We use Proposition 1 and the BARON solver in GAMS to find the point (1, 2) satisfying (60)-(61). Set $z_1 = f(1, 2) = -5$, $\delta = 1$, and $i = 1$.

Step 2. Set $z_2 = z_1 - \delta = -6$. Using Proposition 1 for (59)-(61) with $z = -6$, GAMS determines a solution. In general, Step 2 only requires information as to the existence or nonexistence of solutions to S′ for the current value of z.

Step 3. Set $i = 2$.

Step 2. Set $z_3 = z_2 - \delta = -7$. Then GAMS determines a solution for (59)-(61) with $z = -7$.

Step 3. Set $i = 3$.

Step 2. Set $z_4 = z_3 - \delta = -8$. Now GAMS cannot find a solution to (59)-(61).

Step 4. The optimal objective function value of P6 lies in $(-8, -7]$. Since the coefficients of the objective function are integers, its minimum value must be $-7$, occurring at (2, 1).
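For a problem this small, the Proposition 1 feasibility test that GAMS performs at each step can be replaced by brute-force enumeration over X. A sketch of our own reproducing the run above, with $f(x_1, x_2) = -3x_1 - x_2$ and constraint $3x_1^2 + 2x_2^2 \le 18$ as in P6:

```python
def S_prime_solvable(z):
    """Feasibility oracle for S' of (59)-(61): is there a nonnegative integer
    point (x1, x2) with -3*x1 - x2 <= z and 3*x1**2 + 2*x2**2 <= 18?"""
    # (60) forces x1 <= 2 and x2 <= 3, so a small search box suffices.
    for x1 in range(0, 3):
        for x2 in range(0, 4):
            if 3 * x1**2 + 2 * x2**2 <= 18 and -3 * x1 - x2 <= z:
                return True
    return False

# Step 1: the feasible point (1, 2) gives z1 = f(1, 2) = -5; delta = 1.
z, delta = -5, 1
while S_prime_solvable(z - delta):       # Steps 2-3: slide z downward
    z -= delta
# Step 4: the minimum of f lies in (z - delta, z]; with integer data it is z.
```

The loop stops at $z = -7$, matching the bracket $(-8, -7]$ and the minimum found above.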

Figure 1 illustrates the application of Algorithm 1 to P6. For each fixed z, the points $(x_1, x_2)$ satisfying (60)-(61) and lying to the right of the line $-3x_1 - x_2 = z$ are feasible points of P6 that give smaller values of f. The solution to P6 is the point (2, 1), noted by the single dot, with $z = -7$. The solution to two decimal places without the integer restriction is (2.27, 1.13) with $z = -7.94$.

Figure 1. Graphical Illustration of Solving P6 by Algorithm 1.

In general, the sublevel sets of the objective function f of the problem (T') solved by Algorithm 1 are considerably more complicated in higher dimensions, with more constraints, and for a nonlinear objective function than the level sets of P6. As seen in Figure 1, the sublevel sets for P6 are simple half-spaces. The two-dimensional level (not sublevel) sets of [18] provide more interesting visual examples in a context not involving optimization.

4. Conclusions

In this paper, we have related a general nonlinear programming problem to a system S of nonlinear inequalities and equalities in two ways. In the first, we solved S or else determined that a solution did not exist by solving an associated nonlinear programming problem. In particular, we used artificial variables and generalized phase 1 of the two-phase simplex method to examine the solvability of S. Examples were given for a system of nonlinear inequalities and equalities, in cooperative and noncooperative game theory, and in goal programming.

In the second way, we developed an algorithm to solve a general nonlinear programming problem to any degree of accuracy by determining whether a solution exists for each of a series of systems S, i.e., for a series of subproblems. The fact that an optimization problem can solve a given system S, but not vice versa, simply indicates that an optimization problem must essentially solve S as part of the optimization process. In this second approach, we generalized to nonlinear programming the "sliding objective function" method of linear programming, and an example was presented to illustrate its geometrical interpretation. We noted that in linear programming a sequential active-set method with an inverse interpretation of Algorithm 1 uses the simplex algorithm for each subproblem and has proved efficient in solving large-scale linear programming problems. This observation also emphasizes that both of our approaches here rely on existing computational techniques and thus might be construed as meta-approaches.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Hart, R. (2011) The Chinese Roots of Linear Algebra. The Johns Hopkins University Press, Baltimore.
[2] Archibald, R.C. (1918) Cattle Problem of Archimedes. The American Mathematical Monthly, 25, 411-414.
https://doi.org/10.1080/00029890.1998.12004887
[3] Miller, G.A. (1930) On the History of Determinants. The American Mathematical Monthly, 37, 216-219.
https://doi.org/10.2307/2299112
[4] Fearnley-Sander, D. (1979) Hermann Grassmann and the Creation of Linear Algebra. The American Mathematical Monthly, 86, 809-817.
https://doi.org/10.2307/2320145
[5] Kjeldsen, T.H. (2002) Different Motivations and Goals in the Historical Development of the Theory of Systems of Linear Inequalities. In: Buchwald, J.Z. and Gray, J., Eds., Archive for History of Exact Sciences, Springer, Berlin, 469-538.
https://doi.org/10.1007/s004070200057
[6] Motzkin, T.S. (1933) Contributions to the Theory of Linear Inequalities. PhD Dissertation, University of Basel, Basel. (Translated by Fulkerson, D.R. (1983) In: Theodore S. Motzkin: Selected Papers, Cantor, D., Gordon, B. and Rothschild, B., Eds., Birkhäuser, Basel.)
[7] Kuhn, H.W. (1956) Solvability and Consistency for Linear Equalities and Inequalities. The American Mathematical Monthly, 63, 217-232.
https://doi.org/10.2307/2310345
[8] Kuhn, H.W. and Tucker, A.W. (1956) Linear Inequalities and Related Systems. Princeton University Press, Princeton, NJ.
https://doi.org/10.1515/9781400881987
[9] Dantzig, G.B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton.
https://doi.org/10.7249/R366
[10] Davis, M. (1973) Hilbert’s Tenth Problem Is Unsolvable. The American Mathematical Monthly, 80, 233-269.
https://doi.org/10.1080/00029890.1973.11993265
[11] Jeyakumar, V. and Gwinner, J. (1991) Inequality Systems and Optimization. Journal of Mathematical Analysis and Applications, 159, 51-71.
https://doi.org/10.1016/0022-247X(91)90221-K
[12] Rohn, J. (2003) Solvability of Systems of Linear Interval Equations. SIAM Journal on Matrix Analysis and Applications, 25, 237-245.
https://doi.org/10.1137/S0895479801398955
[13] Prokopyev, O.A., Butenko, S. and Trapp, A. (2009) Checking Solvability of Systems of Interval Linear Equalities and Inequalities via Mixed Integer Programming. European Journal of Operational Research, 199, 117-121.
https://doi.org/10.1016/j.ejor.2008.11.008
[14] Fan, J., Liu, L. and Qin, X. (2020) A Subgradient Extragradient Algorithm with Inertial Effects for Solving Strongly Pseudomonotone Variational Inequalities. Optimization: A Journal of Mathematical Programming and Operations Research, 68, 2199-2215.
https://doi.org/10.1080/02331934.2019.1625355
[15] Stonyakin, F., Gasnikov, A., Tyurin, A., Pasechnyuk, D., Agafonov, A., Dvurechensky, P., Dvinskikh, D., Kroshnin, A. and Piskunova, V. (2020) Inexact Model: A Framework for Optimization and Variational Inequalities. Cornell University, New York.
[16] https://en.wikipedia.org/wiki/Level_set/
[17] Aravkin, A., Burke, J., Drusvyatskiy, D., Friedlander, M. and Roy, S. (2019) Level-Set Methods for Convex Optimization. In: Lee, J. and Leyffer, S., Eds., Mathematical Programming, Springer, Berlin, 359-390.
https://doi.org/10.1007/s10107-018-1351-8
[18] Simionescu, P. (2011) Some Advancements to Visualizing Constrained Functions and Inequalities of Two Variables. Journal of Computing and Information Science in Engineering, 11, Article No. 014502.
https://doi.org/10.1115/1.3570770
[19] Saito, G., Corley, H.W. and Rosenberger, J. (2013) Constraint Optimal Selection Techniques (COSTs) for Linear Programming. American Journal of Operations Research, 3, 53-64.
https://doi.org/10.4236/ajor.2013.31004
[20] Noroziroshan, A., Corley, H.W. and Rosenberger, J. (2015) A Dynamic Active-Set Method for Linear Programming. American Journal of Operations Research, 5, 526- 535.
https://doi.org/10.4236/ajor.2015.56041
[21] Saito, G., Corley, H.W., Rosenberger, J., Sung, T.K. and Noroziroshan, A. (2015) Constraint Optimal Selection Techniques (COSTs) for Nonnegative Linear Programming Problems. In: Simos, D., Ed., Applied Mathematics and Computation, Elsevier, Amsterdam, 586-598.
https://doi.org/10.1016/j.amc.2014.11.080
[22] Noroziroshan, A., Corley, H.W. and Rosenberger, J. (2017) Posterior Constraint Selection Techniques for Nonnegative Linear Programming. American Journal of Operations Research, 7, 26-40.
https://doi.org/10.4236/ajor.2017.71002
[23] https://www.gams.com/
[24] Chalkiadakis, G., Elkind, E. and Wooldridge, M. (2011) Computational Aspects of Cooperative Game Theory (Synthesis Lectures on Artificial Intelligence and Machine Learning). Morgan & Claypool, Princeton, NJ.
https://doi.org/10.2200/S00355ED1V01Y201107AIM016
[25] Taha, H. (2011) Operations Research: An Introduction. 9th Edition, Prentice Hall, Princeton, NJ.
[26] Jones, D. and Tamiz, M. (2010) Practical Goal Programming. Springer, New York.
https://doi.org/10.1007/978-1-4419-5771-9
[27] Mangasarian, O.L. and Stone, H. (1964) Two-Person Nonzero-Sum Games and Quadratic Programming. Journal of Mathematical Analysis and Applications, 9, 348-355.
https://doi.org/10.1016/0022-247X(64)90021-6
[28] Nash, J. (1950) Equilibrium Points in n-Person Games. Proceedings of the National Academy of Sciences of the United States of America, 36, 48-49.
https://doi.org/10.1073/pnas.36.1.48
[29] Batbileg, S. and Enkhbat, R. (2011) Global Optimization Approach to Nonzero Sum n-Person Game. Advanced Modeling and Optimization, 13, 59-66.
[30] Corley, H.W. (2015) A Mixed Cooperative Dual to the Nash Equilibrium. Game Theory, 2015, Article ID: 647246.
https://doi.org/10.1155/2015/647246
[31] Nahhas, A. and Corley, H.W. (2017) A Nonlinear Programming Approach to Determine a Generalized Equilibrium for N-Person Normal Form Games. International Game Theory Review, 19, Article No. 1750011.
https://doi.org/10.1142/S0219198917500116

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.