In quantitative decision analysis, an analyst applies mathematical models to make decisions. Frequently these models involve an optimization problem to determine the values of the decision variables, a system S of possibly nonlinear inequalities and equalities to restrict these variables, or both. In this note, we relate a general nonlinear programming problem to such a system S in such a way as to provide a solution of either by solving the other, with certain limitations. We first start with S and generalize phase 1 of the two-phase simplex method to either solve S or establish that a solution does not exist. The conclusion is reached by attempting to solve S through minimizing a sum of artificial variables subject to the system S as constraints. Using examples, we illustrate how this approach can give the core of a cooperative game and an equilibrium for a noncooperative game, as well as solve both linear and nonlinear goal programming problems. Similarly, we start with a general nonlinear programming problem and present an algorithm to solve it as a series of systems S by generalizing the “sliding objective function method” for two-dimensional linear programming. An example is presented to illustrate the geometrical nature of this approach.

Quantitative decision analysis involves notions of comparison and optimality. The result is that the mathematical models used to make decisions frequently involve an optimization problem to determine the values of the decision variables, a system S of possibly nonlinear inequalities and equalities to restrict these variables, or both. The solution of such a system S and optimization problems is thus essential to decision analysis. In this note we relate a general nonlinear programming problem to a system S to provide a solution of either by solving the other—with certain limitations. In particular, we present a method for either obtaining a solution for S or else establishing that a solution does not exist by using existing computational techniques. Our method generalizes phase 1 of the two-phase linear programming simplex method to nonlinear programming. We also present an algorithm to solve a general nonlinear programming problem as a series of such systems S by generalizing the “sliding objective” method for geometrically solving a two-dimensional linear programming problem.

As background, we note that systems of linear equations have been considered for at least three millennia. By then the Chinese had organized linear systems of equations in a matrix-like form and solved them with a procedure equivalent to Gaussian Elimination [

The history of the theory of linear inequalities is more recent and developed through the interactions between mathematics and other disciplines [

An efficient computational method to solve a system of linear inequalities and equalities did not exist until Dantzig [

Here we generalize Dantzig’s approach to systems of nonlinear inequalities and equalities S by considering an associated nonlinear programming problem. We then extend the geometric “sliding objective function method” [

The paper is organized as follows. In Section 2, a correspondence is established between the solvability of a nonlinear system S and an associated non-linear programming minimization problem. We then present an algorithm for solving a general nonlinear programming minimization problem, to any degree of accuracy, as a series of systems S. In Section 3, examples are given. Conclusions are stated in Section 4.

For real-valued functions $f$, $g_i$, $h_j$, consider the system S of inequalities and equalities (1)-(3) and the minimization problem T below, where $X$ is a set capturing any further requirements on $(x_1, \dots, x_m)$. For example, if nonnegativity restrictions $x_i \ge 0$, $i = 1, \dots, m$, are automatically applied by the solver to be used, then $X$ could be the set $\{(x_1, \dots, x_m) : x_i \ge 0, i = 1, \dots, m\}$. $X$ could also be the set $\{(x_1, \dots, x_m) : x_i \in W, i = 1, \dots, m\}$, where $W = \{0, 1, 2, 3, \dots\}$ is the set of nonnegative integers. We note that each equality in (2) could be replaced by two inequalities in opposite directions, so S without (2) would remain a general formulation.

(S) $g_i(x_1, \dots, x_m) \le b_i, \quad i = 1, \dots, n$ (1)

$h_j(x_1, \dots, x_m) = d_j, \quad j = 1, \dots, p$ (2)

$(x_1, \dots, x_m) \in X$ (3)

(T) Minimize $f(x_1, \dots, x_m, s_1, \dots, s_n, a_1, \dots, a_{n+p}) = \sum_{k=1}^{n+p} a_k$

subject to

$g_i(x_1, \dots, x_m) + s_i + a_i = b_i, \quad i = 1, \dots, n$ (4)

$h_j(x_1, \dots, x_m) + a_{n+j} = d_j, \quad j = 1, \dots, p$ (5)

$s_i \ge 0, \quad i = 1, \dots, n$ (6)

$a_k \ge 0, \quad k = 1, \dots, n+p$ (7)

$(x_1, \dots, x_m) \in X$ (8)

The variables $s_1, \dots, s_n$ appearing in T but not in S are called slack variables, as in linear programming. They represent the nonnegative difference between the right and left sides of (1). Similarly, the variables $a_k$, $k = 1, \dots, n+p$, are called artificial variables and should each have the value 0 if (1)-(2) are to hold. The main result relating S and T is now stated.

Proposition 1. System S has a solution $(x_1, \dots, x_m)$ if and only if problem T has a solution $(x_1, \dots, x_m, s_1, \dots, s_n, a_1, \dots, a_{n+p})$ for which $a_k = 0$, $k = 1, \dots, n+p$. In particular, a solution $(x_1, \dots, x_m, s_1, \dots, s_n, a_1, \dots, a_{n+p})$ to T for which $a_k = 0$, $k = 1, \dots, n+p$, determines the solution $(x_1, \dots, x_m)$ to S.

Proof. Suppose that system S has a solution $(x_1^*, \dots, x_m^*)$. Then $(x_1^*, \dots, x_m^*)$ satisfies (1)-(3). In particular,

$g_i(x_1^*, \dots, x_m^*) \le b_i, \quad i = 1, \dots, n$ (9)

$h_j(x_1^*, \dots, x_m^*) = d_j, \quad j = 1, \dots, p$ (10)

It follows from (9) that for each $i = 1, \dots, n$ there exists $s_i \ge 0$ such that $g_i(x_1^*, \dots, x_m^*) + s_i = b_i$. Set $a_k = 0$, $k = 1, \dots, n+p$. Then $(x_1^*, \dots, x_m^*, s_1, \dots, s_n, a_1, \dots, a_{n+p})$ satisfies (4)-(8) and thus solves T, since $\sum_{k=1}^{n+p} a_k = 0$ is the minimum possible value of the objective function of T. Next suppose that $(x_1^*, \dots, x_m^*, s_1, \dots, s_n, a_1, \dots, a_{n+p})$ solves T with $a_k = 0$, $k = 1, \dots, n+p$. Then immediately $(x_1^*, \dots, x_m^*)$ solves S, so the proof is complete.

Proposition 1 has two immediate corollaries.

Corollary 1. System S has no solution if and only if problem T has no solution for which $a_k = 0$, $k = 1, \dots, n+p$.

Corollary 2. Proposition 1 remains true for any one or more of the following modifications to T:

1) Artificial variables are added to only $r$ of the $n+p$ equations in (4)-(5), for any $r = 1, \dots, n+p$, not necessarily one for each equation,

2) The coefficient of an added artificial variable in (4)-(5) is any nonzero scalar,

3) The objective function of T is the sum of the added artificial variables with any positive scalar coefficients.

Corollary 1 is simply an equivalent restatement of Proposition 1 in terms of a necessary and sufficient condition for the nonexistence of a solution to S. Proposition 1 and Corollary 1 together fully address the solvability of S. Corollary 2 generalizes the problem T under which Proposition 1 is valid. Corollary 2 is established with a proof similar to that of Proposition 1.

Observe that the efficiency of solving the problem T in either Proposition 1 or Corollary 2 depends on both the nature of T and the computational method used to solve it. For example, Proposition 1 can be applied to a Diophantine equation, i.e., a polynomial equation with integer coefficients, usually in two or more variables, for which integer solutions are required. It is well known that there are undecidable Diophantine equations [

Now consider the following system $S'$ of inequalities and equalities (11)-(14) and the minimization problem $T'$, where $z$ is a scalar variable bounding the objective value $f(x_1, \dots, x_m)$ from above.

($S'$) $f(x_1, \dots, x_m) - z \le 0$ (11)

$g_i(x_1, \dots, x_m) \le 0, \quad i = 1, \dots, n$ (12)

$h_j(x_1, \dots, x_m) = 0, \quad j = 1, \dots, p$ (13)

$(x_1, \dots, x_m) \in X$ (14)

($T'$) Minimize $f(x_1, \dots, x_m)$

subject to

$g_i(x_1, \dots, x_m) \le 0, \quad i = 1, \dots, n$ (15)

$h_j(x_1, \dots, x_m) = 0, \quad j = 1, \dots, p$ (16)

$(x_1, \dots, x_m) \in X$ (17)

Assume that $(x_1^*, \dots, x_m^*)$ solves $T'$ with $f(x_1^*, \dots, x_m^*) = z^*$. Then $(x_1^*, \dots, x_m^*, z^*)$ is obviously a solution to system $S'$. On the other hand, a solution $(x_1^{**}, \dots, x_m^{**}, z^{**})$ to $S'$ is not necessarily a solution to $T'$ since it is possible that $z^{**} > z^*$. However, the minimum value $z^*$ of the objective function $f(x_1, \dots, x_m)$ of $T'$ can be obtained to any degree of accuracy with the following algorithm by solving a sequence of systems $S'$, each with a different value for $z$.
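The structure of this bracketing loop can be sketched as follows (our illustration with a generic feasibility oracle, not the authors' pseudocode): start from the objective value $z_1$ at any feasible point, repeatedly tighten the bound $f(x) \le z$ by a step $\delta$, and stop when $S'$ first becomes infeasible.

```python
def sliding_objective(z1, is_feasible, delta):
    """Bracket the minimum of f by sliding the bound f(x) <= z downward.

    z1: objective value at a known feasible point of T' (Step 1).
    is_feasible(z): True if the system S' with bound z has a solution,
    e.g. as decided via Proposition 1 (Step 2).
    Returns an interval (lo, hi] of width delta containing min f.
    """
    z = z1
    while is_feasible(z - delta):   # Step 2: can the bound be tightened?
        z -= delta                  # Step 3: yes; slide the bound down
    return (z - delta, z)           # Step 4: min f lies in (z - delta, z]

# Toy check: min (x - 3)^2 over the reals, so f(x) <= z is solvable iff z >= 0.
interval = sliding_objective(9.0, lambda z: z >= 0, 1.0)
```

With $\delta = 1$ the loop returns the interval $(-1, 0]$, which contains the true minimum 0.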

Algorithm 1 is an extension of the “sliding objective function method” for solving a two-variable linear programming problem [

Algorithm 1 may also be construed as an inverse approach for nonlinear problems to the linear active-set constraint selection method in m-dimensions described in [

Consider the following system S of inequalities and equalities:

$2x_1x_2 + 3x_1^2 + 2x_3 \le 3$ (18)

$4x_1 + 2x_1x_2 - x_1x_3 \le 1$ (19)

$3x_1^2 + x_2x_3 + 4x_3^2 = 6$ (20)

$-x_1^2 + 2x_2^2 - x_2x_3 = 2$ (21)

To find a solution for S or else determine that a solution does not exist for (18)-(21), we apply Proposition 1. The associated minimization problem T is

($P_1$) Minimize $f(x_1, x_2, x_3, s_1, s_2, a_1, a_2, a_3, a_4) = \sum_{k=1}^{4} a_k$

subject to

$2x_1x_2 + 3x_1^2 + 2x_3 + s_1 + a_1 = 3$

$4x_1 + 2x_1x_2 - x_1x_3 + s_2 + a_2 = 1$

$3x_1^2 + x_2x_3 + 4x_3^2 + a_3 = 6$

$-x_1^2 + 2x_2^2 - x_2x_3 + a_4 = 2$

$s_i \ge 0, \quad i = 1, 2$

$a_k \ge 0, \quad k = 1, 2, 3, 4.$

Problem P_{1} is then solved by the nonlinear programming solver BARON in the General Algebraic Modeling System (GAMS) [
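Independently of GAMS, a feasible point of (18)-(21) can be located with standard open tools. The sketch below is our own construction, not part of the paper: it guesses the trial value $x_1 = 0$ (an assumption), solves the two equalities (20)-(21) for $x_2$ and $x_3$ with SciPy, and then checks the inequalities (18)-(19).

```python
from scipy.optimize import fsolve

x1 = 0.0  # trial value; any x1 for which the reduced system is solvable works

def equalities(v):
    """Residuals of (20)-(21) with x1 held fixed."""
    x2, x3 = v
    return [3*x1**2 + x2*x3 + 4*x3**2 - 6,    # (20)
            -x1**2 + 2*x2**2 - x2*x3 - 2]     # (21)

(x2, x3), _info, ok, _msg = fsolve(equalities, [1.0, 1.0], full_output=True)

ineq18 = 2*x1*x2 + 3*x1**2 + 2*x3   # must be <= 3 for (18)
ineq19 = 4*x1 + 2*x1*x2 - x1*x3     # must be <= 1 for (19)
```

The root found near $(x_2, x_3) \approx (1.31, 1.07)$ also satisfies (18)-(19), so $(0, x_2, x_3)$ solves S, consistent with the zero artificial variables reported for $P_1$.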

We note that applying the same approach to the system

$-x^2 + y^2 \ge 1$ (22)

$-x^2y - y + x \le -7$ (23)

$x^3 - xy - 4y^2 - x^2 + y^2 \le -2$ (24)

$x^3 - xy - 4y^2 = 3$ (25)

gives a nonzero optimal objective function value after putting (22) in the standard inequality direction (1). Hence no solution exists for system (22)-(25).

In cooperative game theory, the solution concept called the core of a game [

$\nu(N) = \sum_{i=1}^{n} x_i$ (group rationality)

$x_i \ge \nu(\{i\}), \quad i = 1, \dots, n$ (individual rationality)

then $x$ is called an imputation. The core of an $n$-person game is the set of all undominated imputations. An imputation $x$ is in the core of an $n$-person game if and only if, for each subset $S$ of $N$, the sum of its players' rewards is at least $\nu(S)$.

The following example is adapted from [

$x_1 \ge 0$ (26)

$x_2 \ge 0$ (27)

$x_3 \ge 0$ (28)

$x_1 + x_2 + x_3 = 1,000,000$ (29)

An imputation $(x_1, x_2, x_3)$ will be in the core if and only if it also satisfies

$x_1 + x_2 \ge 1,000,000$ (30)

$x_1 + x_3 \ge 1,000,000$ (31)

$x_2 + x_3 \ge 0$ (32)

$x_1 + x_2 + x_3 \ge 1,000,000$ (33)

To find the core defined by (26)-(33), we remove the redundant constraint (33). In addition, let $X = \{(x_1, x_2, x_3) : x_i \ge 0, i = 1, 2, 3\}$ to avoid adding slack and artificial variables to the nonnegativity restrictions. We then find the solutions of the system

$-x_1 - x_2 \le -1,000,000$ (34)

$-x_1 - x_3 \le -1,000,000$ (35)

$-x_2 - x_3 \le 0$ (36)

$x_1 + x_2 + x_3 = 1,000,000$ (37)

$(x_1, x_2, x_3) \in X$ (38)

in the standard form (1)-(3) for S.

The associated minimization problem T for (34)-(38) is

($P_2$) Minimize $f(x_1, x_2, x_3, s_1, s_2, s_3, a_1, a_2, a_3, a_4) = \sum_{k=1}^{4} a_k$

subject to

$-x_1 - x_2 + s_1 + a_1 = -1,000,000$

$-x_1 - x_3 + s_2 + a_2 = -1,000,000$

$-x_2 - x_3 + s_3 + a_3 = 0$

$x_1 + x_2 + x_3 + a_4 = 1,000,000$

$x_i \ge 0, \quad i = 1, 2, 3$

$s_j \ge 0, \quad j = 1, 2, 3$

$a_k \ge 0, \quad k = 1, 2, 3, 4$

Solving $P_2$ with the CPLEX solver in GAMS, we obtain the unique solution $x_1 = 1,000,000$, $x_2 = 0$, $x_3 = 0$, $s_1 = s_2 = s_3 = a_1 = a_2 = a_3 = a_4 = 0$. Thus the system (34)-(38) has the solution $(1,000,000, 0, 0)$ according to Proposition 1, and the core of $G_1$ is $\{(1,000,000, 0, 0)\}$.
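Because $P_2$ is linear, the result can be reproduced without GAMS. The sketch below (our own, using SciPy's `linprog` in place of CPLEX) stacks the variables as $(x_1, x_2, x_3, s_1, s_2, s_3, a_1, \dots, a_4)$ and minimizes the sum of the artificial variables:

```python
import numpy as np
from scipy.optimize import linprog

M = 1_000_000
# Variable order: x1, x2, x3, s1, s2, s3, a1, a2, a3, a4
c = np.r_[np.zeros(6), np.ones(4)]          # minimize a1 + a2 + a3 + a4
A_eq = np.array([
    [-1, -1,  0, 1, 0, 0, 1, 0, 0, 0],      # -x1 - x2 + s1 + a1 = -M
    [-1,  0, -1, 0, 1, 0, 0, 1, 0, 0],      # -x1 - x3 + s2 + a2 = -M
    [ 0, -1, -1, 0, 0, 1, 0, 0, 1, 0],      # -x2 - x3 + s3 + a3 = 0
    [ 1,  1,  1, 0, 0, 0, 0, 0, 0, 1],      #  x1 + x2 + x3 + a4 = M
])
b_eq = [-M, -M, 0, M]
res = linprog(c, A_eq=A_eq, b_eq=b_eq)      # default bounds: all variables >= 0
```

A zero optimal objective value confirms by Proposition 1 that the system is solvable, and the returned point recovers the core imputation $(1{,}000{,}000, 0, 0)$.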

Consider the following goal programming advertising model adapted from [

$x_1 + 2x_2 \le 10$ (resource constraint) (39)

$x_1 \le 6$ (resource constraint) (40)

$x_1 \ge 0$ (resource constraint) (41)

$x_2 \ge 0$ (resource constraint) (42)

$4x_1 + 8x_2 \ge 45$ (goal constraint) (43)

$8x_1 + 24x_2 \le 100$ (goal constraint) (44)

We apply Corollary 2 to solve (39)-(44) for $(x_1, x_2)$ or else to determine that both goals cannot be satisfied. We include the nonnegativity constraints (41)-(42) in the set $X = \{(x_1, x_2) : x_i \ge 0, i = 1, 2\}$ to avoid adding slack variables to them. We then change the direction of (43) to $\le$ as in (1) and add slack variables to all inequalities, but add artificial variables only to the goal constraints. We do not distinguish between the relative importance of the goals. Problem T in Corollary 2 thus becomes

($P_3$) Minimize $f(x_1, x_2, s_1, s_2, s_3, s_4, a_1, a_2) = \sum_{k=1}^{2} a_k$

subject to

$x_1 + 2x_2 + s_1 = 10$

$x_1 + s_2 = 6$

$-4x_1 - 8x_2 + s_3 + a_1 = -45$

$8x_1 + 24x_2 + s_4 + a_2 = 100$

$x_i \ge 0, \quad i = 1, 2$

$a_j \ge 0, \quad j = 1, 2$

$s_k \ge 0, \quad k = 1, 2, 3, 4$

The CPLEX solver in GAMS shows that $P_3$ has no feasible solution and hence that (39)-(44) cannot be jointly satisfied. However, a slight modification of $P_3$ yields further information. We now subtract the artificial variables instead of adding them in the goal constraints of $P_3$, as allowed by Corollary 2. In this case, we get the solution $x_1 = 5$, $x_2 = 2.5$, $s_2 = 1$, $a_1 = 5$, $s_1 = s_3 = s_4 = a_2 = 0$. The conclusion from Proposition 1 is again that (39)-(44) cannot be satisfied. But now $a_1 = 5$ is the amount by which (43) cannot be met at $x_1 = 5$, $x_2 = 2.5$. This point satisfies (44), however, since $a_2 = 0$. Such information is available if artificial variables are used only in the goal constraints of (4) and are subtracted rather than added. In that case, the slack and artificial variables in a goal constraint act as a pair of deviational variables [
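The modified problem, with the artificial variables subtracted in the goal constraints, is again linear and can be sketched with SciPy's `linprog` (our illustration; the paper uses CPLEX). Variables are ordered $(x_1, x_2, s_1, \dots, s_4, a_1, a_2)$:

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: x1, x2, s1, s2, s3, s4, a1, a2
c = np.r_[np.zeros(6), np.ones(2)]           # minimize a1 + a2
A_eq = np.array([
    [ 1,  2, 1, 0, 0, 0,  0,  0],            #  x1 + 2x2 + s1 = 10
    [ 1,  0, 0, 1, 0, 0,  0,  0],            #  x1 + s2 = 6
    [-4, -8, 0, 0, 1, 0, -1,  0],            # -4x1 - 8x2 + s3 - a1 = -45
    [ 8, 24, 0, 0, 0, 1,  0, -1],            #  8x1 + 24x2 + s4 - a2 = 100
])
b_eq = [10, 6, -45, 100]
res = linprog(c, A_eq=A_eq, b_eq=b_eq)       # default bounds: all variables >= 0
```

An optimal value of 5, all of it carried by $a_1$, reproduces the conclusion above: goal (43) falls short by 5 while goal (44) can still be met exactly.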

Reference [

A nonlinear goal programming problem has either a nonlinear goal or a nonlinear resource constraint. Consider the following nonlinear two-goal programming problem, where $X = \{(x_1, x_2, x_3) : x_i \in W, i = 1, 2, 3\}$ and $W = \{0, 1, 2, 3, \dots\}$ is the set of nonnegative integers.

$x_1 + x_3^2 \le 5$ (resource constraint) (45)

$x_2x_3 - x_1x_3 \le 8$ (resource constraint) (46)

$2x_1^2 - 3x_1x_3 + x_2 \ge 3$ (goal constraint) (47)

$4x_1 - x_3^2 + 3x_2 \le 12$ (goal constraint) (48)

$(x_1, x_2, x_3) \in X$ (49)

After taking the negative of (47) to add a slack variable and put (47) into the standard form (1), we use Proposition 1 to formulate (45)-(49) as a minimization problem and obtain

($P_4$) Minimize $f(x_1, x_2, x_3, s_1, \dots, s_4, a_1, \dots, a_4) = \sum_{k=1}^{4} a_k$

subject to

$x_1 + x_3^2 + s_1 + a_1 = 5$

$x_2x_3 - x_1x_3 + s_2 + a_2 = 8$

$-2x_1^2 + 3x_1x_3 - x_2 + s_3 + a_3 = -3$

$4x_1 - x_3^2 + 3x_2 + s_4 + a_4 = 12$

$s_j \ge 0, \quad j = 1, 2, 3, 4$

$a_k \ge 0, \quad k = 1, 2, 3, 4$

$(x_1, x_2, x_3) \in X$

The BARON solver in GAMS gives the solution $x_1 = 0$, $x_2 = 3$, $x_3 = 1$, $s_1 = 4$, $s_2 = 5$, $s_4 = 4$, and $a_1 = a_2 = a_3 = a_4 = s_3 = 0$, with an objective function value of 0. The zero objective function value indicates that $(0, 3, 1)$ satisfies (45)-(49). Weighting the artificial variables in the objective function of problem T is also possible for nonlinear goal programming.
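Since $X$ restricts $P_4$ to nonnegative integers and the constraints bound the variables, feasibility of (45)-(49) can also be confirmed by exhaustive search over a small grid, a sketch of ours independent of BARON:

```python
# Enumerate all nonnegative integer points in a box large enough to
# contain every feasible point of (45)-(49): x3^2 <= 5 forces x3 <= 2,
# x1 + x3^2 <= 5 forces x1 <= 5, and (48) then forces 3*x2 <= 16.
feasible = [(x1, x2, x3)
            for x1 in range(6) for x2 in range(6) for x3 in range(3)
            if x1 + x3**2 <= 5                       # (45)
            and x2*x3 - x1*x3 <= 8                   # (46)
            and 2*x1**2 - 3*x1*x3 + x2 >= 3          # (47)
            and 4*x1 - x3**2 + 3*x2 <= 12]           # (48)
```

The point $(0, 3, 1)$ reported by BARON appears in the list, confirming the zero objective value of $P_4$.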

Let $G_2$ be the two-person nonzero-sum game with the payoff matrix shown in the table below. A mixed-strategy Nash equilibrium (NE) of $G_2$ is characterized by the following system, where $\alpha$ and $\beta$ denote the equilibrium expected payoffs to Players 1 and 2, respectively.

$2y_1 - y_2 - \alpha \le 0$ (50)

$-y_1 + y_2 - \alpha \le 0$ (51)

$x_1 - x_2 - \beta \le 0$ (52)

$-x_1 + 2x_2 - \beta \le 0$ (53)

$2x_1y_1 - x_2y_1 - x_1y_2 + x_2y_2 - \alpha = 0$ (54)

$x_1y_1 - x_2y_1 - x_1y_2 + 2x_2y_2 - \beta = 0$ (55)

$x_1 + x_2 - 1 = 0$ (56)

|  | Player 2: $y_1$ | Player 2: $y_2$ |
|---|---|---|
| Player 1: $x_1$ | $(2, 1)$ | $(-1, -1)$ |
| Player 1: $x_2$ | $(-1, -1)$ | $(1, 2)$ |

$y_1 + y_2 - 1 = 0$ (57)

$(x_1, x_2, y_1, y_2) \in X$ (58)

Thus problem T of Proposition 1 is now

($P_5$) Minimize $f(x_1, x_2, y_1, y_2, s_1, \dots, s_4, a_1, \dots, a_8) = \sum_{k=1}^{8} a_k$

subject to

$2y_1 - y_2 - \alpha + s_1 + a_1 = 0$

$-y_1 + y_2 - \alpha + s_2 + a_2 = 0$

$x_1 - x_2 - \beta + s_3 + a_3 = 0$

$-x_1 + 2x_2 - \beta + s_4 + a_4 = 0$

$2x_1y_1 - x_2y_1 - x_1y_2 + x_2y_2 - \alpha + a_5 = 0$

$x_1y_1 - x_2y_1 - x_1y_2 + 2x_2y_2 - \beta + a_6 = 0$

$x_1 + x_2 - 1 + a_7 = 0$

$y_1 + y_2 - 1 + a_8 = 0$

$x_i \ge 0, \quad i = 1, 2$

$y_j \ge 0, \quad j = 1, 2$

$s_k \ge 0, \quad k = 1, \dots, 4$

$a_l \ge 0, \quad l = 1, \dots, 8.$

The BARON solver in GAMS gives the solution $y_1 = 0.4$, $y_2 = 0.6$, $x_1 = 0.6$, $x_2 = 0.4$, $\alpha = 0.2$, $\beta = 0.2$, with all the $s_j$ and $a_k$ equal to 0. It is well known that a mixed NE always exists [. The equilibrium mixed strategies of Players 1 and 2 in $G_2$ are $(0.6, 0.4)$ and $(0.4, 0.6)$, respectively.
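The reported equilibrium can be verified directly by substituting it into (50)-(58); the sketch below checks that all slack and artificial variables can indeed be zero:

```python
# Candidate equilibrium from the BARON run.
x1, x2, y1, y2 = 0.6, 0.4, 0.4, 0.6

# (54)-(55): alpha and beta are the players' expected payoffs.
alpha = 2*x1*y1 - x2*y1 - x1*y2 + x2*y2
beta  = x1*y1 - x2*y1 - x1*y2 + 2*x2*y2

# (50)-(53): neither pure strategy improves on the mixed strategy,
# and (56)-(57): each mixed strategy sums to 1.
checks = [2*y1 - y2 - alpha <= 1e-12,
          -y1 + y2 - alpha <= 1e-12,
          x1 - x2 - beta <= 1e-12,
          -x1 + 2*x2 - beta <= 1e-12,
          abs(x1 + x2 - 1) < 1e-12,   # (56)
          abs(y1 + y2 - 1) < 1e-12]   # (57)
```

All four equilibrium inequalities hold with equality here, reflecting that each player is indifferent between pure strategies at the mixed NE.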

For n ≥ 3 , necessary and sufficient conditions given in [

Consider the following problem with nonnegative integer variables.

($P_6$) Minimize $f(x_1, x_2) = -3x_1 - x_2$

subject to

$g(x_1, x_2) = 3x_1^2 + 2x_2^2 \le 18$

$(x_1, x_2) \in X$

where $X = \{(x_1, x_2) : x_i \in W, i = 1, 2\}$. Associated with the problem $T' = P_6$ of Algorithm 1 is the system $S'$

$-3x_1 - x_2 \le z$ (59)

$3x_1^2 + 2x_2^2 \le 18$ (60)

$(x_1, x_2) \in X$ (61)

We apply Algorithm 1 to (59)-(61) with the following steps.

Step 1. We use Proposition 1 and the BARON solver in GAMS to find the point $(1, 2)$ satisfying (60)-(61). Set $z_1 = f(1, 2) = -5$, $\delta = 1$, and $i = 1$.

Step 2. Set $z_2 = z_1 - \delta = -6$. Using Proposition 1 for (59)-(61) with $z = -6$, GAMS finds a solution. In general, Step 2 requires only the existence or nonexistence of a solution to $S'$ for the current value of $z$.

Step 3. Set $i = 2$.

Step 2. Set $z_3 = z_2 - \delta = -7$. GAMS again finds a solution for (59)-(61) with $z = -7$.

Step 3. Set $i = 3$.

Step 2. Set $z_4 = z_3 - \delta = -8$. Now GAMS determines that no solution to (59)-(61) exists.

Step 4. The optimal objective function value of $P_6$ lies in the interval $[z_4, z_3] = [-8, -7]$. Since the variables and objective coefficients of $P_6$ are integers, the optimal value is exactly $z_3 = -7$, attained at the optimal solution $(2, 1)$ of $P_6$. Geometrically, for each fixed $z$ the set of points satisfying (59) is a half-plane, and decreasing $z$ slides this half-plane across the feasible region of (60)-(61) until they no longer intersect.

In general, the sublevel sets of the objective function $f$ of the problem $T'$ solved by Algorithm 1 are considerably more complicated, in higher dimensions, with more constraints, and for a nonlinear objective function, than the sublevel sets of $P_6$, which are simple half-spaces. The two-dimensional level (not sublevel) sets of [

In this paper, we have related a general nonlinear programming problem to a system S of nonlinear inequalities and equalities in two ways. In the first, we solved S or else determined that a solution did not exist by solving an associated nonlinear programming problem. In particular, we used artificial variables and generalized phase 1 of the two-phase simplex method to examine the solvability of S. Examples were given for a system of nonlinear inequalities and equalities, in cooperative and noncooperative game theory, and in goal programming.

In the second way, we developed an algorithm to solve a general nonlinear programming problem to any degree of accuracy by determining whether a solution exists for each of a series of systems S, i.e., for a series of subproblems. The fact that an optimization problem can solve a given system S, but not vice versa, simply indicates that an optimization problem must essentially solve S as part of the optimization process. In this second approach, we generalized to nonlinear programming the “sliding objective function” method of linear programming, and an example was presented to illustrate its geometrical interpretation. We noted that in linear programming a sequential active-set method with an inverse interpretation of Algorithm 1 uses the simplex algorithm for each subproblem and has proved efficient in solving large-scale linear programming problems. This observation also emphasizes that both of our approaches here rely on existing computational techniques and thus might be construed as meta-approaches.

The authors declare no conflicts of interest regarding the publication of this paper.

Corley, H.W. and Dwobeng, E.O. (2020) Relating Optimization Problems to Systems of Inequalities and Equalities. American Journal of Operations Research, 10, 284-298. https://doi.org/10.4236/ajor.2020.106016