Relating Optimization Problems to Systems of Inequalities and Equalities

In quantitative decision analysis, an analyst 
applies mathematical models to make decisions. Frequently these models involve 
an optimization problem to determine the 
values of the decision variables, a system S of possibly nonlinear inequalities and
equalities to restrict these variables, or both. In this note, we relate a general nonlinear 
programming problem to such a system S in such a way as to provide a solution 
of either by solving the other, with certain limitations. We begin
with S and generalize phase 1 of the
two-phase simplex method to either solve S or establish that a solution does not exist. This conclusion is reached by
minimizing a sum of added
artificial variables subject to the modified system S as constraints. Using examples, we illustrate how
this approach can give the core of a cooperative game and an equilibrium 
for a noncooperative game, as well as solve both linear and nonlinear goal 
programming problems. Similarly, we start with a general nonlinear programming 
problem and present an algorithm to solve it as a series of systems S by generalizing the “sliding objective function method” for two-dimensional 
linear programming. An example is presented to illustrate the geometrical 
nature of this approach.


Introduction
Quantitative decision analysis involves notions of comparison and optimality.
The result is that the mathematical models used to make decisions frequently involve an optimization problem to determine the values of the decision variables, a system S of possibly nonlinear inequalities and equalities to restrict these variables, or both. The solution of such a system S and optimization problems is thus essential to decision analysis. In this note we relate a general nonlinear programming problem to a system S to provide a solution of either by solving the other, with certain limitations. In particular, we present a method for either obtaining a solution for S or else establishing that a solution does not exist by using existing computational techniques. Our method generalizes phase 1 of the two-phase linear programming simplex method to nonlinear programming. We also present an algorithm to solve a general nonlinear programming problem as a series of such systems S by generalizing the "sliding objective function" method for geometrically solving a two-dimensional linear programming problem.
As background, we note that systems of linear equations have been considered for millennia. The ancient Chinese organized linear systems of equations in a matrix-like form and solved them with a procedure equivalent to Gaussian elimination [1]. In the third century BCE, Archimedes [2] formulated his well-known cattle problem as a system of linear equations that required an integer solution. In the seventeenth century Descartes introduced systems of linear equations into geometry, and later that century Leibniz developed a systematic method of finding solutions using determinants [3]. In the eighteenth century Cramer used Leibniz's work to establish a way of obtaining explicit solutions via his eponymous Cramer's rule, and Grassmann began to synthesize these developments into what is now called linear algebra [4].
The history of the theory of linear inequalities is more recent and developed through the interactions between mathematics and other disciplines [5]. In the nineteenth century Fourier proposed the idea of constructing a mathematical theory for systems of linear inequalities. Shortly thereafter, Farkas developed a theory of systems of linear inequalities with respect to analytical mechanics that led to Farkas's Lemma. At the end of the nineteenth century, Minkowski, independently of Farkas, derived a theory of linear inequalities with respect to convexity [5]. In the twentieth century Lovitt introduced the preferential voting problem as a set of inequalities and sought a geometric solution. Later, Dines approached the voting problem in an algebraic way [5]. The ideas behind his solution method led to the development of a theory of systems of linear inequalities [6]. He also examined the relation between the matrix of the coefficients of a system of linear inequalities, the existence of their solutions, and the characteristics of solutions. Further studies by Kuhn and Tucker [7] [8], for example, refined these results but did not relate optimization problems to linear systems as done here in the general case.
An efficient computational method to solve a system of linear inequalities and equalities did not exist until Dantzig [9] suggested a phase 1 involving artificial variables to start the simplex method. Subsequent work involving optimization has focused on the solvability of convex inequality constraints in a convex programming problem [11], solving systems of linear interval inequalities [12] [13], or considering variational inequalities [14] [15], for example.
Here we generalize Dantzig's approach to systems of nonlinear inequalities and equalities S by considering an associated nonlinear programming problem.
We then extend the geometric "sliding objective function method" [9] for solving a two-variable linear programming problem to solving a general nonlinear programming problem. Our approach requires determining the solvability of a nonlinear system S at each iteration.
The paper is organized as follows. In Section 2, a correspondence is established between the solvability of a nonlinear system S and an associated nonlinear programming minimization problem. We then present an algorithm for solving a general nonlinear programming minimization problem, to any degree of accuracy, as a series of systems S. In Section 3, examples are given. Conclusions are stated in Section 4.

Basic Results
For real-valued functions f, g_1, ..., g_m, h_1, ..., h_k, consider the system S of inequalities and equalities (1)-(3) and the minimization problem T below, where X is a set in R^n and Z_+ denotes the set of nonnegative integers for problems with integer variables. We note that each equality in (2) could be replaced by two inequalities in opposite directions so that S without (2) remains a general formulation.

(S)  g_i(x) ≤ 0,  i = 1, ..., m,  (1)
     h_j(x) = 0,  j = 1, ..., k,  (2)
     x ∈ X.  (3)

To determine the solvability of S, we add a nonnegative artificial variable a_i to each inequality of (1) and a pair of nonnegative artificial variables b_j+, b_j− to each equality of (2) to obtain

(T)  Minimize  a_1 + ... + a_m + (b_1+ + b_1−) + ... + (b_k+ + b_k−)
     subject to  g_i(x) − a_i ≤ 0,  i = 1, ..., m,  (4)
                 h_j(x) + b_j+ − b_j− = 0,  j = 1, ..., k,  (5)
                 x ∈ X,  a_i ≥ 0,  b_j+ ≥ 0,  b_j− ≥ 0.

The objective function of T is bounded below by zero, and it attains zero exactly when all the artificial variables vanish, that is, when the x-component of a feasible point of T satisfies (1)-(3). We thus obtain the following results.

Proposition 1. The system S has a solution if and only if T has an optimal objective function value of zero, in which case the x-component of any optimal solution to T is a solution to S.

Corollary 1. The system S has no solution if and only if the optimal objective function value of T is positive.

Corollary 2. Proposition 1 remains valid when 1) an artificial variable is added only to each constraint of S not otherwise known to be satisfiable, 2) the coefficient of an added artificial variable in (4)-(5) is any nonzero scalar, and 3) the objective function of T is the sum of the added artificial variables with any positive scalar coefficients.
Corollary 1 is simply an equivalent restatement of Proposition 1 in terms of a necessary and sufficient condition for the nonexistence of a solution to S. Proposition 1 and Corollary 1 together fully address the solvability of S. Corollary 2 generalizes the problem T under which Proposition 1 is valid. Corollary 2 is established with a proof similar to that of Proposition 1.
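To make Proposition 1 concrete, the following Python sketch applies it to a small hypothetical system using scipy (the functions g and h below, the starting point, and the use of the SLSQP solver are all illustrative assumptions, not an example from this paper):

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch of Proposition 1 on a hypothetical system S:
#   g(x) = x1**2 + x2**2 - 4 <= 0     (an inequality as in (1))
#   h(x) = x1 + x2 - 1 = 0            (an equality as in (2))
# Problem T: minimize a + b_plus + b_minus over v = (x1, x2, a, b+, b-)
# subject to g(x) - a <= 0 and h(x) + b+ - b- = 0, artificials >= 0.

def t_objective(v):
    return v[2] + v[3] + v[4]              # sum of artificial variables

constraints = [
    # scipy's "ineq" means fun(v) >= 0, so g(x) - a <= 0 becomes a - g(x) >= 0
    {"type": "ineq", "fun": lambda v: v[2] - (v[0]**2 + v[1]**2 - 4)},
    {"type": "eq",   "fun": lambda v: (v[0] + v[1] - 1) + v[3] - v[4]},
]
bounds = [(None, None), (None, None), (0, None), (0, None), (0, None)]

res = minimize(t_objective, [0.0, 0.0, 1.0, 1.0, 1.0],
               bounds=bounds, constraints=constraints, method="SLSQP")

# A zero optimal value certifies that S is solvable (Proposition 1),
# and the x-component of the minimizer is then a solution of S.
print(res.fun, res.x[:2])
```

A nonzero optimal value would instead certify, per Corollary 1, that the hypothetical system has no solution.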
Observe that the efficiency of solving the problem T in either Proposition 1 or Corollary 2 depends both on the nature of T and the computational method used to solve it. For example, Proposition 1 can be applied to a Diophantine equation, a polynomial equation with integral coefficients, usually in two or more variables, for which integer solutions are required. It is well known that there are undecidable Diophantine equations [10]; that is, there is no possible algorithm to determine in finite time if a solution exists. It follows that a problem T associated with a system S consisting of a single Diophantine equation may be undecidable.

Now consider the following system S′ of inequalities and equalities (11)-(14) and the minimization problem T′, where z is a variable representing a scalar bound on the objective function f:

(S′)  f(x) ≤ z,  (11)
      g_i(x) ≤ 0,  i = 1, ..., m,  (12)
      h_j(x) = 0,  j = 1, ..., k,  (13)
      x ∈ X,  (14)

(T′)  Minimize f(x) subject to (12)-(14).

Assume that T′ has an optimal solution and choose a tolerance ε > 0. Algorithm 1 proceeds as follows.

Step 1. Use Proposition 1 to find a solution x^1 of (12)-(14), set z_1 = f(x^1), and set n = 1.
Step 2. Set z_{n+1} = z_n − ε.
Step 3. Use Proposition 1 or Corollary 2 to either find a solution x^{n+1} of S′ with z = z_{n+1} or determine that none exists. In the former case, set n = n + 1 and go to Step 2. In the latter case, go to Step 4.
Step 4. The optimal objective function value for T′ lies in the interval [z_n − ε, z_n], and the final x^n found in Step 3 is either an exact or approximate solution to T′. The objective function value f(x^n) is within ε of the optimal value of T′.

Algorithm 1 is an extension of the "sliding objective function method" for solving a two-variable linear programming problem [9]. More generally, the set of points satisfying (11) for fixed z is a sublevel set [16] of f, a concept used extensively in quasiconvex minimization [17]. The difficulty in solving T′ by Algorithm 1 is that one must solve a series of systems S′ via Proposition 1 or Corollary 2. But advancing computer techniques [18] may allow a computer to "visualize" the m-dimensional sublevel sets of Algorithm 1 and thus determine at least an approximate solution to T′ geometrically in a manner analogous to finding the zeros of a real-valued function with graphing software. Algorithm 1 may also be construed as an inverse approach for nonlinear problems to the linear active-set constraint selection method in m dimensions described in [19] [20] [21] [22]. There, for a minimization problem, z increases at each iteration upon adding more active constraints until all constraints are active.
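The loop of Algorithm 1 can be sketched in Python for a small hypothetical integer program (the problem, the search box, and the helper solve_S_prime are all assumptions made for illustration; in general, Step 3 would invoke Proposition 1 rather than enumeration):

```python
import itertools

# Hypothetical problem: minimize f(x) = x1 - 3*x2 over nonnegative
# integers subject to x1 + x2 <= 4.

def f(x):
    return x[0] - 3 * x[1]

def solve_S_prime(z, box=10):
    """Return a solution of S'(z), i.e. f(x) <= z plus the constraints,
    or None if no solution exists (stands in for Proposition 1)."""
    for x in itertools.product(range(box + 1), repeat=2):
        if x[0] + x[1] <= 4 and f(x) <= z:
            return x
    return None

# Step 1: find any feasible point and initialize z.
x = solve_S_prime(float("inf"))
z = f(x)
# Steps 2-3: with eps = 1 (the data are integral), keep lowering the
# bound while S'(z - 1) remains solvable.
while (x_next := solve_S_prime(z - 1)) is not None:
    x, z = x_next, f(x_next)
# Step 4: the optimum lies in (z - 1, z]; with integral data, z = f(x)
# is exactly optimal.
print(x, z)
```

Note that when Step 3 returns a point whose objective value is strictly below z − ε, the sketch restarts the slide from that smaller value, which can only shorten the search.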
In contrast, z decreases in Algorithm 1 until there are no solutions to S′ .
In the former case, the additional constraints and resulting smaller feasible region cause z to increase. In the latter case, a smaller z gives a smaller feasible region for S′ . The significance of this comparison is that an approach related to Algorithm 1 has been efficiently implemented for large-scale linear programming problems T ′ .

Example of Solving Nonlinear Inequalities and Equalities
Consider a first system S of nonlinear inequalities and equalities. To find a solution for S or else determine that a solution does not exist, we add artificial variables as in Proposition 1 and solve the associated problem T, whose zero optimal objective function value yields a solution for S. We note that applying the same approach to a second system (22)-(25) gives a nonzero optimal objective function value after putting (22) in the standard inequality direction (1). Hence no solution exists for system (22)-(25).

Example of Finding the Core of a Cooperative Game
In cooperative game theory, the solution concept called the core of a game [24] reduces to the solution of a system of linear inequalities and equalities. For the three-player game considered here, with a total payoff of 1,000,000 to be divided, an imputation (x1, x2, x3) will be in the core if and only if (x1, x2, x3) also satisfies the coalition constraints (26)-(33). To find the core defined by (26)-(33), we remove the redundant constraint (33), add artificial variables to the remaining constraints, and apply Proposition 1.
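The phase-1 construction for a core-nonemptiness test can be sketched with scipy's linear programming solver on a hypothetical three-player game (the characteristic-function values below are assumptions, not the paper's game, and the payoffs are scaled so that the grand coalition's value is 1 rather than 1,000,000):

```python
from scipy.optimize import linprog

# Hypothetical characteristic function: v({i}) = 0, v({1,2}) = 0.4,
# v({1,3}) = 0.5, v({2,3}) = 0.6, v(N) = 1. Phase-1 problem T:
# minimize a1 + a2 + a3 with one artificial variable relaxing each
# coalition constraint; the efficiency equality x1 + x2 + x3 = 1 gets
# no artificial variable since it is clearly satisfiable (Corollary 2).

c = [0, 0, 0, 1, 1, 1]                 # minimize sum of artificials
A_ub = [                               # -(x_i + x_j) - a <= -v(coalition)
    [-1, -1, 0, -1, 0, 0],
    [-1, 0, -1, 0, -1, 0],
    [0, -1, -1, 0, 0, -1],
]
b_ub = [-0.4, -0.5, -0.6]
A_eq = [[1, 1, 1, 0, 0, 0]]
b_eq = [1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)

# An optimal value of zero means the core is nonempty (Proposition 1),
# and res.x[:3] is then a core imputation.
print(res.fun, res.x[:3])
```

A positive optimal value would instead certify an empty core for the assumed game.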

Example of Solving a Linear Goal Program
Consider the following goal programming advertising model adapted from [25]. There are four resource constraints and two goals, where x1 and x2 are the nonnegative numbers of minutes of radio and television ads, respectively, to be bought for advertising some product. The resource constraints impose limitations on developing the ads. The first goal is that the ads should reach at least 45 million people, while the second goal is that the total cost spent on both ads should be no more than 100 thousand dollars. The explicit resource and goal constraints are given as the system (39)-(44), with the resource constraints written in the direction (1) to avoid adding slack variables to them. We then change the direction of (43) to ≤ as in (1), add slack variables to all inequalities, but only add artificial variables to the goal constraints. We do not distinguish between the relative importance of the goals. Problem T in Corollary 2 thus becomes (P3), the minimization of the sum of the two artificial variables subject to these constraints. The CPLEX solver in GAMS gives that P3 has no solution and hence that (39)-(44) cannot be jointly satisfied. However, a slight modification of P3 yields further information. We now subtract artificial variables instead of adding them in the goal constraints of P3, as allowed by Corollary 2. In this case, we get a solution because the artificial variables are used only in the goal constraints of (4) and are subtracted rather than added. In that case, the slack and artificial variables in a goal constraint act as a pair of deviational variables [26].
Reference [26] also discusses weighting the artificial variables differently in the objective function of T. Such a weighting can account for the relative importance of the different goals as well as normalize the goal constraints to a comparable scale. T would then provide a more accurate model.
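The deviational-variable construction can be sketched with scipy's linear programming solver under assumed data (the paper's coefficients in (39)-(44) are not reproduced here; only the goal targets of 45 million people and 100 thousand dollars are kept, and the single resource constraint below is an illustrative assumption):

```python
from scipy.optimize import linprog

# x1, x2 = minutes of radio and TV ads; d1, d2 = deviational
# (subtracted artificial) variables relaxing the two goals.
# Variables: [x1, x2, d1, d2]
c = [0, 0, 1, 1]                   # minimize total goal deviation
A_ub = [
    [1, 2, 0, 0],                  # resource: x1 + 2*x2 <= 10
    [-4, -8, -1, 0],               # goal 1: 4*x1 + 8*x2 + d1 >= 45
    [8, 24, 0, -1],                # goal 2: 8*x1 + 24*x2 - d2 <= 100
]
b_ub = [10, -45, 100]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)

# A positive optimal value means the goals cannot all be met exactly;
# its size is the minimum total deviation from them.
print(res.x, res.fun)
```

With these assumed numbers the reach goal is unattainable under the resource limit, so the solver returns a positive minimum deviation rather than declaring infeasibility, which is exactly the extra information the subtracted artificial variables provide.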

Example of Solving a Nonlinear Goal Program
A nonlinear goal programming problem has either a nonlinear goal or a nonlinear resource constraint. Consider the following nonlinear two-goal programming problem. After taking the negative of (47) to put it in the direction (1), we add a slack variable to each inequality, subtract artificial variables in the goal constraints as before, and solve the resulting problem T of Corollary 2 with a nonlinear solver.
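A hypothetical instance of this construction can be sketched with scipy (the goals, the resource constraint, the variable ordering w = (x1, x2, d1, d2), and the starting point below are all assumptions made for illustration):

```python
from scipy.optimize import minimize

# Hypothetical nonlinear two-goal program:
#   goal 1: x1**2 + x2 >= 9      goal 2: x1 + x2**2 <= 4
#   resource: x1 + x2 <= 5, all variables nonnegative.
# Each goal is relaxed by a subtracted artificial (deviational)
# variable d1 or d2, and their sum is minimized as in Corollary 2.

cons = [
    {"type": "ineq", "fun": lambda w: w[0]**2 + w[1] + w[2] - 9},    # goal 1 relaxed
    {"type": "ineq", "fun": lambda w: 4 + w[3] - (w[0] + w[1]**2)},  # goal 2 relaxed
    {"type": "ineq", "fun": lambda w: 5 - w[0] - w[1]},              # resource
]
res = minimize(lambda w: w[2] + w[3], [3.0, 0.0, 0.0, 1.0],
               bounds=[(0, None)] * 4, constraints=cons, method="SLSQP")

# An optimal value of zero means both goals can be met jointly;
# a positive value is the minimum total deviation from them.
print(res.x, res.fun)
```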

Example of Finding a Nash Equilibrium (NE)
Let G 2 be the two-person nonzero sum game with the payoff matrix of Table 1.
According to [27], a pair of mixed strategies, together with auxiliary variables representing the players' payoffs, is an NE of G2 if and only if it satisfies a system (50)-(58) of inequalities and equalities requiring that no pure strategy of either player yield more than that player's auxiliary payoff variable, which the equilibrium strategies attain. Solving the associated problem T of Proposition 1 thus yields an NE of G2. For n ≥ 3, necessary and sufficient conditions given in [29] can be similarly used to find an NE. Likewise, Berge [30] and more general equilibria can be found from the necessary and sufficient conditions stated in [31]. The reasoning behind these sets of conditions is that any equilibrium in noncooperative game theory is implicitly defined by an optimization problem with inequality and equality constraints. This fact results from the properties associated with the general meaning of an equilibrium. The system (50)-(58), for example, is just a way to simplify this optimization problem by writing it as a system of inequalities and equalities with auxiliary variables.
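The idea that an NE is characterized by such an inequality and equality system can be sketched in Python on a hypothetical 2x2 bimatrix game (the payoff matrices A and B, the violation function gap, and the grid of starting points are all assumptions, not the paper's Table 1):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[3, 0], [1, 2]])   # row player's payoffs (assumed)
B = np.array([[2, 1], [0, 3]])   # column player's payoffs (assumed)

def gap(w):
    """Total violation of the NE conditions at mixed strategies
    p = (w0, 1-w0), q = (w1, 1-w1): each term is nonnegative and
    both vanish exactly at a Nash equilibrium."""
    p = np.array([w[0], 1 - w[0]])
    q = np.array([w[1], 1 - w[1]])
    u = max(A @ q)               # row player's best-response payoff
    v = max(p @ B)               # column player's best-response payoff
    return (u - p @ A @ q) + (v - p @ B @ q)

# Minimize the violation from a small grid of starts; a zero minimum
# exhibits an equilibrium.
res = min((minimize(gap, [a, b], bounds=[(0, 1), (0, 1)])
           for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)),
          key=lambda r: r.fun)
print(res.x, res.fun)
```

This mirrors the paper's theme: the equilibrium is recovered by driving the optimal value of an auxiliary minimization down to zero, just as the artificial-variable objective of T vanishes exactly at a solution of S.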

Example of Solving an Optimization Problem with Algorithm 1
Consider the following problem with nonnegative integer variables.
(P6) Minimize a linear objective function f(x1, x2) over nonnegative integer variables subject to linear constraints. Since the values of f at the feasible integer points are integers, its minimum value must be −7, occurring at (2, 1). Figure 1 illustrates the application of Algorithm 1 to P6. For each fixed z, the set of points (x1, x2) satisfying f(x1, x2) ≤ z is a sublevel set of f, here a half-plane, and z is decreased until this half-plane no longer meets the feasible region. In general, the sublevel sets of the objective function f of the problem (T′) solved by Algorithm 1 are considerably more complicated in higher dimensions, with more constraints, and for a nonlinear objective function than the level sets of P6. As seen in Figure 1, the sublevel sets for P6 are simple half-spaces. The two-dimensional level (not sublevel) sets of [18] provide more interesting visual examples in a context not involving optimization.

Conclusions
In this paper, we have related a general nonlinear programming problem to a system S of nonlinear inequalities and equalities in two ways. In the first, we solved S or else determined that a solution did not exist by solving an associated nonlinear programming problem. In particular, we used artificial variables and generalized phase 1 of the two-phase simplex method to examine the solvability of S. Examples were given for a system of nonlinear inequalities and equalities, in cooperative and noncooperative game theory, and in goal programming.
In the second way, we developed an algorithm to solve a general nonlinear programming problem to any degree of accuracy by determining if a solution exists for each of a series of systems S, i.e., for a series of subproblems. The fact that an optimization problem can solve a given system S, but not vice versa, simply indicates that an optimization problem must essentially solve S as part of the optimization process. In this second approach, we generalized to nonlinear programming the "sliding objective function" method of linear programming, and an example was presented to illustrate its geometrical interpretation. We noted that in linear programming a sequential active-set method with an inverse interpretation of Algorithm 1 uses the simplex algorithm for each subproblem and has proved efficient in solving large-scale linear programming problems. This observation also emphasizes that both of our approaches here rely on existing computational techniques and thus might be construed as meta-approaches.