A Modified Lagrange Method for Solving Convex Quadratic Optimization Problems

Abstract

In this paper, a modified version of the Classical Lagrange Multiplier method is developed for convex quadratic optimization problems. The method, which evolves from the first order derivative test for optimality of the Lagrangian function with respect to the primary variables of the problem, decomposes the solution process into two independent stages: the primary variables are solved for first, and the secondary variables, which are the Lagrange multipliers, are solved for afterward. This innovation leads to solving, independently, two simpler systems of equations: one involving the primary variables only and the other involving the secondary ones. Solutions obtained for small sized problems (as a preliminary test of the method) demonstrate that the new method is generally effective in producing the required solutions.

Share and Cite:

Stephen, T., John, A. and Etwire, C. (2024) A Modified Lagrange Method for Solving Convex Quadratic Optimization Problems. Open Journal of Optimization, 13, 1-20. doi: 10.4236/ojop.2024.131001.

1. Introduction

Whenever a viable decision is made by selecting from a variety of alternatives which can be evaluated by some performance measure for quality's sake, and there are some restrictions involving the alternatives, there is an optimization problem. An optimization problem is therefore characterized by a set of decision alternatives called decision variables, at least one performance measure called the objective function (or objective functions), and often restrictions involving the decision variables called constraints [1]. The occurrence of optimization problems is an everyday phenomenon and cuts across diverse disciplines, including engineering, computer science, applied mathematics, economics, and management, to name a few. Interest in the study of optimization has grown immensely from its early days going back to those of Sir Isaac Newton, and was given new impetus in the 1940s by the seminal work of George B. Dantzig [2] in linear optimization. Today it attracts many researchers and practitioners from diverse fields and backgrounds, including engineers, systems analysts, operations researchers, numerical analysts, and management scientists.

Optimization problems are broadly classified as either linear or nonlinear, constrained or unconstrained [3]. Among constrained nonlinear optimization problems is a special subclass known as convex optimization, with convex quadratic optimization problems being a special subset of this subclass. Convex optimization problems in general have gained a lot of interest over the years, with applications discovered in many fields including computer science, robotics and signal processing [4]. Convex quadratic optimization problems, also called convex quadratic programming problems (which are the focus of this study), are characterized by a linear or convex quadratic objective function and linear or convex quadratic constraints. A convex quadratic programming problem is a type of nonlinear programming. A typical form of the problem (see [5]) is:

$$\min\; c^T x + \frac{1}{2} x^T Q x \quad \text{subject to} \quad Ax = b \tag{1.1}$$

where x is a vector of n decision variables, c is an n-dimensional constant vector, Q and A are respectively n × n symmetric positive semi-definite and m × n constant matrices, and b is an m-dimensional constant vector [6]. If the objective function is linear and the constraints are a mix of linear, nonlinear, or convex quadratic equations, the problem becomes:

$$\min\; c^T x \quad \text{subject to} \quad h_i(x) = b_i, \quad i = 1, 2, \dots, m \tag{1.2}$$

where $h_i(x)$ is linear, or nonlinear and convex quadratic, for some $i \in \{1, 2, \dots, m\}$.

1.1. The Classical Lagrange Method

The Lagrange Multiplier Method, referred to also as the Classical Lagrange Method (CLM), is well-known for solving equality constrained nonlinear optimization problems [7]. The main idea of the method is to convert an equality constrained optimization problem into an unconstrained one, by aggregating the objective function and constraint functions into a composite function called the Lagrangian function, to which the first derivative test for optimality is applied [7]. The Lagrangian function can be seen as constituting a relationship between the objective function and the equality constraints, which simply gives a reformulation of the original problem as an unconstrained one [8]. For convex quadratic programming problems, the objective function and constraint functions are convex and so the Lagrangian function is convex [9]. Therefore, for the sake of this work, we can limit the discussion of the CLM to convex problems only.

Consider a convex optimization problem with equality constraints as:

$$\min\; f(x) \quad \text{subject to:} \quad h_i(x) = b_i, \quad b_i \in \mathbb{R}, \; x \in \mathbb{R}^n, \; i = 1, \dots, m,$$

where f and $h_i$ are respectively a convex quadratic objective function and the convex ith constraint function. By letting $g_i(x) = h_i(x) - b_i = 0$, the problem may be posed as:

$$\min\; f(x) \quad \text{subject to:} \quad g_i(x) = 0, \quad x \in \mathbb{R}^n, \; i = 1, \dots, m. \tag{1.3}$$

The problem (1.3) is transformed into an unconstrained Lagrangian function L as:

$$\min\; L(x, \lambda) = f(x) - \lambda^T g(x) \tag{1.4}$$

where $g(x)$ is an m-vector of constraint functions and λ is an m-vector of multipliers, which are scalars (called Lagrange Multipliers).

The first order necessary optimality conditions for the Lagrangian function, which are also sufficient for a minimum solution for convex unconstrained optimization problems [10], are:

$$\nabla_x L(x, \lambda) = \nabla_x f(x) - \lambda^T \nabla_x g(x) = 0 \tag{1.5a}$$

$$\nabla_\lambda L(x, \lambda) = \nabla_\lambda f(x) - g(x) = 0 \tag{1.5b}$$

Equations (1.5a) and (1.5b) imply that the gradients of the Lagrangian function with respect to x and λ independently vanish at the minimum points (solutions) $x^*$ and $\lambda^*$. Furthermore, it means that if minimum solutions $x^*$ and $\lambda^*$ exist for (1.4), they satisfy (1.5a) and (1.5b). However, for non-convex problems, not all points satisfying the first order necessary optimality conditions are minimum solutions [10]; such problems require that sufficient optimality conditions are also satisfied [10]. Observe also that (1.5a) and (1.5b) have n + m unknown variables (i.e., n variables in x and m variables in λ) whose values have to be determined.

To find the minimum values $x^*$, $\lambda^*$, the system of m + n equations resulting from (1.5a) and (1.5b) is solved simultaneously in the CLM. The larger m and n are, the more computational effort and time are required to solve for the minimum values. Therefore, finding a method that reduces the computational effort and time demanded for finding the solutions is interesting and important, since it can have a consequent positive impact on solving many real-world convex (quadratic) problems occurring in many disciplines [4]. This, indeed, is the motivation behind the pursuit of a Modified Lagrange Method (MLM).
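For concreteness, the CLM task can be written out in code. The sketch below is a minimal numpy illustration (the helper name clm_solve and the sample data are illustrative choices, not from the paper): for the quadratic problem (1.1), conditions (1.5a) and (1.5b) reduce to one coupled linear system in all n + m unknowns, which must be solved at once.

```python
import numpy as np

# CLM for min c^T x + (1/2) x^T Q x  subject to  A x = b.
# With L = f - lambda^T g, conditions (1.5a)-(1.5b) give the coupled system
#   [ Q  -A^T ] [ x      ]   [ -c ]
#   [ A   0   ] [ lambda ] = [  b ]
# in all n + m unknowns simultaneously.
def clm_solve(Q, A, c, b):
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, -A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]                      # x*, lambda*

# Sample data: f(x) = x1^2 + x2^2 + x3^2 (Q = 2I, c = 0) with the two
# linear constraints of Problem 2 in Section 3.
Q = 2 * np.eye(3)
c = np.zeros(3)
A = np.array([[2.0, 1.0, 2.0], [5.0, 5.0, 7.0]])
b = np.array([9.0, 29.0])
x_star, lam_star = clm_solve(Q, A, c, b)
print(x_star, lam_star)                          # (2, 1, 2) and (2, 0)
```

The MLM developed in Section 2 avoids assembling this coupled system altogether.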

1.2. Some Previous Works

A few related previous works exist in the research area. A Lagrange relaxation-based decomposition algorithm was developed by [11] for an integrated off-shore oil production planning optimization problem; the algorithm was used to provide compact bounds, replacing the large-scale problem with moderate-scale ones and thereby allowing several single-batch subproblems to be solved simultaneously. An adaptive Augmented Lagrangian method for large-scale constrained optimization problems was developed by [12], using a novel adaptive updating technique for the penalty parameters motivated by techniques for exact penalty methods. A modified augmented Lagrangian multiplier method was proposed by [13], who used an outer iteration approach in which the Lagrangian multipliers and various penalty parameters were updated and used in iterations involving constrained problems with simple bounds. Earlier, a class of Augmented Lagrangian methods for constrained optimization problems had been developed (see [14] [15] [16]) which are similar to the Penalty methods and replace a constrained problem with a series of unconstrained ones, adding a penalty term to the objective function. Komzsik and Chiang [17] investigated the effects of the Lagrange multiplier method in solving large-scale problems in a parallel environment.

In the next section, the philosophy and methodology behind the MLM are presented and discussed. The subsequent section assesses the MLM, in this first instance, in terms of its ability to produce solutions for convex quadratic optimization problems as the CLM does, using selected test problems. The paper concludes with a section that elucidates the findings of the study and outlines prospects and consequences of the MLM for further research.

2. The Modified Lagrange Method

The Classical Lagrange Method (CLM) is modified in this section with the intention of improving it. The modification centers on the evolution of a new approach premised on a departure from the classical and conventional thinking on finding the solution of a convex quadratic nonlinear equality constrained optimization problem. While the CLM solves directly the two systems of Equations (1.5a) and (1.5b) resulting from the application of the first order necessary optimality conditions to (1.4), the Modified Lagrange Method (MLM) avoids this path and rather uses only one of the two sets of equations (i.e., (1.5a)) to develop a new approach to solving the problem. This is considered a major innovation that simplifies the solution process. The rest of the section develops the concept behind the MLM.

Consider the convex quadratic optimization problem given by (1.3). A scalar form of the Lagrangian function (1.4), which converts problem (1.3) into an equivalent unconstrained problem and which we seek to minimize over x only, is:

$$\min_x L(x, \lambda) = f(x) - \sum_{i=1}^{m} \lambda_i g_i(x), \quad x \in \mathbb{R}^n, \; \lambda_i \in \mathbb{R}, \; i = 1, 2, \dots, m \tag{2.1}$$

It is clear that if there exists some $x^* \in \mathbb{R}^n$ for which the second term of Equation (2.1) vanishes for all $g_i(x)$, $i = 1, 2, \dots, m$, then the constraint equations of (1.3) are satisfied. At such a point, the functions $L(x, \lambda)$ and $f(x)$ become coincident, regardless of λ, and so $x^*$ is a critical vector value that minimizes the function $L(x, \lambda)$ and therefore $f(x)$. In other words, the Lagrangian function is always equal to the objective function at any critical point $x^*$ for which $g_i(x^*) = 0$ for all i. It turns out, under the CLM, that such a point $x^*$ is the desired optimal (or minimum) solution of (2.1) or (1.3). This observation provides four (4) important facts that serve as bases for the evolution of the MLM. They are as follows: 1) At $x^* \in \mathbb{R}^n$, $g_i(x^*) = 0$ for all i, regardless of λ; 2) this indicates that we may seek to find $x^*$ independently of λ; 3) since $g_i(x^*) = 0$ identically for all i and does not vary from zero at $x^*$, it follows that $\partial g_i(x^*)/\partial x_j = 0$ for all j; and 4) by independently finding $x^*$ we may independently find $\lambda^*$, the optimal value of λ. These four observations derived from the CLM mean that we can instead find the optimal solutions $x^*$, $\lambda^*$ of problem (1.3) and problem (2.1) by two independent processes, the first finding $x^*$ and the second subsequently finding $\lambda^*$. This, in essence, is the philosophy or conceptual framework of the new method called the MLM.

By the CLM, the first order necessary optimality conditions (in scalar form) of (2.1), which are also sufficient for the minimum solutions $x^*$, $\lambda^*$, are given by (2.2a) and (2.2b):

$$\frac{\partial f(x)}{\partial x_j} - \sum_{i=1}^{m} \lambda_i \frac{\partial g_i(x)}{\partial x_j} = 0, \quad j = 1, 2, \dots, n \tag{2.2a}$$

$$\frac{\partial f(x)}{\partial \lambda_i} - g_i(x) = 0, \quad i = 1, 2, \dots, m \tag{2.2b}$$

It is noted that (2.2a) involves n equations in the n unknown variables of x and the m unknown variables of λ. Therefore, if we are able to eliminate $\lambda_i$ ($i = 1, 2, \dots, m$) from (2.2a), then we can solve (2.2a) independently of (2.2b). Secondly, we observe, on the other hand, that (2.2b) cannot be solved independently of (2.2a), since it has m equations in the n unknown variables of x, and for nontrivial cases of problem (1.3), n > m. Thirdly, it is important to note that the solution $x^*$ which satisfies (2.2a) would not violate (2.2b), since the first and second terms of (2.2b) vanish for all $i \in \{1, 2, \dots, m\}$ at $x^*$.

2.1. A Novel Process for Finding x*

With the observations made, therefore, we proceed to eliminate $\lambda_i$ ($i \in \{1, 2, \dots, m\}$) by some means in (2.2a), so that we can obtain equations involving only the n unknown variables of x. Consider the system (2.2a) in the expanded form given by Equation (2.3).

$$\frac{\partial f(x)}{\partial x_j} - \lambda_1 \frac{\partial g_1(x)}{\partial x_j} - \lambda_2 \frac{\partial g_2(x)}{\partial x_j} - \cdots - \lambda_{m-1} \frac{\partial g_{m-1}(x)}{\partial x_j} - \lambda_m \frac{\partial g_m(x)}{\partial x_j} = 0, \quad j = 1, 2, \dots, n. \tag{2.3}$$

As was noted earlier, $g_i(x)$ and $\partial g_i(x)/\partial x_j$ both vanish at $x^*$ for any i and j, regardless of $\lambda_i$. Therefore $\lambda_i$ can be viewed as arbitrary for all i, as far as finding $x^*$ from (2.2a), and therefore from (2.3), is concerned. This means we can choose $\lambda_i$ arbitrarily in our quest to solve (2.2a) independently of (2.2b). By choosing $\lambda_i \neq 0$ for some $i \in \{1, 2, \dots, m\}$ and $\lambda_s = 0$ for all $s \neq i$, $s \in \{1, 2, \dots, m\}$, we obtain the result in (2.4).

$$\frac{\partial f(x)}{\partial x_j} = \lambda_i \frac{\partial g_i(x)}{\partial x_j}, \quad j = 1, 2, \dots, n, \quad i \in \{1, 2, \dots, m\} \tag{2.4}$$

From (2.4), we can eliminate $\lambda_i$ by taking ratios of the jth and the (j+1)th equations ($j = 1, 2, \dots, n-1$), leading to:

$$\frac{\partial f(x)/\partial x_1}{\partial f(x)/\partial x_2} = \frac{\partial g_i(x)/\partial x_1}{\partial g_i(x)/\partial x_2}, \quad \frac{\partial f(x)/\partial x_2}{\partial f(x)/\partial x_3} = \frac{\partial g_i(x)/\partial x_2}{\partial g_i(x)/\partial x_3}, \quad \dots, \quad \frac{\partial f(x)/\partial x_{n-1}}{\partial f(x)/\partial x_n} = \frac{\partial g_i(x)/\partial x_{n-1}}{\partial g_i(x)/\partial x_n}$$

which is generalized as (2.5):

$$\frac{\partial f(x)/\partial x_j}{\partial f(x)/\partial x_{j+1}} = \frac{\partial g_i(x)/\partial x_j}{\partial g_i(x)/\partial x_{j+1}}, \quad j = 1, 2, \dots, n-1, \quad i \in \{1, 2, \dots, m\} \tag{2.5}$$

The result in (2.5) leads to (2.6):

$$\frac{\partial f(x)}{\partial x_j} \frac{\partial g_i(x)}{\partial x_{j+1}} - \frac{\partial f(x)}{\partial x_{j+1}} \frac{\partial g_i(x)}{\partial x_j} = 0, \quad j = 1, 2, \dots, n-1; \; i \in \{1, 2, \dots, m\} \tag{2.6}$$

The result in (2.6) produces, for each i, n − 1 equations, which we refer to as Subsidiary Constraint Equations (SCE). The SCE become additional equations available alongside the m original equations given by $g_i(x) = 0$ of problem (1.3) for finding $x^*$. Together, they provide m + n − 1 equations in x only and are therefore simpler to solve. They are generally of sufficient number for finding $x^*$. Occasionally, through numerical experimental work, the authors have observed that some of the n − 1 SCE produced from (2.6) may be redundant equations, and depending on how many are redundant, there may not be a sufficient number of equations for finding $x^*$. To deal with this phenomenon, the authors have further observed that, apart from taking ratios consecutively as in (2.5) for any $i \in \{1, 2, \dots, m\}$, the ratios can instead be taken in any arbitrary order, which means that there is a large (indeed innumerable) variety of possible forms of the SCE that can be constructed. For the purpose of this work, we produce another ordering of the ratios that leads to other forms of the SCE which appear to avoid the redundancy phenomenon, at least for the current work. The ratios are given by (2.7):
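Since (2.6) is a purely mechanical cross-multiplication of first derivatives, the SCE can be generated automatically with a computer algebra system. The following sympy sketch is an illustration added for this presentation (the helper name sce_consecutive is illustrative, not part of the method), shown on Problem 2 of Section 3; the solver receives the n − 1 SCE together with the m original constraints, i.e., the m + n − 1 equations in x only described above.

```python
import sympy as sp

def sce_consecutive(f, g, xs):
    """Subsidiary Constraint Equations per (2.6):
    df/dx_j * dg/dx_{j+1} - df/dx_{j+1} * dg/dx_j = 0,  j = 1..n-1."""
    return [sp.diff(f, xs[j]) * sp.diff(g, xs[j + 1])
            - sp.diff(f, xs[j + 1]) * sp.diff(g, xs[j])
            for j in range(len(xs) - 1)]

# Problem 2 of Section 3 as a worked check.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
xs = [x1, x2, x3]
f = x1**2 + x2**2 + x3**2
g1 = 2*x1 + x2 + 2*x3 - 9
g2 = 5*x1 + 5*x2 + 7*x3 - 29

eqs = sce_consecutive(f, g1, xs) + [g1, g2]   # SCE from g1 + both constraints
print(sp.solve(eqs, xs, dict=True))           # [{x1: 2, x2: 1, x3: 2}]
```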

$$\frac{\partial f(x)/\partial x_j}{\partial f(x)/\partial x_{j+1}} = \frac{\partial g_i(x)/\partial x_j}{\partial g_i(x)/\partial x_{j+1}}, \quad \frac{\partial f(x)/\partial x_j}{\partial f(x)/\partial x_{j+2}} = \frac{\partial g_i(x)/\partial x_j}{\partial g_i(x)/\partial x_{j+2}}, \quad \dots, \quad \frac{\partial f(x)/\partial x_j}{\partial f(x)/\partial x_n} = \frac{\partial g_i(x)/\partial x_j}{\partial g_i(x)/\partial x_n} \tag{2.7}$$

where $j \in \{1, 2, \dots, n-1\}$ and $i \in \{1, 2, \dots, m\}$. The result in (2.7) is generalized as (2.8).

$$\frac{\partial f(x)/\partial x_j}{\partial f(x)/\partial x_{j+s}} = \frac{\partial g_i(x)/\partial x_j}{\partial g_i(x)/\partial x_{j+s}}, \quad j \in \{1, 2, \dots, n-1\}, \; s = 1, 2, \dots, n-j, \; i \in \{1, 2, \dots, m\} \tag{2.8}$$

It follows from (2.8) that:

$$\frac{\partial f(x)}{\partial x_j} \frac{\partial g_i(x)}{\partial x_{j+s}} - \frac{\partial f(x)}{\partial x_{j+s}} \frac{\partial g_i(x)}{\partial x_j} = 0, \quad j \in \{1, 2, \dots, n-1\}, \; s = 1, 2, \dots, n-j, \; i \in \{1, 2, \dots, m\} \tag{2.9}$$

The authors have observed further (through numerical experimental work) that taking aggregates of the set of SCE obtained from (2.9) is even more effective at avoiding the redundancy phenomenon. Therefore, summing from the kth SCE of (2.9) (with k = j), involving the equations $k, k+1, \dots, n-1, n$, we obtain:

$$\frac{\partial f(x)}{\partial x_k}\left(\frac{\partial g_i(x)}{\partial x_{k+1}} + \frac{\partial g_i(x)}{\partial x_{k+2}} + \cdots + \frac{\partial g_i(x)}{\partial x_n}\right) - \left(\frac{\partial f(x)}{\partial x_{k+1}} + \frac{\partial f(x)}{\partial x_{k+2}} + \cdots + \frac{\partial f(x)}{\partial x_n}\right)\frac{\partial g_i(x)}{\partial x_k} = 0$$

which is the same as:

$$\frac{\partial f(x)}{\partial x_k}\sum_{s=k+1}^{n}\frac{\partial g_i(x)}{\partial x_s} - \frac{\partial g_i(x)}{\partial x_k}\sum_{s=k+1}^{n}\frac{\partial f(x)}{\partial x_s} = 0, \quad 1 \le k \le n-2 \tag{2.10}$$

It is noted that various sums, involving two (2) or more equations, may be obtained from (2.9) or (2.10), indicating further the variety of the forms of aggregates of the SCE that may be created from (2.9).
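A sketch of the aggregated construction follows; it is an illustration added for this presentation, implementing (2.10) with the full tail sum $s = k+1, \dots, n$ (the partial sums used in some cases of Section 3 are entirely analogous), and the helper name sce_aggregate is illustrative only.

```python
import sympy as sp

def sce_aggregate(f, g, xs, k):
    """Aggregated SCE per (2.10), with k given 0-based here:
    df/dx_k * sum_{s>k} dg/dx_s - dg/dx_k * sum_{s>k} df/dx_s = 0."""
    tail = xs[k + 1:]
    return (sp.diff(f, xs[k]) * sum(sp.diff(g, s) for s in tail)
            - sp.diff(g, xs[k]) * sum(sp.diff(f, s) for s in tail))

# Problem 3 of Section 3: the first full-tail aggregate reproduces (3.3f).
x1, x2, x3, x4 = sp.symbols('x1:5', real=True)
xs = [x1, x2, x3, x4]
f = x1**2 + x2**2 + x3**2 + x4**2
g1 = x1 + x2 + x3 + x4 - 10
print(sp.factor(sce_aggregate(f, g1, xs, 0)))   # 2*(3*x1 - x2 - x3 - x4)
```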

The following observations are made about the SCE in general: 1) they are devoid of the unknown parameters $\lambda_i$ and produce equations in the unknown variables of x only; 2) they are derived from the first order necessary optimality condition given by (2.2a) only, which involves derivatives that are easy to compute, due to the convex nature of the functions; and 3) they are derived under the implicit assumption of the existence of the minimum solution. Therefore, they embody the minimum point $x^*$. The resulting equations, in conjunction with the original constraint equations of the problem, can therefore independently be used to find the minimum solution $x^*$. The corresponding minimum objective function value $f(x^*)$ then follows immediately.

2.2. Finding the Parameter λ

The optimal value of λ can also be obtained independently, from (2.2a), which is given by:

$$\frac{\partial f(x)}{\partial x_j} - \sum_{i=1}^{m}\lambda_i\frac{\partial g_i(x)}{\partial x_j} = 0, \quad j = 1, 2, \dots, n$$

By obtaining the required derivatives of f and $g_i$ and evaluating them at $x^*$, we obtain n linear equations involving only $\lambda_i$, $i = 1, 2, \dots, m$, which can therefore be solved independently for $\lambda^*$.
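In code, this second stage is a single linear solve. The sketch below is an illustration added for this presentation (the helper name multipliers_at is illustrative only); it evaluates the derivatives in (2.2a) at $x^*$ and solves the resulting overdetermined but consistent system by least squares.

```python
import numpy as np
import sympy as sp

def multipliers_at(f, gs, xs, x_star):
    """Evaluate (2.2a) at x* and solve grad f(x*) = sum_i lambda_i grad g_i(x*)
    for lambda: n equations in only m unknowns, hence least squares."""
    subs = dict(zip(xs, x_star))
    grad_f = np.array([float(sp.diff(f, v).subs(subs)) for v in xs])
    G = np.array([[float(sp.diff(g, v).subs(subs)) for g in gs] for v in xs])
    lam, *_ = np.linalg.lstsq(G, grad_f, rcond=None)
    return lam

# Problem 2 of Section 3: x* = (2, 1, 2) gives lambda* = (2, 0).
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 + x2**2 + x3**2
gs = [2*x1 + x2 + 2*x3 - 9, 5*x1 + 5*x2 + 7*x3 - 29]
print(multipliers_at(f, gs, [x1, x2, x3], [2, 1, 2]))   # [2. 0.]
```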

2.3. The Solution Steps of the MLM

The solution process of the MLM is characterized by the following five (5) steps (a worked sketch in code follows the list):

• Step 1: Given problem (1.3), construct the Lagrangian function as in (2.1) and go to Step 2;

• Step 2: Construct the first order necessary optimality condition as in (2.2a), and construct (2.4), then go to Step 3;

• Step 3: Generate SCE as in (2.6), (2.9) or (2.10) and go to Step 4;

• Step 4: Determine whether or not the set of SCE together with the original problem constraints is sufficient to find $x^*$. If yes, then solve for $x^*$ using any suitable numerical algorithm; otherwise, form alternative SCE using other forms of ratios or aggregates of equations such as (2.10) to solve for $x^*$; then go to Step 5;

• Step 5: Substitute $x^*$ in (2.2a) and solve for $\lambda^*$, then stop.
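As a worked sketch of the five steps, the fragment below runs them end to end on Problem 7 of Section 3, the simplest of the test problems; it is a minimal illustration added for this presentation, assuming the convention $L = f - \lambda g$ of (1.4).

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

f = x1*x2                       # Problem 7 objective
g = 2*x1 + 2*x2 - 20            # Problem 7 constraint

# Steps 1-2: Lagrangian L = f - lambda*g and condition (2.2a).
L = f - lam*g
stationarity = [sp.diff(L, x1), sp.diff(L, x2)]

# Step 3: the single SCE per (2.6); lambda is already eliminated.
sce = sp.diff(f, x1)*sp.diff(g, x2) - sp.diff(f, x2)*sp.diff(g, x1)

# Step 4: solve the SCE together with the original constraint for x*.
x_star = sp.solve([sce, g], [x1, x2], dict=True)[0]
print(x_star)                                            # {x1: 5, x2: 5}

# Step 5: substitute x* into (2.2a) and solve for lambda*.
print(sp.solve([e.subs(x_star) for e in stationarity], lam))   # [5/2]
```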

We conclude the presentation of the MLM with the observation that, unlike the CLM, it does not require the first order necessary optimality condition given by (2.2b) in the solution of a problem (as given by (1.3)). Nevertheless, the solution $x^*$ obtained would not violate (2.2b). This, in addition to the novel procedures for finding $x^*$ and $\lambda^*$, constitutes a major simplification of the process of solving a given convex quadratic optimization problem. The next section provides numerical results obtained by hand computations only, involving small sized problems, as a first stage evaluation of the MLM. In a subsequent work to be given in another paper, the new method will be evaluated on fairly large-scale problems and its relative performance against the CLM will be assessed, to further demonstrate its capabilities as a viable method for solving equality constrained convex quadratic optimization problems.

3. Numerical Results

The new method (MLM) is applied to selected convex quadratic optimization problems to show that it produces the required solutions as the CLM does. In this first stage of testing the method, we use small sized problems for which manual (or hand) computation is sufficient to find the solution. In the subsequent paper, where the method will be assessed against the CLM in terms of their relative performances in solving larger sized problems, the use of software will be necessary.

Seven problems, selected from the literature and for which solutions by the CLM have already been obtained, are solved with the MLM. The solution of each problem by the MLM is obtained for several cases of the SCE generated using (2.6) and (2.10), together with the original problem constraints, to demonstrate the variety of ways of solving problems using the MLM.

3.1. Problem 1

This problem is extracted from Schittkowski [18] , given by:

$$\min\; f(x) = (x_1 - 1)^2 + (x_2 - x_3)^2 + (x_4 - x_5)^2 \tag{3.1a}$$

Subject to:

$$g_1(x) = x_1 + x_2 + x_3 + x_4 + x_5 - 5 = 0 \tag{3.1b}$$

$$g_2(x) = x_3 - 2(x_4 + x_5) + 3 = 0 \tag{3.1c}$$

The solution by the CLM (see [18] ) is:

$$x_1 = x_2 = x_3 = x_4 = x_5 = 1, \quad f(x^*) = 0, \quad \lambda_1 = 0, \; \lambda_2 = 0$$

Solution by the MLM

Case 1: Solving by (2.6) and from (3.1a) and (3.1b), we obtain the SCE given by (3.1d) to (3.1g).

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_1(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 - x_2 + x_3 = 1 \tag{3.1d}$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_1(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 = x_3 \tag{3.1e}$$

$$\frac{\partial f(x)}{\partial x_3}\frac{\partial g_1(x)}{\partial x_4} - \frac{\partial f(x)}{\partial x_4}\frac{\partial g_1(x)}{\partial x_3} = 0 \;\Rightarrow\; -x_2 + x_3 - x_4 + x_5 = 0 \tag{3.1f}$$

$$\frac{\partial f(x)}{\partial x_4}\frac{\partial g_1(x)}{\partial x_5} - \frac{\partial f(x)}{\partial x_5}\frac{\partial g_1(x)}{\partial x_4} = 0 \;\Rightarrow\; x_4 = x_5 \tag{3.1g}$$

Notice that the equations given by (3.1e) and (3.1g) make (3.1f) redundant. However, substituting (3.1e) in (3.1d) gives $x_1 = 1$. Substituting $x_1 = 1$, (3.1e), and (3.1g) in the constraint Equations (3.1b) and (3.1c), then multiplying the resulting expression from (3.1b) by 2 and adding it to the resulting equation from (3.1c), yields $x_3 = 1$ and therefore $x_2 = 1$. Substituting $x_1 = 1$, $x_2 = x_3 = 1$ into (3.1b), we obtain $x_4 = 1 = x_5$, which gives the solution $x_1 = x_2 = x_3 = x_4 = x_5 = 1$, satisfying both constraint equations $g_1(x) = 0$ and $g_2(x) = 0$. The objective function value is $f(x^*) = (1-1)^2 + (1-1)^2 + (1-1)^2 = 0$. The minimum values of the multipliers $\lambda_1$ and $\lambda_2$ are obtained by solving the system of Equations (3.1h) given by:

$$\begin{bmatrix} 2x_1 - 2 \\ 2x_2 - 2x_3 \\ -2x_2 + 2x_3 \\ 2x_4 - 2x_5 \\ -2x_4 + 2x_5 \end{bmatrix} - \lambda_1\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} - \lambda_2\begin{bmatrix} 0 \\ 0 \\ 1 \\ -2 \\ -2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{3.1h}$$

Substituting the values of $x_i$ ($i = 1, 2, 3, 4, 5$) into (3.1h) produces the results $\lambda_1 = \lambda_2 = 0$. Hence the minimum solution for Problem 1 using the MLM is

$$x_1 = x_2 = x_3 = x_4 = x_5 = 1, \quad \lambda_1 = \lambda_2 = 0 \quad \text{and} \quad f(x^*) = 0.$$
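The hand computation of Case 1 can be cross-checked mechanically. The sketch below is an illustration added for this presentation; it generates the SCE (3.1d) to (3.1g) from (2.6) and solves them together with the two constraints in one call, the redundant equation (3.1f) being harmless to the solver.

```python
import sympy as sp

x1, x2, x3, x4, x5 = xs = sp.symbols('x1:6', real=True)
f  = (x1 - 1)**2 + (x2 - x3)**2 + (x4 - x5)**2
g1 = x1 + x2 + x3 + x4 + x5 - 5
g2 = x3 - 2*(x4 + x5) + 3

# SCE (3.1d)-(3.1g): consecutive cross-products of (2.6) built from g1.
sce = [sp.diff(f, xs[j])*sp.diff(g1, xs[j + 1])
       - sp.diff(f, xs[j + 1])*sp.diff(g1, xs[j]) for j in range(4)]

print(sp.solve(sce + [g1, g2], xs, dict=True))
# [{x1: 1, x2: 1, x3: 1, x4: 1, x5: 1}]
```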

Case 2: Alternatively, we use aggregates of SCE formed as in (2.9) or (2.10) to generate equations and use in conjunction with the constraint equations to solve Problem 1, as follows:

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; x_1 = 1$$

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; 3x_1 - x_4 + x_5 = 3$$

$$\frac{\partial f(x)}{\partial x_2}\left(\frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right) - \frac{\partial g_1(x)}{\partial x_2}\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; 3x_1 - 3x_3 - x_4 + x_5 = 0$$

$$\frac{\partial f(x)}{\partial x_3}\left(\frac{\partial g_1(x)}{\partial x_4} + \frac{\partial g_1(x)}{\partial x_5}\right) - \frac{\partial g_1(x)}{\partial x_3}\left(\frac{\partial f(x)}{\partial x_4} + \frac{\partial f(x)}{\partial x_5}\right) = 0 \;\Rightarrow\; x_2 = x_3$$

$$\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right)\frac{\partial g_1(x)}{\partial x_5} - \left(\frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right)\frac{\partial f(x)}{\partial x_5} = 0 \;\Rightarrow\; x_2 - x_3 - 3x_4 + 3x_5 = 0$$

Subtracting $3x_1 - 3x_3 - x_4 + x_5 = 0$ from $3x_1 - x_4 + x_5 = 3$, the result is $x_3 = 1$. Substituting $x_3 = 1$ in $x_2 = x_3$, the result is $x_2 = 1$. Substituting $x_1 = x_2 = x_3 = 1$ in (3.1b), the resulting equation is $x_4 + x_5 = 2$. Substituting $x_2 = x_3 = 1$ in $x_2 - x_3 - 3x_4 + 3x_5 = 0$, the resulting equation is $-x_4 + x_5 = 0$. Solving $x_4 + x_5 = 2$ and $-x_4 + x_5 = 0$ results in $x_4 = x_5 = 1$.

Since the values $x_1 = x_2 = x_3 = x_4 = x_5 = 1$ satisfy both constraints (3.1b) and (3.1c), the objective function value is 0, as obtained previously. The values $\lambda_1 = \lambda_2 = 0$ follow accordingly from (3.1h).

Case 3: The required number of SCE in this case are generated with respect to f and g 2 using (2.6) leading to the following:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_2(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_1} = 0 \;\Rightarrow\; 0 = 0$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 = x_3,$$

$$\frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_4} - \frac{\partial f(x)}{\partial x_4}\frac{\partial g_2(x)}{\partial x_3} = 0 \;\Rightarrow\; 2x_2 - 2x_3 - x_4 + x_5 = 0,$$

$$\frac{\partial f(x)}{\partial x_4}\frac{\partial g_2(x)}{\partial x_5} - \frac{\partial f(x)}{\partial x_5}\frac{\partial g_2(x)}{\partial x_4} = 0 \;\Rightarrow\; x_4 = x_5.$$

The resulting SCE generated above, together with the constraint Equations (3.1b) and (3.1c) fail to be of sufficient number to enable solving Problem 1, due to some redundant equations obtained. Therefore, we proceed to Case 4, where aggregates of the SCE are constructed to enable solving of Problem 1.

Case 4: The aggregates of SCE produced according to (2.10) yield the following equations:


$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; x_1 = 1$$

$$\frac{\partial f(x)}{\partial x_3}\left(\frac{\partial g_1(x)}{\partial x_4} + \frac{\partial g_1(x)}{\partial x_5}\right) - \frac{\partial g_1(x)}{\partial x_3}\left(\frac{\partial f(x)}{\partial x_4} + \frac{\partial f(x)}{\partial x_5}\right) = 0 \;\Rightarrow\; x_2 - x_3 = 0$$

$$\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right)\frac{\partial g_1(x)}{\partial x_5} - \left(\frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right)\frac{\partial f(x)}{\partial x_5} = 0 \;\Rightarrow\; x_2 - x_3 - 3x_4 + 3x_5 = 0$$

Substituting $x_1 = 1$ in (3.1b), multiplying the resulting equation by 2 and adding that to (3.1c) gives $2x_2 + 3x_3 = 5$. Substituting $x_2 = x_3$ in $2x_2 + 3x_3 = 5$ results in $x_3 = 1$ and so $x_2 = 1$. Substituting $x_3 = 1$ in (3.1c) gives $x_4 + x_5 = 2$. Substituting $x_2 = x_3 = 1$ in $x_2 - x_3 - 3x_4 + 3x_5 = 0$ gives $-x_4 + x_5 = 0$. Solving $x_4 + x_5 = 2$ and $-x_4 + x_5 = 0$ simultaneously results in $x_4 = x_5 = 1$. Therefore, we have $x_1 = x_2 = x_3 = x_4 = x_5 = 1$ as was obtained in earlier cases. The rest of the results follow.

3.2. Problem 2

This is an extract from [19] and given by:

$$\min\; f(x) = x_1^2 + x_2^2 + x_3^2 \tag{3.2a}$$

subject to

$$g_1(x) = 2x_1 + x_2 + 2x_3 - 9 = 0 \tag{3.2b}$$

$$g_2(x) = 5x_1 + 5x_2 + 7x_3 - 29 = 0 \tag{3.2c}$$

The solution obtained by CLM as in [19] , is:

$$x_1 = 2, \; x_2 = 1, \; x_3 = 2, \; \lambda_1 = 2 \;\text{and}\; \lambda_2 = 0, \; f(x^*) = 9$$

Solution of Problem 2 by the MLM

Case 1: The nonredundant SCE produced according to (2.6) and using g 1 ( x ) yield:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_1(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 = 2x_2,$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_1(x)}{\partial x_2} = 0 \;\Rightarrow\; x_3 = 2x_2.$$

Substituting $x_1 = 2x_2$ and $x_3 = 2x_2$ in (3.2b) results in $x_2 = 1$. From $x_1 = 2x_2$ and $x_3 = 2x_2$ we obtain $x_1 = 2$ and $x_3 = 2$. The solution is therefore $x_1 = 2$, $x_2 = 1$, $x_3 = 2$, which satisfies both constraints. The objective function value is $f(x^*) = 2^2 + 1^2 + 2^2 = 9$. We solve for $\lambda_1$ and $\lambda_2$ as follows:

$$\nabla f(x) - \lambda_1 \nabla g_1(x) - \lambda_2 \nabla g_2(x) = 0$$

$$\begin{bmatrix} 2x_1 \\ 2x_2 \\ 2x_3 \end{bmatrix} - \lambda_1\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} - \lambda_2\begin{bmatrix} 5 \\ 5 \\ 7 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

This system of equations yields the solutions λ 1 = 2 , λ 2 = 0 .
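An independent numerical check is also possible with a general-purpose constrained solver. The sketch below is an illustration added for this presentation (the starting point is arbitrary); it applies scipy's SLSQP method to Problem 2 and recovers the same point.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + x[2]**2
cons = [{'type': 'eq', 'fun': lambda x: 2*x[0] + x[1] + 2*x[2] - 9},
        {'type': 'eq', 'fun': lambda x: 5*x[0] + 5*x[1] + 7*x[2] - 29}]

res = minimize(f, x0=np.zeros(3), method='SLSQP', constraints=cons)
print(res.x, res.fun)        # approximately (2, 1, 2) and 9
```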

Case 2: Aggregates of SCE are this time produced according to (2.10), involving the constraint $g_1(x)$, as follows:

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; 3x_1 - 2x_2 - 2x_3 = 0$$

$$\frac{\partial g_1(x)}{\partial x_3}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_1}\right) - \frac{\partial f(x)}{\partial x_3}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_1}\right) = 0 \;\Rightarrow\; 2x_1 + 2x_2 - 3x_3 = 0$$

Solving $3x_1 - 2x_2 - 2x_3 = 0$ simultaneously with (3.2b) and (3.2c) results in $x_1 = 2$, $x_2 = 1$, $x_3 = 2$, which is the required solution. The corresponding values of the parameters $\lambda_1$ and $\lambda_2$ and the objective function follow accordingly. Alternatively, we can obtain the same solution by solving simultaneously $2x_1 + 2x_2 - 3x_3 = 0$, (3.2b) and (3.2c).

Case 3: In this case, SCE are produced using (2.6), but this time using g 2 ( x ) , leading to the following equations:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_2(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 = x_2,$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 = \frac{5}{7}x_3.$$

Substituting $x_1 = \frac{5}{7}x_3$ and $x_2 = \frac{5}{7}x_3$ in (3.2c), the result is $x_3 = \frac{203}{99}$. From $x_1 = \frac{5}{7}x_3$ and $x_2 = \frac{5}{7}x_3$, we obtain $x_1 = x_2 = \frac{145}{99}$. Since these values do not satisfy both constraints, they are discarded. Alternatively, substituting $x_1 = x_2$ in both constraints and solving the resulting equations simultaneously results in $x_3 = -3$, $x_2 = 5$. Substituting these values in (3.2c) gives $x_1 = 5$.

The values $x_1 = 5$, $x_2 = 5$, $x_3 = -3$ satisfy both constraints and yield a corresponding objective function value calculated as $f(x^*) = 59$. This value is higher than the value $f(x^*) = 9$ obtained earlier; and so the current solution, though feasible, fails to be the minimum solution.

Case 4: In this case, aggregates of the SCE formed according to (2.10) and using g 2 ( x ) produce the following equations:

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_2(x)}{\partial x_2} + \frac{\partial g_2(x)}{\partial x_3}\right) - \frac{\partial g_2(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; 7x_1 - 5x_2 - 5x_3 = 0$$

$$\frac{\partial g_2(x)}{\partial x_3}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_1}\right) - \frac{\partial f(x)}{\partial x_3}\left(\frac{\partial g_2(x)}{\partial x_2} + \frac{\partial g_2(x)}{\partial x_1}\right) = 0 \;\Rightarrow\; 3x_1 - 2x_2 - 2x_3 = 0$$

Solving (3.2b) and (3.2c) simultaneously with the first SCE results in the solution $x_1 = 2.1154$, $x_2 = 1.1538$, $x_3 = 1.8077$, which satisfies both constraints. The corresponding objective function value is $f(x^*) = 9.0740$. Alternatively, solving (3.2b), (3.2c) and the second equation simultaneously gives $x_1 = 2.0826$, $x_2 = 1.1101$, $x_3 = 1.8624$, which also satisfies both constraints. The corresponding objective function value is $f(x^*) = 9.0381$, which is marginally different from the value of 9 obtained previously.

The results obtained under this case reveal interesting outcomes that require further investigation to understand the phenomena responsible. We observe, nevertheless, that the solution (2.1154, 1.1538, 1.8077) compares well with (2, 1, 2), which is the required solution; the objective function values of 9.0740 and 9.0381 also compare well with 9, the required minimum value.

3.3. Problem 3

The problem is taken from [20] and given by:

$$\min\; f(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 \tag{3.3a}$$

subject to

$$g_1(x) = x_1 + x_2 + x_3 + x_4 - 10 = 0 \tag{3.3b}$$

$$g_2(x) = x_1 - x_2 + x_3 + 3x_4 - 6 = 0 \tag{3.3c}$$

The solution by the CLM is:

$$x_1 = \frac{5}{2}, \; x_2 = \frac{7}{2}, \; x_3 = \frac{5}{2}, \; x_4 = \frac{3}{2}, \; f = 27, \; \lambda_1 = 3, \; \lambda_2 = \frac{1}{2}.$$

Solution of Problem 3 by the MLM

Case 1: SCE according to (2.6) are generated involving g 1 ( x ) as follows:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_1(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 - x_2 = 0,$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_1(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 - x_3 = 0,$$

$$\frac{\partial f(x)}{\partial x_3}\frac{\partial g_1(x)}{\partial x_4} - \frac{\partial f(x)}{\partial x_4}\frac{\partial g_1(x)}{\partial x_3} = 0 \;\Rightarrow\; x_3 - x_4 = 0.$$

The equations $x_1 - x_2 = 0 \Rightarrow x_1 = x_2$ and $x_2 - x_3 = 0 \Rightarrow x_2 = x_3$ give $x_1 = x_2 = x_3$. Substituting $x_1 = x_3$ in (3.3b) and (3.3c) gives the equations $x_2 + 2x_3 + x_4 = 10$ and $-x_2 + 2x_3 + 3x_4 = 6$. Adding them gives $x_3 + x_4 = 4$. Also, adding $x_1 - x_2 = 0$ to $x_3 - x_4 = 0$ and subtracting the result from (3.3c) yields $x_4 = \frac{3}{2}$. Putting $x_4 = \frac{3}{2}$ in $x_3 + x_4 = 4$ gives $x_3 = \frac{5}{2}$, and putting the results for $x_3$ and $x_4$ in $-x_2 + 2x_3 + 3x_4 = 6$ gives $x_2 = \frac{7}{2}$. Finally, substituting $x_2 = \frac{7}{2}$, $x_3 = \frac{5}{2}$ and $x_4 = \frac{3}{2}$ in (3.3b) or (3.3c) results in $x_1 = \frac{5}{2}$. The values $x_1 = \frac{5}{2}$, $x_2 = \frac{7}{2}$, $x_3 = \frac{5}{2}$, $x_4 = \frac{3}{2}$ satisfy both constraints (3.3b) and (3.3c), and the corresponding objective function value is $f(x^*) = 27$. The values of $\lambda_1$, $\lambda_2$ are obtained from the equation $\nabla f(x) - \lambda_1 \nabla g_1(x) - \lambda_2 \nabla g_2(x) = 0$, which is the system:

$$\begin{bmatrix} 2x_1 \\ 2x_2 \\ 2x_3 \\ 2x_4 \end{bmatrix} - \lambda_1\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} - \lambda_2\begin{bmatrix} 1 \\ -1 \\ 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{3.3d}$$

Solving Equation (3.3d) produces the result $\lambda_1 = 6$, $\lambda_2 = -1$.

Case 2: The SCE, this time, are constructed according to (2.10) involving the constraint g 1 ( x ) , leading to the following:

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; 2x_1 - x_2 - x_3 = 0 \tag{3.3e}$$

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; 3x_1 - x_2 - x_3 - x_4 = 0 \tag{3.3f}$$

$$\frac{\partial f(x)}{\partial x_2}\left(\frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right) - \frac{\partial g_1(x)}{\partial x_2}\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; 2x_2 - x_3 - x_4 = 0 \tag{3.3g}$$

$$\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right)\frac{\partial g_1(x)}{\partial x_4} - \left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3}\right)\frac{\partial f(x)}{\partial x_4} = 0 \;\Rightarrow\; x_2 + x_3 - 2x_4 = 0 \tag{3.3h}$$

Solving Equations (3.3b), (3.3c), (3.3e) and (3.3f) simultaneously, we have $x_1 = \frac{5}{2}$, $x_2 = \frac{9}{2}$, $x_3 = \frac{1}{2}$, $x_4 = \frac{5}{2}$, which satisfy the constraints (3.3b) and (3.3c). The corresponding objective function value is $f(x^*) = 33$. Comparing with the objective function value of 27 (obtained in Case 1), we see that 27 is the least. Therefore, the minimum solution for Problem 3 is $x_1 = \frac{5}{2}$, $x_2 = \frac{7}{2}$, $x_3 = \frac{5}{2}$, $x_4 = \frac{3}{2}$, $\lambda_1 = 6$, $\lambda_2 = -1$, $f(x^*) = 27$.

Case 3: In this case, SCE are produced using (2.6) in conjunction with the constraint g 2 ( x ) as follows:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_2(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 + x_2 = 0,$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 + x_3 = 0,$$

$$\frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_4} - \frac{\partial f(x)}{\partial x_4}\frac{\partial g_2(x)}{\partial x_3} = 0 \;\Rightarrow\; 3x_3 - x_4 = 0.$$

From $x_1 + x_2 = 0 \Rightarrow x_2 = -x_1$ and $x_2 + x_3 = 0 \Rightarrow x_2 = -x_3$, so that $x_1 = x_3$. Substituting $x_1 = x_3$ in (3.3b) and (3.3c) gives the equations $x_2 + 2x_3 + x_4 = 10$ and $-x_2 + 2x_3 + 3x_4 = 6$ respectively. Adding $x_2 + 2x_3 + x_4 = 10$ to $-x_2 + 2x_3 + 3x_4 = 6$ results in $x_3 + x_4 = 4$. Adding $3x_3 - x_4 = 0$ to $x_3 + x_4 = 4$ results in $x_3 = 1$, $x_4 = 3$. Therefore, $x_1 = 1$, since $x_1 = x_3$. Putting $x_1 = 1$, $x_3 = 1$, $x_4 = 3$ in (3.3c) gives $x_2 = 5$. The values $x_1 = 1$, $x_2 = 5$, $x_3 = 1$, $x_4 = 3$ satisfy both constraints (3.3b) and (3.3c) with a corresponding objective function value of 36. This value, however, is higher than the 27 obtained under Cases 1 and 2; therefore, the solution, while feasible, is not optimal.

3.4. Problem 4

This problem is taken from [18] and given by:

$$\min\; f(x) = (x_1 - x_2)^2 + (x_2 + x_3 - 2)^2 + (x_4 - x_5)^2 + (x_4 - 1)^2 + (x_5 - 1)^2$$

subject to:

$$g_1(x) = x_1 + 3x_2 - 4 = 0 \tag{3.4a}$$

$$g_2(x) = x_3 + x_4 - 2x_5 = 0 \tag{3.4b}$$

$$g_3(x) = x_2 - x_5 = 0 \tag{3.4c}$$

The solution of the problem by CLM is:

$$x_1 = x_2 = x_3 = x_4 = x_5 = 1, \quad \lambda_1 = 0, \; \lambda_2 = 0 \;\text{and}\; \lambda_3 = 0, \quad f(x^*) = 0 \;\; \text{[18]}.$$

Solution of Problem 4 by the MLM

Case 1: The SCE according to (2.6) and involving the constraint g 1 ( x ) are:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_1(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_1} = 0 \;\Rightarrow\; 4x_1 - 5x_2 - x_3 = -2, \tag{3.4d}$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_1(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_1(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 + x_3 = 2. \tag{3.4e}$$

Solving (3.4a), (3.4b), (3.4c), (3.4d) and (3.4e) simultaneously results in $x_1 = x_2 = x_3 = x_4 = x_5 = 1$, which satisfies all three constraints, with corresponding objective function value $f(x^*) = 0$. The first order necessary optimality conditions lead to the system:

$$\begin{bmatrix} 2x_1 - 2x_2 \\ -2x_1 + 4x_2 + 2x_3 - 4 \\ 2x_2 + 2x_3 - 4 \\ 4x_4 - 2x_5 - 2 \\ -2x_4 + 4x_5 - 2 \end{bmatrix} - \lambda_1\begin{bmatrix} 1 \\ 3 \\ 0 \\ 0 \\ 0 \end{bmatrix} - \lambda_2\begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ -2 \end{bmatrix} - \lambda_3\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{3.4f}$$

Solving the system of Equations (3.4f) results in λ 1 = 0 , λ 2 = 0 , λ 3 = 0 .

Case 2: The SCE this time are produced according to (2.10) as:

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; 2x_1 - 3x_2 - x_3 = -2 \tag{3.4g}$$

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; 4x_1 - 6x_2 - 2x_3 - 2x_4 + x_5 = -5 \tag{3.4h}$$

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_1(x)}{\partial x_2} + \frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4} + \frac{\partial g_1(x)}{\partial x_5}\right) - \frac{\partial g_1(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4} + \frac{\partial f(x)}{\partial x_5}\right) = 0 \;\Rightarrow\; 4x_1 - 6x_2 - 2x_3 - x_4 - x_5 = -6 \tag{3.4i}$$

$$\frac{\partial f(x)}{\partial x_2}\left(\frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4}\right) - \frac{\partial g_1(x)}{\partial x_2}\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; x_2 + x_3 + 2x_4 - x_5 = 3 \tag{3.4j}$$

$$\frac{\partial f(x)}{\partial x_2}\left(\frac{\partial g_1(x)}{\partial x_3} + \frac{\partial g_1(x)}{\partial x_4} + \frac{\partial g_1(x)}{\partial x_5}\right) - \frac{\partial g_1(x)}{\partial x_2}\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4} + \frac{\partial f(x)}{\partial x_5}\right) = 0 \;\Rightarrow\; x_2 + x_3 + x_4 + x_5 = 4 \tag{3.4k}$$

Multiplying (3.4g) by 2 and subtracting the resulting equation from (3.4h) gives $2x_4 - x_5 = 1$. Subtracting (3.4h) from (3.4i) results in $-x_4 + 2x_5 = 1$. Solving simultaneously the equations $2x_4 - x_5 = 1$ and $-x_4 + 2x_5 = 1$ yields $x_4 = x_5 = 1$. Substituting $x_5 = 1$ in (3.4c) gives $x_2 = 1$. Substituting $x_4 = x_5 = 1$ in (3.4b) yields $x_3 = 1$. Substituting $x_2 = 1$ in (3.4a) gives $x_1 = 1$. Therefore, the minimum solution is $x_1 = x_2 = x_3 = x_4 = x_5 = 1$ with minimum objective function value of zero.

Case 3: The SCE in this case are produced according to (2.6), but this time involving the constraint g 2 ( x ) as follows:

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_2(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_2} = 0 \;\Rightarrow\; -x_1 + 2x_2 + x_3 = 2, \tag{3.4l}$$

$$\frac{\partial f(x)}{\partial x_3}\frac{\partial g_2(x)}{\partial x_4} - \frac{\partial f(x)}{\partial x_4}\frac{\partial g_2(x)}{\partial x_3} = 0 \;\Rightarrow\; x_2 + x_3 - 2x_4 + x_5 = 1, \tag{3.4m}$$

$$\frac{\partial f(x)}{\partial x_4}\frac{\partial g_2(x)}{\partial x_5} - \frac{\partial f(x)}{\partial x_5}\frac{\partial g_2(x)}{\partial x_4} = 0 \;\Rightarrow\; x_4 = 1.$$

Solving (3.4a), (3.4b), (3.4c), (3.4l), and (3.4m) simultaneously results in x 1 = x 2 = x 3 = x 4 = x 5 = 1 which is the required minimum solution, with objective function value 0.
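The simultaneous solution in this case can again be verified mechanically; the sketch below is an illustration added for this presentation, solving (3.4l), (3.4m) and $x_4 = 1$ together with the three constraints.

```python
import sympy as sp

x1, x2, x3, x4, x5 = xs = sp.symbols('x1:6', real=True)
g1 = x1 + 3*x2 - 4
g2 = x3 + x4 - 2*x5
g3 = x2 - x5

# SCE (3.4l), (3.4m) and x4 = 1, together with the three constraints.
eqs = [-x1 + 2*x2 + x3 - 2,
       x2 + x3 - 2*x4 + x5 - 1,
       x4 - 1,
       g1, g2, g3]
print(sp.solve(eqs, xs, dict=True))
# [{x1: 1, x2: 1, x3: 1, x4: 1, x5: 1}]
```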

Case 4: The SCE are produced using (2.6) involving g 3 ( x ) this time around, as:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g_3(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g_3(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 - x_2 = 0, \tag{3.4n}$$

$$\frac{\partial f(x)}{\partial x_2}\frac{\partial g_3(x)}{\partial x_3} - \frac{\partial f(x)}{\partial x_3}\frac{\partial g_3(x)}{\partial x_2} = 0 \;\Rightarrow\; x_2 + x_3 = 2, \tag{3.4o}$$

$$\frac{\partial f(x)}{\partial x_4}\frac{\partial g_3(x)}{\partial x_5} - \frac{\partial f(x)}{\partial x_5}\frac{\partial g_3(x)}{\partial x_4} = 0 \;\Rightarrow\; 2x_4 - x_5 = 1. \tag{3.4p}$$

Solving constraint (3.4a) and Equation (3.4n) simultaneously results in $x_1 = x_2 = 1$. Substituting $x_2 = 1$ in constraint (3.4c) results in $x_5 = 1$. Substituting $x_2 = 1$ in $x_2 + x_3 = 2$ and $x_5 = 1$ in $2x_4 - x_5 = 1$ gives $x_3 = 1$ and $x_4 = 1$ respectively. Therefore, $x_1 = x_2 = x_3 = x_4 = x_5 = 1$ is the required minimum solution, with objective function value zero as previously. The parameter values obtained previously therefore follow accordingly.

Case 5: The SCE in this case are generated according to (2.10), involving the constraint g 3 ( x ) , as follows:

$$\frac{\partial f(x)}{\partial x_1}\left(\frac{\partial g_3(x)}{\partial x_2} + \frac{\partial g_3(x)}{\partial x_3}\right) - \frac{\partial g_3(x)}{\partial x_1}\left(\frac{\partial f(x)}{\partial x_2} + \frac{\partial f(x)}{\partial x_3}\right) = 0 \;\Rightarrow\; x_1 - x_2 = 0 \tag{3.4q}$$

$$\frac{\partial f(x)}{\partial x_2}\left(\frac{\partial g_3(x)}{\partial x_3} + \frac{\partial g_3(x)}{\partial x_4}\right) - \frac{\partial g_3(x)}{\partial x_2}\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4}\right) = 0 \;\Rightarrow\; x_2 + x_3 + 2x_4 - x_5 = 3 \tag{3.4r}$$

$$\frac{\partial f(x)}{\partial x_2}\left(\frac{\partial g_3(x)}{\partial x_3} + \frac{\partial g_3(x)}{\partial x_4} + \frac{\partial g_3(x)}{\partial x_5}\right) - \frac{\partial g_3(x)}{\partial x_2}\left(\frac{\partial f(x)}{\partial x_3} + \frac{\partial f(x)}{\partial x_4} + \frac{\partial f(x)}{\partial x_5}\right) = 0 \;\Rightarrow\; x_1 - 3x_2 - 2x_3 - x_4 - x_5 = -6 \tag{3.4s}$$

$$\frac{\partial f(x)}{\partial x_3}\left(\frac{\partial g_3(x)}{\partial x_4} + \frac{\partial g_3(x)}{\partial x_5}\right) - \frac{\partial g_3(x)}{\partial x_3}\left(\frac{\partial f(x)}{\partial x_4} + \frac{\partial f(x)}{\partial x_5}\right) = 0 \;\Rightarrow\; x_2 + x_3 = 2 \tag{3.4t}$$

Solving the constraint Equation (3.4a) and Equation (3.4q) simultaneously yields $x_1 = x_2 = 1$. Substituting $x_2 = 1$ in (3.4c) results in $x_5 = 1$. Substituting $x_2 = 1$ in (3.4t) gives $x_3 = 1$. Substituting $x_2 = x_3 = 1$ and $x_5 = 1$ in (3.4r) results in $x_4 = 1$, which is what was obtained in the earlier cases of Problem 4. The objective function and parameter values, therefore, follow accordingly.

3.5. Problem 5

This is an extract from [20] . In this case we have a linear objective function being minimized subject to a nonlinear convex quadratic constraint equation.

$$\min\; f(x) = 2x_1 + x_2$$

subject to:

$$g(x) = x_1^2 + x_2^2 - 4 = 0$$

The solution obtained by the CLM is:

$$x_1 = -\frac{4}{\sqrt{5}}, \; x_2 = -\frac{2}{\sqrt{5}}, \; \lambda = -\frac{\sqrt{5}}{4}, \; f(x^*) = -2\sqrt{5}$$

Solution of Problem 5 by the MLM

Case 1: The only SCE produced for Problem 5 using (2.6) and involving the only constraint g ( x ) is:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 = 2x_2.$$

Substituting $x_1 = 2x_2$ in $x_1^2 + x_2^2 - 4 = 0$, the result is $x_2 = \pm\frac{2}{\sqrt{5}}$. Substituting $x_2 = \frac{2}{\sqrt{5}}$ and $x_2 = -\frac{2}{\sqrt{5}}$ in $x_1 = 2x_2$ results in $x_1 = \frac{4}{\sqrt{5}}$ and $x_1 = -\frac{4}{\sqrt{5}}$ respectively. The objective function values at $(\frac{4}{\sqrt{5}}, \frac{2}{\sqrt{5}})$ and $(-\frac{4}{\sqrt{5}}, -\frac{2}{\sqrt{5}})$ are respectively $2\sqrt{5}$ and $-2\sqrt{5}$. Hence, the minimum value is $-2\sqrt{5}$, attained at $(-\frac{4}{\sqrt{5}}, -\frac{2}{\sqrt{5}})$. From the first order optimality condition, we have the system:

$$\begin{bmatrix} 2 \\ 1 \end{bmatrix} - \lambda\begin{bmatrix} 2x_1 \\ 2x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

which produces the result $\lambda = -\frac{\sqrt{5}}{4}$.
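Because the constraint is quadratic, the single SCE together with $g(x) = 0$ yields two candidate points, and the minimum is selected by comparing objective values, exactly as done above. The sketch below, an illustration added for this presentation, reproduces this with sympy.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = 2*x1 + x2
g = x1**2 + x2**2 - 4

# The single SCE of (2.6) reduces to x1 = 2*x2; solving it with the
# quadratic constraint gives two candidates, of which the minimum is kept.
sce = sp.diff(f, x1)*sp.diff(g, x2) - sp.diff(f, x2)*sp.diff(g, x1)
cands = sp.solve([sce, g], [x1, x2], dict=True)
best = min(cands, key=lambda s: float(f.subs(s)))
print(best, sp.simplify(f.subs(best)))   # x* = (-4/sqrt(5), -2/sqrt(5)), f* = -2*sqrt(5)
```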

3.6. Problem 6

This problem is an extract from [21]. Like Problem 5, it also involves minimizing a linear objective function, subject to a nonlinear convex quadratic constraint, given by:

$$\min\; f(x) = 2x_2 + x_1$$

subject to:

$$g(x) = x_2^2 + x_1 x_2 - 1 = 0$$

The solution (see [21] ) obtained by the CLM is:

$$x_1 = 0, \; x_2 = -1, \; \lambda = -1, \; f(x^*) = -2$$

Solution of Problem 6 by the MLM

Case 1: The only SCE obtained from (2.6) for this problem is:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 = 0$$

Substituting $x_1 = 0$ in $x_2^2 + x_1 x_2 - 1 = 0$ results in $x_2 = 1$ or $x_2 = -1$. Substituting $x_1 = 0$ and $x_2 = 1$, the objective function value is 2. Also, substituting $x_1 = 0$, $x_2 = -1$, the objective function value is −2. Therefore, the minimum objective function value is −2, attained at $x_1 = 0$, $x_2 = -1$. The first order optimality condition leads to the system:

$$\begin{bmatrix} 1 \\ 2 \end{bmatrix} - \lambda\begin{bmatrix} x_2 \\ x_1 + 2x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Substituting $x_1 = 0$, $x_2 = -1$ results in $\lambda = -1$.

3.7. Problem 7

In this problem, we consider a quadratic objective function, subject to a linear constraint. The problem is taken from [21].

$$\min\; f(x) = x_1 x_2$$

subject to:

$$g(x) = 2x_1 + 2x_2 - 20 = 0$$

Solution by the CLM is:

$$x_1 = 5, \; x_2 = 5, \; \lambda = 2.5, \; f(x^*) = 25$$

Solution of Problem 7 by the MLM

Case 1: The only SCE obtained from (2.6) is:

$$\frac{\partial f(x)}{\partial x_1}\frac{\partial g(x)}{\partial x_2} - \frac{\partial f(x)}{\partial x_2}\frac{\partial g(x)}{\partial x_1} = 0 \;\Rightarrow\; x_1 = x_2$$

Substituting $x_1 = x_2$ in $2x_1 + 2x_2 - 20 = 0$ results in $x_2 = 5$ and hence $x_1 = 5$. The minimum objective function value is therefore 25, attained at $x_1 = 5 = x_2$. The first order optimality condition leads to the system:

$$\begin{bmatrix} x_2 \\ x_1 \end{bmatrix} - \lambda\begin{bmatrix} 2 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

which gives λ = 2.5 .

4. Conclusions

This paper has reported a new way of solving equality constrained convex quadratic optimization problems, based on what the authors have called the Modified Lagrange Method (MLM), which, unlike the Classical Lagrange Method (CLM), decomposes the task of solving for the unknown decision variable values of the problem and for the values of the Lagrange multipliers into two independent procedures. The construction of what the authors have called Subsidiary Constraint Equations (SCE), which result from a new procedure for solving independently for the decision variables, is the novelty of the MLM. The arbitrarily large variety of ways of constructing the SCE is noted to be interesting and remains an area for further research.

The seven problems solved in this paper were intended to provide preliminary results demonstrating the ability of the MLM to find the required optimal solutions, just as can be obtained from the CLM. In future work, the authors intend to demonstrate the performance of the MLM relative to the CLM in solving convex quadratic equality constrained problems, and to extend the approach to inequality constrained nonlinear convex quadratic problems as well as to convex problems that are not necessarily quadratic.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Luenberger, D.G. and Ye, Y. (2008) Dual and Cutting Plane Methods. Linear and Nonlinear Programming, Vol. 116, Springer, New York, 435-468.
https://doi.org/10.1007/978-0-387-74503-9_14
[2] Koopmans, T.C. (2003) Activity Analysis of Production and Allocation. Wiley & Chapman-Hall, New York, 339-347.
[3] Bazaraa, M.S., Sherali, H.D. and Shetty, C.M. (2006) Nonlinear Programming, Theory & Algorithms. John Wiley & Sons Inc., New York.
https://doi.org/10.1002/0471787779
[4] Boyd, S. and Vandenberghe, L. (2004) Convex Optimization. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511804441
[5] Nocedal, J. and Wright, S.J. (2006) Quadratic Programming. Numerical Optimization, Springer, New York, 448-492.
[6] Shustrova, A. (2015) Primal-Dual Interior Point Methods for Quadratic Programming. Doctor’s Thesis, University of California, San Diego.
[7] Hoffman, L.D. and Bradley, G.L. (1992) Calculus for Business, Economics, and the Social and Life Sciences. McGraw-Hill, New York.
[8] Beavis, B. and Dobbs, I. (1990) “Static Optimization” Optimization and Stability Theory for Economic Analysis. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511559402
[9] Hillier, F.S. and Price, C.C. (2012) International Series in Operations Research & Management Science. Springer, Berlin/Heidelberg
[10] Jakovetic, D., Xavier, J. and Moura, J.M.F. (2014) Fast Distributed Gradient Methods. IEEE Transactions on Automatic Control, 59, 1131-1146.
https://doi.org/10.1109/TAC.2014.2298712
[11] Gao, X., Hao, Y.Z., Wang, Y., Zuo, X., and Chen, T. (2021) A Lagrange Relaxation-Based Decomposition Algorithm for Large-Scale Off Shore Oil Production Planning Optimization. Processes, 9, Article 1257.
https://doi.org/10.3390/pr9071257
[12] Curtis, F.E., Jiang, H. and Robinson, D.P. (2015) An Adaptive Augmented Lagrangian Method for Large-Scale Problems. Mathematical Programming Series A, 152, 201-245.
https://doi.org/10.1007/s10107-014-0784-y
[13] Liang, X., Hu, J., Zhong, W. and Qian, J. (2008) A Modified Augmented Lagrange Multiplier Method for Large Scale Optimization. Developments in Chemical Engineering and Mineral Processing, 9, 115-124.
https://doi.org/10.1002/apj.5500090214
[14] Burman, E., Hausbo, P. and Larson, M.G. (2023) The Augmented Lagrange Method as a Framework for Stabilized Methods in Computational Mechanics. Archives of Computational Methods in Engineering, 30, 2579-2604.
https://doi.org/10.1007/s11831-022-09878-6
[15] Bertsekas, D.P. (1996) Constrained Optimization and Lagrange Multiplier Methods. Athena Scientific, Nashua.
[16] Fortin, M. and Glowinski, R. (1983) Augmented Lagrangian Methods: Numerical Solution of Boundary-Value Problems. Elsevier Science Publishers, New York.
[17] Komzsik, L. and Chiang, K. (1993) The Effects of a Lagrange Multiplier Approach in MSC/NASTRAN on Large-Scale Parallel Applications. Computing Systems in Engineering, 4, 399-403.
https://doi.org/10.1016/0956-0521(93)90008-K
[18] Schittkowski, K. (2012) More Test Examples for Nonlinear Programming Codes. Vol. 282, Springer Science & Business Media, Berlin.
[19] Libretext (2023) Constrained Optimization Lagrange Multipliers.
https://math.libretexts.org/
[20] Trench, W.F. (2012) The Method of Lagrange Multipliers. Harper & Row, New York.
[21] Berkeley Math (2020) University of California.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.