Guignard’s Constraint Qualification (GCQ) and Multiobjective Optimisation Problems

Abstract

Investigation of optimality conditions has been one of the most interesting topics in the theory of multiobjective optimisation problems (MOP). To derive necessary optimality conditions for an MOP, we consider assumptions called constraint qualifications. It is recognised that the Guignard Constraint Qualification (GCQ) is the most efficient and general assumption for scalar optimisation problems; however, GCQ does not ensure the Karush-Kuhn-Tucker (KKT) necessary conditions for multiobjective optimisation problems. In this paper, we investigate the reasons why GCQ fails to yield KKT conditions in multiobjective optimisation problems. Furthermore, we propose additional assumptions under which GCQ can be used to derive necessary conditions for multiobjective optimisation problems. Finally, we also include sufficient conditions for multiobjective optimisation problems.

Share and Cite:

Alam, H. and Ray, G. (2022) Guignard’s Constraint Qualification (GCQ) and Multiobjective Optimisation Problems. Journal of Applied Mathematics and Physics, 10, 2356-2367. doi: 10.4236/jamp.2022.107160.

1. Introduction

Many authors derive first-order necessary conditions for multiple objective functions using the same techniques as for scalar-valued objective functions [1] [2] [3] [4] [5], but these works do not provide a useful tool for developing necessary and sufficient conditions for the multiobjective optimisation problem.

In this paper, we generalise and recall first-order optimality conditions. We consider the objective function to be a vector function subject to classical inequality and equality constraints; furthermore, we suppose that the functions involved satisfy suitable differentiability assumptions.

Many papers [6] - [13] have been devoted to studying first order necessary conditions for multiobjective problems with a set constraint; the basic idea is to approximate the constraints with the contingent cone. We review these results carefully, stressing the meaningful differences with the scalar-objective case.

Scalarisation techniques are often used in multiobjective optimisation problems. Many papers have been published in which scalarisation approaches are used and non-negative Lagrange multipliers associated with the vector-valued objective function are considered. It is therefore possible that, due to some zero multipliers, the corresponding components of the vector-valued objective function play no role in the necessary conditions of the multiobjective problem. To obtain positive Lagrange multipliers, Maeda [14], and later Preda [15], introduced some special sets and derived generalised regularity conditions for first-order KKT-type necessary conditions that ensure the existence of positive Lagrange multipliers.

This paper recalls first-order necessary and sufficient conditions for multiobjective optimisation problems. To derive necessary conditions, we use well-known constraint qualifications such as the Abadie constraint qualification and the Guignard constraint qualification, which are regarded in the literature as the most general assumptions for establishing optimality conditions.

In Section 2 of this paper, basic notations used throughout our analysis are presented. First-order necessary conditions are introduced in Section 3, whereas constraint qualifications, with counterexamples showing the gap between scalar and multiobjective optimisation problems, are presented in Section 4.

2. Basic Notations

This section introduces some notations and definitions used throughout the paper.

For x, y ∈ R^n, where R^n denotes n-dimensional Euclidean space, we use the following relations to compare points:

x ≦ y, if and only if x_i ≤ y_i, i = 1, …, n,

x ≤ y, if and only if x ≦ y and x ≠ y,

x < y, if and only if x_i < y_i, i = 1, …, n.

Now, we consider the following multiobjective optimisation Problem P:

min f(x), subject to x ∈ X, where

X = { x ∈ R^n | g(x) ≦ 0, h(x) = 0 }.

Now, let f : R^n → R^l, g : R^n → R^m and h : R^n → R^p be continuously differentiable vector-valued functions defined by f(x) = (f_1(x), f_2(x), …, f_l(x)), g(x) = (g_1(x), g_2(x), …, g_m(x)) and h(x) = (h_1(x), h_2(x), …, h_p(x)), where f_i : R^n → R for i = 1, …, l, g_j : R^n → R for j = 1, …, m, and h_k : R^n → R for k = 1, …, p. The set of active constraints at x̄ is I(x̄) = { j ∈ {1, …, m} : g_j(x̄) = 0 }.

A solution of Problem (P) is called an efficient point; a more general notion of solution of (P) is the weakly efficient point. The following definitions are the standard ones used for multiobjective optimisation problems.

Definition 2.1. A point x̄ ∈ X is called an efficient solution to Problem (P) if there is no x ∈ X such that f(x) ≤ f(x̄).

A point x̄ ∈ X is called a weakly efficient solution to Problem (P) if there is no x ∈ X such that f(x) < f(x̄).
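The componentwise orders above translate directly into a dominance test. Below is a minimal numerical sketch; the helper names `dominates` and `is_efficient` are illustrative, not from the paper, and checking a candidate against a finite sample of feasible points can only refute efficiency, never certify it.

```python
# Numerical sketch of Definition 2.1: x̄ is efficient if no feasible x has
# f(x) ≤ f(x̄), i.e. componentwise ≤ with at least one strict inequality.

def dominates(fx, fy):
    """True if fx ≤ fy in the componentwise order with fx ≠ fy."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def is_efficient(xbar, feasible, f):
    """Check x̄ against a finite sample of feasible points (refutation only)."""
    fbar = f(xbar)
    return not any(dominates(f(x), fbar) for x in feasible if x != xbar)

# Example: min (x1, x2) over sample points in the first quadrant.
f = lambda x: (x[0], x[1])
sample = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (1.5, 0.5)]
print(is_efficient((0.0, 0.0), sample, f))  # True
```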

Because of these ordering relations, a problem may have all of its feasible points as efficient solutions, or no efficient solution at all, as the following example shows.

Example 2.1 Consider the problem

min { x, −x³ } with X = { x | x ∈ R }.

Here, every x ∈ X is an efficient solution (see Figure 1). On the other hand, consider the problem

min { x1, x2 } with X = { (x1, x2) | x1 ≤ 0, x2 ≤ 0 }.

Here, no x ∈ X is an efficient solution to the problem (see Figure 2).

Figure 1. Efficient points, X = { x | x ∈ R }.

Figure 2. Shaded feasible region X = { (x1, x2) | x1 ≤ 0, x2 ≤ 0 }.

3. First Order Necessary Conditions

The study of the constraint set X and its image f(X) is a difficult task, and it is therefore reasonable to consider suitable approximations of these sets. The following concept plays a vital role in developing optimisation theory [6] [14] [16] [17].

Definition 3.1 Let X be a subset of R^n. The contingent cone to X at x̄ ∈ cl X is the set defined by

T(X; x̄) := { d ∈ R^n | d = lim_{n→∞} t_n (x_n − x̄) such that x_n ∈ X, with x_n → x̄ and t_n > 0, for all n = 1, 2, … },

where cl X denotes the closure of X. The contingent cone T(X; x̄) is a nonempty closed cone and enjoys some important properties [6] [14] [17] [18]: it is isotone, that is, T(X_1; x̄) ⊆ T(X_2; x̄) whenever X_1 ⊆ X_2, and it is convex whenever the original set is convex.

The analysis of optimality conditions can be deepened, developing the connection between the contingent cone and the following linearising cone (see [1] [18] [19]).

Definition 3.2 The linearising cone to X at x̄ ∈ X is the set defined by

L(X; x̄) := { d ∈ R^n | ∇g_j(x̄)^T d ≤ 0, j ∈ I(x̄), and ∇h_k(x̄)^T d = 0, k = 1, 2, …, p }.

Here L(X; x̄) is a nonempty closed convex cone.
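Membership in the linearising cone amounts to finitely many linear tests on the gradients. A small sketch follows; the helper `in_linearising_cone` and the constraint data for X = { x | −x1 ≤ 0, −x2 ≤ 0 } at x̄ = (0, 0) are illustrative choices, not from the paper.

```python
# d ∈ L(X; x̄) iff ∇g_j(x̄)^T d ≤ 0 for every active j and ∇h_k(x̄)^T d = 0 for all k.

def in_linearising_cone(d, active_ineq_grads, eq_grads, tol=1e-12):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    ok_ineq = all(dot(g, d) <= tol for g in active_ineq_grads)   # ∇g_j(x̄)^T d ≤ 0
    ok_eq = all(abs(dot(h, d)) <= tol for h in eq_grads)         # ∇h_k(x̄)^T d = 0
    return ok_ineq and ok_eq

# X = { x | -x1 ≤ 0, -x2 ≤ 0 }; at x̄ = (0, 0) both constraints are active.
grads = [(-1.0, 0.0), (0.0, -1.0)]
print(in_linearising_cone((1.0, 2.0), grads, []))   # True
print(in_linearising_cone((-1.0, 0.5), grads, []))  # False
```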

The following lemma is a well-known property for scalar and multiobjective optimisation problems; the statement and proof can be found in the literature, see [14] [15] [20] [21] [22]. However, we include them here for the reader's convenience.

Lemma 3.1 If x̄ ∈ X is an efficient solution of Problem P, then the system

∇f_i(x̄)^T d < 0, i = 1, 2, …, l (1)

has no solution d ∈ T(X; x̄).

Proof. Let d ∈ T(X; x̄), that is, d = lim_{n→∞} t_n (x_n − x̄), where t_n > 0, x_n ∈ X for each n, and lim_{n→∞} x_n = x̄. By differentiability of f at x̄, we get

f(x_n) − f(x̄) = ∇f(x̄)^T (x_n − x̄) + o(‖x_n − x̄‖), (2)

where o(‖x_n − x̄‖)/‖x_n − x̄‖ → 0 as x_n → x̄.

Since x̄ is an efficient solution, there is no x_n ∈ X with f(x_n) ≤ f(x̄), and so, from (2),

∇f(x̄)^T (x_n − x̄) + o(‖x_n − x̄‖) ≰ 0,

t_n ∇f(x̄)^T (x_n − x̄) + t_n o(‖x_n − x̄‖) ≰ 0.

Since t_n > 0, taking the limit as n → ∞ in the above relation yields ∇f(x̄)^T d ≮ 0; that is, d is not a solution of (1). This completes the proof.

The converse is no longer true, as the following example shows.

Example 3.1 Consider the problem

min { x1, x2 } and X = { (x1, x2) | x2 + (1/8)x1² + 1 ≥ 0, x2 + 1 ≥ 0 }.

It is easily verified at x̄ = (0, −1) that:

1) F = { d ∈ R² | ∇f_i(x̄)^T d < 0, i = 1, 2 } = { (d1, d2) ∈ R² | d1 < 0, d2 < 0 }.

2) T = T(X; x̄) = { (d1, d2) ∈ R² | d2 ≥ 0 }.

3) F ∩ T = ∅.

4) x̄ = (0, −1) is not an efficient solution to the problem; note, however, that x̄ is a weakly efficient point.

5) See Figure 3.
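A numeric spot-check of the efficiency claims, taking the feasible set as X = { (x1, x2) | x2 + (1/8)x1² + 1 ≥ 0, x2 + 1 ≥ 0 } and x̄ = (0, −1) (the candidate point below is an illustrative choice):

```python
# x̄ = (0, -1) is not efficient: the feasible point (-1, -1) dominates it.
# Yet no feasible point beats x̄ strictly in both components, since x2 < -1
# is infeasible; hence x̄ is weakly efficient.

def feasible(x):
    x1, x2 = x
    return x2 + x1**2 / 8 + 1 >= 0 and x2 + 1 >= 0

xbar = (0.0, -1.0)
cand = (-1.0, -1.0)

dominated = (feasible(cand)
             and cand[0] <= xbar[0] and cand[1] <= xbar[1]
             and cand != xbar)
print(dominated)               # True: x̄ is not efficient
print(feasible((-1.0, -1.5)))  # False: x2 < -1 is infeasible
```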

The relation between the contingent cone and the linearising cone is as follows.

Lemma 3.2 Let x̄ ∈ X. Then T(X; x̄) ⊆ L(X; x̄) (see [14] for the proof).

Now we introduce Motzkin's theorem of the alternative, which is used to derive the necessary conditions of optimality. Theorems of the alternative relate the unsolvability of one system to the solvability of another; see [11] [23] for the proof.

Figure 3. Shaded feasible region X.

Theorem 3.1 (Motzkin's Theorem of the Alternative) Let A, B, and C be given matrices, with A being nonvacuous. Then either

1) Ax > 0, Bx ≧ 0 and Cx = 0 has a solution x,

or

2) A^T y1 + B^T y2 + C^T y3 = 0, y1 ≥ 0, y2 ≧ 0 has a solution y1, y2, y3,

but never both.
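The alternative can be illustrated on a tiny instance. In the sketch below (pure Python, illustrative data), A is the 2×2 identity and B = (−1 −1); system 1) is unsolvable, and an explicit y solves system 2). The direction sampling is only suggestive, not a proof.

```python
import math

# System 1) asks for x with x1 > 0, x2 > 0 and -x1 - x2 ≥ 0: impossible.
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[-1.0, -1.0]]

def system1_holds(x):
    return (all(sum(a * xi for a, xi in zip(row, x)) > 0 for row in A)
            and all(sum(b * xi for b, xi in zip(row, x)) >= 0 for row in B))

# Sample directions on the unit circle (suggestive evidence of unsolvability).
found = any(system1_holds((math.cos(t / 100.0), math.sin(t / 100.0)))
            for t in range(629))
print(found)  # False

# System 2): y1 = (1, 1) ≥ 0 and y2 = (1,) ≧ 0 give A^T y1 + B^T y2 = 0.
y1, y2 = (1.0, 1.0), (1.0,)
residual = [sum(A[i][j] * y1[i] for i in range(2)) + B[0][j] * y2[0]
            for j in range(2)]
print(residual)  # [0.0, 0.0]
```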

4. Constraint Qualifications

To establish KKT-type necessary conditions, we want the Lagrange multiplier vector associated with the objective functions to satisfy λ ∈ R^l, λ ≥ 0, λ ≠ 0; that is, at least one multiplier must be non-zero. Indeed, if λ = 0, the objective function disappears from the multiplier rule and any other function could play its role. The assumptions that guarantee such multipliers, and thereby the necessary conditions for optimality, are called constraint qualifications (CQ). In our analysis, we consider the two well-known constraint qualifications introduced below.

Definition 4.1 The constraint set X satisfies the Abadie Constraint Qualification (ACQ) at x ¯ if

L(X; x̄) = T(X; x̄).

Definition 4.2 The constraint set X satisfies the Guignard’s Constraint Qualification (GCQ) at x ¯ if

L(X; x̄) = cl conv T(X; x̄).

Lemma 4.1 Let x̄ ∈ X be any feasible solution to Problem P and assume that ACQ holds at x̄. If x̄ is an efficient solution to Problem P, then the system

∇f_i(x̄)^T d < 0, i = 1, 2, …, l,
∇g_j(x̄)^T d ≤ 0, j ∈ I(x̄), (3)
∇h_k(x̄)^T d = 0, k = 1, 2, …, p,

has no solution d ∈ R^n.

Proof. Since x̄ ∈ X is efficient, Lemma 3.1 gives

{ d | ∇f_i(x̄)^T d < 0, i = 1, 2, …, l } ∩ T(X; x̄) = ∅.

If ACQ holds at x̄, then T(X; x̄) = L(X; x̄), and hence

{ d | ∇f_i(x̄)^T d < 0, i = 1, 2, …, l } ∩ L(X; x̄) = ∅;

that is, (3) has no solution. This completes the proof.

Theorem 4.1 Suppose that ACQ holds at x̄. If x̄ ∈ X is an efficient solution to Problem P, then there exist vectors u ∈ R^l, v ∈ R^m and μ ∈ R^p such that

Σ_{i=1}^{l} u_i ∇f_i(x̄) + Σ_{j=1}^{m} v_j ∇g_j(x̄) + Σ_{k=1}^{p} μ_k ∇h_k(x̄) = 0, (4)

v_j g_j(x̄) = 0, j = 1, …, m, (5)

u ≥ 0, v ≧ 0. (6)

Proof. Applying Theorem 3.1 to the system of Lemma 4.1, there exist u ≥ 0, u ∈ R^l, v_j ≧ 0 for j ∈ I(x̄), and μ ∈ R^p such that

Σ_{i=1}^{l} u_i ∇f_i(x̄) + Σ_{j∈I(x̄)} v_j ∇g_j(x̄) + Σ_{k=1}^{p} μ_k ∇h_k(x̄) = 0.

By setting v_j = 0 for j ∉ I(x̄), we have

Σ_{i=1}^{l} u_i ∇f_i(x̄) + Σ_{j=1}^{m} v_j ∇g_j(x̄) + Σ_{k=1}^{p} μ_k ∇h_k(x̄) = 0,

u ≥ 0, v ≧ 0.

Since g_j(x̄) = 0 for j ∈ I(x̄) and v_j = 0 for j ∉ I(x̄), we have

v_j g_j(x̄) = 0 for j = 1, …, m,

which completes the proof.

Therefore, ACQ together with Motzkin's theorem of the alternative allows one to obtain a multiplier rule with u ≥ 0.
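The conditions (4)-(6) of Theorem 4.1 can be checked numerically on a simple instance where ACQ holds. In the sketch below, the problem min (x1, x2) subject to −x1 ≤ 0, −x2 ≤ 0 at x̄ = (0, 0) and the multiplier values u, v are illustrative choices, not taken from the paper.

```python
# Verify (4) stationarity, (5) complementary slackness, (6) sign conditions
# for min (x1, x2) s.t. -x1 ≤ 0, -x2 ≤ 0 at x̄ = (0, 0).

grad_f = [(1.0, 0.0), (0.0, 1.0)]    # ∇f1, ∇f2
grad_g = [(-1.0, 0.0), (0.0, -1.0)]  # ∇g1, ∇g2
g_val = [0.0, 0.0]                   # both constraints active at x̄

u = [1.0, 1.0]  # illustrative multipliers
v = [1.0, 1.0]

# (4): Σ u_i ∇f_i + Σ v_j ∇g_j = 0
stat = [sum(ui * gf[j] for ui, gf in zip(u, grad_f)) +
        sum(vj * gg[j] for vj, gg in zip(v, grad_g)) for j in range(2)]
# (5): v_j g_j(x̄) = 0;  (6): u ≥ 0 (nonzero), v ≧ 0
comp = all(vj * gj == 0 for vj, gj in zip(v, g_val))
signs = (any(ui > 0 for ui in u) and all(ui >= 0 for ui in u)
         and all(vj >= 0 for vj in v))

print(stat, comp, signs)  # [0.0, 0.0] True True
```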

Example 4.1 Take n = l = 2 and consider the problem

min { x1, x2 } with X = { (x1, x2) | (x1 + a x2)(a x1 + x2) ≤ 0 }, a > 0.

It is easily verified at x̄ = (0, 0) that:

1) x̄ = (0, 0) is an efficient solution to the problem.

2) F = { d ∈ R² | ∇f_i(x̄)^T d < 0, i = 1, 2 } = { (d1, d2) ∈ R² | d1 < 0, d2 < 0 }.

3) T = T(X; x̄) = X.

4) F ∩ T = ∅.

5) L = { d ∈ R² | ∇g(x̄)^T d ≤ 0 } = R², since ∇g(x̄) = (0, 0).

6) T(X; x̄) ≠ L(X; x̄), so the Abadie Constraint Qualification (ACQ) does not hold at x̄ = (0, 0).

7) L(X; x̄) = cl conv T(X; x̄) when a ≠ 1, and thus Guignard's Constraint Qualification (GCQ) holds at x̄ = (0, 0).

8) x̄ = (0, 0) does not satisfy the conclusion of Theorem 4.1.

9) See Figure 4.

Remarks: In scalar optimisation, F = { d : ∇f(x̄)^T d < 0 } is an open half-space; consequently F ∩ T = ∅ implies F ∩ cl conv T = ∅, and Theorem 4.1 holds under both ACQ and GCQ when l = 1. When l > 1, Theorem 4.1 still holds under ACQ; however, F is then the intersection of several open half-spaces, F ∩ T = ∅ no longer implies F ∩ cl conv T = ∅, and GCQ alone does not guarantee Theorem 4.1. For this reason, GCQ cannot in general be used in multiobjective optimisation problems. In Example 4.1 we see that, even though GCQ holds at x̄, the KKT conditions do not hold at that point.
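The failure in Example 4.1 can be seen concretely: the constraint gradient vanishes at the origin, so stationarity (4) forces u = 0. A short sketch with the illustrative value a = 2, assuming the constraint g(x) = (x1 + a x2)(a x1 + x2):

```python
# At x̄ = (0, 0) the gradient of g(x) = (x1 + a*x2)(a*x1 + x2) vanishes,
# so (4) reduces to u1*∇f1 + u2*∇f2 = (u1, u2) = 0, contradicting u ≥ 0, u ≠ 0.

a = 2.0

def grad_g(x):
    # Expanding g(x) = a*x1² + (1 + a²)*x1*x2 + a*x2² and differentiating:
    x1, x2 = x
    return (2 * a * x1 + (1 + a * a) * x2,
            (1 + a * a) * x1 + 2 * a * x2)

print(grad_g((0.0, 0.0)))  # (0.0, 0.0): stationarity forces u1 = u2 = 0
```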

Under certain conditions, however, GCQ can be used for multiobjective optimisation problems, as the following examples illustrate.

Figure 4. Shaded feasible region X.

Example 4.2 Take n = l = 2 and consider the problem

min { x1, x2 } with X = { (x1, x2) | x1 ≥ 0, x2 ≥ 0 }.

It is easily verified that:

1) x0 = (0, 0) is a weakly efficient solution to the problem.

2) GCQ holds at x0 = (0, 0).

3) F = { (d1, d2) ∈ R² | d1 < 0, d2 < 0 }.

4) T = cl X = cl conv T, with cl conv T = { (x1, x2) | x1 ≥ 0, x2 ≥ 0 }.

Hence F ∩ cl conv T = ∅; that is, the conclusion of Lemma 4.1 holds when l > 1.

Example 4.3 Take n = l = 2 and consider the problem

min { x1, −x1³ } with X = { (x1, x2) | x1 x2 ≥ 0 }.

It is easily verified that:

1) x0 = (0, 0) is an efficient solution to the problem (see Figure 5).

2) GCQ holds at x0 = (0, 0).

3) F = ∅, since ∇f_2(x0) = (0, 0).

4) T = X and cl conv T = R².

Hence F ∩ cl conv T = ∅; that is, the conclusion of Lemma 4.1 holds when l > 1.

5. Sufficient Conditions for Efficiency

To establish the necessary conditions, we needed some sort of constraint qualification but no convexity assumptions. Necessary conditions generally do not turn out to be also sufficient unless additional assumptions hold. In this section, we use the concepts of quasiconvexity (quasiconcavity) and pseudoconvexity for the sufficient condition; for the definitions of quasi- and pseudoconvexity we refer to [17]. Many authors have devoted attention to generalising the sufficient conditions by using weaker assumptions such as generalised convex functions; see [1] [6] [23] [24].

Figure 5. Shaded feasible region.

We now ask whether the converse of Lemma 4.1 provides a sufficient condition. Let us state it as follows.

Lemma 5.1 If the system

∇f_i(x̄)^T d < 0, i = 1, 2, …, l,
∇g_j(x̄)^T d ≤ 0, j ∈ I(x̄), (7)
∇h_k(x̄)^T d = 0, k = 1, 2, …, p,

has no solution d ∈ R^n, then x̄ ∈ X is an efficient solution to Problem P.

Lemma 5.1 is not true for MOP, as the following example shows.

Example 5.1 Recall Example 3.1: consider the problem

min { x1, x2 } and X = { (x1, x2) | x2 + (1/8)x1² + 1 ≥ 0, x2 + 1 ≥ 0 }.

It is easily verified at x̄ = (0, −1) that:

1) F = { d ∈ R² | ∇f_i(x̄)^T d < 0, i = 1, 2 } = { (d1, d2) ∈ R² | d1 < 0, d2 < 0 }.

2) T = T(X; x̄) = { (d1, d2) ∈ R² | d2 ≥ 0 }.

3) F ∩ T = ∅.

4) L = { d ∈ R² | ∇g_j(x̄)^T d ≤ 0, j = 1, 2 } = { (d1, d2) ∈ R² | d2 ≥ 0 }.

5) L(X; x̄) = T(X; x̄), so the Abadie Constraint Qualification (ACQ) holds at x̄ = (0, −1).

6) T(X; x̄) = cl conv T(X; x̄); therefore L(X; x̄) = cl conv T(X; x̄), and thus Guignard's Constraint Qualification (GCQ) holds at x̄ = (0, −1).

7) x̄ = (0, −1) is not an efficient solution to the problem, although it is a weakly efficient point; hence the system (7) having no solution does not imply efficiency.

8) See Figure 3.

Unfortunately, there is no suitable theorem of the alternative that allows one to turn Lemma 5.1 into a sufficient multiplier rule: convexity assumptions have to be added to achieve this type of result [8] [10].

Theorem 5.1 Let x̄ ∈ X be a feasible solution of Problem P, and let R = { k : μ_k > 0 } and K = { k : μ_k < 0 }. Suppose that f is pseudoconvex at x̄, that g_j for j ∈ I(x̄) are quasiconvex at x̄, that h_k for k ∈ R are quasiconvex at x̄, and that h_k for k ∈ K are quasiconcave at x̄. If there exist u_i > 0, v_j ≧ 0 and μ_k such that (4) and (5) hold at x̄, then x̄ is an efficient solution of Problem P on X.

Proof. Let S = { j : g_j(x̄) < 0 }, that is, the set of j ∉ I(x̄). Since v_j ≧ 0 and v_j g_j(x̄) = 0 for j = 1, …, m, we have v_j = 0 for j ∈ S.

Now g_j(x) ≤ 0 = g_j(x̄) for all j ∈ I(x̄) and all x ∈ X. By quasiconvexity of g_j, j ∈ I(x̄), at x̄, it follows that

∇g_j(x̄)^T (x − x̄) ≤ 0 for all x ∈ X and j ∈ I(x̄),

and hence

v_j ∇g_j(x̄)^T (x − x̄) ≤ 0, j ∈ I(x̄). (8)

Similarly, since h_k for k ∈ R are quasiconvex at x̄ and h_k for k ∈ K are quasiconcave at x̄, we have

∇h_k(x̄)^T (x − x̄) ≤ 0 for k ∈ R (9)

and

∇h_k(x̄)^T (x − x̄) ≥ 0 for k ∈ K. (10)

Multiplying (9) and (10) by μ_k > 0 and μ_k < 0 respectively, and adding with (8), we get

[ Σ_{j∈I(x̄)} v_j ∇g_j(x̄) + Σ_{k∈R∪K} μ_k ∇h_k(x̄) ]^T (x − x̄) ≤ 0. (11)

Multiplying (4) by (x − x̄), we get

[ Σ_{i=1}^{l} u_i ∇f_i(x̄) + Σ_{j=1}^{m} v_j ∇g_j(x̄) + Σ_{k=1}^{p} μ_k ∇h_k(x̄) ]^T (x − x̄) = 0,

and comparing with (11) we obtain

[ Σ_{i=1}^{l} u_i ∇f_i(x̄) ]^T (x − x̄) ≥ 0,

u_i ∇f_i(x̄)^T (x − x̄) ≥ 0,

∇f_i(x̄)^T (x − x̄) ≥ 0, since u_i > 0, for all x ∈ X.

By the pseudoconvexity of f at x̄, this implies f(x) ≧ f(x̄) for all x ∈ X; hence there is no x ∈ X with f(x) ≤ f(x̄), and x̄ is an efficient solution.

Hence, the proof is complete.
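The quasiconvexity inequality invoked for (8)-(10) — if g is quasiconvex and g(x) ≤ g(x̄), then ∇g(x̄)^T (x − x̄) ≤ 0 — can be spot-checked numerically. The function g below is an illustrative quasiconvex (indeed convex) choice, not from the paper, and a finite sample of points is only suggestive.

```python
# Spot-check: for quasiconvex g, points in the sublevel set { x : g(x) ≤ g(x̄) }
# satisfy ∇g(x̄)^T (x - x̄) ≤ 0. Here g(x) = x1² + x2² and x̄ = (1, 1).

def g(x):
    return x[0] ** 2 + x[1] ** 2

def grad_g(x):
    return (2 * x[0], 2 * x[1])

xbar = (1.0, 1.0)
pts = [(0.0, 0.0), (1.0, -1.0), (-1.0, 1.0), (0.5, 0.5)]

ok = all(grad_g(xbar)[0] * (x[0] - xbar[0]) +
         grad_g(xbar)[1] * (x[1] - xbar[1]) <= 1e-12
         for x in pts if g(x) <= g(xbar))
print(ok)  # True
```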

6. Conclusion

This paper reviewed first-order optimality conditions for multiobjective optimisation problems. Combining the results of [25] [26], we have derived KKT-type necessary conditions. Furthermore, the constraint qualifications required by these necessary conditions were introduced. The main contribution of this paper is to identify conditions under which GCQ can be used for multiobjective optimisation problems; counterexamples were provided to show the obstructions that prevent the direct use of the well-known GCQ in the multiobjective setting. In future research, we intend to investigate sufficient KKT-type conditions for multiobjective optimisation problems under more general assumptions.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Laurent, P.J. (1972) Approximation et Optimisation. Hermann, Paris.
[2] Lin, J.G. (1976) Maximal Vectors and Multi Objective Optimisation. Journal of Optimisation Theory and Applications, 18, 41-64.
https://doi.org/10.1007/BF00933793
[3] Shapiro, J.F. (1979) Mathematical Programming: Structures and Algorithms. John Wiley, New York.
[4] Singh, C. (1987) Optimality Conditions in Multiobjective Differentiable Programming. Journal of Optimization Theory and Applications, 53, 115-123.
https://doi.org/10.1007/BF00938820
[5] Peressini, A.L. (1988) The Mathematics of Nonlinear Programming. Springer-Verlag, New York.
[6] Bigi, G. and Castellani, M. (2004) Uniqueness of KKT Multipliers in Multi-Objective Programming. Applied Mathematics Letters, 17, 1285-1290.
https://doi.org/10.1016/j.aml.2003.10.011
[7] Bigi, G. (2003) Optimality and Lagrangian Regularity in Vector Optimization. Ph.D. Thesis, University of Pisa, Pisa.
[8] Ben-Israel, A., Ben-Tal, A. and Charnes, A. (1977) Necessary and Sufficient Conditions for a Pareto Optimum in Convex Programming. Econometrica, 45, 811-820.
https://doi.org/10.2307/1912673
[9] Bhati, M.A. (2000) Practical Optimisation Methods. Springer-Verlag, New York.
[10] Censor, Y. (1977) Pareto Optimality in Multiobjective Problems. Applied Mathematics and Optimisation, 4, 41-59.
https://doi.org/10.1007/BF01442131
[11] Gould, F.J. and Tolle, J.W. (1971) A Necessary and Sufficient Qualification for Constrained Optimisation. SIAM Journal on Applied Mathematics, 20, 164-172.
https://doi.org/10.1137/0120021
[12] Jimenez, B. and Novo, V. (2003) Optimality Conditions in Directionally Differentiable Pareto Problems with a Set Constraint via Tangent Cones. Numerical Functional Analysis and Optimization, 24, 557-574.
https://doi.org/10.1081/NFA-120023868
[13] Jiménez, B. and Novo, V. (2002) First and Second Order Sufficient Conditions for Strict Minimality in Multiobjective Programming. Numerical Functional Analysis and Optimization, 23, 303-322.
https://doi.org/10.1081/NFA-120006695
[14] Maeda, T. (1994) Constraint Qualification in Multi-Objective Optimization Problems: Differentiable Case. Journal of Optimization Theory and Applications, 80, 483-500.
https://doi.org/10.1007/BF02207776
[15] Preda, V. and Chitescu, I. (1999) On Constraint Qualification in Multiobjective Optimisation Problems: Semi-Differentiable Case. Journal of Optimization Theory and Applications, 100, 417-433.
https://doi.org/10.1023/A:1021794505701
[16] Bigi, G. (2006) On Sufficient Second Order Optimality Conditions in Multiobjective Optimisation. Mathematical Methods of Operations Research, 63, 77-85.
https://doi.org/10.1007/s00186-005-0013-9
[17] Bazaraa, M.S., Sherali, H.D. and Shetty, C.M. (1993) Nonlinear Programming. 2nd Edition, John Wiley and Sons, New York.
[18] Bigi, G. and Castellani, M. (2000) Second Order Optimality Conditions for Differentiable Multiobjective Problems. RAIRO, Operations Research, 34, 411-426.
https://doi.org/10.1051/ro:2000122
[19] Magnus, J.R. and Neudecker, H. (1988) Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley and Sons, New York.
https://doi.org/10.2307/2531754
[20] Rizvi, M.M. and Nasser, M. (2006) Use of Guignard’s Constraint Qualification (GCQ) in Optimisation Problem. GANIT, Journal of Bangladesh Mathematical Society, 26, 63-70.
[21] Haeser, G. and Ramos, A. (2020) Constraint Qualifications for Karush-Kuhn-Tucker Conditions in Multiobjective Optimization. Journal of Optimization Theory and Applications, 187, 469-487.
https://doi.org/10.1007/s10957-020-01749-z
[22] Burachik, R.S., Kaya, C.Y. and Rizvi, M.M. (2017) A New Scalarization Technique and New Algorithms to Generate Pareto Fronts. SIAM Journal on Optimization, 27, 1010-1034.
https://doi.org/10.1137/16M1083967
[23] Gleixner, A., Eifler, L., Gally, T., Gamrath, G., Gemander, P., Gottwald, R.L., Hendel, C., Koch, T., Miltenberger, M., Muller, B., Pfetsch, M.E., Puchert, E., Rehfeldt, D., Schlosser, F., Serrano, F., Shinano, Y., Viernickel, J.M., Vigerske, S., Weninger, D., Witt, J.T. and Witzig, J. (2017) The SCIP Optimisation Suite 5.0. ZIB-Report 17-61, Zuse Institute, Berlin.
[24] Cornuejols, G., Fisher, M.L. and Nemhauser, G.L. (1977) Location of Bank Accounts to Optimise Float: An Analytic Study of Exact and Approximate Algorithms. Management Science, 23, 789-810.
[25] Rizvi, M.M. (2003) Optimisation of Non Linear Programming Problems under Constraints: Some Applications in Statistics. M. Phil. Thesis.
[26] Rizvi, M.M., Hanif, M. and Waliullah, G.M. (2009) First-Order Optimality Conditions in Multiobjective Optimisation Problems: Differentiable Case. GANIT: Journal of Bangladesh Mathematical Society, 29, 95-105.
https://doi.org/10.3329/ganit.v29i0.8519
