
In this paper, we find the solution of a quasiconcave bilevel programming problem (QCBPP). After formulating a bilevel multiobjective programming problem (BMPP), we characterize its leader objective function and its feasible set. We give necessary and sufficient conditions under which a convex union of sets of efficient points is an efficient set of the QCBPP. Based on this result, we formulate and solve a new QCBPP. Finally, we illustrate our approach with a numerical example.

A Bilevel Programming Problem (BPP) is a decision problem in which the vector variables x and y are controlled by two decision-makers: the leader and the follower. The variables x (resp. y) are the decision variables at the upper (resp. lower) level. This hierarchical optimization structure appears in many applications in which the strategy y of the lower level depends on the strategy x of the upper level.

Mathematically, solving a BPP consists of finding a solution of the problem at the upper level, called the leader's (or outer) problem:

$$\min_{x} F(x,y) \quad \text{subject to} \quad G(x,y) \le 0$$

where, for each value of x, y is a solution of the problem at the lower level, called the follower's (or inner) problem:

$$\min_{y} f(x,y) \quad \text{subject to} \quad g(x,y) \le 0$$

with x ∈ ℝ^{n₁}, y ∈ ℝ^{n₂}; F : ℝ^{n₁+n₂} → ℝ^{m₁} and f : ℝ^{n₁+n₂} → ℝ^{m₂} are the objective functions of the upper and lower level respectively; G and g are the constraint functions of the upper and lower level respectively.

In the literature, the BPP and problems with multiple objectives at the upper or lower level are presented as a class of bilevel problems and are at the center of the research of several authors such as [

There are very few approaches in the literature that deal with bilevel multiobjective problems. According to Pieume et al. [

In this paper, we are interested in finding the solution of a quasiconcave bilevel programming problem (QCBPP). After formulating a bilevel multiobjective programming problem (BMPP), we characterize its leader objective and its feasible set. We then give necessary and sufficient conditions under which a convex union of sets of efficient points is an efficient set of the QCBPP. Based on this result, a QCBPP is formulated and solved. A numerical example is provided to illustrate our approach.

This paper is organized as follows: in the next section, we present some concepts and results in multiobjective programming. In Section 3, we define and formulate a BMPP. In Section 4, we give a characterization of the QCBPP. In Section 5, we illustrate our approach with a numerical example. Section 6 concludes the paper.

Preliminaries and Notations

Here, we give some concepts and results of multiobjective programming that will be used throughout the paper.

A multi-objective programming problem is formulated in general as follows:

$$\text{“}\min_{x}\text{”} \; h(x) = (h_1(x), h_2(x), \cdots, h_Q(x)), \quad x \in U \tag{MOPP}$$

where h = (h₁, ⋯, h_Q) : U ⊆ ℝ^n → ℝ^Q, the h_i are the objective functions for all i = 1, ⋯, Q, and U ⊆ ℝ^n is the feasible set. In order to solve (MOPP), it is necessary to define how the objective vectors h(x₁), h(x₂), ⋯ should be compared for different alternatives x ∈ U, that is, to define on h(U) the order used for this comparison, since for Q ≥ 2 there is no canonical (total) order in ℝ^Q. Pieume et al. [

Let C ⊂ ℝ^Q be an arbitrary cone. The binary relation ≤_C defined by a ≤_C b ⇔ b − a ∈ C gives a partial order when C is a closed, pointed, convex cone, which is the most commonly used case.

Consider the linear optimization problem (LOP)

$$\min_{x \in U} \sum_{i=1}^{Q} \lambda_i h_i(x) \quad \text{with } \sum_{i=1}^{Q} \lambda_i = 1 \text{ and } \lambda_i \ge 0 \text{ for all } i \tag{LOP}$$

where λ_i is the weight of the i-th objective h_i and reflects the relative importance of each objective.
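To make the weighted-sum idea concrete, the following sketch scalarizes a hypothetical toy bi-objective problem (not from the paper): h(x) = (x², (x − 2)²) is minimized over a discretized feasible set U = [0, 2].

```python
# Weighted-sum scalarization (LOP) on a toy bi-objective problem:
# minimize lambda1*h1(x) + lambda2*h2(x) over a discretized feasible set U.

def h(x):
    """Two conflicting objectives: h1 pulls x toward 0, h2 toward 2."""
    return (x ** 2, (x - 2) ** 2)

def weighted_sum_min(weights, grid):
    """Return the grid point minimizing the weighted sum of the objectives."""
    assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0 for w in weights)
    return min(grid, key=lambda x: sum(w * hi for w, hi in zip(weights, h(x))))

U = [i / 100 for i in range(201)]          # U = {0, 0.01, ..., 2}
x_star = weighted_sum_min((0.5, 0.5), U)   # equal weights -> compromise point
print(x_star)                              # -> 1.0
```

Different weight vectors trace out different efficient points: weights (1, 0) recover the minimizer of h₁ alone.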

Geoffrion [

Throughout the rest of the paper, the set of efficient points of a multi-objective optimization problem defined by a vector-valued function h on a feasible set U with respect to a cone C will be denoted E(h, U, ≤_C).
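For a finite set of objective vectors, E(h, U, ≤_C) with the usual cone C = ℝ₊^Q \ {0} can be computed by pairwise dominance checks. A minimal sketch on illustrative data (minimization; not an example from the paper):

```python
# Efficient points of a finite multiobjective problem w.r.t. the cone
# C = R^Q_+ \ {0}: b dominates a iff b <= a componentwise and b != a.

def dominates(b, a):
    """True if b is at least as good everywhere and strictly better somewhere."""
    return all(bi <= ai for bi, ai in zip(b, a)) and b != a

def efficient_set(points):
    """Keep the points not dominated by any other point (Pareto-minimal)."""
    return [a for a in points if not any(dominates(b, a) for b in points)]

images = [(1, 3), (2, 2), (3, 1), (3, 3), (2, 4)]
print(efficient_set(images))  # -> [(1, 3), (2, 2), (3, 1)]
```

Here (3, 3) is dominated by (2, 2) and (2, 4) by (1, 3), so only the three trade-off points survive.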

Consider problem (1), called the leader's problem, formulated as follows:

$$\min_{x} F(x,y) = (F_1(x,y), F_2(x,y), \cdots, F_{m_1}(x,y)) \quad \text{subject to } G(x) \le 0 \tag{1}$$

where, for each value of x, y is a solution of problem (2), called the follower's problem:

$$\min_{y} f(x,y) = (f_1(x,y), f_2(x,y), \cdots, f_{m_2}(x,y)) \quad \text{subject to } g(x,y) \le 0 \tag{2}$$

x = (x₁, x₂, ⋯, x_{n₁}) ∈ ℝ₊^{n₁} (resp. y = (y₁, y₂, ⋯, y_{n₂}) ∈ ℝ₊^{n₂}) is the decision variable vector controlled by the leader (resp. the follower), with n₁, n₂ ∈ ℕ*. F : ℝ^{n₁+n₂} → ℝ^{m₁} and f : ℝ^{n₁+n₂} → ℝ^{m₂} are the objective functions of the leader's and follower's problems respectively; G : ℝ^{n₁} → ℝ and g : ℝ^{n₁+n₂} → ℝ are the constraint functions of the leader's and follower's problems respectively.

Let us consider a bilevel programming problem (BPP) that comprises the leader's problem (1) at the upper level and the follower's problem (2) at the lower level. The feasible region of the upper-level problem is implicitly determined by the follower's problem (2). This problem is called a bilevel multiobjective programming problem (BMPP) and is defined as follows:

$$\begin{array}{ll}
\min\limits_{x \in \mathbb{R}_+^{n_1}} & F(x,y) = (F_1(x,y), F_2(x,y), \cdots, F_{m_1}(x,y)) \\
\text{s.t.} & G(x) \le 0, \\
& y \text{ solves } \min\limits_{y \in \mathbb{R}_+^{n_2}} f(x,y) = (f_1(x,y), f_2(x,y), \cdots, f_{m_2}(x,y)) \ \text{ s.t. } g(x,y) \le 0
\end{array} \tag{BMPP}$$

Let M = {(x, y) ∈ ℝ₊^{n₁} × ℝ₊^{n₂} : g(x, y) ≤ 0} denote the feasible region of problem (2). The solution set of the follower's problem, denoted by

$$R_1(x) = \arg\min_{y} \{ f(x,y) : (x,y) \in M \},$$

is called the lower-level reaction set for each decision x of the upper level and is defined as the set of Pareto-optimal points.
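On a discretized toy instance (hypothetical, not the paper's example), the reaction set R₁(x) can be enumerated directly: for each leader decision x, keep the follower decisions whose objective vectors are Pareto-minimal.

```python
# Lower-level reaction set R1(x) by enumeration on a toy bilevel instance:
# for fixed x, the follower minimizes f(x, y) = ((y - x)^2, y) over y in Y.

Y = range(5)  # follower's (discrete) feasible decisions

def f(x, y):
    """Two follower objectives: track the leader's x, but keep y small."""
    return ((y - x) ** 2, y)

def dominates(b, a):
    return all(bi <= ai for bi, ai in zip(b, a)) and b != a

def reaction_set(x):
    """R1(x): follower decisions whose objective vectors are nondominated."""
    return [y for y in Y if not any(dominates(f(x, z), f(x, y)) for z in Y)]

print(reaction_set(2))  # -> [0, 1, 2]
```

For x = 2, the decisions y = 3 and y = 4 are dominated (y = 1 and y = 0 are at least as good in both objectives), so the reaction set is {0, 1, 2}; the whole set, not a single point, is the follower's Pareto response.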

Let us define a lower-level solution y for every feasible x as a map

$$y : \mathbb{R}_+^{n_1} \to \mathbb{R}_+^{n_2}, \quad x \mapsto y(x),$$

where x ∈ ℝ₊^{n₁} is a parameter of the follower's problem (2).

Consider R₁(x) = {y = y(x)} and let M̃ = {(x, y) ∈ ℝ₊^{n₁} × ℝ₊^{n₂} : G(x) ≤ 0} ⊂ ℝ₊^{n₁} × ℝ₊^{n₂} be a compact set. The bilevel multiobjective programming problem (BMPP) can then be reformulated as follows:

$$\min_{x,y} F(x, y(x)) = (F_1(x,y(x)), F_2(x,y(x)), \cdots, F_{m_1}(x,y(x))), \quad y(x) \in R_1(x), \; (x,y) \in \tilde{M} \tag{BMPP}$$

Denote by Ω₁ the feasible set (also called the induced set) of the BMPP, given by:

$$\Omega_1 = \{ (x,y) \in \mathbb{R}_+^{n_1} \times \mathbb{R}_+^{n_2} : y \in R_1(x), \; (x,y) \in \tilde{M} \}$$

The optimistic formulation of BMPP is given by:

$$\min_{x \in \mathbb{R}_+^{n_1}} F(x, y(x)) = (F_1(x,y(x)), F_2(x,y(x)), \cdots, F_{m_1}(x,y(x))), \quad (x,y) \in \Omega_1 \tag{BMPP}$$

For a fixed x ∈ ℝ + n 1 , if y is a Pareto optimal solution of the follower’s problem, then ( x , y ) ∈ Ω 1 is a feasible solution to the BMPP.

Let F be the objective function of the BMPP.

Definition 1. The objective function F of the BMPP, defined on a convex subset Ω₁ of ℝ^{n₁} × ℝ^{n₂} with values in ℝ, is quasiconcave if for every real k the set

$$U_k = \{ (x,y) \in \Omega_1 : F(x,y) \ge k \}$$

is convex.
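Definition 1 can be probed numerically on sample points: for a quasiconcave F, the equivalent segment inequality F((1 − λ)a + λb) ≥ min(F(a), F(b)) must hold for every pair of points and every λ ∈ [0, 1]. A minimal sketch with the illustrative function F(x, y) = min(x, y), a classic quasiconcave function (not one of the paper's objectives):

```python
import random

# Numerical probe of quasiconcavity: F is quasiconcave iff every upper level
# set U_k = {p : F(p) >= k} is convex, equivalently
# F((1 - lam)*a + lam*b) >= min(F(a), F(b)) for all a, b and lam in [0, 1].

def F(p):
    """min(x, y) is quasiconcave (indeed concave) on the convex set R^2_+."""
    return min(p)

def is_quasiconcave_on_samples(F, samples, lams):
    for a in samples:
        for b in samples:
            for lam in lams:
                p = tuple((1 - lam) * ai + lam * bi for ai, bi in zip(a, b))
                if F(p) < min(F(a), F(b)) - 1e-12:
                    return False  # a violated segment: not quasiconcave
    return True

random.seed(0)
samples = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(20)]
lams = [i / 10 for i in range(11)]
print(is_quasiconcave_on_samples(F, samples, lams))  # -> True
```

The same probe rejects a non-quasiconcave function such as G(x) = x² on an interval containing 0, since the midpoint of −1 and 1 has a strictly smaller value than both endpoints.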

Lemma 1. Let Ω₁ be a convex subset of ℝ^{n₁+n₂} with nonempty interior and let F : Ω₁ → ℝ be quasiconcave. Then for all (x₁, y₁), (x₂, y₂) ∈ Ω₁ and all λ ∈ [0, 1],

$$F((1-\lambda)(x_1,y_1) + \lambda (x_2,y_2)) \ge \min(F(x_1,y_1), F(x_2,y_2)).$$

Proof:

Suppose F is quasiconcave on the convex set Ω₁, and let (x₁, y₁), (x₂, y₂) ∈ Ω₁ and λ ∈ [0, 1]. Apply the definition of quasiconcavity of F with k = min(F(x₁, y₁), F(x₂, y₂)). We have F(x₁, y₁) ≥ k and F(x₂, y₂) ≥ k, that is, (x₁, y₁), (x₂, y₂) ∈ U_k, which is convex by the hypothesis on F. Therefore (1 − λ)(x₁, y₁) + λ(x₂, y₂) ∈ U_k; in other words, F((1 − λ)(x₁, y₁) + λ(x₂, y₂)) ≥ k = min(F(x₁, y₁), F(x₂, y₂)).

Lemma 1 thus characterizes the quasiconcavity of F on the convex set Ω₁.

Theorem 1. Let Ω₁ be a nonempty convex and compact subset of ℝ^{n₁} × ℝ^{n₂} and let F : Ω₁ → ℝ. If F is quasiconcave and continuous, then there exists an extreme point (x*, y*) of Ω₁ which is an optimal solution of the BMPP.

Proof:

Suppose F is quasiconcave and continuous; we show that there is a point (x*, y*) that is an optimal solution of the BMPP.

Since Ω₁ is a non-empty compact set and F is continuous, the set of optimal points is non-empty; denote by (x*, y*) an optimal solution of the BMPP.

Let (x, y) ∈ Ω₁. Since F is quasiconcave and Ω₁ is convex, for all λ ∈ [0, 1], λ(x*, y*) + (1 − λ)(x, y) ∈ Ω₁. Since F is continuous on Ω₁, F(λ(x*, y*) + (1 − λ)(x, y)) ∈ F(Ω₁). Hence F(x*, y*) ∈ F(Ω₁) and (x*, y*) is an optimal solution of the BMPP.

Definition 2. The feasible point (x*, y*) ∈ Ω₁ is an optimal solution of the BMPP if F(x*, y*) ≤ F(x, y) for every point (x, y) ∈ Ω₁.

For the BMPP, note that a solution (x*, y*) is optimal for the upper-level problem if and only if y* is an optimal solution of the lower-level problem with x = x*.

Given a fixed value of y = y ( x ) , the problem (2) can be rewritten as follows:

$$\min_{x} f(x, y(x)) = (f_1(x,y(x)), f_2(x,y(x)), \cdots, f_{m_2}(x,y(x))), \quad (x,y) \in T$$

In the following, let C = ℝ₊^{m₂} \ {0_{m₂}} be a cone. The feasible region of the follower's problem is the set T = {(x, y) ∈ ℝ₊^{n₁} × ℝ₊^{n₂} : g(x, y) ≤ 0}, and E(f, T, ≤_C) = {(x*, y*) ∈ T : f(x*, y*) is nondominated with respect to ≤_C} is the efficient set.

Theorem 2. Let J ⊂ {1, ⋯, r} with W = {r : 1 ≤ r ≤ n}, and let [E(f, T, ≤_C)]₁, ⋯, [E(f, T, ≤_C)]_r be non-empty efficient subsets of Ω₁. Then the following holds:

$$\bigcup_{j \in J} [E(f,T,\le_C)]_j \subset \Omega_1$$

Proof:

Let (x*, y*) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j. Then there exists j₀ ∈ J such that (x*, y*) ∈ [E(f, T, ≤_C)]_{j₀} ⊂ Ω₁, hence (x*, y*) ∈ Ω₁. Therefore ∪_{j∈J} [E(f, T, ≤_C)]_j ⊂ Ω₁.

Theorem 2 allows us to say that ∪_{j∈J} [E(f, T, ≤_C)]_j is the efficient set of problem (2).

Lemma 2. If F satisfies F(λ(x*, y*) + (1 − λ)(x, y)) ≥ min(F(x*, y*), F(x, y)) for all (x*, y*), (x, y) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j and λ ∈ [0, 1], then λ(x*, y*) + (1 − λ)(x, y) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j.

Proof:

Suppose F is quasiconcave; we show that ∪_{j∈J} [E(f, T, ≤_C)]_j is convex.

Let (x*, y*), (x, y) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j and λ ∈ [0, 1]. Set k = min(F(x*, y*), F(x, y)); then F(x*, y*) ≥ k and F(x, y) ≥ k. Therefore F(λ(x*, y*) + (1 − λ)(x, y)) ≥ min(F(x*, y*), F(x, y)) ≥ k, which implies that λ(x*, y*) + (1 − λ)(x, y) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j.

Theorem 3. Let J ⊂ {1, ⋯, r} with W = {r : 1 ≤ r ≤ n}, and let [E(f, T, ≤_C)]₁, ⋯, [E(f, T, ≤_C)]_r be non-empty efficient subsets of Ω₁. Then the following holds:

$$\Omega_1 \subset \bigcup_{j \in J} [E(f,T,\le_C)]_j$$

Proof:

Let (x*, y*) ∈ Ω₁. Then (x*, y*) ∈ [E(f, T, ≤_C)]_j for some j ∈ J, hence (x*, y*) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j. Thus Ω₁ ⊂ ∪_{j∈J} [E(f, T, ≤_C)]_j.

Let T = {(x, y) ∈ ℝ₊^{n₁} × ℝ₊^{n₂} : y ∈ R₁(x), (x, y) ∈ M̃} and consider the following constructed follower's problem:

$$\min_{x,y} f(x, y(x)) = (f_1(x,y(x)), f_2(x,y(x)), \cdots, f_{m_2}(x,y(x)), y) \quad \text{subject to } (x,y) \in T$$

Let C₁ = ℝ₊^{m₂} \ {0_{m₂}} × {0_{n₂}} ⊂ ℝ₊^{m₂} × ℝ₊^{n₂}. The result of Theorem 4 then follows from Theorems 2 and 3.

Theorem 4. $\bigcup_{j \in J} [E(f,T,\le_{C_1})]_j = \Omega_1$

Since the set ∪_{j∈J} [E(f, T, ≤_{C₁})]_j is convex, solving the BMPP is then equivalent to solving the quasiconcave problem:

$$\min_{x \in \mathbb{R}_+^{n_1}} F(x) = (F_1(x,y(x)), F_2(x,y(x)), \cdots, F_{m_1}(x,y(x))), \quad (x,y) \in \bigcup_{j \in J} [E(f,T,\le_{C_1})]_j$$

Definition 3. If (x*, y*) is a feasible solution of the QCBPP and there is no (x, y) ∈ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j such that F(x, y) ≤ F(x*, y*), then (x*, y*) is a Pareto optimal (efficient) solution of the QCBPP, where the binary relation ≤ defines a partial order on F(∪_{j∈J} [E(f, T, ≤_{C₁})]_j).

Theorem 5. (x*, y*) ∈ Ω₁ is an optimal solution of the BMPP if and only if (x*, y*) ∈ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j is an efficient solution of the QCBPP.

Proof:

(⇒) Suppose (x*, y*) is an optimal solution of the BMPP; we show that (x*, y*) is an efficient solution of the QCBPP.

Let (x*, y*) ∈ Ω₁. Since F is continuous on Ω₁, F(x*, y*) ∈ F(Ω₁). By Theorem 3, (x*, y*) ∈ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j, and by Definition 3 there is no (x, y) ∈ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j such that F(x, y) ≤ F(x*, y*). Then F(x*, y*) ∈ F(∪_{j∈J} [E(f, T, ≤_{C₁})]_j). Hence (x*, y*) ∈ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j is an efficient solution of the QCBPP.

(⇐) Suppose (x*, y*) is an efficient solution of the QCBPP; we show that (x*, y*) is an optimal solution of the BMPP.

Let (x*, y*) ∈ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j be an efficient solution of the QCBPP, and let (x, y) ∉ ∪_{j∈J} [E(f, T, ≤_{C₁})]_j be such that: 1) F(x, y) ≤ F(x*, y*). Taking Theorem 4 into account, one has: 2) F(x*, y*) ≤ F(x, y) for all (x, y) ∈ Ω₁. By relations 1) and 2), F(x*, y*) ∈ F(Ω₁). Therefore (x*, y*) ∈ Ω₁ is an optimal solution of the BMPP.

Let J ⊂ {1, ⋯, r} and let [E(f, T, ≤_{C₁})]_{j₀} be the efficient subset of ∪_{j∈J} [E(f, T, ≤_{C₁})]_j such that, for a fixed j₀ ∈ J, (x*, y*) ∈ [E(f, T, ≤_{C₁})]_{j₀} is a minimizing solution of the problem

$$\min_{x \in \mathbb{R}_+^{n_1}} F(x) = (F_1(x,y(x)), F_2(x,y(x)), \cdots, F_{m_1}(x,y(x))), \quad (x,y) \in [E(f,T,\le_{C_1})]_{j_0} \tag{3}$$

Let (x*, y*) be an optimal solution of the following problem:

$$\min_{(x,y) \in \Omega_1} \sum_{i=1}^{m_1} \lambda_i F_i(x,y) \tag{4}$$

If λ_i > 0 is fixed for all i and (x*, y*) is an optimal solution of (4), then (x*, y*) is an efficient solution of (3).

If λ_i ≥ 0 is fixed for all i and (x*, y*) is an optimal solution of (4), then (x*, y*) is a weakly efficient solution of (3).

That is, (x*, y*) is an efficient solution, as well as a weakly efficient solution, in [E(f, T, ≤_{C₁})]_{j₀}.

Therefore, [ E ( f , T , ≤ C 1 ) ] j 0 represents the efficient subset in which ( x * , y * ) is an efficient solution to the QCBPP.

This example is taken from [ . The leader's problem is:

$$\max_{x} F(x,y) = (x_1 + 2x_2, \; 3x_1 + x_2) \quad \text{subject to: } x_1 + x_2 \le 3, \; x_1, x_2 \ge 0$$

and the follower’s problem:

$$\max_{y} f(x,y) = (y_1 + 3y_2, \; 2y_1 + y_2) \quad \text{subject to: } -x_1 + y_1 + y_2 \le 6, \; -x_2 + y_1 \le 3, \; x_1 + x_2 + y_2 \le 8, \; y_1, y_2 \ge 0$$

x = ( x 1 , x 2 ) ∈ ℝ + 2 , (resp. y = ( y 1 , y 2 ) ∈ ℝ + 2 ) are the decision variable vectors controlled by the leader (resp. the follower).

The two multiobjective problems are combined as follows:

$$\begin{array}{ll}
\max\limits_{x_1,x_2} & (x_1 + 2x_2, \; 3x_1 + x_2) \\
\text{s.t.} & x_1 + x_2 \le 3, \quad x_1, x_2 \ge 0, \\
& y \text{ solves } \max\limits_{y_1,y_2} (y_1 + 3y_2, \; 2y_1 + y_2) \\
& \quad \text{s.t. } -x_1 + y_1 + y_2 \le 6, \; -x_2 + y_1 \le 3, \; x_1 + x_2 + y_2 \le 8, \; y_1, y_2 \ge 0
\end{array} \tag{BLMPP}$$

Let M = {(x, y) ∈ ℝ₊² × ℝ₊² : −x₁ + y₁ + y₂ ≤ 6, −x₂ + y₁ ≤ 3, x₁ + x₂ + y₂ ≤ 8} be the constraint region of the lower-level problem

and let R₁(x) = arg max_y {f(x, y) = (y₁ + 3y₂, 2y₁ + y₂) : (x, y) ∈ M} be the solution set of the lower-level problem.
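As a numerical check (a sketch, assuming the leader fixes x = (1.5, 1.5) as in the sequel; the grid resolution of 0.25 is an arbitrary choice), one can sample the follower's feasible region and confirm that y = (4.1154, 3.3846) is feasible and nondominated for the maximization of (y₁ + 3y₂, 2y₁ + y₂):

```python
# Grid check that y* = (4.1154, 3.3846) is a Pareto-optimal (nondominated)
# follower response for x = (1.5, 1.5) in the maximization problem above.

x1, x2 = 1.5, 1.5
y_star = (4.1154, 3.3846)

def feasible(y1, y2):
    return (y1 >= 0 and y2 >= 0
            and -x1 + y1 + y2 <= 6      # i.e. y1 + y2 <= 7.5
            and -x2 + y1 <= 3           # i.e. y1 <= 4.5
            and x1 + x2 + y2 <= 8)      # i.e. y2 <= 5

def f(y1, y2):
    return (y1 + 3 * y2, 2 * y1 + y2)   # follower objectives (maximized)

def dominates(b, a):
    """b dominates a (maximization): b >= a componentwise and b != a."""
    return all(bi >= ai for bi, ai in zip(b, a)) and b != a

grid = [(i / 4, j / 4) for i in range(33) for j in range(33)
        if feasible(i / 4, j / 4)]
assert feasible(*y_star)
print(any(dominates(f(*y), f(*y_star)) for y in grid))  # -> False
```

No sampled feasible point dominates f(y*) = (14.2692, 11.6154): y* lies on the binding face y₁ + y₂ = 7.5, along which the two objectives trade off against each other.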

Consider y = (4.1154, 3.3846), a Pareto optimal solution of the follower's problem, and let M̃ = {(x, y) ∈ ℝ² × ℝ¹ : x₁ + x₂ ≤ 3} be a compact set. The BLMPP becomes:

$$\max_{x,y} F(x, y(x)) = (x_1 + 2x_2, \; 3x_1 + x_2), \quad y(x) = (4.1154, 3.3846) \in R_1(x), \; (x,y) \in \tilde{M}$$

This problem can be formulated as:

$$\begin{array}{ll}
\max\limits_{x_1,x_2} & (x_1 + 2x_2, \; 3x_1 + x_2) \\
\text{s.t.} & x_1 + x_2 \le 3, \\
& -x_1 \le -1.5, \\
& -x_2 \le -1.1154, \\
& x_1 + x_2 \le 4.6154, \\
& x_1, x_2 > 0
\end{array}$$
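Since the objectives and constraints are linear, efficient solutions of this reformulated problem lie at vertices of the feasible polygon. The following sketch (assuming the 2-D constraint set above) enumerates the vertices and keeps those whose objective vectors (x₁ + 2x₂, 3x₁ + x₂) are nondominated under maximization; it confirms that (1.5, 1.5) is one of the efficient vertices:

```python
from itertools import combinations

# Constraints a1*x1 + a2*x2 <= b of the reformulated leader problem.
cons = [
    (1.0, 1.0, 3.0),       # x1 + x2 <= 3
    (-1.0, 0.0, -1.5),     # -x1 <= -1.5   (x1 >= 1.5)
    (0.0, -1.0, -1.1154),  # -x2 <= -1.1154 (x2 >= 1.1154)
    (1.0, 1.0, 4.6154),    # x1 + x2 <= 4.6154
]

def feasible(x1, x2, tol=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + tol for a1, a2, b in cons)

def vertices():
    """Intersect each pair of constraint boundaries; keep feasible points."""
    vs = []
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x1 = (b * c2 - a2 * d) / det
        x2 = (a1 * d - b * c1) / det
        if feasible(x1, x2):
            vs.append((round(x1, 4), round(x2, 4)))
    return vs

def F(x1, x2):
    return (x1 + 2 * x2, 3 * x1 + x2)  # leader objectives (maximized)

def dominates(b, a):
    return all(bi >= ai for bi, ai in zip(b, a)) and b != a

vs = vertices()
eff = [v for v in vs if not any(dominates(F(*w), F(*v)) for w in vs)]
print(sorted(eff))  # (1.5, 1.5) is among the efficient vertices
```

Of the three feasible vertices, (1.5, 1.1154) is dominated, leaving (1.5, 1.5) with F = (4.5, 6.0) and (1.8846, 1.1154) with F = (4.1154, 6.7692) as the efficient vertices; the paper selects (1.5, 1.5).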

Ω₁, the feasible set (also called the induced set) of the reformulated BMPP, is:

$$\Omega_1 = \{ (x,y) \in \mathbb{R}_+^2 \times \mathbb{R}_+^1 : x_1 + x_2 \le 3, \; -x_1 \le -1.5, \; -x_2 \le -1.1154, \; x_1 + x_2 \le 4.6154 \}$$

Lemma 1 establishes that F is a quasiconcave function on the convex set Ω₁.

According to Theorem 1, (1.5, 1.5, 0) ∈ Ω₁ is an optimal solution of the QCBPP for the fixed y = y(x), an optimal solution of the follower's problem:

$$\Omega_1 = \{ y = y(x) = (4.1154, 3.3846) \in R_1(x), \; M_1 \cap \tilde{M} = (1.5, 1.5, 0) \}$$

Therefore, the follower’s problem is as follows:

$$\max_{x} f(x, y(x)) = (y_1 + 3y_2, \; 2y_1 + y_2), \quad x \in M_1$$

The feasible region of the follower's problem is the set

$$M_1 = \{ (x,y) \in \mathbb{R}_+^2 \times \mathbb{R}_+^1 : -x_1 \le -1.5, \; -x_2 \le -1.1154, \; x_1 + x_2 \le 4.6154 \}.$$

Here the cone is C = ℝ₊² \ {(0, 0)} and n = 7, hence W = {r : 1 ≤ r ≤ 7}.

With r = sup W = 7, one has [E(f, T, ≤_C)]₁ = {f(1.6, 1.2, 0) : (1.6, 1.2, 0) ∈ M₁}, ⋯, [E(f, T, ≤_C)]₄ = {f(1.5, 1.5, 0) : (1.5, 1.5, 0) ∈ M₁}, ⋯, [E(f, T, ≤_C)]₇ = {f(1.5, 1.4, 0) : (1.5, 1.4, 0) ∈ M₁}, which are non-empty efficient subsets of the follower's problem.

$$\bigcup_{j \in J} [E(f,T,\le_C)]_j = \{ (1.6, 1.2, 0), \cdots, (1.5, 1.5, 0), \cdots, (1.5, 1.4, 0) \} \subset \Omega_1$$

The objective function F : ∪_{j∈J} [E(f, T, ≤_{C₁})]_j ⊆ ℝ³ → ℝ² of the BMPP verifies F(λ(1.5, 1.5, 0) + (1 − λ)(x, y)) ≥ min(F(1.5, 1.5, 0), F(x, y)) for all (1.5, 1.5, 0), (x, y) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j and λ ∈ [0, 1], which implies λ(1.5, 1.5, 0) + (1 − λ)(x, y) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j, and F is a quasiconcave function on the convex set ∪_{j∈J} [E(f, T, ≤_C)]_j.

(1.5, 1.5, 0) ∈ ∪_{j∈J} [E(f, T, ≤_C)]_j: with j₀ = 4 ∈ J we have (1.5, 1.5, 0) ∈ [E(f, T, ≤_C)]₄ ⊂ Ω₁. (1.5, 1.5, 0) ∈ Ω₁ implies that ∪_{j∈J} [E(f, T, ≤_C)]_j ⊂ Ω₁.

According to Theorem 3, we also have:

$$\Omega_1 \subset \bigcup_{j \in J} [E(f,T,\le_C)]_j$$

The follower’s problem is constructed as follows:

$$\min_{x,y} f(x, y(x)) = ((y_1 + 3y_2, \; 2y_1 + y_2), \; y) \quad \text{subject to: } x_1 + x_2 \le 3, \; -x_1 \le -1.5, \; -x_2 \le -1.1154, \; x_1 + x_2 \le 4.6154, \; x_1, x_2 > 0$$

Here C₁ = ℝ₊² \ {(0, 0)} × {0} ⊂ ℝ₊² × ℝ₊¹.

$$\bigcup_{j \in J} [E(f,T,\le_{C_1})]_j = \{ (x,y) : x_1 + x_2 \le 3, \; -x_1 \le -1.5, \; -x_2 \le -1.1154, \; x_1 + x_2 \le 4.6154, \; x_1, x_2 > 0 \} = \Omega_1$$

The set ∪_{j∈J} [E(f, T, ≤_{C₁})]_j is convex, and solving the BMPP is equivalent to solving the quasiconcave problem:

$$\max_{x \in \mathbb{R}_+^{2}} (x_1 + 2x_2, \; 3x_1 + x_2), \quad (x,y) \in \bigcup_{j \in J} [E(f,T,\le_{C_1})]_j$$

Theorem 5 says that (1.5, 1.5, 0) is an optimal solution of the BMPP if and only if it is an efficient solution of the QCBPP.

j₀ = 4 ∈ {1, ⋯, 7} implies that (1.5, 1.5, 0) ∈ [E(f, T, ≤_{C₁})]₄, and (1.5, 1.5, 0) is a maximizing solution of the problem:

$$\max_{x \in \mathbb{R}_+^2} (x_1 + 2x_2, \; 3x_1 + x_2), \quad (x,y) \in [E(f,T,\le_{C_1})]_4 \tag{I}$$

( 1.5 , 1.5 , 0 ) is an optimal solution of the following problem:

$$\min_{(x,y) \in \Omega_1} \sum_{i=1}^{2} \lambda_i F_i(x_1, x_2), \quad \lambda_i \ge 0 \text{ with } \sum_{i=1}^{2} \lambda_i = 1 \tag{II}$$

where F 1 ( x 1 , x 2 ) = x 1 + 2 x 2 , F 2 ( x 1 , x 2 ) = 3 x 1 + x 2

For λ_i > 0 fixed for all i, for instance λ = (λ₁ = 0.5, λ₂ = 0.4615), (1.5, 1.5, 0) is an optimal solution of (II) and an efficient solution of (I). Also, with λ_i ≥ 0, (1.5, 1.5, 0) is an optimal solution of (II) and a weakly efficient solution of (I). Thus, (1.5, 1.5, 0) is an efficient solution as well as a weakly efficient solution in [E(f, T, ≤_{C₁})]₄, and therefore [E(f, T, ≤_{C₁})]₄ represents the efficient subset in which (1.5, 1.5, 0) is the solution of the QCBPP.

In this paper, we have uniquely defined a lower-level solution for every upper-level feasible solution as a parameter of the follower's problem. We have formulated a Bilevel Multiobjective Programming Problem (BMPP), considered its quasiconcave objective function, and shown that there is an extreme point of the feasible set that is an optimal solution of the BMPP. We have proven a theorem showing that an optimal solution of the BMPP is an efficient solution of the QCBPP. Based on this result, we presented an efficient solution that is also a weakly efficient solution in the efficient subset. We proved that this efficient solution is the solution of the QCBPP. Thus, we conclude that solving the BMPP is equivalent to solving the QCBPP.

Balme, D. and Fotso, L.P. (2017) Solving Quasiconcave Bilevel Programming Problem. American Journal of Operations Research, 7, 121-132. https://doi.org/10.4236/ajor.2017.72009