
In real supply chains, different parties, such as production facilities, logistics warehouses, and retail stores, handle common kinds of products. These situations form the background of this research. Although the parties share common production quantities, their environments differ, so a production quantity that is optimal for one party can be unacceptable to another and may cause it a heavy loss. To avoid such situations, the common production quantities should be acceptable to all parties in the supply chain. The motivation of this research is therefore the need for a method that finds production quantities acceptable to all decision makers. However, finding such quantities is difficult, and the decision makers' acceptable ranges do not always overlap. Similar situations arise in the decision-making of car design. The performance of a car comprises multiple objectives, such as fuel efficiency and size, and improving one objective worsens another; that is, the objectives are in a trade-off relationship. In such cases, the Suriawase process is applied. This process consists of negotiations and reviews of the requirements for the objectives. In the negotiation step, the requirements are shared among all decision makers, and a solution that satisfies them as much as possible is sought. In the review step, if the negotiation result is unacceptable to some decision makers, the requirements are revised. By iterating these two steps, a solution that satisfies all decision makers is obtained.
However, while previous research quantified the effect of a single decision maker reviewing requirements in the Suriawase process, it did not present a mathematical model that modifies the ranges of production quantities of all decision makers simultaneously. Therefore, this research proposes a mathematical model of multi-player, multi-objective decision-making based on the Suriawase process. The model builds on a previous formulation that combines linear physical programming (LPP) and robust optimization (RO): LPP is a multi-objective optimization method, and RO is used to balance the preference levels among decision makers. LPP requires the preference ranges of all objective functions, which are assumed to be available in this research. The earlier work, however, did not show how to control the effect of RO; if that effect is too strong, the average preference level deteriorates. The purpose of this research is thus to reproduce the mathematical model of multi-player, multi-objective decision-making based on the Suriawase process and to propose a method for controlling the effect of RO. The proposed model yields a set of solutions to the negotiation problem, as demonstrated by a numerical experiment. We therefore conclude that the proposed model can be used to obtain a set of solutions to negotiation problems in a supply chain.

A supply chain consists of various products, stages, and players, such as production facilities, logistics warehouses, and retail stores. It may also treat common products. Recent advancements in the manufacturing industry, such as the advent of Industry 4.0, have paved the way for a system-wide deployment where information from all related perspectives can be closely monitored and synchronized between the physical factory floor and cyberspace [

To achieve a successful production system in this era, it is necessary to understand how existing non-Industry 4.0-ready production systems can be expanded to eventually play a role in an Industry 4.0 supply chain [

Successful supply chain coordination faces the need to ensure the sustainable evolution in social, environmental, and economic dimensions for all involved [

As mentioned, each stage has different optimal production quantities. Specifically, each has an acceptable range of production quantities, but these ranges may not always be the same. The supply chain for a car design is a good example of this situation with this type of decision-making. In order for the carmaker to maximize effectiveness, different auto parts are designed by different players. To determine the total design, each part manufacturer has to make adjustments. In Japanese, this is called the “Suriawase” (harmonization or integration) process [

The proposed model uses two important techniques. The first is multi-objective optimization. Here, each target value of each objective function is considered. Goal programming (GP) [

The second technique is multi-player decision-making. However, in their basic forms, neither GP nor LPP can be used to address multi-player problems. Then, the model to apply LPP to multi-player is developed by using the idea of robust optimization (RO) [

The purpose of this research is to develop a model for deciding production quantities that satisfies the players at all stages of the supply chain. However, the desirable production quantity of each player differs from those of the other players because their environments differ. Moreover, improving one objective may make another less desirable; in other words, there may be trade-off relations among the objectives. The Suriawase process is a negotiation method for multiple players with different preferences and trade-off relations, and it makes it possible to find a solution that all the decision makers are satisfied with. The process is applied to product development, where the product has multiple objectives (for a car, e.g., fuel efficiency and size). The decision makers share the requirements for all objectives with one another, and a negotiation result is obtained on that basis. Then, if some of the requirements are not satisfied sufficiently, all the requirements are reviewed. The iterations of negotiations and reviews of the requirements continue until all the decision makers are satisfied with the result. The detailed flow of the Suriawase process is as follows:

1) Each decision maker makes the initial optimal design (solution) and the requirements of it.

2) The requirements of all decision makers are shared with each other.

3) Based on the requirements, the decision makers make one alternative solution through negotiation.

4) If the alternative solution is not acceptable to some of the decision makers, all decision makers review the requirements and return to step 2.

5) If the alternative solution is acceptable to all decision makers, it is regarded as the final design (solution) and this process terminates.

Steps 1 to 3 constitute the negotiation stage, and Step 4 is the stage of reviewing the requirements. This research focuses on the negotiation stage.
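The five steps above can be sketched as a generic loop. This is our own illustrative stand-in, not a model from the paper: `negotiate`, `acceptable`, and `review` are placeholder callables supplied by the caller.

```python
# Hypothetical sketch of the Suriawase iteration (Steps 1-5 above).
# `negotiate`, `acceptable`, and `review` are placeholder callables.

def suriawase(requirements, negotiate, acceptable, review, max_rounds=10):
    """Iterate negotiation and requirement reviews until all players accept."""
    for _ in range(max_rounds):
        solution = negotiate(requirements)          # Steps 2-3: share and negotiate
        if all(acceptable(r, solution) for r in requirements):
            return solution                         # Step 5: final design
        # Step 4: each player revises their requirement, then renegotiate
        requirements = [review(r, solution) for r in requirements]
    return None                                     # no consensus reached
```

For example, with requirements given as target values, negotiation as averaging, and review as moving a requirement halfway toward the last proposal, the loop terminates as soon as the compromise falls within every player's tolerance.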

For this case, the multi-objective optimization of the target values of the objective functions is the focus. The GP and LPP methods are known for this type of multi-objective optimization. We discuss the LPP method. In ordinary GP, the objective functions and constraints are given as linear functions [

In the first step, the preference ranges of the objectives are given for different target value levels.

In the following example, the preference ranges of an objective μ are given as follows:

Ideal range: μ ≤ 25

Desirable range: 25 < μ ≤ 31

Tolerable range: 31 < μ ≤ 36

Undesirable range: 36 < μ ≤ 44

Highly undesirable range: 44 < μ ≤ 50

Unacceptable range: 50 < μ

That is, this case has six preference ranges and five target values (25, 31, 36, 44, and 50).

The objective functions are classified into four types. “1S” means the smaller value of the objective function is more ideal. “2S” means that the larger value of the objective function is more ideal. “3S” means that a given value of the objective function is the most ideal. “4S” means that a given range of the objective function is the range of the most ideal values.
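As an illustration of these range types, the following helper (our own, not from the paper) maps an objective value μ to its preference level for a 1S-type objective, using the example target values 25, 31, 36, 44, and 50 above.

```python
# Minimal sketch: map an objective value mu to its preference level for a
# 1S-type objective (smaller is better), given the ascending target values
# t_1 < t_2 < ... that separate the levels.

LEVELS = ["Ideal", "Desirable", "Tolerable",
          "Undesirable", "Highly undesirable", "Unacceptable"]

def preference_level(mu, targets):
    """Return the level name containing mu for a 1S objective."""
    for level, t in zip(LEVELS, targets):
        if mu <= t:
            return level
    return LEVELS[len(targets)]   # beyond the last target: Unacceptable
```

With targets `[25, 31, 36, 44, 50]`, a value of 33 falls in the Tolerable range (31 < μ ≤ 36), matching the ranges listed above. A 2S objective would reverse the comparison; 3S and 4S combine both directions around a most-ideal value or range.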

In the second step, based on the preference ranges from the first step, the weight coefficients are calculated by the seven-step algorithm below. The following definitions apply: n objectives, each with n s preference levels ( s = 1 , ⋯ , n s ), are given. t i s is the target value of level s of objective i . The target values are classified into t i s + and t i s − ; the target value is t i s + if it is

| Level | Range |
| --- | --- |
| Ideal | <25 |
| Desirable | 25 - 31 |
| Tolerable | 31 - 36 |
| Undesirable | 36 - 44 |
| Highly Undesirable | 44 - 50 |
| Unacceptable | >50 |

larger than the most ideal target value or range (1S, 3S, and 4S); in contrast, the target value is t i s − if it is smaller than the most ideal value or range (2S, 3S, and 4S). The variables d i s + and d i s − ( s = 2 , ⋯ , n s ) show how far the objective value is from the target values t i ( s − 1 ) + and t i ( s − 1 ) − . The weight coefficient of level s (between target values t i ( s − 1 ) and t i s ) of objective i is denoted as w i s and classified into w i s + and w i s − . The length of the preference range of level s , the increment of the weight coefficients between levels s − 1 and s , and the distance of the preference function values between t i ( s − 1 ) and t i s are denoted as t ˜ i s , w ˜ i s , and z ˜ s , respectively ( s = 2 , ⋯ , n s ). β is a common parameter used to determine the preference functions of all objectives. To calculate β , the OVO (one vs. others) rule is used; this rule maintains the balance of the preference levels across all objectives. For example, when 10 objectives are given, the case where the preference levels of all objectives are “Desirable” is better than the case where nine objectives are “Ideal” and the remaining one is “Tolerable”. Therefore, the following inequality is imposed.

z ˜ s > ( n s − 1 ) z ˜ s − 1 (1)

By introducing the parameter β ( > 1 ) , this inequality is converted into the equation

z ˜ s = β ( n s − 1 ) z ˜ s − 1 (2)

Then, z ˜ s is used to calculate the weight coefficient w i s , as follows.

w i s + = z ˜ s / t ˜ i s + , w i s − = | z ˜ s / t ˜ i s − | (3)

Therefore, if β is not large enough, the increments of the weight coefficients between consecutive levels become too small. Thus, β is calculated to ensure that the minimum w ˜ i s ( w ˜ min ) is large enough in the following algorithm.

Step 1. Initial condition: β = 1.1 ; w i 1 + = w i 1 − = 0 ; z ˜ 2 = small number; i = 0 ; s = 1

Step 2. i = i + 1

Step 3. s = s + 1

Step 4. z ˜ s = β ( n s − 1 ) z ˜ s − 1 ( 3 ≤ s ≤ n s )

t ˜ i s + = t i s + − t i ( s − 1 ) + ( 2 ≤ s ≤ n s ) ( 1 S , 3 S , 4 S )

t ˜ i s − = t i s − − t i ( s − 1 ) − ( 2 ≤ s ≤ n s ) ( 2 S , 3 S , 4 S )

w i s + = z ˜ s / t ˜ i s + ( 2 ≤ s ≤ n s ) ( 1 S , 3 S , 4 S )

w i s − = | z ˜ s / t ˜ i s − | ( 2 ≤ s ≤ n s ) ( 2 S , 3 S , 4 S )

w ˜ i s + = w i s + − w i ( s − 1 ) + ( 2 ≤ s ≤ n s ) ( 1 S , 3 S , 4 S )

w ˜ i s − = w i s − − w i ( s − 1 ) − ( 2 ≤ s ≤ n s ) ( 2 S , 3 S , 4 S )

w ˜ min = min i , s ( w ˜ i s + , w ˜ i s − ) > 0 ( 2 ≤ s ≤ n s )

Step 5. If w ˜ min is smaller than a chosen small positive value (e.g., 0.1), β = β + 1 , i = 0 , s = 1 and go back to Step 2.

Step 6. If s ≠ n s , go to Step 3.

Step 7. If i = n , terminate; otherwise, go to Step 2.
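The seven steps above can be sketched in code. This is our own simplification, restricted to 1S-type objectives so that only the "+" quantities appear; the defaults for z̃ 2 , the minimum-increment threshold, and the flat `while` loop (in place of the goto-style steps) are our assumptions.

```python
# Sketch of the beta-tuning weight algorithm (Steps 1-7), 1S objectives only.
# targets[i] = [t_i1, ..., t_i,ns]: the target values of objective i.
# Defaults for z2 and w_min_req are illustrative choices, not from the paper.

def lpp_weights(targets, z2=0.1, w_min_req=0.01, beta=1.1):
    ns = len(targets[0])                       # number of levels n_s
    while True:
        w_tilde = []                           # increments w~_is per objective
        for t in targets:
            z, w_prev, incs = z2, 0.0, []
            for s in range(1, ns):             # levels s = 2, ..., ns
                if s > 1:
                    z = beta * (ns - 1) * z    # Eq. (2): z~_s = beta(n_s - 1) z~_{s-1}
                w = z / (t[s] - t[s - 1])      # Eq. (3): w_is = z~_s / t~_is
                incs.append(w - w_prev)        # w~_is = w_is - w_i(s-1)
                w_prev = w
            w_tilde.append(incs)
        if min(min(incs) for incs in w_tilde) >= w_min_req:
            return beta, w_tilde               # Step 5 check passed
        beta += 1.0                            # Step 5: increase beta, retry
```

For the example targets 25, 31, 36, 44, 50, the initial β = 1.1 already yields positive, growing increments, so the loop returns immediately; steeper level boundaries would force β upward.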

By using β , the distances of the preference function values between levels s − 1 and s ( z ˜ s ) follow a geometric progression based on the recursion of Equation (2).

z ˜ s = { β ( n s − 1 ) } s − 2 z ˜ 2 ( 2 ≤ s ≤ n s ) (4)

Then, the preference function z i ( μ i ) ( f i s ) is the following equation, while f i 0 = 0 .

f i s = ∑ k = 2 s z ˜ k = ∑ k = 2 s { β ( n s − 1 ) } k − 2 z ˜ 2 = z ˜ 2 ( { β ( n s − 1 ) } s − 1 − 1 ) / ( { β ( n s − 1 ) } − 1 ) ( 2 ≤ s ≤ n s ) (5)
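Equation (5) is a finite geometric series; the following quick numerical check (our own, with arbitrary β , n s , and z ˜ 2 values) compares the closed form, including the z ˜ 2 factor, against the direct sum.

```python
# Numerical check of the closed form in Eq. (5): with q = beta * (ns - 1),
# sum_{k=2}^{s} q**(k-2) * z2 should equal z2 * (q**(s-1) - 1) / (q - 1).

beta, ns, z2 = 1.1, 5, 0.1        # illustrative values
q = beta * (ns - 1)
for s in range(2, ns + 1):
    direct = sum(q ** (k - 2) * z2 for k in range(2, s + 1))
    closed = z2 * (q ** (s - 1) - 1) / (q - 1)
    assert abs(direct - closed) < 1e-12
```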

Based on β , the weight coefficients and their increments between consecutive levels are calculated. The sum of the preference functions of all objectives is expressed by using the deviational variables d i s + and d i s − of level s , which measure the difference between the objective function value μ i and the target value t i ( s − 1 ) . d i s + and d i s − are calculated as follows.

μ i + d i s + = t i ( s − 1 ) + , i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (6)

μ i − d i s − = t i ( s − 1 ) − , i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (7)

Based on d i s + and d i s − , the sum of the preference functions of all objectives is as follows.

∑ i = 1 n z i = ∑ i = 1 n ∑ s = 2 n s ( w ˜ i s + d i s + + w ˜ i s − d i s − ) (8)

The above function and constraints are used in the following formulation. The objective function is:

∑ i = 1 n z i = ∑ i = 1 n ∑ s = 2 n s ( w ˜ i s + d i s + + w ˜ i s − d i s − ) → min (9)

subject to:

μ i + d i s + = t i ( s − 1 ) + , i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (10)

μ i − d i s − = t i ( s − 1 ) − , i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (11)

and other constraints related to x j ( j = 1 , ⋯ , m ) and μ i ( i = 1 , ⋯ , n ) .
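In the standard LPP linearization, each deviational variable reduces at the optimum to the positive part of the overshoot past the previous target, so the preference value of a trial μ i can be evaluated directly. A minimal sketch for a 1S objective (our own helper; the weights are of the kind produced by the algorithm above):

```python
# Evaluate the piecewise-linear preference function z_i(mu) of a 1S
# objective: each level's weight increment multiplies the positive part
# of the overshoot past the previous target value.
# targets = [t_i1, ..., t_i,ns-1], w_tilde = [w~_i2, ..., w~_i,ns].

def preference_value(mu, targets, w_tilde):
    return sum(w * max(0.0, mu - t) for w, t in zip(w_tilde, targets))
```

Because the weight increments are positive and increasing, z_i is convex and piecewise linear: zero throughout the Ideal range, with slope w i s inside level s, which is what makes the minimization above a linear program.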

Therefore, LPP combines a weight-calculation algorithm with a linear programming formulation. Because LPP applies to just a single decision maker, it needs to be extended to a multi-player situation to reproduce the Suriawase process. Thus, a method to extend LPP to a multi-player framework needs to be identified.

LPP can be extended to multiple players by applying it to the objectives of all decision makers. To ensure that the preference function values of each level for all targets ( f l i s ) are equal among all decision makers ( l = 1 , ⋯ , L ), a common value of β is used for all decision makers. With the common β , the following equations are established.

f 11 s = ⋯ = f 1 n s = f 21 s = ⋯ = f 2 n s = ⋯ = f L 1 s = ⋯ = f L n s ( s = 2 , ⋯ , n s ) (12)

Considering these equations, similarly to LPP with a single decision maker, the solution is obtained by minimizing the sum of the preference functions z l i ( l = 1 , ⋯ , L ) of all objectives for all decision makers as follows. The objective functions μ i ( i = 1 , ⋯ , n ) and the decision variables x j ( j = 1 , ⋯ , m ) are shared among all decision makers. The multi-player LPP objective function

∑ l = 1 L ∑ i = 1 n z l i = ∑ l = 1 L ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) → min (13)

is subject to

μ i + d l i s + = t l i ( s − 1 ) + , l = 1 , ⋯ , L ; i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (14)

μ i − d l i s − = t l i ( s − 1 ) − , l = 1 , ⋯ , L ; i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (15)

and other constraints related to x j ( j = 1 , ⋯ , m ) and μ i ( i = 1 , ⋯ , n ) .

However, this formulation alone cannot equalize the attained preference function values between decision makers; it provides no mechanism to keep those values in balance.

The solution obtained by solving this formulation can therefore be biased: the sums of the preference function values can differ greatly between decision makers, because the formulation does not consider the balance of these sums. It is thus necessary to balance the sums, or at least to reduce the biases between them. Therefore, RO [

Using parameters with fluctuations u i ( ∈ U i , i = 0 , ⋯ , n u ) and variables x , a minimization problem is formulated in RO as follows. The RO objective function,

min x max U 0 f 0 ( x , u 0 ) (16)

is subject to:

f i ( x , u i ) ≤ 0 , ∀ u i ∈ U i , i = 1 , ⋯ , m 1 (17)

g j ( x ) ≤ 0 , j = 1 , ⋯ , m 2 (18)

When LPP is extended to a multi-player framework, the differences in the sums of the preference functions of objectives between decision makers are the equivalents of the parameters with fluctuations in RO. Therefore, in [

max l ∑ i = 1 n z l i = max l ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) → min (19)

In Equation (19), the decision maker with the largest sum of preference functions is selected, and that sum is minimized. This avoids cases in which the largest sum becomes extremely large.
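The min-max objective of Equation (19) can be illustrated on a hypothetical two-player toy problem of our own: one shared objective value μ, a 1S player who wants μ small, a 2S player who wants μ large, made-up targets and weights, and a coarse grid search standing in for the LP solve.

```python
# Toy illustration of Eq. (19): minimize the worst decision maker's
# preference sum over a shared value mu. Data below are invented.

def pref_1s(mu, targets, w_tilde):      # smaller mu is better
    return sum(w * max(0.0, mu - t) for w, t in zip(w_tilde, targets))

def pref_2s(mu, targets, w_tilde):      # larger mu is better
    return sum(w * max(0.0, t - mu) for w, t in zip(w_tilde, targets))

w = [0.017, 0.052, 0.085, 0.293]        # common weight increments
dm1 = ([25, 31, 36, 44], w)             # 1S player
dm2 = ([45, 39, 34, 26], w)             # 2S player, mirrored targets

candidates = [k / 10 for k in range(200, 501)]   # mu grid over [20, 50]
worst = lambda mu: max(pref_1s(mu, *dm1), pref_2s(mu, *dm2))
best = min(candidates, key=worst)       # Eq. (19): min over mu of max over l
```

Because the toy data are mirrored around μ = 35, the min-max optimum sits exactly where the two players' preference sums cross; a pure sum-minimization could instead drift toward one player at the other's expense.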

RO makes biases in the sums of the preference functions between the decision makers smaller. However, using the objective function of RO makes the average of the sums of the preference functions of all decision makers larger. Thus, the balance between the average (sum) and the reduction of the biases among all decision makers is important. Therefore, the effect of RO in multi-player LPP is controlled in our model by using α ( 0 ≤ α ≤ 1 ) as follows. The multi-player LPP with RO objective function,

( 1 − α ) ( ∑ l = 1 L ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) ) + α ( L max l ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) ) → min (20)

is subject to

μ i + d l i s + = t l i ( s − 1 ) + , l = 1 , ⋯ , L ; i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (21)

μ i − d l i s − = t l i ( s − 1 ) − , l = 1 , ⋯ , L ; i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (22)

and other constraints related to x j ( j = 1 , ⋯ , m ) and μ i ( i = 1 , ⋯ , n ) .

However, when α is not considered, the following inequality holds between the first and second terms of the objective function:

∑ l = 1 L ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) ≤ L max l ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) (23)

Therefore, the second term affects the result of the objective function more than the first term. As a result, it is necessary to unify the scales of their effects. The scales can be normalized by using the maximum values of the first and second terms, denoted maxLPP and maxRO. maxLPP is calculated in multi-player LPP with RO by using α = 0 , and maxRO is calculated in multi-player LPP. By using maxLPP and maxRO, the normalized formulation of multi-player LPP with RO is

( 1 − α ) ( ∑ l = 1 L ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) ) / max L P P + α ( L max l ∑ i = 1 n ∑ s = 2 n s ( w ˜ l i s + d l i s + + w ˜ l i s − d l i s − ) ) / max R O → min (24)

subject to

μ i + d l i s + = t l i ( s − 1 ) + , l = 1 , ⋯ , L ; i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (25)

μ i − d l i s − = t l i ( s − 1 ) − , l = 1 , ⋯ , L ; i = 1 , ⋯ , n ; s = 2 , ⋯ , n s (26)

and other constraints related to x j ( j = 1 , ⋯ , m ) and μ i ( i = 1 , ⋯ , n ) .

As α varies over the range [ 0 , 1 ] , the multi-player LPP is solved as follows. n α is the number of iterations of this algorithm.

Step 1. Initial situation: α = 0 , i = 0

Step 2. Multi-player LPP with RO is solved with α .

Step 3. If i ≠ n α , set α = α + ( 1 / n α ) , i = i + 1 and go back to Step 2. If i = n α , terminate.

In this algorithm, several solutions are obtained by changing the strength of the effect of the RO.

In this experiment, decision-making around different production quantities is the focus. The players in the supply chain, such as facilities, logistics warehouses, and retail stores, have several common product items. However, they have different optimal production quantities, due to their different environments. In this section, by extending the numerical experiment in [

The weight coefficients are calculated as shown in Tables 5-7.

The production quantities for products A, B, and C are denoted as x A , x B , and x C , respectively, and the objective functions of products A, B, and C are denoted as μ 1 , μ 2 , and μ 3 , respectively. To simplify the problem, the objective functions μ 1 , μ 2 , and μ 3 are shown as follows.

| Level | Facility | Logistic warehouse | Retail store |
| --- | --- | --- | --- |
| Ideal | <25 | <14 | <18 |
| Desirable | 25 - 31 | 14 - 18 | 18 - 23 |
| Tolerable | 31 - 36 | 18 - 25 | 23 - 30 |
| Undesirable | 36 - 44 | 25 - 35 | 30 - 41 |
| Highly Undesirable | 44 - 50 | 35 - 50 | 41 - 50 |
| Unacceptable | >50 | >50 | >50 |

| Level | Facility | Logistic warehouse | Retail store |
| --- | --- | --- | --- |
| Ideal | <12 | <23 | <14 |
| Desirable | 12 - 19 | 23 - 27 | 14 - 17 |
| Tolerable | 19 - 27 | 27 - 33 | 17 - 22 |
| Undesirable | 27 - 33 | 33 - 36 | 22 - 32 |
| Highly Undesirable | 33 - 40 | 36 - 40 | 32 - 40 |
| Unacceptable | >40 | >40 | >40 |

| Level | Facility | Logistic warehouse | Retail store |
| --- | --- | --- | --- |
| Ideal | <6 | <7 | <5 |
| Desirable | 6 - 10 | 7 - 16 | 5 - 11 |
| Tolerable | 10 - 17 | 16 - 19 | 11 - 16 |
| Undesirable | 17 - 22 | 19 - 23 | 16 - 22 |
| Highly Undesirable | 22 - 30 | 23 - 30 | 22 - 30 |
| Unacceptable | >30 | >30 | >30 |

| Level | Facility | Logistic warehouse | Retail store |
| --- | --- | --- | --- |
| Ideal | 0.0000 | 0.0000 | 0.0000 |
| Desirable | 0.0167 | 0.0250 | 0.0200 |
| Tolerable | 0.0520 | 0.0371 | 0.0371 |
| Undesirable | 0.0845 | 0.0676 | 0.0615 |
| Highly Undesirable | 0.2929 | 0.1172 | 0.1953 |

| Level | Facility | Logistic warehouse | Retail store |
| --- | --- | --- | --- |
| Ideal | 0.0000 | 0.0000 | 0.0000 |
| Desirable | 0.0143 | 0.0250 | 0.0333 |
| Tolerable | 0.0325 | 0.0433 | 0.0520 |
| Undesirable | 0.1127 | 0.2253 | 0.0676 |
| Highly Undesirable | 0.2511 | 0.4394 | 0.2197 |

| Level | Facility | Logistic warehouse | Retail store |
| --- | --- | --- | --- |
| Ideal | 0.0000 | 0.0000 | 0.0000 |
| Desirable | 0.0250 | 0.0111 | 0.0167 |
| Tolerable | 0.0371 | 0.0867 | 0.0520 |
| Undesirable | 0.1352 | 0.1690 | 0.1127 |
| Highly Undesirable | 0.2197 | 0.2511 | 0.2197 |

μ 1 = x A , μ 2 = x B , μ 3 = x C (27)

The formulation of this problem is as follows. The objective function

( 1 − α ) ( ∑ l = 1 3 ∑ i = 1 3 ∑ s = 2 5 ( w ˜ l i s + d l i s + ) ) / max L P P + α ( L max l ∑ i = 1 3 ∑ s = 2 5 ( w ˜ l i s + d l i s + ) ) / max R O → min (28)

is subject to

μ i + d l i s + = t l i ( s − 1 ) + , l = 1 , 2 , 3 ; i = 1 , 2 , 3 ; s = 2 , 3 , 4 , 5 (29)

12 μ 1 + 10 μ 2 + 8 μ 3 ≥ 750 (30)

μ 1 = x A , μ 2 = x B , μ 3 = x C (31)

The multi-player LPP with RO is solved with n α = 10000 . The results of this numerical experiment are shown in

According to α, seven patterns of the solution set are obtained. In

( Sum percentage ) = ( sum of each pattern ) / ( the best sum ( α = 0 ) )

( Max percentage ) = ( max of each pattern ) / ( the best max ( α = 1 ) )

In

| Pattern | Product A | Product B | Product C | Facility | Logistic warehouse | Retail store | α (lower) | α (upper) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 31.00 | 27.00 | 13.50 | 0.6900 | 0.9378 | 1.3495 | 0.0000 | 0.0340 |
| 2 | 31.00 | 25.00 | 16.00 | 0.7179 | 0.9156 | 1.3443 | 0.0341 | 0.3953 |
| 3 | 32.67 | 23.00 | 16.00 | 0.7397 | 0.9785 | 1.3117 | 0.3954 | 0.4352 |
| 4 | 35.00 | 23.00 | 12.50 | 0.7309 | 1.0971 | 1.2729 | 0.4353 | 0.6684 |
| 5 | 36.00 | 23.00 | 11.00 | 0.7271 | 1.1976 | 1.2563 | 0.6685 | 0.7514 |
| 6 | 36.43 | 22.49 | 11.00 | 0.7469 | 1.2480 | 1.2483 | 0.7525 | 0.9355 |
| 7 | 36.36 | 22.00 | 11.71 | 0.7514 | 1.2477 | 1.2478 | 0.9356 | 1.0000 |

| Pattern | Sum | Max |
| --- | --- | --- |
| 1 | 2.9773 | 1.3495 |
| 2 | 2.9777 | 1.3443 |
| 3 | 3.0299 | 1.3117 |
| 4 | 3.1008 | 1.2729 |
| 5 | 3.1811 | 1.2563 |
| 6 | 3.2432 | 1.2483 |
| 7 | 3.2469 | 1.2478 |

| Pattern | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sum percentage | 100.00 | 100.01 | 101.77 | 104.15 | 106.85 | 108.93 | 109.06 |
| Max percentage | 108.15 | 107.73 | 105.12 | 102.01 | 100.69 | 100.04 | 100.00 |
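The percentages above can be recomputed from the Sum and Max values of the seven patterns: pattern 1 is the α = 0 optimum and pattern 7 the α = 1 optimum, so each row is normalized by its own best value (small rounding differences against the published table may remain, since it was presumably computed from unrounded values).

```python
# Recompute the Sum/Max percentages from the per-pattern Sum and Max values.

sums = [2.9773, 2.9777, 3.0299, 3.1008, 3.1811, 3.2432, 3.2469]
maxes = [1.3495, 1.3443, 1.3117, 1.2729, 1.2563, 1.2483, 1.2478]

sum_pct = [100 * s / sums[0] for s in sums]      # vs. best sum (alpha = 0)
max_pct = [100 * m / maxes[-1] for m in maxes]   # vs. best max (alpha = 1)
```

The two rows move in opposite directions: as α grows, total satisfaction worsens by up to about 9%, while the worst player's value improves by about 8%, which is exactly the trade-off the α-sweep is designed to expose.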

The purpose of this research was to develop a mathematical model of multi-player, multi-objective decision-making based on the Suriawase process, in order to find a solution that is satisfactory for all decision makers. To achieve this, in [

The proposed model leaves two points for consideration. The first is the requirement-review stage of the Suriawase process. This research focused on the negotiation stage; however, in real situations, a solution that satisfies all decision makers does not always exist among the solutions based on the decision makers' initial requirements. It is therefore necessary to consider the requirement-review stage as well. The second point is the application of the proposed model to other kinds of multi-player, multi-objective problems. This research treated the multi-player, multi-objective optimization problem for the case of a supply chain, but the method is applicable to various settings, such as the optimization of automobile design and production processes in mechanical engineering. However, the performance of the proposed method was confirmed by only one numerical example with extreme bias. In future research, the method proposed here could be applied to different situations, such as those where there is extreme bias in the preference levels in the results of multi-player LPP without RO. Moreover, it is necessary to investigate the difference in preference levels under a multi-player environment with various levels of bias.

This research was partially supported by the Japan Society for the Promotion of Science (JSPS), KAKENHI, Grant-in-Aid for Scientific Research (C), JP16K01262 from 2015 to 2020 and Grant-in-Aid for Scientific Research (A), JP18H03824 from 2018 to 2020.

The authors declare no conflicts of interest regarding the publication of this paper.

Yatsuka, T., Ishigaki, A., Kinoshita, Y., Yamada, T. and Inoue, M. (2019) Control Method of Effect of Robust Optimization in Multi-Player Multi-Objective Decision-Making. American Journal of Operations Research, 9, 175-191. https://doi.org/10.4236/ajor.2019.94011

The parameters and variables used in this paper are as follows.

(LPP)

n : the number of the objectives

n s : the number of levels of the preference ranges of the objectives

μ i : the function value of objective i

t i s : the target value of level s of objective i

t ˜ i s : the length of level s of objective i

w i s : the weight coefficient of level s of objective i ( w i s + in 1S, 3S and 4S, and w i s − in 2S, 3S and 4S)

w ˜ i s : the weight coefficient increment of objective i between level s − 1 and s ( w ˜ i s + in 1S, 3S and 4S, and w ˜ i s − in 2S, 3S and 4S)

d i s : the deviational variable between t i ( s − 1 ) and μ i

z s : the preference function value of the intersection between level s and s + 1 .

z ˜ s : the distance of the preference function values between the target value of level s − 1 and that of level s

β : the parameter to calculate the preference function values

z i ( μ i ) : the preference function value of objective i

(Multi-player LPP)

n : the number of the objectives

n s : the number of levels of the preference ranges of the objectives

L : the number of the decision makers

μ l i : the function value of objective i of decision maker l

t l i s : the target value of level s of objective i of decision maker l

t ˜ l i s : the length of level s of objective i of decision maker l

w l i s : the weight coefficient of level s of objective i of decision maker l ( w l i s + in 1S, 3S and 4S, and w l i s − in 2S, 3S and 4S)

w ˜ l i s : the weight coefficient increment of objective i of decision maker l between levels s − 1 and s ( w ˜ l i s + in 1S, 3S and 4S, and w ˜ l i s − in 2S, 3S and 4S)

d l i s : the deviational variable between t l i ( s − 1 ) and μ l i

z s : the preference function value of the intersection between level s and s + 1 .

z ˜ s : the distance of the preference function values between the target value of level s − 1 and that of level s

z l i ( μ l i ) : the preference function value of objective of i of decision maker l