Control Method of Effect of Robust Optimization in Multi-Player Multi-Objective Decision-Making

Tomoaki Yatsuka^{1}, Aya Ishigaki^{1*}, Yuki Kinoshita^{2}, Tetsuo Yamada^{2}, Masato Inoue^{3}

^{1}Department of Industrial Administration, Graduate School of Science and Technology, Tokyo University of Science, Chiba, Japan.

^{2}Management Science and Social Informatics Program, Department of Informatics, Graduate School of Informatics and
Engineering, The University of Electro-Communications, Tokyo, Japan.

^{3}Department of Mechanical Engineering Informatics, Meiji University, Kanagawa, Japan.

**DOI:** 10.4236/ajor.2019.94011

In a real supply chain, different parties such as production facilities, logistics warehouses, and retail stores handle common kinds of products. These situations form the background of this research. Although the parties deal in common quantities of products, their environments differ, so a production quantity that is optimal for one party can be unacceptable to another and may cause it a heavy loss. To avoid such situations, the common production quantities should be acceptable to all parties in the supply chain. The motivation of this research is therefore the need for a method that finds production quantities acceptable to all decision makers. However, finding such quantities is difficult, and the acceptable ranges of the decision makers do not always overlap. Similar situations arise in the decision-making of car design. The performance of a car consists of purposes such as fuel efficiency and size, and improving one purpose makes another worse; the relationship between these purposes is a tradeoff. In such cases, the Suriawase process is applied. This process consists of negotiations and reviews of the requirements for the purposes. In the negotiation step, the requirements of all purposes are shared among all decision makers, and a solution that satisfies them as much as possible is sought. In the review step, the requirements are revised based on the result of the negotiation if that result is unacceptable to some of the decision makers. Through iterations of these two steps, a solution that satisfies all decision makers is obtained.
However, in previous research, although the effect of one decision maker reviewing requirements in the Suriawase process was quantified, no mathematical model was given for modifying the ranges of production quantities of all decision makers simultaneously. Therefore, in this research, a mathematical model of multi-player multi-objective decision-making based on the Suriawase process is proposed. It builds on a previous model that combines linear physical programming (LPP) and robust optimization (RO): LPP is a multi-objective optimization method, and RO is used to balance the preference levels among decision makers. Because LPP requires the preference ranges of all objective functions, these ranges are assumed to be given. The earlier model, however, provides no way to control the effect of RO; if that effect is too strong, the average preference level deteriorates. The purpose of this research is to reproduce the mathematical model of multi-player multi-objective decision-making based on the Suriawase process and to propose a method for controlling the effect of RO. With the proposed model, a set of solutions to the negotiation problem is obtained, as demonstrated by a numerical experiment. We therefore conclude that the proposed model can provide a set of solutions to negotiation problems in a supply chain.

Keywords

Linear Physical Programming, Suriawase Process, Multi-Player Decision-Making, Supply Chain Coordination, Robust Optimization

Share and Cite:

Yatsuka, T., Ishigaki, A., Kinoshita, Y., Yamada, T. and Inoue, M. (2019) Control Method of Effect of Robust Optimization in Multi-Player Multi-Objective Decision-Making. *American Journal of Operations Research*, **9**, 175-191. doi: 10.4236/ajor.2019.94011.

1. Introduction

A supply chain consists of various products, stages, and players, such as production facilities, logistics warehouses, and retail stores. It may also treat common products. Recent advancements in the manufacturing industry, such as the advent of Industry 4.0, have paved the way for a system-wide deployment where information from all related perspectives can be closely monitored and synchronized between the physical factory floor and cyberspace [1] . Specifically, networked stages in the supply chain can become more efficient, collaborative, and resilient by utilizing advanced information analytics. That is, in Industry 4.0, systems in the supply chain are connected as a collaborative community [2] . Simultaneously, the production system in the Industry 4.0 era can be highly flexible in terms of production quantities and customization, can have extensive integration among customers, companies, and suppliers, and, above all, can be sustainable [3] [4] .

To achieve a successful production system in this era, it is necessary to understand how existing non-Industry 4.0-ready production systems can be expanded to eventually play a role in an Industry 4.0 supply chain [5] [6] . Through such technological support, participants at each stage can meet their full potential and become strategic decision makers and flexible problem-solvers [7] . Moreover, it would provide the required inter-disciplinary understanding needed for the implementation of Industry 4.0.

Successful supply chain coordination faces the need to ensure the sustainable evolution in social, environmental, and economic dimensions for all involved [4] . However, it is often difficult to integrate the optimal production quantities, for example, that are acceptable to all decision makers in the chain. Given different environments, the optimal quantity of a product may differ across stages and players [8] [9] . Therefore, the optimal product quantity at one stage could be less than optimal at another; in the worst-case scenario, this imbalance could cause business failure. To avoid this situation, the product quantity in the supply chain should be anticipated and acceptable for all stages and players.

As mentioned, each stage has different optimal production quantities. Specifically, each has an acceptable range of production quantities, but these ranges may not always coincide. The supply chain for a car design is a good example of this type of decision-making. For the carmaker to maximize effectiveness, different auto parts are designed by different players, and to determine the total design, each part manufacturer has to make adjustments. In Japanese, this is called the “Suriawase” (harmonization or integration) process [10] [11] [12] . This process looks for designs that are acceptable to all decision makers through iterations of negotiating, sharing, and reviewing the requirements. However, according to [11] , although the effects of one decision maker’s requirements in the Suriawase process are quantified, a mathematical model to modify the ranges of production quantities for all decision makers simultaneously has not been identified. Therefore, a method is needed that identifies production quantities acceptable to all decision makers. Accordingly, the aim of this research, based on the Suriawase process, is to propose a decision-making mathematical model for a multi-player, multi-objective supply chain.

The proposed model uses two important techniques. The first is multi-objective optimization, in which each objective function has a target value. Goal programming (GP) [13] and linear physical programming (LPP) [14] - [21] are known methods for solving this type of multi-objective problem. In these methods, a preference function is calculated for each objective function; the preference function becomes smaller as the objective function value approaches its target value. In GP, the preference functions are linear. In contrast, the preference functions in LPP are piecewise linear: GP is applied with several levels of target values, so the preference functions can express nonlinear preferences. Therefore, LPP is preferable for solving multi-objective problems.

The second technique is multi-player decision-making. In their basic forms, neither GP nor LPP can address multi-player problems. In [25], a model that applies LPP to multiple players was developed, using the idea of robust optimization (RO) [22] [23] [24] to balance preference levels among the decision makers. In this model, the balance between improving the preference levels of each decision maker and equalizing the preference levels among all decision makers by RO is important, but no method to control the effect of RO was given. Therefore, the purpose of this research is to propose a model with a way to control the effect of RO. By controlling this effect, it becomes possible to obtain not one solution but a set of solutions. The paper is organized as follows: Section 2 presents the Suriawase process. Section 3 discusses multi-objective optimization, the previous model of multi-player multi-objective optimization, and the model proposed in this research. Section 4 reports the results of a numerical experiment, and Section 5 concludes the paper with some perspectives.

2. Decision Making through Negotiation

The purpose of this research is to develop a model for deciding production quantities that satisfy the players at all stages of the supply chain. However, the desirable production quantity of each player differs from those of the others because their environments differ. Moreover, improving one purpose may make another less desirable; in other words, there may be trade-off relations among purposes. The Suriawase process is a negotiation method for multiple players with different preferences and trade-off relations, and it makes it possible to find a solution that all decision makers are satisfied with. The process is applied to product development in which the product has multiple purposes (for example, fuel efficiency and size in the case of a car). The decision maker responsible for each purpose shares its requirement with the others, and the result of the negotiation is obtained by sharing the requirements of all purposes among all decision makers. Then, if some requirements are not sufficiently satisfied, all requirements are reviewed. Iterations of negotiation and review continue until all decision makers are satisfied with the result. The detailed flow of the Suriawase process is as follows:

1) Each decision maker creates an initial optimal design (solution) and its requirements.

2) The requirements of all decision makers are shared with each other.

3) Based on the requirements, the decision makers make one alternative solution through negotiation.

4) If the alternative solution is not acceptable to some of the decision makers, all decision makers review the requirements and return to Step 2.

5) If the alternative solution is acceptable to all decision makers, it is regarded as the final design (solution) and the process terminates.

Steps 1 to 3 constitute the negotiation stage, and Step 4 is the stage of reviewing the requirements. This research focuses on the negotiation stage.
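The five-step flow above can be sketched as a simple loop. This is only an illustrative sketch: `negotiate`, `acceptable`, and `review` are hypothetical stand-ins for the players' actual procedures, not part of the model in this paper.

```python
def suriawase(requirements, negotiate, acceptable, review, max_rounds=10):
    """Sketch of the five-step Suriawase flow. The three callables are
    hypothetical stand-ins: negotiate() builds one alternative solution
    from the shared requirements, acceptable() checks it against one
    requirement, and review() revises the requirements."""
    for _ in range(max_rounds):
        solution = negotiate(requirements)               # Steps 2-3
        if all(acceptable(req, solution) for req in requirements.values()):
            return solution                              # Step 5: final design
        requirements = review(requirements, solution)    # Step 4: review, retry
    return None  # no agreement within max_rounds
```

For example, with two players whose requirements are 2.0 and 8.0, negotiation by averaging, acceptance within a tolerance of 2, and a review that moves each requirement one unit toward the proposed solution, the loop agrees on 5.0 after one review round.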

3. Multi-Player Multi-Objective Model

3.1. LPP Procedure

For this case, multi-objective optimization toward target values of the objective functions is the focus. The GP and LPP methods are known for this type of multi-objective optimization; here we use LPP. In ordinary GP, the objective functions and constraints are given as linear functions [13] . To address nonlinear preferences, a method is needed that calculates the weight coefficients of the objective functions step by step. LPP enables such problems to be solved with the GP approach by adding preference ranges for the objective functions [14] - [21] . LPP has three steps. In the first step, the preference ranges of the objective functions are given. In the second step, the weight coefficients are calculated from the preference ranges. In the third step, the sum of the preference functions over all objective functions is minimized.

In the first step, the preference ranges of the objectives are given for different levels of target values. Table 1 shows an example of preference ranges.

In Table 1, six preference levels are given, where a smaller value of the objective function is preferable. For example, if μ is defined as a generic design objective, the ranges of desirability are defined as follows in order of decreasing preference:

Ideal range: $\mu \le 25$

Desirable range: $25<\mu \le 31$

Tolerable range: $31<\mu \le 36$

Undesirable range: $36<\mu \le 44$

Highly undesirable range: $44<\mu \le 50$

Unacceptable range: $50<\mu $

That is, this case has six preference ranges and five target values (25, 31, 36, 44, and 50).

The objective functions are classified into four types. “1S” means the smaller value of the objective function is more ideal. “2S” means that the larger value of the objective function is more ideal. “3S” means that a given value of the objective function is the most ideal. “4S” means that a given range of the objective function is the range of the most ideal values.
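As a minimal sketch, the classification of a 1S objective value into the six ranges of the Table 1 example can be written as follows; the function name is hypothetical, and the boundaries are the target values listed above.

```python
# Preference ranges of Table 1 for a 1S objective (smaller is better),
# bounded by the example target values 25, 31, 36, 44, and 50.
RANGES = ["Ideal", "Desirable", "Tolerable", "Undesirable",
          "Highly undesirable", "Unacceptable"]
TARGETS = [25, 31, 36, 44, 50]

def classify(mu):
    """Return the preference-range name for an objective value mu."""
    for name, upper in zip(RANGES, TARGETS):
        if mu <= upper:
            return name
    return RANGES[-1]  # beyond the last target value: unacceptable
```

For instance, `classify(20)` falls in the ideal range, `classify(33)` in the tolerable range, and `classify(60)` in the unacceptable range.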

Table 1. Preference range (example).

In the second step, based on the preference ranges given in the first step, the weight coefficients are calculated by the seven-step algorithm below. The following definitions apply: $n$ objectives, and ${n}_{s}$ preference levels ( $s=1,\cdots ,{n}_{s}$ ) of each objective, are given. ${t}_{is}$ is the target value of level $s$ of objective $i$ . The target values are classified into ${t}_{is}^{+}$ and ${t}_{is}^{-}$ : the target value is ${t}_{is}^{+}$ if it is larger than the most ideal target value or range (1S, 3S and 4S); in contrast, it is ${t}_{is}^{-}$ if it is smaller than the most ideal value or range (2S, 3S and 4S). The variables ${d}_{is}^{+}$ and ${d}_{is}^{-}$ ( $s=2,\cdots ,{n}_{s}$ ) measure how far the objective is from the target values ${t}_{i\left(s-1\right)}^{+}$ and ${t}_{i\left(s-1\right)}^{-}$ . The weight coefficient of level $s$ (between target values ${t}_{i\left(s-1\right)}$ and ${t}_{is}$ ) of objective $i$ is denoted ${w}_{is}$ and classified into ${w}_{is}^{+}$ and ${w}_{is}^{-}$ . The length of the preference range of level $s$ , the increment of the weight coefficients between levels $s-1$ and $s$ , and the distance of the preference function between ${t}_{i\left(s-1\right)}$ and ${t}_{is}$ are denoted ${\tilde{t}}_{is}$ , ${\tilde{w}}_{is}$ , and ${\tilde{z}}^{s}$ , respectively ( $s=2,\cdots ,{n}_{s}$ ). $\beta$ is a common parameter that determines the preference functions of all objectives. To calculate $\beta$ , the OVO (one vs. others) rule is used; this rule maintains the balance of preference levels across all objectives. For example, when 10 objectives are given, the case where the preference levels of all objectives are “Desirable” is better than the case where nine objectives are “Ideal” and the remaining one is “Tolerable”. Therefore, the following inequality is given.

${\tilde{z}}^{s}>\left({n}_{s}-1\right){\tilde{z}}^{s-1}$ (1)

By using the parameter $\beta \left(>1\right)$ , this inequality is changed to

${\tilde{z}}^{s}=\beta \left({n}_{s}-1\right){\tilde{z}}^{s-1}$ (2)

Then, ${\tilde{z}}^{s}$ is used to calculate the weight coefficient ${w}_{is}$ , as follows.

${w}_{is}^{+}={\tilde{z}}^{s}/{\tilde{t}}_{is}^{+},\quad {w}_{is}^{-}=\left|{\tilde{z}}^{s}/{\tilde{t}}_{is}^{-}\right|$ (3)

Therefore, if $\beta$ is not large enough, the increments of the weight coefficients between consecutive levels become too small. Thus, $\beta$ is chosen so that the minimum ${\tilde{w}}_{is}$ ( ${\tilde{w}}_{\mathrm{min}}$ ) is large enough, using the following algorithm.

Step 1. Initial condition: $\beta =1.1$ ; ${w}_{i1}^{+}={w}_{i1}^{-}=0$ ; ${\tilde{z}}^{2}=$ a small number; $i=0$ ; $s=1$

Step 2. $i=i+1$

Step 3. $s=s+1$

Step 4. ${\tilde{z}}^{s}=\beta \left({n}_{s}-1\right){\tilde{z}}^{s-1}\left(3\le s\le {n}_{s}\right)$

${\tilde{t}}_{is}^{+}={t}_{is}^{+}-{t}_{i\left(s-1\right)}^{+}\left(2\le s\le {n}_{s}\right)\left(1S,3S,4S\right)$

${\tilde{t}}_{is}^{-}={t}_{is}^{-}-{t}_{i\left(s-1\right)}^{-}\left(2\le s\le {n}_{s}\right)\left(2S,3S,4S\right)$

${w}_{is}^{+}={\tilde{z}}^{s}/{\tilde{t}}_{is}^{+}\left(2\le s\le {n}_{s}\right)\left(1S,3S,4S\right)$

${w}_{is}^{-}=\left|{\tilde{z}}^{s}/{\tilde{t}}_{is}^{-}\right|\left(2\le s\le {n}_{s}\right)\left(2S,3S,4S\right)$

${\tilde{w}}_{is}^{+}={w}_{is}^{+}-{w}_{i\left(s-1\right)}^{+}\left(2\le s\le {n}_{s}\right)\left(1S,3S,4S\right)$

${\tilde{w}}_{is}^{-}={w}_{is}^{-}-{w}_{i\left(s-1\right)}^{-}\left(2\le s\le {n}_{s}\right)\left(2S,3S,4S\right)$

${\tilde{w}}_{\mathrm{min}}=\underset{i,s}{\mathrm{min}}\left({\tilde{w}}_{is}^{+},{\tilde{w}}_{is}^{-}\right)>0\left(2\le s\le {n}_{s}\right)$

Step 5. If ${\tilde{w}}_{\mathrm{min}}$ is smaller than a chosen small positive value (e.g., 0.1), set $\beta =\beta +1$ , $i=0$ , $s=1$ and go back to Step 2.

Step 6. If $s\ne {n}_{s}$ , go to Step 3.

Step 7. If $i=n$ , terminate; otherwise, go to Step 2.
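The weight-calculation algorithm above can be sketched in Python for 1S objectives. The starting value ${\tilde{z}}^{2}=1$ and the acceptance threshold $0.1$ are illustrative assumptions for this sketch (the algorithm only requires "a small number" and "a chosen small positive value"), and the function name is hypothetical.

```python
def lpp_weights(targets_list, z2=1.0, w_threshold=0.1, max_iter=100):
    """Compute the LPP weight increments w~_is for 1S objectives.

    targets_list : per objective, the increasing target values
                   t_{i1} < ... < t_{i,ns} (five values in the Table 1
                   example, so ns = 5 and s runs over 2..ns).
    Returns (beta, wtilde) where wtilde[i] lists w~_is for s = 2..ns.
    """
    ns = len(targets_list[0])
    beta = 1.1                                   # Step 1
    for _ in range(max_iter):
        wtilde_all, w_min = [], float("inf")
        for targets in targets_list:             # Step 2 (loop over i)
            z, w_prev, wtilde = z2, 0.0, []      # w_{i1} = 0
            for s in range(2, ns + 1):           # Step 3 (loop over s)
                if s > 2:
                    z = beta * (ns - 1) * z      # Equation (2)
                t_len = targets[s - 1] - targets[s - 2]  # t~_is
                w = z / t_len                    # Equation (3)
                wtilde.append(w - w_prev)        # w~_is
                w_min = min(w_min, w - w_prev)
                w_prev = w
            wtilde_all.append(wtilde)
        if w_min > w_threshold:                  # Step 5 passed
            return beta, wtilde_all
        beta += 1.0                              # Step 5: retry with larger beta
    raise RuntimeError("beta search did not converge")
```

For the Table 1 targets (25, 31, 36, 44, 50) this yields strictly increasing weight increments at $\beta = 1.1$, reflecting the convexity that the OVO rule imposes.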

By using $\beta$ , the distances ${\tilde{z}}^{s}$ of the preference function between levels $s-1$ and $s$ follow a geometric progression based on the recursion of Equation (2).

${\tilde{z}}^{s}={\left\{\beta \left({n}_{s}-1\right)\right\}}^{s-2}{\tilde{z}}^{2}\left(2\le s\le {n}_{s}\right)$ (4)

Then, the value of the preference function ${z}_{i}\left({\mu}_{i}\right)$ at level $s$ , denoted ${f}_{is}$ , is given by the following equation, with ${f}_{i0}=0$ .

$\begin{array}{c}{f}_{is}={\displaystyle \sum_{k=2}^{s}}{\tilde{z}}^{k}={\displaystyle \sum_{k=2}^{s}}{\left\{\beta \left({n}_{s}-1\right)\right\}}^{k-2}{\tilde{z}}^{2}\\ ={\tilde{z}}^{2}\left({\left\{\beta \left({n}_{s}-1\right)\right\}}^{s-1}-1\right)/\left(\beta \left({n}_{s}-1\right)-1\right),\text{ }2\le s\le {n}_{s}\end{array}$ (5)

Based on $\beta$ , the weight coefficients and their increments between consecutive levels are calculated. The sum of the preference functions of all objectives is expressed using the deviations ${d}_{is}^{+}$ and ${d}_{is}^{-}$ of level $s$ , which measure the difference between the objective function ${\mu}_{i}$ and the target value ${t}_{i\left(s-1\right)}$ . ${d}_{is}^{+}$ and ${d}_{is}^{-}$ are calculated as follows.

${\mu}_{i}+{d}_{is}^{+}={t}_{i\left(s-1\right)}^{+},i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (6)

${\mu}_{i}-{d}_{is}^{-}={t}_{i\left(s-1\right)}^{-},i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (7)

Based on ${d}_{is}^{+}$ and ${d}_{is}^{-}$ , the sum of the preference functions of all objectives is as follows.

${\sum}_{i=1}^{n}{z}_{i}={\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{is}^{+}{d}_{is}^{+}+{\tilde{w}}_{is}^{-}{d}_{is}^{-}\right)$ (8)

The above function and constraints are used in the following formulation. The objective function is:

${\sum}_{i=1}^{n}{z}_{i}={\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{is}^{+}{d}_{is}^{+}+{\tilde{w}}_{is}^{-}{d}_{is}^{-}\right)\to \mathrm{min}$ (9)

subject to:

${\mu}_{i}+{d}_{is}^{+}={t}_{i\left(s-1\right)}^{+},i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (10)

${\mu}_{i}-{d}_{is}^{-}={t}_{i\left(s-1\right)}^{-},i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (11)

and other constraints related to ${x}_{j}\left(j=1,\cdots ,m\right)$ and ${\mu}_{i}\left(i=1,\cdots ,n\right)$ .

Therefore, LPP consists of an algorithm for calculating the weight coefficients together with a linear program that uses them. Because LPP applies to a single decision maker, it must be extended to a multi-player framework to reproduce the Suriawase process.
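For a single 1S objective, the piecewise-linear preference function that Equation (9) minimizes can be evaluated directly. This is a sketch for the Table 1 example: the weight increments below are illustrative values computed with $\beta = 1.1$ and ${\tilde{z}}^{2}=1$ (both assumed), and the unacceptable range beyond the last target is excluded by constraints rather than weighted.

```python
# Boundaries t_{i(s-1)} of the Table 1 ranges and illustrative weight
# increments w~_is (assumed values computed with beta = 1.1, z~^2 = 1).
BOUNDS = [25.0, 31.0, 36.0, 44.0]
WTILDE = [1.0 / 6.0, 0.88 - 1.0 / 6.0, 2.42 - 0.88, 85.184 / 6.0 - 2.42]

def preference(mu):
    """Piecewise-linear preference z_i(mu) for a 1S objective:
    z(mu) = sum_s w~_is * d_is^+, with d_is^+ = max(0, mu - t_{i(s-1)})."""
    return sum(w * max(0.0, mu - t) for w, t in zip(WTILDE, BOUNDS))
```

A useful consistency check: at each target value the function reproduces the levels of Equation (5), e.g. `preference(31.0)` is ${\tilde{z}}^{2}=1$ and `preference(36.0)` is $1 + \beta(n_s-1){\tilde{z}}^{2} = 5.4$.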

3.2. Multi-Player LPP

LPP can be applied to the multi-player case by applying it to the objectives of all decision makers. To ensure that the preference function values at each level for all targets ( ${f}_{lis}$ ) are equal among all decision makers ( $l=1,\cdots ,L$ ), a common value of $\beta$ is used for all decision makers. With this common $\beta$ , the following equations hold.

$\begin{array}{l}{f}_{11s}=\cdots ={f}_{1ns}\\ ={f}_{21s}=\cdots ={f}_{2ns}\\ \quad \vdots \\ ={f}_{L1s}=\cdots ={f}_{Lns}\left(s=2,\cdots ,{n}_{s}\right)\end{array}$ (12)

Considering these equations, similarly to LPP with a single decision maker, the solution is obtained by minimizing the sum of the preference functions ${z}_{li}\left(l=1,\cdots ,L\right)$ of all objectives for all decision makers as follows. The objective functions ${\mu}_{i}\left(i=1,\cdots ,n\right)$ and the decision variables ${x}_{j}\left(j=1,\cdots ,m\right)$ are shared among all decision makers. The multi-player LPP objective function

${\sum}_{l=1}^{L}{\sum}_{i=1}^{n}{z}_{li}={\sum}_{l=1}^{L}{\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)\to \mathrm{min}$ (13)

is subject to

${\mu}_{i}+{d}_{lis}^{+}={t}_{li\left(s-1\right)}^{+},l=1,\cdots ,L;i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (14)

${\mu}_{i}-{d}_{lis}^{-}={t}_{li\left(s-1\right)}^{-},l=1,\cdots ,L;i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (15)

and other constraints related to ${x}_{j}\left(j=1,\cdots ,m\right)$ and ${\mu}_{i}\left(i=1,\cdots ,n\right)$ .

However, this formulation cannot control the differences in preference function values between decision makers; a method to keep those values in balance is still needed.

3.3. Robust Optimization

The solution obtained by solving this formulation can produce biases in the sums of the preference function values among decision makers, because the formulation does not consider the balance of those sums. Therefore, it is necessary to balance the sums, or at least to reduce the biases among them. For this purpose, RO [22] [23] [24] is used. In general, the objective of RO models is to obtain solutions that are guaranteed to perform well (in terms of feasibility and near-optimality) for all, or at least most, possible realizations of the uncertain input parameters. That is, RO finds a solution whose worst-case objective value is not too bad in an uncertain environment. Although each player could solve its own multi-objective optimization problem, the behavior of the other players cannot be known in advance. Hence, the multi-objective optimization problem must be solved while considering the differences between the decision makers’ solutions.

The formulation of a minimization problem under uncertainty is as follows, using parameters with fluctuations ${u}_{i}\left(\in {\mathcal{U}}_{i},i=0,\cdots ,{n}_{u}\right)$ and variables $x$ . The RO objective function,

${\mathrm{min}}_{x}{\mathrm{max}}_{{u}_{0}\in {\mathcal{U}}_{0}}{f}_{0}\left(x,{u}_{0}\right)$ (16)

is subject to:

${f}_{i}\left(x,{u}_{i}\right)\le 0,\forall {u}_{i}\in {\mathcal{U}}_{i},i=1,\cdots ,{m}_{1}$ (17)

${g}_{j}\left(x\right)\le 0,j=1,\cdots ,{m}_{2}$ (18)

When LPP is extended to a multi-player framework, the differences in the sums of the preference functions between decision makers play the role of the fluctuating parameters in RO. Therefore, in [25] the objective of multi-player LPP with RO is to minimize the sum of the preference functions of the decision maker whose sum is the largest among all decision makers, that is,

${\mathrm{max}}_{l}{\sum}_{i=1}^{n}{z}_{li}={\mathrm{max}}_{l}{\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)\to \mathrm{min}$ (19)

In Equation (19), the decision maker with the largest sum of the preference functions is selected, and that sum is minimized. This makes it possible to avoid cases in which the largest sum of the preference functions becomes extremely large.
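Although the min–max objective in Equation (19) is not linear as written, it can be converted to an ordinary linear program by the standard epigraph reformulation, introducing an auxiliary variable $\lambda$ that bounds every decision maker's sum:

```latex
\min \ \lambda
\quad \text{s.t.} \quad
\sum_{i=1}^{n}\sum_{s=2}^{n_s}
\left(\tilde{w}_{lis}^{+}d_{lis}^{+}+\tilde{w}_{lis}^{-}d_{lis}^{-}\right)
\le \lambda ,
\qquad l=1,\cdots ,L
```

At the optimum, $\lambda$ equals the largest sum of preference functions, so minimizing $\lambda$ minimizes the worst-off decision maker's sum.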

3.4. Proposed Model

RO reduces the biases in the sums of the preference functions between decision makers. However, using the RO objective function increases the average of those sums over all decision makers. Thus, the balance between the average (sum) and the reduction of the biases among all decision makers is important. Therefore, in our model, the effect of RO in multi-player LPP is controlled by a parameter $\alpha \left(0\le \alpha \le 1\right)$ as follows. The multi-player LPP with RO objective function,

$\left(1-\alpha \right)\underset{l=1}{\overset{L}{{\displaystyle \sum}}}\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\underset{s=2}{\overset{{n}_{s}}{{\displaystyle \sum}}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)+\alpha L\underset{l}{\mathrm{max}}\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\underset{s=2}{\overset{{n}_{s}}{{\displaystyle \sum}}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)\to \mathrm{min}$ (20)

is subject to

${\mu}_{i}+{d}_{lis}^{+}={t}_{li\left(s-1\right)}^{+},l=1,\cdots ,L;i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (21)

${\mu}_{i}-{d}_{lis}^{-}={t}_{li\left(s-1\right)}^{-},l=1,\cdots ,L;i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (22)

and other constraints related to ${x}_{j}\left(j=1,\cdots ,m\right)$ and ${\mu}_{i}\left(i=1,\cdots ,n\right)$ .

However, the following inequality between the first term of the objective function and the second is established when $\alpha $ is not considered:

${\sum}_{l=1}^{L}{\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)\le L{\mathrm{max}}_{l}{\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)$ (23)

Therefore, the second term affects the value of the objective function more than the first term does, and it is necessary to unify the scales of their effects. The scales can be normalized using the maximum values of the first and second terms, denoted maxLPP and maxRO: maxLPP is the value of the first term calculated from the multi-player LPP with RO, and maxRO is the value of the second term calculated from the multi-player LPP. By using maxLPP and maxRO, the normalized formulation of multi-player LPP with RO is

$\begin{array}{l}\left(1-\alpha \right)\left({\sum}_{l=1}^{L}{\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)\right)/\mathrm{max}LPP\\ +\alpha \left(L{\mathrm{max}}_{l}{\sum}_{i=1}^{n}{\sum}_{s=2}^{{n}_{s}}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}+{\tilde{w}}_{lis}^{-}{d}_{lis}^{-}\right)\right)/\mathrm{max}RO\to \mathrm{min}\end{array}$ (24)

subject to

${\mu}_{i}+{d}_{lis}^{+}={t}_{li\left(s-1\right)}^{+},l=1,\cdots ,L;i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (25)

${\mu}_{i}-{d}_{lis}^{-}={t}_{li\left(s-1\right)}^{-},l=1,\cdots ,L;i=1,\cdots ,n;s=2,\cdots ,{n}_{s}$ (26)
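A small helper makes the normalized objective of Equation (24) concrete. This is a sketch: the function name and its inputs (each player's current sum of preference functions, plus the two normalizing constants) are hypothetical stand-ins for quantities produced by the full model.

```python
def combined_objective(player_sums, alpha, max_lpp, max_ro):
    """Normalized multi-player LPP-with-RO objective of Equation (24):
    (1 - alpha) * (total sum)/maxLPP + alpha * L * max(sums)/maxRO."""
    L = len(player_sums)
    lpp_term = sum(player_sums) / max_lpp   # average-quality (LPP) term
    ro_term = L * max(player_sums) / max_ro  # worst-case (RO) term
    return (1.0 - alpha) * lpp_term + alpha * ro_term
```

At `alpha = 0` only the total sum matters (pure multi-player LPP); at `alpha = 1` only the worst-off player's sum matters (pure RO); intermediate values trade the two off on a common, normalized scale.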

As $\alpha$ varies over the range $\left[0,1\right]$ , the multi-player LPP with RO is solved repeatedly as follows, where ${n}_{\alpha}$ is the number of iterations of the algorithm.

Step 1. Initial situation: $\alpha =0,i=0$

Step 2. Multi-player LPP with RO is solved with $\alpha $ .

Step 3. If $i\ne {n}_{\alpha}$ , set $\alpha =\alpha +\left(1/{n}_{\alpha}\right)$ , $i=i+1$ and go back to Step 2. If $i={n}_{\alpha}$ , terminate.

In this algorithm, several solutions are obtained by changing the strength of the effect of the RO.
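The three-step sweep above can be sketched as follows; `solve` is a hypothetical stand-in for solving the multi-player LPP with RO at a given $\alpha$, and the deduplication of repeated solutions is an implementation convenience, not part of the algorithm as stated.

```python
def alpha_sweep(solve, n_alpha):
    """Sweep alpha over [0, 1] in n_alpha equal steps (Steps 1-3 above)
    and keep only the distinct solutions that appear."""
    seen, solutions = set(), []
    for k in range(n_alpha + 1):
        alpha = k / n_alpha
        sol = solve(alpha)                       # Step 2: solve at this alpha
        key = tuple(round(v, 6) for v in sol)    # collapse repeated solutions
        if key not in seen:
            seen.add(key)
            solutions.append((alpha, sol))
    return solutions
```

Because consecutive values of $\alpha$ often yield the same optimum, the returned set is typically much smaller than ${n}_{\alpha}+1$, matching the seven patterns observed in the experiment of Section 4.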

4. Multi-Player Experiment

In this experiment, decision-making around different production quantities is the focus. The players in the supply chain, such as a facility, a logistics warehouse, and a retail store, handle several common product items, but they have different optimal production quantities due to their different environments. In this section, by extending the numerical experiment in [21] , a data set for a multi-player decision-making problem is developed. Assume there are three kinds of products, and that the profit per unit of products A, B, and C is $12k, $10k, and $8k, respectively. The total profit must reach at least $750k, while the resource consumption of the products is minimized. Thus, the objective functions are classified as 1S (minimization targets) for the facility, the logistics warehouse, and the retail store. Their preference ranges for the products differ, as shown in Tables 2-4. In this case, there are six preference levels and five target values ( ${n}_{s}=5$ ).

The weight coefficients are calculated as shown in Tables 5-7.

The production quantities for products A, B, and C are denoted as ${x}_{A}$ , ${x}_{B}$ , and ${x}_{C}$ , respectively, and the objective functions of products A, B, and C are denoted as ${\mu}_{1}$ , ${\mu}_{2}$ , and ${\mu}_{3}$ , respectively. To simplify the problem, the objective functions are given as follows.

${\mu}_{1}={x}_{A},{\mu}_{2}={x}_{B},{\mu}_{3}={x}_{C}$ (27)

Table 2. Preference range of product A.

Table 3. Preference range of product B.

Table 4. Preference range of product C.

Table 5. Weight coefficients of product A.

Table 6. Weight coefficients of product B.

Table 7. Weight coefficients of product C.

The formulation of this problem is as follows. The objective function

$\begin{array}{l}\left(1-\alpha \right)\left({\sum}_{l=1}^{3}{\sum}_{i=1}^{3}{\sum}_{s=2}^{5}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}\right)\right)/\mathrm{max}LPP\\ +\alpha \left(L{\mathrm{max}}_{l}{\sum}_{i=1}^{3}{\sum}_{s=2}^{5}\left({\tilde{w}}_{lis}^{+}{d}_{lis}^{+}\right)\right)/\mathrm{max}RO\to \mathrm{min}\end{array}$ (28)

is subject to

${\mu}_{i}+{d}_{lis}^{+}={t}_{li\left(s-1\right)}^{+},l=1,2,3;i=1,2,3;s=2,3,4,5$ (29)

$12{\mu}_{1}+10{\mu}_{2}+8{\mu}_{3}\ge 750$ (30)

${\mu}_{1}={x}_{A},{\mu}_{2}={x}_{B},{\mu}_{3}={x}_{C}$ (31)

The multi-player LPP with RO is solved with ${n}_{\alpha}=10000$ . The results of this numerical experiment are shown in Table 8.
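The optimization in Equations (28)-(31) can be sketched as a linear program. The target values and weight coefficients of Tables 2-7 are not reproduced here, so the snippet below uses hypothetical level boundaries `t`, weight increments `w`, and normalization constants `maxLPP` and `maxRO`; the inner max over decision makers in Equation (28) is linearized with an auxiliary variable `m`.

```python
import numpy as np
from scipy.optimize import linprog

L, I, S = 3, 3, 4                       # decision makers, objectives, levels s = 2..5
# Hypothetical level boundaries t_{li(s-1)} and weight increments (assumed values;
# the paper's Tables 2-7 are not reproduced here).
base = np.array([[20.0, 30, 40, 50],    # product A
                 [25.0, 35, 45, 55],    # product B
                 [30.0, 40, 50, 60]])   # product C
t = np.stack([base + 2 * l for l in range(L)])  # small per-player offsets (assumed)
w = np.array([1.0, 2.0, 4.0, 8.0])              # convex weight increments (assumed)

def solve(alpha, maxLPP=1.0, maxRO=1.0):
    """Minimize (1-a)*sum_{l,i,s}(w*d)/maxLPP + a*L*max_l sum_{i,s}(w*d)/maxRO."""
    n = I + L * I * S + 1                        # variables: mu_i, d_{lis}, m
    d = lambda l, i, s: I + (l * I + i) * S + s  # column index of d_{lis}
    c = np.zeros(n)
    for l in range(L):
        for i in range(I):
            for s in range(S):
                c[d(l, i, s)] = (1 - alpha) * w[s] / maxLPP
    c[-1] = alpha * L / maxRO                    # m stands in for max_l(...)
    A, b = [], []
    for l in range(L):
        row_m = np.zeros(n)
        for i in range(I):
            for s in range(S):
                r = np.zeros(n)
                r[i], r[d(l, i, s)] = 1.0, -1.0  # mu_i - d_{lis} <= t_{li(s-1)}
                A.append(r); b.append(t[l, i, s])
                row_m[d(l, i, s)] = w[s]
        row_m[-1] = -1.0                         # sum_{i,s} w*d - m <= 0
        A.append(row_m); b.append(0.0)
    r = np.zeros(n); r[:I] = [-12.0, -10.0, -8.0]  # 12mu1+10mu2+8mu3 >= 750
    A.append(r); b.append(-750.0)
    return linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                   bounds=[(0, None)] * n, method="highs")

# Sweep alpha (a coarse stand-in for n_alpha = 10000) and collect the
# distinct production-quantity patterns, as in Table 8.
patterns = {tuple(np.round(solve(a).x[:I], 3)) for a in np.linspace(0.0, 1.0, 11)}
```

With a fine grid of $\alpha$ values, the distinct tuples in `patterns` correspond to the solution patterns reported in Table 8.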

Depending on α, seven patterns of solution sets are obtained. In Table 8, “Sum” denotes the total of the preference function values over all decision makers, “Average” denotes the average of the decision makers’ sums, and “Max” denotes the largest sum among the three decision makers. As $\alpha $ becomes larger, “Sum” and “Average” become larger and “Max” becomes smaller. “Sum” is smallest when $\alpha =0$ , and “Max” is smallest when $\alpha =1$ ; these smallest values are regarded as the best values. Table 9 shows how much larger “Sum” and “Max” are in the other patterns compared with the best values. The “Sum percentage” and the “Max percentage” are calculated as follows.

$\left(\text{Sum percentage}\right)=\left(\text{sum of each pattern}\right)/\left(\text{the best sum}\left(\alpha =0\right)\right)$

$\left(\text{Max percentage}\right)=\left(\text{max of each pattern}\right)/\left(\text{the best max}\left(\alpha =1\right)\right)$

In Table 9, as the pattern number increases, the “Sum percentage” increases while the “Max percentage” decreases. This is because, as $\alpha $ becomes larger, the objective places less weight on minimizing “Sum” and more weight on minimizing “Max”. The relationship between “Sum” and “Max” is therefore a tradeoff. Finally, among the obtained patterns, we can find compromise solutions in which “Sum” and “Max” are balanced.
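The percentage calculations above can be illustrated with a short snippet. The per-pattern “Sum” and “Max” values below are hypothetical placeholders, since Tables 8 and 9 are not reproduced here; the patterns are ordered by increasing $\alpha$.

```python
# Hypothetical "Sum" and "Max" values of the solution patterns, ordered by
# increasing alpha (the actual values of Tables 8 and 9 are not reproduced here).
sums  = [100.0, 104.0, 109.0, 117.0]   # total preference over all decision makers
maxes = [48.0, 45.0, 43.0, 41.0]       # largest per-player sum of preferences

best_sum = sums[0]      # "Sum" is smallest at alpha = 0
best_max = maxes[-1]    # "Max" is smallest at alpha = 1

sum_pct = [100.0 * s / best_sum for s in sums]    # "Sum percentage"
max_pct = [100.0 * m / best_max for m in maxes]   # "Max percentage"
# sum_pct rises while max_pct falls with the pattern number: the tradeoff
# between "Sum" and "Max" described in the text.
```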

Table 8. Results of numerical experiment.

Table 9. Increasing rate of sum and max (%).

5. Conclusions

The purpose of this research was to develop a mathematical model for multi-player, multi-objective decision-making based on the Suriawase process, in order to find a solution that is satisfactory for all decision makers. In [25], LPP was extended to a multi-player model, and the balance of preference levels among decision makers was considered by adding the effect of RO; however, only one solution was obtained as the predicted result of the negotiation, and no method to control the effect of RO was provided. Therefore, in this research, a method to control the effect of RO is proposed to improve the previous model, and this method makes it possible to obtain not one solution but a set of solutions. From the viewpoint of supporting the stage of reviewing the requirements, because several options are obtained with the proposed model, negotiations become more efficient than with the previous model. The proposed model provides a predicted result by predicting the behaviors of the other players and modifying each player's own solution in supply chain coordination. With the implementation of Industry 4.0, it becomes possible to collect information from each stage of a supply chain in real time. The decision-making method in this research can predict not only the numerical values at each stage but also the behavior of each player, and it may therefore help smooth decision-making in the implementation of Industry 4.0.

Two points of the proposed model should be considered further. The first is the stage of reviewing the requirements in the Suriawase process. This research focused on the negotiation stage; however, in real situations, a solution that satisfies all decision makers does not always exist among the solutions based on their initial requirements. Therefore, it is necessary to consider the stage of reviewing the requirements. The second point is the application of the proposed model to various kinds of multi-player, multi-objective problems. In this research, the multi-player, multi-objective optimization problem was treated for the case of a supply chain. The method is applicable to other settings, such as the optimization of automobile design and production processes in mechanical engineering. However, the performance of the proposed method was confirmed by only one numerical example. In future research, the method proposed here could be applied to different situations, such as those with extreme bias in the preference levels in the results of multi-player LPP without RO. Moreover, it is necessary to investigate the difference in preference levels under a multi-player environment with various levels of bias.

Acknowledgements

This research was partially supported by the Japan Society for the Promotion of Science (JSPS), KAKENHI, Grant-in-Aid for Scientific Research (C), JP16K01262 from 2015 to 2020 and Grant-in-Aid for Scientific Research (A), JP18H03824 from 2018 to 2020.

Nomenclature

The parameters and variables used in this paper are as follows.

(LPP)

$n$ : the number of the objectives

${n}_{s}$ : the number of levels of the preference ranges of the objectives

${\mu}_{i}$ : the function value of objective $i$

${t}_{is}$ : the target value of level $s-1$ of objective $i$

${\tilde{t}}_{is}$ : the length of level $s$ of objective $i$

${w}_{is}$ : the weight coefficient of level $s$ of objective $i$

${\tilde{w}}_{is}$ : the weight coefficient increment of objective $i$ between level $s-1$ and $s$ ( ${\tilde{w}}_{is}^{+}$ in 1S, 3S and 4S, and ${\tilde{w}}_{is}^{-}$ in 2S, 3S and 4S)

${d}_{is}$ : the deviational variable between ${t}_{i\left(s-1\right)}$ and ${\mu}_{i}$

${z}^{s}$ : the preference function value of the intersection between level $s$ and $s+1$ .

${\tilde{z}}^{s}$ : the distance of the preference function values between the target value of level $s-1$ and that of level $s$

$\beta $ : the parameter to calculate the preference function values

${z}_{i}\left({\mu}_{i}\right)$ : the preference function value of objective $i$
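In LPP, the preference function ${z}_{i}\left({\mu}_{i}\right)$ is piecewise linear over the target values, with slopes that grow from level to level (convexity). A minimal sketch for a 1S (minimization) objective, using assumed target values and boundary preference values rather than the paper's:

```python
import numpy as np

# Assumed target values t_{i1..i5} and the preference values z^s at those
# boundaries for a 1S (smaller-is-better) objective. Convexity is reflected
# in each segment's slope being steeper than the previous one.
targets = [20.0, 30.0, 40.0, 50.0, 60.0]
z_bound = [0.0, 10.0, 25.0, 50.0, 100.0]

def z_of_mu(mu):
    """Piecewise-linear preference value z_i(mu_i); np.interp clamps
    values outside [targets[0], targets[-1]] to the endpoint values."""
    return float(np.interp(mu, targets, z_bound))
```

For example, a value of $\mu_i$ halfway through the second level is interpolated between that level's boundary preference values.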

(Multi-player LPP)

$n$ : the number of the objectives

${n}_{s}$ : the number of levels of the preference ranges of the objectives

$L$ : the number of the decision makers

${\mu}_{li}$ : the function value of objective $i$ of decision maker $l$

${t}_{lis}$ : the target value of level $s-1$ of objective $i$ of decision maker $l$

${\tilde{t}}_{lis}$ : the length of level $s$ of objective $i$ of decision maker $l$

${w}_{lis}$ : the weight coefficient of level $s$ of objective $i$ of decision maker $l$

${\tilde{w}}_{lis}$ : the weight coefficient increment of objective $i$ of decision maker $l$ between level $s-1$ and $s$ ( ${\tilde{w}}_{lis}^{+}$ in 1S, 3S and 4S, and ${\tilde{w}}_{lis}^{-}$ in 2S, 3S and 4S)

${d}_{lis}$ : the deviational variable between ${t}_{li\left(s-1\right)}$ and ${\mu}_{li}$

${z}^{s}$ : the preference function value of the intersection between level $s$ and $s+1$ .

${\tilde{z}}^{s}$ : the distance of the preference function values between the target value of level $s-1$ and that of level $s$

${z}_{li}\left({\mu}_{li}\right)$ : the preference function value of objective $i$ of decision maker $l$

Conflicts of Interest

The authors declare no conflicts of interest.

[1] |
Lee, J., Bagheri, B. and Kao, H.A. (2015) A Cyber-Physical Systems Architecture for Industry 4.0-Based Manufacturing Systems. Manufacturing Letters, 3, 18-23.
https://doi.org/10.1016/j.mfglet.2014.12.001 |

[2] |
Lee, J., Kao, H.A. and Yang, S. (2014) Service Innovation and Smart Analytics for Industry 4.0 and Big Data Environment. Procedia CIRP, 16, 3-8.
https://doi.org/10.1016/j.procir.2014.02.001 |

[3] |
Shrouf, F., Ordieres, J. and Miragliotta, G. (2014) Smart Factories in Industry 4.0: A Review of the Concept and of Energy Management Approached in Production Based on the Internet of Things Paradigm. Proceedings of 2014 IEEE International Conference on Industrial Engineering and Engineering Management, Selangor, 9-12 December 2014, 697-701. https://doi.org/10.1109/IEEM.2014.7058728 |

[4] |
Stock, T. and Seliger, G. (2016) Opportunities of Sustainable Manufacturing in Industry 4.0. Procedia CIRP, 40, 536-541. https://doi.org/10.1016/j.procir.2016.01.129 |

[5] |
Schlechtendahl, J., Keinert, M., Kretschmer, F., Lechler, A. and Verl, A. (2015) Making Existing Production Systems Industry 4.0-Ready. Production Engineering, 9, 143-148. https://doi.org/10.1007/s11740-014-0586-3 |

[6] |
Qin, J., Liu, Y. and Grosvenor, R. (2016) A Categorical Framework of Manufacturing for Industry 4.0 and Beyond. Procedia CIRP, 52, 173-178.
https://doi.org/10.1016/j.procir.2016.08.005 |

[7] |
Gorecky, D., Schmitt, M. and Loskyll, M. (2014) Human-Machine-Interaction in the Industry 4.0 Era. Proceedings of 2014 12th IEEE International Conference on Industrial Informatics, Porto Alegre, 27-30 July 2014, 289-294.
https://doi.org/10.1109/INDIN.2014.6945523 |

[8] | Simchi-Levi, D., Kaminsky, P. and Simchi-Levi, E. (2000) Designing and Managing the Supply Chain. The McGraw-Hill Companies, New York. |

[9] |
Ilgin, M.A. and Gupta, S.M. (2010) Environmentally Conscious Manufacturing and Product Recovery (ECMPRO): A Review of the State of the Art. Journal of Environmental Management, 91, 563-591.
https://doi.org/10.1016/j.jenvman.2009.09.037 |

[10] |
Takeishi, A. and Fujimoto, T. (2001) Modularization in the Auto Industry: Interlinked Multiple Hierarchies of Product, Production and Supplier Systems. International Journal of Automotive Technology and Management, 1, 379-396.
https://doi.org/10.1504/IJATM.2001.000047 |

[11] |
Inoue, M., Mogi, R., Nahm, Y.E., Tanaka, K. and Ishikawa, H. (2011) Design Support for “Suriawase”: Japanese Way for Negotiation among Several Teams, Improving Complex Systems Today. Springer, Berlin, 385-392.
https://doi.org/10.1007/978-0-85729-799-0_45 |

[12] |
Inoue, M., Nahm, Y.E., Tanaka, K. and Ishikawa, H. (2013) Collaborative Engineering among Designers with Different Preferences: Application of the Preference Set-Based Design to the Design Problem of an Automotive Front-Side Frame. Concurrent Engineering: Research and Applications, 21, 252-267.
https://doi.org/10.1177/1063293X13493447 |

[13] |
Charnes, A. and Cooper, W.W. (1977) Goal Programming and Multiple Objective Optimizations: Part 1. European Journal of Operational Research, 1, 39-54.
https://doi.org/10.1016/S0377-2217(77)81007-2 |

[14] |
Messac, A. (1996) Physical Programming-Effective Optimization for Computational Design. AIAA Journal, 34, 149-158. https://doi.org/10.2514/3.13035 |

[15] | Messac, A., Gupta, S.M. and Akbulut, B. (1996) Linear Physical Programming: A New Approach to Multiple Objective Optimization. Transactions on Operational Research, 8, 39-59. |

[16] |
Messac, A. (1998) Control-Structure Integrated Design with Closed-Form Design Metrics Using Physical Programming. AIAA Journal, 36, 855-864.
https://doi.org/10.2514/2.447 |

[17] |
Messac, A. and Ismail-Yahaya, A. (2002) Multiobjective Robust Design Using Physical Programming. Structural and Multidisciplinary Optimization, 23, 357-371.
https://doi.org/10.1007/s00158-002-0196-0 |

[18] |
Kongar, E. and Gupta, S.M. (2009) Solving the Disassembly-to-Order Problem Using Linear Physical Programming. International Journal of Mathematics in Operational Research, 1, 504-531. https://doi.org/10.1504/IJMOR.2009.026279 |

[19] |
Ilgin, M.A. and Gupta, S.M. (2012) Physical Programming: A Review of the State of the Art. Studies in Informatics and Control, 21, 349-366.
https://doi.org/10.24846/v21i4y201201 |

[20] |
Ondemir, O. and Gupta, S.M. (2014) A Multi-Criteria Decision Making Model for Advanced Repair-to-Order and Disassembly-to-Order System. European Journal of Operational Research, 233, 408-419. https://doi.org/10.1016/j.ejor.2013.09.003 |

[21] | Messac, A. (2015) Physical Programming for Multiobjective Optimization. In: Optimization in Practice with MATLAB, Cambridge University Press, 429-444. |

[22] |
Ben-Tal, A. and Nemirovski, A. (1998) Robust Convex Optimization. Mathematics of Operations Research, 23, 769-805. https://doi.org/10.1287/moor.23.4.769 |

[23] |
Ben-Tal, A. and Nemirovski, A. (1999) Robust Solutions of Uncertain Linear Programs. Operations Research Letters, 25, 1-13.
https://doi.org/10.1016/S0167-6377(99)00016-4 |

[24] |
Bertsimas, D. and Sim, M. (2004) The Price of Robustness. Operations Research, 52, 35-53. https://doi.org/10.1287/opre.1030.0065 |

[25] |
Yatsuka, T., Ishigaki, A., Ijuin, H., Kinoshita, Y., Yamada, T. and Inoue, M. (2018) Mathematical Modeling of Multi-Player Multi-Objective Decision Making by Linear Physical Programming. Proceedings of 7th International Congress on Advanced Applied Informatics, Yonago, 8-13 July 2018, 706-711.
https://doi.org/10.1109/IIAI-AAI.2018.00147 |


Copyright © 2021 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.