Bi-Objective Optimization: A Pareto Method with Analytical Solutions

David W. K. Yeung^{1,2,3*}, Yingxuan Zhang^{1}

^{1}SRS Consortium for Advanced Studies, Shue Yan University, Hong Kong, China.

^{2}Center of Game Theory, St Petersburg State University, Saint Petersburg, Russia.

^{3}Department of Finance, Asia University, Taiwan, China.

**DOI:** 10.4236/am.2023.141004

Multiple objectives to be optimized simultaneously are prevalent in real-life problems. This paper develops a new Pareto Method for bi-objective optimization which yields analytical solutions. The Pareto optimal front is obtained in closed form, enabling the derivation of various solutions in a convenient and efficient way. The advantage of an analytical solution is the possibility of deriving accurate, exact and well-understood solutions, which is especially useful for policy analysis. An extension of the method to multiple objectives is provided, with the objectives classified into two types. This extension expands the applicability of the developed techniques.

Keywords

Multi-Objective Optimization, Pareto Optimal Front, Analytical Solution, Lagrange Method, Karush-Kuhn-Tucker Conditions


Yeung, D. and Zhang, Y. (2023) Bi-Objective Optimization: A Pareto Method with Analytical Solutions. *Applied Mathematics*, **14**, 57-81. doi: 10.4236/am.2023.141004.

1. Introduction

Many real-life optimization problems require that two or more objectives under analysis be optimized simultaneously. Frequently, these objectives conflict with each other, and it is not possible to find a single decision that maximizes all objectives simultaneously. Pairs of conflicting objectives include inflation and unemployment, risk and return, environmental preservation and national income, current enjoyment and future education, and short-term profit and future growth. Studies in bi-objective optimization constitute a non-trivial part of multi-objective analyses; see, for instance, Zhou *et al.* [1], Kukkonen and Deb [2], Pinto-Varela *et al.* [3], Lath *et al.* [4], Pereyra *et al.* [5], Garg [6], Futrell *et al.* [7], Hirpa *et al.* [8], Liu *et al.* [9], Wang *et al.* [10], Cheraghalipour *et al.* [11], Ho-Huu *et al.* [12], Yeh [13], Liu *et al.* [14], Nagamanjula and Pethalakshmi [15], Xu *et al.* [16], Diao *et al.* [17], Mohammadi *et al.* [18], Kparib *et al.* [19], Kparib *et al.* [20], Gulben and Orhan [21], Zaninudin and Paputungan [22], and Stutzle and Hoos [23]. Studies proposing multi-objective optimization techniques and solutions can be found in Messac [24], Das and Dennis [25], Deb [26], Messac *et al.* [27], Messac and Mattson [28], Kim and Weck [29], Zhang and Li [30], Chinchuluun and Pardalos [31], Mueller-Gritschneder *et al.* [32], Pereyra *et al.* [5], Pérez-Fernández *et al.* [33], Marler and Arora [34], Gunantara [35], Orths *et al.* [36], Collette and Siarry [37], Ehrgott [38], Eskelinen *et al.* [39], Fonseca and Fleming [40], Alaa *et al.* [41], Subhamoy and Sugata [42], Wilfried and Blum [43], Caramia and Dell’Olmo [44], Rohilla [45], Engau and Wiecek [46], Obayashi *et al.* [47], Lagarias *et al.* [48], Miettinen [49], Bendsoe *et al.* [50], and Chankong and Haimes [51].

In these studies, several methods are commonly used for constructing aggregation functions that approximate the preference of the decision-maker; they include the weighted sum, the Tchebycheff method, the normal boundary intersection, the normal constraint method, the Physical Programming method, Goal Programming, epsilon constraints and the Directed Search Domain. Often, very lengthy computational effort has to be invested, and the process may still end with an insufficient number of Pareto optimal points to consider.

A crucial goal of a multi-objective optimization problem is to construct the Pareto optimal front (POF), which depicts the best trade-offs among the objectives to be optimized. The POF can be approximated as the solution of a series of scalar optimization subproblems in which the objective is an aggregation of the objectives. This paper presents a new Pareto Method for bi-objective optimization yielding the POF in the form of analytical solutions. An analytical solution involves framing the problem in a well-understood form and deriving an exact solution. The analytical method is often preferred because its solution is in exact closed form.

Analytical solutions have three important advantages:

1) Transparency: Analytical solutions are presented as mathematical expressions; they make the effects of variables, and their interactions with each other, explicit.

2) Efficiency: Algorithms and models expressed with analytical solutions are usually more efficient to manipulate and analyze than numerical methods. Specifically, it is often faster, more accurate and more convenient to evaluate an analytical solution than to perform an equivalent numerical implementation.

3) Mathematical Rigor: Analytical methods are rigorous and provide exact solutions with high tractability.

This paper is organized as follows. The bi-objective optimization problem is formulated in Section 2. The derivation of the POF with equality constraints is provided in Section 3. Section 4 presents different analytical Pareto solutions with equality constraints. Section 5 derives the POF in cases with equality and inequality constraints. Analytical Pareto solutions under equality and inequality constraints are examined in Section 6. An illustrative example is given in Section 7. Section 8 provides an extension and concludes.

2. Bi-Objective Optimization Problem

Consider a bi-objective optimization problem in which the decision-maker faces two objectives: ${f}_{1}\left(x\right)$ and ${f}_{2}\left(x\right)$. The problem becomes

$\underset{x}{\mathrm{max}}F\left[{f}_{1}\left(x\right),{f}_{2}\left(x\right)\right]$,

subject to

$x\in X\subseteq {R}^{n}$. (2.1)

where $x=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)\in X\subseteq {R}^{n}$ is the vector of decision variables whose values are to be chosen in the optimization problem.

The feasible set of decision variables $X\subseteq {R}^{n}$ is implicitly determined by a set of equality constraints and a set of inequality constraints,

$g\left(x\right)=0$ and $h(x)\ge 0$, (2.2)

where $g\left(x\right)$ is an *m*-dimensional vector of functions, and $h\left(x\right)$ is a *τ*-dimensional vector of functions.

The objectives ${f}_{1}\left(x\right)$ and ${f}_{2}\left(x\right)$ are functions which measure the effects of the decision variables *x* on the objectives ${f}_{1}$ and ${f}_{2}$. The function $F\left[{f}_{1},{f}_{2}\right]$ represents the ranking preference over different combinations of ${f}_{1}$ and ${f}_{2}$. It can take various functional forms, contingent upon the preferences or targets fixed by the decision-maker.

The problem defined in (2.1)-(2.2) belongs to the class of constrained multi-objective optimization problems. There are a number of methods designed to assist the decision maker to arrive at the best compromise solution.

1) *Scalarization*: The most commonly used methods adopt schemes to convert the multiple objectives into a single scalar objective and apply standard scalar optimization algorithms to generate an optimal solution. Various weighted schemes to scalarize the multiple objectives into a scalar function are available, such as weighted global methods, weighted sum methods and exponential weighted criterion. One of the problems in scalarization is the existence of conflicting objectives.

2) *Utility-Based Optimization*: Another solution for multi-objective optimization is to explicitly consider the possible trade-offs between conflicting objective functions. Such trade-offs can be analyzed on the basis of the utility that these compromises have for the decision-maker. Many studies consider utility-based optimization to be a common standard in multi-objective optimization.

3) *Axiomatic Solution*: Often the decision-maker cannot concretely define what he prefers. Axiomatic solutions like the Nash arbitration scheme can be chosen. Based on predetermined axioms of fairness, the solution suggests an arbitration yielding the maximum (over a convex compact set of points) of the product of the players’ utilities. In this case, the utility functions always have non-negative values and have a value of zero in the absence of cooperation. It can also be generalized to become weighted product methods. Similarly, the Kalai-Smorodinsky solution is another solution to bargaining problems of utility maximizing players. In multi-objective problems, players’ utilities are replaced by objectives that the decision-maker aims to maximize simultaneously.

4) *Goal Programming Method*: Finally, the decision-maker may consider a goal programming solution. In particular, the decision-maker aims to reach, or get as close as possible to, a goal or a vector of targets.

In Section 4, we consider five methods with analytical solutions. Specifically, they are the Nash arbitration and objective product method, the target-attainment method, the Kalai-Smorodinsky bargaining solution, the scalarization method with weighted sum, and the utility-based method.

3. Derivation of POF with Equality Constraints

We first consider, as a benchmark, the case where there are only equality constraints on the decision variables. (This corresponds to the case where the inequality constraints are either absent or inactive.) A way to obtain Pareto efficient strategies in the bi-objective optimization problem is the weighted-sum method. Such an approach is also employed in identifying the players’ cooperative strategies belonging to the Pareto optimal set in non-transferable utility games (see [52] [53] [54] ). In particular, the POF can be traced out by identifying the Pareto efficient strategies while systematically changing the weights of the objective functions. Therefore, the decision-maker considers the problem:

$\underset{x}{\mathrm{max}}\left[\alpha {f}_{1}\left(x\right)+\left(1-\alpha \right){f}_{2}\left(x\right)\right]$, for $\alpha \in \left[0,1\right]$,

subject to

$g\left(x\right)=0$. (3.1)

The corresponding Lagrange function can be expressed as:

$L\left(x,\lambda ,\alpha \right)=\left[\alpha {f}_{1}\left(x\right)+\left(1-\alpha \right){f}_{2}\left(x\right)\right]+\underset{j=1}{\overset{m}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\lambda}_{j}{g}_{j}\left(x\right),$ (3.2)

where $\lambda =\left({\lambda}_{1},{\lambda}_{2},\cdots ,{\lambda}_{m}\right)$ is the set of Lagrange multipliers, and $\alpha $ and $1-\alpha $, for $\alpha \in \left[0,1\right]$, are the weights for objective 1 and objective 2, respectively.

First-order conditions for a maximum yield

$\alpha \frac{\partial {f}_{1}\left(x\right)}{\partial {x}_{i}}+\left(1-\alpha \right)\frac{\partial {f}_{2}\left(x\right)}{\partial {x}_{i}}+\underset{j=1}{\overset{m}{{\displaystyle \sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{j}\frac{\partial {g}_{j}\left(x\right)}{\partial {x}_{i}}=0,$

for $i\in \left[1,2,\cdots ,n\right]$,

${g}_{j}\left(x\right)=0$, for $j\in \left[1,2,\cdots ,m\right]$. (3.3)

If the system of $n+m$ first-order conditions in (3.3) satisfies the implicit function theorem, one can express the optimal decision variables ${x}^{\alpha}=\left({x}_{1}^{\alpha},{x}_{2}^{\alpha},\cdots ,{x}_{n}^{\alpha}\right)$ and the corresponding Lagrange multipliers ${\lambda}^{\alpha}=\left({\lambda}_{1}^{\alpha},{\lambda}_{2}^{\alpha},\cdots ,{\lambda}_{m}^{\alpha}\right)$ as functions of the exogenous parameter $\alpha $, that is

${x}_{i}^{\alpha}={\phi}_{i}^{\alpha}\left(\alpha \right)$, for $i\in \left[1,2,\cdots ,n\right]$,

${\lambda}_{j}^{\alpha}={\varphi}_{j}^{\alpha}\left(\alpha \right)$, for $j\in \left[1,2,\cdots ,m\right]$. (3.4)

Substituting the optimal decision variables ${x}_{i}^{\alpha}={\phi}_{i}^{\alpha}\left(\alpha \right)$ from (3.4) into the objectives ${f}_{1}$ and ${f}_{2}$, we can obtain the optimal objectives under $\alpha $ as:

${f}_{1}\left({x}^{\alpha}\right)={f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)$

and

${f}_{2}\left({x}^{\alpha}\right)={f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)$, (3.5)

where ${\phi}^{\alpha}\left(\alpha \right)=\left({\phi}_{1}^{\alpha}\left(\alpha \right),{\phi}_{2}^{\alpha}\left(\alpha \right),\cdots ,{\phi}_{n}^{\alpha}\left(\alpha \right)\right)$.

In the case where $\alpha =1$, the problem generates the anchor point where the best of objective ${f}_{1}$ is obtained, that is $\underset{x}{\mathrm{max}}{f}_{1}\left(x\right)$. In the case where $\alpha =0$, it generates the anchor point where the best of objective ${f}_{2}$ is obtained, that is $\underset{x}{\mathrm{max}}{f}_{2}\left(x\right)$. The Pareto optimal front (POF) can be obtained as

$\left({f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right),{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)$, for $\alpha \in \left[0,1\right]$, (3.6)

which is analytically tractable.
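As a concrete illustration (a hypothetical example, not from the paper), consider maximizing ${f}_{1}\left(x\right)={x}_{1}$ and ${f}_{2}\left(x\right)={x}_{2}$ subject to the single equality constraint ${x}_{1}^{2}+{x}_{2}^{2}=1$. Solving the first-order conditions (3.3) by hand gives ${x}^{\alpha}=\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $, so the entire front (3.6) is available in closed form. A minimal sketch of tracing it:

```python
import numpy as np

# Hypothetical example (not from the paper): maximize f1(x) = x1 and
# f2(x) = x2 subject to g(x) = x1^2 + x2^2 - 1 = 0.  The Lagrange
# conditions (3.3) give x^alpha = (alpha, 1 - alpha) / ||(alpha, 1 - alpha)||.

def phi(alpha):
    """Optimal decision x^alpha, solved in closed form from (3.3)."""
    norm = np.hypot(alpha, 1.0 - alpha)
    return np.array([alpha / norm, (1.0 - alpha) / norm])

def pof_point(alpha):
    """A point (f1(phi(alpha)), f2(phi(alpha))) on the POF (3.6)."""
    x = phi(alpha)
    return x[0], x[1]          # here f1 = x1 and f2 = x2

# Trace the front by sweeping the weight alpha over [0, 1].
front = np.array([pof_point(a) for a in np.linspace(0.0, 1.0, 101)])
# alpha = 1 gives the anchor point maximizing f1; alpha = 0 maximizes f2.
```

Sweeping $\alpha$ from 0 to 1 moves the point monotonically from the ${f}_{2}$-anchor $(0,1)$ to the ${f}_{1}$-anchor $(1,0)$, consistent with the downward-sloping front described below.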

An increase in the value of $\alpha $ signifies an increase in the weight for objective ${f}_{1}$ and a decrease in the weight for objective ${f}_{2}$. Hence the POF is downward sloping in the $\left({f}_{1},{f}_{2}\right)$ space.

The point $\left({f}_{1}\left({\phi}^{1}\left(1\right)\right),{f}_{2}\left({\phi}^{1}\left(1\right)\right)\right)$ is an anchor point at which the objective ${f}_{1}$ reaches its maximum. Similarly, the point $\left({f}_{1}\left({\phi}^{0}\left(0\right)\right),{f}_{2}\left({\phi}^{0}\left(0\right)\right)\right)$ is an anchor point at which the objective ${f}_{2}$ reaches its maximum. The point $\left({f}_{1}\left({\phi}^{1}\left(1\right)\right),{f}_{2}\left({\phi}^{0}\left(0\right)\right)\right)$ is the utopia (ideal) point, at which ${f}_{1}$ and ${f}_{2}$ would reach their maxima simultaneously.

In addition, if there exist minimum levels of the objectives, ${f}_{1}\left(x\right)\ge {\underset{\_}{f}}_{1}$ and ${f}_{2}\left(x\right)\ge {\underset{\_}{f}}_{2}$, that the optimal solution has to fulfil, then the range of the POF is restricted to lie above ${\underset{\_}{f}}_{1}$ and above ${\underset{\_}{f}}_{2}$. The corresponding restriction on the weight can be obtained as $\alpha \in \left(\underset{\_}{\alpha},\stackrel{\xaf}{\alpha}\right)$, where ${f}_{1}\left({\phi}^{\underset{\_}{\alpha}}\left(\underset{\_}{\alpha}\right)\right)={\underset{\_}{f}}_{1}$ and ${f}_{2}\left({\phi}^{\stackrel{\xaf}{\alpha}}\left(\stackrel{\xaf}{\alpha}\right)\right)={\underset{\_}{f}}_{2}$.

The point $\left({\underset{\_}{f}}_{1},{\underset{\_}{f}}_{2}\right)$ is called the nadir point. The point $\left({f}_{1}\left({\phi}^{\underset{\_}{\alpha}}\left(\underset{\_}{\alpha}\right)\right),{f}_{2}\left({\phi}^{\underset{\_}{\alpha}}\left(\underset{\_}{\alpha}\right)\right)\right)$ becomes an anchor point at which the objective ${f}_{2}$ reaches its maximum. The point $\left({f}_{1}\left({\phi}^{\stackrel{\xaf}{\alpha}}\left(\stackrel{\xaf}{\alpha}\right)\right),{f}_{2}\left({\phi}^{\stackrel{\xaf}{\alpha}}\left(\stackrel{\xaf}{\alpha}\right)\right)\right)$ becomes an anchor point at which the objective ${f}_{1}$ reaches its maximum. The point $\left({f}_{1}\left({\phi}^{\stackrel{\xaf}{\alpha}}\left(\stackrel{\xaf}{\alpha}\right)\right),{f}_{2}\left({\phi}^{\underset{\_}{\alpha}}\left(\underset{\_}{\alpha}\right)\right)\right)$ becomes the utopia point.

The POF lies inside the rectangle bounded by the nadir point, the utopia point and the two anchor points.

Figure 1. POF under equality constraints.

The part of the area bounded by the nadir point, the two anchor points and the POF curve in Figure 1 consists of dominated points. The part of the area bounded by the utopia point, the two anchor points and the POF curve consists of unreachable points. Since the decision-maker would not choose a dominated point and cannot reach unreachable points, any optimal solution chosen by the decision-maker lies on the POF.
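On the hypothetical circle example (maximize ${f}_{1}={x}_{1}$, ${f}_{2}={x}_{2}$ on ${x}_{1}^{2}+{x}_{2}^{2}=1$, whose front point for weight $\alpha$ is $\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $), the weight bounds $\underset{\_}{\alpha}$ and $\stackrel{\xaf}{\alpha}$ implied by assumed floors ${\underset{\_}{f}}_{1}={\underset{\_}{f}}_{2}=0.5$ can be recovered by one-dimensional root-finding on the closed-form front:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical circle example: POF point for weight a is (a, 1-a)/||(a, 1-a)||.
def f1(a):
    return a / np.hypot(a, 1.0 - a)

def f2(a):
    return (1.0 - a) / np.hypot(a, 1.0 - a)

f1_floor, f2_floor = 0.5, 0.5   # assumed minimum acceptable objective levels

# alpha_lo solves f1(alpha) = f1_floor; alpha_hi solves f2(alpha) = f2_floor.
alpha_lo = brentq(lambda a: f1(a) - f1_floor, 1e-9, 1.0 - 1e-9)
alpha_hi = brentq(lambda a: f2(a) - f2_floor, 1e-9, 1.0 - 1e-9)
# Closed-form check: f1(a) = 0.5  <=>  a/(1-a) = 1/sqrt(3), i.e. a = 1/(1+sqrt(3)).
```

The admissible weights are then restricted to $\alpha \in \left(\underset{\_}{\alpha},\stackrel{\xaf}{\alpha}\right)$ as in the text.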

4. Analytical Pareto Solutions with Equality Constraints

In this section, we consider various solution methods via the analytical solution of the POF derived in Equation (3.6) under equality constraints only.

4.1. Nash Arbitration and Objective Product Method

The Nash objective product maximization seeks a solution which yields the maximum of the product of the objectives over the feasible decision region. The idea is derived from Nash [55] and applied by Davis [56] in multi-objective optimization. Consider Figure 1: the feasible decision region is the part of the POF bounded by the vertical line ${f}_{1}={\underset{\_}{f}}_{1}$ and the horizontal line ${f}_{2}={\underset{\_}{f}}_{2}$. The maximization of the product of the relevant objectives can be expressed as:

$\underset{\alpha}{\mathrm{max}}\left[{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)-{\underset{\_}{f}}_{1}\right]\left[{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)-{\underset{\_}{f}}_{2}\right]$. (4.1)

Performing the maximization in (4.1), we obtain the condition

$\begin{array}{l}\left[{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)-{\underset{\_}{f}}_{2}\right]\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{\partial {f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)}{\partial {\phi}_{i}^{\alpha}}\frac{\partial {\phi}_{i}^{\alpha}}{\partial \alpha}\\ +\left[{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)-{\underset{\_}{f}}_{1}\right]\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{\partial {f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)}{\partial {\phi}_{i}^{\alpha}}\frac{\partial {\phi}_{i}^{\alpha}}{\partial \alpha}=0.\end{array}$ (4.2)

The weight ${\alpha}^{*}$ that satisfies (4.2) yields the solution to the objective product maximization method, which can be obtained as $\left({f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)\right)$.
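A numerical sketch of the objective product maximization (4.1), on a hypothetical circle front (maximize ${f}_{1}={x}_{1}$, ${f}_{2}={x}_{2}$ on ${x}_{1}^{2}+{x}_{2}^{2}=1$, front point $\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $) with assumed floors ${\underset{\_}{f}}_{1}=0.2$ and ${\underset{\_}{f}}_{2}=0.1$. Because the front is available in closed form, condition (4.2) reduces to a one-dimensional search over the weight:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical circle example: the POF point for weight a is (a, 1-a)/||.||.
def pof_point(a):
    n = np.hypot(a, 1.0 - a)
    return a / n, (1.0 - a) / n

f1_floor, f2_floor = 0.2, 0.1   # assumed floors defining the bargaining region

def neg_nash_product(a):
    """Negative of the objective product (4.1), for a scalar minimizer."""
    f1, f2 = pof_point(a)
    return -(f1 - f1_floor) * (f2 - f2_floor)

res = minimize_scalar(neg_nash_product, bounds=(0.01, 0.99), method="bounded")
alpha_star = res.x
solution = pof_point(alpha_star)   # Nash / objective-product point on the POF
```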

4.2. Target-Attainment Method

In the target-attainment method, the decision-maker aims to reach a target or a vector of targets. For instance, the target for ${f}_{1}\left(x\right)$ is to reach ${T}_{1}$ and the target for ${f}_{2}\left(x\right)$ is to reach ${T}_{2}$. The objective is to minimize the deviation of the solution from the targets. One can depict the explicitly derived POF and compare it to the target $\left({T}_{1},{T}_{2}\right)$.

If the target point $\left({T}_{1},{T}_{2}\right)$ is outside the POF, the problem becomes minimizing the distance between the POF and the point $\left({T}_{1},{T}_{2}\right)$, indicated by the dotted line in Figure 2, that is

$\underset{\alpha}{\mathrm{min}}{\left[{\left({T}_{1}-{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)}^{2}+{\left({T}_{2}-{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)}^{2}\right]}^{\frac{1}{2}}$. (4.3)

Figure 2. Target-attainment solution.

The solution to (4.3) will be characterized by the condition

$\begin{array}{c}0=\frac{1}{2}{\left[{\left({T}_{1}-{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)}^{2}+{\left({T}_{2}-{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)}^{2}\right]}^{-\frac{1}{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\times \left[2\left({T}_{1}-{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)\underset{i=1}{\overset{n}{{\displaystyle \sum}}}-\frac{\partial {f}_{1}}{\partial {\phi}_{i}^{\alpha}}\frac{\partial {\phi}_{i}^{\alpha}}{\partial \alpha}+2\left({T}_{2}-{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right)\underset{i=1}{\overset{n}{{\displaystyle \sum}}}-\frac{\partial {f}_{2}}{\partial {\phi}_{i}^{\alpha}}\frac{\partial {\phi}_{i}^{\alpha}}{\partial \alpha}\right].\end{array}$ (4.4)

We can derive the weight ${\alpha}^{*}$ that satisfies (4.4), and obtain the solution $\left({f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)\right)$.

Consider the case in which the target ${f}_{1}={z}_{1}$ must be attained. We first identify the weight ${\alpha}^{*}$ such that

${f}_{1}\left({x}^{{\alpha}^{*}}\right)={f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)={z}_{1}$. (4.5)

The solution is then $\left({f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)\right)$.
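A numerical sketch of (4.3) on a hypothetical circle front (maximize ${f}_{1}={x}_{1}$, ${f}_{2}={x}_{2}$ on ${x}_{1}^{2}+{x}_{2}^{2}=1$, front point $\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $), with an assumed unattainable target $\left({T}_{1},{T}_{2}\right)=\left(1,0.5\right)$. For a circular front the closest point is the radial projection of the target, which the scalar minimization recovers:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical circle example: POF point for weight a.
def pof_point(a):
    n = np.hypot(a, 1.0 - a)
    return a / n, (1.0 - a) / n

T1, T2 = 1.0, 0.5   # assumed target lying outside the unit-circle POF

def distance_to_target(a):
    """Euclidean distance (4.3) between the POF point and the target."""
    f1, f2 = pof_point(a)
    return np.hypot(T1 - f1, T2 - f2)

res = minimize_scalar(distance_to_target, bounds=(0.01, 0.99), method="bounded")
f1_star, f2_star = pof_point(res.x)
# For a circular front the minimizer is the radial projection of the target:
# (T1, T2)/sqrt(T1^2 + T2^2).
```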

4.3. Kalai-Smorodinsky Bargaining Solution

Aboulaich *et al.* [57] and Oukennou *et al.* [58] applied the Kalai-Smorodinsky bargaining solution [59] to solve multi-objective optimization problems. The Kalai-Smorodinsky solution is a solution to bargaining problems of utility maximizing players. In multi-objective problems, players’ utilities are replaced by objectives that the decision-maker aims to maximize simultaneously. The main advantage of the solution is that it yields a concrete criterion selecting one and only one point along the POF. Mathematically, it is the intersection of the POF and the line segment connecting the nadir point and the utopia point.

The nadir point is $\left({\underset{\_}{f}}_{1},{\underset{\_}{f}}_{2}\right)$. To obtain the utopia point, we first identify the $\alpha $ that satisfies ${f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)={\underset{\_}{f}}_{1}$, and denote it by ${\alpha}^{1}$. The point $\left({f}_{1}\left({\phi}^{{\alpha}^{1}}\left({\alpha}^{1}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{1}}\left({\alpha}^{1}\right)\right)\right)$ is the top anchor point of the POF. Similarly, we identify the $\alpha $ that satisfies ${f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)={\underset{\_}{f}}_{2}$ and denote it by ${\alpha}^{2}$. The point $\left({f}_{1}\left({\phi}^{{\alpha}^{2}}\left({\alpha}^{2}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{2}}\left({\alpha}^{2}\right)\right)\right)$ is the bottom anchor point of the POF. Using the top anchor point and the bottom anchor point of the POF, we can obtain the utopia point as $\left({f}_{1}\left({\phi}^{{\alpha}^{2}}\left({\alpha}^{2}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{1}}\left({\alpha}^{1}\right)\right)\right)$.

The slope of the line segment connecting the nadir point and the utopia point can be obtained as $\left[{f}_{2}\left({\phi}^{{\alpha}^{1}}\left({\alpha}^{1}\right)\right)-{\underset{\_}{f}}_{2}\right]\div \left[{f}_{1}\left({\phi}^{{\alpha}^{2}}\left({\alpha}^{2}\right)\right)-{\underset{\_}{f}}_{1}\right]$, which we denote by $\theta $. To obtain the Kalai-Smorodinsky solution, we trace the $\alpha $ satisfying

$\frac{{f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)-{\underset{\_}{f}}_{2}}{{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)-{\underset{\_}{f}}_{1}}=\theta $. (4.6)

Let ${\alpha}^{*}$ denote the $\alpha $ that satisfies (4.6). The Kalai-Smorodinsky solution can be obtained as $\left({f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)\right)$. Graphically, the Kalai-Smorodinsky bargaining solution is the point of intersection of the POF and the line joining the nadir point and the utopia point in Figure 3.

Figure 3. Kalai-Smorodinsky bargaining solution.
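Condition (4.6) can be solved numerically on a hypothetical circle front (maximize ${f}_{1}={x}_{1}$, ${f}_{2}={x}_{2}$ on ${x}_{1}^{2}+{x}_{2}^{2}=1$, front point $\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $). With assumed floors ${\underset{\_}{f}}_{1}=0.6$ and ${\underset{\_}{f}}_{2}=0$, the anchor points are $(0.6,0.8)$ and $(1,0)$, the utopia point is $(1,0.8)$, the slope is $\theta =2$, and (4.6) becomes a scalar root-finding problem:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical circle example: POF point for weight a.
def pof_point(a):
    n = np.hypot(a, 1.0 - a)
    return a / n, (1.0 - a) / n

f1_floor, f2_floor = 0.6, 0.0      # assumed floors giving the nadir point
# Anchor points on the circle front: (0.6, 0.8) and (1.0, 0.0), so the
# utopia point is (1.0, 0.8) and the nadir-utopia slope theta is:
theta = (0.8 - f2_floor) / (1.0 - f1_floor)   # = 2.0

def ks_condition(a):
    """Condition (4.6): the front point lies on the nadir-utopia line."""
    f1, f2 = pof_point(a)
    return (f2 - f2_floor) - theta * (f1 - f1_floor)

alpha_star = brentq(ks_condition, 0.45, 0.99)
f1_ks, f2_ks = pof_point(alpha_star)   # Kalai-Smorodinsky point on the POF
```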

4.4. Scalarization Method with Weighted-Sum

The scalarization method converts the multi-objective problem into a single-objective one, with the weights determined before the optimization process. It incorporates the multiple objective functions into a scalar fitness function as in the following equation [60].

$\underset{x}{\mathrm{max}}F\left[{f}_{1}\left(x\right),{f}_{2}\left(x\right)\right]={w}_{1}{f}_{1}\left(x\right)+{w}_{2}{f}_{2}\left(x\right)$. (4.7)

The weights of the objective functions determine the solution and reveal the performance priorities [61]. A larger weight is given to an objective function that has a higher priority than the ones with smaller weights. Normalizing the weights ${w}_{1}$ and ${w}_{2}$, we obtain ${\alpha}^{*}=\frac{{w}_{1}}{{w}_{1}+{w}_{2}}$ and $\left(1-{\alpha}^{*}\right)=\frac{{w}_{2}}{{w}_{1}+{w}_{2}}$. The solution can be obtained as $\left({f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right),{f}_{2}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)\right)$.
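The normalization step is a one-liner. On a hypothetical circle front (maximize ${f}_{1}={x}_{1}$, ${f}_{2}={x}_{2}$ on ${x}_{1}^{2}+{x}_{2}^{2}=1$, front point $\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $), with assumed raw weights ${w}_{1}=3$ and ${w}_{2}=1$:

```python
import numpy as np

# Hypothetical circle example: POF point for weight a.
def pof_point(a):
    n = np.hypot(a, 1.0 - a)
    return a / n, (1.0 - a) / n

w1, w2 = 3.0, 1.0                  # assumed pre-set priority weights
alpha_star = w1 / (w1 + w2)        # normalized weight, here 0.75
f1_star, f2_star = pof_point(alpha_star)
# The higher-priority objective f1 ends up closer to its anchor maximum.
```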

4.5. Utility-Based Method

Rădulescu *et al.* [62] considered utility-based analysis to be the standard paradigm for studying multi-objective problems. In particular, they argued that compromises between competing objectives in MOMAS should be analyzed on the basis of the utility that these compromises have for the users of a system, where an agent’s utility function maps payoff vectors to scalar utility values. The utility of different combinations of objectives is given by the utility function $U\left({f}_{1},{f}_{2}\right)$. It represents a scalarization of the objectives into a preference ranking index and can be linear or nonlinear. If the utility function is linear, it resembles a weighted-sum scalarization of the objectives. Very often, a nonlinear utility function $U\left({f}_{1},{f}_{2}\right)$ yields a set of indifference (level) curves of preferences which are convex, showing a diminishing marginal rate of substitution between the objectives. Such utility functions represent a nonlinear scalarization of the objectives.

Consider the case where the utility function is $U\left({f}_{1},{f}_{2}\right)={f}_{1}{f}_{2}$. It yields indifference (level) curves which are convex, showing a diminishing marginal rate of substitution between the objectives.

The maximization of the utility function $U\left({f}_{1},{f}_{2}\right)={f}_{1}{f}_{2}$ can be expressed as:

$\underset{\alpha}{\mathrm{max}}\left[{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right){f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)\right]$. (4.8)

Performing the maximization in (4.8), we obtain the condition

${f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{\partial {f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)}{\partial {\phi}_{i}^{\alpha}}\frac{\partial {\phi}_{i}^{\alpha}}{\partial \alpha}+{f}_{1}\left({\phi}^{\alpha}\left(\alpha \right)\right)\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{\partial {f}_{2}\left({\phi}^{\alpha}\left(\alpha \right)\right)}{\partial {\phi}_{i}^{\alpha}}\frac{\partial {\phi}_{i}^{\alpha}}{\partial \alpha}=0.$ (4.9)

Figure 4. Utility-based solution.

The weight ${\alpha}^{*}$ that satisfies (4.9) yields the solution maximizing $U\left({f}_{1},{f}_{2}\right)={f}_{1}{f}_{2}$, with ${f}_{1}={f}_{1}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)$ and ${f}_{2}={f}_{2}\left({\phi}^{{\alpha}^{*}}\left({\alpha}^{*}\right)\right)$. The point where the POF and the indifference curve are tangent to each other is the utility-based solution in Figure 4.
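The tangency condition (4.9) can also be evaluated numerically for other concave utilities. As an assumed illustration (a Cobb-Douglas utility $U={f}_{1}^{0.7}{f}_{2}^{0.3}$, not the paper's $U={f}_{1}{f}_{2}$), on a hypothetical circle front (maximize ${f}_{1}={x}_{1}$, ${f}_{2}={x}_{2}$ on ${x}_{1}^{2}+{x}_{2}^{2}=1$, front point $\left(\alpha ,1-\alpha \right)/\Vert \left(\alpha ,1-\alpha \right)\Vert $):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical circle example: POF point for weight a.
def pof_point(a):
    n = np.hypot(a, 1.0 - a)
    return a / n, (1.0 - a) / n

# Illustrative nonlinear (Cobb-Douglas) utility, an assumed functional form:
# U(f1, f2) = f1^0.7 * f2^0.3, with convex indifference curves.
def neg_utility(a):
    f1, f2 = pof_point(a)
    return -(f1 ** 0.7) * (f2 ** 0.3)

res = minimize_scalar(neg_utility, bounds=(0.01, 0.99), method="bounded")
f1_star, f2_star = pof_point(res.x)
# On the circle front the tangency point is (sqrt(0.7), sqrt(0.3)).
```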

4.6. Performance of Pareto Method with Analytical Solution

In general, multi-objective optimization requires huge computational effort, and frequently an insufficient number of Pareto optimal points is found. Pareto methods usually require less complicated mathematical equations. The solution produced by the Pareto method is a compromise solution that can be displayed on the Pareto optimal front. Obtaining a Pareto optimal solution set is preferable to a single solution: it provides a basis upon which to make value judgments in order to settle on a final solution.

The Pareto method with analytical solution involves framing the problem in a well-understood form and deriving an exact solution. The method is often preferred because its solution is in exact closed form. A wide range of the POF can be traced out analytically with the relevant mathematical expressions. The method is more efficient for manipulation and analysis than numerical analysis: it is often faster, more accurate and more convenient to evaluate an analytical solution than to perform an equivalent numerical implementation. In addition, the effects of variables, their interactions with each other and parameter changes are highly tractable. Finally, the availability of the POF (or its relevant parts) in closed form allows the decision-maker to compare solutions under different criteria for multi-objective optimization.

5. POF with Equality and Inequality Constraints

To complete the analysis, we consider the case where there are equality and inequality constraints in the decision variables.

5.1. Pareto Efficient Strategies

Again, we identify the Pareto efficient strategies by systematically changing the weights among the objective functions. Specifically, the decision-maker considers the problem:

$\underset{x}{\mathrm{max}}\left[\alpha {f}_{1}\left(x\right)+\left(1-\alpha \right){f}_{2}\left(x\right)\right]$, for $\alpha \in \left[0,1\right]$, (5.1)

subject to

$g\left(x\right)=0$ and $h\left(x\right)\ge 0$. (5.2)

To solve the optimization with equality and inequality constraints, we invoke the Karush-Kuhn-Tucker conditions and use the Lagrange multipliers approach with the corresponding Lagrange function:

$L\left(x,\lambda ,\gamma ,\alpha \right)=\left[\alpha {f}_{1}\left(x\right)+\left(1-\alpha \right){f}_{2}\left(x\right)\right]+\underset{j=1}{\overset{m}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\lambda}_{j}{g}_{j}\left(x\right)+\underset{k=1}{\overset{\tau}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\gamma}_{k}\left({h}_{k}\left(x\right)\right),$ (5.3)

where $\lambda =\left({\lambda}_{1},{\lambda}_{2},\cdots ,{\lambda}_{m}\right)$ and $\gamma =\left({\gamma}_{1},{\gamma}_{2},\cdots ,{\gamma}_{\tau}\right)$ are the sets of Lagrange multipliers. Necessary conditions for a maximum include:

$\alpha \frac{\partial {f}_{1}\left(x\right)}{\partial {x}_{i}}+\left(1-\alpha \right)\frac{\partial {f}_{2}\left(x\right)}{\partial {x}_{i}}+\underset{j=1}{\overset{m}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\lambda}_{j}\frac{\partial {g}_{j}\left(x\right)}{\partial {x}_{i}}+\underset{k=1}{\overset{\tau}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\gamma}_{k}\frac{\partial {h}_{k}\left(x\right)}{\partial {x}_{i}}=0,$

for $i\in \left[1,2,\cdots ,n\right]$,

${g}_{j}\left(x\right)=0$, for $j\in \left[1,2,\cdots ,m\right]$,

${\gamma}_{k}{h}_{k}\left(x\right)=0$, for $k\in \left[1,2,\cdots ,\tau \right]$ ; (5.4)

${h}_{k}\left(x\right)\ge 0$, for $k\in \left[1,2,\cdots ,\tau \right]$, and ${\gamma}_{k}\ge 0$, for $k\in \left[1,2,\cdots ,\tau \right]$. (5.5)

In the case where ${\gamma}_{k}\ne 0$, the inequality constraint is binding, with ${h}_{k}\left(x\right)=0$ holding, and acts as an active constraint. In the case where ${\gamma}_{k}=0$, the condition ${h}_{k}\left(x\right)=0$ does not have to hold and the constraint is inactive.

Equation system (5.4) gives rise to $n+m+\tau $ equations for the *n* decision variables $\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$, the *m* Lagrange multipliers $\left({\lambda}_{1},{\lambda}_{2},\cdots ,{\lambda}_{m}\right)$ and the $\tau $ Lagrange multipliers $\left({\gamma}_{1},{\gamma}_{2},\cdots ,{\gamma}_{\tau}\right)$.

Moreover, any admissible solution has to satisfy (5.5). If (5.5) is not satisfied, it means that the solution satisfying the first order conditions is either not in the region fulfilling the constraints, or has a negative Lagrange multiplier, which is not allowed for a maximum.

If condition (5.5) is fulfilled and the first-order conditions (5.4) for an interior solution satisfy the implicit function theorem, one can express the optimal decision variables ${x}^{\alpha}=\left({x}_{1}^{\alpha},{x}_{2}^{\alpha},\cdots ,{x}_{n}^{\alpha}\right)$ and the corresponding Lagrange multipliers ${\lambda}^{\alpha}=\left({\lambda}_{1}^{\alpha},{\lambda}_{2}^{\alpha},\cdots ,{\lambda}_{m}^{\alpha}\right)$ and ${\gamma}^{\alpha}=\left({\gamma}_{1}^{\alpha},{\gamma}_{2}^{\alpha},\cdots ,{\gamma}_{\tau}^{\alpha}\right)$ as functions of the exogenous parameter $\alpha $, that is

${\stackrel{^}{x}}_{i}^{\alpha}={\stackrel{^}{\phi}}_{i}^{\alpha}\left(\alpha \right)$, for $i\in \left[1,2,\cdots ,n\right]$,

${\lambda}_{j}^{\alpha}={\varphi}_{j}^{\alpha}\left(\alpha \right)$, for $j\in \left[1,2,\cdots ,m\right]$,

${\gamma}_{k}^{\alpha}={\psi}_{k}^{\alpha}\left(\alpha \right)$, for $k\in \left[1,2,\cdots ,\tau \right]$. (5.6)

5.2. The Corresponding POF

Substituting the optimal decision variables ${\stackrel{^}{x}}_{i}^{\alpha}={\stackrel{^}{\phi}}_{i}^{\alpha}\left(\alpha \right)$ from (5.6) into the objectives ${f}_{1}$ and ${f}_{2}$, we obtain the optimal objectives under $\alpha $ as:

${f}_{1}\left({\stackrel{^}{x}}^{\alpha}\right)={f}_{1}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)$ and ${f}_{2}\left({\stackrel{^}{x}}^{\alpha}\right)={f}_{2}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)$. (5.7)

The Pareto optimal frontier (POF) at the point which corresponds to the adoption of objective weight $\alpha $ can be obtained as

$\left({f}_{1}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right),{f}_{2}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)\right)$, for $\alpha \in \left[0,1\right]$, (5.8)

which is again analytically tractable.

Theoretically, the shape of the POF with both equality and inequality constraints can be delineated by computing the Pareto strategies for different values of $\alpha $ between 0 and 1. Note that the Pareto optimal point $\left({f}_{1}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right),{f}_{2}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)\right)$ may have to be calculated point by point for different values of $\alpha $, because the set of active inequality constraints may vary as $\alpha $ changes. The POF with both equality and inequality constraints is bounded by the POF with the equality constraints only. Unlike the case with equality constraints only, we have to track down the corresponding point of the POF for individual values of $\alpha $, and there exists the possibility that the solution satisfying the first order conditions is not in the feasible region bounded by the constraints. Therefore, the POF may have broken ranges, as shown in Figure 5.

Figure 5. Broken POF.
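The point-by-point tracing with a changing active set can be sketched on a toy problem of our own construction (not the paper's model), with one decision variable, ${f}_{1}\left(x\right)=-{\left(x-1\right)}^{2}$, ${f}_{2}\left(x\right)=-{\left(x+1\right)}^{2}$, and one inequality constraint $h\left(x\right)=x\ge 0$:

```python
# Toy illustration (our construction, not from the paper): for each alpha, solve
# the KKT system of max alpha*f1 + (1 - alpha)*f2 subject to x >= 0, switching
# the active set where needed.
def pareto_point(alpha):
    """Return (x, gamma, f1, f2) for the weighted-sum problem at this alpha."""
    x = 2.0 * alpha - 1.0            # stationary point when the constraint is inactive
    gamma = 0.0
    if x < 0.0:                      # constraint becomes active: set x = 0
        x = 0.0
        gamma = 2.0 - 4.0 * alpha    # multiplier from stationarity at x = 0
    return x, gamma, -(x - 1.0) ** 2, -(x + 1.0) ** 2

# The active set differs across alpha, so the front is computed point by point.
front = [pareto_point(i / 10.0) for i in range(11)]
```

Here the constraint is active for $\alpha <1/2$ (where $\gamma =2-4\alpha >0$) and inactive for $\alpha \ge 1/2$, mirroring how the set of active constraints can change along the POF.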

6. Analytical Pareto Solutions under Equality and Inequality Constraints

Note that the various solution methods applied to the analytical solution of the POF derived in Section 4 yield a unique solution ${\alpha}^{*}$. If the solution is in an area where all inequality constraints are inactive, the solution is the same as that in Section 4.1. If the solution is in an area where some inequality constraints are active, we first solve the first-order conditions in (5.4) for $\alpha $ in an area near the ${\alpha}^{*}$ identified in Section 4. Specifically, we obtain

${\stackrel{^}{x}}_{i}^{\alpha}={\stackrel{^}{\phi}}_{i}^{\alpha}\left(\alpha \right)$, for $i\in \left[1,2,\cdots ,n\right]$,

${\lambda}_{j}^{\alpha}={\varphi}_{j}^{\alpha}\left(\alpha \right)$, for $j\in \left[1,2,\cdots ,m\right]$,

${\gamma}_{k}^{\alpha}={\psi}_{k}^{\alpha}\left(\alpha \right)$, for $k\in \left[1,2,\cdots ,\tau \right]$, for $\alpha \in \left[{\alpha}^{*}-\epsilon ,{\alpha}^{*}+\epsilon \right]$. (6.1)

The corresponding point of the POF can be expressed as

$\left({f}_{1}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right),{f}_{2}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)\right)$, for $\alpha \in \left[{\alpha}^{*}-\epsilon ,{\alpha}^{*}+\epsilon \right]$. (6.2)

Then, we check whether the point derived in (6.2) with active inequality constraints still fulfills the optimality condition. If not, we have to identify some points on the POF in the adjacent area and search for the optimal solution.

For instance, consider the target-attainment method in Section 4.2, in which the target for ${f}_{1}\left(x\right)$ is to reach ${T}_{1}$ and the target for ${f}_{2}\left(x\right)$ is to reach ${T}_{2}$. The decision maker seeks to minimize the deviation of the solution from the target $\left({T}_{1},{T}_{2}\right)$. We first identify the POF points for ${\alpha}^{*}$ under the equality constraint only, as given in Section 4.2. Then we verify whether there exist active inequality constraints. If some inequality constraints are active at the solution point ${\alpha}^{*}$, we have to consider some POF points at $\alpha $ in a neighborhood of ${\alpha}^{*}$. We follow (5.3)-(5.5) and solve the problem with equality and inequality constraints under the weight $\alpha \in \left[{\alpha}^{*}-\epsilon ,{\alpha}^{*}+\epsilon \right]$ to obtain the Pareto efficient strategies and the corresponding POF. We let

${\stackrel{^}{x}}_{i}^{\alpha}={\stackrel{^}{\phi}}_{i}^{\alpha}\left(\alpha \right)$, for $i\in \left[1,2,\cdots ,n\right]$ and $\alpha \in \left[{\alpha}^{*}-\epsilon ,{\alpha}^{*}+\epsilon \right]$ (6.3)

denote the optimal decision variables with the presence of active inequality constraints. We then calculate the distance between the target $\left({T}_{1},{T}_{2}\right)$ and $\left({f}_{1}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right),{f}_{2}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)\right)$, that is

${\left[{\left({T}_{1}-{f}_{1}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)\right)}^{2}+{\left({T}_{2}-{f}_{2}\left({\stackrel{^}{\phi}}^{\alpha}\left(\alpha \right)\right)\right)}^{2}\right]}^{\frac{1}{2}}$, for $\alpha \in \left[{\alpha}^{*}-\epsilon ,{\alpha}^{*}+\epsilon \right]$. (6.4)

Finally, the point $\left({f}_{1}\left({\stackrel{^}{\phi}}^{{\alpha}^{**}}\left({\alpha}^{**}\right)\right),{f}_{2}\left({\stackrel{^}{\phi}}^{{\alpha}^{**}}\left({\alpha}^{**}\right)\right)\right)$ which yields the shortest distance in (6.4) is the solution for the target-attainment method.
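The distance-minimizing search in (6.4) can be sketched as a grid scan over the weight. The toy front below is our own construction (not the paper's example); the grid resolution and target are illustrative choices:

```python
import math

# Hedged sketch of the target-attainment step (6.4): scan a grid of weights and
# keep the alpha whose POF point lies closest to the target (T1, T2).
def pareto_point(alpha):
    x = 2.0 * alpha - 1.0                       # toy Pareto efficient strategy
    return -(x - 2.0) ** 2, -(x + 2.0) ** 2     # (f1, f2) on the toy front

def nearest_alpha(T1, T2, alphas):
    def dist(a):
        f1, f2 = pareto_point(a)
        return math.hypot(T1 - f1, T2 - f2)     # Euclidean distance as in (6.4)
    return min(alphas, key=dist)

grid = [i / 100.0 for i in range(101)]
best = nearest_alpha(0.0, 0.0, grid)            # target chosen above the front
```

On this symmetric toy front the closest point to the target $\left(0,0\right)$ sits at $\alpha =0.5$.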

7. An Illustrative Example

Consider a bi-objective optimization in which the decision-maker faces two objectives:

${f}_{1}\left(x\right)=Y+\beta {x}_{1}-\frac{1}{2}{\left({x}_{1}\right)}^{2}-w{x}_{2}+C{x}_{3}-\frac{1}{2}{\left({x}_{3}\right)}^{2}$, (7.1)

and

${f}_{2}\left(x\right)=P+q{x}_{2}-\frac{1}{2}{\left({x}_{2}\right)}^{2}-\pi {x}_{1}-\mu {x}_{3}$. (7.2)

There is an equality constraint

${\chi}_{1}-{x}_{1}-{x}_{2}=0$, (7.3)

and an inequality constraint

${\chi}_{2}-{x}_{2}\ge 0$. (7.4)

7.1. POF with Equality Constraint Only

We first consider as a benchmark the case with the equality constraint only. To obtain the Pareto efficient strategies in the bi-objective optimization problem (7.1)-(7.4), the decision-maker considers the problem:

$\begin{array}{l}\underset{{x}_{1},{x}_{2},{x}_{3}}{\mathrm{max}}\{\alpha \left[Y+\beta {x}_{1}-\frac{1}{2}{\left({x}_{1}\right)}^{2}-w{x}_{2}+C{x}_{3}-\frac{1}{2}{\left({x}_{3}\right)}^{2}\right]\\ +\left(1-\alpha \right)\left[P+q{x}_{2}-\frac{1}{2}{\left({x}_{2}\right)}^{2}-\pi {x}_{1}-\mu {x}_{3}\right]\},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\alpha \in \left[0,1\right],\end{array}$ (7.5)

subject to (7.3).

The corresponding Lagrange function can be expressed as:

$\begin{array}{c}L\left(x,\lambda ,\alpha \right)=\alpha \left[Y+\beta {x}_{1}-\frac{1}{2}{\left({x}_{1}\right)}^{2}-w{x}_{2}+C{x}_{3}-\frac{1}{2}{\left({x}_{3}\right)}^{2}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}+\left(1-\alpha \right)\left[P+q{x}_{2}-\frac{1}{2}{\left({x}_{2}\right)}^{2}-\pi {x}_{1}-\mu {x}_{3}\right]+\lambda \left({\chi}_{1}-{x}_{1}-{x}_{2}\right).\end{array}$ (7.6)

The Pareto efficient strategies of the problem of maximizing (7.5) subject to equality constraint (7.3) can be solved as:

Proposition 7.1.

The Pareto efficient strategies of the problem of maximizing (7.5) subject to equality constraint (7.3) are:

$\begin{array}{l}{x}_{1}^{\left(\alpha \right)}=\alpha \beta -\pi +\alpha \pi -q+\alpha q+\alpha w+{\chi}_{1}-\alpha {\chi}_{1},\\ {x}_{2}^{\left(\alpha \right)}=q-\alpha q-\alpha w-\alpha \beta +\pi -\alpha \pi +\alpha {\chi}_{1},\\ {x}_{3}^{\left(\alpha \right)}=C-\frac{\left(1-\alpha \right)}{\alpha}\mu ;\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\alpha \in \left[0,1\right].\end{array}$ (7.7)

Proof: See Appendix A.
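As a numerical sanity check of Proposition 7.1, the closed-form strategies (7.7) and the multiplier from (A.2) can be substituted back into the first-order conditions (A.1). The parameter values below are illustrative choices of ours, not values from the paper:

```python
# Illustrative parameters (our choice, not from the paper); C_ stands for C.
beta, w, C_, q, pi, mu, chi1 = 4.0, 1.0, 3.0, 5.0, 0.5, 0.6, 2.0
a = 0.4  # the weight alpha

# Closed-form Pareto efficient strategies (7.7) and the multiplier from (A.2)
x1 = a*beta - pi + a*pi - q + a*q + a*w + chi1 - a*chi1
x2 = q - a*q - a*w - a*beta + pi - a*pi + a*chi1
x3 = C_ - (1.0 - a) / a * mu
lam = a*(1 - a)*((beta - (1 - a)/a*pi) + (q - a/(1 - a)*w) - chi1)

# The first-order conditions (A.1) should all vanish at these values
foc1 = a*beta - a*x1 - (1 - a)*pi - lam
foc2 = (1 - a)*(q - x2) - a*w - lam
foc3 = a*C_ - a*x3 - (1 - a)*mu
cons = chi1 - x1 - x2
```

All four residuals are zero up to floating-point rounding, confirming the closed form at this parameter point.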

The relationship between the Pareto efficient strategies ${x}_{1}^{\left(\alpha \right)}$, ${x}_{2}^{\left(\alpha \right)}$ and ${x}_{3}^{\left(\alpha \right)}$ can be obtained as follows.

Proposition 7.2.

$\frac{\partial {x}_{1}^{\left(\alpha \right)}}{\partial \alpha}=\beta +\pi +q+w-{\chi}_{1}>0$,

$\frac{\partial {x}_{2}^{\left(\alpha \right)}}{\partial \alpha}=-\beta -\pi -q-w+{\chi}_{1}<0$,

$\frac{\partial {x}_{3}^{\left(\alpha \right)}}{\partial \alpha}=\frac{1}{{\alpha}^{2}}\mu >0$.

Proof: See Appendix B.

Substituting the Pareto efficient strategies into the objective functions (7.1)-(7.2) yields the POF as:

$\begin{array}{l}(\left(Y+\beta {x}_{1}^{\left(\alpha \right)}-\frac{1}{2}{\left({x}_{1}^{\left(\alpha \right)}\right)}^{2}-w{x}_{2}^{\left(\alpha \right)}+C{x}_{3}^{\left(\alpha \right)}-\frac{1}{2}{\left({x}_{3}^{\left(\alpha \right)}\right)}^{2}\right),\\ \left(P+q{x}_{2}^{\left(\alpha \right)}-\frac{1}{2}{\left({x}_{2}^{\left(\alpha \right)}\right)}^{2}-\pi {x}_{1}^{\left(\alpha \right)}-\mu {x}_{3}^{\left(\alpha \right)}\right)),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\alpha \in \left[0,1\right].\end{array}$ (7.8)
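The analytical POF (7.8) can be traced by sweeping $\alpha $ and substituting the closed-form strategies into ${f}_{1}$ and ${f}_{2}$. The parameters and the weight range below are illustrative choices of ours, not from the paper:

```python
# Sketch: trace the POF (7.8) under illustrative parameters (our choice).
Y, P = 10.0, 8.0
beta, w, C_, q, pi, mu, chi1 = 4.0, 1.0, 3.0, 5.0, 0.5, 0.6, 2.0

def pof_point(a):
    # Closed-form strategies from Proposition 7.1
    x1 = a*beta - pi + a*pi - q + a*q + a*w + chi1 - a*chi1
    x2 = q - a*q - a*w - a*beta + pi - a*pi + a*chi1
    x3 = C_ - (1.0 - a)/a*mu
    # Substituted into the objectives (7.1)-(7.2)
    f1 = Y + beta*x1 - 0.5*x1**2 - w*x2 + C_*x3 - 0.5*x3**2
    f2 = P + q*x2 - 0.5*x2**2 - pi*x1 - mu*x3
    return f1, f2

front = [pof_point(0.3 + 0.06*i) for i in range(11)]  # alpha in [0.3, 0.9]
```

Along this front, raising $\alpha $ trades ${f}_{2}$ for ${f}_{1}$: ${f}_{1}$ rises and ${f}_{2}$ falls, as a weighted-sum Pareto frontier should.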

In addition, if there exist minimum levels of the objectives, ${f}_{1}\left(x\right)\ge {\underset{\_}{f}}_{1}$ and ${f}_{2}\left(x\right)\ge {\underset{\_}{f}}_{2}$, that the optimal solution has to fulfill, then the range of the POF has to be restricted to lie above ${\underset{\_}{f}}_{1}$ and above ${\underset{\_}{f}}_{2}$. We denote the corresponding restriction on the weight as $\alpha \in \left(\underset{\_}{\alpha},\stackrel{\xaf}{\alpha}\right)$. The values of $\underset{\_}{\alpha}$ can be obtained by solving

$Y+\beta {x}_{1}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({x}_{1}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-w{x}_{2}^{\left(\underset{\_}{\alpha}\right)}+C{x}_{3}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({x}_{3}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}={\underset{\_}{f}}_{1}$. (7.9)

The values of $\stackrel{\xaf}{\alpha}$ can be obtained by solving

$P+q{x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-\pi {x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\mu {x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}={\underset{\_}{f}}_{2}$. (7.10)

The point

$\begin{array}{l}(\left(Y+\beta {x}_{1}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({x}_{1}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-w{x}_{2}^{\left(\underset{\_}{\alpha}\right)}+C{x}_{3}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({x}_{3}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}\right),\\ \left(P+q{x}_{2}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({x}_{2}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-\pi {x}_{1}^{\left(\underset{\_}{\alpha}\right)}-\mu {x}_{3}^{\left(\underset{\_}{\alpha}\right)}\right))\end{array}$ (7.11)

becomes an anchor point at which the objective ${f}_{2}$ reaches its maximum.

The point

$\begin{array}{l}(\left(Y+\beta {x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-w{x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}+C{x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}\right),\\ \left(P+q{x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-\pi {x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\mu {x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}\right))\end{array}$ (7.12)

becomes an anchor point at which the objective ${f}_{1}$ reaches its maximum.

The point

$\begin{array}{l}(\left(Y+\beta {x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-w{x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}+C{x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}\right),\\ \left(P+q{x}_{2}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({x}_{2}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-\pi {x}_{1}^{\left(\underset{\_}{\alpha}\right)}-\mu {x}_{3}^{\left(\underset{\_}{\alpha}\right)}\right))\end{array}$ (7.13)

becomes the utopia point.

7.2. POF with Equality and Inequality Constraints

Now, we consider the case under both the equality constraint and the inequality constraint. Invoking (7.4), one can observe that the inequality constraint will be active if the unconstrained strategy satisfies ${x}_{2}>{\chi}_{2}$. To depict the POF, we first check whether the inequality constraint is active at $\underset{\_}{\alpha}$ and $\stackrel{\xaf}{\alpha}$. If ${x}_{2}^{\left(\underset{\_}{\alpha}\right)}=q-\underset{\_}{\alpha}q-\underset{\_}{\alpha}w-\underset{\_}{\alpha}\text{\hspace{0.05em}}\beta +\pi -\underset{\_}{\alpha}\pi +\underset{\_}{\alpha}\text{\hspace{0.05em}}{\chi}_{1}>{\chi}_{2}$, then the inequality constraint is active at $\underset{\_}{\alpha}$. Similarly, if ${x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}=q-\stackrel{\xaf}{\alpha}q-\stackrel{\xaf}{\alpha}w-\stackrel{\xaf}{\alpha}\beta +\pi -\stackrel{\xaf}{\alpha}\pi +\stackrel{\xaf}{\alpha}{\chi}_{1}>{\chi}_{2}$, then the inequality constraint is active at $\stackrel{\xaf}{\alpha}$. Since ${x}_{2}^{\left(\alpha \right)}$ is monotonically decreasing in $\alpha $, if the constraint is active at $\stackrel{\xaf}{\alpha}$ it is active over the entire POF.

To obtain the Pareto efficient strategies, the decision-maker considers the problem:

$\begin{array}{l}\underset{{x}_{1},{x}_{2},{x}_{3}}{\mathrm{max}}\{\alpha \left[Y+\beta {x}_{1}-\frac{1}{2}{\left({x}_{1}\right)}^{2}-w{x}_{2}+C{x}_{3}-\frac{1}{2}{\left({x}_{3}\right)}^{2}\right]\\ +\left(1-\alpha \right)\left[P+q{x}_{2}-\frac{1}{2}{\left({x}_{2}\right)}^{2}-\pi {x}_{1}-\mu {x}_{3}\right]\}\end{array}$ (7.14)

subject to (7.3) and (7.4).

The corresponding Lagrange function can be expressed as:

$\begin{array}{c}L\left(x,\lambda ,\alpha \right)=\alpha \left[Y+\beta {x}_{1}-\frac{1}{2}{\left({x}_{1}\right)}^{2}-w{x}_{2}+C{x}_{3}-\frac{1}{2}{\left({x}_{3}\right)}^{2}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}+\left(1-\alpha \right)\left[P+q{x}_{2}-\frac{1}{2}{\left({x}_{2}\right)}^{2}-\pi {x}_{1}-\mu {x}_{3}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}+\lambda \left({\chi}_{1}-{x}_{1}-{x}_{2}\right)+\gamma \left({\chi}_{2}-{x}_{2}\right).\end{array}$ (7.15)

The Pareto efficient strategies of the problem of maximizing (7.14) subject to equality constraint (7.3) and inequality constraint (7.4) can be solved as:

Proposition 7.3.

The Pareto efficient strategies of the problem of maximizing (7.14) subject to the constraints (7.3)-(7.4) are:

$\begin{array}{l}{\stackrel{^}{x}}_{1}^{\left(\alpha \right)}={\chi}_{1}-{\chi}_{2},\\ {\stackrel{^}{x}}_{2}^{\left(\alpha \right)}={\chi}_{2},\\ {\stackrel{^}{x}}_{3}^{\left(\alpha \right)}=C-\frac{\left(1-\alpha \right)}{\alpha}\mu .\end{array}$ (7.16)

Proof: See Appendix C.

The values of $\underset{\_}{\alpha}$ can be obtained by solving

$Y+\beta {\stackrel{^}{x}}_{1}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{1}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-w{\stackrel{^}{x}}_{2}^{\left(\underset{\_}{\alpha}\right)}+C{\stackrel{^}{x}}_{3}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{3}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}={\underset{\_}{f}}_{1}$. (7.17)

The values of $\stackrel{\xaf}{\alpha}$ can be obtained by solving

$P+q{\stackrel{^}{x}}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-\pi {\stackrel{^}{x}}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\mu {\stackrel{^}{x}}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}={\underset{\_}{f}}_{2}$. (7.18)

Substituting the Pareto efficient strategies from (7.16) into the objectives ${f}_{1}\left(x\right)$ and ${f}_{2}\left(x\right)$, we can obtain the POF as

$\begin{array}{l}(\left(Y+\beta {\stackrel{^}{x}}_{1}^{\left(\alpha \right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{1}^{\left(\alpha \right)}\right)}^{2}-w{\stackrel{^}{x}}_{2}^{\left(\alpha \right)}+C{\stackrel{^}{x}}_{3}^{\left(\alpha \right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{3}^{\left(\alpha \right)}\right)}^{2}\right),\\ \left(P+q{\stackrel{^}{x}}_{2}^{\left(\alpha \right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{2}^{\left(\alpha \right)}\right)}^{2}-\pi {\stackrel{^}{x}}_{1}^{\left(\alpha \right)}-\mu {\stackrel{^}{x}}_{3}^{\left(\alpha \right)}\right)),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\alpha \in \left[\underset{\_}{\alpha},\stackrel{\xaf}{\alpha}\right].\end{array}$ (7.19)

The corresponding anchor points and utopia point can be derived accordingly.

Finally, consider the case where ${x}_{2}^{\left(\underset{\_}{\alpha}\right)}>{\chi}_{2}$ and ${x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}<{\chi}_{2}$. We search for the point at which the inequality constraint turns active, that is

${x}_{2}^{\left(\alpha \right)}=q-\alpha q-\alpha w-\alpha \beta +\pi -\alpha \pi +\alpha {\chi}_{1}={\chi}_{2}$. (7.20)

Solving (7.20) yields

$\alpha =\frac{q+\pi -{\chi}_{2}}{q+w+\beta +\pi -{\chi}_{1}}\equiv \stackrel{\u02dc}{\alpha}$. (7.21)

Figure 6. POF in solid line.

Therefore, the POF will be the same as that without inequality constraint in the range of $\alpha \in \left(\stackrel{\u02dc}{\alpha},\stackrel{\xaf}{\alpha}\right)$, and be the same as that with inequality constraint in the range of $\alpha \in \left(\underset{\_}{\alpha},\stackrel{\u02dc}{\alpha}\right)$. The actual POF is the solid line in Figure 6.
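The switch point (7.21) can be checked numerically: at $\stackrel{\u02dc}{\alpha}$ the unconstrained strategy ${x}_{2}^{\left(\alpha \right)}$ in (7.20) exactly meets the bound ${\chi}_{2}$. The parameter values are illustrative choices of ours:

```python
# Check of the switch point (7.21) under illustrative parameters (our choice).
beta, w, q, pi, chi1, chi2 = 4.0, 1.0, 5.0, 0.5, 2.0, 2.5

alpha_tilde = (q + pi - chi2) / (q + w + beta + pi - chi1)   # Equation (7.21)

# Unconstrained strategy (7.20) evaluated at alpha_tilde
x2 = (q - alpha_tilde*q - alpha_tilde*w - alpha_tilde*beta
      + pi - alpha_tilde*pi + alpha_tilde*chi1)
```

Here `x2` equals `chi2` exactly, confirming that (7.21) marks the boundary between the active and inactive ranges of the inequality constraint.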

7.3. The Case of Kalai-Smorodinsky Solution

With the analytical solution of the POF completely depicted, we can solve for the solutions in Section 4. Consider the case of using the Kalai-Smorodinsky solution for solving multi-objective optimization problems. The solution is the intersection of the POF and the line segment connecting the nadir point and the utopia point. We first obtain the benchmark POF with the equality constraint only. Then we check the anchor points in (7.11) and (7.12). If ${x}_{2}^{\left(\alpha \right)}<{\chi}_{2}$ at both anchor points, then the POF will be the same as that with the equality constraint only. If ${x}_{2}^{\left(\alpha \right)}>{\chi}_{2}$ at both anchor points, then the POF will be the same as that with the equality constraint and an active inequality constraint. If ${x}_{2}^{\left(\underset{\_}{\alpha}\right)}>{\chi}_{2}$ at the anchor point (7.11) and ${x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}<{\chi}_{2}$ at the anchor point (7.12), then the point at which ${x}_{2}^{\left(\alpha \right)}={\chi}_{2}$ has to be identified as $\alpha =\frac{q+\pi -{\chi}_{2}}{q+w+\beta +\pi -{\chi}_{1}}\equiv \stackrel{\u02dc}{\alpha}$ (see (7.21)).

Given the above information, the relevant utopia point can be identified as

$\begin{array}{l}(\left(Y+\beta {x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-w{x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}+C{x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}\right),\\ \left(P+q{\stackrel{^}{x}}_{2}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{2}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-\pi {\stackrel{^}{x}}_{1}^{\left(\underset{\_}{\alpha}\right)}-\mu {\stackrel{^}{x}}_{3}^{\left(\underset{\_}{\alpha}\right)}\right))\end{array}$

and the nadir point is $\left({\underset{\_}{f}}_{1},{\underset{\_}{f}}_{2}\right)$. The slope of the line segment linking the nadir point and the utopia point in the area bounded by the nadir point and the utopia point is

$\frac{\left(P+q{\stackrel{^}{x}}_{2}^{\left(\underset{\_}{\alpha}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{2}^{\left(\underset{\_}{\alpha}\right)}\right)}^{2}-\pi {\stackrel{^}{x}}_{1}^{\left(\underset{\_}{\alpha}\right)}-\mu {\stackrel{^}{x}}_{3}^{\left(\underset{\_}{\alpha}\right)}\right)-{\underset{\_}{f}}_{2}}{\left(Y+\beta {x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{1}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}-w{x}_{2}^{\left(\stackrel{\xaf}{\alpha}\right)}+C{x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}-\frac{1}{2}{\left({x}_{3}^{\left(\stackrel{\xaf}{\alpha}\right)}\right)}^{2}\right)-{\underset{\_}{f}}_{1}}=\theta $. (7.22)

If there exists an ${\alpha}^{*}$ that satisfies

$\frac{\left(P+q{x}_{2}^{\left({\alpha}^{*}\right)}-\frac{1}{2}{\left({x}_{2}^{\left({\alpha}^{*}\right)}\right)}^{2}-\pi {x}_{1}^{\left({\alpha}^{*}\right)}-\mu {x}_{3}^{\left({\alpha}^{*}\right)}\right)-{\underset{\_}{f}}_{2}}{\left(Y+\beta {x}_{1}^{\left({\alpha}^{*}\right)}-\frac{1}{2}{\left({x}_{1}^{\left({\alpha}^{*}\right)}\right)}^{2}-w{x}_{2}^{\left({\alpha}^{*}\right)}+C{x}_{3}^{\left({\alpha}^{*}\right)}-\frac{1}{2}{\left({x}_{3}^{\left({\alpha}^{*}\right)}\right)}^{2}\right)-{\underset{\_}{f}}_{1}}=\theta $, and ${x}_{2}^{\left({\alpha}^{*}\right)}<{\chi}_{2}$, (7.23)

then, the Kalai-Smorodinsky solution is given by

$\begin{array}{l}(\left(Y+\beta {x}_{1}^{\left({\alpha}^{*}\right)}-\frac{1}{2}{\left({x}_{1}^{\left({\alpha}^{*}\right)}\right)}^{2}-w{x}_{2}^{\left({\alpha}^{*}\right)}+C{x}_{3}^{\left({\alpha}^{*}\right)}-\frac{1}{2}{\left({x}_{3}^{\left({\alpha}^{*}\right)}\right)}^{2}\right),\\ \left(P+q{x}_{2}^{\left({\alpha}^{*}\right)}-\frac{1}{2}{\left({x}_{2}^{\left({\alpha}^{*}\right)}\right)}^{2}-\pi {x}_{1}^{\left({\alpha}^{*}\right)}-\mu {x}_{3}^{\left({\alpha}^{*}\right)}\right))\end{array}$ (7.24)

If there exists an ${\alpha}^{**}$ that satisfies

$\frac{\left(P+q{\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}\right)}^{2}-\pi {\stackrel{^}{x}}_{1}^{\left({\alpha}^{**}\right)}-\mu {\stackrel{^}{x}}_{3}^{\left({\alpha}^{**}\right)}\right)-{\underset{\_}{f}}_{2}}{\left(Y+\beta {\stackrel{^}{x}}_{1}^{\left({\alpha}^{**}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{1}^{\left({\alpha}^{**}\right)}\right)}^{2}-w{\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}+C{\stackrel{^}{x}}_{3}^{\left({\alpha}^{**}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{3}^{\left({\alpha}^{**}\right)}\right)}^{2}\right)-{\underset{\_}{f}}_{1}}=\theta $, and ${\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}={\chi}_{2}$, (7.25)

then the Kalai-Smorodinsky solution is given by

$\begin{array}{l}(\left(Y+\beta {\stackrel{^}{x}}_{1}^{\left({\alpha}^{**}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{1}^{\left({\alpha}^{**}\right)}\right)}^{2}-w{\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}+C{\stackrel{^}{x}}_{3}^{\left({\alpha}^{**}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{3}^{\left({\alpha}^{**}\right)}\right)}^{2}\right),\\ \left(P+q{\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}-\frac{1}{2}{\left({\stackrel{^}{x}}_{2}^{\left({\alpha}^{**}\right)}\right)}^{2}-\pi {\stackrel{^}{x}}_{1}^{\left({\alpha}^{**}\right)}-\mu {\stackrel{^}{x}}_{3}^{\left({\alpha}^{**}\right)}\right)).\end{array}$ (7.26)
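Because the POF is available in closed form, the intersection condition (7.23) can be solved by a one-dimensional bisection on $\alpha $. The sketch below uses the equality-constrained POF with illustrative parameters and an admissible weight range of our choice:

```python
# Bisection sketch of the Kalai-Smorodinsky step (7.23): find alpha* where the
# POF crosses the line through the nadir and utopia points. Parameters are
# illustrative choices of ours, not from the paper.
Y, P = 10.0, 8.0
beta, w, C_, q, pi, mu, chi1 = 4.0, 1.0, 3.0, 5.0, 0.5, 0.6, 2.0

def pof_point(a):
    x1 = a*beta - pi + a*pi - q + a*q + a*w + chi1 - a*chi1
    x2 = q - a*q - a*w - a*beta + pi - a*pi + a*chi1
    x3 = C_ - (1.0 - a)/a*mu
    f1 = Y + beta*x1 - 0.5*x1**2 - w*x2 + C_*x3 - 0.5*x3**2
    f2 = P + q*x2 - 0.5*x2**2 - pi*x1 - mu*x3
    return f1, f2

a_lo, a_hi = 0.3, 0.9                                # admissible weights (our choice)
f1n, f2n = pof_point(a_lo)[0], pof_point(a_hi)[1]    # nadir components
f1u, f2u = pof_point(a_hi)[0], pof_point(a_lo)[1]    # utopia components
theta = (f2u - f2n) / (f1u - f1n)                    # slope of the nadir-utopia segment

def gap(a):
    f1, f2 = pof_point(a)
    return (f2 - f2n) - theta * (f1 - f1n)           # zero exactly on the segment's line

lo, hi = a_lo, a_hi                                  # gap() decreases in alpha here
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) > 0.0 else (lo, mid)
alpha_star = 0.5 * (lo + hi)
```

Since ${f}_{1}$ rises and ${f}_{2}$ falls along this front, `gap` is monotone in $\alpha $ and the bisection converges to the unique crossing.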

Remark 7.1.

Note that with the part of POF under equality constraint and the part under inequality constraint as indicated explicitly in (7.17)-(7.21), we can characterize the solutions to the Nash arbitration, target-attainment method, scalarization method with weighted-sum and utility-based method in a similar way as that for the characterization of the Kalai-Smorodinsky bargaining solution.

8. Extension and Conclusion

The analysis can be extended to the case with more than two objectives separated into two competing/conflicting types of objectives. In particular, the type A objectives include ${f}_{1}^{A}\left(x\right),{f}_{2}^{A}\left(x\right),\cdots ,{f}_{{n}_{A}}^{A}\left(x\right)$, and the type B objectives include ${f}_{1}^{B}\left(x\right),{f}_{2}^{B}\left(x\right),\cdots ,{f}_{{n}_{B}}^{B}\left(x\right)$. A normalized weight is attached to each objective within a type, reflecting the relative importance of the objective in that group of objectives. The weighted sum of objectives within a type signifies the scalarized preference of the decision-maker for that type of objectives. The problem becomes

$\underset{x}{\mathrm{max}}F\left[\underset{i=1}{\overset{{n}_{A}}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{w}_{i}^{A}{f}_{i}^{A}\left(x\right),\underset{j=1}{\overset{{n}_{B}}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{w}_{j}^{B}{f}_{j}^{B}\left(x\right)\right],$ (8.1)

subject to

$g\left(x\right)=0$ and $h\left(x\right)\ge 0$, (8.2)

where ${w}_{i}^{A}>0$, ${w}_{j}^{B}>0$, ${\sum}_{i=1}^{{n}_{A}}{w}_{i}^{A}=1$ and ${\sum}_{j=1}^{{n}_{B}}{w}_{j}^{B}=1$.

We identify the Pareto efficient strategies by systematically changing the weights among the objective functions ${\sum}_{i=1}^{{n}_{A}}{w}_{i}^{A}{f}_{i}^{A}\left(x\right)$ and ${\sum}_{j=1}^{{n}_{B}}{w}_{j}^{B}{f}_{j}^{B}\left(x\right)$. Specifically, the decision-maker considers the problem:

$\underset{x}{\mathrm{max}}\left[\alpha \underset{i=1}{\overset{{n}_{A}}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{w}_{i}^{A}{f}_{i}^{A}\left(x\right)+\left(1-\alpha \right)\underset{j=1}{\overset{{n}_{B}}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{w}_{j}^{B}{f}_{j}^{B}\left(x\right)\right]$, for $\alpha \in \left[0,1\right]$, (8.3)

subject to (8.2).
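The scalarized objective in (8.3) can be sketched directly; the objective values and weights below are hypothetical placeholders of ours, not from the paper:

```python
# Minimal sketch of the two-type scalarization (8.3): a weighted sum within each
# type, then an alpha-blend across the two types. Inputs are hypothetical.
def scalarize(alpha, wA, fA, wB, fB):
    """Evaluate alpha*sum(wA*fA) + (1 - alpha)*sum(wB*fB)."""
    type_A = sum(wi * fi for wi, fi in zip(wA, fA))   # normalized: sum(wA) == 1
    type_B = sum(wj * fj for wj, fj in zip(wB, fB))   # normalized: sum(wB) == 1
    return alpha * type_A + (1.0 - alpha) * type_B

# One type-A objective and two type-B objectives, blended with alpha = 0.5
value = scalarize(0.5, [1.0], [2.0], [0.5, 0.5], [4.0, 6.0])
```

Sweeping `alpha` over $\left[0,1\right]$ then traces the two-type POF exactly as in the bi-objective case.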

Invoking Karush-Kuhn-Tucker conditions, we can express the corresponding Lagrange function as:

$\begin{array}{c}L\left(x,\lambda ,\gamma ,\alpha \right)=\left[\alpha \underset{i=1}{\overset{{n}_{A}}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{w}_{i}^{A}{f}_{i}^{A}\left(x\right)+\left(1-\alpha \right)\underset{j=1}{\overset{{n}_{B}}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{w}_{j}^{B}{f}_{j}^{B}\left(x\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{j=1}{\overset{m}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\lambda}_{j}{g}_{j}\left(x\right)+\underset{k=1}{\overset{\tau}{{\displaystyle \sum}}}\text{\hspace{0.17em}}{\gamma}_{k}\left({h}_{k}\left(x\right)\right).\end{array}$ (8.4)

Following the analysis in Section 5, we attempt to establish an analytical path of the POF which would be used for obtaining the solution under different methods that are mentioned in Section 4.

Finally, this paper presents a new Pareto Method for bi-objective optimization that yields the POF in the form of analytical solutions. Analytical methods enjoy the advantages of being transparent, efficient and rigorous. These advantages are extremely useful in deriving accurate, exact and well-understood solutions, especially for policy design. The possibility of extending the method to multi-objective optimization by separating the objectives into two types allows wider applicability of the developed results. This paper does not claim superiority of the analytical Pareto method over other methods of multi-objective optimization; rather, the method is a novel addition to the growing pursuit of Pareto generators, with the potential advantage of being handy for analysis. Further theoretical development and applications are expected.

Appendix

Appendix A: Proof of Proposition 7.1

First-order conditions for a maximum from the Lagrange function (7.6) yield

$\alpha \beta -\alpha {x}_{1}-\left(1-\alpha \right)\pi -\lambda =0$,

$\left(1-\alpha \right)\left(q-{x}_{2}\right)-\alpha w-\lambda =0$,

$\alpha C-\alpha {x}_{3}-\left(1-\alpha \right)\mu =0$,

${\chi}_{1}-{x}_{1}-{x}_{2}=0$. (A.1)

Solving (A.1) yields the Pareto efficient strategies and the Lagrange multiplier with equality constraint only

${x}_{1}^{\left(\alpha \right)}=\beta -\frac{1-\alpha}{\alpha}\pi -\frac{{\lambda}^{\left(\alpha \right)}}{\alpha}$,

${x}_{2}^{\left(\alpha \right)}=q-\frac{\alpha}{1-\alpha}w-\frac{{\lambda}^{\left(\alpha \right)}}{1-\alpha}$,

${x}_{3}^{\left(\alpha \right)}=C-\frac{1-\alpha}{\alpha}\mu $,

${\lambda}^{\left(\alpha \right)}=\alpha \left(1-\alpha \right)\left[\left(\beta -\frac{1-\alpha}{\alpha}\pi \right)+\left(q-\frac{\alpha}{1-\alpha}w\right)-{\chi}_{1}\right]$. (A.2)

Hence,

$\begin{array}{c}{x}_{1}^{\left(\alpha \right)}=\beta -\frac{1-\alpha}{\alpha}\pi -\left(1-\alpha \right)\left[\left(\beta -\frac{1-\alpha}{\alpha}\pi \right)+\left(q-\frac{\alpha}{1-\alpha}w\right)-{\chi}_{1}\right]\\ =\alpha \beta -\pi +\alpha \pi -q+\alpha q+\alpha w+{\chi}_{1}-\alpha {\chi}_{1},\end{array}$ and

$\begin{array}{c}{x}_{2}^{\left(\alpha \right)}=q-\frac{\alpha}{1-\alpha}w-\alpha \left[\left(\beta -\frac{1-\alpha}{\alpha}\pi \right)+\left(q-\frac{\alpha}{1-\alpha}w\right)-{\chi}_{1}\right]\\ =q-\alpha q-\alpha w-\alpha \beta +\pi -\alpha \pi +\alpha {\chi}_{1}.\end{array}$ (A.3)

Appendix B: Proof of Proposition 7.2.

Differentiating ${x}_{1}^{\left(\alpha \right)}$ in Proposition 7.1 with respect to $\alpha $ yields

$\frac{\partial {x}_{1}^{(\alpha )}}{\partial \alpha}=\beta +\pi +q+w-{\chi}_{1}$. (B.1)

Invoking the constraint ${\chi}_{1}={x}_{1}+{x}_{2}$ in (7.3) and the first two equations in (A.2), we have

$\left(\beta -\frac{1-\alpha}{\alpha}\pi -\frac{{\lambda}^{\left(\alpha \right)}}{\alpha}\right)+\left(q-\frac{\alpha}{1-\alpha}w-\frac{{\lambda}^{\left(\alpha \right)}}{1-\alpha}\right)={\chi}_{1}$, (B.2)

which shows that $\beta +\pi +q+w>{\chi}_{1}$.

Hence,

$\frac{\partial {x}_{1}^{\left(\alpha \right)}}{\partial \alpha}=\beta +\pi +q+w-{\chi}_{1}>0$. (B.3)

In a similar manner, we can show that

$\frac{\partial {x}_{2}^{\left(\alpha \right)}}{\partial \alpha}=-\beta -\pi -q-w+{\chi}_{1}<0$. (B.4)

Finally,

$\frac{\partial {x}_{3}^{\left(\alpha \right)}}{\partial \alpha}=\frac{1}{{\alpha}^{2}}\mu >0$. (B.5)

Appendix C: Proof of Proposition 7.3

First-order conditions for a maximum for the problem of maximizing (7.14) subject to (7.3)-(7.4) yield

$\alpha \beta -\alpha {x}_{1}-\left(1-\alpha \right)\pi -\lambda =0$,

$\left(1-\alpha \right)\left(q-{x}_{2}\right)-\alpha w-\lambda -\gamma =0$,

$\alpha C-\alpha {x}_{3}-\left(1-\alpha \right)\mu =0$,

${\chi}_{1}-{x}_{1}-{x}_{2}=0$,

$\gamma \left({\chi}_{2}-{x}_{2}\right)=0$,

$\gamma \ge 0$ and ${\chi}_{2}-{x}_{2}\ge 0$. (C.1)

Solving (C.1) yields the Pareto efficient strategies and Lagrange multipliers:

${x}_{1}^{(\alpha )}=\beta -\frac{1-\alpha}{\alpha}\pi -\frac{{\lambda}^{\left(\alpha \right)}}{\alpha}$,

${x}_{2}^{\left(\alpha \right)}=q-\frac{\alpha}{1-\alpha}w-\frac{{\lambda}^{\left(\alpha \right)}}{1-\alpha}-\frac{{\gamma}^{\left(\alpha \right)}}{1-\alpha}$,

${x}_{3}^{\left(\alpha \right)}=C-\frac{1-\alpha}{\alpha}\mu $,

${\lambda}^{\left(\alpha \right)}=\alpha \beta -\left(1-\alpha \right)\pi -\alpha {\chi}_{1}+\alpha {\chi}_{2}$,

${\gamma}^{\left(\alpha \right)}=\left(1-\alpha \right)q-\alpha w-\alpha \beta +\left(1-\alpha \right)\pi +\alpha {\chi}_{1}-{\chi}_{2}$. (C.2)

Substituting ${\lambda}^{\left(\alpha \right)}$ and ${\gamma}^{\left(\alpha \right)}$ into ${x}_{1}^{\left(\alpha \right)}$ and ${x}_{2}^{\left(\alpha \right)}$ in (C.2) yields:

${x}_{1}^{\left(\alpha \right)}={\chi}_{1}-{\chi}_{2}$ and

${x}_{2}^{\left(\alpha \right)}={\chi}_{2}$. (C.3)
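A quick numerical check of Proposition 7.3 can be run against the first-order system (C.1). The parameter values are illustrative choices of ours; the multiplier $\gamma $ is recovered directly from the second condition in (C.1):

```python
# Numerical check of (C.1) and (C.3) under illustrative parameters (our choice).
beta, w, q, pi, chi1, chi2 = 4.0, 1.0, 5.0, 0.5, 2.0, 1.0
a = 0.4  # the weight alpha

x1, x2 = chi1 - chi2, chi2                      # constrained strategies (C.3)
lam = a*beta - a*x1 - (1 - a)*pi                # lambda from the first condition
gam = (1 - a)*(q - x2) - a*w - lam              # gamma from the second condition

foc1 = a*beta - a*x1 - (1 - a)*pi - lam         # first condition in (C.1)
slack = gam * (chi2 - x2)                       # complementary slackness in (C.1)
```

At this parameter point `gam` is strictly positive and the slackness product is zero, consistent with the inequality constraint being active.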

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zhou, A., Zhang, Q., Jin, Y., Tsang, E. and Okabe, T. (2005) A Model-Based Evolutionary Algorithm for Bi-Objective Optimization. 2005 IEEE Congress on Evolutionary Computation, Vol. 3, 2568-2575. https://doi.org/10.1109/CEC.2005.1555016

[2] Kukkonen, S. and Deb, K. (2006) Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems. IEEE International Conference on Evolutionary Computation, Vancouver, 16-21 July 2006, 1179-1186. https://doi.org/10.1109/CEC.2006.1688443

[3] Pinto-Varela, T., Barbosa-Póvoa, A.P. and Novaisa, A.Q. (2011) Bi-Objective Optimization Approach to the Design and Planning of Supply Chains: Economic versus Environmental Performances. Computers & Chemical Engineering, 35, 1454-1468. https://doi.org/10.1016/j.compchemeng.2011.03.009

[4] Lath, B., Basavarajappa, S.S., Kadadevaramath, R.S. and Chen, J.C.H. (2013) A Bi-Objective Optimization of Supply Chain Design and Distribution Operations Using Non-Dominated Sorting Algorithm: A Case Study. Expert Systems with Applications, 40, 5730-5739. https://doi.org/10.1016/j.eswa.2013.03.047

[5] Pereyra, V., Saunders, M. and Castillo, J. (2013) Equispaced Pareto Front Construction for Constrained Bi-Objective Optimization. Mathematical and Computer Modelling, 57, 2122-2131. https://doi.org/10.1016/j.mcm.2010.12.044

[6] Garg, H., Monica, R., Sharma, S.P. and Vishwakarma, Y. (2014) Bi-Objective Optimization of the Reliability-Redundancy Allocation Problem for Series-Parallel System. Journal of Manufacturing Systems, 33, 335-347. https://doi.org/10.1016/j.jmsy.2014.02.008

[7] Futrell, B.J., Ozelkan, E.C. and Brentrup, D. (2015) Bi-Objective Optimization of Building Enclosure Design for Thermal and Lighting Performance. Building and Environment, 92, 591-602. https://doi.org/10.1016/j.buildenv.2015.03.039

[8] Hirpa, D., Hare, W., Lucet, Y., Pushak, Y. and Tesfamariam, S. (2016) A Bi-Objective Optimization Framework for Three-Dimensional Road Alignment Design. Transportation Research Part C: Emerging Technologies, 65, 61-78. https://doi.org/10.1016/j.trc.2016.01.016

[9] Liu, M., Lee, C.-Y., Zhang, Z. and Chua, C. (2016) Bi-Objective Optimization for the Container Terminal Integrated Planning. Transportation Research Part B: Methodological, 93, 720-749. https://doi.org/10.1016/j.trb.2016.05.012

[10] Wang, S., Liu, M., Chu, F. and Chua, C. (2016) Bi-Objective Optimization of a Single Machine Batch Scheduling Problem with Energy Cost Consideration. Journal of Cleaner Production, 137, 1205-1215. https://doi.org/10.1016/j.jclepro.2016.07.206

[11] Cheraghalipour, A., Paydar, M.M. and Hajiaghaei-Keshtelia, M. (2018) A Bi-Objective Optimization for Citrus Closed-Loop Supply Chain Using Pareto-Based Algorithms. Applied Soft Computing, 69, 33-59. https://doi.org/10.1016/j.asoc.2018.04.022

[12] Ho-Huu, V., Hartjes, S., Visser, H.G. and Curran, R. (2018) An Improved MOEA/D Algorithm for Bi-Objective Optimization Problems with Complex Pareto Fronts and Its Application to Structural Optimization. Expert Systems with Applications, 92, 430-446. https://doi.org/10.1016/j.eswa.2017.09.051

[13] Yeh, C.-T. (2019) An Improved NSGA2 to Solve a Bi-Objective Optimization Problem of Multi-State Electronic Transaction Network. Reliability Engineering & System Safety, 191, Article ID: 106578. https://doi.org/10.1016/j.ress.2019.106578

[14] Liu, D., Huang, Q., Yang, Y., Liu, D. and Wei, X. (2020) Bi-Objective Algorithm Based on NSGA-II Framework to Optimize Reservoirs Operation. Journal of Hydrology, 585, Article ID: 124830. https://doi.org/10.1016/j.jhydrol.2020.124830

[15] Nagamanjula, R. and Pethalakshmi, A. (2020) A Novel Framework Based on Bi-Objective Optimization and LAN2FIS for Twitter Sentiment Analysis. Social Network Analysis and Mining, 10, 34. https://doi.org/10.1007/s13278-020-00648-5

[16] Xu, Z., Yao, L. and Chen, X. (2020) Urban Water Supply System Optimization and Planning: Bi-Objective Optimization and System Dynamics Methods. Computers & Industrial Engineering, 142, Article ID: 106373. https://doi.org/10.1016/j.cie.2020.106373

[17] Diao, B., Zhang, X. and Fang, H. (2021) Bi-Objective Optimization for Improving the Locomotion Performance of the Vibration-Driven Robot. Archive of Applied Mechanics, 91, 2073-2088. https://doi.org/10.1007/s00419-020-01870-5

[18] Mohammadi, M., Dehghan, M., Pirayesh, A. and Dolgui, A. (2022) Bi-Objective Optimization of a Stochastic Resilient Vaccine Distribution Network in the Context of the COVID-19 Pandemic. Omega, 113, Article ID: 102725. https://doi.org/10.1016/j.omega.2022.102725

[19] Kparib, D.Y., Twum, S.B. and Boah, D.K. (2018) An Improved Ant Colony System Algorithm for Solving Shortest Path Network Problems. International Journal of Science and Research, 7, 1123-1127.

[20] Kparib, D., Twum, S. and Boah, D. (2019) A Min-Max Strategy to Aid Decision Making in a Bi-Objective Discrete Optimization Problem Using an Improved Ant Colony Algorithm. American Journal of Operations Research, 9, 161-174. https://doi.org/10.4236/ajor.2019.94010

[21] Gulben, C. and Orhan, Y. (2015) An Improved Ant Colony Optimization Algorithm for Construction Site Layout Problems. Journal of Building Construction and Planning Research, 3, 221-232. https://doi.org/10.4236/jbcpr.2015.34022

[22] Zaninudin, Z. and Paputungan, T.V. (2013) A Hybrid Optimization Algorithm Based on Genetic Algorithm and Ant Colony Optimization. International Journal of Artificial Intelligence and Applications, 4, 63-75. https://doi.org/10.5121/ijaia.2013.4505

[23] Stutzle, T. and Hoos, H. (1997) Max-Min Ant System and Local Search for the Travelling Salesman Problem. In: IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, 309-314.

[24] Messac, A. (1996) Physical Programming: Effective Optimization for Computational Design. AIAA Journal, 34, 149-158. https://doi.org/10.2514/3.13035

[25] Das, I. and Dennis, J.E. (1998) Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM Journal on Optimization, 8, 631-657. https://doi.org/10.1137/S1052623496307510

[26] Deb, K. (2001) Nonlinear Goal Programming Using Multi-Objective Genetic Algorithms. Journal of the Operational Research Society, 52, 291-302. https://doi.org/10.1057/palgrave.jors.2601089

[27] Messac, A., Ismail-Yahaya, A. and Mattson, C.A. (2003) The Normalized Normal Constraint Method for Generating the Pareto Frontier. Structural and Multidisciplinary Optimization, 25, 86-98. https://doi.org/10.1007/s00158-002-0276-1

[28] Messac, A. and Mattson, C.A. (2002) Generating Well-Distributed Sets of Pareto Points for Engineering Design Using Physical Programming. Optimization and Engineering, 3, 431-450. https://doi.org/10.1023/A:1021179727569

[29] Kim, I.Y. and de Weck, O.L. (2005) Adaptive Weighted-Sum Method for Bi-Objective Optimization: Pareto Front Generation. Structural and Multidisciplinary Optimization, 29, 149-158. https://doi.org/10.1007/s00158-004-0465-1

[30] Zhang, Q. and Li, H. (2007) MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Transactions on Evolutionary Computation, 11, 712-731. https://doi.org/10.1109/TEVC.2007.892759

[31] Chinchuluun, A. and Pardalos, P.M. (2007) A Survey of Recent Developments in Multiobjective Optimization. Annals of Operations Research, 159, 29-50. https://doi.org/10.1007/s10479-007-0186-0

[32] Mueller-Gritschneder, D., Graeb, H. and Schlichtmann, U. (2009) A Successive Approach to Compute the Bounded Pareto Front of Practical Multiobjective Optimization Problems. SIAM Journal on Optimization, 20, 915-934. https://doi.org/10.1137/080729013

[33] Pérez-Fernández, P., Ochoa, G., Montes, S., Díaz, I., Fernández, J., Paternain, D. and Bustince, H. (2021) Axiomatization and Construction of Orness Measures for Aggregation Functions. International Journal of Intelligent Systems, 36, 2208-2228. https://doi.org/10.1002/int.22376

[34] Marler, R.T. and Arora, J.S. (2004) Survey of Multi-Objective Optimization Methods for Engineering. Structural and Multidisciplinary Optimization, 26, 369-395. https://doi.org/10.1007/s00158-003-0368-6

[35] Gunantara, N. (2018) A Review of Multi-Objective Optimization: Methods and Its Applications. Cogent Engineering, 5, Article ID: 1502242. https://doi.org/10.1080/23311916.2018.1502242

[36] Orths, A., Schmitt, A., Styczyuskiz, A. and Verstege, J. (2001) Multi-Criteria Optimization Methods for Planning and Operation of Electrical Energy System. Electrical Engineering, 83, 251-258. https://doi.org/10.1007/s002020100085

[37] Collette, Y. and Siarry, P. (2009) Multi-Objective Optimization: Principles and Case Studies (Decision Engineering). Springer-Verlag, Berlin.

[38] Ehrgott, M. (2005) Multi-Criteria Optimization. Springer, Berlin.

[39] Eskelinen, P., Miettinen, K., Klamroth, K. and Hakanen, J. (2008) Pareto Navigator for Interactive Nonlinear Multi-Objective Optimization. Springer-Verlag, Berlin. https://doi.org/10.1007/s00291-008-0151-6

[40] Fonseca, C.M. and Fleming, P.J. (1995) An Overview of Evolutionary Algorithms in Multi-Objective Optimization. Evolutionary Computing, 3, 1-16. https://doi.org/10.1162/evco.1995.3.1.1

[41] Alaa, S., Abdel, K.B., Mohamed, A. and Noor, K. (2012) Multi-Objective Evolutionary Computation Solution for Chocolate Production System Using Pareto Method. International Journal of Computer Science, 9, 75-83.

[42] Subhamoy, C. and Sugata (2015) An Elitist Simulated Annealing Algorithm for Solving Multi-Objective Optimization Problems in Internet of Design. International Journal Advanced Network and Applications, 7, 2784-2789.

[43] Jakob, W. and Blum, C. (2014) Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts. Algorithms, 7, 166-185. https://doi.org/10.3390/a7010166

[44] Caramia, M. and Dell’Olmo, P. (2008) Multi-Objective Management in Freight Logistics: Increasing Capacity, Service Level and Safety with Optimization Algorithms. Springer, Berlin. http://www.springer.com/978-1-84800-381-1

[45] Rohilla, D.K. (2020) Classical Methods of Multi-Objective Optimization—A Comparative Study. International Journal for Technological Research in Engineering, 7, 6554-6560.

[46] Engau, A. and Wiecek, M. (2005) Generating ε-Efficient Solutions in Multi-Objective Programming. European Journal of Operational Research, 177, 1566-1579. https://doi.org/10.1016/j.ejor.2005.10.023

[47] Obayashi, S., Sasaki, D. and Oyama, A. (2004) Finding Tradeoffs by Using Multiobjective Optimization Algorithms. Transactions of the Japan Society for Aeronautical and Space Sciences, 47, 51-58. https://doi.org/10.2322/tjsass.47.51

[48] Lagarias, J.C., Reeds, J.A., Wright, M.H. and Wright, P.E. (1998) Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal on Optimization, 9, 112-147. https://doi.org/10.1137/S1052623496303470

[49] Miettinen, K. (1999) Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston.

[50] Bendsoe, M.P., Olhoff, N. and Taylor, J.E. (1984) A Variational Formulation for Multicriteria Structural Optimization. Journal of Structural Mechanics, 11, 523-544. https://doi.org/10.1080/03601218308907456

[51] Chankong, V. and Haimes, Y.Y. (1983) Multiobjective Decision Making Theory and Methodology. Elsevier Science Publishing, New York.

[52] Leitmann, G. (1974) Cooperative and Non-Cooperative Many Players Differential Games. Springer, New York. https://doi.org/10.1007/978-3-7091-2914-2

[53] Yeung, D.W.K. and Petrosyan, L.A. (2015) Subgame Consistent Cooperative Solution for NTU Dynamic Games via Variable Weights. Automatica, 50, 84-89. https://doi.org/10.1016/j.automatica.2015.01.030

[54] Yeung, D.W.K. and Petrosyan, L.A. (2016) Subgame Consistent Cooperation—A Comprehensive Treatise. Springer, Berlin. https://doi.org/10.1007/978-981-10-1545-8

[55] Nash, J.F. (1950) Equilibrium Points in N-Person Games. Proceedings of the National Academy of Sciences of the United States of America, 36, 48-49. https://doi.org/10.1073/pnas.36.1.48

[56] Davis, L. (1985) Applying Adaptive Algorithms to Epistatic Domains. Proceedings of the Joint International Conference on Artificial Intelligence, Los Angeles, 18-23 August 1985, 162-164.

[57] Aboulaich, R., Ellaia, R., Elmoumen, S., Habbal, A. and Moussaid, N. (2017) The Mean-CVaR Model for Portfolio Optimization Using a Multi-Objective Approach and the Kalai-Smorodinsky Solution. MATEC Web of Conferences, 105, 103-108. https://doi.org/10.1051/matecconf/201710500010

[58] Oukennou, A., Sandali, A. and Elmoumen, S. (2018) Coordinated Placement and Setting of FACTS in Electrical Network Based on Kalai-Smorodinsky Bargaining Solution and Voltage Deviation Index. International Journal of Electrical and Computer Engineering, 8, 4079-4088. https://doi.org/10.11591/ijece.v8i6.pp4079-4088

[59] Kalai, E. and Smorodinsky, M. (1975) Other Solutions to Nash’s Bargaining Problem. Econometrica, 43, 513-518. https://doi.org/10.2307/1914280

[60] Murata, T. and Ishibuchi, H. (1995) MOGA: Multi-Objective Genetic Algorithms. Proceedings of 1995 IEEE International Conference on Evolutionary Computation, Vol. 1, 289. https://doi.org/10.1109/ICEC.1995.489161

[61] Dodgson, J., Spackman, M.D., Pearman, A.D. and Phillips, L.D. (2009) Multi-Criteria Analysis: A Manual. Communities and Local Government Publications, London. http://www.communities.gov.uk/documents/corporate/pdf/1132618.pdf

[62] Rădulescu, R., Mannion, P., Roijers, D.M. and Nowé, A. (2020) Multi-Objective Multi-Agent Decision Making: A Utility-Based Analysis and Survey. Autonomous Agents and Multi-Agent Systems, 34, 10. https://doi.org/10.1007/s10458-019-09433-x


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.