Bi-Objective Optimization: A Pareto Method with Analytical Solutions

Abstract

Multiple objectives to be optimized simultaneously are prevalent in real-life problems. This paper develops a new Pareto Method for bi-objective optimization which yields analytical solutions. The Pareto optimal front is obtained in closed form, enabling the derivation of various solutions in a convenient and efficient way. The advantage of an analytical solution is the possibility of deriving accurate, exact and well-understood solutions, which is especially useful for policy analysis. An extension of the method to multiple objectives, with the objectives classified into two types, is also provided. Such an extension expands the applicability of the developed techniques.


1. Introduction

Many real-life optimization problems require that two or more objectives under analysis be optimized simultaneously. Frequently, these objectives conflict with each other, and it is not possible to find a single solution that maximizes all objectives simultaneously. Common pairs of conflicting objectives include inflation and unemployment, risk and return, environmental preservation and national income, current enjoyment and future education, and short-term profit and future growth. Studies in bi-objective optimization constitute a non-trivial part of multi-objective analysis. For instance, see Zhou et al. [1], Kukkonen and Deb [2], Pinto-Varela et al. [3], Lath et al. [4], Pereyra et al. [5], Garg [6], Futrell et al. [7], Hirpa et al. [8], Liu et al. [9], Wang et al. [10], Cheraghalipour et al. [11], Ho-Huu et al. [12], Yeh [13], Liu et al. [14], Nagamanjula and Pethalakshmi [15], Xu et al. [16], Diao et al. [17], Mohammadi et al. [18], Kparib et al. [19], Kparib et al. [20], Gulben and Orhan [21], Zaninudin and Paputungan [22], and Stutzle and Hoos [23]. Studies proposing multi-objective optimization techniques and solutions can be found in Messac [24], Das and Dennis [25], Deb [26], Messac et al. [27], Messac and Mattson [28], Kim and Weck [29], Zhang and Li [30], Chinchuluun and Pardalos [31], Mueller-Gritschneder et al. [32], Pereyra et al. [5], Pérez-Fernández et al. [33], Marler and Arora [34], Gunantara [35], Orths et al. [36], Collette and Siarry [37], Ehrgott [38], Eskelinen et al. [39], Fonseca and Fleming [40], Alaa et al. [41], Subhamoy and Sugata [42], Wilfried and Blum [43], Caramia and Dell’Olmo [44], Rohilla [45], Engau and Wiecek [46], Obayashi et al. [47], Lagarias et al. [48], Miettinen [49], Zhang and Li [30], Bendsoe et al. [50], and Chankong and Haimes [51].

In these studies, several methods are commonly used for constructing aggregation functions that approximate the preference of the decision-maker; they include the weighted sum, the Tchebycheff method, the normal boundary intersection, the normal constraint method, the Physical Programming method, Goal Programming, the epsilon constraint method and the Directed Search Domain. Often, very lengthy computational effort has to be invested, and the process may still end up with an insufficient number of Pareto optimal points to consider.

A crucial goal of a multi-objective optimization problem is to construct the Pareto optimal front (POF), which depicts the best trade-offs among the objectives to be optimized. The POF can be approximated as the solution of a series of scalar optimization subproblems in which the objective is an aggregation of the objectives. This paper presents a new Pareto Method for bi-objective optimization yielding the POF in the form of analytical solutions. An analytical solution involves framing the problem in a well-understood form and deriving an exact solution. The analytical method is often preferred because its solution is in exact closed form.

Analytical solutions have three important advantages:

1) Transparency: Analytical solutions are presented as mathematical expressions; they make the effects of variables and their interactions with each other explicit.

2) Efficiency: Usually, algorithms and models expressed with analytical solutions are more efficient for manipulation and analysis than numerical analysis. Specifically, it is often faster, more accurate, and more convenient to evaluate an analytical solution than to perform an equivalent numeric implementation.

3) Mathematical Rigor: Analytical methods are rigorous and provide exact solutions with high tractability.

This paper is organized as follows. The bi-objective optimization problem is formulated in Section 2. The derivation of the POF with equality constraints is provided in Section 3. Section 4 presents different analytical Pareto solutions with equality constraints. Section 5 derives the POF in cases with equality and inequality constraints. Analytical Pareto solutions under equality and inequality constraints are examined in Section 6. An illustrative example is given in Section 7. Section 8 provides an extension and concludes.

2. Bi-Objective Optimization Problem

Consider a bi-objective optimization problem in which the decision-maker faces two objectives: $f_1(x)$ and $f_2(x)$. The problem becomes

$\max_{x} F[f_1(x), f_2(x)]$,

subject to

$x \in X \subset R^n$, (2.1)

where $x = (x_1, x_2, \ldots, x_n) \in X \subset R^n$ is the vector of decision variables whose values are to be chosen in the optimization problem.

The feasible set of decision variables $X \subset R^n$ is implicitly determined by a set of equality constraints and a set of inequality constraints,

$g(x) = 0$ and $h(x) \geq 0$, (2.2)

where $g(x)$ is an $m$-dimensional vector of functions and $h(x)$ is a $\tau$-dimensional vector of functions.

The objectives $f_1(x)$ and $f_2(x)$ are functions which measure the effects of the decision variables $x$ on the objectives $f_1$ and $f_2$. The function $F[f_1, f_2]$ represents the ranking preference over different combinations of $f_1$ and $f_2$. It can take various functional forms, contingent upon the preference or targets fixed by the decision-maker.

The problem defined in (2.1)-(2.2) belongs to the class of constrained multi-objective optimization problems. There are a number of methods designed to assist the decision maker to arrive at the best compromise solution.

1) Scalarization: The most commonly used methods adopt schemes to convert the multiple objectives into a single scalar objective and apply standard scalar optimization algorithms to generate an optimal solution. Various weighted schemes to scalarize the multiple objectives into a scalar function are available, such as weighted global methods, weighted sum methods and exponential weighted criterion. One of the problems in scalarization is the existence of conflicting objectives.

2) Utility-Based Optimization: Another solution for multi-objective optimization is to explicitly consider the possible trade-offs between conflicting objective functions. Such trade-offs can be analyzed on the basis of the utility that these compromises have for the decision-maker. Many studies consider utility-based optimization to be a common standard in multi-objective optimization.

3) Axiomatic Solution: Often the decision-maker cannot concretely define what he prefers. Axiomatic solutions like the Nash arbitration scheme can be chosen. Based on predetermined axioms of fairness, the solution suggests an arbitration yielding the maximum (over a convex compact set of points) of the product of the players’ utilities. In this case, the utility functions always have non-negative values and have a value of zero in the absence of cooperation. It can also be generalized to become weighted product methods. Similarly, the Kalai-Smorodinsky solution is another solution to bargaining problems of utility maximizing players. In multi-objective problems, players’ utilities are replaced by objectives that the decision-maker aims to maximize simultaneously.

4) Goal Programming Method: Finally, the decision-maker may consider a goal programming solution. In particular, the decision-maker aims to reach, or get as close as possible to, a goal or a vector of targets.

In Section 4, we consider five methods with analytical solutions. Specifically, they are Nash arbitration and objective product method, target-attainment method, Kalai-Smorodinsky bargaining solution, scalarization method with weighted-sum and utility-based method.

3. Derivation of POF with Equality Constraints

We first consider, as a benchmark, the case where there are only equality constraints on the decision variables. (This corresponds to the case where the inequality constraints are either absent or inactive.) A way to obtain Pareto efficient strategies in the bi-objective optimization problem is through the weighted-sum method. Such an approach is also employed in identifying the players’ cooperative strategies belonging to the Pareto optimal set in non-transferable utility games (see [52] [53] [54] ). In particular, the POF can be traced out by identifying the Pareto efficient strategies through systematically changing the weights among the objective functions. Therefore, the decision-maker considers the problem:

$\max_{x} [\alpha f_1(x) + (1-\alpha) f_2(x)]$, for $\alpha \in [0, 1]$,

subject to

$g(x) = 0$. (3.1)

The corresponding Lagrange function can be expressed as:

$L(x, \lambda, \alpha) = [\alpha f_1(x) + (1-\alpha) f_2(x)] + \sum_{j=1}^{m} \lambda_j g_j(x)$, (3.2)

where $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_m)$ is the set of Lagrange multipliers, and $\alpha$ and $1-\alpha$, for $\alpha \in [0, 1]$, are the weights on objective 1 and objective 2 respectively.

First-order conditions for a maximum yield

$\alpha \frac{\partial f_1(x)}{\partial x_i} + (1-\alpha) \frac{\partial f_2(x)}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(x)}{\partial x_i} = 0$,

for $i \in [1, 2, \ldots, n]$,

$g_j(x) = 0$, for $j \in [1, 2, \ldots, m]$. (3.3)

If the system of $n + m$ first-order conditions in (3.3) satisfies the implicit function theorem, one can express the optimal decision variables $x^{\alpha} = (x_1^{\alpha}, x_2^{\alpha}, \ldots, x_n^{\alpha})$ and the corresponding Lagrange multipliers $\lambda^{\alpha} = (\lambda_1^{\alpha}, \lambda_2^{\alpha}, \ldots, \lambda_m^{\alpha})$ as functions of the exogenous parameter $\alpha$, that is

$x_i^{\alpha} = \varphi_i^{\alpha}(\alpha)$, for $i \in [1, 2, \ldots, n]$,

$\lambda_j^{\alpha} = \phi_j^{\alpha}(\alpha)$, for $j \in [1, 2, \ldots, m]$. (3.4)

Substituting the optimal decision variables $x_i^{\alpha} = \varphi_i^{\alpha}(\alpha)$ from (3.4) into the objectives $f_1$ and $f_2$, we can obtain the optimal objectives under $\alpha$ as:

$f_1(x^{\alpha}) = f_1(\varphi^{\alpha}(\alpha))$

and

$f_2(x^{\alpha}) = f_2(\varphi^{\alpha}(\alpha))$, (3.5)

where $\varphi^{\alpha}(\alpha) = (\varphi_1^{\alpha}(\alpha), \varphi_2^{\alpha}(\alpha), \ldots, \varphi_n^{\alpha}(\alpha))$.

In the case where α = 1 , it generates the anchor point where the best of objective f 1 is obtained, that is max x f 1 ( x ) . In the case where α = 0 , it generates the anchor point where the best of objective f 2 is obtained, that is max x f 2 ( x ) . The Pareto optimal frontier (POF) can be obtained as

$(f_1(\varphi^{\alpha}(\alpha)), f_2(\varphi^{\alpha}(\alpha)))$, for $\alpha \in [0, 1]$, (3.6)

which is analytically tractable.

An increase in the value of α signifies an increase in the weight for objective f 1 and a decrease in the weight for objective f 2 . Hence the POF is downward sloping in the ( f 1 , f 2 ) space.

The point $(f_1(\varphi^{1}(1)), f_2(\varphi^{1}(1)))$ is an anchor point at which the objective $f_1$ reaches its maximum. Similarly, the point $(f_1(\varphi^{0}(0)), f_2(\varphi^{0}(0)))$ is an anchor point at which the objective $f_2$ reaches its maximum. The point $(f_1(\varphi^{1}(1)), f_2(\varphi^{0}(0)))$ is the utopia (ideal) point at which $f_1$ reaches its maximum and $f_2$ reaches its maximum simultaneously.

In addition, if there exist minimum levels of the objectives, $f_1(x) \geq \underline{f}_1$ and $f_2(x) \geq \underline{f}_2$, that the optimal solution has to fulfill, then the range of the POF has to be restricted to lie above $\underline{f}_1$ and above $\underline{f}_2$. The corresponding restriction on the weight can be obtained as $\alpha \in (\underline{\alpha}, \bar{\alpha})$, where $f_1(\varphi^{\underline{\alpha}}(\underline{\alpha})) = \underline{f}_1$ and $f_2(\varphi^{\bar{\alpha}}(\bar{\alpha})) = \underline{f}_2$.

The point ( f _ 1 , f _ 2 ) is called the nadir point. The point ( f 1 ( φ α _ ( α _ ) ) , f 2 ( φ α _ ( α _ ) ) ) becomes an anchor point at which the objective f 2 reaches its maximum. The point ( f 1 ( φ α ¯ ( α ¯ ) ) , f 2 ( φ α ¯ ( α ¯ ) ) ) becomes an anchor point at which the objective f 1 reaches its maximum. The point ( f 1 ( φ α ¯ ( α ¯ ) ) , f 2 ( φ α _ ( α _ ) ) ) becomes the utopia point.

The POF is inside the rectangle bounded by the nadir point, the utopia point and the two anchor points.

Figure 1. POF under equality constraints.

The part of the area bounded by the nadir point, the two anchor points and the curve of the POF in Figure 1 consists of dominated points. The part of the area bounded by the utopia point, the two anchor points and the curve of the POF in Figure 1 consists of unreachable points. Since the decision-maker would not choose a dominated point and cannot reach an unreachable point, any optimal solution chosen by the decision-maker would be on the POF.
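To make the weighted-sum construction concrete, the following is a minimal symbolic sketch of Section 3 in Python with sympy. The two quadratic objectives and the linear equality constraint are assumed toy examples, not taken from the paper; the point is that the first-order conditions (3.3) can be solved once in closed form, so the POF (3.6) is an explicit function of the weight $\alpha$.

import sympy as sp

alpha, lam = sp.symbols('alpha lambda_', real=True)
x1, x2 = sp.symbols('x1 x2', real=True)

f1 = 10 - (x1 - 2)**2 - x2**2        # assumed toy objective f1
f2 = 10 - x1**2 - (x2 - 2)**2        # assumed toy objective f2
g = x1 + x2 - 1                      # assumed equality constraint g(x) = 0

# Lagrangian (3.2) and first-order conditions (3.3)
L = alpha*f1 + (1 - alpha)*f2 + lam*g
foc = [sp.diff(L, v) for v in (x1, x2)] + [g]
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]     # x(alpha), lambda(alpha)

# Closed-form POF (3.6): both objectives as functions of the weight alpha
f1_alpha = sp.simplify(f1.subs(sol))
f2_alpha = sp.simplify(f2.subs(sol))
f1_num = sp.lambdify(alpha, f1_alpha)                # numeric versions reused in later sketches
f2_num = sp.lambdify(alpha, f2_alpha)
print(f1_alpha, f2_alpha)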

4. Analytical Pareto Solutions with Equality Constraints

In this section, we consider various solution methods via the analytical solution of the POF derived in Equation (3.6) under equality constraints only.

4.1. Nash Arbitration and Objective Product Method

The Nash objective product maximization seeks a solution which yields the maximum of the product of the objectives over the feasible decision region. The idea is derived from Nash [55] and applied by Davis [56] in multi-objective optimization. Consider Figure 1: the feasible decision region is the part of the POF bounded by the vertical line $f_1 = \underline{f}_1$ and the horizontal line $f_2 = \underline{f}_2$. The maximization of the product of the relevant objectives can be expressed as:

$\max_{\alpha} \, [f_1(\varphi^{\alpha}(\alpha)) - \underline{f}_1][f_2(\varphi^{\alpha}(\alpha)) - \underline{f}_2]$. (4.1)

Performing the maximization in (4.1), we obtain the condition

$[f_2(\varphi^{\alpha}(\alpha)) - \underline{f}_2] \sum_{i=1}^{n} \frac{\partial f_1(\varphi^{\alpha}(\alpha))}{\partial \varphi_i^{\alpha}} \frac{\partial \varphi_i^{\alpha}}{\partial \alpha} + [f_1(\varphi^{\alpha}(\alpha)) - \underline{f}_1] \sum_{i=1}^{n} \frac{\partial f_2(\varphi^{\alpha}(\alpha))}{\partial \varphi_i^{\alpha}} \frac{\partial \varphi_i^{\alpha}}{\partial \alpha} = 0$. (4.2)

The weight $\alpha^*$ that satisfies (4.2) yields the solution of the objective product maximization method, which can be obtained as $(f_1(\varphi^{\alpha^*}(\alpha^*)), f_2(\varphi^{\alpha^*}(\alpha^*)))$.
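Continuing the sketch from Section 3 (same assumed toy POF, with assumed floor levels standing in for $\underline{f}_1$ and $\underline{f}_2$), the weight $\alpha^*$ characterized by (4.2) can equivalently be located by maximizing the product directly along the closed-form frontier:

from scipy.optimize import minimize_scalar

f1_low, f2_low = 1.0, 1.0            # assumed minimum acceptable levels (nadir coordinates)

# maximize the Nash objective product (4.1) over the weight alpha in [0, 1]
res = minimize_scalar(lambda a: -(f1_num(a) - f1_low)*(f2_num(a) - f2_low),
                      bounds=(0.0, 1.0), method='bounded')
alpha_star = res.x
print(alpha_star, f1_num(alpha_star), f2_num(alpha_star))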

4.2. Target-Attainment Method

In the target-attainment method, the decision-maker aims to reach a target or a vector of targets. For instance, the target for $f_1(x)$ is to reach $T_1$ and the target for $f_2(x)$ is to reach $T_2$. The objective is to minimize the deviation of the solution from the targets. One can depict the explicitly derived POF and compare it to the target $(T_1, T_2)$.

If the target point $(T_1, T_2)$ is outside the POF, the problem becomes minimizing the distance between the POF and the point $(T_1, T_2)$, indicated by the dotted line in Figure 2, that is

$\min_{\alpha} \, [(T_1 - f_1(\varphi^{\alpha}(\alpha)))^2 + (T_2 - f_2(\varphi^{\alpha}(\alpha)))^2]^{1/2}$. (4.3)

Figure 2. Target-attainment solution.

The solution to (4.3) will be characterized by the condition

$0 = \frac{1}{2} [(T_1 - f_1(\varphi^{\alpha}(\alpha)))^2 + (T_2 - f_2(\varphi^{\alpha}(\alpha)))^2]^{-1/2} \times \Big[ -2(T_1 - f_1(\varphi^{\alpha}(\alpha))) \sum_{i=1}^{n} \frac{\partial f_1}{\partial \varphi_i^{\alpha}} \frac{\partial \varphi_i^{\alpha}}{\partial \alpha} - 2(T_2 - f_2(\varphi^{\alpha}(\alpha))) \sum_{i=1}^{n} \frac{\partial f_2}{\partial \varphi_i^{\alpha}} \frac{\partial \varphi_i^{\alpha}}{\partial \alpha} \Big]$. (4.4)

We can derive the weight α * that satisfies (4.4), and obtain the solution ( f 1 ( φ α * ( α * ) ) , f 2 ( φ α * ( α * ) ) ) .

Consider the case where the target $f_1 = z_1$ must be attained. We first identify the weight $\alpha^*$ such that

f 1 ( x α * ) = f 1 ( φ α * ( α * ) ) = z 1 . (4.5)

The solution is then ( f 1 ( φ α * ( α * ) ) , f 2 ( φ α * ( α * ) ) ) .
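A corresponding sketch for the target-attainment solution (4.3), again continuing the assumed toy POF from Section 3 with an assumed target point lying outside the frontier, is:

from scipy.optimize import minimize_scalar

T1, T2 = 10.0, 10.0                  # assumed target point outside the POF

# minimize the Euclidean deviation (4.3) from the target along the POF
dist = lambda a: ((T1 - f1_num(a))**2 + (T2 - f2_num(a))**2)**0.5
res = minimize_scalar(dist, bounds=(0.0, 1.0), method='bounded')
print(res.x, f1_num(res.x), f2_num(res.x))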

4.3. Kalai-Smorodinsky Bargaining Solution

Aboulaich et al. [57] and Oukennou et al. [58] applied the Kalai-Smorodinsky Bargaining Solution [59] for solving multi-objective optimization problems. The Kalai-Smorodinsky solution is a solution to bargaining problems of utility maximizing players. In multi-objective problems, players’ utilities are replaced by objectives that the decision-maker aims to maximize simultaneously. The main advantage of the solution is that it yields a concrete criterion to select one and only one unique point along the POF. Mathematically, it is the intersection of the POF and the line segment connecting the nadir point and the utopia point.

The nadir point is ( f _ 1 , f _ 2 ) . To obtain the utopia point, we first identify the α that satisfies f 1 ( φ α ( α ) ) = f _ 1 , and denote it by α 1 . The point ( f 1 ( φ α 1 ( α 1 ) ) , f 2 ( φ α 1 ( α 1 ) ) ) is the top anchor point of the POF. Similarly, we identify the α that satisfies f 2 ( φ α ( α ) ) = f _ 2 and denote it by α 2 . The point ( f 1 ( φ α 2 ( α 2 ) ) , f 2 ( φ α 2 ( α 2 ) ) ) is the bottom anchor point of the POF. Using the top anchor point and the bottom anchor point of the POF, we can obtain the utopia point as ( f 1 ( φ α 2 ( α 2 ) ) , f 2 ( φ α 1 ( α 1 ) ) ) .

The slope of the line segment connecting the nadir point and the utopia point can be obtained as $[f_2(\varphi^{\alpha_1}(\alpha_1)) - \underline{f}_2] \div [f_1(\varphi^{\alpha_2}(\alpha_2)) - \underline{f}_1]$, which we denote by $\theta$. To obtain the Kalai-Smorodinsky solution, we trace the $\alpha$ satisfying

$\frac{f_2(\varphi^{\alpha}(\alpha)) - \underline{f}_2}{f_1(\varphi^{\alpha}(\alpha)) - \underline{f}_1} = \theta$. (4.6)

Let α * denote the α that satisfies (4.6). The Kalai-Smorodinsky solution can be obtained as ( f 1 ( φ α * ( α * ) ) , f 2 ( φ α * ( α * ) ) ) . Graphically, the Kalai-Smorodinsky bargaining solution is the point of intersection of the POF and the line joining the Nadir point and the Utopia point in Figure 3.

Figure 3. Kalai-Smorodinsky bargaining solution.
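Continuing the same assumed toy POF, a sketch of the Kalai-Smorodinsky computation, with assumed nadir coordinates and a scalar root find for the weight satisfying (4.6), is:

from scipy.optimize import brentq

f1_low, f2_low = 1.0, 1.0                        # assumed nadir coordinates
utopia = (f1_num(1.0), f2_num(0.0))              # best f1 and best f2 on the POF
theta = (utopia[1] - f2_low) / (utopia[0] - f1_low)

# condition (4.6): the POF point lying on the nadir-utopia ray
h = lambda a: (f2_num(a) - f2_low) - theta*(f1_num(a) - f1_low)
alpha_ks = brentq(h, 1e-6, 1 - 1e-6)             # assumes h changes sign on (0, 1)
print(alpha_ks, f1_num(alpha_ks), f2_num(alpha_ks))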

4.4. Scalarization Method with Weighted-Sum

The scalarization method converts the multiple objective functions into a single solution, with the weights determined before the optimization process. The scalarization method incorporates the multi-objective functions into a scalar fitness function as in the following equation [60].

$\max_{x} F[f_1(x), f_2(x)] = w_1 f_1(x) + w_2 f_2(x)$. (4.7)

The weight of an objective function determines the solution and reveals the performance priority [61]. A larger weight is given to an objective function that has a higher priority than the ones with a smaller weight. Normalizing the weights $w_1$ and $w_2$, we can obtain $\alpha^* = \frac{w_1}{w_1 + w_2}$ and $(1 - \alpha^*) = \frac{w_2}{w_1 + w_2}$. The solution can be obtained as $(f_1(\varphi^{\alpha^*}(\alpha^*)), f_2(\varphi^{\alpha^*}(\alpha^*)))$.
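As a small sketch on the same assumed toy POF, assumed priorities $w_1$ and $w_2$ are normalized into $\alpha^*$ and the solution is read off the closed-form frontier:

w1, w2 = 3.0, 1.0                    # assumed decision-maker priorities
alpha_w = w1 / (w1 + w2)             # normalized weight alpha* of (4.7)
print(alpha_w, f1_num(alpha_w), f2_num(alpha_w))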

4.5. Utility-Based Method

Rădulescu et al. [62] considered utility-based analysis to be the standard paradigm for studying multi-objective problems. In particular, they argued that compromises between competing objectives in MOMAS should be analyzed on the basis of the utility that these compromises have for the users of a system, where an agent’s utility function maps their payoff vectors to scalar utility values. The utility of different combinations of objectives is given by the utility function U ( f 1 , f 2 ) . It represents a scalarization of the objectives into a preference ranking index. It can be linear or nonlinear. If the utility function is linear, it resembles a scalarization of the objectives with weighted-sum of the objectives. Very often, nonlinear utility function U ( f 1 , f 2 ) yields a set of indifference (level) curves of preferences which are convex, showing diminishing marginal rate of substitution between the objectives. Such utility functions represent a nonlinear scalarization of the objectives.

Consider the case where the utility function is $U(f_1, f_2) = f_1 f_2$. It yields indifference (level) curves which are convex, showing a diminishing marginal rate of substitution between the objectives.

The maximization of the utility function U ( f 1 , f 2 ) = f 1 f 2 can be expressed as:

max α [ f 1 ( φ α ( α ) ) f 2 ( φ α ( α ) ) ] . (4.8)

Performing the maximization in (4.8), we obtain the condition

$f_2(\varphi^{\alpha}(\alpha)) \sum_{i=1}^{n} \frac{\partial f_1(\varphi^{\alpha}(\alpha))}{\partial \varphi_i^{\alpha}} \frac{\partial \varphi_i^{\alpha}}{\partial \alpha} + f_1(\varphi^{\alpha}(\alpha)) \sum_{i=1}^{n} \frac{\partial f_2(\varphi^{\alpha}(\alpha))}{\partial \varphi_i^{\alpha}} \frac{\partial \varphi_i^{\alpha}}{\partial \alpha} = 0$. (4.9)

Figure 4. Utility-based solution.

The weight $\alpha^*$ that satisfies (4.9) yields the solution of maximizing $U(f_1, f_2) = f_1 f_2$, with $f_1 = f_1(\varphi^{\alpha^*}(\alpha^*))$ and $f_2 = f_2(\varphi^{\alpha^*}(\alpha^*))$. The point where the POF and the indifference curve are tangent to each other is the utility-based solution, as shown in Figure 4.
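A sketch of the utility-based choice on the same assumed toy POF (whose objectives are positive there, so the product utility is meaningful) is:

from scipy.optimize import minimize_scalar

# maximize U(f1, f2) = f1*f2 along the closed-form POF
res = minimize_scalar(lambda a: -f1_num(a)*f2_num(a),
                      bounds=(0.0, 1.0), method='bounded')
print(res.x, f1_num(res.x), f2_num(res.x))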

4.6. Performance of Pareto Method with Analytical Solution

In general, multi-objective optimization requires huge computational effort, and frequently an insufficient number of Pareto optimal points will be found. Pareto methods usually require less complicated mathematical equations. The solution using the Pareto method is a performance-indicator component that produces a compromise solution and can be displayed on the Pareto optimal front. Obtaining a Pareto optimal solution set is preferable to a single solution: it provides a basis upon which to make value judgments in order to settle on a final solution.

The Pareto method with analytical solution involves framing the problem in a well-understood form and deriving an exact solution. The method is often preferred because its solution is in exact closed form. A wide range of the POF can be traced out analytically with the relevant mathematical expressions. The method is more efficient for manipulation and analysis than numerical analysis. Specifically, it is often faster, more accurate, and more convenient to evaluate an analytical solution than to perform an equivalent numerical implementation. In addition, the effects of variables, their interactions with each other and parameter changes are highly tractable. Finally, the availability of the POF (or its relevant parts) in closed form allows the decision-maker to compare solutions under different criteria for multi-objective optimization.

5. POF with Equality and Inequality Constraints

To complete the analysis, we consider the case where there are equality and inequality constraints in the decision variables.

5.1. Pareto Efficient Strategies

Again, we identify the Pareto efficient strategies by systematically changing the weights among the objective functions. Specifically, the decision-maker considers the problem:

$\max_{x} [\alpha f_1(x) + (1-\alpha) f_2(x)]$, for $\alpha \in [0, 1]$, (5.1)

subject to

$g(x) = 0$ and $h(x) \geq 0$. (5.2)

To solve the optimization with equality and inequality constraints, we invoke the Karush-Kuhn-Tucker conditions and use the Lagrange multipliers approach with the corresponding Lagrange function:

$L(x, \lambda, \gamma, \alpha) = [\alpha f_1(x) + (1-\alpha) f_2(x)] + \sum_{j=1}^{m} \lambda_j g_j(x) + \sum_{k=1}^{\tau} \gamma_k h_k(x)$, (5.3)

where $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_m)$ and $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_\tau)$ are the sets of Lagrange multipliers. Necessary conditions for a maximum include:

$\alpha \frac{\partial f_1(x)}{\partial x_i} + (1-\alpha) \frac{\partial f_2(x)}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(x)}{\partial x_i} + \sum_{k=1}^{\tau} \gamma_k \frac{\partial h_k(x)}{\partial x_i} = 0$,

for $i \in [1, 2, \ldots, n]$,

$g_j(x) = 0$, for $j \in [1, 2, \ldots, m]$,

$\gamma_k h_k(x) = 0$, for $k \in [1, 2, \ldots, \tau]$; (5.4)

$h_k(x) \geq 0$, for $k \in [1, 2, \ldots, \tau]$, and $\gamma_k \geq 0$, for $k \in [1, 2, \ldots, \tau]$. (5.5)

In the case where $\gamma_k > 0$, the inequality constraint is binding, with $h_k(x) = 0$ holding, and acts as an active constraint. In the case where $\gamma_k = 0$, the condition $h_k(x) = 0$ does not have to hold and the constraint is inactive.

Equation system (5.4) gives rise to n + m + τ equations for n decision variables ( x 1 , x 2 , , x n ) , m Lagrange multipliers ( λ 1 , λ 2 , , λ m ) , and τ Lagrange multipliers ( γ 1 , γ 2 , , γ τ ) .

Moreover, any admissible solution has to satisfy (5.5). If (5.5) is not satisfied, it means that the solution satisfying the first order conditions is either not in the region fulfilling the constraints, or has a negative Lagrange multiplier, which is not allowed for a maximum.

If condition (5.5) is fulfilled and the first-order conditions (5.4) for an interior solution satisfy the implicit function theorem, one can express the optimal decision variables $x^{\alpha} = (x_1^{\alpha}, x_2^{\alpha}, \ldots, x_n^{\alpha})$ and the corresponding Lagrange multipliers $\lambda^{\alpha} = (\lambda_1^{\alpha}, \lambda_2^{\alpha}, \ldots, \lambda_m^{\alpha})$ and $\gamma^{\alpha} = (\gamma_1^{\alpha}, \gamma_2^{\alpha}, \ldots, \gamma_\tau^{\alpha})$ as functions of the exogenous parameter $\alpha$, that is

$\hat{x}_i^{\alpha} = \hat{\varphi}_i^{\alpha}(\alpha)$, for $i \in [1, 2, \ldots, n]$,

$\lambda_j^{\alpha} = \phi_j^{\alpha}(\alpha)$, for $j \in [1, 2, \ldots, m]$,

$\gamma_k^{\alpha} = \psi_k^{\alpha}(\alpha)$, for $k \in [1, 2, \ldots, \tau]$. (5.6)

5.2. The Corresponding POF

Substituting the optimal decision variables x ^ i α = φ ^ i α ( α ) from (5.6) into the objectives f 1 and f 2 , we obtain the optimal objectives under α as:

$f_1(\hat{x}^{\alpha}) = f_1(\hat{\varphi}^{\alpha}(\alpha))$ and $f_2(\hat{x}^{\alpha}) = f_2(\hat{\varphi}^{\alpha}(\alpha))$. (5.7)

The Pareto optimal frontier (POF) at the point which corresponds to the adoption of objective weight α can be obtained as

$(f_1(\hat{\varphi}^{\alpha}(\alpha)), f_2(\hat{\varphi}^{\alpha}(\alpha)))$, for $\alpha \in [0, 1]$, (5.8)

which is again analytically tractable.

Theoretically, the frame of the POF with both equality and inequality constraints can be delineated by computing the Pareto strategies for different values of α between 0 and 1. Note that the Pareto optimal point ( f 1 ( φ ^ α ( α ) ) , f 2 ( φ ^ α ( α ) ) ) may have to be calculated point by point for different values of α , because the set of active inequality constraints may vary as α changes. The POF with both equality and inequality constraints is bounded by the POF with equality constraint only. Unlike the case with equality constraints, we have to track down the corresponding point of the POF with individual values of α , and there exists the possibility that the solution satisfying the first order conditions is not in a feasible region bounded by the constraints. Therefore, the POF may have broken ranges as shown in Figure 5.

Figure 5. Broken POF.
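When the active set changes with $\alpha$, a point-by-point numerical stand-in for the analytical KKT system (5.3)-(5.5) can be used to trace the frontier. The sketch below reuses the assumed toy objectives from the earlier sketch and adds an assumed inequality constraint; it illustrates the point-by-point tracing only, not the paper's closed-form construction:

import numpy as np
from scipy.optimize import minimize

f1v = lambda x: 10 - (x[0] - 2)**2 - x[1]**2        # assumed toy objectives
f2v = lambda x: 10 - x[0]**2 - (x[1] - 2)**2
cons = [{'type': 'eq',   'fun': lambda x: x[0] + x[1] - 1},   # g(x) = 0
        {'type': 'ineq', 'fun': lambda x: 1.0 - x[1]}]        # h(x) = 1 - x2 >= 0 (assumed)

pof = []
for a in np.linspace(0.0, 1.0, 101):
    res = minimize(lambda x, a=a: -(a*f1v(x) + (1 - a)*f2v(x)),
                   x0=np.zeros(2), constraints=cons)
    pof.append((f1v(res.x), f2v(res.x)))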

6. Analytical Pareto Solutions under Equality and Inequality Constraints

Note that the various solution methods applied to the analytical POF in Section 4 each yield a unique solution $\alpha^*$. If the solution is in an area where all inequality constraints are inactive, the solution would be the same as that in Section 4. If the solution is in an area where some inequality constraints are active, we first solve the first-order conditions in (5.4) for $\alpha$ in an area near the $\alpha^*$ identified in Section 4. Specifically, we obtain

$\hat{x}_i^{\alpha} = \hat{\varphi}_i^{\alpha}(\alpha)$, for $i \in [1, 2, \ldots, n]$,

$\lambda_j^{\alpha} = \phi_j^{\alpha}(\alpha)$, for $j \in [1, 2, \ldots, m]$,

$\gamma_k^{\alpha} = \psi_k^{\alpha}(\alpha)$, for $k \in [1, 2, \ldots, \tau]$, for $\alpha \in [\alpha^* - \varepsilon, \alpha^* + \varepsilon]$. (6.1)

The corresponding point of the POF can be expressed as

$(f_1(\hat{\varphi}^{\alpha}(\alpha)), f_2(\hat{\varphi}^{\alpha}(\alpha)))$, for $\alpha \in [\alpha^* - \varepsilon, \alpha^* + \varepsilon]$. (6.2)

Then, we check whether the point derived in (6.2) with active inequality constraints still fulfills the optimality condition. If not, we have to identify some points on the POF in the adjacent area and search for the optimal solution.

For instance, consider the target-attainment method of Section 4.2, in which the target for $f_1(x)$ is to reach $T_1$ and the target for $f_2(x)$ is to reach $T_2$. The decision-maker seeks to minimize the deviation of the solution from the target $(T_1, T_2)$. We first identify the POF point for $\alpha^*$ under the equality constraint only, as given in Section 4.2. Then we verify whether there exist active inequality constraints. If some inequality constraints are active at the solution point $\alpha^*$, we have to consider POF points at $\alpha$ in a neighborhood of $\alpha^*$. We follow (5.3)-(5.5) and solve the problem with equality and inequality constraints under the weight $\alpha \in [\alpha^* - \varepsilon, \alpha^* + \varepsilon]$ to obtain the Pareto efficient strategies and the corresponding POF. We let

$\hat{x}_i^{\alpha} = \hat{\varphi}_i^{\alpha}(\alpha)$, for $i \in [1, 2, \ldots, n]$ and $\alpha \in [\alpha^* - \varepsilon, \alpha^* + \varepsilon]$ (6.3)

denote the optimal decision variables in the presence of active inequality constraints. We then calculate the distance between the target $(T_1, T_2)$ and $(f_1(\hat{\varphi}^{\alpha}(\alpha)), f_2(\hat{\varphi}^{\alpha}(\alpha)))$, that is

$[(T_1 - f_1(\hat{\varphi}^{\alpha}(\alpha)))^2 + (T_2 - f_2(\hat{\varphi}^{\alpha}(\alpha)))^2]^{1/2}$, for $\alpha \in [\alpha^* - \varepsilon, \alpha^* + \varepsilon]$. (6.4)

Finally, the point ( f 1 ( φ ^ α * * ( α * * ) ) , f 2 ( φ ^ α * * ( α * * ) ) ) which yields the shortest distance in (6.4) is the solution for the target-attainment method.

7. An Illustrative Example

Consider a bi-objective optimization in which the decision-maker faces two objectives:

$f_1(x) = Y + \beta x_1 - \frac{1}{2}(x_1)^2 - w x_2 + C x_3 - \frac{1}{2}(x_3)^2$, (7.1)

and

$f_2(x) = P + q x_2 - \frac{1}{2}(x_2)^2 - \pi x_1 - \mu x_3$. (7.2)

There is an equality constraint

$\chi_1 - x_1 - x_2 = 0$, (7.3)

and an inequality constraint

$\chi_2 - x_2 \geq 0$. (7.4)

7.1. POF with Equality Constraint Only

We first consider, as a benchmark, the case with the equality constraint only. To obtain the Pareto efficient strategies of the bi-objective optimization problem (7.1)-(7.4), the decision-maker considers the problem:

$\max_{x_1, x_2, x_3} \Big\{ \alpha \big[ Y + \beta x_1 - \frac{1}{2}(x_1)^2 - w x_2 + C x_3 - \frac{1}{2}(x_3)^2 \big] + (1-\alpha) \big[ P + q x_2 - \frac{1}{2}(x_2)^2 - \pi x_1 - \mu x_3 \big] \Big\}$, for $\alpha \in [0, 1]$, (7.5)

subject to (7.3).

The corresponding Lagrange function can be expressed as:

$L(x, \lambda, \alpha) = \alpha \big[ Y + \beta x_1 - \frac{1}{2}(x_1)^2 - w x_2 + C x_3 - \frac{1}{2}(x_3)^2 \big] + (1-\alpha) \big[ P + q x_2 - \frac{1}{2}(x_2)^2 - \pi x_1 - \mu x_3 \big] + \lambda (\chi_1 - x_1 - x_2)$. (7.6)

The Pareto efficient strategies of the problem of maximizing (7.5) subject to equality constraint (7.3) can be solved as:

Proposition 7.1.

The Pareto efficient strategies of the problem of maximizing (7.5) subject to equality constraint (7.3) are:

$x_1(\alpha) = \alpha \beta - \pi + \alpha \pi - q + \alpha q + \alpha w + \chi_1 - \alpha \chi_1$, $x_2(\alpha) = q - \alpha q - \alpha w - \alpha \beta + \pi - \alpha \pi + \alpha \chi_1$, $x_3(\alpha) = C - \frac{(1-\alpha)\mu}{\alpha}$; for $\alpha \in [0, 1]$. (7.7)

Proof: See Appendix A.
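The closed forms in Proposition 7.1 can be reproduced symbolically; the following sketch solves the first-order conditions of the Lagrangian (7.6) with sympy, keeping all parameters symbolic:

import sympy as sp

alpha, lam = sp.symbols('alpha lambda_')
x1, x2, x3 = sp.symbols('x1 x2 x3')
Y, P, beta, w, C, q, pi_, mu, chi1 = sp.symbols('Y P beta w C q pi mu chi1')

f1 = Y + beta*x1 - x1**2/2 - w*x2 + C*x3 - x3**2/2     # objective (7.1)
f2 = P + q*x2 - x2**2/2 - pi_*x1 - mu*x3               # objective (7.2)
L = alpha*f1 + (1 - alpha)*f2 + lam*(chi1 - x1 - x2)   # Lagrangian (7.6)

foc = [sp.diff(L, v) for v in (x1, x2, x3)] + [chi1 - x1 - x2]
sol = sp.solve(foc, [x1, x2, x3, lam], dict=True)[0]
print(sp.expand(sol[x1]))        # matches x1(alpha) in (7.7)
print(sp.expand(sol[x2]))        # matches x2(alpha) in (7.7)
print(sp.simplify(sol[x3]))      # matches x3(alpha) in (7.7)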

The relationship between the Pareto efficient strategies $x_1(\alpha)$, $x_2(\alpha)$, $x_3(\alpha)$ and the weight $\alpha$ can be obtained as follows.

Proposition 7.2.

$\frac{\partial x_1(\alpha)}{\partial \alpha} = \beta + \pi + q + w - \chi_1 > 0$,

$\frac{\partial x_2(\alpha)}{\partial \alpha} = -\beta - \pi - q - w + \chi_1 < 0$,

$\frac{\partial x_3(\alpha)}{\partial \alpha} = \frac{\mu}{\alpha^2} > 0$.

Proof: See Appendix B.

Substituting the Pareto efficient strategies into the objective functions (7.1)-(7.2) yields the POF as:

$\Big( \big( Y + \beta x_1(\alpha) - \frac{1}{2}(x_1(\alpha))^2 - w x_2(\alpha) + C x_3(\alpha) - \frac{1}{2}(x_3(\alpha))^2 \big), \big( P + q x_2(\alpha) - \frac{1}{2}(x_2(\alpha))^2 - \pi x_1(\alpha) - \mu x_3(\alpha) \big) \Big)$, for $\alpha \in [0, 1]$. (7.8)

In addition, if there exist minimum levels of the objectives, $f_1(x) \geq \underline{f}_1$ and $f_2(x) \geq \underline{f}_2$, that the optimal solution has to fulfill, then the range of the POF has to be restricted to lie above $\underline{f}_1$ and above $\underline{f}_2$. We denote the corresponding restriction on the weight as $\alpha \in (\underline{\alpha}, \bar{\alpha})$. The value of $\underline{\alpha}$ can be obtained by solving

$Y + \beta x_1(\underline{\alpha}) - \frac{1}{2}(x_1(\underline{\alpha}))^2 - w x_2(\underline{\alpha}) + C x_3(\underline{\alpha}) - \frac{1}{2}(x_3(\underline{\alpha}))^2 = \underline{f}_1$. (7.9)

The value of $\bar{\alpha}$ can be obtained by solving

$P + q x_2(\bar{\alpha}) - \frac{1}{2}(x_2(\bar{\alpha}))^2 - \pi x_1(\bar{\alpha}) - \mu x_3(\bar{\alpha}) = \underline{f}_2$. (7.10)

The point

$\Big( \big( Y + \beta x_1(\underline{\alpha}) - \frac{1}{2}(x_1(\underline{\alpha}))^2 - w x_2(\underline{\alpha}) + C x_3(\underline{\alpha}) - \frac{1}{2}(x_3(\underline{\alpha}))^2 \big), \big( P + q x_2(\underline{\alpha}) - \frac{1}{2}(x_2(\underline{\alpha}))^2 - \pi x_1(\underline{\alpha}) - \mu x_3(\underline{\alpha}) \big) \Big)$ (7.11)

becomes an anchor point at which the objective f 2 reaches its maximum.

The point

$\Big( \big( Y + \beta x_1(\bar{\alpha}) - \frac{1}{2}(x_1(\bar{\alpha}))^2 - w x_2(\bar{\alpha}) + C x_3(\bar{\alpha}) - \frac{1}{2}(x_3(\bar{\alpha}))^2 \big), \big( P + q x_2(\bar{\alpha}) - \frac{1}{2}(x_2(\bar{\alpha}))^2 - \pi x_1(\bar{\alpha}) - \mu x_3(\bar{\alpha}) \big) \Big)$ (7.12)

becomes an anchor point at which the objective f 1 reaches its maximum.

The point

$\Big( \big( Y + \beta x_1(\bar{\alpha}) - \frac{1}{2}(x_1(\bar{\alpha}))^2 - w x_2(\bar{\alpha}) + C x_3(\bar{\alpha}) - \frac{1}{2}(x_3(\bar{\alpha}))^2 \big), \big( P + q x_2(\underline{\alpha}) - \frac{1}{2}(x_2(\underline{\alpha}))^2 - \pi x_1(\underline{\alpha}) - \mu x_3(\underline{\alpha}) \big) \Big)$ (7.13)

becomes the utopia point.

7.2. POF with Equality and Inequality Constraints

Now, we consider the case under both the equality constraint and the inequality constraint. Invoking (7.4), one can observe that the inequality constraint will be active if $x_2 > \chi_2$. To depict the POF, we first check whether the inequality constraint is active at $\underline{\alpha}$ and $\bar{\alpha}$. If $x_2(\underline{\alpha}) = q - \underline{\alpha} q - \underline{\alpha} w - \underline{\alpha} \beta + \pi - \underline{\alpha} \pi + \underline{\alpha} \chi_1 > \chi_2$, then the inequality constraint is active at $\underline{\alpha}$. Similarly, if $x_2(\bar{\alpha}) = q - \bar{\alpha} q - \bar{\alpha} w - \bar{\alpha} \beta + \pi - \bar{\alpha} \pi + \bar{\alpha} \chi_1 > \chi_2$, then the inequality constraint is active at $\bar{\alpha}$; since $x_2(\alpha)$ is monotonically decreasing in $\alpha$, the inequality constraint is then active along the entire POF.

To obtain the Pareto efficient strategies, the decision-maker considers the problem:

$\max_{x_1, x_2, x_3} \Big\{ \alpha \big[ Y + \beta x_1 - \frac{1}{2}(x_1)^2 - w x_2 + C x_3 - \frac{1}{2}(x_3)^2 \big] + (1-\alpha) \big[ P + q x_2 - \frac{1}{2}(x_2)^2 - \pi x_1 - \mu x_3 \big] \Big\}$ (7.14)

subject to (7.3) and (7.4).

The corresponding Lagrange function can be expressed as:

$L(x, \lambda, \gamma, \alpha) = \alpha \big[ Y + \beta x_1 - \frac{1}{2}(x_1)^2 - w x_2 + C x_3 - \frac{1}{2}(x_3)^2 \big] + (1-\alpha) \big[ P + q x_2 - \frac{1}{2}(x_2)^2 - \pi x_1 - \mu x_3 \big] + \lambda (\chi_1 - x_1 - x_2) + \gamma (\chi_2 - x_2)$. (7.15)

The Pareto efficient strategies of the problem of maximizing (7.14) subject to equality constraint (7.3) and inequality constraint (7.4) can be solved as:

Proposition 7.3.

The Pareto efficient strategies of the problem of maximizing (7.14) subject to the constraints (7.3)-(7.4) are:

$\hat{x}_1(\alpha) = \chi_1 - \chi_2$, $\hat{x}_2(\alpha) = \chi_2$, $\hat{x}_3(\alpha) = C - \frac{(1-\alpha)\mu}{\alpha}$. (7.16)

Proof: See Appendix C.

The values of α _ can be obtained by solving

$Y + \beta \hat{x}_1(\underline{\alpha}) - \frac{1}{2}(\hat{x}_1(\underline{\alpha}))^2 - w \hat{x}_2(\underline{\alpha}) + C \hat{x}_3(\underline{\alpha}) - \frac{1}{2}(\hat{x}_3(\underline{\alpha}))^2 = \underline{f}_1$. (7.17)

The values of α ¯ can be obtained by solving

$P + q \hat{x}_2(\bar{\alpha}) - \frac{1}{2}(\hat{x}_2(\bar{\alpha}))^2 - \pi \hat{x}_1(\bar{\alpha}) - \mu \hat{x}_3(\bar{\alpha}) = \underline{f}_2$. (7.18)

Substituting the Pareto efficient strategies from (7.16) into the objectives $f_1(x)$ and $f_2(x)$, we can obtain the POF as

$\Big( \big( Y + \beta \hat{x}_1(\alpha) - \frac{1}{2}(\hat{x}_1(\alpha))^2 - w \hat{x}_2(\alpha) + C \hat{x}_3(\alpha) - \frac{1}{2}(\hat{x}_3(\alpha))^2 \big), \big( P + q \hat{x}_2(\alpha) - \frac{1}{2}(\hat{x}_2(\alpha))^2 - \pi \hat{x}_1(\alpha) - \mu \hat{x}_3(\alpha) \big) \Big)$, for $\alpha \in [\underline{\alpha}, \bar{\alpha}]$. (7.19)

The corresponding anchor points and utopia point can be derived accordingly.

Finally, consider the case where $x_2(\underline{\alpha}) > \chi_2$ and $x_2(\bar{\alpha}) < \chi_2$. We search for the point at which the inequality constraint turns active, that is

$x_2(\alpha) = q - \alpha q - \alpha w - \alpha \beta + \pi - \alpha \pi + \alpha \chi_1 = \chi_2$. (7.20)

Solving (7.20) yields

$\alpha = \frac{q + \pi - \chi_2}{q + w + \beta + \pi - \chi_1} \equiv \tilde{\alpha}$. (7.21)

Figure 6. POF in solid line.

Therefore, the POF will be the same as that without inequality constraint in the range of α ( α ˜ , α ¯ ) , and be the same as that with inequality constraint in the range of α ( α _ , α ˜ ) . The actual POF is the solid line in Figure 6.
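A small numerical sketch, with assumed parameter values chosen purely for illustration, assembles this piecewise frontier by computing the switch point $\tilde{\alpha}$ of (7.21) and applying Proposition 7.3 wherever the unconstrained $x_2(\alpha)$ exceeds $\chi_2$:

import numpy as np

# assumed illustrative parameter values (not from the paper)
Y, P = 5.0, 5.0
beta, w, C, q, pi, mu = 2.0, 0.5, 1.0, 3.0, 0.4, 0.3
chi1, chi2 = 2.0, 1.2

alpha_tilde = (q + pi - chi2) / (q + w + beta + pi - chi1)   # switch point (7.21)

def strategies(a):
    # equality-only strategies of Proposition 7.1
    x1 = a*beta - pi + a*pi - q + a*q + a*w + chi1 - a*chi1
    x2 = chi1 - x1
    x3 = C - (1 - a)*mu/a
    if x2 > chi2:                     # inequality chi2 - x2 >= 0 binds
        x1, x2 = chi1 - chi2, chi2    # Proposition 7.3
    return x1, x2, x3

def pof_point(a):
    x1, x2, x3 = strategies(a)
    f1 = Y + beta*x1 - 0.5*x1**2 - w*x2 + C*x3 - 0.5*x3**2
    f2 = P + q*x2 - 0.5*x2**2 - pi*x1 - mu*x3
    return f1, f2

print(alpha_tilde)
pof = [pof_point(a) for a in np.linspace(0.05, 0.95, 91)]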

7.3. The Case of Kalai-Smorodinsky Solution

With the analytical solution of the POF completely depicted, we can derive the solutions in Section 4. Consider the case of using the Kalai-Smorodinsky solution for solving the multi-objective optimization problem. The solution is the intersection of the POF and the line segment connecting the nadir point and the utopia point. We first obtain the benchmark POF with the equality constraint only. We then check the anchor points in (7.11) and (7.12). If $x_2(\alpha) < \chi_2$ at both anchor points, then the POF is the same as that with the equality constraint only. If $x_2(\alpha) > \chi_2$ at both anchor points, then the POF is the same as that with the equality constraint and the active inequality constraint. If $x_2(\underline{\alpha}) > \chi_2$ at the anchor point (7.11) and $x_2(\bar{\alpha}) < \chi_2$ at the anchor point (7.12), so that the inequality constraint is active over only part of the frontier, then the point at which $x_2(\alpha) = \chi_2$ has to be identified as $\alpha = \frac{q + \pi - \chi_2}{q + w + \beta + \pi - \chi_1} \equiv \tilde{\alpha}$ (see (7.21)).

Given the above information, the relevant utopia point can be identified

$\Big( \big( Y + \beta x_1(\bar{\alpha}) - \frac{1}{2}(x_1(\bar{\alpha}))^2 - w x_2(\bar{\alpha}) + C x_3(\bar{\alpha}) - \frac{1}{2}(x_3(\bar{\alpha}))^2 \big), \big( P + q \hat{x}_2(\underline{\alpha}) - \frac{1}{2}(\hat{x}_2(\underline{\alpha}))^2 - \pi \hat{x}_1(\underline{\alpha}) - \mu \hat{x}_3(\underline{\alpha}) \big) \Big)$

and the nadir point is ( f _ 1 , f _ 2 ) . The slope of the line segment linking the nadir point and the utopia point in the area bounded by the nadir point and the utopia point is

$\frac{\big( P + q \hat{x}_2(\underline{\alpha}) - \frac{1}{2}(\hat{x}_2(\underline{\alpha}))^2 - \pi \hat{x}_1(\underline{\alpha}) - \mu \hat{x}_3(\underline{\alpha}) \big) - \underline{f}_2}{\big( Y + \beta x_1(\bar{\alpha}) - \frac{1}{2}(x_1(\bar{\alpha}))^2 - w x_2(\bar{\alpha}) + C x_3(\bar{\alpha}) - \frac{1}{2}(x_3(\bar{\alpha}))^2 \big) - \underline{f}_1} = \theta$. (7.22)

If there exists an $\alpha^*$ that satisfies

$\frac{\big( P + q x_2(\alpha^*) - \frac{1}{2}(x_2(\alpha^*))^2 - \pi x_1(\alpha^*) - \mu x_3(\alpha^*) \big) - \underline{f}_2}{\big( Y + \beta x_1(\alpha^*) - \frac{1}{2}(x_1(\alpha^*))^2 - w x_2(\alpha^*) + C x_3(\alpha^*) - \frac{1}{2}(x_3(\alpha^*))^2 \big) - \underline{f}_1} = \theta$, and $x_2(\alpha^*) < \chi_2$, (7.23)

then the Kalai-Smorodinsky solution is given by

$\Big( \big( Y + \beta x_1(\alpha^*) - \frac{1}{2}(x_1(\alpha^*))^2 - w x_2(\alpha^*) + C x_3(\alpha^*) - \frac{1}{2}(x_3(\alpha^*))^2 \big), \big( P + q x_2(\alpha^*) - \frac{1}{2}(x_2(\alpha^*))^2 - \pi x_1(\alpha^*) - \mu x_3(\alpha^*) \big) \Big)$. (7.24)

If there exists an $\alpha^{**}$ that satisfies

$\frac{\big( P + q \hat{x}_2(\alpha^{**}) - \frac{1}{2}(\hat{x}_2(\alpha^{**}))^2 - \pi \hat{x}_1(\alpha^{**}) - \mu \hat{x}_3(\alpha^{**}) \big) - \underline{f}_2}{\big( Y + \beta \hat{x}_1(\alpha^{**}) - \frac{1}{2}(\hat{x}_1(\alpha^{**}))^2 - w \hat{x}_2(\alpha^{**}) + C \hat{x}_3(\alpha^{**}) - \frac{1}{2}(\hat{x}_3(\alpha^{**}))^2 \big) - \underline{f}_1} = \theta$, and $\hat{x}_2(\alpha^{**}) = \chi_2$, (7.25)

then the Kalai-Smorodinsky solution is given by

$\Big( \big( Y + \beta \hat{x}_1(\alpha^{**}) - \frac{1}{2}(\hat{x}_1(\alpha^{**}))^2 - w \hat{x}_2(\alpha^{**}) + C \hat{x}_3(\alpha^{**}) - \frac{1}{2}(\hat{x}_3(\alpha^{**}))^2 \big), \big( P + q \hat{x}_2(\alpha^{**}) - \frac{1}{2}(\hat{x}_2(\alpha^{**}))^2 - \pi \hat{x}_1(\alpha^{**}) - \mu \hat{x}_3(\alpha^{**}) \big) \Big)$. (7.26)
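On the piecewise frontier of the previous sketch (same assumed parameter values and pof_point function), the Kalai-Smorodinsky weight can be located with scalar root finds for the anchor weights and the intersection condition:

from scipy.optimize import brentq

f1_lo, f2_lo = 4.0, 4.0     # assumed minimum objective levels (nadir point)

# anchor weights: where f1 hits its floor and where f2 hits its floor
alpha_lo = brentq(lambda a: pof_point(a)[0] - f1_lo, 0.05, 0.95)
alpha_hi = brentq(lambda a: pof_point(a)[1] - f2_lo, 0.05, 0.95)
utopia = (pof_point(alpha_hi)[0], pof_point(alpha_lo)[1])
theta = (utopia[1] - f2_lo) / (utopia[0] - f1_lo)            # slope as in (7.22)

# Kalai-Smorodinsky weight: intersection of the POF with the nadir-utopia line
ks = brentq(lambda a: (pof_point(a)[1] - f2_lo) - theta*(pof_point(a)[0] - f1_lo),
            alpha_lo, alpha_hi)
print(ks, pof_point(ks))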

Remark 7.1.

Note that, with the part of the POF under the equality constraint only and the part under the active inequality constraint indicated explicitly in (7.17)-(7.21), we can characterize the solutions of the Nash arbitration and objective product method, the target-attainment method, the scalarization method with weighted sum and the utility-based method in a similar way as for the characterization of the Kalai-Smorodinsky bargaining solution.

8. Extension and Conclusion

The analysis can be extended to the case with more than two objectives, separated into two competing/conflicting types of objectives. In particular, the type A objectives include $f_1^A(x), f_2^A(x), \ldots, f_{n_A}^A(x)$, and the type B objectives include $f_1^B(x), f_2^B(x), \ldots, f_{n_B}^B(x)$. A normalized weight is attached to each objective within a type, reflecting the relative importance of the objective in that group of objectives. The weighted sum of objectives within a type signifies the scalarized preference of the decision-maker for that type of objectives. The problem becomes

$\max_{x} F \Big[ \sum_{i=1}^{n_A} w_i^A f_i^A(x), \sum_{j=1}^{n_B} w_j^B f_j^B(x) \Big]$, (8.1)

subject to

$g(x) = 0$ and $h(x) \geq 0$, (8.2)

where $w_i^A > 0$, $w_j^B > 0$, $\sum_{i=1}^{n_A} w_i^A = 1$ and $\sum_{j=1}^{n_B} w_j^B = 1$.

We identify the Pareto efficient strategies by systematically changing the weight between the aggregate objectives $\sum_{i=1}^{n_A} w_i^A f_i^A(x)$ and $\sum_{j=1}^{n_B} w_j^B f_j^B(x)$. Specifically, the decision-maker considers the problem:

$\max_{x} \Big[ \alpha \sum_{i=1}^{n_A} w_i^A f_i^A(x) + (1-\alpha) \sum_{j=1}^{n_B} w_j^B f_j^B(x) \Big]$, for $\alpha \in [0, 1]$, (8.3)

subject to (8.2).

Invoking Karush-Kuhn-Tucker conditions, we can express the corresponding Lagrange function as:

$L(x, \lambda, \gamma, \alpha) = \Big[ \alpha \sum_{i=1}^{n_A} w_i^A f_i^A(x) + (1-\alpha) \sum_{j=1}^{n_B} w_j^B f_j^B(x) \Big] + \sum_{j=1}^{m} \lambda_j g_j(x) + \sum_{k=1}^{\tau} \gamma_k h_k(x)$. (8.4)

Following the analysis in Section 5, we can establish an analytical path for the POF, which is then used to obtain the solution under the different methods mentioned in Section 4.
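A brief sketch of the two-type extension, with assumed illustrative objectives and within-type weights (constraints omitted for brevity), reduces the problem to the familiar sweep over $\alpha$:

import numpy as np
from scipy.optimize import minimize

# assumed illustrative type-A and type-B objectives and within-type weights
fA = [lambda x: -(x[0] - 1)**2, lambda x: -(x[1] - 1)**2]
fB = [lambda x: -(x[0] + 1)**2, lambda x: -(x[1] + 1)**2]
wA, wB = [0.6, 0.4], [0.5, 0.5]

def scalarized(x, a):
    FA = sum(w*f(x) for w, f in zip(wA, fA))   # aggregate type-A objective
    FB = sum(w*f(x) for w, f in zip(wB, fB))   # aggregate type-B objective
    return -(a*FA + (1 - a)*FB)                # weighted sum (8.3), negated for minimize

pof = []
for a in np.linspace(0.0, 1.0, 21):
    res = minimize(scalarized, x0=np.zeros(2), args=(a,))
    pof.append((a, res.x))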

Finally, this paper presents a new Pareto Method for bi-objective optimization yielding the POF in the form of analytical solutions. Analytical methods enjoy the advantages of being transparent, efficient and rigorous. These advantages are extremely useful in deriving accurate, exact and well-understood solutions, especially for policy design. The possibility to provide an extension for multi-objective optimization by separating the objectives into two types allows wider applicability of the developed results. This paper does not claim superiority of the analytical Pareto method over other methods of multi-objective optimization, rather the method is a novel addition to the growing pursuit of Pareto generators, with potential advantages of being handy for analysis. Further theoretical development and applications are expected.

Appendix

Appendix A: Proof of Proposition 7.1

First-order conditions for a maximum from the Lagrange function (7.6) yield

$\alpha \beta - \alpha x_1 - (1-\alpha) \pi - \lambda = 0$,

$(1-\alpha)(q - x_2) - \alpha w - \lambda = 0$,

$\alpha C - \alpha x_3 - (1-\alpha) \mu = 0$,

$\chi_1 - x_1 - x_2 = 0$. (A.1)

Solving (A.1) yields the Pareto efficient strategies and the Lagrange multiplier with equality constraint only

$x_1(\alpha) = \beta - \frac{1-\alpha}{\alpha} \pi - \frac{\lambda(\alpha)}{\alpha}$,

$x_2(\alpha) = q - \frac{\alpha}{1-\alpha} w - \frac{\lambda(\alpha)}{1-\alpha}$,

$x_3(\alpha) = C - \frac{1-\alpha}{\alpha} \mu$,

$\lambda(\alpha) = \alpha (1-\alpha) \Big[ \Big( \beta - \frac{1-\alpha}{\alpha} \pi \Big) + \Big( q - \frac{\alpha}{1-\alpha} w \Big) - \chi_1 \Big]$. (A.2)

Hence,

$x_1(\alpha) = \beta - \frac{1-\alpha}{\alpha} \pi - (1-\alpha) \Big[ \Big( \beta - \frac{1-\alpha}{\alpha} \pi \Big) + \Big( q - \frac{\alpha}{1-\alpha} w \Big) - \chi_1 \Big] = \alpha \beta - \pi + \alpha \pi - q + \alpha q + \alpha w + \chi_1 - \alpha \chi_1$, and

$x_2(\alpha) = q - \frac{\alpha}{1-\alpha} w - \alpha \Big[ \Big( \beta - \frac{1-\alpha}{\alpha} \pi \Big) + \Big( q - \frac{\alpha}{1-\alpha} w \Big) - \chi_1 \Big] = q - \alpha q - \alpha w - \alpha \beta + \pi - \alpha \pi + \alpha \chi_1$. (A.3)

Appendix B: Proof of Proposition 7.2.

Differentiating x 1 ( α ) in Proposition 7.1 with respect to α yields

$\frac{\partial x_1(\alpha)}{\partial \alpha} = \beta + \pi + q + w - \chi_1$. (B.1)

Invoking the constraint $\chi_1 = x_1 + x_2$ in (7.3) and the first two equations in (A.2), we have

$\Big( \beta - \frac{1-\alpha}{\alpha} \pi - \frac{\lambda(\alpha)}{\alpha} \Big) + \Big( q - \frac{\alpha}{1-\alpha} w - \frac{\lambda(\alpha)}{1-\alpha} \Big) = \chi_1$, (B.2)

which shows that β + π + q + w > χ 1 .

Hence,

$\frac{\partial x_1(\alpha)}{\partial \alpha} = \beta + \pi + q + w - \chi_1 > 0$. (B.3)

In a similar manner, we can show that

$\frac{\partial x_2(\alpha)}{\partial \alpha} = -\beta - \pi - q - w + \chi_1 < 0$. (B.4)

Finally,

$\frac{\partial x_3(\alpha)}{\partial \alpha} = \frac{\mu}{\alpha^2} > 0$. (B.5)

Appendix C: Proof of Proposition 7.3

First-order conditions for a maximum for the problem of maximizing (7.14) subject to (7.3)-(7.4) yield

$\alpha \beta - \alpha x_1 - (1-\alpha) \pi - \lambda = 0$,

$(1-\alpha)(q - x_2) - \alpha w - \lambda - \gamma = 0$,

$\alpha C - \alpha x_3 - (1-\alpha) \mu = 0$,

$\chi_1 - x_1 - x_2 = 0$,

$\gamma (\chi_2 - x_2) = 0$,

$\gamma \geq 0$ and $\chi_2 - x_2 \geq 0$. (C.1)

Solving (C.1) yields the Pareto efficient strategies and Lagrange multipliers:

$x_1(\alpha) = \beta - \frac{1-\alpha}{\alpha} \pi - \frac{\lambda(\alpha)}{\alpha}$,

$x_2(\alpha) = q - \frac{\alpha}{1-\alpha} w - \frac{\lambda(\alpha)}{1-\alpha} - \frac{\gamma(\alpha)}{1-\alpha}$,

$x_3(\alpha) = C - \frac{1-\alpha}{\alpha} \mu$,

$\lambda(\alpha) = \alpha \beta - (1-\alpha) \pi - \alpha \chi_1 + \alpha \chi_2$,

$\gamma(\alpha) = (1-\alpha) q - \alpha w - \alpha \beta + (1-\alpha) \pi + \alpha \chi_1 - \chi_2$. (C.2)

Substituting λ ( α ) and γ ( α ) into x 1 ( α ) and x 2 ( α ) in (C.2) yields:

$x_1(\alpha) = \chi_1 - \chi_2$ and

$x_2(\alpha) = \chi_2$. (C.3)

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zhou, A., Zhang, Q., Jin, Y., Tsang, E. and Okabe, T. (2005) A Model-Based Evolutionary Algorithm for Bi-Objective Optimization. 2005 IEEE Congress on Evolutionary Computation, Vol. 3, 2568-2575.
https://doi.org/10.1109/CEC.2005.1555016
[2] Kukkonen, S. and Deb, K. (2006) Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems. IEEE International Conference on Evolutionary Computation, Vancouver, 16-21 July 2006, 1179-1186.
https://doi.org/10.1109/CEC.2006.1688443
[3] Pinto-Varela, T., Barbosa-Póvoa, A.P. and Novaisa, A.Q. (2011) Bi-Objective Optimization Approach to the Design and Planning of Supply Chains: Economic versus Environmental Performances. Computers & Chemical Engineering, 35, 1454-1468.
https://doi.org/10.1016/j.compchemeng.2011.03.009
[4] Lath, B., Basavarajappa, S.S., Kadadevaramath, R.S. and Chen, J.C.H. (2013) A Bi-Objective Optimization of Supply Chain Design and Distribution Operations Using Non-Dominated Sorting Algorithm: A Case Study. Expert Systems with Applications, 40, 5730-5739.
https://doi.org/10.1016/j.eswa.2013.03.047
[5] Pereyra, V., Saunders, M. and Castillo, J. (2013) Equispaced Pareto Front Construction for Constrained Bi-Objective Optimization. Mathematical and Computer Modelling, 57, 2122-2131.
https://doi.org/10.1016/j.mcm.2010.12.044
[6] Garg, H., Monica, R., Sharma, S.P. and Vishwakarma, Y. (2014) Bi-Objective Optimization of the Reliability-Redundancy Allocation Problem for Series-Parallel System. Journal of Manufacturing Systems, 33, 335-347.
https://doi.org/10.1016/j.jmsy.2014.02.008
[7] Futrell, B.J., Ozelkan, E.C. and Brentrup, D. (2015) Bi-Objective Optimization of Building Enclosure Design for Thermal and Lighting Performance. Building and Environment, 92, 591-602.
https://doi.org/10.1016/j.buildenv.2015.03.039
[8] Hirpa, D., Hare, W., Lucet, Y., Pushak, Y. and Tesfamariam, S. (2016) A Bi-Objective Optimization Framework for Three-Dimensional Road Alignment Design. Transportation Research Part C: Emerging Technologies, 65, 61-78.
https://doi.org/10.1016/j.trc.2016.01.016
[9] Liu, M., Lee, C.-Y., Zhang, Z. and Chua, C. (2016) Bi-Objective Optimization for the Container Terminal Integrated Planning. Transportation Research Part B: Methodological, 93, 720-749.
https://doi.org/10.1016/j.trb.2016.05.012
[10] Wang, S., Liu, M., Chu, F. and Chua, C. (2016) Bi-Objective Optimization of a Single Machine Batch Scheduling Problem with Energy Cost Consideration. Journal of Cleaner Production, 137, 1205-1215.
https://doi.org/10.1016/j.jclepro.2016.07.206
[11] Cheraghalipour, A., Paydar, M.M. and Hajiaghaei-Keshtelia, M. (2018) A Bi-Objective Optimization for Citrus Closed-Loop Supply Chain Using Pareto-Based Algorithms. Applied Soft Computing, 69, 33-59.
https://doi.org/10.1016/j.asoc.2018.04.022
[12] Ho-Huu, V., Hartjes, S., Visser, H.G. and Curran, R. (2018) An Improved MOEA/D Algorithm for Bi-Objective Optimization Problems with Complex Pareto Fronts and Its Application to Structural Optimization. Expert Systems with Applications, 92, 430-446.
https://doi.org/10.1016/j.eswa.2017.09.051
[13] Yeh, C.-T. (2019) An Improved NSGA2 to Solve a Bi-Objective Optimization Problem of Multi-State Electronic Transaction Network. Reliability Engineering & System Safety, 191, Article ID: 106578.
https://doi.org/10.1016/j.ress.2019.106578
[14] Liu, D., Huang, Q., Yang, Y., Liu, D. and Wei, X. (2020) Bi-Objective Algorithm Based on NSGA-II Framework to Optimize Reservoirs Operation. Journal of Hydrology, 585, Article ID: 124830.
https://doi.org/10.1016/j.jhydrol.2020.124830
[15] Nagamanjula, R. and Pethalakshmi, A. (2020) A Novel Framework Based on Bi-Objective Optimization and LAN2FIS for Twitter Sentiment Analysis. Social Network Analysis and Mining, 10, 34.
https://doi.org/10.1007/s13278-020-00648-5
[16] Xu, Z., Yao, L. and Chen, X. (2020) Urban Water Supply System Optimization and Planning: Bi-Objective Optimization and System Dynamics Methods. Computers & Industrial Engineering, 142, Article ID: 106373.
https://doi.org/10.1016/j.cie.2020.106373
[17] Diao, B., Zhang, X. and Fang, H. (2021) Bi-Objective Optimization for Improving the Locomotion Performance of the Vibration-Driven Robot. Archive of Applied Mechanics, 91, 2073-2088.
https://doi.org/10.1007/s00419-020-01870-5
[18] Mohammadi, M., Dehghan, M., Pirayesh, A. and Dolgui, A. (2022) Bi-Objective Optimization of a Stochastic Resilient Vaccine Distribution Network in the Context of the COVID-19 Pandemic. Omega, 113, Article ID: 102725.
https://doi.org/10.1016/j.omega.2022.102725
[19] Kparib, D.Y., Twum, S.B. and Boah, D.K. (2018) An Improved Ant Colony System Algorithm for Solving Shortest Path Network Problems. International Journal of Science and Research, 7, 1123-1127.
[20] Kparib, D., Twum, S. and Boah, D. (2019) A Min-Max Strategy to Aid Decision Making in a Bi-Objective Discrete Optimization Problem Using an Improved Ant Colony Algorithm. American Journal of Operations Research, 9, 161-174.
https://doi.org/10.4236/ajor.2019.94010
[21] Gulben, C. and Orhan, Y. (2015) An Improved Ant Colony Optimization Algorithm for Construction Site Layout Problems. Journal of Building Construction and Planning Research, 3, 221-232.
https://doi.org/10.4236/jbcpr.2015.34022
[22] Zaninudin, Z. and Paputungan, T.V. (2013) A Hybrid Optimization Algorithm Based on Genetic Algorithm and Ant Colony Optimization. International Journal of Artificial Intelligence and Applications, 4, 63-75.
https://doi.org/10.5121/ijaia.2013.4505
[23] Stutzle, T. and Hoos, H. (1997) Max-Min Ant System and Local Search for the Travelling Salesman Problem. In: IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, 309-314.
[24] Messac, A. (1996) Physical Programming Effective Optimization for Computational Design. AIAA Journal, 34, 149-158.
https://doi.org/10.2514/3.13035
[25] Das, I. and Dennis, J.E. (1998) Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM Journal on Optimization, 8, 631-657.
https://doi.org/10.1137/S1052623496307510
[26] Deb, K. (2001) Nonlinear Goal Programming Using Multi-Objective Genetic Algorithms. Journal of the Operational Research Society, 52, 291-302.
https://doi.org/10.1057/palgrave.jors.2601089
[27] Messac, B., Ismail-Yahaya, A. and Mattson, C.A. (2003) The Normalized Normal Constraint Method for Generating the Pareto Frontier. Structural and Multidisciplinary Optimization, 25, 86-98.
https://doi.org/10.1007/s00158-002-0276-1
[28] Messac, A. and Mattson, C.A. (2002) Generating Well-Distributed Sets of Pareto Points for Engineering Design Using Physical Programming. Optimization and Engineering, 3, 431-450.
https://doi.org/10.1023/A:1021179727569
[29] Kim, I.Y. and de Weck, O.L. (2005) Adaptive Weighted-Sum Method for Bi-Objective Optimization: Pareto Front Generation. Structural and Multidisciplinary Optimization, 29, 149-158.
https://doi.org/10.1007/s00158-004-0465-1
[30] Zhang, Q. and Li, H. (2007) MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Transactions on Evolutionary Computation, 11, 712-731.
https://doi.org/10.1109/TEVC.2007.892759
[31] Chinchuluun, A. and Pardalos, P.M. (2007) A Survey of Recent Developments in Multiobjective Optimization. Annals of Operations Research, 159, 29-50.
https://doi.org/10.1007/s10479-007-0186-0
[32] Mueller-Gritschneder, D., Graeb, H. and Schlichtmann, U. (2009) A Successive Approach to Compute the Bounded Pareto Front of Practical Multiobjective Optimization Problems. SIAM Journal on Optimization, 20, 915-934.
https://doi.org/10.1137/080729013
[33] Pérez-Fernández, P., Ochoa, G., Montes, S., Díaz, I., Fernández, J., Paternain, D. and Bustince, H. (2021) Axiomatization and Construction of Orness Measures for Aggregation Functions. International Journal of Intelligent Systems, 36, 2208-2228.
https://doi.org/10.1002/int.22376
[34] Marler, R.T. and Arora, J.S. (2004) Survey of Multi-Objective Optimization Methods for Engineering. Structural and Multidisciplinary Optimization, 26, 369-395.
https://doi.org/10.1007/s00158-003-0368-6
[35] Gunantara, N. (2018) A Review of Multi-Objective Optimization: Methods and Its Applications. Cogent Engineering, 5, Article ID: 1502242.
https://doi.org/10.1080/23311916.2018.1502242
[36] Orths, A., Schmitt, A., Styczyuskiz, A. and Verstege, J. (2001) Multi-Criteria Optimization Methods for Planning and Operation of Electrical Energy System. Electrical Engineering, 83, 251-258.
https://doi.org/10.1007/s002020100085
[37] Collette, Y. and Siarry, P. (2009) Multi-Objective Optimization: Principles and Case Studies (Decision Engineering). Springer-Verlag, Berlin.
[38] Ehrgott, M. (2005) Multi-Criteria Optimization. Springer, Berlin.
[39] Eskelinen, P., Miettinen, K., Klamroth, K. and Hakanen (2008) Pareto Navigator for Interactive Nonlinear Multi-Objective Optimization. Springer-Verlag, Berlin.
https://doi.org/10.1007/s00291-008-0151-6
[40] Fonseca, C.M. and Fleming, P.J. (1995) An Overview of Evolutionary Algorithms in Multi-Objective Optimization. Evolutionary Computing, 3, 1-16.
https://doi.org/10.1162/evco.1995.3.1.1
[41] Alaa, S., Abdel, K.B., Mohamed, A. and Noor, K. (2012) Multi-Objective Evolutionary Computation Solution for Chocolate Production System Using Pareto Method. International Journal of Computer Science, 9, 75-83.
[42] Subhamoy, C. and Sugata (2015) An Elitist Simulated Annealing Algorithm for Solving Multi-Objective Optimization Problems in Internet of Design. International Journal Advanced Network and Applications, 7, 2784-2789.
[43] Wilfried, J. and Blum, C. (2014) Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts. Algorithms, 7, 166-185.
https://doi.org/10.3390/a7010166
[44] Caramia, M. and Dell’Olmo, P. (2008) Multi-Objective Management in Freight Logistics Increasing Capacity, Services Level and Safety with Optimization Algorithms. Springer, Berlin.
http://www.springer.com/978-1-84800-381-1
[45] Rohilla, D.K. (2020) Classical Methods of Multi-Objective Optimization—A Comparative Study. International Journal for Technological Research in Engineering, 7, 6554-6560.
[46] Engau, A. and Wiecek, M. (2005) Generating e-Efficient Solutions in Multi-Objective Programming. European Journal of Operational Research, 177, 1566-1579.
https://doi.org/10.1016/j.ejor.2005.10.023
[47] Obayashi, S., Sasaki, D. and Oyama, A. (2004) Finding Tradeoffs by Using Multiobjective Optimization Algorithms. Transactions of the Japan Society for Aeronautical and Space Sciences, 47, 51-58.
https://doi.org/10.2322/tjsass.47.51
[48] Lagarias, J.C., Reeds, J.A., Wright, M.H. and Wright, P.E. (1998) Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal of Optimization, 9, 112-147.
https://doi.org/10.1137/S1052623496303470
[49] Miettinen, K. (1999) Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston.
[50] Bendsoe, M.P., Olhoff, N. and Taylor, J.E. (1984) A Variational Formulation for Multicriteria Structural Optimization. Journal of Structural Mechanics, 11, 523-544.
https://doi.org/10.1080/03601218308907456
[51] Chankong, V. and Haimes, Y.Y. (1983) Multiobjective Decision Making Theory and Methodology. Elsevier Science Publishing, New York.
https://doi.org/10.1007/978-3-7091-2914-2
[52] Leitmann, G. (1974) Cooperative and Non-Cooperative Many Players Differential Games. Springer, New York.
https://doi.org/10.1016/j.automatica.2015.01.030
[53] Yeung, D.W.K. and Petrosyan, L.A. (2015) Subgame Consistent Cooperative Solution for NTU Dynamic Games via Variable Weights. Automatica, 50, 84-89.
https://doi.org/10.1007/978-981-10-1545-8
[54] Yeung, D.W.K. and Petrosyan, L.A. (2016) Subgame Consistent Cooperation—A Comprehensive Treatise. Springer, Berlin.
https://doi.org/10.1073/pnas.36.1.48
[55] Nash, J.F. (1950) Equilibrium Points in N-Person Games. Proceedings of the National Academy of Sciences of the United States of America, 36, 48-49.
[56] Davis, L. (1985) Applying Adaptive Algorithms to Epistatic Domains. Proceedings of the Joint International Conference on Artificial Intelligence, Los Angeles, 18-23 August 1985, 162-164.
[57] Aboulaich, R., Ellaia, R., Elmoumen, S., Habbal, A. and Moussaid, N. (2017) The Mean-CVaR Model for Portfolio Optimization Using a Multi-Objective Approach and the Kalai-Smorodinsky Solution. MATEC Web of Conferences, 105, 103-108.
https://doi.org/10.1051/matecconf/201710500010
[58] Oukennou, A., Sandali, A. and Elmoumen, S. (2018) Coordinated Placement and Setting of FACTS in Electrical Network Based on Kalai-Smorodinsky Bargaining Solution and Voltage Deviation Index. International Journal of Electrical and Computer Engineering, 8, 4079.
https://doi.org/10.11591/ijece.v8i6.pp4079-4088
[59] Kalai, E. and Smorodinsky, M. (1975) Other Solutions to Nash’s Bargaining Problem. Econometrica, 43, 513-518.
https://doi.org/10.2307/1914280
[60] Murata, T. and Ishibuchi, H. (1995) MOGA: Multi-Objective Genetic Algorithms. Proceedings of 1995 IEEE International Conference on Evolutionary Computation, Vol. 1, 289.
https://doi.org/10.1109/ICEC.1995.489161
[61] Dodgson, J., Spackman, M.D., Pearman, A.D. and Phillips, L.D. (2009) Multi-Criteria Analysis: A Manual. Communities and Local Government Publications, London.
http://www.communities.gov.uk/documents/corporate/pdf/1132618.pdf
[62] Rădulescu, R., Mannion, P., Roijers, D.M. and Nowé, A. (2020) Multi-Objective Multi-Agent Decision Making: A Utility-Based Analysis and Survey. Autonomous Agents and Multi-Agent Systems, 34, 10.
https://doi.org/10.1007/s10458-019-09433-x
