Lagrangian Relaxation Method for Multiobjective Optimization Methods: Solution Approaches
1. Introduction
Many real-life problems become nonsmooth and large-scale when formulated mathematically, and few approaches exist for solving such problems. In such cases, Lagrangian relaxation is used to deal with problems that are nonsmooth and difficult to solve. The method can be considered an extension of the Lagrange multiplier method of nonlinear programming. The theoretical concepts of the Lagrangian relaxation method, introduced in the 1970s, are considered the backbone of a number of large-scale applications. Several surveys of the Lagrangian relaxation method were carried out by Fisher [1], Geoffrion [2], and Shapiro [3]. Later, Fisher [1] and Held et al. [4] provided methodologies for using the Lagrangian relaxation approach in practice.
Many difficult integer problems can be transformed into relatively simple problems when Lagrangian relaxation is used. To exploit this transformation, we create a Lagrangian problem in which the complicating constraints are replaced with a penalty term in the objective function involving the amount of violation of the constraints and their dual variables. The Lagrangian problem is relatively easy to solve and provides a lower bound (for a minimization problem) on the optimal value of the original problem. It can be used in place of a linear programming relaxation to provide bounds in a branch-and-bound algorithm. It is also noted that, after some heuristic adjustment of the Lagrangian problem solution (which corresponds to a dual solution of the problem), a good approximation of the optimal solution of the primal problem can be obtained.
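To make the lower-bound mechanism concrete, here is a minimal sketch in Python (the paper's own experiments use MATLAB; the problem data below are invented for illustration). It dualizes a single complicating constraint a·x ≥ b with a multiplier u ≥ 0 and checks that the Lagrangian value never exceeds the true optimum:

```python
from itertools import product

# Hypothetical data: minimize c.x subject to a.x >= b (the complicating
# constraint) and x in {0,1}^3 (the easy constraints kept in the subproblem).
c = [4, 3, 5]
a = [2, 1, 3]
b = 3

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Optimal value Z of the original problem, by enumeration of feasible points.
Z = min(dot(c, x) for x in product((0, 1), repeat=3) if dot(a, x) >= b)

# Lagrangian value Z_D(u): dualize a.x >= b with multiplier u >= 0,
# i.e. minimize c.x + u*(b - a.x) over the easy set {0,1}^3 alone.
def Z_D(u):
    return min(dot(c, x) + u * (b - dot(a, x)) for x in product((0, 1), repeat=3))

# Weak duality: Z_D(u) <= Z for every u >= 0.
for u in [0.0, 0.5, 1.0, 1.5, 2.0]:
    assert Z_D(u) <= Z
```

For this toy instance, Z = 5 and Z_D(2.0) also equals 5, i.e. the bound is tight at a well-chosen multiplier.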
In our analysis, we transform multiple objective functions into a single objective function; this is called the scalarization technique for multiobjective optimization problems. The significance of scalarization techniques is that once we transform the original problem into an auxiliary problem, we can utilize the existing techniques for single-objective optimization problems available in the literature. This paper proposes the Lagrangian relaxation problem for multiobjective optimization problems. Furthermore, we generate Lagrange multipliers that can be used in the algorithms to solve the proposed problems. We have used the Brute Force technique to solve the problems, and the Lagrangian parameters are generated by using a discretization technique with an equal step size.
In Section 2, we introduce the Lagrangian relaxation technique, and in Section 3, the multiobjective optimization problem and scalarization techniques are discussed. Section 4 illustrates the use of the Lagrangian relaxation method to solve scalarization problems.
2. Lagrangian Relaxation Technique
We formulate an integer linear programming problem:
(P)
Z = min cx,
subject to
Ax = b,
Dx ≤ d,
x ≥ 0 and integer,
where x is n × 1, b is m × 1, d is k × 1, and all other matrices have conformable dimensions.
We assume that the constraints of (P) have been partitioned into the two sets Ax = b and Dx ≤ d. Now, we introduce a new objective function that includes the original objective cx and the dualized part of the constraints, Ax = b; thus, Problem (P) transforms into the auxiliary problem
(AP)
Z_D(u) = min cx + u(Ax − b),
subject to
Dx ≤ d,
x ≥ 0 and integer,
where u is an m × 1 vector of Lagrange multipliers. It is easier to solve problem (AP) than problem (P).
For convenience, we assume that the feasible set of (P),
{x : Ax = b, Dx ≤ d, x ≥ 0 and integer},
is nonempty, and that the feasible set of Problem (AP),
X = {x : Dx ≤ d, x ≥ 0 and integer},
is finite. Then,
Z_D(u)
is finite for all u. It is easy to extend the development when these assumptions are violated or when inequality constraints are included in the set to be dualized.
It is well known that Z_D(u) ≤ Z for all u. This is easy to show by assuming an optimal solution x* to (P) and obtaining that
Z_D(u) ≤ cx* + u(Ax* − b) = cx* = Z.
The inequality in the above relation follows from the definition of Z_D(u), and the equalities from Ax* = b and cx* = Z.
If Ax = b is replaced by Ax ≤ b in (P), then we require u ≥ 0 and the relation becomes
Z_D(u) ≤ cx* + u(Ax* − b) ≤ cx* = Z,
as u ≥ 0 and Ax* − b ≤ 0. Similarly, for Ax ≥ b we require u ≤ 0 for Z_D(u) ≤ Z to hold.
In general, it is not possible to guarantee finding u for which Z_D(u) = Z, but this frequently happens for particular problem classes. The fact that Z_D(u) ≤ Z allows (AP) to be used in place of (P) to provide lower bounds in a branch-and-bound algorithm for (P). While this is the most obvious use of (AP), it has a number of other uses. It can be a medium for selecting branching variables and choosing the next branch to explore. Good feasible solutions to (P) can frequently be obtained by perturbing nearly feasible solutions to (AP). Finally, Lagrangian relaxation has been used recently in [5] as an analytic tool for establishing worst-case bounds on the performance of certain heuristics.
Determining Lagrangian Parameters u
One of the important steps is finding a suitable Lagrangian parameter u that provides the optimal solution of (AP).
The best choice for finding u is to solve the dual problem of (AP), which is stated as follows (see [4] for details):
(D)
Z_D = max w,
subject to
w ≤ cx^t + u(Ax^t − b), t = 1, …, T,
u ≥ 0,
where each x^t is a solution of (AP).
Held, Wolfe and Crowder Approach (1974) [4]:
We recall here the main steps for calculating u; details of this setting can be found in [4]. It is required to determine a sequence of values for u given by
u^{k+1} = u^k + t_k (Ax^k − b),
where
t_k
is the step size and
x^k
is an optimal solution of (AP) for u = u^k. The step size
t_k
can be determined as follows:
t_k = λ_k (Z* − Z_D(u^k)) / ‖Ax^k − b‖²,
where
Z*
is the objective value of a known feasible solution. The sequence
λ_k
is chosen from the interval (0, 2], and a suitable reduction of
λ_k
is needed if
Z_D(u^k)
fails to increase.
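A minimal sketch of this subgradient scheme, in Python with invented problem data (it uses the equivalent sign convention cx + u(b − Ax) with u ≥ 0 for a ≥-constraint, and halves λ whenever the dual value fails to increase):

```python
from itertools import product

# Hypothetical data: minimize c.x subject to a.x >= b, with x in {0,1}^3.
c = [4, 3, 5]
a = [2, 1, 3]
b = 3

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def solve_AP(u):
    """Return a minimizer of c.x + u*(b - a.x) over the easy set {0,1}^3."""
    return min(product((0, 1), repeat=3),
               key=lambda x: dot(c, x) + u * (b - dot(a, x)))

Z_star = 5.0          # value of a known feasible solution (upper bound)
u, lam = 0.0, 2.0     # initial multiplier and lambda in (0, 2]
best = float("-inf")  # best dual value found so far
for _ in range(50):
    x = solve_AP(u)
    val = dot(c, x) + u * (b - dot(a, x))
    if val <= best:                       # dual value failed to increase
        lam *= 0.5                        # reduce lambda
    best = max(best, val)
    g = b - dot(a, x)                     # subgradient of Z_D at u
    if g == 0:                            # dualized constraint tight: stop
        break
    t = lam * (Z_star - val) / g**2       # Held-Wolfe-Crowder step size
    u = max(0.0, u + t * g)               # projected ascent step, u >= 0
```

On this toy instance the scheme reaches the tight bound Z_D(u) = Z = 5 within a few iterations.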
In this paper, we have not used the Held et al. approach. Instead, we generate a matrix of Lagrangian multipliers and solve the problem for each of the multipliers. In the end, we recommend the best multipliers, i.e., those that lead to the optimal solution.
The following Table 1 depicts the chosen values of the Lagrangian multipliers.
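Generating such a grid is straightforward; a sketch assuming, as in Section 4, 101 values with step size 0.1 (i.e., the interval [0, 10] is an assumption inferred from that count):

```python
# Grid of Lagrangian multipliers with step size 0.1; the interval [0, 10]
# is an assumption that yields the 101 values quoted in Section 4.
step = 0.1
multipliers = [round(i * step, 1) for i in range(101)]
```

The `round(..., 1)` call keeps the grid values at one decimal place despite floating-point arithmetic.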
3. Multiobjective Optimization Problems
Many real-life problems deal with multiple objectives and intend to optimize all criteria simultaneously. We now formulate the multiobjective optimization problem as follows.
(MOP) minimize
f(x) = (f_1(x), f_2(x), …, f_m(x)),
subject to the set
x ∈ X.
Table 1. Lagrangian multipliers are generated with step size 0.1 within the interval
[0, 10].
A solution of a multiobjective optimization problem is called an efficient solution or a weak efficient solution [6]. The relation between efficient and weak efficient points is that every efficient point is a weak efficient point, but the converse does not hold. There are a number of ways to solve (MOP), and one of the popular methods is the scalarization technique. When a multiobjective problem is solved by combining its multiple objectives into one single-objective scalar function, the approach is generally known as the scalarization method.
The kth objective weighted constraint problem introduced in [7] [8] is one of the well-known scalarization techniques. The method applies to problems with a disconnected Pareto front and to problems with a disconnected feasible set, under the mild assumptions that the objective functions are continuous and bounded from below with a known lower bound. For each fixed k, the kth objective is minimized while the other weighted objective functions are incorporated as constraints. The problem structure is as follows:
(Sc-MOP)
minimize w_k f_k(x),
subject to
w_j f_j(x) ≤ w_k f_k(x), j = 1, …, m, j ≠ k,
x ∈ X,
and
w_j ≥ 0, j = 1, …, m.
If
x̄
is a weak efficient point of the original problem (MOP), then
x̄
solves (Sc-MOP) for each fixed k. On the other hand, if a point
x̄
solves all subproblems of (Sc-MOP), then it is an efficient solution of the original problem (MOP). This is a strong condition, and it can be relaxed by requiring that
x̄
solve the kth subproblem of (MOP); if Proposition 3.3 in [5] holds, then such points
are weak efficient solutions of the original problem (MOP).
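The kth objective weighted constraint subproblems can be illustrated on a toy biobjective problem by plain enumeration; everything below (the objectives, the discretized set X, and the weights) is invented for illustration and is not the paper's Example 1:

```python
# Hypothetical biobjective problem: f1(x) = x^2, f2(x) = (x - 2)^2,
# over a discretized feasible set X = {0, 0.01, ..., 3}.
X = [i * 0.01 for i in range(301)]
f = [lambda x: x**2, lambda x: (x - 2)**2]
w = [0.5, 0.5]  # hypothetical weight vector

def solve_subproblem(k):
    """Minimize w_k f_k(x) subject to w_j f_j(x) <= w_k f_k(x) for all j != k."""
    feasible = [x for x in X
                if all(w[j] * f[j](x) <= w[k] * f[k](x)
                       for j in range(len(f)) if j != k)]
    return min(feasible, key=lambda x: w[k] * f[k](x))

x1 = solve_subproblem(0)   # k = 1 subproblem
x2 = solve_subproblem(1)   # k = 2 subproblem
```

Here x = 1.0 solves both subproblems, so by the result above it is an efficient solution of this toy (MOP).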
4. Lagrangian Relaxation Technique for (MOP)
We now employ Lagrangian relaxation techniques to solve the subproblems of (Sc-MOP). We relax the constraints
w_j f_j(x) ≤ w_k f_k(x), j ≠ k.
We define an (m − 1) × 1 vector of nonnegative multipliers
u = (u_j)_{j ≠ k}
and add the term
Σ_{j ≠ k} u_j (w_j f_j(x) − w_k f_k(x))
to the objective function of (Sc-MOP). Therefore, the mathematical formulation of the revised kth subproblem is
(LRBMOP)
minimize w_k f_k(x) + Σ_{j ≠ k} u_j (w_j f_j(x) − w_k f_k(x)),
subject to
x ∈ X,
u_j ≥ 0, j ≠ k.
This can be reformulated as
minimize (1 − Σ_{j ≠ k} u_j) w_k f_k(x) + Σ_{j ≠ k} u_j w_j f_j(x),
subject to
x ∈ X,
u_j ≥ 0, j ≠ k.
To illustrate these concepts, we consider the following example.
Example 1. We take
; Consider the problem
and
.
Let’s take
and
.
We now employ the Lagrangian relaxation approach in (Sc-MOP) to solve Example 1.
(Subproblem-1)
,
subject to
,
.
.
We construct the auxiliary problem of Subproblem-1 using the Lagrangian relaxation technique given below.
(LR-1):
,
subject to
.
Similarly, (Subproblem-2)
,
subject to
,
,
.
The Lagrangian relaxation of the above problem is
(LR-2)
,
subject to
.
Solve (LR-1): The Lagrangian multiplier
u
is provided to the algorithm. First, we form a matrix for
u
over the interval [0, 10] that includes 101 parameter values with a step size of 0.1 (see Table 1). We solve (LR-1) for each multiplier, and the optimum solution is recorded. In the algorithm, the Brute Force technique is used to identify the minimum value of the problem. The algorithm is implemented in MATLAB, and it is also noted that the solvers “Ipopt” [9] and “SCIP” [10] are used to solve each of the subproblems. The associated solution and objective value for each multiplier are shown in Table 2.
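Generically, this brute-force loop over the multiplier grid, combined with the feasibility check that the remarks below call for, can be sketched as follows (the objective, the easy set, and the relaxed constraint are hypothetical placeholders, not the data of (LR-1)):

```python
from itertools import product

# Brute force over a multiplier grid: for each u, minimize the relaxed
# objective over the easy set, then keep only multipliers whose minimizer
# also satisfies the relaxed constraint g(x) <= 0.
# The problem data below are hypothetical stand-ins.

grid = [0.1 * i for i in range(101)]          # multiplier grid, step 0.1
easy_set = [(x1 * 0.1, x2 * 0.1)              # discretized easy set
            for x1, x2 in product(range(21), repeat=2)]

def f(x):          # objective of the subproblem (hypothetical)
    return x[0]**2 + x[1]**2

def g(x):          # relaxed constraint g(x) <= 0 (hypothetical)
    return 1.0 - x[0] - x[1]

best = None        # (minimizer, objective value, multiplier)
for u in grid:
    # Solve the relaxed problem for this multiplier.
    x = min(easy_set, key=lambda x: f(x) + u * g(x))
    # Feasibility check: discard minimizers that violate the constraint.
    if g(x) <= 1e-9 and (best is None or f(x) < best[1]):
        best = (x, f(x), u)
```

Only multipliers whose minimizer satisfies the relaxed constraint are retained, which is exactly the filtering issue discussed in the remarks.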
Remark 1
The minimum value in Table 2 is found for
(Lagrangian multiplier) and the respective solution is
and
which is shown in a square box (Yellow) in Figure 1. Cell (1, 1) of Table 1 and Table 2 (Yellow in both tables) contains the respective multiplier and solution. However,
and
is not a feasible solution of (LR-1); that is, it violates
. Therefore, we need to integrate either the Brute Force technique with a constraint check or the Held et al. approach [4] into the algorithm to find the best u and obtain the optimal solution of (Subproblem-1). The optimal solution to this problem is
and
and the optimum value is 1.4163 (Green in both Table 1 and Table 2).
Figure 2 below shows the optimal solution of (Subproblem-1), which also satisfies the constraint
.
Now we will solve the other subproblem (Subproblem 2), which is as follows:
,
subject to
.
Table 2. Solution set of (LR-1) obtained for each multiplier stated in Table 1.
Figure 1. The square (Yellow) box depicts the optimum solution (but not feasible for (Subproblem-1)) and small circles are the solutions for the respective other Lagrangian multipliers.
Figure 2. The square (Yellow) box depicts the optimum solution (which is also feasible for (Subproblem-1)) and small circles are the solutions for the respective other Lagrangian multipliers.
Similar Lagrangian multipliers (Table 1) are provided to the algorithm to solve (Subproblem 2). The solutions are obtained for the associated multipliers. In the last step, the Brute Force method is applied to identify the optimum solution of (Subproblem 2). Figure 3 and Table 3 depict the solutions and the optimum solution of (Subproblem 2). The implementation required MATLAB and the solvers mentioned above.
Table 3. Solution list of (Subproblem 2) obtained for each multiplier listed in Table 1.
Figure 3. The square (Yellow) box depicts the optimum solution of (Subproblem 2) and small circles are the solutions for the respective Lagrangian multipliers.
Remark 2
The optimum solution of (Subproblem 2) is found for
(Lagrangian multiplier) and the solution is
and
which is shown in a square box (Yellow) in Figure 3. The minimum value is 0. Cell (11, 10) of Table 1 and Table 3 (Green in both tables) contains the respective multiplier and the solution. However,
and
is not a feasible solution of (Subproblem 2); that is, it violates the constraints
. Therefore, we need to incorporate either the Brute Force technique with a constraint check or the Held et al. approach [4] into the algorithm to find the best u and obtain the optimal solution of (Subproblem-2). The optimal solution (see Figure 4) to this problem is
and
and the optimum value is 1.462 (Green in both Table 1 and Table 3).
Now we solve both subproblems for a set of weight vectors
. This provides the solutions of Example 1, and these solutions construct the Pareto front (see Figure 5) of the problem.
Remark 3
The implementation of the Held et al. [4] algorithm is quite computationally expensive. The parameter λ, which lies in the interval (0, 2], needs to be chosen cautiously. This is required for each subproblem as well as for each given weight. Figure 5 demonstrates the Pareto front obtained by the Lagrangian relaxation approach for (Subproblem 1) and (Subproblem 2).
Figure 4. The square (Yellow) box depicts the optimum solution (which is also feasible for (Subproblem-2)), and small circles are the solutions for the respective other Lagrangian multipliers.
Figure 5. The circle (Blue) depicts the efficient solutions of Example-1, which also approximate the Pareto front.
5. Conclusion
The Lagrangian relaxation method for solving multiobjective optimization problems has been introduced in this paper. We began by selecting an appropriate and well-known scalarization method that transforms the original multiobjective problem into a scalar objective optimization problem. Afterwards, we solved this scalar problem using Lagrangian relaxation techniques. We supplied grids of multipliers to the algorithm, followed by the application of the Brute Force technique to identify optimum solutions. Finally, we analyzed the results and recommended efficient approaches.