An Objective Penalty Functions Algorithm for Multiobjective Optimization Problem

By using a penalty function method with objective parameters, this paper presents an interactive algorithm for solving inequality-constrained multi-objective programming (MP). The MP is transformed into a single-objective optimization problem (SOOP) with inequality constraints, and it is proved that, under some conditions, an optimal solution to the SOOP is a Pareto efficient solution to the MP. An interactive algorithm for the MP is then designed accordingly. Numerical examples show that the algorithm can find a satisfactory solution to the MP, with the objective weight values adjusted by the decision maker.


Introduction
The interactive algorithm is very efficient for solving multi-objective optimization problems in many fields, while the penalty function method is an important tool for solving constrained optimization problems. Hence, based on an objective penalty function, we propose an interactive algorithm that provides a versatile tool for finding solutions to constrained multi-objective optimization problems. In solving such problems, the interactive algorithm lets the decision maker adjust the objective weight values in dialogue with the computer, so that the solution space is readily understood; this also makes the method easier to use and more convenient to operate.
In this paper, the following inequality-constrained multi-objective programming problem is considered:

(MP): min f(x) = (f_1(x), f_2(x), …, f_m(x))
s.t. g_i(x) ≤ 0, i ∈ I = {1, 2, …, p},
x ∈ X,

where f_j: R^n → R for j ∈ J = {1, 2, …, m}, g_i: R^n → R for i ∈ I, and X is a subset of R^n.
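Pareto efficiency is the solution concept used throughout: a feasible point is Pareto efficient if no other feasible point is at least as good in every objective and strictly better in at least one. As a quick illustration, the dominance test behind this definition can be sketched in Python; the list of objective-value vectors below is invented for the example.

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb: componentwise <=, at least one strict <."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

# Hypothetical objective-value vectors of four candidate solutions.
values = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
efficient = [v for v in values if not any(dominates(w, v) for w in values)]
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three vectors are efficient.
```

Among a finite candidate set, the non-dominated vectors play the role of the Pareto efficient points of (MP).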
It would be ideal to find a solution to (MP) at which all objective values are simultaneously optimal, but this is obviously difficult in general. Hence, many efforts have been devoted to finding efficient methods, and numerous algorithms have been presented [1]-[11].
In 1971, Benayoun et al. first presented an interactive algorithm, STEM, for linear multiobjective programming [1]. Its idea is first to find a solution giving an ideal value for every objective, and then to obtain a better solution by improving the unsatisfactory objective values while keeping the satisfactory objectives within a concession value. Through man-machine conversation, interactive algorithms provide a practical way to solve (MP). Many interactive approaches exist, because it is not possible for a decision maker to know all the objective values of (MP) in advance. Through an interactive algorithm, he may gradually learn how the objective values change and thus, during the interactive procedure, determine his preferences among the objectives. For the unsatisfactory objectives, he may reach a satisfactory solution by modifying some of the parameters he gives, e.g., the ideal objective values and the weights of the objectives.
For example, Geoffrion, Dyer and Feinberg (1972) gave an interactive approach to multi-criterion optimization in which the decision maker's overall preference is represented by a criterion function that is not given explicitly [2]. Zionts and Wallenius (1976) also presented a man-machine interactive mathematical programming method, where the overall utility function is assumed to be implicitly a linear function, and more generally a concave function, of the objective functions [3]. Furthermore, Rosinger (1981) studied an algorithm that is a modification of the steepest ascent method, giving the decision maker significant freedom and ease of self-expression at each iteration and requiring only minimal information on his local estimate of the steepest-ascent direction [4]. Zionts and Wallenius (1983) developed a method for interactive multiple objective linear programming by assuming an unknown pseudoconcave utility function satisfying certain general properties [5]. Sadagopan and Ravindran (1986) developed several interactive procedures for solving multiple criteria nonlinear programming problems, based on the generalized reduced gradient method for single objective nonlinear programming [6]. Siegfried (1990) presented an interactive algorithm for nonlinear vector optimization problems that requires solving only two optimization problems [7]. Kassem (1995) dealt with the interactive stability of multiobjective nonlinear programming problems with fuzzy parameters in the constraints [8]. Aghezzaf and Ouaderhman (2001) proposed an interactive interior point method for finding the best compromise solution to a multiple objective linear programming problem [9]. Abo-Sinna and Abou-El-Enien (2006) extended the technique of order preference by similarity to ideal solution (TOPSIS) to large scale multiple objective programming problems involving fuzzy parameters [10]. Luque, Ruiz and Steuer pointed out that many interactive algorithms have two main features: 1) they help a decision maker (DM) learn about a problem while solving it, and 2) they put to work iteratively any new insights gained during the solution process to help the DM navigate to a final solution [11].
It is difficult to define an appropriate utility function for (MP) in interactive algorithms. By using objective penalty functions as utility functions for (MP), this paper obtains a satisfactory solution: in the interactive algorithm, the decision maker is allowed to choose new weights for the unsatisfactory objectives again and again. Our interactive algorithm thus has two advantages: 1) it is able to find an efficient solution to each new (MP) with good convergence, and 2) it can control the change of the objectives so that a more satisfactory solution is obtained. All the decision maker needs to focus on is the objective changes. The numerical examples in Section 3 show that the proposed interactive algorithm converges quickly.
The remainder of this paper is organized as follows. In Section 2, we present results on the penalty problem of (MP) with penalty parameters. In Section 3, we present an interactive algorithm for solving (MP). Numerical examples show that the proposed algorithm has good convergence and can control the objective changes by changing the objective weights.

An Objective Penalty Function
In this section, an objective penalty function of (MP) is introduced.
For (MP), let X0 = {x ∈ X : g_i(x) ≤ 0, i ∈ I} denote the feasible set, and define the function

f(x, λ) = Σ_{j∈J} λ_j f_j(x),

where the objective weight value λ = (λ_1, λ_2, …, λ_m) is a given positive vector.

Theorem 2.1 If x* is an optimal solution to the following problem

(P_λ): min f(x, λ) = Σ_{j∈J} λ_j f_j(x) s.t. g_i(x) ≤ 0, i ∈ I, x ∈ X,

and λ_j > 0 for all j ∈ J, then x* is a Pareto efficient solution to (MP).
Proof. Suppose that x* is not a Pareto efficient solution to (MP). Then there is a feasible x′ such that f_j(x′) ≤ f_j(x*) for all j ∈ J, and there is at least one j such that f_j(x′) < f_j(x*). Since λ_j > 0 for all j ∈ J, it follows that f(x′, λ) < f(x*, λ), which contradicts the optimality of x* for (P_λ). □
From Theorem 2.1, we learn that a Pareto efficient solution to (MP) can be found by solving the single objective problem (P_λ). Furthermore, (P_λ) can be transformed into an unconstrained optimization problem by using a nonlinear objective penalty function, defined as

F_λ(x, M) = Σ_{j∈J} λ_j (f_j(x) − M)^2 + Σ_{i∈I} (max{g_i(x), 0})^2,

where M ∈ R is the objective penalty parameter. Consider the following nonlinear penalty optimization problem:

P_λ(M): min F_λ(x, M) s.t. x ∈ X.

Theorem 2.2 If x*_M is an optimal solution to P_λ(M), M < f_j(x*_M) for all j ∈ J, and x*_M is a feasible solution to (MP), then x*_M is a Pareto efficient solution to (MP).

Proof. Let x*_M be an optimal solution to P_λ(M). Suppose that x*_M is not a Pareto efficient solution to (MP). Then there is a feasible solution x0 to (MP) such that f_j(x0) ≤ f_j(x*_M) for all j ∈ J, with strict inequality for at least one j. Since x0 is feasible, F_λ(x0, M) = Σ_{j∈J} λ_j (f_j(x0) − M)^2, and since M < f_j(x*_M) for all j ∈ J, the terms (f_j(x) − M)^2 are increasing in f_j(x) on the relevant range; hence F_λ(x0, M) < F_λ(x*_M, M), contradicting the optimality of x*_M for P_λ(M). Therefore x*_M is a Pareto efficient solution to (MP). □
Theorem 2.1 and Theorem 2.2 give a good way to solve (MP). An objective parameter M with the property required in Theorem 2.2 may exist, as shown in the examples below.
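As a concrete sketch of P_λ(M), the following snippet evaluates the quadratic objective penalty function above on a toy bi-objective problem and minimizes it by a crude grid search (a stand-in for a real unconstrained solver). The problem data f1, f2, g1, the weights, and the search box [0, 2] are all invented for illustration.

```python
# Sketch of problem P_lambda(M): minimize
#   F(x, M) = sum_j lam_j * (f_j(x) - M)^2 + sum_i max(g_i(x), 0)^2.
def objective_penalty(x, lam, M, objectives, constraints):
    obj_term = sum(l * (f(x) - M) ** 2 for l, f in zip(lam, objectives))
    con_term = sum(max(g(x), 0.0) ** 2 for g in constraints)
    return obj_term + con_term

# Toy bi-objective problem (hypothetical): min (x1^2, (x1 - 2)^2) s.t. -x1 <= 0.
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2
g1 = lambda x: -x[0]

M, lam = -1.0, (0.5, 0.5)
grid = [(i / 1000.0,) for i in range(0, 2001)]          # crude solver stand-in
x_star = min(grid, key=lambda p: objective_penalty(p, lam, M, (f1, f2), (g1,)))
# x_star is feasible and M < f_j(x_star) for both objectives, so by
# Theorem 2.2 it is (approximately) Pareto efficient for the toy problem.
```

Here the minimizer lands at x1 = 1, where both penalized objective terms balance; the conditions of Theorem 2.2 can then be checked numerically at x_star.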
It is proved in [13] that the stability of a constrained penalty function ensures exactness. So in this paper we define the stability of the objective penalty function in order to ensure the equivalence between P_λ(M) and (P_λ).
Consider a perturbed problem of (P_λ), defined as

P_λ(s): min F_λ(x, s) s.t. x ∈ X,

where the scalar parameter s takes the place of M.
Definition 2.1 Let x* be an optimal solution to (P_λ) and, for s ∈ R, let x*_s be an optimal solution to P_λ(s). If lim_{s→M} F_λ(x*_s, s) = F_λ(x*, M), then (P_λ) is called stable for M.
Theorem 2.3 Let x* be an optimal solution to (P_λ). Then problem (P_λ) is stable for M if and only if x* is an optimal solution to P_λ(M).
Proof. First, suppose that problem (P_λ) is stable for M; we prove that x* is an optimal solution to P_λ(M). Assume, to the contrary, that x* is not an optimal solution to P_λ(M); then there is some x′ such that F_λ(x′, M) < F_λ(x*, M). If x′ were a feasible solution to (P_λ), then, since x* is an optimal solution to (P_λ), we would immediately get a contradiction. Hence x′ is not a feasible solution to (P_λ). For s near M, let x0 be an optimal solution to P_λ(s). Then F_λ(x0, s) ≤ F_λ(x′, s), and letting s → M shows that lim_{s→M} F_λ(x0, s) ≤ F_λ(x′, M) < F_λ(x*, M); hence the problem (P_λ) is not stable for M, a contradiction.
Next, we prove that problem (P_λ) is stable for M under the condition that x* is an optimal solution to P_λ(M). Let x*_s be an optimal solution to P_λ(s). Since x* is an optimal solution to P_λ(M), we have F_λ(x*_s, s) ≤ F_λ(x*, s) and F_λ(x*, M) ≤ F_λ(x*_s, M); letting s → M gives lim_{s→M} F_λ(x*_s, s) = F_λ(x*, M). Hence problem (P_λ) is stable for M. □
Example 2.2 Consider the problem (P2.1) and its perturbed problem (P2.1)(s).
It can be verified that the stability condition holds for this example. Based on Theorem 2.2 and Theorem 2.3, we develop an algorithm to compute (MP). It solves the problems P_λ(M_k) sequentially, and we name it the Objective Penalty Function Algorithm for the Multiobjective Optimization Problem (OPFAMOP for short).
Step 1: Choose a starting point x^1 ∈ X, an initial objective penalty parameter M_1, and a constant N > 1; let k := 1.
Step 2: Taking x^k as the starting point, solve the problem P_λ(M_k): min F_λ(x, M_k) s.t. x ∈ X. Let x^{k+1} be an optimal solution.
Step 3: If x^{k+1} is a feasible solution to (MP) and M_k < f_j(x^{k+1}) for all j ∈ J, stop: x^{k+1} is a Pareto efficient solution to (MP). Otherwise, let M_{k+1} = N·M_k and k := k + 1, and go to Step 2.
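Steps 1-3 can be sketched as follows. The inner minimization is a grid search over an assumed box and the problem data are invented, so this is a schematic of the update M_{k+1} = N·M_k and the Step 3 stopping test rather than the paper's implementation; M_1 = −1 and N = 2 follow the settings used in the numerical examples.

```python
# Schematic OPFAMOP loop on an invented toy problem.
def opfamop(objectives, constraints, lam, M1=-1.0, N=2.0, max_outer=20):
    def F(x, M):
        return (sum(l * (f(x) - M) ** 2 for l, f in zip(lam, objectives))
                + sum(max(g(x), 0.0) ** 2 for g in constraints))

    grid = [(i / 200.0,) for i in range(-400, 401)]      # assumed box [-2, 2]
    M, x = M1, None
    for _ in range(max_outer):
        x = min(grid, key=lambda p: F(p, M))             # Step 2: solve P_lambda(M_k)
        feasible = all(g(x) <= 1e-8 for g in constraints)
        if feasible and all(f(x) > M for f in objectives):
            return x, M                                  # Step 3: stopping test holds
        M *= N                                           # otherwise M_{k+1} = N * M_k
    return x, M

f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2
g1 = lambda x: -x[0]
x_star, M_final = opfamop((f1, f2), (g1,), (0.5, 0.5))
```

On this toy problem the stopping test already holds at the first iterate, so the loop returns with M unchanged.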
The convergence of the OPFAMOP algorithm is proved in the following theorem. Let S(L, f_0) = {x ∈ X : f_0(x) ≤ L}, which is called an L-level set, where f_0(x) = Σ_{j∈J} λ_j f_j(x). We say that the level sets of f_0 are bounded if, for any given L > 0, S(L, f_0) is bounded.
Theorem 2.4 Suppose that f_j (j ∈ J) and g_i (i ∈ I) are continuous on R^n and the level sets S(L, f_0) are bounded. Let {x^k} be the sequence generated by the OPFAMOP algorithm.
1) If {x^k} is a finite sequence (i.e., the OPFAMOP algorithm stops at the k-th iteration), then x^{k+1} is a Pareto efficient solution to (MP).
2) If {x^k} is an infinite sequence and there is some k′ > 1 such that f_j(x^{k+1}) > M_k for all j ∈ J and all k > k′, then {x^k} is bounded and any limit point of it is a Pareto efficient solution to (MP). Otherwise, f_j(x^k) → −∞ for some j ∈ J as k → +∞.
Proof. 1) If the OPFAMOP algorithm terminates at the k-th iteration, the stopping condition in Step 3 holds, so by Theorem 2.1 and Theorem 2.2, x^{k+1} is a Pareto efficient solution to (MP).
2) Suppose that {x^k} is an infinite sequence and there is some k′ > 1 such that f_j(x^{k+1}) > M_k for all j ∈ J and all k > k′. Let x′ be a feasible solution to (MP).
We first show that the sequence {x^k} is bounded. Since x^{k+1} is an optimal solution to P_λ(M_k) and x′ is feasible, we have F_λ(x^{k+1}, M_k) ≤ F_λ(x′, M_k). Expanding both sides and using f_j(x^{k+1}) > M_k for all j ∈ J, it follows that f_0(x^{k+1}) ≤ f_0(x′) for all sufficiently large k. Hence x^{k+1} lies in the level set S(f_0(x′), f_0), which is bounded, and so the sequence {x^k} is bounded. Without loss of generality, assume that x^k converges to some point x̄. For any x0 ∈ X, we have F_λ(x^{k+1}, M_k) ≤ F_λ(x0, M_k), and clearly M_k → −∞ as k → +∞. Letting k → +∞ in this inequality, we obtain that x̄ is a feasible solution to (MP) and that no feasible point dominates x̄. Therefore, x̄ is a Pareto efficient solution to (MP). □

An Interactive Algorithm
In this section, we propose an interactive algorithm based on the objective penalty function. There are many approaches for transforming the MP problem into a single objective optimization problem, such as a criterion function that is not given explicitly [2]-[4], [9], nonlinear utility functions [5,6], the weighting Tchebycheff function [8,9] and the TOPSIS method [10]. In contrast to these, our approach is based on an objective penalty function and is, to our knowledge, novel.
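Of the scalarizations just listed, the weighting Tchebycheff function is easy to state concretely: it minimizes max_j λ_j (f_j(x) − z*_j) against an ideal point z*. The sketch below uses invented bi-objective data, the ideal point z* = (0, 0), and a grid search in place of a real solver.

```python
def tchebycheff(x, lam, ideal, objectives):
    """Weighted Tchebycheff scalarization: max_j lam_j * (f_j(x) - z*_j)."""
    return max(l * (f(x) - z) for l, f, z in zip(lam, objectives, ideal))

# Hypothetical bi-objective problem, searched over the box [0, 2].
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2
ideal = (0.0, 0.0)
grid = [(i / 1000.0,) for i in range(0, 2001)]
x_best = min(grid, key=lambda p: tchebycheff(p, (1.0, 1.0), ideal, (f1, f2)))
```

With equal weights, the minimizer is the point where the two deviations from the ideal point coincide, which is the compromise behavior this scalarization is known for.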
According to the OPFAMOP algorithm, we can select a tolerance ε > 0 and a maximum iteration number K to obtain an approximate Pareto efficient solution to (MP).
We now present the following interactive algorithm, IOPFAMOP, based on the OPFAMOP.
Step 1: The decision maker gives an initial weight vector λ^s with s := 1; choose M_1, N > 1, the maximum inner iteration number K, and a tolerance ε > 0.
Step 2.1: Let k := 1.
Step 2.2: Solve the problem P_{λ^s}(M_k): min F_{λ^s}(x, M_k) s.t. x ∈ X, to get an ε-feasible solution x^k.
Step 2.3: If k < K, modify the objective penalty value by M_{k+1} = N·M_k, let k := k + 1, and go to Step 2.2. Otherwise (k = K), let x^s = x^k and go to Step 3.
Step 3: The decision maker analyzes the objective values f_j(x^s): if the solution x^s is satisfactory, then stop; otherwise, go to Step 4, where the decision maker modifies the objective weight values.
Step 4: Deal with each unsatisfactory objective f_j(x^s) in turn, as follows: if the decision maker wants to increase the j-th objective value, he gives a δ_j^s > 0 and decreases the weight, λ_j^{s+1} := λ_j^s − δ_j^s; if he wants to decrease the j-th objective value, he gives a δ_j^s > 0 and increases the weight, λ_j^{s+1} := λ_j^s + δ_j^s. Let s := s + 1 and go to Step 2.1.
Example 3.1 We wish to find a solution whose objective function values are close to each other.
With the objective penalty function F_λ(x, M) defined above, different approximate solutions (x_1, x_2) are obtained by selecting different weights (λ_1, λ_2), as shown in Table 3.1. Remarks on Table 3.1:
Step 1: The decision maker (DM) first takes a weight value, and the interactive algorithm returns the solution (x_1, x_2) = (1.551123, 0.965918). Because the second objective value f_2 is less than the first objective value f_1, the DM increases the weight value λ_1 in Step 2.
Step 2: The DM takes a second weight value. By the interactive algorithm, the DM obtains the solution (x_1, x_2) = (1.927601, 0.714933). In order to decrease the first objective value f_1 further, the DM again increases the weight value λ_1 in the next step.
Step 3: The DM takes a third weight value and obtains (x_1, x_2) = (2.201802, 0.532132). This time, the DM needs to decrease the weight value λ_1 in the next step.
Step 4: The DM takes a fourth weight value and obtains (x_1, x_2) = (2.019328, 0.653781). The DM is satisfied with this approximate solution and stops.
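The weight-adjustment dialogue illustrated above can be mimicked in code by scripting the decision maker's rule "if f_2 < f_1, increase λ_1". Everything here (the bi-objective toy problem, the starting weights, and the step size) is invented for illustration.

```python
# Scripted IOPFAMOP-style dialogue: a "decision maker" rule raises lambda_1
# whenever f2 < f1 at the current solution.
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2
g1 = lambda x: -x[0]

def solve(lam, M=-1.0):
    """Minimize the objective penalty function by grid search (solver stand-in)."""
    def F(p):
        pen = max(g1(p), 0.0) ** 2
        return lam[0] * (f1(p) - M) ** 2 + lam[1] * (f2(p) - M) ** 2 + pen
    return min(((i / 1000.0,) for i in range(0, 2001)), key=F)

lam = [0.3, 0.7]
for _ in range(5):                      # a few interactive rounds
    x = solve(tuple(lam))
    if f2(x) < f1(x):                   # DM: f1 is too large relative to f2
        lam[0] += 0.2                   # so increase its weight (Step 4)
    else:
        break
# The weights drift toward balance, and the final solution has f1 == f2.
```

As in the Table 3.1 walkthrough, increasing λ_1 pulls the first objective down, and the scripted dialogue stops once the two objective values agree.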
Example 3.2 Consider a problem with three objectives. With the objective penalty function defined as before, let M_1 = −1, N = 2, K = 3, and let ε denote the error tolerance of an approximate solution (x_1, x_2); the numerical results up to s = 6 are given in Table 3.2.
In Table 3.2, from s = 1 to s = 3, the second objective value f_2^s improves from −2.109783 to −2.555347. Now the DM wishes to find a solution such that all three objectives are as small as possible, with the second objective less than −2.4 and the first objective less than −2.5; the first objective values for s = 4, 5, 6 are then as reported in Table 3.2.
Example 3.3 (cf. [11]) With the objective penalty function defined as before, let M_1 = −1, N = 4, K = 5, and let ε denote the error tolerance of an approximate solution x; the numerical results up to s = 6 are given in Table 3.3.
In Table 3.3, for s = 1, 2, the second objective value improves, and the four objective values reach (f_1, f_2, f_3, f_4) = (6.292457, 6.723388, 12.731173, 4.799065), the sum of the four objective values being 30.546081. In iteration 3 of Mariano [11], four groups of objective values were obtained for comparison.

Conclusions
In this paper, using a nonlinear penalty function method with objective parameters, we have presented an interactive algorithm for solving multi-objective programming with inequality constraints. With this algorithm, we can readily find a satisfactory solution. As the objective parameter M is updated, we may obtain a stable, though possibly unsatisfactory, solution. Then, by adopting different weights in the algorithm, we can continue interacting with the computer and obtain many different approximate solutions, from which a satisfactory one can be chosen. Based on the objective penalty function, new algorithms for multiobjective programming and bilevel multiobjective programming deserve further study.