Alienor Method for Nonlinear Multi-Objective Optimization

This paper applies the Alienor method to multiobjective nonlinear optimization problems. In this approach, the multiple criteria of the optimization problem are aggregated into a single one using weighted sums. The resulting single-objective nonlinear optimization problem is then solved using the Alienor method associated with the Optimization Preserving Operators (O.P.O.*) technique, which has proved suitable for (nonlinear) optimization problems with a large number of variables (see [1]). The proposed approach is evaluated on test problems. The results show that it provides good approximations of the Pareto front while requiring small computational time, even for large instances.


Introduction
In recent years, the field of multicriteria optimization has experienced significant evolutions, which have allowed the development of several solution methods and approaches. This multiplicity of multiobjective optimization methods is perceived as one of the strengths of the field. The high number of approaches is explained by the diversity of the problems and the existence of various possible and legitimate solutions to them. However, this phenomenon also reveals some weaknesses.
As in mono-objective optimization, the algorithms used to solve multiobjective optimization problems (MOPs) can be classified into exact and approximate algorithms. In the literature on exact algorithms, more attention has been devoted to bicriteria optimization problems, using methods such as branch-and-bound algorithms, the A* algorithm and dynamic programming. These methods are effective for small problems. But for problems with more than two criteria, there are few effective exact procedures, given the simultaneous difficulties coming from the NP-hard complexity of the problems and their multicriteria framework [2].
To tackle these difficulties, we propose a deterministic approach, the Alienor method, to solve multiobjective optimization problems. This approach combines aggregation (weighted sums), penalization (for constrained problems) and the Alienor method associated with the O.P.O.* technique. It can be used in various multicriteria situations. In [3], Maimos et al. proposed to solve multiobjective linear programming (MOLP) problems using the Alienor method associated with the O.P.O.* technique. This paper aims at extending the Alienor approach to multiobjective nonlinear programming (MONLP) problems. The Alienor method associated with the O.P.O.* technique would then appear as a unique deterministic method to solve efficiently linear or nonlinear multiobjective programs.
Let us consider the following MONLP problem:

min F(x) = (f_1(x), f_2(x), ..., f_k(x)), subject to x in D, (1)

where k >= 2 is the number of objectives, x = (x_1, ..., x_n) is the vector of decision variables, and D, a subset of R^n, is the set of feasible solutions defined by equality and inequality constraints and explicit bounds. F(x) is the vector of objectives to be optimized.

The problem to solve is: find one or several good compromise solutions in a subset of R^n.
The aggregation method is one of the first and most used methods to generate Pareto optimal solutions. It consists in using an aggregation function to transform the MONLP (1) into a mono-objective problem, combining the objective functions f_i into a single function S_1 through a convex combination:

S_1(x) = sum_{i=1}^{k} ω_i f_i(x), with sum_{i=1}^{k} ω_i = 1 and ω_i >= 0, (2)

where the ω_i are weights that reflect the relative importance of each criterion. Problem (1) then becomes:

min_{x in D} S_1(x). (3)
Let us notice that some Pareto optimal solutions may be obtained by solving the mathematical program (3) for various values of the weight vector ω. Such solutions are known as supported solutions [2].
The complexity of a MONLP is equivalent to that of the underlying mono-objective optimization problem. If the underlying optimization problems are polynomial, it will be relatively easy to generate the supported solutions. Nevertheless, there exist other Pareto optimal solutions that cannot be obtained by solving a mathematical program of the form (3). Indeed, these solutions, known as nonsupported solutions, are dominated by convex combinations of supported solutions.
The results obtained by solving problem (3) depend strongly on the parameters chosen for the weight vector ω. In this paper, we use the "a priori multiple weights" strategy [2], which consists in generating various weight vectors. Problem (3) is solved in parallel and independently for the different weight vectors. Various weights may provide different supported solutions; however, the same solution can also be generated by different weights [2].
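As a toy illustration of this multiple-weights strategy (the bi-objective functions f1 and f2 below are hypothetical examples, not taken from the paper), the following Python sketch minimizes the weighted sum ω·f1 + (1−ω)·f2 for several weights by a simple grid search, each weight yielding one supported solution:

```python
def f1(x):
    return x * x            # first criterion (hypothetical example)

def f2(x):
    return (x - 2.0) ** 2   # second criterion (hypothetical example)

def minimize_weighted_sum(w, lo=0.0, hi=2.0, steps=10_000):
    """Grid-search minimizer of w*f1 + (1-w)*f2 on [lo, hi]."""
    best_x, best_s = lo, float("inf")
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        s = w * f1(x) + (1.0 - w) * f2(x)
        if s < best_s:
            best_x, best_s = x, s
    return best_x

# Each weight vector (w, 1-w) yields one supported Pareto point.
supported = [minimize_weighted_sum(w) for w in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

For this example the minimizer has the closed form x* = 2(1 − w), so different weights trace out different supported solutions along the Pareto front.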
The remainder of this paper is organized as follows: Section 2 presents the penalization technique used to transform a constrained optimization problem into an unconstrained one. Section 3 is devoted to the Alienor method and the O.P.O.* technique. Section 4 presents the main algorithm to solve the MOP. To illustrate our approach, computational results and an automatic way to generate the weight vectors are presented in Section 5.

The Penalized Problem
The approach using the Alienor method that we propose here requires solving a constrained optimization problem. The main idea is to transform this constrained problem into an unconstrained one. The classic way to achieve such a transformation uses Lagrange multipliers.
In this section we use a transformation proposed by Konfé et al. [4]:

Definition 1. Let L denote the continuous function obtained by adding to the objective a penalty term built from the functions defining the set of constraints, where |.| denotes the absolute value in R and K is a sufficiently large positive real number; the precise expression of L is given in [4].

We can then define the unconstrained global optimization problem associated with (3):

min L(x), x in R^n.

Indeed, we have the following theorem:

Theorem 1. Let x* be the global minimizer of the unconstrained problem above; then x* is a global minimizer of (3). The complete proof of this theorem is given in Konfé et al. [4].
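The exact penalty expression of Konfé et al. does not survive in this copy; the following Python sketch illustrates the general idea with a generic absolute-value penalty on a hypothetical one-dimensional problem (minimize x^2 subject to x >= 1), where a large K makes the unconstrained minimizer coincide with the constrained one:

```python
K = 100.0  # penalty weight, "sufficiently large"

def s1(x):
    return x * x  # aggregated objective (hypothetical example)

def violation(x):
    # amount by which the hypothetical constraint x >= 1 is violated
    return max(0.0, 1.0 - x)

def penalized(x):
    # unconstrained surrogate: objective plus K times the absolute violation
    return s1(x) + K * abs(violation(x))

def grid_argmin(f, lo, hi, steps=20_000):
    """Crude grid-search minimizer, sufficient for this 1-D illustration."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=f)

x_star = grid_argmin(penalized, -2.0, 3.0)  # close to the constrained minimizer x = 1
```

The penalty makes infeasible points expensive, so the unconstrained search is pushed back onto the feasible set, which is the content of Theorem 1 for the exact transformation of [4].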

Alienor's Method
The Alienor reducing transformation method is based on a simple idea: approximating a function of n variables by a function of a single variable by means of α-dense curves, which have the property of filling the space [5]. More precisely, consider a continuous function f of n variables x_1, ..., x_n. The reducing transformation consists in setting

x_i = h_i(θ), i = 1, ..., n,

where θ is a real variable and the h_i are regular functions defining an α-dense curve. The function of n variables is thus replaced by a function of the single variable θ. First, we recall the following definition [6,7]:

Definition 2. Given a positive number α, a continuous curve h : R → R^n is said to be α-dense in a compact set Ω of R^n if for any x in Ω there exists θ such that d(x, h(θ)) <= α, where d is the Euclidean distance in R^n.

Let us now consider the following problem:

min L(x), x in Ω, (6)

where L is a continuous function and Ω is a compact subset of R^n. Then, using any α-dense curve h in Ω (where α only depends on the compact set Ω), we define the single-variable problem

min L*(θ) = L(h_1(θ), ..., h_n(θ)). (7)

If θ* is a solution of (7), then h(θ*) is an approximation of a solution of (6). Moreover, there exists a solution x* of (6) such that (see [6,7])

|L*(θ*) − L(x*)| <= ε(α), where ε(α) → 0 as α → 0.

About the choice of the reducing transformation: the smaller the length of the curve, the smaller the calculation time. Several works [5-8] have been devoted to finding α-dense curves with minimal length and good precision (small coefficient α).
The reducing transformation used in this work depends on a real parameter r.

Remark 1. This curve is α-dense in the compact set considered, for θ in [0, θ_max], where θ_max depends on the reducing transformation; this will be made precise later.

Theorem 2 gives the explicit form of this transformation. Practical applications of this reducing technique show that the obtained function L* is, in most cases, a multimodal function involving a long calculation time to find a global minimum. That is why a new concept for solving multimodal optimization problems was developed by Mora et al. [10].

Optimization Preserving Operators (O.P.O.*)

Konfé et al. [11] have proposed a new type of operators, acting on a subset of continuous functions, called Optimization-Preserving-Operators* (O.P.O.*). Let L* be an objective function mapping I into R; we assume that L* is Lipschitzian and has the globally convex property. Then we have the following fundamental theorem:

Theorem 3 (fundamental result). Let L* be an objective function and θ_0 an arbitrary element of I. Let S be the set of solutions of the associated O.P.O.* equation. If S contains a unique element, it is the solution of the global minimization problem.

The complete proof of this theorem is given in [11].

To solve the global optimization problem (7), we use the O.P.O.* to find a unique solution θ_0.

Algorithm
Now, we fully describe the algorithm we propose to solve our initial problem (1).
The MOP to solve is problem (1).

Step 1: Use the weighted sum to aggregate the different objective functions, obtaining S_1(x) = sum_{i=1}^{k} ω_i f_i(x), with sum_{i=1}^{k} ω_i = 1 and ω_i >= 0.

Step 2: If problem (1) is constrained, use the penalization technique to define the multidimensional function L(x).

Step 3: Use the Alienor method to convert the multidimensional function into a single-variable function L*(θ) by setting x_i = h_i(θ).

Step 4: Use the O.P.O.* to eliminate all local minima, i.e. solve the associated equation, and let S_i* be the resulting set of solutions.
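Under illustrative assumptions (hypothetical objectives, a cosine-based α-dense curve, and a fine 1-D grid search standing in for the O.P.O.* step, which is specified in [11]), the steps above can be sketched end-to-end in Python:

```python
import math

W = (0.5, 0.5)   # weight vector for Step 1 (assumed)
OMEGA = 100.0    # curve frequency for Step 3 (assumed)

def objectives(x1, x2):
    # two hypothetical criteria on [0,1]^2
    return (x1 ** 2 + x2 ** 2, (x1 - 1.0) ** 2 + (x2 - 1.0) ** 2)

def s1(x1, x2):                      # Step 1: weighted-sum aggregation
    f = objectives(x1, x2)
    return W[0] * f[0] + W[1] * f[1]

# Step 2 (penalization) is not needed here: the box [0,1]^2 is
# handled directly by the curve below.

def h(theta):                        # Step 3: alpha-dense curve in [0,1]^2
    return (0.5 * (1.0 + math.cos(theta)),
            0.5 * (1.0 + math.cos(OMEGA * theta)))

def l_star(theta):                   # reduced single-variable objective
    return s1(*h(theta))

# Step 4: a fine 1-D grid search stands in for the O.P.O.* technique.
best_theta = min((math.pi * i / 100_000 for i in range(100_001)), key=l_star)
best_point = h(best_theta)
```

For W = (0.5, 0.5) the aggregated objective is minimized at (0.5, 0.5), and the point recovered along the curve lands within the curve's density coefficient of that minimizer.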

Computing environment:
We implemented the algorithm in Maple 12 on an Intel Core 2 Duo T5850 CPU @ 2.16 GHz with 4 GB of RAM, running Windows Vista Service Pack 2. In order to have graphical representations, we mainly considered bi-criteria problems with a large number of variables.

Example 1: Zitzler's Test Function
For the first example, we solve the well-known Zitzler test function. Note that this example is an unconstrained MONLP problem; it has been solved in [12], and the true Pareto front is found when g(X) = 1, where n is the number of variables of the decision vector X. In our example, we set n = 30. With our approach, we obtain the result (see Table 1) with a computational time of 411.09 s.
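The formulas are not reproduced in this copy; assuming the function is ZDT1 (a common member of Zitzler's test suite, solved with n = 30 and whose true front is reached exactly when g(X) = 1), its Python definition is:

```python
import math

N = 30  # number of decision variables, as in the paper

def zdt1(x):
    """ZDT1 test function: x is a list of N values in [0, 1]."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (N - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# On the true Pareto front x[1], ..., x[N-1] are all zero, so g = 1
# and the front is f2 = 1 - sqrt(f1).
```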
The Pareto optimal front is represented in Figure 1. Using Matlab 7.01, we plot our result together with the Pareto curve obtained using NSGA-II. The Alienor method shows better efficiency compared to NSGA-II, which is the most widely used algorithm nowadays (see Figure 2).

Example 2: Test Function 2
This second problem is a constrained MONLP. It has been proposed and solved in [13]. Our algorithm obtains the result (see Table 2) with a computational time of 17.971 s.
The Pareto optimal front is represented by Figure 3.

Example 3: Binh and Korn Problem

This problem has been solved in [14]. With our approach, we obtain the result (see Table 3) with a computational time of 12.667 s.
The Pareto optimal front is represented by Figure 4.

Example 4: Osyczka and Kundu Problem
For the last two-objective test function, we consider the Osyczka and Kundu problem, which has been solved in [14].
The Alienor method with the O.P.O.* technique gives the result in 286.761 s (see Table 4). The Pareto optimal front is represented in Figure 5.

Example 5: Tamaki Test Problem
In this section, we consider a three-objective test function: the Tamaki test problem, a maximization problem over nonnegative variables subject to a single constraint. This problem has been solved in [15]. The Alienor method associated with the O.P.O.* technique gives the following results. To generate the weight vectors, we use a short Maple procedure which can easily be extended to more than three objective functions. The Pareto optimal front is represented in Figure 6.
With this example we show that our method can solve MOPs with more than two objective functions. The only difficulty was to find a procedure to generate the weight vectors in this case, and it was overcome with the procedure mentioned above. However, we observed a large calculation time due to the number of optimization problems to solve: with three objective functions we have to solve 66 optimization problems, and 286 with four. In further work, our interest is in reducing this calculation time and in addressing combinatorial multiobjective optimization using the Alienor method associated with the O.P.O.* technique.
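The Maple procedure for generating the weight vectors does not survive in this copy. A Python sketch of an equivalent generator (assuming, from the counts of 66 and 286 problems quoted above, a uniform simplex grid with step 0.1) is:

```python
def weight_vectors(m, divisions=10):
    """All vectors (k1, ..., km)/divisions with nonnegative integer ki
    summing to divisions, i.e. a uniform grid on the weight simplex."""
    def parts(total, slots):
        if slots == 1:
            yield (total,)
            return
        for k in range(total + 1):
            for rest in parts(total - k, slots - 1):
                yield (k,) + rest
    return [tuple(k / divisions for k in p) for p in parts(divisions, m)]

# 3 objectives -> 66 weight vectors; 4 objectives -> 286, matching the
# number of scalarized problems reported in the text.
```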

Conclusions
In this paper we have proposed an extended approach to solve multiobjective nonlinear optimization problems (MONLP). Solving such problems is crucial, since numerous real-life situations in science and engineering are modelled as nonlinear optimization problems with multiple objectives. Our approach relies on aggregation techniques and the Alienor method to generate the Pareto curve of MONLPs. It is an alternative to metaheuristics, which are nowadays the most popular approach to tackle complex multiobjective problems. Using test problems from the MONLP literature, our computational experiments have shown that our approach provides good approximations of Pareto-optimal fronts and is time-efficient, even when the problems have a large number of variables.

Figure 3. The Pareto curve of test function 2.

Figure 4. The Pareto curve for the Binh and Korn problem.

Figure 5. The Pareto curve for the Osyczka and Kundu problem.

Figure 6. The Pareto curve for the Tamaki test problem.
Let L  be an objective function mapping  into  : We assume that L  is a lipschitzian function having a globally convex property.Let 0 [11]ators * (O.P.O * )Konfé et al.[11]have proposed a new type of . .
a global minimizer of