Journal of Software Engineering and Applications, 2011, 4, 316-319
doi:10.4236/jsea.2011.45035 Published Online May 2011 (http://www.SciRP.org/journal/jsea)
Copyright © 2011 SciRes. JSEA
A Hybrid Parallel Multi-Objective Genetic
Algorithm for 0/1 Knapsack Problem
Sudhir B. Jagtap1, Subhendu Kumar Pani2, Ganeshchandra Shinde3*
1Swami Vivekanand Mahavidyalaya, Udgir, India; 2RCMA, BPUT Bhubaneswar, Orissa, India; 3Indira Gandhi College, Nanded, India.
Email: sudhir.jagtap7@gmail.com, Subhendu_pani@rediffmail.com, *shindegn@yahoo.co.in
Received February 23rd, 2011; revised April 1st, 2011; accepted April 12th, 2011.
ABSTRACT
In this paper, a hybrid parallel multi-objective genetic algorithm is proposed for solving the 0/1 knapsack problem.
Multi-objective problems with non-convex and discrete Pareto fronts can take enormous computation time to converge
to the true Pareto front. Hence, the classical multi-objective genetic algorithms (MOGAs), i.e., non-parallel MOGAs,
may fail to solve such intractable problems in a reasonable amount of time. The proposed hybrid model combines the
best attributes of the island and Jakobovic master-slave models. We conduct an extensive experimental study on a
multi-core system by varying the number of processors, and the results are compared with the basic parallel model,
i.e., the master-slave model, which is used to parallelize NSGA-II. The experimental results confirm that the hybrid
model shows a clear edge over the master-slave model in terms of processing time and approximation to the true Pareto front.
Keywords: Multi-Objective Genetic Algorithm, Parallel Processing Techniques, NSGA-II, 0/1 Knapsack Problem,
Trigger Model, Cone Separation Model, Island Model
1. Introduction
Many of the real-world engineering optimization prob-
lems are multi-objective in nature, since they normally
have several non-commensurable objectives that must
be satisfied at the same time. These problems are
known as multi-objective optimization problems (MOP)
in contrast with single objective optimization problems
(SOP) [1]. The notion of optimum has to be redefined
in this context: instead of aiming to find a single
optimal solution, a procedure for solving a MOP should
determine a set of good compromises or trade-off solutions,
generally known as Pareto optimal solutions, from which
the decision maker has enough flexibility to choose a
particular solution. These solutions are optimal in the
wider sense that no other solution in the search space is
superior when all objectives are considered. Pareto
optimal solutions form the Pareto front
in a k-dimensional objective space, where k is the
number of the objectives in the optimization problem.
Evolutionary algorithms (EAs) have the potential for
finding multiple Pareto optimal solutions in a single
run and have been widely used in this area.
One of the major drawbacks of the multi-objective genetic
algorithm (MOGA) is that a relatively large number of
solutions has to be evaluated before good results are
obtained, which is especially true for multi-objective
optimization problems in higher dimensions. As a result,
a large population size is required. This drawback can be
compensated for by parallelizing the MOGA.
With regards to parallelization, the main difference
between single and multi-objective evolutionary algo-
rithms seems to be that in the multi-objective case, a
set of solutions is sought rather than a single optimum
[2]. This opens up the possibility of having the different
processors search for different solutions rather than
follow an identical goal. The hope is that such a
divide-and-conquer principle is more efficient than having
all processors work on the whole problem [3].
Different models, such as the Jakobovic master-slave,
trigger, island, and cone-separation models, have been
proposed by different researchers to implement parallel
MOGAs (PMOGAs), e.g., for NSGA-II [4-6]. In this paper,
we propose a hybrid model to parallelize a MOGA for
solving the real-world multi-objective 0/1 knapsack
problem and study its characteristics in terms of
convergence quality and computation time.
The 0/1 knapsack problem is a widely studied problem
due to its NP-hard nature and practical importance.
Generally, a 0/1 knapsack problem consists of a set of
items, a weight and a profit associated with each item,
and an upper bound on the capacity of the knapsack. The
task is to find a subset of items which maximizes the
total profit of the subset, while all the selected items
fit into the knapsack, i.e., the total weight does not
exceed the given capacity [7]. This single-objective
problem can be extended directly to a multi-objective
case by allowing an arbitrary number of knapsacks.
Formally, the multi-objective 0/1 knapsack problem is
defined through (1) and (2).
Given a set of m items and a set of n knapsacks with
pij = profit of item j according to knapsack i, wij =
weight of item j according to knapsack i, ci = capacity
of knapsack i.
Find a vector $x = (x_1, x_2, \ldots, x_m) \in \{0, 1\}^m$ such that

$$\sum_{j=1}^{m} w_{ij}\, x_j \le c_i, \qquad i = 1, 2, \ldots, n \qquad (1)$$

and for which $f(x) = \left(f_1(x), f_2(x), \ldots, f_n(x)\right)$ is maximum, where

$$f_i(x) = \sum_{j=1}^{m} p_{ij}\, x_j \qquad (2)$$

and $x_j = 1$ if and only if item j is selected.
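To make Equations (1) and (2) concrete, the following C sketch evaluates a candidate bit vector against a problem instance. It is a minimal illustration under our own naming (evaluate, N_KNAPSACKS, N_ITEMS), not the authors' implementation; in a GA such a routine would be called once per individual per generation, and infeasible individuals would typically be repaired or penalized.

```c
#define N_KNAPSACKS 2     /* n: number of knapsacks (objectives), assumed */
#define N_ITEMS     250   /* m: number of items, assumed                  */

/* Evaluate a 0/1 vector x against Equations (1) and (2).
 * Returns 1 if every capacity constraint in (1) holds, 0 otherwise.
 * On return, f[i] is the profit of knapsack i as defined in (2).     */
int evaluate(int x[N_ITEMS],
             int p[N_KNAPSACKS][N_ITEMS],
             int w[N_KNAPSACKS][N_ITEMS],
             long c[N_KNAPSACKS],
             long f[N_KNAPSACKS])
{
    int feasible = 1;
    for (int i = 0; i < N_KNAPSACKS; i++) {
        long weight = 0;
        f[i] = 0;
        for (int j = 0; j < N_ITEMS; j++) {
            if (x[j]) {               /* item j is selected (x_j = 1) */
                weight += w[i][j];    /* left-hand side of (1)        */
                f[i]   += p[i][j];    /* objective value (2)          */
            }
        }
        if (weight > c[i])
            feasible = 0;             /* capacity constraint violated */
    }
    return feasible;
}
```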
In order to obtain reliable and sound results, three
different test problems are investigated in which both the
number of knapsacks (i.e., the number of objectives) and
the number of items are varied. Two knapsacks (i.e., two
objectives) are taken under consideration in combination
with 250 items. Following the suggestions in [7], profits
and weights are chosen such that pij and wij are random
integers in the interval (10, 100). As also reported there,
about half of the items are expected to be in the optimal
solutions when this type of knapsack capacity is used [7].
Thus, the knapsack capacities are set to half the total
weight of the corresponding knapsack, as indicated in
Equation (3):
$$c_i = 0.5 \sum_{j=1}^{m} w_{ij} \qquad (3)$$
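A corresponding sketch of instance generation, again with hypothetical names (make_instance) and the same assumed constants as above, draws profits and weights as random integers in the open interval (10, 100) and sets each capacity according to Equation (3):

```c
#include <stdlib.h>

#define N_KNAPSACKS 2
#define N_ITEMS     250

/* Generate a random test instance following [7]: profits and weights are
 * random integers in the open interval (10, 100), i.e. 11..99, and each
 * capacity is set to half the total weight of its knapsack (Equation (3)). */
void make_instance(int p[N_KNAPSACKS][N_ITEMS],
                   int w[N_KNAPSACKS][N_ITEMS],
                   long c[N_KNAPSACKS])
{
    for (int i = 0; i < N_KNAPSACKS; i++) {
        long total_weight = 0;
        for (int j = 0; j < N_ITEMS; j++) {
            p[i][j] = 11 + rand() % 89;
            w[i][j] = 11 + rand() % 89;
            total_weight += w[i][j];
        }
        c[i] = total_weight / 2;   /* c_i = 0.5 * sum_j w_ij */
    }
}
```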
The paper is structured as follows: Section 2 reviews
PMOGA and its models. Section 3 presents the proposed
hybrid model. Section 4 evaluates the problem empirically
using the proposed model. Finally, conclusions are drawn
in Section 5.
2. Review of Parallel MOGA Models
Evolutionary algorithms are very suitable for paralleliza-
tion, as crossover, mutation, and in particular the time-
consuming fitness evaluation can be performed inde-
pendently on different individuals.
The approaches to parallelize multi-objective GAs can
be grouped into three categories:
1) Jakobovic Master-Slave: The Jakobovic model is one
type of master-slave model [1]. In this model the master
only triggers the slaves. Each slave, as well as the master
(which also acts as a slave instead of remaining idle),
creates a random initial population, evaluates the created
individuals, performs the whole evolutionary process, and
then returns the final results to the master. This
eliminates the time required to generate each population
at the server for the slaves and the associated
communication overhead, and it allows an exhaustive
search of the solution space by the slaves through random
explorations [1].
2) Island Model: In this model, every processor runs an
independent EA, using a separate sub-population. The
processors cooperate by regularly exchanging migrants
(good individuals). The island model is particularly suit-
able for computer clusters, as communication is limited
[3]. This model scales very well and has low communication
overhead, since only a small amount of the population
migrates between the islands. It is robust to failures, as
losing a small sub-population is tolerable, and it maintains
higher diversity because it works on the principle of
isolated populations with occasional migration. (A minimal
sketch of this migration step is given at the end of this
section.)
3) Cone Separation model: It was suggested by Branke
et al. in 2004 [2]. The basic idea of this model is to
divide the search space into several regions and assign
each region to a different processor. The partitioning of
the search space is adapted at regular intervals by normalizing the
fitness values in such a way that the whole non-domi-
nated front is within the unit square (hypercube, for more
than 2 dimensions). After the normalization, the fitness
space is partitioned into equal cones. Each processor is
then assigned one part. Therefore, whenever the con-
straints are adapted, individuals violating the constraints
are migrated into the population where they do not vio-
late the constraints. Thereby, individuals are just added
to the receiving population, without explicitly deleting
others. Overall, the approach is integrated into NSGA-II
[4,6].
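Following up on the island model above, the sketch below shows how the ring migration of good individuals could be realized with MPI. This is our own minimal illustration, not code from the paper; GENOME_LEN, MIG_COUNT, and ring_migrate are assumed names and sizes.

```c
#include <mpi.h>
#include <string.h>

#define GENOME_LEN 250   /* bits per individual, assumed (250 items)     */
#define MIG_COUNT  5     /* individuals exchanged per migration, assumed */

/* Send our best individuals to the next island in the ring and receive
 * the best individuals of the previous island in their place.          */
void ring_migrate(int best[MIG_COUNT][GENOME_LEN], MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int next = (rank + 1) % size;          /* ring topology */
    int prev = (rank - 1 + size) % size;
    int incoming[MIG_COUNT][GENOME_LEN];

    /* Exchange in one call to avoid deadlock on blocking sends. */
    MPI_Sendrecv(best, MIG_COUNT * GENOME_LEN, MPI_INT, next, 0,
                 incoming, MIG_COUNT * GENOME_LEN, MPI_INT, prev, 0,
                 comm, MPI_STATUS_IGNORE);

    /* Replace the outgoing migrants with the received individuals; a real
     * GA would typically insert them in place of its worst individuals.  */
    memcpy(best, incoming, sizeof incoming);
}
```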
3. Hybrid Model
Master-slave and multiple-deme parallel GAs can
quickly reach good solutions when they are designed
properly, but results of even better quality can be
obtained by combining two basic forms of parallel GA into
a hierarchical algorithm. The basic idea is to design a
multiple-deme algorithm in which the demes themselves are
some form of parallel MOGA. In our proposed model we have
integrated two basic models: the island (i.e.,
multiple-deme) model at the higher level and the Jakobovic model at
the lower level. In this arrangement, the population of
each deme is managed by the Jakobovic model. The demes are
connected through a ring topology, and migration takes
place between neighboring demes at regular intervals. The
model is represented pictorially in Figure 1. The working
principle of the model can be explained in the following
steps (a minimal MPI sketch follows the list):
1) Each island generates its population.
2) The Jakobovic model distributes the population equally among the slaves, including the master.
3) All master and slave processors start running independent MOGAs in parallel.
4) The master collects the best population from the slaves.
5) The best individuals of each island are migrated to the neighboring island.
6) The neighboring island receives the individuals and distributes them equally among all the processors inside it.
7) Steps 3) onward are repeated until the termination condition is reached.
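The listing below is a minimal MPI sketch of steps 1)-7) under our own assumptions about the process layout: the global communicator is split into demes, rank 0 of each deme acts as the Jakobovic master, and demes migrate in a ring every MIGRATION_INTERVAL generations. The helper functions (run_local_moga, pack_best, absorb_migrants), the genome encoding, and the fixed termination bound are hypothetical placeholders, not the authors' code.

```c
#include <mpi.h>

#define N_DEMES            2     /* number of islands, assumed                 */
#define MIGRATION_INTERVAL 10    /* generations between migrations (Table 1)   */
#define MAX_GENERATIONS    200   /* assumed overall bound for the sketch       */
#define GENOME_INTS        250   /* one int per item, assumed encoding         */
#define MAX_DEME_PROCS     8     /* upper bound on processes per deme, assumed */

/* Minimal stubs so the sketch is self-contained; a real implementation
 * would contain the actual NSGA-II machinery here.                      */
void run_local_moga(int generations) { (void)generations; }          /* steps 2)-3) */
void pack_best(int *buf) { for (int j = 0; j < GENOME_INTS; j++) buf[j] = 0; }
void absorb_migrants(const int *buf, int count) { (void)buf; (void)count; } /* step 6) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Step 1): split the processes into demes (islands).  With this split,
     * the Jakobovic master of deme d is the process with world rank d.     */
    int deme_id = world_rank % N_DEMES;
    MPI_Comm deme;
    MPI_Comm_split(MPI_COMM_WORLD, deme_id, world_rank, &deme);

    int deme_rank, deme_size;
    MPI_Comm_rank(deme, &deme_rank);
    MPI_Comm_size(deme, &deme_size);

    int my_best[GENOME_INTS];
    int migrants[MAX_DEME_PROCS * GENOME_INTS];

    for (int gen = 0; gen < MAX_GENERATIONS; gen += MIGRATION_INTERVAL) {
        /* Steps 2)-3): every process, the master included, evolves its share. */
        run_local_moga(MIGRATION_INTERVAL);

        /* Step 4): the deme master gathers the best individual of each process. */
        pack_best(my_best);
        MPI_Gather(my_best, GENOME_INTS, MPI_INT,
                   migrants, GENOME_INTS, MPI_INT, 0, deme);

        if (deme_rank == 0) {
            /* Step 5): deme masters exchange the migrants around the ring. */
            int next = (deme_id + 1) % N_DEMES;
            int prev = (deme_id - 1 + N_DEMES) % N_DEMES;
            MPI_Sendrecv_replace(migrants, deme_size * GENOME_INTS, MPI_INT,
                                 next, 0, prev, 0,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        /* Step 6): the received individuals are spread over the deme again. */
        MPI_Bcast(migrants, deme_size * GENOME_INTS, MPI_INT, 0, deme);
        absorb_migrants(migrants, deme_size * GENOME_INTS);
        /* Step 7): repeat until the Table 1 termination condition holds.    */
    }

    MPI_Comm_free(&deme);
    MPI_Finalize();
    return 0;
}
```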
4. Experimental Study
In this section we present the experimental setup and
discuss the empirical results of each experiment in detail.
4.1. Experimental Setup
The algorithm is implemented in the C language on a
multi-core Core i7 system with 8 cores, each running at
1.6 GHz, with 4 GB RAM, under a Linux OS. The
communication between the processors is supported by the
freely available MPICH (Message Passing Interface) library.
The parameters of the parallel MOGA are shown in Table 1:
the first row gives the population size per deme, and the
second, third, fourth, and fifth rows give the crossover
probability, mutation probability, migration rate, and
termination condition, respectively, assumed for the
experiment.

Figure 1. A hybrid model with an island model at the upper level and a Jakobovic model at the lower level.

Table 1. Parameter setting.

Population size per deme:  200
Crossover probability:     0.8
Mutation probability:      0.016 per bit
Migration rate:            after every 10 generations
Termination condition:     average knapsack profit greater than 10,000, or the movement of the non-dominated solutions remains stagnant for 20 generations
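For reference, the Table 1 settings could be captured as compile-time constants; the macro names below are our own, and the mpicc/mpirun lines merely illustrate one assumed way of building and launching an MPICH program on the 8-core machine described above.

```c
/* Assumed parameter constants mirroring Table 1 (names are illustrative).
 *
 * Build and run with MPICH, e.g.:
 *   mpicc -O2 hybrid_moga.c -o hybrid_moga
 *   mpirun -np 8 ./hybrid_moga
 */
#define POP_SIZE_PER_DEME   200
#define CROSSOVER_PROB      0.8
#define MUTATION_PROB_BIT   0.016   /* per bit                                 */
#define MIGRATION_INTERVAL  10      /* generations between migrations          */
#define PROFIT_TARGET       10000   /* stop when average profit exceeds this   */
#define STAGNATION_LIMIT    20      /* or when the front stagnates this long   */
```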
4.2. Experimental Results
The experiment is conducted with the very well-known
multi-objective 0/1 knapsack problem. The problem is
solved by the proposed hybrid model, and the result is
compared with the master-slave model. Figure 2 shows the
convergence results of the different models with two and
four processors, respectively. From Figure 2, it can be
seen that in the master-slave model the convergence
quality deteriorates as the number of processors increases:
the convergence with two processors is better than with
four. In the hybrid model, it is clearly observed that the
convergence quality improves as the number of demes
increases. Overall, the quality of the results obtained
with the proposed hybrid model is better than with the
master-slave model.
To observe the behavior of the proposed model, we varied
the number of processors in each deme while keeping the
number of demes constant. It is observed that if we
increase the number of processors inside a deme, the
quality of the result deteriorates, but the time taken for
each generation decreases. The speedup graph shown in
Figure 3 is drawn by varying the number of processors in
each deme, with the number of demes fixed at two.
Figure 2. Convergence results of the hybrid model with two and four demes and of the master-slave model with two and four processors.

Figure 3. Speedup analysis of the hybrid model.

The main challenges associated with parallelism are:
Minimizing input/output.
Minimizing synchronization and communication.
Effective load balancing.
Deciding on the best search procedure to use.
Avoiding duplication of work.
The speed and efficiency of the parallel formulations are
analyzed below. In a parallel MOGA, the main issues to be
taken into account are:
Load balancing.
Minimizing communication.
Overlapping communication and computation.
Speed Up:
A program consists of some serial instructions and some
parallel instructions. The serial instructions cannot be
parallelized because of dependencies between the
instructions. Let fs denote the serial fraction of
instructions and fp = 1 - fs the parallel fraction; then
the speedup is defined as
$$S = \frac{T_s}{T_p} = \frac{1}{f_s + \frac{f_p}{p}}$$

where p is the number of processors.
Efficiency:
The efficiency in terms of fp and fs is computed as follows:
$$E = \frac{S}{p} = \frac{1}{p\, f_s + f_p}$$
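As a numerical illustration (the serial fraction $f_s = 0.1$ is an assumed value, used purely for the example), on $p = 8$ processors these formulas give

$$S = \frac{1}{0.1 + 0.9/8} \approx 4.71, \qquad E = \frac{S}{p} = \frac{1}{8 \times 0.1 + 0.9} \approx 0.59,$$

i.e., even a 10% serial fraction limits an 8-processor run to roughly a 4.7-fold speedup.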
In general, the total parallel overhead is an increasing
function of p, growing at least linearly when fs > 0. Since
parallel processing uses more than one processor, load
balancing between the processors is required. There are two
types of load balancing, static and dynamic; load balancing
seeks to address these issues by balancing the load and
reassigning work to the more lightly loaded processors.
Distributing the search space is a challenging and critical
task, since it requires knowledge of all the individuals
stored at different locations and the ability to combine
partial results from the individual search spaces into a
single result.

5. Conclusions

Parallelizing multi-objective evolutionary algorithms is an
important issue, as they involve large computation times
and must produce multiple solutions. In this paper, we
proposed a new hybrid model to parallelize multi-objective
genetic algorithms and applied it to the 0/1 knapsack
problem. The results were also compared with the basic
master-slave model. From the convergence results, it is
concluded that the hybrid model gives better results than
the master-slave model irrespective of the number of
processors.

REFERENCES

[1] T. Al-Somani and K. Qureshi, "Reliability Optimization Using Genetic Algorithms," M.Sc. Thesis, King Abdul-Aziz University, Saudi Arabia, 2000.

[2] J. Branke, H. Schmeck, K. Deb and M. S. Reddy, "Parallelizing Multi-Objective Evolutionary Algorithms: Cone Separation," Congress on Evolutionary Computation, Vol. 2, 2004, pp. 1952-1957.

[3] E. Cantu-Paz, "A Survey of Parallel Genetic Algorithms," Calculateurs Paralleles, Vol. 10, No. 2, 1998, pp. 141-171.

[4] K. Deb, A. Pratap, S. Agrawal and T. Meyarivan, "A Fast and Elitist Multi-Objective Genetic Algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, 2002, pp. 182-197. doi:10.1109/4235.996017

[5] C. Grosan, "How to Compare the Multi-Objective Evolutionary Algorithms Performances?" Zilele Academice Clujene, Vol. 4, No. 2, 2003, pp. 82-87.

[6] F. Streichert, H. Ulmer and A. Zell, "Parallelization of Multi-Objective Evolutionary Algorithms Using Clustering Algorithms," In: C. A. Coello, A. H. Aguirre and E. Zitzler, Eds., Evolutionary Multi-Criterion Optimization, Springer, Berlin/Heidelberg, Vol. 3410, 2005, pp. 92-107. doi:10.1007/978-3-540-31880-4_7

[7] E. Zitzler, K. Deb and L. Thiele, "Comparison of Multi-Objective Evolutionary Algorithms: Empirical Results," Swiss Federal Institute of Technology (ETH), Zurich, Vol. 3, No. 2, 1999, pp. 192-197.