Cooperative Particle Swarm Optimization in Distance-Based Clustered Groups

TCPSO (Two-swarm Cooperative Particle Swarm Optimization) was proposed by Sun and Li in 2014. TCPSO divides the swarm into two groups with different migration rules, and it outperforms traditional PSO and other modified PSO methods on high-dimensional nonlinear optimization problems. This paper proposes a particle swarm optimization method that modifies TCPSO to avoid inappropriate convergence onto local optima. The key feature of the proposed method is that the two kinds of subpopulations constructed under the TCPSO scheme are further divided into clusters by a distance-based method, k-means clustering, so that both the diversity and the centralization of the search process are maintained. This paper conducts numerical experiments using several types of functions, and the experimental results indicate that the proposed method outperforms TCPSO on large-scale optimization problems.


Introduction
PSO (Particle Swarm Optimization) is one of the most famous evolutionary computation methods; it emulates the swarm behavior of flocking birds [1]. In this method, the features of the target problem are not recorded independently on each individual of the swarm; instead, the whole population shares information to search efficiently. In the search process of PSO, each individual moves closer to the individual with the highest evaluation value by updating its current solution. Because the solution-update rule is very simple and computationally cheap, variants of PSO have been applied to a number of problems in engineering disciplines [2] [3] [4].
However, because of the simplicity of the search procedure, the search process of PSO sometimes converges to a local optimum, depending on the initial solutions and the parameter settings. Several improved PSOs have been proposed to avoid this convergence problem. Examples include fully informed PSO (FIPSO), in which several subgroups of particles (individuals) communicate with each other through links between the subgroups [5] [6]; linearly decreasing weight PSO (LDWPSO), in which the search parameters are adapted so as to realize a global search in the first part of the search process and a local search toward the end [7]; FAPSO-ACO-K, a hybrid algorithm that combines fuzzy adaptive PSO (FAPSO) and ant colony optimization (ACO) for local search with the k-means clustering method for global search [8]; and two-swarm cooperative PSO (TCPSO), which maintains the diversity of the population by dividing it into two kinds of subgroups [9].
The manner of division in TCPSO is not always suitable for the problem at hand, so the search process of TCPSO is not always appropriate. This study improves TCPSO by applying a statistical clustering method for effective division of the population. The clustering is applied several times during the search, based on the degree of convergence. The experimental results indicate that the proposed method outperforms existing PSO-related methods on some high-dimensional problems.
The rest of this paper is structured as follows: Section 2 briefly introduces works related to PSO: the original PSO, FIPSO, and TCPSO. Section 3 constructs the proposed method using a distance-based clustering method. Section 4 reports numerical experiments using multiple benchmark problems, and finally Section 5 concludes the paper.

Related Works
In this section, the related works PSO, FIPSO, and TCPSO are briefly described.

Particle Swarm Optimization (PSO)
Let $n$ be the number of swarms (individuals). A swarm $i$, $i = 1, 2, \ldots, n$, retains a position vector at time $t$, $x_i(t) = (x_{i,1}(t), \ldots, x_{i,J}(t))$, and a velocity vector $v_i(t) = (v_{i,1}(t), \ldots, v_{i,J}(t))$, where $J$ is the number of dimensions of the target problem. Let $\underline{x}_j$ and $\overline{x}_j$ be the minimum and maximum values of the $j$-th dimensional value, respectively. Each swarm $i$ also retains its personal best $x_i^*$, the position with the highest evaluation value that the swarm has experienced from time $0$ to $t$. In a similar way, all swarms in the population share the global best $X^*$, the position with the highest evaluation value that the whole population has experienced from time $0$ to $t$. The position vector $x_i(t)$ and velocity vector $v_i(t)$ of a swarm $i$ are updated by the following equations.

$$v_i(t+1) = w\,v_i(t) + C_1 r_1 \big( x_i^* - x_i(t) \big) + C_2 r_2 \big( X^* - x_i(t) \big) \qquad (1)$$

$$x_i(t+1) = x_i(t) + v_i(t+1) \qquad (2)$$

where $C_1$ and $C_2$ are acceleration parameters, $w$ is the inertia coefficient, and $r_1, r_2 \in [0, 1]$ are random numbers redrawn before each update. From (1) and (2), the performance of PSO depends largely on the values of these parameters. Moreover, the positions of all swarms are updated with reference to the common position vector $X^*$; therefore, it is difficult to escape from a local optimum once the search process has converged near it.
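As an illustration, the update rules (1) and (2) can be sketched in Python; the parameter values ($w = 0.7$, $C_1 = C_2 = 1.5$) are illustrative choices, not values prescribed by this paper.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One synchronous PSO update following Equations (1) and (2).

    x, v   : (n, J) positions and velocities of the n swarms
    p_best : (n, J) personal bests x_i^*
    g_best : (J,)   global best X^*
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random((x.shape[0], 1))   # r_1, r_2 redrawn before each update
    r2 = rng.random((x.shape[0], 1))
    # Equation (1): inertia + personal-best term + global-best term
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new            # Equation (2)
```

In a full optimizer, the caller would re-evaluate the swarms after each step and refresh `p_best` and `g_best` whenever a better position is found.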

Fully Informed PSO (FIPSO)
Whereas all swarms in the population share the common position information $X^*$ in PSO, in FIPSO each swarm shares the personal best position information with the highest evaluation value, $x_i^*$, with the swarms located nearest to it. The nearest swarms of a swarm $i$ are called its neighbors, and $N(i)$ denotes the set of neighbors of a swarm $i$. FIPSO updates the position and velocity vectors of a swarm $i$ as

$$v_i(t+1) = \chi \left( v_i(t) + \sum_{k \in N(i)} \varphi_k \big( x_k^* - x_i(t) \big) \right), \qquad x_i(t+1) = x_i(t) + v_i(t+1) \qquad (3)$$

where $\chi$ is a learning parameter and the $\varphi_k$ are random coefficients.
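The fully informed update can be sketched as follows. The choice of $\chi = 0.7298$, the total acceleration $\varphi = 4.1$, and drawing each neighbor weight from $U(0, \varphi/|N(i)|)$ are conventional FIPS settings assumed here for illustration; the exact weighting in [5] [6] may differ.

```python
import numpy as np

def fipso_step(x, v, p_best, neighbors, chi=0.7298, phi=4.1, rng=None):
    """One fully informed update (a sketch): every neighbor's personal
    best contributes to swarm i's velocity with a random weight drawn
    from U(0, phi / |N(i)|); chi is the learning parameter."""
    rng = np.random.default_rng() if rng is None else rng
    v_new = np.empty_like(v)
    for i in range(x.shape[0]):
        nb = neighbors[i]                               # index array N(i)
        wts = rng.uniform(0, phi / len(nb), (len(nb), 1))
        v_new[i] = chi * (v[i] + (wts * (p_best[nb] - x[i])).sum(axis=0))
    return x + v_new, v_new
```

A ring topology, where `neighbors[i]` holds `i` and its two adjacent indices, is a common choice for the neighborhood structure.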

Two-Swarm Cooperative PSO (TCPSO)
TCPSO divides the population into two subgroups, the master swarms and the slave swarms. As shown in Figure 1, the master swarms are assigned to wide search and the slave swarms to intensive local search. The position vector $x_i^M(t)$ and velocity vector $v_i^M(t)$ of a swarm $i$ belonging to the master swarms are updated by

$$x_i^M(t+1) = x_i^M(t) + v_i^M(t+1) \qquad (4)$$

$$v_i^M(t+1) = w\,v_i^M(t) + C_1 r_1 \big( x_i^{M*} - x_i^M(t) \big) + C_2 r_2 \big( X^{M*} - x_i^M(t) \big) + C_3 r_3 \big( X^{S*} - x_i^M(t) \big) \qquad (5)$$

and the position vector $x_i^S(t)$ and velocity vector $v_i^S(t)$ of a swarm $i$ belonging to the slave swarms are updated by

$$x_i^S(t+1) = x_i^S(t) + v_i^S(t+1) \qquad (6)$$

$$v_i^S(t+1) = C_1 r_1 \big( x_i^{S*} - x_i^S(t) \big) + C_2 r_2 \big( X^{S*} - x_i^S(t) \big) \qquad (7)$$

In Equations (4)-(7), the superscripts $M$ and $S$ indicate the master and the slave swarms, respectively. From Equation (7), a slave swarm updates its velocity vector $v_i^S(t)$ based only on position information, not on the last velocity. This updating mechanism without an inertia term makes the search regions of the slave swarms narrower than those of the master swarms; the difference in search area between the two kinds of swarms is also due to whether the update refers to the other kind of swarms or not. On the other hand, the term including $X^{S*}$ in the velocity update of a master swarm refers to the search efforts of the slave swarms. Several experimental results using TCPSO indicate that it performs satisfactorily on many types of optimization problems; however, its performance is unsatisfactory on high-dimensional problems.
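The contrast between the two update rules can be sketched as follows; the coefficient values are illustrative assumptions, not TCPSO's published settings.

```python
import numpy as np

def tcpso_step(xm, vm, xs, vs, pm, ps, gm, gs,
               w=0.7, c1=1.5, c2=1.5, c3=1.5, rng=None):
    """One TCPSO update (a sketch).  The master swarms keep an inertia
    term and additionally track the slave swarms' best gs; the slave
    swarms drop the inertia term, which narrows their search region."""
    rng = np.random.default_rng() if rng is None else rng
    r = lambda n: rng.random((n, 1))
    nm, ns = xm.shape[0], xs.shape[0]
    # master velocity: inertia + personal best + master best + slave best
    vm = (w * vm + c1 * r(nm) * (pm - xm) + c2 * r(nm) * (gm - xm)
          + c3 * r(nm) * (gs - xm))
    xm = xm + vm
    # slave velocity: no inertia term, driven purely by position information
    vs = c1 * r(ns) * (ps - xs) + c2 * r(ns) * (gs - xs)
    xs = xs + vs
    return xm, vm, xs, vs
```

As with plain PSO, the caller re-evaluates both subgroups after each step and refreshes the personal and group bests `pm`, `ps`, `gm`, `gs`.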

Distance-Based Divided Groups and Cooperative PSO
As described in the above section, TCPSO divides the population into two groups with different migration rules. However, the master swarms cannot always keep the search area wide, because a master swarm updates its velocity with reference not only to the master swarms but also to the best solution among the slave swarms, as in Equations (4) and (5). This is why the search process of TCPSO sometimes converges on a local optimum of the target problem that is not the global optimum, depending on the features of the target problem.
This paper revises TCPSO to avoid convergence of the swarms on local optima by dividing the master swarms into multiple groups based on Euclidean distance; this is the main feature of the proposed method. Similarly, the slave swarms are divided into the same number of groups, and each group is connected to a group of the divided master swarms. The k-means clustering method is applied for this distance-based division of the swarms.

k -Means Clustering
The proposed method applies the k-means clustering method to the population.
k-means clustering is a non-hierarchical clustering algorithm in which the whole set of swarms in the population is divided into $k$ groups based on a distance measure. Let $\bar{x}_\kappa$ be the centroid of the position vectors of the swarms belonging to a cluster $\kappa$. The division of the population into clusters is revised so as to satisfy the following condition: each swarm $i$ belongs to the cluster whose centroid is nearest, i.e., $\kappa(i) = \arg\min_{\kappa} \lVert x_i - \bar{x}_\kappa \rVert$.
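A minimal k-means implementation enforcing this nearest-centroid condition might look as follows; random initialization from the data points is an illustrative choice.

```python
import numpy as np

def kmeans(x, k, iters=100, rng=None):
    """Plain k-means: alternate nearest-centroid assignment and centroid
    recomputation until the assignment stops changing."""
    rng = np.random.default_rng() if rng is None else rng
    centers = x[rng.choice(x.shape[0], k, replace=False)].copy()
    labels = np.full(x.shape[0], -1)
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        new = d.argmin(axis=1)             # nearest-centroid condition
        if np.array_equal(new, labels):
            break                          # assignment is stable
        labels = new
        for c in range(k):
            if (labels == c).any():        # leave empty clusters untouched
                centers[c] = x[labels == c].mean(axis=0)
    return labels, centers
```

On convergence, every point is assigned to the cluster whose centroid is nearest, which is exactly the condition stated above.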

Algorithms
Let $X_\kappa^{M*}$ and $X_\kappa^{S*}$ denote the position vectors with the highest evaluation values experienced from time $0$ to $t$ by the master and slave swarms in a cluster $\kappa$, respectively. This study revises the velocity-update procedure of a swarm as follows, depending on whether the swarm is a master or a slave swarm:

$$v_i^M(t+1) = w\,v_i^M(t) + C_1 r_1 \big( x_i^{M*} - x_i^M(t) \big) + C_2 r_2 \big( X_\kappa^{M*} - x_i^M(t) \big) + C_3 r_3 \big( X_\kappa^{S*} - x_i^M(t) \big) \qquad (9)$$

$$v_i^S(t+1) = C_1 r_1 \big( x_i^{S*} - x_i^S(t) \big) + C_2 r_2 \big( X_\kappa^{S*} - x_i^S(t) \big) \qquad (10)$$

where $r_1, r_2, r_3 \in [0, 1]$ are random variables as employed in Equations (2), (5), and (7). The position vectors are updated by Equations (4) and (6). Equation (9) maintains or increases the diversity of the search process of TCPSO by referring to the position vector $X_\kappa^{M*}$ with the highest evaluation value in the swarm's own cluster, and Equation (10) avoids excessive convergence.
The outline of the proposed method is briefly summarized as follows.
Step 0 Initialize the parameters $k = k_{\mathrm{temp}} = 2$.
Step 1 Generate the initial population of the master and the slave swarms.
Step 2 If $k > 1$, apply k-means clustering to the populations of the master and the slave swarms.
Step 3 Calculate the evaluation value of each swarm based on its position vector.
Step 4 Update the position and velocity vectors by Equations (4), (6), (9), and (10). If the largest evaluation value in the population exceeds a predetermined value, if it remains unchanged for a given period, or if the number of updates of the position and velocity vectors reaches a predetermined number, then terminate the search process. If a convergence condition is satisfied, go to Step 5; otherwise return to Step 3.
Step 5 Update the number of clusters $k$ (alternating as $2, 1, 3, 1, 4, 1, \ldots$) and go to Step 2.
A distinctive feature of the proposed method is that k-means clustering is applied to the populations of the master swarms and the slave swarms separately, with a changing number of clusters $k$. Note that although there exists a hybrid algorithm using k-means clustering, FAPSO-ACO-K [8], that algorithm divides all swarms in the population into clusters together. Our method has a stronger tendency to avoid convergence of the swarms on local optima, because the number of clusters $k$ changes periodically as $2, 1, 3, 1, 4, 1, \ldots$, whereas FAPSO-ACO-K uses a predetermined number of clusters. The desired effect of the proposed method is that the population repeats widespread migrations and convergences whenever $k$ changes at Step 5: the widespread migration is expected to discover other, better solutions, and the subsequent convergence of the search process provides intensive local search.
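Under several simplifying assumptions (pairing master cluster $\kappa$ with slave cluster $\kappa$ by label, illustrative hyper-parameters, a fixed re-clustering period in place of the convergence test, and minimization rather than maximization of the evaluation value), the overall procedure of Steps 0-5 can be sketched as:

```python
import numpy as np

def simple_kmeans(x, k, rng, iters=20):
    """Nearest-centroid k-means returning only the cluster labels."""
    if k == 1:
        return np.zeros(x.shape[0], dtype=int)
    centers = x[rng.choice(x.shape[0], k, replace=False)].copy()
    labels = np.full(x.shape[0], -1)
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        new = d.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
        for c in range(k):
            if (labels == c).any():
                centers[c] = x[labels == c].mean(axis=0)
    return labels

def clustered_tcpso(f, dim, n_master=20, n_slave=20, steps=300, period=50,
                    w=0.7, c=1.5, rng=None):
    """Sketch of the proposed method (minimization).  Master and slave
    swarms are re-clustered every `period` steps with k cycling through
    2, 1, 3, 1, 4, 1; velocities follow the per-cluster bests as in
    Equations (9) and (10)."""
    rng = np.random.default_rng() if rng is None else rng
    xm = rng.uniform(-5, 5, (n_master, dim)); vm = np.zeros_like(xm)
    xs = rng.uniform(-5, 5, (n_slave, dim)); vs = np.zeros_like(xs)
    pm, ps = xm.copy(), xs.copy()          # personal bests
    fm, fs = f(xm), f(xs)                  # personal-best values
    k_cycle = [2, 1, 3, 1, 4, 1]
    ki = 0
    lm = np.zeros(n_master, dtype=int); ls = np.zeros(n_slave, dtype=int)
    r = lambda n: rng.random((n, 1))
    for t in range(steps):
        if t % period == 0:                # Step 5: change k, re-cluster
            k = k_cycle[ki % len(k_cycle)]; ki += 1
            lm = simple_kmeans(xm, k, rng)
            ls = simple_kmeans(xs, k, rng)
        for kap in range(max(lm.max(), ls.max()) + 1):
            Mi, Si = lm == kap, ls == kap
            if not Mi.any() or not Si.any():
                continue                   # unpaired cluster: skip this step
            gM = pm[Mi][fm[Mi].argmin()]   # best master in cluster kappa
            gS = ps[Si][fs[Si].argmin()]   # best slave in cluster kappa
            nM, nS = Mi.sum(), Si.sum()
            # Equation (9): master keeps inertia, follows both cluster bests
            vm[Mi] = (w * vm[Mi] + c * r(nM) * (pm[Mi] - xm[Mi])
                      + c * r(nM) * (gM - xm[Mi]) + c * r(nM) * (gS - xm[Mi]))
            # Equation (10): slave drops the inertia term
            vs[Si] = c * r(nS) * (ps[Si] - xs[Si]) + c * r(nS) * (gS - xs[Si])
        xm += vm; xs += vs                 # Equations (4) and (6)
        for x_, p_, fv in ((xm, pm, fm), (xs, ps, fs)):
            val = f(x_)
            imp = val < fv                 # update personal bests
            p_[imp] = x_[imp]; fv[imp] = val[imp]
    return min(fm.min(), fs.min())
```

The periodic return to $k = 1$ merges all clusters back into plain two-swarm search, while each increase of $k$ forces the groups apart again, which is the widespread-migration effect described above.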

Numerical Experiments
In this section, numerical experiments using several kinds of nonlinear functions are conducted.
The termination conditions are that the function value $f(x)$ becomes less than $1.0 \times 10^{-10}$, or that the largest evaluation value does not change for a predetermined number of iterations. The values of the parameters are set as in Table 1.
The numerical experiments conducted in this study use 11 kinds of functions, shown in Table 2 and Table 3.
The aim of these optimization problems is to find the solution vector that optimizes each target function. Ten discrete trial runs are executed for each problem, changing the number of dimensions as $D = 2, 10, 30, 50$, and $100$. The error per number of dimensions $D$ and the termination time per $D$ are shown in Table 4 and Table 5 as experimental results; Figures 2-5 summarize these results.
From the experimental results shown in Table 4, Figure 2, and Figure 3, the proposed method performs better than or approximately as well as TCPSO on the optimization of unimodal functions. In other words, the proposed method is more helpful than the comparative approach, TCPSO.
The Rosenbrock function is unimodal, but the gradient around the optimal solution is very small, and it is very difficult for heuristic approaches to find the optimal solution of such problems.
As shown in Table 5, Figure 4, and Figure 5, the proposed method also outperforms TCPSO on almost all types of multi-modal functions.
However, for the function "Generalized penalized 2" with a large number of dimensions, the error of the proposed method is larger than that of TCPSO. This function is nearly a discrete-type function, as shown in Figure 15.
From the experimental results, the proposed method is very effective for many types of optimization problems; however, it should be revised to improve its performance on high-dimensional discrete-type functions such as "Generalized penalized 2".

Conclusions
This paper proposed a particle swarm optimization (PSO) procedure that is constructed based on two-swarm cooperative PSO (TCPSO) [9] and includes a distance-based clustering procedure. The main idea of the proposed method is to divide the whole set of swarms into multiple subgroups by k-means clustering, with the division executed several times and the value of $k$ changed periodically during the search process. This mechanism maintains both the diversity and the centralization of the search, and resolves the optimization problems of several kinds of functions. This paper conducted numerical experiments using benchmark problems of unimodal and multi-modal functions, and the experimental results indicate that the proposed method succeeds in discovering better solutions to some problems than the existing method (TCPSO).

The functions are classified as unimodal or multi-modal, and $D$ indicates the number of dimensions. In the Appendix, some of the functions with $D = 2$ are shown in Figures 6-15 as examples.

As shown in Table 4, Figure 2, and Figure 3, only in the case of the high-dimensional Rosenbrock function ($D = 100$) is the proposed method clearly outperformed by TCPSO; the reason should be investigated and the performance improved, for example, by revising the condition for changing $k$.