Particle Swarm Optimization Algorithm Based on Chaotic Sequences and Dynamic Self-Adaptive Strategy

To address premature convergence and the tendency to become trapped in local optima in the traditional particle swarm optimization algorithm, a novel improved particle swarm optimization algorithm was proposed. A self-adaptive inertia weight factor was used to accelerate convergence, and chaotic sequences were used to tune the acceleration coefficients to balance exploration and exploitation. The performance of the proposed algorithm was tested on four classical multi-objective optimization functions by comparison with the non-dominated sorting genetic algorithm and the multi-objective particle swarm optimization algorithm. The results verified the effectiveness of the algorithm: it mitigates premature convergence, converges faster, and has a strong ability to escape local optima.


Introduction
The particle swarm optimization (PSO) algorithm is a swarm intelligence optimization algorithm derived from the aggregation behavior of organisms, such as a simulation of the flocking of birds or the schooling of fish. Compared with other intelligent algorithms, the PSO algorithm has a simple structure, few parameters, and is easy to implement, and it has been widely applied in fields such as pattern recognition. In particular, it is applicable to nonlinear, multimodal, non-differentiable, and complex optimization problems [1] [2] [3].
However, like other intelligent algorithms, the standard PSO algorithm has shortcomings such as premature convergence and poor local search ability [4] [5] [6]. For example, when optimizing complex high-dimensional problems, the population may accumulate around a certain point and stagnate without finding the global optimum, producing premature convergence. In other words, premature convergence means the algorithm is not guaranteed to converge to the global extreme point. Meanwhile, in the PSO search process, the convergence speed slows noticeably when a particle approaches or enters the most promising region; that is, the search ability is poor in the later period of the optimization. These weaknesses restrict the application of the PSO algorithm.
To remedy these shortcomings, researchers have proposed many improvement strategies [7]. The most representative are the inertia weight factor [8], the contraction factor, and adaptive mutation operators, including the linear decrease method [9], the fuzzy adaptive method [10], the distance-information method and other adaptive inertia-coefficient adjustments [11], PSO with a compression factor, and PSO with an adaptive mutation operator. In addition, hybrid PSO algorithms that combine PSO with cooperative strategies, chaos theory [12], and other algorithms [13] have attracted researchers' attention, such as the quantum PSO algorithm with a chaotic mutation operator [14]. There is also much research on the discrete PSO algorithm, the multi-objective PSO algorithm [15] [16], and so on. At present, improvements to the PSO algorithm focus mainly on two aspects: adjusting the algorithm's parameters and updating the particle structure and trajectory. The aim is to alleviate slow local search and premature convergence, and to improve the convergence speed and accuracy of the algorithm [17]. Although the proposed improvements raise both performance and efficiency, it remains difficult to strengthen local search while avoiding premature convergence. To obtain better, more efficient, and cheaper particle swarm algorithms, academic and industrial researchers continue to explore and experiment with new approaches [18].
To improve the search precision and convergence speed of the standard PSO algorithm, this paper combines chaos theory with a dynamic adaptive weight adjustment strategy and proposes a new chaos self-adaptive particle swarm optimization algorithm (CSAPSO). The algorithm improves convergence speed by evolving the inertia weight through an adaptive adjustment strategy, and it uses chaotic sequences from chaos theory to tune the learning factors so that the swarm can escape local optima when premature convergence occurs. Finally, through experiments on multi-objective optimization problems, the CSAPSO algorithm is compared with the standard PSO algorithm and classical multi-objective algorithms; the feasibility and validity of the algorithm are verified, and its convergence speed and accuracy are discussed.

Standard PSO
The particle swarm optimization (PSO) algorithm is a group evolution algorithm proposed by Eberhart and Kennedy [19] in 1995, based on the social behavior of bird flocks. The PSO algorithm is derived from the behavioral characteristics of biological groups and is used to solve optimization problems. It has the advantages of easy description, easy implementation, few tuning parameters, fast convergence, and low computational cost, and it makes no heavy demands on memory or CPU speed. In the optimization process, each potential solution of the problem is treated as a "particle" in an n-dimensional space, and the particle flies through the solution space at a certain velocity and direction. During the iterations, each particle tracks two extrema: the best position found by the particle itself (pbest) and the best position found by all particles (gbest). Assume that, in an n-dimensional search space, the population consists of m particles. The position of particle i is denoted X_i = (x_{i,1}, x_{i,2}, ..., x_{i,n}) and its velocity V_i = (v_{i,1}, v_{i,2}, ..., v_{i,n}). During the (k+1)-th iteration, the particle updates its velocity and position through Formulas (1) and (2):

v_{i,d}^{k+1} = ω v_{i,d}^{k} + c_1 rand() (p_{i,d}^{k} − x_{i,d}^{k}) + c_2 rand() (p_{g,d}^{k} − x_{i,d}^{k})    (1)

x_{i,d}^{k+1} = x_{i,d}^{k} + v_{i,d}^{k+1}    (2)

where v_{i,d}^{k} and x_{i,d}^{k} are respectively the velocity and position of particle i in dimension d at the k-th iteration; p_{i,d}^{k} is the position of the individual extremum of particle i in dimension d; p_{g,d}^{k} is the position of the global extremum of the whole population in dimension d; ω is the inertia weight factor, which keeps the particles in motion and gives them the ability to expand the search space; c_1 and c_2 are the learning factors, which weight each particle's acceleration toward the two extremum positions; and rand() is a random number in (0, 1).
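The update rule of Formulas (1) and (2) can be sketched as follows for a single particle; the parameter values shown are illustrative defaults, not settings taken from the paper.

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO update step for a single particle (Formulas (1) and (2)).

    x, v      : current position and velocity (lists of floats)
    pbest     : best position found so far by this particle
    gbest     : best position found so far by the whole swarm
    w, c1, c2 : inertia weight and learning factors (illustrative values)
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])
              + c2 * random.random() * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)  # Formula (2): position follows velocity
    return new_x, new_v
```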

CSAPSO Algorithm
The standard PSO algorithm has its own limitations; for example, its behavior depends strongly on the values of its parameters. When applied to complex, high-dimensional optimization problems, the algorithm tends to converge to some extreme point and stagnate before the global optimum is found; that is, premature convergence occurs easily. Such a point may be a local extreme point or lie in the region of one. In addition, the convergence rate slows when the swarm approaches or enters the region of the optimal solution: PSO converges quickly in the early stage, but in the later stage, once the algorithm approaches a local minimum, the lack of an effective local search mechanism makes local search slow. According to the velocity update formula of the PSO algorithm, the change of particle velocity is determined by three factors: 1) The inertia weight factor, which carries the velocity information of the previous moment and describes the relationship between the current and previous velocities. 2) The cognitive factor, the exploitation coefficient, which weights the error relative to the particle's own optimum and reflects the particle's local mining and exploitation ability. 3) The exploration factor, the social-sharing coefficient, which weights the error relative to the global optimum and reflects the information sharing and cooperation among particles. The inertia coefficient determines the search step length: when it is larger it favors global search, and when it is smaller it helps local exploration. The cognitive and exploration factors are collectively called learning factors and set the relative influence of the particle's own optimum and the global optimum. By adjusting the learning factors properly, the global and local search of the particles can be balanced; when the algorithm falls into premature convergence, changing the exploration factor makes it possible to escape the local optimum.
To mitigate premature convergence and improve convergence speed, this paper uses an adaptive weight adjustment strategy to realize dynamic adjustment of the inertia coefficient, and a chaotic sequence generated by a chaotic map to tune the learning factors c_1 and c_2, yielding a chaos self-adaptive particle swarm optimization algorithm (CSAPSO). The inertia weight factor ω is adjusted by Formula (3),
where ω_max and ω_min respectively denote the maximum and minimum values of the inertia weight; Pgbest(k) denotes the global optimum at the k-th iteration; Plbest_ave denotes the average of the local optima of all particles; k_max is the maximum number of iterations; and k is the current iteration count.
The learning factors c_1 and c_2 are adjusted by a chaotic sequence generated by a chaotic map. This paper uses the typical Lorenz equations to generate the chaotic sequence, as shown in Formula (4),
where the parameters a, b, and c are control parameters, set to 0.2, 0.4, and 5.7 respectively. The learning factors (c_1, c_2) are then defined from the chaotic sequence by Formula (5). Because the change of chaotic variables is random, ergodic, and yet regular, the algorithm can maintain the diversity of the population, effectively overcome premature convergence, and improve global search performance.
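Since Formulas (4) and (5) are not reproduced in this text, the following sketch substitutes a logistic map for the paper's chaotic system; the mapping of the chaotic variable onto the learning-factor range, and the bounds c_min and c_max, are assumptions for illustration only.

```python
def logistic_map(z, mu=4.0):
    """One step of the logistic map, a common stand-in chaotic generator.

    The paper generates its sequence with Formula (4); the map used here
    is only a substitute with the same qualitative (ergodic) behavior.
    """
    return mu * z * (1.0 - z)

def chaotic_learning_factors(z, c_min=0.5, c_max=2.5):
    """Map chaotic variables in (0, 1) onto a learning-factor range.

    c_min and c_max are illustrative bounds, not taken from the paper.
    Returns (c1, c2, z_next) so the sequence can continue next iteration.
    """
    z1 = logistic_map(z)
    z2 = logistic_map(z1)
    c1 = c_min + (c_max - c_min) * z1
    c2 = c_min + (c_max - c_min) * z2
    return c1, c2, z2
```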
The CSAPSO algorithm performs the following steps:
1) Initialize the particle swarm. The positions and velocities of the particles are generated randomly. The current position of each particle is taken as its individual extremum, and the best of the individual extrema is selected as the global optimum.
2) Calculate the fitness value of each particle in the swarm.
3) Compare each particle's fitness value with the fitness of the best position it has visited; if it is better, take the current position as that particle's best position.
4) Compare each particle's fitness value with the fitness of the global best position; if it is better, take the current position as the global best position.
5) Compute the learning factors c_1, c_2 and the inertia weight ω, and update the velocity and position of the particles.
6) If the termination condition of the algorithm is satisfied, the global best position is the optimal solution; save the result and stop. Otherwise return to Step 2.

Experiment Function and Evaluation
To test the performance of the CSAPSO algorithm, this paper selects the multi-objective optimization test functions proposed by Schaffer [20] and Deb [21] as experimental cases. Multi-objective optimization is among the most typical optimization problems: because of constant contradiction and constraint among the objectives, they can seldom all be optimal at the same time, and improving one objective must come at the expense of the others. The solution to such a problem is therefore usually not unique but a series of optimal solutions, also called non-inferior solutions. The collection of non-inferior solutions is referred to as the Pareto optimal set. Because intelligent algorithms can search multiple solutions of the solution space in parallel, multi-objective optimization is well suited to verifying the performance of an intelligent algorithm. The multi-objective optimization test functions used in this paper are shown in Table 1.
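For reference, the following gives the standard literature definitions of two of the test functions, SCH1 and ZDT3; the paper's Table 1 is not reproduced here and may use slightly different variants.

```python
import math

def sch1(x):
    """Schaffer's SCH1 (standard literature form): minimize (x^2, (x - 2)^2).

    Both objectives are minimized over a scalar x; the Pareto-optimal set
    is the interval 0 <= x <= 2.
    """
    return x * x, (x - 2.0) ** 2

def zdt3(x):
    """ZDT3 (standard literature form): x is a vector with components in [0, 1].

    The Pareto front is discontinuous, which is why the paper highlights
    ZDT3 as the hardest of the four cases.
    """
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    h = 1.0 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10.0 * math.pi * f1)
    return f1, g * h
```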
To evaluate the merits of the non-inferior solutions, this paper adopts a convergence index and a distribution index to evaluate the performance of the algorithm, defined as follows [2] [3]:
1) Convergence index (GD). GD describes the distance between the non-dominated solutions found by the algorithm and the true Pareto-optimal front:

GD = sqrt( Σ_{i=1}^{N} d_i^2 ) / N

where N is the number of non-dominated solutions found by the algorithm, and d_i is the shortest Euclidean distance between non-inferior solution i and the solutions on the true Pareto-optimal front.
2) Distribution index (SP). SP evaluates the uniformity of the distribution of the non-dominated solutions:

SP = sqrt( (1/(N−1)) Σ_{i=1}^{N} (d̄ − d_i)^2 )

where N is the number of non-dominated solutions, d_i is the shortest distance in the objective space between the i-th non-inferior solution and the other solutions in the obtained set, and d̄ is the mean of the d_i.
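Under the definitions above, the two indexes can be computed as follows; SP is implemented in the usual Schott form, with d_i measured against the other obtained solutions.

```python
import math

def _dist(a, b):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def gd(front, true_front):
    """Convergence index: GD = sqrt(sum of d_i^2) / N, where d_i is the
    shortest distance from obtained solution i to the true Pareto front."""
    n = len(front)
    s = sum(min(_dist(p, q) for q in true_front) ** 2 for p in front)
    return math.sqrt(s) / n

def sp(front):
    """Distribution index (Schott's spacing): d_i is the shortest distance
    from solution i to the other obtained solutions; SP is the sample
    standard deviation of the d_i (0 means a perfectly even spread)."""
    n = len(front)
    d = [min(_dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    mean = sum(d) / n
    return math.sqrt(sum((mean - di) ** 2 for di in d) / (n - 1))
```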

Experimental Results
The CSAPSO algorithm was applied to SCH1, SCH2, ZDT2, and ZDT3. The algorithm parameters were set as follows: the population size is 50; the maximum number of iterations is 100; the maximum and minimum inertia weights are 0.9 and 0.3 respectively; and the inertia weight ω and the learning factors c_1 and c_2 are obtained from Formulas (3) and (5) respectively. The Pareto non-inferior solutions of each function are shown in Figures 1-4. In the objective function space, the non-inferior optimal objective domain is the boundary of the fitness-value region, i.e., the efficient front. The experimental results show that for all four test functions the efficient front is located accurately and the complete set of Pareto non-inferior solutions is obtained.
In particular, for the discontinuous problem ZDT3 the algorithm also gives accurate non-inferior solutions. In general, the algorithm obtains a larger number of Pareto solutions with a more uniform distribution, which verifies the accuracy and reliability of the CSAPSO algorithm.
The CSAPSO algorithm was run 30 times for each test function, and the means of the convergence index GD, the distribution index SP, and the computation time CT were calculated for the four test functions; the results are shown in Table 2. The evaluation indexes GD, SP, and CT confirm the feasibility, accuracy, and efficiency of the CSAPSO algorithm for solving multi-objective optimization problems: GD indicates that the non-inferior solutions are very close to the true Pareto-optimal front, SP shows that the non-inferior solutions are well distributed, and CT shows that the running time is within acceptable limits.
To test the superiority of the algorithm for multi-objective optimization, the CSAPSO algorithm is compared with the classical non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective particle swarm optimization algorithm (MOPSO); the statistical comparison results are shown in Table 3.
According to the GD values in Table 3, the convergence of the CSAPSO algorithm is better than that of the other two algorithms: the distance between its non-inferior solutions and the true Pareto-optimal front is smaller, i.e., its solutions are closer to the true solutions. The SP values in Table 3 show that the distribution of the non-inferior solutions obtained by the CSAPSO algorithm is better, i.e., more uniform than that of the other two algorithms. For CT, the execution time of the CSAPSO algorithm lies between the two: lower than NSGA-II but higher than MOPSO. The reason is that the standard PSO algorithm searches with equal step lengths in a single flight direction, whereas the CSAPSO algorithm adjusts its parameters dynamically during the particles' flight, and this dynamic adjustment consumes extra time. Although the CSAPSO algorithm spends more computation time, the convergence and distribution of its non-inferior solutions are better than those of the other two algorithms, and it obtains more, and more evenly distributed, feasible solutions.
In conclusion, the numerical experiments on four multi-objective optimization problems show that, compared with the classical multi-objective NSGA-II and MOPSO algorithms, the CSAPSO algorithm has better overall performance: the dynamic adaptive mechanism improves the convergence speed, and the chaotic learning mechanism alleviates premature convergence.

Conclusions
This paper presents a chaos self-adaptive particle swarm optimization algorithm (CSAPSO). The algorithm uses chaos theory and a dynamic adaptive adjustment strategy to optimize the parameters of the PSO algorithm, alleviating premature convergence and improving the convergence speed.
Experiments on four standard test functions show that the proposed algorithm can solve multi-objective problems, and that the obtained non-inferior solutions approximate the Pareto-optimal solution set well and are evenly distributed. Comparison with other algorithms shows that the CSAPSO algorithm performs better and can provide a practical reference for many optimization problems in engineering. In the future, the convergence strategy and a mathematical proof of convergence for the PSO algorithm merit in-depth study.


Table 1. Test functions used in this paper.

Table 3. Comparison results of three algorithms.