Optimization Scheme Based on Differential Equation Model for Animal Swarming

Abstract

This paper is devoted to introducing an optimization algorithm which is devised on a basis of ordinary differential equation model describing the process of animal swarming. By several numerical simulations, the nature of the optimization algorithm is clarified. Especially, if parameters included in the algorithm are suitably set, our scheme can show very good performance even in higher dimensional problems.

Share and Cite:

Uchitane, T. and Yagi, A. (2013) Optimization Scheme Based on Differential Equation Model for Animal Swarming. Open Journal of Optimization, 2, 45-51. doi: 10.4236/ojop.2013.22007.

1. Introduction

Let be a real valued continuous function defined for, where is a positive integer. The problem of finding the minimal value

is called a D-dimensional optimization problem for. Such an optimization problem is one of fundamental problems in the study of science and technology and so far various methods have already been presented.

When is a function, the point which hits the minimal value is obtained in the set of solutions of the functional equation. But, in general, it is not so easy to solve this functional equation. So, approximate solutions become more important. The steepest descent method is a method of obtaining a sequence in which approaches to by using the recurrence formula

with some suitable coefficient. This method is very convenient and it is easily observed that converges to a limit such that. But may hit very often some local minimal value of and not the very minimal value. So, when possesses many local minimal values, it is very difficult to find out the point by this method.

Recently, a development of techniques of numerical computations has yielded a new paradigm of optimization using a collection of particles in which interact each other and move for striking out the point. Such a method is called the particle optimization. One of the typical particle optimization was devised by KennedyEberhart [1]. They consider a swarm of particles not only flying in like bird but also behaving as an intellectual individual which can memorize its personal best through the whole past and, on the other hand, can know the swarm’s global best at each instant. In each step, processing such personal and swarm information, they move to a suitable position. The phase space in which the particles move about is therefore an abstract multi-dimensional space which is collision-free. They called their method the particle swarm optimization. Afterword, the particle swarm optimization has been developed extensively and applied to various problems. We will here quote only some of them [2-4]. A survey of the method has recently been published by Eslami-Shareef-Khajehzadeh-Mohamed [5].

In this paper, we want to compose another type particle optimization which is inspired more directly from the animal swarming like fish schooling, bird flocking, or mammal herding. Our particles move truly according to the animal’s behavioral rules for forming swarm. We first set a D-dimensional physical space in which the particles move about. We then assume among individuals two kinds of interactions, attraction and collision avoidance. These interactions will be formulated by generalized gravitation laws. Regarding as an environmental potential, we assume also that the particles are sensitive to the gradient of at their positions and have tendency to move toward the most descending direction of the value of. But they cannot know which mate has the global best position of the swarm at any momemt. We will also incorporate uncertainty of their information processing and executing. The phase space of particles is therefore given by, here means that at that moment the i-th particle is at position xi with velocity vi. As for animal’s behavioral rules, we are going to follow those presented by Camazine-Deneubourg-Franks-Sneyd-Theraulaz-Bonabeau ([6], Chapter 11). That is:

1) The swarm has no leaders and each individual follows the same behavioral rules;

2) To decide where to move, each individual uses some form of weighted average of the position and orientation of its nearest neighbors;

3) There is a degree of uncertainty in the individual’s behavior that reflects both the imperfect informationgathering ability of an animal and the imperfect execution of its actions.

(We refer also a similar idea due to Reynolds [7].) The authors have in facttried to model in the previous paper [8] these mechanisms by stochastic differential Equations (2.1) and (2.2) below.

Our optimization scheme is actually composed on the basis of the continuous model Equations (2.1), (2.2) but just ignoring velocity matching (i.e., taking) and setting the external force as the sum of resistance and gradient of the potential function to be optimized. At each step, the particles have a velocity determined by the sum of centering with nearby mates and acceleration by the external force. Its position is renewed by the sum of the velocity and a noise which reflects the imperfectness of information-gathering and execution of actions. At the first stage, the particles move striking out the global minimal value on one hand keeping a swarm and on the other hand keeping a territorial distance with other mates. The noise helps the swarm to escape from the traps of local minimal values and to reach into a neighborhood of the global one. At the second stage, the movement of particles slows down. The particles go to an equilibrium state in which some particle attains at the swarm’s best value.

In order to investigate swarm behavior of particles, we shall apply our optimization scheme to a few benchmark problems. When has many local minimal values, the territorial distance must be chosen in a suitable length to find a good approximate solution. Optimal strength of noise is also required. If it is too small, the swarm is easily trapped by a local minimum; to the contrary, if too large, the particles cannot keep on swarming because of the strong dispersion. If these are suitably set, then our scheme can show very high performance even in 12- dimensional problems.

2. Continuous Model

We being with reviewing the continuous model presented by [8].

We consider motion of I animals in the physical space. The position of i-th animal is denoted by , and its velocity by. The model equations are then written as

(2.1)

(2.2)

where and. The first Equation (2.1) is a stochastic equation for xi, here

denotes a system of independent 3-dimensional Brownian motions defined on a complete probability space with filtration (see [9]). The term therefore denotes a noise resulting from the imperfectness of information-gathering and action of the i-th animal, being some coefficient. In the meantime, (2.2) is a deterministic equation for vi. The first term in the right hand side denotes the centering and the collision avoidance of animal. The animals have tendency to stay nearby their mates and at the same time avoid colliding each other. As p and q are such that, if, then the -th animal moves toward the j-th; to the contrary, if, then it acts in order to avoid collision with the other. The number therefore denotes a critical distance. If p is large, then, as the distance increases, its power decreases quickly. Hence, the larger p is, the shorter the range of centering is. The second term of (2.2) denotes the effect of velocity matching with nearby mates. Finally, the term denotes an external force imposed to the i-th animal at time t which is a given function for xi and vi. In the subsequent section, we take a function of the form

(2.3)

where denotes a resistance for motion and denotes an external force determined by a potential function.

3. Optimization Scheme

Let be a real valued function defined for with positive integer. Consider the optimization problem

(3.1)

On the basis of (2.1), (2.2), we introduce an optimization algorithm for (3.1). Let, denote the positions of I particles moving about in the space, and let, denote their velocities. We ignore velocity matching of particles, namely, we set, and take the function in (2.2) in the form (2.3) above. Our scheme is then described by

(3.2)

(3.3)

where. In addition,

, is a family of independent stochastic functions defined on a complete probability space with values in whose distributions are a normal distribution with mean 0 and variance. And is given by

.

Set initial positions and initial velocities

respectively. Then, the algorithm (3.2), (3.3) defines a discrete trajectory

where. In each step n, we compute the minimal value and memorize its value together with the point hitting it, i.e.,. Repeating the iteration N-times,

is an approximation value of (3.1) and

such that is an approximation solution of our scheme.

As is one of members of swarm having interactions one and another, the approximate solution may not satisfy the condition. This means that there is a point in a neighborhood of which hits a local minimum of, i.e.,. In this case, which can easily be obtained by classical methods (e.g., the steepest descent method) gives a better approximate solution of (3.1) than.

4. Numerical Experiments

We show some numerical experiments to expose the particle’s behavior of our scheme. It is expected that the particles crowd around a point giving the global minimum of or are dispersed into a number of neighborhoods of points giving local minimums. The behavior may change heavily depending on the choice of parameters and.

We here use three well known benchmark problems, namely, Problem (3.1) with Sphere function, Rastrigin function and Rosenbrock function. Problem (3.1) with the Sphere function

(4.1)

is the simplest problem. The optimal point is obviously given by. The following function

(4.2)

is called the Rastrigin function. Its optimal point is also given by with the global minimum

. It is easily seen that this function possesses many local minimums; indeed, at every lattice point, has a local minimum. Therefore, Problem (3.1) with function (4.2) is very difficult to treat especially in a high dimension. Finally, the Rosenbrock function is formulated by

(4.3)

This function takes its global minimum 0 at the lattice point

As shown below, behavior of the particles depends on the model parameters. We set and

. The step size is fixed by

and the total step number is. As for D, we consider 2-dimensional and 12-dimensional problems.

When, the initial position is taken in, , where no optional point exists in, moreover, is far away from the optimal point. In addition, for 2-dimensional Rosenbrock problem, we will take, too, because the Rosenbrock function defined by (4.3) is not symmetric with respect to the transformation.

When, the initial position is taken randomly in, but it is very difficult to find the optimal point in a higher dimensional search space.

4.1. Sphere Function

Figure 1 shows the numerical results for Sphere function. The parameters are set as,;;, and. For, the computed result is given by Figure 1(a); similarly for, by Figure 1(b).

In the first stage, decreases rapidly, which means that all the particles strike out for the optimal point 0 in group influenced form the force due to the gradient of. In the second stage, since the interaction among particles becomes dominant rather than the potential force, decreasing of the value becomes slow. This means that the particles make local searches in a neighborhood of the global optimum. But, the interaction may prevent the optimal particle from attaining exactly the global minimum. So, our method makes it possible to search both wide range and small range by using the same scheme. On the other hand, it is an important problem to find a better combination of parameters in order to have a suitable balance between the global search and the local search.

4.2. Rastrigin Function

Figure 2 shows the numerical results for the Rastrigin

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Kennedy and R. Eberhart, “Particle Swarm Optimization,” Proceedings of IEEE International Conference Neural Networks, Perth, 27 November-1 December 1995, 1942-1948.
[2] D. Bratton and J. Kennedy, “Defining a Standard for Particle Swarm Optimization,” IEEE Swarm Intelligence Symposium, Honolulu, 1-5 April 2007, pp. 120-127.
[3] H. Liu, A. Abraham and W. Zhang, “A Fuzzy Adaptive Turbulent Particle Swarm Optimization,” International Journal of Computer Applications, Vol. 1, No. 1, 2007, pp. 39-47. doi:10.1504/IJICA.2007.013400
[4] S. He, Q. Wu, J. Wen, J. Sanuders and R. Paton, “A Particle Swarm Optimizer with Passive Congregation,” Biosystems, Vol. 78, 2004, pp. 135-147. doi:10.1016/j.biosystems.2004.08.003
[5] M. Eslami, H. Shareef, M. Khajehzadeh and A. Mohamed, “A Survey of the State of the Art in Particle Swarm Optimization,” Research Journal of Applied Science, Engineering and Technology, Vol. 4, 2012, pp. 1181-1197.
[6] J. S. Camazine, J. L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz and E. Bonabeau, “Self-Organization in Biological Systems,” Princeton University Press, Princeton, 2001.
[7] C. W. Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model,” Computer Graphics, Vol. 21, No. 4, 1987, pp. 25-34. doi:10.1145/37402.37406
[8] T. Uchitane, T. V. Ton and A. Yagi, “An Ordinary Differential Equation Model for Fish Schooling,” Scientiae Mathematicae Japonicae, Vol. 75, 2012, pp. 339-350.
[9] B. Oksendal, “Stochastic Differential Equations,” Springer, Berlin, 2007.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.