Hybrid Whale Optimization Algorithm with Modified Conjugate Gradient Method to Solve Global Optimization Problems

Abstract

The Whale Optimization Algorithm (WOA) is a recent meta-heuristic algorithm that simulates the behavior of humpback whales as they search for food and migrate. In this paper, a modified conjugate gradient algorithm is proposed by deriving a new conjugate coefficient, and the sufficient descent and global convergence properties of the proposed algorithm are proved. A novel hybrid algorithm is then proposed that combines the Whale Optimization Algorithm (WOA) with the modified conjugate gradient algorithm: the randomly generated initial population of WOA is improved using the characteristics of the modified conjugate gradient algorithm. The efficiency of the hybrid algorithm was measured by applying it to 10 high-dimensional benchmark optimization functions with different dimensions, and its results were very good in comparison with the original algorithm.

Share and Cite:

Khaleel, L. and Mitras, B. (2020) Hybrid Whale Optimization Algorithm with Modified Conjugate Gradient Method to Solve Global Optimization Problems. Open Access Library Journal, 7, 1-18. doi: 10.4236/oalib.1106459.

1. Introduction

Optimization can be defined as the branch of knowledge concerned with discovering, or arriving at, the optimal solution to a given problem from a set of alternatives.

Methods for solving optimization problems are divided into two classes of algorithms: deterministic algorithms and stochastic algorithms.

Most classical algorithms are deterministic. For example, the Simplex method in linear programming is a deterministic algorithm. Some deterministic algorithms use gradient information and are therefore called gradient-based algorithms; the Newton-Raphson algorithm, for example, is based on the slope (derivative) of the objective function [1].

Stochastic algorithms, in turn, comprise two closely related types: heuristic algorithms and meta-heuristic algorithms [2].

The Whale Optimization Algorithm (WOA) is an algorithm inspired by the search and hunting behavior of humpback whales; it was first proposed by Mirjalili and Lewis (2016) [3].

In the same year, WOA was improved by Trivedi et al., who incorporated an adaptive technique into the algorithm (adaptive WOA) [4].

In the same year, Touma studied the economic dispatch problem on the IEEE 30-bus system using the whale optimization algorithm, which gave good results compared with other algorithms [5].

In 2017, Hu et al. proposed an improved whale optimization algorithm that adds inertia weights (WOAs). The new algorithm was tested on 27 functions and applied to predicting the daily air quality index, and it showed good efficiency compared with other algorithms [6].

In the same year, Prakash and Lakshminarayana used the algorithm to determine the optimal locations and sizes of capacitors in a radial distribution network in order to reduce line losses, since placing capacitors at optimal locations improves system performance, stability and reliability [7].

In the same year, Desuky used the whale optimization algorithm to improve two-level classification of male fertility. Diseases and health problems that used to be common only among the elderly have recently become common among young people, and some of their causes are behavioral, environmental and lifestyle factors. The whale optimization algorithm was combined with the Pegasos algorithm to enhance the male fertility classification at both levels, and this integration improved the results by 90% [8].

In the same year, Reddy et al. also used the whale optimization algorithm for the optimal placement of renewable resources to reduce losses in electricity distribution systems [9].

In the same year, Mafarja and Mirjalili hybridized the whale optimization algorithm with the simulated annealing algorithm and applied it to feature selection for classification. The results confirmed the efficiency of the hybrid algorithm in improving classification accuracy [10].

The aim of this research is to propose a new hybrid algorithm that combines the whale optimization algorithm (WOA) with a modified conjugate gradient method (WOA-MCG). Table 1 defines the variables used in this study.

2. Conjugate Gradient Method

In unconstrained optimization, we minimize an objective function that depends on real variables, with no restrictions on the values of these variables. The unconstrained optimization problem is:

$\min \{ f(x) : x \in \mathbb{R}^n \}$, (1)

where $f: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function, bounded from below. A nonlinear conjugate gradient method generates a sequence of iterates $\{x_k\}$, $k \ge 0$. Starting from an initial point $x_0$, the iterate $x_{k+1}$ is calculated by the following equation:

$x_{k+1} = x_k + \lambda_k d_k$, (2)

where the positive step size $\lambda_k > 0$ is obtained by a line search, and the directions $d_k$ are generated as:

$d_{k+1} = -g_{k+1} + \beta_k d_k$, (3)

where $d_0 = -g_0$ and the scalar $\beta_k$, known as the conjugate gradient parameter, is determined according to the particular conjugate gradient (CG) method. Throughout, $s_k = x_{k+1} - x_k$, $g_k = \nabla f(x_k)$, $y_k = g_{k+1} - g_k$, and $\|\cdot\|$ denotes the Euclidean norm. The termination conditions for the conjugate gradient line search are often based on some version of the Wolfe conditions. The standard Wolfe conditions are:

$f(x_k + \lambda_k d_k) - f(x_k) \le \rho \lambda_k g_k^T d_k$, (4)

$g(x_k + \lambda_k d_k)^T d_k \ge \sigma g_k^T d_k$, (5)

Table 1. Definitions of the variables used.

where $d_k$ is a descent search direction and $0 < \rho \le \sigma < 1$. The parameter $\beta_k$ is defined by one of the following classical formulas:

$\beta_k^{(HS)} = \dfrac{y_k^T g_{k+1}}{y_k^T d_k}; \quad \beta_k^{(FR)} = \dfrac{g_{k+1}^T g_{k+1}}{g_k^T g_k}; \quad \beta_k^{(PRP)} = \dfrac{y_k^T g_{k+1}}{g_k^T g_k}$ (6)

$\beta_k^{(CD)} = -\dfrac{g_{k+1}^T g_{k+1}}{g_k^T d_k}; \quad \beta_k^{(LS)} = -\dfrac{y_k^T g_{k+1}}{g_k^T d_k}; \quad \beta_k^{(DY)} = \dfrac{g_{k+1}^T g_{k+1}}{y_k^T s_k}$ (7)

Al-Bayati and Al-Assady (1986) proposed three forms for the scalar $\beta_k$, defined by:

$\beta_k^{AB1} = \dfrac{\|y_k\|^2}{\|g_k\|^2}; \quad \beta_k^{AB2} = \dfrac{\|y_k\|^2}{d_k^T g_k}; \quad \beta_k^{AB3} = \dfrac{\|y_k\|^2}{d_k^T y_k}$ (8) [11]
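To make the update rule concrete, the following Python sketch computes the CG search direction of Equation (3) for two of the classical coefficients in Equation (6). The function names and the use of NumPy vectors are illustrative choices, not notation from the original text.

```python
import numpy as np

def beta_fr(g_new, g_old, d_old):
    # Fletcher-Reeves coefficient: ||g_{k+1}||^2 / ||g_k||^2
    return (g_new @ g_new) / (g_old @ g_old)

def beta_prp(g_new, g_old, d_old):
    # Polak-Ribiere-Polyak coefficient: y_k^T g_{k+1} / ||g_k||^2
    y = g_new - g_old
    return (y @ g_new) / (g_old @ g_old)

def cg_direction(g_new, g_old, d_old, beta_rule=beta_fr):
    # Conjugate gradient search direction d_{k+1} = -g_{k+1} + beta_k d_k  (Equation (3))
    return -g_new + beta_rule(g_new, g_old, d_old) * d_old
```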

3. Proposed a New Conjugacy Coefficient

We have the quasi-Newton condition

$y_k = G_k s_k$ (9)

We multiply both sides of Equation (9) by $s_k$ and get

$y_k^T s_k = G \, s_k^T s_k$

$G = \dfrac{y_k^T s_k}{\|s_k\|^2} I_{n \times n}$ (10)

Let $d_{k+1}^{N} = -\lambda G_k^{-1} g_{k+1}$ (11)

$d_{k+1}^{N} = -\lambda \dfrac{y_k^T s_k}{\|s_k\|^2} g_{k+1}$ (12)

Multiplying both sides of Equation (12) by $y_k$, we get

$y_k^T d_{k+1}^{N} = -\lambda \left[ \dfrac{y_k^T s_k}{\|s_k\|^2} \right] y_k^T g_{k+1}$ (13)

$y_k^T d_{k+1}^{CG} = -y_k^T g_{k+1} + \beta_k d_k^T y_k$ (14)

From (13) and (14) we have

$-y_k^T g_{k+1} + \beta_k d_k^T y_k = -\lambda \left[ \dfrac{y_k^T s_k}{\|s_k\|^2} \right] y_k^T g_{k+1}$ (15)

We assume that $\beta_k = \beta_k^{(DY)} = \dfrac{g_{k+1}^T g_{k+1}}{y_k^T d_k}$.

Then we have

$-y_k^T g_{k+1} + \beta_k^{DY} d_k^T y_k = -\lambda \left[ \dfrac{y_k^T s_k}{\|s_k\|^2} \right] y_k^T g_{k+1}$ (16)

$-y_k^T g_{k+1} + \dfrac{\|g_{k+1}\|^2}{d_k^T y_k} d_k^T y_k = -\lambda \left[ \dfrac{y_k^T s_k}{\|s_k\|^2} \right] y_k^T g_{k+1}$ (17)

From Equation (17), writing $\|g_{k+1}\|^2 = \beta_k \tau_k$ for the new coefficient $\beta_k$ and a scaling term $\tau_k > 0$, we get:

$-y_k^T g_{k+1} + \beta_k \tau_k = -\lambda \left[ \dfrac{y_k^T s_k}{\|s_k\|^2} \right] y_k^T g_{k+1}$ (18)

Solving Equation (18) for $\beta_k$ and estimating the step scaling by $\lambda = \dfrac{\|s_k\|^2}{2[f_k - f_{k+1} + g_{k+1}^T s_k]}$, we have

$\beta_k = \dfrac{-\left[ \dfrac{\|s_k\|^2}{2[f_k - f_{k+1} + g_{k+1}^T s_k]} \right] \left[ \dfrac{y_k^T s_k}{\|s_k\|^2} \right] y_k^T g_{k+1} + y_k^T g_{k+1}}{\tau_k}$ (19)

$\beta_k = \dfrac{-\left[ \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] y_k^T g_{k+1} + y_k^T g_{k+1}}{\tau_k}$ (20)

$\beta_k = \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\tau_k}$ (21)

Since $\tau_k > 0$, we choose $\tau_k = \|g_k\|^2$; then:

$\beta_k^{New} = \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\|g_k\|^2}$. (22)
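As a concrete illustration, the proposed coefficient of Equation (22) can be computed as follows. This is a minimal sketch assuming NumPy vectors for the gradients and the step $s_k$; in practice a small safeguard against a vanishing denominator would be added.

```python
import numpy as np

def beta_new(f_k, f_k1, g_k, g_k1, s_k):
    # Proposed conjugacy coefficient of Equation (22):
    #   beta = [1 - y^T s / (2(f_k - f_{k+1} + g_{k+1}^T s))] * y^T g_{k+1} / ||g_k||^2
    y_k = g_k1 - g_k
    denom = 2.0 * (f_k - f_k1 + g_k1 @ s_k)
    bracket = 1.0 - (y_k @ s_k) / denom
    return bracket * (y_k @ g_k1) / (g_k @ g_k)
```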

3.1. Outlines of the Proposed Algorithm

Step (1): Initialization: Select a starting point $x_0 \in \mathbb{R}^n$ and an accuracy tolerance $\varepsilon > 0$ (a small positive real number); set $d_0 = -g_0$, $\lambda_0 = Minary(g_0)$, and $k = 0$.

Step (2): Convergence test: If $\|g_k\| \le \varepsilon$, stop; the optimal solution is $x_k$. Otherwise, go to Step (3).

Step (3): Line search: Compute the step size $\lambda_k$ by the cubic interpolation method so that it satisfies the Wolfe conditions in Equations (4), (5), and go to Step (4).

Step (4): Update the variables: $x_{k+1} = x_k + \lambda_k d_k$, and compute $f(x_{k+1})$, $g_{k+1}$, $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.

Step (5): Check: If $\|g_{k+1}\| \le \varepsilon$, stop. Otherwise, continue.

Step (6): Search direction: Compute the scalar $\beta_k^{(New)}$ using Equation (22), set $d_{k+1} = -g_{k+1} + \beta_k^{(New)} d_k$ and $k = k + 1$, and go to Step (3). A sketch of these steps in code is given below.
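The following Python sketch puts the steps above together. It is only an illustration of the structure of the method: a simple Armijo backtracking rule stands in for the cubic/Wolfe line search of Step (3), and the safeguard on the denominator of Equation (22) is an added assumption, not part of the original description.

```python
import numpy as np

def backtracking(f, x, d, g, lam0=1.0, rho=1e-4, shrink=0.5, max_tries=50):
    # Simple Armijo backtracking line search (stand-in for the cubic/Wolfe search).
    lam, fx = lam0, f(x)
    for _ in range(max_tries):
        if f(x + lam * d) <= fx + rho * lam * (g @ d):
            break
        lam *= shrink
    return lam

def modified_cg(f, grad, x0, eps=1e-6, max_iter=1000):
    # Sketch of the proposed CG method of Section 3.1 with beta from Equation (22).
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:          # Steps (2)/(5): convergence test
            break
        lam = backtracking(f, x, d, g)        # Step (3): line search
        x_new = x + lam * d                   # Step (4): update the iterate
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = 2.0 * (f(x) - f(x_new) + g_new @ s)
        beta = 0.0 if abs(denom) < 1e-12 else (1.0 - (y @ s) / denom) * (y @ g_new) / (g @ g)
        d = -g_new + beta * d                 # Step (6): new search direction
        x, g = x_new, g_new
    return x
```

For example, `modified_cg(lambda x: float(x @ x), lambda x: 2 * x, np.ones(10))` converges to the origin for the sphere function.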

3.2. Flowchart of the Conjugate Gradient Algorithm

Figure 1 shows the flowchart of the standard conjugate gradient method.

3.3. Theoretical Properties for the New CG-Method.

In this section, we focus on the convergence behavior of the $\beta_k^{New}$ method with exact line searches. Hence, we make the following basic assumptions on the objective function.

Figure 1. Flowchart of the standard conjugate gradient method.

Assumption (1):

$f$ is bounded below on the level set $L_{x_0} = \{ x \in \mathbb{R}^n \mid f(x) \le f(x_0) \}$; in some neighborhood $U$ of the level set $L_{x_0}$, $f$ is continuously differentiable and its gradient $\nabla f$ is Lipschitz continuous on $L_{x_0}$, namely, there exists a constant $L > 0$ such that:

$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \text{for all } x, y \in L_{x_0}$. (23)

3.3.1. Sufficient Descent Property

In this section, we show that the proposed algorithm defined by Equations (22) and (3) satisfies the sufficient descent property, which is required for the convergence analysis.

Theorem (1):

The search direction $d_k$ generated by the proposed modified CG algorithm satisfies the descent property for all $k$ when the step size $\lambda_k$ satisfies the Wolfe conditions (4), (5).

Proof: We prove the descent property by induction. For $k = 0$, $d_0 = -g_0$, so $d_0^T g_0 = -\|g_0\|^2 < 0$, and the theorem holds for $k = 0$. We assume that $\|s_k\| \le \eta$, $\|g_{k+1}\| \le \Gamma$ and $\|g_k\| \ge \eta_2$, and that the theorem is true for index $k$, i.e. $d_k^T g_k < 0$ (equivalently $s_k^T g_k < 0$, since $s_k = \lambda_k d_k$). We now prove that the theorem holds for $k + 1$:

$d_{k+1} = -g_{k+1} + \beta_k^{(New)} d_k$ (24)

i.e. $d_{k+1} = -g_{k+1} + \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\|g_k\|^2} d_k$ (25)

Multiplying both sides of Equation (25) by $g_{k+1}^T$, we get:

$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\|g_k\|^2} g_{k+1}^T d_k$ (26)

Dividing both sides by $\|g_{k+1}\|^2$:

$\dfrac{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2}{\|g_{k+1}\|^2} = \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\|g_k\|^2} \dfrac{g_{k+1}^T d_k}{\|g_{k+1}\|^2}$ (27)

$\dfrac{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2}{\|g_{k+1}\|^2} \le \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{\|y_k\| \|g_{k+1}\|}{\|g_k\|^2} \dfrac{\|g_{k+1}\| \|d_k\|}{\|g_{k+1}\|^2}$ (28)

$\dfrac{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2}{\|g_{k+1}\|^2} \le \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{\|y_k\| \|d_k\|}{\|g_k\|^2}$ (29)

$\dfrac{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2}{\|g_{k+1}\|^2} \le \left[ 1 - \dfrac{\|y_k\| \|s_k\|}{2(f_k - f_{k+1} + \|g_{k+1}\| \|s_k\|)} \right] \dfrac{\|y_k\| \|d_k\|}{\|g_k\|^2}$ (30)

$\dfrac{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2}{\|g_{k+1}\|^2} \le \dfrac{\|y_k\| \|d_k\|}{\|g_k\|^2}$ (31)

$\dfrac{\|g_{k+1}\|^2}{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2} \ge \dfrac{\|g_k\|^2}{\|y_k\| \|d_k\|} = \delta > 1$ (32)

$\dfrac{g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2}{\|g_{k+1}\|^2} \le \dfrac{1}{\delta}$ (33)

$g_{k+1}^T d_{k+1} + \|g_{k+1}\|^2 \le \dfrac{1}{\delta} \|g_{k+1}\|^2$ (34)

$g_{k+1}^T d_{k+1} \le -\left( 1 - \dfrac{1}{\delta} \right) \|g_{k+1}\|^2$

Let $c = 1 - \dfrac{1}{\delta}$ (35)

Then $g_{k+1}^T d_{k+1} \le -c \|g_{k+1}\|^2$ (36)

for some positive constant $c > 0$. This condition has often been used to analyze the global convergence of conjugate gradient methods with inexact line searches.
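The sufficient descent condition (36) is easy to check numerically. The short Python sketch below evaluates $g_{k+1}^T d_{k+1}$ for one step of the method on a simple quadratic; the test function, the fixed step size and all variable names are illustrative, so the printout is a sanity check rather than part of the proof.

```python
import numpy as np

# Numerical sanity check of g_{k+1}^T d_{k+1} <= -c ||g_{k+1}||^2 on a quadratic.
rng = np.random.default_rng(0)
Q = np.diag([1.0, 4.0, 9.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x = rng.standard_normal(3)
g = grad(x)
d = -g                                   # d_0 = -g_0
lam = 0.1                                # fixed small step, for illustration only
x1 = x + lam * d
g1, s = grad(x1), lam * d
y = g1 - g
beta = (1.0 - (y @ s) / (2.0 * (f(x) - f(x1) + g1 @ s))) * (y @ g1) / (g @ g)
d1 = -g1 + beta * d
print(g1 @ d1, -np.linalg.norm(g1) ** 2)  # the first value should be negative, of the order of the second
```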

3.3.2. Global Convergence Property

The conclusion of the following lemma is used to prove the global convergence of nonlinear conjugate gradient methods under the general Wolfe line search.

Lemma 1:

Suppose that parts (i) and (ii) of Assumption (1) hold, and consider any conjugate gradient method of the form (3) with (22), where $d_k$ is a descent direction and $\lambda_k$ is obtained by the strong Wolfe line search. If

$\sum_{k \ge 1} \dfrac{1}{\|d_k\|^2} = \infty$, (37)

then $\liminf_{k \to \infty} \|g_k\| = 0$. (38)

For uniformly convex functions which satisfy the above assumptions, we can prove that the norm of $d_{k+1}$ given by (25) is bounded above. Assume that the function $f$ is uniformly convex, i.e. there exists a constant $\mu > 0$ such that, for all $x, y \in S$,

$(g(x) - g(y))^T (x - y) \ge \mu \|x - y\|^2$, (39)

Using Lemma 1 the following result can be proved.

Theorem 2:

Suppose that the assumptions (i) and (ii) hold. Consider the algorithm defined by (3) and (22). If $\|s_k\|$ tends to zero and there exist nonnegative constants $\eta_1$ and $\eta_2$ such that:

$\|g_k\|^2 \ge \eta_1 \|s_k\|^2, \quad \|g_{k+1}\|^2 \le \eta_2 \|s_k\|$, (40)

and $f$ is a uniformly convex function, then

$\liminf_{k \to \infty} \|g_k\| = 0$. (41)

Proof: From Equation (22) we have:

$\beta_k^{New} = \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\|g_k\|^2}$

From the Cauchy-Schwarz inequality we get:

$\left| \beta_{k+1}^{New} \right| = \left| \left[ 1 - \dfrac{y_k^T s_k}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{y_k^T g_{k+1}}{\|g_k\|^2} \right|$ (42)

$\left| \beta_{k+1}^{New} \right| \le \left[ 1 - \dfrac{\|y_k\| \|s_k\|}{2(f_k - f_{k+1} + g_{k+1}^T s_k)} \right] \dfrac{\|y_k\| \|g_{k+1}\|}{\|g_k\|^2}$ (43)

But $\|y_k\| \le L \|s_k\|$. Then

$\left| \beta_{k+1}^{New} \right| \le \left[ 1 - \dfrac{L \|s_k\| \|s_k\|}{2(f_k - f_{k+1} + \|g_{k+1}\| \|s_k\|)} \right] \dfrac{L \|s_k\| \|g_{k+1}\|}{\|g_k\|^2}$ (44)

$\left| \beta_{k+1}^{New} \right| \le \left[ 1 - \dfrac{L \|s_k\|^2}{2(f_k - f_{k+1} + \|g_{k+1}\| \|s_k\|)} \right] \dfrac{L \|s_k\| \|g_{k+1}\|}{\|g_k\|^2}$ (45)

From Equation (40) and the bounds assumed in the proof of Theorem (1):

$\left| \beta_{k+1}^{New} \right| \le \left[ 1 - \dfrac{L \eta^2}{2(|f_k - f_{k+1}| + \eta \Gamma)} \right] \dfrac{L \eta \Gamma}{\eta_1 \eta \|s_k\|}$ (46)

Let $A = f_k - f_{k+1}$; then $\left| \beta_{k+1}^{New} \right| \le \left[ 1 - \dfrac{L \eta^2}{2(|A| + \eta \Gamma)} \right] \dfrac{L \eta \Gamma}{\eta_1 \eta \|s_k\|}$ (47)

$\left| \beta_{k+1}^{New} \right| \le \dfrac{L \eta \Gamma}{\eta_1 \eta \|s_k\|}$ (48)

Hence,

$\|d_{k+1}\| \le \|g_{k+1}\| + \left| \beta_k^{N} \right| \|s_k\|$ (49)

$\|d_{k+1}\| \le \gamma + \dfrac{L \eta \Gamma}{\eta_1 \eta \|s_k\|} \|s_k\| = \gamma + \dfrac{L \eta \Gamma}{\eta_1 \eta}$ (50)

$\sum_{k \ge 1} \dfrac{1}{\|d_{k+1}\|^2} = \infty$, (51)

since $\dfrac{1}{\left( \gamma + \frac{L \eta \Gamma}{\eta_1 \eta} \right)^2} \sum_{k \ge 1} 1 = \infty$. (52)

4. Whale Optimization Algorithm (WOA)

Whales are the largest animals in the world: some reach a length of up to 30 meters and a weight of 180 tons. Major species include killer whales, humpback whales and blue whales. Whales are generally predators, and they never sleep fully because they must surface to breathe; in fact, only half of the brain sleeps at a time. Interestingly, whales are also very intelligent and emotional animals [3].

The hunting technique used by these whales, known as bubble-net feeding, is one of their most interesting foraging behaviors. Figure 2 illustrates the bubble-net feeding behavior of humpback whales.

Figure 2. Bubble-net feeding behavior of humpback whales.

The humpback whale dives about 12 meters down, creates bubbles in a circular or spiral shape encircling the prey, and then swims up towards the surface. This process consists of three distinct stages:

1: Coral loop.

2: Lob tail.

3: Capture loop.

This feeding style can be observed only in humpback whales.

4.1. Mathematical Model

In this section, we describe the mathematical model of encircling the prey, the spiral bubble-net maneuver and the search for prey, and then present the Whale Optimization Algorithm (WOA).

1) Encircling prey

Humpback whales can recognize the location of prey and encircle it. In the search space, the optimal location is not known in advance, so the whale algorithm assumes that the target prey corresponds to the current best (or near-best) solution; the remaining search agents then update their positions according to that best location. This is represented by the following equations:

$D = | C \cdot X^*(t) - X(t) |$ (53)

$X(t+1) = X^*(t) - A \cdot D$ (54)

where:

t: the current iteration.

A, C: coefficient vectors.

X: the position vector of a whale.

X*: the position vector of the best solution obtained so far; it is updated in each iteration if a better solution is found.

To calculate the values of vectors A and C, we use the following formulas:

$A = 2a \cdot r - a$ (55)

$C = 2 \cdot r$ (56)

The value of a decreases linearly from 2 to 0 over the course of the iterations, and r is a random vector with components in the interval [0, 1].
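Equations (53)-(56) translate directly into code. The sketch below updates one whale's position towards the current best solution; the helper name `encircle`, the NumPy arrays and the explicit random generator are illustrative choices.

```python
import numpy as np

def encircle(x, x_best, a, rng):
    # Encircling-prey move, Equations (53)-(56):
    #   A = 2*a*r - a,  C = 2*r,  D = |C*X*(t) - X(t)|,  X(t+1) = X*(t) - A*D
    A = 2 * a * rng.random(x.shape) - a
    C = 2 * rng.random(x.shape)
    D = np.abs(C * x_best - x)
    return x_best - A * D
```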

2) Bubble-net attacking method (Exploitation phase)

The bubble-net feeding behavior of humpback whales is modeled mathematically by two mechanisms:

a. Shrinking encircling mechanism

This mechanism is achieved by decreasing the value of a, which in turn shrinks the fluctuation range of A (Equation (55)).

The value of a is obtained from the following formula:

$a = 2 - \dfrac{2t}{MaxIter}$ (57)

where

t: the current iteration.

MaxIter: the maximum number of iterations allowed [10].

Hence A is a random value in [−a, a]; as a decreases from 2 to 0 over the iterations, A is eventually restricted to random values in [−1, 1].

The new position of a search agent can therefore lie anywhere between its original position and the position of the current best agent. Figure 3 shows the possible positions of (X, Y) relative to (X*, Y*) that can be reached in two dimensions by setting $0 \le A \le 1$; it also illustrates the shrinking encircling mechanism.

b. Spiral updating position

In this method, the distance between the whale at (X, Y) and the prey at (X*, Y*) is first calculated, as shown in Figure 4. A spiral equation is then created between the positions of the whale and the prey to mimic the helix-shaped movement of humpback whales, as follows:

$X(t+1) = D' \cdot e^{bl} \cdot \cos(2 \pi l) + X^*(t)$ (58)

Figure 3. The shrinking encircling mechanism.

Figure 4. The spiral updating position.

where

$D' = | X^*(t) - X(t) |$ (59)

Here $D'$ represents the distance between the whale and the prey (the best solution obtained so far), b is a constant that defines the shape of the logarithmic spiral, and l is a random number in the interval [−1, 1].

Humpback whales swim around their prey within a shrinking circle and along a spiral path simultaneously. To model this behavior, a probability of 50% is assumed for choosing between the shrinking encircling mechanism and the spiral model when updating the position of the whales. Mathematically:

$X(t+1) = \begin{cases} X^*(t) - A \cdot D & \text{if } p < 0.5 \\ D' \cdot e^{bl} \cdot \cos(2 \pi l) + X^*(t) & \text{if } p \ge 0.5 \end{cases}$ (60)

where p is a random number in the interval [0, 1] [6].
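A compact sketch of the exploitation phase, Equations (57)-(60), is given below; it reuses the `encircle` helper sketched above, and the default constant `b = 1` and the function name are assumptions made only for illustration.

```python
import numpy as np

def bubble_net_update(x, x_best, t, max_iter, rng, b=1.0):
    # Exploitation phase, Equation (60): with probability 0.5 use the shrinking
    # encircling move, otherwise the spiral move; a follows Equation (57).
    a = 2 - 2 * t / max_iter
    if rng.random() < 0.5:
        return encircle(x, x_best, a, rng)                       # shrinking circle, Eq. (54)
    l = rng.uniform(-1, 1)
    D_prime = np.abs(x_best - x)                                  # Equation (59)
    return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + x_best  # Equation (58)
```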

3) Search for prey:

The same mechanism based on varying the vector A can be used to search for prey. Humpback whales search for prey randomly according to each other's positions. Therefore, A is used with random values greater than 1 or less than −1 to force a search agent to move away from a reference whale. In contrast to the exploitation phase, the position of a search agent in the exploration phase is updated according to a randomly chosen agent rather than the best agent obtained so far. This mechanism, together with $|A| > 1$, emphasizes exploration and allows the WOA algorithm to perform a global search. The mathematical representation is:

$D = | C \cdot X_{rand} - X |$ (61)

$X(t+1) = X_{rand} - A \cdot D$ (62)

where $X_{rand}$ is the position vector of a whale chosen at random from the current population. Figure 5 shows some of the possible positions around a given solution when $|A| > 1$ [10].

4.2. Whale Optimization Algorithm

Figure 5. The exploration mechanism in the WOA algorithm.

The WOA algorithm starts each run with a set of random solutions. At each iteration, the search agents update their positions with respect to either a randomly chosen search agent or the best solution obtained so far. To balance exploration and exploitation, a randomly chosen agent is used when $|A| > 1$, while the best solution obtained so far is used when $|A| < 1$, since this improves the position of the search agent [3].

Depending on the value of p, WOA can switch between the spiral movement and the circular (shrinking encircling) movement; the algorithm terminates when the stopping condition is met.

In theory, WOA can be considered a global optimization algorithm because it combines exploration and exploitation abilities. On this basis, the method keeps track of the best solution found so far and lets the other search agents update their positions towards it.

The adaptive variation of the search vector A allows WOA to move smoothly between exploration ($|A| \ge 1$) and exploitation ($|A| < 1$). It should also be noted that WOA has only two main internal parameters to be adjusted, A and C [3].
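Putting the pieces together, a minimal WOA main loop might look as follows. This is a sketch under several simplifying assumptions: the $|A| < 1$ test is applied componentwise with `np.all`, the spiral constant is fixed at b = 1, positions are clipped to the bounds, and the optional `X0` argument (used later by the hybrid sketch) is not part of the original description.

```python
import numpy as np

def woa(f, dim, bounds, n_whales=30, max_iter=500, seed=0, X0=None):
    # Minimal WOA sketch: exploitation when |A| < 1, exploration when |A| >= 1.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = X0.copy() if X0 is not None else rng.uniform(lo, hi, (n_whales, dim))
    n_whales = X.shape[0]
    best = min(X, key=f).copy()
    for t in range(max_iter):
        a = 2 - 2 * t / max_iter                            # Equation (57)
        for i in range(n_whales):
            if rng.random() < 0.5:                          # encircling branch of Eq. (60)
                A = 2 * a * rng.random(dim) - a             # Equation (55)
                C = 2 * rng.random(dim)                     # Equation (56)
                ref = best if np.all(np.abs(A) < 1) else X[rng.integers(n_whales)]
                X[i] = ref - A * np.abs(C * ref - X[i])     # Equations (53)-(54) / (61)-(62)
            else:                                           # spiral branch of Eq. (60)
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        candidate = min(X, key=f)
        if f(candidate) < f(best):
            best = candidate.copy()
    return best, f(best)
```

For example, `woa(lambda x: float(np.sum(x**2)), dim=10, bounds=(-100, 100))` returns a point near the origin for the sphere function.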

4.3. Whale Optimization Algorithm Features

1. The algorithm is easy to implement.

2. The algorithm is highly flexible.

3. It does not need many parameters.

4. It can switch easily between exploration and exploitation based on a single parameter.

5. Owing to its simplicity and small number of parameters, and thanks to the logarithmic spiral function, it covers a wide boundary area of the search space.

6. The position of the agents (solutions) in the exploration phase is updated based on randomly selected solutions rather than the best solution obtained so far [10].

4.4. Proposed Hybrid Algorithm

In this section, a new hybrid method for solving optimization problems is proposed. It links the evolutionary ideas of the WOA algorithm with the classical conjugate gradient algorithm and is called WOA-CG. Figure 6 represents the proposed algorithm (WOA-CG).

Figure 6. The proposed algorithm (WOA-CG).

In this algorithm, each iteration is divided into two phases: in the first phase, the random population of the WOA is generated, and in the second phase, the modified conjugate gradient algorithm is applied. The steps of the proposed hybrid algorithm (WOA-CG) can be summarized as follows:

Step 1: Create the initial population by generating a random population and configuring the parameters A and C.

Step 2: Pass the random population to the modified conjugate gradient algorithm to improve it and obtain better solutions.

Step 3: Calculate the fitness function of the resulting new population (produced by the conjugate gradient algorithm and used as the initial population of the whale optimization algorithm) for each search agent; the fitness represents the distance between the whale and its prey.

Step 4: Determine the best position among the search agents; this position is used to produce the new generation.

Step 5: Update the position of each search agent using the algorithm's mechanisms: searching for prey, encircling prey, and hunting and attacking prey.

Step 6: Update the positions of the new generation using Equations (6) and (7).

Step 7: The WOA algorithm repeats these steps until the stopping condition is met. A sketch of this hybrid procedure in code follows below.
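A sketch of the hybrid driver, reusing the `modified_cg` and `woa` sketches given earlier (including the assumed `X0` seeding argument), could look as follows; the function name `woa_mcg` and the small number of CG refinement steps are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def woa_mcg(f, grad, dim, bounds, n_whales=30, max_iter=500, cg_steps=5, seed=0):
    # Steps 1-7 of the hybrid: generate a random population, refine every agent
    # with a few iterations of the modified CG method of Section 3.1, then run
    # the WOA loop on the refined population.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X0 = rng.uniform(lo, hi, (n_whales, dim))                              # Step 1
    refined = np.array([np.clip(modified_cg(f, grad, x, max_iter=cg_steps), lo, hi)
                        for x in X0])                                      # Steps 2-3
    return woa(f, dim, bounds, max_iter=max_iter, seed=seed, X0=refined)   # Steps 4-7
```

For instance, `woa_mcg(lambda x: float(np.sum(x**2)), lambda x: 2 * x, dim=10, bounds=(-100, 100))` minimizes the sphere function under these assumptions.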

5. Practical Aspect

To evaluate the performance of the proposed algorithm in solving optimization problems, the proposed WOA-CG algorithm was tested on 10 standard benchmark functions and compared with the whale optimization algorithm itself. The lower and upper bounds of each function and the point at which each function reaches its minimum value are given in Table 2, and the maximum number of iterations for all runs was 500.

Tables 3-5 show the results of the WOA-MCG algorithm compared with the results of the WOA algorithm. The proposed WOA-MCG algorithm is shown to be successful, improving the results on most of the high-dimensional test functions. This confirms the success of the hybridization process.

The tests were carried out on a laptop with the following specifications: a 2.70 GHz CPU, 8 GB of RAM, and Matlab R2014a running on Windows 8.

6. Conclusions

1. Hybridizing a meta-heuristic algorithm with a classical algorithm improved its performance by increasing the speed of convergence.

2. Hybridizing a meta-heuristic algorithm with a classical algorithm also improved the quality of the resulting solutions by strengthening its exploratory and exploitative capabilities; the numerical results show the ability of the hybrid algorithm to solve different optimization problems.

Table 2. Details of test functions.

Table 3. Comparison of results between WOA and WOA-MCG using a population of 5 search agents and 500 iterations.

Table 4. Comparison of results between WOA and WOA-MCG using a population of 10 search agents and 500 iterations.


Table 5. (a) Comparison of results between WOA and WOA-MCG using a population of 15 search agents and 500 iterations; (b) Comparison of results between WOA and WOA-MCG using a population of 30 search agents and 500 iterations.

The results of the WOA-MCG algorithm were compared with those of the WOA algorithm itself; the comparison is encouraging, as good solutions were obtained for most test functions.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Yang, X.-S. (2010) Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley & Sons, New Jersey. https://doi.org/10.1002/9780470640425
[2] Meng, X., Liu, Y., Gao, X. and Zhang, H. (2014) A New Bio-Inspired Algorithm: Chicken Swarm Optimization. In: Tan, Y., Shi, Y., Coello, C.A.C., Eds., Advances in Swarm Intelligence, ICSI 2014. Lecture Notes in Computer Science, Vol. 8794. Springer, Cham, 86-94.
https://doi.org/10.1007/978-3-319-11857-4_10
[3] Mirjalili, S. and Lewis, A. (2016) The Whale Optimization Algorithm. Advances in Engineering Software, 95, 51-67. https://doi.org/10.1016/j.advengsoft.2016.01.008
[4] Trivedi, I.N., Pradeep, J., Narottam, J., Arvind, K. and Dilip, L. (2016) Novel Adaptive Whale Optimization Algorithm for Global Optimization. Indian Journal of Science and Technology, 9, 1-6. https://doi.org/10.17485/ijst/2016/v9i38/101939
[5] Touma, H.J. (2016) Study of the Economic Dispatch Problem on IEEE 30-Bus System Using Whale Optimization Algorithm. International Journal of Engineering Technology and Sciences (IJETS), 5, 11-18.
[6] Abhiraj, T. and Aravindhababu, P. (2017) Dragonfly Optimization Based Reconfiguration for Voltage Profile Enhancement in Distribution Systems. International Journal of Computer Applications, 158, 1-4. https://doi.org/10.5120/ijca2017912758
[7] Andrea, R., Blesa, M., Blum, C. and Michael, S. (2008) Hybrid Metaheuristics: An Emerging Approach to Optimization, Springer, Berlin.
[8] Manoharan, N., Dash, S.S., Rajesh, K.S. and Panda, S. (2017) Automatic Generation Control by Hybrid Invasive Weed Optimization and Pattern Search Tuned 2-DOF PID Controller. International Journal of Computers, Communications & Control, 12, 533-549.
https://doi.org/10.15837/ijccc.2017.4.2751
[9] Mirjalili, S. (2016) Dragonfly Algorithm: A New Meta-Heuristic Optimization Technique for Solving Single-Objective, Discrete, and Multi-Objective Problems. Neural Computing and Applications, 27, 1053-1073. https://doi.org/10.1007/s00521-015-1920-1
[10] Mafarja, M.M. and Mirjalili, S. (2017) Hybrid Whale Optimization Algorithm with Simulated Annealing for Feature Selection. Neurocomputing, 260, 302-312.
https://doi.org/10.1016/j.neucom.2017.04.053
[11] Bayati, A.Y. and Assady, N.H. (1986) Conjugate Gradient Method. Technical Research, School of Computer Studies, Leeds University.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.