In this paper, we approach the problem of obtaining an approximate solution of second-order initial value problems by converting it to an optimization problem. It is assumed that the solution can be approximated by a polynomial. The coefficients of the polynomial are then optimized using the simulated annealing technique. Numerical examples with good results show the accuracy of the proposed approach compared with some existing methods.

The use of techniques based on evolutionary algorithms for solving optimization problems has been gaining interest over the last few years. These algorithms use mechanisms inspired by biological evolution, such as reproduction, recombination, mutation, and selection. Since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise in physics, engineering, chemistry, biology, economics, and many other fields. Interest in solving DEs via evolutionary algorithms has grown recently: approximate solutions are obtained by converting the equations to optimization problems, which are then solved via optimization techniques. The use of the classical genetic algorithm to obtain approximate solutions of second-order initial value problems was considered in [

$$y'' = f(t, y);\quad y(t_0) = y_0,\quad y'(t_0) = y'_0,\quad t \in [a, b]. \tag{1}$$

Simulated annealing is a simple stochastic function minimizer. It is motivated from the physical process of annealing, where a metal object is heated to a high temperature and allowed to cool slowly. The process allows the atomic structure of the metal to settle to a lower energy state, thus becoming a tougher metal. Using optimization terminology, annealing allows the structure to escape from a local minimum, and to explore and settle on a better, hopefully global, minimum.

At each iteration, a new point, $x_{new}$, is generated in the neighborhood of the current point, $x$. The radius of the neighborhood decreases with each iteration. The best point found so far, $x_{best}$, is also tracked.

If $f(x_{new}) \le f(x_{best})$, then $x_{new}$ replaces both $x_{best}$ and $x$. Otherwise, $x_{new}$ replaces $x$ with probability $\exp(b(i, \Delta f, f_0))$. Here $b$ is the Boltzmann exponent (the exponent of the acceptance probability function), $i$ is the current iteration, $\Delta f$ is the change in the objective function value, and $f_0$ is the value of the objective function from the previous iteration. The default definition of $b$ is

$$b(i, \Delta f, f_0) := -\frac{\Delta f \log(i+1)}{10}.$$
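This acceptance rule can be sketched in a few lines of Python. The function names below are ours, chosen for illustration; the rule itself follows the default Boltzmann exponent defined above.

```python
import math
import random

def boltzmann_exponent(i, delta_f, f0):
    """Default acceptance exponent b(i, df, f0) = -df * log(i + 1) / 10.

    i       : current iteration
    delta_f : change in objective value (f_new - f_old)
    f0      : objective value from the previous iteration
              (unused by this default definition, kept for the signature)
    """
    return -delta_f * math.log(i + 1) / 10.0

def accept(i, f_new, f_old):
    """Accept an improving point always; accept a worse point
    with probability exp(b(i, delta_f, f0))."""
    delta_f = f_new - f_old
    if delta_f <= 0:                      # improvement: always accept
        return True
    return random.random() < math.exp(boltzmann_exponent(i, delta_f, f_old))
```

Note that for a worse point ($\Delta f > 0$) the exponent is negative and shrinks as $i$ grows, so uphill moves become progressively less likely as the annealing proceeds.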

Simulated annealing uses multiple starting points and finds an optimum starting from each of them. The default number of starting points, given by the parameter SearchPoints, is $\min(2d, 50)$, where $d$ is the number of variables; in this case $d = 1$, since there is one independent variable.

Consider the second-order initial value problem (1) and assume a solution of the form

$$y(t) = \sum_{i=0}^{k} \psi_i t^i, \quad k \in \mathbb{Z}^+ \tag{2}$$

where the $\psi_i$ are parameters to be determined. Substituting (2) and its second derivative into (1) gives

$$\sum_{i=2}^{k} i(i-1)\,\psi_i t^{i-2} = f(t, y(t)) \tag{3}$$
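The two sides of this substitution can be sketched directly from the coefficient list. The helper names below are ours, for illustration only:

```python
def y_approx(psi, t):
    """Evaluate the trial polynomial y(t) = sum_i psi_i * t**i of (2)."""
    return sum(p * t**i for i, p in enumerate(psi))

def y_second_deriv(psi, t):
    """Evaluate y''(t) = sum_{i>=2} i*(i-1)*psi_i * t**(i-2),
    the left-hand side of (3)."""
    return sum(i * (i - 1) * p * t**(i - 2)
               for i, p in enumerate(psi) if i >= 2)
```

For example, with psi = [2, -2, 0.5] (i.e. $y = 2 - 2t + t^2/2$) the second derivative is the constant 1 at every $t$.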

Using the initial conditions, we have the constraints

$$\left[\sum_{i=0}^{k} \psi_i t^i\right]_{t=t_0} = y_0 \quad \text{and} \quad \left[\sum_{i=1}^{k} i\,\psi_i t^{i-1}\right]_{t=t_0} = y'_0 \tag{4}$$

At each node point $t_n$, we require that

$$E_n(t) = \left[\sum_{i=2}^{k} i(i-1)\,\psi_i t^{i-2} - f(t, y(t))\right]_{t=t_n} \simeq 0 \tag{5}$$

To solve the above problem, we need to find the set $\{\psi_i \mid i = 0(1)k\}$ which minimizes the expression

$$\sum_{n=1}^{(b-a)/h} E_n^2(t) \tag{6}$$

where h is the steplength. We now formulate the problem as an optimization problem in the following way:

Minimize: $$\sum_{n=1}^{(b-a)/h} E_n^2(t) \tag{7}$$

Subject to: $$\left[\sum_{i=0}^{k} \psi_i t^i\right]_{t=t_0} = y_0 \quad \text{and} \quad \left[\sum_{i=1}^{k} i\,\psi_i t^{i-1}\right]_{t=t_0} = y'_0 \tag{8}$$

Using the simulated annealing algorithm, we are able to obtain the set $\{\psi_i \mid i = 0, 1, \cdots, k\}$ which minimizes the expression $\sum_{n=1}^{(b-a)/h} E_n^2(t)$.
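The whole formulation can be sketched end to end. The paper uses Mathematica's built-in simulated annealing; as a stand-in illustration we use SciPy's `dual_annealing`, a related annealing variant. The degree $k$, step $h$, and coefficient bounds below are our own choices, not the paper's settings. Since $t_0 = 0$ in the examples that follow, the constraints (8) reduce to $\psi_0 = y_0$ and $\psi_1 = y'_0$, so they can be enforced exactly rather than penalized:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Illustration on problem (9): y'' - y = t - 1, y(0) = 2, y'(0) = -2 on [0, 1].
y0, dy0 = 2.0, -2.0
h = 0.1
nodes = np.arange(1, int(1.0 / h) + 1) * h   # node points t_1, ..., t_{(b-a)/h}
k = 6                                        # polynomial degree (our choice)

def objective(free):
    # Constraints (8) at t0 = 0 force psi_0 = y0 and psi_1 = y'_0,
    # so only psi_2, ..., psi_k are optimized.
    psi = np.concatenate(([y0, dy0], free))
    i = np.arange(k + 1)
    total = 0.0
    for t in nodes:
        y = np.sum(psi * t ** i)                                   # eq. (2)
        ypp = np.sum(i[2:] * (i[2:] - 1) * psi[2:] * t ** (i[2:] - 2))
        total += (ypp - y - (t - 1.0)) ** 2   # E_n(t)^2 for y'' = y + t - 1
    return total                              # objective (7)

result = dual_annealing(objective, bounds=[(-2.0, 2.0)] * (k - 1),
                        seed=0, maxiter=100)
psi = np.concatenate(([y0, dy0], result.x))
```

The recovered coefficients can then be checked against the exact solution $1 - t + e^{-t}$ at any point in $[0, 1]$; because the residual is quadratic in the $\psi_i$ for a linear ODE, the annealer's local search settles very close to the least-squares optimum.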

We now perform some numerical experiments confirming the theoretical expectations regarding the proposed method. The proposed algorithm is compared with the Runge-Kutta-Nyström method in this section. The parameters needed to implement the simulated annealing are set as follows:

exponent of the probability function (Boltzmann Exponent = 1).

set of initial points (Initial Points = 1000).

maximum number of iterations to stay at a given point (Level Iterations = 50).

scale for the random jump (Perturbation Scale = 1.0).

starting value for the random number generator (Random Seed = 0).

number of initial points (Search Points = 0).

tolerance for accepting constraint violations (Tolerance = 0.000001).

We examine the following linear equation

$$y''(t) - y(t) = t - 1;\quad y(0) = 2,\quad y'(0) = -2 \tag{9}$$

with the exact solution $y(t) = 1 - t + e^{-t}$.
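As a quick sanity check (ours, not part of the paper), one can verify numerically that $y(t) = 1 - t + e^{-t}$ satisfies both the equation and the initial conditions, approximating $y''$ by a centered second difference:

```python
import math

def y_exact(t):
    """Exact solution of problem (9)."""
    return 1.0 - t + math.exp(-t)

def ode_residual(t, eps=1e-5):
    """Residual y'' - y - (t - 1), with y'' from a centered second difference."""
    ypp = (y_exact(t + eps) - 2.0 * y_exact(t) + y_exact(t - eps)) / eps**2
    return ypp - y_exact(t) - (t - 1.0)
```

The residual vanishes (to finite-difference accuracy) across $[0, 1]$, and $y(0) = 2$, $y'(0) = -2$ hold exactly.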

Implementing the proposed scheme with $k = 10$, we obtain $\{\psi_i \mid i = 0(1)10\}$ as

$$\left\{2,\ -2,\ \tfrac{416995243}{834315644},\ -\tfrac{105164777}{636059757},\ \tfrac{69800031}{1811256752},\ -\tfrac{6535788}{1275358859}\right\}.$$

Using a steplength of $h = 0.01$, the absolute errors obtained by our proposed algorithm are compared with those produced by the well-known Runge-Kutta-Nyström method, as shown in the first table below.

Consider the equation

$$y''(t) = (1 + t^2)\,y(t);\quad y(0) = 1,\quad y'(0) = 0 \tag{10}$$

with the exact solution $y(t) = \exp\!\left(\tfrac{t^2}{2}\right)$.

Implementing the proposed scheme with $k = 11$, we obtain $\{\psi_i \mid i = 0(1)11\}$ as

$$\left\{1,\ 0,\ \tfrac{1306409430}{2612828131},\ \tfrac{29397245}{1713857114397},\ \tfrac{3187969586}{25524507753},\ \tfrac{172091099}{436085951591},\ \tfrac{313833621}{15857243966},\ \tfrac{382909153}{201794651238},\ \tfrac{117010789}{496614906383},\ \tfrac{766119929}{389265107664},\ -\tfrac{130287162}{172534329575},\ \tfrac{125796527}{456658410146}\right\}$$

Absolute errors for Problem (9):

t | Runge-Kutta-Nyström | Proposed Scheme
---|---|---
0.00 | 0 | 0
1.00E−1 | 2.973739E−10 | 2.591705E−12
2.00E−1 | 7.050944E−10 | 5.964562E−12
3.00E−1 | 1.217025E−9 | 9.366508E−12
4.00E−1 | 1.829043E−9 | 1.286815E−11
5.00E−1 | 2.538910E−9 | 1.649259E−11
6.00E−1 | 3.346159E−9 | 2.029099E−11
7.00E−1 | 4.252021E−9 | 2.428346E−11
8.00E−1 | 5.259365E−9 | 2.852540E−11
9.00E−1 | 6.372666E−9 | 3.304634E−11
1.00 | 7.597991E−9 | 3.792694E−11

Absolute errors for Problem (10):

t | Runge-Kutta-Nyström | Proposed Scheme
---|---|---
0.00 | 0 | 0
2.00E−1 | 1.266432E−7 | 1.76241E−8
4.00E−1 | 2.923595E−7 | 3.804534E−8
6.00E−1 | 5.418410E−7 | 6.027792E−8
8.00E−1 | 9.469284E−7 | 8.585760E−8
1.00 | 1.627915E−6 | 1.171072E−7

In this paper, we have shown how the problem of obtaining an approximate solution to (1) can be converted to an optimization problem and then solved using simulated annealing. The results obtained compare favourably with the Runge-Kutta-Nyström method.

The authors declare no conflicts of interest regarding the publication of this paper.

Bilesanmi, A., Wusu, A.S. and Olutimo, A.L. (2019) Solution of Second-Order Ordinary Differential Equations via Simulated Annealing. Open Journal of Optimization, 8, 32-37. https://doi.org/10.4236/ojop.2019.81003