
Solution of Second-Order Ordinary Differential Equations via Simulated Annealing


*Open Journal of Optimization*, **8**, 32-37. doi: 10.4236/ojop.2019.81003.

1. Introduction

The use of techniques based on evolutionary algorithms for solving optimization problems has been gaining interest over the last few years. These algorithms use mechanisms inspired by biological evolution, such as reproduction, recombination, mutation, and selection. Since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise in physics, engineering, chemistry, biology, economics and many other fields. The idea of solving DEs via evolutionary algorithms has recently been gaining ground. Approximate solutions of differential equations are obtained by converting the equations to optimization problems, which are then solved via optimization techniques. The use of the classical genetic algorithm to obtain approximate solutions of second-order initial value problems was considered in [1]. In [2], the author combined the collocation method with genetic algorithms to compute approximate solutions of first-order initial value problems. In a later work [3], the same author combined genetic algorithms with the Nelder-Mead method for solving second-order initial value problems of the form $y'' = f(x, y)$. The adaptation of neural networks for the solution of second-order initial value problems was proposed by the authors in [4]. A continuous genetic algorithm was used to compute the solution of two-point second-order ordinary differential equations in [5]. The adaptation of the differential evolution algorithm for the solution of the second-order initial value problem of the form $y'' + p(t)y' + q(t)y = r(t)$ was proposed in [6].
The authors in [7] considered the approach of using the differential evolution algorithm to obtain approximate solutions of the second-order two-point boundary value problem $u'' = f(t, u);\ u(a) = \eta_1;\ u(b) = \eta_2$ with oscillatory/periodic behaviour. In this paper, we show that the simulated annealing algorithm can also be used to find very accurate approximate solutions of second-order initial value problems of the form

$y'' = f(t, y); \quad y(t_0) = y_0, \ \ y'(t_0) = y'_0, \quad t \in [a, b].$ (1)

2. Basic Notions of Simulated Annealing Algorithm

Simulated annealing is a simple stochastic function minimizer. It is motivated by the physical process of annealing, in which a metal object is heated to a high temperature and allowed to cool slowly. The process allows the atomic structure of the metal to settle to a lower energy state, thus becoming a tougher metal. In optimization terminology, annealing allows the structure to escape from a local minimum, and to explore and settle on a better, hopefully global, minimum.

At each iteration, a new point, $x_{new}$, is generated in the neighborhood of the current point, $x$. The radius of the neighborhood decreases with each iteration. The best point found so far, $x_{best}$, is also tracked.

If $f(x_{new}) \le f(x_{best})$, then $x_{new}$ replaces $x_{best}$ and $x$. Otherwise, $x_{new}$ replaces $x$ with probability $\exp(b(i, \Delta f, f_0))$. Here $b$ is the Boltzmann exponent (the exponent of the acceptance-probability function), $i$ is the current iteration, $\Delta f$ is the change in the objective function value, and $f_0$ is the value of the objective function from the previous iteration. The default definition of $b$ is given as $b(i, \Delta f, f_0) := \frac{-\Delta f \log(i+1)}{10}$.

Simulated annealing uses multiple starting points and finds an optimum starting from each of them. The default number of starting points, given by the parameter SearchPoints, is $\min(2d, 50)$, where $d$ is the number of variables; in this case $d = 1$, since there is one independent variable.
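The acceptance rule described above can be sketched in a few lines of Python. This is an illustrative single-start version written for this exposition (the names `anneal` and `boltzmann_exponent` are ours, not from any particular library), using the default Boltzmann exponent $b(i, \Delta f, f_0) = -\Delta f \log(i+1)/10$ and a neighborhood radius that shrinks with the iteration count:

```python
import math
import random

def boltzmann_exponent(i, delta_f, f0):
    """Default exponent b(i, Δf, f0) of the acceptance probability."""
    return -delta_f * math.log(i + 1) / 10

def anneal(f, x0, iterations=1000, radius=1.0, seed=0):
    """Minimal single-start simulated annealing sketch.

    `f` is the objective and `x0` the starting point. The neighborhood
    radius shrinks each iteration; an uphill move is accepted with
    probability exp(b(i, Δf, f0)).
    """
    rng = random.Random(seed)
    x = x_best = x0
    fx = f_best = f(x0)
    for i in range(1, iterations + 1):
        # Propose a new point in a shrinking neighborhood of x.
        x_new = x + rng.uniform(-radius, radius) / i
        f_new = f(x_new)
        if f_new <= f_best:
            # Improvement: replace both the current and the best point.
            x_best, f_best = x_new, f_new
            x, fx = x_new, f_new
        else:
            # Uphill move: accept with probability exp(b(i, Δf, f0)).
            delta_f = f_new - fx
            if rng.random() < math.exp(boltzmann_exponent(i, delta_f, fx)):
                x, fx = x_new, f_new
    return x_best, f_best
```

A full implementation would additionally restart from several search points and track the best result across all of them, as described above.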

3. Proposed Method

Consider the second-order initial value problem (1) and assume a solution of the form

$y(t) = \sum_{i=0}^{k} \psi_i t^i, \quad k \in \mathbb{Z}^{+}$ (2)

where the $\psi_i$ are parameters to be determined. Substituting (2) and its second derivative into (1) gives

$\sum_{i=2}^{k} i(i-1)\psi_i t^{i-2} = f(t, y(t))$ (3)

Using the initial conditions we have the constraint that

$\left[\sum_{i=0}^{k} \psi_i t^i\right]_{t=t_0} = y_0 \quad \text{and} \quad \left[\sum_{i=1}^{k} i\psi_i t^{i-1}\right]_{t=t_0} = y'_0$ (4)

At each node point ${t}_{n}$ , we require that

$\mathcal{E}_n(t) = \left[\sum_{i=2}^{k} i(i-1)\psi_i t^{i-2} - f(t, y(t))\right]_{t=t_n} \simeq 0$ (5)

To solve the above problem, we need to find the set $\{\psi_i \mid i = 0(1)k\}$ which minimizes the expression

$\sum_{n=1}^{(b-a)/h} \mathcal{E}_n^2(t)$ (6)

where $h$ is the steplength. We now formulate the problem as an optimization problem in the following way:

$\text{Minimize:} \quad \sum_{n=1}^{(b-a)/h} \mathcal{E}_n^2(t)$ (7)

$\text{Subject to:} \quad \left[\sum_{i=0}^{k} \psi_i t^i\right]_{t=t_0} = y_0 \quad \text{and} \quad \left[\sum_{i=1}^{k} i\psi_i t^{i-1}\right]_{t=t_0} = y'_0$ (8)

Using the simulated annealing algorithm, we are able to obtain the set $\{\psi_i \mid i = 0, 1, \cdots, k\}$ which minimizes the expression $\sum_{n=1}^{(b-a)/h} \mathcal{E}_n^2(t)$.
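As an illustration, the optimization problem (7)-(8) can be set up in Python as a penalized least-squares objective over the coefficients $\psi_i$: the constraints (8) are folded into the objective with a large penalty weight, so that any off-the-shelf stochastic minimizer (such as a simulated annealing routine) can be applied. The function name `make_objective` and the penalty formulation are our own sketch, not the authors' implementation:

```python
def make_objective(f, t0, y0, dy0, a, b, h, penalty=1e6):
    """Objective (7) with the initial conditions (8) as quadratic penalties.

    `psi` is the coefficient vector (psi_0, ..., psi_k) of the trial
    polynomial (2); the residual (5) is summed over the node points
    t_n = a + n*h for n = 1, ..., (b - a)/h.
    """
    nodes = [a + n * h for n in range(1, int(round((b - a) / h)) + 1)]

    def objective(psi):
        def y(t):
            # Trial polynomial (2).
            return sum(c * t**i for i, c in enumerate(psi))

        def ypp(t):
            # Second derivative of the trial polynomial, as in (3).
            return sum(i * (i - 1) * c * t**(i - 2)
                       for i, c in enumerate(psi) if i >= 2)

        # Sum of squared residuals E_n^2 at the node points, as in (7).
        err = sum((ypp(t) - f(t, y(t)))**2 for t in nodes)

        # Initial conditions (4) enforced as penalty terms.
        err += penalty * (y(t0) - y0)**2
        dy = sum(i * c * t0**(i - 1) for i, c in enumerate(psi) if i >= 1)
        err += penalty * (dy - dy0)**2
        return err

    return objective
```

For example, for $y'' = 2$ with $y(0) = y'(0) = 0$, the exact coefficient vector $(0, 0, 1)$ (i.e. $y = t^2$) drives this objective to zero, which is what the annealing search looks for.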

4. Numerical Experiments

We now perform some numerical experiments to confirm the theoretical expectations regarding the proposed method. In this section, the proposed algorithm is compared with the Runge-Kutta Nystrom method. The parameters needed to implement simulated annealing are set as follows:

- exponent of the probability function: Boltzmann Exponent = 1
- set of initial points: Initial Points = 1000
- maximum number of iterations to stay at a given point: Level Iterations = 50
- scale for the random jump: Perturbation Scale = 1.0
- starting value for the random number generator: Random Seed = 0
- number of starting points: Search Points = 0
- tolerance for accepting constraint violations: Tolerance = 0.000001

4.1. Example 1

We examine the following linear equation

$y''(t) - y(t) = t - 1; \quad y(0) = 2, \quad y'(0) = -2$ (9)

with the exact solution $y(t) = 1 - t + \exp(-t)$.

Implementing the proposed scheme with $k = 10$, we obtain $\{\psi_i \mid i = 0(1)10\}$ as

$\left\{2,\ -2,\ \frac{416995243}{834315644},\ -\frac{105164777}{636059757},\ \frac{69800031}{1811256752},\ -\frac{6535788}{1275358859}\right\}.$

Using a steplength of $h = 0.01$, the absolute errors obtained by our proposed algorithm are compared with those produced by the well-known Runge-Kutta Nystrom method in Table 1. The comparison shows that our approach gives better results than the Runge-Kutta Nystrom method.
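As a quick sanity check (ours, not part of the paper), the six coefficients listed above can be compared against the exact solution $y(t) = 1 - t + \exp(-t)$ on $[0, 1]$; the truncated polynomial they define already agrees with the exact solution to well within $10^{-3}$ everywhere on the interval:

```python
import math

# The six coefficients reported above for Example 1.
coeffs = [2.0, -2.0,
          416995243 / 834315644,
          -105164777 / 636059757,
          69800031 / 1811256752,
          -6535788 / 1275358859]

def y_approx(t):
    """Trial polynomial (2) with the reported coefficients."""
    return sum(c * t**i for i, c in enumerate(coeffs))

def y_exact(t):
    """Exact solution of (9)."""
    return 1 - t + math.exp(-t)

# Maximum absolute error on a uniform grid over [0, 1].
max_err = max(abs(y_approx(n / 100) - y_exact(n / 100)) for n in range(101))
```

The initial conditions are satisfied exactly ($\psi_0 = 2$, $\psi_1 = -2$), and the remaining coefficients closely track the Taylor coefficients of the exact solution.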

4.2. Example 2

Consider the equation

$y''(t) = (1 + t^2)\, y(t); \quad y(0) = 1, \quad y'(0) = 0$ (10)

with the exact solution $y(t) = \exp\left(\frac{t^2}{2}\right)$.

Implementing the proposed scheme with $k = 11$, we obtain $\{\psi_i \mid i = 0(1)11\}$ as

$\left\{1,\ 0,\ \frac{1306409430}{2612828131},\ \frac{29397245}{1713857114397},\ \frac{3187969586}{25524507753},\ \frac{172091099}{436085951591},\ \frac{313833621}{15857243966},\ \frac{382909153}{201794651238},\ \frac{117010789}{496614906383},\ \frac{766119929}{389265107664},\ -\frac{130287162}{172534329575},\ \frac{125796527}{456658410146}\right\}$

Table 1. The absolute errors of $y(t)$ in Problem (9) using the proposed scheme compared with the Runge-Kutta Nystrom method.

Table 2. The absolute errors of $y(t)$ in Problem (10) using the proposed scheme compared with the Runge-Kutta Nystrom method.

Table 2 shows the absolute errors of the results obtained by our algorithm compared with the Runge-Kutta Nystrom method. Again, our approach gives smaller absolute errors.

5. Conclusion

In this paper, we have shown how the problem of obtaining an approximate solution to (1) can be converted to an optimization problem and then solved using simulated annealing. The results obtained compete favourably with those of the Runge-Kutta Nystrom method.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] George, D.M. (2006) On the Application of Genetic Algorithms to Differential Equations. Romanian Journal of Economic Forecasting, 3, 5-9.

[2] Mastorakis, N.E. (2005) Numerical Solution of Non-Linear Ordinary Differential Equations via Collocation Method (Finite Elements) and Genetic Algorithms. Proceedings of the 6th WSEAS International Conference on Evolutionary Computing, Lisbon, 16-18 June 2005, 36-42.

[3] Mastorakis, N.E. (2006) Unstable Ordinary Differential Equations: Solution via Genetic Algorithms and the Method of Nelder-Mead. Proceedings of the 6th WSEAS International Conference on Systems Theory & Scientific Computation, Elounda, 21-23 August 2006, 1-6.

[4] Junaid, A., Raja, A.Z. and Qureshi, I.M. (2009) Evolutionary Computing Approach for the Solution of Initial Value Problems in Ordinary Differential Equations. World Academy of Science, Engineering and Technology, 55, 578-581.

[5] Omar, A.A., Zaer, A., Shaher, M. and Nabil, S. (2012) Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm. Abstract and Applied Analysis, 2012, Article ID: 205391.

[6] Bakre, O.F., Wusu, A.S. and Akanbi, M.A. (2015) Solving Ordinary Differential Equations with Evolutionary Algorithms. Open Journal of Optimization, 4, 69-73. https://doi.org/10.4236/ojop.2015.43009

[7] Wusu, A.S. and Akanbi, M.A. (2016) Solving Oscillatory/Periodic Ordinary Differential Equations with Differential Evolution Algorithms. Communications in Optimization Theory, 2016, Article ID: 7.

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.