Solution of Second-Order Ordinary Differential Equations via Simulated Annealing
1. Introduction
The use of techniques based on evolutionary algorithms for solving optimization problems has been gaining interest over the last few years. These algorithms use mechanisms inspired by biological evolution, such as reproduction, recombination, mutation, and selection. Since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise in physics, engineering, chemistry, biology, economics, and many other fields. Interest in solving DEs via evolutionary algorithms has also been growing: approximate solutions are obtained by converting the equations into optimization problems, which are then solved via optimization techniques. The use of the classical genetic algorithm to obtain approximate solutions of second-order initial value problems was considered in [1]. In [2], the author combined a genetic algorithm with the Nelder-Mead method for solving second-order initial value problems. In a later work [3], approximate solutions of first-order initial value problems were computed by combining the collocation method with genetic algorithms. An adaptation of neural networks for the solution of second-order initial value problems was proposed by the authors in [4]. A continuous genetic algorithm was used to compute the solution of two-point second-order ordinary differential equations in [5]. An adaptation of the differential evolution algorithm for the solution of second-order initial value problems was proposed in [6]. The authors in [7] used the differential evolution algorithm to obtain approximate solutions of second-order two-point boundary value problems with oscillatory/periodic behaviour. In this paper, we show that the simulated annealing algorithm can also be used to find very accurate approximate solutions of second-order initial value problems of the form
$y'' = f(t, y), \qquad y(t_0) = y_0, \qquad y'(t_0) = y_0'. \qquad (1)$
2. Basic Notions of Simulated Annealing Algorithm
Simulated annealing is a simple stochastic function minimizer. It is motivated by the physical process of annealing, in which a metal object is heated to a high temperature and allowed to cool slowly. The process allows the atomic structure of the metal to settle into a lower energy state, thus becoming a tougher metal. In optimization terminology, annealing allows the search to escape from a local minimum, and to explore and settle on a better, hopefully global, minimum.
At each iteration, a new point, $x_{\text{new}}$, is generated in the neighborhood of the current point, $x$. The radius of the neighborhood decreases with each iteration. The best point found so far, $x_{\text{best}}$, is also tracked. If $f(x_{\text{new}}) \le f(x_{\text{best}})$, then $x_{\text{new}}$ replaces both $x_{\text{best}}$ and $x$. Otherwise, $x_{\text{new}}$ replaces $x$ with probability $e^{b(i, \Delta f, f_0)}$. Here $b$ is the function defined by the Boltzmann exponent (the exponent of the probability function), $i$ is the current iteration, $\Delta f$ is the change in the objective function value, and $f_0$ is the value of the objective function from the previous iteration. The default definition of the function $b$ is
$b(i, \Delta f, f_0) = \dfrac{-\Delta f \, \log(i+1)}{10}.$
Simulated annealing uses multiple starting points, and finds an optimum starting from each of them. The default number of starting points, given by the parameter SearchPoints, is $\min(2d, 50)$, where $d$ is the number of variables; in this case $d = 1$, since the number of independent variables is one.
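By way of illustration, the acceptance scheme described above can be written out as the following minimal Python sketch. This code is not from the paper: the uniform jump distribution, the geometric shrink factor for the neighborhood radius, and the one-dimensional search space are all assumptions; a multistart driver over several starting points is sketched in Section 4.

import math
import random

def simulated_annealing(f, x0, radius=1.0, shrink=0.99, max_iter=10_000, seed=0):
    """Minimize f using the acceptance rule described above.

    A new point is drawn from a shrinking neighborhood of the current point.
    It replaces both the current and best points if it improves on the best
    point found so far; otherwise it replaces the current point with
    probability exp(b(i, df, f0)), where b is the default Boltzmann exponent
    b(i, df, f0) = -df * log(i + 1) / 10.
    """
    rng = random.Random(seed)
    x = best = x0
    fx = fbest = f(x0)
    for i in range(1, max_iter + 1):
        x_new = x + rng.uniform(-radius, radius)   # neighbor of the current point
        f_new = f(x_new)
        if f_new <= fbest:                         # improves on the best point so far
            x, fx = x_new, f_new
            best, fbest = x_new, f_new
        else:
            df = f_new - fx                        # change in the objective value
            b = -df * math.log(i + 1) / 10         # default Boltzmann exponent
            if rng.random() < math.exp(b):
                x, fx = x_new, f_new               # accept an uphill move
        radius *= shrink                           # neighborhood radius decreases
    return best, fbest

Running this procedure from several initial points and keeping the best result reproduces the multistart behaviour described above.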
3. Proposed Method
Consider the second-order initial value problem (1), and assume an approximate solution of the form
(2)
where the coefficients are parameters to be determined. Substituting (2) and its second derivative into (1) gives
(3)
Using the initial conditions we have the constraint that
(4)
At each node point $t_n$, we require that
(5)
To solve the above problem, we need to find the set of parameters that minimizes the expression
(6)
where h is the steplength. We now formulate the problem as an optimization problem in the following way:
(7)
(8)
Using the simulated annealing algorithm, we are able to obtain the set of parameters that minimizes this expression.
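To make the formulation concrete, the following Python sketch builds a penalized least-squares objective of this kind and minimizes it. It is not the paper's implementation: since the trial form (2) is not reproduced above, a polynomial trial solution is assumed for illustration, the constraint (4) is handled by a quadratic penalty, and SciPy's dual_annealing (a related annealing variant) stands in for the algorithm of Section 2. The helper make_objective and the test problem y'' = -y are hypothetical.

import numpy as np
from scipy.optimize import dual_annealing

def make_objective(f, t0, y0, dy0, t_nodes, penalty=1.0e6):
    """Sum of squared residuals of y'' = f(t, y) at the node points, plus a
    quadratic penalty enforcing the initial conditions y(t0)=y0, y'(t0)=dy0."""
    def objective(a):
        p = np.polynomial.Polynomial(a)     # assumed trial solution y(t)
        dp, d2p = p.deriv(1), p.deriv(2)    # y'(t) and y''(t)
        residual = d2p(t_nodes) - f(t_nodes, p(t_nodes))
        ic = (p(t0) - y0) ** 2 + (dp(t0) - dy0) ** 2
        return float(np.sum(residual ** 2) + penalty * ic)
    return objective

# Illustrative test problem (not one of the paper's examples):
# y'' = -y, y(0) = 0, y'(0) = 1, with exact solution sin t.
t_nodes = np.linspace(0.0, 1.0, 11)         # node points with steplength h = 0.1
obj = make_objective(lambda t, y: -y, 0.0, 0.0, 1.0, t_nodes)
res = dual_annealing(obj, bounds=[(-2.0, 2.0)] * 6, seed=0)
print(res.x)                                # fitted coefficients a0, ..., a5

The penalty weight trades off satisfying the initial conditions against minimizing the residuals at the nodes; a large value effectively enforces (4) as a hard constraint.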
4. Numerical Experiments
We now perform some numerical experiments to confirm the theoretical expectations for the proposed method. In this section, our algorithm is compared with the Runge-Kutta-Nyström method. The parameters needed to implement the simulated annealing algorithm are set as follows (a hypothetical translation of these settings into code is sketched after the list):
exponent of the probability function (Boltzmann Exponent = 1).
set of initial points (Initial Points = 1000).
maximum number of iterations to stay at a given point (Level Iterations = 50).
scale for the random jump (Perturbation Scale = 1.0).
starting value for the random number generator (Random Seed = 0).
number of initial points (Search Points = 0).
tolerance for accepting constraint violations (Tolerance = 0.000001).
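These option names correspond, for instance, to the "SimulatedAnnealing" method options of Mathematica's NMinimize. A rough, hypothetical translation into a multistart driver for the Python sketch of Section 2 is shown below; the mapping of options onto arguments and the search interval are assumptions, the simulated_annealing function from the Section 2 sketch is assumed to be in scope, and options the sketch does not expose (BoltzmannExponent, LevelIterations, SearchPoints, Tolerance) would be consumed by a fuller implementation.

import random

# Hypothetical driver wiring the settings above into the Section 2 sketch.
settings = {
    "PerturbationScale": 1.0,   # scale of the random jump (initial radius)
    "RandomSeed": 0,            # starting value for the random number generator
    "InitialPoints": 1000,      # number of starting points to try
}

def minimize_multistart(f, lo=-2.0, hi=2.0):
    """Run the Section 2 sketch from many starting points; keep the best run."""
    rng = random.Random(settings["RandomSeed"])
    starts = [rng.uniform(lo, hi) for _ in range(settings["InitialPoints"])]
    runs = [
        simulated_annealing(f, x0, radius=settings["PerturbationScale"], seed=k)
        for k, x0 in enumerate(starts)
    ]
    return min(runs, key=lambda pair: pair[1])  # (best_x, best_f)

best_x, best_f = minimize_multistart(lambda x: (x - 1.0) ** 2)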
4.1. Example 1
We examine the following linear equation
(9)
with a known exact solution.
Implementing the proposed scheme, we obtain the minimizing parameter set. Using a uniform steplength, the absolute errors obtained by our proposed algorithm are compared with those produced by the well-known Runge-Kutta-Nyström method in Table 1. The comparison shows that our approach gives better results than the Runge-Kutta-Nyström method.
4.2. Example 2
Consider the equation
(10)
with a known exact solution.
Implementing the proposed scheme, we obtain the minimizing parameter set.
Table 1. The absolute errors of y(t) in Problem (9): the proposed scheme compared with the Runge-Kutta-Nyström method.
Table 2. The absolute errors of y(t) in Problem (10): the proposed scheme compared with the Runge-Kutta-Nyström method.
Table 2 shows the absolute errors of the results obtained by our algorithm compared with the Runge-Kutta-Nyström method. Again, our approach gives smaller absolute errors.
5. Conclusion
In this paper, we have shown how the problem of obtaining an approximate solution to (1) can be converted into an optimization problem and then solved using simulated annealing. The results obtained compare favourably with those of the Runge-Kutta-Nyström method.