
A variation of the direct Taylor expansion algorithm is suggested and applied to several linear and nonlinear differential equations of interest in physics and engineering, and the results are compared with those obtained from other algorithms. It is shown that the suggested algorithm competes strongly with other existing algorithms, both in accuracy and ease of application, while demanding a shorter computation time.

With the advent of high-speed personal computers and workstations and the decrease in the cost of computer resources in general, numerical methods and computer simulation have become an integral part of the scientific method and a third approach to the study of physical problems, in addition to theoretical and experimental methods.

The problem of solving differential equations numerically has been of interest to mathematicians and scientists alike since long before the appearance of modern computers. One of the oldest and simplest algorithms is the Euler method, also known as the Euler-Cauchy method or the polygonal method [

$$\frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x} \qquad (1)$$

to solve first-order differential equations of the form [

$$\frac{dy}{dx} = f(x, y) \qquad (2)$$

subject to the initial condition $y(x_0) = y_0$. The method replaces the differential equation by a difference equation and, using a small enough step size $\Delta x = h$, advances the solution from $x_n$ to $x_{n+1} = x_n + h$ through

$$y_{n+1} = y_n + h\, f(x_n, y_n) \qquad (3)$$

The Euler method can also be viewed as a Taylor expansion of the function about the point $x_n$ in which only the first two terms are retained. The remaining terms, which constitute the error in the Euler algorithm, are given by

$$E = \frac{1}{2!}\left.\frac{d^2 y}{dx^2}\right|_{x=\xi} h^2 \qquad (4)$$

where $\xi$ is some value of $x$ in the interval of width $h$ under consideration. Consequently, the local error in the Euler method is of the order of $h^2$, resulting in a global error of the order of $h$. The Euler algorithm is, therefore, first order [
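As a minimal sketch of Equation (3) (our own illustration, not code from this article), consider the test problem $y' = y$, $y(0) = 1$, whose exact solution is $e^x$:

```python
import math

def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from x0 over n Euler steps of size h, Equation (3)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# Test problem: y' = y, y(0) = 1; exact solution is e^x.
y_approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(y_approx)  # close to e = 2.71828..., with an O(h) global error
```

With $h = 0.001$ the result agrees with $e$ to about three decimal places, consistent with the first-order global error.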

An improved version of the Euler method is obtained by retaining three terms in the Taylor expansion of the function instead of two, yielding a second-order algorithm [

$$y_{n+1} = y_n + h\, f\!\left[x_n + \frac{h}{2},\; y_n + \frac{h}{2} f(x_n, y_n)\right] \qquad (5)$$

This algorithm is known as the modified Euler method, the midpoint method, or the second-order Runge-Kutta method [
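A sketch of Equation (5) (again our own illustration, on the test problem $y' = y$, $y(0) = 1$, with exact solution $e^x$):

```python
import math

def rk2_midpoint(f, x0, y0, h, n):
    """One midpoint (second-order Runge-Kutta) step per iteration, Equation (5)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x + h / 2, y + (h / 2) * f(x, y))
        x += h
    return y

# Test problem: y' = y, y(0) = 1; exact solution is e^x.
y_mid = rk2_midpoint(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(y_mid)  # second-order accurate: O(h^2) global error
```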

Yet another way of improving the order of the Euler method is to use the average value of the derivative at the beginning and at the end of each interval,

$$y_{n+1} = y_n + h\, \frac{f(x_n, y_n) + f(x_{n+1}, y_{n+1})}{2} \qquad (6)$$

where $y_{n+1}$ on the right-hand side is obtained from Equation (3). This algorithm is referred to as the Adams-Bashforth rule [
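A predictor-corrector sketch of Equation (6) (our own illustration, once more on the test problem $y' = y$, $y(0) = 1$):

```python
import math

def euler_trapezoid(f, x0, y0, h, n):
    """Euler predictor, Equation (3), followed by the averaged corrector, Equation (6)."""
    x, y = x0, y0
    for _ in range(n):
        y_pred = y + h * f(x, y)                     # predictor, Equation (3)
        y += (h / 2) * (f(x, y) + f(x + h, y_pred))  # corrector, Equation (6)
        x += h
    return y

# Test problem: y' = y, y(0) = 1; exact solution is e^x.
y_trap = euler_trapezoid(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(y_trap)  # second-order accurate, like the midpoint method
```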

The accuracy of the Euler method can be improved further by including higher-order terms of the Taylor expansion in the numerical calculations. Thus, by including the first five terms, one achieves a fourth-order algorithm. This approach, also referred to as the “Creeping up” process, has been mentioned in a limited number of references [

Currently the most widely used numerical algorithms for solving differential equations are the fourth-order Runge-Kutta (RK), fourth-order Adams-Bashforth-Moulton (ABM), and the fourth-order Milne methods. The RK algorithm [

The objective of this article is to discuss a variation of the direct Taylor series (DTS) algorithm for the solution of first- and higher-order differential equations. We show not only that this algorithm remains accurate away from the initial point, but also that evaluating the higher derivatives needed for accuracies comparable to those of the RK, ABM, and Milne methods is indeed quite simple. Finally, the accuracy and ease of application of the DTS method are explicitly demonstrated by considering several important second-order linear and nonlinear differential equations of mathematical physics and comparing their solutions using the fourth-order DTS, RK, ABM, and Milne methods.

Consider a first-order differential equation given by (2). We expand the solution of this differential equation in a Taylor series about the initial point $x_n$ of each interval to obtain its value at the end of that interval, $x_{n+1} = x_n + h$:

$$y_{n+1} = y_n + \frac{y_n^{(1)}}{1!}h + \frac{y_n^{(2)}}{2!}h^2 + \frac{y_n^{(3)}}{3!}h^3 + \frac{y_n^{(4)}}{4!}h^4 + \cdots \qquad (7)$$

where $y_n^{(1)}$, $y_n^{(2)}$, $y_n^{(3)}$, and $y_n^{(4)}$ are the first, second, third, and fourth derivatives of the function evaluated at $x = x_n$. Using the initial condition of the problem, $y(x_0) = y_0$, this expansion can be used iteratively to solve the differential equation up to some final value of the independent variable. The higher derivatives of the function, which are required in Equation (7), can be obtained by successive differentiation of the original differential Equation (2). Thus,

$$\begin{aligned}
y^{(1)} &= f(x, y) \\
y^{(2)} &= \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\, y^{(1)} \\
y^{(3)} &= \frac{\partial^2 f}{\partial x^2} + 2\frac{\partial^2 f}{\partial x \partial y}\, y^{(1)} + \frac{\partial^2 f}{\partial y^2}\left(y^{(1)}\right)^2 + \frac{\partial f}{\partial y}\, y^{(2)} \\
y^{(4)} &= \frac{\partial^3 f}{\partial x^3} + 3\left(\frac{\partial^3 f}{\partial x^2 \partial y} + \frac{\partial^3 f}{\partial x \partial y^2}\, y^{(1)}\right) y^{(1)} + \frac{\partial^3 f}{\partial y^3}\left(y^{(1)}\right)^3 + 3\left(\frac{\partial^2 f}{\partial x \partial y} + \frac{\partial^2 f}{\partial y^2}\, y^{(1)}\right) y^{(2)} + \frac{\partial f}{\partial y}\, y^{(3)} \\
&\;\;\vdots
\end{aligned} \qquad (8)$$

Although these equations look tedious, their evaluation in most cases is quite straightforward and results in fairly simple expressions.
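For instance, for the test problem $y' = x + y$ (our own choice, not one of the article's examples), Equation (8) collapses to $y^{(2)} = 1 + y^{(1)}$, with every higher derivative equal to $y^{(2)}$. A fourth-order DTS sketch for this problem:

```python
import math

def dts4(x0, y0, h, n):
    """Fourth-order direct Taylor series steps, Equation (7), for y' = x + y.
    By Equation (8): y'' = 1 + y', and all higher derivatives equal y''."""
    x, y = x0, y0
    for _ in range(n):
        d1 = x + y
        d2 = 1 + d1
        d3 = d2
        d4 = d3
        y += d1 * h + d2 * h**2 / 2 + d3 * h**3 / 6 + d4 * h**4 / 24
        x += h
    return y

# With y(0) = 1 the exact solution is y = 2 e^x - x - 1, so y(1) = 2e - 2.
y_dts = dts4(0.0, 1.0, 0.1, 10)
print(y_dts, 2 * math.e - 2)
```

With $h = 0.1$ the result matches the exact value $2e - 2$ to roughly five decimal places, as expected for a fourth-order method.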

Using terms up to and including the $k$-th order in Equation (7) (i.e., retaining $k+1$ terms) results in a local error of

$$E = \frac{y^{(k+1)}(\xi)}{(k+1)!}\, h^{k+1} \qquad (9)$$

and, thus, a global error of the order of $h^k$. Since the commonly used high-order numerical algorithms for solving differential equations are fourth order, we restrict our attention to Taylor expansions up to and including the fourth derivative, resulting in a fourth-order algorithm for comparison.

The DTS algorithm can be extended to numerically solve a differential equation of any order. To demonstrate this, consider a second-order differential equation given by

$$\frac{d^2 y}{dx^2} = f\!\left(x, y, y^{(1)}\right) \qquad (10)$$

subject to the initial conditions $y(x_0) = y_0$ and $y^{(1)}(x_0) = y_0^{(1)}$. Of course, this equation can always be reduced to a system of two first-order differential equations. Alternatively, one can extend the Taylor algorithm as follows: From Equation (10) and its differentiation, the second and higher derivatives of the function are obtained and evaluated at $x_0$. Then Equation (7) is used to advance the solution from the initial point $x_0$ to $x_1$. To advance the solution from $x_1$ to $x_2$, however, various derivatives of the function at $x_1$ are needed. These, in turn, can be obtained from the Taylor expansions of the derivatives themselves,

$$\begin{aligned}
y_1^{(1)} &= y_0^{(1)} + \frac{y_0^{(2)}}{1!}h + \frac{y_0^{(3)}}{2!}h^2 + \frac{y_0^{(4)}}{3!}h^3 + \cdots \\
y_1^{(2)} &= y_0^{(2)} + \frac{y_0^{(3)}}{1!}h + \frac{y_0^{(4)}}{2!}h^2 + \frac{y_0^{(5)}}{3!}h^3 + \cdots \\
y_1^{(3)} &= y_0^{(3)} + \frac{y_0^{(4)}}{1!}h + \frac{y_0^{(5)}}{2!}h^2 + \frac{y_0^{(6)}}{3!}h^3 + \cdots \\
y_1^{(4)} &= y_0^{(4)} + \frac{y_0^{(5)}}{1!}h + \frac{y_0^{(6)}}{2!}h^2 + \frac{y_0^{(7)}}{3!}h^3 + \cdots \\
&\;\;\vdots
\end{aligned} \qquad (11)$$

and, using Equation (7), the solution is advanced from $x_1$ to $x_2$. Iteration of these steps eventually yields the value of the function at the desired final value of the variable. In Equation (11), the order of the highest derivative retained on the right side of each equation should be the same as the order of the numerical algorithm required. Thus, for the numerical algorithm to be fourth order, derivatives up to and including the fourth order should be included in the expansions.
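A sketch of this procedure for a second-order equation of our own choosing, $y'' = -y$ with $y(0) = 0$, $y^{(1)}(0) = 1$ (exact solution $\sin x$). Here $y^{(2)} = -y$ comes from the equation itself, and differentiating once and twice gives $y^{(3)} = -y^{(1)}$ and $y^{(4)} = -y^{(2)}$; both $y$ and $y^{(1)}$ are advanced by Taylor expansion, per Equations (7) and (11), retaining derivatives up to the fourth order:

```python
import math

def dts4_oscillator(h, n):
    """Fourth-order DTS for y'' = -y, y(0) = 0, y'(0) = 1 (exact solution sin x)."""
    y, d1 = 0.0, 1.0
    for _ in range(n):
        d2 = -y    # from the differential equation itself
        d3 = -d1   # differentiating the equation once
        d4 = -d2   # and twice
        y, d1 = (
            y + d1 * h + d2 * h**2 / 2 + d3 * h**3 / 6 + d4 * h**4 / 24,
            d1 + d2 * h + d3 * h**2 / 2 + d4 * h**3 / 6,
        )
    return y

y_sin = dts4_oscillator(0.1, 10)
print(y_sin, math.sin(1.0))
```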

The suggested variation of the direct Taylor series, which we refer to as the DTS method, differs from the standard Taylor series method in the following way. In the standard method, the function is calculated at the required value of the variable x directly from the value of the function and its derivatives at the initial value x 0 [

$$y \cong y_0 + \frac{y_0^{(1)}}{1!}(x - x_0) + \frac{y_0^{(2)}}{2!}(x - x_0)^2 + \cdots + \frac{y_0^{(n)}}{n!}(x - x_0)^n \qquad (12)$$

In the suggested variation of the algorithm (the DTS method), on the other hand, the interval $[x_0, x]$ is divided into many subintervals, each of width $h$. The function and its derivatives are then Taylor expanded and advanced from subinterval to subinterval until the function is evaluated at the required value of the variable $x$, resulting in much greater accuracy.
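The difference is easy to exhibit numerically. For the test problem $y' = y$, $y(0) = 1$ (our own illustration), every derivative at a point equals $y$ there, so the fourth-order Taylor polynomial is simple; a single expansion about $x_0 = 0$, as in Equation (12), is compared below with the stepped DTS evaluation at $x = 2$:

```python
import math

def taylor4(y, h):
    """Fourth-order Taylor polynomial for y' = y, where every derivative equals y."""
    return y * (1 + h + h**2 / 2 + h**3 / 6 + h**4 / 24)

x_final = 2.0
single = taylor4(1.0, x_final)   # Equation (12): one expansion about x0 = 0

stepped = 1.0                    # DTS: 20 subintervals of width h = 0.1
for _ in range(20):
    stepped = taylor4(stepped, 0.1)

print(single, stepped, math.exp(x_final))
```

The single expansion is off by several percent at $x = 2$, while the stepped version agrees with $e^2$ to about five decimal places.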

Applications of the DTS (direct Taylor series) method, as well as other common algorithms, to numerical solutions of first-order differential equations are straightforward and will not be discussed here.

| Differential Equation | $x_f$ | RK | ABM | Milne | DTS | True |
|---|---|---|---|---|---|---|
| (a) | 4 | | | | | |
| (b) | 3 | | | | | |
| (c) | 4 | | | | | |
| (d) | 3 | | | | | |
| (e) | 5 | | | | | |
| (f) | 2 | 1.1485 | 1.1485 | 1.1485 | 1.1485 | 1.1485 |
| (g) | 2 | 0.6275 | 0.6275 | 0.6275 | 0.6275 | 0.6275 |
| (h) | 1 | | | | | |
| (i) | 2 | 0.2338 | 0.2338 | 0.2338 | 0.2338 | 0.2338 |
| (j) | 1 | | | | | |

Each differential equation is solved and the function is evaluated at some final value of the variable $x_f$ by the fourth-order RK (Runge-Kutta), ABM (Adams-Bashforth-Moulton), Milne, and DTS algorithms, using a step size $h = 0.1$ in each case.


Equations (g)-(j) are some of the common linear differential equations of mathematical physics, namely, the Bessel, Legendre, Laguerre, and Hermite equations, respectively [

In all cases studied, the DTS algorithm required a noticeably shorter CPU time than any other algorithm. In fact, the average CPU time for the DTS programs was found to be 21%, 58%, and 68% shorter than those for the corresponding RK, Milne, and ABM programs, respectively.

Based on the results listed in

Evaluation of higher derivatives does not pose any serious difficulty. Indeed, higher derivatives become less problematic for differential equations of higher order. For example, with the fourth-order DTS algorithm and a second-order differential equation, only two higher derivatives should be computed from the differential equation; for a fourth-order equation, no higher-order derivatives are needed.

A striking feature of the DTS method is the ease with which its order of accuracy can be increased. For instance, to increase the order of the algorithm from four to six, one only needs to retain two additional terms in the Taylor expansion. Such an increase of the order of algorithm is not a trivial task in the RK (Runge-Kutta), ABM (Adams-Bashforth-Moulton), or Milne method. Similarly, while generalizations of the latter methods to higher-order differential equations are not trivial, in the former case, it can be accomplished by simply incorporating Taylor expansions for higher derivatives, as we shall demonstrate in the following paragraph.

Third-order differential equations are not common in physics and engineering. Fourth-order equations, on the other hand, are occasionally encountered in some cases, such as the bending of beams. As an example, we demonstrate the power of the DTS algorithm for the following simple fourth-order linear differential equation, for which the analytical solution exists for comparison,

$$4\frac{d^4 y}{dx^4} - 5\frac{d^2 y}{dx^2} + y = 0 \qquad (13)$$

subject to the initial conditions

$$y(0) = 1, \quad y^{(1)}(0) = \frac{1}{2}, \quad y^{(2)}(0) = 1, \quad y^{(3)}(0) = \frac{1}{8} \qquad (14)$$

With a step size $h = 0.1$, the fourth-order DTS method yields $y(1) = 2.0637$. With the sixth-order algorithm and the same step size, we find $y(1) = 2.0641757$. These compare with the true value of 2.0641759, obtained from the analytical solution of the differential equation,

$$y = \cosh x + \sinh\frac{x}{2} \qquad (15)$$

The discrepancies between the true and the numerical values in the two cases are, respectively, of the order of $h^4$ and $h^6$, as they should be.
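A sketch of the fourth-order calculation for Equations (13)-(14), written under one reading of the truncation rule stated earlier for Equation (11) (each derivative's expansion is cut off at $y^{(4)}$, so no derivatives of the equation itself are needed):

```python
import math

def dts4_fourth_order(h, n):
    """Fourth-order DTS for 4 y'''' - 5 y'' + y = 0, Equations (13)-(14).
    The fourth derivative comes directly from the equation; each Taylor
    expansion is truncated at y^(4)."""
    y, d1, d2, d3 = 1.0, 0.5, 1.0, 0.125          # initial conditions, Equation (14)
    for _ in range(n):
        d4 = (5 * d2 - y) / 4                     # solve Equation (13) for y''''
        y, d1, d2, d3 = (
            y + d1 * h + d2 * h**2 / 2 + d3 * h**3 / 6 + d4 * h**4 / 24,
            d1 + d2 * h + d3 * h**2 / 2 + d4 * h**3 / 6,
            d2 + d3 * h + d4 * h**2 / 2,
            d3 + d4 * h,
        )
    return y

y_num = dts4_fourth_order(0.1, 10)
exact = math.cosh(1.0) + math.sinh(0.5)           # Equation (15) at x = 1
print(y_num, exact)
```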

In conclusion, the direct Taylor series (DTS) algorithm is simple, easy to use, and accurate; it is extendable to higher accuracies by simply retaining higher-order terms in the Taylor expansions, while requiring noticeably less computation time. Higher-order differential equations can be solved by the straightforward inclusion of Taylor expansions of higher derivatives. Other algorithms, such as the Runge-Kutta, Adams-Bashforth-Moulton, and Milne methods, are elegant but require complete construction of their working equations, which can be quite tedious depending on the order of the algorithm.

This work was supported by a URAP grant from the University of Wisconsin-Parkside.

Mohazzabi, P. and Becker, J.L. (2017) Numerical Solution of Differential Equations by Direct Taylor Ex- pansion. Journal of Applied Mathematics and Physics, 5, 623-630. https://doi.org/10.4236/jamp.2017.53053