
In this note we consider some basic, yet unusual, issues pertaining to the accuracy and stability of numerical integration methods used to follow the solution of first order and second order initial value problems (IVPs). Included are remarks on multiple solutions, multi-step methods, the effect of initial value perturbations, as well as slowing and advancing the computed motion in second order problems.

Numerically following the solution of an evolution equation in time is one of the central tasks of numerical analysis, and it has received vast and repeated attention over the years; see [

The decisive significance of Taylor’s theorem (as can be looked up in any elementary calculus textbook) to applied mathematics in general, and to numerical analysis in particular, is that it ascertains that every differentiable function looks locally like a polynomial. Polynomials have the advantage of being easily computed, differentiated and integrated, relieving us of the burden of possibly heavy symbolic manipulation of functions that are often only implicitly given as the solution of an initial value problem (IVP) or a boundary value problem (BVP). We look at this theorem here from an unusual angle.

The mean value theorem (MVT) states that if function f(x), with f(0) = 0 and f′(0) ≠ 0, is continuous in the closed interval [0, x] and differentiable on the open interval (0, x), then a point ξ exists strictly inside the interval, 0 < ξ < x, at which the slope of the chord equals the slope of the tangent line to f(x), or

(f(x) − f(0))/x = f′(ξ), or f(x) = x f′(ξ), 0 < ξ < x (1)

implying that f(x) is nearly linear near x = 0.

The MVT is a direct result of the geometrically plausible Rolle’s theorem. The generalized mean value theorem (GMVT) results from applying the MVT (or Rolle’s theorem) to the higher order derivative functions of f(x). Here it is in its most concise form: let function f(x) be sufficiently smooth near x = 0, and such that

f ( 0 ) = 0 , f ′ ( 0 ) = 0 , f ″ ( 0 ) = 0 , ⋯ , f ( n − 1 ) ( 0 ) = 0 , f ( n ) ( 0 ) ≠ 0. (2)

Then, function f(x) may be expressed in the form

f(x) = (1/n!) x^n f^(n)(ξ), 0 < ξ < x (3)

implying that if f^{(n)}(x) is bounded near x = 0, then f(x), like x^{n}, is small if | x | ≪ 1 .

We consider first the simplest case of n = 1, for which Equation (3) is

f ( x ) = x f ′ ( ξ ) , f ( 0 ) = 0 , f ′ ( 0 ) ≠ 0 , 0 < ξ < x (4)

and take

f ( x ) = A x + B x 2 + C x 3 , f ′ ( x ) = A + 2 B x + 3 C x 2 . (5)

such that A = f ′ ( 0 ) , B = f ″ ( 0 ) / 2 ! , C = f ‴ ( 0 ) / 3 ! .

We further assume that, approximately

ξ = k x + m x 2 (6)

and have from this that

f(x) − x f′(ξ) = B(1 − 2k)x^2 + (C − 3Ck^2 − 2Bm)x^3 + O(x^4). (7)

Annulling the first two terms of the above equation results in

k = 1/2, m = (1/8)(C/B), or m = (1/24) f‴(0)/f″(0) (8)

or generally, for any n in Equation (2)

ξ = x/(n + 1), if |x| ≪ 1. (9)
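The prediction ξ ≈ x/(n + 1) is easy to test numerically. In the sketch below (Python; the helper names are ours), the remainder equations for f(x) = e^x − 1 (n = 1) and g(x) = e^x − 1 − x (n = 2) are solved for ξ in closed form:

```python
import math

# GMVT check for f(x) = e^x - 1 (n = 1) and g(x) = e^x - 1 - x (n = 2);
# both remainder equations admit a closed-form solution for xi.

def xi_n1(x):
    # f(x) = x f'(xi):  e^x - 1 = x e^xi  =>  xi = log((e^x - 1)/x)
    return math.log((math.exp(x) - 1.0) / x)

def xi_n2(x):
    # g(x) = (1/2) x^2 g''(xi):  xi = log(2 (e^x - 1 - x)/x^2)
    return math.log(2.0 * (math.exp(x) - 1.0 - x) / x ** 2)

x = 0.1
print(xi_n1(x) / x)  # close to 1/2, per Equation (9) with n = 1
print(xi_n2(x) / x)  # close to 1/3, per Equation (9) with n = 2
```

For x = 0.1 the two ratios come out near 0.504 and 0.336, in line with 1/2 and 1/3.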

Some examples will convince us of the decisive usefulness of the GMVT theorem to numerical analysis. As a first example, we use the theorem to get a good polynomial approximation to e^{x} near x = 0. We start by writing

r ( x ) = e x − ( a + b x ) (10)

and propose to fix free parameters a and b such that r ( 0 ) = 0 , r ′ ( 0 ) = 0 , namely, such that

r ( x ) = e x − ( 1 + x ) , r ″ ( x ) = e x , r ″ ( 0 ) = 1 ≠ 0. (11)

Now, by fundamental Equation (3), we may write r(x) of r ( 0 ) = 0 , r ′ ( 0 ) = 0 , as

r ( x ) = 1 2 x 2 r ″ ( ξ ) or e x = 1 + x + 1 2 x 2 e ξ , 0 < ξ < x (12)

providing us with a good linear polynomial approximation to e^{x} in the vicinity of x = 0.

Moreover, since e^{x} is an increasing function, we readily obtain from Equation (12) the strict inequalities

1 + x + (1/2)x^2 < e^x < 1 + x + (1/2)x^2 e^x (13)

or

1 + x + (1/2)x^2 < e^x < (1 + x)/(1 − (1/2)x^2). (14)

The reason the lower bound on e^x in the above inequality (14) is better than the upper bound is that as the order of the approximation increases, ξ moves ever closer to the osculation point x = 0. Here, since r(0) = 0, r′(0) = 0, r″(0) ≠ 0, then, according to Equation (9), ξ = x/3, nearly, if |x| ≪ 1.

Replacing e^{ξ} in Equation (12) by 1 + ξ, with ξ = x/3, we obtain, forthwith, the better approximation

e x = 1 + x + 1 2 x 2 ( 1 + ξ ) = 1 + x + 1 2 x 2 ( 1 + 1 3 x ) = 1 + x + 1 2 x 2 + 1 6 x 3 . (15)

Otherwise we may start from

e x = 1 + x + 1 2 x 2 + 1 6 x 3 e ξ , 0 < ξ < x , (16)

take ξ = k x + m x 2 , expand further

e ξ = 1 + k x + ( 1 2 k 2 + m ) x 2 + ( 1 6 k 3 + k m ) x 3 + ( 1 24 k 4 + 1 2 k 2 m + 1 2 m 2 ) x 4 + ⋯ (17)

or

e x = 1 + x + 1 2 x 2 + 1 6 x 3 + k 6 x 4 + ( k 2 12 + m 6 ) x 5 + ( k 3 36 + k m 6 ) x 6 + ⋯ (18)

To have in the above equation as many correct terms as possible we set

k 6 = 1 4 ! and ( k 2 12 + m 6 ) = 1 5 ! (19)

resulting in

ξ = (1/4)x + (3/160)x^2, if |x| ≪ 1. (20)
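Equation (20) can be checked against the exact ξ obtained by solving Equation (16) for ξ; a small Python sketch:

```python
import math

# Exact xi in e^x = 1 + x + x^2/2 + (x^3/6) e^xi, compared with the
# asymptotic estimate xi = x/4 + (3/160) x^2 of Equation (20).
def xi_exact(x):
    return math.log(6.0 * (math.exp(x) - 1.0 - x - 0.5 * x ** 2) / x ** 3)

x = 0.2
approx = x / 4.0 + 3.0 * x ** 2 / 160.0
print(xi_exact(x), approx)  # agree to about 1e-5
```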

Taylor’s theorem is not restricted to polynomials, but may be advantageously used to directly construct other approximating functions. For instance, we may start with

r(x) = √(1 + x) − (a cos(x) + b sin(x)) (21)

and use free numbers a and b to enforce r ( 0 ) = 0 and r ′ ( 0 ) = 0 , to have

r(x) = √(1 + x) − (cos(x) + (1/2)sin(x)), r″(x) = −(1/4)(1 + x)^(−3/2) + cos(x) + (1/2)sin(x), r″(0) = 3/4 ≠ 0. (22)

Consequently, the Taylor, or the GMVT, form of r(x) is

r(x) = (1/2)x^2 r″(ξ) = (1/2)x^2 (−(1/4)(1 + ξ)^(−3/2) + cos(ξ) + (1/2)sin(ξ)), 0 < ξ < x (23)

or, asymptotically, as x → 0 , ξ → 0 ,

√(1 + x) = cos(x) + (1/2)sin(x) + (3/8)x^2, if |x| ≪ 1 (24)
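Since r″(0) = 3/4, the induced approximation √(1 + x) ≈ cos(x) + sin(x)/2 + (3/8)x^2 can be checked numerically; the residual is O(x^3) (roughly 7x^3/48, by our estimate from one more derivative of r):

```python
import math

# Check sqrt(1+x) against cos(x) + sin(x)/2 + (3/8) x^2 near x = 0.
x = 0.1
approx = math.cos(x) + 0.5 * math.sin(x) + 0.375 * x ** 2
print(math.sqrt(1.0 + x) - approx)  # small, of order x^3
```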

Rational approximations are also desirable, and efficient. Here we start with, say

r(x) = e^x − (a + bx)/(1 + cx) (25)

of the three free parameters a, b, c. Imposing on r(x) the conditions

r ( 0 ) = 0 , r ′ ( 0 ) = 0 , r ″ ( 0 ) = 0 (26)

we readily have

e^x = (2 + x)/(2 − x) − (1/12)x^3, if x ≪ 1. (27)
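The rational approximation (27) and its cubic error term are readily verified numerically; a sketch:

```python
import math

# The [1/1] rational approximant (2+x)/(2-x) to e^x; by Equation (27)
# the error e^x - (2+x)/(2-x) should be close to -x^3/12 for small x.
x = 0.1
err = math.exp(x) - (2.0 + x) / (2.0 - x)
print(err, -x ** 3 / 12.0)  # err is negative and close to -x^3/12
```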

If the point of osculation is not x = 0, but x = a, then x in the theorem is shifted to x − a.

In the numerical integration of IVPs we are constantly confronted by the need to solve linear homogeneous recursions.

The first order, homogeneous, recursion

y n + 1 + b y n = 0 , n = 0 , 1 , 2 , ⋯ (28)

where b is a constant independent of n, is brought, without much ado, to the explicit, closed-form representation

y n = ( − b ) n y 0 (29)

by merely repeating the recursion. We note that if | b | > 1 , then y_{n} keeps growing with n, while if | b | < 1 , then y n → 0 , as n → ∞ .

Next we consider the three-term homogeneous recursion.

Let the sequence y 0 , y 1 , y 2 , ⋯ , y n − 1 , y n , y n + 1 be generated by the homogeneous recursion

y n + 2 + b y n + 1 + c y n = 0 (30)

with coefficients b and c assumed independent of n. Recursion (30) is satisfied by y_n = z^n provided z is a root of the characteristic equation

z 2 + b z + c = 0. (31)

In case the two roots z_{1}, z_{2} of Equation (31) are distinct, z 1 ≠ z 2 , then by the linearity of the recursion we have the general solution of this recursion in the form

y n = c 1 z 1 n + c 2 z 2 n (32)

with c_{1} and c_{2} determined by the initial conditions y_{0} and y_{1}.

In case the roots of Equation (31) are equal, z 1 = z 2 = z = − b / 2 , b 2 − 4 c = 0 , then we verify that y n = c 1 z n + c 2 n z n , with

c_1 = y_0, c_2 = y_1/z − y_0. (33)

In case the roots of Equation (31) are complex conjugates, z 1 = α + i β , z 2 = α − i β , z 1 z 2 = | z | 2 = c = α 2 + β 2 , then we may put z in the form

z = | z | e ± i θ , z = | z | ( cos ( θ ) ± i sin ( θ ) ) , z n = | z | n ( cos ( n θ ) ± i sin ( n θ ) ) (34)

where cos ( θ ) = α / | z | , sin ( θ ) = β / | z | . Now, y n = c 1 z 1 n + c 2 z 2 n becomes

y n = | z | n ( ( c 1 + c 2 ) cos ( n θ ) + i ( c 1 − c 2 ) sin ( n θ ) ) . (35)

with c_{1},c_{2} fixed by the initial conditions y_{0}, y_{1}.

For example, the recursion

y_{n+2} − 3y_{n+1} + 2y_n = 0, y_0 = 1, y_1 = 1 + ε (36)

results in

y_n = 1 + ε(2^n − 1). (37)

Implicit methods for the numerical integration of the first-order IVP hold some stability advantages, but they may require the solution of a nonlinear equation for the next predicted value.

At a point of bifurcation they hold the extra advantage of capturing multiple solutions, otherwise missed by an explicit method. Here is an example. The initial value problem

y′ = −√(1 − y^2), y(0) = 1, y′(0) = 0 (38)

is solved by both

y ( t ) = 1 and y ( t ) = cos ( t ) . (39)

Using the Euler explicit method

y 1 = y 0 + τ y ′ 0 (40)

where y_{1} is an approximation to y(τ), we have

y 1 = y 0 and y n = 1 (41)

which is only the first solution y(t) = 1 of IVP (38).

Using the implicit method

y 1 = y 0 + 1 2 τ ( y ′ 0 + y ′ 1 ) (42)

we obtain

y_1 = 1 − (1/2)τ√(1 − y_1^2) (43)

then the quadratic equation

(1 + (1/4)τ^2) y_1^2 − 2y_1 + (1 − (1/4)τ^2) = 0 (44)

for y_{1}, solved by

a first y_1 = 1, and a second y_1 = (4 − τ^2)/(4 + τ^2) = 1 − (1/2)τ^2 + (1/8)τ^4 + O(τ^6) (45)

as compared with

y ( τ ) = cos ( τ ) = 1 − 1 2 τ 2 + 1 24 τ 4 + O ( τ 6 ) . (46)
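One implicit step indeed produces both branches; a sketch that solves the quadratic Equation (44) directly:

```python
import math

# Roots of (1 + t^2/4) y^2 - 2 y + (1 - t^2/4) = 0, Equation (44),
# produced by one step of the implicit method (42) on IVP (38).
t = 0.1
a, b, c = 1.0 + t ** 2 / 4.0, -2.0, 1.0 - t ** 2 / 4.0
d = math.sqrt(b * b - 4.0 * a * c)
y_first, y_second = (-b + d) / (2.0 * a), (-b - d) / (2.0 * a)
print(y_first)                 # the constant solution y = 1
print(y_second, math.cos(t))   # tracks the second solution y(t) = cos(t)
```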

The solution of the IVP

y ′ = y − 1 , y ( 0 ) = y 0 (47)

is

y ( t ) = 1 + ( y 0 − 1 ) e t (48)

and if y 0 = 1 , then y ( t ) = 1 , but if y 0 > 1 , then y ( t ) → ∞ as t → ∞ .

To fully, yet concisely, demonstrate the consistency and stability issues in the integration of first order IVP, and their resolution, we shall look in detail at the general two-step method

y 2 = α 0 y 0 + α 1 y 1 + τ ( β 0 y ′ 0 + β 1 y ′ 1 ) + e r r , y = y ( t ) (49)

in which y_{2} is the computed approximation to the correct y(2τ), in which err is the error y(2τ) − y_{2}, and in which α_{0}, α_{1}, β_{0}, β_{1} are free parameters to be determined for highest accuracy and method stability.

In accordance with Taylor’s theorem we require, for the highest possible order of consistency, that the calculated y_{2} is the correct y(2τ), namely err = 0, for

y = 1 , y = t , y = t 2 (50)

leading to the system of equations

α 0 + α 1 = 1 , α 1 + β 0 + β 1 = 2 , α 1 + 2 β 1 = 4 (51)

and then to

α 1 = 1 − α 0 , β 0 = 1 2 ( − 1 + α 0 ) , β 1 = 1 2 ( 3 + α 0 ) (52)

in which we leave α_{0} free for now, to use it next to guarantee the stability of the method.

According to Taylor’s theorem the worst case error arises from the next order polynomial, or the function with a constant third derivative. Accordingly, we take next

y ( t ) = 1 6 M 3 t 3 , y ‴ ( t ) = M 3 (53)

and have from Equation (49) and Equation (51) that

y(2τ) − y_2 = ((5 − α_0)/12) M_3 τ^3 (54)

which is the local error per step. After n steps the error rises to the global O(τ^{2}).
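The local error formula (54) can be confirmed by applying one step of method (49), with the coefficients of Equation (52), to y = t^3 (that is, M_3 = 6) from exact starting data; a sketch:

```python
# One step of the two-step method (49) applied to y = t^3 (M3 = 6),
# with beta0, beta1 taken from Equation (52); the local error should
# equal (5 - a0)/12 * M3 * tau^3, per Equation (54).
def local_error(a0, tau):
    a1 = 1.0 - a0
    b0 = 0.5 * (-1.0 + a0)
    b1 = 0.5 * (3.0 + a0)
    y0, y1 = 0.0, tau ** 3          # exact values of y = t^3
    d0, d1 = 0.0, 3.0 * tau ** 2    # exact values of y' = 3 t^2
    y2 = a0 * y0 + a1 * y1 + tau * (b0 * d0 + b1 * d1)
    return (2.0 * tau) ** 3 - y2

tau = 0.1
for a0 in (0.0, -0.75):
    print(local_error(a0, tau), (5.0 - a0) / 12.0 * 6.0 * tau ** 3)
```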

The following Lemma is greatly useful for ascertaining the stability of an integration method.

Lemma. Let the real roots z(τ) of the characteristic equation for the integration scheme of the first-order IVP be such that |z(τ = 0)| ≤ 1. Then, a sufficient condition for the (conditional) stability of the method is that at z(τ = 0) = 1, dz/dτ < 0, and that at z(τ = 0) = −1, dz/dτ > 0.

Proof. It results directly from the continuity and differentiability of z(τ) that if z ( τ = 0 ) = 1 and d z / d τ < 0 at τ = 0, then z(τ) < 1 for some τ > 0. See also [

Specifically, for the model IVP

y ′ = − y , y ( 0 ) = y 0 = 1 , y ( t ) = y ( 0 ) e − t (55)

the two-step method of Equation (49) becomes

2 y 2 = ( 2 α 0 + τ ( 1 − α 0 ) ) y 0 + ( 2 − 2 α 0 − τ ( 3 + α 0 ) ) y 1 (56)

of the characteristic equation

2 z 2 + ( − 2 + 2 α 0 + τ ( 3 + α 0 ) ) z + ( − 2 α 0 + τ ( − 1 + α 0 ) ) = 0. (57)

At τ = 0 the equation reduces to

z 2 + ( − 1 + α 0 ) z − α 0 = 0 (58)

which is of the two roots

z 1 = 1 , z 2 = − α 0 , and − 1 < α 0 ≤ 1. (59)

To verify the stability of the method we seek z ′ ( τ ) at τ = 0.

Implicitly differentiating characteristic Equation (57) with respect to τ we have

4 z z ′ + ( 3 + α 0 ) z + ( − 2 + 2 α 0 + τ ( 3 + α 0 ) ) z ′ − 1 + α 0 = 0. (60)

At τ = 0, the above equation reduces to

4 z z ′ + ( 3 + α 0 ) z + ( − 2 + 2 α 0 ) z ′ − 1 + α 0 = 0 (61)

and for z 1 = 1 we obtain from the above equation that

z′_1(τ = 0) = −(2 + 2α_0)/(2 + 2α_0) = −1 < 0 (62)

and since for stability z_{1}(τ) needs to come down at τ = 0, hence α 0 > − 1 .

Taking α_0 = 0 (the two-step Adams–Bashforth method), the characteristic equation reduces to

2 z 2 + ( − 2 + 3 τ ) z − τ = 0. (63)

We set z = 1 in the above equation and obtain from it only τ = 0. Then we set z = −1 and obtain τ = 1, at which dz/dτ = −4/3, implying that the method is stable for 0 < τ < 1.

Actually

z = (1/4)(2 − 3τ ± √(4 − 4τ + 9τ^2)). (64)
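The stability window 0 < τ < 1 is visible directly in the two roots of Equation (63); a sketch using the explicit formula (64):

```python
import math

# Roots of 2 z^2 + (3 tau - 2) z - tau = 0, Equation (63): both stay
# inside the unit disk for 0 < tau < 1, and one escapes past -1
# once tau exceeds 1.
def roots(tau):
    d = math.sqrt(4.0 - 4.0 * tau + 9.0 * tau ** 2)
    return (2.0 - 3.0 * tau + d) / 4.0, (2.0 - 3.0 * tau - d) / 4.0

print(roots(0.5))  # both roots of magnitude < 1: stable
print(roots(1.2))  # second root below -1: unstable
```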

For α 0 = − 3 / 4 the characteristic equation of the multistep method becomes

8 z 2 + ( − 14 + 9 τ ) z + ( 6 − 7 τ ) = 0 (65)

and by implicit differentiation with respect to τ

16 z z ′ + 9 z + ( − 14 + 9 τ ) z ′ − 7 = 0 (66)

where z′ = dz/dτ. At τ = 0, z = 1, the above equation yields z′(τ = 0) = −1, and the method is stable for some τ. Setting z = 1 into the characteristic equation gives only τ = 0. Setting z = −1 gives τ = 7/4, implying that the method is now stable for 0 < τ < 7/4.

Actually,

z = (1/16)(14 − 9τ ± √(4 − 28τ + 81τ^2)). (67)

We shall consider now the application of the two-step method of the previous section to the IVP

y ′ = 0 , y ( 0 ) = 1 , y ( t ) = 1. (68)

For α 0 = − 3 / 4 , α 1 = 7 / 4 , and with y ′ ( t ) = 0 , the two-step method becomes

y 2 = 1 4 ( − 3 y 0 + 7 y 1 ) (69)

for which the characteristic equation is

4 z 2 − 7 z + 3 = 0 , z 1 = 1 , z 2 = 3 4 (70)

and hence

y n = c 1 ( 1 ) n + c 2 ( 3 4 ) n . (71)

Say we start the method with y 0 = 1 , y 1 = 1 + ε , such that

c 1 + c 2 = 1 , c 1 + 3 4 c 2 = 1 + ε (72)

resulting in c 1 = 1 + 4 ε , c 2 = − 4 ε , and

y n = 1 + 4 ε − 4 ε ( 3 4 ) n (73)

and

y ∞ = 1 + 4 ε . (74)
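Iterating recursion (69) from the perturbed start confirms the limit (74); a sketch:

```python
# Iterating y_{n+2} = (-3 y_n + 7 y_{n+1})/4 from y0 = 1, y1 = 1 + eps:
# the computed sequence settles at 1 + 4 eps, per Equation (74),
# rather than returning to the exact solution y = 1.
eps = 0.01
y0, y1 = 1.0, 1.0 + eps
for _ in range(100):
    y0, y1 = y1, 0.25 * (-3.0 * y0 + 7.0 * y1)
print(y1)  # close to 1 + 4*eps = 1.04
```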

The integration method

y 3 = y 2 + 1 12 τ ( 23 y ′ 2 − 16 y ′ 1 + 5 y ′ 0 ) (75)

is correct for y = 1, y = t, y = t^{2} and y = t^{3}. For

y ( t ) = 1 24 M 4 t 4 (76)

we have from Equation (75) that

y(3τ) − y_3 = (3/8) M_4 τ^4. (77)

For y ′ = − y , y ( 0 ) = y 0 the characteristic equation of the three step method becomes

12 z 3 + ( − 12 + 23 τ ) z 2 − 16 τ z + 5 τ = 0 (78)

and at τ = 0 it reduces to

12 z 3 − 12 z 2 = 0 , of roots z 1 = 1 , z 2 = z 3 = 0 . (79)

Implicit differentiation of Equation (78) with respect to τ yields

36 z 2 z ′ + 23 z 2 + ( − 12 + 23 τ ) 2 z z ′ − 16 z − 16 τ z ′ + 5 = 0 (80)

and at τ = 0, z = 1, we have that z ′ = − 1 , so that near τ = 0

z = 1 − τ (81)

and the method is stable.
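Method (75) is the classical three-step Adams–Bashforth formula; running it on y′ = −y from exact starting values displays the expected third order global accuracy (a sketch, with helper names of our choosing):

```python
import math

# Three-step method (75) on y' = -y, y(0) = 1, integrated to t = 1
# from exact starting values; the global error is O(tau^3), so
# halving tau should divide the error by about 8.
def ab3_error(n):
    tau = 1.0 / n
    y = [math.exp(-k * tau) for k in range(3)]  # exact start
    f = [-v for v in y]
    for _ in range(n - 2):
        y_next = y[2] + tau / 12.0 * (23.0 * f[2] - 16.0 * f[1] + 5.0 * f[0])
        y = [y[1], y[2], y_next]
        f = [f[1], f[2], -y_next]
    return abs(y[2] - math.exp(-1.0))

e10, e20 = ab3_error(10), ab3_error(20)
print(e10, e20, e10 / e20)  # error ratio near 8
```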

We further have from Equation (78) that at τ = 6/11, z_3 = −1, while z_1 = 1/2 and z_2 = 5/11.

At the repeating root z = 0, the derivative function z ′ does not exist. Instead we write τ in terms of z as

τ = 12z^2(1 − z)/(23z^2 − 16z + 5) (82)

and if z = 0, nearly, then

τ = (12/5)z^2 (83)

and

z = ±√((5/12)τ). (84)

Next, we turn our attention to the second order IVP, see [

y ″ + y = 0 , y ( 0 ) = y 0 , y ′ ( 0 ) = y ′ 0 (85)

which we propose to approximate as

(y_0 − 2y_1 + y_2)/τ^2 + (1 + ε)y_1 = 0, ε = ατ + βτ^2. (86)

The characteristic equation of this method is

z^2 + (−2 + τ^2(1 + ε))z + 1 = 0. (87)

For a sufficiently small τ, the roots of the characteristic equation are complex and | z | = 1 . Hence the closed-form prediction of the computed y at step n

y n = c 1 cos ( n θ ) + c 2 sin ( n θ ) (88)

where c_{1} and c_{2} are determined by the initial conditions.

From the characteristic Equation (87) we have that

cos(θ) = 1 − (1/2)τ^2(1 + ε), sin(θ) = (1/2)√(4 − (−2 + τ^2(1 + ε))^2) (89)

or

θ = τ + (1/2)ατ^2 + (1/24)(1 − 3α^2 + 12β)τ^3 + O(τ^5), with θ = τ exactly if ε = (2 − τ^2 − 2cos(τ))/τ^2. (90)

To drop the τ^{2} term in the above equation we take α = 0, and are left with

θ/τ = 1 + (1/24)(1 + 12β)τ^2 + O(τ^4) (91)

suggesting that θ may be advanced or retarded relative to τ with a proper choice of β. For instance, for β = −1/12 we have that θ = τ, nearly.
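The effect of β on the computed phase is readily checked by solving Equation (89) for θ with α = 0; a sketch:

```python
import math

# theta from cos(theta) = 1 - (tau^2/2)(1 + eps), eps = beta*tau^2
# (alpha = 0): beta = -1/12 nearly removes the phase error,
# in line with Equation (91).
def theta(tau, beta):
    return math.acos(1.0 - 0.5 * tau ** 2 * (1.0 + beta * tau ** 2))

tau = 0.1
print(theta(tau, 0.0) - tau)           # about tau^3/24
print(theta(tau, -1.0 / 12.0) - tau)   # far smaller
```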

Inasmuch as the initial conditions to the second order equation of motion are usually given in terms of initial position and initial velocity, we prefer to reduce the second order problem into a coupled system of first order equations for position and velocity, and follow them in tandem.

To remain both concise and specific, we consider the model initial value problem

x ′ = − y , y ′ = x , x 0 = x ( 0 ) = 1 , y 0 = y ( 0 ) = 0 (92)

where x = x(t), y = y(t), t > 0, and where ( ) ′ denotes differentiation with respect to time t. This initial value problem, that coincides with a single second order problem, is solved by x = cos ( t ) , y = sin ( t ) representing a constant circular motion of period T = 2 π .

We propose to follow the initial value problem with the explicit scheme

x_1 = x_0 + τx′_0, y_1 = y_0 + τ(α_0 y′_0 + α_1 y′_1) (93)

in which τ is the time step, where x_{1} = x(τ) and y_{1} = y(τ), approximately, and with the coefficients α_{0}, α_{1} to be presently determined by stability and accuracy considerations.

With x ′ = − y , y ′ = x , system (93) becomes the system of recursions

x_1 = x_0 − τy_0, y_1 = y_0 + τ(α_0 x_0 + α_1 x_1) (94)

that explicitly produces x_{1} and y_{1} out of x_{0} and y_{0}, and then x_{2} and y_{2} out of x_{1} and y_{1}, and so on up to x_{n} and y_{n}. System (94) is solved by x n = z n x 0 , y n = z n y 0 for magnification factor z that satisfies the pair of linear equations

z x_0 = x_0 − τ y_0, z y_0 = y_0 + τ(α_0 x_0 + α_1 z x_0) (95)

for any x_0 and y_0. Equation (95) is recast in matrix-vector form as

[ z − 1, τ ; −τ(α_0 + α_1 z), z − 1 ] [ x_0 ; y_0 ] = 0 (96)

and the condition that it have a nontrivial solution is that the determinant of the system’s matrix of coefficients be zero, leading to the characteristic equation

det [ z − 1, τ ; −τ(α_0 + α_1 z), z − 1 ] = 0, z^2 + 2(−1 + (1/2)α_1 τ^2) z + 1 + α_0 τ^2 = 0 (97)

for z. The periodic nature of the solution to this initial value problem dictates that z be complex. Let |z| be the modulus of complex z. If |z| < 1, then |z|^n → 0 as n → ∞, and if |z| > 1, then |z|^n → ∞ as n → ∞. To avoid these undesirable eventualities of an artificial energy sink and an artificial, numerically induced, energy source, we select α_0 = 0 in Equation (94), and are left with the reduced characteristic equation

z 2 + 2 ( − 1 + 1 2 α 1 τ 2 ) z + 1 = 0 , a z 2 + b z + c = 0 (98)

that possesses two complex roots z_{1} and z_{2} such that z 1 z 2 = c / a = | z | 2 = 1 .

In fact,

z = 1 − (1/2)α_1 τ^2 ± iτ√(α_1 − (1/4)α_1^2 τ^2) (99)

where i^{2} = −1, and z is complex if

α 1 > 0 , 4 − α 1 τ 2 > 0. (100)

By the fact that | z | = 1 , the complex solution to Equation (98) is energy conserving, and may be written as

z 1 = cos ( θ ) − i sin ( θ ) , z 2 = cos ( θ ) + i sin ( θ ) (101)

with

cos(θ) = 1 − (1/2)α_1 τ^2, sin(θ) = τ√(α_1 − (1/4)α_1^2 τ^2). (102)

Now, x_{n} and y_{n} are generally written as

x n = c 1 z 1 n + c 2 z 2 n , y n = c ′ 1 z 1 n + c ′ 2 z 2 n (103)

with constants c_1, c_2, c′_1, c′_2 determined by the initial conditions. Given x_0 = 1, y_0 = 0 we get from Equation (94), x_1 = 1, y_1 = α_1 τ. Writing x_n and y_n of Equation (103) for n = 0 and n = 1 we obtain the two systems of linear equations

[ 1, 1 ; z_1, z_2 ] [ c_1 ; c_2 ] = [ 1 ; 1 ], [ 1, 1 ; z_1, z_2 ] [ c′_1 ; c′_2 ] = α_1 τ [ 0 ; 1 ] (104)

readily solved for c 1 , c 2 , c ′ 1 , c ′ 2 as

[ c_1 ; c_2 ] = (1/(z_2 − z_1)) [ z_2 − 1 ; −z_1 + 1 ], [ c′_1 ; c′_2 ] = (α_1 τ/(z_2 − z_1)) [ −1 ; 1 ] (105)

in which z 1 = cos ( θ ) − i sin ( θ ) , z 2 = cos ( θ ) + i sin ( θ ) , z 2 − z 1 = 2 i sin ( θ ) . Writing z_{1} and z_{2} in terms of θ recasts Equation (103) into the form

x n = ( c 1 + c 2 ) cos ( n θ ) + i ( c 2 − c 1 ) sin ( n θ ) , y n = ( c ′ 1 + c ′ 2 ) cos ( n θ ) + i ( c ′ 2 − c ′ 1 ) sin ( n θ ) (106)

and we have from Equation (105) that

c_1 + c_2 = 1, c_2 − c_1 = −(i/2) α_1 τ^2 (sin(θ))^(−1), c′_1 + c′_2 = 0, c′_2 − c′_1 = −i α_1 τ (sin(θ))^(−1) (107)

with which we finally get

x n = cos ( n θ ) + 1 2 α 1 τ 2 ( sin ( θ ) ) − 1 sin ( n θ ) , y n = α 1 τ ( sin ( θ ) ) − 1 sin ( n θ ) (108)

as the general numerical solution to our initial value problem.

A cycle is completed when sin(nθ) = 0, or nθ = 2π. Then, according to Equation (108), y_n = 0 and x_n = 1. From n = 2π/θ and T = nτ we obtain the computed period as

T = 2 π τ θ (109)

and to retain T = 2 π we select α_{1} in Equation (93) so as to guarantee τ = θ or sin ( τ ) = sin ( θ ) . This condition becomes, in view of Equation (102),

sin(τ) = τ√(α_1 − (1/4)α_1^2 τ^2) (110)

leading to the quadratic equation

(1/4)τ^2 α_1^2 − α_1 + sin^2(τ)/τ^2 = 0 (111)

for α_{1}, and resulting in

α 1 = 2 ( 1 − cos ( τ ) ) / τ 2 (112)

or

α_1 = 1 − (1/12)τ^2 + (1/360)τ^4 (113)

if τ is small.
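With the period-conserving α_1 of Equation (112), scheme (94) returns the computed orbit exactly to its starting point after a full cycle of n = 2π/τ steps; a sketch:

```python
import math

# Scheme (94) with alpha0 = 0 and the period-conserving
# alpha1 = 2(1 - cos(tau))/tau^2 of Equation (112): after
# N = 2*pi/tau steps the orbit returns to (x, y) = (1, 0).
N = 100
tau = 2.0 * math.pi / N
alpha1 = 2.0 * (1.0 - math.cos(tau)) / tau ** 2
x, y = 1.0, 0.0
for _ in range(N):
    x = x - tau * y          # x1 = x0 - tau*y0
    y = y + tau * alpha1 * x  # y1 = y0 + tau*alpha1*x1
print(x, y)  # back at (1, 0) up to rounding
```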

Inclusion of the acceleration in the prediction of x_{1} suggests the higher order scheme

x_1 = x_0 + τx′_0 + (1/2)τ^2 x″_0, y_1 = y_0 + (1/2)τ(α_0 y′_0 + α_1 y′_1) (114)

that becomes, for x′ = −y, y′ = x, x″ = −x,

x_1 = x_0 − τy_0 − (1/2)τ^2 x_0, y_1 = y_0 + (1/2)τ(α_0 x_0 + α_1 x_1). (115)

Substitution of x_{1} = zx_{0}, y_{1} = zy_{0} in Equation (115) results in the system

[ z − 1 + (1/2)τ^2, τ ; −(1/2)τ(α_0 + α_1 z), z − 1 ] [ x_0 ; y_0 ] = 0 (116)

from which we obtain the quadratic characteristic equation

det [ z − 1 + (1/2)τ^2, τ ; −(1/2)τ(α_0 + α_1 z), z − 1 ] = 0, z^2 + 2(−1 + (1/4)τ^2 + (1/4)α_1 τ^2) z + 1 + (1/2)τ^2(α_0 − 1) = 0 (117)

for magnification factor z. To assure | z | = 1 for the complex roots of Equation (117) we set α_{0} = 1 and are left with

z 2 + 2 ( − 1 + 1 4 τ 2 β ) z + 1 = 0 (118)

where β = 1 + α_{1}. The two roots of Equation (118) are

z = 1 − (1/4)τ^2 β ± iτ√((1/2)β − (1/16)τ^2 β^2) (119)

and z is complex if

β > 0 , 8 − τ 2 β > 0. (120)

Because | z | = 1 we may write the complex roots of Equation (118) as

z = cos(θ) ± i sin(θ), cos(θ) = 1 − (1/4)τ^2 β, sin(θ) = τ√((1/2)β − (1/16)τ^2 β^2). (121)

The numerical scheme is period conserving if τ = θ, or sin θ = sin τ . This is assured, according to Equation (121), by β such that

sin(τ) = τ√((1/2)β − (1/16)τ^2 β^2) (122)

or

(1/16)τ^2 β^2 − (1/2)β + (sin(τ)/τ)^2 = 0 (123)

resulting in

β = 4 ( 1 − cos ( τ ) ) / τ 2 (124)

or

α 1 = 1 − 1 6 τ 2 + 1 180 τ 4 (125)

if τ is small.
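The same closing-orbit test applies to the higher order scheme (115) with α_0 = 1 and β of Equation (124); a sketch:

```python
import math

# Higher order scheme (115) with alpha0 = 1 and alpha1 = beta - 1,
# beta = 4(1 - cos(tau))/tau^2, Equation (124): the orbit again
# closes after N = 2*pi/tau steps.
N = 100
tau = 2.0 * math.pi / N
beta = 4.0 * (1.0 - math.cos(tau)) / tau ** 2
alpha1 = beta - 1.0
x, y = 1.0, 0.0
for _ in range(N):
    x_new = x - tau * y - 0.5 * tau ** 2 * x
    y = y + 0.5 * tau * (x + alpha1 * x_new)
    x = x_new
print(x, y)  # back at (1, 0) up to rounding
```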

We have shown here how to routinely determine the consistency and stability of any multistep method, explicit as well as implicit, for the stepwise integration of the first order initial value problem. We have also demonstrated the advantage of implicit methods in capturing different solutions emanating from a branch-off point. For the integration of the second order equation of motion we have shown how to slow and advance the computed motion.

The author declares no conflicts of interest regarding the publication of this paper.

Fried, I. (2019) Consistency and Stability Issues in the Numerical Integration of the First and Second Order Initial Value Problem. Applied Mathematics, 10, 676-690. https://doi.org/10.4236/am.2019.108048