A Filtered Milne-Simpson ODE Method with Enhanced Stability Properties

Abstract

The Milne-Simpson method is a two-step implicit linear multistep method for the numerical solution of ODEs that attains the theoretically highest order of convergence for such a method. The stability region of the method is only an interval on the imaginary axis, and the method is classified as weakly stable, which causes non-physical oscillations to appear in numerical solutions. For this reason, the method is seldom used in applications. This work examines filtering techniques that improve the stability properties of the Milne-Simpson method while retaining its fourth-order convergence rate. The resulting filtered Milne-Simpson method is attractive as a method of lines integrator for linear time-dependent partial differential equations.


1. Introduction

The first Dahlquist barrier [1] states that a zero-stable linear s-step multistep method cannot attain an order of convergence greater than s + 1 if s is odd or greater than s + 2 if s is even. If the method is also explicit, then it cannot attain an order greater than s. The Milne-Simpson (MS) method is an implicit 2-step Linear Multistep Method (LMM) that realizes the optimal fourth-order convergence rate allowed by the first Dahlquist barrier. However, the MS method is only weakly stable, which severely limits its use in applications. In this work, filters are examined that improve the stability properties of the MS method. The filters were first introduced in [2] and [3], where they were arbitrarily applied every ten time steps. However, in these works, there was no examination of how the application of the filters affected the stability region of the method. In this work, we show that the frequency of the application of the filters can be used to modify the location of the stability region of the method, which allows the technique to be tailored to the type of ODE system that is being solved.

In previous work [4], similar filters were used to improve the stability properties of the weakly stable explicit midpoint rule (also called the leapfrog method) while maintaining the second-order accuracy of the method. The filters used with the midpoint rule were of lengths three and five. Using filters to improve the stability properties while maintaining the order of convergence is more difficult for the MS method than for the midpoint rule because the MS method is implicit and because the filter must be of length seven.

In the next section, LMMs and their stability properties are summarized. Then, a seven-point filter is developed that allows the MS method to retain its fourth-order convergence rate while improving the stability properties of the method and insight is given as to how to most effectively apply the filter. Finally, numerical examples are carried out that demonstrate the effectiveness of the filter.

2. Linear Multistep Methods

An s-step linear multistep method for the numerical solution of the Ordinary Differential Equation (ODE) Initial Value Problem (IVP)

$y' = f(t, y(t)), \quad y(0) = y_0$ (1)

is of the form

$\sum_{m=0}^{s} \alpha_m y_{n+m} = k \sum_{m=0}^{s} \beta_m f(t_{n+m}, y_{n+m}), \quad n = 0, 1, \ldots$ (2)

where $\alpha_m$ and $\beta_m$ are given constants and k is the size of the time step. Multistep methods use information from the previous s steps to calculate the next value. It is conventional to normalize (2) by setting $\alpha_s = 1$. When $\beta_s = 0$ the method is explicit. Otherwise it is implicit. In order to start the method, the time levels $y_1, \ldots, y_{s-1}$ need to be calculated by a one-step method.
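For example, the Milne-Simpson method examined in Section 3 is the $s = 2$ member of (2) with $\alpha_0 = -1$, $\alpha_1 = 0$, $\alpha_2 = 1$ and $\beta_0 = \frac{1}{3}$, $\beta_1 = \frac{4}{3}$, $\beta_2 = \frac{1}{3}$.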

The stability properties of LMMs are examined by applying the method to the ODE

$y' = \lambda y, \quad y(0) = y_0$ (3)

where $\lambda$ is a complex number with $\Re(\lambda) \le 0$. Equation (3) is called the Dahlquist test equation. The first concepts of stability result from considering the simple case of $\lambda = 0$ in (3), for which the solution is the constant $y_0$. In this case, the application of the LMM (2) to the test equation results in a difference equation with a general solution of the form

$y_n = c_1 \zeta_1^n + \cdots + c_s \zeta_s^n$ (4)

where the $\zeta_i$ are the zeros of the first characteristic polynomial

$\rho(\zeta) = \sum_{m=0}^{s} \alpha_m \zeta^m$ (5)

of the LMM (2). For a consistent method, one of the terms, say $\zeta_i^n$, approximates the solution of (3) while the other roots correspond to extraneous solutions. The following stability definitions quantify to what extent the computed solutions (4) remain bounded. A method is stable (also called zero stable or strongly stable) if $\rho$ has a simple root equal to one and if all the other roots of $\rho$ are less than one in absolute value. The criteria that the roots must satisfy for the method to be stable are called a root condition. A weakly stable method has all roots of $\rho$ satisfying $|\zeta| \le 1$ and has at least one root of absolute value one other than the simple root $\zeta = 1$. The solutions may remain bounded on a fixed interval of time $[0, T]$ but will eventually become unbounded as $t \to \infty$. A method for which a root of $\rho$ is greater than one in magnitude is unstable. In this case the solutions rapidly become unbounded.

A stronger definition of stability can be made by considering non-zero $\lambda \in \mathbb{C}$ with negative real part in Equation (3). In this case the exact solution of (3) is $y(t) = y_0 e^{\lambda t}$, which approaches zero as $t \to \infty$ since the real part of $\lambda$ is negative. A method that produces numerical solutions with the same asymptotic behavior, that is $y_n \to 0$ as $t_n \to \infty$, for a fixed value of k is said to be absolutely stable. This type of stability has also been called asymptotic stability or eigenvalue stability. Absolute stability may also be characterized in terms of a stability function or amplification factor $R(z)$ which satisfies $y_{n+1} = R(z) y_n$, where $z = k\lambda$. Then the absolute stability region is $S = \{ z \in \mathbb{C} : |R(z)| \le 1 \}$. For z in the stability region, the numerical solutions exhibit the same asymptotic behavior as the exact solution of (3).

For a linear system of ODEs, a necessary condition for absolute stability is that all of the scaled (by the time step size k) eigenvalues $z = k\lambda$ of the coefficient matrix of the system must lie in the stability region. For nonlinear systems, the system must be linearized and the scaled eigenvalues of the Jacobian matrix of the system must be analyzed at each $t_n$. Standard references on numerical ODE methods [5] - [10] can be consulted for more details.

3. The Milne-Simpson Method

The Milne-Simpson method for the system (1) is

$y_{n+1} = y_{n-1} + \frac{k}{3}\left[ f_{n+1} + 4 f_n + f_{n-1} \right]$. (6)

The method is constructed by integrating the ODE (1) on the interval $[t_{n-1}, t_{n+1}]$ to get

$y(t_{n+1}) = y(t_{n-1}) + \int_{t_{n-1}}^{t_{n+1}} f(t, y)\, dt$

and then approximating the integral by the well-known Simpson's rule that is usually introduced in the Calculus sequence. The resulting implicit method has fourth-order algebraic convergence. The Milne-Simpson method is of interest since it is the only 2-step linear multistep method that attains this optimal order [8]. Any optimal linear multistep method has no region of absolute stability [8]. The MS method needs two time levels to start: $y_0$, which is provided by the initial condition, and $y_1$. Typically, the time level $y_1$ is obtained by an explicit fourth-order Runge-Kutta (RK) method.

The first characteristic polynomial of the Milne-Simpson method is $\rho(\zeta) = \zeta^2 - 1$ with roots $\zeta = \pm 1$, so the method is only weakly stable. The stability region consists only of the interval $S = [-i\sqrt{3}, i\sqrt{3}]$ on the imaginary axis.

After discretizing a system of nonlinear ODEs, the resulting system of nonlinear algebraic equations at each time step is solved using Newton's method with the previous time level used as an initial guess. In the numerical examples, one or two Newton iterations were typically required at each time step. Each iteration requires one evaluation of $f(t, y)$ from the right side of (1) and the solution of a linear algebraic system using LU factorization.
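To make the preceding description concrete, the following is a minimal Python sketch of a single Milne-Simpson step for a scalar nonlinear ODE; the function names, tolerance, and iteration cap are illustrative choices rather than details from the paper.

def milne_simpson_step(f, dfdy, t, y_prev, y_curr, k, tol=1e-12, max_iter=10):
    """One MS step from (y_{n-1}, y_n) at t = t_n: solve (6) for y_{n+1} by Newton's method."""
    # Known part of (6): y_{n-1} + (k/3) [ 4 f_n + f_{n-1} ].
    rhs = y_prev + (k / 3.0) * (4.0 * f(t, y_curr) + f(t - k, y_prev))
    y = y_curr                                    # previous time level as the initial guess
    for _ in range(max_iter):
        g = y - (k / 3.0) * f(t + k, y) - rhs     # residual of the implicit equation
        gp = 1.0 - (k / 3.0) * dfdy(t + k, y)     # derivative of the residual
        delta = g / gp                            # Newton correction
        y -= delta
        if abs(delta) < tol:
            break
    return y

Each Newton iteration costs one evaluation of f and, in the system case, one linear solve, matching the cost accounting above.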

Linear systems of ODEs can be more efficiently evaluated using the MS method. For a constant coefficient linear system of ODEs, $y' = Ay$, where A is an $N \times N$ coefficient matrix, the MS method can be advanced in time by solving the algebraic linear system

$L y_{n+1} = b$ (7)

where

$L = I - \frac{k}{3} A$ (8)

$I$ is the $N \times N$ identity matrix, and

$b = y_{n-1} + \frac{k}{3}\left[ 4 A y_n + A y_{n-1} \right]$.

The matrix L must be factorized once at the beginning of the computation with an $O(N^3)$ LU factorization. Then, at each time step one matrix-vector multiplication and a forward and back substitution to solve (7) are required, for a total flop count of $4N^2$. In comparison, an explicit fourth-order RK method requires four matrix-vector multiplications, which require $8N^2$ flops. The process for linear variable coefficient systems of ODEs is similar except that the matrix L must be factorized at each time step.
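A minimal sketch of this process in Python, using SciPy's LU routines, is given below; the previous value of $Ay$ is stored so that each step costs one matrix-vector product and one pair of triangular solves. The routine name is illustrative.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def ms_linear_advance(A, y0, y1, k, nsteps):
    """March y' = Ay with the Milne-Simpson method, given the two starting levels."""
    N = A.shape[0]
    lu = lu_factor(np.eye(N) - (k / 3.0) * A)   # O(N^3) factorization, done once
    y_prev, y_curr = y0, y1
    f_prev = A @ y_prev                         # A y_{n-1}, reused at the next step
    for _ in range(nsteps - 1):
        f_curr = A @ y_curr                     # the single matrix-vector product
        b = y_prev + (k / 3.0) * (4.0 * f_curr + f_prev)
        y_prev, y_curr = y_curr, lu_solve(lu, b)
        f_prev = f_curr
    return y_curr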

4. Filter Construction

It is shown in [3] that the numerical solution of the IVP (1) obtained by the Milne-Simpson method using starting values only assumed to have an asymptotic expansion of the very general form

$\eta_n(k) = y_n + \sum_{m=4}^{5} \eta_{mn} k^m + O(k^6), \quad n = 0, 1$ (9)

has an asymptotic expansion of the form

$y_n = \underbrace{y_p}_{\text{physical mode}} + \underbrace{y_c}_{\text{computational mode}} + O(k^6).$ (10)

The physical mode $y_p$

$y_p = y_0 e^{\lambda t} + \left[ \left( \frac{\lambda^5}{180} y_0 t + \frac{1}{2}(\eta_{40} + \eta_{41}) \right) e^{\lambda t} \right] k^4 + \left[ \frac{1}{2}\left( \frac{\lambda^5}{180} y_0 - \frac{2}{3}\eta_{40} - \frac{1}{3}\eta_{41} + \eta_{50} + \eta_{51} \right) e^{\lambda t} \right] k^5$ (11)

consists of the exact solution $y_0 e^{\lambda t}$ and an $O(k^4)$ truncation error that goes to zero as $k \to 0$. The computational mode

$y_c = \left[ \frac{(-1)^n}{2}(\eta_{40} - \eta_{41})\, e^{-\lambda t/3} \right] k^4 + \left[ \frac{(-1)^n}{2}\left( \frac{\lambda^5}{180} y_0 + \frac{2}{3}\lambda \eta_{40} + \frac{1}{3}\eta_{41} + \eta_{50} - \eta_{51} \right) e^{-\lambda t/3} \right] k^5$ (12)

contributes a non-physical oscillatory component to the solution which grows in size with increasing t when $\Re(\lambda) < 0$. References [3] and [11] can be consulted for the details of the derivation of the expressions for $y_p$ and $y_c$.

In order to damp the computational mode while retaining the fourth-order accurate approximation of the physical mode, a filter of the form

$P(E) = \sum_{j=-r_1}^{r_2} a_j E^j$ (13)

is considered, where $a_j \in \mathbb{R}$, $r_1$ and $r_2$ are nonnegative integers, and E is the forward shift operator defined by $E y(t) = y(t + k)$ in continuous notation and $E y_n = y_{n+1}$ in discrete notation. Let $s = \lambda k$; then elementary calculus gives

$P(E)\, e^{\lambda t} = e^{\lambda t} P(e^s),$ (14)

$P(E)\, t e^{\lambda t} = e^{\lambda t} \left[ t P(e^s) + k \frac{d}{ds} P(e^s) \right],$ (15)

and for any $\mu \in \mathbb{R}$

$P(E)\, (-1)^n e^{\mu \lambda t} = (-1)^n e^{\mu \lambda t} P(-e^{\mu s}).$ (16)

Applying the filter (13) to the physical mode (11) results in

$\hat{y}_p = P(E) y_p = y_0 e^{\lambda t} P(e^s) + \left[ \left( \frac{\lambda^5}{180} y_0 \left( t P(e^s) + k \frac{d}{ds} P(e^s) \right) + \frac{1}{2}(\eta_{40} + \eta_{41}) P(e^s) \right) e^{\lambda t} \right] k^4 + \left[ \frac{1}{2}\left( \frac{\lambda^5}{180} y_0 - \frac{2}{3}\eta_{40} - \frac{1}{3}\eta_{41} + \eta_{50} + \eta_{51} \right) e^{\lambda t} P(e^s) \right] k^5$ (17)

and applying the filter to the computational mode (12) gives

$\hat{y}_c = P(E) y_c = \left[ \frac{(-1)^n}{2}(\eta_{40} - \eta_{41})\, e^{-\lambda t/3} P(-e^{-s/3}) \right] k^4 + \left[ \frac{(-1)^n}{2}\left( \frac{\lambda^5}{180} y_0 + \frac{2}{3}\lambda \eta_{40} + \frac{1}{3}\eta_{41} + \eta_{50} - \eta_{51} \right) e^{-\lambda t/3} P(-e^{-s/3}) \right] k^5.$ (18)

Equation (17) suggests that, in order for the physical mode to be kept unchanged up to the order $k^N$ terms, P must satisfy

$P(e^s) = 1 + O(s^{N+1}).$ (19)

In a similar manner, Equation (18) reveals that, in order for the computational mode to be reduced by a factor of $k^M$, P must satisfy

$P(-e^{-s/3}) = O(s^M).$ (20)

Equation (19) implies that

$\frac{d}{ds} P(e^s) = O(s^N).$ (21)

Substituting Equations (19), (20), and (21) into Equations (17) and (18) results in

$\hat{y}_p = y_0 e^{\lambda t} \left(1 + O(s^{N+1})\right) + \left[ \left( \frac{\lambda^5}{180} y_0 \left( t \left(1 + O(s^{N+1})\right) + k\, O(s^N) \right) + \frac{1}{2}(\eta_{40} + \eta_{41})\left(1 + O(s^{N+1})\right) \right) e^{\lambda t} \right] k^4 + \left[ \frac{1}{2}\left( \frac{\lambda^5}{180} y_0 - \frac{2}{3}\eta_{40} - \frac{1}{3}\eta_{41} + \eta_{50} + \eta_{51} \right) e^{\lambda t} \left(1 + O(s^{N+1})\right) \right] k^5$ (22)

and

$\hat{y}_c = \left[ \frac{(-1)^n}{2}(\eta_{40} - \eta_{41})\, e^{-\lambda t/3}\, O(s^M) \right] k^4 + \left[ \frac{(-1)^n}{2}\left( \frac{\lambda^5}{180} y_0 + \frac{2}{3}\lambda \eta_{40} + \frac{1}{3}\eta_{41} + \eta_{50} - \eta_{51} \right) e^{-\lambda t/3}\, O(s^M) \right] k^5,$ (23)

respectively.

The term $\frac{\lambda^5}{180} y_0 t \left(1 + O(s^{N+1})\right)$ in (22) shows that the filter P is unable to improve the order of accuracy of the method and suggests taking $N = 4$. Additionally, so that extrapolation for improving the accuracy of the physical mode remains applicable after applying the filter P, it is necessary to reduce the computational mode to $O(k^6)$ [8]. This suggests taking $M = 2$. Thus,

$P(e^s) = 1 + O(s^5)$ (24)

and

$P(-e^{-s/3}) = O(s^2).$ (25)

Equations (24) and (25) consist of seven linear equations which can be solved for seven unknowns (i.e., for any seven of the $a_j$). This set of unknowns may be written in the form

$\left\{ a_{l-3}^{(l)}, a_{l-2}^{(l)}, a_{l-1}^{(l)}, a_{l}^{(l)}, a_{l+1}^{(l)}, a_{l+2}^{(l)}, a_{l+3}^{(l)} \right\}, \quad l = -3, -2, -1, 0, 1, 2, 3$

where, for example, the case $l = -3$ is the case $r_1 = 6$, $r_2 = 0$, the case $l = -2$ is the case $r_1 = 5$, $r_2 = 1$, and so on. Elementary linear algebra shows that, in each of these cases, the seven linear equations are linearly independent and can be solved uniquely for the seven unknowns. In each case, the result is a unique filter involving seven terms.

A general filter may be written in the form

$P_l(E) = \sum_{j=-3}^{3} a_{l+j}^{(l)} E^{l+j}, \quad l = -3, -2, -1, 0, 1, 2, 3.$ (26)

Equations (24), (25), and (26) lead to the equations

$\sum_{j=-3}^{3} a_{l+j}^{(l)} e^{(l+j)s} = 1 + O(s^5), \quad l = -3, -2, -1, 0, 1, 2, 3$ (27)

and

$\sum_{j=-3}^{3} a_{l+j}^{(l)} \left( -e^{-s/3} \right)^{l+j} = O(s^2), \quad l = -3, -2, -1, 0, 1, 2, 3.$ (28)

For any given filter (any fixed value of l), Equation (27) yields five linear equations (from matching the powers $s^0$ through $s^4$) and Equation (28) yields two linear equations (from the powers $s^0$ and $s^1$). Each equation contains the seven unknowns. For example, for the filter $P_{-3}$ ($l = -3$), the five equations resulting from (27) are:

$a_{-6}^{(-3)} + a_{-5}^{(-3)} + a_{-4}^{(-3)} + a_{-3}^{(-3)} + a_{-2}^{(-3)} + a_{-1}^{(-3)} + a_{0}^{(-3)} = 1$

$6 a_{-6}^{(-3)} + 5 a_{-5}^{(-3)} + 4 a_{-4}^{(-3)} + 3 a_{-3}^{(-3)} + 2 a_{-2}^{(-3)} + a_{-1}^{(-3)} = 0$

$36 a_{-6}^{(-3)} + 25 a_{-5}^{(-3)} + 16 a_{-4}^{(-3)} + 9 a_{-3}^{(-3)} + 4 a_{-2}^{(-3)} + a_{-1}^{(-3)} = 0$

$216 a_{-6}^{(-3)} + 125 a_{-5}^{(-3)} + 64 a_{-4}^{(-3)} + 27 a_{-3}^{(-3)} + 8 a_{-2}^{(-3)} + a_{-1}^{(-3)} = 0$

$1296 a_{-6}^{(-3)} + 625 a_{-5}^{(-3)} + 256 a_{-4}^{(-3)} + 81 a_{-3}^{(-3)} + 16 a_{-2}^{(-3)} + a_{-1}^{(-3)} = 0$

and the two equations resulting from (28) are:

$a_{-6}^{(-3)} - a_{-5}^{(-3)} + a_{-4}^{(-3)} - a_{-3}^{(-3)} + a_{-2}^{(-3)} - a_{-1}^{(-3)} + a_{0}^{(-3)} = 0$

$6 a_{-6}^{(-3)} - 5 a_{-5}^{(-3)} + 4 a_{-4}^{(-3)} - 3 a_{-3}^{(-3)} + 2 a_{-2}^{(-3)} - a_{-1}^{(-3)} = 0.$

The solution of the $7 \times 7$ linear system is $a_{-6}^{(-3)} = \frac{5}{64}$, $a_{-5}^{(-3)} = -\frac{18}{64}$, $a_{-4}^{(-3)} = \frac{15}{64}$, $a_{-3}^{(-3)} = \frac{20}{64}$, $a_{-2}^{(-3)} = -\frac{45}{64}$, $a_{-1}^{(-3)} = \frac{30}{64}$, and $a_{0}^{(-3)} = \frac{57}{64}$. This filter can be written as

$P_{-3}(E) = \frac{1}{64}\left( 5 E^{-6} - 18 E^{-5} + 15 E^{-4} + 20 E^{-3} - 45 E^{-2} + 30 E^{-1} + 57 E^{0} \right)$

or

$\hat{y}_n = P_{-3}(E) y_n = \frac{1}{64}\left( 5 y_{n-6} - 18 y_{n-5} + 15 y_{n-4} + 20 y_{n-3} - 45 y_{n-2} + 30 y_{n-1} + 57 y_n \right)$

where $y_n$ is the unfiltered solution and $\hat{y}_n$ is the filtered solution at $t_n = nk$. The coefficients of all seven filters are listed in Table 1.

Table 1. Seven-point filter coefficients.
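The entries of Table 1 can be reproduced by solving the seven order conditions numerically. Below is a minimal Python sketch; the function name is illustrative. The five rows from (27) impose $\sum_m a_m m^p = 1$ for $p = 0$ and $\sum_m a_m m^p = 0$ for $p = 1, \ldots, 4$, and the two rows from (28) impose $\sum_m (-1)^m a_m = 0$ and $\sum_m (-1)^m m\, a_m = 0$, where m runs over the exponents $l-3, \ldots, l+3$.

import numpy as np

def filter_coefficients(l):
    """Coefficients a_{l-3}, ..., a_{l+3} of the seven-point filter P_l."""
    m = np.arange(l - 3, l + 4, dtype=float)  # exponents of E appearing in P_l
    M = np.zeros((7, 7))
    rhs = np.zeros(7)
    for p in range(5):            # five conditions from (27)
        M[p] = m ** p
    rhs[0] = 1.0
    M[5] = (-1.0) ** m            # first condition from (28)
    M[6] = ((-1.0) ** m) * m      # second condition from (28)
    return np.linalg.solve(M, rhs)

print(64 * filter_coefficients(-3))   # [  5. -18.  15.  20. -45.  30.  57.]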

Filter $P_7^{(0)}$ is called a centered filter as it requires three time levels greater than n and three less than n to filter at time level n. Filters $P_7^{(1)}$, $P_7^{(2)}$, and $P_7^{(3)}$ are forward biased filters as they require more time levels greater than n than less than n to filter at time level n. Filters $P_7^{(-1)}$, $P_7^{(-2)}$, and $P_7^{(-3)}$ are backward biased filters as they require fewer time levels greater than n and more levels less than n to filter at time level n. Table 2 lists the number of additional time levels that must be calculated and disregarded each time a filter is applied.

5. Filter Application

This section gives insight on how to use the filters to manipulate both the size and location of the absolute stability region of the MS method, which without filtering consists only of an interval on the imaginary axis.

In applications, it is not necessary to filter every step. Rather, a filter is applied every $N_0$ time steps. The choice of $N_0$ is problem dependent, as it determines both the size and the location of the absolute stability region of the method. There is no single optimal $N_0$, but rather a range of $N_0$ that is appropriate for a specific problem.

The method starts by setting $y_0$ equal to the initial condition and calculating $y_1$ via an explicit fourth-order Runge-Kutta method [5]. Then, $N_0 - 1$ Milne-Simpson steps are taken, as well as the additional time levels required by the filter (Table 2), and then time level $N_0$ is filtered.

One approach is to restart the method with the filtered $y_{N_0}$ as the initial condition, calculate the following level with RK4, and then calculate additional levels with the MS method. Restarting in this way makes each cycle of $N_0$ steps a fixed, repeating composition of three methods (RK4, MS, and a filter), so an exact stability region can be calculated for the composition. Another approach is to restart with level $y_{N_0 - 1}$ as the first level and the filtered $y_{N_0}$ as the second level in the MS method, which omits the RK4 step in the restart after filtering. With this approach, an exact stability region cannot be determined since the composition of methods no longer repeats a fixed cycle. However, if this simpler approach is taken, the stability regions from the method with RK4 in the startup still provide a good sense of the stability region. The approach without the RK4 step in the restart has been taken in all the numerical examples that follow.

Table 2. Extra time levels to be calculated and then disregarded for each filter.
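To make the filtering procedure concrete, the following minimal Python driver integrates a scalar problem and applies the backward biased filter $P_7^{(-3)}$ (which requires no additional time levels) every $N_0 \ge 6$ steps, restarting without the RK4 step; it reuses the milne_simpson_step sketch from Section 3 and a standard RK4 step written out here. The structure is a sketch of the procedure described above, not the authors' code.

import numpy as np

P7_M3 = np.array([5.0, -18.0, 15.0, 20.0, -45.0, 30.0, 57.0]) / 64.0

def rk4_step(f, t, y, k):
    s1 = f(t, y)
    s2 = f(t + k / 2, y + k / 2 * s1)
    s3 = f(t + k / 2, y + k / 2 * s2)
    s4 = f(t + k, y + k * s3)
    return y + (k / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)

def filtered_ms(f, dfdy, y0, k, nsteps, N0=6):
    """Integrate y' = f(t, y); apply P_7^(-3) to every N0-th level (N0 >= 6)."""
    y = np.empty(nsteps + 1)
    y[0] = y0
    y[1] = rk4_step(f, 0.0, y0, k)               # one-step starter
    for n in range(1, nsteps):
        y[n + 1] = milne_simpson_step(f, dfdy, n * k, y[n - 1], y[n], k)
        if (n + 1) % N0 == 0:
            y[n + 1] = P7_M3 @ y[n - 5 : n + 2]  # filter levels n-5, ..., n+1
            # restart: the next MS step uses (y[n], filtered y[n+1]) as its pair
    return y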

The absolute stability regions of the filtered MS methods are found in the following manner. For example, the stability region that results from applying filter $P_7^{(-3)}$ with $N_0 = 6$ is found by applying the method to the test problem (3). Then the amplification factor $R(z)$ of the method, where $z = k\lambda$, is found by computing

$y_1 = \left( 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \right) y_0$ (RK4)

$y_2 = \frac{\left(1 + \frac{z}{3}\right) y_0 + \frac{4}{3} z\, y_1}{1 - \frac{z}{3}} = \frac{y_0 \left( z^5 + 4 z^4 + 12 z^3 + 24 z^2 + 30 z + 18 \right)}{18 - 6z}$ (MS)

$\vdots$

$y_6 = \frac{\left(1 + \frac{z}{3}\right) y_4 + \frac{4}{3} z\, y_5}{1 - \frac{z}{3}} = \frac{y_0 \left( 65 z^9 + 260 z^8 + 954 z^7 + 2256 z^6 + 4147 z^5 + 5754 z^4 + 5976 z^3 + 4428 z^2 + 2106 z + 486 \right)}{-2 z^5 + 30 z^4 - 180 z^3 + 540 z^2 - 810 z + 486}$ (MS)

and

$\hat{y}_6 = \frac{1}{64}\left( 5 y_0 - 18 y_1 + 15 y_2 + 20 y_3 - 45 y_4 + 30 y_5 + 57 y_6 \right) \quad \text{(apply filter)} = \frac{K y_0}{192 (z - 3)^6} = R(z)\, y_0,$

where

$K = -4431 z^{10} - 8886 z^9 - 14112 z^8 + 36630 z^7 + 226911 z^6 + 614736 z^5 + 1083456 z^4 + 1296000 z^3 + 1073088 z^2 + 559872 z + 139968.$

The absolute stability region contains all $z \in \mathbb{C}$ for which $|R(z)| \le 1$.
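In practice the region can also be mapped numerically rather than symbolically: for each z on a grid, run one cycle (RK4 start, $N_0 - 1$ MS steps, filter) on the test equation and test $|R(z)| \le 1$. A minimal Python sketch for the backward filter $P_{-3}$, assuming the filter_coefficients helper from Section 4:

import numpy as np

def amplification(z, a, N0):
    """R(z) for one cycle: RK4 start, N0 - 1 MS steps, then the filter P_-3."""
    y = np.empty(N0 + 1, dtype=complex)
    y[0] = 1.0
    y[1] = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24      # RK4 on y' = lambda*y
    for n in range(1, N0):
        y[n + 1] = ((1 + z / 3) * y[n - 1] + (4 * z / 3) * y[n]) / (1 - z / 3)
    return a @ y[N0 - 6 : N0 + 1]                       # filter the last 7 levels

a = filter_coefficients(-3)
xr, yi = np.meshgrid(np.linspace(-4, 1, 500), np.linspace(-4, 4, 500))
Z = xr + 1j * yi
inside = np.abs(np.vectorize(lambda z: amplification(z, a, 6))(Z)) <= 1.0
# contouring the boundary of `inside` reproduces a region like those in Figure 1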

Figure 1 shows the absolute stability regions of filters $P_7^{(0)}$, $P_7^{(-1)}$, $P_7^{(-2)}$, and $P_7^{(-3)}$ applied every $N_0$ steps for $N_0 = 6$, $N_0 = 10$, $N_0 = 15$, and $N_0 = 20$. The regions of the three forward biased filters are not shown, but the stability regions of $P_7^{(3)}$ are very similar to those of $P_7^{(-3)}$, the regions of $P_7^{(2)}$ are similar to those of $P_7^{(-2)}$, and the regions of $P_7^{(1)}$ are similar to those of $P_7^{(-1)}$. The trend that is observed is that increasing the frequency of filtering causes more of the left half-plane to be included and some of the imaginary axis coverage to be lost. This indicates that if the eigenvalues of the coefficient matrix (or of the Jacobian matrix if the system is nonlinear) of the ODE system are purely imaginary or have small negative real parts, only very infrequent filtering will be required for stability. However, if the eigenvalues have significant negative real parts, more frequent filtering will be necessary.

In general, the centered $P_7^{(0)}$ and marginally forward $P_7^{(1)}$ and backward $P_7^{(-1)}$ biased filters have the largest stability regions, while the more extreme forward and backward biased filters such as $P_7^{(3)}$ and $P_7^{(-3)}$ have smaller stability regions. Filter $P_7^{(-3)}$ has a smaller stability region, but is attractive since it does not require any additional time levels to be computed. Figure 2 shows the stability regions of the four filters in Figure 1 for filtering every 6 and 20 steps in the same image.

Figure 1. Absolute stability regions in the complex plane of filtering every $N_0$th step, from outer to inner: $N_0 = 6$, $N_0 = 10$, $N_0 = 15$, and $N_0 = 20$. Top left: $P_7^{(0)}$. Top right: $P_7^{(-1)}$. Bottom left: $P_7^{(-2)}$. Bottom right: $P_7^{(-3)}$.

Figure 2. Absolute stability regions in the complex plane of filtering every 6 steps (left) and 20 steps (right), listed from most to least negative real axis coverage: $P_7^{(-1)}$, $P_7^{(-2)}$, $P_7^{(0)}$, and $P_7^{(-3)}$.

6. Numerical Examples

In the numerical examples, first a nonlinear ODE is considered, and then linear ODE systems with coefficient matrices having purely imaginary eigenvalues and complex eigenvalues with negative real parts are considered. Next, a linear variable coefficient ODE is solved. Finally, the filtered MS method is used in a method of lines approach to approximate the time derivative in linear PDEs, which is where the filtered MS method is very competitive with standard methods for this type of problem.

6.1. Nonlinear ODE

The nonlinear initial value problem

$y' = 1 - y^2, \quad y(0) = 0$ (29)

has the exact solution $y(t) = \tanh(t)$. For this equation $f_y = -2y$, which is negative for any $y > 0$ and therefore lies outside the stability interval of the MS method, so instability should be expected, as is illustrated in Figure 3. Since the stability region needs substantial negative real axis coverage, the filters have to be applied frequently. Table 3 lists the accuracy of the seven filtered MS methods over several time intervals with two different filtering frequencies, $N_0 = 5$ (except for $N_0 = 6$ with $P_7^{(-3)}$, which needs six previous time levels before it can first be applied) and $N_0 = 15$. All seven methods produce accurate solutions with $N_0 = 5$ at the final time of $t = 100$. However, $N_0 = 15$ is not frequent enough and the errors are larger as some instability begins to set in over long time intervals. If $N_0$ is increased to 25, instability sets in and the solutions from all seven filtered MS methods become unbounded.

6.2. Linear Constant Coefficient ODE Systems

The linear constant coefficient ODE system

$y' = A y$ (30)

is considered with two coefficient matrices, A. One has purely imaginary eigenvalues and one has complex eigenvalues with negative real parts.

Table 3. Errors from problem (29) with $k = 0.125$ and $N_0 = 5$ (Columns 3 to 5) and $N_0 = 15$ (Columns 6 to 8).

Figure 3. Exact (dashed) and MS (solid) solution of problem (29) with k = 0.125 up to time t = 30 (left) and t = 100 (right).

The matrix

$A = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}$ (31)

has eigenvalues $\pm 2i$. The initial condition for the problem is $y(0) = (1, 2)^T$, which leads to the exact solution

$y_1 = \cos 2t + 2 \sin 2t$

$y_2 = -\sin 2t + 2 \cos 2t.$

The eigenvalues can be scaled by k so that they lie in the stability region of the MS method, so filtering is not necessary for stability. However, the seven filters are applied to verify that the fourth-order convergence rate is preserved when the filters are applied. In the left image of Figure 4, a convergence plot is shown for the number of time steps varying from 40 to 5120 on the interval $[0, 8]$, with the filters applied every 25 steps. Once the convergence trend sets in, all lines have a slope of approximately four. While all seven filtered methods converge at the same rate, they do not reach the same solution at $t = 100$. At this time, the unfiltered MS method has error 2.2e-10, the $P_7^{(-3)}$ method 3.2e-11, and the $P_7^{(3)}$ method 4.2e-10. This is typical of problems where the unfiltered MS method is applicable and can be compared to the filtered methods. A particular filter may give a solution slightly more or less accurate than the MS method, but it is problem dependent as to which filter gives the most accurate solution.

The matrix

$A = \begin{bmatrix} -4 & 2 & 0 & 2 \\ -4 & 1 & 1 & 2 \\ -2 & 1 & -1 & 2 \\ -1 & 1 & -1 & 0 \end{bmatrix}$ (32)

has eigenvalues $-1 + i$ and $-1 - i$, each of multiplicity two. The initial condition for the problem is $y(0) = (1, 0, 1, 0)^T$. The exact solution is

$y_1 = e^{-t} \cos t - 3 e^{-t} \sin t$

$y_2 = -3 e^{-t} \sin t$

$y_3 = e^{-t} \cos t - 2 e^{-t} \sin t$

$y_4 = -2 e^{-t} \sin t.$

Figure 4. Convergence plot of the filtered MS methods $P_7^{(-3)}$ (cyan), $P_7^{(-2)}$ (green), $P_7^{(-1)}$ (blue), $P_7^{(0)}$ (black), $P_7^{(1)}$ (magenta), $P_7^{(2)}$ (yellow), and $P_7^{(3)}$ (red). Left: purely imaginary eigenvalues (31) filtered every 25 steps. Right: complex eigenvalues with negative real part (32) filtered every 10 steps.

Since the eigenvalues have negative real parts, a more frequent application of the filters is necessary to obtain enough left half-plane coverage of the stability region. In the right image of Figure 4, a convergence plot is shown for the number of time steps varying from 40 to 2560 on the interval $[0, 8]$, with the filters applied every 10 steps. Once the convergence trend sets in, all lines have a slope of approximately four.

6.3. Linear Variable Coefficient System

This example considers the second-order ODE IVP

$y'' + t y' + y = 0, \quad y(0) = 0, \quad y'(0) = 1.$ (33)

The exact solution is $y(t) = e^{-t^2/2} \int_0^t e^{x^2/2}\, dx$. For computational purposes, the second-order ODE is converted to an equivalent first-order system of the form (30) with coefficient matrix

$A = \begin{bmatrix} 0 & 1 \\ -1 & -t \end{bmatrix}$ (34)

and initial condition $y(0) = (0, 1)^T$. At $t = 0$ the coefficient matrix has purely imaginary eigenvalues, but as the system is advanced in time the eigenvalues transition to being real and negative and grow in magnitude. Thus, overall this problem needs frequent filtering. Filters $P_7^{(-1)}$ and $P_7^{(0)}$ have stability regions with good negative real axis coverage, so they are applied every 5 steps. With a relatively large time step size of $k = 0.1$, the solution is advanced to time $t = 20$. At this time, the error from the $P_7^{(-1)}$ solution is 1.87e-4 and the error from the $P_7^{(0)}$ solution is 1.36e-4.

6.4. Linear PDEs, Method of Lines, and Advection

The advection equation

$u_t - u_x = 0$ (35)

is considered on the interval $[0, 1]$ with periodic boundary conditions. The initial condition is taken from the exact solution $u(x, t) = \sin^{40}(\pi (x + t))$. The space derivative is discretized with the Fourier pseudospectral method [12] using $M = 80$ points. The eigenvalues of the first-order Fourier pseudospectral differentiation matrix are purely imaginary and are given by $iw$ where $w = 2\pi\kappa$ for integer wavenumbers $\kappa = -M/2 + 1, \ldots, M/2 - 1$. The eigenvalues lie in the interval $[-\pi i / \Delta x, \pi i / \Delta x]$ on the imaginary axis. The MS method is applicable without filtering. This is another example where infrequent filtering may slightly improve the accuracy of the solution.
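A sketch of this spatial setup is given below. It uses the standard cotangent formula for the Fourier pseudospectral differentiation matrix, rescaled from the usual period-$2\pi$ grid to period 1; the details are a plausible reconstruction in Python rather than the paper's code. The linear MS advance sketched in Section 3 then applies with A = D.

import numpy as np

M = 80
idx = np.arange(M)
d = idx[:, None] - idx[None, :]                 # index differences i - j
D = np.zeros((M, M))
off = d != 0
D[off] = np.pi * (-1.0) ** d[off] / np.tan(np.pi * d[off] / M)
lam = np.linalg.eigvals(D)                      # purely imaginary, within [-pi*i/dx, pi*i/dx]

x = idx / M
u0 = np.sin(np.pi * x) ** 40                    # initial condition
# advance in time with, e.g., ms_linear_advance(D, u0, u1, k, nsteps) from Section 3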

Table 4 gives the accuracy of the centered and backward biased filters up to time $t = 1000$ with solutions filtered every 10 steps and every 100 steps. The less frequently filtered solutions are slightly more accurate, but the more heavily filtered solutions retain good accuracy as well. The more computationally expensive forward biased filters were not considered in this example as they have not been shown to have any favorable properties when compared to the less expensive centered and backward biased filters. For comparison, the more computationally expensive explicit RK4 method, which is commonly used on this type of problem, was applied as well. The RK4 method was less accurate than the MS method or the three filtered MS methods.

The heat equation

$u_t = u_{xx}$ (36)

is considered on the domain $[0, 1]$ with initial condition $u(x, 0) = \sin(\pi x)$. Zero Dirichlet boundary conditions $u(0) = 0$ and $u(1) = 0$ are applied. The exact solution is $u(x, t) = e^{-\pi^2 t} \sin(\pi x)$. The space derivative is discretized with the Chebyshev pseudospectral method [12] on the Chebyshev-Gauss-Lobatto (CGL) points, which cluster densely around the boundary points and are less dense in the interior of the interval. The boundary conditions are enforced by setting the elements in the first and last rows of the derivative matrix to zero. This is a well-known "stiff" problem with large negative real eigenvalues. There are specifically designed numerical methods [7] that are appropriate for this type of problem, and the Milne-Simpson method, with its interval of absolute stability on the imaginary axis, is certainly not one of them. However, with the use of filters it can nevertheless be applied.
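The following sketch builds this discretization from the standard Chebyshev differentiation matrix (Trefethen's construction) mapped from $[-1, 1]$ to $[0, 1]$, with the first and last rows of the second-derivative matrix zeroed as described in the text; it is an illustration under these assumptions, not the authors' code.

import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and CGL points on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

N = 19                                   # 20 CGL points
D, xi = cheb(N)
x = (xi + 1.0) / 2.0                     # map the points from [-1, 1] to [0, 1]
D2 = (2.0 * D) @ (2.0 * D)               # second-derivative matrix on [0, 1]
D2[0, :] = 0.0
D2[-1, :] = 0.0                          # zero the Dirichlet boundary rows
u0 = np.sin(np.pi * x)                   # initial condition; advance u' = D2 u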

The space interval is discretized with 20 CGL points and the $P_7^{(-3)}$ filtered MS method with a time step size of $k = 0.00005$ is applied. The solution is advanced to time $t = 0.4$, where the numerical and exact solutions differ by 7.5e-14. The scaled eigenvalues and the absolute stability region of the MS method filtered every 6 time steps by $P_7^{(-3)}$ are shown in Figure 5.

Table 4. Errors from problem (35) with $k = 0.001$ and $N_0 = 10$ (Columns 3 to 5) and $N_0 = 100$ (Columns 6 to 8).

Figure 5. Stability region of the MS method filtered every 6 steps by $P_7^{(-3)}$ and the scaled eigenvalues of the space discretization of Equation (36).

7. Conclusions

The Milne-Simpson method has fourth-order convergence, which is the highest order theoretically possible for a two-step LMM. However, the method is only weakly stable, has only an interval of absolute stability on the imaginary axis, and is implicit. These factors lead to the MS method seldom being used in applications.

Seven-point filters which retain fourth-order accuracy can be used to expand the size of, and to manipulate the location of, the absolute stability region of the method. Systems with coefficient matrices or Jacobian matrices having eigenvalues with negative real parts require more frequent filtering, in the range of every 5 to 20 time steps. Even if the eigenvalues are purely imaginary and lie within the stability region of the unfiltered MS method, infrequent filtering every 100 to 200 steps can be beneficial to accuracy and stability over long time intervals. The enhanced stability properties of the filtered MS method make it applicable to a much larger class of problems than the unfiltered MS method. The filtered MS method is in particular an attractive time integrator in a method of lines approach for linear PDEs, where it is more efficient than the fourth-order Runge-Kutta methods that are very popular for this type of problem.

All seven of the filters can be effective, but we are unable to label one filter as definitively the best. Filters $P_7^{(1)}$, $P_7^{(2)}$, and $P_7^{(3)}$ require 4, 5, and 6 extra time levels, respectively, to be computed and disregarded, and show no advantage over the other four filters. Due to their computational expense, they cannot be recommended for applications. Filters $P_7^{(0)}$ and $P_7^{(-1)}$ allow for the largest stability domain coverage in the left half-plane and are well suited to problems involving eigenvalues with large negative real parts. Filter $P_7^{(-3)}$ is the only one of the filters that does not require additional time levels, but its application does not expand the stability region as much as the other filters and, as a result, will require a smaller time step size for stability than some of the other filtered MS methods.

The filtering technique that has been used in this work was previously used with the leapfrog (midpoint) method [4]. Both the leapfrog and Milne-Simpson methods are derived using polynomial basis functions. In particular, the leapfrog method is derived by discretizing the time derivative in (1) by a three-point centered polynomial approximation. Another way to approximate the derivative in (1) is by a three-point centered approximation using Radial Basis Functions (RBFs) [13]. The resulting method based on RBFs is also weakly stable. Our future work involves applying the filtering technique to methods based on RBFs.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Dahlquist, G. (1985) 33 Years of Numerical Instability, Part I. BIT Numerical Mathematics, 25, 188-204.
https://doi.org/10.1007/BF01934997
[2] Iri, M. (1963) A Stabilizing Device for Unstable Numerical Solutions of Ordinary Differential Equations—Design Principle and Application of a Filter. Journal of the Information Processing Society of Japan, 4, 249-260.
[3] Aluthge, A. (1985) Filtering and Extrapolation Techniques in the Numerical Solution of Ordinary Differential Equations. Master’s Thesis, University of Ottawa, Ottawa, 1-70.
[4] Aluthge, A., Sarra, S.A. and Estep, R. (2016) Filtered Leapfrog Time Integration with Enhanced Stability Properties. Journal of Applied Mathematics and Physics, 4, 1354-1370.
https://doi.org/10.4236/jamp.2016.47145
[5] Butcher, J. (2003) Numerical Methods for Ordinary Differential Equations. Wiley, Hoboken.
https://doi.org/10.1002/0470868279
[6] Hairer, E., Norsett, S. and Wanner, G. (2000) Solving Ordinary Differential Equations I: Nonstiff Problems. Springer, Berlin.
[7] Hairer, E. and Wanner, G. (2000) Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. 2nd Edition, Springer, Berlin.
[8] Henrici, P. (1962) Discrete Variable Methods in Ordinary Differential Equations. Wiley, Hoboken.
[9] Iserles, A. (1996) A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press, Cambridge.
[10] Lambert, J. (1973) Computational Methods in Ordinary Differential Equations. Wiley, Hoboken.
[11] Gragg, W.B. (1963) Repeated Extrapolation to the Limit in the Numerical Solution of Ordinary Differential Equations. Ph.D. Thesis, University of California, Los Angeles (UCLA), Los Angeles, 1-111.
[12] Canuto, C., Hussaini, M.Y., Quarteroni, A. and Zang, T.A. (1988) Spectral Methods in Fluid Dynamics. Springer, Berlin.
https://doi.org/10.1007/978-3-642-84108-8
[13] Sarra, S.A. and Kansa, E.J. (2009) Multiquadric Radial Basis Function Approximation Methods for the Numerical Solution of Partial Differential Equations. Advances in Computational Mechanics, 2, 1-199.
