Filtered Leapfrog Time Integration with Enhanced Stability Properties

The Leapfrog method for the solution of Ordinary Differential Equation initial value problems has been historically popular for several reasons: the method has second order accuracy, requires only one function evaluation per time step, and is non-dissipative. Despite these attractive properties, the method has some unfavorable stability properties. The absolute stability region of the method is only an interval located on the imaginary axis, rather than a region in the complex plane. The method is only weakly stable and thus exhibits computational instability in sufficiently long time integrations, even over intervals of finite length. In this work, the use of filters is examined for the purposes of both controlling the weak instability and enlarging the absolute stability region of the method.


Introduction
In this work the stability properties of a well known linear multistep method (LMM) are examined. The method is based on a three point polynomial finite difference approximation of the first derivative. The Leapfrog method (also known as the explicit midpoint rule or Nyström's method) for the first order Ordinary Differential Equation (ODE) initial value problem (IVP) is a simple three-level scheme with second order accuracy. The leapfrog scheme has a long history in applications, including general oceanic circulation models and atmospheric models that go back at least as far as the pioneering work of Richardson [1] (p. 150) in 1922, where it was called the step-over method. In this context, the time integration method is used to advance in time a semi-discrete system in which the space derivatives of a partial differential equation have been discretized by an appropriate method. Such an application is called the method of lines. The leapfrog method remains relevant today as it is a component of many numerical weather forecasting systems, and most current geophysical fluid dynamics (GFD) codes use a form of leapfrog time differencing. Reference [2] lists 25 major GFD codes currently using this approach. The recent textbook [3] (p. 90) describes the leapfrog method as an important method for numerical weather forecasting. The leapfrog method is efficient as it requires only one function evaluation per time step, a function evaluation referring to an evaluation of the function F in Equation (1). The scheme has the desirable property of being non-dissipative, but it suffers from phase errors. The most serious drawback of the scheme is its well documented [4] weak stability, which results in computational instability when the method is used in long time integrations. Approaches that have been used to control the weak instability of the leapfrog method involve a process called filtering. A filtering process replaces a time level with an average of that time level and neighboring time levels. The use of filtering in numerical ODE methods originated in [5] with a two-level averaging. Filtering methods that have been used to control the weak instability of the leapfrog method include the following. The Asselin time filter [6] adds a small artificial viscosity to the leapfrog method and is applied at each time step. Gragg [7] used a three-point symmetric filter to replace a time level of the leapfrog solution after every N steps. Bulirsch and Stoer [8] extended Gragg's method to build an extrapolation scheme. In [9] filtering was shown not to be necessary in extrapolation schemes involving the leapfrog method, since the Euler starting step and the extrapolation process alone were able to control the weak instability. Five-point symmetric and asymmetric filters were considered by Iri [10], who recommended the use of an asymmetric filter. Stabilizing the leapfrog method continues to be an active research area; examples of recent work in the area can be found in [11] and [12].
In the next section LMMs and their stability properties are reviewed. Then three and five point filters are developed, and it is shown that a five-point symmetric filter can be effectively used to control the weak instability that is inherent to the leapfrog method. After the five-point symmetric filter is shown to be effective, ways to incorporate the filter into leapfrog time differencing schemes are examined. The application of the filter leads to new LMMs as well as various filter and restart algorithms. The ideal application of the filter will control the weak instability while retaining the favorable features of the method: efficiency, second order accuracy, an absolute stability region with imaginary axis coverage, and the non-dissipative property. Finally, the conclusions are illustrated with numerical examples.

Linear Multistep Methods
A general s-step linear multistep method for the numerical solution of the ODE IVP (1) is of the form

∑_{m=0}^{s} α_m y_{n+m} = k ∑_{m=0}^{s} β_m F(t_{n+m}, y_{n+m}),  n = 0, 1, …,   (2)

where the α_m and β_m are given constants and k is the size of the time step. It is conventional to normalize (2) by setting α_s = 1. When β_s = 0 the method is explicit; otherwise it is implicit. In order to start the method, the first s − 1 time levels need to be calculated by a one-step method such as a Runge-Kutta method. The stability of ODE methods is a relatively new concept in numerical methods and was rarely if ever used before the 1950's [13]. The first description of numerical instability of numerical ODE methods may have been in 1950 in [14]. It was the advent of high speed electronic computers that brought the concept of stability to the forefront. Varying definitions and types of stability for ODE methods have been proposed over the last 60 years. Indeed, when surveying the literature and textbooks published over that time period, it is not straightforward to grasp what is meant by the stability of a LMM due to the multitude of definitions and the same concepts often being given more than one name.
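The general form above can be made concrete with a short sketch. The helper below advances one step of an arbitrary explicit LMM given its coefficient lists; the leapfrog method of this paper is then the 2-step instance y_{n+2} − y_n = 2k F(t_{n+1}, y_{n+1}), i.e. α = (−1, 0, 1), β = (0, 2, 0). The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def explicit_lmm_step(alpha, beta, y_hist, t_hist, k, F):
    """One step of an explicit s-step LMM (beta[s] == 0, alpha[s] == 1).
    y_hist, t_hist hold the s most recent levels y_n .. y_{n+s-1}."""
    s = len(alpha) - 1
    rhs = k * sum(beta[m] * F(t_hist[m], y_hist[m]) for m in range(s))
    rhs -= sum(alpha[m] * y_hist[m] for m in range(s))
    return rhs / alpha[s]                     # y_{n+s}

# Leapfrog as the 2-step instance: y_{n+2} - y_n = 2k F(t_{n+1}, y_{n+1})
alpha = [-1.0, 0.0, 1.0]
beta = [0.0, 2.0, 0.0]
y2 = explicit_lmm_step(alpha, beta, [1.0, 1.1], [0.0, 0.1], 0.1,
                       lambda t, y: y)       # = 1.0 + 2*0.1*1.1 = 1.22
```
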
The stability properties of LMMs are examined by applying the method to the ODE

y′ = λy,  y(0) = y_0,   (3)

where λ is a complex number with ℜ(λ) ≤ 0. Equation (3) is commonly known as the Dahlquist test equation.
The first concepts of stability result from considering the simple case of λ = 0 in (3), for which the solution is the constant y_0. In this case the application of the LMM (2) to the test equation results in a difference equation with a general solution of the form

y_n = ∑_i c_i ζ_i^n,   (4)

where the ζ_i are the zeros of the first characteristic polynomial

ρ(ζ) = ∑_{m=0}^{s} α_m ζ^m   (5)

of the LMM (2). For a consistent method one of the terms, say ζ_1, approaches the solution of (3) as n → ∞, while the other roots correspond to extraneous solutions. The following stability definitions quantify to what extent the computed solutions of (4) remain bounded: • A method is stable (also called zero stable) if ρ has a simple root equal to one and if all the other roots of ρ are less than one in absolute value. The criteria that the roots must satisfy for the method to be stable are called a root condition.
• A weakly stable method has all roots of ρ satisfying |ζ| ≤ 1 and has at least one root of absolute value one other than the simple root ζ = 1. The solutions may remain bounded for a fixed interval of time [0, T] but will eventually become unbounded as t → ∞.
• A method for which a root of ρ is greater in magnitude than one is unstable. In this case the solutions will rapidly become unbounded.
A stronger definition of stability can be made by considering non-zero λ in Equation (3). In this case the exact solution of (3) is y(t) = y_0 e^{λt}, which approaches zero as t → ∞ since the real part of λ is negative. A method that produces numerical solutions with the same asymptotic behavior, that is y_n → 0 as t_n → ∞, for a fixed value of k is said to be absolutely stable. This type of stability has also been called asymptotic stability, or eigenvalue stability. Application of the LMM (2) to the test equation again results in a difference equation with a general solution of the form (4), but this time the ζ_i are the zeros of the stability polynomial

π(ζ; z) = ρ(ζ) − z σ(ζ),   (6)

where z = kλ, ρ is the first characteristic polynomial (5), and σ(ζ) = ∑_{m=0}^{s} β_m ζ^m is the second characteristic polynomial of the LMM (2). The method is absolutely stable for a particular value of z if the zeros of (6) satisfy the root condition. The set of all numbers z in the complex plane for which the zeros of (6) satisfy the root condition is called the absolute stability region of the method. The boundary of the absolute stability region is found by plotting the root locus curve, that is, the curve traced out by the ratio of the characteristic polynomials of the method as ζ traverses the unit circle. For z in the stability region, the numerical solutions exhibit the same asymptotic behavior as does the exact solution of (3). For a linear system of ODEs, a necessary condition for absolute stability is that all of the scaled (by k) eigenvalues of the coefficient matrix of the system lie in the stability region. For nonlinear systems, the system must be linearized and the scaled eigenvalues of the Jacobian matrix of the system analyzed at each t_n. Standard references on numerical ODE methods [4] [15]-[19] can be consulted for more details.
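The root locus description of the stability boundary can be checked numerically. The sketch below (an illustration, not from the paper) traces z(θ) = ρ(e^{iθ})/σ(e^{iθ}); for the leapfrog polynomials ρ(ζ) = ζ² − 1 and σ(ζ) = 2ζ this collapses to z = i sin θ, recovering the imaginary-axis interval [−i, i].

```python
import numpy as np

def boundary_locus(rho_coeffs, sigma_coeffs, n=400):
    """Root-locus curve z(theta) = rho(zeta)/sigma(zeta), zeta = e^{i*theta}.
    Coefficient lists are in ascending order: c[m] multiplies zeta**m."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    zeta = np.exp(1j * theta)
    rho = np.polyval(rho_coeffs[::-1], zeta)      # polyval wants degree-descending
    sigma = np.polyval(sigma_coeffs[::-1], zeta)
    return rho / sigma

# Leapfrog: rho(zeta) = zeta^2 - 1, sigma(zeta) = 2*zeta
z = boundary_locus([-1.0, 0.0, 1.0], [0.0, 2.0])
# The locus is i*sin(theta): purely imaginary, covering [-i, i]
```
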

The Leapfrog Method
The leapfrog method for the system (1) is

y_{n+1} = y_{n−1} + 2k F(t_n, y_n).   (9)

The method is constructed by replacing the time derivative in (1) by the second order accurate central difference approximation y′(t_n) ≈ [y(t_{n+1}) − y(t_{n−1})]/(2k) of the first derivative. The method needs two time levels to start: y_0, which is provided by the initial condition, and y_1. Typically, the time level y_1 is obtained by an explicit Euler step

y_1 = y_0 + k F(t_0, y_0).   (11)

The characteristic polynomials of the leapfrog method are ρ(ζ) = ζ² − 1 and σ(ζ) = 2ζ. The zeros of ρ are ζ = ±1, and the method is only weakly stable.
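A minimal sketch of the scheme as just described — one Euler starting step (11), then leapfrog steps (9) — applied to the scalar problem y′ = −y:

```python
import numpy as np

def leapfrog(F, t0, y0, k, n_steps):
    """Leapfrog: y_{n+1} = y_{n-1} + 2k F(t_n, y_n), with an Euler starting step."""
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y0 + k * F(t0, y0)                    # explicit Euler starting step
    for n in range(1, n_steps):
        y[n + 1] = y[n - 1] + 2.0 * k * F(t0 + n * k, y[n])
    return y

# y' = -y, y(0) = 1; exact solution e^{-t}
y = leapfrog(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```
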

Filter Construction
It is shown in [20] that the solution of the IVP (1) obtained by the leapfrog method, using starting values only assumed to have an asymptotic expansion of a very general form, has an asymptotic expansion of the form

y_n = y_p(t_n) + (−1)^n y_c(t_n),

in which y_p is referred to as the physical mode and y_c as the computational mode. The physical mode y_p consists of the exact solution y_0 e^{λt} and a truncation error that goes to zero as k → 0. The computational mode contributes a non-physical oscillatory factor to the solution which grows in size with increasing t. References [7] and [20] can be consulted for the details of the derivation of the expressions for y_p and y_c. In order to damp the computational mode while retaining the second order accurate approximation of the physical mode, a filter of the form

P = ∑_{j=−r_1}^{r_2} a_j E^j

is considered, where a_j ∈ ℝ, r_1, r_2 ∈ ℤ⁺, and E is the forward shift operator defined by E y(t_n) = y(t_n + k). Applying the filter to the physical mode gives the expansion (18), and applying the filter to the computational mode gives the expansion (19). Equation (18) suggests that in order for the physical mode to be kept unchanged up to the order k^N terms, P must satisfy condition (20). In a similar manner, Equation (19) reveals that in order for the computational mode to be reduced by a factor of k^M, P must satisfy condition (21).

Three Point Filter
First, filters of length three are considered. A three point symmetric filter is of the form

P_3 = a_{−1} E^{−1} + a_0 + a_1 E,  with a_{−1} = a_1.   (22)

The filter coefficients can be found by equating the coefficients of (22) with the coefficients from Equation (20) and Equation (21). This can be done by setting N = 1 and M = 1, for which the first terms in the expansions of (20) and (21), respectively, are obtained after the exponential term has been replaced by its linear Taylor series expansion. Equating the constant terms of (23) and (25) then yields an equation that determines the coefficients. Three point filters alter the accurate physical mode by a factor of k and also damp the computational mode by a factor of k.
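As an illustration, the classical symmetric three-point weights (1/4, 1/2, 1/4) — used here as an assumed stand-in, since the paper's Table 1 is the authoritative source — satisfy the two constant-term conditions: the weights sum to one (physical mode preserved) and their alternating sum vanishes (constant part of the (−1)^n computational mode annihilated):

```python
import numpy as np

# Assumed classical symmetric three-point weights (stand-in for Table 1):
a = np.array([0.25, 0.5, 0.25])                 # a_{-1}, a_0, a_1
j = np.array([-1, 0, 1])

physical_sum = float(np.sum(a))                 # = 1: physical mode preserved
comp_sum = float(np.sum((-1.0) ** j * a))       # = 0: computational mode damped

# Filtering a value contaminated by the (-1)^n mode removes the oscillation:
y = np.array([0.9, 1.1, 0.9])                   # smooth value 1 plus 0.1*(-1)^n
filtered = float(a @ y)                         # recovers the smooth value 1
```
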
The coefficients of the non-symmetric three point filters can be found in a similar manner. Their coefficients are listed in Table 1.

Five-Point Filter
The consideration of a five point filter presents two more degrees of freedom that can be used to develop a filter that alters the physical mode by one less power of k and that damps the computational mode by one more power of k in comparison to a three point filter. That is, in the asymptotic expansion of the physical mode N = 2 and in the asymptotic expansion of the computational mode M = 2. A five point symmetric filter is of the form

P_5 = a_{−2} E^{−2} + a_{−1} E^{−1} + a_0 + a_1 E + a_2 E^2,  with a_{−2} = a_2 and a_{−1} = a_1.

In this case the first few terms in the expansions of (20) and (21) are obtained after the exponential term has been replaced by its Taylor series expansion.
Equating the constant terms (32) and (34) results in one equation, and equating the constant and linear terms in (33) and (39), in which the exponential term has been replaced by its Taylor series expansion, results in further equations. The resulting linear system determines the coefficients of the five point symmetric filter. The coefficients of the non-symmetric five point filters can be found in a similar manner. Their coefficients are listed in Table 2.
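The small linear system described above can be sketched numerically. The three constraints below are a plausible reconstruction (weights sum to one, the alternating sum vanishes so the constant term of the computational mode is annihilated, and a second-moment condition supplies the extra order of accuracy); the paper's Table 2 is the authoritative source for the actual coefficients:

```python
import numpy as np

# Unknown symmetric weights: b = a_{±2}, c = a_{±1}, d = a_0.
# Assumed constraints (reconstruction, not verbatim from the paper):
#   2b + 2c + d = 1   (sum a_j = 1: physical mode preserved)
#   2b - 2c + d = 0   (sum (-1)^j a_j = 0: computational mode damped)
#   8b + 2c     = 0   (sum j^2 a_j = 0: extra order of accuracy)
A = np.array([[2.0, 2.0, 1.0],
              [2.0, -2.0, 1.0],
              [8.0, 2.0, 0.0]])
rhs = np.array([1.0, 0.0, 0.0])
b, c, d = np.linalg.solve(A, rhs)
# Under these assumed constraints: b = -1/16, c = 1/4, d = 5/8
```
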
The advantages of the five point filter when compared to the three point filter are now clear. The five point filter retains the second order accuracy of the leapfrog method and damps the computational mode by a factor of k². In contrast, the three point filter reduces the accuracy of the method to first order and only damps the computational mode by a factor of k. It remains to find the best way to apply the filter. In the following sections the filter is applied to derive new LMMs, and various filter and restart algorithms are analyzed.

New LMMs
Incorporating the filter in each leapfrog step, by averaging y_{n−1} with a weighted combination of neighboring time levels, results in a new LMM. This approach has the advantage that the result may be analyzed within the LMM framework. The roots of the first characteristic polynomial of the method are ζ = 1 and two other roots with magnitude one-half. The method is zero-stable but only first order accurate. The maximum stability ordinate along the imaginary axis is approximately 3/4 and along the real axis −1/2.
A strong point of the leapfrog method, especially in the method of lines integration of wave propagation type PDEs, is that it is non-dissipative. The phase and amplitude errors of the two new LMMs are compared with the phase and amplitude errors of the leapfrog method in Figure 2. The P_5^{(0)} LMM has phase errors similar to those of the leapfrog method and is by far the less dissipative of the two filtered methods. The P_3^{(0)} LMM suffers from larger phase errors than does the leapfrog method and is very dissipative. The effects of the dispersion and dissipation errors are illustrated in example 7.3.

Filter Application
In this section, four filter and restart algorithms are explored. For each option the effects on accuracy and stability properties are examined. The options include:
• Method 1 (M1). Filter after every step.
• Method 2 (M2). Filter, Euler restart every N steps.
• Method 3 (M3). Alternative starting/restarting-algorithm 1.
• Method 4 (M4). Alternative starting/restarting-algorithm 2.
To allow for the absolute stability region to be examined, all four algorithms have a start up procedure followed by multiple steps of leapfrog and/or filtering. Then after a fixed number, N, of steps the algorithm is restarted with the start up procedure. In each of the four algorithms N = 20 or N = 21 has been used. In most cases, increasing N results in the method having an absolute stability region with less coverage of the left half plane and with more imaginary axis coverage. The desired filtering algorithm will maintain second order accuracy, retain as much of the stability interval on the imaginary axis as possible, and extend the absolute stability region into the left half plane.

Filter after Every Step (M1)
The first approach uses an explicit Euler step (11) to calculate the first time level. Each subsequent step takes one leapfrog step to the time level n, plus two additional leapfrog steps to calculate time levels n + 1 and n + 2 that are needed by the filter. Then the P_5^{(0)} filter is applied and time level n is replaced with the filtered average. Time levels n + 1 and n + 2 are discarded, and the process is repeated until time level N is reached. At that time the method is restarted using time level N as the initial condition. The M1 approach retains second order accuracy but requires three function evaluations per time step.
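A sketch of M1 as described, under stated assumptions: the five-point weights below are the assumed stand-in (−1/16, 1/4, 5/8, 1/4, −1/16) rather than the paper's Table 2 values, and filtering is taken to begin once five levels are available (the paper's exact startup handling may differ):

```python
import numpy as np

# Assumed five-point symmetric weights (stand-in for the paper's Table 2):
W = np.array([-1 / 16, 1 / 4, 5 / 8, 1 / 4, -1 / 16])

def m1_segment(F, t0, y0, k, N):
    """One M1 segment of N steps: Euler start, then filtered leapfrog steps."""
    y = [y0, y0 + k * F(t0, y0)]                          # Euler starting step
    for n in range(1, N):
        y.append(y[n - 1] + 2 * k * F(t0 + n * k, y[n]))  # leapfrog to level n+1
        m = len(y) - 1                                    # current level index
        if m >= 2:
            # two auxiliary leapfrog levels, used only by the filter then discarded
            a1 = y[m - 1] + 2 * k * F(t0 + m * k, y[m])
            a2 = y[m] + 2 * k * F(t0 + (m + 1) * k, a1)
            y[m] = W @ np.array([y[m - 2], y[m - 1], y[m], a1, a2])
    return y

def m1(F, t0, y0, k, n_steps, N=20):
    """Restart with the Euler start every N steps (method M1)."""
    y = [y0]
    while len(y) - 1 < n_steps:
        n = min(N, n_steps - (len(y) - 1))
        seg = m1_segment(F, t0 + (len(y) - 1) * k, y[-1], k, n)
        y.extend(seg[1:])
    return np.array(y)

# y' = -y, y(0) = 1 on [0, 1]; exact solution e^{-t}
y = m1(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

Note the three function evaluations per step (one for the leapfrog step, two for the auxiliary levels), matching the cost stated in the text.
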
The absolute stability region of the M1 approach with N = 20 is shown in Figure 3. In the left image of the figure the stability region is seen to cover significant area in the left half plane and appears to retain coverage of the portion of the imaginary axis over the interval [−i, i]. However, the right image of the figure zooms in on the imaginary axis and reveals that a large portion of the interval is not included in the stability region.
The absolute stability regions of methods M1, M2, M3, and M4 are found in the following manner. For example, the stability region of M1 with N = 3 is found by applying the method to the test problem (3). Then the amplification factor R(z) of the method over one restart cycle is found, and the absolute stability region contains all z ∈ ℂ for which |R(z)| ≤ 1.
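The amplification-factor idea can be checked by brute force: run a method on the test equation with k = 1 (so z = kλ) and measure the growth after a fixed number of steps. The sketch below, an illustration rather than the paper's procedure, does this for the Euler-started leapfrog method itself; a point on the negative real axis lies outside the leapfrog stability interval and the solution grows, while a point inside the imaginary-axis interval stays bounded:

```python
import numpy as np

def leapfrog_run(F, t0, y0, k, n):
    """Euler start followed by n-1 leapfrog steps; returns levels 0..n."""
    y = [y0, y0 + k * F(t0, y0)]
    for m in range(1, n):
        y.append(y[m - 1] + 2.0 * k * F(t0 + m * k, y[m]))
    return np.array(y)

def R(z, N=20):
    """Growth after N steps on y' = lam*y with k = 1, so z = k*lam."""
    y = leapfrog_run(lambda t, y: z * y, 0.0, 1.0 + 0.0j, 1.0, N)
    return y[-1]

# z = -0.1 (negative real axis): outside the leapfrog interval, |R| grows.
# z = 0.5i (inside [-i, i]): the solution stays bounded.
```
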

Filter, Euler Restart Every N Steps (M2)
Method M2 approximates the first time level with Euler's method (11) and then advances in time with N + 2 leapfrog steps. Then the five point filter is applied and the value of y_N is assigned to be the filtered average. The method is then restarted with the initial condition y_N. The M2 approach remains second order accurate and requires slightly more than one function evaluation per time step; in the given example an average of 1.1 function evaluations per time step is needed. The main drawback of approach M2 is that the stability region (Figure 4) lacks any coverage along the imaginary axis. Method M2 is unstable for all nonzero values of z on the imaginary axis for which the leapfrog method is absolutely stable.

Alternative Start/Restart-Algorithm 1 (M3)
The absolute stability region of Euler's method is a circle of radius one centered at z = −1 in the complex plane. The region does not include any purely imaginary numbers. Calculating the first step of method M2 with Euler's method is what causes the method's stability region to lack imaginary axis coverage.
An alternative starting procedure is therefore used (see (46)). Method M3 requires an average of 1.25 function evaluations per time step in the given example. Method M4, described in the next section, makes one more modification to further increase the coverage of the stability region along the imaginary axis; it requires approximately 1.24 function evaluations per time step in the example given. Matlab source code that implements method M4 is available on the website of the second author at http://www.scottsarra.org/math/math.html.

Example 1
The purpose of the first example is to demonstrate the ability of the proposed methods to control the computational mode that is introduced by the leapfrog weak instability. The example considers the nonlinear IVP (47), and the same time step size is used with all methods. Figure 7 illustrates behavior typical of a weakly stable method. Over the finite time interval [0, 18] the leapfrog method gives a good approximation to the true solution. Table 3 reports the solution having five digits of accuracy at t = 5. However, after approximately t = 18, non-physical oscillations appear in the solution and eventually the solution becomes unbounded.
Table 3 records the results of applying the two new LMMs from Section 2 as well as the four filter and restart algorithms. Additionally, the five point asymmetric filter is applied. At t = 5 and later, all seven of the filtered methods have either zero or near zero error. The numerical solutions nearly exactly approximate the exact solution, which approaches the constant y = 1 as t → ∞. In this example, all seven methods have effectively damped the computational mode and prevented the numerical solution from becoming infinite. In five of the methods, non-physical oscillations are not observed graphically, while in the remaining methods some oscillations appear before the computational mode is sufficiently damped.
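Assuming the IVP is y′ = 1 − y², y(0) = 0 (inferred from the stated exact solution y(t) = tanh t, not quoted directly from the paper), the finite-interval accuracy and the source of the eventual instability can both be seen in a short sketch:

```python
import numpy as np

# IVP inferred from the stated exact solution y(t) = tanh(t):
# y' = 1 - y^2, y(0) = 0 (an assumption consistent with the text).
F = lambda t, y: 1.0 - y * y
k = 0.1
n = int(5.0 / k)
y = np.empty(n + 1)
y[0] = 0.0
y[1] = y[0] + k * F(0.0, y[0])                   # Euler starting step
for m in range(1, n):
    y[m + 1] = y[m - 1] + 2.0 * k * F(m * k, y[m])
err = abs(y[-1] - np.tanh(5.0))                  # accurate over a finite interval

# Why leapfrog eventually fails here: linearized about y = 1, the Jacobian is
# dF/dy = -2, so z = k*lambda = -0.2 and the parasitic root of
# zeta^2 - 2*z*zeta - 1 = 0 has magnitude |z - sqrt(z^2 + 1)| > 1,
# so the computational mode grows.
z = -0.2
parasitic = abs(z - (z * z + 1.0) ** 0.5)
```
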

Example 2
This example numerically calculates the order of convergence of the leapfrog method and the seven filtered methods. The test equation (3) is used. Some of the filtered solutions show oscillations prior to the computational mode being damped. The methods based on the three point filter are only first order accurate, while the LMM5, M3, and M4 methods retain the second order accuracy of the leapfrog method. In this example, the three second order accurate filtered methods are about a decimal place more accurate for a given value of k than is the leapfrog method.
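The slope-measurement procedure can be sketched for the leapfrog method itself (the filtered methods follow the same pattern): halve the time step and take the base-2 logarithm of the error ratio, which is the slope of the log-log convergence plot.

```python
import numpy as np

def leapfrog_err(N):
    """Error at t = 1 for y' = -y, y(0) = 1 with N leapfrog steps (Euler start)."""
    k = 1.0 / N
    y_prev, y_cur = 1.0, 1.0 - k                 # Euler starting step
    for n in range(1, N):
        y_prev, y_cur = y_cur, y_prev - 2.0 * k * y_cur
    return abs(y_cur - np.exp(-1.0))

# Observed order = slope of the log-log error-vs-k plot; halving k,
# the slope is log2 of the error ratio. Leapfrog should give about 2.
e1, e2 = leapfrog_err(160), leapfrog_err(320)
order = np.log2(e1 / e2)
```
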

Advection
The purpose of the third example is to give an illustration of the effects of the dispersion and dissipation properties of the two LMMs described in Section 2, and to illustrate the effectiveness of the M4 algorithm as a method of lines integrator of hyperbolic PDEs. A model hyperbolic PDE problem is the advection equation

u_t + a u_x = 0.

Conclusions
Incorporating the five point filter into the leapfrog method (9) results in a new second order accurate LMM (43) which is zero stable and has an absolute stability region that includes both coverage of a portion of the imaginary axis and a region in the left half plane. Of the four filter and restart algorithms described, algorithm M4 is the most attractive. The M4 method, which restarts after every 21 time steps, has an absolute stability region that includes nearly as much of the imaginary axis as does the leapfrog method, as well as a region in the left half plane. This allows the method to be an effective method of lines integrator for pure advection and advection dominated problems, as is illustrated in the examples. In addition to second order accuracy, the method also retains the computational efficiency of the leapfrog method. Method M4 is a potential replacement for existing leapfrog type time differencing schemes that are used in geophysical fluid dynamics codes.
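A minimal method-of-lines sketch for the model advection problem (plain leapfrog in time with second order centered differences in space on a periodic grid; the paper's M4 algorithm would add the filtering and restarts). The exact-in-time second level is an assumption made for simplicity; an Euler start also works.

```python
import numpy as np

# u_t + u_x = 0 on a periodic grid: centered differences + leapfrog.
m = 200
x = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
h = x[1] - x[0]
k = 0.5 * h                                   # CFL number 0.5 (stable: <= 1)
u_prev = np.sin(x)                            # initial condition, level 0
u_cur = np.sin(x - k)                         # exact-in-time start (assumption)
steps = int(round(2.0 * np.pi / k))           # integrate one full period
for n in range(steps - 1):
    dudx = (np.roll(u_cur, -1) - np.roll(u_cur, 1)) / (2.0 * h)
    u_prev, u_cur = u_cur, u_prev - 2.0 * k * dudx
err = float(np.max(np.abs(u_cur - np.sin(x - steps * k))))
```

Because the scaled eigenvalues of the centered difference operator are purely imaginary, the imaginary-axis coverage discussed above is exactly what a time integrator needs for this problem.
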

The first characteristic polynomial has a root ζ = 1 and three other roots less than one in magnitude, and thus the method is zero-stable. The absolute stability region of the method is shown in Figure 1. The maximum stability ordinate along the imaginary axis is approximately 0.87 and along the real axis −0.53. The new method is second order accurate.

Figure 3. (a): Absolute stability region of method M1. (b): close up of the stability region along the imaginary axis.
The starting procedure (46) is suggested for minimizing this effect. To advance from the initial condition to time level one, M smaller substeps are taken, consisting of one Euler step followed by M − 1 leapfrog steps. After the first step, algorithm M3 is identical to algorithm M2. The absolute stability region of method M3 with N = 20 and M = 4 is shown in Figure 5. The proposed starting procedure has improved the imaginary axis coverage of the stability region to approximately an interval half the size of the leapfrog method's stability interval.

Figure 5. (a): Absolute stability region of method M3. (b): close up of the stability region along the imaginary axis.
Figure 6.

The IVP has the exact solution y(t) = tanh(t), and a time step size of k = 0.1 is used. In some methods, oscillations (as in Figure 8) appear in the numerical solution prior to the computational mode being damped sufficiently.
The test equation (3) is used with λ = −1 and with the initial condition y(0) = 1. Each method is repeated on the time interval [0, 1] using the sequence of numbers of time steps N = 20, 40, 80, 160, 320, 640, 1280, 2560, 5120. The resulting convergence plots are shown in the left and right images of Figure 9. On a log-log plot of the absolute value of the error versus the time step size, the plot is a straight line with a slope equal to the order of convergence of the method.

Table 1. Three point filter coefficients.