Effect of Asymmetric Finite Difference Formulas on the Orders of Central Difference Approximations for the Second Derivative of a Periodic Function

Abstract

The traditional numerical computation of the first and higher derivatives of a given function f(x) of a single argument x by central differencing is known to involve aspects of both accuracy and precision. However, central difference formulas are useful only for interior points, not for a certain number of end points of a given grid. In order to obtain approximations of a desired derivative at all points, one has to use asymmetric difference formulas at the points where central differencing does not work, which necessarily affects the accuracy and precision of the approximation. In this paper, we study the dependence of the orders of the five-point and seven-point central difference formulas for the second derivative of f(x) on the oscillatory properties of this function and on the value of the sampling period h in the case where forward and backward formulas must be used to approximate the derivative at some points of a given grid of equally spaced points. As an illustrative example, we consider the case f(x) = sin(αx).

Citation:

Nyengeri, H., Sinzingayo, J.J., Dusabe, B. and Ndenzako, E. (2023) Effect of Asymmetric Finite Difference Formulas on the Orders of Central Difference Approximations for the Second Derivative of a Periodic Function. Open Access Library Journal, 10, 1-26. doi: 10.4236/oalib.1110875.

1. Introduction

Numerical differentiation is an elementary and essential part of scientific modeling and numerical analysis [1]. It is widely used to differentiate functions that are difficult to differentiate analytically, and to find the derivative of sampled data for which the generating function is not known. This is the case, for example, when solving ordinary or partial differential equations numerically.

Many methods have been introduced and discussed for determining derivatives numerically [2]. These methods can be classified into two approaches [2]. The first approach aims to develop formulas for calculating the derivatives numerically. This includes the Taylor-expansion-based finite difference method [3] [4] [5] [6] [7] , the operator method [3] [8] , the interpolating polynomial method [4] [5] [6] [8] [9] , and the lozenge diagram method [10] . The second approach does not give an explicit formula for the derivative; it aims only to evaluate it from the function data [1].

The Taylor-series-based finite difference approximation is used to numerically evaluate the derivative of a function at a grid reference point by using data samples at neighboring points within the domain of this function. Depending on the pattern of the samples used in the calculation, the most commonly used approximation formulas for derivatives are classified as forward, backward and central difference formulas. Forward difference approximations use the samples at a mesh point and at the next (forward) equally spaced points to calculate the derivative at that mesh point. In contrast, backward difference approximations use the samples at a mesh point and at the previous (backward) equally spaced points, whereas central difference approximations use both forward and backward samples in calculating the derivative at the specified mesh point. Forward and backward differencing are part of a larger group of what might be called lateral (as distinct from central) differencing methods, including extrapolative differencing that might be appropriate near a singularity [11].

We have to emphasize that the number of points used to approximate the derivative defines the order of the approximation and, generally, the larger this number, the higher the accuracy of the approximation. For example, a finite difference approximation of the second derivative using $n_p$ points is of order $n_p - 1$ for the central approximation and $n_p - 2$ for both the forward and backward approximations, which means that the central difference formula gets an extra order of accuracy for free. Central differencing is therefore the most accurate of the above three methods for a given number of equidistant data samples [12]. However, central difference formulas are useful only for interior points, not for end points. For example, if we have a grid of $N_x + 1$ points $x_i$ given by $x_i = x_0 + ih$ $(i = 0, 1, 2, \ldots, N_x)$ for some fixed tabular interval h and some integer $N_x > n_p$, the $n_p$-point central difference approximation for the second derivative of a function $f(x)$ can be used only at the grid points $x_j$ with $j \in [(n_p - 1)/2,\ N_x - (n_p - 1)/2]$, but not for $j \in [0, (n_p - 3)/2]$ or $j \in [N_x - (n_p - 3)/2,\ N_x]$, $n_p$ being an odd integer such that $n_p - 3 < N_x$. In order to obtain approximations of $f''(x_j)$ at the first (resp. last) $(n_p - 1)/2$ points while preserving the accuracy of the results, one can theoretically use an $(n_p + 1)$-point forward difference (resp. backward difference) formula, which nevertheless affects the accuracy and precision of the results.
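As a small illustration of this bookkeeping (our own addition, not part of the paper; the function name and the choices $n_p = 5$, $N_x = 12$ are arbitrary), the following Python sketch lists the grid indices covered by an $n_p$-point central formula and those that require one-sided formulas instead:

```python
def index_ranges(np_points, Nx):
    """Indices where an np-point central formula applies on a grid x_0..x_Nx,
    and those where one-sided (forward/backward) formulas are needed instead."""
    half = (np_points - 1) // 2
    need_forward = list(range(0, half))                  # first (np-1)/2 points
    central = list(range(half, Nx - half + 1))           # interior points
    need_backward = list(range(Nx - half + 1, Nx + 1))   # last (np-1)/2 points
    return need_forward, central, need_backward

fwd, cen, bwd = index_ranges(5, 12)
print(fwd, bwd)   # [0, 1] [11, 12]: the first and last two points need one-sided formulas
```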

The main purpose of this paper is to study the dependence of the orders of central difference approximations for the second derivative of a function on the oscillatory properties of this function and the value of the sampling period h in the case where it is necessary to use forward/backward approximations for this derivative at some points belonging to a given grid of equally spaced points.

The rest of this paper is organized as follows. Section 2 contains a brief presentation of Khan and Ohba's closed-form expressions for finite difference approximations of the first and higher derivatives based on Taylor series, which have been shown to be suitable for computing derivatives of any degree with arbitrary order of accuracy over all the known function sampling points with a minimum of effort [13]. Thereafter, we use these expressions to establish forward, backward and central difference formulas of several orders for the first two derivatives. In Section 3, we show how to determine numerically and graphically the order of a finite difference formula for a derivative of a certain degree q. In Section 4, we present and discuss our results concerning the orders of some finite difference formulas for the second derivative of the function $f(x)$ defined by $f(x) = \sin(\alpha x)$, $\alpha$ being a positive real parameter which allows working with various oscillation frequencies of the function under consideration. We concentrate on the behavior of the order of each formula as a function of the parameter $\alpha$ and the sampling period h. The conclusion is given in Section 5.

2. Finite Difference Formulas of the First and Second Derivatives Based on Taylor Series

Following the notation of [13], the Taylor series relates the discrete values of a time function $f(t)$ sampled at $t = kh$, where $k = 0, \pm 1, \pm 2, \ldots$ and h is the sampling period, to the value of the function and of its derivatives at the origin $t = 0$. It can be written as [13]

$f_k = f_0 + kh\,f_0^{(1)} + \frac{(kh)^2}{2!}\,f_0^{(2)} + \cdots + \frac{(kh)^n}{n!}\,f_0^{(n)} + O(h^{n+1})$ (1)

where $f_k$ denotes the value of $f(t)$ at $t = kh$, $f_0^{(k)}$ denotes the value of the kth derivative of f at $t = 0$, and $O(h^{n+1})$ is a term of the order of $h^{n+1}$ coming from the truncation of the series after $n + 1$ terms. The value of the kth derivative of f at $t = ih$ is denoted by $f_i^{(k)}$.

In this Section, we are mainly interested in the forward, backward and central difference approximations of the first and second derivatives of a function. We find it convenient to use the notation of Ishtiaq Rasool Khan and Ryoji Ohba [13]. The advantage of this notation is that it is suitable for calculating derivatives of any degree with arbitrary order of accuracy over all the known function sampling points with a minimum of effort.
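Before turning to the closed-form coefficients of [13], it is worth recalling that Equation (1) already determines any such formula: truncating the series at a chosen set of sample offsets yields a small linear system for the stencil weights. The sketch below (an illustration added here, not taken from [13]; the helper name fd_weights and the NumPy dependency are our own assumptions) solves that system numerically and recovers a familiar stencil:

```python
import numpy as np
from math import factorial

def fd_weights(offsets, q):
    """Weights w_k such that sum_k w_k * f(x + k*h) ~ h**q * f^(q)(x).
    The conditions come from the truncated Taylor series (1): the moment
    sum_k w_k * k**r / r! must equal 1 for r = q and 0 for the other r."""
    m = len(offsets)
    A = np.array([[o**r / factorial(r) for o in offsets] for r in range(m)])
    b = np.zeros(m)
    b[q] = 1.0
    return np.linalg.solve(A, b)

# 5-point central stencil for the second derivative (to be divided by h**2):
print(fd_weights([-2, -1, 0, 1, 2], 2))   # ~ [-1/12, 4/3, -5/2, 4/3, -1/12]
```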

The following paragraphs summarize a number of expressions for finite difference approximations of first and second derivatives found in [13] .

2.1. Case of the First Derivative

Based on forward values of the function, we have [13]

$f_i^{(1)} = \frac{1}{h} \sum_{k=0}^{n} g_{k,n}^{F,1}\, f_{k+i} + O(h^{n})$ (2)

where $g_{k,n}^{F,1}$ denotes the coefficient of the sample $f_{k+i}$ in a forward approximation of the first derivative of order n.

Ishtiaq Rasool Khan and Ryoji Ohba calculated the $g_{k,n}^{F,1}$ coefficients for different orders of approximation, and observed that they can be expressed by the following explicit formulas [13]:

$g_{0,n}^{F,1} = -\sum_{j=1}^{n} \frac{1}{j}$ (3)

and

$g_{k,n}^{F,1} = \frac{(-1)^{k+1}}{k}\, C_k^{n}, \quad k = 1, 2, \ldots, n,$ (4)

where $C_k^{n}$ denotes the binomial coefficient $\frac{n!}{k!\,(n-k)!}$.

Similarly, based on the backward values of the function, we may write

$f_i^{(1)} = \frac{1}{h} \sum_{k=-n}^{0} g_{k,n}^{B,1}\, f_{k+i} + O(h^{n})$ (5)

where the superscript B refers to backward difference approximations. The $g_{k,n}^{B,1}$ coefficients are such that

$g_{0,n}^{B,1} = -\,g_{0,n}^{F,1} = \sum_{j=1}^{n} \frac{1}{j}$ (6)

and

$g_{-k,n}^{B,1} = -\,g_{k,n}^{F,1} = \frac{(-1)^{k}}{k}\, C_k^{n}, \quad k = 1, 2, \ldots, n.$ (7)

According to Khan I. R. and Ohba R. [13], a central difference approximation of order 2n for the first derivative is given by the following equation:

$f_i^{(1)} = \frac{1}{h} \sum_{k=-n}^{n} g_{k,2n}^{C,1}\, f_{k+i} + O(h^{2n})$ (8)

where

$g_{0,2n}^{C,1} = 0$ (9)

and

$g_{k,2n}^{C,1} = \frac{(-1)^{k+1}\,(n!)^2}{k\,(n-k)!\,(n+k)!}, \quad k = \pm 1, \pm 2, \ldots, \pm n.$ (10)

The approximations of the first derivative for some values of n are listed below (See Table 1).
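The closed forms (3), (4), (9) and (10) are easy to evaluate programmatically. The short sketch below (our own illustration; the function names and the use of Python's math module are assumptions, not part of [13]) prints the coefficients for n = 2, which can be checked against Table 1:

```python
from math import comb, factorial

def g_forward_1(n):
    """Coefficients g(k, n; F, 1), k = 0..n, of the order-n forward difference
    formula for the first derivative, from Eqs. (3)-(4)."""
    g = [-sum(1.0 / j for j in range(1, n + 1))]
    g += [(-1) ** (k + 1) * comb(n, k) / k for k in range(1, n + 1)]
    return g

def g_central_1(n):
    """Coefficients g(k, 2n; C, 1), k = -n..n, of the order-2n central
    difference formula for the first derivative, from Eqs. (9)-(10)."""
    return [0.0 if k == 0 else
            (-1) ** (k + 1) * factorial(n) ** 2
            / (k * factorial(n - k) * factorial(n + k))
            for k in range(-n, n + 1)]

print(g_forward_1(2))   # [-1.5, 2.0, -0.5]  ->  f' ~ (-3f_0 + 4f_1 - f_2)/(2h)
print(g_central_1(2))   # ~ [1/12, -2/3, 0, 2/3, -1/12]
```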

2.2. Case of the Second Derivative

For this case we have [13] :

Table 1. Backward, forward and central difference approximations of $f_i^{(1)}$ for some values of n.

• Forward formulas

$f_i^{(2)} = \frac{1}{h^2} \sum_{k=0}^{n} g_{k,n}^{F,2}\, f_{k+i} + O(h^{n-1})$ (11)

where

$g_{k,n}^{F,2} = (-1)^{k}\,\frac{2!}{k}\, C_k^{n} \sum_{j=1,\ j \ne k}^{n} \frac{1}{j}, \quad k = 1, 2, \ldots, n,$ (12)

and

$g_{0,n}^{F,2} = -\sum_{k=1}^{n} g_{k,n}^{F,2}$ (13)

• Backward formulas

$f_i^{(2)} = \frac{1}{h^2} \sum_{k=-n}^{0} g_{k,n}^{B,2}\, f_{k+i} + O(h^{n-1})$ (14)

where $g_{k,n}^{B,2}$, the coefficient of $f_{k+i}$, is the same as the corresponding coefficient in the forward difference approximation given by Equations (12) and (13), i.e. $g_{-k,n}^{B,2} = g_{k,n}^{F,2}$ for $k = 0, 1, \ldots, n$.

• Central difference formulas

$f_i^{(2)} = \frac{1}{h^2} \sum_{k=-n}^{n} g_{k,2n}^{C,2}\, f_{k+i} + O(h^{2n})$ (15)

where

$g_{0,2n}^{C,2} = -2 \sum_{k=1}^{n} g_{k,2n}^{C,2}$ (16)

and

$g_{k,2n}^{C,2} = \frac{(-1)^{k+1}\, 2!\, (n!)^2}{k^2\,(n-k)!\,(n+k)!}, \quad k = \pm 1, \pm 2, \ldots, \pm n.$ (17)

Table 2 presents finite difference approximations of the second derivative for some values of n.
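As in Section 2.1, these closed forms can be evaluated directly. The sketch below (our own illustration; it assumes the 2! factor in Eq. (12) and the factor 2 in Eq. (16) as written above) reproduces two familiar stencils, the 4-point forward stencil (2, -5, 4, -1) and the 5-point central stencil (-1/12, 4/3, -5/2, 4/3, -1/12):

```python
from math import comb, factorial

def g_forward_2(n):
    """Coefficients g(k, n; F, 2), k = 0..n, of the (n+1)-point forward
    difference formula for the second derivative, from Eqs. (12)-(13)."""
    g = [0.0] * (n + 1)
    for k in range(1, n + 1):
        s = sum(1.0 / j for j in range(1, n + 1) if j != k)
        g[k] = (-1) ** k * 2.0 / k * comb(n, k) * s
    g[0] = -sum(g[1:])
    return g

def g_central_2(n):
    """Coefficients g(k, 2n; C, 2), k = -n..n, of the order-2n central
    difference formula for the second derivative, from Eqs. (16)-(17)."""
    g = {k: (-1) ** (k + 1) * 2.0 * factorial(n) ** 2
            / (k ** 2 * factorial(n - k) * factorial(n + k))
         for k in range(-n, n + 1) if k != 0}
    g[0] = -2.0 * sum(g[k] for k in range(1, n + 1))
    return [g[k] for k in range(-n, n + 1)]

print(g_forward_2(3))   # ~ [2, -5, 4, -1]
print(g_central_2(2))   # ~ [-1/12, 4/3, -5/2, 4/3, -1/12]
```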

3. Numerical and Graphical Determination of the Order of a Finite Difference Formula

It is possible to determine numerically and graphically the order of a given finite difference formula for a derivative of a function. To achieve this, it is necessary to choose a function whose successive derivatives are exactly known in a certain interval $[a, b]$ of the independent variable. We then consider several values of h and compute, for each of them, the error $E^{[q]}(h)$ defined as

$E^{[q]}(h) = \max_{i \in [0, N_x]} \left| f^{(q)}(x_i) - f_i^{(q)} \right|$ (18)

where $f^{(q)}(x_i)$ denotes the value of the qth derivative of the function f at $x = x_i$, $f_i^{(q)}$ its finite difference approximation, and $N_x$ is a positive integer such that

$h = (b - a)/N_x,$ (19)

which means that we discretize the interval of interest by $N_x + 1$ points, i.e., $x_0 = a,\ x_1,\ x_2,\ \ldots,\ x_{N_x} = b$.

Table 2. Forward, backward and central difference approximations of $f_i^{(2)}$ for some values of n.

If

$E^{[q]}(h) \approx C\,h^{p}$ (20)

for h sufficiently small (as $h \to 0$), then p is the order of the finite difference formula. Here, C is a constant depending on the function, the degree of the derivative and the finite difference formula. From Equation (20), we have

$\log\!\left(E^{[q]}(h)\right) \approx \log\!\left(C h^{p}\right) = \log(C) + p \log(h).$ (21)

Using the logarithmic scale resulting from the change of variables $X = \log(h)$ and $Y = \log\!\left(E^{[q]}(h)\right)$, we obtain the expression

$Y \approx \log(C) + p\,X,$ (22)

which is the equation of a straight line. The slope of this line is therefore the order of the finite difference approximation.
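As an illustration of this procedure (a sketch added here, not part of the paper; the helper names and the least-squares fit via NumPy's polyfit are our own choices), the following code estimates p for the ordinary 3-point central difference approximation of the second derivative of sin(x), applied to interior points only; the fitted slope is close to 2, as expected:

```python
import numpy as np

def observed_order(err_fn, Nx_values, a=0.0, b=2 * np.pi):
    """Estimate p in E(h) ~ C*h**p by a least-squares fit of log(E) against
    log(h), following Eqs. (20)-(22).  err_fn(Nx) must return the maximum
    pointwise error on the grid with Nx + 1 points, cf. Eq. (18)."""
    h = (b - a) / np.asarray(Nx_values, dtype=float)
    E = np.array([err_fn(Nx) for Nx in Nx_values])
    slope, _ = np.polyfit(np.log(h), np.log(E), 1)
    return slope

def err_3pt_central(Nx, a=0.0, b=2 * np.pi):
    """Maximum error of the 3-point central formula for f'' of sin(x),
    evaluated at the interior points only."""
    x = np.linspace(a, b, Nx + 1)
    h = (b - a) / Nx
    f = np.sin(x)
    approx = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / h**2
    return np.max(np.abs(-np.sin(x[1:-1]) - approx))

print(observed_order(err_3pt_central, [100, 200, 400, 800]))   # close to 2
```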

4. Numerical Experimentation

In order to study the orders of finite difference formulas for the second derivative, we consider the periodic function $f(x)$ defined by

$f(x) = \sin(\alpha x), \quad \alpha \in \mathbb{R}_0^{+}, \quad x \in [0, 2\pi].$ (23)

The parameter α allows working with various oscillation frequencies of the function under consideration.

Figure 1 shows the dependence of the function $f(x)$ on the independent variable x and on the parameter $\alpha$. As expected, the number of oscillations increases with $\alpha$.

In Figures 2-5, we show the errors $E^{[2]}(h)$ plotted against h on a log-log scale for several values of the parameter $\alpha$ and for the three numerical differentiation methods mentioned above. This is a good way to plot errors when we expect

Figure 1. Plots of $\sin(\alpha x)$ for several values of the parameter $\alpha$.


Figure 2. Plots of $E^{[2]}(h)$ on a log-log scale. (a) $\alpha = 0.5$; (b) $\alpha = 1$; (c) $\alpha = 1.5$; (d) $\alpha = 2$. $n = 3$ for the central difference formula.


Figure 3. Plots of $E^{[2]}(h)$ on a log-log scale. (a) $\alpha = 2.5$; (b) $\alpha = 3$; (c) $\alpha = 1.25$; (d) $\alpha = 2.75$. $n = 3$ for the central difference formula.


Figure 4. Plots of $E^{[2]}(h)$ on a log-log scale. (a) $\alpha = 0.5$; (b) $\alpha = 1$; (c) $\alpha = 3$; (d) $\alpha = 10$. $n = 2$ for the central difference formula.


Figure 5. Plots of $E^{[2]}(h)$ on a log-log scale. (a) $\alpha = 20$; (b) $\alpha = 40$; (c) $\alpha = 3.78$; (d) $\alpha = 6.25$. $n = 2$ for the central difference approximations.

them to behave like some power of h, since if the error $E^{[q]}(h)$ behaves like $E^{[q]}(h) \approx C h^{p}$, then $\log(E^{[q]}(h)) \approx \log(C) + p \log(h)$. So on a log-log scale, the error behaves linearly with a slope equal to p, the order of accuracy.

It must be remembered that each of the three difference methods considered, when using $n_p$ points to approximate $f_i^{(2)}$, requires the use of at least one of the two other methods to approximate this derivative at at least one point. More precisely:

• An $n_p$-point forward formula requires the use of an $n_p$-point backward formula to approximate $f_i^{(2)}$ at the last $n_p - 1$ points.

• An $n_p$-point backward formula requires the use of an $n_p$-point forward formula to approximate $f_i^{(2)}$ at the first $n_p - 1$ points.

• An $n_p$-point central difference formula requires the use of the $(n_p + 1)$-point forward formula (resp. backward formula) at the first (resp. last) $(n_p - 1)/2$ points; a sketch of this composite scheme is given below.
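To make the composite scheme concrete, here is a minimal sketch (our own illustration; the function names, the NumPy dependency and the choices $\alpha = 2$ and $n = 2$ are assumptions, not taken from the paper). It applies the $(2n+1)$-point central formula at the interior nodes and the $(2n+2)$-point forward/backward formulas at the first and last n nodes, then evaluates $E^{[2]}(h)$ of Equation (18) for $f(x) = \sin(\alpha x)$. The stencil weights are obtained by solving the Taylor-series conditions directly, which gives the same values as the closed-form coefficients of Section 2:

```python
import numpy as np
from math import factorial

def fd_weights(offsets, q):
    """Stencil weights (times h**q) for the qth derivative, from the
    Taylor-series conditions; identical to the closed forms of Section 2."""
    m = len(offsets)
    A = np.array([[o**r / factorial(r) for o in offsets] for r in range(m)])
    b = np.zeros(m)
    b[q] = 1.0
    return np.linalg.solve(A, b)

def second_derivative(f_vals, h, n):
    """Composite approximation of f'' on a uniform grid: (2n+1)-point central
    differencing at interior nodes, (2n+2)-point forward/backward differencing
    at the first/last n nodes (cf. the list above)."""
    Nx = len(f_vals) - 1
    w_c = fd_weights(range(-n, n + 1), 2)       # central stencil
    w_f = fd_weights(range(0, 2 * n + 2), 2)    # one-sided (forward) stencil
    d2 = np.empty_like(f_vals)
    for i in range(Nx + 1):
        if i < n:                               # forward formula
            d2[i] = w_f @ f_vals[i:i + 2 * n + 2]
        elif i > Nx - n:                        # backward formula (same weights, mirrored)
            d2[i] = w_f[::-1] @ f_vals[i - 2 * n - 1:i + 1]
        else:                                   # central formula
            d2[i] = w_c @ f_vals[i - n:i + n + 1]
    return d2 / h**2

alpha, n = 2.0, 2                               # f(x) = sin(2x), 5-point central formula
for Nx in (100, 200, 400, 800):
    x = np.linspace(0.0, 2.0 * np.pi, Nx + 1)
    h = 2.0 * np.pi / Nx
    err = np.max(np.abs(-alpha**2 * np.sin(alpha * x)
                        - second_derivative(np.sin(alpha * x), h, n)))
    print(Nx, err)                              # E[2](h) of Eq. (18)
```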

It appears from Figures 2-5 that the shape of each plot depends on the range the sampling period h belongs to, on the method used to approximate $f_i^{(2)}$ at the majority of points (those for which the method is applicable) and on the value of the parameter $\alpha$. Indeed, the potential values of h can be placed in three different regions when a $(2n+1)$-point central difference formula is applied to the function $f(x) = \sin(\alpha x)$ with $\alpha$ a positive integer or half-integer and $x \in [0, 2\pi]$ (see Figure 2, Figure 3(a) and Figure 3(b), Figure 4, and Figure 5(a) and Figure 5(b)): the small-h region, where the curve is a straight line with slope $2n$; the medium-h region, where we have a straight line of slope $2n + 1$; and the high-h region, where the slope of the curve is not well defined. The length and endpoints of each region depend on the values of the integer n and of the parameter $\alpha$, which implies that the sizes and endpoints of the three regions depend on both the oscillation frequency of the function and the number of points used in the approximation of the derivative. With the same function, but values of $\alpha$ that are neither integers nor half-integers, only two regions are observed: there is no medium-h region (see Figure 3(c) and Figure 3(d) and Figure 5(c) and Figure 5(d)). We have to add that all results obtained from forward difference formulas agree with those of the corresponding backward formulas. Furthermore, no case of a medium-h region was observed for these two methods.

In an effort to better understand the origin of the medium-h region, we show the errors $E^{[2],CC}(h)$ (Figure 2, Figure 4, Figure 3(a) and Figure 3(b), and Figure 5(a) and Figure 5(b)) defined as

$E^{[2],CC}(h) = \max_{i \in [n,\, N_x - n]} \left| f^{(2)}(x_i) - f_i^{(2)} \right|$ (24)

for a $(2n+1)$-point central difference approximation of the second derivative. We have to emphasize that $E^{[2],CC}(h)$ is obtained from Equation (18) by omitting all points at which the central difference approximation for the second derivative is not valid. It is remarkable that no case of a medium-h region is observed for $E^{[2],CC}(h)$, which means that this region is simply the consequence of the asymmetric finite difference approximations of the derivative in question at the points at which the central difference formula is not valid.
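The following sketch (again our own illustration, with hard-coded 5-point central and 6-point one-sided stencils for the second derivative) evaluates both $E^{[2]}(h)$ of Equation (18) and $E^{[2],CC}(h)$ of Equation (24) for $f(x) = \sin(2x)$, so that the contribution of the boundary (one-sided) formulas can be isolated:

```python
import numpy as np

# Stencils (to be divided by h**2): 5-point central and 6-point one-sided
# weights for the second derivative; the latter is what the forward formulas
# of Section 2.2 give for six sample points.
w_c = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / 12.0
w_f = np.array([15/4, -77/6, 107/6, -13.0, 61/12, -5/6])

def errors(alpha, Nx, a=0.0, b=2 * np.pi):
    """Return (E[2](h), E[2],CC(h)) of Eqs. (18) and (24) for f(x) = sin(alpha*x)."""
    x = np.linspace(a, b, Nx + 1)
    h = (b - a) / Nx
    f, exact = np.sin(alpha * x), -alpha**2 * np.sin(alpha * x)
    d2 = np.empty_like(f)
    for i in range(Nx + 1):
        if i < 2:                                # forward formula (first two nodes)
            d2[i] = w_f @ f[i:i + 6]
        elif i > Nx - 2:                         # backward formula (last two nodes)
            d2[i] = w_f[::-1] @ f[i - 5:i + 1]
        else:                                    # 5-point central formula
            d2[i] = w_c @ f[i - 2:i + 3]
    err = np.abs(exact - d2 / h**2)
    return err.max(), err[2:Nx - 1].max()        # all points vs. interior only

for Nx in (50, 100, 200, 400):
    print(Nx, *errors(2.0, Nx))
```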

Now consider the problem of estimating the slope of the line segment connecting any two consecutive points on each of the curves of Figures 2-5, for the central differencing formulas only and for integer or half-integer values of the parameter $\alpha$. We use $p^{[2]}(h_j)$ to denote the slope of the line segment whose endpoints are $\left(\log(h_{j-1}), \log(E^{[2],C}(h_{j-1}))\right)$ and $\left(\log(h_j), \log(E^{[2],C}(h_j))\right)$:

$p^{[2]}(h_j) = \frac{\log\!\left(E^{[2],C}(h_j)\right) - \log\!\left(E^{[2],C}(h_{j-1})\right)}{\log(h_j) - \log(h_{j-1})}, \quad h_j = 2\pi / N_j, \quad j \in [1, N_x].$ (25)

The plots of $p^{[2]}(h)$ for several values of the parameter $\alpha$ are shown in Figure 6 and Figure 7. We see the same three regions as in Figures 2-5. The numerically calculated starting values of the sampling period for the small-h region ($5.99 \le p^{[2]}(h) \le 6.01$ for $n = 3$ and $3.99 \le p^{[2]}(h) \le 4.01$ for $n = 2$) are given in Table 3 and Table 4, the values of h being listed in descending order.
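For completeness, the segment slopes of Equation (25) are straightforward to compute from tabulated $(h, E)$ pairs. The minimal sketch below (our own; the error values are hypothetical data decaying exactly like $h^4$) illustrates the computation:

```python
import numpy as np

def local_slopes(h, E):
    """Slope of each segment joining consecutive points of the log-log error
    curve, i.e. the quantity p[2](h_j) of Eq. (25)."""
    logh, logE = np.log(np.asarray(h, float)), np.log(np.asarray(E, float))
    return np.diff(logE) / np.diff(logh)

# Hypothetical error data decaying like h**4: every recovered slope is ~4.
h = np.array([0.1, 0.05, 0.025, 0.0125])
print(local_slopes(h, 3.0 * h**4))
```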

A careful analysis of our numerical data presented in Table 3 leads to the conclusion that the relationship between $h_{\text{start}}$ and $\alpha$ (integer or half-integer) can be described by the following simple formula:

$h_{\text{start}}(\alpha) \approx 2\pi / r_{\text{start}}(\alpha)$ (26)

for the 5-point central difference approximation of $f''(x)$, where

$r_{\text{start}}(\alpha) = 1325.77\,\alpha.$ (27)

From the numerical data of Table 4, we found that the small-h region associated with the 7-point central difference approximation of $f''(x)$ is such that

$h_{\text{start}}(\alpha) \approx 2\pi / \tilde{r}_{\text{start}}(\alpha)$ (28)

with

$\tilde{r}_{\text{start}}(\alpha) = 5780\,\alpha.$ (29)

Table 3. Starting values of the sampling period for the small-h region in the case where n = 2 (5-point central difference formula). Fifteen values of the parameter α are considered.

Table 4. Starting values of the sampling period for the small-h region in the case where n = 3 (7-point central difference formula). Six values of the parameter α are considered.


Figure 6. Plots of $p^{[2],C}(h)$ for the 7-point central difference approximation of $f''(x)$. (a) $\alpha = 0.5$; (b) $\alpha = 2$; (c) $\alpha = 1$; (d) $\alpha = 2.5$; (e) $\alpha = 1.5$; (f) $\alpha = 3$.


Figure 7. Plots of $p^{[2],C}(h)$ for the 5-point central difference approximation of $f''(x)$. (a) $\alpha = 0.5$; (b) $\alpha = 10$; (c) $\alpha = 1$; (d) $\alpha = 20$; (e) $\alpha = 3$; (f) $\alpha = 40$.

5. Conclusions

The focus of our work was to investigate the influence of forward and backward finite difference approximations on the orders of central difference approximations for the second derivative of a periodic function. First, we briefly presented very practical closed-form expressions, found in the literature, for finite difference approximations of the first and second derivatives based on Taylor series. Next, these expressions were used to obtain forward, backward and central difference formulas of several orders for the first two derivatives of any function of one variable. Thereafter, we described a method to determine numerically and graphically the order of a finite difference formula for a derivative of a certain degree q. Finally, we applied this method to the second derivative of the periodic function $f(x) = \sin(\alpha x)$ with $\alpha > 0$ and $x \in [0, 2\pi]$. The results obtained from this application led to the conclusion that any $(2n+1)$-point central difference formula for the second derivative of the considered periodic function is of order $2n + 1$ for medium values of the sampling period h and of order $2n$ for small values of this parameter, in the case where it is necessary to use forward and backward approximations of the derivative of interest at some points of a given grid of equally spaced points. The sizes and starting points of the small-h and medium-h regions depend on n and $\alpha$. There is no medium-h region when the parameter $\alpha$ is neither an integer nor a half-integer. A careful analysis of our numerical results led us to very simple formulas describing the relationship between the starting values of h for the small-h region and the parameter $\alpha$ in the cases $n = 2$ and $n = 3$, the values of h being listed in descending order.

One has to add that the errors $E^{[2]}(h)$ defined by Equation (18) become too small to be calculated reliably when h is of very small magnitude and the parameter $\alpha$ is not large ($\alpha < 4$), unless the computation is performed with sufficiently high precision. To deal with this problem, we appealed to a software package for arbitrary-precision computation, named MPFUN2020 and developed by David H. Bailey [14].
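The same cancellation problem can be illustrated in other environments as well; the paper itself relies on MPFUN2020 (Fortran). The sketch below (our own illustration, using Python's mpmath package rather than MPFUN2020) compares the 5-point central difference approximation of the second derivative of sin(x) at $h = 10^{-6}$ computed with 50-digit arithmetic and with ordinary double precision:

```python
from mpmath import mp, mpf, sin
import math

mp.dps = 50                                     # work with 50 significant digits

def d2_central5(f, x, h):
    """5-point central difference approximation of f''(x)."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

x, h = mpf(1), mpf("1e-6")
print(float(d2_central5(sin, x, h) + sin(x)))   # ~1e-26: only the truncation error remains

# The same stencil in ordinary double precision: cancellation in the numerator
# leaves only a few correct digits, so rounding error dominates the result.
d2_64 = (-math.sin(1 - 2e-6) + 16*math.sin(1 - 1e-6) - 30*math.sin(1.0)
         + 16*math.sin(1 + 1e-6) - math.sin(1 + 2e-6)) / (12e-12)
print(d2_64 + math.sin(1.0))
```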

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Hassan, H.Z., Mohamad, A.A. and Atteia, G.E. (2012) An Algorithm for the Finite Difference Approximation of Derivatives With Arbitrary Degree and Order of Accuracy. Journal of Computational and Applied Mathematics, 256, 2622-2631. https://doi.org/10.1016/j.cam.2011.12.019
[2] Li, J. (2005) General Explicit Difference Formulas for Numerical Differentiation. Journal of Computational and Applied Mathematics, 183, 29-52. https://doi.org/10.1016/j.cam.2004.12.026
[3] Jordan, C. (1950) Calculus of Finite Differences. 2nd Edition, Springer, New York. https://doi.org/10.1017/S0025557200230271
[4] Collatz, L. (1966) The Numerical Treatment of Differential Equations. 2nd Edition, Springer, Berlin.
[5] Chapra, S.C. and Canale, R.P. (2010) Numerical Methods for Engineers. 6th Edition, McGraw-Hill, New York. https://ds.amu.edu.et/xmlui/bitstream/handle/123456789/17416/Numerical_methods_for_engineers.pdf?equence=1&isAllowed=y
[6] Hoffman, J.D. (2001) Numerical Methods for Engineers and Scientists. 2nd Edition, Marcel Dekker, New York. http://freeit.free.fr/Finite%20Element/Hoffman,_Numerical_Methods_for_Engineers&Scientists,2001.pdf
[7] Khan, I.R. and Ohba, R. (2003) Taylor Series Based Finite Difference Approximations of Higher-Degree Derivatives. Journal of Computational and Applied Mathematics, 154, 115-124. https://doi.org/10.1016/S0377-0427(02)00816-6
[8] Dahlquist, G. and Bjorck, A. (1974) Numerical Methods. Prentice Hall, Englewood Cliffs. https://doi.org/10.1137/1.9780898717785
[9] Burden, R.L. and Faires, J.D. (1993) Numerical Analysis. 5th Edition, PWS-Kent Pub. Co., Boston.
[10] Gerald, C.F. and Wheatley, P.O. (1994) Applied Numerical Analysis. 5th Edition, Addison-Wesley, Michigan.
[11] Robert, D.V. (2009) An Improved Numerical Approximation for First Derivative. Journal of Chemical Sciences, 121, 935-950. https://www.ias.ac.in/article/fulltext/jcsc/121/05/0935-0950
[12] Khan, I.R. and Ohba, R. (1999) New Finite Difference Formulas for Numerical Differentiation. Journal of Computational and Applied Mathematics, 126, 269-276. https://doi.org/10.1016/S0377-0427(99)00358-1
[13] Khan, I.R. and Ohba, R. (1999) Closed-Form Expressions for Finite Difference Approximations of First and Higher Derivatives Based on Taylor Series. Journal of Computational and Applied Mathematics, 107, 179-193. https://doi.org/10.1016/S0377-0427(99)00088-6
[14] Bailey, D.H. (2023) MPFUN2020: A Thread-Safe Arbitrary Precision Package with Special Functions (Full Documentation). https://www.davidhbailey.com/dhbpapers/mpfun2020.pdf

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.