
This work deals with a general second-order linear partial differential equation for a function of two variables, which covers a wide range of applications. The equation is solved with a hybrid finite difference method combining two schemes: BTCS and CTCS. The resulting scheme is simple, accurate, and economical in terms of computation time and memory occupancy.

We are interested in the following second-order linear Partial Differential Equation (PDE):

$$A(x,t)\frac{\partial^2 \Phi}{\partial x^2} + B(x,t)\frac{\partial^2 \Phi}{\partial x\,\partial t} + C(x,t)\frac{\partial^2 \Phi}{\partial t^2} + D(x,t)\frac{\partial \Phi}{\partial x} + E(x,t)\frac{\partial \Phi}{\partial t} + F(x,t)\,\Phi = g(x,t); \quad (1)$$

where Φ designates a scalar function of the variables x and t. Usually x stands for the space variable and t for time. Equation (1) can be brought into the following form:

$$\sum_{\mu=0}^{2}\sum_{\nu=0}^{2-\mu} A_{\mu\nu}(x,t)\,\frac{\partial^{\mu+\nu}\Phi}{\partial x^{\mu}\,\partial t^{\nu}} = g(x,t). \quad (2)$$

The resolution of such an equation is of great interest, because it governs several phenomena in physics, chemistry, mathematics, economics, etc. [

· Elliptic equations: This class includes the Poisson equation, which describes electrostatic and magnetostatic phenomena, flow of perfect incompressible fluids, vortex and viscous incompressible flow, filtration of fluids through porous materials, etc. It also contains the Euler-Tricomi equation, which allows the study of transonic flow.

· Parabolic equations: This class covers problems governed by the extended diffusion equation or the Diffusion Advection Reaction equation (of concentration, matter, temperature or electromagnetic field), as well as the Black-Scholes equation in mathematical finance [

· Hyperbolic equations: Equations such as the extended complete wave equation and the extended complete telegraph equation are contained in (1). These two equations have important applications in electromagnetics and telecommunications [

We propose to solve this equation with a simple, accurate and efficient method, reducing the costs in computation time, memory space, and algorithmic complexity. We aim at a good compromise between simplicity and accuracy of the results.

We will first formulate the problem. Then, we choose a hybrid approximation scheme, combining two finite difference approaches: Backward Time Centered Space (BTCS) and Centered Time Centered Space (CTCS) [

Subsequently, we carry out numerical experiments to validate the proposed method. Finally, we discuss the results and give an outlook on further work.

We consider a scalar function Φ = Φ(x, t) of the real variables x and t. This function satisfies the partial differential Equation (3) with the given conditions.

$$\begin{cases} A\Phi_{xx} + B\Phi_{xt} + C\Phi_{tt} + D\Phi_x + E\Phi_t + F\Phi = g, & (x,t)\in\,]0,L[\,\times\,]0,\infty[\\ \Phi(x,0)=f_0(x),\quad \Phi_t(x,0)=f_1(x), & x\in\,]0,L[\\ \Phi(0,t)=g_0(t),\quad \Phi(L,t)=g_L(t), & t\in\,]0,\infty[ \end{cases} \quad (3)$$

Here, the coefficients A, B, C, D, E and F are constants or known functions of the variables x and t. L is a real constant. The function g = g(x, t) is a known excitation. The functions f_0(x), f_1(x), g_0(t) and g_L(t) are also given.

The following mesh is considered: the spatial interval [0, L] is discretized into N_x + 2 points x_i = i·Δx, i = 0, 1, 2, ⋯, N_x + 1, with Δx = h = L/(N_x + 1). The considered instants are t_n = n·Δt. The time step Δt = k must be sufficiently small to allow a good, accurate and efficient resolution of the problem. An approximate value of Φ is sought at each point (x_i, t_n): Φ(x_i, t_n) = Φ_i^n. We define g(x_i, t_n) = g_i^n, g_0(t_n) = g_0^n, g_L(t_n) = g_L^n, f_0(x_i) = f_{0i} and f_1(x_i) = f_{1i}. Similarly, A(x_i, t_n) = A_i^n, B(x_i, t_n) = B_i^n, C(x_i, t_n) = C_i^n, D(x_i, t_n) = D_i^n, E(x_i, t_n) = E_i^n and F(x_i, t_n) = F_i^n.

We consider the Finite Difference method in Time Domain (FDTD); more precisely the two schemes: BTCS and CTCS [

$$\left(\frac{\partial \Phi}{\partial x}\right)_{(x_i,t_n)} = \frac{\Phi_{i+1}^n - \Phi_{i-1}^n}{2h} + O(h^2), \qquad \left(\frac{\partial^2 \Phi}{\partial x^2}\right)_{(x_i,t_n)} = \frac{\Phi_{i-1}^n - 2\Phi_i^n + \Phi_{i+1}^n}{h^2} + O(h^2), \quad i = 1,2,3,\cdots,N_x. \quad (4)$$

For the BTCS scheme, we get the first and second order backward time derivatives, for t n > 0 :

$$\left(\frac{\partial \Phi}{\partial t}\right)_{(x_i,t_n)} = \frac{\Phi_i^n - \Phi_i^{n-1}}{k} + O(k), \qquad \left(\frac{\partial^2 \Phi}{\partial t^2}\right)_{(x_i,t_n)} = \frac{\Phi_i^n - 2\Phi_i^{n-1} + \Phi_i^{n-2}}{k^2} + O(k), \quad i = 1,2,3,\cdots,N_x. \quad (5)$$
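As a quick sanity check (not from the paper), the first-order accuracy in k of the backward second difference in (5) can be observed numerically; the test function and step sizes below are arbitrary:

```python
import math

# The backward second difference of Eq. (5) applied to phi(t) = exp(t) at t = 1;
# for a first-order scheme, halving k roughly halves the error.
def d2_backward(f, t, k):
    return (f(t) - 2 * f(t - k) + f(t - 2 * k)) / k**2

errs = [abs(d2_backward(math.exp, 1.0, k) - math.exp(1.0)) for k in (0.01, 0.005)]
ratio = errs[0] / errs[1]   # close to 2 for an O(k) scheme
```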

Combining the Equations (3)-(5), one gets:

$$\begin{aligned}
&\underbrace{\left(A_i^n - \frac{h}{2k}B_i^n - \frac{h}{2}D_i^n\right)}_{:=\,c_i^b}\Phi_{i-1}^n
+ \underbrace{\left(-2A_i^n + \frac{h^2}{k^2}C_i^n + \frac{h^2}{k}E_i^n + h^2 F_i^n\right)}_{:=\,d_i^b}\Phi_i^n
+ \underbrace{\left(A_i^n + \frac{h}{2k}B_i^n + \frac{h}{2}D_i^n\right)}_{:=\,e_i^b}\Phi_{i+1}^n \\
&= \underbrace{\left(-\frac{h}{2k}B_i^n\right)}_{:=\,-c_i'^{\,b}}\Phi_{i-1}^{n-1}
+ \underbrace{\left(\frac{2h^2}{k^2}C_i^n + \frac{h^2}{k}E_i^n\right)}_{:=\,d_i'^{\,b}}\Phi_i^{n-1}
+ \underbrace{\left(\frac{h}{2k}B_i^n\right)}_{:=\,e_i'^{\,b}}\Phi_{i+1}^{n-1}
+ \underbrace{\left(-\frac{h^2}{k^2}C_i^n\right)}_{:=\,b_i^b}\Phi_i^{n-2}
+ h^2 g_i^n, \\
&\quad i = 2,3,\cdots,N_x-1;\ n \geq 1. \qquad (6)
\end{aligned}$$
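Assembling the BTCS coefficients of Eq. (6) at one grid point can be sketched as follows; the function name and calling convention are illustrative, not from the paper:

```python
# Sketch: BTCS coefficients of Eq. (6) at a grid point (x_i, t_n),
# given the coefficient values A..F there and the steps h, k.
def btcs_coeffs(A, B, C, D, E, F, h, k):
    """Return the left-hand coefficients (c_b, d_b, e_b) of
    Phi_{i-1}^n, Phi_i^n, Phi_{i+1}^n, and the right-hand
    coefficients of Phi_{i-1}^{n-1}, Phi_i^{n-1}, Phi_{i+1}^{n-1},
    Phi_i^{n-2} (signs included, as they appear in Eq. (6))."""
    c_b = A - h / (2 * k) * B - h / 2 * D
    d_b = -2 * A + h**2 / k**2 * C + h**2 / k * E + h**2 * F
    e_b = A + h / (2 * k) * B + h / 2 * D
    cp_b = h / (2 * k) * B            # enters Eq. (6) as -cp_b
    dp_b = 2 * h**2 / k**2 * C + h**2 / k * E
    ep_b = h / (2 * k) * B
    b_b = -h**2 / k**2 * C
    return (c_b, d_b, e_b), (-cp_b, dp_b, ep_b, b_b)
```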

With the CTCS scheme, we have the following first and second order derivatives, for t n > 0 :

$$\left(\frac{\partial \Phi}{\partial t}\right)_{(x_i,t_n)} = \frac{\Phi_i^{n+1} - \Phi_i^{n-1}}{2k} + O(k^2), \qquad \left(\frac{\partial^2 \Phi}{\partial t^2}\right)_{(x_i,t_n)} = \frac{\Phi_i^{n-1} - 2\Phi_i^n + \Phi_i^{n+1}}{k^2} + O(k^2), \quad i = 1,2,3,\cdots,N_x. \quad (7)$$

Combining Equations (3), (4) and (7), one gets:

$$\begin{aligned}
&\underbrace{\left(-\frac{h}{4k}B_i^{n-1}\right)}_{:=\,c_i^c}\Phi_{i-1}^n
+ \underbrace{\left(\frac{h^2}{k^2}C_i^{n-1} + \frac{h^2}{2k}E_i^{n-1}\right)}_{:=\,d_i^c}\Phi_i^n
+ \underbrace{\left(\frac{h}{4k}B_i^{n-1}\right)}_{:=\,e_i^c}\Phi_{i+1}^n \\
&= \underbrace{\left(-A_i^{n-1} + \frac{h}{2}D_i^{n-1}\right)}_{:=\,c_i'^{\,c}}\Phi_{i-1}^{n-1}
+ \underbrace{\left(2A_i^{n-1} + \frac{2h^2}{k^2}C_i^{n-1} - h^2 F_i^{n-1}\right)}_{:=\,d_i'^{\,c}}\Phi_i^{n-1}
+ \underbrace{\left(-A_i^{n-1} - \frac{h}{2}D_i^{n-1}\right)}_{:=\,e_i'^{\,c}}\Phi_{i+1}^{n-1} \\
&\quad + \underbrace{\left(-\frac{h}{4k}B_i^{n-1}\right)}_{:=\,-a_i^c}\Phi_{i-1}^{n-2}
+ \underbrace{\left(-\frac{h^2}{k^2}C_i^{n-1} + \frac{h^2}{2k}E_i^{n-1}\right)}_{:=\,b_i^c}\Phi_i^{n-2}
+ \underbrace{\left(\frac{h}{4k}B_i^{n-1}\right)}_{:=\,a_i^c}\Phi_{i+1}^{n-2}
+ h^2 g_i^{n-1}, \\
&\quad i = 2,\cdots,N_x-1;\ n \geq 2. \qquad (8)
\end{aligned}$$

We define c_i = c_i^b + c_i^c, d_i = d_i^b + d_i^c, e_i = e_i^b + e_i^c, c'_i = c_i'^b + c_i'^c, d'_i = d_i'^b + d_i'^c, e'_i = e_i'^b + e_i'^c, a_i = 0 + a_i^c and b_i = b_i^b + b_i^c, for i = 1, 2, 3, ⋯, N_x; n ≥ 1. Thus, taking the mesh points x_1 and x_{N_x} into account, we obtain the tridiagonal matrix Equation (9):

The previous matrix Equation (9) is equivalent to:

$$(A)^n \vec{\Phi}^n = \underbrace{(A')^n \vec{\Phi}^{n-1} + (A'')^n \vec{\Phi}^{n-2} + \vec{G}^n}_{:=\,\vec{H}^n}, \quad n \geq 2, \quad (10)$$

with

$$\Phi_i^{-1} = \Phi_i^0 - k\,\Phi_{t,i}^0 \quad \text{and} \quad \Phi_i^1 = 2k\,\Phi_{t,i}^0 + \Phi_i^{-1}. \quad (11)$$
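The startup values of Eq. (11) can be computed, for instance, as follows; the helper name is illustrative and assumes the initial data f_0 and f_1 are already sampled on the mesh:

```python
# Sketch: startup vectors of Eq. (11) from the sampled initial data
# Phi^0 = f0 and Phi_t^0 = f1 (illustrative helper, not from the paper).
def startup_values(f0, f1, k):
    """Return (Phi^{-1}, Phi^{1}) as lists over the mesh points."""
    phi_m1 = [p - k * ft for p, ft in zip(f0, f1)]         # Phi_i^{-1}
    phi_1 = [2 * k * ft + pm for ft, pm in zip(f1, phi_m1)]  # Phi_i^{1}
    return phi_m1, phi_1
```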

We will now discuss the inversion methods for the matrix (A)^n, which yield the solution Φ→^n.

An important discussion concerns Equation (9), particularly the matrix (A)^n. In principle, this matrix must be inverted at each time t_n in order to get the solution Φ→^n at that time. But if the coefficients of Equation (1) do not depend on time, then (A)^n = (A), and one single inversion is sufficient.

The Thomas algorithm can be used for the inversion (see Appendix). With this method, we do not get any information about the regularity of the matrix. But it is clear that if the tridiagonal matrix (A)^n is diagonally dominant, then it is regular. Of course, this property is sufficient but not necessary for regularity.

We prefer an inversion with Usmani's algorithm [

We can apply it to (A)^n, defining:

$$\theta_0 = 1,\quad \theta_1 = d_1,\quad \theta_{N_x+1} = 0,\quad \theta_i = d_i\theta_{i-1} - c_i e_{i-1}\theta_{i-2},\quad i = 2,3,\cdots,N_x,$$
$$\eta_{N_x+1} = 1,\quad \eta_{N_x} = d_{N_x},\quad \eta_0 = 0,\quad \eta_i = d_i\eta_{i+1} - c_{i+1} e_i\eta_{i+2},\quad i = N_x-1, N_x-2, \cdots, 2, 1.$$

Then, the coefficients of the inverted matrix ( B ) n are obtained with:

$$b_{ij} = \begin{cases} (-1)^{i+j}\,\displaystyle\prod_{l=i}^{j-1} e_l \cdot \dfrac{\theta_{i-1}\,\eta_{j+1}}{\theta_{N_x}}, & i < j \\[2mm] \dfrac{\theta_{i-1}\,\eta_{i+1}}{\theta_{N_x}}, & i = j \\[2mm] (-1)^{i+j}\,\displaystyle\prod_{l=j+1}^{i} c_l \cdot \dfrac{\theta_{j-1}\,\eta_{i+1}}{\theta_{N_x}}, & i > j \end{cases} \quad (12)$$
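A minimal sketch of this inversion, assuming the indexing convention of the tridiagonal system (c_i multiplies Φ_{i−1}, e_i multiplies Φ_{i+1}); the function name and the 0-based list layout are illustrative:

```python
from math import prod

# Sketch of Usmani's inversion formula (12): sub[i] sits at matrix
# position (i+1, i) and sup[i] at (i, i+1), 0-based.
def usmani_inverse(sub, diag, sup):
    n = len(diag)
    theta = [1.0, diag[0]] + [0.0] * (n - 1)   # theta[i]: leading minor of order i
    for i in range(2, n + 1):
        theta[i] = diag[i-1] * theta[i-1] - sub[i-2] * sup[i-2] * theta[i-2]
    eta = [0.0] * (n + 2)                      # eta[i]: trailing minor
    eta[n + 1], eta[n] = 1.0, diag[n - 1]
    for i in range(n - 1, 0, -1):
        eta[i] = diag[i-1] * eta[i+1] - sub[i-1] * sup[i-1] * eta[i+2]
    det = theta[n]
    B = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):                  # 1-based indices as in Eq. (12)
        for j in range(1, n + 1):
            if i <= j:
                p = prod(sup[l - 1] for l in range(i, j))
                B[i-1][j-1] = (-1) ** (i + j) * p * theta[i-1] * eta[j+1] / det
            else:
                p = prod(sub[l - 2] for l in range(j + 1, i + 1))
                B[i-1][j-1] = (-1) ** (i + j) * p * theta[j-1] * eta[i+1] / det
    return B
```

For i = j the empty product is 1 and the sign factor is +1, so the middle case of Eq. (12) is recovered automatically.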

We get the solution:

$$\vec{\Phi}^n = (B)^n \vec{H}^n. \quad (13)$$

The solution Φ_i^n, at point x_i and time t_n, is given by a simple matrix-vector multiplication:

$$\Phi_i^n = \sum_{j=1}^{N_x} b_{ij}\, H_j^n, \quad i = 1,2,\cdots,N_x,\ n \geq 2. \quad (14)$$

In the case where the coefficients of Equation (1) do not depend on space, A = A(t), B = B(t), C = C(t), D = D(t), E = E(t) and F = F(t), the matrix (A) depends only on time. Its coefficients are constant at a fixed time t_n. The formula for the inverse matrix (B)^n can then be simplified [

Defining a real number Δ and a complex number θ in the following manner:

$$\Delta = \sqrt{d^2 - 4ec} \quad \text{and} \quad \theta = \operatorname{atanh}\!\left(\frac{\Delta}{d}\right), \quad (15)$$

we get |A|_i, i = 1, 2, ⋯, N_x, the determinant of the leading submatrix of (A) of order i, which has dimension (i × i). In the case where e·c = 0, the determinant of the matrix (A) is |A|_{N_x} = d^{N_x}.

One can verify that this determinant, for e ⋅ c ≠ 0 , is given by the following relations [

$$|A|_{N_x} = \begin{cases} \dfrac{1}{\Delta}\left\{\left[\dfrac{d+\Delta}{2}\right]^{N_x+1} - \left[\dfrac{d-\Delta}{2}\right]^{N_x+1}\right\} = \left(\pm\,\operatorname{sgn}(d)\sqrt{|ec|}\right)^{N_x} \dfrac{\sinh\!\big((N_x+1)\theta\big)}{\sinh(\theta)}, & d \neq \pm 2\sqrt{ec} \\[3mm] \left(\pm\sqrt{|ec|}\right)^{N_x}(N_x+1), & d = \pm 2\sqrt{|ec|} \end{cases} \quad (16)$$
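The closed form can be checked against the three-term determinant recurrence |A|_i = d·|A|_{i−1} − ec·|A|_{i−2}; the sketch below uses arbitrary test coefficients and only the real, distinct-root branch (d² > 4ec), with illustrative function names:

```python
import math

# Determinant of the n-by-n constant-coefficient tridiagonal matrix
# tridiag(c, d, e), two ways: recurrence vs. the closed form of Eq. (16).
def det_recurrence(c, d, e, n):
    dm2, dm1 = 1.0, d                # |A|_0 = 1, |A|_1 = d
    for _ in range(2, n + 1):
        dm2, dm1 = dm1, d * dm1 - e * c * dm2
    return dm1 if n >= 1 else dm2

def det_closed_form(c, d, e, n):
    delta = math.sqrt(d * d - 4 * e * c)     # assumes d^2 > 4ec
    rp, rm = (d + delta) / 2, (d - delta) / 2
    return (rp ** (n + 1) - rm ** (n + 1)) / delta
```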

The inverse matrix (B) exists when d differs from each of the following values [

$$d_l = 2\sqrt{ec}\,\cos\!\left(\frac{l\pi}{N_x+1}\right), \quad l = 1,2,\cdots,N_x. \quad (17)$$

Outside these values d_l, the matrix (A) is regular and its inverse (B) is given by the following formula:

$$b_{ij} = \begin{cases} (-1)^{i+j}\, e^{\,j-i}\, \dfrac{|A|_{i-1}\,|A|_{N_x-j}}{|A|_{N_x}}, & i \leq j \\[2mm] (-1)^{i+j}\, c^{\,i-j}\, \dfrac{|A|_{j-1}\,|A|_{N_x-i}}{|A|_{N_x}}, & i > j \end{cases} \qquad 1 \leq i,j \leq N_x. \quad (18)$$
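Eq. (18) can be sketched as follows, with the determinants |A|_i generated by the recurrence |A|_i = d·|A|_{i−1} − ec·|A|_{i−2}; the function name is illustrative:

```python
# Sketch: inverse of the n-by-n constant-coefficient tridiagonal matrix
# tridiag(c, d, e) via Eq. (18). Assumes the matrix is regular.
def const_tridiag_inverse(c, d, e, n):
    det = [1.0, d] + [0.0] * (n - 1)         # det[i] = |A|_i
    for i in range(2, n + 1):
        det[i] = d * det[i-1] - e * c * det[i-2]
    B = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):                # 1-based indices as in Eq. (18)
        for j in range(1, n + 1):
            if i <= j:
                B[i-1][j-1] = (-1) ** (i + j) * e ** (j - i) * det[i-1] * det[n-j] / det[n]
            else:
                B[i-1][j-1] = (-1) ** (i + j) * c ** (i - j) * det[j-1] * det[n-i] / det[n]
    return B
```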

For the numerical verification, we choose the telegraph equation, which has been treated by several authors [

The following problem treated by [

$$\begin{cases} -\Phi_{xx}(x,t) + \Phi_{tt}(x,t) + 2\alpha\,\Phi_t(x,t) + \beta^2\,\Phi(x,t) = g(x,t), & (x,t)\in\,]0,\pi[\,\times\,]0,\infty[\\ \Phi(x,0) = f_0(x) = \sin(x),\quad \Phi_t(x,0) = f_1(x) = -\sin(x), & x\in\,]0,\pi[\\ \Phi(0,t) = g_0(t) = 0,\quad \Phi(\pi,t) = g_L(t) = 0, & t\in\,]0,\infty[ \end{cases} \quad (19)$$

By choosing dx = 0.02 and dt = 0.001, the following results were obtained, showing the L∞ and L2 errors:
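A simplified reproduction of this experiment (not the authors' code) is sketched below: it uses a plain explicit CTCS (leapfrog) step in place of the paper's full hybrid scheme, a second-order Taylor start instead of Eq. (11), and assumes the commonly used exact solution Φ = e^{−t} sin x with α = β = 1 and forcing g = (2 − 2α + β²) e^{−t} sin x; the mesh parameters are chosen for a fast run, not the paper's dx and dt:

```python
import math

# Simplified CTCS (leapfrog) sketch for the telegraph test problem (19).
alpha, beta = 1.0, 1.0
L = math.pi
Nx = 99                          # interior mesh points
h = L / (Nx + 1)                 # dx
k = 0.0005                       # dt (k < h for stability)
x = [i * h for i in range(Nx + 2)]

def exact(xi, t):
    return math.exp(-t) * math.sin(xi)

def g(xi, t):
    return (2 - 2 * alpha + beta ** 2) * math.exp(-t) * math.sin(xi)

# initial data: Phi(x,0) = sin(x), Phi_t(x,0) = -sin(x)
prev = [math.sin(xi) for xi in x]            # Phi^0
curr = [0.0] * (Nx + 2)                      # Phi^1 via a Taylor start
for i in range(1, Nx + 1):
    phixx = (prev[i-1] - 2 * prev[i] + prev[i+1]) / h**2
    phit = -math.sin(x[i])
    # Phi_tt(0) from the PDE: Phi_tt = Phi_xx - 2*alpha*Phi_t - beta^2*Phi + g
    phitt = phixx - 2 * alpha * phit - beta**2 * prev[i] + g(x[i], 0.0)
    curr[i] = prev[i] + k * phit + 0.5 * k * k * phitt

t, T = k, 0.5
a = 1 / k**2 + alpha / k                     # coefficient of Phi^{n+1}
while t < T - 1e-12:
    nxt = [0.0] * (Nx + 2)                   # boundaries stay zero
    for i in range(1, Nx + 1):
        phixx = (curr[i-1] - 2 * curr[i] + curr[i+1]) / h**2
        rhs = (2 * curr[i] - prev[i]) / k**2 + alpha * prev[i] / k \
              + phixx - beta**2 * curr[i] + g(x[i], t)
        nxt[i] = rhs / a
    prev, curr = curr, nxt
    t += k

err_inf = max(abs(curr[i] - exact(x[i], t)) for i in range(Nx + 2))
```

Even this cruder explicit variant keeps the L∞ error small at t = 0.5, which makes the small errors reported for the hybrid scheme plausible.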

The results are very satisfactory, because our method is not computationally heavy and leads to a precise solution. Compared to the one used in [

In this work, a method for solving a general linear partial differential equation has been presented. The hybrid finite difference approach (BTCS + CTCS) that has been used is simple, accurate and efficient, and is economical in terms of computation and memory occupancy.

This study opens the way for numerous applications of this method to phenomena in science and technology governed by this equation. In addition, other methods could be explored to improve performance.

The authors declare no conflicts of interest regarding the publication of this paper.

Gueye, S.B., Mbow, C. and Diagana, M.F. (2021) Efficient BTCS + CTCS Finite Difference Scheme for General Linear Second Order PDE. Journal of Electromagnetic Analysis and Applications, 13, 135-143. https://doi.org/10.4236/jemaa.2021.1310010

For a resolution of Equation (10) with the Thomas algorithm, the vectors α→, γ→, and r→ can be introduced, in order to perform a forward elimination. The solutions are then obtained by backward substitution. This algorithm is presented below.
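The forward elimination and backward substitution steps can be sketched as follows; the names are illustrative, and the sketch assumes no zero pivot arises (e.g. under diagonal dominance):

```python
# Sketch of the Thomas algorithm for tridiag(sub, diag, sup) * phi = r,
# with sub[i] at matrix position (i+1, i) and sup[i] at (i, i+1), 0-based.
def thomas_solve(sub, diag, sup, r):
    n = len(diag)
    gamma = [0.0] * n                    # modified superdiagonal
    rho = [0.0] * n                      # modified right-hand side
    gamma[0] = sup[0] / diag[0] if n > 1 else 0.0
    rho[0] = r[0] / diag[0]
    # forward elimination
    for i in range(1, n):
        piv = diag[i] - sub[i-1] * gamma[i-1]
        gamma[i] = sup[i] / piv if i < n - 1 else 0.0
        rho[i] = (r[i] - sub[i-1] * rho[i-1]) / piv
    # backward substitution
    phi = [0.0] * n
    phi[n-1] = rho[n-1]
    for i in range(n - 2, -1, -1):
        phi[i] = rho[i] - gamma[i] * phi[i+1]
    return phi
```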