The Lanczos-Chebyshev Pseudospectral Method for Solution of Differential Equations

In this paper, we propose to replace the Chebyshev series used in pseudospectral methods with the equivalent Chebyshev economized power series, which can be evaluated more rapidly. We keep the rest of the implementation the same as in the spectral method, so no new mathematical principle is involved. We show by numerical examples that the new approach works well and that there is indeed no significant loss of solution accuracy. The advantages of using power series also include simplicity of formulation and implementation, which makes the method suitable for complex systems. We investigate the important issue of collocation point selection. Our numerical results indicate a clear accuracy advantage in using collocation points corresponding to the roots of the Chebyshev polynomial.


Introduction
The basic idea of spectral methods is to solve differential equations using truncated series expansions [1]. The basis functions chosen for this purpose, for example Chebyshev polynomials, must be linearly independent and orthogonal. The methods are implemented in terms of the expansion coefficients [2]. When a differential equation involves variable coefficients, it is more convenient to use pseudospectral methods, which employ collocation at selected points. Among them, the Chebyshev pseudospectral (CPS) methods are the most commonly used [2] [3]. In general, the collocation approach does not require the basis functions to be orthogonal. For example, splines [4], canonical polynomials [5], and even mesh-free formulations based on specially constructed functions [6] have been proposed. So far, none of the non-orthogonal approaches has competed with the standard pseudospectral methods in popularity, because none of the alternatives offers a clear and decisive advantage.
The idea of using a simple power series to generate an approximate solution is well known. It is also a well-known fact that pure monomial powers form a terribly ill-conditioned basis for all but the very simplest problems. However, this restriction can be overcome. A number of publications have reported that recursive formulations can be used to derive combinations of power functions known as canonical polynomials [5] [7] [8]. This approach may be considered too laborious, as the procedures are complex and a different set of canonical polynomials has to be developed for each particular problem. It has also been reported that a vast improvement can be achieved if an operational formulation is used [9]. In this alternative approach there is no implicit use of any recursive relationship. Instead, a series of matrix operations is used, including differentiation, power multiplication, and so on. However, the formulation still directly involves the linear differential operators of a given problem, and the polynomial basis so found is applicable only to that specific problem.
In this paper we describe an efficient alternative based on replacing series-expansion solutions that use orthogonal polynomials, such as Chebyshev polynomials, with the equivalent Chebyshev economized power series. We take full advantage of the collocation approach, so that the basis of our approximation consists only of simple power functions that can be used universally for all problems. Just as in the operational approach [9], a series of matrix operations is needed. However, the formulation and implementation are much simpler, as the polynomial basis need not be developed specifically for each problem. Because Chebyshev economized series are used, the best solution accuracy for a given number of terms is assured uniformly over the entire domain of interest. We show numerically that there is no loss of accuracy if the economized series is based on collocation points chosen at the roots of the Chebyshev polynomial, as originally proposed by Lanczos [10] [11]. We call this method the Lanczos-Chebyshev Pseudospectral (LCPS) method. We should point out that the use of Chebyshev economized power series has been reported previously [12]; the implementation given there is based on a recursive formulation and, as a result, lacks the simplicity that characterizes the LCPS method.
There are, however, a number of issues that can affect the numerical performance of the various methods. To retain spectral accuracy, pseudospectral formulations employ orthogonal-polynomial series. In implementing the classical pseudospectral methods, the series expansions are often replaced by function values at the collocation points in physical space [2] [13]. Thus, solutions are sought only at the collocation points. If global solutions are required, as in some nonlinear problems, they can only be obtained from the grid-point values by executing more than trivial numerical routines. For the CPS methods, deriving derivatives from the Chebyshev series results in a loss of accuracy [14] [15]. The operation is also expensive in execution time, needing O(N^2) operations, where N is the number of grid points used. With the FFT the operations are O(N log N), although there N is a much larger number than that used in the CPS methods. In contrast, with the power-series formulation we are able to restrict the number of operations required to evaluate a function or a derivative to O(2N). In a simulation requiring many evaluations, such as an iterative procedure for a nonlinear problem or a long evolution history of a transient problem, a great deal of speed can be gained with no penalty in extra and complicated program development.
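The O(2N) evaluation cost claimed above can be illustrated with Horner's rule, which evaluates a truncated power series, and a second single pass that evaluates its derivative. This is a minimal sketch of our own; the function names and the sample polynomial are illustrative, not from the paper.

```python
def horner(coeffs, x):
    """Evaluate sum_i coeffs[i] * x**i in one pass (N multiply-adds)."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def horner_derivative(coeffs, x):
    """Evaluate the derivative sum_i i*coeffs[i]*x**(i-1), also in one pass."""
    n = len(coeffs) - 1
    acc = 0.0
    for i in range(n, 0, -1):
        acc = acc * x + i * coeffs[i]
    return acc

# Example: p(x) = 1 + 2x + 3x^2, so p(0.5) = 2.75 and p'(0.5) = 5.0.
coeffs = [1.0, 2.0, 3.0]
```

No matrix operations or transforms are involved, which is why repeated evaluations in an iterative or time-stepping loop are cheap.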
Nevertheless, there are good reasons for the pseudospectral methods to remain so popular among numerical analysts. The methods can be used to solve a wide range of problems, not just in 1D but also in 2D and possibly 3D. In almost every case, far fewer grid points are required to achieve a given accuracy than with other numerical methods. There is the freedom to divide a given domain into a number of sub-domains, gaining many numerical advantages [16] [17]. The formulations involved can easily be programmed to select the grids and the number of terms used in the series expansion. The availability of appropriate commercial program packages [18] [19] is also an attraction. It should be mentioned that all these advantages are retained in the LCPS method, with the commercial package as the only exception.
As the formulation based on the power series is simple to implement, it is expected that the LCPS method will be easy to use for solving complex mathematical models arising from modern, sophisticated applications. We have already used the LCPS method to solve wave propagation in a complex system involving specially designed metamaterials and photonic crystals [20]. In the present paper we deal only with problems involving one spatial dimension. We have already extended the LCPS method to solve stationary wave-propagation problems in two dimensions [21]; this type of problem cannot be readily solved by other available numerical methods.

P. Y. P. Chen
In this paper, we provide details of the formulation of the LCPS method in Section 2, including the treatment of domain subdivision. In Section 3, we use several examples that have either analytical or known numerical solutions to demonstrate the efficiency of the LCPS method. Along with these examples, we examine practical issues, such as the selection of collocation points, how to overcome the reduced accuracy in approximating derivatives, and how best to manage the collocation points. Conclusions are presented in Section 4.

Formulating the LCPS Method
We consider a general differential equation involving spatial derivatives up to the second order,

∂u/∂t = L[u(x, t)], x ∈ [−1, 1], (1a)

subject to boundary conditions at the two ends,

B[u(±1, t)] = g(t), (1b)

and, for time-dependent problems, an initial condition,

u(x, 0) = u_0(x). (1c)

We use a truncated power series of order N as the approximate solution, where the u_i(t) are time-dependent coefficients:

u(x, t) ≈ Σ_{i=0}^{N} u_i(t) x^i. (2)

The spatial derivatives are obtained from Equation (2) by term-by-term differentiation:

∂u/∂x = Σ_{i=1}^{N} i u_i(t) x^{i−1}, ∂²u/∂x² = Σ_{i=2}^{N} i(i − 1) u_i(t) x^{i−2}. (3)

We use N − 1 collocation points x_i within the interval [−1, 1], namely the roots of the Chebyshev polynomial T_{N−1}(x):

x_i = cos((2i − 1)π/(2(N − 1))), i = 1, 2, …, N − 1. (4)

Applying the above procedure to Equation (1a) at the collocation points, together with the boundary conditions of Equation (1b), results in a set of N + 1 ODEs for the coefficient vector u = (u_0, u_1, …, u_N)^T,

du/dt = P u + f, (5)

where the matrix P, defined in Equation (6), contains the rows that collocate the spatial operator at the interior points and the rows that enforce the boundary conditions. For time-dependent problems, the initial condition of Equation (1c), together with the expansion of Equation (2) and the collocation points of Equation (4), is used to find the initial values u_i(0) of the coefficients.
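As a concrete illustration of the procedure above, the following sketch collocates a simple two-point boundary-value problem at the roots of T_{N−1}(x) and solves for the power-series coefficients. The test problem u″ = −π² sin(πx) with u(±1) = 0 (exact solution u = sin(πx)) is our own choice, not one of the paper's examples.

```python
import numpy as np

N = 16                                # series order: u(x) ~ sum_j a_j x**j
M = N - 1                             # number of interior collocation points
i = np.arange(1, M + 1)
x = np.cos((2 * i - 1) * np.pi / (2 * M))   # roots of T_{N-1}(x)

j = np.arange(N + 1)
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)

# Interior rows: collocate u''(x_i) = -pi^2 sin(pi x_i) term by term.
for r, xi in enumerate(x):
    A[r, 2:] = j[2:] * (j[2:] - 1) * xi ** (j[2:] - 2)
    b[r] = -np.pi ** 2 * np.sin(np.pi * xi)

# Boundary rows: u(-1) = 0 and u(+1) = 0.
A[M, :] = (-1.0) ** j
A[M + 1, :] = 1.0

a = np.linalg.solve(A, b)             # power-series coefficients a_0..a_N

# Evaluate the series on a fine grid and compare with the exact solution.
xf = np.linspace(-1, 1, 201)
err = np.max(np.abs(np.polyval(a[::-1], xf) - np.sin(np.pi * xf)))
```

Once the coefficients are known, the solution and its derivatives are available everywhere in the domain, not only at the grid points.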

A) Domain Subdivision
Equation (5) can be solved by a finite-difference formulation in time together with matrix inversion. For a high-order approximation, the collocation points near the boundaries are very close to each other. The implication is that the accuracy of the solution can be compromised if too high an order of expansion is used. This is not a numerical problem specific to the LCPS method; in the ordinary CPS method, care is always needed in evaluating derivatives [14] [22]. In fact, round-off errors become significant only when N is larger than 100 [22]. If needed, a high-order approximation can be avoided by dividing the domain into sub-domains. There are, however, other reasons for using sub-domains, as the distribution of the collocation points can then be better managed. Because the nearest collocation points are now more widely separated, there is the advantage of being able to use a larger integration step in transient problems.

If the computational domain is divided into M sub-domains, the k-th sub-domain has width ΔX_k = X_k − X_{k−1}, where X_k and X_{k−1} are, respectively, its right and left edges. The transformation X = α_k x + β_k, with α_k = (X_k − X_{k−1})/2 and β_k = (X_k + X_{k−1})/2, scales each sub-domain to [−1, +1]. The field function in each sub-domain is approximated by an N-th-order power series in the local coordinate x:

u^k(x, t) = Σ_{i=0}^{N} u_i^k(t) x^i,

with u^k = 0 outside the k-th sub-domain. Applying the above to Equation (1a) at the collocation points, Equation (4), in each sub-domain leads to the same set of equations as for the single-domain problem.
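The scaling X = α_k x + β_k can be computed directly from the sub-domain edges. A small sketch, with the function name our own:

```python
import numpy as np

def subdomain_maps(edges):
    """Given sub-domain edges X_0 < X_1 < ... < X_M, return the arrays
    (alpha_k, beta_k) such that X = alpha_k * x + beta_k maps the local
    coordinate x in [-1, 1] onto the k-th sub-domain [X_{k-1}, X_k]."""
    edges = np.asarray(edges, dtype=float)
    alpha = 0.5 * (edges[1:] - edges[:-1])   # half-widths
    beta = 0.5 * (edges[1:] + edges[:-1])    # midpoints
    return alpha, beta

# Two sub-domains [0, 1] and [1, 3]: the first maps with alpha = 0.5,
# beta = 0.5, so local x = -1 gives X = 0 and local x = +1 gives X = 1.
alpha, beta = subdomain_maps([0.0, 1.0, 3.0])
```

Derivatives with respect to the global coordinate pick up a factor 1/α_k per differentiation, which matters when matching derivatives across sub-domain interfaces.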
The assembly of all sub-domains gathers the series-expansion coefficients into a vector of length M(N + 1):

U = (u_0^1, …, u_N^1, u_0^2, …, u_N^2, …, u_0^M, …, u_N^M)^T.

Between two adjacent sub-domains, the boundary conditions of Equation (1b) are replaced by continuity conditions on the function and its first derivative:

u^k(1, t) = u^{k+1}(−1, t), (1/α_k) ∂u^k/∂x(1, t) = (1/α_{k+1}) ∂u^{k+1}/∂x(−1, t).

To implement these continuity conditions, the equations representing the boundary conditions in the matrix P of Equation (6) are changed to ones involving the adjacent sub-domains as well, so that the rows representing the interface conditions for the k-th sub-domain couple the coefficients u_i^k and u_i^{k+1}.
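In coefficient space, the two continuity conditions become two linear rows acting on the stacked coefficients of the adjacent sub-domains. A sketch under the scaling described above; the function name and coefficient layout are our own:

```python
import numpy as np

def continuity_rows(N, alpha_k, alpha_k1):
    """Two rows enforcing continuity of u and of the global first derivative
    between sub-domains k and k+1, acting on the stacked coefficient vector
    [a^k ; a^{k+1}] of length 2(N+1). Local x = +1 is the right edge of
    sub-domain k; local x = -1 is the left edge of sub-domain k+1."""
    j = np.arange(N + 1)
    # u^k(1) - u^{k+1}(-1) = 0:
    row0 = np.concatenate([np.ones(N + 1), -(-1.0) ** j])
    # (1/alpha_k) du^k/dx(1) - (1/alpha_{k+1}) du^{k+1}/dx(-1) = 0,
    # using du/dx(-1) = sum_j j a_j (-1)^(j-1) = -sum_j j a_j (-1)^j:
    row1 = np.concatenate([j / alpha_k, (j * (-1.0) ** j) / alpha_k1])
    return row0, row1
```

For example, with N = 2 and unit scalings, the pair p_k(x) = x² and p_{k+1}(x) = 3 + 2x matches both value and slope at the shared edge, so both rows annihilate the stacked vector (0, 0, 1, 3, 2, 0).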

B) Selection of Collocation Points
One well-discussed issue with pseudospectral methods is the selection of the collocation points. Other orthogonal polynomials, representing sets of collocation points different from those based on the Chebyshev polynomials, have been proposed, mostly with claims of better accuracy. Within the CPS methods, far more users use not the roots, Equation (4), but the turning points of T_N(x), known as the Gauss-Lobatto points [10] [11]. They are the roots of the polynomial

(1 − x²) T_N′(x), that is, x_j = cos(jπ/N), j = 0, 1, …, N. (12)

In implementations based on this selection, the first and last of the N + 1 points are reserved for the boundary conditions, while all the interior points are used for the collocation of the field equation. The reason for using Equation (12) is that, in approximating a function, this set of collocation points reproduces an N-th-order polynomial function exactly. In general, however, the boundary conditions for differential equations do not always specify the functional value itself. The fact that this set of collocation points can be used freely rests on the observation that the spectral errors in approximating a function can easily be smaller than 10^−15. For a single domain, the maximum errors will most likely occur near the boundaries. With sub-domains, it is important that the selection give the least error at the cell boundaries, as the inter-domain conditions include the continuity of derivatives.
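The two candidate point sets are easy to generate; a short sketch, with function names our own:

```python
import numpy as np

def chebyshev_roots(n):
    """The n roots of T_n(x), i.e. the Equation (4)-style interior points."""
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def gauss_lobatto(n):
    """The n + 1 turning points of T_n(x) (roots of (1 - x^2) T_n'(x)),
    including the endpoints +1 and -1."""
    return np.cos(np.pi * np.arange(n + 1) / n)
```

Note that the roots never include the endpoints, so boundary conditions must be imposed through separate equations, whereas the Gauss-Lobatto set contains ±1 explicitly.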

Numerical Examples
To validate the accuracy and computational efficiency of the LCPS method discussed in the previous sections, we have chosen the examples displayed below.

A) Accuracy of the Economized Chebyshev Power-Series Expansion and the Selection of Collocation Points
To test the difference between the two choices of collocation points, Equations (4) and (12), we use the test function of Equation (13). It has been found that at least a 10th-order Chebyshev polynomial series is required for this function to be represented with sufficient accuracy [23]. We test the series expansion employing 9 interior collocation points and two boundary points. We find the errors by comparing the exact functional values with those evaluated according to Equation (2). Figure 1(a) shows that the errors at regular intervals over the complete range [0, 1] are on the order of 10^−3. At the exact collocation points, as shown in the plots, the errors are many orders of magnitude smaller, at 10^−16; these are round-off errors, indicating that the matrix inversion is reliable. Figure 1(b) shows the errors of the first derivative. From this plot, it can be seen that the errors for the derivative have increased a hundred-fold compared with the errors for the function itself, and the minimum errors for the derivative are no longer at the collocation points. However, our investigation reveals that, just as for the function itself, the errors of the derivatives decrease exponentially with an increasing number of points. It is therefore easy to control the errors of the derivatives, although the actual number of points needed depends on the smoothness of the function, the expected accuracy, and the length over which the approximation is made. In practice, we recommend N ≤ 20, so that the integration steps need not be very small. If more than 20 terms are required for the necessary accuracy, domain subdivision can be used; in this case, the conditions at the cell borders include the continuity of both the function and its first derivative. We shall show in a later example that, for a multi-domain treatment, there is a clear advantage in using the set of zeros as collocation points.
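The qualitative behavior described above, near machine-precision errors at the collocation points and larger errors in between, can be reproduced with a short experiment. The smooth test function below is a stand-in of our own, not the function of Equation (13):

```python
import numpy as np

f = lambda x: np.exp(x) * np.sin(5 * x)   # stand-in smooth test function

n = 11                                     # 11 points -> degree-10 polynomial
k = np.arange(1, n + 1)
xs = np.cos((2 * k - 1) * np.pi / (2 * n))      # Chebyshev roots on [-1, 1]

V = np.vander(xs, n, increasing=True)           # power basis x^0 .. x^10
a = np.linalg.solve(V, f(xs))                   # power-series coefficients

# Error is at round-off level at the nodes, larger between them.
err_nodes = np.max(np.abs(np.polyval(a[::-1], xs) - f(xs)))
xf = np.linspace(-1, 1, 401)
err_global = np.max(np.abs(np.polyval(a[::-1], xf) - f(xf)))
```

The node errors reflect only the conditioning of the small linear solve, while the global error reflects the truncation of the series, consistent with the behavior reported for Figure 1.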
B) Steady-State Heat Conduction

To see how errors in approximating a function affect the solution of differential equations, we use steady-state heat conduction with a variable coefficient α(x) as an example. As demonstrated in Example A above, evaluating the derivative α′(x) at any given point is easy with the LCPS method. There is also the freedom to choose the number of terms so as to provide the required accuracy. The errors in the temperature T and its second derivative are shown in Figure 2. Figure 3 shows our results, which are comparable in every respect to Figure 2(a) in [11], where the CPS method was used.

C) Eigenvalues for Timoshenko beams
Here we use an example that has been employed to validate the pseudospectral method for eigenvalue analysis [24]. The governing equations for a beam of length L, normalized to x ∈ [−1, 1], are a pair of coupled second-order equations in the transverse deflection w and the cross-section rotation θ.
We use the particular case with clamped boundary conditions. For the above simultaneous equations, the LCPS method transforms them into a generalized eigenvalue problem involving matrices of size 2(N + 1) × 2(N + 1),

C u = λ D u, (20)

where the elements of the vector u are the series-expansion coefficients for θ and w. We have used the general matrix eigenvalue solver in the IMSL package to solve Equation (20). In Table 1 we show the lowest real dimensionless eigenvalues found for different numbers of collocation points, together with the results reported in [24]. It can be seen that all the small eigenvalues are in good agreement.
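Outside of IMSL, a generalized eigenvalue problem of the form of Equation (20) can be solved with standard numerical libraries. A toy sketch with stand-in 2 × 2 matrices (the paper's C and D come from the collocated beam equations and are much larger):

```python
import numpy as np

# Stand-in matrices for illustration only.
C = np.array([[2.0, 1.0],
              [0.0, 3.0]])
D = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# When D is invertible, C u = lambda D u reduces to (D^-1 C) u = lambda u.
# Dedicated generalized solvers (e.g. scipy.linalg.eig(C, D), or IMSL as
# used in the paper) avoid forming the inverse explicitly.
vals = np.linalg.eigvals(np.linalg.solve(D, C))
lam = np.sort(vals.real)   # eigenvalues of the triangular C are 2 and 3
```

For the beam problem, only the smallest eigenvalues are physically meaningful; the largest ones are artifacts of the truncation and should be discarded.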

D) Example of the nonlinear Schrödinger (NLS) Equation in one dimension (1D)
We consider the NLS equation in 1D for a pulse of width W moving with speed S, Equation (21). The equation has a well-known analytical solution, the soliton. This is a time-dependent example with complex-valued functions, which has been used to illustrate the efficiency of different numerical methods [4] [25]. We use it here to demonstrate that the expansion of Equation (2) is still applicable, except that all the coefficients are now complex-valued. The treatment of the derivatives remains the same as above, as only the spatial variables are involved. In fact, the entire procedure remains the same, except that complex arithmetic operations are required. The LCPS method is used here to reduce Equation (21) to a set of ODEs in time, which is then solved numerically. It is not our intention to investigate the pros and cons of various numerical integration methods; for this paper, we have used the unconditionally stable Crank-Nicolson method, keeping the time step small enough to prevent the growth of unwanted solutions.
Besides involving complex-valued functions, the pulse shapes are far more complicated than in all the previous examples. Depending on L and S, and also on the expected accuracy, the number of required collocation points can vary greatly. We have chosen L = 40 to produce a pulse shape close to the soliton solution. If S = 0 and the pulse remains stationary, N = 20 with four sub-domains is found to be sufficient, as shown by the evolution histories displayed in Figure 4(a). However, if S = 4, for the moving pulse it is necessary to increase the total number of collocation points by using N = 20 with eight sub-domains. The corresponding histories are shown in Figure 4(b). In both cases, the time step was 0.0001.
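The Crank-Nicolson stepping used here can be sketched for the linear part of a semi-discrete system du/dt = A u; the nonlinear term of the NLS equation, handled in the actual computation, is omitted in this illustration:

```python
import numpy as np

def crank_nicolson_step(A, u, dt):
    """One Crank-Nicolson step for du/dt = A u:
    (I - dt/2 * A) u_{n+1} = (I + dt/2 * A) u_n."""
    I = np.eye(A.shape[0], dtype=A.dtype)
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)

# Scalar check on du/dt = i*u: the scheme is unitary for skew-Hermitian A,
# so |u| is preserved exactly (up to rounding) over many steps.
A = np.array([[1j]])
u = np.array([1.0 + 0.0j])
for _ in range(100):
    u = crank_nicolson_step(A, u, 0.01)
```

The unconditional stability means the time step is limited only by accuracy, which is why a small fixed step of 0.0001 suffices for the long evolution histories shown in Figure 4.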
We cannot see any pulse-shape changes in either Figure 4(a) or Figure 4(b) over the entire time period considered. In fact, there are only small differences between the analytical and numerical solutions. We have plotted in Figure 4(c) the maximum error at various time intervals; the errors are clearly well within acceptable limits. It is interesting that, for S = 0, the numerical solutions are more accurate than for S = 4, although the former uses half the total number of collocation points. In both cases, although there is a marked reduction of accuracy over the initial time period, the trend is for the accuracy to settle toward a constant level. We can therefore expect the numerical procedure to continue reliably to much longer times.
Also shown in Figure 4(c) is the error plot for solutions obtained under the same conditions, except that the turning points of Equation (12) were used as collocation points. The results show that collocation at the zeros given by Equation (4) is clearly superior.

Conclusions
Our numerical results show that, for the solution of differential equations, the Chebyshev series used in the CPS methods can be replaced by the economized Chebyshev power series. This may not be surprising, as a well-established numerical method for approximating a given function is to replace its Chebyshev series expansion by the equivalent economized power-series expansion. For computational purposes, a power series is easier and faster to evaluate.
Although our numerical results show that derivatives obtained from the power series are not as accurate as the function itself, this deficiency can be offset by a small increase in the order of the series used. The derivatives based on the Chebyshev economized power series retain the important property of spectral accuracy. From our numerical examples, we have found no cases that can be solved by the CPS methods but not by the LCPS method.
We have demonstrated that the formulation of the LCPS method is simpler than that of the CPS methods, so it should be easier to apply the LCPS method to complex models. For nonlinear problems, the LCPS method has the advantage of being able to evaluate the function and its derivatives globally and rapidly.
Finally, we have shown that the roots (zeros) of the Chebyshev orthogonal polynomials are the better choice for the collocation points.
Figure 2. Errors in T and its second derivative.

Figure 1. Errors in approximating the function in Equation (13): (a) the function; (b) its first derivative. Symbols denote errors at the collocation points.

Figure 3. The dependence of the solution accuracy on the number of terms used in the coefficient approximation: L is the series order of the coefficient and N is the series order of the function.

Figure 4. The evolution of the soliton-like pulses: (a) S = 0; (b) S = 4; (c) the maximum error in the numerical solutions.