The Numerical Inversion of the Laplace Transform in a Multi-Precision Environment

This paper examines the performance of five algorithms for numerically inverting the Laplace transform in standard 16-digit and multi-precision environments. The algorithms are taken from three of the four main classes of numerical methods used to invert the Laplace transform. Because the numerical inversion of the Laplace transform is a perturbed problem, rounding errors generated in numerical approximations can adversely affect the accurate reconstruction of the inverse transform. This paper demonstrates that working in a multi-precision environment can substantially reduce these errors and the resulting perturbations that arise in transforming data from the s-space into the time domain, thereby overcoming the main drawback of numerically inverting the Laplace transform. Our main finding is that both the Talbot and the accelerated Gaver functionals perform considerably better in a multi-precision environment, increasing the advantages of using Laplace transform methods over time-stepping procedures in solving diffusion and, more generally, parabolic partial differential equations.


Introduction
This paper examines the performance of five algorithms for numerically inverting the Laplace transform in standard 16-digit and multi-precision environments. The algorithms, whose derivations are outlined in Section 5, are taken from three of the four main classes of numerical methods used to invert the Laplace transform [1].
The Abate-Valko [1] and Logan [2] schemes belong to the class of inversion algorithms which deform the Bromwich contour [3]. They are closely related versions of this approach, as both use Talbot's method for deformation of the contour [4]. Logan, however, chooses an exponential transform, while Abate-Valko extends the original Talbot formulation, expressing the contour in trigonometric form.
The Stehfest and Salzer-Gaver algorithms [5] are again two closely related schemes, based on the acceleration of the Gaver functional [6]. Stehfest applied a modified Salzer acceleration scheme [7] to the Gaver functional, simplifying the result to yield one of the most widely used algorithms for inverting the Laplace transform. We find, however, that when we implement a direct application of the Salzer acceleration scheme on the Gaver functional (Salzer-Gaver), with Stehfest's modifications, we do not obtain the same results as those produced by the Stehfest scheme. We conclude that Stehfest's simplification process is at least in part responsible for the differences in performance of these two versions.
Finally, we examine the Fourier series method [8], which expresses the inversion integral as a Fourier series and then uses the trapezium rule to evaluate the integral numerically. The Fourier series method differs from the other four algorithms in that no acceleration scheme is used to force convergence. As such, it is only used in a standard 16-digit precision environment and is compared with the four other schemes using standard precision.

The Laplace Transform
The Laplace transform is an integral transform defined as follows: let $f(t)$ be defined for $t \ge 0$; then the Laplace transform of $f(t)$ is given by
$$\mathcal{L}\{f(t)\} = \int_0^{\infty} e^{-st} f(t)\,\mathrm{d}t,$$
a function of $s$ denoted as $F(s)$. The Laplace transform can be shown to exist for any function which can be integrated over any finite interval $0 < t < l$ for $l > 0$, and for which $f(t)$ is of exponential order, i.e.
$$|f(t)| \le M e^{at} \quad \text{as } t \to \infty,$$
where $M > 0$ is a finite real number and $a$ is a small real positive number.
Analytically the inverse Laplace transform is usually obtained using the techniques of complex contour integration with the resulting set of standard transforms presented in tables [9].
However, using the Laplace transform can generate data in the Laplace domain which is not easily invertible to the real domain by analytical means. Thus numerical inversion techniques have to be used to convert the data from the s-space to the time domain.

The Inverse Laplace Transform, Perturbation and Multi-Precision
The recovery of the function $f(t)$ from $F(s)$ is achieved via the inversion integral
$$f(t) = \frac{1}{2\pi i}\int_{\alpha - i\infty}^{\alpha + i\infty} e^{st} F(s)\,\mathrm{d}s,$$
where $\alpha \in \mathbb{R}$ is chosen so that all singularities of $F(s)$ lie to its left. The inversion integral is widely known to be highly perturbed [10] [11] [12]. This is due to the $e^{st}$ term in the integral, which amplifies small changes in the input data. Hence all numerical schemes are vulnerable to this perturbation, and this has to be taken into account when using the various algorithms to invert the Laplace transform.
As Abate-Valko noted [1], "In the traditional development of the inversion methods, most of the effort was directed at controlling round-off errors. This is because the process is numerically unstable in a fixed-precision, computing environment. The problem is that as the user tries to increase the accuracy there comes a point where the round off error causes the error to increase dramatically".
In fact Abate-Valko went further and made the claim that "for our purposes, we add the proviso that values of the transform can be computed to any desired precision as a function of the complex variable".
This suggests that working in a multi-precision environment can act to reduce errors and the resulting perturbations which exist in transforming the data from the s-space into the time domain.

The Algorithms
Large parts of this section are taken from our earlier work [13]. This is necessary to provide sufficient background on the derivation of the algorithms and their performance. However, we extend this work to include comments on Salzer acceleration and the subsequent Salzer-Gaver scheme's derivation, and compare this to the Stehfest inversion scheme.
There are over 100 algorithms available for inverting the Laplace transform, with numerous comparative studies; examples include Duffy [14], Narayanan and Beskos [15], Cohen [10] and, perhaps the most comprehensive, Davies and Martin [16]. However, for the purposes of this investigation we apply our tests using "those algorithms that have passed the test of time" [1]. These fall into four groups: 1) Fourier series expansion; 2) Laguerre function expansion; 3) combination of Gaver functionals; and 4) deformation of the Bromwich contour.

The Fourier Series Method
In their survey of algorithms for inverting the Laplace transform, Davies and Martin [16] note that the Fourier series method without accelerated convergence gives good accuracy on a wide variety of functions. Since the Laplace transform is closely related to the Fourier transform, it is not surprising that inversion methods based on a Fourier series expansion would yield accurate results. In fact the two-sided Laplace transform can be derived from the Fourier transform in the following way. We can define the Fourier transform of $g(t)$ as
$$G(v) = \int_{-\infty}^{\infty} g(t)\, e^{-ivt}\,\mathrm{d}t.$$
This Fourier transform exists provided
$$\int_{-\infty}^{\infty} |g(t)|\,\mathrm{d}t < \infty. \quad (6)$$
As many functions do not satisfy condition (6), we consider instead $g(t) = f(t)e^{-ut}$ and, letting $s = u + iv$, we obtain the two-sided Laplace transform of $f(t)$:
$$F(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\,\mathrm{d}t. \quad (8)$$
Le Page [17] notes that the integral given by (8) can be written in two parts as follows:
$$F(s) = \int_{-\infty}^{0} f(t)\, e^{-st}\,\mathrm{d}t + \int_{0}^{\infty} f(t)\, e^{-st}\,\mathrm{d}t.$$
The second term on the RHS in the above expression is referred to as the one-sided Laplace transform, or simply the Laplace transform. Thus, $s$ is defined as a complex variable in the definition of the Laplace transform.
As before, the inverse Laplace transform is given as
$$f(t) = \frac{1}{2\pi i}\int_{u - i\infty}^{u + i\infty} F(s)\, e^{st}\,\mathrm{d}s. \quad (10)$$
This leads to the result [8]
$$f(t) = \frac{2e^{ut}}{\pi}\int_{0}^{\infty} \operatorname{Re}\{F(u + iv)\}\cos(vt)\,\mathrm{d}v, \quad (13)$$
in which the inversion integral has been replaced by the cosine transform pair; an equivalent form uses the sine transform pair. Dubner and Abate [18] applied a trapezoid rule to (13), resulting in the Fourier series approximation
$$f(t) \approx \frac{e^{ut}}{T}\left[\frac{1}{2}F(u) + \sum_{k=1}^{\infty}\operatorname{Re}\left\{F\!\left(u + \frac{ik\pi}{T}\right)\right\}\cos\left(\frac{k\pi t}{T}\right)\right], \quad (16)$$
where the parameter $T$ determines the period of the approximating series. For faster computation Simon et al. [19] proposed the following version of (16):

$$f(t) \approx \frac{e^{ut}}{t}\left[\frac{1}{2}F(u) + \sum_{k=1}^{\infty}(-1)^k \operatorname{Re}\left\{F\!\left(u + \frac{ik\pi}{t}\right)\right\}\right]. \quad (17)$$
This series can be summed much faster than (16) as there are no cosines to compute [20]. This algorithm is relatively easy to implement, with $u$ being the only real varying parameter.
However, as Crump [8] points out, for the expression in (17) the transform $F(s)$ must now be computed for a different set of s-values for each distinct $t$. Since this type of application occurs often in practice, in cases where the numerical computation of $F(s)$ is itself quite time consuming this may not be an economical inversion algorithm to use. These drawbacks can to some extent be overcome by using the fast Fourier transform techniques in [20] [21].
Crump [8] also extends this method to one with faster convergence by making use of the already computed imaginary parts. There are several other acceleration schemes, for example those outlined by Cohen [10]; however, these acceleration methods in general require the introduction of new parameters, which for the purposes of this investigation we wish to avoid.
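To make the method above concrete, the Simon et al. variant in (17) can be sketched in a few lines. The following Python sketch is ours, not the paper's implementation (the authors worked in Maple); the contour choice u = 6/t and the fixed truncation length are illustrative assumptions, not values taken from the paper.

```python
import math

def fourier_series_inverse(F, t, u=None, n_terms=2000):
    """Simon et al. variant of the Dubner-Abate Fourier series.

    Fixing the period parameter T = t turns every cosine into
    (-1)**k, so no trigonometric evaluations are needed.
    F is the Laplace transform, callable on complex s; u is the
    real part of the contour.
    """
    if u is None:
        u = 6.0 / t  # illustrative choice; the optimal u is problem dependent
    total = 0.5 * F(complex(u, 0.0)).real
    for k in range(1, n_terms + 1):
        total += (-1) ** k * F(complex(u, k * math.pi / t)).real
    return math.exp(u * t) / t * total

# Example: F(s) = 1/(s+1) is the transform of f(t) = exp(-t)
approx = fourier_series_inverse(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Without an acceleration scheme the alternating series converges only algebraically, which is why a few thousand terms are needed even for a smooth transform.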

The Stehfest Algorithm
Davies and Martin [16] cite the Stehfest [5] algorithm as providing accurate results on a wide variety of test functions. Since that time, this algorithm has become widely used for inverting the Laplace transform, being favoured due to its reported accuracy and ease of implementation.
Here we give a brief overview of the evolution of the algorithm from a probability distribution function to the Gaver functional whose asymptotic expansion leads to an acceleration scheme which yields the algorithm in its most widely used form.
Gaver [6] investigated a method for obtaining numerical information on the time-dependent behavior of stochastic processes which often arise in queuing theory. The investigation involved examining the properties of the three-parameter class of density functions, namely
$$f_{n,m}(t) = \frac{(n+m)!}{n!\,(m-1)!}\,a\,e^{-mat}\left(1 - e^{-at}\right)^{n}, \quad (18)$$
with $n, m \in \mathbb{N}$.
After the binomial expansion of the term $\left(1 - e^{-at}\right)^{n}$, Gaver went on to find the expectancy $E[f(T)]$, where $T$ is the random variable with density (18). From this Gaver was able to express the inverse Laplace transform in terms of the functional
$$f_{n,m}(a) = \frac{(n+m)!}{n!\,(m-1)!}\,a\sum_{k=0}^{n}(-1)^k\binom{n}{k}F\big((m+k)a\big), \quad (19)$$
with certain conditions on $n$ and $m$. Gaver makes $n = m$ and expresses (19) as
$$f_n(a) = \frac{(2n)!}{n!\,(n-1)!}\,a\sum_{k=0}^{n}(-1)^k\binom{n}{k}F\big((n+k)a\big). \quad (20)$$
While the expression in (20) can be used to successfully invert the Laplace transform for a large class of functions, its rate of convergence is slow [9] [10].
However, Gaver [6] has shown that (20), with $a = \ln 2 / t$, has the asymptotic expansion
$$f_n(t) = f(t) + \frac{\alpha_1}{n} + \frac{\alpha_2}{n^2} + \frac{\alpha_3}{n^3} + \cdots, \quad (21)$$
where the $\alpha_j$'s are constant coefficients in the asymptotic series. Hence (21) in the limit converges to $f(t)$. For the conditions on $m$ and $n$ and justification for the substitution for $a$ referred to above, see Gaver [6]. This asymptotic expansion provides scope for applying various acceleration techniques, enabling a more viable application of the basic algorithm. Much of the literature alludes to the fact that a Salzer [7] acceleration scheme applied to the Gaver functional in (20) results in the Stehfest algorithm. In fact Stehfest's approach was a little more subtle than a direct application of the Salzer acceleration.

Using Salzer Acceleration
The Salzer acceleration scheme makes use of the "Toeplitz limit theorem" [7]; this concerns the convergence of a transformation of a sequence where the $(n+1)$th member of the transformed sequence is a weighted mean of the first $(n+1)$ members of the original sequence:
$$S_n = \sum_{k=0}^{n} \mu_{n,k}\, s_k. \quad (22)$$
Here $S_n$ is the transformed sequence and $s_k$ the original sequence, which for our purposes is the Gaver functional (20). The Salzer means are given by
$$\mu_{n,k} = (-1)^{n+k}\frac{k^{n}}{k!\,(n-k)!}. \quad (23)$$
For the sake of compatibility with (22) we make the change $k \to i$ and $n \to k$ in (20). With this change of variables we also set the $k = 0$ term to zero; this allows the sum to be taken from $k = 0$ to $n$ without the $(k-1)!$ in the denominator of (20) becoming undefined. So with Salzer acceleration we can express the Salzer-Gaver algorithm as
$$f(t) \approx \sum_{k=1}^{n} (-1)^{n+k}\frac{k^{n}}{k!\,(n-k)!}\, f_k\!\left(\frac{\ln 2}{t}\right). \quad (24)$$
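A direct implementation of the Gaver functional (20) with Salzer weights can be sketched in a multi-precision setting as follows. This Python/mpmath sketch is ours (the paper's computations were done in Maple); the 60-digit working precision and the test transform are illustrative assumptions.

```python
from mpmath import mp, mpf, binomial, factorial, log

mp.dps = 60  # working precision in digits (illustrative setting)

def gaver(F, t, k):
    # Gaver functional (20) with a = ln 2 / t
    a = log(2) / t
    c = factorial(2 * k) / (factorial(k) * factorial(k - 1))
    s = sum((-1) ** j * binomial(k, j) * F((k + j) * a)
            for j in range(k + 1))
    return a * c * s

def salzer_gaver(F, t, M):
    # Salzer acceleration applied directly to the Gaver functional:
    # the weights (-1)**(M-k) * k**M / (k! (M-k)!) sum to one and
    # cancel the leading 1/k terms of the asymptotic expansion (21).
    total = mpf(0)
    for k in range(1, M + 1):
        w = (-1) ** (M - k) * mpf(k) ** M / (factorial(k) * factorial(M - k))
        total += w * gaver(F, t, k)
    return total
```

With M = 8 and a smooth transform such as F(s) = 1/(s+1) this already recovers several correct digits; in fixed double precision the same large, alternating weights would be destroyed by cancellation, which is precisely the motivation for multi-precision arithmetic here.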

Stehfest's Acceleration Scheme
For the purposes of following Stehfest's derivation it will be convenient to rewrite (20) as
$$f_n(a) = \frac{(2n)!}{n!\,(n-1)!}\sum_{k=0}^{n}(-1)^k\binom{n}{k}\,a\,F\big((n+k)a\big). \quad (25)$$
Stehfest [5] begins by supposing we have $N$ values of the transform, $F(a), F(2a), \ldots, F(Na)$, for $N$ even. Using (25) we can then form the $N/2$ approximations $f_1(a), f_2(a), \ldots, f_{N/2}(a)$. Now each of these $N/2$ values satisfies the asymptotic series in (21) with the same coefficients.
As Stehfest [5] points out, the $\alpha_j$'s are the same for each of the above expansions, and by using a suitable linear combination the first $(N/2 - 1)$ error terms in (21) can be eliminated. That is,
$$f(t) \approx \sum_{n=1}^{N/2} w_n f_n(a), \quad (26)$$
which may be achieved by selecting the coefficients to satisfy
$$\sum_{n=1}^{N/2} w_n = 1, \qquad \sum_{n=1}^{N/2} \frac{w_n}{n^j} = 0 \quad \text{for } j = 1, \ldots, N/2 - 1, \quad (27)$$
which produce the same coefficients as the Salzer acceleration scheme used in (22). In fact for any $n$, Stehfest generates the required coefficients using what is in effect a modified Salzer acceleration scheme, giving
$$w_n = (-1)^{N/2 - n}\frac{n^{N/2}}{n!\,(N/2 - n)!}. \quad (28)$$
Finally, Stehfest substitutes these results into (26) and gets the inversion formula
$$f(t) \approx \frac{\ln 2}{t}\sum_{k=1}^{N} V_k\, F\!\left(\frac{k \ln 2}{t}\right) \quad (29)$$
for $N$ even, and
$$V_k = (-1)^{N/2 + k}\sum_{j=\lfloor (k+1)/2 \rfloor}^{\min(k,\,N/2)} \frac{j^{N/2}\,(2j)!}{(N/2 - j)!\,j!\,(j-1)!\,(k-j)!\,(2j-k)!}. \quad (30)$$
However, a direct application of the modified Salzer acceleration scheme in (28) onto the Gaver functional in (25) does not produce the same results as the expression in (30), so the two are not exactly equal to each other.
To show this we consider the function $\sin t$, whose Laplace transform is $1/(s^2 + 1)$. The eight weights produced by the Salzer acceleration for $n = 8$ are exactly the same as those for $N = 16$ in Stehfest's modified Salzer acceleration scheme in (28). However, Table 1 shows that for $\sin t$, with these same weights, the Salzer-Gaver scheme produces different results when compared to Stehfest's scheme in (30). We believe this is due to Stehfest's simplification of the Salzer-Gaver scheme to the expression in (30).
This simplification was necessary because, as we show in our results in Section 6, Stehfest's final expression in (30) is faster and works better in standard double precision. As the algorithm was developed in 1970, this would be far more efficient when taking into consideration the computing power available at the time. Again, as we show in Section 6, a direct application of a Salzer acceleration scheme onto the Gaver functional is only advantageous in a multi-precision environment.
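Stehfest's simplified form with the combined coefficients in (30) can be sketched in double precision as follows. This Python sketch is ours rather than the authors' Maple implementation; the test transform in the usage line is an illustrative assumption.

```python
import math

def stehfest_weights(N):
    # Combined coefficients V_k of Stehfest's inversion formula (30),
    # valid for N even.
    half = N // 2
    V = []
    for k in range(1, N + 1):
        acc = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            acc += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (half + k) * acc)
    return V

def stehfest_invert(F, t, N=16):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Example: invert F(s) = 1/(s+1), whose inverse is exp(-t)
value = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0, N=16)
```

The weights alternate in sign and grow rapidly with N, so in double precision the accuracy degrades once N exceeds an optimal level, consistent with the behaviour reported for M above 16 in Section 6.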

The Talbot Algorithm
Equations (4) to (8) showed that the Laplace transform can be seen as a Fourier transform of the function $f(t)e^{-ut}$, i.e.
$$F(u + iv) = \int_{-\infty}^{\infty} \left[f(t)e^{-ut}\right] e^{-ivt}\,\mathrm{d}t.$$
Hence the Fourier transform inversion formula can be applied to recover the function thus:
$$f(t)e^{-ut} = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(u + iv)\, e^{ivt}\,\mathrm{d}v.$$
As $s = u + iv$ we have that $\mathrm{d}s = i\,\mathrm{d}v$ and so
$$f(t) = \frac{1}{2\pi i}\int_{u - i\infty}^{u + i\infty} F(s)\, e^{st}\,\mathrm{d}s. \quad (34)$$
This result provides a direct means of obtaining the inverse Laplace transform. In practice the integral in (34) is evaluated using a contour,
$$f(t) = \frac{1}{2\pi i}\int_{B} F(s)\, e^{st}\,\mathrm{d}s, \quad (35)$$
with $B$ here denoting the Bromwich contour [3]. The contour is chosen so that it encloses all the possible singularities of $F(s)$. The idea of the contour is introduced so that the Cauchy residue theorem can be used to evaluate the integral. However, when $f(t)$ is to be calculated using numerical quadrature, it may be more appropriate to devise a new contour. To ensure that (35) is integrable we may wish to control the growth of the magnitude of the factor $e^{st}$ by moving the contour to the left, so giving the real part of $s$ a large negative component [1] [22].
However, the deformed contour must not be allowed to pass through any singularities of $F(s)$. This is to ensure that the transform is analytic in the region to the right of $B$.

Derivation of the Fixed Talbot Contour
In the derivation that follows [1] and [22] are used as the primary basis for extending the explanation of the derivation of the Talbot algorithm for inverting the Laplace Transform.
Abate-Valko [1] began with the Bromwich inversion integral (37) along the Bromwich contour $B$, with the singularities of the transform $F(s)$ lying to the left of $B$ and with $\alpha = a/t$ in (37). As Abate-Valko [1] indicated, numerically evaluating the integral in (37) is difficult due to the oscillatory nature of the integrand.
However, this evaluation can be achieved by deforming the contour $B$ into a path of constant phase, thus eliminating the oscillations in the imaginary component. These paths of constant phase are also paths of steepest descent for the real part of the integrand [1] [22] [23].
There are in general a number of contours on which the imaginary component remains constant, so we choose the one on which the real part attains a maximum. This leads to the Talbot contour
$$s(\theta) = r\theta(\cot\theta + i), \quad -\pi < \theta < +\pi \quad (45)$$
[4].

Conformal Mapping of the Talbot Contour
While the above parametrization can be used as a basis for inverting the Laplace transform we proceed with the algorithm's development via a convenient conformal mapping as follows.
For the details of this transformation one can refer to the study of Logan [2]. Next we follow the procedure adopted by Logan [2] for numerically integrating (51). With the change of variable, (51) becomes an integral along the transformed contour, which for convenience we write in the compact form (54). The integral in (54) is then rotated by $\pi/2$ so that the interval of integration is now real and becomes $[-\pi, \pi]$, and we then use the trapezoid rule with $n$ odd.

The Abate-Valko Contour
Abate-Valko [1] deformed the Bromwich contour using the Talbot path, which has the form
$$s(\theta) = r\theta(\cot\theta + i), \quad -\pi < \theta < +\pi,$$
which is the same as (47) with $a = r$ and $h = \theta$. They then approximate the value of the resulting integral (63) by using the trapezoidal rule with step size $\pi/M$. Based on numerical experiments, Abate-Valko then fixed the parameter $r$ to the value
$$r = \frac{2M}{5t}$$
[1]. We also use this value for $a$ in Logan's transformation.
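The resulting fixed-Talbot quadrature is compact enough to sketch directly. The following Python sketch is ours rather than the authors' Maple code; it evaluates the trapezoidal sum along the contour above with the fixed choice r = 2M/(5t), and the test transform is an illustrative assumption.

```python
import cmath
import math

def fixed_talbot(F, t, M=20):
    # Trapezoidal sum along the fixed Talbot contour
    # s(theta) = r*theta*(cot(theta) + i), with r fixed at 2M/(5t).
    r = 2.0 * M / (5.0 * t)
    # theta = 0 endpoint: s = r and the integrand is real there,
    # hence the trapezoid half-weight on this term
    total = 0.5 * math.exp(r * t) * F(complex(r, 0.0)).real
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = complex(r * theta * cot, r * theta)
        sigma = theta + (theta * cot - 1.0) * cot
        total += (cmath.exp(s * t) * F(s) * complex(1.0, sigma)).real
    return (r / M) * total

# Example: invert F(s) = 1/(s+1), whose inverse is exp(-t)
value = fixed_talbot(lambda s: 1.0 / (s + 1.0), 1.0, M=20)
```

Because the contour pushes the real part of s towards minus infinity, the factor exp(st) decays along the path and even modest M gives high accuracy for transforms whose singularities lie on the negative real axis.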

Results
We tested the five algorithms on the functions listed in Table 2 and Table 3.
Table 2. Test functions.
We ran tests over a range of weights M; however, for brevity we include only the results for M = 200.

For the Stehfest and the Salzer-Gaver algorithms best results were obtained with weights of M = 16 and M = 8 respectively. This is in keeping with Stehfest's observations on the instability of this method as M increases above an optimal level [5].
In multi-precision, the number of precision digits N for Abate-Valko was set equal to N = M [1]; for the Salzer-Gaver and Stehfest schemes, best results were obtained when the number of precision digits was likewise set as a function of M. The errors are reported as
$$L = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(f(t_i) - \hat{f}(t_i)\big)^2}, \qquad L_e = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e^{-t_i}\big(f(t_i) - \hat{f}(t_i)\big)^2},$$
where $f(t)$ is the analytical solution and $\hat{f}(t)$ the numerical approximation. Hence $L$ is the root-mean-square error and $L_e$ is the same as $L$ but weighted by the factor $e^{-t}$ [14]. All computations were done using a 64-bit operating system with an Intel(R) Core(TM) i7-8550U CPU. The algorithms were implemented in Maple 2018, using Maple's Digits environment variable to set the required precision.
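The two error measures follow directly from their definitions. This small Python sketch is ours; the callables f_exact and f_num are hypothetical stand-ins for the analytical solution and a numerical inversion scheme.

```python
import math

def rms_errors(f_exact, f_num, ts):
    # L: root-mean-square error over the sample points ts;
    # Le: the same error weighted by the factor exp(-t) [14].
    n = len(ts)
    sq = [(f_exact(t) - f_num(t)) ** 2 for t in ts]
    L = math.sqrt(sum(sq) / n)
    Le = math.sqrt(sum(math.exp(-t) * e for t, e in zip(ts, sq)) / n)
    return L, Le
```

The weighted measure Le discounts errors at large t, where the factor exp(st) makes accurate reconstruction hardest.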

Standard Double Precision
Table 4 and Table 5 show that when compared with the other four algorithms, the Fourier series method performs with the least accuracy on all the functions tested.It also fails to reconstruct functions 8, 15, 17 and 18, with poor results for functions 4, 5 and 12.
However, for the functions which it successfully reconstructs it does so with a RMS accuracy of between L = 3.6(−5) and 1.2(−2).We believe that this scheme will improve greatly when an acceleration scheme is applied.This is an issue we intend to investigate in our future work.
With the exception of function 7, $J_0(t)$, Logan's algorithm successfully inverts all the functions given in Table 2 and Table 3 with very good accuracy. We found that in SDP best results are obtained by setting $a = 1$ in (50). Table 3 and Table 4 show that for these functions the RMS error varies between 3.6(−8) and 8.4(−12).
Except for function 7, $J_0(t)$, the Abate-Valko scheme successfully inverts all the functions in Table 3 and Table 4. Moreover, it does so with greater accuracy than the Logan scheme. The tables show that the RMS error varied between 6.5(−11) and 6.2(−12).
Table 3 and Table 4 show that the Stehfest algorithm has poor accuracy when inverting functions 1, 7, 10 and 11. For these functions the RMS error varies between 2.01(−2) and 9.2(−3). This poor performance is due to the fact that the Stehfest algorithm has difficulty inverting functions of a cyclic nature [5]. However, it inverts the remaining functions with good accuracy, with an RMS error of between 0 and 2.9(−5).
Table 3 and Table 4 show that the Salzer-Gaver algorithm also has poor accuracy for functions 1, 7, 10 and 11. These are the very same functions that the Stehfest algorithm has problems inverting; again this is due to the difficulties it encounters with cyclic functions. It inverts the remaining functions with less accuracy than the Stehfest scheme, with an RMS error varying between 10(−15) and 10(−5).

Multi Precision
With the exception of function 7, the Logan and Abate-Valko algorithms successfully inverted the remaining functions to a high degree of accuracy; see Table 6 and Table 7. Duffy [14] also remarks that when using the Talbot contour he had difficulties accurately inverting the Bessel function. This may be related to the combination of the singularity on the imaginary axis and the branching nature of the square root function. Abate-Valko [1] stated that they were able to overcome this by increasing the weights, and hence the precision, as a function of t. However, we were unable to replicate their results for this function.

Abate-Valko
Overall, the Abate-Valko scheme showed far greater accuracy than Logan's across all the functions tested. However, Logan's algorithm was still able to produce highly accurate results, with RMS errors varying between 10(−60) and 10(−63). Moreover, Table 8 shows that Logan's scheme was able to perform the inversion of these functions with shorter elapsed times.
The Stehfest and Salzer-Gaver algorithms were able to invert all the functions to a high degree of accuracy. The Salzer-Gaver scheme was in general about twice as accurate as the Stehfest algorithm, which was less accurate than Abate-Valko's scheme. Nevertheless, the Stehfest scheme inverted the functions

Table 5. Standard double precision continued.