Evaluation of Generalized Error Function via Fast-Converging Power Series
1. Introduction
In the fourth volume of the Mécanique Céleste, Laplace (1749-1827), treating the refraction of light on the hypothesis of uniform temperature, makes use of the integral
and, besides stating that
, develops a continued fraction solution to it ([1], pp. 485-493):
(1)
where
and q should not exceed 1/4 or equivalently T should be greater than
to get good results; otherwise, alternative series solutions are suggested. The most commonly used form of this particular integral is known as the error function:
erf(x) = (2/√π) ∫₀^x e^(−t²) dt   (2)
for which various series solutions and approximations are available. Two generalized forms of the error function are defined as ([2], p. 297):
(3)
where
. Euler’s integral for the gamma function ([3], p. 500) is
Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt   (4)
Let
where, like z, p may be complex although treated here as real, and introduce a change of variable such that
so that
. Noting that
and substituting the new variables into (4) give
Γ(1/p) = p ∫₀^∞ e^(−u^p) du   (5)
which is introduced here as a new definition of the gamma function. In accord with Equation (5), the present work goes a step further and considers the normalized integral function
(p/Γ(1/p)) ∫₀^x e^(−t^p) dt   (6)
with real
and
. The normalized integral function is designated by
due to its close affinity with the gamma (factorial) function
as can be seen from (5) and (6). Furthermore, for
Equation (6) becomes identical with the error function given in (2). Here, a fast-converging power series solution valid for the entire range of
is developed for
by making use of a consistent and convergence-improved series expansion introduced in the resolution of Grandi’s paradox [4]. The same approach was previously used for obtaining a series solution to the exponential integral
with very satisfactory results [5]. This particular integral, which is also related to the gamma function, was transformed to a form that required a fast-converging and accurate expansion of
for its evaluation. Equation (6) is likewise elaborated into integrals containing
with
hence is evaluated with the aid of the expansions introduced in Beji [4] for the resolution of Grandi’s paradox. To clarify the underlying methodology, a brief account of Grandi’s paradox and its resolution is given next.
2. Grandi’s Paradox and Its Resolution
Grandi (1671-1742) is known for his book, briefly entitled Quadratura, which essentially lacks originality except for two particular items: the construction of a curve later named the Witch of Agnesi and the statement of a paradox associated with the series expansion of
within
. The paradox has become well-known and named after Grandi [6]. The series expansion of
, either by simple division or Maclaurin series, can be easily established for
and a straightforward manipulation described in [4] (multiply both sides by u and redefine
) gives the corresponding convergent series for
:
1/(1 + u) = 1 − u + u² − u³ + ⋯,  u < 1   (7a)
1/(1 + u) = 1/u − 1/u² + 1/u³ − ⋯,  u > 1   (7b)
The paradox arises when u is set to unity, which renders the left side 1/2 while the right side becomes either 0 or 1, depending on the number of terms kept in the expansion. Beji [4] introduced the consistent truncation and convergence-improvement technique to resolve the paradox. Thus, instead of the equations given in (7), the following series expansions were developed.
1/(1 + u) ≃ (1/2^N) Σ_{m=0}^{N} (−1)^m [Σ_{k=m}^{N} C(N,k)] u^m,  u ≤ 1   (8a)
1/(1 + u) ≃ (1/2^N) Σ_{m=0}^{N} (−1)^m [Σ_{k=m}^{N} C(N,k)] u^{−(m+1)},  u ≥ 1   (8b)
where N is the order of truncation. The new forms (8) give exactly 1/2 when u is set to unity for all orders of expansion. A remarkable feature of the new series expansions is the change of the coefficients depending on the order of truncation. Specifically, the coefficients are re-adjusted for each truncation order to give a highly accurate expansion of the generating function. Note also that the convergence of series (8a) and (8b) is ensured since positive powers for
and negative powers for
are used. As to the case
, both expansions yield exactly 1/2 for any order of truncation N, removing the inconsistency that is the origin of the paradox. Applications of this technique to various Grandi-type functions with real and complex arguments can be found in [4] [7].
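The coefficients of the improved expansions (8) can be generated directly from binomial sums; the following Python sketch (with the coefficient formula inferred from the c(m) loop of the FORTRAN code in Appendix B, function names my own) evaluates the N-th order form (8a) and confirms that it returns exactly 1/2 at u = 1 for every truncation order:

```python
from math import comb

def grandi_coeffs(N):
    """Coefficients c_m of the improved expansion of 1/(1+u), inferred from
    Appendix B: c_m = (-1)^m * sum_{k=m}^{N} C(N,k) / 2^N (note c_0 = 1)."""
    return [(-1) ** m * sum(comb(N, k) for k in range(m, N + 1)) / 2 ** N
            for m in range(N + 1)]

def grandi_approx(u, N):
    """N-th order consistent expansion of 1/(1+u) for 0 <= u <= 1, as in (8a)."""
    return sum(cm * u ** m for m, cm in enumerate(grandi_coeffs(N)))

for N in (3, 4, 8):
    print(N, grandi_approx(1.0, N))  # 0.5 for every N, resolving the paradox
```

Since the even binomial coefficients sum to 2^(N−1), the value at u = 1 is 2^(N−1)/2^N = 1/2 identically, independent of the truncation order.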
Figure 1 compares the third- and fourth-order series expansions of (7) and (8) against the exact expression
. Smooth and accurate approximations obtained from (8), which are virtually inseparable from
, are quite satisfactory when compared with (7).
3. Development of Series Solution
The integral stated in (6) is first split into two parts as
I = I₀ + I₁   (9)
where, by definition,
I₀ = ∫₀^1 e^(−t^p) dt and I₁ = ∫₁^x e^(−t^p) dt
. Integral
, being in the range of 0 to 1, is fairly easily evaluated by employing a straightforward series expansion of
. On the other hand, integral
, which implicitly assumes
, requires some novelty for its evaluation.
3.1. Evaluation of
Using the standard series expansion of the exponential function gives
Figure 1. Function
(black), third- and fourth-order series expansions according to (7) (blue, green) and (8) (red, orange).
I₀ = ∫₀^x e^(−t^p) dt = Σ_{m=0}^{∞} (−1)^m x^(1+pm) / [(1+pm) m!],  x ≤ 1   (10)
which is absolutely convergent as long as the upper limit of integration is equal to or less than one, as is the case here. A quite similar expression corresponding to the error function,
, for an arbitrary upper limit can be found in ([2], p. 297).
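Equation (10) follows from integrating the exponential series term by term; a brief Python check (function name my own) reproduces it and, for p = 2 and x ≤ 1, recovers the error function after multiplying by the normalization 2/√π:

```python
from math import factorial, pi, sqrt, erf

def I0_series(x, p, terms=150):
    """Term-by-term integration of exp(-t^p) over [0, x], x <= 1:
    sum over m of (-1)^m x^(1+p*m) / ((1+p*m) * m!)."""
    return sum((-1) ** m * x ** (1 + p * m) / ((1 + p * m) * factorial(m))
               for m in range(terms))

# For p = 2 the normalized integral is the classical error function.
print(2 / sqrt(pi) * I0_series(1.0, 2))  # 0.84270079..., matching Table 1
print(erf(1.0))
```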
3.2. Evaluation of
In order to evaluate
some preliminary arrangements and splitting of it into separate integrals are necessary. First, introduce a change of variable
hence
and define
for ease of notation so that
(11)
Note that
since
and for the special case of the error function
hence
. Let
,
(12)
Define the first integral within the brackets as
(13)
Since
,
is going to be treated by employing series expansion (8a). The second integral within the brackets is split into two parts as
(14)
where, by definition,
(15)
is elaborated further with two successive changes of variables; first
then
, and finally
is renamed as u so that
(16)
Considering the entire
,
(17)
Integrals
,
, and
are now evaluated in succession by employing the improved series expansions defined in (8) for
.
3.2.1. Evaluation of
In order to evaluate
it is necessary to express
as a power series and to do so we first employ the improved series expansion of
as given in Equation (8a) for the range
. Let
so that
(18)
where, by definition,
(19)
We now establish
as a Maclaurin series truncated at a definite order N:
(20)
As a demonstration, taking
gives
(21)
in which the functions are to be evaluated at
. Use of
results in
(22)
where
and
’s are as defined in Equation (19).
multiplying
arises from successive differentiations of the polynomial function
given in (18). A practical method of carrying out higher derivatives of
evaluated at
is developed in Appendix A. For ease of notation we define a set of new coefficients:
(23)
With the use of above coefficients in Equation (20) a power series for
for the range
is obtained. A simple manipulation, described for Equations (7b) and (8b), gives the corresponding expression for the range
. First, changing the variable by setting
, then dividing both sides by
and finally renaming
as u again, renders the series valid for
. Thus,
(24a)
(24b)
The accuracy of equations given in (24) to represent
for various
values may be seen in Figure 2, where the exact expression
and Equation (24) are depicted for
, and 10 which correspond to
, and 9/10, respectively. Figure 2 is drawn by the use of (24a) for
and (24b) for
and both series are truncated at
.
Figure 2. Function
(red) and Equation (24) for
terms (blue dots) compared for
;
;
;
values.
Equation (24a) is now used for the evaluation of
for the range
while Equation (24b) is employed for the evaluation of
in §3.2.2 for the range
where
and may be set to
.
(25)
Successively integrating by parts yields
(26)
which may be easily generalized as
(27)
We have thus completed the evaluation of
in terms of the coefficients of the power series given in Equation (24).
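The generalization (27) rests on the elementary closed form obtained by repeated integration by parts, ∫₀¹ u^m e^(−u) du = m! (1 − e^(−1) Σ_{j=0}^{m} 1/j!), which is also what the I_10 loop in Appendix B evaluates. A short Python check (function names my own) verifies it against a midpoint-rule quadrature:

```python
from math import factorial, exp

def int_um_exp_closed(m):
    """Closed form from m successive integrations by parts:
    integral of u^m e^(-u) over [0,1] = m! (1 - e^(-1) sum_{j=0}^m 1/j!)."""
    return factorial(m) * (1 - exp(-1) * sum(1 / factorial(j) for j in range(m + 1)))

def int_um_exp_numeric(m, n=20_000):
    """Midpoint-rule quadrature of the same integral, for checking."""
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** m * exp(-(k + 0.5) * h) for k in range(n))

for m in range(5):
    print(m, int_um_exp_closed(m), int_um_exp_numeric(m))
```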
3.2.2. Evaluation of
For the evaluation of
we make use of Equation (24b) as
in this case.
(28)
The first integral
is equal to
as can bee seen from Equation (13) and cannot be evaluated directly; therefore, we keep it as
. On the other hand, the remaining integrals are worked out by partial integration so that
(29a)
(29b)
(29c)
Continuing in this manner and adding all the results to establish
as expressed in (28) give
(30)
The general form of (30) may be expressed as
(31)
3.2.3. Evaluation of
To complete the evaluation of
the only remaining integral is
as defined by (16). Considering only the integral itself and noting that
, the series expansion (24a) valid for
can be employed.
(32)
Proceeding very similarly to the evaluation of
, we get
(33)
Generalizing the above expression and multiplying by
give
as
(34)
Finally,
can be obtained by the use of Equation (17)
,
(35)
which can be re-arranged for
as
(36)
Computing
from (10) and
from (36) enables us to compute the sought integral
as stated in (9).
4. Numerical Results
Formulations developed for computing
are programmed in FORTRAN; the code is included in Appendix B. In this section, various comparisons with available numerical values are presented. First, the tables given in ([2], pp. 310-311) for the error function
are used for comparisons with the computed results obtained for
. Then, quite similar comparisons, again using the tables of Abramowitz and Stegun ([2], p. 320) are presented for
For different truncation orders N and a range of p values, Table 3 lists computed
values, all of which must be exactly unity. Two graphical representations of the normalized integral function
for
and
are also included.
4.1. Comparisons for
or
The well-known error function corresponds to
case and hence is considered first. Table 1 compares the tabulated values of ([2], pp. 310-311), labeled “exact”, with computations from the present formulation for the truncation order
. Integral value at infinity can be obtained either by simply setting the upper limit contribution of
to zero or taking a relatively large x value such as
; but even
is sufficient to obtain a perfectly acceptable result for
. For the present case of
this value corresponds to
and the values in Table 1 are normalized by multiplying with
. Finally, the relative error percentages are computed from 100∙(1 – Approximate/Exact).
Table 1. Comparisons for
.
x | Abramowitz and Stegun [2] (Exact) | Present work (N = 4) | Relative error (%)
0.00 | 0.00000000 | 0.00000000 | −0.0000
0.25 | 0.27632639 | 0.27632639 | −0.0000
0.50 | 0.52049988 | 0.52049988 | −0.0000
0.75 | 0.71115563 | 0.71115563 | −0.0000
1.00 | 0.84270079 | 0.84270079 | −0.0000
1.25 | 0.92290013 | 0.92227506 | −0.0677
1.50 | 0.96610514 | 0.96578097 | −0.0336
1.75 | 0.98667167 | 0.98657212 | −0.0101
2.00 | 0.99532227 | 0.99532944 | −0.0072
∞ | 1.00000000 | 1.00006365 | −0.0064
For
the computed results agree perfectly well with the tabulated values because both are obtained from the series expansion in (10) with upper limit of integration set to the given x value. In computations the number of terms was restricted to 150 due to the limited capability of the machine to compute large factorials. Tables of Abramowitz and Stegun [2] terminate at
; therefore, the comparisons accord with this upper limit.
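The 150-term limitation noted above comes from evaluating each factorial explicitly; as an alternative sketch (my own reformulation, not part of the Appendix B code), each term of the series in (10) can be obtained from the previous one by a simple ratio, so no large factorial is ever formed:

```python
from math import erf, pi, sqrt

def I0_recurrence(x, p, tol=1e-16):
    """Sum of (-1)^m x^(1+pm)/((1+pm) m!) via the term ratio
    t_m = t_{m-1} * (-x^p) * (1 + p(m-1)) / ((1 + p m) m),
    avoiding explicit factorials and their overflow."""
    term, total, m = x, x, 0
    while abs(term) > tol * abs(total) + tol:
        m += 1
        term *= -x ** p * (1 + p * (m - 1)) / ((1 + p * m) * m)
        total += term
    return total

print(2 / sqrt(pi) * I0_recurrence(1.0, 2))  # 0.84270079..., as in Table 1
```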
For
the present formulation uses (36) with
as no improvement in accuracy is observed for higher truncation orders
. However, as Table 3 in §4 reveals, higher truncation orders give improved results for higher p values, especially for
. The present comparisons have the same tendency concerning x; namely, increasing the upper limit of integration decreases the relative error percentage appreciably. For instance, the error is approximately 0.07% for
; but it reduces tenfold to −0.007% for
. Despite these moderately satisfactory results, even for a truncation order of as small as
, the lack of improvement in accuracy for higher truncation orders indicates an inherently difficult problem, which is also observed in numerical integrations performed with Simpson’s rule for extra checking. High precision computations in the range
did not yield results as good as the series solutions despite taking an extremely small step of
.
4.2. Comparisons for
Another group of tables given in ([2], p. 320) is for the integral
, which corresponds to
in the present notation. Table 2 shows the comparisons up to
, which is the highest value included in [2]. Error percentages show similar characteristics with those of
case; yet, overall the errors are smaller. The listed highest error is 0.056% for
and it reduces to 0.004% for
. Again, values for
agree perfectly well because they are computed from the series expansion in complete agreement with the one used in [2].
Table 2. Comparisons for normalized integral
.
x | Abramowitz and Stegun [2] (Exact) | Present work (N = 4) | Relative error (%)
0.00 | 0.0000000 | 0.0000000 | 0.0000
0.30 | 0.3337037 | 0.3337037 | 0.0000
0.70 | 0.7227669 | 0.7227669 | 0.0000
1.00 | 0.9042886 | 0.9042886 | 0.0000
1.20 | 0.9641064 | 0.9635709 | 0.0555
1.50 | 0.9951149 | 0.9949880 | 0.0128
1.70 | 0.9991499 | 0.9990934 | 0.0336
2.10 | 0.9999925 | 0.9999496 | 0.0057
2.30 | 0.9999997 | 0.9999569 | 0.0043
∞ | 1.0000000 | 0.9999573 | 0.0043
4.3. Computations for Various p and x Values
For different truncation orders
, Table 3 presents computational results for the normalized integral
, which is exactly equal to unity for any p. Deviation from the unity corresponds to the absolute error in computational result. It is clear from Table 3 that, regardless of the truncation number,
Table 3. Normalized integral
for various p values and truncation orders N.
p | N = 4 | N = 5 | N = 6 | N = 7 | N = 8
1.20 | 1.000544 | 0.999778 | 1.001037 | 0.998898 | 1.001525
1.50 | 1.000405 | 1.000205 | 1.001224 | 0.998582 | 1.002317
2.00 | 1.000064 | 1.000844 | 1.000719 | 0.999117 | 1.001955
3.00 | 0.999957 | 1.001095 | 1.000106 | 0.999937 | 1.000867
4.25 | 1.000047 | 1.000912 | 0.999939 | 1.000216 | 1.000303
5.75 | 1.000122 | 1.000685 | 0.999931 | 1.000251 | 1.000090
7.80 | 1.000161 | 1.000485 | 0.999961 | 1.000212 | 1.000013
10.00 | 1.000168 | 1.000358 | 0.999985 | 1.000167 | 0.999995
computational accuracy increases with increasing p values. In particular,
case with four/five decimal places accuracy is remarkably good for
. On the other hand, lower truncation orders
appear to perform better for relatively smaller
values. Nevertheless, there appears no definite trend of truncation orders. Figure 3 depicts
normalized integral function within the range
for five different p values. Computations for
may be performed by setting p to a slightly higher value such as
; otherwise,
becomes exactly zero and the scheme fails. As expected, all the curves tend to unity with increasing x, already for x values as low as 5.
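The p → 1 limit mentioned above also offers a simple analytical sanity check (my own observation, not from the tables): since Γ(1) = 1, for p = 1 the normalized integral reduces exactly to 1 − e^(−x). The direct series for the integral can be compared with this closed form:

```python
from math import exp, factorial

def I0_series(x, p, terms=150):
    """Direct series for the integral of exp(-t^p) over [0, x]:
    sum over m of (-1)^m x^(1+p*m) / ((1+p*m) * m!)."""
    return sum((-1) ** m * x ** (1 + p * m) / ((1 + p * m) * factorial(m))
               for m in range(terms))

for x in (1.0, 3.0, 5.0):
    print(x, I0_series(x, 1), 1 - exp(-x))  # the two columns agree
```

For p = 1 the series collapses to Σ (−1)^m x^(m+1)/(m+1)! = 1 − e^(−x), confirming the rapid approach to unity seen in Figure 3.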
A perspective view of
function for x and p pairs is presented in Figure 4. The most notable characteristic of both Figure 3 and Figure 4 is the sharp increase observed in
in the neighborhood of
as p values increase.
Figure 3. Normalized integral function
for a number of p values within
.
Figure 4. A perspective view of normalized integral function
for
and
values.
5. Concluding Remarks
A generalized form of the error function, defined as
, is
evaluated by means of the fast-converging novel series expansions developed in resolving the so-called Grandi’s paradox. Computations indicate that for a truncation order as low as
, the technique introduced here gives quite acceptable results when compared with available tables; however, the lack of substantial improvement in accuracy for higher truncation orders
poses a drawback. On the other hand, visibly better results for higher p and N values may indicate a potential for further improvement with even higher truncation orders
, which are not pursued here since the expansions become quite cumbersome as can be seen in the FORTRAN code for
. A final criticism may be raised against the idea of pursuing such analytical solutions at the present advanced age of computers. It is perfectly possible to do numerical integrations with very good accuracy by various methods; but restricting our interests only to available tools would unquestionably limit the mathematical progress which may otherwise produce fruitful extensions to new areas.
Appendix A: Derivatives of
for
A practical methodology is developed here for obtaining higher derivatives of
at
with
. The approach is essentially based on a notational procedure and is considerably simpler than the direct way. Let us begin with
, recall that in general
as defined in §3.2.1, and introduce a differentiation scheme as follows
(37)
where
indicates the first derivative of
, which is no longer considered as a constant but the value of
. For the same reason, the derivative of
is always evaluated as
which is equal to
Equation (37) is not complete, however; we must have the multiplier
associated with the first derivative
so that the correct expression becomes
. Nevertheless, presently we shall ignore these multipliers of
,
, etc. and consider them in a systematic manner afterwards. Thus, proceeding from (37),
(38)
The second power appearing in
indicates a double differentiation hence requires the multiplier
while the first power in
corresponds to the multiplier
only. The complete expression is then
as given in Equation (22). Differentiation of (38) yields
(39)
which in turn corresponds to
(40)
Note that the total powers of the coefficients determine the number of derivatives; for example,
has a power of 3 hence is multiplied by
. Likewise,
has total
powers and is multiplied by
. It is also relatively straightforward to establish the corresponding functional form from Equation (40) as
(41)
The demonstration is concluded by giving the resulting expressions for the fourth derivative.
(42)
which in turn corresponds to
(43)
and in terms of function g reads
(44)
Higher-order derivatives should be obtained fairly easily by following the same methodology.
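The derivative bookkeeping above, and the explicit d(2)-d(8) expressions in the mac subroutine of Appendix B, can be cross-checked mechanically. Assuming (as the mac subroutine suggests) that the expanded function is g(u)^β with g(0) = 1, its Maclaurin coefficients also satisfy the classical J.C.P. Miller recurrence; the Python sketch below (my own check, with illustrative coefficient values) confirms agreement with the hand-derived forms of d(2) and d(3):

```python
def miller_power_series(a, beta, N):
    """Maclaurin coefficients h_0..h_N of g(u)**beta for g(u) = sum_k a[k] u^k
    with a[0] = 1, via the J.C.P. Miller recurrence:
        n * h_n = sum_{k=1..n} ((beta + 1) k - n) a[k] h[n-k]."""
    h = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        h[n] = sum(((beta + 1) * k - n) * a[k] * h[n - k]
                   for k in range(1, n + 1)) / n
    return h

# Cross-check against the hand-derived coefficients of Appendix B for m = 2, 3;
# the c_k are the derivatives g^(k)(0) = k! a_k.
beta = 0.5                               # beta = 1 - 1/p for p = 2
a = [1.0, -0.9375, 0.6875, -0.3125]      # illustrative g-coefficients
h = miller_power_series(a, beta, 3)
c1, c2, c3 = a[1], 2 * a[2], 6 * a[3]
d2 = beta * ((beta - 1) * c1 ** 2 + c2) / 2
d3 = beta * ((beta - 1) * (beta - 2) * c1 ** 3 + 3 * (beta - 1) * c1 * c2 + c3) / 6
print(h[2], d2)  # agree
print(h[3], d3)  # agree
```

The recurrence offers a compact programmatic alternative to carrying the expansions to even higher truncation orders by hand.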
Appendix B: FORTRAN Code for Computation of
use msimsl
parameter(n=4)
double precision c(0:50),d(0:50)
double precision bt,cfn,ep,em,one,p,pi,x
double precision xmlt,sgn,sum,sumlt,twopwn
double precision XI,XI1,XI2,XI0,XI10,XI1a,XI1b,XI1c
c
c Pi & Exponential values
one=1; pi=4*datan(one); ep=dexp(one); em=dexp(-one)
c
c Power p and beta
p=2.0; bt=1-1/p
c
c Normalization coefficient p/Gamma(1/p)
cfn=p/dgamma(1/p)
c
c Upper limit of integration
x=100
c
c Regularized coefficients
c(0)=1; twopwn=2**dfloat(n)
c
do m=1,n; sum=0
c
do k=m,n
sum=sum+dbinom(n,k)
enddo
c
sgn=1-2*mod(m,2)
c(m)=sgn*sum/twopwn
enddo
c
c Coefficients for derivatives
do m=0,n; c(m)=dfac(m)*c(m); enddo
c
c Call Maclaurin coefficients
call mac(n,bt,c,d)
c
c Integral Parts
c
c Computation of I_0
c
if(x.le.one) then; XI0=x
do m=1,150; sgn=1-2*mod(m,2)
XI0=XI0+sgn*(x**(1+p*dfloat(m)))/(1+p*dfloat(m))/dfac(m)
enddo; goto 50
else; XI0=1.0
do m=1,150; sgn=1-2*mod(m,2)
XI0=XI0+sgn/(1+p*dfloat(m))/dfac(m)
enddo; endif
c
c Computation of I_10
XI10=0.0
do m=0,n; XI10=XI10+dfac(m)*d(m)
do k=m,n; XI10=XI10-em*(dfac(k)/dfac(m))*d(k)
enddo; enddo
c
c Computation of I_11
c
c I1a First part of I1
XI1a=0.0; do m=1,n; sumlt=0.0
do k=1,m; sgn=1-2*mod(k+1,2); xmlt=1.0
do l=1,k; xmlt=xmlt/(bt+dfloat(m)-dfloat(l))
enddo; sumlt=sumlt+sgn*xmlt; enddo
XI1a=XI1a+sumlt*d(m); enddo
XI1a=em*XI1a
c
c I1b Second part of I1
XI1b=0.0; do m=1,n; sumlt=0.0
do k=1,m; sgn=1-2*mod(k+1,2); xmlt=1.0
do l=1,k; xmlt=xmlt/(bt+dfloat(m)-dfloat(l)); enddo
sumlt=sumlt+sgn*xmlt*(x**(-p*(1+dfloat(m)-dfloat(k))))
enddo; XI1b=XI1b+sumlt*d(m); enddo
XI1b=x*dexp(-x**p)*XI1b
c
c I1c Coefficient of I1
XI1c=d(0); do m=1,n; sgn=1-2*mod(m,2); xmlt=1.0
do l=1,m; xmlt=xmlt/(bt+dfloat(m)-dfloat(l)); enddo
XI1c=XI1c+sgn*xmlt*d(m); enddo
c
c Computation of I_2
XI2=0.0; do m=0,n; XI2=XI2+(x**(-p*(dfloat(m)+1.)))*dfac(m)*d(m)
do k=m,n
XI2=XI2-em*(x**(-p*(dfloat(k)+1.)))*(dfac(k)/dfac(m))*d(k)
enddo; enddo
XI2=x*dexp(1-x**p)*XI2
c
c Final Value
XI1=XI10+XI1a-XI1b-XI2
XI1=XI1/(ep-XI1c)/p
XI=XI0+XI1; goto 60
c
50 XI=XI0
c
60 write(*,40)p,x,cfn*XI
c
40 format(8(2x,f14.6))
c
stop; end
c
subroutine mac(n,bt,c,d)
use msimsl
double precision bt,c(0:50),d(0:50)
c
d(0)=c(0)
d(1)=c(1)
c
d(2)=(bt-1)*c(1)*c(1)+c(2)
c
d(3)=(bt-1)*(bt-2)*c(1)*c(1)*c(1)+3*(bt-1)*c(1)*c(2)+c(3)
c
d(4)=(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(1)*c(1)
d(4)=d(4)+6*(bt-1)*(bt-2)*c(1)*c(1)*c(2)
d(4)=d(4)+3*(bt-1)*c(2)*c(2)+4*(bt-1)*c(1)*c(3)+c(4)
c
d(5)=(bt-1)*(bt-2)*(bt-3)*(bt-4)*c(1)*c(1)*c(1)*c(1)*c(1)
d(5)=d(5)+10*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(1)*c(2)
d(5)=d(5)+10*(bt-1)*(bt-2)*c(1)*c(1)*c(3)
d(5)=d(5)+15*(bt-1)*(bt-2)*c(1)*c(2)*c(2)
d(5)=d(5)+5*(bt-1)*c(1)*c(4)+10*(bt-1)*c(2)*c(3)+c(5)
c
d(6)=(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)
d(6)=d(6)*c(1)*c(1)*c(1)*c(1)*c(1)*c(1)
d(6)=d(6)+15*(bt-1)*(bt-2)*(bt-3)*(bt-4)*c(1)*c(1)*c(1)*c(1)*c(2)
d(6)=d(6)+45*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(2)*c(2)
d(6)=d(6)+20*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(1)*c(3)
d(6)=d(6)+15*(bt-1)*(bt-2)*c(2)*c(2)*c(2)
d(6)=d(6)+60*(bt-1)*(bt-2)*c(1)*c(2)*c(3)
d(6)=d(6)+15*(bt-1)*(bt-2)*c(1)*c(1)*c(4)
d(6)=d(6)+10*(bt-1)*c(3)*c(3)+15*(bt-1)*c(2)*c(4)
d(6)=d(6)+6*(bt-1)*c(1)*c(5)+c(6)
c
d(7)=(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)*(bt-6)
d(7)=d(7)*c(1)*c(1)*c(1)*c(1)*c(1)*c(1)*c(1)
d(7)=d(7)+21*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)*(c(1)**5)*c(2)
d(7)=d(7)+35*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(c(1)**4)*c(3)
d(7)=d(7)+105*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(c(1)**3)*c(2)**2
d(7)=d(7)+35*(bt-1)*(bt-2)*(bt-3)*(c(1)**3)*c(4)
d(7)=d(7)+210*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(2)*c(3)
d(7)=d(7)+21*(bt-1)*(bt-2)*c(1)*c(1)*c(5)
d(7)=d(7)+105*(bt-1)*(bt-2)*(bt-3)*c(1)*c(2)**3
d(7)=d(7)+105*(bt-1)*(bt-2)*c(1)*c(2)*c(4)
d(7)=d(7)+70*(bt-1)*(bt-2)*c(1)*c(3)*c(3)+7*(bt-1)*c(1)*c(6)
d(7)=d(7)+105*(bt-1)*(bt-2)*c(2)*c(2)*c(3)+21*(bt-1)*c(2)*c(5)
d(7)=d(7)+35*(bt-1)*c(3)*c(4)+c(7)
c
d(8)=(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)*(bt-6)*(bt-7)
d(8)=d(8)*c(1)*c(1)*c(1)*c(1)*c(1)*c(1)*c(1)*c(1)
&+28*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)*(bt-6)*(c(1)**6)*c(2)
d(8)=d(8)+56*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)*(c(1)**5)*c(3)
d(8)=d(8)+210*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(bt-5)*(c(1)**4)*c(2)**2
d(8)=d(8)+70*(bt-1)*(bt-2)*(bt-3)*(bt-4)*(c(1)**4)*c(4)
d(8)=d(8)+560*(bt-1)*(bt-2)*(bt-3)*(bt-4)*c(1)*c(1)*c(1)*c(2)*c(3)
d(8)=d(8)+56*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(1)*c(5)
d(8)=d(8)+420*(bt-1)*(bt-2)*(bt-3)*(bt-4)*c(1)*c(1)*c(2)**3
d(8)=d(8)+420*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(2)*c(4)
d(8)=d(8)+280*(bt-1)*(bt-2)*(bt-3)*c(1)*c(1)*c(3)*c(3)
d(8)=d(8)+28*(bt-1)*(bt-2)*c(1)*c(1)*c(6)
d(8)=d(8)+840*(bt-1)*(bt-2)*(bt-3)*c(1)*c(2)*c(2)*c(3)
d(8)=d(8)+168*(bt-1)*(bt-2)*c(1)*c(2)*c(5)
d(8)=d(8)+280*(bt-1)*(bt-2)*c(1)*c(3)*c(4)+8*(bt-1)*c(1)*c(7)
&+105*(bt-1)*(bt-2)*(bt-3)*c(2)**4+210*(bt-1)*(bt-2)*c(2)*c(2)*c(4)
d(8)=d(8)+280*(bt-1)*(bt-2)*c(2)*c(3)**2+28*(bt-1)*c(2)*c(6)
&+56*(bt-1)*c(3)*c(5)+35*(bt-1)*c(4)*c(4)+c(8)
c
do m=1,n; d(m)=bt*d(m)/dfac(m); enddo
c
return; end