Analytical Evaluation of Non-Elementary Integrals Involving Some Exponential, Hyperbolic and Trigonometric Elementary Functions and Derivation of New Probability Measures Generalizing the Gamma-Type and Gaussian-Type Distributions

Abstract

The non-elementary integrals $\int x^\alpha e^{\eta x^\beta}dx$, $\int x^\alpha\cosh(\eta x^\beta)dx$, $\int x^\alpha\sinh(\eta x^\beta)dx$, $\int x^\alpha\cos(\eta x^\beta)dx$ and $\int x^\alpha\sin(\eta x^\beta)dx$, involving the elementary exponential, hyperbolic and trigonometric functions, where $\alpha$, $\eta$ and $\beta$ are real or complex constants, are evaluated in terms of the confluent hypergeometric function ${}_1F_1$ and the hypergeometric function ${}_1F_2$. The hyperbolic and Euler identities are used to derive some identities involving exponential, hyperbolic, trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$. Having evaluated these non-elementary integrals, some new probability measures generalizing the gamma-type and Gaussian-type distributions are also obtained. The generalized probability distributions obtained here may, for example, allow better statistical tests than those already known (e.g. chi-square ($\chi^2$) tests and other tests constructed from the central limit theorem (CLT)), while avoiding computational approximations (or methods), which are in general expensive and subject to numerical errors.

Share and Cite:

Nijimbere, V. (2020) Analytical Evaluation of Non-Elementary Integrals Involving Some Exponential, Hyperbolic and Trigonometric Elementary Functions and Derivation of New Probability Measures Generalizing the Gamma-Type and Gaussian-Type Distributions. Advances in Pure Mathematics, 10, 371-392. doi: 10.4236/apm.2020.107023.

1. Introduction

The confluent hypergeometric function ${}_1F_1$ and the hypergeometric function ${}_1F_2$ are used throughout this paper. They are defined here for reference; see, for example, [1].

Definition 1. The confluent hypergeometric function, denoted ${}_1F_1$, is the special function given by the series

$${}_1F_1(a;b;x) = \sum_{n=0}^{\infty}\frac{(a)_n}{(b)_n}\frac{x^n}{n!}, \qquad (1)$$

where a and b are arbitrary constants,

$$(\vartheta)_n = \vartheta(\vartheta+1)\cdots(\vartheta+n-1) = \prod_{m=1}^{n}(\vartheta+m-1) = \Gamma(\vartheta+n)/\Gamma(\vartheta)$$

(Pochhammer’s notation) for any complex $\vartheta$, with $(\vartheta)_0 = 1$, where $\Gamma$ is the standard gamma function.

Definition 2. The hypergeometric function ${}_1F_2$ is the special function given by the series

$${}_1F_2(a;b,c;x) = \sum_{n=0}^{\infty}\frac{(a)_n}{(b)_n(c)_n}\frac{x^n}{n!}, \qquad (2)$$

where a, b and c are arbitrary constants and $(\vartheta)_n = \Gamma(\vartheta+n)/\Gamma(\vartheta)$ (see Definition 1).
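For readers who wish to experiment with these series numerically, Definitions 1 and 2 can be sketched directly as truncated sums in pure Python. This is only an illustrative sketch; `poch`, `hyp1f1` and `hyp1f2` are hypothetical helper names, not library routines:

```python
import math

def poch(t, n):
    # Pochhammer symbol (t)_n = t*(t+1)*...*(t+n-1), with (t)_0 = 1
    r = 1.0
    for m in range(n):
        r *= t + m
    return r

def hyp1f1(a, b, x, terms=60):
    # truncated series (1) for 1F1(a; b; x)
    return sum(poch(a, n) / poch(b, n) * x**n / math.factorial(n)
               for n in range(terms))

def hyp1f2(a, b, c, x, terms=60):
    # truncated series (2) for 1F2(a; b, c; x)
    return sum(poch(a, n) / (poch(b, n) * poch(c, n)) * x**n / math.factorial(n)
               for n in range(terms))

# (theta)_n = Gamma(theta + n) / Gamma(theta)
assert abs(poch(2.5, 4) - math.gamma(6.5) / math.gamma(2.5)) < 1e-9
# sanity check: 1F1(a; a; x) = e^x, since the Pochhammer factors cancel
assert abs(hyp1f1(3.0, 3.0, 1.2) - math.exp(1.2)) < 1e-9
```

Truncation at 60 terms is more than enough for the moderate arguments used in the examples below, since the terms decay factorially.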

Definition 3. An elementary function is a function of one variable constructed from that variable and constants by performing a finite number of repeated algebraic operations involving exponentials and logarithms. An indefinite integral which can be expressed in terms of elementary functions is an elementary integral; if, on the other hand, it cannot be evaluated in terms of elementary functions, then it is non-elementary [2] [3].

One of the goals of this work is to show how non-elementary integrals of one of the types

$$\int x^\alpha e^{\eta x^\beta}dx,\quad \int x^\alpha\cosh(\eta x^\beta)\,dx,\quad \int x^\alpha\sinh(\eta x^\beta)\,dx, \qquad (3)$$

$$\int x^\alpha\cos(\eta x^\beta)\,dx \quad\text{and}\quad \int x^\alpha\sin(\eta x^\beta)\,dx, \qquad (4)$$

where $\alpha$, $\eta$ and $\beta$ are real or complex constants, can be evaluated in terms of the special functions ${}_1F_1$ and ${}_1F_2$.

It is worth clarifying that the integrals in (3) and (4) may be elementary or non-elementary depending on the values of the constants $\alpha$ and $\beta$. If, for instance, $\alpha = \beta - 1$, then the integral

$$\int x^\alpha e^{\eta x^\beta}dx = \frac{1}{\eta\beta}\int \eta\beta\, x^{\beta-1}e^{\eta x^\beta}dx = \frac{e^{\eta x^\beta}}{\eta\beta} + C \qquad (5)$$

is elementary because it is expressed in terms of the elementary function $e^{\eta x^\beta}$. In that case, the other integrals in (3) and (4) are also elementary, since they can be expressed as linear combinations of integrals such as that in (5) using the hyperbolic identities

$$\cosh(\eta x^\beta) = \frac{e^{\eta x^\beta}+e^{-\eta x^\beta}}{2}, \quad \sinh(\eta x^\beta) = \frac{e^{\eta x^\beta}-e^{-\eta x^\beta}}{2}$$

and the Euler’s identities

$$\cos(\eta x^\beta) = \frac{e^{i\eta x^\beta}+e^{-i\eta x^\beta}}{2}, \quad \sin(\eta x^\beta) = \frac{e^{i\eta x^\beta}-e^{-i\eta x^\beta}}{2i}.$$

Using Liouville's 1835 theorem, it can readily be shown that if $\alpha$ is not an integer and $\alpha \neq \beta-1$, then the integrals in (3) and (4) are non-elementary [2] [3].
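The elementary case (5) is easy to confirm numerically. The sketch below, with arbitrarily chosen constants and a hand-rolled composite Simpson rule (`simpson` is not a library routine), compares a quadrature of the integrand against the antiderivative in (5):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

eta, beta = 1.3, 2.0
alpha = beta - 1                                        # elementary case alpha = beta - 1
F = lambda x: math.exp(eta * x**beta) / (eta * beta)    # antiderivative in (5)

a, b = 0.2, 1.1
numeric = simpson(lambda x: x**alpha * math.exp(eta * x**beta), a, b)
assert abs(numeric - (F(b) - F(a))) < 1e-8
```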

These integrals generalize the non-elementary integrals evaluated by Nijimbere [4] [5], and have not been evaluated before, in the sense that, if $\alpha < 0$, the integrals in (3) and (4) become, respectively, the (indefinite) exponential, hyperbolic sine and hyperbolic cosine integrals, and the sine and cosine integrals, which were evaluated in terms of ${}_1F_2$, ${}_2F_2$ and ${}_2F_3$ in Nijimbere [4] (Theorem 1, Theorem 4, Theorem 5 and Theorem 8) and in Nijimbere [5] (Theorem 1, Theorem 2, Theorem 3, Theorem 4 and Theorem 5),

$$\int\frac{e^{\eta x^\beta}}{x^\alpha}dx,\quad \int\frac{\cosh(\eta x^\beta)}{x^\alpha}dx,\quad \int\frac{\sinh(\eta x^\beta)}{x^\alpha}dx,$$

$$\int\frac{\cos(\eta x^\beta)}{x^\alpha}dx \quad\text{and}\quad \int\frac{\sin(\eta x^\beta)}{x^\alpha}dx.$$

If, on the other hand, α = 0 , the non-elementary integrals in (3) and (4) reduce to the non-elementary integrals evaluated in Nijimbere [6] (Proposition 1, Proposition 2 and Proposition 3),

$$\int e^{\eta x^\beta}dx,\quad \int\cosh(\eta x^\beta)\,dx,\quad \int\sinh(\eta x^\beta)\,dx,$$

$$\int\cos(\eta x^\beta)\,dx \quad\text{and}\quad \int\sin(\eta x^\beta)\,dx.$$

Once the indefinite non-elementary integrals in (3) and (4) are evaluated, then their corresponding definite integrals

$$\int_{B_1}^{B_2}x^\alpha e^{\eta x^\beta}dx,\quad \int_{B_1}^{B_2}x^\alpha\cosh(\eta x^\beta)\,dx,\quad \int_{B_1}^{B_2}x^\alpha\sinh(\eta x^\beta)\,dx,$$

$$\int_{B_1}^{B_2}x^\alpha\cos(\eta x^\beta)\,dx \quad\text{and}\quad \int_{B_1}^{B_2}x^\alpha\sin(\eta x^\beta)\,dx,$$

where $B_1$ and $B_2$ are arbitrary constants or functions, can be evaluated.

For instance, the incomplete gamma function

$$\gamma(z_2, z_1) = \int_0^{z_1}x^{z_2-1}e^{-x}dx,$$

which is a very useful special function in both applied analysis and the applied sciences, is a particular case of the definite non-elementary integral $\int_{B_1}^{B_2}x^\alpha e^{\eta x^\beta}dx$ in which the limits of integration are $B_1 = 0$ and $B_2 = z_1$, $\{z_1, z_2\}\subset\mathbb{C}$, $z_2 = \alpha+1$ has a positive real part ($\mathrm{Re}(z_2) > 0$), $\eta = -1$ and $\beta = 1$. So, the gamma function,

$$\Gamma(z_2) = \lim_{|z_1|\to\infty}\gamma(z_2, z_1), \quad |\arg z_1| < \pi/2,$$

is, as well, simply a limiting particular case of the definite non-elementary integral $\int_{B_1}^{B_2}x^\alpha e^{\eta x^\beta}dx$, in which, for example, the real part of $\alpha$ can be negative ($\mathrm{Re}(\alpha) < 0$), $\beta$ can be negative as well, and $B_1$ and $B_2$ can be arbitrary functions or constants. Thus, it is quite important to evaluate the non-elementary integrals in (3) and (4).
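The incomplete gamma function itself admits a closed form of exactly the type derived below: $\gamma(s,x) = (x^s/s)\,e^{-x}\,{}_1F_1(1;s+1;x)$ (this is Proposition 1 with $\eta=-1$, $\beta=1$, $\alpha=s-1$, and is a standard identity, DLMF 8.5.1). A numerical sketch, with a hand-rolled series and Simpson rule:

```python
import math

def hyp1f1_1(b, x, terms=80):
    # 1F1(1; b; x) = sum_n x^n / (b)_n, since (1)_n = n!
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= x / (b + n)
    return s

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

s_, x_ = 2.7, 1.9
# lower incomplete gamma gamma(s, x) via the confluent hypergeometric function
closed = x_**s_ / s_ * math.exp(-x_) * hyp1f1_1(s_ + 1.0, x_)
numeric = simpson(lambda t: t**(s_ - 1.0) * math.exp(-t), 1e-12, x_)
assert abs(closed - numeric) < 1e-6
```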

It is well known that numerical integration (or approximation) methods are expensive, and their main drawback is that they are associated with computational errors which become very large as the integration limits become large. The analytical method used in this paper is therefore valuable in order to avoid such computational methods. For example, Dawson's integral

$$\mathrm{Daw}(z) = e^{-z^2}\int_0^z e^{x^2}dx$$

and other related functions in mathematical physics, such as the Faddeeva, Fried–Conte, Jackson, Fresnel and Gordeyev integrals, were analytically evaluated by Nijimbere [7] in terms of the confluent hypergeometric function ${}_1F_1$ using the same analytical method as in this study rather than numerical approximations; see for example [8] [9].

Another goal of this work is to obtain some identities (or formulas) involving exponential, hyperbolic, trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$ using the Euler and hyperbolic identities. Other interesting identities involving hypergeometric functions may be found, for example, in [10] [11] [12]. Non-elementary integrals with integrands involving generalized hypergeometric functions, and identities of generalized hypergeometric series, have also been examined in Nijimbere [13].

Using the fact that $g(x) = e^{-\eta x^\beta}$, $x\in\mathbb{R}$, $\eta\in\mathbb{R}_+$, is in the $L^p$-space, $p>0$, for some $\beta$, a finite measure, $\mu((-\infty,x]) < \infty$, can be defined for all $x\in\mathbb{R}$. Moreover, if $X = h(x)$, $x\in\mathbb{R}$, is some random variable, where $h:\mathbb{R}\to\mathbb{R}$ is some well-defined function (e.g. $h(x)=x$), then it is possible to define probability measures in terms of the Lebesgue measure $dx$ as $\mu(dx) = A\,g(x)\,dx$, $x\in\Omega$, $\Omega\subseteq\mathbb{R}$, satisfying the integrability condition $\int_\Omega |X|^\alpha\,\mu(dx) < \infty$, $\alpha\geq 0$, $\alpha > -\beta-1$, with $A$ a (normalization) constant. In that case, new probability measures (or distributions) that generalize the gamma-type and Gaussian-type distributions may be constructed, and the corresponding distribution functions and moments can be evaluated as well.

Definition 4. The generalized gamma probability distribution is a three-parameter probability distribution, with parameters $\phi>0$, $\kappa>0$ and $\beta>0$; a random variable X has a generalized gamma distribution if it has the probability density function (p.d.f.)

$$f_X(x;\phi,\kappa,\beta) = \frac{\beta/\kappa^\phi}{\Gamma(\phi/\beta)}\,x^{\phi-1}e^{-(x/\kappa)^\beta}, \quad x\in\mathbb{R}_+,\ \phi>0,\ \kappa>0,\ \beta>0. \qquad (6)$$

Definition 5. The generalized normal (Gaussian) probability distribution is a four-parameter probability distribution, with parameters $\eta>0$, $\theta\in\mathbb{R}$, $\beta>0$ and $\sigma>0$; a random variable X has a generalized normal distribution if it has the probability density function (p.d.f.)

$$f_X(x;\eta,\beta,\theta,\sigma) = \frac{\beta\,\eta^{1/\beta}}{2\sigma\,\Gamma(1/\beta)}\exp\left(-\eta\left(\frac{x-\theta}{\sigma}\right)^\beta\right), \quad x\in\mathbb{R},\ \eta>0,\ \theta\in\mathbb{R},\ \beta>0,\ \sigma>0. \qquad (7)$$

Recent studies about generalized gamma and Gaussian probability distributions, or involving these probability distributions, may, for example, be found in [14] [15]. In this study these distributions are generalized further. For instance, analytical properties regarding moments and characteristic functions of the generalized Gaussian distribution, and their possible applications, are very well documented in Dytso et al. [15]. However, there is no direct formula for the nth moments of the generalized Gaussian distribution. Here, such a formula is obtained; in particular, it is shown that the nth moments of the generalized normal distribution in Definition 5 are given by

$$M(X^n) = \int_{-\infty}^{+\infty}x^n f_X(x;\eta,\beta,\theta,\sigma)\,dx = \frac{\theta^n}{\Gamma(1/\beta)}\sum_{l=0}^{\lfloor n/2\rfloor}\Gamma\!\left(\frac{2l+1}{\beta}\right)C_{2l}^{n}\left(\frac{\sigma}{\theta\,\eta^{1/\beta}}\right)^{2l}, \quad 2l\leq n,\ l\in\mathbb{N}, \qquad (8)$$

where $C_{2l}^n = n!/((n-2l)!(2l)!)$, $\theta$ is the mean of the Gaussian-type random variable and $\sigma > 0$ a scale parameter. It is also shown, for instance, that the inverse gamma distribution is a particular case of the generalized gamma-type distribution derived in this study.

The integrals examined here may also find applications in functional analysis; Gaussian Hilbert spaces, in which Hermite polynomials form a vector space with a Gaussian weight function, and Freud weights with their associated orthogonal polynomials [16], to name a few, are good examples.

The paper is organized as follows. In Section 2, the integrals in (3) and (4) are evaluated, and some new identities (or formulas) that involve the exponential, hyperbolic and trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$ are obtained. In Section 3, new probability measures that generalize the gamma-type and Gaussian-type distributions are constructed, and their corresponding distribution functions are written in terms of the confluent hypergeometric function. Formulas to evaluate the nth moments are also derived in Section 3. A general discussion is given in Section 4. The main results of the paper are given as propositions, theorems and corollaries in Sections 2.1, 2.2, 3.1 and 3.2.

2. Evaluation of the Non-Elementary Integrals

Let us first prove a lemma which will be used throughout the paper.

Lemma 1. Let $j\geq 0$ and $m\geq 0$ be integers, and let $\alpha$ and $\beta$ be arbitrary constants.

1) Then

$$\prod_{m=0}^{j}(\alpha+m\beta+1) = (\alpha+1)\,\beta^j\left(\frac{\alpha+1}{\beta}+1\right)_j, \qquad (9)$$

2)

$$\prod_{m=0}^{2j}(\alpha+m\beta+1) = (\alpha+1)(2\beta)^{2j}\left(\frac{\alpha+\beta+1}{2\beta}\right)_j\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j, \qquad (10)$$

3) and

$$\prod_{m=0}^{2j+1}(\alpha+m\beta+1) = (\alpha+1)(\alpha+\beta+1)(2\beta)^{2j}\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j\left(\frac{\alpha+3\beta+1}{2\beta}\right)_j. \qquad (11)$$

Proof.

1) Making use of Pochhammer's notation [17] (see Definition 1) yields

$$\prod_{m=0}^{j}(\alpha+m\beta+1) = (\alpha+1)\prod_{m=1}^{j}(\alpha+m\beta+1) = (\alpha+1)\,\beta^j\prod_{m=1}^{j}\left(\frac{\alpha+1}{\beta}+m\right) = (\alpha+1)\,\beta^j\prod_{m=1}^{j}\left(\frac{\alpha+1}{\beta}+1+m-1\right) = (\alpha+1)\,\beta^j\left(\frac{\alpha+1}{\beta}+1\right)_j. \qquad (12)$$

2) Observe that

$$\prod_{m=0}^{2j}(\alpha+m\beta+1) = \prod_{l=0}^{j-1}\big(\alpha+l(2\beta)+\beta+1\big)\prod_{l=0}^{j}\big(\alpha+l(2\beta)+1\big). \qquad (13)$$

Then, making use of Pochhammer’s notation as before gives

$$\prod_{l=0}^{j-1}\big(\alpha+l(2\beta)+\beta+1\big) = (2\beta)^j\left(\frac{\alpha+\beta+1}{2\beta}\right)_j \qquad (14)$$

and

$$\prod_{l=0}^{j}\big(\alpha+l(2\beta)+1\big) = (\alpha+1)(2\beta)^j\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j. \qquad (15)$$

Hence, multiplying (14) with (15) gives (10).

3) Observe that

$$\prod_{m=0}^{2j+1}(\alpha+m\beta+1) = \prod_{l=0}^{j}\big(\alpha+l(2\beta)+1\big)\prod_{l=0}^{j}\big(\alpha+l(2\beta)+\beta+1\big). \qquad (16)$$

Once again, using Pochhammer's notation yields

$$\prod_{l=0}^{j}\big(\alpha+l(2\beta)+\beta+1\big) = (\alpha+\beta+1)(2\beta)^j\left(\frac{\alpha+3\beta+1}{2\beta}\right)_j. \qquad (17)$$

Hence, multiplying (17) with (15) gives (11). □
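The three product identities of Lemma 1, in the internally consistent form stated above, can be spot-checked numerically. A sketch with arbitrarily chosen constants (`poch` and `prod_seq` are illustrative helper names):

```python
def poch(t, n):
    # Pochhammer symbol (t)_n
    r = 1.0
    for m in range(n):
        r *= t + m
    return r

def prod_seq(terms):
    r = 1.0
    for t in terms:
        r *= t
    return r

alpha, beta, j = 0.7, 1.4, 5

# identity (9)
lhs9 = prod_seq(alpha + m * beta + 1 for m in range(j + 1))
rhs9 = (alpha + 1) * beta**j * poch((alpha + 1) / beta + 1, j)
assert abs(lhs9 / rhs9 - 1) < 1e-12

# identity (10)
lhs10 = prod_seq(alpha + m * beta + 1 for m in range(2 * j + 1))
rhs10 = (alpha + 1) * (2 * beta)**(2 * j) \
    * poch((alpha + beta + 1) / (2 * beta), j) \
    * poch((alpha + 2 * beta + 1) / (2 * beta), j)
assert abs(lhs10 / rhs10 - 1) < 1e-12

# identity (11)
lhs11 = prod_seq(alpha + m * beta + 1 for m in range(2 * j + 2))
rhs11 = (alpha + 1) * (alpha + beta + 1) * (2 * beta)**(2 * j) \
    * poch((alpha + 2 * beta + 1) / (2 * beta), j) \
    * poch((alpha + 3 * beta + 1) / (2 * beta), j)
assert abs(lhs11 / rhs11 - 1) < 1e-12
```

Ratios rather than differences are compared, since the products grow quickly with j.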

Now, some of the main results of this paper can be obtained.

2.1. Evaluation of Non-Elementary Integrals of the Types $\int x^\alpha e^{\eta x^\beta}dx$, $\int x^\alpha\cosh(\eta x^\beta)dx$, $\int x^\alpha\sinh(\eta x^\beta)dx$

Proposition 1. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq 0$, $\beta\neq 0$), and let $\alpha$ be any constant different from $-1$ ($\alpha\neq -1$). Then,

$$\int x^\alpha e^{\eta x^\beta}dx = \frac{x^{\alpha+1}e^{\eta x^\beta}}{\alpha+1}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^\beta\right) + C. \qquad (18)$$

The Kummer transformation (formula 13.1.27 in [17] ) gives

$$\int x^\alpha e^{\eta x^\beta}dx = \frac{x^{\alpha+1}}{\alpha+1}\,{}_1F_1\!\left(\frac{\alpha+1}{\beta};\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right) + C. \qquad (19)$$

Proof. The substitution $u^\beta = \eta x^\beta$ yields

$$\int x^\alpha e^{\eta x^\beta}dx = \frac{1}{\eta^{\frac{\alpha+1}{\beta}}}\int u^\alpha e^{u^\beta}du. \qquad (20)$$

Performing successive integration by parts that increases the power of u gives

$$\begin{aligned}\int u^\alpha e^{u^\beta}du &= \frac{u^{\alpha+1}e^{u^\beta}}{\alpha+1} - \frac{\beta u^{\alpha+\beta+1}e^{u^\beta}}{(\alpha+1)(\alpha+\beta+1)} + \frac{\beta^2u^{\alpha+2\beta+1}e^{u^\beta}}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)} \\ &\quad - \frac{\beta^3u^{\alpha+3\beta+1}e^{u^\beta}}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)(\alpha+3\beta+1)} + \cdots + \frac{(-1)^j\beta^ju^{\alpha+j\beta+1}e^{u^\beta}}{\prod_{m=0}^{j}(\alpha+m\beta+1)} + \cdots \\ &= \sum_{j=0}^{\infty}\frac{(-1)^j\beta^ju^{\alpha+j\beta+1}e^{u^\beta}}{\prod_{m=0}^{j}(\alpha+m\beta+1)} + C. \qquad (21)\end{aligned}$$

Using (9) in Lemma 1 yields

$$\begin{aligned}\int u^\alpha e^{u^\beta}du &= u^{\alpha+1}e^{u^\beta}\sum_{j=0}^{\infty}\frac{(-\beta u^\beta)^j}{\prod_{m=0}^{j}(\alpha+m\beta+1)} + C = u^{\alpha+1}e^{u^\beta}\sum_{j=0}^{\infty}\frac{(-\beta u^\beta)^j}{(\alpha+1)\beta^j\left(\frac{\alpha+1}{\beta}+1\right)_j} + C \\ &= \frac{u^{\alpha+1}e^{u^\beta}}{\alpha+1}\sum_{j=0}^{\infty}\frac{(-u^\beta)^j}{\left(\frac{\alpha+1}{\beta}+1\right)_j} + C = \frac{u^{\alpha+1}e^{u^\beta}}{\alpha+1}\sum_{j=0}^{\infty}\frac{(1)_j}{\left(\frac{\alpha+\beta+1}{\beta}\right)_j}\frac{(-u^\beta)^j}{j!} + C \\ &= \frac{u^{\alpha+1}e^{u^\beta}}{\alpha+1}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-u^\beta\right) + C. \qquad (22)\end{aligned}$$

Hence, using the fact that $u^\beta = \eta x^\beta$ gives (18). □
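Proposition 1 can be spot-checked by quadrature: the difference of the antiderivative (18) at the endpoints should match a direct numerical integration. A sketch with arbitrarily chosen constants (`hyp1f1_1` and `simpson` are hand-rolled helpers, not library calls):

```python
import math

def hyp1f1_1(b, x, terms=80):
    # 1F1(1; b; x) = sum_n x^n / (b)_n
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= x / (b + n)
    return s

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

alpha, eta, beta = 0.5, 1.2, 2.0
# antiderivative (18)
F = lambda x: x**(alpha + 1) * math.exp(eta * x**beta) / (alpha + 1) \
    * hyp1f1_1((alpha + beta + 1) / beta, -eta * x**beta)

a, b = 0.3, 1.0
numeric = simpson(lambda x: x**alpha * math.exp(eta * x**beta), a, b)
assert abs(numeric - (F(b) - F(a))) < 1e-7
```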

Having evaluated (18), the following results hold.

Theorem 1. Let $\alpha$ be an arbitrary real or complex constant, $\beta$ a nonzero real or complex constant ($\beta\neq 0$), and $\eta$ a nonzero real or complex constant with a positive real part ($\mathrm{Re}(\eta) > 0$).

1) Then,

$$\int_0^{+\infty}x^\alpha e^{-\eta x^\beta}dx = \frac{\Gamma\left(\frac{\alpha+\beta+1}{\beta}\right)}{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}, \qquad (23)$$

where $\alpha > -\beta-1$ and $\alpha\neq -1$ if $\{\alpha,\beta\}\subset\mathbb{R}$.

2) Moreover, if the integrand is even, then

$$\int_{-\infty}^{+\infty}x^\alpha e^{-\eta x^\beta}dx = \frac{2\,\Gamma\left(\frac{\alpha+\beta+1}{\beta}\right)}{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}. \qquad (24)$$

Proof. It can readily be shown using Proposition 1 and the asymptotic expansion of the confluent hypergeometric function (formula 13.1.5 in [17] ) that

$$\int_0^{+\infty}x^\alpha e^{-\eta x^\beta}dx = \lim_{x\to\infty}\frac{x^{\alpha+1}e^{-\eta x^\beta}}{\alpha+1}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right) = \frac{\Gamma\left(\frac{\alpha+\beta+1}{\beta}\right)}{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}. \qquad (25)$$

If the integrand is even, then $\int_{-\infty}^{+\infty}x^\alpha e^{-\eta x^\beta}dx = 2\int_0^{+\infty}x^\alpha e^{-\eta x^\beta}dx$, and this gives (24). □
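The closed form (23) is easy to verify numerically by truncating the infinite upper limit (a sketch; the constants are chosen arbitrarily, and `simpson` is a hand-rolled Simpson rule):

```python
import math

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

alpha, eta, beta = 3.0, 2.0, 2.0
# the tail beyond x = 12 is utterly negligible for e^(-2 x^2)
numeric = simpson(lambda x: x**alpha * math.exp(-eta * x**beta), 0.0, 12.0)
closed = math.gamma((alpha + beta + 1) / beta) \
    / ((alpha + 1) * eta**((alpha + 1) / beta))
assert abs(numeric - closed) < 1e-8
```

For these values the integral is $\int_0^\infty x^3 e^{-2x^2}dx = 1/8$, which both sides reproduce.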

Theorem 1 is, for instance, a generalization of the Mellin transform of the function $e^{-\eta x^\beta}$, $\mathrm{Re}(\eta)>0$, $\beta>0$, where $s = \alpha+1$ is the Mellin parameter; here $s = \alpha+1$ can be negative, and the constant $\beta$ can be negative as well ($\beta<0$); see for example Poularikas [18].

Moreover, as it will shortly be shown (see Section 3), Theorem 1 can be used to obtain new probability distributions that generalize the gamma-type and Gaussian-type distributions that may lead to better statistical tests than those already known which are based on the central limit theorem (CLT).

Proposition 2. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq 0$, $\beta\neq 0$), and let $\alpha$ be a constant with $\alpha\neq -1$ and $\alpha\neq -\beta-1$. Then,

$$\int x^\alpha\cosh(\eta x^\beta)\,dx = \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\bigg[(\alpha+\beta+1)\cosh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right)$$

$$- \beta\eta x^\beta\sinh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right)\bigg] + C. \qquad (26)$$

Proof. The change of variable u β = η x β yields

$$\int x^\alpha\cosh(\eta x^\beta)\,dx = \frac{1}{\eta^{\frac{\alpha+1}{\beta}}}\int u^\alpha\cosh(u^\beta)\,du. \qquad (27)$$

Successive integration by parts that increases the power of u gives

$$\begin{aligned}\int u^\alpha\cosh(u^\beta)\,du &= \frac{u^{\alpha+1}\cosh(u^\beta)}{\alpha+1} - \frac{\beta u^{\alpha+\beta+1}\sinh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)} + \frac{\beta^2u^{\alpha+2\beta+1}\cosh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)} \\ &\quad - \frac{\beta^3u^{\alpha+3\beta+1}\sinh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)(\alpha+3\beta+1)} + \cdots + \frac{\beta^{2j}u^{\alpha+2j\beta+1}\cosh(u^\beta)}{\prod_{m=0}^{2j}(\alpha+m\beta+1)} - \frac{\beta^{2j+1}u^{\alpha+(2j+1)\beta+1}\sinh(u^\beta)}{\prod_{m=0}^{2j+1}(\alpha+m\beta+1)} + \cdots \\ &= \cosh(u^\beta)\sum_{j=0}^{\infty}\frac{\beta^{2j}u^{\alpha+2j\beta+1}}{\prod_{m=0}^{2j}(\alpha+m\beta+1)} - \sinh(u^\beta)\sum_{j=0}^{\infty}\frac{\beta^{2j+1}u^{\alpha+(2j+1)\beta+1}}{\prod_{m=0}^{2j+1}(\alpha+m\beta+1)} + C. \qquad (28)\end{aligned}$$

Using (10) and (11) in Lemma 1 yields

$$\begin{aligned}\int u^\alpha\cosh(u^\beta)\,du &= \frac{u^{\alpha+1}\cosh(u^\beta)}{\alpha+1}\sum_{j=0}^{\infty}\frac{\left(\frac{u^{2\beta}}{4}\right)^j}{\left(\frac{\alpha+\beta+1}{2\beta}\right)_j\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j} - \frac{\beta u^{\alpha+\beta+1}\sinh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)}\sum_{j=0}^{\infty}\frac{\left(\frac{u^{2\beta}}{4}\right)^j}{\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j\left(\frac{\alpha+3\beta+1}{2\beta}\right)_j} + C \\ &= \frac{u^{\alpha+1}\cosh(u^\beta)}{\alpha+1}\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right) - \frac{\beta u^{\alpha+\beta+1}\sinh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)}\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right) + C. \qquad (29)\end{aligned}$$

Hence, using the fact u β = η x β and rearranging terms gives (26). □

Proposition 3. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq 0$, $\beta\neq 0$), and let $\alpha$ be a constant with $\alpha\neq -1$ and $\alpha\neq -\beta-1$. Then,

$$\int x^\alpha\sinh(\eta x^\beta)\,dx = \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\bigg[(\alpha+\beta+1)\sinh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right) - \beta\eta x^\beta\cosh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right)\bigg] + C. \qquad (30)$$

Proof. Making the change of variable u β = η x β as before yields

$$\int x^\alpha\sinh(\eta x^\beta)\,dx = \frac{1}{\eta^{\frac{\alpha+1}{\beta}}}\int u^\alpha\sinh(u^\beta)\,du. \qquad (31)$$

Performing successive integration by parts that increase the power of u as before gives

$$\begin{aligned}\int u^\alpha\sinh(u^\beta)\,du &= \frac{u^{\alpha+1}\sinh(u^\beta)}{\alpha+1} - \frac{\beta u^{\alpha+\beta+1}\cosh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)} + \frac{\beta^2u^{\alpha+2\beta+1}\sinh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)} \\ &\quad - \frac{\beta^3u^{\alpha+3\beta+1}\cosh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)(\alpha+3\beta+1)} + \cdots + \frac{\beta^{2j}u^{\alpha+2j\beta+1}\sinh(u^\beta)}{\prod_{m=0}^{2j}(\alpha+m\beta+1)} - \frac{\beta^{2j+1}u^{\alpha+(2j+1)\beta+1}\cosh(u^\beta)}{\prod_{m=0}^{2j+1}(\alpha+m\beta+1)} + \cdots \\ &= \sinh(u^\beta)\sum_{j=0}^{\infty}\frac{\beta^{2j}u^{\alpha+2j\beta+1}}{\prod_{m=0}^{2j}(\alpha+m\beta+1)} - \cosh(u^\beta)\sum_{j=0}^{\infty}\frac{\beta^{2j+1}u^{\alpha+(2j+1)\beta+1}}{\prod_{m=0}^{2j+1}(\alpha+m\beta+1)} + C. \qquad (32)\end{aligned}$$

Using (10) and (11) in Lemma 1 yields

$$\begin{aligned}\int u^\alpha\sinh(u^\beta)\,du &= \frac{u^{\alpha+1}\sinh(u^\beta)}{\alpha+1}\sum_{j=0}^{\infty}\frac{\left(\frac{u^{2\beta}}{4}\right)^j}{\left(\frac{\alpha+\beta+1}{2\beta}\right)_j\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j} - \frac{\beta u^{\alpha+\beta+1}\cosh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)}\sum_{j=0}^{\infty}\frac{\left(\frac{u^{2\beta}}{4}\right)^j}{\left(\frac{\alpha+2\beta+1}{2\beta}\right)_j\left(\frac{\alpha+3\beta+1}{2\beta}\right)_j} + C \\ &= \frac{u^{\alpha+1}\sinh(u^\beta)}{\alpha+1}\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right) - \frac{\beta u^{\alpha+\beta+1}\cosh(u^\beta)}{(\alpha+1)(\alpha+\beta+1)}\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right) + C. \qquad (33)\end{aligned}$$

Hence, using the fact u β = η x β and rearranging terms gives (30). □

Theorem 2. For any constants α , β and η ,

$$\begin{aligned}&\cosh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right) - \frac{\beta\eta x^\beta}{\alpha+\beta+1}\sinh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right) \\ &\qquad = \frac{1}{2}\left[e^{\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^\beta\right) + e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right)\right]. \qquad (34)\end{aligned}$$

Proof. Using the hyperbolic identity $\cosh(\eta x^\beta) = (e^{\eta x^\beta}+e^{-\eta x^\beta})/2$ and Proposition 1 yields

$$\int x^\alpha\cosh(\eta x^\beta)\,dx = \frac{1}{2}\left(\int x^\alpha e^{\eta x^\beta}dx + \int x^\alpha e^{-\eta x^\beta}dx\right) = \frac{x^{\alpha+1}}{2(\alpha+1)}\left[e^{\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^\beta\right) + e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right)\right] + C. \qquad (35)$$

Hence, comparing (35) with (26) gives (34). □
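Identity (34) can be checked at a sample point by evaluating both sides with truncated series (a sketch with arbitrarily chosen constants; `hyp1f1_1` and `hyp1f2_1` are illustrative helpers for ${}_1F_1(1;b;x)$ and ${}_1F_2(1;b,c;x)$):

```python
import math

def hyp1f1_1(b, x, terms=60):
    # 1F1(1; b; x) = sum_n x^n / (b)_n
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= x / (b + n)
    return s

def hyp1f2_1(b, c, x, terms=60):
    # 1F2(1; b, c; x) = sum_n x^n / ((b)_n (c)_n)
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= x / ((b + n) * (c + n))
    return s

alpha, beta, eta, x = 0.7, 1.5, 0.9, 1.1
z = eta * x**beta
b1 = (alpha + beta + 1) / (2 * beta)
b2 = (alpha + 2 * beta + 1) / (2 * beta)
b3 = (alpha + 3 * beta + 1) / (2 * beta)

lhs = math.cosh(z) * hyp1f2_1(b1, b2, z * z / 4) \
    - beta * z / (alpha + beta + 1) * math.sinh(z) * hyp1f2_1(b2, b3, z * z / 4)
b = (alpha + beta + 1) / beta
rhs = 0.5 * (math.exp(z) * hyp1f1_1(b, -z) + math.exp(-z) * hyp1f1_1(b, z))
assert abs(lhs - rhs) < 1e-10
```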

Theorem 3. For any constants α , β and η ,

$$\begin{aligned}&\sinh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right) - \frac{\beta\eta x^\beta}{\alpha+\beta+1}\cosh(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right) \\ &\qquad = \frac{1}{2}\left[e^{\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^\beta\right) - e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right)\right]. \qquad (36)\end{aligned}$$

Proof. Using the hyperbolic identity $\sinh(\eta x^\beta) = (e^{\eta x^\beta}-e^{-\eta x^\beta})/2$ and Proposition 1 yields

$$\int x^\alpha\sinh(\eta x^\beta)\,dx = \frac{1}{2}\left(\int x^\alpha e^{\eta x^\beta}dx - \int x^\alpha e^{-\eta x^\beta}dx\right) = \frac{x^{\alpha+1}}{2(\alpha+1)}\left[e^{\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^\beta\right) - e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right)\right] + C. \qquad (37)$$

Hence, comparing (37) with (30) gives (36). □

Theorem 4. For any constants $\alpha$, $\beta$ and $\eta$,

$$\begin{aligned}e^{\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^\beta\right) &= \left[\cosh(\eta x^\beta)+\sinh(\eta x^\beta)\right]{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right) \\ &\quad - \frac{\beta\eta x^\beta}{\alpha+\beta+1}\left[\cosh(\eta x^\beta)+\sinh(\eta x^\beta)\right]{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^2x^{2\beta}}{4}\right). \qquad (38)\end{aligned}$$

Proof. The hyperbolic relation $e^{\eta x^\beta} = \cosh(\eta x^\beta) + \sinh(\eta x^\beta)$ and Propositions 2 and 3 give

$$\begin{aligned}\int x^\alpha e^{\eta x^\beta}dx &= \int x^\alpha\cosh(\eta x^\beta)\,dx + \int x^\alpha\sinh(\eta x^\beta)\,dx \\ &= \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\Big[(\alpha+\beta+1)\cosh(\eta x^\beta)\,{}_1F_2(1;b_1,b_2;z) - \beta\eta x^\beta\sinh(\eta x^\beta)\,{}_1F_2(1;b_2,b_3;z)\Big] \\ &\quad + \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\Big[(\alpha+\beta+1)\sinh(\eta x^\beta)\,{}_1F_2(1;b_1,b_2;z) - \beta\eta x^\beta\cosh(\eta x^\beta)\,{}_1F_2(1;b_2,b_3;z)\Big] + C, \qquad (39)\end{aligned}$$

where $b_1 = \frac{\alpha+\beta+1}{2\beta}$, $b_2 = \frac{\alpha+2\beta+1}{2\beta}$, $b_3 = \frac{\alpha+3\beta+1}{2\beta}$ and $z = \frac{\eta^2x^{2\beta}}{4}$.

Hence, comparing (39) with (18) gives (38). □

2.2. Evaluation of Non-Elementary Integrals of the Types $\int x^\alpha\cos(\eta x^\beta)dx$, $\int x^\alpha\sin(\eta x^\beta)dx$

Proposition 4. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq 0$, $\beta\neq 0$), and let $\alpha$ be a constant with $\alpha\neq -1$ and $\alpha\neq -\beta-1$. Then,

$$\int x^\alpha\cos(\eta x^\beta)\,dx = \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\bigg[(\alpha+\beta+1)\cos(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right)$$

$$+ \beta\eta x^\beta\sin(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right)\bigg] + C. \qquad (40)$$

The proof is similar to the proof of Proposition 2, so it is omitted.

Proposition 5. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq 0$, $\beta\neq 0$), and let $\alpha$ be a constant with $\alpha\neq -1$ and $\alpha\neq -\beta-1$. Then,

$$\int x^\alpha\sin(\eta x^\beta)\,dx = \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\bigg[(\alpha+\beta+1)\sin(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) - \beta\eta x^\beta\cos(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right)\bigg] + C. \qquad (41)$$

The proof of this proposition is also omitted since it is similar to that of Proposition 3.

Theorem 5. For any constants α , β and η ,

$$\begin{aligned}&\cos(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) + \frac{\beta\eta x^\beta}{\alpha+\beta+1}\sin(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) \\ &\qquad = \frac{1}{2}\left[e^{i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^\beta\right) + e^{-i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^\beta\right)\right]. \qquad (42)\end{aligned}$$

Proof. Euler's identity $\cos(\eta x^\beta) = (e^{i\eta x^\beta}+e^{-i\eta x^\beta})/2$ and Proposition 1 give

$$\int x^\alpha\cos(\eta x^\beta)\,dx = \frac{1}{2}\left[\int x^\alpha e^{i\eta x^\beta}dx + \int x^\alpha e^{-i\eta x^\beta}dx\right] = \frac{x^{\alpha+1}}{2(\alpha+1)}\left[e^{i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^\beta\right) + e^{-i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^\beta\right)\right] + C. \qquad (43)$$

Hence, comparing (43) with (40) gives (42). □

Theorem 6. For any constants α , β and η ,

$$\begin{aligned}&\sin(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) - \frac{\beta\eta x^\beta}{\alpha+\beta+1}\cos(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) \\ &\qquad = \frac{1}{2i}\left[e^{i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^\beta\right) - e^{-i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^\beta\right)\right]. \qquad (44)\end{aligned}$$

Proof. Euler's identity $\sin(\eta x^\beta) = (e^{i\eta x^\beta}-e^{-i\eta x^\beta})/(2i)$ and Proposition 1 give

$$\int x^\alpha\sin(\eta x^\beta)\,dx = \frac{1}{2i}\left[\int x^\alpha e^{i\eta x^\beta}dx - \int x^\alpha e^{-i\eta x^\beta}dx\right]$$

$$= \frac{x^{\alpha+1}}{2i(\alpha+1)}\left[e^{i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^\beta\right) - e^{-i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^\beta\right)\right] + C. \qquad (45)$$

Hence, comparing (45) with (41) gives (44). □

Theorem 7. For any constants α , β and η ,

$$\begin{aligned}e^{i\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^\beta\right) &= \cos(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) + \frac{\beta\eta x^\beta}{\alpha+\beta+1}\sin(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) \\ &\quad + i\left[\sin(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right) - \frac{\beta\eta x^\beta}{\alpha+\beta+1}\cos(\eta x^\beta)\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^2x^{2\beta}}{4}\right)\right]. \qquad (46)\end{aligned}$$

Proof. Using the relation $e^{i\eta x^\beta} = \cos(\eta x^\beta) + i\sin(\eta x^\beta)$ and Propositions 4 and 5 yields

$$\begin{aligned}\int x^\alpha e^{i\eta x^\beta}dx &= \int x^\alpha\cos(\eta x^\beta)\,dx + i\int x^\alpha\sin(\eta x^\beta)\,dx \\ &= \frac{x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\Big[(\alpha+\beta+1)\cos(\eta x^\beta)\,{}_1F_2(1;b_1,b_2;-z) + \beta\eta x^\beta\sin(\eta x^\beta)\,{}_1F_2(1;b_2,b_3;-z)\Big] \\ &\quad + \frac{i\,x^{\alpha+1}}{(\alpha+1)(\alpha+\beta+1)}\Big[(\alpha+\beta+1)\sin(\eta x^\beta)\,{}_1F_2(1;b_1,b_2;-z) - \beta\eta x^\beta\cos(\eta x^\beta)\,{}_1F_2(1;b_2,b_3;-z)\Big] + C, \qquad (47)\end{aligned}$$

where $b_1 = \frac{\alpha+\beta+1}{2\beta}$, $b_2 = \frac{\alpha+2\beta+1}{2\beta}$, $b_3 = \frac{\alpha+3\beta+1}{2\beta}$ and $z = \frac{\eta^2x^{2\beta}}{4}$.

Hence, comparing (47) with (18) (with η replaced by i η ) gives (46). □

3. New Probability Measures That Generalize the Gamma-Type and Gaussian-Type Distributions

In this section, Proposition 1 and Theorem 1 are used to generalize the gamma-type ( χ 2 distribution, inverse gamma distribution) and Gaussian-type distributions, see for example [19].

3.1. Generalization of the Gamma-Type Distributions

Define a probability measure μ in terms of the Lebesgue measure d x as [20]

$$d\mu = \mu(dx) = A\,g(x;\alpha,\eta,\beta)\,dx = f_X(x;\alpha,\eta,\beta)\,dx, \quad x\in[0,+\infty), \qquad (48)$$

where f X ( x ; α , η , β ) is a probability density function (p.d.f.) of a three parameter distribution of some random variable X,

$$g(x;\alpha,\eta,\beta) = x^\alpha e^{-\eta x^\beta}, \quad \alpha\neq -1,\ \beta\neq 0,\ \alpha>-\beta-1, \qquad (49)$$

where $\alpha$, $\eta$ and $\beta$ are parameters of the probability distribution of the random variable X, and A is a normalization constant which can be obtained using formula (23) in Theorem 1.

After normalization, it is found that the p.d.f. of X is given by

$$f_X(x;\alpha,\eta,\beta) = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\,x^\alpha e^{-\eta x^\beta}, \quad \alpha\neq -1,\ \beta\neq 0,\ \alpha>-\beta-1. \qquad (50)$$

The distribution function of the random variable X can be obtained using Proposition 1 and is given by

$$F_X(x;\alpha,\eta,\beta) = \mu\{[0,x)\} = \int_0^x f_X(u;\alpha,\eta,\beta)\,du = \frac{\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\,x^{\alpha+1}e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right). \qquad (51)$$

The nth moments ( M ( X n ) ) can, as well, be evaluated using formula (23) in Theorem 1 to obtain

$$M(X^n) = \int_0^{+\infty}x^n f_X(x;\alpha,\eta,\beta)\,dx = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\int_0^{+\infty}x^{\alpha+n}e^{-\eta x^\beta}dx = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\cdot\frac{\Gamma((\alpha+\beta+n+1)/\beta)}{(\alpha+n+1)\,\eta^{(\alpha+n+1)/\beta}} = \frac{\Gamma((\alpha+n+1)/\beta)}{\eta^{n/\beta}\,\Gamma((\alpha+1)/\beta)}. \qquad (52)$$

These results are summarized in the following theorem.

Theorem 8. Let X be a random variable having the generalized gamma-type p.d.f. with parameters α , η and β given by

$$f_X(x;\alpha,\eta,\beta) = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\,x^\alpha e^{-\eta x^\beta}, \quad x\in\mathbb{R}_+,\ \alpha\neq -1,\ \beta\neq 0,\ \alpha>-\beta-1. \qquad (53)$$

Then, the distribution function F X ( x ; α , η , β ) of the random variable X is given by

$$F_X(x;\alpha,\eta,\beta) = \frac{\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\,x^{\alpha+1}e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right), \qquad (54)$$

and the nth moments M ( X n ) of X are given by

$$M(X^n) = \int_0^{+\infty}x^n f_X(x;\alpha,\eta,\beta)\,dx = \frac{\Gamma((\alpha+n+1)/\beta)}{\eta^{n/\beta}\,\Gamma((\alpha+1)/\beta)}. \qquad (55)$$
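Theorem 8 can be spot-checked numerically: the p.d.f. (53) should integrate to 1 and its moments should match (55). A sketch with arbitrarily chosen parameters (`simpson` is a hand-rolled Simpson rule with a truncated upper limit):

```python
import math

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

alpha, eta, beta = 2.0, 1.0, 2.0
A = (alpha + 1) * eta**((alpha + 1) / beta) / math.gamma((alpha + beta + 1) / beta)
pdf = lambda x: A * x**alpha * math.exp(-eta * x**beta)

# normalization
assert abs(simpson(pdf, 0.0, 12.0) - 1.0) < 1e-8

# nth moment against formula (55)
n_ = 2
closed = math.gamma((alpha + n_ + 1) / beta) \
    / (eta**(n_ / beta) * math.gamma((alpha + 1) / beta))
numeric = simpson(lambda x: x**n_ * pdf(x), 0.0, 12.0)
assert abs(numeric - closed) < 1e-8
```

For these values ($\alpha=2$, $\beta=2$, $\eta=1$, the Maxwell-Boltzmann case of Example 2) the second moment is $\Gamma(5/2)/\Gamma(3/2) = 3/2$.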

Example 1. The generalized gamma (GG) distribution can be derived from Theorem 8 by setting $\eta = 1/\kappa^\beta$, $\kappa>0$, $\beta>0$ and $\alpha = \phi-1$, $\phi>0$; its p.d.f. is (see Definition 4)

$$f_X(x;\phi,\kappa,\beta) = \frac{\beta/\kappa^\phi}{\Gamma(\phi/\beta)}\,x^{\phi-1}e^{-(x/\kappa)^\beta}, \quad x\in\mathbb{R}_+,\ \phi>0,\ \kappa>0,\ \beta>0.$$

Example 2. The one-parameter Maxwell–Boltzmann distribution in gas dynamics, given by

$$F_V(v;\eta) = \frac{2\eta^{3/2}}{\Gamma(3/2)}\int_0^v x^2e^{-\eta x^2}dx = \frac{2\eta^{3/2}}{3\,\Gamma(3/2)}\,v^3e^{-\eta v^2}\,{}_1F_1\!\left(1;\frac{5}{2};\eta v^2\right),$$

where v is the gas speed and $\eta>0$ is a constant that depends on the gas properties, is also a special case of Theorem 8 with $\alpha = 2$ and $\beta = 2$.

The inverse gamma distribution finds applications in wireless communications; see for example [21] [22]. It can be shown that the inverse gamma distribution is, as well, a particular case of the generalized gamma-type distribution in Theorem 8. If Y is a gamma-distributed random variable, then the random variable $X = 1/Y$ is said to be an inverse-gamma-distributed random variable. The inverse gamma distribution is a special case of Theorem 8 in which both parameters $\alpha$ and $\beta$ are negative ($\alpha<0$, $\beta<0$).

Corollary 1. Let X be a random variable with the inverse gamma distribution, X ~ I G ( θ , η ) with parameters θ and η . Then, the distribution function F X ( x ; θ , η ) is given by

$$F_X(x;\theta,\eta) = 1 - \frac{\eta^\theta}{\Gamma(\theta+1)}\,x^{-\theta}e^{-\eta/x}\,{}_1F_1\!\left(1;\theta+1;\frac{\eta}{x}\right), \quad x>0,\ \theta>0,\ \eta>0, \qquad (56)$$

while the nth moments M ( X n ) are given by

$$M(X^n) = \frac{\eta^n\,\Gamma(\theta-n)}{\Gamma(\theta)}, \quad \theta>n. \qquad (57)$$

Proof. Setting $\alpha = -(\theta+1)$ and $\beta = -1$ in Theorem 8, and using the fundamental theorem of calculus, $f_X(x) = \frac{dF_X}{dx} = \frac{d}{dx}\int_0^x f_X(u)\,du$, together with Proposition 1, gives the p.d.f.

$$f_X(x;\theta,\eta) = \frac{\eta^\theta}{\Gamma(\theta)}\,x^{-(\theta+1)}e^{-\eta/x}, \quad x>0,\ \theta>0,\ \eta>0, \qquad (58)$$

which is the p.d.f. of the inverse gamma distribution. The nth moments $M(X^n)$ of X are obtained by setting $\alpha = -(\theta+1)$ and $\beta = -1$ in (55). □
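Corollary 1 can be spot-checked against the standard inverse gamma p.d.f. (58); e.g. the first moment should be $\eta\,\Gamma(\theta-1)/\Gamma(\theta) = \eta/(\theta-1)$. A sketch (arbitrarily chosen parameters, hand-rolled Simpson rule with a truncated domain):

```python
import math

def simpson(f, a, b, n=8000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

theta, eta = 6.0, 2.0
pdf = lambda x: eta**theta / math.gamma(theta) * x**(-theta - 1.0) * math.exp(-eta / x)

# normalization (tails outside [0.02, 20] are negligible for these parameters)
norm = simpson(pdf, 0.02, 20.0)
assert abs(norm - 1.0) < 1e-6

# first moment against (57): eta * Gamma(theta - 1) / Gamma(theta) = eta / (theta - 1)
m1 = simpson(lambda x: x * pdf(x), 0.02, 20.0)
assert abs(m1 - eta / (theta - 1)) < 1e-6
```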

3.2. Generalization of Gaussian-Type Distributions

Consider, as before, a probability measure μ in terms of Lebesgue measure d x given by

$$d\mu = \mu(dx) = A\,g(x;\alpha,\eta,\beta)\,dx = f_X(x;\alpha,\eta,\beta)\,dx, \quad x\in\mathbb{R}, \qquad (59)$$

where, as before, f X ( x ; α , η , β ) is the p.d.f. of some random variable X,

$$g(x;\alpha,\eta,\beta) = x^\alpha e^{-\eta x^\beta}, \quad \alpha\neq -1,\ \beta\neq 0,\ \alpha>-\beta-1, \qquad (60)$$

is an even function of the variable x; A is a normalization constant which can be obtained using formula (24) in Theorem 1; and $\alpha$, $\eta$ and $\beta$ are parameters of the probability distribution of the random variable X.

After normalization, the p.d.f. of X is found to be

$$f_X(x;\alpha,\eta,\beta) = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{2\,\Gamma((\alpha+\beta+1)/\beta)}\,x^\alpha e^{-\eta x^\beta}, \quad x\in\mathbb{R},\ \alpha\neq -1,\ \beta\neq 0,\ \alpha>-\beta-1. \qquad (61)$$

It is important to note that $f_X$ in this case is even, so a factor of 2 appears in the denominator. In addition, the parameters $\alpha$ and $\beta$ can be negative. The distribution function $F_X$ can also be obtained using Proposition 1 and is thus given by

$$F_X(x;\alpha,\eta,\beta) = \mu\{(-\infty,x)\} = \int_{-\infty}^{x}f_X(u;\alpha,\eta,\beta)\,du = \frac{1}{2}\left[1 + \frac{\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\,x^{\alpha+1}e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right)\right]. \qquad (62)$$

The moment ( M ( X n ) ) can be evaluated using formula (24) in Theorem 1 to obtain

$$M(X^n) = \int_{-\infty}^{+\infty}x^n f_X(x;\alpha,\eta,\beta)\,dx = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{2\,\Gamma((\alpha+\beta+1)/\beta)}\int_{-\infty}^{+\infty}x^{\alpha+n}e^{-\eta x^\beta}dx = \begin{cases}\dfrac{\Gamma((\alpha+n+1)/\beta)}{\eta^{n/\beta}\,\Gamma((\alpha+1)/\beta)}, & \text{if } n \text{ is even},\\[1mm] 0, & \text{if } n \text{ is odd}.\end{cases} \qquad (63)$$

These results further generalize the generalized Gaussian distribution with zero mean, for which, in general, $\alpha = 0$ and $\beta > 0$. They are summarized in Theorem 9.

Theorem 9. Let X be a random variable having an even p.d.f. with parameters α , η and β given by

$$f_X(x;\alpha,\eta,\beta) = \frac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{2\,\Gamma((\alpha+\beta+1)/\beta)}\,x^\alpha e^{-\eta x^\beta}, \quad x\in\mathbb{R},\ \alpha\neq -1,\ \beta\neq 0,\ \alpha>-\beta-1. \qquad (64)$$

Then, the distribution function F X ( x ; α , η , β ) of the random variable X is given by

$$F_X(x;\alpha,\eta,\beta) = \frac{1}{2}\left[1 + \frac{\eta^{(\alpha+1)/\beta}}{\Gamma((\alpha+\beta+1)/\beta)}\,x^{\alpha+1}e^{-\eta x^\beta}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^\beta\right)\right]. \qquad (65)$$

And the nth moments $M(X^n)$ of X are given by

$M(X^n) = \displaystyle\int_{-\infty}^{+\infty} x^n f_X(x;\alpha,\eta,\beta)\,dx = \begin{cases} \dfrac{\Gamma\left((\alpha+n+1)/\beta\right)}{\eta^{n/\beta}\,\Gamma\left((\alpha+1)/\beta\right)}, & \text{if } n \text{ is even}, \\ 0, & \text{if } n \text{ is odd}. \end{cases}$ (66)

Example 3. Setting $\alpha = 0$, $\beta = 2$ and $\eta = 1/2$ yields $f_X(x) = (1/\sqrt{2\pi})\,e^{-x^2/2}$, and the mean of X is $EX = M(X^1) = 0$ while the variance is $EX^2 = M(X^2) = 1$. So $X \sim N(0,1)$ as expected (or X is a standard normal random variable).
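The reduction in Example 3 can be confirmed directly; a minimal sketch compares (61) at $\alpha = 0$, $\beta = 2$, $\eta = 1/2$ with the standard normal density on a grid:

```python
import math

# Compare p.d.f. (61) at alpha=0, beta=2, eta=1/2 with the N(0,1) density.
def f_X(x, alpha=0, eta=0.5, beta=2):
    A = (alpha + 1) * eta**((alpha + 1) / beta) / (2 * math.gamma((alpha + beta + 1) / beta))
    return A * x**alpha * math.exp(-eta * x**beta)

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

worst = max(abs(f_X(x) - phi(x)) for x in [i / 10 for i in range(-40, 41)])
```

The two densities agree to floating-point precision, since the normalization constant collapses to $1/\sqrt{2\pi}$ exactly.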

More general results can be achieved by introducing two additional parameters θ and σ > 0. The results in Theorem 10 further generalize the generalized Gaussian distribution in Definition 5.

Theorem 10. Let X be a random variable having an even p.d.f. with five parameters $\alpha$, $\eta$, $\beta$, $\theta$ and $\sigma$ given by

$f_X(x;\alpha,\eta,\beta,\theta,\sigma) = \dfrac{1}{2\sigma}\,\dfrac{(\alpha+1)\,\eta^{(\alpha+1)/\beta}}{\Gamma\left((\alpha+\beta+1)/\beta\right)} \left(\dfrac{x-\theta}{\sigma}\right)^{\alpha} \exp\left(-\eta\left(\dfrac{x-\theta}{\sigma}\right)^{\beta}\right), \quad x \in \mathbb{R},\ \theta \in \mathbb{R},\ \alpha \neq -1,\ \beta \neq 0,\ \alpha > -\beta - 1,\ \sigma > 0.$ (67)

Then, the distribution function $F_X(x;\alpha,\eta,\beta,\theta,\sigma)$ of the random variable X is given by

$F_X(x;\alpha,\eta,\beta,\theta,\sigma) = \dfrac{1}{2}\left[1 + \dfrac{\eta^{(\alpha+1)/\beta}}{\Gamma\left((\alpha+\beta+1)/\beta\right)} \left(\dfrac{x-\theta}{\sigma}\right)^{\alpha+1} \exp\left(-\eta\left(\dfrac{x-\theta}{\sigma}\right)^{\beta}\right) {}_1F_1\!\left(1;\dfrac{\alpha+\beta+1}{\beta};\eta\left(\dfrac{x-\theta}{\sigma}\right)^{\beta}\right)\right],$ (68)

and the moments $M(X^n)$ of X are given by

$M(X^n) = \displaystyle\int_{-\infty}^{+\infty} x^n f_X(x;\alpha,\eta,\beta,\theta,\sigma)\,dx = \dfrac{\theta^n}{\Gamma\left((\alpha+1)/\beta\right)} \sum_{l=0}^{\lfloor n/2 \rfloor} \Gamma\left(\dfrac{\alpha+2l+1}{\beta}\right) C_{2l}^{\,n} \left(\dfrac{\sigma}{\theta\,\eta^{1/\beta}}\right)^{2l},$ (69)

where $C_{2l}^{\,n} = n!/\left((n-2l)!\,(2l)!\right)$.

Thus, the mean and the variance of X are respectively given by

$EX = M(X^1) = \theta \quad \text{and} \quad \operatorname{var} X = EX^2 - (EX)^2 = \dfrac{\sigma^2}{\eta^{2/\beta}}\,\dfrac{\Gamma\left((\alpha+3)/\beta\right)}{\Gamma\left((\alpha+1)/\beta\right)}.$ (70)

Formula (69) is obtained by making the substitution $u = (x-\theta)/\sigma$, and then applying the binomial theorem and Theorem 1.
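The mean and variance formulas (70) can be cross-checked numerically against the five-parameter p.d.f. (67). The parameter values below are illustrative choices, not taken from the paper:

```python
import math

# Numerical check of the mean/variance formulas (70) for p.d.f. (67)
# with illustrative values alpha=2, eta=1, beta=4, theta=1.5, sigma=0.7.
alpha, eta, beta, theta, sigma = 2, 1.0, 4, 1.5, 0.7

def f_X(x):
    u = (x - theta) / sigma
    A = (alpha + 1) * eta**((alpha + 1) / beta) / (2 * sigma * math.gamma((alpha + beta + 1) / beta))
    return A * u**alpha * math.exp(-eta * u**beta)

def simpson(g, a, b, n=40001):
    # composite Simpson rule with an odd number of nodes
    h = (b - a) / (n - 1)
    s = g(a) + g(b)
    for i in range(1, n - 1):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

lo, hi = theta - 6 * sigma, theta + 6 * sigma   # tails beyond this are negligible
mean = simpson(lambda x: x * f_X(x), lo, hi)
second = simpson(lambda x: x * x * f_X(x), lo, hi)
var_formula = sigma**2 / eta**(2 / beta) * math.gamma((alpha + 3) / beta) / math.gamma((alpha + 1) / beta)
```

The numerical mean reproduces θ (the density is symmetric about θ) and the numerical variance matches the gamma-function ratio in (70).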

A four-parameter generalized Gaussian distribution (see Definition 5) may be derived by setting $\alpha = 0$ in Theorem 10.

Corollary 2. Let X be a random variable having the generalized Gaussian-type distribution p.d.f. (see Definition 5) with four parameters $\eta$, $\beta$, $\theta$ and $\sigma$ given by

$f_X(x;\eta,\beta,\theta,\sigma) = \dfrac{\beta\,\eta^{1/\beta}}{2\sigma\,\Gamma(1/\beta)} \exp\left(-\eta\left(\dfrac{x-\theta}{\sigma}\right)^{\beta}\right), \quad x \in \mathbb{R},\ \eta > 0,\ \theta \in \mathbb{R},\ \beta > 0,\ \sigma > 0,$ (71)

and where β is even. Then, the distribution function $F_X(x;\eta,\beta,\theta,\sigma)$ of the random variable X is given by

$F_X(x;\eta,\beta,\theta,\sigma) = \dfrac{1}{2}\left[1 + \dfrac{\beta\,\eta^{1/\beta}}{\Gamma(1/\beta)}\left(\dfrac{x-\theta}{\sigma}\right) \exp\left(-\eta\left(\dfrac{x-\theta}{\sigma}\right)^{\beta}\right) {}_1F_1\!\left(1;\dfrac{\beta+1}{\beta};\eta\left(\dfrac{x-\theta}{\sigma}\right)^{\beta}\right)\right],$ (72)

and the nth moments $M(X^n)$ of X are given by

$M(X^n) = \displaystyle\int_{-\infty}^{+\infty} x^n f_X(x;\eta,\beta,\theta,\sigma)\,dx = \dfrac{\theta^n}{\Gamma(1/\beta)} \sum_{l=0}^{\lfloor n/2 \rfloor} \Gamma\left(\dfrac{2l+1}{\beta}\right) C_{2l}^{\,n} \left(\dfrac{\sigma}{\theta\,\eta^{1/\beta}}\right)^{2l},$ (73)

where, as before, $C_{2l}^{\,n} = n!/\left((n-2l)!\,(2l)!\right)$. Thus, the mean and the variance of X are respectively given by

$EX = M(X^1) = \theta \quad \text{and} \quad \operatorname{var} X = EX^2 - (EX)^2 = \dfrac{\sigma^2}{\eta^{2/\beta}}\,\dfrac{\Gamma(3/\beta)}{\Gamma(1/\beta)}.$ (74)

Moreover, if $\eta \ge 1/2$ and $\beta > 2$, then, since $\Gamma(3/\beta) < \Gamma(1/\beta)$, the variance of X ($\operatorname{var} X$) satisfies

$\operatorname{var} X = EX^2 - (EX)^2 = \dfrac{\sigma^2}{\eta^{2/\beta}}\,\dfrac{\Gamma(3/\beta)}{\Gamma(1/\beta)} \le \sigma^2,$ (75)

where σ 2 is the variance of the Gaussian random variable.
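The bound (75) can be spot-checked over a small grid of admissible parameters (the grid values here are illustrative):

```python
import math

# Spot check of (75): for eta >= 1/2 and even beta > 2, the ratio
# var X / sigma^2 = Gamma(3/beta) / (eta**(2/beta) * Gamma(1/beta))
# should not exceed 1.
ratios = [math.gamma(3 / beta) / (eta**(2 / beta) * math.gamma(1 / beta))
          for beta in (4, 6, 8, 10) for eta in (0.5, 1.0, 2.0)]
```

Every ratio on this grid lies strictly between 0 and 1, consistent with the stated inequality.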

Example 4. Let X be a random variable with the p.d.f. in Corollary 2. Setting $\eta = 1/\beta$ implies that $Y = ((X-\theta)/\sigma)^{\beta}$ is a $\chi_1^{\beta}$-distributed random variable with p.d.f.

$f_Y(y;\beta) = \dfrac{y^{\frac{1}{\beta}-1}\, e^{-y/\beta}}{\beta^{1/\beta}\,\Gamma(1/\beta)}, \quad y \ge 0,$

where $\beta \in \mathbb{N}_+$ is an even number as before and the subscript 1 on $\chi_1^{\beta}$ is the number of degrees of freedom. Moreover, if $X_1, X_2, \ldots, X_n$ are independently and identically distributed with the p.d.f. in Corollary 2, then $S = \sum_{i=1}^{n} Y_i = \sum_{i=1}^{n} ((X_i-\theta)/\sigma)^{\beta}$ is a $\chi_n^{\beta}$ (with n degrees of freedom) random variable with p.d.f. (Richter [23])

$f_S(y;\beta) = \dfrac{y^{\frac{n}{\beta}-1}\, e^{-y/\beta}}{\beta^{n/\beta}\,\Gamma(n/\beta)}, \quad y \ge 0.$

In that case, inferences about the scale parameter σ may be performed; see for example Richter [23]. Furthermore, if the parameter θ is replaced by its unbiased estimator $\hat{\theta}$ (i.e. $E\hat{\theta} = \theta$), then $\hat{S} = \sum_{i=1}^{n} Y_i = \sum_{i=1}^{n} ((X_i-\hat{\theta})/\sigma)^{\beta}$ is a $\chi_{n-1}^{\beta}$ (with $n-1$ degrees of freedom) random variable.
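The density of each $Y_i$ above can be read as a Gamma density with shape $1/\beta$ and scale $\beta$, so $S$ is Gamma with shape $n/\beta$ and scale $\beta$, giving $ES = n$ and $\operatorname{var} S = n\beta$. A Monte Carlo sketch (using Python's `random.gammavariate`; the values $\beta = 4$, $n = 5$ are hypothetical) checks these implied moments:

```python
import math
import random

# With eta = 1/beta, each Y_i = ((X_i - theta)/sigma)**beta has density
# y**(1/beta - 1) * exp(-y/beta) / (beta**(1/beta) * Gamma(1/beta)),
# i.e. Gamma(shape 1/beta, scale beta). The sum S of n such variables is
# Gamma(shape n/beta, scale beta), so E S = n and var S = n * beta.
random.seed(0)
beta, n, trials = 4, 5, 100000
samples = [sum(random.gammavariate(1 / beta, beta) for _ in range(n))
           for _ in range(trials)]
mean_S = sum(samples) / trials
var_S = sum((s - mean_S)**2 for s in samples) / trials
```

With $10^5$ trials the sample mean and variance land close to $n = 5$ and $n\beta = 20$ respectively.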

A formula for the nth moments of the Gaussian distribution can now be obtained by setting $\beta = 2$ and $\eta = 1/2$ in (73).

Corollary 3. Let X be a Gaussian random variable. Its nth moments $M(X^n)$ are thus given by the formula

$M(X^n) = \dfrac{\theta^n}{\sqrt{\pi}} \sum_{l=0}^{\lfloor n/2 \rfloor} \Gamma\left(l + \dfrac{1}{2}\right) C_{2l}^{\,n} \left(\dfrac{2\sigma^2}{\theta^2}\right)^{l},$ (76)

where θ is the mean of the Gaussian random variable, $\sigma^2 > 0$ its variance, and, as before, $C_{2l}^{\,n} = n!/\left((n-2l)!\,(2l)!\right)$.
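Formula (76) is easy to validate against the classical Gaussian moment identities $EX = \theta$, $EX^2 = \theta^2 + \sigma^2$, $EX^3 = \theta^3 + 3\theta\sigma^2$, $EX^4 = \theta^4 + 6\theta^2\sigma^2 + 3\sigma^4$; a minimal sketch:

```python
import math

# Evaluate formula (76) and compare with the first four Gaussian moments.
def moment(n, theta, sigma):
    total = 0.0
    for l in range(n // 2 + 1):            # sum over 2l <= n
        C = math.factorial(n) // (math.factorial(n - 2 * l) * math.factorial(2 * l))
        total += math.gamma(l + 0.5) * C * (2 * sigma**2 / theta**2)**l
    return theta**n / math.sqrt(math.pi) * total

theta, sigma = 2.0, 3.0
vals = [moment(n, theta, sigma) for n in (1, 2, 3, 4)]
expected = [theta,
            theta**2 + sigma**2,
            theta**3 + 3 * theta * sigma**2,
            theta**4 + 6 * theta**2 * sigma**2 + 3 * sigma**4]
```

For $\theta = 2$, $\sigma = 3$ both lists agree (2, 13, 62 and 475) up to roundoff; note the $\theta^{2l}$ in the denominator cancels against $\theta^n$, so (76) is stated for $\theta \neq 0$.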

4. Concluding Remarks and Discussion

Formulas for non-elementary integrals of the types $\int x^{\alpha}e^{\eta x^{\beta}}dx$, $\int x^{\alpha}\cosh(\eta x^{\beta})dx$, $\int x^{\alpha}\sinh(\eta x^{\beta})dx$, $\int x^{\alpha}\cos(\eta x^{\beta})dx$ and $\int x^{\alpha}\sin(\eta x^{\beta})dx$, where $\alpha$, $\eta$ and $\beta$ are real or complex constants, were obtained in terms of the confluent hypergeometric function ${}_1F_1$ and the hypergeometric function ${}_1F_2$ in Section 2 (see Propositions 1, 2, 3, 4 and 5). The results in Propositions 1-5 generalize those in Nijimbere (2017) [6] and Nijimbere (2018) [4] [5]. Using hyperbolic and Euler identities, some identities involving exponential, hyperbolic and trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$ were also obtained in Section 2 (Theorems 2-7).

Having evaluated the integrals $\int_{\Omega} x^{\alpha}e^{-\eta x^{\beta}}dx$, $\Omega \subseteq \mathbb{R}$ and $\eta \in \mathbb{R}_+$, in Theorem 1, new probability measures that further generalize the generalized gamma distribution and the generalized Gaussian distribution were constructed. Their distribution functions were also written in terms of the confluent hypergeometric function ${}_1F_1$, and formulas for the nth moments were obtained as well in Section 3 (Theorems 8-10 and Corollaries 2-3). The results obtained in this paper may, for example, be used to construct better statistical tests than those already known (e.g. $\chi^2$ statistical tests and tests obtained based on the normal distribution).

Theorem 1 also turns out to be a generalization of the Mellin transform of the function $e^{-\eta x^{\beta}}$, $\operatorname{Re}\{\eta\} > 0$, $\beta > 0$, where $s = \alpha + 1$ is the Mellin parameter; in this case the parameter can be negative ($s = \alpha + 1 < 0$), and the constant β can be negative as well ($\beta < 0$). It is also worth clarifying that the gamma function and the incomplete gamma function are particular cases of the definite integral $\int_{\Omega} x^{\alpha}e^{-\eta x^{\beta}}dx$ because $\operatorname{Re}(\alpha)$ and $\operatorname{Re}(\beta)$ may simultaneously be negative (see the Introduction).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] NIST Digital Library of Mathematical Functions.
http://dlmf.nist.gov
[2] Marchisotto, E.A. and Zakeri, G.A. (1994) An Invitation to Integration in Finite Terms. The College Mathematics Journal, 25, 295-308.
https://doi.org/10.1080/07468342.1994.11973625
[3] Rosenlicht, M. (1972) Integration in Finite Terms. The American Mathematical Monthly, 79, 963-972.
https://doi.org/10.1080/00029890.1972.11993166
[4] Nijimbere, V. (2018) Evaluation of Some Non-Elementary Integrals Involving Sine, Cosine, Exponential and Logarithmic Integrals: Part I. Ural Mathematical Journal, 4, 24-42.
https://doi.org/10.15826/umj.2018.1.003
[5] Nijimbere, V. (2018) Evaluation of Some Non-Elementary Integrals Involving Sine, Cosine, Exponential and Logarithmic Integrals: Part II. Ural Mathematical Journal, 4, 43-55.
https://doi.org/10.15826/umj.2018.1.004
[6] Nijimbere, V. (2017) Evaluation of the Non-Elementary Integral ∫eλxαdx, α≥2, and Other Related Integrals. Ural Mathematical Journal, 3, 130-142.
https://doi.org/10.15826/umj.2017.2.014
[7] Nijimbere, V. (2019) Analytical and Asymptotic Evaluations of Dawson’s Integral and Related Functions in Mathematical Physics. Journal of Applied Analysis, 25, 43-55.
https://doi.org/10.1515/jaa-2019-0019
[8] Abrarov, S.M. and Quine, B.M. (2018) A Rational Approximation of the Dawson's Integral for Efficient Computation of the Complex Error Function. Applied Mathematics and Computation, 321, 526-543.
https://doi.org/10.1016/j.amc.2017.10.032
[9] Abrarov, S.M. and Quine, B.M. (2018) A Sampling-Based Approximation of the Complex Error Function and Its Implementation without Poles. Applied Numerical Mathematics, 129, 181-191.
https://doi.org/10.1016/j.apnum.2018.03.009
[10] Al-Salman, A., Rhouma, M.B.H. and Al-Jarrah, A.A. (2011) On Integrals and Sums Involving Special Functions. Missouri Journal of Mathematical Sciences, 23, 123-141.
https://doi.org/10.35834/mjms/1321045141
[11] Choi, J. and Rathie, A.K. (2013) On a Hypergeometric Summation Theorem Due to Quereshi et al. Communications of the Korean Mathematical Society, 28, 527-534.
https://doi.org/10.4134/CKMS.2013.28.3.527
[12] Qureshi, M.I., Quraishi, K.A. and Srivastava, H.M. (2008) Some Hypergeometric Summation Formulas and Series Identities Associated with Exponential and Trigonometric Functions. Integral Transforms and Special Functions, 19, 267-276.
https://doi.org/10.1080/10652460801896024
[13] Nijimbere, V. (2020) Evaluation of Some Non-Elementary Integrals Involving the Generalized Hypergeometric Function with Some Applications.
https://arxiv.org/abs/2003.07403
[14] Kiche, J., Ngesa, O. and Orwa, G. (2019) On Generalized Gamma Distribution and Its Application to Survival Data. International Journal of Statistics and Probability, 8, 85-102.
https://doi.org/10.5539/ijsp.v8n5p85
[15] Dytso, A., Bustin, R., Poor, H.V. and Shamai, S. (2018) Analytical Properties of Generalized Gaussian Distributions. Journal of Statistical Distributions and Applications, 5, 6.
https://doi.org/10.1186/s40488-018-0088-5
[16] Freud, G. (1976) On the Coefficients in the Recursion Formulae of Orthogonal Polynomials. Proceedings of the Royal Irish Academy, 76, 1-6.
[17] Abramowitz, M. and Stegun, I.A. (1964) Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. National Bureau of Standards, Washington D.C.
[18] Poularikas, A.D. (1998) Handbook of Formulas and Tables for Signal Processing. CRC Press, Boca Raton.
https://doi.org/10.1201/9781420049701
[19] List of Probability Distributions. Wikipedia, The Free Encyclopedia, 2020.
https://en.wikipedia.org/wiki/List_of_probability_distributions
[20] Billingsley, P. (1995) Probability and Measure. 3rd Edition, Wiley Series in Probability and Mathematical Statistics, Hoboken.
[21] Chen, Z., Qiu, L. and Liang, X. (2016) Area Spectral Efficiency Analysis and Energy Consumption Minimization in Multiantenna Poisson Distributed Networks. IEEE Transactions on Wireless Communications, 15, 4862-4874.
https://doi.org/10.1109/TWC.2016.2547912
[22] Wu, M., Yin, B., Wang, G., Dick, C., Cavallaro, J.R. and Studer, C. (2014) Large-Scale MIMO Detection for 3GPP LTE: Algorithms and FPGA Implementations. IEEE Journal of Selected Topics in Signal Processing, 8, 916-929.
https://doi.org/10.1109/JSTSP.2014.2313021
[23] Richter, W.-D. (2016) Exact Inference on Scaling Parameters in Norm and Antinorm Contoured Sample Distributions. Journal of Statistical Distributions and Applications, 3, 8.
https://doi.org/10.1186/s40488-016-0046-z

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.