The non-elementary integrals involving elementary exponential, hyperbolic and trigonometric functions, where $\alpha$, $\eta$ and $\beta$ are real or complex constants, are evaluated in terms of the confluent hypergeometric function ${}_1F_1$ and the hypergeometric function ${}_1F_2$. The hyperbolic and Euler identities are used to derive some identities involving exponential, hyperbolic and trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$. Having evaluated these non-elementary integrals, some new probability measures generalizing the gamma-type and Gaussian distributions are also obtained. The obtained generalized probability distributions may, for example, allow one to perform better statistical tests than those already known (e.g. chi-square ($\chi^2$) statistical tests and other statistical tests constructed based on the central limit theorem (CLT)), while avoiding the use of computational approximations (or methods), which are in general expensive and associated with numerical errors.

The confluent hypergeometric function ${}_1F_1$ and the hypergeometric function ${}_1F_2$ are used throughout this paper. They are defined here for reference; see for example [

Definition 1. The confluent hypergeometric function, denoted ${}_1F_1$, is a special function given by the series

$${}_1F_1(a;b;x)=\sum_{n=0}^{\infty}\frac{(a)_n}{(b)_n}\frac{x^{n}}{n!},\tag{1}$$

where $a$ and $b$ are arbitrary constants,

$$(\vartheta)_n=\vartheta(\vartheta+1)\cdots(\vartheta+n-1)=\prod_{m=1}^{n}(\vartheta+m-1)=\Gamma(\vartheta+n)/\Gamma(\vartheta)$$

(Pochhammer's notation) for any complex $\vartheta$, with $(\vartheta)_0=1$, and $\Gamma$ is the standard gamma function.

Definition 2. The hypergeometric function ${}_1F_2$ is a special function given by the series

$${}_1F_2(a;b,c;x)=\sum_{n=0}^{\infty}\frac{(a)_n}{(b)_n(c)_n}\frac{x^{n}}{n!},\tag{2}$$

where $a$, $b$ and $c$ are arbitrary constants and $(\vartheta)_n=\Gamma(\vartheta+n)/\Gamma(\vartheta)$ is Pochhammer's symbol (see Definition 1).
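For readers who wish to experiment numerically, the truncated series (1) and (2) are easy to implement directly. The sketch below (Python, standard library only; the helper names `poch`, `hyp1f1` and `hyp1f2` are ad hoc, not a library API) checks two classical special cases, ${}_1F_1(1;2;z)=(e^{z}-1)/z$ and ${}_1F_2(a;a,1/2;z^{2}/4)=\cosh z$.

```python
import math

def poch(t, n):
    """Pochhammer symbol (t)_n = t (t+1) ... (t+n-1), with (t)_0 = 1."""
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f1(a, b, z, terms=60):
    """Truncated series (1) for the confluent hypergeometric function 1F1(a; b; z)."""
    return sum(poch(a, n) / poch(b, n) * z**n / math.factorial(n) for n in range(terms))

def hyp1f2(a, b, c, z, terms=60):
    """Truncated series (2) for the hypergeometric function 1F2(a; b, c; z)."""
    return sum(poch(a, n) / (poch(b, n) * poch(c, n)) * z**n / math.factorial(n)
               for n in range(terms))

z = 1.3
# 1F1(1; 2; z) = (e^z - 1)/z
assert abs(hyp1f1(1.0, 2.0, z) - (math.exp(z) - 1.0) / z) < 1e-12
# 1F2(a; a, 1/2; z^2/4) = cosh(z), since the (a)_n factors cancel
assert abs(hyp1f2(1.0, 1.0, 0.5, z * z / 4.0) - math.cosh(z)) < 1e-12
```

Both checks pass because the series are entire in $x$, so plain truncation converges quickly for moderate arguments.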

Definition 3. An elementary function is a function of one variable constructed from that variable and constants by performing a finite number of repeated algebraic operations involving exponentials and logarithms. An indefinite integral which can be expressed in terms of elementary functions is an elementary integral. If, on the other hand, it cannot be evaluated in terms of elementary functions, then it is non-elementary [

One of the goals of this work is to show how non-elementary integrals of the types

$$\int x^{\alpha}e^{\eta x^{\beta}}dx,\quad\int x^{\alpha}\cosh(\eta x^{\beta})\,dx,\quad\int x^{\alpha}\sinh(\eta x^{\beta})\,dx,\tag{3}$$

$$\int x^{\alpha}\cos(\eta x^{\beta})\,dx\quad\text{and}\quad\int x^{\alpha}\sin(\eta x^{\beta})\,dx,\tag{4}$$

where $\alpha$, $\eta$ and $\beta$ are real or complex constants, can be evaluated in terms of the special functions ${}_1F_1$ and ${}_1F_2$.

It is worth clarifying that the integrals in (3) and (4) may be elementary or non-elementary depending on the values of the constants $\alpha$ and $\beta$. If, for instance, $\alpha=\beta-1$, then the integral

$$\int x^{\alpha}e^{\eta x^{\beta}}dx=\frac{1}{\eta\beta}\int\eta\beta x^{\beta-1}e^{\eta x^{\beta}}dx=\frac{e^{\eta x^{\beta}}}{\eta\beta}+C\tag{5}$$

is elementary because it is expressed in terms of the elementary function $e^{\eta x^{\beta}}$. In that case, the other integrals in (3) and (4) are also elementary, since they can be expressed as linear combinations of integrals such as that in (5) using the hyperbolic identities

$$\cosh(\eta x^{\beta})=\frac{e^{\eta x^{\beta}}+e^{-\eta x^{\beta}}}{2},\quad\sinh(\eta x^{\beta})=\frac{e^{\eta x^{\beta}}-e^{-\eta x^{\beta}}}{2}$$

and Euler's identities

$$\cos(\eta x^{\beta})=\frac{e^{i\eta x^{\beta}}+e^{-i\eta x^{\beta}}}{2},\quad\sin(\eta x^{\beta})=\frac{e^{i\eta x^{\beta}}-e^{-i\eta x^{\beta}}}{2i}.$$

Using Liouville's 1835 theorem, it can readily be shown that if $\alpha$ is not an integer and $\alpha\neq\beta-1$, then the integrals in (3) and (4) are non-elementary [

These integrals generalize the non-elementary integrals evaluated by Nijimbere [

$$\int\frac{e^{\eta x^{\beta}}}{x^{\alpha}}dx,\quad\int\frac{\cosh(\eta x^{\beta})}{x^{\alpha}}dx,\quad\int\frac{\sinh(\eta x^{\beta})}{x^{\alpha}}dx,$$

$$\int\frac{\cos(\eta x^{\beta})}{x^{\alpha}}dx\quad\text{and}\quad\int\frac{\sin(\eta x^{\beta})}{x^{\alpha}}dx.$$

If, on the other hand, $\alpha=0$, the non-elementary integrals in (3) and (4) reduce to the non-elementary integrals evaluated in Nijimbere [

$$\int e^{\eta x^{\beta}}dx,\quad\int\cosh(\eta x^{\beta})\,dx,\quad\int\sinh(\eta x^{\beta})\,dx,$$

$$\int\cos(\eta x^{\beta})\,dx\quad\text{and}\quad\int\sin(\eta x^{\beta})\,dx.$$

Once the indefinite non-elementary integrals in (3) and (4) are evaluated, their corresponding definite integrals

$$\int_{B_1}^{B_2}x^{\alpha}e^{\eta x^{\beta}}dx,\quad\int_{B_1}^{B_2}x^{\alpha}\cosh(\eta x^{\beta})\,dx,\quad\int_{B_1}^{B_2}x^{\alpha}\sinh(\eta x^{\beta})\,dx,$$

$$\int_{B_1}^{B_2}x^{\alpha}\cos(\eta x^{\beta})\,dx\quad\text{and}\quad\int_{B_1}^{B_2}x^{\alpha}\sin(\eta x^{\beta})\,dx,$$

where $B_1$ and $B_2$ are arbitrary constants or functions, can be evaluated as well.

For instance, the incomplete gamma function

$$\gamma(z_2,z_1)=\int_0^{z_1}x^{z_2-1}e^{-x}dx,$$

which is a very useful special function in both applied analysis and the applied sciences, is a particular case of the definite non-elementary integral $\int_{B_1}^{B_2}x^{\alpha}e^{\eta x^{\beta}}dx$ in which the limits of integration are $B_1=0$ and $B_2=z_1$, $\{z_1,z_2\}\in\mathbb{C}$, $z_2=\alpha+1$ has a positive real part ($\operatorname{Re}(z_2)>0$), $\eta=-1$ and $\beta=1$. So the gamma function,

$$\Gamma(z_2)=\lim_{|z_1|\to\infty}\gamma(z_2,z_1),\quad|\arg z_1|<\pi/2,$$

is, as well, simply a limiting particular case of the definite non-elementary integral $\int_{B_1}^{B_2}x^{\alpha}e^{\eta x^{\beta}}dx$ in which, for example, the real part of $\alpha$ can be negative ($\operatorname{Re}(\alpha)<0$), $\beta$ can be negative as well, and $B_1$ and $B_2$ can be arbitrary functions or constants. Thus, it is quite important to evaluate the non-elementary integrals in (3) and (4).

It is well known that numerical integration (or approximation) methods are expensive, and their main drawback is that they are associated with computational errors which become very large as the integration limits become large. The analytical method used in this paper is therefore very important in order to avoid computational methods. For example, Dawson's integral

$$\operatorname{Daw}(z)=e^{-z^{2}}\int_0^{z}e^{x^{2}}dx$$

and other related functions in mathematical physics, such as the Faddeeva, Fried-Conte, Jackson, Fresnel and Gordeyev integrals, were analytically evaluated by Nijimbere [

Another goal of this work is to obtain some identities (or formulas) involving exponential, hyperbolic and trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$ using the Euler and hyperbolic identities. Other interesting identities involving hypergeometric functions may be found, for example, in [

Using the fact that $g(x)=e^{-\eta x^{\beta}}$, $x\in\mathbb{R}$, $\eta\in\mathbb{R}^{+}$, is in the $L^{p}$-space, $p>0$, for some $\beta\in\mathbb{R}$, a finite measure $\mu((-\infty,x])<\infty$ can be defined for all $x\in\mathbb{R}$. Moreover, if $X=h(x)$, $x\in\mathbb{R}$, is some random variable, where $h:\mathbb{R}\to\mathbb{R}$ is some well-defined function (e.g. $h(x)=x$), then it is possible to define probability measures in terms of the Lebesgue measure $dx$ as $\mu(dx)=A\,g(x)\,dx$, $x\in\Omega\subseteq\mathbb{R}$, satisfying the integrability condition $\int_{\Omega}|X|^{\alpha}\,\mu(dx)<\infty$, $\alpha\neq0$, $\alpha>-\beta-1$, with $A$ being a (normalization) constant. In that case, new probability measures (or distributions) that generalize the gamma-type and Gaussian-type distributions may be constructed, and the corresponding distribution functions and moments can be evaluated as well.

Definition 4. The generalized gamma probability distribution is a three-parameter probability distribution, say with parameters $\phi>0$, $\kappa>0$ and $\beta>0$; a random variable $X$ has a generalized gamma distribution if it has the probability density function (p.d.f.)

$$f_X(x;\phi,\kappa,\beta)=\frac{\beta/\kappa^{\phi}}{\Gamma(\phi/\beta)}\,x^{\phi-1}e^{-(x/\kappa)^{\beta}},\quad x\in\mathbb{R}^{+},\ \phi>0,\ \kappa>0,\ \beta>0.\tag{6}$$

Definition 5. The generalized normal (Gaussian) probability distribution is a four-parameter probability distribution, say with parameters $\eta>0$, $\theta\in\mathbb{R}$, $\beta>0$ and $\sigma>0$; a random variable $X$ has a generalized normal distribution if it has the probability density function (p.d.f.)

$$f_X(x;\eta,\beta,\theta,\sigma)=\frac{\beta\,\eta^{1/\beta}}{2\sigma\,\Gamma(1/\beta)}\exp\!\left(-\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right),\quad x\in\mathbb{R},\ \eta>0,\ \theta\in\mathbb{R},\ \beta>0,\ \sigma>0.\tag{7}$$

Recent studies about generalized gamma and Gaussian probability distributions, or involving these probability distributions, may, for example, be found in [. These include results on the $n$th moments of the generalized Gaussian distribution. Here, the formula for the $n$th moments of the generalized Gaussian distribution is obtained, and in particular, it is shown that the $n$th moments of the generalized normal distribution in Definition 5 are given by

$$M(X^{n})=\int_{-\infty}^{+\infty}x^{n}f_X(x;\eta,\beta,\theta,\sigma)\,dx=\frac{\theta^{n}}{\Gamma(1/\beta)}\sum_{l=0}^{\lfloor n/2\rfloor}\Gamma\!\left(\frac{2l+1}{\beta}\right)C_{2l}^{\,n}\left(\frac{\sigma}{\theta\,\eta^{1/\beta}}\right)^{2l},\quad 2l\le n,\ (2l)\in\mathbb{N},\tag{8}$$

where $C_{2l}^{\,n}=n!/((n-2l)!(2l)!)$, $\theta\in\mathbb{R}$ is the mean of the Gaussian random variable and $\sigma^{2}>0$ its variance. It is also shown, for instance, that the inverse gamma distribution is a particular case of the generalized gamma-type distribution derived in this study.
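As a quick consistency check of formula (8), note that for $\beta=2$ and $\eta=1/2$ the distribution in Definition 5 reduces to $N(\theta,\sigma^{2})$, so (8) must reproduce the classical normal moments $\theta^{2}+\sigma^{2}$ ($n=2$) and $\theta^{4}+6\theta^{2}\sigma^{2}+3\sigma^{4}$ ($n=4$). A minimal Python sketch (standard library only; `moment` is a hypothetical helper name):

```python
import math

def binom(n, k):
    return math.factorial(n) // (math.factorial(n - k) * math.factorial(k))

def moment(n, eta, beta, theta, sigma):
    """n-th moment of the generalized normal distribution via formula (8)."""
    total = 0.0
    for l in range(n // 2 + 1):  # sum over l with 2l <= n
        total += (math.gamma((2 * l + 1) / beta) * binom(n, 2 * l)
                  * (sigma / (theta * eta ** (1.0 / beta))) ** (2 * l))
    return theta ** n * total / math.gamma(1.0 / beta)

theta, sigma = 1.7, 0.6
m2 = moment(2, 0.5, 2.0, theta, sigma)
m4 = moment(4, 0.5, 2.0, theta, sigma)
assert abs(m2 - (theta**2 + sigma**2)) < 1e-9
assert abs(m4 - (theta**4 + 6*theta**2*sigma**2 + 3*sigma**4)) < 1e-9
```

The check uses $\Gamma(3/2)/\Gamma(1/2)=1/2$ and $\Gamma(5/2)/\Gamma(1/2)=3/4$, which is exactly how the binomial-expansion derivation of (8) collapses to the familiar normal moments.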

The integrals examined here may also find applications in functional analysis, Gaussian Hilbert space, in which Hermite polynomials form a vector space with a Gaussian weight function, Freud weight and associated orthogonal polynomials [

The paper is organized as follows. In Section 2, the integrals in (3) and (4) are evaluated, and some new identities (or formulas) that involve the exponential, hyperbolic and trigonometric functions and the hypergeometric functions ${}_1F_1$ and ${}_1F_2$ are obtained. In Section 3, new probability measures that generalize the gamma-type and Gaussian-type distributions are constructed, and their corresponding distribution functions are written in terms of the confluent hypergeometric function. Formulas to evaluate the $n$th moments are also derived in Section 3. A general discussion is given in Section 4. The main results of the paper are given as propositions, theorems and corollaries in Sections 2.1, 2.2, 3.1 and 3.2.

Let us first prove a lemma which will be used throughout the paper.

Lemma 1. Let $j\ge0$ and $m\ge0$ be integers, and let $\alpha$ and $\beta$ be arbitrary constants (with $\beta\neq0$ and all factors below nonzero).

1) Then

$$\prod_{m=0}^{j}(\alpha+m\beta+1)=(\alpha+1)\,\beta^{j}\left(\frac{\alpha+1}{\beta}+1\right)_{j},\tag{9}$$

2)

$$\prod_{m=0}^{2j}(\alpha+m\beta+1)=(\alpha+1)\,(2\beta)^{2j}\left(\frac{\alpha+\beta+1}{2\beta}\right)_{j}\left(\frac{\alpha+2\beta+1}{2\beta}\right)_{j},\tag{10}$$

3) and

$$\prod_{m=0}^{2j+1}(\alpha+m\beta+1)=(\alpha+1)(\alpha+\beta+1)\,(2\beta)^{2j}\left(\frac{\alpha+2\beta+1}{2\beta}\right)_{j}\left(\frac{\alpha+3\beta+1}{2\beta}\right)_{j}.\tag{11}$$

Proof.

1) Making use of Pochhammer's notation [

$$\prod_{m=0}^{j}(\alpha+m\beta+1)=(\alpha+1)\prod_{m=1}^{j}(\alpha+m\beta+1)=(\alpha+1)\,\beta^{j}\prod_{m=1}^{j}\left(\frac{\alpha+1}{\beta}+m\right)=(\alpha+1)\,\beta^{j}\prod_{m=1}^{j}\left(\frac{\alpha+1}{\beta}+1+m-1\right)=(\alpha+1)\,\beta^{j}\left(\frac{\alpha+1}{\beta}+1\right)_{j}.\tag{12}$$

2) Observe that

$$\prod_{m=0}^{2j}(\alpha+m\beta+1)=\prod_{l=0}^{j-1}(\alpha+l(2\beta)+\beta+1)\prod_{l=0}^{j}(\alpha+l(2\beta)+1).\tag{13}$$

Then, making use of Pochhammer’s notation as before gives

$$\prod_{l=0}^{j-1}(\alpha+l(2\beta)+\beta+1)=(2\beta)^{j}\left(\frac{\alpha+\beta+1}{2\beta}\right)_{j}\tag{14}$$

and

$$\prod_{l=0}^{j}(\alpha+l(2\beta)+1)=(\alpha+1)\,(2\beta)^{j}\left(\frac{\alpha+2\beta+1}{2\beta}\right)_{j}.\tag{15}$$

Hence, multiplying (14) with (15) gives (10).

3) Observe that

$$\prod_{m=0}^{2j+1}(\alpha+m\beta+1)=\prod_{l=0}^{j}(\alpha+l(2\beta)+1)\prod_{l=0}^{j}(\alpha+l(2\beta)+\beta+1).\tag{16}$$

Using Pochhammer's notation once again yields

$$\prod_{l=0}^{j}(\alpha+l(2\beta)+\beta+1)=(\alpha+\beta+1)\,(2\beta)^{j}\left(\frac{\alpha+3\beta+1}{2\beta}\right)_{j}.\tag{17}$$

Hence, multiplying (17) with (15) gives (11). □
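The product identities (9)-(11) can be spot-checked numerically for arbitrary parameter values. The following Python sketch (standard library only; the helper names are ad hoc) does so for one sample choice of $\alpha$, $\beta$ and $j$:

```python
def poch(t, n):
    """Pochhammer symbol (t)_n."""
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def lhs(alpha, beta, top):
    """Product of (alpha + m*beta + 1) for m = 0..top."""
    p = 1.0
    for m in range(top + 1):
        p *= alpha + m * beta + 1.0
    return p

a, b, j = 0.3, 1.7, 4
# identity (9)
rhs9 = (a + 1.0) * b**j * poch((a + 1.0) / b + 1.0, j)
assert abs(lhs(a, b, j) / rhs9 - 1.0) < 1e-12
# identity (10)
rhs10 = ((a + 1.0) * (2 * b) ** (2 * j)
         * poch((a + b + 1.0) / (2 * b), j) * poch((a + 2 * b + 1.0) / (2 * b), j))
assert abs(lhs(a, b, 2 * j) / rhs10 - 1.0) < 1e-12
# identity (11)
rhs11 = ((a + 1.0) * (a + b + 1.0) * (2 * b) ** (2 * j)
         * poch((a + 2 * b + 1.0) / (2 * b), j) * poch((a + 3 * b + 1.0) / (2 * b), j))
assert abs(lhs(a, b, 2 * j + 1) / rhs11 - 1.0) < 1e-12
```

Ratios are compared to 1 rather than differences to 0 because the products grow quickly with $j$.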

Now, some of the main results of this paper can be obtained.

Proposition 1. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq0$, $\beta\neq0$), and let $\alpha$ be any constant different from $-1$ ($\alpha\neq-1$). Then,

$$\int x^{\alpha}e^{\eta x^{\beta}}dx=\frac{x^{\alpha+1}e^{\eta x^{\beta}}}{\alpha+1}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^{\beta}\right)+C.\tag{18}$$

The Kummer transformation (formula 13.1.27 in [

$$\int x^{\alpha}e^{\eta x^{\beta}}dx=\frac{x^{\alpha+1}}{\alpha+1}\,{}_1F_1\!\left(\frac{\alpha+1}{\beta};\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)+C.\tag{19}$$

Proof. The substitution $u^{\beta}=\eta x^{\beta}$ (i.e. $u=\eta^{1/\beta}x$) and (1) yield

$$\int x^{\alpha}e^{\eta x^{\beta}}dx=\frac{1}{\eta^{\frac{\alpha+1}{\beta}}}\int u^{\alpha}e^{u^{\beta}}du.\tag{20}$$

Performing successive integrations by parts that increase the power of $u$ gives

$$\int u^{\alpha}e^{u^{\beta}}du=\frac{u^{\alpha+1}e^{u^{\beta}}}{\alpha+1}-\frac{\beta u^{\alpha+\beta+1}e^{u^{\beta}}}{(\alpha+1)(\alpha+\beta+1)}+\frac{\beta^{2}u^{\alpha+2\beta+1}e^{u^{\beta}}}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)}-\cdots+\frac{(-1)^{j}\beta^{j}u^{\alpha+j\beta+1}e^{u^{\beta}}}{\prod_{m=0}^{j}(\alpha+m\beta+1)}+\cdots=\sum_{j=0}^{\infty}\frac{(-1)^{j}\beta^{j}u^{\alpha+j\beta+1}e^{u^{\beta}}}{\prod_{m=0}^{j}(\alpha+m\beta+1)}+C.\tag{21}$$

Using (9) in Lemma 1 yields

$$\int u^{\alpha}e^{u^{\beta}}du=u^{\alpha+1}e^{u^{\beta}}\sum_{j=0}^{\infty}\frac{(-\beta u^{\beta})^{j}}{\prod_{m=0}^{j}(\alpha+m\beta+1)}+C=\frac{u^{\alpha+1}e^{u^{\beta}}}{\alpha+1}\sum_{j=0}^{\infty}\frac{(-u^{\beta})^{j}}{\left(\frac{\alpha+1}{\beta}+1\right)_{j}}+C=\frac{u^{\alpha+1}e^{u^{\beta}}}{\alpha+1}\sum_{j=0}^{\infty}\frac{(1)_{j}\,(-u^{\beta})^{j}}{\left(\frac{\alpha+1}{\beta}+1\right)_{j}\,j!}+C=\frac{u^{\alpha+1}e^{u^{\beta}}}{\alpha+1}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-u^{\beta}\right)+C.\tag{22}$$

Hence, using the fact that $u^{\beta}=\eta x^{\beta}$ gives (18). □
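A simple numerical sanity check of (18) is to differentiate its right-hand side and compare with the integrand. The Python sketch below (standard library only; `hyp1f1` is a truncated-series helper and the parameter values are arbitrary samples) uses a central difference:

```python
import math

def poch(t, n):
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f1(a, b, z, terms=60):
    """Truncated series for 1F1(a; b; z)."""
    return sum(poch(a, n) / poch(b, n) * z**n / math.factorial(n) for n in range(terms))

alpha, beta, eta = 0.5, 2.0, 0.7

def antiderivative(x):
    """Right-hand side of (18) without the constant C."""
    return (x**(alpha + 1) * math.exp(eta * x**beta) / (alpha + 1)
            * hyp1f1(1.0, (alpha + beta + 1) / beta, -eta * x**beta))

# d/dx of (18) should recover the integrand x^alpha e^(eta x^beta)
x, h = 0.9, 1e-6
numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
exact = x**alpha * math.exp(eta * x**beta)
assert abs(numeric / exact - 1.0) < 1e-6
```

The same check with other sample values of $\alpha$, $\beta$ and $\eta$ (with $\alpha\neq-1$) passes as well.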

Having evaluated (18), the following results hold.

Theorem 1. Let $\alpha$ be an arbitrary real or complex constant, $\beta$ a nonzero real or complex constant ($\beta\neq0$), and $\eta$ a nonzero real or complex constant with a positive real part ($\operatorname{Re}(\eta)>0$).

1) Then,

$$\int_0^{+\infty}x^{\alpha}e^{-\eta x^{\beta}}dx=\frac{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}},\tag{23}$$

with $\alpha>-\beta-1$, $\alpha\neq-1$ if $\{\alpha,\beta\}\in\mathbb{R}$.

2) Moreover, if the integrand is even, then

$$\int_{-\infty}^{+\infty}x^{\alpha}e^{-\eta x^{\beta}}dx=\frac{2\,\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}.\tag{24}$$

Proof. It can readily be shown, using Proposition 1 and the asymptotic expansion of the confluent hypergeometric function (formula 13.1.5 in [

$$\int_0^{+\infty}x^{\alpha}e^{-\eta x^{\beta}}dx=\lim_{x\to\infty}\frac{x^{\alpha+1}e^{-\eta x^{\beta}}}{\alpha+1}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)=\frac{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}.\tag{25}$$

If the integrand is even, then $\int_{-\infty}^{+\infty}x^{\alpha}e^{-\eta x^{\beta}}dx=2\int_0^{+\infty}x^{\alpha}e^{-\eta x^{\beta}}dx$, and this gives (24). □
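Formula (23) can likewise be checked against direct numerical quadrature. A Python sketch (standard library only, composite Simpson rule, arbitrary sample parameters; the gamma-function form on the right of (23) equals the classical result $\Gamma((\alpha+1)/\beta)/(\beta\,\eta^{(\alpha+1)/\beta})$):

```python
import math

alpha, beta, eta = 1.2, 1.5, 0.8

# Right-hand side of (23)
closed_form = math.gamma((alpha + beta + 1) / beta) / ((alpha + 1) * eta**((alpha + 1) / beta))

def f(x):
    return x**alpha * math.exp(-eta * x**beta)

# Composite Simpson quadrature on [0, 40]; the integrand is negligible beyond 40
n, b_ = 120000, 40.0
h = b_ / n
s = f(0.0) + f(b_) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
quadrature = s * h / 3.0
assert abs(quadrature / closed_form - 1.0) < 1e-5
```

The agreement also confirms the identity $\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)/(\alpha+1)=\Gamma\!\left(\frac{\alpha+1}{\beta}\right)/\beta$ used implicitly throughout Section 3.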

Theorem 1 is, for instance, a generalization of the Mellin transform of the function $e^{-\eta x^{\beta}}$, $\operatorname{Re}(\eta)>0$, $\beta>0$, in which $s=\alpha+1$ is the Mellin parameter; here, $s=\alpha+1$ can be negative ($s=\alpha+1<0$), and the constant $\beta$ can be negative as well ($\beta<0$); see for example Poularikas [

Moreover, as it will shortly be shown (see Section 3), Theorem 1 can be used to obtain new probability distributions that generalize the gamma-type and Gaussian-type distributions that may lead to better statistical tests than those already known which are based on the central limit theorem (CLT).

Proposition 2. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq0$, $\beta\neq0$), and let $\alpha$ be some constant with $\alpha\neq-1$ and $\alpha\neq-\beta-1$. Then,

$$\int x^{\alpha}\cosh(\eta x^{\beta})\,dx=\frac{x^{\alpha+1}}{\alpha+1}\left[\cosh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sinh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+C.\tag{26}$$

Proof. The change of variable $u^{\beta}=\eta x^{\beta}$ yields

$$\int x^{\alpha}\cosh(\eta x^{\beta})\,dx=\frac{1}{\eta^{\frac{\alpha+1}{\beta}}}\int u^{\alpha}\cosh(u^{\beta})\,du.\tag{27}$$

Successive integrations by parts that increase the power of $u$ give

$$\int u^{\alpha}\cosh(u^{\beta})\,du=\frac{u^{\alpha+1}\cosh(u^{\beta})}{\alpha+1}-\frac{\beta u^{\alpha+\beta+1}\sinh(u^{\beta})}{(\alpha+1)(\alpha+\beta+1)}+\frac{\beta^{2}u^{\alpha+2\beta+1}\cosh(u^{\beta})}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)}-\cdots=\cosh(u^{\beta})\sum_{j=0}^{\infty}\frac{\beta^{2j}u^{\alpha+2j\beta+1}}{\prod_{m=0}^{2j}(\alpha+m\beta+1)}-\sinh(u^{\beta})\sum_{j=0}^{\infty}\frac{\beta^{2j+1}u^{\alpha+(2j+1)\beta+1}}{\prod_{m=0}^{2j+1}(\alpha+m\beta+1)}+C.\tag{28}$$

Using (10) and (11) in Lemma 1 yields

$$\int u^{\alpha}\cosh(u^{\beta})\,du=\frac{u^{\alpha+1}\cosh(u^{\beta})}{\alpha+1}\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right)-\frac{\beta u^{\alpha+\beta+1}\sinh(u^{\beta})}{(\alpha+1)(\alpha+\beta+1)}\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right)+C.\tag{29}$$

Hence, using the fact that $u^{\beta}=\eta x^{\beta}$ and rearranging terms gives (26). □

Proposition 3. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq0$, $\beta\neq0$), and let $\alpha$ be some constant with $\alpha\neq-1$ and $\alpha\neq-\beta-1$. Then,

$$\int x^{\alpha}\sinh(\eta x^{\beta})\,dx=\frac{x^{\alpha+1}}{\alpha+1}\left[\sinh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cosh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+C.\tag{30}$$

Proof. Making the change of variable $u^{\beta}=\eta x^{\beta}$ as before yields

$$\int x^{\alpha}\sinh(\eta x^{\beta})\,dx=\frac{1}{\eta^{\frac{\alpha+1}{\beta}}}\int u^{\alpha}\sinh(u^{\beta})\,du.\tag{31}$$

Performing successive integrations by parts that increase the power of $u$ as before gives

$$\int u^{\alpha}\sinh(u^{\beta})\,du=\frac{u^{\alpha+1}\sinh(u^{\beta})}{\alpha+1}-\frac{\beta u^{\alpha+\beta+1}\cosh(u^{\beta})}{(\alpha+1)(\alpha+\beta+1)}+\frac{\beta^{2}u^{\alpha+2\beta+1}\sinh(u^{\beta})}{(\alpha+1)(\alpha+\beta+1)(\alpha+2\beta+1)}-\cdots=\sinh(u^{\beta})\sum_{j=0}^{\infty}\frac{\beta^{2j}u^{\alpha+2j\beta+1}}{\prod_{m=0}^{2j}(\alpha+m\beta+1)}-\cosh(u^{\beta})\sum_{j=0}^{\infty}\frac{\beta^{2j+1}u^{\alpha+(2j+1)\beta+1}}{\prod_{m=0}^{2j+1}(\alpha+m\beta+1)}+C.\tag{32}$$

Using (10) and (11) in Lemma 1 yields

$$\int u^{\alpha}\sinh(u^{\beta})\,du=\frac{u^{\alpha+1}\sinh(u^{\beta})}{\alpha+1}\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right)-\frac{\beta u^{\alpha+\beta+1}\cosh(u^{\beta})}{(\alpha+1)(\alpha+\beta+1)}\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{u^{2\beta}}{4}\right)+C.\tag{33}$$

Hence, using the fact that $u^{\beta}=\eta x^{\beta}$ and rearranging terms gives (30). □
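The right-hand sides of (26) and (30) can both be checked by numerical differentiation, as was done for (18). A Python sketch (standard library only; `hyp1f2` is a truncated-series helper and the parameter values are arbitrary samples):

```python
import math

def poch(t, n):
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f2(a, b, c, z, terms=60):
    """Truncated series for 1F2(a; b, c; z)."""
    return sum(poch(a, n) / (poch(b, n) * poch(c, n)) * z**n / math.factorial(n)
               for n in range(terms))

alpha, beta, eta = 0.5, 2.0, 0.7
b1 = (alpha + beta + 1) / (2 * beta)
b2 = (alpha + 2 * beta + 1) / (2 * beta)
b3 = (alpha + 3 * beta + 1) / (2 * beta)

def F_cosh(x):
    """Right-hand side of (26) without C."""
    z = eta * x**beta
    return (x**(alpha + 1) / (alpha + 1)
            * (math.cosh(z) * hyp1f2(1.0, b1, b2, z * z / 4)
               - beta * z / (alpha + beta + 1) * math.sinh(z) * hyp1f2(1.0, b2, b3, z * z / 4)))

def F_sinh(x):
    """Right-hand side of (30) without C."""
    z = eta * x**beta
    return (x**(alpha + 1) / (alpha + 1)
            * (math.sinh(z) * hyp1f2(1.0, b1, b2, z * z / 4)
               - beta * z / (alpha + beta + 1) * math.cosh(z) * hyp1f2(1.0, b2, b3, z * z / 4)))

x, h = 0.9, 1e-6
d_cosh = (F_cosh(x + h) - F_cosh(x - h)) / (2 * h)
d_sinh = (F_sinh(x + h) - F_sinh(x - h)) / (2 * h)
assert abs(d_cosh - x**alpha * math.cosh(eta * x**beta)) < 1e-5
assert abs(d_sinh - x**alpha * math.sinh(eta * x**beta)) < 1e-5
```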

Theorem 2. For any constants $\alpha$, $\beta$ and $\eta$,

$$\cosh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sinh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)=\frac{1}{2}\left[e^{\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^{\beta}\right)+e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)\right].\tag{34}$$

Proof. Using the hyperbolic identity $\cosh(\eta x^{\beta})=(e^{\eta x^{\beta}}+e^{-\eta x^{\beta}})/2$ and Proposition 1 yields

$$\int x^{\alpha}\cosh(\eta x^{\beta})\,dx=\frac{1}{2}\left(\int x^{\alpha}e^{\eta x^{\beta}}dx+\int x^{\alpha}e^{-\eta x^{\beta}}dx\right)=\frac{x^{\alpha+1}}{2(\alpha+1)}\left[e^{\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^{\beta}\right)+e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)\right]+C.\tag{35}$$

Hence, comparing (35) with (26) gives (34). □

Theorem 3. For any constants $\alpha$, $\beta$ and $\eta$,

$$\sinh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cosh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)=\frac{1}{2}\left[e^{\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^{\beta}\right)-e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)\right].\tag{36}$$

Proof. Using the hyperbolic identity $\sinh(\eta x^{\beta})=(e^{\eta x^{\beta}}-e^{-\eta x^{\beta}})/2$ and Proposition 1 yields

$$\int x^{\alpha}\sinh(\eta x^{\beta})\,dx=\frac{1}{2}\left(\int x^{\alpha}e^{\eta x^{\beta}}dx-\int x^{\alpha}e^{-\eta x^{\beta}}dx\right)=\frac{x^{\alpha+1}}{2(\alpha+1)}\left[e^{\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^{\beta}\right)-e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)\right]+C.\tag{37}$$

Hence, comparing (37) with (30) gives (36). □

Theorem 4. For any constants $\alpha$, $\beta$ and $\eta$,

$$e^{\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-\eta x^{\beta}\right)=\left[\cosh(\eta x^{\beta})+\sinh(\eta x^{\beta})\right]{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\left[\cosh(\eta x^{\beta})+\sinh(\eta x^{\beta})\right]{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right).\tag{38}$$

Proof. The hyperbolic relation $e^{\eta x^{\beta}}=\cosh(\eta x^{\beta})+\sinh(\eta x^{\beta})$ and Propositions 2 and 3 give

$$\int x^{\alpha}e^{\eta x^{\beta}}dx=\int x^{\alpha}\cosh(\eta x^{\beta})\,dx+\int x^{\alpha}\sinh(\eta x^{\beta})\,dx=\frac{x^{\alpha+1}}{\alpha+1}\left[\cosh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sinh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+\frac{x^{\alpha+1}}{\alpha+1}\left[\sinh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cosh(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+C.\tag{39}$$

Hence, comparing (39) with (18) gives (38). □
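Identities (34) and (36) can be verified numerically at a sample point. A Python sketch (standard library only, truncated series for ${}_1F_1$ and ${}_1F_2$, arbitrary sample parameters):

```python
import math

def poch(t, n):
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f1(a, b, z, terms=60):
    return sum(poch(a, n) / poch(b, n) * z**n / math.factorial(n) for n in range(terms))

def hyp1f2(a, b, c, z, terms=60):
    return sum(poch(a, n) / (poch(b, n) * poch(c, n)) * z**n / math.factorial(n)
               for n in range(terms))

alpha, beta, eta, x = 0.5, 2.0, 0.7, 1.1
z = eta * x**beta
c = (alpha + beta + 1) / beta
b1, b2, b3 = [(alpha + k * beta + 1) / (2 * beta) for k in (1, 2, 3)]

lhs34 = (math.cosh(z) * hyp1f2(1.0, b1, b2, z * z / 4)
         - beta * z / (alpha + beta + 1) * math.sinh(z) * hyp1f2(1.0, b2, b3, z * z / 4))
rhs34 = 0.5 * (math.exp(z) * hyp1f1(1.0, c, -z) + math.exp(-z) * hyp1f1(1.0, c, z))
assert abs(lhs34 - rhs34) < 1e-8

lhs36 = (math.sinh(z) * hyp1f2(1.0, b1, b2, z * z / 4)
         - beta * z / (alpha + beta + 1) * math.cosh(z) * hyp1f2(1.0, b2, b3, z * z / 4))
rhs36 = 0.5 * (math.exp(z) * hyp1f1(1.0, c, -z) - math.exp(-z) * hyp1f1(1.0, c, z))
assert abs(lhs36 - rhs36) < 1e-8
```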

Proposition 4. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq0$, $\beta\neq0$), and let $\alpha$ be some constant with $\alpha\neq-1$ and $\alpha\neq-\beta-1$. Then,

$$\int x^{\alpha}\cos(\eta x^{\beta})\,dx=\frac{x^{\alpha+1}}{\alpha+1}\left[\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)+\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+C.\tag{40}$$

The proof is similar to the proof of Proposition 2, so it is omitted.

Proposition 5. Let $\eta$ and $\beta$ be nonzero constants ($\eta\neq0$, $\beta\neq0$), and let $\alpha$ be some constant with $\alpha\neq-1$ and $\alpha\neq-\beta-1$. Then,

$$\int x^{\alpha}\sin(\eta x^{\beta})\,dx=\frac{x^{\alpha+1}}{\alpha+1}\left[\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+C.\tag{41}$$

The proof of this proposition is also omitted since it is similar to that of Proposition 3.

Theorem 5. For any constants $\alpha$, $\beta$ and $\eta$,

$$\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)+\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)=\frac{1}{2}\left[e^{i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^{\beta}\right)+e^{-i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^{\beta}\right)\right].\tag{42}$$

Proof. Euler's identity $\cos(\eta x^{\beta})=(e^{i\eta x^{\beta}}+e^{-i\eta x^{\beta}})/2$ and Proposition 1 give

$$\int x^{\alpha}\cos(\eta x^{\beta})\,dx=\frac{1}{2}\left[\int x^{\alpha}e^{i\eta x^{\beta}}dx+\int x^{\alpha}e^{-i\eta x^{\beta}}dx\right]=\frac{x^{\alpha+1}}{2(\alpha+1)}\left[e^{i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^{\beta}\right)+e^{-i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^{\beta}\right)\right]+C.\tag{43}$$

Hence, comparing (43) with (40) gives (42). □

Theorem 6. For any constants $\alpha$, $\beta$ and $\eta$,

$$\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)=\frac{1}{2i}\left[e^{i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^{\beta}\right)-e^{-i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^{\beta}\right)\right].\tag{44}$$

Proof. Euler's identity $\sin(\eta x^{\beta})=(e^{i\eta x^{\beta}}-e^{-i\eta x^{\beta}})/(2i)$ and Proposition 1 give

$$\int x^{\alpha}\sin(\eta x^{\beta})\,dx=\frac{1}{2i}\left[\int x^{\alpha}e^{i\eta x^{\beta}}dx-\int x^{\alpha}e^{-i\eta x^{\beta}}dx\right]=\frac{x^{\alpha+1}}{2i(\alpha+1)}\left[e^{i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^{\beta}\right)-e^{-i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};i\eta x^{\beta}\right)\right]+C.\tag{45}$$

Hence, comparing (45) with (41) gives (44). □

Theorem 7. For any constants $\alpha$, $\beta$ and $\eta$,

$$e^{i\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};-i\eta x^{\beta}\right)=\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)+\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)+i\left[\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)\right].\tag{46}$$

Proof. Using the relation $e^{i\eta x^{\beta}}=\cos(\eta x^{\beta})+i\sin(\eta x^{\beta})$ and Propositions 4 and 5 yields

$$\int x^{\alpha}e^{i\eta x^{\beta}}dx=\int x^{\alpha}\cos(\eta x^{\beta})\,dx+i\int x^{\alpha}\sin(\eta x^{\beta})\,dx=\frac{x^{\alpha+1}}{\alpha+1}\left[\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)+\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+i\,\frac{x^{\alpha+1}}{\alpha+1}\left[\sin(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+\beta+1}{2\beta},\frac{\alpha+2\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)-\frac{\beta\eta x^{\beta}}{\alpha+\beta+1}\cos(\eta x^{\beta})\,{}_1F_2\!\left(1;\frac{\alpha+2\beta+1}{2\beta},\frac{\alpha+3\beta+1}{2\beta};-\frac{\eta^{2}x^{2\beta}}{4}\right)\right]+C.\tag{47}$$

Hence, comparing (47) with (18) (with $\eta$ replaced by $i\eta$) gives (46). □

In this section, Proposition 1 and Theorem 1 are used to generalize the gamma-type (e.g. the $\chi^{2}$ and inverse gamma distributions) and Gaussian-type distributions; see for example [

Define a probability measure $\mu$ in terms of the Lebesgue measure $dx$ as [

$$d\mu=\mu(dx)=A\,g(x;\alpha,\eta,\beta)\,dx=f_X(x;\alpha,\eta,\beta)\,dx,\quad x\in[0,+\infty),\tag{48}$$

where $f_X(x;\alpha,\eta,\beta)$ is the probability density function (p.d.f.) of a three-parameter distribution of some random variable $X$,

$$g(x;\alpha,\eta,\beta)=x^{\alpha}e^{-\eta x^{\beta}},\quad\alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1,\tag{49}$$

$\alpha$, $\eta$ and $\beta$ are parameters of the probability distribution of the random variable $X$, and $A$ is a normalization constant which can be obtained using formula (23) in Theorem 1.

After normalization, it is found that the p.d.f. of $X$ is given by

$$f_X(x;\alpha,\eta,\beta)=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha}e^{-\eta x^{\beta}},\quad\alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1.\tag{50}$$

The distribution function of the random variable $X$ can be obtained using Proposition 1 and is given by

$$F_X(x;\alpha,\eta,\beta)=\mu\{[0,x)\}=\int_0^{x}f_X(u;\alpha,\eta,\beta)\,du=\frac{\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha+1}e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right).\tag{51}$$

The $n$th moments $M(X^{n})$ can, as well, be evaluated using formula (23) in Theorem 1 to obtain

$$M(X^{n})=\int_0^{+\infty}x^{n}f_X(x;\alpha,\eta,\beta)\,dx=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\int_0^{+\infty}x^{\alpha+n}e^{-\eta x^{\beta}}dx=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\cdot\frac{\Gamma\!\left(\frac{\alpha+\beta+n+1}{\beta}\right)}{(\alpha+n+1)\,\eta^{\frac{\alpha+n+1}{\beta}}}=\frac{\Gamma\!\left(\frac{\alpha+n+1}{\beta}\right)}{\eta^{n/\beta}\,\Gamma\!\left(\frac{\alpha+1}{\beta}\right)}.\tag{52}$$

These results are summarized in the following theorem.

Theorem 8. Let $X$ be a random variable having the generalized gamma-type p.d.f. with parameters $\alpha$, $\eta$ and $\beta$ given by

$$f_X(x;\alpha,\eta,\beta)=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha}e^{-\eta x^{\beta}},\quad x\in\mathbb{R}^{+},\ \alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1.\tag{53}$$

Then, the distribution function $F_X(x;\alpha,\eta,\beta)$ of the random variable $X$ is given by

$$F_X(x;\alpha,\eta,\beta)=\frac{\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha+1}e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right),\tag{54}$$

and the $n$th moments $M(X^{n})$ of $X$ are given by

$$M(X^{n})=\int_0^{+\infty}x^{n}f_X(x;\alpha,\eta,\beta)\,dx=\frac{\Gamma\!\left(\frac{\alpha+n+1}{\beta}\right)}{\eta^{n/\beta}\,\Gamma\!\left(\frac{\alpha+1}{\beta}\right)}.\tag{55}$$
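Theorem 8 can be sanity-checked numerically: the distribution function (54) should tend to 1 as $x$ grows, and the first moment from (55) should agree with direct quadrature of $x f_X(x)$. A Python sketch (standard library only, truncated-series `hyp1f1`, arbitrary sample parameters):

```python
import math

def poch(t, n):
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f1(a, b, z, terms=160):
    return sum(poch(a, n) / poch(b, n) * z**n / math.factorial(n) for n in range(terms))

alpha, eta, beta = 1.2, 0.8, 1.5
c = (alpha + beta + 1) / beta

def cdf(x):
    """Distribution function (54) of the generalized gamma-type distribution."""
    return (eta**((alpha + 1) / beta) / math.gamma(c)
            * x**(alpha + 1) * math.exp(-eta * x**beta) * hyp1f1(1.0, c, eta * x**beta))

assert cdf(0.0) == 0.0
assert abs(cdf(10.0) - 1.0) < 1e-6   # F_X(x) -> 1 as x grows

# First moment from (55) versus Simpson quadrature of x f_X(x)
m1 = math.gamma((alpha + 2) / beta) / (eta**(1.0 / beta) * math.gamma((alpha + 1) / beta))

def integrand(x):
    A = (alpha + 1) * eta**((alpha + 1) / beta) / math.gamma(c)
    return x * A * x**alpha * math.exp(-eta * x**beta)

n, b_ = 100000, 40.0
h = b_ / n
s = integrand(0.0) + integrand(b_) + sum((4 if i % 2 else 2) * integrand(i * h) for i in range(1, n))
assert abs(s * h / 3.0 - m1) < 1e-6
```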

Example 1. The generalized gamma (GG) distribution can be derived from Theorem 8 by setting $\eta=1/\kappa^{\beta}$, $\kappa>0$, $\beta>0$ and $\alpha=\phi-1$, $\phi>0$; its p.d.f. is (see Definition 4)

$$f_X(x;\phi,\kappa,\beta)=\frac{\beta/\kappa^{\phi}}{\Gamma(\phi/\beta)}\,x^{\phi-1}e^{-(x/\kappa)^{\beta}},\quad x\in\mathbb{R}^{+},\ \phi>0,\ \kappa>0,\ \beta>0.$$

Example 2. The one-parameter Maxwell-Boltzmann distribution in gas dynamics, given by

$$F_V(v;\eta)=\frac{2\eta^{3/2}}{\Gamma(3/2)}\int_0^{v}x^{2}e^{-\eta x^{2}}dx=\frac{2\eta^{3/2}}{3\,\Gamma(3/2)}\,v^{3}e^{-\eta v^{2}}\,{}_1F_1\!\left(1;\frac{5}{2};\eta v^{2}\right),$$

where $v$ is the gas speed and $\eta>0$ is some constant that depends on the gas properties, is also a special case of Theorem 8 with $\alpha=2$ and $\beta=2$.

The inverse gamma distribution finds applications in wireless communications; see for example [

Corollary 1. Let $X$ be a random variable with the inverse gamma distribution, $X\sim IG(\theta,\eta)$, with parameters $\theta$ and $\eta$. Then, the distribution function $F_X(x;\theta,\eta)$ is given by

$$F_X(x;\theta,\eta)=1-\frac{\eta^{\theta}}{\Gamma(\theta+1)}\,x^{-\theta}e^{-\eta/x}\,{}_1F_1\!\left(1;\theta+1;\frac{\eta}{x}\right),\quad x>0,\ \theta>0,\ \eta>0,\tag{56}$$

while the $n$th moments $M(X^{n})$ are given by

$$M(X^{n})=\frac{\eta^{n}\,\Gamma(\theta-n)}{\Gamma(\theta)},\quad\theta>n.\tag{57}$$

Proof. Setting $\alpha=-(\theta+1)$ and $\beta=-1$ in Theorem 8, and using the fundamental theorem of calculus, $f_X(x)=\frac{dF_X}{dx}=\frac{d}{dx}\int_0^{x}f_X(u)\,du$, together with Proposition 1, gives the p.d.f.

$$f_X(x;\theta,\eta)=\frac{\eta^{\theta}}{\Gamma(\theta)}\,x^{-(\theta+1)}e^{-\eta x^{-1}},\quad x>0,\ \theta>0,\ \eta>0,\tag{58}$$

which is the p.d.f. of the inverse gamma distribution. The $n$th moments $M(X^{n})$ of $X$ are obtained by setting $\alpha=-(\theta+1)$ and $\beta=-1$ in (55). □
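For integer $\theta$ the inverse gamma distribution function is known in closed form, which gives an independent check of (56): for $\theta=3$, $P(X\le x)=e^{-z}(1+z+z^{2}/2)$ with $z=\eta/x$. A Python sketch (standard library only, truncated-series `hyp1f1`):

```python
import math

def poch(t, n):
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f1(a, b, z, terms=80):
    return sum(poch(a, n) / poch(b, n) * z**n / math.factorial(n) for n in range(terms))

theta, eta = 3.0, 2.0

def cdf_inverse_gamma(x):
    """Distribution function (56) of X ~ IG(theta, eta)."""
    return 1.0 - (eta**theta / math.gamma(theta + 1.0)) * x**(-theta) \
        * math.exp(-eta / x) * hyp1f1(1.0, theta + 1.0, eta / x)

x = 1.5
z = eta / x
# closed form of the IG(3, eta) c.d.f.
assert abs(cdf_inverse_gamma(x) - math.exp(-z) * (1.0 + z + z * z / 2.0)) < 1e-10
# first moment from (57): eta * Gamma(theta - 1)/Gamma(theta) = eta / (theta - 1)
assert abs(eta * math.gamma(theta - 1.0) / math.gamma(theta) - eta / (theta - 1.0)) < 1e-12
```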

Consider, as before, a probability measure $\mu$ in terms of the Lebesgue measure $dx$, given by

$$d\mu=\mu(dx)=A\,g(x;\alpha,\eta,\beta)\,dx=f_X(x;\alpha,\eta,\beta)\,dx,\quad x\in\mathbb{R},\tag{59}$$

where, as before, $f_X(x;\alpha,\eta,\beta)$ is the p.d.f. of some random variable $X$,

$$g(x;\alpha,\eta,\beta)=x^{\alpha}e^{-\eta x^{\beta}},\quad\alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1,\tag{60}$$

is an even function of the variable $x$, $A$ is a normalization constant which can be obtained using formula (24) in Theorem 1, and $\alpha$, $\eta$ and $\beta$ are parameters of the probability distribution of the random variable $X$.

After normalization, the p.d.f. of $X$ is found to be

$$f_X(x;\alpha,\eta,\beta)=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{2\,\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha}e^{-\eta x^{\beta}},\quad x\in\mathbb{R},\ \alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1.\tag{61}$$

It is important to note that $f_X$ in this case is even, so a factor of 2 has to appear in the denominator. In addition, the parameters $\alpha$ and $\beta$ can be negative. The distribution function $F_X$ can also be obtained using Proposition 1 and is given by

$$F_X(x;\alpha,\eta,\beta)=\mu\{(-\infty,x)\}=\int_{-\infty}^{x}f_X(u;\alpha,\eta,\beta)\,du=\frac{1}{2}\left[1+\frac{\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha+1}e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)\right].\tag{62}$$

The moments $M(X^{n})$ can be evaluated using formula (24) in Theorem 1 to obtain

$$M(X^{n})=\int_{-\infty}^{+\infty}x^{n}f_X(x;\alpha,\eta,\beta)\,dx=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{2\,\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\int_{-\infty}^{+\infty}x^{\alpha+n}e^{-\eta x^{\beta}}dx=\begin{cases}\dfrac{\Gamma\!\left(\frac{\alpha+n+1}{\beta}\right)}{\eta^{n/\beta}\,\Gamma\!\left(\frac{\alpha+1}{\beta}\right)},&\text{if }n\text{ is even},\\[2mm]0,&\text{if }n\text{ is odd}.\end{cases}\tag{63}$$

These results further generalize the generalized Gaussian distribution with a zero mean in which, in general, α = 0 and β > 0 . They are summarized in Theorem 9.

Theorem 9. Let $X$ be a random variable having an even p.d.f. with parameters $\alpha$, $\eta$ and $\beta$ given by

$$f_X(x;\alpha,\eta,\beta)=\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{2\,\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha}e^{-\eta x^{\beta}},\quad x\in\mathbb{R},\ \alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1.\tag{64}$$

Then, the distribution function $F_X(x;\alpha,\eta,\beta)$ of the random variable $X$ is given by

$$F_X(x;\alpha,\eta,\beta)=\frac{1}{2}\left[1+\frac{\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\,x^{\alpha+1}e^{-\eta x^{\beta}}\,{}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta x^{\beta}\right)\right],\tag{65}$$

and the $n$th moments $M(X^{n})$ of $X$ are given by

$$M(X^{n})=\int_{-\infty}^{+\infty}x^{n}f_X(x;\alpha,\eta,\beta)\,dx=\begin{cases}\dfrac{\Gamma\!\left(\frac{\alpha+n+1}{\beta}\right)}{\eta^{n/\beta}\,\Gamma\!\left(\frac{\alpha+1}{\beta}\right)},&\text{if }n\text{ is even},\\[2mm]0,&\text{if }n\text{ is odd}.\end{cases}\tag{66}$$

Example 3. Setting $\alpha=0$, $\beta=2$ and $\eta=1/2$ yields $f_X(x)=(1/\sqrt{2\pi})\,e^{-x^{2}/2}$, and the mean of $X$ is $EX=M(X^{1})=0$ while the variance is $EX^{2}=M(X^{2})=1$. So $X\sim N(0,1)$, as expected (i.e. $X$ is a standard normal random variable).
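Example 3 provides a sharp test of (65), since for $\alpha=0$, $\beta=2$, $\eta=1/2$ the distribution function must coincide with the standard normal c.d.f. $\Phi(x)=\frac{1}{2}\left(1+\operatorname{erf}(x/\sqrt{2})\right)$. A Python sketch (standard library only, truncated-series `hyp1f1`):

```python
import math

def poch(t, n):
    p = 1.0
    for k in range(n):
        p *= t + k
    return p

def hyp1f1(a, b, z, terms=60):
    return sum(poch(a, n) / poch(b, n) * z**n / math.factorial(n) for n in range(terms))

def cdf(x, alpha=0.0, eta=0.5, beta=2.0):
    """Distribution function (65) of the even generalized Gaussian-type distribution."""
    c = (alpha + beta + 1) / beta
    return 0.5 * (1.0 + eta**((alpha + 1) / beta) / math.gamma(c)
                  * x**(alpha + 1) * math.exp(-eta * x**beta)
                  * hyp1f1(1.0, c, eta * x**beta))

# Against the standard normal c.d.f. at several points (including a negative one)
for x in (-1.3, 0.0, 0.7, 2.1):
    assert abs(cdf(x) - 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))) < 1e-9
```

The underlying identity is the classical $\int_0^{x}e^{-t^{2}/2}dt=x\,e^{-x^{2}/2}\,{}_1F_1(1;3/2;x^{2}/2)$.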

More general results can be achieved by introducing two additional parameters, $\theta\in\mathbb{R}$ and $\sigma>0$. The results in Theorem 10 further generalize the generalized Gaussian distribution in Definition 5.

Theorem 10. Let $X$ be a random variable having an even p.d.f. with five parameters $\alpha$, $\eta$, $\beta$, $\theta$ and $\sigma$ given by

$$f_X(x;\alpha,\eta,\beta,\theta,\sigma)=\frac{1}{2\sigma}\,\frac{(\alpha+1)\,\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\left(\frac{x-\theta}{\sigma}\right)^{\alpha}\exp\!\left(-\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right),\quad x\in\mathbb{R},\ \theta\in\mathbb{R},\ \alpha\neq-1,\ \beta\neq0,\ \alpha>-\beta-1,\ \sigma>0.\tag{67}$$

Then, the distribution function $F_X(x;\alpha,\eta,\beta,\theta,\sigma)$ of the random variable $X$ is given by

$$F_X(x;\alpha,\eta,\beta,\theta,\sigma)=\frac{1}{2}\left[1+\frac{\eta^{\frac{\alpha+1}{\beta}}}{\Gamma\!\left(\frac{\alpha+\beta+1}{\beta}\right)}\left(\frac{x-\theta}{\sigma}\right)^{\alpha+1}\exp\!\left(-\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right){}_1F_1\!\left(1;\frac{\alpha+\beta+1}{\beta};\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right)\right],\tag{68}$$

and the moments $M(X^{n})$ of $X$ are given by

$$M(X^{n})=\int_{-\infty}^{+\infty}x^{n}f_X(x;\alpha,\eta,\beta,\theta,\sigma)\,dx=\frac{\theta^{n}}{\Gamma\!\left(\frac{\alpha+1}{\beta}\right)}\sum_{l=0}^{\lfloor n/2\rfloor}\Gamma\!\left(\frac{\alpha+2l+1}{\beta}\right)C_{2l}^{\,n}\left(\frac{\sigma}{\theta\,\eta^{1/\beta}}\right)^{2l},\quad 2l\le n,\ (2l)\in\mathbb{N},\tag{69}$$

where $C_{2l}^{\,n}=n!/((n-2l)!(2l)!)$.

Thus, the mean and the variance of $X$ are respectively given by

$$EX=M(X^{1})=\theta\quad\text{and}\quad\operatorname{var}X=EX^{2}-(EX)^{2}=\frac{\sigma^{2}}{\eta^{2/\beta}}\,\frac{\Gamma\!\left(\frac{\alpha+3}{\beta}\right)}{\Gamma\!\left(\frac{\alpha+1}{\beta}\right)}.\tag{70}$$

Formula (69) is obtained by making the substitution $u=(x-\theta)/\sigma$, applying the binomial theorem and using Theorem 1.

A four-parameter generalized Gaussian distribution (see Definition 5) may be derived by setting $\alpha=0$ in Theorem 10.

Corollary 2. Let $X$ be a random variable having the generalized Gaussian-type distribution p.d.f. (see Definition 5) with four parameters $\eta,\beta,\theta$ and $\sigma$ given by

$$f_X(x;\eta,\beta,\theta,\sigma)=\frac{\beta\,\eta^{1/\beta}}{2\sigma\,\Gamma(1/\beta)}\exp\left(-\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right),\quad x\in\mathbb{R},\ \eta>0,\ \theta\in\mathbb{R},\ \beta>0,\ \sigma>0,\quad(71)$$

and where $\beta$ is even. Then, the distribution function $F_X(x;\eta,\beta,\theta,\sigma)$ of the random variable $X$ is given by

$$F_X(x;\eta,\beta,\theta,\sigma)=\frac{1}{2}\left[1-\frac{\beta}{\sigma}\,\frac{\eta^{1/\beta}}{\Gamma(1/\beta)}\left(\frac{x-\theta}{\sigma}\right)\exp\left(-\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right)\,{}_1F_1\left(1;\frac{\beta+1}{\beta};\eta\left(\frac{x-\theta}{\sigma}\right)^{\beta}\right)\right],\quad(72)$$

and the $n$th moments $M(X^n)$ of $X$ are given by

$$M(X^n)=\int_{-\infty}^{+\infty}x^n f_X(x;\eta,\beta,\theta,\sigma)\,dx=\frac{\theta^{n}}{\Gamma(1/\beta)}\sum_{l=0}^{\lfloor n/2\rfloor}\Gamma\left(\frac{2l+1}{\beta}\right)C_{2l}^{\,n}\left(\frac{\sigma}{\theta\,\eta^{1/\beta}}\right)^{2l},\quad(73)$$

where, as before, $C_{2l}^{\,n}=n!/\left((n-2l)!\,(2l)!\right)$. Thus, the mean and the variance of $X$ are respectively given by

$$EX=M(X^1)=\theta\quad\text{and}\quad\operatorname{var}X=EX^2-(EX)^2=\frac{\sigma^{2}}{\eta^{2/\beta}}\,\frac{\Gamma(3/\beta)}{\Gamma(1/\beta)}.\quad(74)$$

Moreover, if $\eta\ge1/2$ and $\beta>2$, then since $\Gamma(3/\beta)<\Gamma(1/\beta)$, the variance of $X$ satisfies

$$\operatorname{var}X=EX^2-(EX)^2=\frac{\sigma^{2}}{\eta^{2/\beta}}\,\frac{\Gamma(3/\beta)}{\Gamma(1/\beta)}\le\sigma^{2},\quad(75)$$

where $\sigma^2$ is the variance of the Gaussian random variable.
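The bound (75) can be spot-checked numerically; a minimal sketch over a small grid of admissible parameters (Python, standard library only):

```python
import math

def variance_ratio(eta, beta):
    """var X / sigma^2 from formula (74)."""
    return math.gamma(3 / beta) / (eta ** (2 / beta) * math.gamma(1 / beta))

# spot-check (75): the ratio stays <= 1 whenever eta >= 1/2 and beta > 2
for beta in (4, 6, 8, 10):
    for eta in (0.5, 1.0, 2.0, 5.0):
        assert variance_ratio(eta, beta) <= 1.0

# the Gaussian case beta = 2, eta = 1/2 saturates the bound (ratio = 1)
print(variance_ratio(0.5, 2))
```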

Example 4. Let $X$ be a random variable with the p.d.f. in Corollary 2. Setting $\eta=1/\beta$ implies that $Y=((X-\theta)/\sigma)^{\beta}$ is a $\chi_1^{\beta}$ random variable with p.d.f.

$$f_Y(y;\beta)=\frac{y^{\frac{1}{\beta}-1}\,e^{-y/\beta}}{\beta^{\frac{1}{\beta}}\,\Gamma\left(\frac{1}{\beta}\right)},\quad y\ge0,$$

where $\beta$ is an even positive integer as before, and the subscript 1 on $\chi_1^{\beta}$ denotes the number of degrees of freedom. Moreover, if $X_1,X_2,\cdots,X_n$ are independent and identically distributed with the p.d.f. in Corollary 2, then $S=\sum_{i=1}^{n}Y_i=\sum_{i=1}^{n}\left((X_i-\theta)/\sigma\right)^{\beta}$ is a $\chi_n^{\beta}$ random variable (with $n$ degrees of freedom) and has the p.d.f. (Richter [

$$f_{S}(y;\beta)=\frac{y^{\frac{n}{\beta}-1}\,e^{-y/\beta}}{\beta^{\frac{n}{\beta}}\,\Gamma\left(\frac{n}{\beta}\right)},\quad y\ge0.$$

In that case, inferences about the parameter $\sigma$ may be performed, see for example Richter [
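Example 4 can be illustrated by simulation. The density $f_Y$ above is that of a Gamma variable with shape $1/\beta$ and scale $\beta$, so the sum $S$ of $n$ i.i.d. copies has mean $n$ and follows $f_S$. A sketch using only the Python standard library (`random.gammavariate` takes shape and scale arguments); the variable names are ours:

```python
import random

random.seed(12345)
beta, n = 4, 5          # beta even, n degrees of freedom

# each Y_i ~ Gamma(shape 1/beta, scale beta), i.e. the chi_1^beta density f_Y above;
# S = Y_1 + ... + Y_n then follows the chi_n^beta density f_S(y; beta)
samples = [sum(random.gammavariate(1 / beta, beta) for _ in range(n))
           for _ in range(20_000)]

mean_S = sum(samples) / len(samples)
# E[S] = n * (1/beta) * beta = n, and var S = n * (1/beta) * beta^2 = n * beta
print(mean_S)
```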

A formula for the $n$th moments of the Gaussian distribution can now be obtained by setting $\beta=2$ and $\eta=1/2$ in (73).

Corollary 3. Let $X$ be a Gaussian random variable. Its $n$th moments $M(X^n)$ are thus given by the formula

$$M(X^n)=\frac{\theta^{n}}{\sqrt{\pi}}\sum_{l=0}^{\lfloor n/2\rfloor}\Gamma\left(l+\frac{1}{2}\right)C_{2l}^{\,n}\left(\frac{2\sigma^{2}}{\theta^{2}}\right)^{l},\quad(76)$$

where $\theta\in\mathbb{R}$ is the mean of the Gaussian random variable and $\sigma^2>0$ its variance, and, as before, $C_{2l}^{\,n}=n!/\left((n-2l)!\,(2l)!\right)$.
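Formula (76) agrees with the classical Gaussian moment expansion $E[X^n]=\sum_l\binom{n}{2l}\theta^{n-2l}(2l-1)!!\,\sigma^{2l}$, via the identity $\Gamma(l+1/2)=(2l-1)!!\,\sqrt{\pi}/2^{l}$. A quick cross-check (Python; helper names are ours; (76) as written assumes $\theta\neq0$):

```python
import math

def moment_76(n, theta, sigma):
    """Gaussian n-th moment from formula (76); assumes theta != 0."""
    s = sum(math.gamma(l + 0.5) * math.comb(n, 2 * l) * (2 * sigma ** 2 / theta ** 2) ** l
            for l in range(n // 2 + 1))
    return theta ** n * s / math.sqrt(math.pi)

def moment_classical(n, theta, sigma):
    """Binomial expansion with E[Z^{2l}] = (2l-1)!! for Z ~ N(0, 1)."""
    return sum(math.comb(n, 2 * l) * theta ** (n - 2 * l)
               * math.prod(range(1, 2 * l, 2)) * sigma ** (2 * l)  # (2l-1)!!
               for l in range(n // 2 + 1))

# the two expressions coincide for all n
for n in range(1, 7):
    assert math.isclose(moment_76(n, 2.0, 3.0), moment_classical(n, 2.0, 3.0), rel_tol=1e-9)
```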

Formulas for non-elementary integrals of the types $\int x^{\alpha}e^{\eta x^{\beta}}dx$, $\int x^{\alpha}\cosh(\eta x^{\beta})dx$, $\int x^{\alpha}\sinh(\eta x^{\beta})dx$, $\int x^{\alpha}\cos(\eta x^{\beta})dx$ and $\int x^{\alpha}\sin(\eta x^{\beta})dx$, where $\alpha$, $\eta$ and $\beta$ are real or complex constants, were obtained in terms of the confluent hypergeometric function ${}_1F_1$ and the hypergeometric function ${}_1F_2$ in Section 2 (see Propositions 1, 2, 3, 4 and 5). The results in Propositions 1-5 generalize those in Nijimbere (2017) and in Nijimbere (2018). Using hyperbolic and Euler identities, some identities involving ${}_1F_1$ and ${}_1F_2$ were also obtained in Section 2 (Theorems 2-7).

Having evaluated the integrals $\int_{\Omega}x^{\alpha}e^{-\eta x^{\beta}}dx$, $\Omega\subseteq\mathbb{R}$ and $\eta\in\mathbb{R}^{+}$, in Theorem 1, new probability measures that further generalize the generalized gamma distribution and the generalized Gaussian distribution were constructed. Their distribution functions were also written in terms of ${}_1F_1$, and formulas for the $n$th moments were obtained as well in Section 3 (Theorems 8-10 and Corollaries 2-3). The results obtained in this paper may, for example, be used to construct better statistical tests than those already known (e.g. $\chi^2$ statistical tests and tests based on the normal distribution).

Theorem 1 also turns out to be a generalization of the Mellin transform of the function $e^{-\eta x^{\beta}}$, $\operatorname{Re}\{\eta\}>0$, $\beta>0$, where $s=\alpha+1$ is the Mellin parameter; in the present case $s=\alpha+1$ can be negative ($s<0$), and the constant $\beta$ can be negative as well ($\beta<0$). It is also worth clarifying that the gamma function and the incomplete gamma function are particular cases of the definite integral $\int_{\Omega\subseteq\mathbb{C}}x^{\alpha}e^{\eta x^{\beta}}dx$ because $\operatorname{Re}(\alpha)$ and $\operatorname{Re}(\beta)$ may simultaneously be negative (see the introduction section).

The author declares no conflicts of interest regarding the publication of this paper.

Nijimbere, V. (2020) Analytical Evaluation of Non-Elementary Integrals Involving Some Exponential, Hyperbolic and Trigonometric Elementary Functions and Derivation of New Probability Measures Generalizing the Gamma-Type and Gaussian-Type Distributions. Advances in Pure Mathematics, 10, 371-392. https://doi.org/10.4236/apm.2020.107023