What Is the Difference between Gamma and Gaussian Distributions?

An inequality describing the difference between the Gamma and Gaussian distributions is derived. The asymptotic bound is much better than the existing uniform bound obtained from the Berry-Esseen inequality.


1. Introduction

1.1. Problem
We first introduce some notation. Denote the Gamma distribution function as
$$F_n(x) = \int_0^x f_n(t)\,dt \qquad (1)$$
for $x \ge 0$ and $n = 1, 2, \ldots$, where $\Gamma(\cdot)$ is the Gamma function and $f_n$ is the density
$$f_n(x) = \begin{cases} \dfrac{x^{n/2-1}\,e^{-x/2}}{2^{n/2}\,\Gamma(n/2)}, & \text{for } x > 0, \\ 0, & \text{otherwise.} \end{cases} \qquad (2)$$
It is well known that the random variable $\chi_n^2$ can be interpreted as
$$\chi_n^2 = X_1^2 + X_2^2 + \cdots + X_n^2, \qquad (3)$$
where $X_1, \ldots, X_n$ are independent standard Gaussian random variables. On the other side, applying the Berry-Esseen inequality to (3) gives, with $\Phi$ the standard Gaussian distribution function, i.e.,
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt, \qquad (4)$$
the uniform bound
$$\sup_x \left| F_n(\sqrt{2n}\,x + n) - \Phi(x) \right| \le \frac{C}{\sqrt{n}}, \qquad (5)$$

 
which describes the distance between the Gamma and Gaussian distributions. The purpose of this paper is to derive an asymptotically sharper bound in Equation (5), which greatly improves the constant obtained by directly applying the Berry-Esseen inequality. The main framework of the analysis is based on the Gil-Pelaez formula (essentially equivalent to the Lévy inversion formula), which represents the distribution function of a random variable through its characteristic function.
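The objects in Equations (1)-(5) are easy to compute with the Python standard library alone. The following sketch (not part of the original paper; the helper names are this illustration's own) implements $F_n$ through the standard power series for the regularized lower incomplete gamma function, and $\Phi$ through the error function, so the difference on the left-hand side of (5) can be evaluated directly.

```python
import math

def chi2_cdf(x, n):
    """Gamma (chi-square with n degrees of freedom) distribution function F_n:
    the regularized lower incomplete gamma function P(n/2, x/2),
    computed by its standard power series."""
    s, x = n / 2.0, x / 2.0
    if x <= 0:
        return 0.0
    term = 1.0 / s
    total = term
    k = 1
    while abs(term) > 1e-16 * total:
        term *= x / (s + k)
        total += term
        k += 1
    return total * math.exp(-x + s * math.log(x) - math.lgamma(s))

def norm_cdf(x):
    """Standard Gaussian distribution function Phi of Equation (4)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def difference(x, n):
    """The quantity F_n(sqrt(2n) x + n) - Phi(x) appearing in Equation (5)."""
    return chi2_cdf(math.sqrt(2.0 * n) * x + n, n) - norm_cdf(x)
```

As a sanity check, for $n = 2$ the chi-square CDF has the closed form $1 - e^{-x/2}$, which the series reproduces to machine precision.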

The main result of this paper is as follows.

Theorem 1.1. A relation between the Gamma distribution (1) and the Gaussian distribution (4) is given by
$$\sup_x \left| F_n(\sqrt{2n}\,x + n) - \Phi(x) \right| \le C_n, \qquad (6)$$
where $C_n$ is an explicit function of $n$, bounded for every $n \ge 1$.

Clearly, $\sqrt{n}\,C_n \to \frac{1}{3\sqrt{\pi}}$ as $n \to \infty$. Thus, the asymptotic bound is
$$\sup_x \left| F_n(\sqrt{2n}\,x + n) - \Phi(x) \right| \lesssim \frac{1}{3\sqrt{\pi n}}.$$
To check the tightness of the limit value of $C_n$, we plot in Figure 1 the product $\sqrt{n}\,C_n$ for $n = 1, 2, \ldots, 200$, where the straight line is the limit value $\frac{1}{3\sqrt{\pi}}$. From this experiment it seems that $\frac{1}{3\sqrt{\pi}}$ is the best constant. The tendency of the theoretical formula is plotted for $n$ ranging from $1$ to $10^{10}$ in Figure 2, which also shows the approach to the limit value $\frac{1}{3\sqrt{\pi}}$. The slow trend is due to the fact that some upper bounds, formulated over a subinterval depending on $n$, have been estimated rather weakly, e.g., the third and fourth terms of $C_n$.
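As a numerical companion to Figure 1, the sketch below (again an illustration, not the paper's code; all helper names are this sketch's own) evaluates $\sqrt{n}$ times the sup-distance between the normalized chi-square and Gaussian distribution functions and compares it with the limit $\frac{1}{3\sqrt{\pi}} \approx 0.1881$. A standard Edgeworth-expansion computation suggests the supremum is attained near $x = 0$, so a modest grid suffices.

```python
import math

def chi2_cdf(x, n):
    """Chi-square CDF with n degrees of freedom via the power series
    for the regularized lower incomplete gamma function P(n/2, x/2)."""
    s, x = n / 2.0, x / 2.0
    if x <= 0:
        return 0.0
    term = 1.0 / s
    total = term
    k = 1
    while abs(term) > 1e-16 * total:
        term *= x / (s + k)
        total += term
        k += 1
    return total * math.exp(-x + s * math.log(x) - math.lgamma(s))

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def scaled_distance(n, steps=1600):
    """sqrt(n) * sup_x |F_n(sqrt(2n) x + n) - Phi(x)|, sup taken over a grid in [-4, 4]."""
    best = 0.0
    for i in range(steps + 1):
        x = -4.0 + 8.0 * i / steps
        d = abs(chi2_cdf(math.sqrt(2.0 * n) * x + n, n) - norm_cdf(x))
        best = max(best, d)
    return math.sqrt(n) * best

LIMIT = 1.0 / (3.0 * math.sqrt(math.pi))  # ≈ 0.1881
```

For $n = 400$ the scaled distance already agrees with the limit to about two decimal places, consistent with the straight line in Figure 1.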

 

Comparison to the Bound Derived by Berry-Esseen Inequality
Let $X_1, X_2, \ldots$ be a sequence of independent, identically distributed random variables with zero mean, variance $\sigma^2 > 0$, and finite third absolute moment $\rho = E|X_1|^3 < \infty$. The Berry-Esseen inequality states that
$$\sup_x \left| P\!\left( \frac{X_1 + \cdots + X_n}{\sigma\sqrt{n}} \le x \right) - \Phi(x) \right| \le \frac{C_0\,\rho}{\sigma^3 \sqrt{n}}. \qquad (7)$$
The best upper bound on the absolute constant $C_0$ to date was found in [1] in 2009. The bound was improved in [2], from a somewhat different angle, in a slightly different form. The inequality (8) is sharper than Equation (7) when $\rho/\sigma^3 > 1.93$. For the chi-square case, take $X_k = Z_k^2 - 1$ with $Z_k$ standard Gaussian; then $\sigma^2 = 2$ and $\rho = E|Z^2 - 1|^3$, which is approximated by using Matlab to integrate over the interval $[0, 100]$, divided equally into 100,000 subintervals, for its half value.
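The Matlab computation described above is easy to reproduce; the sketch below does the same numerical integration in Python (a composite Simpson rule over $[0, 100]$ with 100,000 subintervals, doubled by symmetry). As a cross-check, $E(Z^2-1)^3 = EZ^6 - 3EZ^4 + 3EZ^2 - 1 = 8$, so $\rho = E|Z^2-1|^3$ must exceed 8 by twice the contribution of the region $|z| < 1$.

```python
import math

def integrand(z):
    # |z^2 - 1|^3 times the standard normal density
    return abs(z * z - 1.0) ** 3 * math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def simpson(f, a, b, m):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    total = f(a) + f(b)
    for k in range(1, m):
        total += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return total * h / 3.0

# Half value over [0, 100] with 100,000 subintervals, doubled by symmetry.
rho = 2.0 * simpson(integrand, 0.0, 100.0, 100_000)

sigma3 = 2.0 ** 1.5   # sigma^3, since sigma^2 = Var(Z^2) = 2
ratio = rho / sigma3  # comfortably above the 1.93 threshold mentioned above
```

The resulting ratio $\rho/\sigma^3 \approx 3.07$ confirms that the chi-square case falls in the regime where inequality (8) is the sharper of the two.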
By Equation (7), together with the numerical value of $\rho/\sigma^3$, the best constant $C$ in Equation (5) obtainable by applying the Berry-Esseen inequality follows. Obviously, the limit bound $\frac{1}{3\sqrt{\pi}}$ found in this paper for the chi-square distribution is much better.
The technical reason is that the Berry-Esseen inequality deals with general i.i.d. random sequences, without exact information about the underlying distribution.

Proof of Main Result
Before proving the main result, we first list a few lemmas and introduce some facts from the theory of characteristic functions.

Some Lemmas

Lemma 2.1. For a complex number $z$ satisfying $|z| < 1$, the stated estimate holds.

Proof. By Taylor's expansion, and noting that $|z| < 1$, the assertion follows.
Lemma 2.2. For a real number $x$ satisfying $|x| < 1$, a corresponding bound holds, where $\mathrm{i}$ is the imaginary unit, $\mathrm{i}^2 = -1$.

Proof. By Taylor expansion for complex functions, for $|x| < 1$ we have an expansion whose remainder $R_3(x)$ is shown above. By further noting that the two real series above are alternating, the upper bound follows.
We cite below a well-known inequality [3] as a lemma.
Lemma 2.3. The tail probability of the standard normal distribution satisfies
$$1 - \Phi(x) \le \frac{\varphi(x)}{x} = \frac{e^{-x^2/2}}{x\sqrt{2\pi}}, \qquad x > 0.$$
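Assuming Lemma 2.3 is the classical Gaussian tail (Mills ratio) bound $1 - \Phi(x) \le \varphi(x)/x$, it is easy to spot-check numerically; the sketch below (helper names are this illustration's own) computes the exact tail stably via the complementary error function.

```python
import math

def normal_tail(x):
    """1 - Phi(x), computed stably via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    """phi(x) / x, the upper bound of Lemma 2.3 (valid for x > 0)."""
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

def bound_holds(x):
    return normal_tail(x) <= mills_bound(x)
```

The bound becomes asymptotically tight as $x \to \infty$, which is why it is useful for controlling the tail contribution in the proof.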

Characteristic Function
Let us recall, see e.g. [4], the definition and some basic facts of the characteristic function (CF), which provides another way to describe the distribution function of a random variable. The characteristic function of a random variable $X$ is defined by
$$\varphi_X(t) = E\,e^{\mathrm{i}tX}, \qquad (9)$$
where $\mathrm{i}$ is the imaginary unit and $t \in \mathbb{R}$ is the argument of the function. Clearly, the CF satisfies $\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t)$ for random variables $X$ and $Y$ independent of each other.
It is well known that the CF of the standard Gaussian distribution is $e^{-t^2/2}$, and the CF of a chi-square distributed variable $\chi_n^2$ is $(1 - 2\mathrm{i}t)^{-n/2}$. Thus, the CF of the normalized variable $(\chi_n^2 - n)/\sqrt{2n}$ is
$$\varphi_n(t) = e^{-\mathrm{i}t\sqrt{n/2}} \left( 1 - \mathrm{i}t\sqrt{2/n} \right)^{-n/2}. \qquad (10)$$
The CF is actually an inverse Fourier transform of the density function. Therefore, the distribution function can be expressed by the CF directly, e.g., via the Lévy inversion formula. We use another, slightly simpler, formula. For a univariate random variable $X$, if $x$ is a continuity point of its distribution function $F_X$, then
$$F_X(x) = \frac{1}{2} - \frac{1}{\pi} \int_0^{\infty} \mathrm{Im}\!\left[ \frac{e^{-\mathrm{i}tx}\,\varphi_X(t)}{t} \right] dt, \qquad (11)$$
which is called the Gil-Pelaez formula; see, e.g., page 168 of [4].
Copyright © 2013 SciRes. AM
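The Gil-Pelaez formula is easy to verify numerically: with the standard Gaussian CF $e^{-t^2/2}$ it must reproduce $\Phi(x)$. Below is a minimal sketch; the truncation point and step count are choices of this illustration, justified by the Gaussian decay of the integrand.

```python
import cmath
import math

def gil_pelaez_normal_cdf(x, t_max=12.0, steps=2400):
    """Evaluate F(x) = 1/2 - (1/pi) * Int_0^inf Im[exp(-itx) * cf(t)] / t dt
    for the standard Gaussian CF cf(t) = exp(-t^2/2), by composite Simpson.
    The integrand decays like exp(-t^2/2), so truncating at t_max = 12 is safe."""
    def integrand(t):
        if t == 0.0:
            return -x  # limit of Im[exp(-itx) exp(-t^2/2)] / t as t -> 0
        return (cmath.exp(-1j * t * x) * cmath.exp(-t * t / 2.0)).imag / t
    h = t_max / steps
    total = integrand(0.0) + integrand(t_max)
    for k in range(1, steps):
        total += (4.0 if k % 2 else 2.0) * integrand(k * h)
    total *= h / 3.0
    return 0.5 - total / math.pi
```

The same recipe applies verbatim to the normalized chi-square CF in (10), which is exactly how the difference between the two distribution functions is analyzed in the proof below.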

Proof of Main Result
We are now in a position to prove the main result.

Proof of Theorem 1.1

Write the difference between the two distribution functions via the Gil-Pelaez formula (11). Then, it is easy to see that the required estimate holds for $0 < t \le \sqrt{n}$. Hence, by Equations (12) and (13) and Lemma 2.1, the integrand can be bounded term by term. In view of Formula (11), the formula to be proved follows directly.