
**A Note on “Limit Distributions of Self-Normalized Sums” Using Cauchy-Generated Samples**



Published in *Applied Mathematics*, **10**, 863-875. doi: 10.4236/am.2019.1011062.

1. Introduction

The problem of finding the distribution of the self-normalized sum of a Cauchy-generated sample has been considered since at least 1969 (see, for example, [2] ), but was solved (by proposing a concrete numerical algorithm) only in 1973 by [1] , our key reference. Since then, research has focused mainly on proving general statements concerning self-normalized sums, without dealing with specific distributions such as Cauchy—see, for example, [3] .

In this article, we demonstrate how characteristic functions (CHF) are used in Statistics to find the distribution of a specific sample statistic (a function of the individual observations). We do this by using a single comprehensive example, namely finding the $n\to \infty $ limit of the probability density function (PDF) of

$\frac{{\displaystyle {\sum}_{i=1}^{n}{X}_{i}}}{\sqrt{{\displaystyle {\sum}_{i=1}^{n}{X}_{i}^{2}}}}$ (1)

where ${X}_{1}\mathrm{,}\text{\hspace{0.05em}}{X}_{2}\mathrm{,}\cdots \mathrm{,}\text{\hspace{0.05em}}{X}_{n}$ is a random independent sample (RIS) of size n from a Cauchy distribution with zero median. This goal has already been achieved (in a more general setting) by [1] ; our main (rather pedagogical) reason for extending their presentation is to make it accessible to graduate and advanced undergraduate students.

First we recall that the CHF of a random variable X is defined by

${\phi}_{x}\left(s\right)=\mathbb{E}\left({e}^{\text{i}sX}\right)={\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{cos}\left(sx\right)\cdot f\left(x\right)\mathrm{}\text{d}x+\text{i}{\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{sin}\left(sx\right)\cdot f\left(x\right)\text{d}x$ (2)

where $f\left(x\right)$ stands for X’s PDF, $\mathbb{E}$ for the corresponding expected value, and i is the imaginary unit; note that the real part of ${\phi}_{x}\left(s\right)$ is always an even (an alternate name is symmetric) function of s, while the imaginary part is odd. Also note that when $f\left(x\right)$ is even, the corresponding CHF must be real (its imaginary part becomes zero because the integrand is odd).
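As a quick numerical illustration of (2) (a sketch of ours, not part of the original derivation): the standard Cauchy PDF is even, so its CHF reduces to the cosine integral, which can be checked against the classical closed form ${\text{e}}^{-|s|}$. The truncation radius and step count below are ad-hoc choices.

```python
import math

def cauchy_chf(s, R=200.0, n=40000):
    """Evaluate (2) for the standard Cauchy PDF by composite Simpson's rule.

    The PDF is even, so only the cosine (real) part of (2) survives:
      phi(s) ~ integral over [-R, R] of cos(s x) / (pi (1 + x^2)) dx.
    R and n are ad-hoc truncation/refinement choices.
    """
    h = 2 * R / n
    total = 0.0
    for k in range(n + 1):
        x = -R + k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * math.cos(s * x) / (math.pi * (1 + x * x))
    return total * h / 3

# The closed form for the standard Cauchy CHF is exp(-|s|):
for s in (0.5, 1.0, 2.0):
    assert abs(cauchy_chf(s) - math.exp(-abs(s))) < 1e-3
```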

Similarly

${\phi}_{xy}\left(s\mathrm{,}t\right)=\mathbb{E}\left({\text{e}}^{\text{i}sX+\text{i}tY}\right)={\displaystyle \underset{x\text{-}y\text{\hspace{0.17em}}\text{plane}}{\iint}}{\text{e}}^{\text{i}sx+\text{i}ty}\cdot f\left(x\mathrm{,}y\right)\text{d}x\text{d}y$ (3)

(where $f\left(x\mathrm{,}y\right)$ is the joint PDF of X and Y) defines the joint CHF of two (not necessarily independent) random variables X and Y; when they are independent, we have

${\phi}_{xy}\left(s\mathrm{,}t\right)={\phi}_{x}\left(s\right)\cdot {\phi}_{y}\left(t\right)$ (4)

(product of the individual CHFs). This implies that (when independent), the CHF of $X+Y$ is given by ${\phi}_{x}\left(s\right)\cdot {\phi}_{y}\left(s\right)$ , further implying that the CHF of a sample mean of n independent ${X}_{i}$ is given by

${\phi}_{x}{\left(\frac{s}{n}\right)}^{n}$ (5)
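For instance, the standard Cauchy distribution has the well-known CHF ${\phi}_{x}\left(s\right)=\mathrm{exp}\left(-|s|\right)$ , so (5) yields

${\left[\mathrm{exp}\left(-\frac{|s|}{n}\right)\right]}^{n}=\mathrm{exp}\left(-|s|\right)$

i.e. the sample mean of a Cauchy RIS has exactly the same distribution as a single observation; averaging does not help, which is part of the motivation for normalizing the sum by the sample’s own magnitude, as in (1).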

When two random variables are combined into one (e.g. defining $U=\frac{X}{Y}$ ), the CHF of U is given by

${\phi}_{U}\left(s\right)=\mathbb{E}\left({\text{e}}^{\text{i}s\cdot \frac{X}{Y}}\right)={\displaystyle \underset{x\text{-}y\text{\hspace{0.17em}}\text{plane}}{\iint}}{\text{e}}^{\text{i}s\cdot \frac{x}{y}}\cdot f\left(x\mathrm{,}y\right)\text{d}x\text{d}y$ (6)

Under very general conditions, knowing a characteristic function enables us to reverse the process and find the corresponding PDF thus:

$f\left(x\right)=\frac{1}{2\pi}{\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}{\text{e}}^{-\text{i}sx}{\phi}_{x}\left(s\right)\text{d}s$ (7)
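Conversely, (7) can be sanity-checked by numerically inverting the standard-Cauchy CHF ${\text{e}}^{-|s|}$ ; for a real, even CHF only the cosine part survives. A sketch of ours, with arbitrary truncation parameters:

```python
import math

def invert_chf(x, S=40.0, n=4000):
    """Apply the inversion formula (7) to the standard-Cauchy CHF exp(-|s|).

    For a real, even CHF the inversion reduces to a cosine transform:
      f(x) ~ (1/pi) * integral over [0, S] of cos(s x) exp(-s) ds.
    Composite Simpson's rule; S and n are ad-hoc choices.
    """
    h = S / n
    total = 0.0
    for k in range(n + 1):
        s = k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * math.cos(s * x) * math.exp(-s)
    return total * h / 3 / math.pi

# The inversion should recover the Cauchy PDF 1 / (pi (1 + x^2)):
for x in (0.0, 0.7, 2.0):
    assert abs(invert_chf(x) - 1 / (math.pi * (1 + x * x))) < 1e-5
```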

To learn more about the mathematical details of transforming PDF into CHF and back the reader may like to consult [4] .

2. Joint CHF and Its Limit

Assume a RIS of size n from a Cauchy distribution with the following PDF

$g\left(x\right)=\frac{1}{\pi \left(1+{x}^{2}\right)}$ (8)

The joint CHF of X and ${X}^{2}$ (seen as distinct random variables) is then given by

${\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(\text{i}xs+\text{i}{x}^{2}t\right)g\left(x\right)\text{d}x$ (9)

This implies that the joint CHF of

${U}_{n}\stackrel{\text{def}}{=}\frac{{\displaystyle {\sum}_{i=1}^{n}{X}_{i}}}{n}$ (10)

and of

${V}_{n}^{2}\stackrel{\text{def}}{=}\frac{{\displaystyle {\sum}_{i=1}^{n}{X}_{i}^{2}}}{{n}^{2}}$ (11)

is (replace s by $\frac{s}{n}$ and t by $\frac{t}{{n}^{2}}$ in (9), and raise the result to the power of n)

${\phi}_{{U}_{n}\mathrm{,}{V}_{n}^{2}}\left(s\mathrm{,}t\right)={\left\{1+{\displaystyle {\int}_{-\infty}^{\infty}}\left[\mathrm{exp}\left(\frac{\text{i}xs}{n}+\frac{\text{i}{x}^{2}t}{{n}^{2}}\right)-1-\frac{\text{i}xs}{n}\right]g\left(x\right)\text{d}x\right\}}^{n}$ (12)

where adding and subtracting 1 facilitates taking the $n\to \infty $ limit; similarly, subtracting $\frac{\text{i}xs}{n}$ (which does not change the value of the integral, since this term integrates, in the principal-value sense, to 0) helps to make subsequent integrals converge. Note that taking the principal value of an integral means replacing ${\displaystyle {\int}_{-\infty}^{\infty}}\cdots \text{d}x$ by ${\mathrm{lim}}_{R\to \infty}{\displaystyle {\int}_{-R}^{R}}\cdots \text{d}x$ ; this is tacitly assumed from now on.

Replacing x by ny makes the previous expression into

${\left\{1+{\displaystyle {\int}_{-\infty}^{\infty}}\left[\mathrm{exp}\left(\text{i}ys+\text{i}{y}^{2}t\right)-1-\text{i}ys\right]g\left(ny\right)n\text{d}y\right\}}^{n}$ (13)

whose $n\to \infty $ limit is

$\mathrm{exp}\left\{{\displaystyle {\int}_{-\infty}^{\infty}}\left[\mathrm{exp}\left(\text{i}ys+\text{i}{y}^{2}t\right)-1-\text{i}ys\right]\frac{\text{d}y}{\pi {y}^{2}}\right\}$ (14)

since $g\left(ny\right)\to \frac{1}{\pi {n}^{2}{y}^{2}}$ (for any fixed $y\ne 0$ ) and ${\left(1+\frac{A}{n}\right)}^{n}\to \mathrm{exp}\left(A\right)$ . The last displayed expression is thus the joint characteristic function of U and ${V}^{2}$ (our notation for the corresponding limits of ${U}_{n}$ and ${V}_{n}^{2}$ ).

Note that only the tail behaviour of (8) was relevant in the end.

3. Finding CHF of (1)

Replacing t by $\text{i}{s}^{2}w$ (with w positive) results in

$\begin{array}{l}{\phi}_{U\mathrm{,}{V}^{2}}\left(s\mathrm{,}\text{i}{s}^{2}w\right)\\ =\mathrm{exp}\left\{{\displaystyle {\int}_{-\infty}^{\infty}}\left[\mathrm{exp}\left(\text{i}ys-{s}^{2}w{y}^{2}\right)-1-\text{i}ys\right]\frac{\text{d}y}{\pi {y}^{2}}\right\}=\mathrm{exp}\left[s\psi \left(w\right)\right]\end{array}$ (15)

where (after the $ys=z$ substitution)

$\begin{array}{l}\psi \left(w\right)\stackrel{\text{def}}{=}{\displaystyle {\int}_{-\infty}^{\infty}}\left[\mathrm{exp}\left(\text{i}z-w{z}^{2}\right)-1-\text{i}z\right]\frac{\text{d}z}{\pi {z}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=-\sqrt{\frac{2}{\pi}}Q\left(\frac{1}{\sqrt{2w}}\right)-2\sqrt{\frac{w}{\pi}}\mathrm{exp}\left(-\frac{1}{4w}\right)\end{array}$ (16)

with

$Q\left(\tau \right)\stackrel{\text{def}}{=}{\displaystyle {\int}_{0}^{\tau}}\mathrm{exp}\left(-\frac{{z}^{2}}{2}\right)\text{d}z$ (17)

Note that the value of $\psi \left(w\right)$ is thus always negative (this remains true for the real part of $\psi \left(w\right)$ after we make w complex; this becomes consequential later on).

Proof.

$\begin{array}{c}{\psi}^{\prime}\left(w\right)=-\frac{1}{\pi}{\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(\text{i}z-w{z}^{2}\right)\text{d}z\\ =-\frac{1}{\pi}\mathrm{exp}\left(-\frac{1}{4w}\right){\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(-w{\left(z-\frac{\text{i}}{2w}\right)}^{2}\right)\text{d}z\\ =-\frac{1}{\pi}\mathrm{exp}\left(-\frac{1}{4w}\right){\displaystyle {\int}_{-\infty -\text{i}/2w}^{\infty -\text{i}/2w}}\mathrm{exp}\left(-w{t}^{2}\right)\text{d}t\\ =-\frac{1}{\pi}\mathrm{exp}\left(-\frac{1}{4w}\right){\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(-w{t}^{2}\right)\text{d}t\\ =-\frac{\mathrm{exp}\left(-\frac{1}{4w}\right)}{\sqrt{\pi w}}\end{array}$

since moving the path of the t integration does not change the integral’s value (there are no singularities between the two paths) and the integrand tends to 0 sufficiently fast within the same strip when w tends to plus or minus infinity.

Therefore, $\psi \left(w\right)$ is given by

$-{\displaystyle \int}\frac{\mathrm{exp}\left(-\frac{1}{4w}\right)}{\sqrt{\pi w}}\text{d}w+C=\sqrt{\frac{2}{\pi}}{\displaystyle \int}\frac{\mathrm{exp}\left(-\frac{{z}^{2}}{2}\right)}{{z}^{2}}\text{d}z+C$ (18)

$=-\sqrt{\frac{2}{\pi}}\frac{\mathrm{exp}\left(-\frac{{z}^{2}}{2}\right)}{z}-\sqrt{\frac{2}{\pi}}{\displaystyle \int}\mathrm{exp}\left(-\frac{{z}^{2}}{2}\right)\text{d}z$ (19)

$=-2\sqrt{\frac{w}{\pi}}\mathrm{exp}\left(-\frac{1}{4w}\right)-\sqrt{\frac{2}{\pi}}Q\left(\frac{1}{\sqrt{2w}}\right)$ (20)

using the $w=\frac{1}{2{z}^{2}}$ substitution. Note that $C=0$ , since $Q\left(\infty \right)=\sqrt{\frac{\pi}{2}}$ and

$\psi \left(0\right)=\frac{1}{\pi}{\displaystyle {\int}_{-\infty}^{\infty}}\frac{\mathrm{exp}\left(\text{i}z\right)-1-\text{i}z}{{z}^{2}}\text{d}z=-1$ (21)

This is proved by replacing the path of integration (the real axis) by a clockwise half circle of radius R, centered on the origin and denoted $\mathcal{R}$ (legitimate, since the integrand has no singularities between the two paths), and then evaluating

$\underset{R\to \infty}{\mathrm{lim}}{\displaystyle {\int}_{\mathcal{R}}}\frac{\mathrm{exp}\left(\text{i}z\right)}{{z}^{2}}\text{d}z-\underset{R\to \infty}{\mathrm{lim}}{\displaystyle {\int}_{\mathcal{R}}}\frac{1+\text{i}z}{{z}^{2}}\text{d}z=0+\underset{R\to \infty}{\mathrm{lim}}{\displaystyle {\int}_{0}^{\pi}}\frac{\text{i}-R{\text{e}}^{\text{i}t}}{R{\text{e}}^{2\text{i}t}}\cdot {\text{e}}^{\text{i}t}\text{d}t=-\pi $ (22)

The first integral is equal to 0 due to Jordan’s lemma. In the second integral, we have traded the clockwise half circle for counter-clockwise (by changing the integrand’s sign).
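Both the closed form (16) and the value $\psi \left(0\right)=-1$ of (21) can be confirmed by brute-force numerical integration (our own sketch; the truncation radius and step counts are arbitrary choices):

```python
import math

def psi_numeric(w, L=50.0, n=20000):
    """psi(w) computed directly from its defining integral in (16).

    By symmetry only the real part survives:
      psi(w) ~ 2 * integral over [0, L] of (cos z * exp(-w z^2) - 1)/(pi z^2) dz
               - 2/(pi L),
    where -2/(pi L) is the exact tail of the -1/(pi z^2) term (the Gaussian
    factor makes the rest of the tail negligible for w > 0).
    """
    h = L / n
    total = 0.0
    for k in range(n + 1):
        z = k * h
        if z == 0.0:
            val = -(w + 0.5) / math.pi          # the integrand's limit at z = 0
        else:
            val = (math.cos(z) * math.exp(-w * z * z) - 1) / (math.pi * z * z)
        total += (1 if k in (0, n) else (4 if k % 2 else 2)) * val
    return 2 * total * h / 3 - 2 / (math.pi * L)

def psi_closed(w):
    """The closed form (16); Q(tau) = sqrt(pi/2) * erf(tau / sqrt 2)."""
    q = math.sqrt(math.pi / 2) * math.erf(1 / (2 * math.sqrt(w)))
    return (-math.sqrt(2 / math.pi) * q
            - 2 * math.sqrt(w / math.pi) * math.exp(-1 / (4 * w)))

for w in (0.25, 0.5, 1.0):
    assert abs(psi_numeric(w) - psi_closed(w)) < 1e-4
# The w -> 0 limit of the closed form reproduces psi(0) = -1 from (21):
assert abs(psi_closed(1e-6) + 1) < 1e-5
```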

Evaluating (15) at two distinct values of w (let us denote them w and ${w}_{0}$ ), dividing the difference by s, and integrating over s from 0 to infinity yields

$\begin{array}{l}{\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(\text{i}su\right)\frac{\mathrm{exp}\left(-{s}^{2}{v}^{2}w\right)-\mathrm{exp}\left(-{s}^{2}{v}^{2}{w}_{0}\right)}{s}\cdot f\left(u\mathrm{,}v\right)\text{d}u\text{d}v\text{d}s\\ ={\displaystyle {\int}_{0}^{\infty}}\frac{\mathrm{exp}\left[s\psi \left(w\right)\right]-\mathrm{exp}\left[s\psi \left({w}_{0}\right)\right]}{s}\mathrm{}\text{d}s\end{array}$ (23)

where $f\left(u\mathrm{,}v\right)$ is the joint PDF of U and V; note that its support is the whole upper half of the u-v plane.

Replacing s by $\frac{t}{v}$ (note that v cancels out from $\frac{\text{d}s}{s}=\frac{\text{d}t}{t}$ ) changes the left hand side of (23) to

${\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(\text{i}t\frac{u}{v}\right)\frac{\mathrm{exp}\left(-{t}^{2}w\right)-\mathrm{exp}\left(-{t}^{2}{w}_{0}\right)}{t}\cdot f\left(u\mathrm{,}v\right)\text{d}u\text{d}v\text{d}t$ (24)

$={\displaystyle {\int}_{0}^{\infty}}\phi \left(t\right)\cdot \frac{\mathrm{exp}\left(-{t}^{2}w\right)-\mathrm{exp}\left(-{t}^{2}{w}_{0}\right)}{t}\mathrm{}\text{d}t$ (25)

where $\phi \left(t\right)$ is the characteristic function of $\frac{U}{V}$ , the variable whose distribution we seek; the last expression is then equal to the right-hand side of (23).

4. Converting to PDF

Differentiating the resulting equation, i.e. (25) = (23), with respect to w then yields (after a sign reversal)

${\displaystyle {\int}_{0}^{\infty}}t\phi \left(t\right)\cdot \mathrm{exp}\left(-{t}^{2}w\right)\text{d}t=-{\psi}^{\prime}\left(w\right){\displaystyle {\int}_{0}^{\infty}}\mathrm{exp}\left[s\psi \left(w\right)\right]\text{d}s=\frac{{\psi}^{\prime}\left(w\right)}{\psi \left(w\right)}$ (26)

(in the last integral, $\psi \left(w\right)$ is just a negative constant, which is what makes the integral converge).

Finally, multiplying both sides of the previous equation by

$\frac{\mathrm{exp}\left(\frac{{y}^{2}}{4w}\right)}{\pi \sqrt{\pi w}}$ (27)

and integrating over w from 0 to $\text{i}\cdot \infty $ (a notation which implies following the positive imaginary axis) results in

$\frac{1}{\pi}{\displaystyle {\int}_{0}^{\infty}}\phi \left(t\right)\cdot \mathrm{exp}\left(-\text{i}ty\right)\text{d}t=\frac{1}{{\pi}^{3/2}}{\displaystyle {\int}_{0}^{\text{i}\cdot \infty}}\frac{\mathrm{exp}\left(\frac{{y}^{2}}{4w}\right)}{\sqrt{w}}\cdot \frac{{\psi}^{\prime}\left(w\right)}{\psi \left(w\right)}\text{d}w$ (28)

where $\sqrt{w}$ denotes the corresponding principal value, and (16) is now extended to complex arguments.

Proof. Following [5] , we write

$\begin{array}{c}{\displaystyle {\int}_{0}^{\text{i}\cdot \infty}}\frac{\mathrm{exp}\left(\frac{{y}^{2}}{4w}-{t}^{2}w\right)}{\sqrt{w}}\text{d}w=\sqrt{\frac{2y\text{i}}{t}}{\displaystyle {\int}_{0}^{\infty}}\mathrm{exp}\left(-\frac{\text{i}ty}{2}\left({z}^{2}+\frac{1}{{z}^{2}}\right)\right)\text{d}z\\ ={\text{e}}^{-\text{i}ty}\sqrt{\frac{2y\text{i}}{t}}{\displaystyle {\int}_{0}^{\infty}}\mathrm{exp}\left(-\frac{\text{i}ty}{2}{\left(z-\frac{1}{z}\right)}^{2}\right)\text{d}z\end{array}$ (29)

after introducing

$w=\frac{\text{i}{z}^{2}y}{2t}$ (30)

A further $z\mapsto \frac{1}{z}$ substitution makes it into

${\text{e}}^{-\text{i}ty}\sqrt{\frac{2y\text{i}}{t}}{\displaystyle {\int}_{0}^{\infty}}\mathrm{exp}\left(-\frac{\text{i}ty}{2}{\left(z-\frac{1}{z}\right)}^{2}\right)\frac{\text{d}z}{{z}^{2}}$ (31)

Adding (29) and (31), which are identical in terms of their value, and dividing by 2 yields

${\text{e}}^{-\text{i}ty}\sqrt{\frac{y\text{i}}{2t}}{\displaystyle {\int}_{0}^{\infty}}\mathrm{exp}\left(-\frac{\text{i}ty}{2}{\left(z-\frac{1}{z}\right)}^{2}\right)\left(1+\frac{1}{{z}^{2}}\right)\mathrm{}\text{d}z$ (32)

Finally, introducing $q=\left(z-\frac{1}{z}\right)$ results in

${\text{e}}^{-\text{i}ty}\sqrt{\frac{y\text{i}}{2t}}{\displaystyle {\int}_{-\infty}^{\infty}}\mathrm{exp}\left(-\frac{\text{i}ty}{2}{q}^{2}\right)\mathrm{}\text{d}q={\text{e}}^{-\text{i}ty}\cdot \frac{\sqrt{\pi}}{t}$ (33)

assuming that, to evaluate the last integral, we first replace t by $t-\text{i}\epsilon $ (to make it converge), and then take the $\epsilon \to 0$ limit of the answer (Cauchy-type integration).
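This Cauchy-type regularization can be checked numerically: with a small damping $\epsilon >0$ , the purely imaginary Gaussian integral agrees with $\sqrt{\pi /\left(\epsilon +\text{i}a\right)}$ (a sketch of ours; the damping, truncation, and step values are arbitrary):

```python
import cmath, math

def damped_fresnel(a, eps=0.01, L=40.0, n=32000):
    """Simpson evaluation of the integral over [-L, L] of exp(-(eps + i a) q^2) dq.

    This is the regularized version of the final integral in (33): a purely
    imaginary Gaussian made convergent by a small damping eps > 0 (the
    epsilon -> 0 prescription described in the text). eps, L, n are ad hoc.
    """
    c = eps + 1j * a
    h = 2 * L / n
    total = 0 + 0j
    for k in range(n + 1):
        q = -L + k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * cmath.exp(-c * q * q)
    return total * h / 3

# For Re c > 0 the Gaussian integral equals sqrt(pi / c) (principal branch):
approx = damped_fresnel(1.0)
exact = cmath.sqrt(math.pi / (0.01 + 1j))
assert abs(approx - exact) < 1e-4
```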

It is well known that the real part of the left hand side (and, therefore, of the right-hand side) of (28) yields the desired (clearly symmetric) PDF of $\frac{U}{V}$ , say $f\left(y\right)$ , due to $\phi \left(t\right)$ being real (and thus automatically symmetric as well). This then yields

$\begin{array}{c}f\left(y\right)=\frac{1}{{\pi}^{3/2}}\mathrm{Re}{\displaystyle {\int}_{0}^{\text{i}\cdot \infty}}\frac{\mathrm{exp}\left(\frac{{y}^{2}}{4w}\right)}{\sqrt{w}}\cdot \frac{{\psi}^{\prime}\left(w\right)}{\psi \left(w\right)}\text{d}w\\ =\frac{\sqrt{2}}{{\pi}^{3/2}}\mathrm{Re}{\displaystyle {\int}_{0}^{\text{i}\cdot \infty}}\frac{\mathrm{exp}\left(\frac{{y}^{2}}{4w}\right)}{{\left(2w\right)}^{3/2}}\cdot \frac{\text{d}w}{1+\frac{1}{\sqrt{2w}}\mathrm{exp}\left(\frac{1}{4w}\right)Q\left(\frac{1}{\sqrt{2w}}\right)}\end{array}$ (34)

based on (16) and (18). The problem is that there is no simple analytic answer to the last integral; worse yet, its integrand is highly oscillatory, preventing us from direct numerical integration.

In an attempt to break this impasse, we introduce the following substitution

$w=\frac{1}{2{\tau}^{2}}$ (35)

getting

$f\left(y\right)=\frac{\sqrt{2}}{{\pi}^{3/2}}\mathrm{Re}{\displaystyle {\int}_{0}^{\mathrm{exp}\left(-\text{i}\pi /4\right)\cdot \infty}}\frac{\mathrm{exp}\left(\frac{{y}^{2}{\tau}^{2}}{2}\right)\text{d}\tau}{1+\tau \mathrm{exp}\left(\frac{{\tau}^{2}}{2}\right)Q\left(\tau \right)}$ (36)

where $\tau $ now follows the complex ray at −45˚.

Unfortunately, even after this substitution, special measures are still needed to facilitate accurate numerical integration of the integral. For one, it becomes necessary to separately deal with the ${y}^{2}<1$ and ${y}^{2}>1$ regions.

5. Case of y^{2} < 1

When ${y}^{2}<1$ , it is legitimate to rotate the ray of the last integration to the positive real axis, since there are no singularities of the integrand between the old and the new path, and the integrand decreases sufficiently fast towards infinity throughout the sector between them. We then get

$f\left(y\right)={\displaystyle {\int}_{0}^{\infty}}\frac{\sqrt{\frac{2}{{\pi}^{3}}}\mathrm{exp}\left(\frac{{y}^{2}{\tau}^{2}}{2}\right)\text{d}\tau}{1+\tau \mathrm{exp}\left(\frac{{\tau}^{2}}{2}\right)Q\left(\tau \right)}$ (37)

where $\tau $ is real, and the integrand is “well behaved” (no oscillations). Results of numerical evaluation are displayed in Figure 1 (we are showing only the $y>0$ portion of the graph; visualize a mirror image of this curve when $-1<y<0$ ).
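A minimal implementation of (37) (ours; Q is evaluated through the standard error function, using $Q\left(\tau \right)=\sqrt{\pi /2}\cdot \text{erf}\left(\tau /\sqrt{2}\right)$ ) reproduces the monotone growth towards the singularity; the value 0.246 in the final assertion is our own numerical estimate of $f\left(0\right)$ , included only as a regression check:

```python
import math

def q_func(tau):
    """Q(tau) of (17), via the error function: sqrt(pi/2) * erf(tau/sqrt 2)."""
    return math.sqrt(math.pi / 2) * math.erf(tau / math.sqrt(2))

def f_inside(y, T=12.0, n=6000):
    """The PDF f(y), for y^2 < 1, from the real integral (37) (Simpson)."""
    c = math.sqrt(2 / math.pi ** 3)
    h = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * c * math.exp(0.5 * (y * t) ** 2) / (
            1 + t * math.exp(0.5 * t * t) * q_func(t))
    return total * h / 3

f0, f5, f9 = f_inside(0.0), f_inside(0.5), f_inside(0.9)
assert 0 < f0 < f5 < f9        # the density grows towards the y = 1 singularity
assert abs(f0 - 0.246) < 0.005  # our own estimate of f(0), for regression only
```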

To investigate the nature of the singularity at $y=\pm 1$ (visible in Figure 1) we notice that, for large $\tau $ , the integrand of (37) tends to

Figure 1. Resulting PDF for ${y}^{2}<1$ . Note the logarithmic-type singularity at $y=\pm 1$ .

$\frac{2}{{\pi}^{2}\tau}\cdot \mathrm{exp}\left(-\frac{{\tau}^{2}\left(1-{y}^{2}\right)}{2}\right)$ (38)

since $Q\left(\tau \right)\underset{\tau \to \infty}{\to}\sqrt{\pi /2}$ . Integrating the last function over $\tau $ from 1 to infinity (integrating from 0 would diverge; the large- $\tau $ form applies only in the tail anyway) yields

$\frac{1}{{\pi}^{2}}\cdot \Gamma \left(\mathrm{0,}\frac{1-{y}^{2}}{2}\right)$ (39)

where the second factor is a special case of the incomplete gamma function defined by

$\Gamma \left(\mathrm{0,}z\right)\stackrel{\text{def}}{=}-\gamma -\mathrm{ln}z-{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\frac{{\left(-z\right)}^{k}}{k\cdot k\mathrm{!}}$ (40)

and $\gamma $ is Euler’s gamma. This indicates that adding $\mathrm{ln}\left(1-{y}^{2}\right){\pi}^{-2}$ to $f\left(y\right)$ makes the resulting function finite (and thus amenable to a simple, Padé-type approximation, constructed later on).
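The series (40) is easy to validate against the standard defining integral $\Gamma \left(\mathrm{0,}z\right)={\displaystyle {\int}_{z}^{\infty}}{\text{e}}^{-t}{t}^{-1}\text{d}t$ (a sketch of ours; function names and truncation values are arbitrary):

```python
import math

def gamma0_series(z, terms=60):
    """Incomplete gamma Gamma(0, z) from the series (40)."""
    euler_gamma = 0.5772156649015329
    term_sum, fact = 0.0, 1.0
    for k in range(1, terms + 1):
        fact *= k
        term_sum += (-z) ** k / (k * fact)
    return -euler_gamma - math.log(z) - term_sum

def gamma0_integral(z, T=60.0, n=20000):
    """Gamma(0, z) = integral over [z, infinity) of e^{-t}/t dt (Simpson on [z, T])."""
    h = (T - z) / n
    total = 0.0
    for k in range(n + 1):
        t = z + k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * math.exp(-t) / t
    return total * h / 3

for z in (0.1, 0.5, 2.0):
    assert abs(gamma0_series(z) - gamma0_integral(z)) < 1e-6
```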

6. Case of y^{2} > 1

The situation becomes somewhat more difficult when ${y}^{2}>1$ . Now, we can rotate the −45˚ ray of the (36) integration to −90˚ (the negative imaginary axis), since the integrand decreases sufficiently fast towards infinity in this range of directions. The new path contributes 0 to the real part of (36), since the integrand (along the −90˚ ray) is real and $\text{d}\tau $ is purely imaginary. But in moving from −45˚ to −90˚ we have crossed infinitely many singularities located between the two rays; these can be found numerically (a rather tedious procedure) by solving

$1+\tau \mathrm{exp}\left(\frac{{\tau}^{2}}{2}\right)Q\left(\tau \right)=0$ (41)

The locations of the first 50 of these (they are all simple poles, slowly converging to the $\mathrm{Im}\tau =-\mathrm{Re}\tau $ line) are shown in Figure 2.
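On the negative imaginary axis ( $\tau =-b\text{i}$ with $b>0$ ), Equation (41) reduces to the real equation $1-b\mathrm{exp}\left(-{b}^{2}/2\right){\displaystyle {\int}_{0}^{b}}\mathrm{exp}\left({u}^{2}/2\right)\text{d}u=0$ , so the pole on the imaginary axis can be located by simple bisection (a sketch of ours; the bracket and step counts are arbitrary):

```python
import math

def grow_int(b, n=2000):
    """Integral over [0, b] of exp(u^2 / 2) du, by composite Simpson's rule."""
    h = b / n
    total = 0.0
    for k in range(n + 1):
        u = k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * math.exp(0.5 * u * u)
    return total * h / 3

def pole_equation(b):
    """(41) restricted to tau = -b i, which is real-valued."""
    return 1 - b * math.exp(-0.5 * b * b) * grow_int(b)

# Bisection on [1, 2]; pole_equation changes sign there.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pole_equation(mid) > 0:
        lo = mid
    else:
        hi = mid
# The text quotes this pole (at -1.30693 i) via the constant 1.30693:
assert abs(lo - 1.30693) < 5e-4
```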

We can then evaluate (36) using the Cauchy integral theorem; note that, at each of these poles (say at ${\tau}_{p}$ ), the $\tau $ derivative of $1+\tau \mathrm{exp}\left(\frac{{\tau}^{2}}{2}\right)Q\left(\tau \right)$ equals $-\frac{1}{{\tau}_{p}}$ , since

$\begin{array}{l}{\frac{\text{d}\left(1+\tau \mathrm{exp}\left(\frac{{\tau}^{2}}{2}\right)Q\left(\tau \right)\right)}{\text{d}\tau}|}_{\tau ={\tau}_{p}}\\ =\mathrm{exp}\left(\frac{{\tau}_{p}^{2}}{2}\right)Q\left({\tau}_{p}\right)+{\tau}_{p}^{2}\mathrm{exp}\left(\frac{{\tau}_{p}^{2}}{2}\right)Q\left({\tau}_{p}\right)+{\tau}_{p}\mathrm{exp}\left(\frac{{\tau}_{p}^{2}}{2}\right)\mathrm{exp}\left(-\frac{{\tau}_{p}^{2}}{2}\right)\end{array}$ (42)

and

Figure 2. First 50 roots of (41).

$1+{\tau}_{p}\mathrm{exp}\left(\frac{{\tau}_{p}^{2}}{2}\right)Q\left({\tau}_{p}\right)=0$ (43)

implying

$\mathrm{exp}\left(\frac{{\tau}_{p}^{2}}{2}\right)Q\left({\tau}_{p}\right)=-\frac{1}{{\tau}_{p}}$ (44)

This means that each pole (with the exception of the first one—see the next paragraph) contributes

$\sqrt{\frac{2}{{\pi}^{3}}}\mathrm{Re}\left[2\pi i\cdot {\tau}_{p}\mathrm{exp}\left(\frac{{\tau}_{p}^{2}{y}^{2}}{2}\right)\right]$ (45)

to (36).

The pole on the imaginary axis needs to be avoided by an infinitesimal half circle, which means that it contributes only one half of the above amount, namely

$\sqrt{\frac{2}{\pi}}\cdot 1.30693\mathrm{exp}\left(-\frac{{1.30693}^{2}{y}^{2}}{2}\right)$ (46)

The last function yields the asymptotic behaviour of $f\left(y\right)$ as ${y}^{2}\to \infty $ , and provides an excellent approximation (its maximum absolute error is about $2\times {10}^{-6}$ ) to this PDF when $\left|y\right|>1.8$ . Unfortunately, to build a numerical solution for the full range of ${y}^{2}>1$ values by adding contributions of sufficiently many of these poles (as done by [1] ) is rather tedious, as finding hundreds of these poles (needed to reach a good accuracy, especially when ${y}^{2}$ approaches 1) is a non-trivial task, and the resulting convergence is quite slow.

Nevertheless we would like to mention that, by adding the contribution of the second pole, namely

$-2\sqrt{\frac{2}{\pi}}\mathrm{Im}\left({\tau}_{2}\mathrm{exp}\left(\frac{{\tau}_{2}^{2}{y}^{2}}{2}\right)\right)$ (47)

where ${\tau}_{2}=-1.84906-3.45208\text{i}$ , to (46) results in an equally accurate approximation (its error is less than $2\times {10}^{-6}$ ) in the $\left|y\right|>1.7$ tails of $f\left(y\right)$ .

7. Alternate Solution

We are now going to backtrack and deal directly with (36). Even though its integrand is still highly oscillatory (with a frequency which increases as $\tau \to \infty \cdot {\text{e}}^{-\text{i}\cdot \pi /4}$ ), we can mitigate the problem by dividing the range of integration into two segments: first we integrate from 0 to $1-\text{i}$ (which does not pose any numerical difficulty), thus getting what we will call the first component of (36), then from $1-\text{i}$ to infinity (along the −45˚ ray), getting the second component.

To carry out the latter integration, we first note that, for large $\tau $ , $Q\left(\tau \right)$ can be expanded in the following manner

$Q\left(\tau \right)\simeq \sqrt{\frac{\pi}{2}}-\mathrm{exp}\left(-\frac{{\tau}^{2}}{2}\right)\cdot \left(\frac{1}{\tau}-\frac{1}{{\tau}^{3}}+\frac{3}{{\tau}^{5}}-\frac{5\cdot 3}{{\tau}^{7}}+\cdots \right)$ (48)
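Before substituting, the truncated expansion (48) can be checked against the exact $Q\left(\tau \right)$ (our sketch, using the error-function form of Q):

```python
import math

def q_func(tau):
    """Q(tau) of (17): sqrt(pi/2) * erf(tau / sqrt 2)."""
    return math.sqrt(math.pi / 2) * math.erf(tau / math.sqrt(2))

def q_asymptotic(tau):
    """The large-tau expansion (48), truncated after the 1/tau^7 term."""
    series = 1 / tau - 1 / tau ** 3 + 3 / tau ** 5 - 15 / tau ** 7
    return math.sqrt(math.pi / 2) - math.exp(-0.5 * tau * tau) * series

# The error is bounded by the first omitted (alternating) term:
for tau in (3.0, 4.0, 5.0):
    assert abs(q_func(tau) - q_asymptotic(tau)) < 1e-4
```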

Substituting this into the integrand of (36) and further expanding in powers of $\frac{1}{\tau}$ (up to and including the 6^{th} power), while keeping the two exponential factors intact (i.e. not expanding them in $\tau $ ), yields

$\left(\sqrt{\frac{2}{\pi}}\cdot \frac{1}{\tau}-\frac{2\mathrm{exp}\left(-\frac{{\tau}^{2}}{2}\right)}{{\tau}^{4}\pi}+\frac{6\mathrm{exp}\left(-\frac{{\tau}^{2}}{2}\right)}{{\tau}^{6}\pi}\right)\cdot \mathrm{exp}\left(-\frac{{\tau}^{2}\left({y}^{2}-1\right)}{2}\right)$ (49)

which can be $\tau $ -integrated, from $1-\text{i}$ to $\infty \left(1-\text{i}\right)$ , analytically; then we numerically integrate (over the same line segment) the difference between the integrands of (36) and (49)—since the resulting integrand now approaches zero (as $\tau $ increases) very quickly, the integration range becomes effectively finite ( $1-\text{i}$ to $6-6\text{i}$ already achieves very high accuracy), thus eliminating any troublesome oscillation. Adding the real part of the last two answers (and multiplying by $\sqrt{2{\pi}^{-3}}$ ) then provides the second component of $f\left(y\right)$ .

This technique yields the graph of Figure 3 (drawn for $y>1$ ; visualize a mirror image of this curve when $y<-1$ ):

8. Monte-Carlo Verification

One can always get a good idea about the distribution of any sample statistic (however complicated and inaccessible to analytic treatment) by generating, on a computer, a random sample with the required properties, and using it to compute the desired statistic’s value; this is then repeated as many times as feasible, displaying the results in a histogram.

We have done this with our (1), using $n=100$ and generating one million of its random values. Plotting the resulting histogram, together with the theoretical asymptotic PDF of the last three sections, yields Figure 4; visual comparison clearly indicates a good agreement between the two answers. The tiny discrepancy still discernible (mainly in the $0<y<1$ range) is due to the fact that $n=100$ is not large enough to have reached the $n\to \infty $ limit yet (trying to make the sample size substantially higher would run up against our computer’s capacity).

Note that, to be more economical, we have folded the negative and positive parts of the distribution into a single graph, effectively plotting the PDF of the absolute value of (1).
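A scaled-down version of this experiment can be sketched as follows (ours; far fewer replicates than the million used above, hence the loose tolerances; the value 0.246 is our own numerical estimate of the limit density at 0, obtained from (37)):

```python
import math, random

def self_normalized_cauchy(n, rng):
    """One draw of statistic (1) from a standard-Cauchy RIS of size n.

    Cauchy variates via the inverse CDF: X = tan(pi (U - 1/2)).
    """
    xs = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
    return sum(xs) / math.sqrt(sum(x * x for x in xs))

rng = random.Random(12345)            # fixed seed for reproducibility
reps, n = 20000, 100
draws = [self_normalized_cauchy(n, rng) for _ in range(reps)]

# The limit density is symmetric about 0 ...
mean = sum(draws) / reps
assert abs(mean) < 0.05
# ... and a histogram-cell estimate of the density near 0 should be in the
# ballpark of 0.246 (n = 100 has not fully reached the limit, so be loose):
near0 = sum(1 for d in draws if abs(d) < 0.25) / reps / 0.5
assert abs(near0 - 0.246) < 0.05
```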

9. Accurate Approximation

First we have to remember that we already have an excellent approximation to $f\left(y\right)$ in the $\left|y\right|>1.7$ region, given by the sum of (46) and (47); now we have to find a similarly accurate way of dealing with $\left|y\right|\le 1.7$ .

Figure 3. Resulting PDF when ${y}^{2}>1$ .

Figure 4. Comparing theoretical and empirical PDF’s.

We already know that adding

$\frac{\mathrm{ln}\left|1-{y}^{2}\right|}{{\pi}^{2}}$ (50)

to $f\left(y\right)$ removes the corresponding singularity when ${y}^{2}\le 1$ ; we can easily verify that the same is true when ${y}^{2}>1$ , since one of the terms in the analytic result of the (49) integration (once we take its real part and multiply by $\sqrt{2{\pi}^{-3}}$ ) equals

$\frac{1}{{\pi}^{2}}\cdot \mathrm{Re}\Gamma \left(\mathrm{0,}\text{i}\cdot \frac{{y}^{2}-1}{2}\right)$ (51)

where

$\mathrm{Re}\Gamma \left(\mathrm{0,}\text{i}\cdot z\right)=-\gamma -\mathrm{ln}z-{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{k}{z}^{2k}}{2k\cdot \left(2k\right)\mathrm{!}}$ (52)

whose singular part is given by (50). This proves that

$f\left(y\right)+\frac{\mathrm{ln}\left|1-{y}^{2}\right|}{{\pi}^{2}}$ (53)

is singularity-free for all values of y; unfortunately, that still does not make it sufficiently “smooth”, as the next paragraph indicates.

There is yet another subtle issue causing great difficulty when trying to build an approximate formula for (53): the function has a small (hardly noticeable) kink (a discontinuous second derivative) at ${y}^{2}=2$ ; this is not an artifact of the new technique—the same (rather surprising) phenomenon can be confirmed by the old technique of adding residues. Luckily, we can identify yet another term of the (49) contribution responsible for this discontinuity, and remove it by further subtracting the offending term from (53), thus getting

$f\left(y\right)+\frac{\mathrm{ln}\left|1-{y}^{2}\right|}{{\pi}^{2}}-\frac{2\mathrm{Re}{\left(2-{y}^{2}\right)}^{3/2}\left(3{y}^{2}-11\right)}{15{\pi}^{2}}\mathrm{.}$ (54)

This function is (finally!) sufficiently (even though not perfectly—tiny issues remain with higher derivatives at ${y}^{2}=2,{y}^{2}=3$ etc.) smooth to fit it (in the $\left|y\right|<1.7$ range) by a ratio of two polynomials (by minimizing the total error squared). This results in the following approximation

$\frac{0.666457-0.82876{y}^{2}+0.401154{y}^{4}-0.0800733{y}^{6}+0.00203206{y}^{10}}{1-0.572265{y}^{2}+0.0327094{y}^{4}+0.0207377{y}^{6}+0.00042241{y}^{10}+0.000261666{y}^{14}}.$ (55)

Its absolute error never exceeds $6\times {10}^{-6}$ , which is more than sufficient for any practical application. The form of this expression and the individual powers of y have been chosen somewhat arbitrarily; we do not claim that our choices are optimal.

Do not forget that (55) is approximating (54); a corresponding adjustment has to be made to convert it into an approximation for $f\left(y\right)$ .
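The whole chain can be tested for internal consistency (a check of ours): reconstructing $f\left(y\right)$ from (55) by undoing the two corrections of (54) should agree with direct numerical integration of (37) for $\left|y\right|<1$ :

```python
import math

def q_func(tau):
    """Q(tau) of (17): sqrt(pi/2) * erf(tau / sqrt 2)."""
    return math.sqrt(math.pi / 2) * math.erf(tau / math.sqrt(2))

def f_direct(y, T=12.0, n=6000):
    """f(y), for y^2 < 1, straight from the real integral (37) (Simpson)."""
    c = math.sqrt(2 / math.pi ** 3)
    h = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += wgt * c * math.exp(0.5 * (y * t) ** 2) / (
            1 + t * math.exp(0.5 * t * t) * q_func(t))
    return total * h / 3

def f_from_pade(y):
    """Undo the corrections of (54): f = (55) - ln|1 - y^2|/pi^2 + kink term."""
    y2 = y * y
    num = (0.666457 - 0.82876 * y2 + 0.401154 * y2 ** 2
           - 0.0800733 * y2 ** 3 + 0.00203206 * y2 ** 5)
    den = (1 - 0.572265 * y2 + 0.0327094 * y2 ** 2 + 0.0207377 * y2 ** 3
           + 0.00042241 * y2 ** 5 + 0.000261666 * y2 ** 7)
    log_term = math.log(abs(1 - y2)) / math.pi ** 2
    re_pow = (2 - y2) ** 1.5 if y2 < 2 else 0.0     # Re (2 - y^2)^{3/2}
    kink = 2 * re_pow * (3 * y2 - 11) / (15 * math.pi ** 2)
    return num / den - log_term + kink

for y in (0.0, 0.5, 0.8):
    assert abs(f_direct(y) - f_from_pade(y)) < 5e-4
```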

10. Conclusion

The aim of this article was to demonstrate that finding the distribution of a relatively simple sample statistic requires a skillful use of characteristic functions and a whole gamut of sophisticated mathematical techniques, including real and complex analysis, Fourier transform, and curve fitting. We hope that students of Statistics can benefit from the ingenuity of the authors of the original derivation of this distribution as presented in [1] , and from some extra details included in this article; the latter includes a different numerical approach to building the resulting PDF, expressing it in the form of an accurate Padé-type approximation (discovering an interesting discontinuity in the process), and verifying the answer by Monte Carlo simulation.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

[1] Logan, B.F., Mallows, C.L., Rice, S.O. and Shepp, L.A. (1973) Limit Distributions of Self-Normalized Sums. The Annals of Probability, 1, 788-809. https://doi.org/10.1214/aop/1176996846

[2] Efron, B. (1969) Student’s t-Test under Symmetry Conditions. Journal of the American Statistical Association, 64, 1278-1302. https://doi.org/10.1080/01621459.1969.10501056

[3] Spataru, A. (2014) Convergence and Precise Asymptotics for Series Involving Self-Normalized Sums. Journal of Theoretical Probability, 29, 267-276. https://doi.org/10.1007/s10959-014-0560-1

[4] Sneddon, I.N. (2010) Fourier Transforms. Dover Publications.

[5] Viola, M. How Does One Derive the Following Formula of Integration? https://math.stackexchange.com/q/3323687

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.