
Asymptotics and Well-Posedness of the Derived Distribution Density in a Study of Biovariability



1. Introduction

Sound is an indispensable part of our life, and we experience sound every day. A common way to measure the amount of sound is the decibel (dB) [1]. Sounds of less than 75 dB are at safe levels that do not damage our hearing. However, any sound above 85 dB is potentially harmful and can cause hearing loss. Examples of harmless sounds are normal conversation (60 dB) and the humming of a refrigerator (45 dB), whereas harmful sounds include noise from lawn mowers (90 dB) and gun shots or firecrackers (both 150 dB) [2]. The risk of hearing damage depends on the power of the sound as well as the length of exposure.

Hearing loss is a common health problem among veterans. In order to protect warfighters, starting in the 1960s the US Army conducted and funded research to assess the risk of hearing loss caused by intense impulse noise from explosive blasts and weapon firings [3]. Recently, Dr. Chan and his collaborators [4] developed a dose-response model for the assessment of injury caused by impulse noise, and a model for the possible recovery afterwards, based on chinchilla data. Chinchillas have hearing capabilities similar to those of humans and are therefore commonly used in hearing-related experiments.

In [5], we interpreted the empirical dose-response relation from [4] for exposure to multiple sound impulses in the framework of immunity. In [6], we viewed the empirical dose-response relation from a completely different angle, in the framework of biovariability. Together, these two studies [5] [6] showed that it is possible to interpret the empirical dose-response relation from either of the two extreme cases: immunity or biovariability. Here we would like to further our study in [6] to demonstrate that the derived distribution density of injury susceptibility in [6] is well-posed.

2. Mathematical Formulation of the Problem

In experiments [4] , the injury risk of a crowd caused by a sound exposure event is described by the logistic dose-response relation:

$p=\frac{1}{1+\mathrm{exp}\left(-\alpha \left(SELA-{D}_{50}\right)\right)}$. (1)

Here the dose of the sound exposure event is defined by the SELA (A-Weighted Sound Exposure Level) in units of dBA. The injury risk p of the crowd represents the average injury fraction of the crowd. In the logistic dose-response model (1), the parameter α determines the steepness of function p while D_{50} denotes the median injury dose. For a crowd, the median injury dose is the dose level at which half of the population is expected to be injured. For a crowd of subjects with a wide spectrum of heterogeneous individual injury probabilities, at the apparent median injury dose, a particular subject's individual injury probability may be below or above 50% due to biovariability. For injury of permanent threshold shift (PTS) > 25 dB, the values of parameters α and D_{50} are found to be $\alpha =0.1$ and ${D}_{50}=161$ dB, respectively [4]. It was also noticed that the parameter value α remains unchanged ($\alpha =0.1$) for PTS injuries of all cut-off levels, whereas the median injury dose D_{50} rises with the PTS cut-off level [4].
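As a concrete check of relation (1), the short sketch below (the function name is ours, not from [4]) evaluates the injury risk with the fitted parameters $\alpha =0.1$ and ${D}_{50}=161$ dBA:

```python
import math

# Sketch of the logistic dose-response relation (1), assuming the fitted
# parameters alpha = 0.1 and D50 = 161 dBA reported in [4] for PTS > 25 dB.
def injury_risk(sela, alpha=0.1, d50=161.0):
    """Average injury fraction of the crowd for a single exposure of dose `sela` (dBA)."""
    return 1.0 / (1.0 + math.exp(-alpha * (sela - d50)))
```

At the median injury dose the predicted risk is exactly one half, consistent with the definition of D_{50}.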

In the framework of the logistic dose-response relation, the injury risk of the crowd caused by a sequence of N acoustic impulses is given by the expression

${p|}_{\left(N\text{impulses}\right)}=\frac{1}{1+exp\left(-\alpha \left({S}_{\text{comb}}-{D}_{50}\right)\right)}$. (2)

where ${S}_{\text{comb}}$ is the effective combined dose for the whole sequence of impulses as a single sound exposure event. For a sequence of N identical impulses each with SELA value S, the effective combined dose, ${S}_{\text{comb}}$ , was observed to follow the dose combination rule [4] :

${S}_{\text{comb}}=S+\lambda {\mathrm{log}}_{10}N\mathrm{,}\text{\hspace{1em}}\lambda =3.44$. (3)

Thus, for a sequence of N impulses each with SELA value S, the injury risk takes the form

${p|}_{\left(N\text{impulses}\right)}=\frac{1}{1+\mathrm{exp}\left(-\alpha \left(S-{D}_{50}+\lambda {\mathrm{log}}_{10}N\right)\right)}\equiv \frac{1}{1+{\left(a{N}^{\eta}\right)}^{-1}}$. (4)

where parameters a and η are defined as

$a\equiv exp\left(\alpha \left(S-{D}_{50}\right)\right)\mathrm{,}\text{\hspace{1em}}\eta \equiv \frac{\alpha \lambda}{ln10}=0.1494$. (5)

Parameter a is related to probability ${p|}_{\left(\text{1impulse}\right)}$ by

$a\mathrm{=}\frac{p{\mathrm{|}}_{\text{(1impulse)}}}{1-p{\mathrm{|}}_{\text{(1impulse)}}}$. (6)

That is, parameter a is the injury odds of a hypothetical subject in the crowd with the average injury probability, ${p|}_{\left(\text{1impulse}\right)}$ , in responding to a single acoustic impulse.
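The chain from the dose combination rule (3) to the odds form (4)-(6) can be sketched numerically; the names below are ours, and the parameter values are those reported in [4]:

```python
import math

# Sketch of the dose combination rule (3) and the equivalent odds form (4)-(5),
# assuming alpha = 0.1, D50 = 161 dBA and lambda = 3.44 from [4].
ALPHA, D50, LAM = 0.1, 161.0, 3.44
ETA = ALPHA * LAM / math.log(10)   # eta in (5), approximately 0.1494

def combined_dose(s, n):
    """Effective combined dose (3) for n identical impulses of SELA value s."""
    return s + LAM * math.log10(n)

def risk_n_impulses(s, n):
    """Injury risk (4), written via the odds parameter a of (5)-(6)."""
    a = math.exp(ALPHA * (s - D50))
    return 1.0 / (1.0 + 1.0 / (a * n ** ETA))
```

By construction, evaluating the logistic relation at the combined dose (3) and evaluating the odds form (4) give identical results.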

In [6], under the assumption that N acoustic impulses act independently of each other in causing injury, regardless of whether one is preceded by another (i.e., no immunity effect), we explored the possibility of interpreting the observed logistic dose-response relation for a crowd in the framework of biovariability. For mathematical convenience, we consider the non-injury probability instead of the injury probability. Let $q\left(\omega \right)$ denote the individual non-injury probability of a random subject in the crowd in responding to one acoustic impulse. Here $q\left(\omega \right)$ is a random variable, due to the presence of biovariability. Let $\rho \left(q\right)$ be the distribution density of random variable $q\left(\omega \right)$. Mathematically, in the framework of biovariability, the average non-injury fraction for N acoustic impulses is expressed in terms of $\rho \left(q\right)$ as

${\text{\hspace{0.05em}}\left(\text{Non-injury}\text{\hspace{0.17em}}\text{fraction}\right)\text{\hspace{0.05em}}|}_{\left(N\text{impulses}\right)}=E\left[q{\left(\omega \right)}^{N}\right]={\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{q}^{N}\rho \left(q\right)\text{d}q$. (7)

On the other hand, experimentally, the injury risk was observed to follow the logistic dose-response relation (4), which relates to the average non-injury fraction as

${\text{\hspace{0.05em}}\left(\text{Non-injury}\text{\hspace{0.17em}}\text{fraction}\right)\text{\hspace{0.05em}}|}_{\left(N\text{impulses}\right)}={1-p|}_{\left(N\text{impulses}\right)}=1-\frac{1}{1+{\left(a{N}^{\eta}\right)}^{-1}}=\frac{1}{1+a{N}^{\eta}}$. (8)

For the theoretical model of biovariability to reproduce the experimentally observed results, the distribution density $\rho \left(q\right)$ has to satisfy an equation obtained by combining (7) and (8), which we write out below:

${\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{q}^{N}\rho \left(q\right)\text{d}q=\frac{1}{1+a{N}^{\eta}}\mathrm{,}\text{\hspace{1em}}N=\mathrm{1,2,}\cdots $. (9)

In [6] we solved Equation (9) analytically by constructing a power series expansion in the new variable $s=-{a}^{\frac{-1}{\eta}}\mathrm{ln}\left(q\right)$. Since the non-injury probability $q\left(\omega \right)$ is constrained to interval $\left(\mathrm{0,1}\right)$, variable s has the domain $\left(\mathrm{0,}+\infty \right)$. The distribution density of random variable $s\left(\omega \right)=-{a}^{\frac{-1}{\eta}}\mathrm{ln}\left(q\left(\omega \right)\right)$ is

$g\left(s\right)\equiv {a}^{\frac{1}{\eta}}\rho \left(\mathrm{exp}\left(-{a}^{\frac{1}{\eta}}s\right)\right)\mathrm{exp}\left(-{a}^{\frac{1}{\eta}}s\right)$. (10)

For density $g\left(s\right)$ , Equation (9) becomes

${\displaystyle {\int}_{0}^{+\infty}}\mathrm{exp}\left(-\left({a}^{\frac{1}{\eta}}N\right)s\right)g\left(s\right)\text{d}s=\frac{1}{1+{\left({a}^{\frac{1}{\eta}}N\right)}^{\eta}}$. (11)

In [6] we derived a power series solution for $g\left(s\right)$ :

$g\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}$. (12)

In power series (12), the gamma function $\Gamma \left(\eta \left(k+1\right)\right)$ in the denominator of the coefficient grows faster than any exponential function of k. As a result, power series (12) converges for all values of s. It follows that function $g\left(s\right)$ is well defined by power series (12) for all values of s. To be a proper density function, however, $g\left(s\right)$ must satisfy the two properties below:

$g\left(s\right)\ge 0$ (13)

${\int}_{0}^{+\infty}g\left(s\right)\text{d}s=1$. (14)

In [6] we rigorously proved these two properties for the special cases of $\eta =\frac{1}{2}$ and $\eta =\frac{1}{3}$. The analysis procedure differs quite significantly between these two special cases, and it is highly unlikely that the particular approach used in either case can be directly extended to the general case of arbitrary η. In the current study, we aim to verify the two properties semi-analytically for any given value of η. For that purpose, we need the numerical capability of calculating function $g\left(s\right)$ for all values of s, from small to large. Power series (12) has the nice property that, theoretically, it converges for all values of s. In practical computations, however, at large values of s the numerical accuracy of the power series summation is completely wiped out by the accumulation of round-off errors. In the power series summation, as s increases the net sum decreases while the magnitude of the largest term grows exponentially with s [6]. The combined effect of these two factors catastrophically magnifies, at large s, the influence of round-off errors on the numerical accuracy of the net sum. For practical computation of $g\left(s\right)$ in finite precision arithmetic, we need a robust numerical formula for $g\left(s\right)$ at large s. In the next section, we derive a general asymptotic approximation of $g\left(s\right)$ at large s. The synthesis of the power series and the asymptotics will provide a practical numerical tool for computing the distribution density $g\left(s\right)$ for all values of s at any given value of parameter η.
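As a concrete illustration, power series (12) can be summed directly; the sketch below is ours (the function name and the truncation heuristic are not from [6]) and, as just discussed, it is trustworthy only for small to moderate s in double precision:

```python
import math

def g_ps(s, eta=0.1494, tol=1e-20, kmax=10000):
    """Sum power series (12) for g(s). Reliable only for small to moderate s:
    at large s, round-off from the largest terms destroys the accuracy."""
    total = 0.0
    for k in range(kmax):
        x = eta * (k + 1)
        term = (-1.0) ** k * s ** (x - 1.0) / math.gamma(x)
        total += term
        # terms peak near x = s and then decay factorially; stop once past
        # the peak and below the tolerance
        if x > s and abs(term) < tol:
            break
    return total
```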

3. Asymptotics of g(s) at Large s

Now we derive a general asymptotic approximation of $g\left(s\right)$ at large s when η is a rational number. We then reasonably conjecture that the same asymptotic approximation is also valid even when η is irrational. In practical computation of $g\left(s\right)$ , the case of irrational η actually does not apply since all numerical calculations are carried out in finite precision arithmetic, using only rational numbers.

A rational number η takes the form $\eta =\frac{m}{n}$ where both m and n are positive integers. We rewrite the power series of $g\left(s\right)$ in terms of the reciprocal gamma function as follows:

$g\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{m}{n}\left(k+1\right)\right){s}^{\frac{m}{n}\left(k+1\right)-1}$ (15)

where $f\left(z\right)$ is the reciprocal gamma function defined as

$f\left(z\right)\equiv \frac{1}{\Gamma \left(z\right)}$. (16)

The advantage of working with the reciprocal gamma function $f\left(z\right)$ is that it is well defined and is analytic everywhere. In comparison, the gamma function $\Gamma \left(z\right)$ diverges at all non-positive integer values of z. The reciprocal gamma function $f\left(z\right)$ has the property

$\left(z-1\right)f\left(z\right)=\frac{z-1}{\Gamma \left(z\right)}=\frac{1}{\Gamma \left(z-1\right)}=f\left(z-1\right)$. (17)

Using this convenient property when differentiating $g\left(s\right)$ , we have

$\frac{\text{d}}{\text{d}s}g\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{m}{n}\left(k+1\right)-1\right){s}^{\frac{m}{n}\left(k+1\right)-2}$. (18)

Differentiating $g\left(s\right)$ repeatedly m times, we obtain a differential equation for $g\left(s\right)$.

$\begin{array}{c}\frac{{\text{d}}^{m}}{\text{d}{s}^{m}}g\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{m}{n}\left(k+1\right)-m\right){s}^{\frac{m}{n}\left(k+1\right)-m-1}\\ ={\left(-1\right)}^{n}{\displaystyle \underset{{k}^{\prime}=-n}{\overset{+\infty}{\sum}}}{\left(-1\right)}^{{k}^{\prime}}f\left(\frac{m}{n}\left({k}^{\prime}+1\right)\right){s}^{\frac{m}{n}\left({k}^{\prime}+1\right)-1}\\ ={\left(-1\right)}^{n}\left[{\displaystyle \underset{{k}^{\prime}=-n}{\overset{-1}{\sum}}}{\left(-1\right)}^{{k}^{\prime}}f\left(\frac{m}{n}\left({k}^{\prime}+1\right)\right){s}^{\frac{m}{n}\left({k}^{\prime}+1\right)-1}+g\left(s\right)\right]\\ ={\left(-1\right)}^{n}\left[-{\displaystyle \underset{\stackrel{^}{k}=1}{\overset{n-1}{\sum}}}{\left(-1\right)}^{\stackrel{^}{k}}f\left(\frac{-\stackrel{^}{k}m}{n}\right){s}^{-\left(\frac{\stackrel{^}{k}m}{n}+1\right)}+g\left(s\right)\right]\end{array}$ (19)

We derive an asymptotic approximation of $g\left(s\right)$ at large s based on this differential equation. We proceed with the assumption that $g\left(s\right)$ converges to 0 as s goes to $+\infty $. This assumption is reasonable although not directly derivable from the power series form of $g\left(s\right)$. Under this assumption, the asymptotics of $g\left(s\right)$ at large s takes the form

$g\left(s\right)~{\displaystyle \underset{j=1}{\sum}}\text{\hspace{0.05em}}{b}_{j}{s}^{-{\beta}_{j}},\text{\hspace{1em}}0<{\beta}_{1}<{\beta}_{2}<{\beta}_{3}<\cdots $. (20)

Differentiating the asymptotic form m times, we get

$\frac{{\text{d}}^{m}}{\text{d}{s}^{m}}g\left(s\right)~{\left(-1\right)}^{m}{\displaystyle \underset{j=1}{\sum}}\text{\hspace{0.05em}}{b}_{j}\frac{f\left({\beta}_{j}\right)}{f\left({\beta}_{j}+m\right)}{s}^{-\left({\beta}_{j}+m\right)}=O\left({s}^{-\left({\beta}_{1}+m\right)}\right)$. (21)

In (21) the first term is of the order $O\left({s}^{-\left({\beta}_{1}+m\right)}\right)$ while all other terms are asymptotically smaller than $O\left({s}^{-\left({\beta}_{1}+m\right)}\right)$. Using the result of (21), we rewrite (19) as

$O\left({s}^{-\left({\beta}_{1}+m\right)}\right)=\left[-{\displaystyle \underset{k=1}{\overset{n-1}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{-km}{n}\right){s}^{-\left(\frac{km}{n}+1\right)}+g\left(s\right)\right]$ (22)

which leads to

$g\left(s\right)={\displaystyle \underset{k=1}{\overset{n-1}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{-km}{n}\right){s}^{-\left(\frac{km}{n}+1\right)}+O\left({s}^{-\left({\beta}_{1}+m\right)}\right)$. (23)

Equating the leading terms on both sides of (23) yields ${\beta}_{1}=\frac{m}{n}+1$. Substituting this value of ${\beta}_{1}$ back into (23) gives us

$g\left(s\right)={\displaystyle \underset{k=1}{\overset{n-1}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{-km}{n}\right){s}^{-\left(\frac{km}{n}+1\right)}+O\left({s}^{-\left(\frac{m}{n}+m+1\right)}\right)$. (24)

Notice that in (24), the smallest term in the summation is $O\left({s}^{-\left(\frac{-m}{n}+m+1\right)}\right)$ which occurs at $k=n-1$. Thus, all terms in the summation are indeed asymptotically larger than $O\left({s}^{-\left(\frac{m}{n}+m+1\right)}\right)$.

In summary, expression (24) gives an asymptotic approximation of $g\left(s\right)$ at large s, accurate up to the order $O\left({s}^{-\left(m+1\right)}\right)$. Recall that (24) is derived by differentiating each of power series (15) and asymptotic form (20) m times, and then combining the results. To derive a more general asymptotic approximation, we differentiate each of the power series and the asymptotic form $\left(r\cdot m\right)$ times, where r is a positive integer. Asymptotics (24) above is the outcome in the special case of $r=1$. In the general case, differentiating power series (15) $\left(rm\right)$ times yields

$\begin{array}{c}\frac{{\text{d}}^{\left(rm\right)}}{\text{d}{s}^{\left(rm\right)}}g(s)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{m}{n}\left(k+1\right)-\left(rm\right)\right){s}^{\frac{m}{n}\left(k+1\right)-\left(rm\right)-1}\\ ={\left(-1\right)}^{\left(rn\right)}{\displaystyle \underset{{k}^{\prime}=-\left(rn\right)}{\overset{+\infty}{\sum}}}{\left(-1\right)}^{{k}^{\prime}}f\left(\frac{m}{n}\left({k}^{\prime}+1\right)\right){s}^{\frac{m}{n}\left({k}^{\prime}+1\right)-1}\\ ={\left(-1\right)}^{\left(rn\right)}\left[{\displaystyle \underset{{k}^{\prime}=-\left(rn\right)}{\overset{-1}{\sum}}}{\left(-1\right)}^{{k}^{\prime}}f\left(\frac{m}{n}\left({k}^{\prime}+1\right)\right){s}^{\frac{m}{n}\left({k}^{\prime}+1\right)-1}+g\left(s\right)\right]\\ ={\left(-1\right)}^{\left(rn\right)}\left[-{\displaystyle \underset{\stackrel{^}{k}=1}{\overset{\left(rn\right)-1}{\sum}}}{\left(-1\right)}^{\stackrel{^}{k}}f\left(\frac{-\stackrel{^}{k}m}{n}\right){s}^{-\left(\frac{\stackrel{^}{k}m}{n}+1\right)}+g\left(s\right)\right]\end{array}$ (25)

Differentiating asymptotic form (20) $\left(rm\right)$ times, we get

$\frac{{\text{d}}^{\left(rm\right)}}{\text{d}{s}^{\left(rm\right)}}g\left(s\right)~{\left(-1\right)}^{\left(rm\right)}{\displaystyle \underset{j=1}{\sum}}\text{\hspace{0.05em}}{b}_{j}\frac{f\left({\beta}_{j}\right)}{f\left({\beta}_{j}+\left(rm\right)\right)}{s}^{-\left({\beta}_{j}+\left(rm\right)\right)}=O\left({s}^{-\left(\frac{m}{n}+\left(rm\right)+1\right)}\right)$ (26)

where we have used ${\beta}_{1}=\frac{m}{n}+1$. Combining (25) and (26), we obtain

$\begin{array}{c}g\left(s\right)={\displaystyle \underset{k=1}{\overset{\left(rn\right)-1}{\sum}}}{\left(-1\right)}^{k}f\left(\frac{-km}{n}\right){s}^{-\left(\frac{km}{n}+1\right)}+O\left({s}^{-\left(\frac{m}{n}+\left(rm\right)+1\right)}\right)\\ ~{\displaystyle \underset{k=1}{\sum}}{\left(-1\right)}^{k}f\left(\frac{-km}{n}\right){s}^{-\left(\frac{km}{n}+1\right)}\end{array}$ (27)

Since integer r can be made as large as we like, in (27) we simply use the infinite series as a symbolic general asymptotic expansion, with the understanding that a particular asymptotic approximation will use only a partial sum of the infinite series. We need to point out that the infinite series in (27) actually diverges for all values of s, so the infinite summation does not have a well-defined sum. Instead, the infinite series serves only as a symbolic general asymptotics. Summation of a moderate number of terms, however, will provide an accurate asymptotic approximation of $g\left(s\right)$ at moderately large s and beyond. We will examine the approximation errors in detail later. In the analysis above, we did not assume that integers m and n are relatively prime. It turns out that asymptotics (27) can be written in terms of η only, without any reference to m or n. The expression in terms of η gives us the general asymptotics, which does not depend on a particular rational representation of η:

$g\left(s\right)~{\displaystyle \underset{k=1}{\sum}}{\left(-1\right)}^{k}f\left(-k\eta \right){s}^{-\left(k\eta +1\right)}$. (28)

The asymptotic expansion (28) depends only on η and is invariant with respect to different rational representations of η. It is plausible to conjecture that asymptotics (28) is also valid for irrational η although it is derived in the case of rational η. This conjecture cannot be tested numerically since all computations use a finite precision number representation system, which is a subset of all rational numbers. In the next section, in preparation for the numerical verification of properties (13) and (14), we build the necessary numerical tools for computing function $g\left(s\right)$.
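A partial sum of the asymptotic series (28) is straightforward to evaluate; the sketch below is ours (the name `g_aa` and the pole guard are our additions), keeping terms with $k\eta \le {N}_{g}$:

```python
import math

def g_aa(s, eta=0.1494, n_g=12):
    """Partial sum of the (divergent) asymptotic series (28), keeping terms up
    to order O(s^-(n_g + 1)); accurate at moderately large s and beyond."""
    total = 0.0
    k = 1
    while k * eta <= n_g:
        z = -k * eta
        # 1/Gamma(z) vanishes at the poles z = 0, -1, -2, ... of the gamma
        # function, so such terms contribute nothing and are skipped
        if abs(z - round(z)) > 1e-12:
            total += (-1.0) ** k * s ** (z - 1.0) / math.gamma(z)
        k += 1
    return total
```

For a fixed number of terms, the approximation improves as s grows, in line with the discussion of (27).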

4. Accurate Evaluation of g(s) in Finite Precision

In this section, we develop a practical numerical method for computing $g\left(s\right)$ in IEEE double precision, over the whole range of s and at any given value of parameter η.

Function $g\left(s\right)$ is defined straightforwardly by power series (12). Theoretically, $g\left(s\right)$ can be evaluated as accurately as we like by including a sufficiently large number of terms in the summation and carrying out the computation in arithmetic of sufficiently high numerical precision. Practically, with IEEE double precision arithmetic, the numerical accuracy of the power series summation at large s is completely ruined by the round-off errors from terms of the largest magnitude. In [6] we showed that the largest term grows roughly exponentially with s and behaves as

$\underset{k}{max}\left|\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}\right|\approx \frac{{\text{e}}^{s}}{\sqrt{2\text{\pi}s}}$ (29)

Even at a moderately large value of $s=40$, the largest term in the summation is more than 10^{16}, which in general will pollute the numerical value of $g\left(s\right)$ with an error of magnitude 1 or bigger. Thus, at large s, the power series summation is not a workable numerical tool for accurately calculating $g\left(s\right)$ in finite precision arithmetic.
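The growth estimate (29) is easy to check numerically by scanning the terms of (12) directly (our sketch; the scan range is chosen to stay below the overflow threshold of the double precision gamma function):

```python
import math

# Check estimate (29): at s = 40 the largest term of power series (12)
# is roughly e^s / sqrt(2*pi*s), i.e. more than 10^16.
s, eta = 40.0, 0.1494
largest = max(
    s ** (eta * (k + 1) - 1.0) / math.gamma(eta * (k + 1))
    for k in range(1100)   # keeps gamma's argument below ~165, avoiding overflow
)
estimate = math.exp(s) / math.sqrt(2.0 * math.pi * s)
```

A term of size 10^{16} carries an absolute round-off error of order 1 in double precision, which is why the summation fails at large s.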

The infinite series in the asymptotic expansion (28) diverges for all values of s. As a result, it does not make sense to include in the asymptotic approximation a very large number of terms from (28). When a moderate number of terms are used, however, the partial sum of (28) provides an accurate approximation of $g\left(s\right)$ at moderately large s and beyond. For a fixed number of terms, the larger the value of s is, the better the approximation. Therefore, at large s, function $g\left(s\right)$ can be evaluated fairly accurately by employing an asymptotic approximation with a suitable number of terms.

The contrasting and complementary behaviors of the power series around $s=0$ and the asymptotics at large s suggest a viable numerical strategy: use the power series summation for small s and switch to the asymptotic approximation when s is above a threshold s_{sw}, which is yet to be specified. The success of this strategy depends on the existence of an overlapping region of intermediate s in which both the power series summation and the asymptotic approximation yield reasonably good accuracy. Without this intermediate region, if the valid region of the power series summation were separated by a gap from the valid region of the asymptotic approximation, $g\left(s\right)$ could not be calculated accurately in the gap region. The existence of an overlapping region of intermediate s also provides us a numerical mechanism for identifying the overlapping region and selecting an optimal threshold s_{sw} for switching from one numerical formula to the other.

In the overlapping region, both the power series summation and the asymptotic approximation are reasonably accurate. Accordingly, the difference between the two numerical formulas should be fairly small inside the overlapping region. Below or above the overlapping region, only one of the two formulas is very accurate while the other is not. Consequently, outside the overlapping region, the difference between the two will be significantly more pronounced than inside it. To identify the overlapping region, we examine the difference between the power series summation and the asymptotic approximation as s increases from small values to large values. The magnitude of the minimum difference indicates the existence (or non-existence) of the overlapping region; the location of the minimum difference suggests an optimal threshold s_{sw} for switching. To proceed along this line, we introduce two notations.

• ${g}^{\left(PS\right)}\left(s\right)$ = power series summation (12)

• ${g}^{\left(AA\right)}\left(s\right)$ = asymptotic approximation (28) with terms up to $O\left({s}^{-\left({N}_{g}+1\right)}\right)$

Throughout this paper, all numerical results are computed in IEEE double precision arithmetic. In this section, simulations are focused on the case of $\eta =0.1494$ , the observed value of parameter η in experiments. We will explore other values of η in subsequent sections.

In Figure 1, we plot the difference between ${g}^{\left(PS\right)}\left(s\right)$ and ${g}^{\left(AA\right)}\left(s\right)$ as a function of s for several different values of N_{g}. Three asymptotic approximations, respectively with ${N}_{g}=9$, ${N}_{g}=12$ and ${N}_{g}=15$, are tested. For all three values of N_{g}, especially for ${N}_{g}=12$ and ${N}_{g}=15$, Figure 1 demonstrates clearly the existence of an overlapping valid region for the two numerical formulas. For s values smaller than 17, there is a visible discrepancy among the three curves because in this region $\left|{g}^{\left(PS\right)}\left(s\right)-{g}^{\left(AA\right)}\left(s\right)\right|$ is primarily attributed to the approximation error in ${g}^{\left(AA\right)}\left(s\right)$, which depends heavily on N_{g} when s is not very large. For s values bigger than 17, the three curves almost coincide with each other due to the dominant effect of round-off errors in ${g}^{\left(PS\right)}\left(s\right)$, which is independent of N_{g}. The overlapping region is in a neighborhood around $s=15$. The asymptotic approximation with ${N}_{g}=12$ has the best performance since it reaches the lowest minimum difference and attains the minimum difference at a smaller value of s, indicating that it is already valid even when s is not very large. In our subsequent simulations, we shall select N_{g} using this strategy. For ${N}_{g}=12$, Figure 1 suggests that an optimal threshold for switching from the power series summation to the asymptotic approximation is about ${s}_{sw}=15$. Based on these numerical findings, we adopt the numerical procedure below for computing function $g\left(s\right)$.

Figure 1. Difference between the power series summation ${g}^{\left(PS\right)}\left(s\right)$ and the asymptotic expansion ${g}^{\left(AA\right)}\left(s\right)$ for various values of N_{g}.

Numerical procedure:

• For $s\le {s}_{sw}$, $g\left(s\right)$ is computed using the partial sum of the first K_{f} terms in power series summation (12).

$g\left(s\right)\approx {\displaystyle \underset{k=0}{\overset{{K}_{f}}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\le {s}_{sw}$ (30)

where K_{f} is the number of terms needed to make the truncation error well below the machine precision of IEEE double precision. In our computations, we make the truncation error smaller than 10^{−}^{20}. Theoretically, K_{f} has an a priori estimate expressed in terms of s when s is moderately large. In practice, K_{f} is determined automatically in the numerical summation process by monitoring the magnitude of the terms.

• For $s\ge {s}_{sw}$ , $g\left(s\right)$ is calculated using the partial sum of terms up to $O\left({s}^{-\left({N}_{g}+1\right)}\right)$ in the asymptotic approximation (28).

$g\left(s\right)\approx {\displaystyle \underset{\begin{array}{c}k=1\\ k\text{\hspace{0.05em}}\eta \le {N}_{g}\end{array}}{\sum}}{\left(-1\right)}^{k}f\left(-k\eta \right){s}^{-\left(k\eta +1\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\ge {s}_{sw}$ (31)

The choice of ${s}_{sw}=15$ and ${N}_{g}=12$ above is based on numerical minimization of $\left|{g}^{\left(PS\right)}\left(s\right)-{g}^{\left(AA\right)}\left(s\right)\right|$ with respect to $\left(s\mathrm{,}{N}_{g}\right)$ in the case of $\eta =0.1494$. This particular set of $\left({s}_{sw}\mathrm{,}{N}_{g}\right)$ is for computing function $g\left(s\right)$ at $\eta =0.1494$. In a similar fashion, at each different value of η, an individual set of $\left({s}_{sw}\mathrm{,}{N}_{g}\right)$ is determined for that η and then used in evaluating $g\left(s\right)$.
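Putting the two pieces together, the switching procedure (30)-(31) can be sketched as a single routine; this is our code, with ${s}_{sw}=15$ and ${N}_{g}=12$ as selected above for $\eta =0.1494$, and the power series termination heuristic is ours:

```python
import math

def g_eval(s, eta=0.1494, s_sw=15.0, n_g=12, tol=1e-20):
    """Evaluate g(s): power series partial sum (30) for s <= s_sw,
    asymptotic partial sum (31) for s > s_sw."""
    if s <= s_sw:
        # power series (30): terms peak near eta*(k+1) = s, then decay
        # factorially; stop once past the peak and below the tolerance
        total = 0.0
        for k in range(100000):
            x = eta * (k + 1)
            term = (-1.0) ** k * s ** (x - 1.0) / math.gamma(x)
            total += term
            if x > s and abs(term) < tol:
                break
        return total
    # asymptotic approximation (31), keeping terms with k*eta <= N_g
    total, k = 0.0, 1
    while k * eta <= n_g:
        z = -k * eta
        if abs(z - round(z)) > 1e-12:  # 1/Gamma(z) = 0 at poles of Gamma
            total += (-1.0) ** k * s ** (z - 1.0) / math.gamma(z)
        k += 1
    return total
```

The two branches should agree closely across the switching threshold, reflecting the overlapping valid region identified in Figure 1.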

In the next section, we apply the numerical procedure described above to verify properties (13) and (14) numerically, and thus demonstrate the well-posedness of the distribution density.

5. Numerical Verification of Well-Posedness

We first verify that $g\left(s\right)$ defined by power series (12) is positive for all values of $s>0$ at any parameter value $\eta >0$. To examine both the sign and the magnitude of $g\left(s\right)$, we use the mapping $D\left(z\right)$, defined below, to display $g\left(s\right)$ as $D\left(g\left(s\right)\right)$. Let

$D\left(z\right)\equiv {z}_{0}{\mathrm{sinh}}^{-1}\left(\frac{z}{{z}_{0}}\right)$. (32)

where ${z}_{0}>0$ is a design parameter chosen according to the range we wish to focus on.

The mapping $D\left(z\right)$ has several design features for showing the sign and for accommodating a huge range over many orders of magnitude.

• $D\left(z\right)$ is an odd function of z, clearly showing the sign of z.

• $D\left(z\right)$ is a monotonically increasing function of z, preserving any trend of z.

• When $\left|z\right|$ is significantly below ${z}_{0}$ , the mapping $D\left(z\right)$ displays z in a linear scale:

$D\left(z\right)\approx z\text{\hspace{1em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\left|z\right|\ll {z}_{0}$.

• When $\left|z\right|$ is significantly above ${z}_{0}$ , the mapping $D\left(z\right)$ displays z in a logarithmic scale:

$D\left(z\right)\approx {z}_{0}\mathrm{ln}\left(\frac{2z}{{z}_{0}}\right)\text{\hspace{1em}}\text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}z\gg {z}_{0}$.
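The mapping (32) and its two regimes can be sketched in a few lines (our code; `D` follows the notation of (32)):

```python
import math

def D(z, z0=1e-4):
    """Sign-preserving display map (32): approximately linear for |z| << z0,
    approximately logarithmic for |z| >> z0."""
    return z0 * math.asinh(z / z0)
```

For $\left|z\right|$ well below ${z}_{0}$ the map returns approximately z itself, and for z well above ${z}_{0}$ it returns approximately ${z}_{0}\mathrm{ln}\left(2z/{z}_{0}\right)$, matching the two limiting forms listed above.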

We calculate $g\left(s\right)$ vs s numerically for several representative values of $\eta >0$, and plot $D\left(g\left(s\right)\right)$ in Figure 2 with ${z}_{0}={10}^{-4}$. Function $g\left(s\right)$ is positive for all values of η we examined.

Next we verify that ${\int}_{0}^{+\infty}g\left(s\right)\text{d}s=1$ for any parameter $\eta >0$. We integrate power series (12) term by term to write out the cumulative distribution function (CDF):

$G\left(s\right)\equiv {\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}g\left(u\right)\text{d}u={\displaystyle \underset{k=0}{\sum}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)}$. (33)

Again, theoretically power series (33) converges for all s, making $G\left(s\right)$ a well defined function for all s. But in numerical computations with IEEE double precision, at large s, power series summation (33) suffers catastrophically from complete loss of accuracy. As a result, using the power series summation to compute $G\left(s\right)$ at large s is not viable for demonstrating ${\mathrm{lim}}_{s\to +\infty}G\left(s\right)=1$. Instead, we consider the complementary cumulative distribution function at large s, defined as

${G}^{\left(C\right)}\left(s\right)\equiv {\displaystyle {\int}_{s}^{+\infty}}g\left(u\right)\text{d}u$. (34)

Figure 2. Plots of $D\left(g\left(s\right)\right)$ for several values of parameter η. The mapping $D\left(z\right)$ defined in (32) is designed for showing the sign and for accommodating a wide range of quantity z. The plots demonstrate that function $g\left(s\right)$ is positive for all values of parameter η tested.

To verify ${\int}_{0}^{+\infty}g\left(s\right)\text{d}s=1$, we only need to demonstrate numerically that $G\left(s\right)+{G}^{\left(C\right)}\left(s\right)=1$ at some value of s. This allows us to select a value of s such that both $G\left(s\right)$ and ${G}^{\left(C\right)}\left(s\right)$ can be computed accurately.

For $s\le {s}_{sw}$ , $G\left(s\right)$ can be accurately calculated using a partial sum of power series (33)

$G\left(s\right)\approx {\displaystyle \underset{k=0}{\overset{{K}_{f}}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)},\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\le {s}_{sw}$. (35)

For $s\ge {s}_{sw}$, $g\left(s\right)$ is well approximated by asymptotics (28). Using a partial sum of (28) with terms up to $O\left({s}^{-\left({N}_{g}+1\right)}\right)$ to replace $g\left(s\right)$ in the integral of ${G}^{\left(C\right)}\left(s\right)$, we write out an asymptotic approximation for ${G}^{\left(C\right)}\left(s\right)$

${G}^{\left(C\right)}\left(s\right)\approx -{\displaystyle \underset{\begin{array}{c}k=1\\ k\eta \le {N}_{g}\end{array}}{\sum}}{\left(-1\right)}^{k}f\left(-k\eta +1\right){s}^{-k\eta},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\ge {s}_{sw}$. (36)

Results (35) and (36) suggest that quantity ${\int}_{0}^{+\infty}g\left(u\right)\text{d}u=G\left(s\right)+{G}^{\left(C\right)}\left(s\right)$ has the optimal numerical accuracy if we evaluate it at $s={s}_{sw}$. Thus, we compute quantity $\left|G\left({s}_{sw}\right)+{G}^{\left(C\right)}\left({s}_{sw}\right)-1\right|$ and use it to judge whether ${\int}_{0}^{+\infty}g\left(u\right)\text{d}u=1$ is satisfied.
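As a sketch of this check (assuming, as comparison of (35) with (37) suggests, that f(z) denotes the reciprocal gamma function 1/Γ(z)), the following code evaluates (35) and (36) at a switch point. It uses η = 1/2, for which the closed form ${G}^{\left(C\right)}\left(s\right)={\text{e}}^{s}\text{erfc}\left(\sqrt{s}\right)$ is available as an independent reference (a known Mittag-Leffler identity); the constants $K_f = 200$, $N_g = 15$, $s_{sw} = 15$ are illustrative choices, not necessarily the paper's:

```python
import math

def recip_gamma(z):
    """1/Gamma(z), taking f(z) in (36)-(37) to be the reciprocal gamma
    function (an assumption). The reflection formula covers z <= 1/2,
    where poles of Gamma correctly produce 0."""
    if z > 0.5:
        return 1.0 / math.gamma(z)
    return math.gamma(1.0 - z) * math.sin(math.pi * z) / math.pi

def G_power(s, eta, K=200):
    """Partial sum (35) of the power series for G(s)."""
    return sum((-1) ** k * recip_gamma(eta * (k + 1) + 1.0) * s ** (eta * (k + 1))
               for k in range(K + 1))

def Gc_asym(s, eta, Ng=15):
    """Asymptotic approximation (36) for the complementary CDF."""
    total, k = 0.0, 1
    while k * eta <= Ng:
        total -= (-1) ** k * recip_gamma(1.0 - k * eta) * s ** (-k * eta)
        k += 1
    return total

eta, s_sw = 0.5, 15.0
print(abs(G_power(s_sw, eta) + Gc_asym(s_sw, eta) - 1.0))
```

For this η, the sum $G+{G}^{\left(C\right)}$ agrees with 1 to within roughly $10^{-7}$ at $s_{sw}=15$, limited by the optimal truncation error of the asymptotic series.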

Figure 3 plots $\left|G\left({s}_{sw}\right)+{G}^{\left(C\right)}\left({s}_{sw}\right)-1\right|$ vs parameter η. It is clear that for any parameter value η in $\left(\mathrm{0,1}\right)$ , the assertion ${\int}_{0}^{+\infty}g\left(u\right)\text{d}u=1$ is indeed valid within the numerical approximation error.

Figure 3. The difference between ${\int}_{0}^{+\infty}g\left(u\right)\text{d}u$ and 1 for different values of η.

In summary, we have numerically verified 1) $g\left(s\right)>0$ for $s>0$ and 2) ${\int}_{0}^{+\infty}g\left(s\right)\text{d}s=1$ . It follows that function $g\left(s\right)$ defined by power series (12) is mathematically a proper distribution density.

With the property $G\left(+\infty \right)=1$ established, we write out a unified numerical procedure for computing function $G\left(s\right)$ over the full range of $s\in \left(\mathrm{0,}+\infty \right)$ ,

$G\left(s\right)\approx \{\begin{array}{ll}{\displaystyle \underset{k\mathrm{=0}}{\overset{{K}_{f}}{\sum}}}{\left(-1\right)}^{k}f\left(\eta \left(k+1\right)+1\right){s}^{\eta \left(k+1\right)}\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\le {s}_{sw}\hfill \\ {\displaystyle \underset{\begin{array}{c}k\mathrm{=0}\\ k\eta \le {N}_{g}\end{array}}{\sum}}{\left(-1\right)}^{k}f\left(-k\eta +1\right){s}^{-k\eta}\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\ge {s}_{sw}\hfill \end{array}$ (37)

For readers’ convenience, we also summarize below the unified numerical procedure for computing function $g\left(s\right)$ over the full range of $s\in \left(\mathrm{0,}+\infty \right)$ ,

$g\left(s\right)\approx \{\begin{array}{ll}{\displaystyle \underset{k=0}{\overset{{K}_{f}}{\sum}}}{\left(-1\right)}^{k}f\left(\eta \left(k+1\right)\right){s}^{\eta \left(k+1\right)-1}\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\le {s}_{sw}\hfill \\ {\displaystyle \underset{\begin{array}{c}k=1\\ k\eta \le {N}_{g}\end{array}}{\sum}}{\left(-1\right)}^{k}f\left(-k\eta \right){s}^{-\left(k\eta +1\right)}\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\ge {s}_{sw}\hfill \end{array}$ (38)
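A minimal implementation sketch of procedure (38), again assuming f(z) = 1/Γ(z) and using the illustrative values $s_{sw}=15$, $K_f=200$, $N_g=15$ (the paper's actual parameter choices may differ):

```python
import math

def recip_gamma(z):
    # 1/Gamma(z); the reflection formula handles z <= 1/2, including the
    # poles of Gamma (which correctly give 0). Assumes f in (38) = 1/Gamma.
    if z > 0.5:
        return 1.0 / math.gamma(z)
    return math.gamma(1.0 - z) * math.sin(math.pi * z) / math.pi

def g_unified(s, eta, s_sw=15.0, Kf=200, Ng=15):
    """Sketch of procedure (38): power series below s_sw, asymptotics above."""
    if s <= s_sw:
        return sum((-1) ** k * recip_gamma(eta * (k + 1)) * s ** (eta * (k + 1) - 1.0)
                   for k in range(Kf + 1))
    total, k = 0.0, 1
    while k * eta <= Ng:
        total += (-1) ** k * recip_gamma(-k * eta) * s ** (-(k * eta + 1.0))
        k += 1
    return total
```

For η = 1/2 both branches can be checked against the closed form $g\left(s\right)=1/\sqrt{\pi s}-{\text{e}}^{s}\text{erfc}\left(\sqrt{s}\right)$, which the power series and asymptotics reproduce on their respective sides of the switch point.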

6. Pade Approximations

In the previous section, we verified $G\left(+\infty \right)=1$. We now impose this property as a constraint at $s=+\infty $ and construct a Pade approximation [7] [8] for $G\left(s\right)$ based on its power series around $s=0$. As we will see, the Pade approximation provides an accurate and efficient approximation over the full range of $s\in \left(\mathrm{0,}+\infty \right)$.

$G\left(s\right)={\displaystyle \underset{k=1}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{c}_{k}{x}^{k},\text{\hspace{1em}}x={s}^{\eta},\text{\hspace{1em}}{c}_{k}=\frac{{\left(-1\right)}^{k+1}}{\Gamma \left(k\eta +1\right)}$. (39)

Note that ${c}_{0}=0$ , so one can equivalently start the summation at $k=0$ in (39). Taking this feature and the constraint $G\left(+\infty \right)=1$ into consideration, we adopt a Pade approximation of order $\left[n/n\right]$ , of the form

$R\left(s;n\right)=\frac{{a}_{1}x+{a}_{2}{x}^{2}+\cdots +{a}_{n-1}{x}^{n-1}+{x}^{n}}{{b}_{0}+{b}_{1}x+{b}_{2}{x}^{2}+\cdots +{b}_{n-1}{x}^{n-1}+{x}^{n}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}x={s}^{\eta}$. (40)

where ${a}_{0}=0$ follows from ${c}_{0}=0$ , and ${a}_{n}={b}_{n}=1$ follows from $G\left(+\infty \right)=1$ and normalization. There are $\left(2n-1\right)$ unknown coefficients in Pade approximation (40). To determine these coefficients, we multiply both (40) and (39) by the denominator ${\sum}_{k=0}^{n}{b}_{k}{x}^{k}$ , and then match the ${x}^{k}$ terms for $1\le k\le \left(2n-1\right)$. The product of two power series has the expression:

$\left({\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{b}_{k}{x}^{k}\right)\left({\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{c}_{k}{x}^{k}\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\left({\displaystyle \underset{j=0}{\overset{k}{\sum}}}\text{\hspace{0.05em}}{b}_{j}{c}_{k-j}\right){x}^{k}$.

For k in the range $n\le k\le \left(2n-1\right)$ , equating the corresponding coefficients of the ${x}^{k}$ terms on the two sides yields equations for the unknowns $\left\{{b}_{k}\right\}$.

${\displaystyle \underset{j=0}{\overset{n-1}{\sum}}}\text{\hspace{0.05em}}{b}_{j}{c}_{k-j}={w}_{k},\text{\hspace{1em}}k=n,n+1,\cdots ,\left(2n-1\right)$ (41)

where ${w}_{k}$ is known and has the expression

${w}_{k}=\{\begin{array}{ll}1,\hfill & k=n\hfill \\ -{c}_{k-n},\hfill & \left(n+1\right)\le k\le \left(2n-1\right)\hfill \end{array}$ (42)

Equation (41) is an $n\times n$ linear system for $\left\{{b}_{0}\mathrm{,}{b}_{1}\mathrm{,}\cdots \mathrm{,}{b}_{n-1}\right\}$. Coefficients $\left\{{b}_{k}\right\}$ are determined by solving linear system (41). Once coefficients $\left\{{b}_{k}\right\}$ are known, we write out coefficients $\left\{{a}_{k}\right\}$ by matching the coefficients of ${x}^{k}$ terms for $1\le k\le \left(n-1\right)$.

${a}_{k}={\displaystyle \underset{j=0}{\overset{k-1}{\sum}}}\text{\hspace{0.05em}}{b}_{j}{c}_{k-j},\text{\hspace{1em}}k=1,2,\cdots ,\left(n-1\right)$ (43)
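The construction (41)-(43) can be sketched as follows. The routine builds the linear system, solves for $\left\{{b}_{k}\right\}$, recovers $\left\{{a}_{k}\right\}$, and then re-expands the resulting rational function as a power series to confirm that the coefficients ${c}_{k}$ are reproduced through ${x}^{2n-1}$, which is the defining property of the Pade fit. Helper names are ours, not the paper's:

```python
import numpy as np
from math import gamma

def pade_G_coeffs(eta, n):
    """Coefficients of Pade approximation (40) for G(s), via (41)-(43)."""
    # c_k from (39); c_0 = 0
    c = [0.0] + [(-1) ** (k + 1) / gamma(k * eta + 1.0) for k in range(1, 2 * n)]
    # (41): sum_{j=0}^{n-1} b_j c_{k-j} = w_k for k = n .. 2n-1, with w from (42)
    M = np.array([[c[k - j] for j in range(n)] for k in range(n, 2 * n)])
    w = np.array([1.0] + [-c[k - n] for k in range(n + 1, 2 * n)])
    b = np.linalg.solve(M, w)
    # (43): a_k for 1 <= k <= n-1; a_0 = 0 and a_n = 1 are fixed
    a = [0.0] + [sum(b[j] * c[k - j] for j in range(k)) for k in range(1, n)] + [1.0]
    return np.array(a), np.append(b, 1.0)  # numerator, denominator coeffs in x

# sanity check: the series expansion of num/den must reproduce c_k up to x^(2n-1)
eta, n = 0.1494, 5
num, den = pade_G_coeffs(eta, n)
c = [0.0] + [(-1) ** (k + 1) / gamma(k * eta + 1.0) for k in range(1, 2 * n)]
q = []  # series coefficients of the rational function, by long division
for k in range(2 * n):
    nk = num[k] if k < len(num) else 0.0
    q.append((nk - sum(den[j] * q[k - j] for j in range(1, min(k, n) + 1))) / den[0])
print(max(abs(q[k] - c[k]) for k in range(2 * n)))
```

The printed residual should be near machine precision, confirming that the solved coefficients satisfy the matching conditions.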

To estimate the error of Pade approximation $R\left(s\mathrm{;}n\right)$ defined in (40), we use $G\left(s\right)$ computed with the unified numerical procedure (37) as the “exact” solution to compare with. We calculate the difference between the numerical value of $G\left(s\right)$ and the Pade approximation $R\left(s\mathrm{;}n\right)$. Figure 4 shows $\left|G\left(s\right)-R\left(s\mathrm{;}n\right)\right|$ vs s for parameter value $\eta =0.1494$. Four Pade approximations, respectively with n = 3, 4, 5, and 6, are shown, where n is the highest power used in Pade approximation (40). For n = 4, the approximation error of $R\left(s\mathrm{;}n\right)$ is already below $10^{-8}$, which is comparable to the errors of both the power series summation and the asymptotic approximation in the overlapping region around $s=15$. The numerical error of procedure (37) varies with the magnitude of s and is largest in the overlapping region: below the overlapping region, the power series summation is less polluted by round-off errors and thus more accurate; above the overlapping region, the asymptotic approximation becomes more accurate. The numerical accuracy of $G\left(s\right)$ calculated using (37) is significantly better than $10^{-8}$ when s is outside the overlapping region. This property of $G\left(s\right)$ will help us decipher the error behavior of higher order Pade approximations.

Figure 4. Discrepancy between $G\left(s\right)$ and Pade approximation $R\left(s\mathrm{;}n\right)$.

When n is increased to $n=5$ , the difference $\left|G\left(s\right)-R\left(s\mathrm{;}n\right)\right|$ is below $10^{-10}$ outside the overlapping region, implying that the error of the Pade approximation is also below $10^{-10}$ there. The difference $\left|G\left(s\right)-R\left(s\mathrm{;}n\right)\right|$ increases significantly in the overlapping region. However, it is highly unlikely that the approximation error of the Pade approximation jumps significantly only in the overlapping region while remaining below $10^{-10}$ elsewhere: the Pade approximation consists of a single rational function for all values of s and does not involve any switching. It is much more likely that the approximation error of the Pade approximation actually remains below $10^{-10}$ over the full range of s, and that the significant increase in $\left|G\left(s\right)-R\left(s\mathrm{;}n\right)\right|$ is solely caused by the increased numerical error of $G\left(s\right)$ in the overlapping region. If this is true, then for $n=5$ the Pade approximation is already more accurate than the unified numerical procedure (37) in IEEE double precision. The smaller numerical error of the Pade approximation is mainly attributable to the fact that it has only a few terms, and consequently is much less affected by round-off errors in IEEE double precision. For $n=6$ , Figure 4 shows that the increase of $\left|G\left(s\right)-R\left(s\mathrm{;}n\right)\right|$ near the overlapping region is much more pronounced than in the case of $n=5$. The pattern of increase strongly suggests that it is caused by the increased numerical error of $G\left(s\right)$ near the overlapping region. Figure 4 indicates that the true numerical error of the Pade approximation is very likely below $10^{-12}$ throughout the full range of s, much more accurate than the unified numerical procedure (37) in IEEE double precision.

We carry out single precision computations to support the assertion made above: in finite precision arithmetic, Pade approximations can be more accurate than the power series, even though the power series is theoretically exact in infinite precision arithmetic. We use the double precision result of $G\left(s\right)$ as the exact solution to compare with. We compute the power series summation, the asymptotics, and the Pade approximations in single precision, and then examine the numerical errors of the single precision results, shown in Figure 5. As s increases, the power series summation starts losing accuracy due to the exponential growth of the largest term and the associated round-off error in summation. Meanwhile, as s increases, the approximation error of the asymptotics decreases and its numerical accuracy improves. In contrast, the numerical errors of the Pade approximations remain fairly steady with respect to s and decay very rapidly as n is increased. Pade approximation $R\left(s\mathrm{;3}\right)$ is already significantly more accurate than both the power series summation and the asymptotics in a large neighborhood of the overlapping region (for single precision arithmetic, the overlapping region is around $s=6$ ). It is evident in Figure 5 that the numerical error of $R\left(s\mathrm{;4}\right)$ is primarily caused by round-off errors and its true approximation error is below the machine epsilon of the single precision system. In single precision, the actually realized numerical accuracy of $R\left(s\mathrm{;4}\right)$ is uniformly much higher than that of both the power series summation and the asymptotics over the full range of s.

Figure 5. Errors of power series, asymptotics and Pade approximations in single precision computations.
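The single versus double precision comparison can be sketched as follows: the same term-by-term summation is carried out in numpy's float32 and float64, again at the boundary value η = 1 where $1-{\text{e}}^{-s}$ serves as an exact reference (an illustrative stand-in for the paper's η in (0, 1)):

```python
import math
import numpy as np

def G_series_fp(s, dtype, eta=1.0, K=120):
    """Sum power series (33) term by term in the given floating-point
    precision (np.float32 or np.float64); term magnitudes are prepared
    in double via lgamma, so the comparison isolates summation round-off."""
    total = dtype(0.0)
    for k in range(K + 1):
        m = math.exp(eta * (k + 1) * math.log(s)
                     - math.lgamma(eta * (k + 1) + 1.0))
        total = total + (dtype(m) if k % 2 == 0 else -dtype(m))
    return float(total)

s = 15.0
exact = 1.0 - math.exp(-s)          # closed form at eta = 1
err32 = abs(G_series_fp(s, np.float32) - exact)
err64 = abs(G_series_fp(s, np.float64) - exact)
print(err32, err64)
```

At s = 15 the largest term is of order 10^5, so single precision round-off already corrupts the leading digits of the sum while double precision still delivers many correct digits, mirroring the behavior seen in Figure 5.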

Next, we construct a Pade approximation for density $g\left(s\right)$. The power series of $g\left(s\right)$ around $s=0$ has the form

$g\left(s\right)=\eta {s}^{\eta -1}\frac{\text{d}G\left(s\left(x\right)\right)}{\text{d}x}={s}^{\eta -1}{\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{\tilde{c}}_{k}{x}^{k}$

$x={s}^{\eta},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\tilde{c}}_{k}=\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}$. (44)

Note that since ${\mathrm{lim}}_{x\to +\infty}G\left(s\left(x\right)\right)=1$ , we have ${\mathrm{lim}}_{x\to +\infty}\frac{\text{d}}{\text{d}x}G\left(s\left(x\right)\right)=0$ in (44), which suggests adopting a Pade approximation of order $\left[\left(n-1\right)/n\right]$ , of the form

$r\left(s;n\right)={s}^{\eta -1}\left(\frac{{\tilde{a}}_{0}+{\tilde{a}}_{1}x+\cdots +{\tilde{a}}_{n-1}{x}^{n-1}}{{\tilde{b}}_{0}+{\tilde{b}}_{1}x+\cdots +{\tilde{b}}_{n-1}{x}^{n-1}+{x}^{n}}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}x={s}^{\eta}$. (45)

There are 2n unknown coefficients in the Pade approximation of $g\left(s\right)$. To determine them, we multiply both (45) and (44) by the denominator ${\sum}_{k=0}^{n}{\tilde{b}}_{k}{x}^{k}$ (with ${\tilde{b}}_{n}=1$ ), match the coefficients of ${x}^{k}$ terms for $0\le k\le \left(2n-1\right)$ to form a linear system, and then solve for the unknowns, following a procedure similar to the one used in the Pade approximation of $G\left(s\right)$.
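A sketch of this construction for $g\left(s\right)$, mirroring the routine for $G\left(s\right)$: the matching conditions below follow from multiplying (44) by the denominator of (45) with ${\tilde{b}}_{n}=1$, and the final loop re-expands the rational part to confirm the coefficients ${\tilde{c}}_{k}$ are reproduced through ${x}^{2n-1}$. Helper names are ours, not the paper's:

```python
import numpy as np
from math import gamma

def pade_g_coeffs(eta, n):
    """Coefficients of Pade approximation (45) for g(s)/s^(eta-1),
    built from the series coefficients c~_k of (44)."""
    ct = [(-1) ** k / gamma(eta * (k + 1)) for k in range(2 * n)]  # c~_k
    # numerator of (45) has degree n-1, so matching x^k for k = n..2n-1
    # gives sum_{j=0}^{n-1} b~_j c~_{k-j} = -c~_{k-n}   (using b~_n = 1)
    M = np.array([[ct[k - j] for j in range(n)] for k in range(n, 2 * n)])
    w = np.array([-ct[k - n] for k in range(n, 2 * n)])
    bt = np.linalg.solve(M, w)
    # a~_k for 0 <= k <= n-1 from matching the low-order terms
    at = [sum(bt[j] * ct[k - j] for j in range(k + 1)) for k in range(n)]
    return np.array(at), np.append(bt, 1.0)

# sanity check: the series of the rational part must reproduce c~_k up to x^(2n-1)
eta, n = 0.1494, 5
at, bt = pade_g_coeffs(eta, n)
ct = [(-1) ** k / gamma(eta * (k + 1)) for k in range(2 * n)]
q = []
for k in range(2 * n):
    nk = at[k] if k < n else 0.0
    q.append((nk - sum(bt[j] * q[k - j] for j in range(1, min(k, n) + 1))) / bt[0])
print(max(abs(q[k] - ct[k]) for k in range(2 * n)))
```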

To assess the error of Pade approximation $r\left(s\mathrm{;}n\right)$ given in (45), we use $g\left(s\right)$ computed with the unified numerical procedure (38) as the “exact” solution to compare with. Figure 6 shows $\left|g\left(s\right)-r\left(s\mathrm{;}n\right)\right|$ vs s for parameter value $\eta =0.1494$. Four Pade approximations, respectively with n = 3, 4, 5, and 6, are shown, where n is the highest power used in Pade approximation (45). The behaviors of the Pade approximations for density $g\left(s\right)$ are similar to those for the CDF $G\left(s\right)$. In IEEE double precision, the actually realized numerical accuracy of the Pade approximations with $n=5$ and $n=6$ is significantly better than that of numerical procedure (38). Again, in IEEE double precision, the smaller numerical error of the Pade approximations is mainly attributed to the fact that they contain only a few terms, so their numerical results are much less contaminated with round-off errors.

7. Conclusion

We studied the biovariability of a crowd for hearing loss injury, in the form of heterogeneous injury susceptibility. We constructed a unified numerical procedure for computing the distribution density of injury susceptibility that reproduces the observed logistic dose-response relation in a crowd. The unified procedure combines the advantage of the power series expansion for small values of the argument and the advantage of the asymptotic approximation for large values of the argument. It switches between these two approaches to achieve a numerical accuracy of $10^{-8}$ or better in IEEE double precision, over the full range of the argument. Using this unified procedure, we verified numerically that for all parameter values, the derived distribution density i) is non-negative everywhere and ii) integrates to one. These results establish numerically that the derived distribution is indeed a proper density for all parameter values, and thus is well-posed. Furthermore, we developed efficient and accurate Pade approximations for the distribution density and for the cumulative distribution function. In the computational environment of IEEE double precision, the Pade approximations actually yield a much higher realized numerical accuracy than both the asymptotic approximation for large argument and the power series for small argument. The superior performance of the Pade approximations is attributed to the fact that they attain high theoretical accuracy with only a few terms, which leads to less contamination with round-off errors and better realized numerical accuracy. In conclusion, we verified numerically that the observed logistic dose-response relation can be explained solely by a valid distribution of injury susceptibility. A rigorous proof of the well-posedness of the derived distribution density, however, remains open.

Figure 6. Differences between density $g\left(s\right)$ and its Pade approximations $r\left(s\mathrm{;}n\right)$.

8. Disclaimer

The authors thank the Joint Non-Lethal Weapons Directorate of US Department of Defense for supporting this work. The views expressed in this document are those of the authors and do not reflect the official policy or position of the Department of Defense or the US Government.

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

*Applied Mathematics*, **9**, 672-690. doi: 10.4236/am.2018.96046.

[1] Goldsmith, M. (2015) Sound: A Very Short Introduction. Oxford University Press, Oxford. https://doi.org/10.1093/actrade/9780198708445.001.0001

[2] https://www.nidcd.nih.gov/health/noise-induced-hearing-loss

[3] Murphy, W.J., Khan, A. and Shaw, P.B. (2011) Analysis of Chinchilla Temporary and Permanent Threshold Shifts Following Impulsive Noise Exposure. https://www.cdc.gov/niosh/surveyreports/pdfs/338-05c.pdf

[4] Chan, P., Ho, K. and Ryan, A.F. (2016) Impulse Noise Injury Model. Military Medicine, 181, 59-69. https://doi.org/10.7205/MILMED-D-15-00139

[5] Wang, H., Burgei, W.A. and Zhou, H. (2017) Interpreting Dose-Response Relation for Exposure to Multiple Sound Impulses in the Framework of Immunity. Health, 9, 1817-1842. https://doi.org/10.4236/health.2017.913132

[6] Wang, H., Burgei, W.A. and Zhou, H. (2018) Risk of Hearing Loss Caused by Multiple Acoustic Impulses in the Framework of Biovariability. Health, 10, Article ID: 84786. https://doi.org/10.4236/health.2018.105048

[7] Bush, A.W. (1992) Perturbation Methods for Engineers and Scientists. CRC Press, Boca Raton.

[8] Hinch, E.J. (1991) Perturbation Methods. Cambridge University Press, New York. https://doi.org/10.1017/CBO9781139172189

Copyright © 2019 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.