> 0, let us recall that the matrix $\mathcal{B}\left(\Lambda ,\Upsilon \right)$ can be written in two equivalent ways as follows

$\mathcal{B}\left(\Lambda ,\Upsilon \right)=\exp\left[\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)t\right]=\sum _{l=0}^{\infty }\frac{{t}^{l}}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}.$ (36)

Then

$\begin{array}{c}{\mathcal{B}}_{n}-\mathcal{B}\left(\Lambda ,\Upsilon \right)=\sum _{l=0}^{{m}_{n}}\frac{{t}^{l}}{l!}\left[{\left(\mathcal{Q}\left({\Lambda }_{n}\right)+\mathcal{H}\left({\Upsilon }_{n}\right)s\right)}^{l}-{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}\right]\\ +\sum _{l={m}_{n}+1}^{\infty }\frac{{t}^{l}}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}.\end{array}$ (37)

The second term is the tail of a convergent series, so it tends to 0 as $n$ grows.

For the first term we apply the Mean Value Theorem to the function $f:{\mathcal{M}}_{k×k}\to {\mathcal{M}}_{k×k}$ defined by $f\left(M\right)={M}^{l}$, which expresses the increment of the function in terms of the increment of the argument through the differential operator:

$f\left({\mathcal{B}}_{n}\right)-f\left(\mathcal{B}\right)=Df\left({\stackrel{˜}{\mathcal{B}}}_{n}\right)\cdot \left({\mathcal{B}}_{n}-\mathcal{B}\right),$ (38)

where ${\stackrel{˜}{\mathcal{B}}}_{n}$ lies between ${\mathcal{B}}_{n}$ and $\mathcal{B}$ ; equivalently, the right-hand side of (38) can be written as $Df\left({\stackrel{˜}{\mathcal{B}}}_{n}\right)\cdot \left(\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)+\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s\right).$

Applying the definition of the differential operator to $f$ , Equation (38) becomes

$f\left({\mathcal{B}}_{n}\right)-f\left(\mathcal{B}\right)=\sum _{i=0}^{l-1}\text{ }\text{ }{\stackrel{˜}{\mathcal{B}}}_{n}^{i}\left(\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)+\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s\right){\stackrel{˜}{\mathcal{B}}}_{n}^{l-i-1},$ (39)

so we have

$‖\sum _{l=0}^{{m}_{n}}\frac{{t}^{l}}{l!}\left[{\left(\mathcal{Q}\left({\Lambda }_{n}\right)+\mathcal{H}\left({\Upsilon }_{n}\right)s\right)}^{l}-{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}\right]‖$ (40)

$=‖\sum _{l=0}^{{m}_{n}}\frac{{t}^{l}}{l!}\sum _{i=0}^{l-1}\text{ }\text{ }{\stackrel{˜}{\mathcal{B}}}_{n}^{i}\left(\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)+\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s\right){\stackrel{˜}{\mathcal{B}}}_{n}^{l-i-1}‖$

$\le \sum _{l=0}^{{m}_{n}}\frac{{t}^{l}}{l!}\sum _{i=0}^{l-1}{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}\left(‖\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)‖+‖\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s‖\right)$ (41)

$\le \sum _{l=0}^{{m}_{n}}\frac{{t}^{l}}{l!}l{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}\left(‖\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)‖+‖\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s‖\right)$ (42)

$=\left(‖\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)‖+‖\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s‖\right)t\sum _{l=1}^{{m}_{n}}\frac{{t}^{l-1}}{\left(l-1\right)!}{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1},$ (43)

where the sum starts at $l=1$ because the $l=0$ term of (40) vanishes.

As ${\sum }_{l=1}^{{m}_{n}}\frac{{t}^{l-1}}{\left(l-1\right)!}{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}$ is bounded, being the partial sum of a convergent series, and both ${\Lambda }_{n}$ and ${\Upsilon }_{n}$ are consistent estimators of $\Lambda$ and $\Upsilon$ respectively, each term of the first factor tends to 0; therefore ${\mathcal{B}}_{n}$ is a consistent estimator of $\mathcal{B}$ .
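The truncation argument above can be checked numerically. The following sketch uses a hypothetical 3-state generator $Q$ and rate matrix $H$ (all values invented for illustration, not taken from the paper) and compares partial sums of the exponential series in (36):

```python
import numpy as np

# Hypothetical 3-state illustration: Q is an infinitesimal generator,
# H holds per-state mean rates on its diagonal, as in (36).
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 1.0, -2.0]])
H = np.diag([0.0, 1.0, 2.0])
s, t = 0.5, 1.0
M = (Q + H * s) * t

def truncated_exp(M, m):
    """Partial sum sum_{l=0}^{m} M^l / l! of the series in (36)."""
    term = np.eye(M.shape[0])
    total = term.copy()
    for l in range(1, m + 1):
        term = term @ M / l
        total += term
    return total

# The neglected tail shrinks rapidly as the truncation level grows,
# which is the argument applied to the second term of (37).
err = np.linalg.norm(truncated_exp(M, 10) - truncated_exp(M, 60))
```

For a matrix of this size, the partial sum with $m=60$ already agrees with the matrix exponential to machine precision, while the error of a low truncation level is dominated by the first neglected term.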

4) This point follows directly from points 1 and 3.

4.4. Confidence interval for $\alpha \left(s,t\right)$

The following theorem and corollary show how to perform numerical computation using the main result.

Theorem 5. Let ${q}_{ij}^{\left(n\right)}$ be the maximum likelihood estimators of the entries of Q, let ${S}_{n},{p}_{n},d{p}_{n}$ and ${\mathcal{B}}_{n}$ be the estimators presented in the proposition above, and let ${m}_{n}$ be a sequence of positive real numbers such that ${m}_{n}\underset{n\to \infty }{\to }\infty$ . Then

$\begin{array}{c}{\sigma }_{n}^{2}=\frac{1}{{S}_{n}^{2}}\left[\sum _{i=1}^{k}\sum _{j=1}^{k-1}\frac{{q}_{ij}^{\left(n\right)}}{{p}_{n}\left(i\right)}{\left(d{p}_{n}^{ij}{\mathcal{B}}_{n}1+{p}_{n}^{ij}\sum _{l=0}^{{m}_{n}}\sum _{r=0}^{l-1}A\left({\Lambda }_{n},{\Upsilon }_{n}\right)1\right)}^{2}\\ +\sum _{i=1}^{k}\frac{{\stackrel{^}{{\sigma }_{i}}}^{2}}{\stackrel{^}{{\lambda }_{i}}}{\left({p}_{n}^{ij}\sum _{l=0}^{{m}_{n}}\sum _{r=0}^{l-1}{A}^{\ast }\left({\Lambda }_{n},{\Upsilon }_{n}\right)1\right)}^{2}\right],\end{array}$ (44)

is a consistent estimator of ${\sigma }^{2}$ , with $A\left(\cdot ,\cdot \right)$ and ${A}^{\ast }\left(\cdot ,\cdot \right)$ defined as in (33) and (34) respectively.

The main argument in the proof is differentiability, and a straightforward consequence of the theorem is the following result.

Corollary 1. Taking $\alpha \left(s,t\right)$ , ${\alpha }^{\left(n\right)}\left(s,t\right)$ and ${\sigma }_{n}^{2}$ defined in (3), (31) and (44) respectively, we have

1) $\frac{\sqrt{n}\left({\alpha }^{\left(n\right)}\left(s,t\right)-\alpha \left(s,t\right)\right)}{{\sigma }_{n}}\underset{n\to \infty }{\overset{w}{\to }}N\left(0,1\right).$

2) If ${I}_{\alpha }\left(n\right)=\left[{\alpha }^{\left(n\right)}\left(s,t\right)-\frac{{z}_{ϵ}{\sigma }_{n}}{\sqrt{n}},{\alpha }^{\left(n\right)}\left(s,t\right)+\frac{{z}_{ϵ}{\sigma }_{n}}{\sqrt{n}}\right],$ where ${z}_{ϵ}$ is such that $P\left(Z>{z}_{ϵ}\right)=\frac{ϵ}{2}$ for $Z\sim N\left(0,1\right)$ , then

$\underset{n\to \infty }{lim}P\left(\alpha \left(s,t\right)\in {I}_{\alpha }\left(n\right)\right)=1-ϵ.$ (45)
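The interval ${I}_{\alpha }\left(n\right)$ of Corollary 1 is straightforward to compute once a point estimate and a consistent deviation estimate are available. The sketch below uses the standard library only; the numerical values are purely illustrative, not estimates from the paper's traces:

```python
import math
import statistics

def eb_confidence_interval(alpha_hat, sigma_hat, n, eps=0.05):
    """Interval I_alpha(n) of Corollary 1(2): the point estimate
    plus/minus z_eps * sigma_n / sqrt(n)."""
    # z_eps satisfies P(Z > z_eps) = eps/2 for Z ~ N(0,1);
    # for eps = 0.05 this is approximately 1.96.
    z = statistics.NormalDist().inv_cdf(1 - eps / 2)
    half = z * sigma_hat / math.sqrt(n)
    return alpha_hat - half, alpha_hat + half

# Illustrative numbers: estimated EB 2.4, estimated deviation 0.8,
# sample size 10,000 observation windows.
lo, hi = eb_confidence_interval(alpha_hat=2.4, sigma_hat=0.8, n=10_000)
```

By (45), an interval built this way has asymptotic coverage $1-ϵ$.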

5. Simulation and Numerical Results

In this section we carry out the estimation analysis on traffic traces generated by simulation from the model introduced in Section 2.

5.1. Parameters for the Simulation

To validate the results obtained, we performed several traffic simulations according to the GMFM model presented.

In the model chosen, the modulating Markov chain has $k=13$ states and each state is associated with a data transfer rate interval as shown in Table 1.

The usual state is expected to be the one with the highest transfer rate available on the transmission channel, so the most probable state is 13. It is also more common to jump to an adjacent state, to the state of maximum transfer rate, to that of minimum rate, or to that of no transfer. With these considerations, the infinitesimal generator of the modulating chain is designed; it is given by the matrix $ℚ$

$ℚ=\left(\begin{array}{ccccccccccccc}-7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 3.75\\ 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1.88\\ 1& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 1\\ 2& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& -8& 4\\ 2& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 5.00& -10\end{array}\right).$ (46)

Within each of these intervals, the amount actually dispatched is drawn from a normal distribution truncated to the interval, with mean equal to the midpoint of the interval and standard deviation equal to one sixth of its length. The diagonal matrix $ℍ$ contains the mean values of these distributions.
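The generation scheme just described can be sketched as follows. This is a minimal illustration with a hypothetical 3-state chain and invented rate intervals, not the 13-state model of Table 1; the truncated normal is sampled by simple rejection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state generator and rate intervals (illustrative only).
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 1.0, -2.0]])
intervals = [(0.0, 1.0), (1.0, 4.0), (4.0, 10.0)]

def truncated_normal(lo, hi, rng):
    """Normal with mean the interval midpoint and standard deviation one
    sixth of its length, truncated to [lo, hi], via rejection sampling."""
    mu, sd = (lo + hi) / 2, (hi - lo) / 6
    while True:
        x = rng.normal(mu, sd)
        if lo <= x <= hi:
            return x

def simulate_trace(Q, intervals, t_max, rng, state=0):
    """Jump-chain simulation of the modulating CTMC; in each visited
    state the dispatched rate is drawn from the truncated normal.
    Returns a list of (start_time, duration, rate) segments."""
    t, trace = 0.0, []
    while t < t_max:
        hold = rng.exponential(1.0 / -Q[state, state])
        rate = truncated_normal(*intervals[state], rng)
        trace.append((t, min(hold, t_max - t), rate))
        jump = np.maximum(Q[state], 0.0)          # off-diagonal rates
        state = rng.choice(len(Q), p=jump / jump.sum())
        t += hold
    return trace

trace = simulate_trace(Q, intervals, t_max=100.0, rng=rng)
```

A trace of this kind, generated from the 13-state generator (46), is what Figure 1 displays.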

Table 1. Transfer speed.

An example of the traces generated for the simulations can be seen in Figure 1.

5.2. Estimation of the Effective bandwidth from traces

The first objective is to calculate the EB of the presented model; using the result shown in Theorem 1, the EB is computed according to formula (3). Figure 2 shows the EB calculated for the GMFM.

For each simulated trace we estimate the EB using the estimator presented in Theorem 2, according to Equation (31); Figure 3 compares the effective bandwidth estimated from one trace with the theoretical value.
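As a point of reference, a generic block estimator of the effective bandwidth, a simpler empirical alternative to the plug-in estimator of Equation (31), can be sketched as follows. Here each workload $W_i$ is the amount of work arriving in a window of length $t$; the input values are invented for illustration:

```python
import math

def empirical_eb(workloads, s, t):
    """Generic empirical effective-bandwidth estimate
    (1/(s*t)) * log(mean of exp(s * W_i)), where W_i is the work
    arriving in the i-th window of length t.  This is a plain block
    estimator, not the model-based expression (31)."""
    n = len(workloads)
    m = max(workloads)  # subtract the max inside exp for stability
    log_mean = s * m + math.log(
        sum(math.exp(s * (w - m)) for w in workloads) / n)
    return log_mean / (s * t)

# Hypothetical per-window workloads for windows of length t = 1.
eb = empirical_eb([3.1, 2.8, 3.5, 4.0, 2.9], s=0.5, t=1.0)
```

By Jensen's inequality the estimate lies between the mean and the maximum empirical rate, which gives a quick sanity check on any implementation.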

Figure 1. Trace generated with the GMFM.

Figure 2. Effective bandwidth for the GMFM model.

Figure 3. Effective bandwidth vs. Estimated effective bandwidth.

6. Conclusions

In this work, we have presented contributions in two areas related to data networks. Regarding modeling, we have proposed the GMFM, which has the advantage of being very realistic for the current requirements of telecommunication networks while still allowing the application of refined mathematical statistics tools. We have also found a formula for the effective bandwidth in which the role played by each parameter of the model can be visualized.

Regarding the estimation of parameters, we have proposed a methodology to estimate the effective bandwidth from traffic traces of a GMFM source, which has the properties needed to ensure that it satisfies a Central Limit Theorem and thus allows a confidence interval to be built. These results enable the calculation of the effective bandwidth from simulated traffic traces. A numerical example has been presented, in which the results were applied to traffic traces and the expected behavior was obtained.

We expect to extend the statistical effective bandwidth calculation to other stochastic phenomena, in which the supports of the probability laws are not disjoint or which need not be Markovian, and to apply these techniques to real-data scenarios.

Acknowledgements

The authors express their gratitude to Dr. Gonzalo Perera for introducing them to the topic and for his valuable suggestions.


Bavio, J. and Marrón, B. (2018) Properties of the Estimators for the Effective Bandwidth in a Generalized Markov Fluid Model. Open Journal of Statistics, 8, 69-84. https://doi.org/10.4236/ojs.2018.81006


Appendix

Lemma 6. Let ${\left({Z}_{n}\right)}_{n\in ℕ}$ be a sequence of random variables in ${ℝ}^{d}$ , $d\ge 1$ , let ${\left({a}_{n}\right)}_{n\in ℕ}$ be a sequence of positive numbers satisfying ${a}_{n}\to \infty$ , and let $Z\in {ℝ}^{d}$ be such that

${a}_{n}\left({Z}_{n}-Z\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,\Sigma \right).$ (a1)

Let us consider $G:{ℝ}^{d}\to ℝ$ differentiable in a neighborhood of Z, then

${a}_{n}\left(G\left({Z}_{n}\right)-G\left(Z\right)\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,\nabla G\left(Z\right)\Sigma \nabla G{\left(Z\right)}^{t}\right).$ (a2)
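Lemma 6 is the classical delta method; it can be illustrated by a quick Monte Carlo check in dimension one, with $G\left(x\right)={x}^{2}$ , ${Z}_{n}$ a sample mean, ${a}_{n}=\sqrt{n}$ , and hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 1000, 4000
Z, sig = 1.0, 0.5

# Sample means Z_n of n i.i.d. N(Z, sig^2) variables, one per replication,
# so that sqrt(n)(Z_n - Z) -> N(0, sig^2).
zn = rng.normal(Z, sig, size=(reps, n)).mean(axis=1)

# sqrt(n)(G(Z_n) - G(Z)) with G(x) = x^2; the lemma predicts a limiting
# N(0, G'(Z)^2 sig^2) = N(0, (2*1)^2 * 0.25) = N(0, 1) distribution.
vals = np.sqrt(n) * (zn**2 - Z**2)
emp_var = float(np.mean(vals**2))
```

The empirical variance of the transformed statistic concentrates around the predicted value $\nabla G\left(Z\right)\Sigma \nabla G{\left(Z\right)}^{t}=1$.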

Lemma 7. Let us consider $\Psi$ as in (30), $g$ as in (29) and $\mathcal{B}$ as in (28), and $\mathcal{Q}:{ℝ}^{k\left(k-1\right)}\to {\mathcal{M}}_{k×k}$ and $\mathcal{H}:{ℝ}^{k}\to {\mathcal{M}}_{k×k}$ defined in Section 4.2, then

1) $\frac{\partial \Psi \left(\Lambda ,\Upsilon \right)}{\partial {q}_{ij}}=\frac{1}{st\text{ }g\left(\Lambda ,\Upsilon \right)}\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {q}_{ij}}$ .

2) $\frac{\partial \Psi \left(\Lambda ,\Upsilon \right)}{\partial {\mu }_{i}}=\frac{1}{st\text{ }g\left(\Lambda ,\Upsilon \right)}\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {\mu }_{i}}$ .

3) $\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {q}_{ij}}=\frac{\partial \pi \left(\Lambda \right)}{\partial {q}_{ij}}\mathcal{B}\left(\Lambda ,\Upsilon \right)1+\pi \left(\Lambda \right)\left({\sum }_{l=0}^{\infty }{\sum }_{r=0}^{l-1}\frac{{t}^{l}}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{r}{V}^{ij}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l-r-1}\right)1,$

with ${\left({V}^{ij}\right)}_{lm}=\left\{\begin{array}{ll}1& \text{if }l=i\text{ and }m=j\ne i\\ -1& \text{if }l=m=i\\ 0& \text{otherwise}\end{array}\right.$ .

4)

$\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {\mu }_{i}}=\pi \left(\Lambda \right)\left({\sum }_{l=0}^{\infty }{\sum }_{r=0}^{l-1}\frac{{t}^{l}s}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{r}{U}^{i}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l-r-1}\right)1$ ,

with ${\left({U}^{i}\right)}_{lm}=\left\{\begin{array}{ll}1& \text{if }l=m=i\\ 0& \text{otherwise}\end{array}\right.$ .
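The identity behind (39) and the sums in Lemma 7, namely that the directional derivative of $M\mapsto {M}^{l}$ in direction $V$ is ${\sum }_{r=0}^{l-1}{M}^{r}V{M}^{l-r-1}$ , can be checked against a finite difference; the matrices below are random and purely illustrative:

```python
import numpy as np

def dpow(M, V, l):
    """Directional derivative of M -> M^l in direction V:
    sum_{r=0}^{l-1} M^r V M^{l-r-1}, the identity used in (39)."""
    return sum(np.linalg.matrix_power(M, r) @ V
               @ np.linalg.matrix_power(M, l - r - 1)
               for r in range(l))

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
V = rng.standard_normal((4, 4))
l, h = 5, 1e-6

# Finite-difference approximation of the same derivative.
fd = (np.linalg.matrix_power(M + h * V, l)
      - np.linalg.matrix_power(M, l)) / h
err = np.linalg.norm(fd - dpow(M, V, l))
```

For $l=1$ the sum reduces to $V$ itself, which gives a trivial consistency check.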

Lemma 8. The matrix $\stackrel{^}{ℚ}=\stackrel{^}{\mathcal{Q}}\left(\Lambda \right)$ admits an inverse ${\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)$ which is differentiable and satisfies

$D\left({\stackrel{^}{\mathcal{Q}}}^{-1}\right)\left(\Lambda \right)\left(x\right)=-{\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)D\left(\mathcal{Q}\right)\left(\Lambda \right){\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)\left(x\right).$ (a3)
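This formula for the derivative of the inverse can likewise be verified with a finite difference. The sketch below uses a hypothetical well-conditioned matrix in place of $\stackrel{^}{ℚ}$ and a random perturbation direction standing in for $D\left(\mathcal{Q}\right)\left(\Lambda \right)\left(x\right)$ :

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # invertible, well-conditioned
X = rng.standard_normal((4, 4))                    # perturbation direction
h = 1e-7

inv = np.linalg.inv
# The lemma's identity: D(A^{-1})(X) = -A^{-1} X A^{-1}.
analytic = -inv(A) @ X @ inv(A)
# Finite-difference approximation of the same derivative.
fd = (inv(A + h * X) - inv(A)) / h
rel_err = np.linalg.norm(fd - analytic) / np.linalg.norm(analytic)
```

The relative discrepancy is of the order of the step size $h$, as expected for a first-order finite difference.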