Properties of the Estimators for the Effective Bandwidth in a Generalized Markov Fluid Model

The Generalized Markov Fluid Model (GMFM), introduced in , is assumed for modeling sources in the network because it is versatile enough to describe traffic fluctuations. In order to estimate resource allocation, or in other words the channel occupation of each source, the concept of effective bandwidth proposed by Kelly  is used. In this paper we present a formula for calculating the effective bandwidth, developed for the Generalized Markov Fluid Model, which is of particular interest because it expresses this magnitude in terms of the parameters of the model. We present unbiased estimators for these parameters that can be obtained from real data. The convergence and consistency of the estimation are studied, and confidence bands are found. Illustrative calculations and the performance of the proposed estimators were tested with simulated data, and satisfactory results were obtained.



Bavio, J. and Marrón, B. (2018) Properties of the Estimators for the Effective Bandwidth in a Generalized Markov Fluid Model. Open Journal of Statistics, 8, 69-84. doi: 10.4236/ojs.2018.81006.

1. Introduction

A multiplexing system can be thought of as a buffer with capacity B and output velocity C, fed by many different data sources that share the common output port. One of the most interesting questions is how many resources each source will require from the system. This knowledge has applications in call admission and congestion control. The magnitude of the required resources is known as the Effective Bandwidth of the traffic source and is a useful and realistic measure of channel occupancy. The best interpretation and collection of results on Effective Bandwidth are given in Kelly , where the Effective Bandwidth of a process ${X}_{t}$ with stationary increments, representing the total amount of work arriving from a source in the interval $\left[0,t\right]$, is defined as

$\alpha \left(s,t\right)=\frac{1}{st}\mathrm{log}E{\text{e}}^{s{X}_{t}},\text{ }0<s,t<\infty .$ (1)

This definition can be motivated in several ways; perhaps the most important is that the logarithmic moment generating function is naturally associated with the additive property of the sources at a node of the network. The space parameter s, measured in data units, indicates the degree of multiplexing. The time parameter t, measured in time units, is related to the period of greatest buffer occupancy before overflow: slow accumulation of work load in the buffer corresponds to large values of t, while fast accumulation corresponds to small values of t. Both parameters characterize the link operating point, depending on the context of the stream.
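Definition (1) suggests a direct plug-in estimate when independent observations of the workload increment $X_t$ are available. The following sketch replaces the expectation in (1) by a sample mean over observation windows of length t; the function name and data layout are assumptions of this example, not part of the paper.

```python
import numpy as np

def empirical_eb(increments, s, t):
    """Plug-in estimate of the effective bandwidth alpha(s, t) in (1):
    the expectation E e^{s X_t} is replaced by a sample mean over
    independent observations of the work X_t arriving in windows of
    length t (this input layout is an assumption of the sketch)."""
    increments = np.asarray(increments, dtype=float)
    return np.log(np.mean(np.exp(s * increments))) / (s * t)

# Sanity check: a constant-rate source with X_t = c*t has alpha(s, t) = c.
c, s, t = 3.0, 0.5, 2.0
windows = np.full(1000, c * t)
print(empirical_eb(windows, s, t))  # close to 3.0
```

For real traces the exponential moment makes this estimator sensitive to large increments, so long observation records are needed for large s.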

The general problem is that resources are shared by a set of heterogeneous communications, and when a new communication is accepted its workload must be estimated in order to allocate part of the available resources. Therefore, two problems motivate our work: to propose a realistic model for communications traffic and calculate the effective bandwidth for this model for the allocation of resources, and to obtain consistent estimators for its parameters.

The paper is organized as follows. The Generalized Markov Fluid Model is introduced in Section 2. In Section 3 we compute the effective bandwidth for the introduced model, and in Section 4 the proposed estimator is presented together with its properties. Section 5 contains the numerical results of simulations, conclusions are presented in Section 6, and finally some useful lemmas are given in the Appendix.

2. The Model

The need for networks that integrate various telecommunication services led to the emergence of the concept of the integrated services digital network, which involves the use of a single infrastructure for the transport, at high speed, of data, voice and images. An important issue is the selection of the transfer process, which could be defined as the set of multiplexing mechanisms for communication in the network. Through the concept of statistical multiplexing of sources, high efficiency is achieved in the use of network resources. If a source sends data at a variable rate, multiplexing the sources on a link reserves for each one a capacity greater than its average rate but lower than its maximum emission rate. The price to pay is that the probability that many sources simultaneously dispatch at the maximum rate is not zero, in which case overflow would occur, with consequent damage to the Quality of Service (QoS).

To minimize these data loss effects and maintain the quality of service, both for existing sources and for new ones, it is necessary to have a connection admission control (CAC) that decides whether a new connection can be accepted, as well as congestion control mechanisms. For this we need mathematical models that describe the behavior of the sources.

The Generalized Markov Fluid

Markov Fluid models have been applied to model many kinds of digital sources, but there are limitations when the data transfer speed takes too many different values; the Generalized Markov Fluid model, introduced in , can be used to properly describe that kind of traffic in a source.

In the GMFM, a source in a data network assumes the state ${Z}_{t}$ at time t, where Z is a continuous time, homogeneous and irreducible Markov chain, with finite state space $K=\left\{1,\cdots ,k\right\}$, invariant distribution $\pi$ and infinitesimal generator ${ℚ}^{Z}$. That is, at time t the chain Z reaches state i and the data transfer rate of the source is drawn, independently of the chain Z, from the law ${f}_{i}$. So the random variable ${Y}_{t}|{Z}_{t}=i$ is distributed according to the probability law ${f}_{i}$, and the density functions ${f}_{1},{f}_{2},\cdots ,{f}_{k}$ are known and have disjoint supports for $i=1,\cdots ,k$.

The process Y does not change until the chain Z changes its state, and since the supports of the k laws of probability are known and disjoint, observing the process ${Y}_{t}$ allows us to recover the process ${Z}_{t}$.

The work load received from the source that delivers information with speed ${Y}_{t}$ is represented by the Markov flow modulated by the chain ${Z}_{t}$ and can be written as

${X}_{t}={\int }_{0}^{t}\text{ }\text{ }{Y}_{s}\text{ }\text{d}s.$ (2)

The advantage of the GMFM is that it makes manageable those networks where the speed of the source in each state is a random variable. In this model, abrupt changes in the transfer speed indicate a change of state of the chain, but within a state the rate is allowed to assume randomly any value according to some probability distribution. The laws of probability may be discrete or continuous.

To interpret this model we can think of each state of the chain as the activity performed by a user, such as chat or video conferencing; the data transfer speed then assumes values that depend specifically on that activity.
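As a concrete illustration, the sketch below simulates a small GMFM trajectory: a two-state modulating chain with an invented generator, uniform rate laws with disjoint supports as in the model, and the workload $X_t$ of (2) obtained by integrating the rate. All numerical values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state GMFM (all numbers invented): infinitesimal
# generator of the modulating chain Z, and disjoint supports for the
# per-state rate laws f_i (here uniform laws).
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
supports = [(0.0, 1.0), (5.0, 6.0)]

def simulate_gmfm(T, state=0):
    """Simulate (Z_t, Y_t) on [0, T] and return X_T = integral of Y_s ds."""
    t, X = 0.0, 0.0
    while t < T:
        hold = rng.exponential(1.0 / -Q[state, state])  # sojourn time in `state`
        dt = min(hold, T - t)
        rate = rng.uniform(*supports[state])  # Y_t ~ f_state, constant until jump
        X += rate * dt
        t += dt
        if t < T:  # the sojourn ended with a jump, not with the horizon
            p = Q[state].copy()
            p[state] = 0.0
            p /= p.sum()  # next state proportional to off-diagonal rates
            state = rng.choice(len(p), p=p)
    return X

print(simulate_gmfm(10.0))  # total work arriving on [0, 10]
```

Because the supports are disjoint, the state path used here could be read back from the rate trace alone, as noted above.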

3. Effective Bandwidth Estimation

One of the main issues in QoS for admission control is the estimation of the resources needed to guarantee communications. This cannot be the peak rate, because it would be too pessimistic and would lead to resource waste, nor the mean rate of the service, because it would be too optimistic an estimation and would cause frequent losses. Given an expected QoS, interpreted as the probability of buffer overflow, the Effective Bandwidth (EB) of the traffic sources defined in (1) was proposed by Kelly in  and is a realistic measure of channel occupancy. The space and time parameters, s and t respectively, depend not only on the source itself but also on the context in which the source is acting: the capacity, the buffer size, the scheduling policy of the multiplexer, the QoS parameter to be achieved and the number of other sources in the channel, that is, the actual traffic mix. The EB concept can be applied to single sources or to aggregated traffic, such as a network core link, and it can also be used for any shared resource model.

In order to estimate the EB for a given GMFM, our first goal is to find a formula of the type obtained by Kesidis, Walrand and Chang  to calculate the EB, one that depends on quantities that can be estimated from traffic traces.

Computation of the EB for the GMFM

Let ${\left\{{X}_{t}\right\}}_{t\ge 0}$ be a GMFM modulated by a continuous time, homogeneous and irreducible Markov chain Z with finite state space $K=\left\{1,\cdots ,k\right\}$, invariant distribution $\pi$ and infinitesimal generator ${ℚ}^{Z}$. Let ${Y}_{i}$ be the random variables with density function ${f}_{i}$, mean ${\mu }_{i}$, variance ${\sigma }_{i}^{2}$ and Laplace transform ${\varphi }_{i}\left(t\right)$, for $i=1,\cdots ,k$. Let us also assume that each ${f}_{i}$ has known and disjoint support $\left[{c}_{i},{c}_{i+1}\right)\subset {ℝ}^{+}$ with ${c}_{i}<{c}_{i+1}$. Let us denote by $ℍ$ the diagonal matrix of dimension k whose nonzero elements are the first moments ${\mu }_{i}$ of each distribution.

Theorem 1. Let ${\left\{{X}_{t}\right\}}_{t\ge 0}$ be a GMFM, then the effective bandwidth has the following expression

$\alpha \left(s,t\right)=\frac{1}{st}\mathrm{log}\left\{\pi \mathrm{exp}\left[\left({ℚ}^{Z}+sℍ\right)t\right]1\right\},$ (3)

where $1$ is a column vector with all entries equal to 1.
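Formula (3) can be evaluated numerically once ${ℚ}^{Z}$, the mean rates ${\mu }_{i}$ and $\pi$ are available. The sketch below computes the matrix exponential by a truncated Taylor series (the same device used for the estimator ${\mathcal{B}}_{n}$ of Section 4.3) and obtains $\pi$ by solving $\pi ℚ=0$, $\pi 1=1$ in the least-squares sense; the function and the chosen numbers are illustrative, not from the paper.

```python
import numpy as np

def effective_bandwidth(Q, mu, s, t, terms=60):
    """Evaluate (3): alpha(s,t) = (1/(s t)) log( pi exp[(Q + s H) t] 1 ).

    The matrix exponential is computed by a truncated Taylor series
    (`terms` controls the truncation) and pi by least squares from
    pi Q = 0, pi 1 = 1 -- a sketch, not a numerically robust routine."""
    k = Q.shape[0]
    M = (Q + s * np.diag(mu)) * t
    expM = np.eye(k)
    term = np.eye(k)
    for l in range(1, terms + 1):       # sum_{l=0}^{terms} M^l / l!
        term = term @ M / l
        expM = expM + term
    A = np.vstack([Q.T, np.ones(k)])    # stationarity plus normalization
    b = np.zeros(k + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.log(pi @ expM @ np.ones(k)) / (s * t)

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])   # invented generator
mu = np.array([0.5, 5.5])                  # invented mean rates per state
print(effective_bandwidth(Q, mu, s=0.5, t=2.0))
```

As a sanity check, when all mean rates equal a constant c the formula returns c for every s and t, and in general the value lies between the stationary mean rate $\pi \cdot \mu$ and the largest ${\mu }_{i}$.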

Proof. By definition (1), it is enough to prove that $E\left({\text{e}}^{s{X}_{t}}\right)=\pi exp\left[\left({ℚ}^{Z}+sℍ\right)t\right]1.$

The process ${X}_{t}$ can be represented as in (2) and ${Y}_{t}$ is uniformly bounded; hence, applying Lebesgue’s dominated convergence theorem,

$E\left({\text{e}}^{s{X}_{t}}\right)=\underset{n\to \infty }{lim}E\left({\text{e}}^{s\frac{t}{n}\underset{r=1}{\overset{n}{\sum }}{Y}_{\frac{rt}{n}}}\right).$ (4)

The Markov chain ${Z}_{s}$ is homogeneous and $\pi$ is the invariant distribution, so the argument of the limit in (4) can be written as

$\underset{\left({i}_{0},\cdots ,{i}_{n}\right)\in {\mathcal{K}}^{n+1}}{\sum }\pi \left({i}_{0}\right)\left[\underset{j=0}{\overset{n-1}{\prod }}\left({P}_{\frac{t}{n}}^{Z}\left({i}_{j},{i}_{j+1}\right){\int }_{\left({ℝ}^{+}\right)}\text{ }\text{ }{\text{e}}^{\frac{st}{n}{u}_{j+1}}{f}_{{i}_{j+1}}\left({u}_{j+1}\right)\text{d}{u}_{j+1}\right)\right].$ (5)

Each integral in (5) is the Laplace transform of the corresponding density function, so we can express the right side as the product of the transition matrix and a diagonal matrix $\left[{C}_{\frac{t}{n}}\right]$, whose nonzero elements are ${\left({C}_{\frac{t}{n}}\right)}_{i,i}={\varphi }_{i}\left(\frac{st}{n}\right)$, as follows

$\pi {\left[\left[{P}_{\frac{t}{n}}^{Z}\right]\left[{C}_{\frac{t}{n}}\right]\right]}^{n}1.$

Applying Taylor’s formula to each matrix we obtain

${P}_{\frac{t}{n}}^{Z}={P}_{0}^{Z}+{\left({P}_{t}^{Z}\right)}^{\prime }{}_{t=0}\cdot \frac{t}{n}+o\left(\frac{t}{n}\right)=I+{ℚ}^{Z}\cdot \frac{t}{n}+o\left(\frac{t}{n}\right),$ (6)

${C}_{\frac{t}{n}}={C}_{0}+{\left({C}_{t}\right)}^{\prime }{}_{t=0}\cdot \frac{t}{n}+o\left(\frac{t}{n}\right)=I+sℍ\cdot \frac{t}{n}+o\left(\frac{t}{n}\right),$ (7)

with I the identity matrix and $ℍ$ a diagonal matrix whose nonzero elements are ${{\varphi }^{\prime }}_{i}\left(0\right)={\mu }_{i}$.

Then,

${\left[\left[{P}_{\frac{t}{n}}^{Z}\right]\left[{C}_{\frac{t}{n}}\right]\right]}^{n}={\left[I+\left({ℚ}^{Z}+sℍ\right)\cdot \frac{t}{n}+o\left(\frac{t}{n}\right)\right]}^{n},$ (8)

and the right side of (8) tends to $exp\left[\left({ℚ}^{Z}+sℍ\right)t\right]$ as $n\to \infty$. ☐

The importance of this result is that it provides an expression for the EB that depends on the infinitesimal generator of the modulating chain, its invariant distribution and a matrix containing information about the transfer rate, and all of these elements can be estimated from traffic traces.

4. The estimator and its Properties

In order to introduce an estimator of the EB for this traffic model, the elements of the matrices ${ℚ}^{Z}$ and $ℍ$ are the parameters that must be estimated, according to Equation (3).

For the former, a result presented by Lebedev and Lukashuk  plays a key role in the construction of our estimator, providing an asymptotically Gaussian estimator, based on traffic traces, of the infinitesimal generator matrix. The maximum likelihood estimator ${q}_{ij}^{\left(n\right)}$ of each element ${q}_{ij}$ not belonging to the diagonal of the infinitesimal generator matrix is the ratio between the number of transitions of the chain Z from state i to state j and the time spent by the chain Z in state i, both during the same unitary time interval. So defined, ${q}_{ij}^{\left(n\right)}$ is unbiased with variance $\frac{{q}_{ij}}{\pi \left(i\right)}$, where $\pi \left(i\right)$ is the i-th element of the invariant distribution $\pi$.
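In practice, the estimator ${q}_{ij}^{\left(n\right)}$ can be computed from a recorded path of the chain (which, as noted in Section 2, is recoverable from the rate trace). The sketch below assumes the path is given as a list of visited states and the times at which each state was entered; this input layout is an assumption of the example, not of the paper.

```python
import numpy as np

def generator_mle(states, times, horizon):
    """ML estimates q_ij^(n) from an observed path of the chain Z.

    `states[r]` is the state entered at jump time `times[r]`; the estimate
    of q_ij is (number of i -> j transitions) / (total time spent in i),
    as described above.  The input layout is an assumption of this sketch."""
    k = int(max(states)) + 1
    jumps = np.zeros((k, k))
    sojourn = np.zeros(k)
    for r in range(len(states) - 1):
        jumps[states[r], states[r + 1]] += 1.0
        sojourn[states[r]] += times[r + 1] - times[r]
    sojourn[states[-1]] += horizon - times[-1]  # last, censored sojourn
    with np.errstate(divide="ignore", invalid="ignore"):
        qhat = np.where(sojourn[:, None] > 0, jumps / sojourn[:, None], 0.0)
    np.fill_diagonal(qhat, 0.0)
    np.fill_diagonal(qhat, -qhat.sum(axis=1))  # rows of a generator sum to 0
    return qhat

# Path: state 0 on [0,1), 1 on [1,1.5), 0 on [1.5,3): one 0->1 jump over
# 2.5 time units in state 0, one 1->0 jump over 0.5 units in state 1.
qhat = generator_mle([0, 1, 0], [0.0, 1.0, 1.5], horizon=3.0)
print(qhat)  # q01 = 1/2.5 = 0.4, q10 = 1/0.5 = 2.0
```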

4.1. The estimator of the elements of $ℍ$

The elements of $ℍ$ are the mean data transfer rate ${\mu }_{i}$ at each state i of the chain Z. The proposed estimator is

${\mu }_{i}^{\left(n\right)}=\frac{{\sum }_{r=1}^{{N}_{i}\left(n\right)}{Y}_{i}^{\left(r\right)}}{{N}_{i}\left(n\right)},$ (9)

where ${Y}_{i}^{\left(r\right)}$ denotes the r-th observed rate corresponding to the range of Y when the modulating chain is in state i, and ${N}_{i}\left(n\right)$ is the number of times that the modulating chain Z reaches state i in the interval $\left[0,n\right]$.

Before proving the following results, let us remark that the random variable ${N}_{i}\left(n\right)$ grows as the observation window n increases and, due to the assumptions on the chain Z, each state i is positive recurrent with mean return time $1/{\lambda }_{i}$, satisfying the relationship

$\frac{{N}_{i}\left(n\right)}{n}\stackrel{\text{a}\text{.s}\text{.}}{\to }{\lambda }_{i}.$ (10)
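Estimator (9) is simply the per-state sample mean of the observed rates. A minimal sketch, assuming the trace has already been segmented into (state, rate) pairs, one per sojourn (this pairing is an assumption of the example):

```python
import numpy as np

def state_mean_rates(states, rates, k):
    """Estimator (9): mu_i^(n) is the sample mean of the rates Y_i^(r)
    observed while the modulating chain is in state i.

    `states[r]` is the state during the r-th sojourn and `rates[r]` the
    rate drawn for it; states never visited yield NaN."""
    states = np.asarray(states)
    rates = np.asarray(rates, dtype=float)
    return np.array([rates[states == i].mean() if np.any(states == i)
                     else np.nan for i in range(k)])

# Invented trace: three sojourns in state 0, two in state 1.
mu_hat = state_mean_rates([0, 1, 0, 1, 0], [0.4, 5.2, 0.6, 5.8, 0.5], k=2)
print(mu_hat)  # [0.5, 5.5]
```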

Proposition 2. Let ${\left\{{X}_{t}\right\}}_{t\ge 0}$ be a GMFM and ${\mu }_{i}^{\left(n\right)}$ defined in (9), then

1) ${\mu }_{i}^{\left(n\right)}$ is an unbiased and consistent estimator of ${\mu }_{i}$ .

2) $\sqrt{n}\left({\mu }_{i}^{\left(n\right)}-{\mu }_{i}\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,\frac{{\sigma }_{i}^{2}}{{\lambda }_{i}}\right)$ .

Proof. 1) Let us compute the expected value of (9) using conditional expectation:

$E\left({\mu }_{i}^{\left(n\right)}\right)=E\left(E\left({\mu }_{i}^{\left(n\right)}|{N}_{i}\left(n\right)\right)\right)$ (11)

$=E\left(\frac{{\sum }_{j=1}^{{N}_{i}\left(n\right)}E\left({Y}_{i}^{\left(j\right)}|{N}_{i}\left(n\right)\right)}{{N}_{i}\left(n\right)}\right)$ (12)

$={\mu }_{i}$ (13)

so ${\mu }_{i}^{\left(n\right)}$ is unbiased.

To prove consistency it is enough to show that the variance of (9) tends to 0 as n grows. The second moment can be computed similarly:

$E\left({\left({\mu }_{i}^{\left(n\right)}\right)}^{2}\right)=E\left(E\left({\left({\mu }_{i}^{\left(n\right)}\right)}^{2}|{N}_{i}\left(n\right)\right)\right)$ (14)

$=E\left(\frac{{\sum }_{j=1}^{{N}_{i}\left(n\right)}{\sum }_{k=1}^{{N}_{i}\left(n\right)}E\left({Y}_{i}^{\left(j\right)}{Y}_{i}^{\left(k\right)}|{N}_{i}\left(n\right)\right)}{{N}_{i}^{2}\left(n\right)}\right),$ (15)

Replacing $E\left({Y}_{i}^{\left(k\right)}{Y}_{i}^{\left(j\right)}|{N}_{i}\left(n\right)\right)=\left({\sigma }_{i}^{2}+{\mu }_{i}^{2}\right){\delta }_{kj}+{\mu }_{i}^{2}\left(1-{\delta }_{kj}\right)$, where ${\delta }_{kj}$ is the Kronecker delta, that is, ${\delta }_{kj}=1$ if $k=j$ and 0 otherwise, we obtain

$E\left({\left({\mu }_{i}^{\left(n\right)}\right)}^{2}\right)=E\left(\frac{1}{{N}_{i}^{2}\left(n\right)}\left({\mu }_{i}^{2}{N}_{i}^{2}\left(n\right)+{\sigma }_{i}^{2}{N}_{i}\left(n\right)\right)\right)$ (16)

$={\mu }_{i}^{2}+{\sigma }_{i}^{2}E\left(\frac{1}{{N}_{i}\left(n\right)}\right),$ (17)

and then

$V\left({\mu }_{i}^{\left(n\right)}\right)={\sigma }_{i}^{2}E\left(\frac{1}{{N}_{i}\left(n\right)}\right)\approx \frac{{\sigma }_{i}^{2}}{{\lambda }_{i}n},$ (18)

which tends to 0 as n grows; hence consistency is proved.

2) Applying a classical Central Limit Theorem  to the variables ${Y}_{i}^{\left(r\right)}$, it holds that

$\sqrt{n}\left(\frac{{\sum }_{r=1}^{\left[{\lambda }_{i}n\right]}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n}-{\mu }_{i}\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,\frac{{\sigma }_{i}^{2}}{{\lambda }_{i}}\right).$ (19)

A classic result of stochastic process theory, see , allows us to achieve the result by just proving that $E{\left(\frac{{\sum }_{r=1}^{{N}_{i}\left(n\right)}{Y}_{i}^{\left(r\right)}}{{N}_{i}\left(n\right)}-\frac{{\sum }_{r=1}^{\left[{\lambda }_{i}n\right]}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n}\right)}^{2}\underset{n\to \infty }{\to }0$. Indeed,

$E{\left(\frac{{\sum }_{r=1}^{{N}_{i}\left(n\right)}{Y}_{i}^{\left(r\right)}}{{N}_{i}\left(n\right)}-\frac{{\sum }_{r=1}^{\left[{\lambda }_{i}n\right]}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n}\right)}^{2}\le 2\left(E{\left(\underset{r=1}{\overset{{N}_{i}\left(n\right)}{\sum }}{Y}_{i}^{\left(r\right)}\left(\frac{1}{{N}_{i}\left(n\right)}-\frac{1}{{\lambda }_{i}n}\right)\right)}^{2}$ (20)

$+E{\left(\frac{{\sum }_{r=1}^{{N}_{i}\left(n\right)}{Y}_{i}^{\left(r\right)}-{\sum }_{r=1}^{\left[{\lambda }_{i}n\right]}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n}\right)}^{2}\right).$ (21)

Computing the first term by conditional expectation we obtain

$E{\left(\underset{r=1}{\overset{{N}_{i}\left(n\right)}{\sum }}{Y}_{i}^{\left(r\right)}\left(\frac{1}{{N}_{i}\left(n\right)}-\frac{1}{{\lambda }_{i}n}\right)\right)}^{2}=\frac{{\left({\lambda }_{i}n-{N}_{i}\left(n\right)\right)}^{2}}{{\left({\lambda }_{i}n{N}_{i}\left(n\right)\right)}^{2}}\left({\sigma }_{i}^{2}{N}_{i}\left(n\right)+{\mu }_{i}^{2}{N}_{i}^{2}\left(n\right)\right)$ (22)

$={\left(1-\frac{{N}_{i}\left(n\right)}{{\lambda }_{i}n}\right)}^{2}\left(\frac{1}{{N}_{i}\left(n\right)}{\sigma }_{i}^{2}+{\mu }_{i}^{2}\right),$ (23)

that tends to 0 as n grows.

The argument of the expectation in the second term can be written as

$\frac{{\sum }_{r=1}^{{N}_{i}\left(n\right)}{Y}_{i}^{\left(r\right)}-{\sum }_{r=1}^{\left[{\lambda }_{i}n\right]}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n}=\frac{{\sum }_{r=m+1}^{M}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n},$ (24)

where $m=\mathrm{min}\left({N}_{i}\left(n\right),\left[{\lambda }_{i}n\right]\right)$ and $M=\mathrm{max}\left({N}_{i}\left(n\right),\left[{\lambda }_{i}n\right]\right)$ , so similarly we compute

$E{\left(\frac{{\sum }_{r=m+1}^{M}{Y}_{i}^{\left(r\right)}}{{\lambda }_{i}n}\right)}^{2}=E\left(\frac{{\sum }_{k=m+1}^{M}{\sum }_{j=m+1}^{M}E\left({Y}_{i}^{\left(k\right)}{Y}_{i}^{\left(j\right)}|{N}_{i}\left(n\right)\right)}{{\lambda }_{i}^{2}{n}^{2}}\right)$ (25)

$=\frac{{\sigma }_{i}^{2}\left(M-m\right)+{\mu }_{i}^{2}{\left(M-m\right)}^{2}}{{\lambda }_{i}^{2}{n}^{2}}.$ (26)

But $M-m=|{N}_{i}\left(n\right)-\left[{\lambda }_{i}n\right]|$, so (26) becomes

$\frac{|{N}_{i}\left(n\right)-\left[{\lambda }_{i}n\right]|}{{\lambda }_{i}^{2}{n}^{2}}{\sigma }_{i}^{2}+{\left(1-\frac{{N}_{i}\left(n\right)}{{\lambda }_{i}n}\right)}^{2}{\mu }_{i}^{2},$ (27)

which also tends to 0 as n grows, and the proposition is proved. ☐

4.2. The estimator of $\alpha \left(s,t\right)$

From the maximum likelihood estimators of ${\lambda }_{ij}$ and the estimator of ${\mu }_{i}$ in (9), an estimator of (3) can be constructed. Let us define ${\Lambda }_{n}={\left({\lambda }_{ij}^{\left(n\right)}\right)}_{1\le i\ne j\le k}$ and ${\Upsilon }_{n}={\left({\mu }_{i}^{\left(n\right)}\right)}_{1\le i\le k}$, the vector with the off-diagonal elements of $ℚ$ and the vector with the diagonal elements of $ℍ$, respectively.

Let us also define some functions that allow us to build the matrices from the vectors presented above: $\mathcal{Q}:{ℝ}^{k\left(k-1\right)}\to {\mathcal{M}}_{k×k}$ such that $\mathcal{Q}\left(\Lambda \right)=ℚ$, where $ℚ={\left({Q}_{ij}\right)}_{1\le i,j\le k}$ is such that ${Q}_{ij}={q}_{ij}$ if $i\ne j$ and ${Q}_{ii}=-{\sum }_{j=1,j\ne i}^{k}\text{ }{q}_{ij}$; and $\mathcal{H}:{ℝ}^{k}\to {\mathcal{M}}_{k×k}$ such that $\mathcal{H}\left(\Upsilon \right)=ℍ$, where $ℍ={\left({H}_{ij}\right)}_{1\le i,j\le k}$ is defined by ${H}_{ij}={\mu }_{i}$ if $i=j$ and 0 otherwise.

Finally, we define another function whose value is a matrix that carries the same information as $ℚ$ but has the advantage of admitting an inverse: $\stackrel{^}{\mathcal{Q}}\left(\Lambda \right)=\stackrel{^}{ℚ}$, where $\stackrel{^}{ℚ}={\left({\stackrel{^}{Q}}_{ij}\right)}_{1\le i,j\le k}$ is such that ${\stackrel{^}{Q}}_{ij}={q}_{ij}$ if $j<k$ and ${\stackrel{^}{Q}}_{ij}=1$ if $j=k$.

Since $ℚ,\Lambda$ and $\stackrel{^}{ℚ}$ contain exactly the same information, we can regard any parameter that depends on $ℚ$ as a function of $\Lambda$.

We are now able to present the following result, which gives the asymptotic distribution of (3).

Theorem 3. Let ${X}_{t}$ be a GMFM and let ${\Lambda }_{n}={\left({\lambda }_{ij}^{\left(n\right)}\right)}_{1\le i\ne j\le k}$ and ${\Upsilon }_{n}={\left({\mu }_{i}^{\left(n\right)}\right)}_{1\le i\le k}$ be the vectors containing the estimators of ${\lambda }_{ij}$ and ${\mu }_{i}$ respectively. Let us define the following functions:

$\mathcal{B}:{ℝ}^{k\left(k-1\right)}×{ℝ}^{k}\to {\mathcal{M}}_{k×k}\text{ such that }\mathcal{B}\left(\Lambda ,\Upsilon \right)=exp\left[\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)t\right],$ (28)

$g:{ℝ}^{k\left(k-1\right)}×{ℝ}^{k}\to ℝ\text{ such that }g\left(\Lambda ,\Upsilon \right)=\pi \left(\Lambda \right)\mathcal{B}\left(\Lambda ,\Upsilon \right)1,$ (29)

$\Psi :{ℝ}^{k\left(k-1\right)}×{ℝ}^{k}\to ℝ\text{ such that }\Psi \left(\Lambda ,\Upsilon \right)=\frac{1}{st}log\left(g\left(\Lambda ,\Upsilon \right)\right).$ (30)

Then, for fixed s and t, the estimator ${\alpha }^{\left(n\right)}\left(s,t\right)=\Psi \left({\Lambda }_{n},{\Upsilon }_{n}\right)$ satisfies

$\sqrt{n}\left({\alpha }^{\left(n\right)}\left(s,t\right)-\alpha \left(s,t\right)\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,{\sigma }^{2}\right),$ (31)

with ${\sigma }^{2}=\nabla \Psi \left(\Lambda ,\Upsilon \right){\Sigma }^{\prime }\nabla \Psi {\left(\Lambda ,\Upsilon \right)}^{t}$ , where

${\Sigma }^{\prime }=\left[\begin{array}{cccccc}\frac{{\lambda }_{11}}{\pi \left(1\right)}& & & & & \\ & \ddots & & & & \\ & & \frac{{\lambda }_{k\left(k-1\right)}}{\pi \left(k\right)}& & & \\ & & & \frac{{\sigma }_{1}^{2}}{{\lambda }_{1}}& & \\ & & & & \ddots & \\ & & & & & \frac{{\sigma }_{k}^{2}}{{\lambda }_{k}}\end{array}\right]$ .

Proof. Since ${\lambda }_{ij}^{\left(n\right)}$ and ${\mu }_{i}^{\left(n\right)}$ are unbiased and asymptotically Gaussian, and the functions used to construct ${\alpha }^{\left(n\right)}\left(s,t\right)$ in (31) are differentiable, applying Lemma 6 and Lemma 7 in the Appendix the result holds. The variance ${\sigma }^{2}$ can then be written more precisely as

$\begin{array}{c}{\sigma }^{2}=\frac{1}{{\left(st\pi \left(\Lambda \right)\mathcal{B}\left(\Lambda ,\Upsilon \right)1\right)}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}×\left[\underset{\left(i,j\right)\in D}{\sum }\frac{{q}_{ij}}{\pi \left(i\right)}{\left(\frac{\partial \pi \left(\Lambda \right)}{\partial {q}_{ij}}\mathcal{B}\left(\Lambda ,\Upsilon \right)1+\pi \left(\Lambda \right)\underset{l=0}{\overset{\infty }{\sum }}\underset{r=0}{\overset{l-1}{\sum }}\text{ }\text{ }A\left(\Lambda ,\Upsilon \right)1\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{k}{\sum }}\frac{{\sigma }_{i}^{2}}{{\lambda }_{i}}{\left(\pi \left(\Lambda \right)\underset{l=0}{\overset{\infty }{\sum }}\underset{r=0}{\overset{l-1}{\sum }}\text{ }\text{ }{A}^{\ast }\left(\Lambda ,\Upsilon \right)1\right)}^{2}\right],\end{array}$ (32)

where

${A}_{ij}\left(\Lambda ,\Upsilon \right)=\underset{l=0}{\overset{\infty }{\sum }}\underset{r=0}{\overset{l-1}{\sum }}\frac{{t}^{l}}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{r}{V}^{ij}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l-r-1},$ (33)

${A}_{i}^{\ast }\left(\Lambda ,\Upsilon \right)=\underset{l=0}{\overset{\infty }{\sum }}\underset{r=0}{\overset{l-1}{\sum }}\frac{{t}^{l}s}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{r}{U}^{i}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l-r-1},$ (34)

and ${\left({V}^{ij}\right)}_{lm}=\left\{\begin{array}{ll}1&\text{if }l=i\text{ and }m=j\ne i\\ -1&\text{if }l=m=i\\ 0&\text{otherwise}\end{array}\right.$, ${\left({U}^{i}\right)}_{lm}=\left\{\begin{array}{ll}1&\text{if }l=m=i\\ 0&\text{otherwise}\end{array}\right.$.
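The asymptotic variance ${\sigma }^{2}=\nabla \Psi \left(\Lambda ,\Upsilon \right){\Sigma }^{\prime }\nabla \Psi {\left(\Lambda ,\Upsilon \right)}^{t}$ can also be approximated numerically without evaluating the series (33)-(34), by finite-difference differentiation of $\Psi$. The sketch below does this for an invented two-state example; the parameter packing, the generator entries and the variance terms ${\sigma }_{i}^{2}/{\lambda }_{i}$ are all assumptions made for illustration.

```python
import numpy as np

def psi(params, s, t, k=2, terms=60):
    """Psi(Lambda, Upsilon) of (30): `params` packs the k(k-1) off-diagonal
    generator entries followed by the k mean rates (illustrative layout)."""
    lam, mu = params[:k * (k - 1)], params[k * (k - 1):]
    Q = np.zeros((k, k))
    idx = 0
    for i in range(k):
        for j in range(k):
            if i != j:
                Q[i, j] = lam[idx]
                idx += 1
    np.fill_diagonal(Q, -Q.sum(axis=1))        # the map Q(Lambda)
    M = (Q + s * np.diag(mu)) * t
    expM = np.eye(k)
    term = np.eye(k)
    for l in range(1, terms + 1):              # truncated matrix exponential
        term = term @ M / l
        expM = expM + term
    A = np.vstack([Q.T, np.ones(k)])           # pi(Lambda) via pi Q = 0, pi 1 = 1
    b = np.zeros(k + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.log(pi @ expM @ np.ones(k)) / (s * t)

params = np.array([1.0, 2.0, 0.5, 5.5])   # invented (q12, q21, mu1, mu2)
s, t, h = 0.5, 2.0, 1e-6
grad = np.array([(psi(params + h * e, s, t) - psi(params - h * e, s, t)) / (2 * h)
                 for e in np.eye(len(params))])
pi = np.array([2/3, 1/3])                 # invariant distribution of this generator
Sigma = np.diag([params[0] / pi[0], params[1] / pi[1], 0.1, 0.2])  # invented sigma_i^2/lambda_i
sigma2 = grad @ Sigma @ grad
print(sigma2)  # delta-method variance, nonnegative
```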

4.3. Consistency of the Estimators

We now present the main result, which allows us to find consistent estimators for each parameter involved in the formula for the variance ${\sigma }^{2}$ obtained in Theorem 3.

Proposition 4. Let ${\Lambda }_{n}={\left({\lambda }_{ij}^{\left(n\right)}\right)}_{1\le i\ne j\le k}$ and ${\Upsilon }_{n}={\left({\mu }_{i}^{\left(n\right)}\right)}_{1\le i\le k}$ be the vectors containing the estimators of ${\lambda }_{ij}$ and ${\mu }_{i}$ respectively. Then

1) ${p}_{n}={e}_{k}{\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)$ is a consistent estimator of $\pi \left(\Lambda \right)$ .

2) $d{p}_{n}^{ij}=-{e}_{k}{\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)\frac{\partial \stackrel{^}{\mathcal{Q}}\left(\Lambda \right)}{\partial {q}_{ij}}{\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)$ is a consistent estimator of $\frac{\partial \pi \left(\Lambda \right)}{\partial {q}_{ij}}$ .

3) ${\mathcal{B}}_{n}={\sum }_{l=0}^{{m}_{n}}\frac{{t}^{l}{\left(\mathcal{Q}\left({\Lambda }_{n}\right)+\mathcal{H}\left({\Upsilon }_{n}\right)s\right)}^{l}}{l!}$ is a consistent estimator of

$\mathcal{B}\left(\Lambda ,\Upsilon \right)=exp\left[\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)t\right].$

4) ${S}_{n}=st{p}_{n}{\mathcal{B}}_{n}1$ is a consistent estimator of $S=st\pi \left(\Lambda \right)\mathcal{B}\left(\Lambda ,\Upsilon \right)1$.

Proof.

1) By definition $\stackrel{^}{ℚ}=\stackrel{^}{\mathcal{Q}}\left(\Lambda \right)$ and $\pi \stackrel{^}{ℚ}={e}_{k}$; then, by Lemma 8 in the Appendix, $\pi ={e}_{k}{\stackrel{^}{ℚ}}^{-1}$. As ${\Lambda }_{n}\underset{n\to \infty }{\overset{a.s.}{\to }}\Lambda$ and $\stackrel{^}{\mathcal{Q}}\left(\Lambda \right)$ is continuous, then $\stackrel{^}{\mathcal{Q}}\left({\Lambda }_{n}\right)\underset{n\to \infty }{\overset{a.s.}{\to }}\stackrel{^}{\mathcal{Q}}\left(\Lambda \right)$.

On the other hand, ${\stackrel{^}{\mathcal{Q}}}^{-1}$ is also continuous; then, for n large enough, $\stackrel{^}{\mathcal{Q}}\left({\Lambda }_{n}\right)$ admits an inverse and ${\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)\underset{n\to \infty }{\overset{a.s.}{\to }}{\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)$, so ${p}_{n}={e}_{k}{\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)$ is a consistent estimator of $\pi \left(\Lambda \right)$.

2) As $\pi \left(\Lambda \right)={e}_{k}{\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)$ , then $\frac{\partial \pi \left(\Lambda \right)}{\partial {q}_{ij}}={e}_{k}\frac{\partial {\stackrel{^}{\mathcal{Q}}}^{-1}}{\partial {q}_{ij}},$ and by Lemma 8 in Appendix

$\frac{\partial {\stackrel{^}{\mathcal{Q}}}^{-1}}{\partial {q}_{ij}}=-{\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)\frac{\partial \stackrel{^}{\mathcal{Q}}\left(\Lambda \right)}{\partial {q}_{ij}}{\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right).$ (35)

Since ${\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)$ is a consistent estimator of ${\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)$, it follows that $d{p}_{n}^{ij}=-{e}_{k}{\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)\frac{\partial \stackrel{^}{\mathcal{Q}}\left({\Lambda }_{n}\right)}{\partial {q}_{ij}}{\stackrel{^}{\mathcal{Q}}}^{-1}\left({\Lambda }_{n}\right)$ is a consistent estimator of $\frac{\partial \pi \left(\Lambda \right)}{\partial {q}_{ij}}$.

3) To prove that ${\mathcal{B}}_{n}\underset{n\to \infty }{\overset{a.s.}{\to }}\mathcal{B}\left(\Lambda ,\Upsilon \right)$, or equivalently $|{\mathcal{B}}_{n}-\mathcal{B}\left(\Lambda ,\Upsilon \right)|\underset{n\to \infty }{\to }0$, let us recall that the matrix $\mathcal{B}\left(\Lambda ,\Upsilon \right)$ can be written in two equivalent ways as follows

$\mathcal{B}\left(\Lambda ,\Upsilon \right)=exp\left[\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)t\right]=\underset{l=0}{\overset{\infty }{\sum }}\frac{{t}^{l}}{l!}\left({\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}\right).$ (36)

Then

$\begin{array}{c}{\mathcal{B}}_{n}-\mathcal{B}\left(\Lambda ,\Upsilon \right)=\underset{l=0}{\overset{{m}_{n}}{\sum }}\frac{{t}^{l}}{l!}\left[{\left(\mathcal{Q}\left({\Lambda }_{n}\right)+\mathcal{H}\left({\Upsilon }_{n}\right)s\right)}^{l}-{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{l={m}_{n}+1}{\overset{\infty }{\sum }}\frac{{t}^{l}}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}.\end{array}$ (37)

The second term is the tail of a convergent series, and therefore tends to 0 as n grows.

For the first term, we apply the Mean Value Theorem, defining the function $f:{\mathcal{M}}_{k×k}\to {\mathcal{M}}_{k×k}$ such that $f\left(M\right)={M}^{l}$, to express the increment of the function in terms of the increment of its argument through the differential operator, as follows:

$f\left({\mathcal{B}}_{n}\right)-f\left(\mathcal{B}\right)=Df\left({\stackrel{˜}{\mathcal{B}}}_{n}\right)\cdot \left({\mathcal{B}}_{n}-\mathcal{B}\right),$ (38)

where ${\stackrel{˜}{\mathcal{B}}}_{n}$ is between ${\mathcal{B}}_{n}$ and $\mathcal{B}$; equivalently, (38) can be written as $Df\left({\stackrel{˜}{\mathcal{B}}}_{n}\right)\cdot \left(\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)+\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s\right).$

By the definition of the differential operator applied to $f\left(A\right)$, Equation (38) becomes

$f\left({\mathcal{B}}_{n}\right)-f\left(\mathcal{B}\right)=\underset{i=0}{\overset{l-1}{\sum }}\text{ }\text{ }{\stackrel{˜}{\mathcal{B}}}_{n}^{i}\left(\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)+\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s\right){\stackrel{˜}{\mathcal{B}}}_{n}^{l-i-1},$ (39)

so we have

$‖\underset{l=0}{\overset{{m}_{n}}{\sum }}\frac{{t}^{l}}{l!}\left[{\left(\mathcal{Q}\left({\Lambda }_{n}\right)+\mathcal{H}\left({\Upsilon }_{n}\right)s\right)}^{l}-{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}\right]‖$ (40)

$=‖\underset{l=0}{\overset{{m}_{n}}{\sum }}\frac{{t}^{l}}{l!}\underset{i=0}{\overset{l-1}{\sum }}\text{ }\text{ }{\stackrel{˜}{\mathcal{B}}}_{n}^{i}\left(\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)+\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s\right){\stackrel{˜}{\mathcal{B}}}_{n}^{l-i-1}‖$

$\le \underset{l=0}{\overset{{m}_{n}}{\sum }}\frac{{t}^{l}}{l!}\underset{i=0}{\overset{l-1}{\sum }}{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}\left(‖\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)‖+‖\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s‖\right)$ (41)

$\le \underset{l=0}{\overset{{m}_{n}}{\sum }}\frac{{t}^{l}}{l!}l{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}\left(‖\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)‖+‖\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s‖\right)$ (42)

$=\left(‖\left(\mathcal{Q}\left({\Lambda }_{n}\right)-\mathcal{Q}\left(\Lambda \right)\right)‖+‖\left(\mathcal{H}\left({\Upsilon }_{n}\right)-\mathcal{H}\left(\Upsilon \right)\right)s‖\right)t\underset{l=0}{\overset{{m}_{n}}{\sum }}\frac{{t}^{l-1}}{\left(l-1\right)!}{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}.$ (43)

As ${\sum }_{l=0}^{{m}_{n}}\frac{{t}^{l-1}}{\left(l-1\right)!}{‖{\stackrel{˜}{\mathcal{B}}}_{n}‖}^{l-1}$ is a partial sum of a convergent series and therefore bounded, and both ${\Lambda }_{n}$ and ${\Upsilon }_{n}$ are consistent estimators of $\Lambda$ and $\Upsilon$ respectively, each term of the first factor tends to 0, so ${\mathcal{B}}_{n}$ is a consistent estimator of $\mathcal{B}$ .

4) This point follows directly from points 1 and 3.
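As a numerical illustration of the truncation argument above, the following sketch shows the error of the partial sums of the matrix-exponential series vanishing as the truncation point grows. The $2×2$ matrix used here is a hypothetical stand-in for $\mathcal{B}=\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s$ :

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 matrix standing in for B = Q(Lambda) + H(Upsilon) s.
B = np.array([[-1.0, 1.0],
              [0.5, -0.5]])
t = 2.0

def truncated_exp(B, t, m):
    """Partial sum sum_{l=0}^{m} (t^l / l!) B^l of the matrix exponential."""
    acc = np.eye(B.shape[0])
    term = np.eye(B.shape[0])
    for l in range(1, m + 1):
        term = term @ B * (t / l)  # multiplies in one factor t B / l per step
        acc = acc + term
    return acc

# The truncation error (the "tail" term in the proof) vanishes as m grows.
errors = [np.linalg.norm(truncated_exp(B, t, m) - expm(B * t))
          for m in (2, 5, 10, 20)]
```

The decreasing sequence of errors mirrors the bound (43): the tail of the exponential series controls the truncation error uniformly once $m$ exceeds $t‖\mathcal{B}‖$ .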

4.4. Confidence interval for $\alpha \left(s,t\right)$

The following theorem and corollary show how to perform numerical computation using the main result.

Theorem 5. Let ${q}_{ij}^{\left(n\right)}$ be the maximum likelihood estimators of the entries of Q, let ${S}_{n},{p}_{n},d{p}_{n}$ and ${\mathcal{B}}_{n}$ be the estimators presented in the proposition above, and let ${m}_{n}$ be a sequence of positive real numbers such that ${m}_{n}\underset{n\to \infty }{\to }\infty$ ; then

$\begin{array}{c}{\sigma }_{n}^{2}=\frac{1}{{S}_{n}^{2}}\left[\underset{i=1}{\overset{k}{\sum }}\underset{j=1}{\overset{k-1}{\sum }}\frac{{q}_{ij}^{\left(n\right)}}{{p}_{n}\left(i\right)}{\left(d{p}_{n}^{ij}{\mathcal{B}}_{n}1+{p}_{n}^{ij}\underset{l=0}{\overset{{m}_{n}}{\sum }}\underset{r=0}{\overset{l-1}{\sum }}\text{ }\text{ }A\left({\Lambda }_{n},{\Upsilon }_{n}\right)1\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{k}{\sum }}\frac{{\stackrel{^}{{\sigma }_{i}}}^{2}}{\stackrel{^}{{\lambda }_{i}}}{\left({p}_{n}^{ij}\underset{l=0}{\overset{{m}_{n}}{\sum }}\underset{r=0}{\overset{l-1}{\sum }}\text{ }\text{ }{A}^{\ast }\left({\Lambda }_{n},{\Upsilon }_{n}\right)1\right)}^{2}\right],\end{array}$ (44)

is a consistent estimator of ${\sigma }^{2}$ , with $A\left(\cdot ,\cdot \right)$ and ${A}^{\ast }\left(\cdot ,\cdot \right)$ defined as in (33) and (34) respectively.

The main argument in the proof is differentiability, and a straightforward consequence of the theorem is the following result.

Corollary 1. Taking $\alpha \left(s,t\right)$ , ${\alpha }^{\left(n\right)}\left(s,t\right)$ and ${\sigma }_{n}^{2}$ defined in (3), (31) and (44) respectively, we have

1) $\frac{\sqrt{n}\left({\alpha }^{\left(n\right)}\left(s,t\right)-\alpha \left(s,t\right)\right)}{{\sigma }_{n}}\underset{n\to \infty }{\overset{w}{\to }}N\left(0,1\right).$

2) If ${I}_{\alpha }\left(n\right)=\left[{\alpha }^{\left(n\right)}\left(s,t\right)-\frac{{z}_{ϵ}{\sigma }_{n}}{\sqrt{n}},{\alpha }^{\left(n\right)}\left(s,t\right)+\frac{{z}_{ϵ}{\sigma }_{n}}{\sqrt{n}}\right],$ where ${z}_{ϵ}$ is such that $P\left(Z>{z}_{ϵ}\right)=\frac{ϵ}{2}$ for $Z~N\left(0,1\right)$ , then

$\underset{n\to \infty }{lim}P\left(\alpha \left(s,t\right)\in {I}_{\alpha }\left(n\right)\right)=1-ϵ.$ (45)
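Corollary 1 translates directly into a computation. The sketch below builds the interval ${I}_{\alpha }\left(n\right)$ ; the values of the estimate, its standard deviation and the sample size are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.stats import norm

def eb_confidence_interval(alpha_hat, sigma_hat, n, eps=0.05):
    """Asymptotic (1 - eps) confidence interval for alpha(s, t), as in
    Corollary 1: alpha_hat +/- z_eps * sigma_hat / sqrt(n)."""
    z_eps = norm.ppf(1.0 - eps / 2.0)  # P(Z > z_eps) = eps/2 for Z ~ N(0,1)
    half_width = z_eps * sigma_hat / np.sqrt(n)
    return alpha_hat - half_width, alpha_hat + half_width

# Hypothetical values for illustration only.
lo, hi = eb_confidence_interval(alpha_hat=1.8, sigma_hat=0.4, n=10_000, eps=0.05)
```

With $ϵ=0.05$ the quantile is ${z}_{ϵ}\approx 1.96$ , so the interval half-width shrinks at rate $1/\sqrt{n}$ as the trace length grows.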

5. Simulation and Numerical Results

In this section we analyze traffic traces generated by simulation from the model introduced in Section 2, and perform the estimations on them.

5.1. Parameters for the Simulation

To validate the results obtained, we performed several traffic simulations according to the GMFM model presented.

In the chosen model, the modulating Markov chain has $k=13$ states, and each state is associated with a data transfer rate interval as shown in Table 1.

The usual state is expected to be the one with the highest transfer rate available in the transmission channel, so the most probable state is 13. It is also more common to jump to an adjacent state, to the maximum transfer rate, to the minimum, or to no transfer at all. With these considerations, the infinitesimal generator of the modulating chain is designed as the matrix $ℚ$

$Q=\left(\begin{array}{ccccccccccccc}-7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 3.75\\ 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1.88\\ 1& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 0.13& 1\\ 1& 0.125& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 0.13& 2& -7& 2& 1\\ 2& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& 0.20& -8& 4\\ 2& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 0.30& 5.00& -10\end{array}\right).$
(46)

Within each of these intervals, the amount actually dispatched is drawn from a normal distribution truncated to the interval, with mean equal to the midpoint of the interval and standard deviation equal to one sixth of its length. The diagonal matrix $ℍ$ contains the mean values of these distributions.

Table 1. Transfer speed.

An example of the traces generated for the simulations can be seen in Figure 1.
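The simulation scheme just described can be sketched as follows. The 3-state generator and rate intervals below are hypothetical stand-ins for the 13-state specification of (46) and Table 1, kept small for readability:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Small illustrative GMFM: 3-state modulating chain (the paper uses k = 13).
Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -3.0, 2.0],
              [0.5, 2.5, -3.0]])
# Transfer-rate interval attached to each state (hypothetical values).
intervals = [(0.0, 1.0), (1.0, 4.0), (4.0, 10.0)]

def simulate_trace(Q, intervals, horizon, rng):
    """Simulate (jump times, states, rates): the chain holds state i for an
    Exp(-Q[i,i]) time; the dispatched rate is drawn from a normal truncated
    to the state's interval, with mean the midpoint and sd one sixth of the
    length, as described in the text."""
    t, state = 0.0, 0
    times, states, rates = [], [], []
    while t < horizon:
        a, b = intervals[state]
        mu, sd = (a + b) / 2.0, (b - a) / 6.0
        rate = truncnorm.rvs((a - mu) / sd, (b - mu) / sd, loc=mu, scale=sd,
                             random_state=rng)
        times.append(t); states.append(state); rates.append(rate)
        t += rng.exponential(1.0 / -Q[state, state])
        # Jump probabilities proportional to the off-diagonal row entries.
        p = Q[state].clip(min=0.0)
        state = rng.choice(len(p), p=p / p.sum())
    return times, states, rates

times, states, rates = simulate_trace(Q, intervals, horizon=50.0, rng=rng)
```

Plotting `rates` against `times` as a step function produces a trace of the kind shown in Figure 1.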

5.2. Estimation of the Effective Bandwidth from Traces

The first objective is to calculate the EB of the presented model, using the result shown in Theorem 1; the EB is then calculated according to formula (3). Figure 2 shows the EB calculated for the GMFM.
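A minimal sketch of this computation, assuming the standard Markov fluid form of the effective bandwidth, $\alpha \left(s,t\right)=\frac{1}{st}\mathrm{log}\left(\pi {\text{e}}^{\left(Q+Hs\right)t}1\right)$ , with hypothetical 2-state parameters in place of the 13-state model:

```python
import numpy as np
from scipy.linalg import expm

def effective_bandwidth(Q, h, s, t):
    """Effective bandwidth of a Markov fluid source:
    alpha(s, t) = (1/(s t)) * log( pi @ expm((Q + s H) t) @ 1 ),
    where pi is the stationary distribution of the generator Q and
    H = diag(h) holds the per-state mean rates."""
    k = Q.shape[0]
    # Stationary distribution: pi @ Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(k)])
    b = np.concatenate([np.zeros(k), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    M = expm((Q + s * np.diag(h)) * t)
    return np.log(pi @ M @ np.ones(k)) / (s * t)

# Illustrative 2-state on/off source (hypothetical parameters).
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
h = np.array([0.0, 3.0])
alpha = effective_bandwidth(Q, h, s=0.5, t=10.0)
```

As expected for an effective bandwidth, the value lies strictly between the mean rate $\pi h=1$ and the peak rate 3 of this hypothetical source.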

For each simulated trace we estimate the EB using the estimator presented in Theorem 2, according to Equation (31); Figure 3 compares the effective bandwidth estimated from one trace with its theoretical value.

Figure 1. Trace generated with the GMFM.

Figure 2. Effective bandwidth for the GMFM model.

Figure 3. Effective bandwidth vs. Estimated effective bandwidth.

6. Conclusions

In this work, we have presented contributions in two areas related to data networks. Regarding modeling, we have proposed the GMFM, which has the advantage of being very realistic for the current requirements of telecommunication networks while allowing refined mathematical statistics tools and results to be applied. We have also found a formula for the effective bandwidth in which the role played by each parameter of the model can be visualized.

Regarding parameter estimation, we have proposed a methodology to estimate the effective bandwidth from traffic traces of a GMFM source; the estimators have the properties needed to satisfy a Central Limit Theorem and thus allow a confidence interval to be built. These results enable the calculation of the effective bandwidth from simulated traffic traces. A numerical example has been presented, where the results were applied to traffic traces and ideal results were obtained.

We expect to extend the statistical effective bandwidth calculation to other stochastic phenomena, in which the supports of the probability laws need not be disjoint or the dynamics need not be Markovian, and to apply these techniques to real data scenarios.

Acknowledgements

The authors express their gratitude to Dr. Gonzalo Perera for introducing them to the topic and for his valuable suggestions.

Appendix

Lemma 6. Let ${\left({Z}_{n}\right)}_{n\in ℕ}$ be a sequence of random variables in ${ℝ}^{d},d\ge 1$ , ${\left({a}_{n}\right)}_{n\in ℕ}$ a sequence of positive numbers satisfying ${a}_{n}\to \infty$ , and $Z\in {ℝ}^{d}$ such that

${a}_{n}\left({Z}_{n}-Z\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,\Sigma \right).$ (A1)

Let us consider $G:{ℝ}^{d}\to ℝ$ differentiable in a neighborhood of Z; then

${a}_{n}\left(G\left({Z}_{n}\right)-G\left(Z\right)\right)\underset{n\to \infty }{\overset{w}{\to }}N\left(0,\nabla G\left(Z\right)\Sigma \nabla G{\left(Z\right)}^{t}\right).$ (A2)
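Lemma 6 is the classical delta method. A quick Monte Carlo sketch, with an illustrative choice of $G$ , $Z$ and $\Sigma$ (all hypothetical), confirms the limit variance $\nabla G\left(Z\right)\Sigma \nabla G{\left(Z\right)}^{t}$ :

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration of the delta method with d = 2, Z = (1, 2) and
# G(x, y) = x * y, so grad G(Z) = (2, 1).
Z = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
grad = np.array([2.0, 1.0])
target_var = grad @ Sigma @ grad  # grad G(Z) Sigma grad G(Z)^t = 7.2

# Build Z_n satisfying the convergence hypothesis exactly, with a_n = sqrt(n).
n, reps = 10_000, 50_000
W = rng.multivariate_normal(np.zeros(2), Sigma, size=reps)
Zn = Z + W / np.sqrt(n)

def G(x):
    return x[..., 0] * x[..., 1]

scaled = np.sqrt(n) * (G(Zn) - G(Z))
empirical_var = scaled.var()  # should be close to target_var
```

The empirical variance of the rescaled increments matches the limiting variance up to Monte Carlo error, as the lemma predicts.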

Lemma 7. Let us consider $\Psi$ as in (30), $g$ as in (29) and $\mathcal{B}$ as in (28), and $\mathcal{Q}:{ℝ}^{k\left(k-1\right)}\to {\mathcal{M}}_{k×k}$ and $\mathcal{H}:{ℝ}^{k}\to {\mathcal{M}}_{k×k}$ defined in Section 4.2, then

1) $\frac{\partial \Psi \left(\Lambda ,\Upsilon \right)}{\partial {q}_{ij}}=\frac{1}{st\text{ }g\left(\Lambda ,\Upsilon \right)}\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {q}_{ij}}$ .

2) $\frac{\partial \Psi \left(\Lambda ,\Upsilon \right)}{\partial {\mu }_{i}}=\frac{1}{st\text{ }g\left(\Lambda ,\Upsilon \right)}\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {\mu }_{i}}$ .

3) $\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {q}_{ij}}=\frac{\partial \pi \left(\Lambda \right)}{\partial {q}_{ij}}\mathcal{B}\left(\Lambda ,\Upsilon \right)1+\pi \left(\Lambda \right)\left({\sum }_{l=0}^{\infty }{\sum }_{r=0}^{l-1}\frac{{t}^{l}}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{r}{V}^{ij}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l-r-1}\right)1,$

with ${\left({V}^{ij}\right)}_{lm}=\left\{\begin{array}{l}1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}i=l\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}j=m\ne i\\ -1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}l=i=m\\ 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{otherwise}\end{array}\right.$ .

4)

$\frac{\partial g\left(\Lambda ,\Upsilon \right)}{\partial {\mu }_{i}}=\pi \left(\Lambda \right)\left({\sum }_{l=0}^{\infty }{\sum }_{r=0}^{l-1}\frac{{t}^{l}s}{l!}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{r}{U}^{i}{\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l-r-1}\right)1$ ,

with ${\left({U}^{i}\right)}_{lm}=\left\{\begin{array}{l}1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}l=m=i\\ 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{otherwise}\end{array}\right.$ .
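Item 4 can be checked numerically. Differentiating ${\left(\mathcal{Q}\left(\Lambda \right)+\mathcal{H}\left(\Upsilon \right)s\right)}^{l}$ in ${\mu }_{i}$ produces exactly one factor of $s$ per term, since $\frac{\partial }{\partial {\mu }_{i}}\left(\mathcal{H}\left(\Upsilon \right)s\right)=s{U}^{i}$ . The sketch below uses hypothetical 2-state data, truncates the series at $l=40$ , and compares it with a centered finite difference of $g\left(\Lambda ,\Upsilon \right)=\pi {\text{e}}^{\left(Q+Hs\right)t}1$ (with $\pi$ held fixed, since it depends on $\Lambda$ only):

```python
import math
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-state data; pi is a constant in this derivative since it
# depends on Lambda only, so only the exponential is differentiated.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
mu = np.array([0.0, 3.0])
pi = np.array([2.0 / 3.0, 1.0 / 3.0])
s, t, i, m = 0.4, 1.5, 1, 40
one = np.ones(2)

def g(mu):
    return pi @ expm((Q + s * np.diag(mu)) * t) @ one

# Series from item 4, truncated at l = m; a single factor of s appears,
# coming from d/dmu_i (H s) = s U^i.
U = np.zeros((2, 2)); U[i, i] = 1.0
B = Q + s * np.diag(mu)
powers = [np.eye(2)]
for _ in range(m):
    powers.append(powers[-1] @ B)
M = np.zeros((2, 2))
for l in range(1, m + 1):
    c = s * t ** l / math.factorial(l)
    for r in range(l):
        M += c * powers[r] @ U @ powers[l - r - 1]
deriv_series = pi @ M @ one

# Centered finite difference of g in mu_i for comparison.
eps = 1e-6
e_i = np.eye(2)[i]
deriv_fd = (g(mu + eps * e_i) - g(mu - eps * e_i)) / (2 * eps)
```

The truncated series and the finite difference agree to numerical precision, supporting the form of the derivative stated in the lemma.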

Lemma 8. The matrix $\stackrel{^}{ℚ}=\stackrel{^}{\mathcal{Q}}\left(\Lambda \right)$ admits an inverse ${\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)$ that is differentiable and satisfies

$D\left({\stackrel{^}{\mathcal{Q}}}^{-1}\right)\left(\Lambda \right)\left(x\right)=-{\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)D\left(\mathcal{Q}\right)\left(\Lambda \right){\stackrel{^}{\mathcal{Q}}}^{-1}\left(\Lambda \right)\left(x\right).$ (A3)

Conflicts of Interest

The authors declare no conflicts of interest.

Marrón, B.S. (2012) Estadística de Procesos Estocásticos aplicados a Redes de Datos y Telecomunicación. Ph.D. Thesis, Departamento de Matemática, Universidad Nacional del Sur, Argentina. http://repositoriodigital.uns.edu.ar/bitstream/123456789/2302/1/Marron-Beatriz-Tesis.pdf

Kelly, F. (1996) Notes on Effective Bandwidths. Stochastic Networks: Theory and Applications. Oxford University Press, Oxford.

Kesidis, G., Walrand, J. and Chang, C.S. (1993) Effective Bandwidth for Multiclass Markov Fluids and Other ATM Sources. IEEE/ACM Transactions on Networking, 1, 424-428. https://doi.org/10.1109/90.251894

Pechiar, J., Perera, G. and Simón, M. (2002) Effective Bandwidth Estimation and Testing for Markov Sources. Performance Evaluation, 45, 157-175. https://doi.org/10.1016/S0166-5316(02)00035-4

Lebedev, E.A. and Lukashuk, L.I. (1986) Maximum Likelihood Estimation of the Infinitesimal Matrix of a Markov Chain with Continuous Time (in Russian). Doklady Akademii Nauk Ukrainskoj SSR, Serija A, 1, 12-14.

Albert, A. (1962) Estimating the Infinitesimal Generator of a Continuous Time, Finite State Markov Process. The Annals of Mathematical Statistics, 33, 727-753. https://doi.org/10.1214/aoms/1177704594

Feller, W. (1957) An Introduction to Probability Theory and Its Applications. John Wiley, New York.

Billingsley, P. (1979) Probability and Measure. Wiley & Sons, New York.