Modeling and Analysis of Bandwidth Allocation in IEEE 802.16 MAC: A Stochastic Reward Net Approach

In this paper, we present a stochastic reward net (SRN) approach to analyse the performance of the IEEE 802.16 MAC with multiple traffic classes. The SRN model captures the quality of service requirements of the traffic classes, and takes into account the pre-emption, priority and timeout characteristics associated with the traffic classes under consideration. The performance of the system is evaluated in terms of mean delay and normalized throughput considering the on-off traffic model. Our analytical model is validated by simulations.


Introduction
Over the last few years there has been a tremendous increase in the use of broadband access. This deployment has boosted the usage of several multimedia applications such as Voice over IP (VoIP), online gaming and Video on Demand (VoD). However, in rural and suburban areas, deployment of traditional wired technologies is too expensive. In such cases, broadband wireless access (BWA) based on IEEE 802.16 provides a promising solution [1,2]. One of the key features of IEEE 802.16 is that it supports multiple applications such as HDTV, video conferencing and conventional internet applications. The challenge for BWA networks is to simultaneously provide quality of service (QoS) to applications with very different characteristics. Hence, a proper resource allocation scheme for packet transmission is imperative.
Performance evaluation of resource allocation mechanisms plays an important role in the design of communication systems. The increasing complexity of networks, and of the ways in which they are used, has made it difficult to construct models that are analytically tractable. SRNs are very useful in the analytical modeling of complex networks: system operations can be precisely described by means of a graph which translates into a Markovian model. Properties such as liveness and deadlock freeness make the SRN a reliable analytical modeling tool.
SRNs have been used extensively for performance modeling. The performance of opportunistic and non-opportunistic schedulers was compared in [3] using an analytical model developed with stochastic Petri nets (SPNs). A QoS protocol has been developed using Petri nets in [4]; the protocol has been verified for service guarantees and effective use of resources. The modeling power, analysis and verification of SPNs are discussed in [5]. The application of Petri nets (PNs) in performance and availability analysis is discussed in [6]. The authors in [7] presented an SRN approach to model IEEE 802.11 DCF with an on-off traffic model; performance metrics such as mean delay and average system throughput were evaluated. Reconfigurable PNs and their ability to model dynamic systems have been studied in [8].
Several approaches have been used for performance evaluation of IEEE 802.16 networks. A simulation approach has been followed in [9] for evaluating IEEE 802.16 system metrics such as mean delay and throughput. An analytical approach to study the bandwidth allocation process has been presented in [10,11]. A packet scheduling scheme for QoS provisioning in WiMAX networks is discussed in [12]; the proposed scheme has been verified using simulations. In [13], the authors proposed a Light WiMAX simulator (LWX) for evaluating the performance of IEEE 802.16 bandwidth allocation algorithms. A simulation approach has been adopted in [14] to compare various scheduling schemes such as round-robin, token-bucket-based and M-LWDF algorithms. The authors in [15] proposed an intelligent bandwidth allocation of uplink (IBAU) mechanism for WiMAX systems; IBAU is shown to decrease delay and increase throughput of the network. A survey of scheduling schemes in IEEE 802.16e systems has been presented in [16]. Simulation methodologies to be adopted for the MAC and PHY layers of IEEE 802.16 are presented in [17].
In this paper, we propose an SRN approach to model and analyze the performance of the IEEE 802.16 MAC with multiple traffic classes. The proposed model incorporates prioritization and pre-emption of traffic classes. Packet drop due to waiting time exceeding a threshold is also considered. Through the proposed SRN formulation we compute the average system throughput and the mean delay suffered by the first packet, i.e., the packet at the head of line (HOL) of each queue. The mean delay of subsequent packets is determined by modelling each queue as an M/G/1 queue [7], with the mean service time for this computation obtained from the mean delay suffered by the HOL packet. Our analytical model is validated by comparing the results with simulations carried out using an event-based simulator.
The rest of the paper is organized as follows: Section 2 presents a brief overview of the IEEE 802.16 MAC. The system model is presented in Section 3. Section 4 discusses the performance evaluation. Results and discussion are presented in Section 5. Conclusions are drawn in Section 6.

IEEE 802.16 MAC
An IEEE 802.16 system consists of two kinds of fixed stations: subscriber stations (SSs) and base stations (BSs). All communication in the network is regulated by the BS. Two directions of communication exist between BS and SS: uplink (from SS to BS) and downlink (from BS to SS). The IEEE 802.16 MAC defines QoS signaling mechanisms and functions that control BS and SS data transmissions. Two modes of sharing the wireless medium are possible: Point-to-Multipoint (PMP) and Mesh. In PMP mode, the BS serves a set of SSs in a broadcast manner, and transmissions from the SSs are coordinated by the BS. In mesh mode, nodes are organized in an ad hoc manner and SSs may communicate directly with each other. In this paper, we focus on the PMP mode.
The IEEE 802.16 MAC defines four different scheduling service flows in order to meet the QoS requirements of multimedia applications [9]. Unsolicited Grant Service (UGS) is designed to support real-time applications with strict delay requirements which generate fixed-size packets at periodic intervals, such as T1/E1. Real-time Polling Service (rtPS) is designed to support real-time applications with less stringent delay requirements, which generate variable-size packets at periodic intervals, such as VoIP with silence suppression. Non-real-time Polling Service (nrtPS) supports non-real-time variable-bit-rate services, such as FTP. Best Effort (BE) traffic, such as HTTP, has no QoS guarantees. Since the rtPS, nrtPS and BE traffic classes have varying bandwidth requirements, bandwidth allocation for these classes is performed dynamically. As UGS is allocated fixed and reserved bandwidth, dynamic reassignment of bandwidth is not required.
An SS maintains a separate connection for each service flow. The allocation of bandwidth by the BS to an SS is based on two modes: grant per subscriber station (GPSS) and grant per connection (GPC). In GPSS, the SS obtains aggregate bandwidth for all its individual flows and in turn reallocates the bandwidth to each flow individually. In GPC, the bandwidth allocation by the BS is made on a per-flow basis. We assume the GPSS mode of operation in this paper.

System Model
A typical IEEE 802.16 network consists of multiple BSs, and each BS covers several SSs. Every SS is associated with multiple queues corresponding to different traffic classes. We model a single SS with three queues corresponding to the rtPS, nrtPS and BE traffic classes, as shown in Figure 1. The SS is assigned aggregate bandwidth by the BS, and the three queues contend for bandwidth from the SS. The objective is to obtain the mean delay and normalized throughput of each traffic class for varying load conditions. The analytical model is required to take into account prioritization, pre-emption and dropping of packets (with waiting time exceeding the threshold) corresponding to the various traffic classes.
Packets arrive at each of the queues at random epochs of time. Data packets arriving at a queue are buffered until they gain access to the channel. Newly arriving packets are added to the queue on a first-come first-served (FCFS) basis. The delay of a packet is defined as the time spent by the packet until it is successfully transmitted. The normalized throughput of a given traffic class is defined as the ratio of successful packets transmitted to total packets generated. The average system throughput is the sum of the throughputs of the individual traffic classes.
The following assumptions are made in the model.
• There are 3 different traffic classes in the system, namely rtPS, nrtPS and BE, denoted as class 1, class 2 and class 3 respectively.
• We consider data-only traffic with an on-off traffic model. Data bursts consist of active and idle periods. (Practically, a data burst represents a data packet of variable length, for example an IP packet with zero idle time between a finite set of consecutive packets [7].)
• Data burst arrivals at any queue follow a Poisson process with mean arrival rate λ_i.
• Service times of data bursts are exponentially distributed with mean 1/µ_i seconds.
• The SSs are assumed to have negligible mobility.
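Under these assumptions each queue, taken in isolation, behaves like a simple FCFS queue with Poisson arrivals and exponential service. The sketch below is a minimal sanity check of this kind of event-based simulation (it is not the paper's simulator: the shared channel pool, priorities and pre-emption are deliberately omitted); for a single M/M/1 queue the estimated mean delay should approach the closed-form value 1/(µ − λ).

```python
import random

def simulate_fcfs_queue(lam, mu, n_packets=200_000, seed=1):
    """Estimate the mean delay (waiting + service) of a single-server FCFS
    queue with Poisson arrivals (rate lam) and exponential service (rate mu)."""
    rng = random.Random(seed)
    t_arrival = 0.0        # arrival epoch of the current packet
    server_free_at = 0.0   # time at which the server next becomes idle
    total_delay = 0.0
    for _ in range(n_packets):
        t_arrival += rng.expovariate(lam)          # Poisson arrival process
        start = max(t_arrival, server_free_at)     # wait if server is busy
        server_free_at = start + rng.expovariate(mu)
        total_delay += server_free_at - t_arrival  # sojourn time of this packet
    return total_delay / n_packets

# For lam = 0.5, mu = 1.0 the M/M/1 mean delay is 1/(mu - lam) = 2.0 s;
# the estimate should be close to that value.
est = simulate_fcfs_queue(0.5, 1.0)
```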

Performance Evaluation
In this section, we present an SRN model to evaluate the performance of the system considered in Section 3. The performance metrics considered are the normalized throughput and the mean delay suffered by packets belonging to each traffic class.

Stochastic Reward Net Model
The SRN model for an SS with three queues is shown in Figure 2. The model incorporates the priority, pre-emption and timeout characteristics of the queues. Tables 1-3 list the various places and transitions and the meaning associated with each of them.
Transition usr_i generates packets at a given rate λ_i and deposits them into place q_i. An inhibitor arc with cardinality buf_i ensures that the number of packets waiting to enter the queue is finite: if all channels are busy, the data packets are buffered in q_i with buffer size buf_i.
A way to assign priority is to give each transition an integer priority level. The transitions chchk_i are modelled as priority transitions, where a lower integer value indicates a higher priority level. A priority transition is enabled only if no higher priority transition is enabled. Since chchk_1 is assigned the lowest value, class 1 has the highest priority to gain access to the channel, followed by class 2 and class 3. Firing chchk_i transfers a packet from q_i to usg_i, indicating that the packet is being served. After completion of the service time, transition end_i fires and the channel is returned to the central pool. Note that the chchk_i are modelled as immediate transitions since they represent an activity that does not imply time dependency: although the action of assigning a channel takes time, this time is neglected from the point of view of traffic modelling.
In order to model pre-emption using the SRN, it is required to check the simultaneous presence of a packet in place usg_{i+1} and in q_i; this condition means that a lower priority packet is being served while a higher priority packet is waiting for the resource. The transitions prempt_{i,j} are immediate transitions used to model pre-emption. prempt_{i,j} is enabled when packets are available in places q_i and usg_j at the same time, where the subscripts i and j correspond to the higher priority and lower priority traffic classes respectively. The arc connecting prempt_{i,j} indicates removal of the packet from usg_j and the return of the channel to the central pool of channels. Hence, firing prempt_{i,j} pre-empts class j and enables class i to access the resource.
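The enabling conditions of the chchk_i and prempt_{i,j} transitions can be illustrated with a small sketch. The place names q_i and usg_i follow the model above, but the dictionary-based marking representation and function names are illustrative assumptions, not part of the SRN tool.

```python
def enabled_chchk(marking, free_channels, n_classes=3):
    """Return the index i of the chchk_i transition allowed to fire, honouring
    priority: class 1 beats class 2 beats class 3. chchk_i needs a packet
    waiting in q_i and at least one free channel in the central pool."""
    if free_channels == 0:
        return None
    for i in range(1, n_classes + 1):          # lowest index = highest priority
        if marking.get(f"q{i}", 0) > 0:
            return i
    return None

def enabled_preemptions(marking, n_classes=3):
    """Return the (i, j) pairs for which prempt_{i,j} is enabled: a higher
    priority packet waits in q_i while a lower priority one occupies usg_j."""
    pairs = []
    for i in range(1, n_classes + 1):
        for j in range(i + 1, n_classes + 1):
            if marking.get(f"q{i}", 0) > 0 and marking.get(f"usg{j}", 0) > 0:
                pairs.append((i, j))
    return pairs

# A class 1 packet waits while a class 3 packet holds the only channel:
# chchk cannot fire (no free channel), but prempt_{1,3} is enabled.
m = {"q1": 1, "usg3": 1}
```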
The channels available in the central pool of resources are shared by the traffic classes on arrival of data packets and returned to the pool on completion of service. At higher traffic loads, the available channels become insufficient to meet the bandwidth requirement; under such conditions, packets in the buffer wait for the availability of a resource. The traffic classes class 1 and class 2 belong to delay-sensitive applications with a maximum threshold on tolerable delay, and packets exceeding the threshold are dropped. Dropping of packets exceeding the delay limit is incorporated in the model using the timed transitions time_o_i.
The firing rate of time_o_i is set to µ_to_i, where 1/µ_to_i is the maximum tolerable delay for packets belonging to traffic class i. Firing time_o_i removes a packet from q_i, indicating a packet drop. The probability of packet drop depends on the available channels, the transmission rates of packets, the buffer size, etc. Since class 3 traffic is not associated with any such delay limit, we do not include the timeout feature for class 3.

Mean Delay and Normalized Throughput
The underlying continuous time Markov chain (CTMC) of the SRN model discussed above can be obtained from the extended reachability graph (ERG) [7]. To obtain the desired performance metrics, one has to solve the CTMC. The complexity of the CTMC increases with the size of the system. Solutions of complex CTMCs can be obtained by using standard software packages such as SHARPE [18], SPNica [19] or TimeNET [20]. The average number of packets in each place, and hence the steady state probability of each state in the CTMC, can be determined using these software tools. In this paper, we use SHARPE to construct the SRN model and obtain the performance metrics.
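SHARPE performs this computation internally; the sketch below only shows the underlying idea for a small generator matrix Q, solving the balance equations πQ = 0 with the normalization Σπ = 1. The toy chain is an M/M/1 queue with at most two packets, chosen for illustration; it is not the CTMC of the paper's SRN model.

```python
import numpy as np

def ctmc_steady_state(Q):
    """Solve pi @ Q = 0 subject to sum(pi) = 1, by replacing one redundant
    balance equation with the normalization condition."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0          # overwrite last equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Toy CTMC: M/M/1 with capacity 2, arrival rate lam = 1, service rate mu = 2.
lam, mu = 1.0, 2.0
Q = np.array([[-lam,         lam,         0.0],
              [  mu, -(lam + mu),         lam],
              [ 0.0,          mu,         -mu]])
pi = ctmc_steady_state(Q)   # expected: proportional to (1, 0.5, 0.25)
```

Performance measures then follow as reward-weighted sums over pi, e.g. the average number of packets in the system is 0*pi[0] + 1*pi[1] + 2*pi[2].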
The average throughput of a transition T is defined as the average rate at which packets are deposited by the transition in its output places. If O(t) is the average number of packets deposited by transition T in all of its output places up to time t, then the throughput of transition T is defined as

η_T = lim_{t→∞} O(t)/t.

Since we consider three different traffic classes, the throughput of traffic class i, η_i, is the throughput of its service-completion transition end_i. The average system throughput η is given by

η = η_1 + η_2 + η_3.

The mean delay D̂_H experienced by the HOL packet of traffic class i is the sum of the mean packet holding time and the mean waiting time in the places q_i and usg_i. Let the average number of packets in place P be #P. Then D̂_H can be computed using Little's Theorem [21] as

D̂_H = (#q_i + #usg_i)/λ_i,

where 1/µ_i is the mean packet holding time for traffic class i. The buffer in each queue is modelled as an M/G/1 queue whose service time is the HOL delay, represented by the random variable R_i. The mean delay D of the subsequent packets can then be determined by applying the Pollaczek-Khinchine mean value formula [22] as

D = E[R_i] + λ_i E[R_i²]/(2(1 − ρ_i)),

where ρ_i = λ_i E[R_i]. For small loads, D ≈ E[R_i].
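The Pollaczek-Khinchine mean value step can be sketched as follows (a minimal illustration of the M/G/1 formula, not of the paper's full SRN pipeline; the numeric inputs are assumed values). When the service time happens to be exponential, the result reduces to the familiar M/M/1 delay 1/(µ − λ), which gives a convenient check.

```python
def mg1_mean_delay(lam, es, es2):
    """Mean sojourn time of an M/G/1 queue via the Pollaczek-Khinchine
    mean value formula: D = E[S] + lam * E[S^2] / (2 * (1 - rho)),
    where es = E[S], es2 = E[S^2] and rho = lam * es."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    return es + lam * es2 / (2.0 * (1.0 - rho))

# Assumed example: HOL delay R exponential with mean 0.5 s, lam = 1.0.
# Then E[R^2] = 2 * 0.5**2 = 0.5, and D should equal the M/M/1 value
# 1/(mu - lam) = 1/(2 - 1) = 1.0 s.
d = mg1_mean_delay(1.0, 0.5, 0.5)
```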

Results and Discussion
We evaluate the system performance in terms of mean delay and normalized throughput for increasing traffic load, ρ = λ_i/µ_i, where λ_i is the arrival rate and µ_i is the service rate of each traffic class. Simulation parameters are shown in Table 4.
Input traffic parameter settings are given in Table 5.
We compare the analysis and simulation results for the three traffic classes in terms of mean delay and normalized throughput. From the results we find that the simulation results match the analysis, thus validating our analytical approach. We also analyse the performance of the system with varying buffer sizes.
Figure 3 presents a comparison of mean delay for the three traffic classes with increasing traffic load. It is observed that the mean delay increases with traffic load. The mean delay suffered by packets of class 1 is the least, followed by class 2 and class 3. The increase in mean delay is more pronounced for class 3, since class 3 has the least priority among the competing traffic classes. At higher loads, class 3 packets are starved of resources, which results in increased mean delay.
We further analyse the system with increased buffer size. Figure 4 shows the comparison of mean delay for buf = 15. From the figure it is observed that with increasing buffer size there is no significant increase in the mean delay of class 1 traffic, because packets belonging to class 1 wait the minimum amount of time to gain access to the channel. Further, since class 1 and class 2 packets are associated with a maximum tolerable delay, packets exceeding the tolerable delay are dropped. Dropped packets cause a decrease in throughput, as observed in Figure 7.
The mean delay of class 2 and class 3 for varying buffer sizes is presented in Figures 5 and 6. From Figure 6 we find that for a traffic load of 0.8, the mean delays with buf = 1 and buf = 5 are 2.5 and 5.6 respectively, resulting in a 55% increase. For the same traffic load, the mean delays with buf = 10 and buf = 15 are 6.9 and 7.2 respectively, producing only a 4% increase. We observe that an increase in buffer size does not produce a corresponding increase in mean delay, particularly for larger buffer sizes. The reason is that the available bandwidth is insufficient to serve all packets in the buffer. Hence, the number of packets successfully transmitted, which determines the mean delay, does not increase significantly with buffer size. Further, existing packets in the buffer prevent additional packets from entering the system.
Figures 7 and 8 present the normalized throughput of the three traffic classes for buffer sizes 1 and 15 respectively. From the graphs, it is observed that for a given buffer size, class 1 has the highest throughput, followed by class 2 and class 3. Further, the throughput of all traffic classes decreases with increasing traffic load. Comparing Figures 7 and 8, we find that increasing the buffer size from 1 to 15 increases the throughput significantly. The decrease in throughput of class 1 traffic at higher traffic loads is attributed to insufficient bandwidth. Also, class 2 and class 3 traffic suffer an additional decrease in throughput due to pre-emption.
Figure 9 presents the throughput of class 3 packets with increasing buffer sizes. From the graph it is observed that increasing the buffer size from 1 to 5 increases the throughput significantly, but a further increase from 10 to 15 does not produce any considerable gain. Beyond this point, increasing the buffer size results in saturation of the system with no further increase in throughput.

Conclusions
We presented an SRN formulation for performance evaluation of bandwidth allocation in an IEEE 802.16 network considering multiple traffic classes. The model includes the priority, pre-emption and time-out characteristics of the traffic classes. The performance of the system is evaluated in terms of mean delay and normalized throughput. Our model is validated using simulations. The model can be extended to include more than three traffic classes, and can be generalized to incorporate multiple SSs.


Figure 5. Mean delay of class 2 traffic for varying buffer size.

Figure 6. Mean delay of class 3 traffic for varying buffer size.

Figure 9. Normalized throughput of BE traffic class for varying buffer size.