This paper explores the traffic dynamics and performance of complex networks. Complex networks of various structures are studied. We use node betweenness centrality, network polarization, and average path length to capture the structural characteristics of a network. Network throughput, delay, and packet loss are used as network performance measures. Through simulation, we investigate how internal traffic, throughput, delay, and packet loss change as a function of packet generation rate, network structure, queue type, and queuing discipline. Three network states are classified. Further, our work reveals that the parameters chosen to reflect network structure, including node betweenness centrality, network polarization, and average path length, play important roles in different states of the underlying networks.

In network science, complex systems are described as networks consisting of vertices and the interactions or connections among them. Many social, biological, and communication systems are complex networks. The study of the structural and dynamical properties of complex networks has attracted considerable interest. One of the ultimate goals of these studies is to understand the influence of topological structure on the behavior of various complex networks: for instance, how the structure of a social network affects the spread of diseases, information, or rumors; how the structure of a food web affects population dynamics; and how the structure of a communication network affects traffic dynamics and network performance such as robustness, reliability, and traffic capacity.

There is a wealth of literature focusing on traffic dynamics and different performance aspects of communication networks. A basic model, which is aimed at simulating a general transport process on top of a communication network, has been proposed by Ohira and Sawatari [

The work of Tizghadam and Leon-Garcia [15-17] focuses on the robustness of communication networks. They introduce the notion of network criticality. They find that network criticality directly relates to network performance metrics such as average network utilization and average network cost. In addition, by minimizing network criticality, the robustness of a communication network can be improved. In order to measure nodal contribution to global network robustness, Feyessa and Bikdash [

In this paper, we investigate through simulation how internal traffic, throughput, delay, and packet loss change as a function of packet generation rate, network structure, queue type, and queuing discipline. Four different types of networks are chosen as the underlying networks because of their distinct structural features: the scale-free (SF) network, the random network, the ring lattice (RL) network, and the square lattice (SL) network. We use node betweenness centrality, network polarization, and average path length to capture the structural features of the networks.

Based on the observed traffic dynamics in the networks studied, we classify three network states: the traffic free flow state, the moderate congestion state, and the heavy congestion state. Simulation results indicate that in each state, the structural differences among the underlying networks play important roles in the performance of these networks. Through this work, we gain insight into the dependence of network performance on the structural properties of networks, which could help in designing better network structures and better routing protocols.

The paper is organized as follows. Section II presents our network model. Simulation results and analysis are provided in Section III. Section IV concludes the work.

Four different types of networks are chosen as the underlying networks: the SF network, the random network, the SL network, and the RL network. One of their structural differences lies in their distinct nodal degree distributions. The degree of a node is the total number of links connected to it. The SF network is built based on the Barabasi-Albert (BA) model proposed in [
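The four topologies above can be generated in a few lines each. The sketch below is a minimal stdlib-only illustration, not the paper's exact generators: the BA core size, seeds, and the degree-weighted sampling shortcut are assumptions. With the parameters shown, the ER and RL networks have 50 nodes and 100 links and the SL network has 49 nodes and 84 links, matching the sizes used later in the paper; the BA variant yields approximately (at most) 100 links.

```python
import random

def ba_network(n, m, seed=0):
    """Barabasi-Albert style preferential attachment: each new node attaches
    up to m links to existing nodes chosen proportionally to degree."""
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))   # small initial core
    repeated = []              # node list in which each node appears once per link end
    for v in range(m, n):
        for t in set(targets):
            edges.add((min(v, t), max(v, t)))
            repeated += [v, t]
        targets = [rng.choice(repeated) for _ in range(m)]  # degree-weighted picks
    return edges

def er_network(n, m_edges, seed=0):
    """Erdos-Renyi G(n, M): exactly m_edges distinct links chosen at random."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < m_edges:
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    return edges

def ring_lattice(n, k=4):
    """Each node links to its k nearest neighbours on a ring (k even)."""
    return {(min(i, (i + j) % n), max(i, (i + j) % n))
            for i in range(n) for j in range(1, k // 2 + 1)}

def square_lattice(side):
    """side x side grid with nearest-neighbour links."""
    edges = set()
    for r in range(side):
        for c in range(side):
            v = r * side + c
            if c + 1 < side:
                edges.add((v, v + 1))
            if r + 1 < side:
                edges.add((v, v + side))
    return edges

print(len(ba_network(50, 2)), len(er_network(50, 100)),
      len(ring_lattice(50, 4)), len(square_lattice(7)))
```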

In the paper, we use node betweenness centrality, network polarization, and average path length to capture the structural characteristics of the above networks. The node betweenness B_{i} of a node i is defined here as the total number of shortest path routes passing through that node. Nodes with high betweenness values participate in a large number of shortest paths; therefore, initial congestion usually happens at the node with the highest betweenness value. Node betweenness reflects the role of a node in a communication network. Normally, high betweenness nodes also have high degrees. The node betweenness distribution of a communication network is demonstrated through a measure of the polarization, π, of the network [

where B_{max} is the maximum betweenness value and ⟨B⟩ is the average betweenness value. We find that π, as an indication of the node betweenness distribution, suits our work better than other measures (e.g., the standard deviation). A large polarization value tells us that at least one node possesses a much larger betweenness value than most of the other nodes in the network. Therefore, the larger the value of π is, the more heterogeneous the network is. On the other hand, for very homogeneous networks, π is very small; for example, for the RL network, we have π ≈ 0. The average path length of a network is defined as the average of the shortest path lengths over all source-destination pairs. In the next section, we demonstrate how node betweenness, network polarization, and average path length relate to the performance of the underlying networks.
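The ingredients of these measures can be computed directly by tracing shortest-path routes. The sketch below is a minimal illustration on an assumed toy graph (the polarization formula itself is not reproduced here, so only B_{max}, ⟨B⟩, and the average path length are computed); betweenness is counted, as defined above, as the number of shortest-path routes (one per ordered source-destination pair, found by BFS) passing through a node as an intermediate.

```python
from collections import deque

def bfs_prev(adj, src):
    """BFS from src; record one predecessor per node on a shortest route."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return prev

def structural_measures(adj):
    """Betweenness as shortest-path-route counts, plus average path length."""
    nodes = list(adj)
    B = {v: 0 for v in nodes}
    total_hops, pairs = 0, 0
    for s in nodes:
        prev = bfs_prev(adj, s)
        for d in nodes:
            if d == s:
                continue
            path = []                 # reconstruct the route d <- ... <- s
            v = d
            while v is not None:
                path.append(v)
                v = prev[v]
            total_hops += len(path) - 1
            pairs += 1
            for v in path[1:-1]:      # intermediate nodes only
                B[v] += 1
    return B, total_hops / pairs

# Assumed toy graph: hub node 0, with a short chain hanging off node 2
adj = {0: [1, 2, 3], 1: [0], 2: [0, 4], 3: [0], 4: [2]}
B, apl = structural_measures(adj)
B_max = max(B.values())
B_avg = sum(B.values()) / len(B)
print(B, B_max, B_avg, apl)   # hub node 0 has the largest betweenness
```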

The above three parameters capture the structural features of a network from different angles. They are also interrelated. Usually, the more heterogeneous (larger π, or relatively higher B_{max}) a network is, the shorter its average path length is. The reason is that high betweenness nodes (usually hubs) serve as shortcuts connecting node pairs. In addition, the following relationship between shortest path length and node betweenness centrality can easily be found:

Σ_{i} B_{i} = Σ_{i≠j} (D_{ij} − 1), (2)

where D_{ij} stands for the shortest path length (in hops) from node i to node j, and B_{i} stands for the betweenness value of node i. The identity holds because every shortest path route of length D_{ij} passes through exactly D_{ij} − 1 intermediate nodes.
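Equation (2) can be checked numerically on any connected graph. The sketch below uses an assumed test topology (a 20-node ring plus random chords, which is always connected) and accumulates both sides of the identity route by route:

```python
import random
from collections import deque

def bfs(adj, s):
    """Hop distances and one shortest-route predecessor per node, from s."""
    dist, prev = {s: 0}, {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], prev[v] = dist[u] + 1, u
                q.append(v)
    return dist, prev

# Assumed test topology: 20-node ring plus 15 random chords
rng = random.Random(1)
n = 20
adj = {v: set() for v in range(n)}
for v in range(n):
    adj[v].add((v + 1) % n)
    adj[(v + 1) % n].add(v)
for _ in range(15):
    a, b = rng.sample(range(n), 2)
    adj[a].add(b)
    adj[b].add(a)

sum_B = 0   # left-hand side: total betweenness (intermediate-node counts)
sum_D = 0   # right-hand side: sum of (D_ij - 1) over ordered pairs
for s in adj:
    dist, prev = bfs(adj, s)
    for d in adj:
        if d == s:
            continue
        sum_D += dist[d] - 1
        v = prev[d]
        while v != s:   # walk back over the intermediate nodes of the route
            sum_B += 1
            v = prev[v]

print(sum_B, sum_D)   # the two sums coincide
```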

In the underlying networks studied, a fixed shortest path routing strategy is implemented. The length of the shortest path is the minimum hop count between a source-destination pair. Given the network topology, each node calculates the shortest paths to all the other nodes using Dijkstra's algorithm. Then a routing table is constructed at each node. A routing table contains three columns: the destination node, the next node to which a packet is routed toward the destination, and the hop count to the destination.
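Since all links have unit cost here, BFS produces the same shortest-path trees as Dijkstra's algorithm. A minimal sketch of the routing-table construction (the table layout follows the three columns described above; the 4-node example graph is an assumption for illustration):

```python
from collections import deque

def routing_table(adj, src):
    """Build src's routing table: destination -> (next hop, hop count).
    BFS is equivalent to Dijkstra's algorithm when every link has unit cost."""
    dist, prev = {src: 0}, {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], prev[v] = dist[u] + 1, u
                q.append(v)
    table = {}
    for d in dist:
        if d == src:
            continue
        hop = d
        while prev[hop] != src:   # walk back until the node adjacent to src
            hop = prev[hop]
        table[d] = (hop, dist[d])
    return table

# Assumed toy topology: a 4-node cycle 0-1-3-2-0
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
t = routing_table(adj, 0)
print(t)   # with this adjacency ordering, destination 3 is reached via node 1
```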

The model used to govern the dynamic processes of packet generation, storage, and routing is similar to the model by Ohira and Sawatari [

In the paper, we study traffic dynamics and network performance as a function of packet generation rate, network structure, queue type, and queuing discipline. We use throughput, average packet delay, and packet loss as the main performance measures. Throughput is defined as the average number of delivered packets per time slot. Average packet delay is defined as the average time a delivered packet spends in the network. Under heavy traffic load, packet loss is defined as the average number of discarded packets per time slot. Packet loss is caused by traffic overflow at heavily loaded nodes; thus, it is evaluated only when finite queues are implemented.

In the simulation, a discrete time clock k is used. Simulation starts with k = 0; for each elapsed time slot, k is incremented by 1. The performance of an underlying network is measured by its throughput o(k), average packet delay τ(k), and packet loss l(k). The values of o(k), τ(k), and l(k) are each calculated as averages from the start of simulation (k = 0) to time k. We use n(k) to represent the total number of packets within the network at time k.

In the simulation, the underlying networks are generated with approximately the same number of nodes and links. The SF network, the random ER network, and the RL network are all generated with 50 nodes and 100 links. The SL network is generated with 49 nodes and 84 links because of its structural restrictions. Four different cases are considered: infinite queues with FIFO queuing discipline, infinite queues with LIFO queuing discipline, finite queues with FIFO queuing discipline, and finite queues with LIFO queuing discipline. Then, we investigate network performance as a function of λ in the above four cases. Three network states are identified accordingly. We demonstrate how, in different network states, the structure of a network influences its performance.
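A simulation loop of this kind can be sketched compactly. The code below is a toy illustration in the spirit of the Ohira-Sawatari style model described above, not the paper's exact implementation: the generation rule (one packet per node per slot with probability λ, uniform random destination), the per-node service rate of one forwarded packet per slot, and the 10-node ring used in the demo are all assumptions. It covers the four queue cases via the `queue_cap` and `lifo` parameters and reports o(k), τ(k), l(k), and n(k).

```python
import random
from collections import deque

def next_hops(adj, src):
    """Fixed shortest-path next-hop table for src (BFS; all links unit cost)."""
    dist, prev = {src: 0}, {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], prev[v] = dist[u] + 1, u
                q.append(v)
    table = {}
    for d in dist:
        if d == src:
            continue
        hop = d
        while prev[hop] != src:
            hop = prev[hop]
        table[d] = hop
    return table

def simulate(adj, lam, steps, queue_cap=None, lifo=False, seed=0):
    """Discrete-time traffic model: each slot, every node creates a packet
    with probability lam (uniform random destination) and forwards at most
    one queued packet along its fixed shortest-path route."""
    rng = random.Random(seed)
    nodes = list(adj)
    nxt = {v: next_hops(adj, v) for v in nodes}
    queues = {v: deque() for v in nodes}       # items are (dest, birth_slot)
    delivered = dropped = total_delay = 0
    for k in range(steps):
        for v in nodes:                        # packet generation
            if rng.random() < lam:
                dest = rng.choice([u for u in nodes if u != v])
                if queue_cap is not None and len(queues[v]) >= queue_cap:
                    dropped += 1               # finite-queue overflow
                else:
                    queues[v].append((dest, k))
        moves = []                             # forwarding decisions this slot
        for v in nodes:
            if queues[v]:
                dest, birth = queues[v].pop() if lifo else queues[v].popleft()
                moves.append((nxt[v][dest], dest, birth))
        for hop, dest, birth in moves:
            if hop == dest:
                delivered += 1
                total_delay += k + 1 - birth
            elif queue_cap is not None and len(queues[hop]) >= queue_cap:
                dropped += 1
            else:
                queues[hop].append((dest, birth))
    o = delivered / steps                      # throughput
    tau = total_delay / delivered if delivered else float("inf")
    l = dropped / steps                        # packet loss
    n = sum(len(q) for q in queues.values())   # internal traffic at the end
    return o, tau, l, n

# Demo on an assumed 10-node ring: light load (free flow, infinite FIFO queues)
# versus heavy load (finite LIFO queues).  Parameters are illustrative only.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
o, tau, l, n = simulate(ring, lam=0.02, steps=2000)
o2, tau2, l2, n2 = simulate(ring, lam=0.5, steps=500, queue_cap=20, lifo=True)
print(o, round(tau, 2), l, o2, l2)
```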

In the following sections, the total internal traffic n(k) of these networks is calculated and compared as a function of packet generation rate, queue type, and queuing discipline. In our simulation, each data point is averaged over 100 runs.

In this section, by investigating the change of n(k) as a function of packet generation rate λ, we reproduce the network phase transition reported in [1-3]. Simulation results are plotted in Figure 1. A critical point λ_{c} is observed in all these networks, where a network phase transition takes place from traffic free flow to congestion. Comparing Figures 1(b) and (d) with Figures 1(a) and (c), we see that when queue size is finite, the abrupt change of internal traffic at the critical points is greatly smoothed.

When λ < λ_{c}, a network is in steady state, or traffic free flow state. In this state, n(k) remains low and almost unchanged as the incoming traffic λ increases. According to Little's law, for a network of size N, the number of packets created per time slot (given by N × λ) must be equal to the number of packets delivered per time slot; hence the throughput equals N × λ. Moreover, since a packet in free flow spends roughly one time slot per hop, n(k) is proportional to the average path length of the network. For instance, the RL network has the most internal traffic because it has the longest average path length.
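The free-flow reasoning above amounts to a one-line estimate via Little's law: with throughput N × λ and a per-packet sojourn time of about ⟨D⟩ slots, the internal traffic settles near n ≈ N × λ × ⟨D⟩. The average-path-length values below are hypothetical placeholders for illustration, not measured values from the paper:

```python
# Little's-law estimate of free-flow internal traffic: n ≈ N * lam * <D>,
# since throughput is N * lam and a packet spends about <D> slots in transit.
def free_flow_internal_traffic(n_nodes, gen_rate, avg_path):
    return n_nodes * gen_rate * avg_path

N, lam = 50, 0.05
# Hypothetical average path lengths (RL longest, as stated in the text):
for name, avg_path in [("SF", 2.9), ("ER", 3.3), ("SL", 4.0), ("RL", 6.9)]:
    print(name, free_flow_internal_traffic(N, lam, avg_path))
```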

From Figures 1(a) and (c), we observe that, compared with the other networks, the SF network has the lowest value of λ_{c}. The reason lies in its highest B_{max} among all the networks studied. According to the definition of node betweenness centrality, the node with the maximum betweenness value B_{max} handles the heaviest traffic because it participates in the largest number of shortest path routes. With increasing incoming traffic, initial congestion (a quick accumulation of packets) takes place first at the node with B_{max}. Similarly, the SL network has the highest λ_{c} because it has the lowest B_{max}. Thus, the critical point λ_{c} of a network is inversely proportional to its B_{max}.
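The inverse proportionality can be made concrete with a back-of-envelope estimate, under the assumptions that each node forwards at most C = 1 packet per slot and that B_{max} counts routes over ordered source-destination pairs: the forwarding load on the busiest node is about λ × B_{max}/(N − 1) packets per slot, so congestion sets in roughly at λ_{c} ≈ C(N − 1)/B_{max}. The B_{max} values below are hypothetical (ordered so that the SF network has the highest and the SL network the lowest, as in the text):

```python
# Rough estimate: congestion starts when the busiest node's forwarding load
# lam * B_max / (N - 1) reaches its capacity C, i.e. lam_c ≈ C * (N-1) / B_max.
def lam_c_estimate(n_nodes, b_max, capacity=1.0):
    return capacity * (n_nodes - 1) / b_max

# Hypothetical B_max values for 50-node networks (illustration only):
for name, b_max in [("SF", 1200), ("ER", 600), ("RL", 400), ("SL", 300)]:
    print(name, round(lam_c_estimate(50, b_max), 4))
```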

When λ > λ_{c}, the networks enter the congestion state, where n(k) starts increasing quickly with λ in the infinite queue case (shown in Figures 1(a) and (c)). In the finite queue case, however, we observe from Figures 1(b) and (d) that the change of n(k) is greatly smoothed. In particular, the curve representing the change of n(k) in the SF network is the flattest among the four networks. The reason is that the few high betweenness nodes in the SF network are quickly congested; because the queue size is finite, the huge amount of traffic that passes through those nodes has to be discarded, so n(k) does not change much as λ increases. In contrast, while the incoming traffic is not yet very heavy, packets in the RL network start to accumulate at almost all the nodes because of its homogeneous structure, which leads to a relatively quick increase in its internal traffic n(k).

This section investigates network throughput as a function of packet generation rate, network structure, queue type, and queuing discipline. Simulation results are plotted in Figures 2 and 3, where inf. stands for the infinite queue type and fin. stands for the finite queue type.

From Figures 2 and 3, we observe that in traffic free flow state, network throughput increases linearly with λ. When λ increases beyond λ_{c}, the increase in network throughput becomes slower because packets start to accumulate in the networks; we say that a network is in the moderate congestion state. With further increase in λ, throughput falls off and the networks eventually enter the heavy congestion state.

When a network enters the moderate congestion state, at least one node is congested; as discussed above, congestion first takes place at the node with B_{max}. Compared with the others, the performance of the SF network is the worst. The reason lies in its most heterogeneous structure (largest π). When the SF network is in moderate congestion state, a huge number of packets quickly accumulates at one or several nodes of extremely high betweenness values while many other nodes are idle (or do not have enough packets to send). A similar phenomenon is observed in the random ER network, but the random ER network performs much better than the SF network because of its much smaller polarization value π. By the same reasoning, we find that during moderate congestion state, both the RL network and the SL network achieve slightly higher throughput than the other two because of their lower polarization values π. However, from Figures 2 and 3, we observe that both the RL network and the SL network have a much shorter moderate congestion duration before entering the heavy congestion state.

We find that, even though congestion happens at only a few nodes in moderate congestion state, network throughput depends heavily on the traffic load distribution. The smaller the network polarization, the more homogeneous a network is (in terms of node betweenness distribution), the more balanced the traffic load distribution, and therefore the better the network performs. For the RL network and the SL network, their almost uniform node betweenness distribution results in a more balanced traffic load distribution among all the nodes, so that many packets are delivered successfully. Therefore, we may say that in moderate congestion state, when traffic is not yet very heavy, network throughput strongly relates to network polarization.

When λ increases beyond a specific value (this value is different for different networks), the networks enter the heavy congestion state, in which more nodes are congested. We observe that the smaller the network polarization is, the faster the network enters the heavy congestion state (shown in Figures 2 and 3). For the SF network and the random ER network, because of their heterogeneous structure (large π), even though most traffic is jammed at more nodes of high betweenness values, a small amount of traffic bypassing those congested nodes can still be delivered successfully. Compared with the SF network, the performance of the random ER network is much better because the random ER network is relatively less heterogeneous (relatively smaller π). The structures of the RL network and the SL network are more homogeneous. However, when the incoming traffic becomes very heavy, their very long average path lengths cause a huge amount of internal traffic. In addition, since their node betweenness distribution is almost uniform and their average betweenness value is high, almost all the nodes are congested (few packets can be delivered successfully). Compared with the RL network, the SL network performs better because of its relatively shorter average path length and lower betweenness values. Therefore, in heavy congestion state, average path length, node betweenness, and node betweenness distribution all play important roles in network throughput.

Average packet delay as a function of packet generation rate, network structure, queue type, and queuing discipline is investigated in this section. Simulation results are plotted in

In traffic free flow state, we know that all the networks perform the same in terms of throughput (throughput increases linearly with λ), but this is not so in terms of average packet delay. In traffic free flow state, from Little's law we obtain τ(k) = n(k)/(N × λ). Since n(k) depends on the average path length of a network, the average packet delay τ(k) also depends on the average path length of the network. This is verified through our simulation: for instance, the SF network displays the lowest τ(k) because it has the shortest average path length. Therefore, in traffic free flow state, the average path length plays the major role in the average packet delay of the networks.

When the networks enter the moderate congestion state, queues build up and the average packet delay grows quickly with λ. When the SF network and the ER network are in moderate congestion state but the SL network and the RL network are already in heavy congestion state, the average delay of the SL network and the RL network is higher because of their longer average path lengths.

The analysis made in the above sections is also verified by our observation of the changes in queue length (the total number of packets in a queue) through simulation. In traffic free flow state (we choose λ = 0.05), most queues in all the networks are almost empty. In moderate congestion state (we choose λ = 0.13), most queues in the RL network contain several packets, a few queues contain several tens of packets, and the length of one queue exceeds one hundred packets. It is similar for the SL network. Most queues in the random ER network are almost empty, but the queues at a few nodes of high betweenness values contain hundreds of packets. Similar to the random ER network, most queues in the SF network are almost empty, but two queues at two nodes of extremely high betweenness values each contain thousands of packets. In heavy congestion state (a different λ is chosen for each network), the whole RL network is congested: most queues contain several tens of packets, and a few queues contain even hundreds of packets. It is similar for the SL network. For the random ER network and the SF network, even though more nodes of high betweenness values are heavily congested, about half of the queues are still almost empty. Interestingly, we find that no matter what the structure of an underlying network is, congestion always begins when a large number of packets start to accumulate at a few nodes.

This section investigates packet loss as a function of packet generation rate, network structure, queue type, and queuing discipline. Simulation results are plotted in

We have investigated how internal traffic, throughput, average packet delay, and packet loss change as a function of packet generation rate, network structure, queue type, and queuing discipline. Networks of various structures have been chosen as the underlying networks. Based on network performance, three network states have been classified: the traffic free flow state, the moderate congestion state, and the heavy congestion state. Under fixed shortest path routing, we have found that node betweenness centrality, network polarization, and average path length all play important roles in different states of the underlying networks. In traffic free flow state, average path length plays the major role; it directly affects average packet delay. In moderate congestion state and heavy congestion state, both average path length and node betweenness distribution play important roles in network performance. Our work could help in designing better network structures and better routing protocols.