
AWPP: A New Scheme for Wireless Access Control Proportional to Traffic Priority and Rate

Abstract

Cutting-edge wireless networking approaches are required to efficiently differentiate traffic and handle it according to its special characteristics. The current Medium Access Control (MAC) scheme expected to be widely supported by well-known networking vendors comes from the IEEE 802.11e workgroup. The standardized solution is the Hybrid Coordination Function (HCF), which includes the mandatory Enhanced Distributed Channel Access (EDCA) protocol and the optional HCF Controlled Channel Access (HCCA) protocol. These two protocols greatly differ in nature, and both have significant limitations. The objective of this work is the development of a high-performance MAC scheme for wireless networks, capable of providing predictable Quality of Service (QoS) via an efficient traffic differentiation algorithm that operates in proportion to the traffic priority and generation rate. The proposed Adaptive Weighted and Prioritized Polling (AWPP) protocol is analyzed, and its superior deterministic operation is revealed.

1. Introduction

There is no doubt that the current trend in the telecommunications market is the extensive adoption of wireless networking solutions. It is expected that in the coming years all types of wireless networks will form a significant part of the overall networking infrastructure. In addition to this tendency, the nature of network applications is changing, requiring considerably more resources. In particular, the multimedia traffic load is greatly increasing; thus, efficiently serving multiple demanding streams becomes challenging. Furthermore, modern users expect to experience high-quality communications independently of the flows' nature or the network type.

The effort to provide qualitative services for all kinds of traffic to wireless network users has lately created a large research area. The barriers to overcome are significant: the available bandwidth is limited due to the nature of signal transmission and legal restrictions, the wireless links are unreliable with increased bit error rates, the communication range varies and affects the transmission rate and the link quality, and user mobility raises major issues. A clear-cut solution at the physical layer would be the maximization of the bit rate in conjunction with the minimization of transmission errors. There has definitely been great progress towards this objective with the introduction of modern techniques and standards (e.g., the IEEE 802.11n standard [1] proposed for wireless local area networks, with achievable data rates around 200 Mbps). However, the increasing requirements for total QoS support necessitate aggregate approaches. Specifically, the access control of the shared wireless medium plays a crucial role in the final quality of the provided services.

The most well-known current scheme providing a QoS-supportive MAC for WLANs (Wireless Local Area Networks) is HCF [2]. The latter comprises a distributed protocol known as EDCA and an optional centralized resource reservation protocol called HCCA. EDCA is capable of differentiating traffic; however, it suffers from low channel utilization, which leads to limited performance. On the other hand, HCCA is able to guarantee QoS to constant bit rate traffic streams, but it demands predefined requests for resources and considers no priorities.

Recently, intensive research activity has been observed in the field of optimizing QoS provision in wireless networks through medium access control. A significant number of proposals are oriented towards the improvement of existing well-known standards (such as IEEE 802.11e), trying to enhance the overall performance while retaining compatibility to a great degree [3–8]. On the other hand, some new schemes have lately been introduced, which attempt to maximize the network efficiency regarding QoS support [9–13]. A survey of MAC protocols for multimedia traffic in wireless networks that laid the basis for the modern schemes is presented in [14].

This paper presents a novel resource distribution mechanism for centralized wireless local area networks, which does not require predefined resource reservation and is capable of providing predictable QoS to traffic flows of different types. The proposed AWPP protocol employs the frame structure and the basic polling scheme that were introduced with the high-performance Priority Oriented Adaptive Polling (POAP) protocol [15]. Moreover, AWPP introduces a deterministic traffic differentiation technique that operates in proportion to the buffered packets' priorities and the traffic generation rate. The main idea of the presented protocol is to efficiently share the scarce available bandwidth according to well-defined QoS principles. Specifically, the key objective is to assign transmission opportunities in absolute accordance with the weighted traffic priority and the packet arrival rate of each individual flow. In this manner, multimedia streams are effectively supported, while resource allocation and network behavior can be predicted and configured based on the characteristics of the served traffic.

This paper is organized into six sections. In Section 2, the EDCA, HCCA, and POAP protocols, which are used as reference points in this work, are discussed. Section 3 thoroughly presents the proposed AWPP protocol. In Section 4, an analytical approach to the AWPP operation is provided. The developed simulation scenario and the comparison results are presented and commented on in Section 5. Finally, the conclusions can be found in Section 6.

2. Related Work

The presentation of the AWPP protocol adopts as reference points the well-known EDCA and HCCA protocols, which are the parts of the dominant IEEE 802.11e standard, as well as the very effective POAP protocol, which sets the basic structure for AWPP. These three protocols are briefly described in the current section.

2.1. The EDCA Protocol

The mandatory MAC protocol of the IEEE 802.11e standard is EDCA. It is actually a QoS-supportive enhanced version of the legacy IEEE 802.11 MAC protocol, that is, the Distributed Coordination Function (DCF). The operation of EDCA is based on the adoption of packet priorities according to the DiffServ model [16].

EDCA employs the CSMA/CA algorithm. Its operation is based on station contention for medium access using a backoff procedure. The latter involves waiting intervals of different lengths, called Arbitration Interframe Spaces (AIFSs), and backoff intervals of different lengths, drawn from Contention Windows (CWs) of different sizes, according to the priority of the corresponding packet buffer, called an Access Category (AC). These different interval lengths impose different access probabilities for the traffic packets based on their priorities. This way, traffic can be differentiated and QoS can be supported. Additionally, EDCA implements a collision avoidance technique using a two-way handshake, called RTS/CTS (Request To Send/Clear To Send). This technique mitigates, to some degree, the serious hidden station problem.
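
To make the per-AC differentiation concrete, the following minimal sketch computes the AIFS waiting time and a sample backoff for each Access Category. The parameter values correspond to the commonly cited default EDCA set for an ERP (802.11g-class) physical layer and are given here only as an assumption for illustration; the structure and names are ours, not part of the standard text or of this paper.

```cpp
// Illustrative sketch of EDCA per-AC differentiation (assumed default
// parameters for an ERP/802.11g PHY: aSlotTime = 9 us, SIFS = 10 us).
#include <cstdio>
#include <random>

struct AccessCategory {
    const char* name;
    int aifsn;   // number of slots added to SIFS
    int cwMin;   // minimum contention window
    int cwMax;   // maximum contention window
};

int main() {
    const double slotUs = 9.0, sifsUs = 10.0;
    // Default EDCA parameter set (aCWmin = 15, aCWmax = 1023).
    AccessCategory acs[4] = {
        {"AC_BK (background)", 7, 15, 1023},
        {"AC_BE (best effort)", 3, 15, 1023},
        {"AC_VI (video)",       2,  7,   15},
        {"AC_VO (voice)",       2,  3,    7},
    };

    std::mt19937 rng(42);
    for (const auto& ac : acs) {
        double aifsUs = sifsUs + ac.aifsn * slotUs;
        // Initial backoff: uniform draw in [0, CWmin] slots.
        std::uniform_int_distribution<int> draw(0, ac.cwMin);
        double backoffUs = draw(rng) * slotUs;
        std::printf("%-22s AIFS = %5.1f us, sample backoff = %5.1f us\n",
                    ac.name, aifsUs, backoffUs);
    }
    return 0;
}
```

Higher-priority ACs thus wait less before contending and draw shorter backoffs on average, which is exactly the probabilistic differentiation described above.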

The operation of EDCA exhibits significant deficiencies regarding its QoS capabilities. To be more specific, the use of backoff intervals leads to a waste of resources, while the hidden station problem, which is still present despite the adoption of the RTS/CTS mechanism, increases the collision rate, thus decreasing the overall performance. Moreover, QoS support becomes problematic due to the exponential backoff procedure; specifically, it is inefficient to penalize the already delayed collided packets with even longer waiting times. Furthermore, EDCA has been shown to be unable to share the available bandwidth fairly [17]. The reasons for the lack of efficiency of EDCA are described in [18]. In conclusion, EDCA can certainly differentiate traffic and hence provide some QoS, but it exhibits great performance limitations.

2.2. The HCCA Protocol

The optional part of the IEEE 802.11e HCF scheme is the HCCA protocol. This is a centralized protocol which uses the so-called Hybrid Coordinator (HC) to perform medium access control. The HC is considered by the standard to be collocated with the Access Point (AP).

The HCCA resource reservation mechanism defines that every Traffic Stream (TS) communicates its Traffic Specifications (TSPECs) to the AP. The TSPECs include the MAC Service Data Unit (MSDU) size and the maximum Required Service Interval (RSI). The standardized scheduler first calculates the minimum of all the maximum RSIs and then selects as the Service Interval (SI) the highest submultiple of the beacon interval duration that is less than this minimum.

The AP polls the stations in order to assign Transmission Opportunities (TXOPs). In order to calculate the TXOP duration, the scheduler estimates the mean number of packets ($N_{i,j}$) generated in the TS buffer ($i$) of station ($j$) during an SI:

$$N_{i,j} = \left\lceil \frac{SI \cdot \rho_{i,j}}{L_{i,j}} \right\rceil \qquad (1)$$

where $\rho_{i,j}$ is the application mean data rate and $L_{i,j}$ is the nominal MSDU size. The TXOP ($TXOP_{i,j}$) is then equal to

$$TXOP_{i,j} = \max\!\left( \frac{N_{i,j} \cdot L_{i,j}}{R} + 2\,SIFS + t_{ACK},\ \frac{M}{R} + 2\,SIFS + t_{ACK} \right) \qquad (2)$$

where $R$ is the physical layer bit rate and $M$ is the maximum MSDU size. The interval $2\,SIFS + t_{ACK}$ results from the overhead during a TXOP. Equation (2) ensures that at least one packet with maximum size can be transmitted. The total duration a station is allowed to transmit equals the sum of the TXOPs assigned to its TSs, which for station $j$ equals

$$TXOP_{j} = \sum_{i=1}^{n_{j}} TXOP_{i,j} \qquad (3)$$

where $n_{j}$ is the number of TSs in station $j$. A new TS can be admitted only when there are enough available resources to fully serve it. The fraction of total transmission time allocated to station $j$ is $TXOP_{j}/SI$. If there are $k$ stations that have already been given permission to transmit, then the algorithm checks whether the new request for $TXOP_{k+1}$ can keep the fraction of time allocated for TXOPs lower than the maximum fraction of time that can be used by HCCA:

$$\frac{TXOP_{k+1}}{SI} + \sum_{j=1}^{k} \frac{TXOP_{j}}{SI} \le \frac{T_{HCCA}}{T} \qquad (4)$$

where $T_{HCCA}$ is the maximum duration of HCCA in a beacon interval ($T$), that is, a superframe.
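
As a concrete illustration of the reconstructed scheduler formulas (1)–(4), the following sketch computes the per-TS TXOP of one station and performs the admission check. All numeric parameters (SI, beacon interval, rates, MSDU sizes, overhead, already allocated fraction) are illustrative assumptions, not values taken from the standard or from the scenario studied later in this paper.

```cpp
// Sketch of the HCCA reference scheduler computations of (1)-(4).
// All time values in seconds, sizes in bits, rates in bit/s (assumed inputs).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct TrafficStream {
    double meanRate;     // rho: application mean data rate
    double nominalMsdu;  // L: nominal MSDU size
};

int main() {
    const double SI = 0.050;          // selected Service Interval (assumed 50 ms)
    const double R = 36e6;            // physical layer bit rate
    const double M = 18432;           // maximum MSDU size (2304 bytes)
    const double overhead = 2 * 10e-6 + 50e-6;  // 2*SIFS + t_ACK (assumed)
    const double T = 0.100;           // beacon interval (assumed 100 ms)
    const double T_HCCA = 0.080;      // max HCCA duration per beacon interval

    std::vector<TrafficStream> tsOfStation = { {256e3, 8192}, {64e3, 1600} };

    double txopStation = 0.0;
    for (const auto& ts : tsOfStation) {
        double N = std::ceil(SI * ts.meanRate / ts.nominalMsdu);       // (1)
        double txop = std::max(N * ts.nominalMsdu / R + overhead,
                               M / R + overhead);                      // (2)
        txopStation += txop;                                           // (3)
    }

    double alreadyAllocatedFraction = 0.55;  // sum of TXOP_j/SI of admitted stations
    bool admitted = alreadyAllocatedFraction + txopStation / SI
                    <= T_HCCA / T;                                     // (4)
    std::printf("TXOP for new station = %.3f ms, admitted = %s\n",
                txopStation * 1e3, admitted ? "yes" : "no");
    return 0;
}
```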

A basic weakness of the HCCA protocol is related to its nature. HCCA is an optional part of HCF that can guarantee QoS via resource reservation to fixed traffic flows of known resource requirements. The IEEE 802.11e standard actually proposes HCCA for the exclusive handling of multimedia streams. Regarding the resource allocation algorithm, the constant TXOPs lead to limited support for Variable Bit Rate (VBR) traffic. Furthermore, HCCA considers no traffic priorities; it simply handles the QoS requests in time order and denies service to traffic flows that cannot be granted the full requested resources at that moment.

2.3. The POAP Protocol

POAP is a high-performance polling-based protocol that exploits the feedback sent by the stations regarding the amount and the priority of their buffered traffic in order to make QoS-supportive polling decisions. Its polling scheme ensures zero collisions, low overhead, and sufficient network feedback. The proposed AWPP protocol bases its operation on this efficient polling method, which assumes that stations are able to communicate directly when in range; however, the model where the AP acts as a packet forwarder could also be used. According to [2], the IEEE 802.11e access model also provides a Direct Link Protocol (DLP) as an extra feature. The polling scheme is represented in Figure 1 and described below.

Figure 1: The POAP polling scheme adopted by AWPP.

(i) Polling a Station That Has No Packets for Transmission (Figure 1(a)). The AP polls a station and the latter responds that it has no packets for transmission.

(ii) Polling a Station That Has Packets for Transmission (Figure 1(b)). The AP polls a station and the latter replies with a STATUS control packet acting as an acknowledgment. Then, the polled station starts transmitting the data packet directly to the destination station. Upon successful reception, the destination station broadcasts a STATUS packet acting as an acknowledgment. Otherwise, if the reception fails but the station has realized that the specific packet is destined to it, it responds with a STATUS packet acting as a negative acknowledgment. Notice that the DATA packet size is generally considered to be variable, thus not fixed.

(iii) Polling Failure or Feedback Failure (Figure 1(c)). If the polling fails, then the AP has to wait for the maximum polling cycle before polling again, because it must be sure that it will not collide with a possible ongoing transmission. When polling succeeds but the AP then fails to receive any of the following packets, it has to wait for the maximum polling cycle before the new poll, similarly to the polling failure case.

In POAP, the algorithm inside each station that decides which packet to select for transmission computes a buffer selection relative (nonnormalized) probability using the following formula:

$$S_{i} = W_{P} \cdot \hat{P}_{i} + W_{L} \cdot \hat{N}_{i} \qquad (5)$$

where $i$ is the buffer index, $W_{P}$ is a preset weight, $\hat{P}_{i}$ is the normalized buffer priority of buffer $i$, $W_{L}$ is a preset weight, and $\hat{N}_{i}$ is the normalized number of packets contained in buffer $i$. The main idea is that both the buffer priority and the current buffer load affect the chance to transmit a packet from the specific buffer, but the contribution of each one of these two factors is controlled by different weights.

Regarding the polling decision mechanism in POAP, it is based on an introduced statistic, called the priority score, which becomes available to the AP through the broadcast STATUS control packets. The priority score for station $j$ is defined to be equal to

$$PS_{j} = \sum_{i} P_{i} \cdot N_{i} \qquad (6)$$

where $P_{i}$ is the priority of buffer $i$ and $N_{i}$ is the number of packets it carries. Then, the nonnormalized polling probability of station $j$ is calculated as follows:

$$Q_{j} = \widehat{PS}_{j} + W_{T} \cdot \hat{T}_{j} \qquad (7)$$

where $\widehat{PS}_{j}$ is the normalized priority score of station $j$, $W_{T}$ is a preset weight, and $\hat{T}_{j}$ is the normalized time elapsed since the last poll of station $j$. The factor $\hat{T}_{j}$ is employed in order to ensure some fairness among the stations regarding medium access. The AP is further favored, because of its central role, by multiplying its nonnormalized polling probability with the weight $W_{AP}$.
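
The following sketch illustrates how the reconstructed POAP formulas (5)–(7) combine priority and load additively. The symbol names follow the reconstruction above, and the weight values, normalization references, and example buffer contents are assumptions used only for illustration.

```cpp
// Illustration of the additive POAP selection formulas (5)-(7),
// using assumed weight values and example buffer contents.
#include <cstdio>
#include <vector>

int main() {
    // Example station: buffer priorities (0-7) and buffered packet counts.
    std::vector<int> priority = {1, 5, 7};
    std::vector<int> packets  = {20, 4, 2};
    const double W_P = 0.7, W_L = 0.3;   // assumed preset weights

    // Normalization references (maximum priority and maximum buffer load).
    const double maxPrio = 7.0, maxLoad = 20.0;

    double priorityScore = 0.0;
    for (size_t i = 0; i < priority.size(); ++i) {
        double s = W_P * (priority[i] / maxPrio)
                 + W_L * (packets[i] / maxLoad);            // relative weight (5)
        priorityScore += priority[i] * packets[i];          // priority score (6)
        std::printf("buffer %zu: relative selection weight = %.3f\n", i, s);
    }

    // Polling weight of this station (7): normalized score plus elapsed-time term.
    const double W_T = 0.5;                     // assumed preset weight
    double normScore = priorityScore / 100.0;   // assumed normalization reference
    double normElapsed = 0.4;                   // normalized time since last poll
    double pollingWeight = normScore + W_T * normElapsed;
    std::printf("nonnormalized polling weight = %.3f\n", pollingWeight);
    return 0;
}
```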

POAP has been shown to achieve high performance, exhibiting great medium utilization and providing sufficient QoS support. However, the nature of its algorithmic operation makes it very hard to predict to what degree one traffic flow will be favored in comparison to another, or one station in comparison to another. To be more specific, the decision-making mechanism in POAP mainly depends on a combination of the buffered packet priorities and the current buffered load. Because the buffer load is a fluctuating factor, and because (5) and (7) combine the priority and load coefficients through addition, it is not possible to estimate the ratio of the bandwidth that a traffic flow will be provided with, and the proportional contribution of each coefficient is not finally ensured. For example, if a buffer in a station is expected to carry the same load (which cannot be calculated in advance) as another buffer of higher priority, then we cannot estimate from (5) to what degree the second buffer will be favored in relation to the first one. Thus, it becomes challenging to set the weights to suitable values, a procedure which was eventually carried out in a heuristic manner. At this point, it should be noted that AWPP comes to provide weighted traffic differentiation proportional to traffic priority and rate, allowing the analytical estimation of the network metrics and generally a more deterministic behavior.

3. The AWPP Protocol

3.1. The "Packet to Transmit'' Algorithm

Every station that is granted permission to transmit (through the polling procedure) implements the AWPP method of deciding which packet to send. The packets waiting for transmission are organized into eight buffers that correspond to User Priorities (UPs) according to the DiffServ model. The respective algorithm is designed to be based on the priority of each buffer and its current traffic rate. The central idea is that the network resources should be distributed in proportion to the traffic priority, so that higher-priority traffic is provided with more bandwidth, and in proportion to the currently estimated traffic arrival rate at each buffer, because buffers of rapidly increasing load typically need more resources. A basic design goal is to develop a deterministic and predictable decision-making mechanism based on the above-mentioned concept, which can be configured to provide a different contribution of the priority agent compared to the traffic rate agent, while distributing the bandwidth in a proportional manner. Specifically, it is usually required to extensively favor the high-priority flows regardless of their rate. In fact, a well-known concept is to always serve the highest priority flow first (i.e., the Highest Priority First discipline). However, totally excluding the rest of the traffic flows is not generally acceptable. Thus, according to the basic idea, a flow of priority $p$ should be assigned PF times more bandwidth than a flow of priority $p-1$, assuming of course that they exhibit the same traffic rate, where PF is the introduced priority factor with a default value equal to 2. In case both flows are characterized by the same priority, but the traffic rate of the first one is estimated to be two times higher than that of the second, then the first flow should be allocated two times more resources. Summing up, the proposed packet buffer selection algorithm is presented in Figure 2 and described below. The fundamental component of this mechanism is the Basic Selection Weight (BSW), which for buffer $i$ is considered to be equal to

$$BSW[i] = PF^{\,BP[i]} \cdot ETR[i] \qquad (8)$$

where BP is the Buffer Priority and ETR is the Estimated Traffic Rate that is given by

$$ETR_{new}[i] = MF \cdot ETR_{old}[i] + (1 - MF) \cdot ITR[i] \qquad (9)$$

where MF is the Memory Factor (default 0.5) and ITR is the Instant Traffic Rate (calculated for a default duration of 2 s). The concept in (9) is to try to estimate the relatively long-term arrival rate in a specific buffer, avoiding sharp alternations that can lead to instability in bandwidth distribution. Thus, a system with memory is used, where the new ETR values are partially based on previous ETR values. The buffer selection then takes place according to the Buffer Selection Probabilities (BSPs):

$$BSP[i] = \frac{BSW[i]}{BTI} \qquad (10)$$

where BTI is the introduced Buffered Traffic Indicator. It provides a valuable snapshot of the station's buffers' status. For station $j$, it is equal to

$$BTI_{j} = \sum_{i=0}^{7} BSW_{j}[i] \qquad (11)$$

Finally, the earliest generated packet is chosen from the selected buffer for transmission.
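
A compact sketch of the "packet to transmit" computation, as reconstructed above, is given below. The buffer contents and rates are illustrative values, and the exponential-memory form of (9) is an assumption consistent with the stated Memory Factor.

```cpp
// Sketch of the AWPP packet buffer selection: BSW (8), ETR update (9),
// BSP (10) and BTI (11). Buffer contents and rates are example values.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Buffer {
    int    bp;   // Buffer Priority (UP, 0-7)
    double etr;  // Estimated Traffic Rate (bit/s)
};

int main() {
    const double PF = 2.0;   // priority factor (default)
    const double MF = 0.5;   // memory factor (default)

    std::vector<Buffer> buffers = { {1, 512e3}, {5, 256e3}, {7, 256e3} };
    std::vector<double> instantRate = { 600e3, 256e3, 200e3 };  // ITR over last 2 s

    // (9): memory-based update of the estimated traffic rate.
    for (size_t i = 0; i < buffers.size(); ++i)
        buffers[i].etr = MF * buffers[i].etr + (1.0 - MF) * instantRate[i];

    // (8) and (11): basic selection weights and their sum (BTI).
    std::vector<double> bsw;
    double bti = 0.0;
    for (const auto& b : buffers) {
        double w = std::pow(PF, b.bp) * b.etr;
        bsw.push_back(w);
        bti += w;
    }

    // (10): buffer selection probabilities; pick one buffer accordingly.
    std::vector<double> bsp;
    for (double w : bsw) bsp.push_back(w / bti);

    std::mt19937 rng(7);
    std::discrete_distribution<int> pick(bsp.begin(), bsp.end());
    int chosen = pick(rng);

    for (size_t i = 0; i < bsp.size(); ++i)
        std::printf("buffer %zu (UP %d): BSP = %.4f\n", i, buffers[i].bp, bsp[i]);
    std::printf("selected buffer: %d (its oldest packet is transmitted)\n", chosen);
    return 0;
}
```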

Figure 2: The AWPP packet buffer selection algorithm.

3.2. The "Station to Poll'' Algorithm

The AP implements an algorithm responsible for deciding each time which station to poll, on a QoS provision basis, similarly to the "packet to transmit" algorithm. To be more specific, the objective here is to proportionally favor stations that carry high-priority buffered traffic and exhibit a high traffic rate, according to the same concept that was described in the previous subsection. Thus, the polling decision should mainly depend on the stations' BTI values. Furthermore, since the AP itself is considered to participate in the polling contention, it should probably be served with higher medium access chances, since it plays a central role in the network by connecting it externally. For this reason, the AP_ExtraPriority parameter (default value 1) is introduced. Specifically, when the AP calculates its buffers' BSW values, which then give the AP's BTI value, it adds the AP_ExtraPriority to each buffer's priority, which means that the exponent in (8) is considered to be equal to BP[i]+AP_ExtraPriority for the AP's packet buffers.

Another factor that must be taken into account in this mechanism is the assurance of fairness regarding the stations' chances to gain medium access. Total fairness, that is, equal medium access probabilities among stations, is neither possible nor desired, since stations may carry traffic flows of different priorities and rates and thus have different QoS requirements. However, an unacceptable case of unfairness is the domination of the channel by a single station. The AWPP protocol handles this problem by lowering the polling chance of a station that, according to the algorithm, exhibits a probability of gaining medium access significantly higher than the rest of the stations, while the time that has elapsed since its last poll is significantly lower than that of the rest of the stations. Summing up, the respective AWPP algorithm is presented in Figure 3 and described below.

Figure 3: The AWPP station selection algorithm.

According to the specific algorithm, every station is characterized by the introduced Station Selection Weight (SSW), which is given for station $j$ by

$$SSW_{j} = BTI_{j} + 1 \qquad (12)$$

where the addition of 1 ensures that there will be no null polling probabilities, so that all stations always have a chance to be polled. In order to provide fairness according to the previously mentioned concept, in each cycle the algorithm initially identifies the stations that carry the highest SSW and the lowest TEP (Time Elapsed since last Poll) values. If this is the same station and it has more than $N$ times higher SSW than the station that carries the second maximum SSW value and more than $N$ times lower TEP than the station that carries the second minimum TEP value (where $N$ is the number of the participating wireless stations and $N+1$ is the total number of stations including the AP), then its SSW value is lowered to $N$ times the second maximum value (see Figure 3). Finally, station $j$ is given permission to transmit based on its Station Selection Probability (SSP), which equals

$$SSP_{j} = \frac{SSW_{j}}{\sum_{n} SSW_{n}} \qquad (13)$$
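
The sketch below illustrates the reconstructed station selection step, including a simplified version of the fairness adjustment. The BTI and TEP values and the threshold handling are assumptions based on the description above, intended only to show the flow of the computation.

```cpp
// Sketch of the AWPP "station to poll" step: SSW (12), a simplified
// fairness cap, and SSP (13). BTI and TEP values are example inputs.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Per-station Buffered Traffic Indicators and Time Elapsed since last Poll.
    std::vector<double> bti = { 90.0, 4.0, 2.5, 1.0 };
    std::vector<double> tep = {  0.4, 6.0, 8.0, 9.0 };   // in polling cycles
    const double N = 4.0;   // assumed fairness threshold (participating stations)

    // (12): station selection weights.
    std::vector<double> ssw(bti.size());
    for (size_t j = 0; j < bti.size(); ++j) ssw[j] = bti[j] + 1.0;

    // Identify the station with the highest SSW and the one with the lowest TEP.
    size_t maxSsw = std::max_element(ssw.begin(), ssw.end()) - ssw.begin();
    size_t minTep = std::min_element(tep.begin(), tep.end()) - tep.begin();

    if (maxSsw == minTep) {
        // Second-highest SSW and second-lowest TEP for the comparison.
        std::vector<double> sswSorted = ssw, tepSorted = tep;
        std::sort(sswSorted.rbegin(), sswSorted.rend());
        std::sort(tepSorted.begin(), tepSorted.end());
        double secondSsw = sswSorted[1], secondTep = tepSorted[1];
        // Cap the dominant station's weight if it exceeds both thresholds.
        if (ssw[maxSsw] > N * secondSsw && tep[maxSsw] < secondTep / N)
            ssw[maxSsw] = N * secondSsw;
    }

    // (13): station selection probabilities.
    double sum = 0.0;
    for (double w : ssw) sum += w;
    for (size_t j = 0; j < ssw.size(); ++j)
        std::printf("station %zu: SSP = %.3f\n", j, ssw[j] / sum);
    return 0;
}
```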

4. Analytical Approach on the AWPP Operation

This paper presents both an analytical and a simulation approach to the operation of the AWPP protocol. The objective is to prove that the proposed protocol achieves high performance and provides QoS in a proportional manner, as explained in the previous section. For this reason, a network scenario of controlled conditions is considered, which is suitable for both analytical and simulation study. The results have to be representative, clear, and illustrative. Thus, the studied scenario includes three different traffic types of constant rates. The characteristics of the considered Low Priority (LP), Medium Priority (MP), and High Priority (HP) traffic flows are presented in Table 1.

Table 1 Characteristics of the traffic flows.

Notice that in reality the data packet size and the traffic bit rate need not be fixed. However, in this study constant values are used for comparative reasons. The protocol is expected to operate according to the same principles when serving variable bit rate flows, too. In this scenario, there are three different bidirectional traffic flows between the AP and each wireless station. One could assume that the LP flows correspond to web traffic, the MP flows correspond to video traffic, and the HP flows correspond to voice traffic. It should be mentioned that in order to retain traffic symmetry and produce more explanatory results, the AP flows are not favored in this scenario, that is, AP_ExtraPriority for AWPP and $W_{AP}$ for POAP are set to 0 and 1, respectively. Furthermore, the network bit rate was considered to be equal to 36 Mbps, which corresponds to the typical ERP-OFDM 16-QAM mode of the widely used IEEE 802.11g physical layer [19]. The stations are placed at distances of 60 m from each other, leading to an estimated signal propagation delay of 0.2 μs. Lastly, the network observation interval is set to 60 s.

The performance of AWPP in this network can be analytically calculated by computing the portion of the Utilizable Bandwidth (UB) that each traffic type is assigned. Specifically, this approach is based on the calculation of the total BSW values of the offered traffic flows. Then, the BSP values can be computed considering as ETR the total rate of each traffic type. Finally, the portion of UB that is assigned to each traffic type can be derived from the BSPs. Thus, according to the BSW formula presented in (8), the following holds for the three different traffic types (HP, MP, and LP) of this network scenario, assuming $N$ wireless stations:

$$BSW_{X} = PF^{\,BP_{X}} \cdot ETR_{X}, \quad X \in \{HP, MP, LP\}, \qquad BSW_{HP} : BSW_{MP} : BSW_{LP} = 32 : 8 : 1 \qquad (14)$$

According to the "packet-to-transmit" and "station-to-poll" algorithms presented in the previous section, considering that the fairness mechanism is not triggered because of the traffic symmetry which prevents the medium domination, and taking into account that the AP flows are not favored in the studied scenario, the Bandwidth Allowed to be Used (BAU) by each traffic type equals

$$BAU_{X} = \frac{BSW_{X}}{BSW_{HP} + BSW_{MP} + BSW_{LP}} \cdot UB, \quad X \in \{HP, MP, LP\} \qquad (15)$$

It should be mentioned that the BAU value is in fact the upper limit of the respective throughput. Apparently, when BAU is higher than the required bandwidth, the residual bandwidth becomes available to the lower priority traffic. At this point, the proportional distribution of resources also becomes clear. Specifically, (14) and (15) reveal that according to AWPP, the HP traffic deserves 4 times more bandwidth than the MP traffic, since the former's priority is higher by 2, the priority factor equals 2, and they exhibit the same rate, whereas the HP traffic deserves 32 times more bandwidth than the LP traffic, since the former's priority is higher by 6, the priority factor equals 2, and the latter exhibits a 2 times higher rate.
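
As a numeric check of (14) and (15), the following sketch reproduces the 32 : 8 : 1 weighting and the resulting BAU shares. The absolute priority values and rates are assumptions chosen only to match the stated differences (HP priority higher than MP by 2 and than LP by 6, LP rate twice the HP rate); only these differences and ratios affect the result.

```cpp
// Numeric check of the proportional bandwidth shares in (14)-(15).
// Priorities and rates are assumed values consistent with the stated
// differences; only their differences/ratios matter for the result.
#include <cmath>
#include <cstdio>

int main() {
    const double PF = 2.0;
    const double bpHP = 7, bpMP = 5, bpLP = 1;              // assumed UPs
    const double rateHP = 1.0, rateMP = 1.0, rateLP = 2.0;  // relative rates
    const double UB = 33.732e6;                             // utilizable bandwidth

    double bswHP = std::pow(PF, bpHP) * rateHP;
    double bswMP = std::pow(PF, bpMP) * rateMP;
    double bswLP = std::pow(PF, bpLP) * rateLP;
    double total = bswHP + bswMP + bswLP;

    std::printf("BSW ratio HP:MP:LP = %.0f : %.0f : %.0f\n",
                bswHP / bswLP, bswMP / bswLP, bswLP / bswLP);
    std::printf("BAU_HP = %.2f Mbps, BAU_MP = %.2f Mbps, BAU_LP = %.2f Mbps\n",
                bswHP / total * UB / 1e6, bswMP / total * UB / 1e6,
                bswLP / total * UB / 1e6);
    return 0;
}
```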

The calculation of the BAU values requires the estimation of UB. Actually, what is needed is to estimate the network control overhead in order to derive the portion of the total bandwidth that is used for data transmissions. Thus, this analysis is based on the polling scheme presented in Section 2.3. It should be clarified that the objective of this study is to prove that AWPP behaves according to the fundamental design principles already stated (mainly in Section 3). For this reason, the examined scenario assumes that the network links are generally in good state, so when calculating UB, only the case of successfully polling a loaded station is considered. As the matching of the analytical and the simulation results will show, this assumption causes no computational errors when the total load is low, because there is enough available bandwidth for serving all the flows anyway, while in high-load conditions there are still no errors, because the polling of an "empty" station is unlikely and there are no extensive link failures. Taking also into account that in the examined scenario half of the flows originate at the AP, which does not require physical polling to receive transmission permission, the following formula finally results:

$$UB = \frac{T_{DATA}}{\tfrac{1}{2}\,(T_{POLL} + T_{STATUS} + T_{DATA} + T_{STATUS} + 4\,t_{prop}) + \tfrac{1}{2}\,(T_{DATA} + T_{STATUS} + 2\,t_{prop})} \cdot Total\ Bandwidth \qquad (16)$$

where $T_{POLL}$, $T_{STATUS}$, and $T_{DATA}$ are the transmission times of the POLL, STATUS, and DATA packets at the network bit rate and $t_{prop}$ is the signal propagation delay.

Since the POLL packet total size is equal to 272 bits, the DATA packet total size is equal to 10192 bits, the STATUS packet total size is equal to 352 bits, and the Total Bandwidth is equal to 36 Mbps, (16) results in a UB equal to 33.732 Mbps. Finally, the traffic throughput is equal to the traffic load when the traffic load is lower than the BAU value, while in case the traffic load is higher than BAU, the traffic throughput equals BAU, as already explained.
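
The reconstructed formula (16) can be verified numerically with the values given above (packet sizes, 36 Mbps bit rate, 0.2 μs propagation delay); the short sketch below reproduces the reported UB of roughly 33.73 Mbps.

```cpp
// Numeric evaluation of the reconstructed UB formula (16) using the
// packet sizes and propagation delay stated in the text.
#include <cstdio>

int main() {
    const double totalBandwidth = 36e6;   // bit/s
    const double tProp = 0.2e-6;          // s, propagation delay (60 m)

    // Transmission times of the control and data packets.
    const double tPoll   = 272.0   / totalBandwidth;
    const double tStatus = 352.0   / totalBandwidth;
    const double tData   = 10192.0 / totalBandwidth;

    // Half of the flows need a full polling exchange (POLL + STATUS + DATA + STATUS),
    // the other half originate at the AP (DATA + STATUS only).
    double polledCycle = tPoll + tStatus + tData + tStatus + 4 * tProp;
    double apCycle     = tData + tStatus + 2 * tProp;
    double avgCycle    = 0.5 * polledCycle + 0.5 * apCycle;

    double UB = tData / avgCycle * totalBandwidth;
    std::printf("UB = %.3f Mbps\n", UB / 1e6);   // approximately 33.73 Mbps
    return 0;
}
```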

After calculating the throughput of each traffic type, we can estimate its average delay based on Little's law [20], which states that the average system queue size equals the jobs' arrival rate multiplied by the average waiting time. In the network environment, the average system queue size corresponds to the Average Quantity of Buffered Traffic (AQBT), the jobs' arrival rate corresponds to the total traffic generation rate ($R_{gen}$), and the average waiting time corresponds to the average delay ($D$), which means that the following holds:

$$AQBT = R_{gen} \cdot D \qquad (17)$$

Thus, in order to get an indication of the delay, we first need to estimate AQBT as follows:

$$AQBT = \frac{1}{T_{obs}} \int_{0}^{T_{obs}} Q(t)\, dt = \frac{\max\!\big(0,\ R_{gen} - R_{thr}\big) \cdot T_{obs}}{2} \qquad (18)$$

where $T_{obs}$ is the observation interval, $Q(t)$ is the buffered traffic at time $t$, and $R_{thr}$ is the traffic throughput (in terms of bit rate). At this point, it should be noted that in (18) the traffic generation rate is considered to be constant, which is true for the examined scenario, and the traffic throughput is also assumed constant, which does not strictly hold. Specifically, the throughput definitely varies in time; however, the operation of the AWPP protocol and the nature of the network scenario allow the use of the average throughput instead, which provides a very good approximation. For example, when the topology consists of 10 wireless stations, the presented analysis results in an AQBT equal to 0 for the HP traffic flows. However, the simulation reveals that there is of course high-priority traffic buffered throughout the simulation. In Figure 4, the amount of the HP buffered traffic in the AP is depicted. Nevertheless, this variation is low and, as will be shown, the analytical results follow the simulation results very closely. Note that if AQBT in (17) is set according to the buffer size measured during simulation and depicted in Figure 4, then the resulting average delay exactly matches the average delay measured in simulation. This means that Little's law and the simulation engine agree. Furthermore, it should be mentioned that the packet buffers are considered to have adequate capacity so that they never overflow. This way, no packets are dropped, so Little's law holds and the average delay statistic is completely indicative of the protocol efficiency.
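
Under the constant-rate assumptions of (17) and (18), the average delay of a saturated traffic type can be sketched as follows; the rate values are placeholders, since the actual per-type loads of Table 1 are not reproduced here.

```cpp
// Delay estimation for one traffic type via (17)-(18) under the
// constant generation-rate / constant throughput assumptions.
#include <algorithm>
#include <cstdio>

int main() {
    const double tObs = 60.0;      // observation interval (s)
    const double rGen = 8e6;       // total generation rate of the type (bit/s), assumed
    const double rThr = 6.58e6;    // throughput bounded by BAU (bit/s), assumed

    // (18): average buffered traffic when the buffer grows linearly.
    double aqbt = std::max(0.0, rGen - rThr) * tObs / 2.0;

    // (17): Little's law, AQBT = rGen * D  =>  D = AQBT / rGen.
    double delay = aqbt / rGen;
    std::printf("AQBT = %.2f Mbit, average delay = %.2f s\n", aqbt / 1e6, delay);
    return 0;
}
```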

Figure 4: Buffered HP traffic in the AP.

The presented network scenario was simulated for variable number of stations resulting in variable offered load. The analytical and the simulation results regarding the ratio of traffic throughput to traffic load and the average delay in AWPP are depicted in Figures 5 and 6, respectively. As it can be seen, the analytical and the simulation results coincide to a great degree. These figures reveal that at low load conditions all flows are fully served, whereas under saturation the LP traffic first and then the MP traffic get limited resources so that the higher priority traffic can be sufficiently served.

Figure 5: Throughput/Load versus number of Wireless Stations: Analytical and simulation results in AWPP.

Figure 6: Delay versus number of Wireless Stations: Analytical and simulation results in AWPP.

5. Simulation Results

This section presents the simulation results regarding the performance of the AWPP protocol compared to POAP, EDCA, and HCCA. The simulated network scenario was described in the previous section. The four protocols were simulated on the same specialized event-based simulation framework developed in C++, adapted to the operational characteristics of each one. The matching of the analytical and simulation results presented in the previous section validates both the analytical model and the simulator. The condition of any wireless link was modeled using a finite-state machine with three states (good, bad, and hidden) based on the work of Zorzi et al. [21]. Note that the relative performance of the four protocols is not affected by the channel status, because in good channel conditions the performance of all protocols improves, whereas in bad conditions all protocols perform worse. Hence, the comparative results are actually the same and conclusions can be drawn in either case. The default parameter values for the four protocols were used. The simulation results presented in this section are produced by a statistical analysis based on the "sequential simulation" method [22].

The HP traffic throughput as a function of the HP traffic load is plotted in Figure 7, while Figure 8 presents the HP traffic average delay versus the HP traffic load. In both graphs, it becomes obvious that under low and medium load conditions all protocols manage to fully support the highest priority flows, whereas under high load conditions only the proposed AWPP protocol succeeds in performing this task while keeping delay at impressively low levels. Examining the high-priority traffic throughput results in more detail reveals that EDCA starts exhibiting degraded performance at 10 Mbps load, whereas POAP degrades at about 12 Mbps load. On the other hand, we observe a linear relation between throughput and load for AWPP, where all generated high-priority traffic is always served. Similar conclusions are drawn from the high priority traffic delay results, where it is evident that EDCA suffers from the highest delays for almost all values of load, while AWPP ensures minimum packet delays even for 20 Mbps load. At this point, it should be explained that HCCA behaves differently from the other three protocols, because of its different nature. Specifically, HCCA is based on resource reservation and does not allow the admission of any new flows if it cannot reserve full resources for them. Thus, in HCCA the traffic load appears to be limited, since no new flows start when there is not sufficient available bandwidth to allow admission. As a result, HCCA steadily serves the offered traffic up to a point and beyond that does not serve it at all. Furthermore, HCCA does not consider traffic priority; thus, it handles the different types of traffic similarly (of course, it takes into account the traffic specifications). The fact is that HCCA is a special purpose protocol designed to serve real-time multimedia streams, and its inelastic behavior is not suitable for a general purpose WLAN access mechanism.

Figure 7: Throughput versus Load: High Priority traffic in AWPP-POAP-EDCA-HCCA.

Figure 8: Delay versus Load: High Priority traffic in AWPP-POAP-EDCA-HCCA.

Figure 9 shows the MP traffic throughput as a function of the MP traffic load, while the MP traffic average delay versus the MP traffic load is represented in Figure 10. It can be seen that, regarding MP traffic, performance degradation starts at a significantly lower load in POAP than in AWPP. HCCA exhibits a steady behavior up to a limited load, as already explained. Lastly, the EDCA inefficiency becomes obvious in both network statistics. More specifically, the performance of the presented AWPP protocol in serving medium priority traffic is comparatively close only to POAP, since the other protocols perform significantly worse, especially in highly loaded scenarios. The respective throughput and delay curves reveal that POAP seems to become saturated when the load exceeds 10 Mbps, whereas AWPP shows descending performance for load values over 16 Mbps.

Figure 9: Throughput versus Load: Medium Priority traffic in AWPP-POAP-EDCA-HCCA.

Figure 10: Delay versus Load: Medium Priority traffic in AWPP-POAP-EDCA-HCCA.

Figure 11 depicts the LP traffic throughput as a function of the LP traffic load and Figure 12 presents the LP traffic average delay versus the LP traffic load. It becomes clear that the LP traffic starts receiving significantly limited resources when they are necessary for the sufficient service of the higher priority traffic, according to the operation concept of AWPP and POAP. The latter seems to perform better when handling the LP traffic flows under high load conditions; however, it has been shown that it achieves lower performance when serving higher priority traffic, which is of course of greater importance. Specifically, for low-priority traffic load values over 24 Mbps, the AWPP traffic differentiation mechanism allocates a greater percentage of the scarce available bandwidth to the higher-priority traffic than POAP does. As already shown by the performance graphs, the result is that AWPP serves higher-priority traffic more efficiently, which is the main objective, whereas POAP performs better in serving LP traffic. With regard to the other two protocols, HCCA exhibits the same known behavior and EDCA performs consistently poorly when handling LP traffic in all load conditions.

Figure 11: Throughput versus Load: Low Priority traffic in AWPP-POAP-EDCA-HCCA.

Figure 12: Delay versus Load: Low Priority traffic in AWPP-POAP-EDCA-HCCA.

Lastly, an overview of the overall network performance of the introduced AWPP protocol in comparison to the other three examined protocols is provided in Figure 13. This is a graph of the total average delay versus the total load as the number of wireless stations increases. It becomes obvious that AWPP always performs superiorly, achieving minimum delay and maximum throughput. POAP also exhibits high network performance and a similar maximum throughput; however, it suffers from significant delays under highly saturated conditions. In more detail, both AWPP and POAP succeed in reaching a total throughput of about 34 Mbps, with the difference that the highest average delay for AWPP is almost 1/3 of the respective POAP value. This is clearly an indication of more efficient QoS support. Regarding HCCA, it has already been explained that, because of its nature, it performs stably under unsaturated conditions. Finally, the comparative inefficiency of EDCA is apparent in all cases.

Figure 13: Throughput versus Delay: Total traffic in AWPP-POAP-EDCA-HCCA.

6. Conclusion

This work proposed the Adaptive Weighted and Prioritized Polling (AWPP) protocol, capable of efficiently supporting total QoS in wireless networks. The presented analytical approach has proven that AWPP succeeds in providing deterministic traffic differentiation proportional to traffic priority and rate. The simulation results, which coincide with the analytical results, have shown that AWPP serves the different types of traffic more efficiently than the effective POAP protocol, the dominant EDCA protocol, and the specialized HCCA protocol. AWPP is also shown to achieve superior total network performance. As future work, we intend to study extended network scenarios that involve traffic flows characterized by limited duration and bursty nature. Moreover, the special features of the introduced scheme could be adapted into the medium access control mechanism of the emerging wireless broadband networks. Specifically, a possible integration of the AWPP resource managing engine into the respective module of the IEEE 802.16 wireless broadband network will be examined.

References

  1. IEEE 802.11n/D11.0 Unapproved Draft Standard for Information Technology—Telecommunications and information exchange between systems-Local and metropolitan area networks-Specific requirements—part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications Amendment: Enhancements for Higher Throughput, 2009

  2. IEEE 802.11e WG IEEE Standard for Information Technology—Telecommunications and Information Exchange Between Systems—LAN/MAN Specific Requirements—part 11 Wireless Medium Access Control and Physical Layer specifications, Amendment 8: Medium Access Control Quality of Service Enhancements, 2005

  3. Hamidian A, Körner U: An enhancement to the IEEE 802.11e EDCA providing QoS guarantees. Telecommunication Systems 2006, 31(2-3):195-212. 10.1007/s11235-006-6520-z

  4. Ge Y, Hou JC, Choi S: An analytic study of tuning systems parameters in IEEE 802.11e enhanced distributed channel access. Computer Networks 2007, 51(8):1955-1980. 10.1016/j.comnet.2006.07.018

  5. Shankar S, van der Schaar M: Performance analysis of video transmission over IEEE 802.11a/e WLANs. IEEE Transactions on Vehicular Technology 2007, 56(4):2346-2362.

  6. Boggia G, Camarda P, Grieco LA, Mascolo S: Feedback-based control for providing real-time services with the 802.11e MAC. IEEE/ACM Transactions on Networking 2007, 15(2):323-333.

  7. Fallah YP, Alnuweiri H: A controlled-access scheduling mechanism for QoS provisioning in IEEE 802.11e wireless LANs. Proceedings of the 1st ACM International Workshop on Quality of Service and Security in Wireless and Mobile Networks, October 2005 120-129.

  8. Chou CT, Shankar N S, Shin KG: Achieving per-stream QoS with distributed airtime allocation and admission control in IEEE 802.11e wireless LANs. Proceedings of the IEEE INFOCOM, March 2005 3: 1584-1595.

  9. Lagkas TD, Papadimitriou GI, Nicopolitidis P, Pomportsis AS: Priority-oriented adaptive control with QoS guarantee for wireless LANs. IEEE Transactions on Vehicular Technology 2007, 56(4):1761-1772.

  10. Lagkas TD, Papadimitriou GI, Pomportsis AS: QAP: a QoS supportive adaptive polling protocol for wireless LANs. Computer Communications 2006, 29(5):618-633. 10.1016/j.comcom.2005.05.001

  11. Bohge M, Gross J, Wolisz A, Meyer M: Dynamic resource allocation in OFDM systems: An overview of cross-layer optimization principles and techniques. IEEE Network 2007, 21(1):53-59.

  12. Pahalawatta P, Berry R, Pappas T, Katsaggelos A: Content-aware resource allocation and packet scheduling for video transmission over wireless networks. IEEE Journal on Selected Areas in Communications 2007, 25(4):749-758.

  13. Chlamtac I, Conti M, Liu JJN: Mobile ad hoc networking: imperatives and challenges. Ad Hoc Networks 2003, 1(1):13-64. 10.1016/S1570-8705(03)00013-1

  14. Akyildiz IF, McNair J, Martorell LC, Puigjaner R, Yesha Y: Medium access control protocols for multimedia traffic in wireless networks. IEEE Network 1999, 13(4):39-47. 10.1109/65.777440

  15. Lagkas TD, Papadimitriou GI, Nicopolitidis P, Pomportsis AS: A novel method of serving multimedia and background traffic in wireless LANs. IEEE Transactions on Vehicular Technology 2008, 57(5):3263-3267.

  16. Kilkki K: Differentiated Services for the Internet. Macmillan Technical Publishing, Indianapolis, Ind, USA; 1999.

  17. Pong D, Moors T: Fairness and capacity trade-off in IEEE 802.11 WLANs. Proceedings of the 29th Annual IEEE International Conference on Local Computer Networks (LCN '04), November 2004 310-317.

  18. Wang SC, Helmy A: Performance limits and analysis of contention-based IEEE 802.11 MAC. Proceedings of the 31st Annual IEEE Conference on Local Computer Networks (LCN '06), November 2006 418-425.

  19. IEEE 802.11g WG International Standard for Information Technology—Telecommunications and Information Exchange between systems-Local and metropolitan area networks-Specific Requirements—part 11:Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, Amendment 4: Further Higher Data Rate Extension in the 2.4GHz Band, 2003

  20. Little JDC: A proof for the queuing formula: L = λW. Operations Research 1961, 9(3):383-387. 10.1287/opre.9.3.383

  21. Zorzi M, Rao RR, Milstein LB: On the accuracy of a first-order Markov model for data transmission on fading channels. Proceedings of the Annual International Conference on Universal Personal Communications (ICUPC '95), 1995, Tokyo, Japan 211-215.

  22. Pawlikowski K, Jeong HDJ, Lee JSR: On credibility of simulation studies of telecommunication networks. IEEE Communications Magazine 2002, 40(1):132-139. 10.1109/35.978060

Acknowledgment

This work was partially supported by the State Scholarships Foundation of Greece.

Author information

Correspondence to Thomas Lagkas.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Lagkas, T., Chatzimisios, P. AWPP: A New Scheme for Wireless Access Control Proportional to Traffic Priority and Rate. J Wireless Com Network 2011, 925165 (2011). https://doi.org/10.1155/2011/925165
