The efficacy of centralized flow rate control in 802.11-based wireless mesh networks
© Jamshaid et al.; licensee Springer. 2013
Received: 6 August 2012
Accepted: 30 May 2013
Published: 13 June 2013
Commodity WiFi-based wireless mesh networks (WMNs) can be used to provide last mile Internet access. These networks exhibit extreme unfairness with backlogged traffic sources. Current solutions propose distributed source-rate control algorithms requiring link-layer or transport-layer changes on all mesh nodes. This is often infeasible in large practical deployments.
In wireline networks, router-assisted rate control techniques have been proposed for use alongside end-to-end mechanisms. We wish to evaluate the feasibility of establishing similar centralized control via gateways in WMNs. In this paper, we focus on the efficacy of this control rather than the specifics of the controller mechanism design. We answer the question: Given sources that react predictably to congestion notification, can we enforce a desired rate allocation through a single centralized controller? The answer is not obvious because flows experience varying contention levels, and transmissions are scheduled by a node using imperfect local knowledge. We find that common router-assisted flow control schemes used in wired networks fail in WMNs because they assume that (1) links are independent, and (2) router queue buildups are sufficient for detecting congestion. We show that non-work-conserving, rate-based centralized scheduling can effectively enforce rate allocation. It can achieve results comparable to source rate limiting, without requiring any modifications to mesh routers or client devices.
Keywords: Wireless mesh networks; 802.11; DCF; CSMA/CA; Max-min fairness; Congestion control
Wireless mesh networks (WMNs) based on commodity IEEE 802.11 radios are a low-cost alternative for last mile broadband access. Such networks consist of static mesh routers powered by utility electricity. The mesh routers communicate with each other over multihop wireless links. Client devices connect to their preferred mesh router either via wire or over a (possibly orthogonal) wireless channel. Communication typically travels to/from clients through the mesh routers, over multiple wireless hops, to a gateway mesh router that has a wired connection to the wider world, typically the public Internet.
CSMA/CA transmitters located outside mutual carrier sense range may produce misaligned transmissions that result in excessive collisions at a receiver or deprive some nodes of transmission opportunities. As a result, nodes sharing the same wireless channel develop an inconsistent, location-dependent view of the channel state.
DCF provides all nodes in a single contention area with equal transmission opportunities (TXOPs). This MAC-level fairness does not translate to end-to-end fairness in multihop networks where nodes closer to the gateway relay traffic for nodes that are further away.
The impact of these problems can be severe in networks with backlogged traffic; it has been shown that flows closer to the gateway may completely capture the wireless channel at the cost of starving the distant, disadvantaged flows. Without any explicit rate feedback to data sources, this unfairness persists for as long as the contending flows are active. Existing congestion control protocols such as the transmission control protocol (TCP) fail to provide this rate feedback in CSMA/CA-based systems [1, 3]. These problems remain inherent in DCF extensions such as enhanced distributed channel access (EDCA), which schedules elastic TCP streams using the ‘Background’ or ‘Best Effort’ class.
A number of research publications (e.g., [1, 4, 5]) have proposed distributed algorithms that allow traffic sources to compute and enforce flow rate limits based on current contention levels in the network. These algorithms require periodic network-wide flooding of time-varying state information. This requires MAC-layer changes to mesh nodes or transport-layer changes to client devices; these are both often infeasible in large, practical deployments.
In wired networks, router-assisted flow control mechanisms have been proposed for use alongside end-host-based congestion control protocols. Pure end-to-end flow control schemes cannot provide isolation between flows or ensure rate or delay guarantees; they instead depend on these router-assisted mechanisms for support. We are interested in evaluating the feasibility of establishing similar controls at gateway mesh routers in WMNs providing last mile access. Traffic flows in these networks are primarily directed towards or away from the gateway. This allows the gateway to develop a unified view of the end-to-end rates of flows passing through it, making it a suitable choice for enforcing various resource allocation policy objectives. In particular, we wish to use gateway-enforced control to address flow rate unfairness in WMNs.
In this paper, we focus on the efficacy of such centralized control, rather than the specifics of the controller mechanism design itself. Given a desired rate-allocation policy objective (e.g., max-min allocation), we evaluate the effectiveness of gateway rate control in enforcing this objective in an 802.11-based WMN. This evaluation is necessary because multihop wireless network characteristics are distinct from wired networks or even one-hop wireless local area networks (WLANs): competing flows in a WMN traverse different numbers of hops, with each flow experiencing varying levels of link contention along its path; further, transmissions along individual links are scheduled based only on the localized view of the CSMA/CA transmitters. We discover that these characteristics render some common router-assisted wired network mechanisms ineffective as gateway-enforceable solutions in WMNs. Work-conserving scheduling techniques, such as fair queueing (FQ) or weighted fair queueing (WFQ), are inadequate on their own as they assume independence of links. Similarly, router-assisted probabilistic packet drop techniques, including active queue management (AQM), are ineffective because packet losses in a multihop network are spatially distributed and cannot be accurately predicted using the queue size at the gateway router. We describe these fundamental differences in Sections 5.1 and 5.2.
We show that simple non-work-conserving, rate-based centralized scheduling techniques can enforce fairness in 802.11-based WMNs. Link-layer retransmissions allow an 802.11 node to recover from wireless losses. When combined with rate-based scheduling, this allows all nodes to obtain their share of the network capacity. We show that even coarse-grained rate control on the net aggregate traffic passing through the gateway is effective in eliminating unfairness. Further improvements are obtained when we isolate flows using FQ alongside aggregate rate-based scheduling. Finally, rate-based scheduling can be enforced on a per-flow basis, allowing fine-grained control over the resource allocation process. We evaluate and establish the efficacy of these gateway-enforced control techniques in both single-channel and multi-radio, multi-channel WMNs.
The remainder of this paper is organized as follows: in Section 2, we explain how DCF leads to flow unfairness and starvation in WMNs with backlogged traffic; we discuss related work, contrasting it with our approach, in Section 3; in Sections 4 and 5, we describe various techniques for enforcing centralized rate control in WMNs and evaluate their effectiveness using simulations. These simulations use network capacity models to determine fair share rate information. While such models are orthogonal to this work (we are interested in evaluating the efficacy of centralized control given a desired rate allocation), for completeness, we describe the models used in this paper in Section 7.
2 Flow unfairness and starvation in DCF-based multihop networks
A core function of any MAC protocol is to provide a fair and efficient contention resolution mechanism. Here, we describe the behavior of 802.11 DCF in multihop networks when the contending nodes are (a) within and (b) outside mutual carrier sense range.
2.1 Nodes within mutual carrier sense range
On average, DCF provides equal TXOPs to nodes within carrier sense range. This provides per-station fairness in WLANs where stations communicate directly with the access point (AP). However, it does not translate to flow-level or end-to-end fairness in WMNs where nodes closer to the gateway relay an increasing amount of aggregate traffic. Without a proportionate increase in the number of TXOPs, these nodes will experience higher queue drops. This results in capacity loss when the dropped packets originated from other nodes and had already consumed a portion of the shared spectrum. For example, consider a two-hop parking lot topology with two flows originating from the one-hop and the two-hop node destined to a common gateway. Assume uniform wireless link rates with a nominal MAC-layer capacity W. The max-min fair share for each flow is W/3, for an aggregate network capacity of 2W/3. However, with the 802.11 MAC and continuously backlogged sources, the aggregate network capacity reduces to W/2, with the two-hop flow starving.
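The fair-share arithmetic for the parking lot example above can be sketched as follows. `parking_lot_fair_share` is a hypothetical helper (not from the paper's simulation code) that treats each flow's hop count as the number of transmissions it consumes in the single shared collision domain: equal per-flow rate r must satisfy the load constraint sum(k · r) = W.

```python
# Max-min fair-share arithmetic for an n-hop parking lot topology:
# the flow from the k-hop node consumes k transmissions in the shared
# collision domain, whose nominal MAC-layer capacity is W.

def parking_lot_fair_share(hops, W=1.0):
    """Equal per-flow rate r satisfying sum(k * r for k in hops) = W."""
    return W / sum(hops)

# Two flows: one from the one-hop node, one from the two-hop node.
r = parking_lot_fair_share([1, 2])
aggregate = 2 * r
print(r)          # W/3 per flow
print(aggregate)  # 2W/3 aggregate network capacity
```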
2.2 Nodes outside mutual carrier sense range
2.2.1 Starvation from collisions
Consider the topology in Figure 1a where both senders S1 and S2 have backlogged traffic for their respective receivers R1 and R2. The two senders are outside mutual carrier sense range. Assume that both transmitters are in the first backoff stage, i.e., they choose a random backoff between 0 and 31 time slots. A collision at R1 is inevitable, as the two transmissions can start at most 32 time slots (640 μs for 802.11b) apart, while it takes upwards of 1,500 μs to transmit a 1,500-byte Ethernet-friendly MTU and its subsequent link-level acknowledgement (ACK) using 802.11b physical layer parameters. This collision only impacts S1’s packet to R1. S1 now doubles its MAC contention window, choosing a backoff between 0 and 63 time slots, while S2 remains in the first backoff stage. S2 is now twice as likely to start transmitting before S1; even if S1 waits the maximum of its 64 time slots, the probability of collision is still 1. S1 doubles its contention window yet again, but even in this third backoff stage, the probability of collision is 0.6. Thus, DCF steadily builds up the contention window of the disadvantaged node S1, while allowing S2 to contend for the channel with a minimum window following every successful transmission; the two transmitters develop an inconsistent, asymmetric view of the channel state.
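The slot arithmetic behind the certainty of collision in the first two backoff stages can be sketched as below, assuming 802.11b long-slot timing (20 μs per slot) and roughly 1,500 μs on air per data frame plus ACK; the constants are illustrative approximations, not values from the paper's code.

```python
# Why collisions are certain in the early backoff stages of Figure 1a:
# two senders outside mutual carrier sense range must overlap whenever
# the largest possible backoff gap is shorter than one transmission.

SLOT_US = 20        # 802.11b long slot time
AIRTIME_US = 1500   # approximate airtime of a 1,500-byte MTU plus ACK

def collision_certain(cw_slots):
    """True if the maximum backoff gap cannot clear one full transmission."""
    max_gap_us = cw_slots * SLOT_US
    return max_gap_us < AIRTIME_US

print(collision_certain(32))   # first stage: 640 us gap -> True
print(collision_certain(64))   # second stage: 1,280 us gap -> True
print(collision_certain(128))  # third stage: collision now only probabilistic
```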
We note that the information asymmetry topology in Figure 1a is an extension of the hidden terminal problem. However, floor-acquisition mechanisms such as request to send/clear to send (RTS/CTS) fail in this scenario. First, even the RTS frames are susceptible to a collision probability of 0.55 when both transmitters are in the first backoff stage. Second, when the RTS frames do not collide, R1 will not respond to S1’s RTS if it has already been silenced by a prior RTS from S2 to R2. From S1’s perspective, this is no different from when its RTS frame collides at R1 because of S2’s transmission.
2.2.2 Starvation from lack of transmission opportunities
Collisions are not the only reason for nodes to develop an inconsistent view of the channel state; this may occur even in an ideal CSMA/CA protocol with no collisions. Consider the flow-in-the-middle topology in Figure 1b, where S2 is in carrier sense range of both S1 and S3, but S1 and S3 are outside carrier sense range of each other. With backlogged traffic sources, the throughput for S1 and S3 approaches the channel capacity, with S2 starving. This is because S2 is almost always deferring its transmissions to one of the other senders.
2.3 Cross-layer interaction with TCP
The DCF behavior described above may lead to cross-layer interaction with higher-layer protocols. In particular, TCP’s congestion control mechanism further exacerbates the fairness problem. First, TCP allocates bandwidth as a function of a flow’s round-trip time thus penalizing flows with a large hop count. Second, TCP interprets delays in receiving an ACK as a sign of packet loss due to network congestion. In CSMA/CA networks, delays may occur due to transient medium access errors inherent in topologies similar to those described in Figure 1. While wireless link-layer retransmissions may potentially recover from collisions, TCP retransmission timeouts may still occur in the interim. This results in TCP invoking slow start and dropping its congestion window to one. On the other hand, contending nodes that successfully transmitted a packet gradually increase their TCP congestion window under backlogged traffic, eventually capturing the wireless channel. Thus, with backlogged TCP, the short-term MAC unfairness degenerates to long-term flow rate unfairness and starvation for disadvantaged flows.
3 Related work
The challenges associated with using a CSMA/CA-based MAC in multihop networks have been discussed previously [3, 10]. In general, a flow not only contends with other flows sharing the spectrum (i.e., inter-flow contention), but may also interfere with its own transmissions along the path to the destination (i.e., intra-flow contention). Flows can be routed over non-interfering high-throughput paths [12, 13] when they exist; however, in many WMNs, the traffic is predominantly directed towards and away from the gateway, creating a network bottleneck. The degree of contention increases with increasing traffic loads. Related work in the literature addresses it from different perspectives: MAC-layer enhancements, transport-layer enhancements, and higher-layer rate control algorithms.
By far, the largest body of literature specifically devoted to wireless network fairness is that of MAC-layer solutions (see [14–16], among others). Such approaches tend to assume that contending flows span a single hop and that fairness may be achieved by converging the MAC contention windows to a common value. Schemes for reducing collisions can also help improve fairness; e.g., the virtual backoff algorithm uses sequencing techniques to minimize the number of collisions in a single-hop wireless network. However, optimal end-to-end fair allocation for multihop flows cannot be achieved by MAC scheduling based only on local information. For multihop networks, solutions include prioritizing transmissions based on timestamps, using EDCA TXOP differentiation, or adjusting the minimum contention window parameter at each relay node. However, these solutions are not backwards-compatible across all variants of 802.11a/b/g/n networks, or may have limited utility in multi-radio, multi-channel WMNs. In this work, we show that centralized flow rate control techniques are not constrained by these limitations.
A number of studies have attributed the inter-flow contention experienced by a single TCP flow to its congestion window exceeding its optimum size. For a chain topology, the optimum window size that maximizes spatial reuse is 1/4th the number of hops between a source and destination. Note that this does not resolve any inter-flow contention between multiple TCP flows, so subsequent unfairness and starvation may still ensue. Modifications or alternatives to TCP for multihop networks have also been proposed (e.g., [21, 22]), though these require modifying the transport stack on the client devices and may present integration challenges when communicating with a wired host running the standard TCP stack. In this work, we show that strong fairness characteristics can be enforced through a centralized rate-based scheduling mechanism without modifying individual client devices or mesh routers.
Rate control algorithms operating outside the transport layer have also been shown to improve fairness between flows. Given a network topology and traffic demands, conflict graph models such as the clique model, its time-fairness extension, and Jun and Sichitiu’s nominal capacity model may be used to compute optimal bounds on network capacity. We defer description of these models to Section 7. Raniwala et al. proposed a distributed algorithm based on the conflict graph approach for modeling constraints on simultaneous transmissions. Rangwala et al. have proposed an AIMD-based rate control alongside a congestion sharing mechanism. The proposed mechanism is designed for the many-to-one communication paradigm of sensor networks but fails in one-to-many (typical downloads in mesh networks) or many-to-many scenarios, or a mix of these. This is because the congestion sharing mechanism, as defined in IFRC, fails to propagate the congestion information to all potential interferers.
Finally, we note that several works deal with congestion control in wireless networks with relayed traffic [25–27]. However, congestion control does not aim for fairness, and these solutions do not guarantee any fairness scheme. Furthermore, their system models either focus on (very) bursty traffic and power consumption in the sensor-net environment, or are incompatible with IEEE 802.11 in that they assume independence of links.
4 Centralized flow rate control in WMNs
Centralized rate control implemented at gateway routers offers many advantages over distributed rate control schemes. First, since the gateway bridges all traffic between the WMN and the wired Internet, it can formulate a unified, up-to-date view of the traffic state without additional signaling. Second, gateway rate control does not require any changes to mesh routers. This is advantageous when the mesh routers are commodity customer premises equipment (CPE) owned and managed by subscribers, with the Internet service provider (ISP) having little control over them. Third, centralized rate control is effective even when the nodes in the network cannot be trusted to correctly enforce the desired rate control mechanisms. Finally, the notion of centralized rate control also lends itself naturally to providing an auditing and billing framework that can be essential to the operations of an ISP.
Router-assisted congestion control mechanisms have been extensively studied for wired networks. Congestion in Internet routers occurs due to statistical multiplexing or link speed mismatch across different network interfaces. Gateway nodes in WMNs interface the high-speed wired backhaul link with the shared-spectrum wireless resource that is often the system bottleneck, creating opportunities for reusing existing wired solutions in this new problem domain. In the following sections, we consider three categories of algorithms: work-conserving scheduling-based algorithms (Section 4.1), preferential packet-drop algorithms (Section 4.2), and traffic-shaping algorithms (Section 4.3).
4.1 Work-conserving scheduling-based algorithms
Work-conserving packet scheduling algorithms like FQ and WFQ are approximations of the generalized processor sharing (GPS) scheduler, the theoretically ideal mechanism for providing fair bandwidth allocation. Their work-conserving nature maintains high network utilization. While distributed FQ protocols have earlier been proposed for ad hoc networks, we are interested in evaluating their impact on fairness when enforced at the gateway. To the best of our knowledge, this has not been evaluated in prior work.
4.2 Packet-drop/marking algorithms
Packet loss in wired networks primarily occurs at the router queue interface to the bottleneck link. Selective packet drop and/or marking techniques (e.g., AQM variants such as random early detection (RED)) allow these routers to signal incipient network congestion to traffic sources. Since the gateway mesh router bridges traffic between the high-speed wired network and the shared-spectrum wireless network, it appears that these algorithms may also be effective as gateway-enforced solutions in WMNs.
Fair random early drop (FRED) extends the RED algorithm to improve flow rate fairness. While RED effectively avoids network congestion, it may not improve fairness since it does not differentiate between connections. Thus, when incipient congestion is detected, all packets (irrespective of the flow) are marked with the same drop probability. In contrast, FRED uses per-flow accounting to ensure that the drop rate for a flow depends on its buffer usage.
A brief overview of FRED is as follows: a FRED gateway classifies flows into logically separate buffers. For each flow i, it maintains the corresponding queue length qlen_i. It defines min_q and max_q, which are, respectively, the minimum and the maximum number of packets an individual flow is allowed to queue. Similarly, it maintains min_th, max_th, and the average queue size avg for the overall queue. All new packet arrivals are accepted as long as avg is below min_th. When avg lies between min_th and max_th, a new packet arrival is deterministically accepted only if the corresponding qlen_i is less than min_q. Otherwise, as in RED, the packet is dropped with a probability that increases with increasing queue size.
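The accept/drop decision described above can be sketched as below. The parameter names follow the text; the default values, the RED-style linear probability ramp, and the deterministic drop once a flow exceeds max_q are illustrative assumptions rather than FRED's exact constants.

```python
import random

# Sketch of FRED's per-packet accept/drop decision, as summarized in
# the text. Defaults (min_q, max_q, min_th, max_th, max_p) are
# illustrative; a real FRED queue also updates avg with an EWMA.

def fred_accept(qlen_i, avg, min_q=2, max_q=8, min_th=5, max_th=15, max_p=0.02):
    if avg < min_th:
        return True                     # no incipient congestion: accept all
    if avg > max_th or qlen_i >= max_q:
        return False                    # severe congestion or greedy flow: drop
    if qlen_i < min_q:
        return True                     # flow is under its fair buffer share
    # RED-style probabilistic drop, steeper as avg approaches max_th
    p = max_p * (avg - min_th) / (max_th - min_th)
    return random.random() >= p
```

A flow that keeps its backlog below min_q is never dropped while congestion is moderate, which is how FRED ties a flow's drop rate to its own buffer usage.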
We note that Xu et al. have proposed the use of RED over a virtual distributed ‘neighborhood’ queue comprising nodes that contend for channel access. This was in the context of wireless ad hoc networks in which flows do not necessarily share traffic aggregation points. In our work, we explore the traditional use of AQM as a router-assisted (gateway-enforced) mechanism.
4.3 Traffic policing/shaping algorithms
Traffic policing and shaping algorithms are commonly used when traffic limits are known or pre-determined in advance (e.g., for enforcing compliance with a contract). The difference between policing and shaping is subtle: policing does not implement any queueing, and excess packets are immediately dropped. Shaping, on the other hand, can absorb short bursts of packets, where the burst size is determined by the allocated buffer. When the buffer is full, all incoming packets are immediately dropped and traffic shaping effectively degenerates to traffic policing. Both policing and shaping are examples of non-work-conserving scheduling methods.
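The contrast above can be sketched with a token bucket: a policer drops any packet that finds the bucket empty, while a shaper queues it up to a buffer limit (and only degenerates to policing when that buffer fills). The class names and parameters are illustrative; the sketch omits the shaper's drain loop, which would release queued packets as tokens refill.

```python
from collections import deque

# Token-bucket sketch contrasting policing (drop on empty bucket)
# with shaping (queue up to a buffer limit). Illustrative only.

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate, self.burst = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, 0.0

    def conforms(self, now, pkt_bits):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bits:
            self.tokens -= pkt_bits
            return True
        return False

def police(bucket, now, pkt_bits):
    return bucket.conforms(now, pkt_bits)   # non-conforming -> immediate drop

class Shaper:
    def __init__(self, bucket, buf_pkts):
        self.bucket, self.q, self.buf = bucket, deque(), buf_pkts

    def enqueue(self, pkt_bits):
        if len(self.q) < self.buf:
            self.q.append(pkt_bits)         # absorb the burst
            return True
        return False                        # full buffer: behaves like a policer
```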
Traffic shaping can be enforced at different levels of resource abstraction; it can be applied to aggregate traffic allowed to pass through a network interface, or it may be enforced on individual flows in a traffic stream. We describe some of these control configurations below.
4.3.1 Interface aggregate rate limit
4.3.2 Interface aggregate rate limit with FQ
TCP flows sharing a single queue are susceptible to synchronization due to bursty and correlated packet losses. To prevent this, we introduce per-flow queues with fair scheduling between them. By separating flows, we can provide isolation between flows experiencing different levels of contention for network access, e.g., we can separate locally generated traffic at a node from its relayed traffic. This new architecture is shown in Figure 2b. Note that while flows are queued separately, rate limits are still enforced for the net aggregate traffic traversing the gateway.
Separating traffic into flows requires a flow classifier. For WMNs providing last mile access, this classification can be based on source or destination mesh routers. Thus, a flow f_i represents the aggregate of all micro-flows originating from, or destined to, mesh router n_i in the network. In this context, we use nodes and flows interchangeably in our discussion. We note that this classification is consistent with the common practices employed by ISPs on wired networks, where capacity is managed on a per-subscriber basis.
4.3.3 Per-flow rate limit
While the architectures in Figure 2a,b manage aggregate traffic through an interface, there may be a requirement for more fine-grained control over resource allocation between individual flows. This may be necessitated by QoS-enabled mesh networks where the provider wishes to support differentiated services or provide weighted max-min or proportional fairness. We extend the system architecture to provide per-flow rate limiting at the gateway router, as shown in Figure 2c. Data traffic through the gateway is classified into queues, which are then drained at their specific rates. Note that we propose rate-limiting data traffic only; system housekeeping messages like routing updates are not rate limited.
5 Performance analysis
We evaluate the efficacy of gateway-enforced control in WMNs using simulations in ns-2. We implement and evaluate each of the control actions described in Section 4 on the gateway. Our implementation works between the 802.11 MAC layer and the network layer, and operates transparently without requiring changes to either layer. We do not modify the regular mesh routers. We model wireless channel propagation using the two-ray ground reflection model; it considers the direct path as well as the ground-reflected path between the source and destination. We assume a static noise floor and a uniform static link rate of 1 Mb/s. We simulate the DCF channel access mechanism. Our TCP experiments simulate an infinite file transfer using TCP NewReno. Our upstream flows originate from a mesh router and terminate at a host on the wired network; downstream flows take the opposite direction. We use Jain’s fairness index (JFI) as a quantitative measure of fairness for the resulting allocation.
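For reference, Jain's fairness index for rates x_1, …, x_n is (Σx)² / (n · Σx²); it equals 1 for a perfectly equal allocation and approaches 1/n when one flow captures the entire channel. A minimal implementation:

```python
# Jain's fairness index (JFI): (sum x)^2 / (n * sum x^2).
# Ranges from 1/n (one flow captures everything) to 1 (perfect equality).

def jfi(rates):
    n = len(rates)
    s, sq = sum(rates), sum(r * r for r in rates)
    return (s * s) / (n * sq) if sq else 0.0

print(jfi([1, 1, 1, 1]))                 # 1.0: perfectly fair
print(jfi([4, 0.001, 0.001, 0.001]))     # ~0.25: near-total capture by one flow
```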
5.1 Work-conserving scheduling-based algorithms
FQ, WFQ, and similar router-assisted scheduling techniques assume independence between links and were designed as work-conserving schedulers; they do not allow the output link to remain idle if any of the flows have packets to send. While this maintains high efficiency in wired networks, it creates problems in wireless networks where the contending links are not independent, i.e., transmission on a link precludes successful delivery of data on contending links. In topologies where mesh nodes have an inconsistent view of the channel state, a work-conserving scheduler will schedule packets for an advantaged node when it has nothing to send for distant, disadvantaged flows, while ideally it should defer any transmissions and keep the medium idle to allow for successful transmissions by the disadvantaged nodes. The situation deteriorates when work-conserving schedulers are used with backlogged traffic sources using elastic TCP streams, due to the cross-layer interaction described in Section 2.
5.2 Packet-drop/marking algorithms
We simulate a FRED gateway router on the three-hop parking lot topology used above. We use downstream flows because a queue build-up (for detecting incipient congestion) only occurs when packets traverse from a high-speed wired link to a shared-medium WMN. The gateway queue size and various FRED parameters are consistent with the default values in ns-2.
Our results with the FRED queue at the gateway are shown in Figure 3. It fails to prevent starvation of the TCP flow to node n3. By monitoring queue drops at the gateway, we found that the FRED queue did register some proactive packet drops for f1 and f2, though these were insufficient to preclude the starvation of f3.
We conclude that AQM is ineffective as a gateway-enforced technique for improving flow rate fairness in WMNs. This is due to fundamental differences in packet loss characteristics between wired networks and WMNs. In wired networks, packet loss occurs primarily at the queue interface into the bottleneck link. In WMNs, however, these packet losses are spatially distributed over various intermediate routers (see Section 2) and cannot be accurately predicted by simply monitoring the queue size at the gateway router.
5.3 Traffic policing/shaping algorithms
We evaluate the various traffic shaping alternatives described in Section 4.3. Our simulations include a number of chain, grid, and random multihop network topologies, including both upstream and downstream flows, with up to a maximum of 35 simultaneously active nodes transmitting via a single gateway. Experiments for a given topology are repeated 25 times with different random seeds and random flow activation sequences, and the results are averaged. For each topology, the traffic shaping rate is computed offline using a collision domain network capacity model. Other capacity models, such as clique-based models, may similarly be used. We note that the mechanism for computing fair flow rates is orthogonal to this work; in this paper, we focus on evaluating the efficacy of gateway-enforced control given a desired rate allocation, rather than the mechanics of accurately estimating the network capacity and inferring flow rates. Nonetheless, for completeness, we provide an overview of these two capacity models in Section 7.
The fair rate allocation computed by the model is enforced at the gateway via the traffic shaping architectures described in Section 4.3. The collision domain capacity model allows us to compute per-flow rates. The interface aggregate rate limit is then simply the sum of the fair rates of the constituent flows. This rate limit is the fair-aggregate capacity of the network.
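A simplified sketch of this derivation, under the assumption of a single bottleneck collision domain and equal per-flow rates: each flow loads the domain once per link it traverses within it, so the equal rate is bounded by the domain's nominal capacity W. The `crossings` input (how many member links of the domain each flow traverses) would be computed from the topology and is assumed here; details of the full model are deferred to Section 7.

```python
# Hedged sketch: derive per-flow fair rates and the gateway's aggregate
# shaping rate from the load on a bottleneck collision domain of
# nominal capacity W. 'crossings' maps flow -> number of member links
# of that domain the flow traverses (assumed, computed from topology).

def fair_rates(crossings, W=1.0):
    r = W / sum(crossings.values())          # equal rate filling the domain
    per_flow = {f: r for f in crossings}
    aggregate = r * len(crossings)           # interface aggregate rate limit
    return per_flow, aggregate

# Three-hop parking lot: flows from the 1-, 2-, and 3-hop nodes cross
# the gateway's collision domain 1, 2, and 3 times, respectively.
per_flow, agg = fair_rates({"f1": 1, "f2": 2, "f3": 3})
print(per_flow, agg)
```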
5.3.1 Long-lived elastic TCP flows
[Table 1: Fairness indices for downstream TCP flows, comparing a single FIFO queue, aggregate rate limit, aggregate rate limit with FQ, and per-flow rate limit.]
[Table 2: Fairness indices for upstream TCP flows, comparing a single FIFO queue, aggregate rate limit, aggregate rate limit with FQ, per-flow rate limit, and source rate limit.]
We perform the same set of experiments using a single FIFO Drop Tail queue at the gateway router.
We repeat these experiments using FQ at the gateway router with a per-flow buffer size of 5 packets. Our prior work shows that this buffer size maintains low queueing delays at the gateway with little loss in end-to-end flow rate.
For upstream flows, we perform additional experiments where the source node rate limits the flows to their computed fair share rate without any modifications on the gateway router. For downstream flows, this source rate limit is akin to per-flow gateway rate limit as the gateway is now injecting packets in the wireless medium.
Our results in Tables 1 and 2 show that simply enforcing rate-based scheduling, even at the granularity of the aggregate amount of traffic allowed through the network interface, provides upwards of two-fold improvements in JFI compared to the base case with a shared FIFO queue. We note that rate-based scheduling enforced via traffic shaping is, by nature, non-work-conserving. Thus, while underlying topologies may still be susceptible to the 802.11 MAC limitations described in Section 2, link-layer retransmissions can provide reliable packet delivery as long as non-work-conserving, rate-based scheduling shields individual flows from the effects of cross-layer interaction with TCP.
The quantitative analysis of per-flow rate control in Tables 1 and 2 shows a further improvement in fairness index of about 1% to 8% over FQ with interface aggregate rate limiting. We note that these fairness characteristics of per-flow rate limiting are very similar to those achieved with source rate limiting. Incidentally, perfect fairness cannot be achieved even with source rate limiting: some network topologies may exhibit inherent structural unfairness, requiring control action beyond simple rate limiting. Addressing this is beyond the scope of this work.
Finally, we note that normalized effective network utilization is upwards of 90% for all scheduling techniques for both downstream and upstream flows; backlogged TCP flows saturate the spectrum around the gateway in all cases, irrespective of fairness in rate allocation between individual flows.
In summary, our experiments show that centralized rate control cannot be exercised in WMNs using work-conserving scheduling techniques. Non-work-conserving, rate-based scheduling, however, is as effective as source rate limiting techniques that require changing the MAC or transport layer on end hosts.
5.3.2 Multiple long-lived flows per node
We now evaluate the efficacy of gateway rate control when multiple flows originate from a mesh router. Consider a 20-node topology with random node placement. We randomly select 10 of these nodes as traffic sources. Each source generates between one and three upload flows. The fairness criterion we target is per-subscriber fairness irrespective of flow count, where each subscriber corresponds to a mesh node. As discussed earlier, this resource allocation policy is consistent with the practices employed by ISPs in wired networks.
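The per-subscriber allocation can be sketched as a two-level split: capacity is divided equally among subscribers with at least one active flow, and each subscriber's share is then divided equally among that subscriber's flows. The function below is an illustrative sketch under the simplifying assumption that all subscribers share a single bottleneck of known capacity.

```python
def per_subscriber_rates(total_capacity, flows_per_node):
    """Per-subscriber fairness: split capacity equally across subscribers
    (mesh nodes) with active flows, then split each subscriber's share
    equally among its own flows.
    flows_per_node: dict mapping node id -> number of active flows.
    Returns the per-flow rate to enforce at each node."""
    active = {node: k for node, k in flows_per_node.items() if k > 0}
    if not active:
        return {}
    node_share = total_capacity / len(active)
    return {node: node_share / k for node, k in active.items()}
```

For example, with 3 Mb/s of capacity and two active subscribers, a node with three flows would have each flow limited to 0.5 Mb/s, while a node with one flow gets the full 1.5 Mb/s for it.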
5.3.3 Short-lived elastic TCP flows
We next consider the performance characteristics of short-lived dynamic flows, where we do not ignore the impact of slow start but instead use it to evaluate how quickly new flows converge to their fair-share allocation. Similarly, when an existing flow terminates, we are interested in how quickly the freed resources are utilized by other flows.
Flow activation and termination can be detected in multiple ways. TCP stream activation and teardown can be detected by observing the TCP-specific connection setup (three-way handshake) and teardown messages. In our case, where a flow bundle comprises multiple TCP streams, we simply use the presence or absence of packets to determine the current state of stream activity, thus obviating any overhead associated with distributing stream-activity information. On detecting a new stream, the centralized controller simply computes a new rate per active stream and starts enforcing it. Detecting stream deactivation is trickier; our controller waits for a time interval during which no packet is received from a flow. This interval should be a function of the average delay and jitter experienced by a flow.
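The packet-presence heuristic above can be sketched as a small tracker at the gateway; the class and method names are illustrative assumptions. A new flow is recognized on its first packet, and an idle timeout (chosen as a function of flow delay and jitter) retires flows that have gone silent; either event triggers a recomputation of per-flow rates.

```python
class FlowActivityTracker:
    """Infer flow-bundle activity from packet arrivals at the gateway:
    a flow is 'new' on its first packet and 'expired' after idle_timeout
    seconds with no packets."""

    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.last_seen = {}              # flow id -> timestamp of last packet

    def on_packet(self, flow_id, now):
        """Record a packet; return True if this is a newly seen flow,
        in which case the controller should recompute fair-share rates."""
        is_new = flow_id not in self.last_seen
        self.last_seen[flow_id] = now
        return is_new

    def expire_idle(self, now):
        """Retire flows idle longer than the timeout; a non-empty result
        should also trigger a rate recomputation."""
        dead = [f for f, t in self.last_seen.items()
                if now - t > self.idle_timeout]
        for f in dead:
            del self.last_seen[f]
        return dead

    def active_flows(self):
        return set(self.last_seen)
```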
5.3.4 Non-adaptive flows
Router-assisted rate control mechanisms are targeted at adaptive transport protocols that can react to congestion notification. TCP is the canonical example of such an adaptive protocol and constitutes the bulk of Internet traffic. However, UDP-based communications are increasingly being used for real-time delivery of audio and video data. We now evaluate the performance of gateway-assisted centralized rate control for such non-adaptive flows.
Per-flow rate control for UDP flows in a three-hop parking lot topology with gateway n0.
We observe that gateway-assisted rate control in WMNs can successfully contain downstream UDP flows only. In this case, it effectively acts as source rate control, limiting each stream to its fair share of the wireless network. Upstream flows, however, continue to experience unfairness: while we can limit the goodput of f1 to its fair share by dropping its excess traffic at the gateway, its non-adaptive transport protocol still sources traffic at 800 Kb/s. The locally generated traffic at n1 shares the same transmit buffer as the relayed traffic from n2 and n3. With a probability that increases with the offered load, relayed packets find this buffer full and are dropped by the drop-tail queue. The relayed traffic from f2 and f3 thus experiences a high loss rate, resulting in flow rate unfairness.
Additional mechanisms beyond gateway-enforced traffic policing/shaping algorithms are required to adapt the rate of non-congestion controlled upstream flows, e.g., rate information calculated at the gateway may be communicated back to the source nodes for enforcement at the traffic ingress points. Mesh routers need to correctly interpret and enforce these rate limits. We defer this study to future work.
5.3.5 Multi-radio, multi-channel WMNs
We have further validated the efficacy of gateway control in multi-radio, multi-channel WMNs. We extended ns-2 to support two radio interfaces per mobile node. Each interface is assigned a static, non-overlapping channel so as to maintain connectivity between neighboring nodes. Channel assignment for optimal network performance is beyond the scope of our current work.
Table: Flow rate averages for the results in Figure 9, over the intervals 100 to 200 s, 200 to 300 s, 300 to 400 s, and 400 to 500 s.
We observe that fairness improves considerably, such that overlapping flow rates over the various intervals are often indistinguishable. Of particular interest are the 200- to 300-s and 400- to 500-s intervals, where the active flows sourced from nodes on either side of the gateway do not share a common bottleneck, leading to max-min fairness with unequal flow rates. Using per-flow rate limiting at the gateway, we can correctly converge the flows to their fair share of network capacity.
WMNs, particularly those based on 802.11 radios, exhibit extreme fairness problems, requiring existing deployments to limit the maximum number of hops to the gateway to prevent distant nodes from starving. In this paper, we explore the feasibility of using centralized rate control that can be enforced at traffic aggregation points such as gateway routers. We show that router-assisted techniques from wired networks, including work-conserving packet scheduling (such as FQ and its variants) and probabilistic packet-drop techniques (such as AQM and its variants), are inadequate as centralized rate control techniques in WMNs. This is because of fundamental differences in the abstraction of wired and wireless networks: (1) transmissions on wired links can be scheduled independently, and (2) packet losses in wired networks occur only as queue drops at bottleneck routers. Our experiments indicate that non-work-conserving, rate-based centralized scheduling can be used effectively in WMNs. Even rate-limiting the aggregate traffic passing through the gateway router improves the fairness index twofold over the base case with a shared FIFO queue. Further granularity in rate allocation control can be obtained by isolating flows using per-flow buffering and by exercising per-flow rate limiting. The fairness indices achieved with these modifications are comparable to source rate limiting techniques that require changing the MAC or transport layer on the end hosts.
Having established the feasibility of gateway-assisted rate control in WMNs, we are now working on extending this work along multiple dimensions. First, we are developing practical heuristics and mechanisms to estimate flow rates using the information available locally at the gateway. We are pursuing a feedback-based approach in which the centralized controller adapts its behavior in response to changing network and flow conditions. Second, we are considering the impact of multiple gateway nodes in large WMN deployments. Some gateways in these networks may need to exchange signaling information to reconcile their views of the available network capacity. This requires identifying flows that use one gateway but interfere with flows using other gateway(s). The signaling between these gateways, however, may use the wired backbone without consuming wireless capacity. We hope to address these challenges in the future.
7.1 Model for estimating per-flow fair share
Flow rates used in our analysis in Section 5 were computed off-line using a network capacity model. For completeness, in this appendix, we briefly describe the two computational models that we considered and compare the capacity achievable with these models against the model implemented in ns-2.
We first state the assumptions necessary to our approach. We presume that routing is relatively static, based on the fact that the WMN nodes are stationary and, likely, quite reliable. By ‘relatively static’, we mean that changes in routing will be significantly fewer than the changes in stream activity. This assumption implies a few things, including that network membership changes (such as node additions or hardware failures) are few and far between, and that load balancing is not used in the network. While the first assumption is certainly valid, the second assumption is a simplification that we hope to address in the near future.
We also assume that the WMN has a single gateway. Though this is generally not true in large deployments, given static routing, for each node, there will be a single gateway. We thus partition a multi-gateway WMN into disjoint WMNs, each with a single gateway. While there may be interference between the resulting set of WMNs, this is a problem that must already be dealt with insofar as there may be interference from any number of other sources.
Two nodes share an edge in the connectivity graph if they are within transmission range of each other; such an edge is also referred to as a link. A stream is defined by an exchange of data packets between a mesh node and its corresponding gateway. An active stream is one for which data is currently being exchanged.
7.1.1 Computational model
The fair-share computation model is an optimization problem subject to the feasibility model for the network, the network state, and the fairness criterion adopted.
The feasibility model reflects the throughput constraints imposed by the network. It consists of a set of constraints determined by how streams use the links and, in turn, how these links contend for the wireless channel. The former is a function of the routing protocol; for the latter, we describe two variations (bottleneck clique vs. collision domain) below.
This feasibility model is extended by the network state, which is simply the desired rate, G(s), for each stream, s. For this paper, we consider only binary activity: the stream is either silent (G(s)=0) or never satisfied (G(s)=∞). This corresponds to TCP behavior, which either is not transmitting or will increase its transmission rate to the available bandwidth. We plan to incorporate flows with fixed bandwidth requirements in future work.
Finally, the fairness criterion implements the selected fairness model. In this paper, we deliberately restrict our analysis to max-min fairness (i.e., active streams receive as much throughput as the network can offer without causing other active streams with a lesser throughput to suffer), so as to focus on the accuracy of the model for 802.11-based WMNs and the efficacy of the gateway as a control point. However, we note that the computation model can be extended to any feasible, mathematically tractable fairness criterion that can be expressed as a set of rate allocation constraints.
7.1.2 Network feasibility models
We now describe the details of the two network feasibility models. Both models begin by dividing the problem into link constraints (i.e., usage of links by streams) and medium constraints (i.e., usage of the medium by links). The former is the same for both models, as it is a function of the routing together with the demands placed on the network. The latter is where the two models differ.
7.1.2.1 Link-resource constraints
7.1.2.2 Medium-resource constraints
The basic problem in developing medium-resource constraints is that contention is location-dependent, with the medium conceptually divided into overlapping resources of limited capacity. The clique model computes mutually incompatible sets of links, all but one of which must be silent at any given time for collision-free transmission. The collision-domain model considers the medium-resource unit to be the link, and determines the set of links that must be silent for a given link to be used. We first formalize the clique model.
7.1.2.3 Clique model of medium-resource constraints
Note that if each wireless router transmits at the same rate, the value of B(u) can be reasonably approximated as the throughput that can be achieved at the MAC layer in a one-hop network with infrastructure. If routers transmit at different rates, a weighted contention graph may be used.
The clique model requires the (NP-complete) computation of cliques within the contention graph and, as a more practical matter, the determination of which links contend. While the former problem is, to some degree, amenable to careful analysis, potentially enabling more efficient computation, the latter problem is extremely difficult to deal with. Specifically, determining which links interfere with which other links in a wireless mesh network is not, in general, feasible, in part because interference is experienced by a receiver, not by a link, and thus depends on traffic direction.
7.1.2.4 Collision-domain model of medium-resource constraints
Note that since the transmission range is often much less than the interference range, this model underestimates the level of contention. However, each collision domain will, in general, contain links that do not contend with each other, thus overestimating the number of contending links compared to the more accurate cliques. As a result, the combined model has the potential of offering acceptable accuracy with computational simplicity and practical feasibility. We must emphasize that in this model, it is possible for nodes within the WMN to identify the set of contending links, which is difficult, if not infeasible, with the clique model.
Similarly, the medium-capacity vector B can be redefined as B[i] = B(ui), where B(ui) is the available bandwidth of collision domain ui. Equation 5 then remains unaltered, though using the collision-domain definitions of M and B.
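As an illustration, the collision-domain computation for a parking-lot chain (every node sourcing one backlogged flow to a gateway at one end) can be sketched as follows. The function and its parameters, including the assumption that a link's collision domain spans links within two hops, are illustrative simplifications rather than the exact model used in our evaluation.

```python
def collision_domain_fair_rate(num_nodes, link_capacity, cd_span=2):
    """Collision-domain model for a parking-lot chain: node i (1..n) is
    i hops from the gateway, so link i (from node i toward the gateway)
    relays the flows of nodes i..n. A link's collision domain holds all
    links within cd_span hops, all of which must be silent while it
    transmits. The fair per-flow rate is the capacity of the busiest
    collision domain divided by its load in flow-traversals."""
    n = num_nodes
    flows_on_link = {i: n - i + 1 for i in range(1, n + 1)}
    worst_load = 0
    for i in range(1, n + 1):
        domain = [j for j in range(1, n + 1) if abs(j - i) <= cd_span]
        load = sum(flows_on_link[j] for j in domain)  # transmissions per round
        worst_load = max(worst_load, load)
    return link_capacity / worst_load
```

For the three-hop parking lot, all three links fall in one collision domain carrying 3 + 2 + 1 = 6 transmissions per delivered round, so each flow's fair share is one sixth of the one-hop MAC-layer capacity.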
7.1.3 Network state constraints and fairness criterion
The network state constraints require that no flow be allocated a rate higher than its desired rate. Thus, if the bandwidth requested by a stream s is G(s), then ∀ s R(s)≤G(s). As previously discussed, in this model, we only consider either inactive streams (G(s)=0) or TCP streams with infinite backlog (G(s)=∞).
Finally, the fairness criterion that we consider is max-min fairness. This imposes an additional constraint that no rate R(si) can increase at the expense of R(sj) if R(si) > R(sj).
The resulting computational problem is to maximize the bandwidth allocation vector R, while satisfying the set of constraints described in the above sections.
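This maximization can be carried out by progressive filling: raise all unfrozen stream rates in lockstep, freezing the streams that touch a constraint as it saturates. The sketch below assumes the constraint matrix (rows of M) and the capacity vector B have already been constructed, and that all streams are backlogged (G(s)=∞); the discrete step size is an illustrative simplification.

```python
def max_min_rates(usage, capacities, num_streams, step=1e-3):
    """Progressive filling for max-min fairness.
    usage[c][s]   = load stream s places on constraint c (rows of M);
    capacities[c] = available bandwidth of constraint c (vector B)."""
    rates = [0.0] * num_streams
    frozen = [False] * num_streams
    while not all(frozen):
        # raise every unfrozen stream's rate by one step
        for s in range(num_streams):
            if not frozen[s]:
                rates[s] += step
        # freeze every stream that touches a saturated constraint
        for c, cap in enumerate(capacities):
            load = sum(usage[c][s] * rates[s] for s in range(num_streams))
            if load >= cap:
                for s in range(num_streams):
                    if usage[c][s] > 0:
                        frozen[s] = True
    return rates
```

With two streams sharing one constraint of capacity 2, both converge to rate 1; if one of them is additionally capped at 0.5 by a second constraint, it freezes there and the other stream absorbs the remainder, converging to 1.5, which is the max-min outcome.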
7.1.4 Model comparison with ns-2
Having described the two computational models, we now compare the capacity achievable with these models against the model implemented in ns-2. To do so, we devised experiments that determine, for a given topology and set of active streams, the max-min fair-share points. We compare these experimentally determined values with those computed using the two models to determine the accuracy of the computation. Given enough topologies and stream variations, we can then determine the statistical accuracy of the models.
The experiment we created is as follows: for a given set of streams in a given network topology, we simulate, using ns-2, source-rate limiting those streams over a range of rates from 50% to 150% of the computed fair-share rate. To avoid TCP complications, UDP data is used, with 1,500-byte packets. This simulation is executed five times, with a different random seed each time; the five results are averaged to give an expected throughput for each stream at any given input rate.
To determine the accuracy of the computational models, we define the fair-share points as follows: the fair-share point for bottleneck i is the point at which the throughput of more than one third of the streams constrained by bottleneck i falls more than 5% below the input source rate. All streams constrained by a lesser bottleneck must be capped when running the relevant simulation. We require the drop of more than 5% to hold for four successive data points and take the first of those four points as the point of loss for that stream. While this definition may seem somewhat arbitrary, several variations that we tried (e.g., 8% loss by 20% of the streams) all pointed to approximately the same fair-share point, and visual inspection of the plots suggests that this definition is reasonable.
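This definition translates directly into a simple scan over the measured throughput curves. The function below is an illustrative sketch with hypothetical names; it assumes the input rates are given in ascending order and that streams constrained by lesser bottlenecks have already been capped.

```python
def fair_share_point(input_rates, throughputs, frac=1/3, loss=0.05, run=4):
    """Return the fair-share point: the first input rate at which more
    than `frac` of the streams see throughput more than `loss` below the
    input rate, sustained over `run` successive rate points.
    input_rates:       ascending list of source rates;
    throughputs[i][s]: measured throughput of stream s at input_rates[i]."""
    n_streams = len(throughputs[0])
    hits = []
    for rate, per_stream in zip(input_rates, throughputs):
        lossy = sum(1 for t in per_stream if t < rate * (1 - loss))
        hits.append(lossy > frac * n_streams)
    for i in range(len(hits) - run + 1):
        if all(hits[i:i + run]):
            return input_rates[i]   # first point of the sustained run
    return None                     # no fair-share point in this rate range
```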
Given this definition, we executed this experiment over 50 random topologies in a 1000×1000-m area with between 25 and 40 streams for each topology, both with and without the RTS/CTS protocol. We then compute the average error in each computation model, together with the standard deviation.
Table: Random topology accuracy results, reporting the average and standard deviation of the relative error (Ocl - fp)/fp for the clique model and (Rcd - fp)/fp for the collision-domain model, where fp is the experimentally determined fair-share point.
As is apparent, both models are reasonably accurate at predicting the first fair-share point, generally slightly underestimating the capacity and staying within about 10% deviation. The simpler collision-domain model is only marginally less accurate than the more complex clique model and is thus quite sufficient for our purpose of estimating the fair rates for different topologies in Section 5. Finally, we note that while this experiment was performed on a 36-node network, we have corroborated this observation and found it consistent across a large number of random, chain, and grid topologies.
- Gambiroza V, Sadeghi B, Knightly E: End-to-end performance and fairness in multihop wireless backhaul networks. In Proc. of the ACM MobiCom ’04. New York: ACM; 2004:287-301.
- Garetto M, Salonidis T, Knightly E: Modeling per-flow throughput and capturing starvation in CSMA multi-hop wireless networks. In Proc. of the IEEE INFOCOM ’06, Barcelona. New York: IEEE; 2006:1-13.
- Shi J, Gurewitz O, Mancuso V, Camp J, Knightly E: Measurement and modeling of the origins of starvation in congestion controlled mesh networks. In Proc. of the IEEE INFOCOM ’08. New York: IEEE; 2008:1633-1641.
- Raniwala A, De P, Sharma S, Krishnan R, Chiueh T: End-to-end flow fairness over IEEE 802.11-based wireless mesh networks. In Proc. of the IEEE INFOCOM Mini-Conference. New York: IEEE; 2007:2361-2365.
- Aziz A, Starobinski D, Thiran P, Fawal AE: EZ-Flow: removing turbulence in IEEE 802.11 wireless mesh networks without message passing. In Proc. of the ACM CoNEXT ’09. New York: ACM; 2009:73-84.
- Floyd S, Jacobson V: Random early detection gateways for congestion avoidance. IEEE/ACM Trans. Netw. August 1993, 1(4):397-413. 10.1109/90.251892
- Bertsekas DP, Gallager RG: Data Networks. Upper Saddle River: Prentice-Hall; 1992.
- Demers AJ, Keshav S, Shenker S: Analysis and simulation of a fair queueing algorithm. In Proc. of the ACM SIGCOMM ’89. New York: ACM; 1989:1-12.
- Jun J, Sichitiu ML: The nominal capacity of wireless mesh networks. IEEE Wireless Commun. October 2003, 8-14.
- Li J, Blake C, Couto DSJD, Lee HI, Morris R: Capacity of ad hoc wireless networks. In Proc. of the ACM MobiCom ’01. New York: ACM; 2001:61-69.
- Zhai H, Fang Y: Distributed flow control and medium access in multihop ad hoc networks. IEEE Trans. Mobile Comput. November 2006, 5(11):1503-1514.
- Misra S, Ghosh TI, Obaidat MS: Routing bandwidth guaranteed paths for traffic engineering in WiMAX mesh networks. Wiley Int. J. Commun. Syst. 2013. 10.1002/dac.2518
- Salonidis T, Garetto M, Saha A, Knightly E: Identifying high throughput paths in 802.11 mesh networks: a model-based approach. In Proc. of the ICNP ’07. New York: IEEE; 2007:21-30.
- Bharghavan V, Demers A, Shenker S, Zhang L: MACAW: a media access protocol for wireless LANs. In Proc. of the ACM SIGCOMM ’94. New York: ACM; 1994:249-256.
- Nandagopal T, Kim T, Gao X, Bharghavan V: Achieving MAC layer fairness in wireless packet networks. In Proc. of the ACM MobiCom ’00. New York: ACM; 2000:87-98.
- Tassiulas L, Sarkar S: Maxmin fair scheduling in wireless networks. In Proc. of the IEEE INFOCOM ’02, vol. 2. New York: IEEE; 2002:763-772.
- Krishna P, Misra S, Obaidat MS, Saritha V: Virtual backoff algorithm: an enhancement to 802.11 medium-access control to improve the performance of wireless networks. IEEE Trans. Vehicular Technol. 2010, 59(3):1068-1075.
- Nawab F, Jamshaid K, Shihada B, Ho PH: TMAC: timestamp-ordered MAC for CSMA/CA wireless mesh networks. In Proc. of the IEEE ICCCN ’11. New York: IEEE; 2011.
- Li T, Leith DJ, Badarla V, Malone D, Cao Q: Achieving end-to-end fairness in 802.11e based wireless multi-hop mesh networks without coordination. Mobile Netw. Appl. February 2011, 16(1):17-34.
- Fu Z, Luo H, Zerfos P, Lu S, Zhang L, Gerla M: The impact of multihop wireless channel on TCP performance. IEEE Trans. Mobile Comput. March/April 2005, 4(2):209-221.
- ElRakabawy SM, Lindemann C: A practical adaptive pacing scheme for TCP in multihop wireless networks. IEEE/ACM Trans. Netw. August 2011, 19(4):975-988.
- Rangwala S, Jindal A, Jang KY, Psounis K, Govindan R: Neighborhood-centric congestion control for multi-hop wireless mesh networks. IEEE/ACM Trans. Netw. December 2011, 19(6):1797-1810.
- Jain K, Padhye J, Padmanabhan VN, Qiu L: Impact of interference on multi-hop wireless network performance. In Proc. of the ACM MobiCom ’03. New York: ACM; 2003:66-80.
- Rangwala S, Gummadi R, Govindan R, Psounis K: Interference-aware fair rate control in wireless sensor networks. In Proc. of the ACM SIGCOMM ’06. New York: ACM; 2006:63-74.
- Misra S, Tiwari V, Obaidat MS: LACAS: learning automata-based congestion avoidance scheme for healthcare wireless sensor networks. IEEE J. Selected Areas Commun. 2009, 27(4):466-479.
- Wan CY, Eisenman S, Campbell A: CODA: congestion detection and avoidance in sensor networks. In Proc. of the ACM SenSys ’03. New York: ACM; 2003:266-279.
- Yi Y, Shakkottai S: Hop-by-hop congestion control over a wireless multi-hop network. In Proc. of the IEEE INFOCOM ’04, vol. 4. New York: IEEE; 2004.
- Luo H, Cheng J, Lu S: Self-coordinating localized fair queueing in wireless ad hoc networks. IEEE Trans. Mobile Comput. January 2004, 3(1):86-98.
- Lin D, Morris R: Dynamics of random early detection. In Proc. of the ACM SIGCOMM ’97, vol. 27. New York: ACM; 1997:127-137.
- Xu K, Gerla M, Qi L, Shu Y: TCP unfairness in ad hoc wireless networks and a neighborhood RED solution. Wireless Netw. July 2005, 11(4):383-399.
- Rappaport TS: Wireless Communications: Principles and Practice. Upper Saddle River: Prentice Hall; 2002.
- Floyd S, Henderson T, Gurtov A: The NewReno modification to TCP’s fast recovery algorithm. RFC 3782, Internet Engineering Task Force (Proposed Standard); April 2004. http://tools.ietf.org/html/rfc3782. Accessed 1 June 2013
- Jain R, Chiu DM, Hawe W: A quantitative measure of fairness and discrimination for resource allocation in shared computer system. Technical report, DEC Research Report TR-301; September 1984.
- Jamshaid K, Ward PA: Experiences using gateway-enforced rate-limiting techniques in wireless mesh networks. In Proc. of the IEEE WCNC ’07. New York: IEEE; 2007:3725-3730.
- Zhang L, Chen S, Jian Y: Achieving global end-to-end maxmin in multihop wireless networks. In Proc. of the IEEE ICDCS ’08. New York: IEEE; 2008:225-232.
- Jamshaid K, Li L, Ward PA: Gateway rate control of wireless mesh networks. In Proc. of the WiMeshNets. Gent: ICST; 2006.
- Li L, Ward PA: Structural unfairness in 802.11-based wireless mesh networks. In Proc. of the IEEE CNSR ’07. New York: IEEE; 2007:213-220. 10.1109/CNSR.2007.60
- Williamson C: Internet traffic measurement. IEEE Internet Comput. November/December 2001, 5(6):70-74.
- Gupta R, Walrand J: Approximating maximal cliques in ad-hoc networks. In Proc. of the IEEE PIMRC ’04, vol. 1. New York: IEEE; 2004:365-369.
- ISI: The network simulator - ns-2. http://www.isi.edu/nsnam/ns. Accessed 1 June 2013
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.