
An effective queuing architecture for elastic and inelastic traffic with different dropping precedence in MANET

Abstract

Mobile ad hoc network (MANET) is a group of self-organized mobile nodes that are associated with comparatively low bandwidth wireless links. This paper proposes an effective queuing architecture that supports both elastic and inelastic traffic. The packets of inelastic flows are always stored ahead of those of the elastic flows. If a link is critically loaded by inelastic traffic, large delays result, and elastic traffic may itself have delay constraints that are nonnegligible. The virtual queue algorithm reduces the experienced delay by introducing virtual queues that are served at a fraction of the actual service rate and by using the virtual queue-length values in the utility function. Then, an optimization framework is used in which a scheduling algorithm allocates resources fairly in the network to meet the fairness objectives of both elastic and inelastic flows. Finally, a priority dropping active queue management algorithm is designed based on a proportional-integral-derivative (PID) mechanism, which provides differentiated service for the different layers or frames according to their priority.

1 Introduction

Mobile ad hoc network (MANET) is a group of self-organized mobile nodes that are associated with comparatively low bandwidth wireless links. Each node has its own area of control, called a cell, within which alone other nodes can receive its transmissions. In MANET, there is no fixed infrastructure[1–3]. Consequently, as nodes are free to roam, the network topology may change rapidly and randomly over time, and the nodes automatically form their own cooperative infrastructure[4]. MANET has a range of applications such as video conferencing, rescue operations, military applications, disaster management, etc.

1.1 Queuing in mobile ad hoc network (MANET)

Queuing determines the discipline for ordering entities in a queue. It describes the way in which resources are divided among packets and the order in which they are served. Queuing mechanisms control the transmission process by indicating which packets must be transmitted and which packets must be dropped, and they affect packet latency, quantified by the waiting time of a packet in the queue. Some examples of queuing disciplines are first in, first out (FIFO); last in, first out (LIFO); priority queuing (PQ); shortest served first; service in random order; round robin; random exponential marking (REM); etc.[5].
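To make the distinction between disciplines concrete, the following minimal Python sketch (an illustration only, not taken from any of the cited schemes) contrasts FIFO order within a class with strict two-class priority queuing:

```python
from collections import deque

class TwoClassPriorityQueue:
    """Minimal two-class priority queue: high-priority packets are always
    dequeued before low-priority ones; within a class, order is FIFO."""

    def __init__(self):
        self.high = deque()
        self.low = deque()

    def enqueue(self, packet, high_priority=False):
        (self.high if high_priority else self.low).append(packet)

    def dequeue(self):
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

q = TwoClassPriorityQueue()
q.enqueue("elastic-1")
q.enqueue("inelastic-1", high_priority=True)
print(q.dequeue())  # prints "inelastic-1": served ahead of the elastic packet
```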

1.2 Advantages of queuing in mobile ad hoc network (MANET)

Queuing offers several advantages in MANET, since it determines the discipline for ordering entities in a queue; traffic distribution and packet scheduling strongly affect the performance of multipath routing in mobile ad hoc networks. Queuing can therefore reduce the resequencing delay: packet scheduling aims to assign packets in a proper order so that the resequencing delay is minimized[6]. It can also reduce transmission delay and packet loss in the network. Nodes should delay the establishment of new routes passing through them when their load level is high. With proper scheduling, the channel is shared fairly and the throughput remains high even at moderate mobility[7]. Scheduling can provide strict bandwidth allocation assurance since no transmission conflicts exist[8, 9].

1.3 Issues of queuing in mobile ad hoc network (MANET)

There are also some issues with queuing in MANET. Queuing techniques are the most important factor in service differentiation, and implementing a conventional priority queuing strategy in MANET is considerably complex. For example, a simple priority queue ensures that high-priority packets are given unconditional preference over low-priority packets, as proposed in the flexible quality of service (QoS) model for MANETs[10]; that model treats the buffer as a FIFO queue improved with a mechanism called random early discard with in/out buffer management. Similarly, service differentiation in stateless wireless ad hoc networks[11] also conceptually employs a priority queue but limits the amount of real-time traffic to protect lower-priority traffic from starvation[12].

1.4 Elastic traffic

Elastic traffic can adjust to wide-ranging changes in delay and throughput across the Internet and still meet the needs of its applications. It adjusts its throughput between end hosts in response to network conditions. Network load or congestion may cause packet loss; congestion occurs when the aggregated demand for a resource exceeds the available capacity of the resource[13]. To avoid this, the transmission control protocol (TCP) implements its congestion avoidance algorithm and reduces the rate at which packets are sent over the network.
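As an illustration of how an elastic source adapts its rate, the short Python sketch below implements a generic additive-increase/multiplicative-decrease (AIMD) window update; the constants and the function name are illustrative assumptions and do not correspond to any particular TCP variant:

```python
def aimd_update(cwnd, loss_detected, increase=1.0, decrease_factor=0.5):
    """Generic AIMD congestion-window update (window in packets per RTT):
    grow additively while no congestion is observed, halve on packet loss."""
    if loss_detected:
        return max(1.0, cwnd * decrease_factor)   # multiplicative decrease
    return cwnd + increase                        # additive increase

cwnd = 10.0
for loss in [False, False, True, False]:
    cwnd = aimd_update(cwnd, loss)
print(cwnd)  # the elastic rate has adapted to the observed congestion signal
```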

1.5 Inelastic traffic

Inelastic traffic does not easily adapt to changes in delay and throughput. Real-time multimedia (audio streaming, video, VoIP) is an example of inelastic traffic. Inelastic traffic needs special treatment, although elastic traffic could clearly also benefit from such treatment. In general, the quality of wireless links is affected by many factors such as collisions, fading or environmental noise[14, 15].

1.6 Previous work

A token-based resource allocation technique was proposed[16] for multiservice flows in MANET. In that technique, it is assumed that each node cycles through three states: noncritical section (NCS), entry section (ES) and critical section (CS). During deployment, a node is in the NCS state, and after receiving the unique token, it enters the CS state. The scheduler places resource request messages in different queues using a fuzzy-based flow prioritization technique. If the available resource exceeds the required resource, the scheduler allocates resources to the inelastic service flows until the inelastic queue becomes empty; the token is then passed to the queue that contains the elastic service flows. Simulation results showed that this approach allocates the resources efficiently.

As an extension, this work proposes an effective queuing architecture that can handle both elastic and inelastic traffic flows and assign different dropping precedences to traffic of different priorities. The results of this work show that the proposed architecture offers better fairness and delivery ratio with reduced delay and drop.

2 Related work

Guo and Kuo[6] discussed the problem of traffic assignment and packet scheduling in MANET. Their work proposes a packet scheduling framework to study the effect of the scheduling strategy on the resequencing delay. Two packet-scheduling schemes based on the optimal traffic distribution, uniform round scheduling (URS) and non-uniform round scheduling (NURS), were studied, and the analysis showed that the URS scheme outperforms NURS. Furthermore, by increasing the round length, the URS scheme further decreases the resequencing delay. The authors modelled every path as a multiple-node M/M/1 tandem network and assumed that the end-to-end path delay follows the normal distribution. Performance metrics such as end-to-end path delay and resequencing delay are discussed in this paper. When the average arrival rate λ is increased, the time spent in every queue increases, and the resequencing delay therefore also increases.

Patil et al.[7] proposed a cross-layer mechanism for scheduling. The cross-layer mechanism is able to overcome many QoS challenges caused by excessive channel sharing. By adopting a cross-layer approach to determine the order of the nodes, the packets are scheduled to give very high throughput. In this mechanism, even when packet loss makes retransmission necessary, the affected nodes still get sufficient service time, and the remaining nodes are not starved by the lost time. This technique significantly reduces latency and losses. It could be further improved by adopting a suitable bandwidth estimation mechanism as one of the scheduling parameters.

Cui and Wei[17] discussed the problem of efficiency and fairness in ad hoc networks. Their work proposes a novel and efficient contention-based backoff mechanism for wireless ad hoc networks, the adaptive efficiency-fairness tradeoff (AEFT) backoff algorithm. The authors increase the contention window when the channel is busy and use an adaptive window to reduce the backoff time when the channel is idle under fair scheduling. The fair scheduling mainly adopts limits on maximum successive transmissions and on collisions to enforce fairness. The algorithm provides a larger fairness index and a tradeoff between efficiency and fairness, and it can improve total throughput. Performance metrics such as backoff time, threshold and efficiency are discussed in the paper. The proposed algorithm still needs to address the continuous maximum successive transmission and the deferring or collision limit problem.

Shi et al.[18] discussed the problem of head-of-line (HOL) blocking in smart antenna systems in wireless ad hoc networks. The authors propose a novel directional network allocation vector-based packet scheduling (DBPS) algorithm. The proposed DBPS algorithm uses the DNAV information and chooses the fittest packet in the smart antenna system. It makes the most of the communication status of the neighbouring nodes and is more adaptive to the network topology. Hence, nodes can efficiently extend spatial reuse and address the HOL blocking problem. The proposed algorithm greatly improves the throughput and decreases the interference. Further work is needed to study the performance of the DBPS algorithm in more complex network topologies and to extend it to multihop scenarios.

Marbach[19] proposed a distributed scheduling and active queue management mechanism for wireless ad hoc networks. This approach is based on a random access scheduler in which the transmission-attempt probabilities depend on the local backlog. The mechanism is simple and can be implemented in a distributed fashion; it requires only a redefinition of the transmission probabilities at individual nodes, which can be done by redefining the contention window (CW) size of the current 802.11 protocol. The proposed algorithm shows high throughput and fair bandwidth allocation, but it suffers from the exposed terminal problem. Approaches that avoid this problem by improving the channel feedback still need to be investigated.

Jaramillo and Srikant[20] discussed the problem of congestion control and scheduling in ad hoc wireless networks that must support a combination of best-effort and real-time traffic. Their work proposes an optimization framework for congestion control and scheduling of elastic and inelastic traffic in ad hoc wireless networks. The authors presented a decomposition of the problem into an online algorithm that makes optimal decisions while keeping the network stable and satisfying the inelastic flows' QoS constraints. The scheduling problem for elastic and inelastic flows is handled in a common framework by using deficit counters. Performance metrics such as throughput are discussed in the paper. The channel state is considered constant during the entire frame, so study of this framework under unknown channel states is still needed. The traffic model for inelastic packets assumes that packets arrive at the beginning of the frame and all have the same delay; since in practice not all frames experience the same delay, the framework should also be examined with regard to differences in frame delay.

3 Proposed work

This paper proposes a queuing architecture that supports both elastic and inelastic traffic. In this architecture, a single priority queue is maintained at the transmitting node; it holds all the packets whose routes traverse that node. A virtual queue algorithm reduces the experienced delay by introducing virtual queues that are served at a fraction of the actual service rate and by using the virtual queue-length values in the utility function. Then, an optimization framework is used in which the scheduling algorithm allocates resources fairly in the network for both elastic and inelastic flows. Finally, a priority dropping active queue management algorithm based on a proportional-integral-derivative (PID) mechanism is applied. This algorithm provides differentiated service for the different layers or frames according to their priority. When network congestion arises, the lowest-priority packets are dropped first, then the next lowest priority, and so on.

3.1 System design

The system design of the proposed work consists of several steps: the virtual queue algorithm, the scheduler and congestion controller, and the active queue management algorithm. These steps occur one after the other, as shown in Figure 1.

Figure 1. System design.

3.1.1 Virtual queue algorithm

The packets from the inelastic flows have strict priority over their elastic counterparts because the inelastic applications are delay sensitive. Hence, the inelastic flows do not see the elastic flows in the queues that they traverse. However, in some situations, the link might be critically loaded by the inelastic traffic itself, resulting in huge delays, and the elastic traffic may also have delay constraints that cannot be ignored. By applying virtual queues, which are served at a fraction of the actual service rate, and using the virtual queue-length values in the utility function, the experienced delay can be reduced.

3.1.2 Joint congestion control and load balancing algorithm

The joint congestion control and load balancing algorithm[21] is used to maximize the utilization of elastic traffic while guaranteeing the support of inelastic traffic. Consider the fluid model, in which dynamic behaviour and randomness are ignored. The elastic and inelastic traffics are illustrated in Figure 2. The load balancing algorithm transfers the inelastic flows to less heavily loaded routes in order to provide maximum network utilization for the elastic flows.

Figure 2. Elastic and inelastic traffic.

Here, a source must have knowledge of all the queue information along its route. This queue information is conveyed hop by hop, and stability is achieved even though the information is delayed. Initially, virtual queues are evolved for both elastic and inelastic flows. After this, congestion control for elastic flows and load balancing for inelastic flows are performed using the equations developed by Li et al.[21].

Algorithm:

Step 1: Virtual queue evolution for a link l is given by

\dot{\theta}_l(t) = \left[ z_l(t) + y_l(t) - \alpha_1 c_l \right]^{+}_{\theta_l(t)}
(1)

where t is the continuous time index and the aggregated elastic and inelastic rates on link l are denoted by y_l(t) and z_l(t), respectively. The parameters α_1 and α_2 define the two types of virtual queues, which control the total load and the inelastic flow load, respectively, and c_l is the capacity of link l ∈ L.

Virtual queue evolution for a link l for inelastic flow is given by

\dot{\gamma}_l(t) = \left[ z_l(t) - \alpha_2 c_l \right]^{+}_{\gamma_l(t)}
(2)

Step 2: Congestion controller for elastic flow

x_e(t) = {U_e'}^{-1}\left( S_{R_e^c}(t) \right)
(3)

where S_{R_e^c}(t) is the aggregated virtual queue length along the route of elastic flow e and U_e is its utility function.

Step 3: Load balancing implemented for inelastic flow

The rate x_{ir} of inelastic flow i on route r evolves according to

\dot{x}_{ir}(t) = \left( \mu_i'(t) - \mu_{R_{ir}}(t) \right) x_{ir}(t)
(4)

where \mu_i'(t) satisfies \sum_{r=1}^{R_i} \left( \mu_i'(t) - \mu_{R_{ir}}(t) \right) x_{ir}(t) = 0 and \sum_{r=1}^{R_i} x_{ir}(0) = a_i, where a_i denotes the arrival rate of inelastic flow i.
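The following Python sketch shows one possible discretization of Steps 1 to 3, using a simple Euler step for the continuous-time dynamics and a logarithmic utility for the elastic flows; the function names, step size and choice of utility are illustrative assumptions, not part of the algorithm of Li et al.[21]:

```python
import numpy as np

def virtual_queue_step(theta, gamma, y, z, c, alpha1, alpha2, dt=0.01):
    """Euler step for the virtual queues of Equations 1 and 2: theta tracks the
    total (elastic plus inelastic) load against the virtual capacity alpha1*c,
    gamma tracks the inelastic load against alpha2*c; both stay nonnegative."""
    theta = max(0.0, theta + dt * (z + y - alpha1 * c))
    gamma = max(0.0, gamma + dt * (z - alpha2 * c))
    return theta, gamma

def elastic_rate(route_vq_length, w=1.0):
    """Congestion controller of Equation 3 for an illustrative log utility
    U_e(x) = w*log(x): the rate is the inverse marginal utility evaluated at
    the aggregated virtual queue length along the flow's route."""
    return w / max(route_vq_length, 1e-6)

def load_balance_step(x_ir, mu_route, dt=0.01):
    """Load-balancing step of Equation 4: routes whose price mu is above the
    flow's weighted-average price mu_i' lose inelastic traffic and cheaper
    routes gain it, while the total inelastic rate a_i is preserved."""
    x_ir = np.asarray(x_ir, dtype=float)
    mu_route = np.asarray(mu_route, dtype=float)
    mu_avg = np.dot(mu_route, x_ir) / max(x_ir.sum(), 1e-6)   # multiplier mu_i'
    x_new = np.maximum(0.0, x_ir + dt * (mu_avg - mu_route) * x_ir)
    return x_new * (x_ir.sum() / max(x_new.sum(), 1e-6))      # keep the sum at a_i

# Example: one step of each component with made-up values.
theta, gamma = virtual_queue_step(theta=0.0, gamma=0.0, y=1.2, z=0.5,
                                  c=2.0, alpha1=0.9, alpha2=0.8)
x_e = elastic_rate(route_vq_length=2.5)
x_routes = load_balance_step(x_ir=[0.6, 0.4], mu_route=[0.3, 0.1])
```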

3.1.3 Scheduler and congestion controller

Let S_il and S_el be the number of inelastic and elastic packets, respectively, that can be scheduled for transmission at link l in time slot t ∈ {1, 2, …, T}.

Let S(a_i, c) be the set of feasible schedules, where c is the channel state.

In the congestion control algorithm[20], the queue lengths of the elastic and inelastic flows at link l are denoted by q_l(k) and d_l(k), respectively. Here, k indexes the current frame, which is composed of the time slots t. The congestion control algorithm is given by

\tilde{x}_{el}^{*}(k) \in \arg\max_{0 \le x_{el} \le X_{\max}} \left[ \frac{1}{\epsilon} U_l(x_{el}) - q_l(k)\, x_{el} \right]
(5)

The elastic arrival rate, which is a nonnegative real number, is converted into a nonnegative integer. This integer indicates the number of elastic packets allowed to enter the network in a given frame k. Let the elastic arrival at link l in frame k be a_el(k), a random variable, and let Pr denote probability. It satisfies Pr(a_el(k) = 0) > 0 and Pr(a_el(k) = 1) > 0 for all l ∈ L and all k. These assumptions guarantee that the Markov chain defined below is irreducible and aperiodic.

Let the number of inelastic arrivals be a_i(k) and the channel state be c(k). The scheduling algorithm is given by

\tilde{s}^{*}\left( a_i(k), c(k), d(k), q(k) \right) \in \arg\max_{s \in S(a_i(k), c(k))} \sum_{l \in L} \left[ \left( \frac{1}{\epsilon} w_l + d_l(k) \right) \sum_{t=1}^{T} s_{il,t} + q_l(k) \sum_{t=1}^{T} s_{el,t} \right]
(6)

Here, the number of inelastic arrivals counted at link l, a′_il(k), is a binomial random variable with parameters a_il(k) and 1 − p_l. The quantity a′_il(k) can be generated by the network as follows: on each inelastic packet arrival, toss a coin with probability of heads equal to 1 − p_l, and if the outcome is heads, add 1 to the deficit counter. The optimal scheduler is a function of a_i(k), c(k), d(k) and q(k). Here, d_l(k) is interpreted as a virtual queue that counts the deficit in service needed for link l to achieve a loss probability due to deadline expiry of at most p_l.
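A minimal Python sketch of the congestion controller of Equation 5, the coin-toss (binomial) update of the deficit counters and the max-weight scheduler of Equation 6 is given below; the brute-force search over integer rates, the explicit list of feasible schedules and the log utility are simplifying assumptions made for illustration:

```python
import math
import random

def elastic_admission(q_l, utility, x_max, epsilon=0.1):
    """Congestion controller of Equation 5: choose the integer number of
    elastic packets x in [0, x_max] maximizing (1/epsilon)*U_l(x) - q_l*x."""
    return max(range(x_max + 1), key=lambda x: utility(x) / epsilon - q_l * x)

def counted_inelastic_arrivals(a_il, p_l):
    """Deficit-counter update: each of the a_il inelastic arrivals is counted
    with probability 1 - p_l, so the result a'_il is Binomial(a_il, 1 - p_l)."""
    return sum(1 for _ in range(a_il) if random.random() < 1.0 - p_l)

def schedule(feasible_schedules, w, d, q, epsilon=0.1):
    """Scheduler of Equation 6: among the feasible schedules, pick the one
    maximizing sum_l (w_l/epsilon + d_l)*inelastic_slots + q_l*elastic_slots.
    Each candidate schedule maps link -> (inelastic_slots, elastic_slots)."""
    def weight(s):
        return sum((w[l] / epsilon + d[l]) * si + q[l] * se
                   for l, (si, se) in s.items())
    return max(feasible_schedules, key=weight)

# Example with made-up numbers for a single link 'l1'.
x_star = elastic_admission(q_l=0.5, utility=lambda x: math.log(1 + x), x_max=10)
d_increment = counted_inelastic_arrivals(a_il=5, p_l=0.1)
best = schedule([{'l1': (3, 1)}, {'l1': (1, 3)}],
                w={'l1': 1.0}, d={'l1': 2.0}, q={'l1': 0.5})
```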

3.1.4 PID control

PID[22] is a powerful controller composed of proportional, integral and derivative parts. PID computes a control action based on the input state and feedback gain multipliers that control stability, error and response. The proportional-integral part avoids the steady-state error, but it decreases the responsiveness by almost one order of magnitude; the derivative part helps to reduce the overshoot and the settling time. The network feedback control based on PID is shown in Figure 3[23].

Figure 3. PID control system.

Here, q0 is the expected queue length, q is the instantaneous queue length and e = q - q0 is the error signal. p is the packet loss rate at a given time, which is the output of the PID controller; the input given to the PID controller is e.

The PID control system estimates the packet loss rate p for every arriving packet based on the variation of the router's queue length. The source detects the packet loss rate after one link delay time, then judges the congestion state according to p and adjusts its sending rate to control the router's queue length. The dropping probability p is given by:

p = \begin{cases} 0 & p < 0 \\ p & 0 \le p \le 1 \\ 1 & p > 1 \end{cases}
(7)

From Equation 7, it is clear that p always lies between 0 and 1.
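A minimal discrete-time Python sketch of such a PID controller is given below; the gain values are illustrative assumptions rather than tuned parameters from this work:

```python
class PIDDropController:
    """Discrete PID controller mapping the queue-length error e = q - q0 to a
    dropping probability p, clamped to [0, 1] as in Equation 7. The gains kp,
    ki and kd are illustrative, untuned values."""

    def __init__(self, q0, kp=0.001, ki=0.0001, kd=0.0005):
        self.q0, self.kp, self.ki, self.kd = q0, kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, q):
        error = q - self.q0                    # e = q - q0
        self.integral += error                 # accumulated (integral) error
        derivative = error - self.prev_error   # discrete derivative of the error
        self.prev_error = error
        p = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(1.0, max(0.0, p))           # clamp to [0, 1] per Equation 7

pid = PIDDropController(q0=50)
p = pid.update(q=80)  # queue above target, so a positive dropping probability
```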

The implementation process of priority dropping can be explained as follows. First, the packet priority number is defined when the data is packetized in the application layer, and the priority number is written to the priority field of the packet. The priority number of other background flows is set to 0. The router maintains a packet queue, which is updated whenever packets enter or leave it. For each newly arriving packet, the dropping probability is calculated according to Equation 7. If the current packet is determined to be dropped, the queue is searched for a packet whose priority number is lower than that of the current packet. If such a lower-priority packet exists, it is dropped and the current packet enters the queue; otherwise, the current packet itself is dropped.
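The enqueue and drop decision just described can be sketched as follows; the packet representation, the capacity check and the helper name are illustrative assumptions:

```python
import random

def enqueue_with_priority_dropping(queue, packet, p, capacity):
    """Priority-dropping enqueue sketch. 'packet' and queue entries are
    (priority, payload) tuples, where a larger number means higher priority.
    If the PID output p triggers a drop (or the queue is full), try to evict
    a lower-priority queued packet instead of the arriving one."""
    must_drop = len(queue) >= capacity or random.random() < p
    if not must_drop:
        queue.append(packet)
        return True
    # find the lowest-priority packet currently in the queue, if any
    victim = min(range(len(queue)), key=lambda i: queue[i][0], default=None)
    if victim is not None and queue[victim][0] < packet[0]:
        del queue[victim]          # drop the lower-priority queued packet
        queue.append(packet)       # admit the arriving higher-priority packet
        return True
    return False                   # otherwise the arriving packet is dropped

buffer = [(0, "background"), (2, "video-frame")]
admitted = enqueue_with_priority_dropping(buffer, (1, "audio"), p=0.9, capacity=2)
```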

3.2 Advantages

The main advantage of the proposed approach is that it provides an effective queuing architecture that handles both elastic and inelastic traffic flows and assigns different dropping precedences to traffic of different priorities.

4 Simulation results

NS-2[24] is used to simulate the proposed effective queuing architecture with different dropping precedence (EQADDP) technique. In the simulation, the channel capacity of all mobile hosts is set to the same value of 2 Mbps, and 100 mobile nodes move in a 1,500 m × 300 m rectangular region for different simulation times. Initial locations and movements of the nodes are obtained using the random waypoint (RWP) model of NS-2. Each node is assumed to move independently with the same average speed; in this mobility model, a node randomly selects a destination from the physical terrain. The simulation time varies from 10 to 50 s. The simulated traffic types are variable bit rate (VBR) and constant bit rate (CBR) for inelastic traffic and TCP for elastic traffic.

Simulation settings and parameters are summarized in Table 1.

Table 1 Simulation settings

4.1 Performance metrics

A comparative study was made to evaluate the performance of the proposed EQADDP technique against the optimal scheduling algorithm. The following metrics were used for performance evaluation (a short computation sketch follows the list):

– Average end-to-end delay: The end-to-end delay is averaged over all surviving data packets from the sources to the destinations.

– Average packet delivery ratio: It is the ratio of the number of packets received successfully to the total number of packets transmitted.

– Drop: It is the total number of packets dropped during the transmission.

– Bandwidth: It is the measure of received bandwidth for all traffic flows.

– Fairness: For each flow, the fairness index is measured as the ratio of the flow's throughput to the total number of flows.
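The sketch below shows one way these metrics can be computed from per-packet trace records; the record format and field names are assumptions for illustration and do not correspond to the actual NS-2 trace format:

```python
def performance_metrics(packets, flows):
    """Compute the metrics listed above from per-packet records. Each record is
    a dict with keys 'received' (bool), 'send_time', 'recv_time', 'size_bits'
    and 'flow'; these field names are illustrative."""
    delivered = [p for p in packets if p['received']]
    delivery_ratio = len(delivered) / len(packets)
    drops = len(packets) - len(delivered)
    avg_delay = sum(p['recv_time'] - p['send_time'] for p in delivered) / len(delivered)
    duration = max(p['recv_time'] for p in delivered) - min(p['send_time'] for p in packets)
    throughput = {f: sum(p['size_bits'] for p in delivered if p['flow'] == f) / duration
                  for f in flows}
    # Fairness index as defined above: each flow's throughput divided by the
    # total number of flows.
    fairness = {f: t / len(flows) for f, t in throughput.items()}
    return {'delivery_ratio': delivery_ratio, 'drops': drops, 'avg_delay': avg_delay,
            'throughput': throughput, 'fairness': fairness}
```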

The performance results are presented graphically in the next section.

4.2 Results

4.2.1 Based on rate

In the first experiment, the rate is varied as 100, 200, 300, 400 and 500 kb/s while the simulation time is kept constant at 50 s, and the selected metrics are measured. The results obtained for the proposed algorithm and the algorithm taken for comparison are shown in Tables 2 and 3, respectively. Figure 4 shows the received bandwidth of the EQADDP and optimal techniques for different rate scenarios; the bandwidth of the proposed EQADDP approach is 337% higher than that of the optimal approach. Figure 5 shows the fairness of the EQADDP and optimal techniques for different rate scenarios; the fairness of the proposed EQADDP approach is 337% higher than that of the optimal approach. Figure 6 shows the delivery ratio of the EQADDP and optimal techniques for different rate scenarios; the delivery ratio of the proposed EQADDP approach is 106% higher than that of the optimal approach. Figure 7 shows the delay of the EQADDP and optimal techniques for different rate scenarios; the delay of the proposed EQADDP approach is 47% less than that of the optimal approach. Figure 8 shows the drop of the EQADDP and optimal techniques for different rate scenarios; the drop of the proposed EQADDP approach is 27% less than that of the optimal approach.

Table 2 EQADDP
Table 3 Optimal
Figure 4. Rate vs bandwidth utilization.

Figure 5. Rate vs fairness.

Figure 6. Rate vs delivery ratio.

Figure 7. Rate vs delay.

Figure 8. Rate vs drop.

4.2.2 Based on time

In the second experiment, the simulation time is varied as 10, 20, 30, 40 and 50 s while the rate is kept constant at 100 kb/s, and the selected metrics are measured. The results obtained for the proposed algorithm and the algorithm taken for comparison are shown in Tables 4 and 5, respectively. Figure 9 shows the received bandwidth of the EQADDP and optimal techniques for different time scenarios; the bandwidth of the proposed EQADDP approach is 349% higher than that of the optimal approach. Figure 10 shows the fairness of the EQADDP and optimal techniques for different time scenarios; the fairness of the proposed EQADDP approach is 278% higher than that of the optimal approach. Figure 11 shows the delivery ratio of the EQADDP and optimal techniques for different time scenarios; the delivery ratio of the proposed EQADDP approach is 103% higher than that of the optimal approach. Figure 12 shows the delay of the EQADDP and optimal techniques for different time scenarios; the delay of the proposed EQADDP approach is 84% less than that of the optimal approach. Figure 13 shows the drop of the EQADDP and optimal techniques for different time scenarios; the drop of the proposed EQADDP approach is 43% less than that of the optimal approach.

Table 4 EQADDP
Table 5 Optimal
Figure 9. Simulation time vs bandwidth utilization.

Figure 10. Simulation time vs fairness.

Figure 11. Simulation time vs delivery ratio.

Figure 12. Simulation time vs delay.

Figure 13. Simulation time vs drop.

5 Conclusions

This paper proposed a queuing architecture for elastic and inelastic traffic. If a link is critically loaded by inelastic traffic, large delays may occur, and elastic traffic also has delay constraints. A virtual queue algorithm is used to reduce the delay by means of virtual queues and virtual queue-length values. An optimization framework is used in which the scheduling algorithm allocates resources fairly in the network. Based on priority, packets are classified as low-, medium- and high-priority data packets for drop precedence. Based on the PID mechanism, the priority dropping active queue management algorithm (PID_PD) provides differentiated service for the different layers or frames according to their priority. Simulation results proved that the proposed architecture offers better fairness and delivery ratio with reduced delay and drop.

References

1. Moon A, Cho H: Energy-efficient replication extended database state machine in mobile ad-hoc network. IADIS International Conference on Applied Computing 2004, 224-228.

2. Xing Z, Gruenwald L: Issues in designing concurrency control techniques for mobile ad-hoc network databases. Technical Report, School of Computer Science, University of Oklahoma; 2007:1-37.

3. Mukilan P, Wahi DA: EENMDRA: efficient energy and node mobility based data replication algorithm for MANET. IJCSI Int. J. Comput. Sci. Issues 2012, 9: 357-364.

4. Yen Y-S, Chao H-C, Chang R-S, Vasilakos A: Flooding-limited and multi-constrained QoS multicast routing based on the genetic algorithm for MANETs. Math. Comput. Model. 2011, 53: 2238-2250. doi:10.1016/j.mcm.2010.10.008

5. Hasson ST, Fadil E: Queuing approach to model the MANETs performance. Br. J. Sci. 2012, 6: 18-24.

6. Guo Y-F, Kuo G-S: A packet scheduling framework for multipath routing in mobile ad hoc networks. In Vehicular Technology Conference. IEEE; 2007:233-237.

7. Patil R, Damodaram A, Das R: Cross layer fair scheduling for MANET with 802.11 CDMA channels. In First Asian Himalayas International Conference. IEEE, Kathmandu; 2009:1-5.

8. Salonidis T, Tassiulas L: Distributed on-line schedule adaptation for balanced slot allocation in wireless ad hoc networks. In Quality of Service, Twelfth IEEE International Workshop. IEEE; 2004:20-29.

9. He Y, Yuan R, Sun J, Gong W: Semi-random backoff: towards resource reservation for channel access in wireless LANs. In 17th IEEE International Conference on Network Protocols. IEEE, Princeton; 2009:21-30.

10. Xiao H, Seah WKG, Lo A, Chua KC: A flexible quality of service model for mobile ad-hoc networks. In IEEE Vehicular Technology Conference. Tokyo; 2000:445-449.

11. Ahn G-S, Campbell AT, Veras A, Sun L-H: SWAN: service differentiation in stateless wireless ad hoc networks. Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies, IEEE 2002, 457-466.

12. Parvez J, Peer MA: A comparative analysis of performance and QoS issues in MANETs. World Acad. Sci. Technol. 2010, 48: 937-948.

13. Xiong N, Vasilakos AV, Yang LT, Wang C-X, Kannan R, Chang C-C, Pan Y: A novel self-tuning feedback controller for active queue management supporting TCP flows. Inform. Sci. 2010, 180: 2249-2263. doi:10.1016/j.ins.2009.12.001

14. Li P, Guo S, Yu S, Vasilakos AV: CodePipe: an opportunistic feeding and routing protocol for reliable multicast with pipelined network coding. IEEE Infocom 2012, 100-108.

15. Wang S, Vasilakos A, Jiang H, Ma X, Liu W, Peng K, Liu B, Dong Y: Energy efficient broadcasting using network coding aware protocol in wireless ad hoc network. In Communications (ICC), 2011 IEEE International Conference on. Kyoto; 2011:1-5.

16. Ambika I, Eswaran P: Improved token based resource allocation technique for multi-service flows in MANET. Int. Rev. Comput. Softw. 2013, 8: 1486-1496.

17. Cui H-X, Wei G: A novel backoff algorithm based on the tradeoff of efficiency and fairness for ad hoc networks. WRI Int. Conf. 2009, 2: 81-86.

18. Shi C, Dai X, Luo L, Cui M: A novel directional-NAV-based packets scheduling algorithm for ad hoc networks. In International Conference on Wireless Communications & Signal Processing. Nanjing; 2009:1-4.

19. Marbach P: Distributed scheduling and active queue management in wireless networks. In 26th IEEE International Conference on Computer Communications. IEEE; 2007:2321-2325.

20. Jaramillo JJ, Srikant R: Optimal scheduling for fair resource allocation in ad hoc networks with elastic and inelastic traffic. IEEE/ACM Trans. Networking 2011, 19: 1124-1136.

21. Li R, Ying L, Eryilmaz A, Shroff NB: A unified approach to optimizing performance in networks serving heterogeneous flows. IEEE/ACM Trans. Networking 2011, 223-236.

22. Xiong N, Jia X, Yang LT, Vasilakos AV: A distributed efficient flow control scheme for multirate multicast networks. IEEE Trans. Parallel Distr. Syst. 2010, 21: 1254-1266.

23. Xiaogang Y, Jiqiang L, Ning L, T-h K: Priority dropping for scalable video. Int. J. Multimed. Ubiquitous Eng. 2007, 119-129.

24. Network Simulator. http://www.isi.edu/nsnam/ns/


Author information

Corresponding author

Correspondence to Iyyapillai Ambika.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Ambika, I., Sadasivam, V.P. & Eswaran, P. An effective queuing architecture for elastic and inelastic traffic with different dropping precedence in MANET. J Wireless Com Network 2014, 155 (2014). https://doi.org/10.1186/1687-1499-2014-155
