This paper proposes a queuing architecture that supports both elastic and inelastic traffic. In this architecture, each transmitting node maintains a single priority queue, which holds all packets whose routes traverse that node. To reduce the experienced delay, a virtual queue algorithm is used: virtual queues are served at a fraction of the actual service rate, and the virtual queue-length values are used in the utility function. An optimization framework then allows the scheduling algorithm to allocate network resources fairly between elastic and inelastic flows. Finally, a priority-dropping active queue management algorithm based on a proportional-integral-derivative (PID) mechanism is applied. This algorithm provides differentiated service for different layers or frames according to their priority: when network congestion arises, the lowest-priority packet is dropped first, then the next lowest-priority packet, and so on.

### 3.1 System design

The system design of the proposed work consists of several stages: the virtual queue algorithm, the scheduler and congestion controller, and the active queue management algorithm. These stages occur one after the other, as shown in Figure 1.

#### 3.1.1 Virtual queue algorithm

Packets from inelastic flows have strict priority over their elastic counterparts because inelastic applications are delay sensitive. Hence, inelastic flows do not see the elastic flows in the queues they traverse. However, in some situations the link may be critically loaded by the inelastic traffic itself, resulting in large delays, and elastic traffic also has some (looser) delay constraints. By applying virtual queues, which are served at a fraction of the actual service rate, and using the virtual queue-length values in the utility function, the experienced delay can be reduced.

#### 3.1.2 Joint congestion control and load balancing algorithm

The joint congestion control and load balancing algorithm[21] is used to maximize the utility of elastic traffic while guaranteeing support for inelastic traffic. Consider the fluid model, in which dynamic behaviour and randomness are ignored. The elastic and inelastic traffic flows are illustrated in Figure 2. The load balancing algorithm transfers inelastic flows to less heavily loaded routes in order to maximize network utility for elastic flows.

Here, a source must know the queue information along its entire route. This queue information is propagated to the source hop by hop, and stability is achieved even though the information is delayed. First, the virtual queues are evolved for both elastic and inelastic flows. Then, congestion control for elastic flows and load balancing for inelastic flows are performed using the equations developed by Li et al.[21].

**Algorithm:**

Step 1: Virtual queue evolution for a link *l* is given by

\dot{\theta}_{l}\left(t\right)={\left({z}_{l}\left(t\right)+{y}_{l}\left(t\right)-{\alpha}_{1}{c}_{l}\right)}_{{\theta}_{l}\left(t\right)}^{+}

(1)

where *t* is the continuous time index, *y*_{l} and *z*_{l} denote the aggregated elastic and inelastic rates on link *l*, and *c*_{l} is the capacity of link *l* ∈ *L*. *α*_{1} and *α*_{2} are the parameters of the two virtual queues, which control the total load and the inelastic flow load, respectively; the projection keeps the virtual queue lengths nonnegative.

The virtual queue evolution for the inelastic flow on link *l* is given by

\dot{\gamma}_{l}\left(t\right)={\left({z}_{l}\left(t\right)-{\alpha}_{2}{c}_{l}\right)}_{{\gamma}_{l}\left(t\right)}^{+}

(2)
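The two virtual-queue evolutions above can be sketched as a simple Euler discretization. The step size, rates and capacities below are illustrative assumptions, not values from the paper:

```python
def virtual_queue_step(theta, gamma, y, z, alpha1, alpha2, c, dt=0.1):
    """One Euler step of the virtual-queue dynamics (1) and (2).

    theta tracks the total (elastic + inelastic) load against alpha1 * c;
    gamma tracks the inelastic load against alpha2 * c.  The max(0, .)
    is the projection that keeps both queue lengths nonnegative.
    """
    theta = max(0.0, theta + dt * (z + y - alpha1 * c))
    gamma = max(0.0, gamma + dt * (z - alpha2 * c))
    return theta, gamma

# Illustrative run: a link of capacity 10 with virtual capacities 0.9c and
# 0.5c; both virtual queues grow because both loads exceed their targets.
theta, gamma = 0.0, 0.0
for _ in range(100):
    theta, gamma = virtual_queue_step(theta, gamma, y=4.0, z=6.0,
                                      alpha1=0.9, alpha2=0.5, c=10.0)
```

Because the virtual queues build up before the real link saturates, the congestion controller reacts earlier and the experienced delay drops.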

Step 2: Congestion controller for elastic flow

{x}_{e}\left(t\right)={\left({U}_{e}^{\prime}\right)}^{-1}\left({S}_{{R}_{c}}\left(t\right)\right)

(3)

where {S}_{{R}_{c}}\left(t\right) is the aggregated virtual queue length along the route of elastic flow *e* and *U*_{e} is its utility function.
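For a concrete instance of (3), assume the standard logarithmic utility U(x) = log x (an illustrative choice, not a value fixed by the paper); then U′(x) = 1/x and the controller reduces to x_e = 1/S:

```python
def elastic_rate(s_rc, u_prime_inv=lambda s: 1.0 / s):
    """Congestion controller (3): the rate is the inverse marginal
    utility evaluated at the aggregated virtual queue length.

    With U(x) = log(x), U'(x) = 1/x, so (U')^{-1}(s) = 1/s: a long
    virtual queue (a high congestion price) yields a low sending rate.
    """
    return u_prime_inv(s_rc)

low_price_rate = elastic_rate(0.5)   # lightly loaded route -> 2.0
high_price_rate = elastic_rate(4.0)  # congested route -> 0.25
```

Any strictly concave utility works the same way; only `u_prime_inv` changes.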

Step 3: Load balancing for inelastic flows

The rate of inelastic flow *i* on route *r* evolves according to

\dot{x}_{i}^{\left(r\right)}\left(t\right)={\left({\mu}_{i}^{\prime}\left(t\right)-{\mu}_{{R}_{i}^{\left(r\right)}}\left(t\right)\right)}_{{x}_{i}^{\left(r\right)}\left(t\right)}^{+}

(4)

where {\mu}_{i}^{\prime}\left(t\right) satisfies \sum _{r=1}^{\left|{R}_{i}\right|}{\left({\mu}_{i}^{\prime}\left(t\right)-{\mu}_{{R}_{i}^{\left(r\right)}}\left(t\right)\right)}_{{x}_{i}^{\left(r\right)}\left(t\right)}^{+}=0 and \sum _{r=1}^{\left|{R}_{i}\right|}{x}_{i}^{\left(r\right)}\left(0\right)={a}_{i}, where *a*_{i} denotes the arrival rate of inelastic flow *i*.
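One Euler step of this load balancing rule can be sketched as follows; the route prices and step size are illustrative, and μ′ is found by bisection so that the projected derivatives sum to zero, which keeps the total rate equal to a_i:

```python
def load_balance_step(x, mu, dt=0.1):
    """One Euler step of (4): shift inelastic rate toward cheaper routes.

    x[r]  : current rate on route r (the rates sum to a_i)
    mu[r] : congestion price of route r
    mu' is chosen by bisection so the projected derivatives sum to zero,
    keeping sum(x) constant while draining the more expensive routes.
    """
    def deriv(mu_p):
        # projection at the boundary: an empty route cannot lose rate
        return [max(0.0, mu_p - m) if xr <= 0.0 else mu_p - m
                for xr, m in zip(x, mu)]

    lo, hi = min(mu), max(mu)
    for _ in range(60):                       # bisection on mu'
        mid = 0.5 * (lo + hi)
        if sum(deriv(mid)) > 0.0:
            hi = mid
        else:
            lo = mid
    d = deriv(0.5 * (lo + hi))
    return [max(0.0, xr + dt * dr) for xr, dr in zip(x, d)]

# Two routes carrying a_i = 10 in total; route 1 is pricier, so rate
# migrates from route 1 to route 0 while the total stays 10.
x = load_balance_step([5.0, 5.0], mu=[1.0, 3.0])
```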

#### 3.1.3 Scheduler and congestion controller

Let *S*_{il} and *S*_{el} be the numbers of inelastic and elastic packets, respectively, that can be scheduled for transmission at link *l* in time slot *t* ∈ {1, 2, …, *T*}.

Let *S*(*a*_{i}, *c*) be the set of feasible schedules, where *c* is the channel state.

In the congestion control algorithm[20], the queue length of elastic flows and the deficit counter of inelastic flows at link *l* are given by *q*_{l}(*k*) and *d*_{l}(*k*), respectively. Here, *k* indexes the current frame, which is composed of *T* time slots *t*. The congestion control algorithm is given by

{\tilde{x}}_{\mathit{el}}^{*}\left(k\right)\in \underset{0\le {x}_{\mathit{el}}\le {X}_{\text{max}}}{\text{arg max}}\ \frac{1}{\epsilon}{U}_{l}\left({x}_{\mathit{el}}\right)-{q}_{l}\left(k\right)\,{x}_{\mathit{el}}

(5)

The elastic arrival rate, a nonnegative real number, is converted into a nonnegative integer that indicates the number of elastic packets allowed to enter the network in a given frame *k*. Assume that the elastic arrival at link *l*, *a*_{el}(*k*), is a random variable and that Pr denotes probability; it satisfies Pr(*a*_{el}(*k*) = 0) > 0 and Pr(*a*_{el}(*k*) = 1) > 0 for all *l* ∈ *L* and all *k*. These assumptions guarantee that the Markov chain defined below is irreducible and aperiodic.
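One common way to perform this real-to-integer conversion while preserving the mean rate is randomized rounding; this is an illustrative choice, not necessarily the paper's exact scheme:

```python
import random

random.seed(0)  # illustrative seed, for reproducibility only

def packets_this_frame(x_el):
    """Randomized rounding: the returned integer has mean exactly x_el.

    A rate of 2.3 admits 3 packets with probability 0.3 and 2 packets
    otherwise, so the packet count is random yet unbiased.
    """
    base = int(x_el)              # floor, since rates are nonnegative
    frac = x_el - base
    return base + (1 if random.random() < frac else 0)

# Over many frames the average admitted count converges to the real rate.
counts = [packets_this_frame(2.3) for _ in range(10000)]
```

For rates in (0, 1) this scheme also yields Pr(a = 0) > 0 and Pr(a = 1) > 0, matching the assumptions above.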

Let the number of inelastic arrivals be *a*_{i}(*k*) and the channel state be *c*(*k*). The scheduling algorithm is given by

\begin{array}{rl}{\tilde{s}}^{*}\left({a}_{i}\left(k\right),c\left(k\right),d\left(k\right),q\left(k\right)\right)\in & \underset{s\in S\left({a}_{i}\left(k\right),c\left(k\right)\right)}{\text{arg max}}\sum _{l\in L}\left\{\left[\frac{1}{\epsilon}{w}_{l}+{d}_{l}\left(k\right)\right]\sum _{t=1}^{T}{S}_{\mathit{il},t}\right.\\ & \left.+\,{q}_{l}\left(k\right)\sum _{t=1}^{T}{S}_{\mathit{el},t}\right\}\end{array}

(6)
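Equation 6 is a max-weight rule: among the feasible schedules it picks the one maximizing deficit-weighted inelastic service plus queue-weighted elastic service. A brute-force sketch over an explicitly enumerated feasible set (the tiny two-link schedule set and all numbers below are illustrative):

```python
def max_weight_schedule(feasible, w, d, q, eps=0.1):
    """Pick the schedule maximizing the objective of (6).

    feasible : list of schedules; each maps link l -> (inelastic_pkts,
               elastic_pkts) served over the frame (the sums over t)
    w, d, q  : per-link weight, deficit counter and elastic queue length
    """
    def weight(s):
        return sum((w[l] / eps + d[l]) * s_il + q[l] * s_el
                   for l, (s_il, s_el) in s.items())
    return max(feasible, key=weight)

# Two links, two candidate schedules: serve link 0 or serve link 1.
# Link 0 has a large deficit counter, so serving it wins.
feasible = [{0: (2, 1), 1: (0, 0)}, {0: (0, 0), 1: (1, 2)}]
best = max_weight_schedule(feasible, w={0: 1, 1: 1},
                           d={0: 5, 1: 0}, q={0: 1, 1: 4})
```

A real implementation would search the feasible set implicitly rather than enumerate it, but the weighting is the same.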

Here, the number of inelastic arrivals at link *l*, *a*′_{il}(*k*), is a binomial random variable with parameters *a*_{il}(*k*) and 1 - *p*_{l}. The quantity *a*′_{il}(*k*) can be generated by the network as follows: on each inelastic packet arrival, toss a coin with probability of heads equal to 1 - *p*_{l}; if the outcome is heads, add 1 to the deficit counter. The optimal scheduler is a function of *a*_{i}(*k*), *c*(*k*), *d*(*k*) and *q*(*k*). *d*_{l}(*k*) is interpreted as a virtual queue that counts the deficit in service for link *l* needed to achieve a loss probability due to deadline expiry less than or equal to *p*_{l}.
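The coin-toss deficit update described above can be sketched per frame as follows; p_l, the arrival count and the service count are illustrative values:

```python
import random

random.seed(1)  # illustrative seed, for reproducibility only

def update_deficit(d_l, inelastic_arrivals, served, p_l):
    """Deficit counter d_l for link l over one frame.

    Each arriving inelastic packet increments the counter with
    probability 1 - p_l (the coin toss), so the link is only asked to
    serve a 1 - p_l fraction of arrivals; packets actually served drain
    the counter.  Keeping d_l stable then bounds the deadline-expiry
    loss probability by p_l.
    """
    for _ in range(inelastic_arrivals):
        if random.random() < 1.0 - p_l:   # heads: demand one more service
            d_l += 1
    return max(0, d_l - served)

# 100 arrivals with p_l = 0.2: about 80 coin tosses come up heads,
# and 30 served packets drain the counter.
d = update_deficit(0, inelastic_arrivals=100, served=30, p_l=0.2)
```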

#### 3.1.4 PID control

PID[22] is a powerful controller composed of proportional, integral and derivative terms. PID computes a control action based on the input state and feedback gain multipliers that govern stability, error and responsiveness. The integral term removes the steady-state error, but it can slow the response by almost an order of magnitude; the derivative term helps to reduce the overshoot and the settling time. The network feedback control loop based on PID is shown in Figure 3[23].

Here *q*_{0} is the expected (target) queue length, *q* is the instantaneous queue length and *e* = *q* - *q*_{0} is the error signal. The input to the PID controller is *e*, and its output is *p*, the packet loss rate at that time.

The PID control system estimates the packet loss rate *p* for every arriving packet based on the variation of the router's queue length. The source detects the packet loss rate after one link delay, judges the congestion state according to *p* and adjusts its sending rate so as to control the queue length at the router. The dropping probability *p* is given by:

p=\left\{\begin{array}{cc}0& p<0\\ p& 0\le p\le 1\\ 1& p>1\end{array}\right.

(7)

Equation 7 ensures that *p* always lies between 0 and 1.
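The PID computation of p from the queue-length error can be sketched in discrete time as follows; the gains Kp, Ki and Kd are illustrative tuning assumptions, not values from the paper, and the final clamp implements Equation 7:

```python
class PIDDropper:
    """PID AQM: maps the queue-length error to a drop probability."""

    def __init__(self, q0, kp=0.001, ki=0.0001, kd=0.002):
        self.q0, self.kp, self.ki, self.kd = q0, kp, ki, kd
        self.integral = 0.0
        self.prev_e = 0.0

    def drop_probability(self, q):
        e = q - self.q0                    # error signal e = q - q0
        self.integral += e                 # I term: removes steady-state error
        deriv = e - self.prev_e            # D term: damps overshoot
        self.prev_e = e
        p = self.kp * e + self.ki * self.integral + self.kd * deriv
        return min(1.0, max(0.0, p))       # clamp to [0, 1], as in (7)

pid = PIDDropper(q0=100)
p_low = pid.drop_probability(80)    # queue below target: no drops
p_high = pid.drop_probability(300)  # queue far above target: drops likely
```

The proportional term reacts to the current backlog, the integral term to persistent congestion, and the derivative term to how fast the queue is growing.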

The implementation of priority dropping proceeds as follows. First, the packet priority number is assigned when the data is packetized in the application layer, and the number is written to the priority field of the packet; the priority number of other background flows is set to 0. The router maintains a packet queue, which is updated whenever packets enter or leave it. For each newly arriving packet, the dropping probability is calculated according to (7). If the current packet is selected for dropping, the queue is searched for a packet whose priority number is lower than that of the current packet. If such a lower-priority packet exists, it is dropped and the current packet enters the queue; otherwise, the current packet itself is dropped.
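The priority-dropping decision just described can be sketched as follows; the queue representation, function name and packet labels are assumptions for illustration:

```python
import random

def enqueue_with_priority_drop(queue, pkt, drop_prob, rng=random.random):
    """Priority dropping on packet arrival.

    queue : list of (priority, payload); a higher number means higher priority
    pkt   : the arriving (priority, payload) packet

    If the AQM decides to drop, a lower-priority packet already in the
    queue is evicted instead of the arrival when one exists; otherwise
    the arrival itself is dropped.
    """
    if rng() >= drop_prob:           # no drop: the packet simply enters
        queue.append(pkt)
        return queue
    victims = [i for i, (prio, _) in enumerate(queue) if prio < pkt[0]]
    if victims:                      # evict the lowest-priority packet
        lowest = min(victims, key=lambda i: queue[i][0])
        queue.pop(lowest)
        queue.append(pkt)
    return queue                     # else: drop the arriving packet

# Forced drop (drop_prob = 1): the background packet (priority 0) is
# evicted so the higher-priority arrival can enter.
q = [(0, "bg"), (2, "video-I")]
q = enqueue_with_priority_drop(q, (1, "video-P"), drop_prob=1.0,
                               rng=lambda: 0.0)
```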

### 3.2 Advantages

The main advantage of the proposed approach is that it is an effective queuing architecture that handles both elastic and inelastic traffic flows and assigns different dropping precedences to traffic of different priorities.