General description of MCLA protocol
The MCLA protocol proposed in this paper is a distributed random-contention MAC protocol. It consists of five components: the multi-priority queueing and service mechanism, the packet admission control mechanism, the channel occupancy statistical prediction mechanism, the backoff mechanism, and the multi-channel allocation mechanism, as shown in Fig. 1. The channel occupancy statistical prediction mechanism estimates the channel busy-idle degree from the statistical history of the channel load [25]. Each node senses the channel load continuously with its receiver to obtain the predicted value, which is then used by the packet admission control mechanism and the backoff mechanism. The packet admission control mechanism controls the access of packets to the channels according to the preset threshold of each priority and the predicted channel occupancy state. The multi-channel dynamic allocation mechanism allocates channel resources to the different services rationally. In this paper, we mainly study the multi-priority queueing and service mechanism and the backoff mechanism of the protocol.
The fundamental principles of the MCLA protocol are described as follows:
(1) Each node in the network has a sending buffer of the same size. When information is generated, it is split into multiple packets of equal length and stored in the node buffer for transmission. Packets of the same priority wait in a separate queue.
(2) The total channel resource is divided into C parallel channels, and interference between different channels is ignored.
(3) Each node has one sending access and C receiving accesses. The system works in half-duplex mode.
(4) All nodes in the network have equal status, and each node can generate traffic of P priority classes. All packets have the same length and transmission rate. Furthermore, the highest-priority service has very strict QoS requirements on timeliness and reliability.
(5) In general, the admission of a packet to a channel is determined by the channel threshold of its priority and the current channel busy-idle degree. To guarantee the timeliness of the highest-priority packets, their admission is not controlled by a channel threshold; that is, no channel threshold is set for the highest priority. For every other priority, a separate channel threshold is set, and admission is decided by comparing the threshold of the packet's priority with the current channel busy-idle degree: if the current busy-idle degree is lower than the threshold, the packet can access the channel immediately; otherwise, it cannot be transmitted immediately and enters the backoff stage, after which the admission decision is made again (a sketch of this decision logic is given after this list).
(6) The MCLA protocol adopts a random access mechanism. Each node randomly chooses a channel with probability 1/C to send packets and can receive packets on all channels. If only one packet is transmitted in a channel during a slot time, it is transmitted successfully; if several packets are transmitted in a channel simultaneously, a collision occurs. The channel transmission delay of a burst is μ, so if the transmission interval between two bursts sent in the same channel is less than μ, a collision occurs, for example in channel c as shown in Fig. 2. In this case, if the received power of a packet at the receiver is larger than or equal to ρ times the sum of the received powers of the other packets, that packet can still be received successfully.
(7) When a collision happens, the node whose packet was transmitted unsuccessfully waits for Wbf slot times according to the backoff scheme and then sends the packet again. The value of Wbf is determined by the current number of active nodes in the network. The detailed description is given in Section 3.3.
(8) The transmission time of a packet over a single hop is one slot time.
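The following minimal Python sketch illustrates the admission, random access, and capture rules of items (5) and (6). The names (busy_idle_degree, thresholds, rho, and so on) are hypothetical and the power model is deliberately simplified; the protocol itself does not prescribe an implementation.

```python
import random

def admit(priority, busy_idle_degree, thresholds):
    """Admission control, item (5): priority 1 bypasses the threshold check;
    any other priority is admitted only if the current channel busy-idle
    degree is lower than the threshold preset for that priority."""
    if priority == 1:
        return True
    return busy_idle_degree < thresholds[priority]

def pick_channel(C):
    """Random access, item (6): each of the C channels is chosen with
    equal probability 1/C."""
    return random.randrange(C)

def captured(own_power, other_powers, rho):
    """Capture condition, item (6): a colliding packet is still received if
    its received power is at least rho times the sum of the others' powers."""
    return own_power >= rho * sum(other_powers)
```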
The node state transition process in the MCLA protocol is shown in Fig. 3. Each node in the network works according to the following state transition scheme. (1) In the state “initialization/idle,” if the node receives packets from the upper layer, the packets are inserted into the corresponding priority queue. (2) After the newly arrived packets are inserted into the queue, the current channel busy-idle degree is compared with the channel threshold preset for each priority according to the admission control mechanism, and the node then decides whether to enter the state “send the head packet of the queue” or the state “backoff.” (3) In the state “backoff,” when new packets arrive, they are inserted into the corresponding priority queue if it is not full; otherwise, they are discarded. When a corresponding idle channel exists, the node enters the state “send the head packet of the queue.” (4) In the state “send the head packet of the queue,” when new packets arrive, they are inserted into the corresponding priority queue if it is not full; otherwise, they are discarded. (5) After the state “transmission completed,” depending on whether all queues are empty and on the channel busy-idle degree, the node enters the state “send the head packet of the queue,” “initialization/idle,” or “backoff.”
For the above process in the MCLA protocol, we further describe the state transitions of the highest-priority packets and of the other-priority packets, respectively. A highest-priority packet is inserted into the highest-priority queue when it arrives. After the packets that arrived earlier in the highest-priority queue have been transmitted, it enters the state “send the head packet of the queue,” and when it is transmitted successfully, it enters the state “transmission completed.” A packet of any other priority may experience more complex state transitions. When it arrives, it is inserted into its corresponding priority queue. When it reaches the head of its priority queue and all higher-priority queues are empty, it obtains the chance to access the channel. Whether it can access the channel is determined by the current channel busy-idle degree and the channel threshold preset for its priority. If the current busy-idle degree allows channel access, it enters the state “send the head packet of the queue”; otherwise, it enters the state “backoff.” If it enters the state “send the head packet of the queue” and the packet is transmitted successfully, it enters the state “transmission completed.” If it enters the state “backoff,” it obtains another chance to access the channel when the backoff period ends, and whether it can access the channel is again judged by the current channel busy-idle degree.
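As a compact illustration, the per-node behavior of Fig. 3 can be compressed into the transition function sketched below. It omits queue insertion, timing, and channel selection, and the state and argument names are hypothetical.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # "initialization/idle"
    SEND = auto()      # "send the head packet of the queue"
    BACKOFF = auto()   # "backoff"
    DONE = auto()      # "transmission completed"

def next_state(state, all_queues_empty, admitted):
    """One step of the Fig. 3 scheme.  `all_queues_empty` says whether every
    priority queue is empty; `admitted` is the admission-control decision for
    the current head packet (always True for the highest priority)."""
    if state == State.SEND:
        return State.DONE                      # head packet transmitted
    # From IDLE, BACKOFF, or DONE, the node goes idle, sends, or backs off
    if all_queues_empty:
        return State.IDLE
    return State.SEND if admitted else State.BACKOFF
```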
Multi-priority queueing and service mechanism
The traffic in the network has P priority classes, where priority 1 is the highest priority, priority 2 is the second-highest priority, and priority P is the lowest priority. Assuming the sending buffer of each node is large enough, queue overflow is not considered in this paper. Furthermore, the packet arrivals of each priority are assumed to form a Poisson process, the service time of packets follows a general distribution, and there is only one server in the system. Therefore, the packet queue in each node buffer forms a multi-priority M/G/1 queueing system, as shown in Fig. 4. Let the packet arrival rates from the highest priority to the lowest priority be λ1, λ2, ⋯, λP, the mean service times be \( \frac{1}{\mu_1},\frac{1}{\mu_2},\cdots, \frac{1}{\mu_P} \), and the second moments of the service time be \( \overline{{X_1}^2},\overline{{X_2}^2},\cdots, \overline{{X_P}^2} \), respectively. Let \( {N}_Q^p \), Wp, and ρp denote the waiting queue length, the waiting time, and the utilization ratio of priority p packets, respectively.
The system is a single-server, multi-queue system and serves packets from high priority to low priority. Since the traffic of the highest priority is assumed to be extremely low, the preemptive-resume service strategy is employed for it. That is, when a highest-priority packet arrives, the transmission of a packet of any other priority is interrupted and the newly arrived packet is served preferentially; when this service is over, the interrupted packet resumes transmission from the breakpoint. Furthermore, the backoff scheme is not used for the highest-priority traffic. For packets of priorities 2 ≤ p ≤ P, the system adopts the non-preemptive strategy, i.e., if a higher-priority packet arrives, it is served only after the service of the current lower-priority packet finishes. In addition, before serving a packet, the system decides whether it can be served immediately according to the current channel busy-idle degree. If the service condition is satisfied, the packet accesses the channel immediately; otherwise, it waits for several slot times according to the backoff scheme. After the backoff phase, it is judged again whether the packet can be served or must enter another backoff phase. The durations of the backoff phases are denoted as V1, V2, ⋯, which are independent and identically distributed random variables, independent of the packet priority, with mean value \( \overline{V} \). Except for priority 1 packets, every packet experiences m (m = 0, 1, 2, ⋯) backoff periods.
In the following, the expected delay Tp is derived. Tp comprises two parts: ① the expected service time 1/μp of the packet, and ② the expected waiting time of the packet. The expected waiting time Wp of a priority p packet contains two parts: ① the service time \( {W}_{old}^p \) of the priority 1 to p packets that are already in the system when the priority p packet arrives, and ② the service time \( {W}_{new}^p \) of the priority 1 to p − 1 packets that arrive while the priority p packet is waiting for service. Thus,
$$ {T}_p=\frac{1}{\mu_p}+{W}_p=\frac{1}{\mu_p}+{W}_{old}^p+{W}_{new}^p $$
(1)
where \( {W}_{old}^p \) can be derived from a single-priority M/G/1 queueing system with server vacations. In this queueing system, only priority 1 to p packets need to be considered, while priority p + 1 to P packets can be neglected. According to M/G/1 queueing theory, we have
$$ {W}_{old}^p=\frac{R_p}{1-{\rho}_1-{\rho}_2-\cdots -{\rho}_p} $$
(2)
where Rp is the mean residual service time of packets and ρp is the utilization ratio of priority p packets. Rp is derived as follows.
When a new packet arrives, it may encounter two situations: (1) there is a packet receiving service; (2) there is a packet in the backoff stage.
Let Mp(t) and L(t) denote the number of priority p packet arrivals and the number of backoff periods during the interval [0, t], respectively. Then, the mean residual service time Rp of priority p packets can be expressed as
$$ {\displaystyle \begin{array}{l}{R}_p=\frac{1}{t}\underset{0}{\overset{t}{\int }}r\left(\tau \right) d\tau =\frac{1}{t}\sum \limits_{i=1}^{M_1(t)}\frac{1}{2}{X_{1i}}^2+\frac{1}{t}\sum \limits_{i=1}^{M_2(t)}\frac{1}{2}{X_{2i}}^2+\cdots +\frac{1}{t}\sum \limits_{i=1}^{M_{p-1}(t)}\frac{1}{2}{X_{\left(p-1\right)i}}^2+\frac{1}{t}\sum \limits_{i=1}^{L(t)}\frac{1}{2}{V_i}^2\\ {}=\frac{1}{2}\frac{M_1(t)}{t}\frac{\sum \limits_{i=1}^{M_1(t)}{X_{1i}}^2}{M_1(t)}+\frac{1}{2}\frac{M_2(t)}{t}\frac{\sum \limits_{i=1}^{M_2(t)}{X_{2i}}^2}{M_2(t)}+\cdots +\frac{1}{2}\frac{M_{p-1}(t)}{t}\frac{\sum \limits_{i=1}^{M_{p-1}(t)}{X_{\left(p-1\right)i}}^2}{M_{p-1}(t)}+\frac{1}{2}\frac{L(t)}{t}\frac{\sum \limits_{i=1}^{L(t)}{V_i}^2}{L(t)}\end{array}} $$
(3)
where \( \frac{M_1(t)}{t} \) is the arrival rate λ1 of priority 1 packets, \( \frac{M_2(t)}{t} \) is the arrival rate λ2 of priority 2 packets, …, \( \frac{M_{p-1}(t)}{t} \) is the arrival rate λp − 1 of priority p − 1 packets, and \( \frac{L(t)}{t} \) is the mean arrival rate of backoff stages.
Within a unit of time, the proportion of time spent transmitting priority p packets is \( {\rho}_p=\frac{\lambda_p}{\mu_p} \), and the proportion of time spent in backoff stages is \( 1-\sum \limits_{i=1}^{p-1}{\rho}_i \). Thereby, the mean arrival rate of backoff stages is \( \frac{1-\sum \limits_{i=1}^{p-1}{\rho}_i}{\overline{V}} \).
Letting t → ∞, the mean residual service time becomes
$$ {\displaystyle \begin{array}{l}{R}_p=\frac{1}{2}{\lambda}_1\overline{{X_1}^2}+\frac{1}{2}{\lambda}_2\overline{{X_2}^2}+\cdots +\frac{1}{2}{\lambda}_{p-1}\overline{{X_{p-1}}^2}+\frac{1}{2}\frac{1-\sum \limits_{i=1}^{p-1}{\rho}_i}{\overline{V}}\overline{V^2}\\ {}=\frac{1}{2}\sum \limits_{i=1}^{p-1}{\lambda}_i\overline{{X_i}^2}+\frac{1}{2}\frac{1-\sum \limits_{i=1}^{p-1}{\rho}_i}{\overline{V}}\overline{V^2}\end{array}} $$
(4)
As \( {W}_{new}^p \) is the service time of the priority 1 to p − 1 packets that arrive after the priority p packet arrives, we have
$$ {W}_{new}^p=\frac{\lambda_1}{\mu_1}{T}_p={\rho}_1{T}_p,\kern1em p>1 $$
(5)
Thus, the expected delay Tp can be derived from (1), (2), (4), and (5).
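Under the stated assumptions, Tp for p ≥ 2 is obtained by substituting (2), (4), and (5) into (1) and solving for Tp, since \( {W}_{new}^p={\rho}_1{T}_p \) appears on both sides. The short Python sketch below performs this computation; the parameter values in the example are hypothetical.

```python
def expected_delay(p, lam, mu, x2, v_mean, v2):
    """Expected delay T_p of a priority-p packet (p >= 2), from Eqs. (1), (2),
    (4), and (5).  lam, mu, x2 hold the arrival rate, service rate, and second
    moment of the service time for priorities 1..P (index 0 = priority 1);
    v_mean and v2 are the first and second moments of a backoff period."""
    rho = [lam[i] / mu[i] for i in range(len(lam))]
    # Eq. (4): mean residual service time seen by an arriving priority-p packet
    r_p = (0.5 * sum(lam[i] * x2[i] for i in range(p - 1))
           + 0.5 * (1.0 - sum(rho[:p - 1])) / v_mean * v2)
    # Eq. (2): delay caused by packets already in the system
    w_old = r_p / (1.0 - sum(rho[:p]))
    # Eqs. (1) and (5): T_p = 1/mu_p + w_old + rho_1 * T_p, solved for T_p
    return (1.0 / mu[p - 1] + w_old) / (1.0 - rho[0])

# Example with hypothetical parameters: three priorities, unit service rate,
# deterministic service time so that the second moment equals (1/mu)**2
lam = [0.02, 0.05, 0.08]
mu = [1.0, 1.0, 1.0]
x2 = [(1.0 / m) ** 2 for m in mu]
print(expected_delay(3, lam, mu, x2, v_mean=4.0, v2=20.0))
```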
Multi-channel load-based backoff mechanism
In FANETs, a node that has packets to send is defined as an active node. In the backoff mechanism, the size of the contention window is determined by the current number of active nodes. The arrival of packets of all priorities is a Poisson process with arrival rate \( \sum \limits_{p=1}^P{\lambda}_p \). In the MCLA protocol, each packet is divided into multiple bursts to be transmitted in the channel. The duty cycle R of the bursts in a packet is the ratio of the transmission time of a burst in the channel to that of the original packet. The number of nodes in the network is N. According to Poisson theory, the number of active nodes in the network is
$$ n=N\left(1-{e}^{-\frac{2\sum \limits_{p=1}^P{\lambda}_p}{RC}}\right) $$
(6)
According to the principle of the MCLA protocol, the contention window can be constructed as
$$ {W}_{bf}=\left\lceil -\frac{2}{\ln \frac{n}{N+1}}\right\rceil $$
(7)
In backoff stage i, the backoff duration is a random value, which can be denoted as
$$ {W}_i= Random\left[1,\min \left({W}_{bf},{W}_{\mathrm{max}}\right)\right] $$
(8)
where Wmax is the predefined maximum contention window. When a node experiences multiple consecutive backoff stages, the size of the contention window increases linearly until it reaches Wmax.
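A small Python sketch of Eqs. (6) to (8) is given below for illustration. The parameter values are hypothetical, and the linear growth of the window over consecutive backoff stages is not modeled; only the single-stage draw of Eq. (8) is shown.

```python
import math
import random

def active_nodes(N, lam_total, R, C):
    """Eq. (6): expected number of active nodes among N nodes, given the total
    packet arrival rate, the burst duty cycle R, and C channels."""
    return N * (1.0 - math.exp(-2.0 * lam_total / (R * C)))

def contention_window(n, N):
    """Eq. (7): contention window size derived from the number of active nodes."""
    return math.ceil(-2.0 / math.log(n / (N + 1)))

def backoff_duration(W_bf, W_max):
    """Eq. (8): the backoff duration is drawn uniformly from [1, min(W_bf, W_max)]."""
    return random.randint(1, min(W_bf, W_max))

# Example with hypothetical parameters
N, lam_total, R, C = 50, 0.6, 0.1, 4
n = active_nodes(N, lam_total, R, C)
W_bf = contention_window(n, N)
print(n, W_bf, backoff_duration(W_bf, W_max=64))
```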
Modeling of the backoff mechanism
In the following, a two-dimensional Markov chain is adopted to model the backoff mechanism. Let \( {b}_{i,j}=\underset{t\to \infty }{\lim }P\left\{u(t)=i,v(t)=j\right\} \) denote the steady-state probability of each state of the Markov chain, where i is the backoff stage and j is the value of the backoff counter. The backoff state space of a node can be expressed as Ω = {(i, j)| i ∈ {−1, 0, 1, 2, ⋯, m}, j ∈ {0, 1, ⋯, Wi − 1}}, where the state (−1, 0) denotes the state after a burst is transmitted successfully or after a burst still cannot access the channel after the maximum number of backoff attempts. The state transitions of the model are shown in Fig. 5. The one-step state transition probability is
$$ {\displaystyle \begin{array}{l}P\left\{i+1,j+1|i,j\right\}=\\ {}P\left\{u\left(t+1\right)=i+1,v\left(t+1\right)=j+1|u(t)=i,v(t)=j\right\}\end{array}} $$
(9)
From Fig. 5, we can obtain the state transition probabilities
$$ \left\{\begin{array}{l}P\left\{-1,0|m,0\right\}=1\\ {}P\left\{-1,0|i,0\right\}=1-{p}_{col},\kern0.5em i\in \left[0,m-1\right]\\ {}P\left\{i,j|i,j+1\right\}=1,\kern0.5em i\in \left[0,m\right],j\in \left[0,{W}_i-1\right]\\ {}P\left\{i,j|i-1,0\right\}={p}_{col}/{W}_i,\kern0.5em i\in \left[1,m\right],j\in \left[0,{W}_i-1\right]\\ {}P\left\{m,j|m,0\right\}={p}_{col}/{W}_m,\kern0.5em j\in \left[0,{W}_m-1\right]\\ {}P\left\{0,j|-1,0\right\}=q/{W}_0,\kern0.5em j\in \left[0,{W}_0-1\right]\end{array}\right. $$
(10)
where pcol is the collision probability of a burst, i.e., the probability that at least one of the other n − 1 active nodes sends a burst to the same channel in the same time slot. It can be expressed as
$$ {p}_{col}=\frac{1-{\left(1-{p}_{in}\right)}^{n-1}}{C} $$
(11)
where pin is the probability that a burst accesses the channel after the current backoff stage. From Fig. 5, it can be expressed as
$$ {p}_{in}=\sum \limits_{i=0}^m{b}_{i,0}=\frac{\left(1-{p}_{col}\right){b}_{0,0}}{1-{\left({p}_{col}\right)}^m} $$
(12)
In (10), q is defined as the probability that at least one packet arrives during a time slot Tσ. Therefore, we obtain
$$ q=1-{e}^{-\lambda {T}_{\sigma }} $$
(13)
According to (10) and Fig. 5, we have
$$ \left\{\begin{array}{l}{b}_{0,j}=\frac{W_0-j}{W_0}\cdot q,\kern0.5em j\in \left[1,{W}_0-1\right]\\ {}{b}_{i,j}={b}_{i-1,0}\cdot \frac{W_i-j}{W_i}\cdot {\left({p}_{col}\right)}^i,\kern0.5em i\in \left[1,m\right],j\in \left[1,{W}_i-1\right]\end{array}\right. $$
(14)
The expressions of bi,j and b−1,0 in terms of b0,0 can be obtained from (14):
$$ {b}_{i,j}=\frac{W_i-j}{W_i}{\left({p}_{col}\right)}^i{b}_{0,0},i\in \left[0,m\right],j\in \left[0,{W}_i-1\right] $$
(15)
$$ {b}_{-1,0}={b}_{0,0}/q $$
(16)
According to the definition of a Markov chain, the probabilities of all states in Fig. 5 must satisfy the normalization condition, namely
$$ {b}_{-1,0}+\sum \limits_{i=0}^m\sum \limits_{j=0}^{W_i-1}\frac{W_i-j}{W_i}\cdot {\left({p}_{col}\right)}^i{b}_{0,0}=1 $$
(17)
According to (16) and (17), we have
$$ {b}_{0,0}=\frac{1}{\frac{1}{q}+\sum \limits_{i=0}^m\sum \limits_{j=0}^{W_i-1}\frac{W_i-j}{W_i}\cdot {\left({p}_{col}\right)}^i} $$
(18)
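Equations (11), (15), and (18) form a set of coupled equations in pcol, pin, and b0,0 that can be solved numerically. The following sketch uses simple fixed-point iteration, computes pin from the summation in (12) rather than its closed form, and assumes a per-stage window list W; all parameter values are hypothetical.

```python
def solve_backoff_chain(n, C, m, q, W, iters=200):
    """Numerically solve the coupled Eqs. (11), (12)/(15), and (18).
    W = [W_0, ..., W_m] is the contention window of each backoff stage.
    Returns (p_col, p_in, b00)."""
    p_col = 0.1                                     # initial guess
    for _ in range(iters):
        # Eq. (18): the normalization condition yields b_{0,0}
        denom = 1.0 / q + sum(
            (p_col ** i) * sum((W[i] - j) / W[i] for j in range(W[i]))
            for i in range(m + 1)
        )
        b00 = 1.0 / denom
        # Eq. (15) with j = 0 and the sum in Eq. (12): channel-access probability
        p_in = sum((p_col ** i) * b00 for i in range(m + 1))
        # Eq. (11): collision probability on one of the C channels
        p_col = (1.0 - (1.0 - p_in) ** (n - 1)) / C
    return p_col, p_in, b00

# Example with hypothetical parameters: m = 5 backoff stages, window 16 each
print(solve_backoff_chain(n=40, C=4, m=5, q=0.3, W=[16] * 6))
```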
Metrics for protocol performance
Mean MAC delay of the backoff mechanism
Let E[TMAC] denote the mean MAC delay, i.e., the average time from the beginning of the backoff stage to the moment the burst accesses the channel or is abandoned. It can be obtained that
$$ E\left[{T}_{MAC}\right]=\frac{\sum \limits_{i=0}^m{\left(1-{p}_{col}\right)}^i{W}_i{T}_{\sigma }}{2m} $$
(19)
Mean end-to-end delay
The mean end-to-end delay E[T] is the sum of the waiting time, the backoff time, and the propagation time. Tp, derived in the multi-priority queueing and service mechanism subsection, accounts for the waiting time and the backoff time. Define E[Tpro] as the packet propagation delay, whose value is related to the communication distance. Let dcom be the mean single-hop communication distance and c be the speed of light. Then E[Tpro] can be calculated as
$$ E\left[{T}_{pro}\right]=\frac{d_{com}}{c} $$
(20)
Therefore, E[T] can be expressed as
$$ E\left[T\right]=E\left[{T}_p\right]+E\left[{T}_{pro}\right] $$
(21)
Network throughput
Here, the network throughput S is defined as the total number of bursts accessing the channel per unit time. It can be expressed as
$$ S=L\left(1-{p}_{col}\right)\sum \limits_{i=0}^m{b}_{i,0}\sum \limits_{p=1}^P{\lambda}_p=\frac{L{\left(1-{p}_{col}\right)}^2{b}_{0,0}\sum \limits_{p=1}^P{\lambda}_p}{1-{\left({p}_{col}\right)}^m} $$
(22)
where L is the burst length.
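For completeness, the three performance metrics can be evaluated directly from Eqs. (19) to (22) once pcol and bi,0 are known, e.g., from the fixed-point sketch above. The helper functions below are hypothetical and use the summation form of Eq. (22).

```python
def mac_delay(p_col, W, T_sigma, m):
    """Eq. (19): mean MAC delay of the backoff mechanism, with W = [W_0, ..., W_m]."""
    return sum(((1.0 - p_col) ** i) * W[i] for i in range(m + 1)) * T_sigma / (2.0 * m)

def end_to_end_delay(T_p, d_com, c=3.0e8):
    """Eqs. (20) and (21): E[T] = E[T_p] + d_com / c."""
    return T_p + d_com / c

def throughput(L, p_col, b_i0, lam_total):
    """Eq. (22), summation form: S = L * (1 - p_col) * sum_i b_{i,0} * sum_p lambda_p."""
    return L * (1.0 - p_col) * sum(b_i0) * lam_total
```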