An Adaptive Time-Spread Multiple-Access Policy for Wireless Sensor Networks

Sensor networks require a simple and efficient medium access control (MAC) policy that achieves high system throughput with little or no control overhead, in order to increase the network lifetime by minimizing the energy consumed during transmission attempts. Time-spread multiple-access (TSMA) policies, originally proposed for ad hoc network environments, can also be employed in sensor networks, since they introduce no control overhead. However, they do not take advantage of any cross-layer information to exploit the idiosyncrasies of the particular sensor network environment, such as the presence of typically static nodes and a common destination for the forwarded data. The adaptive probabilistic TSMA-based policy proposed and analyzed in this paper exploits these idiosyncrasies and achieves higher system throughput than the existing TSMA-based policies without any extra control overhead. As analytically shown in this paper, the proposed policy always outperforms the existing TSMA-based policies, provided that certain parameter values are properly set; the analysis also provides these proper values. It is also shown that the proposed policy is characterized by a certain convergence period and that high system throughput is achieved for long convergence periods. The claims and expectations of the analysis are supported by the simulation results presented in this paper.


INTRODUCTION
Sensor networks have emerged in recent years, offering a wide range of possible applications through the combination of sensing, computation, and communication capabilities in a single device. In most cases, this device is designed to be cheap and small, so that it can easily be deployed in large numbers in various environments of interest. Examples include an agricultural field in which the climatological conditions (e.g., temperature, moisture) are of interest, or a forest or a building in which the deployed sensors allow for fire detection at an early stage, among numerous other applications.
Sensor networks may be seen as a special case of ad hoc networks and share many of the same design principles regarding, for example, the routing protocol, the medium access control (MAC), the physical layer, and so forth. However, there are a number of differences between the two environments: (a) in sensor networks the network topology is typically considered stationary, while in ad hoc networks node movement is the default case; (b) in sensor networks data are forwarded towards a certain destination in the network (the sink node), while in ad hoc networks the destination of the data can be any node. Depending on the particular ad hoc and sensor environment (and on the application scenario), a more precise list of differences can be drawn for each particular pair of networks.
Data packets are forwarded towards the sink node along the path determined by the employed routing protocol. The employed MAC policy shapes the data transmission attempts on each individual direct link. These transmission attempts should be minimized (or, equivalently, the number of successful transmissions should be maximized) in order to conserve energy (e.g., [1]). In sensor networks, nodes are generally unable to recharge their batteries, and it is therefore important to employ energy-saving protocols in order to extend the network lifetime as much as possible. Clearly, an efficient MAC policy for sensor networks should guarantee that (a) limited or no extra control overhead is added to maintain connectivity, and (b) high throughput is achieved.

EURASIP Journal on Wireless Communications and Networking
Several MAC policies that have been proposed in the area of ad hoc networks can also be applied in sensor networks. Some of them [2-6] are contention based, in the sense that they use direct competition to access the channel. Allocation-based MAC protocols have also been proposed (e.g., [7]), and it has been shown that the derivation of an optimal schedule (i.e., the time slots during which a node is allowed to transmit within a frame) is an NP-complete problem, similar to the n-coloring problem in graph theory [8, 9]. Consequently, these policies introduce a certain (and possibly large) control overhead for the schedule derivation, which is undesirable especially in sensor network environments due to the aforementioned energy limitations.
A special category of allocation-based protocols are the time-spread multiple-access (TSMA) protocols, which have been the focus of an increasing volume of research over the last decade. These protocols have no coordination overhead and, provided that they are efficient enough, could be adopted for sensor networks. In addition, reduced energy consumption may be achieved as opposed to CSMA/CA-based approaches [12]. Note, however, that a time division system requires global synchronization, which is not easily realizable [20]. Nevertheless, as in most work in the area of time division MAC protocols, it is assumed here that nodes are synchronized (e.g., they are aware of the beginning of each time slot).
The idea of probabilistically accessing the channel so that a node's transmission is eventually successful is as old as the ALOHA variations of the 1970s [10]. Various variations have been proposed (e.g., more recently [11] or [13]), but the first TSMA protocol was proposed by Chlamtac and Farago in 1994 [14]. That work gave birth to the research area of TSMA-based protocols, and several new ones have been proposed in the past decade, among which: [15] in 1998, [16] in 2003, [17] in 2004, [18, 19] in 2005, [21-23] in 2006, and so forth. Other researchers have studied the properties of the original TSMA protocol: Basagni and Bruschi [24] proved a lower bound of log N on the frame length, where N is the number of nodes in the network, and more recently, in 2006, Miorandi et al. [25] proved that the throughput and the delay achieved by the TSMA protocol proposed by Chlamtac and Farago are very close to the theoretical bounds derived by Gupta and Kumar in their seminal work on the capacity of wireless networks [26], as well as by other researchers [27].
In more detail, under the original TSMA policy proposed by Chlamtac and Farago in [14], nodes are allowed to transmit only in a (small) subset of the available time slots, carefully selected so that at least one of them is collision free. The throughput achieved by this particular deterministic policy was shown to be further improvable by allowing probabilistic transmission attempts during unallocated time slots, that is, time slots not assigned under the deterministic assignment [17, 18]. In the rest of this paper, the deterministic policy proposed by Chlamtac and Farago in [14] will be referred to as the D-Policy, and the probabilistic policy proposed in [17] as the P-Policy.
The main reason behind the throughput increase under the P-Policy is the use of time slots that are not allocated under the D-Policy but allow for corruption-free transmissions. Under the P-Policy, these time slots are utilized according to an access probability p, fixed for all time slots and all nodes in the network. Both policies are suitable for sensor networks, since they do not require any control overhead to derive the node schedules; thus, energy is saved. However, cross-layer information [28], such as the network topology characteristics and the typically rarely changing and common destination of the transmitted data, is not taken into account. A new adaptive probabilistic policy, the A-Policy, based on the P-Policy and proposed in this paper (initially mentioned in [19]), is capable of achieving even higher throughput by exploiting the idiosyncrasies of the sensor network environment. This particular policy makes better use of the unallocated time slots than the P-Policy (or the D-Policy, which fails to utilize them at all). The new idea behind the A-Policy is to utilize the unallocated time slots with probability 1, provided that the last transmission attempt was successful (and that there are data available for transmission). The most direct result is a significant throughput increase, since those unallocated time slots that allow for collision-free transmissions are better utilized under the A-Policy than under the P-Policy.
Due to its adaptive nature, the A-Policy requires a certain time period before the steady-state mode of operation is reached and the maximum system throughput is achieved. The transient mode of operation between the beginning of the network operation and the beginning of the steady-state mode corresponds to the convergence period. As will be shown later, in order to achieve a higher throughput at steady state, a long, low-throughput convergence period is required. Note that even though for long convergence periods the system throughput at the steady-state mode of operation is maximized, during the convergence period itself it remains comparably low. Therefore, the longer the convergence period, the longer the system throughput remains low; this may not be a desirable effect for the efficient operation of some sensor networks. The analytical results provided in this paper derive a certain value for the access probability that allows for small convergence periods and comparably high system throughput. In comparison with the P-Policy, it is shown that this particular value of the access probability allows for higher system throughput under the A-Policy than under the P-Policy. Simulation results support these claims of the analysis.

Some definitions about the network and the conditions for successful transmissions are given in Section 2. Section 2 also includes a brief introduction to those elements of the D-Policy and the P-Policy on which the A-Policy depends or which are needed later in the analytical part of the paper. The A-Policy is introduced in Section 3, where an analytical expression is derived for the system throughput. This analytical expression, under certain justifiable approximations, helps to reveal important aspects of the behavior of the A-Policy, discussed in Section 4.
In the same section it is analytically shown that the A-Policy outperforms the P-Policy, and the provided simulation results support this claim as well. The simulation results included in Section 4 allow for a comprehensive demonstration of the convergence period and confirm the expectation that high system throughput is achieved for long convergence periods.
Finally, the conclusions are drawn in Section 5.

SYSTEM AND NETWORK DEFINITION
A sensor network may be viewed as a time-varying multihop network and may be described in terms of a graph G(V, E), where V denotes the set of nodes and E the set of (bidirectional) links between the nodes at a given time instance. Let |X| denote the number of elements in set X and let N = |V| denote the number of nodes in the network. Let S_u denote the set of neighbors of node u, u ∈ V. Let D denote the maximum number of neighbors of a node; clearly, |S_u| ≤ D for all u ∈ V. Set S_u includes any node v to which a direct transmission from node u (transmission u → v) is possible. Assuming that time is equally divided into time slots, let λ be the probability that, during a time slot, there exist data available for transmission at any node in the network (originating from a memoryless source). Suppose that node u wants to transmit to node v during a particular time slot i. Transmission u → v may be corrupted by any node that belongs to S_v (apart from node u). However, transmissions that corrupt transmission u → v may (set Φ_u→v) or may not (set Θ_u→v) be corrupted by it in turn, as graphically depicted in Figure 1.
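The graph model above can be sketched in a few lines of code. The geometric node placement and communication radius below are illustrative assumptions made here (the paper only requires the neighbor sets S_u and the maximum degree D), and the function name is an invention for this sketch:

```python
import random

def build_topology(n, radius, seed=0):
    """Place n static sensor nodes uniformly in the unit square and
    connect any pair within communication range (bidirectional links)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    nbrs = {u: set() for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            if dx * dx + dy * dy <= radius * radius:
                nbrs[u].add(v)  # v belongs to S_u
                nbrs[v].add(u)  # links are bidirectional
    return nbrs

nbrs = build_topology(100, 0.2)
D = max(len(s) for s in nbrs.values())  # maximum number of neighbors D
```

The dictionary `nbrs` plays the role of the neighbor sets S_u, from which the parameter D used in the schedule construction is read off directly.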
Under the D-Policy, a frame of size q² is created and each node is allowed to transmit during q (fixed) time slots in a frame. Let Ω_u denote the set of time slots during which node u is allowed to transmit in a frame; obviously, |Ω_u| = q. Specifically, each node is assigned a polynomial of order k with coefficients from a Galois field of order q, GF(q). Parameters q and k are selected such that q ≥ kD + 1 is satisfied [14]. Even though time slots overlapping with those of neighbor nodes do exist (set C_u→v = Ω_u ∩ (⋃_{χ ∈ S_v ∪ {v} − {u}} Ω_χ)), it is assured that at least one transmission in a frame will be collision free [14]. This is due to the fact that two polynomials of order k may have at most k common roots, corresponding to at most k collisions for each pair of nodes. Given that D is the maximum number of neighbor nodes, kD is the maximum number of collisions for a node in a frame. On the other hand, each node is allowed to transmit during q time slots in the frame and, considering q ≥ kD + 1, it is evident that there will be at least one collision-free transmission for each node.
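The slot assignment can be sketched as follows, as a hedged illustration of the construction in [14]: with q a prime, each node's polynomial f of order at most k is evaluated over GF(q), and the node transmits in slot f(i) mod q of the i-th group of q slots. The helper name and the highest-degree-first coefficient layout are choices made here, not notation from the paper:

```python
def tsma_schedule(coeffs, q):
    """Slot set Omega for a node assigned the polynomial whose
    coefficients (highest degree first) are given over GF(q), q prime.
    The frame has q*q slots; in the i-th group of q slots the node
    transmits in slot number f(i) mod q."""
    slots = set()
    for i in range(q):
        f = 0
        for a in coeffs:            # Horner evaluation of f(i) mod q
            f = (f * i + a) % q
        slots.add(i * q + f)
    return slots

# q = 5 satisfies q >= k*D + 1, e.g., for k = 1 and D <= 4
omega_a = tsma_schedule([1, 2], 5)  # f(i) = i + 2
omega_b = tsma_schedule([3, 0], 5)  # f(i) = 3i
```

Two distinct order-1 polynomials agree at no more than one point of GF(5), so the two slot sets collide in at most one of the q slots each node uses, which is why at least one collision-free slot per frame is guaranteed whenever q ≥ kD + 1.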
However, the achievable system throughput under the D-Policy, denoted by P_D, is small due to unused time slots: (a) time slots that have been allocated to nodes which do not use them due to lack of data available for transmission (small values of λ); (b) unallocated time slots that nodes cannot access under the D-Policy and which, had they been accessed, would have resulted in successful transmissions. Let R_u→v denote this set of time slots for a particular transmission u → v. An analytical expression for P_D was derived in [17].
In order to utilize those unused time slots, the P-Policy allows any node u to transmit in time slots i ∉ Ω_u according to an access probability p. The analysis presented in [17, 18] examines P̄_P, a more tractable form of the actual system throughput P_P, for which it was shown that when P̄_P is maximized, P_P is also close to its maximum [17, 18], and allows for its maximization. Eventually, the maximum is attained for p = p_λ,|S|, where p_λ,|S| [18] is the value of the access probability p that maximizes the (approximated) system throughput under the P-Policy, and |S| = (1/N) Σ_{u∈V} |S_u| corresponds to the average number of neighbor nodes in the network.

THE A-POLICY
The key idea behind the A-Policy is to utilize the unused time slots more efficiently than the P-Policy does. In particular, for a given transmission u → v, transmission attempts during a time slot i ∈ Ω_u take place as soon as data are available for transmission (as under the D-Policy and the P-Policy). For the case i ∉ Ω_u, transmission attempts initially take place with probability p, as soon as data are available for transmission. If transmission u → v is successful in time slot i during frame j, then the next time (in a future frame > j) that there are data available for transmission in time slot i, the A-Policy dictates that a transmission attempt take place with probability 1 (instead of p under the P-Policy). If a corruption occurs, then future transmission attempts take place with probability p until some future transmission is successful. The aforementioned policy can be summarized as follows.

The A-Policy
Each node u transmits in slot i during frame j if i ∈ Ω_u, and transmits with probability p^j_{i,u→v} if i ∉ Ω_u, provided it has data to transmit.
Two different values, p or 1, are possible for p^j_{i,u→v}, depending on the status of the most recent attempt of transmission u → v in time slot i. The initial value is set to p. The remainder of this section focuses on the derivation of an analytical expression for the system throughput under the A-Policy.
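The per-slot rule can be sketched in code as follows. The class and method names are inventions for illustration; the state is kept per unallocated slot exactly as the text describes (initial probability p, raised to 1 after a success, reset to p after a corruption):

```python
import random

class APolicyNode:
    """Per-slot access state of one node under the A-Policy: the access
    probability in an unallocated slot is p after a corrupted attempt
    (or initially) and 1 after a successful one. Allocated slots
    (those in Omega_u) are always used when data are available."""
    def __init__(self, omega, p, seed=0):
        self.omega = omega          # allocated slots Omega_u of this node
        self.p = p                  # base access probability
        self.boosted = set()        # unallocated slots whose last attempt succeeded
        self.rng = random.Random(seed)

    def attempt(self, slot, has_data):
        """Decide whether to transmit in this slot."""
        if not has_data:
            return False
        if slot in self.omega:
            return True             # deterministic part, as in the D-Policy
        if slot in self.boosted:
            return True             # p^j_{i,u->v} = 1 after a success
        return self.rng.random() < self.p

    def feedback(self, slot, success):
        """Update the per-slot access probability from the outcome."""
        if slot in self.omega:
            return                  # allocated slots carry no adaptive state
        if success:
            self.boosted.add(slot)
        else:
            self.boosted.discard(slot)
```

Under this sketch the P-Policy is recovered by simply never calling `feedback`, which keeps every unallocated slot at the fixed probability p.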
Let O_{i,u→v} be the set of nodes χ whose transmissions corrupt a particular transmission u → v and which are also allowed to transmit in time slot i (i ∈ Ω_χ), and let O^c_{i,u→v} be the set of such corrupting nodes for which i ∉ Ω_χ. Let Ψ^j_{i,u→v} be the set of nodes χ that corrupt transmission u → v (therefore, these nodes belong in S_v ∪ {v} − {u}) in time slot i, for which i ∉ Ω_χ (therefore, these nodes belong in O^c_{i,u→v}), and whose most recent attempt for transmission χ → ψ was successful. Let Ψ^{j,c}_{i,u→v} be the set of nodes which belong in O^c_{i,u→v} and whose most recent attempt for transmission χ → ψ was not successful. Obviously, Ψ^j_{i,u→v} ∪ Ψ^{j,c}_{i,u→v} = O^c_{i,u→v} and Ψ^j_{i,u→v} ∩ Ψ^{j,c}_{i,u→v} = ∅.

Let P^j_{A,i,u→v} be the probability of success for transmission u → v in time slot i during frame j. If i ∈ C_{u→v}, then transmission u → v takes place with probability λ. The same applies to transmissions that belong to nodes χ ∈ O_{i,u→v}. Transmissions that belong to nodes χ ∈ O^c_{i,u→v} take place with probability λ if these nodes belong in Ψ^j_{i,u→v}, and with probability pλ if they belong in Ψ^{j,c}_{i,u→v}. According to the previous, when i ∈ C_{u→v},

P^j_{A,i,u→v} = λ (1 − λ)^{|O_{i,u→v}|} (1 − λ)^{|Ψ^j_{i,u→v}|} (1 − pλ)^{|Ψ^{j,c}_{i,u→v}|}.

When i ∈ Ω_u and i ∉ C_{u→v}, node u transmits with probability λ. Since |O_{i,u→v}| = 0 for these time slots, only nodes in O^c_{i,u→v} may corrupt the transmission: nodes that belong in Ψ^j_{i,u→v} transmit with probability λ, while nodes that belong in Ψ^{j,c}_{i,u→v} transmit with probability pλ. Therefore,

P^j_{A,i,u→v} = λ (1 − λ)^{|Ψ^j_{i,u→v}|} (1 − pλ)^{|Ψ^{j,c}_{i,u→v}|}.

In a similar manner, an expression for P^j_{A,i,u→v} can be derived when i ∉ Ω_u (in this case node u transmits with probability p^j_{i,u→v} λ in time slot i):

P^j_{A,i,u→v} = p^j_{i,u→v} λ (1 − λ)^{|O_{i,u→v}|} (1 − λ)^{|Ψ^j_{i,u→v}|} (1 − pλ)^{|Ψ^{j,c}_{i,u→v}|}.

Let P^j_{A,u→v} be the average probability of success for transmission u → v in frame j under the A-Policy, P^j_{A,u→v} = (1/q²) Σ_{i=1}^{q²} P^j_{A,i,u→v}. Finally, the system throughput in frame j is given by P^j_A = (1/N) Σ_{u∈V} P^j_{A,u→v}, v ∈ S_u (equation (4)).

Even though the access probability is set to p at the beginning of the network operation, it is expected that after some time the access probability will be either 1 or p, depending only on the status (successful or corrupted) of the latest transmission attempt. When this is the case, the network is considered to be at the steady-state mode of operation. Before entering the steady-state mode, there exists a certain convergence period that corresponds to the transient mode of operation. During the convergence period, the access probability for some nodes is equal to p due to the lack of any transmission since the beginning of the network operation (and not due to corrupted transmissions). It is easy to calculate the number of frames that correspond to the convergence period and, consequently, the beginning of the steady-state mode of operation.
Since nodes are initially allowed to transmit during an unallocated time slot with probability pλ, it takes 1/(pλ) frames (on average) for all nodes in the network to attempt their first transmission in such a slot. Therefore, the steady-state mode of operation starts 1/(pλ) frames (on average) after the beginning of the network operation. During the steady-state mode of operation, nodes whose transmissions are corrupted refrain from attempting to transmit; even though they try with a small (on average) probability in subsequent frames, they continue to refrain from transmission due to subsequent corruptions. This allows other (successful) transmissions to continue their successful transmission attempts almost uninterrupted. Consequently, there is a certain throughput improvement, especially for increased traffic load.
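The 1/(pλ) estimate follows because the first attempt in a given unallocated slot is a geometric event with per-frame probability pλ. A quick Monte Carlo check (the parameter values below are chosen arbitrarily for illustration):

```python
import random

def first_attempt_frame(p, lam, rng):
    """Number of frames until a node first attempts a given unallocated
    slot: in each frame it has data w.p. lam and accesses w.p. p."""
    frame = 1
    while rng.random() >= p * lam:
        frame += 1
    return frame

rng = random.Random(1)
p, lam = 0.2, 0.5
trials = 20000
mean = sum(first_attempt_frame(p, lam, rng) for _ in range(trials)) / trials
# mean is close to 1/(p*lam) = 10 frames
```

The sample mean concentrates around 1/(pλ), matching the stated average duration of the convergence period.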

OBSERVATIONS AND RESULTS
Equation (4) does not provide a tractable form of the system throughput and, therefore, it is not easy to carry the analysis of P^j_A further. Some approximations are introduced in order to provide a more tractable form, leading to an approximate expression for P^j_A, denoted by P̄^j_A. First, (1 − λ)/(1 − pλ) is approximated by 1 and, second, |S_v| is approximated by |S| for all v ∈ V. The latter approximation corresponds to a network in which all nodes have the same number of neighbor nodes. Both approximations have been used in past works in the area (e.g., [17, 18]), and their effectiveness has been demonstrated. According to (4), and in view of the aforementioned approximations, the approximate expression P̄^j_A (equation (5)) is obtained. Based on both (2) and (5), it is easy to conclude that for any value of p, common to both the A-Policy and the P-Policy, the approximated system throughput under the A-Policy is higher than that under the P-Policy. This is easily concluded since, for any transmission u → v, p^j_{i,u→v} is either equal to p or equal to 1. Therefore, pq(q − 1) ≤ Σ_{i∉Ω_u} p^j_{i,u→v} ≤ q(q − 1) and, eventually, P̄^j_A ≥ P̄_P. Due to the fact that these approximations are well justified [17, 18], when P̄^j_A ≥ P̄_P is satisfied, then P^j_A ≥ P_P will, most likely, be satisfied as well. Simulation results in the sequel demonstrate this particular argument.
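The bounding argument can be checked mechanically: over the q² − q = q(q − 1) slots outside Ω_u, each per-slot probability is either p or 1, so their sum always lies between pq(q − 1) and q(q − 1). A small exhaustive check (the values of p and q below are chosen arbitrarily):

```python
p, q = 0.2, 5
unallocated = q * q - q                 # slots outside Omega_u: q(q - 1) = 20
for boosted in range(unallocated + 1):  # number of slots with probability 1
    total = boosted * 1.0 + (unallocated - boosted) * p
    # every mixture of p's and 1's respects the stated bounds
    assert p * unallocated <= total <= unallocated
```

Since the corresponding sum under the P-Policy is fixed at pq(q − 1), any slot promoted to probability 1 can only raise the approximated throughput, which is the content of P̄^j_A ≥ P̄_P.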
For those cases where p = p_λ,|S| (the value for which the system throughput under the P-Policy is maximized [18]), it is guaranteed that the system throughput under the A-Policy will always be higher than the maximum ever achieved under any setting of p for the P-Policy. Smaller values of p (< p_λ,|S|) allow for even higher throughput under the A-Policy during the steady-state mode of operation. In fact, when the access probability is equal to p during the steady-state mode of operation, this is a consequence of a past corrupted transmission and an indication that other nodes are using the particular time slot. Consequently, if p is small, then the interference caused to neighbor nodes is reduced; this is one of the reasons for the system throughput increase demonstrated later in the simulation results. On the other hand, for small values of p, the convergence period (of duration 1/(pλ) frames on average) increases. Since the system throughput during the convergence period is not as high as that during the steady-state mode of operation (where the unallocated time slots are efficiently utilized), a rather extended convergence period may not always be suitable (e.g., when a relatively high system throughput is required within a small number of frames from the beginning of the network operation). This interesting case is demonstrated using simulation results in the sequel.

Simulation Results
For simulation purposes, networks of 100 nodes are considered. The simulator is a program written in C that creates topologies in which all nodes have the same number of neighbor nodes (however, not necessarily a grid). The events of transmission attempts are governed by probabilities p and λ, which are implemented using random number generators assuming uniform distributions. The obtained results (after 10 000 time slots) have been averaged in order to produce the figures depicted in the sequel. The algorithm presented in [15] is used to derive the sets of scheduling slots, and the system throughput is calculated by averaging the simulation results over different numbers of frames. Time slot sets Ω_χ are assigned randomly to every node χ and kept the same for each scenario throughout the simulations. The purpose of the simulations presented here is to provide a deeper understanding of the A-Policy.
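A miniature, self-contained illustration of such a simulator is sketched below. This is not the paper's C program: a ring topology with degree 2 replaces the 100-node equal-degree topologies, schedules are drawn at random rather than via the algorithm of [15], and every node u is assumed to send to node u + 1; all of these are simplifying assumptions made here:

```python
import random

def simulate_ring(n, q, frames, p, lam, adaptive, seed=0):
    """Toy comparison of the P-Policy (adaptive=False) and the A-Policy
    (adaptive=True) on a ring: node u sends to v = u + 1, and the
    transmission is corrupted when v itself or v's other neighbor
    transmits in the same slot. Schedules Omega_u are random subsets
    of q slots out of the q*q frame."""
    rng = random.Random(seed)
    omega = [set(rng.sample(range(q * q), q)) for _ in range(n)]
    boosted = [set() for _ in range(n)]   # slots with access probability 1
    successes = 0
    for _ in range(frames):
        for slot in range(q * q):
            tx = []
            for u in range(n):
                if rng.random() >= lam:
                    continue              # no data this slot
                if slot in omega[u] or slot in boosted[u]:
                    tx.append(u)          # allocated or boosted slot
                elif rng.random() < p:
                    tx.append(u)          # probabilistic access
            txset = set(tx)
            for u in tx:
                v = (u + 1) % n
                ok = v not in txset and (v + 1) % n not in txset
                if ok:
                    successes += 1
                if adaptive and slot not in omega[u]:
                    if ok:
                        boosted[u].add(slot)     # next access w.p. 1
                    else:
                        boosted[u].discard(slot) # fall back to p
    return successes / (frames * q * q * n)      # per-node per-slot throughput

thr_p = simulate_ring(10, 4, 50, 0.2, 1.0, adaptive=False, seed=3)
thr_a = simulate_ring(10, 4, 50, 0.2, 1.0, adaptive=True, seed=3)
```

Even on this toy topology, running the two variants side by side under heavy traffic (λ = 1) reproduces the qualitative behavior discussed below: the adaptive variant gradually locks successful unallocated slots in place.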
In Figure 2, heavy traffic conditions (λ = 1) are considered. It can be seen that the system throughput under the D-Policy (P_D) is a straight line, remaining unchanged as j increases. The system throughput under the P-Policy (P_P) for p = p_λ,|S| (= 0.184) is also an almost straight line. This was expected for both of the aforementioned policies, given that the transmission attempts do not change (on average) as time (and eventually j) increases (there is no convergence period). Regarding the A-Policy, for a large value of p (p = 0.5), it is easy to observe that the achieved system throughput is even lower than that achieved under the D-Policy. Note, however, that there is no obvious convergence period for this case (actually, there is a convergence period lasting only 1/(pλ) ≈ 2 frames, but it is not possible to clearly identify it in this particular figure).
When p = p_λ,|S| = 0.184, a convergence period of almost 5 frames is expected under the A-Policy. This is identifiable in Figure 2. Note that already from the beginning of the convergence period, the system throughput under the A-Policy (P_A) is always higher than the maximum system throughput under the P-Policy (P_P), which is in accordance with the analytical results presented earlier.
Even though for p = p_λ,|S| the A-Policy safely surpasses the P-Policy, it is interesting to examine the behavior of the A-Policy for even smaller values of p (e.g., p = 0.01). The convergence period for this case (100 frames) is easily observable in Figure 2. It is also easy to observe that from the beginning of the convergence period and until (around) frame 25, the system throughput is smaller than that under the P-Policy. On the other hand, at the end of the convergence period, it is significantly higher (0.16 instead of 0.1).

Figure 3 presents system throughput simulation results, averaged over different numbers of frames, as a function of λ. In Figures 3(a), 3(b), 3(c), and 3(d), P_D increases with λ and P_P is significantly higher than P_D. The system throughput under the A-Policy (P_A) for large values of p (e.g., p = 1.0) appears not to be a good choice for large values of λ. For the case where p = p_λ,|S|, it is easy to observe that P_A ≥ P_P, irrespective of the value of λ. However, for small values of λ there is no obvious advantage of the A-Policy, as can also be observed in Figure 3. This observation is in accordance with the analytical results. Smaller values of p (< p_λ,|S|) can provide for high system throughput values under the A-Policy as λ and/or the number of frames increases. This is also expected from the aforementioned analysis, since the duration of the convergence period is (on average) 1/(pλ) frames.

CONCLUSIONS
In this paper, a new adaptive probabilistic MAC policy, the A-Policy, was proposed for sensor network environments, and various performance aspects were investigated both through analysis and through simulation. The proposed policy is based on the deterministic policy (D-Policy) [14] and the probabilistic policy (P-Policy) [17] that have been proposed and studied in the context of general ad hoc network environments. While both the D-Policy and the P-Policy can be applied in sensor network environments, the A-Policy proposed here can take advantage of cross-layer information by exploiting the idiosyncrasies of the sensor network environment (e.g., nodes are not moving, all data traffic is forwarded to a certain sink node) and yield a higher system throughput.
In particular, an analytical expression for the system throughput under the A-Policy was derived in this paper. Due to the intractability of this expression, certain approximations were introduced that have also been employed in the past for the P-Policy (e.g., [17, 18]). The approximated expression allowed for a number of interesting observations. For example, for any value of p, the system throughput under the A-Policy is higher than that under the P-Policy (for the same value of p). This is also the case for p = p_λ,|S| (the particular value of p that maximizes the system throughput under the P-Policy). Simulation results support the claims and expectations of the aforementioned approximate analytical results and observations.
Another interesting observation refers to the existence of a convergence period of 1/(pλ) frames on average, preceding the steady-state mode of operation. It is shown that the system throughput gradually increases during the convergence period and assumes its maximum at the beginning of the steady-state mode of operation. During the steady-state mode, the system throughput remains at this maximum. For p = p_λ,|S|, the A-Policy safely outperforms (as already mentioned) the P-Policy, even during the convergence period. However, as p decreases (assuming that the P-Policy always operates at the maximum throughput obtained for p = p_λ,|S|), at the beginning of the convergence period and for a comparably small number of frames, the P-Policy performs better than the A-Policy. As time passes (a few frames later), and long before the convergence period is over, the A-Policy surpasses the P-Policy. An important difference is that in this case (p < p_λ,|S|) the achievable maximum throughput under the A-Policy is higher than that achieved for p = p_λ,|S|. Consequently, p = p_λ,|S| is a good choice if the objective is to safely outperform the P-Policy. When the objective is to achieve high values for the system throughput, then p should take small values. The only trade-off is that rather small values may result in rather long convergence periods. The selection of p should be based on the priorities and traffic characteristics of the specific environment.
In conclusion, it was shown in this paper that the A-Policy is capable of achieving high system throughput values in a sensor network environment by exploiting information specific to that environment (e.g., a stationary topology, data forwarded towards a fixed node in the network). This increased system throughput is to the benefit of the network, since it minimizes the transmission attempts and thus saves energy. In addition, the A-Policy is a simple MAC policy, easy to implement in small communication devices such as sensor nodes with limited capabilities.

Figure 1: Example transmission u → v, set of nodes S v , and transmissions that belong in Φ u→v or Θ u→v .

Figure 2: System throughput (P) simulation results for heavy traffic conditions, λ = 1, for each frame j.

Figure 3: System throughput (P) simulation results as a function of λ averaged over different numbers of frames.