
A comprehensive-integrated buffer management strategy for opportunistic networks

Abstract

Opportunistic networks aim to provide reliable communication in an intermittently connected environment. Considerable research on buffer management in opportunistic networks has addressed individual aspects such as queuing strategy, cache replacement, and redundancy deletion, but most existing studies focus on only one subdivision of buffer management. To address this, this article proposes a comprehensive-integrated buffer management (CIM) strategy that takes all information relevant to message delivery and network resources into consideration. Simulation experiments show that the CIM strategy improves performance in terms of delivery ratio, overhead ratio, and average delivery delay.

1. Introduction

Opportunistic networks have captured much attention from researchers in recent years as a natural evolution of mobile ad-hoc networks [1]. They exploit the communication opportunities arising from node movement to forward messages hop by hop, implementing communication between nodes in a store–carry–forward manner. Opportunistic networks are characterized by sparse connectivity, forwarding through mobility, and fault tolerance. To cope with unpredictable connections and network partitions, many routing protocols [2–10] adopt flooding-based schemes to improve message delivery: a node receives packets, stores them in its buffer, carries them while moving, and forwards them to other nodes when they encounter each other. Spraying excessive copies into the network causes serious congestion and exhausts nodes' buffer space, which dramatically degrades transmission performance. Buffer management therefore plays a very important role in transmission, and the limited buffer at each hop should be used judiciously. Designing an efficient and effective buffer management strategy for opportunistic networks is thus a crucial issue.

The main objectives of buffer management are (a) to delete redundant information in the system, (b) to formulate a reasonable queuing strategy, (c) to control congestion, and (d) to establish a cache replacement policy. There has been prior work on designing buffer management strategies.

Most existing studies of buffer management for opportunistic networks have focused on a single subdivision of the field, such as queuing strategy, cache replacement, or redundancy removal, and few works consider all of these factors at the same time. Conventional buffer management strategies, including drop-random, drop-front, drop-tail, drop-oldest, drop-least-recently-received, evict-most-forwarded-first, and history-based drop [7, 8], have shown some improvement in message dissemination, but none of them proposes an integrated buffer management strategy. How to fully exploit the characteristics of opportunistic networks when designing a buffer management strategy is still an open issue.

In this article, we integrate the different parts of buffer management and take all information relevant to message delivery and network resources into account. Based on statistics and analysis of the state of the messages, the node's delivery history and location information, and the relevant information obtained through mutual learning between nodes, we propose a comprehensive-integrated buffer management (CIM) strategy. We have implemented all aspects of the proposed strategy in the ONE opportunistic network simulator [11, 12]. Simulation experiments show that the CIM strategy improves performance in terms of delivery ratio, overhead ratio, and average latency.

The remainder of the article is organized as follows. Section 2 gives the problem statement. In Section 3, we give an overview and detailed description of our algorithm. We evaluate our scheme through simulation in Section 4. Finally, we summarize our conclusions and discuss future work in Section 5.

2. Problem statement

In opportunistic networks, nodes not only forward data but also store data in their buffers and keep it for a long time (store–carry–forward). Several factors, such as the mobility of nodes, the number of copies of each message, and the buffer space of each node, should be considered [13–19].

First, we observe that the mobility of nodes greatly affects message delivery. Node mobility limits the encountering time between nodes, so the number of messages that can be successfully delivered during a contact is also limited. Based on this observation, we give high priority to messages that can use the encountered node as a relay and have a higher delivery probability. This suppresses the spreading of messages with low delivery probability, which would otherwise remain in the buffer and cause congestion. Without effective buffer management, a node may relay more messages with low delivery probability during this limited encountering time, wasting buffer resources and degrading system performance.

Second, flooding-based store–carry–forward transmission does not control the number of copies of a message. Unrestricted flooding inevitably leads to network congestion and frequent message loss, so an integrated buffer management strategy should address this factor.

Third, a large number of redundant copies of a message may remain in buffers after the message has been successfully delivered to its destination. Normally, these copies either stay in the buffer until the time-to-live (TTL) of the copy expires [20] or continue spreading in the network, which increases overhead and wastes network resources. We therefore include redundancy deletion in the buffer management.

Finally, buffer constraints can severely affect transmission performance in opportunistic networks. Studies show that flooding-based routing, e.g., epidemic routing (ER), achieves the minimum delivery delay when buffers are unconstrained but performs poorly when buffer sizes are limited. When the buffer is full, the buffer management policy decides which message to drop so that a new message can be accommodated. Therefore, our buffer management takes buffer replacement into consideration.

To improve transmission performance in opportunistic networks, we design a comprehensive-integrated buffer management strategy to tackle the problems discussed above.

3. Comprehensive-integrated buffer management strategy

The design of a buffer management strategy should be consistent with the mobility model of the nodes and the routing algorithm. Most studies use random mobility models to simulate node movement. However, the study in [21] shows that typical random mobility models differ significantly from the real movement patterns of people's daily lives: most people travel among a few locations (such as home, dormitory, workplace, or athletic field), and only about 3% of people frequently travel more than 100 km from home. An opportunistic network mainly relies on the encountering opportunities of nodes; these nodes are mobile devices normally carried by individuals, so node mobility has a social character. Some nodes are more active than others and have more chances to forward messages for others. Taking these factors into consideration, we design the queuing policy around the sociality of nodes. In addition, studies in [4–6, 22, 23] show that the message delivery ratio increases as the TTL of the messages, the number of copies of the messages, and the number of forwardings increase. If a message has larger values of these parameters, it is more likely to have already been delivered, so we preferentially remove messages with larger values of these parameters; this idea underlies our buffer replacement policy. Although flooding-based routing performs worse when the buffer is constrained, buffer and CPU constraints will become less of a problem as mobile terminals develop rapidly. On the other hand, flooding-based routing has the merits of high throughput and short delay, and future mobile devices equipped with powerful hardware and large buffers will help increase message delivery under flooding-based routing in opportunistic networks. In our design, we therefore combine buffer management with flooding-based routing. By collecting and analyzing the status of messages, and considering historical delivery and location information, we propose a comprehensive-integrated buffer management strategy. We give the details of our design in the following sections.

3.1. Queuing policy

The queuing policy of our buffer management strategy uses a two-level priority queuing scheme. In the first level, each node maintains a state information packet (SIP) and periodically updates its state information. Before a node sends messages to its relay nodes, it exchanges its SIP with the encountered nodes.

The SIP contains the node's ID, a message abstract, a time stamp, the node's position coordinates, and a list of its most frequently contacting nodes. The node's ID identifies the node holding the packet. The message abstract lists the IDs of the messages the node is holding, from which another node can learn which messages it does not yet have. The most frequently contacting nodes are the nodes whose message counts with this node rank in the top five. To maintain this list, we first define the contact range as a circle centered at the node's current coordinates with radius LocationRang, which we set to 200 m. We also define a regular contact nodes list, nearHostList.
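
To make the SIP structure concrete, the following minimal Python sketch shows one possible representation; the field names (node_id, message_ids, and so on) are our own illustrative assumptions rather than the authors' ONE-simulator implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StateInformationPacket:
    """Illustrative SIP layout; field names are assumptions, not the paper's code."""
    node_id: str                                   # ID of the node holding the packet
    timestamp: float                               # time the state information was last updated
    position: Tuple[float, float]                  # current position coordinates of the node
    message_ids: List[str] = field(default_factory=list)        # message abstract: IDs of buffered messages
    frequent_contacts: List[str] = field(default_factory=list)  # top-five most frequently contacting nodes
```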

The process of updating the regular contact nodes list is shown in Figure 1.

Figure 1. Creation of list of near hops.
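
The figure itself is not reproduced here; the sketch below is our own Python illustration of the update it depicts, assuming helper names such as contact_counts and using math.dist (Python 3.8+). It is not the authors' code.

```python
import math

LOCATION_RANGE = 200.0  # metres, the LocationRang value used in the paper

def update_near_host_list(my_pos, contacts, contact_counts, top_k=5):
    """Rebuild nearHostList: keep encountered nodes inside the contact range,
    ranked by how many messages have been exchanged with them, and return the top-k.

    my_pos         -- (x, y) coordinates of the current node
    contacts       -- dict {node_id: (x, y)} of recently encountered nodes
    contact_counts -- dict {node_id: number of messages exchanged with that node}
    """
    in_range = [nid for nid, pos in contacts.items()
                if math.dist(my_pos, pos) <= LOCATION_RANGE]
    in_range.sort(key=lambda nid: contact_counts.get(nid, 0), reverse=True)
    return in_range[:top_k]
```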

After a node obtains the SIP of an encountered node, it can learn whether the destination of a newly arriving message is in its regular contact nodes list. If so, it preferentially inserts the message into the queue.

The second level of queuing deals with the remaining messages after the first level of queuing.

We assign each message i a priority P_i according to the number of hops over which the message has been relayed, H_relay, and the number of times it has been forwarded by this node, N_forward:

$$P_i = \frac{1}{H_{relay} + N_{forward}}. \qquad (1)$$

We queue the messages in ascending order of P_i and append this queue to the tail of the first-level queue. In this way, the node not only exploits real-time state information to deliver messages but also takes the number of copies of messages into account. The queuing policy process is shown in Figure 2.

Figure 2. Process of queuing policy.
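
The two-level policy can be sketched in Python as below; the message attributes dest, hops_relayed, and n_forwarded are assumed names for the destination and the quantities H_relay and N_forward of Equation (1), not the authors' implementation.

```python
def build_send_queue(buffered_msgs, near_host_list):
    """Two-level queue: level 1 holds messages destined for a regular contact;
    level 2 holds the rest, sorted in ascending order of P_i from Eq. (1)."""
    level1 = [m for m in buffered_msgs if m.dest in near_host_list]
    level2 = [m for m in buffered_msgs if m.dest not in near_host_list]
    # P_i = 1 / (H_relay + N_forward); guard against a zero denominator.
    level2.sort(key=lambda m: 1.0 / max(m.hops_relayed + m.n_forwarded, 1))
    return level1 + level2  # level-2 messages are appended to the tail of the level-1 queue
```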

3.2. Buffer replacement policy

In opportunistic networks, the buffer size of each node is limited. A node must eventually discard old copies to make room for newly arriving messages when its buffer is full. Normally, a copy is discarded when its TTL expires. If the TTL expires before the node encounters any other node, the copy is simply dropped; otherwise, a decision about which copies to drop must be made when the buffer fills up. Considering only the TTL is not sufficient, because the message that has stayed longest in the buffer is not necessarily the message that has been forwarded the most times. In this article, we take the TTL, size, and forwarding count of a message into consideration. As shown in Equations (2) and (3), we define a weighted utility for message i in node j:

$$W_{ij} = \alpha \frac{TTL_{min}}{TTL_{o}} + \beta \frac{BS_{j}}{S_{mi}} + \gamma \frac{count_{av}}{count}, \qquad (2)$$

where α, β, and γ are weighting factors that represent the impact of the TTL, the message size, and the average forwarding count of the messages, respectively. TTL_min is the remaining TTL of the message, TTL_o is its initial TTL, BS_j is the buffer size of node j, S_mi is the size of the message, count_av is the average number of times the messages in node j have been forwarded, and count is the number of times message i has been forwarded.

Then, we calculate the overall buffer utility of node j as follows:

$$U_{j} = \frac{\sum_{i=1}^{M_{j}} W_{ij}}{M_{j}}, \qquad (3)$$

where M_j is the total number of messages in the buffer of node j.

When a node j receives a message i, it first checks whether it is congested. If so, it calculates the buffer utility W_ij of the message i in node j. If W_ij is less than the overall buffer utility U_j of node j, the message is discarded; otherwise, the buffer replacement function drops the buffered message with the minimum buffer utility. This process repeats until the free buffer space is sufficient to accommodate the message. The flow chart is shown in Figure 3.

Figure 3. Flow chart of buffer replacement.
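
The replacement decision of Equations (2) and (3) and Figure 3 can be sketched as follows; the node and message attributes (buffer, buffer_size, free_space, drop, ttl_left, ttl_init, size, fwd_count) are assumed names, and the weights default to 1 only for illustration.

```python
def buffer_utility(msg, node, alpha=1.0, beta=1.0, gamma=1.0):
    """W_ij of Eq. (2) for message msg held by node."""
    counts = [m.fwd_count for m in node.buffer]
    count_avg = sum(counts) / len(counts) if counts else 0.0
    return (alpha * msg.ttl_left / msg.ttl_init           # remaining-TTL term
            + beta * node.buffer_size / msg.size          # message-size term
            + gamma * count_avg / max(msg.fwd_count, 1))  # forwarding-count term

def accommodate(node, incoming):
    """Drop policy of Figure 3: reject the incoming message if its utility is below
    the node's overall utility U_j (Eq. 3); otherwise evict lowest-utility messages
    until the incoming message fits. Returns True if the message is accepted."""
    if not node.buffer:
        return True
    u_node = sum(buffer_utility(m, node) for m in node.buffer) / len(node.buffer)
    if buffer_utility(incoming, node) < u_node:
        return False                                      # discard the incoming message
    while node.free_space() < incoming.size and node.buffer:
        victim = min(node.buffer, key=lambda m: buffer_utility(m, node))
        node.drop(victim)                                 # evict the lowest-utility message
    return node.free_space() >= incoming.size
```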

3.3. Congestion control

Congestion control generally has two parts. The first part is how to choose which message to discard when the buffer space is full; this part is handled by the buffer replacement policy described above. The second part is how to control the number of message copies spread in the network, and this section focuses mainly on the second part.

We apply our buffer management strategy to ER, whose total number of message copies is uncontrolled. Excessive flooding inevitably fills the buffer space of nodes with copies in a very short time, keeps the buffer utilization rate relatively low, and consumes a large share of the network's transmission resources. To deal with this problem, the proposed congestion control function introduces a mechanism that limits excessive flooding based on the number of forwarding hops of a message and the number of times the message has been relayed. We set a threshold HopsCountTh for the number of forwarding hops of a message and a threshold RelayCountTh for its relay count. For simplicity, we use the average relay count of the buffered messages as the threshold; as this number follows a Gaussian distribution, it can be calculated from local information:

$$RelayCount_{Th} = E\big[RelayCount_{i}\big], \qquad (4)$$

where RelayCount_i is the number of times message i has been relayed.

To control the number of copies spread in the network, a node only forwards a message whose number of forwarding hops is less than HopsCountTh and whose relay count is less than RelayCountTh.

Figure 4 shows the maximum transmission case where HopsCountTh is 1 and RelayCountTh is 2.

Figure 4. Maximum transmission case of message M1.
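
A sketch of the forwarding check is given below; hop_count and relay_count are assumed attribute names for the message's forwarding-hop count and the number of times this node has relayed it, and the threshold of Equation (4) is simply the local mean. This is our own illustration of the rule stated above.

```python
HOPS_COUNT_TH = 1  # example value from Figure 4

def relay_count_threshold(buffered_msgs):
    """RelayCount_Th of Eq. (4): mean relay count over the locally buffered messages."""
    counts = [m.relay_count for m in buffered_msgs]
    return sum(counts) / len(counts) if counts else float("inf")

def may_forward(msg, buffered_msgs):
    """Forward only while both the hop count and the relay count are below threshold."""
    return (msg.hop_count < HOPS_COUNT_TH
            and msg.relay_count < relay_count_threshold(buffered_msgs))
```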

3.4. Redundancy deletion

Flooding-based routing spreads multiple copies of a message in the network to increase the chance of delivery. When one of the copies reaches the destination, the transmission of the other copies of the same message should be terminated. However, these other copies still occupy the buffers of the relay nodes and continue to spread in the network until their TTL expires. Such useless copies compete for network resources with useful ones, causing unnecessary resource consumption, so the redundancy deletion should remove them. We introduce a learning mechanism: each node maintains a list of messages that have been successfully delivered. When two nodes encounter each other, they exchange these lists and later spread them to other encountered nodes. In this way, redundant messages are cleaned up and the corresponding buffer space is released quickly.
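
As a rough sketch of this learning mechanism, assume each node keeps a set delivered_ids of IDs of messages known to be delivered (an assumed name); the merge and purge on an encounter might then look as follows.

```python
def exchange_delivered_lists(node_a, node_b):
    """On an encounter, merge the two delivered-message lists and purge the
    corresponding redundant copies from both buffers."""
    delivered = node_a.delivered_ids | node_b.delivered_ids
    node_a.delivered_ids = set(delivered)
    node_b.delivered_ids = set(delivered)
    for node in (node_a, node_b):
        for msg in list(node.buffer):          # copy so we can drop while iterating
            if msg.id in delivered:
                node.drop(msg)                 # release the buffer space immediately
```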

4. Simulation and analysis

4.1. Network model and simulation environment

This section evaluates the performance of the proposed comprehensive-integrated buffer management strategy in the ONE simulator [11, 12]. We model the opportunistic network as a dynamic set of mobile nodes; nodes may join and leave the network at any time. In our scenario, there are five groups of moving elements: pedestrians, bicycles, electric motor cars, vehicles, and office workers. Each group follows a map-based movement model with a different speed. The vehicles and bicycles choose random destinations within their reach on the map. The number of moving elements in each group can be changed without affecting the basic communication characteristics. The simulation parameters are listed in Table 1.

Table 1 Simulation environment parameters

The following metrics are used in our simulations.

Delivery ratio, which is defined as the ratio of the number of delivered messages to the total number of sent messages.

Overhead ratio, which is defined as the average number of relays used for one delivered message.

Average delay, which is defined as the average delay of all messages received by destination nodes.
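
For reference, the three metrics can be computed from simulation records as in the sketch below. This is our own helper following the definitions above (not one of the ONE simulator's report modules); created, relayed, and delivered are assumed inputs.

```python
def report_metrics(created, relayed, delivered):
    """created   -- number of messages generated (sent) in the scenario
    relayed   -- total number of relay (forwarding) events
    delivered -- list of end-to-end delays in seconds, one per delivered message"""
    n = len(delivered)
    delivery_ratio = n / created if created else 0.0
    overhead_ratio = relayed / n if n else 0.0   # relays per delivered message, as defined above
    average_delay = sum(delivered) / n if n else 0.0
    return delivery_ratio, overhead_ratio, average_delay
```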

In the performance evaluation, we implement the proposed CIM strategy in the representative flooding-based routing scheme ER. For comparison, we also apply the Random buffer management strategy (a node forwards a message from its buffer at random) and FIFO (when two nodes encounter, the most recently received message in the buffer is forwarded last) to the representative flooding-based routing schemes ER, PRoPHET routing (PRO), and Spray and Wait (SNW), respectively.

We run all these routing schemes in the same scenario with the above parameters and compare their performance in terms of delivery ratio and delivery delay under different buffer sizes, TTL values, and total numbers of messages.

4.2. Overall performance

Figure 5 compares the delivery ratio of CIM with the other four strategies, including ER with Random buffer management, ER with FIFO, and PRO. The delivery ratio of CIM increases over time, and after 2000 s CIM has the highest delivery ratio, leading the other strategies by 15–20%. Considering that CIM uses a two-level priority queuing scheme that not only exploits real-time state information to deliver messages but also takes the number of copies of messages into account, the improvement of the delivery ratio is significant. Figure 6 clearly shows that CIM has the lowest overhead and average delay, three to four times lower than those of the other three strategies. The proposed CIM uses a congestion control policy that confines the excessive flooding in ER by controlling the number of forwarding hops and the relay count, which results in the lower overhead ratio. The low average delay also validates the assumption behind our buffer replacement policy, which takes the TTL, size, and forwarding count of a message into consideration and lets messages spend less time in the buffer. Overall, the proposed CIM performs better than the other flooding-based schemes and is comparable to SNW.

Figure 5. Comparison of delivery ratio.

Figure 6. Comparisons of overhead ratio and average delay.

4.3. Impact of buffer size

As shown in Figure 7, when the buffer size is 10 Mb, ER using the CIM strategy achieves a delivery ratio about 5% higher than with the Random and FIFO strategies. This is because, when the buffer size is limited, the proposed CIM preferentially forwards the messages in the priority queue according to the location information of nodes and the record of historical encounters, periodically removes redundant information from the buffer, and applies a reasonable buffer replacement policy when congestion occurs. Although ER using CIM is still a flooding-based routing scheme and packet loss is inevitable, its performance remains comparable to that of SNW.

Figure 7. Delivery ratio with different buffer size.

Figure 8 compares CIM, Random, FIFO, PRO, and SNW with respect to the overhead ratio under different buffer sizes. As the buffer size increases, the overhead ratios of CIM and SNW remain stable and are lower than those of the other three schemes. This is due to the congestion control policy and buffer replacement management used in CIM: flooding is effectively controlled, redundant messages are removed in a timely manner, and messages are delivered as soon as possible, which yields the lower overhead ratio. The other flooding-based routings using Random or FIFO do not take congestion or historical information into consideration. Therefore, their buffer utilization is very low, a large number of redundant messages remain in the buffer and consume network resources, and thus their overhead is relatively high.

Figure 8. Overhead with different buffer size.

Figure 9 shows that the average delay of CIM is the lowest of all schemes. This is mainly due to the queuing strategy and buffer replacement policy, which let messages be delivered in a short time. Although SNW yields low overhead by confining the number of copies, the limited number of copies results in a higher delivery delay than that of CIM; a trade-off thus exists between delivery delay and overhead. Overall, CIM performs better than the other schemes.

Figure 9. Average delay with different buffer size.

5. Conclusion

Opportunistic networks aim to provide reliable communication in an intermittently connected environment. The performance of flooding-based store–carry–forward transmission may deteriorate when the buffer size is limited, and existing studies of buffer management for opportunistic networks have focused on single subdivisions of the field, such as queuing strategy, cache replacement, or redundancy removal. To address this, we integrate the different parts of buffer management and take all information relevant to message delivery and network resources into account, proposing a comprehensive-integrated buffer management (CIM) strategy. Extensive results from the ONE simulator are provided to evaluate the proposed strategy, and the simulation experiments indicate that it is effective and outperforms the existing solutions.

References

1. Fall K: A delay-tolerant network architecture for challenged internets. Proceedings of ACM SIGCOMM 2003, Karlsruhe, Germany, August 25–29, 2003, 24–27.
2. Vahdat A, Becker D: Epidemic routing for partially-connected ad hoc networks. Technical Report CS-2000-06, Duke University, Durham, NC, USA, 2000.
3. Spyropoulos T, Psounis K, Raghavendra CS: Spray and wait: an efficient routing scheme for intermittently connected mobile networks. Proceedings of the ACM SIGCOMM Workshop on Delay Tolerant Networking (WDTN), Philadelphia, PA, USA, 2005, 252–259.
4. Nguyen HA, Giordano S, Puiatti A: Probabilistic routing protocol for intermittently connected mobile ad hoc network. Proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, Espoo, Finland, 2007, 1–6.
5. Jouni K, Jörg O: Time scales and delay-tolerant routing protocols. Proceedings of CHANTS '08, San Francisco, CA, USA, 2008, 13–19.
6. Lindgren A, Doria A, Schelén O: Probabilistic routing in intermittently connected networks. ACM SIGMOBILE Mob. Comput. Commun. Rev. 2003, 7(3):19–20. doi:10.1145/961268.961272
7. Santos RM, Orozco J, Ochoa S: A real-time analysis approach in opportunistic networks. ACM SIGBED Rev. 2011, 8(3).
8. Chiara B: Design and analysis of context-aware forwarding protocols for opportunistic networks. Proceedings of MobiOpp '10, Pisa, Italy, 2010, 201–202.
9. Boldrini C, Conti M, Passarella A: Exploiting users' social relations to forward data in opportunistic networks: the HiBOp solution. Pervasive Mob. Comput. 2008, 4(5):633–657. doi:10.1016/j.pmcj.2008.04.003
10. Jindal A, Psounis K: Contention-aware analysis of routing schemes for mobile opportunistic networks. Proceedings of ACM MobiOpp, Puerto Rico, USA, 2007, 1–8.
11. Keränen A, Ott J: The ONE simulator for DTN protocol evaluation. Proceedings of the 2nd International Conference on Simulation Tools and Techniques (SIMUTools 2009), Rome, Italy, 2009.
12. The ONE simulator. http://www.netlab.tkk.fi/tutkimus/dtn/theone/
13. Chen Z, Qiu Y, Liu J, Xu L: Incentive mechanism for selfish nodes in wireless sensor networks based on evolutionary game. Comput. Math. Appl. 2011, 62:3378–3388. doi:10.1016/j.camwa.2011.08.052
14. Adar E, Huberman BA: Free riding on Gnutella. Technical Report SSL-00-63, Ecologies Area, Xerox Palo Alto Research Center, Palo Alto, CA, USA, 2002.
15. Li Q, Zhu S, Cao G: Routing in socially selfish delay tolerant networks. Proceedings of IEEE INFOCOM 2010, San Diego, CA, USA, 2010, 1–9.
16. Panagakis A, Vaios A, Stavrakakis I: On the effects of cooperation in DTNs. Proceedings of the Second International Conference on System Software and Middleware (COMSWARE), Bangalore, India, 2007, 1–6.
17. Resta G, Santi P: The effects of node cooperation level on routing performance in delay tolerant networks. Proceedings of the 6th Annual Conference on Sensor, Mesh and Ad Hoc Communications and Networks, Washington, DC, USA: IEEE Computer Society, 2009, 413–421.
18. Altman E, Kherani AA, Pietro P, Molva R: Non-cooperative forwarding in ad hoc networks. Lect. Notes Comput. Sci. 2005, 3462:100–128.
19. Buchegger S, Le Boudec JY: Performance analysis of the CONFIDANT protocol. Proceedings of ACM MobiHoc, Lausanne, Switzerland, June 9–11, 2002, 226–236.
20. Prodhan AT, Das R, Kabir H, Shoja GC: TTL based routing in opportunistic networks. J. Netw. Comput. Appl. 2011, 34(5):1660–1670. doi:10.1016/j.jnca.2011.05.005
21. González MC, Hidalgo CA, Barabási A-L: Understanding individual human mobility patterns. Nature 2008, 453:779–782. doi:10.1038/nature06958
22. Nguyen HA, Giordano S: Context information prediction for social-based routing in opportunistic networks. Ad Hoc Netw. 2012, 10(8):1557–1569. doi:10.1016/j.adhoc.2011.05.007
23. Ram R, Richard H, Prithwish B, Regina RH, Rajesh K: Prioritized epidemic routing for opportunistic networks. Proceedings of MobiOpp, New York, NY, USA, 2007, 62–66.


Acknowledgment

This study was supported by the National Natural Science Foundation of China under Grant no. 61172087 and by the Natural Science Foundation of Guangdong Province under Grant no. 06300923.


Corresponding author

Correspondence to Daru Pan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Pan, D., Ruan, Z., Zhou, N. et al. A comprehensive-integrated buffer management strategy for opportunistic networks. J Wireless Com Network 2013, 103 (2013). https://doi.org/10.1186/1687-1499-2013-103
