
Delay and energy-efficient data collection scheme-based matrix filling theory for dynamic traffic IoT

Abstract

Data collection is a basic function of the Internet of Things (IoT): sensed data are gathered from sensor nodes to the sink in a timely manner so that a smart response can be made in an emergency. The goal of multi-modal sensor data fusion is to obtain simple and accurate data to enhance system reliability and fault tolerance. Energy efficiency and low delay are the most important indicators governing the performance of IoT. Convergecast is a low-latency data collection strategy based on efficient time division multiple access (TDMA), in which each sensor node generates a packet and m packets can be aggregated into one. However, in most practical networks, sensor nodes do not necessarily generate packets in every data collection cycle, but only from time to time. In previous convergecast strategies, each node was allocated a fixed slot, which increases delay and wastes energy. A delay and energy-efficient data collection (DEEDC) scheme based on matrix filling theory is proposed to collect data in WSNs with randomly generated traffic at minimum delay and energy consumption. The DEEDC scheme uses a clustering approach. For each cluster, the number of slots required for transmission is calculated from matrix filling theory rather than from the number of nodes that actually generate data. This ensures that data can be collected in a network with randomly generated data (number of slots ≤ number of nodes), thereby avoiding allocating a slot to every node and acquiring redundant data, which would waste time and energy. On this basis, a mixed slot scheduling strategy is proposed to construct an energy- and delay-efficient, collision-free schedule. Extensive theoretical analysis shows that the DEEDC scheme reduces delay by about 50~80% and energy consumption by about 40~57%.

1 Introduction

The Internet of Things (IoT) is the latest evolution of the Internet, connecting billions of devices such as cameras, RFIDs, wearables, vehicles, smart meters, medication pills, and industrial machines and signs [1,2,3,4,5]. An important part of the IoT is the wireless sensor network [6, 7], which has attracted wide attention from researchers owing to its large-scale, self-organizing, and dynamic characteristics [8,9,10] and has been widely used in industry, traffic information, military, environmental monitoring, etc. [11,12,13,14,15]. As the technology matures, the processing and sensing capabilities of sensors become more and more powerful; the variety of sensed quantities increases and the sensing accuracy improves; at the same time, sensors have become smaller and cheaper [16,17,18,19,20]. This has widened the application range of wireless sensor networks to smart home, intelligent agriculture, and other fields [21,22,23,24,25]. With this development, sensing devices have also advanced rapidly, and the current Internet is shifting from centralization to the edge; cloud computing [26, 27], edge computing [28, 29], and fog computing [30,31,32] are the new computational models proposed for this trend [26, 28, 33,34,35]. With the rapid rise of artificial intelligence technology [36, 37], the combination of artificial intelligence and the IoT has driven further development [38,39,40] and become a focus of researchers.

Broadly speaking, multi-modal sensor data fusion refers not only to synthesis across different modalities but also to the fusion of different features and of the same type of data within the same modality.

Sensor nodes are responsible for monitoring events within their sensing range and sensing the surrounding data, which they then send to the sink node via multi-hop routing [41,42,43,44,45]. To realize the enormous potential of wireless sensor networks (WSNs), it is important for the sink to integrate the data sensed by many sensor nodes and distill the high-value information each application needs [46,47,48]. For economic reasons, sensor nodes are simple in construction, small in size, and powered by batteries, so their energy is limited [3, 5, 7, 43, 49, 50]. Therefore, energy efficiency has become an important indicator [51, 52]. On the other hand, the main application of WSNs is monitoring [22, 49]. When a monitored event occurs, the packet is forwarded to the sink so that the system can quickly handle the event, because delayed data transmission may cause serious loss [21, 44]. For example, in the monitoring of critical facilities, important buildings, industrial sites, geological hazards, and fires, delayed data routing can lead to the destruction of important facilities or prevent people and objects in geological disaster areas from being evacuated, resulting in serious losses. Energy-efficient and fast data collection has therefore become an important research topic in WSNs [4, 5, 21, 53,54,55].

Convergecast data collection is an effective and widely used method [4, 5, 21, 53,54,55]. It relies on two important mechanisms. The first is data fusion: a class of m data packets can be merged into one when they meet [4, 5, 21]. This aggregation method is often used in applications such as monitoring the highest, average, and minimum temperature [4, 5, 21]. For example, when monitoring crops, only the three temperatures mentioned above need to be known. The second is the time division multiple access (TDMA) mechanism [4, 5, 21]. In TDMA, a slot for performing data operations is precisely assigned to each node. When a node is performing a data operation, it is awake; at other times, it goes into the sleep state to save energy. Therefore, for convergecast, the working process of each node can be divided into two stages. First, the node does nothing except receive data. In the second stage, the node performs data transmission, that is, the received data are fused into one data packet and sent out, after which the node no longer receives data [5].

In many studies, the state transition of a node is assumed to consume no energy. Under that assumption, it is reasonable to use TDMA to arrange the nodes' slots on demand. However, in practice, a node's state transition requires energy. Therefore, Xu et al. [56] suggest that the node stay in the sleep state for more time and reduce the number of state transitions. If the interval between two data operations of a node is long, letting the node sleep first and then wake up saves energy. However, if the interval between two data operations is relatively short, it may be more energy-efficient to keep the node awake. Xu et al. propose a scheduling strategy that requires only two state transitions in a data collection process. The first phase is data collection within the cluster, during which each cluster member sends data to the head. A cluster member node needs only one slot to send data, after which all its data operations are complete. The cluster head, once awake in the first phase, starts to receive data and remains awake throughout this process. The second phase is data transfer between the cluster heads: when the first phase is completed, each cluster head has received all data, and inter-cluster data relay starts from the area far from the sink. To save energy, the cluster head node near the sink first goes into the sleep state and waits until the inter-cluster data relay is needed. In the method proposed by Xu et al., the node only needs to change state twice.

In the study of Xu et al., an equal-cluster network structure is used, in which the radius of every cluster in the network is the same. In Li et al. [5], we argued that the performance of data collection (e.g., latency and energy utilization) is not optimal in a network structure with equal cluster radii. Therefore, we use a network structure with unequal cluster radii. Different from most previous studies, in our proposed strategy the cluster radius is smaller in the region far from the sink. With such a network structure, a node needs to switch state only once. This is because the nodes in the far-sink area have a smaller cluster radius and enter the inter-cluster relay process earlier. When a node near the sink completes the data collection work in its cluster, the data packet from far away has just arrived. Therefore, the number of state transitions is reduced from two to one, and the delay of data collection is significantly reduced.

Although current research has achieved good results, it is based on the assumption that each node generates a packet in every round of data transmission. However, in practice, some nodes do not generate a packet every time. As a simple example, when monitoring forest fires, nodes need to send data packets only when danger is detected and send only sporadic packets at other times. Obviously, it is not appropriate to assign a slot to each node in such a network. According to the previous strategy, at least n slots are needed in a cluster with n nodes. However, if packets are generated irregularly and the proportion of nodes generating a packet is only ε, then the theoretical number of slots to be allocated is εn. In this case, still allocating slots for all nodes is wasteful: it makes the data collection delay large and keeps nodes that do not need to transmit data awake, which consumes energy and reduces network lifetime.

In a network where packets are randomly generated, the required slots can in theory be greatly reduced, but finding a suitable collection strategy is extremely hard. The difficulty is that TDMA scheduling is centralized, and the state of each node in the network must be obtained before scheduling. In addition, TDMA requires the network parameters to be the same for each data collection so that the scheduling slots are fixed. A network in which data are randomly generated does not meet these requirements.

First, both the set of nodes that generate packets and the number of generated packets change from round to round. Therefore, to achieve good results, the slots would have to be re-allocated for each node before each round of scheduling, but this is not practical: the scheduling algorithm does not know in advance which nodes generated data and which did not. In a network in which nodes randomly generate packets, the number of slots required in each round differs and cannot be known in advance. Assigning a fixed slot to each node can cope with all situations, but the performance is poor. At present, there has been no research on networks in which data packets are generated randomly.

On the other hand, many studies have shown that there is a certain correlation between data packets in time and space. When the collected data packets reach a certain amount, the missing data packets can be recovered, so the information carried by the network is not lost.

We propose a delay and energy-efficient data collection (DEEDC) scheme based on matrix filling theory. By skillfully utilizing matrix filling theory, the DEEDC scheme allows the entire network to collect only a subset of data samples in each round, while the remaining data can be recovered from the acquired data. Suppose there are n nodes and, according to matrix filling theory, m packets need to be collected (m ≤ n). Then, in the DEEDC scheme, only m slots need to be arranged for data collection, which reduces the number of required time slots and ultimately reduces the delay. Because nodes generate packets randomly, it is impossible for exactly m cluster member (CM) nodes in a cluster to generate packets every time. Our method is to find a threshold k (k > m): if more than k packets are generated in the cluster, the cluster head (CH) node will receive only k of them.

This paper proposes a DEEDC scheme based on matrix filling theory to collect data in an energy-efficient and low-delay manner. The main contributions are as follows:

  1.

    A DEEDC scheme based on matrix filling theory is proposed to collect data in an energy-efficient and low-delay manner for dynamic-traffic WSNs. Although collection strategies proposed in the past can also be applied to networks that randomly generate data packets, they incur considerable delay and energy waste. The DEEDC scheme differs from previous data collection strategies in that, by using matrix filling theory, it ensures that no information is lost in a network that dynamically generates data. Thus, each cluster head is allocated the minimum number of collected data packets from which all the data can be recovered.

    As far as we know, the DEEDC scheme is the first strategy that can adapt to dynamic data generation, and the number of slots, the delay, and the energy consumption it requires are much smaller.

  2.

    A clustered data routing protocol suitable for networks that generate data from time to time is given for the DEEDC scheme. Here, the unequal-clustering network structure from our previous research is used. Then, for the specific situation considered in this article, the data routing strategy is given in detail.

    The data routing strategy consists of two processes. The first is the intra-cluster data collection phase, which contains two sub-processes: allocating a slot to each cluster member and sending the data to the cluster head. In the first step, a contention-based MAC protocol is used: each cluster member with data to send contends for the channel and reports the data generation to the cluster head. After receiving the members' messages, the cluster head decides which packets to receive and broadcasts the corresponding node IDs and slot information. The second process is inter-cluster data routing. Through these two processes, data can be efficiently collected in a wireless sensor network that randomly generates data.

  3.

    Rigorous and comprehensive theoretical analysis shows that the DEEDC scheme can adapt to wireless sensor networks that dynamically generate data. The DEEDC scheme reduces delay by about 50~80% and reduces energy consumption by at least 40% and up to 57%.

    The remaining sections are organized as follows. The research background and related work are presented in Section 2. The relevant models are described in Section 3. In Section 4, the DEEDC scheme proposed in this paper is introduced in detail. In Section 5, the DEEDC scheme is analyzed in detail and compared with previous schemes. In Section 6, we summarize the paper. Finally, the meanings of the abbreviations appearing in the paper are given in Section 7.

2 Background and related work

The prosperity of WSNs provides an inexhaustible driving force for the development of the IoT [57]. In WSNs, a sensor node collects data from its surroundings and forwards it to the sink node, using other sensor nodes as relays. The sink node receives the data packets generated in the network and, by analyzing them, monitors special changes in time and takes adjustment measures to achieve automation.

In the above process, two main issues need to be taken seriously: the first is energy, and the second is delay [10].

Sensor nodes are battery-powered and deployed in special areas with complex environments, so it is very troublesome to replace their batteries [58]. In previous studies, the energy consumed by sensor nodes is mainly attributed to the following: (1) packet transmission, including sending and receiving; (2) switching state [59]; to reduce energy consumption, sensor nodes typically do not remain awake all the time but constantly switch between the sleep and awake states, and although state switching consumes some extra energy, it is small compared with the energy consumed by listening, so this approach generally reduces energy consumption and is often used; (3) keeping listening [60]; here the node is awake although no packets need to be transmitted, listening for the status of other nodes so that it can receive a packet as soon as it senses one; the energy consumed while listening is greater than while asleep. Therefore, in the ideal case, a node wakes up exactly when it needs to transmit packets and stays asleep at other times; that is, almost all of its energy is used to transmit packets. To this end, relevant scholars have also done a lot of research.

Delay is another factor that requires attention in addition to energy consumption [61]. Real-time performance is a very important feature of WSNs. For example, in dealing with forest fires, the sooner the sink node learns of the danger, the faster the rescue response will be, and timely rescue keeps losses as small as possible. However, since WSNs transmit wirelessly, delays are inevitable. The delay of a single hop mainly includes [60]: (1) the time from the generation of a packet until it is actually sent; in a duty-cycle-controlled network, the receiver may be asleep, so the sender must wait, and this transmission interruption causes delay [60]; (2) delay caused by communication; to increase the packet reception rate, the sender and receiver often need to establish a connection in advance, especially in networks using a send-wait protocol, and all of these operations increase the node's delay; (3) delay in transmitting data packets; nodes transmit packets at a certain rate, so the more packets, the greater the delay, and reasonably increasing the transmission rate or reducing the amount of transmitted data reduces the delay; (4) delay due to data retransmission [62]; since the transmission is wireless, if a packet is not successfully received, the sender must transmit the same packet again, causing delay.

2.1 Existing research related to delay

Delay, as a performance indicator that seriously affects WSNs, is naturally a focus of researchers [63]. To reduce the delay, the methods commonly used by researchers include the following.

  1.

    In a network with duty cycle control, the receiver may be asleep, causing part of the delay. Eliminating duty cycle control obviously reduces this delay directly to zero, but at the expense of increased energy consumption. Therefore, researchers try to control the sender so that, when a packet needs to be sent, an awake receiver can be found as soon as possible. Reasonable control mainly involves identifying the hotspot period and increasing the probability that a potential receiver wakes up during it; all possible receiver nodes are scheduled to wake up evenly during the period in which the sender node may send a data packet [64].

  2.

    Increase the transmission radius to reduce the hop count. However, as the transmission radius increases, the reception rate of the node necessarily decreases; conversely, if the previous reception rate is to remain unchanged, the transmission power must be increased. Related research indicates that the reception rate can be guaranteed even without changing the transmission power, for example by using opportunistic routing, but this approach sacrifices other performance.

  3.

    The shortest-path routing scheme [65]. In this scheme, once a node is deployed, it first sets its own hop count to infinity. The sink node then starts broadcasting the message "I am at a distance of 0 from the sink". All nodes that receive this message set their distance from the sink node to 1. Subsequently, these nodes continue to broadcast the message "my distance to the sink node is k". Nodes that receive this message set their distance to k + 1. In particular, if node A receives two different broadcasts whose hop count information is i and j, respectively, it sets its distance to min(i, j). Through the above method, the minimum hop count is found, and packets are transmitted along the found path, which reduces the delay (a runnable sketch of this flooding process is given at the end of this subsection).

  4.

    Increase the reliability of transmission, which is mainly for networks with low reliability [62]. If the reliability of node transmission is not high, retransmissions cause a large amount of delay; from this perspective, improving reliability can effectively reduce the delay, and many research programs are based on this idea. To improve reliability, increasing the transmission power is the most direct and simple method. In addition, opportunistic routing is a common way to increase the reliability of communication links [64]. In opportunistic routing, the sender node selects multiple nodes as relays, referred to as candidate nodes. When data transmission is performed, the sender transmits the data packet to all candidate nodes at once by broadcasting. Subsequently, one of the nodes that received the packet is selected in order of priority, and the data packet is forwarded onward through this node. Obviously, a one-hop transmission fails only if all candidate nodes fail to receive the packet.

  5.

    Reduce the delay of routing packets by improving the packet transfer rate from a hardware perspective. By increasing the transmission rate, the time it takes to transmit the same amount of data naturally decreases. The idea is simple and the effect is obvious. However, for integrated wireless sensor nodes, increasing the transmission rate means replacing all existing nodes with new sensor nodes, and the workload should not be underestimated. In addition, because of the size and energy constraints of the sensor node itself, the transmission rate can be increased only to a limited extent.

Since the packets collected by sensor nodes have a certain correlation in time and space, after receiving the packets a node can directly remove redundant parts through data aggregation technology [66]. When the transmission rate is constant, this directly reduces the delay.

Since all data are routed to the sink node, far more data packets are sent in the area near the sink. Thus, by arranging more nodes near the sink, the average amount of data transmitted per node is reduced. Through the above scheme, the average one-hop delay is reduced.
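To make the shortest-path routing idea in item 3 above concrete, the following is a minimal sketch (not the cited scheme's implementation) of hop-count flooding over an undirected neighbor graph; the graph representation, function name, and example topology are illustrative assumptions.

```python
from collections import deque

def assign_hop_counts(neighbors, sink):
    """Breadth-first flooding of 'my distance to the sink is k' messages.

    neighbors: dict mapping node id -> iterable of neighbor ids (assumed topology).
    sink: id of the sink node.
    Returns a dict node id -> minimum hop count to the sink.
    """
    hops = {node: float("inf") for node in neighbors}  # every node starts at infinity
    hops[sink] = 0
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            # a node keeps min(i, j) when it hears several broadcasts
            if hops[u] + 1 < hops[v]:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

# Example: a small 5-node topology with the sink labelled 0.
topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(assign_hop_counts(topology, sink=0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```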

2.2 Existing research related to energy consumption

Another important indicator in WSNs is energy consumption. If the initial energy is fixed, the slower a sensor node consumes energy, the longer it can work, that is, the longer its lifetime [67]. To extend the lifetime as much as possible, researchers have proposed some common practices.

  1.

    Duty cycle control is the most common method of saving energy [64]. According to research, when a node is asleep, the energy consumed is only about 1% of that in the awake state. Letting the node stay asleep in most time slots can therefore effectively increase energy utilization, but the delay becomes larger. Earlier works have recognized this problem and given strategies to deal with it. For example, the duty cycles of nodes with more residual energy are appropriately increased to keep them awake in more time slots, thereby reducing the latency of the sender nodes without shortening the network lifetime.

  2.

    The energy a node consumes in listening can be reduced by reasonable control in a network with duty cycle control. For example, multiple possible receivers are prepared for each sender node [68]: if a node wants to transfer packets, the transfer can proceed as long as one receiver is awake. By controlling the receivers' wake-up slots, the average listening duration of the sender node can also be reduced. For example, if some receiver is always awake according to a priority schedule, the sender spends essentially no energy on listening.

  3.

    Data aggregation technology [66]. Through this technology, the amount of transmitted data and packet size are effectively reduced, thereby increasing energy utilization and reducing energy consumption.

2.3 Convergecast data collection

The slots and delays involved in previous data collection strategies are not fixed: in different rounds, the data collection delay may even double, and the slot in which each data operation occurs is not fixed. Convergecast data collection is another data collection strategy, in which the time slot of each node's data operation is determined, so the delay at which the data reach the sink is also determined [4, 5, 53,54,55,56].

Convergecast is ideal for data fusion networks, in which m packets can be combined into one. There are many variants of convergecast data collection, the most common of which are tree-based data collection [53] and cluster-based data collection [5, 56]. The main difference between these two types of strategies is the network structure used. (1) The main feature of the tree-based data collection strategy is to organize the network into a tree. The data collection process is then the process of arranging slots for each node. To save energy, data collection obeys the basic rule of convergecast, that is, a node must wait until the packets of all its child nodes are collected before sending data to its parent. In such a strategy, the research focus is on how to control the wake-up time slots so that the time required for the entire data collection is minimized. (2) In the cluster-based data collection strategy, each cluster first collects data within the cluster and then performs inter-cluster data relay. Relatively speaking, the delay of the cluster-based convergecast strategy is smaller. The reason is as follows: the tree-based convergecast must start from the leaf nodes and collect data up to the root node, whereas in cluster-based convergecast the first phase of data collection in different clusters proceeds simultaneously; after in-cluster data collection, all CM nodes can be ignored, and the remaining CH nodes form a tree.

However, as mentioned above, all current convergecast strategies target networks in which data are generated periodically. In many practical applications, node data are generated randomly. Therefore, it is urgent to propose a low-energy, low-delay strategy for networks that randomly generate data, which is a huge challenge.

2.4 Existing research related to matrix filling

Matrix filling (also known as matrix completion) is a common data processing and optimization technique [69]. With matrix filling, only a subset of known data is needed to recover the remaining unknown data. This feature makes matrix filling especially suitable for WSNs.

Based on matrix filling, an optimal unmanned aerial vehicle data collection trajectory (OUDCT) scheme was proposed [25]. In this scheme, the drone trajectory is optimized according to the matrix filling theory.

In Tan et al. [46], researchers proposed an adaptive collection scheme based on matrix completion (ACMC). The receiving behavior of each node is adaptively adjusted, thereby reducing delay and increasing energy utilization. Specifically, when energy is sufficient, the node collects more data; otherwise, it reduces the amount of collected data.

In addition, matrix filling technology is combined with mobile crowdsourcing technology (an emerging data collection solution) to obtain a novel matrix completion technique-based data collection (MCTDC) scheme [63].

3 The system model and problem statement

3.1 The network model

We adopt a common clustering network model [56]: a circular network with the sink node deployed at the center. The nodes are evenly distributed with density ρ. Each node generates a metadata packet with probability p in each round of data transmission. All sensor nodes are first grouped according to their location in the network; each group is called a cluster. In a cluster, one node acts as the cluster head, denoted by CH, and the other nodes become members of the cluster, denoted by CM. In particular, all nodes in a cluster take turns acting as the CH in different transmission rounds.

Data transmission can be separated into two parts, namely intra-cluster transmission and inter-cluster relay.

During the first part, the cluster member (CM) nodes transmit their metadata packets to the CH node. In this model, it takes one time slot to transmit a packet, the energy consumed is constant, and packet loss is not considered. After the CH node receives the packets, it aggregates them into a new packet and then enters the inter-cluster relay process.

During inter-cluster relay, the sink node starts receiving the packets of the CH nodes. In this process, the impact of packet size on transmission time is also ignored. A CH node located at the edge of the network only collects the packets sent by its CMs, while the other CH nodes also receive the packets sent by CH nodes in outer layers. After completing the above steps, each CH sends the received packets out together with the data packet it generated itself.

3.2 The clustering model

According to our previous research results, this paper uses a clustering method with different cluster radii [5]. The clusters of nodes close to the sink have larger radii, as shown in Fig. 1.

Fig. 1

Clustering method with different cluster radii

Nodes with small cluster radii (far from the sink node) can complete data collection within the cluster in less time. Meanwhile, the nodes with large cluster radii (near the sink) are still collecting data within the cluster. Through reasonable control, when a CH node with a large cluster radius completes its in-cluster collection, the data packet sent from the outer layer of the network just arrives at this CH node. In short, the in-cluster data transmission and the inter-cluster relay, originally performed in series, are made parallel, thereby greatly reducing delay and energy consumption.

For a network with radius R, clustering divides it into multiple layers. The cluster radii differ between layers and are denoted rh − 1, rh − 2, … , r1, r0 from the inside to the outside. They satisfy \( \partial \sum \limits_{i=0}^{h-1}{r}_i\ge R \), where \( \partial \) is a constant less than or equal to 1. By clustering, all the nodes can be organized into one tree, as shown in Fig. 2.

Fig. 2

Transform the network into a tree

In layer i (i = 0, 1, 2, … , h − 1), the degree of the CH node relative to the ordinary nodes is denoted by \( {d}_i^{CM} \), and \( {d}_i^{CH} \) denotes the degree of the CH node relative to other CH nodes. In particular, \( {d}_0^{CH}=0 \), and \( {d}_{h-1}^{CH} \) and \( {d}_{h-1}^{CM} \) represent the degrees of the sink node.

When the CH nodes on layer i begin to send packets, the cluster heads on layer i + 1 have just completed data collection within their clusters. To achieve this effect, the node degrees must meet the following requirements:

$$ {d}_1^{CM}={d}_0^{CM} $$
(1)
$$ {d}_i^{CM}={d}_{i-1}^{CM}+{d}_{i-1}^{CH} $$
(2)

For ease of calculation, in Li et al. [5], \( {d}_i^{CH}\left(i=0,1,2,\dots h-1\right) \) is also specified as a constant, denoted by dCH. Since the cluster radii of adjacent layers differ, the number of nodes contained in a cluster on layer i is not \( {d}_i^{CM} \) but is expressed as:

$$ {n}_i={d}_i^{CM}-\rho \left({d}_i^{CM}-{d}_{i-1}^{CM}\right) $$
(3)

Where i = 1, 2, … h − 1. In particular, when i = 0, \( {n}_0={d}_0^{CM} \).

Therefore, according to the density ρ, the radius of the cluster on the layer i (i = 1, 2, … h − 1) can be obtained by:

$$ {r}_i=\sqrt{\frac{n_i+1}{\rho \pi}} $$
(4)

In particular, when i = 0:

$$ {r}_0=\sqrt{\frac{d_0^{CM}+1}{\rho \pi}} $$
(5)
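For illustration, the following is a minimal numerical sketch of the layered clustering formulas (1)-(5), implemented exactly as stated above. The parameter values (d_0^CM, the constant d^CH, the density ρ, and the number of layers h) are illustrative assumptions, not values taken from the paper; layer 0 is the outermost layer, following the indexing above.

```python
import math

def cluster_layout(d0_cm, d_ch, rho, h):
    """Per-layer degrees d_i^CM, cluster sizes n_i, and radii r_i from Eqs. (1)-(5),
    assuming a constant inter-cluster degree d^CH and d_0^CH = 0."""
    d_cm = [d0_cm, d0_cm]                  # Eq. (1): d_1^CM = d_0^CM
    for i in range(2, h):                  # Eq. (2): d_i^CM = d_{i-1}^CM + d_{i-1}^CH
        d_cm.append(d_cm[i - 1] + d_ch)
    n = [d_cm[0]]                          # n_0 = d_0^CM
    for i in range(1, h):                  # Eq. (3)
        n.append(d_cm[i] - rho * (d_cm[i] - d_cm[i - 1]))
    r = [math.sqrt((n[i] + 1) / (rho * math.pi)) for i in range(h)]  # Eqs. (4)-(5)
    return d_cm, n, r

d_cm, n, r = cluster_layout(d0_cm=6, d_ch=3, rho=0.05, h=4)
print("per-layer cluster sizes:", [round(x, 2) for x in n])
print("per-layer cluster radii:", [round(x, 2) for x in r])
```

As expected from the model, the radii grow from the outermost layer toward the sink.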

3.3 The matrix filling model

Matrix filling is a hot topic in data acquisition and data recovery, following compressive sensing [69]. Simply put, the purpose of matrix filling is to recover the complete data as far as possible from a small number of collected packets. The premise for matrix filling to be feasible is that the matrix has low rank. Because of this feature, matrix filling can be well applied to WSNs.

Matrix filling is represented by an optimization problem:

$$ {\displaystyle \begin{array}{ll}\operatorname{minimize}& \operatorname{rank}(X)\\ {}s.t.& {X}_{ij}={M}_{ij}\kern0.5em \left(i,j\right)\in \Omega \end{array}} $$
(6)

where rank(X) represents the rank of the matrix X, Ω represents the set of indices of known elements, and Mij represents a known element. Therefore, the problem of matrix filling is to recover the missing entries of the matrix under the low-rank assumption.

As is well known, rank minimization is an NP-hard problem. Therefore, relevant scholars proposed using the nuclear norm in place of the rank of the matrix, converting the above problem into:

$$ {\displaystyle \begin{array}{ll}\operatorname{minimize}& {\left\Vert X\right\Vert}_{\ast}\\ {}s.t.& {X}_{ij}={M}_{ij}\kern0.5em \left(i,j\right)\in \Omega \end{array}} $$
(7)

Here, ‖X‖∗ represents the nuclear norm, which is equal to:

$$ {\left\Vert X\right\Vert}_{\ast }={\sum}_{k=1}^n{\sigma}_k(X) $$
(8)

Here, σk(X) represents the kth singular value (arranged in descending order).

According to the research in Candès and Recht [69], consider a matrix M of size n1 × n2 and rank r with m known elements. There exist constants C and c such that, when:

$$ m\ge C{n}^{5/4}r\log n $$
(9)

Here, n = max(n1, n2), and the matrix is recovered exactly with probability at least 1 − cn^{−3} from the known m elements; that is, the above optimization problem is solved correctly.

In particular, if r ≤ n^{1/5}, the probability of exactly recovering the matrix M reaches 1 − cn^{−3} as long as:

$$ m\ge C{n}^{6/5}r\log n $$
(10)

With a small number of known elements, matrix filling can perfectly recover a matrix of low rank. For example, if r = O(1) or r = O(log n), roughly n^{6/5} log n known elements suffice to complete the matrix M [69].
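The optimization in (7) is usually solved iteratively. The following is a minimal SoftImpute-style sketch in Python using only NumPy, illustrating that a low-rank matrix of sensor readings can be recovered from a subset of its entries. The synthetic matrix, sampling ratio, shrinkage parameter lam, and iteration count are illustrative assumptions, not parameters of the DEEDC scheme.

```python
import numpy as np

def soft_impute(M, mask, lam=1.0, iters=300):
    """Approximate the nuclear-norm problem in Eq. (7): fill the unknown entries
    with the current estimate, then shrink the singular values."""
    X = np.zeros_like(M)
    for _ in range(iters):
        Z = mask * M + (1 - mask) * X             # keep known entries, impute the rest
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = U @ np.diag(np.maximum(s - lam, 0)) @ Vt
    return X

rng = np.random.default_rng(0)
# A rank-2 "sensor reading" matrix: 30 nodes x 40 rounds, spatio-temporally correlated.
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random(A.shape) < 0.4).astype(float)  # only ~40% of entries are collected
X = soft_impute(A * mask, mask)
print("relative recovery error:", np.linalg.norm(X - A) / np.linalg.norm(A))
```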

3.4 The energy consumption model

The energy consumed is mainly due to [59] packet transmission, keeping listening, and switching state (awake/sleep). The operations considered are: a CM sends a metadata packet; a CH receives and sends a packet; a node keeps listening for one slot; and a node switches state once, as shown in Table 1.

Table 1 Energy consumption for different operations

3.5 Problem statement

For networks that randomly generate packets, the DEEDC scheme arranges the transmission time slots of nodes on demand. Compared with the general method, we hope to achieve the following optimizations by dynamically allocating time slots:

  1.

    Reduce the delay. Here, the delay is defined as the time at which a node completes its data collection. The delay of a node A is mainly determined by two aspects.

The first is the time to collect the packets inside the cluster. The second is the start time and duration of the packet transmissions from other CH nodes to A. Denoting these by TCM, TS, and TL respectively, the delay is:

$$ T=\left\{\begin{array}{c}\kern0.75em {T}_{CM}+{T}_L\kern2.75em if\ {T}_{CM}\ge {T}_S\\ {}\kern1.25em {T}_S+{T}_L\kern2.75em if\ {T}_{CM}<{T}_S\end{array}\right. $$
(11)

To reduce the delay:

$$ \min (T)=\min \left({T}_{CM},{T}_S\right)+\min \left({T}_L\right) $$
(12)

That is, the data collection in the cluster is completed as soon as possible, and the inter-cluster relay is completed as soon as possible.

  2.

    Reduce energy consumption. We mainly consider the energy consumed to transmit data packets, to keep listening, and to switch state, represented by ET, ELPL, and Eats respectively. Thus, the energy optimization is:

$$ \min (E)=\min \left({E}^T+{E}^{LPL}+{E}^{ats}\right) $$
(13)

To increase energy utilization, reducing the amount of transmitted data, the listening time, and the number of state switches are effective methods.

Therefore, the optimization goal can be summarized as:

$$ \left\{\begin{array}{c}\min (T)\\ {}\min (E)\end{array}\right. $$
(14)

4 The design of DEEDC scheme

In a common clustering routing protocol, the cluster head node reserves time for every node in the cluster to transfer data. For example, for a cluster with n nodes, the CH node allocates a time slot to each of them. In the corresponding time slots, the CM nodes send packets to the CH node. After n slots, the CH node has completed the collection of data within the cluster and sends the data packet generated by itself together with the received packets.

For networks in which nodes randomly generate packets, fixed time slots are obviously not appropriate. For example, if only m (m < n) nodes generate data packets but the CH node allocates n time slots, the CH node is awake in n − m time slots without doing anything meaningful, wasting energy.

For such cases, in order to suit dynamic-traffic wireless sensor networks, the DEEDC scheme arranges adaptive transmission times for CM nodes; that is, under the control of the CH node, time slots are arranged only for the CM nodes that need to transmit data.

Simply put, a CM node that needs to send a data packet wakes up, and the other CM nodes continue to sleep. In the first time slot, each awake CM node sends a request-to-send (RTS) message to the corresponding CH node. The RTS packet mainly contains the ID of the CM node. If the CH node receives a total of x RTS packets within the specified time, it schedules slots for the corresponding CM nodes in the order in which the RTS packets were received. In detail, the CM node IDs are taken from the received RTS packets, and all the IDs together with the allocated slot information are placed in one packet and broadcast. After receiving the broadcast, a CM node first finds the slot information corresponding to its own ID, then sets its re-awake time according to this information, and finally goes into the sleep state. The CM node wakes up in the assigned time slot, sends its packet, and then returns to the sleep state. In particular, if a CM node is scheduled to send data in the second slot, it skips the process of sleeping first and then waking up. The CH node arranges the time slots for the CM nodes in the first time slot. In the next x time slots, the CH node sequentially receives the packets from the corresponding CM nodes. After successfully receiving x packets, it organizes them into one large packet; if the CH node itself also has a data packet, it is included in the large packet. Through the above process, the data collection within the cluster is completed, followed by the relay process between the cluster heads.

In addition, as shown by the matrix filling model in Section 3.3, even if only part of the data in the network is collected, the unknown data can be recovered. To this end, we also add a restriction on the number of collected packets.

Assume that if x packets are collected per round, the data not collected in the cluster can be completely recovered. Since the nodes in the network generate packets irregularly, exactly x packets will not be generated every time: sometimes there may be more, sometimes less. Therefore, the strategy adopted by the DEEDC scheme is to make the average number of data packets collected per round greater than or equal to x. That is, a threshold y is found, representing the cap on the number of collected packets. Under this constraint, the workflows of the CH node and the CM nodes also need to be adjusted accordingly.

The CH node first determines whether it has generated a data packet. If so, y = y − 1; otherwise, y is unchanged. After the CH node completes the reception of the RTS packets, time slots are arranged for the first y corresponding CM nodes in order of arrival. For the other CM nodes that also sent RTS packets, the CH node sets their slot to − 1, indicating that the CH node does not intend to receive their data packets in this round. When a CM node receives the broadcast and finds that its slot is − 1, it directly discards the data packet it was going to send and goes to sleep.

Algorithm 1 and Algorithm 2 show the working processes of the CH node and the CM node, respectively, in a round of data transmission.

Algorithm 1 The working process of the CH node in one round of data transmission

The symbols appearing in the algorithms denote the following:

  • Slot_num: indicates the sequence number of the time slot. Before the start of each round, it will be reset to 0

  • Node_num: indicates the number of CMs inside the cluster that are ready to send packets

  • RTS: request-to-send, which is a request message sent by CM that wants to send a data packet

  • ID[ ]: record the ID of the CM node that is ready to send the packet

  • Node_id: ID of the CM node, obtained from the corresponding RTS packet

  • OP[ ]: the transmission slot assigned to each CM node. If equal to − 1, the CH node does not receive that node's data packet in this round

  • Max_nodenum: the maximum number of nodes from which the CH node will receive data packets in this round
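Since Algorithm 1 itself is shown as a figure, the following Python sketch reconstructs one round at the CH node from the symbol definitions above and the textual description; it is an illustrative approximation of the workflow, not the authors' exact pseudocode, and the function name, return values, and example inputs are assumptions.

```python
def ch_round(rts_packets, max_nodenum, ch_has_own_packet):
    """One data-collection round at a CH node.
    rts_packets: CM node IDs whose RTS arrived in the first slot, in order of arrival.
    max_nodenum: threshold y_i on the number of packets the CH will accept.
    Returns (slot schedule OP, number of packets aggregated into the large packet)."""
    if ch_has_own_packet:              # the CH's own packet counts toward the threshold
        max_nodenum -= 1
    OP = {}                            # node_id -> assigned slot, or -1 if rejected
    slot_num = 1                       # slot 0 was used for RTS reception and the broadcast
    for node_id in rts_packets:
        accepted = sum(1 for s in OP.values() if s != -1)
        if accepted < max_nodenum:
            OP[node_id] = slot_num
            slot_num += 1
        else:
            OP[node_id] = -1           # the packet will be dropped at the CM side
    # broadcast OP, then receive one packet per scheduled slot and aggregate them
    received = sum(1 for s in OP.values() if s != -1)
    aggregated = received + (1 if ch_has_own_packet else 0)
    return OP, aggregated

OP, agg = ch_round(rts_packets=[4, 1, 7, 3], max_nodenum=3, ch_has_own_packet=True)
print(OP, agg)   # {4: 1, 1: 2, 7: -1, 3: -1} 3
```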

We assume that there are eight CM nodes in a cluster, denoted CM0, CM1, …, CM7. The working processes of the CH node and the CMi nodes (i = 0, 1, 2, … , 7) are as follows.

First, it is necessary to clarify which CM nodes' packets the CH node will receive. As shown in Figs. 3 and 4, four of the eight CM nodes generate data packets, and all four send RTS messages to the CH. By matrix filling theory, all data can be recovered by receiving only three packets. The CH node therefore arranges transmission slots for the CM nodes corresponding to the first three RTS packets received; the fourth CM node is told to go directly into the sleep state until the next data transmission.

Fig. 3

Schematic diagram of the workflow of the CH node

Fig. 4

Schematic diagram of the workflow of the CM node

The working process of the CM nodes is as follows:

Algorithm 2 The working process of the CM node in one round of data transmission

Taking the above as an example, the working process of the CM node is shown in Fig. 4.

First, once a CM node wakes up (data has been collected), it reports to the CH node by sending an RTS packet. In the next few time slots, it works according to the instructions sent by the CH node. If the CH node does not intend to receive a packet from a certain CM node, for example CM4, that node receives an instruction to go directly into the sleep state and does not wake up again until the next time data is collected.

Other CM nodes, such as CM1, CM3, and CM7, wake up at their scheduled times for data packet transmission.
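A matching sketch of one round at a CM node (the behavior described for Algorithm 2), again reconstructed from the description; the returned state strings and the function name are illustrative.

```python
def cm_round(has_packet, assigned_slot):
    """One round at a CM node. assigned_slot is the value broadcast by the CH for
    this node's ID (-1 means the CH will not accept the packet this round)."""
    if not has_packet:
        return "sleep all round"                 # never wakes up
    # slot 0: wake up, send RTS, listen for the CH broadcast
    if assigned_slot == -1:
        return "discard packet, sleep"           # e.g. CM4 in Fig. 4
    if assigned_slot == 1:
        return "stay awake, send in slot 1"      # no extra sleep/wake transition
    return f"sleep, wake in slot {assigned_slot}, send, sleep"

for cm, slot in [("CM1", 1), ("CM3", 2), ("CM4", -1), ("CM7", 3)]:
    print(cm, "->", cm_round(True, slot))
```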

For the convenience of the following description, the related symbols used below are explained as follows:

According to the clustering model in Section 3.2, assume that a network with radius R is divided into h layers by clustering. From the outermost layer to the innermost, the cluster head nodes on each layer are denoted CH0, CH1, … , CHh − 1, and the member nodes in the clusters are denoted CM0, CM1, … , CMh − 1. In a cluster on layer i (i = 1, 2, … , h − 1), \( {n}_{CM_i} \) CMi nodes and \( {n}_{CH_{i-1}} \) CHi − 1 nodes transmit data packets to CHi.

Specifically, on layer 0, \( {n}_{CM_0} \) CM0 nodes and zero CH nodes transmit data packets to CH0.

\( {C}_i^j \) denotes the number of combinations (i choose j). For convenience, we introduce the symbol \( {A}_i^j \):

$$ {A}_i^j={C}_i^j\ast {p}^j\ast {\left(1-p\right)}^{i-j} $$
(15)

Here, p represents the probability that a node generates a data packet. \( {A}_i^j \) is the probability that exactly j of i nodes generate data packets while the other i − j nodes do not.

4.1 The maximum value of the collected packets

With the theoretical support of matrix filling, we can reduce the amount of collected data and thus reduce energy consumption. However, care must be taken; otherwise, the data cannot be fully recovered.

Theorem 1 There is a two-dimensional matrix A of size N × T, where N is the total number of nodes (excluding the sink node) and T is the number of rounds of data transmission. There are ni CM nodes in each cluster on layer i. Then, the number of packets that must be collected by CHi in each round is:

$$ {x}_i=\frac{\left({n}_i+1\right){\left(\max \left(N,T\right)\right)}^{6/5}}{N\ast T} $$
(16)

Proof According to Section 3.3, only (max(N, T))^{6/5} known elements are needed to recover the complete matrix. These known elements are drawn from the N × T positions of the matrix. Since there are ni + 1 nodes in a cluster, each cluster must acquire xi known elements on average. Therefore, the following equation holds:

$$ \frac{{\left(\max \left(N,T\right)\right)}^{6/5}}{N\ast T}=\frac{x_i}{n_i+1} $$
(17)

Rearranging the terms yields the result in Theorem 1.

In practice, the number of packets collected by the CH in a round is random. We therefore specify a threshold. Under the control of this threshold, the average number of packets collected per round is exactly xi (after rounding down), which satisfies the requirement for restoring the complete matrix in matrix filling theory.
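As a quick numerical illustration of Eq. (16), evaluated exactly as stated; the values of N, T, and n_i below are illustrative assumptions, not parameters from the paper.

```python
def packets_per_round(n_i, N, T):
    """Eq. (16): average number of packets CH_i must collect per round so that
    the N x T data matrix can be recovered by matrix filling."""
    return (n_i + 1) * max(N, T) ** (6 / 5) / (N * T)

# Illustrative values: 500 nodes, 10 rounds, a cluster with 9 CM nodes (n_i = 9)
print(round(packets_per_round(n_i=9, N=500, T=10), 2))   # roughly 3.5 packets per round
```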

Theorem 2 Let yi denote the threshold on the number of packets collected by CHi (i = 0, 1, … , h − 1); then:

$$ {\displaystyle \begin{array}{c}{y}_i=\min \left({j}_i\right)\\ {}s.t.\sum \limits_{j=0}^{j_i}j\ast {P}_{j_i}\ge {x}_i\end{array}} $$
(18)

where ji indicates that a total of j data packets are generated in a cluster on layer i, and \( {P}_{j_i} \) represents the corresponding probability.

Proof The probability that a node generates a packet is p (0 < p < 1). For a cluster on layer i, at most ni + 1 nodes may generate a data packet, so the range of ji is 0 ≤ ji ≤ ni + 1. In other words, at most ni + 1 packets (all nodes in the cluster generate a packet) and at least zero packets (no node generates a packet) are generated in the cluster. The corresponding probability is:

$$ {P}_{j_i}={\mathrm{A}}_{n_i+1}^j $$
(19)

If the yi packets all come from other nodes in the cluster (non-CH), then a total of 1 + yi + 1 slots are needed: the CM nodes that want to transfer data send RTS messages, the CH node allocates time slots to them by broadcasting, in the next yi time slots the intra-cluster nodes send their packets in turn, and finally the CH organizes the received data packets and its own generated data packet (if any) into one packet and sends it out. If the CH node itself generates a data packet, 1 + (yi − 1) + 1 time slots are needed.

In particular, the last time slot is counted in the next layer (because in that time slot, the transmission of this layer and the reception of the next layer take place together, so it need not be counted twice).

The probability of including the CH node among the yi nodes that generate the data packet is:

$$ {P}_i^{i- CH}=\frac{C_{n_i}^{y_i-1}}{C_{n_i+1}^{y_i}} $$
(20)

The probability of not including the CH is:

$$ {P}_i^{ni- CH}=\frac{C_{n_i}^{y_i}}{C_{n_i+1}^{y_i}} $$
(21)

Therefore, the average time for the CH node to complete data collection in the cluster is:

$$ {t}_i={P}_i^{i- CH}\ast {y}_i+{P}_i^{ni- CH}\ast \left(1+{y}_i\right) $$
(22)

In particular, if ti ≥ ni, we do not use the above method to allocate time slots for nodes on layer i but instead use fixed time slots. Even if the corresponding intra-cluster node does not transmit a data packet in its time slot, other nodes cannot occupy that slot.
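The threshold in Eq. (18) can be found by a direct search over the binomial distribution of Eq. (19). The sketch below implements the condition exactly as stated; the values of n_i, p, and x_i (the latter taken from the illustration after Theorem 1) are assumptions.

```python
from math import comb

def threshold_y(n_i, p, x_i):
    """Smallest y such that sum_{j=0}^{y} j * P_j >= x_i (Eq. (18)), where P_j is
    the binomial probability that j of the n_i + 1 cluster nodes generate a packet
    (Eq. (19))."""
    total_nodes = n_i + 1
    acc = 0.0
    for y in range(total_nodes + 1):
        acc += y * comb(total_nodes, y) * p**y * (1 - p) ** (total_nodes - y)
        if acc >= x_i:
            return y
    return total_nodes          # even collecting everything only just reaches x_i

# Illustrative values: 9 CM nodes plus the CH, packet probability 0.5,
# and x_i ~ 3.5 packets required per round by Theorem 1.
print(threshold_y(n_i=9, p=0.5, x_i=3.5))   # prints 6
```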

4.2 Number of transmitted packets

The energy consumed to transmit packets mainly covers: the CM nodes and the CH node sending data packets, and the CH node receiving data packets. In addition, the CM nodes also send RTS packets, and the CH node sends a broadcast. Since these packets are much smaller than the former three, in both size and number, they are not considered in this paper.

Theorem 3 Let \( {N}_{CM_i}^s \), \( {N}_{CH_i}^s \), and \( {N}_{CH_i}^r \) denote the number of packets sent by CMi, and sent and received by CHi (i = 0, 1, … , h − 1), respectively. Then:

$$ {\displaystyle \begin{array}{c}{N}_{CM_i}^s={P}_i^{ni- CH}\ast \left[\sum \limits_{k=0}^{y_i-1}k\ast {A}_{n_i}^k+{y}_i\ast \left(1-\sum \limits_{k=0}^{y_i-1}{A}_{n_i}^k\right)\right]\\ {}+{P}_i^{i- CH}\ast \left[\sum \limits_{k=0}^{y_i-1}\left(k-1\right)\ast {A}_{n_i}^{k-1}+\left({y}_i-1\right)\ast \left(1-\sum \limits_{k=0}^{y_i-1}{A}_{n_i}^{k-1}\right)\right]\end{array}} $$
(23)
$$ {N}_{CH_i}^s=\left(1-p\right)\ast {N}_{CH_i}^r+p\ast \left({N}_{CH_i}^r+1\right) $$
(24)
$$ {N}_{CH_i}^r={N}_{CM_i}^s+{N}_{CH_{i-1}}^s $$
(25)

Especially:

$$ {N}_{CH_0}^r={N}_{CM_0}^s $$
(26)

Proof The number of CMi nodes that send metadata packets to CHi equals the number of transmitted metadata packets. According to matrix filling theory, if the number of packets generated in the cluster exceeds yi, the CH node does not need to collect them all. If the CH node generates a data packet, at most yi − 1 data packets need to be collected from the cluster; otherwise, at most yi are collected. Thus, \( {N}_{CM_i}^s \) is equal to:

$$ {\displaystyle \begin{array}{c}{N}_{CM_i}^s={P}_i^{ni- CH}\ast \left[\sum \limits_{k=0}^{y_i-1}k\ast {A}_{n_i}^k+{y}_i\ast \left(1-\sum \limits_{k=0}^{y_i-1}{A}_{n_i}^k\right)\right]\\ {}+{P}_i^{i- CH}\ast \left[\sum \limits_{k=0}^{y_i-1}\left(k-1\right)\ast {A}_{n_i}^{k-1}+\left({y}_i-1\right)\ast \left(1-\sum \limits_{k=0}^{y_i-1}{A}_{n_i}^{k-1}\right)\right]\end{array}} $$
(27)

If the CH node does not generate a packet, it sends the same number of packets as it received; otherwise, it needs to send one more. That is:

$$ {N}_{CH_i}^s=\left(1-p\right)\ast {N}_{CH_i}^r+p\ast \left({N}_{CH_i}^r+1\right) $$
(28)

The data packets received by CHi come from two parts, CMi and CHi − 1. That is:

$$ {N}_{CH_i}^r={N}_{CM_i}^s+{N}_{CH_{i-1}}^s $$
(29)

In particular, CH0 only needs to receive data from CM0:

$$ {N}_{CH_0}^r={N}_{CM_0}^s $$
(30)
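The recursion (24)-(26) propagates expected packet counts from the outermost layer toward the sink. The sketch below takes the per-layer CM traffic \( {N}_{CM_i}^s \) as given (e.g., computed from Eq. (23)) and uses an illustrative packet probability p; the numeric inputs are assumptions.

```python
def ch_packet_counts(n_cm_s, p):
    """Given N^s_{CM_i} for layers 0..h-1 (layer 0 is the outermost), return the lists
    (N^r_{CH_i}, N^s_{CH_i}) following Eqs. (24)-(26)."""
    h = len(n_cm_s)
    n_ch_r, n_ch_s = [0.0] * h, [0.0] * h
    for i in range(h):
        # Eqs. (25)-(26): CH_0 receives only CM traffic; layers closer to the sink
        # also relay the traffic sent by CH_{i-1}
        n_ch_r[i] = n_cm_s[i] + (n_ch_s[i - 1] if i > 0 else 0.0)
        # Eq. (24): with probability p the CH adds a packet of its own
        n_ch_s[i] = (1 - p) * n_ch_r[i] + p * (n_ch_r[i] + 1)
    return n_ch_r, n_ch_s

# Illustrative expected CM traffic for a 4-layer network and p = 0.5
recv, sent = ch_packet_counts([2.8, 3.1, 3.4, 3.9], p=0.5)
print([round(x, 2) for x in sent])   # [3.3, 6.9, 10.8, 15.2]
```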

4.3 Delay

In the network model described above, after the CH node receives the data packets of the nodes in its cluster, it may take a while before it forwards them. For example, the CH node goes to sleep after completing the data collection in the cluster and switches to the awake state only when the CH nodes outside it are ready to send packets. After the inter-cluster packet relay is completed, the packet is sent to the next hop. In this case, the delay under the usual definition is not accurate enough.

Therefore, we redefine the delay as the time at which a (CH) node completes all of its collection work. For the sink node, it is the time when it has received all the packets sent from the network.

Theorem 4 Assuming that the time slot counter is cleared before a new round starts, the time at which the CHi node (i = 0, 1, … , h − 1) completes all collection operations (including collecting the data packets sent by CMi and CHi − 1) can be expressed as:

$$ {T}_i=\mathit{\max}\left({T}_{i-1},{T}_i^{CM}\right)+{d}^{CH}\ast {N}_{CH_{i-1}}^s $$
(31)

where dCH represents the degree between the cluster heads, \( {N}_{CH_{i-1}}^s \) is the amount of data sent by CHi − 1, and \( {T}_i^{CM} \) is the time at which CHi completes the data collection within the cluster (yi is the threshold on the packets collected by CHi), with:

$$ {T}_i^{CM}=1+{y}_i $$
(32)

Proof The packet collection process in the network can be summarized as follows. In the first time slot, a node that wants to send data transmits an RTS to the corresponding CH node and obtains its time slot information from the broadcast packet sent back. If the CH node decides not to receive the data of node A, then A directly discards the collected data packet and goes to sleep. Otherwise, to reduce energy consumption, A first switches to the sleep state until its assigned slot arrives.

For CHi, it may receive k packets from its CMi nodes, where k = 0, 1, 2, … , yi, and k = yi is possible only if CHi does not generate a data packet. Therefore, the average number of slots in which CHi completes the data collection can be expressed as:

$$ {\displaystyle \begin{array}{c}{T}_i^{CM}=1+\sum \limits_{k=0}^{y_i-1}k\ast {A}_{n_i}^k\\ {}+{y}_i\ast {P}_i^{ni- CH}\ast \left(1-\sum \limits_{k=0}^{y_i-1}{A}_{n_i}^k\right)\end{array}} $$
(33)

These average time slots do not represent the actual situation well. We fix the number of slots in which CHi collects data to yi, which ensures that every CHi can complete the data collection as required. If CHi completes its work before time slot yi, it goes to sleep. Therefore, the time for CHi to complete the data collection within the cluster can be simplified to:

$$ {T}_i^{CM}=1+{y}_i $$
(34)

After the CH node collects the data packets inside its cluster, it also needs to receive the packets from the CH nodes in the outer layer. Since CH0 does not need to receive packets sent by other CH nodes:

$$ {T}_0={T}_0^{CM}+0 $$
(35)

For the other CHi nodes (i = 1, 2, … , h − 1), the time to start receiving the packets sent by CHi − 1 depends on the time when CHi completes the data collection within its cluster (\( {T}_i^{CM} \)) and the time when CHi − 1 completes all its data collection work (Ti − 1). That is, only after both parts of the work are finished can the CH node start the inter-cluster relay process.

The time required for CHi to receive the packets sent by CHi − 1 depends on the number of CHi − 1 nodes (dCH) and the amount of data sent by each CHi − 1 node (\( {N}_{CH_{i-1}}^s \)). Thus, the time for CHi to complete all collection work (including collecting the data packets of the CMi nodes and the CHi − 1 nodes) can be expressed as:

$$ {T}_i=\max \left({T}_{i-1},{T}_i^{CM}\right)+{d}^{CH}\ast {N}_{CH_{i-1}}^s $$
(36)

It should be noted that when CHi − 1 sends a packet to CHi, we do not require the RTS packet to be sent first to allocate a time slot. There are two main reasons:

First, the packet sent by CHi − 1 is a summary of multiple previous packets and must be accepted.

Second, the probability that a CHi − 1 node has a data packet to send is very large; in particular, as the number of layers increases, this probability approaches one.

In this case, applying the on-demand slot allocation used for data collection within clusters to the inter-cluster relay would be counterproductive, increasing energy consumption and latency. Therefore, the best method is to arrange a fixed time slot directly for each CHi − 1.
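The delay recursion of Theorem 4 can be evaluated layer by layer. The sketch below implements Eqs. (31), (32), and (35) as stated; the thresholds y_i, the CH traffic \( {N}_{CH_{i-1}}^s \) (taken from the earlier packet-count sketch), and the inter-cluster degree d^CH are illustrative assumptions.

```python
def collection_delays(y, n_ch_s, d_ch):
    """Eqs. (31)-(32): T_i = max(T_{i-1}, 1 + y_i) + d^CH * N^s_{CH_{i-1}},
    with T_0 = 1 + y_0 since layer 0 receives no inter-cluster traffic (Eq. (35))."""
    T = []
    for i, y_i in enumerate(y):
        t_cm = 1 + y_i                              # slots needed for in-cluster collection
        if i == 0:
            T.append(t_cm)
        else:
            T.append(max(T[i - 1], t_cm) + d_ch * n_ch_s[i - 1])
    return T

# Thresholds y_i and CH traffic from the previous sketches, d^CH = 3
print([round(t, 1) for t in collection_delays(y=[6, 6, 7, 7],
                                              n_ch_s=[3.3, 6.9, 10.8, 15.2],
                                              d_ch=3)])   # [7, 16.9, 37.6, 70.0]
```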

4.4 Energy consumption

In our research, the energy consumption mainly comes from three parts: packet transmission, keeping listening, and switching state. Other energy consumption can be ignored.

Theorem 5 For a node on layer i (i = 0, 1, … , h − 1), the energy consumed mainly comprises the energy consumed to transfer packets (\( {E}_i^T \)), to keep listening (\( {E}_i^{LPL} \)), and to switch state (\( {E}_i^{ats} \)):

$$ {E}_i={E}_i^T+{E}_i^{LPL}+{E}_i^{ats} $$
(37)

Here:

$$ {E}_i^T=\frac{N_{CM_i}^s\ast {E}_{s, CM}+{N}_{CH_i}^s\ast {E}_{s, CH}+{N}_{CH_i}^r\ast {E}_r}{n_i+1} $$
(38)
$$ {E}_i^{LPL}=\frac{\left({N}_{CM_i}^{lpl}+1\right)\ast {E}_l}{n_i+1} $$
(39)
$$ {E}_i^{ats}=\frac{\left({N}_{CM_i}^{sta}\ast {n}_i+{N}_{CH_i}^{sta}\right)\ast {E}_{ats}}{n_i+1} $$
(40)

\( {N}_{CM_i}^{lpl} \) indicates the time the node keeps listening:

$$ {N}_{CM_i}^{lpl}=\sum \limits_{k=0}^{n_i}k\ast {A}_{n_i}^k $$
(41)

\( {N}_{CM_i}^{sta} \) and \( {N}_{CH_i}^{sta} \) indicate how many times the CM node and the CH node switch state:

$$ {N}_{CM_i}^{sta}=p\ast \sum \limits_{k=1}^{n_i}{A}_{n_i}^k\ast \left(1\ast \frac{1}{k}+2\ast \frac{k-1}{k}\right) $$
(42)
$$ {N}_{CH_i}^{sta}=\left\{\begin{array}{c}\ 1\kern3.5em \mathrm{when}\kern1em {T}_{i-1}\le {y}_i\ \\ {}2\kern3.5em \mathrm{when}\kern1em {T}_{i-1}>{y}_i\end{array}\right. $$
(43)

Proof According to Theorem 3 in Section 4.2, the data packets transmitted by the nodes fall into three categories. On layer i, the number of packets sent by the CM nodes is \( {N}_{CM_i}^s \), and the numbers sent and received by the CH node are \( {N}_{CH_i}^s \) and \( {N}_{CH_i}^r \), respectively. So for a node on layer i, the energy consumed for transmission is:

$$ {E}_i^T=\frac{N_{CM_i}^s\ast {E}_{s, CM}+{N}_{CH_i}^s\ast {E}_{s, CH}+{N}_{CH_i}^r\ast {E}_r}{n_i+1} $$
(44)

Because listening and state switching influence each other, they are discussed together.

By comparison, the listening process of a CM node is much simpler: after sending the RTS packet, it keeps listening until the broadcast is received. Therefore, the average number of CM nodes that keep listening is:

$$ {N}_{CM_i}^{lpl}=\sum \limits_{k=0}^{n_i}k\ast {A}_{n_i}^k $$
(45)

The initial state of the nodes in these clusters is the sleep state. The nodes with data to send wake up in the first time slot (to send their RTS packets), then return to sleep until their allocated time slots arrive, and go back to sleep again once their data have been transferred. In particular, if the transmission slot allocated to a CM node happens to be the second time slot, the node simply remains awake through the second slot.

In summary, the number of state switches has the following possibilities: (1) zero, when there is no data to transmit; (2) one, when the assigned time slot is the second slot; and (3) two, for nodes assigned to any other slot. Therefore, the average number of times a node switches states is:

$$ {N}_{CM_i}^{sta}=p\ast \sum \limits_{k=1}^{n_i}{A}_{n_i}^k\ast \left(1\ast \frac{1}{k}+2\ast \frac{k-1}{k}\right) $$
(46)

Where p is the probability of generating a packet.

The CHi node keeps listening for a duration of one slot (used to receive RTS packets in the first slot), which is:

$$ {N}_{CH_i}^{lpl}=1 $$
(47)

The CHi node wakes up at the beginning of each round of data transmission, after which its state switching has two cases. If CHi completes its in-cluster collection before the CHi − 1 nodes have finished their data collection, CHi first switches to the sleep state and wakes up again only when CHi − 1 starts sending data; in this case, the state is switched twice. Otherwise, CHi directly continues to receive the packets from CHi − 1 and only needs to switch states once.

$$ {N}_{CH_i}^{sta}=\left\{\begin{array}{c}\ 1\kern3.5em \mathrm{when}\kern1em {T}_{i-1}\le {y}_i\ \\ {}2\kern3.5em \mathrm{when}\kern1em {T}_{i-1}>{y}_i\end{array}\right. $$
(48)

Where Ti − 1 represents the time when the CHi − 1 node (i = 1, 2, … , h − 1) completes the collection of all its data, and yi represents the time when the CHi node completes the data collection within its cluster.

Due to listening, the average energy consumed is expressed as follows:

$$ {E}_i^{LPL}=\frac{\left({N}_{CM_i}^{lpl}+{N}_{CH_i}^{lpl}\right)\ast {E}_l}{n_i+1}=\frac{\left({N}_{CM_i}^{lpl}+1\right)\ast {E}_l}{n_i+1} $$
(49)

Similarly, due to the switching state, the average energy consumed is:

$$ {E}_i^{ats}=\frac{\left({N}_{CM_i}^{sta}\ast {n}_i+{N}_{CH_i}^{sta}\right)\ast {E}_{ats}}{n_i+1} $$
(50)

Thus, by adding the above three energies, the total energy consumed on layer i can be obtained.

$$ {E}_i={E}_i^T+{E}_i^{LPL}+{E}_i^{ats} $$
(51)
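
The per-layer average energy of Eqs. (37)–(43) can be evaluated numerically. The sketch below assumes that \( {A}_{n_i}^k \) is the probability that exactly k of the ni cluster members generate a packet (modelled here as binomial with per-node probability p), and that the packet counts of Theorem 3 and the energy constants are given; every number in the example call is a placeholder.

```python
from math import comb

def a_prob(n, k, p):
    # Assumed form of A_n^k: probability that exactly k of n members generate a packet.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def layer_energy(n_i, p, y_i, t_prev, n_s_cm, n_s_ch, n_r_ch,
                 e_s_cm, e_s_ch, e_r, e_l, e_ats):
    """Average per-node energy on layer i under DEEDC, Eqs. (37)-(43)."""
    # Eq. (38): transmission energy, averaged over the n_i + 1 nodes of the cluster.
    e_t = (n_s_cm * e_s_cm + n_s_ch * e_s_ch + n_r_ch * e_r) / (n_i + 1)

    # Eq. (41): expected listening of the CM nodes; the CH listens for one slot.
    n_lpl_cm = sum(k * a_prob(n_i, k, p) for k in range(n_i + 1))
    e_lpl = (n_lpl_cm + 1) * e_l / (n_i + 1)                      # Eq. (39)

    # Eq. (42): expected number of state switches of a CM node.
    n_sta_cm = p * sum(a_prob(n_i, k, p) * (1 * (1 / k) + 2 * (k - 1) / k)
                       for k in range(1, n_i + 1))
    # Eq. (43): the CH switches once if the upstream CHs finish first, twice otherwise.
    n_sta_ch = 1 if t_prev <= y_i else 2
    e_sta = (n_sta_cm * n_i + n_sta_ch) * e_ats / (n_i + 1)       # Eq. (40)

    return e_t + e_lpl + e_sta                                    # Eq. (37)

# Illustrative call with placeholder values.
print(layer_energy(n_i=10, p=0.3, y_i=5, t_prev=4,
                   n_s_cm=3.0, n_s_ch=4.5, n_r_ch=7.5,
                   e_s_cm=1.0, e_s_ch=1.2, e_r=0.8, e_l=0.1, e_ats=0.05))
```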

5 The experimental results and analysis

In this section, we analyze and compare the proposed strategy in terms of delay and energy consumption. Before that, we first complete some preliminary work.

In the previous section, the methods for calculating the energy consumed and the delay incurred by data transfer were given in detail. For comparison, we now give the corresponding performance indicators when the DEEDC scheme is not used.

Theorem 6 In the case where all generated data packets are received, \( {N}_{CM_i}^s \), \( {N}_{CH_i}^s \), and \( {N}_{CH_i}^r \) (i = 0, 1, …, h − 1) can be expressed as:

$$ {N}_{CM_i}^{s^{\prime }}=\sum \limits_{k=0}^{n_i}k\ast {A}_{n_i}^k $$
(52)
$$ {N}_{CH_i}^{s^{\prime }}=\left(1-p\right)\ast {N}_{CH_i}^{r^{\prime }}+p\ast \left({N}_{CH_i}^{r^{\prime }}+1\right) $$
(53)
$$ {N}_{CH_i}^{r^{\prime }}={N}_{CM_i}^{s^{\prime }}+{N}_{CH_{i-1}}^{s^{\prime }} $$
(54)

When i = 0:

$$ {N}_{CH_0}^{r^{\prime }}={N}_{CM_0}^{s^{\prime }} $$
(55)

Proof As before, there are ni CM nodes in a cluster on layer i, and they generate k packets (k ∈ [0, ni]). Multiplying each value of k by the corresponding probability and summing gives the weighted average number of packets generated by the CMi nodes:

$$ {N}_{CM_i}^{s^{\prime }}=\sum \limits_{k=0}^{n_i}k\ast {A}_{n_i}^k $$
(56)

If the CH node itself generates a data packet (with probability p), then the number of packets it sends is one more than the number it receives; otherwise, it sends exactly the packets it receives:

$$ {N}_{CH_i}^{s^{\prime }}=\left(1-p\right)\ast {N}_{CH_i}^{r^{\prime }}+p\ast \left({N}_{CH_i}^{r^{\prime }}+1\right) $$
(57)

The data packets received by the CHi node come partly from the CMi nodes and partly from the CHi − 1 nodes:

$$ {N}_{CH_i}^{r^{\prime }}={N}_{CM_i}^{s^{\prime }}+{N}_{CH_{i-1}}^{s^{\prime }} $$
(58)

In addition, since the CH0 node does not need to receive packets from other CH nodes, there is:

$$ {N}_{CH_0}^{r^{\prime }}={N}_{CM_0}^{s^{\prime }} $$
(59)
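
A minimal sketch of the recursion in Eqs. (52)–(55) is given below, again assuming the binomial form of \( {A}_{n_i}^k \) (in which case the weighted average of Eq. (52) reduces to ni·p); the cluster sizes in the example call are placeholders.

```python
from math import comb

def expected_generated(n, p):
    # Eq. (52)/(56): weighted average number of packets generated by n CM nodes,
    # assuming each generates one packet independently with probability p,
    # so this sum is simply the binomial mean n * p.
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

def packet_counts_without_deedc(n_layers, p):
    """Per-layer packet counts N'^s_{CM_i}, N'^r_{CH_i}, N'^s_{CH_i} (Eqs. 52-55).
       n_layers[i] is the number of CM nodes in a cluster on layer i."""
    n_s_cm, n_r_ch, n_s_ch = [], [], []
    for i, n_i in enumerate(n_layers):
        s_cm = expected_generated(n_i, p)                    # Eq. (52)
        r_ch = s_cm if i == 0 else s_cm + n_s_ch[i - 1]      # Eqs. (54)-(55)
        s_ch = (1 - p) * r_ch + p * (r_ch + 1)               # Eq. (53)
        n_s_cm.append(s_cm); n_r_ch.append(r_ch); n_s_ch.append(s_ch)
    return n_s_cm, n_r_ch, n_s_ch

# Illustrative cluster sizes per layer (placeholders only).
print(packet_counts_without_deedc(n_layers=[12, 10, 8, 6], p=0.3))
```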

Theorem 7 In the case where all generated data packets are received, the time for the CHi node (i = 0, 1, …, h − 1) to complete all data collection work is:

$$ {T}_i^{\prime }=\mathit{\max}\left({T}_{i-1}^{\prime },{T}_i^{CM^{\prime }}\right)+{d}^{CH}\ast {N}_{CH_{i-1}}^{s\prime } $$
(60)

Here, \( {T}_i^{CM^{\prime }} \) is the time to complete the collection of data packets in the cluster.

Proof The cluster head node schedules a fixed time slot for every CM node, so there is no need to spend an extra time slot arranging the schedule. That is, the time at which the cluster head completes the in-cluster data collection is:

$$ {T}_i^{CM^{\prime }}={n}_i $$
(61)

The node CHi starts to collect the data packets of CHi − 1 only after CHi has completed its in-cluster data collection and the CHi − 1 nodes have finished their own collection work. That is, the time slot in which the CHi node finishes collecting all data is:

$$ {T}_i^{\prime }=\mathit{\max}\left({T}_{i-1}^{\prime },{T}_i^{CM^{\prime }}\right)+{d}^{CH}\ast {N}_{CH_{i-1}}^{s\prime } $$
(62)

In particular, when i = 0, CH0 only needs to receive data in the cluster:

$$ {T}_0^{\prime }={T}_0^{CM^{\prime }}+0={n}_0 $$
(63)
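
For comparison with the DEEDC recursion of Eq. (36), the following sketch evaluates Eqs. (61)–(63); the primed packet counts \( {N}_{CH_i}^{s^{\prime }} \) would come from Theorem 6, and the example values are placeholders.

```python
def completion_times_without_deedc(n_layers, n_s_ch_prime, d_ch):
    """Eqs. (61)-(63): per-layer completion times when every CM node has a
       fixed slot (no DEEDC). n_layers[i] is the cluster size n_i and
       n_s_ch_prime[i] is N'^s_{CH_i} from Theorem 6."""
    h = len(n_layers)
    t = [0.0] * h
    t[0] = n_layers[0]                          # Eq. (63): T'_0 = n_0
    for i in range(1, h):
        t_cm = n_layers[i]                      # Eq. (61): T'^CM_i = n_i
        t[i] = max(t[i - 1], t_cm) + d_ch * n_s_ch_prime[i - 1]   # Eq. (62)
    return t

# Placeholder values; N'^s_{CH_i} would be computed from Theorem 6.
print(completion_times_without_deedc(n_layers=[12, 10, 8, 6],
                                     n_s_ch_prime=[4.6, 8.3, 11.0, 12.9],
                                     d_ch=3))
```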

Theorem 8 In the case where all generated data packets are received, the average energy consumption of the nodes on layer i (i = 0, 1, …, h − 1) can be expressed as:

$$ {E_i}^{\prime }={E}_i^{T^{\prime }}+{E}_i^{LPL^{\prime }}+{E}_i^{sta^{\prime }} $$
(64)

Here, \( {E}_i^{T^{\prime }} \), \( {E}_i^{LPL^{\prime }} \), and \( {E}_i^{sta^{\prime }} \) denote the energy consumed to transmit data packets, keep listening, and switch states without the DEEDC scheme, respectively. Their values are:

$$ {E}_i^{T^{\prime }}=\frac{N_{CM_i}^{s^{\prime }}\ast {E}_{s, CM}+{N}_{CH_i}^{s^{\prime }}\ast {E}_{s, CH}+{N}_{CH_i}^{r^{\prime }}\ast {E}_r}{n_i+1} $$
(65)
$$ {E}_i^{LPL^{\prime }}=\frac{N_{CH_i}^{lpl^{\prime }}\ast {E}_l}{n_i+1} $$
(66)
$$ {E}_i^{sta^{\prime }}=\frac{\left({N}_{CM_i}^{sta^{\prime }}\ast {n}_i+{N}_{CH_i}^{sta^{\prime }}\right)\ast {E}_{ats}}{n_i+1} $$
(67)

Proof Similar to Theorem 5, the average energy consumption of a node due to packet transmission can be expressed as:

$$ {E}_i^{T^{\prime }}=\frac{N_{CM_i}^{s^{\prime }}\ast {E}_{s, CM}+{N}_{CH_i}^{s^{\prime }}\ast {E}_{s, CH}+{N}_{CH_i}^{r^{\prime }}\ast {E}_r}{n_i+1} $$
(68)

Since every CM node is assigned a slot, it wakes up and sends data only in its corresponding time slot (and does not wake up at all if it has no packet to send), so its listening time is zero.

In a given time slot, either the corresponding CM node sends a packet and the CH node receives it, or the CH node keeps listening. That is, if k nodes send packets to the CH, the CH has to keep listening during the remaining ni − k slots. Thus, the average listening time of the CH node is:

$$ {N}_{CH_i}^{lpl^{\prime }}=\sum \limits_{k=0}^{n_i}\left({n}_i-k\right)\ast {A}_{n_i}^k $$
(69)

Thus, the average energy consumed due to the listening is (i = 0, 1, …h − 1):

$$ {E}_i^{LPL^{\prime }}=\frac{N_{CH_i}^{lpl^{\prime }}\ast {E}_l}{n_i+1} $$
(70)

The state switching of a CM node is relatively simple. If no data is sensed, the node does not need to wake up, and the number of state switches is zero. If there is data to send, the node wakes up in its corresponding time slot and returns to sleep after the transmission is completed, so the number of state switches is one. Therefore, the average number of times a CM node switches states is:

$$ {N}_{CM_i}^{sta^{\prime }}=p\ast \sum \limits_{k=1}^{n_i}{A}_{n_i}^k $$
(71)

If the in-cluster data collection and the inter-cluster relay are contiguous, the CH node only needs to switch states once; otherwise, it switches twice. The number of times the CH node switches states is as follows:

$$ {N}_{CH_i}^{sta^{\prime }}=\left\{\begin{array}{c}\ 1\kern3.5em \mathrm{when}\kern1em {T}_{i-1}^{\prime}\le {n}_i\ \\ {}2\kern3.5em \mathrm{when}\kern1em {T}_{i-1}^{\prime }>{n}_i\end{array}\right. $$
(72)

Where \( {T}_{i-1}^{\prime } \) represents the time when the node CHi − 1 completes all data collection without the DEEDC scheme, and ni represents the time when the node CHi completes the data collection within its cluster (Eq. (61)).

Therefore, the average energy consumption due to switching states is:

$$ {E}_i^{sta^{\prime }}=\frac{\left({N}_{CM_i}^{sta^{\prime }}\ast {n}_i+{N}_{CH_i}^{sta^{\prime }}\right)\ast {E}_{ats}}{n_i+1} $$
(73)

In summary, the average energy consumption on the layer i (i = 0, 1, …h − 1) is:

$$ {E_i}^{\prime }={E}_i^{T^{\prime }}+{E}_i^{LPL^{\prime }}+{E}_i^{sta^{\prime }} $$
(74)
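
Analogously to Theorem 5, the per-layer average energy without the DEEDC scheme (Eqs. (64)–(73)) can be evaluated as sketched below; the binomial form of \( {A}_{n_i}^k \) is again an assumption, and the example values are placeholders.

```python
from math import comb

def a_prob(n, k, p):
    # Assumed form of A_n^k: probability that exactly k of n members generate a packet.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def layer_energy_without_deedc(n_i, p, t_prev, n_s_cm, n_s_ch, n_r_ch,
                               e_s_cm, e_s_ch, e_r, e_l, e_ats):
    """Average per-node energy on layer i without DEEDC, Eqs. (64)-(73)."""
    # Eq. (65): transmission energy with the primed packet counts of Theorem 6.
    e_t = (n_s_cm * e_s_cm + n_s_ch * e_s_ch + n_r_ch * e_r) / (n_i + 1)

    # Eq. (69): the CH keeps listening in the n_i - k slots whose owners stay silent.
    n_lpl_ch = sum((n_i - k) * a_prob(n_i, k, p) for k in range(n_i + 1))
    e_lpl = n_lpl_ch * e_l / (n_i + 1)                          # Eq. (66)

    # Eq. (71): a CM node switches state once iff it has a packet to send.
    n_sta_cm = p * sum(a_prob(n_i, k, p) for k in range(1, n_i + 1))
    # Eq. (72): the CH switches once if collection and relay are contiguous.
    n_sta_ch = 1 if t_prev <= n_i else 2
    e_sta = (n_sta_cm * n_i + n_sta_ch) * e_ats / (n_i + 1)     # Eq. (67)

    return e_t + e_lpl + e_sta                                  # Eq. (64)

# Illustrative call with placeholder values.
print(layer_energy_without_deedc(n_i=10, p=0.3, t_prev=12,
                                 n_s_cm=3.0, n_s_ch=8.0, n_r_ch=7.0,
                                 e_s_cm=1.0, e_s_ch=1.2, e_r=0.8,
                                 e_l=0.1, e_ats=0.05))
```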

We consider a network with radius R = 500 m and the sink node at the center. The node density is ρ = 0.002, and the simulation lasts 1000 data transmission rounds.

5.1 Analysis of the number of packets

Figure 5 shows the number of data packets transmitted by the nodes for different values of \( {d}_0^{CM} \) (10 and 12, respectively) when the degree between cluster heads is the same (dCH = 3).

Fig. 5

The number of packets when \( {d}_0^{CM} \) takes different values

As the distance from the sink node shortens, the number of transmitted packets gradually increases, as shown in Fig. 5. In addition, when \( {d}_0^{CM} \) becomes larger, the number of transmitted packets also increases, and the more layers there are, the more obvious the increase. This is because part of the data packets on layer i come from the previous i layers.

For the same \( {d}_0^{CM} \) (\( {d}_0^{CM}=10 \)) and different degrees dCH between the cluster heads (3 and 4, respectively), the number of transmitted packets is shown in Fig. 6. The larger dCH is, the more packets are sent.

Fig. 6

The number of packets when dCH takes different values

Without the DEEDC scheme, the number of data packets transmitted by the node is as shown in Fig. 7 (dCH = 3, \( {d}_0^{CM}=10 \)).

Fig. 7

The number of packets without DEEDC scheme

Nodes farther away from the sink node send fewer packets. By contrast, when the DEEDC scheme is used, the reduction in the number of packets is very significant, as shown in Fig. 8.

Fig. 8

The reduced percentage of the number of packets

From Fig. 8, we can draw two conclusions. First, the percentage by which the number of packets sent by the CH nodes is reduced gradually decreases as the number of layers increases; this is because the closer a cluster head is to the sink node, the more packets it transmits. Secondly, the degree of decline in the number of transmitted packets generally shows a downward trend, but it is not monotonically decreasing like that of the CH nodes. The reason is that the number of packets cannot be fractional, so the theoretical results must be rounded up, which introduces errors; since the CM nodes send far fewer packets, this rounding error is more noticeable when studying the rate of decline.

By using the DEEDC scheme, the number of transmitted packets is greatly reduced (by more than 45%), thereby reducing the delay and energy consumption of packet transmission. At the same time, thanks to the support of matrix filling, the sink node can still recover the complete data despite the reduced number of packets.
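
The matrix filling step itself is specified earlier in the paper, so the sketch below does not reproduce the authors' algorithm; it only illustrates the general idea of low-rank matrix completion, alternating a rank-r SVD projection with re-imposition of the observed entries on a synthetic data matrix. The matrix sizes, rank, and observation ratio are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank "sensing" matrix: rows = rounds, columns = nodes.
n_rounds, n_nodes, rank = 60, 40, 3
M = rng.standard_normal((n_rounds, rank)) @ rng.standard_normal((rank, n_nodes))

# Only a fraction of the entries is observed (the packets actually transmitted).
mask = rng.random(M.shape) < 0.45
observed = np.where(mask, M, 0.0)

# Naive iterative completion: project onto rank-r matrices via the SVD,
# then re-impose the known (received) entries, and repeat.
X = observed.copy()
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # rank-r projection
    X[mask] = M[mask]                             # keep the observed entries

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.3e}")
```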

5.2 Analysis of delay

In Section 4.3, we gave the calculation of the delay in the DEEDC scheme. When dCH is the same (dCH = 3) and \( {d}_0^{CM} \) differs, the delay of the nodes is shown in Fig. 9.

Fig. 9

Delay with different \( {d}_0^{CM} \)

The larger \( {d}_0^{CM} \) is, the greater the delay. When \( {d}_0^{CM} \) is the same throughout the network (10) and the degree dCH between the clusters differs, the delay is shown in Fig. 10. Similar to Fig. 9, the larger dCH is, the greater the delay. The maximum delay in the network also decreases as the number of cluster layers decreases, because the number of relay hops is reduced.

Fig. 10

Delay with different dCH

In the case where the DEEDC scheme is not used, the delay can be seen from Fig. 11.

Fig. 11

Delay without DEEDC scheme

Intuitively, as with the DEEDC scheme, the larger \( {d}_0^{CM} \) is, the greater the delay. After applying the DEEDC scheme, the percentage reduction in node delay is shown in Fig. 12.

Fig. 12

The reduced percentage of the delay

The delay is reduced the most for nodes far from the sink node. This is mainly because these nodes are at the beginning of the packet forwarding process: their data collection finishes quickly and their delay is small, so even a slight reduction yields a large percentage decline. However, compared with this part, the reduction for nodes near the sink is more valuable. For example, in Fig. 12, when \( {d}_0^{CM}=10 \), the latency at the sink node is reduced by 56.06%.

5.3 Analysis of energy consumption

There are three main parts of the node’s energy consumption, as shown in Fig. 13 (dCH = 3, \( {d}_0^{CM}=10 \)). From Fig. 13, we can see that the energy consumption due to transferring packets differs across layers; obviously, this is due to the continuous accumulation of packets. Unlike transmitting data, the energy consumed for keeping listening and switching states does not differ much across layers.

Fig. 13

Different energy consumed by nodes on different layers

Figure 14 shows the total energy consumption when the degree between the cluster heads is the same (dCH = 3) and \( {d}_0^{CM} \) differs.

Fig. 14

Energy consumption when \( {d}_0^{CM} \) takes different values

As can be seen from Fig. 14, there is no obvious direct relationship between the value of \( {d}_0^{CM} \) and the total energy consumption. This can be understood from the fact that the value of \( {d}_0^{CM} \) mainly determines the number of packets received by the CH.

When \( {d}_0^{CM} \) takes the same value (\( {d}_0^{CM}=10 \)) and dCH differs, the total energy consumption is shown in Fig. 15. The energy consumed by a node decreases as dCH increases: the larger dCH is, the more CM nodes there are, the smaller the average energy spent on transferring packets, and ultimately the smaller the total energy consumption.

Fig. 15

Energy consumption when dCH takes different values

Under the same conditions (dCH = 3, \( {d}_0^{CM}=10 \)), when all packets are collected, the energy consumption is shown in Fig. 16.

Fig. 16

Energy consumption without DEEDC scheme

Compared with the case of using the DEEDC scheme (see Fig. 13), the distribution of the energy consumption across layers is roughly the same; the difference is that the consumption values become smaller after the DEEDC scheme is used.

Under the same conditions, the changes in energy consumption achieved by the DEEDC scheme are shown in Figs. 17 and 18.

Fig. 17

The reduced percentage of the energy consumption (different \( {d}_0^{CM} \))

Fig. 18

The reduced percentage of the energy consumption (different dCH)

From these figures, no direct linear relationship can be seen between the percentage of energy reduction and \( {d}_0^{CM} \) or dCH, but the reduction in energy consumption is at least 40%, which effectively saves energy.

6 Conclusion

In this paper, a delay and energy-efficient data collection scheme based on matrix filling theory was proposed for dynamic-traffic WSNs. Our main work includes the following two points.

First, the original fixed transmission slots are changed to on-demand allocation, which avoids time slots in which the corresponding node has no packet to send and which would therefore waste time and energy. Under this mechanism, a cluster member node that wants to transfer a packet first sends an RTS packet to the CH node. After receiving the RTS packets, the CH node arranges time slots for the corresponding CM nodes and broadcasts the slot assignment. All nodes can then complete their transmissions within the allocated slots. Compared with fixed time slots, this strategy reduces the delay with which the CH node receives packets and the length of time the CH node remains awake. Secondly, with the support of matrix filling, the number of packets is reduced, achieving the goal of minimizing both energy consumption and delay.

To sum up, a clustering routing algorithm suitable for networks that generate packets sporadically has been successfully proposed, which reduces both energy consumption and delay. By introducing matrix filling technology, the delay is further reduced and the energy utilization is improved.

Availability of data and materials

Not applicable.

Abbreviations

CH:

Cluster head

CM:

Cluster member

DEEDC:

Delay and energy-efficient data collection

IoT:

Internet of Things

RTS:

Request to send

TDMA:

Time division multiple access

WSNs:

Wireless sensor networks


Acknowledgements

The authors thank those who provided meticulous and valuable suggestions for improving the paper.

Funding

This research was funded by the National Natural Science Foundation of China (61772554, 61572526, 61572528), and The Natural Science Foundation of Zhejiang Province (No. LY17F020032).

Author information

Contributions

XX performed the experiments, analyzed the experimental results, and wrote the manuscript. WL, TW, XL, and HS commented on the manuscript. AL conceived of the work and wrote part of the manuscript. GZ commented on the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mande Xie.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Xiang, X., Liu, W., Wang, T. et al. Delay and energy-efficient data collection scheme-based matrix filling theory for dynamic traffic IoT. J Wireless Com Network 2019, 168 (2019). https://doi.org/10.1186/s13638-019-1490-5


Keywords