
Adaptive sparse random projections for wireless sensor networks with energy harvesting constraints

Abstract

Consider a large-scale energy-harvesting wireless sensor network (EH-WSN) measuring compressible data. Sparse random projections enable accurate data approximation, and the sparsity of the random projections affects both the mean square error (MSE) and the system delay. In this paper, we propose an adaptive algorithm for sparse random projections in order to achieve a better tradeoff between the MSE and the system delay. Under the energy-harvesting constraints, the sparsity is adapted to channel conditions via an optimal power allocation algorithm, and the structure of the optimal power allocation solution is analyzed for a special case. The performance is illustrated by numerical simulations.

1 Introduction

Energy supply is a major design constraint for conventional wireless sensor networks (WSNs), whose lifetime is limited by the total energy available in the batteries. Some specific sensors in WSNs may consume more energy than the radio during a long acquisition time [1]. Replacing the batteries periodically may prolong the lifetime, but it is not a viable option when replacement is too inconvenient, too dangerous, or even impossible, e.g., when sensors are deployed in harsh conditions such as toxic environments or inside human bodies. Therefore, harvesting energy from the environment is a promising approach to supplement batteries and cope with the increasing energy demand [2]. The energy that can be harvested includes solar, piezoelectric, and thermal energy, among others, and is theoretically unlimited. Besides, background radio-frequency (RF) signals radiated by ambient transmitters can also be a viable new source for wireless power transfer (WPT) [3,4] and (Ng et al.: Secure and Green SWIPT in Distributed Antenna Networks with Limited Backhaul Capacity, submitted). Unlike conventional WSNs, which are subject to a power constraint or a sum energy constraint, each sensor with energy-harvesting capability is, in every time slot, constrained to use at most the amount of stored energy currently available, even though more energy may become available in future slots. Therefore, a causality constraint is imposed on the use of the harvested energy. Most current research on energy harvesting has focused on wireless communication systems. Gatzianas et al. [5] considered a cross-layer resource allocation problem to maximize the total system utility, and Ho and Zhang [6] studied throughput maximization with causal side information and full side information for wireless communication systems. Ng et al. [3] studied the design of a resource allocation algorithm minimizing the total transmit power for the case when the legitimate receivers are able to harvest energy from RF signals in a multiuser multiple-input single-output downlink system. Energy management policies were studied for energy-harvesting wireless sensor networks (EH-WSNs) in [7], where sensor nodes have energy-harvesting capabilities, aiming at maximizing the system throughput and reducing the system delay.

For WSNs, however, accurately recovering signals is also important. Recent results in compressive sensing (CS) provide an efficient signal reconstruction method for WSNs. Data collected from wireless sensors are typically correlated and thus compressible in an appropriate transform domain (e.g., the Fourier or wavelet transform) [8]. The main idea of CS is that n data values can be well approximated using only k<<n transform coefficients if the data are compressible [8-13]. In particular, Wang et al. [13] proposed a distributed compressive sensing scheme for WSNs in order to reduce the computational complexity and the communication cost. It considers an m×n sparse random matrix whose entries have a probability g of being nonzero, so that on average there are ng nonzeros per row. The resulting data-approximation error is comparable to that of the optimal k-term approximation, provided that the energy of the signal is not concentrated in a few elements. Clearly, the sparsity factor g of the random projections impacts the accuracy of signal reconstruction. Usually, the sparsity factor g is statistically determined according to the amount of harvested energy and is homogeneous across all sensors [14,15]. Rana et al. [14] only considered AWGN channels. Yang et al. [15] took fading channels into account and studied sufficient conditions guaranteeing a reliable and computationally efficient data approximation for sparse random projections. It is not surprising that sparse random projection-based signal recovery is suboptimal when the sparsity factor g is fixed over all transmission slots, since it then cannot reflect the channel conditions. On the other hand, the system delay m, one of the key quantities characterizing the performance of random projection-based CS schemes, is expected to be as small as possible.
Building on [14] and [15], we observe that the lower bound on the system delay is also related to the sparsity of the random projections: the larger g is, the shorter the achievable delay. Note that there is often a tradeoff between the system delay and the data approximation [8,16]. Therefore, in this paper, we consider fading channels and energy-harvesting constraints and study the problem of adapting the sparsity of random projections according to full channel information, in order to improve the signal recovery performance and reduce the system delay as well. To the best of our knowledge, very limited work, such as [15], has touched upon this topic, and only a rough discussion is provided there. The main contributions are as follows:

  • Considering wireless fading channels, we verify that the random projection matrix satisfies the property that the inner product between two projected vectors is preserved in expectation, and then provide a lower bound on the system delay for achieving an acceptable data approximation error probability.

  • We give a new definition of the sparsity of random projections and formulate the optimal sparsity problem, which is converted into an optimal power allocation problem maximizing the system throughput. Unlike the conventional energy allocation problem, a closed-form solution may not be available due to battery and channel dynamics. Therefore, we study a special case in which the battery capacity is unbounded, in order to find the structure of the optimal solution. Specifically, in this case the problem becomes a convex optimization problem, and the closed-form solution is obtained in terms of Lagrange multipliers.

The rest of the paper is organized as follows. Section 2 gives the system model and overviews previously known results on sparse random projections. Section 3 redefines the sparsity and formulates the optimal sparsity problem for EH-WSNs. Section 4 considers a special case and addresses the structure of the optimal solution. Section 5 provides the simulation results. Finally, Section 6 concludes the paper.

2 System model

We consider a wireless sensor network of n sensor nodes, each of which measures a data value \(s_{i}\in \mathbb {C}\) and is capable of energy harvesting. We assume a Rayleigh-fading channel, and the channel coefficients, denoted as h ij , where 1≤i≤m is the slot index and 1≤j≤n is the sensor index, are independent and identically distributed (i.i.d.) complex Gaussian with zero mean and unit variance. We further assume the channel remains constant within each slot. Sensor j first multiplies its data s j by some random projection \(\phi _{\textit {ij}}\in \mathbb {R}\) and then transmits in the ith time slot. At the receiver,

$$ y_{i}=\sum_{j=1}^{n} h_{ij}\phi_{ij}s_{j}+e_{i} $$
((1))

where e i is white Gaussian noise with zero mean and variance σ 2. After m time slots, the received vector is given as

$$ \textbf{y}=(\textbf{H}\odot\Phi)\textbf{s}+\textbf{e}=\textbf{Z}\textbf{s}+\textbf{e} $$
((2))

where \(\textbf {H}=\left \{h_{\textit {ij}}\right \}\in \mathbb {C}^{m\times n}\), \(\Phi =\left \{ \phi _{\textit {ij}}\right \} \in \mathbb {R}^{m\times n}\), Z=H⊙Φ, and the operation ⊙ is the element-wise product of two matrices. The corresponding real-valued equation of (2) is

$$ {\fontsize{9.2}{6} \begin{aligned} \hat{\textbf{y}}&=\left[ \begin{array}[pos]{cc} \textit{Re}\left\{(\textbf{H}\odot\Phi)\right\}&-\textit{Im}\left\{(\textbf{H}\odot\Phi)\right\}\\ \textit{Im}\left\{(\textbf{H}\odot\Phi)\right\}&\textit{Re}\left\{(\textbf{H}\odot\Phi)\right\} \end{array} \right]\left[ \begin{array}[pos]{c} \textit{Re}\left\{\textbf{s}\right\}\\ \textit{Im}\left\{\textbf{s}\right\} \end{array}\right] +\left[ \begin{array}[pos]{c} \textit{Re}\left\{\textbf{e}\right\}\\ \textit{Im}\left\{\textbf{e}\right\} \end{array}\right] \\ &=\hat{\textbf{Z}}\left[ \begin{array}[pos]{c} \textit{Re}\left\{\textbf{s}\right\}\\ \textit{Im}\left\{\textbf{s}\right\} \end{array}\right]+\left[ \begin{array}[pos]{c} \textit{Re}\left\{\textbf{e}\right\}\\ \textit{Im}\left\{\textbf{e}\right\} \end{array}\right] \end{aligned}} $$
((3))

where Re{A} and Im{A} denote the real part and the imaginary part of matrix A, respectively, and

$$ \hat{\textbf{Z}}=\left[ \begin{array}[pos]{cc} \textit{Re}\left\{(\textbf{H}\odot\Phi)\right\}&-\textit{Im}\left\{(\textbf{H}\odot\Phi)\right\}\\ \textit{Im}\left\{(\textbf{H}\odot\Phi)\right\}&\textit{Re}\left\{(\textbf{H}\odot\Phi)\right\} \end{array} \right] $$
((4))
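For concreteness, the measurement model (1) through (4) can be simulated directly; the dimensions, sparsity factor, and noise level below are illustrative choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, g, sigma = 20, 50, 0.25, 0.1   # illustrative sizes, not from the paper

# Rayleigh-fading coefficients: i.i.d. CN(0, 1)
H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# Sparse random projections: +-1/sqrt(g) w.p. g/2 each, 0 w.p. 1-g
Phi = rng.choice([1.0, -1.0, 0.0], size=(m, n), p=[g / 2, g / 2, 1 - g]) / np.sqrt(g)

s = rng.standard_normal(n) + 1j * rng.standard_normal(n)        # sensor data
e = sigma * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

Z = H * Phi                   # element-wise (Hadamard) product, Z = H ⊙ Φ
y = Z @ s + e                 # complex measurement model (2)

# Equivalent real-valued model (3)-(4)
Zhat = np.block([[Z.real, -Z.imag], [Z.imag, Z.real]])
y_hat = Zhat @ np.concatenate([s.real, s.imag]) + np.concatenate([e.real, e.imag])

assert np.allclose(y_hat, np.concatenate([y.real, y.imag]))
```

The final assertion confirms that the stacked real-valued model (3) reproduces the complex measurements (2).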

2.1 Compressible data and sparse random projections

Suppose the aggregate sensor data \(\textbf {s}\in \mathbb {C}^{n}\) from the n nodes is compressible, so that we can model it as sparse with respect to a fixed orthonormal basis \(\{\psi _{j}\in \mathbb {C}^{n}:j=1,\cdots,n\}\) [11], i.e.,

$$ \textbf{s}=\Psi \textbf{x} = \sum_{j=1}^{n} \psi_{j} x_{j} $$
((5))

Generally, for a compressible signal s, the largest k transform coefficients capture most of the signal information, and k is usually referred to as the sparsity of s. The best k-term approximation recovers only the k largest transform coefficients and discards the rest as zero [8], achieving near-optimal error probability. However, the random projection matrix Φ used in [8] is dense, which results in high computational complexity. Therefore, Wang et al. [13] proposed sparse random projections to reduce the computational complexity while guaranteeing an error probability comparable to that achieved by dense random projections. More concretely, the matrix of sparse random projections \(\Phi \in \mathbb {R}^{m\times n}\) contains i.i.d. entries

$$ \phi_{ij}=\frac{1}{\sqrt{g}}\left\{ \begin{array}{ccc} +1&\texttt{w.p.}&g/2 \\ 0&\texttt{w.p.}&1-g\\ -1&\texttt{w.p.}&g/2 \end{array}\right. $$
((6))

where g gives the probability that an entry is nonzero and thus controls the degree of sparsity of the random projections; e.g., if g=1, the random matrix has no sparsity, and if g= log n/n, the expected number of nonzeros in each row is log n. One can verify that the entries within each row are four-wise independent, while the entries across different rows are fully independent, i.e.,

$$ \mathbb{E}[\phi_{ij}]=0, \quad \mathbb{E}[\phi_{ij}^{2}]=1, \quad \mathbb{E}[\phi_{ij}^{4}]=\frac{1}{g} $$
((7))

Therefore, each random projection vector can be pseudo-randomly generated and stored in a small space.
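The moments in (7) are easy to verify empirically; the following sketch samples a large matrix of entries distributed as in (6), with an illustrative sparsity factor g.

```python
import numpy as np

def sparse_projection(m, n, g, rng):
    """Entries are +-1/sqrt(g) w.p. g/2 each and 0 w.p. 1-g, as in (6)."""
    signs = rng.choice([1.0, -1.0, 0.0], size=(m, n), p=[g / 2, g / 2, 1 - g])
    return signs / np.sqrt(g)

rng = np.random.default_rng(1)
g = 0.1                                     # illustrative sparsity factor
Phi = sparse_projection(2000, 2000, g, rng)

# Empirical moments should match E[phi]=0, E[phi^2]=1, E[phi^4]=1/g in (7)
print(Phi.mean(), (Phi ** 2).mean(), (Phi ** 4).mean())
```

With 4 million samples the empirical first, second, and fourth moments settle close to 0, 1, and 1/g, respectively.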

Corollary 1.

[13] Consider a data vector \(\textbf {u}\in \mathbb {R}^{n}\) which satisfies the condition

$$ \frac{\left\|\boldsymbol{u}\right\|_{\infty}}{\left\|\boldsymbol{u}\right\|_{2}}\leq C. $$
((8))

In addition, let V be any set of n vectors \(\left \{\textbf {v}_{1},\cdots,\textbf {v}_{n}\right \} \subset \mathbb {R}^{n}\). Suppose a sparse random matrix \(\Phi \in \mathbb {R}^{m\times n}\) satisfies the conditions

$$ \mathbb{E}[\phi_{ij}]=0, \quad \mathbb{E}[\phi_{ij}^{2}]=1, \quad \mathbb{E}[\phi_{ij}^{4}]=\frac{1}{g} $$
((9))

If

$$ m=\left\{ \begin{array}{ll} \mathcal{O}\left(\frac{1+\gamma}{\epsilon^{2}}C^{2}\log n\right), & \text{if } C^{2}/g\geq \Omega(1)\\ \mathcal{O}\left(\frac{1+\gamma}{\epsilon^{2}}\log n\right), & \text{if } C^{2}/g\leq \mathcal{O}(1) \end{array}\right. $$
((10))

with probability at least 1−n −γ, the random projections \(\frac {1}{\sqrt {m}}\Phi \textbf {u}\) and \(\frac {1}{\sqrt {m}}\Phi \textbf {v}_{i}\) can produce an estimate \(\hat {a}_{i}\) for u T v i satisfying

$$ \left|\hat{a}_{i}-\textbf{u}^{T}\textbf{v}_{i}\right|\leq \epsilon\left\|\textbf{u}\right\|_{2}\left\|\textbf{v}_{i}\right\|_{2} $$
((11))

□

Corollary 1 states that sparse random projections of the data vector and of any set of n vectors can produce estimates of their inner products to within a small error. Thus, sparse random projections can produce accurate estimates of the transform coefficients of the data, which are inner products between the data and the set of orthonormal basis vectors. The sufficient condition (8) bounds the peak-to-total energy of the data, which guarantees that the signal energy is not concentrated in a small number of components. If the data are compressible in the discrete Fourier transform domain with compressibility parameter θ, then [13]

$$ \frac{||\textbf{u}||_{\infty}}{||\textbf{u}||_{2}}\leq C=\left\{ \begin{array}{ll} \mathcal{O}\left(\log n/\sqrt{n}\right) & \text{if } \theta=1\\ \mathcal{O}\left(1/\sqrt{n}\right) & \text{if } 0<\theta<1 \end{array}\right. $$
((12))

3 Adaptive sparse random projections

3.1 Sparse random projections with channel fading

Wang et al. [13], however, only considered AWGN channels. Under channel fading, we ask whether the inner products are still preserved by sparse random projections. We redefine the sparse random projection matrix as follows:

$$ \phi_{ij}=\frac{1}{\sqrt{g_{ij}}}\left\{ \begin{array}{ccc} +1&\texttt{w.p.}&g_{ij}/2 \\ 0&\texttt{w.p.}&1-g_{ij}\\ -1&\texttt{w.p.}&g_{ij}/2 \end{array}\right. $$
((13))

where g ij gives the probability of a projection from sensor node j at time slot i. The details of g ij will be illustrated in the next section.

Proposition 1.

Let \(\hat {\textbf {Z}}\) be the projection matrix given by (4). Define \(\textbf {u}=\frac {1}{\sqrt {m}}\hat {\textbf {Z}}\textbf {x}\) and \(\textbf {v}=\frac {1}{\sqrt {m}}\hat {\textbf {Z}}\textbf {y}\) as the random projections of two vectors x and y. The expectation and variance of the inner product of u and v are, respectively,

$$ \mathbb{E}\left[\textbf{u}^{T}\textbf{v}\right] = \textbf{x}^{T}\textbf{y} $$
((14))
$$ {\fontsize{9}{6} \begin{aligned} Var\left(\textbf{u}^{T}\textbf{v}\right)= \frac{1}{m}\left(\left(\textbf{x}^{T}\textbf{y}\right)^{2} +\left\|\textbf{x}\right\|_{2}^{2}\left\|\textbf{y}\right\|_{2}^{2} +\sum_{j=1}^{n}\left(\frac{3}{g_{ij}}-3\right)\textbf{x}_{j}^{2}\textbf{y}_{j}^{2}\right) \end{aligned}} $$
((15))

Proof: Since the h ij are i.i.d. complex Gaussian with zero mean and unit variance, it is not difficult to verify the following:

$$ \mathbb{E}[\hat{z}_{ij}]=0, \mathbb{E}[\hat{z}_{ij}^{2}]=1,\mathbb{E}[\hat{z}_{ij}^{4}]=3/g_{ij} $$
((16))

By defining independent random variables \(w_{i}=\left (\sum _{j=1}^{n}x_{j}\hat {z}_{\textit {ij}}\right)\left (\sum _{j=1}^{n}y_{j}\hat {z}_{\textit {ij}}\right)\), it can be shown that \(\textbf {u}^{T}\textbf {v}=\frac {1}{m}\sum _{i=1}^{m}w_{i}\), and the expectation and second moment of w i are

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left[w_{i}\right]&=&\sum_{j=1}^{n}x_{j}y_{j}\mathbb{E}\left[\hat{z}_{ij}^{2}\right]=\textbf{x}^{T}\textbf{y}=\mathbb{E}\left[\textbf{u}^{T}\textbf{v}\right] \\ \mathbb{E}\left[{w_{i}^{2}}\right]&=&\sum_{j=1}^{n}{x_{j}^{2}}{y_{j}^{2}}\mathbb{E}\left[\hat{z}_{ij}^{4}\right]+4\sum_{l<q}x_{l}y_{l}x_{q}y_{q}\mathbb{E}\left[\hat{z}_{il}^{2}\right]\mathbb{E}\left[\hat{z}_{iq}^{2}\right]\\ &&+\sum_{l\neq q}{x_{l}^{2}}{y_{q}^{2}}\mathbb{E}[\hat{z}_{il}^{2}]\mathbb{E}[\hat{z}_{iq}^{2}]\\ &=&2\left(\textbf{x}^{T}\textbf{y}\right)^{2}+\left\|\textbf{x}\right\|_{2}^{2}\left\|\textbf{y}\right\|_{2}^{2}+\sum_{j=1}^{n} \left(\frac{3}{g_{ij}}-3\right) {x_{j}^{2}}{y_{j}^{2}}\\ Var\left(w_{i}\right)&=&\mathbb{E}\left[{w_{i}^{2}}\right]-\left(\mathbb{E}\left[w_{i}\right]\right)^{2}=m\,Var\left(\textbf{u}^{T}\textbf{v}\right) \end{array} $$
((17))

□
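The expectation claim (14) lends itself to a quick Monte Carlo check; the sketch below (our own illustration, with arbitrary small dimensions) averages u^T v over independent draws of H and Φ and compares the result against x^T y.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, g, trials = 64, 8, 0.5, 8000   # illustrative sizes, not from the paper

x = rng.standard_normal(2 * n)       # fixed real-valued test vectors
y = rng.standard_normal(2 * n)

est = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    Phi = rng.choice([1.0, -1.0, 0.0], size=(m, n), p=[g / 2, g / 2, 1 - g]) / np.sqrt(g)
    Z = H * Phi
    Zhat = np.block([[Z.real, -Z.imag], [Z.imag, Z.real]])   # matrix (4)
    u = Zhat @ x / np.sqrt(m)
    v = Zhat @ y / np.sqrt(m)
    est[t] = u @ v

print(est.mean(), x @ y)   # the two values should be close
```

The sample mean of u^T v approaches x^T y as the number of trials grows, consistent with (14).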

Proposition 1 states that the estimate of the inner product between two vectors, obtained with the sparse random projection matrix (4), is correct in expectation and has bounded variance. If a signal and a sparse random projection matrix satisfy conditions (8) and (16), respectively, we obtain the following proposition:

Proposition 2.

Consider a data vector \(\textbf {u}\in \mathbb {R}^{n}\) satisfying condition (8). In addition, suppose a sparse random matrix \(\Phi \in \mathbb {R}^{m\times n}\) satisfies condition (16). Let

$$ m \geq \mathcal{O}\left(\frac{\left(1+\gamma\right)}{\epsilon^{2}}\left(2+\sum_{j=1}^{n}\frac{3}{\min_{i}g_{ij}}C^{2}\right)\log n\right), $$
((18))

and consider an orthonormal transform \(\Psi \in \mathbb {R}^{n\times n}\). Given only \(\textbf {x}=\frac {1}{\sqrt {m}}\Phi \textbf {u}\), Φ and Ψ, the sparse random projections can produce an approximation with error

$$ \left\|\textbf{u}-\hat{\textbf{u}}\right\|_{2}^{2} \leq (1+\epsilon)\eta \left\|\textbf{u}\right\|_{2}^{2} $$
((19))

with probability at least 1−n −γ, if the k largest transform coefficients in magnitude give an approximation with error \(\left \|\textbf {u}-\hat {\textbf {u}}_{\textit {opt}}\right \|_{2}^{2} \leq \eta \left \|\textbf {u}\right \|_{2}^{2}\).

Proof: Following the approach of [13], define m=m 1 m 2 and partition the m×n matrix Φ into m 2 matrices \(\Phi _{1}, \Phi _{2}, \cdots, \Phi _{m_{2}}\), each of size m 1×n. Using the Chebyshev inequality, we have

$$ \begin{aligned} &P\left(\left|w_{i}-\textbf{u}^{T}\textbf{v}\right|\geq \epsilon\left\|\textbf{u}\right\|_{2}\left\|\textbf{v}\right\|_{2}\right) \leq \frac{Var(w_{i})}{\epsilon^{2}\left\|\textbf{u}\right\|^{2}_{2}\left\|\textbf{v}\right\|^{2}_{2}} \\ &\quad=\frac{1}{\epsilon^{2}m_{1}}\left(\frac{\left(\textbf{u}^{T}\textbf{v}\right)^{2}}{\left\|\textbf{u}\right\|^{2}_{2}\left\|\textbf{v}\right\|^{2}_{2}} + 1 + \sum_{j=1}^{n}\left(\frac{3}{g_{ij}}-3\right)\frac{\sum_{j=1}^{n}{u_{j}^{2}}{v_{j}^{2}}}{\left\|\textbf{u}\right\|^{2}_{2}\left\|\textbf{v}\right\|^{2}_{2}}\right) \\ &\quad\leq\frac{1}{\epsilon^{2}m_{1}}\left(2+\sum_{j=1}^{n}\frac{3}{\min_{i}g_{ij}}C^{2}\right) = \textit{p} \end{aligned} $$
((20))

where the last inequality comes from the fact that \(\frac {||\textbf {u}||_{\infty }}{||\textbf {u}||_{2}}\leq C\). Thus, we obtain a constant probability p by setting \(m_{1}=\mathcal {O}\left (\frac {2+\sum _{j=1}^{n}\frac {3}{\min _{i} g_{\textit {ij}}}C^{2}}{\epsilon ^{2}}\right)\). For any pair of vectors u and v i , the random projections \(\frac {1}{L}\Phi \textbf {u}\) and \(\frac {1}{L}\Phi \textbf {v}_{i}\) produce an estimate \(\hat {w}_{i}\) that lies outside the tolerable approximation interval with probability at most \(e^{-c^{2}m_{2}/12}\), where 0<c<1 is some constant and L 2 is the number of independent random variables w i which lie outside the tolerable approximation interval with probability p. Setting \(m_{1}=\mathcal {O}\left (\frac {2+\sum _{j=1}^{n}\frac {3}{\min _{i}g_{\textit {ij}}}C^{2}}{\epsilon ^{2}}\right)\) and \(m_{2}=\mathcal {O}((1+\gamma)\log n)\) yields p=1/4 and p e =n −γ for some constant γ>0. Finally, for \(m=m_{1}m_{2} = \mathcal {O}\left (\frac {\left (1+\gamma \right)}{\epsilon ^{2}}\left (2+\sum _{j=1}^{n}\frac {3}{\min _{i}g_{\textit {ij}}}C^{2}\right)\log n\right)\), the random projections Φ preserve all pairwise inner products within an approximation error ε with probability at least 1−n −γ.

□

Proposition 2 states that sparse random projections can produce a data approximation with error comparable to the best k-term approximation with high probability.
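The partition-and-median argument in the proof can also be read as an estimator: split the m projections into m 2 groups of m 1, estimate the inner product within each group, and take the median of the group estimates. A sketch with illustrative sizes and a helper name of our own choosing (the pure sparse-projection case, without fading):

```python
import numpy as np

def median_of_means(Phi, u, v, m1, m2):
    """Partition the m = m1*m2 rows of Phi into m2 groups of m1 rows,
    estimate <u, v> within each group, and return the median of the
    group estimates, mirroring the proof of Proposition 2."""
    products = (Phi @ u) * (Phi @ v)             # one unbiased sample per row
    return np.median(products.reshape(m2, m1).mean(axis=1))

rng = np.random.default_rng(3)
n, g, m1, m2 = 400, 0.2, 200, 15                 # illustrative parameters

u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

# Sparse projection rows distributed as in (6)
Phi = rng.choice([1.0, -1.0, 0.0], size=(m1 * m2, n),
                 p=[g / 2, g / 2, 1 - g]) / np.sqrt(g)

print(median_of_means(Phi, u, v, m1, m2), u @ v)   # the two should be close
```

Each row gives an unbiased sample of the inner product; averaging within groups controls the variance, and the median across groups boosts the confidence, which is exactly how m=m 1 m 2 enters the bound (18).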

3.2 Optimal power allocation based sparsity adaption

From the above propositions, we notice that the factor \(\sum _{j=1}^{n}\frac {1}{g_{\textit {ij}}}\) controls both the estimation variance (15) and the lower bound (18) on the system delay m. If g ij is small for node j at time slot i, the estimate may have a high variance, producing a low-accuracy approximation; meanwhile, m must be very large to guarantee an acceptable error probability. An energy-aware sparsity was given as \(g_{j} =\frac {E_{j}}{\sum _{j=1}^{n} E_{j}}\cdot\frac {m}{n}\) for EH-WSNs in [14], where E j denotes the harvested energy profile of node j. Usually, g j is predetermined and uniform across nodes and time slots, i.e., g j =g. This definition is not sophisticated enough, because it considers neither the differing channel conditions across nodes and time slots nor the energy-harvesting constraints. Therefore, a more specific definition of sparsity is desired. We redefine the sparsity of random projections as follows:

$$ g_{ij} = \frac{p_{ij}^{*}}{E_{ij}} $$
((21))

where \(p_{\textit {ij}}^{*}\) is the energy allocated to node j during the ith time slot. \(p_{\textit {ij}}^{*}\) is determined in terms of full information, consisting of past, present, and future channel conditions and amounts of harvested energy. The case of full information is justified if the environment is highly predictable, e.g., when the energy is harvested from the vibration of motors that are turned on only during fixed operating hours and line-of-sight is available for communications.

If the energy-harvesting profile E ij of each node is known in advance and kept constant over all transmission slots, the optimal sparsity problem is converted into an optimal power allocation problem. The question is then which performance metric should drive the power allocation. The performance of random projection-based CS schemes is characterized by two quantities, namely the data approximation error probability (or the mean square error (MSE)) and the system delay, and there is often a tradeoff between them [16]. Under an allowable MSE η>0, we thus define the achievable system delay D(η) as

$$\begin{array}{@{}rcl@{}} &D(\eta) = & \min m \\ &\texttt{s.t.}& \end{array} $$
((22))
$$\begin{array}{@{}rcl@{}} & &\mathbb{E}\left\{\left\|\hat{\textbf{s}}-\textbf{s}\right\|\right\}\leq \eta \end{array} $$
((23))
$$\begin{array}{@{}rcl@{}} & &\sum_{i=1}^{m}\log_{2}\left[1+\frac{\left|h_{ij}\right|^{2}p_{ij}}{\sum_{l=1,l\neq j}^{n} \left|h_{il}\right|^{2}E_{il}+\sigma^{2}}\right] \geq B \end{array} $$
((24))
$$\begin{array}{@{}rcl@{}} & &\sum_{k=1}^{i}p_{kj} \leq \sum_{k=1}^{i-1}E_{kj}, \quad i=1,2,\cdots,m \end{array} $$
((25))
$$\begin{array}{@{}rcl@{}} & &\sum_{k=0}^{i}E_{kj}-\sum_{k=1}^{i}p_{kj}\leq E_{\max}, \quad i=1,2,\cdots,m-1 \\ & &p_{ij} \geq 0, \forall i \end{array} $$
((26))

where \(\sum _{i=1}^{m}\log _{2}\left [1+\frac {\left |h_{\textit {ij}}\right |^{2}p_{\textit {ij}}}{\sum _{l=1,l\neq j}^{n} \left |h_{\textit {il}}\right |^{2}E_{\textit {il}}+\sigma ^{2}}\right ]\) is a lower bound on the short-term throughput of node j and B is the amount of data each node is required to transmit. Constraint (25) reflects that the harvested energy cannot be consumed before it arrives, and constraint (26) models the limited battery capacity. Battery overflow happens when the reserved energy plus the newly harvested energy exceeds the battery capacity; this is undesirable because the data rate could be increased by using the energy in advance instead of letting it overflow. If we assume that there is an m satisfying condition (24), the delay minimization problem is immediately converted into a throughput maximization problem, which can be formulated as follows:

$$\begin{array}{@{}rcl@{}} &\max_{p_{ij}}&\sum_{i=1}^{m}\log_{2}\left[1+\frac{\left|h_{ij}\right|^{2}p_{ij}}{\sum_{l=1,l\neq j}^{n} \left|h_{il}\right|^{2}E_{il}+\sigma^{2}}\right] \\ &\texttt{s.t.}& \end{array} $$
((27))
$$\begin{array}{@{}rcl@{}} & &\sum_{k=1}^{i}p_{kj} \leq \sum_{k=1}^{i-1}E_{kj}, \quad i=1,2, \cdots,m \end{array} $$
((28))
$$\begin{array}{@{}rcl@{}} & &\sum_{k=0}^{i}E_{kj}-\sum_{k=1}^{i}p_{kj}\leq E_{\max}, \quad i=1,2,\cdots,m-1 \\ & &p_{ij} \geq 0, \forall i \end{array} $$
((29))

Note that the objective (27) is concave in p ij , since it is a sum of logarithmic functions, and all constraints are affine. Consequently, the problem is a convex optimization problem, and the optimal solution satisfies the Karush-Kuhn-Tucker (KKT) conditions [17]. With the assumption that the initial battery energy E 0j is always known by node j, define the Lagrangian function for multipliers λ i ≥0, μ i ≥0, β i ≥0 as

$$\begin{array}{@{}rcl@{}} \mathcal{L}&=&\sum_{i=1}^{m}\log_{2}\left[1+\frac{\left|h_{ij}\right|^{2}p_{ij}}{\sum_{l=1,l\neq j}^{n} \left|h_{il}\right|^{2}E_{il}+\sigma^{2}}\right]\\ &&-\sum_{i=1}^{m}\lambda_{i}\left(\sum_{k=1}^{i}p_{kj}-\sum_{k=1}^{i-1}E_{kj}\right)\\ &&-\sum_{i=1}^{m-1}\mu_{i}\left(\sum_{k=0}^{i}E_{kj}-\sum_{k=1}^{i}p_{kj}-E_{\max}\right) +\sum_{i=1}^{m}\beta_{i}p_{ij} \end{array} $$
((30))

with additional complementary slackness conditions

$$\begin{array}{@{}rcl@{}} \lambda_{i}\left(\sum_{k=1}^{i}p_{kj}-\sum_{k=1}^{i-1}E_{kj}\right) &=& 0, \forall i \end{array} $$
((31))
$$\begin{array}{@{}rcl@{}} \mu_{i}\left(\sum_{k=0}^{i}E_{kj}-\sum_{k=1}^{i}p_{kj}-E_{\max}\right) &=&0, i<m \end{array} $$
((32))
$$\begin{array}{@{}rcl@{}} \beta_{i}p_{ij}&=& 0, \quad \forall i \end{array} $$
((33))

We apply the KKT optimality conditions to the Lagrangian (30). Setting ∂ L/∂ p ij =0, we obtain the unique optimal energy level \(p_{\textit {ij}}^{*}\) in terms of the Lagrange multipliers as

$$ p_{ij}^{*}=\left[\alpha_{i}-\frac{1}{\gamma_{i}}\right]^{+} $$
((34))

where \([x]^{+}=\max(x,0)\), \(\alpha _{i}=\left [\ln 2\left(\sum _{k=i}^{m}(\lambda _{k}-\mu _{k})-\beta _{i}\right)\right ]^{-1}\) is the water level, μ m =0, and \(\gamma _{i}=\frac {\left |h_{\textit {ij}}\right |^{2}}{\sum _{l=1,l\neq j}^{n}\left |h_{\textit {il}}\right |^{2}E_{\textit {il}}+\sigma^{2}}\).
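As a numerical companion to the objective in (27), the throughput lower bound of node j can be evaluated as follows; the array layout, helper name, and toy numbers are our own illustrative choices, not values from the paper.

```python
import numpy as np

def node_throughput(h, E, p, j, sigma2=1.0):
    """Sum-rate lower bound of node j over m slots, as in objective (27):
    the other nodes are assumed to transmit at their harvesting rates E
    and are treated as interference.  h and E are m x n arrays; p is the
    length-m vector of powers allocated to node j."""
    own = np.abs(h[:, j]) ** 2 * p
    interference = (np.abs(h) ** 2 * E).sum(axis=1) - np.abs(h[:, j]) ** 2 * E[:, j]
    return np.log2(1.0 + own / (interference + sigma2)).sum()

rng = np.random.default_rng(4)
m, n = 6, 4
h = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
E = np.ones((m, n))

print(node_throughput(h, E, 2.0 * np.ones(m), j=0))  # more power, higher throughput
```

Since the rate is increasing in p ij , doubling the allocated powers strictly increases the bound, which is why the delay minimization can be traded for throughput maximization.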

3.3 Structural solution

If the battery capacity is finite, the optimal water level is not monotonic. Therefore, the structure of the optimal energy allocation cannot be described in a simple and clear way, and online programming may be required. Since we are more interested in an offline power allocation structure, we study the following special case.

Proposition 3.

If E max =∞, the optimal water levels are non-decreasing, i.e., α i ≤α i+1. In addition, the water level changes only when all the energy harvested before the current transmission has been used up.

Proof: Without the battery capacity constraint, the water level is given as \(\alpha _{i}=\left (\ln 2\sum _{k=i}^{m}\lambda _{k}\right)^{-1}\). Since λ k ≥0,∀k, we have α i ≤α i+1. If α i <α i+1, then by the definition \(\alpha _{i}=\left (\ln 2\sum _{k=i}^{m}\lambda _{k}\right)^{-1}\) we get λ i >0. So the complementary slackness condition (31) only holds when \(\sum _{k=1}^{i}p_{\textit {kj}}-\sum _{k=1}^{i-1}E_{\textit {kj}} = 0\), which means all stored energy is used up before the current transmission. □

The case of E max=∞ represents an ideal energy buffer, i.e., a device that can store any amount of energy, has no charging inefficiency, and does not leak energy over time. As an example, consider a sensor node installed to monitor the health of heavy-duty industrial motors. Suppose the node operates on energy harvested from the machine’s vibrations, the harvested energy exceeds the consumed power, and the health monitoring function is needed only while the motor is powered on. Proposition 3 presents an analytically tractable structure of the optimal sparsity. Intuitively, harvested energy is reserved in the battery for use in later transmissions, in order to reduce the effect of the causality constraint and improve the flexibility of harvested energy allocation. The optimal water level can be obtained by the power allocation policy and is structured as follows: the water level is non-decreasing, and the harvested energy is used in a conservative way. Based on these structural properties, the solution can be achieved with a reserve multi-stage water-filling algorithm modified from [18].
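One plausible offline implementation of the E max=∞ structure in Proposition 3 scans forward in time and, for each segment, picks the horizon with the smallest water level, so that levels are non-decreasing and the stored energy is exhausted at every level change. The functions below are our own illustrative sketch for a single node, not the exact algorithm of [18].

```python
import numpy as np

def wf_level(grounds, budget, iters=100):
    """Water level nu solving sum_k max(nu - grounds[k], 0) = budget,
    found by bisection."""
    lo, hi = grounds.min(), grounds.max() + budget
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.maximum(mid - grounds, 0.0).sum() > budget:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def staircase_allocation(E, gamma):
    """Offline allocation for one node with E_max = inf.  E[i] is the
    energy that becomes available at the start of slot i, gamma[i] the
    effective SNR coefficient, and causality reads
    sum_{k<=i} p_k <= sum_{k<=i} E_k.  Each segment uses the horizon
    with the smallest water level, so levels are non-decreasing and all
    stored energy is spent at each level change (Proposition 3)."""
    E, gamma = np.asarray(E, float), np.asarray(gamma, float)
    grounds = 1.0 / gamma                 # "ground" height 1/gamma_i per slot
    m = len(gamma)
    p = np.zeros(m)
    s, spent = 0, 0.0
    while s < m:
        best_e, best_nu = s, np.inf
        for e in range(s, m):             # try every horizon for this segment
            nu = wf_level(grounds[s : e + 1], E[: e + 1].sum() - spent)
            if nu < best_nu - 1e-12:
                best_nu, best_e = nu, e
        seg = slice(s, best_e + 1)
        p[seg] = np.maximum(best_nu - grounds[seg], 0.0)
        spent += p[seg].sum()
        s = best_e + 1
    return p

# Energy arriving late is spent late; good channels share the water.
print(staircase_allocation([0.0, 10.0], [1.0, 1.0]))             # ~[0, 10]
print(staircase_allocation([10.0, 0.0, 0.0], [1.0, 0.01, 1.0]))  # ~[5, 0, 5]
```

In the second example the middle slot has a poor channel, so the energy harvested in slot 1 is split between slots 1 and 3, and the resulting per-slot sparsity then follows from (21) as g ij =p ij */E ij .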

4 Simulation results

We consider an EH-WSN containing n=500 sensor nodes and a uniform energy-harvesting rate E ij =2 dB for all nodes, and we evaluate the performance of the proposed adaptive sparse random projections. One performance metric is the normalized mean-square error (MSE), given as

$$ \text{error}=\frac{\left\|\textbf{s}-\hat{\textbf{s}}\right\|_{2}^{2}}{\left\|\textbf{s}\right\|_{2}^{2}} $$
((35))
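In code, this normalized error is a one-liner (the helper name is ours):

```python
import numpy as np

def normalized_mse(s, s_hat):
    """Normalized reconstruction error of (35)."""
    return np.linalg.norm(s - s_hat) ** 2 / np.linalg.norm(s) ** 2

s = np.array([1.0, 2.0, 3.0])
print(normalized_mse(s, np.array([1.0, 2.0, 2.5])))  # 0.25 / 14
```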

Figure 1 illustrates the data-approximation performance of sparse random projections for different degrees of sparsity. The larger g is, the smaller the achieved MSE. However, a larger g may bring high computational complexity. Therefore, the sparsity factor g should be chosen carefully to balance the MSE against the complexity. Intuitively, when channel conditions are poor, a larger g should be selected to guarantee an acceptable MSE, whereas a smaller g should be selected to save computational complexity when channel conditions are good enough. This motivates us to adapt the sparsity of random projections to the channel conditions, improving the data-approximation performance as well as the system delay.

Figure 1. MSE comparison for sparse random projections with different degrees of sparsity.

Figures 2 and 3 compare the MSE performance of our proposed adaptive sparse random projections (denoted ‘Adaptive’ in the legend) with that of the conventional sparse random projections (denoted ‘Fixed’) with respect to the number of transmission slots m, for SNR = 15 dB and 30 dB, respectively. The conventional sparse random projections with fixed sparsity g=1/4 serve as the baseline, since this setting achieves an acceptable MSE with modest complexity. We observe that the proposed adaptive sparse random projections achieve a better tradeoff between the MSE and the system delay than the conventional scheme for both k=10 and k=5. However, the performance gap between the two schemes shrinks as the SNR increases, which makes sense: as channel conditions improve, the benefit of adapting the sparsity becomes limited. For both SNR = 30 dB and 15 dB, the case k=5 provides better performance than k=10.

Figure 2. MSE comparison for different k when SNR = 15 dB.

Figure 3. MSE comparison for different k when SNR = 30 dB.

In Figure 4, we compare the conventional sparse random projections with fixed sparsity against the proposed scheme with respect to the number of transmissions (i.e., the system delay) m for different SNRs. The proposed scheme again outperforms the conventional one for both SNR = 20 dB and 30 dB, resulting in a better tradeoff between the MSE and the system delay. We also notice that, for both schemes, there is no performance difference between SNR = 20 dB and SNR = 30 dB when m<80, whereas the MSE decreases with increasing SNR once m exceeds 80. The reason is that m is also one of the factors controlling the estimation variance in (15). If m is not sufficiently large, it dominates the MSE performance, so increasing the SNR barely affects the MSE. Once m is large enough, further increasing m yields only a very limited MSE improvement; the SNR then becomes the dominant factor, and increasing it benefits the MSE performance.

Figure 4. MSE comparison for different SNRs.

Figure 5 shows the tradeoff between the system delay and the MSE for the proposed adaptive sparse random projections and the conventional ones when SNR = 30 dB and k=5. To reach an MSE of 3×10−2, the conventional sparse random projections require about m=95 transmissions, while the proposed scheme requires only about m=78. Consequently, the proposed scheme achieves a better tradeoff than the conventional one.
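The delay figures quoted above correspond to reading off the smallest m whose MSE meets a target. A minimal helper for that lookup (hypothetical names; works with any MSE-versus-m curve, here a purely illustrative 1/m-shaped one) could be:

```python
def delay_for_target(target_mse, mse_of_m, m_grid):
    """Return the smallest m in m_grid whose MSE meets the target,
    or None if no grid point reaches it."""
    for m in sorted(m_grid):
        if mse_of_m(m) <= target_mse:
            return m
    return None

# Example with a hypothetical error curve MSE(m) = 2.5 / m:
required_m = delay_for_target(3e-2, lambda m: 2.5 / m, range(10, 201))
```

A scheme with a uniformly lower MSE curve yields a smaller required m for every target, which is exactly the delay reduction seen in Figure 5.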

Figure 5. Tradeoff between the MSE and the system delay for SNR = 30 dB and k=5.

5 Conclusions

In this paper, we proposed to adapt the sparsity of random projections according to full channel information for EH-WSNs. Compared to the conventional sparse random projections, which keep the sparsity constant over all transmission slots, the proposed scheme achieves a better tradeoff between the MSE and the system delay. The optimal sparsity problem is recast as an optimal power allocation problem that maximizes throughput under the energy-harvesting constraints, and the structure of the optimal offline power allocation is derived for the special case of an infinite battery capacity. Simulation results have shown that the proposed scheme achieves smaller MSEs than the conventional scheme and can also reduce the system delay for a given acceptable error level. However, full channel information may not always be available; for future work, we will therefore study adaptive sparse random projections with partial channel information.

References

  1. V Raghunathan, S Ganeriwal, M Srivastava, Emerging techniques for long lived wireless sensor networks. IEEE Comm. Mag. 44, 108–114 (2006).


  2. T Wark, W Hu, P Corke, J Hodge, A Keto, B Mackey, G Foley, P Sikka, M Brunig, in Proc. IEEE Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP). Springbrook: challenges in developing a long-term rainforest wireless sensor network (Sydney, Australia, 15-18 December 2008), pp. 599–604.

  3. DWK Ng, ES Lo, R Schober, Robust beamforming for secure communication in systems with wireless information and power transfer. IEEE Trans. Wireless Commun. 13, 4599–4615 (2014).


  4. DWK Ng, ES Lo, R Schober, Wireless information and power transfer: energy efficiency optimization in OFDMA systems. IEEE Trans. Wireless Commun. 12, 6352–6370 (2013).


  5. M Gatzianas, L Georgiadis, L Tassiulas, Control of wireless networks with rechargeable batteries. IEEE Trans. Wireless Commun. 9(2), 581–593 (2010).


  6. CK Ho, R Zhang, in Proc. IEEE International Symposium on Information Theory (ISIT). Optimal energy allocation for wireless communications powered by energy harvesters (Austin, Texas, USA, 2010).

  7. V Sharma, U Mukherji, V Joseph, S Gupta, Optimal energy management policies for energy harvesting sensor nodes. IEEE Trans. Wireless Commun. 9(4), 1326–1336 (2010).


  8. EJ Candes, MB Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25, 21–30 (2008).


  9. D Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52, 1289–1306 (2006).


  10. EJ Candes, T Tao, Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory. 52, 5406–5425 (2006).


  11. W Bajwa, J Haupt, A Sayeed, R Nowak, in Proceedings of the Fifth International Conference on Information Processing in Sensor Networks (IPSN). Compressive wireless sensing (Nashville, USA, 19-21 April 2006), pp. 134–142.

  12. JD Haupt, RD Nowak, Signal reconstruction from noisy random projections. IEEE Trans. Inf. Theory. 52, 4036–4048 (2006).


  13. W Wang, M Garofalakis, K Ramchandran, in Proceedings of the Sixth International Symposium on Information Processing in Sensor Networks (IPSN). Distributed sparse random projections for refinable approximation (Cambridge, USA, 25-27 April 2007), pp. 331–339.

  14. R Rana, W Hu, C Chou, in Proceedings of the Seventh European Conference on Wireless Sensor Networks (EWSN). Energy-aware sparse approximation technique (EAST) for rechargeable wireless sensor networks (Coimbra, Portugal, 17-18 February 2010), pp. 306–321.

  15. G Yang, VYF Tan, CK Ho, SH Ting, YL Guan, Wireless compressive sensing for energy harvesting sensor nodes. IEEE Trans. Signal Process. 61(18), 4491–4505 (2013).


  16. TT Cai, M Wang, G Xu, New bounds for restricted isometry constants. IEEE Trans. Inf. Theory. 56, 4388–4394 (2010).


  17. S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004).

  18. CK Ho, R Zhang, Optimal energy allocation for wireless communications with energy harvesting constraints. IEEE Trans. Signal Process. 60, 4808–4818 (2012).



Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (Grant. NRF-2012R1A1A1014392 and NRF-2014R1A1A1003562).

Author information


Corresponding author

Correspondence to Hayong Oh.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ran, R., Oh, H. Adaptive sparse random projections for wireless sensor networks with energy harvesting constraints. J Wireless Com Network 2015, 113 (2015). https://doi.org/10.1186/s13638-015-0324-3


Keywords