Adaptive sparse random projections for wireless sensor networks with energy harvesting constraints
EURASIP Journal on Wireless Communications and Networking, volume 2015, Article number: 113 (2015)
Abstract
Consider a large-scale energy-harvesting wireless sensor network (EHWSN) measuring compressible data. Sparse random projections make it feasible to approximate the data well, and the sparsity of the random projections affects both the mean square error (MSE) and the system delay. In this paper, we propose an adaptive algorithm for sparse random projections in order to achieve a better trade-off between the MSE and the system delay. Under energy-harvesting constraints, the sparsity is adapted to channel conditions via an optimal power allocation algorithm, and the structure of the optimal power allocation solution is analyzed for a special case. The performance is illustrated by numerical simulations.
1 Introduction
Energy supply is a major design constraint for conventional wireless sensor networks (WSNs), whose lifetime is limited by the total energy available in the batteries. Some specific sensors in WSNs may even consume more energy than the radio during a long acquisition time [1]. Replacing the batteries periodically may prolong the lifetime, but it is not a viable option when replacement is too inconvenient, too dangerous, or simply impossible, e.g., when sensors are deployed in toxic environments or inside human bodies. Harvesting energy from the environment is therefore a promising approach to cope with limited battery supplies and the increasing energy demand [2]. The harvestable energy includes solar, piezoelectric, and thermal energy, among others, and is theoretically unlimited. Besides, background radio-frequency (RF) signals radiated by ambient transmitters can also be a viable new source for wireless power transfer (WPT) [3,4] and (Ng et al.: Secure and Green SWIPT in Distributed Antenna Networks with Limited Backhaul Capacity, submitted). Unlike conventional WSNs, which are subject to a power constraint or a sum energy constraint, each sensor with energy-harvesting capability is constrained, in every time slot, to use at most the stored energy currently available, even though more energy may arrive in future slots. Therefore, a causality constraint is imposed on the use of the harvested energy. Current research on energy harvesting has mostly focused on wireless communication systems. Gatzianas et al. [5] considered a cross-layer resource allocation problem to maximize the total system utility, and Ho and Zhang [6] studied throughput maximization with causal side information and with full side information for wireless communication systems. Ng et al.
[3] studied the design of a resource allocation algorithm minimizing the total transmit power when the legitimate receivers are able to harvest energy from RF signals in a multi-user multiple-input single-output downlink system. Energy management policies for energy-harvesting wireless sensor networks (EHWSNs) were studied in [7], where sensor nodes have energy-harvesting capabilities, aiming at maximizing the system throughput and reducing the system delay.
For WSNs, however, accurately recovering signals is also important. Recent results in compressive sensing (CS) provide an efficient signal reconstruction method for WSNs. Data collected from wireless sensors are typically correlated and thus compressible in an appropriate transform domain (e.g., the Fourier or wavelet transform) [8]. The main idea of CS is that n data values can be well approximated using only k ≪ n transform coefficients if the data are compressible [8-13]. In particular, Wang et al. [13] proposed a distributed compressive sensing scheme for WSNs that reduces the computational complexity and the communication cost. It considers an m×n sparse random matrix whose entries have a probability g of being nonzero, so that on average there are ng nonzeros per row. The resulting data-approximation error rate is comparable to that of the optimal k-term approximation, provided the energy of the signal is not concentrated in a few elements. Thus, the sparsity factor g of the random projections affects the accuracy of the signal reconstruction. Usually, g is statistically determined according to the amount of harvested energy and is homogeneous across all sensors [14,15]. Rana et al. [14] considered only AWGN channels. Yang et al. [15] took fading channels into account and studied sufficient conditions guaranteeing a reliable and computationally efficient data approximation for sparse random projections. It is not surprising that signal recovery based on sparse random projections is suboptimal when the sparsity factor g is fixed over the entire transmission, since it cannot then reflect the channel conditions. On the other hand, the system delay m, one of the key quantities characterizing the performance of random-projection-based CS schemes, is expected to be as small as possible.
Building on [14] and [15], we observe that the lower bound on the system delay is also related to the sparsity of the random projections: the larger g is, the shorter the achievable delay. Note that there is often a trade-off between the system delay and the data approximation [8,16]. Therefore, in this paper, we consider fading channels and energy-harvesting constraints, and study the problem of adapting the sparsity of the random projections according to full channel information, in order to improve the signal recovery and reduce the system delay. To the best of our knowledge, very limited work, such as [15], has touched upon this topic, and then only with a rough discussion. The main contributions are as follows:

Considering wireless fading channels, we verify that the random projection matrix preserves, in expectation, the inner product between two projected vectors, and we then provide a lower bound on the system delay needed to achieve an acceptable data-approximation error probability.

We give a new definition of the sparsity of random projections and formulate the optimal sparsity problem, which is converted into an optimal power allocation problem maximizing the system throughput. Unlike the conventional energy allocation problem, a closed-form solution may not be available due to battery dynamics and channel dynamics. Therefore, we study the special case of an unbounded battery capacity to find the structure of the optimal solution. In this case the problem becomes a convex optimization problem, and the closed-form solution is obtained in terms of Lagrangian multipliers.
The rest of the paper is organized as follows. Section 2 gives the system model and reviews previously known results on sparse random projections. Section 3 redefines the sparsity, formulates the optimal sparsity problem for EHWSNs, and addresses the structure of the optimal solution for a special case. Section 4 provides the simulation results. Finally, Section 5 concludes the paper.
2 System model
We consider a wireless sensor network of n sensor nodes, each of which measures a data value \(s_{i}\in \mathbb {C}\) and is capable of energy harvesting. We assume a Rayleigh-fading channel; the channel coefficients h_{ij}, where 1 ≤ i ≤ m denotes the slot index and 1 ≤ j ≤ n denotes the sensor index, are independent and identically distributed (i.i.d.) complex Gaussian with zero mean and unit variance. We further assume the channel remains constant within each slot. Sensor j first multiplies its data s_{j} by a random projection \(\phi _{\textit {ij}}\in \mathbb {R}\) and then transmits in the ith time slot. At the receiver
where e_{i} is white Gaussian noise with zero mean and variance σ^{2}. After m time slots, the received vector is given as
where \(\textbf {H}=\left \{h_{\textit {ij}}\right \}\in \mathbb {C}^{m\times n}\), \(\Phi =\left \{ \phi _{\textit {ij}}\right \} \in \mathbb {R}^{m\times n}\), Z = H ⊙ Φ, and ⊙ denotes the element-wise product of two matrices. The corresponding real-valued form of (2) is
where Re{A} and Im{A} denote the real part and the imaginary part of matrix A, respectively, and
2.1 Compressible data and sparse random projections
Suppose the aggregate sensor data \(\textbf {s}\in \mathbb {C}^{n}\) from the n nodes is compressible, so that we can model it as sparse with respect to a fixed orthonormal basis \(\{\psi _{j}\in \mathbb {C}^{n}:j=1,\cdots,n\}\) [11], i.e.,
Generally, for a compressible signal s, the largest k transform coefficients capture most of the signal information, and k is usually referred to as the sparsity of s. The best k-term approximation recovers only the k largest transform coefficients and discards the remainder as zero [8], achieving near-optimal error probabilities. However, the random projection matrix Φ used in [8] is dense, which results in high computational complexity. Therefore, Wang et al. [13] proposed sparse random projections that reduce the computational complexity while guaranteeing an error probability comparable to that achieved by dense random projections. More concretely, the matrix of sparse random projections \(\Phi \in \mathbb {R}^{m\times n}\) contains i.i.d. entries
where g is a factor that gives the probability of a measurement and controls the degree of sparsity of the random projections; e.g., if g = 1, the random matrix has no sparsity, and if g = log n/n, the expected number of nonzeros in each row is log n. We can easily verify that the entries within each row are four-wise independent, while the entries across different rows are fully independent, i.e.,
Therefore, each random projection vector is pseudorandomly generated and stored in a small space.
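To make the construction concrete, the following numerical sketch (the helper name is ours, not code from [13]) draws an m×n sparse random projection matrix under the standard construction of [13]: entries take the values ±1/√g with probability g/2 each and 0 otherwise, giving zero-mean, unit-variance entries and about ng nonzeros per row.

```python
import numpy as np

def sparse_projection_matrix(m, n, g, seed=None):
    """Draw an m-by-n sparse random projection matrix.

    Each entry is +1/sqrt(g) w.p. g/2, -1/sqrt(g) w.p. g/2,
    and 0 w.p. 1 - g, so entries are zero-mean with unit
    variance and each row has n*g nonzeros on average.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((m, n))
    phi = np.zeros((m, n))
    phi[u < g / 2] = 1.0 / np.sqrt(g)
    phi[u >= 1.0 - g / 2] = -1.0 / np.sqrt(g)
    return phi

n = 1000
g = np.log(n) / n                       # about log(n) nonzeros per row
phi = sparse_projection_matrix(50, n, g, seed=1)
print(np.count_nonzero(phi) / 50)       # close to n*g, i.e. about 6.9
```

With g = log n/n each row can be generated and stored with only O(log n) nonzeros, which is what makes the projections cheap to compute and communicate.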
Corollary 1.
[13] Consider a data vector \(\textbf {u}\in \mathbb {R}^{n}\) which satisfies the condition
In addition, let V be any set of n vectors \(\left \{\textbf {v}_{1},\cdots,\textbf {v}_{n}\right \} \subset \mathbb {R}^{n}\). Suppose a sparse random matrix \(\Phi \in \mathbb {R}^{m\times n}\) satisfies the conditions
If
with probability at least 1 − n^{−γ}, the random projections \(\frac {1}{\sqrt {m}}\Phi \textbf {u}\) and \(\frac {1}{\sqrt {m}}\Phi \textbf {v}_{i}\) can produce an estimate \(\hat {a}_{i}\) for u^{T}v_{i} satisfying
□
Corollary 1 states that sparse random projections of the data vector and of any set of n vectors can produce estimates of their inner products to within a small error. Thus, sparse random projections can produce accurate estimates of the transform coefficients of the data, which are inner products between the data and the set of orthonormal basis vectors. The sufficient condition (8) bounds the peak-to-total energy of the data, guaranteeing that the signal energy is not concentrated in a small number of components. If the data are compressible in the discrete Fourier transform with compressibility parameter θ, then [13]
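The best k-term approximation mentioned above can be illustrated numerically. The sketch below (a hypothetical helper, using the orthonormal DFT as the transform basis) keeps the k largest-magnitude Fourier coefficients of a signal and reports the relative approximation error; for a signal that is exactly k-sparse in the Fourier domain, the error is essentially zero.

```python
import numpy as np

def best_k_term_error(s, k):
    """Relative l2 error of the best k-term approximation of s
    in the (orthonormal) discrete Fourier basis."""
    n = len(s)
    coeffs = np.fft.fft(s) / np.sqrt(n)        # orthonormal DFT coefficients
    keep = np.argsort(np.abs(coeffs))[-k:]     # indices of the k largest
    approx = np.zeros_like(coeffs)
    approx[keep] = coeffs[keep]
    s_hat = np.fft.ifft(approx) * np.sqrt(n)   # back to the signal domain
    return np.linalg.norm(s - s_hat) / np.linalg.norm(s)

# A test signal that is exactly 4-sparse in the Fourier domain
# (two real tones, each occupying two conjugate DFT bins):
n = 256
t = np.arange(n)
s = 2.0 * np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 40 * t / n)
print(best_k_term_error(s, 4))   # essentially zero
print(best_k_term_error(s, 2))   # drops only the weaker tone
```

For compressible (rather than exactly sparse) data, the error decays with k at a rate governed by the compressibility parameter θ.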
3 Adaptive sparse random projections
3.1 Sparse random projections with channel fading
Wang et al. [13], however, considered only the AWGN channel. Under channel fading, a natural question is whether the inner products are still preserved by sparse random projections. We redefine the sparse random projection matrix as follows,
where g_{ij} gives the probability of a projection from sensor node j at time slot i. The details of g_{ij} will be given in the next section.
Proposition 1.
Let \(\hat {\textbf {Z}}\) be the projection matrix given by (4). Define \(\textbf {u}=\frac {1}{\sqrt {m}}\hat {\textbf {Z}}\textbf {x}\) and \(\textbf {v}=\frac {1}{\sqrt {m}}\hat {\textbf {Z}}\textbf {y}\) as the random projections of two vectors x and y. The expectation and variance of the inner product of u and v are, respectively,
Proof: Since the h_{ij} are i.i.d. complex Gaussian with zero mean and unit variance, it is not difficult to verify the following equations:
By defining the independent random variables \(w_{i}=\left (\sum _{j=1}^{n}x_{j}\hat {z}_{\textit {ij}}\right)\left (\sum _{j=1}^{n}y_{j}\hat {z}_{\textit {ij}}\right)\), it can be shown that \(\textbf {u}^{T}\textbf {v}=\frac {1}{m}\sum _{i=1}^{m}w_{i}\), and the expectation and the second moment of w_{i} are
□
Proposition 1 states that the estimate of the inner product between two vectors obtained via the matrix of sparse random projections (4) is correct in expectation and has bounded variance. If a signal and a matrix of sparse random projections satisfy conditions (8) and (16), respectively, we obtain the following proposition:
Proposition 2.
Consider a data vector \(\textbf {u}\in \mathbb {R}^{n}\) satisfying condition (8). In addition, suppose a sparse random matrix \(\Phi \in \mathbb {R}^{m\times n}\) satisfies condition (16). Let
and consider an orthonormal transform \(\Psi \in \mathbb {R}^{n\times n}\). Given only \(\textbf {x}=\frac {1}{\sqrt {m}}\Phi \textbf {u}\), Î¦ and Î¨, the sparse random projections can produce an approximation with error
with probability at least 1 − n^{−γ}, if the k largest transform coefficients in magnitude give an approximation with error \(\left \|\textbf {u}-\hat {\textbf {u}}_{\textit {opt}}\right \|_{2}^{2} \leq \eta \left \|\textbf {u}\right \|_{2}^{2}\).
Proof: Follow the approach of [13] and write m = m_{1}m_{2}. Partition the m×n matrix Φ into m_{2} matrices \(\Phi _{1}, \Phi _{2}, \cdots, \Phi _{m_{2}}\), each of size m_{1}×n. Using the Chebyshev inequality, we have
where (21) follows from the fact that \(\frac {\|\textbf {u}\|_{\infty }}{\|\textbf {u}\|_{2}}\leq C\). Thus, we can obtain a constant probability p by setting \(m_{1}=\mathcal {O}\left (\frac {2+\sum _{j=1}^{n}\frac {3}{\min _{i} g_{\textit {ij}}}C^{2}}{\epsilon ^{2}}\right)\). For any pair of vectors u and v_{i}, the median of the m_{2} independent estimates \(\hat {w}_{i}\) lies outside the tolerable approximation interval with probability at most \(e^{-c^{2}m_{2}/12}\), where 0 < c < 1 is some constant, since each independent estimate lies outside that interval with probability at most p. Setting \(m_{1}=\mathcal {O}\left (\frac {2+\sum _{j=1}^{n}\frac {3}{\min _{i}g_{\textit {ij}}}C^{2}}{\epsilon ^{2}}\right)\) and \(m_{2}=\mathcal {O}((1+\gamma)\log n)\) gives p = 1/4 and p_{e} = n^{−γ} for some constant γ > 0. Finally, for \(m=m_{1}m_{2} = \mathcal {O}\left (\frac {\left (1+\gamma \right)}{\epsilon ^{2}}\left (2+\sum _{j=1}^{n}\frac {3}{\min _{i}g_{\textit {ij}}}C^{2}\right)\log n\right)\), the random projections Φ preserve all pairwise inner products within an approximation error ε with probability at least 1 − n^{−γ}.
â–¡
Proposition 2 states that sparse random projections can produce a data approximation with error comparable to that of the best k-term approximation, with high probability.
3.2 Optimal power allocation based sparsity adaption
From the above propositions, we notice that the factor \(\sum _{j=1}^{n}\frac {1}{g_{\textit {ij}}}\) controls both the estimation variance (15) and the lower bound (18) on the system delay m. If g_{ij} is small for node j at time slot i, the estimate may have a high variance, producing a low-accuracy approximation; meanwhile, m must be very large to guarantee an acceptable error probability. An energy-aware sparsity was given for EHWSNs in [14] as \(g_{j} =\frac {E_{j}}{\sum _{j=1}^{n} E_{j}}\cdot \frac {m}{n}\), where E_{j} denotes the harvested energy profile of node j. Usually, g_{j} is predetermined and uniform across nodes and time slots, i.e., g_{j} = g. This is clearly not a sophisticated definition, because it accounts neither for the varying channel conditions across nodes and slots nor for the energy-harvesting constraints. A more specific definition of sparsity is therefore desired. We redefine the sparsity of the random projections as follows,
where \(p_{\textit {ij}}^{*}\) is the energy allocated to node j during the ith time slot. \(p_{\textit {ij}}^{*}\) is determined in terms of full information, consisting of the past, present, and future channel conditions and the amounts of harvested energy. The full-information case may be justified if the environment is highly predictable, e.g., when the energy is harvested from the vibration of motors that are turned on only during fixed operating hours and line-of-sight is available for communications.
If the energy-harvesting profile E_{ij} of each node is known in advance and kept constant during all transmission slots, the optimal sparsity problem becomes an optimal power allocation problem. The question is then which performance measure should drive the power allocation. The performance of random-projection-based CS schemes is characterized by two quantities, namely the data-approximation error probability (or the mean square error (MSE)) and the system delay, and there is often a trade-off between the two [16]. Under an allowable MSE η > 0, we thus define the achievable system delay D(η) as
where \(\sum _{i=1}^{m}\log _{2}\left [1+\frac {\left |h_{\textit {ij}}\right |^{2}p_{\textit {ij}}}{\sum _{l=1,l\neq j}^{n} \left |h_{\textit {il}}\right |^{2}E_{\textit {il}}+\sigma ^{2}}\right ]\) is the lower bound on the short-term throughput of node j and B is the amount of data each node is required to transmit. Constraint (26) reflects that the harvested energy cannot be consumed before it arrives, and constraint (27) models the limited battery capacity. Battery overflow happens when the reserved energy plus the harvested energy exceeds the battery capacity; this is undesirable because the data rate could be increased by using the energy earlier instead of letting it overflow. If we assume that there is an m satisfying condition (24), the problem of minimizing the system delay is immediately converted into a throughput maximization problem, formulated as follows:
Note that the objective (28) is concave in the p_{ij} since it is a sum of log functions, and the constraints are all affine. Consequently, the optimization problem is convex, and the optimal solution satisfies the Karush-Kuhn-Tucker (KKT) conditions [17]. With the assumption that the initial battery energy E_{0j} is always known by node j, define the Lagrangian function for any multipliers λ_{i} ≥ 0, μ_{i} ≥ 0, β_{i} ≥ 0 as
with additional complementary slackness conditions
We apply the KKT optimality conditions to the Lagrangian function (30). Setting ∂L/∂p_{ij} = 0, we obtain the unique optimal energy level \(p_{\textit {ij}}^{*}\) in terms of the Lagrange multipliers as
where \(\alpha _{i}=\left [\ln 2\sum _{k=i}^{m}(\lambda _{k}-\mu _{k})-\beta _{i}\right ]^{-1}\), μ_{m} = 0, and \(\gamma _{i}=\frac {\left |h_{\textit {ij}}\right |^{2}}{\sum _{l=1,l\neq j}^{n}\left |h_{\textit {il}}\right |^{2}E_{\textit {il}}}\).
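Solutions of this KKT type take the familiar water-filling form. A minimal sketch (the helper name is ours, and we assume the per-slot rule \(p_{ij}^{*}=\left [\alpha _{i}-1/\gamma _{i}\right ]^{+}\)): power is poured up to the water level α_{i} in every slot where the effective channel quality γ_{i} is good enough, and no power is spent otherwise.

```python
import numpy as np

def waterfill(alpha, gamma):
    """Water-filling allocation: p_i = max(0, alpha_i - 1/gamma_i).

    alpha_i is the water level in slot i; gamma_i is the effective
    channel quality (SINR slope) of that slot. Slots with
    1/gamma_i above the water level receive zero power.
    """
    alpha = np.asarray(alpha, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    return np.maximum(0.0, alpha - 1.0 / gamma)

print(waterfill([1.0, 1.0, 2.0], [0.5, 4.0, 1.0]))   # [0.   0.75 1.  ]
```

The first slot gets nothing because its inverse channel quality (1/0.5 = 2) exceeds the water level 1, while the better slots are filled up to their levels.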
3.3 Structural solution
If the battery capacity is finite, the optimal water level is not monotonic. The structure of the optimal energy allocation therefore cannot be described in a simple, clear way, and an online algorithm may be required. Since we are more interested in an offline power allocation structure, we study the following special case.
Proposition 3.
If E_{max} = ∞, the optimal water levels are non-decreasing, i.e., α_{i} ≤ α_{i+1}. In addition, the water level changes only when all the energy harvested before the current transmission has been used up.
Proof: Without the battery capacity constraint, the water level is given as \(\alpha _{i}=\left (\ln 2\sum _{k=i}^{m}\lambda _{k}\right)^{-1}\). Since λ_{k} ≥ 0 for all k, we have α_{i} ≤ α_{i+1}. If α_{i} < α_{i+1}, then by the definition \(\alpha _{i}=\left (\ln 2\sum _{k=i}^{m}\lambda _{k}\right)^{-1}\) we get λ_{i} ≠ 0, i.e., λ_{i} > 0. The complementary slackness condition (32) then holds only when \(\left (\sum _{k=1}^{i}p_{\textit {kj}}-\sum _{k=1}^{i-1}E_{\textit {kj}}\right) = 0\), which means all stored energy is used up before the current transmission. □
The case E_{max} = ∞ represents an ideal energy buffer, i.e., a device that can store any amount of energy, has no charging inefficiency, and does not leak energy over time. As an example, consider a sensor node installed to monitor the health of heavy-duty industrial motors. Suppose the node operates on energy harvested from the machine's vibrations, the harvested energy exceeds the consumed power, and the health-monitoring function is needed only when the motor is powered on. Proposition 3 presents an analytically tractable structure of the optimal sparsity. Intuitively, harvested energy is reserved in the battery for later transmissions, in order to soften the causality constraint and improve the flexibility of the harvested-energy allocation. The optimal water level can be obtained by a power allocation policy with the following structure: the water level is non-decreasing and the harvested energy is used conservatively. Based on these structural properties, we can use the following reserve multi-stage water-filling algorithm, modified from [18], to achieve the solution:
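For intuition, the non-decreasing water-level structure of Proposition 3 can be sketched in the simplest setting: a single node, flat unit channel gain, and an infinite battery (all simplifying assumptions; the helper name is ours and this is not the multi-stage algorithm of [18] verbatim). The classic offline schedule allocates the smallest feasible constant power over the longest prefix, then recurses, so power levels are piecewise constant and non-decreasing, and each level change coincides with the stored energy being used up.

```python
import numpy as np

def offline_powers(E):
    """Offline power allocation under energy causality with an
    infinite battery (single node, flat unit gains).

    E[i] is the energy harvested at the start of slot i. Over each
    stretch we pick the smallest average arrival rate achievable
    from the current slot onward, which yields a non-decreasing,
    piecewise-constant schedule whose level changes occur exactly
    when the stored energy hits zero.
    """
    m = len(E)
    p = np.zeros(m)
    i = 0
    while i < m:
        total, best_avg, best_j = 0.0, np.inf, i
        for j in range(i, m):
            total += E[j]
            avg = total / (j - i + 1)
            if avg <= best_avg:          # on ties, prefer the longest stretch
                best_avg, best_j = avg, j
        p[i:best_j + 1] = best_avg
        i = best_j + 1
    return p

E = np.array([4.0, 0.0, 0.0, 6.0, 0.0])
print(offline_powers(E))   # [1.333... 1.333... 1.333... 3. 3.]
```

The early harvest of 4 units is spread evenly over the first three slots, and the level then steps up to 3 once the later, larger harvest arrives, exactly the conservative, non-decreasing behavior described above. With fading gains and a finite battery the same idea generalizes to directional water-filling, but no closed form remains, matching the discussion in Section 3.3.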
4 Simulation results
We consider an EHWSN containing n = 500 sensor nodes and a uniform energy-harvesting rate E_{ij} = 2 dB for all nodes, and we evaluate the performance of the proposed adaptive sparse random projections. One of the performance measures is the mean square error (MSE), given as
Figure 1 illustrates the data-approximation performance of sparse random projections for different degrees of sparsity. The larger g is, the smaller the achieved MSE. However, a larger g also brings higher computational complexity. Therefore, the sparsity factor g should be chosen carefully to balance the MSE against the complexity. Intuitively, when channel conditions are poor, a larger g should be selected to guarantee an acceptable MSE, whereas a smaller g should be selected to save computational complexity when channel conditions are good enough. This motivates us to adapt the sparsity of the random projections to the channel conditions, improving the data-approximation performance as well as the system delay.
Figures 2 and 3 compare the MSE obtained by our proposed adaptive sparse random projections (denoted as 'Adaptive' in the legend) with that obtained by the conventional sparse random projections (denoted as 'Fixed') as a function of the number of transmission slots m, for SNR = 15 dB and 30 dB, respectively. The conventional sparse random projections with a fixed sparsity g = 1/4 serve as the baseline, since this choice achieves an acceptable MSE with modest complexity. We observe that the proposed adaptive sparse random projections achieve a better trade-off between the MSE and the system delay than the conventional scheme for both k = 10 and k = 5. However, the performance gap between the two schemes shrinks as the SNR increases, which makes sense: as the channel conditions improve, the benefit of adapting the sparsity becomes limited. For both SNR = 30 dB and 15 dB, we notice that k = 5 provides better performance than k = 10.
In Figure 4, we compare the conventional sparse random projections with fixed sparsity against the proposed scheme as a function of the number of transmissions (i.e., the system delay) m for different SNRs. The proposed scheme again outperforms the conventional one for both SNR = 20 dB and 30 dB, resulting in a better trade-off between the MSE and the system delay. We also notice that, for both schemes, there is no performance difference between SNR = 20 dB and SNR = 30 dB when m < 80, whereas the MSE decreases with increasing SNR once m exceeds 80. The reason is that m is also one of the factors controlling the estimation variance in (15). If m is not sufficiently large, it is the dominant factor affecting the MSE, so increasing the SNR barely changes the performance. Once m is large enough, further increasing m yields only a very limited MSE improvement; the SNR then becomes the dominant factor, and increasing it benefits the MSE.
Figure 5 shows the trade-off between the system delay and the MSE for the proposed adaptive sparse random projections and the conventional ones when SNR = 30 dB and k = 5. To reach an MSE of 3×10^{−2}, the conventional sparse random projections require about m = 95 transmissions, while the proposed scheme requires only m = 78. Consequently, the proposed scheme achieves a better trade-off than the conventional one.
5 Conclusions
In this paper, we proposed to adapt the sparsity of random projections according to full channel information for EHWSNs. Compared to the conventional sparse random projections, which keep the sparsity constant over all transmission slots, the proposed scheme achieves a better trade-off between the MSE and the system delay. The optimal sparsity problem is turned into an optimal power allocation problem maximizing throughput under energy-harvesting constraints. An offline power allocation structure is available for the special case of infinite battery capacity. Simulation results have shown that the proposed scheme achieves smaller MSEs than the conventional scheme and can also reduce the system delay for a given acceptable error rate. However, full channel information may not always be available. Therefore, for future work, we will study adaptive sparse random projections with partial channel information.
References
V Raghunathan, S Ganeriwal, M Srivastava, Emerging techniques for long lived wireless sensor networks. IEEE Commun. Mag. 44, 108–114 (2006).
T Wark, W Hu, P Corke, J Hodge, A Keto, B Mackey, G Foley, P Sikka, M Brunig, in IEEE Intelligent Sensors, Sensor Networks and Information Processing (IEEE ISSNIP). Springbrook: challenges in developing a long-term rainforest wireless sensor network (Sydney, Australia, 15-18 December 2008), pp. 599–604.
DWK Ng, ES Lo, R Schober, Robust beamforming for secure communication in systems with wireless information and power transfer. IEEE Trans. Wireless Commun. 13, 4599–4615 (2014).
DWK Ng, ES Lo, R Schober, Wireless information and power transfer: energy efficiency optimization in OFDMA systems. IEEE Trans. Wireless Commun. 12, 6352–6370 (2013).
M Gatzianas, L Georgiadis, L Tassiulas, Control of wireless networks with rechargeable batteries. IEEE Trans. Wireless Commun. 9(2), 581–593 (2010).
CK Ho, R Zhang, in Int. Symposium Inf. Theory. Optimal energy allocation for wireless communications powered by energy harvesters (Austin, Texas, USA, 2010).
V Sharma, U Mukherji, V Joseph, S Gupta, Optimal energy management policies for energy harvesting sensor nodes. IEEE Trans. Wireless Commun. 9(4), 1326–1336 (2010).
EJ Candes, MB Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25, 21–30 (2008).
D Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52, 1289–1306 (2006).
EJ Candes, T Tao, Near optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory. 52, 5406–5425 (2006).
W Bajwa, J Haupt, A Sayeed, R Nowak, in Proceedings of The Fifth International Conference on Information Processing in Sensor Networks (IEEE IPSN). Compressive wireless sensing (Nashville, USA, 19-21 April 2006), pp. 134–142.
JD Haupt, RD Nowak, Signal reconstruction from noisy random projections. IEEE Trans. Inf. Theory. 52, 4036–4048 (2006).
W Wang, M Garofalakis, K Ramchandran, in The 6th International Symposium on Information Processing in Sensor Networks (IEEE IPSN). Distributed sparse random projections for refinable approximation (Cambridge, USA, 25-27 April 2007), pp. 331–339.
R Rana, W Hu, C Chou, in Proceedings of The Seventh European Conference on Wireless Sensor Networks (EWSN). Energy-aware sparse approximation technique (EAST) for rechargeable wireless sensor networks (Coimbra, Portugal, 17-18 February 2010), pp. 306–321.
G Yang, VYF Tan, CK Ho, SH Ting, YL Guan, Wireless compressive sensing for energy harvesting sensor nodes. IEEE Trans. Signal Process. 61(18), 4491–4505 (2013).
TT Cai, M Wang, G Xu, New bounds for restricted isometry constants. IEEE Trans. Inf. Theory. 56, 4388–4394 (2010).
S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, 2004).
CK Ho, R Zhang, Optimal energy allocation for wireless communications with energy harvesting constraints. IEEE Trans. Signal Process. 60, 4808–4818 (2012).
Acknowledgements
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning (Grants NRF-2012R1A1A1014392 and NRF-2014R1A1A1003562).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Ran, R., Oh, H. Adaptive sparse random projections for wireless sensor networks with energy harvesting constraints. J Wireless Com Network 2015, 113 (2015). https://doi.org/10.1186/s13638-015-0324-3