Quantized Network Coding for correlated sources
 Mahdy Nabaee^{1} and
 Fabrice Labeau^{1}
https://doi.org/10.1186/1687-1499-2014-40
© Nabaee and Labeau; licensee Springer. 2014
Received: 1 October 2013
Accepted: 18 February 2014
Published: 13 March 2014
Abstract
In this paper, we present a data gathering technique for sensor networks that exploits correlation between sensor data at different locations in the network. Contrary to distributed source coding, our method does not rely on knowledge of the source correlation model in each node although this knowledge is required at the decoder node. Similar to network coding, our proposed method (which we call Quantized Network Coding) propagates mixtures of packets through the network. The main conceptual difference between our technique and other existing methods is that Quantized Network Coding operates on the field of real numbers and not on a finite field. By exploiting principles borrowed from compressed sensing, we show that the proposed technique can achieve a good approximation of the network data at the sink node with only a few packets received and that this approximation gets progressively better as the number of received packets increases. We explain in the paper the theoretical foundation for the algorithm based on an analysis of the restricted isometry property of the corresponding measurement matrices. Extensive simulations comparing the proposed Quantized Network Coding to classic network coding and packet forwarding scenarios demonstrate its delay/distortion advantage.
Keywords
Linear network coding; Distributed source coding; Compressed sensing; Restricted isometry property; ℓ_{1} minimization
1 Introduction
Flexible, low-cost, and long-lasting implementations of wireless sensor networks have made them a compelling alternative to conventional wired sensing structures in a wide variety of applications, including medicine, transportation, and military [1]. As a relatively new technology, wireless sensor networks face more challenges in the networking aspects of communication than in the classic physical layer aspects [2]. One of these challenges is the gathering of sensed data at a central node of the network, where delivery delay, precision, and robustness to network changes are emerging issues.
Packet forwarding via routing is widely used in different implementations of sensor networks. While it achieves capacity rates in the case of multiple-session unicast in lossless networks [3], packet forwarding requires an appropriate routing protocol [4] to be run. Moreover, packet forwarding adapts slowly to network changes caused by the deployment of new nodes or by link failures.
Network coding [3] has been proposed as an alternative to packet forwarding in sensor networks [5, 6]. In network coding, intermediate nodes send functions of their incoming packets rather than forwarding the original content. Furthermore, in the case of lossy networks, network coding offers better error correction capability than packet forwarding, as a result of network diversity. The use of random linear functions, also known as random linear network coding, is proved to be sufficient in lossless networks [7, 8]. Moreover, theoretical analysis shows that when network coding is used for transmission, no queuing is required to achieve the capacity rates of the network [3], and network coding in lossy networks can result in improved achievable rate regions compared to packet forwarding [9, 10].
In the case of correlated sources, distributed source coding [11, 12] on top of packet forwarding is proved to be sufficient when dealing with networks of lossless links [13]. Similar to packet forwarding, network coding can be applied separately on top of distributed source coding for correlated sources [14, 15]. However, one has to perform joint source-network decoding to achieve the theoretical performance limits, which may not be feasible because of its computational complexity [15]. Different solutions have been proposed to tackle this practicality issue [16–18], using low-density codes and the sum-product algorithm [19] for decoding.
Distributed source coding requires the availability of appropriate marginal coding rates at each encoder node; similarly, the deployment of joint source-network decoding requires some knowledge of the correlation model of the sources on the encoding side. This knowledge might not be available in all cases, even more so when the source characteristics change over time.
Motivated by this observation, we aim to develop a data gathering and transmission scheme that, like network coding, does not rely on routing but at the same time can intrinsically take advantage of the source correlation. Our approach models source correlation through a sparsity or compressibility assumption; combined with a specific data gathering scheme inspired by network coding but acting in the real field, this assumption allows us to develop recovery algorithms at the sink node that achieve approximate data recovery with low delay. Our recovery mechanism is based on ideas borrowed from compressed sensing [20, 21], in which the inter-node correlation model of the messages, interpreted as near-sparsity in some domain, is used.
Recently, the idea of using compressed sensing and sparse recovery concepts in sensor networks has drawn a lot of attention [22–25]. Specifically, with the aid of compressed sensing concepts, compression of inter-node correlated data without using their correlation model is achieved in [22, 23]. Moreover, in [26, 27], a theoretical discussion of sparse recovery from graph-constrained measurements, motivated by network monitoring applications, is presented. Joint source, channel, and network coding was also proposed in [28], where random linear mixing is used to compress temporally and spatially correlated sources. In [29], the practical feasibility of finite-field network coding of highly correlated sources was investigated, with the aid of low-density codes and belief propagation-based decoding. However, a solid theoretical investigation of the feasibility of adopting sparse recovery in random linear network coding has not been carried out previously.
Real-field network coding has shown interesting advantages over conventional finite-field network coding [30]. In our earlier work [31], we combined the idea of real-field network coding with concepts from compressed sensing and proposed a non-adaptive distributed compression scheme, called Quantized Network Coding (QNC), for exactly sparse sources. Furthermore, in [32], we initiated a discussion of the theoretical feasibility of compressed sensing-based network coding, using the restricted isometry property of random matrices. In this paper, we extend our previous work from [31, 32] in two specific ways: (i) we extend the network source model from exactly sparse to near-sparse signals, and (ii) we provide a detailed mathematical and numerical justification of the use of sparse recovery algorithms (including a bound on the reconstruction error) for this source model. Finally, extensive computer simulations compare the performance of the proposed QNC scenario with that of other network transmission scenarios. Specifically, our focus is on the distributed compression capabilities of the proposed QNC scenario in a lossless setting; the study of robust transmission in lossy cases is left for future work.
Although the idea of using compressed sensing in sensor networks was initially proposed in [22], its theoretical and practical feasibility has not been studied through a mathematical formulation. Additionally, we discuss the use of compressed sensing in a network coding-based scenario, which involves quantization and is therefore different from the work in [22].
As another contribution of our work, we discuss the satisfaction of the RIP in a network coding scenario, which has not been addressed in other works. Specifically, in [25, 28], the authors do not discuss explicit conditions under which compressed sensing encoding (and decoding) works properly^{a}. In this work, we propose conditions on the network coding coefficients which ensure robust recovery of the messages, using the restricted isometry property.
Finally, our QNC scenario differs from other proposed schemes in that we perform quantization to match the limited-rate lossless communication between the nodes, as opposed to using only analog network coding. Specifically, we study the behavior of the so-called tail probability [32] in our QNC scheme and show that it is similar to that observed for the classic independent and identically distributed (i.i.d.) Gaussian measurement matrix [33, 34]. This leads us to conclude that our scheme requires a number of received measurements of the same order as that classic case (see Section 4).
A detailed description of the data gathering scenario studied in this paper, together with notation, is presented in Section 2. In Section 3, we introduce and formulate our proposed Quantized Network Coding algorithm, followed by a discussion of its theoretical feasibility, using the restricted isometry property, in Section 4. In Section 5, we present the decoding algorithm used to recover quantized network coded packets and derive a performance bound on the recovery error. Our simulation setup and results are presented in Section 6. Finally, in Section 7, we conclude the paper with a discussion of the proposed method and ongoing work.
2 Problem description and notation
In this paper, we limit our study to a network with lossless links with limited capacity. This model could also correspond to lossy networks, where appropriate channel coding would have been applied. A more realistic lossy network model is left as a future work.
2.1 Network
The content of edge e at time instant t is represented by Y_{ e }(t), where t is the discrete (integer) time index, during which a block of L channel symbols^{b} is transmitted. Y_{ e }(t) takes values in a finite alphabet of size $\lfloor {2}^{L{C}_{e}}\rfloor $, where ⌊·⌋ denotes rounding down to the nearest integer. In the rest of the paper, the realizations of all capital-letter random variables are denoted by the corresponding lowercase letters.
2.2 Source signals
The nodes of the network are equipped with sensors; specifically, we model the signal sensed at each node v as an information source, X_{ v }, where ${X}_{v}\in \mathbb{R}$. To reflect the natural correlation between sensed data at different nodes, we assume that the vector of signals X_{ v } is near-sparse in some transform domain; that is, there is an orthonormal transform ϕ such that $\underline{X}=\varphi \underline{S}$, where $\underline{S}$ is well approximated by its best k-term approximation ${\underline{S}}_{k}$, i.e., ${\underline{S}}_{k}$ is k-sparse. An example of the sparsifying transform matrix, ϕ, is the Karhunen-Loève transform of the messages.
Moreover, we assume that the messages, X_{ v }’s, take their values in a bounded interval between −q_{max} and +q_{max}. This is a reasonable assumption, as the sensing range of sensors is usually limited. The choice of q_{max} can be made after a statistical study of the realizations of the X_{ v }’s and can define a confidence region in which most realizations of the X_{ v }’s lie. Note that the sparsity model used in this paper is different from the conventional joint sparse model (JSM) [35], in that our node source signals, or messages, are scalar random variables without correlation over time in each node. This is a valid assumption, as a local transform coding could be applied to the time samples to generate a set of samples with no temporal redundancy.
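As an illustration of this source model, the following sketch (with hypothetical parameter values for n, k, ε_{ k }, and q_{max}, not taken from the paper) draws a near-sparse coefficient vector and maps it through a random orthonormal transform:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, eps_k, q_max = 100, 5, 0.02, 1.0

# Near-sparse coefficient vector: k dominant entries plus a small tail.
s = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s[support] = 1.0 + rng.random(k)            # dominant coefficients
s = s + eps_k * rng.standard_normal(n)      # small non-sparse tail

# Random orthonormal sparsifying transform (QR of a Gaussian matrix);
# the Karhunen-Loeve transform would play this role in practice.
phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Messages x = phi s, normalized into [-q_max, +q_max].
x = phi @ s
x = x * (q_max / np.abs(x).max())
```

The normalization step mirrors the bounded-range assumption above; in a deployment, q_{max} would come from a statistical study of the sensed data rather than from rescaling.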
2.3 Data gathering
Having these correlated information sources and the information network characterized, we study the transmission of X_{ v }’s to a single gateway node. The gateway or decoder node, denoted by v_{0}, ${v}_{0}\in \mathcal{V}$, has high computational resources and is usually in charge of forwarding the information to a next level network, e.g., a wired backbone network. The described (single session) incast of sources to the unique decoder node is referred to as data gathering.
3 Quantized Network Coding
3.1 Principle
Random linear network coding for multicast of independent sources has been proposed and studied in [8], where the algebraic operations are carried out in a finite field. Since our work is motivated by the concepts of compressed sensing, whose results hold in the real field, we have to use a real-field alternative to conventional finite-field network coding. At the same time, the finite capacity of the edges has to be coped with appropriately. As a result, we propose a method that we call Quantized Network Coding, which uses quantization to match the infinite alphabet of real-field network-coded packets to the limited capacity of the network links.
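A minimal sketch of such an edge quantizer, assuming a uniform characteristic over [−q_{max}, +q_{max}] with ⌊2^{LC_e}⌋ levels (the quantizer actually used in the simulations is specified in Section 6.2):

```python
import numpy as np

def quantize(y, L, C_e, q_max):
    """Uniform quantizer with floor(2**(L*C_e)) levels on [-q_max, +q_max],
    returning the midpoint of the cell containing y."""
    levels = int(np.floor(2 ** (L * C_e)))
    delta = 2.0 * q_max / levels                 # quantization step
    y = np.clip(y, -q_max, q_max - 1e-12)        # guard the upper edge
    idx = np.floor((y + q_max) / delta)          # cell index in [0, levels)
    return -q_max + (idx + 0.5) * delta
```

For in-range inputs, the quantization error is bounded by delta/2, which is the bound that drives the measurement-noise analysis later in the paper.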
3.2 End-to-end equations
and I denotes the identity matrix.
In conventional linear network coding, the total number of measurements, m (see (19)), is at least equal to the number of data sources, n (here, the number of nodes in the network). Typically, the total measurement matrix has full column rank, and if there is no uncertainty caused by measurement noise, we can uniquely find a solution. In this paper, we are interested in the feasibility of robust recovery of $\underline{X}$ when fewer measurements are received at the decoder than there are messages, i.e., m<n.
The characteristic Equation (20) describing the QNC scenario can be treated as a compressed sensing measurement equation. This gives us an opportunity to apply the results in the literature of compressed sensing and sparse recovery [20, 37] to our QNC scenario with nearsparse messages. However, one needs to examine the required conditions which guarantee sparse recovery in the proposed QNC scenario. In the following, we discuss theoretical and practical feasibility of robust recovery with a compressed sensing perspective.
4 Restricted isometry property
One of the main advantages of the compressed sensing approach is that it relies on a simple model of correlation for the sources; if sparse reconstruction can be applied successfully to recover $\underline{X}$ from Equation 20 at a given time t, this is achieved without requiring the encoders (network nodes) to know much about the underlying signal correlation. This section discusses the design of the linear mixing coefficients α_{e,v}(t) and ${\beta}_{e,{e}^{\prime}}\left(t\right)$ and the impact of this design on the ability to apply sparse reconstruction techniques at the sink node v_{0} to approximately recover the n source signals $\underline{X}$ from m measurements $\underline{Z}\left(t\right)$ at a given time t, where m≪n.
4.1 The restricted isometry property
One of the properties that is widely used to characterize appropriate measurement matrices in the compressed sensing literature is the restricted isometry property (RIP) [33]. Roughly speaking, this property provides a measure of norm conservation under dimensionality reduction [34]. In compressed sensing, the RIP of the measurement matrix between the sparse domain and the measurement domain allows to draw strong conclusions about the possibility to recover the original data from a small set of measurements [33]. In our case, this means that the RIP should hold for the measurement matrix Θ_{tot}(t)=Ψ_{tot}(t)ϕ.
Random matrices with i.i.d. zero-mean Gaussian entries are known to be appropriate measurement matrices for compressed sensing. Explicitly, an m×n i.i.d. Gaussian random matrix, denoted G, with entries of variance $\frac{1}{m}$, satisfies the RIP of order k and constant δ_{ k } with probability exceeding $1-{e}^{-{\kappa}_{1}m}$ (called overwhelming probability) if $m>{\kappa}_{2}k\log\left(\frac{n}{k}\right)$, where κ_{1} and κ_{2} depend only on the value of δ_{ k } (Theorem 5.2 in [38]).
which is smaller than the order of n, the size of the data [38].
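This norm-conservation behavior is easy to observe numerically. The following sketch (illustrative parameters of our choosing) checks that an i.i.d. Gaussian matrix with entry variance 1/m approximately preserves the ℓ_{2} norms of random k-sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, trials = 200, 80, 5, 500

# i.i.d. Gaussian measurement matrix with entries ~ N(0, 1/m).
G = rng.standard_normal((m, n)) / np.sqrt(m)

ratios = []
for _ in range(trials):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(G @ x) / np.linalg.norm(x))

# For a matrix satisfying RIP with constant delta_k, these ratios
# should all lie within [sqrt(1 - delta_k), sqrt(1 + delta_k)].
```

With m well above k log(n/k), the ratios concentrate near 1, which is the empirical face of the RIP statement above.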
4.2 QNC design for RIP
We now turn to the design of QNC coefficients in Equation 6 so that the overall design satisfies RIP with high probability. We assemble here several results from the literature and additional simulations to motivate the proposed design.
In [31, 32], we proposed a design for local network coding coefficients, ${\beta}_{e,{e}^{\prime}}\left(t\right)$’s and α_{e,v}(t)’s, which results in an appropriate total measurement matrix, Ψ_{tot}(t), in the compressed sensing framework.
Theorem 1(Theorem 3.1 in [32])
Consider a Quantized Network Coding scenario, in which the network coding coefficients, α_{e,v}(t) and ${\beta}_{e,{e}^{\prime}}\left(t\right)$, are such that:

α_{e,v}(t)=0, ∀t>2.

α_{e,v}(2)’s are independent zeromean Gaussian random variables.

${\beta}_{e,{e}^{\prime}}\left(t\right)$’s are deterministic.
For such a scenario, the entries of the resulting Ψ_{tot}(t) are zeromean Gaussian random variables. Further, the entries of different columns of Ψ_{tot}(t) are mutually independent. ■
In cases where the number of outgoing edges is greater than the number of incoming edges, i.e., |Out(v)|>|In(v)|, some of the outgoing edges are randomly removed (not used for transmission) to ensure that the generated ${\beta}_{e,{e}^{\prime}}\left(t\right)$’s are locally orthogonal. Furthermore, the second equation in (25) is a coefficient normalization which has no specific impact at this stage of the analysis, but which will be important in the study of bounds on sparse recovery performance in Section 5. Heuristically, this choice of orthogonal set makes each outgoing packet of each node innovative.
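One way to obtain such a locally orthogonal, normalized coefficient set (a sketch of ours, not necessarily the construction prescribed by (25)) is a QR decomposition of a random matrix, one row per surviving outgoing edge:

```python
import numpy as np

def local_beta(n_in, n_out, rng):
    """Draw one coefficient row of length n_in per outgoing edge such that
    the rows are orthonormal; requires n_out <= n_in, so extra outgoing
    edges must be removed first (as described in the text)."""
    assert n_out <= n_in, "remove extra outgoing edges first"
    A = rng.standard_normal((n_in, n_out))
    Q, _ = np.linalg.qr(A)      # n_in x n_out, orthonormal columns
    return Q.T                  # rows = beta vectors for the outgoing edges

rng = np.random.default_rng(2)
B = local_beta(n_in=5, n_out=3, rng=rng)
```

Orthonormal rows give each outgoing packet an independent direction in the incoming-packet space, which is the "innovative packet" heuristic mentioned above.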
by proving the following theorem.
Theorem 2(Theorem 4.1 in [32]).
In [32], we have derived a detailed expression of the tail probability (26). Our ultimate goal would be to use this expression to conclude directly that the number of necessary measurements m in the QNC scenario is of the same order as that of a well-known Gaussian measurement matrix, as defined above. However, the relationship between the network and QNC parameters on the one hand and the measurement matrix Ψ_{tot}(t) on the other hand is too complicated to easily draw conclusions (see Equations 8, 9, and 16). We therefore resort to the following reasoning: we first show through simulations that the tail probabilities for the QNC and Gaussian measurement matrices are of the same order; we then conclude that the QNC and Gaussian measurement matrices behave similarly in terms of RIP satisfaction and thus in terms of the required number of measurements.
Our numerical evaluations in Figure 3 show that for the same value of tail probability, the QNC measurement matrix, Ψ_{tot}(t), and the i.i.d. Gaussian matrix, G, require a number of measurements m of the same order.
We can therefore also say, using Theorem 2, that the QNC measurement matrix, Ψ_{tot}(t), and the i.i.d. Gaussian matrix, G, have a similar behavior in terms of satisfying RIP as a function of m, so that they will typically require values of m of the same order to ensure sparse recovery.
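For the Gaussian baseline alone, the tail probability in question can be estimated by a small Monte Carlo sketch (illustrative parameters; the QNC matrix construction itself is not reproduced here):

```python
import numpy as np

def gaussian_tail_prob(m, n, k, delta, trials, rng):
    """Monte Carlo estimate of P(| ||G x||^2 - ||x||^2 | > delta ||x||^2)
    over random k-sparse unit vectors x, with G having i.i.d. N(0, 1/m)
    entries redrawn each trial."""
    hits = 0
    for _ in range(trials):
        G = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x /= np.linalg.norm(x)
        if abs(np.linalg.norm(G @ x) ** 2 - 1.0) > delta:
            hits += 1
    return hits / trials

rng = np.random.default_rng(3)
p_small = gaussian_tail_prob(m=20, n=100, k=4, delta=0.5, trials=400, rng=rng)
p_large = gaussian_tail_prob(m=200, n=100, k=4, delta=0.5, trials=400, rng=rng)
# The tail probability shrinks as the number of measurements m grows.
```

Comparing such curves for the QNC matrix Ψ_{tot}(t) against this Gaussian baseline is exactly the comparison reported in Figure 3.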
In the following section, we extend our discussion to robust recovery in the QNC scenario, using the guarantees implied by the satisfaction of the RIP.
5 Decoding using sparse recovery
In this section, we explore the performance of decoding using sparse recovery based on Equation 20 and the QNC design proposed in Theorem 1. It is well known that recovery of exactly sparse vectors from an underdetermined set of linear measurements can be done with no error, using linear programming [39]. Specifically, theoretical work shows that the NP-hard ℓ_{0} minimization can be replaced with ℓ_{1} minimization without any associated error when dealing with noiseless measurements [37, 39]. However, when dealing with noisy measurements, ℓ_{1}-min recovery does not necessarily offer a minimum mean squared error solution, and much work is still being done to develop practical, near minimum mean squared error recovery algorithms for noisy cases. Sparse recovery from quantized measurements has been studied recently in a number of works [40–42]. For instance, the authors in [41] consider the estimation of sparse vectors from measurements that are quantized and corrupted by Gaussian noise. The main aspect that differentiates our model from that in [41] is that, in our QNC scenario, the resulting effective total measurement noise is a nonlinear function of the quantization noises at the edges.
which can be solved by using linear programming [39]. The following theorems present our results on the recovery error using ℓ_{1}min decoding of (28).
Theorem 3.
Consider the QNC scenario where the absolute value of messages are bounded by q_{max} and the local network coding coefficients are such that:

α_{e,v}(t)=0, ∀t>2.

α_{e,v}(2)’s are independent zeromean Gaussian random variables with variance ${\sigma}_{0}^{2}$.

${\beta}_{e,{e}^{\prime}}\left(t\right)$’s are deterministic and locally orthogonal according to (25).
where Q(·) is the tail probability of the standard normal distribution (i.e., the one-sided Q function).
Proof
As a result, since the α_{e,v}(t)’s are zero for t≥3, it follows directly that overflow cannot happen for t≥3.
For t=2, since only the node message X_{ v } is available at each node, the values of the ${\beta}_{e,{e}^{\prime}}\left(2\right)$’s have no effect. Hence, only the value of α_{e,v}(2) can cause overflow, and therefore |α_{e,v}(2)| should be less than or equal to one to prevent it. Moreover, because of the Gaussian distribution of the α_{e,v}(2)’s, each α_{e,v}(2) has an absolute value greater than one with probability $2Q\left({\sigma}_{0}^{-1}\right)$. Therefore, using the union bound, the probability that there is at least one α_{e,v}(2) with |α_{e,v}(2)|>1 is upper bounded by $2|\mathcal{E}|\,Q\left({\sigma}_{0}^{-1}\right)$.
Theorem 4.
Proof.
where (39) holds because of the one-to-one mapping structure of the B matrix. This provides an upper bound on the ℓ_{2} norm of the measurement noise in our QNC scenario.
According to Theorem 4.2 in [21], when the measurement matrix satisfies the RIP of appropriate order and constant (as in the assumptions of Theorem 4) and the measurement noise is bounded, ℓ_{1}-min recovery yields an estimate with bounded recovery error. Explicitly, the bound is as in (33), considering the near-sparsity model of the messages and the obtained bound on the measurement noise.
According to the preceding theorem, the upper bound, c_{1}ε_{rec}, decreases when the quantization steps, Δ_{ e }’s, decrease. Since ${\Delta}_{e}=2{q}_{\text{max}}/{2}^{\lfloor L{C}_{e}\rfloor}$, a smaller upper bound on the ℓ_{2} norm of the recovery error can be obtained by increasing the block length, L. Although this can be done in practice, it simultaneously increases the point-to-point transmission delays in the network, which may not be desirable. This creates a trade-off between reconstruction quality and delay, which is explored in detail in Section 6.
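The trade-off can be made concrete with a small sketch (assuming C_{ e }=1 and q_{max}=1, as in the simulations of Section 6): the quantization step shrinks exponentially in L, while the delivery delay (t−1)L grows only linearly in L.

```python
from math import floor

def quant_step(L, C_e=1.0, q_max=1.0):
    """Edge quantization step: Delta_e = 2*q_max / 2**floor(L*C_e)."""
    return 2.0 * q_max / 2 ** floor(L * C_e)

# Each extra bit of block length halves the step (hence the noise bound),
# at the cost of a linearly longer per-transmission delay.
for L in (4, 8, 16):
    print(L, quant_step(L))
```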
As discussed in Theorem 4, the local network coding coefficients proposed in (25) ensure that the normalization is respected and that overflow does not happen, with high probability. More precisely, an appropriate value of σ_{0} must also be chosen for this purpose. For example, when the number of edges is on the order of 1,000, selecting σ_{0}=0.25 results in a low probability of overflow.
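As a numerical check of this claim, the union bound from the proof of Theorem 3 evaluates as follows for |E|=1,000 and σ_{0}=0.25:

```python
from math import erfc, sqrt

def Q(x):
    """One-sided tail probability of the standard normal distribution."""
    return 0.5 * erfc(x / sqrt(2))

# Union bound on the overflow probability: 2 |E| Q(1 / sigma_0).
num_edges, sigma_0 = 1000, 0.25
p_overflow_bound = 2 * num_edges * Q(1 / sigma_0)
```

With σ_{0}=0.25, each coefficient exceeds one in magnitude only out at four standard deviations, so even a thousand edges leave the union bound well below 0.1.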
It was also discussed in Section 4 that the resulting Θ_{tot}(t)=Ψ_{tot}(t)ϕ satisfies the RIP with high probability when the local network coding coefficients are generated according to the assumptions of Theorem 1, with a number of measurements m of the same order as would be required for an i.i.d. Gaussian measurement matrix. Based on Theorem 4, if the resulting Ψ_{tot}(t) satisfies the RIP of appropriate order with high probability, then robust recovery can be guaranteed with high probability.
Therefore, putting all these numerical and theoretical results together, QNC will result in bounded error recovery (33) with a number of measurements (number of packets received at the decoder) of smaller order than the number of messages. This saving in the required number of received packets can be interpreted as an embedded distributed compression, achieved by Quantized Network Coding at the nodes: the more packets are received at the decoder, the larger m will be and the lower the reconstruction error will be.
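The core of the ℓ_{1}-min decoding used throughout this section can be illustrated with the standard linear programming reformulation. This is a simplified sketch of our own: equality constraints in place of the noise-ball constraint of (28), and a generic Gaussian matrix standing in for Θ_{tot}(t):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(Theta, z):
    """min ||s||_1 s.t. Theta s = z, via the LP split s = u - v, u, v >= 0."""
    m, n = Theta.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([Theta, -Theta])     # Theta (u - v) = z
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(4)
n, m, k = 40, 20, 3
Theta = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
z = Theta @ s_true
s_hat = l1_min(Theta, z)
```

The decoder in (28) replaces the equality constraint with ||z − Θ s||_{2} ≤ ε_{rec} to absorb the quantization noise, turning the LP into a second-order cone program.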
6 Simulation results
In this section, we evaluate the performance of Quantized Network Coding, by using different numerical simulations. The main motivation behind the proposed Quantized Network Coding technique is to allow for reconstruction of the correlated source signals at the sink node or decoder with a limited number of measurements. To this end, we will compare delay distortion curves for different data gathering algorithms. Our performance analysis includes statistical evaluation of the proposed QNC scenario versus packet forwarding and conventional finite field network coding schemes. The resulting analysis will provide a comprehensive comparison between these transmission methods for different network deployments and correlation of sources.
6.1 Network deployment and message generation
Number of hops in each deployment setting with n=100 nodes
Simulation settings  Average |In(v)|  Average hops

d_{0}=0.15, GW_{corner}  5.3  9.7
d_{0}=0.15, GW_{center}  5.3  5.3
d_{0}=0.25, GW_{corner}  13.9  3.9
d_{0}=0.25, GW_{center}  13.9  2.3
d_{0}=0.35, GW_{corner}  24.8  2.7
d_{0}=0.35, GW_{center}  24.8  1.7
In our simulations, each communication link (edge) can maintain lossless communication of 1 bit per use, i.e., C_{ e }=1 for all $e\in \mathcal{E}$. We also assume that there is no interference from transmissions at other nodes, which may be achieved by using a time multiplexing strategy. A sample network deployment is shown in Figure 4b, where the arrows represent the directed links between the nodes.
This is followed by the generation of an orthonormal random matrix, ϕ, and the calculation of random messages: $\underline{x}=\varphi \cdot \underline{s}.$ To ensure that the x_{ j }’s are bounded, they are normalized between −q_{max} and +q_{max} (the x_{ j }’s are multiplied by a constant value). The value of q_{max} used for the simulations does not affect the results, since we use average SNR as the measure of decoding quality. We study the performance of the different transmission scenarios by repeating our simulations for different values of the sparsity factor, $\frac{k}{n}$, and the near-sparsity parameter, ε_{ k }.
where $\overline{(\cdot)}$ stands for the average over different realizations of network deployments. For each realization of the network deployment, we generate only one realization of the messages; therefore, averaging over network deployments is enough to obtain the average SNR values.
The cost measure in our comparisons is the corresponding average delivery delay required to achieve a given quality of service (average SNR). Explicitly, the delivery delay for a transmission that terminates at time t is equal to (t−1)L in all transmission scenarios. In the case of packet forwarding, we do not count the learning period required to find the routes from each sensor node to the decoder node.
The parameters of messages and the networks used in our simulations
Parameter  Value(s) 

n  100 
d _{0}  0.15,0.25,0.35 
P _{0}  0.9 
L  1,…,40 
k/n  0.01,0.05,0.10 
ε _{ k }  0,0.002,0.02,0.2 
6.2 Quantized Network Coding
For each generated random network deployment, we perform QNC with ℓ_{1}-min decoding. The local network coding coefficients, α_{e,v}(t)’s and ${\beta}_{e,{e}^{\prime}}\left(t\right)$’s, are generated according to the conditions of Theorem 3, with σ_{0}=0.25. The edge quantizers, Q_{ e }(·)’s, have a uniform characteristic with range [−q_{max},+q_{max}] and 2^{ L } intervals (since C_{ e }=1, ∀e). The random α_{e,v}(2)’s and ${\beta}_{e,{e}^{\prime}}\left(t\right)$’s can be generated in a pseudorandom way, so only the generator seed needs to be transmitted to the decoder in a packet header.
At the decoder, the measurements received up to time t, ${\underline{Z}}_{\text{tot}}\left(t\right)$, are used to recover the original messages. Specifically, for a realization of the messages, $\underline{X}$, we define ${\underline{\widehat{x}}}_{\text{QNC}}\left(t\right)$ to be the messages recovered using ℓ_{1}-min decoding, according to (28). The convex optimization in (28) is solved using an open source implementation of disciplined convex programming [43]. Moreover, the network deployment is assumed to be known at the decoder in order to build the Ψ_{tot}(t) matrices (the random generator seed is enough to regenerate the local network coding coefficients). Although the exact sparsity of the messages, k, need not be known for ℓ_{1}-min decoding, the sparsifying transform, ϕ, must be known. The block length, L, must also be known at the decoder to calculate the level of the effective measurement noise, i.e., the ε_{rec}(t)’s.
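The seed-sharing mechanism can be sketched as follows (the helper `coeffs_from_seed` is hypothetical; the coefficient distribution follows the conditions of Theorem 3 with σ_{0}=0.25):

```python
import numpy as np

def coeffs_from_seed(seed, n_edges, n_nodes, sigma_0=0.25):
    """Regenerate the pseudorandom alpha coefficients from a shared seed:
    i.i.d. zero-mean Gaussian entries with standard deviation sigma_0."""
    rng = np.random.default_rng(seed)
    return sigma_0 * rng.standard_normal((n_edges, n_nodes))

# Encoder and decoder draw from the same seed, so only the seed (not the
# full coefficient matrix) needs to travel in a packet header.
alpha_encoder = coeffs_from_seed(seed=42, n_edges=50, n_nodes=10)
alpha_decoder = coeffs_from_seed(seed=42, n_edges=50, n_nodes=10)
```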
6.3 Quantization and packet forwarding
6.4 Quantization and packet forwarding with CS decoding
As can be expected, compressed sensing-based decoding finds approximate estimates of the undelivered messages by exploiting the redundancy of the messages, improving the overall performance in terms of recovery error norm.
6.5 Quantization and network coding
This is referred to as all-or-nothing decoding in the conventional network coding literature. Similar to the QNC scenario, the network deployment is assumed to be known at the decoder node, and the mapping matrix (from messages to received packets) can be built by receiving only the seed of the pseudorandom generators.
6.6 Analysis of simulation results
As shown in Figure 5a,b, when using the same block length, QNC achieves a significant improvement over PF for low values of delivery delay. These low delays correspond to the initial t’s of the transmission, at which only a small number of packets have been received at the decoder. As promised by the theory of compressed sensing, even few measurements enable message recovery, albeit with an associated measurement noise. After enough packets are received at the decoder, QNC achieves its best performance (where the curve is almost flat). This best performance improves (i.e., the average SNR increases) when the correlation of the messages is higher (the sparsity factor k/n is lower).
The best performance for QandPF, QandPFwithCS, and QandNC is reached after a longer period of time than for QNC. This best achievable quality (SNR value) is limited only by the quantization noise at the source nodes, for both the QandPF and QandNC scenarios. As expected, using compressed sensing decoding (as in the QandPFwithCS scenario) provides a better estimate of the messages before all the packets are delivered. Furthermore, as opposed to QandPF, which shows a progressive improvement in quality, QandNC has an all-or-nothing characteristic, as mentioned earlier. It is also interesting to note that the low-density adjacency matrices of networks with small node degrees result in (finite-field) measurement matrices that are not of full rank in the QandNC scenario. Hence, as shown in Figure 5a, the QandNC scheme fails to work properly.
The quantization noises and their propagation through the network do not allow QNC to achieve the same best performance as the PF and QandNC scenarios (where only source quantization noise is involved). However, as shown in the following, QNC outperforms QandPF (with and without compressed sensing decoding) and QandNC over a wide range of delay values, when an appropriate block length is chosen.
It can be seen in Figure 6a,b,c,d that, when the network does not have too many links (i.e., when the average hop distances are high), the proposed QNC scenario outperforms both routing-based packet forwarding (with and without compressed sensing decoding) and the conventional QandNC scenario. This is true for a wide range of average SNR values, up to around 35 dB, which is considered high quality in many applications. Moreover, as expected, the average SNR of the QNC scenario increases when the correlation of the messages increases (i.e., when the sparsity factor, k/n, decreases).
As shown in Figure 6e,f, when dealing with networks with a very high number of edges, which results in small average hop distances, the proposed QNC scenario cannot outperform the QandNC scenario at very high SNR values (explicitly, for average SNR values higher than 40 dB). This may be a result of quantization noise propagation through the network during the QNC steps, which raises the effective measurement noise above the level that sparse recovery can compensate for.
By comparing the figures in which only the location of the gateway node has changed, i.e., from GW_{center} to GW_{corner} (Figure 6a to Figure 6b and Figure 6c to Figure 6d), we can see that QNC behaves more robustly than the PF and QandNC schemes. In other words, QNC does not suffer from the complications (especially pronounced in packet forwarding) caused by an asymmetric distribution of the network flow. Using compressed sensing decoding for packet forwarding, as in the QandPFwithCS scenario, improves the performance of packet forwarding in this situation, although it still cannot outperform the QNC scenario.
In the routing-based packet forwarding scenarios (with and without compressed sensing decoding), the intermediate (sensor) nodes have to go through route training and queuing of packets. One of the main advantages of QNC is that the intermediate nodes need only carry out a simple linear combination and quantization, which reduces the computational power required of intermediate sensor nodes (they still have to perform sensing and physical layer transmission). On the other hand, at the decoder side, QNC requires an ℓ_{1}-min decoder, which is potentially more complex than the receiver required for packet forwarding. However, since the gateway node is usually capable of handling heavier computational loads, this may not be an issue in practical cases.
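The simplicity of the intermediate-node operation can be illustrated with a short sketch. This is only a schematic rendering of the per-node QNC step described above (linear combination of incoming payloads and the node's own message, followed by quantization before retransmission); the function names, the coefficient arguments, and the midtread uniform quantizer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def uniform_quantize(x, step):
    """Midtread uniform quantizer with step size `step` (assumed design)."""
    return step * np.round(np.asarray(x, dtype=float) / step)

def qnc_node_update(incoming, own_message, alpha, betas, step):
    """One QNC step at an intermediate node: linearly combine the
    payloads received on incoming edges with the node's own message,
    then quantize the result before retransmission.

    incoming:    real-valued payloads from incoming edges
    betas:       one local coefficient per incoming edge
    alpha:       weight applied to the node's own message
    """
    combined = alpha * own_message + sum(b * y for b, y in zip(betas, incoming))
    return uniform_quantize(combined, step)
```

Each node thus needs only a handful of multiply-accumulate operations and a rounding step per outgoing packet, in contrast to the route maintenance and queue management required by packet forwarding.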
6.7 QNC in lossy networks
Although it is not the main focus of our paper, we have run some numerical simulations to assess the robustness of the QNC scenario in lossy networks. Specifically, we consider a network model similar to the one used for the lossless case, but with packet losses. More precisely, every link is assumed to have a bit dropping rate of p_{drop}, i.e., a bit (which corresponds to a symbol in the case of C_{0}=1 considered in the simulations) is dropped (lost) during transmission with probability p_{drop}. When dealing with packets of length L, a packet is considered dropped if one or more of its bits are lost. This loss model is applied to all the transmission schemes described in Section 6.1.
During packet forwarding, if a packet is not successfully transmitted over a channel, it needs to be retransmitted completely. Moreover, in the QandNC and QNC scenarios, where finite field network coding and Quantized Network Coding are adopted, the loss of a packet (transmitted over a link) is reflected by a zero value for the corresponding local network coding coefficient.
Since the low SNR values (low decoding quality) in the QNC scenario are obtained by using small packet lengths (small values of L), the probability of a bit drop within a packet is smaller than for a larger packet length (larger L). As a result, the performance curves do not differ much across loss rates. This is shown in Figure 8a,b, where there is only a small gap between the curves of different p_{drop} values at low SNR values.
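Under the stated loss model, a length-L packet survives only if all of its bits survive, so with independent bit drops the packet drop probability is 1-(1-p_{drop})^L. The following minimal sketch (the function name and the example packet lengths are our own) makes the dependence on L explicit:

```python
def packet_drop_probability(p_drop, L):
    """A length-L packet is dropped iff at least one of its bits is
    lost; with independent bit drops this equals 1 - (1 - p_drop)^L."""
    return 1.0 - (1.0 - p_drop) ** L

# Shorter packets (small L) are lost far less often, consistent with
# the small gap between curves of different p_drop at low SNR values.
p = 1e-3
p_short = packet_drop_probability(p, 8)     # about 0.008
p_long = packet_drop_probability(p, 128)    # about 0.12
```

For a fixed bit drop rate, the packet loss probability grows roughly linearly in L while Lp_{drop} is small, which is why the low-SNR (small-L) operating points are barely affected by the channel losses.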
Moreover, since the compressed sensing decoder exploits the correlation between the messages, it is able to reconstruct some messages even when their corresponding linear measurements are lost in transmission. This effect can also be seen in the QandPF scenario when compressed sensing decoding is adopted.
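The decoder's ability to tolerate missing measurements can be illustrated with a small basis pursuit example: ℓ_{1} minimization recovers a k-sparse vector from m < n linear measurements. This is only a sketch, not the paper's CVX-based implementation [43]; the problem sizes and the i.i.d. Gaussian measurement matrix are illustrative assumptions (in QNC, the measurement matrix is induced by the network structure).

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_decode(A, y):
    """Basis pursuit: min ||x||_1 subject to A x = y, posed as a linear
    program over x = u - v with u, v >= 0 (solved by scipy's linprog)."""
    m, n = A.shape
    c = np.ones(2 * n)             # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])      # enforces A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# A k-sparse message vector recovered from m < n measurements:
rng = np.random.default_rng(0)
n, m, k = 40, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # illustrative Gaussian matrix
x_hat = l1_min_decode(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))  # typically near zero for these sizes
```

Losing a measurement simply removes one row of A; as long as enough rows remain for the matrix to behave well on sparse vectors, the decoder still recovers the messages, which is the behavior observed in the lossy simulations.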
7 Conclusions
Joint source network coding of correlated sources was studied from a sparse recovery perspective. In order to encode correlated sources without requiring the encoders to know the source correlation model, we proposed Quantized Network Coding, which incorporates real field network coding and quantization to take advantage of decoding via linear programming. Building on results from the compressed sensing literature, we discussed theoretical guarantees that ensure efficient encoding and robust decoding of the messages. Moreover, we were able to make conclusive statements about robust recovery of the messages when fewer packets than source signals (messages) were available at the decoder. Finally, our computer simulations verified the reduction in average delivery delay achieved by Quantized Network Coding.
Currently, we are studying the feasibility of near minimum mean squared error decoding when other forms of prior information about the source are available. Specifically, we have suggested the use of belief propagation-based decoding [45] in a Bayesian scenario. However, more theoretical work is needed to derive mathematical guarantees for robust recovery. Studying the general case of lossy networks with interference between the links is also one of the proposed future directions.
Endnotes
^{a} They only mention that dense networks satisfy the restricted eigenvalue condition and do not prove it.
^{b} Although the impact and value of L are not discussed at this point, it is an important design parameter, which will be extensively discussed in Section 6.
^{c} In this paper, all the vectors are column-wise.
^{d} This choice reduces the tail probabilities defined later on in Equation 26 and, as such, increases the probability of the measurement matrix satisfying RIP.
^{e} Explicitly, we have a predetermined set of orthogonal matrices, used as ${\beta}_{e,{e}^{\prime}}\left(t\right)$’s. Further, the variances of the α_{e,v}(2)’s are picked to be the same, such that the mean of the ℓ_{2}-norms (defined in [32]) is equal to 1.
^{f} Although a uniform quantizer may not be the best choice for some message distributions, it is still widely used in practice. It also simplifies the mathematical analysis used to provide a theoretical bound on the resulting recovery error. The study of the impact of different quantizer designs is left as future work.
^{g} This depends on the characteristics of the quantizers used at the source nodes to quantize each message before packet forwarding. Specifically, in our simulations, where we used uniform quantizers with step size Δ_{ Q }, ε_{rec,PF}(t) is equal to the product of Δ_{ Q } and the number of delivered quantized messages.
Declarations
Acknowledgements
This work was supported by HydroQuébec, the Natural Sciences and Engineering Research Council of Canada, and McGill University in the framework of the NSERC/HydroQuébec/McGill Industrial Research Chair in Interactive Information Infrastructure for the Power Grid.
References
 Akyildiz I, Su W, Sankarasubramaniam Y, Cayirci E: A survey on sensor networks. IEEE Commun. Mag. 2002, 40(8):102-114. doi:10.1109/MCOM.2002.1024422
 Chong C, Kumar S: Sensor networks: evolution, opportunities, and challenges. Proc. IEEE 2003, 91(8):1247-1256. doi:10.1109/JPROC.2003.814918
 Ahlswede R, Cai N, Li SY, Yeung R: Network information flow. IEEE Trans. Inf. Theory 2000, 46:1204-1216. doi:10.1109/18.850663
 Al-Karaki J, Kamal A: Routing techniques in wireless sensor networks: a survey. IEEE Wireless Commun. 2004, 11(6):6-28. doi:10.1109/MWC.2004.1368893
 Ho T, Koetter R, Medard M, Karger D, Effros M: The benefits of coding over routing in a randomized setting. In IEEE International Symposium on Information Theory. IEEE, Piscataway; 2003:442.
 Fragouli C: Network coding for sensor networks. In Handbook on Array Processing and Sensor Networks. Wiley Online Library; 2009:645-667.
 Koetter R, Médard M: An algebraic approach to network coding. IEEE/ACM Trans. Netw. 2003, 11(5):782-795. doi:10.1109/TNET.2003.818197
 Ho T, Medard M, Koetter R, Karger D, Effros M, Shi J, Leong B: A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52:4413-4430.
 Lim S, Kim Y, El Gamal A, Chung S: Noisy network coding. IEEE Trans. Inf. Theory 2011, 57(5):3132-3152.
 Dana A, Gowaikar R, Palanki R, Hassibi B, Effros M: Capacity of wireless erasure networks. IEEE Trans. Inf. Theory 2006, 52:789-804.
 Slepian D, Wolf J: Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19(4):471-480. doi:10.1109/TIT.1973.1055037
 Xiong Z, Liveris A, Cheng S: Distributed source coding for sensor networks. IEEE Signal Process. Mag. 2004, 21(5):80-94. doi:10.1109/MSP.2004.1328091
 Han TS: Slepian-Wolf-Cover theorem for networks of channels. Inf. Control 1980, 47(1):67-83. doi:10.1016/S0019-9958(80)90284-3
 Ho T, Médard M, Effros M, Koetter R, Karger D: Network coding for correlated sources. In Proceedings of the Conference on Information Sciences and Systems. CiteSeer; 2004.
 Ramamoorthy A, Jain K, Chou PA, Effros M: Separating distributed source coding from network coding. IEEE Trans. Inf. Theory 2006, 52:2785-2795.
 Wu Y, Stankovic V, Xiong Z, Kung S: On practical design for joint distributed source and network coding. IEEE Trans. Inf. Theory 2009, 55(4):1709-1720.
 Maierbacher G, Barros J, Médard M: Practical source-network decoding. In 6th International Symposium on Wireless Communication Systems. IEEE, Piscataway; 2009:283-287.
 Cruz S, Maierbacher G, Barros J: Joint source-network coding for large-scale sensor networks. In IEEE International Symposium on Information Theory Proceedings. IEEE, Piscataway; 2011:420-424.
 Kschischang F, Frey B, Loeliger H: Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory 2001, 47(2):498-519. doi:10.1109/18.910572
 Donoho D: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52:1289-1306.
 Baraniuk R, Davenport M, Duarte M, Hegde C: An Introduction to Compressive Sensing. Boston: Addison-Wesley; 2011.
 Haupt J, Bajwa W, Rabbat M, Nowak R: Compressed sensing for networked data. IEEE Signal Process. Mag. 2008, 25:92-101.
 Nguyen N, Jones D, Krishnamurthy S: Netcompress: coupling network coding and compressed sensing for efficient data communication in wireless sensor networks. In 2010 IEEE Workshop on Signal Processing Systems. IEEE, Piscataway; 2010:356-361.
 Luo C, Wu F, Sun J, Chen CW: Compressive data gathering for large-scale wireless sensor networks. In Proceedings of the 15th Annual International Conference on Mobile Computing and Networking. ACM, New York; 2009:145-156.
 Feizi S, Médard M, Effros M: Compressive sensing over networks. In 48th Annual Allerton Conference on Communication, Control, and Computing. IEEE, Piscataway; 2010:1129-1136.
 Xu W, Mallada E, Tang A: Compressive sensing over graphs. In IEEE International Conference on Computer Communications (INFOCOM). IEEE, Piscataway; 2011:2087-2095.
 Wang M, Xu W, Mallada E, Tang A: Sparse recovery with graph constraints: fundamental limits and measurement construction. In IEEE International Conference on Computer Communications (INFOCOM). IEEE, Piscataway; 2012:1871-1879.
 Feizi S, Medard M: A power efficient sensing/communication scheme: joint source-channel-network coding by using compressive sensing. In 49th Annual Allerton Conference on Communication, Control, and Computing. IEEE, Piscataway; 2011:1048-1054.
 Bassi F, Chao L, Iwaza L, Kieffer M: Compressive linear network coding for efficient data collection in wireless sensor networks. In Proceedings of the 2012 European Signal Processing Conference. IEEE, Piscataway; 2012:1-5.
 Dey B, Katti S, Jaggi S, Katabi D, Medard M, Shintre S: “Real” and “complex” network codes: promises and challenges. In Fourth Workshop on Network Coding, Theory and Applications (NetCod 2008). IEEE, Piscataway; 2008:1-6.
 Nabaee M, Labeau F: Quantized network coding for sparse messages. In IEEE Statistical Signal Processing Workshop. IEEE, Piscataway; 2012:832-835.
 Nabaee M, Labeau F: Restricted isometry property in quantized network coding of sparse messages. In IEEE Global Telecommunications Conference. IEEE, Piscataway; 2012.
 Candes EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346(9-10):589-592. doi:10.1016/j.crma.2008.03.014
 Baraniuk R: Compressive sensing. IEEE Signal Process. Mag. 2007, 24(4):118-121.
 Duarte MF, Sarvotham S, Wakin MB, Baron D, Baraniuk RG: Joint sparsity models for distributed compressed sensing. In Proceedings of the Workshop on Signal Processing with Adaptive Sparse Structured Representations. IEEE, Piscataway; 2005.
 Kailath T: Linear Systems. Englewood Cliffs: Prentice-Hall; 1980.
 Candes E, Romberg J: Sparsity and incoherence in compressive sampling. Inverse Probl. 2007, 23(3):969-985. doi:10.1088/0266-5611/23/3/008
 Baraniuk R, Davenport M, DeVore R, Wakin M: A simple proof of the restricted isometry property for random matrices. Constr. Approx. 2008, 28(3):253-263.
 Candes E, Tao T: Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51:4203-4215. doi:10.1109/TIT.2005.858979
 Dai W, Pham HV, Milenkovic O: Distortion-rate functions for quantized compressive sensing. In IEEE Information Theory Workshop on Networking and Information Theory. IEEE, Piscataway; 2009:171-175.
 Zymnis A, Boyd S, Candes E: Compressed sensing with quantized measurements. IEEE Signal Process. Lett. 2010, 17(2):149-152.
 Jacques L, Hammond DK, Fadili JM: Dequantizing compressed sensing: when oversampling and non-Gaussian constraints combine. IEEE Trans. Inf. Theory 2011, 57(1):559-571.
 Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 1.21. 2012. http://cvxr.com/cvx. Accessed Aug 2012.
 Dijkstra E: A note on two problems in connexion with graphs. Numerische Mathematik 1959, 1(1):269-271. doi:10.1007/BF01386390
 Nabaee M, Labeau F: Non-adaptive distributed compression in networks. In 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE). IEEE, Piscataway; 2013:239-244.
 Nabaee M, Labeau F: Bayesian quantized network coding via generalized approximate message passing. In 2014 Wireless Telecommunications Symposium. IEEE, Piscataway; 2014.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.