- Research Article
- Open Access
An Optimal Adaptive Network Coding Scheme for Minimizing Decoding Delay in Broadcast Erasure Channels
© Parastoo Sadeghi et al. 2010
- Received: 31 August 2009
- Accepted: 3 March 2010
- Published: 14 April 2010
We are concerned with designing feedback-based adaptive network coding schemes with the aim of minimizing decoding delay in each transmission in packet-based erasure networks. We study systems where each packet brings new information to the destination regardless of its order and require the packets to be instantaneously decodable. We first formulate the decoding delay minimization problem as an integer linear program and then propose efficient algorithms for finding its optimal solution(s). We show that our problem formulation is applicable to memoryless erasures as well as Gilbert-Elliott erasures with memory. We then propose a number of heuristic algorithms with worst case linear execution complexity that can be used when an optimal solution cannot be found in a reasonable time. We verify the delay and speed performance of our techniques through numerical analysis. This analysis reveals that by taking channel memory into account in network coding decisions, one can considerably reduce decoding delays.
- Network Code
- Delay Performance
- Fountain Code
- Source Packet
- Erasure Channel
In this paper, we are concerned with designing feedback-based adaptive network coding schemes that can deliver high throughputs and low decoding delays in packet erasure networks. We first present some background on existing work and emphasize that the notion of delay and the choice of a suitable network coding strategy are highly entangled with the underlying application.
1.1. Motivation and Background
Consider a broadcast packet-based transmission from one source to many destinations where erasures can occur in the links between the source and destinations. Two main throughput-optimal schemes to deal with such erasures are fountain codes [1] and random linear network codes (RLNC) [2]. In the latter scheme, for example, the source transmits random linear mixtures of all the packets to be delivered. It is well known that if the random coefficients are chosen from a finite field of sufficiently large size, each coded packet will almost surely be linearly independent of all previously received coded packets and hence innovative for every destination [2]. The scheme is therefore almost surely throughput optimal. Another benefit of fountain codes and RLNC is that they do not require feedback about erasures in individual links in order to operate.
However, in these schemes, throughput optimality comes at the cost of large decoding delays, as the receiver in general needs to collect a whole block of coded packets before being able to decode. Despite this drawback, there are applications which are insensitive to such delays. Consider, for example, a simple software update (file download). The update only starts to work when the whole file is downloaded. In this case, the main desired properties are throughput optimality and a short mean completion time, and there is often little or no incentive to aim for partial "premature" decoding. The completion time performance of RLNC for rateless file download applications has been considered in [3], where the mean completion time of RLNC is shown to be much shorter than that of scheduling. Reference [4] considers time division duplex systems with large round-trip link latencies and proposes solutions for the number of coded packet transmissions to send before waiting for acknowledgement of the received number of degrees of freedom.
There are applications where partial decoding can crucially influence the end user's experience. Consider, for example, broadcasting a continuous stream of video or audio in live or playback modes. Even though fountain codes and RLNC are throughput optimal, having to wait for the entire coded block to arrive can result in unacceptable delays in the application layer. But, we also note that partial decoding of packets out of their natural temporal order does not necessarily translate into low delivery delays desired by the application layer. The authors in [5, 6] have proposed feedback-based throughput-optimal schemes to deal with the transmitter queue size, as well as decoding and delivery delays at the destinations. When the traffic load approaches system capacity, their methods are shown to behave "gracefully" and meet the delay performance benchmark of single-receiver automatic repeat request (ARQ) schemes.
There is yet another set of applications for which partial decoding is beneficial and can result in lower delays irrespective of the order in which packets are being decoded. Consider, for example, a wireless sensor network in which there is a fusion/command center together with numerous sensors/agents scattered in a region. Each sensor/agent has to execute or process one or more complex commands. Each command and its associated data is dispatched from the center in a packet. For coordination purposes, each agent needs to know its own and other agents' commands. Therefore, commands are broadcast to everyone in the network. In this application, in-order processing/execution of commands may not be a real issue. However, fast command execution may be crucial and therefore, it is imperative that innovative packets arrive and get decoded at the destinations as quickly as possible regardless of their order. As another example, consider emergency operations in a large geographical region where emergency-related updates of the map of the area need to be dispatched to all emergency crew members. In such situations too, updates of different parts of the map can be decoded in any order and still be useful for handling the emergency.
Finally, some applications may be designed in such a way that they are insensitive to in-order delivery. This can be particularly useful where the transport medium is unreliable. In such a case, it may be natural to use multiple-description source coding techniques [7], in which every decoded packet brings new information to the destination, irrespective of its order. In light of the emergency applications described above, one can perform multiple-description coding for map updates, so that updates of different subregions can be divided into multiple packets and each packet can provide an improved view of one region in a truly order-insensitive fashion.
In Section 1.1, we have motivated the problem in light of possible applications in sensor and ad hoc networks. To the best of our knowledge, such application-dependent classification of network coding delays did not previously exist in the literature.
In Section 3.1, we present a systematic framework for the minimization of decoding delay in each transmission subject to the instantaneous decodability constraint. We show that this problem can be cast into a special integer linear programming (ILP) framework, where instantaneously decodable packet transmission corresponds to a set packing problem [9] on an appropriately defined set structure.
In Section 3.2, we provide a customized and efficient method for finding the optimal solution to the set packing problem (which is in general NP-hard). Our numerical results in Section 6 show that for a reasonably sized number of receivers, the optimal solution(s) can be found in a time that is linearly proportional to the total number of packets.
In Section 4, we discuss decoding delay minimization for an important class of erasure channels with memory, which can occur in wireless communication systems due to deep fades and shadowing [10]. We show that the general set packing framework in Section 3 can be easily modified to account for the erasure memory. Our results in Section 6 reveal that by adapting network coding decisions based on channel erasure conditions, significant improvements in delay are possible compared to when decisions are taken irrespective of channel states.
In Section 5, we provide a number of heuristic variations of the optimal search for finding (possibly suboptimal) solutions faster, if needed. Our results in Section 6 show that such heuristics work very well and often provide solutions that are very close to those of the optimal search algorithm. Moreover, they improve on the random opportunistic method proposed in [8].
Consider a single source that wants to broadcast some data to N receivers, denoted by R_i for i = 1, …, N. The data to be broadcast is divided into M packets, denoted by P_j for j = 1, …, M. Time is slotted and the source can transmit one (possibly coded) packet per slot.
A packet erasure link connects the source to each individual receiver R_i. Erasures in different links can be independent of or correlated with each other. Different erasures in a single link can be independent (memoryless) or correlated with each other (with memory) over time.
For correlated erasures, we consider the well-known Gilbert-Elliott channel (GEC) [12], which is a Markov model with a good and a bad state. If the channel is in the good state, packets can be successfully received, while in the bad state packets are lost (e.g., due to deep fades or shadowing in the channel). The probability of moving from the good state to the bad state in link i is denoted g_i and the probability of moving from the bad state to the good state is b_i. The steady-state probabilities of the good and bad states are b_i/(g_i + b_i) and g_i/(g_i + b_i), respectively. Following [11], we define the memory content of the GEC in link i as μ_i = 1 − g_i − b_i, which signifies the persistence of the channel in remaining in the same state. A small μ_i means a channel with little memory and a large μ_i means a channel with large memory.
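To make the GEC model concrete, the following Python sketch simulates one erasure link and computes its steady-state and memory parameters. The function names (simulate_gec, gec_stats) and the symbols g, b are our illustrative choices, not the paper's code.

```python
import random

def simulate_gec(g, b, num_slots, start_good=True, seed=0):
    """Simulate a Gilbert-Elliott erasure link for num_slots time slots.
    g = P(good -> bad), b = P(bad -> good).
    Returns a list of booleans: True means the slot is in the good state
    (packet received), False means an erasure."""
    rng = random.Random(seed)
    good, states = start_good, []
    for _ in range(num_slots):
        states.append(good)
        if good:
            good = rng.random() >= g   # stay good with probability 1 - g
        else:
            good = rng.random() < b    # recover with probability b
    return states

def gec_stats(g, b):
    """Steady-state probability of the good state, b/(g+b),
    and memory content mu = 1 - g - b."""
    return b / (g + b), 1.0 - g - b
```

For g = 0.1 and b = 0.3, for instance, the link is in the good state 75% of the time with memory content 0.6, i.e., a fairly persistent channel.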
Before transmission of the next packet, the source collects error-free and delay-free 1-bit feedback from each destination indicating if the packet was successfully received or not. A successful reception generates an acknowledgement (ACK) and an erasure generates a negative acknowledgement (NAK). This feedback is used for optimizing network coding decisions at the source for the next packet transmission round, as described in future sections.
In this work, we consider linear network coding [2], in which coded packets are formed by taking linear combinations of the original source packets. Packets are vectors of fixed size over a finite field. The coefficient vector used for linear network coding is sent in the packet header so that each destination can at some point recover the original packets. Since in this paper we are only dealing with instantaneously decodable packet transmission, it suffices to consider linear network coding over GF(2). That is, coded packets are formed using binary XOR of the original source packets. Thus, network coding is performed in a similar manner as in [13].
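As a small illustration of coding over GF(2), the sketch below (our own helper, not the paper's code) XORs the selected source packets into one coded packet; since XOR is its own inverse, the same routine also performs decoding.

```python
def xor_encode(packets, x):
    """Bitwise-XOR together the source packets selected by the binary
    decision vector x (packets are equal-length byte strings)."""
    coded = bytearray(len(packets[0]))
    for pkt, bit in zip(packets, x):
        if bit:
            for k, byte in enumerate(pkt):
                coded[k] ^= byte
    return bytes(coded)
```

A receiver that already knows all but one of the combined packets recovers the missing one by XOR-ing its known packets into the coded packet, which is exactly the instantaneously decodable scenario.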
A transmitted packet is instantaneously decodable for receiver R_i if it is a linear combination of source packets containing at most one source packet that R_i has not decoded yet. A scheme is called instantaneously decodable if all its transmissions have this property for all receivers.
At the end of a transmission round in an instantaneously decodable scheme, the knowledge of receiver R_i is the set of all source packets that the receiver has decoded so far. The receiver can therefore compute any linear combination of its decoded packets for use in decoding future packets.
In an instantaneously decodable scheme, a coded packet is called non-innovative for receiver R_i if it only contains source packets that the receiver has already decoded. Otherwise, the packet is innovative.
A scheme is called rate or throughput optimal if all transmissions are innovative for the entire set of receivers.
In a given time slot, receiver R_i experiences one unit of delay if it successfully receives a packet that is either non-innovative or not instantaneously decodable. If we impose instantaneous decodability on the scheme, a delay can only occur if the received packet is non-innovative.
Note that in the last definition, we do not count channel-inflicted delays due to erasures. The delay only counts "algorithmic" overhead delays incurred when we are not able to provide an innovative and instantaneously decodable packet to a receiver.
We note that a packet that has not yet been transmitted, or has been transmitted but not received by any receiver, can be sent uncoded in any transmission slot without incurring any algorithmic delay. In fact, this is how the transmission starts: by sending an uncoded packet, for example.
A zero-delay scheme would require all packets to be both innovative and instantaneously decodable for all receivers. Thus zero delay implies rate optimality, but not vice versa. As the authors show in [8, Theorem 1] for the case of two and three receivers, there exists an offline algorithm that is both rate optimal and delay free. For four or more receivers, the authors prove that a zero-delay algorithm does not exist. By offline we mean that the algorithm needs to know future realizations of erasures in broadcast links. In contrast, an online algorithm decides on what to send in the next time slot based on the information received in the past and in the current slot. In this paper, we focus on designing online algorithms.
3.1. Problem Formulation Based on Integer Linear Programming
The condition of instantaneous decodability means that in any transmission round we cannot code together more than one packet that is still unknown to any single receiver R_i. In the example above, we cannot send a combination that contains more than one packet unknown to the same receiver.
Let x represent a binary decision vector of length M that determines which packets are coded together: the transmitted packet is the binary XOR of the source packets P_j for which x_j = 1. More formally, we can write the instantaneous decodability constraint for all receivers as Ax ≤ 1, where A is the N × M receiver-packet incidence matrix (row i marks the packets receiver R_i still needs), 1 is an all-one vector of length N, and the inequality is examined on an element-by-element basis. (Note that although x is a binary or Boolean vector, Ax is calculated in the real domain; hence, Ax ≤ 1 is in fact a pseudo-Boolean constraint.) This condition ensures that a transmitted coded packet contains at most one unknown source packet for each receiver. A vector x is called infeasible if it does not satisfy the instantaneous decodability condition; in other words, x is infeasible if and only if there exists at least one receiver index i for which (Ax)_i > 1. A vector x is called a solution if and only if it satisfies Ax ≤ 1. In the rest of this paper, "Ax ≤ 1" and "x is a solution" are used interchangeably.
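The constraint can be checked directly. The sketch below verifies Ax ≤ 1 for a candidate decision vector, using an illustrative incidence matrix of our own choosing.

```python
def is_feasible(A, x):
    """Instantaneous decodability: (A x)_i <= 1 for every receiver i,
    with the product computed in the real domain."""
    return all(sum(a * b for a, b in zip(row, x)) <= 1 for row in A)

# Illustrative matrix: A[i][j] = 1 if receiver i still needs packet j.
A = [[1, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 1]]
assert is_feasible(A, [1, 0, 1, 0])      # at most one unknown per receiver
assert not is_feasible(A, [1, 1, 0, 0])  # two unknowns for receiver 1
```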
Now consider the sets S_1, …, S_M, where S_j is the nonempty set of receivers that still need source packet P_j. Note that these sets can be easily determined by looking at the columns of the matrix A. The "importance" of packet P_j can, for example, be taken to be the size of the set S_j, which is the number of receivers that still need P_j.
In vector form, the optimization problem (4) is to maximize wᵀx subject to Ax ≤ 1 and x ∈ {0,1}^M, where the weight w_j = |S_j| is the importance of packet P_j. This is a standard problem in combinatorial optimization, usually called set packing [9]. Here the universe is the set of all receivers and we need to find disjoint subsets S_j (disjoint due to the instantaneous decodability condition) with the largest total size. In the (most desirable) case when equality holds in Ax ≤ 1 for every receiver, we also speak of a set partition. This is equivalent to a zero-delay transmission.
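For small instances, problem (4) can be solved by exhaustive enumeration, which makes the set packing formulation concrete. This brute-force sketch is for illustration only and is not the paper's algorithm.

```python
from itertools import product

def solve_set_packing(A, w):
    """Maximize w.x subject to A x <= 1 (element-wise) over binary x.
    Exponential in the number of packets M; for illustration only."""
    best_val, best_x = -1, None
    for x in product((0, 1), repeat=len(w)):
        if all(sum(a * b for a, b in zip(row, x)) <= 1 for row in A):
            val = sum(wj * xj for wj, xj in zip(w, x))
            if val > best_val:
                best_val, best_x = val, list(x)
    return best_x, best_val
```

With w set to the column sums of A (w_j = |S_j|), the returned objective value counts how many receivers are served by the transmitted coded packet.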
In Section 4, we will consider other measures of packet importance and discuss the role of the weight vector w in tailoring the optimization problem according to the application requirements or channel conditions, such as memory in erasure links.
We assume that the elements of w, which signify packet importance, are all positive. Then if one has already found a solution x with x_j = 1 for some j, changing this solution into x′ by flipping x_j to 0 can only result in a strictly smaller objective value wᵀx′ < wᵀx. We say that, given the solution x, x′ is clearly suboptimal and hence can be discarded in an algorithm that searches for the optimal solution(s).
3.2. Efficient Search Methods for Finding the Optimal Solution of (4)
It is well known that the set packing problem is NP-hard [9]. Here, we present an efficient ILP solver designed to take advantage of the specific problem structure. Later, we will see that for many practical situations of interest, our method performs well empirically. Based on this framework, we will also present some heuristics in Section 5 to deal with more complicated and time-consuming problem instances.
We begin presenting our method by first defining constrained and unconstrained variables.
Two binary-valued variables are said to be constrained if they cannot simultaneously equal 1 in a solution. Formally, x_j and x_k are constrained if x_j + x_k ≤ 1 for any x satisfying Ax ≤ 1 (again, note that the addition of variables takes place in the real domain). We also say that x_j is constrained to x_k and vice versa. It can be proven that x_j and x_k are constrained if and only if there exists at least one row index i in A for which A_{ij} = A_{ik} = 1.
One can easily verify the relations defined above on the example matrix A: two variables are constrained exactly when their columns have nonzero elements in the same row position, and a variable is unconstrained when no other column overlaps with its column in any row (in the example, the variable whose column alone has nonzero elements in rows 6 and 7 is unconstrained). The constrained set of each variable in the example follows directly from these column overlaps.
Unconstrained variables must be set to 1. In other words, setting those variables to 0 cannot contribute to the optimal solution (note that the elements of w are positive). In the above example, the unconstrained variables must be set to 1 because no other variable is constrained to them (we will make this statement formal in the optimality proof of the algorithm in the appendix).
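The constrained sets can be read off the rows of A, as the following sketch shows (constrained_sets is our helper name; the matrix in the usage example is illustrative):

```python
def constrained_sets(A):
    """For each packet index j, the set of packet indices k such that x_j and
    x_k are constrained: columns j and k of A share a 1 in some row, i.e.
    some receiver still needs both packets."""
    M = len(A[0])
    csets = {j: set() for j in range(M)}
    for row in A:
        needed = [j for j in range(M) if row[j]]
        for j in needed:
            csets[j].update(k for k in needed if k != j)
    return csets
```

A packet whose constrained set comes out empty is unconstrained and, per the observation above, can immediately be set to 1.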
At a given step, the parameter space can be pruned most by resolving the variable with the largest constrained set.
Application of the third observation in a search algorithm results in greedy pruning of the parameter space. We note that greedy pruning is only optimal for a given step of the algorithm and is not guaranteed to result in the optimal reduction of the overall complexity of the search.
We now make a final remark before presenting the search algorithm. We have observed that finding the constrained sets of every variable at each step of the algorithm can be somewhat time consuming. A very effective alternative is to first sort the matrix A column-wise, in descending order of the number of 1's in each column. Setting the "most important" head variable (the one with the most 1's in its column) to 1 is likely to result in the largest constrained set (because it potentially overlaps with many other variables) and hence many variables will be resolved in the next recursion. We will refer to the approach based on finding the largest constrained set as the greedy pruning search strategy and to the alternative approach as the sorted pruning search strategy.
The greedy pruning search strategy is shown in Figure 1, which with appropriate modifications can also represent the sorted pruning variation. Let Π(M) denote the problem of size M whose input is an N × M receiver-packet incidence matrix A and whose output is the set of solutions x of length M which satisfy the instantaneous decodability condition Ax ≤ 1. The algorithm can be described as shown in Algorithm 1.
Algorithm 1: Recursive search for the optimal solution(s) of (4).
(1) If the problem has size 1, return the single remaining packet as the solution.
(2) Otherwise, reorder the columns of A and pick the head variable (note that index 1 is reused to refer to the head variable of the reordered matrix at each recursion).
(3) Branch with the head variable set to 1: set all variables constrained to it to 0 and recursively solve the reduced problem.
(4) Branch with the head variable set to 0 and recursively solve the reduced problem.
(5) Set all the unconstrained variables to 1.
(6) Combine each branch's solutions with the previously resolved variables. Save and return the solution(s).
In the appendix, we prove by structural induction that Algorithm 1 is guaranteed to return all optimal solutions of (4). However, we note that not every solution returned by Algorithm 1 is optimal. The nonoptimal solutions can be easily discarded by testing against the objective function (4) at the end of the algorithm. We also note that in Algorithm 1, we can simply remove from the problem those packets that have been received by every receiver; if there are such variables, the recursion can start from the correspondingly smaller problem size. The MATLAB code for both the greedy and sorted pruning algorithms can be found at http://users.rsise.anu.edu.au/~parastoo/netcod/.
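In the same spirit as the greedy pruning strategy, the following self-contained Python sketch branches on the head variable with the largest constrained set, forces unconstrained variables to 1, and returns one optimal solution. It is our reconstruction of the idea, not the published MATLAB code.

```python
def optimal_solution(A, w):
    """Branch-and-prune search: the head variable (largest constrained set)
    is tried at 1 (zeroing its constrained set) and at 0; unconstrained
    variables are always set to 1. Returns the sorted list of packet
    indices coded together in one optimal solution."""
    def cset(j, free):
        # packets in `free` that conflict with packet j (share a row of A)
        return {k for row in A if row[j] for k in free if k != j and row[k]}

    def rec(free, chosen):
        loose = {j for j in free if not cset(j, free)}
        if loose:                              # unconstrained variables -> 1
            return rec(free - loose, chosen | loose)
        if not free:
            return chosen
        head = max(free, key=lambda j: len(cset(j, free)))
        take = rec(free - {head} - cset(head, free), chosen | {head})
        skip = rec(free - {head}, chosen)
        value = lambda s: sum(w[j] for j in s)
        return take if value(take) >= value(skip) else skip

    return sorted(rec(set(range(len(w))), set()))
```

Because both branches of every head variable are explored and positive-weight unconstrained variables are always taken, the returned set attains the optimum of the small instance it is given.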
We conclude this section with a brief note on the computational complexity of Algorithm 1. Let T(M) denote the number of recursions required to solve the problem of size M. According to Algorithm 1, this problem is always broken into two smaller subproblems, one for each branch of the head variable. Therefore, one can find the number of recursions required to solve Π(M) by recursively adding the recursion counts of the two subproblems plus one. The recursion stops when one reaches a problem of size 1 (only one packet to transmit), where T(1) = 1.
Here, we present a generalization of the set packing approach for coded transmission in erasure channels with memory. The idea is that the importance of a packet P_j is no longer determined by how many receivers need P_j, but by the probability that P_j will be successfully decoded by the receivers that need it. In computing this probability, one can use the fact that successive channel erasures in a link are usually correlated with each other and hence their history can be used to predict whether a receiver is going to experience an erasure in the next time slot. To present the idea, we focus on the GEC model for representing channel erasures. More general memory models for erasures can also be incorporated into our framework.
The above weight vector gives higher priority to a packet P_j for which there is a higher chance of successful reception, because the receivers that need P_j are more likely to be in the good state in the next time slot. With this newly defined weight vector, one can solve the optimization problem given in (4) under the same instantaneous decodability condition.
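One natural way to build such a channel-aware weight vector is to sum, over the receivers in S_j, each receiver's one-step GEC probability of being in the good state in the next slot. This is a sketch under our own naming assumptions (prob_good_next, channel_aware_weights); the paper's exact expression is its equation (7).

```python
def prob_good_next(last_slot_good, g, b):
    """One-step GEC prediction from the latest feedback:
    ACK (good now) -> good next with probability 1 - g;
    NAK (bad now)  -> good next with probability b."""
    return 1.0 - g if last_slot_good else b

def channel_aware_weights(A, p_good):
    """w_j = sum of P(good next slot) over the receivers in S_j,
    i.e. over the rows i of A with A[i][j] = 1."""
    return [sum(p for p, a in zip(p_good, col) if a)
            for col in zip(*A)]
```

For a persistent channel (large memory content), a NAKed receiver is likely to stay in erasure, so the packets it needs are deprioritized for one round rather than wasted on a bad link.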
We conclude this section by emphasizing that the optimization framework in (4) is very flexible in accommodating other possibilities for the weight vector w, which can be appropriately determined based on the application. For example, instead of allocating the same weight to a packet needed by a subset of receivers, one can allocate different weights to the same packet (looking column-wise at A) depending on the priorities or demands of each user. In the map update example described in the Introduction, different emergency units can adaptively flag to the base station different parts of the map as more or less important depending on their distance from a certain disaster zone. The task of the base station is then to send a packet combination that serves the largest total priority. One can also combine user-dependent packet weights with the channel state prediction outcomes in a GEC. One possibility is to multiply the probabilities by the receiver priority. It could then turn out that although a receiver is more likely to be in erasure in the next transmission round, it may be served because of a high-priority request.
In Section 3.2, we proposed efficient search algorithms for finding the optimal solution(s) of (4). However, there may be situations where one would like to obtain a (possibly suboptimal) solution much more quickly. This may be the case, for example, when the total number of packets to be transmitted is very large. Therefore, designing efficient heuristic algorithms to complement the optimal search is important. In this section, we propose a number of such heuristics.
5.1. Heuristic 1—Weight Sorted Heuristic Algorithm
The idea behind this recursive algorithm is very simple. As in Algorithm 1, we start with the original problem of size M. We then rearrange the columns of the matrix A in descending order of the weights w_j (starting from the packet with the highest weight). Note that this is different from the sorted pruning version of Algorithm 1, in which the columns of A were sorted in descending order of the number of 1's so as to potentially produce large constrained sets. We then set the head variable to 1 and find its corresponding constrained set to resolve the variables that are to be set to 0. We then solve the smaller problem and continue until the problem cannot be reduced further. One main difference between Heuristic 1 and Algorithm 1 is that at each recursion, the head variable is only set to 1; the other branch, with the head variable set to 0, is not pursued at all. In a sense, this heuristic algorithm finds a greedy solution to the problem at each recursion by serving the highest-priority packet. In this heuristic algorithm, all unconstrained variables are naturally set to 1 in the course of the algorithm. The computational complexity of this method is at worst proportional to M, which can happen when there are no constraints between packets.
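Heuristic 1 can be sketched in a few lines (our implementation of the description above, with an iterative loop in place of the recursion):

```python
def heuristic1(A, w):
    """Weight-sorted greedy heuristic: repeatedly pick the highest-weight
    unresolved packet, set it to 1, and zero out its constrained set;
    the head variable's 0-branch is never explored."""
    free = set(range(len(w)))
    chosen = []
    while free:
        head = max(free, key=lambda j: w[j])
        chosen.append(head)
        conflicts = {k for row in A if row[head]
                       for k in free if k != head and row[k]}
        free -= {head} | conflicts
    return sorted(chosen)
```

Every packet is resolved exactly once (either chosen or discarded as a conflict), so the number of greedy steps is at most M, matching the worst-case behavior noted above.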
5.2. Heuristic 2—Search Algorithm 1 with Maximum Recursions/Elapsed Time
It is possible to terminate the recursive search of Algorithm 1 prematurely once it reaches a maximum number of allowed recursions or a maximum elapsed time. If the algorithm reaches this value and the search is not complete, it performs a termination procedure whereby it heuristically resolves the remaining unresolved packets in the current incomplete solution. That is, it performs Heuristic 1 on the smaller problem that is yet to be solved. It then returns the best solution that has been found so far. We note that due to the extra termination procedure, the actual number of recursions or elapsed time can be (slightly) higher than the preset value.
Two comments are in order here. Firstly, Algorithm 1 is designed to sort the matrix based on the number of receivers that need a packet; it only reverts to sorting the unresolved variables based on the weight vector w in the termination process. Secondly, if the maximum number of recursions is set to one, Algorithm 1 just performs the termination process and becomes identical to Heuristic 1.
5.3. Heuristic 3—Dynamic Number of Recursions
This heuristic is based on Heuristic 2, but we dynamically increase the number of allowed recursions as needed. At each transmission round, we start with only one allowed recursion (effectively running Heuristic 1). If the achieved throughput is higher than a desired value, there is no need to proceed any further. (Let I denote the set of receivers that still need at least one packet. The achieved throughput at a time slot measures how well the found solution x serves the receivers in I, via an appropriate function of the receivers' needs: the number of receivers served for memoryless erasures, and the channel-state-weighted version for GECs; refer to Section 4 and (7).) Otherwise, we gradually increase the number of recursions by an appropriate step size. This heuristic stops when it either reaches the maximum allowed recursions or when increasing the number of recursions does not result in a noticeable improvement in throughput.
We start this section by presenting end-to-end decoding delay results for memoryless erasure channels. We then specialize to erasure channels with memory. The end-to-end problem is the complete transmission of all M packets. The end-to-end decoding delay of a receiver is the sum of its decoding delays in each transmission step. In the following, when we say "the delay performance of method X," we are referring to the delay performance of the end-to-end transmission, where method X is applied at each step.
In the course of presenting the results and based on the observed trends, we will discuss some secondary coding techniques and post-processing considerations that can improve the decoding delay. Throughout the analysis of this section, we assume independent erasures in different links with identical probabilities. Hence, we can drop the link subscript when referring to link erasure probabilities.
We note that the delays presented here (and also in the following figures) are, in fact, excess median or mean delays beyond the minimum required number of transmissions, which is M. A mean delay of 10 slots, for example, signifies an average overhead of 10 extra transmissions, which is the price paid for guaranteeing instantaneous decodability. In other words, one measure of throughput is M/(M + D̄), where D̄ is the mean delay across all receivers. An example is shown in Figure 3: for moderate numbers of receivers, Algorithm 1 and Heuristics 2 and 3 keep the average throughput loss small.
It is noted in Figure 4 that the first solution returned by Algorithm 1 performs almost the same as the minimum-coding solution. The reason is that Algorithm 1 first ranks the packets based on the number of receivers that need them. Therefore, the first solution picked by the algorithm is likely to contain packets with the largest constrained sets; hence, many resolved packets are set to zero, which often translates into a small amount of coding. Throughout this section, unless otherwise stated, we show the delay results based on the first returned solution of Algorithm 1.
In this paper, we provided an online optimal network coding scheme with feedback to minimize decoding delay in each transmission round in erasure broadcast channels. Efficient search algorithms for the optimal network coding solution, as well as heuristic methods, were presented, and their delay and computational performance were tested in several system scenarios. We found that adopting an optimized approach using as much information about the channel as possible, such as memory, leads to significantly better decoding delay. An interesting problem for future research is to relax the instantaneous decodability condition to decodability within a bounded number of steps and investigate the resulting delay-throughput tradeoff.
The authors wish to thank anonymous reviewers for their valuable comments which helped to improve the presentation of this paper. In the early stages of this work, the authors benefited from fruitful discussions with Ralf Koetter. This paper is dedicated to his memory. Preliminary results of this paper were presented in the 2009 Workshop on Network Coding, Theory and Applications (NetCod 2009), Lausanne, Switzerland. The work of P. Sadeghi was supported under ARC Discovery Projects funding scheme (Project no. DP0984950). The work of D. Traskov was supported by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications NEWCOM++ (Contract no. 216715).
- [1] Shokrollahi A: Raptor codes. IEEE Transactions on Information Theory 2006, 52(6):2551-2567.
- [2] Ho T, Medard M, Koetter R, et al.: A random linear network coding approach to multicast. IEEE Transactions on Information Theory 2006, 52(10):4413-4430.
- [3] Eryilmaz A, Ozdaglar A, Medard M: On delay performance gains from network coding. Proceedings of the IEEE Annual Conference on Information Sciences and Systems (CISS '06), March 2006, Princeton, NJ, USA, 864-870.
- [4] Lucani DE, Stojanovic M, Medard M: Random linear network coding for time division duplexing: when to stop talking and start listening. Proceedings of the IEEE Conference on Computer Communications (INFOCOM '09), April 2009, 1800-1808.
- [5] Sundararajan J-K, Shah D, Medard M: Feedback-based online network coding. Submitted to IEEE Transactions on Information Theory, http://arxiv.org/pdf/0904.1730v1
- [6] Sundararajan J-K, Sadeghi P, Medard M: A feedback-based adaptive broadcast coding scheme for reducing in-order delivery delay. Proceedings of the Workshop on Network Coding, Theory, and Applications (NetCod '09), June 2009, Lausanne, Switzerland.
- [7] Goyal VK: Multiple description coding: compression meets the network. IEEE Signal Processing Magazine 2001, 18(5):74-93.
- [8] Keller L, Drinea E, Fragouli C: Online broadcasting with network coding. Proceedings of the 4th Workshop on Network Coding, Theory, and Applications (NetCod '08), January 2008, Hong Kong.
- [9] Bertsimas D, Weissmantel R: Optimization over Integers. Dynamic Ideas, Belmont, Mass, USA; 2005.
- [10] Rappaport TS: Wireless Communications: Principles and Practice. 2nd edition. Prentice Hall, Upper Saddle River, NJ, USA; 2002.
- [11] Sadeghi P, Kennedy RA, Rapajic PB, Shams R: Finite-state Markov modeling of fading channels: a survey of principles and applications. IEEE Signal Processing Magazine 2008, 25(5):57-80.
- [12] Mushkin M, Bar-David I: Capacity and coding for the Gilbert-Elliot channels. IEEE Transactions on Information Theory 1989, 35(6):1277-1290.
- [13] Katti S, Rahul H, Hu W, Katabi D, Medard M, Crowcroft J: XORs in the air: practical wireless network coding. Proceedings of ACM SIGCOMM '06, October 2006, 243-254.
- [14] Sadeghi P, Traskov D, Koetter R: Adaptive network coding for broadcast channels. Proceedings of the Workshop on Network Coding, Theory, and Applications (NetCod '09), June 2009, Lausanne, Switzerland, 80-85.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.