Diversity Analysis, Code Design and Tight Error Rate Lower Bound for Binary Joint Network-Channel Coding

Joint network-channel codes (JNCC) can improve the performance of communication in wireless networks by combining, at the physical layer, the channel codes and the network code into an overall error-correcting code. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI model. The main performance metrics for JNCCs are scalability to larger networks and error rate. The diversity order is one of the most important parameters determining the error rate. The literature on JNCC is growing, but a rigorous diversity analysis is lacking, mainly because of the many degrees of freedom in wireless networks, which make it very hard to prove general statements on the diversity order. In this paper, we consider a network with slowly varying fading point-to-point links, where all sources also act as relays and additional non-source relays may be present. We propose a general structure for JNCCs to be applied in such a network. In the relay phase, each relay transmits a linear transform of a set of source codewords. Our main contributions are upper and lower bounds on the diversity order, a scalable code design and a new lower bound on the word error rate to assess the performance of the network code. The lower bound on the diversity order is only valid for JNCCs where the relays transform only two source codewords. We then validate this analysis with an example which compares the JNCC performance to that of a standard layered construction. Our numerical results suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction, both at a fundamental level, expressed by the outage probability, and at a practical level, expressed by the word error rate.

error performance. Cooperation may occur in many forms at different layers, e.g. cooperative channel coding at the physical layer and network coding at the network layer. Network coding refers to the case where the intermediate nodes in the network are allowed to perform encoding operations over multiple received streams from different sources. In a standard layered construction, the decoding of the network code is performed at the network layer, after the point-to-point transmissions have been decoded at the physical layer. Channel coding refers to the case where nodes perform coding over one point-to-point wireless link only. Cooperative channel coding is achieved by letting one or more relays transmit redundant bits for one source at a time. Usually, channel coding and network coding are studied separately (e.g. [9], [22], [27] for cooperative channel coding and [1], [23], [28], [34], [42] for network coding).
Joint network-channel coding (JNCC) has received much attention in recent years. This technique combines decode-and-forward relaying [25] (cooperative communication) with cross-layer design by using a network code which is accessible at the physical layer. The rationale behind joint network-channel coding is to improve the performance by combining, at the physical layer, the channel codes and the network code into an overall error-correcting code [15]. The two most important performance metrics are (R, P_e), where R is the spectral efficiency and P_e is the error rate (bit error rate or word error rate). Here, we consider a fixed spectral efficiency R, so that the aim is to minimize P_e for a given channel quality, expressed by γ, the signal-to-noise ratio (SNR). Expressing the asymptotic (for large γ) error rate as P_e = 1/(c γ^d), where c and d are defined as the coding gain and the diversity order, respectively, improving the performance refers to maximizing first d and then c (because d has the larger impact).
Standard linear network coding consists of taking linear combinations of several source packets, well known in the binary case as simple XOR operations. In general, non-binary coefficients are used in the linear combinations. However, when the network code is used at the physical layer to decode the noisy channel output, this simple technique might yield poor error performance. Therefore, powerful network codes, consisting of linear transformations of the incoming information packets, have been introduced. We denote this methodology as generalized linear network coding (GLNC). The well-known standard linear network codes, taking linear combinations, are a special case of GLNC. Combining GLNC with channel coding is denoted as joint network-channel coding (JNCC). The JNCC, which is the overall code comprising the channel codes and the network code, can for example be an LDPC code or a Turbo code. Of course, while JNCC brings more degrees of freedom and opens perspectives for a higher coding gain c, it must be verified that important metrics, such as the diversity order d and the scalability to large networks, are not negatively affected.
Binary JNCC has already been studied in the literature. Pioneering papers [17], [18] designed Turbo codes and LDPC codes, respectively, for the multiple access relay channel (MARC) and for the two-way relay channel [19]. However, the code design was not immediately scalable to general large networks and did not contain the structure required to achieve full diversity. In [10], [11], a full-diversity JNCC for the MARC was proposed, but it was not extended to large networks. The work of Hausl et al. [17]-[19] was followed by the interesting work of Bao et al. [2], presenting a JNCC that is scalable to large networks. However, this JNCC was not structured to achieve full diversity and has weak points from a coding point of view [12]. A deficiency in the literature, for general networks with sources and relays, is the lack of a detailed diversity analysis in the case that the sources can act as relays (which is for example the model assumed by [2]). The effect of the parameters of the JNCC on the diversity order is in general not known, because it is very hard to prove general statements on the diversity order in an environment with so many degrees of freedom. This paper is a modest attempt to contribute to the solution of this problem. Related to this, we mention [29], [30], where the authors designed a JNCC for the case where the sources cannot act as relays, but other nodes play the role of relay to communicate to one destination. As the source nodes are excluded from acting as relay nodes in this model, the diversity analysis in [29], [30] is different from ours.
In this paper, we consider a JNCC where the network code forms an integral part of the overall error-correcting code that is used at the destination to decode the information from the sources. The body of this paper consists of two main parts. First, in Sec. IV, we perform a diversity analysis, leading to an upper bound on the diversity order of any linear binary JNCC following our system model, and to a lower bound on the diversity order for a particular subset of linear binary JNCCs. The upper and lower bounds depend on the parameters of the JNCC and can be used to verify whether a particular JNCC has the potential to achieve full diversity on a certain network. Secondly, in Sec. V, a specific JNCC of the LDPC type is proposed that achieves full diversity for a well-identified set of wireless networks. The scalability of this specific JNCC to large networks is discussed. The coding gain c is not considered in the body of the paper, and the parameters of our proposed code may be further optimized by applying techniques such as in [29] to maximize c. To assess the performance of the proposed JNCC, we determine the outage probability, a well-known lower bound on the word error rate, in Sec. VI. We also present a tighter word error rate lower bound in Sec. VI-B that takes into account the particular structure of the JNCC. In Sec. VII, the numerical results corroborate the established theory. We also briefly comment on the coding gain achieved by the proposed JNCC, and conclusions are drawn for different classes of large networks. This paper extends the work published in [12] by also considering non-perfect source-relay channels, by considerably extending the diversity analysis, by providing an achievability proof for the diversity order of the proposed JNCC, by clearly indicating the set of wireless networks where the proposed JNCC is diversity-optimal, by providing a tighter lower bound on the word error rate, and by providing more numerical results.

II. JOINT NETWORK-CHANNEL CODING
We first illustrate joint network-channel coding by means of a simple example. Consider two sources orthogonally broadcasting a vector of symbols, mapped from the binary vectors s_1 and s_2, respectively, to a relay and a destination. This channel is denoted as a multiple access relay channel (MARC) in the literature. Supposing that the relay is able to decode the received symbols, the relay computes a binary vector r_1, which is mapped to symbols and transmitted to the destination. The relation between all bits is expressed by the JNCC, whose parity-check matrix has the following general form:

H = [ H_p    0      0
      0      H_p    0
      0      0      H_p
      H^1_1  H^1_2  H^1 ] ,   (1)

applied to the overall codeword [s_1^T, s_2^T, r_1^T]^T. The bottom set of rows, H_GLNC = [H^1_1 H^1_2 H^1], represents the generalized linear network code (GLNC). Note that GLNC includes standard network codes used in an OSI communication model as a special case. In the latter case, the matrices H^i_j and H^i (considering more than one relay in general) are identity matrices or all-zero matrices, so that the network code simplifies to the relay packet being a linear combination of source packets, also expressed as XORing of packets or symbol-wise addition of packets.
Ideally, the overall matrix H conforms to optimized degree distributions that specify an LDPC code. When the channels between the sources and the relay are perfect, we can drop the first three sets of rows and only keep the GLNC, represented by H_GLNC; in this case the information bits of the code are s_1 and s_2, and r_1 contains the parity bits. This is still a JNCC, as the redundancy in the network code is used to decode the received symbols on the physical layer at the destination. In [10], [11], it is proved that the matrices H_p do not affect the diversity order in the case of the MARC.
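As a concrete numerical illustration of Eq. (1) (our own toy sketch, not an example from the paper), the snippet below uses a length-4 single parity-check code as H_p and the XOR special case H^1_1 = H^1_2 = H^1 = I, and verifies that the stacked vector [s_1; s_2; r_1] is a valid codeword of the overall matrix H over GF(2):

```python
import numpy as np

# Toy single parity-check code: K = 3 info bits, L = 4, H_p = [1 1 1 1].
K, L = 3, 4
H_p = np.ones((1, L), dtype=int)

def encode(info):
    """Systematic encoding: append an even-parity bit so H_p @ s = 0 (mod 2)."""
    return np.append(info, info.sum() % 2)

s1 = encode(np.array([1, 0, 1]))
s2 = encode(np.array([0, 1, 1]))
r1 = (s1 + s2) % 2        # XOR network code: the GLNC special case with identities

# Overall parity-check matrix of Eq. (1): three point-to-point blocks + GLNC rows.
Z = np.zeros((1, L), dtype=int)
I = np.eye(L, dtype=int)
H = np.block([[H_p, Z,   Z  ],
              [Z,   H_p, Z  ],
              [Z,   Z,   H_p],
              [I,   I,   I  ]])
x = np.concatenate([s1, s2, r1])
print((H @ x) % 2)        # all zeros: x is a valid JNCC codeword
```

With non-trivial K × L transforms in place of the identity matrices, the same membership check applies to a genuine GLNC.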

III. SYSTEM MODEL
We consider wireless networks with m_s sources directly communicating to a common destination (e.g. cellphones communicating to a base station). Two time-orthogonal phases are distinguished. In the source phase, the sources orthogonally broadcast their respective source packets. In the following relay phase, the relays orthogonally broadcast their respective packets. All considered sources overhear each other during the source phase and act as relays in the relay phase. Other nodes, not acting as a source, might be present in the network (i.e., overhearing the sources) and also act as relays. Hence, we consider a total of m_r relays, where m_r ≥ m_s. This general network model, which is practically relevant as it fits many applications, is adopted in e.g. [2]. Take for example any large network and consider a volume in space (cf. picocells or femtocells) where all nodes can overhear each other. These nodes form sub-networks and can be described by our proposed model. Note that in the literature, other models are sometimes assumed, such as the M − N − 1 model [29], [30], where M sources are helped by N relays (the relays being nodes different from the sources) to communicate to one destination.
All devices have one antenna, are half-duplex and transmit orthogonally using BPSK modulation. The K information bits of each source are encoded via point-to-point channel codes into a systematic codeword, denoted as source codeword, of length L, expressed by the column vector s_{u_s} for user u_s, u_s ∈ {1, ..., m_s}. The parity-check matrix of dimension (L − K) × L of this point-to-point code is denoted by H_p, which is the same for each user u_s, so that H_p s_{u_s} = 0 for all u_s. In the relay phase, each relay u_r, u_r ∈ {1, ..., m_r}, transmits a point-to-point codeword r_{u_r} of length L to the destination, also satisfying H_p r_{u_r} = 0. Hence, all slots have equal duration, the coding rate of the point-to-point channels is R_{c,p} = K/L, and the overall coding rate is R_c = m_s K / ((m_s + m_r) L) = R_{c,p} m_s / (m_s + m_r). We define the fraction of source transmissions in the total number of transmissions as the network coding rate R_n = m_s / (m_s + m_r), so that R_c = R_{c,p} R_n. The overall codeword of length (m_s + m_r)L is expressed by the column vector x = [s_1^T, ..., s_{m_s}^T, r_1^T, ..., r_{m_r}^T]^T. The destination declares a word error if it cannot perfectly retrieve all m_s K information bits, and the overall word error rate is denoted by P_{ew}. All relevant channels between different pairs of network nodes are assumed independent and memoryless, with real additive white Gaussian noise and multiplicative real fading (Rayleigh distributed with expected squared value equal to one). The fading coefficient of a wireless link is only known at the receiver side of that link. We consider a slow fading environment with a finite coherence time that is longer than the duration of the source phase and the relay phase, so that the fading gain between two network nodes takes the same value during both phases. We denote the fading gain from node u to the destination as α_u, with E[α_u^2] = 1. All point-to-point channels have the same average signal-to-noise ratio (SNR), denoted by γ.
Differences in average SNR between the channels would not alter the diversity analysis, on the condition that the large-SNR behaviour inherent to a diversity analysis refers to all SNRs being large. Denoting the received symbol vector at the destination in timeslot i as y_i, the channel equations are

y_{u_s} = α_{u_s} s'_{u_s} + n_{u_s},                u_s = 1, ..., m_s,
y_{m_s + u_r} = α_{u_r} r'_{u_r} + n_{m_s + u_r},    u_r = 1, ..., m_r,

where n_i ~ N(0, (1/γ) I) is the noise vector in timeslot i, s'_{u_s} = 2 s_{u_s} − 1 and r'_{u_r} = 2 r_{u_r} − 1 (BPSK modulation). Hence, at the destination, each of the m_s independent fading gains between the sources and the destination affects 2L bits (L bits in the source phase and L bits in the relay phase), and each of the m_r − m_s fading gains between the non-source relays and the destination affects L bits, assuming that all m_r relays could decode the messages received from the sources. Hence, from the point of view of the destination, the overall codeword is transmitted on a block fading (BF) channel with m_r blocks, each affected by its own fading gain, where m_s blocks have length 2L and m_r − m_s blocks have length L. This notion will be essential in the subsequent diversity analysis (Sec. IV).
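The channel equations translate directly into a simulation sketch (all parameter values below are our own choice; note the Rayleigh scale √0.5, which gives E[α_u^2] = 1, and the reuse of each source's fading gain in the relay phase, per the slow-fading assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
m_s, m_r, L = 2, 2, 8
gamma = 10.0                       # average SNR of every point-to-point channel

# One fading gain per node-to-destination link; a source's gain also applies
# to its relay-phase transmission (same coherence interval, Sec. III).
alpha = rng.rayleigh(scale=np.sqrt(0.5), size=m_r)   # E[alpha^2] = 1

def transmit(bits, a):
    """y = alpha * (2b - 1) + n, with real AWGN of variance 1/gamma."""
    n = rng.normal(0.0, np.sqrt(1.0 / gamma), size=bits.shape)
    return a * (2 * bits - 1) + n

s = rng.integers(0, 2, size=(m_s, L))    # source-phase packets
r = rng.integers(0, 2, size=(m_r, L))    # relay-phase packets (placeholder content)
y_src = [transmit(s[u], alpha[u]) for u in range(m_s)]
y_rel = [transmit(r[u], alpha[u]) for u in range(m_r)]
```

Here m_s = m_r, so every relay is a source; a non-source relay would simply contribute one more independent gain affecting only L bits.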
In the source phase, relay u_r attempts to decode the received symbols from the sources belonging to its decoding set S(u_r). The sources that are successfully decoded at relay u_r are added to its retrieval set, denoted by R(u_r), with R(u_r) ⊂ S(u_r) and cardinality l_{u_r}. Next, in the relay phase, relay u_r transmits a relay packet, which is a linear transformation of n_{u_r} source codewords originated by the sources from the transmission set T(u_r) = {u_1, ..., u_{n_{u_r}}} of relay u_r, with T(u_r) ⊂ R(u_r). If l_{u_r} < n_{u_r}, then relay u_r does not transmit anything. In Sec. IV, we show that n_{u_r} is an important parameter that strongly affects the diversity order.
For example, user 3 attempts to decode the messages from users 1, 2 and 5, and succeeds in decoding the messages from users 1 and 5 from which a linear transformation is computed.
Because the channel between a node and the destination remains constant during both source and relay phases, a relay has no interest in including its own source message in S(u r ).
Using the transmission set of each relay, the GLNC in Eq. (2) generalizes to

H^{u_r} r_{u_r} + Σ_{u_s ∈ T(u_r)} H^{u_r}_{u_s} s_{u_s} = 0,   u_r = 1, ..., m_r,   (5)

where the matrices H^{u_r} and H^{u_r}_{u_s} are of dimension K × L. Hence, each transmitted relay codeword r_{u_r} is a linear transformation of n_{u_r} source codewords. The superscript u_r in H^{u_r}_{u_s} indicates that the vector s_{u_s} is in general not transformed by the same matrix for all relays u_r where u_s ∈ T(u_r). The overall parity-check matrix H is thus expressed as

H = [ H_c
      H_GLNC ] ,   (6)

where H_c is block diagonal with H_p on its diagonal, representing the channel code, and H_GLNC, defined in Eq. (7), collects the K rows of Eq. (5) for each relay: block row u_r contains H^{u_r}_{u_s} in source column block u_s for u_s ∈ T(u_r), H^{u_r} in relay column block u_r, and all-zero matrices elsewhere.

IV. DIVERSITY ANALYSIS OF JNCC
Before passing to the actual diversity analysis, we provide the well-known formal definition of the diversity order [39].

Definition 1 The diversity order attained by a code C is defined as

d = − lim_{γ→∞} log P_{ew} / log γ.

In other words, P_{ew} ∝ γ^{−d}, where ∝ denotes proportionality.
In the proofs of the propositions in this paper, we will often use the diversity equivalence between a BF channel and a block binary erasure channel (block BEC), which was proved in [5], [6]. A block BEC is obtained by restricting the fading gains in our model to the set {0, ∞}, so that a point-to-point channel is either erased or perfect. Denoting the erasure probability Pr[α_{u_r} = 0] by ε, a diversity order d is achieved if P_{ew} ∝ ε^d for small ε [14]. A diversity order of d is thus achievable if there exists no combination of d − 1 erased point-to-point channels leading to a word error. On the other hand, a diversity order of d is not achievable if there exists at least one combination of d − 1 erased channels leading to a word error.
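This erasure-counting argument is easy to mechanize. The sketch below (a hypothetical toy network, not the paper's construction) searches for the smallest number of erased node-to-destination channels that makes decoding fail; the decodability predicate used here only checks that every source is still heard over at least one surviving channel, i.e., a reachability condition rather than full rank decodability:

```python
from itertools import combinations

def diversity_order(m, decodable):
    """Block-BEC view: the achieved diversity order is the smallest number
    of erased point-to-point channels that can cause a word error."""
    for e in range(1, m + 1):
        for erased in combinations(range(m), e):
            if not decodable(set(erased)):
                return e
    return m + 1

# Hypothetical toy network with 5 channels: sources 0..2 on channels 0..2,
# relay on channel 3 forwarding sources {0, 1}, relay on channel 4 forwarding
# sources {1, 2}.  paths[u] = channels carrying information on source u.
paths = {0: {0, 3}, 1: {1, 3, 4}, 2: {2, 4}}
ok = lambda erased: all(p - erased for p in paths.values())
print(diversity_order(5, ok))   # 2: erasing channels {0, 3} silences source 0
```

A single erasure is never fatal here (every source is carried by at least two channels), while the pair {0, 3} is, so the achieved order is exactly two.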
In this section, we present the relation between the diversity order d and the parameters {n_{u_r}, u_r = 1, ..., m_r}, as well as between d and the choice of {T(u_r), u_r = 1, ..., m_r}. This guides the code design; furthermore, the potential of a linear binary JNCC satisfying certain conditions to achieve full diversity can be verified without performing Monte Carlo simulations.
We first prove that the maximum achievable diversity order is a function of the network coding rate R_n only (Sec. IV-A). We then determine in Sec. IV-B the relation between the diversity order d and the set {n_{u_r}, u_r = 1, ..., m_r}, for any linear binary JNCC expressed as in Eqs. (6) and (7). The set {n_{u_r}, u_r = 1, ..., m_r} actually determines the maximal spatial diversity that can be achieved by cooperation, leading to an upper bound on the diversity order. In Sec. IV-C, we propose a lower bound on the diversity order in the case that n_{u_r} = n = 2, which depends on all transmission sets {T(u_r), u_r = 1, ..., m_r}. In Sec. IV-D, we discuss how the diversity order is affected by interuser failures. Finally, in Sec. IV-E, we briefly comment on the diversity order in a layered construction, such as the OSI model.

A. Diversity as a function of the network coding rate
We denote the maximum achievable diversity order by d_max. We will determine d_max in this section and show that it only depends on the network coding rate R_n = m_s / (m_s + m_r).

Proposition 1 Under ML decoding, the maximum diversity order d_max that can be achieved by any linear JNCC is given in Eq. (8).
Proof: See App. A.
Note that the maximal diversity order does not depend on L. For m_r = m_s = m, Eq. (8) reduces to the maximum diversity order of a standard BF channel with m blocks and coding rate R_n [13], [24], [31]. Hence, the maximum diversity order does not change when the point-to-point channel coding rate R_c,p changes. This corresponds with our intuition: the parity bits of the point-to-point codes only provide redundancy within the single block forming a point-to-point codeword, hence these parity bits cannot combat erasures which affect the complete point-to-point codeword. Another consequence is that the maximal diversity order of JNCC cannot be larger than in a layered approach with the same network coding rate.
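For the equal-block special case m_r = m_s = m just mentioned, the standard Singleton-type bound for block fading from [24], [31] is d = 1 + ⌊m(1 − R_n)⌋. The sketch below (our own illustration, not an excerpt from the paper) evaluates it for R_n = 1/2, matching the later claim that a diversity order of three is full diversity for m = 4 and m = 5 but not for m = 6:

```python
from math import floor

def d_max_equal(m, R_n):
    """Singleton-type diversity bound for a block-fading channel with m
    equal-length blocks and coding rate R_n (the m_r = m_s = m case)."""
    return 1 + floor(m * (1 - R_n))

# All sources act as relays and m_r = m_s, so R_n = 1/2.
for m in (4, 5, 6):
    print(m, d_max_equal(m, 0.5))   # 4 -> 3, 5 -> 3, 6 -> 4
```

For the general case with unequal block lengths, the bound of Eq. (8) in the paper should be used instead; this sketch covers only the reduction stated above.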
In the remainder of the paper, full diversity refers to the diversity order being equal to the maximal diversity order, d = d_max, from (8).

B. Space diversity by cooperation
We denote the word error rate for source u_s by P_{ew,u_s}, which is the fraction of packets where at least one of the K information bits from source u_s is erroneously decoded at the destination. Associated with P_{ew,u_s}, we define d_{u_s}, so that P_{ew,u_s} ∝ γ^{−d_{u_s}} for large γ. We have that max_{u_s} P_{ew,u_s} ≤ P_{ew} ≤ Σ_{u_s} P_{ew,u_s}. From Def. 1, it follows that

d = min_{u_s} d_{u_s}.   (10)

Denote by t_{u_s}, u_s ∈ {1, ..., m_s}, the number of times that source u_s is included in the transmission set of a relay: t_{u_s} = Σ_{u_r ≠ u_s} 1(u_s ∈ T(u_r)), where 1(.) is the indicator function, which equals one when its argument is true and zero otherwise. Some simple measures can be determined: t_min = min_{u_s} t_{u_s} and t_av = (Σ_{u_r=1}^{m_r} n_{u_r}) / m_s. We will show that d_{u_s} depends on t_{u_s} and thus, by Eq. (10), d depends on t_min. We denote 1 + t_min by d_R, which we call the space diversity order, as it is the minimal number of channels that convey a source message to the destination.
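These quantities follow directly from the transmission sets. The sketch below computes t_{u_s}, t_min, t_av and d_R for a hypothetical cyclic choice of {T(u_r)} with m_s = m_r = 5 (0-indexed for convenience; this particular choice is ours, for illustration):

```python
# Hypothetical cyclic transmission sets for m_s = m_r = 5:
# relay u helps sources u+1 and u+2 (mod 5).
m_s = 5
T = {u: {(u + 1) % m_s, (u + 2) % m_s} for u in range(m_s)}

t = [sum(1 for Tu in T.values() if u_s in Tu) for u_s in range(m_s)]
t_min = min(t)
t_av = sum(len(Tu) for Tu in T.values()) / m_s
d_R = 1 + t_min      # space diversity order: 1 direct channel + t_min relay channels
print(t, t_min, t_av, d_R)   # [2, 2, 2, 2, 2] 2 2.0 3
```

Every source is covered by exactly two relays, so t_min = t_av = 2 and d_R = 3, the best possible for n = 2 with m_s = m_r.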

Proposition 2 For any linear JNCC, applied in our system model, the diversity order d is upper bounded as

d ≤ d_R = 1 + t_min.   (11)
Proof: We use the diversity equivalence between a BF channel and a block BEC [5], [6]. Assume that the channel between source u_s and the destination is erased. Source u_s is included in t_{u_s} transmission sets. Assume that all t_{u_s} channels between the destination and the relays that include source u_s in their transmission set are also erased. Then the destination does not receive any information on source u_s, so that it can never retrieve its message. The probability of occurrence of this event is ε^{1+t_{u_s}}, so that P_{ew,u_s} ≥ ε^{1+t_{u_s}}, hence d_{u_s} ≤ 1 + t_{u_s}. Using Eq. (10), we obtain Prop. 2.
Note that the proof of Prop. 2 is based on the assumption that relay u_r only considers packets transmitted in the source phase for inclusion in S(u_r). In the case that relay u_r computes its relay packet also based on packets transmitted by other relays during the relay phase, the diversity order becomes more difficult to analyse.
In Cor. 1, we propose the conditions on t min so that the space diversity order d R is not smaller than the maximum achievable diversity order.

Corollary 1 For any linear JNCC, applied in our system model, full diversity can be achieved only if

t_min ≥ d_max − 1.   (12)
The proof follows directly from Props. 1 and 2.
Given a GLNC, and thus a choice of T (u r ), one can verify through Cor. 1 whether full diversity can be achieved. However, to get more insight for the code design, we consider the simplest case of a network code where the cardinality of the transmission set is constant (n ur = n).

Corollary 2
For any linear JNCC, applied in our system model, with constant n_{u_r} = n, full diversity can be achieved only if

n ≥ ⌈ m_s (d_max − 1) / m_r ⌉.   (13)

Proof: It always holds that t_min ≤ ⌊t_av⌋, and if n_{u_r} = n, then t_av = m_r n / m_s. From Cor. 1, full diversity can be achieved only if ⌊m_r n / m_s⌋ ≥ q, with q = d_max − 1. Because m_r n / m_s ≥ ⌊m_r n / m_s⌋, we have the necessary condition that n ≥ q m_s / m_r. As n is an integer, this bound can be tightened, yielding n ≥ ⌈q m_s / m_r⌉. Filling in q from Cor. 1 yields Cor. 2.
Table I illustrates Cor. 2, showing the set of networks in which a certain parameter n is diversity-optimal, which means that the choice of n does not prevent the code from achieving full diversity. In Sec. V, we propose a JNCC for n = 2, where taking n = 2 is diversity-optimal in all networks corresponding to bold elements in Table I.

C. A lower bound based on the transmission sets
A certain relay does not help one source only, but a combination of sources, expressed by the transmission set T(u_r) for each relay u_r. In this section, we provide a lower bound on the diversity order based on the choice of {T(u_r), u_r = 1, ..., m_r}. If this lower bound and the upper bound from the previous section are tight, the exact diversity order of a JNCC can thus be determined, as will be illustrated in Sec. V.
Based on T(u_r), m_s and m_r, we construct the (m_s + m_r) × m_s coding matrix M, where

M_{i,u_s} = 1 if s_{u_s} is considered in transmission i, and M_{i,u_s} = 0 otherwise,   (14)

with i = 1, ..., m_s and i = m_s + 1, ..., m_s + m_r corresponding to the source and relay transmission phases, respectively. Therefore, the upper part of M is an identity matrix, as each source u_s transmits its own codeword s_{u_s} in the source phase. The matrix M represents what is often called the "coding header" or "the global coding coefficients" in the network coding literature (see e.g. [7]). Consider a block BEC channel where e of the m_r blocks have been erased. The indices of the fading gains corresponding to the erased blocks are collected in the set E = {E_1, ..., E_e}. Based on E, we construct M_E, which corresponds to the subset of transmissions that are not erased, i.e., all rows E_i (if E_i ≤ m_s) and m_s + E_i, for i = 1, ..., e, in M are dropped. We denote the rank of M_E as r_{M_E}. The set M(e) collects all possible matrices M_E which can be constructed from M if |E| = e.
Consider an example for m_s = m_r = 3. Assume that

M = [ 1 0 0
      0 1 0
      0 0 1
      0 1 1
      1 0 1
      1 1 0 ] ,

i.e., T(1) = {2, 3}, T(2) = {1, 3} and T(3) = {1, 2}. Next, assume that E = {1}. Hence, the channel between user 1 and the destination is erased, so that rows 1 and 4 from M are dropped:

M_E = [ 0 1 0
        0 0 1
        1 0 1
        1 1 0 ] ,

and r_{M_E} = 3. It can be verified that all matrices M_E ∈ M(1) have rank r_{M_E} = 3. However, there exist matrices M_E ∈ M(2) having rank r_{M_E} < 3.
We can now define a metric that depends on {T(u_r)}.

Definition 2 The metric d_M is defined as d_M = 1 + e_max, where e_max is the largest e such that r_{M_E} = m_s for all M_E ∈ M(e).

A simple computer program can compute d_M, given T(u_r), m_s and m_r.
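Such a program is short: build M from the transmission sets, then increase e until some M_E loses rank over GF(2); d_M is taken here as one plus the largest e for which every M_E ∈ M(e) has full column rank (our reading of Def. 2, consistent with Lemma 1 and the example above). The transmission sets below are a hypothetical cyclic choice:

```python
import numpy as np
from itertools import combinations

def gf2_rank(A):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def d_M(T, m_s, m_r):
    """d_M = first e for which some M_E with |E| = e is rank deficient
    (equivalently 1 + e_max).  T maps 0-indexed relays to source sets."""
    M = np.zeros((m_s + m_r, m_s), dtype=int)
    M[:m_s, :m_s] = np.eye(m_s, dtype=int)
    for u_r, Tu in T.items():
        for u_s in Tu:
            M[m_s + u_r, u_s] = 1
    for e in range(1, m_r + 1):
        for E in combinations(range(m_r), e):          # erased channels
            keep = [i for i in range(m_s) if i not in E] + \
                   [m_s + u for u in range(m_r) if u not in E]
            if gf2_rank(M[keep]) < m_s:
                return e
    return m_r + 1

# Hypothetical cyclic sets: relay u helps sources u+1 and u+2 (mod m).
T5 = {u: {(u + 1) % 5, (u + 2) % 5} for u in range(5)}
print(d_M(T5, 5, 5))   # 3, the maximum possible for n = 2 (Lemma 1)
```

For m_s = m_r = 3 the same cyclic choice gives d_M = 2, in line with the example above where some M_E ∈ M(2) is rank deficient.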

Lemma 1
In a JNCC following the form of Eq. (6) with m_s = m_r and constant n_{u_r} = n = 2, the metric d_M is at most three.
Proof: If m_s = m_r and n = 2, then the minimum column weight of M is smaller than or equal to three. Erasing the three rows where M_{i,u_s} = 1, for a u_s corresponding to the minimum column weight, leads to M_E having at least one zero column, and thus r_{M_E} < m_s. By Def. 2, d_M < 4.
In the next proposition, we provide a lower bound on the diversity order under ML decoding or belief propagation (BP) decoding [32]. We denote the stacked matrices H̃^{u_r}_{u_s} = [H_p^T (H^{u_r}_{u_s})^T]^T, which are square matrices of dimension L.

Proposition 3
Using ML decoding, the diversity order of a JNCC following the form of Eq. (6) with constant n_{u_r} = n = 2 is lower bounded as

d ≥ d_M,   (15)

if the matrices H^{u_r}_{u_s}, u_s ∈ T(u_r), u_r ∈ {1, ..., m_s}, have full rank.
Using BP decoding, the diversity order of such a JNCC is lower bounded as in (15) if, for each u_r, the set of L equations given by the point-to-point constraints together with the GLNC constraints of relay u_r, denoted (16), can be solved with BP whenever only one source-codeword vector is unknown. In Sec. V, we propose a JNCC where the parity bits of point-to-point codes do not have a support in H_GLNC, so that we take (17), the GLNC constraints of relay u_r alone, instead of (16) as the condition for BP decoding in the remainder of the paper.

D. Diversity order with interuser failures
It is often easier to prove that a particular diversity order is achieved assuming perfect interuser channels (see for example in Sec. V). Here, we discuss how this diversity order is affected by interuser failures.

Lemma 2
In the case of non-reciprocal interuser channels, any JNCC achieves the same diversity order with or without interuser channel failures.
Proof: See Appendix C. In the case of reciprocal interuser channels, the achieved diversity order with interuser failures depends on the transmission sets {T (u r ), u r = 1, . . . , m r }. We propose an algorithm to construct {T (u r )} in Sec. V and we will then discuss the diversity order with reciprocal interuser channels.

E. Diversity order in a layered construction
In a layered construction, such as the standard OSI model, the destination first attempts to decode the point-to-point transmissions. If it cannot successfully retrieve the transmitted point-to-point codeword for a particular node-to-destination channel, it declares a block erasure, where a block refers to one point-to-point codeword. Denoting this block erasure probability by ε, we have that ε ∝ 1/γ [39]. If, for example, e blocks of length L are erased, then the decoding corresponds to solving a set of equations with eL unknowns.
Standard linear network coding consists of taking linear combinations of several source packets. In general, non-binary coefficients are used in the linear combinations. Hence, packets are treated symbol-wise, which is shown to be capacity-achieving for the layered construction [23]. Hence, in Eq. (5), the matrices H^{u_r} and H^{u_r}_{u_s} are replaced by identity matrices, which are multiplied with a non-binary coefficient in general. A consequence of this symbol-wise treatment is that the effective block length of the network code reduces to m_s + m_r, and the set of equations is expressed by the coding matrix M_E. At this block length, ML decoding (which is equivalent to Gaussian elimination at the network layer) has low complexity. Therefore, non-random linear network codes that are maximum distance separable (MDS) achieve the diversity order d_M (Def. 2). Also note that random linear network codes are MDS codes with high probability for a sufficiently large field size [21].
V. PRACTICAL JNCC FOR n_{u_r} = 2

In the literature, a detailed diversity analysis is most often lacking. Codes were proposed, and corresponding numerical results suggested that a certain diversity order was achieved on a specific network. It is sometimes not clear why this diversity order is achieved, and how it would vary if the network or some parameters change. In the previous section, we made a detailed diversity analysis of a JNCC following the form of Eq. (6). However, the utility of, for example, Prop. 3 is limited to JNCCs following the form of Eq. (6) with a constant n_{u_r} = 2, which suggests that it is very hard to rigorously prove diversity claims in general. However, the modest analysis made in Sec. IV can be applied in some cases, and we will show its utility through an example.
We consider networks with m_s = m_r = m ≥ 4 and a JNCC following the form of Eq. (6) with n_{u_r} = n = 2 for u_r = 1, ..., m. We will rigorously prove that a diversity order of three is achieved, using the propositions of Sec. IV. From Table I, it can be seen that this JNCC is diversity-optimal for m = 4 and m = 5. In Sec. VII, we provide numerical results for m = 5.
From Table I, it is clear that restricting n to two is not diversity-optimal in larger networks. However, it also has some advantages. If n = 2, then every relay needs to decode only two users, and encoding is restricted to taking a linear transformation of only two source packets. Furthermore, taking n = 2 does not impose infeasible constraints on the number of sources in the vicinity of a relay in the case that spatial neighbourhoods are taken into account. Moreover, the theoretical analysis is simpler in the case n = 2. Finally, taking n = 2 allows reusing strong codes designed for the multiple access relay channel, e.g. [10], [11].
Besides the diversity order, we indicated in Sec. I that scalability is also very important. The JNCC proposed here is scalable to any large network without requiring a redesign of the code; that is, we provide an on-the-fly construction method. The latter is particularly important for self-regulating networks: as a node adds itself to the network, it can seamlessly integrate into the network. Together with the new symbols sent by the new node, a new JNCC is formed which still possesses all desirable properties. Finally, note that due to the large block length of the JNCC, ML decoding is too complex, and low-complexity techniques, such as BP decoding, must be used.
Hence, two properties are claimed: scalability to large networks and a diversity order of three (which is full diversity in some cases) under BP decoding. The JNCC code is presented in two steps. First, we present the design of {T (u r )} and thus the coding matrix M . In a second step (Eq. (20)), we specify the matrices H ur and H ur us and we will prove that the scalability and the diversity order of three are achieved.

A. First step: design of T (u r )
The transmission sets {T(u_r)} have a large impact on the diversity order. For example, in [12], a random construction was studied (each relay chooses n = 2 sources at random) and it was shown that E[t_{u_s}] = 2, but Var[t_{u_s}] = 2 as well, so that most probably t_min < 2 and d_R < 3 (Prop. 2). Hence, we need a more intelligent construction.
We present an algorithm to determine {T(u_r)}, given m_s and m_r, and we subsequently determine the corresponding metrics t_min and d_M. We define the function f_{m_s}(x) = ((x − 1) mod m_s) + 1, which adapts the modulo operation to the range 1 ≤ f_{m_s}(x) ≤ m_s.
The transmission set T (u r ) is expressed via the bottom part of M . An example of such a matrix M is given in Eq. (18) for m s = m r = 5.
If a node is added as a source node, it adopts the largest source index, m s +1, and relay-only nodes, with indices larger than or equal to m s + 1, increment their index by one. The function f ms (x) is updated to the new m s . Note that the algorithm corresponds to a deterministic cooperation strategy, which avoids extra signalling to the destination regarding the code design.
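The cyclic construction sketched below is a hypothetical reconstruction of Alg. 1 (whose listing is not reproduced here), chosen to be consistent with the properties the text relies on: each source appears in exactly two transmission sets, source 1 lies in T (m − 1) and T (m) (Cor. 3), and i ∉ T (j) whenever j ∈ T (i) for m s > 4 (Lemma 3).

```python
def f(x, ms):
    # f_ms(x) = ((x - 1) mod ms) + 1: modulo operation shifted to 1..ms
    return ((x - 1) % ms) + 1

def transmission_set(ur, ms):
    # Assumed rule (Alg. 1 itself is not reproduced in the text): relay
    # u_r helps the next two sources in cyclic order, so every source
    # appears in exactly two transmission sets (t_min = t_av = 2).
    return {f(ur + 1, ms), f(ur + 2, ms)}

# Bottom part of the coding matrix M for ms = mr = 5 (cf. Eq. (18)):
# row u_r has ones exactly at the positions in T(u_r).
ms = 5
M_bottom = [[1 if s in transmission_set(ur, ms) else 0
             for s in range(1, ms + 1)]
            for ur in range(1, ms + 1)]
```

Under this assumed rule, cooperation is non-reciprocal for m s > 4 (i ∉ T (j) whenever j ∈ T (i)), as used in Lemma 3 and Sec. VII, while for m s = 4 the pairs (1, 3) and (2, 4) are reciprocal, matching App. D.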
We first consider the case of perfect interuser channels and prove that Alg. 1 yields d = 3 (Cor. 3). We then consider interuser failures and prove that the diversity order is not affected (Lemma 3).

Corollary 3
With perfect links from sources to relays, a JNCC with m s = m r and with transmission sets constructed via Algorithm 1 achieves a diversity order d = 3 using BP decoding, if, for each u r , Eqs. (17) can be solved with BP in the case of only one unknown source-codeword vector.
Proof: Because the links between sources and relays are perfect, the relays will never stay silent. In the case that m r = m s and n ur = 2, we have that t min = t av = 2 and so d R = 3.
Next, we show that d M = 3 (and thus, according to Lemma 1, d M is maximized if n = 2). Consider |E| = 2; without loss of generality, let E = {1, 2}. Consider the set of equations M E z = c. Variables z 3 , . . . , z ms can be recovered via the top m s − 2 rows of M E . By construction of the transmission sets, source 1 is included in T (m − 1) and T (m), and source 2 is included in T (m) and T (1). Hence, relay transmission m − 1 can be used to retrieve source 1 and relay transmission m can be used to retrieve source 2, as long as m ≥ 4. Therefore, M E has full rank. The generalization to any set E satisfying |E| = 2 is straightforward, so that d M = 3.
As d R = d M = 3, the proof follows immediately from Props. 2 and 3.
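The rank argument in this proof can be checked numerically over GF(2). The sketch below is an illustration under assumptions: it uses the cyclic transmission sets T (u r ) = {f (u r + 1), f (u r + 2)} inferred from the proof (not Alg. 1 verbatim), erases every pair of nodes E (removing both their source and relay rows from M), and verifies that the remaining matrix M E keeps full column rank.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask over the m columns."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit serves as pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

m = 5  # ms = mr = 5, as in the example of Eq. (18)

def f(x):
    return ((x - 1) % m) + 1

# Assumed cyclic transmission sets (inferred from the proof of Cor. 3,
# not Alg. 1 verbatim): T(u_r) = {f(u_r + 1), f(u_r + 2)}.
def src_row(u):
    return 1 << (u - 1)          # source u transmits its own packet

def rel_row(u):
    return (1 << (f(u + 1) - 1)) | (1 << (f(u + 2) - 1))

# Erase every pair of nodes E: both the source and the relay row of each
# erased node disappear from M; M_E must keep full column rank.
full_rank_all_pairs = all(
    gf2_rank([src_row(u) for u in range(1, m + 1) if u not in E]
             + [rel_row(u) for u in range(1, m + 1) if u not in E]) == m
    for E in combinations(range(1, m + 1), 2))
```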
Next, it can be proved that a JNCC applied in our system model has a diversity order of three, if it has a diversity order of three when all interuser channels are perfect. This is proved in general for non-reciprocal interuser channels in Lemma 2, and here, we consider reciprocal interuser channels.

B. Second step: JNCC of LDPC-type
In the first step, we specified {T (u r )} and proved that d R = d M = 3 if m r = m s = m > 3. According to Cor. 3, a diversity order of three is achieved under BP decoding if, for each u r , Eqs. (17) can be solved with BP in the case of only one unknown source-codeword vector. In the second step, we specify the submatrices H ur , H ur us , ∀u r , u s , to satisfy this condition, given that {T (u r )} is constructed according to Alg. 1.
A simple solution is to replace the K leftmost columns in all K × L submatrices H ur , H ur us , ∀u r , u s , by identity matrices. In this case, the joint network-channel coding essentially reduces to a layered solution: the source-codewords are decoded at the relays and simply added according to Eq. (5). However, if the network code is used at the physical layer, it has to deal with noise, and a more advanced code might be required.
In the literature, a full-diversity, close-to-outage performing JNCC for the Multiple Access Relay Channel (MARC) has been proposed [10], [11], which is a code of the form of Eq. (1). In these codes, s j = [ 1 i j 2 i j p j ] is the codeword from source j, with [ 1 i j 2 i j ] and p j denoting the information bits and the parity bits, respectively (j = 1, 2); 1 i j and 2 i j each contain K/2 information bits. However, the parity bits p j are not involved in H GLNC,MARC . The matrices R i , with i = 1, 2, 3, are random matrices, chosen according to the required degree distributions of the LDPC code. To facilitate future notation, we define the matrices H̄ 1 and H̄ 2 , where H̄ i = H i or H ′ i (it will become clear hereunder which one has to be chosen at each relay). In H̄ 1 and H̄ 2 , the first two block columns each consist of K/2 columns (corresponding to information bits) and the last block column consists of L − K columns (corresponding to parity bits from the point-to-point codes). The zero block columns indicate that parity bits from point-to-point codes have no support in these matrices. Now replace all submatrices H ur , H ur us by these matrices, for each relay u r , so that each block column corresponding to information bits contains a random matrix R i ; this is required to conform to any preferred degree distribution of the LDPC code. An example of H GLNC is given in Eq. (19). Each set of rows and each set of columns in H will then have at least one random matrix, so that any LDPC code degree distribution can be conformed to. We denote this JNCC by the SMARC-JNCC, where S stands for scalable.

Proposition 4 In a network following the system model proposed in Sec. III and using BP, the SMARC-JNCC achieves a diversity order d = 3.
Proof: Consider the set of K equations (17) in the case of only one unknown source-codeword vector. In [10], it is proved that this set of K equations can be solved using the matrices proposed above. We provide a simpler proof here. Consider a block BEC. Because H̄ 1 and H̄ 2 are upper- or lower-triangular, with ones on the diagonal, the K unknown information bits can be retrieved using backward substitution; hence they can be retrieved with BP as well. By Cor. 3 and Lemma 3, the SMARC-JNCC achieves a diversity order d = 3.
Note that the information bits of a source need to be split into two parts: bits of the type 1 i and 2 i. This allows the introduction of the matrices R 1 and R 2 in Eq. (19), so that all information bits have a random matrix in their corresponding block column of the parity-check matrix. Hence, the LDPC code can conform to any degree distribution.
VI. LOWER BOUND FOR THE WER
To assess the performance of the SMARC-JNCC, we need to compare it with the outage probability limit (Sec. VI-A). We show that the outage probability limit is not always tight and we propose a tighter lower bound, which is presented in Sec. VI-B.

A. Calculation of the outage probability
The outage probability limit is the probability that the instantaneous mutual information between the sources and sinks of the network is less than the transmitted rate. The outage probability is an achievable (using a random codebook) lower bound of the average WER of coded systems in the limit of large block length [4], [13], [33].
For a multi-user environment, two types of mutual information are considered. First, it is verified whether the sum-rate, R c in this case, is smaller than the instantaneous mutual information between all the sources and the sink. Then, it is verified whether each individual source rate, R c /m s in this case, is smaller than the instantaneous mutual information between the nodes transmitting information for this source and the destination. The outage probability for the MARC was determined in [10], [20] using the method described above.
The outage probability is P out = P (E out ), where E out denotes an outage event. Similarly to [10], [20], an outage event is defined in terms of the sum-rate and individual-rate conditions described above. The terms I(S i ; D), I(R i ; D) and I(S i ; R j ) are the instantaneous mutual informations of the corresponding point-to-point channels with input x ∈ {−1, 1} and received signal y = α i x + w, with w ∼ CN (0, 1/γ), conditioned on the channel realization α i ; they are determined by applying the formula for mutual information [8], [40], where E Y |{x=1,α i } is the mathematical expectation over Y given x = 1 and α i .
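The instantaneous mutual informations I(S i ; D), I(R i ; D) and I(S i ; R j ) can be estimated by Monte Carlo. The sketch below assumes the standard LLR identity I(X; Y |α) = 1 − E[log 2 (1 + e −L ) | x = +1] for BPSK over a symmetric channel; the exact expectation formula cited as [8], [40] is not reproduced in the extracted text.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk_mi(alpha, gamma, n=200_000):
    """Monte Carlo estimate of I(X; Y | alpha) for y = alpha*x + w,
    w ~ CN(0, 1/gamma), x uniform on {-1, +1} (BPSK input)."""
    # By channel symmetry it suffices to condition on x = +1.
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.5 / gamma)
    y = alpha + w
    # LLR of x given (y, alpha): log p(y|+1)/p(y|-1) = 4*gamma*Re(conj(alpha)*y)
    llr = 4.0 * gamma * np.real(np.conj(alpha) * y)
    # I = 1 - E[log2(1 + exp(-LLR))], evaluated overflow-safely
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)
```

The estimate lies in [0, 1] bits per channel use and approaches 1 as the instantaneous SNR α 2 γ grows.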
We now consider the outage probability of a layered construction, such as the standard OSI model, where the destination first decodes the point-to-point transmissions, declaring a block erasure if decoding is not successful. For the network code, we assume a maximum distance separable (MDS) code, which is outage-achieving over the (noiseless) block-erasure channel [14]. That is, any m s correctly received packets suffice for decoding. Accordingly, an outage event for the layered construction, denoted as E out,l , is the event that fewer than m s packets are correctly received. The outage probabilities for JNCC and for a layered construction are compared in Fig. 1 for m s = m r = 5, coding matrix M given in Eq. (18) and R c,p = 6/7. The overall spectral efficiency is R = 3/7 bpcu, so that E b /N 0 = 7γ/3. The main conclusion is that the difference between both outage probabilities is only 1 dB. Hence, on a fundamental level, the coding gain achievable by JNCC with respect to a standard layered construction is small for the adopted system model.

B. Calculation of a tighter lower bound on WER
According to information theory, the outage probability is achievable, where the proof relies on random codebooks. However, the nature of the JNCC protocol largely deviates from a random code. For example, the parity bits corresponding to the point-to-point codes are forced into a block-diagonal structure in H c (see Eq. (6)), which is not taken into account in the outage probability limit. In fact, in Prop. 1, it was proved that the maximal diversity order does not depend on R c but on R n , which is not taken into account in the outage probability limit either. Therefore, we argue that the outage probability limit is in general not achievable by a JNCC, which we illustrate by means of an example. Consider a network with m s = m r = 3. The adopted point-to-point codes have coding rate R c,p = 0.5, so that R c = 0.25. We take n u = 2 and adopt the coding matrix M given in Eq. (13). Because of the small coding rate R c , the outage probability achieves a diversity order of three (Fig. 4). However, it follows from Prop. 1 that d max = 2. We therefore propose a new lower bound, which takes the point-to-point codes into account.

Fig. 2. The depicted part of the factor graph (using a Tanner notation) illustrates that a bit node (bit i on the figure) is essentially connected to two sets of check nodes, corresponding to H c and H GLNC , respectively. A set of check nodes is denoted as CND, for check node decoder. The LLR-value coming from the CND corresponding to H c is denoted as L c . The LLR-value corresponding to the channel observation is denoted as L obs .
A bit node is essentially protected by two codes: a point-to-point code (H c ) and a network code (H GLNC ), which is illustrated on the factor graph [26] representation (a Tanner notation [38] is adopted) of the decoder (Fig. 2). Usually, both codes are characterized by separate degree distributions, denoted as (λ c (x), ρ c (x)) and (λ GLNC (x), ρ GLNC (x)) for H c and H GLNC , respectively.
The new lower bound assumes a concatenated decoding scheme. At the destination, first the point-to-point codes are decoded and then soft information is passed to the network decoder. This is illustrated in Fig. 3, where the soft information is denoted by the log-likelihood ratio (LLR) L obs ′ . Note that the bit node of bit i is duplicated to be able to clearly indicate L obs ′ . Applying the sum-product algorithm (SPA) on this factor graph or on the original factor graph (without node duplication) is equivalent (see [41] for a background on factor graphs and the SPA). The LLR L obs ′ can be viewed as a new channel observation, as it remains fixed during the iterative decoding of the network code (H GLNC ). The maximum rate that can be achieved by the network code is given by the mutual information I(X; L obs ′ |α i ) of the random variable L obs ′ , conditioned on the channel realization α i , determined by applying the formula for mutual information [8], [40]. The density of the random variable L obs ′ can be obtained by means of density evolution [35], given the degree distributions of the point-to-point code, or by means of Monte Carlo simulations, given the actual factor graph of the point-to-point code. Both approaches yield the same results in our simulations.

Fig. 3. The bit node in Fig. 2 can be duplicated with a single edge between both nodes, as shown in this figure. The LLR L obs ′ is the sum of all incoming LLR-values from the left, and contains the soft information which is passed to the network code decoder in a concatenated coding scheme.
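Once samples of L obs ′ are available (from density evolution or from Monte Carlo decoding over the point-to-point factor graph), I(X; L obs ′ |α i ) follows from the standard LLR identity; this helper is a sketch under the assumption that the samples are conditioned on x = +1 and that the LLR density is symmetric.

```python
import numpy as np

def mi_from_llrs(llrs):
    """I(X; L) = 1 - E[log2(1 + exp(-L))] for symmetric LLR samples L
    drawn conditioned on x = +1 (standard identity in BP/DE analysis)."""
    llrs = np.asarray(llrs, dtype=float)
    # logaddexp avoids overflow for large negative LLRs
    return 1.0 - np.mean(np.logaddexp(0.0, -llrs)) / np.log(2.0)
```

Uninformative LLRs (all zero) give zero mutual information, while very reliable LLRs drive the estimate towards 1 bit.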
Similarly to the conventional case, an outage event, denoted as E out,2 , is defined; note that the network coding rate R n is used instead of the overall rate R c , which corresponds to Prop. 1. The tighter lower bound presented here is valid if the point-to-point codes are first decoded, followed by the network code, without iterating back to the point-to-point codes.
Let us now return to the small network example with m s = m r = 3, considered at the beginning of this section. Fig. 4 compares the conventional outage probability (Sec. VI-A) with the tighter lower bound proposed here. As mentioned before, the conventional outage probability has a larger diversity order than what is achievable, while the tighter lower bound only achieves a diversity order of two. The difference is 3 dB at an outage probability of 10 −4 . To assess the performance of the network code only, given a certain point-to-point code, the WER of the SMARC-JNCC should be compared with the tighter lower bound presented here. In the subsequent sections, we always include both lower bounds.

VII. NUMERICAL RESULTS
In this section, we provide numerical results for the SMARC-JNCC. We illustrate the proposed techniques on an example network with m s = m r = 5 (Fig. 5). We use the same network example as in [2], [12] so that a comparison is possible. For simplicity, we assume non-reciprocal interuser channels in the simulation results. Note that in the case that m s > 4 and Alg. 1 is used to construct {T (u r ), u r = 1, . . . , m r }, reciprocity is irrelevant for our proposed code, as i ∉ T (j) if j ∈ T (i).
We compare the error rate performance of the SMARC-JNCC with the outage probability limit and the tighter lower bound, which are presented in Sec. VI, and with standard network coding techniques (using identity matrices in H GLNC ) and a layered network construction (also using identity matrices in H GLNC , and where, at the destination, the network code is only decoded after decoding all point-to-point codewords separately and taking a hard decision).
The point-to-point code used in the simulations is an irregular LDPC code [35] characterized by the standard polynomials λ(x) and ρ(x) [35], where λ(x) and ρ(x) are the left and right degree distributions from an edge perspective; the coefficients λ i and ρ i are the fractions of edges connected to a bit node and a check node, respectively, of degree i. The adopted point-to-point code is taken from [16], has coding rate R c,p = 6/7 and conforms to the following degree distributions:
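The design rate implied by a pair of edge-perspective degree distributions follows from the standard formula R = 1 − (Σ i ρ i /i)/(Σ i λ i /i); the sketch below checks it on regular ensembles (the actual irregular coefficients from [16] are not reproduced in this text).

```python
def design_rate(lam, rho):
    """Design rate of an LDPC ensemble, with lam and rho the left/right
    edge-perspective degree distributions as {degree: edge fraction}."""
    avg_inv = lambda dist: sum(frac / deg for deg, frac in dist.items())
    # R = 1 - (sum_i rho_i / i) / (sum_i lambda_i / i)
    return 1.0 - avg_inv(rho) / avg_inv(lam)
```

For instance, a regular ensemble with λ(x) = x 2 and ρ(x) = x 20 yields R = 1 − (1/21)/(1/3) = 6/7, the point-to-point rate used here.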

A. Perfect source-relay links
We start by assessing the performance of H GLNC , the bottom part of Eq. (20), which determines the diversity order. Therefore, we assume perfect links between sources and relays. Hence, the channel model is the same as described in Sec. III, with the exception of the interuser channels, which are assumed to be perfect (no fading and no noise). The parameters used for the simulation are K = L = 900 and m s = m r = 5 (so that N = 10K = 9000), where N is the block length of the overall codeword. The overall spectral efficiency is R = 0.5 bpcu, so that E b /N 0 = 2γ. Fig. 6 shows that a diversity order of 3 is achieved by the SMARC-JNCC, which corroborates Cor. 3. It performs at 2.5 dB from the outage probability (because no point-to-point codes are considered, only the conventional outage probability can be calculated), which may be improved by optimizing the degree distributions. We also show a JNCC where all submatrices H ur , H ur us , ∀u r , u s , are replaced by identity matrices, denoted as the I-JNCC. Finally, we show an I-JNCC with irregular {n ur } and corresponding coding matrix M . It is clear that, even without optimizing the SMARC-JNCC, there is a benefit in terms of coding gain compared to the I-JNCC.

B. Rayleigh faded source-relay links
Now, we assess the performance of the complete parity-check matrix H of the SMARC-JNCC. We use the channel model as described in Sec. III. Hence, all links have the same statistical model and the average SNR is the same for all channels. The parameters used for the simulation are K = 606, R c,p = 6/7, L = 707 and m s = m r = 5 (so that N = 10L = 7070). The overall spectral efficiency is R = 3/7 bpcu, so that E b /N 0 = 7γ/3. Because the simulation time would be very large if every point-to-point source-relay link had to be decoded separately, we made an approximation: the word error rate of the point-to-point code when transmitted on a channel with fading gain α is smaller than 10 −4 when α 2 γ = 5.5 dB, so we assumed that a relay correctly decoded the source-codeword if α 2 γ > 5.5 dB and not otherwise. We also add the performance of the SMARC-JNCC from Sec. VII-A, corresponding to perfect source-relay links and R = 0.5 bpcu, as a reference curve (note that the reference curve corresponds to a larger spectral efficiency - the coding rate R c is larger - than for the other curves, which slightly disadvantages the reference curve in terms of error performance). Fig. 7 shows that a diversity order of 3 is still achieved, which corroborates Prop. 4. In addition, two main conclusions can be drawn. First, the loss due to interuser failures is 6.5 dB, which is very large. Second, the benefit in terms of coding gain of the SMARC-JNCC compared to the I-JNCC is considerably decreased compared to Sec. VII-A, which corresponds to the small horizontal SNR-gap between the outage probabilities of a layered and a joint construction. Also note that the tighter lower bound, computed using density evolution, is close to the conventional lower bound in this case. Finally, the WER performance of a layered construction is shown, which coincides with that of the I-JNCC.

C. Gaussian source-relay links
We test again the complete parity-check matrix H of the SMARC-JNCC, now assuming that the source-relay links are Gaussian, i.e., having additive white Gaussian noise only, without fading; fading occurs on the source-destination and relay-destination links only. We assume that the average SNR is the same for all channels. The parameters used for the simulation are the same as in Sec. VII-B. Fig. 8 shows that, in the case of Gaussian interuser channels, the loss compared to perfect interuser channels is very small.

VIII. CONCLUSION
We put forward a general form of joint network-channel codes (JNCCs) for a wireless communication network where sources also act as relay. The influence of important parameters of the JNCC on the diversity order is studied and an upper and lower bound on the diversity order are proposed. The lower bound is only valid for the case where the number of sources is equal to the number of relays, and where each relay only helps two sources.
We then proposed a practical JNCC that is scalable to large networks. Using the diversity analysis, we managed to rigorously prove its achieved diversity order, which is optimal in a well identified set of wireless networks. We verified the performance of a regular LDPC code via numerical simulations, which suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction.
ACKNOWLEDGEMENT
This work was supported by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications NEWCOM++ (contract n. 216715).

A. Proof of Prop. 1
The maximal diversity order can be derived using the diversity equivalence between a block BEC and a BF channel [5], [6]. Assume a block BEC, so that a block s us or r ur is either completely erased or perfectly known. Consider the case that e 1 blocks of length 2L and e 2 blocks of length L have been erased, where e = e 1 + e 2 is the total number of erasures, e 1 ≤ m s and e 2 ≤ m r − m s . Hence, the number of unknown bits is equal to 2e 1 L + e 2 L. Considering the structure of H from (6), containing the block-diagonal matrix H c , it follows that the 2e 1 L + e 2 L erased bits appear in only (2e 1 + e 2 )(L − K) + m r K of the available (m s + m r )L − m s K parity equations, i.e., (2e 1 + e 2 )(L − K) equations involving H c and all m r K equations involving H GLNC . Hence, the unknown bits can be retrieved only if there are sufficient linearly independent useful equations, yielding the necessary condition
2e 1 L + e 2 L ≤ (2e 1 + e 2 )(L − K) + m r K,
which simplifies to 2e 1 + e 2 ≤ m r . Denoting by e = e 1 + e 2 the total number of erased blocks, the largest value e max of e for which e 1 and e 2 satisfy this condition for all e 1 ≤ m s and e 2 ≤ m r − m s is the largest e with e + min(e, m s ) ≤ m r . Hence, d max = e max + 1, yielding Prop. 1.
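The counting argument above can be checked by brute force: for every total number of erasures e, verify that each feasible split (e 1 , e 2 ) leaves at least as many useful parity equations, (2e 1 + e 2 )(L − K) + m r K, as unknown bits, 2e 1 L + e 2 L. The sketch below implements exactly this condition as stated in the proof; since the closed-form expression of Eq. (24) is not reproduced in the extracted text, the function returns d max numerically.

```python
def d_max(ms, mr, L, K):
    """d_max = e_max + 1, with e_max the largest e such that every
    feasible split e = e1 + e2 (e1 <= ms, e2 <= mr - ms) satisfies
    2*e1*L + e2*L <= (2*e1 + e2)*(L - K) + mr*K  (App. A counting)."""
    assert 0 < K < L and 1 <= ms <= mr
    def enough(e1, e2):
        return 2 * e1 * L + e2 * L <= (2 * e1 + e2) * (L - K) + mr * K
    e = 0
    while all(enough(e1, e - e1)
              for e1 in range(max(0, e - (mr - ms)), min(ms, e) + 1)):
        e += 1
    return e  # loop exits at the first failing e, i.e., at e_max + 1
```

The output matches the values quoted in the paper: d max = 2 for m s = m r = 3 and d max = 3 for m s = m r = 4 or 5, independently of the particular L and K.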

B. Proof of Prop. 3
Before we present the actual proof, we first propose two lemmas.

Lemma 4
A binary matrix in which every row has weight 2 does not have full column rank.

Proof: If a matrix has full rank, there is no vector z ≠ 0 such that Sz = 0. However, if S has row weight 2, then S1 = 0, where 1 corresponds to a column vector with each entry equal to 1.
Consider now a column vector of b unknown variables z and a set of constraints on these variables, stacked in S so that Sz = c, where c is a column vector of known constants. In general, solving Sz = c for z corresponds to performing Gaussian elimination on S. However, under some conditions, this simplifies to backward substitution.

Lemma 5
If a binary a × b matrix S, a ≥ b, has full rank b and maximal row weight of 2, Gaussian elimination simplifies to backward substitution.
Proof: Without loss of generality, we eliminate all redundant (linearly dependent) rows in S to obtain a square matrix of size b. By Lemma 4, there must be at least one row in S with unit weight to have full rank. Starting from this known variable, we can solve for a further variable in z at each step as the row weight is smaller than or equal to 2.
Assume that this backward substitution procedure cannot be continued until all variables are known. That is, after successive decoding, there are k rows consisting of a combination z ik + z jk where neither z ik nor z jk is known. We split the matrix S into two parts: S unknown ∈ {0, 1} k×b , comprising the rows involving unknown variables (note that the weight of each such row is 2), and S known ∈ {0, 1} (b−k)×b , consisting of the rows involving only known variables. If the number of unknown variables is equal to k, then the rank of S unknown must be equal to k, which is impossible by Lemma 4. So the matrix S was not full rank, which contradicts our assumption. If the number of unknown variables is smaller than k, then there were redundant (linearly dependent) rows in S known , which contradicts the assumptions again. We conclude that the procedure only fails if S does not have full rank.
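A minimal sketch of Lemmas 4 and 5: solve Sz = c over GF(2) by repeatedly picking a row with exactly one remaining unknown; with maximal row weight 2, the procedure stalls only when all remaining rows have two unknowns, i.e., when S is rank-deficient (Lemma 4).

```python
def backward_substitute(S, c):
    """Solve S z = c over GF(2) by backward substitution, assuming every
    row of S (a 0/1 matrix as list of lists) has weight at most 2.
    Entries of z left as None signal that the procedure stalled,
    i.e., that S does not have full column rank (Lemma 5)."""
    b = len(S[0])
    z = [None] * b
    rows = [({j for j in range(b) if S[i][j]}, c[i]) for i in range(len(S))]
    progress = True
    while progress:
        progress = False
        for support, rhs in rows:
            unknown = [j for j in support if z[j] is None]
            if len(unknown) == 1:
                # XOR the already-known variables into the right-hand side
                acc = rhs
                for j in support:
                    if z[j] is not None:
                        acc ^= z[j]
                z[unknown[0]] = acc
                progress = True
    return z
```

A unit-weight row provides the starting point, after which each step resolves one further variable, exactly as in the proof of Lemma 5; an all-weight-2 matrix offers no starting point and the solver stalls immediately.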
To prove Prop. 3, we use the diversity equivalence between a block BEC and the BF channel. In a block BEC, the channel equation (4) simplifies to
y us = ε us s ′ us , u s = 1, . . . , m s
y ms +ur = ε ur r ′ ur , u r = 1, . . . , m r
where ε i = 0 when the channel is erased and ε i = 1 otherwise.
Source-codewords s i can be retrieved from the transmissions in the source phase if ε i = 1. Decoding the other source-codewords at the destination is performed through the parity-check matrix H (Eq. (6)). We split H into two parts, H = [H left H right ], where H left and H right have m s L and m r L columns, respectively. We also define s = [s T 1 . . . s T ms ] T and r = [r T 1 . . . r T mr ] T . As Hx = 0, we have that H left s = H right r.
As we consider a block BEC, some transmissions are perfect. As in App. A, consider the case that e 1 blocks of length 2L and e 2 blocks of length L have been erased, where e = e 1 + e 2 = |E| is the total number of erasures, e 1 ≤ m s and e 2 ≤ m r − m s . Considering the structure of H from (6), containing the block-diagonal matrix H c , it follows that the 2e 1 L + e 2 L erased bits appear in only (2e 1 + e 2 )(L − K) + m r K of the available (m s + m r )L − m s K parity equations, i.e., (2e 1 + e 2 )(L − K) equations involving H c and all m r K equations involving H GLNC . Next, (e 1 + e 2 )K of the m r K equations involving H GLNC cannot be used to solve erased bits in s, as these equations always have at least two unknowns. The overall set of equations to decode s can thus be expressed, using the notation from (15), in terms of M E , where y ′ i = (1 + y i )/2. If |E| ≤ d M − 1, then M E has full rank, according to Def. 2. As established in Lemma 5, the set of equations represented by M E can be solved using backward substitution. This means that at each iteration, there is an equation with only one unknown. Consider a particular iteration and denote the index of the unknown by u. In H s , this corresponds to an equation with an unknown source-codeword vector s u of the type
H p s u = 0, H ur u s u = Σ us ∈T (ur ),us ≠u H ur us s us + H ur y ′ ms +ur ,
or of the type s u = y ′ u . Under ML decoding, we obtain the claimed result if the matrices H ur us , u s ∈ T (u r ), u r ∈ {1, . . . , m r }, have full rank. Under BP decoding, we obtain the claimed result if, for each u r , the set of L equations (31) can be solved with BP in the case of only one unknown source-codeword vector s u .

C. Proof of Lemma 2
A relay may not succeed in successfully decoding the message from a source; we denote this as a failure. There are m 2 − m interuser channels, each of which may fail, so that there are 2 m 2 −m possible cases. We denote the case where all interuser channels are successful as case 1. Using Bayes' law, the error rate can be split as P (ew) = Σ i P (case i)P (ew|case i). Defining the diversity order corresponding to each case as d c,i = − lim γ→∞ log(P (case i)P (ew|case i))/ log γ, it follows that the overall diversity order is d = min i d c,i .
The probability of f failures on independent interuser channels is proportional to 1/γ f [39], so that P (case i) ∝ 1/γ f for a case i with f failures. The diversity order in the case of perfect interuser channels (f = 0) is d c,1 . That is, the error-correcting code can bear d c,1 − 1 erasures on node-destination links. Hence, d c,i ≥ d c,1 only if P (ew|case i) ≤ c/γ dc,1 −f , or, equivalently, if all information can still be retrieved at the destination, given that f interuser channels and d c,1 − f − 1 node-destination channels are erased. Let us check whether this is true for all f .
A relay stays silent if it cannot decode all source codewords corresponding to its transmission set. If there are f interuser failures, at most f relays stay silent in the relay phase. This corresponds to at most f additional node-destination erasures, adding to the assumed d c,1 − f − 1 already erased node-destination channels and yielding a total of at most d c,1 − 1 erased node-destination channels, which can be supported by the code, by the definition of d c,1 .

D. Proof of Lemma 3
In the case that m s > 4 and Alg. 1 is used to construct {T (u r ), u r = 1, . . . , m r }, reciprocity is irrelevant for our proposed code, as i ∉ T (j) if j ∈ T (i). Hence, if m s > 4, the proof given in App. C is always valid. Now consider the case that d c,1 = 2, which corresponds to m s = m r = m < 4 (see Prop. 1). In the case of f = 1 interuser channel failure, d c,i is always larger than one, because P (ew|case i) ≤ c/γ, as at least one channel, the source-destination channel, needs to fail to lose the corresponding information bits.
Finally, consider the case that m s = m r = m = 4 and thus d c,1 = 3. Hence, in the case of no interuser failures, the code can support two node-destination failures, corresponding to four erased transmissions from two nodes in the source phase and in the relay phase. Reciprocity is relevant, as i ∈ T (j) if j ∈ T (i) for (i, j) equal to (1, 3) and (2, 4). Because P (ew|case i) ≤ c/γ always holds, cases with f ≥ 2 automatically satisfy d c,i ≥ 3, so we only have to consider the case that f = 1, denoted as case i in general. Hence, in the case that the interuser channel between sources one and three or between sources two and four has been erased, relays one and three or two and four, respectively, stay silent. Note that the transmission sets of the remaining active relays are disjoint when Alg. 1 is used, and because n = 2, they support all sources u s = 1, . . . , 4. If one node-destination channel is subsequently erased, which corresponds to at most two transmissions, the destination has to recover the information bits from the erased source-codeword. Because a relay u r cannot have u r in its own transmission set T (u r ), the erased relay codeword does not contain any information on the erased source-codeword, which implies that the information is in the remaining relay codeword. Hence, we have that P (ew|case i) ≤ c/γ 2 , or, by (34), d c,i ≥ 3. In other words, interuser failures do not decrease the diversity order.