 Research Article
 Open access
Improved Design of Unequal Error Protection LDPC Codes
EURASIP Journal on Wireless Communications and Networking volume 2010, Article number: 423989 (2010)
Abstract
We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium-length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.
1. Introduction
In many communication scenarios, such as wireless networks and transport of multimedia data, sufficient error protection is often a luxury. In these systems, it may be wasteful or even infeasible to provide uniform protection for all information bits. Instead, it is more efficient to protect the most important information more strongly than the rest, by using a channel code with unequal error protection (UEP). This implies improving the performance of the more important bits by sacrificing some performance of the less important bits. This paper focuses on the design of UEP low-density parity-check (LDPC) codes with improved average bit-error rate (BER).
Several methods for designing UEP-LDPC codes have been presented [1–11]. The irregular UEP-LDPC design schemes described in [1–7] are based on the irregularity of the variable and/or check node degree distributions. These schemes enhance the UEP properties of the code through density evolution methods. Vasic et al. proposed a class of UEP-LDPC codes based on cyclic difference families [8]. In [9], UEP capability is achieved by a combination of two Tanner graphs of different rates. The UEP-LDPC codes presented in [10] are based on the algebraic Plotkin construction and are decoded in multiple stages. UEP may also be provided by non-binary LDPC codes [11].
In this work, we consider the flexible UEP-LDPC code design proposed in [3], which is based on a hierarchical optimization of the variable node degree distribution for each protection class. The algorithm maximizes the average variable node degree within one class at a time while guaranteeing a minimum variable node degree as high as possible. The optimization can be stated as a linear programming problem and can thus be easily solved. To keep the average performance of the UEP-LDPC code reasonably good, the search for UEP codes is limited to degree distributions whose convergence thresholds lie within a certain range of the minimum threshold of a code with the same parameters. In the following, we refer to this allowed threshold degradation as the threshold offset.
In recent years, much effort has been spent on construction algorithms for short-to-medium-length LDPC codes [12–14]. However, these algorithms rely on degree distributions optimized for infinitely long codes and focus on constructing LDPC code graphs with a small number of short cycles, thereby improving the performance in the error-floor region for short LDPC codes. In the design proposed here, we optimize the threshold offset given the construction algorithm used to specify the parity-check matrix of the code. The improved UEP codes have an average performance that is better than the corresponding non-UEP codes (designed with a nonzero threshold offset), which may seem counterintuitive. Typically, the reduced error rate of the most protected class is compensated for by an increased error rate of the other classes. Nonetheless, we show that with a good choice of the threshold offset and for several common code construction algorithms, the performance of the UEP code is significantly better than the performance of the corresponding non-UEP code. However, for long codes and a high number of decoder iterations, the UEP code design reduces the performance, since the UEP codes have worse thresholds than the non-UEP codes. The performance for different values of the threshold offset is studied in [3], which shows that a threshold offset of 0.1 dB is a good choice. This is true for the random construction used in [3], but it is not noted there that the best choice of the threshold offset varies for different construction algorithms.
Some intuition as to why UEP code design may increase the average performance can be gained by considering irregular LDPC codes, not designed for UEP, and their advantages compared to regular LDPC codes [15]. In irregular codes, variable nodes with high degree typically correct their values quickly, and these nodes can then help to correct lower-degree variable nodes. Therefore, irregular graphs may lead to a wave effect, where the highest-degree nodes are corrected first, then the nodes with slightly lower degree, and so on. The more irregular a code is, that is, the higher the maximum variable node degree, the faster the correction of the high-degree variable nodes. There are reasons to believe that the code with the best threshold under an appropriate constraint on the allowed number of iterations, that is, a code with fast convergence, yields the best performance for finite-length codes also when the number of iterations is high [16]. UEP code design is another way to achieve the differentiation between nodes that may lead to a wave effect and fast convergence. By allowing the code to have a worse threshold (as is the case in the UEP code design we consider), more differentiation between nodes in different classes can be achieved. It should also be noted that there is a tradeoff between the maximum variable node degree and the codeword length. The maximum variable node degree should be lower for a short code to reduce the number of harmful cycles involving variable nodes of low degree. The wave effect achieved by the UEP design is accomplished without increasing the maximum variable node degree.
Let us recall some basic notation for LDPC codes. The sparse parity-check matrix $H$ has dimension $(N-K) \times N$, where $K$ and $N$ are the lengths of the information word and the codeword, respectively. We consider irregular LDPC codes with edge-based variable node and check node degree distributions defined by the polynomials [16] $\lambda(x) = \sum_{i=2}^{d_v} \lambda_i x^{i-1}$ and $\rho(x) = \sum_{j=2}^{d_c} \rho_j x^{j-1}$, where $d_v$ and $d_c$ are the maximum variable and check node degrees of the code, respectively. For UEP codes, we divide the variable nodes into several protection classes with decreasing level of protection. The resulting variable node degree distribution is defined by the coefficients $\lambda_i^{(k)}$, which denote the fractions of edges incident to degree-$i$ variable nodes of protection class $k$. The overall degree distribution is given by $\lambda_i = \sum_k \lambda_i^{(k)}$. In the following, we distinguish between code design, by which we mean the design of degree distributions that describe a code ensemble, and code construction, by which we mean the construction of a specific code realization (described by a parity-check matrix).
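To make the notation concrete, the design rate implied by an edge-based degree-distribution pair follows directly from the coefficients above. The sketch below is our own illustration (the function name and the dictionary representation are ours, not from the paper):

```python
def design_rate(lam, rho):
    """Design rate R = 1 - (sum_j rho_j / j) / (sum_i lam_i / i),
    where lam[i] / rho[j] are the edge fractions for degree-i variable
    nodes and degree-j check nodes (dicts mapping degree -> fraction)."""
    vn = sum(f / d for d, f in lam.items())   # proportional to the number of variable nodes
    cn = sum(f / d for d, f in rho.items())   # proportional to the number of check nodes
    return 1.0 - cn / vn

# Regular (3,6) code: all variable nodes degree 3, all check nodes degree 6.
print(design_rate({3: 1.0}, {6: 1.0}))  # -> 0.5
```

For the UEP case, the overall coefficients $\lambda_i$ are obtained by summing the per-class fractions $\lambda_i^{(k)}$ before calling the same function.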
2. Design of Finite-Length UEP Codes
In [16], Richardson et al. state that for short LDPC codes, it is not always best to pick the degree distribution pair with the best threshold. Instead, it can be advantageous to look for the best possible threshold under an appropriate constraint on the allowed number of iterations. In this paper, we show that by searching among degree distributions designed for UEP, with worse thresholds than the corresponding non-UEP degree distributions, we may find degree distributions with significantly lower error rates at finite length than the degree distributions with the best possible threshold. Well-designed UEP codes have faster convergence for all protection classes and thereby better performance for finite-length LDPC codes.
2.1. Detailed Mutual Information Evolution
An appropriate method for analyzing UEP codes is needed to choose a good value for the threshold offset without relying on time-consuming error rate simulations. We consider the theoretical mutual information (MI) functions, which are typically calculated from the degree distributions $\lambda(x)$ and $\rho(x)$ of a code. However, different LDPC codes with the same degree distributions can have very different UEP properties [17]. The differences depend on how the different protection classes are connected, which in turn depends on the code construction algorithm used to place the edges in the graph according to the given degree distributions. To observe the differences also between codes with equal degree distributions, a detailed computation of MI may be performed by considering the edge-based MI messages traversing the graph instead of node-based averages. This has been done for protographs in [18]. We follow the same approach, but use the parity-check matrix instead of the protograph base matrix. See [19, 20] for more details on MI analysis. The detailed MI evolution is described in detail in the appendix. In the following, we use the average a posteriori variable node MI, denoted by $I_{APP,v}$ (calculated for each variable node in step (5) of the MI analysis in the appendix), to compare the convergence rates of different LDPC codes.
2.2. Design Procedure
For simplicity, a good value of the threshold offset is found through an exhaustive search in a region of typical values. In the following section, we show that there is only a small difference in MI and BER for similar values of the threshold offset. It is therefore reasonable to consider only a few values in the search, and the best value among these is likely to give a BER result very close to what could be achieved by a more thorough search. For each value of the threshold offset in the range of the search, three steps must be performed.

(1)
Design a UEP code following [3] for the threshold offset under consideration, keeping the remaining code parameters and the proportions between the protection classes fixed. This step results in subdegree distributions for each protection class.

(2)
Construct a parity-check matrix using an appropriate code construction algorithm.

(3)
Calculate the detailed MI evolution for a given signal-to-noise ratio and a maximum number of decoder iterations. The code with the highest average $I_{APP,v}$ has the best overall performance within this family of codes.
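The three-step search over candidate threshold offsets can be sketched as follows. The helper callables (`design`, `construct`, `avg_app_mi`) are hypothetical placeholders for steps (1)-(3), and the toy stand-ins at the bottom exist only to exercise the loop; none of this is the authors' implementation:

```python
def best_threshold_offset(offsets_db, design, construct, avg_app_mi,
                          snr_db, max_iters=100):
    """Exhaustive search: design a UEP degree distribution per candidate
    offset, construct a code, and keep the offset whose code has the
    highest average a posteriori variable node MI after max_iters."""
    best = None
    for beta in offsets_db:
        dist = design(beta)                      # step (1): UEP design per [3]
        H = construct(dist)                      # step (2): e.g. PEG-ACE
        mi = avg_app_mi(H, snr_db, max_iters)    # step (3): detailed MI evolution
        if best is None or mi > best[1]:
            best = (beta, mi)
    return best

# Toy stand-ins just to exercise the loop (not a real code design):
offsets = [0.0, 0.05, 0.1, 0.2, 0.3]
result = best_threshold_offset(
    offsets,
    design=lambda b: b,
    construct=lambda d: d,
    avg_app_mi=lambda H, s, it: 1.0 - (H - 0.1) ** 2,  # toy metric peaking at 0.1 dB
    snr_db=0.6,
)
print(result)  # -> (0.1, 1.0)
```

In a real run, `avg_app_mi` would be the detailed MI evolution of the appendix, evaluated on the constructed parity-check matrix.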
The value of the threshold offset is optimized for a specific $E_b/N_0$, which means that a code that is optimized for low $E_b/N_0$ may perform worse at high $E_b/N_0$, and vice versa. For UEP codes, the proportions between the protection classes also affect the UEP properties of a code, and thereby also the best choice of the threshold offset. In our simulations, we have seen that with 20% of the information bits in the most protected class and 80% in a less protected class, good performance is achieved for rate-1/2 codes of different lengths. We therefore omit further investigations of the effect of different proportions between the protection classes.
3. Design Examples
We design UEP codes of two different lengths. All codes are designed using the check node degree distributions given in [16, Table II]. The performance of each UEP code is compared to the performance of a non-UEP code with the variable node degree distribution that gives the best threshold, also tabulated in [16, Table II]. A maximum of 100 decoder iterations is allowed. Except for the variable node degree distribution, the UEP codes and the corresponding non-UEP code have the same parameters. We consider only rate-1/2 LDPC codes. All UEP codes presented in this section have 20% of the information bits in the most protected class and the remaining 80% in the second class. A third protection class contains all parity bits. We first focus on the design of generalized ACE constrained progressive edge-growth (PEG) codes [14] (in the following denoted PEG-ACE codes) in Section 3.1. The random construction and the PEG construction algorithm [12] are considered in Section 3.2.
The progressive edge-growth (PEG) construction algorithm is an efficient algorithm for the construction of parity-check matrices with large girth (the length of the shortest cycle in the Tanner graph) by progressively connecting variable nodes and check nodes [12]. The approximate cycle extrinsic message degree (ACE) construction algorithm lowers the error floor by emphasizing both the number of edges from variable nodes in a cycle to nodes in the graph that are not part of the cycle and the length of the cycles [13]. The PEG-ACE construction algorithm is a generalization of the popular PEG algorithm that has been shown to generate good LDPC codes of short and moderate block lengths with large girth [14]. If the creation of cycles cannot be avoided while adding an edge, the PEG-ACE construction algorithm chooses an edge that creates the longest possible cycle with the best possible ACE constraint.
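The edge-placement idea behind PEG can be illustrated with a much-simplified sketch (our own Python, without the ACE tie-breaking or the girth bookkeeping of [12, 14]): each new edge of a variable node is attached to a check node that is unreachable from it in the current graph, or failing that, as far away as possible, which maximizes the length of any cycle the edge closes.

```python
from collections import deque

def peg_construct(n_checks, var_degrees):
    """Simplified PEG sketch: connect each variable node edge to the check
    node that is farthest (or unreachable) in the current graph, breaking
    ties by lowest current check node degree."""
    vn_adj = [set() for _ in var_degrees]      # checks attached to each variable node
    cn_adj = [set() for _ in range(n_checks)]  # variables attached to each check node
    for v, deg in enumerate(var_degrees):
        for _ in range(deg):
            dist = bfs_check_distances(v, vn_adj, cn_adj, n_checks)
            best = max((c for c in range(n_checks) if c not in vn_adj[v]),
                       key=lambda c: (dist[c], -len(cn_adj[c])))
            vn_adj[v].add(best)
            cn_adj[best].add(v)
    return vn_adj

def bfs_check_distances(v, vn_adj, cn_adj, n_checks):
    """Distance (in edges) from variable node v to every check node; inf if unreachable."""
    INF = float('inf')
    dist = [INF] * n_checks
    seen_v = {v}
    q = deque([(v, 0)])
    while q:
        node, d = q.popleft()
        for c in vn_adj[node]:
            if dist[c] == INF:
                dist[c] = d + 1
                for v2 in cn_adj[c]:
                    if v2 not in seen_v:
                        seen_v.add(v2)
                        q.append((v2, d + 2))
    return dist

# 6 variable nodes of degree 2, 4 check nodes
adj = peg_construct(4, [2] * 6)
print([sorted(s) for s in adj])
```

The real PEG-ACE algorithm additionally evaluates the ACE metric of the cycles an edge would close; this sketch only captures the distance-maximizing placement.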
3.1. Optimization of the Threshold Offset for PEG-ACE Codes
Six different PEG-ACE codes with varying threshold offsets are designed and constructed according to the design procedure in Section 2.2. Non-UEP codes correspond to a threshold offset of 0 dB. These codes have length and allowed maximum variable node degree . Figure 1 shows the average a posteriori variable node MI $I_{APP,v}$ as a function of decoder iterations at two different $E_b/N_0$. For a low number of iterations, a large threshold offset gives fast convergence for both $E_b/N_0$ shown. This implies that for applications where only a small number of decoder iterations is allowed, a large threshold offset yields the best performance. The average $I_{APP,v}$ at dB is maximized by dB. After 100 iterations, dB gives the highest average $I_{APP,v}$, but simulations show that the code with dB outperforms the code with dB at low $E_b/N_0$. At dB, dB maximizes the average $I_{APP,v}$. The variable node degree distributions for the PEG-ACE codes with dB and dB are tabulated in Table 1. The random code with dB will be considered in Section 3.2.
Figure 2 shows the BER performance of the two protection classes containing information bits for dB and dB. The BER of the non-UEP code is shown for comparison. The figure shows that the code with dB performs well at low $E_b/N_0$, as expected, while the code with a lower threshold offset has less UEP capability but better average performance at high $E_b/N_0$. The average BER is just slightly lower than the BER of the less protected information class, since the average BER is calculated from the BERs of the two protection classes containing information bits, scaled by the proportions of the classes. Note that both classes of these UEP codes perform better than the comparable non-UEP code. In addition, the UEP codes offer a small difference in BER between the classes.
Figure 3 shows the performance of three PEG-ACE codes of length and allowed maximum variable node degree . The non-UEP code is compared to a code optimized for low $E_b/N_0$ (which gives dB) and a code optimized for high $E_b/N_0$ (which gives dB). Both classes of the two UEP codes perform better than the non-UEP code. At dB, the UEP code with dB has an average BER that is around one order of magnitude lower than that of the non-UEP code.
3.2. Code Design for Other Construction Algorithms
For given degree distributions, it has been shown that the choice of the construction algorithm strongly affects the UEP properties of the LDPC code [17]. For codes with little inherent UEP (e.g., PEG and PEG-ACE codes), the threshold offset needs to be large to yield a code with good UEP capability. On the other hand, for codes with significant inherent UEP (e.g., randomly constructed codes), a high threshold offset may make the less protected classes so badly protected that a wave effect does not occur. Figure 4 shows the performance of codes of length constructed by three different algorithms: the random construction, which only avoids cycles of length 4, the PEG construction algorithm, and the PEG-ACE construction algorithm. The figure shows the BER of the non-UEP codes as well as the BER of the two protection classes containing information bits of the UEP codes. The variable node degree distributions of the UEP codes are given in Table 1. Note that the PEG dB code has the same variable node degree distribution as the PEG-ACE dB code.
For the random code, MI calculations at dB show that the non-UEP code gives the highest MI, except at very few decoder iterations. At dB, the code with dB is slightly better than other choices of the threshold offset after around 50 iterations. It can thereby be assumed that the non-UEP random code will perform better at low $E_b/N_0$, and the UEP random code with dB will perform better at high $E_b/N_0$. This is confirmed by the results shown in Figure 4. For the PEG and PEG-ACE codes, the threshold offset dB used to design the codes gives the maximum MI at dB. For both codes, dB gives higher MI at dB, but for the PEG code there is only a slight difference in MI between these two values of the threshold offset.
The figure shows that both the PEG and the PEG-ACE UEP codes have significantly better performance than the corresponding non-UEP codes at low $E_b/N_0$. Remember that the PEG-ACE code with dB performs better at high $E_b/N_0$. The PEG and PEG-ACE construction algorithms result in codes with little inherent UEP, and the UEP capabilities gained by the relatively high threshold offset give faster convergence of the codes. However, the random UEP code has only a slightly lower average BER than the non-UEP code at high $E_b/N_0$. Note that the average performance of random codes with dB is worse than that of the non-UEP code. That is, for the random code with much inherent UEP, there is not as much to gain by the UEP code design as for the PEG and the PEG-ACE codes, which have little inherent UEP.
These results show, as expected, that a PEG or PEG-ACE code should typically be chosen instead of a random code, even if the application benefits from UEP. For a large range of $E_b/N_0$, the most protected class of the PEG and PEG-ACE codes has only slightly worse performance than that of the random code, while the average BER is much higher for the random code. This paper shows that we can improve both the average BER performance and the UEP capability by optimizing the threshold offset.
3.3. Comparison to Other UEP-LDPC Codes
UEP-LDPC codes of similar length and rate as the codes presented here can be found in [6, 7]. Ma and Kwak [6] propose a partially regular code design, where all variable nodes in one protection class have the same degree. Good variable node degrees for each class are found through density evolution using the Gaussian approximation. Figures 5 and 6 show the performance of the code designed by Ma and Kwak together with the performance of two PEG-ACE codes (also shown in Figure 2). All these codes have the same length and rate. The code designed by Ma and Kwak has 12.5% of the information bits in the most protected class and 87.5% in the second class, compared to the codes proposed in this paper, where 20% of the information bits belong to the most protected class. Note that Ma and Kwak [6] present the performance after only 10 decoder iterations, so in Figure 5 we compare their code to the PEG-ACE codes after 10 iterations. Figure 6 demonstrates the performance of the same codes after 100 iterations. The performance of the Ma and Kwak code for 10 iterations is taken directly from [6]. To find the performance after 100 iterations, we have run simulations with a code constructed according to the specifications given in [6]. It has been verified that our simulations give the same result as shown in [6] after 10 iterations.
Figure 5 shows that after only 10 decoder iterations, the PEG-ACE dB code performs best at low $E_b/N_0$. This is in accordance with Figure 1, which demonstrates that the code with the highest threshold offset has the best average MI when the number of decoder iterations is low. However, at high $E_b/N_0$, the code proposed by Ma and Kwak has a lower average BER (its less protected class performs better). The PEG-ACE dB code performs almost the same as the Ma and Kwak code, except at high $E_b/N_0$, where one of the protection classes of the PEG-ACE code has worse performance.
After 100 decoder iterations (see Figure 6), the PEG-ACE dB code performs well at low $E_b/N_0$ compared to the code designed by Ma and Kwak. Up to dB, there is a gain of around 0.13 dB for one information class and around 0.1 dB for the other at the respective BERs. At dB, the Ma and Kwak code performs slightly better than the PEG-ACE dB code. The PEG-ACE dB code has slightly lower BERs than the Ma and Kwak code at all $E_b/N_0$.
Note that there is a significant difference in BER between the protection classes after 10 iterations, while the difference is much smaller after 100 iterations. This is typical for codes constructed using the PEG or PEG-ACE algorithm [17]. However, if 100 iterations are allowed, both classes have a lower BER than the most protected class has after only 10 iterations. Thus, if a reasonably high number of iterations can be afforded in terms of time and complexity, it is better to run many iterations even if the difference in error protection between the classes is reduced.
Yang et al. [7] present simulation results for an irregular UEP-LDPC code of comparable length and rate, which may be compared to the PEG-ACE codes proposed here. Figure 7 shows the performance of these codes. Note that Yang et al. divide the information bits into three different protection classes, with 1485 bits in the first class, 307 bits in the second, and 3208 bits in the third. The PEG-ACE dB code performs best at high $E_b/N_0$. At dB, the most protected class of the PEG-ACE dB code has a lower BER than the most protected class of the code designed by Yang et al. At low $E_b/N_0$, the PEG-ACE dB code has similar performance to the code designed by Yang et al. The PEG-ACE dB code has very good performance at low $E_b/N_0$, and at $E_b/N_0$ up to 0.7 dB its average BER is lower than the BER of the most protected class of the code designed by Yang et al.
4. Conclusions
We have proposed an improved design algorithm for UEP-LDPC codes, resulting in codes with reduced average BER. The algorithm searches for good threshold offsets for the UEP design, given different codeword lengths and different construction algorithms. The choice of the threshold offset is based on the average a posteriori variable node MI of the codes. Simulations show that the codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average BER. We show that the average BER is reduced by up to an order of magnitude by the proposed code design.
References
Rahnavard N, Fekri F: Unequal error protection using low-density parity-check codes. Proceedings of the IEEE International Symposium on Information Theory (ISIT '04), July 2004, 449.
Rahnavard N, Pishro-Nik H, Fekri F: Unequal error protection using partially regular LDPC codes. IEEE Transactions on Communications 2007, 55(3):387–391.
Poulliat C, Declercq D, Fijalkow I: Enhancement of unequal error protection properties of LDPC codes. EURASIP Journal on Wireless Communications and Networking 2007, 2007:9.
Sassatelli L, Henkel W, Declercq D: Check-irregular LDPC codes for unequal error protection under iterative decoding. Proceedings of the 4th International Symposium on Turbo Codes & Related Topics, April 2006.
Pishro-Nik H, Rahnavard N, Fekri F: Nonuniform error correction using low-density parity-check codes. IEEE Transactions on Information Theory 2005, 51(7):2702–2714. 10.1109/TIT.2005.850230
Ma P, Kwak KS: Unequal error protection low-density parity-check codes design based on Gaussian approximation in image transmission. Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '09), April 2009, 1–6.
Yang X, Yuan D, Ma P, Jiang M: New research on unequal error protection (UEP) property of irregular LDPC codes. Proceedings of the 1st Consumer Communications and Networking Conference (CCNC '04), January 2004, 361–363.
Vasic B, Cvetkovic A, Sankaranarayanan S, Marcellin M: Adaptive error protection low-density parity-check codes for joint source-channel coding schemes. Proceedings of the IEEE International Symposium on Information Theory (ISIT '03), July 2003, 267.
Rahnavard N, Fekri F: New results on unequal error protection using LDPC codes. IEEE Communications Letters 2006, 10(1):43–45. 10.1109/LCOMM.2006.1576564
Kumar V, Milenkovic O: On unequal error protection LDPC codes based on Plotkin-type constructions. IEEE Transactions on Communications 2006, 54(6):994–1005. 10.1109/TCOMM.2006.876842
Goupil A, Declercq D: UEP non-binary LDPC codes: a promising framework based on group codes. Proceedings of the IEEE International Symposium on Information Theory (ISIT '08), July 2008, 2227–2231.
Hu XY, Eleftheriou E, Arnold DM: Regular and irregular progressive edge-growth Tanner graphs. IEEE Transactions on Information Theory 2005, 51(1):386–398.
Tian T, Jones CR, Villasenor JD, Wesel RD: Selective avoidance of cycles in irregular LDPC code construction. IEEE Transactions on Communications 2004, 52(8):1242–1247. 10.1109/TCOMM.2004.833048
Vukobratović D, Šenk V: Generalized ACE constrained progressive edge-growth LDPC code design. IEEE Communications Letters 2008, 12(1):32–34.
Luby MG, Mitzenmacher M, Shokrollahi MA, Spielman DA: Improved low-density parity-check codes using irregular graphs. IEEE Transactions on Information Theory 2001, 47(2):585–598. 10.1109/18.910576
Richardson TJ, Shokrollahi MA, Urbanke RL: Design of capacity-approaching irregular low-density parity-check codes. IEEE Transactions on Information Theory 2001, 47(2):619–637. 10.1109/18.910578
von Deetzen N, Sandberg S: On the UEP capabilities of several LDPC construction algorithms. IEEE Transactions on Communications 2010, 58(11):3041–3046.
Liva G, Chiani M: Protograph LDPC codes design based on EXIT analysis. Proceedings of the 50th Annual IEEE Global Telecommunications Conference (GLOBECOM '07), November 2007, 3250–3254.
ten Brink S: Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Transactions on Communications 2001, 49(10):1727–1737. 10.1109/26.957394
ten Brink S, Kramer G, Ashikhmin A: Design of low-density parity-check codes for modulation and detection. IEEE Transactions on Communications 2004, 52(4):670–678. 10.1109/TCOMM.2004.826370
Chung SY, Richardson TJ, Urbanke RL: Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation. IEEE Transactions on Information Theory 2001, 47(2):657–670. 10.1109/18.910580
Appendix
Detailed MI Evolution
Let $I_{Av}$ be the a priori MI between one variable node input message and the codeword bit associated to the variable node, and let $I_{Ev}$ be the extrinsic MI between one variable node output message and the codeword bit. Similarly, on the check node side, we define $I_{Ac}$ ($I_{Ec}$) to be the a priori (extrinsic) MI between one check node input (output) message and the codeword bit corresponding to the variable node providing (receiving) the message. The evolution is initialized by the MI between one received message and the corresponding codeword bit, denoted by $I_{ch}$, which corresponds to the channel capacity. For the AWGN channel, it is given by $I_{ch} = J(\sigma_{ch})$, where

$$\sigma_{ch}^2 = 8R\,\frac{E_b}{N_0}$$

and $E_b/N_0$ is the signal-to-noise ratio at which the analysis is performed. The function $J(\sigma)$ is defined by

$$J(\sigma) = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(\xi-\sigma^2/2)^2/(2\sigma^2)} \log_2\!\left(1+e^{-\xi}\right) d\xi$$

and computes the MI based on the noise variance. For a variable node with degree $d_v$, the extrinsic MI between the $j$th output message and the corresponding codeword bit is [18]

$$I_{Ev}^{(j)} = J\!\left(\sqrt{\sum_{k \neq j} \left[J^{-1}\!\left(I_{Av}^{(k)}\right)\right]^2 + \left[J^{-1}(I_{ch})\right]^2}\,\right),$$

where $I_{Av}^{(k)}$ is the a priori MI of the message received by the variable node on its $k$th edge. The extrinsic MI for a check node with degree $d_c$ may be written as

$$I_{Ec}^{(j)} = 1 - J\!\left(\sqrt{\sum_{k \neq j} \left[J^{-1}\!\left(1 - I_{Ac}^{(k)}\right)\right]^2}\,\right),$$

where $I_{Ac}^{(k)}$ is the a priori MI of the message received by the check node on its $k$th edge. Note that the MI functions are subject to the Gaussian approximation (see [21]) and are not exact.
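The J function above has no closed form, so in practice it is evaluated numerically. The following is our own minimal sketch (trapezoidal integration with a bisection inverse; the parameter choices are ours, not from the paper):

```python
import math

LN2 = math.log(2.0)

def J(sigma, steps=2000):
    """J(sigma): MI between a codeword bit and a consistent Gaussian LLR
    message N(sigma^2/2, sigma^2), via trapezoidal integration."""
    if sigma < 1e-6:
        return 0.0
    mu = sigma * sigma / 2.0
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        x = lo + k * h
        pdf = math.exp(-((x - mu) ** 2) / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)
        # log2(1 + e^{-x}), computed stably for both signs of x
        f = math.log1p(math.exp(-x)) / LN2 if x >= 0 else (-x + math.log1p(math.exp(x))) / LN2
        total += (0.5 if k in (0, steps) else 1.0) * pdf * f
    return 1.0 - total * h

def J_inv(y, tol=1e-10):
    """Inverse of J by bisection (J is strictly increasing)."""
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if J(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(J(2.0), 3), round(J_inv(J(2.0)), 3))
```

Since $J$ is strictly increasing, the bisection inverse is well defined; lookup tables or polynomial approximations are commonly used instead when speed matters.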
The following algorithm describes the MI analysis of a given parity-check matrix. We denote element $(m,n)$ of the parity-check matrix by $h_{mn}$, the set of variable nodes incident to check node $m$ by $V(m)$, and the set of check nodes incident to variable node $n$ by $C(n)$.

(1)
Initialization

(A.5) $I_{Ac}^{(m,n)} = I_{ch}$ for all $(m,n)$ with $h_{mn} = 1$.

(2)
Check to variable update

(a)
For $m = 1, \ldots, N-K$ and $n = 1, \ldots, N$, if $h_{mn} = 1$, calculate

(A.6) $I_{Ec}^{(m,n)} = 1 - J\!\left(\sqrt{\sum_{n' \in V(m)\setminus n} \left[J^{-1}\!\left(1 - I_{Ac}^{(m,n')}\right)\right]^2}\,\right)$

(b)
If $h_{mn} = 0$, set $I_{Ec}^{(m,n)} = 0$.

(c)
For $m = 1, \ldots, N-K$ and $n = 1, \ldots, N$, set $I_{Av}^{(m,n)} = I_{Ec}^{(m,n)}$.

(3)
Variable to check update

(a)
For $m = 1, \ldots, N-K$ and $n = 1, \ldots, N$, if $h_{mn} = 1$, calculate

(A.7) $I_{Ev}^{(m,n)} = J\!\left(\sqrt{\sum_{m' \in C(n)\setminus m} \left[J^{-1}\!\left(I_{Av}^{(m',n)}\right)\right]^2 + \left[J^{-1}(I_{ch})\right]^2}\,\right)$

(b)
If $h_{mn} = 0$, set $I_{Ev}^{(m,n)} = 0$.

(c)
For $m = 1, \ldots, N-K$ and $n = 1, \ldots, N$, set $I_{Ac}^{(m,n)} = I_{Ev}^{(m,n)}$.

(4)
A posteriori check node MI

For $m = 1, \ldots, N-K$, calculate

(A.8) $I_{APP,c}^{(m)} = 1 - J\!\left(\sqrt{\sum_{n' \in V(m)} \left[J^{-1}\!\left(1 - I_{Ac}^{(m,n')}\right)\right]^2}\,\right)$

(5)
A posteriori variable node MI

For $n = 1, \ldots, N$, calculate

(A.9) $I_{APP,v}^{(n)} = J\!\left(\sqrt{\sum_{m' \in C(n)} \left[J^{-1}\!\left(I_{Av}^{(m',n)}\right)\right]^2 + \left[J^{-1}(I_{ch})\right]^2}\,\right)$

(6)
Repeat (2)–(5) until $I_{APP,v}^{(n)} = 1$ for $n = 1, \ldots, N$, or until the maximum number of decoder iterations is reached.
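The algorithm above can be sketched end to end. This is our own illustrative Python, not the authors' code: the J function is tabulated once and interpolated for speed, the copy steps (2c) and (3c) are folded directly into the updates, and the parity-check matrix is a toy example.

```python
import bisect
import math

LN2 = math.log(2.0)

def _J_exact(sigma, steps=200):
    """J(sigma) by trapezoidal integration of the consistent Gaussian LLR density."""
    if sigma < 1e-6:
        return 0.0
    mu = sigma * sigma / 2.0
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (hi - lo) / steps
    s = 0.0
    for k in range(steps + 1):
        x = lo + k * h
        pdf = math.exp(-((x - mu) ** 2) / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)
        f = math.log1p(math.exp(-x)) / LN2 if x >= 0 else (-x + math.log1p(math.exp(x))) / LN2
        s += (0.5 if k in (0, steps) else 1.0) * pdf * f
    return 1.0 - s * h

# Tabulate J once on a sigma grid; evaluate J and its inverse by interpolation.
_STEP = 0.02
_SIG = [_STEP * i for i in range(1501)]          # sigma in [0, 30]
_MI = [_J_exact(s) for s in _SIG]

def J(sigma):
    if sigma <= 0.0:
        return 0.0
    if sigma >= _SIG[-1]:
        return 1.0
    i = int(sigma / _STEP)
    t = sigma / _STEP - i
    return _MI[i] + t * (_MI[i + 1] - _MI[i])

def J_inv(y):
    if y <= 0.0:
        return 0.0
    if y >= _MI[-1]:
        return _SIG[-1]
    i = bisect.bisect_left(_MI, y)
    if _MI[i] == _MI[i - 1]:
        return _SIG[i]
    t = (y - _MI[i - 1]) / (_MI[i] - _MI[i - 1])
    return _SIG[i - 1] + t * _STEP

def detailed_mi_evolution(H, R, ebno_db, max_iters=20):
    """Steps (1)-(6) on a parity-check matrix H (list of 0/1 rows).
    Returns the a posteriori MI of every variable node."""
    M, N = len(H), len(H[0])
    V = [[n for n in range(N) if H[m][n]] for m in range(M)]   # V(m)
    C = [[m for m in range(M) if H[m][n]] for n in range(N)]   # C(n)
    I_ch = J(math.sqrt(8.0 * R * 10.0 ** (ebno_db / 10.0)))
    I_ac = {(m, n): I_ch for m in range(M) for n in V[m]}      # (A.5) initialization
    I_av = {}
    app_v = [0.0] * N
    for _ in range(max_iters):
        for m in range(M):                                     # (2) check-to-variable, (A.6)
            for n in V[m]:
                s = sum(J_inv(1.0 - I_ac[(m, np)]) ** 2 for np in V[m] if np != n)
                I_av[(m, n)] = 1.0 - J(math.sqrt(s))           # step (2c) folded in
        for n in range(N):                                     # (3) variable-to-check, (A.7)
            for m in C[n]:
                s = sum(J_inv(I_av[(mp, n)]) ** 2 for mp in C[n] if mp != m)
                I_ac[(m, n)] = J(math.sqrt(s + J_inv(I_ch) ** 2))  # step (3c) folded in
        app_v = [J(math.sqrt(sum(J_inv(I_av[(mp, n)]) ** 2 for mp in C[n])
                             + J_inv(I_ch) ** 2)) for n in range(N)]  # (5), (A.9)
        if min(app_v) > 1.0 - 1e-6:                            # (6) stopping rule
            break
    return app_v

# Toy 3 x 7 parity-check matrix (illustration only, not a designed code)
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
mi = detailed_mi_evolution(H, 4.0 / 7.0, 3.0)
print([round(x, 3) for x in mi])
```

For a UEP code, the per-class average of `app_v` over the variable nodes of each protection class gives exactly the quantity used in Section 2 to compare threshold offsets.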
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Sandberg, S. Improved Design of Unequal Error Protection LDPC Codes. J Wireless Com Network 2010, 423989 (2010). https://doi.org/10.1155/2010/423989