Efficient use of a hybrid decoding technique for LDPC codes
© Guimarães et al.; licensee Springer. 2014
Received: 8 April 2013
Accepted: 9 January 2014
Published: 26 February 2014
A word error rate (WER) reducing approach for a hybrid iterative error and erasure decoding algorithm for low-density parity-check codes is described. A lower WER is achieved when the maximum number of iterations of the min-sum belief propagation decoder stage is set to certain specific values which are code dependent. By proper choice of decoder parameters, this approach reduces WER by about 2 orders of magnitude for an equivalent decoding complexity. Computer simulation results are given for the efficient use of this hybrid decoding technique in the presence of additive white Gaussian noise.
Keywords: Iterative decoding; LDPC codes; Erasure decoding
Low-density parity-check (LDPC) codes were first introduced by Gallager in 1963 [1] and rediscovered by MacKay [2] in the late 1990s. LDPC codes are characterized by a sparse parity-check matrix H, for which an iterative decoder becomes an attractive option. One example is the belief propagation (BP) decoder, which achieves optimal maximum a posteriori (MAP) decoding if the code graph contains no cycles ([3], p. 211). However, most LDPC codes have cycles in their Tanner graph representation, especially at small and medium blocklengths, which adversely affect code performance. The BP decoding operation is executed for a preset number of iterations, and a decoding failure leads to a frame error, which implies that a number of bits are erroneously decoded. In the error-floor region, where LDPC codes exhibit a sudden saturation in word error rate (WER) at sufficiently high signal-to-noise ratio (SNR), the bit errors are primarily caused by trapping sets ([3], p. 225). A considerable amount of research has gone into designing decoders that mitigate the errors caused by trapping sets [4–6].
In [7], a bi-mode decoder for LDPC codes, called a hybrid decoder (HD), was proposed for the additive white Gaussian noise (AWGN) channel. The HD system operates in two modes (stages): in mode 1 (first stage) min-sum BP decoding is employed, and in mode 2 (second stage) an iterative erasure decoder is used. One cycle of the HD operation includes at least one pass through the min-sum BP decoder and, if necessary, a pass through the erasure decoder.
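This two-stage cycle can be sketched as follows. The sketch is a minimal illustration with the two decoders passed in as callables; the erasure budget `e`, the NaN encoding of an erasure, and all function names are our assumptions, not the authors' implementation.

```python
import numpy as np

def hd_decode(llr, H, bp_decode, erase_decode, n_cycles=2, e=1):
    """Illustrative control flow of the two-stage hybrid decoder (HD).

    bp_decode(llr)     -> (hard_word, out_llr)  # stage 1: min-sum BP
    erase_decode(z, H) -> w                     # stage 2: iterative erasure decoder
    NaN entries in z and w stand for erasures (Δ).
    """
    for _ in range(n_cycles):
        word, out_llr = bp_decode(llr)               # stage 1 (mode 1)
        if not ((H @ word) % 2).any():               # zero syndrome: codeword found
            return word
        # erase the e least reliable positions, keep hard decisions elsewhere
        z = word.astype(float)
        z[np.argsort(np.abs(out_llr))[:e]] = np.nan
        w = erase_decode(z, H)                       # stage 2 (mode 2)
        if not np.isnan(w).any():
            w_int = w.astype(int)
            if not ((H @ w_int) % 2).any():
                return w_int                         # all erasures corrected
        # flip the sign of QLLRs whose hard decision the erasure decoder changed
        changed = ~np.isnan(w) & (w != word)
        llr = np.where(changed, -out_llr, out_llr)   # input to the next HD cycle
    return None                                      # decoding failure
```

With a stub BP decoder that simply hard-decides the channel LLRs and a pass-through erasure decoder, the control flow can be exercised end to end.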
In this article, we make more efficient use of the HD system introduced in [7]. In comparison with [7], the novelty here is the experimental determination of appropriate values for the maximum number of iterations of the min-sum BP decoder, which permits fine-tuning of the HD system. Furthermore, new material has been added to explain the estimation of the number of bits to be erased at the end of an unsuccessful BP decoding, and computer simulations were performed to investigate the behavior of certain LDPC codes in the presence of AWGN by analyzing the following:
Behavior of the number of errors resulting after a decoding failure;
Impact on the performance of HD systems caused by setting a specific number of iterations in a min-sum BP decoder, such that the cardinality of the remaining bit error pattern is small in case of unsuccessful BP decoding;
Performance curves obtained by computer simulation for some LDPC codes used in the IEEE 802.11n standard [8], under the constraints on the decoding conditions indicated earlier.
The complexity of iterative LDPC decoding can be reduced in three main ways:
Simplification of the computations performed by the decoder;
Reduction of the number of iterations needed to converge;
Reduction of the complexity cost of each iteration.
Different approaches have been followed to reduce the complexity of LDPC decoding: for example, replacing the sum-product check node computation by a simpler function to simplify the computations [10, 13], using scheduling techniques that increase the convergence rate by altering the order in which messages are updated during an iteration [11], or using the forced convergence method [12]. In [9], an approach named lazy scheduling was employed to force a reduction in the complexity of the decoding computations without reducing the number of iterations. Here, we address an approach whereby the maximum number of iterations is reduced without a penalty in performance.
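As an illustration of the first approach, the exact sum-product check node rule and its min-sum simplification can be contrasted as below. The function names are ours; each function computes the extrinsic message to one edge from the messages arriving on the other edges of a check node.

```python
import numpy as np

def check_update_sum_product(msgs):
    """Exact sum-product check node rule: 2*atanh(prod(tanh(m/2)))."""
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(msgs, dtype=float) / 2.0)))

def check_update_min_sum(msgs):
    """Min-sum simplification: product of signs times the minimum magnitude,
    avoiding the transcendental functions of the sum-product rule."""
    m = np.asarray(msgs, dtype=float)
    return np.prod(np.sign(m)) * np.min(np.abs(m))
```

The min-sum message always overestimates the sum-product magnitude, which is why corrected variants (normalized or offset min-sum) are often used in practice.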
Hybrid decoding system
For a given positive integer e, let E denote a set of e coordinates, selected as the e coordinates with the lowest reliability (QLLR magnitude) values among the N coordinates at the output of the min-sum BP decoder. Each coordinate in E is mapped to z as an erasure, and each remaining coordinate is mapped to z as a 0 or a 1 by hard decision, according to Rule 1:

z_i = Δ, if i ∈ E; z_i = 0, if i ∉ E and the i-th QLLR is non-negative; z_i = 1, if i ∉ E and the i-th QLLR is negative,

where Δ denotes an erasure.
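Rule 1 can be sketched as follows, assuming the common convention that a non-negative QLLR hard-decides to 0 and encoding Δ as −1; both choices are illustrative.

```python
import numpy as np

ERASURE = -1  # stands in for Δ

def rule1(qllr, e):
    """Build the N-tuple z from the QLLRs at the end of BP decoding:
    erase the e positions with lowest |QLLR|, hard-decide the rest
    (non-negative QLLR -> 0, negative -> 1)."""
    qllr = np.asarray(qllr, dtype=float)
    z = np.where(qllr >= 0, 0, 1)
    z[np.argsort(np.abs(qllr))[:e]] = ERASURE  # e least reliable coordinates
    return z
```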
The iterative erasure correction decoder then acts on z to correct the erasures introduced by Rule 1. In case all erasures are corrected, the HD cycle is interrupted and the erasure decoder outputs a binary N-tuple w. If wH^T = 0, then w is considered to be a codeword; otherwise, a decoding failure occurs. However, if not all erasures are corrected but at least one erasure is corrected when the erasure decoding operation is finished, the erasure decoder outputs an N-tuple w whose coordinate values coincide with the corresponding coordinate values in z, except for those coordinates where erasures in z have been corrected, which therefore contain binary values in w. Next, w is associated with an N-tuple of reliability values ξ by Rule 2 as follows:

ξ_i = λ_i, if w_i = Δ or w_i coincides with the hard decision on λ_i; ξ_i = −λ_i, otherwise,

where λ_i denotes the QLLR value of coordinate i at the end of the preceding BP decoding.
When the erasure decoding operation is finished, we notice in ξ that the QLLR values from the previous BP decoding are kept if the corresponding digits remained erased or if their values coincide with those already computed at the previous BP decoding; otherwise, the sign of the corresponding QLLR value is changed. A binary N-tuple b, obtained by hard decision on the coordinate values of ξ, has its syndrome computed to find out whether decoding has been successful. If not, and if the maximum number of cycles of the HD algorithm, denoted by IHD, has not been reached, then ξ is used as input to the min-sum BP decoder and a new HD cycle is initiated.
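A sketch of the Rule 2 update just described, using the same illustrative conventions (Δ encoded as −1, non-negative QLLR hard-deciding to 0):

```python
import numpy as np

ERASURE = -1  # stands in for Δ

def rule2(qllr, w):
    """Map the erasure-decoder output w back to reliabilities ξ: keep each
    QLLR where the bit stayed erased or agrees with the previous hard
    decision; flip its sign where the erasure decoder changed the bit."""
    qllr = np.asarray(qllr, dtype=float)
    w = np.asarray(w)
    prev_hard = np.where(qllr >= 0, 0, 1)
    changed = (w != ERASURE) & (w != prev_hard)
    return np.where(changed, -qllr, qllr)
```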
The choice of a value for e
As already mentioned, the parameter e represents the number of bits to be erased at the end of an unsuccessful BP decoding, using the criterion of erasing those digits with the lowest QLLR absolute values. An adequate determination of e is essential for obtaining a satisfactory overall performance since, in the iterative erasure decoding process, as the number of introduced erasures increases, it becomes less and less likely to find parity-check equations containing a single erased bit, which could thus be solved. A set of erased bits which cannot be decoded with the code parity-check equations is called a stopping set.
Definition 1 (from [15]).
A stopping set 𝒮 in a linear block code is a subset of the set of variable nodes (or bit nodes) of the code's Tanner graph such that every neighbor of 𝒮 (a parity-check node) is connected to 𝒮 at least twice.
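Definition 1 can be checked directly on a parity-check matrix; the following predicate is a straightforward transcription (the matrix and index sets in the usage below are toy examples of our own):

```python
import numpy as np

def is_stopping_set(H, S):
    """True iff the nonempty set S of variable-node indices is a stopping
    set of H: every check node with a neighbor in S touches S at least
    twice (no parity-check equation contains exactly one member of S)."""
    S = sorted(set(S))
    if not S:
        return False
    counts = np.asarray(H)[:, S].sum(axis=1)  # members of S seen by each check
    return bool(np.all((counts == 0) | (counts >= 2)))
```

A peeling erasure decoder stalls exactly when the set of still-erased positions forms a stopping set.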
Whenever there is a decoding failure in the first decoding stage, the second decoding stage is employed to process the output of an artificially created binary erasure channel (BEC). The input to the BEC is the binary sequence produced by the min-sum BP decoder at the end of the IBPth iteration. Our motivation to employ an iterative erasure decoding stage is the following. For a given block code with minimum Hamming distance d, it is well known that any erasure pattern containing up to d−1 erasures can be corrected ([14], p. 81), as well as a large fraction of patterns containing d or more erasures, as long as the number of erasures in a codeword does not exceed the number of parity-check digits and these erasures do not cover a codeword. As shown later, typical useful values of e are much larger than d.
Definition 2 (from [17]).
The stopping distance s(H) of a parity-check matrix H is the cardinality of its smallest nonempty stopping set. For a pattern of δ erasures, consider the ratio

S(δ)/T(δ),

where T(δ) denotes the total number of distinct combinations of δ positions with erasures and S(δ) denotes the number of such combinations which constitute a stopping set. Thus, the ratio S(δ)/T(δ) can be interpreted as the probability that a given set of δ erased digits is a stopping set. All erasure patterns containing a number of erasures less than the stopping distance s(H) can be corrected by the iterative erasure decoder. Apparently, a good choice for e should satisfy e < s(H), in which case all erasure patterns would be corrected. However, a small value of s(H) may force e to be too small for the erased positions to contain the majority of the errors present in the N-tuple z.
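For small codes, the ratio S(δ)/T(δ) can be evaluated exhaustively by running a peeling erasure decoder over all C(N, δ) erasure patterns; here a pattern is counted in S(δ) when peeling cannot remove all of its erasures, i.e., when it contains a nonempty stopping set. The matrix in the usage below is a toy example of our own.

```python
import itertools
import numpy as np

def peels_clean(H, erased):
    """Peeling decoder on erasure positions only: a parity check containing
    exactly one erased bit resolves it. True iff all erasures are removed."""
    erased, progress = set(erased), True
    while erased and progress:
        progress = False
        for row in np.asarray(H):
            touched = [j for j in erased if row[j]]
            if len(touched) == 1:
                erased.discard(touched[0])
                progress = True
    return not erased

def stopping_ratio(H, delta):
    """S(δ)/T(δ): fraction of the C(N, δ) erasure patterns that the
    peeling decoder cannot fully correct."""
    N = np.asarray(H).shape[1]
    patterns = list(itertools.combinations(range(N), delta))
    stuck = sum(not peels_clean(H, p) for p in patterns)
    return stuck / len(patterns)
```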
We define an N-tuple containing errors, produced when BP decoding fails, as detected if all erroneous positions are contained among the erased positions in z.
We define an N-tuple containing errors, produced when BP decoding fails, as corrected if it is detected and the corresponding binary word w produced by the iterative erasure decoder satisfies wH^T = 0, i.e., w belongs to the code.
Error event analysis
In the HD scheme, ideally, after the last iteration of an unsuccessful min-sum BP decoding, all remaining bit errors should be included among the erased positions in the N-tuple z. That is more likely to occur either when the number e of erased positions is large or when the cardinality of the remaining bit error pattern is small. Opting for a large e may in many cases not be a good approach, because it can give rise to stopping sets ([3], p. 244), thus hampering the success of the iterative erasure decoder, as can be observed from the behavior of the green bars in Figures 3 and 4. On the other hand, the number of bit errors for a given frame error occurrence can vary significantly along the intermediate iterations, i.e., the cardinality of the remaining bit error pattern can vary significantly. The variation in the number of erroneous bits at the output of a BP decoder after an unsuccessful decoding motivated us to develop an error event analysis in order to fine-tune the HD system, more specifically to select an appropriate value for IBP. In [7], initially, an arbitrary value of IBP was employed for the first stage of the HD system, which is a min-sum BP decoder. In this manner, there was no clue to the number of remaining errors when IBP iterations were performed and the decoding was not successful. When that happened, it was likely that the BP decoder had reached a trapping set ([3], p. 651). Once a trapping set has been reached, it is not possible, by performing more iterations with the BP decoder, to get out of it and successfully decode the received word. According to ([3], p. 652), the variation in the cardinality of the remaining bit error patterns occurs due to the behavior of the trapping sets that dominate the error floor.
Trapping sets can be classified into one of three classes according to their behavior ([3], p. 652): (1) a stable trapping set (also called a fixed-point trapping set), (2) a periodically oscillating trapping set, and (3) an aperiodically oscillating trapping set. The relevance of the variation in the cardinality of the remaining bit error patterns depends on the class of trapping sets that dominates the error-floor region. In general, there is no known theoretical way to establish the trapping sets of a given code. Thus, finding trapping sets is in general a difficult task, because it requires intensive computer simulations at very low error rates, which often take months to run ([3], p. 651). As an alternative to the usual computer simulation, we develop an error event analysis, which has a much lower computational cost, and apply it to two LDPC codes of the IEEE 802.11n standard used as examples in this article. The purpose of the error event analysis is to find the average behavior of min-sum BP decoding, with respect to the variation in the number of errors after each iteration, in case of a decoding failure. If the cardinality of the remaining bit error patterns varies significantly, we expect to find values for the number of iterations at which the number of erroneous bits is minimal, and thus choose one of these values for IBP. As a result, the probability that erroneous bits are included among the erased positions in the N-tuple z is higher.
For each iteration i, the average normalized number of bit errors γ i is computed over M samples of unsuccessful min-sum BP decodings, where M denotes the maximum number of samples considered in the analysis. Thus, the sequence γ allows us to track the average behavior of the min-sum BP decoding algorithm with respect to the variation in the number of errors at each iteration. The results shown in Figure 5 are for the two codes of interest, where the horizontal axis represents the number of decoding iterations and the vertical axis represents the average normalized number of bit errors at each iteration of the min-sum BP algorithm, γ i , 1≤i≤IBP=100. In Figure 5, for the (1296,648) code, the curve is plotted for SNR = 2.4 dB and M = 525 samples, and for the (1296,864) code, the curve is plotted for SNR = 3.2 dB and M = 510 samples. Since trapping sets depend not only on the graphical structure of the code but also on the channel ([3], p. 653), it is noteworthy that other SNR values were tested to analyze the behavior of the two codes in the situation described, and results similar to those shown in Figure 5 were obtained. For both LDPC codes considered, the number of remaining errors after a BP decoding failure exhibits an oscillating behavior, as illustrated in Figure 5. The resulting number of bit errors may reach higher values (local maxima), referred to as peak points, or lower values (local minima), referred to as valley points.
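The averaging just described, and the subsequent choice of IBP at a valley point, can be sketched as follows; the layout of the per-sample error counts and the normalization by the blocklength are assumptions for illustration.

```python
import numpy as np

def error_event_profile(err_counts, n):
    """err_counts[j, i]: bit errors at the end of iteration i (0-based) in
    the j-th of M unsuccessful decodings. Returns gamma_i averaged over the
    M samples and normalized by the blocklength n (an assumed normalization)."""
    return np.asarray(err_counts, dtype=float).mean(axis=0) / n

def pick_ibp(gamma):
    """Choose I_BP at a valley point: the 1-based iteration index with the
    smallest average residual error count."""
    return int(np.argmin(gamma)) + 1
```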
Although the error event analysis performed for the codes examined may not be enough to draw precise conclusions about the class of trapping sets that dominates the error-floor region, it is enough to allow fine-tuning of the HD system, as we show in the next section.
Efficient use of the hybrid decoder
Furthermore, for the range of SNR values in Figure 6, the WER for the HD system with 1 cycle is lower than that for a conventional min-sum BP decoder with IBP=12 iterations by approximately 1 order of magnitude. For 2 cycles, the resulting WER for the HD system is lower by 2 orders of magnitude than that for a conventional min-sum BP decoder with IBP=12 iterations.
Figure 6 also illustrates the results obtained for the (1296,648) LDPC code for IBP=25 iterations. We observe in Figure 5 that IBP=25 iterations is close to a peak point and large cardinality error patterns are expected. For IBP=25, an analysis similar to that from the previous case shows that the HD system with 1 cycle requires an SNR around 0.1 dB lower than that of a min-sum BP decoder for IBP=25 iterations in order to achieve the same performance in terms of WER. For a 2-cycle HD, a reduction of 0.2 dB results, again in comparison with a min-sum BP decoder for IBP=25 iterations. With respect to the values of WER, for IBP=25, we observe that the difference between the values of WER for the HD system with 2 cycles and for a min-sum BP decoder with IBP=25 iterations is smaller than 1 order of magnitude. In summary, in terms of WER, no considerable gain results if the value chosen for IBP is close to a peak point in Figure 5. On the other hand, we obtain a more efficient use of the HD system for the (1296,648) LDPC code by setting the value of IBP close to a valley point in Figure 5.
Performance curves in Figures 6 and 7 for the min-sum BP algorithm with 50 iterations serve as a reference. By setting IBP close to a valley point, the HD system needs a reduced overall number of iterations to achieve performance similar to that of the min-sum BP decoding scheme, with equivalent implementation complexity.
The oscillating behavior of the error rate versus the number of iterations, exhibited by some LDPC codes of the IEEE 802.11n standard, was exploited in order to take advantage of the so-called valley points in the error event analysis. The goal achieved here is an enhancement of the performance of the bi-mode hybrid decoder from [7]. Simulation results show that more efficient use of the HD system is obtained when the min-sum BP decoder employs a maximum number of iterations corresponding to points close to, and possibly including, a valley point, while avoiding values close to, and possibly including, a peak point. In particular, the error event analysis for the (1296,648) code in Figure 5 indicates, for 12 iterations, a number of remaining errors much smaller than for 25 iterations whenever there is a decoding failure. However, in general, a BP decoder employing a maximum of 25 iterations will, on average, fail fewer times than one employing a maximum of 12 iterations. A similar reasoning applies to other LDPC codes of similar parameters, as well as to LDPC codes of similar blocklength. In particular, it applies to the (1296,864) code considered here.
This work received partial support from the Brazilian National Council for Scientific and Technological Development - CNPq, Project 304696/2010-2, and from the Pernambuco State Foundation to Support Science and Technology - FACEPE, Project 0288-0.34/10.
1. Gallager RG: Low-Density Parity-Check Codes. MIT Press, Cambridge, MA; 1963.
2. MacKay DJC: Good error-correcting codes based on very sparse matrices. IEEE Trans. Inform. Theory 1999, 45(2):399–431.
3. Ryan WE, Lin S: Channel Codes: Classical and Modern. Cambridge University Press, Cambridge; 2009.
4. Gounai S, Ohtsuki T, Kaneko T: Modified belief propagation decoding algorithm for low-density parity check code based on oscillation. In Proc. IEEE 63rd Vehicular Technology Conference (VTC 2006-Spring), Melbourne, 7–10 May 2006; 1467–1471.
5. Alghonaim E, Landolsi MA, El-Maleh A: Improving BER performance of LDPC codes based on intermediate decoding results. In Proc. IEEE Int. Conf. on Signal Processing and Communications (ICSPC 2007), Dubai, 24–27 Nov 2007; 1547–1550.
6. Han Y, Ryan WE: Low-floor decoders for LDPC codes. IEEE Trans. Commun. 2009, 57:1663–1673.
7. Guimarães WPS, Lemos-Neto JS, da Rocha Jr VC: A hybrid iterative decoder for LDPC codes. In Proc. 9th Int. Symp. on Wireless Communication Systems (ISWCS 2012), Paris, 28–31 Aug 2012; 979–983.
8. IEEE Standards Association: IEEE 802.11n-D1.0 – IEEE 802.11n Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. IEEE, Piscataway; 2006.
9. Levin D, Sharon E, Litsyn S: Lazy scheduling for LDPC decoding. IEEE Commun. Lett. 2007, 11(1):70–72.
10. Zhang J, Fossorier M, Gu D, Zhang J: Two-dimensional correction for min-sum decoding of irregular LDPC codes. IEEE Commun. Lett. 2006, 10:180–182.
11. Zhang J, Fossorier MPC: Shuffled belief propagation decoding. In Proc. 36th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, 3–6 Nov 2002; 8–15.
12. Fettweis G, Zimmermann E, Rave W: Forced convergence decoding of LDPC codes: EXIT chart analysis and combination with node complexity reduction techniques. In Proc. 11th European Wireless Conference, Nicosia, 10–13 Apr 2005; 1–8.
13. Chen J, Dholakia A, Eleftheriou E, Fossorier MPC, Hu X-Y: Reduced-complexity decoding of LDPC codes. IEEE Trans. Commun. 2005, 53(8):1288–1299.
14. Lin S, Costello DJ: Error Control Coding. Prentice Hall, Englewood Cliffs; 2004.
15. Di C, Proietti D, Telatar IE, Richardson TJ, Urbanke RL: Finite-length analysis of low-density parity-check codes on the binary erasure channel. IEEE Trans. Inform. Theory 2002, 48(6):1570–1579.
16. Freitas PR, da Rocha Jr VC, Lemos-Neto JS: On the iterative decoding of binary product codes over the binary erasure channel. In Proc. 8th Int. Symp. on Wireless Communication Systems (ISWCS 2011), Aachen, 6–9 Nov 2011; 126–130.
17. Schwartz M, Vardy A: On the stopping distance and the stopping redundancy of codes. IEEE Trans. Inform. Theory 2006, 52(3):922–932.
18. Johnson S: Iterative Error Correction: Turbo, Low-Density Parity-Check and Repeat-Accumulate Codes. Cambridge University Press, Cambridge; 2010.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.