- Research Article
- Open Access
Efficient use of a hybrid decoding technique for LDPC codes
EURASIP Journal on Wireless Communications and Networking, volume 2014, Article number: 32 (2014)
A word error rate (WER)-reducing approach for a hybrid iterative error and erasure decoding algorithm for low-density parity-check codes is described. A lower WER is achieved when the maximum number of iterations of the min-sum belief propagation decoder stage is set to certain code-dependent values. By proper choice of decoder parameters, this approach reduces the WER by about two orders of magnitude at equivalent decoding complexity. Computer simulation results are given for the efficient use of this hybrid decoding technique in the presence of additive white Gaussian noise.
Low-density parity-check (LDPC) codes were first introduced by Gallager in 1963 [1] and rediscovered by Mackay [2] in the late 1990s. LDPC codes are characterized by a sparse parity-check matrix H, for which an iterative decoder becomes an attractive option. One example is the belief propagation (BP) decoder, which achieves optimal MAP (maximum a posteriori) decoding if the code graph contains no cycles ([3], p. 211). However, most LDPC codes, especially those of small and medium blocklength, do have cycles in their Tanner graph representation, and these cycles adversely affect code performance. The BP decoding operation is executed for a preset number of iterations, and a decoding failure leads to a frame error, i.e., a number of erroneously decoded bits. In the error-floor region, where LDPC codes exhibit a sudden saturation in word error rate (WER) at sufficiently high signal-to-noise ratio (SNR), the bit errors are caused primarily by trapping sets ([3], p. 225). A considerable amount of research has gone into designing decoders that mitigate the errors caused by trapping sets [4–6].
In [7], a bi-mode decoder for LDPC codes, called a hybrid decoder (HD), was proposed for the additive white Gaussian noise (AWGN) channel. The HD operates in two modes (stages): in mode 1 (first stage), min-sum BP decoding is employed, and in mode 2 (second stage), an iterative erasure decoder is used. One cycle of HD operation includes at least one pass through the min-sum BP decoder and, if necessary, a pass through the erasure decoder.
In this article, we make more efficient use of the HD system introduced in [7]. In comparison with [7], the novelty here is the experimental determination of appropriate values for the maximum number of iterations of the min-sum BP decoder, which makes it possible to fine-tune the HD system. Furthermore, new material has been added to explain the estimation of the number of bits to be erased at the end of an unsuccessful BP decoding, and computer simulations were performed to investigate the behavior of certain LDPC codes in the presence of AWGN by analyzing the following:
Behavior of the number of errors resulting after a decoding failure;
Impact on the performance of HD systems caused by setting a specific number of iterations in a min-sum BP decoder, such that the cardinality of the remaining bit error pattern is small in case of unsuccessful BP decoding;
Performance curves obtained by computer simulation for some LDPC codes used in the IEEE 802.11n standard [8], under the decoding constraints indicated earlier.
Nowadays, LDPC codes are in continually high demand, which creates a strong need for low-complexity decoding algorithms that enable low-cost, high-throughput decoder implementations. There are three main approaches to reducing the complexity of LDPC decoding, namely
Simplifying the computations performed by the decoder
Reducing the number of iterations needed to converge
Reducing the complexity cost of each iteration.
Different approaches have been followed to reduce the complexity of LDPC decoding: replacing the sum-product check node computation with a simpler function [13] to simplify the computations, using scheduling techniques that increase the convergence rate by altering the order in which messages are updated during an iteration [11], or using the forced convergence method [12]. In [9], an approach named lazy scheduling was employed to reduce the complexity of the decoding computations without reducing the number of iterations. Here, we address an approach whereby the maximum number of iterations is reduced without paying a penalty in performance.
Hybrid decoding system
We illustrate in Figure 1 the basic digital communication system adopted for the computer simulation of selected LDPC codes. At the transmitter, a binary information K-tuple u = (u1, u2, …, uK) feeds the encoder of a binary LDPC code to produce the codeword c = (c1, c2, …, cN). Each codeword is then processed by a mapper that converts each digit 0 to the integer +1 and each digit 1 to the integer −1. At the mapper output, the N-tuple x = (x1, x2, …, xN) is generated, where xi, 1 ≤ i ≤ N, denotes an integer equal to either +1 or −1. The N-tuple x is sent through an AWGN channel, whose output is denoted y and expressed as y = x + n, where n = (n1, n2, …, nN) denotes a vector whose coordinates are AWGN samples introduced by the channel, with zero mean and variance σ2.
At the receiver, decoding consists of processing y to recover u, and this is performed in at most two decoding stages, as we explain in the sequel according to the flowchart in Figure 2. On receiving the N-tuple y, the decoder converts y into a vector ξ = (ξ1, ξ2, …, ξN) whose coordinates are values of a quantized log-likelihood ratio (QLLR). The first decoding stage employs a min-sum BP decoder. Let IBP denote the maximum number of iterations for the first decoding stage. After each iteration, the min-sum BP decoder generates an N-tuple ξ̂ = (ξ̂1, ξ̂2, …, ξ̂N) of reliability values, which is converted to binary form by hard decision and denoted ĉ. The binary N-tuple ĉ is tested for its syndrome by means of the operation ĉHT, where HT denotes the transpose of the code parity-check matrix. A well-established result for linear block codes ([14], p. 70) guarantees that if ĉHT = 0 occurs, then we can consider ĉ to be the transmitted codeword, and the HD cycle is successfully interrupted. Otherwise, i.e., if ĉHT ≠ 0, a new iteration is performed by the BP decoder and the newly generated ĉ is tested to check whether ĉHT = 0. The BP decoder iterations continue until the condition ĉHT = 0 is satisfied or until the number of iterations reaches IBP. If IBP iterations are performed and decoding is not successful, then instead of declaring a decoding failure, the resulting N-tuple ξ̂ of reliability values is used by an artificially created binary erasure channel (BEC) to produce an N-tuple z of 0's, 1's, and erasures.
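The stopping test used after every BP iteration is a plain syndrome check over GF(2). The following Python fragment is a minimal illustrative sketch (the small parity-check matrix is a toy example, not an 802.11n matrix, and the function name is ours):

```python
def syndrome_ok(c_hat, H):
    """Return True if c_hat * H^T = 0 over GF(2), i.e., c_hat satisfies
    every parity-check equation (each row of H sums to 0 mod 2)."""
    return all(sum(h * c for h, c in zip(row, c_hat)) % 2 == 0 for row in H)

# Toy (6,3) parity-check matrix, for illustration only
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

print(syndrome_ok([1, 0, 0, 1, 0, 1], H))  # → True: a codeword of the toy code
print(syndrome_ok([1, 0, 0, 0, 0, 0], H))  # → False: first check fails
```

In the hybrid decoder, a True result interrupts the cycle; False triggers either another BP iteration or, after IBP iterations, the erasure stage.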
For a given positive integer E, let S denote a set of E coordinates, |S| = E, selected among the N coordinates with the lowest reliability magnitudes (QLLR absolute values) in the N-tuple of reliability values produced by the min-sum BP decoder, denoted ξ̂ = (ξ̂1, ξ̂2, …, ξ̂N). The value of each selected coordinate is mapped to z as an erasure, and each remaining coordinate is mapped to z as either 0 or 1 by hard decision, according to Rule 1 as follows:

z_i = Δ, if i ∈ S; z_i = 0, if i ∉ S and ξ̂_i ≥ 0; z_i = 1, if i ∉ S and ξ̂_i < 0; 1 ≤ i ≤ N,

where Δ denotes an erasure.
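Rule 1 (erase the least-reliable positions, hard-decide the rest) can be sketched in Python as follows. The function name and the use of E for the number of erasures are our notation; the sign convention (non-negative QLLR meaning digit 0) follows the mapper 0 → +1 described earlier:

```python
ERASURE = "Δ"  # erasure symbol

def rule1(reliab, E):
    """Erase the E coordinates of lowest |QLLR| and hard-decide the rest:
    QLLR >= 0 -> digit 0, QLLR < 0 -> digit 1 (mapper convention 0 -> +1)."""
    N = len(reliab)
    # Indices of the E coordinates with the smallest reliability magnitude
    erased = set(sorted(range(N), key=lambda i: abs(reliab[i]))[:E])
    return [ERASURE if i in erased else (0 if reliab[i] >= 0 else 1)
            for i in range(N)]

# Hypothetical QLLR output of a failed BP decoding
z = rule1([4.5, -0.2, 3.1, 0.7, -5.0, -1.2], E=2)
print(z)  # → [0, 'Δ', 0, 'Δ', 1, 1]: positions 1 and 3 are least reliable
```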
The iterative erasure correction decoder then acts on z to correct the erasures introduced by Rule 1. If all erasures are corrected, the HD cycle is interrupted and the erasure decoder outputs a binary N-tuple w. If wHT = 0, then w is considered to be a codeword; otherwise, a decoding failure occurs. However, if not all erasures are corrected but at least one erasure is corrected by the end of the erasure decoding operation, the erasure decoder outputs an N-tuple w whose coordinate values coincide with the corresponding coordinate values in z, except at those coordinates where erasures in z have been corrected and which therefore contain binary values in w. Next, w is associated with an N-tuple of reliability values ξ by Rule 2 as follows:

ξ_i = ξ̂_i, if w_i = Δ or w_i coincides with the hard decision from the previous BP decoding; ξ_i = −ξ̂_i, otherwise; 1 ≤ i ≤ N,

where ξ̂_i denotes the QLLR value computed at the previous BP decoding.
When the erasure decoding operation is finished, we notice in ξ that the QLLR values from the previous BP decoding are kept if the corresponding digits remained erased or if their values coincide with those already computed at the previous BP decoding; otherwise, the sign of the corresponding QLLR value is changed. A binary N-tuple b, obtained by hard decision on the coordinate values of ξ, has its syndrome computed to find out whether decoding has been successful. If not, and if the maximum number of cycles of the HD algorithm, denoted IHD, has not been reached, then ξ is used as input to the min-sum BP decoder and a new HD cycle is initiated.
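Rule 2 can likewise be sketched: keep the previous QLLR where a position stayed erased or agrees with the previous hard decision, and flip its sign where the erasure decoder changed the bit. This is illustrative code under our notation, not the paper's implementation:

```python
ERASURE = "Δ"

def rule2(w, reliab, hard_prev):
    """Re-associate reliabilities after erasure decoding.
    w:         output of the erasure decoder (0/1/Δ per coordinate)
    reliab:    QLLRs from the previous BP decoding
    hard_prev: hard decisions from the previous BP decoding"""
    return [r if (wi == ERASURE or wi == hp) else -r
            for wi, hp, r in zip(w, hard_prev, reliab)]

# The erasure at position 1 was corrected to 0, contradicting the previous
# hard decision 1, so that QLLR flips sign; position 3 stayed erased.
xi = rule2([0, 0, 0, "Δ", 1, 1],
           [4.5, -0.2, 3.1, 0.7, -5.0, -1.2],
           [0, 1, 0, 0, 1, 1])
print(xi)  # → [4.5, 0.2, 3.1, 0.7, -5.0, -1.2]
```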
The choice of a value for E
As already mentioned, the parameter E represents the number of bits to be erased at the end of an unsuccessful BP decoding, using the criterion of erasing those digits with the lowest QLLR absolute values. An adequate determination of E is essential for a satisfactory overall performance since, in the iterative erasure decoding process, as the number of introduced erasures increases, it becomes less and less likely to find parity-check equations containing a single erased bit, which are the only ones that can be solved. A set of erased bits that cannot be decoded with the code parity-check equations is called a stopping set.
Definition 1 (Reference [15]).
A stopping set S in a linear block code is a subset of the set of variable nodes (or bit nodes) of the code Tanner graph such that every neighbor of S (i.e., every parity-check node connected to S) is connected to S at least twice.
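Definition 1 can be checked mechanically: a set of bit positions is a stopping set exactly when no parity check touches it exactly once. A small illustrative checker (toy matrix, our notation):

```python
def is_stopping_set(H, S):
    """True if every parity check (row of H) connected to the bit set S
    is connected to S at least twice, i.e., no check touches S exactly once."""
    return all(sum(row[i] for i in S) != 1 for row in H)

# Toy (6,3) parity-check matrix, for illustration only
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

print(is_stopping_set(H, {0, 1, 2}))  # → True: every neighbor touches it twice
print(is_stopping_set(H, {0, 1}))     # → False: the second check touches it once
```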
Whenever there is a decoding failure in the first decoding stage, the second decoding stage is employed to process the output of an artificially created BEC. The input to the BEC is the binary sequence produced by the min-sum BP decoder at the end of the IBPth iteration. Our motivation for employing an iterative erasure decoding stage is the following. For a given block code with minimum Hamming distance d, it is well known that any erasure pattern containing up to d−1 erasures can be corrected ([14], p. 81), as well as a large fraction of patterns containing d or more erasures, as long as the number of erasures in a codeword does not exceed the number of parity-check digits and the erasures do not cover a codeword [16]. As shown later, typical useful values of E are much larger than d.
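The second-stage erasure decoder referred to above is, in essence, the classical iterative "peeling" procedure: repeatedly find a parity check containing exactly one erasure and solve it. A minimal sketch under our notation (toy matrix, not the paper's code):

```python
ERASURE = "Δ"

def peel(z, H):
    """Iterative erasure decoding: while some check (row of H) contains exactly
    one erased bit, recover that bit as the XOR of the known bits in the check.
    Stops when no such check exists (e.g., the erasures contain a stopping set)."""
    z = list(z)
    progress = True
    while progress:
        progress = False
        for row in H:
            gap = [i for i, h in enumerate(row) if h and z[i] == ERASURE]
            if len(gap) == 1:  # exactly one erasure: this check equation solves it
                z[gap[0]] = sum(z[i] for i, h in enumerate(row)
                                if h and z[i] != ERASURE) % 2
                progress = True
    return z

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

print(peel([1, 0, 0, "Δ", "Δ", 1], H))    # → [1, 0, 0, 1, 0, 1]: both recovered
print(peel(["Δ", "Δ", "Δ", 1, 0, 1], H))  # {0,1,2} is a stopping set: unchanged
```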
Definition 2 (Reference [17]).
The stopping distance of a linear block code, denoted s(H), is defined as the cardinality of a smallest stopping set among all stopping sets associated with the parity-check matrix H.
The number of distinct stopping sets of a linear code determines the code performance under iterative erasure decoding [15]. The performance of a code with blocklength N, for a specific parity-check matrix H, as a function of the BEC erasure probability ε, is measured by the probability P_H(ε), i.e., the word erasure rate, expressed as ([3], p. 313)

P_H(ε) = ∑_{δ=1}^{N} (N choose δ) ε^δ (1 − ε)^{N−δ} S(δ)/T(δ),
where T(δ) = (N choose δ) denotes the total number of distinct combinations of δ erased positions and S(δ) denotes the number of such combinations that constitute a stopping set. Thus, the ratio S(δ)/T(δ) can be interpreted as the probability that a given set of δ erased digits is a stopping set. All erasure patterns containing fewer erasures than the stopping distance s(H) can be corrected by the iterative erasure decoder. Apparently, a good choice for E should satisfy E < s(H), in which case all introduced erasure patterns would be corrected. However, when s(H) is small, such a choice of E may be insufficient for the set of erased positions to contain the majority of the errors present in the N-tuple produced by the BP decoder.
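The word erasure rate expression can be evaluated directly once the counts S(δ) are known. A sketch in Python (the dictionary of stopping-set counts below is hypothetical, purely for illustration; obtaining real counts for a given H is itself a hard problem):

```python
from math import comb

def word_erasure_rate(N, S_counts, eps):
    """P_H(eps) = sum over δ of C(N,δ) eps^δ (1-eps)^(N-δ) * S(δ)/T(δ),
    with T(δ) = C(N, δ); S_counts maps δ to S(δ) (0 when absent)."""
    return sum(comb(N, d) * eps**d * (1 - eps)**(N - d)
               * (S_counts.get(d, 0) / comb(N, d))
               for d in range(1, N + 1))

# Hypothetical toy example: N = 10 and a single bad pattern of size 2,
# so P_H(eps) reduces to eps^2 (1-eps)^8
p = word_erasure_rate(10, {2: 1}, eps=0.1)
print(p)  # ≈ 0.1**2 * 0.9**8
```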
We define an N-tuple containing errors produced when BP decoding fails as detected if all erroneous positions are contained in the set of E erased positions.
We define an N-tuple containing errors produced when BP decoding fails as corrected if it is detected and the corresponding binary word w produced by the iterative erasure decoder belongs to the code, i.e., wHT = 0.
For the two LDPC codes of the IEEE 802.11n standard used as examples in this article, namely the (1296,648) and (1296,864) codes, the best value for E was determined by computer simulation of the HD algorithm, trying out various values of E and computing the percentage of detected error patterns, the percentage of corrected erasures, and the percentage of errors corrected in the received N-tuple. The best value of E for the (1296,648) LDPC code can be viewed in Figure 3. For the (1296,864) LDPC code, the best value of E is equal to 100, as indicated in Figure 4. By observing Figures 3 and 4, we notice that increasing E produces a corresponding increase in the percentage of detected error patterns (blue bars); however, this is accompanied by a decrease in the percentage of corrected erasures (green bars), due primarily to the presence of stopping sets. We note that the (1296,648) code has minimum distance d = 23, while the minimum distance of the (1296,864) code, although not known exactly, is estimated to be at most 20; i.e., both minimum distances are much smaller than the corresponding values of E. Other SNR values were tested to analyze the behavior of the two codes considered here, and results similar to those shown in Figures 3 and 4 were obtained.
Error event analysis
In the HD scheme, ideally, after the last iteration of an unsuccessful min-sum BP decoding, all remaining bit errors should be included among the erased positions in the N-tuple z. This is more likely to occur either when the value of E is large or when the cardinality of the remaining bit error pattern is small. Opting for a large E is often not a good approach because it can give rise to stopping sets ([3], p. 244), thus hampering the success of the iterative erasure decoder, as we can observe from the behavior of the green bars in Figures 3 and 4. On the other hand, the number of bit errors for a given frame error can vary significantly along the intermediate iterations [4]; i.e., the cardinality of the remaining bit error pattern can vary significantly. This variation in the number of erroneous bits at the output of a BP decoder after an unsuccessful decoding motivated us to develop an error event analysis in order to fine-tune the HD system, more specifically to select an appropriate value for IBP. In [7], an arbitrary value of IBP was initially employed for the first stage of the HD system, the min-sum BP decoder. In this manner, there was no clue to the number of remaining errors when IBP iterations were performed and decoding was not successful. When that happened, it was likely that the BP decoder had reached a trapping set ([3], p. 651). Once a trapping set has been reached, it is not possible, by performing more iterations with the BP decoder, to get out of it and successfully decode the received word. According to ([3], p. 652), the variation in the cardinality of the remaining bit error patterns is due to the behavior of the trapping sets that dominate the error floor.
Trapping sets can be classified into one of three classes according to their behavior ([3], p. 652): (1) stable (also called fixed-point) trapping sets, (2) periodically oscillating trapping sets, and (3) aperiodically oscillating trapping sets. The relevance of the variation in the cardinality of the remaining bit error patterns depends on the class of trapping sets that dominates the error-floor region. In general, there is no known theoretical way to establish the trapping sets of a given code. Thus, finding trapping sets is in general a difficult task because it requires intensive computer simulations at very low error rates, which often take months to run ([3], p. 651). As an alternative to the usual computer simulation, we develop an error event analysis, which has a much lower computational cost, and apply it to the two LDPC codes of the IEEE 802.11n standard used as examples in this article. The purpose of the error event analysis is to find the average behavior of min-sum BP decoding, with respect to the variation in the number of errors after each iteration, in case of a decoding failure. If the cardinality of the remaining bit error patterns varies significantly, we expect to find values of the number of iterations for which the number of erroneous bits is minimal and can thus choose one of these values for IBP. As a result, the probability that the erroneous bits are included among the erased positions in the N-tuple z is higher.
In the sequel, we present results of computer simulations performed for two LDPC codes used in the IEEE 802.11n standard, namely the (1296,648) and (1296,864) codes. Our purpose is to find the average behavior of min-sum BP decoding with respect to the variation in the number of errors remaining after each iteration in case of a decoding failure. The number of bit errors at each intermediate iteration of the min-sum BP decoder is averaged over a number of decoding failures. The procedure employed to generate the curves in Figure 5 works as follows.
Let β_i be the number of errors in the estimated N-tuple at the i th iteration of the min-sum BP decoding algorithm, where i ≤ IBP. In case of a min-sum BP decoding failure, let x_m = (x_{m,1}, x_{m,2}, …, x_{m,IBP}) denote the m th sample sequence, whose i th coordinate x_{m,i} = β_i is the number of errors at the corresponding BP iteration. The coordinates of sequence x_m are then normalized by their largest value, giving rise to the normalized sequence x̄_m with coordinates x̄_{m,i} = x_{m,i} / max_j x_{m,j}. Averaging over all M normalized sequences gives rise to the sequence γ = (γ_1, γ_2, …, γ_IBP), whose i th coordinate (corresponding to the i th iteration) is expressed by

γ_i = (1/M) ∑_{m=1}^{M} x̄_{m,i},
where M denotes the maximum number of samples considered in the analysis. Thus, the sequence γ allows us to track the average behavior of the min-sum BP decoding algorithm with respect to the variation in the number of errors at each iteration. The results shown in Figure 5 are for the two codes of interest, where the horizontal axis represents the number of decoding iterations and the vertical axis represents the average normalized number of bit errors at each iteration of the min-sum BP algorithm, γ_i, 1 ≤ i ≤ IBP = 100. In Figure 5, the curve for the (1296,648) code is plotted for SNR = 2.4 dB and M = 525 samples, and the curve for the (1296,864) code is plotted for SNR = 3.2 dB and M = 510 samples. Since trapping sets depend not only on the graphical structure of the code but also on the channel ([3], p. 653), it is noteworthy that other SNR values were tested to analyze the behavior of the two codes in the situation described, and results similar to those shown in Figure 5 were obtained. In agreement with [4], for both LDPC codes considered, the number of remaining errors after a BP decoding failure exhibits an oscillating behavior, as illustrated in Figure 5. The resulting number of bit errors may reach higher values (local maxima), referred to as peak points, or lower values (local minima), referred to as valley points.
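The averaging procedure that produces γ can be sketched as follows. The two failure traces are hypothetical, and the trace length plays the role of IBP:

```python
def avg_normalized_errors(samples):
    """samples[m][i]: number of bit errors after iteration i+1 of the m-th
    unsuccessful BP decoding. Each trace is normalized by its own maximum,
    then the traces are averaged coordinate-wise to give the sequence γ."""
    M = len(samples)
    normalized = [[b / max(trace) for b in trace] for trace in samples]
    return [sum(tr[i] for tr in normalized) / M
            for i in range(len(samples[0]))]

# Two hypothetical error traces over five iterations of failed decodings
gamma = avg_normalized_errors([[10, 4, 2, 6, 10],
                               [8, 4, 4, 8, 8]])
print(gamma)  # valley (local minimum) at the third iteration
```

Choosing IBP at such a valley is exactly the fine-tuning discussed in the next section.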
Although the error event analysis performed for the codes examined may not be enough to draw precise conclusions about the class of trapping sets that dominates the error-floor region, it is enough to allow a fine-tuning of the HD system, as we show in the next section.
Efficient use of the hybrid decoder
Figure 6 shows performance curves for the (1296,648) LDPC code. First, we consider decoding the (1296,648) LDPC code using IBP = 12 iterations for the min-sum BP decoder (first stage) of the HD system. The value IBP = 12 is very close to a valley point in the curve shown in Figure 5, i.e., a point for which low-cardinality bit error patterns in the estimated N-tuple are expected. It is noticed in Figure 6 that, to achieve the same performance in terms of WER, the HD system with IHD = 1 (1 cycle) requires an SNR around 0.3 dB lower than that required by a conventional min-sum BP decoder with IBP = 12 iterations. Similarly, for IHD = 2 (2 cycles), the HD system requires an SNR around 0.6 dB lower than that required by a conventional min-sum BP decoder with IBP = 12 iterations.
Furthermore, for the range of SNR values in Figure 6, the WER for the HD system with 1 cycle is lower than that for a conventional min-sum BP decoder with IBP=12 iterations by approximately 1 order of magnitude. For 2 cycles, the resulting WER for the HD system is lower by 2 orders of magnitude than that for a conventional min-sum BP decoder with IBP=12 iterations.
Figure 6 also illustrates the results obtained for the (1296,648) LDPC code for IBP=25 iterations. We observe in Figure 5 that IBP=25 iterations is close to a peak point and large cardinality error patterns are expected. For IBP=25, an analysis similar to that from the previous case shows that the HD system with 1 cycle requires an SNR around 0.1 dB lower than that of a min-sum BP decoder for IBP=25 iterations in order to achieve the same performance in terms of WER. For a 2-cycle HD, a reduction of 0.2 dB results, again in comparison with a min-sum BP decoder for IBP=25 iterations. With respect to the values of WER, for IBP=25, we observe that the difference between the values of WER for the HD system with 2 cycles and for a min-sum BP decoder with IBP=25 iterations is smaller than 1 order of magnitude. In summary, in terms of WER, no considerable gain results if the value chosen for IBP is close to a peak point in Figure 5. On the other hand, we obtain a more efficient use of the HD system for the (1296,648) LDPC code by setting the value of IBP close to a valley point in Figure 5.
A procedure similar to that described for the (1296,648) code was also performed for decoding the (1296,864) code using the HD system, and the corresponding results are shown in Figure 7. As expected, for IBP=10 iterations, which is very close to a valley point in Figure 5, the use of the HD system produces better results than those for IBP=16 iterations, which is a peak point in Figure 5, according to the analysis described earlier for the (1296,648) code.
Performance curves in Figures 6 and 7 for the min-sum BP algorithm with 50 iterations serve as a reference. By setting IBP close to a valley point, the HD system needs a reduced overall number of iterations to achieve performance similar to the min-sum BP decoding scheme, at equivalent implementation complexity.
The oscillating behavior of the error rate versus the number of iterations, featured by some LDPC codes of the IEEE 802.11n standard, was exploited in order to take advantage of the so-called valley points in the error event analysis. The goal achieved here was an enhancement of the performance of the bi-mode hybrid decoder from [7]. Simulation results show that a more efficient use of the HD system is obtained when the min-sum BP decoder employs a maximum number of iterations close to, and possibly including, a valley point, while avoiding values close to, and possibly including, a peak point. In particular, the error event analysis for the (1296,648) code in Figure 5 indicates that, whenever there is a decoding failure, the number of remaining errors after 12 iterations is much smaller than after 25 iterations. However, in general, a BP decoder employing a maximum of 25 iterations will on average fail fewer times than one employing a maximum of 12 iterations. Similar reasoning applies to other LDPC codes with similar parameters and block lengths; in particular, it applies to the (1296,864) code considered here.
Gallager RG: Low-density parity-check codes. Ph.D. thesis, MIT Press, Cambridge 1963.
Mackay DJC: Good error-correcting codes based on very sparse matrices. IEEE Trans. Inform. Theory 1999, 45(2):399-431. 10.1109/18.748992
Ryan WE, Lin S: Channel Codes: Classical and Modern. Cambridge: Cambridge University Press; 2009.
Gounai S, Ohtsuki T, Kaneko T: Modified belief propagation decoding algorithm for low-density parity check code based on oscillation. In Proceedings of the IEEE 63rd Vehicular Technology Conference VTC 2006 Spring, Melbourne, 7–10 May 2006. Piscataway: IEEE; 2006:1467-1471.
Alghonaim E, Landolsi MA, El-Maleh A: Improving BER performance of LDPC codes based on intermediate decoding results. In Proceedings of the IEEE International Conference on Signal Processing and Communications ICSPC 2007, Dubai, 24–27 Nov 2007. Piscataway: IEEE; 2007:1547-1550.
Han Y, Ryan WE: Low-floor decoders for LDPC codes. IEEE Trans. Commun. 2009, 57(6):1663-1673.
Guimarães WPS, Lemos-Neto JS, da Rocha Jr VC: A hybrid iterative decoder for LDPC codes. In Proceedings of the 9th International Symposium on Wireless Communication Systems ISWCS 2012, Paris, 28–31 Aug 2012. Piscataway: IEEE; 2012:979-983.
IEEE Standards Association: IEEE 802.11n-D1.0 - IEEE 802.11n Wireless LAN Medium Access Control MAC and Physical Layer PHY Specifications. Piscataway: IEEE; 2006.
Levin D, Sharon E, Litsyn S: Lazy scheduling for LDPC decoding. IEEE Commun. Lett. 2007, 11(1):70-72.
Zhang J, Fossorier M, Du G, Zhang J: Two-dimensional correction for min-sum decoding of irregular LDPC codes. IEEE Commun. Lett 2006, 10: 180-182. 10.1109/LCOMM.2006.1603377
Zhang J, Fossorier M: Shuffled belief propagation decoding. In Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, 3–6 Nov 2002. Piscataway: IEEE; 2002:8-15.
Fettweis G, Zimmermann E, Rave W: Forced convergence decoding of LDPC codes: EXIT chart analysis and combination with node complexity reduction techniques. In Proceedings of the 11th European Wireless Conference, Nicosia, 10–13 Apr 2005. Piscataway: IEEE; 2005:1-8.
Chen J, Dholakia A, Eleftheriou E, Fossorier MPC, Hu XY: Reduced-complexity decoding of LDPC codes. IEEE Trans. Commun 2005, 53(8):1288-1299. 10.1109/TCOMM.2005.852852
Lin S, Costello DJ: Error Control Coding. Englewood Cliffs: Prentice Hall; 2004.
Di C, Proietti D, Telatar I, Richardson T, Urbanke R: Finite-length analysis of low-density parity-check codes on the binary erasure channel. IEEE Trans. Inform. Theory 2002, 48(6):1570-1579. 10.1109/TIT.2002.1003839
Freitas PR, da Rocha V, Lemos-Neto JS: On the iterative decoding of binary product codes over the binary erasure channel. In 2011 8th International Symposium on Wireless Communication Systems (ISWCS), Aachen, 6–9 Nov 2011. Piscataway: IEEE; 2011:126-130.
Schwartz M, Vardy A: On the stopping distance and the stopping redundancy of codes. IEEE Trans. Inform. Theory 2006, 52(3):922-932.
Johnson S: Iterative Error Correction Turbo, Low-Density Parity-Check and Repeat-Accumulate Codes. Cambridge: Cambridge University Press; 2010.
This work received partial support from the Brazilian National Council for Scientific and Technological Development - CNPq, Project 304696/2010-2, and from the Pernambuco State Foundation to Support Science and Technology - FACEPE, Project 0288-0.34/10.
The authors declare that they have no competing interests.