Adaptive deactivation and zero-forcing scheme for low-complexity LDPC decoders
 Taehyun Kim^{1},
 Jonghyun Baik^{2},
 Myeongwoo Lee^{3} and
 Jun Heo^{1}
https://doi.org/10.1186/s13638-017-0934-z
© The Author(s) 2017
Received: 20 March 2017
Accepted: 22 August 2017
Published: 11 September 2017
Abstract
A modified message propagation algorithm is proposed for a low-complexity decoder of low-density parity-check (LDPC) codes, which controls the information propagated from variable and check nodes. The proposed threshold-based node deactivation for variable nodes and zero-forcing scheme for check nodes remarkably reduce the decoding complexity required for similar error performance. In the proposed scheme, different thresholds, determined from the base matrix of the LDPC code, are applied to each type of variable node. In addition, the thresholds for deactivating variable nodes are increased as decoding proceeds, reducing the decoding complexity without the early error floor that is a drawback of the conventional threshold-based deactivation scheme. Simulation results show that the proposed scheme enables normalized min-sum decoders to decode successfully with less complexity than the conventional threshold-based deactivation scheme.
Keywords
Low-density parity-check codes; Protograph-based extrinsic transfer chart; Normalized min-sum algorithm; Complexity reduction
1 Introduction
Low-density parity-check (LDPC) codes were first introduced by Gallager [1] in 1962 and rediscovered by Mackay [2] in 1999. LDPC codes have competed with turbo codes, proposed in 1993 [3], for error control in many applications. From the viewpoint of error performance, LDPC codes behave better at high code rates, while turbo codes are better at lower code rates. Complexity, as well as error performance, is one of the factors that determine which error correction code to use. Therefore, reducing the complexity of the decoding algorithm makes LDPC codes more applicable, especially to energy-limited applications such as the Internet of Things (IoT) and deep space communication. Since this paper focuses on LDPC codes, only schemes for LDPC codes are briefly introduced in this section, although there has also been extensive research on the practical usage of turbo codes [4–7]. The complexity of the LDPC decoding process is determined by three major factors: the computational complexity, the number of iterations, and the number of activated nodes.
Although the original iterative decoding algorithm for LDPC codes, known as the sum-product algorithm [2], shows good error correction performance, its computational complexity is quite high. The sum-product algorithm can be simplified using a mathematical approximation, yielding the min-sum algorithm [8]. The performance degraded by the approximation can be improved by simply scaling messages [9, 10]. Separately, Savin proposed a self-correction (SC) method for improving the performance of the min-sum algorithm [11, 12]. It checks the sign changes of the messages from variable nodes between two consecutive iterations to identify unreliable messages. Erasing unreliable messages yields better performance compared to other modified min-sum algorithms [9], especially in the error floor region [13]. As an additional effect, the SC method improves power efficiency owing to a reduction in switching activity [14, 15]. The cost of the performance improvement is the additional memory required to store the signs of passing messages. This overhead can be relieved by changing the rules of the conventional SC method [16]; this variant has memory requirements similar to those of the conventional min-sum algorithm.
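The SC erasure rule described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the function name and the per-edge list layout of the V2C messages are our assumptions.

```python
# Sketch of the self-correction (SC) rule from [11, 12]: a variable-to-check
# (V2C) message whose sign flips between two consecutive iterations is
# treated as unreliable and erased (set to 0). Names and data layout are
# assumptions for illustration only.

def sc_filter_v2c(v2c_new, v2c_prev):
    """Erase V2C messages whose sign changed since the last iteration."""
    filtered = []
    for new, prev in zip(v2c_new, v2c_prev):
        if prev != 0 and new * prev < 0:   # sign flip -> unreliable
            filtered.append(0.0)           # erase the message
        else:
            filtered.append(new)
    return filtered

# Example: the second message flips sign, so it is erased.
print(sc_filter_v2c([1.2, -0.4, 2.0], [0.9, 0.3, 1.5]))  # [1.2, 0.0, 2.0]
```

Only the signs of the previous messages are actually needed for the comparison, which is why the memory overhead of the SC method is one sign bit per edge.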
The decoding complexity is proportional to the number of iterations. In order to reduce the number of iterations, several serial scheduling methods have been proposed. Unlike classical decoding, in serial scheduling the iteratively generated information is updated following a certain sequence. Check-node-wise and variable-node-wise schedules are known as layered decoding [17] and shuffled decoding [18], respectively. The required number of iterations can be reduced to half that of parallel decoding while the performance is maintained. A stopping criterion can also reduce the average number of iterations. Recently, a stopping criterion for reducing not only the average number of iterations but also the number of calculations per iteration was proposed [19].
In addition, the decoding complexity is closely related to the number of activated nodes. An activated node is a node that calculates messages to increase reliability. In contrast, a node that does not calculate any information is called a deactivated node. The forced convergence (FC) method [20] reduces the number of activated nodes per iteration. During the decoding procedure, the reliability of some variable node messages becomes sufficiently high within a few iterations. The update of those reliable variable nodes can be skipped in the remaining iterations [20]. Similarly, check nodes can be deactivated when they satisfy certain inequality conditions [21]. An important issue for achieving good error correction performance is the proper selection of the threshold value. Sarajlc et al. obtained optimized threshold values by solving an optimization problem [22, 23]. In addition, simpler versions of the FC method exist [24–26] for operational efficiency.

The contributions of this paper are summarized as follows:

- Thresholds for the proposed scheme are determined using an information-theoretic analysis based on the base matrix of the LDPC code.
- For the variable nodes, a threshold-based deactivation process with dynamically increasing thresholds is proposed.
- For the check nodes, the total amount of check node operations is reduced by generating zero-forced log-likelihood ratios.
Different thresholds are applied to each variable node according to its connectivity. In addition, prematurely deactivated variable nodes are reactivated by dynamically increasing the threshold values. The threshold-increase condition is checked with a simple estimator based on logic circuits. For the check nodes, the SC method is modified to make it useful for reducing check node calculations.
The rest of this paper is organized as follows: Section 2 reviews the classical decoding algorithm and the conventional techniques that motivated our proposed scheme. Section 3 describes the proposed scheme. Simulation results are discussed in Section 4, and conclusions are provided in Section 5.
2 Background
2.1 Decoding algorithms for LDPC codes
The drawback of the MSA is degraded performance compared with the SPA. For performance improvement, the normalized MSA was used in this study. The performance of the MSA can be improved by simply scaling the absolute value of the C2V message. Although the optimum scaling factor is a real value [10], real-valued multiplication is avoided in implementations because of its high complexity. Therefore, for the sake of simplicity and practical usefulness, we used a scaling factor of 0.75, which is close to the optimized value and can be implemented with a few shift registers.
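The shift-register implementation of the 0.75 scaling can be sketched as follows, assuming integer (fixed-point) LLR magnitudes; the function name is ours:

```python
# Sketch: scaling an integer LLR magnitude by 0.75 without a multiplier,
# as used in the normalized min-sum algorithm. In hardware this is two
# right shifts and an adder: 0.75 * x = x/2 + x/4 = (x >> 1) + (x >> 2).

def scale_075(x):
    """Approximate 0.75 * x using shift-and-add (integer arithmetic)."""
    return (x >> 1) + (x >> 2)

print(scale_075(16))  # 12
print(scale_075(20))  # 15
```

Note that for magnitudes not divisible by 4 the shifts truncate, e.g. `scale_075(7)` gives 4 rather than 5.25; with fixed-point messages this rounding error is negligible compared to the quantization already present.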
2.2 Forced convergence
2.3 Self-correction for the min-sum algorithm
3 Adaptive deactivation and zero-forcing scheme
In this section, calculation reduction techniques for variable and check nodes are proposed. We were motivated by the FC and SC schemes explained in Section 2. The proposed node deactivation process for variable nodes is conceptually similar to the conventional FC scheme in that it reduces the operational complexity of variable nodes with high reliability. The conventional FC scheme uses an equal threshold value for all variable nodes. In contrast, we assign a different threshold value to each variable node. Because the increment of reliability per iteration during the decoding process differs according to the connectivity, using varied thresholds is more efficient for complexity reduction. In addition, variable rather than fixed thresholds are used to prevent the early error floor, which is a drawback of the FC scheme.
As seen from the SC algorithm, the check node operation can be simplified according to the number of erased V2C messages. However, the main aim of the SC method is an improvement in error correction performance; the number of V2C messages erased by the conventional SC method is not large enough to obtain a sufficient benefit in terms of complexity. Therefore, we propose a threshold-based zero-forcing scheme to further reduce the check node complexity.
All thresholds used for the proposed scheme should be properly determined to deactivate as many nodes as possible without causing an early error floor. We propose a threshold decision method using an information-theoretic analysis tool.
3.1 Adaptive node deactivation for reliable information
The parameter t_{v}(j) is the threshold for type-j variable nodes. As mentioned previously, the thresholds for each type of variable node are increased during the decoding process. The reason for using a variable threshold instead of a fixed one is the tradeoff between error performance and complexity reduction. Although a substantial complexity reduction can be obtained when the threshold is small, an early error floor occurs, because the variable nodes hastily deactivated by the small threshold value degrade the error performance. A large threshold value brings the opposite results.
In order to reactivate hastily deactivated variable nodes, the threshold in the proposed scheme is increased when the rate of variable node deactivation exceeds a certain criterion. The increment of the threshold value reactivates those deactivated variable nodes that no longer satisfy condition (7). As a result, the threshold values from the early phase to the last phase complement each other, yielding a substantial complexity reduction with relatively little performance loss compared to a fixed threshold value.
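The deactivate-then-raise behavior can be sketched conceptually as below. This is a simplified model, not the paper's algorithm: the per-type thresholds, step size, and saturation criterion are illustrative values rather than those derived from the PEXIT analysis, and the logic-circuit NAE is replaced by a direct ratio check.

```python
# Conceptual sketch of adaptive deactivation: a variable node is deactivated
# once its APP LLR magnitude reaches the threshold for its type, and all
# thresholds are raised by a step when the fraction of deactivated nodes
# exceeds a criterion, which reactivates borderline (hastily deactivated)
# nodes. All parameter values here are illustrative assumptions.

def update_deactivation(app_llrs, node_types, thresholds,
                        criterion=0.8, step=1.0):
    """Return per-node deactivation flags; raise thresholds if saturated."""
    deactivated = [abs(llr) >= thresholds[t]
                   for llr, t in zip(app_llrs, node_types)]
    if sum(deactivated) / len(deactivated) > criterion:
        thresholds = {t: th + step for t, th in thresholds.items()}
        # Re-check with the raised thresholds: nodes that no longer
        # satisfy the condition become active again.
        deactivated = [abs(llr) >= thresholds[t]
                       for llr, t in zip(app_llrs, node_types)]
    return deactivated, thresholds

flags, th = update_deactivation([5.0, 9.0, 2.0, 8.0], [0, 0, 1, 1],
                                {0: 4.0, 1: 6.0}, criterion=0.5, step=3.0)
print(flags)  # [False, True, False, False]: three nodes were reactivated
```

In the example, three of four nodes initially exceed their thresholds (75% > 50% criterion), so the thresholds rise to 7.0 and 9.0 and only the most reliable node stays deactivated.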
The initial thresholds t_{v,ini} are determined using a protograph-based extrinsic information transfer (PEXIT) chart [27], an analysis tool for protograph-based and multi-edge-type LDPC codes. In contrast to the extrinsic information transfer (EXIT) chart [28], which is based on the degree distribution of LDPC codes, the PEXIT chart uses the base matrix. The PEXIT chart provides not only the threshold signal-to-noise ratio (SNR) of an LDPC code but also the growth of reliability for each type of node in the iterative decoding process.
Since the mutual information of the variable nodes increases at different rates, it is more reasonable to apply a different threshold value to each type of variable node rather than an equal threshold value as in conventional FC schemes.
3.2 Zero-forcing scheme for unreliable information
In the check node operation of the normalized MSA, the minimum and second minimum values among the V2C messages from the connected variable nodes must be found. This search can be simplified when some V2C messages are equal to 0. For example, let a check node m receive at least two zero V2C messages. Then the check node does not have to calculate any C2V messages, because all of the check node outputs are zero in this case. Consequently, zero V2C messages enable check nodes to execute fewer operations. However, V2C messages are generally nonzero in the conventional decoding algorithm. In order to generate zero V2C messages, we propose a threshold-based zero-forcing scheme, in which V2C messages regarded as unreliable information are forced to be zero.
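The short-circuit argument above can be made concrete with a small sketch of a normalized min-sum check node update. The 0.75 scaling factor follows the paper; the function layout and names are our assumptions, and a real decoder would track only the minimum, second minimum, and sign product rather than recomputing per edge.

```python
# Sketch of a normalized min-sum check node update exploiting zero V2C
# inputs: with at least two zeros among the inputs, every C2V output
# magnitude is zero (the min over the other edges always includes a zero),
# so the min / second-min search can be skipped entirely.

def check_node_update(v2c, alpha=0.75):
    """Return C2V messages for one check node given its V2C inputs."""
    if sum(1 for m in v2c if m == 0) >= 2:
        return [0.0] * len(v2c)          # short-circuit: all outputs zero
    sign = 1
    for m in v2c:
        if m < 0:
            sign = -sign                 # product of all input signs
    out = []
    for i, mi in enumerate(v2c):
        others = [abs(m) for j, m in enumerate(v2c) if j != i]
        s = sign if mi >= 0 else -sign   # product of the *other* signs
        out.append(s * alpha * min(others))
    return out

print(check_node_update([0.0, 0.0, 3.0]))  # [0.0, 0.0, 0.0]
```

With exactly one zero input, only the edge carrying the zero receives a nonzero output, so even a single zero-forced message removes most of the node's work.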
For LDPC codes of length N and rate \(R=\frac{N-M}{N}\), the proposed simultaneous node deactivation and zero-forcing scheme for highly reliable and unreliable information is described in Algorithm 1.
4 Simulation results
In order to examine the performance of the proposed algorithm, simulations were performed over an additive white Gaussian noise (AWGN) channel with binary phase shift keying (BPSK) modulation. LDPC codes specified in the IEEE 802.11ad standard [29] of length N=672 and rates R=1/2 and 3/4 were used to evaluate performance. In addition, the simulation results for LDPC codes specified in the IEEE 802.22 standard [30] of length N=384 and the IEEE 802.11n standard [31] of length N=1944 are presented in the table at the end of this section. We performed all simulations using 7-bit fixed-point precision, with 4 bits for the integer part and 3 bits for the fractional part. The maximum number of iterations for all simulations was 20.
The parameter \(v_{i}^{itr}\) indicates whether the i-th variable node is activated (=0) or deactivated (=1) at the itr-th iteration. Because the effect of a deactivated high-degree variable node on the computational complexity is greater than that of a deactivated low-degree variable node, we use the degree of each variable node as a weight. When decoding stops before the maximum number of iterations is reached, all \(v_{i}^{itr}\) values for the remaining iterations are set to 1. In this way, a reasonable complexity measure that accounts for early stopping is obtained. For the proposed scheme and the FC scheme, the complexity decreases as the SNR increases, because each variable node is likely to have a higher APP LLR at a higher SNR. As shown in Fig. 13, the variable node complexity of the proposed scheme applied to the WiGig LDPC code of rate 1/2 is only 0.26, almost half that of the FC scheme (0.53). When the threshold value for the FC scheme is smaller than 6, the complexity of the FC scheme is lower than that of the proposed scheme. However, as shown in Fig. 9, the FC scheme with a low threshold value is of little use owing to the loss in error performance. For the proposed scheme, using an NAE with s=3 yields lower complexity than one with s=4. The reason is that the probability of detecting a 1 becomes higher when the NAE has more stages, as shown in Fig. 6. Although more stages in the NAE yield less complexity reduction than fewer stages, they are advantageous with respect to the error floor. Similarly, the number of increment trials also influences the error performance and decoding complexity. A large number of trials brings low decoding complexity but an early error floor. Conversely, a smaller complexity gain with less performance loss is obtained when a smaller number of increment trials is used.
Consequently, the number of NAE stages and increment trials should be selected carefully in accordance with the requirements of the communication system, owing to the tradeoff between complexity and error performance.
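The degree-weighted complexity metric described above can be sketched as follows; the function and variable names are ours, not the paper's notation, and the flags are expressed as activation indicators (1 = active) rather than the paper's deactivation indicators.

```python
# Sketch of the degree-weighted variable node complexity metric: each
# active node contributes its degree per iteration, and iterations after
# early stopping contribute nothing (all nodes count as deactivated).
# The result is normalized by the cost of always-on decoding.

def vn_complexity(active_flags, degrees, max_iter):
    """active_flags[itr][i] = 1 if node i computed at iteration itr.

    Returns the fraction of degree-weighted activations relative to
    fully active decoding over max_iter iterations.
    """
    total_weight = sum(degrees) * max_iter
    used = 0
    for itr_flags in active_flags:
        used += sum(d for d, a in zip(degrees, itr_flags) if a)
    # Iterations missing from active_flags (early stopping) add nothing.
    return used / total_weight

# Two nodes of degree 3 and 2; the degree-2 node is off in iteration 2.
print(vn_complexity([[1, 1], [1, 0]], [3, 2], max_iter=2))  # 0.8
```

A value of 0.26, as reported for the rate-1/2 WiGig code, thus means that on average only about a quarter of the degree-weighted variable node work of a fully active decoder is performed.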
Simulation results of the FC scheme and the proposed ADZF scheme for the 802.22 and 802.11n LDPC codes
Code                          FER      Scheme           Avg. iterations  VN complexity  CN complexity
802.22 LDPC                   10^{−2}  FC (t=10)        6.637            0.631          0.894
(N=384, R=1/2)                         ADZF (proposed)  7.050            0.373          0.707
                              10^{−3}  FC (t=10)        5.186            0.586          0.861
                                       ADZF (proposed)  5.408            0.320          0.634
802.11n LDPC                  10^{−2}  FC (t=12)        9.091            0.531          0.854
(N=1944, R=1/2)                        ADZF (proposed)  9.390            0.225          0.507
                              10^{−3}  FC (t=12)        7.207            0.508          0.826
                                       ADZF (proposed)  7.377            0.202          0.458
Finally, a merit of the proposed ADZF scheme is its compatibility with various existing LDPC-based schemes, such as [32–34]: their decoding complexity can be reduced by simply applying the proposed scheme to the LDPC decoder.
5 Conclusions
A technique for reducing node operations, called the ADZF scheme, is proposed for a low-complexity decoder of LDPC codes. Variable nodes with high reliability are deactivated to omit their calculations. A simple NAE constructed with logic circuits is introduced to increase the threshold value when the deactivated variable nodes are saturated. Hastily deactivated variable nodes are automatically reactivated by the increased threshold value. Messages with low reliability propagated from variable nodes are replaced with zeros to reduce the check node operations. Since the threshold values are decided only once, offline, the proposed threshold decision method incurs no additional computational complexity for the decoder. Simulation results show that the proposed scheme reduces the decoding complexity more than the FC scheme. Moreover, the proposed scheme resolves the early error floor problem, which is a drawback of the FC scheme.
Declarations
Acknowledgements
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP20172015000385) supervised by the IITP (Institute for Information and communications Technology Promotion).
Authors’ contributions
TK designed the main part of the algorithm, analyzed the data, and wrote this paper. JB and ML designed a part of the algorithm and completed the simulations. JH gave valuable suggestions on the idea of this paper. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 R Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory 8(1), 21–28 (1962).
 DJC Mackay, Good error-correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory 45(2), 399–431 (1999).
 C Berrou, A Glavieux, P Thitimajshima, in IEEE International Conference on Communications (ICC 1993). Near Shannon limit error-correcting coding and decoding: turbo-codes, vol. 1 (IEEE, Geneva, 1993).
 SK Chronopoulos, G Tatsis, V Raptis, P Kostarakis, in Pan-Hellenic Conference on Electronics and Telecommunications (PACET 2012). A parallel turbo encoder-decoder scheme (Aristotle University of Thessaloniki, Thessaloniki, 2012).
 SK Chronopoulos, T Giorgos, P Kostarakis, Turbo codes – a new PCCC design. Commun. Netw. 3(4), 229–234 (2011).
 SK Chronopoulos, T Giorgos, P Kostarakis, Turbo coded OFDM with large number of subcarriers. J. Signal Inf. Process. 3(2), 161–168 (2012).
 SK Chronopoulos, V Christofilakis, G Tatsis, P Kostarakis, Performance of turbo coded OFDM under the presence of various noise types. Wireless Pers. Commun. 87(4), 1319–1336 (2016).
 MPC Fossorier, M Mihaljevic, H Imai, Reduced complexity iterative decoding of low density parity check codes based on belief propagation. IEEE Trans. Commun. 47(5), 673–680 (1999).
 J Chen, MP Fossorier, Near optimum universal belief propagation based decoding of low density parity check codes. IEEE Trans. Commun. 50(3), 406–414 (2002).
 J Heo, Analysis of scaling soft information on low density parity check code. IEE Electron. Lett. 37(25), 1530–1531 (2001).
 V Savin, in IEEE International Symposium on Information Theory (ISIT 2007). Iterative LDPC decoding using neighborhood reliabilities (IEEE, Nice, 2007).
 V Savin, in IEEE International Symposium on Information Theory (ISIT 2008). Self-corrected min-sum decoding of LDPC codes (IEEE, Toronto, 2008).
 J Andrade, G Falcao, V Silva, JP Barreto, N Goncalves, V Savin, in IEEE International Conference on Communications (ICC 2013). Near-LSPA performance at MSA complexity (IEEE, Budapest, 2013).
 E Amador, V Rezard, R Pacalet, in 17th IFIP International Conference on Very Large Scale Integration (VLSI-SoC 2009). Energy efficiency of SISO algorithms for turbo-decoding message-passing LDPC decoders (IEEE, Florianopolis, 2009).
 E Amador, R Knopp, V Rezard, R Pacalet, in IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2010). Dynamic power management on LDPC decoders (IEEE, Lixouri, 2010).
 O Boncalo, A Amaricai, V Savin, in 21st IEEE International Conference on Electronics, Circuits and Systems (ICECS 2014). Memory efficient implementation of self-corrected min-sum LDPC decoder (IEEE, Marseille, 2014).
 E Yeo, P Pakzad, B Nikolic, V Anantharam, in IEEE Global Telecommunications Conference (GLOBECOM 2001). High throughput low-density parity-check decoder architectures (IEEE, San Antonio, 2001).
 J Zhang, M Fossorier, Shuffled iterative decoding. IEEE Trans. Commun. 53(2), 209–213 (2005).
 CY Lin, MK Ku, in IEEE International Symposium on Circuits and Systems (ISCAS 2009). Node operation reduced decoding for LDPC codes (IEEE, Taipei, 2009).
 E Zimmermann, P Pattisapu, PK Bora, G Fettweis, in 7th International Symposium on Wireless Personal Multimedia Communications (WPMC 2004). Reduced complexity LDPC decoding using forced convergence (WPMC steering board, Abano Terme, 2004).
 E Zimmermann, W Rave, G Fettweis, in 11th European Wireless Conference 2005: Next Generation Wireless and Mobile Communications and Services. Forced convergence decoding of LDPC codes: EXIT chart analysis and combination with node complexity reduction techniques (VDE, Nicosia, 2005).
 M Sarajlc, L Liu, O Edfors, in IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC 2014). Reducing the complexity of LDPC decoding algorithms: an optimization-oriented approach (IEEE, Washington, 2014).
 M Sarajlc, L Liu, O Edfors, in IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC 2015). Modified forced convergence decoding of LDPC codes with optimized decoder parameters (IEEE, Hong Kong, 2015).
 J Fan, H Yang, in IEEE International Conference on Communications Technology and Applications (ICCTA 2009). A new forced convergence decoding scheme for LDPC codes (IEEE, Beijing, 2009).
 BJ Choi, MH Sunwoo, in International SoC Design Conference (ISOCC 2013). Efficient forced convergence algorithm for low power LDPC decoders (IEEE, Busan, 2013).
 BJ Choi, MH Sunwoo, in IEEE Asia Pacific Conference on Circuits and Systems (APCCAS 2014). Simplified forced convergence decoding algorithm for low power LDPC decoders (IEEE, Ishigaki Island, 2014).
 G Liva, M Chiani, in IEEE Global Telecommunications Conference (GLOBECOM 2007). Protograph LDPC codes design based on EXIT analysis (IEEE, Washington, 2007).
 S ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49(10), 1727–1737 (2001).
 IEEE Std 802.11ad-2012 (IEEE, New York, 2012). http://standards.ieee.org/getieee802/download/802.11ad2012.pdf.
 IEEE Std 802.22-2011 (IEEE, New York, 2011). http://standards.ieee.org/getieee802/download/802.222011.pdf.
 IEEE Std 802.11n-2009 (IEEE, New York, 2009). http://standards.ieee.org/getieee802/download/802.11n2009.pdf.
 M Baldi, N Maturo, G Ricciutelli, F Chiaraluce, Security gap analysis of some LDPC coded transmission schemes over the flat and fast fading Gaussian wiretap channels. EURASIP J. Wirel. Commun. Netw. 232(1), 1–12 (2015).
 M Baldi, N Maturo, E Paolini, F Chiaraluce, On the use of ordered statistics decoders for low-density parity-check codes in space telecommand links. EURASIP J. Wirel. Commun. Netw. 272(1), 1–15 (2016).
 K Kwon, T Kim, J Heo, Precoded LDPC coding for physical layer security. EURASIP J. Wirel. Commun. Netw. 283(1), 1–18 (2016).