 Research
 Open Access
Adaptive deactivation and zero-forcing scheme for low-complexity LDPC decoders
EURASIP Journal on Wireless Communications and Networking volume 2017, Article number: 153 (2017)
Abstract
A modified message propagation algorithm is proposed for a low-complexity decoder of low-density parity-check (LDPC) codes, which controls the information propagated from variable and check nodes. The proposed threshold-based node deactivation for variable nodes and zero-forcing scheme for check nodes remarkably reduce the decoding complexity required for similar error performance. In the proposed scheme, different thresholds, which are determined from the base matrix of the LDPC codes, are applied to each type of variable node. In addition, the thresholds for deactivating variable nodes are increased as the decoding process proceeds, reducing decoding complexity without the early error floor that is a drawback of the conventional threshold-based deactivation scheme. Simulation results show that the proposed scheme enables normalized min-sum decoders to decode successfully with less complexity than the conventional threshold-based deactivation scheme.
Introduction
Low-density parity-check (LDPC) codes were first introduced by Gallager [1] in 1962 and rediscovered by MacKay [2] in 1999. LDPC codes have competed with turbo codes, which were proposed in 1993 [3], for error control in many applications. From the viewpoint of error performance, LDPC codes behave better at high code rates, while turbo codes are better at lower code rates. Complexity, as well as error performance, is one of the factors that determine which error correction code to use. Therefore, reducing the complexity of the decoding algorithm makes LDPC codes more applicable, especially to energy-limited applications such as the Internet of Things (IoT) and deep-space communication. Since this paper studies LDPC codes, only schemes for LDPC codes are briefly introduced in this section, although there has also been a variety of research on the practical usage of turbo codes [4–7]. The complexity of the LDPC decoding process is determined by three major factors: the computational complexity, the number of iterations, and the number of activated nodes.
Although the original iterative decoding algorithm for LDPC codes, known as the sum-product algorithm [2], shows good error correction performance, its computational complexity is quite high. The sum-product algorithm can be simplified using a mathematical approximation, which is called the min-sum algorithm [8]. The performance degraded by the approximation can be improved by simply scaling messages [9, 10]. On the other hand, Savin proposed a self-correction (SC) method for improving the performance of the min-sum algorithm [11, 12]. He checked the sign changes of the messages from variable nodes between two consecutive iterations to identify unreliable messages. Erasing unreliable messages yields better performance compared to other modified min-sum algorithms [9], especially in the error floor region [13]. As an additional effect, the SC method improves power efficiency due to a reduction in switching activity [14, 15]. The cost of the performance improvement is the additional memory required to store the signs of passing messages. This overhead can be relieved by changing the rules of the conventional SC method [16]; the resulting scheme has memory requirements similar to those of the conventional min-sum algorithm.
The decoding complexity is proportional to the number of iterations. In order to reduce the number of iterations, several serial scheduling methods have been proposed. Unlike classical decoding, in serial scheduling the iteratively generated information is updated following a certain sequence. Check-node-wise and variable-node-wise schedules are known as layered decoding [17] and shuffled decoding [18], respectively. The required number of iterations can be reduced to half that of parallel decoding while the performance is maintained. A stopping criterion can also reduce the average number of iterations. Recently, a stopping criterion for reducing not only the average number of iterations but also the number of calculations per iteration was proposed [19].
In addition, decoding complexity is closely related to the number of activated nodes. An activated node is a node that calculates messages for increasing reliability; in contrast, a node that does not calculate any information is called a deactivated node. The forced convergence (FC) method [20] reduces the number of activated nodes per iteration. During the decoding procedure, the reliability of some variable node messages becomes sufficiently high within a few iterations, and the update of those reliable variable nodes can be skipped in the rest of the iterations [20]. Similarly, check nodes can be deactivated when they satisfy certain inequality conditions [21]. An important issue for achieving good error correction performance is the proper selection of the threshold value. Sarajlić et al. tried to obtain optimized threshold values by solving an optimization problem [22, 23]. In addition, there exist simpler versions of the FC method [24–26] for operational efficiency.
In this paper, we introduce a low-complexity decoding scheme that reduces the number of activated nodes. Compared to the FC scheme, the proposed scheme has lower complexity with less performance loss. The main contributions of this work are as follows:

Thresholds for the proposed scheme are determined using an information-theoretic analysis based on the base matrix of the LDPC codes.

For the variable nodes, a threshold-based deactivation process is proposed with dynamically increasing thresholds.

For the check nodes, the total amount of check-node operations is reduced by generating zero-forced log-likelihood ratios.
Different thresholds are applied to each variable node according to its connectivity. In addition, prematurely deactivated variable nodes are reactivated by dynamically increasing the threshold values. The threshold-increasing condition is checked with a simple estimator based on logical circuits. For the check nodes, the SC method is modified to make it useful for reducing check node calculations.
The rest of this paper is organized as follows: Section 2 explains the classical decoding algorithm and the conventional techniques that motivated the proposed scheme. Section 3 describes the proposed scheme. The simulation results are discussed in Section 4, and a conclusion is provided in Section 5.
Background
Decoding algorithms for LDPC codes
There are three major decoding algorithms for LDPC codes: the maximum likelihood (ML) decoder, the sum-product algorithm (SPA), and the min-sum algorithm (MSA). The ML decoder is equivalent to a maximum a posteriori (MAP) decoder when codewords are equiprobable; it selects the codeword that maximizes the likelihood of the received signal. Even though the ML decoder is optimal, it is impractical due to its high computational complexity. The SPA is a decoding method that exchanges variable-to-check (V2C) and check-to-variable (C2V) information iteratively. The C2V message, \(U_{mn}^{(itr)}\), which propagates from check node m to variable node n at the itr-th iteration, is generated as follows:
where \(V_{mn}^{(itr-1)}\) denotes the V2C log-likelihood ratio (LLR) from variable node n to check node m at the (itr−1)th iteration. The term n′∈N(m)∖n denotes the neighbors of check node m except variable node n. The V2C message, \(V_{mn}^{(itr)}\), is generated using
The term U_{n,ch} denotes the channel LLR of variable node n, and M(n)∖m denotes the neighbors of variable node n except check node m. During the decoding procedure, the a posteriori probability LLR (APP-LLR) of variable node n at the itr-th iteration, \(\beta _{n}^{(itr)}\), is updated as follows:
Although the SPA is more efficient to implement than ML decoding, its complexity is still high due to the numerous multiplications of soft values in each iteration. The MSA is a low-complexity algorithm that uses min operations instead of multiplications. The simplified check node updating rule can be represented as follows:
The drawback of the MSA is its degraded performance compared with the SPA. For performance improvement, the normalized MSA was used in this study. The performance of the MSA can be improved by simply scaling the absolute value of the C2V message. Although the optimum scaling factor is a real value [10], multiplication by an arbitrary real value is avoided in implementations due to its high complexity. Therefore, for the sake of simplicity and practical usefulness, we used a scaling factor of 0.75, which is close to the optimized value and can be implemented by several shift registers.
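As an illustration, the normalized min-sum check-node update can be sketched as follows. This is a minimal floating-point model, not the paper's fixed-point implementation; the scaling factor 0.75 corresponds to 1/2 + 1/4, i.e. two shifts and an add in hardware.

```python
def normalized_ms_check_update(v2c):
    """Normalized min-sum C2V update for one check node (sketch).

    v2c: incoming V2C LLRs from all neighboring variable nodes.
    Returns the outgoing C2V LLRs, one per neighbor: the product of
    the other edges' signs times the minimum of the other edges'
    magnitudes, scaled by 0.75.
    """
    signs = [1 if v >= 0 else -1 for v in v2c]
    total_sign = 1
    for s in signs:
        total_sign *= s
    mags = [abs(v) for v in v2c]
    # The minimum over "all other inputs" is the overall minimum,
    # unless this edge carried it, in which case the second minimum.
    m1 = min(mags)
    i1 = mags.index(m1)
    m2 = min(mags[:i1] + mags[i1 + 1:])
    out = []
    for i in range(len(v2c)):
        mag = m2 if i == i1 else m1
        sign = total_sign * signs[i]  # product of the other signs
        out.append(0.75 * sign * mag)
    return out
```

Only the minimum and second minimum are needed per check node, which is what makes the later zero-forcing shortcut effective.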
Forced convergence
The FC scheme forces variable nodes that have highly reliable information to stop updating their information. Specifically, variable nodes whose absolute APP-LLR value is greater than a threshold t are deactivated during the decoding process as follows:
In order to reduce the complexity stemming from the variable nodes, the deactivation process should be performed before the variable node operation starts. Therefore, the APP-LLR calculated in the previous iteration is used as the criterion for deactivation. Deactivated variable nodes do not update their V2C messages and propagate the last updated messages. In order to avoid performance degradation due to hastily deactivated variable nodes, the decoder reactivates deactivated variable nodes when they are found in an unsatisfied parity-check equation. The behavior of the FC scheme is depicted in Fig. 1. Hereafter, "FC scheme" and "conventional FC scheme" both refer to the scheme proposed in [20].
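The FC bookkeeping described above can be sketched as follows. This is a hypothetical helper illustrating the deactivate/reactivate rule, not the authors' implementation; hard decisions are taken from the sign of the APP-LLR.

```python
def fc_update_active_set(app_llr, active, t, checks):
    """One iteration of forced-convergence bookkeeping (sketch).

    app_llr: per-variable-node APP-LLRs from the previous iteration
    active:  per-variable-node activity flags (True = activated)
    t:       deactivation threshold
    checks:  parity checks, each a list of variable-node indices
    """
    # Deactivate variable nodes whose reliability already exceeds t.
    for n, llr in enumerate(app_llr):
        if abs(llr) > t:
            active[n] = False
    # Hard decisions: 0 for non-negative LLR, 1 otherwise.
    bits = [0 if llr >= 0 else 1 for llr in app_llr]
    # Reactivate every node involved in an unsatisfied parity check.
    for row in checks:
        if sum(bits[n] for n in row) % 2 != 0:
            for n in row:
                active[n] = True
    return active
```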
Selfcorrection for the minsum algorithm
The SC method was proposed to recover the performance loss of the MSA; it achieves a performance gain by erasing unreliable information. It identifies unreliable information by comparing the newly computed V2C message with the message from the previous iteration. When the sign has changed, the decoder regards the message as unreliable. Therefore, the process of the SC decoder can be defined as
An erased message is represented by assigning a zero value, which means the bit states are equiprobable. Whenever the message at the previous iteration was erased, the new message is sent without considering its sign. The behavior of the SC scheme is depicted in Fig. 2.
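The per-edge SC rule can be sketched as follows, assuming an erased message is encoded as the value 0:

```python
def self_correct(v2c_new, v2c_prev):
    """Self-correction rule for one edge (sketch). If the sign of the
    V2C message flips between consecutive iterations, the message is
    considered unreliable and erased (set to 0, i.e. equiprobable).
    If the previous message was erased, the new message is sent
    without considering its sign."""
    if v2c_prev == 0:
        return v2c_new
    if (v2c_new >= 0) != (v2c_prev >= 0):  # sign change -> unreliable
        return 0.0
    return v2c_new
```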
Adaptive deactivation and zero-forcing scheme
In this section, calculation reduction techniques for variable and check nodes are proposed, motivated by the FC and SC schemes explained in Section 2. The proposed node deactivation process for variable nodes is conceptually similar to the conventional FC scheme in that it reduces the operational complexity of variable nodes with high reliability. The conventional FC scheme uses an equal threshold value for all variable nodes; in contrast, we determine a different threshold value for each variable node. Because the increment of reliability per iteration during the decoding process differs according to node connectivity, using various thresholds is more efficient for complexity reduction. In addition, variable rather than fixed thresholds are used to prevent the early error floor, which is a drawback of the FC scheme.
From the algorithm of the SC method, the check node operation can be simplified according to the number of erased V2C messages. However, the main purpose of the SC method is error-correcting performance improvement, and the number of V2C messages erased by the conventional SC method is not large enough to obtain sufficient benefits from the viewpoint of complexity. Therefore, we propose a threshold-based zero-forcing scheme for improving check node complexity reduction.
All thresholds used for the proposed scheme should be properly determined so as to deactivate as many nodes as possible without causing an early error floor. We propose a threshold decision method using an information-theoretic analysis tool.
Adaptive node deactivation for reliable information
Assume that there is an LDPC code defined by an M×N parity-check matrix H or an M_p×N_p base matrix B. Each element of the base matrix, b_{(i,j)} (1≤i≤M_p, 1≤j≤N_p), represents the number of edges connecting type-i check nodes and type-j variable nodes. As in condition (5), variable nodes satisfying the condition given in (7) are deactivated.
The parameter t_v(j) is the threshold for type-j variable nodes. As mentioned previously, the threshold for each type of variable node is increased during the decoding process. The reason for using a variable threshold instead of a fixed one is the tradeoff between error performance and complexity reduction: although substantial complexity reduction can be obtained with a small threshold, an early error floor occurs, because the variable nodes hastily deactivated by the small threshold value degrade error performance. A large threshold value brings the opposite results.
In order to reactivate hastily deactivated variable nodes, the threshold in the proposed scheme is increased when the rate of variable node deactivation exceeds a certain criterion. Increasing the threshold reactivates those deactivated variable nodes that no longer satisfy condition (7). As a result, the threshold values from the early phase to the last phase complement each other, yielding substantial complexity reduction with relatively little performance loss compared to a fixed threshold value.
Initial thresholds t_{v,ini} are determined using a protograph-based extrinsic information transfer (PEXIT) chart [27], which is an analysis tool for protograph-based and multi-edge-type LDPC codes. In contrast to the extrinsic information transfer (EXIT) chart [28], which is based on the degree distribution of LDPC codes, the PEXIT chart uses the base matrix. The PEXIT chart provides not only the threshold signal-to-noise ratio (SNR) of LDPC codes but also the growth of reliability for each type of node in the iterative decoding process.
Let J(σ) denote the mutual information between a binary random variable X with Pr(X=μ)=Pr(X=−μ)=1/2 and a continuous Gaussian random variable Y with mean X and variance σ²=2μ. The function J(σ) is calculated as follows:
When a rate-R LDPC code is used, the mutual information of the channel messages to type-j variable nodes is \(I_{ch}(j)=J(\sqrt {8R\cdot SNR_{th}})\), where R is the code rate of the LDPC code and SNR_{th} is its iterative decoding threshold SNR. Then, the mutual information transmitted between check nodes and variable nodes is calculated iteratively, following the iterative decoding procedure. The V2C mutual information between a type-i check node and a type-j variable node can be evaluated as
Similarly, the C2V mutual information between a type-i check node and a type-j variable node can be represented as follows:
Using I_{ch} and the updated I_{C2V} at each iteration, the cumulative mutual information (CMI) for each type of variable node can be obtained as
Since the mutual information of different variable nodes increases at different rates, it is more reasonable to apply a different threshold value to each type of variable node rather than an equal threshold value as in the conventional FC scheme.
We determine the initial thresholds from the increment of the CMI. Let \(I_{CMI}^{l}(j)\) denote the CMI of a type-j variable node at the l-th iteration. The end point of the minimum increment from the (l−1)th iteration to the l-th iteration is calculated as
The proposed technique uses the APP-LLR at the l*(j)-th iteration as the initial threshold value t_{v,ini}(j). The APP-LLR can be obtained by applying the J^{−1} function to I_{CMI} as follows:
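The two steps involved, selecting the minimum-increment iteration l*(j) and inverting J, can be sketched as follows. The helper names are hypothetical, and the final mapping from J^{−1}(I_CMI) to an LLR threshold follows the paper's equation, which is not reproduced here.

```python
def minimum_increment_iteration(cmi):
    """Return l*(j): the iteration ending the minimum CMI increment.
    cmi[l] is the CMI of a type-j variable node after iteration l
    (cmi[0] is the starting value); ties resolve to the earliest
    iteration."""
    increments = [cmi[l] - cmi[l - 1] for l in range(1, len(cmi))]
    return increments.index(min(increments)) + 1

def bisect_inverse(f, target, lo=0.0, hi=50.0, iters=60):
    """Invert a monotone-increasing scalar function by bisection;
    with f = J this plays the role of J^{-1}."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Bisection suffices here because the inversion is performed once, offline, when the thresholds are designed.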
The deactivation process reduces the amount of variable node operations as the ratio of deactivated variable nodes gradually increases. However, deactivation can disturb the decoding, because some hastily deactivated variable nodes provide V2C LLR values whose magnitudes are not large enough. Therefore, we enable the decoder to increase the threshold t_v(j) several times in the middle of the decoding procedure as follows:
The threshold-increasing operation is depicted in Fig. 3. However, increasing the threshold without limit may reduce the ratio of deactivated nodes. Therefore, we set the threshold limit based on the PEXIT chart, as for the initial threshold. In this case, we find the maximum increment as follows:
Figure 4 shows an example of the CMI and the ranges of the expected t_{v,ini} and t_{v,max} when the rate-1/2 LDPC code specified in the IEEE 802.11ad standard (WiGig) [29] is used. Because there are 16 types of variable nodes, several CMI behaviors are depicted in Fig. 4.
When the value of the threshold t_v(j) exceeds t_{v,max}(j), it is forced to t_{v,max}(j). The threshold increments Δt_v(j) are determined by the desired number of increment trials T and the threshold limit t_{v,max}(j).
The threshold t_v is increased when the output of a simple device called a node activeness estimator (NAE) is 1. The architecture of the NAE is shown in Fig. 5. The estimator is constructed by concatenating AND gates and OR gates; it has 4^s inputs, \(v_{1},v_{2},\ldots,v_{4^{s}}\), for a positive integer s, and one output. The parameter s is the number of stages consisting of AND gates and OR gates. First, choose 4^s out of the n variable nodes; for the sake of simplicity, let the chosen variable nodes correspond to the first 4^s columns of H. Then determine the inputs of the estimator as follows:
The NAE shown in Fig. 5 detects 1 with high probability when the ratio of 1s among the inputs is greater than 0.7. The probability that the NAE detects 1 can be calculated recursively using the following equation:
The parameter P_i is the probability that the output is 1 when the binary inputs are processed through a set of AND-OR logical circuits i times, and P_0 is the proportion of deactivated variable nodes among the 4^s samples. For instance, when the NAE consists of three stages, the probability of detection is \(P_{3}=1-(1-(1-(1-(1-(1-P_{0}^{2})^{2})^{2})^{2})^{2})^{2}\). As a result, when the density of deactivated variable nodes is equal to 0.7, the probability of detection is 0.86 when s=3 and increases to 0.94 when s=4. In other words, the deactivating threshold t_v is increased with high probability when more than 70% of the variable nodes are deactivated. Figure 6 shows the simulated probability that the NAE detects 1 as the density of the binary inputs varies.
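The NAE structure and the detection-probability recursion can be sketched together; this gate-level model assumes each stage ANDs adjacent input pairs and then ORs adjacent pairs, reducing 4^s inputs to one bit in s stages, consistent with the recursion \(P_{i}=1-(1-P_{i-1}^{2})^{2}\).

```python
import random

def nae_output(inputs):
    """Gate-level sketch of the node activeness estimator. The input
    count must be 4^s; each stage halves the width twice (AND pairs,
    then OR pairs)."""
    layer = list(inputs)
    while len(layer) > 1:
        layer = [a & b for a, b in zip(layer[0::2], layer[1::2])]  # AND stage
        layer = [a | b for a, b in zip(layer[0::2], layer[1::2])]  # OR stage
    return layer[0]

def detection_probability(p0, stages):
    """P_i = 1 - (1 - P_{i-1}^2)^2, with P_0 the fraction of
    deactivated variable nodes among the sampled inputs."""
    p = p0
    for _ in range(stages):
        p = 1.0 - (1.0 - p * p) ** 2
    return p
```

Monte Carlo simulation of the gate tree reproduces the recursion, since the two inputs of every gate come from disjoint subtrees and are therefore independent.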
Zero-forcing scheme for unreliable information
In the check node operation of the normalized MSA, the minimum and second minimum values must be found among the V2C messages coming from the connected variable nodes. This operation can be simplified when some V2C messages are equal to 0. For example, let a check node m receive two or more zero V2C messages. Then the check node does not have to calculate any C2V messages, because all of its outputs are zero in this case. Consequently, zero V2C messages enable check nodes to execute fewer operations. However, V2C messages are generally nonzero in the conventional decoding algorithm. In order to generate zero V2C messages, we propose a threshold-based zero-forcing scheme in which V2C messages regarded as unreliable information are forced to zero.
Then, check nodes can selectively calculate C2V LLR values from the received V2C LLRs. For a check node m, if two or more zero V2C LLR values are received, then every C2V LLR from check node m, \(U_{mn}^{(itr)}~ \forall n\), is set to zero without any calculation. If only one zero V2C LLR, say \(V_{mn}^{(itr-1)}=0\), is received, the check node operation is executed only for \(U_{mn}^{(itr)}\). The effects of zero-forced V2C LLRs on check node operations are depicted in Fig. 7. Because less calculation is required on the check-node side as more V2C LLRs are replaced with zero, a higher t_c is helpful for complexity reduction; however, too high a t_c results in performance degradation.
The thresholds for identifying unreliable information are obtained using the PEXIT chart, as are the thresholds for the variable nodes. In this case, the mutual information I_{V2C}, which is transmitted from variable nodes to check nodes, is used. From the PEXIT chart, there are M_p N_p kinds of I_{V2C}. Although we could obtain a threshold for each type of V2C message, this would require a large amount of memory. Therefore, an equal threshold t_c is used for every V2C message from the same variable node. The threshold t_c for each variable node is approximated by averaging the values obtained by applying the J^{−1} function to the corresponding I_{V2C}. Let l^{‡}(i,j) denote the iteration at which the increment of I_{V2C} is minimal. Then, the threshold for V2C messages from type-j variable nodes, t_c(j), can be calculated as follows:
where d_{v,j} is the degree of type-j variable nodes. Figure 8 shows an example of the V2C LLR and the range of the expected t_c when the rate-1/2 LDPC code specified in IEEE 802.11ad is used. As in the case of the CMI, because there are 16 types of variable nodes, several V2C LLR behaviors are depicted in Fig. 8.
Since a V2C LLR value is regarded as unreliable when its magnitude is small, our method forces V2C LLR values whose magnitudes are less than the threshold to zero as follows:
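A minimal sketch of the zero-forcing rule and the resulting check-node shortcut (hypothetical helper names):

```python
def zero_force(v2c, t_c):
    """Force unreliable V2C LLRs (magnitude below t_c) to zero."""
    return [0.0 if abs(v) < t_c else v for v in v2c]

def edges_to_compute(v2c):
    """Check-node shortcut enabled by zero-forced inputs: return the
    indices of the edges whose C2V output actually needs computing.
    With >= 2 zero inputs, every output is zero; with exactly one
    zero input, only that edge needs a full update, because every
    other output's minimum runs over the zero input."""
    zeros = [i for i, v in enumerate(v2c) if v == 0.0]
    if len(zeros) >= 2:
        return []
    if len(zeros) == 1:
        return zeros
    return list(range(len(v2c)))
```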
For LDPC codes of length N and rate \(R=\frac{N-M}{N}\), the proposed simultaneous node deactivation and zero-forcing scheme for highly reliable and unreliable information is described in Algorithm 1.
Simulation results
In order to examine the performance of the proposed algorithm, simulations were performed over an additive white Gaussian noise (AWGN) channel with binary phase shift keying (BPSK) modulation. LDPC codes specified in the IEEE 802.11ad standard of length N=672 and rates R=1/2 and 3/4 were used to evaluate performance. In addition, simulation results for LDPC codes specified in the IEEE 802.22 standard [30] of length N=384 and the IEEE 802.11n standard [31] of length N=1944 are presented in the table at the end of this section. We performed all simulations using 7-bit fixed-point precision, with 4 bits for the integer part and 3 bits for the fractional part. The maximum number of iterations for all simulations was 20.
First, the frame error rates (FERs) of the proposed adaptive deactivation and zero-forcing (ADZF) scheme are compared to those of the normalized MSA without any additional manipulation and of the conventional FC scheme, as shown in Fig. 9. The performance of the proposed scheme was evaluated with the number of stages in the NAE equal to 4 and the number of increment trials equal to 10. The simulation results of the FC scheme applied to the WiGig LDPC code with R=1/2 show the influence of the threshold value on the error-correcting performance of the FC scheme in the high-SNR regime. Even though significant complexity reduction can be achieved by the FC scheme with a low threshold value, the performance loss in the target SNR region makes the scheme meaningless. Figure 10 shows the influence of the threshold value on the SNR required to achieve FER =10^{−3}. As represented in Fig. 10, a sufficiently large threshold value should be used to overcome the performance loss. In this section, a threshold value that shows similar error performance at FER =10^{−3}, such as t=9 or 10, was used for the comparison of complexity.
Next, we evaluated the average number of iterations for each scheme. Figure 11 shows the average number of iterations with the maximum iteration count I_max=20. The ADZF and FC schemes require a similar average number of iterations to the normalized MSA. The reason for the slight difference in iterations in the low-SNR region (FER<10^{−2}) is that the initial thresholds t_{v,ini} in the proposed ADZF scheme are smaller than the threshold value of the FC scheme. Figure 12 shows the influence of the threshold value on the average number of iterations needed to achieve FER =10^{−2}: when a smaller threshold value is used, the average number of iterations required to maintain error performance increases. In the proposed ADZF scheme, when the NAE detects 1, the threshold values increase from t_{v,ini}, which can be lower than 6, to t_{v,max}, which can be higher than 10. Since the NAE operates at the end of each iteration, the opportunity for threshold increments grows in the low-SNR region, where more iterations are required than in the high-SNR region. In other words, the thresholds of the proposed ADZF scheme can be higher than those of the FC scheme in the low-SNR region. Nevertheless, since the difference in iterations decreases to zero in the practical operation region (FER≥10^{−2}), it does not affect the system's performance significantly.
Figure 13 shows the variable node complexity of each scheme when the complexity of the normalized MSA is evaluated as 1. The variable node complexity is defined as follows:
The parameter \(v_{i}^{itr}\) indicates whether the i-th variable node is activated (=0) or deactivated (=1) at the itr-th iteration. Because the effect of a deactivated high-degree variable node on computational complexity is greater than that of a deactivated low-degree variable node, we use the degree of each variable node as a weight. When the iteration is stopped before reaching the maximum number of iterations, all \(v_{i}^{itr}\) values for the remaining iterations are set to 1; in this way, a reasonable complexity measure that accounts for early stopping is obtained. For the proposed and FC schemes, the complexity decreases as the SNR increases, because each variable node is likely to have a higher APP-LLR at a higher SNR. As represented in Fig. 13, the variable node complexity of the proposed scheme applied to the WiGig LDPC code of rate 1/2 is only 0.26, almost half that of the FC scheme (0.53). When the threshold value of the FC scheme is smaller than 6, its complexity is lower than that of the proposed scheme; however, as represented in Fig. 9, the FC scheme with a low threshold value is meaningless due to the error performance loss. For the proposed scheme, using an NAE with s=3 yields lower complexity than with s=4. The reason is that the probability of detecting 1 becomes higher when the NAE has more stages, as shown in Fig. 6. Although more stages in the NAE cause less complexity reduction compared to fewer stages, they have an advantage from the viewpoint of the error floor. Similarly, the number of increment trials also influences error performance and decoding complexity: a large number of trials brings low decoding complexity but an early error floor, while a smaller number of increment trials yields a lower complexity gain with less performance loss.
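A plausible reading of this degree-weighted metric can be sketched as follows; the paper's exact equation is not reproduced in the text, so this is an assumption consistent with the description (active work divided by the work of an always-on decoder running I_max iterations).

```python
def variable_node_complexity(v_flags, degrees, i_max=20):
    """Degree-weighted activity measure (sketch, not the paper's
    exact formula). v_flags[itr][i] = 1 if variable node i is
    deactivated at iteration itr, 0 if activated. Iterations skipped
    by early stopping are absent from v_flags and count as
    all-deactivated (flag 1), contributing no work."""
    total = i_max * sum(degrees)          # work of always-on decoding
    done = sum(degrees[i] * (1 - row[i])
               for row in v_flags
               for i in range(len(degrees)))
    return done / total
```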
Consequently, the number of NAE stages and increment trials should be selected carefully in accordance with the requirements of the communication system, due to the tradeoff between complexity and error performance.
Check node complexity, which reflects the effect of zero-forced V2C LLRs explained in Section 3 as well as that of deactivated variable nodes, is shown in Fig. 14. Check node complexity is obtained by evaluating the amount of check node calculation:
The parameter d_{c,i} is the degree of the i-th check node, and \(c_{i}^{itr}\) indicates whether the operation of the i-th check node can be omitted (=1) or not (=0) at the itr-th iteration. As in the calculation of variable node complexity, the degree of each check node is used as a weight, and when the iteration is stopped before reaching the maximum number of iterations, all \(c_{i}^{itr}\) values for the remaining iterations are set to 1. As mentioned previously, when a check node receives two or more zero V2C messages, the corresponding check node operation can be omitted, because all C2V messages from that check node are zero. In addition, because the operation of a check node is meaningless when all related variable nodes are deactivated, the corresponding check node operation can also be omitted. The FC scheme, on the other hand, benefits only from deactivated variable nodes; moreover, as represented in Fig. 13, the number of deactivated variable nodes in the FC scheme is smaller than in the proposed scheme. As a result, the check node complexity of the proposed scheme applied to the WiGig LDPC code of rate 1/2 is 0.55, which is lower than that of the FC scheme (0.76). The complexity gain increases with SNR because the portion of deactivated variable nodes increases. Meanwhile, the influence of the SC scheme on check node complexity reduction decreases due to the increased reliability of the channel information. Figure 15 shows the percentage of V2C messages whose absolute LLR value is lower than 0.5 and 1.0 as a function of SNR and iteration. As shown in Fig. 15, the percentage of unreliable V2C messages decreases significantly as the SNR increases.
Table 1 shows the effect of the proposed ADZF scheme when other LDPC codes are used. The average number of iterations, variable node complexity, and check node complexity of each scheme at FER =10^{−2} and 10^{−3} are presented; the variable node and check node complexities are normalized by the complexity of the normalized MSA. Similar to the results for the WiGig LDPC codes, the proposed scheme shows significantly lower node operation complexity than the FC scheme. If the threshold value is smaller than the value presented in Table 1, an early error floor occurs. Figure 16 shows the error performance of each scheme when a smaller threshold value is used; for clarity, results are shown only in the SNR regions where the FER of the normalized MSA is approximately 10^{−2} and 10^{−3}. As represented in Fig. 16, when the smaller threshold is used, there is a noticeable performance gap at SNR =2.8 and 3.0 dB due to the early error floor.
For a fair comparison, we should also consider the additional complexity caused by variable node deactivation, message zero-forcing, and the NAE. The total complexity of the ADZF scheme, C_{ADZF}, is then calculated as
where C_{variable} and C_{check} are the complexities of the variable node and check node operations, respectively, and C_{D}, C_{ZF}, and C_{NAE} are the complexities of deactivation, zero-forcing, and the NAE, respectively. Similarly, the total complexity of the FC scheme, C_{FC}, is calculated as follows:
In order to combine the complexities of different operations, we first counted the numbers of additions, comparisons, and AND-OR logical circuits, which are the basic components of each operation. The variable node operation consists of additions; comparisons are used for the check node operation, deactivation, and the zero-forcing process; and the node activeness estimator consists of AND-OR logical circuits, as shown in Fig. 5. We then assigned a weight to each component, since their computational complexities differ. Using the number of components and their weights, Eqs. (25) and (26) can be rewritten as
where N_{add}, N_{comp}, and N_{AND-OR} represent the numbers of additions, comparisons, and AND-OR logical circuits, respectively, and W_{add}, W_{comp}, and W_{AND-OR} are the corresponding weights. To quantify the computational complexity of each component, we used execution time. From the simulation, we found that the execution time of one AND-OR logical circuit is approximately 1.5 times longer than that of an addition or a comparison. Therefore, the weight of the AND-OR logical circuit is 1.5 when that of the addition and the comparison is 1. Figure 17 shows the normalized total complexity of each scheme when the total complexity of the normalized MSA is 1. The simulated error rate region is FER =10^{−3}, and the threshold of the FC scheme is t=12 for the 802.11n code and t=10 for the rest. As a result, even though the ADZF scheme requires extra operations compared with the FC scheme, it still has a benefit from the viewpoint of complexity.
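The weighted combination described above reduces to a simple weighted sum of operation counts; a sketch with the stated weights as defaults:

```python
def total_complexity(n_add, n_comp, n_and_or,
                     w_add=1.0, w_comp=1.0, w_and_or=1.5):
    """Weighted operation count used to compare schemes (sketch).
    The 1.5x weight for AND-OR logical circuits follows the
    execution-time measurement reported in the text; the operation
    counts themselves are per-scheme inputs (additions from variable
    nodes, comparisons from check nodes / deactivation /
    zero-forcing, AND-OR circuits from the NAE)."""
    return n_add * w_add + n_comp * w_comp + n_and_or * w_and_or
```

For the FC scheme the AND-OR term is simply zero, since it has no NAE.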
Finally, as a merit of the proposed scheme, the ADZF scheme is compatible with various existing schemes using LDPC codes, such as [32–34], from the viewpoint of their practical usage. The decoding complexity of those schemes can be reduced by simply adopting the proposed scheme in their LDPC decoders.
Conclusions
A node operation reduction technique, called the ADZF scheme, is proposed for a low-complexity decoder of LDPC codes. Variable nodes with high reliability are deactivated to omit their calculations. A simple NAE constructed with logical circuits is introduced to increase the threshold value when the deactivated variable nodes are saturated. Hastily deactivated variable nodes are automatically reactivated by the increased threshold value. Messages with low reliability propagated from variable nodes are replaced with zeros to reduce check node operations. Since the threshold value is decided only once in offline mode, the proposed threshold decision method does not incur additional computational complexity for the decoder. Simulation results show that the proposed scheme reduces decoding complexity more than the FC scheme. Moreover, the proposed scheme resolves the early error floor problem, which is a drawback of the FC scheme.
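The control flow summarized above can be sketched as follows. This is a minimal illustration assuming posterior LLR magnitudes as the reliability measure; the function names, thresholds, and update rule are hypothetical simplifications, not the paper's notation.

```python
import numpy as np

def adzf_step(llr, t_deact, t_zf):
    """One ADZF control step (illustrative sketch).

    llr     : posterior LLR per variable node
    t_deact : deactivation threshold for variable nodes
    t_zf    : zero-forcing threshold for outgoing messages
    """
    # Deactivate high-reliability variable nodes so their updates are skipped.
    # Recomputing the mask from the current threshold means that raising
    # t_deact automatically reactivates hastily deactivated nodes.
    active = np.abs(llr) < t_deact

    # Replace low-reliability outgoing messages with zeros so check nodes
    # can exclude them from the min-sum comparison.
    messages = np.where(np.abs(llr) < t_zf, 0.0, llr)
    return active, messages

def nae_update(active, t_deact, step=1.0, min_active=1):
    # NAE-style rule: when deactivation saturates before the parity checks
    # are satisfied, raise the threshold to reactivate variable nodes.
    return t_deact + step if active.sum() < min_active else t_deact
```

In a full decoder these steps would sit inside the normalized min-sum iteration loop, with the threshold decided offline as described above.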
References
 1
R Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory. 8(1), 21–28 (1962).
 2
DJC MacKay, Good error-correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory. 45(2), 399–431 (1999).
 3
C Berrou, A Glavieux, P Thitimajshima, in IEEE International Conference on Communications (ICC 1993). Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1 (IEEE, Geneva, 1993).
 4
SK Chronopoulos, G Tatsis, V Raptis, P Kostarakis, in Pan-Hellenic Conference on Electronics and Telecommunications (PACET 2012). A parallel turbo encoder-decoder scheme (The Electronics and the Telecommunications laboratories of the Departments of Physics and Electrical and Computer Engineering, of the Aristotle University of Thessaloniki (AUTh), Thessaloniki, 2012).
 5
SK Chronopoulos, G Tatsis, P Kostarakis, Turbo codes–a new PCCC design. Commun. Netw. 3(4), 229–234 (2011).
 6
SK Chronopoulos, G Tatsis, P Kostarakis, Turbo coded OFDM with large number of subcarriers. J. Signal Inf. Process. 3(2), 161–168 (2012).
 7
SK Chronopoulos, V Christofilakis, G Tatsis, P Kostarakis, Performance of turbo coded OFDM under the presence of various noise types. Wireless Pers. Commun. 87(4), 1319–1336 (2016).
 8
MPC Fossorier, M Mihaljevic, H Imai, Reduced complexity iterative decoding of low-density parity check codes based on belief propagation. IEEE Trans. Commun. 47(5), 673–680 (1999).
 9
J Chen, MP Fossorier, Near optimum universal belief propagation based decoding of low-density parity check codes. IEEE Trans. Commun. 50(3), 406–414 (2002).
 10
J Heo, Analysis of scaling soft information on low density parity check code. IEE Electron. Lett. 37(25), 1530–1531 (2001).
 11
V Savin, in IEEE International Symposium on Information Theory (ISIT 2007). Iterative LDPC decoding using neighborhood reliabilities (IEEE, Nice, 2007).
 12
V Savin, in IEEE International Symposium on Information Theory (ISIT 2008). Self-corrected min-sum decoding of LDPC codes (IEEE, Toronto, 2008).
 13
J Andrade, G Falcao, V Silva, JP Barreto, N Goncalves, V Savin, in IEEE International Conference on Communications (ICC 2013). Near-LSPA performance at MSA complexity (IEEE, Budapest, 2013).
 14
E Amador, V Rezard, R Pacalet, in 17th IFIP International Conference on Very Large Scale Integration (VLSI-SoC 2009). Energy efficiency of SISO algorithms for turbo-decoding message-passing LDPC decoders (IEEE, Florianopolis, 2009).
 15
E Amador, R Knopp, V Rezard, R Pacalet, in IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2010). Dynamic power management on LDPC decoders (IEEE, Lixouri, 2010).
 16
O Boncalo, A Amaricai, V Savin, in 21st IEEE International Conference on Electronics, Circuits and Systems (ICECS 2014). Memory efficient implementation of self-corrected min-sum LDPC decoder (IEEE, Marseille, 2014).
 17
E Yeo, P Pakzad, B Nikolic, V Anantharam, in IEEE Global Telecommunications Conference (GLOBECOM 2001). High throughput low-density parity-check decoder architectures (IEEE, San Antonio, 2001).
 18
J Zhang, M Fossorier, Shuffled iterative decoding. IEEE Trans. Commun. 53(2), 209–213 (2005).
 19
CY Lin, MK Ku, in IEEE International Symposium on Circuits and Systems (ISCAS 2009). Node operation reduced decoding for LDPC codes (IEEE, Taipei, 2009).
 20
E Zimmermann, P Pattisapu, PK Bora, G Fettweis, in 7th International Symposium on Wireless Personal Multimedia Communications (WPMC 2004). Reduced complexity LDPC decoding using forced convergence (WPMC Steering Board, Abano Terme, 2004).
 21
E Zimmermann, W Rave, G Fettweis, in 11th European Wireless Conference 2005 - Next Generation Wireless and Mobile Communications and Services. Forced convergence decoding of LDPC codes - EXIT chart analysis and combination with node complexity reduction techniques (VDE, Nicosia, 2005).
 22
M Sarajlić, L Liu, O Edfors, in IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC 2014). Reducing the complexity of LDPC decoding algorithms: an optimization-oriented approach (IEEE, Washington, 2014).
 23
M Sarajlić, L Liu, O Edfors, in IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC 2015). Modified forced convergence decoding of LDPC codes with optimized decoder parameters (IEEE, Hong Kong, 2015).
 24
J Fan, H Yang, in IEEE International Conference on Communications Technology and Applications (ICCTA 2009). A new forced convergence decoding scheme for LDPC codes (IEEE, Beijing, 2009).
 25
BJ Choi, MH Sunwoo, in International SoC Design Conference (ISOCC 2013). Efficient forced convergence algorithm for low power LDPC decoders (IEEE, Busan, 2013).
 26
BJ Choi, MH Sunwoo, in IEEE Asia Pacific Conference on Circuits and Systems (APCCAS 2014). Simplified forced convergence decoding algorithm for low power LDPC decoders (IEEE, Ishigaki Island, 2014).
 27
G Liva, M Chiani, in IEEE Global Telecommunications Conference (GLOBECOM 2007). Protograph LDPC codes design based on EXIT analysis (IEEE, Washington, 2007).
 28
S ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49(10), 1727–1737 (2001).
 29
IEEE Std 802.11ad-2012 (IEEE, New York, 2012). http://standards.ieee.org/getieee802/download/802.11ad2012.pdf.
 30
IEEE Std 802.22-2011 (IEEE, New York, 2011). http://standards.ieee.org/getieee802/download/802.222011.pdf.
 31
IEEE Std 802.11n-2009 (IEEE, New York, 2009). http://standards.ieee.org/getieee802/download/802.11n2009.pdf.
 32
M Baldi, N Maturo, G Ricciutelli, F Chiaraluce, Security gap analysis of some LDPC coded transmission schemes over the flat and fast fading Gaussian wiretap channels. EURASIP J. Wirel. Commun. Netw. 232(1), 1–12 (2015).
 33
M Baldi, N Maturo, E Paolini, F Chiaraluce, On the use of ordered statistics decoders for low-density parity-check codes in space telecommand links. EURASIP J. Wirel. Commun. Netw. 272(1), 1–15 (2016).
 34
K Kwon, T Kim, J Heo, Precoded LDPC coding for physical layer security. EURASIP J. Wirel. Commun. Netw. 283(1), 1–18 (2016).
Acknowledgements
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP20172015000385) supervised by the IITP (Institute for Information and communications Technology Promotion).
Author information
Affiliations
Contributions
TK designed the main part of the algorithm, analyzed the data, and wrote this paper. JB and ML designed a part of the algorithm and completed the simulations. JH gave valuable suggestions on the idea of this paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Kim, T., Baik, J., Lee, M. et al. Adaptive deactivation and zero-forcing scheme for low-complexity LDPC decoders. J Wireless Com Network 2017, 153 (2017). https://doi.org/10.1186/s13638-017-0934-z
Received:
Accepted:
Published:
Keywords
 Low-density parity-check codes
 Protograph-based extrinsic transfer chart
 Normalized min-sum algorithm
 Complexity reduction