
Augmented decoders for LDPC codes

Abstract

The performance of a belief propagation decoder for low-density parity-check codes is limited by the presence of trapping sets in the code’s graph. This leads to an error floor at higher signal-to-noise ratios. We propose the use of an augmented decoder, which operates by iteratively decoding on a set of graphs that have a subset of repeated check nodes. We compare the augmented decoder to other modified belief propagation decoders that have been presented in the literature. We show that, for all the codes considered, the augmented decoder yields the best frame error rate in the error floor region.

1 Introduction

Low-density parity-check (LDPC) codes provide good error correction performance when a belief propagation (BP) decoder is used [1]. BP computes the marginal probabilities of the transmitted bits with reasonable accuracy even though the graph representation of the code contains loops. These loops can lead to trapping sets which degrade decoder performance, causing an error floor at high signal-to-noise ratios (SNR) [2].

It is possible to design LDPC codes for good performance with a BP decoder by optimizing the degree distribution [3]. It is also possible to avoid loops in the construction of the code’s graph through methods such as progressive edge-growth (PEG) [4]. The harmfulness of particular trapping sets to BP performance can be analyzed, and codes can be designed to avoid them [5]. For a given code, it is possible to add additional parity checks at the transmitter [6], but this comes at a potential rate penalty.

Several modifications to the BP decoder have been suggested in the literature. Generalized LDPC (G-LDPC) decoders are presented in [7]; these aim to mitigate the effect of prominent trapping sets by combining associated check nodes into “super nodes.” The non-uniqueness of the code’s parity-check matrix (and hence its graph) is exploited in [8], where BP is employed in parallel on a set of equivalent graphs. Backtracking was proposed in [9], which involves iteratively flipping the sign of a BP message associated with an unsatisfied check node. Limiting the magnitudes of messages passed in BP can also successfully reduce the error floor [10]. If the BP messages are limited to a small number of bits, then modifying the quantization scheme can improve performance [11]. The scheduling of message passing in a BP decoder can also be altered, giving rise to serial decoders [12–14]. It is also possible to identify particular trapping sets and apply an appropriate perturbation if a known trapping set is present [15].

In this paper, we describe the augmented decoder in the context of LDPC codes. The augmented decoder iteratively decodes on a set of graphs with a subset of repeated check nodes; these repetitions serve to alter the dynamics of BP. We apply the augmented decoder to a suite of regular and irregular LDPC codes, demonstrating that it is effective at mitigating error floors due to trapping sets. Furthermore, we show that in terms of error floor frame error rate (FER), the augmented decoder outperforms other methods of trapping set mitigation presented in the literature.

The paper is organized as follows. Section 2 gives a brief overview of LDPC codes. The augmented decoder is presented in Section 3. Section 4 presents the numerical results that demonstrate the improved performance offered by the augmented decoder. The paper is concluded in Section 5.

2 Brief overview of LDPC codes

An LDPC code \(C\subset \mathbb {F}_{2}^{n}\) can be defined as the kernel of an (n−k)×n parity-check matrix H

$$ C=\left\{\mathbf{x} \in \mathbb{F}_{2}^{n}:H\mathbf{x}=\mathbf{0}\right\}. $$
(1)

If H is full rank (rank(H)=n−k), then dim(C)=k. The rate of a code is defined as R=k/n, where k is the number of uncoded bits and n the block length.

The parity-check matrix H can be represented using a Tanner graph. As an example, consider the parity-check matrix of the (7,4) Hamming code

$$ H= \left(\begin{array}{ccccccc} 1 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 1 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 1 & 1 & 1\\ \end{array} \right). $$
(2)

The associated graph is shown in Fig. 1. Each circle is a variable node corresponding to a column of H. Each square is a parity-check node corresponding to a row of H. There is a connection (edge) between variable node i and parity-check node j if the corresponding entry of H is 1.

Fig. 1

Tanner graph example. Tanner graph for the (7,4) Hamming code. Circles are variable nodes, and squares are parity-check nodes. Nodes are connected via edges according to H

An LDPC code is said to be regular if all variable nodes have the same degree and all parity-check nodes have the same degree, i.e., the degree distributions are uniform; otherwise, the code is irregular.

Encoding can be performed using the k×n generator matrix G whose rows form a basis for C. The data word \(\mathbf {d}\in \mathbb {F}_{2}^{k}\) is encoded as:

$$ \mathbf{x}=G^{T} \mathbf{d} \in C. $$
(3)
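For illustration, the encoding of Eq. (3) for the (7,4) Hamming code of Eq. (2) can be carried out as in the following Python sketch. The particular generator matrix G below is one valid choice whose rows span the kernel of H; it is assumed here for the example, as the paper does not specify one.

```python
import numpy as np

# Parity-check matrix H of the (7,4) Hamming code, Eq. (2)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# One valid generator matrix whose rows span the kernel of H (assumed for illustration)
G = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

d = np.array([1, 0, 1, 1])        # data word in F_2^k
x = (G.T @ d) % 2                 # codeword x = G^T d, Eq. (3), arithmetic over F_2
assert not np.any((H @ x) % 2)    # Hx = 0, so x is indeed a codeword
```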

The Hamming distance, d(x1,x2), between any two codewords x1, x2 ∈ C is the number of vector elements in which they differ. A code’s minimum Hamming distance is then:

$$ d_{m}(C)=\text{min}(\{d(\mathbf{x}_{1},\mathbf{x}_{2}) : \mathbf{x}_{1}, \mathbf{x}_{2} \in C\}). $$
(4)

The weight of a codeword x ∈ C, w(x), is the Hamming distance between it and the all-zero codeword 0, i.e., w(x)=d(0,x). The minimum weight of a code is then naturally defined as:

$$ w_{m}(C)=\text{min}(\{d(\mathbf{0},\mathbf{x}) : \mathbf{x} \in C\}). $$
(5)

As C is a linear space, it follows that dm(C)=wm(C).
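For small codes, the identity dm(C)=wm(C) can be verified directly by enumerating all nonzero codewords. The brute-force helper below is a sketch (its name and interface are ours) and is feasible only for small k.

```python
import numpy as np
from itertools import product

def minimum_weight(G):
    """Minimum weight of the code generated by G, found by enumerating all 2^k - 1
    nonzero codewords; for a linear code this equals the minimum Hamming distance."""
    k, n = G.shape
    weights = []
    for bits in product([0, 1], repeat=k):
        d = np.array(bits)
        if d.any():                       # skip the all-zero data word
            x = (G.T @ d) % 2
            weights.append(int(x.sum()))  # codeword weight = number of ones
    return min(weights)
```

For the (7,4) Hamming code above, this returns 3, consistent with its known minimum distance.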

For transmission, codewords are binary phase-shift keying (BPSK) modulated with the mapping {0,1}→{1,−1}. Denoting the modulated codeword vector as \(\mathbf {x}_{m} \in \mathbb {R}^{n}\), the received vector \(\mathbf {y}\in \mathbb {R}^{n}\) is:

$$ \mathbf{y}=\mathbf{x}_{m} + \mathbf{n}, $$
(6)

where \(\mathbf {n}\in \mathbb {R}^{n}\) is the noise vector. For additive white Gaussian noise (AWGN), \(n_{i}\sim\mathcal {N}\left (0,\sigma ^{2}\right)\), where σ2 is the variance of the noise. The noise magnitude relative to the transmitted signal is quantified by the signal-to-noise ratio (SNR) [16]:

$$ E_{b} / N_{0}=\frac{\overline{(\mathbf{x}_{m})_{i}^{2}}}{2 R \sigma^{2}}, $$
(7)

where \(\overline{(\mathbf{x}_{m})_{i}^{2}}\) is the average power of a transmitted bit.
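As a sketch, the channel of Eqs. (6) and (7) can be simulated as below; BPSK with unit average bit power is assumed (so Eq. (7) gives σ2 = 1/(2R·Eb/N0)), and the helper name awgn_channel is ours.

```python
import numpy as np

def awgn_channel(x, ebn0_db, rate, rng=None):
    """BPSK-modulate a binary codeword and add AWGN at the given Eb/N0 (in dB)."""
    rng = rng if rng is not None else np.random.default_rng()
    x_m = 1.0 - 2.0 * x                     # BPSK mapping {0,1} -> {+1,-1}, Eq. (6) input
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma2 = 1.0 / (2.0 * rate * ebn0)      # Eq. (7) rearranged, assuming unit average bit power
    y = x_m + rng.normal(0.0, np.sqrt(sigma2), size=x_m.shape)
    return y, sigma2
```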

The decoder will attempt to determine the transmitted codeword given the received signal. An ideal maximum a posteriori (MAP) decoder would output the most likely codeword \(\tilde {\mathbf {x}}\) given the received signal, i.e.,

$$ \tilde{\mathbf{x}}=\underset{\mathbf{x}\in C}{\text{argmax}}\ P(\mathbf{x}|\mathbf{y}). $$
(8)

However, MAP decoding for LDPC codes is NP-hard [17]. As a result, the approximate method of belief propagation (BP) is employed [18]. BP is an iterative, local message-passing algorithm in which, at each iteration, a set of messages is passed between variable and check nodes. The first step is to calculate the a priori log-likelihood ratio (LLR) values for each received bit:

$$ \Lambda_{i}=\ln\left(\frac{P(y_{i}|x_{i}=0)}{P(y_{i}|x_{i}=1)}\right), $$
(9)

which, for the AWGN channel, reduces to

$$ \Lambda_{i}=\frac{2y_{i}}{\sigma^{2}}. $$
(10)

If Λi>0, then it is more likely that xi=0; conversely, if Λi<0, then it is more likely that xi=1. As such, a first estimate of the transmitted codeword can be made where

$$ \tilde{\mathrm{x}}_{i}=\frac{1}{2}\left(1-\frac{\Lambda_{i}}{|\Lambda_{i}|}\right). $$
(11)

If \(H\tilde {\mathbf {x}}=\mathbf {0}\), then decoding is complete; if not, an iterative process commences. First, a message is sent from every variable node to each of its connected check nodes. The message sent from variable node vi to check node cj is:

$$ Q_{i,j}=\Lambda_{i}+\sum_{c_{a}\in N(v_{i}) \setminus c_{j}}{R_{a,i}}, $$
(12)

where N(vi)∖cj is the set of check nodes connected to vi excluding cj itself. Note that in the first iteration, Rj,i=0 for all i,j. Next, a message is sent from every check node to each of its connected variable nodes. The message sent from check node cj to variable node vi is:

$$ R_{j,i}=2~\text{arctanh}\left(\prod_{v_{b}\in N(c_{j}) \setminus v_{i}}{\tanh\left(\frac{Q_{b,j}}{2}\right)}\right). $$
(13)

Finally, the marginal LLR values are updated for each bit:

$$ L_{i}=\Lambda_{i}+\sum_{c_{a}\in N(v_{i})}{R_{a,i}}. $$
(14)

These values can then be used to make a new estimate of the transmitted codeword:

$$ \tilde{\text{x}}_{i}=\frac{1}{2}\left(1-\frac{L_{i}}{|L_{i}|}\right). $$
(15)

If \(H\tilde {\mathbf {x}}=\mathbf {0}\), then decoding is complete; if not, another iteration commences.
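The flooding BP decoder described by Eqs. (9)–(15) can be sketched in Python as follows; the dense numpy representation of H, the function name bp_decode, and the iteration limit are illustrative assumptions (a practical decoder would exploit the sparsity of H). The decoder input is the a priori LLR vector, which for the AWGN channel is 2y/σ2, Eq. (10).

```python
import numpy as np

def bp_decode(H, llr, max_iters=50):
    """Sum-product BP in the LLR domain. Returns (hard decision, all checks satisfied)."""
    m, n = H.shape
    R = np.zeros((m, n))                      # check-to-variable messages R_{j,i}
    x_hat = (llr < 0).astype(int)             # initial hard decision, Eq. (11)
    if not np.any((H @ x_hat) % 2):
        return x_hat, True

    for _ in range(max_iters):
        # Variable-to-check messages Q_{i,j}, Eq. (12): total LLR minus the incoming message
        total = llr + R.sum(axis=0)
        Q = total[None, :] - R

        # Check-to-variable messages R_{j,i}, Eq. (13): tanh rule over the other neighbours
        for j in range(m):
            idx = np.flatnonzero(H[j])
            t = np.tanh(Q[j, idx] / 2.0)
            for k, i in enumerate(idx):
                ext = np.prod(np.delete(t, k))
                R[j, i] = 2.0 * np.arctanh(np.clip(ext, -0.999999, 0.999999))

        # Marginal LLRs, Eq. (14), and new hard decision, Eq. (15)
        L = llr + R.sum(axis=0)
        x_hat = (L < 0).astype(int)
        if not np.any((H @ x_hat) % 2):
            return x_hat, True                # all parity checks satisfied
    return x_hat, False                       # detected error
```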

There are three possible decoding outcomes. The first outcome is a correct decoding where \(\tilde {\mathbf {x}}=\mathbf {x}\). Second is a detected error where some maximum number of iterations is reached with \(\tilde {\mathbf {x}} \ne \mathbf {x}\) because \(H \tilde {\mathbf {x}} \ne \mathbf {0}\). The last case is an undetected error where the decoder terminates because \(H \tilde {\mathbf {x}}=\mathbf {0}\), but \(\tilde {\mathbf {x}} \neq \mathbf {x}\).

BP achieves MAP decoding when the graph is a tree. However, the graphs of LDPC codes are typically not trees (if a code’s graph is a tree, then the code must contain weight-two codewords [19]). A loop is a closed walk with no repeated edges or nodes. The size of a loop is the number of edges traversed (always even for a bipartite graph), and the girth of a code is defined as the size of the smallest loop in its graph.

The set of variable nodes that do not converge to the correct value in decoding is denoted T(y) [20]. If T(y)≠∅, then let a=|T(y)| and let b be the number of odd-degree check nodes in the sub-graph induced by T(y) (the sub-graph contains T(y) and its connected check nodes); T(y) is then an (a,b) trapping set. An example of a (4,4) trapping set is shown in Fig. 2.
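Given the set of failed variable nodes, the (a,b) parameters can be read off the induced sub-graph; the short helper below is a sketch under the definition above (its name and interface are ours).

```python
import numpy as np

def trapping_set_params(H, failed_vars):
    """Return (a, b) for the trapping set T(y) given as failed variable node indices:
    a = |T(y)|, b = number of odd-degree check nodes in the sub-graph induced by T(y)."""
    failed_vars = np.asarray(sorted(failed_vars))
    a = int(failed_vars.size)
    induced_degrees = H[:, failed_vars].sum(axis=1)   # check node degrees restricted to T(y)
    b = int(np.sum(induced_degrees % 2 == 1))
    return a, b
```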

Fig. 2

Trapping set example. A (4,4) trapping set. The black variable nodes have failed to converge, and thus the black (odd degree) check nodes are unsatisfied. The white (even degree) check nodes are satisfied

The frame error rate (FER) performance of a decoder is divided into three regions, which can be clearly seen in Fig. 3. The first region is at low SNR, where the decoder fails to decode the majority of the frames. As SNR increases, the FER decreases rapidly in the waterfall region. Then, at higher SNR, the FER decreases at a reduced rate; this is the error floor region. The error floor can be due either to undetected errors (a result of low-weight codewords) or to detected errors. In the second case, the error floor can be lowered by mitigating trapping sets; this is the aim of the augmented decoder.

Fig. 3

Quasi-regular FER plot. FER performance of augmented decoders for the n=4095, R=0.82 quasi-regular code

3 Method

3.1 Parity-check formulation

Graph augmentation aims to mitigate the effect of trapping sets in the graph by iteratively duplicating a subset of parity checks. An augmented decoder employs a set of modified or augmented parity-check matrices, called the candidate set. Each of the candidates in the set has the form:

$$ H_{A}=\left(\begin{array}{c} H\\ H_{d} \end{array}\right), $$
(16)

where H is the original (given) parity-check matrix and Hd is a dn×n matrix containing dn rows randomly selected from H; d is called the augmentation density.

On the graph of HA, the augmentation defined above corresponds to the duplication of a subset of parity-check nodes (of the original graph). The candidate set contains N augmented matrices denoted by:

$$ \Delta=\left\{{H}_{A}^{1}, {H}_{A}^{2}, \ldots, {H}_{A}^{N}\right\}. $$
(17)

The operation of the augmented decoder is outlined in Algorithm 1. Decoding is first attempted on the standard graph (not augmented). If this decoding step is unsuccessful, then decoding is reattempted using an augmented candidate graph until either decoding is successful or the candidate set is exhausted.

As the duplicated rows in each candidate are selected randomly, it is possible to generate candidates as they are required. Furthermore, it is possible to vary the augmentation density depending on SNR. Section 4.1 demonstrates that this can lead to improved performance.
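A minimal sketch of this procedure (Algorithm 1) is given below, reusing the bp_decode helper sketched in Section 2. The paper does not state whether the duplicated rows are drawn with or without replacement, so selection without replacement is assumed, and the default density and candidate count are illustrative.

```python
import numpy as np

def augment(H, density, rng):
    """Stack H with round(density * n) of its own rows chosen at random, Eq. (16)."""
    m, n = H.shape
    num_dup = max(1, int(round(density * n)))
    rows = rng.choice(m, size=min(num_dup, m), replace=False)  # without replacement (assumed)
    return np.vstack([H, H[rows]])

def augmented_decode(H, llr, num_candidates=100, density=0.057, max_iters=50, seed=0):
    """Algorithm 1: attempt standard BP first, then BP on augmented candidate graphs
    until decoding succeeds or the candidate set is exhausted (candidates generated on the fly)."""
    rng = np.random.default_rng(seed)
    x_hat, ok = bp_decode(H, llr, max_iters)      # attempt on the original (standard) graph
    attempts = 1
    while not ok and attempts <= num_candidates:
        x_hat, ok = bp_decode(augment(H, density, rng), llr, max_iters)
        attempts += 1
    return x_hat, ok, attempts
```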

3.2 BP formulation

The effect of duplicating a subset of parity checks can also be considered as a modification of BP on the code’s original graph (i.e., with no duplicated checks). First, we define a function that indicates whether a check node has been duplicated:

$$ r(j)=\left\{ \begin{array}{ll} 1 & \text{if} \ c_{j} \ \text{duplicated}\\ 0 & \text{if} \ c_{j} \ \text{not duplicated} \end{array} \right. $$
(18)

The equivalent variable to check node message on the original graph is then:

$$ Q_{i,j}=\Lambda_{i}+\sum_{c_{a}\in N(v_{i}) \setminus c_{j}}{R_{a,i}}+\frac{1}{2}r(j)R_{j,i}, $$
(19)

and the check to variable node message is:

$$ R_{j,i}=2\text{arctanh}\left(\prod_{v_{b}\in N(c_{j}) \setminus v_{i}}\tanh\left(\frac{Q_{b,j}}{2}\right)\right)\left(1+r\left(j\right)\right). $$
(20)

It can be seen that the effect of duplicating a check node is to double the magnitude of the messages it sends in our modified BP. It can also be seen that the message sent from variable node vi to check node cj is no longer independent of the message sent from cj to vi in the previous iteration, i.e., feedback has been introduced.

This feedback and message amplification alter the rate at which the marginal probabilities converge. Reliable information from fast-converging variable nodes can then propagate to other variable nodes (via check nodes). Variable nodes will converge in different orders depending on which checks have been duplicated; this can potentially avoid a trapping set that occurs when standard BP is used.

Implementing augmentation as a modification to BP means that there is no complexity overhead for each iteration. However, implementing it based on duplicated rows in the parity-check matrix allows for an efficient pre-existing BP decoder to be used.
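The modified updates of Eqs. (19) and (20) can be sketched as a drop-in replacement for the message computations in the earlier bp_decode sketch; here dup is a 0/1 vector holding r(j) for each check node, and the function name is ours.

```python
import numpy as np

def augmented_message_update(H, llr, R, dup):
    """One round of the modified updates on the original graph, Eqs. (19) and (20);
    dup[j] = r(j) marks duplicated check nodes. A sketch, not an optimized decoder."""
    m, n = H.shape
    # Variable-to-check messages, Eq. (19): extrinsic sum plus the feedback term (1/2) r(j) R_{j,i}
    total = llr + R.sum(axis=0)
    Q = total[None, :] - (1.0 - 0.5 * dup[:, None]) * R

    # Check-to-variable messages, Eq. (20): standard tanh rule, amplified by (1 + r(j))
    R_new = np.zeros_like(R)
    for j in range(m):
        idx = np.flatnonzero(H[j])
        t = np.tanh(Q[j, idx] / 2.0)
        for k, i in enumerate(idx):
            ext = np.prod(np.delete(t, k))
            R_new[j, i] = (1.0 + dup[j]) * 2.0 * np.arctanh(np.clip(ext, -0.999999, 0.999999))
    return R_new
```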

4 Numerical results

4.1 Quasi-regular MacKay

The first code considered is an n=4095 and R=0.82 quasi-regular code [21]. The error floor of this code when using the standard belief propagation decoder is shown in Fig. 3.

To find the optimal augmentation density, decoders with N=100 candidates of varying d were tested at four SNR values as shown in Fig. 4. It can be seen that at higher SNR (in the error floor region), the FER reduction of the augmented decoder is more pronounced than at lower SNR (in the waterfall region). Furthermore, it can be seen that as SNR increases, the optimal augmentation density also increases.

Fig. 4

Quasi-regular density plot. FER performance of an augmented decoder with N=100 candidates with varying augmentation density d for the n=4095 and R=0.82 quasi-regular code

The performance of augmented decoders with N=10, N=100, and N=1000 candidates is shown in Fig. 3. The augmentation density of these decoders is selected to be the optimal values shown in Fig. 4: 4% at 3 dB, 5.7% at 3.25–3.5 dB, and 23% at 3.75 dB. With 1000 candidates at 3.75 dB, approximately half the errors are undetected errors due to weight 10–14 codewords, i.e., this is the start of a low-weight codeword error floor.

The average number of decoding attempts (including the initial standard BP attempt) is shown in Fig. 5. It can be seen that at higher SNR, the increase in the average number of decoding attempts is negligible; this is due to two factors. Firstly, at higher SNR, the standard decoder is able to decode more frames, so the likelihood that any candidates are required is reduced. Secondly, the average number of candidates required to decode a frame that cannot be decoded using standard BP reduces as SNR increases.

Fig. 5

Quasi-regular decoding attempts plot. Average number of decoding attempts of augmented decoders for the n=4095 and R=0.82 quasi-regular code

4.2 WiMAX

4.2.1 n=1056 and R=1/2 code

The second code considered is the n=1056 and R=1/2 quasi-cyclic LDPC (QC-LDPC) code from the IEEE 802.16e WiMAX standard [22]. Here, augmentation is compared to the second backtracking method presented in [9]. As is shown in Fig. 6, the decoder with backtracking has similar waterfall performance to the standard BP decoder but a significantly improved error floor.

Fig. 6

n=1056 WiMAX FER plot. FER performance of augmented decoders for the n=1056 and R=1/2 WiMAX code. Also shown is the performance of a backtracking decoder

Augmented decoders with N=10, N=100, and N=1000 candidates were tested. The augmentation density used at each SNR was optimized in the same way as for the quasi-regular code of Section 4.1. These densities are 4% for 1.4–1.8 dB, 5.7% for 2–2.2 dB, 8% for 2.4–3 dB, and 11% for 3.2–3.4 dB. It can be seen that the augmented decoders provide a significant gain in waterfall FER. Furthermore, decoding based on augmentation reaches an error floor due to low-weight (mostly weight 21) codewords at a significantly lower SNR than the backtracking decoder. At 3.4 dB, over 95% of the errors found using 1000 candidates are undetected; as such, the FER cannot be reduced much further.

4.2.2 n=576 and R=1/2 code

The third code considered is the n=576 and R=1/2 QC-LDPC code from the WiMAX standard. Here, augmentation is compared to multiple-bases belief-propagation (MBBP) and leaking MBBP (L-MBBP) [8]. This comparison is important as both the augmented decoder and the MBBP decoder make use of a set of parity-check matrices at the decoder. For this reason, we selected the size of the candidate set to be identical to the number of parity-check matrices used by Hehn et al. in [8]. An augmentation density of 5.7% was used at all SNR values. The results are shown in Fig. 7, indicating that graph augmentation yields a lower FER than the MBBP and L-MBBP methods. Approximately a third of the errors found at 2.6 dB using the decoder based on 30 candidates are undetected; this suggests that there will be a low-weight (mostly weight 13) codeword error floor at higher SNR.

Fig. 7

n=576 WiMAX FER plot. FER performance of augmented decoders for the n=576 and R=1/2 WiMAX code. Also shown is the performance of various MBBP decoders

4.3 802.11n

The fourth code considered is the n=1944 and R=1/2 code from the IEEE 802.11n standard [23]. This code has a similar construction to the WiMAX code. Here, augmentation is compared to the performance of the approximate node-wise scheduling (ANS) serial decoder presented in [12]. It can be seen in Fig. 8 that the serial decoder performs similarly to the standard flooding-based decoder in the waterfall region. However, serial decoding provides an improved FER in the error floor region.

Fig. 8

802.11n FER plot. FER performance of augmented decoders for the n=1944 and R=1/2 802.11n code. Also shown is the performance of an ANS serial decoder

Augmented decoders with N=10, N=100, and N=1000 candidates were tested. The augmentation densities used were 2% at 1–1.25 dB, 2.8% at 1.5–1.75 dB, and 5.7% at 2 dB. It can be seen that the decoder with ten candidates gives similar performance to the serial decoder and that the decoders with 100 and 1000 candidates give significantly better performance. At 2 dB, the augmented decoder based on 1000 candidates produces more undetected than detected errors; this suggests that it will present a low-weight (weights ranging from 27 to 40) codeword error floor beyond this point.

4.4 Margulis

The final code considered is the n=2640 and R=1/2 Margulis code [24]. This is a protograph-based code, and here, augmentation is compared to the performance of a serial decoder employing schedule diversity [14] (note that this type of decoder only works for protograph-based codes). This decoder first attempts decoding with an optimized message-passing schedule; if decoding fails, then it iteratively reattempts decoding using a scrambled schedule, up to a maximum of T attempts. The performances of approximate node-wise scheduling (ANS), backtracking, and G-LDPC decoders are also given for this code (these are also taken from [14]).

Augmented decoders with N=10, N=100, and N=1000 candidates were tested, and the results are shown in Fig. 9. An augmentation density of 2% was used at all SNR values considered. It can be seen that at 2.3 dB, the augmented decoder with ten candidates is able to outperform the backtracking and G-LDPC decoders and gives similar performance to the ANS decoder. The schedule diversity decoder has a performance between that of an augmented decoder with 10 candidates and one with 100 candidates (it can be matched by using approximately 35 candidates).

Fig. 9

Margulis FER plot. FER performance of augmented decoders for the n=2640 and R=1/2 Margulis code. Also shown are the performances of ANS, backtracking, G-LDPC, and schedule diversity decoders

5 Conclusions

We proposed the use of a decoder based on graph augmentation for mitigating trapping sets and thus error floors in LDPC codes. The performance of the proposed decoder was tested through computer simulation on a number of different LDPC codes (both regular and irregular). In all cases, the augmented decoder provided a lower error floor frame error rate than other modified belief propagation decoders presented in the literature.

Abbreviations

ANS:

Approximate node-wise scheduling

AWGN:

Additive white Gaussian noise

BP:

Belief propagation

BPSK:

Binary phase-shift keying

FER:

Frame error rate

G-LDPC:

Generalized LDPC

LDPC:

Low-density parity-check

LLR:

Log-likelihood ratio

L-MBBP:

Leaking MBBP

MAP:

Maximum a posteriori

MBBP:

Multiple-bases belief-propagation

PEG:

Progressive edge-growth

QC-LDPC:

Quasi-cyclic LDPC

SNR:

Signal-to-noise ratio

References

  1. R Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory. 8(1), 21–28 (1962).

  2. T Richardson, Error floors of LDPC codes, in Proceedings of the 41st Annual Allerton Conference on Communication, Control and Computing (University of Illinois at Urbana-Champaign, 2003), pp. 1426–1435.

  3. TJ Richardson, MA Shokrollahi, RL Urbanke, Design of capacity-approaching irregular low-density parity-check codes. IEEE Trans. Inf. Theory. 47(2), 619–637 (2001).

  4. X-Y Hu, E Eleftheriou, D-M Arnold, Progressive edge-growth Tanner graphs, in IEEE Global Telecommunications Conference (GLOBECOM ’01), vol. 2 (IEEE, San Antonio, 2001), pp. 995–1001.

  5. DV Nguyen, SK Chilappagari, MW Marcellin, B Vasic, On the construction of structured LDPC codes free of small trapping sets. IEEE Trans. Inf. Theory. 58(4), 2280–2302 (2012).

  6. O Fainzilber, E Sharon, S Litsyn, Decreasing error floor in LDPC codes by parity-check matrix extensions, in IEEE International Symposium on Information Theory (ISIT 2009) (IEEE, Seoul, 2009), pp. 374–378.

  7. Y Han, WE Ryan, Low-floor decoders for LDPC codes. IEEE Trans. Commun. 57(6), 1663–1673 (2009).

  8. T Hehn, JB Huber, S Laendner, Improved iterative decoding of LDPC codes from the IEEE WiMAX standard, in 2010 International ITG Conference on Source and Channel Coding (SCC) (IEEE, Siegen, 2010), pp. 1–6.

  9. J Kang, Q Huang, S Lin, K Abdel-Ghaffar, An iterative decoding algorithm with backtracking to lower the error-floors of LDPC codes. IEEE Trans. Commun. 59(1), 64–73 (2011).

  10. J Hamkins, Performance of low-density parity-check coded modulation, in 2010 IEEE Aerospace Conference (IEEE, Big Sky, 2010), pp. 1–14.

  11. X Zhang, PH Siegel, Quantized iterative message passing decoders with low error floor for LDPC codes. IEEE Trans. Commun. 62(1), 1–14 (2014).

  12. AIV Casado, M Griot, RD Wesel, LDPC decoders with informed dynamic scheduling. IEEE Trans. Commun. 58(12), 3470–3479 (2010).

  13. X Liu, Z Zhou, R Cui, E Liu, Informed decoding algorithms of LDPC codes based on dynamic selection strategy. IEEE Trans. Commun. 64(4), 1357–1366 (2016).

  14. H-C Lee, Y-L Ueng, LDPC decoding scheduling for faster convergence and lower error floor. IEEE Trans. Commun. 62(9), 3104–3113 (2014).

  15. E Cavus, B Daneshrad, A performance improvement and error floor avoidance technique for belief propagation decoding of LDPC codes, in IEEE 16th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2005), vol. 4 (IEEE, Berlin, 2005), pp. 2386–2390.

  16. DJ MacKay, Information Theory, Inference and Learning Algorithms (Cambridge University Press, Cambridge, 2003).

  17. E Berlekamp, R McEliece, H Van Tilborg, On the inherent intractability of certain coding problems (corresp.). IEEE Trans. Inf. Theory. 24(3), 384–386 (1978).

  18. DJ MacKay, Good error-correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory. 45(2), 399–431 (1999).

  19. T Richardson, R Urbanke, Modern Coding Theory (Cambridge University Press, New York, 2008).

  20. B Vasić, SK Chilappagari, DV Nguyen, SK Planjery, Trapping set ontology, in 47th Annual Allerton Conference on Communication, Control, and Computing (IEEE, Monticello, 2009), pp. 1–7.

  21. Encyclopedia of Sparse Graph Codes, Code 4095.737.3.101. http://www.inference.org.uk/mackay/codes/data.html. Accessed 30 June 2016.

  22. IEEE Standard for Local and metropolitan area networks Part 16: Air Interface for Broadband Wireless Access Systems. IEEE Std 802.16-2009 (Revision of IEEE Std 802.16-2004), 1–2080 (2009). https://doi.org/10.1109/IEEESTD.2009.5062485.

  23. IEEE Standard for Information technology – Telecommunications and information exchange between systems – Local and metropolitan area networks – Specific requirements – Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. IEEE Std 802.11-2016 (Revision of IEEE Std 802.11-2012), 1–3534 (2016). https://doi.org/10.1109/IEEESTD.2016.7786995.

  24. GA Margulis, Explicit constructions of graphs without short cycles and low density codes. Combinatorica. 2(1), 71–78 (1982).


Author information


Contributions

ARR contributed 50%, and all other authors contributed an equal 12.5%. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Alex R. Rigby.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Rigby, A., Olivier, J., Myburgh, H. et al. Augmented decoders for LDPC codes. J Wireless Com Network 2018, 189 (2018). https://doi.org/10.1186/s13638-018-1203-5
