
Shortened LDPC codes accelerate OSD decoding performance

Abstract

Medium-length LDPC codes are in demand in certain areas, such as mobile environments (Wi-Fi and Mobile WiMAX) and telecommand links from the ground to space, because of their lower latency. However, because the length of these codes is rather short, their decoding error rates are worse than those of long codes. In this paper, we show that the combination of shortened LDPC codes, whose shortening positions are properly selected, and ordered statistic decoding (OSD) significantly improves the decoding error rate. For the best choice of shortening positions, we used an integer programming approach; in particular, we utilized the Feldman–Wainwright–Karger code polytope for this purpose. Some studies have independently reported the efficiency of shortened LDPC codes and of OSD methods. This paper emphasizes that their combination results in multiplicative effectiveness.

Introduction

It is well known that long LDPC codes with the sum-product (SP) decoding algorithm show good decoding error properties near the Shannon limit; see Sections 17.10–17.19 of [13]. However, medium-length LDPC codes (whose lengths range from a hundred to a few thousand bits) are adopted in some actual communication standards, such as IEEE 802.11n (Wi-Fi), 802.16e (WiMAX) and telecommand (TC) links from ground to space [4, 5], because of their lower latency. Unfortunately, the error correction abilities of these medium-length codes are inferior to those of long codes. As a method that compensates for this gap, the ordered statistic decoding (OSD) method [8] has been considered effective. This method uses the outputs of SP decoding as a reliability measure and reprocesses them to improve the output quality. However, for each reprocessing phase i, the method requires \(\left( {\begin{array}{c}k\\ i\end{array}}\right)\) codewords to be processed, where k is the information bit length. Thus, up to phase p, a total of \(\sum _{i=0}^{p}\left( {\begin{array}{c}k\\ i\end{array}}\right)\) codewords will be generated. This procedure is called “Order-p reprocessing”. In practice, the reprocessing order is therefore limited to a relatively small number, for example at most 4; see [2]. Due to this limitation on the reprocessing order, the improvement over SP decoding is not as large as expected, especially if the reprocessing order is small, such as \(p=1\) or 2. To resolve these contradictory requirements on decoding precision and decoding time, we propose a method that is a concatenation of the iterative (SP) decoding method and the OSD method. It is known that shortened LDPC codes improve the decoding error rate to a certain extent; see, e.g., [16, 17] and references therein.
Thus, by choosing shortened LDPC codes in an appropriate way, the number of erroneous bits among the most reliable bits (MRB) is expected to be reduced, and this decrease of errors in the MRB improves the total decoding error significantly, even with “Order-1” or “Order-2” reprocessing. Here, we use the term “appropriately shortened LDPC codes” for LDPC codes whose shortening positions were selected to decrease the decoding error. We compared three methods for the selection of shortening positions, namely (A): every 8 bits, (B): worst reliable bits [16], and (C): an integer programming based approach. We note that in method (C), the Feldman–Wainwright–Karger code polytope [7] is used in an integer programming (IP) procedure to enumerate a set of codewords with small Hamming weight. We exclude these codewords by suitably choosing the shortening positions, and for this choice of shortening positions, IP again plays an important role. To the best of our knowledge, approach (C) is new in the current paper.
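The candidate count \(\sum _{i=0}^{p}\left( {\begin{array}{c}k\\ i\end{array}}\right)\) quoted above grows quickly in p, which is why reprocessing is limited to small orders. A quick arithmetic sketch (the value k = 288 matches the WiMAX code considered later; the function name is ours):

```python
from math import comb

def osd_candidates(k: int, p: int) -> int:
    """Total codewords generated by Order-p reprocessing: sum_{i=0}^{p} C(k, i)."""
    return sum(comb(k, i) for i in range(p + 1))

# Example with k = 288 (the (576, 288) WiMAX code):
print(osd_candidates(288, 1))  # 289 candidates for Order-1
print(osd_candidates(288, 2))  # 41617 candidates for Order-2
print(osd_candidates(288, 4))  # 284701993 -- why the order is kept small in practice
```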

The decoding time is kept reasonable because we assume a relatively small-order reprocessing scheme; we examine this with numerical experiments in Sect. 6.

For error-correcting performance, by appropriately choosing the shortening positions of information bits, we found that Order-2 reprocessing is enough to achieve a codeword error rate (CER) of less than \(10^{-5}\) at a signal-to-noise ratio \(E_{b}/N_{0}=3.0\) dB. To be specific, the following codes accomplish this:

  1. IEEE 802.16e (WiMAX) LDPC code with \((n,k)=(576,288)\), shortened by 36 bits,

  2. IEEE 802.11n (Wi-Fi) LDPC code with \((n,k)=(648,324)\), shortened by 40 and 36 bits,

  3. Consultative Committee for Space Data Systems (CCSDS) TC link LDPC code with \((n,k)=(512,256)\), shortened by 32 bits,

where n is the code length and k the information bit length. We note that the codeword error rate in TC links is required to be lower than \(10^{-5}\); see Baldi et al. [2]. Of the three methods for determining shortening positions, method (C) appears to be especially effective in a relatively high signal-to-noise ratio range.

Finally, we note that the method proposed here is a sort of concatenation of the SP and OSD methods, analogous to [2]. Hence, a further concatenation with a cyclic redundancy check (CRC), as in [9, 14], may be possible by decreasing the number of shortened bits and using them for the CRC.

This paper is organized as follows. In Sect. 3, we review shortened LDPC codes, the OSD method, and their concatenation. In Sect. 4, we propose three different methods for selecting shortening positions. We evaluate these methods through a series of numerical experiments in Sect. 5. Section 6 discusses our evaluation of the execution time. Throughout the experiments, we assume a relatively small-order OSD, while keeping the error correction ability at a satisfactory level; thus, the execution time is also relatively small within this class of OSD decoding. Finally, in Sect. 7 we discuss some topics that we leave for future research.

Methods

We have examined the effectiveness of the encoding/decoding method via computer simulations, which were performed on an Intel(R) Xeon(R) E5-1660 3.70 GHz host using gcc 4.4.7 with -O3.

Shortened LDPC codes and the OSD method

In this section, we review shortened LDPC codes with SP decoding and the OSD method, and then look for a way to combine them into a single coding algorithm. First, we introduce some notation. Let \((n, k)\) denote the code length and the information bit length, respectively. We assume an additive white Gaussian noise (AWGN) channel with two-sided noise power spectral density \(N_0/2\); thus, the standard deviation of the noise is \(\sigma =\sqrt{N_{0}/2}\). As for modulation, binary phase-shift keying (BPSK) is assumed, whose signal energy per information bit is denoted by \(E_{b}\). Thus, over a noiseless channel, the outputs of the matched filters are

$$\begin{aligned} {\left\{ \begin{array}{ll} -\sqrt{RE_b}, & \text {if the sent information bit is 1}, \\ \sqrt{RE_b}, & \text {if the sent information bit is 0}, \end{array}\right. } \end{aligned}$$
(1)

where R is the code rate. Next, we briefly introduce the encoding algorithm, which is standard for shortened LDPC codes. Assume an \((n+\alpha ,k+\alpha )\) base LDPC code is available, where \(\alpha\) is the number of bits shortened from the base code. We denote the parity check matrix of the base LDPC code by \(H=\left( h_{i,j}\right)\).
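As a sanity check of the channel model in Eq. (1), the sketch below maps bits to \(\pm \sqrt{RE_b}\) and adds Gaussian noise of standard deviation \(\sigma =\sqrt{N_{0}/2}\). Normalizing \(E_b\) to 1 and the function name are our illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def bpsk_awgn(bits, R, EbN0_dB, seed=0):
    """BPSK over AWGN per Eq. (1): bit 1 -> -sqrt(R*Eb), bit 0 -> +sqrt(R*Eb),
    then add noise with standard deviation sigma = sqrt(N0/2).
    Eb is normalized to 1 here (an assumption for illustration)."""
    rng = np.random.default_rng(seed)
    Eb = 1.0
    N0 = Eb / 10 ** (EbN0_dB / 10)          # N0 from the given Eb/N0 in dB
    sigma = np.sqrt(N0 / 2)
    x = np.where(np.asarray(bits) == 1, -np.sqrt(R * Eb), np.sqrt(R * Eb))
    return x + rng.normal(0.0, sigma, size=x.shape)
```

At a very high signal-to-noise ratio the matched-filter outputs keep the signs of Eq. (1), which is what the hard decision in the decoder relies on.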

[Figure a: “Encoding Algorithm” (pseudocode)]

We introduce the following three different methods to determine the shortening position T:

$$\begin{aligned} {\left\{ \begin{array}{ll} \text {(A) }T=\{8,16,\ldots ,8\alpha \}\text { (equal interval)}\\ \text {(B) Worst reliable }\alpha \text { positions according to }[16]\\ \text {(C) Integer Programming based approach} \end{array}\right. } \end{aligned}$$

We note that the length-\((n+\alpha)\) sequence is not transmitted in the above encoding process. Instead, we transmit a length-n sequence; thus, the coding rate is k/n (Fig. 1). Because both sender and receiver know the \(\alpha\) shortening positions T, and the bits there are set to zero, it is not necessary to transmit these all-zero bits.
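The encoding process above can be sketched as follows, under the assumption of a systematic base code whose first \(k+\alpha\) codeword positions are the information positions (this matrix layout and the function `encode_shortened` are hypothetical illustrations, not the paper's exact algorithm figure):

```python
import numpy as np

def encode_shortened(info_bits, G_base, T):
    """Sketch of shortened-LDPC encoding.

    info_bits : the k user bits
    G_base    : (k+alpha) x (n+alpha) systematic generator matrix over GF(2)
                (assumed: information positions are the first k+alpha columns)
    T         : the alpha shortening positions among the information positions;
                these bits are fixed to 0 and never transmitted.
    """
    k_plus_a, n_plus_a = G_base.shape
    u = np.zeros(k_plus_a, dtype=int)
    free = [i for i in range(k_plus_a) if i not in set(T)]
    u[free] = info_bits                      # user bits fill the non-shortened slots
    cw = u @ G_base % 2                      # base codeword of length n+alpha
    keep = [j for j in range(n_plus_a) if j not in set(T)]
    return cw[keep]                          # the n transmitted bits (zeros at T dropped)
```

Since the receiver knows T, it can re-insert the zero bits (with effectively infinite reliability) before decoding, which is exactly what step (4) of the decoding algorithm exploits.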

Fig. 1

Encoding process

Next, we introduce a decoding algorithm, which is a concatenation of the SP algorithm for shortened LDPC codes and the OSD method. We note that the following decoding algorithm is essentially the same as the hybrid decoding algorithm in [2], except for the adoption of shortened LDPC codes (for applications of this hybrid decoding algorithm to non-binary LDPC codes, consult Baldi et al. [1]). However, as shown in the numerical experiments discussed in Sect. 5, the adoption of appropriately shortened LDPC codes fairly improves the error correction ability compared to applying the OSD process to the original base LDPC codes.

In the following, L denotes the maximum iteration number of the SP algorithm, M is a large real number, and \(\mathbf{y} =(y_{1},\ldots ,y_{k},y_{k+\alpha +1},\ldots ,y_{n+\alpha })\) \((y_{j}\in {\mathbb {R}}, j\in \{ 1,\ldots ,k,k+\alpha +1,\ldots ,n+\alpha \})\) is a received sequence. In step (2) of the following decoding procedure, we use accumulated log-likelihood ratio (LLR) information as in [11], owing to its efficiency. In step (3), we apply a hard decision to the outputs of SP. If this hard decision satisfies the parity condition, we adopt it as the final decoding result. In step (4), the positions corresponding to the set T are set to the large value M, since these bits contain no errors. Steps (5) and (6) are the conventional OSD method.

[Figure b: decoding algorithm, steps (1)–(6) (pseudocode)]

Here we note that we do not flip any of \(z'_{\pi (n+1)},\ldots ,\) \(z'_{\pi (n+\alpha )}\) in step (6), since this subsequence does not contain any errors.
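Steps (3) and (4) of the decoder can be sketched as follows, assuming the \(\alpha\) shortened zero bits have been re-inserted into the LLR vector at the indices in T. The function name and the sign convention (positive LLR means bit 0, matching Eq. (1)) are our illustrative assumptions.

```python
import numpy as np

def postprocess_sp(llr, H, T, M=50.0):
    """Sketch of steps (3)-(4) of the hybrid decoder.

    Step (3): hard-decide the SP output; if all parity checks are satisfied,
    accept it as the final result.
    Step (4): otherwise pin the shortened positions T to the large LLR M
    (they are known zeros) and hand the LLRs to OSD (steps (5)-(6))."""
    z = (np.asarray(llr) < 0).astype(int)    # hard decision (positive LLR -> 0)
    if not (H @ z % 2).any():                # syndrome is all-zero: parity holds
        return z, None                       # final decoding result
    llr_osd = np.array(llr, dtype=float)
    llr_osd[list(T)] = M                     # shortened bits carry no errors
    return None, llr_osd                     # to be reprocessed by OSD
```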

Selection of \(\alpha\)-shortening positions

The selection of the \(\alpha\) shortening positions among the information bits is important for improving the CER; see [3, 16, 17] and their references. Shortening techniques do not depend on the code length; however, they yield significant progress in error correction ability when applied to medium-length codes with OSD decoding. This is because a small refinement in the error rate of the MRB often results in a large improvement in the OSD decoding process, as we shall see in the numerical experiments in the following sections. In the experiments, we try the following three types of shortening methods. The first method is code independent, while the other two are code dependent.

[Figures c and d: pseudocode for the shortening-position selection methods]

In the case of (B), we fixed \(E_{s}=0.5E_{b}=0.5N_{0}=4\) and \(M=50\). By applying method (B), we obtained the tables describing the shortening positions for (1) the \((n+\alpha ,k+\alpha ,\alpha )=(576,288,36)\) IEEE 802.16e (WiMAX) LDPC code, (2) the \((n+\alpha ,k+\alpha ,\alpha )=(648,324,40)\) and (648, 324, 36) IEEE 802.11n (Wi-Fi) LDPC codes, (3) the \((n+\alpha ,k+\alpha ,\alpha )=(256,128,16)\) CCSDS LDPC code, and (4) the \((n+\alpha ,k+\alpha ,\alpha )=(512,256,32)\) CCSDS LDPC code (Table 1).

Table 1 Shortening positions T of each LDPC code when method (B) is applied

For the third method, we need some definitions.

Definition 1

Let

$$\begin{aligned} A_{i}&:=\left\{ j\in \{1,2,\ldots , n+\alpha \}:h_{i,j}=1\right\} \nonumber \\&\qquad \forall i\in \{1,2,\ldots , n-k\} \end{aligned}$$
(2)
$$\begin{aligned} T_{i}&:=\{S\subset A_{i}: \vert S\vert \text { is odd}\} \end{aligned}$$
(3)

and

$$\begin{aligned}&1+\sum _{t\in S}(x_{t}-1)-\sum _{t\in A_{i}\setminus S}x_{t}\le 0,\nonumber \\&\quad \forall i\in \{1,\ldots ,n-k\},\,\,\forall S\in T_{i} \end{aligned}$$
(4)
$$\begin{aligned}&0\le x_{j}\le 1,\quad \forall j\in \{1,\ldots , n+\alpha \}. \end{aligned}$$
(5)

The following polytope is called the Feldman–Wainwright–Karger code polytope [7]:

$$\begin{aligned} P(H)=\{\mathbf{x }\in {\mathbb {R}}^{n+\alpha }: \mathbf{x } \text { satisfies } (4)\text { and }(5) \}. \end{aligned}$$
(6)

First, we determine a set of codewords with small Hamming weight.

[Figure e: procedure for enumerating codewords of small Hamming weight over P(H)]

The set C obtained through the above process is a collection of codewords with small Hamming weight. Table 2 shows (a part of) the positions where \(x^{*}_{t}=1\), for (1) the \((n+\alpha ,k+\alpha ,\alpha )=(576,288,36)\) IEEE 802.16e (WiMAX) LDPC code and (2) the \((n+\alpha ,k+\alpha ,\alpha )=(648,324,40)\) IEEE 802.11n (Wi-Fi) LDPC code. We computed 900 codewords in ascending order of Hamming weight for case (1) and 800 codewords for case (2), using the MIP solver Gurobi Optimizer 8.1. We try to prevent the codewords of small Hamming weight appearing in the set C from remaining in the shortened code, as far as possible. To decide the shortening positions that achieve this objective, we make use of the integer programming technique again. We set a positive integer K as large as possible so that the following optimization problem remains feasible, where N denotes the total number of shortening positions.

[Figure f: integer programming problem (C) for selecting the shortening positions]
Table 2 A set of codewords which have small Hamming weight

The vector \(\mathbf{z} =(z_{1},\ldots ,z_{n})\) above represents the positions to be shortened. More precisely, if \(z_{i}=1\), then the i-th position in the original codeword is shortened and its value is set to “0”. The next proposition shows that none of the first K codewords in the set C can appear in the proposed shortened code.
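In spirit, problem (C) asks for N shortening positions that “hit” at least one 1-position of each of the first K codewords in C, with K as large as possible. The brute-force toy sketch below illustrates this covering view; the paper instead solves the problem as an IP with Gurobi, and the data here are invented for illustration only.

```python
from itertools import combinations

def max_K_cover(codeword_supports, n, N):
    """Brute-force sketch of problem (C): choose N shortening positions that
    intersect the support of each of the first K low-weight codewords,
    maximizing K.  codeword_supports must be sorted by ascending Hamming
    weight, as the set C is.  Exhaustive search is only viable for toy sizes."""
    best_K, best_T = 0, ()
    for T in combinations(range(n), N):
        Tset = set(T)
        K = 0
        for sup in codeword_supports:
            if Tset & set(sup):
                K += 1           # this low-weight codeword is excluded
            else:
                break            # feasibility fails at the (K+1)-th codeword
        if K > best_K:
            best_K, best_T = K, T
    return best_K, best_T

# Toy example: 3 low-weight codeword supports over n = 6 positions, N = 2
sups = [[0, 1], [2, 3], [0, 4]]
print(max_K_cover(sups, 6, 2))   # positions {0, 2} exclude all three codewords
```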

Proposition 1

Let \(\mathbf{z} =(z_{1},\ldots ,z_{n})\) be a feasible solution to the integer programming problem (C), and for each \(z_{i}=1\), set \(i\in T\) in the “Encoding Algorithm”. Then, none of the first K codewords in the set C appears in the set of codewords produced by the “Encoding Algorithm”.

(Proof)

Let \(C_{K}\subset C\) be the set of the first K codewords in C and \({{\mathcal {C}}}\) the set of codewords produced by the “Encoding Algorithm”. Let \(\mathbf{x} ^{*}\) be an element of \(C_{K}\) with minimum Hamming weight \(d^{*}\). Then, from the definition of \(\mathbf{z}\), at least one position of \(\mathbf{x} ^{*}\) whose bit has the value “1” is forced to “0” in the “Encoding Algorithm”. Thus, \(\mathbf{x} ^{*}\) is never contained in \({{\mathcal {C}}}\). Assume that no codeword of \(C_{K}\) with Hamming weight at most d (\(d\ge d^{*}\)) is contained in \({{\mathcal {C}}}\), and let \(\mathbf{x} _{d+1}\in C_{K}\) be an arbitrary codeword with Hamming weight \(d+1\). Then, again from the definition of \(\mathbf{z}\), at least one position of \(\mathbf{x} _{d+1}\) whose bit has the value “1” is forced to “0” in the “Encoding Algorithm” (thus the modified \(\mathbf{x} _{d+1}\) has Hamming weight at most d). Hence, by the induction hypothesis, \(\mathbf{x} _{d+1}\) is never contained in \({{\mathcal {C}}}\), which proves the assertion. \(\square\)

Thus, the minimum distance of the proposed shortened code is significantly increased compared to the original code. Hence, we can expect the number of erroneous bits in the proposed shortened code to decrease, especially in a relatively high signal-to-noise ratio environment. This modest improvement in erroneous bits accelerates the total improvement of the OSD decoding method.

Although we experimented with four different LDPC codes, we demonstrated approach (C) for only the above two cases (1) and (2) (the latter for the two different cases \(\alpha =40\) and \(\alpha =36\)); see Table 3. This is mainly due to the limit of computation time (in our computing environment, it took about a month to obtain the result of Table 2). Since method (C) shows a remarkable advantage in error correction ability, as we shall see below, it seems worthwhile to explore methods for obtaining the result of (C)-I (as shown in Table 2) within a more reasonable time. A combination of probabilistic methods for weight distribution, e.g., [6, 10, 12, 15], and the IP approach might prove promising. Specifically, approaches based on the “impulse method” of Hu et al. [10] and Declercq and Fossorier [6] are known to be effective for enumerating such low-weight codewords.

Table 3 Shortening positions T of each LDPC code when method (C) is applied

Numerical experiments

For our experiments, we used four different LDPC codes with a systematic structure (the WiMAX code, the Wi-Fi code, and two CCSDS codes). We evaluated the error correction ability by the codeword error rate (CER), defined as the ratio of the number of decoding failures to the total number of received codewords. In each of the following figures, the horizontal axis represents the signal-to-noise ratio \(E_{b}/N_{0}\). In all experiments, we assumed \(E_{s}=RE_{b}=1\); they were performed on an Intel(R) Xeon(R) E5-1660 3.70 GHz host using gcc 4.4.7 with -O3.

(1) In this case, we used the \((n+\alpha ,k+\alpha ,\alpha )=(576,288,36)\) IEEE 802.16e (WiMAX) LDPC code as the base code. Thus, \((n,k)=(540,252)\), and hence the coding rate is \(R=k/n=0.467\). The result for this case is shown in Fig. 2. In the figure, the label SP shows the CER of the “original” (576, 288) LDPC code with the SP algorithm, and OSD1 stands for the result of the Order-1 OSD method applied to the base LDPC code. \(\hbox {SH}_{\mathrm{A}}\), \(\hbox {SH}_{\mathrm{B}}\) and \(\hbox {SH}_{\mathrm{C}}\) are the results of the \(\alpha\)-bit shortened codes whose shortening positions were determined by methods (A), (B) and (C) of the previous section, respectively. \(\hbox {SHOSD1}_{\mathrm{A}}\), \(\hbox {SHOSD1}_{\mathrm{B}}\) and \(\hbox {SHOSD1}_{\mathrm{C}}\) are the proposed methods, which apply the Order-1 OSD method to \(\hbox {SH}_{\mathrm{A}}\), \(\hbox {SH}_{\mathrm{B}}\) and \(\hbox {SH}_{\mathrm{C}}\), respectively. \(\hbox {SH}_{\mathrm{B}}\), \(\hbox {SH}_{\mathrm{C}}\) and OSD1 showed almost the same CER (whereas \(\hbox {SH}_{\mathrm{A}}\) was inferior to them). However, we observed that \(\hbox {SHOSD1}_{\mathrm{A}}\), \(\hbox {SHOSD1}_{\mathrm{B}}\) and \(\hbox {SHOSD1}_{\mathrm{C}}\) achieved a CER under \(10^{-5}\) at \({E}_b/{N}_0=3.0\) dB and 3.5 dB and consistently outperformed Order-1 OSD decoding. In particular, \(\hbox {SHOSD1}_{\mathrm{C}}\) is superior even to \(\hbox {SHOSD1}_{\mathrm{A}}\) and \(\hbox {SHOSD1}_{\mathrm{B}}\) at \({E}_b/{N}_0=3.0\) dB and 3.5 dB. Table 4 shows the strength of the OSD effect for the base and shortened LDPC codes. As shown in Table 4, the \(\hbox {SH}_{\mathrm{A}}\)/\(\hbox {SHOSD1}_{\mathrm{A}}\) and \(\hbox {SH}_{\mathrm{B}}\)/\(\hbox {SHOSD1}_{\mathrm{B}}\) ratios are, in most cases, superior to the SP/OSD ratio, except at 3.0 dB and 3.5 dB.
However, \(\hbox {SH}_{\mathrm{C}}\)/\(\hbox {SHOSD1}_{\mathrm{C}}\) shows a far superior improvement rate compared with the other three cases. This indicates that if we properly select the shortening positions, the OSD effect is accelerated by the corresponding shortened codes. Table 5 shows the numbers of excluded codewords, out of No. 1 through No. 900 in Table 2, for methods (A), (B) and (C), respectively. From this, we see that the number of codewords excluded by method (C) is larger than those of methods (A) and (B). Figure 3 gives the distribution of the number of codewords not covered by each of the shortening methods (A), (B) and (C). The horizontal axis indicates the codeword weight, while the vertical axis gives the (log-scaled) number of codewords left uncovered by the respective method. Methods (A) and (B) leave a considerable number of uncovered codewords (especially at low weights, such as 13 and 16), whereas method (C) leaves only three uncovered codewords, at weight 23. The error correction ability at 3.0 dB and 3.5 dB appears to be affected by these numbers of uncovered codewords with small Hamming weight in Table 5.

Fig. 2

IEEE 802.16e \((n+\alpha ,k+\alpha )=(576,288)\) base LDPC code. \(M=50, L=10{,}000\)

Table 4 OSD effect for base and shortened codes: case (1)
Table 5 Number of covered (inhibited) codewords appearing in Table 2 for the IEEE 802.16e code by methods (A), (B) and (C), respectively
Fig. 3

Distribution of the number of codewords not covered by each shortening method for the WiMAX \((n+\alpha ,k+\alpha ,\alpha )=(576,288,36)\) LDPC code

(2)-1 In this case, we used the \((n+\alpha ,k+\alpha ,\alpha )=(648,324,40)\) IEEE 802.11n (Wi-Fi) LDPC code as the base code. Therefore, \((n,k)=(608,284)\), and hence the coding rate is \(R=k/n=0.467\). The result for this case is shown in Fig. 4. All labels have the same meaning as in Fig. 2. We can see that \(\hbox {SH}_{\mathrm{A}}\), \(\hbox {SH}_{\mathrm{B}}\), \(\hbox {SH}_{\mathrm{C}}\) and OSD1 show almost equivalent CER. On the other hand, we observe that \(\hbox {SHOSD1}_{\mathrm{A}}\), \(\hbox {SHOSD1}_{\mathrm{B}}\) and \(\hbox {SHOSD1}_{\mathrm{C}}\) achieve a CER under \(10^{-5}\) at \({E}_b/{N}_0=2.5\) dB and consistently outperform Order-1 OSD decoding. In particular, the SHOSD1\(\mathrm{_B}\) and SHOSD1\(\mathrm{_C}\) results at \(E_{b}/N_{0}=3.0\) dB are strong compared to those of SHOSD1\(\mathrm{_A}\). Table 6 shows the efficiency of the OSD effect for the base and shortened LDPC codes. As shown in Table 6, the ratios \(\hbox {SH}_{\mathrm{A}}\)/\(\hbox {SHOSD1}_{\mathrm{A}}\), \(\hbox {SH}_{\mathrm{B}}\)/\(\hbox {SHOSD1}_{\mathrm{B}}\) and \(\hbox {SH}_{\mathrm{C}}\)/\(\hbox {SHOSD1}_{\mathrm{C}}\), except for the 3.0 dB case of \(\hbox {SH}_{\mathrm{A}}\)/\(\hbox {SHOSD1}_{\mathrm{A}}\), are superior to the SP/OSD ratio, which means that the proposed method accelerates the OSD effect. As a possible cause of SHOSD1\(\mathrm{_{A}}\) losing its efficiency at \(E_{b}/N_{0}=3.0\) dB, we mention the following. First, as shown in Table 7, the numbers of covered codewords for methods (B) and (C) are relatively large (771 and 793) compared to that of method (A). Second, from Fig. 5, we observe a similarity in the distributions of the number of uncovered codewords for methods (B) and (C), although the distribution of method (B) has a non-zero value at weight 20. On the other hand, the distribution of the number of uncovered codewords for method (A) has non-zero values at low Hamming weights (15, 18 and 19).
This likely causes the degradation of the CER at \(E_{b}/N_{0}=3.0\) dB for SHOSD1\(\mathrm{_{A}}\).

Fig. 4

IEEE 802.11n \((n+\alpha ,k+\alpha )=(648,324)\) base LDPC code \((\alpha =40)\). \(M=50, L=10{,}000\)

Table 6 OSD effect for base and shortened codes: case (2) (α = 40)
Table 7 Number of covered (inhibited) codewords appearing in Table 2 for the IEEE 802.11n code (\(\alpha =40\)) by methods (A), (B) and (C), respectively
Fig. 5

Distribution of number of codewords which are not covered by each shortening method for Wi-Fi \((n+\alpha ,k+\alpha ,\alpha )=(648,324,40)\) LDPC code

(2)-2 Next, we examine the same IEEE 802.11n (Wi-Fi) LDPC code as a base code for a different parameter, specifically, \((n+\alpha ,k+\alpha ,\alpha )=(648,324,36)\). Thus, \((n,k)=(612,288)\) and hence \(R=k/n=0.471\). The results are shown in Fig. 6, with its efficiency in Table 8, the number of covered codewords for each method in Table 9, and the distribution of the number of uncovered codewords for each Hamming weight of codeword in Fig. 7. We observed almost the same tendency as in the case of \(\alpha =40\).

Fig. 6

IEEE 802.11n \((n+\alpha ,k+\alpha )=(648,324)\) base LDPC code \((\alpha =36)\). \(M=50, L=10{,}000\)

Table 8 OSD effect for base and shortened codes: case (2) (α = 36)
Table 9 Number of covered (inhibited) codewords appearing in Table 2 for the IEEE 802.11n code (α = 36) by methods (A), (B) and (C), respectively
Fig. 7

Distribution of the number of codewords not covered by each shortening method for the Wi-Fi \((n+\alpha ,k+\alpha ,\alpha )=(648,324,36)\) LDPC code

(3) In this case, we used the \((n+\alpha ,k+\alpha ,\alpha )=(256,128,16)\) CCSDS LDPC code as the base code; see [4]. Thus, \((n,k)=(240,112)\), and hence the coding rate is \(R=k/n=0.467\). The result for this case is shown in Fig. 8. In the figure, the meanings of SP, \(\hbox {SH}_{\mathrm{A}}\) and \(\hbox {SH}_{\mathrm{B}}\) are the same as in Fig. 2. OSD2 shows the result of the Order-2 OSD method applied to the base LDPC code. \(\hbox {SHOSD2}_{\mathrm{A}}\) and \(\hbox {SHOSD2}_{\mathrm{B}}\) are the proposed methods, which apply the Order-2 OSD method to \(\hbox {SH}_{\mathrm{A}}\) and \(\hbox {SH}_{\mathrm{B}}\), respectively. We found that the error correction ability of OSD2 is superior to those of \(\hbox {SH}_{\mathrm{A}}\) and \(\hbox {SH}_{\mathrm{B}}\). We observed that \(\hbox {SHOSD2}_{\mathrm{A}}\) and \(\hbox {SHOSD2}_{\mathrm{B}}\) achieve a CER under \(10^{-5}\) at \({E}_b/{N}_0=3.5\) dB and consistently outperform Order-2 OSD decoding. As an interesting finding, in this case the CER of SHOSD2\(_{\mathrm{A}}\) consistently outperforms that of SHOSD2\(_{\mathrm{B}}\), although not to a significant degree. Table 10 shows the efficiency of the OSD effect for the base and shortened LDPC codes. From this table, we see that the ratios \(\hbox {SH}_{\mathrm{A}}\)/\(\hbox {SHOSD2}_{\mathrm{A}}\) and \(\hbox {SH}_{\mathrm{B}}\)/\(\hbox {SHOSD2}_{\mathrm{B}}\) consistently improve on the SP/OSD2 ratio. Hence, in this case as well, shortening the base LDPC code accelerated the OSD effect.

Fig. 8

CCSDS \((n+\alpha ,k+\alpha )=(256,128)\) base LDPC code. \(M=50, L=10{,}000\)

Table 10 OSD effect for base and shortened codes: case (3)

(4) In this case, we used the \((n+\alpha ,k+\alpha ,\alpha )=(512,256,32)\) CCSDS LDPC code as the base code; see [4, 5]. Thus, \((n,k)=(480,224)\), and hence the coding rate is \(R=k/n=0.467\). The results for this case are shown in Fig. 9. All labels are the same as in Fig. 8. \(\hbox {SH}_{\mathrm{A}}\), \(\hbox {SH}_{\mathrm{B}}\) and OSD2 show almost the same CER. On the other hand, \(\hbox {SHOSD2}_{\mathrm{A}}\) and \(\hbox {SHOSD2}_{\mathrm{B}}\) achieve a CER under \(10^{-5}\) at \({E}_b/{N}_0=3.0\) dB and consistently improve on the Order-2 OSD decoding result. As in case (3), the CER of SHOSD2\(_{\mathrm{A}}\) is slightly better than that of SHOSD2\(_{\mathrm{B}}\). Table 11 shows the efficiency of the OSD effect for the base and shortened LDPC codes. As shown in Table 11, \(\hbox {SH}_{\mathrm{A}}\)/\(\hbox {SHOSD2}_{\mathrm{A}}\) and \(\hbox {SH}_{\mathrm{B}}\)/\(\hbox {SHOSD2}_{\mathrm{B}}\) consistently improve on the SP/OSD2 ratio. Therefore, as in case (3), shortening the base LDPC code accelerated the OSD effect.

Fig. 9

CCSDS \((n+\alpha ,k+\alpha )=(512,256)\) base LDPC code. \(M=50, L=10{,}000\)

Table 11 OSD effect for base and shortened codes: case (4)

Execution time evaluation

The decoding algorithm presented in this paper is a hybrid-type decoding algorithm whose structure is analogous to that of Baldi et al. [2]. As explained in [2], most decoding trials end with procedure (3) of the decoding algorithm. Moreover, we assumed a relatively small-order OSD reprocessing procedure (in the experiments discussed in the previous section, Order-1 and Order-2 reprocessing were used); thus, the average decoding time does not depart much from that of SP. We demonstrate this via numerical experiments. Tables 12 and 13 show the average computing time ratios for the experimental cases (1) and (2) (\(\alpha =40\)) of the previous section, respectively. Here, “OSD1” represents the average execution time of the Order-1 OSD process (without the time for the sum-product part), and similarly, “SHOSD1\(_{\mathrm{A}}\)” shows the average execution time of the OSD process with the shortened LDPC code and shortening method (A). The OSD ratio refers to the percentage of OSD trials, i.e., the ratio of SP decoding failures. The labels OSD1/SP and SHOSD1\(_{\mathrm{A}}\)/SP refer to the execution time ratios. We note that the execution times shown in these tables are based on an Intel(R) Xeon(R) E5-1660 3.70 GHz host using gcc 4.4.7 with -O3. Both Tables 12 and 13 show that the execution times of OSD1 and SHOSD1\(\mathrm{_{A}}\) do not depend on the signal-to-noise ratio \(E_b/N_{0}\). From Table 12, we observe that when \(E_b/N_{0}\) is relatively low (\(=1.0\) dB), the execution times of SP and OSD1 (or SHOSD1\(\mathrm{_A}\)) do not differ remarkably (OSD1/SP = SHOSD1\(\mathrm{_{A}}\)/SP \(=1.27\)). On the other hand, when \(E_b/N_{0}\) is relatively high (\(=2.0\) dB), the execution times of OSD1 and SHOSD1\(_{\mathrm{A}}\) are approximately three times longer than that of SP.
However, at \(E_b/N_{0}=2.0\) dB, the OSD ratio is only 2.08%, so for about 98% of the instances SP decoding suffices; as noted at the beginning of this section, most decoding trials end at procedure (3) of the decoding algorithm. Thus, even in terms of total execution time, OSD-based decoding does not lose its advantage (its high-precision decoding property) compared with SP decoding, even in a relatively high \(E_{b}/N_{0}\) regime. Almost the same tendency can be observed in Table 13.
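The argument above amounts to a simple expected-time model: SP always runs, and OSD reprocessing is invoked only for the fraction of frames where SP fails. A back-of-the-envelope sketch, using the 3x OSD cost and 2.08% OSD ratio quoted above as illustrative figures (in SP time units, not the measured table values):

```python
def avg_decode_time(t_sp, t_osd, osd_ratio):
    """Expected decoding time of the hybrid decoder:
    SP runs on every frame; OSD only on the fraction osd_ratio where SP fails."""
    return t_sp + osd_ratio * t_osd

# At 2.0 dB: OSD costs ~3x SP but only ~2.08% of frames need it.
rel = avg_decode_time(1.0, 3.0, 0.0208)
print(round(rel, 3))  # 1.062 -- only ~6% slower than SP-only on average
```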

Table 12 Average execution time of Case (1)
Table 13 Average execution time of Case (2) (α = 40)

Results and discussion

An effective way to increase the OSD decoding ability was presented. As shown by the experiments described in Sect. 5, the CER can be reduced by determining T appropriately. In particular, method (C), which is based on mathematical programming, seems to be effective. However, this method requires a collection of codewords with small Hamming weight, as shown in Table 2, and obtaining such tables via mathematical programming alone, as in method (C)-(I), is a computationally very hard task. Hence, hybrid methods with some heuristic approaches are desirable; see [6, 10, 12, 15].

Data availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Abbreviations

AWGN:

Additive white Gaussian noise

BPSK:

Binary phase-shift keying

CER:

Codeword error rate

CRC:

Cyclic redundancy check

LDPC:

Low density parity check code

MRB:

Most reliable bits

OSD:

Ordered statistic decoding

SP:

Sum-product

TC:

Telecommand

IP:

Integer programming

References

  1. M. Baldi, F. Chiaraluce, N. Maturo, G. Liva, E. Paolini, A hybrid decoding scheme for short non-binary LDPC codes. IEEE Commun. Lett. 18(12), 2093–2096 (2014)

  2. M. Baldi, N. Maturo, E. Paolini, F. Chiaraluce, On the use of ordered statistics decoders for low-density parity-check codes in space telecommand links. EURASIP J. Wirel. Commun. Netw. 2016, 272 (2016)

  3. M. Beermann, T. Breddermann, P. Vary, Rate-compatible LDPC codes using optimized dummy bit insertion, in 8th International Symposium on Wireless Communication Systems (2011), pp. 447–451

  4. CCSDS, Short Block Length LDPC Codes for TC Synchronization and Channel Coding, Orange Book. CCSDS 231.1-O-1 (2015)

  5. CCSDS, TC Synchronization and Channel Coding, Blue Book. CCSDS 231.0-B-3 (2017)

  6. D. Declercq, M.P.C. Fossorier, Improved impulse method to evaluate the low weight profile of sparse binary linear codes, in 2008 IEEE International Symposium on Information Theory, Toronto (2008), pp. 1963–1967

  7. J. Feldman, M.J. Wainwright, D.R. Karger, Using linear programming to decode binary linear codes. IEEE Trans. Inf. Theory 51(3), 954–972 (2005)

  8. M.P.C. Fossorier, Iterative reliability-based decoding of low-density parity check codes. IEEE J. Sel. Areas Commun. 19(5), 908–917 (2001)

  9. S. Gounai, T. Ohtsuki, Lowering error floor of irregular LDPC codes by CRC and OSD algorithm. IEICE Trans. Commun. E89-B(1), 1–10 (2006)

  10. X. Hu, M.P.C. Fossorier, E. Eleftheriou, On the computation of the minimum distance of low-density parity-check codes, in 2004 IEEE International Conference on Communications, vol. 2 (2004), pp. 767–771

  11. M. Jiang, C. Zhao, E. Xu, L. Zhang, Reliability-based iterative decoding of LDPC codes using likelihood accumulation. IEEE Commun. Lett. 11(8), 677–679 (2007)

  12. J. Leon, A probabilistic algorithm for computing minimum weights of large error-correcting codes. IEEE Trans. Inf. Theory 34(5), 1354–1359 (1988)

  13. S. Lin, D.J. Costello, Error Control Coding, 2nd edn. (Pearson, Hoboken, 2004)

  14. J. Lim, D.J. Shin, A novel bit flipping decoder for systematic LDPC codes. IEICE Electron. Express 14(2), 1–8 (2017)

  15. J. Stern, A method for finding codewords of small weight, in Coding Theory and Applications, ed. by G. Cohen, J. Wolfmann (Springer, New York, 1989)

  16. H. Wang, Q. Chen, Y. Zhang, On the LLR criterion based shortening design for LDPC codes, in 2016 Annual Conference on Information Science and Systems (2016). https://doi.org/10.1109/CISS.2016.7460482

  17. A. Wongsriwor, V. Imtawil, P. Suttisopapan, Design of rate-compatible LDPC codes based on uniform shortening distribution. Eng. Appl. Sci. Res. 45(2), 140–146 (2018)


Acknowledgements

The authors are grateful to the referees for their careful reading and invaluable comments. The first author (KW) is grateful to Professor Takeo Yamada for his careful reading and comments.

Author information


Contributions

RK and TS carried out the simulation and tuned up the encoding/decoding algorithm and KW designed the encoding/decoding algorithm. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kohtaro Watanabe.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Watanabe, K., Kaguchi, R. & Shinoda, T. Shortened LDPC codes accelerate OSD decoding performance. J Wireless Com Network 2021, 22 (2021). https://doi.org/10.1186/s13638-021-01901-x


Keywords

  • LDPC codes
  • OSD method
  • Shortened code
  • Code polytope
  • Integer programming