

  • Research
  • Open Access

Enhanced belief propagation decoding of polar codes by adapting the parity-check matrix

EURASIP Journal on Wireless Communications and Networking 2017, 2017:40

https://doi.org/10.1186/s13638-017-0825-3

  • Received: 5 April 2016
  • Accepted: 15 February 2017
  • Published:

Abstract

Though the performance of the belief propagation (BP) decoder for polar codes is comparable with that of the successive cancellation (SC) decoder, it exhibits a significant performance gap compared with improved SC decoders such as SC list (SCL) decoding. In this paper, we propose an improved BP decoder for polar codes that achieves good performance by adapting their parity-check matrices. The decoding process is iterative and consists of two parts. Firstly, the parity-check matrix of the polar code is adjusted so that the submatrix corresponding to the less reliable bits is sparse. Secondly, BP decoding is applied to the adjusted parity-check matrix. Simulation results show that the proposed decoder, when accompanied with an early termination scheme, provides significant performance gains over the original polar BP decoder at a relatively low decoding complexity and even competes with the cyclic redundancy check (CRC)-aided SCL (CRC-SCL) decoder at the cost of a tolerable increase in complexity.

Keywords

  • Polar codes
  • Belief propagation
  • Parity-check matrix adaptation
  • Early termination

1 Introduction

Polar codes are shown to be capacity-achieving for binary-input discrete memoryless channels (B-DMCs) with explicit construction and decoding methods [1]. In the regime of finite block lengths with practical considerations, however, the performance of polar codes is not very appealing. One of the reasons is that the successive cancellation (SC) decoding of polar codes is suboptimal and may cause error propagation due to its serial decoding architecture. Although there are some improved SC decoders [2, 3], their throughput is low and the decoding latency is relatively high. Another interesting decoding method for polar codes is belief propagation (BP) decoding [4–11]. One advantage of the BP decoder is its fully parallel architecture, which is important for practical applications. However, the performance of BP decoders reported in the literature is inferior to that of the improved SC decoders.

Since the parity-check matrix of polar codes is not sparse in general [12], running BP decoding directly on it does not yield good frame error rate (FER) performance. Thus, the BP decoders proposed in [4, 5] are based on the factor graph representation of the generator matrix. In this paper, based on the idea of iterative soft-input-soft-output decoding of Reed-Solomon (RS) codes [13], we propose an improved BP decoder for polar codes that adapts their parity-check matrices. In particular, we first adapt the parity-check matrix according to the bit reliabilities so that the unreliable bits correspond to a sparse submatrix. Then, the BP decoding algorithm is applied to the adapted parity-check matrix. The above process is performed iteratively until the predefined maximum number of iterations is reached or the stopping criterion is satisfied.

2 Preliminaries

2.1 Polar codes

Let W denote a B-DMC with input \(u \in \{0,1\}\), output \(y \in \mathcal {Y}\), and transition probabilities W(y|u). By recursively applying channel combining and splitting operations on N (N = 2^n, n ≥ 1) independent copies of W, we obtain a set of N polarized subchannels, denoted by \(W_{N}^{(i)}\left (y_{1}^{N},u_{1}^{i-1}|u_{i}\right)\), where i = 1, 2, …, N. The construction of an (N, K) polar code is completed by choosing the K best polarized subchannels to carry the information bits and freezing the remaining subchannels to zero or some fixed values. According to the encoding rule of polar codes, the encoder first generates the source block \(u_{1}^{N}\). Then, polar coding is performed via the constraint \(x_{1}^{N}=u_{1}^{N}{\boldsymbol {G}}_{N}\), where \({\boldsymbol {G}}_{N}\) is the generator matrix and the vector \(x_{1}^{N}\) is the codeword. The matrix \({\boldsymbol {G}}_{N}\) is given by \({\boldsymbol {B}}_{N}{\boldsymbol {F}}^{\otimes n}\), where \({\boldsymbol {F}}^{\otimes n}\) is the nth Kronecker power of \({\boldsymbol {F}}\triangleq \left [\begin {array}{cc} 1 & 0 \\ 1 & 1 \end {array}\right ]\) and \({\boldsymbol {B}}_{N}\) is the bit-reversal permutation matrix. The most popular decoding algorithms for polar codes are the SC algorithm and the BP algorithm.
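As a concrete illustration, the construction \({\boldsymbol {G}}_{N} = {\boldsymbol {B}}_{N}{\boldsymbol {F}}^{\otimes n}\) can be sketched in a few lines. This is an illustrative sketch only; the function name and NumPy usage are our own, not code from the paper:

```python
import numpy as np

def polar_generator_matrix(n):
    """Build G_N = B_N F^(kron n) over GF(2) for N = 2^n, where
    F = [[1, 0], [1, 1]] and B_N is the bit-reversal permutation."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    Fn = F
    for _ in range(n - 1):
        Fn = np.kron(Fn, F) % 2                     # nth Kronecker power of F
    N = 1 << n
    # applying the bit-reversal permutation to the row indices realizes B_N
    rev = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return Fn[rev, :]

G8 = polar_generator_matrix(3)                      # generator matrix for N = 8
```

Since \({\boldsymbol {F}}^{\otimes n}\) and \({\boldsymbol {B}}_{N}\) are both involutions over GF(2) and commute, \({\boldsymbol {G}}_{N}\) is its own inverse, which is a convenient sanity check.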

2.2 Successive cancellation decoding of polar codes

In [1], SC decoding is proposed as a baseline algorithm with a very low complexity of O(N log N), where N is the code block length. Based on the recursive structure of the polar encoder, the SC decoder performs a series of interlaced step-by-step decisions in which the decision in each step heavily depends on the decisions in the previous steps and the received sequence \(y_{1}^{N}\) from the channel. We define the log-likelihood ratio (LLR) of the ith bit as
$$ L_{N }^{(i)}\left(y_{1}^{N}, \hat{u}_{1}^{i-1}\right) = \log\frac{W_{N}^{(i)}\left(y_{1}^{N}, \hat{u}_{1}^{i-1}|0\right)}{W_{N}^{(i)}\left(y_{1}^{N}, \hat{u}_{1}^{i-1}|1\right)}. $$
(1)
Decisions are taken according to
$$ \hat{u}_{i}=~\left\{ \begin{array}{ll} 0 &\quad if~~i\in {I}~\text{and}~L_{N}^{(i)}\left(y_{1}^{N}, \hat{u}_{1}^{i-1}\right)\geq 0 \\ 1 &\quad if~~i\in {I}~\text{and}~L_{N}^{(i)}\left(y_{1}^{N}, \hat{u}_{1}^{i-1}\right)< 0\\ 0 &\quad if~~i\in {I}^{c}, \end{array}\right. $$
(2)
where I denotes the index set of the information subchannels and I^c is the complement of the set I, and the decision LLR \(L_{N}^{(i)}\left (y_{1}^{N},\hat {u}_{1}^{i-1}\right)\) can be calculated directly using the following recursive formulas according to [1, 14]
$$\begin{array}{*{20}l} &L_{N}^{(2i-1)}\left(y_{1}^{N}, \hat{u}_{1}^{2i-2}\right) \\ &\quad=2\text{tanh}^{-1}\left(\text{tanh}\left(L_{N/2}^{(i)}\left(y_{1}^{N/2}, \hat{u}_{1, e}^{2i-2}\oplus\hat{u}_{1, o}^{2i-2}\right)/2\right)\right.\\ &\quad\quad\times\text{tanh}\ \left. \left(L_{N/2}^{(i)}\left(y_{N/2+1}^{N}, \hat{u}_{1, e}^{2i-2}\right)/2\right)\right), \end{array} $$
(3)
or
$$\begin{array}{*{20}l} L_{N}^{(2i)}\left(y_{1}^{N}, \hat{u}_{1}^{2i-1}\right) &= L_{N/2}^{(i)}\left(y_{N/2+1}^{N}, \hat{u}_{1, e}^{2i-2}\right)\\ &\quad+(-1)^{\hat{u}_{2i-1}}L_{N/2}^{(i)}\left(y_{1}^{N/2}, \hat{u}_{1, e}^{2i-2}\oplus \hat{u}_{1, o}^{2i-2}\right)\!, \end{array} $$
(4)

where \(\hat {u}_{1, e}^{2i-2}\) and \(\hat {u}_{1, o}^{2i-2}\) denote the subvectors consisting of the elements of \(\hat {u}_{1}^{2i-2}\) with even and odd indices, respectively, and the symbol ⊕ denotes modulo-2 addition. From the above decoding process, we can see that the complexity is determined essentially by the complexity of computing the LLRs.
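The two recursion kernels of Eqs. (3) and (4) can be sketched directly; the function names below are hypothetical, and the LLR arguments correspond to the two half-block terms in the equations:

```python
import math

def f_update(l_a, l_b):
    """Eq. (3) kernel: combine two LLRs with the tanh rule
    2*atanh(tanh(l_a/2) * tanh(l_b/2))  (check-node style step)."""
    return 2.0 * math.atanh(math.tanh(l_a / 2.0) * math.tanh(l_b / 2.0))

def g_update(l_a, l_b, u_prev):
    """Eq. (4) kernel: combine two LLRs given the earlier hard decision
    u_prev, i.e. l_b + (-1)^u_prev * l_a  (variable-node style step)."""
    return l_b + (-1) ** u_prev * l_a
```

Note that the magnitude of `f_update` never exceeds the smaller of its two input magnitudes, which is why the min-sum approximation is often used in hardware.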

2.3 Belief propagation decoding of polar codes

To perform BP decoding of polar codes, the authors in [4] show that the factor graph of polar codes can be obtained by adding N check nodes to each of the first n (n = log N) columns, from left to right, in the encoding graph. During the BP iteration process, soft messages are updated and propagated among adjacent nodes from the rightmost column to the leftmost column. Then, the decoder reverses course and updates the messages toward the rightmost column. This procedure constitutes one round of BP iteration. After reaching the predefined number of BP iterations M_BP, the decision sequence \(\hat {u}_{1}^{N}\) is determined by hard decisions on the messages of the nodes in the leftmost column. However, the performance gains of the BP decoder are not significant over the binary-input additive white Gaussian noise (BAWGN) channel [12].

3 Enhanced belief propagation decoding of polar codes by adapting the parity-check matrix

3.1 Enhanced belief propagation decoding of polar codes

Generally, the BP decoder for polar codes is based on the factor graph representation obtained from the encoding graph of polar codes [4]. In [12], the authors proved that the parity-check matrix H of a polar code is formed by the columns of \({\boldsymbol {G}}_{N}\) with indices in I^c, where I^c is the index set of the frozen channels. However, H has not been considered for polar BP decoding since its density is relatively high; in general, the BP decoder is not suitable for linear block codes with a high-density parity-check (HDPC) matrix. In [13], however, Jiang and Narayanan proposed an iterative decoding scheme that performs well for RS codes with HDPC matrices by adapting the parity-check matrix when BP decoding fails to converge. In this paper, we investigate whether the idea of parity-check matrix adaptation also performs well for polar codes.

Suppose a codeword x is transmitted over a B-DMC and y is received. The LLR of the ith coded bit is defined as γ(x_i) = log\(\left (\frac {\text {Pr}\left (\mathbf {y}|x_{i}~=~0\right)}{\text {Pr}\left (\mathbf {y}|x_{i}~=~1\right)}\right)\). Let γ = (γ(x_1), γ(x_2), …, γ(x_{N−K}), …, γ(x_N)) denote the LLRs of the N coded bits. The improved polar BP decoder is described as follows.

Stage 1: matrix adaptation. Firstly, the absolute values |γ(x_i)| (i = 1, 2, …, N) of γ are sorted in ascending order. This yields a permutation sequence (j_1, j_2, …, j_{N−K}, …, j_N) of (1, 2, …, N−K, …, N) with \(\left |\gamma \left (x_{j_{1}}\right)\right | < \left |\gamma \left (x_{j_{2}}\right)\right | < \cdots < \left |\gamma \left (x_{j_{N}}\right)\right |\). Secondly, we introduce a new vector B = (j_1, j_2, …, j_{N−K}), which contains the indices of the N−K least reliable bits in γ. Let H_B denote the columns of H with indices chosen from B. Finally, Gaussian elimination (GE) is applied to H_B to reduce this submatrix to an identity matrix. If H_B is singular, we replace some elements of B with the indices of less reliable information bits until H_B becomes invertible. A new matrix H′ is thus obtained from H. This process makes H_B as sparse as possible. We choose to apply GE to H_B based on the fact that the LLR value of a bit reflects its reliability. Therefore, the columns of H′ corresponding to the most unreliable bits are reduced to a sparse matrix, which helps BP correct these unreliable bits.
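Stage 1 can be sketched as GF(2) Gaussian elimination over the columns of the least reliable bits. This is a simplified illustration with names of our own choosing; the column replacement used when H_B is singular is omitted for brevity:

```python
import numpy as np

def adapt_parity_check(H, gamma):
    """Stage 1 sketch: reduce the columns of H indexed by the N-K least
    reliable bits (smallest |LLR|) to an identity submatrix via GF(2)
    Gaussian elimination."""
    H = H.copy() % 2
    m = H.shape[0]                        # m = N - K parity checks
    order = np.argsort(np.abs(gamma))     # bit indices, least reliable first
    B = order[:m]                         # indices of the N-K least reliable bits
    for r, col in enumerate(B):
        pivots = np.nonzero(H[r:, col])[0]
        if len(pivots) == 0:
            continue                      # singular case: would swap a new index into B
        p = r + pivots[0]
        H[[r, p]] = H[[p, r]]             # bring the pivot row into position r
        for rr in range(m):               # clear column `col` in all other rows
            if rr != r and H[rr, col]:
                H[rr] ^= H[r]
    return H, B
```

Since every row operation is a GF(2) row combination, the row space (and hence the code) is unchanged: any codeword of H remains a codeword of the adapted matrix.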

Stage 2: BP decoding. The BP algorithm is performed on the newly obtained matrix H′. During one round of BP iteration, the LLR value of the ith bit is updated as:
$$\begin{array}{*{20}l} &\gamma^{'}(x_{i})=\gamma(x_{i})+\eta \gamma_{\text{ext}}(x_{i}),& \end{array} $$
(5)
where 0 < η ≤ 1 is the damping factor [13]. The extrinsic information γ_ext(x_i) of the ith bit is calculated as follows [13]:
$$ \gamma_{\text{ext}}(x_{i})=\sum\limits_{j=1, H^{'}_{ji}=1}2\text{tanh}^{-1}\left(\begin{array}{l} \prod\limits_{p=1, p\neq i, H^{'}_{jp}=1} \text{tanh}\Big(\frac{\gamma(x_{p})}{2}\Big) \end{array} \right). $$
(6)

According to γ′(x_i), the hard decision on the ith bit can be made. After a fixed number of BP iterations, say M_BP, the decoder returns to stage 1 and continues with the next round of outer iteration. It should be noted that the matrix adaptation should use the latest LLRs γ′ obtained by the inner BP decoder. The advantages of matrix adaptation are as follows: (1) it reduces the density of the original parity-check matrix H and eliminates some short cycles; (2) it takes the LLR values of reliable bits into account when updating the unreliable bits. Therefore, the decoding performance can be improved [13].
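A direct, unoptimized sketch of the damped update (5) with the extrinsic term (6) follows; the function name and the clipping of the tanh product (to keep arctanh finite for saturated LLRs) are our own choices:

```python
import numpy as np

def bp_damped_update(H, gamma, eta=0.5):
    """One sweep of Eqs. (5)-(6): for every bit, sum the extrinsic
    check-to-bit LLRs over the checks of H containing it, then apply
    the damped update gamma' = gamma + eta * gamma_ext."""
    gamma = np.asarray(gamma, dtype=float)
    t = np.tanh(gamma / 2.0)
    ext = np.zeros_like(gamma)
    for j in range(H.shape[0]):
        idx = np.nonzero(H[j])[0]             # bits participating in check j
        for i in idx:
            prod = np.prod(t[idx[idx != i]])  # product over the *other* bits
            # clip to keep arctanh finite when the product saturates
            ext[i] += 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return gamma + eta * ext
```

For a single parity check over two strongly agreeing bits, the extrinsic term reinforces both LLRs, as expected from the tanh rule.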

3.2 Early termination

It is well known that the stopping criterion for the BP decoder is to determine whether the constraint \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T}\) = 0 is satisfied. However, according to the analysis in [15] and our simulation results, the BP decoder does not provide good error-correction performance if we only use \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T}\) = 0 as the stopping criterion for polar codes. Hence, we consider the following simple but efficient stopping criterion for the proposed decoder, based on the criterion described in [15], which also reduces the number of matrix adaptations.

minLLR-based stopping criterion: Let Min|LLR| denote the minimum absolute value of the LLRs in γ and β be a positive real number. If Min|LLR| > β, the decoder calculates \(\hat {\boldsymbol {u}}\) according to \(\hat {\boldsymbol {x}}\) and stops decoding. Otherwise, the decoder continues with the next round of matrix adaptation.
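In Algorithm 1 this rule is applied on top of the parity condition, so acceptance requires both tests. A small sketch combining them (names are our own):

```python
import numpy as np

def should_stop(gamma, H, x_hat, beta):
    """minLLR stopping rule sketch: accept the decision x_hat only if
    every parity check of H is satisfied AND the least reliable LLR in
    gamma clears the threshold beta."""
    parity_ok = not np.any((H @ x_hat) % 2)
    return bool(parity_ok and np.min(np.abs(gamma)) > beta)
```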

The above criterion is based on the stopping criterion described in [15], although that criterion was proposed for a BP decoder based on the factor graph representation of the generator matrix. To illustrate the validity of this criterion, the proposed decoder with only \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T}\) = 0 as the stopping criterion is used to decode the (256, 128) polar code at SNR = 2.5 dB. To prevent the decoder from getting trapped when the BP iterations cannot converge to a valid estimated sequence (i.e., \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T} \neq \) 0), the total number of BP iterations is limited to a predefined value M_BP. We generate 153 frames, of which 50 cannot be decoded by the proposed decoder. That is, for 103 frames the condition \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T}\) = 0 is met before the total number of iterations is reached, while for 50 frames it is still not met when the iteration limit is reached. Figure 1 shows the values of Min|LLR| for γ gathered during the decoding of these 153 frames. For clarity, the Min|LLR| values of the 50 error frames are plotted on the right side of Fig. 1. We can see that the Min|LLR| of a correct frame is generally much larger than zero, while that of an incorrect frame approaches zero. The parameter β can be chosen according to the simulations shown in the next section.
Fig. 1

Distribution of Min|LLR|s for a (256, 128) polar code at SNR = 2.5 dB

3.3 Algorithm description

Based on the above analysis, the proposed scheme is described in Algorithm 1, where M_AP denotes the predefined number of matrix adaptations. The function AdaptiveMatrix() performs the matrix adaptation which reduces H to H′. The function BP() performs the BP algorithm based on H′ and stores its returned value in the boolean variable BP_result. If BP_result = false, the constraint \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T}=0\) is still not satisfied when the number of BP iterations reaches M_BP, so the algorithm goes back and performs the matrix adaptation again. Otherwise, the constraint \(\hat {\boldsymbol {x}}\left ({\boldsymbol {H}}^{'}\right)^{T} = 0\) is satisfied. Then, we compare Min|LLR| with β, and the matrix adaptation is performed again when Min|LLR| < β. It should be noted that a cyclic redundancy check (CRC) can be used to detect the output sequence of the BP algorithm when Min|LLR| ≥ β to further improve the performance of the proposed algorithm. Accordingly, the last r unfrozen bits must be set to hold the r-bit CRC at encoding, where r is a small constant. In addition, since Section 3 involves many variables, we first provide a variable list in Table 1 for clarity. The last four variables in Table 1 are used in the following subsections.
Table 1 Symbol definition

Symbol | Meaning
\(\boldsymbol {\hat {u}}\) | The decision source block
\(\boldsymbol {\hat {x}}\) | The decision codeword
M_BP | The predefined number of BP iterations
M_AP | The predefined number of matrix adaptations
H | The parity-check matrix
H′ | The adjusted parity-check matrix
B | The index set of the N−K least reliable bits in γ
γ | The initial LLRs of the N coded bits
γ′ | The updated LLRs of the N coded bits
η | The damping factor
Min|LLR| | The minimum absolute value of the LLRs in γ
β | A positive real number
BP_result | The returned value of function BP()
C_AP | The average number of matrix adaptations
C_BP | The average number of iterations used by BP()
W_R | The average row weight
W_C | The average column weight
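Using the symbols of Table 1, the outer loop of Algorithm 1 can be sketched as follows. AdaptiveMatrix() and BP() are passed in as callables; their signatures here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def decode(gamma, H, M_AP, M_BP, beta, adaptive_matrix, bp):
    """Outer loop of Algorithm 1 (sketch).  `adaptive_matrix` and `bp`
    stand in for AdaptiveMatrix() and BP(); `bp` returns the updated
    LLRs and a boolean telling whether x_hat (H')^T = 0 was reached
    within M_BP inner iterations."""
    for _ in range(M_AP):
        H_prime = adaptive_matrix(H, gamma)       # Stage 1: matrix adaptation
        gamma, bp_ok = bp(H_prime, gamma, M_BP)   # Stage 2: inner BP rounds
        # minLLR criterion: stop once the checks hold and Min|LLR| clears beta
        if bp_ok and np.min(np.abs(gamma)) > beta:
            break
    return (np.asarray(gamma) < 0).astype(int)    # hard decision on the coded bits
```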

3.4 Computational complexity

The computational complexity of Algorithm 1 is determined by three parts: (1) sorting in AdaptiveMatrix(), (2) GE in AdaptiveMatrix(), and (3) BP decoding. Since the different parts involve different operations, we analyze the complexity by counting the numbers of binary operations (BOs), floating point comparisons (FPCs), and floating point operations (FPOs), respectively. For part (1), the sorting step needs O(C_AP N log N) FPCs, where C_AP denotes the average number of matrix adaptations. The GE in part (2) needs O(C_AP N³) BOs. Note that although the complexity of GE is cubic, it only needs BOs, which are much simpler than FPOs. Moreover, simulations show that AdaptiveMatrix() is performed only a few times as the SNR increases. When analyzing the complexity of BP decoding in part (3), we notice that the LLR of a bit with odd index in SC decoding is computed in the form 2tanh^{−1}(tanh(·)×tanh(·)), which is the same form as in BP decoding [14]. For simplicity, we treat the tanh() function as a black box and do not count its FPOs when analyzing the complexity of the different decoders; this is reasonable since all the decoders involved need the tanh() function. According to Eq. (6), the N−K check nodes need O(C_AP C_BP (N−K)(W_R−2)) multiplication operations, where W_R is the average row weight and C_BP is the average number of iterations used by BP(). The N variable nodes need O(C_AP C_BP N(W_C−2)) addition operations, where W_C is the average column weight. Table 2 summarizes the complexity analysis of Algorithm 1 and the SC decoder. It should be noted that the complexity of the proposed algorithm decreases rapidly as the SNR increases since C_AP and C_BP become very small. In fact, from Table 3, we can see that W_C and W_R after AdaptiveMatrix() are much smaller than K. Hence, the average complexity of the BP decoding is relatively low.
Table 2 Complexity comparisons with different decoders

Decoder | BOs | FPCs | FPOs (×/÷) | FPOs (+/−)
SC | / | / | O(N log N) | O(N log N)
Proposed | O(C_AP N³) | O(C_AP N log N) | O(C_AP C_BP (N−K)(W_R−2)) | O(C_AP C_BP N(W_C−2))

Table 3 The average row and column weights of H′ for a (1024, 512) polar code

SNR | Avg. column weight (W_C) | Avg. row weight (W_R)
2.0 dB | 26.08 | 52.67
2.5 dB | 26.12 | 52.72
3.0 dB | 26.10 | 52.73
3.5 dB | 26.14 | 52.77
4.0 dB | 26.16 | 52.79

3.5 Variations of the proposed algorithm

In this subsection, we describe several variations of the proposed algorithm that further improve the performance or reduce the decoding complexity.
  1. Improving the performance

    The first variation, inspired by [13], can further improve the performance by running the proposed algorithm several times, each time with the same initial LLRs from the channel but a different grouping of the less reliable bits. This is based on the fact that some bits whose |LLR|s are close to those in the unreliable set B may also have the wrong sign, and vice versa. Each time the proposed algorithm is run, a different codeword estimate may be obtained due to the different parity-check matrix. The decoder keeps all the returned codewords in a list and chooses the one that minimizes the Euclidean distance to the received vector. This variation can significantly improve the asymptotic performance of polar codes.

     
  2. Reducing the complexity

     (a) Partial reliable bit updating

      The complexity of the proposed algorithm depends mostly on the FPCs. The main FPC complexity comes from two sources: the computation of the extrinsic information in the reliable part and the adaptation of the parity-check matrix. Since only some bits near the boundary are switched from the reliable part to the unreliable part during the adaptation of the parity-check matrix, we can use the partial reliable bit updating rule proposed in [13] to reduce the complexity of the bit-reliability update.

       
     (b) Sophisticated update schemes

      Sophisticated update schemes reduce the complexity of the matrix adaptation, such as the scheme of El-Khamy and McEliece [16], which adapts the parity-check matrix from the previous one and reduces the overall complexity by 75%.
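The codeword-selection step of the first variation above (minimum Euclidean distance over the candidate list, with BPSK mapping 0 → +1, 1 → −1) might look like the following sketch; the function name is hypothetical:

```python
import numpy as np

def pick_best_candidate(candidates, r):
    """Variation-1 selection: among the codeword estimates returned by
    the decoding rounds, keep the one whose BPSK image (0 -> +1,
    1 -> -1) is closest in Euclidean distance to the received vector r."""
    best, best_d = None, float("inf")
    for c in candidates:
        s = 1.0 - 2.0 * np.asarray(c, dtype=float)           # BPSK mapping
        d = float(np.sum((np.asarray(r, dtype=float) - s) ** 2))
        if d < best_d:
            best, best_d = c, d
    return best
```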

       
     

4 Simulation results

In this section, we consider three polar codes: a (256, 128) rate-1/2 polar code C_1, a (1024, 512) rate-1/2 polar code C_2, and a (2048, 1536) rate-3/4 polar code C_3. We set M_AP = 10 and M_BP = 20/50. According to the selection principle of η in [13], we first set η = 0.5 and observe the performance of the proposed decoder. A CRC-8 code with generator polynomial g(D) = D^8+D^7+D^6+D^4+D^2+1 is used for C_1, while for C_2 and C_3 we use a CRC-24 code with generator polynomial g(D) = D^24+D^23+D^6+D^5+D+1. Since the CRC sequences are very short compared to the information bits of a polar code, the rate loss caused by the CRC is almost negligible. All simulations are performed over an additive white Gaussian noise (AWGN) channel with binary phase shift keying (BPSK).

4.1 Choice of β

Figure 2 presents the FER performance as a function of β at different signal-to-noise ratios (SNRs) for C_1. It can be seen that for SNR < 3.5 dB, the FER performance is almost unchanged as β increases. For SNR ≥ 3.5 dB, the FER decreases gradually as β increases. Although the value of β has little influence on the FER performance of the proposed scheme at low SNRs, the simulations show that larger values of β require a larger number of BP iterations. For example, if β = 0.5, the average number of BP iterations for C_1 is about 49 at SNR = 2.0 dB; if β = 7.5, it is about 63 at the same SNR. Therefore, as in [15], to trade off FER performance against decoding complexity, we choose β = 0.5 when SNR < 3.5 dB and β = 7.5 when SNR ≥ 3.5 dB for C_1. For C_2 and C_3, we choose β = 0.5 when SNR < 3.5 dB and β = 15.5 when SNR ≥ 3.5 dB.
Fig. 2

Performances of the proposed decoder for C 1 with different β

4.2 Comparison of the FER performance

Figures 3 and 4 show the FER performances of C_1 and C_2 with different polar decoders, respectively. We can see that the proposed decoder with M_BP = 50 performs significantly better than the SC decoder and the original BP decoder. In particular, for C_2 at FER = 10^−4, the performance gain of the proposed decoder with M_BP = 50 is about 1.2 dB compared to the original BP decoder with M_BP = 60 and the improved BP decoders proposed in [11] and [15]. Moreover, compared to the SC list (SCL) decoder with list size L = 32, the proposed decoder obtains almost 0.7 dB of performance gain. Figure 5 presents the FER performances of C_3 using different polar decoders. Although the code rate is higher, the proposed decoder still obtains about 0.5 dB of performance gain at FER = 10^−4 compared to the original polar BP decoder. This shows that the proposed decoder can significantly improve the FER performance of the original polar BP decoder on the AWGN channel at short and moderate block lengths and different code rates. On the other hand, we note that the algorithm proposed in [10], to which readers are referred for more details, can also improve the performance of the original polar BP decoder, with an SNR improvement of 0.3 dB. However, Figs. 4 and 5 show that the performance gains of the proposed decoder reach at least 0.5 dB, namely 0.5 dB at rate 0.75 and 1.2 dB at rate 0.5.
Fig. 3

Performance comparisons of different decoders for C 1

Fig. 4

Performance comparisons of different decoders for C 2

Fig. 5

Performance comparisons of different decoders for C 3

We compare the proposed decoder with variations against the CRC-aided SCL (CRC-SCL) decoder in Fig. 6. Specifically, the notation proposed(M_AP, M_BP, q_1) refers to the proposed scheme with the variations presented in Section 3.5, where the parameter q_1 is the number of decoding rounds with different groupings of the unreliable bits. The damping factor η is also specified on the plots. From Fig. 6, we can see that the performance of the proposed decoder using the different-grouping method is improved significantly. Compared with the CRC-SCL (L = 32) [17, 18], the proposed decoder with parameters (10, 50, 3) provides about 0.2 dB of performance gain at FER = 10^−4. Additionally, the damping coefficient of the proposed algorithm must be chosen carefully to control the update step width. In general, if η is set too large, the FER curve of the proposed decoder has a flat slope (as shown in Fig. 6), mainly because the update overshoots so that the decoder quickly converges to a wrong codeword.
Fig. 6

Performance comparisons between the proposed decoder with variations and CRC-SCL for C 2

4.3 Average iterative number and decoding complexity

Figure 7 shows the average number of total BP iterations (= C_AP C_BP) per frame for C_2 using the original BP decoder, the BP decoder with the G-matrix-based early stopping scheme [15], the BP decoder with freezing of connected sub-factor-graphs (CSFGs) [19], and the proposed decoder. From the simulations, we can see that as the SNR increases, the average number of BP iterations of the proposed decoder decreases faster than that of the other decoders, showing that the proposed algorithm converges to a correct codeword at a faster rate. To show the FPO complexity of the proposed decoder clearly, we use the complexity of the SCL decoder as a reference. Table 4 shows the numbers of FPOs of the original SC decoder, the SCL decoder (L = 32), and the proposed decoder (M_BP = 50) for a (1024, 512) polar code. Since the average number of total BP iterations C_AP C_BP becomes smaller as the SNR increases (e.g., C_AP C_BP = 5.01 at SNR = 3.5 dB), the average number of FPOs of the proposed decoder decreases with increasing SNR. When SNR ≥ 3.5 dB, the proposed decoder has a lower FPO complexity than the SCL decoder. It should be noted that all the numbers in Table 4 are rounded. The table shows that the FPO complexity of the proposed decoder decreases rapidly as the SNR increases, and that the complexity of the proposed decoder is reduced considerably when the variations of Section 3.5 are used. However, compared with the reduced-complexity SCL [18], the proposed decoder with the variations still has a higher complexity. Considering its better performance and parallel architecture, if a larger complexity can be tolerated, the proposed decoder is promising for reducing the decoding latency of the CRC-SCL decoder for polar codes while providing good performance.
Fig. 7

Comparison of average number of BP iterations for C 2 using the BP decoder and the proposed decoder

Table 4 The numbers of FPOs of different decoders (×10³) for C_2 (M/D: multiplications/divisions; A/S: additions/subtractions)

Decoder | 2.5 dB M/D | 2.5 dB A/S | 3.0 dB M/D | 3.0 dB A/S | 3.5 dB M/D | 3.5 dB A/S | 4.0 dB M/D | 4.0 dB A/S
SC | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5
SCL (L = 32) | 328 | 328 | 328 | 328 | 328 | 328 | 328 | 328
Proposed (M_BP = 50) | 286 | 272 | 182 | 173 | 130 | 124 | 78 | 74
Improved SCL (L = 32) [18] | 45 | 45 | 23 | 23 | 10 | 10 | 6 | 6
Proposed (10, 50, 3) | 214 | 204 | 136 | 130 | 98 | 93 | 58 | 55

5 Conclusions

In this paper, we have proposed an improved BP decoder for polar codes that adapts their parity-check matrices. Though the idea behind the proposed decoder is not new, it had not previously been applied to polar codes. More importantly, simulation results show that the proposed decoder provides significant performance gains over the polar BP decoder and can also compete with the CRC-SCL decoder when the variations described in Section 3.5 are used. Although the proposed decoder still has a somewhat higher complexity than the reduced-complexity CRC-SCL decoders, its fully parallel architecture makes it a promising decoder for reducing the decoding latency of the CRC-SCL decoder for polar codes at a tolerable complexity.

Declarations

Acknowledgements

The authors would like to thank the editor and anonymous reviewers for their constructive comments which helped improve the quality of this paper. This work is supported by the National Natural Science Foundation of China (no. 61471286, no. 61271004), the Fundamental Research Funds for the Central Universities, and the open research fund of the Key Laboratory of Information Coding and Transmission, Southwest Jiaotong University (no. 2010-03).

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
The School of Computer Science and Technology, Xidian University, TaiBai Road, Xi’an, China
(2)
The Key Laboratory of Information Coding and Transmission, Southwest Jiaotong University, ErHuan Road, Chengdu, China
(3)
The School of Computer Science and Technology, Xi’an Technological University, JinHua Road, Xi’an, China
(4)
Border Defence Academy, HuanShan Road, Xi’an, China

References

  1. E Arikan, Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 55(7), 3051–3073 (2009)
  2. I Tal, A Vardy, List decoding of polar codes. IEEE Trans. Inf. Theory 61(5), 2213–2226 (2015)
  3. K Niu, K Chen, Stack decoding of polar codes. Electron. Lett. 48(12), 695–697 (2012)
  4. E Arikan, A performance comparison of polar codes and Reed-Muller codes. IEEE Commun. Lett. 12(6), 447–449 (2008)
  5. A Eslami, H Pishro-Nik, On finite-length performance of polar codes: stopping sets, error floor, concatenated design. IEEE Trans. Commun. 61(3), 919–929 (2013)
  6. N Hussami, R Urbanke, SB Korada, in Proceedings of the IEEE International Symposium on Information Theory (ISIT). Performance of polar codes for channel and source coding (IEEE, Seoul, 2009), pp. 1488–1492
  7. A Eslami, H Pishro-Nik, in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing. On bit error rate performance of polar codes in finite regime (Allerton, 2010), pp. 188–194
  8. Y Zhang, A Liu, X Pan, Z Ye, C Gong, A modified belief propagation polar decoder. IEEE Commun. Lett. 18(7), 1091–1094 (2014)
  9. B Yuan, KK Parhi, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Architecture optimizations for BP polar decoders (Vancouver, 2013), pp. 2654–2658
  10. J Guo, M Qin, A Guillén i Fàbregas, PH Siegel, in Proceedings of the IEEE International Symposium on Information Theory (ISIT). Enhanced belief propagation decoding of polar codes through concatenation (Honolulu, 2014), pp. 2987–2991
  11. Y Zhang, Q Zhang, X Pan, Z Ye, C Gong, in Proceedings of the IEEE International Wireless Symposium (IWS). A simplified belief propagation decoder for polar codes (Xi'an, 2014), pp. 1–4
  12. N Goela, SB Korada, M Gastpar, in Proceedings of the IEEE Information Theory Workshop (ITW). On LP decoding of polar codes (Dublin, 2010), pp. 1–5
  13. J Jiang, K Narayanan, Iterative soft-input-soft-output decoding of Reed-Solomon codes by adapting the parity-check matrix. IEEE Trans. Inf. Theory 52(8), 3746–3756 (2006)
  14. R Mori, T Tanaka, Performance of polar codes with the construction using density evolution. IEEE Commun. Lett. 13(7), 519–521 (2009)
  15. B Yuan, KK Parhi, Early stopping criteria for energy-efficient low-latency belief-propagation polar code decoders. IEEE Trans. Signal Process. 62(24), 6496–6506 (2014)
  16. M El-Khamy, RJ McEliece, J Harel, in Proceedings of the IEEE International Symposium on Information Theory (ISIT). Performance enhancements for algebraic soft-decision decoding of Reed-Solomon codes (Chicago, 2004), p. 419
  17. K Chen, K Niu, J Lin, Improved successive cancellation decoding of polar codes. IEEE Trans. Commun. 61(8), 3100–3107 (2013)
  18. K Chen, K Niu, J Lin, in Proceedings of the IEEE Vehicular Technology Conference (VTC Spring). A reduced-complexity successive cancellation list decoding of polar codes (Dresden, 2013), pp. 1–5
  19. SM Abbas, Y Fan, J Chen, C Tsui, Low complexity belief propagation polar code decoders [Online] (2015). Available: http://arxiv.org/abs/1505.04979

Copyright

© The Author(s) 2017
