Blind recognition of binary cyclic codes
EURASIP Journal on Wireless Communications and Networking volume 2013, Article number: 218 (2013)
Abstract
A solution to blind recognition of binary cyclic codes is proposed in this paper. This problem arises in the context of noncooperative communications and adaptive coding and modulation. We consider it as a reverse engineering problem of error-correcting coding. The proposed algorithm recovers the encoder parameters of a cyclic-coded communication system with the only knowledge of the noisy information streams. By taking advantage of the soft-decision outputs of the channel and by employing statistical signal-processing methods, it achieves better recognition performance than existing algorithms, which are based on algebraic approaches in hard-decision situations. Through comprehensive simulations, we show that the probability of false estimation of the coding parameters by our proposed algorithm is much lower than that of the existing algorithms, and falls rapidly as the signal-to-noise ratio increases.
1. Introduction
The blind recognition of cyclic codes is a reverse engineering problem of error-correcting coding which can be applied to noncooperative communications [1, 2] and adaptive coding and modulation (ACM) [3–6]. In most digital communication systems, forward error-correcting coding is used to protect the transmitted information against noisy channels and reduce the errors which occur during transmission. Cyclic codes are one of the most important classes of error-correcting codes applied in the communications area. In a cooperative context, the parameters of the codes and modulations are usually known to both the transmitter and the receiver. But a receiver in noncooperative communications, or a cognitive radio receiver, may not know those parameters and thus cannot directly receive and decode the information transmitted on the channel. Therefore, to adapt itself to an unknown transmission context, the receiver must recognize the modulation and coding parameters blindly before processing the received data. In this paper, we develop an approach for blind recognition of the coding parameters of a communication system which uses binary cyclic codes.
2. Related work
In [7], a Euclidean-algorithm-based method is proposed to identify a rate-1/2 convolutional encoder in noiseless cases. However, it is not suitable for noisy channels. In [8], another approach is presented to identify a rate-1/n convolutional encoder in noisy cases based on the Expectation-Maximization algorithm. The authors of [9, 10] develop methods for blind recovery of convolutional encoders in a turbo code configuration. In [6, 11], a dual code method for blind identification of rate-k/n convolutional codes is proposed for cognitive radio receivers. An iterative-decoding-based reconstruction of block codes is introduced by the authors of [12] and applied to low-density parity-check (LDPC) codes. An algebraic approach for the reconstruction of linear and convolutional codes is presented in [13]. In [14], an algorithm for blind recognition of error-correcting codes is presented which utilizes the rank properties of the received stream.
In [15], an approach for blind recognition of binary linear block codes in low-code-rate situations is presented. The authors propose to estimate the code length according to the weight distribution characteristics of low-rate codes and then obtain the generator matrix by improving the traditional simplification of matrices. It performs well at high bit error rates (BER) but is not suitable for high-code-rate situations. Furthermore, it requires a large amount of observed data. In [16] and [17], the authors present a blind recognition algorithm for Bose–Chaudhuri–Hocquenghem (BCH) codes based on the Roots Information Dispersion Entropy and Roots Statistic (RIDERS). This algorithm can achieve correct recognition in both high- and low-code-rate situations at a BER of 10^{−2}. But it is computationally intensive, especially when the code length is large. The authors of [18] improve the algorithm proposed in [16, 17] by reducing the computational complexity and making the recognition procedure faster.
Most of the previous works concentrate on hard-decision situations and are based on the algebraic properties of the codes in Galois fields (GF). Their major drawback is a low fault tolerance: even if only 1 bit error occurs in a codeword, the algebraic properties of the error-correcting code are largely destroyed. Therefore, these recognizers need a large amount of observed data. On the other hand, if soft information about the channel output is available, the soft-decision outputs can provide more information for code recognition, and statistical signal processing algorithms can also be employed to improve the recognition performance.
When statistical and artificial-intelligence-based iterative algorithms are applied to error-correcting decoding, the decoding performance improves by about 2 to 3 dB in soft-decision situations [19]. In [20, 21], the authors introduce a MAP approach to achieve blind frame synchronization of error-correcting codes with a sparse parity-check matrix. It has also been developed for Reed–Solomon (RS) codes [22] and BCH product codes [23] and yields better performance than previous hard-decision methods. In this paper, we propose an algorithm to achieve blind recognition of binary cyclic codes in soft-decision situations. Reference [4] also considers the blind recognition of coding parameters based on soft decisions, but its recognition procedure is in fact semi-blind. The authors assume that the channel code used at the transmitter is unknown to the receiver, but the code is chosen from a set of possible codes which the authors call the candidate set. This set has a limited number of candidates and is arranged beforehand by both the transmitter and the receiver. It performs well for ACM but is not suitable for noncooperative cases.
To the best of our knowledge, this paper is the first publication to consider the completely blind recognition problem of binary cyclic codes in soft-decision situations. The proposed algorithm is based on the RIDERS algorithm introduced in [16–18]. We improve and extend this work in order to handle soft-decision situations. To utilize the soft-decision outputs, we employ the MAP-based processing idea proposed in [20–23].
The remainder of this paper is organized as follows: section 3 briefly introduces the RIDERS algorithm for hard-decision situations proposed in [16–18]; section 4 presents the principle of our proposed recognition algorithm for binary cyclic codes in soft-decision situations; section 5 describes the general recognition procedure of the proposed algorithm; and finally, the simulation results and conclusions are given in sections 6 and 7.
3. RIDERS algorithm for blind recognition of BCH codes
3.1 Introduction of RIDERS algorithm
The RIDERS algorithm is introduced in [16, 17] and improved in [18] to solve the problem of recognition of BCH codes. The system model of the blind recognition problem of coding parameters is shown in Figure 1. At the transmitter, the information sequence T_{ m } is encoded and separated into coded blocks T_{ c } by the encoder and modulated before being transmitted over the channel. After demodulation, the receiver blindly recognizes the coding parameters and decodes the received blocks R_{ c } to correct the errors which occurred during transmission. R_{ m } is the decoded information, which can then be processed further.
We define c(x) to be the codeword polynomial of T_{ c }; the algebraic model of the encoding procedure can then be described as follows [24]:
or in systematic form:
where m(x) is the input information polynomial and g(x) is the generator polynomial. The purpose of the recognition is to estimate the codeword length and the generator polynomial g(x) blindly with the only knowledge of the received streams. For an encoding system, m(x) differs from codeword to codeword, but g(x) remains the same. According to Equations 1 and 2, the roots of g(x) are also roots of c(x). If no error occurs, the roots of g(x) will appear in every codeword. However, for an invalid codeword, this algebraic relationship does not exist. In this paper, we define the code roots as the roots of the generator polynomial. The root space of a binary codeword polynomial c(x) defined in GF(2^{m}) (m ≥ 1) is a finite space which contains 2^{m} − 1 symbols. We define A to be the set of the generator polynomial roots. In a noisy context, statistically, for each codeword c(x), the probability that a root of the codeword polynomial appears in A is larger than the probability that it appears in \overline{{\rm A}} (defined in GF(2^{m})). In contrast, for an invalid codeword polynomial c’(x), the roots of c’(x) appear randomly in GF(2^{m}). In this case, the authors of [16–18] propose the following unproved hypothesis:
Hypothesis 1: Each symbol in GF(2^{m}) has a uniform probability of being a root of c’(x).
According to this hypothesis, the authors of [16–18] propose an algorithm to recognize the BCH code length by traversing all possible code lengths and primitive polynomials to find the coding parameters that maximize the roots' Information Dispersion Entropy Function (IDEF) as follows:
where n = 2^{m} − 1 is the code length, p_{ i } (1 ≤ i ≤ 2^{m} − 1) is the probability of α^{i} to be the root of the code and α is a primitive element in GF(2^{m}). p_{ i } is calculated as follows:
The received sequence, i.e. R_{ c } in Figure 1, is separated into M packets under an assumption of code length l, as shown in Figure 2. In [16–18], the authors assume that the start point of the first coding packet is obtained by frame synchronization testing, while the code length and generator polynomial are unknown. We define r_{ j }(x) (1 ≤ j ≤ M) to be the codeword polynomial of the j th packet in the received sequence. In Equation 4, N_{ i } is the number of times α^{i} appears as a root of r_{ j }(x) over the M packets, and N=\sum_{i=1}^{2^{m}-1}{N}_{i}.
According to Hypothesis 1, when the estimation of the code length and primitive polynomial is incorrect, p_{ i } can be considered uniformly distributed, with p_{ i } ≈ 1/(2^{m} − 1) (1 ≤ i ≤ 2^{m} − 1), and the ∆H in Equation 3 is low. If the code parameters are estimated correctly and α^{i} is a root of g(x), p_{ i } should be larger, so the distribution of the p_{ i } is not uniform. The information entropy of the p_{ i } is then lower and ∆H is larger. This is the basic principle of estimating the code length by maximizing the ∆H defined in Equation 3.
Once the code length is estimated, by comparing the p_{ i } at different roots, we can take the obviously higher ones as the estimated code roots, and the generator polynomial can be obtained as g\left(x\right)=\left(x-{\alpha}^{{i}_{1}}\right)\left(x-{\alpha}^{{i}_{2}}\right)\cdots \left(x-{\alpha}^{{i}_{r}}\right), where {\alpha}^{{i}_{1}},\;{\alpha}^{{i}_{2}},\;\ldots,\;{\alpha}^{{i}_{r}} are the estimated code roots, i.e. the roots of the generator polynomial.
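To make the length test concrete, the following Python sketch illustrates an entropy-deficit form of the criterion described above (∆H is large when the root probabilities concentrate on a few field elements). The function name `idef` and the exact normalization are ours for illustration; they are not taken from [16–18].

```python
import math

def idef(counts):
    """Entropy-deficit form of the IDEF (illustrative; the exact
    normalization of Eq. 3 may differ).  counts[i] is N_i, the number
    of packets whose codeword polynomial has the i th field element
    as a root."""
    total = sum(counts)
    if total == 0:
        return 0.0
    p = [c / total for c in counts]               # Eq. 4: p_i = N_i / N
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    h_max = math.log2(len(counts))                # entropy if p_i uniform
    return h_max - entropy                        # large when roots cluster

# A wrong length assumption gives near-uniform root counts and a small
# Delta-H; the correct one concentrates counts on the code roots.
flat = idef([10] * 63)
peaked = idef([100] * 6 + [10] * 57)
assert peaked > flat
```

Ranking candidate lengths by this quantity reproduces the maximization described above: the uniform case scores near zero, while a concentrated root distribution scores high.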
The RIDERS algorithm has good performance, but it still has several drawbacks, described as follows:

1) Hypothesis 1 proposed in [16–18] is incorrect; we give a proof in section 3.2. In fact, not all symbols in GF(2^{m}) have the same probability of being a root of an invalid codeword c’(x).

2) The algorithm only considers BCH codes of regular code length, i.e. code length l = 2^{m} − 1. The authors ignore shortened codes, which are nevertheless widely used.

3) The code roots can be separated into conjugate root groups, each containing the conjugate roots of one minimal polynomial. If a generator polynomial g(x) has a root β which is a root of the minimal polynomial m_{ p }(x), then the other roots of m_{ p }(x) are also roots of g(x). So we can test which minimal polynomials are factors of the generator polynomial rather than testing which elements of GF(2^{m}) are roots of the code.

4) The algorithm is based on hard-decision symbols and does not utilize the soft channel outputs.

5) The algorithm only considers the recognition of BCH codes and does not discuss its application to other binary cyclic codes.

6) The authors of [16–18] ignore the synchronization of the codewords. They assume that the starting positions of the codewords are known before the recognition procedure through frame testing. In practical implementations, however, this is not the case in a blind context.
From section 4 onwards, we propose an improved RIDERS algorithm for soft-decision situations and extend its application to general binary cyclic codes.
3.2 Proof of faultiness of Hypothesis 1
In this section, we show that Hypothesis 1 proposed in [16–18] is not always correct. The proof is given below.
Proof. Let c’(x) be the codeword polynomial of a codeword C’. We want to calculate p_{ i }, the probability that α^{i} is a root of c’(x). To this end, we define the minimal parity-check matrix H_{min}(α^{i}) corresponding to the element α^{i} in GF(2^{m}) as follows:
We transform H_{min}(α^{i}) into its binary form by replacing the symbols in H_{min}(α^{i}) by their binary column vector patterns according to coding theory [25], and denote the result by Hb_{min}(α^{i}).
For example, the minimal parity-check matrix H_{min}(α^{3}) corresponding to the element α^{3} in GF(2^{3}), with code length l = 2^{3} − 1 = 7, is as follows:
Based on the primitive polynomial p(x) = x^{3} + x + 1, we can replace the symbol α^{3} by the vector [011]^{T}, and the other symbols are processed similarly. The parity-check matrix can then be written over GF(2) as follows:
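The binary column patterns used in this substitution can be generated mechanically. The following Python sketch (function name and representation are ours) computes the patterns of α^{0}, α^{1}, …, reducing by the primitive polynomial p(x) = x^{3} + x + 1 represented as the bit mask 0b1011; it reproduces the [011]^{T} pattern of α^{3} quoted above.

```python
def gf2m_powers(m, prim_poly, count):
    """Binary column patterns of alpha^0 .. alpha^(count-1) in GF(2^m).

    prim_poly is the primitive polynomial as a bit mask, e.g.
    x^3 + x + 1 -> 0b1011.  Each pattern is the coefficient list
    [alpha^(m-1), ..., alpha^0], matching the column vectors above.
    (Function name and representation are ours.)"""
    patterns, a = [], 1                 # alpha^0 = 1
    for _ in range(count):
        patterns.append([(a >> b) & 1 for b in range(m - 1, -1, -1)])
        a <<= 1                         # multiply by alpha
        if a >> m:
            a ^= prim_poly              # reduce modulo p(x)
    return patterns

cols = gf2m_powers(3, 0b1011, 7)
assert cols[3] == [0, 1, 1]             # alpha^3 -> [011]^T, as above
assert len(set(map(tuple, cols))) == 7  # all nonzero elements of GF(2^3)
```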
If α^{i} is a root of c’(x), we have
There are m rows in Hb_{min}(α^{i}); we define h_{ μ } (1 ≤ μ ≤ m) to be the μ th row of Hb_{min}(α^{i}). The equation Hb_{min}(α^{i}) × C′ = 0 then means that the product of any row of Hb_{min}(α^{i}) with the codeword C’ equals zero, as shown in Equation 9:
So we can calculate the probability of α^{i} being a root of c’(x), i.e. the probability that Hb_{min}(α^{i}) × C′ = 0, as follows:
In the remainder of this paper, we write P_{ r }(x) for the probability of the event x. Let h_{μ,u} (1 ≤ u ≤ n) and C_{ u } be the u th elements of the vectors h_{ μ } and C’, and define the checking index set S_{ μ } for h_{ μ } and C’ as follows:
Obviously, when the number of nonzero elements in S_{ μ } is even, we have
And when the number of nonzero elements in S_{ μ } is odd, we have
When C’ is not a valid codeword, i.e. the elements of C’ can be considered to appear randomly, the probabilities of the number of nonzero elements in S_{ μ } being odd and being even are both about 0.5. When Hb_{min}(α^{i}) is full rank (the rank being calculated over GF(2)), the rows of Hb_{min}(α^{i}) are linearly independent, so we can evaluate Equation 10 as follows:
But if Hb_{min}(α^{i}) is not full rank, the calculation of P_{ r }[Hb_{min}(α^{i}) × C = 0] by Equation 14 is no longer correct. We define a maximum linearly independent vector group MI of the row vector set H = {h_{ μ } | 1 ≤ μ ≤ m} as follows:
MI is a subset of H and meets the following conditions:

(1) The vectors in MI are linearly independent;

(2) Any vector in H can be obtained as a linear combination of the vectors in MI.
It is easy to prove that the number of vectors in MI equals the rank of Hb_{min}(α^{i}).
According to condition 2 of the definition of MI, if all vectors h_{ μ } ∈ MI satisfy h_{ μ } × C = 0, then h_{ μ } × C = 0 also holds for all vectors h_{ μ } ∈ H. So Equation 10 should be calculated as:
where the elements of \left\{{\mathbf{h}}_{{\mu}_{\theta}} \mid 1\le \theta \le \mathrm{rank}\left(H{b}_{min}\left({\alpha}^{i}\right)\right)\right\} are the vectors of MI, i.e. a maximum linearly independent vector group of the rows of Hb_{min}(α^{i}).
According to Equation 15, Hypothesis 1 is true only if all the Hb_{min}(α^{i}), where 1 ≤ i ≤ 2^{m} − 1, have the same rank. Unfortunately, this condition cannot always be met. For example, we have the following results over GF(2^{6}):
Therefore, we have
Therefore, we conclude that Hypothesis 1 proposed in [16–18] is not correct.
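The rank dependence in Equation 15 is easy to verify numerically. The sketch below (helper names are ours) enumerates all binary words C of a small length and confirms that the fraction satisfying H × C = 0 equals 2^{−rank(H)}, not 2^{−m}, once a linearly dependent row is appended:

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bit mask."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot            # lowest set bit as the pivot
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def zero_syndrome_fraction(rows, n):
    """Exact fraction of the 2^n binary words C with H x C = 0 (mod 2)."""
    hits = sum(1 for c in range(1 << n)
               if all(bin(r & c).count("1") % 2 == 0 for r in rows))
    return hits / (1 << n)

# Two independent parity rows on 4 bits: probability 2^-2 = 1/4.
h = [0b1010, 0b0110]
assert zero_syndrome_fraction(h, 4) == 0.25
# Appending their XOR leaves the rank, and the probability, unchanged:
h_dep = h + [h[0] ^ h[1]]
assert gf2_rank(h_dep) == 2 and zero_syndrome_fraction(h_dep, 4) == 0.25
```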
Figure 3 shows the simulated probabilities that the elements of GF(2^{6}) are roots of a random block of length l = 63.
4. Blind recognition algorithm in softdecision situations
4.1 Code length estimation and blind block synchronization
Soft outputs of the channel provide additional information about the reliability of each decision symbol. In this section, we propose an approach to improve the recognition performance by employing these soft decisions.
We define c_{ r }(x) to be the codeword polynomial of a code block C_{ r }. According to the algebraic principles of cyclic codes, if α^{i} is a root of c_{ r }(x), we have c_{ r }(α^{i}) = 0 and H_{min}(α^{i}) × C_{ r } = 0. In softdecision situations, instead of verifying whether α^{i} is a root of each block, we can calculate p_{ j,i }, the probability that α^{i} is a root of the j th block in the received sequence as shown in Figure 2, and calculate p_{ i } in Equation 4 as follows:
where M is the number of blocks, as shown in Figure 2.
The elements of an extension field GF(2^{m}) can be separated into groups according to the minimal polynomials over GF(2^{m}). Each minimal polynomial has several roots in GF(2^{m}); in this paper, we call this set of roots a conjugate element group. The generator polynomial of a cyclic code can be factorized into minimal polynomials as follows:
Because the generator polynomial g(x) is a factor of a valid codeword polynomial c(x), the minimal polynomials in Equation 19 are also factors of c(x). So if an element α^{i} (1 ≤ i ≤ 2^{m} − 1) of GF(2^{m}) is a root of c(x), the elements which share the same minimal polynomial as α^{i} are also roots of c(x). Therefore, we need only calculate p′_{ λ } (1 ≤ λ ≤ q), the probability that the minimal polynomial m_{ λ }(x) (1 ≤ λ ≤ q) is a factor of c_{ r }(x), where q denotes the number of minimal polynomials over GF(2^{m}). According to this idea, we can modify Equation 18 into Equation 20 to calculate p′_{ λ } rather than p_{ i }. This modification reduces the computational complexity because the number of minimal polynomials over GF(2^{m}) is much smaller than the number of elements of GF(2^{m}). In Equation 20, p′_{j,λ} denotes the probability that m_{ λ }(x) is a factor of the codeword polynomial of the j th block in the observed window, as shown in Figure 2.
And the IDEF defined in Equation 3 should be modified to Equation 21:
To calculate p′_{j,λ} in Equation 20, i.e. the probability that a minimal polynomial m_{ λ }(x) is a factor of c_{ r }(x), we define the binary minimal parity-check matrix Hb_{min}(m_{ λ }(x)) corresponding to m_{ λ }(x) and calculate the probability that Hb_{min}(m_{ λ }(x)) × C_{ r } = 0.
The coefficients of m_{ λ }(x) are in GF(2) and m_{ λ }(x) can be written as follows:
where e is the degree of m_{ λ }(x), and g_{ e }, g_{e − 1}, ⋯, g_{1}, g_{0} are all in GF(2). From these coefficients of m_{ λ }(x), we can obtain the minimal-polynomial-based binary minimal parity-check matrix Hb_{min}(m_{ λ }(x)) with the following steps.

1) Assume the code length is l and initialize a matrix G as follows, where the numbers of rows and columns in Equation 23 are l − e and l, respectively:
G=\left(\begin{array}{cccccccc}{g}_{e} & {g}_{e-1} & \cdots & {g}_{1} & {g}_{0} & 0 & \cdots & 0\\ 0 & {g}_{e} & {g}_{e-1} & \cdots & {g}_{1} & {g}_{0} & \cdots & 0\\ & \ddots & \ddots & & \ddots & \ddots & & \\ 0 & \cdots & 0 & {g}_{e} & {g}_{e-1} & \cdots & {g}_{1} & {g}_{0}\end{array}\right) \qquad (23)
2) Transform the left (l − e) × (l − e) area of G into an identity matrix I by elementary row transformations, where Q is a matrix with l − e rows and e columns:
G=\left(I \mid Q\right) \qquad (24)
3) The minimal parity-check matrix is then obtained as:
H{b}_{min}\left({m}_{\lambda}\left(x\right)\right)=\left({Q}^{T} \mid I\right) \qquad (25)
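The three steps can be sketched in Python as follows, assuming the left square block of G is invertible over GF(2) (which holds in the example below); the function name is ours:

```python
def minimal_parity_check(coeffs, l):
    """Steps 1-3 above (a sketch; function name ours).

    coeffs = [g_e, ..., g_1, g_0] are the GF(2) coefficients of the
    degree-e minimal polynomial m_lambda(x); l is the assumed length.
    Assumes the left square block of G is invertible over GF(2).
    """
    e = len(coeffs) - 1
    # Step 1: (l-e) x l matrix of shifted coefficient rows (Eq. 23).
    G = [[0] * i + coeffs + [0] * (l - e - 1 - i) for i in range(l - e)]
    # Step 2: reduce the left block to the identity, G = (I | Q).
    for col in range(l - e):
        piv = next(r for r in range(col, l - e) if G[r][col])
        G[col], G[piv] = G[piv], G[col]
        for r in range(l - e):
            if r != col and G[r][col]:
                G[r] = [a ^ b for a, b in zip(G[r], G[col])]
    Q = [row[l - e:] for row in G]
    # Step 3: Hb_min = (Q^T | I_e), Eq. 25.
    return [[Q[r][k] for r in range(l - e)]
            + [int(k == j) for j in range(e)] for k in range(e)]

# m(x) = x^3 + x + 1 with l = 7: every shift of the coefficients is a
# codeword of the (7, 4) cyclic code and must give a zero syndrome.
H = minimal_parity_check([1, 0, 1, 1], 7)
for i in range(4):
    c = [0] * i + [1, 0, 1, 1] + [0] * (3 - i)
    assert all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)
```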
According to the algebraic principles of coding theory, we can calculate the syndromes corresponding to the matrix Hb_{min}(m_{ λ }(x)) obtained in Equation 25 [23]:
where n_{ r } is the number of rows of Hb_{min}(m_{ λ }(x)), i.e. the degree of m_{ λ }(x). If m_{ λ }(x) is a factor of c_{ r }(x) and no error occurs during the transmission, all syndromes equal zero. If the block contains errors or m_{ λ }(x) is not a factor of c_{ r }(x), not all syndromes equal zero. So when the minimal polynomials which are factors of the generator polynomial are correctly estimated, the probability of S = 0 is larger than in the case of an incorrect estimation. p′_{j,λ} in Equation 20 can be calculated as follows:
where P_{ r }[S_{ H }(k) = 0] (1 ≤ k ≤ n_{ r }) is the probability that S_{ H }(k) = 0, and k denotes the corresponding row number of Hb_{min}(m_{ λ }(x)). In fact, p′_{j,λ} calculated in Equation 27 is not exactly the probability that m_{ λ }(x) is a factor of the codeword polynomial; it is the mean value of the probabilities that the individual syndromes equal zero. The true probability would be obtained by calculating the probability that all syndromes are simultaneously zero. However, as shown in section 3.2, for incorrect coding parameter estimations, the probability that all syndromes equal zero depends on the degree of the corresponding minimal polynomial, so its distribution is not uniform. By using the mean value of P_{ r }[S_{ H }(k) = 0] to indirectly characterize the probability that a minimal polynomial is a factor of the codeword polynomial, the influence of the degrees of the different minimal polynomials is reduced. In this case, we can assume that, for random data, the distribution of the probabilities of the minimal polynomials being factors of the codeword polynomials is approximately uniform.
Jing proposed the Adaptive Belief Propagation (ABP) method for soft-input soft-output decoding of RS codes [26]. The main idea is to adapt the parity-check matrix of the code to the reliability of the received information bits at each iteration of the iterative decoding procedure. This idea is also employed in [22] to achieve blind frame synchronization of RS codes. The adaptation procedure reduces the impact of the most unreliable decision bits on the calculation of the syndromes. In our work, we also apply the adaptation algorithm introduced in [23] and [26] before using Equation 27. The adaptive processing for a given received codeword C_{ r } and a binary minimal parity-check matrix Hb_{min}(m_{ λ }(x)) consists of the following steps:

1) Combine Hb_{min}(m_{ λ }(x)) and C_{ r }^{T} to form a matrix H^{*}(m_{ λ }(x)) as follows:
(28) where r_{1}, r_{2}, …, r_{ l } are the soft-decision bits of the codeword C_{ r } and {h_{k,u} | 1 ≤ k ≤ n_{ r }, 1 ≤ u ≤ l} are the elements of Hb_{min}(m_{ λ }(x)) in GF(2).

2) Replace each r_{ u } (1 ≤ u ≤ l) in H^{*}(m_{ λ }(x)) by its absolute value to form a new matrix {H}_{r}^{*}\left({m}_{\lambda}\left(x\right)\right), then reorder the columns of {H}_{r}^{*}\left({m}_{\lambda}\left(x\right)\right) so that its first row is sorted in ascending order, and record the indexes. The absolute values of {r_{ u } | 1 ≤ u ≤ l} denote the reliabilities of the received soft-decision bits. As shown in Equation 29, \left|{r}_{{i}_{1}}\right|\le \left|{r}_{{i}_{2}}\right|\le \cdots \le \left|{r}_{{i}_{l}}\right|, and i_{1}, i_{2}, ⋯, i_{ l } are the column indexes of {r}_{{i}_{1}},{r}_{{i}_{2}},\cdots ,{r}_{{i}_{l}} in H^{*}(m_{ λ }(x)).
(29)
3) Transform {H}_{r}^{*}\left({m}_{\lambda}\left(x\right)\right) by elementary row operations so that the last n_{ r } elements of its first column contain only one “1”, at the top, as shown in Equation 30. The first row does not take part in the elementary transformations.
(30)
This transformation limits the influence of the most unreliable decision bit to only one syndrome element. Furthermore, we continue the elementary transformations on {H}_{r}^{\ast}\left({m}_{\lambda}\left(x\right)\right) to limit the number of “1”s in each of the following n_{ r } − 1 columns to one (except in the first row), as shown in Equation 31. When the bottom-left n_{ r } × n_{ r } area becomes an identity matrix, we stop the operation. The last n_{ r } rows of {H}_{r}^{*}(m_{ λ }(x)) then form a new matrix; we restore its original column order and call it Hb_{min_ a}(m_{ λ }(x)). Because the transformations are elementary, the hard-decision relationship Hb_{min_ a}(m_{ λ }(x)) × C_{ r } = 0 still holds if C_{ r } is a valid codeword. So we can calculate the probability P_{ r }[S_{ H }(k) = 0] from Hb_{min_ a}(m_{ λ }(x)). This replacement reduces the influence of the n_{ r } most unreliable decision bits.
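Steps 1–3 of the adaptation amount to a Gauss–Jordan elimination whose pivot columns are visited in order of increasing reliability. A minimal Python sketch follows (function name ours; the reliability row of Equations 29–31 is kept implicit rather than carried inside the matrix):

```python
def adapt_parity_matrix(H, r):
    """Reliability-driven adaptation sketched from steps 1-3 above
    (function name ours).  H is a list of GF(2) rows; r the soft
    outputs whose absolute values are the bit reliabilities."""
    H = [row[:] for row in H]
    order = sorted(range(len(r)), key=lambda u: abs(r[u]))  # step 2
    pivoted = set()                     # rows already carrying a pivot
    for col in order:                   # least reliable columns first
        piv = next((k for k in range(len(H))
                    if k not in pivoted and H[k][col]), None)
        if piv is None:
            continue                    # dependent column, move on
        for k in range(len(H)):         # clear the column elsewhere
            if k != piv and H[k][col]:
                H[k] = [a ^ b for a, b in zip(H[k], H[piv])]
        pivoted.add(piv)
        if len(pivoted) == len(H):      # identity now sits on the n_r
            break                       # least reliable usable columns
    return H

# The least reliable bits now each touch a single parity row, and the
# elementary row operations preserve H x C = 0 for valid codewords C.
H = [[1, 1, 1, 0, 1, 0, 0], [0, 1, 1, 1, 0, 1, 0], [1, 1, 0, 1, 0, 0, 1]]
Ha = adapt_parity_matrix(H, [0.1, -0.2, 1.5, 0.9, 2.0, 1.1, 0.8])
assert sum(row[0] for row in Ha) == 1 and sum(row[1] for row in Ha) == 1
```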
In this paper, we assume that the transmitter sends a binary sequence of codewords using binary phase shift keying (BPSK) modulation, i.e. +1 and −1 are the modulated symbols for 0 and 1, respectively. The modulation operation from code bit c to modulated symbol s can be written as s = 1 − 2c, and we assume that the propagation channel is a binary-input symmetric channel corrupted by additive white Gaussian noise (AWGN). For each configuration, the information symbols in the codes are chosen randomly. A received symbol r can be expressed as r = s + w, where w is the AWGN.
According to the previous assumptions, s is an equally probable binary random variable and
The noise w follows a normal distribution with the probability density function (PDF)
So the conditional PDF of r is
where {\sigma}^{2}=\frac{1}{2\left({E}_{s}/{N}_{0}\right)} is the variance of the noise.
For a given received bit r, we can obtain the following conditional probabilities:
Let r = [r_{1}, r_{2}, …, r_{ n }, r_{n+1}, …] be a received soft-decision vector corresponding to the random modulated vector s = [s_{1}, s_{2}, …, s_{ n }, s_{n + 1}, …]. We now calculate the conditional probabilities of s_{1} ⊕ s_{2} = +1 and s_{1} ⊕ s_{2} = −1. According to the mapping defined by s = 1 − 2c, we have
Similarly, we can calculate the conditional probabilities of s_{1} ⊕ s_{2} ⊕ s_{3} = +1 and s_{1} ⊕ s_{2} ⊕ s_{3} = −1 as follows:
We define the XOR-SUM operation as {\bigoplus}_{u=1}^{n}{s}_{u}={s}_{1}\oplus {s}_{2}\oplus \cdots \oplus {s}_{n} and assume that the conditional probabilities of the XOR-SUM can be expressed as in Equation 41:
Then, we have
According to the induction principle, the expression of the conditional probabilities in Equation 41 turns out to be true, and could be simplified as follows:
By employing Equation 44, we can calculate the probability P_{ r }[S_{ H }(k) = 0] as follows:
where w_{ k } is the number of ones in the k th row of the adapted binary minimal parity-check matrix Hb_{min_ a}(m_{ λ }(x)), and u_{ v } is the position of the v th nonzero element in the k th row of Hb_{min_ a}(m_{ λ }(x)). {s}_{{u}_{v}} and {r}_{{u}_{v}} are the u_{ v }th modulated symbol at the transmitter and the corresponding soft-decision output at the receiver, respectively.
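Under the BPSK/AWGN model above, the conditional probabilities of Equations 41–44 take the well-known tanh product form P_{ r }[⊕ s_{ u } = +1 | r] = (1 + ∏ tanh(r_{ u }/σ²))/2, since tanh(r_{ u }/σ²) is the conditional mean of s_{ u }. The sketch below evaluates P_{ r }[S_{ H }(k) = 0] for one parity row this way; the function name is ours, and the expression reflects our reading of Equation 45:

```python
import math

def prob_syndrome_zero(row, r, sigma2):
    """P_r[S_H(k) = 0] for one parity row, via the tanh product rule
    that Eqs. 44-45 reduce to under BPSK/AWGN (our reading; name ours).

    row: GF(2) entries of the k th row of Hb_min_a(m_lambda(x));
    r: soft-decision outputs; sigma2: noise variance."""
    prod = 1.0
    for h, ru in zip(row, r):
        if h:                                  # only the w_k nonzero taps
            prod *= math.tanh(ru / sigma2)     # conditional mean of s_u
    return 0.5 * (1.0 + prod)

# Strong consistent observations: syndrome is almost surely zero.
p_sure = prob_syndrome_zero([1, 1, 0, 1], [5.0, 4.0, 0.3, 6.0], 0.5)
# One near-zero (unreliable) bit drags the probability toward 1/2.
p_weak = prob_syndrome_zero([1, 1, 0, 1], [5.0, 0.05, 0.3, 6.0], 0.5)
assert p_sure > 0.99 and abs(p_weak - 0.5) < 0.1
```

The second case illustrates why the adaptation of section 4.1 matters: a single unreliable bit in a parity equation makes its syndrome nearly uninformative.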
In shortened-code cases, a codeword with block length l and shortened length l_{ s } can be obtained by taking the last l elements of a codeword of regular length (l + l_{ s }), as follows:
where the first l_{ s } elements of C_{ w } are zeros. Therefore, we can simply obtain the minimal paritycheck matrices of the shortened codes by deleting the first l_{ s } columns of Hb_{min}(m_{ λ }(x)).
4.2 Recognition of generator polynomials
After the code length and synchronization position estimation, the extension field degree m corresponding to the code being recognized is also obtained. We can then list the minimal polynomials over GF(2^{m}) and find out which of them are factors of the generator polynomial. These minimal polynomials can also be recognized from the probabilities of the syndromes equaling zero.
In the procedure of code length and synchronization position estimation, we have already calculated the probability that a minimal polynomial is a factor of the received codeword polynomials. We assume that the estimated code length and extension field degree are l and m, that the number of minimal polynomials over GF(2^{m}) is q, and that m_{1}(x), m_{2}(x), …, m_{ q }(x) are these minimal polynomials.
According to Equation 45, we can calculate the k th syndrome for a given minimal parity-check matrix Hb_{min}(m_{ λ }(x)). Equation 47 gives the log-likelihood ratio (LLR) of P_{ r }[S_{ H }(k) = 0], where H = Hb_{min}(m_{ λ }(x)):
We then propose to calculate a likelihood criterion (LC) for m_{ λ }(x) (1 ≤ λ ≤ q) being a factor of the generator polynomial as follows:
where Hb_{min_ a}(m_{ λ }(x)) is the adapted minimal parity-check matrix corresponding to the minimal polynomial m_{ λ }(x), M is the number of packets in the observed window W as shown in Figure 2, n_{ r } is the number of rows of Hb_{min_ a}(m_{ λ }(x)), and {L}_{j}\left[{S}_{H{b}_{min\_a}\left({m}_{\lambda}\left(x\right)\right)}\left(k\right)\right] is the LLR defined by Equation 47 and calculated on the j th block of the observed window W. According to Equation 48, we can calculate the LCs of all the minimal polynomials over GF(2^{m}). By comparing the LCs, we can choose the minimal polynomials whose LCs are obviously higher than the others as the estimated factors of the generator polynomial; the generator polynomial is then obtained.
Moreover, we can test whether the product of the several most likely minimal polynomials is a factor of the generator polynomial, to increase the successful recognition rate: through the adaptive processing of the parity-check matrices, the more parity equations we consider, the better we are able to construct a parity matrix which is sparse on the less reliable bits. For the convenience of automatic recognition by computer programs, we propose the following procedure to estimate the optimal parity-check matrix:
Step 1: Calculate the LCs to form a vector L:
Step 2: Rank the vector L from the highest to the lowest, in order to form a new vector L_{ R } as follows:
and record the indexes:
where λ_{ ω }(1 ≤ ω ≤ q) denotes the index of L\left({m}_{{\lambda}_{\omega}}\left(x\right)\right) in L.
Step 3: Let ω increase from 1 to q, combine the binary minimal parity matrices for the minimal polynomials {m}_{{\lambda}_{1}}\left(x\right)\dots {m}_{{\lambda}_{\omega}}\left(x\right), in order to form H_{ ω } as follows:
After adaptive processing of H_{ ω }, calculate the LC of H_{ ω } × C_{ r } = 0 (1 ≤ ω ≤ q) by Equation 53 and obtain the LC vector L_{ H } as shown in Equation 54.
Step 4: Find the maximal element of L_{ H } and record the corresponding matrix
Step 5: According to Equations 49 and 50, we can find the polynomials {m}_{{\lambda}_{1}}\left(x\right)\phantom{\rule{1em}{0ex}}\dots \phantom{\rule{1em}{0ex}}{m}_{{\lambda}_{\widehat{\omega}}}\left(x\right) and write the generator polynomial as follows:
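Steps 1–5 can be summarized as a small search over stacked parity-check matrices. In the Python sketch below, `combined_lc` is an assumed callable standing in for the adaptive processing plus Equation 53, and `matrices[i]` is the binary minimal parity-check matrix of the i th minimal polynomial as a list of rows; all names are ours:

```python
def choose_factors(lcs, matrices, combined_lc):
    """Steps 1-5 as a search sketch (all names ours).

    lcs[i]: per-polynomial likelihood criterion of Eq. 48;
    matrices[i]: binary minimal parity-check matrix, a list of rows;
    combined_lc: assumed callable standing in for adaptation + Eq. 53.
    Returns the indices of the estimated generator polynomial factors."""
    order = sorted(range(len(lcs)), key=lambda i: lcs[i], reverse=True)
    best_score, best_omega, stack = float("-inf"), 0, []
    for omega, idx in enumerate(order, start=1):   # step 3: stack H_omega
        stack = stack + matrices[idx]
        score = combined_lc(stack)
        if score > best_score:                     # step 4: best omega
            best_score, best_omega = score, omega
    return order[:best_omega]                      # step 5

# Toy check: a criterion rewarding rows of the two "true" matrices makes
# the search keep exactly those two factors.
true_rows = {(1, 0, 1), (1, 1, 0)}
score = lambda st: sum(1 if tuple(r) in true_rows else -1 for r in st)
picked = choose_factors([5.0, 1.0, 4.0],
                        [[[1, 0, 1]], [[0, 1, 1]], [[1, 1, 0]]], score)
assert picked == [0, 2]
```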
In our work, however, we found that some minimal polynomials are easily lost. These minimal polynomials have minimal parity-check matrices with few rows, so the adaptive processing can only reduce the influence of a small number of unreliable decision bits. For example, consider the following minimal polynomials corresponding to the elements α^{1}, α^{9} and α^{0} in GF(2^{6}):
The degrees of m_{1}(x), m_{2}(x) and m_{3}(x) are 6, 3, and 1, respectively. Therefore, the numbers of rows of the binary minimal parity-check matrices Hb_{min}(m_{1}(x)), Hb_{min}(m_{2}(x)) and Hb_{min}(m_{3}(x)) corresponding to m_{1}(x), m_{2}(x) and m_{3}(x) are also 6, 3, and 1, respectively, so after adaptive processing they can limit the influence of 6, 3, and 1 unreliable decision bits, respectively. For m_{2}(x) and m_{3}(x), the LCs of Hb_{min_a}(m_{2}(x)) and Hb_{min_a}(m_{3}(x)), especially Hb_{min_a}(m_{2}(x)), may be lower than those of the incorrect minimal polynomials when the signal-to-noise ratio (SNR) is low. In this case, the ranking of LCs in Equation 50 may not be correct, and the generator polynomial recognition fails. To solve this problem, we can additionally combine these minimal parity-check matrices with the H_{ω̂} obtained in Step 4 described previously and check whether the corresponding minimal polynomials are also factors of the generator polynomial. The details of the additional steps are listed below:
Step 6: List the binary minimal parity-check matrices over GF(2^{m}) which have few rows: Hb_{min}(m_{L1}(x)), Hb_{min}(m_{L2}(x)), …, Hb_{min}(m_{Lη}(x)), where η represents the number of binary minimal parity-check matrices with few rows.
Step 7: Record LC_{max} = LC(H_{ω̂}) and initialize a variable τ to 1.
Step 8: Combine H_{ω̂} and Hb_{min}(m_{Lτ}(x)) to form a new parity-check matrix H_{ω̂,τ} as follows:
Step 9: If LC(H_{ω̂,τ}) > 0.9 × LC_{max}, let H_{ω̂} = H_{ω̂,τ} and LC_{max} = max(LC_{max}, LC(H_{ω̂,τ})).
Step 10: If τ = η, execute step 11; else, let τ = τ + 1 and go back to step 8.
Step 11: Output the newly obtained H_{ω̂} as the final estimate of the parity-check matrix and obtain the generator polynomial from the minimal polynomials corresponding to H_{ω̂}.
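Steps 6 to 11 can be sketched as a simple acceptance loop over the few-row matrices. As before, `lc_of_combined` is a placeholder for the adapted-matrix LC computation, and the 0.9 acceptance ratio is the experimentally chosen value from Step 9.

```python
import numpy as np

def append_low_row_factors(H_hat, low_row_mats, lc_of_combined, ratio=0.9):
    """Steps 6-11: try to append each few-row minimal parity-check matrix
    to the current estimate H_hat; accept it when the LC of the enlarged
    matrix stays above ratio * LC_max (0.9 in the paper)."""
    lc_max = lc_of_combined(H_hat)               # Step 7
    accepted = []
    for tau, Hb in enumerate(low_row_mats):      # Steps 8 and 10
        H_try = np.vstack([H_hat, Hb])
        lc = lc_of_combined(H_try)
        if lc > ratio * lc_max:                  # Step 9
            H_hat = H_try
            lc_max = max(lc_max, lc)
            accepted.append(tau)
    return H_hat, accepted                       # Step 11
```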
5. General recognition procedure
In this section, we present the general procedure for the blind recognition of binary cyclic codes based on the principles proposed in the previous sections. Before the recognition, some prior information could help to estimate the possible range of the code length l. Then, we traverse all the possible values of the code length l and the codeword starting position t and choose the parameter pair (l, t) which maximizes the IDEF defined in Equation 21 as the estimated code length and block synchronization position. Note that to get the minimal polynomials for each code length l over an extension field GF(2^{m}), we must know the field exponent m of the code. For an ordinary binary cyclic code, the code length is 2^{m} − 1, while the code length of a shortened code is l = 2^{m} − 1 − l_{s}, where l_{s} is the shortened length. Therefore, the minimal value of the field exponent m for a code length l is the smallest integer k such that l < 2^{k}. The maximal value of m should be estimated with some prior information. For each code length l and synchronization position t, we traverse all the possible extension field degrees to calculate ∆H, and choose the maximum one as ∆H(l,t). After the code length estimation, we search for the minimal polynomials which are factors of the generator polynomial by the algorithm described in section 4.2.
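The minimal field exponent described above is directly computable; a small sketch:

```python
def min_field_exponent(l):
    """Smallest integer k with l < 2**k: the minimal GF(2^m) field
    exponent m for a (possibly shortened) cyclic code of length l."""
    k = 1
    while l >= 2 ** k:
        k += 1
    return k
```

For example, a code of length 63 = 2^6 − 1 gives m = 6, and a shortened code of length 51 also gives m = 6 as its minimal candidate exponent.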
The general recognition procedure is listed below:

Step 1: According to some prior information, set the searching range of the code length l, i.e., set the minimal and maximal code lengths l_{min} and l_{max}.

Step 2: Design a window W whose length L is at least 10 × l_{max}, i.e., M = 10 in Figure 2.

Step 3: Fill the window W with the received soft-decision bits.

Step 4: Set the code length l = l_{min}.

Step 5: Set the initial synchronization position t at 0, which is the starting position of W.

Step 6: Assuming the code length is l and the synchronization position is t, calculate ∆H. Note that since the window W contains more than one assumed codeword, we calculate ∆H on all the codewords and take their mean as ∆H(l,t).

Step 7: If t < l, then let t = t + 1 and go back to step 6; if t = l, then jump to step 8.

Step 8: If l < l_{ max }, then let l = l + 1 and go back to step 5; if l = l_{ max }, then jump to step 9.

Step 9: Compare all the calculated ∆H(l,t), select the maximum one and get the corresponding values of l, t and m as the estimated code length, synchronization position and the degree of the GF of the recognized codes, respectively.

Step 10: Let the code length and synchronization position be the estimated parameters l and t, fetch M codewords from the observed window W, and list the minimal polynomials over GF(2^{m}), which are m_{1}(x), m_{2}(x), …, m_{q}(x).

Step 11: Calculate the LCs of the minimal polynomials over GF(2^{m}) by Equations 47 and 48 for the M packets in W, and get the LC vector as shown in Equation 49.

Step 12: Recognize the generator polynomial following the steps described in section 4.2.
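The length-and-synchronization search of steps 4 to 9 is an exhaustive argmax over (l, t), which can be sketched as below. Here `idef` is a placeholder for the mean IDEF ∆H(l,t) over all assumed codewords in the window (Equation 21), not implemented here.

```python
def search_length_and_sync(soft_bits, l_min, l_max, idef):
    """Steps 4-9 of the general procedure: exhaustive search over the
    code length l and synchronization position t, keeping the pair that
    maximizes the IDEF.  idef(soft_bits, l, t) stands in for the mean
    Delta_H over all assumed codewords in the observed window."""
    best_d, best_l, best_t = float("-inf"), None, None
    for l in range(l_min, l_max + 1):       # step 8: loop over code lengths
        for t in range(l):                  # step 7: loop over sync positions
            d = idef(soft_bits, l, t)       # step 6: mean IDEF for (l, t)
            if d > best_d:
                best_d, best_l, best_t = d, l, t
    return best_l, best_t                   # step 9: the argmax pair
```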
Finally, we need a detection threshold to reject random data. When the received data stream is not encoded by binary cyclic codes, the data can be considered random for all the coding parameters. The recognizer should report a rejection of the estimated parameters when the parity-check matrix is not sufficiently likely.
We define the mean value of p′ _{j,λ} for all the blocks in the observed window as follows:
where p′_{j,λ} is calculated by Equation 27 according to the recognized parity-check matrix H_{ω̂} (H in Equation 27 is the recognized parity-check matrix H_{ω̂}, and n_{r} denotes the number of rows of H_{ω̂}). As shown in Figure 4, the distributions of mean(p′_{j,λ}) for random data and for coded data with correctly estimated coding parameters are separated. The distance between the two distributions is mainly determined by the noise level, the number of rows in H_{ω̂}, and the number of code blocks in the observed window. Experimentally, we propose a threshold δ of about 0.6 to decide whether the data stream is random. After the estimation of the coding parameters, we calculate mean(p′_{j,λ}) for all complete code blocks in the observed window; if mean(p′_{j,λ}) is smaller than δ, we reject the recognition result.
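The rejection rule reduces to a single threshold comparison. In the sketch below, the per-block values p′_{j,λ} are assumed to have been computed already by Equation 27; only the averaging and the δ ≈ 0.6 decision are shown.

```python
def accept_recognition(p_prime_blocks, delta=0.6):
    """Random-data rejection test: accept the recognized coding
    parameters only when the mean of p'_{j,lambda} over all complete
    code blocks in the window reaches the threshold delta (about 0.6
    in the paper); otherwise the data is treated as random."""
    mean_p = sum(p_prime_blocks) / len(p_prime_blocks)
    return mean_p >= delta
```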
6. Simulations
In this section, we show the efficiency of our proposed blind recognition algorithm by simulations. In the simulations, we assume that the searching range of the code length is 7 to 128 and that the observed window contains N = 3,000 consecutive soft-decision bits from the BPSK demodulator. Meanwhile, we assume the data stream is corrupted by AWGN on the channel.
When employing the proposed algorithm to recognize the BCH (63, 51) code, the simulation results for code length and synchronization position recognition are shown in Figures 5, 6, 7 and 8. The SNR is E_{s}/N_{0} = 5 dB and the corresponding BER is 10^{−2.19}. Figure 5 shows the values of p′_{λ} defined in Equation 20 when l = 63, m = 6, and block synchronization is achieved. Figure 6 shows the case of other values of l and m. The two figures show that when the code length and synchronization position are correctly estimated, some minimal polynomials have higher probabilities of being factors of the received codeword polynomials: the obviously larger values are calculated on the minimal polynomials which are factors of the generator polynomial. If the parameters are not correctly estimated, this feature does not appear. Figure 7 shows the IDEF ∆H for different code lengths l and synchronization positions t, where the first bit of the observed window is the 40th bit of a codeword. When l = 63 and t = 23, the IDEF is the largest. Thus, we propose l = 63 and t = 23 + lk (k ∈ ℤ+) as the estimates of the code length and synchronization positions, which are consistent with the simulation settings.
The performance of the algorithm is affected by the channel quality. In Figure 8, we draw the performance of the proposed algorithm when applied to code length recognition of several different binary cyclic codes. The curves depict the false recognition probabilities (FRP) of the code length and synchronization position estimations at different SNRs. In Figure 8, we also compare the performance of our proposed recognition algorithm with the hard-decision-based RIDERS algorithm proposed in [16–18]. The FRP of our proposed algorithm falls rapidly as the SNR increases, and it is much lower than that of the previous algorithms at every SNR value.
After the code length estimation, the generator polynomial can be recognized by searching for the minimal polynomials which are factors of the generator polynomial, following the steps proposed in section 4.2. We assume that the data stream sent by the transmitter is coded by a cyclic code whose code length and information length are 63 and 36, respectively. We call it the cyc (63, 36) code in this paper. The generator polynomial of the code is the product of the following minimal polynomials, which include low-degree minimal polynomials:
The coded data is modulated by BPSK and corrupted by AWGN with SNR E_{s}/N_{0} = 1.5 dB, and the corresponding hard-decision BER is about 4 × 10^{−2}. The recognition procedure is shown in Figures 9, 10 and 11.
There are 13 minimal polynomials over GF(2^{6}), which are listed below:
Figure 9 shows the original LCs of the different minimal polynomials over GF(2^{6}) being factors of the codeword polynomials in the observed window. We rank the original LCs from the highest to the lowest to form a new vector L_{R} and record the index I (defined in Equation 51) as follows:
Then we let ω increase from 1 to 13, combine the binary minimal parity-check matrices of the minimal polynomials m_{I(1)}(x) … m_{I(ω)}(x) to form H_{ω} by Equation 52, and calculate the LCs of H_{ω} × C_{r} = 0 (1 ≤ ω ≤ q) by Equation 48. The LCs are shown in Figure 10. We can see that the LC of H_{4} is the highest. H_{4} is obtained by combining the minimal parity-check matrices Hb_{min}(m_{4}(x)), Hb_{min}(m_{1}(x)), Hb_{min}(m_{2}(x)) and Hb_{min}(m_{3}(x)). Furthermore, we list the low-degree minimal polynomials to check whether they are factors of the generator polynomial. The low-degree minimal polynomials are m_{L1}(x) = m_{5}(x), m_{L2}(x) = m_{9}(x), m_{L3}(x) = m_{11}(x) and m_{L4}(x) = m_{13}(x). We record LC_{max} = LC(H_{4}) = 4,406.8 and execute steps 8 to 10 described in section 4.2. Finally, we obtain the values of LLR(H_{4,k}) (1 ≤ k ≤ 4) in Table 1.
It is obvious that LLR(H_{4,1}) > 0.9 × LLR(H_{4}) and LLR(H_{4,4}) > 0.9 × LLR(H_{4,1}). Therefore, H_{4,4} should be considered the final recognized parity-check matrix. According to section 4.2, H_{4,4} is obtained by combining the minimal parity-check matrices Hb_{min}(m_{4}(x)), Hb_{min}(m_{1}(x)), Hb_{min}(m_{2}(x)), Hb_{min}(m_{3}(x)), Hb_{min}(m_{5}(x)) and Hb_{min}(m_{13}(x)), so we can write the generator polynomial as follows:
The recognition result is consistent with the simulation settings.
Figure 11 shows the performance of the proposed generator polynomial recognition algorithm when applied to several different binary cyclic codes. The curves show the FRP at different noise levels. As E_{s}/N_{0} rises, the curves fall rapidly. We also compare our proposed algorithm with the previous hard-decision-based recognition algorithms proposed in [16–18]. The results show that the recognition performance is obviously improved in soft-decision situations.
After the coding parameter recognition, an additional testing program checks whether the data is random. The principle is described in section 4.2. We list the error-rejection probabilities (ERPs) for some binary cyclic codes and the error-acceptance probabilities (EAPs) for random data in Table 2. The ERP level is much lower than the FRP; especially when the noise level is low enough, the ERPs are nearly zero. All the random data is rejected, that is to say, nearly no recognition result on random data is accepted.
7. Conclusion
A blind recognition method for binary cyclic codes for noncooperative communications and ACM in soft-decision situations is proposed. The code length and synchronization positions are estimated by checking the minimal parity-check matrices. After that, the whole check matrix and the generator polynomial are reconstructed by searching for the minimal polynomials which are factors of the generator polynomial. The recognition method proposed in this paper is based on the earlier published RIDERS algorithm with some significant improvements. By calculating the probability that a minimal polynomial is a factor of the received codewords, rather than checking whether an element in the extension field is a root of the codewords, we extend the RIDERS algorithm to soft-decision situations. To calculate the probability that a minimal polynomial is a factor of a received codeword, we adopt some algorithms and ideas introduced in soft-decision-based decoding methods and blind-frame-synchronization approaches for RS and BCH codes in the literature. Although there is always a loss of performance when these algorithms, which are particularly well suited to LDPC codes, are applied to cyclic codes, the algorithm proposed in this paper still achieves better recognition performance for binary cyclic codes in a soft-decision situation than in a hard-decision situation. Moreover, by the reliability-based adaptive processing, we reduce the influence of the most unreliable decision bits on the calculation of the syndromes, even though the parity-check matrices of binary cyclic codes are not sparse. The application field of the recognition method is also extended to general binary cyclic codes, including shortened codes. To the best of our knowledge, this paper is the first publication in the literature that introduces an approach for completely blind recognition of binary cyclic codes in soft-decision situations.
Simulations show that our proposed blind recognition algorithm yields obviously better performance than the previous ones.
References
Choqueuse V, Marazin M, Collin L, Yao KC, Burel G: Blind reconstruction of linear space-time block codes: a likelihood-based approach. IEEE Trans. Signal Process. 2010, 58(3):1290–1299.
Burel G, Gautier R: Blind estimation of encoder and interleaver characteristics in a non cooperative context. In Proceedings of IASTED International Conference on Communications, Internet and Information Technology. Scottsdale, AZ; 2003.
Marazin M, Gautier R, Burel G: Algebraic method for blind recovery of punctured convolutional encoders from an erroneous bitstream. IET Signal Process. 2012, 6(2):122–131. 10.1049/iet-spr.2010.0343
Moosavi R, Larsson EG: A fast scheme for blind identification of channel codes. In Proceedings of the 54th GLOBECOM 2011. Houston; 2011.
Goldsmith AJ, Chua SG: Adaptive coded modulation for fading channels. IEEE Trans. Commun. 1998, 46(5):595–602. 10.1109/26.668727
Marazin M, Gautier R, Burel G: Dual code method for blind identification of convolutional encoder for cognitive radio receiver design. In Proceedings of IEEE Globecom Workshops. Honolulu; 2009.
Wang F, Huang Z, Zhou Y: A method for blind recognition of convolution code based on Euclidean algorithm. In Proceedings of IEEE WiCom. Shanghai; 2007:2125.
Dingel J, Hagenauer J: Parameter estimation of a convolutional encoder from noisy observations. In Proceedings of IEEE ISIT. Nice; 2007.
Marazin M, Gautier R, Burel G: Blind recovery of the second convolutional encoder of a turbo-code when its systematic outputs are punctured. Mil. Tech. Acad. Rev. 2009, XIX(2):213–232.
Yongguang Z: Blind recognition method for the turbo coding parameters. J. Xidian Univ. 2011, 38(2):167–172.
Marazin M, Gautier R, Burel G: Blind recovery of k/n rate convolutional encoders in a noisy environment. EURASIP J. Wirel. Commun. Netw. 2011, 2011(168):1–9.
Cluzeau M: Block code reconstruction using iterative decoding techniques. In Proceedings of IEEE ISIT. Seattle; 2006.
Barbier J, Sicot G, Houcke S: Algebraic approach for the reconstruction of linear and convolutional error correcting codes. In Proceedings of World Academy of Science, Engineering and Technology. Venice, Italy; 2006.
Barbier J, Letessier J: Forward error correcting codes characterization based on rank properties. In Proceedings of International Conferences on Wireless Communications. Nanjing; 2009.
Junjun Z, Yanbin L: Blind recognition of low code-rate binary linear block codes. Radio Eng. 2009, 39(1):19–22.
Niancheng W, Xiaojing Y: Recognition methods of BCH codes. Elec. Warfare 2010, 2010(6):30–34.
Xiaojing Y, Niancheng W: Recognition method of BCH codes on roots information dispersion entropy and roots statistic. J. Detect. Contr. 2010, 32(3):69–73.
Xizai L, Zhiping H, Shaojing S: Fast recognition method of generator polynomial of BCH codes. J. Xidian Univ. 2011, 38(6):187–191.
Lin S, Costello DJ: Reliability-based soft-decision decoding algorithms for linear block codes. In Error Control Coding: Fundamentals and Applications. 2nd edition. Englewood Cliffs, NJ: Pearson Prentice Hall; 2004:395–452.
Imad R, Sicot G, Houcke S: Blind frame synchronization for error correcting codes having a sparse parity check matrix. IEEE Trans. Commun. 2009, 57(6):1574–1577.
Imad R, Houcke S: Theoretical analysis of a MAP based blind frame synchronizer. IEEE Trans. Wireless Commun. 2009, 8(11):5472–5476.
Imad R, Poulliat C, Houcke S, Gadat G: Blind frame synchronization of Reed-Solomon codes: non-binary vs. binary approach. In Proceedings of IEEE SPAWC 2010. Marrakech, Morocco; 2010.
Imad R, Houcke S, Jego C: Blind frame synchronization of product codes based on the adaptation of the parity check matrix. In Proceedings of IEEE ICC 2009. Dresden, Germany; 2009.
Lin S, Costello DJ: Linear block codes. In Error Control Coding: Fundamentals and Applications. 2nd edition. Englewood Cliffs, NJ: Pearson Prentice Hall; 2004:66–98.
Lin S, Costello DJ: Introduction to algebra. In Error Control Coding: Fundamentals and Applications. 2nd edition. Englewood Cliffs, NJ: Pearson Prentice Hall; 2004:25–65.
Jiang J, Narayanan KR: Iterative soft-input-soft-output decoding of Reed-Solomon codes by adapting the parity check matrix. IEEE Trans. Inf. Theory 2006, 52(8):3746–3756.
Acknowledgements
This paper was supported by the graduate innovation fund of the National University of Defense Technology. We also wish to thank the anonymous reviewers who helped to improve the quality of the paper.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Jing, Z., Zhiping, H., Shaojing, S. et al. Blind recognition of binary cyclic codes. J Wireless Com Network 2013, 218 (2013). https://doi.org/10.1186/1687-1499-2013-218
DOI: https://doi.org/10.1186/1687-1499-2013-218
Keywords
 Blind recognition
 Channel coding
 Cyclic codes
 Reverse engineering