Multi-non-binary turbo codes
EURASIP Journal on Wireless Communications and Networking volume 2013, Article number: 279 (2013)
Abstract
This paper presents a new family of turbo codes called multi-non-binary turbo codes (MNBTCs) that generalizes the concept of turbo codes to multi-non-binary (MNB) parallel concatenated convolutional codes (PCCCs). An MNBTC incorporates, as component encoders, recursive and systematic multi-non-binary convolutional encoders. The more compact data structure of these encoders gives MNBTCs some advantages over other types of turbo codes, such as better asymptotic behavior, better convergence, and reduced latency. This paper presents in detail the structure and operation of an MNBTC: MNB encoding, trellis termination, and Max-Log-MAP decoding adapted to the MNB case. It also shows an example of an MNBTC whose performance is compared with the state-of-the-art turbo code adopted in the DVB-RCS2 standard.
1 Introduction
Two decades after their introduction in [1], turbo codes (TCs) have found their utility in numerous communication systems. TCs can be found in LTE, DVB, or deep-space communication standards [2–4]. This success was made possible by numerous studies over the past years, leading to the diversification of TC families and to the emergence of new codes whose decoding is based on the principle of iterative, or turbo, decoding. Thus, the evolution of TCs in terms of the component convolutional encoder, after the now classic single-binary turbo codes (SBTCs) [1], was followed by double/multi-binary turbo codes (D/MBTCs) [5, 6] and non-binary turbo codes (NBTCs) [7–9]. These new TC families aim to improve SBTC performance, especially by lowering the error floor. By proposing a new family of multi-non-binary turbo codes (MNBTCs), this paper may be seen as a continuation of these efforts. The MNBTC was first introduced in [10], but this paper presents a more complete description of MNBTCs as well as an example of a more efficient MNBTC. An MNBTC has recursive and systematic convolutional component encoders with several non-binary inputs. A similar evolution from single-binary to (multi-)non-binary occurred for low-density parity-check (LDPC) codes [11], although much earlier [12].
The first benefit brought by the new family of MNBTCs is latency reduction, the data block being more compact. Another benefit, supported by practical results, is a lower error floor. Basically, simulations show that the waterfall region extends below a frame error rate (FER) of 10^{−8}. Furthermore, MNB turbo decoders have better convergence at high SNRs.
In addition, MNBTCs, like non-binary LDPCs (NB-LDPCs), can easily be combined with high-order modulations, yielding increased bandwidth efficiency. The price for these benefits is the increased complexity of the component code trellis and, therefore, of the computational effort of the decoder. However, given the evolution of processing capacity, the complexity of non-binary codes is expected to no longer be a disadvantage relative to binary codes. This is confirmed by the extensive studies of NB-LDPC codes [13–17].
The structure of the paper is as follows: in Section 2, we describe the MNBTC encoder and decoder, covering the component codes, the MNBTC structure, the encoding process and trellis termination, the Max-Log-MAP algorithm adjusted to the MNB case, and details of interleaving. In Section 3, we propose a memory-3 MNBTC with two inputs offering a good trade-off between performance and complexity. The process of component encoder selection is explained, and performance is assessed through simulations over both the AWGN and the Rayleigh fading channels, in conjunction with BPSK and 16-QAM modulations. Performance is compared with the DBTC of the DVB-RCS2 standard [3] and also with some NB-LDPCs from the literature. Section 4 concludes the paper.
2 The structure of MNBTCs
2.1 Multi-non-binary convolutional encoders
In this section, we describe the encoding scheme of the constituent codes of MNBTCs. Each constituent encoder has R non-binary inputs taking values in the Galois field GF(2^{Q}) and is therefore referred to as multi-non-binary (MNB). In Figure 1, we present the general scheme of a memory-M recursive systematic MNB convolutional encoder (MNBCE) with rate R_{c} = R/(R + 1). This scheme is known as the observer canonical form [18]. Each cell of the register in Figure 1 stores a vector of Q bits at a time. All links are assumed to have a width equal to Q in order to carry Q-bit vectors. Each block labeled g_{m,r}, with 0 ≤ m ≤ M, 0 ≤ r ≤ R, represents a multiplier in GF(2^{Q}) by the generator polynomial coefficient g_{m,r}. The adders also perform their sums in GF(2^{Q}). At time n, the R encoder inputs consist of a word of symbols ${u}^{n}=\left[{u}_{R}^{n}\;\dots\;{u}_{2}^{n}\;{u}_{1}^{n}\right]$ with ${u}_{r}^{n}=\left[{u}_{r,Q-1}^{n}\;\dots\;{u}_{r,1}^{n}\;{u}_{r,0}^{n}\right]$, 1 ≤ r ≤ R, 0 ≤ n < N, where N is the interleaver length and ${u}_{r,q}^{n}$ are binary coefficients. The MNB convolutional encoder generates R + 1 outputs, of which the first R symbols correspond to the R inputs (systematic part) and the last symbol is a redundant symbol. The current encoder state at time n is given by the content of the shift register, ${S}_{0}^{n}$, ${S}_{1}^{n}$, …, ${S}_{M-1}^{n}$. We define the encoder state vector as ${s}^{n}=\left[{s}_{M-1}^{n}\;\dots\;{s}_{1}^{n}\;{s}_{0}^{n}\right]$.
To simplify notation, u^{n} denotes both the vector $\left[{u}_{R}^{n}\;\dots\;{u}_{2}^{n}\;{u}_{1}^{n}\right]$ and the corresponding scalar, an element of GF(2^{Q∙R}), the distinction being made from context. Similarly, ${u}_{r}^{n}$ is in GF(2^{Q}) and s^{n} is in GF(2^{Q∙M}). Also, note that temporal indices are placed as superscripts (top right) and all other indices as subscripts (bottom right).
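As an illustration of the arithmetic involved, the following Python sketch performs one tact of a recursive systematic MNB encoder over GF(4) (Q = 2) in one common realization of the observer canonical form. It is a minimal sketch under our own assumptions: the generator matrix `G` used below is an arbitrary placeholder (not one of the encoders studied in Section 3), and the feedback polynomial is assumed monic with g_{0,0} = 1, as for the polynomials considered in Section 3.1.

```python
# Hedged sketch: one step of a recursive systematic MNB convolutional encoder
# over GF(4) (Q = 2). Addition in GF(2^Q) is bitwise XOR; multiplication uses
# a small table for GF(4) with primitive element alpha, alpha^2 = alpha + 1.
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],   # e.g. 2*2 = 3, 2*3 = 1
    [0, 3, 1, 2],
]

def gf4_mul(a, b):
    return GF4_MUL[a][b]

def encode_step(state, u, G):
    """One encoder tact (observer canonical form, our realization).

    state : list of M GF(4) symbols [S_0, ..., S_{M-1}]
    u     : list of R GF(4) input symbols [u_1, ..., u_R]
    G     : (M+1) x (R+1) table, G[m][r] = g_{m,r}; column r = 0 is the
            feedback polynomial g_0, assumed monic (G[0][0] = 1).
    Returns (parity_symbol, next_state).
    """
    M = len(state)
    # parity: x^n = S_0^n + sum_r g_{0,r} * u_r^n
    x = state[0]
    for r, ur in enumerate(u, start=1):
        x ^= gf4_mul(G[0][r], ur)
    # state update: S_m^{n+1} = S_{m+1}^n + sum_r g_{m+1,r}*u_r^n + g_{m+1,0}*x^n
    nxt = []
    for m in range(M):
        acc = state[m + 1] if m + 1 < M else 0
        for r, ur in enumerate(u, start=1):
            acc ^= gf4_mul(G[m + 1][r], ur)
        acc ^= gf4_mul(G[m + 1][0], x)
        nxt.append(acc)
    return x, nxt
```

The systematic outputs are simply the inputs `u`; only the redundant symbol and the state update need computing.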
The relation between the encoder state at time n + 1 and the encoder state at time n can be expressed in the compact form:
where ${G}_{T}={G}_{L}\cdot {G}_{F}+{G}_{0}^{\mathrm{T}}$ and $W={\left[0\;0\;\dots\;0\;1\right]}_{1\times M}^{\mathrm{T}}$, with ‘$\mathrm{T}$’ denoting the transpose operation, and ${G}_{0}={\left[{g}_{m,r}\right]}_{M\ge m\ge 1,\,R\ge r\ge 1}$, ${G}_{F}=\left[{g}_{M,0}\;{g}_{M-1,0}\;\dots\;{g}_{1,0}\right]$, ${G}_{L}={\left[{g}_{0,R}\;{g}_{0,R-1}\;\dots\;{g}_{0,1}\right]}^{\mathrm{T}}$, $T=\left[\begin{array}{c}\begin{array}{cc}{0}_{\left(M-1\right)\times 1}& {I}_{M-1}\end{array}\\ {G}_{F}\end{array}\right]$. In the definition of T and in (1), we marked the sizes of the matrices in question by index pairs of the form x × y.
Applying the D transform, similarly to the binary case [19] ($X\left(D\right)={\sum }_{k=-\infty }^{+\infty }{x}^{k}\cdot {D}^{k}$), to Equations 1 and 2, we obtain
After some basic manipulations, it can be shown that
where ${g}_{r}\left(D\right)={\sum }_{m=0}^{M}{g}_{m,r}\cdot {D}^{m}$, 0 ≤ r ≤ R, are the generator polynomials. Because the involved encoders are systematic, (5) can be seen as the encoding relationship U_{0}(D) = G(D)⋅U(D), where
This encoding matrix uniquely identifies an MNB convolutional encoder within the whole family of encoders. For the sake of simplicity, we will use the form G = [g_{m,r}]_{M≥m≥0,R≥r≥0} to refer to the encoding matrix. Hence, we have
2.2 The trellis termination for an MNBCE
In this section, we show how already known trellis termination techniques can be adapted to the MNB case: tail-biting [20] and zero padding [21, 22]. Although the form of the equations is similar to the binary case, in the MNB case all operations are performed in GF(2^{Q}).
Starting from Equation 1, we can successively develop^{a}
Thus, using the notation
we have
The tail-biting technique involves a pre-encoding stage, whose goal is to determine s^{u}. Starting with the null state s^{0} = 0, pre-encoding yields s^{u} = s^{N}. The actual encoding starts from state s^{0} = s^{x} and finally reaches the same state, s^{N} = s^{x}. We have s^{x} = s^{u} + s^{x} ⋅ T^{N}, hence
provided that the matrix (I_{M} + T^{N}) is non-singular. If N_{f} is the period of the polynomial g_{0}(D) associated with the feedback, we have
Therefore, (I_{M} + T^{N}) is non-singular, and hence the tail-biting technique can be applied, if N is not a multiple of the period N_{f}. The maximum possible period of a degree-M feedback polynomial with coefficients in GF(2^{Q}) is
In practice, the transition from s^{N} to s^{x} can be achieved using a look-up table with 2^{Q∙M} entries.
The tail-biting technique has the advantage of not requiring the insertion of additional redundancy. Using tail-biting, the coding rate of the MNBCE in Figure 1 is
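To make the tail-biting procedure concrete, here is a small Python sketch of the circular-state search on a toy binary RSC (memory 2, feedback 1 + D + D²). It is illustrative only: for an MNBCE the same search runs over all 2^{Q∙M} states, which is why the look-up table mentioned above has that many entries.

```python
# Illustrative sketch of the tail-biting principle on a small binary RSC
# (feedback polynomial 1 + D + D^2, memory M = 2). For an MNB encoder the
# state alphabet would have 2^{Q*M} elements instead of 4.
def next_state(state, u):
    d1, d2 = state
    a = u ^ d1 ^ d2          # feedback bit
    return (a, d1)

def final_state(start, block):
    s = start
    for u in block:
        s = next_state(s, u)
    return s

def tail_biting_start(block):
    """Brute-force the circular state: the start state to which the encoder
    returns after N steps. In practice a precomputed look-up table indexed
    by the pre-encoding final state s^u replaces this search."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    for s in states:
        if final_state(s, block) == s:
            return s
    return None   # N is a multiple of the feedback period: no unique solution
```

By the linearity argument above, when (I_M + T^N) is non-singular, exactly one such start state exists.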
Zero padding can be achieved in two ways: interleaved termination or uninterleaved termination. Zero padding closes the trellis at the null state^{b} at both ends: s^{end} = s^{0} = 0. If s^{0} = 0, (8) results in
Zero padding with uninterleaved termination does not require preencoding. For zero padding with uninterleaved termination, K redundant symbols are added to the input block [u^{n}]_{0≤ n <N} so that
The matrix Equation 16 should be regarded as a system of M scalar equations over GF(2^{Q}), where the unknowns are the redundant symbols u^{N}, …, u^{N+K−1} and the parameters are the entries of s^{N}. For (16) to be a compatible system with a unique solution, it is necessary and sufficient to insert K = M redundant symbols.
As in the tail-biting case (Equation 11), Equation 16 can also be solved a priori and then implemented with a look-up table.
The coding rate for zero padding with uninterleaved termination is
Zero padding with interleaved termination is performed by dislodging M information symbols and assigning their positions to redundant symbols. Assuming that the positions of the redundant symbols are J_{ZPI} = {n_{1}, …, n_{M}}, we have
As for tail-biting, Equation 18 can be solved in two steps. By a pre-encoding stage, in which ${\left({u}^{n}\right)}_{n\in {J}_{\mathit{ZPI}}}$ is set to 0, we obtain
Knowing s^{u}, the matrix Equation 18 is a system of M scalar equations with M unknowns that can be solved a priori and implemented with a look-up table. Note that the pre-encoding step can be eliminated if J_{ZPI} = {N−M, …, N−1}. In this case, the calculation of s^{u} and the encoding of the redundant symbols for trellis termination using a look-up table are two phases of a single encoding pass.
The coding rate for zero padding with interleaved termination is
2.3 Turbo encoding of an MNBTC
The left part of Figure 2 describes the structure of an MNBTC encoder. The input data block u = [u_{R} … u_{2}u_{1}], with ${u}_{r}=\left[{u}_{r}^{0}\;{u}_{r}^{1}\;\dots\;{u}_{r}^{N-1}\right]$, 1 ≤ r ≤ R, is encoded by MNB encoder C1, providing the redundant symbol sequence ${x}_{1}=\left[{x}_{1}^{0}\;{x}_{1}^{1}\;\dots\;{x}_{1}^{N-1}\right]$. At the same time, after being permuted by the block interleaver π, the input data sequence π(u) is encoded by MNB encoder C0, which generates the second redundant symbol sequence ${x}_{0}=\left[{x}_{0}^{0}\;{x}_{0}^{1}\;\dots\;{x}_{0}^{N-1}\right]$. Thus, the output of the turbo encoder is x = [x_{R+1} … x_{2}x_{1}x_{0}], where [x_{R+1} … x_{2}] = [u_{R} … u_{2}u_{1}] is the systematic part of the output. In this way, the turbo coding rate is R_{c} = R/(R + 2). Of course, if a higher turbo coding rate is desired, puncturing can be performed.
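The parallel concatenation just described can be sketched as follows. The component encoders are abstracted here as callables mapping an input block to its redundancy sequence, so any MNBCE from Section 2.1 can be plugged in; the function and parameter names are ours, for illustration.

```python
# Structural sketch of the MNBTC encoder (left part of Figure 2).
def turbo_encode(u, encode_c1, encode_c0, pi):
    """u         : list of N input words u^n (each word holding R symbols)
    encode_c1 : component encoder C1, block -> redundancy sequence x1
    encode_c0 : component encoder C0, block -> redundancy sequence x0
    pi        : interleaver permutation, pi[n] = index read at position n
    Returns (systematic part, x1, x0): overall rate R/(R + 2)."""
    x1 = encode_c1(u)                          # redundancy on the natural order
    u_perm = [u[pi[n]] for n in range(len(u))]
    x0 = encode_c0(u_perm)                     # redundancy on the interleaved order
    return u, x1, x0
```

Puncturing, when a higher rate is desired, would simply drop selected symbols from `x1`/`x0` before transmission.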
Figure 3 shows the general structure of a block generated by an MNB turbo encoder, bitwise in Figure 3a and symbol-wise in Figure 3b. All bits of a block form the MNBTC word. Columns in both figures contain the bits/symbols generated at the turbo encoder output at the same time instant (tact). The entire bit sequence contained in a column is called a coded word (different from the MNBTC word). These coded words can be identified by their temporal indices: x^{i}, 0 ≤ i < N. The structure of the bitwise coded words is visible in Figure 3a, and that of the symbol-wise coded words in Figure 3b. Figure 3b also shows that a coded word contains both information symbols and redundancy symbols. The same block (turbo-coded word) can be seen/regrouped as Q-symbol sequences generated by the R + 2 outputs of the MNBTC, denoted x_{r}, 0 ≤ r ≤ R + 1. Sequences x_{2}, x_{3}, …, x_{R+1} form the data block, and x_{0} and x_{1} form the redundancy generated by the MNBTC. Note that a branch through the trellis of encoder C1 is given by the pair $\left[{u}^{n}\;\;{x}_{1}^{n}\right]$ and a branch in the C0 encoder trellis is given by the pair $\left[\pi {\left(u\right)}^{n}\;\;{x}_{0}^{n}\right]$.
Trellis termination in an MNBTC can be done in four main ways:
i) Dual tail-biting. Both encoders, C1 and C0, use the tail-biting technique to terminate their trellises. The turbo coding rate in this case is
ii) Semi-termination. C1 terminates its trellis according to the zero padding technique with interleaved termination, with J_{ZPI} = {N−M, …, N−1}, while C0 does not terminate its trellis. The resulting turbo coding rate is
iii) Interleaved dual termination. Both encoders use the zero padding technique with interleaved termination. The positions of the redundant symbols for C1 trellis termination can be chosen as in the previous case, J_{ZPI,1} = {N−M, …, N−1}, but for C0 they result from interleaving: J_{ZPI,2} = {π(N−M), …, π(N−1)}. Thus, at least C0 performs a pre-encoding stage. The coding rate is
iv) Uninterleaved dual termination. Both encoders use the zero padding technique with uninterleaved termination. The coding rate is
The uninterleaved dual termination technique is easy to accomplish, but it subsequently raises the issue of restructuring the encoded block, since sequences x_{2}, …, x_{R+1} contain N + 2∙M symbols, while sequences x_{0} and x_{1} consist of only N + M symbols.
2.4 Modulation
For a 2^{P}-order modulation, if Q is a multiple of P, i.e., Q = P ∙ B, the Q bits of a turbo-coded Q-symbol can be grouped into B groups of P bits that can be transmitted in B modulated P-symbols $\left[{z}_{r,B-1}^{n}\;\dots\;{z}_{r,1}^{n}\;{z}_{r,0}^{n}\right]$, ${z}_{r,b}^{n}$ ∈ GF(2^{P}). For the mapping between (the bits^{c} of) the coded word and the modulated symbol, we use the notation
2.5 Turbo decoding of MNBTCs
In this section, we describe the possibilities for decoding MNBTCs.
2.5.1 Channel output and the decoding strategy
We will start by identifying the channel sequences arriving at the (turbo) decoder.
The sequences retrieved at the channel output, after demodulation, are denoted by y = [y_{R+1} … y_{2}y_{1}y_{0}], with ${y}_{r}=\left[{y}_{r}^{0}\;{y}_{r}^{1}\;\dots\;{y}_{r}^{N-1}\right]$, ${y}_{r}^{n}=\left[{y}_{r,B-1}^{n}\;\dots\;{y}_{r,1}^{n}\;{y}_{r,0}^{n}\right]$, 0 ≤ r ≤ R + 1, 0 ≤ n < N. Considering a Rayleigh flat fading channel^{d}, ${y}_{r,b}^{n}$ can be expressed as ${y}_{r,b}^{n}=a\cdot {z}_{r,b}^{n}+{w}_{r,b}^{n}$, 0 ≤ b < B, where ${w}_{r,b}^{n}$ is a sample of additive white Gaussian noise (AWGN) and a is the fading amplitude. The decoding of these sequences can be performed bitwise, symbol-wise, or word-wise. Of course, dedicated decoding algorithms can be used: maximum a posteriori (MAP) [23], logarithmic MAP (Log-MAP) [24], or maximum logarithmic MAP (Max-Log-MAP) [25, 26]. Due to the known advantages of the Max-Log-MAP algorithm with respect to practical implementation, we further present its adaptation to the MNB case. Note that, although we adopted the strategy of the Max-Log-MAP algorithm, the changes needed to adapt the MAP and Log-MAP algorithms to the MNB case are similar to those shown.
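For reference, the channel model above can be sketched in Python as follows. The BPSK mapping (0 → +1, 1 → −1) and the unit-mean-power Rayleigh amplitude are our assumptions for illustration; setting a = 1 recovers the pure AWGN channel.

```python
import math
import random

# Sketch of the channel model y = a*z + w for BPSK symbols (assumed
# mapping 0 -> +1, 1 -> -1); rayleigh=False gives the pure AWGN channel.
def channel(bits, snr_db, rayleigh=False):
    es_n0 = 10.0 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * es_n0))   # noise std per real dimension
    out = []
    for b in bits:
        z = 1.0 - 2.0 * b                    # BPSK symbol
        a = 1.0
        if rayleigh:                         # flat Rayleigh amplitude, E[a^2] = 1
            a = math.sqrt(random.gauss(0, 1) ** 2
                          + random.gauss(0, 1) ** 2) / math.sqrt(2)
        out.append(a * z + random.gauss(0, sigma))
    return out
```

The resulting samples ${y}_{r,b}^{n}$ are what the branch-metric computation of the next section consumes.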
2.5.2 Decoding over trellises of the component codes
To describe the operations performed by component decoders, we will first describe their operands.
At iteration i, each decoder DECj, j = 0, 1, processes the channel soft values [y_{R+1} … y_{2}y_{j}] and the a priori information La_{j} to calculate the branch metrics (the gamma coefficients), the node metrics by forward recurrence (the alpha coefficients), the node metrics by backward recurrence (the beta coefficients), the a posteriori information^{e} L, and the extrinsic information Le.
The decoder tries to reconstruct the encoder's path through the trellis. To this end, it assigns reliability values to each node and each branch in the trellis according to the values of the received sequences ${y}_{r}^{n}$. A trellis branch is identified by the triplet s^{n}, s^{n+1}, and u^{n}. However, the possible values of the variables s^{n}, s^{n+1}, and u^{n} have a timeless character (they do not depend on n or i). For these possible values, we use the notation $\widehat{\mathrm{\theta}}$ for s^{n}, θ for s^{n+1}, and d for u^{n}. We also denote by $\tilde{x}$ the generic word obtained by encoding d, and by ${\tilde{z}}_{r,b}$ the generic symbol obtained by modulating the word $\tilde{x}$:
Obviously, there is an interdependence between the current state $\widehat{\mathrm{\theta}}$, the future state θ, and the value of the input word^{f} d: any two of them uniquely determine the third one. For instance, the current state $\widehat{\mathrm{\theta}}$ of the encoder and the input word value d determine the future state θ of the encoder. We denote this by $\left(\widehat{\mathrm{\theta}},d\right)\to \mathrm{\theta}$. Given this interdependence property, branches can be indexed by any two of these parameters. Therefore, the following branch metric notations are equivalent:
with $0\le \widehat{\mathrm{\theta}},\mathrm{\theta}<{2}^{Q\cdot M}$, 0 ≤ d < 2^{Q·R}, j =0,1.
Both the branch metric γ and the information processed during decoding (L, Le, or La) depend on how the input word d is regarded by the decoder. Thus we have:

a) word-wise decoding: d is a scalar in GF(2^{Q∙R});

b) Q-symbol-wise decoding: d is a vector $\left[{d}_{R-1}\;\dots\;{d}_{1}\;{d}_{0}\right]$ with entries in GF(2^{Q}) (obviously, the scalar equivalent value of d remains the same in GF(2^{Q∙R}));

c) B-symbol-wise decoding: each d_{r} from the previous expression of d is in turn a vector $\left[{d}_{r,B-1}\;\dots\;{d}_{r,1}\;{d}_{r,0}\right]$ with entries in GF(2^{P}), with P = Q/B;

d) bitwise decoding: this case results as a degeneration of the previous one for P = 1 and B = Q. We have ${d}_{r}=\left[{d}_{r,Q-1}\;\dots\;{d}_{r,1}\;{d}_{r,0}\right]$.
2.5.3 Recurrences of component decoders
In this section, we present calculations performed by each component decoder in each iteration.
Regardless of the adopted decoding mode (bit-/symbol-/word-wise), the forward and backward recurrences have the same form:
Branch metrics γ, a posteriori information L, and extrinsic information Le are calculated by the formulas
where σ^{2} is the noise variance and
δ and Ex depend on the adopted decoding mode:

a) Word-wise decoding: δ = d and $\mathit{Ex}={\mathit{Le}}_{j}^{n,i}\left(d\right)$. The decoder calculates $N_{L,a} = 2^{Q\cdot R}$ values for L, Le, and La.

b) Q-symbol-wise decoding: δ = d_{r} and $\mathit{Ex}={\sum }_{r=1}^{R}{\mathit{Le}}_{j}^{n,i}\left({d}_{r}\right){\left|{}_{\left(\widehat{\mathrm{\theta}},d\right)\to {d}_{r}}\right.}$. The decoder calculates $N_{L,b} = R\cdot 2^{Q}$ values for L, Le, and La.

c) B-symbol-wise decoding: δ = d_{r,b} and $\mathit{Ex}={\sum }_{r=1}^{R}{\sum }_{b=0}^{B-1}{\mathit{Le}}_{j}^{n,i}\left({d}_{r,b}\right){\left|{}_{\left(\widehat{\mathrm{\theta}},d\right)\to {d}_{r,b}}\right.}$. The decoder calculates $N_{L,c} = R\cdot B\cdot 2^{P}$ values for L, Le, and La.

d) Bitwise decoding: δ = d_{r,q} and $\mathit{Ex}={\sum }_{r=1}^{R}{\sum }_{q=0}^{Q-1}{\mathit{Le}}_{j}^{n,i}\left({d}_{r,q}\right){\left|{}_{\left(\widehat{\mathrm{\theta}},d\right)\to {d}_{r,q}}\right.}$. The decoder calculates $N_{L,d} = 2\cdot R\cdot Q$ values for L, Le, and La.
We have N_{L,d} ≤ N_{L,c} ≤ N_{L,b} ≤ N_{L,a}. Obviously, a higher value of N_{L} requires a higher computational effort, but also provides a finer decoder ‘resolution’. In fact, this is the essence of the difference between SBTC, MBTC, NBTC, and MNBTC: a simple restructuring of the data block would bring no advantage without rethinking the decoding. Considering the decoding modes from d) to a), one can note a shift of the decoding optimization from bitwise to symbol-wise and then to word-wise. Word-wise decoding optimization gives the MNBTC a higher potential in decoding performance^{g}. Other details that define the structure and operation of an MNBTC remain to be optimized; one example is interleaving.
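A compact sketch of the Max-Log-MAP recurrences above, for word-wise decoding over a generic trellis, is given below. The branch metrics γ are assumed precomputed from the channel and a priori values, the trellis is assumed opened and closed at state 0 (zero padding), and the data layout and names are ours, for illustration.

```python
# Minimal Max-Log-MAP sketch (word-wise decoding). `branches` lists the
# timeless triplets (s_prev, s_next, d); `gamma[n][(s_prev, d)]` holds the
# branch metrics already computed from channel and a priori values.
NEG = float("-inf")

def max_log_map(n_states, n_words, branches, gamma, N):
    alpha = [[NEG] * n_states for _ in range(N + 1)]
    beta = [[NEG] * n_states for _ in range(N + 1)]
    alpha[0][0] = 0.0                      # trellis terminated at state 0
    beta[N][0] = 0.0
    for n in range(N):                     # forward recurrence (alpha)
        for sp, sn, d in branches:
            m = alpha[n][sp] + gamma[n][(sp, d)]
            alpha[n + 1][sn] = max(alpha[n + 1][sn], m)
    for n in range(N - 1, -1, -1):         # backward recurrence (beta)
        for sp, sn, d in branches:
            m = beta[n + 1][sn] + gamma[n][(sp, d)]
            beta[n][sp] = max(beta[n][sp], m)
    # a posteriori metric per input word d: max over branches labeled d
    L = [[NEG] * n_words for _ in range(N)]
    for n in range(N):
        for sp, sn, d in branches:
            m = alpha[n][sp] + gamma[n][(sp, d)] + beta[n + 1][sn]
            L[n][d] = max(L[n][d], m)
    return L
```

For an MNBCE, `n_states` = 2^{Q∙M} and `n_words` = 2^{Q∙R}; the extrinsic values Le follow by subtracting the systematic and a priori contributions from L.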
2.5.4 Turbo decoding
Knowing the outputs of the component decoders, we can specify the equations that govern turbo decoding. At the turbo decoder level, we have the relations
where δ is defined above. Each decoder generates extrinsic information Le_{j} that is transformed by interleaving into a priori information for the other component decoder
with ${L}_{e1}^{0}\equiv 0$.
The iterative process continues until all iterations are performed or until a stop condition is fulfilled, if the turbo decoder is equipped with a mechanism to stop the iterations.
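The iteration loop and the interleaving of extrinsic values can be sketched as follows. Here `dec1` and `dec0` stand for the component decoders (returning a posteriori and extrinsic tables given a priori information), and the per-position lists of metrics are our illustrative data layout.

```python
# Sketch of the extrinsic-information exchange between component decoders.
def turbo_decode(dec1, dec0, pi, N, n_words, iters):
    """dec1/dec0 : callables La -> (L, Le) wrapping a component decoding pass
    pi        : interleaver permutation (DEC0 works on the permuted order)
    Returns the a posteriori table of DEC0 after `iters` iterations."""
    inv = [0] * N
    for n, p in enumerate(pi):
        inv[p] = n                             # inverse permutation
    La1 = [[0.0] * n_words for _ in range(N)]  # initial a priori is zero
    L0 = None
    for _ in range(iters):
        L1, Le1 = dec1(La1)
        La0 = [Le1[pi[n]] for n in range(N)]   # interleave Le_1 for DEC0
        L0, Le0 = dec0(La0)
        La1 = [Le0[inv[n]] for n in range(N)]  # deinterleave Le_0 for DEC1
    return L0
```

A stopping mechanism, when present, would simply break out of the loop once its condition is met.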
2.5.5 The hard decision
After performing all iterations, decoding is completed by a hard decision on the a posteriori information values provided by one of the two component decoders. Thus, for word-wise decoding, we have
For the other decoding modes, ${\widehat{u}}^{n}$ and d are replaced with:

- ${\widehat{u}}_{r}^{n}$ and d_{r}, 0 ≤ d_{r} < 2^{Q}, 0 ≤ r < R, for Q-symbol-wise decoding;
- ${\widehat{u}}_{r,b}^{n}$ and d_{r,b}, 0 ≤ d_{r,b} < 2^{P}, 0 ≤ r < R, 0 ≤ b < B, for B-symbol-wise decoding; and
- ${\widehat{u}}_{r,q}^{n}$ and d_{r,q}, 0 ≤ d_{r,q} < 2, 0 ≤ r < R, 0 ≤ q < Q, for bitwise decoding.
2.6 Interleaving for MNBTCs
Similarly to the interleavers used in MBTCs [5], for MNBTCs we also perform an intra-symbol interleaving step and an inter-symbol interleaving step. Inter-symbol interleaving permutes the input words u^{n} in the information data sequence and may be done using already known interleaving techniques [27–29]. An advantage brought by MNBTCs is the substantial reduction of the interleaver length N. The MNBTC data block structure is three-dimensional: R × Q × N. For SBTC, the data block structure is linear (R = Q = 1), while for MBTC and NBTC it is planar (R > Q = 1 for MBTC and Q > R = 1 for NBTC).
The interleaver size reduction implicitly leads to a latency decrease; this is an advantage offered by MNBTCs. Latency is generally defined as the time from the moment the first bit of a block enters the device to the moment the same bit leaves the device. Since this time is proportional to the length of the interleaver (not to the size of the data block), latency decreases proportionally to the reduction of N, for both the MNB encoder and the MNB decoder. This latency reduction is possible because both operate on symbols rather than bits.
MNBTCs also represent a compromise in complexity, placed between MBTCs and NBTCs. Thus, considering the same data block of size R × Q × N bits and the same memory M for all three (MBTC, NBTC, and MNBTC), Table 1 presents the total number of branches in the component encoders' trellises (MBCE, NBCE, and MNBCE). We considered that each encoder has one extra output (minimum redundancy). The MNB encoder's trellis then has 2^{(Q − 1) × (R + 1) × M}/Q times more branches than the MB encoder's trellis and 2^{(R − 1) × Q × M} times fewer branches than the NB encoder's trellis. However, an accurate comparison is difficult, given that the data block size and the coding rate of the encoders in question cannot be aligned simultaneously. Nevertheless, the MNB encoder has a low complexity compared to the NB encoder due to a lower number of bits per symbol.
Table 2 compares the computational volume of the convolutional decoder for the SB, MB, and MNB cases. For each case, we present details about the calculation of the recurrent coefficients α and β, the coefficients γ (trellis branch metrics), and the values of the coefficients L. ‘No.’ is the total number of coefficients calculated. Calculating each coefficient requires a number ‘prod’ of multiplications and a number ‘sum’ of additions or ‘cmp’ of comparisons. For a fair analysis, it is necessary to use the same number of information bits, the same encoding rate, and the same constraint length (binary equivalent) for all three cases. The encoding rate is adjustable through puncturing, but puncturing does not change the ‘volume’ of the decoding calculation. Thus, for comparison, we considered the natural encoding rate in each case. In order to preserve the size of the input data block, we considered that a data block has the same number of bits (R × Q × N) in all three cases.
‘Preserving’ the constraint length (binary equivalent) requires that M’ (the SB and MB encoders' memory) and M (the MNB encoder's memory) satisfy the relation M’ = Q × M.
As expected, the volume of the calculation for the MNB decoder is significantly higher than for SB and MB. However, a comparison between computational efforts made in turbodecoding for the three cases must take into account the convergence of the iterative process (see Section 3.2, Equation 38).
Imposing the ‘preservation’ of the number of bits in the data block yields an advantage for MNB: reduced latency. Thus, the numbers of recurrence steps required in the SB, MB, and MNB decoders are R × Q × N + 1, Q × N + 1, and N + 1, respectively. While the other calculations can, in theory, be parallelized, the recurrences require successive calculations. Thus, latency becomes proportional to the number of recurrence steps and is, of course, reduced in the MNB case.
Intra-symbol interleaving, used in the DVB-RCS and WiMAX standards, operates on the R = 2 bits (Q = 1) of each input symbol, reversing their positions for even indices n. Since for MNBTCs an input word consists of at least 4 bits, intra-symbol interleaving can be performed in many more ways. A study of these modes is beyond the scope of this paper. The intra-symbol permutation implemented in the simulation results presented in Section 3 was not specifically optimized: we performed helical intra-symbol interleaving, i.e., the R∙Q bits of the word u^{n} are cyclically shifted by one position, always in the same direction.
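As a sketch, the helical intra-symbol permutation just mentioned amounts to a one-position cyclic shift of each word's bits; the shift direction below is our choice for illustration, since either direction matches the description.

```python
# Minimal sketch of helical intra-symbol interleaving: the R*Q bits of each
# input word u^n are cyclically shifted by one position (direction assumed).
def helical_intra_symbol(word_bits):
    return word_bits[-1:] + word_bits[:-1]

def intra_symbol_interleave(block):
    """Apply the permutation to every word of a block."""
    return [helical_intra_symbol(w) for w in block]
```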
3 An example of MNBTC
This section presents an instance of MNBTC offering a good trade-off between performance and complexity. It is the outcome of a search within the family of MNB component encoders with R = 2 (double-binary input), Q = 2, and memory M = 3. The next sections provide some details on the cardinality of this MNB component encoder family and on the selection criteria. Then, the BER/FER performance of the selected MNBTC is compared with that of the memory-4 DBTC used in the DVB-RCS2 standard. We chose this TC as a benchmark since it is the most powerful standardized TC with a natural rate of ½. An absolutely ‘fair’ comparison, however, should be made between convolutional codes with the same ‘binary equivalent constraint length’.
3.1 Cardinality of the family of MNB convolutional encoders with two double-binary inputs
A component of this family is identified by the following coding matrix:
where g_{m,r} ∈ GF(4). The cardinality of the set of matrices of the form given by (10) verifying g_{3,2}∙g_{3,1}∙g_{3,0} ≠ 0 is N_{G 3 × 2 × 2} = 4^{11} − 4^{8}, i.e., over 4 million. Comparatively, the number of memory-4 convolutional encoders amounts to N_{G 4 × 1 × 1} = 2^{9} − 2^{7} for single-binary codes and N_{G 4 × 2 × 1} = 2^{14} − 2^{11} for double-binary codes. It is desirable to reduce the selection base by eliminating a priori poorly performing matrices. Thus, we investigated only matrices whose g_{0} = [g_{3,0} g_{2,0} g_{1,0} 1] is a primitive polynomial (irreducible, with maximum period 63). These are [2 1 1 1], [2 1 3 1], [2 2 2 1], [2 2 3 1], [2 3 1 1], [2 3 2 1], [3 1 1 1], [3 1 2 1], [3 2 1 1], [3 2 3 1], [3 3 2 1], and [3 3 3 1].
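The cardinalities quoted above are easy to double-check numerically:

```python
# Arithmetic behind the family cardinalities quoted above.
n_mnb = 4 ** 11 - 4 ** 8    # memory-3 MNB encoders, R = 2 inputs over GF(4)
n_sb = 2 ** 9 - 2 ** 7      # memory-4 single-binary encoders
n_db = 2 ** 14 - 2 ** 11    # memory-4 double-binary encoders
print(n_mnb, n_sb, n_db)    # n_mnb is indeed over 4 million
```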
3.2 Search for the best MNBCE
Through practical observations, we found that a turbo decoder that is efficient in terms of BER/FER also converges rapidly, meaning that, on average, it performs few iterations to correct a block. Thus, if the TC is equipped with a criterion for stopping the iterations, its convergence is defined as the ratio between the total number of iterations iter(Nb) necessary for the decoding of Nb turbo-coded blocks and Nb, when Nb is large:
For small SNR values, the convergence C_{v} tends to iter_{max}, the maximum iteration number, which is set a priori. When the SNR increases, the weight of the non-converging blocks decreases.
The motivation for using the convergence criterion is that C_{v} can be calculated with quite good accuracy for small values of Nb. So, the turbo code functionality was simulated with each of the component codes, and the simulation was stopped if the convergence C_{v} exceeded a given limit:
where dC_{v} represents the speed of the threshold decrease. The threshold C_{v0} − Nb⋅dC_{v} decreases with Nb in order to limit the simulation duration for a specific code. In our search, the parameters C_{v0} and dC_{v} were set to 4.5 and 15 × 10^{−4}, respectively. For example, for a TC with a convergence of C_{v1} = 3 iterations/block, N_{b1} = 10^{3} blocks are simulated. The lower the convergence of a TC, the more blocks have to be simulated, which leads to a more precise estimate of its convergence.
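The pruning rule above can be sketched as follows; the per-block iteration counts would come from an actual turbo decoder simulation and are abstracted here as an input sequence.

```python
# Sketch of the convergence-based pruning used in the encoder search:
# track C_v = iter(Nb)/Nb and abandon a candidate code once C_v exceeds
# the decreasing threshold C_v0 - Nb*dC_v.
def search_convergence(iters_per_block, cv0=4.5, dcv=15e-4):
    """iters_per_block: iterable of iterations used for each decoded block.
    Returns (C_v, Nb) at the point where the simulation stopped."""
    total, nb = 0, 0
    for it in iters_per_block:
        total += it
        nb += 1
        cv = total / nb
        if cv > cv0 - nb * dcv:
            return cv, nb          # candidate rejected early
    return total / nb, nb          # candidate survived the whole run
```

With the paper's parameter values, a code averaging 3 iterations/block survives exactly 10³ blocks, matching the example above.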
We used the QPP inter-symbol interleaver defined in the LTE standard for the length 376. Thus, the data block size is 4 × 376 bits. For the component encoders that were most efficient in terms of convergence, we performed simulations in order to estimate the BER and FER. Note that we obtained many component encoders with very close BER/FER performance.
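A QPP inter-symbol interleaver has the form π(n) = (f1·n + f2·n²) mod N. The sketch below uses coefficients chosen by us only to satisfy the permutation-polynomial conditions for N = 376 (f1 coprime with N, f2 a multiple of N's prime factors 2 and 47); they are NOT the pair tabulated in the LTE standard.

```python
# Sketch of a QPP inter-symbol interleaver pi(n) = (f1*n + f2*n^2) mod N.
# f1 = 13 and f2 = 94 are illustrative placeholders, not the LTE values.
def qpp(n, N=376, f1=13, f2=94):
    return (f1 * n + f2 * n * n) % N

def qpp_permutation(N=376, f1=13, f2=94):
    return [qpp(n, N, f1, f2) for n in range(N)]
```

A useful property of QPP interleavers is that the inverse is again a permutation polynomial, which keeps deinterleaving equally simple in hardware.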
3.3 Simulations results
The method described above led to the following generator matrix:
The performance of the MNBTC built with this component encoder is compared in Figure 4 with that of the benchmark DBTC from DVB-RCS2. For a fair comparison, we used data blocks with sizes of R × Q × N = 2 × 2 × 376 bits for the MNBTC and 2 × 1 × 752 bits for the DBTC. Also, for both TCs, we used tail-biting and the Max-Log-MAP decoding algorithm with the adaptation to the MNB case previously described (word-wise decoding). To plot the curves, we performed simulations until 500 erroneous blocks were obtained or until a maximum number of blocks equal to 10^{9} for the DBTC and around 2 × 10^{8} for the MNBTC was simulated. The exceptions are the simulations determining the extreme points of the ‘AWGN&BPSK’ curves for the MNBTC; for these points, simulations ran until we obtained 30 erroneous blocks (over 2.1 × 10^{9} blocks transmitted). The maximum number of iterations is set to 16 or 100, combined with the genie stopping criterion [30]. The theoretical limit mentioned in Figure 4 is the sphere packing bound, which takes into account the penalty due to the block size [31]. For the 16-QAM modulation, to perform the mapping between coded symbols and modulation symbols in the MNBTC case, we used the values B = Q = 2, P = 1, and the relation
where ${x}_{r,b}^{n}$ ∈ GF(4) = {0, 1, 2, 3}, 0 ≤ r ≤ R, 0 ≤ b < 2, and 0 ≤ n ≤ N − 1.
For the DBTC case, we used the relations
where $\left[\begin{array}{cccc}\hfill {x}_{3}^{n}\hfill & \hfill {x}_{2}^{n}\hfill & \hfill {x}_{1}^{n}\hfill & \hfill {x}_{0}^{n}\hfill \end{array}\right]$ is the turbo decoder output at time n. We use neither puncturing nor quantization, and no bit interleaving between code and modulation in the Rayleigh case, since BPSK is assumed there.
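The simulation budget described above (stop at 500 erroneous blocks or at a maximum number of simulated blocks) can be sketched as a generic Monte Carlo FER loop; `transmit_block` and the 0.99 per-block success probability are illustrative stand-ins, not the actual simulator:

```python
import random

def estimate_fer(transmit_block, max_errors=500, max_blocks=10**9):
    """Monte Carlo FER estimation: run until max_errors erroneous blocks
    are observed or max_blocks blocks have been simulated, then return
    the frame error rate estimate."""
    errors = 0
    blocks = 0
    while errors < max_errors and blocks < max_blocks:
        blocks += 1
        if not transmit_block():          # True means the block decoded correctly
            errors += 1
    return errors / blocks

# Hypothetical stand-in channel: each block decodes correctly with prob. 0.99.
random.seed(0)
fer = estimate_fer(lambda: random.random() < 0.99, max_blocks=10**5)
```

Stopping on a fixed number of erroneous blocks (rather than a fixed number of transmitted blocks) keeps the relative accuracy of the FER estimate roughly constant across SNR points.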
Whatever the channel type and the modulation used, as shown by the curves in Figure 4, the MNBTC exhibits no error floor in the investigated FER range. The MNBTC performance in the waterfall region is slightly weaker than that of the DBTC, with an observed convergence loss ranging from 0.1 to 0.4 dB, but the error floor is much lower in all the simulated cases.
An advantage of MNBTCs over DBTCs is the continuous improvement of BER/FER performance when the average number of iterations exceeds what is usual for TCs. Figure 4 shows that the FER performance of the DBTC does not improve from 16 to 100 iterations, whereas the improvement for the MNBTC is clear. This is an interesting property for applications where decoding latency/throughput is not an issue. Additional details on this behavior are given in Figure 5. At an SNR of 1.4 dB, the histogram of blocks as a function of the number of iterations required for correction shows a much higher dispersion for the MNBTC than for the DBTC. As the SNR increases, the MNBTC histograms become 'finer', and at SNRs above 2 dB they are more compact than the DBTC histograms. For SNR > 2 dB, the MNBTC therefore shows better convergence than the DBTC, as illustrated by the third diagram in Figure 5.
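The per-block iteration counts behind such histograms can be tallied with a short sketch; the iteration counts used here are hypothetical, not simulation data:

```python
from collections import Counter

def iteration_histogram(iters_per_block):
    """Histogram of decoded blocks versus the number of decoding
    iterations each block needed for correction (the quantity
    plotted in Figure 5)."""
    return Counter(iters_per_block)

# Hypothetical iteration counts for a small batch of decoded blocks:
hist = iteration_histogram([3, 4, 4, 5, 4, 3, 7])
```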
In Figure 6, we compare the performance of the proposed MNBTC with that of NB-LDPC codes recently reported in the literature [32–38]. In the simulations carried out to construct the MNBTC curves in Figure 6, we used the stopping criterion described in [39], with a threshold value of 20. The results in this figure lead to the conclusion that the proposed MNBTC is inferior in performance to the NB-LDPC codes for small data blocks (under a thousand bits), but becomes superior for larger blocks (thousands of bits).
4 Conclusion
This work opens a new horizon for TCs: the use of multi-non-binary convolutional encoders, generalizing the concepts of multi-binary and non-binary codes. The MNBTC example presented in this paper successfully competes in performance with the DBTC recently standardized in DVB-RCS2. The main advantages of MNBTCs (for example, latency reduction) arise from the more compact form of the data blocks. Many aspects remain to be explored, e.g., the optimization of intra-symbol interleaving. We also intend to investigate new solutions for decoding MNBTCs. The adapted MAP algorithm described above is a rather complex solution. Owing to the non-binary nature of the code, less complex solutions may be found, possibly by adapting the algorithms used in (turbo) decoding of Reed-Solomon codes. Such solutions, even if less effective than MAP-type algorithms, can be attractive alternatives.
Regarding iteration stopping, criteria dedicated to SBTCs abound in the literature [30, 39–42], and a few studies on stopping criteria for MBTCs can also be found [39, 43]. In general, they can be adapted to the MNB case. Although a detailed study of this topic is beyond the scope of this paper, we have implemented the stopping criterion called min-APP in [39] for the MNB case. Its BER/FER performance is similar to that of the genie criterion [30]. However, the average number of iterations performed to decode a block depends on the selected stopping technique: an MNB turbo decoder equipped with a stopping mechanism based on the min-APP criterion (easily implementable in practice) performs on average 1.5 more iterations than one based on the genie criterion (not applicable in practice).
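As a rough illustration of a min-APP-style check (our reading of the criterion in [39], not its exact formulation), decoding can stop once even the least reliable symbol decision clears a threshold; the GF(4) APP lists and the reliability measure below are hypothetical:

```python
# Sketch of a min-APP-style early-stopping test. In the log domain used by
# Max-Log-MAP, a symbol's reliability is taken here as the gap between its
# best and second-best APP values; the threshold (e.g., 20, as used for the
# Figure 6 curves) is a tuning parameter.

def min_app_stop(app_llrs, threshold=20.0):
    """app_llrs: one list of log-domain APP values over GF(q) per symbol.
    Returns True if even the least reliable symbol clears the threshold,
    i.e., iterations may stop."""
    reliabilities = []
    for symbol_apps in app_llrs:
        best, second = sorted(symbol_apps, reverse=True)[:2]
        reliabilities.append(best - second)
    return min(reliabilities) >= threshold
```

For example, two GF(4) symbols with APP gaps of 25 and 24 clear a threshold of 20, whereas a symbol with a gap of only 5 keeps the decoder iterating.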
Investigating the use of MNBTCs in cooperative communication schemes [44–46] is also of interest.
Endnotes
^{a}To avoid confusion, let us point out that the matrix T is the only variable in the paper that appears exponentiated, i.e., for which the top-right indices are exponents.
^{b}Obviously, the trellis can be closed in any other state, but this does not bring benefits.
^{c}Even though we work with symbols and words, physically they still exist in binary form.
^{d}By ‘flat’ we mean frequency-nonselective fading whose amplitude remains constant over the duration of a transmitted symbol (which is P times the duration of a bit).
^{e}Since the Max-Log-MAP algorithm performs its operations in the logarithmic domain, operands represent information quantities rather than probabilities.
^{f}The input word uniquely identifies a branch among all those leaving (or entering) a node.
^{g}The optimal decoding algorithm uses a posteriori probabilities for the entire transmitted data sequence.
Abbreviations
 APP:

a posteriori probability
 AWGN:

additive white Gaussian noise
 BER:

bit error rate
 BPSK:

binary phase-shift keying
 DBTC:

double-binary turbo code
 DVB-RCS:

digital video broadcasting, return channel via satellite
 FER:

frame error rate
 GF:

Galois field
 Log-MAP:

logarithmic MAP
 LTE:

long-term evolution
 MAP:

maximum a posteriori probability
 Max-Log-MAP:

maximum logarithmic MAP
 MBTC:

multi-binary turbo code
 MNB:

multi-non-binary
 MNBCE:

multi-non-binary convolutional encoder
 MNBTC:

multi-non-binary turbo code
 NB-LDPC:

non-binary low-density parity-check code
 NBTC:

non-binary turbo code
 PCCC:

parallel concatenated convolutional code
 QAM:

quadrature amplitude modulation
 QPP:

quadratic permutation polynomial
 SB:

single-binary
 SBTC:

single-binary turbo code
 SNR:

signal-to-noise ratio
 TC:

turbo code
 WiMAX:

worldwide interoperability for microwave access.
References
 1.
Berrou C, Glavieux A, Thitimajshima P: Near Shannon limit error-correcting coding and decoding: turbo-codes. In Proceedings of ICC, Geneva, 23–26 May 1993:1064–1070.
 2.
ETSI, 3GPP TS 36.212: Evolved universal terrestrial radio access (E-UTRA). January 2010. http://www.etsi.org/deliver/etsi_ts/136200_136299/136212/08.08.00_60/ts_136212v080800p.pdf
 3.
DVB, European Telecommunications Standards Institute: DVB Interactive Satellite System, Part 2. Lower Layers for Satellite standard, DVB document A1552; March 2011. http://www.dvb.org/resources/public/standards/a1552_DVBRCS2_Lower_Layers.pdf
 4.
Consultative Committee for Space Data Systems: Recommendation for space data system standards, synchronization and channel coding. August 2011. http://public.ccsds.org/publications/archive/131x0b2ec1.pdf
 5.
Douillard C, Berrou C: Turbo codes with rate-m/(m + 1) constituent convolutional codes. IEEE Transactions on Communications 2005, 53(10):1630–1638.
 6.
Ferrari M, Bellini S: Rate variable multi-binary turbo codes with controlled error-floor. IEEE Transactions on Communications 2009, 57(5):1209–1214.
 7.
Berkmann J: On turbo decoding of nonbinary codes. IEEE Communications Letters 1998, 2(4):94–96.
 8.
Reid AC, Gulliver TA, Taylor DP: Rate-1/2 component codes for nonbinary turbo codes. IEEE Transactions on Communications 2005, 53(9):1417–1422.
 9.
Liva G, Scalise S, Paolini E, Chiani M: Turbo codes based on time-variant memory-1 convolutional codes over Fq. In IEEE ICC 2011, Kyoto, 5–9 June 2011.
 10.
Balta H, Kovaci M, de Baynast A, Vlădeanu C, Lucaciu R: Multi-non-binary turbo-codes: from convolutional to Reed-Solomon codes. In Scientific Bulletin of Politehnica University of Timisoara, Transactions on Electronics and Communications, Tom 51(65), Timisoara, Romania; September 2006:113–118.
 11.
MacKay DJC, Neal RM: Good codes based on very sparse matrices. In Cryptography and Coding, 5th IMA Conference, Lecture Notes in Computer Science number 1025. Edited by: Colin B. Berlin, Germany: Springer; 1995:100–111.
 12.
Davey MC, MacKay DJC: Low-density parity check codes over GF(q). IEEE Communications Letters 1998, 2(6):165–167.
 13.
Djordjevic IB, Vasic B: Nonbinary LDPC codes for optical communication systems. IEEE Photonics Technology Letters 2005, 17(10):2224–2226.
 14.
Bosco G, Garello R, Mininni F, Baldi M, Chiaraluce F: Nonbinary low density parity check codes for satellite communications. In Proceedings of the ISCC, Cagliari, Italy, 26–29 June 2006:1019–1024.
 15.
Cocco G, Pfletschinger S, Navarro M, Ibars C: Opportunistic adaptive transmission for network coding using nonbinary LDPC codes. EURASIP Journal on Wireless Communications and Networking 2010. doi:10.1155/2010/517921
 16.
Mourad A, Picchi O, Gutierrez I, Luise M: Low complexity soft demapping for nonbinary LDPC codes. EURASIP Journal on Wireless Communications and Networking 2012, 2012: 55.
 17.
Marinoni A, Savazzi P, Gamba P: Efficient detection and decoding of q-ary LDPC coded signals over partial response channels. EURASIP Journal on Wireless Communications and Networking 2013, 2013:18.
 18.
Kailath T: Linear Systems. Upper Saddle River: PrenticeHall; 1980.
 19.
Johannesson R, Zigangirov KS: Fundamentals of Convolutional Coding. IEEE Press; 1999.
 20.
Weiss C, Bettstetter C, Riedel S, Costello DJ: Turbo decoding with tail-biting trellises. In Proc. IEEE Int. Symp. Signals, Systems, and Electronics, Pisa, Italy; 1998:343–348.
 21.
Guinand P, Lodge J: Trellis termination for turbo encoders. In Proceedings of the 17th Biennial Symposium on Communications, Queen’s University, Kingston, ON, Canada; May 1994:389–392.
 22.
Hokfelt J, Edfors O, Maseng T: On the theory and performance of trellis termination methods for turbo codes. IEEE Journal on Selected Areas in Communications 2001, 19(5):838–847.
 23.
Bahl LR, Cocke J, Jelinek F, Raviv J: Optimal decoding of linear codes for minimising symbol error rate. IEEE Transactions on Information Theory 1974, 20(2):284–287.
 24.
Robertson P, Hoeher P, Villebrun E: Optimal and suboptimal maximum a posteriori algorithms suitable for turbo decoding. European Trans. Telecommun. 1997, 8(2):119–125.
 25.
Koch W, Baier A: Optimum and suboptimum detection of coded data disturbed by time-varying intersymbol interference. In GLOBECOM’90, San Diego, CA, USA; December 1990:1679–1684.
 26.
Vogt J, Finger A: Improving the max-log-MAP turbo decoder. Electronics Letters 2000, 36(23):1937–1939.
 27.
Crozier SN: New high-spread high-distance interleavers for turbo-codes. In 20th Biennial Symp. Commun., Kingston, Canada; 2000:3–7.
 28.
Berrou C, Saouter Y, Douillard C, Kerouédan S, Jézéquel M: Designing good permutations for turbo codes: towards a single model. In IEEE International Conference on Communications, Paris, France 2004, 1:341–345.
 29.
Sun QJ, Takeshita OY: Interleavers for turbo codes using permutation polynomials over integer rings. IEEE Trans. Inform. Theory 2005, 51(1):101–119.
 30.
Matache A, Dolinar S, Pollara F: Stopping rules for turbo decoders. TMO Progress Report 42–142, Jet Propulsion Laboratory, Pasadena, California; 2000.
 31.
Gallager RG: Information Theory and Reliable Communication. New York: Wiley; 1968:56.
 32.
Boutillon E, Conde-Canencia L: Bubble check: a simplified algorithm for elementary check node processing in extended min-sum non-binary LDPC decoders. Electronics Letters 2010, 46(9):633–634.
 33.
Kasai K, Declercq D, Poulliat C, Sakaniwa K: Multiplicatively repeated nonbinary LDPC codes. IEEE Transactions on Information Theory 2011, 57(10):6788–6795.
 34.
Costantini L, Matuz B, Liva G, Paolini E, Chiani M: Non-binary protograph low-density parity-check codes for space communications. Int. J. Satell. Commun. Network 2012, 30:43–51.
 35.
Costantini L, Matuz B, Liva G, Paolini E, Chiani M: On the performance of moderate-length non-binary LDPC codes for space communications. In the 5th Advanced Satellite Multimedia Systems Conference and the 11th Signal Processing for Space Communications Workshop, Cagliari, 13–15 September 2010:122–126.
 36.
Sassatelli L, Declercq D: Nonbinary hybrid LDPC codes. IEEE Transactions on Information Theory 2010, 56(10):5314–5334.
 37.
Boutillon E, Conde-Canencia L, Ghouwayel AA: Design of a GF(64)-LDPC decoder based on the EMS algorithm. IEEE Transactions on Circuits and Systems I 2013, 60(10):2644–2656.
 38.
Sassatelli L, Declercq D: Nonbinary hybrid LDPC codes: structure, decoding and optimization. In IEEE Information Theory Workshop, ITW’06, Chengdu; 2006:71–75. doi:10.1109/ITW2.2006.323759
 39.
Balta H, Douillard C, Kovaci M: The minimum likelihood APP based early stopping criterion for multi-binary turbo codes. In Scientific Bulletin of Politehnica University of Timisoara, Transactions on Electronics and Communications, Tom 51(65), Timisoara, Romania; September 2006:199–203.
 40.
Hagenauer J, Offer E, Papke L: Iterative decoding of binary block and convolutional codes. IEEE Transactions on Information Theory 1996, 42(2):429–445.
 41.
Shao RY, Lin S, Fossorier MPC: Two simple stopping criteria for turbo decoding. IEEE Transactions on Communications 1999, 47(8):1117–1120.
 42.
Zhai F, Fair IJ: Techniques for early stopping and error detection in turbo decoding. IEEE Transactions on Communications 2003, 51(10):1617–1623.
 43.
Guerrieri L, Veronesi D, Bisaglia P: Stopping rules for duo-binary turbo codes and application to HomePlug AV. In IEEE Global Telecommunications Conference, New Orleans; 2008:1–5. doi:10.1109/GLOCOM.2008.ECP.558
 44.
Van Khuong H, Le-Ngoc T: A cooperative turbo coding scheme for wireless fading channels. IET Communications 2009, 3(10):1606–1615.
 45.
Bota V, ZsA P, Silva A, Teodoro S, Stef MP, Moco A, Botos A, Gameiro A: Combined distributed turbo coding and space frequency block coding techniques. EURASIP Journal on Wireless Communications and Networking 2010. doi: 10.1155/2010/327041
 46.
Babich F, Crismani A: Cooperative coding schemes: design and performance evaluation. IEEE Transactions on Wireless Communications 2012, 11(1):222–235.
Acknowledgements
This paper was partially supported by the project "Development and support for multidisciplinary postdoctoral programs in primordial technical areas of the national strategy for research  development  innovation" 4DPOSTDOC, contract nr. POSDRU/89/1.5/S/52603, project cofunded from the European Social Fund through the Sectorial Operational Program Human Resources 2007–2013, and was partially supported by a grant of the Romanian Ministry of Education, CNCS – UEFISCDI, project number PNIIRUPD201230122.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Cite this article
Balta, H., Douillard, C. & Lucaciu, R. Multi-non-binary turbo codes. J Wireless Com Network 2013, 279 (2013). https://doi.org/10.1186/1687-1499-2013-279
Keywords
 Turbo codes
 Non-binary code
 Multi-non-binary recursive systematic convolutional code
 Galois field