4.1 Another representation of turbo codes
In most cases, the two (or more) constituent RSC encoders of a parallel TC are identical. For this reason, the authors of [19] proposed the 'self-concatenated' turbo encoder depicted in Figure 5, which consists of the concatenation of a repetition code and an RSC code separated by an interleaver. In fact, it is possible to merge the two trellis encoders and replace the initial interleaver with a double-sized interleaver preceded by a twofold repetition (a d-fold repetition in general). The interest of this second, equivalent encoding structure of a classical TC lies in its simplicity and in the opportunity it offers to introduce an irregular structure. Adopting the terminology used for low-density parity-check (LDPC) codes, a regular TC corresponds to the use of a uniform repetition in the equivalent encoding structure (see Figure 5). On the other hand, when the repetition degree d is not the same for all the information bits, the TC is said to be irregular. Note that the irregular TC implemented in Figure 5 is a generalization of the repeat-accumulate codes presented in [20].
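The equivalent structure can be sketched in a few lines: a d-fold repetition, a permutation of size d·k, and a single RSC encoder. This is only an illustrative sketch; the (13, 15) octal memory-3 polynomials and the function names are our choices, not necessarily those of the codes simulated in the text, and trellis termination/tail-biting handling is omitted.

```python
def rsc_parity(bits):
    """Parity sequence of a rate-1/2 RSC encoder with feedback 1 + D + D^3
    and forward 1 + D^2 + D^3 ((13, 15) in octal), register starting at zero.
    Termination/tail-biting is omitted for brevity."""
    s = [0, 0, 0]                     # shift register, s[0] = most recent
    out = []
    for b in bits:
        fb = b ^ s[0] ^ s[2]          # recursive (feedback) bit
        out.append(fb ^ s[1] ^ s[2])  # parity output bit
        s = [fb, s[0], s[1]]          # shift the register
    return out

def self_concat_encode(info, perm, d=2):
    """Self-concatenated structure: d-fold repetition -> interleaver of
    size d * len(info) -> one RSC encoder."""
    repeated = [b for b in info for _ in range(d)]
    assert len(perm) == len(repeated)
    interleaved = [repeated[p] for p in perm]
    return info, rsc_parity(interleaved)  # systematic part + parity part
```

With a uniform twofold repetition this reproduces the classical (regular) TC; letting the repetition degree depend on the bit yields the irregular variant.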
The performance of the self-concatenated regular TC is identical to that obtained with the standard turbo encoder when the interleaver length is sufficiently large. For short and medium block sizes, a few additional iterations are necessary to achieve the same performance, because no extrinsic information is available at the first iteration of the decoding process. Shuffled iterative decoding [21, 22] can be implemented to achieve the same performance as the classical turbo encoder without adding iterations in practice: in shuffled decoding, the decoder updates the extrinsic information as soon as possible, without waiting for all the copies of a given data bit to be processed. In other words, the decoder does not wait until the next iteration to send extrinsic messages. However, as this does not represent a major problem, we have chosen instead to increase the number of iterations of the classical sequential decoding by two in our simulations.
For instance, the BER performance of the 3GPP2 TC has been simulated for blocks of 2,298 bits at coding rate . Figure 6 shows that the use of the equivalent encoding structure for a regular TC requires two additional iterations (i.e. 12 instead of 10) to reach the performance of the classical parallel TC.
An irregular TC consists of the concatenation of a non-uniform repetition, an interleaver and an RSC code. The introduced irregularity makes it possible to improve the performance of a TC by inserting into the RSC trellis some bits with a degree d > 2. These high-degree bits are commonly called pilot bits: they have reliable forward and backward metrics in the decoding process, which propagate in both directions and benefit the other bits, of degree d = 2. However, making the code irregular increases the rate of the constituent codes. Therefore, for usual coding rate values, only a small fraction of the information bits is repeated d > 2 times. Thanks to their higher degree, the pilot bits combine d extrinsic values instead of two and are thus extremely well protected.
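The non-uniform repetition stage itself is simple: each information bit is copied according to its degree. The sketch below uses a hypothetical degree assignment (one degree-8 pilot bit among degree-2 bits) purely for illustration.

```python
def irregular_repeat(info, degrees):
    """Non-uniform repetition: degrees[i] is the repetition degree of info
    bit i (2 for regular bits, d > 2 for pilot bits). The output stream is
    then interleaved and fed to the RSC encoder."""
    return [b for b, d in zip(info, degrees) for _ in range(d)]

# e.g. 6 info bits: one pilot bit of degree 8, five regular bits of degree 2
stream = irregular_repeat([1, 0, 1, 1, 0, 0], [8, 2, 2, 2, 2, 2])
# stream carries 8 + 5*2 = 18 symbols
```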
4.2 Selecting the degree profile
For irregular TCs, the information bits are divided into classes, each class j having a specific degree dj = j. The fraction of bits of degree dj in class j is denoted by fj, where fj ∈ [0, 1]. A degree profile consists of all the degrees dj and their corresponding non-zero fractions fj. In the sequel, we represent a degree profile by the vector (f2, f3, …, fdmax) or the vector (d2, d3, …, dmax). The average information bit degree is dav = Σj fj dj. We keep the minimum degree equal to 2 in order to refer to the classical TC. The maximum degree is dmax.
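The average degree is a direct weighted sum over the profile, as the short check below illustrates. The example fractions (f2, f8) = (5/6, 1/6) are our own, chosen to be consistent with the value dav = 3 quoted later in the text.

```python
def average_degree(profile):
    """Average information bit degree dav = sum_j fj * dj for a degree
    profile given as {degree: fraction}; the fractions must sum to 1."""
    assert abs(sum(profile.values()) - 1.0) < 1e-9
    return sum(d * f for d, f in profile.items())

# illustrative two-degree profile (f2, f8) = (5/6, 1/6):
# dav = 2 * 5/6 + 8 * 1/6 = 3.0
print(average_degree({2: 5/6, 8: 1/6}))
```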
The convergence threshold as well as the asymptotic performance of an irregular TC strongly depend on its degree profile. The best profile depends on the interleaver and on the generator polynomials of the RSC code. For codes with very large block lengths, the optimization of these parameters can be carried out using the density evolution method developed by Richardson and Urbanke [23, 24]; using this approach, irregular LDPC codes performing within 0.0045 dB of capacity were obtained [25]. The Gaussian approximation can be used to speed up the search for good parameters; this sub-optimal method leads to quite accurate results and has been defined in several different ways [26–29]. Although the density evolution method and the Gaussian approximation can select a good degree profile for codes with large block sizes, Monte Carlo simulations of the bit error probability have to be carried out for finite lengths. Their main drawback is that they are time-consuming, so only profiles with two non-zero fractions can be considered. The method consists in fixing a degree dIrreg and varying its fraction fIrreg until the fraction achieving the best performance is found. The next step involves varying the degree dIrreg while the fraction fIrreg is fixed to the value already selected. We can then find suitable values for both dIrreg and fIrreg. However, this profile is not necessarily the best one, since the optimization does not explore all possible combinations (dIrreg, fIrreg); moreover, better performance may be attained when the profile is not restricted to two non-zero fractions. A simple method based on EXIT diagrams, which selects a good profile without resorting to extensive and long simulations, was introduced in [9]. This method allows many degree profiles to be compared at the same time.
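The two-stage search just described is a simple coordinate search. The sketch below uses a toy surrogate in place of a Monte Carlo BER simulation (`ber(d, f)` is a hypothetical callable; a real one would run the full encoder/decoder chain), so the shapes and names are ours, not the paper's.

```python
def select_profile(degrees, fractions, ber):
    """Two-stage profile search: fix an initial degree dIrreg and sweep the
    fraction fIrreg; then fix the best fraction and sweep the degree.
    `ber(d, f)` stands in for a Monte Carlo BER estimate at the working SNR."""
    d0 = degrees[0]                                      # initial dIrreg
    f_best = min(fractions, key=lambda f: ber(d0, f))    # sweep fIrreg
    d_best = min(degrees, key=lambda d: ber(d, f_best))  # sweep dIrreg
    return d_best, f_best

# toy BER surrogate whose minimum sits at (8, 1/6), for illustration only
toy_ber = lambda d, f: abs(d - 8) + abs(f - 1/6)
print(select_profile([4, 6, 8, 10], [1/12, 1/6, 1/4], toy_ber))
```

As the text notes, such a search explores only a slice of the (dIrreg, fIrreg) space, so the returned profile is not necessarily globally best.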
Different degree profiles are compared in terms of both convergence behaviour and asymptotic behaviour, and the selection of the best degree profiles can be progressively refined.
The application of this technique led to the choice of the two degree values d = 2 and d = 8 with the following degree profile: . For reasons of simplicity, only two degrees have been considered, and d = 2 is fixed with reference to the classical TC. In the sequel, we adopt this profile as a working assumption. Compared with the regular TC, the irregular TC with degree profile (f2, f8) offers a significant gain of 0.3 dB in the waterfall region. However, this code performs poorly in the error floor. The following subsections focus on improving the performance at high SNRs.
4.3 Design of suitable permutations for irregular turbo codes
The first intuitive idea investigated was to design an interleaver where all the groups of eight bits are uniformly distributed, so that they spread their reliable forward and backward metrics along the frame. At the same time, the spread between the pilot groups should be large enough to avoid correlation between them: high correlation may dramatically degrade the error correction performance and even ruin the possible gain due to a large minimum distance. An empirical value for the spread is at least 2 × (ν + 1), where ν is the memory length of the simulated code. The condition on the spread between the pilot groups imposes a constraint on the fraction f8 (fmax in general) of the pilot bits: , where dav is the average information bit degree and k is the total number of information bits. For usual block size values, the term is negligible, and the constraint above becomes a relation between fmax, dmax and ν that can be expressed as follows: . For , dav = 3 and ν = 3, this condition can never be satisfied.
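The exact expressions above did not survive extraction. Under the assumption that the fmax·k·dmax interleaved pilot copies are evenly spaced over the tail-biting frame of length dav·k, one plausible reconstruction of the constraint is:

```latex
f_{\max}\, k\, d_{\max} \cdot 2(\nu + 1) \;\le\; d_{av}\, k
\quad\Longrightarrow\quad
f_{\max} \;\le\; \frac{d_{av}}{2(\nu + 1)\, d_{\max}}
```

This reading is consistent with the surrounding text: a two-degree profile with dav = 3 implies f8 = 1/6, while the bound evaluates to 3/64 for dmax = 8 and ν = 3, so the condition indeed cannot be met; on a non-circular frame the exact count of gaps differs by one, which would introduce the O(1/k) term described as negligible. This is a hedged reconstruction, not the paper's original formula.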
In the cases where this condition is not satisfied, we proposed an algorithm that jointly spreads the groups of eight bits along the frame and maximizes the MHD [10]. This idea was inspired by a procedure described in [30], where the author focuses on the optimization of TC permutation design with the so-called almost regular permutation model. In [31], Kraidy et al. used a progressive edge-growth-based interleaver for irregular TCs to lower the floor in the case of binary erasure channels. The algorithm we proposed is based on Dijkstra's algorithm [32] and on the estimation of the minimum distance for each permutation. To apply Dijkstra's algorithm in our specific context, we first represent the interleaver by a graph, initially empty. As we consider a tail-biting code, the graph has the form of a ring: the nodes, empty at first, are connected two by two, so that each node vi is connected to only one predecessor and one successor vj in the graph. The weight of each connection wi,j is equal to 1. The purpose of the algorithm is to fill the nodes with addresses. Before interleaving, the appropriate repetition is applied to each of the k bits stemming from the source. The addresses that appear progressively in the graph are the interleaved addresses corresponding to the output of the interleaver. For each bit with repetition degree d, we choose a random interleaved address π(j) for the first copy. Then, we compute the distances from π(j) to every vertex in the current graph using Dijkstra's algorithm. The interleaved address of the second copy is chosen at random such that its score is greater than α times the best possible value. A connection then appears in the graph between vπ(j) and vπ(j+1), represented by a crossbar of weight 0: wπ(j),π(j+1) = 0.
The algorithm constructs the graph progressively. By the end, connections have been added to the graph for every bit of degree d. These crossbars, of weight 0, connect the different interleaved copies of the same information bit. The parameter 0 < α ≤ 1 introduces a random variation in the selection; the value α = 1 corresponds to the original Dijkstra's algorithm. We noticed that fixing α = 1 in our algorithm yields interleavers with large girths but sometimes unacceptable minimum distances. The girth is the length of a shortest cycle contained in the graph. This selection criterion was a priority for the author of [30], whose first purpose was to maximize the correlation girth while keeping an acceptable minimum Hamming distance. In contrast, the minimum Hamming distance is our most important selection criterion, since our aim is to improve the distance properties of irregular TCs. If the pilot bits are highly correlated, they may dramatically degrade the error performance and even ruin the possible gain due to a large minimum distance. In order to reduce this auto-correlation effect, the interleaver should spread the pilot bits by increasing the correlation girth. However, the minimum distance can suffer, since a long cycle is more likely to contain parity bits of the same data bit. Thus, we do not seek to maximize this criterion but only to increase the correlation girth while searching for higher minimum distances. Different values were therefore tested for the parameter α, and we finally set α to 0.85, giving a reasonable search space. Every time an interleaver is found, we estimate the minimum Hamming distance of the irregular TC using the all-zero iterative decoding algorithm [13]. Only the interleavers that improve the asymptotic performance of irregular TCs are kept.
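The construction can be sketched as follows. This is a simplified reading of the procedure described above, under our own naming: the minimum-distance estimation and the per-interleaver selection step are omitted, and placement scores are taken to be the Dijkstra distances on the ring.

```python
import heapq
import random

def dijkstra(adj, src, n):
    """Shortest distances from src in a graph given as adjacency lists
    of (neighbour, weight) pairs, using a binary heap."""
    dist = [float("inf")] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def build_interleaver(degrees, alpha=0.85, seed=0):
    """Dijkstra-guided placement sketch: degrees[i] is the repetition degree
    of info bit i. Nodes form a tail-biting ring with unit-weight edges;
    copies of the same bit are linked by zero-weight crossbars, and each new
    copy is placed at a free node whose distance from the previous copy is
    at least alpha times the best available distance."""
    rng = random.Random(seed)
    n = sum(degrees)
    adj = [[((i - 1) % n, 1), ((i + 1) % n, 1)] for i in range(n)]
    free = set(range(n))
    perm = [None] * n          # perm[position] = index in the repeated stream
    sym = 0
    for d in degrees:          # place one info bit (d copies) at a time
        prev = rng.choice(sorted(free))
        free.remove(prev)
        perm[prev] = sym
        sym += 1
        for _ in range(d - 1):
            dist = dijkstra(adj, prev, n)
            best = max(dist[p] for p in free)
            # alpha < 1 keeps some randomness around the best placement
            cands = [p for p in free if dist[p] >= alpha * best]
            pos = rng.choice(cands)
            free.remove(pos)
            perm[pos] = sym
            sym += 1
            # zero-weight crossbar linking two copies of the same bit
            adj[prev].append((pos, 0))
            adj[pos].append((prev, 0))
            prev = pos
    return perm
```

In the full algorithm, each interleaver produced this way would then be scored by the all-zero iterative distance estimation, keeping only those that raise the minimum distance.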
The proposed algorithm allows suitable permutations to be designed that increase the minimum distance of irregular TCs. The pilot bits are now distributed along the frame in a way that guarantees a good spread between the different groups, and the correlation effect between the pilot groups is reduced. This algorithm works very well for short block sizes [10]. For medium and large blocks, the algorithm amounts to a systematic search for an optimized interleaver over a wide domain of parameter values, due to the random selection introduced by the parameter α. When the block size reaches a few thousand bits, the algorithm may take an unacceptable computational time to find good interleavers, and we cannot be sure of covering all the possible cases.
This algorithm was run for blocks of 1,146 bits with degree profile (f2, f8). As the average degree is dav = 3, the interleaver length is equal to 3,438. Figure 7 compares the FER performance of the code under both random and optimized interleaving: a gain of two orders of magnitude in the error floor is observed in favour of the optimized interleaver. Nevertheless, as with random interleavers, one additional drawback of this family of interleavers is the need to store the interleaved addresses, since no equation is available for the permutation.
4.4 Adding a post-encoder to irregular turbo codes
The algorithm proposed in section 4.3 is only practicable for short to medium blocks. It would be possible to investigate the interleavers it provides and explore them in detail in order to find structured interleavers with similar properties that can be described analytically. However, as previously explained, devising permutations for turbo codes is not an easy task. In order to ensure a large asymptotic gain at very low error rates, even with a non-optimized internal permutation, we propose an irregular TC inspired by our work on 3D TCs [9], which improves the distance properties of these codes: a fraction 0 ≤ λ ≤ 1 of the parity bits is post-encoded by a rate-1 post-encoder. For the regular 3D TC, the increase in minimum distance is significant, at the expense of a loss in convergence threshold and an increase in complexity. The same kind of behaviour is expected for irregular TCs.
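The post-encoding stage can be sketched as below. The rate-1 accumulator 1/(1 + D) is an illustrative stand-in, not necessarily the post-encoder used in the 3D TC; the regular selection pattern (every (1/λ)-th parity bit) matches the regular post-encoding mentioned in the text.

```python
def post_encode(parity, lam):
    """Post-encode a fraction `lam` of the parity bits with a rate-1 code,
    here a simple accumulator (1/(1 + D)) as an illustrative stand-in.
    A regular pattern selects every (1/lam)-th parity bit."""
    step = max(1, round(1 / lam))
    out = list(parity)
    acc = 0
    for i in range(0, len(parity), step):
        acc ^= parity[i]   # rate-1 recursive accumulator
        out[i] = acc       # replace the selected parity bit
    return out

# lam = 0.5: every second parity bit goes through the accumulator
post = post_encode([1, 1, 0, 0], 0.5)
```

Irregular selection patterns (post-encoding only the pilot bits, or only the lowest-degree bits) would replace the fixed stride by a per-bit mask.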
Figure 8 shows the BER and FER performance of regular and irregular TCs for blocks of 2,046 bits and code rate . The interleaver length is thus equal to 6,138. Compared with the irregular TC without post-encoding, the gain at high SNRs is nearly two orders of magnitude when post-encoding is performed. Other simulations show that a gain of nearly 2.5 orders of magnitude can be observed for longer interleavers. Note that, unlike the method described in section 4.3, there is no limitation on the block size. The great advantage is that irregular TCs with post-encoding perform better than regular TCs in both the waterfall and the error floor regions. The minimum distance could be increased even further by searching for an adapted post-encoding pattern. Here, the post-encoding is regular; however, it is possible to post-encode only the pilot bits, or only the bits with the lowest degree, or to find a balance between the two. This option is expected to give better results and will be investigated in future work.