
Irregular 8PSK mapping for variable length codes with small blocks in a BICM-ID system

Abstract

Iterative decoding is an effective technique to approach the channel capacity for very large block sizes with enough iterations. However, due to bandwidth and delay limitations, small data blocks are much more common in practical communications, and low iteration counts are usually preferred for both decoding complexity and delay considerations. In such cases, the design rules of near-capacity decoding, which are generally asymptotic with respect to the block size, may lead to inferior performance. To overcome this problem for 8-phase shift keying (8PSK) modulated variable length codes (VLCs), an irregular mapping scheme for the transmission system of bit-interleaved coded modulation with iterative decoding (BICM-ID) is studied in this paper. A sub-mapping search algorithm and an irregular mapping optimization algorithm are proposed, aiming at maximizing the extrinsic mutual information after a target number of iterations. Simulation results show that for small data block sizes with a low iteration count, our scheme has advantages over existing near-capacity systems optimized by asymptotic tools.

1 Introduction

Variable length codes (VLCs) were originally proposed for entropy coding and have been widely applied in a variety of audio and video compression standards. While providing higher data rates than fixed length codes (FLCs), they are sensitive to error propagation caused by channel noise: even a single bit error can cause a loss of synchronization, resulting in an invalid packet. Especially when bandwidth-efficient modulations, e.g., 8-phase shift keying (8PSK) or M-ary quadrature amplitude modulation (QAM), are used, the noise susceptibility of these high-order modulations is adverse to the correct decoding of variable length data.

To improve the robustness of VLCs, reversible variable length codes (RVLCs) [1-3] were proposed and studied to mitigate the impact of synchronization loss by bidirectional decoding. Another class of robust VLCs focuses on the free distance of the codewords, just as channel codes do, which is why they are called variable length error-correcting codes (VLECs) [2,4,5]. In addition, soft VLC decoding, e.g., BCJR decoding [6] based on the bit-level [7] or symbol-level [8] VLC trellis, was introduced to achieve better performance than hard instantaneous decoding, especially when iterative decoding is involved [8,9].

From the perspective of high-order modulation, bit-interleaved coded modulation with iterative decoding (BICM-ID) [10] is an effective technique to approach the channel capacity. With the aid of extrinsic information transfer (EXIT) charts [11,12], near-capacity iterative decoding can be achieved through EXIT curve matching under the assumption of a large block size. The EXIT curves can be optimized by adjusting the mapping of bits to the symbol constellation [13,14]. Another important family of EXIT optimization techniques is irregular codes, such as irregular convolutional codes (IrCCs) [15], irregular VLCs (IrVLCs) [9], and irregular mappings/modulations [16-18].

The iterative decoding behaviors analyzed in [9,14-18] are all asymptotic with respect to block size, where large interleavers together with enough iterations are assumed. However, the asymptotically good performance cannot be guaranteed with small blocks.

In this paper, we derive an irregular mapping scheme for 8PSK modulated VLCs in a small block BICM-ID system and show its coding gain over existing near-capacity systems optimized by standard asymptotic tools. Our design criterion is to maximize the extrinsic mutual information at a target E_b/N_0 after a fixed and low number of iterations, which is modified from the algorithm in [19]. In addition, a simplified sub-mapping search algorithm is proposed, based on the work of [20].

The remainder of this paper is organized as follows. Section 2 gives an overview of the transmission system. After a short introduction about the classification of 8PSK mappings in Section 3, our scheme of irregular mapping optimization is illustrated in Section 4. Simulation results are shown in Section 5, and Section 6 concludes this paper.

2 System overview

Our transmission system is depicted in Figure 1. We consider a source of discrete symbols from a finite alphabet \(\mathcal{A}\). A source symbol at time index k is denoted by \(U_{k}\in \mathcal {A}\). One packet U_{1:K}=[U_1,U_2,…,U_K] (written as U_: for simplicity) consists of K source symbols. The VLC encoder maps a symbol U_k to a variable length codeword C(U_k) and outputs a bit stream u_{1:N}=[C(U_1),C(U_2),…,C(U_K)]=[u_1,u_2,…,u_N] with bit length \(N=\sum _{k=1}^{K}{\ell \left (U_{k}\right)}\). Here, ℓ(·) denotes the length of a specific codeword. The VLC packetization rule is:

$$ N\leq N_{\text{packet}}<N+\ell\left(U_{K+1}\right) $$
((1))
Figure 1. Block diagram of the transmission system. L_A(·): a priori information; L_E(·): extrinsic information; L(·)=L_A(·)+L_E(·): APP information for the final estimation.

In this way, a constant packet size N_packet can be maintained for VLC packets with dynamic K and N. When N<N_packet, zero bits are padded. Possible values of the zero-padding length ℓ_padding belong to the set {0,1,…,ℓ_max−1}, and the average value of N is:

$$\begin{array}{@{}rcl@{}} \overline{N}&=&N_{\text{packet}}-\overline{\ell}_{\text{padding}} \\ &=&N_{\text{packet}}-\frac{\sum_{i=0}^{\ell_{\text{max}}-1}{i\cdot\Pr\{\ell(U)>i\}}} {\sum_{i=0}^{\ell_{\text{max}}-1}{\Pr\{\ell(U)> i\}}} \end{array} $$
((2))

where ℓ_max is the longest codeword length. The proof of Equation 2 is given in Appendix A. Here, we suppose the bit-level VLC trellis [7] is applied in the VLC soft decoding, which implies that the decoder does not necessarily need to know K.
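For concreteness, the short Python sketch below evaluates Equation 2 for a hypothetical codeword-length distribution; the probabilities used are purely illustrative and are not the length distribution of the JSC-VLC from [5] used later.

```python
# Minimal sketch of Equation 2: average zero-padding length and average
# payload length N-bar.  The codeword-length probabilities are hypothetical.

def average_padding(length_probs):
    """length_probs[l] = Pr{ell(U) = l}; returns the mean padding length."""
    l_max = max(length_probs)
    tail = [sum(p for l, p in length_probs.items() if l > i)  # Pr{ell(U) > i}
            for i in range(l_max)]
    return sum(i * t for i, t in enumerate(tail)) / sum(tail)

probs = {2: 0.4, 3: 0.3, 4: 0.2, 5: 0.1}   # illustrative ell(U) distribution
N_packet = 600
pad = average_padding(probs)
print("average padding:", pad, " average N:", N_packet - pad)
```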

A random interleaver of size P·N_packet (P=1,2,…) permutes the bit stream u_: to u'_:. The interleaved u'_: is then encoded by a rate-1 recursive systematic convolutional (RSC) code, producing the bit stream v_:. This rate-1 RSC coding is also called ‘doping’ [21] since the output bit stream is doped with both info-bits and parity-bits. After mapping the bits v_: to the 8PSK constellation, the signal is transmitted over the additive white Gaussian noise (AWGN) channel. Dependencies between the modulated symbols are introduced by the inner RSC code for the purpose of reaching the EXIT point (1,1) [21], which corresponds to iterative decoding with the error floor removed.

The rate-1 RSC code in Figure 1 is chosen as a simple repeat-accumulate (RA) code with transfer function 1/(1+D) and memory length 1. The ‘doping’ rate is set to 1/60, i.e., 59 info-bits and 1 parity-bit within one doping period [21]. The packet size is set to N_packet=600.
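As a rough illustration of this doping, the sketch below runs the 1/(1+D) accumulator over all interleaved bits and substitutes the accumulator output for every 60th transmitted bit, giving 59 info-bits and 1 parity-bit per period; this reflects our reading of the doping idea in [21], and details such as the exact parity position are assumptions.

```python
# Hedged sketch of rate-1 'doping' with the accumulator 1/(1+D): the state
# c_n = u_n XOR c_{n-1} is updated by every bit, while only every 60th
# transmitted bit is the accumulator output (the rest stay systematic).

def ra_dope(bits, period=60):
    out, acc = [], 0
    for n, u in enumerate(bits):
        acc ^= u                                     # 1/(1+D) accumulator
        out.append(acc if (n + 1) % period == 0 else u)
    return out

# Example: v = ra_dope(u_interleaved) is then mapped 3 bits at a time to 8PSK.
```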

At the receiver side, bit-level extrinsic information is exchanged between the inner a posteriori probability (APP) demapper/decoder and the outer VLC APP decoder; this setup falls into the subfield of iterative source–channel decoding (ISCD). The inner decoder APP_in consists of a soft demapper and the concatenated rate-1 RSC decoder. The a priori information L_A(u'_n) of the RSC decoder is also input to the soft demapper as L_A(v_n), but only if n is an info-bit index of the RSC code. Finally, after a fixed number of iterations, the original source symbols are recovered using a soft-input VLC Viterbi decoder.
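The skeleton below (not the authors' code) illustrates this extrinsic-information exchange; inner_app_decode and outer_vlc_app_decode are hypothetical placeholders for the soft demapper plus rate-1 RSC decoder and for the bit-level VLC BCJR decoder, and the interleaver is represented by an index permutation.

```python
# Structural sketch of the iterative receiver in Figure 1.  The two decoder
# callables are hypothetical placeholders; only the LLR bookkeeping is shown.

def bicm_id_receive(channel_obs, interleave, deinterleave,
                    inner_app_decode, outer_vlc_app_decode, iterations=5):
    La_inner = [0.0] * len(interleave)            # a priori LLRs, inner decoder
    for _ in range(iterations):
        Le_inner = inner_app_decode(channel_obs, La_inner)     # extrinsic out
        La_outer = [Le_inner[p] for p in deinterleave]         # de-interleave
        Le_outer = outer_vlc_app_decode(La_outer)              # extrinsic out
        La_inner = [Le_outer[p] for p in interleave]           # re-interleave
    # APP for the final VLC Viterbi estimation: L = L_A + L_E
    return [a + e for a, e in zip(La_outer, Le_outer)]
```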

3 Background on the classification of mappings for 8PSK

Considering the rotational symmetry of the constellation, there are a total of 7!=5040 different mappings for 8PSK, which are systematically classified in [20]. The classification is based on the bitwise distance spectra in the situations of no prior information, W_0, and full prior information, W_1. For an M-ary modulation (M=2^Q, Q≥3), W_0 is defined as:

$$ \textit{\textbf{W}}_{0}\triangleq \left[ \begin{array}{cccc} {w_{0}^{1}}(1) & {w_{0}^{1}}(2) & \cdots & {w_{0}^{1}}(M/2) \\ \vdots & \vdots & \ddots & \vdots \\ {w_{0}^{Q}}(1) & {w_{0}^{Q}}(2) & \cdots & {w_{0}^{Q}}(M/2) \end{array} \right] $$
((3))

where each entry \({w_{0}^{i}}(j)\) (i=1,…,Q and j=1,…,M/2) represents the total Hamming distance for the ith bit v^i between a constellation symbol s_m (m=0,…,M−1) and all the other symbols at Euclidean distance d_j, averaged over all \(s_{m}\in \mathcal {S}\):

$$ {w_{0}^{i}}(j)=\frac{1}{M}\sum\limits_{m=0}^{M-1}{\sum\limits_{s_{mj}\in\mathcal{S}_{mj}}{{v_{m}^{i}} \oplus v_{mj}^{i}}} $$
((4))

where \({v_{m}^{i}}\) and \(v_{\textit {mj}}^{i}\) represent the ith bit of symbols s_m and s_mj, respectively, and the operator ‘⊕’ denotes the XOR operation between these two bits. The set \(\mathcal {S}_{\textit {mj}}\) contains the constellation symbols at Euclidean distance d_j from s_m:

$$ \mathcal{S}_{mj}=\left\{s\in\mathcal{S}:d\left(s,s_{m}\right)=d_{j}\right\} $$
((5))

where d(·,·) denotes the Euclidean distance between two constellation symbols. An example of the 8PSK constellation symbol set partitioned by Euclidean distance is shown in Figure 2.

Figure 2. 8PSK constellation divided by Euclidean distances.
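A small sketch of the partition in Figure 2 for unit-energy 8PSK, grouping the seven non-reference symbols into the M/2=4 Euclidean distance classes (the implementation details are ours):

```python
# Group the 8PSK symbols by Euclidean distance from a reference symbol; for
# unit energy the classes are d_1=2sin(pi/8), d_2=sqrt(2), d_3=2sin(3pi/8), d_4=2.

import cmath

M = 8
S = [cmath.exp(2j * cmath.pi * m / M) for m in range(M)]   # unit-energy 8PSK

def distance_classes(ref=0, ndigits=9):
    classes = {}
    for m, s in enumerate(S):
        if m != ref:
            d = round(abs(s - S[ref]), ndigits)
            classes.setdefault(d, []).append(m)
    return dict(sorted(classes.items()))           # {d_j: symbol indices}

print(distance_classes())
```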

W_1, the distance spectrum with full prior information, has a structure identical to that of W_0 in Equation 3. Under the full prior information condition, the other bits v^[i]=[v^1,…,v^{i−1},v^{i+1},…,v^Q] within the same Q-bit tuple are assumed to be already perfectly known when demapping bit v^i. Let s(v^[i],v^i) be the mapped constellation symbol of the Q-bit tuple [v^[i],v^i]=[v^1,…,v^{i−1},v^i,v^{i+1},…,v^Q], so that \(s_{m}=s\left (\textit {\textbf {v}}_{m}^{[i]},{v_{m}^{i}}\right)\). The entries of W_1 are determined as

$$ {w_{1}^{i}}(j)=\frac{1}{M}\sum\limits_{m=0}^{M-1}{\mathbb{I}\left\{d\left(s_{m},\,s\left(\textit{\textbf{v}}_{m}^{[i]},\bar{v}_{m}^{i}\right)\right)=d_{j}\right\}} $$
((6))

where \(\bar {v}_{m}^{i}\) denotes the bit complement of \({v_{m}^{i}}\), and \(\mathbb{I}\{\cdot\}\) is the indicator function that returns 1 when the event is true and 0 otherwise.

It is observed that only pairs of constellation symbols differing in a single bit need to be taken into consideration when calculating the entries of W_1 in Equation 6, since only one bit is unknown under the full prior information assumption. In contrast, with no prior information, the calculation of the entries of W_0 in Equation 4 involves every pair of symbols.
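The following sketch computes both spectra for a given 8PSK labelling along these lines; the natural labelling used as the example is only an illustration and does not correspond to any particular mapping class of [20].

```python
# Hedged sketch of the bitwise distance spectra W0 (Eq. 4) and W1 (Eq. 6).
# 'labels' gives the Q=3 bit label (as an integer) of each symbol s_0..s_7.

import cmath

M, Q = 8, 3
S = [cmath.exp(2j * cmath.pi * m / M) for m in range(M)]
DISTS = sorted({round(abs(S[m] - S[0]), 9) for m in range(1, M)})   # d_1..d_4

def bit(x, i):
    return (x >> (Q - 1 - i)) & 1                 # i-th bit of a label, MSB first

def spectra(labels):
    W0 = [[0.0] * (M // 2) for _ in range(Q)]
    W1 = [[0.0] * (M // 2) for _ in range(Q)]
    for m in range(M):
        for mm in range(M):
            if mm == m:
                continue
            j = DISTS.index(round(abs(S[m] - S[mm]), 9))     # distance class
            diff = labels[m] ^ labels[mm]
            for i in range(Q):
                W0[i][j] += bit(diff, i) / M                 # Eq. 4: Hamming distance
                if diff == 1 << (Q - 1 - i):                 # only bit i differs
                    W1[i][j] += 1 / M                        # Eq. 6: indicator
    return W0, W1

W0, W1 = spectra(list(range(M)))    # illustrative 'natural' labelling 000..111
```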

The 5040 mappings of 8PSK are divided into 86 classes with unique [W_0, W_1] pairs, which are listed in Table IV of [20]. Thanks to this systematic classification, the mapping optimization for 8PSK is greatly simplified.

4 Irregular mapping optimization

The procedure of irregular mapping optimization for 8PSK modulated small block VLCs is illustrated in this section. First, Section 4.1 sets the optimization target. In Section 4.2, a sub-mapping search algorithm is proposed for selecting a group of sub demappers with diverse-shaped EXIT curves. Finally, the corresponding percentages of the selected mappings are optimized in Section 4.3.

4.1 Optimization target setting

For a given VLC, its EXIT curve as the outer decoder can be obtained by Monte Carlo simulation, like the dashed line in Figure 3a. By searching among the EXIT curves of all 86 classes of inner APP decoders (the 86 classes of demappers [20], each followed by a doping rate 1/60 RA decoder, as in Figure 1), it is found that the EXIT curve of the APP_in decoder of mapper 50 converges best with that of the VLC decoder APP_out: a narrow open tunnel between the EXIT curves can be observed at E_b/N_0=2.4 dB. The principle of this exhaustive search is consistent with the asymptotic near-capacity design criterion [12]. Thus, mapper 50 is treated as the benchmark mapping, based on which our optimization target is set.

Figure 3. Heuristic experiments for target setting. (a) EXIT curves of the corresponding outer/inner APP decoders and the decoding trajectory for 30 iterations; (b) SER_L vs. E_b/N_0 curves for different iteration counts. The outer VLC is from [5], and the inner mapper is mapper 50 of the 8PSK mapping classification in [20]. P=1, N_packet=600, RA doping rate 1/60, AWGN channel.

Our goal is to maximize μ_F, the expected extrinsic mutual information after a target number of F iterations. Figure 3b shows the symbol error rates (SERs) of mapper 50 for different iteration counts under the small block condition (P·N_packet=600). From the SER chart, decoding with around 5 iterations is attractive as a good tradeoff between performance and complexity. The ‘waterfall’ region of the 5-iteration SER curve starts at about E_b/N_0=3.5 dB. At this signal-to-noise ratio (SNR), the decoding trajectory of the average mutual information over 30 iterations is depicted in Figure 3a. All the extrinsic mutual information measurements of the EXIT curves and the trajectory points were obtained using the histogram-based approximation of the L_A or L_E probability density functions (PDFs) [11].
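As an aside, a minimal sketch of such a histogram-based measurement from extrinsic LLR samples is shown below; the bin count and range handling are our own choices rather than those of [11].

```python
# Estimate I(X; L) from LLR samples via conditional histograms p(L|x),
# assuming equiprobable bits, as in the histogram-based EXIT measurement.

import numpy as np

def mutual_information_hist(bits, llrs, n_bins=100):
    bits, llrs = np.asarray(bits), np.asarray(llrs)
    edges = np.linspace(llrs.min(), llrs.max() + 1e-12, n_bins + 1)
    p0, _ = np.histogram(llrs[bits == 0], bins=edges)
    p1, _ = np.histogram(llrs[bits == 1], bins=edges)
    p0 = p0 / max(p0.sum(), 1)                    # bin probability masses
    p1 = p1 / max(p1.sum(), 1)
    mi = 0.0
    for a, b in zip(p0, p1):
        for px in (a, b):
            if px > 0:
                mi += 0.5 * px * np.log2(2 * px / (a + b))
    return mi
```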

It is observed in Figure 3a that the trajectory deviates from the EXIT curves as the iteration count increases, which is due to the growing correlations between the decoded soft bits, especially under small block sizes. Since our optimization is based on EXIT analysis, F should be kept low to avoid excessive inaccuracy; here, no more than 5 is taken as the limit. Our optimization target is therefore set as maximizing μ_F (with F=5) at E_b/N_0=3.5 dB.

The detailed simulation parameters used in this subsection can be found in Section 5.1.

4.2 Sub mapping searching

We use the difference between the extrinsic mutual information with full and with no prior information input as a measure to characterize a specific mapping:

$$ C_{j}=T_{j}(1)-T_{j}(0),\quad j=1,2,\cdots,86 $$
((7))

where T_j(·) is the soft demapper EXIT function of the jth mapping from Table IV of [20], with the AWGN channel assumed. T_j(1) and T_j(0) can either be obtained by EXIT simulations or computed directly using Equations 60 and 65 of [20], which are determined by the bitwise distance spectra W_1 and W_0, respectively. The main feature of a sub inner decoder (demapper concatenated with the doping RA decoder) can be roughly estimated by considering the mapping alone, since the doping rate is very low. For example, the sub demapper EXIT functions in Figure 4 (upper) are similar to the EXIT curves of the corresponding sub inner decoders (Figure 4, lower), except that the latter bend up to the point (1,1) in the region very close to I_A=1. This phenomenon is also an intuitive illustration of the role of the RA code doping [21].

Figure 4. EXIT curves of the selected sub demappers (upper) and of the corresponding sub inner decoders (lower). An example for 8PSK over the AWGN channel at E_S/N_0=5.0 dB with RA doping rate 1/60.

The process of searching for 8PSK sub (de)mappings with diverse EXIT features is summarized in Algorithm 1. Assume the target number of sub mappings is \(J_{0}=2^{l}+1\;\left(l\in \mathbb {N}_{+}\right)\). The measurements {C_j} are sorted in ascending order before the algorithm starts, so that \(C_{1}=\min \limits _{j}{C_{j}}\) and \(C_{86}=\max \limits _{j}{C_{j}}\). The two mappings with the minimum and the maximum measurement are selected as the initial sub mappings; they correspond to the Gray and anti-Gray mappings, respectively.

After initialization, the main idea of Algorithm 1 can be summarized as finding the remaining sub mappings with diverse measurements between C_1 and C_86, in a manner similar to binary search: the mapping with the measurement closest to \(\left (C_{j_{1}}+C_{j_{2}}\right)/2\) is treated as the one with the ‘middle feature’ of two already-selected mappings j_1 and j_2. This procedure is repeated l=log_2(J_0−1) times. After the algorithm, a set of J (J≤J_0) mappings is selected from the 86 classes as sub mappings for the irregular inner code.
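The sketch below reflects our reading of this binary-search-style selection; the measurement list C would come from Equation 7, sorted in ascending order, and the usage line at the end is a hypothetical placeholder.

```python
# Select up to J0 = 2^l + 1 sub mappings with diverse measurements C_j.
# C is the ascending list of measurements of the 86 mapping classes.

import math

def select_submappings(C, J0):
    chosen = [0, len(C) - 1]                       # C_min (Gray), C_max (anti-Gray)
    for _ in range(int(math.log2(J0 - 1))):        # l rounds of refinement
        new = []
        for a, b in zip(chosen, chosen[1:]):
            target = (C[a] + C[b]) / 2             # 'middle feature' of the pair
            new.append(min(range(a, b + 1), key=lambda k: abs(C[k] - target)))
        chosen = sorted(set(chosen + new))         # duplicates may give J < J0
    return chosen

# chosen = select_submappings(sorted(C_measurements), J0=9)   # placeholder usage
```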

Figure 4 shows example curves for J=3, 5, 9, and 16 (J_0=3, 5, 9, and 17, respectively) at E_S/N_0=5.0 dB. Generally speaking, a larger J provides a more flexible weighted sum of sub code EXIT functions, i.e., more possible shapes for the EXIT function of the irregular code, which allows a more accurate match for the optimization purpose. However, a larger J also means a larger memory requirement for the mapping table and a more complicated optimization of the irregular code (see Section 4.3), so it should not be too large. In our simulations, we choose the empirical value J=9, since larger values bring barely any improvement to the optimization results.

4.3 Maximization of the extrinsic mutual information

For iterative decoding, the correlations between the decoded bits increase with the number of iterations. With small block sizes, the effectiveness of the iterations degrades rapidly after the first few. In [19], the design criterion aims at maximizing the extrinsic information after a fixed number of iterations, and its advantages are shown for small blocks. With some modifications, the optimization algorithm of [19] can also be applied to our scheme.

Let α_j (j=1,2,…,J) be the percentage of bits mapped by sub mapper j. Unlike the irregular outer code in [19], an irregular inner code is utilized in our scheme, and its EXIT function is

$$ T_{\text{in}}\left(I_{A}\right)=\sum\limits_{j=1}^{J}{\alpha_{j}\,T_{\text{in},j}\left(I_{A}\right)} $$
((8))

where \(T_{\text{in},j}(\cdot)\) is the jth sub inner EXIT function. The optimization problem of maximizing the expected extrinsic mutual information after F iterations can be summarized as

$$\begin{aligned} &\text{Maximize}\,\,\mu_{F}\\ &\text{subject to}\,\,\sum\limits_{j=1}^{J}\alpha_{j}=1; \quad 0\leq\alpha_{j}\leq 1,\enskip \forall j \end{aligned} $$

After the ith iteration of the decoding process,

$$ \mu_{i}=T_{\text{out}}\left(T_{\text{in}}\left(\mu_{i-1}\right)\right),\quad i=1,2,\cdots,F,\quad \mu_{0}=0 $$
((9))

To maximize μ_F, a variant of the steepest descent approach proposed in [19] is applied to optimize the weighting vector α=(α_1,…,α_J)^T:

$$ \boldsymbol{\alpha}^{(n+1)}=\boldsymbol{\alpha}^{(n)}+\delta_{n}\cdot\nabla\mu_{F} $$
((10))

where δ_n is the step size of the nth recursion, and the gradient

$$ \nabla\mu_{F}=\left(\frac{\partial{\mu_{F}}}{\partial{\alpha_{1}}},\frac{\partial{\mu_{F}}}{\partial{\alpha_{2}}},\cdots,\frac{\partial{\mu_{F}}}{\partial{\alpha_{J}}}\right)^{\mathrm{T}} $$
((11))

where the partial derivatives can be recursively expressed as

$$ \frac{\partial{\mu_{i}}}{\partial{\alpha_{j}}}=T_{\text{out}}'\left(T_{\text{in}}\left(\mu_{i-1}\right)\right)\cdot \left[T_{\text{in},j}\left(\mu_{i-1}\right)+T_{\text{in}}'\left(\mu_{i-1}\right)\cdot \frac{\partial{\mu_{i-1}}}{\partial{\alpha_{j}}}\right] $$
((12))

with the initial value \(\frac {\partial {\mu _{1}}}{\partial {\alpha _{j}}}=T_{\text {out}}'\left (T_{\text {in}}(0)\right)\cdot T_{\text {in},j}(0)\) for all j=1,2,…,J. Equation 12 is the main difference between our irregular inner code situation and the irregular outer code situation in [19]. By substituting Equation 12 into Equation 10, a steepest gradient algorithm similar to that in [19] can be applied to irregular inner codes, e.g., the irregular mapping (with RA doping) in this paper. Apart from this, the rest of the optimization algorithm follows Appendix A of [19].
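A minimal sketch of this recursion and the gradient update is given below; T_out and the sub inner EXIT functions T_in_j are hypothetical callables (in practice obtained from EXIT simulation), the derivatives are taken numerically, a fixed step size replaces δ_n, and the simplex constraint is enforced by simple clipping and renormalization, which only approximates Appendix A of [19].

```python
# Sketch of the steepest-ascent optimization of the weights alpha (Eqs. 8-12).
# T_out: outer EXIT function; T_in_j: list of J sub inner EXIT functions.

import numpy as np

def deriv(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2 * h)         # numerical derivative

def optimize_alpha(T_out, T_in_j, F=5, steps=200, delta=0.05):
    J = len(T_in_j)
    alpha = np.full(J, 1.0 / J)
    for _ in range(steps):
        T_in = lambda I: sum(a * T(I) for a, T in zip(alpha, T_in_j))   # Eq. 8
        mu, dmu = 0.0, np.zeros(J)                 # mu_0 = 0
        for _i in range(F):                        # Eqs. 9 and 12
            dmu = deriv(T_out, T_in(mu)) * (
                np.array([T(mu) for T in T_in_j]) + deriv(T_in, mu) * dmu)
            mu = T_out(T_in(mu))
        alpha = np.clip(alpha + delta * dmu, 0.0, 1.0)                  # Eq. 10
        alpha /= alpha.sum()                       # keep sum(alpha) = 1
    return alpha
```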

5 Simulation

In this section, the effectiveness of our method is discussed with two kinds of sources. The source used in Section 5.1 is the so-called ‘English Alphabet’ [4], which is a common subject for comparing VLC/VLEC/RVLC schemes [1-5]. In Section 5.2, we employ a source of 16-level Lloyd-Max quantized [22] Gaussian independent and identically distributed (i.i.d.) samples. In both cases, the performance of our method is compared with the asymptotic near-capacity design.

5.1 Simulation with ‘English Alphabet’ source symbol

‘English Alphabet’ is a theoretical source of independent symbols distributed according to the probabilities of the 26 letters (with entropy H(U)=4.176). We use the joint source–channel (JSC) VLC from [5] with an average codeword length \(\bar {\ell }_{\text {JSC-VLC}}=7.338\) to encode the source. For N_packet=600, an average of 3.331 zero bits are padded and the padding rate is r_padding=0.994. The overall coding rate of our system is then \(R=H(U)/\bar {\ell }_{\text {JSC-VLC}}\cdot r_{\text {padding}}\cdot 3=1.70\) bits per channel use (8PSK symbol), resulting in a theoretical limit of E_b/N_0=1.80 dB. We choose J_0=9, and the sub mapping search and optimization are performed at E_b/N_0=3.5 dB by maximizing μ_5. The selected sub mappings and their optimized weighting factors are listed in Table 1, with the corresponding EXIT curves depicted as dotted lines in Figure 5.

Figure 5. EXIT-related simulation results with ‘English Alphabet’: EXIT curves and decoding trajectory of the VLC-BICM-ID system with the proposed irregular mapping.

Table 1 8PSK sub mappings of the irregularly mapped BICM-ISCD system with ‘English Alphabet’ source symbols

Comparing the EXIT curves in Figure 5 with those of the benchmark mapper 50 in Figure 3a, at E_b/N_0=3.5 dB the optimized irregular mapper obtains a larger μ_5, i.e., the extrinsic mutual information after 5 iterations. The cost is a higher convergence threshold of 2.8 dB, a loss of 0.4 dB in E_b/N_0 compared with mapper 50. However, this threshold only characterizes the asymptotic iterative decoding behavior and has no direct connection with the performance under small block conditions.

Figure 6 shows the SER performance measured by the Levenshtein distance [23]. Besides the benchmark mapper 50, some well-known mapping schemes such as Gray, anti-Gray, and maximum squared Euclidean weight (MSEW) [24] are also presented as references. Around the targeted optimization point E_b/N_0=3.5 dB, the proposed scheme gains at most 0.3 dB over the benchmark mapper 50. It is also worth noting that a less pronounced ‘waterfall’ region is observed in the SER_L curve of the proposed irregular mapping scheme, which generally indicates superiority at smaller SNRs. Moreover, the comparison with the other heuristic mappings in Figure 6a shows that the proposed scheme outperforms them overall under this condition of block size 600 with 5 iterations.

Figure 6. SER-related simulation results with ‘English Alphabet’: SER_L vs. E_b/N_0 for data blocks with P=1 in (a) and P=500 in (b), i.e., block sizes of 600 in (a) and 3×10^5 in (b). The iteration counts are 5 in (a) and 50 in (b).

Another reference system is a classic separate source and channel coding system formed by concatenating a Huffman VLC, an IrCC, and anti-Gray mapping (also with 1/60 RA doping). Its overall coding rate is adjusted to be the same as that of the other JSC systems. This separated system reaches its ‘waterfall’ region at the lowest SNR (about E_b/N_0=2.2 dB) with a large interleaver and enough iterations in Figure 6b, but shows the worst performance when the block size is reduced as in Figure 6a. This is an example showing that the design rules of asymptotically near-capacity decoding can cause markedly inferior performance with small blocks and low iteration counts. In addition, in Figure 6b, the proposed irregular mapping scheme shows its ‘waterfall’ at E_b/N_0=2.8 dB, consistent with the EXIT convergence analysis in Figure 5.

5.2 Simulation with Gaussian i.i.d. source sample

In order to verify the robustness of our method with respect to the source statistics, an additional simulation with 16-level Lloyd-Max quantized Gaussian i.i.d. source samples is presented in this subsection. The Gaussian distribution has widespread applications owing to the wide applicability of the central limit theorem, which makes it a practical source.

The entropy of the 16-level quantized source is H(U)=3.747. The VLEC we use is constructed with the algorithm in [2] and has an average length \(\bar {\ell }_{\text {VLEC}}=8.271\). Continuing to simulate with N_packet=600, we obtain an overall coding rate of R=1.45 bits per channel use and a theoretical limit of E_b/N_0=0.89 dB. Figure 7 shows the EXIT curves of the irregular mapping obtained by maximizing μ_5 at E_b/N_0=2.6 dB. The sub mappings and their corresponding percentages are listed in Table 2. In addition, the convergence threshold at large block size, analyzed by the EXIT chart, is E_b/N_0=1.6 dB, which is also indicated in Figure 7.

Figure 7. EXIT-related simulation results with the quantized Gaussian samples and irregular mapping: EXIT curves and expected decoding trajectory of the VLC-BICM-ID system with the proposed irregular mapping.

Table 2 8PSK sub mappings of the irregularly mapped BICM-ISCD system with 16-level Lloyd-Max quantized i.i.d. Gaussian source samples

The performance of our method is compared with the existing near-capacity method of [9], which employs an IrVLC for the optimization of the EXIT curves. The parameters of the sub VLCs are shown in Table 3; note that the fourth sub VLC is the same as the VLC used in our proposed design. The mapper for this reference system is the MSEW mapping from [24], which yields the lowest decoding convergence threshold when a large interleaver is assumed. As depicted in Figure 8, a narrow tunnel is observed at E_b/N_0=1.4 dB, a 0.2 dB gain with respect to the proposed method in Figure 7. However, the extrinsic mutual information observed at the target E_b/N_0=2.6 dB is much lower, which makes this reference system unfit for small blocks with 5 iterations.

Figure 8. EXIT-related simulation results with the quantized Gaussian samples and IrVLC: EXIT curves and expected decoding trajectory of the IrVLC-BICM-ID system proposed in [9].

Table 3 Sub VLCs of the asymptotic near capacity IrVLC system [9]

Figure 9 shows the reconstruction SNR (RSNR) performance. At sufficiently high E_b/N_0 values, the RSNR reaches 20.22 dB, which represents the case where channel-induced errors are negligible compared with the quantization noise. For small blocks with 5 iterations in Figure 9a, our irregular mapping method gains more than 3.0 dB over the asymptotic near-capacity IrVLC design, and it also has coding gains over other mappings such as Gray and MSEW in Figure 9a. A loss of about 0.2 dB is, however, observed in Figure 9b with a large interleaver and a large number of iterations.

Figure 9. RSNR-related simulation results with the quantized Gaussian samples: RSNR vs. E_b/N_0 for data blocks with P=1 in (a) and P=500 in (b), i.e., block sizes of 600 in (a) and 3×10^5 in (b). The iteration counts are 5 in (a) and 50 in (b).

6 Conclusions

An irregularly mapped BICM-ID scheme for 8PSK modulated VLCs is proposed in this paper. The complexity of the encoding part is very low: apart from a simple RA code, only table-lookup operations are needed (for VLC encoding, interleaving, and mapping). The scheme can therefore be applied in circumstances where the energy of the transmitter is a critical resource. The encoding complexity is, in a way, shifted to the receiver, since the decoding of the less compressed VLC with a greater average word length requires more computation.

The scheme in this paper concentrates on the iterative decoding performance for small block sizes with low iteration counts, which is more practical in bandwidth- and delay-sensitive communications. By setting an optimization target, selecting a set of sub mappings with diverse-shaped EXIT curves, and then optimizing their percentages, an irregular mapping with maximum extrinsic mutual information after a target number of iterations is derived. For small interleaving depths, our scheme exhibits better performance than the existing asymptotic capacity-approaching systems.

Appendix A: Proof of Equation 2

Possible values of the zero-padding length ℓ_padding belong to the set {0,1,…,ℓ_max−1}, and the probability of ℓ_padding taking each value in that set is

$$\begin{array}{@{}rcl@{}} \Pr\{\ell_{\text{padding}}=i\}&=&\Pr\{\ell\left(\textit{\textbf{U}}_{1:K}\right)=N_{\text{packet}}-i\}\\ &&\cdot\Pr\{\ell\left(U_{K+1}\right)>i\}, \\ 0&\leq& i\leq\ell_{\text{max}}-1 \end{array} $$
((13))

For a sufficiently long packet length N_packet (much larger than ℓ_max), the values of Pr{ℓ(U_{1:K})=N_packet−i} for different i∈{0,1,…,ℓ_max−1} are almost equal to each other and can thus be treated as a constant normalization factor ω, i.e., Pr{ℓ_padding=i}=ω·Pr{ℓ(U_{K+1})>i}, satisfying \(\sum _{i=0}^{\ell _{\text {max}}-1}{\Pr \{\ell _{\text {padding}}=i\}}=1\). Then we have

$$ \omega=\Pr\{\ell\left(\textbf{U}_{1:K}\right)\,=\,N_{\text{packet}}-i\} \,=\,\frac{1}{\sum_{i=0}^{\ell_{\text{max}}-1}{\Pr\left\{\ell(U_{K+1})>i\right\}}} $$
((14))

The average value of ℓ_padding is

$$\begin{array}{@{}rcl@{}} \overline{\ell}_{\text{padding}}&=&\sum\limits_{i=0}^{\ell_{\text{max}}-1} {i\cdot\Pr\left\{\ell_{\text{padding}}=i\right\}} \\ &=&\frac{\sum_{i=0}^{\ell_{\text{max}}-1}{i\cdot\Pr\left\{\ell(U_{K+1})>i\right\}}} {\sum_{i=0}^{\ell_{\text{max}}-1}{\Pr\left\{\ell(U_{K+1})>i\right\}}} \end{array} $$
((15))

thus

$$\begin{array}{@{}rcl@{}} \overline{N}&=&N_{\text{packet}}-\overline{\ell}_{\text{padding}} \\ &=&N_{\text{packet}}-\frac{\sum_{i=0}^{\ell_{\text{max}}-1}{i\cdot\Pr\left\{\ell(U_{K+1})>i\right\}}} {\sum_{i=0}^{\ell_{\text{max}}-1}{\Pr\left\{\ell(U_{K+1})>i\right\}}} \end{array} $$
((16))

is derived, which is exactly Equation 2.

References

  1. Y Takishima, M Wada, H Murakami, Reversible variable length codes. IEEE Trans. Commun. 43(234), 158–162 (1995).


  2. J Wang, L-L Yang, L Hanzo, Iterative construction of reversible variable-length codes and variable-length error-correcting codes. IEEE Commun. Lett. 8(11), 671–673 (2004).


  3. Y-M Huang, T-Y Wu, YS Han, An A*-based algorithm for constructing reversible variable length codes with minimum average codeword length. IEEE Trans. Commun. 58(11), 3175–3185 (2010).


  4. V Buttigieg, PG Farrell, Variable-length error-correcting codes. IEE Proc. Commun. 147(4), 211–215 (2000).


  5. A Diallo, C Weidmann, M Kieffer, New free distance bounds and design techniques for joint source-channel variable-length codes. IEEE Trans. Commun. 60(10), 3080–3090 (2012).


  6. L Bahl, J Cocke, F Jelinek, J Raviv, Optimal decoding of linear codes for minimizing symbol error rate (Corresp.). IEEE Trans. Inform. Theory. 20(2), 284–287 (1974).


  7. VB Balakirsky, in Proceedings of IEEE International Symposium on Information Theory (ISIT ’97): June-July 1997. Joint source-channel coding with variable length codes (Ulm, Germany, 1997), p. 419.

  8. R Bauer, J Hagenauer, in Proceedings of the 3rd ITG Conference on Source and Channel Coding (CSCC ’00): January 2000. Symbol-by-symbol MAP decoding of variable length codes (Munich, Germany, 2000), pp. 111–116.

  9. RG Maunder, J Wang, SX Ng, L-L Yang, L Hanzo, On the performance and complexity of irregular variable length codes for near-capacity joint source and channel coding. IEEE Trans. Wireless Commun. 7(4), 1338–1347 (2008).


  10. X Li, JA Ritcey, Bit-interleaved coded modulation with iterative decoding. IEEE Commun. Lett. 1(6), 169–171 (1997).


  11. S ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49(10), 1727–1737 (2001).


  12. A Ashikhmin, G Kramer, S ten Brink, Extrinsic information transfer functions: model and erasure channel properties. IEEE Trans. Inform. Theory. 50(11), 2657–2673 (2004).


  13. F Schreckenbach, N Görtz, J Hagenauer, G Bauch, in Proc. IEEE GLOBECOM 2003. Optimized symbol mappings for bit-interleaved coded modulation with iterative decoding (San Francisco, CA, 2003), pp. 3316–3320.

  14. Z Yang, Q Xie, K Peng, J Song, Labeling optimization for BICM-ID systems. IEEE Commun. Lett. 14(11), 1047–1049 (2010).


  15. M Tüchler, J Hagenauer, in Proc. 2002 Conf. Information Sciences and Systems. EXIT charts of irregular codes (Princeton, NJ, 2002), pp. 748–753.

  16. F Schreckenbach, G Bauch, Bit-interleaved coded irregular modulation. European Trans. Telecommun. 17(2), 269–282 (2006).


  17. R Tee, RG Maunder, L Hanzo, EXIT-chart aided near-capacity irregular bit-interleaved coded modulation design. IEEE Trans. Wireless Commun. 8(1), 32–37 (2009).


  18. C Cheng, G Tu, C Zhang, J Dai, Optimization of irregular mapping for error floor removed bit-interleaved coded modulation with iterative decoding and 8PSK. EURASIP J. Wireless Commun. Netw. 2014(1), 31–45 (2014).


  19. M Tüchler, Design of serially concatenated systems depending on the block length. IEEE Trans. Commun. 52(2), 209–218 (2004).


  20. F Brännström, LK Rasmussen, Classification of unique mappings for 8PSK based on bit-wise distance spectra. IEEE Trans. Inform. Theory. 55(3), 1131–1145 (2009).


  21. S Pfletschinger, F Sanzi, Error floor removal for bit-interleaved coded modulation with iterative detection. IEEE Trans. Wireless Commun. 5(11), 3174–3181 (2006).


  22. S Lloyd, Least squares quantization in PCM. IEEE Trans. Info. Theory. 28(2), 129–137 (1982).


  23. VI Levenshtein, Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady. 10, 707 (1966).


  24. J Tan, G Stüber, Analysis and design of symbol mappers for iteratively decoded BICM. IEEE Trans. Wireless Commun. 4(2), 662–672 (2005).



Acknowledgements

The authors would like to thank the anonymous reviewers and the editor for their valuable comments. This research work was supported by the State Key Program of National Natural Science Foundation of China (Grant No. 61032006) and the National Science Foundation of China (Grant No. 61271282 and 60172045).

Author information


Corresponding author

Correspondence to Jing Dai.


Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Dai, J., Zhang, C., Gao, S. et al. Irregular 8PSK mapping for variable length codes with small blocks in a BICM-ID system. J Wireless Com Network 2015, 123 (2015). https://doi.org/10.1186/s13638-015-0351-0
