Continuous phase modulation with 1-bit quantization and oversampling using iterative detection and decoding
EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 237 (2020)
Abstract
A channel with continuous phase modulation and a 1-bit ADC with oversampling is considered. Due to oversampling, higher-order modulations yield a higher achievable rate, and this study presents methods to approach it with sophisticated channel coding. Iterative detection and decoding is considered, which exploits the soft information extracted from oversampled CPM sequences. Besides the strategy based on conventional channel coding techniques, a tailored bit mapping is proposed in which two bits can be detected reliably at high SNR and only the third bit is affected by the uncertainty from the coarse quantization. With this, a convolutional code is applied only to the third bit. Our experiments with a turbo receiver show that iterative detection and decoding is a promising approach for exploiting the additional information brought by oversampling. Moreover, it is shown that the proposed method based on the tailored bit mapping yields a lower bit error rate than conventional channel coding.
Introduction
The energy consumption of an analog-to-digital converter (ADC) scales exponentially with its resolution in amplitude [1]. One promising approach to overcome the limitation with respect to the ADC power consumption is 1-bit quantization, i.e., only the sign of the signal is known to the receiver. This coarse quantization is combined with oversampling at the receiver in order to compensate for the losses in terms of achievable rate, even for a noisy scenario [2], an argument that is in accordance with [3], where oversampling is used to achieve extra bits of resolution.
In [4], where a bandlimited channel is considered, only a marginal benefit of 1-bit quantization and oversampling at the receiver in terms of achievable rate has been reported. In [5], a significant gain in achievable rate due to oversampling has been reported, obtained by using a Zakai bandlimited process [6]. Both studies [4, 5] considered a noiseless channel. However, by considering the capacity per unit cost, it has been shown in [2] that 1-bit quantization and oversampling at the receiver can also be beneficial in a noisy scenario, as was shown later in [7] for bandlimited channels. The high signal-to-noise ratio (\({\mathrm {SNR}}\)) regime has been considered based on the concept of the generalized mutual information [8], which results in a minor benefit in terms of achievable rate. For a mid-to-high \({\mathrm {SNR}}\) scenario, an analytical lower bound on the mutual information rate is derived in [9] for a 1-bit quantized continuous-time bandlimited additive white Gaussian noise (AWGN) channel. Moreover, a practical approach for a 16-QAM system is presented in [10], where filter coefficients are optimized by maximizing the minimum distance to the decision thresholds, a concept that is extended to multiple-input single-output channels in [11]. With respect to multiple-input multiple-output (MIMO) systems, 1-bit ADCs are attractive due to low-cost hardware and low power consumption per antenna, especially in massive MIMO systems [12], which can also benefit from oversampling [13, 14]. In this context, channel estimation techniques with oversampling have been studied in [15]. Moreover, iterative detection and decoding for MIMO systems with 1-bit quantization has been studied in [16] to improve the overall bit error rate performance.
Oversampling and 1-bit quantization have been studied before in combination with continuous phase modulation (CPM) in [17, 18]. CPM signals are spectrally efficient, having smooth phase transitions and constant envelope [19, 20], which allows for energy-efficient power amplifiers. The information is implicitly conveyed in phase transitions, which makes the use of oversampling promising in the presence of coarse quantization at the receiver.
The achievable rate for channels with 1-bit quantization and oversampling at the receiver has been studied in [7, 21], where bandlimited and not strictly bandlimited signals are considered, respectively. In both studies, it has been shown that the achievable rate can be lower-bounded by a truncation-based auxiliary channel law. The resulting channel has a finite state memory, for which a sequence design is beneficial in terms of achievable rate. Similar ideas are considered for CPM with 1-bit quantization and oversampling at the receiver in [17], where it is shown how oversampling increases the achievable rate. Later, more practical approaches were proposed in [18], where the intermediate frequency and the waveform are considered in a geometrical analysis of the phase transitions.
In this study, we consider the design and analysis of CPM schemes with 1-bit quantization and oversampling at the receiver, employing channel coding in scenarios with higher modulation order. The present study extends our prior work on this subject presented in [22] by a detailed description of the iterative receive processing, an EXIT chart analysis, and an extensive set of numerical experiments.
For such cases, based on the achievable rates computed in [17], channel coding is essential for establishing reliable communications. In this context, this work extends the discrete system model for CPM signals received with 1-bit quantization and oversampling, presented in [17], with a sophisticated coding and decoding scheme. The proposed coded transmission system involves an iterative detection and decoding strategy at the receiver, which consists of a BCJR algorithm for the computation of soft information and a second BCJR algorithm for the decoding process. Additionally, a sophisticated channel coding scheme, related to hierarchical modulation [23], is proposed, which exploits the special properties of the channel with 1-bit quantization. In summary, the main contributions of this paper are the following:

Iterative detection and decoding scheme for CPM systems with 1-bit quantization and oversampling at the receiver;

Novel phase-state-dependent bit mapping, designed for the 1-bit ADC problem;

Channel coding scheme suited for the bit stream separation into subchannels.
The rest of the paper is organized as follows. In Sect. 2, the discrete system model is presented, taking into account the ADC and channel coding with iterative detection and decoding. In Sect. 3, the discrete-time description of the quantized CPM signal is presented. Section 4 discusses the soft detection process and the iterative decoding strategy. The proposed methods to enhance the performance in terms of BER are presented in Sect. 5. The numerical results are shown in Sect. 6, and Sect. 7 gives the conclusions.
Notation Bold symbols denote vectors, namely oversampling vectors, e.g., \({\varvec{y}}_k\) is a column vector with M entries, where k indicates the kth symbol in time or rather its corresponding time interval. Bold capital symbols denote matrices. Sequences are denoted with \(x^n= [x_1,\ldots ,x_n]^{{\mathrm{T}}}\). Likewise, sequences of vectors are written as \({\varvec{y}}^n= [{\varvec{y}}_1^{{\mathrm{T}}},\ldots ,{\varvec{y}}_n^{{\mathrm{T}}}]^{{\mathrm{T}}}\). A segment of a sequence is denoted as \(x^{k}_{k-L}=[ x_{k-L}, \ldots , x_k ]^{{\mathrm{T}}}\).
System model overview
The system model considered in this study is an extension of the discrete-time system model proposed in [17, 24]. In this context of CPM with 1-bit quantization and oversampling at the receiver, Fig. 1 illustrates the extension in terms of the additional coding blocks. The purpose of this extension is to design a system for reliable communication by considering sophisticated forward error correction.
For the proposed CPM system, the lower bound on the achievable rate presented in “Appendix 1” serves as the basis for choosing the code rate of the channel code. The achievable rate computation in [17] shows that scenarios with higher modulation order, e.g., \(M_{{\mathrm{cpm}}}=8\), may require channel coding in order to provide communication with a low probability of error, because the achievable rate at high SNR can be significantly lower than the input entropy. With this, a coding scheme must satisfy the following inequality
\(R \, \log _{2}(M_{{\mathrm{cpm}}}) \le I_{M_{{\mathrm{cpm}}}},\)
where R denotes the code rate of the channel code and \(I_{M_{{\mathrm{cpm}}}}\) denotes the achievable rate conditioned on the corresponding CPM modulation scheme. Note that \(\log _{2}(M_{{\mathrm{cpm}}})\) is the maximum entropy rate of the input, which is an upper bound of the achievable rate.
On the transmit path, the channel encoder receives information bits and generates an encoded message by adding redundant information. The encoded message is interleaved to protect the coded information against burst errors. Then, the interleaved bits are grouped according to the modulation order and mapped to CPM symbols. After that, the signal is generated by a CPM modulator, and noise is added at the receiver.
On the receive path, the signal is filtered and quantized by a 1-bit ADC. The quantized data are then processed by an iterative detection and decoding scheme. First, the binary samples are processed by a soft detection algorithm. Then, the soft information is converted to bit-oriented log-likelihood ratios, which are subsequently deinterleaved. Finally, the soft information is given to the channel decoder, which returns extrinsic soft information to the detection algorithm via an interleaver and a soft mapper. In the next sections, the individual blocks are described in detail.
CPM with 1-bit quantization
The CPM signal in the passband with carrier frequency \(f_0\) [19] is described by
\(s(t)= {\mathrm {Re}}\left\{ \sqrt{\tfrac{2 E_{{\mathrm{s}}}}{T_{{\mathrm{s}}}}} \, {\hbox {e}}^{\,j \left( 2 \pi f_0 t + \phi (t) \right) } \right\} ,\)
where \({\mathrm {Re}}\left\{ \cdot \right\} \) denotes the real part. The phase term is given by
\(\phi (t) = 2 \pi h \sum _{k} \alpha _k \, f\left( t - k T_{{\mathrm{s}}}\right) + \varphi _0 ,\)
where \(T_{{\mathrm{s}}}\) denotes the symbol duration, \(h=\frac{K_{{\mathrm{cpm}}}}{P_{{\mathrm{cpm}}}}\) is the modulation index, \(f\left( \cdot \right) \) is the phase response, \(\varphi _0\) is a phase offset, and \(\alpha _k\) represents the transmit symbols with symbol energy \(E_{{\mathrm{s}}}\). \(K_{{\mathrm{cpm}}}\) and \(P_{{\mathrm{cpm}}}\) must be relatively prime positive integers in order to obtain a finite number of phase states.
The phase response function shapes the phase transition between the phase states. It fulfills the following condition:
\(f(t) = 0\) for \(t \le 0\) and \(f(t) = \tfrac{1}{2}\) for \(t \ge L_{{\mathrm{cpm}}} T_{{\mathrm{s}}},\)
where \(L_{{\mathrm{cpm}}}\) is the depth of the memory in terms of transmit symbols. The phase response corresponds to the integration over the frequency pulse \(g_{f}\left( \cdot \right) \), which is conventionally a rectangular pulse, a truncated and raised cosine pulse, or a Gaussian pulse. For an even modulation order \(M_{{\mathrm{cpm}}}\), the symbol alphabet can be described by \(\alpha _k \in \left\{ \pm 1, \pm 3,\ldots ,\pm (M_{{\mathrm{cpm}}}-1) \right\} \).
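As an illustration of these conditions, the phase response of an \(L_{{\mathrm{cpm}}}\)-REC scheme (rectangular frequency pulse) can be sketched as follows; this snippet is illustrative only, and the function name is ours:

```python
import numpy as np

def phase_response_lrec(t, L_cpm, T_s):
    """L-REC phase response: integral of a rectangular frequency pulse.

    Satisfies f(t) = 0 for t <= 0 and f(t) = 1/2 for t >= L_cpm * T_s.
    """
    t = np.asarray(t, dtype=float)
    return np.clip(t / (2.0 * L_cpm * T_s), 0.0, 0.5)

# Boundary checks for a full-response pulse (L_cpm = 1):
f = phase_response_lrec([-1.0, 0.0, 0.5, 1.0, 2.0], L_cpm=1, T_s=1.0)
# f == [0.0, 0.0, 0.25, 0.5, 0.5]
```

The clipping directly enforces the two boundary conditions above, while the linear ramp in between corresponds to integrating a constant frequency pulse.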
Generally, the phase trellis corresponding to (3) is time-variant, which means that the possible phase states are time-dependent. Because of that, the number of wrapped absolute phase states can be larger than \(M_{{\mathrm{cpm}}}\); e.g., when \(M_{{\mathrm{cpm}}}=2\) and \(h=\frac{1}{2}\), there are at least four trellis states in total, and even more depending on the memory of the channel. In order to reduce the complexity at the receiver, a time-invariant trellis is constructed by tilting the trellis according to the decomposition approach in [25]. This is illustrated in Fig. 2. The tilt corresponds to an extension of the phase term given by (3) as
\(\psi (t) = \phi (t) + \pi h \left( M_{{\mathrm{cpm}}}-1\right) \frac{t}{T_{{\mathrm{s}}}} ,\)
where the second term on the right-hand side (RHS) corresponds to the tilt of the trellis. Taking the derivative of this tilt with respect to time and dividing by \(2 \pi \) results in a frequency offset given by \(\Delta f = h (M_{{\mathrm{cpm}}} - 1)/ (2 T_{{\mathrm{s}}})\).
A modified data sequence is obtained by replacing the symbol notation with the change of variable \(x_k= (\alpha _k + M_{{\mathrm{cpm}}} - 1) / 2 \), where the corresponding symbol alphabet can be described with \(x_k \in {\mathbb {X}} = \left\{ 0, 1,\ldots , M_{{\mathrm{cpm}}}-1 \right\} \). According to [25], substituting \(t = \tau + k T_{{\mathrm{s}}}\) and \(\alpha _k = 2x_k - M_{{\mathrm{cpm}}} + 1\) in Eq. (3) leads to the tilted phase expression within one symbol duration
where the time-dependent terms on the RHS depend only on the variable \(\tau \), which is well-defined along one symbol duration. Applying the \({\text {mod}}\, 2 \pi \) operator to the first term on the right-hand side of (5) yields
\(\beta _{k-L_{{\mathrm{cpm}}}} = \left[ 2 \pi h \sum _{i=0}^{k-L_{{\mathrm{cpm}}}} x_{i} \right] {\text { mod }} 2 \pi ,\)
which introduces the absolute phase state \(\beta _k\), i.e., it is related to the \(2\pi \)-wrapped accumulated phase contributions of the input symbols that occur prior to the CPM memory. With this, the phase expression for one symbol duration can be fully described by the absolute phase state \(\beta _{k-L_{{\mathrm{cpm}}}}\) and the previous and the current transmit symbols \(x^{k}_{k-L_{{\mathrm{cpm}}}+1}\), given by \(\tilde{s}_{k} = \left[ \beta _{k-L_{{\mathrm{cpm}}}}, x^{k}_{k-L_{{\mathrm{cpm}}}+1} \right] \). It is important to highlight that \(\tilde{s}_{k}\) is the appropriate state description for the modeling of the signal at the intermediate frequency. To model the signal which has passed the bandpass filter at the intermediate frequency (IF), another state description, namely \(s_k\), will be introduced later. For better integration with 1-bit quantization, \(\varphi _0= \pi h \) is used instead of \(\varphi _0= 0 \) whenever it avoids phase states placed on the axes of the constellation diagram, which is the case in all the considered examples.
In the following, a discrete-time system model description is considered, which implies that the CPM phase is represented in vector notation. The corresponding tilted CPM phase \(\psi (\tau + k T_{{\mathrm{s}}} )\) for one symbol interval, i.e., \(0 \le \tau < T_{{\mathrm{s}}}\), is then discretized into MD samples, which compose the vector denoted by
where M is the oversampling factor and D is a higher resolution multiplier.
The receive filter g(t) has an impulse response of length \(T_g\). In the discrete model, for expressing a subsequence of \((N +1)\) oversampled output symbols, it is represented in matrix form by \({\varvec{G}}\), an \(MD(N+1) \times MD(L_g+N+1)\) Toeplitz matrix, described as follows
where
and unit energy normalization is considered with \(\int _{-\infty }^{\infty } \vert g(t)\vert ^2 \, dt = 1\) and \(\Vert {\varvec{g}} \Vert _2^2 = 1\). A higher-resolution sampling grid is adopted for the waveform signal, the noise generation, and the filtering in order to adequately model the aliasing effect. The receive filtering increases the memory of the system by \(L_g\) symbols, where \((L_{g}-1) T_{{\mathrm{s}}} < T_g \le L_{g} T_{{\mathrm{s}}}\).
The filtered samples are decimated to the vector \({\varvec{z}}^{k}_{k-N}\) according to the oversampling factor M, by multiplication with the D-fold decimation matrix \({\varvec{D}}\) with dimensions \(M(N+1) \times MD(N+1)\), described by
Then, the result \({\varvec{z}}^{k}_{k-N}\) is 1-bit quantized to the vector \({\varvec{y}}^{k}_{k-N}\). These operations can be represented by the following equations:
where \(Q(\cdot )\) denotes the quantization operator. The quantization of \({\varvec{z}}_k\) is described by \({y}_{k,m} ={\mathrm {sgn}}({\mathrm {Re}}\left\{ {z}_{k,m}\right\} )+j \, {\mathrm {sgn}}({\mathrm {Im}}\left\{ {z}_{k,m}\right\} )\), where m denotes the oversampling index which runs from 1 to M and \({y}_{k,m} \in \left\{ 1+j,\, 1-j,\, -1+j,\, -1-j \right\} \). The vector \({\varvec{n}}^{k}_{k-N-L_g}\) contains complex zero-mean white Gaussian noise samples with variance \(\sigma _n^2=N_0\).
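The 1-bit I/Q quantization \(Q(\cdot )\) described above can be sketched in a few lines; this is an illustrative snippet (we assume, as an edge-case convention, that exact zeros do not occur at the quantizer input):

```python
import numpy as np

def quantize_1bit(z):
    """1-bit I/Q quantization: keep only the signs of the real and
    imaginary parts, i.e. y = sgn(Re{z}) + j*sgn(Im{z}).

    Assumption: no component of z is exactly zero (np.sign(0) == 0
    would fall outside the four-point output alphabet).
    """
    z = np.asarray(z)
    return np.sign(np.real(z)) + 1j * np.sign(np.imag(z))

z = np.array([0.3 - 0.7j, -1.2 + 0.1j])   # illustrative filtered samples
y = quantize_1bit(z)
# y == [1-1j, -1+1j]
```

Each output sample lies in the alphabet \(\{1+j,\, 1-j,\, -1+j,\, -1-j\}\), i.e., only the orthant of the input is retained.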
Soft detection and decoding
The soft detection relies on the maximum a posteriori (MAP) decision metric for each bit, which corresponds to the a posteriori probability (APP) given the received sequence \({\varvec{y}}^n\). For the considered system, the bit APPs can be approximately computed via a Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm [26] based on an auxiliary channel law. Prior to computing those probabilities, the channel model for the system is required, i.e., the output of soft information by the detection algorithm depends on the set of channel output probabilities. This section covers the mentioned topics and describes the iterative decoding subsequently.
Auxiliary channel law and channel output probability
Depending on the receive filter, the noise samples are correlated, which implies a dependency on previous channel outputs, such that the channel law has the form \(P({\varvec{y}}_k \vert {\varvec{y}}^{k-1}, x^n )\). In this case, the consideration of an auxiliary channel law \(W(\cdot )\) is required, which reads as
\(W({\varvec{y}}_k \vert {\varvec{y}}^{k-1}, x^n ) = P({\varvec{y}}_k \vert {\varvec{y}}^{k-1}_{k-N}, x^{k}_{k-L} ),\)
where the dependency on N previous channel outputs is taken into account, with \(L=L_{{\mathrm{cpm}}}+L_g+N\) being the total memory of the system. With this, the BCJR algorithm relies on an extended state representation denoted by
\(s_k = \left[ \beta _{k-L}, x^{k}_{k-L+1} \right] .\)
Consequently, the notation of the auxiliary channel law can be written in terms of the state notation \(s_k\)
These probabilities involve a multivariate Gaussian integration in terms of
\(P\left( {\varvec{y}}^{k}_{k-N} \vert s_k, s_{k-1}\right) = \int _{{\mathbb {Y}}_{k-N}^{k}} p\left( {\varvec{z}}^{k}_{k-N} \vert s_k, s_{k-1}\right) \, d{\varvec{z}}^{k}_{k-N} ,\)
where \({\mathbb {Y}}_{k-N}^{k}\) represents the quantization interval which belongs to the channel output symbol \({\varvec{y}}^{k}_{k-N}\), described in (9). The vector \({\varvec{z}}_{k-N}^{k}\) is a complex Gaussian random vector that describes the input of the ADC, with mean vector \({\varvec{m}}_z = {\varvec{D}} {\varvec{G}} \left[ \sqrt{ E_{{\mathrm{s}}}/T_{{\mathrm{s}}}} \, {\hbox {e}}^{j {\varvec{\psi }}^{k}_{k-N-L_g}} \right] \) and covariance matrix \({\varvec{K}}_z = \sigma _n^2 {\varvec{D}} {\varvec{G}} {\varvec{G}}^H {\varvec{D}}^{{{\mathrm{T}}}}\), with \({\varvec{D}}\) and \({\varvec{G}}\) as introduced before. In order to numerically evaluate (13) using an existing quasi-Monte Carlo integration algorithm, based on methods developed in [27] and implemented in the MATLAB function \({\text {mvncdf}}(\cdot )\), a real-valued formulation is required. To achieve this, the conditional probability density function \(p( {\varvec{z}}^{k}_{k-N} \vert s_k, s_{k-1})\) is written as follows
where \(\left| \cdot \right| \) denotes the determinant, \({{\varvec{z}}^{k}_{k-N}}^{\prime} = \left[ {\mathrm {Re}}\left\{ {\varvec{z}}_{k-N}^{k} \right\} ^{{{\mathrm{T}}}}, {\mathrm {Im}}\left\{ {\varvec{z}}_{k-N}^{k} \right\} ^{{{\mathrm{T}}}}\right] ^{{{\mathrm{T}}}} \) and the mean vector \({\varvec{m}}_{z}^{\prime} \) contains the real and imaginary components in a stacked fashion as given by
Accordingly, the covariance matrix is denoted as
As detailed in [18], the number of evaluations \(n_{ev}\) of the multivariate integral in (13) required for the model respects the proportionality
\(n_{ev} \propto 4^{M} \cdot M_{{\mathrm{cpm}}}^{L} ,\)
where \(4^M\) is the number of all possible observed complex vectors \({\varvec{y}}_k\) and \(M_{{\mathrm{cpm}}}^{L}\) is proportional to the number of all possible state transitions. With this, the evaluation of all channel output probabilities becomes computationally expensive when the oversampling factor M, the modulation order \(M_{{\mathrm{cpm}}}\), and the overall channel memory L are large.
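To build intuition for the orthant integral in (13), a plain Monte Carlo estimate can serve as a simple stand-in for the quasi-Monte Carlo integration via \({\text {mvncdf}}(\cdot )\); this is only a low-dimensional sketch with illustrative names, not the integration routine used in the paper:

```python
import numpy as np

def orthant_probability(y, mean, cov, n_samples=200_000, seed=0):
    """Estimate P(sign pattern of z equals sign pattern of y) for a real
    Gaussian vector z ~ N(mean, cov) by plain Monte Carlo sampling.

    A crude stand-in for quasi-MC integration of the orthant integral;
    accuracy improves as 1/sqrt(n_samples).
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(mean, cov, size=n_samples)
    hits = np.all(np.sign(z) == np.sign(y), axis=1)
    return hits.mean()

# Sanity check: zero mean, identity covariance in 2-D -> every orthant
# has probability 1/4.
p = orthant_probability(y=np.array([1.0, -1.0]),
                        mean=np.zeros(2), cov=np.eye(2))
# p is close to 0.25
```

In the actual model, the mean and covariance are \({\varvec{m}}_{z}^{\prime}\) and the stacked real-valued covariance matrix, and the dimension grows with M and N, which is exactly why the evaluation count in the proportionality above becomes the bottleneck.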
BCJR algorithm based on the auxiliary channel law
In order to evaluate the APPs for the bit sequence, the value \(P(s_k,s_{k-1}\vert {\varvec{y}}^n)\) must be determined. This can be achieved by normalizing the joint probability \(P(s_k,s_{k-1}, {\varvec{y}}^n)\), which can be decomposed into
\(P(s_k,s_{k-1}, {\varvec{y}}^n) = {\text {f}}_{k-1}(s_{k-1}) \, \gamma _k(s_{k-1},s_k) \, {\text {b}}_{k}(s_{k}),\)
where the considered auxiliary channel law \(W(\cdot )\) as introduced in (10) is applied. The factor \(\gamma _k(s_{k-1},s_k)\) can be rewritten as
\(\gamma _k(s_{k-1},s_k) = W({\varvec{y}}_k \vert s_k, s_{k-1}) \, P(s_k \vert s_{k-1}),\)
which relates to the channel output probability and to the state transition probabilities. The factor \({\text {f}}_{k-1}(s_{k-1})\) is the forward probability, which accounts for all the paths in the trellis that lead to the state \(s_{k-1}\). The forward recursion used to compute \({\text {f}}_{k-1}(s_{k-1})\) is described by
\({\text {f}}_{k}(s_{k}) = \sum _{s_{k-1}} \gamma _k(s_{k-1},s_k) \, {\text {f}}_{k-1}(s_{k-1}).\)
Finally, \({\text {b}}_{k}(s_{k})\) is the backward probability, which accounts for all possible paths from state \(s_k\) to \(s_n\). Similarly, \({\text {b}}_{k}(s_{k})\) is computed using the backward recursion
\({\text {b}}_{k-1}(s_{k-1}) = \sum _{s_{k}} \gamma _k(s_{k-1},s_k) \, {\text {b}}_{k}(s_{k}).\)
With this, an approximate value \(P_{{\mathrm{aux}}}(s_k,s_{k-1}\vert {\varvec{y}}^n)\) of the probability \(P(s_k,s_{k-1}\vert {\varvec{y}}^n)\) can be computed by normalizing with \(P({\varvec{y}}^n)\), i.e.,
\(P_{{\mathrm{aux}}}(s_k,s_{k-1}\vert {\varvec{y}}^n) = \frac{{\text {f}}_{k-1}(s_{k-1}) \, \gamma _k(s_{k-1},s_k) \, {\text {b}}_{k}(s_{k})}{P({\varvec{y}}^n)}.\)
Note that for the recursions described in (20) and (21), initial values for \({\text {f}}_{k=0}(s_k)\) and \({\text {b}}_{k=n+1}(s_k)\) are required. While the BCJR algorithm can be carried out using the steps above, it is practical to employ a matrix form of the algorithm described in [28].
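The forward-backward pass in (20) and (21) can be sketched in matrix form as follows; this is a minimal illustration with per-step normalization (which replaces the explicit division by \(P({\varvec{y}}^n)\) and avoids numerical underflow), and all names are ours:

```python
import numpy as np

def bcjr_app(gamma, f0, b_end):
    """Minimal forward-backward (BCJR) pass in matrix form.

    gamma : (n, S, S) array, gamma[k, s_prev, s_next] as in (19)
    f0    : (S,) initial forward probabilities f_0
    b_end : (S,) terminal backward probabilities
    Returns the (n, S, S) array of P_aux(s_{k-1}, s_k | y^n),
    normalized per time step.
    """
    n, S, _ = gamma.shape
    f = np.zeros((n + 1, S)); f[0] = f0
    b = np.zeros((n + 1, S)); b[n] = b_end
    for k in range(n):                      # forward recursion (20)
        f[k + 1] = f[k] @ gamma[k]
        f[k + 1] /= f[k + 1].sum()          # normalize against underflow
    for k in range(n - 1, -1, -1):          # backward recursion (21)
        b[k] = gamma[k] @ b[k + 1]
        b[k] /= b[k].sum()
    app = np.empty_like(gamma)
    for k in range(n):                      # joint APPs, normalized per k
        joint = f[k][:, None] * gamma[k] * b[k + 1][None, :]
        app[k] = joint / joint.sum()
    return app

# Toy check: 2 states, 2 steps, uniform gammas -> uniform joint APPs.
gamma = np.full((2, 2, 2), 0.25)
app = bcjr_app(gamma, f0=np.array([0.5, 0.5]), b_end=np.array([1.0, 1.0]))
```

The per-step normalization is the standard practical substitute for carrying \(P({\varvec{y}}^n)\) explicitly; the resulting ratios \(P_{{\mathrm{aux}}}(s_{k-1},s_k \vert {\varvec{y}}^n)\) are unchanged.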
A posteriori probabilities and soft information
As illustrated in the previous section, with the channel output probability as input, the BCJR algorithm provides the probabilities \(P_{{\mathrm{aux}}} \left( s_{k-1},s_{k} \vert {\varvec{y}}^n \right) \) used for computing the bit APPs. Letting \(d^m = [d_1,\ldots ,d_m]^{{{\mathrm{T}}}}\), where \(m = n \cdot \log _2(M_{{\mathrm{cpm}}})\), be the interleaved bit sequence that is mapped into CPM symbols, its bit APPs are described by the summation
\(P(d_q \vert {\varvec{y}}^n) = \sum _{(s_{k-1},s_{k}) \, : \, {\mathrm {map}}^{-1}_k(x_k,i) = d_q} P_{{\mathrm{aux}}} \left( s_{k-1},s_{k} \vert {\varvec{y}}^n \right) ,\)
where \(d_q = {\mathrm {map}}^{-1}_k(x_k,i)\) with bit index \(q=(k-1)\cdot \log _2(M_{{\mathrm{cpm}}}) + i\) denotes extraction of the bit \(d_q\), which corresponds to the \(i^{{\text {th}}}\) bit (most significant bit first) of the \(k^{{\text {th}}}\) demapped symbol with \(i \in \left\{ 1, 2, \ldots , \log _2(M_{{\mathrm{cpm}}})\right\} \). The posterior probabilities \(P(d_q \vert {\varvec{y}}^n)\) are the natural choice for the soft information \({\mathfrak {s}}(d_q)\) about the demapped bits. With binary random variables described by the probabilities \({\mathfrak {s}}(d_q=0) = P(d_{q}=0\vert {\varvec{y}}^n)\) and \({\mathfrak {s}}(d_q=1) = P(d_{q}=1\vert {\varvec{y}}^n)\), the use of log-likelihood ratios (LLR) given the received sequence \({\varvec{y}}^n\) is appropriate [29] and obtained with
\(L(d_{q} \vert {\varvec{y}}^n) = \ln \frac{P(d_{q}=0 \vert {\varvec{y}}^n)}{P(d_{q}=1 \vert {\varvec{y}}^n)} ,\)
which can be decomposed, according to [29, 30], into an extrinsic and an a priori LLR described by
\(L(d_{q} \vert {\varvec{y}}^n) = L_{{\mathrm{ext}}}(d_{q} \vert {\varvec{y}}^n) + L_a(d_{q}),\)
The extrinsic LLR \(L_{{\mathrm{ext}}}(d_{q} \vert {\varvec{y}}^n)\) represents the information about \(d_q\) contained in \({\varvec{y}}^n\) and \(P(d_j)\) for all \(j \ne q\). The a priori LLR \(L_a(d_{q})\) describes the available a priori information about \(d_q\). The extrinsic LLR sequence is deinterleaved, according to the permutation adopted. The resultant sequence represents the detected soft information used as input for the channel decoder as represented in Fig. 1.
Iterative detection and decoding
This work uses an iterative detection and decoding procedure that consists of the exchange of soft information between the detector and the channel decoder. Initially, the state transition probabilities are considered to be uniformly distributed, i.e., \(P(s_k\vert s_{k-1})=1/M_{{\mathrm{cpm}}}\). Such an assumption is suboptimal, but state transitions with different probabilities can be taken into account by feeding back updated extrinsic soft information about the code bits.
The iterative detection and decoding process relies on the feedback of soft information from the decoder to adjust the transition probabilities of the soft detector, which thereby becomes aware of the underlying code. Let \(c^m = [c_1,\ldots ,c_m]^{{{\mathrm{T}}}}\) be the code bit sequence that represents the encoded message. The soft information \({\mathfrak {s}}(c_q)\), with \(q \in \{1,\ldots ,m\}\), is the input of the channel decoder, which is able to compute an updated version \({\mathfrak {s}}^{\prime }(c_q)\) of this soft information
The redundancy introduced during the encoding process ensures that the reliability of \({\mathfrak {s}}^{\prime }(c_q)\) is usually improved by the channel decoder when compared to \({\mathfrak {s}}(c_q)\). This soft information can be interleaved into the sequence corresponding to \({\mathfrak {s}}^{\prime }(d_q)\), which is incorporated into the soft detector, which uses this knowledge acquired from the channel decoder to recompute \({\mathfrak {s}}(d_q)\). This message-passing algorithm is performed iteratively, with soft information exchanged between the detection and decoding steps. With soft information exchange, the BER performance can be improved, but to approach optimal results, the transfer of the extrinsic information
from one iteration to the next is necessary, which is considered crucial [29]. This extrinsic soft information characterizes the information about \(c_q\) contained in \({\mathfrak {s}}(c_1),\ldots , {\mathfrak {s}}(c_{m})\) except \({\mathfrak {s}}(c_q)\). The idea of passing extrinsic soft information between the receive algorithms was first proposed in [30] for decoding turbo codes and has been applied to coded data transmission over channels with intersymbol interference (ISI) [28, 31], where it is called turbo equalization. In practical terms, it is often more convenient to replace the two probabilities \({\mathfrak {s}}(d_q = 0)\) and \({\mathfrak {s}}(d_q = 1)\) by the LLRs in (24) and (25) [29]
where \(\lambda _{{\mathrm{ext}}}(d_q)\) and \(\lambda _{a}(d_q)\) are the extrinsic and the a priori LLRs, respectively. Accordingly, instead of the extrinsic soft information in the feedback path \({\mathfrak {s}}_{{\mathrm{ext}}}(c_q)\), the extrinsic LLR is considered
A decomposition similar to (25) can be applied to \(\lambda (c_q)\)
\(\lambda (c_q) = \lambda _{{\mathrm{ext}}}(c_q) + \lambda _{a}(c_q).\)
This extrinsic information can be computed during the channel decoding process by employing a second BCJR algorithm [28]. It can then be interleaved into new values for the a priori soft information \(\lambda _{a}(d_q)\) and soft-mapped to state transition probabilities \(P\left( s_k\vert s_{k-1}\right) \). This soft mapping is achieved by bringing the log-likelihood representation back to a probability description [28], which is given by
where \(x_k\) is the input symbol that produces the state transition from state \(s_{k-1}\) to \(s_k\), \([b^{\prime }_{1},\ldots , b^{\prime }_{\log _2(M_{{\mathrm{cpm}}})}]\) is the bit sequence to which this symbol is mapped, and \(\lambda _{a}( d_{q})\) represents the interleaved extrinsic soft information about the bit \(d_q\) fed back by the channel decoder. The probabilities \(P\left( s_k\vert s_{k-1}\right) \) can be used to perform the soft detection step again by updating the transition probabilities in the BCJR algorithm detailed in Sect. 4.2.
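The soft mapping from a priori LLRs back to a transition prior can be sketched as follows; a minimal illustration with our own names, using the LLR convention \(\lambda = \ln \big (P(d=0)/P(d=1)\big )\) from (24):

```python
import numpy as np

def transition_prior(bits, llrs):
    """Map the a priori LLRs of the bits labeling a state transition to
    its prior probability P(s_k | s_{k-1}) = prod_i P(d_i = b'_i).

    bits : bit label [b'_1, ..., b'_log2(Mcpm)] of the input symbol x_k
    llrs : a priori LLRs, lambda = ln(P(d = 0) / P(d = 1))
    """
    p = 1.0
    for b, llr in zip(bits, llrs):
        p_zero = 1.0 / (1.0 + np.exp(-llr))   # P(d = 0) from the LLR
        p *= p_zero if b == 0 else (1.0 - p_zero)
    return p

# With all-zero LLRs (no prior knowledge) every 3-bit symbol is equally
# likely, recovering the uniform initialization 1/M_cpm = 1/8.
p = transition_prior(bits=[1, 0, 1], llrs=[0.0, 0.0, 0.0])
# p == 0.125
```

Strongly positive or negative LLRs from the decoder push individual transition priors toward 0 or 1, which is precisely how the detector becomes aware of the code in the next iteration.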
In the final step of the channel decoder, i.e., after the execution of the required number of iterations, the soft information on the information bits is computed
where R is the used code rate. Finally, the estimates of the original information bits are obtained based on the sign of the corresponding soft information \(\lambda (b_r)\)
The iterative decoding steps are illustrated in Fig. 3 and described in Table 1, where the soft information exchange between detector and decoder is represented with the considered LLRs. The soft decoding step performed by the channel decoder is dependent on the used code. For completeness and illustration purposes, “Appendix 2” describes the BCJR-based decoding algorithm for a simple convolutional code.
Modified subchannel coding
The extra information provided by oversampling is often not sufficient for the system to provide reliable communication, which motivated the proposed CPM system in Fig. 1 with its additional coding blocks. With the purpose of further improving the performance of the proposed system, this section covers a different coding strategy and an alternative bit mapping scheme, illustrated for the example of \(M_{{\mathrm{cpm}}}=8\).
Bit mapping
For the considered CPM waveform with \(L_{{\mathrm{cpm}}}=1\), the 1-bit quantization of the in-phase and quadrature components leads to a four-level phase decision, which yields two bits of information when the sample at time \(\tau = T_{{\mathrm{s}}}\) is observed in a noise-free scenario. This is the reason why, when noise is considered, the computed achievable rate for \(M_{{\mathrm{cpm}}}=4\) reaches 2 bpcu in the high \({\mathrm {SNR}}\) regime. For the case with \(M_{{\mathrm{cpm}}}=8\), more than two bits per channel use can be achieved with oversampling, which allows for the extraction of more information along the phase transition between the phase states. This idea is key to understanding the motivation to study bit mapping alternatives and how the available information at the receiver is distributed among the bits. In summary, this subsection presents two mapping strategies and how they deal with bit allocation.
Figure 4 shows, on the CPM tilted phase constellation, how bit sets are associated with the phase transitions when using Gray mapping, given an “even” or an “odd” initial state, whose parity is defined by the parity of the absolute phase state described in (6). The symbols are distinguished by how much the phase increases for each possible input.
The established Gray mapping scheme implies well-known benefits for conventional communication systems. However, in the following, a modification of the Gray coding scheme is proposed in order to exploit the properties of the CPM system with 1-bit quantization and oversampling at the receiver. Similarly to the Gray mapping, Fig. 5 illustrates the CPM tilted phase transitions with the corresponding bits for the novel mapping scheme, termed advanced mapping. The proposed modification allows the information conveyed in the orthant of the received signal to be readily extracted. The additional information brought by the temporal oversampling is then conveyed in the third bit of the CPM symbol in the case of \(M_{{\mathrm{cpm}}}=8\). For the interpretation of the novel approach, one can consider that each bit of the CPM symbol corresponds to a separate binary subchannel. Accordingly, the illustrated case with \(M_{{\text {cpm}}}=8\) can be understood as a system that consists of three different binary subchannels. Considering the advanced mapping strategy, two subchannels can each yield up to 1 bit per channel use, and the third subchannel yields a lower achievable rate, which depends on the oversampling factor.
The Gray mapping spreads the uncertainty brought by the coarse quantization over every bit subchannel, whereas the advanced mapping aims to concentrate the uncertainty in the third bit by applying a circular shift to the Gray mapping whenever the phase transition starts from a phase state of “odd” order. Figure 6b illustrates this by showing an improved BER on the first two bits of the mapping in comparison with Fig. 6a.
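The state-dependent shift can be sketched as follows. Note that this is a hypothetical illustration: the concrete bit labels of Fig. 5 are not reproduced here, so the binary-reflected Gray labels and the shift amount/direction below are our own assumptions, chosen only to show the mechanism.

```python
def gray_map(M_cpm=8):
    """Binary-reflected Gray labels for the M_cpm input symbols
    (illustrative choice; the paper's labels are given in Fig. 5)."""
    return [x ^ (x >> 1) for x in range(M_cpm)]

def advanced_map(x, state_parity, M_cpm=8):
    """Hypothetical sketch of the advanced mapping: use the Gray label
    directly when the transition starts from an 'even' absolute phase
    state, and a circularly shifted label table from an 'odd' one.

    The shift by one position is an assumption for illustration, not
    taken from the paper.
    """
    labels = gray_map(M_cpm)
    if state_parity == 1:                    # "odd" absolute phase state
        labels = labels[-1:] + labels[:-1]   # circular shift by one
    return labels[x]

# Even states keep the plain Gray labels; odd states see shifted ones.
even_label = advanced_map(0, state_parity=0)
odd_label = advanced_map(0, state_parity=1)
```

The point of the construction is that the first two label bits remain decidable from the orthant alone, while the state-dependent shift confines the quantization ambiguity to the third bit.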
Subchannel coding scheme
The coding scheme which we refer to as the conventional approach consists of the system model illustrated in Fig. 1, which contemplates the use of channel coding for forward error correction (FEC), interleavers to protect the system against burst errors, and an iterative decoding procedure as part of a sophisticated channel decoding approach. However, it takes no regard of the per-subchannel bit error rates presented in Fig. 6. With the proposed advanced mapping, it is known a priori at the transmitter which bits, in terms of binary subchannels, require protection by forward error correction. This circumstance can be exploited by sophisticated subchannel coding. In this context, a modified coding scheme is proposed that has the same structure as the conventional one but applies different code rates to the bit subchannels separately, i.e., bit subchannel streams that are more sensitive to the coarse quantization, noise, or channel impairments are protected by a stronger channel code.
For the case study, no coding is applied to the first two bit subchannels, whereas a strong convolutional code is applied to the third subchannel. This is illustrated in Fig. 7a; in Fig. 7b, a code rate of 1/3 is applied to the third subchannel, which corresponds to an overall code rate of \(R=7/9\): per three transmitted bits, the two uncoded subchannels carry one information bit each and the coded subchannel carries \(\frac{1}{3}\) of an information bit, i.e., \(R = (1+1+\frac{1}{3})/3 = 7/9\).
Results and discussion
All the computations rely on \(M_{{\mathrm{cpm}}}=8\) and the modulation index \(h=\frac{1}{M_{{\mathrm{cpm}}}}\). The frequency pulse used is the 1REC pulse [19]. A suboptimal bandpass filter is considered with
where \(T_g = \frac{1}{2} T_{{\mathrm{s}}}\). This filter is similar to the integrate-and-dump receiver considered in [32], but with its frequency response centered at a low intermediate frequency (low IF). Note that the common receiver based on a matched filter bank is hardware demanding and not compatible with the considered 1-bit approach. The extraction of the soft information is done based on an auxiliary channel law \(W(\cdot )\) with the parameter \(N=0\), cf. (10). The \({\mathrm {SNR}}\) is given by the ratio between the transmit power and the product of the noise power spectral density \(N_0\) and the two-sided 90% power containment bandwidth \(B_{90\%}\), i.e., \({\mathrm {SNR}} = P_{{\mathrm{tx}}} / (N_0 \, B_{90\%})\), where \(P_{{\mathrm{tx}}}\) denotes the transmit power. The considered SNR definition can be interpreted as the average energy per Nyquist interval divided by the noise power spectral density.
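As a sanity check on this SNR definition, a small helper with purely illustrative numbers (the function name and the example values are not from the article):

```python
import math

def snr_db(p_tx: float, n0: float, b90: float) -> float:
    """SNR = P_tx / (N0 * B90%), expressed in dB, per the definition above."""
    return 10.0 * math.log10(p_tx / (n0 * b90))

# Illustrative values only: unit transmit power, N0 = 1e-3 W/Hz,
# two-sided 90% power containment bandwidth of 100 Hz.
print(snr_db(1.0, 1e-3, 100.0))  # 10.0 dB
```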
This work uses convolutional codes as the simplest nontrivial example of channel coding. Convolutional codes are characterized by their constraint length and their generator polynomials [33]. Channel codes with larger constraint length and longer blocks perform better in general, but require more computational resources at the decoder.
High-rate convolutional codes, e.g., of rate 3/4, can be implemented using the puncturing technique, which creates any desired code rate from a basic low-rate code [34]. This is used to adapt the code rate while keeping a low-complexity decoder. Puncturing can be applied to trade off system throughput against robustness by changing the puncturing pattern, which specifies which coded bits are transmitted and which are discarded. The decoder must take this pattern into account when computing metrics and survivor paths. Puncturing is performed using the puncturing patterns described in Table 2.
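The mechanics can be sketched as follows, using the rate-7/9 pattern quoted later for the rate-1/2 \((5\,7)\) mother code: for every 7 information bits, the mother code emits 14 coded bits, of which 9 are kept. Re-inserting neutral (zero) LLRs at the punctured positions before decoding is a standard technique, shown here as a sketch rather than the article's exact implementation.

```python
# Rate-7/9 puncturing pattern (cf. Table 2): 14 mother-code bits per
# period, 9 of which are transmitted.
PATTERN = (1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1)

def puncture(coded):
    """Drop coded bits flagged 0; len(coded) must be a multiple of 14."""
    reps = len(coded) // len(PATTERN)
    return [b for b, keep in zip(coded, PATTERN * reps) if keep]

def depuncture(llrs):
    """Re-insert neutral LLRs (0.0) at punctured positions for the decoder."""
    it = iter(llrs)
    periods = len(llrs) // sum(PATTERN)   # 9 kept positions per period
    return [next(it) if keep else 0.0 for keep in PATTERN * periods]

coded = list(range(14))                   # one pattern period of coded bits
kept = puncture(coded)
assert len(kept) == 9                     # 14 -> 9, i.e., rate 1/2 -> 7/9
assert len(depuncture([1.0] * 9)) == 14
```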
A block of information bits is randomly generated and forwarded to the encoder according to the discrete system model described in Fig. 1. S-random interleavers are used, generated anew for each analyzed block.
Achievable rate
The lower bound on the achievable rate is computed as presented in “Appendix 1.” Figure 8 shows how oversampling increases the achievable rate due to the extra information provided during the symbol transitions. With \(M_{{\mathrm{cpm}}}=4\), the achievable rate reaches \(\log _2(M_{{\mathrm{cpm}}}) = 2\) [bpcu] for higher values of \({\mathrm {SNR}}\). For lower \({\mathrm {SNR}}\) values, when the achievable rate is below 2 [bpcu], channel coding is required for establishing reliable communication. For higher modulation orders, such as \(M_{{\mathrm{cpm}}}=8\), the achievable rate of a CPM system with 1-bit quantization in general does not reach the source entropy rate of \(\log _2(M_{{\mathrm{cpm}}}) = 3\) [bpcu], independent of the \({\mathrm {SNR}}\). Hence, coding techniques are required in order to design a system capable of providing reliable communication. In addition, the proposed system is compared with the same CPM system equipped with a uniform 2-bit quantizer with thresholds \(\{-\Delta z, 0, \Delta z \}\) for the real and imaginary parts, operating at symbol-rate sampling (Footnote 1). Considering that the 2-bit ADC is implemented as a flash ADC, it consists of three comparators, which implies three comparator operations per symbol duration and is thus equivalent to the 1-bit oversampling with \(M=3\). While the performance in terms of achievable rate is comparable, the system that uses 1-bit ADCs does not require automatic gain control.
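The comparator-count equivalence stated above can be checked with one line of arithmetic: a \(b\)-bit flash ADC uses \(2^b-1\) comparators per sample.

```python
def flash_comparators(bits: int) -> int:
    """Number of comparators in a b-bit flash ADC: 2^b - 1."""
    return 2**bits - 1

# 2-bit flash ADC at symbol rate vs. 1-bit ADC with oversampling M = 3:
ops_2bit = flash_comparators(2) * 1   # 3 comparators, 1 sample per symbol
ops_1bit = flash_comparators(1) * 3   # 1 comparator, 3 samples per symbol
assert ops_2bit == ops_1bit == 3      # same comparator operations per symbol
```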
EXIT chart analysis
In this subsection, the iterative detection and decoding process is illustrated based on extrinsic information transfer (EXIT) charts [35]. The mutual information between the bit \({\mathcal {D}}\) and its respective a priori LLR \(\varLambda \), regarded as random processes, reads as
where the conditional probabilities follow from the LLR definition and are given by
Assuming ergodicity for the LLR process \(\varLambda \), the expectation in Eq. (35) can be replaced by a time average. Thus, the mutual information can be computed from a large number \({\mathcal {N}}\) of generated LLRs as follows
which is numerically evaluated by considering a large number of channel realizations. The LLR sequences corresponding to \(\lambda _a(d_q)\), \(\lambda _{{\mathrm{ext}}}(d_q)\), \(\lambda _a(c_q)\) and \(\lambda _{{\mathrm{ext}}}(c_q)\) for all \(q \in \{1,\ldots ,m\}\) are related to the mutual information values \(I_a^{d}\), \(I_{{\mathrm{ext}}}^{d}\), \(I_a^{c}\) and \(I_{{\mathrm{ext}}}^{c}\), respectively, which are computed with Eq. (37) at every iteration step. These LLRs are considered a priori information updated by the BCJR detector and decoder, as displayed in Fig. 3. Since interleaving and deinterleaving do not alter the statistical description, the identities \(I_a^{d} = I_{{\mathrm{ext}}}^{c}\) and \(I_a^{c} = I_{{\mathrm{ext}}}^{d}\) hold. According to [36, 37], these a priori LLRs can be approximated as independent and identically distributed Gaussian random variables. This simplifies the computation of the transfer functions \(T_d\) and \(T_c\) of the detector and decoder, respectively, which are described by
where for each value of input mutual information, Gaussian-distributed a priori LLRs are generated in simulation to evaluate the detector and the decoder separately by measuring the mutual information between the output extrinsic LLRs and the encoded bit sequence. This procedure records the evolution of the mutual information through the transfer functions.
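Both steps can be sketched under the consistent-Gaussian assumption of [36, 37], where a priori LLRs for bit \(d\) have variance \(\sigma^2\) and mean \((1-2d)\sigma^2/2\). Since Eq. (37) is not reproduced above, the sketch uses the standard time-average estimator \(I \approx 1 - \frac{1}{\mathcal{N}}\sum_n \log_2\!\big(1 + e^{-(1-2d_n)\lambda_n}\big)\), which is assumed here to be its form.

```python
import math
import random

def gaussian_apriori_llrs(bits, sigma, rng):
    """Consistent Gaussian a priori LLRs: mean (1-2d)*sigma^2/2, variance sigma^2."""
    return [rng.gauss((1 - 2 * d) * sigma**2 / 2, sigma) for d in bits]

def mutual_information(bits, llrs):
    """Time-average estimate I ~ 1 - E[log2(1 + exp(-(1-2d) * llr))]."""
    total = sum(math.log2(1.0 + math.exp(-(1 - 2 * d) * l))
                for d, l in zip(bits, llrs))
    return 1.0 - total / len(bits)

rng = random.Random(0)
bits = [rng.randrange(2) for _ in range(100_000)]
i_lo = mutual_information(bits, gaussian_apriori_llrs(bits, 0.5, rng))
i_hi = mutual_information(bits, gaussian_apriori_llrs(bits, 3.0, rng))
assert 0.0 <= i_lo < i_hi < 1.0   # MI grows with the LLR reliability sigma
```

Sweeping \(\sigma\) and measuring the output mutual information of the detector or decoder for each input value traces out the transfer functions \(T_d\) and \(T_c\).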
Figure 9 shows examples of the EXIT chart analysis for the proposed iterative detection and decoding. Various cases have been evaluated, and it was observed that iterative processing yields a benefit especially for cases with low SNR; Fig. 9a and c illustrate examples of this case. For cases at high SNR, it has been observed that it is often not required to consider more than 2 iterations, as illustrated in Fig. 9b. Finally, considering the advanced mapping instead of the Gray mapping significantly changes the EXIT chart of the detector, as shown in Fig. 9d. In this case, the detection and decoding process has essentially converged after the first iteration.
Bit error rate
Figure 10 shows the BER for a system with Gray mapping and modulation order \(M_{{\mathrm{cpm}}}=8\) with different oversampling factors. The results show that oversampling can significantly improve the BER performance. Moreover, the results confirm that there is only a marginal gain from considering more than two iterations in the iterative receive processing, which is in line with the EXIT chart analysis. In addition, the BER of the corresponding system with symbol-rate sampling and a 2-bit ADC is shown for comparison. The BER performance indicates that a higher resolution in time is beneficial at low and medium SNR, whereas a higher resolution in amplitude is beneficial in the high-SNR regime.
For the case with modulation order \(M_{{\mathrm{cpm}}}=8\) and oversampling factor \(M=3\), channel coding is applied according to the extended discrete model shown in Fig. 1. Different code rates have been considered, as summarized in Table 2. The results presented in Fig. 11 confirm that the iterative decoding scheme improves the BER performance.
Additionally, the Gray mapping is compared against the novel advanced bit mapping. Figure 12 illustrates the BER simulation results, which confirm that system designs using the advanced mapping outperform those using the Gray mapping in terms of BER. For systems with Gray mapping, three iterations are considered, and for systems with advanced mapping only two, because no further performance gain was observed when increasing the number of iterations for the latter.
Finally, in Fig. 13, the BER results for the proposed coding scheme are presented. The conventional coding scheme uses a convolutional code with rate 7/9, generator polynomial \((5\,7)\) and puncturing pattern \(1 1 \vert 0 1 \vert 0 1 \vert 1 0 \vert 1 0 \vert 0 1 \vert 1 1\). In contrast, the proposed coding scheme uses a convolutional code with rate 1/3 for the third bit subchannel, as shown in Fig. 7b. In this scenario, the proposed coding scheme brings a consistent performance gain from low to high SNR levels. The gain can be explained by the fact that the proposed method exploits a priori knowledge about the reliability of the binary subchannels.
The proposed schemes are compared with the corresponding system with higher amplitude resolution, namely 2-bit quantization. Such an ADC, when realized as a flash ADC, corresponds to the same number of comparator operations per time interval. While the system with 1-bit quantization requires a higher SNR for achieving a low BER, it has practical benefits such as relaxed requirements on the automatic gain control and the low-noise amplifier.
Conclusions
Iterative detection and decoding applied to a CPM system with 1-bit quantization and oversampling at the receiver has been studied. Different code rates have been considered, and channel coding turns out to be beneficial in all considered cases. Additional performance gain can be achieved by using the proposed tailored bit mapping strategy in combination with a coding scheme that treats the different binary subchannels separately.
Availability of data and materials
Not applicable.
Notes
 1.
Note that symbol rate sampling corresponds to subsampling because the considered CPM signals have a larger bandwidth than \(\frac{1}{T_{{\text {s}}}}\).
Abbreviations
ADC: Analog-to-digital converter
APP: A posteriori probability
AWGN: Additive white Gaussian noise
BER: Bit error rate
BCJR: Bahl–Cocke–Jelinek–Raviv
bpcu: Bits per channel use
CPM: Continuous phase modulation
EXIT: Extrinsic information transfer
IF: Intermediate frequency
ISI: Intersymbol interference
LLR: Log-likelihood ratio
MAP: Maximum a posteriori
RHS: Right-hand side
SNR: Signal-to-noise ratio
References
 1.
R.H. Walden, Analogtodigital converter survey and analysis. IEEE J. Sel. Areas Commun. 17(4), 539–550 (1999)
 2.
T. Koch, A. Lapidoth, Increased capacity per unit-cost by oversampling, in Proceedings of the IEEE Convention of Electrical and Electronics Engineers in Israel (Eilat, Israel, 2010)
 3.
H. Grewal, Oversampling the ADC12 for higher resolution (Texas Instruments Application Report, 2006)
 4.
E.N. Gilbert, Increased information rate by oversampling. IEEE Trans. Inf. Theory 39(6), 1973–1976 (1993)
 5.
S. Shamai, Information rates by oversampling the sign of a bandlimited process. IEEE Trans. Inf. Theory 40(4), 1230–1236 (1994)
 6.
M. Zakai, Bandlimited functions and the sampling theorem. Inf. Control 8(2), 143–158 (1965)
 7.
L. Landau, M. Dörpinghaus, G.P. Fettweis, 1-bit quantization and oversampling at the receiver: communication over bandlimited channels with noise. IEEE Commun. Lett. 21(5), 1007–1010 (2017)
 8.
W. Zhang, A general framework for transmission with transceiver distortion and some applications. IEEE Trans. Commun. 60(2), 384–399 (2012)
 9.
S. Bender, M. Dörpinghaus, G.P. Fettweis, On the achievable rate of bandlimited continuous-time 1-bit quantized AWGN channels, in Proc. IEEE Int. Symp. Inform. Theory (ISIT) (Aachen, Germany, 2017), pp. 2083–2087
 10.
L. Landau, S. Krone, G.P. Fettweis, Intersymbol-interference design for maximum information rates with 1-bit quantization and oversampling at the receiver, in Proceedings of the International ITG Conference on Systems, Communications and Coding (Munich, Germany, 2013)
 11.
A. Gokceoglu, E. Björnson, E.G. Larsson, M. Valkama, Waveform design for massive MISO downlink with energy-efficient receivers adopting 1-bit ADCs, in Proc. IEEE Int. Conf. Commun. (ICC) (Kuala Lumpur, Malaysia, 2016), pp. 1–7
 12.
S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, C. Studer, Throughput analysis of massive MIMO uplink with low-resolution ADCs. IEEE Trans. Wirel. Commun. 16(6), 4038–4051 (2017)
 13.
A.B. Üçüncü, A.Ö. Yılmaz, Oversampling in one-bit quantized massive MIMO systems and performance analysis. IEEE Trans. Wirel. Commun. 17(12), 7952–7964 (2018)
 14.
A. Gokceoglu, E. Björnson, E.G. Larsson, M. Valkama, Spatio-temporal waveform design for multiuser massive MIMO downlink with 1-bit receivers. IEEE J. Sel. Top. Signal Process. 11(2), 347–362 (2017)
 15.
Z. Shao, L.T.N. Landau, R.C. de Lamare, Channel estimation using 1-bit quantization and oversampling for large-scale multiple-antenna systems, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (Brighton, United Kingdom, 2019), pp. 4669–4673
 16.
Z. Shao, R.C. de Lamare, L.T.N. Landau, Iterative detection and decoding for large-scale multiple-antenna systems with 1-bit ADCs. IEEE Wirel. Commun. Lett. 7(3), 476–479 (2018)
 17.
L.T.N. Landau, M. Dörpinghaus, R.C. de Lamare, G.P. Fettweis, Achievable rate with 1-bit quantization and oversampling using continuous phase modulation-based sequences. IEEE Trans. Wirel. Commun. 17(10), 7080–7095 (2018)
 18.
S. Bender, M. Dörpinghaus, G.P. Fettweis, The potential of continuous phase modulation for oversampled 1-bit quantized channels, in Proc. of the IEEE Int. Workshop on Signal Processing Advances in Wireless Communications (Cannes, France, 2019)
 19.
J. Anderson, T. Aulin, C.E. Sundberg, Digital Phase Modulation (Plenum Press, New York, 1986)
 20.
C.E. Sundberg, Continuous phase modulation. IEEE Commun. Mag. 24(4), 25–38 (1986)
 21.
L.T.N. Landau, M. Dörpinghaus, G.P. Fettweis, 1-Bit quantization and oversampling at the receiver: sequence-based communication. EURASIP J. Wirel. Commun. Netw. 17, 83 (2018)
 22.
R.R.M. de Alencar, L.T.N. Landau, R.C. de Lamare, Continuous phase modulation with 1-bit quantization and oversampling using iterative detection and decoding, in 2019 53rd Asilomar Conference on Signals, Systems, and Computers (Pacific Grove, CA, USA, 2019), pp. 1729–1733
 23.
J. Hong, P.A. Wilford, A hierarchical modulation for upgrading digital broadcast systems. IEEE Trans. Broadcast. 51(2), 223–229 (2005)
 24.
L. Landau, M. Dörpinghaus, R.C. de Lamare, G.P. Fettweis, Achievable rate with 1-bit quantization and oversampling at the receiver using continuous phase modulation, in Proc. of the IEEE Int. Conf. on Ubiquitous Wireless Broadband (Salamanca, Spain, 2017)
 25.
B.E. Rimoldi, A decomposition approach to CPM. IEEE Trans. Inf. Theory 34(2), 260–270 (1988)
 26.
L. Bahl, J. Cocke, F. Jelinek, J. Raviv, Optimal decoding of linear codes for minimizing symbol error rate (corresp.). IEEE Trans. Inf. Theory 20(2), 284–287 (1974)
 27.
A. Genz, Numerical computation of multivariate normal probabilities. J. Comput. Graph. Stat. 1, 141–149 (1992)
 28.
M. Tüchler, A.C. Singer, Turbo equalization: an overview. IEEE Trans. Inf. Theory 57(2), 920–952 (2011)
 29.
J. Hagenauer, E. Offer, L. Papke, Iterative decoding of binary block and convolutional codes. IEEE Trans. Inf. Theory 42(2), 429–445 (1996)
 30.
C. Berrou, A. Glavieux, P. Thitimajshima, Near Shannon limit error-correcting coding and decoding: turbo-codes, in Proc. IEEE Int. Conf. Commun. (ICC), vol. 2 (Geneva, Switzerland, 1993), pp. 1064–1070
 31.
C. Douillard, M. Jezequel, C. Berrou, Iterative correction of intersymbol interference: turbo equalization. Eur. Trans. Telecommun. 6(5), 507–511 (1995)
 32.
M.H.M. Costa, A practical demodulator for continuous phase modulation, in Proc. IEEE Int. Symp. Inform. Theory (ISIT) (Trondheim, Norway, 1994), p. 88
 33.
G.D. Forney, Convolutional codes I: algebraic structure. IEEE Trans. Inf. Theory 16(6), 729–738 (1970)
 34.
J.B. Cain, G.C. Clark, J.M. Geist, Punctured convolutional codes of rate \((n-1)/n\) and simplified maximum likelihood decoding. IEEE Trans. Inf. Theory 25(1), 97–100 (1979)
 35.
S. ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49(10), 1727–1737 (2001)
 36.
S.J. Lee, A.C. Singer, N.R. Shanbhag, Analysis of linear turbo equalizer via EXIT chart, in GLOBECOM'03. IEEE Global Telecommunications Conference (IEEE Cat. No.03CH37489), vol. 4 (San Francisco, CA, USA, 2003), pp. 2237–2241
 37.
M. Tüchler, R. Koetter, A.C. Singer, Turbo equalization: principles and new results. IEEE Trans. Commun. 50(5), 754–767 (2002)
 38.
D.M. Arnold, H.A. Loeliger, P.O. Vontobel, A. Kavcic, W. Zeng, Simulation-based computation of information rates for channels with memory. IEEE Trans. Inf. Theory 52(8), 3498–3508 (2006)
 39.
H.D. Pfister, J.B. Soriaga, P.H. Siegel, On the achievable information rates of finite state ISI channels, in Proc. IEEE Glob. Comm. Conf. (GLOBECOM) (San Antonio, TX, USA, 2001)
Acknowledgements
Not applicable.
Funding
This work has been funded by ELIOT ANR-18-CE40-0030, FAPESP 2018/12579-7 Project and FAPESP 2015/24499-0. It was also supported by CNPq, CAPES and FAPERJ.
Author information
Affiliations
Contributions
All authors contributed to the conception and design of the study. RA drafted the manuscript and did the simulation work. All authors contributed to the interpretation of the results. LL and RdL reviewed and edited the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Parts of this paper have been published at Asilomar 2019 [22].
Appendices
Appendix 1: A lowerbound on the achievable rate
According to [38, 39], the achievable rate can be expressed and lower-bounded by using an auxiliary channel law [7, 21]
where the auxiliary channel law in (10) is used to reduce the complexity of the computation, such that
As illustrated in [21], the probabilities \(W({\varvec{y}}^n)\) and \(W({\varvec{y}}^n \vert x^n)\) in (38) can be computed recursively with the forward recursion (20) of the BCJR algorithm covered in Sect. 4.2. By using the state notation \(s_k\), \(W({\varvec{y}}^n)\) is determined with \(W({\varvec{y}}^k) = \sum _{s_k} W(s_k, {\varvec{y}}^k) = \sum _{s_k} {\text {f}}_k(s_k)\), whereas the conditional probability \(W({\varvec{y}}^n \vert x^n)\) is computed with the following recursion rule
The algorithm is initialized by defining the metrics \({\text {f}}_{k=0}(s_k)\) and \(\tilde{{\text {f}}}_{k=0} = 1\). In practice, it was observed in [38] that during the recursion these metrics tend toward zero after a few steps; therefore, the recursion (20) is slightly modified to
where the scaling factors \(\mu _k\) are chosen such that \(\sum _{s_k} {\text {f}}_k(s_k) = 1\). Regarding the recursion in (39), auxiliary variables \(\tilde{\mu }_k\) are defined as \(\tilde{\mu }_k = {P ({\varvec{y}}_k \vert {\varvec{y}}^{k-1}_{k-N}, s_k, s_{k-1} )}^{-1}\), with the aim of expressing the desired information rate quantity as
when a large value for n is considered.
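The normalization step can be sketched on a toy two-state hidden process; the transition matrix and emission model below are arbitrary illustrative numbers, not the CPM auxiliary channel law, and only show how the per-step scaling factors \(\mu_k\) keep the forward metrics normalized while their logarithms accumulate \(\log W({\varvec{y}}^n)\).

```python
import math

# Toy 2-state hidden process with illustrative numbers only.
A = [[0.9, 0.1], [0.2, 0.8]]          # state transition probabilities
def emit(state, y):                    # P(y_k | s_k), arbitrary toy model
    return 0.7 if y == state else 0.3

def normalized_forward(ys):
    """Forward recursion with per-step scaling so that sum_s f_k(s) = 1.
    The accumulated negative log-scaling factors recover log W(y^n)."""
    f = [0.5, 0.5]                     # uniform initialization
    log_w = 0.0
    for y in ys:
        g = [emit(s, y) * sum(f[sp] * A[sp][s] for sp in (0, 1))
             for s in (0, 1)]
        mu = 1.0 / sum(g)              # scaling factor mu_k
        f = [mu * v for v in g]
        log_w -= math.log(mu)          # log W(y^k) grows by log(sum(g))
        assert abs(sum(f) - 1.0) < 1e-12
    return f, log_w

f, log_w = normalized_forward([0, 0, 1, 1, 0])
assert log_w < 0.0                     # a valid log-probability
```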
Appendix 2: Channel decoder example with BCJR algorithm
To illustrate how to compute (29) and (31), i.e., the outputs of the channel decoder, a memory-2 convolutional code characterized by the generator polynomial \((5\,7)\) is considered. It can be described by the state transitions triggered by the input bits, which are illustrated in Table 3.
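For the generator polynomials \((5\,7)_8 = (101,\,111)_2\), the transition table can be generated programmatically. The sketch below assumes a common convention (state \((s_1, s_2)\) holds the two most recent input bits, newest first); the ordering in the article's Table 3 may differ.

```python
def conv57_step(state, u):
    """One step of the rate-1/2, memory-2 (5,7) convolutional encoder.
    state = (s1, s2): the two previous input bits, most recent first."""
    s1, s2 = state
    out1 = u ^ s2          # generator 5 (octal) = 101: taps on u and s2
    out2 = u ^ s1 ^ s2     # generator 7 (octal) = 111: taps on u, s1, s2
    return (u, s1), (out1, out2)

# Build the full state-transition table (cf. Table 3): 4 states x 2 inputs.
table = {(state, u): conv57_step(state, u)
         for state in [(0, 0), (0, 1), (1, 0), (1, 1)] for u in (0, 1)}

assert table[((0, 0), 0)] == ((0, 0), (0, 0))  # all-zero state, input 0
assert table[((0, 0), 1)] == ((1, 0), (1, 1))  # all-zero state, input 1
```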
To describe a BCJR-based channel decoder algorithm, it is practical to employ a matrix notation, which requires the following definitions:
- \(\mathbf{f} _r\) is the vector form of the forward probabilities for all states at time index r;
- \(\mathbf{b} _r\) is the vector form of the backward probabilities for all states at time index r;
- \({\varvec{\varGamma }}_r\) is the matrix form of the transition probabilities at time index r.
With this, the channel decoder algorithm follows the description covered in [28], which is presented in Table 4. Regarding the input bits, \({\varvec{U}}(b)\) is introduced in the vector notation as a matrix of 1's and 0's that translates the sum of applicable probabilities, similar to (23), into an element-wise multiplication, represented by the operator \(\odot \).
The given algorithm is capable of producing the output soft information in (26) with the expressions
where
are matrices that relate the state transitions to the output bits of the channel code presented in Table 3. With this, the extrinsic LLR described in (29) can be computed as follows
where \(i \in \{1,0\}\). As noted in [28], this equation is numerically problematic due to the logarithm of a difference on the right-hand side (RHS) of the equality. To solve this for practical implementations, \(\lambda ^{\prime }_{{\mathrm{ext}}}(c_{q})\) is computed as follows:
where \({\varvec{\varGamma }}_{{\mathrm{ext}},i,r}\) are extrinsic transition matrices that have the dependency on the input LLR \(\lambda (c_{2r+i})\) removed while computing \(\lambda ^{\prime }_{{\mathrm{ext}}}(c_{2r+i})\). For example, the extrinsic transition matrix \({\varvec{\varGamma }}_{{\mathrm{ext}},1,r}\) corresponding to \(\lambda ^{\prime }_{{\mathrm{ext}}}(c_{2r-1})\), for the convolutional code considered so far, is given by
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
de Alencar, R.R.M., Landau, L.T.N. & de Lamare, R.C. Continuous phase modulation with 1bit quantization and oversampling using iterative detection and decoding. J Wireless Com Network 2020, 237 (2020). https://doi.org/10.1186/s13638020018565
Keywords
 1-Bit quantization
 Oversampling
 ADC
 Continuous phase modulation
 BCJR
 Iterative decoding