In [29], we proposed an enhanced equalization method, which we utilize here for the initial processing of the received CL-LINC-OFDM signal. To start with, we reformulate the received signal in (4). Define a diagonal matrix \(\mathbf{C}(\mathbf{s}_{t})\) as
$$ \mathbf{C}(\mathbf{s}_{t})=\left[\begin{array}{c c c c} \sqrt{\frac{{V_{0}^{2}}}{|s_{t,0}|^{2}}-1}&0&\dots&0\\ 0&\sqrt{\frac{{V_{0}^{2}}}{|s_{t,1}|^{2}}-1}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\sqrt{\frac{{V_{0}^{2}}}{|s_{t,M-1}|^{2}}-1} \end{array}\right]. $$
(9)
By (9) and the definition of e in (3), we can rewrite \(\mathbf{s}_{t1}\) and \(\mathbf{s}_{t2}\) as
$$\begin{array}{@{}rcl@{}} \mathbf{s}_{t1}=\left(\mathbf{I}+j\mathbf{C}(\mathbf{s}_{t})\right)\frac{\mathbf{s}_{t}}{2}\,\\ \mathbf{s}_{t2}=\left(\mathbf{I}-j\mathbf{C}(\mathbf{s}_{t})\right)\frac{\mathbf{s}_{t}}{2}, \end{array} $$
(10)
where I is an M×M identity matrix. The received signal in (4) can then be written as
$$ \mathbf{y}=\frac{1}{2}\left(\mathbf{H}_{1}+\mathbf{H}_{2}\right)\mathbf{s}+ \frac{j}{2}\left(\mathbf{H}_{1}-\mathbf{H}_{2}\right)\mathbf{F}\mathbf{C}(\mathbf{s}_{t})\mathbf{F}^{H}\mathbf{s}+\mathbf{n}, $$
(11)
where \(\mathbf{F}\) and \(\mathbf{F}^{H}\) denote the DFT and IDFT matrices, respectively; both are assumed to be unitary. Note that \(\mathbf{s}_{t}=\mathbf{F}^{H}\mathbf{s}\). From (11), we can see that the original representation of the received signal has been transformed into a form involving \(\mathbf{s}\) instead of \(\mathbf{s}_{1}\) and \(\mathbf{s}_{2}\). Also, we can see that if \(\mathbf{H}_{1}\) is not equal to \(\mathbf{H}_{2}\), the second term on the right-hand side of (11) acts as an interference. The magnitude of this interference can be large even if the difference between \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) is small, which is why CL-LINC-OFDM with the DZF equalizer does not perform well.
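As an illustration of the decomposition in (9) and (10), the following minimal NumPy sketch (the function name `linc_decompose` and the test parameters are illustrative, not part of the system description) forms \(\mathbf{s}_{t1}\) and \(\mathbf{s}_{t2}\) from a clipped time-domain block and checks that both components have the constant envelope \(V_{0}/2\) and sum back to \(\mathbf{s}_{t}\).

```python
import numpy as np

def linc_decompose(s_t, V0):
    """Split a clipped time-domain block s_t (|s_t| <= V0) into the two
    constant-envelope LINC components of (9)-(10)."""
    c = np.sqrt(V0**2 / np.abs(s_t)**2 - 1.0)   # diagonal entries of C(s_t)
    s_t1 = (1 + 1j * c) * s_t / 2.0
    s_t2 = (1 - 1j * c) * s_t / 2.0
    return s_t1, s_t2

# Illustrative example: a Gaussian OFDM-like block amplitude-clipped to V0.
rng = np.random.default_rng(0)
M, Es, V0 = 64, 1.0, 2.0
s_t = (rng.normal(size=M) + 1j * rng.normal(size=M)) * np.sqrt(Es / 2)
s_t = np.where(np.abs(s_t) > V0, V0 * s_t / np.abs(s_t), s_t)

s_t1, s_t2 = linc_decompose(s_t, V0)
assert np.allclose(s_t1 + s_t2, s_t)        # the components sum to the original block
assert np.allclose(np.abs(s_t1), V0 / 2)    # constant envelope V0/2
assert np.allclose(np.abs(s_t2), V0 / 2)
```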
Observe that the diagonal terms in \(\mathbf{C}(\mathbf{s}_{t})\) are all greater than or equal to zero, which suggests that the means of these terms are not zero. If the nonzero means can be removed, the level of the interference can be reduced. Define a positive value μ and rewrite (11) as
$$ \begin{aligned} \mathbf{y} &=\!\frac{1}{2}\left(\mathbf{H}_{1}+\mathbf{H}_{2}\right)\mathbf{s}+\! \frac{j}{2}\left(\mathbf{H}_{1}\,-\,\mathbf{H}_{2}\right)\mathbf{F} \left[ \mathbf{C}(\mathbf{s}_{t})-\!\mu\mathbf{I}\! +\! \mu\mathbf{I} \right] \mathbf{F}^{H}\mathbf{s} +\mathbf{n}\\ &=\!\frac{1}{2}\left[ \left(\mathbf{H}_{1}+\mathbf{H}_{2}\right)+\! j\mu\left(\mathbf{H}_{1}-\mathbf{H}_{2}\right) \right]\mathbf{s}+ \frac{j}{2}\left(\mathbf{H}_{1}-\mathbf{H}_{2}\right)\mathbf{F} \left(\mathbf{C}(\mathbf{s}_{t})\right.\\ & \left.\quad-\mu\mathbf{I}\right)\mathbf{F}^{H}\mathbf{s}+\mathbf{n}.\\ \end{aligned} $$
(12)
Then, the equalized symbols can be obtained as
$$ \hat{\mathbf{s}}=2\left[ \left(\mathbf{H}_{1}+\mathbf{H}_{2}\right)+ j\mu\left(\mathbf{H}_{1}-\mathbf{H}_{2}\right) \right]^{-1}\mathbf{y}. $$
(13)
When μ is equal to zero, (13) reduces to (5). Thus, (13) can be seen as a generalized form of the DZF equalizer for CL-LINC-OFDM transmission. We refer to this equalizer as the enhanced ZF (EZF) equalizer.
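Because \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) are diagonal, (13) reduces to a per-subcarrier division. A minimal sketch of the EZF equalizer, with illustrative names, is given below; setting `mu = 0` recovers the DZF equalizer of (5).

```python
import numpy as np

def ezf_equalize(y, H1, H2, mu):
    """Enhanced ZF equalizer of (13) for diagonal channel matrices.

    y      : received frequency-domain vector (length M)
    H1, H2 : per-subcarrier channel gains (diagonals of H_1 and H_2)
    mu     : offset parameter; mu = 0 gives the DZF equalizer of (5)
    """
    G = (H1 + H2) + 1j * mu * (H1 - H2)   # effective per-subcarrier gain
    return 2.0 * y / G
```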
The performance of the EZF equalizer strongly depends on the choice of μ. We now derive an optimum μ such that the average power of the interference is minimized. Define a vector v as
$$ \mathbf{v}=\left[\mathbf{C}(\mathbf{s}_{t})-\mu\mathbf{I}\right] \mathbf{s}_{t}. $$
(14)
Thus, the interference vector in (12) is equal to \(\frac {j}{2}\left (\mathbf {H}_{1}-\mathbf {H}_{2}\right)\mathbf {F}\mathbf {v}\). To calculate the variance of the interference in each subcarrier, we first calculate that of \(\mathbf{v}\). When M is large, it is reasonable to approximate \(\mathbf{s}_{t}\) as a zero-mean complex white Gaussian vector. The covariance matrix of \(\mathbf{s}_{t}\) can then be expressed as \(E_{s}\mathbf{I}\), where \(E_{s}\) is the power of the transmit signal. The mean of each component of \(\mathbf{v}\) is given by
$$ E\left\{ \left(\sqrt{\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1} -\mu \right)s_{t,i} \right\}=E\left\{ s_{t,i}\sqrt{\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1}\right\}. $$
(15)
Note that \(s_{t,i}\) is a zero-mean complex Gaussian random variable, so that \(s_{t,i}\sqrt {\frac {{V_{0}^{2}}}{|s_{t,i}|^{2}}-1}\) and \(-s_{t,i}\sqrt {\frac {{V_{0}^{2}}}{|-s_{t,i}|^{2}}-1}\) occur with equal probability, and the expectation in (15) is therefore zero. Since the elements of \(\mathbf{v}\) are also independent of one another, the covariance matrix of \(\mathbf{v}\) can be seen as an M×M diagonal matrix with identical diagonal terms. Defining each diagonal term as \({\sigma ^{2}_{v}}\) and assuming \(\frac {{V_{0}^{2}}}{E_{s}}\gg 1\), we can approximate \(|s_{t,i}|\) as a Rayleigh-distributed random variable whose probability density function (PDF) is given by
$$ p\left(|s_{t,i}|\right)=\frac{2|s_{t,i}|}{E_{s}}e^{\frac{-|s_{t,i}|^{2}}{E_{s}}}. $$
(16)
The value of \({\sigma _{v}^{2}}\) can then be derived as
$$ {} \begin{aligned} {\sigma_{v}^{2}} &= E\left\{ \left|\left(\sqrt{\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1} -\mu \right)s_{t,i} \right|^{2}\right\} \\ &= E\left\{|s_{t,i}|^{2}\left(\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1\right)\right\}-2\mu E\left\{|s_{t,i}|^{2}\sqrt{\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1}\right\}\\ &\quad+\mu^{2}E\left\{|s_{t,i}|^{2}\right\} \\ &= {V_{0}^{2}}+\left(\mu^{2}-1\right)E_{s}-2\mu E\left\{|s_{t,i}|^{2}\sqrt{\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1}\right\}. \end{aligned} $$
(17)
To derive a closed-form solution for the third term of (17), we use the approximation \(\sqrt {1-x}\approx 1-0.6x\):
$$\begin{array}{@{}rcl@{}} |s_{t,i}|^{2} \sqrt{\frac{{V_{0}^{2}}}{|s_{t,i}|^{2}}-1}&=& |s_{t,i}|V_{0}\sqrt{1-\frac{|s_{t,i}|^{2}}{{V_{0}^{2}} }} \\ &\approx & V_{0} |s_{t,i}|\left(1-0.6\frac{|s_{t,i}|^{2}}{{V_{0}^{2}} }\right) \\ &=& V_{0} |s_{t,i}|-0.6\frac{|s_{t,i}|^{3}}{V_{0}}. \end{array} $$
(18)
The approximation used in (18) is a modified first-order Taylor expansion, chosen so that better performance can be obtained. With the PDF shown in (16), the mean and the third moment of \(|s_{t,i}|\) can be obtained as
$$\begin{array}{@{}rcl@{}} E\left\{|s_{t,i}|\right\}=\frac{\sqrt{E_{s} \pi} }{2},~~ \text{and} \quad E\left\{|s_{t,i}|^{3}\right\}=\frac{3\sqrt{\pi}}{4}E_{s}^{\frac{3}{2}}. \end{array} $$
(19)
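For completeness, both moments in (19) follow directly from the Rayleigh PDF in (16): for a positive integer n,
$$ E\left\{|s_{t,i}|^{n}\right\}=\int_{0}^{\infty}r^{n}\,\frac{2r}{E_{s}}e^{-\frac{r^{2}}{E_{s}}}dr=E_{s}^{\frac{n}{2}}\,\Gamma\!\left(1+\frac{n}{2}\right), $$
so that \(E\{|s_{t,i}|\}=E_{s}^{1/2}\Gamma(3/2)=\frac{\sqrt{E_{s}\pi}}{2}\) and \(E\{|s_{t,i}|^{3}\}=E_{s}^{3/2}\Gamma(5/2)=\frac{3\sqrt{\pi}}{4}E_{s}^{3/2}\).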
Substituting (18) and (19) into (17), we can obtain
$$\begin{array}{@{}rcl@{}} {\sigma_{v}^{2}} & \approx & {V_{0}^{2}}+\left(\mu^{2}-1\right)E_{s}-2\mu E\left\{V_{0} |s_{t,i}|-0.6\frac{|s_{t,i}|^{3}}{V_{0}}\right\} \\ &=& {V_{0}^{2}}+\left(\mu^{2}-1\right)E_{s}-\mu V_{0} \sqrt{E_{s} \pi}+\frac{0.9\mu \sqrt{\pi}}{V_{0}}E_{s}^{\frac{3}{2}} \\ &=& {V_{0}^{2}}-E_{s}-\frac{1}{4}E_{s}\left(V_{0}\sqrt{\frac{\pi}{E_{s}}}-\frac{0.9\sqrt{\pi E_{s}}}{V_{0}} \right)^{2} \\ && +{}E_{s} \left[ \mu-\frac{1}{2}\left(V_{0}\sqrt{\frac{\pi}{E_{s}}}-\frac{0.9\sqrt{\pi E_{s}}}{V_{0}} \right) \right]^{2}. \end{array} $$
(20)
Equation (20) is a quadratic function of μ, and the optimum μ giving the smallest \({\sigma _{v}^{2}}\) can be easily found as:
$$ \mu =\frac{V_{0}}{2}\sqrt{\frac{\pi}{E_{s}}}-\frac{0.45\sqrt{\pi E_{s}}}{V_{0}}. $$
(21)
The corresponding \({\sigma _{v}^{2}}\) can also be obtained as
$$ {}\begin{aligned} {\sigma_{v}^{2}} &= {V_{0}^{2}}-E_{s}-\frac{1}{4}E_{s}\left(V_{0}\sqrt{\frac{\pi}{E_{s}}}-\frac{0.9\sqrt{\pi E_{s}}}{V_{0}} \right)^{2} \\ &= {V_{0}^{2}} \left[ \left(1-\frac{\pi}{4}\right)+ \left(0.45\pi-1\right)\frac{E_{s}}{{V_{0}^{2}}}-\frac{0.81\pi {E_{s}^{2}}}{4 {V_{0}^{4}}}\right]. \end{aligned} $$
(22)
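The value of μ in (21) and the residual variance in (22) rely on the Gaussian approximation of \(\mathbf{s}_{t}\) and on the modified expansion in (18), so they can be checked numerically. The following sketch (parameter values and names are illustrative) estimates \({\sigma_{v}^{2}}\) from Rayleigh samples over a grid of μ and compares the empirical minimizer and minimum with (21) and (22); the results should agree closely, though not exactly, because of the approximations involved.

```python
import numpy as np

rng = np.random.default_rng(1)
Es, V0 = 1.0, 2.0                      # clipping ratio kappa = V0 / sqrt(Es) = 2
N = 200_000

# Rayleigh-distributed |s_{t,i}| per (16); samples above V0 are clipped to V0,
# as in CL-LINC, so the square root below stays real.
r = np.sqrt(Es / 2) * np.abs(rng.normal(size=N) + 1j * rng.normal(size=N))
r = np.minimum(r, V0)

def sigma_v2(mu):
    # empirical E{ |(sqrt(V0^2/|s|^2 - 1) - mu) s_{t,i}|^2 }, cf. (17)
    return np.mean((np.sqrt(V0**2 / r**2 - 1.0) - mu)**2 * r**2)

mu_grid = np.linspace(0.0, 3.0, 301)
mu_emp = mu_grid[np.argmin([sigma_v2(m) for m in mu_grid])]
mu_opt = 0.5 * V0 * np.sqrt(np.pi / Es) - 0.45 * np.sqrt(np.pi * Es) / V0    # (21)
sv2_analytic = V0**2 * ((1 - np.pi / 4) + (0.45 * np.pi - 1) * Es / V0**2
                        - 0.81 * np.pi * Es**2 / (4 * V0**4))                # (22)

print(f"empirical minimizer ~ {mu_emp:.3f}, closed-form mu of (21) = {mu_opt:.3f}")
print(f"sigma_v^2 at mu of (21): empirical {sigma_v2(mu_opt):.4f}, analytic (22) {sv2_analytic:.4f}")
```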
Since the channel encoder operates at the bit level, the equalized symbol \(\hat {\mathbf {s}}\) has to be de-mapped into soft bits, a process referred to as soft-demapping [31]. Soft-demapping for OFDM systems requires calculating the log-likelihood ratio (LLR) for each transmitted bit. To do that, we first have to calculate the signal-to-interference-plus-noise ratio (SINR) of the equalized signal at each subcarrier. Let the channel magnitude response of subcarrier k be \(G_{k}\), and the variance of the interference-plus-noise in the same subcarrier be \({\sigma ^{2}_{k}}\). Then, the SINR, denoted as \(\gamma_{k}\), can be calculated as:
$$ \gamma_{k}=\frac{E_{s}|G_{k}|^{2}}{{\sigma^{2}_{k}}}. $$
(23)
As mentioned, the covariance matrix of \(\mathbf{v}\) is diagonal. Hence, the covariance matrix of the interference-plus-noise vector in (12) is also diagonal, and its kth diagonal term is given by
$$ {}{ \begin{aligned} {\sigma_{k}^{2}}&=\frac{|H_{1,k}-H_{2,k}|^{2}}{4}{\sigma_{v}^{2}}\\& \quad+\frac{|\left(H_{1,k}+H_{2,k}\right)+j\mu\left(H_{1,k}-H_{2,k}\right)|^{2}}{4}{\sigma_{c}^{2}}+{\sigma_{n}^{2}}, \end{aligned}} $$
(24)
where \(H_{1,k}\) is the kth diagonal component of \(\mathbf{H}_{1}\), \(H_{2,k}\) is that of \(\mathbf{H}_{2}\), \({\sigma _{v}^{2}}\) is the interference variance given in (22), and \({\sigma ^{2}_{c}}\) is the variance of the clipping noise. From (22), we see that \({\sigma _{v}^{2}}\) is an increasing function of \(V_{0}\). Note that the value of \(V_{0}\) determines the PAPR of the transmit signal. A smaller \(V_{0}\) means a lower PAPR, higher power efficiency, and lower interference in (12). However, more signal samples will be clipped, increasing the clipping noise level. From (12), we can obtain the channel magnitude response and then calculate \(\gamma_{k}\) as
$$ \begin{aligned} \gamma_{k}=\frac{{A^{2}_{c}}E_{s}|\left(H_{1,k}+H_{2,k}\right)+j\mu\left(H_{1,k}-H_{2,k}\right)|^{2}}{{\sigma_{v}^{2}}|H_{1,k}-H_{2,k}|^{2}+{\sigma^{2}_{c}}|\left(H_{1,k}+H_{2,k}\right)+j\mu\left(H_{1,k}-H_{2,k}\right)|^{2}+4{\sigma_{n}^{2}}}, \end{aligned} $$
(25)
where \(A_{c}\) is an equivalent amplitude after clipping. The values of \({\sigma ^{2}_{c}}\) and \(A_{c}\) can be found as follows. Denote the clipped time-domain OFDM signal as \(\check {s}_{t,i}\) and the clipping ratio, equivalent to the square root of the PAPR value, as
$$ \kappa=\frac{V_{0}}{\sqrt{E_{s}}}. $$
(26)
Since the time-domain OFDM symbol is approximated by a complex white Gaussian process, we have
$$ E\{|\check{s}_{t,i}|^{2}\}=E_{s}\left(1-e^{-\kappa^{2}} \right). $$
(27)
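For completeness, (27) follows by splitting the expectation at the clipping level \(V_{0}\) and using the PDF in (16):
$$ E\{|\check{s}_{t,i}|^{2}\}=\int_{0}^{V_{0}}r^{2}\,\frac{2r}{E_{s}}e^{-\frac{r^{2}}{E_{s}}}dr+V_{0}^{2}e^{-\kappa^{2}} =E_{s}\left[1-\left(1+\kappa^{2}\right)e^{-\kappa^{2}}\right]+E_{s}\kappa^{2}e^{-\kappa^{2}} =E_{s}\left(1-e^{-\kappa^{2}}\right), $$
where \(e^{-\kappa^{2}}\) is the probability that a sample exceeds the clipping level.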
In [32], it was shown that the clipped signal can be modelled as
$$ \check{s}_{t,i}=A_{c}s_{t,i}+n_{c,i}, $$
(28)
where \(n_{c,i}\) is a clipping noise term uncorrelated with \(s_{t,i}\). The value of \(A_{c}\) and the variance of \(n_{c,i}\), denoted as \({\sigma ^{2}_{c}}\), are found to be
$$ A_{c}=1-e^{-\kappa^{2}}+\frac{\sqrt{\pi}\kappa}{2}\text{erfc}\left(\kappa\right), $$
(29)
where erfc(.) denotes the complementary error function, and
$$\begin{array}{@{}rcl@{}} {\sigma^{2}_{c}}&=&E\{|\check{s}_{t,i}|^{2}\}-{A^{2}_{c}}E_{s} \\ &=&E_{s}\left[1-e^{-\kappa^{2}}-\left(1-e^{-\kappa^{2}}+\frac{\kappa\sqrt{\pi}}{2}\text{erfc}\left(\kappa\right)\right)^{2}\right] \\ &\approx& E_{s}\left[e^{-\kappa^{2}}-\kappa\sqrt{\pi}\,\text{erfc}\left(\kappa\right)\right]. \end{array} $$
(30)
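Putting (26), (29), and (30) into code, the following sketch (with illustrative names) computes the clipping parameters and evaluates the per-subcarrier SINR of (25); \({\sigma_{v}^{2}}\) is the value given by (22).

```python
import numpy as np
from scipy.special import erfc

def clipping_parameters(Es, V0):
    """Clipping ratio (26), equivalent amplitude A_c (29), and clipping-noise
    variance sigma_c^2 (exact form of (30))."""
    kappa = V0 / np.sqrt(Es)
    Ac = 1.0 - np.exp(-kappa**2) + 0.5 * np.sqrt(np.pi) * kappa * erfc(kappa)
    sigma_c2 = Es * (1.0 - np.exp(-kappa**2)) - Ac**2 * Es
    return kappa, Ac, sigma_c2

def sinr_per_subcarrier(H1, H2, mu, Es, Ac, sigma_v2, sigma_c2, sigma_n2):
    """Per-subcarrier SINR gamma_k of (25); H1 and H2 hold the diagonal entries
    of the two frequency-domain channel matrices."""
    G = (H1 + H2) + 1j * mu * (H1 - H2)   # effective channel of the EZF equalizer
    return (Ac**2 * Es * np.abs(G)**2 /
            (sigma_v2 * np.abs(H1 - H2)**2 + sigma_c2 * np.abs(G)**2 + 4.0 * sigma_n2))
```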
Finally, the LLR of the ith bit transmitted on the kth subcarrier is obtained as
$$ {} \begin{aligned} LLR\left(b_{i,k}\right)\approx \ln\frac{\max\limits_{\alpha\in S_{i,k}^{1}}P\left(\tilde{y}_{k}|s_{k}=\alpha\right)}{\max\limits_{\alpha\in S_{i,k}^{0}}P\left(\tilde{y}_{k}|s_{k}=\alpha\right)}&=\gamma_{k} \{\min\limits_{\alpha\in S_{i,k}^{0}}|\hat{s}_{k}-\alpha|^{2}\\& \quad-\min\limits_{\alpha\in S_{i,k}^{1}}|\hat{s}_{k}-\alpha|^{2}\}, \end{aligned} $$
(31)
where \(S_{i,k}^{1}\) or \(S_{i,k}^{0}\) denotes the symbol set in which the ith bit of each element is 1 or 0, respectively. The LLR values serve as the de-mapped soft bits and are then used as the input to the channel decoder.
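A minimal sketch of the max-log soft-demapping in (31) is given below; the constellation, the bit labeling, and the test values are illustrative assumptions and not part of the system description.

```python
import numpy as np

def maxlog_llr(s_hat_k, gamma_k, constellation, bit_labels):
    """Max-log LLRs of (31) for one equalized symbol s_hat_k.

    constellation : complex array of candidate symbols alpha
    bit_labels    : (num_symbols, num_bits) 0/1 array giving each symbol's bit pattern
    """
    d2 = np.abs(s_hat_k - constellation)**2            # |s_hat_k - alpha|^2 for every alpha
    llrs = []
    for i in range(bit_labels.shape[1]):
        d2_bit0 = d2[bit_labels[:, i] == 0].min()      # min over S^0_{i,k}
        d2_bit1 = d2[bit_labels[:, i] == 1].min()      # min over S^1_{i,k}
        llrs.append(gamma_k * (d2_bit0 - d2_bit1))     # (31)
    return np.array(llrs)

# Illustrative QPSK example with an assumed Gray labeling.
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(maxlog_llr(0.7 + 0.6j, gamma_k=10.0, constellation=qpsk, bit_labels=labels))
```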
As we can see from (25), together with (22) and (30), the SINR at each subcarrier depends on the clipping ratio κ. If κ is larger, the clipping noise is smaller; at the same time, however, the interference becomes stronger. We now derive a closed-form expression for the average SINR so that an optimum κ maximizing the average SINR can be found. Let the average power of each channel gain in \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) be normalized to one. Then, the average channel gain for the EZF equalizer can be obtained as
$$ \begin{aligned} E\{|\left(H_{1,k}+H_{2,k}\right)+j\mu\left(H_{1,k}-H_{2,k}\right)|^{2}\}= 2+2\rho+2\mu^{2}(1-\rho), \end{aligned} $$
(32)
where \(\rho =E\{H_{1,k}H_{2,k}^{*}\}\) denotes the antenna correlation. Substituting (20), (26), (30), and (32) into (25) and taking the expectation, we can obtain the average SINR (i.e., \(E\{\gamma_{k}\}\)), denoted by \(\text{SINR}_{a}\), of the CL-LINC-OFDM system with the EZF equalizer as
$$ {\begin{aligned} \text{SINR}_{a}= \frac{\bar{\rho}\left[1-e^{-\kappa^{2}}+\frac{\sqrt{\pi}\kappa}{2}\text{erfc}\left(\kappa\right)\right]^{2}}{\left(1-\rho\right)\left(\kappa^{2}+\mu^{2}+\frac{0.9\mu\sqrt{\pi}}{\kappa}-\kappa\mu\sqrt{\pi}-1\right)+ \bar{\rho}\left[e^{-\kappa^{2}}-\kappa\sqrt{\pi}\text{erfc}\left(\kappa\right)\right]+2\text{SNR}^{-1}}, \end{aligned}} $$
(33)
where SNR=\(E_{s}/{\sigma ^{2}_{n}}\) and \(\bar {\rho }=\left [1+\rho +\mu ^{2}\left (1-\rho \right)\right ]\). Using the value of μ in (21), we can rewrite the average SINR as
$$ \begin{aligned} \text{SINR}_{a}= \frac{\tilde{\rho}\left[1-e^{-\kappa^{2}}+\frac{\sqrt{\pi}\kappa}{2}\text{erfc}\left(\kappa\right)\right]^{2}}{\left(1-\rho\right)\left[\kappa^{2}\left(1-\frac{\pi}{4}\right)+0.45\pi-1-\frac{0.81\pi}{4\kappa^{2}}\right]+ \tilde{\rho}\left[e^{-\kappa^{2}}-\kappa\sqrt{\pi}\text{erfc}\left(\kappa\right)\right]+2\text{SNR}^{-1}}, \end{aligned} $$
(34)
where \(\tilde {\rho }=\left [1+\rho +\pi \left (\frac {\kappa }{2}-\frac {0.45}{\kappa }\right)^{2}\left (1-\rho \right)\right ] \). As we can see, the average SINR is a function of ρ and κ. It can be shown that the average SINR is a concave function of κ. For each ρ, we can then find the optimum κ by a simple numerical search.
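The numerical search mentioned above is straightforward to implement; the following sketch evaluates (34) on a grid of κ (the correlation value, SNR, and grid are illustrative).

```python
import numpy as np
from scipy.special import erfc

def average_sinr(kappa, rho, snr):
    """Average SINR of (34) versus the clipping ratio kappa, for a real-valued
    antenna correlation rho and a linear-scale SNR."""
    sp = np.sqrt(np.pi)
    rho_t = 1 + rho + np.pi * (kappa / 2 - 0.45 / kappa)**2 * (1 - rho)   # tilde-rho
    num = rho_t * (1 - np.exp(-kappa**2) + 0.5 * sp * kappa * erfc(kappa))**2
    den = ((1 - rho) * (kappa**2 * (1 - np.pi / 4) + 0.45 * np.pi - 1
                        - 0.81 * np.pi / (4 * kappa**2))
           + rho_t * (np.exp(-kappa**2) - sp * kappa * erfc(kappa))
           + 2.0 / snr)
    return num / den

# Grid search for the optimum clipping ratio at rho = 0.9 and SNR = 20 dB.
kappas = np.linspace(1.0, 3.0, 401)
sinr_a = average_sinr(kappas, rho=0.9, snr=10**(20 / 10))
print("optimum kappa ~", kappas[np.argmax(sinr_a)])
```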