 Research
 Open Access
On the achievable rate of bandlimited continuous-time AWGN channels with 1-bit output quantization
EURASIP Journal on Wireless Communications and Networking volume 2021, Article number: 54 (2021)
Abstract
We consider a real continuous-time bandlimited additive white Gaussian noise channel with 1-bit output quantization. On such a channel the information is carried by the temporal distances of the zero-crossings of the transmit signal. We derive an approximate lower bound on the capacity by lower-bounding the mutual information rate for input signals with exponentially distributed zero-crossing distances, a sine-shaped transition waveform, and an average power constraint. The focus is on the behavior in the mid-to-high signal-to-noise ratio (SNR) regime above 10 dB. For hard bandlimited channels, the lower bound on the mutual information rate saturates as the SNR grows to infinity. For a given SNR, the loss with respect to the unquantized additive white Gaussian noise channel depends solely on the ratio of the channel bandwidth and the rate parameter of the exponential distribution. We complement these findings with an approximate upper bound on the mutual information rate for the specific signaling scheme. We show that both bounds are close in the SNR range of approximately 10–20 dB.
Introduction
In digital communications, we typically assume that the analog-to-digital converter (ADC) at the receiver provides a sufficiently fine-grained quantization of the magnitude of the received signal. In the present paper, we consider very short-range high-data-rate communication, where high carrier frequencies and large bandwidths are used. In such a scenario, the power consumption of the ADC becomes a major factor. The consumed energy per conversion step increases with the sampling rate [3], such that high-resolution ADCs become infeasible at the very high sampling rates required in the sub-THz regime. An exemplary application is wireless communication between computer boards within a server [4]. The above problem can be circumvented by using 1-bit quantization and oversampling of the received signal with respect to (w.r.t.) the Nyquist rate. One-bit quantization is fairly simple to realize as it requires neither an automatic gain control nor linear amplification at the receiver. The loss in amplitude information can be partly compensated by oversampling, such that one could say that quantization resolution of the signal magnitude is traded off against resolution in the time domain. Optimal communication over the resulting channel, including the ADC, requires a modulation and signaling scheme adapted to this specific channel. Since coarse quantization reduces the achievable rate and oversampling can partly compensate for this effect, the question is how much the channel capacity is degraded compared to an additive white Gaussian noise (AWGN) channel sampled at the Nyquist rate.
For the noise-free case, it was shown already in the early works by Gilbert [5] and Shamai [6] that oversampling of a bandlimited channel can increase the information rate w.r.t. Nyquist sampling. The latter lower-bounded the capacity by \(\log_2 (n + 1)\) [bits/Nyquist interval], where n is the oversampling factor w.r.t. Nyquist sampling. However, for assessing the performance of practical communication systems with oversampled 1-bit quantization, the finite-SNR performance is highly relevant. Regarding the low signal-to-noise ratio (SNR) domain, Koch and Lapidoth have shown in [7] that oversampling increases the capacity per unit-cost of bandlimited Gaussian channels with 1-bit output quantization. In [8] it has been shown that oversampling increases the achievable rate, based on the study of the generalized mutual information. Moreover, in [9] bounds on the achievable rate in a discrete-time scenario are studied, which are evaluated via simulation in [10] w.r.t. the 90% and 95% power containment bandwidth, in [11] considering hard bandlimitation, and in [12] w.r.t. a spectral mask. In some of these approaches, so-called faster-than-Nyquist (FTN) signaling is applied. FTN signaling is closely related to oversampling, as both increase the resolution of the grid on which the zero-crossings of the transmit signal can be placed. In addition, in [13] the capacity of coarsely quantized systems under Nyquist signaling is studied.
However, an analytical evaluation of the channel capacity of the 1-bit quantized oversampled AWGN channel in the mid-to-high SNR domain is still open. This capacity depends on the oversampling factor since, due to the 1-bit quantization, Nyquist sampling, like any other sampling rate, does not provide a sufficient statistic. This means that the samples do not contain the entire information on the input signal that is contained in the continuous-time receive signal. In the present paper, we study the capacity of the underlying continuous-time channel, which can be interpreted as the limiting case of increasing the oversampling rate to infinity. As for the capacity of the AWGN channel given by Shannon [14], without time quantization there is no quantization in the information-carrying dimension. With our approach, we aim for a better understanding of the difference between using the magnitude domain versus the time domain for signaling. As the continuous-time additive noise channel with 1-bit output quantization carries the information in the zero-crossings of the transmit signal, this channel corresponds to some extent to a timing channel as, e.g., studied in [15].
For the outlined scenario of short-range multi-gigabit/s communication, e.g., for inter-board communication, a link budget calculation in [4] yields a minimum receive SNR of 13.6 dB. We therefore focus on the mid-to-high SNR domain above 10 dB. This requires different bounding techniques than in the low-SNR region, since the nonlinear effects of the 1-bit quantization are more dominant. The main contributions and results of the paper are as follows:

We derive approximate lower and upper bounds on the mutual information rate of the real and bandlimited continuous-time additive Gaussian noise channel with 1-bit output quantization under an average power constraint. We base our derivation on a class of signals with exponentially distributed zero-crossing distances at the input and a sine-shaped transition waveform at the zero-crossings. We show that the main error events to be considered are insertions and shifts of the zero-crossing time instants of the transmit signal.

We provide approximations that enable closed-form bounding of the mutual information rate and analyze their validity regions. The approximations are suitable in the mid-to-high SNR domain above 10 dB and are summarized in Sect. 8. A central assumption in this regard is that the intersymbol interference (ISI) due to bandlimitation is treated as noise.

We observe that for a given SNR the ratio between the derived lower bound and the AWGN capacity depends solely on the ratio of the channel bandwidth and the rate parameter of the exponential distribution. The lower bound on the mutual information rate is maximized if this ratio is approximately 0.75. The derived lower bound on the mutual information rate saturates over the SNR only if a hard bandlimitation is considered. For the sine-shaped transition waveform this yields ca. 1.54 bit/s/Hz.

The upper and lower bounds are close in the mid-to-high SNR regime, i.e., in the SNR range of approximately 10 to 20 dB. Treating ISI as noise is the dominating error effect for SNRs above 20 dB, while insertions are the dominant error effect below approximately 12 dB.

Compared to the capacity results under 1-bit quantization and Nyquist signaling in [13], we observe that an increase of at least 50% in achievable rate is possible in the high-SNR domain. A practical binary 2-fold FTN signaling scheme in [11] already shows a gain of ca. 35% w.r.t. [13].
In the present work we assume that the receiver is perfectly synchronized to the transmitter. In practice, however, channel parameter estimation and synchronization based on 1-bit quantized channel output samples is an active area of research: [16] studies bounds on the achievable timing, phase, and frequency estimation performance, and phase and frequency estimators were derived, e.g., in [17] and [18]. Under perfect synchronization the complex baseband channel can be decomposed into two real AWGN channels, such that we consider a real channel.
The paper is organized as follows. In Sect. 2, the system model is given. In Sect. 3 we introduce the relevant types of error events and model the impact of filtering, especially the ISI. Based on this, an upper and a lower bound on the mutual information rate are given in Sect. 4 and analyzed in detail in Sects. 5 and 6. In Sect. 7 we give the final form of the upper and the lower bound on the mutual information rate and discuss their behavior depending on various channel parameters. Section 8 provides the conclusion of our findings.
We apply the following notation: vectors are set in bold, random variables in sans serif. Thus, \(\varvec{\mathsf {X}}^{(K)}\) is a random vector of length K. Omitting the superscript denotes the corresponding random process \(\varvec{\mathsf {X}}\) for \(K \rightarrow \infty\). For information measures, \((\cdot )'\) denotes the corresponding rate. Furthermore, \((a)^{+}\) denotes the maximum of a and zero.
System model
We consider the baseband system model depicted in Fig. 1, transmitting over a real AWGN channel. A receiver relying on 1-bit quantization can only distinguish whether the level of its input signal is smaller or larger than zero. Hence, all information that can be conveyed through such a channel must be recovered from the sequence of time instants of the zero-crossings (ZCs). In order to model this, we consider as channel input and output the vectors \({{\varvec {\mathsf{{A}}}}}^{(K)} = [\mathsf {A}_1,...,\mathsf {A}_k,...,\mathsf {A}_K]^T\) and \({{\varvec {\mathsf{{D}}}}}^{(M)}=[\mathsf {D}_1,...,\mathsf {D}_m,...,\mathsf {D}_M]^T\), which contain the temporal distances \(\mathsf {A}_k\) and \(\mathsf {D}_m\) of two consecutive zero-crossings of \(\mathsf {x}(t)\) and of the received signal \(\mathsf {r}(t)\), respectively. Here K is not necessarily equal to M, as noise can add or remove zero-crossings. For the analysis in this work, it is assumed that the time instants of the zero-crossings can be resolved with infinite precision, which makes \(\mathsf {A}_k\) and \(\mathsf {D}_m\) continuous random variables. The mapper converts the random vector \({{\varvec {\mathsf{{A}}}}}^{(K)}\) into the continuous-time transmit signal \(\mathsf {x}(t)\), which is then lowpass-filtered with one-sided bandwidth W and transmitted over an AWGN channel. At the receiver, lowpass-filtering with one-sided bandwidth W ensures bandlimitation of the noise, and the demapper realizes the conversion between the noisy received signal \(\mathsf {r}(t)\) and the sequence \({{\varvec {\mathsf{{D}}}}}^{(M)}\) of zero-crossing distances. The continuous-time 1-bit ADC can hereby be understood as a pre-stage to the zero-crossing detector, underlining the fact that the amplitude information is not available for signal processing.
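As an illustration of the demapper stage, the following sketch (our own, not from the paper; it is sampling-based, whereas the paper assumes infinitely precise time resolution) extracts zero-crossing distances from a sign-quantized waveform:

```python
import numpy as np

def zero_crossing_distances(r, dt):
    """Demapper sketch: 1-bit quantize a sampled waveform r (sample spacing dt)
    and return the temporal distances D_m between consecutive zero-crossings."""
    s = np.where(r >= 0, 1, -1)        # Q(x): 1 for x >= 0, -1 for x < 0
    idx = np.flatnonzero(np.diff(s))   # indices after which the sign flips
    t_zc = (idx + 0.5) * dt            # crossing times (midpoint approximation)
    return np.diff(t_zc)

# toy check: cos(2*pi*t) crosses zero every 0.5 s
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
print(zero_crossing_distances(np.cos(2 * np.pi * t), dt))  # all close to 0.5
```

The midpoint approximation limits the timing resolution to the sample spacing dt, which is exactly the limitation that the continuous-time model of the paper removes.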
Signal structure and input distribution
Figure 2 illustrates the mapping of the input sequence \({{\varvec {\mathsf{{A}}}}}^{(K)}\) to \(\mathsf {x}(t)\), which alternates between two levels \(\pm \sqrt{\hat{P}}\), where \(\hat{P}\) is the peak power of \(\mathsf {x}(t)\). The kth transition between the levels \(\pm \sqrt{\hat{P}}\) begins at time
and crosses zero at time \(\mathsf {T}'_k\). Without loss of generality, we assume \(t_0=0\). The input symbols \(\mathsf {A}_k\) correspond to the temporal distances between the kth and the \((k-1)\)th zero-crossing of \(\mathsf {x}(t)\). We consider the \(\mathsf {A}_k\) to be independent and identically distributed (i.i.d.) based on an exponential distribution, i.e.,
since the exponential distribution maximizes the entropy for positive continuous random variables with given mean. Here, \({\mathbb{1}}_{[u,v]}(x)\) is the indicator function, being one in the interval [u, v] and zero otherwise. This results in a mean symbol duration of
a variance of the input symbols of
and a Gamma distribution of the \(\mathsf {T}_k\), or of any other sum of the \(\mathsf {A}_k\), respectively,
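Numerically, such an input sequence can be generated as a β-shifted exponential. The displayed pdf is not included in this version, but the construction below follows the stated constraints (exponential with rate λ, minimum symbol duration β, hence mean symbol duration 1/λ + β); the parameter values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, beta = 2.0, 0.25          # rate parameter and transition time (illustrative)
K = 200_000

# A_k >= beta, exponentially distributed beyond the minimum duration
A = beta + rng.exponential(1.0 / lam, size=K)

T_avg = 1.0 / lam + beta       # mean symbol duration
print(A.mean())                # close to T_avg = 0.75
print(A.var())                 # shift leaves the variance at 1 / lam**2 = 0.25
```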
In order to control the bandwidth of the channel input signal and match it to the channel, the transition from one level to the other is given by the waveform f(t), yielding the transmit signal
with the pulse shape
Here, f(t) is an odd function between \((-{\beta }/{2},-1)\) and \(({\beta }/{2},1)\) and zero otherwise, describing the transition of the signal. The transition time \(\beta\) is chosen according to the available channel bandwidth W with
Implications of this choice will be discussed in Sects. 4, 7, and Appendix A. With \(\beta\) being the minimal value of the \(\mathsf {A}_{k}\), it is guaranteed that \(\mathsf {x}(t)\) reaches the level \(\pm \sqrt{\hat{P}}\) between two transitions. This is not necessarily capacityachieving but simplifies the derivation of a lower bound on the mutual information rate. The resulting time instant of the kth zerocrossing is
The results throughout the paper are given for a sine halfwave as transition, i.e.,
In the limiting case of \(\lambda \rightarrow \infty\), this leads to a one-sided signal bandwidth of W. However, \(\mathsf {x}(t)\) is not strictly bandlimited, as a small portion of its energy lies outside of the interval \([-W, W]\). Strict bandlimitation is ensured by the lowpass (LP) filters at the transmitter and receiver, which are considered to be ideal LPs with one-sided bandwidth W and amplitude one. The normalized bandwidth
is an important design parameter, which relates the channel bandwidth to the rate parameter of the exponential input distribution as \(\kappa = {W}/{\lambda }\).
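The mapper of this section can be sketched as follows (our own discretized implementation with illustrative parameters; the paper works in continuous time): the signal alternates between the two levels and passes through each designed zero-crossing with a sine half-wave of duration β.

```python
import numpy as np

def transmit_signal(A, beta, P_hat, dt):
    """Map zero-crossing distances A_k to x(t): the signal alternates between
    +sqrt(P_hat) and -sqrt(P_hat), passing through each zero-crossing T'_k
    with a sine half-wave transition of duration beta."""
    Tz = np.cumsum(A)                                  # zero-crossing times T'_k
    t = np.arange(0.0, Tz[-1] + 1.0, dt)
    # level between transitions: + before the first crossing, flipping at each T'_k
    x = np.sqrt(P_hat) * (-1.0) ** np.searchsorted(Tz, t)
    # overwrite each transition interval [T'_k - beta/2, T'_k + beta/2]
    for k, tz in enumerate(Tz):
        m = np.abs(t - tz) <= beta / 2
        x[m] = (-1.0) ** (k + 1) * np.sqrt(P_hat) * np.sin(np.pi * (t[m] - tz) / beta)
    return t, x

t, x = transmit_signal(np.array([1.0, 0.7, 1.2]), beta=0.25, P_hat=1.0, dt=1e-3)
```

Between transitions the signal sits exactly at ±√P̂, and it crosses zero at the designed instants; the A_k chosen here all respect the minimum duration β.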
Channel model
The LP-filtered signal \(\hat{\mathsf {x}}(t)\) is transmitted over a continuous-time AWGN channel. The received signal after LP-filtering and quantization is given by
where \(Q(\cdot )\) is a binary quantizer with threshold zero, i.e., \(Q(x)=1\) if \(x\ge 0\) and \(Q(x)=-1\) if \(x<0\). Here, \(\hat{\mathsf {n}}(t)\) is the filtered version of the zero-mean additive white Gaussian noise \(\mathsf {n}(t)\) with power spectral density (PSD) \({N_0}/{2}\). Its variance is \(\sigma_{\hat{\mathsf {n}}}^2 = N_0 W\) and its PSD is given by
The filtered transmit signal \(\hat{\mathsf {x}}(t)\) is depicted in Fig. 2 and can be obtained by superposition, analogous to (6), of the filtered transmit pulses
where \({{\,\mathrm{Si}\,}}(\cdot )\) and \({{\,\mathrm{Ci}\,}}(\cdot )\) are the sine and cosine integral, respectively. Here, (14) can be obtained by a Fourier transform of g(t), yielding \(G(\omega )\). Hard bandlimitation limits the spectrum \(G(\omega )\) to \([-W,W]\), yielding \(\hat{G}(\omega )\), such that the inverse Fourier transform of \(\hat{G}(\omega )\) yields \(\hat{g}(t)\). An expression for \(G(\omega )\) is given below in (82) in general and in (19) for the sine waveform. The distortion between the signal \(\mathsf {x}(t)\), containing the designed sequence of zero-crossings, and \(\hat{\mathsf {x}}(t)\) is given by \(\tilde{\mathsf {x}}(t) = \hat{\mathsf {x}}(t) - \mathsf {x}(t)\), which has the variance
where \(S_{\mathsf {x}}(\omega )\) is the PSD of \(\mathsf {x}(t)\). The transmit power of the system is thus given as \(P_{\hat{\mathsf {x}}} = P - {\sigma }^2_{\tilde{\mathsf {x}}}\), where P is the average power of \(\mathsf {x}(t)\). It is given by
Note that despite the deterministic nature of the filtering, it is not yet clear how to account for the information contained in the ISI in the derivation of bounds on the mutual information rate. For the purpose of lower-bounding the mutual information rate, we thus treat the ISI as noise, which is discussed in more detail in the corresponding sections. An upper bound on the achievable rate is constructed by not considering the filter distortion. Furthermore, we cannot evaluate the exact transmit power \(P_{\hat{\mathsf {x}}}\), as we only obtain an upper and a lower bound on \(\sigma ^2_{\tilde{\mathsf {x}}}\), cf. Sect. 3. Thus, we define the SNR w.r.t. \(\mathsf {x}(t)\) as
Error events and filtering
In this section, we introduce the relevant error events that occur in the system described above. Furthermore, we define signal parameters required for the further analysis, especially w.r.t. the statistics of the ISI. As discussed above, we treat the ISI as noise in order to obtain analytical bounds on the mutual information rate. To quantify the impact of the ISI caused by filtering, we require a tractable model of the ISI. Therefore, we approximate the probability density function (pdf) of the ISI in this section.
Error events
Transmitting the signal \(\mathsf {x}(t)\) over the channel described in the previous section, including LP distortion and AWGN, may cause three types of error events:

shifts of zero-crossings, leading to errors in the magnitude of the received symbol corresponding to \(\mathsf {A}_k\),

insertions of zero-crossings, causing insertions of received symbols,

deletions of zero-crossing pairs, leading to deletions of received symbols.
For channels with insertions and deletions, to the best of our knowledge, only capacity bounds for binary channels are available, e.g., [19,20,21,22]. In (8), we match the transition time \(\beta\) of the input sequence to the channel bandwidth. Thus, the filtered noise process at time instants spaced by a temporal distance larger than \(\beta\) can be assumed to be uncorrelated, and the possibility of a noise event deleting two consecutive zero-crossings, and hence an entire symbol, can be neglected. This argument is supported by the simulation results presented in Appendix A.
Thus, the error events to be considered are shifts and insertions of zero-crossings. Insertions are synchronization errors that prevent the receiver from correctly identifying the beginning of a transmit symbol. Dobrushin has proven information stability and Shannon's coding theorem for channels with synchronization errors given discrete and finite random variables [23], although to him "it appears that these restrictions are not essential". For the case of the continuous random processes \({{\varvec {\mathsf{{A}}}}}\) and \({{\varvec {\mathsf{{D}}}}}\), this proof remains for future work.
In order to analyze the achievable rate, we use the temporal separation of the two error events (shifts and insertions of zero-crossings) to evaluate their impact separately. This separation is given as long as there is only one zero-crossing in each transition interval (TI) \(\left[ \mathsf {T}_k,\mathsf {T}_k+\beta \right]\). Since the noise is bandlimited with bandwidth W, which is matched to the length \(\beta\) of one TI, cf. (8), the dynamics of the noise within the TI are limited. Thus, in the mid-to-high SNR regime, multiple zero-crossings per TI occur only with very small probability. Numerical evaluations of curve-crossing problems for Gaussian random processes support this argument for SNRs above 5 dB, cf. Appendix B. For this analysis, the distribution of the distortion \(\tilde{\mathsf {x}}(t)\) is assumed to be Gaussian, which is justified below in Sect. 3.3.
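A back-of-the-envelope check (ours, not the paper's derivation, which additionally accounts for ISI and the noise correlation within β) of why inserted zero-crossings die out in the mid-to-high SNR regime: if the hold level is taken as √SNR times the noise standard deviation, the per-sample probability that Gaussian noise alone pushes a hold-level sample across zero is the Gaussian tail Q(√SNR):

```python
import math

def p_flip(snr_db):
    """Gaussian tail probability Q(sqrt(SNR)) that noise alone pushes a
    hold-level sample of the signal across zero (a crude per-sample proxy
    for an inserted zero-crossing)."""
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr / 2.0))

for snr_db in (5, 10, 15, 20):
    print(snr_db, p_flip(snr_db))
```

The steep decay of this tail with SNR is consistent with the observation above that insertions are the dominant error effect only below roughly 12 dB.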
Some signal properties induced by filtering
One important parameter to quantify the ISI is the variance \(\sigma ^2_{\tilde{\mathsf {x}}}\) of the LP distortion, cf. (15). In order to evaluate (15), we require information on the spectrum \(S_{\mathsf {X}}(\omega )\). In Appendix C, we show that for \(\omega \ne 0\)
with \(c(\omega ) = \frac{1}{\sqrt{1+\omega ^{2} \lambda ^{-2}} - 1}\). For the sine waveform introduced in (10), we have
With this, (15), and \(\Gamma = \int \nolimits_{2 \pi W}^{\infty } \left| G(\omega )\right| ^2 \,\hbox {d}\omega\), we can bound \(\sigma ^2_{\tilde{\mathsf {x}}}\) by
In order to obtain (20), one further bounding step is applied. Note that \(c(\omega )\) is monotonically decreasing w.r.t. \(|\omega |\) and, hence, for all \(|\omega | \ge 2\pi W\) it holds that \(c(\omega ) \le c(2 \pi W) = c_1\). Given the sine waveform in (10), we have \(\Gamma = \frac{\beta c_0}{2 \pi }\), such that
where \(c_0 = - 3\gamma - 3 \log (2\pi ) + 3 {{\,\mathrm{Ci}\,}}(2 \pi ) - \pi ^2 + 4 \pi {{\,\mathrm{Si}\,}}(\pi ) - \pi {{\,\mathrm{Si}\,}}(2\pi )\), with \(\gamma \approx 0.5772\) being the Euler-Mascheroni constant.
Later on, we will also require the parameter \(s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(0)\), which is the second derivative of the autocorrelation function (ACF) of \(\tilde{\mathsf {x}}(t)\) at \(t=0\), see Sects. 3.3 and 6. The ACF of the lowpass distortion \(\tilde{\mathsf {x}}(t)\) is given by
such that for its second derivative it can be written
where the exchangeability of differentiation and integration has been shown via Lebesgue's dominated convergence theorem [24, Theorem 1.34] with the dominating function \(g(\omega ) = \omega ^2 S_{\mathsf {X}}(\omega )\). Due to \(\frac{\partial ^2}{\partial \tau ^2} \cos (\omega \tau ) \big|_{\tau =0} = - \omega ^2\) in (23) and since \(S_{\mathsf {X}}(\omega )\) is nonnegative for all \(\omega\), an upper bound on \(S_{\mathsf {X}}(\omega )\) results in a lower bound on \(s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(0)\). We use (18), and for the sine waveform in (10) we have \(\int_{{2 \pi W}}^{\infty } \omega ^2 \left| G(\omega )\right| ^2 \,{\hbox {d} \omega } = \frac{\pi c_2}{2 \beta }\) with \(c_2 = \pi ^2 - \gamma - \log (2 \pi ) - \pi {{\,\mathrm{Si}\,}}(2\pi ) + {{\,\mathrm{Ci}\,}}(2 \pi )\). This yields
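As a numerical sanity check of the constants \(c_0\) and \(c_2\) (with minus signs restored where the source text lost them; treat the signs as our reading of the source), the sketch below evaluates Si and Ci by stdlib-only quadrature and confirms both constants are positive, as the bounds require:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def _simpson(f, a, b, n=2000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def Si(x):  # sine integral: integral of sin(u)/u from 0 to x
    return _simpson(lambda u: math.sin(u) / u if u else 1.0, 0.0, x)

def Ci(x):  # cosine integral: gamma + ln x + integral of (cos(u)-1)/u from 0 to x
    return GAMMA + math.log(x) + _simpson(lambda u: (math.cos(u) - 1.0) / u if u else 0.0, 0.0, x)

c0 = (-3 * GAMMA - 3 * math.log(2 * math.pi) + 3 * Ci(2 * math.pi)
      - math.pi ** 2 + 4 * math.pi * Si(math.pi) - math.pi * Si(2 * math.pi))
c2 = math.pi ** 2 - GAMMA - math.log(2 * math.pi) - math.pi * Si(2 * math.pi) + Ci(2 * math.pi)

print(c0, c2)  # both positive, as needed for Gamma = beta*c0/(2*pi) and pi*c2/(2*beta)
```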
Furthermore, the description of the filtered pulse \(\hat{g}(t)\) can be tedious, since for \(t>\beta\) the pulse \(\hat{g}(t)\) exhibits the typical ringing, which is difficult to characterize compactly. The value
represents the lowest signal level of \(\hat{g}(t)\) for \(t>\beta\), cf. Fig. 3, and thus can serve as a lower bound on \(\hat{g}(t)\) for \(t>\beta\). A simplified description of the transition can be obtained by using the slope of \(\hat{g}(t)\) at \(t={\beta }/{2}\), which corresponds to the slope of the filtered version \(\hat{f}(t)\) of \(f(t)\) at \(t=0\). Thus, with \(\hat{f}(t) = \hat{g}(t+{\beta }/{2})-1\) for \(-\beta /2 \le t\le \beta /2\), we have
and we can define an approximated version of \(\hat{g}(t)\) as
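The ringing of \(\hat{g}(t)\) and the level u can be explored numerically. The sketch below is ours and makes two assumptions not fixed in this excerpt: the unfiltered pulse is normalized to swing from 0 to 2 (i.e., \(\hat P = 1\)), and the bandwidth matching of (8) is taken as \(W = 1/(2\beta)\):

```python
import numpy as np

beta = 1.0
W = 1.0 / (2.0 * beta)     # assumed matching of transition time and bandwidth
dt = beta / 100.0

t = np.arange(-30.0 * beta, 30.0 * beta, dt)
# designed pulse g(t): 0 -> 2 transition via the sine half-wave f(t) of (10)
g = np.where(t < 0.0, 0.0,
    np.where(t > beta, 2.0, 1.0 + np.sin(np.pi * (t - beta / 2.0) / beta)))

# ideal lowpass: convolve (g - 1) with a truncated sinc impulse response
th = np.arange(-20.0 * beta, 20.0 * beta, dt)
h = 2.0 * W * np.sinc(2.0 * W * th) * dt
g_hat = np.convolve(g - 1.0, h, mode="same") + 1.0

# u: lowest level of g_hat(t) - 1 after the transition, evaluated away from
# the array edges where the truncated convolution is unreliable
sel = (t > beta) & (t < 10.0 * beta)
u = np.min(g_hat[sel] - 1.0)
print(u)   # the settled level is 1; the ringing dips below it but stays positive
```

Under these assumptions u comes out in the rough vicinity of the value \(u \approx 0.81 \sqrt{\hat P}\) used in Sect. 3.3; the exact number depends on the normalization of (8), which is not reproduced in this excerpt.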
Probability distribution of the ISI
Our approach to approximating the ISI distribution is shown in Fig. 3. It depicts the designed waveform g(t), the transmit waveform \(\hat{g}(t)\), and the approximation \(\hat{g}_{\rm {appr}}(t)\). The original sequence \(\mathsf {x}(t)\) is designed such that there is no ISI. Due to LP-filtering, \(\hat{g}(t)\) shows the typical ringing, such that, depending on the temporal distances between the pulses given by the data symbols \(\mathsf {A}_k\), interference occurs. Starting from \(\hat{g}_{\rm {appr}}(t)\), we already have a characterization of the impact of filtering on the pulse starting at \(\mathsf {T}_k\), which we refer to as the kth pulse. It remains to characterize the ISI generated by all neighboring pulses. Due to the separability of the error events, cf. Sect. 3.1, we divide the time interval belonging to the kth pulse into a TI and a hold period (HP).
In general, the interfering signal \(\tilde{\mathsf {x}}(t_k)\) at any time \(t_k \in [\mathsf {T}_k,\mathsf {T}_{k+1}]\), i.e., either TI or HP, can be represented by the sum of the ISI contributions of all other pulses as
The \(\tilde{\mathsf {x}}_l(t_k)\) are obtained via a deterministic mapping using \(\tilde{g}(t) = \hat{g}(t)-g(t)\) as
where \(\tilde{t}_k = t_k-\mathsf {T}_k\) and the sums \(\mathsf {L}_{n+1}^m = \sum \nolimits_{i=n+1}^{m} \mathsf {A}_i\), \(m>n\), follow the Gamma distribution in (5). Since the \(\mathsf {A}_k\) are i.i.d., \(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k)\) and \(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k)\) are independent, such that
Unfortunately, \(\tilde{g}(t)\) cannot be inverted, which makes the analytical derivation of \(p(\tilde{\mathsf {x}}(t_k))\) infeasible. Furthermore, the \(\tilde{\mathsf {x}}_l(t_k)\) for \(l>k\) and \(l<k\), respectively, are not independent, such that the overall distributions \(p(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k))\) and \(p(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k))\) cannot be obtained by a simple convolution of the densities \(p(\tilde{\mathsf {x}}_l(t_k))\). Thus, we obtain an empirical distribution by analyzing \(10^5\) sequences \({{\varvec {\mathsf{{A}}}}}^{(K)}\) with 100 interfering pulses each for \(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k)\) and \(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k)\). The results are depicted in Fig. 4. Given the symmetry, we only analyze the scenario of an upcrossing symbol, i.e., a positive transition slope as depicted in Fig. 3. Due to the temporal separation of the two error types, zero-crossing shifts and inserted zero-crossings, cf. Sect. 3.1, we discuss \(p(\tilde{\mathsf {x}}(t_k))\) separately for these two cases.
Case a) represents \(t_k = t_{k,\text {TI}} = \mathsf {T}'_k=\mathsf {T}_k+\frac{\beta }{2}\), i.e., the zero-crossing of the kth pulse of \(\mathsf {x}(t)\) in the TI. This corresponds to the zero-crossing-shift error analyzed to obtain the lower and the upper bound on the mutual information rate in Sect. 5. With (29), we have
From (31) it can be seen that interfering pulses separated by the same number of symbols from \(t_{k,\text {TI}}\), i.e., with the same probability distribution of \(\mathsf {L}_{n+1}^m\), are weighted with inverted signs. Thus, the convolution in (30) becomes an autocorrelation, such that we expect an even function with mean zero, as can be seen in Fig. 4a) for different values of \(\kappa\). Using the bounds on the variance of the ISI obtained below, cf. (34) and (21), we see that the Gaussian distribution is well suited for describing the effect of the ISI up to ratios \(\kappa = W/\lambda \lesssim 3\).
Case b) considers \(t_k = t_{k,\text {HP}}=\mathsf {T}_k + \frac{\mathsf {A}_{k+1}+\beta }{2}\), which lies in the middle of the hold period. This is the worst-case assumption for analyzing the impact of additional zero-crossings in Sect. 6: The kth pulse \(\hat{g}(t-\mathsf {T}_k)-1\) (green) is lower-bounded by its lowest value in the HP, \(u \approx 0.81 \sqrt{\hat{P}}\). For \(\mathsf {T}_k + \beta< t <t_{k,\text {HP}}\), the strongest interference comes from the \((k+1)\)th (red) pulse. Given the monotonically decreasing envelope of \(\tilde{g}(t)\) for \(t\ge {\beta }/{2}\), the interference of the \((k+1)\)th pulse can be highest at \(t_{k,\text {HP}}\). For \(t_{k,\text {HP}}<t<\mathsf {T}_{k+1}\), the scenario can be analyzed with the red pulse approximated by u and the green one as interferer. Thus, in the middle of the HP, (29) becomes
It can be seen that, in contrast to (31), the interferers with the same probability distribution of \(\mathsf {L}_{n+1}^m\) are now weighted with the same sign. For \(\mathsf {A}_{k} +\frac{\mathsf {A}_{k+1}}{2}\) and \(\frac{\mathsf {A}_{k+1}}{2}+\mathsf {A}_{k+2}\), these are the pulses \(k-1\) (orange) and \(k+2\) (violet). Thus, the convolution in (30) yields a function with a mean deviating from zero towards positive values, cf. Fig. 4b). However, given that we look at an upcrossing, i.e., \(\mathsf {x}(\mathsf {T}_k+\beta )>0\), for studying the additional zero-crossings the tail of the distribution towards negative \(\tilde{\mathsf {x}}\) is the significant one. As can be seen, the assumed Gaussian distribution with mean zero and the upper bound on the variance, cf. (34) and (21), has a heavier tail towards negative values than the actual distribution for \(\kappa \lesssim 3\) and, thus, enables us to give an upper bound on the probability of additional zero-crossings in that region of \(\kappa\).
The variances of \(\tilde{\mathsf {x}}\) in both cases, TI and HP, depend on the amount of energy \(\sigma ^2_{\tilde{\mathsf {x}}}\) removed by LP-filtering, for which we obtained bounds in Sect. 3.2. Besides the ISI, \(\sigma ^2_{\tilde{\mathsf {x}}}\) also captures the distortion of the current pulse, which is already considered in the approximation \(\hat{g}_{\rm {appr}}\), cf. Fig. 3. The portion of the energy of \(\tilde{g}(t)\) that contributes to ISI is the one for which \(t\ge t_{\rm {min}}\), where \(t_{\rm {min}}\) is the minimum temporal distance between an interfering pulse and \(t_{k,\text {TI}}\) or \(t_{k,\text {HP}}\), respectively. With \(\mathsf {A}_k \ge \beta\), (31), and (32), for the TI we have \(t_{\rm {min}} = \min (\mathsf {L}_{n+1}^m) +\frac{\beta }{2} = \beta + \frac{\beta }{2}\), while for the HP \(t_{\rm {min}} = \frac{\min (\mathsf {A}_{k+1})}{2}+\frac{\beta }{2} = \beta\) holds. Thus, we consider the fraction \(\alpha\) of \(\sigma ^2_{\tilde{\mathsf {x}}}\) contributing to ISI to be
Numerical evaluation of (33) leads to \(\alpha_{\rm {HP}}\approx 0.425\) and \(\alpha_{\rm {TI}}\approx 0.325\), such that
and
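One plausible numerical reading of the fraction \(\alpha\) (ours; the display of (33) is not included in this version) is the share of the energy of \(\tilde g(t)=\hat g(t)-g(t)\) lying farther than \(t_{\min}\) from the transition center, with \(t_{\min}=3\beta/2\) for the TI and \(t_{\min}=\beta\) for the HP. The sketch reuses the assumed normalizations \(W=1/(2\beta)\) and a 0-to-2 pulse:

```python
import numpy as np

beta = 1.0
W = 1.0 / (2.0 * beta)
dt = beta / 100.0

t = np.arange(-60.0 * beta, 60.0 * beta, dt)   # wide grid; edges discarded below
g = np.where(t < 0.0, 0.0,
    np.where(t > beta, 2.0, 1.0 + np.sin(np.pi * (t - beta / 2.0) / beta)))
th = np.arange(-20.0 * beta, 20.0 * beta, dt)
h = 2.0 * W * np.sinc(2.0 * W * th) * dt
g_tilde = np.convolve(g - 1.0, h, mode="same") + 1.0 - g   # LP distortion of one pulse

core = np.abs(t) <= 30.0 * beta                 # region free of truncation artifacts
e = np.where(core, g_tilde ** 2, 0.0)

def tail_fraction(t_min):
    # energy of g_tilde farther than t_min from the transition center t = beta/2
    return e[np.abs(t - beta / 2.0) >= t_min].sum() / e.sum()

alpha_TI = tail_fraction(1.5 * beta)   # t_min = 3*beta/2 for the TI
alpha_HP = tail_fraction(beta)         # t_min = beta for the HP
print(alpha_TI, alpha_HP)              # alpha_HP > alpha_TI, matching the ordering above
```

The absolute values depend on the assumed normalization; the paper's own evaluation of (33) gives \(\alpha_{\rm HP}\approx 0.425\) and \(\alpha_{\rm TI}\approx 0.325\). The ordering \(\alpha_{\rm HP} > \alpha_{\rm TI}\) follows directly from the smaller \(t_{\min}\) of the HP.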
Bounding the achievable rate
The capacity of a communication channel is the highest rate at which we can transmit over the channel with an arbitrarily small probability of error and is defined as
where the supremum is taken over all distributions of the input signal for which \(\hat{\mathsf {x}}(t)\) is constrained to the average power \(P-\sigma_{\tilde{\mathsf {x}}}^2\) and the bandwidth W. In (36) the mutual information rate is given by
with \(I\big ({{\varvec {\mathsf{{A}}}}}^{(K)};{{\varvec{\mathsf{{D}}}}}^{(M)}\big )\) being the mutual information. Despite the fact that M is a random variable that can be larger than K, both processes \({{\varvec{\mathsf{{A}}}}}^{(K)}\) and \({{\varvec {\mathsf{{D}}}}}^{(M)}\) occupy the same time interval, such that we define the mutual information rate based on a normalization with respect to the expected transmission time \(K T_{{\rm avg}}\). In the present paper, we derive a lower bound on the capacity by restricting ourselves to input signals as described in Sect. 2.1. However, later we will consider the supremum of \(I'\big ({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}\big )\) over the parameter \(\lambda\) of the distribution of the \(\mathsf {A}_k\) in (2). The AWGN capacity serves as an upper bound on the capacity of the considered system due to the data processing inequality. Furthermore, we derive an upper bound on the achievable rate of this specific signaling scheme in order to quantify the impact of the bounding steps taken.
We use the concept of a genie-aided receiver as in [21], which has information on the inserted zero-crossings contained in an auxiliary process \({{\varvec {\mathsf{{V}}}}}\). Based on \({{\varvec {\mathsf{{V}}}}}\), which is described below, the genie-aided receiver can remove the additional zero-crossings. Let \(\hat{{{\varvec {\mathsf{{D}}}}}}\) contain the temporal distances of the zero-crossings at the receiver when the additional zero-crossings are removed. The process \(\hat{{{\varvec {\mathsf{{D}}}}}}\) can be determined based on \({{\varvec {\mathsf{{{D}}}}}}\) and \({{\varvec {\mathsf{{V}}}}}\), such that the mutual information rate in case the receiver has side information about the inserted zero-crossings is given by
Using the chain rule of mutual information, we have
Here, \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) is the mutual information rate without the side information on additional zero-crossings at the receiver. The effect of the shifted zero-crossings is captured in \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\), and the impact of the inserted zero-crossings is described by \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}})\).
Given that \(I'({{\varvec{\mathsf{{A}}}}};{{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}})\ge 0\), since mutual information is always nonnegative, an upper bound on the mutual information rate \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) can be given independently of the nature of the auxiliary process \({{\varvec {\mathsf{{V}}}}}\) as
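The chain-rule decomposition and the resulting upper bound referenced in the two preceding paragraphs can be written out as follows (our reconstruction from the surrounding text, since the displayed equations are not included in this version):

```latex
\begin{align}
I'\big({\varvec{\mathsf{A}}};{\varvec{\mathsf{D}}},{\varvec{\mathsf{V}}}\big)
  &= I'\big({\varvec{\mathsf{A}}};{\varvec{\mathsf{D}}}\big)
   + I'\big({\varvec{\mathsf{A}}};{\varvec{\mathsf{V}}}\big|{\varvec{\mathsf{D}}}\big),\\
I'\big({\varvec{\mathsf{A}}};{\varvec{\mathsf{D}}}\big)
  &\le I'\big({\varvec{\mathsf{A}}};{\varvec{\mathsf{D}}},{\varvec{\mathsf{V}}}\big),
\end{align}
```

where the inequality follows by dropping the nonnegative conditional term \(I'\big({\varvec{\mathsf{A}}};{\varvec{\mathsf{V}}}\big|{\varvec{\mathsf{D}}}\big)\).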
For the characterization of the auxiliary process \({{\varvec {\mathsf{{V}}}}}\), consider the transmission of one input symbol \(\mathsf {A}_k\). Its bounding zero-crossings \(\mathsf {T}'_{k-1}\) and \(\mathsf {T}'_k\) will be shifted to \(\hat{\mathsf {T}}_{k-1}\) and \(\hat{\mathsf {T}}_k\) by the noise process, such that
where \(\mathsf {S}_k\) denotes the error introduced by the shift. Additionally inserted zero-crossings lead to a vector of received symbols \({{\varvec {\mathsf{{D}}}}}_k\) for every input symbol \(\mathsf {A}_k\). This error is reversible if the receiver knows which zero-crossings correspond to the originally transmitted ones: the receiver merely sums up the distances \(\mathsf {D}_m\) contained in \({{\varvec {\mathsf{{D}}}}}_k\) in order to remove the additional zero-crossings. Then, for every input symbol \({\mathsf {A}}_k\) we obtain a symbol \(\hat{\mathsf {D}}_k\) that only contains the error event of shifted zero-crossings, such that we have \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}=[\hat{\mathsf {D}}_1,...,\hat{\mathsf {D}}_k,...,\hat{\mathsf {D}}_K]\). Intuitively, such an algorithm would start with the first received symbol. Hence, instead of providing the receiver with the exact positions in time of the additional zero-crossings, it suffices to know for each transmit symbol \(\mathsf {A}_k\) how many received symbols have to be summed up to obtain \(\hat{\mathsf {D}}_k\) and, thus, the sequence \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}\). The auxiliary sequence \({{\varvec {\mathsf{{V}}}}}^{(K)}\) therefore consists of positive integers \(\mathsf {V}_k \in {\mathbb{N}}\), representing for each input symbol the number of corresponding output symbols. Thus, the auxiliary process \({{\varvec {\mathsf{{V}}}}}\) is discrete, which we use for lower-bounding the information rate in (39). With
being the entropy rate of the process \({{\varvec {\mathsf{{V}}}}}\), we have
where (43) results from the fact that the entropy rate of a discrete random process is nonnegative and (44) is due to the fact that conditioning cannot increase entropy. In the following, we will derive bounds on \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) and \(H'({{\varvec {\mathsf{{V}}}}})\).
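The genie-aided removal of inserted zero-crossings described above can be sketched in a few lines; the function name and data layout are illustrative assumptions, not from the paper.

```python
def merge_received_distances(d, v):
    """Remove inserted zero-crossings using genie side information.

    d: received zero-crossing distances D^(M), with M = sum(v)
    v: side information V^(K); v[k] is the number of received
       distances belonging to transmit symbol k (insertions + 1)
    Returns the cleaned sequence D_hat^(K) of length K = len(v).
    """
    assert sum(v) == len(d), "V must account for every received distance"
    d_hat, i = [], 0
    for v_k in v:
        # Summing the v_k distances collapses the inserted crossings,
        # leaving only the shift error of the original zero-crossing.
        d_hat.append(sum(d[i:i + v_k]))
        i += v_k
    return d_hat
```

For example, if the first symbol was split into three received distances by two inserted crossings, `merge_received_distances([2, 1, 1, 4], [3, 1])` returns `[4, 4]`.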
Achievable rate of the genie-aided receiver
To evaluate the mutual information rate \(I'({{\varvec {\mathsf{{A}}}}};\hat{{{\varvec {\mathsf{{D}}}}}}) = I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) of the genie-aided receiver, we have to evaluate the mutual information rate between the sequences of temporal spacings of zero-crossings \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}\). Note that, in contrast to the original channel, here both vectors \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}\) are of the same length, as additional zero-crossings are removed by the genie-aided receiver. The only remaining error is a shift \(\mathsf {S}_k\) of every zero-crossing instant \(\mathsf {T}'_k\) to \(\hat{\mathsf {T}}_k\). Hence, on a symbol level we can write with (41) and (1) for the channel output
In order to derive an upper and a lower bound on the mutual information rate of this channel, knowledge on the probability distribution of \(\mathsf {S}_k\) is required.
The distribution of the shifting errors
As \(\hat{\mathsf {x}}(t)\) is bandlimited, it can be completely described by a sampled representation with sampling rate \(1/\beta\) to fulfill the Nyquist condition, cf. (8). Note that we here refer to the concept of sampling only to evaluate the value of the overall distortion
at the time instant \(\mathsf {T}'_k\) of the original zero-crossing. We still assume that the receiver is able to resolve the zero-crossing instants with infinite resolution.
The distribution of the shifting error \(\mathsf {S}_{k}\) can be evaluated by mapping the pdf of the additive noise \(\mathsf {z}(\mathsf {T}'_k)\) at the time instant \(\mathsf {T}'_k\) into the zero-crossing error \(\mathsf {S}_{k}\) on the time axis. This is illustrated in Fig. 5: the noise, which varies slowly due to the bandlimitation, leads approximately to a shift of the filtered transmit waveform \(\hat{f}(t) = \hat{g}(t+{\beta }/{2}) - 1\). We assume small noise amplitudes as we are interested in the behavior in the mid-to-high SNR regime. The mapping can then be given as
Based on the mid-to-high SNR assumption, \(\mathsf {z}(\mathsf {T}'_k)\) is small, such that we can assume \(\mathsf {S}_k \ll \beta\). Thus, we can linearize \(\hat{f}(t)\) using its first-order Taylor approximation at \(t = 0\) as in (26). This corresponds to approximating g(t) by \(g_{\rm {appr}}(t)\) in the TI, cf. (27). We show in Appendix D that this is valid for ρ ≳ 10 dB.
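As a sanity check of this linearization, the following sketch compares the exact zero-crossing shift of a generic sine-shaped transition with the first-order approximation \(\mathsf{S}_k \approx -\mathsf{z}(\mathsf{T}'_k)/\hat{f}'(0)\). The waveform \(f(t) = A\sin(\pi t/\beta)\) and all numbers are illustrative assumptions, not the paper's exact filtered pulse.

```python
import math

def crossing_shift_exact(z, amp=1.0, beta=1.0):
    """Zero-crossing shift of the sine-shaped transition
    f(t) = amp * sin(pi * t / beta) under a constant noise
    offset z, found by bisection on f(t) + z = 0 near t = 0."""
    lo, hi = -beta / 2.0, beta / 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if amp * math.sin(math.pi * mid / beta) + z > 0.0:
            hi = mid   # f + z is increasing near t = 0
        else:
            lo = mid
    return 0.5 * (lo + hi)

def crossing_shift_linear(z, amp=1.0, beta=1.0):
    """First-order Taylor approximation S = -z / f'(0) with
    f'(0) = amp * pi / beta."""
    return -z * beta / (math.pi * amp)
```

For a noise offset of 5% of the transition amplitude, the exact and the linearized shift already agree to roughly four decimal places, which supports the small-noise assumption behind (26).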
In order to derive the pdf of \(\mathsf {S}_k\), we need the pdf of the additive noise \(\mathsf {z}(\mathsf {T}'_k)\), which is composed of two parts: the LP-filtered Gaussian noise \(\hat{\mathsf {n}}(\mathsf {T}'_k)\) and the ISI caused by the oscillation of the neighboring pulses \(\hat{g}(t)\) due to the LP-filtering. Again, due to the 1-bit quantization it is not clear how to take the information contained in the ISI into account when bounding the mutual information rate. Since the ISI affects the position of the zero-crossings, we model it as additional noise for the purpose of lower-bounding the mutual information rate. For the construction of an upper bound on the mutual information rate, the ISI is not considered and only \(\hat{\mathsf {n}}(\mathsf {T}'_k)\) contributes to \(\mathsf {z}(\mathsf {T}'_k)\).
In Sect. 3.3 we have shown that the Gaussian distribution is a good approximation of the distribution of \(\tilde{\mathsf {x}}(t)\) for ratios \(\kappa ={W}/{\lambda }\) on the order of one. These will prove to be the relevant ones in the scenarios considered in this paper. We thus model
with
where \(\sigma_{{\rm ISI}}^2\) is the variance of the ISI. We then have
Hence, in the mid-to-high SNR case the zero-crossing errors \(\mathsf {S}_{k}\) are approximately Gaussian distributed, i.e., \(\mathsf {S}_{k}\sim \mathcal {N}\left( 0,\sigma_{\mathsf {S}}^{2}\right)\) with
Upper bound on the achievable rate of the genie-aided receiver
Given \(\mathsf {S}_{k}\sim \mathcal {N}\left( 0,\sigma_{\mathsf {S}}^{2}\right)\), cf. (52), and using \(\sigma_{\mathsf {z}}^2=\sigma_{\hat{\mathsf{n}}}^2\), cf. (49), we have for \(\mathsf {\Delta }_k = \mathsf {S}_k - \mathsf {S}_{k-1}\) in (45)
as the \(\mathsf {S}_k\) are approximately independent, given that the minimum temporal distance of the zero-crossings is \(\beta\), cf. (8). The values \(\mathsf {\Delta }_k\) are correlated as each depends on the current and the previous \(\mathsf {S}_k\), such that the ACF of \(\varvec{\Delta }\) is given by
which yields for the covariance matrix \(\mathbf {R}^{(K)}_{\mathsf {\Delta }}\) of the zero-crossing shifts \(\varvec{\Delta }^{(K)}=[\mathsf {\Delta }_{1},...,\mathsf {\Delta }_K]^T\)
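The stated MA(1)-type correlation structure of \(\varvec{\Delta}\), i.e., \(r_{\Delta}(0)=2\sigma_{\mathsf S}^2\), \(r_{\Delta}(\pm 1)=-\sigma_{\mathsf S}^2\), and zero otherwise, can be verified with a quick Monte Carlo sketch under the i.i.d. Gaussian shift assumption; the numbers are illustrative.

```python
import random

random.seed(7)
sigma_s = 0.3                       # standard deviation of the shifts S_k
n = 200_000
s = [random.gauss(0.0, sigma_s) for _ in range(n)]
delta = [s[k] - s[k - 1] for k in range(1, n)]   # Delta_k = S_k - S_{k-1}

def acf(x, lag):
    """Empirical autocorrelation of x at the given lag."""
    m = len(x) - lag
    return sum(x[k] * x[k + lag] for k in range(m)) / m

r0, r1, r2 = acf(delta, 0), acf(delta, 1), acf(delta, 2)
```

With \(\sigma_{\mathsf S}^2 = 0.09\), the estimates come out close to \(0.18\), \(-0.09\), and \(0\), matching \(2\sigma_{\mathsf S}^2\), \(-\sigma_{\mathsf S}^2\), and the vanishing higher lags.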
Hence, the channel with the genie-aided receiver is a colored additive Gaussian noise channel with input \({{\varvec {\mathsf{{A}}}}}\), output \(\hat{{{\varvec {\mathsf{{D}}}}}}\), and noise \(\varvec{\Delta }\), which is independent of \({{\varvec {\mathsf{{A}}}}}\), cf. (49). The capacity of the colored additive Gaussian noise channel is achieved for Gaussian distributed input symbols [25, Chapter 9, Eq. (9.97)] and provides an upper bound on the mutual information rate of the channel with the genie-aided receiver and the chosen input distribution. Thus, we get
where \(\nu\) is chosen such that
with \(\sigma_{\mathsf {A}}^2\) given in (4). Moreover, \(S_{\mathsf {\Delta}}(f)\) is the PSD of \(\varvec{\Delta }\), given by the z-transform of (54) as
Although \(S_{\mathsf {\Delta}}(f)\) is equal to zero for \(f = 0\), it can be shown that the integral in (56) exists, using that \(\nu \ge (\nu - S_{\mathsf {\Delta }}(f))^{+}\,\forall f\) and solving
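Numerically, the water-filling solution of (56) and (57) can be sketched as follows, with \(S_{\Delta}(f)=2\sigma_{\mathsf S}^2(1-\cos 2\pi f)\) from (58) evaluated on a midpoint frequency grid, which also keeps the evaluation away from the zero of the PSD at \(f=0\). The grid size and the parameter values are our own choices, not from the paper.

```python
import math

def water_filling(sigma_s2, power, n_grid=20_000):
    """Water level nu and rate (bits per symbol) for a colored
    Gaussian noise channel with PSD S(f) = 2*sigma_s2*(1 - cos(2*pi*f)),
    f in (-1/2, 1/2), under the constraint
    integral of (nu - S(f))^+ df = power."""
    freqs = [(k + 0.5) / n_grid - 0.5 for k in range(n_grid)]
    psd = [2.0 * sigma_s2 * (1.0 - math.cos(2.0 * math.pi * f)) for f in freqs]

    def used_power(nu):
        return sum(max(nu - s, 0.0) for s in psd) / n_grid

    lo, hi = 0.0, power + 4.0 * sigma_s2   # nu never exceeds power + max S(f)
    for _ in range(100):                   # bisection on the water level
        nu = 0.5 * (lo + hi)
        if used_power(nu) > power:
            hi = nu
        else:
            lo = nu
    rate = sum(0.5 * math.log2(1.0 + max(nu - s, 0.0) / s) for s in psd) / n_grid
    return nu, rate
```

As expected, shrinking the noise PSD raises the water-filling rate; for \(\sigma_{\mathsf S}^2 = 0.01\) and unit power, the entire band is filled and the water level settles at \(\nu = \sigma_{\mathsf A}^2 + 2\sigma_{\mathsf S}^2\).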
Lower bound on the achievable rate of the genie-aided receiver
For the genie-aided receiver, the mutual information between \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \({\hat{\varvec {\mathsf{D}}}}^{(K)}\) is given by
where \(h(\cdot )\) denotes the differential entropy. Moreover, \(\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\) is the linear minimum mean-squared error estimate of \({{\varvec {\mathsf{{A}}}}}^{(K)}\) based on \({\hat{\varvec {\mathsf {D}}}}^{(K)}\). Equality (60) follows from the facts that the addition of a constant does not change differential entropy and that \(\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\) can be treated as a constant when conditioning on \({\hat{\varvec{\mathsf {D}}}}^{(K)}\), as it is a deterministic function of \({\hat{\varvec{\mathsf {D}}}}^{(K)}\).
Next, we will upper-bound the second term on the RHS of (60), i.e., \(h\big ({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big|{\hat{\varvec{\mathsf {D}}}}^{(K)}\big )\). This term describes the randomness of the linear minimum mean-squared estimation error when estimating \({{\varvec {\mathsf{{A}}}}}^{(K)}\) based on the observation \({\hat{\varvec{\mathsf {D}}}}^{(K)}\). It can be upper-bounded by the differential entropy of a Gaussian random variable having the same covariance matrix [25, Theorem 8.6.5]. With (45), the estimation error covariance matrix of the linear minimum mean-squared error (LMMSE) estimator is given by
where all covariance matrices \(\mathbf {Q}\) are of dimension \(K\times K\) and \(\mathbf {Q}_{\mathsf {A}} = \sigma_{\mathsf {A}}^2 \mathbf {I}^{(K)}\). Furthermore, \(\mathbf {I}^{(K)}\) is the identity matrix of size \(K\times K\), \(\sigma_{\mathsf {A}}^2\) is given in (4), \(\mathbf {Q}_{\mathsf {A\Delta}} = {{\,{\mathbb{E}}\,}}[{{\varvec {\mathsf{{A}}}}}^{(K)}(\varvec{\Delta }^{(K)})^T]\), and \(\mathbf {Q}_{\mathsf {\Delta}} = {{\,{\mathbb{E}}\,}}[\varvec{\Delta }^{(K)}(\varvec{\Delta }^{(K)})^T]\).
Ignoring the correlation between \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \(\varvec{\Delta }^{(K)}\) corresponds to \(\mathbf {Q}_{\mathsf {A \Delta}} = \varvec{0}\), and neglecting the correlation between the ISI samples \(\tilde{\mathsf {x}}(\mathsf {T}'_k)\), \(k=1,...,K\), is equivalent to \(\mathbf {Q}_{\mathsf {\Delta }} = \mathbf {R}_{\mathsf {\Delta }}\), with \(\mathbf {R}_{\mathsf {\Delta}}\) given in (55). We show in Appendix E that this results in an upper bound on \(h\big({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big|{\hat{\varvec{\mathsf {D}}}}^{(K)}\big)\) which is given by
with
This yields the following lower bound for the mutual information in (60)
The first term of (65) follows from the independence of the elements of \({{\varvec {\mathsf{{A}}}}}^{(K)}\), and for the second term we have used (64) and the matrix inversion lemma. With (65), the mutual information rate in (38) is lower-bounded by
where for (66) we have used Szegö’s theorem on the asymptotic eigenvalue distribution of Hermitian Toeplitz matrices [26, pp. 64–65], [27]. Here, \(S_{\mathsf {\Delta }}(f)\) is the PSD of \(\varvec{\Delta }\) given in (58) and corresponding to the sequence of covariance matrices \(\mathbf {R}_{\mathsf {\Delta }}^{(K)}\). Despite the discontinuity of the integrand in (66), the existence of the integral can be shown analogously to (59), here with \(\sigma_{\mathsf {A}}^2\) instead of \(\nu\). As \(\mathsf {A}_{k}\) is exponentially distributed, we have
With (3), (4), (8), (59), and (67), the lower bound in (66) can be written as
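The Szegő limit used in (66), \(\lim_{K\to\infty}\frac{1}{K}\log\det \mathbf{R}_{\mathsf{\Delta}}^{(K)} = \int_{-1/2}^{1/2}\log S_{\mathsf{\Delta}}(f)\,\mathrm{d}f\), can be illustrated numerically for the tridiagonal Toeplitz matrix \(\mathbf{R}_{\mathsf{\Delta}}\) from (55), with diagonal \(2\sigma_{\mathsf S}^2\) and off-diagonals \(-\sigma_{\mathsf S}^2\). The three-term determinant recurrence and the parameter value are our own implementation choices.

```python
import math

sigma2 = 0.5    # sigma_S^2, illustrative value

def log_det_r_delta(K):
    """log det of the K x K tridiagonal Toeplitz matrix with
    2*sigma2 on the diagonal and -sigma2 on the off-diagonals,
    via the three-term recurrence d_k = a*d_{k-1} - b^2*d_{k-2}."""
    d_prev, d = 1.0, 2.0 * sigma2
    for _ in range(2, K + 1):
        d_prev, d = d, 2.0 * sigma2 * d - sigma2 ** 2 * d_prev
    return math.log(d)

K = 1000
lhs = log_det_r_delta(K) / K          # (1/K) log det R_Delta
n = 200_000                           # midpoint rule for the integral
rhs = sum(math.log(2.0 * sigma2 *
                   (1.0 - math.cos(2.0 * math.pi * ((k + 0.5) / n - 0.5))))
          for k in range(n)) / n      # integral of log S_Delta(f) df
```

Both sides approach \(\log \sigma_{\mathsf S}^2 \approx -0.693\); the determinant side still carries a small \(O((\log K)/K)\) correction at finite \(K\).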
Characterization of the process of additional zero-crossings
It remains to find an upper bound for \(H'({{\varvec {\mathsf{{V}}}}})\), cf. (44). The random variable \(\mathsf {V}_k\) describes the number of received symbols that correspond to the transmitted symbol \(\mathsf {A}_k\). It depends on the number \(\mathsf {N}_k\) of inserted zero-crossings via \(\mathsf {V}_k = \mathsf {N}_k+1\). Given the separability of shift and insertion errors, we do not need to consider the TIs, as they just contain the shifted zero-crossing. What remains is the hold period of average duration \(T_{{\rm HP}} = {{\,{\mathbb{E}}\,}}[\mathsf {A}_k]-\beta = \lambda ^{-1}\), in which \(\mathsf {x}(t)\) maintains the signal level \(\pm \sqrt{\hat{P}}\). Without LP-filtering, this would lead to a level-crossing problem. However, since \(\hat{\mathsf {x}}(t)\) shows the typical ringing, cf. Fig. 2, we have a curve-crossing problem. In order to obtain a closed-form expression for an upper bound on \(\mathsf {N}_k\), we resort to a further bounding step: We consider a level-crossing problem using the lowest value of the kth pulse outside the TI, given by u, cf. (25) in Sect. 3.2 and Fig. 3.
Level-crossing problems, especially for Gaussian processes, have been widely studied, e.g., by Kac [28], Rice [29], and Cramer and Leadbetter [30]. We derive an upper bound on \(H'({{\varvec {\mathsf{{V}}}}})\) based on the first moment of the distribution of \(\mathsf {V}_k\). For a stationary zero-mean Gaussian random process, the expected number of crossings of the level u in the time interval \(T_{{\rm sat}} = \lambda ^{-1}\) is given by the Rice formula [29]
Here, \(s_{\mathsf {zz}}(\tau )\) is the ACF of the Gaussian process \(\mathsf {z}(t)\) and \(s^{\prime\prime}_{\mathsf {zz}}(\tau )={\partial ^2}/({\partial \tau ^2})\, s_{\mathsf {zz}}(\tau )\). Analogously to (49), we have
where \(s_{{\rm{ ISI}}}(\tau )\) is the ACF of the ISI and \(s^{\prime\prime}_{{\rm ISI}}(0)\) is finite for finite bandwidths W, see Sect. 3. This ensures \(s^{\prime\prime}_{\mathsf {zz}}(0) < \infty\) and, thus, that \({{\,{\mathbb {E}}\,}}[\mathsf {N}_k]\) is finite. For a given mean \(\mu\) in (69), the entropy-maximizing distribution for a discrete random variable on \({\mathbb {N}}\) is the geometric distribution, cf. [31, Section 2.1]. Hence, we can upper-bound the entropy \(H(\mathsf {V}_k)\) by
Since independent \(\mathsf {V}_k\) maximize the entropy rate, we obtain for the entropy rate of the auxiliary process
Note that the bound on \(H(\mathsf {V}_k)\) is an increasing function of the expected number of level-crossings \(\mu\) of the Gaussian random process and that \(\mu\) increases with the variance \(\sigma_{\mathsf {z}}^2\). Hence, to evaluate (69), an upper bound for \(\sigma_{\mathsf {z}}^2\) is required, which we obtain using the upper bound on \(\sigma_{\tilde{\mathsf {x}}}^2\) in (20) via (34) and (49). Moreover, in (69) we need an upper bound on \(s^{\prime\prime}_{\mathsf {zz}}(0)\), which results from the lower bound on \(s^{\prime\prime}_{\tilde{\mathsf {x}}{\tilde{\mathsf {x}}}}(0)\) in (24) via (35) and (70).
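The chain from the Rice formula (69) to the geometric entropy bound (71) can be evaluated as follows; the interface and the example numbers are illustrative, and all logarithms are in bits.

```python
import math

def rice_expected_crossings(T, s0, sdd0, u):
    """Rice formula: expected number of crossings of the level u
    within an interval of length T by a stationary zero-mean
    Gaussian process with ACF value s(0) = s0 and second
    derivative s''(0) = sdd0 (negative for a smooth process)."""
    return (T / math.pi) * math.sqrt(-sdd0 / s0) * math.exp(-u * u / (2.0 * s0))

def entropy_bound_bits(mu):
    """Maximum-entropy bound for V_k on {1, 2, ...} when the mean
    number of inserted crossings is mu = E[N_k]: the geometric
    distribution yields H <= (1 + mu) log2(1 + mu) - mu log2(mu)."""
    if mu <= 0.0:
        return 0.0
    return (1.0 + mu) * math.log2(1.0 + mu) - mu * math.log2(mu)
```

For instance, `entropy_bound_bits(1.0)` evaluates to exactly 2 bits, and the bound grows monotonically with the expected crossing count, matching the discussion above; raising the level u lowers the expected number of crossings, as the Gaussian tail factor shrinks.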
Results and discussion
Lower and upper bound on the achievable rate
Substituting (38), (68), (52), (8), (72), and (3) into (44), a lower bound on the mutual information rate of the 1-bit quantized continuous-time channel is obtained. It holds for small ratios \(\kappa ={W}/{\lambda }\) due to the limitations of the Gaussian approximation of the LP-distortion and is given by
The indices \((\cdot )_{\rm {LB}}\) and \((\cdot )_{\rm {UB}}\) refer to lower and upper bounds on the indexed variable, respectively.
Furthermore, the corresponding upper bound for the given signaling scheme results from (56) and is valid for all \(\kappa\). With (3) and (8), it is given by
Here, \(S_{\mathsf {\Delta },\text {LB}}(f)\) is a lower bound on \(S_{\mathsf {\Delta }}(f)\) in (58), since for evaluating \(\sigma_{\mathsf {z}}^2\) and \(\sigma_{\mathsf {S}}^2\) it is assumed that \(\sigma_{{\rm ISI}}^2=0\), cf. (49). Both bounds, (73) and (74), hold for ρ ≳ 10 dB, where \(|\mathsf {S}_k| < {\beta }/{2}\) with high probability, such that the temporal separation of the error events (zero-crossing shifts and additional zero-crossings) is valid.
In (73), with (11), (8), (17), (16) and (25), we can express \(\sigma ^2_{\mathsf {z},\text {UB}} = \hat{P} \big (\frac{\frac{1}{2}+2\kappa }{(1+2\kappa )\rho }+\frac{\alpha_{\rm {TI/HP}} (1+2 c_1) c_0}{2 \pi ^2 (1+2 \kappa )}\big )\) and \(\frac{s^{\prime\prime}_{\mathsf {zz},\text {LB}}(0)}{\lambda ^2} = \hat{P} \kappa ^2 \big (\frac{4}{3} \frac{\frac{1}{2}+2\kappa }{(1+2\kappa )\rho } + \frac{\alpha_{\rm {HP}} 2(1+2 c_1) c_2}{1+2 \kappa }\big )\) as functions of \(\hat{P}\), \(\rho\), and \(\kappa\), where \(c_1\) is a function of \(\kappa\). Hence, both \(\mu_{\rm {UB}}\) and the normalized lower bound \(I_{\rm {LB}}'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})/W\) depend solely on \(\kappa\) and \(\rho\). With \(\sigma ^2_{\mathsf {S},\text {UB}}\sigma_{\mathsf {A}}^{-2} = \frac{\sigma ^2_{\mathsf {z},\text {UB}}}{\hat{P}} \frac{1}{4 \kappa ^2 {{\,\mathrm{Si}\,}}^2(\pi )}\), the same behaviour can be shown for \(I_{\rm {UB}}'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})/W\).
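To make the stated dependence explicit, the first expression can be coded directly; the constants \(\alpha_{\rm TI/HP}\), \(c_0\), and \(c_1\) are the model parameters from Sect. 3 and are passed in as inputs here, so the snippet only demonstrates the scaling claim, not their actual values.

```python
import math

def sigma_z_ub_sq(P_hat, rho, kappa, alpha_ti_hp, c0, c1):
    """Upper bound on sigma_z^2 as stated in the text:
    P_hat * ((1/2 + 2k) / ((1 + 2k) rho)
             + alpha * (1 + 2 c1) * c0 / (2 pi^2 (1 + 2k)))."""
    noise = (0.5 + 2.0 * kappa) / ((1.0 + 2.0 * kappa) * rho)
    isi = (alpha_ti_hp * (1.0 + 2.0 * c1) * c0
           / (2.0 * math.pi ** 2 * (1.0 + 2.0 * kappa)))
    return P_hat * (noise + isi)
```

The bound scales linearly in \(\hat P\), so the normalized quantity \(\sigma^2_{\mathsf z,\rm UB}/\hat P\) depends only on \(\rho\) and \(\kappa\) for fixed model constants, which is the key observation behind the claim that the normalized rate bounds are functions of \(\kappa\) and \(\rho\) alone.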
For comparison, the capacity \(C_{{\rm AWGN}} = W \log \left( 1+ \rho \right)\) of the AWGN channel without output quantization represents an upper bound on the mutual information rate with 1bit quantization. The ratio between \(C_{{\rm AWGN}}\) and \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) in (73) is
Since \(\Delta {I}\) in (75) is solely a function of \(\kappa\) and \(\rho\), we look for the \(\kappa\) minimizing \(\Delta {I}\) for a given \(\rho\). The results are depicted in Fig. 6. It can be seen that the optimal \(\kappa ={W}/{\lambda }\) is on the order of one. Hence, the randomness of the input signal needs to be matched to the channel bandwidth, which is achieved by letting \(\lambda\) grow linearly with W. In the high-SNR regime, the optimal \(\kappa\) is approximately 0.75.
Note that this optimum heavily depends on linking the transition time \(\beta\) to the signal bandwidth W, cf. (8). The utilization of the spectrum could be improved by reducing W, e.g., choosing \(W={1}/{(2 T_{ {\rm avg}})}\). However, the minimum symbol duration would then no longer correspond to the coherence time of the noise, based on which we can neglect symbol deletions. The error event of deletions would then have to be included in the model, see Appendix A.
Spectral efficiency results
Figure 7 shows the resulting bounds over the SNR \(\rho\) in terms of spectral efficiency, i.e., normalized by the bandwidth 2W, such that they depend solely on \(\kappa\). We compare \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) from (73), \(I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) from (74), and the lower bound obtained when neglecting the ISI, \({I'_{\rm {LB,noISI}}({{\varvec {\mathsf{{A}}}}};{{{\varvec {\mathsf{{D}}}}}})}\), which results from setting \(\alpha_{\rm {TI}} = \alpha_{\rm {HP}} = 0\), cf. (34). The latter two bounds hold for any \(\kappa\), whereas \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) applies only to values of \(\kappa\) on the order of one, due to the limitations of the Gaussian approximation of the LP-distortion, cf. Sect. 3.3. This restriction is uncritical, however, since \(\kappa\) on the order of one yields the highest mutual information rates and, thus, the best lower bounds. It can be seen that in the low-to-mid SNR range, \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) and \(I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) approach each other with increasing SNR. This is due to the decreasing impact of \(H'({{\varvec {\mathsf{{V}}}}})\): additional zero-crossings are not considered in \(I_{\rm {UB}}'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\), and their probability decreases with the SNR. In the high-SNR domain, however, the upper and lower bound diverge again, since the system becomes dominated by the ISI. The lower bound \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) saturates at a spectral efficiency of approximately 1.54 bit/s/Hz for \(\kappa =0.75\).
On the other hand, \({I'_{\rm {LB,noISI}}({{\varvec {\mathsf{{A}}}}};{{{\varvec {\mathsf{{D}}}}}})}\) does not saturate over the SNR and is very close to \(I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\). This coincides with the observation in [15] that under noise-free conditions (\(\rho \rightarrow \infty\)) the achievable rate of the timing channel tends to infinity.
Comparing with the capacity in bits per channel use (bpcu) under 1-bit quantization in [13], which assumes Nyquist signaling and sampling, shows that an increase in achievable rate of at least 50% is possible in the high-SNR domain. Moreover, the achievable rate of a 1-bit quantization based system with 2-fold oversampling and 2-fold FTN signaling [11] is shown, which already attains a gain of approximately 35% w.r.t. [13]. Note that under perfect phase, frequency, and timing synchronization with complex signaling, all presented spectral efficiencies could be doubled; however, this requires two 1-bit quantizers, one for the in-phase and one for the quadrature component.
Conclusions
We derived an approximate lower bound on the mutual information rate of the real and bandlimited 1-bit quantized continuous-time AWGN channel, focusing on the mid-to-high SNR regime. It is valid for SNR values above approximately 10 dB and κ = W/λ ≲ 3, for which we can approximate the filter distortion by a Gaussian distribution. We furthermore provided an approximate upper bound on the mutual information rate of the specific signaling scheme used for deriving the lower bound. We have identified the parameter ranges in which both bounds are close and have given explanations for those in which they are not. As the lower and the upper bound are close in an SNR range between approximately 10 and 20 dB, they provide a valuable characterization of the actual mutual information rate with the given signaling scheme on 1-bit quantized channels. The bounds hold given the following statements:

For the lower bound, the LP-distortion error \(\tilde{\mathsf {x}}(t)\) is approximated to be Gaussian, which enables closed-form analytical treatment. This is appropriate for κ ≲ 3, for which the best lower bounds are obtained, cf. Sect. 3.3.

For the considered input signals and the mid-to-high SNR scenario, the occurrence of deletions is negligible if \(W=\frac{1}{2 \beta }\). This was confirmed by simulations, cf. Appendix A.

There is only one zero-crossing in each transition interval \(\left[ \mathsf {T}_k,\mathsf {T}_k+\beta \right]\). This follows from the bandlimitation of the noise, which prevents rapid changes of the signal, and it has been verified for an SNR above 5 dB by numerical computation based on curve-crossing problems for Gaussian random processes, cf. Appendix B.

For the upper bound, the individual elements of the process \({{\varvec {\mathsf{{S}}}}}\) are i.i.d. The minimum temporal separation of the individual \(\mathsf {S}_k\) is \(\beta\), which is matched to the bandwidth of the noise, cf. (8), and the ISI, which is the main contributor to correlation, is neglected for upper-bounding.

In the mid-to-high SNR domain, \(\mathsf {S}_{k} \ll \beta\) holds, such that the transition can be linearized around the zero-crossing, cf. (47). We show in Appendix D that this is valid for ρ ≳ 10 dB.
We have shown that, in order to maximize the lower bound on the mutual information rate for a given bandwidth, the parameter \(\lambda\) of the exponential distribution of the \(\mathsf {A}_k\) needs to grow linearly with the channel bandwidth. For the given system model, the optimal coefficient \(\kappa\) depends on the SNR and tends towards 0.75 for high SNR. When allowing the filter bandwidth W to take on values smaller than \({1}/{(2 \beta) }\), deletion errors have to be incorporated into the model, as otherwise the spectral efficiency of the system is overestimated. This remains for future work. In contrast to the AWGN channel capacity, the lower bound on the mutual information rate with 1-bit quantization saturates when increasing the SNR to infinity. This is due to the LP-distortion that is introduced since the designed signal \(\mathsf {x}(t)\) is not strictly bandlimited.
Methods/Experimental
We analyze the bandlimited 1-bit quantized AWGN channel. We derive a lower bound on the capacity by lower-bounding the mutual information rate for a given set of waveforms with exponentially distributed zero-crossing distances and an average power constraint. Furthermore, we derive an upper bound for the specific signaling scheme in order to quantify the impact of the applied bounding steps. The main results are closed-form expressions obtained analytically. During the derivation, it was necessary to make assumptions and approximations in order to treat the problem analytically. The parameter regions in which these assumptions, and therefore the obtained bounds, are valid have been determined by numerical computation or simulation in MATLAB.
Availability of data and materials
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
Change history
25 April 2021
Following publication, figures 3–6 and the formatting of several equations have been corrected in this article.
24 June 2021
A Correction to this paper has been published: https://doi.org/10.1186/s13638-021-01982-8
Notes
 1.
Note that one additional bit is carried by the sign of the first sample. However, its effect on the mutual information rate between channel input and output can be neglected when studying the capacity as it converges to zero for infinite blocklength.
 2.
As upcrossing we denote zero-crossings with positive transition slope, i.e., from \(-\sqrt{\hat{P}}\) to \(\sqrt{\hat{P}}\). Correspondingly, downcrossings have a negative slope.
Abbreviations
 AWGN:

Additive white Gaussian noise
 ADC:

Analog-to-digital converter
 ACF:

Autocorrelation function
 bpcu:

Bits per channel use
 FTN:

Faster-than-Nyquist
 HP:

Hold period
 i.i.d.:

Independent and identically distributed
 ISI:

Intersymbol interference
 LMMSE:

Linear minimum mean-squared error
 LP:

Low-pass
 PSD:

Power spectral density
 pdf:

Probability density function
 SNR:

Signal-to-noise ratio
 TI:

Transition interval
 ZC:

Zero-crossing
 w.r.t.:

With respect to
References
 1.
S. Bender, M. Dörpinghaus, G. Fettweis, On the achievable rate of bandlimited continuous-time 1-bit quantized AWGN channels, in Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany (2017)
 2.
S. Bender, M. Dörpinghaus, G. Fettweis, On the achievable rate of bandlimited continuous-time AWGN channels with 1-bit output quantization. arXiv preprint arXiv:1612.08176
 3.
B. Murmann, ADC Performance Survey 1997–2018. (2018). http://web.stanford.edu/~murmann/adcsurvey.html
 4.
G.P. Fettweis, M. Dörpinghaus, J. Castrillon, A. Kumar, C. Baier, K. Bock, F. Ellinger, A. Fery, F.H.P. Fitzek, H. Härtig, K. Jamshidi, T. Kissinger, W. Lehner, M. Mertig, W.E. Nagel, G.T. Nguyen, D. Plettemeier, M. Schröter, T. Strufe, Architecture and advanced electronics pathways toward highly adaptive energy-efficient computing. Proc. IEEE 107(1), 204–231 (2019)
 5.
E.N. Gilbert, Increased information rate by oversampling. IEEE Trans. Inf. Theory 39(6), 1973–1976 (1993)
 6.
S. Shamai, Information rates by oversampling the sign of a bandlimited process. IEEE Trans. Inf. Theory 40(4), 1230–1236 (1994)
 7.
T. Koch, A. Lapidoth, Increased capacity per unitcost by oversampling, in Proceedings of the IEEE Convention of Electrical and Electronics Engineers in Israel (IEEEI), Eilat, Israel, pp. 684–688 (2010)
 8.
W. Zhang, A general framework for transmission with transceiver distortion and some applications. IEEE Trans. Commun. 60(2), 384–399 (2012)
 9.
L. Landau, G. Fettweis, Information rates employing 1-bit quantization and oversampling at the receiver, in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Toronto, Canada, pp. 219–223 (2014)
 10.
L.T. Landau, M. Dörpinghaus, G.P. Fettweis, 1-bit quantization and oversampling at the receiver: sequence-based communication. EURASIP J. Wirel. Commun. Netw. 2018(1), 83 (2018)
 11.
L. Landau, M. Dörpinghaus, G.P. Fettweis, 1-bit quantization and oversampling at the receiver: communication over bandlimited channels with noise. IEEE Commun. Lett. 21(5), 1007–1010 (2017)
 12.
S. Bender, L. Landau, M. Dörpinghaus, G. Fettweis, Communication with 1-bit quantization and oversampling at the receiver: spectral constrained waveform optimization, in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Edinburgh, UK (2016)
 13.
J. Singh, O. Dabeer, U. Madhow, On the limits of communication with low-precision analog-to-digital conversion at the receiver. IEEE Trans. Commun. 57(12), 3629–3639 (2009)
 14.
C.E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656 (1948)
 15.
V. Anantharam, S. Verdú, Bits through queues. IEEE Trans. Inf. Theory 42(1), 4–18 (1996)
 16.
M. Schlüter, M. Dörpinghaus, G.P. Fettweis, Bounds on phase, frequency, and timing synchronization in fully digital receivers with 1-bit quantization and oversampling. IEEE Trans. Commun. 68(10), 6499–6513 (2020)
 17.
M. Schlüter, M. Dörpinghaus, G.P. Fettweis, Least squares phase estimation of 1-bit quantized signals with phase dithering, in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5 (2019)
 18.
A. Wadhwa, U. Madhow, Near-coherent QPSK performance with coarse phase quantization: a feedback-based architecture for joint phase/frequency synchronization and demodulation. IEEE Trans. Signal Process. 64(17), 4432–4443 (2016)
 19.
R.G. Gallager, Sequential Decoding for Binary Channels with Noise and Synchronization Errors. Technical Report, Massachusetts Institute of Technology: Lincoln Laboratory (1961)
 20.
K.S. Zigangirov, Sequential decoding for a binary channel with dropouts and insertions. Probl. Inf. Transm. 5, 17–22 (1969)
 21.
D. Fertonani, T.M. Duman, M.F. Erden, Bounds on the capacity of channels with insertions, deletions and substitutions. IEEE Trans. Commun. 59(1), 2–6 (2011)
 22.
S. Diggavi, M. Mitzenmacher, H. Pfister, Capacity upper bounds for deletion channels, in Proceedings of IEEE International Symposium Information Theory (ISIT), Nice, France, pp. 1716–1720 (2007)
 23.
R.L. Dobrushin, Shannon’s theorems for channels with synchronization errors. Probl. Peredachi Inf. 3, 18–36 (1967)
 24.
W. Rudin, Real and Complex Analysis, 3rd edn. (McGrawHill Book Co., New York, 1987), p. 416
 25.
T. Cover, J. Thomas, Elements of Information Theory, 2nd edn. (Wiley, New York, 2006)
 26.
U. Grenander, G. Szegö, Toeplitz Forms and Their Applications (University of California Press, Berkeley, 1958)
 27.
R.M. Gray, Toeplitz and circulant matrices: a review. Found. Trends Commun. Inf. Theory 2(3), 155–239 (2006)
 28.
M. Kac, On the average number of real roots of a random algebraic equation. Bull. Am. Math. Soc. 49(4), 314–320 (1943)
 29.
S.O. Rice, Mathematical analysis of random noise. Bell Syst. Tech. J. 23(3), 282–332 (1944)
 30.
H. Cramer, M.R. Leadbetter, Stationary and Related Stochastic Processes (Wiley, New York, 1967)
 31.
J.N. Kapur, Maximum-entropy Models in Science and Engineering (Wiley Eastern, New Delhi, 1993)
 32.
M.F. Kratz, Level crossings and other level functionals of stationary Gaussian processes. Probab. Surveys 3, 230–288 (2006)
Acknowledgements
We gratefully acknowledge the constructive comments of the reviewers.
Funding
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the Collaborative Research Center SFB 912, “Highly Adaptive EnergyEfficient Computing”, HAEC, ProjectID 164481002. Open Access funding enabled and organized by Projekt DEAL.
Author information
Affiliations
Contributions
All authors contributed to the conception and design of the study. SB drafted the manuscript and did the main analysis work. All authors contributed to the interpretation of the results. MD reviewed the manuscript and contributed the calculations in Sect. 5.3. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Parts of this work have been presented at the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, June, 2017 [1], and in an arXiv preprint [2].
Appendices
Appendix A: Occurrence of zero-crossing deletions
By removing the fixed relation of bandwidth W and transition time \(\beta\) in (8) and allowing W to take on any value, we augment the design space and potentially increase the spectral efficiency of the system. However, the minimum distance \(\beta\) between two zero-crossings is then no longer linked to the coherence time of the noise, which can lead to deletion errors. In this section, we verify this by simulation.
We consider a long sequence of \(K=10^3\) symbols and a time resolution \(\Delta t=10^{-3} \lambda ^{-1}\). For a given SNR, \(\lambda\), and \(\beta\), we generate \(\mathsf {x}(t)\) and analyze the corresponding signal \(\mathsf {r}(t)\) after the receive filter. In order to identify the locations of insertions and deletions, we match every received up-crossing^{Footnote 2} in \(\mathsf {r}(t)\) to the closest up-crossing in \(\mathsf {x}(t)\), likewise for the down-crossings, and count the deleted symbols.
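This nearest-crossing matching can be sketched in a few lines. The greedy one-pass matching and the tolerance `max_offset` below are simplifying assumptions for illustration, not parameters of the actual simulation:

```python
import numpy as np

def count_deletions(tx_crossings, rx_crossings, max_offset):
    """Match every received crossing to the closest transmitted crossing
    and count the transmitted crossings left unmatched (deletions).

    tx_crossings, rx_crossings: sorted 1-D arrays of crossing times.
    max_offset: largest time shift still accepted as a match (assumption).
    """
    matched = np.zeros(len(tx_crossings), dtype=bool)
    for t_rx in rx_crossings:
        i = int(np.argmin(np.abs(tx_crossings - t_rx)))
        if abs(tx_crossings[i] - t_rx) <= max_offset and not matched[i]:
            matched[i] = True
    return int(np.sum(~matched))
```

For instance, with transmitted up-crossings at 1.0, 2.0, 3.0 and received up-crossings at 1.05 and 3.1, the crossing at 2.0 counts as one deletion.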
The number of deletions is depicted in Fig. 8 for two different SNR values of 6 dB and 15 dB, where we defined \(\tilde{\kappa }={1}/{(2\beta \lambda )}\). It can be seen that in the mid-to-high SNR regime, the SNR has a rather small impact on the number of deletions that occur. The black line represents the case \(W={1}/{(2 \beta) }\). It can be seen that for bandwidths \(W\ge {1}/{(2 \beta) }\), i.e., above the black line, the number of deletions is negligible because the dynamics of the noise are high compared to the minimum symbol duration \(\beta\). However, when the bandwidth W becomes smaller than \({1}/{(2 \beta) }\), deletions are possible and have to be considered in the system model; otherwise, the spectral efficiency of the system will be overestimated.
Appendix B: Number of zero-crossings within a transition interval
Obtaining the expected number of zero-crossings within the interval \([\mathsf{T}_k,\mathsf{T}_k+\beta]\) is a curve crossing problem depending on the deterministic transmit waveform and the random process \(\mathsf {z}(t)\), cf. (46). We show in Sect. 3 that \(\tilde{\mathsf {x}}(t)\), and therefore also \(\mathsf {z}(t)\), can be approximated as Gaussian, see also (48). Hence, an equivalent way of looking at this problem is to study the zero-crossings of a nonstationary Gaussian process \(\mathsf {q}(t) = \mathsf {z}(t) - \psi (t)\), where \(\psi (t)\) is the deterministic curve to be crossed by \(\mathsf {z}(t)\). For this purpose we define the TI \({\mathbb {Y}}=[0,\beta ]\), where \(y \in {\mathbb {Y}}\) is the time variable within the TI. Then \(\psi (y)\) is related to the filtered transmit pulse \(\hat{g}(y)\) as
For the sine transition in (10), \(\hat{g}(t)\) is given in (14). The process \(\mathsf {q}(t)\) has a zero-crossing in \({\mathbb {Y}}\) only if \(\mathsf {z}(y)=\psi (y)\). For the number of crossings \(N_{T}(\psi )\) of a curve \(\psi\) by a stationary Gaussian process in a time interval of length T it holds [32]
where \(s(\tau )\) is the autocorrelation function (ACF) of the Gaussian process, \('\) denotes the derivative in time, i.e., w.r.t. y, and \(\varphi\) and \(\Phi\) are the zero-mean Gaussian density and distribution functions with variance \(\sigma ^2_{\mathsf {z}}\), respectively. The variance of the number of zero-crossings is given by [32]
where the subscripts \(t_1\) and \(t_2\) denote the time instants and \(\phi\) is the multivariate zero-mean normal distribution of \(\mathsf {q}(t_1)\), \(\mathsf {q}'(t_1)\), \(\mathsf {q}(t_2)\), and \(\mathsf {q}'(t_2)\) with covariance matrix \(\Sigma\)
Equations (77) and (78) are evaluated and depicted in Fig. 9. For \(\kappa ={W}/{\lambda } \ge 0.5\), the expected number of zero-crossings converges to one for SNR \(\ge\) 5 dB, while at the same time the variance converges to zero. Hence, for an SNR \(\ge\) 5 dB there exists with high probability only one zero-crossing in every TI. For \(\kappa \ll 1\) the lower bound on the mutual information rate in (73) becomes zero and, hence, the validity of the assumption is not relevant.
Appendix C: Power spectral density of the transmit signal
The PSD of a random process is defined as
with \(\mathsf {X}(\omega )\) being the spectrum of the random process \(\mathsf {x}(t)\) defined in (6) and given by
where \(G(\omega )\) is the Fourier transform of the waveform g(t) in (7). It holds that
where \(a(\omega )\) is a real function in \({\mathbb {R}}\) given by
We then obtain the squared magnitude of (81) as
The third term on the RHS of (84) can be written as
where
The first two terms of (84) represent a DC component, which is not of interest for the further calculations. Exploiting the fact that the cosine is an even function, it remains for the PSD of the transmit signal in (80)
where \(n=k-v\) is the index describing the distance between two arbitrary zero-crossing instances and \(\mathsf {L}_n = \mathsf {T}_k-\mathsf {T}_v = \sum \nolimits_{i = 1}^{n} \mathsf {A}_{k+i}\) is the corresponding random variable. As a sum of exponentially distributed random variables, \(\mathsf {L}_n\) follows a Gamma distribution, cf. (5). We thus can calculate the expectation in (87) as
with \(q = \frac{\lambda }{\sqrt{\lambda ^2+\omega ^2}}\), and upper-bound the infinite sum in (87) by
Numerically, we find that the infinite sum in (87) has periodic minima. They occur when \(\omega \beta + \arctan \left( \frac{\omega }{\lambda }\right) = 2 m \pi\), \(m \in {\mathbb {Z}}\), for which the cosine always equals one, such that it remains
Based on (90), \(S_{\mathsf {X}}(\omega )\) in (87) can be lower-bounded by (18), where we have used that \(1-2 \frac{\lambda }{\sqrt{\lambda ^2+\omega ^2} + \lambda } = \frac{1}{1+2 c(\omega )}\) with \(c(\omega )\) given in (89).
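The expectation of the cosine of the Gamma-distributed \(\mathsf {L}_n\) follows from the characteristic function of the Gamma distribution. A quick Monte Carlo check (with arbitrarily chosen \(\lambda\), \(\omega\), and n, not values from the paper) illustrates \({{\mathbb {E}}}[\cos (\omega \mathsf {L}_n)] = q^n \cos (n \arctan ({\omega }/{\lambda }))\):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, omega, n = 2.0, 3.0, 4   # arbitrary example values

# Monte Carlo: L_n is a sum of n i.i.d. Exp(lam) variables, i.e., Gamma(n, lam)
L = rng.exponential(1.0 / lam, size=(200_000, n)).sum(axis=1)
mc = float(np.mean(np.cos(omega * L)))

# Closed form via the Gamma characteristic function:
# E[exp(j*omega*L_n)] = (lam/(lam - j*omega))**n = q**n * exp(j*n*arctan(omega/lam))
q = lam / np.sqrt(lam**2 + omega**2)
cf = q**n * np.cos(n * np.arctan(omega / lam))
print(mc, cf)
```

The two values agree up to Monte Carlo error, confirming the geometric decay with q that drives the bound on the infinite sum.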
Appendix D: Mid-to-high SNR assumption \(\mathsf {S}_k \ll \beta\)
In order to quantify the SNR region for which \(\mathsf {S}_k \ll \beta\) and, thus, the linearization in (47) is valid, the variances of both densities, (50) and (51), have been evaluated and compared numerically. The corresponding normalized variances \(W^2 \sigma_{\mathsf {S}}^2\) are depicted in Fig. 10a), where the variance of the original pdf in (50) is only shown when \(\Pr (\mathsf {S}_k<{\beta }/{2}) = \int_{-\beta /2}^{\beta /2} p_{\mathsf {S}}(s)\hbox {d}s \ge 0.95\), i.e., when \(\mathsf {S}_k < {\beta }/{2}\) holds with large probability, cf. Fig. 10b). The numerical evaluation in Fig. 10 shows that for the relevant regime of \(\kappa = {W}/{\lambda } \ge 0.5\), the variances \(\sigma_{\mathsf {S},\text {orig}}^2\) and \(\sigma_{\mathsf {S}}^2\) are very close when ρ ≳ 10 dB, which is when \(\Pr (\mathsf {S}_k<{\beta }/{2})>0.99\). This means that as soon as the SNR is high enough to ensure the temporal separation of error events (zero-crossing shifts and zero-crossing insertions), the linearization is valid as well. For \(\kappa \ll 1\) the lower bound on the mutual information rate in (73) becomes zero and, hence, the validity of the assumption is not relevant. Comparing the variances is sufficient for our purpose, as the further bounding of \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) is solely based on the variance of a Gaussian random process with equal covariance matrix.
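The gating probability above has a closed form for a zero-mean Gaussian shift. The helper below (a sketch with a hypothetical name) evaluates \(\Pr (|\mathsf {S}_k|<{\beta }/{2})\) from the standard deviation \(\sigma_{\mathsf {S}}\):

```python
import math

def prob_within_half_beta(sigma_s, beta):
    """Pr(|S_k| < beta/2) for a zero-mean Gaussian shift with std sigma_s."""
    return math.erf(beta / (2.0 * math.sqrt(2.0) * sigma_s))
```

For example, \(\beta = 2\sigma_{\mathsf {S}}\) recovers the familiar one-sigma mass of about 0.683; as \(\sigma_{\mathsf {S}} \to 0\) with increasing SNR, the probability tends to one, consistent with the 0.99 threshold quoted above.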
Appendix E: Independent noise assumption
In order to compare the impact of \(\mathbf {Q}_{\rm {err}}\) and \(\mathbf {R}_{\rm {err}}\), cf. (61) and (64), we need to obtain \(\mathbf {Q}_{\mathsf {A\Delta}}\) and \(\mathbf {Q}_{\mathsf {\Delta}}\). By using a short notation for all random processes at time \(\mathsf {T}'_j\), e.g., \(\mathsf {z}_j = \mathsf {z}(\mathsf {T}'_j)\), we write for the respective entries \(q_{\mathsf {A\Delta },(i,j)} = {{\,{\mathbb {E}}\,}}[\mathsf {A}_i \mathsf {\Delta }_j]\) and \(q_{\mathsf {\Delta },(i,j)} = {{\,{\mathbb {E}}\,}}[\mathsf {\Delta }_i \mathsf {\Delta }_j]\)
where \(a = {\beta }/{\left( {{\,\mathrm{Si}\,}}(\pi )\sqrt{\hat{P}}\right) }\). Equation (91) results since the Gaussian noise \(\hat{\mathsf {n}}(t)\) is independent of the transmit signal and only \(\tilde{\mathsf {x}}(t)\) can contribute to a correlation between \({{\mathbf {\mathsf{{A}}}}}\) and \(\varvec{\Delta }\). The ISI at time \(\mathsf {T}'_j\) is given by
where \(\mathsf {L}_{j+l}^{j+m} = \sum_{k=j+l}^{j+m} \mathsf {A}_k\). In order to obtain \(\mathbf {Q}_{\mathsf {A\Delta}}\) we define with \(n=m-l+1\) and \(m\ge l\)
and express the (i, j)th entries of \(\mathbf {Q}_{\mathsf {A\Delta}}\) as
The weights of \(\nu_n\) and \(\xi_n\) for other values of j are given in Table 1. By numerically solving the integrals in (94) and (95) using the probability distributions of \(\mathsf {A}_i\) and \(\mathsf {L}_{j+l}^{j+m}\), cf. (2) and (5), we obtain
From (92) we obtain \(\mathbf {Q}_{\mathsf {\Delta}}\) as
where \(\mathbf {R}_{\mathsf {\Delta}}\) is given by (55) and the elements of \(\mathbf {R}_{\tilde{\mathsf {x}}}\) are
From (93), we see that \({{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_i\tilde{\mathsf {x}}_j]\) yields sums of expectations \({{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {L}_{i+l_1}^{i+m_1}\right) \tilde{g}\left( \mathsf {L}_{j+l_2}^{j+m_2}\right) \right]\). Depending on \(i-j\), \(m_{1/2}\), and \(l_{1/2}\), a number n of the \(\mathsf {A}_k\) in the two sums \(\mathsf {L}_{i+l_1}^{i+m_1}\) and \(\mathsf {L}_{j+l_2}^{j+m_2}\) coincide, whereas w and p summands \(\mathsf {A}_k\) are unique to the first and the second sum, respectively. We therefore define three random variables \(\mathsf {T}_n\), \(\mathsf {X}_w\), and \(\mathsf {L}_p\), which are the sums of disjoint sets of n, w, and p summands \(\mathsf {A}_k\), respectively. The expectation above becomes \({{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {T}_n+\mathsf {X}_w\right) \tilde{g}\left( \mathsf {T}_n+\mathsf {L}_p\right) \right]\), where \(p(\mathsf {T}_n,\mathsf {X}_w,\mathsf {L}_p) = p(\mathsf {T}_n) p(\mathsf {X}_w) p(\mathsf {L}_p)\) since the \(\mathsf {A}_k\) are independent. Thus, with (5), we can numerically evaluate \({{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {T}_n+\mathsf {X}_w\right) \tilde{g}\left( \mathsf {T}_n+\mathsf {L}_p\right) \right]\) and obtain \({{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_i\tilde{\mathsf {x}}_j]\), \(r_{\tilde{\mathsf {x}},(i,j)}\), and \(\mathbf {Q}_{\mathsf {\Delta}}\).
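The factorization \(p(\mathsf {T}_n,\mathsf {X}_w,\mathsf {L}_p) = p(\mathsf {T}_n) p(\mathsf {X}_w) p(\mathsf {L}_p)\) makes the expectation straightforward to estimate by Monte Carlo over independent Gamma variates. The sketch below uses a placeholder \(\tilde{g}(t) = e^{-t}\) (an assumption for illustration, not the paper's filtered pulse), for which the result also has a closed form to compare against:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1.0
n, w, p = 2, 3, 1   # sizes of the three disjoint index sets (example values)

def g_tilde(t):
    # placeholder pulse shape; NOT the paper's filtered transmit pulse
    return np.exp(-t)

# T_n, X_w, L_p are sums over disjoint sets of i.i.d. Exp(lam) variables,
# hence independent Gamma(n, .), Gamma(w, .), Gamma(p, .) random variables
N = 200_000
T = rng.gamma(n, 1.0 / lam, N)
X = rng.gamma(w, 1.0 / lam, N)
L = rng.gamma(p, 1.0 / lam, N)
est = float(np.mean(g_tilde(T + X) * g_tilde(T + L)))

# For g_tilde(t) = exp(-t) the expectation factorizes into Laplace transforms:
# E[e^{-2T}] * E[e^{-X}] * E[e^{-L}] = (lam/(lam+2))**n * (lam/(lam+1))**(w+p)
exact = (lam / (lam + 2.0))**n * (lam / (lam + 1.0))**(w + p)
print(est, exact)
```

The agreement between the Monte Carlo estimate and the closed form illustrates how the independence of the \(\mathsf {A}_k\) separates the joint expectation.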
With \(\mathbf {Q}_{\mathsf {A \Delta}}\), \(\mathbf {Q}_{\mathsf {\Delta}}\), and \(\mathbf {Q}_{\mathsf {A}}\) we compute \(\mathbf {Q}_{\rm {err}}\). The difference between the two bounds in (62) and (63) is \(\frac{1}{2} (\log \det \mathbf {R}_{\rm {err}} - \log \det \mathbf {Q}_{\rm {err}})\), which is always positive, as can be seen in Fig. 11, indicating that the inequality in (63) holds.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bender, S., Dörpinghaus, M. & Fettweis, G.P. On the achievable rate of bandlimited continuous-time AWGN channels with 1-bit output quantization. J Wireless Com Network 2021, 54 (2021). https://doi.org/10.1186/s13638-021-01892-9
Received:
Accepted:
Published:
Keywords
 Channel capacity
 One-bit quantization
 Timing channel
 Continuous-time channel