
On the achievable rate of bandlimited continuous-time AWGN channels with 1-bit output quantization


We consider a real continuous-time bandlimited additive white Gaussian noise channel with 1-bit output quantization. On such a channel the information is carried by the temporal distances of the zero-crossings of the transmit signal. We derive an approximate lower bound on the capacity by lower-bounding the mutual information rate for input signals with exponentially distributed zero-crossing distances, sine-shaped transition waveform, and an average power constraint. The focus is on the behavior in the mid-to-high signal-to-noise ratio (SNR) regime above 10 dB. For hard bandlimited channels, the lower bound on the mutual information rate saturates with the SNR growing to infinity. For a given SNR the loss with respect to the unquantized additive white Gaussian noise channel solely depends on the ratio of channel bandwidth and the rate parameter of the exponential distribution. We complement those findings with an approximate upper bound on the mutual information rate for the specific signaling scheme. We show that both bounds are close in the SNR domain of approximately 10–20 dB.


In digital communications, we typically assume that the analog-to-digital converter (ADC) at the receiver provides a sufficiently fine-grained quantization of the magnitude of the received signal. In the present paper, we consider very short range high data rate communication, where high carrier frequencies and large bandwidths are used. In such a scenario, the power consumption of the ADC becomes a major factor. The consumed energy per conversion step increases with the sampling rate [3], such that high-resolution ADCs become infeasible in the sub-THz regime at the very high sampling rates required. An exemplary application is wireless communication between computer boards within a server [4]. The above problem can be circumvented by using 1-bit quantization and oversampling of the received signal with respect to (w.r.t.) the Nyquist rate. One-bit quantization is fairly simple to realize as it requires neither an automatic gain control nor linear amplification at the receiver. The loss in amplitude information can be partly compensated by oversampling, such that one could say that quantization resolution of the signal magnitude is traded off against resolution in the time domain. Optimal communication over the resulting channel including the ADC requires a modulation and signaling scheme adapted to this specific channel. Since this coarse quantization reduces the achievable rate and oversampling can partly compensate for this effect, the question is how much the channel capacity is degraded compared to an additive white Gaussian noise (AWGN) channel sampled at the Nyquist rate.

For the noise-free case it has been shown already in the early works by Gilbert [5] and Shamai [6] that oversampling of a bandlimited channel can increase the information rate w.r.t. Nyquist sampling. The latter lower-bounded the capacity by \(\log_2 (n + 1)\) [bits/Nyquist interval], where n is the oversampling factor w.r.t. Nyquist sampling. However, for assessing the performance of practical communication systems with oversampled 1-bit quantization, the finite-SNR performance is highly relevant. Regarding the low signal-to-noise ratio (SNR) domain, Koch and Lapidoth have shown in [7] that oversampling increases the capacity per unit-cost of bandlimited Gaussian channels with 1-bit output quantization. In [8] it has been shown, based on the study of the generalized mutual information, that oversampling increases the achievable rate. Moreover, in [9] bounds on the achievable rate in a discrete-time scenario are studied, which are evaluated via simulation in [10] w.r.t. 90% and 95% power containment bandwidth, in [11] considering hard bandlimitation, and in [12] w.r.t. a spectral mask. In some of these approaches so-called faster-than-Nyquist (FTN) signaling is applied. FTN signaling is closely related to oversampling as both increase the resolution of the grid on which the zero-crossings of the transmit signal can be placed. In addition, in [13] the capacity for coarsely quantized systems under Nyquist signaling is studied.

However, an analytical evaluation of the channel capacity of the 1-bit quantized oversampled AWGN channel in the mid-to-high SNR domain is still open. This capacity depends on the oversampling factor since, due to the 1-bit quantization, Nyquist sampling (like any other sampling rate) does not provide a sufficient statistic. This means that the samples do not contain the entire information on the input signal given in the continuous-time receive signal. In the present paper we study the capacity of the underlying continuous-time channel, which can be interpreted as the limiting case of increasing the oversampling rate to infinity. As for the capacity of the AWGN channel given by Shannon [14], there is then no quantization in the information-carrying dimension, which here is the time domain. With our approach, we aim for a better understanding of the difference between using the magnitude domain versus the time domain for signaling. As the continuous-time additive noise channel with 1-bit output quantization carries the information in the zero-crossings of the transmit signal, this channel corresponds to some extent to a timing channel as, e.g., studied in [15].

For the outlined scenario of short range multigigabit/s communication, e.g., for inter-board communication, a link budget calculation in [4] yields a minimum receive SNR of 13.6 dB. We therefore focus on the mid-to-high SNR domain above 10 dB. This requires different bounding techniques than in the low-SNR regime, since the non-linear effects of the 1-bit quantization are more dominant. The main contributions and results of the paper are as follows:

  • We derive approximate lower and upper bounds on the mutual information rate of the real and bandlimited continuous-time additive Gaussian noise channel with 1-bit output quantization under an average power constraint. We base our derivation on a class of signals with an exponentially distributed zero-crossing distance at the input and a sine-shaped transition waveform at the zero-crossings. We show that the main error events to be considered are insertions and shifts of the zero-crossing time instants of the transmit signal.

  • We provide approximations that enable closed-form bounding of the mutual information rate and analyze their validity regions. The approximations are suitable in the mid-to-high SNR domain above 10 dB and are summarized in Sect. 8. A central assumption in this regard is that the intersymbol interference (ISI) due to bandlimitation is treated as noise.

  • We observe that for a given SNR the ratio between the derived lower bound and the AWGN capacity solely depends on the ratio of channel bandwidth and the rate parameter of the exponential distribution. The lower bound on the mutual information rate is maximized if this ratio is approximately 0.75. The derived lower bound on the mutual information rate saturates over the SNR only if a hard bandlimitation is considered. For the sine-shaped transition waveform this yields ca. 1.54 bit/s/Hz.

  • The upper and lower bound are close in the mid-to-high SNR regime, i.e., in the SNR range of approximately 10 to 20 dB. Treating ISI as noise is the dominating error effect for SNRs above 20 dB, while insertions are the dominant error effect below approximately 12 dB.

  • Compared to the capacity results under 1-bit quantization and Nyquist signaling in [13], we observe that an increase of at least 50 % in the achievable rate is possible in the high-SNR domain. A practical binary 2-fold FTN signaling scheme in [11] already shows a gain of ca. 35 % w.r.t. [13].

In the present work we assume that the receiver is perfectly synchronized to the transmitter. In practice, however, channel parameter estimation and synchronization based on 1-bit quantized channel output samples is an active area of research: [16] studies bounds on the achievable timing, phase, and frequency estimation performance, and phase and frequency estimators were derived, e.g., in [17] and [18]. Under perfect synchronization the complex baseband channel can be decomposed into two real AWGN channels, such that we consider a real channel.

The paper is organized as follows. In Sect. 2, the system model is given. In Sect. 3 we introduce the relevant types of error events and model the impact of filtering, especially the ISI. Based on this, an upper and a lower bound on the mutual information rate are given in Sect. 4 and analyzed in detail in Sects. 5 and 6. In Sect. 7 we give the final form of the upper and the lower bound on the mutual information rate and discuss their behavior depending on various channel parameters. Section 8 provides the conclusion of our findings.

We apply the following notations: vectors are set bold, random variables sans serif. Thus, \(\varvec{\mathsf {X}}^{(K)}\) is a random vector of length K. Omitting the superscript denotes the corresponding random process \(\varvec{\mathsf {X}}\) for \(K \rightarrow \infty\). For information measures, \((\cdot )'\) denotes the corresponding rate. Furthermore, \((a)^{+}\) is the maximum of a and zero.

System model

We consider the baseband system model depicted in Fig. 1 transmitting over a real AWGN channel. A receiver relying on 1-bit quantization can only distinguish between the level of the input signal being smaller or larger than zero. Hence, all information that can be conveyed through such a channel must be recovered from the sequence of time instants of the zero-crossings (ZC). In order to model this, we consider as channel input and output the vectors \({{\varvec {\mathsf{{A}}}}}^{(K)} = [\mathsf {A}_1,...,\mathsf {A}_k,...,\mathsf {A}_K]^T\) and \({{\varvec {\mathsf{{D}}}}}^{(M)}=[\mathsf {D}_1,...,\mathsf {D}_m,...,\mathsf {D}_M]^T\), which contain the temporal distances \(\mathsf {A}_k\) and \(\mathsf {D}_m\) of two consecutive zero-crossings of \(\mathsf {x}(t)\) and the received signal \(\mathsf {r}(t)\), respectively. Here K is not necessarily equal to M as noise can add or remove zero-crossings. For the analysis in this work, it is assumed that the time instants of the zero-crossings can be resolved with infinite precision, which makes \(\mathsf {A}_k\) and \(\mathsf {D}_m\) continuous random variables. The mapper converts the random vector \({{\varvec {\mathsf{{A}}}}}^{(K)}\) into the continuous-time transmit signal \(\mathsf {x}(t)\), which is then lowpass-filtered with one-sided bandwidth W and transmitted over an AWGN channel. At the receiver, lowpass-filtering with one-sided bandwidth W ensures bandlimitation of the noise, and the demapper realizes the conversion between the noisy received signal \(\mathsf {r}(t)\) and the sequence \({{\varvec {\mathsf{{D}}}}}^{(M)}\) of zero-crossing distances. The continuous-time 1-bit ADC can hereby be understood as a pre-stage to the zero-crossing detector, underlining the fact that the amplitude information is not available for signal processing.

Fig. 1

Block diagram of the system model

Signal structure and input distribution

Figure 2 illustrates the mapping of the input sequence \({{\varvec {\mathsf{{A}}}}}^{(K)}\) to \(\mathsf {x}(t)\), which alternates between two levels \(\pm \sqrt{\hat{P}}\), where \(\hat{P}\) is the peak power of \(\mathsf {x}(t)\). The kth transition between the levels \(\pm \sqrt{\hat{P}}\) begins at time

$$\begin{aligned} \mathsf {T}_k = \sum \limits_{i=1}^{k} \mathsf {A}_i + t_0 \end{aligned}$$

and crosses zero at time \(\mathsf {T}'_k\). Without loss of generality, we assume \(t_0=0\). The input symbols \(\mathsf {A}_k\) correspond to the temporal distances between the kth and the \((k-1)\)th zero-crossing of \(\mathsf {x}(t)\). We consider the \(\mathsf {A}_k\) to be independent and identically distributed (i.i.d.) based on an exponential distribution, i.e.,

$$\begin{aligned} \mathsf {A}_k \sim \lambda e^{-\lambda (a-\beta )} {\mathbb{1}}_{\left[ \beta ,\infty \right) }(a) \end{aligned}$$

since the exponential distribution maximizes the entropy for positive continuous random variables with given mean. Here, \({\mathbb{1}}_{[u,v]}(x)\) is the indicator function, which is one on the interval [u, v] and zero otherwise. This results in a mean symbol duration of

$$\begin{aligned} T_{{\rm avg}} = \frac{1}{\lambda } + \beta \end{aligned}$$

a variance of the input symbols of

$$\begin{aligned} \sigma_{\mathsf {A}}^{2} = {1}/{\lambda ^2} \end{aligned}$$

and a Gamma-distribution of the \(\mathsf {T}_k\) or any other sum of \(\mathsf {A}_k\), respectively,

$$\begin{aligned} p_{\mathsf {T}_k}(t) = \frac{\lambda ^k e^{-\lambda \left( t - k \beta \right) } \left( t-k \beta \right) ^{k-1}}{(k-1)!}, ~ t \ge k \beta . \end{aligned}$$
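The input statistics above are straightforward to verify numerically. The following sketch samples shifted-exponential symbols and checks the mean duration (2), the variance (3), and the mean of the Gamma-distributed \(\mathsf {T}_k\) in (5); the values chosen for \(\lambda\) and \(\beta\) are illustrative, not from the paper.

```python
import numpy as np

# Monte Carlo check of the input statistics: A_k = beta + Exp(lambda), cf. (4).
# lam (rate parameter) and beta (transition time) are illustrative values.
rng = np.random.default_rng(0)
lam, beta = 2.0, 0.25
n, k = 200_000, 5

A = beta + rng.exponential(1.0 / lam, size=(n, k))  # i.i.d. shifted exponentials
T_k = A.sum(axis=1)            # sum of k symbols, Gamma-distributed, cf. (5)

T_avg_emp = A.mean()           # should approach T_avg = 1/lam + beta, cf. (2)
var_emp = A.var()              # should approach 1/lam**2, cf. (3)
mean_Tk_emp = T_k.mean()       # Gamma mean: k/lam + k*beta
```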

In order to control the bandwidth of the channel input signal and match it to the channel, the transition from one level to the other is given by the waveform f(t), yielding the transmit signal

$$\begin{aligned} \mathsf {x}(t) = \left( \sum_{k=1}^{K} \sqrt{\hat{P}} (-1)^k g(t-\mathsf {T}_k)\right) + \sqrt{\hat{P}} \end{aligned}$$

with the pulse shape

$$\begin{aligned} g(t) = \left( 1+f\left( t-\frac{\beta }{2}\right) \right) \cdot {\mathbb{1}}_{\left[ 0,\beta \right] }(t) + 2\cdot {\mathbb{1}}_{\left[ \beta ,\infty \right) }(t). \end{aligned}$$

Here, f(t) is an odd function between \((-{\beta }/{2},-1)\) and \(({\beta }/{2},1)\) and zero otherwise, describing the transition of the signal. The transition time \(\beta\) is chosen according to the available channel bandwidth W with

$$\begin{aligned} \beta =\frac{1}{2 W}. \end{aligned}$$

Implications of this choice will be discussed in Sects. 4, 7, and Appendix A. With \(\beta\) being the minimal value of the \(\mathsf {A}_{k}\), it is guaranteed that \(\mathsf {x}(t)\) reaches the level \(\pm \sqrt{\hat{P}}\) between two transitions. This is not necessarily capacity-achieving but simplifies the derivation of a lower bound on the mutual information rate. The resulting time instant of the kth zero-crossing is

$$\begin{aligned} \mathsf {T}'_k = \mathsf {T}_k +\frac{\beta }{2}. \end{aligned}$$

The results throughout the paper are given for a sine halfwave as transition, i.e.,

$$\begin{aligned} f(t)&= {\left\{ \begin{array}{ll} \sin \left( \pi \frac{t}{\beta }\right) &{}\text {for } |t| \le {\beta }/{2}\\ 0 &{}\text {otherwise} \end{array}\right. }. \end{aligned}$$

In the limiting case of \(\lambda \rightarrow \infty\), this leads to a one-sided signal bandwidth of W. However, \(\mathsf {x}(t)\) is not strictly bandlimited as a small portion of its energy is outside of the interval \([-W, W ]\). Strict bandlimitation is ensured by the lowpass (LP) filters at transmitter and receiver, which are considered to be ideal LPs with one-sided bandwidth W and amplitude one. The normalized bandwidth

$$\begin{aligned} \kappa = {W}/{\lambda } \end{aligned}$$

is an important design parameter, which relates the channel bandwidth to the rate parameter of the exponential input distribution.
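As a numerical illustration of the mapper (6)-(10), the following sketch builds \(\mathsf {x}(t)\) on a fine grid from a sampled input sequence and verifies that its zero-crossings fall at \(\mathsf {T}'_k = \mathsf {T}_k + \beta/2\), cf. (9). All parameter values are illustrative.

```python
import numpy as np

# Sketch of the mapper: level signal with sine-shaped transitions, cf. (6), (7), (10).
# P_hat, W, lam, K, and dt are illustrative simulation parameters.
rng = np.random.default_rng(1)
P_hat, W, lam, K = 1.0, 2.0, 2.0, 40
beta = 1.0 / (2.0 * W)                          # transition time, cf. (8)
A = beta + rng.exponential(1.0 / lam, size=K)   # input symbols, cf. (4)
T = np.cumsum(A)                                # transition start times (t_0 = 0), cf. (1)

def f(t):  # sine halfwave transition, cf. (10)
    return np.where(np.abs(t) <= beta / 2, np.sin(np.pi * t / beta), 0.0)

def g(t):  # pulse shape, cf. (7)
    return np.where(t < 0, 0.0, np.where(t <= beta, 1.0 + f(t - beta / 2), 2.0))

dt = 1e-4
t = np.arange(0.0, T[-1] + 2 * beta, dt)
# paper index k = 1..K carries the sign (-1)^k; the Python index shifts it by one
x = np.sqrt(P_hat) * (1.0 + sum((-1) ** (k + 1) * g(t - T[k]) for k in range(K)))

# detected zero-crossing instants (left grid point of each sign change)
zc = t[:-1][np.sign(x[:-1]) != np.sign(x[1:])]
```

Since the transitions do not overlap (every \(\mathsf {A}_k \ge \beta\)) and the sine halfwave is monotone, exactly one zero-crossing per transition is detected, each within one grid step of \(\mathsf {T}_k + \beta/2\).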

Fig. 2

Mapping input sequence \({{\varvec {\mathsf{{A}}}}}^{(K)}\) to \(\mathsf {x}(t)\) and transmit signal \(\hat{\mathsf {x}}(t)\). The distances between the zero-crossings correspond to the sequence \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and the impact of LP-filtering on the transition slope and the signal level is seen in \(\hat{\mathsf {x}}(t)\)

Channel model

The LP-filtered signal \(\hat{\mathsf {x}}(t)\) is transmitted over a continuous-time AWGN channel. The received signal after LP-filtering and quantization is given by

$$\begin{aligned} \mathsf {y}(t)= Q(\mathsf {r}(t)) = Q(\hat{\mathsf {x}}(t)+\hat{\mathsf {n}}(t)) \end{aligned}$$

where \(Q(\cdot )\) is a binary quantizer with threshold zero, i.e., \(Q(x)=1\) if \(x\ge 0\) and \(Q(x)=-1\) if \(x<0\). Here, \(\hat{\mathsf {n}}(t)\) is the filtered version of the zero-mean additive white Gaussian noise \(\mathsf {n}(t)\) with power spectral density (PSD) \({N_0}/{2}\). Its variance is \(\sigma_{\hat{\mathsf {n}}}^2 = N_0 W\) and its PSD is given by

$$\begin{aligned} S_{\hat{\mathsf {n}}}(f) = {\left\{ \begin{array}{ll} {N_0}/{2} &{} \text {for } |f| \le W\\ 0 &{}\text {otherwise} \end{array}\right. }. \end{aligned}$$
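The effect of the receive LP on the noise can be reproduced numerically. The following sketch, with illustrative grid parameters, applies an ideal lowpass to discrete white noise via FFT masking and checks that the resulting variance approaches \(\sigma_{\hat{\mathsf {n}}}^2 = N_0 W\); the 1-bit quantizer \(Q(\cdot)\) of (11) is included for completeness.

```python
import numpy as np

# Check that ideal LP filtering of white noise with PSD N0/2 yields variance
# N0*W, cf. (12), (13). Discrete white noise at rate fs has variance N0*fs/2.
# fs, n, N0, and W are illustrative simulation parameters.
rng = np.random.default_rng(2)
N0, W = 2.0, 1.0
fs, n = 64.0, 2 ** 18
n_t = rng.normal(0.0, np.sqrt(N0 * fs / 2.0), size=n)

freqs = np.fft.fftfreq(n, d=1.0 / fs)
N_f = np.fft.fft(n_t)
N_f[np.abs(freqs) > W] = 0.0                  # ideal LP, one-sided bandwidth W
n_hat = np.real(np.fft.ifft(N_f))

var_emp = n_hat.var()                         # should approach N0*W = 2.0

def Q(x):  # 1-bit quantizer with threshold zero, cf. (11)
    return np.where(x >= 0, 1.0, -1.0)
```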

The filtered transmit signal \(\hat{\mathsf {x}}(t)\) is depicted in Fig. 2 and can be obtained by superposition, analogous to (6), of the filtered transmit pulses

$$\begin{aligned} \hat{g}(t) =&\, 1+\frac{1}{\pi } \left[ {{\,\mathrm{Si}\,}}(2 \pi W t)+{{\,\mathrm{Si}\,}}(2 \pi W t-\pi )- \frac{\cos (2 \pi W t)}{2} [{{\,\mathrm{Si}\,}}(4 \pi W t) - {{\,\mathrm{Si}\,}}(4 \pi W t- 2 \pi )] \right. \nonumber \\&\left. + \frac{\sin (2 \pi W t)}{2} [{{\,\mathrm{Ci}\,}}(4 \pi W t) - {{\,\mathrm{Ci}\,}}(4 \pi W t-2 \pi ) - \ln (2 \pi W t) + \ln (2 \pi W t-\pi )]\right] \end{aligned}$$

where \({{\,\mathrm{Si}\,}}(\cdot )\) and \({{\,\mathrm{Ci}\,}}(\cdot )\) are the sine and cosine integral, respectively. Here, (14) can be obtained by Fourier-transform of g(t) yielding \(G(\omega )\). Hard bandlimitation limits the spectrum of \(G(\omega )\) to \([-W,W]\), yielding \(\hat{G}(\omega )\), such that the inverse Fourier transform of \(\hat{G}(\omega )\) yields \(\hat{g}(t)\). An expression for \(G(\omega )\) is given below in (82) in general and in (19) for the sine waveform. The distortion between the signal \(\mathsf {x}(t)\) containing the designed sequence of zero crossings and \(\hat{\mathsf {x}}(t)\) is given by \(\tilde{\mathsf {x}}(t) = \hat{\mathsf {x}}(t) - \mathsf {x}(t)\), which has the variance

$$\begin{aligned} \sigma_{\tilde{\mathsf {x}}}^2 = {{\,{\mathbb{E}}\,}}\big [\left| \hat{\mathsf {x}}(t)-\mathsf {x}(t)\right| ^2\big ] = \frac{1}{\pi } \int_{2 \pi W}^{\infty } S_{\mathsf {x}}(\omega ) \hbox {d}\omega \end{aligned}$$

where \(S_{\mathsf {x}}(\omega )\) is the PSD of \(\mathsf {x}(t)\). The transmit power of the system is, thus, given as \(P_{\hat{\mathsf {x}}} = P-{\sigma }^2_{\tilde{\mathsf {x}}}\), where P is the average power of \(\mathsf {x}(t)\). It is given by

$$\begin{aligned} P=\frac{\hat{P}}{T_{{\rm avg}}} \left( \int_{0}^{\beta }\cos ^2\left( \frac{\pi }{\beta }t\right) dt+\frac{1}{\lambda }\right) = \frac{\frac{1}{2} + 2 W \lambda ^{-1}}{1 + 2 W \lambda ^{-1}} \hat{P}. \end{aligned}$$
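The closed form (16) can be checked by evaluating the transition-energy integral numerically; the parameter values below are illustrative.

```python
import numpy as np

# Numerical check of the average power P of x(t) in (16).
# P_hat, W, and lam are illustrative values.
P_hat, W, lam = 1.0, 2.0, 2.0
beta = 1.0 / (2.0 * W)                 # transition time, cf. (8)
T_avg = 1.0 / lam + beta               # mean symbol duration, cf. (2)

# Midpoint-rule evaluation of int_0^beta cos^2(pi*t/beta) dt (equals beta/2)
n = 1_000_000
dt = beta / n
tm = (np.arange(n) + 0.5) * dt
integral = dt * np.sum(np.cos(np.pi * tm / beta) ** 2)

P_num = P_hat / T_avg * (integral + 1.0 / lam)
P_closed = (0.5 + 2.0 * W / lam) / (1.0 + 2.0 * W / lam) * P_hat
```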

Note that despite the deterministic nature of the filtering, it is not clear yet how to consider the information contained in the ISI in the derivation of bounds on the mutual information rate. For the purpose of lower-bounding the mutual information rate, we thus treat the ISI as noise, which is discussed in more detail in the corresponding sections. An upper bound on the achievable rate is constructed by not considering the filter distortion. Furthermore, we cannot evaluate the exact transmit power \(P_{\hat{\mathsf {x}}}\) as we only obtain an upper and a lower bound on \(\sigma ^2_{\tilde{\mathsf {x}}}\), cf. Sect. 3. Thus, we define the SNR w.r.t. \(\mathsf {x}(t)\) as

$$\begin{aligned} \rho = \frac{P}{N_0 W}. \end{aligned}$$

Error events and filtering

In this section, we introduce the relevant error events that occur in the system described above. Furthermore, we define signal parameters required for the further analysis, especially w.r.t. the statistics of the ISI. As discussed above, we treat the ISI as noise in order to obtain analytical bounds on the mutual information rate. To quantify the impact of the ISI caused by filtering, we require a tractable model of it. Therefore, we approximate the probability density function (pdf) of the ISI in this section.

Error Events

Transmitting the signal \(\mathsf {x}(t)\) over the channel described in the previous section, including LP-distortion and AWGN, may cause three types of error events:

  • shifts of zero-crossings leading to errors in the magnitudes of the received symbol corresponding to \(\mathsf {A}_k\)

  • insertion of zero-crossings causing an insertion of received symbols

  • deletion of zero-crossing pairs, leading to the deletion of received symbols.

To the best of our knowledge, for channels with insertions and deletions only capacity bounds for binary channels are available, e.g., [19,20,21,22]. In (8), we match the transition time \(\beta\) of the input sequence to the channel bandwidth. Thus, the filtered noise process at time instants spaced by a temporal distance larger than \(\beta\) can be assumed to be uncorrelated, and the possibility of a noise event deleting two consecutive zero-crossings (and, hence, an entire symbol) can be neglected. This argument is supported by the simulation results presented in Appendix A.

Thus, the error events to be considered are shifts and insertions of zero-crossings. Insertions are synchronization errors that prevent the receiver from correctly identifying the beginning of a transmit symbol. Dobrushin has proven information stability and Shannon’s coding theorem for channels with synchronization errors given discrete and finite random variables [23], although to him “it appears that these restrictions are not essential”. For the case of the continuous random processes \({{\varvec {\mathsf{{A}}}}}\) and \({{\varvec {\mathsf{{D}}}}}\) this proof remains for future work.

In order to analyze the achievable rate, we use the temporal separation of the two error events (shifts and insertions of zero-crossings) to separately evaluate their impact. This separation is given as long as there is only one zero-crossing in each transition interval (TI) \(\left[ \mathsf {T}_k,\mathsf {T}_k+\beta \right]\). Since the noise is bandlimited with bandwidth W, which is matched to the length \(\beta\) of one TI, cf. (8), the dynamics of the noise within the TI are limited. Thus, in the mid-to-high SNR regime, multiple zero-crossings per TI occur only with very small probability. Numerical evaluations of curve-crossing problems for Gaussian random processes support this argument for an SNR above 5 dB, cf. Appendix B. For this analysis, the distribution of the distortion \(\tilde{\mathsf {x}}(t)\) is assumed to be Gaussian, which is justified below in Sect. 3.3.

Some signal properties induced by filtering

One important parameter to quantify the ISI is the variance \(\sigma ^2_{\tilde{\mathsf {x}}}\) of the LP-distortion, cf. (15). In order to evaluate (15), we require information on the spectrum \(S_{\mathsf {X}}(\omega )\). In Appendix C, we show that for \(\omega \ne 0\)

$$\begin{aligned} \frac{ \hat{P} \left| G(\omega )\right| ^2 }{T_{{\rm avg}} (1+2 c(\omega ))} \le S_{\mathsf {X}}(\omega ) \le \frac{ \hat{P} \left| G(\omega )\right| ^2 }{T_{{\rm avg}}} (1+2 c(\omega )) \end{aligned}$$

with \(c(\omega ) = \frac{1}{\sqrt{1+\omega ^2 \lambda ^{-2}} - 1}\). For the sine-waveform introduced in (10), we have

$$\begin{aligned} \left| G(\omega )\right| ^2 = 2 (1+\cos (\omega \beta )) \left[ \frac{\pi ^2}{\omega (\pi ^2-\omega ^2 \beta ^2)}\right] ^2. \end{aligned}$$

With this, (15), and \(\Gamma = \int \nolimits_{2 \pi W}^{\infty } \left| G(\omega )\right| ^2 \hbox {d}\omega\), we can bound \(\sigma ^2_{\tilde{\mathsf {x}}}\) by

$$\begin{aligned} \frac{ \hat{P} }{(1+2 c_1) \pi T_{{\rm avg}}} \Gamma \le \sigma ^2_{\tilde{\mathsf {x}}} \le \frac{ \hat{P} (1+2 c_1) }{\pi T_{{\rm avg}}} \Gamma . \end{aligned}$$

In order to obtain (20), one further bounding step is applied. Note that \(c(\omega )\) is monotonically decreasing w.r.t. \(|\omega |\) and, hence, for all \(|\omega | \ge 2\pi W\) it holds that \(c(\omega ) \le c(2 \pi W) = c_1.\) Given the sine-waveform in (10), we have \(\Gamma = \frac{\beta c_0}{2 \pi }\), such that

$$\begin{aligned} \frac{\hat{P} \beta }{2 (1+2 c_1) T_{{\rm avg}} \pi ^2} c_0 \le \sigma_{\tilde{\mathsf {x}}}^2 \le \frac{(1+2 c_1) \hat{P} \beta }{2 T_{{\rm avg}} \pi ^2} c_0 \end{aligned}$$

where \(c_0 = - 3\gamma - 3 \log (2\pi ) + 3 {{\,\mathrm{Ci}\,}}(2 \pi ) - \pi ^2 + 4 \pi {{\,\mathrm{Si}\,}}(\pi ) - \pi {{\,\mathrm{Si}\,}}(2\pi )\), with \(\gamma \approx 0.5772\) being the Euler-Mascheroni constant.
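The constants entering the bounds (21) can be evaluated with standard special-function routines. The following sketch computes \(c_0\), \(c_1\), and the resulting bounds on \(\sigma^2_{\tilde{\mathsf {x}}}\) for an illustrative parameter choice (\(\kappa = 1\)).

```python
import numpy as np
from scipy.special import sici  # returns (Si(x), Ci(x))

# Evaluation of the constants c0 and c1 and the bounds (21) on the
# LP-distortion variance. P_hat, W, and lam are illustrative values.
Si = lambda x: sici(x)[0]   # sine integral
Ci = lambda x: sici(x)[1]   # cosine integral

c0 = (-3 * np.euler_gamma - 3 * np.log(2 * np.pi) + 3 * Ci(2 * np.pi)
      - np.pi ** 2 + 4 * np.pi * Si(np.pi) - np.pi * Si(2 * np.pi))

P_hat, W, lam = 1.0, 1.0, 1.0
beta = 1.0 / (2.0 * W)
T_avg = 1.0 / lam + beta
c1 = 1.0 / (np.sqrt(1.0 + (2.0 * np.pi * W / lam) ** 2) - 1.0)  # c(2*pi*W)

var_lb = P_hat * beta * c0 / (2.0 * (1.0 + 2.0 * c1) * T_avg * np.pi ** 2)
var_ub = (1.0 + 2.0 * c1) * P_hat * beta * c0 / (2.0 * T_avg * np.pi ** 2)
```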

Later on, we will also require the parameter \(s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(0)\), which is the second derivative of the autocorrelation function (ACF) of \(\tilde{\mathsf {x}}(t)\) at \(t=0\), see Sects. 3.3 and 6. The ACF of the lowpass-distortion \(\tilde{\mathsf {x}}(t)\) is given by

$$\begin{aligned} s_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(\tau ) = \frac{1}{\pi } \int_{2 \pi W}^{\infty } S_{\mathsf {X}} (\omega ) \cos (\omega \tau ) \hbox {d} \omega \end{aligned}$$

such that for its second derivative it can be written

$$\begin{aligned} s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(\tau ) = \frac{1}{\pi } \int_{2 \pi W}^{\infty } S_{\mathsf {X}} (\omega ) \frac{\partial ^2}{\partial \tau ^2} \cos (\omega \tau ) \hbox {d} \omega \end{aligned}$$

where the exchangeability of differentiation and integration has been shown via Lebesgue’s dominated convergence theorem [24, Theorem 1.34] with the dominating function \(g(\omega ) = \omega ^2 S_{\mathsf {X}}(\omega )\). Due to \(\frac{\partial ^2}{\partial \tau ^2} \cos (\omega \tau ) \big |_{\tau =0}\) \(= - \omega ^2\) in (23) and since \(S_{\mathsf {X}}(\omega )\) is non-negative for all \(\omega\), an upper bound on \(S_{\mathsf {X}}(\omega )\) results in a lower bound on \(s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(0)\). We use (18) and for the sine waveform in (10) we have \(\int_{{2 \pi W}}^{\infty } \omega ^2 \left| G(\omega )\right| ^2 {\hbox {d} \omega } = \frac{\pi c_2}{2 \beta }\) with \(c_2 = \left[ \pi ^2 - \gamma - \log (2 \pi ) - \pi {{\,\mathrm{Si}\,}}(2\pi ) + {{\,\mathrm{Ci}\,}}(2 \pi )\right]\). This yields

$$\begin{aligned} s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(0) \ge - \frac{(1+2 c_1) \hat{P}}{2 T_{{\rm avg}} \beta } c_2. \end{aligned}$$

Furthermore, the description of the filtered pulse \(\hat{g}(t)\) can be tedious since for \(t>\beta\) the pulse \(\hat{g}(t)\) exhibits the typical ringing, which can be difficult to characterize compactly. The value

$$\begin{aligned} u=(\hat{g}(\beta )-1) \sqrt{\hat{P}} = \frac{2{{\,\mathrm{Si}\,}}(\pi )+{{\,\mathrm{Si}\,}}(2 \pi )}{2\pi } \sqrt{\hat{P}} \approx 0.81 \sqrt{\hat{P}} \end{aligned}$$

represents the lowest signal level of \(\hat{g}(t)\) for \(t>\beta\), cf. Fig. 3, and thus can serve as a lower bound on \(\hat{g}(t)\) for \(t>\beta\). A simplified description for the transition can be obtained by using the slope of \(\hat{g}(t)\) at \(t={\beta }/{2}\), which corresponds to the slope of the filtered version \(\hat{f}(t)\) of \(f (t)\) at \(t=0\). Thus, with \(\hat{f}(t) = \hat{g}(t+{\beta }/{2})-1\) for \(-\beta /2 \le t\le \beta /2\), we have

$$\begin{aligned} \frac{\hbox {d}\, \hat{f}(t)}{\hbox {d}t}\big |_{t=0} = \frac{{{\,\mathrm{Si}\,}}(\pi )}{\beta } \end{aligned}$$

and we can define an approximated version of \(\hat{g}(t)\) as

$$\begin{aligned} \hat{g}_{\rm {appr}}(t) = {\left\{ \begin{array}{ll} 0, &{} t<0\\ \frac{{{\,\mathrm{Si}\,}}(\pi )}{\beta } \left( t-\frac{\beta }{2}\right) +1, &{} 0 \le t \le \beta \\ 1+\frac{u}{\sqrt{\hat{P}}},&{}t>\beta \\ \end{array}\right. }. \end{aligned}$$
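A small sketch of (25)-(27): it evaluates \(u\) and implements the piecewise approximation \(\hat{g}_{\rm {appr}}(t)\). The values of \(\hat{P}\) and \(\beta\) are illustrative.

```python
import numpy as np
from scipy.special import sici  # returns (Si(x), Ci(x))

# The lowest HP level u of (25) and the piecewise approximation (27)
# of the filtered pulse. P_hat and beta are illustrative values.
Si = lambda x: sici(x)[0]
P_hat, beta = 1.0, 0.25

u = (2.0 * Si(np.pi) + Si(2.0 * np.pi)) / (2.0 * np.pi) * np.sqrt(P_hat)

def g_hat_appr(t):
    """Piecewise approximation of the filtered pulse g_hat(t), cf. (27)."""
    t = np.asarray(t, dtype=float)
    ramp = Si(np.pi) / beta * (t - beta / 2.0) + 1.0   # slope from (26)
    return np.where(t < 0, 0.0,
                    np.where(t <= beta, ramp, 1.0 + u / np.sqrt(P_hat)))
```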
Fig. 3

ISI model: original \(g(t-\mathsf {T}_k)-1\), filtered \(\hat{g}(t-\mathsf {T}_k)-1\), and interfering pulses \(\hat{g}(t-\mathsf {T}_l)-1|_{l \ne k}\). The ISI at a time \(t_k\) is the superposition of the contribution of all pulses \(\hat{g}(t-\mathsf {T}_l)-1\), where \(l\ne k\)

Probability distribution of the ISI

Our approach to approximate the ISI distribution is shown in Fig. 3. It depicts the designed waveform g(t), the transmit waveform \(\hat{g}(t)\), and the approximation \(\hat{g}_{\rm {appr}}(t)\). The original sequence \(\mathsf {x}(t)\) is designed such that there is no ISI. Due to LP-filtering, \(\hat{g}(t)\) shows the typical ringing, such that, depending on the temporal distances between the pulses given by the data symbols \(\mathsf {A}_k\), interference occurs. Starting from \(\hat{g}_{\rm {appr}}(t)\), we already have a characterization of the impact of filtering on the pulse starting at \(\mathsf {T}_k\), which we refer to as the kth pulse. It remains to characterize the ISI generated by all neighboring pulses. Due to the separability of the error events, cf. Sect. 3.1, we divide the time interval belonging to the kth pulse into a TI and a hold period (HP).

In general, the interfering signal \(\tilde{\mathsf {x}}(t_k)\) at any time \(t_k \in [\mathsf {T}_k,\mathsf {T}_{k+1}]\), i.e., either TI or HP, can be represented by the sum of ISI-contributions of all other pulses as

$$\begin{aligned} \tilde{\mathsf {x}}(t_k) = \sum_{l=1}^{k-1} \tilde{\mathsf {x}}_l(t_k) +\sum_{l=k+1}^{K} \tilde{\mathsf {x}}_l(t_k) = \tilde{\mathsf {x}}_{\rm {lhs}}(t_k)+\tilde{\mathsf {x}}_{\rm {rhs}}(t_k). \end{aligned}$$

The \(\tilde{\mathsf {x}}_l(t_k)\) are obtained via a deterministic mapping using \(\tilde{g}(t) = \hat{g}(t)-g(t)\) as

$$\begin{aligned} \tilde{\mathsf {x}}_l(t_k) = {\left\{ \begin{array}{ll} (-1)^l \tilde{g}(\sum \nolimits_{i=l+1}^{k} \mathsf {A}_i +\tilde{t}_k) &{}~~l<k\\ (-1)^{l+1} \tilde{g}(\sum \nolimits_{i=k+1}^{l} \mathsf {A}_i -\tilde{t}_k+\beta ) &{}~~l>k \end{array}\right. } \end{aligned}$$

where \(\tilde{t}_k = t_k-\mathsf {T}_k\) and the sums \(\mathsf {L}_{n+1}^m = \sum \nolimits_{i=n+1}^{m} \mathsf {A}_i\), \(m>n\), follow the Gamma-distribution in (5). Since the \(\mathsf {A}_k\) are i.i.d., \(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k)\) and \(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k)\) are independent, such that

$$\begin{aligned} p(\tilde{\mathsf {x}}(t_k)) = p(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k)) *p(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k)). \end{aligned}$$

Unfortunately, \(\tilde{g}(t)\) cannot be inverted, which makes the analytical derivation of \(p(\tilde{\mathsf {x}}(t_k))\) infeasible. Furthermore, the \(\tilde{\mathsf {x}}_l(t_k)\) for \(l>k\) and \(l<k\), respectively, are not independent, such that the overall distributions \(p(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k))\) and \(p(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k))\) cannot be obtained by simple convolution of the densities \(p(\tilde{\mathsf {x}}_l(t_k))\). Thus, we obtain an empirical distribution by analyzing \(10^5\) sequences \({{\varvec {\mathsf{{A}}}}}^{(K)}\) with 100 interfering pulses for each of \(\tilde{\mathsf {x}}_{\rm {lhs}}(t_k)\) and \(\tilde{\mathsf {x}}_{\rm {rhs}}(t_k)\). The results are depicted in Fig. 4. Given the symmetry, we only analyze the scenario of an up-crossing symbol, i.e., a positive transition slope as depicted in Fig. 3. Due to the temporal separation of the two error types, zero-crossing shifts and inserted zero-crossings (cf. Sect. 3.1), we discuss \(p(\tilde{\mathsf {x}}(t_k))\) separately for these two cases.
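A reduced-scale version of this Monte Carlo analysis can be sketched as follows. Instead of evaluating \(\tilde{g}(t)\) analytically, the sketch applies an FFT-based ideal lowpass to \(\mathsf {x}(t)\) on a finite grid and records the distortion at the designed zero-crossings \(\mathsf {T}'_k\); the parameter values and sample counts are purely illustrative and far smaller than the \(10^5\) sequences used in the paper.

```python
import numpy as np

# Reduced-scale Monte Carlo of the ISI at the designed zero-crossings T'_k.
# The ideal LP is applied numerically via FFT masking; kappa = W/lam = 1 and
# all sizes (K, dt, number of sequences) are illustrative.
rng = np.random.default_rng(3)
P_hat, W, lam = 1.0, 1.0, 1.0
beta = 1.0 / (2.0 * W)
K, dt = 200, 1e-2

def f(t):  # sine halfwave transition, cf. (10)
    return np.where(np.abs(t) <= beta / 2, np.sin(np.pi * t / beta), 0.0)

def g(t):  # pulse shape, cf. (7)
    return np.where(t < 0, 0.0, np.where(t <= beta, 1.0 + f(t - beta / 2), 2.0))

samples = []
for _ in range(40):
    A = beta + rng.exponential(1.0 / lam, size=K)
    T = np.cumsum(A)
    t = np.arange(0.0, T[-1] + 20.0, dt)
    x = np.sqrt(P_hat) * (1.0 + sum((-1) ** (k + 1) * g(t - T[k]) for k in range(K)))
    freqs = np.fft.fftfreq(len(t), d=dt)
    X = np.fft.fft(x)
    X[np.abs(freqs) > W] = 0.0                 # ideal LP, one-sided bandwidth W
    x_tilde = np.real(np.fft.ifft(X)) - x      # LP-distortion
    # distortion at the designed zero-crossings T'_k, edge symbols discarded
    idx = np.round((T[20:-20] + beta / 2) / dt).astype(int)
    samples.extend(x_tilde[idx])

samples = np.asarray(samples)
mean_isi = samples.mean()   # close to zero, in line with the symmetry argument at (31)
```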

Fig. 4

Numerically obtained distribution of \(\tilde{\mathsf {x}}(t_k)\) for a \(t_k = t_{k,\text {TI}}\) and b \(t_k = t_{k,\text {HP}}\) (solid lines) and bounds on the distribution (dashed and dashdotted lines) for different \(\kappa\). For lower-bounding the mutual information rate, a Gaussian distribution with the obtained upper bound on the variance of \(\tilde{\mathsf {x}}(t_k)\) is suitable for \(\kappa \lesssim 3\)

Case a) represents \(t_k = t_{k,\text {TI}} = \mathsf {T}'_k=\mathsf {T}_k+\frac{\beta }{2}\), i.e., the zero-crossing of the kth pulse of \(\mathsf {x}(t)\) in the TI. This corresponds to the zero-crossing-shift error analyzed to obtain the lower and the upper bound on the mutual information rate in Sect. 5. With (29), we have

$$\begin{aligned} \tilde{\mathsf {x}}_{l}(t_{k,\text {TI}}) = {\left\{ \begin{array}{ll} (-1)^l \tilde{g}(\mathsf {L}_{l+1}^k + \frac{\beta }{2}) &{}~~l<k\\ (-1)^{l+1} \tilde{g}(\mathsf {L}_{k+1}^l + \frac{\beta }{2}) &{}~~l>k \end{array}\right. }. \end{aligned}$$

From (31) it can be seen that interfering pulses separated by the same number of symbols from \(t_{k,\text {TI}}\), i.e., with the same probability distribution of \(\mathsf {L}_{n+1}^m\), are weighted with inverted signs. Thus, the convolution in (30) becomes an auto-correlation, such that we expect an even function with mean zero, as can be seen in Fig. 4a) for different values of \(\kappa\). Using the bounds on the variance of the ISI obtained below, cf. (34) and (21), we see that the Gaussian distribution is well suited for describing the effect of the ISI up to ratios \(\kappa = W/\lambda \lesssim 3\).

Case b) considers \(t_k = t_{k,\text {HP}}=\mathsf {T}_k + \frac{\mathsf {A}_{k+1}+\beta }{2}\), which lies in the middle of the hold period. This is the worst-case assumption for analyzing the impact of additional zero-crossings in Sect. 6: the kth pulse \(\hat{g}(t-\mathsf {T}_k)-1\) (green) is lower-bounded by its lowest value in the HP, \(u \approx 0.81 \sqrt{\hat{P}}\). For \(\mathsf {T}_k + \beta< t <t_{k,\text {HP}}\), the strongest interference comes from the \((k+1)\)th (red) pulse. Given the monotonically decreasing envelope of \(\tilde{g}(t)\) for \(t\ge {\beta }/{2}\), the interference of the \((k+1)\)th pulse is largest at \(t_{k,\text {HP}}\). For \(t_{k,\text {HP}}<t<\mathsf {T}_{k+1}\), the scenario can be analyzed with the red pulse approximated by u and the green one as interferer. Thus, in the middle of the HP (29) becomes

$$\begin{aligned} \tilde{\mathsf {x}}_{l}(t_{k,\text {HP}}) ={\left\{ \begin{array}{ll} (-1)^l \tilde{g}\left( \mathsf {L}_{l+1}^k +\frac{\mathsf A_{k+1}}{2}+ \frac{\beta }{2}\right) &{}l<k\\ (-1)^{l+1} \tilde{g}\left( \mathsf {L}_{k+2}^l +\frac{\mathsf A_{k+1}}{2} + \frac{\beta }{2}\right) &{}l>k \end{array}\right. }. \end{aligned}$$

In contrast to (31), the interferers with the same probability distribution of \(\mathsf {L}_{n+1}^m\) are now weighted with the same sign. For \(\mathsf {A}_{k} +\frac{\mathsf {A}_{k+1}}{2}\) and \(\frac{\mathsf {A}_{k+1}}{2}+\mathsf {A}_{k+2}\), these are the pulses \(k-1\) (orange) and \(k+2\) (violet). Thus, the convolution in (30) yields a function whose mean deviates from zero towards positive values, cf. Fig. 4b. However, given that we look at an up-crossing, i.e., \(\mathsf {x}(\mathsf {T}_k+\beta )>0\), the tail of the distribution towards negative \(\tilde{\mathsf {x}}\) is the relevant one for studying the additional zero-crossings. As can be seen, the assumed Gaussian distribution with zero mean and the upper bound on the variance obtained below, cf. (34) and (21), has a heavier tail towards negative values than the actual distribution for \(\kappa \le 3\) and thus enables us to give an upper bound on the probability of additional zero-crossings in that region of \(\kappa\).

The variances of \(\tilde{\mathsf {x}}\) in both cases, TI and HP, depend on the amount of energy \(\sigma ^2_{\tilde{\mathsf {x}}}\) removed by LP-filtering, for which we obtained bounds in Sect. 3.2. Besides the ISI, \(\sigma ^2_{\tilde{\mathsf {x}}}\) also captures the distortion of the current pulse, which is already considered in the approximation \(\hat{g}_{\rm {appr}}\), cf. Fig. 3. The portion of the energy of \(\tilde{g}(t)\) that contributes to ISI is the one for which \(t\ge t_{\rm {min}}\), where \(t_{\rm {min}}\) is the minimum temporal distance between an interfering pulse and \(t_{k,\text {TI}}\) or \(t_{k,\text {HP}}\), respectively. With \(\mathsf {A}_k \ge \beta\), (31) and (32), for the TI we have \(t_{\rm {min}} = \min (\mathsf {L}_{n+1}^m) +\frac{\beta }{2} = \beta + \frac{\beta }{2}\), while for the HP \(t_{\rm {min}} = \frac{\min (\mathsf {A}_{k+1})}{2}+\frac{\beta }{2} = \beta\) holds. Thus, we consider the fraction \(\alpha\) of \(\sigma ^2_{\tilde{\mathsf {x}}}\) contributing to ISI to be

$$\begin{aligned} \alpha = \frac{\int_{t_{\rm {min}}}^\infty \tilde{g}^2(t) \hbox {d}t}{\int_{\frac{\beta }{2}}^\infty \tilde{g}^2(t) \hbox {d}t},\,t_{\rm {min}}={\left\{ \begin{array}{ll} \beta &{}\text {HP}\\ \beta + \frac{\beta }{2} &{}\text {TI} \end{array}\right. }. \end{aligned}$$

Numerical evaluation of (33) leads to \(\alpha_{\rm {HP}}\approx 0.425\) and \(\alpha_{\rm {TI}}\approx 0.325\), such that

$$\begin{aligned} \sigma_{\rm {ISI}}^2 = {\left\{ \begin{array}{ll} \alpha_{\rm {HP}} \sigma ^2_{\tilde{\mathsf {x}}} &{}\text {in the HP}\\ \alpha_{\rm {TI}} \sigma ^2_{\tilde{\mathsf {x}}} &{}\text {in the TI} \end{array}\right. } \end{aligned}$$


$$\begin{aligned} s^{\prime\prime}_{{\rm ISI}}(0) = \alpha_{\rm {HP}} s^{\prime\prime}_{\tilde{\mathsf {x}}\tilde{\mathsf {x}}}(0). \end{aligned}$$

Bounding the achievable rate

The capacity of a communication channel represents the highest rate at which we can transmit over the channel with an arbitrarily small probability of error and is defined as

$$\begin{aligned} C = \sup \,I'\left( {{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}\right) \end{aligned}$$

where the supremum is taken over all distributions of the input signal for which \(\hat{\mathsf {x}}(t)\) is constrained to the average power \(P-\sigma_{\tilde{\mathsf {x}}}^2\) and the bandwidth W. In (36), the mutual information rate is given by

$$\begin{aligned} I'\left( {{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}\right) = \lim \limits_{K \rightarrow \infty } \frac{1}{K T_{{\rm avg}}}\,I\left( {{\varvec {\mathsf{{A}}}}}^{(K)};{{\varvec {\mathsf{{D}}}}}^{(M)}\right) \end{aligned}$$

with \(I\big ({{\varvec {\mathsf{{A}}}}}^{(K)};{{\varvec{\mathsf{{D}}}}}^{(M)}\big )\) being the mutual information. Despite the fact that M is a random variable that can be larger than K, both processes \({{\varvec{\mathsf{{A}}}}}^{(K)}\) and \({{\varvec {\mathsf{{D}}}}}^{(M)}\) occupy the same time interval, such that we define the mutual information rate based on a normalization with respect to the expected transmission time \(K T_{{\rm avg}}\). In the present paper, we derive a lower bound on the capacity by restricting ourselves to input signals as described in Sect. 2.1. However, later we will consider the supremum of \(I'\big ({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}\big )\) over the parameter \(\lambda\) of the distribution of the \(\mathsf {A}_k\) in (2). The AWGN capacity serves as an upper bound on the capacity of the considered system due to the data processing inequality. Furthermore, we derive an upper bound on the achievable rate of this specific signaling scheme in order to quantify the impact of the bounding steps taken.

We use the concept of a genie-aided receiver as in [21], which has information on the inserted zero-crossings contained in an auxiliary process \({{\varvec {\mathsf{{V}}}}}\). Based on \({{\varvec {\mathsf{{V}}}}}\), which is described below, the genie-aided receiver can remove the additional zero-crossings. Let \(\hat{{{\varvec {\mathsf{{D}}}}}}\) contain the temporal distances of the zero-crossings at the receiver when the additional zero-crossings are removed. The process \(\hat{{{\varvec {\mathsf{{D}}}}}}\) can be determined based on \({{\varvec {\mathsf{{{D}}}}}}\) and \({{\varvec {\mathsf{{V}}}}}\) such that the mutual information rate in case the receiver has side information about the inserted zero-crossings is given by

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};\hat{{{\varvec {\mathsf{{D}}}}}}) = I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}}). \end{aligned}$$

Using the chain rule of mutual information, we have

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}) = I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}}) - I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}}). \end{aligned}$$

Here, \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) is the mutual information rate without the side information on additional zero-crossings at the receiver. The effect of the shifted zero-crossings is captured in \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) and the impact of the inserted zero-crossings is described by \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}})\).

Given that \(I'({{\varvec{\mathsf{{A}}}}};{{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}})\ge 0\) since mutual information is always non-negative, an upper bound on the mutual information rate \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) can be given independently of the nature of the auxiliary process \({{\varvec {\mathsf{{V}}}}}\) as

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}) \le I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}}) = I'({{\varvec {\mathsf{{A}}}}};\hat{{{\varvec {\mathsf{{D}}}}}}). \end{aligned}$$
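The chain-rule decomposition behind (39) and the bound (40) can be verified numerically on a small discrete example. The following sketch uses an arbitrary toy joint pmf over \((\mathsf{A},\mathsf{D},\mathsf{V})\), not the paper's actual processes, to confirm that \(I(\mathsf{A};\mathsf{D},\mathsf{V}) = I(\mathsf{A};\mathsf{D}) + I(\mathsf{A};\mathsf{V}|\mathsf{D})\) and hence \(I(\mathsf{A};\mathsf{D}) \le I(\mathsf{A};\mathsf{D},\mathsf{V})\).

```python
import itertools
import math
import random

random.seed(1)

# Toy joint pmf over (A, D, V); alphabets and probabilities are
# illustrative choices only.
keys = list(itertools.product(range(2), range(3), range(2)))
w = [random.random() for _ in keys]
p = {k: wi / sum(w) for k, wi in zip(keys, w)}

def H(*idx):
    """Entropy (bits) of the marginal over the given coordinates (0=A, 1=D, 2=V)."""
    m = {}
    for k, pr in p.items():
        sub = tuple(k[i] for i in idx)
        m[sub] = m.get(sub, 0.0) + pr
    return -sum(q * math.log2(q) for q in m.values() if q > 0)

I_AD = H(0) + H(1) - H(0, 1)                      # I(A;D)
I_ADV = H(0) + H(1, 2) - H(0, 1, 2)               # I(A;D,V)
I_AVgD = H(0, 1) + H(1, 2) - H(1) - H(0, 1, 2)    # I(A;V|D)

assert abs(I_ADV - (I_AD + I_AVgD)) < 1e-9   # chain rule, cf. (39)
assert I_AD <= I_ADV + 1e-9                  # since I(A;V|D) >= 0, cf. (40)
```

The second assertion is exactly the bounding step used to remove the dependence on the auxiliary process \({{\varvec {\mathsf{{V}}}}}\).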

For the characterization of the auxiliary process \({{\varvec {\mathsf{{V}}}}}\), consider the transmission of one input symbol \(\mathsf {A}_k\). Its bounding zero-crossings \(\mathsf {T}'_{k-1}\) and \(\mathsf {T}'_k\) will be shifted to \(\hat{\mathsf {T}}_{k-1}\) and \(\hat{\mathsf {T}}_k\) by the noise process, such that

$$\begin{aligned} \hat{\mathsf {T}}_k = \mathsf {T}'_k + \mathsf {S}_k \end{aligned}$$

where \(\mathsf {S}_k\) denotes the error introduced by the shift. Additionally introduced zero-crossings will lead to a vector of received symbols \({{\varvec {\mathsf{{D}}}}}_k\) corresponding to every input symbol \(\mathsf {A}_k\). The latter is reversible if the receiver knows which zero-crossings correspond to the originally transmitted ones. The receiver needs to sum up the distances \(\mathsf {D}_m\) contained in \({{\varvec {\mathsf{{D}}}}}_k\) in order to remove the additional zero-crossings. Then, for every input symbol \({\mathsf {A}}_k\) we obtain a symbol \(\hat{\mathsf {D}}_k\) that only contains the error event of shifted zero-crossings, such that we have \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}=[\hat{\mathsf {D}}_1,...,\hat{\mathsf {D}}_k,...,\hat{\mathsf {D}}_K]\). Intuitively, one would start such an algorithm with the first received symbol, such that instead of providing the receiver with the exact positions in time of the additional zero-crossings, it suffices to know for each transmit symbol \(\mathsf {A}_k\) how many received symbols have to be summed up to obtain \(\hat{\mathsf {D}}_k\) and, thus, the sequence \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}\). Hence, the auxiliary sequence \({{\varvec {\mathsf{{V}}}}}^{(K)}\) consists of positive integers \(\mathsf {V}_k \in {\mathbb{N}}\), representing for each input symbol the number of corresponding output symbols. Thus, the auxiliary process \({{\varvec {\mathsf{{V}}}}}\) is discrete, which we use for lower-bounding the information rate in (39). With

$$\begin{aligned} H'\left( {{\varvec {\mathsf{{V}}}}}\right) = \lim \limits_{K \rightarrow \infty } \frac{1}{K T_{{\rm avg}}}\,H({{\varvec {\mathsf{{V}}}}}^{(K)}) \end{aligned}$$

being the entropy rate of the process \({{\varvec {\mathsf{{V}}}}}\), we have

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})&= {I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})} - H'({{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}}) + H'({{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{A}}}}}) \nonumber \\&\ge I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}}) - {{H'({{\varvec {\mathsf{{V}}}}}|{{\varvec {\mathsf{{D}}}}})}} \end{aligned}$$
$$\begin{aligned} &\ge I^{\prime}({{\varvec {\mathsf{{A}}}}};{{\varvec{\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}}) - {H^{\prime}({{\varvec{\mathsf{{V}}}}})} \end{aligned}$$

where (43) results from the fact that the entropy rate of a discrete random process is non-negative and (44) is due to the fact that conditioning cannot increase entropy. In the following, we will derive bounds on \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) and \(H'({{\varvec {\mathsf{{V}}}}})\).
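The genie-aided collapse of \({{\varvec {\mathsf{{D}}}}}\) into \(\hat{{{\varvec {\mathsf{{D}}}}}}\) described above can be sketched in a few lines. The function name `collapse` and the toy numbers are illustrative; the only assumption is the summation rule stated in the text: for each transmit symbol k, the receiver sums \(\mathsf{V}_k\) consecutive received distances.

```python
def collapse(D, V):
    """Sum, for each transmit symbol, V[k] consecutive received distances D."""
    D_hat, i = [], 0
    for v in V:
        D_hat.append(sum(D[i:i + v]))
        i += v
    assert i == len(D)  # every received distance is consumed exactly once
    return D_hat

# Illustrative toy sequence: symbols 1 and 3 each contain one inserted
# zero-crossing, splitting their true distance into two received ones.
D = [0.5, 0.75, 1.25, 0.25, 1.0]
V = [2, 1, 2]
print(collapse(D, V))  # -> [1.25, 1.25, 1.25]
```

The output sequence has the same length K as the input sequence \({{\varvec {\mathsf{{A}}}}}^{(K)}\), which is exactly what the genie-aided analysis in the next subsection relies on.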

Achievable rate of the genie-aided receiver

To evaluate the mutual information rate \(I'({{\varvec {\mathsf{{A}}}}};\hat{{{\varvec {\mathsf{{D}}}}}}) = I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) of the genie-aided receiver, we have to evaluate the mutual information rate between the sequences of temporal spacings of zero-crossings \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}\). Note that in contrast to the original channel, here both vectors \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \(\hat{{{\varvec {\mathsf{{D}}}}}}^{(K)}\) are of the same length, as additional zero-crossings are removed by the genie-aided receiver. The only remaining error is a shift \(\mathsf {S}_k\) of every zero-crossing instant \(\mathsf {T}'_k\) to \(\hat{\mathsf {T}}_k\). Hence, on a symbol level we can write with (41) and (1) for the channel output

$$\begin{aligned} \hat{\mathsf {D}}_k = \hat{\mathsf {T}}_k - \hat{\mathsf {T}}_{k-1} = \mathsf {A}_k + \mathsf {S}_k - \mathsf {S}_{k-1} = \mathsf {A}_k + \mathsf {\Delta }_k. \end{aligned}$$

In order to derive an upper and a lower bound on the mutual information rate of this channel, knowledge on the probability distribution of \(\mathsf {S}_k\) is required.

The distribution of the shifting errors

As \(\hat{\mathsf {x}}(t)\) is bandlimited, it can be completely described by a sampled representation with sampling rate \(1/\beta\) fulfilling the Nyquist condition, cf. (8). Note that we refer to the concept of sampling here only to evaluate the value of the overall distortion

$$\begin{aligned} \mathsf {z}(t) = \hat{\mathsf {n}}(t)+\tilde{\mathsf {x}}(t) \end{aligned}$$

at the time instant \(\mathsf {T}'_k\) of the original zero-crossing. We still assume the receiver to be able to resolve the zero-crossing instants with infinite resolution.

The distribution of the shifting error \(\mathsf {S}_{k}\) can be evaluated by mapping the pdf of the additive noise \(\mathsf {z}(\mathsf {T}'_k)\) at the time instant \(\mathsf {T}'_k\) into the zero-crossing error \(\mathsf {S}_{k}\) on the time axis. This is represented in Fig. 5, where the noise, varying slowly due to the bandlimitation, approximately leads to a shift of the filtered transmit waveform \(\hat{f}(t) = 1-\hat{g}(t+{\beta }/{2})\). We assume small noise amplitudes as we are interested in the behavior in the mid-to-high SNR regime. The mapping can then be given as

$$\begin{aligned} \mathsf {z}(\mathsf {T}'_k)&= \sqrt{\hat{P}} \hat{f}(\mathsf {S}_k) = \sqrt{\hat{P}} \left( 1-\hat{g}\left( \mathsf {S}_k+\frac{\beta }{2}\right) \right) \approx -\sqrt{\hat{P}} \frac{{{\,\mathrm{Si}\,}}(\pi )}{\beta } \mathsf {S}_k. \end{aligned}$$

Based on the mid-to-high SNR assumption, we have small \(\mathsf {z}(\mathsf {T}'_k)\), such that we assume \(\mathsf {S}_k \ll \beta\). Thus, we can linearize \(\hat{f}(t)\) using its first-order Taylor approximation at \(t = 0\) as in (26). This corresponds to approximating g(t) by \(g_{\rm {appr}}(t)\) in the TI, cf. (27). We show in Appendix D that this is valid for \(\rho \ge 10\) dB.

Fig. 5

Transformation from amplitude noise \(\mathsf {z}(\mathsf {T}'_k)\) to shift error \(\mathsf {S}_k\) for an SNR of 10 dB. In the transition interval \([\mathsf {T}_k,\mathsf {T}_k+\beta ]\), the transmitted signal is represented by the filtered transmit waveform \(\hat{f}(t)\)

In order to derive the pdf of \(\mathsf {S}_k\), we need to obtain the pdf of the additive noise \(\mathsf {z}(\mathsf {T}'_k)\), which is composed of two parts: the LP-filtered Gaussian noise \(\hat{\mathsf {n}}(\mathsf {T}'_k)\) and the ISI caused by the oscillation of the neighboring pulses \(\hat{g}(t)\) due to the LP-filtering. Again, due to the 1-bit quantization it is not clear how to account for the information contained in the ISI when bounding the mutual information rate. Thus, for the purpose of lower-bounding the mutual information rate we model the ISI as additional noise, since it affects the position of the zero-crossings. For the construction of an upper bound on the mutual information rate, the ISI is not considered and only \(\hat{\mathsf {n}}(\mathsf {T}'_k)\) contributes to \(\mathsf {z}(\mathsf {T}'_k)\).

In Sect. 3.3 we have shown that the Gaussian distribution is a good approximation of the distribution of \(\tilde{\mathsf {x}}(t)\) for ratios \(\kappa ={W}/{\lambda }\) in the order of one. These will prove to be the relevant ones in the scenarios considered in this paper. We thus model

$$\begin{aligned} \mathsf {z}(t) \sim \mathcal {N}(0,\sigma_{\mathsf {z}}^2) \end{aligned}$$


$$\begin{aligned} \sigma_{\mathsf{z}}^{2} = \left\{\begin{array}{ll} \sigma_{\hat{\mathsf{n}}}^2+\sigma_{{\rm ISI}}^2& {\text {for the lower bound}}\\ \sigma_{\hat{\mathsf{n}}}^2 & {\text {for the upper bound}}\\ \end{array}\right.\end{aligned}$$

where \(\sigma_{{\rm ISI}}^2\) is the variance of the ISI. We then have

$$\begin{aligned} p_{\mathsf {S}}(s)&= \left| \sqrt{\hat{P}} \frac{\partial \hat{f}(s)}{\partial s} p_{\mathsf {z}}\left( \sqrt{\hat{P}} \hat{f}(s)\right) \right| \end{aligned}$$
$$\begin{aligned} & \approx \sqrt{\frac{\hat{P}{{\,\text{Si}\,}}^2(\pi )}{2 \pi \beta ^2 \sigma_{\mathsf {z}}^2}}\exp \left\{ -\frac{\hat{P}}{2\sigma_{\mathsf {z}}^2}\left(\frac{{{\,\text{Si}\,}}(\pi )}{\beta }s \right) ^{2}\right\}.\end{aligned}$$

Hence, in the mid-to-high SNR case the zero-crossing errors \(\mathsf {S}_{k}\) are approximately Gaussian distributed, i.e., \(\mathsf {S}_{k}\sim \mathcal {N}\left( 0,\sigma_{\mathsf {S}}^{2}\right)\) with

$$\begin{aligned} \sigma_{\mathsf {S}}^{2} = \frac{\sigma_{\mathsf {z}}^2 \beta ^2}{{{\,\mathrm{Si}\,}}^2(\pi ) \hat{P}} . \end{aligned}$$
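The variance (52) resulting from the linearized mapping (50) can be checked by a short Monte Carlo sketch. The parameters \(\hat{P}\), \(\sigma_{\mathsf{z}}^2\), and \(\beta\) below are illustrative assumptions, not values from the paper; \({{\,\mathrm{Si}\,}}(\pi)\) is evaluated by quadrature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Si(pi) = int_0^pi sin(t)/t dt via the midpoint rule.
N = 100000
t = (np.arange(N) + 0.5) * np.pi / N
Si_pi = np.sum(np.sin(t) / t) * np.pi / N

# Illustrative parameters (not from the paper): pulse power P_hat,
# noise variance sigma_z^2, and transition duration beta.
P_hat, sigma_z2, beta = 1.0, 0.05, 0.25

# Linearized mapping (50): z(T'_k) ~= -sqrt(P_hat) * Si(pi)/beta * S_k,
# inverted to map Gaussian amplitude noise into zero-crossing shifts.
z = rng.normal(0.0, np.sqrt(sigma_z2), 200000)
S = -z * beta / (np.sqrt(P_hat) * Si_pi)

sigma_S2 = sigma_z2 * beta**2 / (Si_pi**2 * P_hat)  # closed form (52)
assert abs(Si_pi - 1.8519) < 1e-3
assert abs(S.var() / sigma_S2 - 1.0) < 0.02
```

The empirical variance of the mapped shifts matches (52), confirming that, under the linearization, \(\mathsf{S}_k\) inherits the Gaussianity of \(\mathsf{z}(\mathsf{T}'_k)\) with the stated variance.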

Upper bound on the achievable rate of the genie-aided receiver

Given \(\mathsf {S}_{k}\sim \mathcal {N}\left( 0,\sigma_{\mathsf {S}}^{2}\right)\), cf. (52), and using \(\sigma_{\mathsf {z}}^2=\sigma_{\hat{\mathsf{n}}}^2\), cf. (49), we have for \(\mathsf {\Delta }_k = \mathsf {S}_k - \mathsf {S}_{k-1}\) in (45)

$$\begin{aligned} \mathsf {\Delta }_k \sim \mathcal {N}(0,2\sigma_{\mathsf {S}}^2) \end{aligned}$$

as the \(\mathsf {S}_k\) are approximately independent, given that the minimum temporal distance of the zero-crossings is \(\beta\), cf. (8). The values \(\mathsf {\Delta }_k\) are correlated as each depends on the current and the previous \(\mathsf {S}_k\), such that the ACF of \(\varvec{\Delta }\) is given by

$$\begin{aligned} \phi_{\mathsf {\Delta } \mathsf {\Delta }}(k) = {{\,{\mathbb{E}}\,}}\left[ \mathsf {\Delta }_l \mathsf {\Delta }_{l+k}\right] = {\left\{ \begin{array}{ll} 2 \sigma_{\mathsf {S}}^2, &{} k=0\\ - \sigma_{\mathsf {S}}^2, &{}|k|=1\\ 0, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

which yields for the covariance matrix \(\mathbf {R}^{(K)}_{\mathsf {\Delta }}\) of zero-crossing shifts \(\varvec{\Delta }^{(K)}=[\mathsf {\Delta }_{1},...,\mathsf {\Delta }_K]^T\)

$$\begin{aligned} \mathbf {R}^{(K)}_{\mathsf {\Delta }} = {{\,{\mathbb{E}}\,}}\left[ \varvec{\Delta }^{(K)} (\varvec{\Delta }^{(K)})^T\right] =\sigma_{\mathsf {S}}^{2}\left( \begin{array}{ccccc} 2 &{} -1 &{} 0 &{} \ldots &{} 0\\ -1 &{} 2 &{} -1 &{} \ddots &{} \vdots \\ 0 &{} -1 &{} 2 &{} \ddots &{} 0\\ \vdots &{} \ddots &{} \ddots &{} \ddots &{} -1\\ 0 &{} \ldots &{} 0 &{} -1 &{} 2 \end{array}\right) . \end{aligned}$$
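The tridiagonal structure of (55) follows directly from \(\mathsf{\Delta}_k = \mathsf{S}_k - \mathsf{S}_{k-1}\) with i.i.d. shifts, and can be verified empirically; the dimension and shift variance below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
K, sigma_S = 6, 0.3          # illustrative dimension and shift std

# Delta_k = S_k - S_{k-1} for i.i.d. shifts S_k ~ N(0, sigma_S^2), cf. (45).
S = rng.normal(0.0, sigma_S, (400000, K + 1))
Delta = np.diff(S, axis=1)                    # shape (num_trials, K)
R_emp = Delta.T @ Delta / Delta.shape[0]      # empirical E[Delta Delta^T]

# Tridiagonal covariance (55): 2 sigma_S^2 on the main diagonal,
# -sigma_S^2 on the first off-diagonals, zero elsewhere, cf. (54).
R = sigma_S**2 * (2 * np.eye(K) - np.eye(K, k=1) - np.eye(K, k=-1))

assert np.max(np.abs(R_emp - R)) < 5e-3
```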

Hence, the channel with the genie-aided receiver is a colored additive Gaussian noise channel with input \({{\varvec {\mathsf{{A}}}}}\), output \(\hat{{{\varvec {\mathsf{{D}}}}}}\), and noise \(\varvec{\Delta }\), which is independent of \({{\varvec {\mathsf{{A}}}}}\), cf. (49). The capacity of the colored additive Gaussian noise channel is achieved for Gaussian distributed input symbols [25, Chapter 9, Eq. (9.97)] and provides an upper bound on the mutual information rate of the channel with the genie-aided receiver and the chosen input distribution. Thus, we get

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};\hat{{{\varvec {\mathsf{{D}}}}}}) \le \frac{1}{2 T_{\rm{avg}}} \int \nolimits_{-\frac{1}{2}}^{\frac{1}{2}} \log \left( 1+\frac{(\nu -S_{\mathsf {\Delta }}(f))^{+}}{S_{\mathsf {\Delta }}(f)}\right) df \end{aligned}$$

where \(\nu\) is chosen such that

$$\begin{aligned} \int \nolimits_{-\frac{1}{2}}^{\frac{1}{2}} (\nu -S_{\mathsf {\Delta }}(f))^{+} df = \sigma_{\mathsf {A}}^2 \end{aligned}$$

with \(\sigma_{\mathsf {A}}^2\) given in (4). Moreover, \(S_{\mathsf {\Delta}}(f)\) is the PSD of \(\varvec{\Delta }\) and it is given by the z-transform of (54) as

$$\begin{aligned} S_{\mathsf {\Delta}}(f)=2 \sigma_{\mathsf {S}}^2 (1-\cos (2 \pi f)), \qquad |f|<0.5. \end{aligned}$$

Although \(S_{\mathsf {\Delta}}(f)\) is equal to zero for \(f = 0\), it can be shown that the integral in (56) exists, using that \(\nu \ge (\nu -S_{\mathsf {\Delta }}(f))^{+}\,\forall f\) and solving

$$\begin{aligned} \int \nolimits_{-\frac{1}{2}}^{\frac{1}{2}} \log \left( 1 +\frac{{\nu }/{(2 \sigma ^2_{\mathsf {S}})}}{1-\cos (2 \pi f)}\right) df = {{\,\mathrm{arcosh}\,}}\left( \frac{\nu }{2 \sigma ^2_{\mathsf {S}}}+1\right) . \end{aligned}$$
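The closed form (59) can be checked by numerical quadrature; the test values of \(a = \nu/(2\sigma_{\mathsf{S}}^2)\) below are arbitrary. A midpoint grid avoids the integrable logarithmic singularity at \(f = 0\).

```python
import numpy as np

# Numeric check of (59): for a = nu / (2 sigma_S^2) > 0,
#   int_{-1/2}^{1/2} ln(1 + a / (1 - cos(2 pi f))) df = arcosh(a + 1).
# Midpoint samples never hit f = 0, where the integrand diverges (integrably).
N = 1_000_000
f = (np.arange(N) + 0.5) / N - 0.5
results = {}
for a in (0.1, 1.0, 10.0):
    integrand = np.log1p(a / (1.0 - np.cos(2 * np.pi * f)))
    results[a] = integrand.mean()   # midpoint rule; cell width is 1/N

for a, val in results.items():
    assert abs(val - np.arccosh(a + 1.0)) < 1e-4
```

The identity holds in any logarithm base as long as both sides use the same base; the sketch uses natural logarithms.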

Lower bound on the achievable rate of the genie-aided receiver

For the genie-aided receiver, the mutual information between \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \({\hat{\varvec {\mathsf{D}}}}^{(K)}\) is given by

$$\begin{aligned} I\big ({{\varvec {\mathsf{{A}}}}}^{(K)};{\hat{{\varvec {\mathsf{{D}}}}}}^{(K)}\big )&=h\big ({{\varvec {\mathsf{{A}}}}}^{(K)}\big )-h\big ({{\varvec {\mathsf{{A}}}}}^{(K)}|{\hat{{\varvec {\mathsf{{D}}}}}}^{(K)}\big )\nonumber \\ {}&=h\big ({{\varvec {\mathsf{{A}}}}}^{(K)}\big )-h\big ({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big |{\hat{{\varvec {\mathsf{{D}}}}}}^{(K)}\big ) \end{aligned}$$

where \(h(\cdot )\) denotes the differential entropy. Moreover, \(\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\) is the linear minimum mean-squared error estimate of \({{\varvec {\mathsf{{A}}}}}^{(K)}\) based on \({\hat{\varvec {\mathsf {D}}}}^{(K)}\). Equality (60) follows from the facts that addition of a constant does not change differential entropy and that \(\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\) can be treated as a constant while conditioning on \({\hat{\varvec{\mathsf {D}}}}^{(K)}\) as it is a deterministic function of \({\hat{\varvec{\mathsf {D}}}}^{(K)}\).

Next, we will upper-bound the second term on the RHS of (60), i.e., \(h\big ({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big |{\hat{\varvec{\mathsf {D}}}}^{(K)}\big )\). This term describes the randomness of the linear minimum mean-squared estimation error while estimating \({{\varvec {\mathsf{{A}}}}}^{(K)}\) based on the observation \({\hat{\varvec{\mathsf {D}}}}^{(K)}\). It can be upper-bounded by the differential entropy of a Gaussian random variable having the same covariance matrix [25, Theorem 8.6.5]. With (45), the estimation error covariance matrix of the linear minimum mean-squared error (LMMSE) estimator is given by

$$\begin{aligned} \mathbf {Q}_{{\rm err}}^{(K)}=&{{\,{\mathbb{E}}\,}}\big [\big ({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{ {\rm LMMSE}}^{(K)}\big )\big ({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big )^{T}\big ] \nonumber \\ =&\, \mathbf {Q}_{\mathsf {A}} - (\mathbf {Q}_{\mathsf {A}} + \mathbf {Q}_{\mathsf {A\Delta }})(\mathbf {Q}_{\mathsf {A}} + \mathbf {Q}_{\mathsf {\Delta }}+\mathbf {Q}_{\mathsf {A\Delta}}+\mathbf {Q}_{\mathsf {A\Delta}}^T)^{-1} (\mathbf {Q}_{\mathsf {A}} + \mathbf {Q}_{\mathsf {A\Delta}})^T \end{aligned}$$

where all covariance matrices \(\mathbf {Q}\) are of dimension \(K\times K\) and \(\mathbf {Q}_{\mathsf {A}} = \sigma_{\mathsf {A}}^2 \mathbf {I}^{(K)}\). Furthermore, \(\mathbf {I}^{(K)}\) is the identity matrix of size \(K\times K\), \(\sigma_{\mathsf {A}}^2\) is given in (4), \(\mathbf {Q}_{\mathsf {A\Delta}} = {{\,{\mathbb{E}}\,}}[{{\varvec {\mathsf{{A}}}}}^{(K)}(\varvec{\Delta }^{(K)})^T]\), and \(\mathbf {Q}_{\mathsf {\Delta}} = {{\,{\mathbb{E}}\,}}[\varvec{\Delta }^{(K)}(\varvec{\Delta }^{(K)})^T]\).

Ignoring the correlation between \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and \(\varvec{\Delta }^{(K)}\) corresponds to \(\mathbf {Q}_{\mathsf {A \Delta}} = \varvec{0}\) and neglecting the correlation between the ISI samples \(\tilde{\mathsf {x}}(\mathsf {T}'_k)\), \(k=1,...,K\) is equivalent to \(\mathbf {Q}_{\mathsf {\Delta }} = \mathbf {R}_{\mathsf {\Delta }}\), with \(\mathbf {R}_{\mathsf {\Delta}}\) given in (55). We show in Appendix E that this results in an upper bound on \(h({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big |{\hat{\varvec{\mathsf {D}}}}^{(K)})\) which is given by

$$\begin{aligned} h({{\varvec {\mathsf{{A}}}}}^{(K)}-\hat{{{\varvec {\mathsf{{A}}}}}}_{{\rm LMMSE}}^{(K)}\big |{{\varvec {\mathsf{{D}}}}}^{(K)})&\le \frac{1}{2}\log \det \left( 2\pi e \mathbf {Q}_{{\rm err}}^{(K)}\right) \end{aligned}$$
$$\begin{aligned} & \le \frac{1}{2}\log \det \left(2\pi e {\mathbf{R}}_{{\rm err}}^{(K)}\right) \end{aligned}$$


$$\begin{aligned} {\mathbf {R}}_{{\rm err}}^{(K)}&=\sigma_{\mathsf {A}}^{2}\mathbf {I}^{(K)}-\sigma_{\mathsf {A}}^{4}\big (\sigma_{\mathsf {A}}^{2}\mathbf {I}^{(K)}+\mathbf {R}_{\mathsf {\Delta }}^{(K)}\big )^{-1}. \end{aligned}$$
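The step from (64) to the log-det expression in (65) uses the matrix inversion lemma, which gives \((\mathbf{R}_{\rm err}^{(K)})^{-1} = \sigma_{\mathsf{A}}^{-2}\mathbf{I}^{(K)} + (\mathbf{R}_{\mathsf{\Delta}}^{(K)})^{-1}\). A quick numerical check, with illustrative dimensions and variances:

```python
import numpy as np

K, sigma_A2, sigma_S2 = 8, 1.5, 0.2   # illustrative sizes and variances

# R_Delta from (55); it is positive definite, hence invertible.
R_Delta = sigma_S2 * (2 * np.eye(K) - np.eye(K, k=1) - np.eye(K, k=-1))

# Error covariance (64): R_err = sigma_A^2 I - sigma_A^4 (sigma_A^2 I + R_Delta)^{-1}.
R_err = sigma_A2 * np.eye(K) - sigma_A2**2 * np.linalg.inv(
    sigma_A2 * np.eye(K) + R_Delta)

# Matrix inversion lemma: R_err^{-1} = sigma_A^{-2} I + R_Delta^{-1},
# which is the form entering the log-det in (65).
assert np.allclose(np.linalg.inv(R_err),
                   np.eye(K) / sigma_A2 + np.linalg.inv(R_Delta))
```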

This yields the following lower bound for the mutual information in (60)

$$\begin{aligned} I({{\varvec {\mathsf{{A}}}}}^{(K)};{\hat{\varvec{\mathsf {D}}}}^{(K)})&\ge h({{\varvec {\mathsf{{A}}}}}^{(K)})-\frac{1}{2}\log \det \left( 2\pi e \mathbf {R}_{{\rm err}}^{(K)}\right) \nonumber \\&=K h(\mathsf {A}_{k})+\frac{1}{2}\log \det \left( (2\pi e)^{-1} \left( \sigma_{\mathsf {A}}^{-2}\mathbf {I}^{(K)}+(\mathbf {R}_{\mathsf {\Delta }}^{(K)})^{-1}\right) \right). \end{aligned}$$

The first term of (65) follows from the independence of the elements of \({{\varvec {\mathsf{{A}}}}}^{(K)}\) and for the second term we have used (64) and the matrix inversion lemma. With (65) the mutual information rate in (38) is lower-bounded by

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{\hat{\varvec{\mathsf {D}}}})&\ge \lim_{K\rightarrow \infty }\frac{1}{KT_{{\rm avg}}}\bigg \{K h(\mathsf {A}_{k}) +\frac{1}{2}\log \det \left[ \frac{1}{2\pi e} \left( \sigma_{\mathsf {A}}^{-2}\mathbf {I}^{(K)}+(\mathbf {R}_{\mathsf {\Delta }}^{(K)})^{-1}\right) \right] \bigg \} \nonumber \\&=\frac{1}{T_{{\rm avg}}}\bigg \{h(\mathsf {A}_{k}) +\frac{1}{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}\log \left( \frac{\sigma_{\mathsf {A}}^{-2}}{2\pi e} \left( 1+\frac{\sigma_{\mathsf {A}}^2}{S_{\mathsf {\Delta }}(f)}\right) \right) df\bigg \} \end{aligned}$$

where for (66) we have used Szegö’s theorem on the asymptotic eigenvalue distribution of Hermitian Toeplitz matrices [26, pp. 64-65], [27]. Here, \(S_{\mathsf {\Delta }}(f)\) is the PSD of \(\varvec{\Delta }\) given in (58) and corresponding to the sequence of covariance matrices \(\mathbf {R}_{\mathsf {\Delta }}^{(K)}\). Despite the discontinuity of the integrand in (66), the existence of the integral can be shown analogously to (59), here with \(\sigma_{\mathsf {A}}^2\) instead of \(\nu\). As \(\mathsf {A}_{k}\) is exponentially distributed, we have

$$\begin{aligned} h(\mathsf {A}_{k})=1-\log (\lambda ). \end{aligned}$$

With (3), (4), (8), (59), and (67), the lower bound in (66) can be written as

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{\hat{\varvec{\mathsf {D}}}}) \ge \frac{W \left[ \log \left( \frac{e}{2\pi }\right) +{{\,\mathrm{arcosh}\,}}\left( \frac{1}{2 \sigma_{\mathsf {S}}^2 \lambda ^{2}}+1\right) \right] }{1+2W\lambda ^{-1}}. \end{aligned}$$
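The algebra leading from the integral form (66) to the closed form (68) can be cross-checked numerically. The sketch below assumes, consistent with the derivation, \(\beta = 1/(2W)\) from (8), \(\sigma_{\mathsf{A}}^2 = \lambda^{-2}\), \(T_{\rm avg} = \lambda^{-1} + \beta\), and \(h(\mathsf{A}_k) = 1 - \ln\lambda\) in nats; the parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters: bandwidth W, exponential rate lambda, shift variance.
W, lam, sigma_S2 = 1.0, 0.5, 0.01
sigma_A2 = 1.0 / lam**2            # variance of A_k, cf. (4)
T_avg = 1.0 / lam + 1.0 / (2 * W)  # average symbol duration, beta = 1/(2W)

# Integral form (66), midpoint rule around the singularity at f = 0.
N = 1_000_000
f = (np.arange(N) + 0.5) / N - 0.5
S_Delta = 2 * sigma_S2 * (1.0 - np.cos(2 * np.pi * f))          # PSD (58)
integral = np.mean(np.log((1.0 + sigma_A2 / S_Delta)
                          / (2 * np.pi * np.e * sigma_A2)))
lb_integral = (1.0 - np.log(lam) + 0.5 * integral) / T_avg      # nats/s

# Closed form (68).
lb_closed = W * (np.log(np.e / (2 * np.pi))
                 + np.arccosh(1.0 / (2 * sigma_S2 * lam**2) + 1.0)) \
    / (1.0 + 2 * W / lam)

assert abs(lb_integral - lb_closed) < 1e-3
```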

Characterization of the process of additional zero-crossings

It remains to find an upper bound for \(H'({{\varvec {\mathsf{{V}}}}})\), cf. (44). The random variable \(\mathsf {V}_k\) describes the number of received symbols that correspond to the transmitted symbol \(\mathsf {A}_k\). It depends on the number \(\mathsf {N}_k\) of inserted zero-crossings via \(\mathsf {V}_k = \mathsf {N}_k+1\). Given the separability of shift and insertion errors, we do not need to consider the TIs as they just contain the shifted zero-crossing. What remains is the hold period of average duration \(T_{{\rm HP}} = {{\,{\mathbb{E}}\,}}[\mathsf {A}_k]-\beta = \lambda ^{-1}\), in which \(\mathsf {x}(t)\) maintains the signal level \(\pm \sqrt{\hat{P}}\). Without LP-filtering, this would lead to a level-crossing problem. However, since \(\hat{\mathsf {x}}(t)\) shows the typical ringing, cf. Fig. 2, we have a curve-crossing problem. In order to obtain a closed-form expression for an upper bound on \(\mathsf {N}_k\), we resort to a further bounding step: we consider a level-crossing problem using the lowest value u of the kth pulse outside the TI, cf. (25) in Sect. 3.2 and Fig. 3.

Level-crossing problems, especially for Gaussian processes, have been widely studied, e.g., by Kac [28], Rice [29], and Cramér and Leadbetter [30]. We derive an upper bound on \(H'({{\varvec {\mathsf{{V}}}}})\) based on the first moment of the distribution of \(\mathsf {V}_k\). For a stationary zero-mean Gaussian random process, the expected number of crossings of the level u in the time interval \(T_{{\rm sat}} = \lambda ^{-1}\) is given by the Rice formula [29]

$$\begin{aligned} \mu = {{\,{\mathbb {E}}\,}}[\mathsf {V}_k] = \frac{1}{\pi } \sqrt{\frac{-s^{\prime\prime}_{\mathsf {zz}}(0)}{\sigma_{\mathsf {z}}^2}} \exp \left( - \frac{u^2}{2 \sigma_{\mathsf {z}}^2}\right) \frac{1}{\lambda } +1. \end{aligned}$$

Here, \(s_{\mathsf {zz}}(\tau )\) is the ACF of the Gaussian process \(\mathsf {z}(t)\) and \(s^{\prime\prime}_{\mathsf {zz}}(\tau )={\partial ^2}/({\partial \tau ^2})\, s_{\mathsf {zz}}(\tau )\). Analogously to (49), we have

$$\begin{aligned} s^{\prime\prime}_{\mathsf {zz}}(0) = s^{\prime\prime}_{\hat{\mathsf {n}}\hat{\mathsf {n}}}(0) + s^{\prime\prime}_{{\rm ISI}}(0) = -\frac{4}{3} N_0 W^3 + s^{\prime\prime}_{{\rm ISI}}(0). \end{aligned}$$

where \(s_{{\rm{ISI}}}(\tau )\) is the ACF of the ISI and \(s^{\prime\prime}_{{\rm ISI}}(0)\) is finite for finite bandwidths W, see Sect. 3. This ensures \(-s^{\prime\prime}_{\mathsf {zz}}(0) < \infty\) and, thus, that \({{\,{\mathbb {E}}\,}}[\mathsf {N}_k]\) is finite. For a given mean \(\mu\) in (69), the entropy-maximizing distribution for a discrete random variable on \({\mathbb {N}}\) is the geometric distribution, cf. [31, Section 2.1]. Hence, we can upper-bound the entropy \(H(\mathsf {V}_k)\) by

$$\begin{aligned} H(\mathsf {V}_k) \le (1-\mu ) \log \left( \mu - 1\right) + \mu \log \mu . \end{aligned}$$
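The right-hand side of (71) is exactly the entropy of a geometric distribution on \(\{1,2,...\}\) with mean \(\mu\) (success probability \(p = 1/\mu\)), and the maximum-entropy property can be illustrated against any other mean-\(\mu\) distribution on the positive integers; the shifted Poisson below is an arbitrary comparison choice.

```python
import math

def entropy(pmf):
    """Entropy in bits of a (possibly truncated) pmf given as a list."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

mu = 4.0
p = 1.0 / mu
# Geometric distribution on {1, 2, ...} with mean mu; tail beyond k = 400
# is numerically negligible.
geom = [p * (1 - p) ** (k - 1) for k in range(1, 400)]
H_formula = (1 - mu) * math.log2(mu - 1) + mu * math.log2(mu)  # RHS of (71)
assert abs(entropy(geom) - H_formula) < 1e-6

# Shifted Poisson on {1, 2, ...} with the same mean mu: strictly lower entropy,
# as guaranteed by the maximum-entropy property of the geometric distribution.
lam = mu - 1.0
pois = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(0, 60)]
assert entropy(pois) < H_formula
```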

Since independent \(\mathsf {V}_k\) maximize the entropy rate, we obtain for the entropy rate of the auxiliary process

$$\begin{aligned} H'({{\varvec {\mathsf{{V}}}}}) \le \frac{H(\mathsf {V}_k)}{T_{{\rm avg}}} \le \frac{(1-\mu ) \log \left( \mu -1\right) +\mu \log \mu }{T_{{\rm avg}}}. \end{aligned}$$

Note that the bound on \(H(\mathsf {V}_k)\) is an increasing function in the expected number of level-crossings \(\mu\) of the Gaussian random process and that \(\mu\) increases with the variance \(\sigma_{\mathsf {z}}^2\). Hence, to evaluate (69) an upper bound for \(\sigma_{\mathsf {z}}^2\) is required, which we obtain using the upper bound on \(\sigma_{\tilde{\mathsf {x}}}^2\) in (20) via (34) and (49). Moreover, in (69) we need an upper bound on \(-s^{\prime\prime}_{\mathsf {zz}}(0)\), which results from the lower bound on \(s^{\prime\prime}_{\tilde{\mathsf {x}}{\tilde{\mathsf {x}}}}(0)\) in (24) via (35) and (70).
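As a sanity check on the Rice crossing rate entering (69), the level-crossing count of a synthetic stationary Gaussian process can be compared with the analytical expression. The sketch assumes an ideal low-pass process of bandwidth W, for which \(-s''(0)/\sigma^2 = (2\pi W)^2/3\); this spectrum and all parameters are illustrative and do not include the ISI component of the paper's noise model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthesize a bandlimited Gaussian process via a flat spectrum on |f| <= W.
W, fs, n = 1.0, 32.0, 1 << 21
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spec[freqs > W] = 0.0   # ideal low-pass spectrum (illustrative assumption)
spec[0] = 0.0           # zero mean
x = np.fft.irfft(spec)
x /= x.std()            # normalize to sigma = 1

# Rice formula: expected crossings of level u per unit time is
# (1/pi) sqrt(-s''(0)/sigma^2) exp(-u^2 / (2 sigma^2)), counting both directions.
u, T = 1.0, n / fs
rice = (1.0 / np.pi) * (2 * np.pi * W / np.sqrt(3)) * np.exp(-u**2 / 2) * T

counted = np.count_nonzero(np.diff(np.sign(x - u)))
assert abs(counted / rice - 1.0) < 0.1
```

The empirical count matches the Rice prediction to within a few percent; the residual gap stems from crossings missed at finite sampling rate and from the finite observation window.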

Results and discussion

Lower and upper bound on the achievable rate

Substituting (38), (68), (52), (8) and (72), (3) into (44), a lower bound on the mutual information rate of the 1-bit quantized continuous-time channel is obtained. Due to the limitations of the Gaussian approximation of the LP-distortion, it holds for small ratios \(\kappa ={W}/{\lambda }\) and is given by

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}) \ge&\frac{W \lambda }{2 W+\lambda } \left[ {{\,\mathrm{arcosh}\,}}\left( \frac{2 {{\,\mathrm{Si}\,}}^2(\pi ) W^2 \hat{P}}{\sigma_{\mathsf {z},\text {UB}}^2 \lambda ^2}+1\right) + \log \left( \frac{e}{2 \pi }\right) \right. \nonumber \\&\left. + 2 \mu_{\rm {UB}} \log \left( \frac{\mu_{\rm {UB}}-1}{\mu_{\rm {UB}}}\right) -2 \log (\mu_{\rm {UB}}-1)\right] = I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}). \end{aligned}$$

The indices \((\cdot )_{\rm {LB}}\) and \((\cdot )_{\rm {UB}}\) refer to lower and upper bounds on the indexed variable, respectively.

Furthermore, the corresponding upper bound for the given signaling scheme results from (56) and is valid for all \(\kappa\). With (3) and (8), it is given by

$$\begin{aligned} I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}) \le \frac{W \lambda }{2 W+\lambda } \int \nolimits_{-\frac{1}{2}}^{\frac{1}{2}} \log \left( 1+\frac{(\nu -S_{\mathsf {\Delta },\text {LB}}(f))^{+}}{S_{\mathsf {\Delta },\text {LB}}(f)}\right) df = I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}}). \end{aligned}$$

Here, \(S_{\mathsf {\Delta },\text {LB}}(f)\) is a lower bound on \(S_{\mathsf {\Delta }}(f)\) in (58) since for evaluating \(\sigma_{\mathsf {z}}^2\) and \(\sigma_{\mathsf {S}}^2\) it is assumed that \(\sigma_{{\rm ISI}}^2=0\), cf. (49). Both bounds, (73) and (74), hold for ρ ≥ 10 dB, where \(|\mathsf {S}_k| < {\beta }/{2}\) holds with high probability, such that the assumption of temporally separated error events (zero-crossing shifts and additional zero-crossings) remains valid.

In (73), with (11), (8), (17), (16) and (25), we can express \(\sigma ^2_{\mathsf {z},\text {UB}} = \hat{P} \big (\frac{\frac{1}{2}+2\kappa }{(1+2\kappa )\rho }+\frac{\alpha_{\rm {TI/HP}} (1+2 c_1) c_0}{2 \pi ^2 (1+2 \kappa )}\big )\) and \(\frac{-s^{\prime\prime}_{\mathsf {zz},\text {LB}}(0)}{\lambda ^2} = \hat{P} \kappa ^2 \big (\frac{4}{3} \frac{\frac{1}{2}+2\kappa }{(1+2\kappa )\rho } + \frac{\alpha_{\rm {HP}} 2(1+2 c_1) c_2}{1+2 \kappa }\big )\) as functions of \(\hat{P}\), \(\rho\), and \(\kappa\) where \(c_1\) is a function of \(\kappa\). Hence, both \(\mu_{\rm {UB}}\) and the normalized lower bound \(I_{\rm {LB}}'({{\mathbf {\mathsf{{A}}}}};{{\mathbf {\mathsf{{D}}}}})/W\) depend solely on \(\kappa\) and \(\rho\). With \(\sigma ^2_{\mathsf {S},\text {UB}}\sigma_{\mathsf {A}}^{-2} = \frac{\sigma ^2_{\mathsf {z},\text {UB}}}{\hat{P}} \frac{1}{4 \kappa ^2 {{\,\mathrm{Si}\,}}^2(\pi )}\), the same behavior can be shown for \(I_{\rm {UB}}'({{\mathbf {\mathsf{{A}}}}};{{\mathbf {\mathsf{{D}}}}})/W\).

For comparison, the capacity \(C_{{\rm AWGN}} = W \log \left( 1+ \rho \right)\) of the AWGN channel without output quantization represents an upper bound on the mutual information rate with 1-bit quantization. The ratio between \(C_{{\rm AWGN}}\) and \(I'_{\rm {LB}}({{\mathbf {\mathsf{{A}}}}};{{\mathbf {\mathsf{{D}}}}})\) in (73) is

$$\begin{aligned} \Delta {I} = \frac{\log (1+\rho )}{\frac{1}{W} I_{\rm {LB}}'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})} = f(\kappa ,\rho ). \end{aligned}$$

Since \(\Delta {I}\) in (75) is solely a function of \(\kappa\) and \(\rho\), we are looking for the \(\kappa\) minimizing \(\Delta {I}\) for a given \(\rho\). The results are depicted in Fig. 6. It can be seen that the optimal \(\kappa ={W}/{\lambda }\) is on the order of one. Hence, the randomness of the input signal needs to be matched to the channel bandwidth, which is achieved by allowing \(\lambda\) to grow linearly with W. In the high SNR regime the optimal \(\kappa\) is approximately 0.75.

Fig. 6

Optimal ratio \(\kappa =W \lambda ^{-1}\) over the SNR and corresponding ratio \({C_{{\rm AWGN}}}/{I_{\rm {LB}}'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})}\), valid for ρ ≥ 10 dB. The value of \(\kappa\) that minimizes the loss w.r.t. the AWGN-capacity is approximately 0.75 for most of the mid-to-high SNR regime

Note that this optimum heavily depends on linking the transition time \(\beta\) to the signal bandwidth W, cf. (8). The utilization of the spectrum could be improved by reducing W, e.g., choosing \(W={1}/{(2 T_{ {\rm avg}})}\). However, then the minimum symbol duration would no longer correspond to the coherence time of the noise, based on which we can neglect symbol deletions. The error event of deletions would then have to be included in the model, see Appendix A.

Spectral efficiency results

Figure 7 shows the resulting bounds over the SNR \(\rho\) in terms of spectral efficiency, i.e., normalized by the bandwidth 2W, such that they depend solely on \(\kappa\). We compare \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) from (73), \(I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) from (74), and the lower bound if we neglect the ISI, \({I'_{\rm {LB,noISI}}({{\varvec {\mathsf{{A}}}}};{{{\varvec {\mathsf{{D}}}}}})}\), which is obtained by setting \(\alpha_{\rm {TI}} = \alpha_{\rm {HP}} = 0\), cf. (34). The latter two bounds hold for any \(\kappa\), whereas \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) applies only to values of \(\kappa\) on the order of one, due to the limitations of the Gaussian approximation of the LP-distortion, cf. Sect. 3.3. However, this restriction is uncritical since \(\kappa\) on the order of one yields the highest mutual information rates and, thus, the best lower bounds. It can be seen that in the low-to-mid SNR range, \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) and \(I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) approach each other with increasing SNR. This is due to the decreasing impact of \(H'({{\varvec {\mathsf{{V}}}}})\), as additional zero-crossings are not considered in \(I_{\rm {UB}}'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) and their probability decreases with the SNR. However, in the high SNR domain the upper and lower bound diverge again since the system becomes dominated by the ISI. The lower bound \(I'_{\rm {LB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\) saturates at a spectral efficiency of approximately 1.54 bit/s/Hz for \(\kappa =0.75\).
On the other hand, \({I'_{\rm {LB,noISI}}({{\varvec {\mathsf{{A}}}}};{{{\varvec {\mathsf{{D}}}}}})}\) does not saturate over the SNR and is very close to \(I'_{\rm {UB}}({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})\), which coincides with the observation in [15] that under noise-free conditions (\(\rho \rightarrow \infty\)) the achievable rate of the timing channel tends to infinity.
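To put the saturation level into perspective, one can compute the SNR at which the unquantized AWGN spectral efficiency, with the paper's normalization \(C_{{\rm AWGN}}/(2W) = \log_2(1+\rho )/2\), passes the 1.54 bit/s/Hz level quoted above. The following arithmetic sketch is illustrative and not a result from the paper:

```python
import math

# Saturation level of the lower bound for kappa = 0.75 (from the text above).
sat = 1.54  # bit/s/Hz

# AWGN spectral efficiency with the paper's normalization: log2(1 + rho) / 2.
# Solve log2(1 + rho) / 2 = sat for rho and convert to dB.
rho = 2.0 ** (2.0 * sat) - 1.0
rho_dB = 10.0 * math.log10(rho)
print(round(rho_dB, 1))  # ≈ 8.7 dB
```

So already slightly below 10 dB the unquantized AWGN curve alone exceeds the saturation level, which is consistent with the growing gap between the saturating lower bound and \(C_{{\rm AWGN}}\) at high SNR.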

Fig. 7

Lower and upper bounds on the spectral efficiency \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}})/(2W)\). The bounds depend on the SNR \(\rho\) and \(\kappa\) and are valid for the mid-to-high SNR regime ρ ≥ 10 dB; for comparison, the achievable rate with 1-bit quantization in bpcu for Nyquist sampling [13] and 2-fold FTN [11] as well as \(C_{\rm {AWGN}}\) is given

By comparison with the capacity in bits per channel use (bpcu) under 1-bit quantization in [13], assuming Nyquist signaling and sampling, it can be seen that an increase in achievable rate of at least 50% is possible in the high-SNR domain. Moreover, the achievable rate for a 1-bit quantization based system with 2-fold oversampling and 2-fold FTN signaling [11] is given, which already attains a gain of approximately 35% w.r.t. [13]. Note that under perfect phase, frequency, and timing synchronization with complex signaling, all presented spectral efficiencies could be doubled; however, this requires two 1-bit quantizers, one for the in-phase and one for the quadrature component.


Conclusion

We derived an approximate lower bound on the mutual information rate of the real and bandlimited 1-bit quantized continuous-time AWGN channel focusing on the mid-to-high SNR regime. It is valid for SNR values above approximately 10 dB and κ = W/λ ≲ 3, for which we can approximate the filter distortion by a Gaussian distribution. We furthermore provided an approximate upper bound on the mutual information rate of the specific signaling scheme used for deriving the lower bound. We have identified the parameter ranges in which both bounds are close and have given explanations for those in which they are not. As the lower and the upper bound are close in an SNR range between approximately 10 and 20 dB, they provide a valuable characterization of the actual mutual information rate with the given signaling scheme on 1-bit quantized channels. The bounds hold given the following assumptions:

  • For the lower bound, the LP-distortion error \(\tilde{\mathsf {x}}(t)\) is approximated to be Gaussian, which enables closed-form analytical treatment. This is appropriate for κ ≲ 3, for which the best lower bounds are obtained, cf. Sect. 3.3.

  • For the considered input signals and the mid to high SNR scenario, the occurrence of deletions is negligible if \(W=\frac{1}{2 \beta }\). This was confirmed by simulations, cf. Appendix A.

  • There is only one zero-crossing in each transition interval \(\left[ \mathsf {T}_k,\mathsf {T}_k+\beta \right]\). This follows from the bandlimitation of the noise, which prevents the signal from rapid changes, and it has been verified for an SNR above 5 dB by numerical computation based on curve-crossing problems for Gaussian random processes, cf. Appendix B.

  • For the upper bound, the individual elements of the process \({{\varvec {\mathsf{{S}}}}}\) are assumed to be i.i.d. The minimum temporal separation of the individual \(\mathsf {S}_k\) is \(\beta\), which is matched to the bandwidth of the noise, cf. (8), and ISI, which mainly contributes to correlation, is neglected for upper-bounding.

  • In the mid-to-high SNR domain it holds that \(\mathsf {S}_{k} \ll \beta\), such that the transition can be linearized around the zero-crossing, cf. (47). We show in Appendix D that this is valid for ρ ≥ 10 dB.

We have shown that in order to maximize the lower bound on the mutual information rate for a given bandwidth, the parameter \(\lambda\) of the exponential distribution of the \(\mathsf {A}_k\) needs to grow linearly with the channel bandwidth. For the given system model, the optimal coefficient \(\kappa\) depends on the SNR and tends towards 0.75 for high SNR. When allowing the filter bandwidth W to take on values smaller than \({1}/{(2 \beta) }\), deletion errors have to be incorporated into the model as otherwise the spectral efficiency of the system can be overestimated. This remains for future work. In contrast to the AWGN channel capacity, the lower bound on the mutual information rate with 1-bit quantization saturates when increasing the SNR to infinity. This is due to the LP-distortion that is introduced since the designed signal \(\mathsf {x}(t)\) is not strictly bandlimited.


Methods

We analyze the bandlimited 1-bit quantized AWGN channel. We derive a lower bound on the capacity by lower-bounding the mutual information rate for a given set of waveforms with exponentially distributed zero-crossing distances and an average power constraint. Furthermore, we derive an upper bound based on the specific signaling scheme in order to quantify the impact of the applied bounding steps. The main results are closed-form expressions obtained analytically. During the derivation, it was necessary to make assumptions and approximations in order to treat the problem analytically. The parameter regions in which these assumptions, and therefore the obtained bounds, are valid have been determined by numerical computation or simulation in MATLAB.

Availability of data and materials

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.


Footnotes

  1. Note that one additional bit is carried by the sign of the first sample. However, its effect on the mutual information rate between channel input and output can be neglected when studying the capacity as it converges to zero for infinite blocklength.

  2. As upcrossing we denote zero-crossings with positive transition slope, i.e., from \(-\sqrt{\hat{P}}\) to \(\sqrt{\hat{P}}\). Correspondingly, downcrossings have a negative slope.



Abbreviations

AWGN: Additive white Gaussian noise

ADC: Analog-to-digital converter

ACF: Autocorrelation function

bpcu: Bits per channel use

FTN: Faster-than-Nyquist

HP: Hold period

i.i.d.: Independent and identically distributed

ISI: Intersymbol interference

LMMSE: Linear minimum mean-squared error

LP: Lowpass

PSD: Power spectral density

pdf: Probability density function

SNR: Signal-to-noise ratio

TI: Transition interval

w.r.t.: With respect to


References

  1. S. Bender, M. Dörpinghaus, G. Fettweis, On the achievable rate of bandlimited continuous-time 1-bit quantized AWGN channels, in Proceedings of IEEE International Symposium on Information Theory (ISIT), Aachen, Germany (2017)

  2. S. Bender, M. Dörpinghaus, G. Fettweis, On the achievable rate of bandlimited continuous-time AWGN channels with 1-bit output quantization. arXiv preprint arXiv:1612.08176

  3. B. Murmann, ADC Performance Survey 1997–2018 (2018)

  4. G.P. Fettweis, M. Dörpinghaus, J. Castrillon, A. Kumar, C. Baier, K. Bock, F. Ellinger, A. Fery, F.H.P. Fitzek, H. Härtig, K. Jamshidi, T. Kissinger, W. Lehner, M. Mertig, W.E. Nagel, G.T. Nguyen, D. Plettemeier, M. Schröter, T. Strufe, Architecture and advanced electronics pathways toward highly adaptive energy-efficient computing. Proc. IEEE 107(1), 204–231 (2019)

  5. E.N. Gilbert, Increased information rate by oversampling. IEEE Trans. Inf. Theory 39(6), 1973–1976 (1993)

  6. S. Shamai, Information rates by oversampling the sign of a bandlimited process. IEEE Trans. Inf. Theory 40(4), 1230–1236 (1994)

  7. T. Koch, A. Lapidoth, Increased capacity per unit-cost by oversampling, in Proceedings of the IEEE Convention of Electrical and Electronics Engineers in Israel (IEEEI), Eilat, Israel, pp. 684–688 (2010)

  8. W. Zhang, A general framework for transmission with transceiver distortion and some applications. IEEE Trans. Commun. 60(2), 384–399 (2012)

  9. L. Landau, G. Fettweis, Information rates employing 1-bit quantization and oversampling at the receiver, in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Toronto, Canada, pp. 219–223 (2014)

  10. L.T. Landau, M. Dörpinghaus, G.P. Fettweis, 1-bit quantization and oversampling at the receiver: sequence-based communication. EURASIP J. Wirel. Commun. Netw. 2018(1), 83 (2018)

  11. L. Landau, M. Dörpinghaus, G.P. Fettweis, 1-bit quantization and oversampling at the receiver: communication over bandlimited channels with noise. IEEE Commun. Lett. 21(5), 1007–1010 (2017)

  12. S. Bender, L. Landau, M. Dörpinghaus, G. Fettweis, Communication with 1-bit quantization and oversampling at the receiver: spectral constrained waveform optimization, in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Edinburgh, UK (2016)

  13. J. Singh, O. Dabeer, U. Madhow, On the limits of communication with low-precision analog-to-digital conversion at the receiver. IEEE Trans. Commun. 57(12), 3629–3639 (2009)

  14. C.E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656 (1948)

  15. V. Anantharam, S. Verdú, Bits through queues. IEEE Trans. Inf. Theory 42(1), 4–18 (1996)

  16. M. Schlüter, M. Dörpinghaus, G.P. Fettweis, Bounds on phase, frequency, and timing synchronization in fully digital receivers with 1-bit quantization and oversampling. IEEE Trans. Commun. 68(10), 6499–6513 (2020)

  17. M. Schlüter, M. Dörpinghaus, G.P. Fettweis, Least squares phase estimation of 1-bit quantized signals with phase dithering, in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5 (2019)

  18. A. Wadhwa, U. Madhow, Near-coherent QPSK performance with coarse phase quantization: a feedback-based architecture for joint phase/frequency synchronization and demodulation. IEEE Trans. Signal Process. 64(17), 4432–4443 (2016)

  19. R.G. Gallager, Sequential Decoding for Binary Channels with Noise and Synchronization Errors. Technical Report, Massachusetts Institute of Technology: Lincoln Laboratory (1961)

  20. K.S. Zigangirov, Sequential decoding for a binary channel with drop-outs and insertions. Probl. Inf. Transm. 5, 17–22 (1969)

  21. D. Fertonani, T.M. Duman, M.F. Erden, Bounds on the capacity of channels with insertions, deletions and substitutions. IEEE Trans. Commun. 59(1), 2–6 (2011)

  22. S. Diggavi, M. Mitzenmacher, H. Pfister, Capacity upper bounds for deletion channels, in Proceedings of IEEE International Symposium on Information Theory (ISIT), Nice, France, pp. 1716–1720 (2007)

  23. R.L. Dobrushin, Shannon's theorems for channels with synchronization errors. Probl. Peredachi Inf. 3, 18–36 (1967)

  24. W. Rudin, Real and Complex Analysis, 3rd edn. (McGraw-Hill Book Co., New York, 1987), p. 416

  25. T. Cover, J. Thomas, Elements of Information Theory, 2nd edn. (Wiley, New York, 2006)

  26. U. Grenander, G. Szegö, Toeplitz Forms and Their Applications (University of California Press, Berkeley, 1958)

  27. R.M. Gray, Toeplitz and circulant matrices: a review. Found. Trends Commun. Inf. Theory 2(3), 155–239 (2006)

  28. M. Kac, On the average number of real roots of a random algebraic equation. Bull. Am. Math. Soc. 49(4), 314–320 (1943)

  29. S.O. Rice, Mathematical analysis of random noise. Bell Syst. Tech. J. 23(3), 282–332 (1944)

  30. H. Cramer, M.R. Leadbetter, Stationary and Related Stochastic Processes (Wiley, New York, 1967)

  31. J.N. Kapur, Maximum-entropy Models in Science and Engineering (Wiley Eastern, New Delhi, 1993)

  32. M.F. Kratz, Level crossings and other level functionals of stationary Gaussian processes. Probab. Surveys 3, 230–288 (2006)



Acknowledgements

We gratefully acknowledge the constructive comments of the reviewers.


Funding

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the Collaborative Research Center SFB 912, "Highly Adaptive Energy-Efficient Computing", HAEC, Project-ID 164481002. Open Access funding enabled and organized by Projekt DEAL.

Author information




Authors' contributions

All authors contributed to the conception and design of the study. SB drafted the manuscript and did the main analysis work. All authors contributed to the interpretation of the results. MD reviewed the manuscript and contributed the calculations in Sect. 5.3. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sandra Bender.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Parts of this work have been presented at the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, June, 2017 [1], and in an arXiv preprint [2].


Appendix A: Occurrence of zero-crossing deletions

By removing the fixed relation of bandwidth W and transition time \(\beta\) in (8) and allowing W to take on any value, we augment the design space and potentially increase the spectral efficiency of the system. However, the minimum distance \(\beta\) between two zero-crossings is then no longer linked to the coherence time of the noise, which can potentially lead to deletion errors. In this appendix, we quantify the occurrence of such deletions by simulation.

We consider a long sequence of \(K=10^3\) symbols and a time resolution \(\Delta t=10^{-3} \lambda ^{-1}\). For a given SNR, \(\lambda\), and \(\beta\), we generate \(\mathsf {x}(t)\) and analyze the corresponding signal \(\mathsf {r}(t)\) after the receive filter. In order to identify the locations of insertions and deletions, we match every received upcrossing (see Footnote 2) in \(\mathsf {r}(t)\) to the closest upcrossing in \(\mathsf {x}(t)\), likewise for the downcrossings, and count the deleted symbols.
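The matching and counting step can be sketched as follows. The helper name `count_deletions` and the toy crossing times are hypothetical; in the actual simulation the crossing times are extracted from \(\mathsf {x}(t)\) and \(\mathsf {r}(t)\), separately for up- and downcrossings:

```python
import numpy as np

def count_deletions(tx_crossings, rx_crossings):
    """Match every received crossing to the closest transmitted crossing,
    as described above; transmitted crossings that attract no match are
    counted as deleted."""
    tx = np.asarray(tx_crossings, dtype=float)
    matched = {int(np.argmin(np.abs(tx - r))) for r in rx_crossings}
    return len(tx) - len(matched)

# Toy example: three transmitted upcrossings, the middle one is lost.
print(count_deletions([1.0, 2.0, 3.0], [1.05, 2.93]))  # → 1
```
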

The number of deletions is depicted in Fig. 8 for two SNR values, 6 dB and 15 dB, where we defined \(\tilde{\kappa }={1}/{(2\beta \lambda )}\). It can be seen that in the mid-to-high SNR regime, the SNR has a rather small impact on the number of deletions that occur. The black line represents the case \(W={1}/{(2 \beta) }\). It can be seen that for bandwidths \(W\ge {1}/{(2 \beta) }\), i.e., above the black line, the number of deletions is negligible because the dynamics of the noise are fast compared to the minimum symbol duration \(\beta\). However, when the bandwidth W becomes smaller than \({1}/{(2 \beta) }\), deletions are possible and have to be considered in the system model; otherwise the spectral efficiency of the system will be overestimated.

Fig. 8

Number of deletions over normalized bandwidth \(\kappa ={W}/{\lambda }\) and over \(\tilde{\kappa }\). In both parts of the figure, a \(\rho\) = 6 dB and b \(\rho\) = 15 dB, almost no deletions occur as soon as \(\kappa \ge \tilde{\kappa }\), i.e., \(W \ge {1}/{(2 \beta) }\)

Appendix B: Number of zero-crossings within a transition interval

Obtaining the expected number of zero-crossings within the interval \([\mathsf{T}_k,\mathsf{T}_k+\beta]\) is a curve-crossing problem depending on the deterministic transmit waveform and the random process \(\mathsf {z}(t)\), cf. (46). We show in Sect. 3 that \(\tilde{\mathsf {x}}(t)\), and therefore also \(\mathsf {z}(t)\), can be approximated to be Gaussian, see also (48). Hence, an equivalent way of looking at this problem is to study the zero-crossings of a non-stationary Gaussian process \(\mathsf {q}(t) = \mathsf {z}(t) - \psi (t)\), where \(\psi (t)\) is the deterministic curve to be crossed by \(\mathsf {z}(t)\). For this purpose we define the TI \({\mathbb {Y}}=[0,\beta ]\), where \(y \in {\mathbb {Y}}\) is the time variable within the TI. Then \(\psi (y)\) is related to the filtered transmit pulse \(\hat{g}(y)\) as

$$\begin{aligned} \psi (y) = \hat{g}\left( y\right) -1. \end{aligned}$$

For the sine transition in (10), \(\hat{g}(t)\) is given in (14). The process \(\mathsf {q}(t)\) has a zero-crossing in \({\mathbb {Y}}\) exactly when \(\mathsf {z}(y)=\psi (y)\). For the number of crossings \(N_{T}(\psi )\) of a curve \(\psi\) by a stationary Gaussian process in a time interval of length T it holds [32]

$$\begin{aligned} {{\,{\mathbb {E}}\,}}[N_{T}(\psi )]=&\sqrt{-s^{\prime\prime}(0)}\int_{0}^{T}\varphi (\psi (y))\left[ 2 \varphi \left( \frac{\psi '(y)}{\sqrt{-s^{\prime\prime}(0)}}\right) \right. \nonumber \\&\left. + \frac{\psi '(y)}{\sqrt{-s^{\prime\prime}(0)}} \left( 2 \Phi \left( \frac{\psi '(y)}{\sqrt{-s^{\prime\prime}(0)}}\right) -1\right) \right] dy \end{aligned}$$

where \(s(\tau )\) is the autocorrelation function (ACF) of the Gaussian Process, \('\) denotes the derivative in time, i.e., w.r.t. y, and \(\varphi\) and \(\Phi\) are the zero-mean Gaussian density and distribution functions with variance \(\sigma ^2_{\mathsf {z}}\), respectively. The variance of the number of zero-crossings is given by [32]

$$\begin{aligned} {{\,\mathrm{Var}\,}}(N_{T}(\psi ))=&{{\,{\mathbb {E}}\,}}[N_{T}(\psi )]-{{\,{\mathbb {E}}\,}}^2[N_{T}(\psi )]+\int \limits_{0}^{T}\int \limits_{0}^{T}\int \limits_{\mathbb {R}} |\mathsf {q}'_{t_1}-\psi '_{t_1}| \nonumber \\&\cdot |\mathsf {q}'_{t_2}-\psi '_{t_2}| \phi_{t_1,t_2}(\psi_{t_1},\mathsf {q}'_{t_1},\psi_{t_2},\mathsf {q}'_{t_2}) \hbox {d} \mathsf {q}'_{t_1} \hbox {d} \mathsf {q}'_{t_2} \hbox {d} t_1 \hbox {d} t_2 \end{aligned}$$

where the subscripts \(t_1\) and \(t_2\) denote the time instants and \(\phi\) is the multivariate zero-mean normal distribution of \(\mathsf {q}(t_1)\), \(\mathsf {q}'(t_1)\), \(\mathsf {q}(t_2)\), and \(\mathsf {q}'(t_2)\) with covariance matrix \(\Sigma\)

$$\begin{aligned} \Sigma = \begin{pmatrix} s(0) & 0 & s(\tau ) & s^{\prime}(\tau ) \\ 0 & -s^{\prime\prime}(0) & -s^{\prime}(\tau ) & -s^{\prime\prime}(\tau )\\ s(\tau ) & -s^{\prime}(\tau ) & s(0) & 0 \\ s^{\prime}(\tau ) & - s^{\prime\prime}(\tau ) & 0 & -s^{\prime\prime}(0) \end{pmatrix}. \end{aligned}$$

Equations (77) and (78) are evaluated numerically and depicted in Fig. 9. For \(\kappa ={W}/{\lambda } \ge 0.5\), the expectation of the number of zero-crossings converges to one for SNR \(\ge\) 5 dB, while at the same time the variance converges to zero. Hence, for an SNR \(\ge\) 5 dB there exists with high probability only one zero-crossing in every TI. For \(\kappa \ll 1\) the lower bound on the mutual information rate in (73) becomes zero and, hence, the validity of the assumption is not relevant.
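For the special case \(\psi \equiv 0\), the curve-crossing rate reduces to Rice's classical zero-crossing rate \(\frac{1}{\pi }\sqrt{-s''(0)/s(0)}\) [29], which for a flat PSD on \([-W,W]\) equals \(2W/\sqrt{3}\). The following Monte Carlo sketch is illustrative and not from the paper (all parameter values are arbitrary); it simulates a unit-variance Gaussian process with the random-phase spectral method and reproduces this rate:

```python
import numpy as np

rng = np.random.default_rng(0)
W, T, dt = 1.0, 50.0, 0.01            # bandwidth, window length, time step
t = np.arange(0.0, T, dt)

# Flat PSD on [-W, W]: s(tau) = sigma^2 * sinc(2 W tau), hence
# -s''(0)/s(0) = (2 pi W)^2 / 3 and the Rice rate is 2 W / sqrt(3).
n_freq, n_trials = 200, 100
f = (np.arange(n_freq) + 0.5) * W / n_freq    # midpoint frequencies in (0, W)
amp = np.sqrt(2.0 / n_freq)                   # yields unit process variance

rates = []
for _ in range(n_trials):
    phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    x = amp * np.cos(2.0 * np.pi * np.outer(t, f) + phases).sum(axis=1)
    rates.append(np.count_nonzero(x[:-1] * x[1:] < 0) / T)

print(np.mean(rates), 2.0 * W / np.sqrt(3.0))  # both ≈ 1.155 crossings/unit time
```
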

Fig. 9

Expectation and variance of the number of zero-crossings in the TI \([\mathsf{T}_k,\mathsf{T}_k+\beta]\). For SNR values above 5 dB, the expectation of the number of zero-crossings goes to one, while the variance converges to zero

Appendix C: Power spectral density of the transmit signal

The PSD of a random process is defined as

$$\begin{aligned} S_{\mathsf {X}}(\omega ) = \lim \limits_{K \rightarrow \infty } \frac{{\mathbb {E}} \left[ \left| \mathsf {X}(\omega )\right| ^2\right] }{K T_{{\rm avg}}} \end{aligned}$$

with \(\mathsf {X}(\omega )\) being the spectrum of the random process \(\mathsf {x}(t)\) defined in (6) and given by

$$\begin{aligned} \frac{\mathsf {X}(\omega )}{\sqrt{\hat{P}}} = \left( \sum_{k=1}^{K} (-1)^k \, G(\omega ) \, e^{-j \omega \mathsf {T}_k} \right) +2 \pi \delta (\omega ) \end{aligned}$$

where \(G(\omega )\) is the Fourier transform of the waveform g(t) in (7). It holds that

$$\begin{aligned} G(\omega ) = -j \left[ \frac{1+e^{-j\omega \beta }}{\omega }+e^{-j\omega \frac{\beta }{2}} a(\omega )\right] \end{aligned}$$

where \(a(\omega )\) is a real function in \({\mathbb {R}}\) given by

$$\begin{aligned} a(\omega ) = - \frac{1}{j} \int_{-\frac{\beta }{2}}^{\frac{\beta }{2}} f(t) e^{-j \omega t} dt. \end{aligned}$$

We then obtain the squared magnitude of (81) as

$$ \begin{aligned} \frac{|\mathsf {X}(\omega )|^{2}}{\hat{P}} =4 {\pi^{2}} {\delta^{2}}(\omega )+4 {\pi} {\delta} (\omega ) {\mathfrak {R}}\left\{ G(\omega )\sum \limits_{k=1}^{K} (-1)^{k} e^{-j \omega \mathsf {T}_{k}}\right\} +\left| G(\omega )\sum \limits_{k=1}^{K} (-1)^{k} e^{-j \omega \mathsf {T}_{k}}\right|^{2}. \end{aligned} $$

The third term of the RHS of (84) can be written as

$$\begin{aligned} \left| G(\omega )\right| ^2 \sum \limits_{k=1}^{K} \sum \limits_{v=1}^{K} (-1)^{k+v} \cos (\omega (\mathsf {T}_k-\mathsf {T}_v)) \end{aligned}$$


$$\begin{aligned} \left| G(\omega )\right| ^2= \frac{2(1+\cos (\omega \beta ))}{\omega ^2}+a^2(\omega )+\frac{4 a(\omega )\cos \left( \frac{\omega \beta }{2}\right) }{\omega }. \end{aligned}$$
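The step from (82) to (86) can be verified numerically for an arbitrary real value of \(a(\omega )\): since \(1+e^{-j\omega \beta } = 2 e^{-j\omega \beta /2}\cos (\omega \beta /2)\), the common phase factor drops out of the magnitude. A quick sketch with arbitrary test values:

```python
import numpy as np

# Arbitrary test values for omega, beta and the real number a(omega).
omega, beta, a = 1.7, 0.6, -0.35

# |G(omega)|^2 directly from the expression for G(omega) ...
G = -1j * ((1 + np.exp(-1j * omega * beta)) / omega
           + np.exp(-1j * omega * beta / 2) * a)
lhs = abs(G) ** 2

# ... and from the expanded form of |G(omega)|^2.
rhs = (2 * (1 + np.cos(omega * beta)) / omega ** 2
       + a ** 2 + 4 * a * np.cos(omega * beta / 2) / omega)

print(abs(lhs - rhs))  # ≈ 0 (agreement up to floating point)
```
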

The first two terms of (84) represent a DC component, which is not of interest for the further calculations. Exploiting the fact that the cosine is an even function, the PSD of the transmit signal in (80) becomes

$$\begin{aligned} S_{\mathsf {X}}(\omega ) = \frac{\hat{P} \left| G(\omega )\right| ^2}{T_{{\rm avg}}} \left( 1 + \lim \limits_{K\rightarrow \infty } 2 \sum_{n=1}^{K-1} (-1)^n \left( 1-\frac{n}{K}\right) {{\,{\mathbb {E}}\,}}[\cos (\omega \mathsf {L}_n)]\right) \end{aligned}$$

where \(n=k-v\) is the index describing the distance between two arbitrary zero-crossing instances and \(\mathsf {L}_n = \mathsf {T}_k-\mathsf {T}_v = \sum \nolimits_{i = 1}^{n} \mathsf {A}_{k+i}\) is the corresponding random variable. As a sum of exponentially distributed random variables, \(\mathsf {L}_n\) follows a Gamma distribution, cf. (5). We can thus calculate the expectation in (87) as

$$\begin{aligned} {\mathbb {E}}[\cos (\omega \mathsf {L}_n)]&= q^n \cos \left( n\left( \omega \beta + \arctan \left( \frac{\omega }{\lambda }\right) \right) \right) \le q^n \end{aligned}$$

with \(q = \frac{\lambda }{\sqrt{\lambda ^2+\omega ^2}}\) and upper-bound the infinite sum in (87) by

$$\begin{aligned} \lim \limits_{K \rightarrow \infty } \sum \limits_{n=1}^{K-1} \left( 1-\frac{n}{K}\right) q^n = \frac{\lambda }{\sqrt{\lambda ^2+\omega ^2} - \lambda } = c(\omega ). \end{aligned}$$
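The closed form in (88) can be checked by Monte Carlo. Here we assume, consistently with the phase term \(n\omega \beta\) in (88), that each zero-crossing distance consists of the minimum duration \(\beta\) plus an \(\text {Exp}(\lambda )\)-distributed part; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, beta, omega, n = 1.0, 0.4, 1.3, 3     # arbitrary test parameters

# L_n = sum of n spacings, each the minimum beta plus an exponential part.
L = (beta + rng.exponential(1.0 / lam, size=(500_000, n))).sum(axis=1)
mc = np.cos(omega * L).mean()

# Closed form from (88).
q = lam / np.hypot(lam, omega)
closed = q ** n * np.cos(n * (omega * beta + np.arctan(omega / lam)))
print(mc, closed)  # agree to about three decimal places
```
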

Numerically, we find that the infinite sum in (87) has periodic minima. They occur when \(\omega \beta + \arctan \left( \frac{\omega }{\lambda }\right) = 2 m \pi\), \(m \in {\mathbb {Z}}\), for which the cosine equals one, such that it remains

$$\begin{aligned} \lim \limits_{K \rightarrow \infty } \sum \limits_{n=1}^{K-1} (-1)^n \left( 1-\frac{n}{K}\right) q^n = -\frac{\lambda }{\sqrt{\lambda ^2+\omega ^2} + \lambda }. \end{aligned}$$

Based on (90), \(S_{\mathsf {X}}(\omega )\) in (87) can be lower-bounded by (18) where we have used that \(1-2 \frac{\lambda }{\sqrt{\lambda ^2+\omega ^2} + \lambda } = \frac{1}{1+2 c(\omega )}\) with \(c(\omega )\) given in (89).
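The simplification used in this last step, \(1-2 \frac{\lambda }{\sqrt{\lambda ^2+\omega ^2} + \lambda } = \frac{1}{1+2 c(\omega )}\), can also be confirmed numerically for arbitrary \(\lambda\) and \(\omega\):

```python
import numpy as np

lam, omega = 0.8, 2.1                  # arbitrary test values
root = np.hypot(lam, omega)            # sqrt(lam^2 + omega^2)
c = lam / (root - lam)                 # c(omega) as defined above
lhs = 1.0 - 2.0 * lam / (root + lam)
rhs = 1.0 / (1.0 + 2.0 * c)
print(abs(lhs - rhs))  # ≈ 0 (agreement up to floating point)
```
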

Appendix D: Mid-to-high SNR assumption \(\mathsf {S}_k \ll \beta\)

In order to quantify the SNR region for which \(\mathsf {S}_k \ll \beta\) and, thus, the linearization in (47) is valid, the variances of both densities, (50) and (51), have been evaluated and compared numerically. The corresponding normalized variances \(W^2 \sigma_{\mathsf {S}}^2\) are depicted in Fig. 10a), where the variance of the original pdf in (50) is only shown when \(\Pr (|\mathsf {S}_k|<{\beta }/{2}) = \int_{-\beta /2}^{\beta /2} p_{\mathsf {S}}(s)\hbox {d}s \ge 0.95\), i.e., when \(|\mathsf {S}_k| < {\beta }/{2}\) holds with large probability, cf. Fig. 10b). The numerical evaluation in Fig. 10 shows that for the relevant regime of \(\kappa = {W}/{\lambda } \ge 0.5\), the variances \(\sigma_{\mathsf {S},\text {orig}}^2\) and \(\sigma_{\mathsf {S}}^2\) are very close when ρ ≥ 10 dB, which is when \(\Pr (|\mathsf {S}_k|<{\beta }/{2})>0.99\). This means that, as soon as the SNR is high enough for the temporal separation of error events (zero-crossing shifts and zero-crossing insertions) not to be violated, the linearization is valid as well. For \(\kappa \ll 1\) the lower bound on the mutual information rate in (73) becomes zero and, hence, the validity of the assumption is not relevant. Comparing the variances is sufficient for our purpose as the further bounding of \(I'({{\varvec {\mathsf{{A}}}}};{{\varvec {\mathsf{{D}}}}},{{\varvec {\mathsf{{V}}}}})\) is solely based on the variance of a Gaussian random process with equal covariance matrix.

Fig. 10

Properties of the original distribution \(p_{\mathsf {S}}(s)\) (50) and the Gaussian approximation \(p_{\mathsf {S},\text {Gauss}}(s)\) (51). a Normalized variances \(\sigma ^2_{\mathsf {S},\text {orig}}\) of \(p_{\mathsf {S}}(s)\) and \(\sigma ^2_{\mathsf {S}}\) of \(p_{\mathsf {S},\text {Gauss}}(s)\) and b \(\Pr (|\mathsf {S}_k|<{\beta }/{2}) = \int_{-{\beta }/{2}}^{{\beta }/{2}} p_{\mathsf {S}}(s)\hbox {d}s\) for \(p_{\mathsf {S}}(s)\) and \(p_{\mathsf {S},\text {Gauss}}(s)\)

Appendix E: Independent noise assumption

In order to compare the impact of \(\mathbf {Q}_{\rm {err}}\) and \(\mathbf {R}_{\rm {err}}\), cf. (61) and (64), we need to obtain \(\mathbf {Q}_{\mathsf {A\Delta}}\) and \(\mathbf {Q}_{\mathsf {\Delta}}\). By using a short notation for all random processes at time \(\mathsf {T}'_j\), e.g., \(\mathsf {z}_j = \mathsf {z}(\mathsf {T}'_j)\), we write for the respective entries \(q_{\mathsf {A\Delta },(i,j)} = {{\,{\mathbb {E}}\,}}[\mathsf {A}_i \mathsf {\Delta }_j]\) and \(q_{\mathsf {\Delta },(i,j)} = {{\,{\mathbb {E}}\,}}[\mathsf {\Delta }_i \mathsf {\Delta }_j]\)

$$\begin{aligned} q_{\mathsf {A\Delta },(i,j)}&= {{\,{\mathbb {E}}\,}}\left[ \mathsf {A}_i a (\mathsf {z}_j-\mathsf {z}_{j-1})\right] = a {{\,{\mathbb {E}}\,}}\left[ \mathsf {A}_i \left( \tilde{\mathsf {x}}_j-\tilde{\mathsf {x}}_{j-1}\right) \right] \end{aligned}$$
$$\begin{aligned} q_{\mathsf {\Delta },(i,j)}&= a^2 {{\,{\mathbb {E}}\,}}\left[ (\mathsf {z}_i-\mathsf {z}_{i-1})(\mathsf {z}_j-\mathsf {z}_{j-1})\right] \nonumber \\&=a^2 \big \lbrace {{\,{\mathbb {E}}\,}}\left[ (\hat{\mathsf {n}}_i-\hat{\mathsf {n}}_{i-1})(\hat{\mathsf {n}}_j-\hat{\mathsf {n}}_{j-1})\right] + {{\,{\mathbb {E}}\,}}\left[ (\tilde{\mathsf {x}}_i-\tilde{\mathsf {x}}_{i-1})(\tilde{\mathsf {x}}_j-\tilde{\mathsf {x}}_{j-1})\right] \big \rbrace \end{aligned}$$

where \(a = -{\beta }/{\left( {{\,\mathrm{Si}\,}}(\pi )\sqrt{\hat{P}}\right) }\). Equation (91) results since the Gaussian noise \(\hat{\mathsf {n}}(t)\) is independent of the transmit signal and only \(\tilde{\mathsf {x}}(t)\) can contribute to a correlation between \({{\mathbf {\mathsf{{A}}}}}\) and \(\varvec{\Delta }\). The ISI at time \(\mathsf {T}'_j\) is given by

$$\begin{aligned} \tilde{\mathsf {x}}_j = (-1)^j \left[ ... - \tilde{g}(\mathsf {L}_{j-2}^{j})+\tilde{g}(\mathsf {L}_{j-1}^{j})-\tilde{g}(\mathsf {L}_{j}^{j})+\tilde{g}(\mathsf {L}_{j+1}^{j+1})-\tilde{g}(\mathsf {L}_{j+1}^{j+2})+\tilde{g}(\mathsf {L}_{j+1}^{j+3})...\right] \end{aligned}$$

where \(\mathsf {L}_{j+l}^{j+m} = \sum_{k=j+l}^{j+m} \mathsf {A}_k\). In order to obtain \(\mathbf {Q}_{\mathsf {A\Delta}}\) we define with \(n=m-l+1\) and \(m\ge l\)

$$\begin{aligned} \xi_n&= a {{\,{\mathbb {E}}\,}}[\mathsf {A}_i] {{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {L}_{j+l}^{j+m}\right) \right] , i \notin [j+l,j+m] \end{aligned}$$
$$\begin{aligned} \nu_{n}&= a {{\,{\mathbb {E}}\,}}\left[ \mathsf {A}_i \tilde{g}\left( \mathsf {L}_{j+l}^{j+m}\right) \right] ,\,j+l\le i \le j+m \end{aligned}$$

and express the (ij)th entries of \(\mathbf {Q}_{\mathsf {A\Delta}}\) as

$$\begin{aligned} q_{\mathsf {A\Delta },(i,i)} =&(-1)^i \left[ ... - \nu_3-\xi_{3} + \nu_2+\xi_{2} - \xi_{1} + \xi_{1} -\xi_{2}-\nu_2 + \xi_{3}+\nu_3 ...\right] = q_1 = 0\end{aligned}$$
$$\begin{aligned} q_{\mathsf {A\Delta },(i,i+1)} =&(-1)^{i+1} \left[ ... -2\nu_3 + 2 \nu_2 - \nu_1 + \xi_{1} -2\xi_{2} +2\xi_{3} ...\right] = (-1)^{i+1} q_2\end{aligned}$$
$$\begin{aligned} q_{\mathsf {A\Delta },(i,i-1)} =&(-1)^{i-1} \left[ ... -2 \xi_{3} + 2 \xi_{2} -\xi_{1} + \nu_1 -2 \nu_2 +2 \nu_3 ...\right] = (-1)^{i-1} (-q_2). \end{aligned}$$

The weights of \(\nu_n\) and \(\xi_n\) for other values of j are given in Table 1. By numerically solving the integrals in (94) and (95) using the probability distributions of \(\mathsf {A}_i\) and \(\mathsf {L}_{j+l}^{j+m}\), cf. (2) and (5), we obtain

$$\begin{aligned} \mathbf {Q}_{\mathsf {A \Delta}} = \left[ \begin{array}{ccccc} 0 & q_{2} & q_{3} & \dots & q_{K} \\ q_{2} & 0 & -q_{2} & \dots & -q_{K-1} \\ -q_{3} & -q_{2} & 0 & \dots & q_{K-2}\\ & \vdots & & \ddots & \vdots \\ (-1)^{K} q_{K} & (-1)^{K} q_{K-1} & (-1)^{K} q_{K-2} & \dots & 0\\ \end{array}\right] . \end{aligned}$$
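Once the scalars \(q_2,\dots ,q_K\) have been computed, the matrix can be assembled programmatically. The sketch below reproduces the sign pattern read off from the display above, namely \((-1)^{i+1} q_{j-i+1}\) for \(j>i\) and \((-1)^{i} q_{i-j+1}\) for \(i>j\) (1-based indices, zero diagonal); the numerical values of \(q_2, q_3, q_4\) are placeholders:

```python
import numpy as np

def build_q_a_delta(q):
    """Assemble the K x K cross-covariance matrix Q_{A,Delta} from the
    scalars q_1, ..., q_K (q[0] corresponds to q_1 = 0): the (i,j)th
    entry is (-1)^(i+1) q_{j-i+1} above the diagonal, (-1)^i q_{i-j+1}
    below it, and 0 on the diagonal (1-based i, j)."""
    K = len(q)
    Q = np.zeros((K, K))
    for i in range(1, K + 1):
        for j in range(1, K + 1):
            if j > i:
                Q[i - 1, j - 1] = (-1) ** (i + 1) * q[j - i]
            elif i > j:
                Q[i - 1, j - 1] = (-1) ** i * q[i - j]
    return Q

# q_1 = 0; the remaining entries stand in for the numerically computed values.
print(build_q_a_delta([0.0, 0.2, 0.05, 0.01]))
```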

From (92) we obtain \(\mathbf {Q}_{\mathsf {\Delta}}\) as

$$\begin{aligned} \mathbf {Q}_{\mathsf {\Delta }} = a^2 \frac{\sigma_{\hat{\mathsf {n}}}^2}{\sigma_{\mathsf {S}}^2} \mathbf {R}_{\mathsf {\Delta}} + a^2 \mathbf {R}_{\tilde{\mathsf {x}}} \end{aligned}$$

where \(\mathbf {R}_{\mathsf {\Delta}}\) is given by (55) and the elements of \(\mathbf {R}_{\tilde{\mathsf {x}}}\) are

$$\begin{aligned} r_{\tilde{\mathsf {x}},(i,j)}= 2 {{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_i\tilde{\mathsf {x}}_j]-{{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_i\tilde{\mathsf {x}}_{j-1}]-{{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_{i-1}\tilde{\mathsf {x}}_{j}]. \end{aligned}$$

From (93), we see that \({{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_i\tilde{\mathsf {x}}_j]\) yields sums of expectations \({{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {L}_{i+l_1}^{i+m_1}\right) \tilde{g}\left( \mathsf {L}_{j+l_2}^{j+m_2}\right) \right]\). Depending on \(i-j\), \(m_{1/2}\), and \(l_{1/2}\), a number n of the \(\mathsf {A}_k\) in the two sums \(\mathsf {L}_{i+l_1}^{i+m_1}\) and \(\mathsf {L}_{j+l_2}^{j+m_2}\) coincide, whereas w and p summands \(\mathsf {A}_k\) are unique to the first and the second sum, respectively. We therefore define three random variables \(\mathsf {T}_n\), \(\mathsf {X}_w\), and \(\mathsf {L}_p\), which are the sums of disjoint sets of n, w, and p summands \(\mathsf {A}_k\), respectively. The expectation above becomes \({{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {T}_n+\mathsf {X}_w\right) \tilde{g}\left( \mathsf {T}_n+\mathsf {L}_p\right) \right]\), where \(p(\mathsf {T}_n,\mathsf {X}_w,\mathsf {L}_p) = p(\mathsf {T}_n) p(\mathsf {X}_w) p(\mathsf {L}_p)\) since the \(\mathsf {A}_k\) are independent. Thus, with (5), we can numerically evaluate \({{\,{\mathbb {E}}\,}}\left[ \tilde{g}\left( \mathsf {T}_n+\mathsf {X}_w\right) \tilde{g}\left( \mathsf {T}_n+\mathsf {L}_p\right) \right]\) and obtain \({{\,{\mathbb {E}}\,}}[\tilde{\mathsf {x}}_i\tilde{\mathsf {x}}_j]\), \(r_{\tilde{\mathsf {x}},(i,j)}\), and \(\mathbf {Q}_{\mathsf {\Delta}}\).
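Such expectations can also be estimated by Monte Carlo, exploiting that sums over disjoint sets of the i.i.d. exponential \(\mathsf {A}_k\) are mutually independent Erlang random variables. A sketch with a placeholder pulse in place of \(\tilde{g}\) (the actual ISI pulse of the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def g_tilde(t):
    # Placeholder for the ISI pulse \tilde{g}; a decaying stand-in used
    # only to illustrate the evaluation, not the pulse from the paper.
    return np.exp(-t)

def cross_expectation(n, w, p, lam, samples=100_000):
    """Monte Carlo estimate of E[ g(T_n + X_w) g(T_n + L_p) ], where
    T_n, X_w, L_p are sums of n, w, p i.i.d. Exponential(lam) spacings
    A_k from disjoint index sets, hence independent Erlang variables."""
    T = rng.gamma(shape=n, scale=1.0 / lam, size=samples) if n else 0.0
    X = rng.gamma(shape=w, scale=1.0 / lam, size=samples) if w else 0.0
    L = rng.gamma(shape=p, scale=1.0 / lam, size=samples) if p else 0.0
    return np.mean(g_tilde(T + X) * g_tilde(T + L))

print(cross_expectation(n=2, w=1, p=3, lam=1.0))
```

For the exponential stand-in pulse the result factorizes analytically (Laplace transforms of the Erlang distributions), which gives a convenient sanity check for the estimator.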

Table 1 Weights for computing the entries of cross-covariance matrix \(\mathbf {Q}_{A\Delta }\)

With \(\mathbf {Q}_{\mathsf {A \Delta}}\), \(\mathbf {Q}_{\mathsf {\Delta}}\), and \(\mathbf {Q}_{\mathsf {A}}\) we compute \(\mathbf {Q}_{\rm {err}}\). The difference between the two bounds in (62) and (63) is \(\frac{1}{2} (\log \det \mathbf {R}_{\rm {err}} - \log \det \mathbf {Q}_{\rm {err}})\), which is always positive, as can be seen in Fig. 11, indicating that the inequality in (63) holds.
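The normalized gap plotted in Fig. 11 can be evaluated numerically via the log-determinant. A sketch with illustrative positive-definite matrices (not the matrices \(\mathbf {R}_{\rm {err}}\) and \(\mathbf {Q}_{\rm {err}}\) of the paper):

```python
import numpy as np

def normalized_bound_gap(R_err, Q_err):
    """Gap (1/2)(log det R_err - log det Q_err) per blocklength K, the
    quantity depicted in Fig. 11; slogdet avoids overflow of det for
    larger K. A non-negative result indicates that (63) holds."""
    K = R_err.shape[0]
    _, logdet_R = np.linalg.slogdet(R_err)
    _, logdet_Q = np.linalg.slogdet(Q_err)
    return 0.5 * (logdet_R - logdet_Q) / K

# Illustrative positive-definite matrices.
R = np.array([[2.0, 0.3], [0.3, 2.0]])
Q = np.array([[1.0, 0.1], [0.1, 1.0]])
print(normalized_bound_gap(R, Q))
```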

Fig. 11

Difference between the two bounds in (62) and (63). The difference is normalized by the blocklength K and depicted for different \(\kappa ={W}/{\lambda }\) and different K; it is always non-negative

Cite this article

Bender, S., Dörpinghaus, M. & Fettweis, G.P. On the achievable rate of bandlimited continuous-time AWGN channels with 1-bit output quantization. J Wireless Com Network 2021, 54 (2021).


  • Channel capacity
  • One-bit quantization
  • Timing channel
  • Continuous-time channel