
Compressive cyclostationary spectrum sensing with a constant false alarm rate

Abstract

Spectrum sensing is a crucial component of opportunistic spectrum access schemes, which aim at improving spectrum utilization by allowing for the reuse of idle licensed spectrum. Sensing a spectral band before using it ensures that the legitimate users are not disturbed. To that end, a number of different spectrum sensing methods have been developed in the literature. Cyclostationary detection is a particular sensing approach that makes use of the built-in periodicities characteristic of most man-made signals. It offers a compromise between achievable performance and the amount of prior information needed. However, it often requires a significant amount of data in order to provide a reliable estimate of the cyclic autocorrelation (CA) function. In this work, we take advantage of the inherent sparsity of the cyclic spectrum in order to estimate the CA from a low number of linear measurements and enable blind cyclostationary spectrum sensing. In particular, we propose two compressive spectrum sensing algorithms that exploit further prior information on the CA structure. In the first one, we make use of the joint sparsity of the CA vectors with regard to the time delay, while in the second one, we introduce structure dictionaries to enhance the reconstruction performance. Furthermore, we extend a statistical test for cyclostationarity to accommodate sparse cyclic spectra. Our numerical results demonstrate that the new methods achieve a near constant false alarm rate behavior, in contrast to earlier approaches from the literature.

1 Introduction

The scarcity of radio spectrum constitutes a major roadblock to current and future innovation in wireless communications. To alleviate this problem, it has been proposed to make spectral resources, which are currently underutilized, available for reuse under a paradigm that goes by the name of opportunistic spectrum access (OSA) [1]. Spectrum sensing is one of its core technologies. It allows an unlicensed transceiver, a so-called secondary user (SU), to access a licensed spectral band without interfering with the owner of the band’s license, the so-called primary user (PU). The fundamental task in spectrum sensing is to decide between two hypotheses, the first of which states that the spectral band under investigation is free (\(\mathcal {H}_{0}\)), while the second asserts that it is occupied (\(\mathcal {H}_{1}\)). Considering the baseband signal x(t) observed at a secondary system receiver, the two hypotheses can be written as

$$ \begin{array}{lll} \mathcal{H}_{0} : {x(t)} &=& \eta(t),\\ \mathcal{H}_{1} : {x(t)} &=& s'(t)+ \eta(t),\\ \end{array} $$
(1)

where η(t) denotes receiver noise and s′(t) stands for a PU signal after propagation effects.

A number of spectrum sensing algorithms have been proposed in the literature [2–4]. Broadly speaking, they can be divided into three major types, namely, energy detection, stochastic feature detection, and matched filter detection, where the different types require different amounts of prior knowledge about the PU signal. While matched filter detectors ([5], Ch. 4.3) require knowledge of the exact waveform of at least a part of the PU signal, energy detection [6] does not require any prior knowledge. Feature detectors lie in between, as they only make assumptions about structural or statistical properties of the signal. One of the stochastic features that lets an SU receiver discriminate between pure stationary noise (\({\mathcal {H}_{0}}\)) and a communication signal contaminated with noise (\(\mathcal {H}_{1}\)) is cyclostationarity. In contrast to pure stationary noise, most man-made signals vary periodically with time [7] and can thus be characterized as cyclostationary. Although the data contained in a modulated signal may be a purely stationary random process, the coupling with sine wave carriers, pulse trains, repeating spreading or hopping sequences, and cyclic prefixes going along with its modulation causes a built-in periodicity [8].

The use of cyclostationarity for the purpose of spectrum sensing has been investigated from a variety of perspectives ranging from single node signal detection [9–16] to collaborative approaches that make use of spatial diversity [17, 18]. One of the particularly well-known algorithms for cyclostationary spectrum sensing is the so-called time-domain test (TDT) as introduced in [9]. The test can decide between the presence and absence of cyclostationarity for a pre-specified potential cycle frequency α. It operates on the cyclic autocorrelation (CA), which, given an observed signal x(t), is defined as [7]

$$ {R_{x}^{\alpha}(\tau)} = \underset{T\rightarrow \infty}{\text{lim}} \frac{1}{T} \int\limits_{-T/2}^{T/2} {x({t + \tau/2})} {x^{\ast}(t - \tau/2)} e^{-j2\pi\alpha t} \mathrm{d} t $$
(2)

for a potential cycle frequency α and a delay τ. For purely stationary signals, \( {R_{x}^{\alpha }(\tau)} = 0\) for all α≠0, while for cyclostationary signals, \( {R_{x}^{\alpha }(\tau)} \ne 0\) for some α≠0. The α with non-zero CA coefficients are called cycle frequencies. The set of cycle frequencies caused by one of potentially multiple incommensurate second-order periodicities in a cyclostationary signal comprises the periodicity’s fundamental cycle frequency (the reciprocal of the fundamental period) as well as its harmonics. Given the above information, we can rewrite the hypothesis test (1) as

$$ \begin{array}{l} {\mathcal{H}_{0}}: \forall \{\alpha \in {\mathbb{R}} |\alpha \ne 0\} : {R_{x}^{\alpha}(\tau)} = 0,\\ {\mathcal{H}_{1}}: \exists \{\alpha \in {\mathbb{R}} |\alpha \ne 0\} : {R_{x}^{\alpha}(\tau)} \ne 0.\\ \end{array} $$
(3)

It is important to note that, in practice, instead of the statistical CA (2), one normally operates on the sample CA obtained from a limited number of signal samples. The coefficients of the sample CA are not constant but rather follow different probability distributions, depending on whether \({\mathcal {H}_{0}}\) or \({\mathcal {H}_{1}}\) is true. To account for this, the hypothesis test (3) is modified by considering different test statistics under \(\mathcal {H}_{0}\) and \(\mathcal {H}_{1}\). As a result, the TDT achieves constant false alarm rate (CFAR) performance.

Returning to (3), we note that the CA is zero on its whole support except for the set of cycle frequencies and α=0. Therefore, it can be called sparse. The exploitation of sparsity in signal processing has a long history [19]. The recent years, however, have seen a vastly accelerated development of the field, resulting in a new sampling paradigm called compressive sampling (CS) [20, 21]. It postulates that sparse or compressible signals, i.e., signals that can be represented or well approximated by only a few non-zero coefficients in some domain, can be sampled and recovered from a smaller number of measurements than traditionally required. A crucial observation here is that one can design an effective measurement strategy that is governed by the signal's information content rather than its ambient dimension. To date, a large number of powerful algorithms is available for solving sparse recovery problems in the CS context, ranging from optimization approaches to classical pursuits such as the orthogonal matching pursuit (OMP) [22] and more specialized algorithms such as the compressive sampling matching pursuit (CoSaMP) [23], for instance.

Multiple contributions have been made in the field of compressive cyclostationary spectrum sensing. The authors of [24], for instance, formulate the estimation of the cyclic autocorrelation as a sparse recovery problem, which they solve using the OMP [22]. Based on the sparse estimate of the CA, they propose two detection methods that exploit different CA properties. The first one, called slot comparison method (SCM), compares the biggest CA components OMP finds in two consecutive blocks of samples. If the same discrete cycle frequencies are chosen for both blocks, \(\mathcal {H}_{1}\) is selected; otherwise, \(\mathcal {H}_{0}\) is selected. The second detection algorithm is called symmetry method (SM). It exploits the fact that, for certain types of signals, the CA is symmetric around the direct current (DC) component. Although both are blind detectors, meaning that they operate without prior knowledge of the cycle frequencies present in the signal, they do not allow for a CFAR performance, which is considered a desirable detector feature [4]. Instead of the CA, the authors of [25] use the spectral correlation (SC), which is the Fourier transform of the CA over τ, for detecting multiple transmitters in a wideband signal using compressive sampling. In order to estimate the SC from compressed samples via CS, they establish a direct linear relation between the compressed samples and the SC. Based on [25], the authors of [26] derive a method for recovering the SC from sub-Nyquist samples using a reduced complexity approach, for which they provide a closed-form solution. In [27], the modulated wideband converter (MWC) [28] is used to obtain the SC from sub-Nyquist samples and then apply cyclostationarity detection. Furthermore, it has been recently shown that, under certain conditions, the CA can be efficiently recovered from a low number of samples even without enforcing the sparsity property [29–31]. This can be done by exploiting cross-correlations between different outputs of the compressive sampler. The main drawback of the aforementioned works is that, while providing an estimate of the entire cyclic spectrum, they still require knowledge of the cycle frequencies for the detection step.

In this work, we employ a composite approach that combines the sparse recovery of the CA from compressive measurements for blind cycle frequency estimation with a CFAR TDT detection. That said, the contribution of this paper is manifold. We propose two novel sparsity-aided CA estimation algorithms, both of which exploit further prior information about the CA in addition to its sparsity: the simultaneous OMP-based (SOber) and the dictionary-assisted (Dice) compressive CA estimator. The first one exploits the joint sparsity of the CA vectors with regard to the time delay in order to recover the CA matrix for all delays simultaneously, while the second one takes advantage of the signal-induced structure of the CA by introducing structure dictionaries into the recovery process. In order to evaluate the performance of the proposed CA estimators, we derive a closed-form expression for the CA of sampled linearly modulated signals with rectangular pulse shape. Furthermore, we show how this expression can be used as prior information in the dictionary-assisted approach. Note that the use of sparse recovery in the novel CA estimation approaches results in the automatic detection of the signal's cycle frequencies. This in turn allows blind spectrum sensing by eliminating the classical TDT's need for perfect knowledge of said cycle frequencies. However, the resulting sparse structure of the compressive CA estimates does not allow for the application of the traditional TDT since the noise statistics are missing. To compensate for this, we develop a modified TDT and thus enable blind compressive cyclostationary spectrum sensing. Numerical tests show that the proposed method achieves a near CFAR behavior.

The remainder of this paper is structured as follows. Section 2.1 introduces the signal model and presents the classical method for CA estimation, while Section 2.2 presents the time-domain test based on the classical CA estimation. A CA estimator based on joint sparsity of multiple vectors is introduced in Section 3 and the CA estimator exploiting additional prior knowledge is described in Section 4. An extension of the TDT to accommodate sparse CA estimates is developed in Section 5. The numerical evaluation of the proposed estimation and detection approaches as well as the interpretation of the results is given in Section 6. Section 7 concludes the paper.

2 Cyclostationary spectrum sensing

2.1 System model and CA estimation

Consider a secondary system receiver that needs to decide whether a certain spectral band is occupied or free. It samples the baseband signal x(t) uniformly with a sampling period T e . This results in the vector of discrete samples \(\mathbf {x}_{t_{0}} \in {\mathbb {C}}^{N}\), where

$$\begin{array}{*{20}l} \mathbf{x}_{t_{0}} = \left[{x\left(t_{0}\right)}, {x\left(t_{0} + T_{e}\right)}, \dotsc, {x\left(t_{0} + (N-1)T_{e}\right)} \right]^{\mathrm{T}}. \end{array} $$
(4)

We assume that the vector \(\mathbf {x}_{t_{0}}\) is zero mean and, due to the nature of man-made signals, represents an (almost [32], Ch. 1.3) cyclostationary process [9]. The presence of stochastic periodicity in the samples, and thus the presence of a man-made signal, can be revealed by applying a detection algorithm such as the TDT to the CA of the samples. There are different ways of obtaining the CA from the baseband samples, one of which is the following (classical) estimator

$$\begin{array}{*{20}l} \hat{R}_{x, t_{0}}^{a}(\nu) =& \frac{1}{N} \sum\limits_{n=0}^{N-1-\nu} {x\left(t_{0} + {nT}_{e}\right)} {x^{\ast}\left(t_{0} + (n+ \nu) T_{e}\right)}\\ &\times e^{-j2\pi\frac{a}{N}n} e^{-j\pi\frac{a}{N}\nu}. \end{array} $$
(5)

Evaluating this function results in the CA coefficient for the cycle frequency \(\alpha = \frac {a}{NT_{e}}\) and the time delay τ=ν T e , where a stands for the discrete cycle frequency and ν denotes the discrete time delay. Note that the factor \(e^{-j\pi \frac {a}{N}\nu }\) remains constant throughout the sum. It is a phase shift necessary to maintain compatibility with the symmetric CA (2). The estimator (5) is biased but exhibits a smaller estimation variance than an unbiased one [9].
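
For concreteness, the following is a minimal NumPy sketch of the estimator in (5); the function name and interface are our own, and the input is assumed to be a vector of complex baseband samples.

```python
import numpy as np

def ca_coefficient(x, a, nu):
    """Classical CA estimate (5) for discrete cycle frequency a and discrete delay nu.

    x  : complex baseband samples, length N
    a  : discrete cycle frequency (alpha = a / (N * T_e))
    nu : discrete time delay (tau = nu * T_e)
    """
    N = len(x)
    n = np.arange(N - nu)                         # sum runs from n = 0 to N - 1 - nu
    prod = x[n] * np.conj(x[n + nu])              # delay product x(t0+nTe) x*(t0+(n+nu)Te)
    phase = np.exp(-1j * 2 * np.pi * a / N * n) * np.exp(-1j * np.pi * a / N * nu)
    return prod @ phase / N
```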

We define \( {\hat {\mathbf {r}}_{x}^{\nu }}\) as an N length CA vector whose nth element is \(\hat {R}_{x, t_{0}}^{n}(\nu)\), i.e.,

$$\begin{array}{*{20}l} {\hat{\mathbf{r}}_{x}^{\nu}} = \left[\hat{R}_{x, t_{0}}^{0}(\nu), \dotsc, \hat{R}_{x, t_{0}}^{N-1}(\nu)\right]^{\mathrm{T}}. \end{array} $$
(6)

Subsequently, we re-write the estimation of the CA vector as a matrix-vector product. To do so, we need the (N element) delay product with time delay τ=ν T e , which is given by

$$\begin{array}{*{20}l} \mathbf{y}_{N}^{\nu} = \mathbf{x}_{t_{0}} \circ \mathbf{x}^{\ast}_{t_{0} + \nu T_{e}}, \end{array} $$
(7)

where ∘ denotes component-wise multiplication. Note that since the receiver only takes N samples, \(\mathbf {x}^{\ast }_{t_{0} + \nu T_{e}}\) is zero-padded at the end while \(\mathbf {y}^{\nu }_{N}\) is a vector of length N. The CA vector is now given by

$$\begin{array}{*{20}l} {\hat{\mathbf{r}}_{x}^{\nu}} = \frac{1}{N} {F} \mathbf{y}_{N}^{\nu}, \end{array} $$
(8)

where F denotes the (N×N) discrete Fourier transform (DFT) matrix. The N×n ν CA matrix for time delays \(\nu _{1}T_{e}, \dotsc, \nu _{n_{\nu }}T_{e}\) is given by

$$\begin{array}{*{20}l} \hat{\mathbf{R}}_{x} = \left[ {\hat{\mathbf{r}}_{x}^{\nu_{1}}}, \dotsc, {\hat{\mathbf{r}}_{x}^{\nu_{n_{\nu}}}} \right] = \frac{1}{N} \mathbf{F} \mathbf{Y}_{N}, \end{array} $$
(9)

with \(\mathbf {Y}_{N} = \left [\mathbf {y}_{N}^{\nu _{1}}, \dotsc, \mathbf {y}_{N}^{\nu _{n_{\nu }}}\right ] {\in \mathbb {C}^{N \times {n_{\nu }}}}\).
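
The matrix form (7)–(9) maps directly to one FFT per delay. The sketch below (a helper of our own, not taken from the paper) builds the delay products and applies the DFT; as in (8), the constant phase factor \(e^{-j\pi\frac{a}{N}\nu}\) of (5) is not included and could be applied to each row afterwards if the symmetric convention is needed.

```python
import numpy as np

def ca_matrix(x, delays):
    """CA matrix per (7)-(9): one column per discrete delay nu.

    x      : complex baseband samples, length N
    delays : iterable of discrete delays nu_1, ..., nu_{n_nu}
    """
    N = len(x)
    cols = []
    for nu in delays:
        x_shift = np.zeros(N, dtype=complex)
        x_shift[:N - nu] = x[nu:]            # x(t0 + (n+nu)Te), zero-padded at the end
        y = x * np.conj(x_shift)             # delay product (7)
        cols.append(np.fft.fft(y) / N)       # (1/N) F y, i.e. Eq. (8)
    return np.column_stack(cols)             # N x n_nu matrix, Eq. (9)
```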

2.2 The time-domain test (TDT) for cyclostationarity

Given the statistical CA, one could decide between \({\mathcal {H}_{0}}\) and \({\mathcal {H}_{1}}\) by testing it for being non-zero at the signal’s inherent cycle frequencies according to (3). However, as mentioned in Section 1, instead of the statistical CA, we only have access to its estimation, the sample CA (which asymptotically converges to the statistical CA). This hinders the direct applicability of (3) for signal detection as coefficients of the sample CA are not constant anymore. In the seminal work [9], the probability distributions that the sample CA coefficients follow under \({\mathcal {H}_{0}}\) or \(\mathcal {H}_{1}\) have been identified and a test for cyclostationarity based on this knowledge has been designed. The test is briefly described in the following.

Consider the 2n ν ×1 vector

$$ \begin{aligned} {\hat{\mathbf{r}}_{xx^{\ast}}}({a_{0}}) =\left[ {\mathfrak{Re}\left\{ \hat{\mathbf{R}}_{x}\left[{a_{0}},\nu_{1}\right] \right\} }, \dotsc, {\mathfrak{Re}\left\{ \hat{\mathbf{R}}_{x}\left[{a_{0}},\nu_{n_{\nu}}\right] \right\} }, \right. \\ \left. {\mathfrak{Im}\left\{ \hat{\mathbf{R}}_{x}\left[{a_{0}},\nu_{1}\right] \right\} }, \dotsc, {\mathfrak{Im}\left\{ \hat{\mathbf{R}}_{x}\left[{a_{0}},\nu_{n_{\nu}}\right] \right\} } \right]^{\mathrm{T}}, \end{aligned} $$
(10)

which represents the concatenation of the real and the imaginary part of the row of \(\hat {\mathbf {R}}_{x}\) corresponding to the discrete cycle frequency a 0. The frequency a 0 is the cycle frequency of interest, i. e., the one for the presence of which we want to test the signal. Given this vector, we can formulate the following non-asymptotic hypotheses

$$ \begin{array}{lll} {\mathcal{H}_{0}}: {\hat{\textbf{r}}_{xx^{\ast}}}({a_{0}}) &=& {\boldsymbol{\epsilon}_{xx^{\ast}}}({a_{0}}),\\ {\mathcal{H}_{1}}: {\hat{\textbf{r}}_{xx^{\ast}}}({a_{0}}) &=& {\textbf{r}_{xx^{\ast}}}({a_{0}}) + {\boldsymbol{\epsilon}_{xx^{\ast}}}({a_{0}}),\\ \end{array} $$
(11)

where \({\textbf {r}_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) is the deterministic but unknown asymptotic counterpart of \({\hat {\mathbf {r}}_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) and \(\phantom {\dot {i}\!}{\boldsymbol {\epsilon }_{xx^{\ast }}}({a_{0}})\) is the estimation error. Note that in contrast to the hypotheses from Eq. (3), this formulation considers the presence of cyclostationarity in the received signal for one fixed cycle frequency a 0.

Since \({\textbf {r}_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) is non-random, the distribution of \(\phantom {\dot {i}\!}{\hat {\textbf {r}}_{xx^{\ast }}}({a_{0}})\) under \({\mathcal {H}_{0}}\) and \({\mathcal {H}_{1}}\) only differs in mean. As shown in [9], the estimation error \({\boldsymbol {\epsilon }_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) asymptotically follows a Gaussian distribution, i. e.,

$$ \underset{N\rightarrow \infty}{\text{lim}} \sqrt{N} {\boldsymbol{\epsilon}_{xx^{\ast}}}({a_{0}}) {\overset{\mathrm{D}}{=}} {\mathcal{N}(0, {\mathbf{\Sigma}_{xx^{\ast}} }({a_{0}}))}, $$
(12)

where \({\mathbf {\Sigma }_{xx^{\ast }} }({a_{0}})\phantom {\dot {i}\!}\) is the statistical covariance matrix of \({\hat {\mathbf {r}}_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) and \({\overset {\mathrm {D}}{=}}\) denotes convergence in distribution. The 2n ν ×2n ν covariance matrix \({\mathbf {\Sigma }_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) can be computed as [9]

$$ {\mathbf{\Sigma}_{xx^{\ast}} }({a_{0}}) = \left[ \begin{array}{cc} {\mathfrak{Re}\left\{ \frac{\mathbf{Q} + \mathbf{Q}^{\ast}}{2} \right\}} & {\mathfrak{Im}\left\{ \frac{\mathbf{Q} - \mathbf{Q}^{\ast}}{2} \right\}} \\ {\mathfrak{Im}\left\{ \frac{\mathbf{Q} + \mathbf{Q}^{\ast}}{2} \right\}} & {\mathfrak{Re}\left\{ \frac{\mathbf{Q}^{\ast} - \mathbf{Q}}{2} \right\} } \end{array} \right], $$
(13)

where the (m,n)th entries of the n ν ×n ν matrices Q and Q ∗ are given by

$$\begin{array}{*{20}l} \mathbf{Q}(m,n) &= S_{\mathbf{y}_{N}^{\nu_{m}}\mathbf{y}_{N}^{\nu_{n}}}(2{a_{0}}, {a_{0}}),~\text{and} \end{array} $$
(14)
$$\begin{array}{*{20}l} \mathbf{Q}^{\ast}(m,n) &= S^{\ast}_{\mathbf{y}_{N}^{\nu_{m}}\mathbf{y}_{N}^{\nu_{n}}}(0, -{a_{0}}) \end{array} $$
(15)

respectively. The term \(S_{\mathbf {y}_{N}^{\nu _{m}}\mathbf {y}_{N}^{\nu _{n}}}(\cdot, \cdot)\) denotes the unconjugated, while the term \(S^{\ast }_{\mathbf {y}_{N}^{\nu _{m}}\mathbf {y}_{N}^{\nu _{n}}}(\cdot, \cdot)\) denotes the conjugated cyclic spectrum of a signal. One way to estimate these is to determine the following frequency-smoothed periodograms:

$$\begin{array}{*{20}l}{} \hat{S}_{\mathbf{y}_{N}^{\nu_{m}}\mathbf{y}_{N}^{\nu_{n}}}(2{a_{0}}, {a_{0}}) =&\frac{1}{N L} \sum\limits_{s = - \frac{L-1}{2}}^{\frac{L-1}{2}} W(s) \\ & \times \hat{\mathbf{R}}_{x}[a_{0}-s,\nu_{n}]\hat{\mathbf{R}}_{x}[a_{0}+s,\nu_{m}] \end{array} $$
(16)
$$\begin{array}{*{20}l}{} \hat{S}^{\ast}_{\mathbf{y}_{N}^{\nu_{m}}\mathbf{y}_{N}^{\nu_{n}}}(0, -{a_{0}}) = & \frac{1}{N L} \sum\limits_{s = - \frac{L-1}{2}}^{\frac{L-1}{2}} W(s)\\ & \times \hat{\mathbf{R}}_{x}^{\ast}[a_{0}+s,\nu_{n}]\hat{\mathbf{R}}_{x}[a_{0}+s,\nu_{m}], \end{array} $$
(17)

where W is a normalized spectral window of odd length L. Looking at Eqs. (16) and (17), it becomes clear why the cyclic spectrum is often referred to as the spectral correlation.
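
As an illustration of (13), (16), and (17), the sketch below estimates the covariance matrix from the CA matrix of (9) using a rectangular spectral window; the wrap-around indexing at the band edges and the function interface are our own assumptions.

```python
import numpy as np

def estimate_covariance(R_hat, a0, nu_idx, L):
    """Estimate the 2*n_nu x 2*n_nu covariance matrix (13) at cycle frequency a0.

    R_hat  : N x n_nu CA matrix from (9)
    a0     : discrete cycle frequency of interest
    nu_idx : column indices of the delays nu_1, ..., nu_{n_nu} in R_hat
    L      : odd length of the (here rectangular) spectral window W
    """
    N = R_hat.shape[0]
    n_nu = len(nu_idx)
    W = np.ones(L)                                       # rectangular window
    s = np.arange(-(L - 1) // 2, (L - 1) // 2 + 1)
    Q = np.zeros((n_nu, n_nu), dtype=complex)            # unconjugated spectrum, Eq. (16)
    Qc = np.zeros((n_nu, n_nu), dtype=complex)           # conjugated spectrum, Eq. (17)
    for m in range(n_nu):
        for n in range(n_nu):
            Q[m, n] = np.sum(W * R_hat[(a0 - s) % N, nu_idx[n]]
                               * R_hat[(a0 + s) % N, nu_idx[m]]) / (N * L)
            Qc[m, n] = np.sum(W * np.conj(R_hat[(a0 + s) % N, nu_idx[n]])
                                * R_hat[(a0 + s) % N, nu_idx[m]]) / (N * L)
    # assemble the block structure of Eq. (13)
    top = np.hstack([np.real((Q + Qc) / 2), np.imag((Q - Qc) / 2)])
    bot = np.hstack([np.imag((Q + Qc) / 2), np.real((Qc - Q) / 2)])
    return np.vstack([top, bot])
```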

Given the estimated quantities described above, the following generalized likelihood ratio (GLR) test statistic can be derived [17]

$$ \mathcal{T}_{xx^{\ast}} = N {\hat{\mathbf{r}}_{xx^{\ast}}}({a_{0}}){\hat{\mathbf{\Sigma}}_{xx^{\ast}}}^{-1}({a_{0}}) {\hat{\mathbf{r}}_{xx^{\ast}}}^{\mathrm{T}}({a_{0}}). $$
(18)

The test statistic can be interpreted as a normalized energy. The inverse of the covariance matrix scales \({\hat {\mathbf {r}}_{xx^{\ast }}}({a_{0}})\phantom {\dot {i}\!}\) such that under \({\mathcal {H}_{0}}\), its entries follow a standard normal distribution. Thus, under \({\mathcal {H}_{0}}\), the test statistic asymptotically follows a central chi-squared distribution with 2n ν degrees of freedom, i. e., \(\phantom {\dot {i}\!}\underset {N\rightarrow \infty }{\lim }\mathcal {T}_{xx^{*}} \overset {\mathrm {D}}{=} \chi _{2n_{\nu }}^{2}\), while under \({\mathcal {H}_{1}}\), it asymptotically follows a non-central chi-squared distribution with unknown non-centrality parameter λ, i. e., \(\phantom {\dot {i}\!}\underset {N\rightarrow \infty }{\lim }\mathcal {T}_{xx^{*}} \overset {\mathrm {D}}{=} \chi _{2n_{\nu }}^{2}(\lambda)\). Based on the above test statistic, we can design a CFAR detector with some false alarm rate P fa by finding the corresponding decision threshold in the \(\chi _{2n_{\nu }}^{2}\) tables. We cannot design a test based on a desired detection rate P d since, although \(\phantom {\dot {i}\!}{\textbf {r}_{xx^{\ast }}}({a_{0}})\) is deterministic, it depends on the type of signal emitted by the transmitter as well as on the signal to noise ratio (SNR) at the receiver, both of which are assumed to be unknown.
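
A minimal sketch of the resulting CFAR decision is given below, assuming SciPy is available; the threshold is the \((1-P_{\text{fa}})\)-quantile of the central chi-squared distribution with 2n ν degrees of freedom, and the function interface is our own.

```python
import numpy as np
from scipy.stats import chi2

def tdt_decision(r_hat, Sigma_hat, N, n_nu, p_fa=0.05):
    """GLR test (18) with a CFAR threshold from the chi-squared distribution.

    r_hat     : real 2*n_nu vector (10) at the cycle frequency of interest
    Sigma_hat : estimated 2*n_nu x 2*n_nu covariance matrix (13)
    """
    T = N * r_hat @ np.linalg.solve(Sigma_hat, r_hat)    # test statistic (18)
    threshold = chi2.ppf(1.0 - p_fa, df=2 * n_nu)         # under H0: T ~ chi2 with 2*n_nu dof
    return T > threshold, T, threshold
```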

The classical approach for cyclostationary spectrum sensing is to apply the TDT to the CA estimate from (5). However, to do so, one needs to know which cycle frequency to test beforehand, which eliminates the possibility of true blind spectrum sensing. One could sequentially test the received signal for all possible cycle frequencies. However, with high probability, the estimation noise at some cycle frequency would have a value above the decision threshold, leading to a false alarm.

3 Sparsity-aided CA estimation: simultaneous OMP-based estimator

As discussed in Section 1, for most man-made signals the CA is (asymptotically) sparsely occupied, containing spikes only at the DC component as well as at the cycle frequencies of inherent signal periodicities and their harmonics. In this section, we take advantage of this inherent sparsity and cast the CA estimation as a joint sparse recovery problem. Since this method is able to detect the CA's support, it removes the traditional approach's requirement of knowing the cycle frequencies beforehand, thus making blind cyclostationarity-based spectrum sensing possible.

We begin by rewriting Eq. (9) as

$$ \mathbf{Y}_{N} = N \mathbf{F}^{-1}\hat{\mathbf{R}}_{x}, $$
(19)

where F −1 is the inverse discrete Fourier transform (IDFT) matrix. Now, consider an m×N matrix M, which consists of a selection of m rows of the N×N identity matrix I N . It represents the undersampling operation. Applying M to \(\mathbf {x}_{t_{0}}\), we obtain an m×1 vector of compressive samples1 \(\bar {\mathrm {x}}_{t_{0}} = \mathbf {M} \mathrm {x}_{t_{0}}\). Now, we can calculate an m element delay product with time delays τ=ν i T e , i.e., \(\mathbf {y}_{m}^{\nu } = \bar {\mathbf {x}}_{t_{0}} \circ \bar {\mathbf {x}}_{t_{0} + \nu T_{e}}^{*} \). Stacking all \(\mathbf {y}_{m}^{\nu }\) together into one matrix Y m , we finally obtain

$$ \mathbf{Y}_{m} = \mathbf{M} \mathbf{Y}_{N} = N \mathbf{M}\mathbf{F}^{-1}\hat{\mathbf{R}}_{x}, $$
(20)

where Y m contains a selection of m coefficients of the delay products for different delays ν i T e . Note that the ν i ∈ [ 0,N−1] are chosen such that \(\bar {\mathbf {x}}_{t_{0} + \nu _{i} T_{e}}\) is non-empty. We now want to recover \(\hat {\mathbf {R}}_{x}\) from Y m by solving the underdetermined inverse problem (20). To do so, we exploit our knowledge about the CA's sparsity.
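
The sketch below illustrates how Y m and the sensing matrix A=N M F −1 of (20) can be formed; for simplicity (and as discussed in the endnote), M is applied to the Nyquist-rate vector, and the row selection here is purely random.

```python
import numpy as np

def compressive_delay_products(x, m, delays, rng=np.random.default_rng(0)):
    """Form Y_m = M Y_N and A = N M F^{-1}, cf. (20), from m randomly kept samples."""
    N = len(x)
    keep = np.sort(rng.choice(N, size=m, replace=False))   # rows of I_N retained by M
    cols = []
    for nu in delays:
        x_shift = np.zeros(N, dtype=complex)
        x_shift[:N - nu] = x[nu:]
        y_N = x * np.conj(x_shift)                          # full delay product (7)
        cols.append(y_N[keep])                              # M applied row-wise
    Y_m = np.column_stack(cols)                             # m x n_nu matrix, Eq. (20)
    A = N * np.fft.ifft(np.eye(N), axis=0)[keep, :]         # A = N * M * F^{-1}
    return Y_m, A, keep
```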

The straightforward solution would be to solve the following optimization problem

$$ \begin{array}{rl} \text{min}&\left\| {\text{vec}\left\{ \hat{\mathbf{R}}_{x} \right\}} \right\|{~}_{\ell_{0}} \\ \text{s.t.}&\mathbf{Y}_{m} = N \mathbf{M}\mathbf{F}^{-1}\hat{\mathbf{R}}_{x}, \end{array} $$
(21)

where \(\left \| \cdot \right \|{~}_{\ell _{0}}\) denotes the ℓ 0 “norm” [20], which is the number of non-zero entries in a vector, and vec{·} stands for the vectorization of a matrix, i. e., the concatenation of its columns to a single vector. Eq. 21 is known to be a non-convex combinatorial problem [20]. One way to solve it within a practically feasible amount of time is to substitute the ℓ 0-“norm” by its tightest convex relaxation, the ℓ 1 norm. With high probability, this produces the same result since, for most large underdetermined systems of linear equations, the minimal ℓ 1-norm solution is also the sparsest solution [33]. Another way of solving (21) efficiently is by applying one of the many greedy sparse recovery algorithms that have been developed in the field of CS, such as, e.g., orthogonal matching pursuit (OMP) [22].

OMP is a greedy algorithm that iteratively determines a vector’s support from an underdetermined system of linear equations and subsequently recovers the vector by solving a least-square problem. Using it, we could solve (21) for each column of \(\hat {\mathbf {R}}_{x}\) individually (as in [24]), i. e., we could solve

$$ \begin{array}{rl} \text{min}& \left\| {\hat{\mathbf{r}}_{x}^{\nu}} \right\|{~}_{\ell_{0}} \\ \text{s.t.}&\mathbf{y}_{m}^{\nu} = N \mathbf{M}\mathbf{F}^{-1} {\hat{\mathbf{r}}_{x}^{\nu}},\\ \end{array} $$
(22)

for each ν. Instead, we notice that the vectors \(\left. {\hat {\mathbf {r}}_{x}^{\nu }} \right |{~}_{\nu = \nu _{1}}^{\nu _{n_{\nu }}}\) are jointly sparse with regard to the time delay. Therefore, stacking \({\hat {\mathbf {r}}_{x}^{\nu }}\) in \(\hat {\mathbf {R}}_{x}\) results in a row-sparse matrix whose rows are (asymptotically) non-zero only at the indices corresponding to the cycle frequencies. In order to exploit this additional structure, we propose to use an extension of OMP called simultaneous orthogonal matching pursuit (SOMP) [34] to recover the CA matrix \(\hat {\mathbf {R}}_{x}\) at once. The CA estimation based on SOMP is summarized in Algorithm 1, which we further refer to as SOber.

The goal of SOber is to find the indices of the atoms contained in Y m , i. e., the support of the columns of \(\hat {\mathbf {R}}_{x}\), and subsequently recover the identified non-zero rows of \(\hat {\mathbf {R}}_{x}\) by solving least-square problems. We start with an empty support S 0. In each iteration, one atom index (an atom being a column of the matrix A=N M F −1) is added to the support. The index is selected according to the sum of the absolute correlation values between the corresponding atoms and the delay products of the different time delays (lines 3–4). Using the new support set S i , a least-square problem is solved for each column in \(\hat {\mathbf {R}}_{x}\) (lines 5–6). In each iteration, the atom index to be added to the support set is chosen according to the correlation between the residuum of Y m and the atom set. Since every iteration adds one index to the support set, one usually chooses n iter greater than or equal to the sparsity of the signal to be recovered. The difference between OMP (used in, e. g., [24]) and SOMP can be found in line 4, where SOMP jointly considers the amount of correlation between the atoms and the delay products of multiple delays, while OMP would select the support of \(\left.{\hat {\mathbf {r}}_{x}^{\nu }}\right |{~}_{l=1}^{n_{\nu }}\) for each l individually.
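
The following is a compact sketch of the SOber idea (support selection summed over delays, followed by per-column least squares); the exact normalization and stopping rule of Algorithm 1 may differ.

```python
import numpy as np

def sober(Y_m, A, n_iter):
    """SOMP-based CA recovery (SOber): jointly row-sparse estimate of R_hat.

    Y_m    : m x n_nu matrix of compressed delay products (20)
    A      : m x N sensing matrix, A = N * M * F^{-1}
    n_iter : number of atoms to select (>= expected number of cycle frequencies)
    """
    m, N = A.shape
    n_nu = Y_m.shape[1]
    R_hat = np.zeros((N, n_nu), dtype=complex)
    support = []
    residual = Y_m.copy()
    for _ in range(n_iter):
        # correlate every atom with the residua of all delays and sum the magnitudes
        corr = np.abs(A.conj().T @ residual).sum(axis=1)
        corr[support] = 0                                  # do not pick an atom twice
        support.append(int(np.argmax(corr)))
        # least-squares fit on the current support, one column (delay) at a time
        A_S = A[:, support]
        coef, *_ = np.linalg.lstsq(A_S, Y_m, rcond=None)
        residual = Y_m - A_S @ coef
    R_hat[support, :] = coef
    return R_hat, sorted(support)
```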

4 Sparsity-aided CA estimation: dictionary-assisted estimator

In Section 3, we have described a SOMP-based algorithm that estimates the cycle frequencies and the CA from fewer samples than required using the classic approach by taking into consideration the inherent sparsity of the CA. In this section, we develop an algorithm that makes use of additional prior knowledge about the signal’s structure in the form of structure dictionaries to further enhance the cycle frequency and CA estimation. Like SOber, the new algorithm does not require the prior knowledge about the cycle frequencies contained in the signal.

One fact about the CA that can be exploited is that, for a rectangular pulse shape, a linearly modulated signal's CA exhibits spikes not only at the signal's fundamental cycle frequency but also at the harmonics thereof. Another one is the symmetry of the CA around the DC component. First steps in this direction, showing promising results, have been taken in [35]. The drawback of the solution proposed in [35] is that the convex optimization problem used to recover the CA becomes huge for practical parameter choices, which results in a prohibitively large computational complexity. To circumvent this, we propose an OMP-based greedy algorithm that takes advantage of the additional prior knowledge while featuring a much smaller complexity than the optimization problem.

The proposed Dice algorithm (Algorithm 2) follows the same idea as the SOber algorithm (Algorithm 1) in that it iteratively determines the support of the sparse CA and subsequently recovers it by solving an overdetermined least-square problem. However, in contrast to SOber, Dice facilitates the use of further prior knowledge in addition to the CA's sparsity in the recovery process. Thus, in addition to the inputs received by SOber, Dice needs a set of structure dictionaries \({\overset {\circ }{\mathbf {D}}^{\left (\frac {N}{2}\right)}_{l}|{~}_{l=1}^{n_{\nu }}}\), one dictionary for each delay value ν l that is to be considered in the recovery process. Since the structure dictionaries do not necessarily model the DC component of the CA, it is added to the support set in the initialization phase of Dice (line 1). Instead of working with the amount of correlation between the residuum and the atoms directly as in SOber, the Dice algorithm computes combinations of these as dictated by the structure dictionaries in use (lines 3, 4). This way, the decision about the non-zero cycle frequencies (line 5) takes into account the structure of the CA. Additionally, instead of adding a single element to the support set per iteration, Algorithm 2 adds all indices to the support set that have a non-zero value in the selected dictionary word. The recovery step (cf. lines 6, 7) remains unchanged. Note that in Algorithm 2, the abs(·) operator stands for the element-wise absolute value of a matrix, while the selection operator [ ·] l: denotes the lth row of a matrix.
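
A rough sketch of the Dice selection rule is given below, assuming full dictionaries of size N × N/2 as constructed in the following subsections; details such as the exact combination of the correlations and the handling of the selected dictionary word follow our reading of Algorithm 2 and may differ from the paper's pseudocode.

```python
import numpy as np

def dice(Y_m, A, dictionaries, n_iter):
    """Dictionary-assisted CA recovery (Dice), following the idea of Algorithm 2.

    dictionaries : list of n_nu full dictionaries, each of shape N x n_words,
                   e.g. the symmetry or the asymptotic dictionary
    """
    m, N = A.shape
    n_nu = Y_m.shape[1]
    R_hat = np.zeros((N, n_nu), dtype=complex)
    support = {0}                                      # DC component is always included
    residual = Y_m.copy()
    for _ in range(n_iter):
        # dictionary-weighted correlations: one score per candidate dictionary word
        corr = np.abs(A.conj().T @ residual)           # N x n_nu
        score = sum(dictionaries[l].T @ corr[:, l] for l in range(n_nu))
        word = int(np.argmax(score))
        # add every index the selected word marks as non-zero (union over delays)
        for l in range(n_nu):
            support |= set(np.flatnonzero(dictionaries[l][:, word]))
        S = sorted(support)
        coef, *_ = np.linalg.lstsq(A[:, S], Y_m, rcond=None)
        residual = Y_m - A[:, S] @ coef
    R_hat[S, :] = coef
    return R_hat, S
```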

In the following, we introduce two particular structure dictionaries that can be used with the proposed algorithm: (i) the dictionary that accounts for the symmetry of the CA and (ii) the dictionary that describes the harmonic structure of the CA as well as its shape.

4.1 Symmetry dictionary

Let \( {\mathbf {D}^{(\frac {N}{2})}_{\text {sym}}} \in \{0,1\}^{\frac {N}{2}\times \frac {N}{2}}\) denote the symmetry dictionary. Its columns represent possible cycle frequencies contained in the set \(a \in \{1, \dotsc, \frac {N}{2}\}\). For simplicity, this set is chosen such that the frequencies contained in it lie at the center frequencies of the CA’s DFT bins. An entry of the dictionary covers elements 1 to \(\frac {N}{2}\) of \({\hat {\mathbf {r}}_{x}^{\nu }} \) which is indexed from 0 to N−1. The symmetry dictionary is simply given by the identity matrix, i. e., \({\mathbf {D}^{(\frac {N}{2})}_{\text {sym}}} = {\mathbf {I}_{\frac {N}{2}}}\). To model the whole vector \( {\hat {\mathbf {r}}_{x}^{\nu }} \), the dictionary is extended to include the DC component, which is set to zero, as well as the negative cycle frequencies. Note that the DC component is set to zero because its value is independent of the presence of cyclostationarity. The resulting full dictionary is exemplarily given by

$$ {\overset{\circ}{\mathbf{D}}^{(3)}_{\text{sym}}} = \left(\begin{array}{ccc} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0\\ \end{array}\right). $$
(23)

The circle above the symbol indicates that it is the full version of the dictionary, i. e., the one spanning the whole Fourier range. The ones in the matrix specify the locations of the non-zero coefficients in the CA fitting the format of (9). Note that in the case of the symmetry dictionary, all of \(\overset {\circ }{\mathbf {D}}^{(\frac {N}{2})}_{l}|{~}_{l=1}^{n_{\nu }}\) in Algorithm 2 are identical i. e., \(\overset {\circ }{\mathbf {D}}_{l}^{(\frac {N}{2})}|{~}_{l=1}^{n_{\nu }} = \overset {\circ }{\mathbf {D}}^{(\frac {N}{2})}_{\text {sym}}\).
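
The full symmetry dictionary of (23) can be generated as follows (a small helper of our own):

```python
import numpy as np

def symmetry_dictionary(N):
    """Full symmetry dictionary as in (23): N rows, N/2 candidate cycle frequencies.

    Row 0 (the DC bin) is all zeros; word a has ones at bins a and N - a.
    """
    half = N // 2
    D = np.zeros((N, half))
    for a in range(1, half + 1):
        D[a, a - 1] = 1                 # positive cycle frequency a
        D[(N - a) % N, a - 1] = 1       # mirrored negative cycle frequency
    return D
```

For N=6 this reproduces the matrix shown in (23).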

4.2 Asymptotic CA and asymptotic dictionary

The symmetry structure dictionary exploits one of the facts we know about the CA. In order to explore an extreme in terms of prior knowledge, we create a dictionary that contains the maximum possible amount of prior information about the CA, i. e., the one containing the asymptotic CA itself. This requires knowledge of the analytic expression for the discrete asymptotic CA vector, which we derive in the following.

To assess the performance of different CA estimation algorithms, we employ common linearly modulated signals with symbol length T s as described by the following equation ([7], Eq. 73)

$$ {s(t)} = \sum\limits_{n = -\infty}^{\infty} c_{n} {p\left(t - {nT}_{s} + \phi\right)}. $$
(24)

Here, p(t) is a deterministic finite-energy pulse, ϕ represents a fixed pulse-timing phase parameter and c n stands for the nth symbol to be transmitted. We are now interested in an expression for the discrete asymptotic CA vector of the above signal type.
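
For numerical experiments, such a signal can be emulated as in the sketch below; the BPSK symbol alphabet and the SNR convention are our own illustrative choices, and the pulse-timing phase is ignored (pulses aligned with the sample grid).

```python
import numpy as np

def lin_mod_signal(n_symbols, n_s, snr_db=0.0, rng=np.random.default_rng(1)):
    """Sampled linearly modulated signal (24) with rectangular pulse in AWGN.

    n_symbols : number of transmitted symbols c_n (here BPSK, for illustration)
    n_s       : oversampling factor, i.e. samples per symbol (T_s = n_s * T_e)
    """
    symbols = rng.choice([-1.0, 1.0], size=n_symbols)        # stationary symbol sequence
    s = np.repeat(symbols, n_s)                               # rectangular pulse shaping
    noise_power = 10 ** (-snr_db / 10) * np.mean(np.abs(s) ** 2)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(s.size)
                                        + 1j * rng.standard_normal(s.size))
    return s + noise
```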

The fundamental cycle frequency of the built-in periodicity of the signal from (24) is \(\frac {1}{T_{s}}\). Its continuous CA is given by ([7], Eq. 81)

$${} { {R_{s, T_{s}}^{\alpha}(\tau)} = \left\{ \begin{array}{ll} 0 & \text{for}~\alpha \ne \frac{k}{T_{s}}\\ \frac{1}{T_{s}} \sum\limits_{n = -\infty}^{\infty} {R_{c}(n T_{s})} r_{p}^{\alpha}(\tau - {nT}_{s}) e^{j2\pi \alpha \phi} & \text{otherwise,} \end{array}\right.} $$
(25)

where \(k \in {\mathbb {Z}}\) and \(r_{p}^{\alpha }(\tau)\) is defined as ([7], Eq. 82)

$$ r_{p}^{\alpha}(\tau) \triangleq \int\limits_{-\infty}^{\infty} {p\left(t + \tau / 2\right)} p^{\ast}\left(t - \tau / 2\right) e^{-j2\pi \alpha t} {\,\mathrm{d}} t. $$
(26)

The symbol \({\mathbb {Z}}\) denotes the set of integers, i. e., k ∈ {…,−2,−1,0,1,2,…}.

We consider the case where c n is a purely stationary random sequence. Thus, its autocorrelation \( {R_{c}(n T_{s})} = {R_{c}^{0}(n T_{s})}\) is non-zero only at n=0 (cf. (2)), reducing (25) to

$$ {R_{s, T_{s}}^{\alpha}}(\tau) = \left\{ \begin{array}{ll} 0 & \text{for}~\alpha T_{s} \notin {\mathbb{Z}}\\ \frac{\sigma^{2}_{c}}{T_{s}} r_{p}^{\alpha}(\tau) e^{j2\pi \alpha \phi} & \text{otherwise,} \end{array}\right. $$
(27)

where \(\sigma ^{2}_{c}\) is the average power of c n . In the following, we assume a rectangular pulse shape of length T s , i. e., \({p(t)} = \text {rect}(\frac {t}{T_{s}})\), which leads to \(p({t + \frac {\tau }{2}})p^{*}({t - \frac {\tau }{2}}) = \text {rect}\left (\frac {t}{T_{s} - |\tau |}\right)\). Thus, applying the Fourier transform to (26) yields

$${} { {R_{s, T_{s}}^{\alpha}}(\tau) = \left\{ \begin{array}{ll} 0 & \text{for}~\alpha T_{s} \notin {\mathbb{Z}}\\ \sigma^{2}_{c} \frac{T_{s} - |\tau|}{T_{s}} \text{sinc}(\alpha (T_{s} - |\tau|)) e^{j2\pi \alpha \phi} & \text{otherwise,} \end{array}\right.} $$
(28)

for |τ|≤T s where \(\text {sinc}(x) = \frac {\text {sin}(\pi x)}{\pi x}\). Note that the use of the absolute value of the delay stems from the fact that for a real symmetric pulse shape p(t), the expression \(p({t + \frac {\tau }{2}})p^{*}({t - \frac {\tau }{2}})\) is symmetric with respect to τ.

Equation 28 represents the CA of the continuous-time signal described by (24). The CA of the sampled version of (24) at its fundamental cycle frequency and the harmonics thereof is given by

$$ \left.{{R'}_{s, n_{s}}^{a}(\nu)} \right|{~}_{a = k\frac{N}{n_{s}}} = \frac{\sigma^{2}_{c}}{n_{s}} \frac{\text{sin}(\pi \frac{a}{N}(n_{s} - |\nu|))}{\text{sin}\left(\pi \frac{a}{N} \right)} e^{j2\pi \frac{a}{N}d_{\phi}}. $$
(29)

The derivation of this expression can be found in the appendix.

The coefficients of the closed-form expression (29) together with the alternative case \(\left.{R'}^{a}_{s,n_{s}} {(\nu)}\right |{~}_{a \ne k\frac {N}{n_{s}}} = 0\) at different discrete cycle frequencies a are arranged in a vector \(\mathbf {r}^{\nu }_{s,n_{s}}[\!{a}]\) matching the format of the DFT matrix, such that

$$ {\mathbf{r}_{s, n_{s}}^{\nu}[\!a]} = \left\{ \begin{array}{ll} {{R'}_{s, n_{s}}^{a}(\nu)} & \text{for}~a \in \{0, \dotsc,\frac{N}{2}\},\\ {{R'}_{s, n_{s}}^{(a - N)}(\nu)} & \text{for}~a \in \{\frac{N}{2}+1, \dotsc,N-1\}. \end{array}\right. $$
(30)

Note that adding purely stationary noise to the signal s(t) does not change its asymptotic CA (with the exception of (a,ν)=(0,0), at which point the CA’s value is the average power of signal and noise, cf. (2)) since the noise exhibits no inherent periodic behavior. Due to this fact, (30) can also be used as a reference for the CA of signals contaminated with additive white Gaussian noise (AWGN) with the exception mentioned.

Given (30), we can now construct the asymptotic dictionary:

$$ {\overset{\circ}{\mathbf{D}}^{(\frac{N}{2})}_{\text{asy},l}} = \left[ \frac{\text{abs}\left(\mathbf{r}^{\nu_{l}}_{s,n_{s}=\frac{N}{1}}\right)}{\left\| \mathbf{r}^{\nu_{l}}_{s,n_{s}=\frac{N}{1}} \right\|{~}_{\ell_{1}}},\dotsc, \frac{\text{abs}\left(\mathbf{r}^{\nu_{l}}_{s,n_{s}=\frac{N}{N/2}} \right)}{\left\| \mathbf{r}^{\nu_{l}}_{s,n_{s}=\frac{N}{N/2}} \right\|{~}_{\ell_{1}}} \right]. $$
(31)

Note that in contrast to the single symmetry dictionary, there is a whole set of asymptotic dictionaries, one for each delay value of interest. The columns of the dictionaries correspond to actual symbol lengths, i. e., actual cycle frequencies. Thus, each column contains the absolute value of the normalized asymptotic CA of a cycle frequency candidate, where the discrete symbol lengths \(n_{s} \in \left \{ \frac {N}{1}, \dotsc, \frac {N}{{N}/{2}} \right \}\) correspond to the discrete cycle frequencies a ∈ {1,…,N/2}. It is worth noting that in addition to its role as the basis of the second structure dictionary for Algorithm 2, the expression (30) serves as a reference for the direct comparison of different CA estimation methods in Section 6.
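
A sketch of how (29), (30), and (31) translate into code is given below; the pulse-timing phase \(d_{\phi} = \frac{n_{s}+1}{2}\) is taken from the appendix, and parameterizing each candidate by its fundamental discrete cycle frequency a 0 = N/n s (marking only its harmonics as non-zero) is our own simplification.

```python
import numpy as np

def asymptotic_ca(N, a0, nu, sigma_c2=1.0):
    """Asymptotic CA vector (30) for fundamental discrete cycle frequency a0 = N/n_s."""
    n_s = N / a0                                     # discrete symbol length
    d_phi = (n_s + 1) / 2                            # pulse-timing phase from the appendix
    r = np.zeros(N, dtype=complex)
    for k in range(N):
        a = k if k <= N // 2 else k - N              # signed cycle frequency of DFT bin k
        if a == 0 or a % a0 != 0:                    # non-zero only at harmonics of a0, cf. (29)
            continue
        r[k] = (sigma_c2 / n_s
                * np.sin(np.pi * a / N * (n_s - abs(nu))) / np.sin(np.pi * a / N)
                * np.exp(1j * 2 * np.pi * a / N * d_phi))
    return r

def asymptotic_dictionary(N, nu):
    """Asymptotic dictionary (31) for one delay nu: one l1-normalized column per candidate a0."""
    cols = []
    for a0 in range(1, N // 2 + 1):
        col = np.abs(asymptotic_ca(N, a0, nu))
        cols.append(col / np.linalg.norm(col, 1) if col.sum() > 0 else col)
    return np.column_stack(cols)
```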

5 Cyclostationarity detection from sparse cyclic spectra

Both the SOMP-based (Algorithm 1) and the dictionary-assisted CA estimation (Algorithm 2) are able to recover the CA without knowing beforehand which cycle frequencies are contained in the signal. Furthermore, since the (row) support of \(\hat {\mathbf {R}}_{x}\) corresponds to the candidate cycle frequencies, its identification in Algorithm 1 (line 4) or in Algorithm 2 (line 5) can be interpreted as blind cyclostationary spectrum sensing by itself. However, under practical limitations on the number of samples available for CA estimation and attainable SNR levels, the support estimate is likely to contain errors, e.g., missed and/or falsely identified support entries. This calls for further testing of the candidate cycle frequencies, which can be performed by applying the TDT method described in Section 2.2. However, the obtained sparse CA is not directly compatible with the traditional TDT: since only a few of the coefficients of \(\hat {\mathbf {R}}_{x}\) are recovered and all other coefficients are set to zero, it is not possible to reliably estimate the covariance matrix \({{\hat {\mathbf {\Sigma }}_{xx^{\ast }}}}\phantom {\dot {i}\!}\) under \(\mathcal {H}_{0}\). To tackle this problem, we present a modification of the traditional TDT.

The traditional TDT is a CFAR detector, i. e., the probability density function (PDF) of its test statistic under \({\mathcal {H}_{0}}\) is asymptotically independent of any signal parameters such as, e. g., the noise power. To achieve this, the TDT first estimates the CA noise covariance and then rescales the original CA by this estimate so that the scaled CA follows a standard Gaussian distribution. This is where the problem occurs. Although we are ultimately only interested in the CA coefficients that are located at the signal's cycle frequencies, for the estimation of the noise covariance, we need the coefficients lying between the cycle frequencies, which only carry estimation noise. SOber and Dice do not recover these. Thus, in the following, we propose an extension of the TDT, the sparse TDT, to bridge this gap.

To obtain optimal CA recovery performance, one would choose the sensing matrix A with minimum structure, i. e., the selection of the m entries of the delay product would be completely random. However, to tackle the aforementioned problem, we choose a combination of consecutive and random delay product elements. The consecutive part comprises the first ⌈βm⌉ rows of Y m , where β ∈ [ 0.01,0.5] and ⌈·⌉ denotes the ceiling operation. The remainder of the rows of Y m is a random selection of the remaining rows of Y N . The first step of the sparse TDT is to determine the classical CA estimate from the consecutive block of delay product elements. In the next step, the cycle frequency of interest a 0 is determined using either Algorithm 1 or Algorithm 2. Next, the covariance matrix for the cycle frequency a 0 corresponding to the N size CA \(\left ({{\hat {\mathbf {\Sigma }}_{xx^{\ast }} }}^{(N)}(a_{0})\right)\phantom {\dot {i}\!}\) needs to be determined, where the superscript (N) indicates the corresponding CA size. It is obtained as

$$ {\hat{\mathbf{\Sigma}}_{xx^{\ast}} }^{(N)}({a_{0}}) = \frac{\hat{\mathbf{\Sigma}}_{xx^{\ast}}^{(\lceil\beta{m}\rceil)}(\lceil\beta\frac{m}{N}a_{0}\rceil)}{\sqrt{\beta\frac{m}{N}}}, $$
(32)

where \({\hat {\mathbf {\Sigma }}_{xx^{\ast }} }^{(\lceil \beta {m}\rceil)}\) is the covariance matrix corresponding to the ⌈βm⌉ size CA estimated from the consecutive samples in the first step. The test statistic is subsequently evaluated as (cf. (18))

$$ \mathcal{T}^{\text{\,sparse}}_{xx^{\ast}} = N {\hat{\mathbf{r}}_{xx^{\ast}}}({a_{0}}) \left(\frac{\hat{\mathbf{\Sigma}}^{(\lceil \beta{m}\rceil)}(\lceil\beta\frac{m}{N}a_{0}\rceil)} {\sqrt{\beta\frac{m}{N}}} \right)^{-1} {\hat{\mathbf{r}}_{xx^{\ast}}}^{T}(a_{0}). $$
(33)

The consecutive sample ratio β is a trade-off parameter. The optimal sparse recovery performance is to be expected when A=N M F −1 has the smallest possible amount of structure, which here corresponds to the case where the set of known delay product elements is chosen completely at random, i. e., for β=0. Conversely, the best estimation quality for the CA covariance matrix \(\hat {\mathbf {\Sigma }}_{xx^{\ast }}\) is achieved when all known delay product elements are consecutive, i. e., for β=1.
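
The mixed row selection used by the sparse TDT can be generated as in the sketch below; placing the consecutive block at the very beginning of the observation is our assumption, consistent with "the first ⌈βm⌉ rows".

```python
import numpy as np

def mixed_row_selection(N, m, beta, rng=np.random.default_rng(2)):
    """Row selection for the sparse TDT: a consecutive block plus random rows.

    The first ceil(beta*m) rows are consecutive (used for the classical CA and the
    covariance estimate); the rest are drawn at random from the remaining indices.
    """
    n_consec = int(np.ceil(beta * m))
    consec = np.arange(n_consec)
    remaining = np.setdiff1d(np.arange(N), consec)
    random_part = np.sort(rng.choice(remaining, size=m - n_consec, replace=False))
    return np.concatenate([consec, random_part])
```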

6 Numerical evaluation

In this section, we compare the performance of the methods presented in the preceding sections. The parameters used throughout this section are given in Table 1.

Table 1 System parameters

We begin by investigating the influence of the consecutive sample ratio β on the spectrum sensing performance. Figure 1 shows how the detection rate changes with β for an SNR of 0 dB and different false alarm rates. For all methods but the OMP, β=0.15 seems to be a good choice. For the OMP, the detection rate increases monotonically with β. However, as can be seen below, even for the OMP, a high β is not a good choice with regard to other performance criteria.

Fig. 1

Detection rate over consecutive sample ratio for different false alarm rates at 0 dB SNR

In Fig. 2, the best achievable detection rate, i. e., the detection rate for the individually best choice of β, of the different detectors is plotted over the receiver SNR for different false alarm rates. The term oracle expresses that a method has prior knowledge about the exact cycle frequencies contained in the signal. The classic method depends on this knowledge, while for the sparse recovery methods, it reduces the CA recovery to solving the overdetermined least-square problem for the given support (cf. lines 5 and 6 in Algorithm 1 or lines 6 and 7 in Algorithm 2). As expected, the oracle methods outperform the methods which have to determine the CA support themselves by a large margin. Regarding the case of missing support knowledge, the Dice algorithm clearly outperforms the SOber algorithm as well as OMP. It is to be noted that neither Fig. 1 nor Fig. 2 shows a significant performance advantage of exploiting the full knowledge of the asymptotic CA (Dice (asy)) over merely exploiting its symmetry property (Dice (sym)) for a sensible choice of β.

Fig. 2

Maximum detection rate (optimal individual consecutive sample ratio selection) over SNR for different false alarm rates

The lines in Fig. 3 show which false alarm rate according to the ideal chi-squared distribution has to be set in order to achieve 1, 3, 5, and 10% false alarm rate in the actual system. The dashed lines cross at the desired false alarm rate with β=0.15. While the two Dice methods roughly keep within a 1% offset, OMP and SOber show a decreasing degree of equivalence for an increasing false alarm rate. This indicates that using the chi-squared distribution for setting the decision threshold of the Dice algorithm is viable, which is an important observation. It means that, in contrast to many other spectrum sensing algorithms, Dice approximately possesses a desirable feature called constant false alarm rate, i. e., its test statistic is independent of system parameters like the receiver noise power.

Fig. 3

False alarm rate that has to be selected according to the chi-squared distribution to obtain different actual false alarm rates over the consecutive sample ratio

Figure 4 shows how well the support of the CA is recovered by the different methods. Since different types of communication signals feature different cycle frequencies, this information can be used for system identification. The hit rate is the chance of exactly recovering the correct support while the absolute index error is the mean recovery error in terms of CA bins. Obviously, the Dice methods have superior support recovery capabilities.

Fig. 4

Left hit rate over SNR, right absolute index error over SNR. Both at consecutive sample ratio 0.15

The final performance category we evaluate is the CA estimation quality achievable by sparse recovery methods measured by the mean squared error (MSE). In the left graph of Fig. 5, the MSE over the whole CA is plotted while the right graph shows the MSE at the spikes of the CA, i. e., the MSE at the actual cycle frequencies. To determine the error, we use the analytic expression for the asymptotic CA vector as derived in Section 4.2, i. e., the MSE is defined as

$$ \frac{\|\hat{\mathbf{R}}_{x} - \left[\mathbf{r}^{\nu_{1}}_{s,n_{s}},\ldots,\mathbf{r}^{\nu_{n_{\nu}}}_{s,n_{s}}\right]\|{~}_{F}^{2}}{N{n_{\nu}}}, $$
(34)
Fig. 5

MSE between the CAF estimation and the actual (analytic) value. Left over the whole support, right at the cycle frequencies. Both at consecutive sample ratio 0.15

where ∥·∥ F denotes the Frobenius norm. The sparse recovery method has a much lower overall MSE. This is caused by the fact that it sets all CA coefficients outside the detected support to zero, while the classical method results in a CA that features estimation noise between the spikes. Regarding the spike MSE, both methods seem to perform roughly equivalently.

7 Conclusions

Blind operation and constant false alarm rate (CFAR) are desirable characteristics of spectrum sensing algorithms. Unfortunately, cyclostationarity-based approaches typically only feature either one or the other. We showed that this can be changed by using sparse recovery methods in the CA estimation. Subsequently, we developed a way to use further prior knowledge in addition to sparsity for superior CA estimation. We derived a closed-form expression of the CA of sampled linearly modulated signals with rectangular pulse shape to be used both as prior information for the CA estimation and as a reference for comparison. Finally, we extended a well-known statistical test for cyclostationarity to accommodate sparse input. The results allow us to conclude that the proposed Dice algorithm in combination with the symmetric structure dictionary constitutes a viable alternative to the classical TDT for the case of missing prior information about the cycle frequencies contained in the signal.

8 Endnote

1 Note that although we use the vector of Nyquist rate samples \(\mathrm {x}_{t_{0}}\) to calculate \(\bar {\mathrm {x}}_{t_{0}}\), we do so for notational convenience only. In practice, one can directly obtain the sub-Nyquist samples \(\bar {\mathrm {x}}_{t_{0}}\) by means of non-uniform sampling for instance.

9 Appendix

9.1 Discrete asymptotic CA

The relation between the CA of the continuous time-domain signal s(t), and its sampled counterpart {s(n T e )} is given by ([36], Ch. 11, Sec. C, Eq. (111))

$$ \begin{array}{l} \mathcal{R'}^{{\alpha }}_{s, T_{s}}(\nu T_{e})= \sum\limits_{l=- \infty}^{\infty} R^{{\alpha + \frac{l}{T_{e}}}}_{s, T_{s}} (\nu T_{e})e^{j \pi l\nu}. \end{array} $$
(35)

The sum over l reflects the infinite aliasing caused by the sampling. In the next step, we insert (28) into (35). Also, we express quantities in terms of the sampling period T e , i. e., \(T_{s} \rightarrow n_{s} T_{e}, \alpha \rightarrow \frac {a}{ N T_{e}}, \phi \rightarrow d_{\phi } T_{e}\), with \(n_{s}, a, N \in {\mathbb {Z}}\). This leads to

$${} {R'}^{a}_{s,n_{s}}(\nu) = \left\{\begin{array}{ll} 0 & \text{for}~a\!\ne\!\frac{kN}{n_{s}}\\ \sigma^{2}_{c} \frac{n_{s} - |\nu|}{n_{s}} e^{j2\pi \frac{a}{N}d_{\phi}} \sum\limits_{l=- \infty}^{\infty} e^{j\pi l\nu} &\\ \cdot e^{j2\pi l d_{\phi}} \text{sinc}((\frac{a}{N} + l) (n_{s} - |\nu|)) & \text{otherwise,} \end{array}\right. $$
(36)

for |ν|≤n s , where n s is the oversampling factor. In this step, we used the fact that for our assumptions all aliases of the fundamental cycle frequency and its harmonics lie on top of the actual fundamental cycle frequency and its harmonics, i. e., \((\alpha + \frac {l}{T_{e}}) T_{s} \in \mathbb {Z}\) iff \(\alpha T_{s} \in \mathbb {Z}\). Inserting the discrete quantities given above, we get \((\frac {a}{N} + l) n_{s} \in {\mathbb {Z}}\) iff \(\frac {a}{N} n_{s} \in \mathbb {Z}\). Since \(n_{s} \in {\mathbb {Z}}\) and \(l \in {\mathbb {Z}}\), this always holds. To rule out any spectral leakage, we choose N as an integer multiple of n s , since then, \(a = k \frac {N}{n_{s}}\) is also an integer and thus the fundamental discrete cycle frequency and its harmonics hit center frequencies of frequency bins.

For \(a = k\frac {N}{n_{s}}\) expression (36) can be shown to be

$$ \begin{array}{l} \left. {R'}^{a}_{s,n_{s}}(\nu)\right|{~}_{a=k\frac{N}{n_{s}}}= \sigma^{2}_{c}\frac{\text{sin}(\pi \frac{a}{N}(n_{s} - |\nu|))} {\pi n_{s}} e^{j2\pi \frac{a}{N}d_{\phi}} \sum\limits_{l=- \infty}^{\infty} \frac{(-1)^{l}}{\frac{a}{N} + l}. \end{array} $$
(37)

To obtain (37) we used the definition of the sinc and exploited the facts that \(e^{j\pi k}=(-1)^{k}\) for \(k \in \mathbb {Z}\) and that \(\text{sin}(x+k \pi)=(-1)^{k}\text{sin}(x)\) for \(k \in \mathbb {Z}\). The pulse timing phase parameter d ϕ was set to \(\frac {n_{s} + 1}{2}\). This has the following reason. In order to simplify the numerical evaluation, we want to choose ϕ such that the beginning of the observed receiver signal is aligned with the rectangular pulse shapes, i. e., we would set \(\phi = \frac {T_{s}}{2}\). However, doing so would lead to the need to sample at the discontinuities caused by the instant change in amplitudes at the transition between symbols. To avoid this, we choose \(\phi = \frac {T_{s}}{2} + \epsilon \), where ε ∈ (0,T e ). Note that (37) is the same for any ε ∈ (0,T e ). In order to ease the derivation, we can thus choose \(\epsilon = \frac {T_{e}}{2}\), i. e., \(d_{\phi } = \frac {n_{s} + 1}{2}\).

The infinite series in (37) can be expressed as

$${} \begin{array}{l} \sum\limits_{l=- \infty}^{\infty} \frac{(-1)^{l}}{\frac{a}{N} + l} = \frac{N}{a} + \sum\limits_{l=1}^{\infty} \frac{(-1)^{l}}{\frac{a}{N} + l} + \frac{(-1)^{l}}{\frac{a}{N} - l} \\ = \frac{N}{a} + \frac{1}{2} \sum\limits_{l=1}^{\infty} -\frac{1}{l + \frac{a}{2N} - \frac{1}{2}} + \frac{1}{l - \frac{a}{2N} - \frac{1}{2}} - \frac{1}{l - \frac{a}{2N}} + \frac{1}{l + \frac{a}{2N}}\\ = \frac{N}{a} + \frac{1}{2} \sum\limits_{l=1}^{\infty} - \frac{l + \frac{a}{2N} - \frac{1}{2} - \frac{a}{2N} + \frac{1}{2}}{l(l + \frac{a}{2N} - \frac{1}{2})} + \frac{l - \frac{a}{2N} - \frac{1}{2} + \frac{a}{2N} + \frac{1}{2}}{l(l - \frac{a}{2N} - \frac{1}{2})}\\ \quad- \frac{l - \frac{a}{2N} + \frac{a}{2N}}{l(l - \frac{a}{2N})} + \frac{l + \frac{a}{2N} - \frac{a}{2N}}{l(l + \frac{a}{2N})} \\ = \frac{N}{a} + \frac{1}{2} \sum\limits_{l=1}^{\infty} \frac{\frac{a}{2N} - \frac{1}{2}}{l(l + \frac{a}{2N} - \frac{1}{2})} - \frac{- \frac{a}{2N} - \frac{1}{2}}{l(l - \frac{a}{2N} - \frac{1}{2})} + \frac{- \frac{a}{2N}}{l(l - \frac{a}{2N})} - \frac{\frac{a}{2N}}{l(l + \frac{a}{2N})}. \\ \end{array} $$
(38)

The digamma function, denoted by ψ(z), possesses a series expansion given by ([37], Eq. (6.3.16))

$$ \begin{array}{l} \psi(1 + z) = -\gamma + \sum\limits_{n=1}^{\infty} \frac{z}{n(n+z)}~\text{for}~z \notin \{-1,-2,-3,\dotsc\}, \end{array} $$
(39)

where γ denotes the Euler-Mascheroni constant. We can thus simplify (38) by expressing it in terms of the digamma function as

$$ \begin{array}{ll} \sum\limits_{l=- \infty}^{\infty} \frac{(-1)^{l}}{\frac{a}{N} + l} &= \frac{N}{a} + \frac{1}{2} \left(\psi\left(\frac{1}{2} + \frac{a}{2N} \right) - \psi\left(\frac{1}{2} - \frac{a}{2N} \right) \right.\\ & \left. + \psi\left(1 - \frac{a}{2N} \right) - \psi\left(1 + \frac{a}{2N} \right) \right). \end{array} $$
(40)

Since the reflection and the recurrence formulas of the digamma function are known to be ([37], Eq. (6.3.7))

$$ \begin{array}{l} \psi(1-z) - \psi(z) = \pi \text{cot}(\pi z) \end{array} $$
(41)

and ([37], Eq. (6.3.5))

$$ \begin{array}{l} \psi(1 + z) = \psi(z) + \frac{1}{z}, \end{array} $$
(42)

respectively, we obtain

$$ \begin{array}{l} \psi\left(\frac{1}{2} + \frac{a}{2N} \right) - \psi\left(\frac{1}{2} - \frac{a}{2N} \right) \\ = \psi\left(1 - \left(\frac{1}{2} - \frac{a}{2N} \right) \right) - \psi\left(\frac{1}{2} - \frac{a}{2N} \right) \\ = \pi \text{cot}\left(\pi \left(\frac{1}{2} - \frac{a}{2N}\right)\right) \end{array} $$
(43)

and

$$ \begin{array}{l} \psi\left(1 - \frac{a}{2N} \right) - \psi\left(1 + \frac{a}{2N} \right) \\ = \psi\left(1 - \frac{a}{2N} \right) - \psi\left(\frac{a}{2N} \right) - 2 \frac{N}{a}\\ = \pi \text{cot}(\pi \frac{a}{2N}) - 2 \frac{N}{a}. \end{array} $$
(44)

Inserting (43) and (44) into (40) results in

$$ \begin{array}{l} \sum\limits_{l=- \infty}^{\infty} \frac{(-1)^{l}}{\frac{a}{N} + l} = \frac{1}{2} \pi \left(\text{cot}\left(\pi \left(\frac{1}{2} - \frac{a}{2N} \right)\right) + \text{cot}\left(\pi \frac{a}{2N} \right) \right)\\ = \frac{1}{2} \pi \left(\text{tan}\left(\pi \frac{a}{2N} \right) + \text{cot}\left(\pi \frac{a}{2N} \right) \right) = \frac{\pi}{ \text{sin}\left(\pi \frac{a}{N} \right)}. \end{array} $$
(45)

Finally, substituting (45) into (37) gives us the expression (29).
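
The identity (45) is easy to verify numerically; the small check below compares a symmetric partial sum of the series with π/sin(πa/N) for an arbitrary choice of a and N.

```python
import numpy as np

def alternating_series(a, N, n_terms=200000):
    """Symmetric partial sum of the series in (45)."""
    z = a / N
    l = np.arange(1, n_terms + 1)
    # pair the +l and -l terms so the alternating series converges quickly
    return 1 / z + np.sum((-1.0) ** l * (1 / (z + l) + 1 / (z - l)))

a, N = 3, 16
print(alternating_series(a, N), np.pi / np.sin(np.pi * a / N))  # the two values should agree closely
```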

References

  1. Q Zhao, BM Sadler, A survey of dynamic spectrum access. IEEE Signal Process. Mag. 24(3), 79–89 (2007).

  2. T Yücek, H Arslan, A survey of spectrum sensing algorithms for cognitive radio applications. IEEE Commun. Surv. Tutorials. 11(1), 116–130 (2009).

  3. Y Zeng, Y-C Liang, AT Hoang, R Zhang, A review on spectrum sensing for cognitive radio: challenges and solutions. EURASIP J. Adv. Signal Process. 2010: (2010).

  4. E Axell, G Leus, EG Larsson, HV Poor, Spectrum sensing for cognitive radio: state-of-the-art and recent advances. IEEE Signal Process. Mag. 29(3), 101–116 (2012).

  5. SM Kay, Fundamentals of Statistical Signal Processing, Vol. II: Detection Theory (Prentice Hall, Upper Saddle River, 1998).

  6. H Urkowitz, Energy detection of unknown deterministic signals. Proc. IEEE. 55(4), 523–531 (1967).

  7. WA Gardner, Exploitation of spectral redundancy in cyclostationary signals. IEEE Signal Process. Mag. 8(2), 14–36 (1991).

  8. WA Gardner, Signal interception: a unifying theoretical framework for feature detection. IEEE Trans. Commun. 36(8), 897–906 (1988).

  9. AV Dandawate, GB Giannakis, Statistical tests for presence of cyclostationarity. IEEE Trans. Signal Process. 42(9), 2355–2369 (1994).

  10. WA Gardner, A Napolitano, L Paura, Cyclostationarity: half a century of research. Signal Process. 86(4), 639–697 (2006).

  11. K Kim, IA Akbar, KK Bae, J-S Um, CM Spooner, JH Reed, in New Frontiers in Dynamic Spectrum Access Networks, 2007. DySPAN 2007. 2nd IEEE International Symposium On. Cyclostationary approaches to signal detection and classification in cognitive radio (IEEE, Dublin, 2007), pp. 212–215.

  12. J Chen, A Gibson, J Zafar, in Cognitive Radio and Software Defined Radios: Technologies and Techniques, 2008 IET Seminar On. Cyclostationary spectrum detection in cognitive radios (IET, London, 2008), pp. 1–5.

  13. PD Sutton, KE Nolan, LE Doyle, Cyclostationary signatures in practical cognitive radio applications. IEEE J. Selected Areas Commun. 26(1), 13–24 (2008).

  14. CM Spooner, RB Nicholls, Spectrum sensing based on spectral correlation. Cogn. Radio Technol. 2, 593–634 (2009).

  15. Z Khalaf, A Nafkha, J Palicot, in Circuits and Systems (MWSCAS), 2011 IEEE 54th International Midwest Symposium On. Blind cyclostationary feature detector based on sparsity hypotheses for cognitive radio equipment (IEEE, Seoul, 2011), pp. 1–4.

  16. A Napolitano, Cyclostationarity: new trends and applications. Signal Process. 120, 385–408 (2016).

  17. J Lundén, V Koivunen, A Huttunen, HV Poor, Collaborative cyclostationary spectrum sensing for cognitive radio systems. IEEE Trans. Signal Process. 57(11), 4182–4195 (2009).

  18. M Derakhshani, T Le-Ngoc, M Nasiri-Kenari, Efficient cooperative cyclostationary spectrum sensing in cognitive radios at low SNR regimes. IEEE Trans. Wireless Commun. 10(11), 3754–3764 (2011).

  19. DL Donoho, Scanning the technology. Proc. IEEE. 98(6), 910–912 (2010).

  20. DL Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).

  21. EJ Candès, MB Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25(2), 21–30 (2008).

  22. JA Tropp, AC Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory. 53(12), 4655–4666 (2007).

  23. D Needell, JA Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM. 53(12), 93–100 (2010).

  24. Z Khalaf, J Palicot, in Cognitive Communication and Cooperative HetNet Coexistence. New blind free-band detectors exploiting cyclic autocorrelation function sparsity (Springer, Berlin, 2014), pp. 91–117.

  25. Z Tian, Y Tafesse, BM Sadler, Cyclic feature detection with sub-Nyquist sampling for wideband spectrum sensing. IEEE J. Selected Topics Signal Process. 6(1), 58–69 (2012).

  26. E Rebeiz, V Jain, D Cabric, in IEEE International Conference on Communications (ICC). Cyclostationary-based low complexity wideband spectrum sensing using compressive sampling (Ottawa, 2012), pp. 1619–1623.

  27. D Cohen, E Rebeiz, V Jain, YC Eldar, D Cabric, in IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). Cyclostationary feature detection from sub-Nyquist samples (San Juan, 2011), pp. 333–336.

  28. M Mishali, YC Eldar, From theory to practice: sub-Nyquist sampling of sparse wideband analog signals. IEEE J. Selected Topics Signal Process. 4(2), 375–391 (2010).

  29. G Leus, Z Tian, in Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2011 4th IEEE International Workshop On. Recovering second-order statistics from compressive measurements (IEEE, San Juan, 2011), pp. 337–340.

  30. DD Ariananda, G Leus, in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference On. Non-uniform sampling for compressive cyclic spectrum reconstruction (IEEE, Florence, 2014), pp. 41–45.

  31. D Cohen, YC Eldar, Sub-Nyquist cyclostationary detection for cognitive radio. IEEE Trans. Signal Process. 65(11), 3004–3019 (2017).

  32. A Napolitano, Generalizations of Cyclostationary Signal Processing: Spectral Analysis and Applications (John Wiley & Sons, Hoboken, 2012).

  33. DL Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59, 797–829 (2004).

  34. JA Tropp, AC Gilbert, MJ Strauss, Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Process. 86(3), 572–588 (2006).

  35. A Bollig, R Mathar, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Dictionary-based reconstruction of the cyclic autocorrelation via ℓ1-minimization for cyclostationary spectrum sensing (Vancouver, 2013).

  36. WA Gardner, Statistical Spectral Analysis: A Nonprobabilistic Theory (Prentice-Hall, New Jersey, 1986).

  37. M Abramowitz, IA Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Courier Corporation, New York, 1964).


Funding

This work was partly supported by the Deutsche Forschungsgemeinschaft (DFG) projects CoCoSa (grant MA 1184/26-1) and CLASS (grant MA 1184/23-1).

Author information

Corresponding author

Correspondence to Andreas Bollig.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Bollig, A., Lavrenko, A., Arts, M. et al. Compressive cyclostationary spectrum sensing with a constant false alarm rate. J Wireless Com Network 2017, 135 (2017). https://doi.org/10.1186/s13638-017-0920-5

Keywords