
Robust detectors for cognitive radio system

Abstract

Increasing demand for wireless technology overburdens the existing frequency spectrum because of the licensed spectrum management model. On the other hand, field studies indicate that the spectrum is often underutilized. This leads to a need to reallocate the spectrum dynamically so that unlicensed users can access spectrum not required by the primary users. Dynamic spectrum access can be achieved by cognitive radio technology, which in turn requires detection of primary user signals at the secondary user locations.

In this paper, we investigate the detection of primary user signals in an environment with impulsive noise. We propose robust detectors to replace several popular detection schemes that have been developed for the Gaussian noise case. The basis of our development is modelling the noise as consisting of two components: a Gaussian component, which has proven to be a good model for thermal noise, and a uniform component, which appears with a certain probability and models the impulsive noise. Several detectors arising from this model are proposed and analysed.

1 Introduction

Spectrum usage is regulated in every part of the world so that essential services can be provided and be protected from interference. The spectrum regulatory bodies have always allocated frequency blocks for different uses and assigned licenses for those blocks to spectrum users. This strategy has led to several successful applications, and nowadays, there is a shortage of spectrum that could be allocated to new innovative applications. At the same time, there are several field studies that indicate underutilization of the spectrum [1, 2].

Cognitive radio [3, 4] is a promising new technology that provides a way for opportunistic and efficient reuse of radio spectrum resources. The technology allows the secondary users to occupy radio spectrum at times or locations where the licensed user does not require it. The key enabler of this technology is reliable detection of spectral holes that could be used by the secondary users. In the literature, several detectors have been proposed for this purpose [5–7], requiring various amounts of information about the primary signal. The energy detector does not require any knowledge about the primary user signal; it simply detects whether there is excess energy above the noise floor. The correlation matrix-based detectors include the cyclostationary feature detector, which assumes that the primary signal exhibits cyclostationary features, and the eigenvalue ratio detector, which assumes that the primary signal correlation matrix has a structure. Finally, there is the matched filter, for which one needs to know the primary signal with great precision. The most popular of these is probably the energy detector, partly because of its simplicity and partly because of the little information it needs about the primary signal.

In this paper, we discuss single-user techniques for narrowband spectrum sensing. Individual decisions are the basis of secondary spectrum usage regardless of whether the final decision is made individually or cooperatively by the cognitive users. Cooperative spectrum exploration and exploitation are discussed in depth in [8], which addresses identification and access of underutilized spectrum in multiband and multiuser environments. Here, we investigate the decision-making process in a single node and a single frequency band in the presence of impulsive noise.

Spectrum sensing for cognitive radio has to cope with several impairments such as fading, shadowing, and the presence of noise. Usually, the noise is assumed to be white and Gaussian, but in real-life situations, this need not be the case. In particular, one has to consider the presence of impulsive noise, both man-made and natural. Man-made impulsive noise occurs most commonly due to large electrical discharges. The impulses caused by such discharges typically have short duration and may vary in strength and frequency. The sources of the impulses are, for instance, spark ignition systems of engines, electrical machinery, discharge lighting and so on. The electromagnetic pulses may have durations of 5 to 10 ns and may occur at rates of up to several hundred pulses per second [9]. For the sake of simplicity, we model the impulsive noise in this paper using a Bernoulli-uniform distribution [10]. It is straightforward to extend the results to other distributions. The required knowledge is the probability that an impulse occurs.

To cope with the impulsive noise, one needs to build some robustness [11, 12] into the detector. One way of achieving the robustness is by using a heavy-tailed distribution, such as the Laplace or alpha-stable distribution, to model the noise, as is done in [13]. This approach attempts to model both the impulses and the Gaussian noise floor with one single distribution and derives the algorithm from there. Another approach is to limit the signal in some way. For example, extreme limiting can be achieved using a sign algorithm as in [14]. Milder limiting can be attained by using M-estimators as done in [15, 16]. The exact point where the limiting needs to occur is, however, left open in the previous works. In this paper, we derive the exact location for limiting the input signal based on our composite noise model.

In this paper, we will develop robust counterparts of all three kinds of detectors discussed above. The derivation is based on modelling the impulsive component of the noise explicitly by a uniform distribution and preserving the Gaussian noise component as usual. The choice of the uniform distribution to model the impulses is motivated by the fact that we do not have any information about the amplitude of the impulsive noise other than the fact that in any practical equipment, there is a certain limit on how large the amplitude entering the device can be. This practical limit determines the endpoints of the uniform distribution. In the analysis part of the paper, we derive the formulae for the probabilities of detection, \(P_{D}\), and false alarm, \(P_{F}\), of the proposed detectors whenever this is mathematically tractable. Otherwise, we restrict ourselves to results obtained using numerical techniques.

For the sake of simplicity, we will assume throughout the paper that the signals are real valued. Extension to the complex case is straightforward. Italic, boldface lowercase and boldface uppercase letters will be used for scalars, column vectors and matrices, respectively. The superscript T denotes transposition of a matrix, the operator E[·] denotes mathematical expectation and det(·) stands for the determinant of a matrix. The operators max and min return the largest and the smallest of their arguments, respectively.

This paper is organized as follows. In Section 2, we discuss our data model and derive an approximate probability density function (PDF) for the noise. In Section 3, we introduce a robust detection scheme based on the well-known energy detector. A robust detector using full knowledge of the primary signal is proposed in Section 4. Robust correlation matrix-based (eigenvalue and cyclostationary) detectors are examined in Section 5. Our simulation results are given in Section 6. Section 7 concludes the paper.

2 Data model

We consider the problem of detecting the presence of primary users in a given frequency band [6, 17]. Suppose that at time n we have received M samples of signal x(n) and we have stacked the samples into a vector x(n). The detection problem we need to solve is [18]

$$ \begin{array}{l} H_{0}: \mathbf{x}(n) = \mathbf{v}(n) \\ H_{1}: \mathbf{x}(n) = \mathbf{s}(n)+\mathbf{v}(n), \end{array} $$
((1))

i.e. the received waveform x(n) may be noise v(n) only or it may consist of a sum of the primary user signal s(n) and noise v(n). The detector has to decide which of the hypotheses is more likely given the received waveform x(n).

We assume that the noise v(n) comprises a weighted sum of zero mean additive white Gaussian noise process and an additional impulsive noise component. The PDF of the Gaussian component is as follows:

$$ p_{g} \left(\mathbf{x} \right) = \frac{1}{\sqrt{(2\pi)^{M} \det(\mathbf{R})}} \exp \left(- \frac{\mathbf{x}(n)^{T} \mathbf{R}^{-1} \mathbf{x}(n)}{2} \right), $$
((2))

where R is the correlation matrix of the Gaussian noise and we have assumed that the noise has a zero mean.

The impulsive noise component is assumed not to be present most of the time but to appear with a certain probability c, so that the impulsive component obeys the probability density function

$$ p_{i}(\mathbf{x}) = \left\lbrace \begin{array}{cc} \frac{c}{(b-a)^{M}} + (1-c) \delta(\mathbf{x}), & a < x(n) < b, \forall n\\ 0, & \text{otherwise} \end{array} \right., $$
((3))

with 0<c<1, a and b being the lower and upper limits on the values that the impulsive noise can take, and δ(·) denoting the Dirac delta function. In practice, a and b may for instance be the smallest and largest numbers that can be represented at the output of an analogue-to-digital (A/D) converter. For the sake of simplicity, we assume that b=−a. The uniform distribution is selected because of its maximum entropy property, i.e. nothing is assumed to be known about the origin of the impulses. For instance, the impulses may arise due to failures of the A/D converter or some interference that is not well modelled by a Gaussian noise process.

The noise vector v(n) is thus according to our assumption a sum of two components

$$ \mathbf{v}(n) = \mathbf{v}_{g}(n) + \mathbf{v}_{i}(n), $$
((4))

where \(\mathbf{v}_{g}(n)\) is the Gaussian component and \(\mathbf{v}_{i}(n)\) is the impulsive noise component at time n. The noise model obtained this way is intuitively very satisfying as most of the time the noise is actually Gaussian, and in addition to that, there are relatively rare impulses present. It is believed that this model represents the actual situation rather accurately.
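For illustration, samples from this composite noise model can be generated as in the following Python sketch (our own example; function and parameter names are chosen for readability and are not part of the original derivation):

```python
import numpy as np

def composite_noise(N, sigma=1.0, c=0.01, b=100.0, rng=None):
    """Draw N samples of Gaussian noise plus Bernoulli-uniform impulses.

    Each sample is zero-mean Gaussian with standard deviation `sigma`;
    with probability `c` an impulse uniform on [-b, b] (i.e. b = -a) is added.
    """
    rng = np.random.default_rng() if rng is None else rng
    v_g = rng.normal(0.0, sigma, N)                      # Gaussian component
    impulse = rng.random(N) < c                          # impulse occurrence flags
    v_i = np.where(impulse, rng.uniform(-b, b, N), 0.0)  # impulsive component
    return v_g + v_i
```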

The noise vector v(n) is thus a sum of two independent components, and its PDF is, as such, a convolution of the individual PDFs. This can be evaluated as follows:

$$ \begin{aligned} p_{v}(\mathbf{v}) & \,=\, \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\sqrt{(2\pi)^{M} \det(\mathbf{R})}} \exp \left(\!- \frac{\mathbf{u}(n)^{T} \mathbf{R}^{-1} \mathbf{u}(n)}{2} \right) \\ & \quad \cdot \left[ \frac{c}{(b-a)^{M}} \mathbf{W}(\mathbf{v}-\mathbf{u}) + (1-c) \delta(\mathbf{v}-\mathbf{u }) \right] d\mathbf{u}, \end{aligned} $$
((5))

where W(x) is a function that equals one if all the components of the argument vector x lie between a and b and is zero otherwise. Integration goes over all the components of the vector u. For the two-dimensional case, the PDF is depicted on the left side of Fig. 1 for b=−a=10, c=0.05 and

$$\mathbf{R} = \left[ \begin{array}{cc} 1 & 0.5 \\ 0.5 & 1 \end{array} \right]. $$
Fig. 1

Exact (left) and approximate (right) PDFs in two-dimensional case. This illustrates the difference between exact and approximate PDFs in non-Gaussian case

The integral is quite complicated to evaluate and even more complicated to use in actual equipment, and we therefore approximate it with

$$ \begin{aligned} p_{av}(\mathbf{v}) = \beta \max \left[ \frac{1-c}{\sqrt{(2\pi)^{M} \det(\mathbf{R})}} \exp \left(- \frac{\mathbf{v}(n)^{T} \mathbf{R}^{-1} \mathbf{v}(n)}{2} \right), \frac{c}{(b-a)^{M}}\right], \end{aligned} $$
((6))

where the constant β is a normalization factor used to guarantee that the approximate PDF integrates to unity. In fact, we have replaced the sum in (4) with the max operator. This is intuitively reasonable, as impulsive noise is, by definition, a short pulse having a large amplitude. Hence, if a noise impulse is present, its amplitude is usually much larger than any other signal or noise component, and the error made by replacing the summation with the max operator is relatively small. If the impulsive noise is not present, we have only the Gaussian component, and the max operator again returns the correct term.

For this result to hold, we have assumed that b−a is much larger than σ and also much larger than any possible signal component in the received waveform. This is a reasonable assumption if we think of a and b as being the limits of the dynamic range that is available for the waveform. Then, the impulsive noise can take any value inside these limits, and in fact, it is distinguishable from the Gaussian noise component only if it takes on absolutely large values as compared to the rest of the waveform components. If we assume that the waveform is obtained via an analogue-to-digital converter operating in the range a<x(n)<b, we see that the Gaussian component gets limited, too. Another interpretation of replacing the summation with picking the component with the largest absolute value is that if impulses are present, they replace the original samples, as would in fact be the case for A/D converter failures.

The approximation is depicted on the right side of Fig. 1 using the same parameters as before. One can see that \(p_{v}\) and \(p_{av}\) are indeed very close to each other.
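To make the approximation concrete, the sketch below evaluates the unnormalized right-hand side of (6) for a given vector v (an illustration only; the normalization constant β is omitted and the parameter values of Fig. 1 are reused):

```python
import numpy as np

def p_av_unnormalized(v, R, c, a, b):
    """Unnormalized approximate PDF (6): the larger of the scaled Gaussian
    density and the uniform level c/(b-a)^M, and zero outside [a, b]^M."""
    v = np.asarray(v, dtype=float)
    M = v.size
    if np.any(v < a) or np.any(v > b):
        return 0.0
    gauss = (1 - c) / np.sqrt((2 * np.pi) ** M * np.linalg.det(R)) \
        * np.exp(-0.5 * v @ np.linalg.solve(R, v))
    return max(gauss, c / (b - a) ** M)

# Two-dimensional example with the parameters used for Fig. 1
R = np.array([[1.0, 0.5], [0.5, 1.0]])
print(p_av_unnormalized([0.2, -0.1], R, c=0.05, a=-10.0, b=10.0))
```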

The border between the region determined by the Gaussian component and the region determined by the uniform component is given by the following:

$$ \frac{1-c}{\sqrt{(2\pi)^{M} \det(\mathbf{R})}} \exp \left(- \frac{\mathbf{v}(n)^{T} \mathbf{R}^{-1} \mathbf{v}(n)}{2} \right) = \frac{c}{(b-a)^{M}}. $$
((7))

Taking the logarithm of both sides, the above can be rewritten as

$$ \mathbf{v}(n)^{T} \mathbf{R}^{-1} \mathbf{v}(n) = -2 \ln \left(\frac{c}{1-c} \frac{\sqrt{(2\pi)^{M} \det(\mathbf{R})}}{(b-a)^{M}} \right), $$
((8))

which is an equation of an M–dimensional ellipse.

Several algorithms discussed in this paper are based on the correlation matrix of the observed data. The data is however contaminated by the impulsive noise so the first thing to do is to obtain a robust estimate of the correlation matrix. To do so, assume that we have observed N statistically independent realizations of vector x. The likelihood function is given by the following:

$$ {\footnotesize{\begin{aligned} {}p(\mathbf{x}; \mathbf{R}) \,=\, \prod_{n=0}^{N-1} \beta \max \left[ \frac{1-c}{\sqrt{(2\pi)^{M} \det(\mathbf{R})}} \exp \left(- \frac{\mathbf{x}(n)^{T} \mathbf{R}^{-1} \mathbf{x}(n)}{2} \right), \frac{c}{(b-a)^{M}}\right]. \end{aligned}}} $$
((9))

The log-likelihood function is thus

$$ \begin{aligned} \ln p(\mathbf{x}; \mathbf{R})&= \max \left[ N \ln \frac{\beta (1-c)}{\sqrt{(2\pi)^{M}}} - \frac{N}{2} \ln \det(\mathbf{R})\right.\\ & \quad\left.-\sum_{n=0}^{N-1} \left(\frac{\mathbf{x}(n)^{T} \mathbf{R}^{-1} \mathbf{x}(n)}{2} \right), N \ln \frac{\beta c}{(b-a)^{M}}\right]. \end{aligned} $$
((10))

Computing derivative of the log-likelihood function with respect to matrix R [19] results in

$$ \begin{aligned} {}\frac{\partial \ln p(\mathbf{x}; \mathbf{R})}{\partial \mathbf{R}} &= \max \left[ \frac{1}{2}\left(\mathbf{R}^{T}\right)^{-1} \sum_{n=0}^{N-1} \left(\mathbf{x}(n) \mathbf{x}(n)^{T} \right)^{T} \left(\mathbf{R}^{T}\right)^{-1}\right.\\ &\left.\quad - \frac{N}{2} \left(\mathbf{R}^{T}\right)^{-1},0 \vphantom{\sum_{n=0}^{N-1}}\right]. \end{aligned} $$
((11))

Setting the derivative to zero and using the first expression under the max operation in (11), we obtain

$$ \hat{\mathbf{R}} = \frac{1}{N}\sum_{n=0}^{N-1} \mathbf{x}(n) \mathbf{x}(n)^{T}. $$
((12))

From the second expression, we obtain the equality 0=0. This result means that there is no information about R to gain from vectors falling outside the ellipse (8), and we can ignore them. Unfortunately, the borders of the area of interest depend on the unknown matrix R and thus we cannot use the result directly.

Instead, we can find the borders of the area of interest by projecting the ellipse onto the axes defined by the signal samples and treating specially the samples that fall outside the resulting hypercube. The length of the projection of an ellipse centred at the origin is given by [20]

$$ l = \left \|\frac{\mathbf{L}^{-1} \mathbf{u}}{\mathbf{u}^{T}\mathbf{u}} \right\|, $$
((13))

where \(\mathbf{L}\) is the lower triangular Cholesky factor [21] of \(\mathbf{R}^{-1}\) so that

$$ \mathbf{R}^{-1} = \mathbf{LL}^{T} $$
((14))

and the vector u shows the line we want to project the ellipse on.

In our case, the vector u is a vector along one of the coordinate axes and hence has one element equal to unity and the rest equal to zero. It follows from (14) that \(\mathbf{R} = \mathbf{L}^{-T} \mathbf{L}^{-1}\) and hence, the projection is determined by the Cholesky factor of the covariance matrix R. Squaring (13), we have

$$ l^{2} = \mathbf{u}^{T} \mathbf{Ru}, $$
((15))

which is an element of the main diagonal of the matrix R. The elements of the main diagonal of a Toeplitz covariance matrix are, however, all equal to the variance of the signal \(\sigma^{2}\). Using (8), we now conclude that all the ellipses are confined within a hypercube of squared side length

$$ \eta = -2 \sigma^{2} \ln \left(\frac{c}{1-c} \frac{\sqrt{(2\pi)^{M} M\sigma^{2} }}{(b-a)^{M}} \right), $$
((16))

where we have used \(\det(\mathbf{R})=M \sigma^{2}\). Hence, we find that the signal samples whose magnitude is larger than \(\sqrt {\eta }\) are due to the impulsive noise and can be eliminated to obtain a robust covariance estimate. For the two-dimensional case, the ellipses forming the borders between the Gaussian and uniform areas are illustrated in Fig. 2 for \(\sigma^{2}=1, b=-a=100, c=10^{-4}\) and different correlation coefficients ρ.

Fig. 2

The border ellipses for different correlation coefficients ρ = 0, 0.5, 0.7, 0.8 and 0.9

There are several possibilities for what to do with the samples whose magnitude is greater than \(\sqrt {\eta }\), resulting in slightly different algorithms. First, we can limit the signal at the level \(\sqrt {\eta }\). This approach will be referred to as the limiting detector. Second, we can replace the samples greater than \(\sqrt {\eta }\) with zeros. This is referred to as the nullifying detector. To distinguish between the two detectors, we introduce a variable κ which takes the value one in the case of the limiting detector and the value zero in the case of the nullifying detector. We shall investigate these options more closely in the sequel of the paper.
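As an illustration, the clipping level of (16) and the two preprocessing options can be sketched as follows (a minimal example of our own, assuming b = −a; kappa=1 gives the limiting and kappa=0 the nullifying variant):

```python
import numpy as np

def eta_threshold(sigma2, c, b, M):
    """Squared clipping level eta from (16), with b = -a."""
    return -2 * sigma2 * np.log(c / (1 - c)
                                * np.sqrt((2 * np.pi) ** M * M * sigma2)
                                / (2 * b) ** M)

def robust_preprocess(x, eta, kappa=1):
    """Limit (kappa=1) or nullify (kappa=0) samples whose magnitude exceeds
    sqrt(eta); samples within the limit are left untouched."""
    x = np.asarray(x, dtype=float)
    limit = np.sqrt(eta)
    out = x.copy()
    mask = np.abs(x) > limit
    out[mask] = kappa * limit * np.sign(x[mask])
    return out
```

A robust covariance estimate is then obtained by forming the sample covariance of the preprocessed samples.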

3 Energy detector

3.1 Derivation

In this section, we consider a detector that does not use the possible dependencies between the sequential samples and hence, our data model reduces to the M=1 dimensional model. The conditional probability density of the received waveform being noise only can be written as

$$ p(x|H_{0}) = \left\{ \begin{array}{ll} \beta_{0} \max \left(\frac{1-c}{\sqrt{2 \pi} \sigma_{n}}e^{-\frac{x^{2}}{2 {\sigma_{n}^{2}}}}, \frac{c}{b-a} \right) & a<x<b \\ 0, & \text{otherwise} \end{array} \right. $$
((17))

and the conditional probability density of the received waveform being signal plus noise as

$$ p(x|H_{1}) = \beta_{1} \max \left(\frac{1-c}{\sqrt{2 \pi ({\sigma_{n}^{2}} + {\sigma_{s}^{2}})}} e^{-\frac{x^{2}}{2 ({\sigma_{n}^{2}} + {\sigma_{s}^{2}})}}, \frac{c}{b-a} \right) $$
((18))

if a<x<b and 0, otherwise. The variables \({\sigma _{s}^{2}}\) and \({\sigma _{n}^{2}}\) denote the variances of primary user signal and noise, respectively. Let us also denote a common variance as

$${\sigma^{2}_{l}} = \left\lbrace \begin{array}{ll} {\sigma_{n}^{2}} & l = 0 \\ {\sigma_{n}^{2}} + {\sigma_{s}^{2}} & l = 1 \end{array} \right.. $$

With this notation, we can express the conditional PDFs corresponding to our two hypotheses for l=0,1 as

$$ p(x|H_{l}) = \left\{ \begin{array}{ll} \beta_{l} \max \left(\frac{1-c}{\sqrt{2 \pi} \sigma_{l}}e^{-\frac{x^{2}}{2 {\sigma_{l}^{2}}}}, \frac{c}{b-a} \right) & a<x<b \\ 0, & \text{otherwise} \end{array} \right.. $$
((19))

The normalization factors \(\beta_{l}\) can be found by solving \({\int _{a}^{b}} p(x|H_{l}) dx = 1\) for \(\beta_{l}\). This results in

$$ \beta_{l} = \left[ (1-c) \text{erf} \left(\sqrt{\frac{\eta_{l}}{2{\sigma_{l}^{2}}}} \right) + c \left(1- \frac{2 \sqrt{\eta_{l}} }{b-a} \right) \right]^{-1}, $$
((20))

where \(\text {erf}(x) = \frac {2}{\sqrt {\pi }} {\int _{0}^{x}} \exp (-t^{2}) \text {dt}\) and

$$ \eta_{l} = -2 {\sigma_{l}^{2}} \text{ln} \left(\frac{c}{1-c} \frac{\sqrt{2 \pi {\sigma_{l}^{2}} }}{b-a} \right) $$
((21))

is the intersection point of the Gaussian and uniform distributions.

We can give the PDFs of x in the interval \(a \leq x \leq b\) a more convenient form for future derivations:

$$ {\small{\begin{aligned} p(x|H_{l}) &= \beta_{l} \max \left(\frac{1-c}{\sqrt{2 \pi {\sigma_{l}^{2}} }} e^{-\frac{x^{2}}{2 {\sigma_{l}^{2}} }}, \frac{c}{b-a} \right)\\ &= \frac{\beta_{l} (1-c)}{ \sqrt{2 \pi {\sigma_{l}^{2}}} } e^{- \frac{1}{2 {\sigma_{l}^{2}}} \min \left(x^{2}, \eta_{l} \right) }. \end{aligned}}} $$
((22))

The PDFs of \(y=x^{2}\) are then \(p(y) = \frac {p(x)}{ \frac {dy}{dx} }\) and hence

$$ p(y|H_{l}) = \frac{\beta_{l} (1-c)}{ \sqrt{2 \pi y {\sigma_{l}^{2}}}} e^{-\frac{1}{2 {\sigma_{l}^{2}}} \min(y, \eta_{l})}. $$
((23))

The minimum operation in the above equation gives us the limiting nonlinearity and hence the limiting detector. To also include the nullifying detector in our common discussion, we introduce a new variable

$$ z_{E}= h(y) =\left\{ \begin{array}{cc} y, & 0 \leq y \leq \eta \\ \kappa \eta, & y > \eta \end{array} \right., $$
((24))

so that we have

$$ p(y|H_{l}) = \frac{\beta_{l} (1-c)}{ \sqrt{2 \pi y {\sigma_{l}^{2}}}} e^{-\frac{1}{2 {\sigma_{l}^{2}}} z_{E,l}}. $$
((25))

The function h(y) is a saturation nonlinearity if κ=1 and a nullifying nonlinearity if κ=0.

Suppose that we have made N observations of the variable y, collected into a vector y, and assume that the observations at different time instants are statistically independent of each other. The joint probability density function is then a product of the individual probability densities

$$ \begin{array}{ll} p(\mathbf{y} \mid H_{l}) = \prod_{n=1}^{N} p(y(n) \mid H_{l}), & l = 0,1. \end{array} $$
((26))

The likelihood ratio for the above hypothesis reads

$$ L(y) = \prod_{n=1}^{N} \frac{\beta_{1}}{\beta_{0}} \sqrt{\frac{{\sigma_{0}^{2}}}{{\sigma_{1}^{2}}} } \frac{e^{-\frac{1}{2 {\sigma_{1}^{2}}} z_{E,1}(n)}}{e^{-\frac{1}{2 {\sigma_{0}^{2}}} z_{E,0}(n)}} . $$
((27))

Taking the logarithm of both sides of (27) and simplifying, we readily obtain the log-likelihood ratio

$$ {\small{\begin{aligned} {}\ell(y) = \frac{N}{2} \ln \left(\frac{{\beta_{1}^{2}} {\sigma_{0}^{2}}}{{\beta_{0}^{2}} {\sigma_{1}^{2}}} \right) - \frac{1}{2 {\sigma_{1}^{2}}} \sum_{n=1}^{N} z_{E,1}(n) + \frac{1}{2 {\sigma_{0}^{2}}} \sum_{n=1}^{N} z_{E,0}(n). \end{aligned}}} $$
((28))

Our detector thus needs to decide in favour of \(H_{1}\) if the log-likelihood ratio is larger than a threshold. Otherwise, the hypothesis \(H_{0}\) is selected.

If there is no impulsive noise, i.e. c→0, we have

$$\begin{array}{@{}rcl@{}} {\lim}_{c\rightarrow 0} \eta_{l} & =& -2{\sigma_{l}^{2}} \ln(0) = \infty \\ {\lim}_{c\rightarrow 0} \beta_{l} & = & 1 \\ {\lim}_{c\rightarrow 0} \frac{N}{2} \ln \left(\frac{{\beta_{1}^{2}} {\sigma_{0}^{2}}}{{\beta_{0}^{2}} {\sigma_{1}^{2}}} \right) & = & \frac{N}{2} \ln \left(\frac{{\sigma_{0}^{2}}}{{\sigma_{1}^{2}}} \right) \end{array} $$

and the test reduces to an ordinary energy detector

$$ \frac{1}{N} \sum_{n=1}^{N} y(n) > \frac{{\sigma_{0}^{2}} {\sigma_{1}^{2}}}{{\sigma_{1}^{2}} - {\sigma_{0}^{2}}} \ln \left(\frac{{\sigma_{1}^{2}}}{{\sigma_{0}^{2}}} \right). $$
((29))
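For illustration, the test statistic (28) can be computed as in the following sketch (our own code, assuming known \({\sigma_{0}^{2}}\), \({\sigma_{1}^{2}}\), c and b = −a; the hypothesis \(H_{1}\) is declared when the returned value exceeds the chosen threshold):

```python
import numpy as np
from scipy.special import erf

def eta_l(sigma2, c, b):
    """Intersection point (21) of the Gaussian and uniform parts, b = -a."""
    return -2 * sigma2 * np.log(c / (1 - c) * np.sqrt(2 * np.pi * sigma2) / (2 * b))

def beta_l(sigma2, c, b):
    """Normalization constant (20)."""
    e = eta_l(sigma2, c, b)
    return 1.0 / ((1 - c) * erf(np.sqrt(e / (2 * sigma2)))
                  + c * (1 - np.sqrt(e) / b))

def robust_energy_llr(x, sigma0_2, sigma1_2, c, b, kappa=1):
    """Log-likelihood ratio (28) of the robust energy detector."""
    y = np.asarray(x, dtype=float) ** 2
    N = y.size
    z = []
    for s2 in (sigma0_2, sigma1_2):
        e = eta_l(s2, c, b)
        z.append(np.where(y <= e, y, kappa * e))   # nonlinearity (24)
    b0, b1 = beta_l(sigma0_2, c, b), beta_l(sigma1_2, c, b)
    return (N / 2 * np.log(b1 ** 2 * sigma0_2 / (b0 ** 2 * sigma1_2))
            - z[1].sum() / (2 * sigma1_2) + z[0].sum() / (2 * sigma0_2))
```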

3.2 Asymptotic analysis

In this section, we perform the asymptotic analysis of the detector in the case of large N. We first note that the detector checks whether

$$ \frac{1}{2 {\sigma_{0}^{2}}} \frac{1}{N} \sum_{n=1}^{N} z_{E,0}(n) -\frac{1}{2 {\sigma_{1}^{2}}} \frac{1}{N} \sum_{n=1}^{N} z_{E,1}(n) > \gamma, $$
((30))

where

$$\gamma = \frac{\ell(y)}{N} - \frac{1}{2} \ln \left(\frac{{\beta_{1}^{2}} {\sigma_{0}^{2}}}{{\beta_{0}^{2}} {\sigma_{1}^{2}}} \right). $$

We thus need to find a difference between weighted arithmetical means of saturated or nullified variables and compare the result to a threshold in order to perform the detection.

The probability density function of \(z_{E,k}=h(y)\) is given by [10]

$$ p_{z_{E}}(z_{E,k}) = \left. \frac{p_{y}(y)}{\frac{dz_{E,k}}{dy}} \right|_{y=h^{-1}_{i}(z_{E,k})}. $$
((31))

For the sake of simplicity, let us assume that b=−a. We need to investigate the PDFs in four different cases: the two sums in (30), k=0,1, and the two hypotheses l=0,1. Substituting (25) into (31) in these four cases, we get the following four PDFs:

$$ \begin{aligned} p(z_{E,0}|H_{0}) &= \frac{\beta_{0}(1-c)}{\sqrt{2 \pi z_{E,0} {\sigma_{0}^{2}}}} e^{-\frac{z_{E,0}}{2{\sigma_{0}^{2}}}} \Pi (0, \eta_{0})\\ &\quad+ c\beta_{0} \left(1 - \frac{\sqrt{\eta_{0}}}{b} \right) \delta(z_{E,0}-\kappa \eta_{0}) \end{aligned} $$
((32))

if l=0 and k=0,

$$ {\small{\begin{aligned} {}p(z_{E,1}|H_{0})& = \frac{\beta_{0}(1-c)}{\sqrt{2 \pi z_{E,1} {\sigma_{0}^{2}}}} e^{-\frac{z_{E,1}}{2{\sigma_{0}^{2}}}} \Pi (0, \eta_{0}) + \frac{c\beta_{0} }{2 b \sqrt{z_{E,1}}} \Pi(\eta_{0}, \eta_{1}) \\ &\quad+ c\beta_{0} \left(1 - \frac{\sqrt{\eta_{1}}}{b} \right) \delta(z_{E,1}-\kappa \eta_{1}) \end{aligned}}} $$
((33))

if l=0 and k=1,

$$ \begin{aligned} p(z_{E,0}|H_{1})&= \frac{\beta_{1} (1-c)}{\sqrt{2 \pi z_{E,0} {\sigma_{1}^{2}}}} e^{-\frac{z_{E,0}}{2{\sigma_{1}^{2}}}} \Pi (0, \eta_{0})\\ &\quad+ \left[ \beta_{1} (1-c) \left(\text{erf} \sqrt{\frac{\eta_{1}}{2 {\sigma_{1}^{2}}}} - \text{erf} \sqrt{\frac{\eta_{0}}{2 {\sigma_{1}^{2}}} }\right)\right.\\ &\left.\quad+ \beta_{1} c \left(1- \frac{\sqrt{\eta_{1}}} {b} \right)\vphantom{\left(\text{erf} \sqrt{\frac{\eta_{1}}{2 {\sigma_{1}^{2}}}} - \text{erf} \sqrt{\frac{\eta_{0}}{2 {\sigma_{1}^{2}}} }\right)}\right] \delta(z_{E,0}- \kappa \eta_{0}) \end{aligned} $$
((34))

if l=1 and k=0 and

$$ \begin{aligned} p(z_{E,1}|H_{1}) &= \frac{\beta_{1} (1-c)}{\sqrt{2 \pi z_{E,1} {\sigma_{1}^{2}}}} e^{-\frac{z_{E,1}}{2{\sigma_{1}^{2}}}} \Pi (0, \eta_{1})\\ &\quad + \frac{\beta_{1} c(b -\sqrt{\eta_{1}})}{b} \delta(z_{E,1}-\kappa \eta_{1}) \end{aligned} $$
((35))

if l=1 and k=1. The function Π(c,d) equals one between c and d and is zero otherwise. The cases are illustrated in Fig. 3. The main body of the PDFs is identical for the limiting and nullifying detectors. The delta impulses that distinguish the two are plotted with a dashed line for the limiting detector and with a dash-dotted line for the nullifying detector.

Fig. 3

Probability density functions of the four cases. This shows the PDFs of four different conditions in robust energy detection

Combining the results, we can reach a common expression covering all the cases as

$$ {\small{\begin{aligned} {}p(z_{E,k}|H_{l}) &= \frac{\beta_{l}(1-c)}{\sqrt{2\pi {\sigma_{l}^{2}} z_{E,k} }} e^{-\frac{z_{E,k}}{2 {\sigma_{l}^{2}}}} \Pi(0,\eta_{m_{1}}) \\ &\quad+ m_{2} \frac{\beta_{0} c}{2 b \sqrt{z_{E,k}}} \Pi(\eta_{0},\eta_{1}) + \delta(z_{E,k}- \kappa \eta_{k}) \theta_{k,l}, \end{aligned}}} $$
((36))

where \(\theta _{k,l} = \beta _{l} (1-c) m_{3} \left [ \text {erf} \left (\sqrt {\frac {\eta _{1}}{2 {\sigma _{1}^{2}}}} \right) - \text {erf} \left (\sqrt {\frac {\eta _{0}}{2 {\sigma _{1}^{2}}}} \right) \right ] + \beta _{l} c \left (1 - \frac {\sqrt {\eta _{m_{4}}} }{b} \right)\); here \(m_{1} = 1\) if l=1 and k=1 and is zero otherwise; \(m_{2}=1\) if l=0 and k=1 and is zero otherwise; \(m_{3}=1\) if l=1 and k=0 and is zero otherwise; and \(m_{4}=0\) if l=0 and k=0 and is one otherwise.

This distribution has mean

$$ \begin{aligned} E\left[z_{E,k}|H_{l}\right] &= \beta_{l} (1-c) \left[ {\sigma_{l}^{2}} \text{erf} \left(\sqrt{\frac{\eta_{m_{1}}}{2 {\sigma_{l}^{2}}}} \right) - \sqrt{\frac{2{\sigma_{l}^{2}} \eta_{m_{1}}}{\pi}} e^{-\frac{\eta_{m_{1}}}{2 {\sigma_{l}^{2}}}} \right] \\ &\quad+ \frac{\beta_{0} c}{3 b} \left(\eta_{1}^{\frac{3}{2}} - \eta_{0}^{\frac{3}{2}}\right)m_{2} + \eta_{k} \kappa \theta_{k,l} \end{aligned} $$
((37))

and second moment

$$\begin{aligned} E\left[z_{E,k}^{2}|H_{l}\right] &= \beta_{l} (1\,-\,c)\! \left[ \!3 {\sigma_{l}^{4}} \text{erf} \left(\!\sqrt{\frac{\eta_{m_{1}}}{2 {\sigma_{l}^{2}}}} \right) \,-\, \sqrt{\frac{2{\sigma_{l}^{2}} \eta_{m_{1}}}{\pi}} e^{-\frac{\eta_{m_{1}}}{2 {\sigma_{l}^{2}}}} \left(\eta_{m_{1}} \,+\, 3 {\sigma_{l}^{2}}\right) \right] \\ &\quad+ \frac{\beta_{0} c}{5 b} \left(\eta_{1}^{\frac{5}{2}} - \eta_{0}^{\frac{5}{2}}\right)m_{2} + (\kappa \eta_{k})^{2} \theta_{k,l}. \end{aligned} $$
((38))

The cross-correlation between \(z_{E,0}\) and \(z_{E,1}\) is perfect if \(z_{E,1}<\eta_{0}\), and in this case, \(E[z_{E,0}z_{E,1}|H_{l}] = E[z_{E,0}^{2}|H_{l}]\). This happens with probability

$$ \begin{aligned} P(z_{E,1} < \eta_{0}) &= \int_{0}^{\eta_{0}} p_{z_{E}}(z_{E,1}|H_{l}) dz_{E,1}\\ &= \beta_{l} (1-c) \text{erf} \left(\sqrt{\frac{\eta_{0}}{2 {\sigma_{l}^{2}}}} \right). \end{aligned} $$
((39))

If \(z_{E,1}>\eta_{0}\) and we consider the limiting detector, i.e. κ=1, we have \(z_{E,0}=\eta_{0}\) and hence \(E[z_{E,0}z_{E,1}] = \eta _{0} E_{z_{E,1}>\eta _{0}}[z_{E,1}]\), where \(E_{z_{E,1}>\eta _{0}}[z_{E,1}]\) is the mean of \(z_{E,1}\) above \(\eta_{0}\). In the case of the nullifying detector, where κ=0, if \(z_{E,1}>\eta_{0}\) we have \(z_{E,0}=0\) and consequently \(E[z_{E,0} z_{E,1}]=0\). The event \(z_{E,1}>\eta_{0}\) occurs with probability \(1-P(z_{E,1}<\eta_{0})\) and the cross-correlation is therefore

$$\begin{aligned} E\left[z_{E,0}z_{E,1}|H_{l}\right] &= P(z_{E,1} < \eta_{0}) E\left[z_{E,0}^{2}|H_{l}\right]\\ &\quad+ \left[1 \,-\, P(z_{E,1} < \eta_{0})\right] \kappa \eta_{0} E_{z_{E,1}> \eta_{0}}\left[z_{E,1}|H_{l}\right]. \end{aligned} $$
((40))

Examining (30), we see that to proceed we need the moments of the variable

$$ w_{E} = \frac{1}{2 {\sigma_{0}^{2}}} z_{E,0} -\frac{1}{2 {\sigma_{1}^{2}}} z_{E,1}. $$
((41))

The mean of w E is

$$ E\left[w_{E}|H_{l}\right] = \frac{E[\!z_{E,0}|H_{l}]}{2 {\sigma_{0}^{2}}} - \frac{E[\!z_{E,1}|H_{l}] }{2 {\sigma_{1}^{2}} } $$
((42))

and its second moment equals

$${} E\left[{w_{E}^{2}}|H_{l}\right] = \frac{E\left[z_{E,0}^{2}|H_{l}\right]}{4 {\sigma_{0}^{4}}} -\frac{2 E\left[z_{E,0} z_{E,1}|H_{l}\right] }{4{\sigma_{0}^{2}} {\sigma_{1}^{2}}} + \frac{E\left[z_{E,1}^{2}|H_{l}\right] }{4 {\sigma_{1}^{4}}}. $$
((43))

The variance is equal to

$$ \sigma^{2}_{E,H_{l}} = E\left[{w_{E}^{2}}|H_{l}\right] - E^{2}[\!w_{E}|H_{l}]. $$
((44))

Let us now note that according to (30), the detector computes a sample average of N independent and identically distributed (i.i.d.) random variables \(w_{E}\). According to the central limit theorem [10], the distribution of such a sum approaches a Gaussian with mean \(E[w_{E}|H_{l}]\) and variance \(\frac {\sigma ^{2}_{E,H_{l}}}{N}, l=0,1\), when N increases, independently of the shape of the original distribution of the variables \(w_{E}\). We can therefore, for large N, evaluate the probability of correct detection as

$$ \begin{aligned} P_{D} &= \int_{\gamma}^{\infty} p_{w_{E}}(w_{E} \mid H_{1}) dw_{E}\\ & = \frac{1}{2} \text{erfc} \left(\frac{(\gamma - E[\!w_{E} \mid H_{1}]) \sqrt{N}}{\sqrt{2} \sigma_{E,H_{1}}} \right). \end{aligned} $$
((45))

The probability of false alarm is correspondingly

$$ \begin{aligned} P_{F} &= \int_{\gamma}^{\infty} p_{w_{E}}(w_{E} \mid H_{0}) dw_{E}\\ &= \frac{1}{2} \text{erfc} \left(\frac{(\gamma - E[w_{E} \mid H_{0}]) \sqrt{N}}{\sqrt{2} \sigma_{E,H_{0}}} \right). \end{aligned} $$
((46))

The threshold γ and the number of samples N that are required to reach given \(P_{F}\) and \(P_{D}\) can be found by solving the system of equations formed by (45) and (46)

$$ \left\{ \begin{array}{ll} \sqrt{2} \sigma_{E,H_{0}} \text{erfc}^{-1}(2P_{F}) = \left[\gamma - E(w_{E} |H_{0})\right] \sqrt{N} \\ \sqrt{2} \sigma_{E,H_{1}} \text{erfc}^{-1}(2P_{D}) = \left[\gamma - E(w_{E}|H_{1})\right] \sqrt{N}\end{array}. \right. $$
((47))

Solving the system for N and γ, we obtain that in order to reach the operating point \((P_{F},P_{D})\), we need

$$ \left\{ \begin{array}{ll} N = 2\left[ \frac{\sigma_{E,H_{1}} \text{erfc}^{-1}(2P_{D})-\sigma_{E,H_{0}} \text{erfc}^{-1}(2P_{F})}{E(w_{E} |H_{0})- E(w_{E} |H_{1})} \right]^{2} \\ \gamma = \frac{\sigma_{E,H_{1}} \text{erfc}^{-1}(2P_{D})E(w_{E}|H_{0})-\sigma_{E,H_{0}} \text{erfc}^{-1}(2P_{F})E(w_{E} |H_{1})}{\sigma_{E,H_{1}} \text{erfc}^{-1}(2P_{D})-\sigma_{E,H_{0}} \text{erfc}^{-1}(2P_{F})} \end{array}. \right. $$
((48))
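As an illustration, given the moments \(E[w_{E}|H_{l}]\) and standard deviations \(\sigma_{E,H_{l}}\) evaluated from (37)–(44), the operating-point formulae can be computed as in the following sketch (our own code):

```python
import numpy as np
from scipy.special import erfcinv

def operating_point(PF, PD, E0, E1, s0, s1):
    """Number of samples N and threshold gamma needed to reach (PF, PD),
    following (48). E0, E1 are E[w_E|H0] and E[w_E|H1]; s0, s1 are the
    corresponding standard deviations sigma_{E,H0} and sigma_{E,H1}."""
    qF = erfcinv(2 * PF)
    qD = erfcinv(2 * PD)
    N = 2 * ((s1 * qD - s0 * qF) / (E0 - E1)) ** 2
    gamma = (s1 * qD * E0 - s0 * qF * E1) / (s1 * qD - s0 * qF)
    return int(np.ceil(N)), gamma
```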

It should be noted that the energy detector assumes that the Gaussian noise level is known. If there is some uncertainty, the performance of the energy detector will deteriorate [22].

4 Coherent detection

As in the previous section, we are dealing here with a one-dimensional problem, but in this case, we assume that the primary user signal s(n) is known. Using this knowledge, we are going to build a detector that is able to detect the primary signal in noise.

4.1 Derivation

The conditional probability density of the received waveform being noise only is given by (17) and the conditional probability density of the received waveform being signal plus noise as

$$ {}p(x|H_{1}) = \left\{ \begin{array}{ll} \beta \max \left(\frac{1-c}{\sqrt{2 \pi} \sigma}e^{-\frac{(x-s)^{2}}{2 \sigma^{2}}}, \frac{c}{b-a} \right), & a<x<b\\ 0, & \text{otherwise} \end{array} \right.. $$
((49))

With this approximation, the signal to be detected appears as the mean value of the Gaussian process while the impulsive noise component is not affected by the presence or absence of the signal.

The factor β is used in the above equations to scale p so that it satisfies the requirements of a probability density function; it can be found by solving

$$ {\int_{a}^{b}} p(x|H_{0}) dx = 1 $$
((50))

for β. Note, however, that the particular value of β does not affect the resulting detector, and we do not therefore pursue the issue any further.

Instead, we proceed by simplifying the expressions for the probability densities \(p(x|H_{0})\) and \(p(x|H_{1})\). As the two differ just by the mean value of the Gaussian process, we concentrate only on \(p(x|H_{1})\) for the moment. An expression for \(p(x|H_{0})\) will follow by similar calculations. For \(p(x|H_{1})\), we have

$$\begin{array}{@{}rcl@{}} p(x|H_{1}) & = &\beta \max \left(\frac{1-c}{\sqrt{2 \pi} \sigma } e^{-\frac{(x-s)^{2}}{2 \sigma^{2}}}, \frac{c}{b-a} \right) \\ & = & \frac{\beta (1-c) }{ \sqrt{2 \pi} \sigma } \max \left[ e^{-\frac{(x-s)^{2}}{2 \sigma^{2}}}, e^{\ln{\left(\frac{c}{1-c} \frac{\sqrt{2 \pi} \sigma}{b-a} \right) }} \right] \\ & = & \frac{\beta (1-c)}{ \sqrt{2 \pi} \sigma } e^{- \frac{1}{2 \sigma^{2}} \min \left((x-s)^{2}, -2\sigma^{2} \ln{\left(\frac{c}{1-c} \frac{\sqrt{2 \pi} \sigma}{b-a} \right)} \right)}. \end{array} $$
((51))

With this result and assuming that we have received N samples of waveform x(n) that are statistically independent of each other, we can now design the likelihood ratio test as follows. The log-likelihood ratio can be written as

$$ {\small{\begin{aligned} {}\ln{\Lambda} & = \ln{\frac{\prod_{n=1}^{N} p(x|H_{1})}{\prod_{n=1}^{N} p(x|H_{0})}} \\ & = \!- \frac{1}{2 \sigma^{2}} \sum_{n=1}^{N} \min \left(\!(x(n)\,-\,s(n))^{2}, -2\sigma^{2} \ln\!{\left(\!\frac{c}{1\,-\,c} \frac{\sqrt{2 \pi} \sigma}{b\,-\,a} \right)}\! \right) \\ & \quad+ \frac{1}{2 \sigma^{2}} \sum_{n=1}^{N} \min \left(x(n)^{2}, -2\sigma^{2} \ln{\left(\frac{c}{1-c} \frac{\sqrt{2 \pi} \sigma}{b-a} \right)} \right). \end{aligned}}} $$
((52))

The hypothesis \(H_{1}\) is selected if the log-likelihood ratio is greater than a threshold and the hypothesis \(H_{0}\) otherwise. Cancellation of the common terms in the above equation results in the limiting detector.

Select \(H_{1}\) if

$$ \sum_{n=1}^{N} \min \left((x(n)-s(n))^{2},\eta \right) - \sum_{n=1}^{N} \min \left(x(n)^{2},\eta \right) > \gamma^{\prime}, $$
((53))

and \(H_{0}\) otherwise. In the above, η is given by (16), and \(\gamma^{\prime}\) is the threshold selected in accordance with the a priori probabilities and the costs given to the different possible events [23].

The structure of the resulting robust limiting detector is shown in Fig. 4. The corresponding nullifying detector is obtained by setting the samples that are absolutely larger than \(\sqrt {\eta }\) to zero.

Fig. 4

Structure of the proposed robust detector
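As an illustration (our own sketch, not the authors' implementation), the log-likelihood ratio (52) of the limiting variant can be computed as follows; the clipping level used is the intersection point corresponding to (21):

```python
import numpy as np

def robust_coherent_llr(x, s, sigma2, c, b):
    """Log-likelihood ratio (52) of the robust coherent detector with the
    limiting nonlinearity; H1 is favoured for large values."""
    x = np.asarray(x, dtype=float)
    s = np.asarray(s, dtype=float)
    # clipping level: intersection of the Gaussian and uniform parts, b = -a
    eta = -2 * sigma2 * np.log(c / (1 - c)
                               * np.sqrt(2 * np.pi * sigma2) / (2 * b))
    term_h1 = np.minimum((x - s) ** 2, eta).sum()
    term_h0 = np.minimum(x ** 2, eta).sum()
    return (term_h0 - term_h1) / (2 * sigma2)
```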

4.2 Asymptotic analysis

As derived in the previous subsection, the detector needs to compute two arithmetic means and compare their difference to a threshold

$$ {}\frac{1}{N} \sum_{n=1}^{N} \min \left((x(n)-s(n))^{2},\eta \right) - \frac{1}{N} \sum_{n=1}^{N} \min \left(x(n)^{2},\eta \right) > \gamma. $$
((54))

To perform our analysis, we assume that the signal s is small so that the intersection points of Gaussian and uniform distributions η computed for the two sums in the above equation are close to each other.

Again, we have two sums k=0,1 and two hypotheses l=0,1 to consider. Let us again denote the variables under the summations by \(z_{C}\), so that \(z_{C,0}= \min((x(n)-s(n))^{2},\eta)\) and \(z_{C,1}= \min(x(n)^{2},\eta)\). In the case k=0 and l=0, the received signal comprises noise only, and the PDF for both the limiting and the nullifying detector is hence

$$ {\small{\begin{aligned} p(z_{C,0}|H_{0})& = \frac{\beta (1-c)}{2\sqrt{2 \pi z_{C,0}} \sigma} e^{-\frac{z_{C,0}}{2 \sigma^{2}}} \Pi(0, \eta)\\ &\quad+ (1-\frac{\sqrt{\eta}}{b}) c \beta \delta(z_{C,0} - \kappa \eta). \end{aligned}}} $$
((55))

Likewise, in the case k=1 and l=0, we have

$$ \begin{aligned} {}p(z_{C,1}|H_{0}) &= \frac{\beta (1-c)}{2\sqrt{2 \pi z_{C,1}} \sigma} e^{-\frac{z_{C,1}}{2 \sigma^{2}}} \Pi(0, \eta) \\ &\quad+ (1-\frac{\sqrt{\eta}}{b}) c \beta \delta(z_{C,1} - \kappa \eta). \end{aligned} $$
((56))

In the case of \(H_{1}\), the received signal x includes the signal component s in addition to the noise. It turns out that the problem is symmetric with

$$ p(z_{C,0}|H_{1}) = p(z_{C,1}|H_{0}) $$
((57))

and

$$ p(z_{C,1}|H_{1}) = p(z_{C,0}|H_{0}). $$
((58))

Next, we need to find the moments of the distributions. The mean in case of k=0 and l=0 equals

$$ {\small{\begin{aligned} {}E[z_{C,0}|H_{0}] & = \frac{\beta (1-c)}{2\sqrt{2 \pi} \sigma} \left\lbrace \sqrt{2 \pi} \sigma (s^{2}+ \sigma^{2}) \left[ \text{erf} \frac{\sqrt{\eta} - s}{\sqrt{2} \sigma}\right.\right.\\ & \left.\quad+ \text{erf} \frac{\sqrt{\eta} + s}{\sqrt{2} \sigma} \right] - 2 \sigma^{2} \left[ (\sqrt{\eta} + s) e^{-\frac{(\sqrt{\eta} - s)^{2}}{2 \sigma^{2}}} \right.\\ &\left.\left.\quad+ (\sqrt{\eta} - s) e^{-\frac{(\sqrt{\eta} + s)^{2}}{2 \sigma^{2}}} \right] \right\rbrace + \kappa \eta \left(1 - \frac{\sqrt{\eta}}{b} \right) c \beta \end{aligned}}} $$
((59))

and the second moment equals

$$\begin{aligned} E\left[z_{C,0}^2|H_{0}\right]\! &=\! \frac{\beta (1-c)}{2\sqrt{2 \pi} \sigma} \left\lbrace \sqrt{2 \pi} \sigma (s^4 + 6 s^2 \sigma^2 +3 \sigma^4)\left[ \text{erf} \frac{\sqrt{\eta} - s}{\sqrt{2} \sigma}\right.\right. \\ &\quad \left.\left.+ \text{erf} \frac{\sqrt{\eta} + s}{\sqrt{2} \sigma} \!\right]\,+\, 2 \sigma^2 e^{-\frac{(\sqrt{\eta} + s)^{2}}{2 \sigma^{2}} }\! \left[\!-\eta \sqrt{\eta} \,-\, \sqrt{\eta}\left(s^{2}\! +\! 3\sigma^{2}\right) \right.\right. \\ &\quad \left. + \eta s +s^3 +5s \sigma^2- 2 \sigma^2 e^{-\frac{(\sqrt{\eta} - s)^{2}}{2 \sigma^{2}} } \left[ \eta \sqrt{\eta} + \sqrt{\eta} \right.\right. \\ &\quad \left.\left. \times(s^2 + 3\sigma^2) + \eta s +s^3 +5s \sigma^2 \right] \right\rbrace \\ &\quad + (\kappa \eta)^2 \left(1 - \frac{\sqrt{\eta}}{b} \right) c \beta. \end{aligned} $$

In case of k=1 and l=0, we have

$$ \begin{aligned} E[\!z_{C,1}|H_{0}] &\,=\, \beta (1\,-\,c) \sigma^{2} \text{erf} \left(\frac{\sqrt{\eta}}{\sqrt{2}\sigma}\right) \,-\, \frac{2 \beta (1\,-\,c) }{\sqrt{2 \pi}} \sigma \sqrt{\eta} e^{-\frac{\eta}{2\sigma^{2}} } \\ &\quad + \kappa \eta \left(1 - \frac{\sqrt{\eta}}{b} \right) c \beta \end{aligned} $$
((60))

and

$$\begin{aligned} E\left[z_{C,1}^{2}|H_{0}\right] &= 3 \beta (1-c) \sigma^{4} \text{erf} \left(\frac{\sqrt{\eta}}{\sqrt{2}\sigma}\right)\\ &\quad - \frac{2 \beta (1-c) }{\sqrt{2 \pi}} \sqrt{\eta} \sigma e^{-\frac{\eta}{2\sigma^{2}} } \left(\eta + 3 \sigma^{2}\right) \\ &\quad+ (\kappa \eta)^{2} \left(1 - \frac{\sqrt{\eta}}{b} \right) c \beta. \end{aligned} $$

Naturally, it holds that \(E\left [z_{C,0}|H_{1}\right ] = E\left [z_{C,1}|H_{0}\right ],E\left [z_{C,1}|H_{1}\right ] = E\left [z_{C,0}|H_{0}\right ],E\left [z_{C,0}^{2}|H_{1}\right ] = E\left [z_{C,1}^{2}|H_{0}\right ]\) and \(E\left [z_{C,1}^{2}|H_{1}\right ] = E\left [z_{C,0}^{2}|H_{0}\right ].\) The noise is the same in both sums, while the signal s only appears in one of them. As we have assumed the signal and noise to be independent, the cross-correlation is

$$ E\left[z_{C,0} z_{C,1}\right] = E\left[z_{C,1}^{2}|H_{0}\right] $$
((61))

no matter what hypothesis we are interested in.

To proceed, we need the moments of the variable

$$ w_{C} = z_{C,0} - z_{C,1}. $$
((62))

The mean of w C is

$$ E\left[w_{C}|H_{l}\right] = E\left[z_{C,0}|H_{l}\right] - E\left[z_{C,1}|H_{l}\right] $$
((63))

the second moment is

$$ {}E\left[{w_{C}^{2}}|H_{l}\right] = E\left[z_{C,0}^{2}|H_{l}\right] - 2 E\left[z_{C,0} z_{C,1}|H_{l}\right] + E\left[z_{C,1}^{2}|H_{l}\right]. $$
((64))

The variance equals

$$ \sigma^{2}_{C,H_{l}} = E\left[{w_{C}^{2}}|H_{l}\right] - E^{2}[\!w_{C}|H_{l}]. $$
((65))

We see that in the case of coherent detection, the terms involving κ cancel from the expressions for the moments of \(w_{C}\), and consequently, the nullifying and limiting detectors behave identically in this case.

Let us now note that according to (54), the detector computes a sample average of N i.i.d. random variables \(w_{C}\). According to the central limit theorem [10], the distribution of such a sum approaches a Gaussian with mean \(E[w_{C}|H_{l}]\) and variance \(\frac {\sigma ^{2}_{C,H_{l}}}{N}, l=0,1\), when N increases, independently of the shape of the original distribution of the variables \(w_{C}\). We can therefore, for large N, evaluate the probability of correct detection as

$$ \begin{aligned} P_{D} &= \int_{\gamma}^{\infty} p_{w_{C}}(w_{C} \mid H_{1}) dw_{C}\\ &= \frac{1}{2} \text{erfc} \left(\frac{(\gamma - E[\!w_{C} \mid H_{1}]) \sqrt{N}}{\sqrt{2} \sigma_{C,H_{1}}} \right). \end{aligned} $$
((66))

The probability of false alarm is correspondingly

$$ \begin{aligned} P_{F} &= \int_{\gamma}^{\infty} p_{w_{C}}(w_{C} \mid H_{0}) dw_{C}\\ &= \frac{1}{2} \text{erfc} \left(\frac{(\gamma - E[\!w_{C} \mid H_{0}]) \sqrt{N}}{\sqrt{2} \sigma_{C,H_{0}}} \right). \end{aligned} $$
((67))

5 Robust detectors based on correlation matrix

In this section, we consider detectors that are based on the autocorrelation of the received signal, such as the eigenvalue ratio test and the cyclostationary feature detector.

5.1 Eigenvalue ratio detector

It was recently proposed to use the eigenvalues of the correlation matrix of the received signal for detection purposes [4, 24]. The method consists of computing the sample covariance matrix which is given by

$$ \hat{\mathbf{R}}_{x} = \frac{1}{K} \sum_{k=0}^{K-1} \mathbf{x}(k) \mathbf{x}^{H}(k), $$
((68))

where K is the number of collected sample vectors of the received signal. Then, computing the maximum and minimum eigenvalues \(\lambda_{\max}\) and \(\lambda_{\min}\) of this matrix, it is decided that the signal is present if \(T(\mathbf{x})=\lambda_{\max}/\lambda_{\min}>\gamma\), where γ is the threshold of the test.
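A minimal sketch of this test (our own illustration) is:

```python
import numpy as np

def eigenvalue_ratio_test(X, gamma):
    """Eigenvalue ratio detector: X is an M x K matrix whose columns are the
    received vectors x(k); decide 'signal present' when
    lambda_max / lambda_min exceeds the threshold gamma."""
    K = X.shape[1]
    R_hat = (X @ X.conj().T) / K          # sample covariance matrix (68)
    eigvals = np.linalg.eigvalsh(R_hat)   # real eigenvalues, ascending order
    return eigvals[-1] / eigvals[0] > gamma
```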

It is demonstrated in [24] that, based upon the theory of random matrices [25], one can obtain approximate expressions for the probability of false alarm and the probability of detection of this detector if the noise is Gaussian. For non-Gaussian noise, the analysis is considered to be mathematically intractable.

5.2 Cyclostationary detector

Oversampled modulated signals exhibit cyclostationarity due to hidden periodicities caused by the cyclic prefix, pilot carriers, sampling, multiplexing, coding, modulation, etc. This means that some of the statistics of the received signal, in particular the mean and the autocorrelation function, exhibit periodicity if the primary signal is present. This property can be used to distinguish the signal from noise, which is stationary in nature, even in low signal-to-noise ratio (SNR) regimes [6, 26]. The detector computes the spectral correlation function

$$ {S}_{xx}^{\alpha}(f)= \sum_{k=-\infty}^{\infty} {r}_{xx}^{\alpha}(k) e^{-j2\pi f k}, $$
((69))

where

$$ {r}_{xx}^{\alpha}(k)= E\left[ {x}(n) {x}^{*}(n+k)e^{-j2\pi\alpha n}\right], $$
((70))

and α is the cyclic frequency. The cyclostationary detector then evaluates

$$ \left\|{S}_{xx}^{\alpha}(f)\right\|^{2} > \gamma. $$
((71))

In this paper, the results are based on the single-cycle detector [6]. It should be noted that the cyclostationary detector assumes knowledge of the cyclic frequency α. The analysis of the test is again not mathematically tractable; however, it is known [27] that an approximate expression for the probability of false alarm in the Gaussian case is given by

$$ P_{f}\approx \exp\left[ -\frac{(2N+1)\gamma^{2}}{2\sigma^{4}}\right]. $$
((72))
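For illustration, a single-cycle detection statistic built from (69)–(71) can be estimated roughly as in the following sketch (our own code; the lag range and normalization are our assumptions):

```python
import numpy as np

def single_cycle_statistic(x, alpha, max_lag=32):
    """Estimate the cyclic autocorrelation (70) by a sample average over n,
    Fourier transform it as in (69) and return the squared norm used in (71)."""
    x = np.asarray(x)
    N = x.size
    n = np.arange(N - max_lag)
    rotation = np.exp(-2j * np.pi * alpha * n)
    r = np.array([np.mean(x[n] * np.conj(x[n + k]) * rotation)
                  for k in range(max_lag)])
    S = np.fft.fft(r)                      # spectral correlation estimate
    return np.sum(np.abs(S) ** 2)          # compared against a threshold
```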

5.3 Robust detection

Robust counterparts of the correlation matrix-based detectors are obtained by substituting into (68) and (70) a robust correlation matrix estimate formed as discussed in Section 2 around (12): the received samples are first passed through the nonlinearity

$$ x_{r}(n) = \left\{ \begin{array}{ll} x(n),& \text{if}\ |x(n)| \leq \sqrt{\eta} \\ \kappa \sqrt{\eta}\, \operatorname{sign}(x(n)),& \text{otherwise} \end{array} \right. $$
((73))

and the correlation estimates in (68) and (70) are then computed from the processed samples \(x_{r}(n)\), with κ=1 for the limiting and κ=0 for the nullifying variant.

6 Simulation results

6.1 Energy detector

In Fig. 5, we plot the receiver operating characteristic (ROC), i.e. the probability of detection as a function of the probability of false alarm, of the nullifying and limiting energy detectors with parameters \({\sigma _{n}^{2}} = 1,{\sigma _{s}^{2}} = 2, c = 0.001, N = 30\) and b=−a=100. The data signal used in this simulation is a white Gaussian random process, so the signal to ambient Gaussian noise ratio is 3 dB. We see that the difference between the two is relatively small, with the limiting detector being superior. We therefore use only the limiting detector in the following energy detector simulation examples.

Fig. 5

ROCs of limiting and nullifying energy detectors. This illustrates the difference in the ROCs of two proposed robust energy detectors
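A Monte-Carlo experiment of this kind can be sketched as follows (our own illustrative code; it reuses the robust_energy_llr sketch from Section 3.1, and the parameter values follow the text above):

```python
import numpy as np

def simulate_roc(trials=10000, N=30, sigma_n2=1.0, sigma_s2=2.0,
                 c=0.001, b=100.0, kappa=1, rng=None):
    """Monte-Carlo ROC of the robust energy detector under the composite
    noise model (Gaussian noise plus Bernoulli-uniform impulses)."""
    rng = np.random.default_rng() if rng is None else rng
    stats = {0: [], 1: []}
    for h in (0, 1):
        for _ in range(trials):
            v = rng.normal(0.0, np.sqrt(sigma_n2), N)
            v += np.where(rng.random(N) < c, rng.uniform(-b, b, N), 0.0)
            x = v + (rng.normal(0.0, np.sqrt(sigma_s2), N) if h else 0.0)
            stats[h].append(robust_energy_llr(x, sigma_n2,
                                              sigma_n2 + sigma_s2, c, b, kappa))
    thresholds = np.sort(np.concatenate([stats[0], stats[1]]))
    pf = np.array([(np.array(stats[0]) > t).mean() for t in thresholds])
    pd = np.array([(np.array(stats[1]) > t).mean() for t in thresholds])
    return pf, pd
```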

Next, we investigate how many samples the detector should involve for our analysis to apply. In the simulation example, we have used the following parameters to compute the probability of miss \(P_{m}(\gamma) = 1-P_{D}(\gamma):{\sigma _{n}^{2}} = 1,{\sigma _{s}^{2}} = 2, c = 0.01\) and b=−a=100. In Fig. 6, one can see that with N=5, the simulation and theory only vaguely resemble each other. The situation improves if we increase the number of samples to 15, and already with N=30, the theoretical curve and the simulation dots are rather close to each other. The simulation points are averages over 1 million independent trials. We note that N=30 is much smaller than the N found from (48) for the signal-to-noise ratios required for proper operation of the energy detector in cognitive radio applications. A similar result can be obtained for the probability of false alarm \(P_{F}\).

Fig. 6

Probability of detection as a function of threshold

Figure 7 depicts the dependence of the probability of false alarm on the number of samples N for the ordinary energy detector when there is no impulsive noise (black dashed line). It also shows the curves corresponding to the ordinary energy detector (blue line) and the proposed robust detector (magenta line) in the presence of impulsive noise with intensity c=0.001. For comparison, we show the results of the robust \(L_{p}\)-norm detector of [13] with p=1 (red line) and p=1.5 (green line) in the same noise. The proposed limiting detector operates in these conditions almost as well as the ordinary energy detector in Gaussian noise and outperforms all the others.

Fig. 7

Probability of false alarm as function of N

6.2 Coherent detection

In the case of a known primary signal, we present graphs of the probability of detection and the probability of false alarm as functions of the threshold γ. The comparison of theoretical and simulation results for the case \(s=1,{\sigma _{n}^{2}} = 1,c=0.01,a = -100,b=100\) and N=100 is given in Fig. 8. One can see that for low values of the threshold, both probabilities are close to unity. Then, the probability of false alarm starts decreasing, followed by the probability of detection, and for high threshold values, both probabilities are close to zero. One can also observe that the theoretical and simulation results match each other well.

Fig. 8

Probabilities of detection and miss

6.3 Correlation matrix-based detection

6.3.1 Eigenvalue ratio detector

Here, we compare the proposed detector with two other recently proposed robust detectors. The first of them is the detector using sign correlation as proposed in [14].

The second one is a Huber M-estimate-type detector proposed in [16].

Figure 9 shows the ROCs of the detectors. All the curves in this subsection are obtained from simulations. The number of samples in each trial was 100, and the curves are averages over 100,000 trials. A sine wave \(s(n)=A \sin(0.32\pi n+\varphi_{0})\), with \(\varphi_{0}\) being a random initial phase uniformly distributed in the interval [−π,π], was used as the input signal. The signal power was 0.5, and the Gaussian noise variance was set to \(\sigma^{2}=1\). For each run, we constructed a 10×10 covariance matrix to calculate the eigenvalues. The limits of the impulsive noise were set to a=−100 and b=100. The probability of impulses was c=0.01 in the simulation.

Fig. 9

ROCs of eigenvalue ratio detectors

The blue line is the performance of the ordinary eigenvalue ratio detector, designed for Gaussian noise, when there is no impulsive noise. The green line represents the performance of this detector in the presence of impulsive noise. One can see that in the presence of the impulsive noise, the performance of the detector deteriorates significantly. The cyan line is the performance of the limiting detector, and the red line is that of the detector that forces the samples with suspected impulses to zero. The black line is the detector based on the sign correlation, and the magenta line is the performance of the detector that uses an M-estimate of the correlation matrix with \(a_{h}=8\).

One can see that the detector that sets samples with suspected impulses to zero performs best, giving almost as good a performance as the ordinary eigenvalue ratio detector in Gaussian noise. The performance of the limiting detector is somewhat worse than that of these two. Both detectors proposed in this paper outperform the detector using the M-estimate of the correlation matrix with \(a_{h}=8\) and the algorithm using sign correlation. The signal and noise parameters are the same as in the earlier simulation examples.

In Fig. 10, we show the probability of miss as a function of noise power. We have selected the same signal and noise as before, but now the noise variance is varied from 0.1 to 5 while keeping the signal power equal to 0.5. The impulse probability is c=0.01 with b=−a=100. The detection threshold is γ=5 in this simulation. It can be seen that as the noise variance increases, the probability of miss becomes higher for all the detectors.

Fig. 10

Dependence of the probability of miss on noise power

6.3.2 Cyclostationary detector

In Fig. 11, we show the ROCs for the cyclostationary detectors. We have used the same simulation parameters as in the eigenvalue ratio detector simulations, except that the input signal is amplitude modulated so that it exhibits cyclostationary behaviour, and the SNR is −5 dB.

Fig. 11

ROCs of the cyclostationary detectors

The performance of the ordinary cyclostationary detector when there is only Gaussian noise is represented by the blue line. The performance of the same detector in the presence of impulsive noise is represented by the green line. It is clearly visible from the figure that impulsive noise significantly deteriorates the performance of the detector. The cyan line shows the performance of the limiting detector and the red line that of the nullifying detector in impulsive noise. The black and the magenta lines are for the sign correlation and the Huber detectors, respectively. Once again, it is evident that in impulsive noise the nullifying detector gives better performance than the other robust detectors.

Finally, we show the probability of detection as a function of the impulse probability c in Fig. 12. The probability of false alarm is kept constant at \(P_{f}=0.01\), the impulse probability c changes from \(10^{-4}\) to \(10^{-1}\) and the SNR is −5 dB. Here, we see that the performance of the nullifying detector is superior, being for low and moderate impulse probabilities almost as good as that of the ordinary detector without impulsive noise. It is followed by the limiting detector and the others.

Fig. 12

Probability of detection of the cyclostationary detector as function of impulse probability

7 Conclusions

This paper addresses the problem of reliable detection in the presence of impulsive noise. We base our derivation on a noise model that explicitly includes two parts, one being Gaussian to model thermal noise and the other being uniform to model the rare impulses. We have investigated two versions of the resulting robust detectors, and it turns out that for the energy detector, limiting the noise works best; there is no difference in performance between the nullifying and limiting detectors in the case of coherent detection; and finally, in the case of correlation matrix-based detectors, the nullifying approach works best. The proposed detectors perform almost as well as the ordinary detectors derived for Gaussian noise if the noise actually is Gaussian.

Abbreviations

PDF:

Probability density function

A/D:

Analog-to-digital

SNR:

Signal-to-noise ratio

SCF:

Spectral correlation function

ROC:

Receiver operating characteristics

P D :

Probability of detection

P M :

Probability of miss-detection

P F :

Probability of false alarm

References

  1. M McHenry, E Livsics, T Nguyen, N Majumdar, XG dynamic spectrum access field test results. IEEE Commun. Mag. 45(6), 51–57 (2007).

  2. KB Letaief, W Zhang, Cooperative communications for cognitive radio networks. Proc. IEEE 97, 878–893 (2009).

  3. J Mitola, GQ Maguire Jr., Cognitive radio: making software radios more personal. IEEE Pers. Commun. 6(4), 13–18 (1999).

  4. E Biglieri, LJ Greenstein, NB Mandayam, HV Poor, Principles of Cognitive Radio (Cambridge University Press, New York, 2013).

  5. S Haykin, DJ Thomson, JH Reed, Spectrum sensing for cognitive radio. Proc. IEEE 97, 849–877 (2009).

  6. Z Quan, HV Poor, AH Sayed, Collaborative wideband sensing for cognitive radios. IEEE Signal Proc. Mag. 25, 60–73 (2008).

  7. E Axell, G Leus, EG Larsson, HV Poor, Spectrum sensing for cognitive radio. IEEE Signal Proc. Mag. 29, 101–116 (2012).

  8. J Lunden, V Koivunen, HV Poor, Spectrum exploration and exploitation for cognitive radio. IEEE Signal Proc. Mag. 32, 123–140 (2015).

  9. S Haykin, M Moher, Modern Wireless Communication (Prentice-Hall, Upper Saddle River, NJ, USA, 2004).

  10. A Papoulis, SU Pillai, Probability, Random Variables and Stochastic Processes (McGraw-Hill, international edition, 2002).

  11. PJ Huber, Robust Statistics (Wiley, New Jersey, 2004).

  12. FR Hampel, EM Ronchetti, PJ Rousseeuw, WA Stahel, Robust Statistics: The Approach Based on Influence Functions (Wiley, New York, 1986).

  13. F Moghimi, A Nasri, R Schober, in IEEE Global Telecommunications Conference (Globecom). Lp-norm spectrum sensing for cognitive radio networks impaired by non-Gaussian noise (IEEE, Honolulu, Hawaii, 2008).

  14. J Lunden, SA Kassam, V Koivunen, Robust nonparametric cyclic correlation-based spectrum sensing for cognitive radio. IEEE Trans. Signal Process. 58(1), 38–52 (2010).

  15. AM Zoubir, V Koivunen, Y Chakhchoukh, M Muma, Robust estimation in signal processing. IEEE Signal Proc. Mag. 29, 61–80 (2012).

  16. TE Biedka, L Mili, JH Reed, in Conference Record of the Twenty-Ninth Asilomar Conference on Signals, Systems and Computers, vol. 1. Robust estimation of cyclic correlation in contaminated Gaussian noise (IEEE, Pacific Grove, California, USA, 1995), pp. 511–515.

  17. D Ramirez, J Via, I Santamaria, R Lopez-Valcarce, LL Scharf, in Proc. ICASSP. Multiantenna spectrum sensing: detection of spatial correlation among time-series with unknown spectra (IEEE, Dallas, Texas, USA, 2010), pp. 2954–2957.

  18. SM Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory (Prentice Hall, New Jersey, 1998).

  19. A Hjorungnes, Complex-Valued Matrix Derivatives with Applications in Signal Processing and Communications (Cambridge University Press, Cambridge, 2011).

  20. SP Pope, Algorithms for ellipsoids. Tech. Rep. FDA-08-01, Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, New York (2008). Available at https://tcg.mae.cornell.edu/pubs/Pope_FDA_08.pdf, accessed 1 Dec. 2015.

  21. GH Golub, CF Van Loan, Matrix Computations (Johns Hopkins University Press, Baltimore, 1996).

  22. R Tandra, A Sahai, SNR walls for signal detection. IEEE J. Sel. Top. Signal Process. 2(1), 4–17 (2008).

  23. HL Van Trees, Detection, Estimation and Modulation Theory (Wiley, New York, 1968).

  24. Y Zeng, Y-C Liang, Eigenvalue-based spectrum sensing algorithms for cognitive radio. IEEE Trans. Commun. 57(6), 1784–1793 (2009).

  25. C Tracy, H Widom, in Calogero–Moser–Sutherland Models, ed. by J van Diejen, L Vinet. The distribution of the largest eigenvalue in the Gaussian ensembles. CRM Series in Mathematical Physics (Springer, New York, 2000), pp. 461–472.

  26. WA Gardner, Exploitation of spectral redundancy in cyclostationary signals. IEEE Signal Proc. Mag. 8, 14–36 (1991).

  27. W-J Yue, B-Y Zheng, Q-M Meng, Cyclostationary property based spectrum sensing algorithms for primary detection in cognitive radio systems. J. Shanghai Jiaotong Univ. (Science) 14(6), 676–680 (2009).


Acknowledgements

The authors are grateful to the Erasmus Mundus Exchange Programme. We also thank our colleague Ivo Müürsepp who provided insight and expertise that greatly assisted the research.

Author information

Corresponding author

Correspondence to Tõnu Trump.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Trump, T., Khan, I. Robust detectors for cognitive radio system. J Wireless Com Network 2015, 255 (2015). https://doi.org/10.1186/s13638-015-0487-y


  • DOI: https://doi.org/10.1186/s13638-015-0487-y
