In this section, we propose an IF-DDT spectrum sensing scheme that exploits the inertia property of the PU's activity described in Section 2.1. After presenting the decision rule, we formulate an optimization problem to obtain the optimal dynamic double thresholds. Finally, we transform the optimization problem into an easily solvable form and derive the optimal dynamic double thresholds with very low complexity.
The decision rule using IF-DDT
To improve the probability of correct decision in ED schemes, we consider dynamic double thresholds, which partition the received energy into three decision regions so as to exploit the inertia property of the PU's activity. The decision rule using IF-DDT is expressed as follows
$$\begin{array}{@{}rcl@{}} d_{k}=\left\{ \begin{array}{ll} 1, & U_{k}\geq\lambda_{1,k} \\ 0, & U_{k}<\lambda_{0,k} \\ d_{k-1},& \text{otherwise} \end{array} \right. \end{array} $$
(9)
where \(\lambda_{1,k}\) and \(\lambda_{0,k}\) represent the dynamic double thresholds for frame \(k\) at the SU, and \(\lambda_{0,k}\leq\lambda_{1,k}\).
For \(k=1\), the two thresholds should be identically initialized as \(\lambda_{0,1}=\lambda_{1,1}\). For \(k>1\), if the received energy \(U_{k}\) lies in the open interval \((\lambda_{0,k},\lambda_{1,k})\), the sensing result \(d_{k}\) for frame \(k\) remains the same as the sensing result \(d_{k-1}\) for the preceding frame \(k-1\).
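As a concrete illustration, the decision rule in Eq. (9) can be sketched as the following function. This is a minimal sketch; the function name and the numeric threshold values used below are ours, not from the paper.

```python
# A minimal sketch of the IF-DDT decision rule in Eq. (9). The function
# name and all numeric values are illustrative, not from the paper.

def ifddt_decide(U_k, lam0_k, lam1_k, d_prev):
    """Sensing decision d_k for frame k under the IF-DDT rule."""
    if U_k >= lam1_k:
        return 1             # energy clearly high: decide PU present
    if U_k < lam0_k:
        return 0             # energy clearly low: decide PU absent
    return d_prev            # ambiguous region: inherit d_{k-1} (inertia)

# For k = 1 the thresholds coincide (lambda_{0,1} = lambda_{1,1}), so the
# ambiguous region is empty and the rule reduces to single-threshold ED.
```
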
It is easy to show that the Markov chain for the PU's activity is reversible, given the steady-state distribution in Eq. (1) and the transition matrix in Eq. (2); hence, it satisfies the detailed balance equation [24, 27]
$$P\{z_{k-1}=i,z_{k}=j\}=P\{z_{k}=i,z_{k-1}=j\} $$
Define the reverse transition probability as
$$\begin{array}{@{}rcl@{}} p_{ij,k}^{*} \triangleq P\{z_{k-1}=j | z_{k}=i\} \end{array} $$
(10)
where \(i,j\in\{0,1\}\). Then, it follows that \(p_{ij,k}^{*}=p_{ij,k-1}\).
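This reversibility argument can be checked numerically. The sketch below uses illustrative transition probabilities (not from the paper) for a stationary two-state chain and verifies both the detailed balance equation and the resulting identity between forward and reverse transition probabilities.

```python
# Numerical check of detailed balance and of the reverse transition
# probability in Eq. (10) for a stationary two-state chain. The
# transition probabilities below are illustrative placeholders.

p01, p10 = 0.2, 0.3                  # OFF->ON and ON->OFF probabilities
pi0 = p10 / (p01 + p10)              # stationary P{z = 0}
pi1 = p01 / (p01 + p10)              # stationary P{z = 1}

# Detailed balance: P{z_{k-1}=0, z_k=1} == P{z_{k-1}=1, z_k=0}
assert abs(pi0 * p01 - pi1 * p10) < 1e-12

# Reverse transition probability via Bayes' rule:
# p*_{ij} = P{z_{k-1}=j | z_k=i} = pi_j * p_ji / pi_i
p_star_01 = pi1 * p10 / pi0
assert abs(p_star_01 - p01) < 1e-12  # reversibility: p*_{ij} = p_{ij}
```
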
According to the decision rule using IF-DDT in Eq. (9), the probabilities \(P_{f,k}\) and \(P_{d,k}\) are obtained as follows (see Appendix 1)
$$ P_{f,k}=\mathbb{Q}\left(\frac{\lambda_{0,k}-N}{\sqrt{2N}}\right)P_{0,k}+\mathbb{Q}\left(\frac{\lambda_{1,k}-N}{\sqrt{2N}}\right)(1-P_{0,k}) $$
(11)
$$ \begin{aligned} P_{d,k}&=\mathbb{Q}\left(\frac{\lambda_{0,k}-N(1+\gamma)}{\sqrt{2N(1+2\gamma)}}\right)P_{1,k}\\&\quad+\mathbb{Q}\left(\frac{\lambda_{1,k}-N(1+\gamma)}{\sqrt{2N(1+2\gamma)}}\right)(1-P_{1,k}) \end{aligned} $$
(12)
where
$$ P_{0,k}\triangleq P\{d_{k-1}=1 | z_{k}=0\}=P_{f,k-1}p^{*}_{00,k}+P_{d,k-1}p^{*}_{01,k} $$
(13)
$$ P_{1,k}\triangleq P\{d_{k-1}=1 | z_{k}=1\}=P_{f,k-1}p^{*}_{10,k}+P_{d,k-1}p^{*}_{11,k} $$
(14)
For ease of understanding the IF-DDT scheme, Fig. 2 illustrates the definitions of the relevant events and the associated probability measures on the reversible Markov chain.
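A minimal sketch of one step of the recursions in Eqs. (11)-(14) follows, assuming time-homogeneous transition probabilities so that the reverse probabilities satisfy \(p^{*}_{ij,k}=p_{ij}\); the helper names and all numeric values in the test are illustrative, not from the paper.

```python
import math

# One step of the recursions in Eqs. (11)-(14), assuming a
# time-homogeneous chain so that p*_{ij} = p_{ij}. All numeric values
# used with this sketch are illustrative.

def Q(x):
    """Gaussian tail function Q(x) = P{N(0,1) > x}."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def step(Pf_prev, Pd_prev, lam0, lam1, N, gamma, p):
    """Advance (P_{f,k-1}, P_{d,k-1}) to (P_{f,k}, P_{d,k}).

    p[i][j] = P{z_k = j | z_{k-1} = i} is the 2x2 transition matrix.
    """
    P0 = Pf_prev * p[0][0] + Pd_prev * p[0][1]       # Eq. (13)
    P1 = Pf_prev * p[1][0] + Pd_prev * p[1][1]       # Eq. (14)
    sd0 = math.sqrt(2 * N)
    Pf = Q((lam0 - N) / sd0) * P0 + Q((lam1 - N) / sd0) * (1 - P0)    # Eq. (11)
    mu = N * (1 + gamma)
    sd1 = math.sqrt(2 * N * (1 + 2 * gamma))
    Pd = Q((lam0 - mu) / sd1) * P1 + Q((lam1 - mu) / sd1) * (1 - P1)  # Eq. (12)
    return Pf, Pd
```

With \(\lambda_{0,k}=\lambda_{1,k}\) the step reduces to conventional single-threshold ED, which provides a convenient sanity check.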
Derivation of thresholds
To derive the double thresholds, there exist two hypothesis testing criteria: the Neyman-Pearson (NP) test [8] and the Bayesian test [28]. We consider only the NP test criterion, which aims at maximizing \(P_{d,k}\) subject to the constraint \(P_{f,k}\leq P_{f,\text{target}}\), or alternatively minimizing \(P_{f,k}\) subject to the constraint \(P_{d,k}\geq P_{d,\text{target}}\), where \(P_{f,\text{target}}\) and \(P_{d,\text{target}}\) represent the tolerable maximum false alarm probability and minimum detection probability, respectively.
For any sensing frame \(k>1\), the probabilities \(P_{0,k}\) and \(P_{1,k}\) can be treated as constants, since \(p_{ij,k}^{*}\) are determined by Eq. (2), and \(P_{f,k-1}\) and \(P_{d,k-1}\) have already been obtained at frame \(k-1\). Thus, by Eqs. (11) and (12), it is clear that \(P_{f,k}\) and \(P_{d,k}\) are strictly decreasing in \(\lambda_{0,k}\) and \(\lambda_{1,k}\), respectively. According to the NP test criterion, the search for the optimal thresholds \(\lambda _{0,k}^{*}\) and \(\lambda _{1,k}^{*}\) for each frame \(k\) is formulated as the optimization problem below
$$\begin{array}{*{20}l} &\max_{\lambda_{0,k},\lambda_{1,k}} P_{d,k} (\lambda_{0,k},\lambda_{1,k})\\ &s.t. \quad \lambda_{1,k}\geq \lambda_{0,k} \geq 0 \\ & \qquad P_{f,k}(\lambda_{0,k},\lambda_{1,k})\leq P_{f,\text{target}} \end{array} $$
(15)
where \(P_{d,k}\) and \(P_{f,k}\) are written as bivariate functions of the thresholds.
Theorem 1
\(P_{d,k}\) reaches the maximum value \(P_{d,k}^{*}\) only when \(P_{f,k}=P_{f,\text{target}}\), i.e., the optimization problem in Eq. (15) can be rewritten as (proved in Appendix 2)
$$\begin{array}{*{20}l} &\max_{\lambda_{0,k},\lambda_{1,k}} P_{d,k} (\lambda_{0,k},\lambda_{1,k})\\ &s.t. \quad \lambda_{1,k}\geq \lambda_{0,k} \geq 0 \\ & \qquad P_{f,k}(\lambda_{0,k},\lambda_{1,k})= P_{f,\text{target}} \end{array} $$
(16)
Based on the second constraint in Eq. (16), we define the Lagrange function
$$ \begin{aligned} f(\lambda_{0,k},\lambda_{1,k},c)&=P_{d,k}(\lambda_{0,k},\lambda_{1,k})\\&\quad+c\left(P_{f,\text{target}}-P_{f,k}(\lambda_{0,k},\lambda_{1,k})\right) \end{aligned} $$
(17)
where c is the Lagrange multiplier.
Setting the first-order partial derivatives of Eq. (17) with respect to \(\lambda_{0,k}\) and \(\lambda_{1,k}\) to zero, respectively,
$$\begin{array}{@{}rcl@{}} \frac{\partial f(\lambda_{0,k},\lambda_{1,k},c)}{\partial \lambda_{0,k}}=0,\quad \frac{\partial f(\lambda_{0,k},\lambda_{1,k},c)}{\partial \lambda_{1,k}}=0 \end{array} $$
(18)
then
$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} \frac{1}{\sqrt{1+2\gamma}}e^{\frac{(\lambda_{0,k}-N)^{2}}{4N}-\frac{[\lambda_{0,k}-N(1+\gamma)]^{2}}{4N(1+2\gamma)}} =c \frac{P_{0,k}}{P_{1,k}}\\ \frac{1}{\sqrt{1+2\gamma}}e^{\frac{(\lambda_{1,k}-N)^{2}}{4N}-\frac{[\lambda_{1,k}-N(1+\gamma)]^{2}}{4N(1+2\gamma)}} =c \frac{1-P_{0,k}}{1-P_{1,k}} \end{array} \right. \end{array} $$
(19)
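The stationarity conditions in Eq. (19) can be spot-checked numerically: at any point, the ratio \((\partial P_{d,k}/\partial\lambda_{0,k})/(\partial P_{f,k}/\partial\lambda_{0,k})\) must equal \(\frac{P_{1,k}}{P_{0,k}}\cdot\frac{1}{\sqrt{1+2\gamma}}e^{\frac{(\lambda_{0,k}-N)^{2}}{4N}-\frac{[\lambda_{0,k}-N(1+\gamma)]^{2}}{4N(1+2\gamma)}}\), i.e., the multiplier \(c\) from the first line of Eq. (19). The sketch below verifies this with finite differences; all parameter values are illustrative.

```python
import math

# Spot check that Eq. (19) is consistent with the partial derivatives of
# P_{f,k} (Eq. 11) and P_{d,k} (Eq. 12). Parameter values are illustrative.

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

N, gamma = 100, 0.5
P0, P1 = 0.2, 0.7
lam0, lam1 = 95.0, 120.0

def Pf(l0, l1):
    sd = math.sqrt(2 * N)
    return Q((l0 - N) / sd) * P0 + Q((l1 - N) / sd) * (1 - P0)

def Pd(l0, l1):
    mu, sd = N * (1 + gamma), math.sqrt(2 * N * (1 + 2 * gamma))
    return Q((l0 - mu) / sd) * P1 + Q((l1 - mu) / sd) * (1 - P1)

# Central finite-difference partial derivatives w.r.t. lambda_{0,k}.
h = 1e-5
dPf = (Pf(lam0 + h, lam1) - Pf(lam0 - h, lam1)) / (2 * h)
dPd = (Pd(lam0 + h, lam1) - Pd(lam0 - h, lam1)) / (2 * h)

# Closed-form ratio implied by the first line of Eq. (19).
mu = N * (1 + gamma)
closed_form = (P1 / P0) / math.sqrt(1 + 2 * gamma) * math.exp(
    (lam0 - N) ** 2 / (4 * N) - (lam0 - mu) ** 2 / (4 * N * (1 + 2 * gamma)))

assert abs(dPd / dPf - closed_form) < 1e-6
```
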
Eliminating c in Eq. (19) by simple manipulations, we get
$$\begin{array}{@{}rcl@{}} (\lambda_{1,k}-a)^{2}-(\lambda_{0,k}-a)^{2}=b \end{array} $$
(20)
where
$$\begin{array}{@{}rcl@{}} a=\frac{N}{2},\quad b=\frac{2N(1+2\gamma)}{\gamma} \ln \frac{P_{1,k}(1-P_{0,k})}{P_{0,k}(1-P_{1,k})} \end{array} $$
(21)
Then, recalling the second constraint in Eq. (16), we obtain
$$ \mathbb{Q}\left(\frac{\lambda_{0,k}-N}{\sqrt{2N}}\right)P_{0,k}+\mathbb{Q}\left(\frac{\lambda_{1,k}-N}{\sqrt{2N}}\right)(1-P_{0,k})=P_{f,\text{target}} $$
(22)
So we convert the optimization problem in Eq. (16) into a nonlinear programming problem consisting of Eqs. (20) and (22).
Clearly, the existence of a solution to the optimization problem depends upon the system parameters, including the sample length \(N\), the average SNR \(\gamma\), the frame lengths \(T_{F_{k}}\), the expected ON and OFF state periods \(\overline {T}_{1}\) and \(\overline {T}_{0}\), and the target false alarm probability \(P_{f,\text{target}}\). On the other hand, the solution to the nonlinear programming problem cannot be expressed in closed form. Therefore, to facilitate a heuristic approach to the solution, we first derive the search range for the above nonlinear programming problem.
Notice that the function \(\mathbb {Q}(x)\) is strictly decreasing. Under the initial constraint \(\lambda_{1,k}\geq \lambda_{0,k}\geq 0\) and the basic probability property \(0<P_{0,k},1-P_{0,k}<1\), we can derive a necessary condition on the threshold \(\lambda_{0,k}\) below
$$ \min \left[\frac{P_{f,\text{target}}}{P_{0,k}},\mathbb{Q}\left(-\sqrt{\frac{N}{2}}\right)\right]\geq \mathbb{Q}\left(\frac{\lambda_{0,k}-N}{\sqrt{2N}}\right) \geq P_{f,\text{target}} $$
(23)
Thus, the range of \(\lambda_{0,k}\) is \(\left [\lambda _{0,k}^{*},\lambda ^{*}\right ]\), where
$$ \lambda_{0,k}^{*} \triangleq \sqrt{2N}\mathbb{Q}^{-1}\left\{ \min\left[ \frac{P_{f,\text{target}}}{P_{0,k}}, \mathbb{Q}\left(-\sqrt{\frac{N}{2}}\right)\right] \right\}+N $$
(24)
$$\begin{array}{@{}rcl@{}} \lambda^{*} \triangleq \sqrt{2N}\mathbb{Q}^{-1}(P_{f,\text{target}})+N \end{array} $$
(25)
Incorporating the range of \(\lambda_{0,k}\), we obtain a necessary condition on the threshold \(\lambda_{1,k}\) as follows
$$ P_{f,\text{target}}\geq \mathbb{Q}\left(\frac{\lambda_{1,k}-N}{\sqrt{2N}} \right) \geq \frac{P_{f,\text{target}}- \mathbb{Q}\left(\frac{\lambda_{0,k}^{*}-N}{\sqrt{2N}} \right)P_{0,k}}{1-P_{0,k}} $$
(26)
and the range \(\left [\lambda ^{*},\lambda _{1,k}^{*}\right ]\) of \(\lambda_{1,k}\) is given by
$$ \lambda_{1,k}^{*} \triangleq \sqrt{2N}\mathbb{Q}^{-1}\left(\frac{P_{f,\text{target}}-\mathbb{Q}\left(\frac{\lambda_{0,k}^{*}-N}{\sqrt{2N}} \right)P_{0,k}} {1-P_{0,k}} \right)+N $$
(27)
In practice, it is easy to show that \(\lambda^{*}>a\), since \(P_{f,\text{target}}<0.5\) and \(\mathbb {Q}^{-1}(P_{f,\text {target}})>0\). By analyzing Eqs. (20) and (24), we can obtain the relationship among \(\lambda _{1,k}^{*}\), \(\lambda _{0,k}^{*}\), and \(a\) as follows:
$$ {\begin{aligned} \left\{ \begin{array}{ll} \lambda_{0,k}^{*}=0,\lambda_{1,k}^{*}<+\infty \qquad & \frac{P_{f,\text{target}}}{P_{0,k}}\geq \mathbb{Q}\left(- \sqrt{\frac{N}{2}} \right) \qquad \\ 0<\lambda_{0,k}^{*}\leq a,\lambda_{1,k}^{*} \rightarrow +\infty \qquad & {\mathbb{Q}\left(- \frac{1}{ 2} \sqrt{\frac{N}{ 2}} \right)} \leq \frac{P_{f,\text{target}}}{P_{0,k}} < \mathbb{Q}\left(- \sqrt{\frac{N}{ 2}} \right) \\ \lambda_{0,k}^{*} > a,\lambda_{1,k}^{*} \rightarrow +\infty \qquad & \frac{P_{f,\text{target}}}{ P_{0,k}}< {\mathbb{Q}\left(- \frac{1}{ 2} \sqrt{\frac{N}{ 2}} \right)} \end{array} \right. \end{aligned}} $$
(28)
Combining the ranges of \(\lambda_{0,k}\) and \(\lambda_{1,k}\) in the three cases of Eq. (28), Eqs. (20) and (21) can be illustrated in the two-dimensional Cartesian coordinate system as shown in Fig. 3. When \(\frac {P_{f,\text {target}}}{P_{0,k}}\geq \mathbb {Q}\left (- \sqrt {\frac {N}{2}} \right)\), Eqs. (20) and (21) are illustrated in Fig. 3a; when \({\mathbb {Q}\left (- \frac {1}{2} \sqrt {\frac {N}{2}}\right)} \leq \frac {P_{f,\text {target}}}{P_{0,k}} < \mathbb {Q}\left (- \sqrt {\frac {N}{ 2}} \right)\), in Fig. 3b; and when \( \frac {P_{f,\text {target}}}{ P_{0,k}}<{\mathbb {Q}\left (- \frac {1}{2} \sqrt {\frac {N}{2}}\right)}\), in Fig. 3c.
Solution analysis
Based on Eqs. (22) and (20), respectively, \(\lambda_{1,k}\) can be expressed as a function of \(\lambda_{0,k}\) within the range obtained above. First, \(g_{1}(\lambda_{0,k})\) and \(g_{2}(\lambda_{0,k})\) are defined respectively as
$$ g_{1} \left(\lambda_{0,k} \right) \triangleq \sqrt{2N}\mathbb{Q}^{-1}\left(\frac{P_{f,\text{target}}-\mathbb{Q}\left(\frac{\lambda_{0,k}-N}{\sqrt{2N}} \right)P_{0,k}} {1-P_{0,k}} \right)+N $$
(29)
$$\begin{array}{@{}rcl@{}} g_{2} \left(\lambda_{0,k} \right) \triangleq \sqrt{\left(\lambda_{0,k}-a\right)^{2}+b}+a \end{array} $$
(30)
And \(g(\lambda_{0,k})\) is further defined as
$$\begin{array}{@{}rcl@{}} {g \left(\lambda_{0,k} \right) \triangleq g_{1} \left(\lambda_{0,k}\right)- g_{2} \left(\lambda_{0,k} \right)} \end{array} $$
(31)
Obviously, the zero point of \(g(\lambda_{0,k})\) is simultaneously the solution for \(\lambda_{0,k}\) of the nonlinear programming problem in Eqs. (20) and (22).
The first-order derivative of \(g(\lambda_{0,k})\) is then deduced as
$$ \begin{aligned} g^{\prime} \left(\lambda_{0,k} \right) & \triangleq g_{1}^{\prime} \left(\lambda_{0,k} \right)- g_{2}^{\prime} \left(\lambda_{0,k} \right)\\ &=-\frac{P_{0,k}}{1-P_{0,k}}e^{\frac{1}{4N}\left[(\lambda_{1,k}-N)^{2}-(N-\lambda_{0,k})^{2} \right]}-\frac{\lambda_{0,k}-a}{\lambda_{1,k}-a} \end{aligned} $$
(32)
where \(\lambda _{1,k} = \sqrt {2N}\mathbb {Q}^{-1}\left (\frac {P_{f,\text {target}}-\mathbb {Q}\left (\frac {\lambda _{0,k}-N}{\sqrt {2N}}\right)P_{0,k}} {1-P_{0,k}} \right)+N\).
The negativity of \(g^{\prime}(\lambda_{0,k})\) over the ranges of the three cases in Eq. (28) is proved as follows.
- (1) \( \frac {P_{f,\text {target}}}{P_{0,k}}<{ \mathbb {Q}\left (- \frac {1}{2} \sqrt {\frac {N}{2}} \right)}\). It can be easily obtained that \( \frac {\lambda _{0,k}-a}{\lambda _{1,k}-a}>0\), and thus, we can get \(g^{\prime}(\lambda_{0,k})<0\).
- (2) \( \frac {P_{f,\text {target}}}{P_{0,k}} \geq { \mathbb {Q}\left (- \frac {1}{2} \sqrt {\frac {N}{2}} \right)}\).
  - For \(\lambda_{0,k}\in(a,\lambda^{*})\), similar to case (1), it can be obtained that \(g^{\prime}(\lambda_{0,k})<0\).
  - For \(\lambda _{0,k}\in \left(\lambda _{0,k}^{*},a\right)\), Eq. (22) can be transformed into
$$\begin{array}{@{}rcl@{}} &&\left[1-\frac{1}{2}\text{erfc}\left(\frac{N-\lambda_{0,k}}{2\sqrt {N}} \right) \right]P_{0,k}\\&&\quad+\frac{1}{2}\text{erfc}\left(\frac{\lambda_{1,k}-N}{2\sqrt {N}} \right)(1-P_{0,k})=P_{f,\text{target}} \end{array} $$
(33)
where \(\text{erfc}(x)=2\mathbb {Q}(\sqrt {2}x)\).
With \(\lambda_{0,k}<a\), it can be deduced that
$$\begin{array}{@{}rcl@{}} \frac{N-\lambda_{0,k}}{2\sqrt {N}} > \frac{N-a}{2\sqrt {N}} = {\frac{\sqrt {N}}{4}} \end{array} $$
(34)
and
$$\begin{array}{@{}rcl@{}} \frac{\lambda_{1,k}-N}{2\sqrt {N}}&& = \frac{1}{\sqrt{2}} \mathbb{Q}^{-1} \left(\frac{P_{f,\text{target}}- \mathbb{Q} \left(\frac{\lambda_{0,k}-N}{\sqrt {2N}} \right)P_{0,k}}{1-P_{0,k}} \right)\\ && > \frac{1}{\sqrt{2}} \mathbb{Q}^{-1} \left(\frac{P_{f,\text{target}}- \mathbb{Q} \left(- \frac{\sqrt {N}}{2 \sqrt {2}} \right) P_{0,k}}{1-P_{0,k}} \right)\\ && > \frac{1}{\sqrt{2}} \mathbb{Q}^{-1} \left(\frac{P_{0,k}- \mathbb{Q} \left(- \frac{\sqrt {N}}{2 \sqrt {2}} \right) P_{0,k}}{1-P_{0,k}} \right)\\ && > \frac{1}{\sqrt{2}} \mathbb{Q}^{-1} \left(1- \mathbb{Q} \left(- \frac{\sqrt {N}}{2 \sqrt {2}} \right) \right)\\ &&=\frac{\sqrt {N}}{4} \end{array} $$
(35)
Notice that in Eq. (35), \(P_{0,k}>P_{f,\text{target}}\), since \(P_{d,k-1}>P_{f,\text{target}}\) must hold to ensure a valid detection and \(P_{f,k-1}=P_{f,\text{target}}\) in Eq. (13). With \(\gamma\ll 1\) and \(N\gg 1\), the approximation \(\text{erfc}(x)\approx \frac {1}{\sqrt {\pi }x}e^{-x^{2}}\) can be applied in Eq. (33), and thus, we can get
$$ {\begin{aligned} &\frac{P_{0,k}}{1-P_{0,k}}e^{\frac{1}{4N} \left[ (\lambda_{1,k}-N)^{2}-(N-\lambda_{0,k})^{2} \right]}\\ &\quad= \frac{N-\lambda_{0,k}}{\lambda_{1,k}-N}+\frac{P_{0,k}-P_{f,\text{target}}}{1-P_{0,k}} \sqrt{\frac{\pi}{N}}(N-\lambda_{0,k})e^{\frac{1}{4N}(\lambda_{1,k}-N)^{2}} \end{aligned}} $$
(36)
Substituting Eq. (36) into Eq. (32), it can be proved that \(g^{\prime}(\lambda_{0,k})<0\) when \(\lambda _{0,k}\in \left (\lambda _{0,k}^{*},a \right)\); the nonlinear programming problem is then transformed into the problem of searching for the zero point of a strictly decreasing function, which can easily be solved by the bisection method or Newton's method. It should be noticed that no zero point exists when \(g\left (\lambda _{0,k}^{*} \right)<0 \), since \(g(\lambda^{*})<0\).
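The overall search can be sketched as follows: bisection on the strictly decreasing \(g(\lambda_{0,k})\) over the derived range, yielding thresholds that satisfy both Eq. (20) and the constraint Eq. (22). This is a minimal sketch under illustrative parameter values (\(N\), \(\gamma\), \(P_{f,\text{target}}\), \(P_{0,k}\), \(P_{1,k}\) are placeholders, not values from the paper), and the helper names are ours.

```python
import math

# Sketch of the zero-point search for g(lambda_{0,k}) in Eq. (31) by the
# bisection method. All numeric parameter values are illustrative.

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(p, lo=-40.0, hi=40.0):
    """Invert the strictly decreasing Q by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters (chosen so that a zero point exists).
N, gamma = 100, 2.0
Pf_target, P0, P1 = 0.1, 0.05, 0.9

a = N / 2.0                                                    # Eq. (21)
b = 2 * N * (1 + 2 * gamma) / gamma * math.log(P1 * (1 - P0) / (P0 * (1 - P1)))

def g1(lam0):                                   # Eq. (29), from Eq. (22)
    arg = (Pf_target - Q((lam0 - N) / math.sqrt(2 * N)) * P0) / (1 - P0)
    return math.sqrt(2 * N) * Q_inv(arg) + N

def g2(lam0):                                   # Eq. (30), from Eq. (20)
    return math.sqrt((lam0 - a) ** 2 + b) + a

def g(lam0):                                    # Eq. (31)
    return g1(lam0) - g2(lam0)

# g is strictly decreasing; bisect for its zero over [0, lambda*].
lo, hi = 0.0, math.sqrt(2 * N) * Q_inv(Pf_target) + N          # Eq. (25)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
lam0_opt = 0.5 * (lo + hi)
lam1_opt = g2(lam0_opt)   # the paired upper threshold from Eq. (20)
```

The resulting pair \((\lambda_{0,k},\lambda_{1,k})\) satisfies the quadratic relation Eq. (20) by construction and meets the false alarm constraint Eq. (22) at the zero point of \(g\).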