Open Access

Node selection algorithm based on Fisher information

EURASIP Journal on Wireless Communications and Networking 2016, 2016:249

Received: 16 July 2016

Accepted: 6 October 2016

Published: 18 October 2016


Traditional positioning requires many measurements between the target and anchors. However, this requirement is hard to satisfy in many practical scenarios. Cooperation between mobile nodes is an effective solution, but to avoid large computational complexity, a node should cooperate with its neighbors selectively. This paper proposes a novel node selection algorithm based on the Fisher information matrix: cooperation information is represented by the equivalent Fisher information matrix, and neighbors are then selected accordingly. Simulation results show that the proposed algorithm clearly improves positioning accuracy compared with a distance-based node selection algorithm.


Cooperative positioning · Fisher information · Information quantization · Node selection

1 Introduction

Wireless positioning technology is necessary for a large number of new and traditional applications. In a wireless sensor network, we usually distinguish two types of nodes: anchors with known positions and mobile nodes with unknown positions. Traditionally, a mobile node needs at least three measurements from anchors for positioning. Two traditional positioning methods are the global positioning system (GPS) [1] and beacon positioning [2]. However, GPS is not effective in some harsh environments, such as indoors or underground, because of obstacles. Beacon positioning relies on anchors deployed on land, for example, WiFi access points and base stations, but these conditions cannot always be satisfied. Considering the cost, increasing the number of anchors is unrealistic in such complex environments. In addition, since mobile nodes are battery powered, energy saving must be taken into account [3–6], so the communication radius of a node cannot be increased without limit. Cooperative positioning can overcome these traditional restrictions, but information exchange and fusion inevitably bring large computational complexity and communication burden. In fact, some measurements from neighbors do not improve the positioning accuracy but do increase the computational complexity; that is, some cooperation information is redundant. It is therefore necessary to select the neighbors that are beneficial to the target. Some common cooperative positioning algorithms also consider node selection [7], but those algorithms are based on distance, and the resulting positioning accuracy is low. Based on the theory of the Fisher information matrix (FIM), this paper uses the FIM for node selection.

The remainder of this paper is organized as follows. Section 2 introduces the system model. Section 3 presents the information decomposition. Section 4 describes the proposed algorithm. Section 5 presents the simulation results. The conclusion and future work are discussed in the last section.

2 System model

2.1 Signal model

We assume that a network consists of N a mobile nodes and N b anchors, denoted by \({{\mathcal {N}}_{a}} = \{ 1,2 \ldots {N_{a}}\} \) and \({{\mathcal {N}}_{b}} = \{ 1,2 \ldots {N_{b}}\} \), respectively. A common signal model is a measurement based on the received signal, for example, an angle or a distance. The measurement of distance from the kth node \(\left ({k \in {{\mathcal {N}}_{a}}} \right)\) to the jth node \(\left ({j \in {{\mathcal {N}}_{a}} \cup {{\mathcal {N}}_{b}}\backslash \{ k\}} \right)\) is as follows:
$$\begin{array}{@{}rcl@{}} {\mathrm{z}_{kj}} = \left\{ \begin{array}{l} {\left\| {{\boldsymbol{p}_{k}} - {\boldsymbol{p}_{j}}} \right\|_{2}} + {v_{kj}}\\ 0 \end{array} \right., \end{array} $$
where v kj is the observation noise and p=[x,y]T represents the location of a node in a 2-D plane. Different from Eq. 1, we select the original signal as the measurement. The signal model is written as
$$\begin{array}{@{}rcl@{}} {\mathrm{z}_{kj}}\left(t \right) = \left\{ \begin{array}{l} \sum\limits_{l = 1}^{{L_{kj}}} {\alpha_{kj}^{(l)}s\left({t - \tau_{kj}^{(l)}} \right)} + {v_{kj}}\left(t \right)\\ 0 \end{array} \right., \end{array} $$
where s(t) is the transmitted signal, \(\tau _{kj}^{(l)}\) and \(\alpha _{kj}^{(l)}\) are the delay and amplitude of the lth path, respectively, L kj is the number of multipath components, and v kj (t) is the observation noise. We introduce X as the vector of unknown parameters:
$$\begin{array}{@{}rcl@{}} \boldsymbol{X} = \left[\begin{array}{ccc} \boldsymbol{P}^{\mathrm{T}}&\boldsymbol{\kappa}_{1}^{\mathrm{T}}& \begin{array}{cc} \ldots &\boldsymbol{\kappa}_{N_{a}}^{\mathrm{T}} \end{array} \end{array} \right]^{\mathrm{T}}, \end{array} $$
$$\boldsymbol{P} = \left[\boldsymbol{p}_{1}^{\mathrm{T}}\boldsymbol{p}_{2}^{\mathrm{T}}\ldots \boldsymbol{p}_{{N_{a}}}^{\mathrm{T}}\right]^{\mathrm{T}} $$
$$\boldsymbol{\kappa}_{k} = \left[\boldsymbol{\kappa}_{k,1}^{\mathrm{T}}\ldots\boldsymbol{\kappa}_{k,k - 1}^{\mathrm{T}} \, \boldsymbol{\kappa}_{k,k + 1}^{\mathrm{T}}\ \ldots \ \boldsymbol{\kappa}_{k,{N_{a}} + {N_{b}}}^{\mathrm{T}} \right]^{\mathrm{T}} $$
$$\boldsymbol{\kappa}_{kj} = \left[\alpha_{kj}^{(1)} \ \ \tau_{kj}^{(1)} \ldots \alpha_{kj}^{({L_{kj}})} \ \ \tau_{kj}^{({L_{kj}})}\right]^{\mathrm{T}}. $$
Besides, we introduce Z as the observation vector
$$\begin{array}{@{}rcl@{}} \boldsymbol{Z} = \left[ \boldsymbol{z}_{1}^{\mathrm{T}}\quad {\boldsymbol{z}_{2}^{\mathrm{T}}}\quad \ldots \quad \boldsymbol{z}_{{N_{a}}}^{\mathrm{T}} \right]^{\mathrm{T}}, \end{array} $$

where \({{\boldsymbol {{z}}}_{k}} = \left [\boldsymbol {{z}}_{k,1}^{\mathrm {T}} \ldots \boldsymbol {{z}}_{{k,k} - 1}^{\mathrm {T}} \ \boldsymbol {{z}}_{{k,k} + 1}^{\mathrm {T}} \ldots \boldsymbol {{z}}_{k,{N_{a}} + {N_{b}}}^{\mathrm {T}}\right ]^{\mathrm {T}}\) and z kj is obtained from the Karhunen–Loève (KL) expansion of z kj (t) [8].
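As a concrete illustration of the range measurement in Eq. 1 (the waveform model of Eq. 2 is what the later analysis builds on), the following is a minimal sketch; the Gaussian noise model and all names here are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, assumed for reproducibility

def range_measurement(p_k, p_j, sigma=0.1):
    """Noisy range between two 2-D positions (Eq. 1): ||p_k - p_j|| + v_kj."""
    return float(np.linalg.norm(np.asarray(p_k) - np.asarray(p_j))
                 + rng.normal(0.0, sigma))

# Noiseless check on a 3-4-5 triangle
z = range_measurement([0.0, 0.0], [3.0, 4.0], sigma=0.0)  # exactly 5.0
```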

2.2 Error bound

We denote the estimator of X by \(\tilde {\boldsymbol {{X}}} = {\left [\tilde {\boldsymbol {P}}^{\mathrm {T}} \tilde {\boldsymbol {\kappa }}_{1}^{\mathrm {T}} \ \tilde {\boldsymbol {{\kappa }}}_{2}^{\mathrm {T}} \ldots \tilde {\boldsymbol {{\kappa }}}_{{N_{a}}}^{\mathrm {T}}\right ]^{\mathrm {T}}}\). The mean square error satisfies the information inequality [9]:
$$\begin{array}{@{}rcl@{}} E_{\boldsymbol{Z},\boldsymbol{X}}\left\{ {\left({\tilde{\boldsymbol{{X}}} - \boldsymbol{{X}}} \right)\left({\tilde{\boldsymbol{{X}}} - \boldsymbol{{X}}} \right)}^{\mathrm{T}} \right\} \ge J_{\boldsymbol{{X}}}^{- 1}, \end{array} $$
where J X is the FIM for X, given by
$$\begin{array}{@{}rcl@{}} {J_{\boldsymbol{{X}}}} = \mathbf{E}_{\boldsymbol{Z},\boldsymbol{X}}\left\{ - \frac{{{\partial^{2}}}}{{\partial \boldsymbol{{X}}\partial \boldsymbol{{X}}^{\mathrm{T}}}}\ln f\left({\boldsymbol{{Z}}, \boldsymbol{{X}}} \right)\right\}, \end{array} $$
where X consists of two parts: location and channel parameters. Considering the independence of the location information and the channel parameters, we ignore the channel parameters and simplify the expression of J X . In a 2-D environment, the simplified FIM is a 2N a ×2N a matrix, which we call the equivalent Fisher information matrix (EFI) [10]. Combining with Eq. 5, the mean square error can be written as follows:
$$\begin{array}{@{}rcl@{}} E_{\boldsymbol{Z}, \boldsymbol{X}}\left\{ {\left({\tilde{\boldsymbol{P}} -\boldsymbol{P}} \right){{\left({\tilde{\boldsymbol{P}} -\boldsymbol{P}} \right)}^{\mathrm{T}}}} \right\} \ge {\left[ {J_{\boldsymbol{X}}^{- 1}} \right]_{2{N_{a}} \times 2{N_{a}}}}, \end{array} $$
$${\left[ {{J_{\boldsymbol{X}}}} \right]_{2{N_{a}} \times 2{N_{a}}}} \buildrel \Delta \over = {J_{e}}\left(\boldsymbol{P} \right), $$
where J e (P) is the EFI for P. For simplicity, we make the following definition:

Definition 1

The squared position error bound (SPEB) of the kth mobile node in the network is defined as
$$\begin{array}{@{}rcl@{}} {\mathcal{P}} \left({{\boldsymbol{p}_{k}}} \right) \buildrel \Delta \over = \text{tr}\left\{ {{{\left[ {J_{\boldsymbol{{X}}}^{- 1}} \right]}_{2 \times 2,k}}} \right\} \end{array} $$
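Definition 1 can be evaluated numerically; the sketch below (function name ours) inverts a joint FIM and reads off the trace of the kth 2×2 position block:

```python
import numpy as np

def speb(J_X, k):
    """SPEB of mobile node k (0-indexed): trace of the k-th 2x2 diagonal
    block of the inverse FIM, as in Definition 1."""
    J_inv = np.linalg.inv(J_X)
    return float(np.trace(J_inv[2 * k:2 * k + 2, 2 * k:2 * k + 2]))

# Toy FIM of a single node with independent x/y information 4 and 5:
# the bound is 1/4 + 1/5 = 0.45.
print(speb(np.diag([4.0, 5.0]), 0))
```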

3 Information decomposition and quantization

3.1 Decomposition of FIM

In order to analyze the relationship between positioning accuracy and cooperation information, we need to decompose the positioning information of the entire network. Bayesian methods deal with the estimation problem by establishing the posterior probability density function of the current state from the measurements. Once the posterior probability density function is obtained, minimum mean square error estimation and maximum a posteriori estimation can be accomplished by computing its mean and mode, respectively.

Assuming that the observation sequence z and the estimation sequence x are Markov sequences, and based on the Bayesian rule, updating the posterior probability density of each node can be performed in two steps:
  • Prediction:
    $$\begin{array}{@{}rcl@{}} \begin{array}{l} p\left({{\boldsymbol{x}^{(\mathrm{T})}}|{\boldsymbol{z}^{(1:\mathrm{T} - 1)}}} \right)\\ = \int {p\left({{\boldsymbol{x}^{(T)}}|{\boldsymbol{x}^{(\mathrm{T} - 1)}}} \right)p\left({{\boldsymbol{x}^{(\mathrm{T} - 1)}}|{\boldsymbol{z}^{(1:\mathrm{T} - 1)}}} \right)} d{\boldsymbol{x}^{(\mathrm{T} - 1)}} \end{array} \end{array} $$
  • Correction:
    $$\begin{array}{@{}rcl@{}} \begin{array}{l} p\left({{\boldsymbol{x}^{(\mathrm{T})}}|{\boldsymbol{z}^{(1:\mathrm{T})}}} \right)\\ = \int {p\left({{\boldsymbol{x}^{(\mathrm{T})}},{\boldsymbol{x}^{(\mathrm{T} - 1)}}|{\boldsymbol{z}^{(1:\mathrm{T})}}} \right)} d{\boldsymbol{x}^{(\mathrm{T} - 1)}}\\ \propto p\left({{\boldsymbol{z}^{(\mathrm{T})}}|{\boldsymbol{x}^{(\mathrm{T})}}} \right)p\left({{\boldsymbol{x}^{(\mathrm{T})}}|{\boldsymbol{z}^{(1:\mathrm{T} - 1)}}} \right) \end{array} \end{array} $$
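The prediction/correction recursion above can be sketched with a 1-D grid filter; all densities, grids, and noise levels here are illustrative assumptions:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 201)          # discretized 1-D state space
dx = x[1] - x[0]

def gauss(m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# p(x^(T-1) | z^(1:T-1)): previous posterior, assumed Gaussian around 3
prev = gauss(3.0, 0.8)

# Prediction: integrate the motion model p(x^T | x^(T-1)) over the previous posterior
trans = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.5) ** 2)
trans /= trans.sum(axis=0, keepdims=True) * dx   # columns are densities in x^T
pred = trans @ prev * dx

# Correction: multiply by the likelihood p(z^T | x^T) and renormalize
post = gauss(4.0, 1.0) * pred                    # measurement near x = 4 assumed
post /= post.sum() * dx

mean = float((x * post).sum() * dx)              # posterior mean lies between 3 and 4
```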
Using the fact that the measurements of different nodes are independent, together with the Markov assumptions, we have
$$\begin{array}{@{}rcl@{}} \begin{array}{l} p\left({{\boldsymbol{z}^{(1:\mathrm{T})}}|{\boldsymbol{x}^{(0:\mathrm{T})}}} \right)\\ = p\left({\boldsymbol{z}_{\text{rel}}^{(1:\mathrm{T})}|{\boldsymbol{x}^{(0:\mathrm{T})}}} \right)p\left({\boldsymbol{z}_{\text{self}}^{(1:\mathrm{T})}|{\boldsymbol{x}^{(0:\mathrm{T})}}} \right). \end{array} \end{array} $$
$$\begin{array}{@{}rcl@{}} p\left({\boldsymbol{{z}}_{\text{self}}^{(1:\mathrm{T})}|{\boldsymbol{{x}}^{(0:\mathrm{T})}}} \right) = \prod\limits_{t = 1}^{\mathrm{T}} {p\left({\boldsymbol{{z}}_{\text{self}}^{(t)}|{\boldsymbol{{x}}^{(t - 1)}},{\boldsymbol{{x}}^{(t)}}} \right)}, \end{array} $$
where \(\boldsymbol {z}^{(1:\mathrm {T})}_{\text {rel}}\) denotes the inter-node measurements and \(\boldsymbol {z}^{(1:\mathrm {T})}_{\mathrm {\text {self}}}\) denotes the intra-node measurements. Substituting Eqs. 11 and 12 into Eq. 10, it can be rewritten as follows:
$$\begin{array}{@{}rcl@{}} \begin{array}{l} p\left({{\boldsymbol{x}^{(0:\mathrm{T})}}|{\boldsymbol{z}^{(1:\mathrm{T})}}} \right)\\ \propto p\left({{\boldsymbol{x}^{(0)}}} \right) \cdot \prod\limits_{t = 1}^{\mathrm{T}} {p\left({\boldsymbol{z}_{\text{rel}}^{(t)}|{\boldsymbol{x}^{(t)}}} \right)} \cdot \\ \prod\limits_{t = 1}^{\mathrm{T}} {\left\{ {p\left({{\boldsymbol{x}^{(t)}}|{\boldsymbol{x}^{(t - 1)}}} \right)p\left({\boldsymbol{z}_{\text{self}}^{(t)}|{\boldsymbol{x}^{(t - 1)}},{\boldsymbol{x}^{(t)}}} \right)} \right\}}, \end{array} \end{array} $$
where \(\prod \nolimits _{t = 1}^{T} {\left \{ {p\left ({{\boldsymbol {{x}}^{(t)}}|{\boldsymbol {{x}}^{(t - 1)}}} \right)p\left ({\boldsymbol {{z}}_{\text {self}}^{(t)}|{\boldsymbol {{x}}^{(t - 1)}},{\boldsymbol {{x}}^{(t)}}} \right)} \right \}}\), \(\prod \nolimits _{t = 1}^{T} {p\left ({\boldsymbol {{z}}_{\text {rel}}^{(t)}|{\boldsymbol {{x}}^{(t)}}} \right)} \), and p(x (0)) correspond to the temporal cooperation information, spatial cooperation information, and prior knowledge, respectively. This article does not consider temporal cooperation, so we only decompose \(\prod \nolimits _{t = 1}^{T} {p\left ({\boldsymbol {{z}}_{\text {rel}}^{(t)}|{\boldsymbol {{x}}^{(t)}}}\right)} \). Because spatial information is associated with a single time step, we need not consider the sequence. Since the measurements of different nodes are independent, the spatial information of the \(k{\text {th}}\left ({k \in {{\mathcal {N}}_{a}}} \right)\) node is decomposed as
$$\begin{array}{@{}rcl@{}} p\left({{\boldsymbol{{z}}_{k}}|\boldsymbol{{x}}} \right) = \prod\limits_{j \in {{\mathcal{N}}_{b}}} {p\left({{\boldsymbol{{z}}_{kj}}|\boldsymbol{{x}}} \right)} \cdot \prod\limits_{j \in {{\mathcal{N}}_{a}}\backslash \{ k\}} {p\left({{\boldsymbol{{z}}_{kj}}|\boldsymbol{{x}}} \right)}. \end{array} $$
Then, the FIM of spatial cooperation information can be decomposed into the following three parts [12]:
$$\begin{array}{@{}rcl@{}} {J_{\boldsymbol{X}}} = J_{\boldsymbol{X}}^{P} + J_{\boldsymbol{X}}^{A} + J_{\boldsymbol{X}}^{C}, \end{array} $$

where \(J_{\boldsymbol {X}}^{P}\), \(J_{\boldsymbol {X}}^{A}\), and \(J_{\boldsymbol {X}}^{C}\) correspond to prior knowledge, neighboring anchors, and neighboring mobile nodes, respectively.

3.2 Decomposition and presentation of EFI

We now further decompose the EFI. The ranging information intensity (RII) λ kj [13] and ranging direction matrix (RDM) J r (ϕ kj ) [13] between nodes k and j are introduced to express the EFI. Assuming that prior knowledge of a node is unavailable, the EFI can be decomposed into two basic building blocks:
$$\begin{array}{@{}rcl@{}} {J_{e}^{A}}\left({{\boldsymbol{p}_{k}}} \right) = \sum\limits_{j \in {{\mathcal{N}}_{b}}} {\lambda_{kj}}{J_{r}}\left({{\phi_{kj}}} \right) \end{array} $$
$$\begin{array}{@{}rcl@{}} {C_{kj}}={C_{jk}} = \left({{\lambda_{kj}} + {\lambda_{jk}}} \right){J_{r}}\left({{\phi_{kj}}} \right), \end{array} $$

where \({J_{e}^{A}}\left ({{\boldsymbol {p}_{k}}} \right)\) and C jk are associated with anchor and mobile node, respectively.
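The building blocks of Eqs. 16 and 17 are straightforward to compute; a sketch (variable names are ours) assembling an anchor-only EFI from RIIs and RDMs:

```python
import numpy as np

def rdm(phi):
    """Ranging direction matrix J_r(phi): rank-1 projector onto the
    unit vector q = [cos phi, sin phi]."""
    q = np.array([np.cos(phi), np.sin(phi)])
    return np.outer(q, q)

def efi_anchors(lambdas, phis):
    """EFI of a mobile node served only by anchors (Eq. 16):
    sum over j of lambda_kj * J_r(phi_kj)."""
    return sum(l * rdm(p) for l, p in zip(lambdas, phis))

# Two anchors with equal RII seen at orthogonal angles yield an
# isotropic EFI, i.e. the best-conditioned geometry.
J = efi_anchors([2.0, 2.0], [0.0, np.pi / 2])
```

A single anchor gives a rank-1 (singular) EFI, which is why at least two non-collinear ranging directions are needed in 2-D.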

Through the above analysis, we draw the following conclusions:
  • A large amount of node information means a small positioning error.

  • Each newly introduced node increases the information and thereby reduces the positioning error.

  • Cooperation information can be represented by the RII and RDM.

  • Many factors affect the localization error, including the distribution of nodes and the channel quality.

4 Node selection algorithm

Cooperative localization algorithms make nodes share information with each other; the key of such an algorithm is to quantify the uncertainty of the cooperation information.

4.1 Node selection algorithm based on distance

Because cooperative information may come from either an anchor or a mobile node, we must consider not only the error of measurements between nodes but also the error caused by the inaccuracy of a neighboring mobile node. Without an appropriate method to quantify the uncertainty of mobile nodes, the positioning error is likely to be magnified. The literature [14] quantified the uncertainty employing the position estimation covariance matrix:
$$\begin{array}{@{}rcl@{}} { \tilde{\sigma}_{m,n}^{2}} = { \sigma_{m,n}^{2}} + \text{tr}\left({{\sigma^{2}_{n}}} \right), \end{array} $$

where \(\sigma_{m,n}^{2}\) corresponds to the intrinsic range measurement variance between nodes m and n, and \(\text {tr}({\sigma ^{2}_{n}})\) is the trace of the position estimation covariance matrix of node n. If node n is an anchor, \(\text {tr}({\sigma ^{2}_{n}})= 0\). Consequently, a large \(\text {tr}({\sigma ^{2}_{n}})\) means that node n has a small effect on node m. Thus, this method minimizes the impact of nodes with large error while maximizing the effect of nodes with small error.
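The variance inflation of Eq. 18 is a one-liner; in this sketch (our naming), an anchor is represented by a None covariance:

```python
import numpy as np

def inflated_variance(sigma2_mn, P_n=None):
    """Eq. 18: range variance between m and n, inflated by the trace of the
    position estimation covariance of neighbor n (None for an anchor)."""
    return sigma2_mn + (0.0 if P_n is None else float(np.trace(P_n)))

v_anchor = inflated_variance(0.04)                       # anchor: 0.04
v_mobile = inflated_variance(0.04, np.diag([0.5, 0.3]))  # mobile: 0.04 + 0.8
```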

There is another method to quantify uncertainty for cooperative tracking of nodes. Suppose node i is the target and node j is a neighbor of i. If node j is an anchor, its true location p j (k) can be used. Otherwise, only a predicted position p j (k|k−1) is known, which introduces additional positioning error. To quantify this error, the literature [15] redefined the measurement model: Eq. (1) was expanded around the predicted location p j (k|k−1) of node j using a first-order Taylor series expansion.
$$ {\mathrm{z}_{ij}}\left(k \right) = {\hat{r}_{ij}}\left(k \right) + {h_{ij}}{\left(k \right)^{\mathrm{T}}}\left({{\boldsymbol{p}_{j}}\left(k \right) - {\boldsymbol{p}_{j}}\left({k|k - 1} \right)} \right) + {v_{ij}}, $$

where \({\hat r_{ij}}(k) = ||{\boldsymbol {p}_{i}}\left (k \right) - {\boldsymbol {p}_{j}}\left ({k|k - 1} \right)|{|_{2}}\) and h ij (k) is the first-order partial derivative of r ij (k) evaluated at p j (k|k−1). In contrast, the algorithm proposed in this paper is based on the EFI, which captures the comprehensive factors affecting positioning accuracy. Because the EFI and the SPEB are directly related, we can select the neighbors that are advantageous to the target according to the EFI.

4.2 Cooperation based on EFI

In order to see which factors affect the positioning error, we diagonalize the EFI. When a mobile node k communicates only with neighboring anchors, the EFI can be written as [16]
$$ {J_{e}}\left({{\boldsymbol{p}_{k}}} \right) = \sum\limits_{j \in {{\mathcal{N}}_{b}}} {{\lambda_{kj}}{J_{r}}\left({{\phi_{kj}}} \right)} = F\left({\mu,\eta,v} \right), $$
where μ is the larger eigenvalue, indicating the principal information; η is the smaller eigenvalue, indicating the secondary information; and v is a rotation angle, indicating the direction of the principal information. Since the EFI can be diagonalized, combining with Eq. (8), the SPEB is
$$ {\mathcal{P}}\left(\boldsymbol{p} \right) = \frac{1}{\mu} + \frac{1}{\eta }. $$
Next, we analyze how a neighboring anchor affects the positioning error. Let F(μ,η,v) and F(a,0,ϕ) be the EFI of the target and of a neighboring anchor, respectively. Obviously, the information from an anchor is 1-D, along the direction ϕ. When the target receives the information from the anchor, the EFI is updated as \(F\left ({\tilde {\mu },\tilde {\eta },\tilde {v}} \right) = F\left ({\mu,\eta,v} \right) + F\left ({a,0,\phi } \right)\):
$$ F\left({\tilde{\mu},\tilde{\eta},\tilde{v}} \right) = \left[ \begin{array}{cc} \frac{\left({\mu + \eta + a} \right) + \left({A\cos 2v + B\sin 2v} \right)}{2} & \frac{A\sin 2v + B\cos 2v}{2}\\ \frac{A\sin 2v + B\cos 2v}{2} & \frac{\left({\mu + \eta + a} \right) - \left({A\cos 2v + B\sin 2v} \right)}{2} \end{array} \right], $$
where \(A=\mu - \eta + a\cos 2\phi'\), \(B=a\sin 2\phi'\), and \(\phi' =\phi -v\).
$$ F\left({\tilde{\mu},\tilde{\eta},\tilde{v}} \right) = \left[ \begin{array}{cc} \frac{\left({\mu + \eta + a} \right) + \sqrt {{A^{2}} + {B^{2}}}\, \cos \left({2v - \theta} \right)}{2} & \frac{\sqrt {{A^{2}} + {B^{2}}}\, \sin\left({2v + \theta} \right)}{2}\\ \frac{\sqrt {{A^{2}} + {B^{2}}}\, \sin\left({2v + \theta} \right)}{2} & \frac{\left({\mu + \eta + a} \right) - \sqrt {{A^{2}} + {B^{2}}}\, \cos \left({2v - \theta} \right)}{2} \end{array} \right], $$
where \(\theta = \arctan \left({B/A} \right)\). The two eigenvalues of \(F\left ({\tilde {\mu },\tilde {\eta },\tilde {v}} \right)\) can be obtained:
$$ \begin{array}{l} \tilde{\mu} = \frac{{\left({\mu + \eta + a} \right) + \sqrt {{A^{2}} + {B^{2}}} }}{2}\\ \tilde{\eta} = \frac{{\left({\mu + \eta + a} \right) - \sqrt {{A^{2}} + {B^{2}}} }}{2} \end{array} $$
Substituting Eq. 23 into Eq. 22, the two eigenvectors and rotation angle can be calculated.
$$ \tilde{v} = \arctan \frac{B}{A} + v. $$
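The closed-form eigenvalues of Eq. 23 can be checked numerically against a direct eigendecomposition; the helper F below, building a 2×2 matrix with eigenvalues μ, η and principal direction v, is our construction:

```python
import numpy as np

def F(mu, eta, v):
    """2x2 EFI with eigenvalues mu >= eta and principal direction v."""
    R = np.array([[np.cos(v), -np.sin(v)],
                  [np.sin(v),  np.cos(v)]])
    return R @ np.diag([mu, eta]) @ R.T

mu, eta, v = 3.0, 1.0, 0.2       # target's EFI (illustrative numbers)
a, phi = 2.0, 1.0                # anchor: 1-D information along phi
J = F(mu, eta, v) + F(a, 0.0, phi)

# Eq. 23 with A = mu - eta + a*cos(2*phi'), B = a*sin(2*phi'), phi' = phi - v
A = mu - eta + a * np.cos(2.0 * (phi - v))
B = a * np.sin(2.0 * (phi - v))
mu_t = ((mu + eta + a) + np.hypot(A, B)) / 2.0
eta_t = ((mu + eta + a) - np.hypot(A, B)) / 2.0

assert np.allclose(np.linalg.eigvalsh(J), [eta_t, mu_t])  # matches Eq. 23
speb_updated = 1.0 / mu_t + 1.0 / eta_t                   # updated SPEB (Eq. 21)
```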
We can compute the updated SPEB by Eq. 21 and conclude that the SPEB depends on the RII and the RDM. When the target cooperates with a neighboring mobile node, the situation is more complicated. Suppose that \({J_{e}^{A}}\left ({{\boldsymbol {p}_{1}}} \right)=F\left ({{\mu _{1}},{\eta _{1}},{v_{1}}} \right)\) and \({J_{e}^{A}}\left ({{\boldsymbol {p}_{2}}} \right)=F\left ({{\mu _{2}},{\eta _{2}},{v_{2}}} \right)\) are the initial EFIs of nodes 1 and 2, and that λ 1,2 and J r (ϕ 1,2) are the RII and RDM between node 1 and node 2. Then
$$ F\left({\tilde{\mu},\tilde{\eta},\tilde{v}} \right) = F\left({{\mu_{1}},{\eta_{1}},{v_{1}}} \right) + F\left({{\mu'_{2}},0,{\phi_{1,2}}} \right), $$

where \(\mu'_{2}=\xi_{1,2}\lambda_{1,2}\), \(\xi_{1,2}=1/\left [1+\lambda_{1,2}{\Delta _{2}}\left ({{\phi _{1,2}}}\right)\right ]\), \({\Delta _{2}} \left ({{\phi _{1,2}}} \right) = \boldsymbol {q}_{12}^{\mathrm {T}}{\left [{{J_{e}^{A}}\left ({{\boldsymbol {p}_{2}}} \right)} \right ]^{- 1}}{\boldsymbol {q}_{12}}\), and \(\boldsymbol{q}_{12}=\left [\cos \phi_{12}\ \sin \phi_{12}\right ]^{\mathrm{T}}\). Note that \(0<\xi_{1,2}\le 1\); it represents the uncertainty resulting from the neighboring mobile node. Similarly, we can obtain \(\tilde {\mu }, \tilde {\eta } \), and \(\tilde {v}\) according to Eqs. 23 and 24 and then calculate the SPEB.
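The uncertainty discount ξ 1,2 of Eq. 26 can be computed directly; a sketch (function name ours):

```python
import numpy as np

def effective_rii(lam, phi, J2):
    """Eq. 26: effective RII contributed by a mobile neighbor with EFI J2.
    Delta_2 = q^T J2^{-1} q measures the neighbor's own position uncertainty
    along the ranging direction; xi in (0, 1] discounts the RII accordingly."""
    q = np.array([np.cos(phi), np.sin(phi)])
    delta2 = float(q @ np.linalg.inv(J2) @ q)
    xi = 1.0 / (1.0 + lam * delta2)
    return xi * lam, xi

# A very well-localized neighbor (large EFI) behaves almost like an anchor:
# xi approaches 1 and nearly the full RII is delivered.
mu2_eff, xi = effective_rii(2.0, 0.3, 1e6 * np.eye(2))
```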

4.3 Neighbor selection algorithm based on EFI

We assume that the target trajectory is fixed. There are a large number of neighbors around the target, comprising mobile nodes and anchors. In view of power consumption, we select neighbors within a communication range denoted by R k . In addition, the number of cooperative nodes is set to N min. These two parameters depend on the circumstances and on the density of the network nodes, and their values will differ according to the required positioning accuracy. In general, N min is at least 3. \({\mathcal {Z}}_{k} = \{ \boldsymbol {z}_{k,1} \ldots \boldsymbol {z}_{{k,k}-1} \ \boldsymbol {z}_{{k,k} + 1} \ldots \}\) is the neighbor set of node k; its size is not fixed, and it contains all the information about the neighbors. The neighbor selection algorithm takes \({\mathcal {Z}}_{k}\) as input, uses it to compute the EFI for each node, and then obtains the SPEB. As output, the set \({\mathcal {N}}_{k}\) of selected neighbors is produced. The whole algorithm is reported as pseudo code in Algorithm 1.

In Algorithm 1, if the cardinality N of the set \({\mathcal {Z}}_{k}\) is greater than or equal to N min, the algorithm selects the neighbor subset with the smaller SPEB. Note that R 0 is set as the smallest communication range. On the contrary, if N is less than N min, there are not enough measurements to locate the mobile node, and the communication range is simply increased by Δ R, whose value is chosen according to the node density of the network. As reported in row 7 of Algorithm 1, we must distinguish between neighboring anchors and neighboring mobile nodes: if a neighbor is not an anchor, ξ k,i must be computed.

5 Simulation results

Our simulations have been carried out in a 2-D environment of size 100×100 m. First, there are a target numbered 1, a fixed unknown node numbered 2, a mobile anchor numbered 3 moving around node 2 along a circumference, and a fixed anchor numbered 4 (Fig. 1). Nodes 2 and 4 are neighbors of node 1, and node 3 is a neighbor of node 2. As seen from Fig. 2, when the angle is equal to 0 or π, nodes 1, 2, and 3 are collinear and v 2=ϕ 1,2. When v 2=ϕ 1,2, ξ 1,2=1. In this case, the error from node 2 is minimal and the SPEB of node 1 is minimal. Besides, the horizontal line indicates the SPEB in the case where node 2 is also an anchor.
Fig. 1

Network deployment of a target and three different neighbors

Fig. 2

The SPEB of target as a function of the information angle of mobile anchor

Second, all neighboring nodes are distributed on a straight line. According to the distance to the target, neighbors are divided into three types: 0−25, 25−50, and 0−50 (Fig. 3). Since all nodes are located on the same straight line, the effect of the angle on the SPEB can be ignored. As seen from Fig. 4, the SPEB grows with the distance to the neighbors and decreases as the number of neighbors increases.
Fig. 3

Neighbor distribution depending on the distance to target

Fig. 4

The SPEB of the target with respect to the number of neighbors in the three cases of neighbor distribution

Next, assume that a target and a neighboring anchor are fixed and that two additional nodes move around the target in circles with radii of 4 and 5, respectively (Fig. 5). The target can communicate only with the nearer of the two, but the two moving nodes can communicate with each other. The initial EFI of the target is provided by the fixed anchor. Figure 6 depicts the SPEB in two different cases. We can see that if the mobile unknown node is replaced with a mobile neighboring anchor, the SPEB is smaller than when the nearer node is an unknown node. As can be seen, cooperation with an unknown node introduces larger error than cooperation with an anchor.
Fig. 5

Network node distribution of one fixed anchor and two moving neighbors

Fig. 6

The SPEB with respect to the angle of neighboring node information in the case of two different types of neighbors

Finally, we consider a moving target in a network composed of 100 unknown nodes and 20 anchors; all 120 nodes are candidates. The unknown nodes are randomly distributed, and the anchors are distributed uniformly. The trajectory of the target is an ellipse, shown in Fig. 7. Figure 8 shows the SPEB of the different neighbor selection algorithms. It should be pointed out that the number of selected neighbors is the same for the two neighbor selection algorithms. The blue line indicates cooperation without node selection, red indicates the neighbor selection algorithm based on distance, and green indicates the neighbor selection algorithm based on EFI. Obviously, the node selection algorithm based on EFI improves the positioning accuracy.
Fig. 7

Distribution of network nodes

Fig. 8

Performance of node selection algorithm based on EFI comparison with node selection algorithm based on distance

6 Conclusions

In this paper, we presented a new node selection algorithm based on the EFI in wireless networks. Compared to traditional algorithms, the proposed algorithm takes into account power consumption and positioning accuracy simultaneously. Simulation results show that the proposed algorithm performs more effectively than the distance-based node selection algorithm. Since the positions of a moving target at adjacent time points are strongly correlated, incorporating time-domain cooperative information to improve accuracy would be interesting future work.



This work was supported by the National Science Foundation of China (61471077, 61301126) and the science and technology projects of Chongqing Municipal Education Commission (KJ1400413).

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Chongqing Key Laboratory of Optical Communication and Networks, Chongqing University of Posts and Telecommunications
School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications


  1. Z Xiong, F Sottile, MA Caceres, in 2011 IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications. Hybrid WSN-RFID cooperative positioning based on extended Kalman filter (IEEE, Italy, 2011), pp. 990–993.
  2. SFA Shah, S Srirangarajan, AH Tewfik, Implementation of a directional beacon-based position location algorithm in a signal processing framework. IEEE Trans. Wireless Commun. 9(3), 1044–1053 (2010).
  3. W Dai, Y Shen, MZ Win, in Wireless Communications and Networking Conference (WCNC). Network navigation algorithms with power control (IEEE, New Orleans, 2015), pp. 1231–1236.
  4. W Dai, Y Shen, MZ Win, Energy-efficient network navigation algorithms. IEEE J. Sel. Areas Commun. 33(7), 1418–1430 (2015).
  5. O Demigha, WK Hidouci, T Ahmed, On energy efficiency in collaborative target tracking in wireless sensor network: a review. IEEE Commun. Surv. Tutor. 15(3), 1210–1222 (2013).
  6. MB Dai, F Sottile, MA Spirito, in IEEE 8th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). An energy efficient tracking algorithm in UWB-based sensor networks (IEEE, Spain, 2012), pp. 173–178.
  7. S Hadzic, J Rodriguez, in Indoor Positioning and Indoor Navigation (IPIN). Utility based node selection scheme for cooperative localization (IEEE, Portugal, 2011), pp. 1–6.
  8. HV Poor, An Introduction to Signal Detection and Estimation, 2nd edn. (Springer, New York, 1994).
  9. I Reuven, H Messer, A Barankin-type lower bound on the estimation error of a hybrid parameter vector. IEEE Trans. Inform. Theory 43(3), 1084–1093 (1997).
  10. Y Shen, MZ Win, in IEEE Wireless Communications and Networking Conference. Fundamental limits of wideband localization accuracy via Fisher information (IEEE, China, 2007), pp. 3046–3051.
  11. Y Shen, S Mazuelas, MZ Win, Network navigation: theory and interpretation. IEEE J. Sel. Areas Commun. 30(9), 1823–1834 (2012).
  12. MZ Win, A Conti, S Mazuelas, Network localization and navigation via cooperation. IEEE Commun. Mag. 49(5), 56–62 (2011).
  13. Y Shen, MZ Win, Fundamental limits of wideband localization—part I: a general framework. IEEE Trans. Inform. Theory 56(10), 4956–4980 (2010).
  14. Z Xiong, M Dai, F Sottile, MA Spirito, R Garello, in IEEE International Conference on Communications. Cognitive and cooperative tracking approach in wireless networks (IEEE, Hungary, 2013), pp. 2717–2721.
  15. T Sathyan, M Hedley, Fast and accurate cooperative tracking in wireless networks. IEEE Trans. Mobile Comput., 1801–1813 (2013).
  16. Y Shen, H Wymeersch, MZ Win, Fundamental limits of wideband localization—part II: cooperative networks. IEEE Trans. Inform. Theory 56(10), 4981–5000 (2010).


© The Author(s) 2016