One characteristic of iterative localization is that the localization error propagates. We derive an upper bound on the localization error propagation based on orthogonally invariant norms. In addition, an algorithm is proposed that uses the upper bound as an LLS quality measure.
Orthogonally invariant norms
Orthogonally invariant norms are used to derive the upper bound of the localization error propagation. The notion and its properties are introduced as follows.
Definition 1 (Orthogonally Invariant Norms, Watson et al. [24]). A given matrix A has the singular value decomposition
$$ A=U\varSigma {V}^{\mathrm{T}} $$
(6)
where U and V are orthogonal matrices and Σ is an m × n diagonal matrix, where the diagonal terms are the singular values of A in descending order
$$ {\sigma}_1\ge {\sigma}_2\cdots \ge {\sigma}_n $$
(7)
Orthogonally invariant norms can be defined by
$$ \left\Vert A\right\Vert =\phi \left(\sigma \right) $$
(8)
where \( \boldsymbol{\sigma }={\left({\sigma}_1,\cdots, {\sigma}_n\right)}^{\mathrm{T}} \) and ϕ is a symmetric gauge function. Such a function satisfies the following conditions:
- (1) Φ(x) > 0, x ≠ 0;
- (2) Φ(αx) = |α|Φ(x), ∀ α ∈ ℝ;
- (3) Φ(x + y) ⩽ Φ(x) + Φ(y);
- (4) \( \varPhi \left({\varepsilon}_1{x}_{i_1},\dots, {\varepsilon}_n{x}_{i_n}\right)=\varPhi \left(\boldsymbol{x}\right) \),

where α is a scalar, \( {\varepsilon}_i=\pm 1 \) for all i, and \( {i}_1,\cdots, {i}_n \) is a permutation of 1, 2, ⋯, n.
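For intuition, two familiar norms are orthogonally invariant: the Frobenius norm (gauge ϕ(σ) = ‖σ‖2) and the spectral norm (gauge ϕ(σ) = σ1). A minimal NumPy sketch, using an arbitrary example matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # arbitrary example matrix

# Singular values of A in descending order, as in (6)-(7)
sigma = np.linalg.svd(A, compute_uv=False)

# Frobenius norm corresponds to phi(sigma) = ||sigma||_2,
# the spectral norm to phi(sigma) = max_i sigma_i
assert np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(sigma, 2))
assert np.isclose(np.linalg.norm(A, 2), sigma[0])
print("both norms are gauge functions of the singular values")
```

Both gauges are clearly symmetric in the entries of σ and absolute in sign, so the conditions above hold.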
Also, the following properties of the norm, which will be used in Section 4.2, hold:
- (1) ‖Aᵀ‖ = ‖A‖, ∀ A ∈ ℂm × n;
- (2) ‖x‖ = ‖x‖2, ∀ x ∈ ℂn;
- (3) ‖AB‖ ⩽ ‖A‖2‖B‖, ∀ A ∈ ℂm × n, ∀ B ∈ ℂn × l;
- (4) ‖AB‖ ⩽ ‖A‖‖B‖2, ∀ A ∈ ℂm × n, ∀ B ∈ ℂn × l;
- (5) ‖A − B‖ ⩽ ‖A‖ + ‖B‖, ∀ A, B ∈ ℂm × n;
- (6) |‖A‖ − ‖B‖| ⩽ ‖A − B‖, ∀ A, B ∈ ℂm × n.
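Properties (1)–(6) can be spot-checked numerically, taking the Frobenius norm as the orthogonally invariant norm. The matrices below are arbitrary examples; this is an illustrative sketch, not a proof:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

fro = lambda M: np.linalg.norm(M, 'fro')  # an orthogonally invariant norm
spec = lambda M: np.linalg.norm(M, 2)     # spectral norm ||.||_2

assert np.isclose(fro(A.T), fro(A))                        # (1)
assert np.isclose(fro(x.reshape(-1, 1)),                   # (2): on vectors the
                  np.linalg.norm(x, 2))                    #      norm is Euclidean
assert fro(A @ B) <= spec(A) * fro(B) + 1e-9               # (3)
assert fro(A @ B) <= fro(A) * spec(B) + 1e-9               # (4)
assert fro(A - C) <= fro(A) + fro(C) + 1e-9                # (5)
assert abs(fro(A) - fro(C)) <= fro(A - C) + 1e-9           # (6)
print("properties (1)-(6) hold for the Frobenius norm")
```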
Upper bound of the localization error of the LLS-RSS iterative algorithm
Upper bound of the localization error of an LLS
We propose a theorem that describes the upper bound of the localization error for an LLS. The theorem gives an abstract but useful formula for calculating the upper bound of the error propagation.
Theorem 1. Assuming the ith anchor node is chosen as the BAN, an LLS is expressed as
$$ {\widehat{A}}_i{{\widehat{\boldsymbol{x}}}_i}^{\mathrm{T}}={\widehat{\boldsymbol{b}}}_i $$
(9)
where \( {\widehat{A}}_i={A}_i+\varDelta {\widehat{A}}_i \) is a matrix constructed from the anchors’ positions, \( {A}_i \) represents the precise physical positions of the anchor nodes, and \( \varDelta {\widehat{A}}_i \) is the coordinate error of the anchors; \( {\widehat{\boldsymbol{b}}}_i={\boldsymbol{b}}_i+\varDelta {\widehat{\boldsymbol{b}}}_i \) is a vector collecting the anchors’ positions and the measurement data, where \( {\boldsymbol{b}}_i \) denotes the noiseless measurement data and \( \varDelta {\widehat{\boldsymbol{b}}}_i \) represents the noise of the measurement data. The ratio of the localized coordinate to the physical coordinate satisfies
$$ \frac{\left\Vert {\widehat{\boldsymbol{x}}}_i\right\Vert }{\left\Vert \boldsymbol{x}\right\Vert}\leqslant \kappa \left(1+\alpha \right)\left(1+\beta \right) $$
(10)
where
$$ \begin{array}{l}\kappa =\left\Vert {\widehat{A}}_i^{\dagger}\right\Vert \left\Vert {\widehat{A}}_i\right\Vert \\ {}\alpha =\frac{{\left\Vert \varDelta {\widehat{A}}_i\right\Vert}_2}{{\left\Vert {\widehat{A}}_i\right\Vert}_2}\\ {}\beta =\frac{1}{\left|{\left\Vert {\widehat{\boldsymbol{b}}}_i\right\Vert}_2/{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2-1\right|}\end{array} $$
(11)
Proof. First, based on the LLS expression \( {\widehat{A}}_i{\widehat{\boldsymbol{x}}}_i={\boldsymbol{b}}_i+\varDelta {\widehat{\boldsymbol{b}}}_i \), \( {\widehat{\boldsymbol{x}}}_i \) is calculated as
$$ {\widehat{\boldsymbol{x}}}_i={{\widehat{A}}^{\dagger}}_i{\boldsymbol{b}}_i+{{\widehat{A}}^{\dagger}}_i\varDelta {\widehat{\boldsymbol{b}}}_i $$
(12)
where \( {\widehat{A}}_i^{\dagger } \) is the Moore–Penrose pseudo-inverse of the matrix \( {\widehat{A}}_i \).
Then, applying the norm properties to (12), an inequality is obtained:
$$ \left\Vert {\widehat{\boldsymbol{x}}}_i\right\Vert \leqslant {\left\Vert {{\widehat{A}}^{\dagger}}_i\right\Vert}_2{\left\Vert {\boldsymbol{b}}_i\right\Vert}_2+{\left\Vert {{\widehat{A}}^{\dagger}}_i\right\Vert}_2{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2 $$
(13)
The inequality can be transformed into
$$ \frac{\left\Vert {\widehat{\boldsymbol{x}}}_i\right\Vert }{\left\Vert \boldsymbol{x}\right\Vert}\leqslant {\left\Vert {{\widehat{A}}^{\dagger}}_i\right\Vert}_2\frac{{\left\Vert {\boldsymbol{b}}_i\right\Vert}_2}{\left\Vert \boldsymbol{x}\right\Vert }+{\left\Vert {{\widehat{A}}^{\dagger}}_i\right\Vert}_2\frac{{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2}{\left\Vert \boldsymbol{x}\right\Vert } $$
(14)
It is noted that \( {A}_i{{\boldsymbol{x}}_i}^{\mathrm{T}}={\boldsymbol{b}}_i \). The inequalities
$$ \left\{\begin{array}{c}\hfill {\left\Vert {\boldsymbol{b}}_i\right\Vert}_2\leqslant {\left\Vert {A}_i\right\Vert}_2{\left\Vert \boldsymbol{x}\right\Vert}_2\hfill \\ {}\hfill 1/\left\Vert \boldsymbol{x}\right\Vert \leqslant {\left\Vert {A}_i\right\Vert}_2/{\left\Vert {\boldsymbol{b}}_i\right\Vert}_2\hfill \end{array}\right. $$
(15)
are concluded. Using those inequalities, (14) becomes
$$ \frac{\left\Vert {\widehat{\boldsymbol{x}}}_i\right\Vert }{\left\Vert \boldsymbol{x}\right\Vert}\leqslant {\left\Vert {{\widehat{A}}^{\dagger}}_i\right\Vert}_2{\left\Vert {A}_i\right\Vert}_2\left(1+\frac{{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2}{{\left\Vert {\boldsymbol{b}}_i\right\Vert}_2}\right) $$
(16)
Since \( {\boldsymbol{b}}_i={\widehat{\boldsymbol{b}}}_i-\varDelta {\widehat{\boldsymbol{b}}}_i \), property (6) of the norm gives \( {\left\Vert {\boldsymbol{b}}_i\right\Vert}_2\ge \left|{\left\Vert {\widehat{\boldsymbol{b}}}_i\right\Vert}_2-{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2\right| \). Therefore,
$$ \frac{\left\Vert {\widehat{\boldsymbol{x}}}_i\right\Vert }{\left\Vert \boldsymbol{x}\right\Vert}\leqslant {\left\Vert {{\widehat{A}}^{\dagger}}_i\right\Vert}_2{\left\Vert {A}_i\right\Vert}_2\left(1+\beta \right) $$
(17)
where \( \beta ={\left|{\left\Vert {\widehat{\boldsymbol{b}}}_i\right\Vert}_2/{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2-1\right|}^{-1} \).
Because \( {\left\Vert {A}_i\right\Vert}_2={\left\Vert {\widehat{A}}_i-\left({\widehat{A}}_i-{A}_i\right)\right\Vert}_2 \), it is obtained that
$$ {\left\Vert {A}_i\right\Vert}_2\leqslant {\left\Vert {\widehat{A}}_i\right\Vert}_2+{\left\Vert \varDelta {\widehat{A}}_i\right\Vert}_2. $$
(18)
Thus,
$$ {\left\Vert {\widehat{A}}_i^{\dagger}\right\Vert}_2{\left\Vert {A}_i\right\Vert}_2\le \kappa \left(1+{\left\Vert \varDelta {\widehat{A}}_i\right\Vert}_2/{\left\Vert {\widehat{A}}_i\right\Vert}_2\right) $$
(19)
where \( \kappa =\left\Vert {\widehat{A}}_i^{\dagger}\right\Vert \left\Vert {\widehat{A}}_i\right\Vert \). Finally, combining (17) and (19), the conclusion is obtained.
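The inequality (10) can be sanity-checked on a small synthetic LLS. The geometry matrix, noise levels, and the use of the spectral norm for κ below are illustrative assumptions, not the experimental setup of this paper:

```python
import numpy as np

rng = np.random.default_rng(2)
spec = lambda M: np.linalg.norm(M, 2)

A = rng.standard_normal((6, 2))           # exact anchor-geometry matrix A_i
x_true = rng.standard_normal(2)           # true coordinates x
b = A @ x_true                            # noiseless data b_i
dA = 0.01 * rng.standard_normal(A.shape)  # anchor-coordinate error
db = 0.05 * rng.standard_normal(b.shape)  # measurement noise

A_hat, b_hat = A + dA, b + db
x_hat = np.linalg.pinv(A_hat) @ b_hat     # LLS solution as in (12)

kappa = spec(np.linalg.pinv(A_hat)) * spec(A_hat)
alpha = spec(dA) / spec(A_hat)
beta = 1.0 / abs(np.linalg.norm(b_hat) / np.linalg.norm(db) - 1.0)

ratio = np.linalg.norm(x_hat) / np.linalg.norm(x_true)
assert ratio <= kappa * (1 + alpha) * (1 + beta)  # inequality (10)
```

Since κ is a condition number, κ ⩾ 1 always holds, and the bound tightens as the anchor geometry becomes better conditioned and the noise shrinks.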
Error upper bound of LLS-RSS
Theorem 1 gives a universal upper bound of the localization error. Since RSS is widely used as the measurement data, we propose a concrete numerical upper bound of the error of an RSS-based LLS (LLS-RSS). The upper bound is the foundation of the algorithm that constructs the optimum LLS in the next subsection.
To calculate the upper bound of Theorem 1, we need to calculate κ, α, and β. κ and α are calculable, because all of their components are related only to the known coordinates of the anchors (origin-anchors, pseudo-anchors, or a combination of them). However, the measurement noise \( \varDelta {\widehat{\boldsymbol{b}}}_i \) is random and unmeasurable, so the instantaneous value of β cannot be calculated. What can be obtained is the mean of β, and hence the mean of the upper bound. As an extension of Theorem 1, the mean error upper bound of LLS-RSS is given by Theorem 2.
Theorem 2. In an LLS-RSS expressed as \( {\widehat{A}}_i{{\widehat{\boldsymbol{x}}}_i}^{\mathrm{T}}={\widehat{\boldsymbol{b}}}_i+\varDelta {\widehat{\boldsymbol{b}}}_i \), \( \varDelta {\widehat{A}}_i \) is a matrix constructed from the minimum upper bounds of the localization errors of the corresponding anchors. If the radio propagation between the ith node and the kth node satisfies the distance-dependent path-loss model with log-normal fading, whose parameters are η and \( {X}_{\sigma_i} \), and the random variables \( {X}_{\sigma_i}\left(i=1,\cdots, n-1\right) \) are independent and identically distributed, then
$$ E\left[\frac{\left\Vert {\widehat{x}}_i\right\Vert }{\left\Vert x\right\Vert}\right]\le \kappa \left(1+\varsigma \right)\left(1+\frac{1}{\xi -1}\right) $$
(20)
where
$$ \begin{array}{l}\varsigma =\frac{\sqrt{{\displaystyle {\sum}_{k=1\left(k\ne i\right)}^n{\left\Vert \varDelta {\boldsymbol{x}}_k\right\Vert}_2^2}}+\left(n-1\right){\left\Vert \varDelta {\boldsymbol{x}}_k\right\Vert}_2^2}{\sqrt{{\displaystyle {\sum}_{k=1\left(k\ne i\right)}^n{\left\Vert {\boldsymbol{x}}_k-{\boldsymbol{x}}_i\right\Vert}_2^2}}}\\ {}\xi =\frac{\sqrt{{{\displaystyle {\sum}_{k=1\left(k\ne i\right)}^n\left({\left\Vert {\boldsymbol{x}}_k\right\Vert}_2^2-{\widehat{d}}_k^2-{\left\Vert {\boldsymbol{x}}_i\right\Vert}_2^2+{\widehat{d}}_i^2\right)}}^2}}{\left|c\right|\sqrt{{\displaystyle {\sum}_{k=1,k\ne i}^n{\left({\widehat{d}}_i^2-{\widehat{d}}_k^2\right)}^2}}+\sqrt{{\displaystyle {\sum}_{k=1\left(k\ne i\right)}^n{\left\Vert \varDelta {\boldsymbol{x}}_k\right\Vert}_2^2}}+\left(n-1\right){\left\Vert \varDelta {\boldsymbol{x}}_k\right\Vert}_2^2}\\ {}c=1- \exp \left[\frac{1}{2}{\left(\frac{2 \ln (10)}{10\eta}\sigma \right)}^2\right]\end{array} $$
(21)
Proof. According to the radio propagation model of distance-dependent path loss with lognormal fading [25], we have:
$$ {P}_r\left({d}_i\right)={P}_0-10\eta { \log}_{10}\left(\frac{d_i}{d_0}\right)+{X}_{\sigma_i} $$
(22)
The distance between the kth anchor and the unknown node, denoted as \( {d}_k \), should be calculated as
$$ {d}_k={d}_0{10}^{\frac{P_0-{P}_r\left({d}_k\right)+{X}_{\sigma_i}}{10\eta }} $$
(23)
However, since \( {X}_{\sigma_i} \) is a physically immeasurable random variable, the kth estimated distance, denoted as \( {\widehat{d}}_k \), is calculated as
$$ {\widehat{d}}_k={d}_0{10}^{\frac{P_0-{P}_r\left({d}_k\right)}{10\eta }} $$
(24)
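The ranging equations (22)–(24) can be simulated to see the effect of dropping the unobservable shadowing term. All parameter values below (P0, d0, η, σ, and the true distance) are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

P0, d0 = -40.0, 1.0    # reference power (dBm) at reference distance d0 (m); assumed
eta, sigma = 3.0, 2.0  # path-loss exponent and shadowing std (dB); assumed
d_true = 10.0          # true anchor-to-node distance (m); assumed

# Received power under the log-normal shadowing model (22)
X = sigma * rng.standard_normal(10_000)
Pr = P0 - 10 * eta * np.log10(d_true / d0) + X

# Estimated distance (24): the shadowing term X is dropped
d_hat = d0 * 10 ** ((P0 - Pr) / (10 * eta))
print(round(d_hat.mean(), 2))  # biased slightly above d_true on average
```

The residual bias of this estimator is exactly what (25)–(26) quantify.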
Introduce \( \varDelta {\widehat{d}}_k^2 \), which is defined as
$$ \varDelta {\widehat{d}}_k^2\triangleq {\widehat{d}}_k^2-{d}_k^2={\widehat{d}}_k^2\left(1-{10}^{\frac{2{X}_{\sigma_i}}{10\eta }}\right) $$
(25)
The mean of \( \varDelta {\widehat{d}}_k^2 \) is known to be [3]
$$ E\left[\varDelta {\widehat{d}}_k^2\right]={\widehat{d}}_k^2\left(1- \exp \left[\frac{1}{2}{\left(\frac{2 \ln (10)}{10\eta}\sigma \right)}^2\right]\right) $$
(26)
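The closed form (26) can be verified by Monte Carlo sampling of (25); the values of η, σ, and \( {\widehat{d}}_k \) below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
eta, sigma, d_hat_k = 3.0, 2.0, 10.0  # assumed path-loss exponent, shadowing std, d_hat

# Sample Delta d_k^2 from (25) with X ~ N(0, sigma^2)
X = sigma * rng.standard_normal(1_000_000)
dd2 = d_hat_k**2 * (1 - 10 ** (2 * X / (10 * eta)))

# Closed-form mean from (26)
mean_cf = d_hat_k**2 * (1 - np.exp(0.5 * (2 * np.log(10) * sigma / (10 * eta)) ** 2))
assert abs(dd2.mean() - mean_cf) < 0.5  # Monte Carlo mean agrees with (26)
```

Note that the same exponential factor reappears as the constant c in (21).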
Consider the definition of \( {b}_k^i \), the kth component of \( {\boldsymbol{b}}_i \); then \( {b}_k^i \) can be denoted as
$$ {b}_k^i={\left\Vert {\boldsymbol{x}}_k\right\Vert}_2^2-{\widehat{d}}_k^2-\left({\left\Vert {\boldsymbol{x}}_i\right\Vert}_2^2-{\widehat{d}}_i^2\right) $$
(27)
In a practical localization system, the noise always exists in measurement data. It is defined as
$$ {b}_k^i\triangleq {\widehat{b}}_k^i-\left(\varDelta {\widehat{b}}_{k,1}^i+\varDelta {\widehat{b}}_{k,2}^i\right) $$
(28)
We introduce \( \varDelta {\widehat{\boldsymbol{b}}}_{i,1} \) and \( \varDelta {\widehat{\boldsymbol{b}}}_{i,2} \) to denote the vectors whose elements are \( \varDelta {\widehat{b}}_{k,1}^i \) and \( \varDelta {\widehat{b}}_{k,2}^i \), respectively. Thus, there is
$$ E\left(\frac{{\left\Vert {\widehat{\boldsymbol{b}}}_i\right\Vert}_2}{{\left\Vert \varDelta {\widehat{\boldsymbol{b}}}_i\right\Vert}_2}\right)\leqslant \frac{{\left\Vert {\widehat{\boldsymbol{b}}}_i\right\Vert}_2}{{\left\Vert E\left(\varDelta {\widehat{b}}_{k,1}^i\right)\right\Vert}_2+{\left\Vert E\left(\varDelta {\widehat{b}}_{k,2}^i\right)\right\Vert}_2} $$
(29)
where \( {\left\Vert E\left(\varDelta {\widehat{\boldsymbol{b}}}_{i,2}\right)\right\Vert}_2=\left\Vert E\left(\varDelta {\widehat{d}}_i\right)-E\left(\varDelta {\widehat{d}}_k\right)\right\Vert \) since the random variables are independent and identically distributed.
Using Theorem 1 and combining (25), (29), and the norm definition, (20) and (21) are obtained.
Optimum algorithm of constructing LLS
Theorem 2 gives a numerical estimate of the localization accuracy as influenced by node information and measurement data. The theorem can handle the situation in which the positions of the anchor nodes contain errors. Although the error upper bound is calculated in a statistical sense, it holds for \( \varDelta {\widehat{\boldsymbol{b}}}_i/{\boldsymbol{b}}_i \) with high probability, which is verified in experiments.
Therefore, the minimum upper bound can be used as a localization quality indicator of an LLS. An LLS construction algorithm, the optimum algorithm of constructing LLS (OAC-LLS), shown as Algorithm 1, is proposed. This algorithm uses the minimum upper bound to choose, with high probability, the best candidate from the LLS set.
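Because Algorithm 1 is not reproduced here, the following sketch only illustrates the selection rule: score each candidate LLS (one per BAN choice) by the Theorem 1 bound and keep the candidate with the minimum bound. The function names and inputs are hypothetical:

```python
import numpy as np

def lls_upper_bound(A_hat, dA_norm, b_hat, db_norm):
    """kappa * (1 + alpha) * (1 + beta) from Theorem 1 (sketch)."""
    spec = lambda M: np.linalg.norm(M, 2)
    kappa = spec(np.linalg.pinv(A_hat)) * spec(A_hat)
    alpha = dA_norm / spec(A_hat)
    beta = 1.0 / abs(np.linalg.norm(b_hat) / db_norm - 1.0)
    return kappa * (1 + alpha) * (1 + beta)

def oac_lls(candidates):
    """Return the index of the candidate LLS with the smallest error bound."""
    return int(np.argmin([lls_upper_bound(*c) for c in candidates]))

# Two candidate geometries: near-collinear anchors vs. well-spread anchors
rng = np.random.default_rng(5)
A_good = rng.standard_normal((6, 2))
A_bad = A_good.copy()
A_bad[:, 1] = A_bad[:, 0] + 1e-3 * rng.standard_normal(6)  # ill-conditioned
b = rng.standard_normal(6)
best = oac_lls([(A_bad, 0.01, b, 0.1), (A_good, 0.01, b, 0.1)])
print(best)  # the well-conditioned candidate is selected
```

The selection is dominated by κ, so badly conditioned anchor geometries (near-collinear anchors) are rejected even when the noise terms are identical.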
Remark: The environment parameter σ/η is needed to calculate \( {E}_i \) in (19). However, an estimated value can be used without significantly affecting the result. This means that the algorithm can be fully “blind”, operating on an assumed value. Of course, any knowledge of the parameter can improve the algorithm’s performance. This conclusion is discussed in Section 5.2.