
Adaptive robust time-of-arrival source localization algorithm based on variable step size weighted block Newton method

Abstract

We propose line-of-sight (LOS)/non-line-of-sight (NLOS) mixture source localization algorithms based on the weighted block Newton (WBN) and variable step size WBN (VSSWBN) methods, in which the weighting matrix is given as the inverse of the squared error or as an exponential function with a negative exponent. The proposed WBN and VSSWBN algorithms converge in two iterations; thus, the number of extra samples required in the transient period is negligible. We also analyze the mean square error (MSE) of the WBN method. To verify the superiority of the proposed methods, their MSE performance is compared with that of existing algorithms via extensive simulations.

1 Introduction

The aim of a source localization system is to find the geometrical point of intersection using measurements from each receiver, such as the time difference of arrival (TDOA), time of arrival (TOA), or received signal strength (RSS). Localizing a point source with passive, stationary sensors has been a popular research topic in radar, sonar, global positioning systems, video conferencing, and telecommunications. Two key issues need to be resolved in source localization. The first is that wireless localization systems must be able to cope with rapidly varying channel conditions. To this end, adaptive filters have been employed in adaptive source localization methods to stabilize localization performance under varying channel conditions [1–3]. However, these adaptive source localization algorithms were designed for Gaussian noise, and their localization performance is severely degraded in impulsive noise environments, such as Gaussian mixture and Student’s t distributions. Also, the convergence rate of the least mean square (LMS) algorithm adopted in [1–3] may be much slower in some adverse environments. The second key issue is the challenge of localization in line-of-sight (LOS)/non-line-of-sight (NLOS) mixture environments. LOS/NLOS mixture scenarios occur when there are obstructions between transmitters and receivers, both in indoor environments and in outdoor situations such as urban areas. The localization performance of traditional approaches, which assume only LOS conditions, is severely degraded under NLOS conditions; thus, mitigation of NLOS errors has become an urgent task and has been extensively investigated in the last decade. In general, research on the LOS/NLOS mixture problem for localization takes one of two approaches: (1) constrained least squares (LS) methods using optimization, such as semidefinite relaxation and second-order cone relaxation [4–9], and (2) localization based on "NLOS identify and discard" [10–14]. Although localization using the optimization approach has relatively high accuracy, its computational load is intensive. Localization using the "NLOS identify and discard" approach also has relatively high accuracy when the LOS/NLOS mixture sensors are perfectly separated from the LOS sensors. However, complete classification of LOS sensors and LOS/NLOS mixture sensors is nearly impossible, so classification errors exist and the resulting false classifications drastically increase the localization error. Furthermore, when the number of sensors is large, the number of cases to be calculated increases, making the computational burden very high. Although our work deals with the case in which the sensor positions are accurately known, some recent works assume that the sensor coordinates are unknown [15–18].

The motivation for this paper is as follows. Adaptive localization methods have been proposed for positioning in changing environments [1–3]. However, conventional adaptive localization algorithms, which minimize the squared error sum, are not robust in non-Gaussian (impulsive) noise situations, so they must be adapted to impulsive noise conditions. Our proposed algorithm minimizes the weighted squared error sum instead of the squared error sum, and the weighting matrix in the proposed algorithm counteracts the adverse effects of the LOS/NLOS mixture sensors. Namely, the weight is small when the sample is an outlier, which prevents an outlier-corrupted measurement from entering the minimization of the cost function. Also, the convergence rate of the LMS method is slower than that of the Newton method; this slow convergence increases the number of samples consumed in the transient period of the adaptive filter, which is clearly a waste of resources. Thus, we propose robust weighted block Newton (WBN) and variable step size WBN (VSSWBN) algorithms that use the Newton method instead of the LMS method. As can be seen from the simulation results, the proposed WBN and VSSWBN methods converge in two iterations. Therefore, the number of additional samples in the transient period of the adaptive localization algorithm is negligible.

The organization of this paper is as follows. Section 2 explains the LOS/NLOS mixture source localization problem to be solved in this paper. In Section 3, the details of the existing localization methods are addressed. The proposed adaptive localization methods using the weighting matrix and variable step size are addressed in Section 4. The MSE analysis of the proposed WBN algorithm is performed in Section 5. The estimation performances of the proposed methods are evaluated via simulation results in Section 6, comparing them with those of the existing algorithms. Finally, the conclusion is presented in Section 7.

2 Problem formulation

The main goal of the TOA-based source localization method is to accurately determine the position of a source using multiple circles whose centers are the locations of sensors. In the LOS/NLOS mixture source localization context, the measurement equation is represented as

$$\begin{array}{@{}rcl@{}} {r}_{{i}}&=&{d}_{i}+{n}_{{i}} =\sqrt{({x}-{x}_{i})^{2} + ({y}-{y}_{i})^{2}}+{n}_{{i}}, \end{array} $$
(1)

where \({n}_{{i}} \sim (1-\epsilon){N}(0,\sigma _{1}^{2})+ \epsilon {N}(\mu _{2},\sigma _{2}^{2}), \sigma _{1}^{2}\ll \sigma _{2}^{2},\;\;{i}=1,2,{\ldots },{M},\;\;\;\) with M denoting the number of sensors [7, 19–21]. Also, r i is the measured distance between the source and the ith sensor and d i is the range (distance) model between the source and the ith sensor. The measurement noise n i is modeled as a Gaussian mixture distribution, where the LOS noise is distributed according to \({N}\left (0,\sigma _{1}^{2}\right)\) with probability (1−ε) and the NLOS noise is distributed according to \({N}\left (\mu _{2},\sigma _{2}^{2}\right)\) with probability ε. It is assumed that while the statistics of the inliers can be obtained, the mean and variance of the outlier distribution are unknown. In practice, \(\sigma _{1}^{2}\) can be estimated by observing the energy bins in the absence of the transmitted signal, and indeed the sample variance is usually adopted. Here, ε (0≤ε≤1) is the contamination ratio (i.e., fraction of contamination), which is a small number (typically smaller than 0.1) [7, 19–21]. Also, [x y]T is the true source position and [x i y i ]T is the position of the ith sensor. Note that, throughout this paper, a lowercase boldface letter denotes a vector, an uppercase boldface letter indicates a matrix, and the superscript T signifies the vector/matrix transpose. The purpose of this paper is to determine the source position that minimizes the MSE of the position estimate.
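As a concrete illustration of the measurement model in (1), the following Python sketch generates LOS/NLOS mixture range measurements for an arbitrary sensor layout. It is only a sketch of the noise model; the function name, the default noise parameters, and the circular sensor layout are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def toa_measurements(source, sensors, eps=0.2, sigma1=0.01, mu2=4.0, sigma2=10.0, rng=None):
    """Generate TOA range measurements r_i = d_i + n_i under the Gaussian-mixture
    noise model (1): LOS noise N(0, sigma1^2) with probability 1-eps and
    NLOS noise N(mu2, sigma2^2) with probability eps."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.linalg.norm(sensors - source, axis=1)            # true ranges d_i
    nlos = rng.random(len(sensors)) < eps                   # which measurements are outliers
    noise = np.where(nlos,
                     rng.normal(mu2, sigma2, len(sensors)),
                     rng.normal(0.0, sigma1, len(sensors)))
    return d + noise

# Example: 7 sensors on a circle of radius 10 m (similar to the simulation setup), source at (2, 3).
angles = 2 * np.pi * np.arange(7) / 7
sensors = 10.0 * np.column_stack([np.cos(angles), np.sin(angles)])
r = toa_measurements(np.array([2.0, 3.0]), sensors)
```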

3 Review of the existing TOA localization methods

In this section, we briefly discuss the block LMS algorithm and the M- and LMedS estimators in terms of the formulation of the source localization problem.

3.1 Block LMS source localization [1]

Squaring (1) and rearranging yield the following equation:

$$ \begin{aligned} &{x}_{{i}}{x}+{y}_{{i}}{y}-0.5{R}+{m}_{{i}}=0.5\left({{x}_{{i}}}^{2}+{{y}_{{i}}}^{2}-{r}_{{i}}^{2}\right),\\ &\quad{i}=1,2,{\ldots},{M}, \end{aligned} $$
(2)

where \({R}={x}^{2}+{y}^{2}, {m}_{{i}}= -{d}_{{i}}{n}_{{i}}-\frac {1}{2} {n}_{{i}}^{2}.\) For convenience, (2) can be simply represented in a matrix form as

$$\begin{array}{@{}rcl@{}} {{A}}{{x}}+{q}={{b}}, \;\;\;\;\;\;\; \end{array} $$
(3)

where q=[m 1, …, m M ]T, x=[x y R]T,

$$\begin{array}{@{}rcl@{}} {A}&=&\left(\begin{array}{ccc} {x}_{1 }& {y}_{1} & -0.5 \\ \vdots & \vdots & \vdots \\ {x}_{{M}} & {y}_{{M}} & -0.5 \\ \end{array} \right) \text{and}\;\; {b}=\frac{1}{2}\left(\begin{array}{c} {x}_{1}^{2}+{y}_{1}^{2}-{r}_{1}^{2} \\ \vdots \\ {x}_{{M}}^{2}+{y}_{{M}}^{2}-{r}_{{M}}^{2} \\ \end{array} \right). \end{array} $$

The block LMS location estimate is obtained iteratively as follows:

$$\begin{array}{@{}rcl@{}} {x}^{({k}+1)}={x}^{({k})}+\mu {A}^{T}{e}^{({k})} \end{array} $$
(4)

where e (k)=b k −A x (k), the superscript (k) denotes the iteration number, and μ is a positive step size (see [1] for details). Note that A T in the block LMS method plays the role of the input signal in the conventional LMS algorithm.
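To make the block LMS formulation concrete, the sketch below builds A and b from (3) and runs the update (4). This is an illustrative implementation only; the step size, iteration count, and zero initialization are assumptions, and the step size must satisfy 0 < μ < 2/λ_max(A^T A) for convergence.

```python
import numpy as np

def block_lms_localize(sensors, r, mu=1e-3, n_iter=500):
    """Block LMS localization sketch for (3)-(4): x^(k+1) = x^(k) + mu * A^T e^(k),
    where x = [x, y, R]^T and R = x^2 + y^2 is treated as a free parameter."""
    A = np.column_stack([sensors[:, 0], sensors[:, 1], -0.5 * np.ones(len(sensors))])
    b = 0.5 * (sensors[:, 0] ** 2 + sensors[:, 1] ** 2 - r ** 2)
    x = np.zeros(3)                  # initial estimate of [x, y, R]
    for _ in range(n_iter):
        e = b - A @ x                # block error vector e^(k)
        x = x + mu * A.T @ e         # gradient-type update (4)
    return x[:2]                     # estimated source coordinates
```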

3.2 LMedS estimator

Classical LS regression consists of minimizing the sum of the squared residuals. In the LMedS algorithm, the sum is replaced by the median [22] of the squared residuals. The LMedS estimator can resist the effect of nearly 50% contamination in the data. The algorithm used to obtain a solution with this method can be summarized as follows [23] (a code sketch is given after the list):

(1) Compute the m subsets of three measurements.

(2) For each subset S, compute a location P S by trilateration.

(3) For each solution P S , the residues R S are obtained as

$$\begin{array}{@{}rcl@{}} {R}_{S}=\left[({r}_{1}-\hat{d}_{1})^{2},({r}_{2}-\hat{d}_{2})^{2},\cdots,({r}_{M}-\hat{d}_{M})^{2}\right] \end{array} $$
(5)

where r i is the same as in (1), \(\hat {d}_{i}=\sqrt {\left ({x}_{ps}-{x}_{i}\right)^{2}+ \left ({y}_{ps}-{y}_{i}\right)^{2}}\), and the median of the residuals R S is computed.

(4) The solution P S that gives the minimum median is chosen as the source location.
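The following sketch implements the LMedS procedure above. The trilateration helper linearizes the three circle equations by subtracting the first from the other two; both function names are illustrative assumptions, not code from the referenced work.

```python
import numpy as np
from itertools import combinations

def lmeds_localize(sensors, r):
    """LMedS sketch: for every 3-sensor subset, solve for a candidate position
    and keep the one minimizing the median of squared residuals over all sensors."""
    best, best_med = None, np.inf
    for idx in combinations(range(len(sensors)), 3):
        p = trilaterate(sensors[list(idx)], r[list(idx)])   # candidate position P_S
        if p is None:
            continue
        d_hat = np.linalg.norm(sensors - p, axis=1)
        med = np.median((r - d_hat) ** 2)                   # median of squared residuals
        if med < best_med:
            best, best_med = p, med
    return best

def trilaterate(s, r):
    """Linearized 3-circle intersection: subtract the first circle equation from the others."""
    A = 2 * (s[1:] - s[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(s[1:] ** 2, axis=1) - np.sum(s[0] ** 2)
    try:
        return np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        return None
```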

3.3 M-estimator

The M-estimator is a class of robust estimators that has been considered for NLOS mitigation purposes. The location estimate using the M-estimator is obtained as follows:

$$\begin{array}{@{}rcl@{}} \widehat{{x}}^{\mathrm{M}}=\min_{{x}}\sum_{{i}=1}^{{M}} \rho\left\{\frac{ {b}_{{i}}-{a}_{{i}}^{T}{x}}{s}\right\} \end{array} $$
(6)

where \( {a}_{{i}}^{T} \) is the ith row vector of A, x is the position parameter to be estimated, and b i is the sample from the ith sensor. The standard deviation in (6) is unknown, so it must be estimated. The median absolute deviation (MAD), denoted s, is used as the estimate of the standard deviation and is defined as

$$\begin{array}{@{}rcl@{}} {s}=1.483\;\text{med}_{{i}}\left[\left|\left\{ {b}_{{i}}-{a}_{{i}}^{T}{x}\right\}-\text{med}_{{k}}\left\{ {b}_{{k}}-{a}_{{k}}^{T}{x}\right\}\right|\right] \end{array} $$
(7)

where med denotes the median. Also, ρ(·) is defined as follows:

$$\begin{array}{@{}rcl@{}} \rho(\nu)=\left\{\begin{array}{ll} \nu^{2}/2 &|\nu|\leq \eta\\ \eta|\nu|-\eta^{2}/2&|\nu|>\eta. \end{array}\right. \end{array} $$
(8)

Then, the position estimate using the M-estimator is obtained by using the Newton method as follows:

$$\begin{array}{@{}rcl@{}} \widehat{{x}}^{({k}+1)}=\widehat{{x}}^{({k})}+{s} ({A}^{T}{A})^{-1}{A}^{T}\psi\left\{\frac{ {b}-{A}\widehat{{x}}^{({k})}}{s}\right\} \end{array} $$
(9)

where

$$\begin{array}{@{}rcl@{}} \psi(\nu)=\left\{\begin{array}{ll} \nu &|\nu|\leq \eta\\ \eta\cdot \text{sign}(\nu)&|\nu|>\eta, \end{array}\right. \end{array} $$
(10)

and sign(·) is the sign function defined as \(\text {sign}(\nu)=\left \{\begin {array}{lll}1, & \text {if }\nu >0 \\0, & \text {if }\nu =0 \\ -1, & \text {if }\nu < 0. \end {array} \right.\)
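A compact sketch of the M-estimator iteration (9), with the MAD scale estimate (7) and the Huber influence function (10), is given below. The threshold value η = 1.345 and the least-squares initialization are assumptions made for illustration; the paper itself leaves η unspecified.

```python
import numpy as np

def m_estimate(sensors, r, eta=1.345, n_iter=20):
    """Huber M-estimator sketch implementing iteration (9)-(10) with the MAD scale (7)."""
    A = np.column_stack([sensors[:, 0], sensors[:, 1], -0.5 * np.ones(len(sensors))])
    b = 0.5 * (sensors[:, 0] ** 2 + sensors[:, 1] ** 2 - r ** 2)
    x = np.linalg.lstsq(A, b, rcond=None)[0]            # LS initialization
    psi = lambda v: np.clip(v, -eta, eta)               # Huber influence function (10)
    for _ in range(n_iter):
        res = b - A @ x
        s = 1.483 * np.median(np.abs(res - np.median(res)))   # MAD scale estimate (7)
        if s < 1e-12:
            break
        x = x + s * np.linalg.solve(A.T @ A, A.T @ psi(res / s))   # Newton step (9)
    return x[:2]
```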

4 Proposed adaptive robust localization methods

In this paper, the LOS/NLOS mixture state is divided into the LOS and LOS/NLOS states. The LOS state denotes the case where the contamination ratio is zero (ε=0) and the LOS/NLOS state is the condition in which 0<ε≤1. The adaptive robust localization algorithms in this paper can be represented as follows.

4.1 WBN method

The proposed algorithm modifies the block LMS source localization algorithm [1] to robustify it against outliers by using a weighting matrix given as the inverse of the squared error or as an exponential function with a negative exponent. Also, we adopt the block Newton method instead of the block LMS algorithm because the convergence rate of the Newton method is known to be much faster than that of the LMS algorithm [24]. The simulation results show that the WBN algorithm converges in two iterations. The cost function to be minimized is defined as follows:

$$\begin{array}{@{}rcl@{}} ({b}-{A}{x})^{T}{Q}^{-1}({b}-{A}{x}). \end{array} $$
(11)

The solution of the WBN method is represented as follows:

$$\begin{array}{@{}rcl@{}} x^{({k}+1)}&=&{x}^{({k})}-\mu{{H}^{({k})}}^{-1}{g}^{({k})}\\ &=&{x}^{({k})}+\mu({A}^{T} {Q}^{({k})}{A})^{-1}{A}^{T}{Q}^{({k})}{e}^{({k})} \end{array} $$
(12)

where H (k) is the Hessian matrix, g (k) is the gradient vector of the cost function in the kth iteration, \( {Q}^{({k})}=\text {diag}\left [\!\frac {1}{({b}_{{1},{k}}- {a}_{1}^{T} {x}^{({k})})^{2}}\cdots \frac {1}{({b}_{{M},{k}}- {a}_{M}^{T} {x}^{({k})})^{2}}\right ]\) or \(\text {diag}\left [e^{-\frac {({b}_{{1},{k}}- {a}_{1}^{T} {x}^{({k})})^{2}}{2\zeta }}\cdots \right.\left. e^{-\frac {({b}_{{M},{k}}- {a}_{M}^{T} {x}^{({k})})^{2}}{2\zeta }}\right ]\), ζ is a tuning parameter to be determined offline, e (k)=b k −A x (k), b k =[b 1,k ⋯ b M,k ]T, and b i,k denotes the sample of the ith sensor at the kth iteration. We use the weighting matrix Q (k) to alleviate the adverse effect of the LOS/NLOS mixture components. Note that the weights are small for samples contaminated by outliers because their squared residuals are large, whereas they are large for samples in the LOS state because their squared residuals are small. The advantage of using the weighting matrix Q (k) is that it alleviates the adverse effects of outliers. Meanwhile, the disadvantages are that the estimation performance may be inferior to an LOS-based algorithm in the LOS situation and that the weight can diverge to infinity when the squared residual is close to zero. In the latter case, a small positive value can be added to the squared residual for the stability of the algorithm.
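A minimal sketch of one WBN update (12) with both choices of weighting matrix is given below. The regularization constant delta corresponds to the small positive value mentioned above; the function name and default values are illustrative assumptions.

```python
import numpy as np

def wbn_update(sensors, b_k, x_prev, mu=0.99, weighting="inverse", zeta=1.0, delta=1e-6):
    """One weighted block Newton update (12). Q^(k) is either the inverse squared
    residual or an exponential (Gaussian kernel) function of the residual."""
    A = np.column_stack([sensors[:, 0], sensors[:, 1], -0.5 * np.ones(len(sensors))])
    e = b_k - A @ x_prev                                # block error e^(k)
    if weighting == "inverse":
        q = 1.0 / (e ** 2 + delta)                      # inverse squared-error weights
    else:
        q = np.exp(-e ** 2 / (2.0 * zeta))              # exponential weights with tuning parameter zeta
    Q = np.diag(q)
    step = np.linalg.solve(A.T @ Q @ A, A.T @ Q @ e)    # (A^T Q A)^{-1} A^T Q e
    return x_prev + mu * step
```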

4.2 WCBN method

The clipped LMS method has been widely used to reduce the complexity of the LMS algorithm [25]. The computational complexity is reduced by clipping the input data to their polarity bits, which makes the clipped LMS algorithm a multiplication-free method. Following this idea, the matrix A is quantized by the sign function because A T in the block Newton method corresponds to the input signal in the conventional LMS algorithm. Then, the proposed WCBN algorithm is obtained as follows:

$$\begin{array}{@{}rcl@{}} {x}^{({k}+1)}&=&{x}^{({k})}+\mu\left(\text{sign}\left({A}^{T}\right) {Q}^{({k})}\text{sign}({A})\right)^{-1}\\&&\text{sign}\left({A}^{T}\right){Q}^{({k})}{e}^{({k})}. \end{array} $$
(13)
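For completeness, a sketch of the WCBN update (13) is shown below, reusing the error vector and weighting matrix defined for the WBN method; the function name is an assumption.

```python
import numpy as np

def wcbn_update(A, Q, e, x_prev, mu=0.99):
    """One weighted clipped block Newton update (13): A is replaced by sign(A),
    so products involving the data matrix reduce to sign changes and additions."""
    S = np.sign(A)
    step = np.linalg.solve(S.T @ Q @ S, S.T @ Q @ e)
    return x_prev + mu * step
```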

4.3 VSSWBN algorithm

The variable step size adaptive algorithm has been used to improve the MSE performance of the fixed step size adaptive method [2628]. This algorithm updates the step size by minimizing the MSE cost function and adopting this technique to the WBN method yields the following iterative equation:

$$\begin{array}{*{20}l} \mu^{({k})}&=\mu^{({k}-1)}-\frac{\rho}{2}\frac{\partial J^{({k})}}{\partial {{x}^{({k})}}^{T}}\cdot\frac{\partial{x}^{({k})}}{\partial \mu^{({k}-1)}}\\ &=\mu^{({k}-1)}+\rho\left({{e}^{({k})}}^{T}{Q}^{({k})}{A}\right) \left({A}^{T}{Q}^{({k}-1)}{A}\right)^{-1}\\ &\quad\times\left({A}^{T}{Q}^{({k}-1)}{e}^{({k}-1)}\right)\\ \end{array} $$
(14)
$$\begin{array}{@{}rcl@{}} \mu^{({k})}&=&\left\{ \begin{array}{ll} \mu_{\text{max}}, & \text{\(\mu^{({k})}\geq\mu_{\text{max}}\)} \\ \mu_{\text{min}}, & \text{\(\mu^{({k})}<\mu_{\text{min}}\)} \\ \mu^{({k})}, & \text{otherwise} \end{array} \right.\\ \end{array} $$
(15)
$$\begin{array}{*{20}l} {x}^{({k}+1)}={x}^{({k})}+\mu^{({k})}\left({A}^{T}{Q}^{({k})}{A}\right)^{-1} {A}^{T}{Q}^{({k})}{e}^{({k})} \end{array} $$
(16)

where J (k)=e (k)T Q (k) e (k), e (k)=b k −A x (k), and μ max and ρ are generally selected through experiments to provide the maximum convergence rate while keeping the steady-state misadjustment error small. The value of μ min is determined by the acceptable level of steady-state misadjustment and the required tracking capability. The transient period of the adaptive source localization method should be short because extra samples are consumed until the algorithm converges, compared to the existing robust localization algorithms. Therefore, the step size is set as large as possible in the proposed method to aid fast convergence. In our simulation results, the VSSWBN algorithm showed superior MSE performance and a similar convergence rate compared to the WBN method for large step sizes (both algorithms converged in two iterations). However, the MSE performances of the VSSWBN and WBN algorithms were similar for small or medium step sizes.
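The VSSWBN iteration (14)-(16) can be sketched as follows; the default values of ρ, μ_min, and μ_max mirror those used later in the simulations, but the function signature itself is an illustrative assumption.

```python
import numpy as np

def vsswbn_update(A, Q_prev, e_prev, Q, e, x_prev, mu_prev,
                  rho=0.1, mu_min=0.01, mu_max=1.0):
    """One VSSWBN update following (14)-(16): adapt the step size from the
    correlation of successive weighted errors, clip it to [mu_min, mu_max],
    then take a weighted block Newton step."""
    # Step-size adaptation (14)
    mu = mu_prev + rho * (e.T @ Q @ A) @ np.linalg.solve(A.T @ Q_prev @ A,
                                                         A.T @ Q_prev @ e_prev)
    # Clipping (15)
    mu = float(np.clip(mu, mu_min, mu_max))
    # Weighted block Newton step (16)
    x = x_prev + mu * np.linalg.solve(A.T @ Q @ A, A.T @ Q @ e)
    return x, mu
```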

5 Performance analysis

5.1 MSE performance analysis

This section presents the MSE analysis of the proposed algorithm. We derive the MSE of the WBN algorithm instead of the VSSWBN method for convenience of analysis. The state error vector is represented as

$$ \begin{aligned} x^{({k}+1)}-{x}^{o}&={x}^{({k})}-{x}^{o}+\mu\left({A}^{T}{Q}^{({k})}{A}\right)^{-1}{A}^{T} {Q}^{({k})}\\ &\quad\times\left({b}_{k}-{A}{x}^{({k})}\right)\\ &=(1-\mu)\left({x}^{({k})}-{x}^{o}\right)+\mu\left({A}^{T}{Q}^{({k})}{A}\right)^{-1}\\ &\qquad{A}^{T}{Q}^{({k})}\left({b}_{k}-{A}{x}^{o}\right) \end{aligned} $$
(17)

where x o is the true source position. The steady-state error vector (v (∞)) can be attained as

$$ \begin{aligned} &{v}^{({k}+1)}\\ &=(1-\mu){v}^{({k})}+\mu({A}^{T}{Q}^{({k})}{A})^{-1}{A}^{T}{Q}^{({k})} ({b}_{k}-{A}{x}^{o})\\ &=(1-\mu)^{k}{v}^{(0)}+\mu\sum_{{j}=0}^{{k}-1}\bigg\{(1-\mu)^{({k}-{j}-1)}({A}^{T}{Q}^{({j})}{A})^{-1}{A}^{T}\\ &\qquad\qquad\qquad\qquad\qquad\quad{Q}^{({j})}({b}_{j}-{A}{x}^{o})\bigg\}\\ &{v}^{(\infty)}\simeq{\lim}_{{k}\rightarrow\infty}\mu\sum_{{j}=0}^{{k}-1}\bigg\{(1-\mu)^{({k}-{j}-1)} ({A}^{T}{Q}^{({j})}{A})^{-1}{A}^{T}{Q}^{({j})}\\ &\qquad\qquad\qquad\qquad\quad({b}_{j}-{A}{x}^{o})\bigg\}. \end{aligned} $$
(18)

The MSE equals the trace of the steady-state error covariance matrix. Therefore, the MSE is represented as follows:

$$ \begin{aligned} &\text{tr}\left\{{E}\left[{v}^{(\infty)}{{v}^{(\infty)}}^{T}\right]\right\}\\ &\simeq{\lim}_{{k}\rightarrow\infty}\text{tr}\left\{\mu^{2}\sum_{{j}=0}^{{k}-1}\sum_{{q}=0}^{{k}-1} \left[{\vphantom{{\vphantom{\sum_{{j}=0}^{{k}-1}}}}}(1-\mu)^{({k}-{q}-1)}({A}^{T}{Q}^{(q)}{A})^{-1}{A}^{T}{Q}^{({q})}\right.\right.\\ &\quad\times{E}\left[\left({b}_{q}-{A}{x}^{o}\right)\left({b}_{j}-{A}{x}^{o}\right)^{T}\right] {Q}^{({j})}{A}\left({A}^{T}{Q}^{({j})}{A}\right)\\ &\left.\left.{\vphantom{\sum_{{j}=0}^{{k}-1}}}\;\;\times(1-\mu)^{({k}-{j}-1)}\right]\right\}\\ &={\lim}_{{k}\rightarrow\infty}\text{tr}\left[\mu^{2}\sum_{{l}=0}^{{k}-1}\left\{(1-\mu)^{2({k}-{l}-1)} \left({A}^{T}{Q}^{({l})}{A}\right)^{-1}\right.\right.\\ &\left.\left.\quad\times{A}^{T}{Q}^{({l})}{R}_{{l}}{Q}^{({l})}{A}({A}^{T}{Q}^{({l})}{A})^{-1} {\vphantom{\left({A}^{T}{Q}^{({l})}{A}\right)^{-1}}}\right\}{\vphantom{\sum_{{l}=0}^{{k}-1}}}\right]\\ &\simeq \text{tr}\left[\mu^{2}\left\{\left({A}^{T}{Q}^{({k}-1)}{A}\right)^{-1}{A}^{T}{Q}^{({k}-1)} {R}_{{k}-1}{Q}^{({k}-1)}\right.\right.\\&\quad\left.\left.\times{A}\left({A}^{T}{Q}^{({k}-1)}{A}\right)^{-1}\right\}\right]\\ &\simeq\text{tr}\left[\mu^{2}({A}^{T}{Q}^{({k}-1)}{A})^{-1}\right] \end{aligned} $$
(19)

where

$$\begin{array}{@{}rcl@{}} {R}_{l}&=&{E}\left[\left({b}_{l}-{A}{x}^{o}\right)\left({b}_{l}-{A}{x}^{o}\right)^{T}\right]\\ &=&\text{diag}\left[R_{1,1}^{l}\cdots R_{{i},{i}}^{l} \cdots R_{{M},{M}}^{l}\right]\\ R_{{i},{i}}^{l}&=&\left\{ \begin{array}{ll} {d}_{i}^{2}\sigma_{1}^{2}, & \text{if {i}\(\in\) LOS sensor;} \\ (1-\epsilon){d}_{i}^{2}\sigma_{1}^{2}+\epsilon ({d}_{i}^{2}\sigma_{2}^{2} + \frac{1}{2}\sigma_{2}^{4}+\mu_{2}^{2}\sigma_{2}^{2}), & \text{if {i}\(\in\) LOS/NLOS mixture sensor;} \\ \end{array} \right. \end{array} $$
(20)

and Q (l) was treated as a constant matrix for ease of derivation. From (18), the WBN algorithm converges when 0<μ<1. In the derivation of the last term of (19), it is assumed that \( {Q}^{({k}-1)}\simeq {R}_{{k}-1}^{-1}\) (Q is the instantaneous form of R −1) and that the summation is approximately equal to μ 2{(A T Q (k−1) A)−1 A T Q (k−1) R k−1 Q (k−1) A(A T Q (k−1) A)−1} because 0<1−μ<1. Note that the inverse of the squared error is relatively large for LOS sensors and small when outliers exist. Then, although σ 2 is much larger than σ 1, the MSE is nearly constant with respect to the standard deviation and bias of the NLOS noise because the effect of the large error standard deviation of the LOS/NLOS mixture sensors in R l is attenuated by the weighting matrix Q (l) (see (19) and (20)).
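As a numerical companion to (19) and (20), the sketch below evaluates the approximate steady-state MSE tr[μ²(AᵀQA)⁻¹] with Q taken as R⁻¹ and the diagonal of R built from (20). The function name and default parameter values are illustrative assumptions.

```python
import numpy as np

def theoretical_mse(A, d, nlos_mask, mu=0.99, eps=0.3, sigma1=0.01, sigma2=10.0, mu2=4.0):
    """Approximate steady-state MSE tr[mu^2 (A^T Q A)^{-1}] from (19),
    with Q = R^{-1} and the diagonal of R given by (20)."""
    R_diag = np.where(nlos_mask,
                      (1 - eps) * d**2 * sigma1**2
                      + eps * (d**2 * sigma2**2 + 0.5 * sigma2**4 + mu2**2 * sigma2**2),
                      d**2 * sigma1**2)
    Q = np.diag(1.0 / R_diag)
    return mu**2 * np.trace(np.linalg.inv(A.T @ Q @ A))
```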

5.2 Computational complexity analysis

We compared the computational complexity of the localization algorithms. The computational complexities of the localization methods are listed in Table 1, where M is the number of sensors and N is the number of unknown variables to be estimated. Matrix inversion and multiplication operations were mainly considered because they are computationally intensive. The computational complexity of the VSSWBN method is higher than that of the WBN algorithm because it requires the additional computation of the step size. Also, the computational complexity of the WCBN method is lower than those of the WBN and VSSWBN algorithms.

Table 1 Comparison of the computational complexity

6 Simulation results

In this section, we compare the MSE performance of the proposed LOS/NLOS mixture source localization methods with that of the M-estimator [22, 29] and the LMedS estimator [23]. In these simulation settings, the source was assumed to be located within a 400-m² region to determine the performance over the entire area. Note that the number of sensors used in this experiment was seven. Next, 30 different source locations were generated with a uniform distribution while the sensors remained fixed. Five hundred Monte-Carlo simulations were performed for each given standard deviation of the NLOS noise. The standard deviation of the LOS noise was assumed to be identical for all sensors. In addition, a single, omni-directional, stationary source was assumed. The MSE average was calculated as follows:

$$ \begin{aligned} \textrm{MSE}\; \text{average}={\frac{\sum_{{i}=1}^{30}\sum_{{k}=1}^{500} \left[\left(\widehat{{x}}^{{k}}({i})-{x}({i})\right)^{2} +\left(\widehat{{y}}^{{k}}({i})-{y}({i})\right)^{2}\right]}{30\times500}} \end{aligned} $$
(21)

where \(\widehat {{x}}^{{k}}({i}), \; \widehat {{y}}^{{k}}({i})\) is the estimated position of the source for the ith position set in the kth Monte-Carlo run and x(i) and y(i) indicate the ith true position of the source. Figure 1 illustrates the deployment of sensors, in which the radius of the sensor network was set to 10 m. The localization accuracy as a function of the standard deviation of the NLOS noise is shown in Fig. 2. In Fig. 2 a, the contamination ratio (ε) was 20%, the standard deviation of the LOS noise (σ 1) was 0.01 m, the bias of the NLOS noise (μ 2) was 4 m, sensors 5, 6, and 7 were the LOS/NLOS sensors, and the remaining sensors were LOS sensors. The step size (μ) was set to 0.99 in the WBN algorithm. Also, in the VSSWBN algorithm, the initial step size (μ (0)) was 0.99, ρ was 0.1, and μ max and μ min were 1 and 0.01, respectively. It is clear that the MSE averages of the VSSWBN method are lower than those of the other methods and nearly constant with respect to the standard deviation of the NLOS error. This is because the weighting matrix attenuates the effect of the large variance of the LOS/NLOS mixture sensors. In Fig. 2 b, the contamination ratio was 30% and the remaining conditions were the same as those in Fig. 2 a. Figure 2 b shows that the MSE average performance of the VSSWBN method is much superior to that of the other methods. Figure 3 assumes the same conditions as Fig. 2, with the exception that sensors 4, 5, 6, and 7 are the LOS/NLOS sensors. Again, the proposed VSSWBN method outperformed the other methods, as shown in Fig. 3. Figure 4 shows the variation of the MSE averages as a function of the contamination ratio. The MSE averages gradually increase as the contamination ratio becomes larger, and the VSSWBN method was superior to the other methods. Figure 5 shows the MSE averages with respect to the standard deviation of the LOS noise. The MSE averages of all robust methods increase as the LOS noise increases. Figure 6 illustrates the MSE averages as a function of the bias. As can be seen from Fig. 6, the MSE averages do not change as the bias is varied; this is due to the weighting matrix, which attenuates the effect of the large variance caused by the bias (\(\mu _{2}^{2}\sigma _{2}^{2}\) in (20)) because the corresponding weight is very small (the squared residual is very large). Figure 7 shows the adaptation error as a function of the iteration number when impulsive noise occurs at the 10th iteration. The WBN and VSSWBN methods converged in two iterations; thus, the additional samples required in the transient period are negligible. Additionally, the proposed WBN and VSSWBN algorithms accurately tracked the abrupt change caused by the impulsive noise. Figure 8 shows the MSE averages of the proposed algorithms as a function of the number of sensors. In this case, the number of sensors increases from 5 to 9 while the number of LOS/NLOS sensors is fixed at two, so only the number of LOS sensors increases. The standard deviation of the LOS noise was 0.01 m, that of the NLOS noise was 10 m, the bias was 4 m, and the contamination ratio was 30%. We can see that the MSE averages of the localization decrease as the number of sensors increases. Meanwhile, Fig. 9 shows the MSE averages of the proposed methods as a function of the number of sensors when the number of LOS/NLOS sensors increases. The number of LOS/NLOS sensors is one when the number of sensors is five and then increases in parallel with the number of sensors. The MSE averages of the proposed methods decrease as the number of LOS/NLOS sensors increases, but the rate of decrease is lower than in the case where the number of LOS sensors increases.
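For reference, the MSE average of (21) can be computed from the Monte-Carlo outputs as in the sketch below; the array shapes follow the 30 source locations and 500 runs used in the simulations, and the function name is an assumption.

```python
import numpy as np

def mse_average(estimates, truths):
    """Compute the MSE average of (21): estimates has shape (30, 500, 2) with the
    estimated positions for 30 source locations x 500 Monte-Carlo runs, and
    truths has shape (30, 2) with the corresponding true positions."""
    sq_err = np.sum((estimates - truths[:, None, :]) ** 2, axis=2)  # squared error per run
    return sq_err.mean()                                            # average over 30*500 runs
```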

Fig. 1 Deployment of sensors

Fig. 2 Comparison of MSE averages of the proposed estimators with those of the existing methods when sensors 5, 6, and 7 are the LOS/NLOS mixture sensors and the remaining sensors are the LOS sensors. a Contamination ratio (ε): 20%, bias of NLOS noise (μ 2): 4 m, standard deviation of LOS noise (σ 1): 0.01 m. b ε: 30%, σ 1: 0.01 m, μ 2: 4 m

Fig. 3 Comparison of MSE averages of the proposed estimators with those of the existing methods when sensors 4, 5, 6, and 7 are the LOS/NLOS mixture sensors and the remaining sensors are the LOS sensors. a Contamination ratio (ε): 20%, bias of NLOS noise (μ 2): 4 m, standard deviation of LOS noise (σ 1): 0.01 m. b ε: 30%, σ 1: 0.01 m, μ 2: 4 m

Fig. 4 MSE averages of the localization algorithms as a function of contamination ratio (bias of NLOS noise (μ 2): 4 m, standard deviation of LOS noise (σ 1): 0.01 m, standard deviation of NLOS noise (σ 2): 10 m)

Fig. 5 MSE averages of the localization algorithms as a function of standard deviation of LOS noise (bias of NLOS noise (μ 2): 4 m, contamination ratio: 30%, standard deviation of NLOS noise (σ 2): 10 m)

Fig. 6 MSE averages of the localization algorithms as a function of bias (contamination ratio: 30%, standard deviation of LOS noise (σ 1): 0.01 m, standard deviation of NLOS noise (σ 2): 10 m)

Fig. 7 Adaptation error of adaptive localization algorithms (contamination ratio: 30%, standard deviation of LOS noise (σ 1): 0.01 m, standard deviation of NLOS noise (σ 2): 10 m, bias (μ 2): 4 m)

Fig. 8 Comparison of MSE averages of the proposed estimators as a function of the number of sensors (when the number of LOS sensors increases)

Fig. 9 Comparison of MSE averages of the proposed estimators as a function of the number of sensors (when the number of LOS/NLOS sensors increases)

7 Conclusions

The WBN algorithm was developed by modifying the block LMS algorithm to make it robust to outliers through the use of a weighting matrix. Furthermore, the VSSWBN method was proposed to improve the MSE performance of the WBN algorithm. We also analyzed the MSE of the WBN algorithm. In the simulation results, the MSE averages of the proposed methods were smaller than those of the other adaptive localization methods and robust positioning algorithms.

References

  1. CH Park, KS Hong, Block LMS-based source localization using range measurement. Digit. Signal Process. 21(2), 367–374 (2011).

  2. Y Sun, J Xiao, X Li, F Cabrera-Mora, in Proc. of GLOBECOM. Adaptive source localization by a mobile robot using signal power gradient in sensor networks (New Orleans, 2008), pp. 1–5.

  3. S Zhong, W Xia, Z He, in Proc. of IEEE China Summit and International Conference on Signal and Information Processing. Adaptive direct position determination of emitters based on time differences of arrival (Beijing, 2013), pp. 230–234.

  4. S Venkatesh, RM Buehrer, in Proc. of IEEE International Symposium on Information Processing in Sensor Networks (IPSN). A linear programming approach to NLOS error mitigation in sensor networks (Nashville, 2006), pp. 301–308.

  5. X Wang, Z Wang, B O'Dea, A TOA-based location algorithm reducing the errors due to non-line-of-sight (NLOS) propagation. IEEE Trans. Veh. Technol. 52(1), 112–116 (2003).

  6. S Venkatesh, RM Buehrer, NLOS mitigation using linear programming in ultrawideband location-aware networks. IEEE Trans. Veh. Technol. 56(5), 3182–3198 (2007).

  7. Y Feng, C Fritsche, F Gustafsson, AM Zoubir, EM- and JMAP-ML based joint estimation algorithms for robust wireless geolocation in mixed LOS/NLOS environments. IEEE Trans. Signal Process. 62(1), 168–182 (2014).

  8. H Shen, Z Ding, S Dasgupta, C Zhao, Multiple source localization in wireless sensor networks based on time of arrival measurements. IEEE Trans. Signal Process. 62(8), 1938–1949 (2014).

  9. G Wang, H Chen, Y Li, N Ansari, NLOS error mitigation for TOA-based localization via convex relaxation. IEEE Trans. Wirel. Commun. 13(8), 4119–4131 (2014).

  10. YT Chan, WY Tsui, HC So, PC Ching, Time-of-arrival based localization under NLOS conditions. IEEE Trans. Veh. Technol. 55(1), 17–24 (2006).

  11. J Riba, A Urruela, in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). A non-line-of-sight mitigation technique based on ML-detection (Quebec, 2004), pp. 153–156.

  12. Y Qi, H Kobayashi, H Suda, Analysis of wireless geolocation in a non-line-of-sight environment. IEEE Trans. Wirel. Commun. 5(3), 672–681 (2006).

  13. I Enosh, AJ Weiss, Outlier identification for TOA-based source localization in the presence of noise. Signal Process. 102, 85–95 (2014).

  14. A Abbasi, H Liu, Improved line-of-sight/non-line-of-sight classification methods for pulsed ultrawideband localisation. IET Commun. 8(5), 680–688 (2014).

  15. M Crocco, A Del Bue, V Murino, A bilinear approach to the position self-calibration of multiple sensors. IEEE Trans. Signal Process. 60(2), 660–673 (2012).

  16. I Dokmanić, R Parhizkar, J Ranieri, M Vetterli, Euclidean distance matrices: essential theory, algorithms and applications. IEEE Signal Process. Mag. 32(6), 12–30 (2015).

  17. T-K Le, N Ono, Closed-form and near closed-form solutions for TOA-based joint source and sensor localization. IEEE Trans. Signal Process. 64(18), 4751–4766 (2016).

  18. T-K Le, N Ono, Closed-form and near closed-form solutions for TDOA-based joint source and sensor localization. IEEE Trans. Signal Process. 65(5), 1207–1221 (2017).

  19. F Gustafsson, F Gunnarsson, Mobile positioning using wireless networks. IEEE Signal Process. Mag. 22(4), 41–53 (2005).

  20. U Hammes, E Wolsztynski, AM Zoubir, Robust tracking and geolocation for wireless networks in NLOS environments. IEEE J. Sel. Top. Signal Process. 3(5), 889–901 (2009).

  21. Y Feng, C Fritsche, F Gustafsson, AM Zoubir, TOA-based robust wireless geolocation and Cramer-Rao lower bound analysis in harsh LOS/NLOS environments. IEEE Trans. Signal Process. 61(9), 2243–2255 (2013).

  22. P Huber, Robust Statistics (Wiley, Hoboken, 2009).

  23. R Casas, A Marco, JJ Guerrero, J Falco, Robust estimator for non-line-of-sight error mitigation in indoor localization. EURASIP J. Adv. Signal Process. 2006, Article ID 43429, 1–8 (2006).

  24. S Haykin, Adaptive Filter Theory (Pearson, Upper Saddle River, 2013).

  25. JL Moschner, Adaptive filtering with clipped input data. Ph.D. thesis (Stanford University, Stanford, 1970).

  26. RH Kwong, EW Johnston, A variable step-size LMS algorithm. IEEE Trans. Signal Process. 40(7), 1633–1642 (1992).

  27. VJ Mathews, Z Xie, A stochastic gradient adaptive filter with gradient adaptive step size. IEEE Trans. Signal Process. 41(6), 2075–2087 (1993).

  28. B Farhang-Boroujeny, Adaptive Filters: Theory and Applications (Wiley, Chichester, 2013).

  29. X-W Chang, Y Guo, Huber's M-estimation in relative GPS positioning: computational aspects. J. Geod. 79(6), 351–362 (2005).

Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT, and Future Planning (No.2014R1A2A1A10049735).

Author information


Contributions

In this research paper, the authors proposed a robust localization algorithm. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Joon-Hyuk Chang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Park, CH., Chang, JH. Adaptive robust time-of-arrival source localization algorithm based on variable step size weighted block Newton method. J Wireless Com Network 2017, 121 (2017). https://doi.org/10.1186/s13638-017-0909-0

