The two-dimensional LSSVR localization principle is extended to the three-dimensional environment. We assume that there are N anchor nodes \( {S}_i\left({x}_i,{y}_i,{z}_i\right) \) (i = 1, 2, …, N) and M unknown nodes \( {S}_j\left({x}_j,{y}_j,{z}_j\right) \) (j = 1, 2, …, M) in the three-dimensional environment. We also assume that the communication radius of all nodes and the ranging error factor are the same, and that the unknown nodes can move randomly.
If the actual distance between the anchor node \( {S}_i \) and the unknown node E is \( {d}_{iE} \), then the distance vector \( V=\left[{d}_{1E},{d}_{2E},\dots, {d}_{NE}\right] \) is composed of the actual distances from the N anchor nodes to the unknown node E. Since the unknown node moves randomly, the coordinates of the unknown node E also change randomly, and the distance vector V changes accordingly. It follows that there is a non-linear mapping between the unknown node coordinates and the distance vector. In this paper, the LSSVR is used to establish the mapping model between the unknown node coordinates and the distance vector. The three-dimensional mobile node localization algorithm based on LSSVR is as follows.
(1) Construction of the training set with virtual nodes
Due to the complexity of three-dimensional mobile node localization, the three-dimensional environment is meshed into small cubes with side length \( {l}_x \). Assume that the K grid points obtained by meshing the three-dimensional space into cubes are regarded as K virtual nodes. The coordinates of each virtual node can be described as \( {S}_l^{\prime}\left({x}_l^{\prime },{y}_l^{\prime },{z}_l^{\prime}\right) \) (l = 1, 2, …, K). Hence, \( {d}_{il} \) is the actual distance from anchor node \( {S}_i \) to the virtual node \( {S}_l^{\prime } \), and the corresponding distance vector is \( {V}_l=\left[{d}_{1l},{d}_{2l},\dots, {d}_{Nl}\right] \). The training sets \( {U}_x \), \( {U}_y \), and \( {U}_z \), shown in Eq. (1), are constructed from \( {V}_l \) and \( {S}_l^{\prime } \). The training sets are preprocessed by standard normalization of the input vectors, and the output coordinates of the unbiased regression model are then obtained.
$$ \left\{\begin{array}{l}{U}_x=\left\{\left({V}_l,{x}_l^{\prime}\right)|l=1,2,\dots, K\right\}\\ {}{U}_y=\left\{\left({V}_l,{y}_l^{\prime}\right)|l=1,2,\dots, K\right\}\\ {}{U}_z=\left\{\left({V}_l,{z}_l^{\prime}\right)|l=1,2,\dots, K\right\}\end{array}\right. $$
(1)
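The grid construction and the assembly of the sets in Eq. (1) can be sketched as follows. This is a minimal example; the cube size, the grid step \( {l}_x \), and the helper name build_training_sets are illustrative assumptions, not values given in the paper.

```python
import numpy as np

def build_training_sets(anchors, area=100.0, l_x=20.0):
    """Mesh the cube [0, area]^3 with step l_x and build the
    training sets U_x, U_y, U_z of Eq. (1)."""
    g = np.arange(0.0, area + l_x, l_x)                  # grid points per axis
    xx, yy, zz = np.meshgrid(g, g, g, indexing="ij")
    virtual = np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])  # K x 3

    # d_il: distance from anchor S_i to virtual node S'_l -> rows V_l
    diff = virtual[:, None, :] - anchors[None, :, :]     # K x N x 3
    V = np.linalg.norm(diff, axis=2)                     # K x N

    # U_x = {(V_l, x'_l)}, U_y = {(V_l, y'_l)}, U_z = {(V_l, z'_l)};
    # the inputs V would additionally be standardized before training.
    U_x = (V, virtual[:, 0])
    U_y = (V, virtual[:, 1])
    U_z = (V, virtual[:, 2])
    return U_x, U_y, U_z
```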
(2) Training of the LSSVR localization model
There are many LSSVR kernel functions, but the most commonly used is the radial basis function (RBF), which can be expressed as follows.
$$ Y\left({u}_i,{u}_j\right)=\exp \left(-\frac{{\left\Vert {u}_i-{u}_j\right\Vert}^2}{\sigma^2}\right),\left(i,j=1,2,\dots, K\right) $$
(2)
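A direct, vectorized implementation of the kernel matrix of Eq. (2) is sketched below; the function name rbf_kernel is ours and is reused in the later sketches.

```python
import numpy as np

def rbf_kernel(U, W, sigma):
    """RBF kernel of Eq. (2): Y(u_i, u_j) = exp(-||u_i - u_j||^2 / sigma^2).

    U: m x n matrix of row vectors, W: k x n matrix of row vectors.
    Returns the m x k kernel matrix.
    """
    sq_dist = (np.sum(U**2, axis=1)[:, None]
               + np.sum(W**2, axis=1)[None, :]
               - 2.0 * U @ W.T)
    return np.exp(-np.maximum(sq_dist, 0.0) / sigma**2)
```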
The LSSVR is used for training the sample sets \( {U}_x \), \( {U}_y \), and \( {U}_z \). For \( {U}_x \), the optimization problem is constructed and solved by Eq. (3).
$$ \left\{\begin{array}{l}{\min}_{\omega, \xi, b}\frac{1}{2}{\left\Vert \omega \right\Vert}^2+\frac{\gamma }{2}{\sum}_{i=1}^K{\xi}_i^2\\ {}\mathrm{s}.\mathrm{t}.\kern0.5em {x}_i^{\prime }={\omega}^T\phi \left({V}_i\right)+b+{\xi}_i,\kern0.5em i=1,2,\dots, K\end{array}\right. $$
(3)
In Eq. (3), \( \phi \left(\cdot \right):{R}^n\to {R}^{n_k} \) represents the nonlinear mapping function, b is the deviation, ω is the weight, γ is the regularization parameter, and \( {\xi}_i \) is the random error.
Using Eq. (2) and Eq. (3), the problem can be converted into solving for the Lagrange multipliers α and the deviation b through Eq. (4).
$$ \left[\begin{array}{cc}0& {\overline{1}}^T\\ {}\overline{1}& \Omega +{\gamma}^{-1}I\end{array}\right]\left[\begin{array}{c}b\\ {}\alpha \end{array}\right]=\left[\begin{array}{c}0\\ {}{x}^{\prime}\end{array}\right] $$
(4)
where \( {x}^{\prime }={\left[{x}_1^{\prime },{x}_2^{\prime },\dots, {x}_K^{\prime}\right]}^T \), \( \alpha ={\left[{\alpha}_1,{\alpha}_2,\dots, {\alpha}_K\right]}^T \), \( \overline{1}={\left[1,1,\dots, 1\right]}^T \) is the K-dimensional vector of ones, and \( {\Omega}_{ij}=Y\left({V}_i,{V}_j\right) \).
Parameters α and b can be obtained from \( \left[\begin{array}{c}b\\ {}\alpha \end{array}\right]={\left[\begin{array}{cc}0& {\overline{1}}^T\\ {}\overline{1}& \Omega +{\gamma}^{-1}I\end{array}\right]}^{-1}\left[\begin{array}{c}0\\ {}{x}^{\prime}\end{array}\right] \). Thus, the decision function is obtained.
$$ {f}_x\left({V}_l\right)={\sum}_{i=1}^K{\alpha}_iY\left({V}_l,{V}_i\right)+b $$
(5)
where \( {f}_x\left({V}_l\right) \) represents the X-LSSVR localization model. Similarly, \( {f}_y\left({V}_l\right) \) and \( {f}_z\left({V}_l\right) \) can be obtained, which represent the Y-LSSVR and Z-LSSVR localization models, respectively.
(3) Optimization of the kernel function parameter σ and regularization parameter γ
The kernel function parameter and the regularization parameter directly affect the model performance, so the particle swarm optimization (PSO) algorithm is used to optimize the LSSVR regularization parameter and the kernel function parameter. The fitness function is defined as Eq. (6) [22].
$$ {f}_{\mathrm{fitness}}=\sqrt{\sum \limits_{l=1}^M\left({\left({f}_x\left({V}_l\right)-{x}_l^{\prime}\right)}^2+{\left({f}_y\left({V}_l\right)-{y}_l^{\prime}\right)}^2+{\left({f}_z\left({V}_l\right)-{z}_l^{\prime}\right)}^2\right)} $$
(6)
Here, \( {x}_l^{\prime } \), \( {y}_l^{\prime } \), and \( {z}_l^{\prime } \) are the actual location coordinates of the virtual sampling point \( {S}_l^{\prime } \) in the detection area, \( {V}_l \) is the distance vector from the sampling point to the anchor nodes, and \( {f}_x \), \( {f}_y \), and \( {f}_z \) are the estimated values of the regression models established with the optimized model parameters.
(4) Localization of the mobile node
The measured distance from the anchor node \( {S}_i \) to the unknown node E is defined as \( {d}_{iE}^{\prime } \). The distance vector \( {V}^{\prime }=\left[{d}_{1E}^{\prime },{d}_{2E}^{\prime },\dots, {d}_{NE}^{\prime}\right] \) is composed of the measured distances \( {d}_{iE}^{\prime } \), where i = 1, 2, …, N. In order to simplify the calculation, the distance vector V′ is normalized and then used as the input of the localization model. After de-standardizing the output values, the estimated coordinates of the unknown node \( \left({x}_E^{\prime },{y}_E^{\prime },{z}_E^{\prime}\right) \) are obtained. Therefore, the three-dimensional mobile node localization based on LSSVR is achieved.