 Research
 Open Access
Evaluation of nonlinear filtering for radar data tracking
Yanan Liu^{1}, Sese Wang^{2}, Zhuo Sun^{2} and Jihong Shen^{3}
https://doi.org/10.1186/s13638-015-0249-x
© Liu et al.; licensee Springer. 2015
 Received: 7 November 2014
 Accepted: 6 January 2015
 Published: 1 February 2015
Abstract
Radar tracking plays an important role in early warning and detection systems, and its precision is closely tied to the filtering algorithm. As noise jamming of radar echo signals develops, linear filtering finds it increasingly difficult to satisfy the demands of radar tracking, while nonlinear filtering can handle problems such as non-Gaussian noise. Many nonlinear filtering algorithms exist, each with its own characteristics. With this in mind, we provide a comprehensive overview of nonlinear filtering algorithms in radar tracking, including their basic ideas and concrete steps, and compare them from several perspectives. Through these analyses, we find that the unscented Kalman filter (UKF) achieves better performance than the alternatives. We therefore simulate and show the performance of UKF, taking the extended Kalman filter (EKF) under the same conditions as the comparison; the accuracy of EKF proves less than ideal for radar tracking data filtering.
Keywords
 Radar tracking
 Kalman filter (KF)
 Nonlinear filter
 Unscented Kalman filter
1 Introduction
Gauss applied the generalized least squares method to observational data processing in the early nineteenth century, creating a mathematical approach to dealing with observations that later became the basis of modern filter theory [1]. The tracking filter is an important part of radar data processing technology. Its main goal is to estimate target location, speed, and other parameters in real time based on the measurement information [2]. At the same time, the tracking filter should extrapolate the location of the target in the next antenna scan period; this predicted location is then used to examine the rationality of the measurement at the next time step and can be used for track-related processing in search radars [3,4].
Target tracking with multiple combined sensors is widely used in practical applications. The problem of nonlinear filtering is to estimate the optimal state of a general nonlinear stochastic dynamic system. It is essential to find an effective filtering method that estimates and predicts the dynamic system state and its error statistics from sequential measurements in real time.

Since the 1960s, when Rudolf E. Kálmán invented the filter that bears his name [5,6], many modern filtering technologies have emerged, most of them built on the Kalman filter. The Kalman filter plays an irreplaceable role in the development of filtering and has been applied to many fields such as radar and computer vision. At the beginning, most Kalman filters were linear filters, giving the minimum-variance state estimate for linear dynamic systems [5]. As technology developed, however, linear filtering techniques could hardly meet the demands, and nonlinear filters came into wider use. Gaussian filters and particle filters are the main approaches to nonlinear filtering estimation, and the general Gaussian approximation methods include linearization and sampling approximation, realized by the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), respectively.

In the remainder of this paper, we first present the process of target tracking based on multiple radars and the system model. We then introduce the nonlinear filters together with their algorithm procedures. In the simulation, we filter with UKF and compare the resulting data image with that of the EKF algorithm. The experimental results show that the unscented Kalman filter is more accurate for the radar tracking data filtering problem.
2 Radar tracking and system model

Radar data processing involves the following steps:

- Data formatting: Store the measurements in a unified format.

- Data correction: Perform unbiased estimation and defect compensation according to data theory.

- Coordinate transformation: Most data measured by radar are given in the spherical coordinate system and need to be converted to the Cartesian coordinate system to simplify the calculation.

- Tracking filter processing: The tracking filter is the core device of a radar data processing system. It estimates the state of the dynamic system from a series of measurements containing noise and other inaccuracies and predicts the coordinate position and velocity of the object from the noisy observation sequence.

- Target track processing: The tracking filter should estimate the target's motion parameters such as speed and position in real time from radar measurements and calculate the position and orientation of the target at the next time step using the iteration formula.
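As a concrete illustration of the coordinate transformation step above, a minimal sketch converting one radar measurement from spherical to Cartesian coordinates (the function name, argument order, and example values are our own choices):

```python
import math

def spherical_to_cartesian(r, azimuth, elevation):
    # Range r plus azimuth/elevation angles (radians) -> Cartesian (x, y, z)
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A target at 1000 m range, 45 degrees azimuth, zero elevation
x, y, z = spherical_to_cartesian(1000.0, math.radians(45.0), 0.0)
```

In Cartesian coordinates, the subsequent filtering equations become linear in position, which is exactly why the conversion is performed before tracking.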
The motion of the target and the radar observations can be described by the discrete-time nonlinear system:$$\begin{array}{l} \textbf{x}_{k+1} = f_{k}\left(\textbf{x}_{k}\right) + \textbf{u}_{k}\\ \textbf{z}_{k} = h_{k}\left(\textbf{x}_{k}\right) + \textbf{v}_{k} \end{array} $$where x _{ k } is the state vector of the target, z _{ k } is the observation vector, f _{ k } and h _{ k } are the state transition and observation functions of the system, and u _{ k } and v _{ k } are the mutually uncorrelated state noise and measurement noise of the system.
3 Nonlinear filtering in radar tracking
3.1 Introduction of nonlinear filtering
The Kalman filter gives the optimal closed-form solution for linear systems, while for nonlinear systems it is very difficult or even impossible to obtain an exact optimal solution. Many suboptimal approximate filtering methods have therefore been proposed [9].
Nonlinear filtering methods can be classified into five types [10,11]: 1) extended Kalman filtering (EKF), 2) interpolation filtering, 3) unscented Kalman filtering (UKF), 4) particle filtering, and 5) neural network filtering.
The most widely used method for nonlinear filtering is EKF, which transforms a nonlinear problem into a linear one so that linear filtering theory can be applied. This method is usually regarded as suboptimal.

The key point of nonlinear filtering is to find an effective linear approximation. Interpolation filtering constructs this approximation with an interpolation polynomial based on the Stirling interpolation formula.

UKF needs no linearization. It approximates the probability density function of the state vector with a set of deterministically chosen sampling points.

Particle filtering is a filtering algorithm based on Monte Carlo methods and recursive Bayesian estimation. It derives a series of estimates from samples drawn according to the posterior probability distribution. When there are enough particles, the statistical properties of the particles can be taken as those of the posterior probability distribution.

Sigmoidal multilayer feedforward neural networks can approximate any continuous nonlinear function to arbitrary precision, which is the basis of neural network filtering.
3.2 Filtering algorithms of nonlinear systems
3.2.1 EKF filtering
The first-order EKF equations [12] are as follows:$$\begin{array}{l} \hat{\textbf{x}}_{k|k-1} = f_{k-1}\left(\hat{\textbf{x}}_{k-1|k-1}\right)\\ \textbf{P}_{k|k-1} = \textbf{F}_{k-1}\textbf{P}_{k-1|k-1}\textbf{F}_{k-1}^{T} + \textbf{Q}_{k-1}\\ \textbf{K}_{k} = \textbf{P}_{k|k-1}\textbf{H}_{k}^{T}\left(\textbf{H}_{k}\textbf{P}_{k|k-1}\textbf{H}_{k}^{T} + \textbf{R}_{k}\right)^{-1}\\ \hat{\textbf{x}}_{k|k} = \hat{\textbf{x}}_{k|k-1} + \textbf{K}_{k}\left(\textbf{z}_{k} - h_{k}\left(\hat{\textbf{x}}_{k|k-1}\right)\right)\\ \textbf{P}_{k|k} = \left(\textbf{I} - \textbf{K}_{k}\textbf{H}_{k}\right)\textbf{P}_{k|k-1} \end{array} $$where \(\textbf{F}_{k-1} = \frac{\partial f_{k-1}}{\partial \textbf{x}}\big|_{\textbf{x} = \hat{\textbf{x}}_{k-1|k-1}}\) and \(\textbf{H}_{k} = \frac{\partial h_{k}}{\partial \textbf{x}}\big|_{\textbf{x} = \hat{\textbf{x}}_{k|k-1}}\) are the Jacobians of the state transition and observation functions, and I is an identity matrix with the same dimensions as the covariance matrix.

Nonlinear function processed by Taylor-series expansion (TSE) [13]: The nth-order TSE approximation of f(x,t) at \(\textbf {x} = \hat {\textbf {x}}\) is:$$\begin{array}{@{}rcl@{}} f(\textbf{x},t) \approx TSE\left(\textbf{x},t;n,\hat {\textbf{x}}\right). \end{array} $$(9)For scalar f(x,t), there is:$$ \begin{array}{l} TSE\left(\textbf{x},t;n,\hat {\textbf{x}}\right) = f\left(\hat {\textbf{x}},t\right) + f'\left(\hat {\textbf{x}},t\right)\bar {\textbf{x}} + \frac{1}{{2!}}\,f^{\prime\prime}\left(\hat {\textbf{x}},t\right){\bar {\textbf{x}}^{2}} \\+ \frac{1}{{3!}}{f^{(3)}}\left(\hat {\textbf{x}},t\right){\bar {\textbf{x}}^{3}} + \ldots + \frac{1}{{n!}}\,{f^{(n)}}\left(\hat {\textbf{x}},t\right){\bar {\textbf{x}}^{n}}\\ \end{array} $$(10)
where \(\bar {\textbf {x}} = \textbf {x} - \hat {\textbf {x}}\) and \(f'(\hat {\textbf {x}},t) = \frac {{\partial f}}{{\partial {\textbf {x}}}}\big|_{\textbf {x} = \hat {\textbf {x}}},\; f''(\hat {\textbf {x}},t) = \frac {{{\partial ^{2}}f}}{{\partial {{\textbf {x}}^{2}}}}\big|_{\textbf {x} = \hat {\textbf {x}}},\ldots,\; {f^{(n)}}(\hat {\textbf {x}},t) = \frac {{{\partial ^{n}}f}}{{\partial {{\textbf {x}}^{n}}}}\big|_{{\textbf {x}} = \hat {\textbf {x}}}\).

Calculate the gains and updates using the EKF equations: EKF is a typical example of approximate nonlinear filtering [14]. By processing the nonlinear model with a linear Taylor-series expansion, we obtain a first-order approximation of the original state equation and measurement equation. As pointed out in [8], EKF is easy to implement and has been widely used, but it has many limitations.
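To make the linearize-then-filter procedure above concrete, a minimal scalar sketch; the transition f, measurement h, and noise variances below are illustrative choices of ours, not the paper's radar model:

```python
import math
import random

# Illustrative scalar nonlinear system (our own choice):
#   x_k = f(x_{k-1}) + u_k,   z_k = h(x_k) + v_k
def f(x):  return x + 0.1 * math.sin(x)     # state transition
def h(x):  return x ** 2                    # nonlinear measurement
def df(x): return 1.0 + 0.1 * math.cos(x)   # Jacobian of f
def dh(x): return 2.0 * x                   # Jacobian of h
Q, R = 1e-4, 0.5                            # process / measurement noise variances

def ekf_step(x_est, P, z):
    # Predict: propagate the estimate and linearize f around it
    x_pred = f(x_est)
    F = df(x_est)
    P_pred = F * P * F + Q
    # Update: linearize h around the prediction, then apply the Kalman gain
    H = dh(x_pred)
    S = H * P_pred * H + R
    K = P_pred * H / S
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

random.seed(0)
x_true, x_est, P = 2.0, 1.5, 1.0
for _ in range(50):
    x_true = f(x_true)
    z = h(x_true) + random.gauss(0.0, math.sqrt(R))
    x_est, P = ekf_step(x_est, P, z)
```

Note that the Jacobians df and dh must be re-evaluated at every step; this per-step linearization is the source of the EKF limitations discussed above.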
3.2.2 Interpolation filtering

One-dimensional Stirling interpolation expansion [15].
We define two operators δ and μ which satisfy:$$\begin{array}{l} \delta f(x) = f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right)\\ \mu f(x) = \frac{1}{2}\left(f\left(x + \frac{h}{2}\right) + f\left(x - \frac{h}{2}\right)\right) \end{array} $$(11)where h is the interpolation step length. Expanding f(x) with the second-order Stirling interpolation at \(x = \bar x\), we obtain:$$f(x) \approx f(\bar x) + {f'_{DD}}(\bar x)(x - \bar x) + \frac{{f''_{DD}}(\bar x)}{2!}{(x - \bar x)^{2}}. $$(12)Then, we use the centered differences instead of the derivatives of the Taylor-series expansion:$$\begin{array}{l} {f'_{DD}}(\bar x) = \frac{f(\bar x + h) - f(\bar x - h)}{2h} = \frac{\mu\delta f(\bar x)}{h}\\ {f''_{DD}}(\bar x) = \frac{f(\bar x + h) + f(\bar x - h) - 2f(\bar x)}{h^{2}} = \frac{\delta^{2} f(\bar x)}{h^{2}}. \end{array} $$(13)Plugging the Taylor series into (12), we obtain:$$ \begin{aligned} f(\bar x) + {f'_{DD}}(\bar x)(x - \bar x) + \frac{{f''_{DD}}(\bar x)}{2!}{(x - \bar x)^{2}} &= f(\bar x) + f'(\bar x)(x - \bar x) + \frac{f''(\bar x)}{2!}{(x - \bar x)^{2}}\\ &\quad + \left(\frac{{f^{(3)}}(\bar x)}{3!}{h^{2}} + \frac{{f^{(5)}}(\bar x)}{5!}{h^{4}} + \ldots\right)(x - \bar x)\\ &\quad + \left(\frac{{f^{(4)}}(\bar x)}{4!}{h^{2}} + \frac{{f^{(6)}}(\bar x)}{6!}{h^{4}} + \ldots\right){(x - \bar x)^{2}}. \end{aligned} $$(14)
Multi-dimensional Stirling interpolation expansion [13].
We suppose x∈R ^{ n }, and y=f(x) is a vector function. Applying the Stirling interpolation expansion gives:$$\textbf{y} = f(\bar {\textbf{x}}) + {\tilde D_{\Delta \textbf{x}}}f + \frac{1}{{2!}}\tilde D_{\Delta \textbf{x}}^{2}f $$(15)where \({\tilde D_{\Delta \textbf{x}}}\) and \(\tilde D_{\Delta \textbf{x}}^{2}\) are the difference operators satisfying:$$\begin{array}{l} {\tilde D_{\Delta \textbf{x}}}\,f = \frac{1}{h}\left(\sum\limits_{p = 1}^{n} \Delta \textbf{x}_{p}\, \mu_{p}\delta_{p}\right) f(\bar {\textbf{x}})\\ \tilde D_{\Delta \textbf{x}}^{2}\,f = \frac{1}{h^{2}}\left(\sum\limits_{p = 1}^{n} \Delta \textbf{x}_{p}^{2}\,\delta_{p}^{2} + \sum\limits_{p = 1}^{n}\sum\limits_{q = 1, q \ne p}^{n} \Delta \textbf{x}_{p}\Delta \textbf{x}_{q}\left(\mu_{p}\delta_{p}\right)\left(\mu_{q}\delta_{q}\right)\right) f(\bar {\textbf{x}}) \end{array} $$(16)where δ _{ p } and μ _{ p } are the partial difference operator and the average operator satisfying:$$\begin{array}{l} \delta_{p} f(\bar {\textbf{x}}) = f\left(\bar {\textbf{x}} + \frac{h}{2}e_{p}\right) - f\left(\bar {\textbf{x}} - \frac{h}{2}e_{p}\right)\\ \mu_{p} f(\bar {\textbf{x}}) = \frac{1}{2}\left(f\left(\bar {\textbf{x}} + \frac{h}{2}e_{p}\right) + f\left(\bar {\textbf{x}} - \frac{h}{2}e_{p}\right)\right) \end{array} $$(17)where e _{ p } is the pth unit vector.
The interpolation filtering algorithm above has a clear advantage over TSE. The expansion comes in one-dimensional and multi-dimensional Stirling interpolation forms. Because interpolation filtering omits the calculation of partial derivatives, it can be applied to any function, even nonlinear discontinuous cases with singularities, and it is more accurate than the EKF method [13].
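The centered differences of Eq. (13) can be checked numerically; this small sketch (function names, test function, and step length are our own choices) compares them against the analytic derivatives of sin:

```python
import math

def central_first(f, x, h):
    # f'_DD of Eq. (13): first-order centered difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

def central_second(f, x, h):
    # f''_DD of Eq. (13): second-order centered difference
    return (f(x + h) + f(x - h) - 2.0 * f(x)) / (h * h)

# Compare against the analytic derivatives of sin at x = 1
x, h = 1.0, 1e-3
d1 = central_first(math.sin, x, h)    # approximates cos(1)
d2 = central_second(math.sin, x, h)   # approximates -sin(1)
```

Only function evaluations are needed, which is exactly why the derivative-free property claimed above holds even for functions whose derivatives are unavailable.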
3.2.3 Particle filtering

1. Initialization: Set k=0 and generate the particle swarm from the prior density function p(x _{0}).

2. Importance sampling: For k∈{1,2,…}, draw N samples from the transition prior density \(p\left({\textbf{x}_{k}} \mid \textbf{x}_{k-1}^{i}\right)\) and compute the weight of each particle \({\omega_{k}^{i}} = \omega_{k-1}^{i}\, p\left({\textbf{y}_{k}} \mid \textbf{x}_{k}^{i}\right)\) for i=1,2,…,N; then normalize the weights by the rule \(\tilde{\omega}_{k}^{i} = {\omega_{k}^{i}}{\left[\sum\limits_{j = 1}^{N} {\omega_{k}^{j}} \right]^{-1}}\).

3. Resampling: Setting \(\tilde{N}_{i} = \left[N \tilde{\omega}_{k}^{i}\right]\), we derive \(\overline{N_{k}} = N - \sum\nolimits_{i = 1}^{N} \tilde{N}_{i}\) particles by sampling importance resampling (SIR), with corresponding weights \({\omega_{k}^{i}} = \overline{N_{k}}^{-1}\left(\tilde{\omega}_{k}^{i} N - \tilde{N}_{i}\right)\). The total number of particles N is invariant, and after resampling \({\omega_{k}^{i}} = \tilde{\omega}_{k}^{i} = \frac{1}{N}\).
The basic idea of the particle filter is to represent the state vector of interest as a set of random samples with associated weights and then work out the state estimation values based on these samples and weights. When the number of random samples is sufficient, the estimated probability density function of the particle would level off to the real probability density, and the particle filtering would accordingly level off to the optimal Bayesian filtering [10].
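The three steps above can be sketched as a minimal bootstrap (SIR) particle filter. The scalar model, noise levels, and particle count here are illustrative assumptions of ours, not the paper's system:

```python
import math
import random

def particle_filter(zs, n_particles=500):
    # Bootstrap (SIR) particle filter for the illustrative scalar model
    #   x_k = 0.9 x_{k-1} + u_k,   z_k = x_k + v_k
    q_std, r_std = 0.5, 0.5
    # 1. Initialization: draw the particle swarm from the prior p(x_0)
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in zs:
        # 2. Importance sampling: propagate through the transition prior ...
        particles = [0.9 * x + random.gauss(0.0, q_std) for x in particles]
        # ... weight each particle by the likelihood p(z | x) and normalize
        weights = [math.exp(-0.5 * ((z - x) / r_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # 3. Resampling: draw a new, equally weighted particle set
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulate the model and filter the noisy measurements
random.seed(1)
x, true_states, measurements = 0.0, [], []
for _ in range(60):
    x = 0.9 * x + random.gauss(0.0, 0.5)
    true_states.append(x)
    measurements.append(x + random.gauss(0.0, 0.5))
estimates = particle_filter(measurements)
```

Resampling at every step is the simplest way to counter the weight degeneracy discussed in Section 3.3; practical implementations often resample only when the effective sample size falls below a threshold.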
3.2.4 Neural network filtering [11]
Here, ξ(t) and v(t) are colored noises with nonzero means.
With the rapid development of artificial intelligence technology, researchers have found that the neural network filter is suitable for nonlinear problems; its main strengths are intelligence and adaptability. Its drawbacks are poor universality and unsatisfactory estimation accuracy.
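As a toy illustration of the approximation property mentioned in Section 3.1 (not a complete neural network filter), a single-hidden-layer sigmoidal network fitted to a nonlinear function by plain gradient descent; the architecture, grid, and learning rate are our own choices:

```python
import math
import random

random.seed(0)
H = 8  # hidden sigmoidal units
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def sigmoid(a):
    if a < -60.0:
        return 0.0
    if a > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-a))

def forward(x):
    # Hidden sigmoidal layer followed by a linear output unit
    hid = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hid[j] for j in range(H)) + b2, hid

target = math.sin                        # nonlinear function to approximate
xs = [i / 10.0 for i in range(-30, 31)]  # training grid on [-3, 3]

lr = 0.05
for _ in range(3000):
    for x in xs:
        y, hid = forward(x)
        e = y - target(x)                # squared-error gradient
        for j in range(H):
            gh = e * w2[j] * hid[j] * (1.0 - hid[j])  # backprop through sigmoid
            w2[j] -= lr * e * hid[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * e
```

A neural network filter would train such a network on state trajectories rather than a fixed function, which is where the poor universality noted above comes from: the fit is only good for systems resembling the training data.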
3.2.5 UKF filtering
1. Construct the sigma points. First, work out the sampling points ξ _{ i } and their corresponding weights W _{ i } [10,17]:$$\left\{ \begin{array}{lll} {\xi_{0}} = \overline {\textbf{X}}, & & i = 0\\ {\xi_{i}} = \overline {\textbf{X}} + {\left(\sqrt {({n_{x}} + \kappa){\textbf{P}_{x}}}\right)_{i}}, & & i = 1,\ldots,{n_{x}}\\ {\xi_{i + {n_{x}}}} = \overline {\textbf{X}} - {\left(\sqrt {({n_{x}} + \kappa){{\textbf{P}}_{x}}}\right)_{i}}, & & i = 1,\ldots,{n_{x}} \end{array} \right. $$(23)$$\left\{ \begin{array}{lll} {W_{0}} = \frac{\kappa}{{n_{x}} + \kappa}, & & i = 0\\ {W_{i}} = \frac{1}{2({n_{x}} + \kappa)}, & & i = 1,\ldots,{n_{x}}\\ {W_{i + {n_{x}}}} = \frac{1}{2({n_{x}} + \kappa)}, & & i = 1,\ldots,{n_{x}} \end{array} \right. $$(24)
where κ is a scale parameter that can take any value satisfying (n _{ x }+κ)≠0, \({\left(\sqrt {({n_{x}} + \kappa){{\textbf {P}}_{x}}} \right)_{i}}\) is the ith column of the matrix square root of (n _{ x }+κ)P _{ x }, and n _{ x } denotes the dimension of the state vector.
2. Apply the nonlinear transformation to the sigma points. Each sigma point is propagated through the nonlinear function, and we obtain:$$\begin{array}{@{}rcl@{}} {y_{i}} = g({\xi_{i}}), \quad i = 0,\ldots,2{n_{x}}. \end{array} $$(25)
3. Calculate the mean and variance of y. The estimated mean and covariance of y are:$$\bar{\textbf{y}} = \sum\limits_{i = 0}^{2{n_{x}}} {W_{i}}{y_{i}}, \qquad \textbf{P}_{y} = \sum\limits_{i = 0}^{2{n_{x}}} {W_{i}}\left({y_{i}} - \bar{\textbf{y}}\right)\left({y_{i}} - \bar{\textbf{y}}\right)^{T}. $$

The UKF recursion at each step then proceeds as follows:

- Calculate the sigma sampling points \(\chi_{k|k}^{i}\);

- Predict \({\hat x_{k + 1|k}}\) and P _{ k+1|k } from \(\chi_{k|k}^{i}\) according to the unscented transform (UT);

- Update with the observation by the following equations:$$ \begin{array}{l} Z_{k + 1|k}^{i} = {h_{k + 1}}\left(\chi_{k + 1|k}^{i}\right)\\ {\hat z_{k + 1|k}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}Z_{k + 1|k}^{i}} \\ {S_{k + 1}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}\left[Z_{k + 1|k}^{i} - {\hat z_{k + 1|k}}\right]{{\left[Z_{k + 1|k}^{i} - {\hat z_{k + 1|k}}\right]}^{T}}} + {R_{k + 1}} \\ {P_{xz,k + 1}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}\left[\chi_{k + 1|k}^{i} - {\hat x_{k + 1|k}}\right]{{\left[Z_{k + 1|k}^{i} - {\hat z_{k + 1|k}}\right]}^{T}}} \\ {K_{k + 1}} = {P_{xz,k + 1}}S_{k + 1}^{-1}\\ {\hat x_{k + 1|k + 1}} = {\hat x_{k + 1|k}} + {K_{k + 1}}\left({z_{k + 1}} - {\hat z_{k + 1|k}}\right)\\ {P_{k + 1|k + 1}} = {P_{k + 1|k}} - {K_{k + 1}}{S_{k + 1}}K_{k + 1}^{T}. \end{array} $$(26)
UKF recursively propagates and updates the state and error covariance of the nonlinear model through the nonlinear UT method. It approximates not the nonlinear model itself but the probability density function of the state [18]. UKF resembles EKF in that the parametric analytical form used by both is based on the Gaussian assumption [19].
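The sigma-point construction of Eqs. (23) and (24) and the mean/covariance estimates can be sketched for the scalar case n_x = 1; the κ value and the test function below are our own choices. For a linear function, the unscented transform is exact:

```python
import math

def unscented_transform(g, mean, var, kappa=2.0):
    # Sigma points and weights of Eqs. (23)-(24) for the scalar case n_x = 1
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    # Propagate each sigma point through g, then re-estimate mean and variance
    ys = [g(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For a linear function the transform is exact: y = 3x + 1 with x ~ N(2, 4)
m, v = unscented_transform(lambda x: 3.0 * x + 1.0, 2.0, 4.0)
```

Unlike the EKF sketch earlier, no derivative of g appears anywhere; only 2n_x + 1 function evaluations are needed per transform.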
3.3 Comparison of different nonlinear filters
In [20], which uses EKF, the images after radar data filtering and simulation show that, despite some improvement, the magnitude of the error correction is small. The linearization error usually degrades the final filtering accuracy seriously and can even lead to filter divergence.
In our work, we use UKF, which is more accurate than EKF; moreover, UKF avoids both the truncation of higher-order Taylor-expansion terms and the computation of the Jacobian matrix. It should be added, however, that its application is limited by the assumption of Gaussian-distributed noise.

For the interpolation filter, more sampling points are needed to achieve ideal precision, and Taylor expansion and approximation must still be carried out. Since its performance on numerical data is worse than on image data, we do not use the interpolation filter in our study.
In practice, the estimation of nonlinear and non-Gaussian stochastic systems is even more significant. An effective way to address it is nonparametric particle filtering, which can be applied to any state-transition model and measurement model in any environment and removes the constraint that the random quantities must follow a Gaussian distribution. However, particle degeneracy occurs in practice: as the iterations proceed, the weights of many particles become negligible, and the variance of the sample weights increases over time. Based on the setting in [20], the system is assumed to have Gaussian noise, so we do not use the particle filtering algorithm here.
Although it has a short history, the neural network filtering algorithm can be combined with other filters such as EKF to generate new adaptive extended Kalman filters. We do not choose it, however, because it works well only on systems similar to those it was trained for.
From the UKF procedure, we can see that the method is faster than EKF because there is no need to calculate the Jacobian matrix, and the mean and variance of the nonlinear function are estimated more accurately through the UT, giving higher accuracy. For any nonlinear function, the posterior mean and covariance computed by UKF are accurate to the second order, so UKF can be used with any dynamic model, while EKF only achieves first-order precision.
Based on the above analyses, we choose UKF, which can provide more accurate data for further calculation. Applying UKF to [20] has great significance for practical applications and lays a better foundation for the future data fusion problem.
4 Numerical results

- Radar period: T _{ s }=2.

- Random errors of the radar distance and position: 200 and 0.3.
Initial values of the state are:$$\begin{array}{@{}rcl@{}} \begin{array}{l} {\textbf{X}_{0}} = \left\{ \textbf{Z}(1,2),\frac{{\textbf{Z}(1,2)  \textbf{Z}(1,1)}}{{{T_{s}}}},\textbf{Z}(2,2),\frac{{\textbf{Z}(2,2)  \textbf{Z}(2,1)}}{{{T_{s}}}}\right\} \\ \textbf{H} = \left({\begin{array}{*{20}{c}} 1&0&0&0\\ 0&0&1&0 \end{array}} \right),\textbf{F} = \left({\begin{array}{*{20}{c}} 1&{{T_{s}}}&0&0\\ 0&1&0&0\\ 0&0&1&{{T_{s}}}\\ 0&0&0&1 \end{array}} \right). \end{array} \end{array} $$(27)
Initial covariance is as follows:
$$\textbf{P} = \left(\begin{array}{cccc} \textbf{R}(1,1) & \frac{\textbf{R}(1,1)}{T_{s}} & \textbf{R}(1,2) & \frac{\textbf{R}(1,2)}{T_{s}}\\ \frac{\textbf{R}(1,1)}{T_{s}} & \frac{2\textbf{R}(1,1)}{T_{s}^{2}} & \frac{\textbf{R}(1,2)}{T_{s}} & \frac{2\textbf{R}(1,2)}{T_{s}^{2}}\\ \textbf{R}(1,2) & \frac{\textbf{R}(1,2)}{T_{s}} & \textbf{R}(2,2) & \frac{\textbf{R}(2,2)}{T_{s}}\\ \frac{\textbf{R}(1,2)}{T_{s}} & \frac{2\textbf{R}(1,2)}{T_{s}^{2}} & \frac{\textbf{R}(2,2)}{T_{s}} & \frac{2\textbf{R}(2,2)}{T_{s}^{2}} \end{array}\right),\quad \textbf{Q} = \left(\begin{array}{cccc} 50 & 50 & 10 & 10\\ 50 & 50 & 10 & 10\\ 10 & 10 & 50 & 50\\ 10 & 10 & 50 & 50 \end{array}\right). $$(28)
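The initialization of Eq. (27) can be sketched with numpy, using the first two measured points from the results table. Treating the table's first row as successive (x, y) measurements is our assumption:

```python
import numpy as np

Ts = 2.0  # radar period T from the setup above

# Constant-velocity state [x, vx, y, vy]: the F and H matrices of Eq. (27)
F = np.array([[1.0, Ts,  0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, Ts],
              [0.0, 0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# First two measured points (x, y) taken from the results table
Z = np.array([[768550.0, 766260.0],   # x measurements Z(1,1), Z(1,2)
              [786690.0, 786700.0]])  # y measurements Z(2,1), Z(2,2)

# Initial state of Eq. (27): latest position plus finite-difference velocity
X0 = np.array([Z[0, 1], (Z[0, 1] - Z[0, 0]) / Ts,
               Z[1, 1], (Z[1, 1] - Z[1, 0]) / Ts])

# One prediction step: each position advances by its velocity times Ts
X1 = F @ X0
```

The finite-difference velocity is only a bootstrap; the filter refines it as further measurements arrive.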
Radar measurements obtained by the EKF algorithm and the UKF algorithm (each row lists four successive points):

(x _{0}, y _{0})    (768550, 786690)  (766260, 786700)  (764390, 787190)  (762280, 787080)
(x _{1}, y _{1})    (768540, 786700)  (766430, 786860)  (764120, 787280)  (761740, 787410)
(x _{1} ^{′}, y _{1} ^{′})  (767784, 786694)  (766716, 786699)  (765136, 786995)  (763243, 787088)

(x _{0}, y _{0})    (757850, 786890)  (755880, 787360)  (753780, 787530)  (751550, 787370)
(x _{1}, y _{1})    (757840, 787090)  (755730, 787450)  (753420, 787580)  (751140, 787820)
(x _{1} ^{′}, y _{1} ^{′})  (760133, 787052)  (757436, 787221)  (754930, 787394)  (752499, 787453)

(x _{0}, y _{0})    (747210, 787300)  (745100, 787400)  (742990, 787490)  (740740, 787240)
(x _{1}, y _{1})    (747200, 787510)  (745090, 787670)  (742780, 787690)  (740500, 787820)
(x _{1} ^{′}, y _{1} ^{′})  (749442, 787473)  (746636, 787513)  (744005, 787565)  (741462, 787537)

(x _{0}, y _{0})    (736750, 788000)  (734630, 788040)  (732380, 787720)  (730470, 788030)
(x _{1}, y _{1})    (736750, 788010)  (734640, 788070)  (732320, 788390)  (730040, 789020)
(x _{1} ^{′}, y _{1} ^{′})  (738559, 787701)  (735821, 787837)  (733182, 787875)  (730688, 787968)
From the above images, we can clearly see that the track in the second figure, produced by UKF, is smoother, especially around points 8 and 9. Since the actual flight path of an aircraft cannot have the sharp corners of a broken line, the smooth curve in Figure 3b is much closer to the actual trajectory of the aircraft. That is to say, in the test cases, the UKF algorithm is the more suitable method.
Compared with the curve in Figure 4a, the track by UKF is much smoother, and the curve after point 9 is almost entirely smooth. Although the curve produced by the EKF algorithm is closer to the original measurement data, according to actual flight principles the aircraft should move smoothly, which illustrates that there is measurement error and that we should filter the data before using it.
From Figure 5a, we can see that after unscented Kalman filtering, the curve is almost smooth. This indicates that by filtering the set of measurements with UKF, the curve we obtain is much closer to the actual track.
5 Conclusions
We present the algorithm procedures of five types of nonlinear filters used in radar tracking data filtering and contrast them; among these, UKF shows the best performance. For further illustration, we mainly carry out simulations of UKF, with EKF as the comparison. Both theory and simulation results show that the precision of the data and images obtained by UKF is superior to that of the other filters, especially the EKF result, with smoother and better tracking performance. Moreover, for nonlinear functions, the UKF algorithm can greatly improve the accuracy of the calculation in practical applications. If we apply the UKF algorithm to the method in [20], it would provide a preferable basis for future data fusion and may have great significance for practical applications.
Declarations
Acknowledgements
This work was supported by the National Science and Technology Major Project of China under grant 2013ZX03001003003 and the BUPT Research Innovation Project under grant 2013RC0104.
Authors’ Affiliations
References
 BH Ding, H Huang, Radar Tracking Filter Technology Research [J] (Shanghai Jiaotong University, Shanghai, 2011).
 Q Liang, B Zhang, C Zhao, Y Pi, TDoA-Based Passive Localization: Terrestrial versus Underwater Environment [J], vol. 24, no. 10 (IEEE Trans on Parallel and Distributed Processing, Oct 2013), pp. 2100-2108.
 Q Liang, X Cheng, S Samn, NEW: Network-enabled Electronic Warfare for Target Recognition [J], vol. 46, no. 2 (IEEE Trans on Aerospace and Electronic Systems, April 2010), pp. 558-568.
 MY Ge, Studies of Monitoring Radar Target Tracking and Localization [J] (91913 troops of the PLA Navy, Dalian, 2012).
 CQ Hong, Studies of Target Searching Algorithm in Video Based on Content [D] (Zhejiang University, Hangzhou, 2011).
 ZX Yao, Design and Research of DC/DC Converter and State Observer [D] (Guangxi University, Nanning, 2008).
 DC Liu, X Tan, Performance Comparison of Nonlinear Filtering Algorithms [J] (Aba Teachers College, Sichuan, 2011).
 B Liu, Z Chen, X Liu, F Yang, J Geng, A New Method Based on the Polytopic Linear Differential Inclusion for the Nonlinear Filter [J] (Beijing Institute of Technology, Beijing, 2013).
 Q Liang, Radar Sensor Wireless Channel Modeling in Foliage Environment: UWB versus Narrowband [J], vol. 11, no. 6 (IEEE Sensors Journal, June 2011), pp. 1448-1457.
 Q Liang, Automatic Target Recognition Using Waveform Diversity in Radar Sensor Networks [J], vol. 29, no. 2 (Pattern Recognition Letters (Elsevier), 2008), pp. 377-381.
 J Liang, Q Liang, Design and Analysis of Distributed Radar Sensor Networks [J], vol. 22, no. 11 (IEEE Trans on Parallel and Distributed Processing, November 2011), pp. 1926-1933.
 Y He, JJ Xiu, JW Zhang, X Guan, Radar Data Processing and Application [M] (Electronic Industry Press, Beijing, July 2009).
 CS Qu, HL Xu, Y Tan, Overview of Nonlinear Bayesian Filtering Algorithms [J] (Second Artillery Engineering College, Xi'an, 2008).
 B Liu, Z Chen, X Liu, F Yang, A New Method Based on Polytopic Linear Inclusion for Nonlinear Filter with Non-Gaussian Noise [J] (Beijing Institute of Technology, Beijing, 2013).
 CP Yu, GY Li, HY Zhang, Comparison and Analysis of Performance of Several Nonlinear Filtering Algorithms [J] (PLA Information Engineering University, Zhengzhou, November 2008).
 J Xu, GM Dimirovski, Y Jing, C Shen, UKF Design and Stability for Nonlinear Stochastic Systems with Correlated Noises [J] (LA, USA, December 2007).
 Y Ma, Z Wang, A UKF Algorithm Based on the Singular Value Decomposition of State Covariance [J] (Graduate School of the Chinese Academy of Sciences, Beijing, January 2010).
 F Gao, 3D ECT Data Correction and Image Reconstruction Based on the Theory of Stochastic Optimization [D] (Zhejiang University, Hangzhou, April 2009).
 D Simon, Kalman Filtering with State Constraints: A Survey of Linear and Nonlinear Algorithms [J] (Cleveland State University, Cleveland, Ohio 44115, USA, January 2009).
 Z Lin, Research on Information Fusion Systems Engineering Design Criteria [D] (Harbin Engineering University, Harbin, 2005).
 ZX Hu, Filtering Based on Neural Networks [J] (East China University of Technology, Shanghai, March 1996).
Copyright
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.