Evaluation of nonlinear filtering for radar data tracking

EURASIP Journal on Wireless Communications and Networking 2015, 2015:18

https://doi.org/10.1186/s13638-015-0249-x

  • Received: 7 November 2014
  • Accepted: 6 January 2015
  • Published:

Abstract

Radar tracking plays an important role in early warning and detection systems, and its precision is closely tied to the filtering algorithm. With the development of noise jamming in radar echo signals, linear filtering finds it increasingly difficult to satisfy the demands of radar tracking, while nonlinear filtering can handle problems such as non-Gaussian noise. Many nonlinear filtering algorithms exist at present, each with its own characteristics. With this in mind, we provide a comprehensive overview of the different nonlinear filtering algorithms used in radar tracking, covering their basic ideas and concrete steps. For a clearer presentation, we also compare them from several perspectives. Through these analyses, we find that the unscented Kalman filter (UKF) can achieve better performance than the others. We therefore simulate and report the performance of the UKF, taking as comparison the performance of the extended Kalman filter (EKF) under the same conditions, whose accuracy is not ideal for radar tracking data filtering.

Keywords

  • Radar tracking
  • Kalman filter (KF)
  • Nonlinear filter
  • Unscented Kalman filter

1 Introduction

Gauss applied the generalized least squares method to observational data processing in the early nineteenth century. He created a mathematical approach to dealing with observations, which later became the basis of modern filter theory [1]. The tracking filter is an important part of radar data processing technology. Its main goal is to estimate target location, speed, and other parameters in real time based on the measurements [2]. At the same time, the tracking filter should extrapolate the location of the target in the next antenna scan period. This predicted location is then used to examine the plausibility of the measurement at the next time step and can be used for track-related processing in search radars [3,4].

Target tracking with combined multi-sensor data is widely used in practical applications. The nonlinear filtering problem is to estimate the optimal state of a general nonlinear stochastic dynamic system. It is essential to find an effective filtering method that estimates and predicts the dynamic system state and the error statistics from the sequential measurements in real time.

After Rudolf E. Kalman invented and named the Kalman filter in 1960 [5,6], many modern filtering technologies emerged, most of them built on it. The Kalman filter plays an irreplaceable role in the development of filtering and has been applied to many fields such as radar and computer vision. Initially, most Kalman filters were linear filters, providing the minimum-variance state estimate of linear dynamic systems [5]. However, as technology developed, linear filtering could no longer meet the demands, and nonlinear filters came into wider use. Gaussian filters and particle filters are the main approaches to nonlinear filtering estimation; the general Gaussian approximation methods are linearization and sampling approximation, represented by the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), respectively.

In the remainder of this paper, we first present the target tracking process based on multiple radars and the system model. Then we introduce the nonlinear filters and their algorithm procedures. In the simulation, we filter the data with the UKF and compare the resulting track with that of the EKF algorithm. The experimental results show that the unscented Kalman filter is the more accurate choice for the radar tracking data filtering problem.

2 Radar tracking and system model

Modern radar usually processes data with digital computers. Using parameter estimation techniques, we can estimate many motion parameters, such as the specific location of the target, its velocity, and its acceleration, based on the radar measurements, and generate a variety of information about the target, such as its expected position and its current and next state [5]. There are five steps when processing TT&C radar data, and the procedure is shown in Figure 1.
Figure 1

Procedure of radar data processing. The five steps when processing TT&C radar data.

  • Data formatting: Store the measurements in a unified format.

  • Data correction: Perform unbiased estimation and error compensation according to data theory.

  • Coordinate transformation: Most of the data measured by radar is based on the spherical coordinate system, which needs to be converted to the Cartesian coordinate system in order to simplify the calculation.

  • Tracking filter processing: The tracking filter is the core of a radar data processing system. It estimates the state of the dynamic system from a series of measurements containing noise and other inaccuracies and predicts the coordinate position and velocity of the object from the noisy observation sequence.

  • Target track processing: The tracking filter should estimate the target’s motion parameters, such as speed and position, in real time from the radar measurements and calculate the position and orientation of the target at the next time step using the iteration formula.

State equation and observation equation of nonlinear systems [7,8] are as follows:
$$\begin{array}{@{}rcl@{}} \begin{array}{l} {\textbf{x}_{k + 1}} = {f_{k}}({\textbf{x}_{k}}) + {\textbf{u}_{k}}\\ {\textbf{z}_{k}} = {h_{k}}({\textbf{x}_{k}}) + {\textbf{v}_{k}} \end{array} \end{array} $$
(1)

where x_k is the state vector of the target, z_k is the observation vector, f_k and h_k are the state transition and observation functions of the system, and u_k and v_k are the mutually uncorrelated process noise and measurement noise of the system.
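As a concrete illustration of model (1), the following minimal Python sketch simulates a scalar nonlinear system; the specific f_k, h_k, and noise levels are hypothetical choices for illustration only, not the paper's radar model:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Hypothetical state transition f_k."""
    return 0.9 * x + 0.1 * np.sin(x)

def h(x):
    """Hypothetical observation function h_k."""
    return x ** 2 / 20.0

q_std, r_std = 0.1, 0.5   # std devs of process noise u_k and measurement noise v_k
x = 1.0
xs, zs = [], []
for k in range(50):
    x = f(x) + rng.normal(0.0, q_std)         # x_{k+1} = f_k(x_k) + u_k
    zs.append(h(x) + rng.normal(0.0, r_std))  # z_k = h_k(x_k) + v_k
    xs.append(x)
print(len(xs), len(zs))
```

The filters discussed below all operate on sequences of this form: a hidden state trajectory `xs` observed only through the noisy measurements `zs`.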

3 Nonlinear filtering in radar tracking

3.1 Introduction of nonlinear filtering

The Kalman filter gives the optimal closed-form solution for linear systems, while for nonlinear systems it is very difficult or even impossible to obtain an exact optimal solution. Many suboptimal approximate filtering methods have therefore been proposed [9].

Nonlinear filtering methods can be classified into five types [10,11]: 1) extended Kalman filtering (EKF), 2) interpolation filtering, 3) unscented Kalman filtering (UKF), 4) particle filtering, and 5) neural network filtering.

The most widely used method for nonlinear filtering is EKF, which transforms nonlinear problems into linear ones so that linear filtering theory can be applied. This method is usually regarded as suboptimal.

The key point of nonlinear filtering is to find an effective linear approximation. Interpolation filtering uses an interpolation polynomial, based on the Stirling interpolation formula, to perform this linear approximation.

UKF needs no linearization. It approximates the probability density function of the state vector with a set of deterministically chosen sampling points.

Particle filtering is a filtering algorithm based on Monte Carlo methods and recursive Bayesian estimation. It builds a series of estimates from samples drawn from the posterior probability distribution. When there are enough particles, the statistical properties of the particles can be regarded as those of the posterior probability distribution.

Sigmoidal multi-layer feedforward neural networks can approximate any continuous nonlinear function to arbitrary precision, a property exploited by neural network filtering.

3.2 Filtering algorithms of nonlinear systems

3.2.1 EKF filtering

The first-order EKF equations [12] are shown as follows:

With knowledge of the present state and covariance of the present state, we can get the prediction of the next state as well as covariance of the next state:
$$\begin{array}{@{}rcl@{}} \mathop \textbf{X}\limits^{\wedge} \left(k + 1|k\right) = f\left[k,\mathop \textbf{X}\limits^{\wedge} \left(k|k\right)\right] \end{array} $$
(2)
$$\begin{array}{@{}rcl@{}} \textbf{P}\left(k + 1|k\right) = {f_{x}}(k)P\left(k|k\right){f'_{x}}\left(k\right) + \textbf{Q}\left(k\right). \end{array} $$
(3)
Then, according to equations above, we get the prediction of measurement and innovation covariance:
$$\begin{array}{@{}rcl@{}} \mathop \textbf{Z}\limits^ \wedge \left(k + 1|k\right) = h\left[k + 1,\mathop \textbf{X}\limits^ \wedge \left(k + 1|k\right)\right] \end{array} $$
(4)
$$ {}\textbf{S}(k + 1) = {h_{x}}(k + 1)\textbf{P}(k + 1|k){h'_{x}}(k + 1) + \textbf{R}(k + 1). $$
(5)
Therefore, the optimal Kalman gain can be described as:
$$\begin{array}{@{}rcl@{}} \textbf{K}\left(k + 1\right) = \textbf{P}\left(k + 1|k\right){h'_{x}}\left(k + 1\right)\,{\textbf{S}^{- 1}}(k + 1). \end{array} $$
(6)
From the derivation above, we can finally obtain the updated state equation and the updated covariance equation:
$$ \begin{aligned} \mathop \textbf{X}\limits^ \wedge (k + 1|k + 1) &= \mathop \textbf{X}\limits^ \wedge (k + 1|k) + \textbf{K}(k + 1)\\ &\left\{ \textbf{Z}(k + 1) - h\left[k + 1,\mathop \textbf{X}\limits^ \wedge (k + 1|k)\right]\right\} \end{aligned} $$
(7)
$$ \begin{aligned} \textbf{P}(k+1|k+1)&=\left[\textbf{I} -\textbf{K}(k + 1){h_{x}}(k +1)\right]\textbf{P}(k+1|k)\\&\quad\times\left[\textbf{I} - \textbf{K}(k + 1){h_{x}}(k + 1)\right]' + \textbf{K}(k + 1)\textbf{R}(k + 1)\textbf{K}'(k + 1) \end{aligned} $$
(8)

where I is an identity matrix of the same dimensions as the covariance matrix.

Specific applications follow these steps:
  • Nonlinear function processed by Taylor-series expansion (TSE) [13]: The nth-order TSE approximation of f(x,t) at \(\textbf {x} = \hat {\textbf {x}}\) is:
    $$\begin{array}{@{}rcl@{}} f(\textbf{x},t) \approx TSE\left(\textbf{x},t;n,\hat {\textbf{x}}\right). \end{array} $$
    (9)
    For scalar f(x,t), there is:
    $$ \begin{array}{l} TSE\left(\textbf{x},t;n,\hat {\textbf{x}}\right) = f\left(\hat {\textbf{x}},t\right) + f'\left(\hat {\textbf{x}},t\right)\bar {\textbf{x}} + \frac{1}{{2!}}\,f^{\prime\prime}\left(\hat {\textbf{x}},t\right){\bar {\textbf{x}}^{2}} \\+ \frac{1}{{3!}}{f^{(3)}}\left(\hat {\textbf{x}},t\right){\bar {\textbf{x}}^{3}} + \ldots + \frac{1}{{n!}}\,{f^{(n)}}\left(\hat {\textbf{x}},t\right){\bar {\textbf{x}^{n}}}\\ \end{array} $$
    (10)

    where \(\bar {\textbf {x}} = \textbf {x} - \hat {\textbf {x}}\) and \(f'(\hat {\textbf {x}},t) = \frac {{\partial f}}{{\partial {\textbf {x}}}}{|_{\textbf {x} = \hat {\textbf {x}}}},f''(\hat {\textbf {x}},t) = \frac {{{\partial ^{2}}f}}{{\partial {{\textbf {x}}^{2}}}}{|_{\textbf {x} = \hat {\textbf {x}}}},\ldots,{f^{(n)}}(\hat {\textbf {x}},t) = \frac {{{\partial ^{n}}f}}{{\partial {{\textbf {x}}^{n}}}}{|_{{\textbf {x}} = \hat {\textbf {x}}}}\).

  • Calculate the gains and updates using the EKF equations: EKF is a typical example of approximate nonlinear filtering [14]. By linearizing the nonlinear model with a first-order Taylor-series expansion, we obtain a first-order approximation of the original state equation and measurement equation. As pointed out in [8], EKF is easy to implement and has been widely used, but it has many limitations.
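The prediction-update cycle of Eqs. (2) to (8) can be sketched as a single generic step; this is a minimal textbook-style EKF in Python, with a hypothetical constant-velocity, range-only demo of our own choosing rather than the paper's radar model:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF prediction-update cycle in the spirit of Eqs. (2)-(8)."""
    # Prediction of state and covariance (Eqs. 2-3)
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Measurement prediction and innovation covariance (Eqs. 4-5)
    z_pred = h(x_pred)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    # Kalman gain (Eq. 6)
    K = P_pred @ H.T @ np.linalg.inv(S)
    # State update (Eq. 7) and Joseph-form covariance update (Eq. 8)
    x_new = x_pred + K @ (z - z_pred)
    I = np.eye(len(x))
    P_new = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return x_new, P_new

# hypothetical demo: constant-velocity state [pos, vel], range-only measurement
f = lambda x: np.array([x[0] + x[1], x[1]])
F_jac = lambda x: np.array([[1.0, 1.0], [0.0, 1.0]])
h = lambda x: np.array([np.hypot(x[0], 100.0)])
H_jac = lambda x: np.array([[x[0] / np.hypot(x[0], 100.0), 0.0]])
x0, P0 = np.array([50.0, 1.0]), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[1.0]])
x1, P1 = ekf_step(x0, P0, np.array([113.0]), f, h, F_jac, H_jac, Q, R)
print(x1, P1)
```

The Joseph form used for the covariance update keeps P symmetric and positive semidefinite even with rounding error, which matters over many recursions.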

3.2.2 Interpolation filtering

  • One-dimensional Stirling interpolation expansion [15].

    We define two operators δ and μ which satisfy:
    $$\begin{array}{@{}rcl@{}} \begin{array}{l} \delta f(x) = f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right)\\ \mu f(x) = \frac{1}{2}\left(f\left(x + \frac{h}{2}\right) + f\left(x - \frac{h}{2}\right)\right) \end{array} \end{array} $$
    (11)
    where h is the length of interpolation. Expanding f(x) with the second-order Stirling interpolation at \(x = \bar x\), we can obtain:
    $$\begin{array}{@{}rcl@{}} {}f(x) \approx f(\bar x) + {f'_{DD}}(\bar x)(x - \bar x) + \frac{{{f^{\prime\prime}_{DD}}(\bar x)}}{{2!}}{(x - \bar x)^{2}}. \end{array} $$
    (12)
    Then, we use centered differences in place of the derivatives of the Taylor-series expansion:
    $$\begin{array}{@{}rcl@{}} \begin{array}{l} {{f'}_{DD}}(\bar x) = \frac{{f(\bar x + h) - f(\bar x - h)}}{{2h}} = \frac{{\mu\delta f(\bar x)}}{h}\\ {{f^{\prime\prime}}_{DD}}(\bar x) = \frac{{f(\bar x + h) + f(\bar x - h) - 2f(\bar x)}}{{{h^{2}}}} = \frac{{\delta^{2} f(\bar x)}}{{{h^{2}}}}. \end{array} \end{array} $$
    (13)
    Plugging the Taylor series into (12), we obtain:
    $$ \begin{aligned} f(\bar x) + {{f'}_{DD}}(\bar x)(x - \bar x) + \frac{{{{f^{\prime\prime}}_{DD}}(\bar x)}}{{2!}}{(x - \bar x)^{2}} &= f(\bar x) + f'(\bar x)(x - \bar x) + \frac{{f^{\prime\prime}(\bar x)}}{{2!}}{(x - \bar x)^{2}}\\ &\quad+ \left(\frac{{{f^{(3)}}(\bar x)}}{{3!}}{h^{2}} + \frac{{{f^{(5)}}(\bar x)}}{{5!}}{h^{4}} + \ldots\right)(x - \bar x)\\ &\quad+ \left(\frac{{{f^{(4)}}(\bar x)}}{{4!}}{h^{2}} + \frac{{{f^{(6)}}(\bar x)}}{{6!}}{h^{4}} + \ldots\right){(x - \bar x)^{2}}. \end{aligned} $$
    (14)
  • Multi-dimensional Stirling interpolation expansion [13].

    We suppose \(\textbf{x} \in {R^{n}}\), and y=f(x) is a vector function. By applying the Stirling interpolation expansion we get:
    $$\begin{array}{@{}rcl@{}} \textbf{y} = f(\bar {\textbf{x}}) + {\tilde D_{\Delta x}}f + \frac{1}{{2!}}\tilde D_{\Delta x}^{2}f \end{array} $$
    (15)
    where \({\tilde D_{\Delta x}}\) and \({\tilde D^{2}_{\Delta x}}\) are the difference operators that satisfy:
    $$ \begin{array}{l} {{\tilde D}_{\Delta {\textbf{x}}}}\,f = \frac{1}{h}\left(\sum\limits_{p = 1}^{n} {\Delta {{\textbf{x}}_{p}}{\mu_{p}}{\delta_{p}}}\right)f(\bar {\textbf{x}})\\ \tilde D_{\Delta {\textbf{x}}}^{2}\,f = \frac{1}{{{h^{2}}}}\left(\sum\limits_{p = 1}^{n} {\Delta {\textbf{x}}_{p}^{2}\delta_{p}^{2}} + \sum\limits_{p = 1}^{n} {\sum\limits_{q = 1,q \ne p}^{n} {\Delta {{\textbf{x}}_{p}}\Delta {{\textbf{x}}_{q}}\left({\mu_{p}}{\delta_{p}}\right)\left({\mu_{q}}{\delta_{q}}\right)}}\right)f(\bar {\textbf{x}}) \end{array} $$
    (16)
    where δ_p and μ_p are the partial difference operator and the average operator satisfying:
    $$\begin{array}{@{}rcl@{}} \begin{array}{l} {\delta_{p}}f(\bar {\textbf{x}}) = f\left(\bar {\textbf{x}} + \frac{h}{2}{e_{p}}\right) - f\left(\bar {\textbf{x}} - \frac{h}{2}{e_{p}}\right)\\ {\mu_{p}}f(\bar {\textbf{x}}) = \frac{1}{2}\left(f\left(\bar {\textbf{x}} + \frac{h}{2}{e_{p}}\right) + f\left(\bar {\textbf{x}} - \frac{h}{2}{e_{p}}\right)\right) \end{array} \end{array} $$
    (17)

    where e p is the pth unit vector.

    The interpolation filtering algorithm above has a great advantage over TSE. The expansions include the one-dimensional and the multi-dimensional Stirling interpolation expansion. In interpolation filtering, the calculation of partial derivatives can be omitted, so the method can be applied to any function, even nonlinear discontinuous cases with singularities, and it is more accurate than EKF [13].
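The divided differences in Eq. (13) replace analytic derivatives with function evaluations. A quick numeric sanity check on a toy function of our own choosing (not from the paper) shows how close they come to the true derivatives:

```python
import math

def f(x):
    return math.sin(x)

h = 0.05     # interpolation step length
xbar = 0.7

# first- and second-order central (divided) differences, as in Eq. (13)
f1_dd = (f(xbar + h) - f(xbar - h)) / (2 * h)
f2_dd = (f(xbar + h) + f(xbar - h) - 2 * f(xbar)) / h ** 2

# analytic references: f'(x) = cos(x), f''(x) = -sin(x)
print(f1_dd - math.cos(xbar), f2_dd + math.sin(xbar))
```

Both residuals are of order h^2, consistent with the remainder terms in Eq. (14); no derivative of f is ever computed.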

3.2.3 Particle filtering

Given the posterior probability p(x_k|y_k) of the observations y_k, the optimal estimate under the minimum mean square error criterion is [15]:
$$\begin{array}{@{}rcl@{}} {\textbf{x}_{k}}\sim E\left\{ {\textbf{x}_{k}}|{\textbf{y}_{k}}\right\} = \int {{\textbf{x}_{k}}} \cdot p\left({\textbf{x}_{k}}|{\textbf{y}_{k}}\right)d{\textbf{x}_{k}}. \end{array} $$
(18)
After sequential importance sampling (SIS) of the posterior probability distribution, we obtain the particle swarm \(\left (\textbf {x}_{k}^{i},\omega _{k}^{i}\right)\), and the posterior probability density can be approximated as:
$$\begin{array}{@{}rcl@{}} \hat p\left({\textbf{x}_{0:k}}|{\textbf{y}_{1:k}}\right) \approx {\sum\nolimits}_{i = 1}^{N} {{\omega}_{k}^{i}} \delta \left({{\textbf{x}}_{0:k}} - {\textbf{x}}_{0:k}^{i}\right). \end{array} $$
(19)
And the state estimation can be expressed as:
$$\begin{array}{@{}rcl@{}} \hat {\textbf{x}} = {\sum\nolimits}_{i = 1}^{N} {{\omega_{k}^{i}}} \textbf{x}_{k}^{i}. \end{array} $$
(20)
The basic procedures of the algorithm [15] are as follows:
  • Initialization: Set k=0 and generate the particle swarm from the prior density function p(x_0).

  • Importance sampling: For k ∈ {1,2,…}, draw N samples from the transition prior density \(p\left ({\textbf {x}_{k}}|\textbf {x}_{k - 1}^{i}\right)\) and compute the weight of each particle \({\omega _{k}^{i}} = \omega _{k - 1}^{i}p\left ({\textbf {y}_{k}}|\textbf {x}_{k}^{i}\right)\) for i=1,2,…,N; then normalize the weights by \(\tilde {\omega _{k}^{i}} = {\omega _{k}^{i}}{\left [\sum \limits _{j = 1}^{N} {{\omega _{k}^{j}}} \right ]^{- 1}}\).

  • Resampling: Setting \(\tilde N_{i} =\left [N\tilde {\omega} _{k}^{i}\right ]\), we derive \(\overline {{N_{k}}} = N - {\sum \nolimits }_{i = 1}^{N} {{{\tilde N}_{i}}}\) residual particles by sampling importance resampling (SIR), with corresponding weights \({\omega _{k}^{i}} = {\overline N_{k}}^{- 1}\left (\tilde{\omega} _{k}^{i}N - {\tilde N_{i}}\right)\). The total number of particles N is kept invariant, and after resampling \({\omega _{k}^{i}} = \frac {1}{N}\).

The basic idea of the particle filter is to represent the state vector of interest as a set of random samples with associated weights and then work out the state estimate from these samples and weights. When the number of random samples is sufficient, the estimated probability density of the particles converges to the true probability density, and particle filtering accordingly converges to optimal Bayesian filtering [10].
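The sampling, weighting, and resampling steps above can be sketched as a minimal bootstrap particle filter. The scalar model, noise levels, and systematic-resampling variant here are our own illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, z, f, h, q_std, r_std):
    """One importance-sampling + resampling cycle of a bootstrap particle filter."""
    N = len(particles)
    # importance sampling: draw from the transition prior p(x_k | x_{k-1}^i)
    particles = f(particles) + rng.normal(0.0, q_std, N)
    # weight update by the Gaussian likelihood p(y_k | x_k^i), then normalize
    weights = weights * np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
    weights = weights / weights.sum()
    # systematic resampling to counter weight degeneracy
    positions = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx], np.full(N, 1.0 / N)

# toy scalar model: x_{k+1} = 0.9 x_k + u_k, z_k = x_k + v_k
f = lambda x: 0.9 * x
h = lambda x: x
N = 500
particles = rng.normal(5.0, 2.0, N)   # prior swarm around a rough initial guess
weights = np.full(N, 1.0 / N)
x_true = 5.0
for _ in range(20):
    x_true = 0.9 * x_true + rng.normal(0.0, 0.1)
    z = x_true + rng.normal(0.0, 0.5)
    particles, weights = particle_filter_step(particles, weights, z, f, h, 0.1, 0.5)
x_est = np.sum(weights * particles)   # state estimate, Eq. (20)
print(x_est)
```

Resampling at every step is the simplest way to fight the weight degeneracy described in Section 3.3; practical implementations often resample only when the effective sample size drops below a threshold.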

3.2.4 Neural network filtering [11]

Consider the system model as:
$$\begin{array}{@{}rcl@{}} \begin{array}{l} X(t) = f(X(t - 1)) + \xi (t - 1)\\ y(t) = H(t)X(t) + v(t) \end{array} \end{array} $$
(21)
where X(t) is an n-dimensional state vector, f is an n-dimensional nonlinear vector function, y(t) denotes the m-dimensional measurement vector, H is an m×n matrix, ξ(t) is the n-dimensional dynamic noise sequence, and v(t) is the m-dimensional measurement noise sequence. ξ(t) and v(t) are generally uncorrelated zero-mean Gaussian white noise sequences. That is, for all t and j, the noise statistics satisfy:
$$\begin{array}{@{}rcl@{}} \begin{array}{l} E{\xi_{t}} = 0, \text{cov} ({\xi_{t}},{\xi_{j}}) = E{\xi_{t}}{\xi_{j}^{T}} = {Q_{t}}{\delta_{tj}}\\ E{v_{t}} = 0, \text{cov} ({v_{t}},{v_{j}}) = E{v_{t}}{v_{j}^{T}} = {R_{t}}{\delta_{tj}}\\ \text{cov} ({\xi_{t}},{v_{j}}) = E{\xi_{t}}{v_{j}^{T}} = 0. \end{array} \end{array} $$
(22)

In more general settings, ξ(t) and v(t) may instead be colored noise with nonzero mean.

With the rapid development of artificial intelligence technology, researchers found that the neural network filter is well suited to nonlinear problems. Its main strengths are its learning ability and adaptability; its drawbacks are poor universality and unsatisfactory estimation accuracy.

3.2.5 UKF filtering

The basic idea of UKF is the unscented transformation (UT) [12,16], and the procedure of the method can be summarized as follows.
  1.
    Construct the sigma points. First, work out the sampling points ξ_i and their corresponding weights W_i [10,17].
    $$\begin{array}{@{}rcl@{}} \left\{ {\begin{array}{*{20}{c}} {{\xi_{0}} = \overline {\textbf{X}} }&,&{i = 0}\\ {{\xi_{i}} = \overline {\textbf{X}} + {{(\sqrt {({n_{x}} + \kappa){\textbf{P}_{x}}})}_{i}}}&,&{i = 1,\ldots,{n_{x}}}\\ {{\xi_{i + {n_{x}}}} = \overline {\textbf{X}} - {{(\sqrt {({n_{x}} + \kappa){{\textbf{P}}_{x}}})}_{i}}}&,&{i = 1,\ldots,{n_{x}}} \end{array}} \right. \end{array} $$
    (23)
    $$\begin{array}{@{}rcl@{}} {}\left\{ {\begin{array}{*{20}{c}} {{W_{0}} = \frac{\kappa }{{({n_{x}} + \kappa)}}}&,&{i = 0}\\ {{W_{i}} = \frac{1}{{[2({n_{x}} + \kappa)]}}}&,&{i = 1,\ldots,{n_{x}}}\\ {{W_{i + {n_{x}}}} = \frac{1}{{[2({n_{x}} + \kappa)]}}}&,&{i = 1,\ldots,{n_{x}}} \end{array}} \right. \end{array} $$
    (24)

    where κ is a scale parameter that can be any value satisfying (n_x + κ)≠0, \({\left (\sqrt {({n_{x}} + \kappa){{\textbf {P}}_{x}}} \right)_{i}}\) is the ith column of the square root of the matrix (n_x + κ)P_x, and n_x denotes the dimension of the state vector.

     
  2.
    Apply the nonlinear transformation to the sigma points. Each sigma sample is propagated through the nonlinear function, and we obtain:
    $$\begin{array}{@{}rcl@{}} {y_{i}} = g({\xi_{i}}),\qquad i = 0,\ldots,2{n_{x}}. \end{array} $$
    (25)
     
  3.

    Calculate the mean and variance of y. The estimated mean and covariance of y are:
    $$\begin{array}{@{}rcl@{}} \bar {\textbf{y}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}{y_{i}}},\qquad {\textbf{P}_{y}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}\left({y_{i}} - \bar {\textbf{y}}\right){{\left({y_{i}} - \bar {\textbf{y}}\right)}^{T}}}. \end{array} $$
Procedure of the algorithm [10]:
  • Calculate the sigma sampling points \(\chi _{k|k}^{i}\);

  • Predict \({\hat x_{k + 1|k}}\) and \(P_{k+1|k}\) from \(\chi _{k|k}^{i}\) according to the UT;

  • Update the observation value by the following equations.
    $$ \begin{array}{l} Z_{k + 1|k}^{i} = {h_{k + 1}}\left(\chi_{k + 1|k}^{i}\right)\\ {\hat z_{k + 1|k}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}Z_{k + 1|k}^{i}} \\ {S_{k + 1}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}\left[Z_{k + 1|k}^{i} - {{\hat z}_{k + 1|k}}\right]{{\left[Z_{k + 1|k}^{i} - {{\hat z}_{k + 1|k}}\right]}^{T}}} + {R_{k + 1}} \\ {P_{xz,k + 1}} = \sum\limits_{i = 0}^{2{n_{x}}} {{W_{i}}\left[\chi_{k + 1|k}^{i} - {{\hat x}_{k + 1|k}}\right]{{\left[Z_{k + 1|k}^{i} - {{\hat z}_{k + 1|k}}\right]}^{T}}} \\ {K_{k + 1}} = {P_{xz,k + 1}}S_{k + 1}^{- 1}\\ {\hat x_{k + 1|k + 1}} = {\hat x_{k + 1|k}} + {K_{k + 1}}\left({z_{k + 1}} - {\hat z_{k + 1|k}}\right)\\ {P_{k + 1|k + 1}} = {P_{k + 1|k}} - {K_{k + 1}}{S_{k + 1}}K_{k + 1}^{T}. \end{array} $$
    (26)

UKF performs recursion and updating of the nonlinear model state and error covariance through the nonlinear UT method. It approximates not the nonlinear model itself but the probability density function of the state [18]. The similarity between UKF and EKF is that both rely on a parametric analytical form based on the Gaussian assumption [19].
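The sigma-point construction of Eqs. (23) and (24) and the unscented transform can be sketched compactly. The linear test function below is our own sanity check: for a linear g, the UT must reproduce the exact mean and covariance:

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Sigma points and weights per Eqs. (23)-(24)."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)   # matrix square root
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    W = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    W[0] = kappa / (n + kappa)
    return np.array(pts), W

def unscented_transform(pts, W, g):
    """Propagate sigma points through g and recover mean and covariance."""
    Y = np.array([g(p) for p in pts])
    y_mean = W @ Y
    P_y = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(W, Y))
    return y_mean, P_y

# sanity check with a linear map: UT result should be exact
x = np.array([1.0, 2.0])
P = np.diag([0.5, 1.0])
A = np.array([[1.0, 0.5], [0.0, 1.0]])
pts, W = sigma_points(x, P, kappa=1.0)
m, C = unscented_transform(pts, W, lambda p: A @ p)
print(np.allclose(m, A @ x), np.allclose(C, A @ P @ A.T))
```

Note that no Jacobian appears anywhere: the nonlinear function g is only evaluated at the 2n_x + 1 sigma points, which is exactly why UKF avoids the linearization step of EKF.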

3.3 Comparison of different nonlinear filters

In [20], using EKF, the filtered radar data and the simulation show that, despite some improvement, the magnitude of the error correction is small. The linearization error usually seriously degrades the final filtering accuracy and can even lead to filtering divergence.

In our work, we use UKF, which is more accurate than EKF; moreover, UKF avoids both the truncation of higher-order Taylor terms and the calculation of the Jacobian matrix. It should be added, however, that its application is limited by the assumption of Gaussian-distributed noise.

For the interpolation filter, more sampling points are needed to achieve ideal precision, and a Taylor expansion and its approximation are still required. Considering that its performance on digital data is worse than on image data, we do not use the interpolation filter in our study.

In practice, the estimation of nonlinear, non-Gaussian stochastic systems is even more significant. An effective way to handle this is nonparametric particle filtering. Particle filtering can be applied with any state-transition model and measurement model in any environment, removing the constraint that the random quantities must follow a Gaussian distribution. However, particle degeneracy occurs in practice: as the recursion proceeds, the weights of many particles become negligible, and the variance of the weights grows over time. Based on the setting in [20], the system is assumed to have Gaussian noise, so we do not use the particle filtering algorithm here.

Although its history is short, the neural network filtering algorithm can be combined with other filters such as EKF to generate new adaptive extended Kalman filters. We do not choose it, however, because it performs best only on systems similar to those it was trained for.

From the procedure of UKF, we can see that the method is faster than EKF because there is no need to calculate the Jacobian matrix, and the mean and variance of the nonlinear function can be estimated more accurately with the UT, yielding higher accuracy. For any nonlinear function, the posterior mean and covariance computed by UKF are accurate to second order, so UKF can be used with any dynamic model, while EKF only achieves first-order precision.

Based on the above analyses, we choose UKF, which can provide more accurate data for further calculation. Applying UKF to [20] has great significance for practical applications and lays a better foundation for the future data fusion problem.

4 Numerical results

Using the data in [20], after overall analysis, we obtain the radar data map shown in Figure 2.
Figure 2

Figure of overall radar data.

When simulating the extended Kalman filtering algorithm and the unscented Kalman filtering algorithm in Matlab, we set the initial values as follows.
  • Radar period: T=2.

  • Random errors of radar distance and the position: 200 and 0.3.

  • Initial values of the state are:
    $$\begin{array}{@{}rcl@{}} \begin{array}{l} {\textbf{X}_{0}} = \left\{ \textbf{Z}(1,2),\frac{{\textbf{Z}(1,2) - \textbf{Z}(1,1)}}{{{T_{s}}}},\textbf{Z}(2,2),\frac{{\textbf{Z}(2,2) - \textbf{Z}(2,1)}}{{{T_{s}}}}\right\} \\ \textbf{H} = \left({\begin{array}{*{20}{c}} 1&0&0&0\\ 0&0&1&0 \end{array}} \right),\textbf{F} = \left({\begin{array}{*{20}{c}} 1&{{T_{s}}}&0&0\\ 0&1&0&0\\ 0&0&1&{{T_{s}}}\\ 0&0&0&1 \end{array}} \right). \end{array} \end{array} $$
    (27)

    Initial covariance is as follows:

    $$ \textbf{P} = \left({\begin{array}{*{20}{c}} {\textbf{R}(1,1)}&{\frac{{\textbf{R}(1,1)}}{{{T_{s}}}}}&{\textbf{R}(1,2)}&{\frac{{\textbf{R}(1,2)}}{{{T_{s}}}}}\\ {\frac{{\textbf{R}(1,1)}}{{{T_{s}}}}}&{\frac{{2\textbf{R}(1,1)}}{{T_{s}^{2}}}}&{\frac{{\textbf{R}(1,2)}}{{{T_{s}}}}}&{\frac{{2\textbf{R}(1,2)}}{{T_{s}^{2}}}}\\ {\textbf{R}(1,2)}&{\frac{{\textbf{R}(1,2)}}{{{T_{s}}}}}&{\textbf{R}(2,2)}&{\frac{{\textbf{R}(2,2)}}{{{T_{s}}}}}\\ {\frac{{\textbf{R}(1,2)}}{{{T_{s}}}}}&{\frac{{2\textbf{R}(1,2)}}{{T_{s}^{2}}}}&{\frac{{\textbf{R}(2,2)}}{{{T_{s}}}}}&{\frac{{2\textbf{R}(2,2)}}{{T_{s}^{2}}}} \end{array}} \right),\quad \textbf{Q} = \left({\begin{array}{*{20}{c}} {50}&{50}&{10}&{10}\\ {50}&{50}&{10}&{10}\\ {10}&{10}&{50}&{50}\\ {10}&{10}&{50}&{50} \end{array}} \right). $$
    (28)
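The initialization in Eq. (27) corresponds to a constant-velocity model with two-point differencing. It can be sketched directly; here Z holds the first two raw measurements of Table 1 and T_s is the stated radar period (a minimal setup sketch, not the full simulation of [20]):

```python
import numpy as np

Ts = 2.0   # radar period T = 2

# first two raw measurements (x0, y0) from Table 1, row-wise: x then y
Z = np.array([[768550.0, 766260.0],
              [786690.0, 786700.0]])

# initial state per Eq. (27): [x, vx, y, vy] from two-point differencing
X0 = np.array([Z[0, 1], (Z[0, 1] - Z[0, 0]) / Ts,
               Z[1, 1], (Z[1, 1] - Z[1, 0]) / Ts])

H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])        # only positions are observed
F = np.array([[1.0, Ts, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, Ts],
              [0.0, 0.0, 0.0, 1.0]])        # constant-velocity transition
print(X0)
```

The velocity components come out negative in x and small in y, matching the leftward, nearly level track visible in the tables and figures.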
For the extended Kalman filtering test, we select 16 sets of data, where x_0 is the horizontal spatial position of the plane and y_0 is the vertical spatial position. After filtering the data (x_0, y_0) by EKF and UKF, we obtain the values (x_1, y_1) and (x_1′, y_1′) [21], which are shown in Table 1. (x_1, y_1) and (x_1′, y_1′) represent the measurements obtained by the EKF algorithm and the UKF algorithm, respectively.
Table 1

Radar measurements obtained by EKF algorithm and UKF algorithm

(x_0, y_0):     (768550, 786690)  (766260, 786700)  (764390, 787190)  (762280, 787080)
(x_1, y_1):     (768540, 786700)  (766430, 786860)  (764120, 787280)  (761740, 787410)
(x_1′, y_1′):   (767784, 786694)  (766716, 786699)  (765136, 786995)  (763243, 787088)

(x_0, y_0):     (757850, 786890)  (755880, 787360)  (753780, 787530)  (751550, 787370)
(x_1, y_1):     (757840, 787090)  (755730, 787450)  (753420, 787580)  (751140, 787820)
(x_1′, y_1′):   (760133, 787052)  (757436, 787221)  (754930, 787394)  (752499, 787453)

(x_0, y_0):     (747210, 787300)  (745100, 787400)  (742990, 787490)  (740740, 787240)
(x_1, y_1):     (747200, 787510)  (745090, 787670)  (742780, 787690)  (740500, 787820)
(x_1′, y_1′):   (749442, 787473)  (746636, 787513)  (744005, 787565)  (741462, 787537)

(x_0, y_0):     (736750, 788000)  (734630, 788040)  (732380, 787720)  (730470, 788030)
(x_1, y_1):     (736750, 788010)  (734640, 788070)  (732320, 788390)  (730040, 789020)
(x_1′, y_1′):   (738559, 787701)  (735821, 787837)  (733182, 787875)  (730688, 787968)

The tracks of (x_1, y_1) after EKF and (x_1′, y_1′) after UKF are shown in Figure 3a,b.
Figure 3

Filter image of applying (a) EKF algorithm and (b) UKF algorithm. Sixteen sets of data are in use.

From the above images, we can clearly see that the track in the second figure, obtained by UKF, is smoother, especially around points 8 and 9. Since the actual flight path of an aircraft cannot be as jagged as a broken line, the smooth curve in Figure 3b is much closer to the actual trajectory of the aircraft. That is to say, in these test cases, the UKF algorithm is the more suitable method.

In the following, we choose 20 sets of data and smooth and track the path of the aircraft using the Kalman filtering algorithms. The resulting tracks are shown in Figure 4a for EKF and Figure 4b for UKF.
Figure 4

Filter image of applying (a) EKF algorithm and (b) UKF algorithm. Twenty sets of data are in use.

Compared with the curve in Figure 4a, the track by UKF is much smoother, and the curve beyond point 9 is almost perfectly smooth. Although the curve from the EKF algorithm stays closer to the original measurement data, by the actual principles of flight the aircraft should move smoothly, which fully illustrates that there is measurement error and that we should filter the data before using it.

In the next test case, we select 80 sets of data and again smooth and track the path of the aircraft; the track after EKF is shown in Figure 5a and the track after UKF in Figure 5b.
Figure 5

Filter image of applying (a) EKF algorithm and (b) UKF algorithm. Eighty sets of data are in use.

From Figure 5b, we can find that after unscented Kalman filtering the curve is almost perfectly smooth. This shows that, by filtering the set of measurements with UKF, the curve we obtain is much closer to the actual track.

5 Conclusions

We have presented the algorithm procedures of five types of nonlinear filters used in radar tracking data filtering and compared them; among these, UKF shows the best performance. For further illustration, we carried out simulations of UKF, with EKF as the comparison. Both theory and the simulation results show that the precision of the data and tracks obtained by UKF is superior to the other filters, especially to the EKF result, with smoother and better tracking performance. Moreover, for nonlinear functions, the UKF algorithm can greatly improve calculation accuracy in practical applications. Applying the UKF algorithm to the method in [20] would provide a preferable basis for future data fusion and may have great significance for practical applications.

Declarations

Acknowledgements

This work was supported by the National Science and Technology Major Project of China under grant 2013ZX03001003-003 and the BUPT Research Innovation Project under grant 2013RC0104.

Authors’ Affiliations

(1)
College of Automation, Harbin Engineering University, No.145 Nantong Street, Nangang District, Harbin, 150110, China
(2)
Key Laboratory of Universal Wireless Communication, Ministry of Education, Beijing University of Posts and Telecommunications, No.10 Xitucheng Road, Haidian District, Beijing, 100876, China
(3)
Faculty of Science, Harbin Engineering University, No.145 Nantong Street, Nangang District, Harbin, 150110, China

References

  1. BH Ding, H Huang, Radar Tracking Filter Technology Research [J] (Shanghai Jiaotong University, Shanghai, 2011).Google Scholar
  2. Q Liang, B Zhang, C Zhao, Y Pi, TDoA-Based Passive Localization: Terrestrial versus Underwater Environment[J]vol. 24, no. 10, (IEEE Trans on Parallel and Distributed Processing, 2011), pp. 2100-2108, Oct 2013.Google Scholar
  3. Q Liang, X Cheng, Sherwood Samn, NEW: Network-enabled Electronic Warfare for Target Recognition [J]vol. 46, no. 2, (IEEE Trans on Aerospace and Electronic Systems), pp. 558-568, April 2010.Google Scholar
  4. MY Ge, Studies of Monitoring Radar Target Tracking and Localization [J] (91,913 troops of the PLA Navy, Dalian, 2012).Google Scholar
  5. CQ Hong, Studies of Target Searching Algorithm in Video Based on Content [D] (Zhejiang University, Hangzhou, 2011).Google Scholar
  6. ZX Yao, Design and Research of DC/DC Converter and State Observer [D] (Guangxi University, Nanning, 2008).Google Scholar
  7. DC Liu, X Tan, Performance comparison of nonlinear filtering algorithm [J] (Aba Teachers College, Sichuan, 2011).Google Scholar
  8. B Liu, Z Chen, X Liu, F Yang, J Geng, A New Method Based on the Polytopic Linear Differential Inclusion for the Nonlinear Filter [J] (Beijing Institute of Technology, Beijing, 2013).Google Scholar
  9. Q Liang, Radar Sensor Wireless Channel Modeling in Foliage Environment: UWB versus Narrowband [J],vol. 11, no. 6 (IEEE Sensors Journal, June 2011), pp. 1448–1457.Google Scholar
  10. Q Liang, Automatic Target Recognition Using Waveform Diversity in Radar Sensor Networks [J],vol. 29, no. 2 (Pattern Recognition Letters (Elsevier), 2008), pp. 377–381.Google Scholar
  11. J Liang, Q Liang, Design and Analysis of Distributed Radar Sensor Networks [J],vol. 22, no. 11 (IEEE Trans on Parallel and Distributed Processing, November 2011), pp. 1926–1933.Google Scholar
  12. Y He, JJ Xiu, JW Zhang, X Guan, Radar Data Processing and Application [M] (Electronic Industry Press, Beijing, July, 2009).Google Scholar
  13. CS Qu, HL Xu, Y Tan, Overview of Nonlinear Bayesian Filtering Algorithms [J] (Second Artillery Engineering College, Xi’an, 2008).Google Scholar
  14. B Liu, Z Chen, X Liu, F Yang, A New Method Based on polytypic Linear Inclusion for Nonlinear filter with Non-Gaussian Noise [J] (Beijing Institute of Technology, Beijing, 2013).Google Scholar
  15. CP Yu, GY Li, HY Zhang, Comparison and Analysis of Performance of Several Nonlinear Filtering Algorithms [J] (PLA Information Engineering University, Zhengzhou, November, 2008).Google Scholar
  16. J Xu, GM Dimirovski, Y Jing, C Shen, UKF Design and Stability for Nonlinear Stochastic Systems with Correlated Noises [J] (LA, USA, December, 2007).Google Scholar
  17. Y Ma, Z Wang, A UKF Algorithm Based on the Singular Value Decomposition of State Covariance [J] (Graduate School of the Chinese Academy of Sciences, Beijing, January, 2010).Google Scholar
  18. F Gao, 3D Ect Data Correction and Image Reconstruction Based on the Theory of Stochastic Optimization [D] (Zhejiang University, Hangzhou, April, 2009).Google Scholar
  19. D Simon, Kalman Filtering with State Constraints: A Survey Of Linear and Nonlinear Algorithms [J] (Cleveland State University, Cleveland, Ohio44115, USA, January, 2009).Google Scholar
  20. Z Lin, Research Information Fusion Systems Engineering Design Criteria [D] (Harbin Engineering University, Harbin, 2005).Google Scholar
  21. ZX Hu, Filtering Based On Neural Networks [J] (East China University of Technology, Shanghai, March, 1996).Google Scholar
