- Research
- Open Access

# Particle filter track-before-detect implementation on GPU

Xu Tang, Jinzhou Su, Fangbin Zhao, Jian Zhou and Ping Wei

*EURASIP Journal on Wireless Communications and Networking* **2013**:38

https://doi.org/10.1186/1687-1499-2013-38

© Tang et al.; licensee Springer. 2013

**Received:** 11 December 2012. **Accepted:** 8 January 2013. **Published:** 19 February 2013.

## Abstract

Track-before-detect (TBD) based on the particle filter (PF) is known for its outstanding performance in detecting and tracking weak targets. However, its large computational burden makes real-time application difficult. To solve this problem, an effective implementation of PF-based TBD on graphics processing units (GPUs) is proposed in this article. By recasting the particle propagation process and the weight calculation process on the parallel structure of the GPU, the running time of the algorithm can be greatly reduced. Simulation results in an infrared scenario and a radar scenario are presented, comparing implementations on two types of GPU card with a CPU-only implementation.

## Keywords

- Track-before-detect
- Particle filter
- GPU

## 1. Introduction

Classical target detection and tracking is performed on the basis of pre-processed measurements, which are composed of the thresholded output of the sensor. In this way, no effective integration over time can take place and much information is lost. To avoid this problem, the track-before-detect (TBD) technique was developed to use the unthresholded or low-threshold measurements of sensors directly, so as to exploit the raw information. TBD-based procedures jointly process several consecutive measurements; they can thus increase the signal-to-noise ratio (SNR) and realize the detection and tracking of weak targets simultaneously.

The scenarios faced by TBD are almost always nonlinear and non-Gaussian, so the particle filter (PF) [1] is a reasonable solution. The PF is a Monte Carlo simulation method widely used in target tracking for linear and nonlinear dynamic systems [2, 3]. Salmond and coauthors [4, 5] first introduced the PF implementation of TBD (PFTBD) in the infrared scenario. Then, Rutten et al. [6–8] proposed several improved PFTBD algorithms. Boers and Driessen [9] extended PFTBD to radar target detection and tracking applications. PFTBD algorithms have demonstrated improved track accuracy and the ability to follow low-SNR targets, but at the price of an extreme increase in computational complexity.

In recent years, the field programmable gate array (FPGA) and the graphics processing unit (GPU) have become the most important architectures in parallel computing. With the rapid development of GPU technology, the GPU is known for its significant parallel computing ability in both graphics processing and general-purpose computing. Moreover, the compute unified device architecture (CUDA) [10] was introduced to facilitate hybrid utilization of the GPU and the central processing unit (CPU) [11]. FPGAs have been used to implement PFs, for example in [12, 13]. However, as the number of particles increases, the GPU is expected to outperform the FPGA. More specifically, PF algorithms have been implemented on GPUs [14–16] and achieve significant speedup over traditional CPU implementations without losing accuracy, thanks to the GPU's floating-point computation ability.

To the best of the authors' knowledge, no GPU implementation of a PFTBD algorithm has been given in the literature. In this article, we propose a novel implementation of the PFTBD algorithm on the GPU using CUDA programming. Concerning the difficulties of PFTBD beyond those of the plain PF, a new scheme to dispatch GPU resources to the particles is developed, and the programming of the likelihood ratio area is considered carefully.

Simulations in both an infrared scenario and a radar scenario are given. Two types of GPU card are utilized, and the PFTBD implementations on both of them achieve significant speedup over the CPU-only implementation. The initial version of this research first appeared in [17].

This article is organized as follows. Section 2 reviews the theory of PFTBD. In Section 3, we discuss the details of the parallel implementation of PFTBD on the GPU with CUDA programming. The simulation results and discussion can be found in Section 4. Finally, we conclude the article in Section 5.

## 2. PFTBD theory

In this article, single-target recursive TBD algorithms are considered. The way raw sensor measurements are processed in TBD differs from classical target tracking methods, and the measurement model and the data processing vary with the sensor type. This section summarizes the mathematical models for the infrared scenario and the radar scenario; the simulations in Section 4 are based on these models.

### 2.1. Target model and measurement model of infrared sensor

The target state at time $k$ is $X_k = \left[x_k\ \dot{x}_k\ y_k\ \dot{y}_k\ I_k\right]^T$, where $x_k$ and $y_k$ are the positions of the target, $\dot{x}_k, \dot{y}_k$ are the velocities of the target, and $I_k$ is the unknown intensity returned from the target. The process noise $V_k$ is standard white Gaussian noise. A constant velocity (CV) process model $X_k = FX_{k-1} + V_k$ is used, defined by the transition matrix and the process noise covariance matrix

$$F=\mathrm{diag}\left(\begin{pmatrix}1 & T\\ 0 & 1\end{pmatrix},\begin{pmatrix}1 & T\\ 0 & 1\end{pmatrix},1\right),\qquad Q=\mathrm{diag}\left(q_1\begin{pmatrix}\tfrac{T^3}{3} & \tfrac{T^2}{2}\\ \tfrac{T^2}{2} & T\end{pmatrix},\ q_1\begin{pmatrix}\tfrac{T^3}{3} & \tfrac{T^2}{2}\\ \tfrac{T^2}{2} & T\end{pmatrix},\ q_2T\right),$$

where $T$ is the period of time between measurements, and $q_1$ and $q_2$ denote the variance of the acceleration noise and of the noise in the target return intensity, respectively.

The variable ${E}_{k}\in \left\{e,\overline{e}\right\}$ denotes the existence or non-existence of the target and evolves according to a two-state Markov chain with transitional probability matrix

$$\Pi = \begin{pmatrix} 1-P_b & P_b \\ P_d & 1-P_d \end{pmatrix},$$

where $P\left(E_k = e \mid E_{k-1} = \overline{e}\right) = P_b$ is the probability of target birth and $P\left(E_k = \overline{e} \mid E_{k-1} = e\right) = P_d$ is the probability of target disappearance.

The measurement $z_k$ at each time step is a two-dimensional intensity image of the region of interest, consisting of $n \times m$ resolution cells. The measurement of each cell $z_k^{(i,j)}$, with $i = 1, \ldots, n$ and $j = 1, \ldots, m$, is

$$z_k^{(i,j)} = \begin{cases} h^{(i,j)}(X_k) + w_k^{(i,j)}, & E_k = e, \\ w_k^{(i,j)}, & E_k = \overline{e}, \end{cases}$$

where $h^{(i,j)}(X_k)$ is the intensity of the target in cell $(i,j)$. It models the spread reflection form of the target and is defined for each cell by

$$h^{(i,j)}(X_k) \approx \frac{\Delta_x \Delta_y I_k}{2\pi\Sigma^2} \exp\left(-\frac{\left(i\Delta_x - x_k\right)^2 + \left(j\Delta_y - y_k\right)^2}{2\Sigma^2}\right),$$

where $\Delta_x$ and $\Delta_y$ denote the size of a resolution cell in each dimension, and the parameter $\Sigma$ represents the extent of blurring. The likelihood functions during the presence and absence of the target, respectively, in each cell can then be written as

$$p\left(z_k^{(i,j)} \mid X_k, E_k = e\right) = \mathcal{N}\left(z_k^{(i,j)};\, h^{(i,j)}(X_k),\, \sigma^2\right), \qquad p\left(z_k^{(i,j)} \mid E_k = \overline{e}\right) = \mathcal{N}\left(z_k^{(i,j)};\, 0,\, \sigma^2\right).$$

Assuming the measurement noise is independent across cells, the likelihood ratio contribution of the cells $(i,j)$ affected by the target is given as

$$\ell\left(z_k \mid X_k\right) = \prod_{i \in C_x(X_k)}\ \prod_{j \in C_y(X_k)} \exp\left(-\frac{h^{(i,j)}(X_k)\left(h^{(i,j)}(X_k) - 2z_k^{(i,j)}\right)}{2\sigma^2}\right), \qquad (6)$$

where $C_x(X_k)$ and $C_y(X_k)$ are the index sets of cells affected by the target in the $x$ and $y$ dimensions, named the likelihood ratio area. Their size is determined by application parameters such as the resolution of the observation area and the intensity of the target. The bigger the likelihood ratio area, the more latent target information can be utilized.

The noise $w_k^{(i,j)}$ in each cell is assumed to be independent white Gaussian with zero mean and variance $\sigma^2$. The SNR of the target is defined by

$$\mathrm{SNR} = 10\log_{10}\left(\frac{I_k^2}{\sigma^2}\right)\ \mathrm{dB}.$$

### 2.2. Target model and measurement model of radar sensor

The target state at time $k$ is $X_k = \left[x_k\ \dot{x}_k\ y_k\ \dot{y}_k\right]^T$, where $x_k$ and $y_k$ are the positions of the target and $\dot{x}_k, \dot{y}_k$ are the velocities of the target. With $T$ as the update time, the transition matrix and the process noise covariance matrix can be defined as

$$F=\mathrm{diag}\left(\begin{pmatrix}1 & T\\ 0 & 1\end{pmatrix},\begin{pmatrix}1 & T\\ 0 & 1\end{pmatrix}\right),\qquad Q=\mathrm{diag}\left(a_{\max x}^2\begin{pmatrix}\tfrac{T^4}{4} & \tfrac{T^3}{2}\\ \tfrac{T^3}{2} & T^2\end{pmatrix},\ a_{\max y}^2\begin{pmatrix}\tfrac{T^4}{4} & \tfrac{T^3}{2}\\ \tfrac{T^3}{2} & T^2\end{pmatrix}\right),$$

where $a_{\max x}$ and $a_{\max y}$ are the maximum accelerations and the process noise $V_k$ is standard white Gaussian noise.

At time $k$, the measurement $z_k$ is the reflected power of the target over $N_r \times N_d \times N_b$ sensor cells, so that $z_k = \left\{z_k^{(i,j,l)} : i = 1, \ldots, N_r,\ j = 1, \ldots, N_d,\ l = 1, \ldots, N_b\right\}$. The power measurement per range-Doppler-bearing cell is $z_k^{(i,j,l)} = \left|z_{A,k}^{(i,j,l)}\right|^2$, where $z_{A,k}^{(i,j,l)}$ represents the complex amplitude data, which depends on the presence of the target:

$$z_{A,k}^{(i,j,l)} = \begin{cases} A_k\, h_A^{(i,j,l)}(X_k) + n_k^{(i,j,l)}, & E_k = e, \\ n_k^{(i,j,l)}, & E_k = \overline{e}. \end{cases}$$

Here $A_k$ is the complex amplitude of the return, with ${A}_{k}={\tilde{A}}_{k}{e}^{i{\varphi}_{k}}$ and ${\varphi}_{k}\in \left(0,2\pi \right)$.

The measurement noise $n_k$ is complex Gaussian, $n_k = n_{Ik} + i\,n_{Qk}$, where $n_{Ik}$ and $n_{Qk}$ are independent, zero-mean white Gaussian noises with variance $\sigma_n^2$. They are related to the power noise $W_k$ as $W_k = \left|n_{Ik} + i\,n_{Qk}\right|^2$.

The reflection form $h_A^{(i,j,l)}(X_k)$ is defined for every range-Doppler-bearing cell by

$$h_A^{(i,j,l)}(X_k) = \exp\left(-\frac{\left(r_i - r_k\right)^2}{2R^2}L_r - \frac{\left(d_j - d_k\right)^2}{2D^2}L_d - \frac{\left(b_l - b_k\right)^2}{2B^2}L_b\right),$$

for $i = 1, \ldots, N_r$, $j = 1, \ldots, N_d$, and $l = 1, \ldots, N_b$, where $r_i$, $d_j$, and $b_l$ are the centres of cell $(i,j,l)$ and $r_k$, $d_k$, and $b_k$ are the range, Doppler, and bearing of the target state $X_k$. $L_r$, $L_d$, and $L_b$ are constants of power losses, and $R$, $D$, and $B$ are related to the size of a range, a Doppler, and a bearing cell, respectively. In summary, the power in every range-Doppler-bearing measurement cell can be defined as

$$z_k^{(i,j,l)} = \begin{cases} \left|A_k\, h_A^{(i,j,l)}(X_k) + n_k^{(i,j,l)}\right|^2, & E_k = e, \\ \left|n_k^{(i,j,l)}\right|^2, & E_k = \overline{e}, \end{cases}$$

which generalizes the power of the target in every range-Doppler-bearing cell.

The likelihood ratio in the radar measurement model is processed in the same way as in the infrared measurement model of (6).

### 2.3. PF solution for TBD

In the PF solution, the *a posteriori* filtering distribution is approximated with a mixture of two parts of particles (see the formula after this list).

- (1) One part is the birth particles $\left\{{X}_{k}^{\left(b\right)i},{\tilde{w}}_{k}^{\left(b\right)i}\right\}$, which did not exist at the previous time and are sampled from a proposal distribution; for these, ${E}_{k-1}^{\left(b\right)i}=\overline{e}$ but ${E}_{k}^{\left(b\right)i}=e$.
- (2) The other part is the continuing particles $\left\{{X}_{k}^{\left(c\right)i},{\tilde{w}}_{k}^{\left(c\right)i}\right\}$, which remain in existence and are sampled from the state transition probability density; for these, ${E}_{k-1}^{\left(c\right)i}=e$ and ${E}_{k}^{\left(c\right)i}=e$.
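In our notation (following [6]), this mixture approximation of the posterior can be written as

$$p\left(X_k, E_k = e \mid z_{1:k}\right) \approx \sum_{i=1}^{N_b} w_k^{(b)i}\,\delta\left(X_k - X_k^{(b)i}\right) + \sum_{i=1}^{N_c} w_k^{(c)i}\,\delta\left(X_k - X_k^{(c)i}\right),$$

where $w_k^{(b)i}$ and $w_k^{(c)i}$ are the weights after joint normalization over both particle sets.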

The algorithm routine of PFTBD is given as follows [6]:

- (1) Sample $N_b$ birth particles from the proposal density ${X}_{k}^{\left(b\right)i}\sim {q}_{b}\left({X}_{k}\mid{E}_{k}=e,{E}_{k-1}=\overline{e},{z}_{k}\right)$ and calculate the unnormalized weights of the birth particles from the likelihood ratio:

$$\tilde{w}_k^{(b)i} = \ell\left(z_k \mid X_k^{(b)i}\right)\frac{p\left(X_k^{(b)i} \mid E_k = e, E_{k-1} = \overline{e}\right)}{q_b\left(X_k^{(b)i} \mid E_k = e, E_{k-1} = \overline{e}, z_k\right)}, \qquad (20)$$

where $p\left(X_k^{(b)i} \mid E_k = e, E_{k-1} = \overline{e}\right)$ is the prior density of the target.

- (2) Sample $N_c$ continuing particles from the state transition probability density $X_k^{(c)i} \sim q_c\left(X_k \mid E_k = e, E_{k-1} = e, z_k\right)$. The unnormalized weights of the continuing particles are given as

$$\tilde{w}_k^{(c)i} = \ell\left(z_k \mid X_k^{(c)i}\right)\frac{p\left(X_k^{(c)i} \mid X_{k-1}^{(c)i}\right)}{q_c\left(X_k^{(c)i} \mid E_k = e, E_{k-1} = e, z_k\right)}, \qquad (21)$$

where $q_c\left(X_k \mid E_k = e, E_{k-1} = e, z_k\right)$ is the state transition density of the target, so the weight reduces to the likelihood ratio $\ell\left(z_k \mid X_k^{(c)i}\right)$.

- (3) Resample the $N_b + N_c$ particles down to $N_c$ particles $\left\{X_k^i, 1/N_c\right\}$. Give the estimate of the target state at time $k$ as the particle mean and calculate the root mean square error (RMSE) of the location over the Monte Carlo trials by

$$\hat{X}_k = \frac{1}{N_c}\sum_{i=1}^{N_c} X_k^i, \qquad \mathrm{RMSE}_k = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[\left(\hat{x}_{k,m} - x_k\right)^2 + \left(\hat{y}_{k,m} - y_k\right)^2\right]},$$

where $M$ is the number of Monte Carlo trials.

The main difference of PFTBD from the general PF, in both the infrared scenario and the radar scenario, is that a product over the cell intensities in the observation area is needed in the calculation of each particle weight, as in (6). Moreover, this operation is the main contributor to the high time complexity of PFTBD. Suppose that the time complexity of the weight process of the PF is $O(m)$ with $m$ particles. Then in PFTBD, the time complexity of the weight process is $O(m \times n^2)$ for a sequential algorithm with an $n \times n$ likelihood ratio area and $m$ particles. Thus, an efficient parallel implementation should be introduced to relieve this overhead.
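As a rough illustration, with the $m = 100{,}000$ particles of Section 4.1 and a $5 \times 5$ likelihood ratio area, every time step already requires on the order of $10^5 \times 25 = 2.5 \times 10^6$ per-cell likelihood evaluations and multiplications, which a sequential CPU must process one by one.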

## 3. The implementation of PFTBD on GPU

### 3.1. Parallel processing on CUDA

Modern GPUs contain many streaming multiprocessors (SMs), each of which contains many scalar stream processors (SPs) that can perform the same instruction simultaneously. CUDA is a general-purpose parallel computing architecture that enables GPUs to solve complex problems more efficiently than a CPU. In CUDA programming, the GPU is responsible for the parallel, computationally intensive parts and the CPU accomplishes the other parts. On the GPU, each task is expressed as a kernel, whose instances execute as threads on the SPs. The threads are organized into blocks that are executed on the SMs [18]. Threads can communicate efficiently with the other threads in the same block by using the shared memory. Moreover, two rules of thumb should be noted: (1) excessive data transfer between the GPU and the CPU should be avoided; (2) accessing data in the shared memory is much cheaper than in the global memory of the GPU [18].
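As a toy sketch of this execution model (not part of the PFTBD implementation; the kernel name and parameters are ours), the following kernel stages data in per-block shared memory before each thread processes one element:

```cuda
// Illustrative only: one thread per element, blocks of 256 threads on the SMs.
__global__ void scaleKernel(const float* in, float* out, float a, int n)
{
    __shared__ float tile[256];                    // fast per-block shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x; // global thread index
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;    // stage data in shared memory
    __syncthreads();                               // visible to the whole block
    if (i < n)
        out[i] = a * tile[threadIdx.x];            // same instruction, all threads
}
// Host launch: scaleKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0f, n);
```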

### 3.2. PFTBD on GPU

Obviously, in implementations of both the PF and PFTBD, the particle propagation process and the weight computing process carry the highest computational cost but also exhibit a high degree of parallelizability. In an implementation of the PF on the GPU, both processes can be realized in one kernel, because the operations in the individual threads are regular. In PFTBD, however, unlike the particle propagation process in the PF, there are two kinds of particles, $X_k^{(c)i}$ and $X_k^{(b)i}$, whose states are obtained in different ways, reflecting the difference between ${q}_{b}\left({X}_{k}^{\left(b\right)i}\mid{E}_{k}^{\left(b\right)i}=e,{E}_{k-1}^{\left(b\right)i}=\overline{e}\right)$ and $q_c\left(X_k \mid E_k = e, E_{k-1} = e, z_k\right)$ in Section 2.3. Besides that, the calculation of a particle weight requires product operations across threads, which also differs from the PF. Therefore, we schedule PFTBD onto the GPU with two CUDA kernels, named the birth kernel and the continue kernel, respectively. The birth kernel calculates the states and weights of the birth particles; the continue kernel does the same calculations for the continuing particles.

The input data of both kernels, transferred from the CPU memory to the GPU global memory, are the measurement data of the current time step. For the continue kernel, the states of the continuing particles at the previous time step are also needed. GPU blocks and threads are allocated according to the number and states of the particles. The states of the continuing particles $X_k^{(c)i}$ are updated by drawing from the state transition (prior) density of the target, while the states of the birth particles $X_k^{(b)i}$ are sampled from the proposal density $q(\cdot)$ with a uniform distribution on the GPU. The Gaussian-distributed noise is generated on the GPU by the CUDA library functions.
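A minimal sketch of the propagation half of these two kernels is given below, assuming the CV model of Section 2.1 with the state stored as five consecutive floats per particle. The kernel names, the simplified (diagonal) process noise, and the uniform birth bounds are our assumptions; in the actual implementation, the weight computation of (20) and (21) is fused into the same kernels. The cuRAND device API supplies the noise, with one `curandState` per thread, pre-initialized elsewhere with `curand_init`.

```cuda
#include <curand_kernel.h>

// Continue kernel (propagation part): X_k = F X_{k-1} + V_k for each particle.
__global__ void continuePropagate(float* X, const float* Xprev,
                                  curandState* rng, float T,
                                  float q1s, float q2s, int Nc)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= Nc) return;
    curandState s = rng[i];
    const float* xp = &Xprev[5 * i];
    float* x = &X[5 * i];
    x[0] = xp[0] + T * xp[1] + q1s * curand_normal(&s);  // x position
    x[1] = xp[1]             + q1s * curand_normal(&s);  // x velocity
    x[2] = xp[2] + T * xp[3] + q1s * curand_normal(&s);  // y position
    x[3] = xp[3]             + q1s * curand_normal(&s);  // y velocity
    x[4] = xp[4]             + q2s * curand_normal(&s);  // intensity
    rng[i] = s;
}

// Birth kernel (propagation part): sample from a uniform proposal q_b.
__global__ void birthPropagate(float* X, curandState* rng,
                               float xMax, float yMax, float vMax,
                               float iMax, int Nb)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= Nb) return;
    curandState s = rng[i];
    float* x = &X[5 * i];
    x[0] = xMax * curand_uniform(&s);                    // x in (0, xMax]
    x[1] = vMax * (2.0f * curand_uniform(&s) - 1.0f);    // vx in [-vMax, vMax]
    x[2] = yMax * curand_uniform(&s);                    // y in (0, yMax]
    x[3] = vMax * (2.0f * curand_uniform(&s) - 1.0f);    // vy in [-vMax, vMax]
    x[4] = iMax * curand_uniform(&s);                    // intensity in (0, iMax]
    rng[i] = s;
}
```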

After obtaining the states of the particles, both kernels calculate the weights of the particles. The weight computations in (20) and (21) need the particle states; for this reason, obtaining the states and computing the weights are combined into one kernel, to avoid excessive data transmission between the CPU and the GPU. This part of the kernel can be designed in various forms according to the size of $C_x(X_k)$ and $C_y(X_k)$ in (6). The number of blocks is set equal to the number of particles, and the number of threads per block is set equal to the number of cells in the likelihood ratio area (see the launch sketch below). In our implementation, depending on the size of the surveillance region, we restrict $C_x(X_k)$ and $C_y(X_k)$ from the full sets of cell indices to a part of them. With this approach, the application background can be extended from small scenes to large scenes. This point is discussed further in Section 3.3.
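Under these assumptions (all names are ours; the kernel itself is sketched in Section 3.3), the launch configuration for the weight part looks like:

```cuda
// One block per particle, one thread per cell of the likelihood ratio area
// (padded up to a power of two for the product reduction in Section 3.3).
// nextPow2 is an assumed helper rounding up to the next power of two.
int threads = nextPow2(areaW * areaH);          // e.g. 5 x 5 = 25 -> 32
weightKernel<<<numParticles, threads, threads * sizeof(float)>>>(
    d_states, d_meas, d_weights, imgW, areaW, areaH, sigma2, Sigma2);
```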

Both kernels take the measurement $z_k$ as input; to update the continuing particles, the previous states of the continuing particles $X_{k-1}^{(c)i}$ are also needed. After computing the states and weights of the particles on the GPU, the states and weights of both parts of particles are transferred back to the CPU as the outputs.

Other operations, such as calculating the probability of detection, resampling, and estimating the state of the target, require interaction among all particle states and weights and cannot be implemented in parallel; they are therefore arranged on the CPU (a standard resampling sketch is given below).
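The article does not specify which resampling scheme is used; as one common choice, a host-side systematic resampling over the normalized weights could look like the following sketch (function name and types are ours):

```cpp
#include <random>
#include <vector>

// Systematic resampling: returns the indices of the Nc surviving particles.
// Assumes the weights w (size Nb + Nc) are normalized to sum to 1.
std::vector<int> systematicResample(const std::vector<float>& w, int Nc,
                                    std::mt19937& gen)
{
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    float offset = u01(gen) / Nc;        // one random offset for all draws
    std::vector<int> idx(Nc);
    float cum = w[0];
    int j = 0;
    for (int i = 0; i < Nc; ++i) {
        float u = offset + static_cast<float>(i) / Nc;
        while (cum < u && j + 1 < static_cast<int>(w.size()))
            cum += w[++j];               // walk along the weight CDF
        idx[i] = j;                      // particle j is replicated
    }
    return idx;
}
```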

### 3.3. Likelihood ratio area programming

The likelihood ratio function is a multiplication over the contributions of all likelihood ratio area cells. For an $n \times n$ array of likelihood ratio area cells and $m$ particles, the weight computation entails $m$ blocks with $n^2$ threads in each block. The value of $n^2$ must be smaller than the maximum number of threads per block allowed by the hardware. Under this condition, the likelihood ratio of each cell can be calculated in parallel, one per thread, but the product itself cannot be parallelized naively.

The product can, however, be organized as a binary tree reduction in the shared memory of each block: in every step, each active thread multiplies its value with that of a partner thread, halving the number of active threads. This reduces the complexity of the product to $O(\log_2 n)$, compared to $O(n)$ without using the shared memory.
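A minimal sketch of such a weight kernel for the infrared model is given below, assuming one block per particle, a power-of-two number of threads per block (extra threads contribute the neutral factor 1), unit cell size, and in-bounds cell indices. The `pointSpread` helper, the variable names, and the memory layout are our assumptions rather than the article's code; the per-cell factor follows (6).

```cuda
// Gaussian point-spread h^(i,j)(X_k) from Section 2.1, with unit cell size.
__device__ float pointSpread(const float* xk, int i, int j, float Sigma2)
{
    float dx = (float)i - xk[0];
    float dy = (float)j - xk[2];
    return xk[4] / (6.2831853f * Sigma2)                 // I_k / (2*pi*Sigma^2)
         * expf(-(dx * dx + dy * dy) / (2.0f * Sigma2));
}

// One block per particle; each thread handles one likelihood-ratio-area cell,
// then a shared-memory tree reduction forms the product in O(log2 n) steps.
__global__ void weightKernel(const float* X, const float* z, float* w,
                             int imgW, int areaW, int areaH,
                             float sigma2, float Sigma2)
{
    extern __shared__ float prod[];               // one slot per thread
    int p = blockIdx.x;                           // particle index
    int t = threadIdx.x;                          // cell index inside the area
    const float* xk = &X[5 * p];

    int i = (int)xk[0] - areaW / 2 + t % areaW;   // cell column
    int j = (int)xk[2] - areaH / 2 + t / areaW;   // cell row
    if (t < areaW * areaH) {
        float h = pointSpread(xk, i, j, Sigma2);
        float zij = z[j * imgW + i];              // measured cell intensity
        prod[t] = expf(-h * (h - 2.0f * zij) / (2.0f * sigma2));  // eq. (6)
    } else {
        prod[t] = 1.0f;                           // padding: neutral element
    }
    __syncthreads();

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {   // tree reduction
        if (t < s) prod[t] *= prod[t + s];
        __syncthreads();
    }
    if (t == 0) w[p] = prod[0];                   // unnormalized particle weight
}
```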

In the radar scenario, the likelihood ratio area can contain $R \times D \times B$ cells, which in general exceeds the maximum available threads in one block. Thus, some strategy must be applied to achieve massive parallelism in large scenes. According to the number of threads available in one block, the likelihood area is divided into small sub-areas, as illustrated in Figure 3. The multiplications within each sub-area $A_1, A_2, \ldots, A_{\mathrm{last}}$ are calculated on the GPU in parallel, while the sub-areas themselves are processed sequentially. The result for the whole likelihood ratio area is then the product of the results of all the sub-areas (see the sketch below).
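A host-side sketch of this scheme might look as follows, assuming a kernel `weightTileKernel` that is a variant of the `weightKernel` above, reducing one sub-area and multiplying the result into the particle's weight (all names are ours):

```cuda
// Sets every particle weight to the multiplicative identity.
__global__ void initOnesKernel(float* w, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) w[i] = 1.0f;
}

// Assumed: reduces the cells of sub-area `tile` for particle blockIdx.x and
// multiplies the partial product into w[blockIdx.x] (variant of weightKernel).
__global__ void weightTileKernel(const float* X, const float* z, float* w,
                                 int tile);

// Tiles A1, A2, ..., Alast are looped over sequentially; within each tile the
// product is reduced in parallel, and partial products accumulate in w.
void computeWeightsTiled(const float* dX, const float* dz, float* dw,
                         int numParticles, int numTiles, int tileThreads)
{
    initOnesKernel<<<(numParticles + 255) / 256, 256>>>(dw, numParticles);
    for (int a = 0; a < numTiles; ++a)
        weightTileKernel<<<numParticles, tileThreads,
                           tileThreads * sizeof(float)>>>(dX, dz, dw, a);
}
```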

Obviously, the operations in Figure 3 are complex. From the algorithmic side, Torstensson and Trieb [19] have investigated different sizes of the likelihood ratio area in the radar application. Their scheme uses small likelihood ratio areas to trade off detection performance against the extremely high computational cost. In the GPU implementation, we can follow the idea of [19] to sidestep the complex operations discussed above: by using likelihood ratio areas whose sizes are just below the per-block thread limit, we can obtain better performance with only a small increase in computational cost. Simulations for different sizes of the likelihood ratio area are given in Section 4.2.

## 4. Simulation results

### 4.1. Simulations in infrared scenario

In the infrared scenario, the observation area consists of $n \times m = 20 \times 20$ cells and the cell size is $\Delta_x = \Delta_y = 1$. The probabilities of birth and death are set to $P_b = 0.05$ and $P_d = 0.05$. The initial state of the target is $X_7 = [4.2\ 0.45\ 7.2\ 0.25\ 20]^T$. The SNR is 3 dB. More information about the parameters can be found in [7]. Various numbers of particles are adopted, with 100 Monte Carlo trials each. To verify the effect of the different implementations, simulations are performed on three systems, which are given in Table 1.

**Table 1: Benchmark systems**

| | System 1 | System 2 | System 3 |
|---|---|---|---|
| Software | Visual Studio 2010 Professional with CUDA 4.1 SDK | Visual Studio 2010 Professional with CUDA 4.1 SDK | MATLAB 2010a |
| Hardware | Nvidia GeForce 9500 GT | Nvidia GeForce GT 240 | Pentium(R) Dual-Core E5800 @ 3.20 GHz |
| | 32 cores @ 550 MHz | 96 cores @ 550 MHz | |
| | 16.0 GB/s GDDR2 | 54.4 GB/s GDDR3 | |

#### 4.1.1. The performance with different numbers of particles

Figure 4 shows that the probability of detection improves significantly as the number of particles increases. When the number of particles is 100, the existence probability always stays below the detection threshold, so the target cannot be detected. However, with 100,000 particles, not only can the target be detected faster, but the detection probability also rises rapidly. With accumulation over time, the detection probability eventually reaches more than 0.8 while the target is present. Therefore, the number of particles is one of the key factors in the detection performance of PFTBD. On the other hand, Figure 5 shows that the location error decreases efficiently as the number of particles increases. Note that both results are consistent with the theoretical algorithm and differ only slightly from the results of System 3, which are not given here for simplicity.

#### 4.1.2. The speedup ratio of GPU to CPU

Figure 6 shows that most of the time is spent in the two kernels, Birth particle(∙) and Continue particle(∙): eighty percent of the execution time is spent on the GPU, which means that the GPU is being fully utilized.

From Figure 7, we can see that the speedup ratio of the GPUs over the CPU improves significantly as the number of particles grows. Moreover, Figure 7 shows that the speedup ratio of the GT 240 is about quadruple that of the 9500 GT. This is consistent with the specifications in Table 1: the GT 240 has triple the number of CUDA cores of the 9500 GT, and its memory bandwidth is much larger.

The simulation in the radar scenario is based on the model in [19]. The length of the observation time is 30 frames and the target is present from frame 7 to frame 21. Initially, the range and Doppler cells of the particles are uniformly distributed over [85, 90] km and [-0.22, -0.10] km/s in the $x$ direction, and over [-0.1, 0.1] km and [-0.10, 0.10] km/s in the $y$ direction. The measurements consist of $N_r \times N_d \times N_b = 50 \times 16 \times 1$ sensor cells at each time step. The initial state of the target is $X_7 = [89.6\ 0.2\ 0\ 0]^T$. The SNR is 3 dB. The number of birth particles and of continuing particles is 10,000 each. More information about the parameters can be found in [19]. In this simulation, the same benchmark systems as in Section 4.1 are used. Table 2 compares the running times of Systems 2 and 3 for various sub-area sizes: the GPU time grows only slowly with the sub-area size, whereas the CPU time grows rapidly, so the GPU implementation pays off for larger likelihood ratio areas.

**Table 2: Running time for various sub-area sizes, compared between Systems 2 and 3**

| Condition | 1 × 1 | 3 × 3 | 5 × 5 | 7 × 7 | 13 × 13 | 15 × 15 |
|---|---|---|---|---|---|---|
| System 3 time (s) | 3.422 | 6.070 | 10.025 | 16.171 | 41.322 | 50.853 |
| System 2 time (s) | 10.343 | 10.436 | 10.578 | 10.976 | 15.972 | 20.446 |

## 5. Conclusions

In this article, we proposed an efficient implementation of the PFTBD algorithm on the GPU by CUDA programming. Since the parallelizable part of the PFTBD algorithm bears the main computational load, the running time of the GPU-implemented PFTBD algorithm is greatly reduced by dealing effectively with the particles and the likelihood ratio computations. The implementations were tested on two types of GPU card in an infrared scenario and a radar scenario. As a result, the performance of the PFTBD algorithm can be significantly improved by employing many more particles on the GPU than would be feasible on the CPU.

## Declarations

### Acknowledgments

This study was supported by the Fundamental Research Funds for the Central Universities of China (ZYGX2011J012).

## References

1. Ristic B, Arulampalam S, Gordon N: *Beyond the Kalman Filter: Particle Filters for Tracking Applications*. Boston: Artech House; 2004.
2. Salmond DJ, Fisher D, Gordon NJ: Tracking in the presence of spurious objects and clutter. In *Proceedings of SPIE, Signal and Data Processing of Small Targets*, vol. 3373. Farnborough, Hants, UK; 1998:460-747.
3. Arulampalam MS, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. *IEEE Trans Signal Process* 2002, 50(2):174-188. doi:10.1109/78.978374
4. Salmond DJ, Birch H: A particle filter for track-before-detect. In *Proceedings of the American Control Conference*, vol. 5. Arlington, VA, USA; 2001:3755-3760. doi:10.1109/ACC.2001.946220
5. Rollason M, Salmond D: A particle filter for track-before-detect of a target with unknown amplitude. In *Proceedings of the IEEE International Seminar on Target Tracking: Algorithms and Applications*, vol. 1. Farnborough, UK: QinetiQ; 2001:14/1-14/4. doi:10.1049/ic:20010240
6. Rutten MG, Gordon NJ, Maskell S: Efficient particle-based track-before-detect in Rayleigh noise. In *Proceedings of SPIE, Signal and Data Processing of Small Targets*, vol. 5428. Orlando, FL; 2004:509-519.
7. Rutten MG, Ristic B, Gordon NJ: A comparison of particle filters for recursive track-before-detect. In *Proceedings of the 8th International Conference on Information Fusion*, vol. 1. Piscataway; 2005:169-175. doi:10.1109/ICIF.2005.1591851
8. Rutten MG, Gordon NJ, Maskell S: Recursive track-before-detect with target amplitude fluctuations. *IEE Proc Radar Sonar Navigat* 2005, 152:345-352. doi:10.1049/ip-rsn:20045041
9. Boers Y, Driessen JN: Multitarget particle filter track before detect application. *IEE Proc Radar Sonar Navigat* 2004, 151:351-357. doi:10.1049/ip-rsn:20040841
10. NVIDIA: The resource for CUDA developers. 2010. http://www.nvidia.com/object/cuda_home.html
11. Shu Z, Yanli C: *GPU Computing for High Performance-CUDA*. Beijing, China; 2009.
12. Bolic M, Djuric PM, Hong S: Resampling algorithms and architectures for distributed particle filters. *IEEE Trans Signal Process* 2005, 53(7):2442-2450.
13. Bolic M, Athalye A, Hong S, Djuric PM: Study of algorithmic and architectural characteristics of Gaussian particle filters. *J Signal Process Syst* 2009, 61:205-218.
14. Lenz C, Panin G, Knoll A: A GPU-accelerated particle filter with pixel-level likelihood. In *International Workshop on Vision, Modeling and Visualization (VMV)*. Konstanz, Germany; 2008:235-241.
15. Hendeby G, Hol J, Karlsson R, Gustafsson F: A graphics processing unit implementation of the particle filter. In *Proceedings of the 15th European Signal Processing Conference*. Poznan, Poland; 2007:1639-1643.
16. Peihua L: An efficient particle filter-based tracking method using graphics processing unit (GPU). *J Signal Process Syst* 2012, 68:317-332. doi:10.1007/s11265-011-0620-z
17. Xu T, Jinzhou S, Fangbin Z: Particle filter track-before-detect implementation on GPU. In *Proceedings of the International Conference on Communications, Signal Processing, and Systems (CSPS)*. Beijing, China; 2012:16-18.
18. NVIDIA: *CUDA C Programming Guide 4.1*. 2011. https://developer.nvidia.com/cuda-downloads
19. Torstensson J, Trieb M: *Particle Filtering for Track Before Detect Applications*. Linköping University, Sweden; 2005.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.