 Research
 Open Access
Localization of acoustic sources using a decentralized particle filter
Florian Xaver^{1}, Gerald Matz^{1}, Peter Gerstoft^{2} and Christoph Mecklenbräuker^{1}
https://doi.org/10.1186/1687-1499-2011-94
© Xaver et al; licensee Springer. 2011
 Received: 19 January 2011
 Accepted: 12 September 2011
 Published: 12 September 2011
Abstract
This paper addresses the decentralized localization of an acoustic source in a (wireless) sensor network based on the underlying partial differential equation (PDE). The PDE is transformed into a distributed state-space model and augmented by a source model. Inferring the source state amounts to a nonlinear non-Gaussian Bayesian estimation problem, for whose solution we implement a decentralized particle filter (PF) operating within and across clusters of sensor nodes. The aggregation of the local posterior distributions from all clusters is achieved via an enhanced version of the maximum-consensus algorithm. Numerical simulations illustrate the performance of our scheme.
Keywords
 source localization
 acoustic wave equation
 distributed state-space model
 sequential Bayesian estimation
 decentralized particle filter
 argumentum-maximi consensus algorithm
I Introduction
Background and state of the art
In this paper, we use a physics-based model and a Bayesian approach to develop a decentralized particle filter (PF) for acoustic source localization in a sensor network (SN). In a decentralized PF, the processing is done locally at the sensor nodes without a fusion center; as a consequence of this decentralized processing, the estimated position is known at every sensor.
The problem formulation in this paper is motivated by indoor localization of an acoustic source. A hallway is modeled including basic boundary conditions for windows (membranes) and walls.
The source localization problem has been studied, e.g., in [1–3], [[4], p. 4089 ff], [[5], p. 746 ff], and [6], all of which use a sequential Bayesian estimator [7] to infer the source position from observations at multiple sensors. These papers build on a state-space transition equation describing the global source state trajectory over time and a measurement equation relating these states to the measurements. The underlying physical process is captured by the measurement equation. A decentralized approach aims at identifying global source states that are common to all decentralized units. Each decentralized unit typically consists of a sensor and a Bayesian estimator associated with the sensor's neighborhood.
A different approach incorporates the partial differential equation (PDE) describing the dynamics of the physical process. In source tracking applications, this implies that the field itself becomes part of the state, which thus is distributed over all space. For instance, the acoustic wave field is described by a hyperbolic PDE for pressure, and hence the state vector comprises the spatio-temporal pressure field. This approach is used in (ocean) acoustic models [8, 9] and geophysical models [[4], p. 4089 ff], [10–13]. For localization, the model is augmented with a source model providing a relation between global source states (e.g., position) and distributed field states (i.e., pressure).
Our work follows this second approach. The novel aspects include the formulation of a source model suitable for distributed processing, the design of a distributed particle filter for estimating the posterior distribution of field and source states, and the development of a modified version of the maximum-consensus (MC) algorithm [14] for maximum a-posteriori (MAP) estimation of the source location. For several loosely connected agents, a consensus algorithm lets the agents converge to a group decision based on local information.
Contributions and outline
$\mathcal{L}\left\{p\left(r,t\right)\right\}=s\left(r,t\right),\phantom{\rule{1em}{0ex}}\mathcal{B}\left\{p\left(r,t\right)\right\}=0,\phantom{\rule{2em}{0ex}}\left(1\right)$

where $\mathcal{L}$ denotes the PDE operator, $\mathcal{B}$ the boundary/initial conditions, p(r, t) the quantity of interest, and s(r, t) the source term. If the PDE parameters, the source term, and the boundary/initial conditions are known, determining p(r, t) is the forward problem. In contrast, inverse problems amount to estimating PDE parameters or states such as source locations from measurements of p(r, t).
Here, y_{k} is the observation, v_{k} denotes the measurement noise, and the mapping h_{k} characterizes the measurement. Taken together, (2) and (3) constitute the state-space model; see Section II and [17].
In the Gaussian case, Bayesian estimation based on the state-space model (2), (3) leads to various kinds of Kalman filters [1, 7, 18]. Here, the Bayesian estimator builds on the particle filter (PF) framework due to (i) the various possible geometries and (ii) the nonlinearity of the state-space model. After discussing a centralized PF for source localization and tracking in Section III, we develop a decentralized implementation of the PF by splitting the nodes of the sensor network (SN) into clusters. The clustered SN architecture entails a corresponding decomposition of the state-space model, and the decentralized PF performs intra-cluster computation and inter-cluster communication on the decomposed state-space model (see Section IV).
The decentralized PF yields local posterior distributions within each cluster. Localization of the acoustic sources amounts to finding the maxima of the global posterior distribution. To this end, we propose a modified maximum-consensus algorithm in Section V. After a summary in Section VI, we describe in Section VII extensive numerical simulations that illustrate the properties and performance of our source localization method.
II System model
In this section, we develop a state-space model from the PDE of the spatio-temporal acoustic field, using the finite difference method (FDM) [15, 19] to obtain a discretization in space and time.
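To illustrate how an FDM discretization of a wave equation yields a linear state transition of the form x_{k+1} = Φ x_k + Γ s_k, the following sketch builds Φ and Γ for a 1D wave equation with Dirichlet boundaries. This is a simplified stand-in for the paper's 2D model with boundary conditions (4a)-(4f); the function name, the 1D restriction, and the boundary choice are our own illustrative assumptions.

```python
import numpy as np

def wave_state_space(I, c, dr, dt):
    """Build state-space matrices for the 1D wave equation p_tt = c^2 p_xx,
    discretized by central finite differences with Dirichlet boundaries
    (an illustrative simplification of the paper's 2D model).

    The state x_k = [p_k; p_{k-1}] stacks two consecutive time slices, so
    p_{k+1} = 2 p_k - p_{k-1} + (c*dt/dr)^2 * D2 p_k + dt^2 s_k.
    """
    lam2 = (c * dt / dr) ** 2
    # Second-difference matrix D2 (tridiagonal: -2 on diagonal, 1 off-diagonal)
    D2 = (np.diag(-2.0 * np.ones(I)) + np.diag(np.ones(I - 1), 1)
          + np.diag(np.ones(I - 1), -1))
    A = 2.0 * np.eye(I) + lam2 * D2          # update applied to p_k
    Phi = np.block([[A, -np.eye(I)],         # x_{k+1} = Phi x_k + Gamma s_k
                    [np.eye(I), np.zeros((I, I))]])
    Gamma = np.vstack([np.eye(I) * dt ** 2, np.zeros((I, I))])
    return Phi, Gamma
```

Note the sparsity of Φ: only the tridiagonal block A and the identity blocks are populated, which is exactly the structure the decentralized scheme of Section IV later exploits.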
A Forward model: spatio-temporal field
We use (4d) and (4f) for modeling a hallway. ∂Ω_{1} is the transparent part of the boundary of Ω (with normal vector n), modeling an infinite domain behind the uncovered area. The boundary ∂Ω_{2} (disjoint from ∂Ω_{1}) models windows, whereas ∂Ω_{3} (disjoint from ∂Ω_{1} and ∂Ω_{2}) models walls. The choice of these boundary conditions affects the resulting state-space model but does not change the general formulation of the decentralized approach.
B Finite difference method
Here, k is the discrete time index and Δ_{t} is the temporal sampling period, which is upper bounded by Δ_{r}/c to ensure numerical stability. The optimal choice of Δ_{t} is beyond the scope of this paper; we refer the reader to [16].
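A small helper makes the stability constraint concrete. The text quotes the bound Δ_t ≤ Δ_r/c; the classical CFL condition for a d-dimensional FDM lattice is the slightly tighter Δ_r/(c√d). The sketch below uses the tighter bound with a safety margin; both the margin and the function name are illustrative assumptions, not the paper's recipe.

```python
import math

def stable_dt(dr, c, d=2, safety=0.9):
    """Return a temporal sampling period below the FDM stability bound.

    Uses the classical CFL bound dr / (c * sqrt(d)) for a d-dimensional
    lattice, scaled by a safety factor; this is an illustrative choice,
    slightly stricter than the dr/c bound quoted in the text.
    """
    return safety * dr / (c * math.sqrt(d))
```

For the hallway parameters of Section VII (Δ_r = 12.24 cm, c = 340 m/s), this yields a step safely below Δ_r/c ≈ 360 µs.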
C Forward model
D Source model
We assume that there are S sources whose positions form a subset $\mathcal{S}$ of the discretization lattice $\mathcal{L}$, i.e., $s\left[i,j,k\right]={\sum}_{l=1}^{S}{s}_{0}\left[k-{k}_{l}\right]\delta \left(i-{i}_{l},j-{j}_{l}\right)$, where s_{0}[k] is a known waveform but the positions (i_{l}, j_{l}) and activation times k_{l} are unknown. These unknowns are captured via the integer variables n[i, j, k] that describe, for a lattice point (i, j), the time elapsed between the source occurrence and the current time instant k, i.e., for the l-th source, n[i_{l}, j_{l}, k] = max{k − k_{l}, 0}. If there is no source at position (i, j), then n[i, j, k] = 0.
thereby linking the state Equation 6 and the forward model (5).
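The source model above can be sketched directly: assemble s[i, j, k] and the elapsed-time variables n[i, j, k] on a small lattice. The array layout and the function/parameter names are our own illustrative choices.

```python
import numpy as np

def source_field(I, J, K, s0, sources):
    """Assemble s[i, j, k] = sum_l s0[k - k_l] * delta(i - i_l, j - j_l)
    on an I x J lattice over K time steps, together with the elapsed-time
    variables n[i, j, k] = max(k - k_l, 0) used by the source model.

    `sources` is a list of tuples (i_l, j_l, k_l); names are illustrative.
    """
    s = np.zeros((I, J, K))
    n = np.zeros((I, J, K), dtype=int)
    for (il, jl, kl) in sources:
        for k in range(K):
            if 0 <= k - kl < len(s0):
                s[il, jl, k] += s0[k - kl]     # known waveform, shifted by k_l
            n[il, jl, k] = max(k - kl, 0)      # elapsed time since activation
    return s, n
```

Points without a source keep n[i, j, k] = 0 throughout, matching the convention stated above.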
E Noise model
So far, no process noise has been considered. Since the source function depends on time and space, these are the quantities subject to noise, which is modeled as follows. The temporal noise perturbs a source's life span through an additional term in (6); this is not possible for the spatial perturbation, because the positions of the sources are encoded in the subvector n_{k} through the placement of its elements. In practice, the spatial noise is therefore realized by a time-dependent matrix D_{k} that displaces the elements of a vector to other positions (jitter) according to the mapping between the grid and the subvector n_{k}.
and a random integer jitter d(l) whose probability mass is concentrated about zero.
F Augmented state-space model
Note that nonlinearity is inherent in (11).
with e_{l} denoting the l-th unit vector.
III Bayesian estimation
A Particle filter
To perform Bayesian estimation (e.g., MAP or MMSE) of (part of) the state vector x_{k} given the past observations ${y}_{1:k}={\left[{y}_{1}^{T}...{y}_{k}^{T}\right]}^{T}$, the posterior distribution f(x_{k} | y_{1:k}) is computed sequentially.
Here, the transition PDF f(x_{k+1} | x_{k}) is known and f(x_{k} | y_{1:k}) has been computed in the previous time step.
${u}_{k}^{\left[l\right]}$ can be computed from the particle ${x}_{k}^{\left[l\right]}$ according to (13). The dependency of the matrices on k stems from the spatial noise.
Once all unnormalized weights have been obtained, the actual weights are computed via the normalization ${\omega}_{k+1}^{\left[l\right]}={\stackrel{\u0303}{\omega}}_{k+1}^{\left[l\right]}\u2215{\sum}_{{l}^{\prime}=1}^{L}{\stackrel{\u0303}{\omega}}_{k+1}^{\left[{l}^{\prime}\right]}$. Particle filters suffer from a general problem termed sample degeneracy: after some time, only a few particles have non-negligible weights. This problem is circumvented by resampling [21]. With sampling importance resampling (SIR), new samples are drawn from the distribution ${\sum}_{l=1}^{L}{\omega}_{k}^{\left[l\right]}\delta \left({x}_{k}-{x}_{k}^{\left[l\right]}\right)$ and all weights are reset to the identical value ${\omega}_{k}^{\left[l\right]}=1\u2215L$.
where ${n}_{0}^{\left[l\right]}$ and ${u}_{\ell}^{\left[l\right]}$ are determined by the realizations of the source parameters (cf. (13) and Section IID). The random variable k_{start} denotes the time duration between source occurrence and activation of the estimator.
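The normalization and SIR resampling steps described above can be sketched as follows; this is a minimal NumPy illustration, not the paper's implementation, and the function names are ours.

```python
import numpy as np

def normalize_weights(unnormalized):
    """Normalize unnormalized particle weights so that they sum to one."""
    w = np.asarray(unnormalized, dtype=float)
    return w / w.sum()

def sir_resample(particles, weights, rng):
    """Sampling importance resampling (SIR): draw L indices with
    probabilities given by the weights, duplicate the selected particles,
    and reset all weights to the uniform value 1/L."""
    L = len(weights)
    idx = rng.choice(L, size=L, p=weights)   # draw from sum_l w^[l] delta(.)
    return particles[idx], np.full(L, 1.0 / L)
```

After resampling, high-weight particles appear multiple times while negligible-weight particles are discarded, which is precisely how SIR counteracts sample degeneracy.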
B Source localization
(Note that n_{k} contains all information about the position and activation time of the sources.)
where L_{i,j,k} is the number of particles for which ${\left[{n}_{k}^{\left[l\right]}\right]}_{i+\left(j-1\right)I}>0$.
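The posterior source probability can thus be estimated by counting particles, as in the following sketch. The 0-based array indexing emulates the column-major index i + (j − 1)I used above; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def source_posterior(n_particles, I, J):
    """Estimate Ps(i, j, k) as the fraction L_{i,j,k}/L of particles whose
    elapsed-time entry at lattice point (i, j) is positive.

    `n_particles` has shape (L, I*J), one row per particle, holding the
    subvector n_k; the paper's 1-based index i + (j-1)*I becomes the
    0-based index i + j*I here (i varies fastest).
    """
    L = n_particles.shape[0]
    counts = (n_particles > 0).sum(axis=0)   # L_{i,j,k} per lattice point
    return (counts / L).reshape(J, I).T      # Ps as an I x J map
```

A lattice point where half of the particles carry a positive elapsed time thus receives posterior source probability 0.5.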
IV Decentralized scheme
The particle filter developed in the previous section is centralized in nature: it requires all pressure measurements and the observation modalities described by the globally assembled likelihood function, and it operates on the full state vector x_{k} in a fusion center. Moreover, the computed estimates are inherently unknown to the individual sensor nodes. In a SN context, such constraints are undesirable since they imply a large communication overhead to collect the measured data, a high computational effort due to the high-dimensional state vector, a feedback channel to the sensor nodes to spread the estimates, and central knowledge of the measurement noise. Therefore, we develop a decentralized scheme that distributes the data collection and computational costs among several clusters of sensor nodes. This is achieved by splitting the state-space model (11), (14) into lower-dimensional submodels (each corresponding to a cluster), cf. [22, 23]. Due to the sparsity of the state-space matrices Φ and Γ, these submodels are only loosely coupled, and thus a decentralized PF that requires little communication between the clusters can be developed.
A SN clusters and partitioned state-space model
and the superscript ^{(m)} refers to region m.
This coupling in Equation 25 is only possible for the time-independent part of these matrices. For uncorrelated noise between clusters, however, the time-dependent part, i.e., D_{k}, is calculated separately on every cluster at each time step according to Section IIE; see below.
Here, ${\mathcal{N}}^{\left(m\right)}$ is the set of subregions adjacent to Ω^{(m)}, and ${\Phi}_{12}^{\left(m,{m}^{\prime}\right)}$ is obtained from Φ_{12} by extracting the rows and columns corresponding to ${\mathcal{L}}^{\left(m\right)}$ and ${\mathcal{L}}^{\left({m}^{\prime}\right)}$. The off-diagonals of Φ_{12} are extremely sparsely populated; in fact, (26) contains only a few nonzero terms, corresponding to adjacent pressure samples and to sources moving from one cluster to another. ${D}_{k}^{\left(m,{m}^{\prime}\right)}$ is generated by every cluster m' such that the composition of all submatrices ${D}_{k}^{\left(m\right)}$ and ${D}_{k}^{\left(m,{m}^{\prime}\right)}$ equals D_{k}. In practice, the elements of ${D}_{k}^{\left(m\right)}$ are calculated separately on every cluster by means of spatial noise, with a message to the neighbor clusters triggered whenever a source hop (migration) from one cluster to another is detected (this takes over the purpose of ${D}_{k}^{\left(m,{m}^{\prime}\right)}$ and supersedes (28)). Furthermore, the coupling term ${\xi}_{k}^{\left(m\right)}$ means that pressure samples at subregion boundaries are exchanged between neighboring clusters in order to compute the finite differences.
Boundary conditions do not play a role in the decomposition step as long as (i) they do not depend on adjacent neighbors and (ii) their numerical solution fits into (5). If the first condition is violated, an additional term ${\Phi}_{11}^{\left(m,{m}^{\prime}\right)}$ or ${\Phi}_{21}^{\left(m,{m}^{\prime}\right)}$ arises in the matrix ${T}_{k}^{\left(m,{m}^{\prime}\right)}$.
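The decomposition into per-cluster diagonal blocks plus sparse coupling blocks can be illustrated on a small dense matrix standing in for the sparse Φ. The index-list representation of clusters and the function name are our own illustrative choices.

```python
import numpy as np

def decompose(Phi, blocks):
    """Split a state matrix into per-cluster diagonal blocks Phi^(m) and
    coupling blocks Phi^(m,m') that hold the few nonzeros tying adjacent
    clusters together.

    `blocks` lists, for each cluster m, the state indices belonging to it.
    Couplings that are entirely zero (non-adjacent clusters) are dropped.
    """
    diag = {m: Phi[np.ix_(idx, idx)] for m, idx in enumerate(blocks)}
    coupling = {}
    for m, idx in enumerate(blocks):
        for mp, idxp in enumerate(blocks):
            if mp == m:
                continue
            C = Phi[np.ix_(idx, idxp)]
            if np.any(C):                 # keep only nonzero couplings
                coupling[(m, mp)] = C
    return diag, coupling
```

Each cluster can then advance its substate as Φ^(m) x^(m) plus the coupling contributions from its neighbors, recovering the centralized update exactly.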
B Decentralized particle filter
For the decentralized PF, we need to distribute the sampling (particle generation) step and the weight computation step. Based on the local particles and weights, each cluster can then compute posterior source probabilities in a similar manner as in Section IIIB.
Here, ${x}_{k}^{\left[l,m\right]}$ is a randomly chosen previous particle and ${{n}^{\prime}}_{k}^{\left[l,m\right]}$ is a (local) noise vector realization. Furthermore, the coupling terms are ${\xi}_{k}^{\left[l,m\right]}={\sum}_{{m}^{\prime}\in {\mathcal{N}}^{\left(m\right)}}{T}_{k}^{\left(m,{m}^{\prime}\right)}{x}_{k}^{\left[l,{m}^{\prime}\right]}$ and ${\sum}_{{m}^{\prime}\in {\mathcal{N}}^{\left(m\right)}}{R}_{k}^{\left(m,{m}^{\prime}\right)}{u}_{k}^{\left[l,{m}^{\prime}\right]}$, respectively. In order to compute the former, only those elements of ${x}_{k}^{\left[l,{m}^{\prime}\right]}$ that correspond to pressure samples on the boundaries of adjacent subregions are exchanged, and whenever a source hops from one cluster to another, a message is sent.
are computed within each cluster and then shared among all clusters to obtain the final unnormalized weight. The computation of the global factorizable likelihood by means of distributed protocols is treated in [24] and [25]. If these protocols take longer than the time span between two estimator iterations, the particle filter turns into a particle predictor.
3) (Re)sampling: A remaining problem with the decentralized PF is that the sampling (particle generation) step (30) requires the clusters to pick local particles ${x}_{k}^{\left[l,m\right]}$, m = 1, ..., M, that correspond to the same global particle ${x}_{k}^{\left[l\right]}$. This choice is made at random according to the weights ${\omega}_{k}^{\left[l\right]}$. The same problem occurs in the resampling procedure. Since a central random number generator whose output is distributed to each cluster would incur a large communication overhead, we propose to use identical pseudo-random number generators in all clusters, initialized with the same seed, thereby ensuring that all clusters perform the same (re)sampling (cf. [24, 26]).
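The common-seed idea is easy to demonstrate: two clusters that run the same seeded generator over the same (globally shared) weights draw identical index sequences, so no randomness needs to be communicated. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def cluster_resample_indices(weights, seed):
    """Resampling indices as computed locally by one cluster.

    Every cluster calls this with the same seed and the same global
    weights; identical pseudo-random number generators then yield
    identical index sequences, so all clusters select the same global
    particles without exchanging random numbers.
    """
    rng = np.random.default_rng(seed)
    L = len(weights)
    return rng.choice(L, size=L, p=weights)
```

Only the seed (agreed once at initialization, cf. Algorithm 1) and the weights (exchanged anyway for the global weight computation) are required.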
V Decentralized source localization
the maximum and the maximizing state of the posterior PDF Ps(i, j, k) in (23) must be found. In the decentralized scheme, each cluster has access only to the local posterior PDF of the state subvector ${x}_{k}^{\left(m\right)}$. To find the global maximizing state, each cluster determines its local maximizing state, and the clusters then use a distributed consensus protocol to determine the global maximum. For simplicity, this procedure is developed here for one source.
While the local maxima with regard to ${\mathcal{L}}^{\left(m\right)}$ can be determined within each cluster, the global maximization with regard to m requires communication between the clusters. Since sharing the local maxima among all clusters via broadcast transmissions would require large coordinated transmissions, we compute the global maximum via the maximum-consensus (MC) algorithm [14]. For the MC algorithm, we assume that only neighboring clusters communicate with each other. Thus, each cluster sends to the adjacent clusters a message containing the local maximum and the position at which it is achieved. In the subsequent steps, each cluster compares the incoming "maximum" messages with its current estimate of the global maximum and retains the most likely one together with its associated position. In the next iteration, this message is sent to the neighboring clusters.
 1) Send a message containing the estimates ${\widehat{P}}_{k,max}^{\left(m\right)}$ and $\left({\widehat{i}}_{k}^{\left(m\right)},{\widehat{j}}_{k}^{\left(m\right)}\right)$ to the neighbor clusters ${\mathcal{N}}^{\left(m\right)}$.
 2) Receive the corresponding messages from the neighbor clusters; if a neighbor ${m}^{\prime}\in {\mathcal{N}}^{\left(m\right)}$ remains silent, set ${\widehat{P}}_{k,max}^{\left({m}^{\prime}\right)}={\widehat{P}}_{k-1,max}^{\left({m}^{\prime}\right)}$.
 3) Update the maximum probability and position as ${\widehat{P}}_{k+1,max}^{\left(m\right)}={\widehat{P}}_{k,max}^{\left({m}_{0}\right)},\phantom{\rule{1em}{0ex}}\left({\widehat{i}}_{k+1}^{\left(m\right)},{\widehat{j}}_{k+1}^{\left(m\right)}\right)=\left({\widehat{i}}_{k}^{\left({m}_{0}\right)},{\widehat{j}}_{k}^{\left({m}_{0}\right)}\right),$
 4) If ${\widehat{P}}_{k+1,max}^{\left(m\right)}\ne {\widehat{P}}_{k,max}^{\left(m\right)}$, go to 1); otherwise go to 2).
For a fixed maximum, all clusters converge to the true maximum after a number of iterations that depends on the diameter of the cluster communication graph. Here, the position of the maximum moves as the distributed PF evolves, and the AMC then allows the clusters to jointly track the maximum.
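For a fixed maximum, the flooding behavior of the MC algorithm can be sketched in a few lines: each cluster repeatedly replaces its (value, position) pair by the best pair among itself and its neighbors. This is a simplified synchronous illustration of the protocol above, not the paper's AMC implementation.

```python
def max_consensus(local, neighbors, iters):
    """Synchronous max-consensus over a cluster graph.

    `local` maps cluster -> (value, position); `neighbors` maps
    cluster -> list of adjacent clusters. After as many iterations as the
    graph diameter, every cluster holds the global maximum and its position.
    """
    est = dict(local)
    for _ in range(iters):
        new = {}
        for m, nbrs in neighbors.items():
            candidates = [est[m]] + [est[mp] for mp in nbrs]
            new[m] = max(candidates, key=lambda vp: vp[0])  # keep best value
        est = new
    return est
```

On a chain of clusters, the maximum propagates one hop per iteration, which is why the convergence time is governed by the graph diameter.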
VI Algorithm summary
A Dimensions and tradeoffs
Since we are estimating the 2D position and activation time for each of the S sources, the number of unknowns equals 3S. This is relevant for the choice of the number of particles, cf. [4]. For the calculation of the forward model (state transition), however, the dimension of the state vector x_{ k }is relevant which equals 3IJ. In the decentralized case, the computational complexity of the forward model is distributed across all clusters.
We now consider the behavior for a large number of clusters. Generally, the volume of a polytope (cluster) ${\mathcal{L}}^{\left(m\right)}$ with edge lengths ${e}_{i}^{\left(m\right)}$ in a d-dimensional lattice $\mathcal{L}\subset {\mathbb{Z}}^{d}$ is given by $|{\mathcal{L}}^{\left(m\right)}|={\prod}_{i=1}^{d}{e}_{i}^{\left(m\right)}$, while its (d − 1)-dimensional surface equals $|\partial {\mathcal{L}}^{\left(m\right)}|=2{\sum}_{j=1}^{d}{\prod}_{i=1,i\ne j}^{d}{e}_{i}^{\left(m\right)}$.
Generally, the dimension of the equation system to be solved per cluster is $3|{\mathcal{L}}^{\left(m\right)}|$, compared with $3|\mathcal{L}|$ in the centralized case.
In our 2D problem, let the lattice $\mathcal{L}$ be partitioned into M = M_{i}M_{j} clusters of equal size, M_{i} clusters in i-direction and M_{j} clusters in j-direction. Then e_{1} = I/M_{i} and e_{2} = J/M_{j}, and the volume is $|{\mathcal{L}}^{\left(m\right)}|=IJ\u2215{M}_{\mathsf{\text{i}}}{M}_{\mathsf{\text{j}}}$. As M → ∞, the dimension of the equation system, which determines the amount of computation, is in $\mathcal{O}\left(1\u2215M\right)$ [27]. Thus, the computational effort per cluster decreases as the number of clusters increases. On the other hand, an increasing number of clusters leads to a larger number of boundaries and hence to a larger communication overhead (i.e., message exchange between adjacent clusters).
Algorithm 1: Global initialization
generate priors ${\mathcal{X}}_{0}$; // Equation (20)
decompose ${\mathcal{X}}_{0}$ to $\left\{{\mathcal{X}}_{0}^{\left(m\right)}\right\}$; // Equation (24)
choose seed s_{0} (Section IVB3);
for m = 1 to M parallel do
DDSIRPF(${\mathcal{X}}_{0}^{\left(m\right)}$, s_{0}) of cluster m;
Algorithm 2: DDSIRPF(): Decentralized distributed SIR particle filter of cluster m
input : ${\mathcal{X}}_{0}^{\left(m\right)}$, s_{0}
k ← 1;
wait while no signal sensed and no wakeup call;
send wakeup call to other clusters;
while estimating do
observe: ${y}_{k}^{\left(m\right)}$
$\left\{{\stackrel{\u0304}{\mathcal{W}}}_{k}^{\left(m\right)},{\mathcal{X}}_{k}^{\left(m\right)}\right\}\leftarrow SI\left({\mathcal{X}}_{k1}^{\left(m\right)},{y}_{k}^{\left(m\right)}\right)$;
transmit $\left\{{\stackrel{\u0304}{\mathcal{W}}}_{k}^{\left(m\right)},{\mathcal{P}}_{k}^{\left(m\right)},{\widehat{P}}_{k1,max}^{\left(m\right)},{\widehat{\mathcal{S}}}_{k1}^{\left(m\right)}\right\}$;
wait until reception from other clusters;
$\left\{{\mathcal{W}}_{k},{\mathcal{X}}_{k}^{\left(m\right)}\right\}\leftarrow $
modify $\left({\stackrel{\u0304}{\mathcal{W}}}_{k}^{1},\cdots {\stackrel{\u0304}{\mathcal{W}}}_{k}^{M},{\mathcal{X}}_{k}^{\left(m\right)},{\mathcal{P}}_{k}^{\left({\mathcal{N}}^{\left(m\right)}\right)}\right)$
calculate $\left\{{\widehat{P}}_{k,max}^{\left(m\right)},{\widehat{\mathcal{S}}}_{k}^{\left(m\right)}\right\}$; // Equation (33)
${\mathcal{X}}_{k}^{\left(m\right)}$ ← resampling(${\mathcal{W}}_{k}$, ${\mathcal{X}}_{k}^{\left(m\right)}$, s_{0});
${\mathcal{W}}_{k}^{\left(m\right)}\leftarrow {\left\{1\u2215L\right\}}_{\ell =1}^{L}$;
k ← k + 1;
B Communication between clusters
The first subset ${\stackrel{\u0304}{\mathcal{W}}}_{k}^{\left(m\right)}=\left\{{\stackrel{\u0304}{w}}_{k}^{\left[1,m\right]},\cdots \phantom{\rule{0.3em}{0ex}},{\stackrel{\u0304}{w}}_{k}^{\left[L,m\right]}\right\}$ collects the local PF weights, while ${\mathcal{P}}_{k}^{\left(m\right)}={\left\{{\left[{p}_{k}^{\left[l,m\right]}\right]}_{i+\left(j-1\right)I}:\left(i,j\right)\in \partial {\mathcal{L}}^{\left(m\right)}\right\}}_{l=1}^{L}$ collects all pressure substate particles on the boundary. The third, ${\mu}_{k}^{\left(i,m\right)}$, is a message about sources that migrate across boundaries from one cluster to another; every such message includes the new location and the time elapsed since the occurrence of the source. The last two terms stem from the AMC algorithm, where ${\widehat{\mathcal{S}}}_{k}^{\left(m\right)}=\left({\widehat{i}}_{k}^{\left(m\right)},{\widehat{j}}_{k}^{\left(m\right)}\right)$.
Here, the ${\mu}_{k}^{\left(i,m\right)}$ messages are disregarded. For M_{i} → ∞ and M_{j} → ∞, the amount of transmission to adjacent neighbors in the decentralized case is in $\mathcal{O}\left(1/{M}_{\mathsf{\text{i}}}\right)$ and $\mathcal{O}\left(1/{M}_{\mathsf{\text{j}}}\right)$, respectively. The transmission of weights is in $\mathcal{O}\left(M\right)$ for M → ∞, while the overall communication load is in $\mathcal{O}\left({M}^{2}\right)$.
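The boundary-exchange part of this load is again a simple perimeter count; the sketch below (names ours, uniform partition assumed) shows the O(1/M_i) + O(1/M_j) shrinkage of per-cluster boundary traffic.

```python
def boundary_messages(I, J, Mi, Mj):
    """Pressure samples one cluster exchanges per time step: the perimeter
    2*(I/Mi + J/Mj) of its rectangular subregion on an I x J lattice split
    into Mi x Mj equal clusters. Grows smaller as the partition is refined,
    in contrast to the O(M) weight exchange."""
    return 2 * (I // Mi + J // Mj)
```

This is the trade-off quantified above: refining the partition cuts both computation and boundary traffic per cluster, but the cluster-wide weight exchange grows with M.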
Necessary message exchange

| Quantity | Neighbor | Not neighbor |
| --- | --- | --- |
| p_{k} | Boundary elements | |
| n_{k} | Source migration* | |
| ${w}_{k}^{\left[l,m\right]}$ | All | All (if not relaying/forwarding) |
| ${\widehat{\mathcal{S}}}_{k}^{\left(m\right)}$ | All | |
| ${P}_{k,max}^{\left(m\right)}$ | All | |
C Algorithm
The decentralized and distributed SIR PF together with the AMC is summarized in Algorithms 1-4. Compare it with the one in [28], and note that the for-loop can be parallelized.
The joint setup of the computational nodes is shown in Algorithm 1, which consists of the calculation of the priors and the synchronization of the pseudo-random generators. Subsequently, each individual PF is launched (Algorithm 2). Two important subroutines are listed in their own tableaus:

Algorithm 3 calculates particles and sends messages when a source jumps over to another cluster.

Algorithm 4 adds states from the neighbor clusters according to (25) and calculates the overall weight (31).
Algorithm 3: SI(): sample importance part
Input: ${\mathcal{X}}_{k1}^{\left(m\right)},{y}_{k}^{\left(m\right)}$
output: $\left\{{\stackrel{\u0304}{\mathcal{W}}}_{k}^{\left(m\right)},{\mathcal{X}}_{k}^{\left(m\right)}\right\}$
for l = 1 to L do
Draw ${x}_{k}^{\left[l,m\right]}\sim f\left({x}_{k}^{\left(m\right)}\mid {x}_{k-1}^{\left(m\right)}\right)$;
if source(s) cross(es) boundary then
send message to adjacent cluster
${\stackrel{\u0304}{\omega}}_{k}^{\left[l,m\right]}\leftarrow f\left({y}_{k}^{\left(m\right)}\mid {x}_{k}^{\left[l,m\right]}\right)$;
Algorithm 4: modify(): contribution of the neighbors. T^{(m)} is a mapping from the neighbors' pressure substates to the own substates, such that ${T}^{\left(m\right)}{\mathcal{P}}_{k}^{\left({\mathcal{N}}^{\left(m\right)}\right)}$ assembles to ${\left\{{\xi}_{k}^{\left[l,m\right]}\right\}}_{l=1}^{L}$.
input: $\left\{{\stackrel{\u0304}{\mathcal{W}}}_{k}^{1},\cdots \phantom{\rule{0.3em}{0ex}},{\stackrel{\u0304}{\mathcal{W}}}_{k}^{M},{\mathcal{X}}_{k}^{\left(m\right)},{\mathcal{P}}_{k}^{\left({\mathcal{N}}^{\left(m\right)}\right)}\right\}$
output: $\left\{{\mathcal{W}}_{k},{\mathcal{X}}_{k}^{\left(m\right)}\right\}$
${\mathcal{X}}_{k}^{\left(m\right)}\leftarrow {\mathcal{X}}_{k}^{\left(m\right)}+{T}^{\left(m\right)}{\mathcal{P}}_{k}^{\left({\mathcal{N}}^{\left(m\right)}\right)}$; // Equation (27)
${\widehat{\mathcal{W}}}_{k}\leftarrow {\stackrel{\u0304}{\mathcal{W}}}_{k}^{1}\cdots {\stackrel{\u0304}{\mathcal{W}}}_{k}^{M}$; // Equation (31)
normalize ${\widehat{\mathcal{W}}}_{k}$;
VII Simulations
Parameters for simulated hallway

| Group | Parameter | Value |
| --- | --- | --- |
| FDM | Δ_{t} | 371 ns |
| | Δ_{r} | 12.24 cm |
| | I × J | 50 × 50 |
| Speed | c | 340 m/s |
| Noise | w | i.i.d. $\mathcal{N}\left\{0,100\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{pPa}}/{\mathsf{\text{s}}}^{2}\right\}$ |
| | v | i.i.d. $\mathcal{N}\left\{0,100\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{pPa}}\right\}$ |
| Source | s_{0}(t) | ricker(t − 16.7 ms) |
| | (i_{0}, j_{0}) | (25, 25) |
| Sensors | Setup | Figure 4 |
PF parameters

| Group | Parameter | Value |
| --- | --- | --- |
| Particles | L | 20,000 |
| Space/time jitter | x, y | $\mathcal{N}\left\{0,{\Delta}_{r}^{2}/{8}^{2}\right\}$ |
| | t | $\mathcal{N}\left\{0,{\Delta}_{t}^{2}/{8}^{2}\right\}$ |
| | v | i.i.d. $\mathcal{N}\left\{0,5\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{mPa}}\right\}$ |
| Priors | k_{start} | $\mathcal{U}\left\{0,41345\right\}$ |
| | i, j | $\mathcal{U}\left\{0,50\right\}$ |
A Estimation of posterior PDF
B Decentralized MAP source localization
After about 6 iterations, the PF achieves a localization accuracy on the order of the lattice spacing Δ_{ r }. These estimates could be further improved (with higher computational complexity) by refining the discretization lattice and increasing the number of particles.
VIII Conclusions
We proposed a scheme for the localization of multiple acoustic sources in a sensor network (SN). The method builds on an augmented nonlinear non-Gaussian state-space model for the acoustic field and on a particle filter (PF) for sequential Bayesian estimation of the source positions. This state-space representation of the wave equation incorporates additional prior physical knowledge as well as perturbations and distortions such as echoes, thereby resulting in improved estimation accuracy. In addition to the source positions, our PF implicitly provides an estimate of the acoustic field itself. We further developed a decentralized PF in which the computational complexity is distributed over several clusters of the SN. The decentralized PF exploits the sparsity of the matrices involved in the state-space model. In fact, the loose coupling between the components of the state vector allows separate and parallel computation of equation subsystems of much smaller dimension in each cluster head. To determine the global MAP estimate of the position of a source, we proposed an argumentum-maximi consensus algorithm in which the clusters exchange their best MAP probability and source position.
Declarations
Acknowledgements
This work is funded by Grant ICT0844 of the "Wiener Wissenschafts-, Forschungs- und Technologiefonds" (WWTF). Parts of this work were previously presented at the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov. 2010.
Authors’ Affiliations
References
 Ristic B, Arulampalam S, Gordon N: Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House, Boston; 2004.
 Hlinka O, Djuric P, Hlawatsch F: Time-space-sequential distributed particle filtering with low-rate communications. Proc. 43rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA; 2009.
 Hlinka O, Slučiak O, Hlawatsch F, Djurić PM, Rupp M: Likelihood consensus: principles and application to distributed particle filtering. Proc. 44th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA; 2010.
 van Leeuwen P: Particle filtering in geophysical systems. Mon Weather Rev 2009, 137:4089. doi:10.1175/2009MWR2835.1
 Yardim C, Gerstoft P, Hodgkiss WS: Tracking of geoacoustic parameters using Kalman and particle filters. J Acoust Soc Am 2009, 125:746. doi:10.1121/1.3050280
 Ihler A, Fisher J III, Moses R, Willsky A: Nonparametric belief propagation for self-localization of sensor networks. IEEE J Sel Areas Commun 2005, 23(4):809-819.
 Kay S: Fundamentals of Statistical Signal Processing: Estimation Theory. Volume 1. Pearson Education, New Jersey; 1993.
 Xaver F, Mecklenbräuker CF, Gerstoft P, Matz G: Distributed state and field estimation using a particle filter. Proc. 44th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA; 2010.
 Candy J, Sullivan E: Model-based identification: an adaptive approach to ocean-acoustic processing. IEEE J Oceanic Eng 2002, 21(3):273-289.
 Rossi L, Krishnamachari B, Kuo C: Distributed parameter estimation for monitoring diffusion phenomena using physical models. IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (SECON) 2004.
 Zhao T, Nehorai A: Distributed sequential Bayesian estimation of a diffusive source in wireless sensor networks. IEEE Trans Signal Process 2007, 55(4):1511.
 Sawo F: Nonlinear state and parameter estimation of spatially distributed systems. Ph.D. dissertation, Universität Karlsruhe; 2009.
 Sawo F, Huber M, Hanebeck U: Parameter identification and reconstruction for distributed phenomena based on hybrid density filter. Proc. 10th International Conference on Information Fusion 2007, 1-8.
 Bauso D, Giarré L, Pesenti R: Nonlinear protocols for optimal distributed consensus in networks of dynamic agents. Syst Control Lett 2006, 55(11):918-928. doi:10.1016/j.sysconle.2006.06.005
 Tarantola A: Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics, Philadelphia; 2005.
 Jensen F, Kuperman W, Porter M, Schmidt H: Computational Ocean Acoustics. American Institute of Physics Press, New York; 1994.
 Kailath T: Linear Systems. Prentice-Hall, New Jersey; 1980.
 Doucet A, Godsill S, Andrieu C: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput 2000, 10(3):197-208. doi:10.1023/A:1008935410038
 Mattheij R, Rienstra S, ten Thije Boonkkamp J: Partial Differential Equations: Modeling, Analysis, Computation. Society for Industrial and Applied Mathematics, Philadelphia; 2005.
 Zhdanov M: Geophysical Inverse Theory and Regularization Problems. Elsevier Science Ltd, Amsterdam; 2002.
 Hol JD, Schön TB, Gustafsson F: On resampling algorithms for particle filters. IEEE Nonlinear Statistical Signal Processing Workshop 2006, 79-82.
 Sawo F, Roberts K, Hanebeck U: Bayesian estimation of distributed phenomena using discretized representations of partial differential equations. Proc. 3rd International Conference on Informatics in Control, Automation and Robotics (ICINCO) 2006, 16-23.
 Sawo F, Klumpp V, Hanebeck U: Simultaneous state and parameter estimation of distributed-parameter physical systems based on sliced Gaussian mixture filter. Proc. 11th International Conference on Information Fusion 2008, 1-8.
 Farahmand S, Roumeliotis S, Giannakis G: Particle filter adaptation for distributed sensors via set membership. Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2010, 3374-3377.
 Oreshkin B, Coates M: Asynchronous distributed particle filter via decentralized evaluation of Gaussian products. Proc. International Conference on Information Fusion, Edinburgh, Scotland; 2010.
 Coates M: Distributed particle filters for sensor networks. Proc. 3rd International Symposium on Information Processing in Sensor Networks, ACM; 2004, 99-107.
 Knuth D: Big omicron and big omega and big theta. ACM SIGACT News 1976, 8(2):18-24. doi:10.1145/1008328.1008329
 Arulampalam M, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 2002, 50(2):174-188. doi:10.1109/78.978374
 Ryan H: A choice of wavelets. CSEG Recorder 1994.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.