A distributed multi-robot adaptive sampling scheme for the estimation of the spatial distribution in widespread fields
- Muhammad F Mysorewala^{1},
- Lahouari Cheded^{1} and
- Dan O Popa^{2}
https://doi.org/10.1186/1687-1499-2012-223
© Mysorewala et al.; licensee Springer. 2012
Received: 21 July 2011
Accepted: 30 May 2012
Published: 18 July 2012
Abstract
Monitoring widespread environmental fields is undoubtedly a practically important area of research with many complex and challenging tasks. It involves building models of the fields or natural phenomena to be monitored, and estimating the spatio-temporal distribution of a variety of environmental parameters of interest, such as moisture or salinity in a crop field, or the spatial distribution of vital natural resources such as oil and gas. Sampling, a key operation of the monitoring process, is a broad methodology for gathering statistical information about the phenomenon, or environmental variable, being monitored. Efficiently monitoring widespread fields and estimating the spatio-temporal distribution of a particular environmental variable calls for a sampling strategy that can fuse information from sensors of different scales. Such a strategy is well catered for by both the capabilities and distributed nature of wireless sensor networks and the mobility of robots performing the sampling (sensing) tasks. The strategy can even be rendered “adaptive”, in that the decision of “where to sample next” evolves temporally with past measurements and is optimally computed. In this article, we examine various single-robot and multi-robot adaptive sampling schemes based on different extended Kalman filter (EKF) structures, namely centralized and decentralized filters as well as our own novel decentralized and distributed filters. Our investigation shows that, whereas the first two filters suffer from a heavy computational or communication load, our proposed method, through its key feature of distributing the filtering task amongst the robots used, manages to reduce both loads and the total reconstruction time. It also enjoys the added attractive feature of scalability, which allows the structure of the proposed monitoring scheme to grow with the complexity of the field under study.
Our results are corroborated by our simulation work and offer ample encouragement for a further theoretical investigation of some properties of the proposed scheme and its implementation on a physical system. Both of these activities are currently underway.
Introduction
Mobile robots are being increasingly used as sensor-carrying agents to perform sampling missions, such as searching for harmful biological and chemical agents, search and rescue in disaster areas, and environmental mapping and monitoring. One of the objectives of these sampling missions is ‘Field Estimation’. Field estimation is the construction of an estimate of how a certain parameter varies in space and time, i.e., an estimate of its spatio-temporal distribution, based on observed or sampled data. As the field of interest is spread over a wide area, using a dense, fixed sampling scheme for efficient field mapping would simply be too costly and would involve a possibly prohibitive computational load. Instead, it is far more attractive to use a mobile sampling scheme that collects samples at a few judiciously selected locations, in a way that gains enough information about the field to infer, with significant accuracy, the value of the parameter of interest at the unsampled locations. A multitude of research groups have published results on sampling using mobile robots for chemical plume source localization [1, 2], soil–moisture mapping for crop monitoring [3], ocean sampling [4, 5], forest-fire mapping [6], etc.
The sensor fusion schemes for sampling missions can broadly be classified into three categories based on (i) physical parametric models, (ii) feature-based inference techniques such as clustering algorithms, neural networks, etc., which are generally non-parametric in nature but can lead to black or grey box parametric representation of the process, and (iii) cognitive-based models, which use the inference processes of humans and animals and which are based on fuzzy logic rules, search techniques, information-theoretic approaches, etc. Models acquired using these three broad classes of approaches can be either purely deterministic or purely stochastic. In many cases, deterministic models affected by some random noise can also be assumed.
In the area of physical deterministic parametric modeling representing the first category of sampling missions, Christopoulos and Roumeliotis[2] presented an approach for estimating the parameters of the diffusion equation that describes the propagation of an instantaneously released gas. Cannell and Stilwell[4] presented two approaches for adaptive sampling (AS) of underwater processes using AUVs. The first one assumes a parametric model, while the second one uses an information-theoretic approach. A number of strategies for non-parametric AS can also be found in the literature. A solution for non-parametric ocean sampling is proposed in[7] based on a classification of the sampling area. The multi-robot path planning problem is addressed in[8] using the mutual information collected using different paths. The study of[5] is also similar to that of[8] in the sense that both deal with generating optimal trajectories for multiple underwater vehicles for sampling purposes. Rule-based non-parametric approaches are also used widely in chemical plume tracing on land and in water, odor sensing[2], mine detection, etc.
Forest fires, chemical source leaks, and temperature variations in oceans are examples of complex natural phenomena for which the exact nonlinear model descriptions are unattainable due to the high-level of complexity involved. Demetriou and Hussein[9] present a solution to the problem of estimating a spatial distribution when the process is described by a partial differential equation. In[10], a non-parametric model is considered, and a distributed scheme for field estimation is developed using a Kalman filter-like recursive scheme.
In geostatistics, spatial processes are generally modeled as random fields, and estimation is performed using Kriging interpolation techniques [11, 12]. Kriging is termed “simple” if the mean of the distribution is known, and “universal” if the mean is treated as an unknown linear combination of known basis functions. In [13], a distributed algorithm is presented for spatial estimation using the Kriged Kalman filter. Graham and Cortes [14] proposed a Kriged Kalman filter-based approach for a spatio-temporal field where the discrete-time evolution of the state is governed by the Kalman filter used. In [15], the authors represent the time-varying field as a random process with a covariance known up to a scaling parameter, and propose a gradient descent algorithm that can run in a distributed fashion on multiple robots. Olfati-Saber [16, 17] developed a distributed Kalman filter approach, along with consensus filters, to estimate the state of a process and reach a consensus among all nodes.
Due to the time- and energy-critical nature of some of these sampling scenarios, simply requiring the robots to perform a raster scan or to randomly sample the field of interest would clearly be a sub-optimal and highly inefficient sampling strategy. Moreover, many time-varying distributions of interest encompass a wide area, and must therefore be observed with sensors having variable characteristics, such as multiple size scales, rates, and accuracies [18]. For example, a forest fire can be monitored using satellite images, which provide a large spatial field-of-view (FOV) but low resolution (or fidelity). On the other hand, a plane flying at low altitude provides a small spatial FOV but high-fidelity information.
In order to effectively fuse these different types of measurements, we proposed a Multi-scale Multi-rate Adaptive Sampling approach with a parametric description of the field [6]. In this approach, sampling strategies continuously adapt in response to real-time measurements from sensors of different scales. The scheme relies on building parametric models of the field using spatial sensor measurements collected from high altitude, which are thus less accurate, and then improving the models using more accurate spot measurements. The extended Kalman filter (EKF) is used to derive a quantitative information measure that is needed for selecting the sampling locations most likely to yield optimal information. In this approach, the existing low-resolution information of the field is first used to acquire an initial parametric representation of the field, whose parameters have a high initial error covariance that gradually reduces as high-resolution samples are taken and processed.
In our previous work [6], we presented a framework that extends our estimation of a simple parametric field to that of complex time-varying fields (e.g., forest fires [6]) by representing them with sums of overlapping Gaussians. The resulting algorithm, called EKF–NN–GAS, is based on (a) a Radial Basis Function (RBF) neural network (NN) for the parameterization of the non-parametric field, (b) an EKF for parameter estimation, and (c) a heuristic search scheme called ‘Greedy Adaptive Sampling’ (GAS).
A further investigation of the AS algorithm using multiple robots is presented in this article. For widespread fields, it may be impractical and certainly inefficient for a single-robot to map the entire field by navigating to different sampling locations, even when guided by an efficient sampling algorithm. However, when using multiple robots, the sampling area is first divided into smaller regions, and then each sampling instance in a particular region gains information about the parameters which have a dominant effect in that region. Therefore, in order to distribute computations, we need to be able to fuse the parameter estimates in order to construct the map of the field density distribution.
This problem is similar to reformulating an algorithm originally designed for a conventional single-sensor, single-processor system to work on a more general multi-sensor, multi-processor system. Distributed algorithms have been used before in many applications, and the degree of parallelism they employ varies from one algorithm to another, depending on the application at hand. An example of distributed processing is target location estimation, in which several sensors collect data and the collected measurements are then fused either at a central station or at each sensor in a multi-sensor fusion algorithm [19–21].
Since complex fields are represented by hundreds of parameters [6], it is computationally cumbersome for a single robot to compute and store all the parameter estimates and uncertainty measures. It also quickly becomes infeasible for individual robots to run a large AS algorithm and to share large covariance matrices wirelessly. Furthermore, with multi-robot sampling, resources can be reallocated efficiently when some of them are busy or unavailable.
If the filter computation can be distributed among multiple robots, the overall computational efficiency would be greater than that of a single robot having to carry out both the sampling and the computational tasks. Moreover, we expect the concomitant advantages thus gained, such as a flexible degree of parallelism, faster convergence, and reduced complexity, to be significant. With a single robot, the total field estimation time includes the time needed for navigation, sensing, and computation of the estimate (as there is no communication involved in this case). With multiple robots, the field estimation time includes the time taken for sensing, computation, communication, and the final fusion to recover the field density distribution. We expect the speed of convergence to increase with multiple robots simply because the sampling is done in parallel, and the navigation time to be reduced significantly at the cost of modest increases in computation, communication, and fusion.
The rest of the article is organized as follows: in Section 2, we present the general formulation of the AS problem; Section 3 summarizes the existing centralized and decentralized filters and their application to sensor networks for field estimation; in Section 4, we present the novel federated distributed KF; Section 5 presents the simulation results for the proposed algorithm and their discussion; finally, Section 6 concludes the article.
Formulation of multi-robot AS algorithm
- (1)
Low-resolution sampling: The field g(x,y) of size m × m is divided into uniform square grids of size n × n (with n < m), and a sample is collected at the center of each grid. Hence, m/n × m/n samples are collected as a low-resolution representation of the actual field.
- (2)
Parameterization: Parametric representation of the field g(x,y) is achieved by training a B-neuron RBF neural network with the acquired low-resolution data. This results in a representation of the field as a sum of B Gaussians (one per neuron), and an offset (or bias) parameter b, with each neuron having its own parameters such as its peak ${a}_{i}$, variance ${\sigma}_{i}$, and center $({x}_{0i},{y}_{0i})$. Each of these parameters has an initial estimate value A_{0}, and an initial error covariance P_{0}. The number of neurons B is chosen depending on the complexity of the field and in such a way that the initial field estimation error is minimized to a value less than an acceptable threshold. Note at this stage that unlike the low-resolution samples which are uniformly distributed since they are acquired from uniformly distributed grids, the Gaussians (one Gaussian per RBF node) are distributed non-uniformly depending on the density of the field. We actually use more Gaussians in denser areas and fewer Gaussians in smoother areas of the field to be mapped. Further details on the relationship between the number of low-resolution samples and the number of neurons can be seen in [6].
- (3)
High-resolution sampling: In order to improve the field estimate, spot measurements Z_{ k } are made by a robotic vehicle, which collects samples in a grid of size p × p (where p ≤ n) based on the heuristic GAS algorithm [6]. According to the GAS algorithm, the next sampling location is searched for within the vicinity of the currently sampled location, based on the criterion of minimizing the norm of the parameters’ error covariance matrix.
where Q is the process noise covariance, R is the measurement noise covariance, and (x_{ k }, y_{ k }) are the robot sampling locations.
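To make steps (1) and (2) concrete, the parametric field model (a bias b plus a sum of B Gaussians, giving L = 4B + 1 parameters) and the low-resolution sampling step can be sketched as follows. This is a minimal Python illustration; the function names and the parameter packing order are our own assumptions, not code from the article:

```python
import numpy as np

def rbf_field(params, x, y):
    """Evaluate the RBF field model g(x, y) = b + sum_i a_i * exp(-d_i^2 / (2 s_i^2)).

    params packs the L = 4B + 1 parameters as
    [b, a_1, s_1, x0_1, y0_1, ..., a_B, s_B, x0_B, y0_B].
    """
    b = params[0]
    gaussians = np.asarray(params[1:]).reshape(-1, 4)  # rows: (a_i, s_i, x0_i, y0_i)
    z = b
    for a_i, s_i, x0, y0 in gaussians:
        z += a_i * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * s_i ** 2))
    return z

def low_res_samples(field_fn, m, n):
    """Sample an m x m field at the centers of uniform n x n grid cells,
    yielding an (m/n) x (m/n) low-resolution image of the field."""
    centers = np.arange(n / 2.0, m, n)  # grid-cell centers along one axis
    return np.array([[field_fn(x, y) for x in centers] for y in centers])
```

The resulting (m/n) × (m/n) image is what the B-neuron RBF network is trained on in step (2).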
The multi-agent (or multi-robot) AS problem considered here can be described as follows:
- (i)
A nonlinear spatio-temporal field variable is described via a parametric approximation Z = Z(A, X, t) depending on an unknown parameter vector A, position vector X, and time t.
- (ii)
N robotic vehicles (agents) sample the field with sensing uncertainty in order to obtain higher resolution estimates of the field.
- (iii)
The number of field parameters (L) and their initial guesses are based on a hypothesis originating from prior knowledge of the field consistent with a low-resolution image of the entire field.
As a complex spatial field is spread over a large area, its parameterization requires a large number of parameters. It therefore becomes infeasible for a single robot to navigate to different locations, collect samples, and improve the parameter estimates in a short period of time. In addition to time constraints, the sampling problem is also constrained by the amount of energy available to the robot, and suffers from a considerable computational burden. These constraints limited the performance of our single-robot AS algorithm, as described in [22]. A key contribution of this article is therefore to propose a better alternative that greatly alleviates the time and energy constraints imposed on the sampling process by the single-robot approach of mapping a spatio-temporal stationary field.
It is assumed here that only a single measured field variable Z is observed by all of the mobile robots used. However, in the case where multiple variables are to be measured, and the measurement model of each is known, the general EKF-based framework of AS presented in [6] can be used. In [23], we considered a scenario with two measurements only: the field measurement and the locations of the robots.
In formulating the multi-robot AS scheme, the following issues must be addressed:
- (i)
How can the sampling area be divided efficiently?
- (ii)
How can the density distribution be estimated through efficient data fusion when robots are collecting measurements in parallel?
- (iii)
How can the computational and communication burden be distributed efficiently amongst the many robots used?
To address the last two issues, several possible algorithms are first presented in Sections 3 and 4, and their respective simulation results are then presented and discussed in Section 5.
Partitioning of sampling area
A method is clearly needed to efficiently divide the sampling area into clusters, in order to run a parallel AS algorithm with multiple robots. Here, we propose an approach that divides the sampling area for parametric distributions using Fuzzy c-means clustering (FCM) and Centroidal Voronoi Tessellation (CVT) diagrams. FCM has frequently been used in the past for the classification of numerical data. CVT diagrams [24] have also been used for forming non-uniform-size grids to better explore high-variance areas for non-parametric distributions [7]. In our approach, FCM clusters samples based on the estimated centers of the approximating Gaussians used to map the field. Note that we have assumed that the partitioning is performed only once, at the start of the fusion filter. For a time-varying field, further accuracy can be obtained by re-partitioning the field (and hence repositioning the Gaussians) after some samples, to account for the field’s evolution in time.
where B is the number of Gaussian centers, N is the number of clusters (equal to the number of robots in this case), u_{ ij } is the degree of membership of center x_{ i } in cluster j, c_{ j } is the centroid of cluster j, and m is a real number greater than 1. Next, a CVT diagram based on Lloyd’s algorithm uses the centroid locations acquired by fuzzy clustering to classify all points in discrete space that are closest to a centroid as a single group. Mathematically, given the N clusters, each with a centroid denoted by c_{ s }, a point p on the field belongs to cluster r if the following distance inequality is satisfied: $\left|p-{c}_{r}\right|\le \left|p-{c}_{s}\right|,\;s=1,\dots ,N,\;s\ne r$.
As a result of this mapping scheme, more Gaussians overlap in areas with large field variations. Classifying the area with FCM and the CVT diagram therefore yields regions that are as small as required where the field varies strongly, so that they can be sampled thoroughly without missing any vital information. Regions with less variation, though they may be large, require fewer samples, since they are represented by only a few parameters.
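The two-stage partitioning described above can be sketched as follows. This is a minimal, self-contained illustration (a textbook FCM membership update followed by a nearest-centroid Voronoi assignment); the function names and defaults are our own choices:

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=50, seed=0):
    """Minimal FCM over the estimated Gaussian centers.
    Returns one cluster centroid per robot."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                # memberships sum to 1
    for _ in range(n_iter):
        w = u ** m
        c = (w.T @ points) / w.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(points[:, None, :] - c[None, :, :], axis=2) + 1e-12
        # standard FCM update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        u = 1.0 / ((d ** p) * np.sum(d ** (-p), axis=1, keepdims=True))
    return c

def cvt_assign(grid_points, centroids):
    """Voronoi step: point p joins cluster r iff |p - c_r| <= |p - c_s| for all s."""
    d = np.linalg.norm(grid_points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```

In the article's scheme the FCM input would be the B estimated Gaussian centers $({x}_{0i},{y}_{0i})$, and `cvt_assign` would be run over all discrete field points to form the N robot regions.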
Centralized, completely decentralized, and federated decentralized filters
In this section, we first examine completely centralized, completely decentralized, and federated decentralized filters, and their use in running the proposed multi-robot AS algorithm. We then argue that a new and efficient filter is needed for this application which will be discussed in detail in the following section.
Using completely centralized filter
In a completely centralized sampling approach, each robot $j=1,2,\dots ,N$ takes a sensor measurement ${Z}_{j,k+1}$ and transmits it to the central processor, which then calculates the required parameter estimates $\hat{A}_{k+1}$ and error covariances ${P}_{k+1}$. The central processor computes these estimates, shown below in (4), using the ‘KF equations for a single robot’, while single-handedly taking on the task of fusing the multiple measurements it acquires from the N robots used.
Here we assume a stationary field, so no time prediction is needed, i.e., the a priori estimates are $\hat{A}_{k+1}^{-}=\hat{A}_{k}$ and ${P}_{k+1}^{-}={P}_{k}$. In [6], we assumed a slowly time-varying field with a single sampling robot, and therefore included the prediction step to account for the time evolution of the field.
This type of scheme is simple, as there is little communication involved and no redundant computations. But, the disadvantage is that the sensing robots do not carry any information on the field to be estimated. Therefore, this algorithm cannot be adaptive for every sample because the latest estimates are required to generate new sampling locations, and these estimates are not calculated at every robot. Simulation results are shown in Section 5, where multiple sampling locations are chosen based on the current field estimate, and then all the measurement data collected are transmitted to the central filter for fusion, further processing and determination of the next sampling locations.
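One cycle of this centralized scheme, an EKF measurement update at the central filter followed by a GAS-style search for the next sampling locations, might be sketched as below. The linearized measurement model h(A) ≈ GA and all names here are illustrative assumptions, not the article's code:

```python
import numpy as np

def centralized_update(A, P, Z, G, R):
    """EKF measurement update at the central filter for a stationary field
    (no time prediction), fusing the stacked measurements Z of N robots
    with stacked Jacobian G. Assumes a linearized model h(A) ~ G @ A."""
    S = G @ P @ G.T + R                       # innovation covariance
    K = P @ G.T @ np.linalg.inv(S)            # Kalman gain
    A_new = A + K @ (Z - G @ A)
    P_new = (np.eye(len(A)) - K @ G) @ P
    return A_new, P_new

def gas_next_location(P, jacobian_at, candidates, R=1.0):
    """GAS-style step: among candidate locations in the robot's vicinity,
    pick the one whose predicted EKF update minimizes the norm of the
    parameter error covariance matrix."""
    best, best_norm = None, np.inf
    for (x, y) in candidates:
        G = np.atleast_2d(jacobian_at(x, y))  # 1 x L measurement Jacobian
        S = (G @ P @ G.T).item() + R
        P_post = P - (P @ G.T) @ (G @ P) / S  # rank-1 covariance update
        n = np.linalg.norm(P_post)
        if n < best_norm:
            best, best_norm = (x, y), n
    return best
```

Note that `gas_next_location` only needs the current covariance P, which is why, in the centralized scheme, the search has to run at the central filter rather than on the sensing robots.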
Using completely decentralized filter
For a completely decentralized filter implementation, each robot not only takes the sensor measurement, but also runs locally the AS algorithm. However, it only calculates partial estimates of the field parameters and error covariance. It also generates new sampling locations within the vicinity of its current position. After every few samples, the robots communicate and share with each other their partial field estimate information, in order to calculate the complete estimates. The parameter estimate vector and the error covariance are the two terms each robot needs to transmit to the other robots. Each robot assimilates the received information using a decentralized EKF scheme formulated in[19, 26].
Note that ${G}_{j,k,LE}$, where the subscripts j, k, and LE denote the sensor (robot) number, the sample number, and the local estimate, respectively, is the Jacobian of the Gaussian vector ${g}_{j,k,LE}$, and is used in the above linearized EKF measurement-update equation to estimate $\hat{A}_{j,k,LE}$.
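The assimilation step can be illustrated with a generic information-form fusion, sketched below under the assumption that all local estimates evolved from a common prior (the exact equations of [19, 26] are not reproduced here):

```python
import numpy as np

def fuse_local_estimates(P_prior, A_prior, local_estimates):
    """Information-form fusion of N local estimates (P_j, A_j) that all
    evolved from the same prior (P_prior, A_prior). Each term
    (P_j^-1 - P_prior^-1) is the new information robot j contributed."""
    Y0 = np.linalg.inv(P_prior)
    y0 = Y0 @ A_prior
    Y, y = Y0.copy(), y0.copy()
    for P_j, A_j in local_estimates:
        Y_j = np.linalg.inv(P_j)
        Y += Y_j - Y0           # add only the information gained locally
        y += Y_j @ A_j - y0
    P = np.linalg.inv(Y)
    return P, P @ y
```

In the completely decentralized scheme every robot runs this fusion itself after receiving its peers' $({P}_{j},\hat{A}_{j})$ pairs; each transmission is an L × L matrix plus an L-vector, which is the communication cost discussed below.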
Comparison of simulation results for single robot, multi-robot decentralized and federated decentralized filter
| | Single-robot | Multi-robot centralized KF (non-AS) | Multi-robot decentralized federated and non-federated fusion |
| --- | --- | --- | --- |
| Field size (m × m) | 300 × 300 | 300 × 300 | 300 × 300 |
| Grid size for initial sample collection (n × n) | 30 × 30 | 30 × 30 | 30 × 30 |
| Number of neurons (B) | 40 | 40 | 40 |
| RBF variances (σ) | 30 | 30 | 30 |
| Number of sampling robots (N) | 1 | 4 | 4 |
| Grid size for adaptive sampling (p × p) | 5 × 5 | 5 × 5 | 5 × 5 |
| Horizon size (in grids) for next sample selection for each robot | 10 | 30 | 10 |
| Initial parameter error covariances $[b\;\;{a}_{i}\;\;{\sigma}_{i}\;\;{x}_{0i}\;\;{y}_{0i}]$ | $[200\;\;50\;\;{10}^{-7}\;\;4\;\;4]$ | $[200\;\;50\;\;{10}^{-7}\;\;4\;\;4]$ | $[200\;\;50\;\;{10}^{-7}\;\;4\;\;4]$ |
| Sensor measurement error covariance (R) | 1 | 1 | 1 |
| Initial norm of error covariance of all parameters $(\Vert {P}_{0}\Vert )$ | 375.9 | 375.9 | 375.9 |
| Final norm of error covariance of all parameters $(\Vert {P}_{k+1}\Vert )$ | 13.25 | 241.0 | 17.27 |
| Norm of error between original and initial estimated field $(\Vert g-{g}_{est\_0}\Vert )$ | 25.05 | 25.05 | 25.05 |
| Norm of error between original and final estimated field $({E}_{2F}=\Vert g-{g}_{est\_k+1}\Vert )$ | 19.67 | 48.0 | 19.33 |
| Time taken to reach $\Vert g-{g}_{est\_k+1}\Vert <20$ | 11.92 min | 5.48 min | 2.89 min |
| No. of samples (qN) | 300 | 300 | 320 |
| No. of times KF runs for calculating the parameter estimates | 300 (complete estimate) | 1 (complete estimate) | 320 (partial estimate) |
| No. of samples/robot after which global estimate is calculated (q/r) | 1 | 300 | 20 |
| No. of times fusion is performed using LEs (r) | N/A | N/A | 4 |
By way of example, consider adaptively sampling the field shown in Figure 2, represented by L = 401 parameters. The field is divided into N = 8 partitions, and the sampling is performed with one robot per partition. Running this decentralized algorithm would require each robot to calculate the partial estimates of all 401 parameters, and to wirelessly transmit a 401 × 401 error covariance matrix and a 401 × 1 parameter estimate vector to every other robot. Clearly, such a scheme would be very inefficient and not scalable.
Using a federated decentralized filter
In this approach, each robot takes some sensor measurements, estimates partial error covariances and field parameters, and transmits this information to a global fusion filter for assimilation, in a fashion similar to the approach proposed in [20, 21, 27]. Each robot runs Equation (5), but the fusion is done only at the fusion filter using Equation (6). These estimates are then transmitted by the global fusion center (or filter) to all of the robots. So, the only difference between the federated and the completely decentralized approaches is that in the federated case, the global estimates are calculated centrally by the common fusion filter, whereas with complete decentralization, they are calculated locally at each robot.
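The round-trip structure of the federated scheme can be sketched for a scalar field parameter as follows. The information-form fusion used here is a standard stand-in for Equations (5) and (6), and all names are ours:

```python
def local_update(P, A, z, g=1.0, r=1.0):
    """Scalar EKF measurement update run locally on one robot
    (g: measurement Jacobian, r: measurement noise variance)."""
    k = P * g / (g * P * g + r)
    return (1.0 - k * g) * P, A + k * (z - g * A)

def federated_round(P0, A0, robot_batches, g=1.0, r=1.0):
    """One federated round: every robot refines the common prior (P0, A0)
    with its own measurement batch; the fusion filter then combines the
    local estimates in information form and broadcasts the result back."""
    Y, y = 1.0 / P0, A0 / P0
    for batch in robot_batches:
        P, A = P0, A0
        for z in batch:                 # robot-side partial estimation
            P, A = local_update(P, A, z, g, r)
        Y += 1.0 / P - 1.0 / P0         # fusion filter: add new information only
        y += A / P - A0 / P0
    return 1.0 / Y, y / Y               # global (P, A) broadcast to all robots
```

The returned global pair becomes the common prior for the next round, mirroring the broadcast step described above.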
Federated distributed Kalman filter
A decentralized and a distributed KF are two different formulations of the same KF algorithm [19]. In a decentralized algorithm, the filter is full-order, which means that every local filter carries partial information about all the parameters, and the information is shared in a star topology to reach a consensus amongst all robots on the final parameter estimates. The objective of distributed algorithms is to efficiently decompose the full-order filter into several reduced-order filters, in order to reduce the computational complexity and communication overhead, and hence improve scalability. It can be said that decentralization is the first step toward efficient distribution. Without distribution, every collected sample is used to compute the estimates of all the parameters in the field. With distribution, a sample is used to compute the estimates of only those parameters that have a significant impact on the region where the sample was collected.
The objective of the work presented in this section is to modify the formulation of the federated decentralized scheme, in order to reduce both the communication overhead and the computational load involved. This formulation considers only the cross-covariance terms contributed by neighboring Gaussians and ignores those contributed by distant ones, as a trade-off between accuracy and computational complexity. The decision to ignore distant Gaussians is supported by the analysis provided in Section 5, where a threshold of 0.001% on the relative contribution of each Gaussian was used to decide the number of Gaussians to keep. An exact DKF is not possible in this AS problem because local measurement models are not available. Furthermore, the use of global measurement models at each node requires the estimates of all parameters, which would contradict the motivation behind the implementation of the DKF. There are other schemes that handle the error covariance terms “very lightly”, such as Kalman Consensus schemes, which take the average of the error covariances of the parameter estimates in order to implement the DKF with only communication between neighboring nodes [16, 17].
Decentralized approaches are good enough for applications involving a small number of states, such as the tracking of objects. But problems such as parametric sampling involve hundreds of parameters, and hence distributing the KF becomes all the more important for efficient operation.
Approach to distributed computations and communications
Assume that we have a continuous field distribution within a certain perimeter, which means that there is a discontinuity between the field and its surroundings. As shown in Figure 2, this field is represented by L parameters, where $L=4B+1$, and the field estimate is calculated at the central station based on the LEs received from N sampling robots. In the example shown in Figure 2, B = 100, N = 8, and L = 401. The circles shown are the centers $({x}_{0i},{y}_{0i})$ of the B Gaussians. One of the highlighted partitions has S parameters, the estimates of which are expected to change by collecting samples from that partition. S includes all the parameters inside a partition, as well as the surrounding parameters which have a significant impact on that partition. The collection of a single sample leads to a change in M parameter estimates, whereas collecting multiple samples results in a change in C parameter estimates. Hence, from a set-theoretic point of view, we can state that $M\subset C\subset S\subset L$. For the decentralized case, M = C = S = L and the cross-covariance terms contributed by all the Gaussians are considered. However, for the distributed case, we have $M\subset C\subset S\subset L$, and an increase in M, C, and S will lead to better accuracy at the cost of a higher number of computations.
The idea behind this approach is to run a reduced-order KF rather than a full-order one so as to reduce the computational load, as well as the communication overheads by transmitting only the smallest amount of information needed.
- 1.
Transformation from $({P}_{L},\hat{A}_{L})$ to $({P}_{S},\hat{A}_{S})$ at the fusion filter.
- 2.
Transmit the estimates of the S parameters $({P}_{S},\hat{A}_{S})$ to Robot #j.
- 3.
Collect the measurement and compute the estimate pair $({P}_{M,k+1},\hat{A}_{M,k+1})$:
${P}_{M,k+1}={\left[{\left({U}_{SM,k+1}^{T}{P}_{S}{U}_{SM,k+1}\right)}^{-1}+{G}_{M,k+1}^{T}{R}_{k+1}^{-1}{G}_{M,k+1}\right]}^{-1}\in {\mathbb{R}}^{{M}_{k+1}\times {M}_{k+1}}$ (10)
- 4.
Transformation from $({P}_{M,k+1},\hat{A}_{M,k+1})$ to $({P}_{C,k+1},\hat{A}_{C,k+1})$:
$\begin{array}{l}{P}_{C,k+1}={U}_{C,k+1}^{T}{P}_{C,k}{U}_{C,k+1}+{U}_{MC,k+1}^{T}({P}_{M,k+1}-{U}_{SM,k+1}^{T}{P}_{S,k}{U}_{SM,k+1}){U}_{MC,k+1}\\ \hat{A}_{C,k+1}={U}_{C,k+1}^{T}\hat{A}_{C,k}+{U}_{MC,k+1}^{T}(\hat{A}_{M,k+1}-{U}_{SM,k+1}^{T}\hat{A}_{S,k})\end{array}$ (14)
- 5.
Repeat steps 3 and 4 until an update is requested from the fusion filter.
- 6.
Transmit the pair $({P}_{C},\hat{A}_{C})$ to the fusion filter.
- 7.
The fusion filter then substitutes $({P}_{C},\hat{A}_{C})$ into $({P}_{j,L,LE},\hat{A}_{j,L,LE})$, which is unique to each robot:
${\left[{P}_{L,k+n}\right]}_{j}={\left[{P}_{L,k}+{U}_{SL,k+n}^{T}({U}_{CS,k+n}^{T}{P}_{C,k+n}{U}_{CS,k+n}-{U}_{CS,k}^{T}{P}_{C,k}{U}_{CS,k}){U}_{SL,k+n}\right]}_{j}$ (16)
- 8.
The fusion filter finally runs the global update Equation (6), considering all the different pairs $({P}_{j,L,LE},\hat{A}_{j,L,LE})$ to be local updates from different robots.
Let the samples taken at times (k + 1), (k + 2), and (k + 3) update the parameters (2, 4, 7), (3, 4, 6), and (1, 2, 4, 9), respectively. Then,
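The selection (transformation) matrices U used in steps 1 through 7 can be illustrated on this example; the helper below and the assumed partition size of S = 9 parameters are our own:

```python
import numpy as np

def selection_matrix(full_idx, sub_idx):
    """U such that U.T @ v_full = v_sub: extracts the sub_idx parameters
    (1-based numbering, as in the example) from the full_idx ordering."""
    U = np.zeros((len(full_idx), len(sub_idx)))
    pos = {p: i for i, p in enumerate(full_idx)}
    for j, p in enumerate(sub_idx):
        U[pos[p], j] = 1.0
    return U

M_sets = [(2, 4, 7), (3, 4, 6), (1, 2, 4, 9)]  # params touched at k+1, k+2, k+3
C = sorted(set().union(*M_sets))               # running union of touched params
S = list(range(1, 10))                         # assume the partition holds 9 params
U_SM = selection_matrix(S, M_sets[0])          # maps an S-vector to the (2, 4, 7) sub-vector
```

Here C works out to the seven parameters {1, 2, 3, 4, 6, 7, 9}, so the robot transmits a 7 × 7 covariance block instead of the full S × S (or L × L) one.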
Computational and communication complexities
The EKF has an O(L^{3}) computational complexity when each sample updates all L parameters of the two-dimensional parametric field. However, as a first-order approximation, it can be assumed that a single sample affects only neighboring parameters. With this assumption, the algorithm can run in a distributed fashion, and the computational complexity at the sampling nodes is reduced. Only the fusion filter's complexity remains of order O(L^{3}), because it needs to combine information about all the L parameters. However, this central field-parameter fusion occurs less frequently and hence has only a small effect on the overall computational burden.
Table 2 Comparison of computational complexity and communication overhead for centralized, decentralized, federated decentralized, and federated distributed filters
| Filter | Robot (computations) | Fusion center (computations) | Combined (computations) | Communication |
|---|---|---|---|---|
| Centralized filter | – | O(qNL^{3}) | O(qNL^{3}) | O(qN) |
| Completely decentralized filter | O(qL^{3} + (N − 1)rL^{3}) | – | O(NqL^{3} + N(N − 1)rL^{3}) | O(N(N − 1)r(L^{2} + L)) |
| Federated decentralized filter | O(qL^{3}) | O(rL^{3}) | O(NqL^{3} + rL^{3}) | O(2Nr(L^{2} + L)) |
| Federated distributed filter | O(qM^{3}) | O(rL^{3}) | O(NqM^{3} + rL^{3}) | O(Nr(C^{2} + C + S^{2} + S)) |
For the centralized filter, the sensing robots do not perform any computation. Hence, the computational and communication complexities are O(qNL^{3}) and O(qN), respectively.
For a completely decentralized filter, the computational complexity involved in calculating the LE at each robot is O(qL^{3}), whereas that involved in calculating the global estimate at each robot is $O((N-1)rL^{3})$, after taking estimates from the other (N − 1) robots at a frequency r. Hence, the combined computational complexity becomes $O(NqL^{3} + N(N-1)rL^{3})$, while the communication complexity is $O(N(N-1)r(L^{2}+L))$.
To reduce both the communication overhead and the computational complexity, a federated filter calculates the global estimate at the fusion filter only, which reduces the computational complexity to $O(NqL^{3} + rL^{3})$ and the communication complexity to $O(2Nr(L^{2}+L))$.
Finally, for the proposed distributed version of the federated decentralized filter, instead of calculating the estimates of all L states at a single robot, we calculate the estimates of only M (M < L) states for each sample collected. This reduces the computational and communication complexities to $O(NqM^{3} + rL^{3})$ and $O(Nr(C^{2} + C + S^{2} + S))$, respectively.
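The complexity formulae above (and in Table 2) can be collected into a small calculator; the sample count q and update rate r chosen below are illustrative assumptions:

```python
# Operation-count estimates matching the Big-O formulae of Table 2
# (constant factors dropped). N robots, L field parameters, q samples per
# robot, r communication/fusion updates; M, C, S are the reduced parameter-
# set sizes used by the federated distributed filter.

def computations(filt, N, L, q, r, M=None):
    if filt == "centralized":
        return q * N * L ** 3
    if filt == "decentralized":
        return N * q * L ** 3 + N * (N - 1) * r * L ** 3
    if filt == "fed_decentralized":
        return N * q * L ** 3 + r * L ** 3
    if filt == "fed_distributed":
        return N * q * M ** 3 + r * L ** 3
    raise ValueError(filt)

def communication(filt, N, L, q, r, C=None, S=None):
    if filt == "centralized":
        return q * N                       # only measurements are sent
    if filt == "decentralized":
        return N * (N - 1) * r * (L ** 2 + L)
    if filt == "fed_decentralized":
        return 2 * N * r * (L ** 2 + L)
    if filt == "fed_distributed":
        return N * r * (C ** 2 + C + S ** 2 + S)
    raise ValueError(filt)

# With the simulation sizes L = 161, M = 41, C = S = 61 and assumed
# q = 100 samples per robot and r = 10 updates:
args = dict(N=4, L=161, q=100, r=10)
cheapest = computations("fed_distributed", M=41, **args)
```

Since M << L, the cubic term NqM^{3} dominates the savings of the federated distributed filter over its decentralized counterparts.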
Simulation results
In our previous work, we showed simulation and experimental results for a single-robot AS procedure to validate our approach [6, 22, 23, 28, 29]. We now consider the multi-robot algorithm with centralized, decentralized, federated decentralized, and federated distributed filtering structures.
Here, a complex field of size $m \times m = 300 \times 300$ pixels is generated as the truth field and is to be reconstructed by AS using N = 4 robots. The field is divided into uniformly sized grids of size $n \times n = 30 \times 30$ each, and $m/n \times m/n = 10 \times 10 = 100$ low-resolution samples are initially collected by taking a sample from the middle of each grid. These samples provide a low-resolution description of the field and are used for training the RBF neural network; the training method is of the 'self-organized selection of centers' type [30]. We use the "newrb" function of MATLAB to train the neural network, assuming B = 40 neurons and a spread parameter of σ = 30. This provides an initial estimate of the field with $L = 4B + 1 = 161$ parameters. Spot-measurement-based AS is then performed by robots roaming in smaller grids, each of size $p \times p = 5 \times 5$, in order to improve the field estimate. All assumptions used and results obtained are shown in Table 1.
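The L = 4B + 1 parameterization of the field can be sketched as follows; the specific parameter ordering (centre, amplitude, spread per Gaussian, offset last) is an assumption made for illustration:

```python
import numpy as np

def rbf_field(x, y, theta):
    """Evaluate a field modelled as B radial (Gaussian) basis functions plus
    one free offset, i.e. len(theta) == 4 * B + 1.
    Assumed per-Gaussian layout: centre x0, centre y0, amplitude a, spread s."""
    B = (len(theta) - 1) // 4
    gaussians = np.asarray(theta[: 4 * B]).reshape(B, 4)
    value = theta[-1]                      # free offset parameter
    for x0, y0, a, s in gaussians:
        value += a * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2))
    return value

# A single Gaussian of amplitude 2 centred at the origin, spread 30,
# with offset 0.5 (so B = 1 and len(theta) = 5):
theta = [0.0, 0.0, 2.0, 30.0, 0.5]
print(rbf_field(0.0, 0.0, theta))  # 2.5 at the centre
```

With B = 40 neurons, theta has exactly 161 entries, matching the L used in the simulation.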
It is important to note here that, although the present example starts from a smaller number (100) of initial samples and hence a higher initial field error, it achieves the same accuracy as the example covered in [6], which uses a larger number (900) of initial samples. This accuracy is achieved because the present example relies on AS while using a smaller total of only 100 (initial) + 302 (adaptively acquired) = 402 samples.
The other criterion is the 2-norm of the parameter error covariance matrix, $\|P_{k+1}\|_{2}$.
Table 3 Computations and communication counts for the four filtering structures

| Filter | Robot (computations) | Fusion center (computations) | Combined (computations) | Communication |
|---|---|---|---|---|
| Centralized filter | – | 1.34 × 10^{6} | 1.34 × 10^{6} | 320 |
| Completely decentralized filter | 383.94 × 10^{6} | – | 1,535.77 × 10^{6} | 1,251.94 × 10^{3} |
| Federated decentralized filter | 333.86 × 10^{6} | 16.69 × 10^{6} | 1,352.13 × 10^{6} | 834.62 × 10^{3} |
| Federated distributed filter | 5.51 × 10^{6} | 16.69 × 10^{6} | 38.73 × 10^{6} | 121.02 × 10^{3} |
The use of four robots instead of one for sampling also reduces the field reconstruction time from 11.92 to 2.98 min, which amounts to approximately a fourfold reduction. Intuitively, sampling with four robots instead of one not only reduces the number of samples collected by each robot but also the navigation time, because of the smaller sampling area allocated to each robot.
It is important to point out at this juncture that only the most influential Gaussians are retained, a process based on their percent contributions relative to the total contribution of all the Gaussians. A Gaussian is selected whenever its relative percent contribution exceeds a very small threshold, chosen to be 0.001% in our simulation.
Table 3 illustrates the number of computations and communications involved in the above simulations. For the federated distributed filter, it is assumed that, on average, each collected sample influences the estimates of 10 neighboring Gaussians, and each communication update transmits the estimates of 15 Gaussians. Hence, the average number of parameters that can change after each sample is M = 41, since there are 10 Gaussians, 4 parameters per Gaussian, and 1 free offset parameter (i.e., $M = 4 \times 10 + 1$). Furthermore, in our simulation we assume that the number of parameters expected to change equals the number of parameters that actually change, i.e., $S = C = 4 \times 15 + 1 = 61$. Using the formulae in Table 2, the numbers of computations and communications for the federated decentralized case are, respectively, 1.14 and 1.5 times smaller than their counterparts in the completely decentralized case. Moreover, the numbers of computations and communications in the federated distributed case are, respectively, 35 and 7 times smaller than their counterparts in the federated decentralized case.
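The reduction factors quoted above can be checked directly against the Table 3 totals:

```python
# Combined computation and communication counts taken from Table 3.
comp = {"decentralized": 1535.77e6,
        "fed_decentralized": 1352.13e6,
        "fed_distributed": 38.73e6}
comm = {"decentralized": 1251.94e3,
        "fed_decentralized": 834.62e3,
        "fed_distributed": 121.02e3}

print(round(comp["decentralized"] / comp["fed_decentralized"], 2))  # 1.14
print(round(comm["decentralized"] / comm["fed_decentralized"], 2))  # 1.5
print(round(comp["fed_decentralized"] / comp["fed_distributed"]))   # 35
print(round(comm["fed_decentralized"] / comm["fed_distributed"]))   # 7
```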
Scalability
- i.
The number of sampling robots increases while the number of field parameters is kept unchanged. As the number of sampling robots increases, the computational and communication loads increase almost linearly for both the federated decentralized and federated distributed filters, whereas for the completely decentralized filter, these loads increase quadratically. Figures 9 and 10, respectively, show how the computational and communication loads increase as the number of robots grows from 4 to 20, for all four filter structures.
- ii.
The number of parameters representing the field increases while the number of robots remains unchanged. This scenario covers several cases: a highly complex field that requires a large number of parameters for its description but does not necessarily cover a wide area; a modestly complex field that ranges over a very wide area; or a field that combines both features. If the field is spread over a wide area and the number of robots is kept unchanged, then more time is required to reconstruct the field, and the numbers of computations and communications depend on the number of parameters used to represent the field.
The communication complexity is related quadratically to the number of parameters for the completely decentralized and federated decentralized filters. For the centralized filter, this complexity is not a function of the number of parameters, because it is the measurement Z, rather than the parameter estimate, that is transmitted. For the federated distributed filter, the communication complexity is also quadratic, but as the number of parameters increases, its rate of growth is smaller than the corresponding rates for the completely decentralized and federated decentralized filters. The reason is that it is M, C, and S, rather than the larger L, that appear in the last row of the "Combined" and "Communication" columns of Table 2.
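This contrast in growth rates can be illustrated numerically: doubling L roughly quadruples the communication load of the completely decentralized filter, while a federated distributed filter whose C and S stay fixed is unaffected. The N, r, C, and S values below are illustrative assumptions:

```python
def comm_decentralized(N, r, L):
    # completely decentralized filter: O(N(N-1)r(L^2 + L))
    return N * (N - 1) * r * (L ** 2 + L)

def comm_fed_distributed(N, r, C, S):
    # federated distributed filter: O(Nr(C^2 + C + S^2 + S)); no L dependence
    return N * r * (C ** 2 + C + S ** 2 + S)

N, r = 4, 10                               # illustrative values
small = comm_decentralized(N, r, 161)      # L = 161 parameters
large = comm_decentralized(N, r, 322)      # field complexity doubled
print(round(large / small, 2))             # 3.99: near-quadratic growth in L
print(comm_fed_distributed(N, r, 61, 61))  # 302560, unchanged by L
```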
Conclusion
In this article, we studied the problem of estimating the field distribution of some particular environmental variable (e.g., moisture, salinity, etc.) using both single-robot and multi-robot AS schemes and different filtering structures, such as the centralized and decentralized ones as well as our proposed federated distributed filtering structure. Our thorough simulation study, encompassing various AS schemes, clearly showed the superiority of multi-robot AS schemes over their single-robot counterparts.
These advantages of the multi-robot AS schemes are mainly due to the parallel sampling, wider area coverage, and decentralization offered by the multi-robot approach. We proposed a novel scalable structure, termed the federated distributed filter, in which the full-order local KF used in the conventional decentralized approach is distributed into several low-order KFs, leading to a further vital reduction in the field reconstruction time. Our simulation results corroborated our expectations of the higher performance of this decentralization-cum-distribution approach: the estimates of the communication and computational loads on the N robots show that a dramatic, in excess of N-fold, reduction in the sampling time can be achieved, leading to a similar reduction in the field reconstruction time. These encouraging results motivate us to further investigate both the efficiency and convergence properties of the proposed distributed filter scheme. This analytical investigation, as well as our ultimate goal of testing the proposed approach on a physical multi-robot system, is currently under way.
Declarations
Acknowledgment
This study was supported by the King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, Projects # JF090014 and SB101017.
References
- Jatmiko W, Sekiyama K, Fukuda T: A mobile robots PSO-based for odor source localization in dynamic advection–diffusion environment. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, 4527-4532.
- Christopoulos VN, Roumeliotis S: Adaptive sensing for instantaneous gas release parameter estimation. IEEE International Conference on Robotics and Automation, 2005, 4450-4456.
- Robinson DA, Campbell CS, Hopmans JW, Hornbuckle BK, Jones SB, Knight RO, Ogden F, Selker J, Wendroth O: Soil moisture measurement for ecological and hydrological watershed-scale observatories: a review. Vadose Zone J. 2008, 7: 358-389. doi:10.2136/vzj2007.0143
- Cannell CJ, Stilwell DJ: A comparison of two approaches for adaptive sampling of environmental processes using autonomous underwater vehicles. Proceedings of MTS/IEEE OCEANS, 2005, 1514-1521.
- Leonard NE, Paley D, Lekien F, Sepulchre R, Fratantoni DM, Davis R: Collective motion, sensor networks and ocean sampling. Proc. IEEE 2007, 95(1): 48-74.
- Mysorewala MF, Popa DO: Multi-scale adaptive sampling with mobile agents for mapping of forest fires. J. Intell. Robot. Syst. 2009, 54(4): 535-565. doi:10.1007/s10846-008-9246-1
- Hombal V, Sanderson AC, Blidberg R: A non-parametric iterative algorithm for adaptive sampling and robotic vehicle path planning. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, 217-222.
- Singh A, Krause A, Guestrin C, Kaiser W: Efficient informative sensing using multiple robots. J. Artif. Intell. Res. (JAIR) 2009, 34: 707-755.
- Demetriou MA, Hussein II: Estimation of spatially distributed processes using mobile spatially distributed sensor network. SIAM J. Control Optim. 2009, 48: 266-291. doi:10.1137/060677884
- Martinez S: Distributed interpolation schemes for field estimation by mobile sensor networks. IEEE Trans. Control Syst. Technol. 2010, 18(2): 491-500.
- Cressie NAC: Statistics for Spatial Data, Revised edition. Wiley, New York; 1993.
- Stein ML: Interpolation of Spatial Data: Some Theory for Kriging. Springer Series in Statistics. Springer, New York; 1999.
- Cortes J: Distributed Kriged Kalman filter for spatial estimation. IEEE Trans. Automat. Control 2009, 54(12): 2816-2827.
- Graham R, Cortes J: Spatial statistics and distributed estimation by robotic sensor networks. American Control Conference (ACC), 2010, 2422-2427.
- Graham R, Cortes J: Cooperative adaptive sampling of random fields with partially known covariance. Int. J. Robust Nonlinear Control 2012, 22(5): 504-534. doi:10.1002/rnc.1710
- Olfati-Saber R: Distributed Kalman filter with embedded consensus filters. 44th IEEE Conference on Decision and Control and 2005 European Control Conference (CDC-ECC '05), 2005, 8179-8184.
- Olfati-Saber R: Distributed Kalman filtering for sensor networks. 46th IEEE Conference on Decision and Control, 2007, 5492-5498.
- Singh A, Budzik D, Chen W, Batalin M, Stealey M, Borgstrom H, Kaiser W: Multiscale sensing: a new paradigm for actuated sensing of high frequency dynamic phenomena. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, 328-335.
- Mutambara AG: Decentralized Estimation and Control for Multisensor Systems, Chapters 2–3. CRC Press, Boca Raton; 1998: 19-79.
- Hashmipour HR, Roy S, Laub AJ: Decentralized structures for parallel Kalman filtering. IEEE Trans. Automat. Control 1988, 33(1): 88-93. doi:10.1109/9.364
- Gao Y, Krakiwsky EY, Abousalem MA, Mclellan JF: Comparison and analysis of centralized, decentralized, and federated filters. Navigation 1993, 40(1): 69-86.
- Mysorewala MF, Cheded L, Baig MS, Popa DO: A distributed multi-robot adaptive sampling scheme for complex field estimation. 11th International Conference on Control, Automation, Robotics & Vision (ICARCV), 2010, 2466-2471.
- Popa DO, Mysorewala MF, Lewis FL: EKF-based adaptive sampling with mobile robotic sensor nodes. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, 2451-2456.
- Du Q, Faber V, Gunzburger M: Centroidal Voronoi tessellations: applications and algorithms. SIAM Rev. 1999, 41: 637-676. doi:10.1137/S0036144599352836
- Bezdek JC: Pattern Recognition with Fuzzy Objective Function Algorithms. Kluwer Academic Publishers, Norwell, MA; 1981.
- Rao BS, Durrant-Whyte HF: Fully decentralized algorithm for multisensor Kalman filtering. IEE Proc. Control Theory Appl. 1991, 138: 413-420. doi:10.1049/ip-d.1991.0057
- Carlson NA: Federated square root filter for decentralized parallel processes. IEEE Trans. Aerospace Electron. Syst. 1990, 26(3): 517-525. doi:10.1109/7.106130
- Popa DO, Sanderson AC, Komerska RJ, Mupparapu SS, Blidberg DR, Chappel SG: Adaptive sampling algorithms for multiple autonomous underwater vehicles. Autonomous Underwater Vehicles, 2004, 108-118.
- Mysorewala MF, Cheded L, Qureshi A: Comparison of nonlinear filters for the estimation of parametrized spatial field by robotic sampling. 6th IEEE Conference on Industrial Electronics and Applications, 2011, 2005-2010.
- Haykin S: Neural Networks: A Comprehensive Foundation, 2nd edition. Prentice Hall; 1998.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.