7.1 Localization with noise in measurements
So far, graph theory has been used to characterize the solvability of the sensor network localization problem: if the underlying graph of the sensor network is generically globally rigid and there is a suitable set of anchors at known positions, then a unique set of sensor positions consistent with the measurements can be determined. However, distance and bearing measurements used for localization are made in a physical medium that introduces errors. In practice, therefore, distance and bearing measurements will never be exact, and the equations whose solutions deliver sensor positions in the noiseless case will in general no longer have a solution.
One of the early studies of robust distributed localization in noisy sensor networks is presented in [15]; it uses globally rigid robust quadrilaterals to cope with distance measurement errors and the ambiguities those errors cause. Assuming that the measurement noise can be modeled as a random process, the proposed algorithm uses robust quadrilaterals as the building block for localization, adding a constraint beyond graph rigidity. This constraint permits localization of only those nodes which have a high likelihood of unambiguous realization.
The study by Cao et al. [49] introduces the Cayley-Menger determinant as an important tool for formulating the geometric relations among node positions in sensor networks as quadratic constraints. It also discusses solutions to optimization problems to estimate the errors in the inaccurate measured distances between sensor nodes and anchor nodes. The solution of the optimization problem, when used to adjust noisy distance measurements, gives a set of distances between nodes which are completely consistent with the fact that sensor nodes live in the same plane as the anchor nodes.
An elegant recent article by Anderson et al. [50] provides a formal theory to deal with noise in globally rigid formations for localization. For a cooperative network, we write the set of vertices of the corresponding graph as V = V_O ∪ V_A, where V_A is the set of vertices corresponding to the anchors, and V_O is the set of vertices associated with ordinary nodes. Let the coordinate values of the anchors be p(i) = p̄(i) for i ∈ V_A. Note that the distance between any two anchors of the network is necessarily known. Let us denote the set of edges joining two vertices which correspond to anchor nodes by D_A, which is a subset of D. Then the equations which apply to the framework after using the anchor node information include distance information and coordinate information and are of the form

||p(i) − p(j)||² = d_ij²,  (i, j) ∈ D \ D_A,   (3)
p(i) = p̄(i),  i ∈ V_A.   (4)
Determining a set of values p(i) for all i ∈ V_O satisfying these equations is the localization problem. We note that the equations are written with the squares to have polynomial equations in the variables. Suppose that each squared distance in (3) is replaced by d̄_ij² = d_ij² + n_ij, the quantity n_ij being a (typically small) error in the squared distance (rather than in the distance itself); thus d_ij remains the actual distance, and n_ij constitutes the measurement noise effect. Then it is natural to consider the following set:

||p(i) − p(j)||² = d̄_ij²,  (i, j) ∈ D \ D_A,   (5)
p(i) = p̄(i),  i ∈ V_A.   (6)
This equation set is still overdetermined but will have no solution in general. One example of this problem involves localizing a single sensor node given noisy measurements of its distance from three anchors, as treated in [49]. In that case, there are two unknown coordinates of the single sensor node to be localized. But there are three equations perturbed by noise, and there is generically no solution. Given the graphical conditions that would guarantee unique localizability in the noiseless case, localization in the noisy case can be posed as a minimization problem. Despite the inability to solve the noisy equation set (5-6), the apparent solution is to seek those coordinate values of p(i), call them p̂(i), for i ∈ V_O = V \ V_A solving the following minimization problem:

min over p̂(i), i ∈ V_O of  Σ_{(i,j) ∈ D\D_A} ( ||p̂(i) − p̂(j)||² − d̄_ij² )²,  with p̂(i) = p̄(i) for i ∈ V_A.   (7)
Now we know that if all n_ij are zero, there is generically a unique solution to the minimization problem, namely, the solution of the usual localization problem, which yields a zero value for the cost function. Let n denote the vector of n_ij, corresponding to some arbitrary ordering of the subset of edges D\D_A, i.e., edges incident on at least one ordinary (nonanchor) vertex. Let ||n|| denote the Euclidean norm so that ||n||² = Σ_{(i,j) ∈ D\D_A} n_ij². The central result of [50] is the following theorem.
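To make the minimization (7) concrete, here is a minimal sketch in which one ordinary node is localized from noisy squared-distance measurements to three anchors (the single-sensor, three-anchor situation discussed above). The coordinates, noise level, and the use of SciPy's generic `least_squares` routine are illustrative assumptions, not the numerical method of [50].

```python
# Sketch: solving minimization (7) for the smallest interesting network,
# one ordinary node and three anchors. All coordinates and noise values
# are invented for the example.
import numpy as np
from scipy.optimize import least_squares

anchors = {0: np.array([0.0, 0.0]),
           1: np.array([4.0, 0.0]),
           2: np.array([1.0, 3.0])}
p_true = np.array([2.0, 1.5])          # unknown ordinary node position

rng = np.random.default_rng(7)
dbar_sq = {}                            # noisy squared distances d-bar_ij^2
for i, a in anchors.items():
    d_sq = np.sum((p_true - a) ** 2)
    dbar_sq[i] = d_sq + rng.normal(scale=0.05)   # n_ij: error in the *squared* distance

def residuals(p_hat):
    # One residual per noisy edge: ||p_hat - p(i)||^2 - dbar_ij^2
    return [np.sum((p_hat - a) ** 2) - dbar_sq[i] for i, a in anchors.items()]

sol = least_squares(residuals, x0=np.array([1.0, 1.0]))
print(sol.x)   # an estimate near p_true when the noise is small
```

With three residuals and two unknowns the system is overdetermined, exactly as in the discussion of (5)-(6), and the least-squares solution plays the role of p̂(i) in (7).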
Theorem 5 (Anderson et al. [50]). Consider a globally rigid and generic framework (G, p) defined by a graph G = (V, D) and vertex positions p(i), i = 1, 2, ..., |V|. Let V_A ⊂ V denote vertices of G corresponding to anchor nodes, of which there are at least three and for which the value of p(i) is known, and let D_A ⊂ D denote those edges incident on two vertices of V_A, with the graph G_A = (V_A, D_A) then forming a complete subgraph of G. Let d_ij denote the distance between nodes i and j when (i, j) is an edge of G. Consider the minimization problem (7), and denote the solution of the minimization problem by p̂(i), i ∈ V_O. Then there exists a suitably small positive Δ and an associated positive constant c such that if the measurement errors in the squares of the distances obey ||n|| < Δ, the solution of the minimization problem is unique and there holds ||p̂(i) − p(i)|| ≤ c||n|| for all i ∈ V_O.
This result establishes that a globally rigid network can be approximately localized when the internode distance measurements are contaminated with a sufficiently small noise. The solution of the minimization problem is unique and returns sensor position estimates which are not far from the correct values. In particular, a bound on the position errors can be found in terms of a bound on the distance errors. This result serves to fill a logical gap between the formal treatment of noiseless localization and typical practical approaches to sensor localization in nonideal circumstances.
Fang et al. [36] present a sequential algorithm, called "Sweeps," for estimating sensor positions in globally rigid networks when noiseless, and subsequently noisy [51], distance measurements are available. They use experimental evaluation to demonstrate network instances on which the algorithm is effective.
The same issue arises in localization problems relying on other measurement modalities, for example bearings, in which, again in the noiseless case, an overdetermined set of equations determines the solution. In bearing-based localization in two dimensions, typically three or more lines must have a common point of intersection; in the presence of noise they will not, and the treatment in [50] gives some of the formal machinery for dealing with this. The formal analysis in a forthcoming article [52] provides some understanding of the relationship between the errors in the bearing measurements and the corresponding errors in the sensor position estimates given a particular localization scheme. In particular, a bound on the position errors is found in terms of a bound on the bearing errors.
7.2 Computational complexity in easily localizable networks with noisy measurements
The main results in [50, 52] establish that a globally rigid network can be approximately localized when internode distance or bearing measurements are contaminated with sufficiently small noise. A related problem is solving the minimization problem numerically. The network localization problem using internode distances is, in general, NP-hard [18], and we may expect the same complexity for localization with bearing measurements. Nevertheless, several computational algorithms have been proposed to solve the noisy localization problem, e.g., algorithms using sum of squares relaxation [53], squared-range LS (SR-LS) [54], convex optimization-based algorithms and in particular semi-definite programming [55–57], ML location estimation method [58], the methods that use MDS [59], or other methods, e.g., described in [15, 60].
For a localization problem to be solvable in polynomial time, it is, in general, necessary that some special structure holds for the graph. Specifically, localization in the noiseless case can be done in linear time: (i) in the case of trilateration graphs for distance measurements; (ii) in the case of bilateration graphs for bearing measurements; (iii) in the case of double spanning trees for hybrid distance-bearing measurements.
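As an illustration of case (i), sequential trilateration in the noiseless case can be sketched as follows: each node adjacent to three already-localized nodes is placed by solving a small linear system obtained by differencing the circle equations. The network layout, the localization ordering, and the helper `trilaterate` are invented for the example.

```python
# Sketch: linear-time sequential localization of a small trilateration
# graph with exact (noiseless) distance measurements.
import numpy as np

def trilaterate(refs, d_sq):
    """Position in 2-D from three reference points and squared distances."""
    (x1, y1), (x2, y2), (x3, y3) = refs
    # Subtracting the first circle equation from the other two gives a
    # linear system in the unknown coordinates.
    A = 2.0 * np.array([[x2 - x1, y2 - y1],
                        [x3 - x1, y3 - y1]])
    b = np.array([d_sq[0] - d_sq[1] + (x2**2 + y2**2) - (x1**2 + y1**2),
                  d_sq[0] - d_sq[2] + (x3**2 + y3**2) - (x1**2 + y1**2)])
    return np.linalg.solve(A, b)

# Anchors 0, 1, 2 are known; nodes 3 then 4 are trilaterated in sequence.
pos = {0: np.array([0.0, 0.0]), 1: np.array([5.0, 0.0]), 2: np.array([1.0, 4.0])}
true = {3: np.array([3.0, 2.0]), 4: np.array([4.0, 3.0])}
neighbors = {3: [0, 1, 2], 4: [0, 2, 3]}       # a trilateration ordering

for v in [3, 4]:
    refs = [pos[u] for u in neighbors[v]]
    d_sq = [np.sum((true[v] - pos[u]) ** 2) for u in neighbors[v]]  # exact measurements
    pos[v] = trilaterate(refs, d_sq)
```

Each node is processed exactly once with a constant amount of work, which is the source of the linear-time claim above.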
For networks with noisy measurements, "polynomial time" sequential algorithms were introduced in the context of easily localizable networks. In such networks, localization can be carried out sequentially, sensor by sensor, in a distributed fashion, and central calculations are not required. In particular, recent articles by Bishop and Shames [61, 62] are two further steps in developing computationally efficient algorithms that extend the existing results for globally rigid networks in the context of easily localizable networks where distance and bearing measurements are noisy. While it is beyond the scope of this article to present a detailed discussion of such numerical schemes, we do give a brief explanation of measurement refinements carried out in sequential localization algorithms to be used in easily localizable networks.
A numerical recipe for noisy localization in globally rigid trilateration networks using distance measurements is provided in [62]. The authors consider the problem of improving the accuracy of localization using two types of algorithms, namely the "batch refinement" algorithm and the "sequential refinement" algorithm. These algorithms refine distance measurements and localize a d-lateration graph sequentially. We give a brief overview of sequential refinement here, which is based on the Cayley-Menger determinant, introduced in [49] as an important tool for formulating the geometric relations among node positions in sensor networks as quadratic constraints.
Consider a globally rigid graph G(V, D) and a set of internode distance measurements. The problem of distance measurement refinement is to find a set of distances d̂_ij for all (i, j) ∈ D such that the following set of equations is consistent:

||p(i) − p(j)||² = d̂_ij²,  (i, j) ∈ D,   (8)
p(i) = p̄(i),  i ∈ V_A.   (9)
The Cayley-Menger matrix of a single n-tuple of points p_0, ..., p_{n−1} in d-dimensional space is defined as

M(p_0, ..., p_{n−1}) =
⎡ 0                 d²(p_0, p_1)     ⋯   d²(p_0, p_{n−1})   1 ⎤
⎢ d²(p_1, p_0)      0                ⋯   d²(p_1, p_{n−1})   1 ⎥
⎢ ⋮                 ⋮                ⋱   ⋮                  ⋮ ⎥
⎢ d²(p_{n−1}, p_0)  d²(p_{n−1}, p_1) ⋯   0                  1 ⎥
⎣ 1                 1                ⋯   1                  0 ⎦   (10)

where d(p_i, p_j) denotes the distance between p_i and p_j. The determinant of the Cayley-Menger matrix provides a way of expressing the hyper-volume of a "simplex" using only the lengths of the edges. A simplex of n points is the smallest (n − 1)-dimensional convex hull containing these points. There is the following result stemming from the above definition of the volume of a simplex [63]: Consider an n-tuple of points p_0, ..., p_{n−1} in d-dimensional space. If n ≥ d + 2, then the Cayley-Menger matrix is singular, namely |M(p_0, ..., p_{n−1})| = 0.
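This singularity is easy to check numerically: for four points in the plane we have n = 4 and d = 2, so n ≥ d + 2 and the Cayley-Menger determinant must vanish. The points and the helper `cayley_menger` below are illustrative.

```python
# Sketch: the Cayley-Menger matrix of four coplanar points is singular.
import numpy as np

def cayley_menger(points):
    """Cayley-Menger matrix built from squared inter-point distances."""
    n = len(points)
    M = np.ones((n + 1, n + 1))
    M[-1, -1] = 0.0
    for i in range(n):
        for j in range(n):
            M[i, j] = np.sum((np.asarray(points[i]) - np.asarray(points[j])) ** 2)
    return M

coplanar = [(0.0, 0.0), (3.0, 0.0), (1.0, 2.0), (-1.0, 1.0)]  # four points in R^2
M = cayley_menger(coplanar)
print(np.linalg.det(M))   # zero up to floating-point rounding
```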
Now consider the refinement problem for a network with a K4 underlying graph (the complete graph on four vertices) and a set of measured internode distances d̄_ij. For this graph to be realizable in ℝ², the Cayley-Menger determinant corresponding to the internode distances should be equal to zero, i.e., the volume of the tetrahedron defined by the four nodes should be zero. So the problem is to find a set of refined squared distances d̂_ij² such that the Cayley-Menger determinant formed from them is equal to zero and the total adjustment is minimum. Hence we have the following optimization problem:

min over d̂_ij², (i, j) ∈ D of  Σ_{(i,j) ∈ D} (d̂_ij² − d̄_ij²)²  subject to  |M(d̂²)| = 0.   (11)
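A minimal sketch of the refinement problem (11) for a K4 network follows, using SciPy's generic equality-constrained minimizer in place of the specialized methods of [49, 62]; the node positions and noise level are invented for the example.

```python
# Sketch of refinement problem (11): adjust the six noisy squared distances
# of a K4 network so the Cayley-Menger determinant vanishes, staying as
# close as possible to the measurements.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

pts = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0], [3.0, 2.0]])
pairs = list(combinations(range(4), 2))          # the six edges of K4

rng = np.random.default_rng(1)
true_sq = np.array([np.sum((pts[i] - pts[j]) ** 2) for i, j in pairs])
meas_sq = true_sq + rng.normal(scale=0.1, size=6)  # noisy squared distances

def cm_det(sq):
    """Cayley-Menger determinant as a function of the squared distances."""
    M = np.ones((5, 5))
    M[-1, -1] = 0.0
    D = np.zeros((4, 4))
    for k, (i, j) in enumerate(pairs):
        D[i, j] = D[j, i] = sq[k]
    M[:4, :4] = D
    return np.linalg.det(M)

res = minimize(lambda sq: np.sum((sq - meas_sq) ** 2),   # objective of (11)
               x0=meas_sq,
               constraints={"type": "eq", "fun": cm_det})
refined_sq = res.x
print(cm_det(refined_sq))   # driven close to zero by the constraint
```

After refinement the six squared distances are consistent with four points living in a common plane, which is exactly the consistency condition motivating (11).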
For more general networks, if noisy distance information between sensor node s_0 and r (r > 3) anchor nodes is available to s_0, we can obtain r − 2 independent quadratic equality constraints [49] by imposing the coplanarity of the following node sets: {s_0, s_1, s_2, s_3}, {s_0, s_1, s_2, s_4}, ..., {s_0, s_1, s_2, s_r}. Let e_i be the error in the estimated squared distance between sensor s_0 and anchor s_i. Each coplanarity condition yields a quadratic equality constraint of the form f_i(e_1, e_2, e_i) = 0, i = 3, 4, ..., r. We want to minimize the sum of the squared errors J = Σ_{i=1}^{r} e_i², r ≥ 3, subject to the r − 2 quadratic equality constraints. When r = 3, we have a least squares problem with one quadratic constraint, which is well studied [64]. When r > 3, we can use the following Lagrangian multiplier method. Let λ_i, i = 1, ..., r − 2 be the Lagrangian multipliers, and form the Lagrangian H = J + Σ_{i=1}^{r−2} λ_i f_{i+2}; the solution is obtained from the stationary points of H. When λ_i > 0, the Lagrangian H is a strictly convex function, because of the strict convexity of the function J and the positive semi-definiteness of the Hessians of the functions f_{i+2}. Then there exists a unique global minimum. Thus numerical methods, such as gradient methods, can be utilized to search for the minimum [49].
The problem of finding cliques of fixed size in a general graph (a clique, in brief, is a complete subgraph of a graph) is part of solving this optimization problem. It is possible to define a k-clique-finding algorithm for an arbitrary graph (specifically, d-lateration graphs with k ≤ d) that has worst-case polynomial complexity in the number of vertices [62].
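For fixed k, even a brute-force scan over all k-subsets of vertices gives a k-clique finder whose running time is O(n^k), polynomial in the number of vertices n since k is a constant. The small graph below is invented for the example, and [62] uses a more refined scheme.

```python
# Sketch: listing all k-cliques of a graph by checking every k-subset of
# vertices -- polynomial in the vertex count for fixed k.
from itertools import combinations

def k_cliques(adj, k):
    """All k-cliques of a graph given as an adjacency set per vertex."""
    return [c for c in combinations(sorted(adj), k)
            if all(v in adj[u] for u, v in combinations(c, 2))]

# A small graph: vertices 0, 1, 2 form its only triangle.
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: {0}}
print(k_cliques(adj, 3))
```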
The sequential algorithm proposed in [62] is based on this Cayley-Menger method to refine distance measurements and localize a d-lateration graph (d = 2, 3, 4) sequentially. The general outline of such a sequential algorithm for trilateration is shown in Figure 14. The essence of these sequential approaches is that there are indeed numerical methods for localization in "polynomial time" for easily localizable networks.
Cyclically constrained optimization is exploited in [61] to reduce the random errors in the problem of localization with bearing measurements. For triangulation networks, which can be created by bilaterations, triangular cycles lead to a system of linear constraints on bearings. Notably, the problem of finding a set of independent polygonal angle constraints in an arbitrary triangulation graph is solvable in "polynomial time" in the number of vertices [61].
For triangulation graphs, the proposed optimization problem is a linear least-squares problem with linear constraints and thus can be solved using standard methods. The conceptual goal of the optimization algorithm is simply to force the network to live consistently, in the sense that the bearings pointing to each sensor must intersect in a common location, as illustrated in Figure 15. The constraint-based algorithm forces this requirement to be obeyed whenever a sufficient number of constraints exists. In some cases, the complexity of such algorithms is linear in the number of sensors in the network [61].
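The simplest such cyclic constraint is that the interior angles of a triangle sum to π. Refining three noisy angle measurements subject to this single linear constraint is a linear least-squares problem with a closed-form solution: spread the constraint residual equally over the three angles. The angle values below are invented, and this is only the one-triangle special case of the scheme in [61].

```python
# Sketch: refining noisy triangle angles subject to the linear constraint
# that they sum to pi (constrained linear least squares, closed form).
import numpy as np

theta_true = np.array([0.6, 1.1, np.pi - 1.7])            # consistent angles
theta_meas = theta_true + np.array([0.02, -0.01, 0.015])  # noisy measurements

# Minimize ||theta_hat - theta_meas||^2 subject to sum(theta_hat) = pi:
# the orthogonal projection spreads the residual equally.
residual = theta_meas.sum() - np.pi
theta_hat = theta_meas - residual / 3.0

print(theta_hat.sum())   # pi, up to floating-point rounding
```

Because the projection is onto an affine set that contains the true angles, the refined angles are never farther from the truth than the raw measurements.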
7.3 Limited measurement coverage
It is natural to contemplate that measurement coverage can be an issue: it might not be possible to extend the sensing radii for distance and bearing measurements.
One of the central motivations behind the hybrid distance-bearing measurements studied in this article is to eliminate the need to increase the sensing radius in order to satisfy the conditions for localizability with linear time complexity. We emphasize that, for networks with hybrid measurements, mere connectivity, without any increase in sensing radius, provides a spanning tree that guarantees localizability in linear time.
For distance-only or bearing-only measurements, two issues should be noted regarding increasing the sensing radius to achieve trilateration or bilateration, and hence localization in linear time [19]. First, in order for a sensor to sense and be sensed by its two-hop neighbors, doubling the sensing radius may be excessive. Suppose a particular sensor j has n_j neighbors, and let every sensor pass to its neighbors the list of its own neighbors; each sensor can in this way learn the list of its two-hop neighbors. Second, communication with two-hop neighbors may not need to be as frequent as that with immediate neighbors, which results in a saving of power; in fact, it might only be required once. The point of communicating with two-hop neighbors is often to eliminate a flip ambiguity. Once this is eliminated, even for a moving sensor network, it may be enough to remain within range only of the original neighbors.
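The neighbor-list exchange can be sketched as follows: after one round in which every node passes its neighbor list to its neighbors, each node unions what it received to obtain its two-hop neighbors. The topology below is invented for the example.

```python
# Sketch: each node learns its two-hop neighbors from one exchange of
# neighbor lists, with no increase in sensing radius.
neighbors = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}}

def two_hop(neighbors):
    """Union of the lists received from one's neighbors, minus self and
    the immediate neighbors themselves."""
    return {v: set().union(*(neighbors[u] for u in nbrs)) - {v} - nbrs
            for v, nbrs in neighbors.items()}

print(two_hop(neighbors))
```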
One could still contemplate networks where the bilateration or trilateration property failed. One might suspect that such networks could at least still be globally rigid, with parts of them in bilateration or trilateration 'clusters', linked by a certain number of edges. If the number of clusters is small, one might conjecture that the computational complexity of localizing such a globally rigid graph could be exponential in the number of clusters, but not the number of nodes.