Iterative algorithms based on ML or NLS for solving the positioning problem in a non-cooperative network require a good initial estimate. POCS can provide such an estimate and was first applied to positioning in [24], where the positioning problem was formulated as a convex feasibility problem.
POCS, also called successive orthogonal projection onto convex sets [33] or alternating projections [34], was originally introduced to solve the CFP in [25]. POCS has since been applied to different problems in various fields, e.g., to image restoration [35, 36] and to radiation therapy treatment planning [26]. There are generally two versions of POCS: sequential and simultaneous. In this paper, we study sequential POCS and refer the reader to [33] for a treatment of both sequential and simultaneous projection algorithms. If the projection onto each convex set is easily computed, POCS is a suitable approach for solving the CFP. In general, instead of POCS, other methods such as cyclic subgradient projection (CSP) or Oettli's method can be used [33].
In this section, we first review POCS for the positioning problem and then study variations of POCS. We then formulate a version of POCS for cooperative networks. For now, we will limit ourselves to positive measurement errors and consider the general case later.
In the absence of measurement errors, i.e., when the measured distances equal the true distances dij, it is clear that target i, at position zi, can be found at the intersection of a number of circles with radii dij and centres zj. For non-negative measurement errors, we can relax the circles to discs, because the target can then definitely be found inside the circles. We define the disc 𝒟ij, centered at zj with radius equal to the measured distance d̂ij, as

𝒟ij = {x ∈ ℝ2 | ‖x − zj‖ ≤ d̂ij}.    (4)

It is then reasonable to define an estimate of zi as a point in the intersection of the discs,

ℬi = ⋂j 𝒟ij,    (5)

where the intersection is taken over all nodes j connected to target i.
Therefore, the positioning problem can be transformed into the following convex feasibility problem:

find ẑi ∈ ℝ2 such that ẑi ∈ ℬi,  i = 1, …, M.    (6)
In a non-cooperative network, (6) consists of M independent feasibility problems, while in a cooperative network the feasibility problems are mutually dependent.
4.1 Non-cooperative networks
4.1.1 Projection onto convex sets
Consider the feasibility problem in (5) and (6) for a non-cooperative network. To apply POCS, we choose an arbitrary initial point, find its projection onto one of the sets, and then project that new point onto another set. We continue these alternating projections onto the different convex sets until convergence. Formally, POCS for target i can be implemented as in Algorithm 2, where the relaxation parameters λk are confined to the interval [ϵ1, 2 − ϵ2] for arbitrarily small ϵ1, ϵ2 > 0, and jk determines which set is used in iteration k [26]. In Algorithm 2, we have introduced 𝒫𝒟ij(z), the orthogonal projection of z onto the set 𝒟ij.
Algorithm 2
POCS
1: Initialization: choose an arbitrary initial position ẑi for target i
2: for k = 0 until convergence or predefined number K do
3: Update: ẑi ← ẑi + λk(𝒫𝒟ijk(ẑi) − ẑi)
4: end for
To find the projection of a point z ∈ ℝn onto a closed convex set Ω ⊆ ℝn, we need to solve the following optimization problem [37]:
𝒫Ω(z) = arg min x∈Ω ‖x − z‖.    (7)
When Ω is a disc, there is a closed-form solution for the projection:
𝒫𝒟ij(z) = z if ‖z − zj‖ ≤ d̂ij, and 𝒫𝒟ij(z) = zj + d̂ij (z − zj)/‖z − zj‖ otherwise,    (8)
where zj is the center of the disc 𝒟ij. When projecting a point lying outside of 𝒟ij onto 𝒟ij, the updated estimate obtained with an unrelaxed, underrelaxed, or overrelaxed parameter (i.e., λk = 1, λk < 1, or λk > 1, respectively) is found on the boundary, the outside, or the inside of the disc, respectively. For the unrelaxed parameter λk = 1, the POCS estimate after k iterations is obtained as
(9)
There is a closed-form solution for the projection onto a disc, but for general convex sets there are no closed-form solutions [29, 38], and a minimization problem has to be solved in every POCS iteration. In this situation, a CSP method can be employed instead [33], which normally has a slower convergence rate than POCS [33].
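To make the update in Algorithm 2 concrete, the following sketch implements the disc projection (8) and the relaxed sequential POCS iteration for a single target. It is a minimal illustration, not the paper's implementation: the cyclic control sequence, the fixed relaxation parameter, and the toy anchor positions and distances are assumptions made for the example.

```python
import numpy as np

def project_onto_disc(z, center, radius):
    """Orthogonal projection of z onto the disc {x : ||x - center|| <= radius}, cf. (8)."""
    diff = z - center
    dist = np.linalg.norm(diff)
    if dist <= radius:
        return z                                   # already inside the disc
    return center + radius * diff / dist           # closest point on the boundary

def pocs_non_cooperative(anchors, distances, n_iter=100, lam=1.0, z0=None):
    """Sequential (relaxed) POCS for one target, in the spirit of Algorithm 2.

    anchors   : (N, 2) array of reference-node positions zj
    distances : (N,) array of measured distances (assumed >= true distances)
    lam       : relaxation parameter (lam = 1 gives the unrelaxed update)
    """
    z = np.mean(anchors, axis=0) if z0 is None else np.asarray(z0, float)
    for k in range(n_iter):
        j = k % len(anchors)                       # simple cyclic control sequence
        p = project_onto_disc(z, anchors[j], distances[j])
        z = z + lam * (p - z)                      # relaxed projection step
    return z

# toy usage with hypothetical anchors and positively biased distances
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
true_pos = np.array([4.0, 3.0])
distances = np.linalg.norm(anchors - true_pos, axis=1) + 0.3
print(pocs_non_cooperative(anchors, distances))
```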
Suppose POCS generates a sequence of position estimates for target i. The following two theorems state convergence properties of POCS.
Theorem 4.1 (Consistent case) If the intersection of the discs in (5) is non-empty, then the sequence of POCS estimates converges to a point in the non-empty intersection.
Proof See Theorem 5.5.1 in [33, Ch.5].
In practical cases, some distance measurements might be smaller than the true distances due to measurement noise, and the intersection might then be empty. It has been shown that, under certain circumstances, POCS still converges in the following sense. Suppose the relaxation parameters {λk} form a steering sequence, defined as [26]
(10)
Let m be an integer. If in (10) we have
(11)
then the steering sequence is called an m-steering sequence [26]. For such steering sequences, we have the following convergence result.
Theorem 4.2 (Inconsistent case) If the intersection of the discs in (5) is empty and steered sequences as defined in (11) are used for POCS in Algorithm 2, then the sequence of estimates converges to a minimizer of the convex function given by the sum of squared distances to the discs 𝒟ij.
Proof See Theorem 18 in [39].
Note that in papers [18, 24, 29], and [19], the cost function minimized by POCS in the inconsistent case should be corrected to the one given in Theorem 4.2.
One interesting feature of POCS is that it is insensitive to very large positive biases in distance estimates, which can occur in NLOS conditions. For instance, in Figure 2, one bad measurement with a large positive error (shown as the big dashed circle) is assumed to be an NLOS measurement. As shown, a large positive measurement error does not have any effect on the intersection, and POCS automatically ignores it when updating the estimate. Generally, for positive measurement errors, POCS considers only those measurements that define the intersection.
When a target is outside the convex hull of reference nodes, the intersection area is large even in the noiseless case, and POCS exhibits poor performance [37]. Figure 3 shows the intersection of three discs centered around reference nodes that contains a target's position when the target is inside or outside the convex hull of the three reference nodes. We assume that there is no error in measurements. As shown in Figure 3b, the intersection is large for the target placed outside the convex hull. In [29], a method based on projection onto hyperbolic sets was shown to perform better in this case; however, the robustness to NLOS is also lost.
4.1.2 Projection onto hybrid sets
The performance of POCS strongly depends on the intersection area: the larger the intersection, the larger the error of the POCS estimate. In the POCS formulation, every point in the intersection can potentially serve as an estimate of the target position. However, it is clear that not all points in the intersection are equally plausible as target estimates. In this section, we describe several methods that produce smaller intersection areas whose points are more likely to be the targets' positions. To do this, we review POCS for hybrid convex sets for the positioning problem. In fact, here we trade the robustness property of POCS for more accurate algorithms. The hybrid algorithms have a reasonable convergence speed and show better performance than POCS in line-of-sight (LOS) conditions. However, the robustness against NLOS is partially lost with projection onto hybrid sets: in NLOS conditions, the disc defined in the POCS method still contains the target node, whereas the sets defined in the hybrid approaches might not.
Projection onto Rings: Let us consider the disc 𝒟ij defined in (4). It is obvious that the probability of finding the target inside the disc is not uniform; the target is more likely to be found near the boundary of the disc. When the measurement noise is small, instead of the disc 𝒟ij, we can consider a ring (or more formally, an annulus) ℛij defined as

ℛij = {x ∈ ℝ2 | d̂ij − ϵl ≤ ‖x − zj‖ ≤ d̂ij + ϵu},    (12)
where ϵl ≥ 0, ϵu ≥ 0, and the control parameter ϵl + ϵu determines the width of the ring, which can be connected to the distribution of the noise (if available). Then, projection onto rings (POR) can be implemented similarly to POCS, except that the disc in Algorithm 2 is replaced with the ring ℛij. When ϵl = ϵu = 0, POR reduces to a well-known algorithm called Kaczmarz's method [33], also called the algebraic reconstruction technique (ART) in the field of image processing [33, 40], or the boundary projection method in the positioning literature [41], which tries to find a point in the intersection of a number of circles. The ART method may converge to local optima instead of the global optimum [37]. The ring ℛij in (12) can be written as the intersection of a convex set 𝒞ij and a concave set 𝒦ij, respectively defined by
𝒞ij = {x ∈ ℝ2 | ‖x − zj‖ ≤ d̂ij + ϵu},    (13)

𝒦ij = {x ∈ ℝ2 | ‖x − zj‖ ≥ d̂ij − ϵl},    (14)

so that

ℛij = 𝒞ij ∩ 𝒦ij.    (15)
Hence, the ring method changes the convex feasibility problem into a convex-concave feasibility problem [42]. This method has good performance for LOS measurements, for which the measurement errors are small and the ring can be kept narrow.
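The closest-point map onto a ring also has a simple closed form, which is what POR iterates in place of the disc projection. The sketch below is an assumed implementation of that map (note that the ring is non-convex, so this is a closest-point operation rather than a projection onto a convex set). Replacing project_onto_disc by project_onto_ring in the earlier POCS sketch gives POR, and setting eps_l = eps_u = 0 recovers Kaczmarz/ART-style boundary projections.

```python
import numpy as np

def project_onto_ring(z, center, d_meas, eps_l, eps_u):
    """Closest point on the annulus {x : d_meas - eps_l <= ||x - center|| <= d_meas + eps_u}."""
    diff = z - center
    dist = np.linalg.norm(diff)
    if dist < 1e-12:
        # degenerate case: z coincides with the center, pick an arbitrary direction
        diff, dist = np.array([1.0, 0.0]), 1.0
    if dist > d_meas + eps_u:                      # outside the outer circle
        return center + (d_meas + eps_u) * diff / dist
    if dist < d_meas - eps_l:                      # inside the inner circle
        return center + (d_meas - eps_l) * diff / dist
    return z                                       # already inside the ring
```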
In some situations, the performance of POCS can be improved by exploiting additional information in the measurements [29, 30]. In addition to discs, we can consider other types of convex sets, under the assumption that the target lies in, or close to, the intersection of those convex sets. Note that we still have a convex feasibility problem. We will consider two such types of convex sets: the inside of a hyperbola and a halfplane.
Hybrid Hyperbolic POCS: By subtracting each pair of distance measurements, we obtain, besides the discs, a number of hyperbolas [29]. The hyperbola defined by subtracting the distances measured at reference nodes j and k [29] divides the plane into two separate sets: one convex and one concave. The target is assumed to be found in the intersection of a number of discs and convex hyperbolic sets. For instance, for target i,
(16)
where the convex hyperbolic set is defined by the hyperbola derived from reference nodes j and k [29]. The projection can therefore be done sequentially onto both discs and hyperbolic sets. Figure 4 shows the intersection of two discs and one hyperbolic set that contains a target. Since there is no closed-form solution for the projection onto a hyperbolic set, the CSP approach is a good replacement for POCS [33]; we can therefore apply a combination of POCS and CSP to this problem. Simulation results in [29] show a significant improvement over the original POCS when discs are combined with hyperbolic sets, especially when the target is located outside the convex hull of the reference nodes.
Hybrid Halfplane POCS: Now we consider another hybrid method for the original POCS. Considering every pair of references, e.g., the two reference nodes in Figure 5, and drawing a perpendicular bisector to the line joining the two references, the whole plane is divided into two halfplanes. By comparing the distances from a pair of reference nodes to a target, we can deduce that the target most probably belongs to the halfplane containing the reference node with the smallest measured distance. Therefore, a target is more likely to be found in the intersection of a number of discs and halfplanes than in the intersection of only the discs. Formally, for target i, we have
(17)
where ℋijk defines a halfplane that contains reference node j or k and is obtained as follows. Let aTx = b, for a, x ∈ ℝ2 and b ∈ ℝ, be the perpendicular bisector of the line joining reference nodes j and k, and suppose the halfplanes {x ∈ ℝ2|aTx > b} and {x ∈ ℝ2|aTx ≤ b} contain reference nodes j and k, respectively. The halfplane containing target i is then obtained as

ℋijk = {x ∈ ℝ2|aTx > b} if d̂ij ≤ d̂ik, and ℋijk = {x ∈ ℝ2|aTx ≤ b} otherwise.    (18)
There is a closed-form solution for the projection onto a halfplane [33]; hence, POCS can easily be applied to such hybrid convex sets. In [30], POCS for halfplanes was formulated, and we use the algorithm designed there for the projection onto halfplanes in Section 5.
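As an illustration of this construction, the sketch below selects the halfplane in (18) from a pair of anchors and their measured distances, and applies the closed-form projection onto it. The function names and the (a, b, side) representation of the halfplane are assumptions made for the example, not notation from the paper.

```python
import numpy as np

def halfplane_from_pair(z_j, z_k, d_j, d_k):
    """Halfplane bounded by the perpendicular bisector of z_j and z_k that contains
    the anchor with the smaller measured distance (assumed to contain the target).
    Assumes z_j != z_k. Returned as (a, b, side) with feasible set {x : side*(a^T x - b) >= 0}."""
    a = z_j - z_k                                          # bisector normal, pointing toward z_j
    b = 0.5 * (np.dot(z_j, z_j) - np.dot(z_k, z_k))        # bisector: a^T x = b
    return (a, b, +1.0) if d_j <= d_k else (a, b, -1.0)    # keep the side of the closer anchor

def project_onto_halfplane(z, a, b, side):
    """Orthogonal projection of z onto {x : side*(a^T x - b) >= 0}."""
    if side * (np.dot(a, z) - b) >= 0.0:
        return z                                           # already feasible
    return z - (np.dot(a, z) - b) / np.dot(a, a) * a       # move onto the bisector
```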
When there are two different convex sets, we can deal with hybrid POCS in two different ways. Either POCS is sequentially applied to discs and other convex sets or POCS is applied to discs and other sets individually and then the two estimates can be combined as an initial estimate for another round of updating. This technique is studied for a specific positioning problem in [38].
4.1.3 Bounding the feasible set
In previous sections, we studied projection methods to solve the positioning problem. In this section, we consider a different positioning algorithm based on the convex feasibility problem. As we saw before, the position of an unknown target can be found in the intersection of a number of discs. The intersection in general may have any convex shape. We still assume positive measurement errors in this section, so that the target definitely lies inside the intersection. This assumption can be fulfilled for distance estimation based on, for instance, time of flight for a reasonable signal-to-noise ratio [43]. In contrast to POCS, which tries to find a point in the feasible set as an estimate, outer approximation (OA) tries to approximate the feasible set by a suitable shape and then one point inside of it is taken as an estimate. The main problem is how to accurately approximate the intersection. There is work in the literature to approximate the intersection by convex regions such as polytopes, ellipsoids, or discs [19, 44–46].
In this section, we consider a disc approximation of the feasible set. Using simple geometry, we can find all intersection points between the different circles and then find a small disc that covers the intersection. Let xk, k = 1, ..., L, denote the circle intersection points. Among all intersection points, some are redundant and are discarded; the points that belong to the intersection of the discs are collected in the set χ. The problem then reduces to finding a disc that contains χ and covers the intersection. This is a well-known optimization problem treated in, e.g., [20, 45]. We can solve this problem by, for instance, a heuristic in which we first obtain a disc covering χ and check whether it covers the whole intersection. If the whole intersection is not covered by the disc, we increase the radius of the disc by a small value and check whether the new disc covers the intersection. This procedure continues until a disc covering the intersection is obtained. This disc may not be the minimum enclosing disc, but we are at least guaranteed that it covers the whole intersection. A version of this approach was treated in [19].
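A possible realization of this heuristic is sketched below: it collects the pairwise circle intersection points that lie in all discs (the set χ), covers them with an initial disc, and then grows the radius until sampled boundary points of the intersection are covered. The sampling density, the growth step, and the assumption of at least two mutually intersecting discs (positive measurement errors) are choices made for the example.

```python
import numpy as np
from itertools import combinations

def in_all_discs(p, centers, radii, tol=1e-9):
    """True if point p lies in every disc."""
    return all(np.linalg.norm(p - c) <= r + tol for c, r in zip(centers, radii))

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles; empty list if they do not intersect."""
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = np.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d
    return [mid + h * perp, mid - h * perp]

def covering_disc(centers, radii, step=0.05, n_check=360):
    """Heuristic disc covering the intersection of discs (assumes the discs overlap)."""
    centers = [np.asarray(c, float) for c in centers]
    # 1) candidate points chi: pairwise circle intersections lying in all discs
    chi = np.array([p for (c1, r1), (c2, r2) in combinations(zip(centers, radii), 2)
                    for p in circle_intersections(c1, r1, c2, r2)
                    if in_all_discs(p, centers, radii)])
    center = chi.mean(axis=0)                       # initial disc covering chi
    radius = float(np.max(np.linalg.norm(chi - center, axis=1)))
    # 2) sample the boundary of the intersection (arc points lying inside all discs)
    boundary = []
    for c, r in zip(centers, radii):
        th = np.linspace(0.0, 2 * np.pi, n_check, endpoint=False)
        pts = c + r * np.stack([np.cos(th), np.sin(th)], axis=1)
        boundary.extend(p for p in pts if in_all_discs(p, centers, radii))
    boundary = np.array(boundary)
    # 3) grow the radius by small steps until all boundary samples are covered
    while len(boundary) and np.max(np.linalg.norm(boundary - center, axis=1)) > radius:
        radius += step
    return center, radius
```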
Another approach was suggested in [45] that yields the following convex optimization problem:
(19)
where Sp is the unit simplex, defined as Sp = {λ ∈ ℝp | λ ≥ 0, 1Tλ = 1} with the inequality taken componentwise, and |χ| is the cardinality of the set χ. The final disc is given by a center ẑi and a radius Ri, where
(20)
Note that when there are only two discs, the intersection can be efficiently approximated by a disc, i.e., the approximated disc is the minimum disc enclosing the intersection. For more than two discs, there is no guarantee that the obtained disc is the minimum disc enclosing the intersection [45].
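Since (19) and (20) are not reproduced above, a simple numerical stand-in for the covering step is to minimize directly the (convex) pointwise-maximum distance from a candidate center to the points in χ; the sketch below does this with a general-purpose optimizer. This is an assumed alternative formulation, not the optimization problem from [45].

```python
import numpy as np
from scipy.optimize import minimize

def min_covering_disc(chi):
    """Smallest disc covering the finite point set chi: minimize max_k ||x_k - c|| over c."""
    chi = np.asarray(chi, float)
    radius_at = lambda c: float(np.max(np.linalg.norm(chi - c, axis=1)))
    res = minimize(radius_at, chi.mean(axis=0), method="Nelder-Mead")
    return res.x, radius_at(res.x)                 # center and radius of the covering disc
```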
When the problem is inconsistent, a coarse estimate may be used instead, e.g., the arithmetic mean of the positions of the reference nodes connected to the target:
(21)
Finally, we introduce a method to bound the position error of POCS for positive measurement errors, in which case the target definitely lies inside the intersection. In the best case, the estimation error is zero, and in the worst case, the absolute value of the position error equals the largest Euclidean distance between two points in the intersection. Hence, the maximum length of the intersection defines an upper bound on the absolute value of the position error of the POCS estimator. To find this upper bound, for instance for target i, we need to solve the following optimization problem:
maximize ‖x − y‖ subject to x, y ∈ ℬi.    (22)
The optimization problem (22) is non-convex. We leave its solution as an open problem and instead use the OA method described in this section; e.g., when the measurement errors are positive, we can upper bound the position error by 2Ri, the diameter of the covering disc [found from (20)].
4.2 Cooperative networks
4.2.1 Cooperative POCS
It is not straightforward to apply POCS in a cooperative network; the reason is explained in the next paragraph. Nevertheless, we propose a variation of POCS for cooperative networks. We will only consider projections onto convex sets (discs), although other sets, e.g., rings, can be considered as well.
To apply POCS, we must unambiguously define all the discs 𝒟ij for every target i. From (4), it is clear that some discs, i.e., those centered around a reference node, can be defined without any ambiguity. On the other hand, discs derived from measurements between targets have unknown centers. Let us consider Figure 6, where, for target one, we want to involve the measurement between target two and target one. Since there is no prior knowledge about the position of target two, the disc centered around target two cannot directly be involved in the positioning process for target one. Suppose that, based on applying POCS to the discs defined by reference nodes 5 and 6 (the red discs), we obtain an initial estimate ẑ2 for target two. Now, based on the measured distance between the two targets, we can define a new disc centered around ẑ2 (the dashed disc). This new disc can be combined with the two other discs defined by reference nodes 3 and 4 (the black solid discs). Figure 6 illustrates this process for localizing target one. For target two, the same procedure is followed.
Algorithm 3 implements cooperative POCS (Coop-POCS). Note that even in the consistent case, the discs may have an empty intersection during the updating process. Hence, we use relaxation parameters to handle a possibly empty intersection during updating. Note that the convergence properties of Algorithm 3 are unknown and need to be further explored in future work.
Algorithm 3
Coop-POCS
1: Initialization:
2: for k = 0 until convergence or predefined number K do
3: for i = 1,...,M do
4: find ẑi with POCS such that
5: for m = 1,...,M do
6: if m is such that , then update sets as
7: end for
8: end for
9: end for
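Because several steps of Algorithm 3 are only summarized in the listing above, the following sketch spells out one plausible reading of the scheme described in this section: each target first runs POCS over its anchor discs, and in subsequent rounds also over discs centered at the current estimates of its connected neighbor targets. It reuses the pocs_non_cooperative helper from the earlier sketch; the data structures and the round-based schedule are assumptions made for illustration.

```python
import numpy as np

def coop_pocs(anchor_pos, anchor_dist, target_links, n_rounds=20, n_inner=30, lam=1.0):
    """Sketch of cooperative POCS in the spirit of Algorithm 3.

    anchor_pos[i]  : (Ni, 2) array of reference-node positions connected to target i
    anchor_dist[i] : (Ni,) measured target-to-anchor distances for target i
    target_links   : dict {(i, m): d} of measured target-to-target distances
    Returns a list with one position estimate per target.
    """
    M = len(anchor_pos)
    # non-cooperative initialization from anchor discs only
    est = [pocs_non_cooperative(anchor_pos[i], anchor_dist[i], n_iter=n_inner)
           for i in range(M)]
    for _ in range(n_rounds):
        new_est = []
        for i in range(M):
            centers, radii = list(anchor_pos[i]), list(anchor_dist[i])
            # add discs centered at the current estimates of connected targets
            for (a, b), d in target_links.items():
                if a == i:
                    centers.append(est[b]); radii.append(d)
                elif b == i:
                    centers.append(est[a]); radii.append(d)
            new_est.append(pocs_non_cooperative(np.array(centers), np.array(radii),
                                                n_iter=n_inner, lam=lam, z0=est[i]))
        est = new_est
    return est
```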
4.2.2 Cooperatively bounding the feasible sets
In this section, we introduce the application of outer approximation to cooperative networks. Similar to non-cooperative networks, we assume that all measurement errors are positively biased. To apply OA in cooperative networks, we first determine an outer approximation of the feasible set by a simple region that can be exchanged easily between targets. In this paper, we consider a disc approximation of the feasible set. This disc outer approximation is then refined at every iteration by finding a smaller outer approximation of the feasible set. The details of the disc approximation were explained previously in Section 4.1.3, and we now extend the results to the cooperative network scenario.
To see how this method works, consider Figure 7, where target two helps target one to improve its positioning. In non-cooperative mode, target two can be found in the intersection derived from the two discs centered around z5 and z6 (the semi-oval shape). Suppose that we outer-approximate this intersection by a disc (the small dashed circle). In order to help target one outer-approximate its intersection in cooperative mode, this region should be involved in finding the intersection for target one. We can extend every point of this disc by the measured distance between target one and target two to come up with a larger disc (the big dashed circle) with the same center. It is easily verified that (1) target one is guaranteed to lie in the intersection of the extended disc and the discs around reference nodes 3 and 4, and (2) the outer-approximated intersection for target one is smaller than in the non-cooperative case. Note that if we had extended the exact intersection instead of its disc approximation, we would have ended up with an even smaller intersection for target one. Cooperative OA (Coop-OA) can be implemented as in Algorithm 4.
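The extension step described above amounts to enlarging the neighbor's covering disc by the measured inter-target distance and treating it as one additional disc when the target recomputes its own outer approximation. A small sketch of this step, reusing the covering_disc helper from the example in Section 4.1.3 (the interface and names are assumptions):

```python
import numpy as np

def coop_oa_step(anchor_centers, anchor_radii, neighbor_disc, d_between):
    """One Coop-OA refinement for a target, under the positive-error assumption.

    neighbor_disc : (center, radius) outer approximation of a neighboring target
    d_between     : measured distance between the two targets (positively biased)
    """
    nb_center, nb_radius = neighbor_disc
    centers = list(anchor_centers) + [np.asarray(nb_center, float)]
    radii = list(anchor_radii) + [nb_radius + d_between]    # extended neighbor disc
    return covering_disc(centers, radii)                     # new outer approximation
```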
We can consider the intersection obtained in Coop-OA as a constraint for NLS methods (constrained NLS, CNLS) to improve the performance of the algorithm in (3). Suppose that for target i we obtain a final disc with center ẑi and radius Ri. It is clear that we can impose ‖zi − ẑi‖ ≤ Ri as a constraint for the ith target in the optimization problem (3). This problem can be solved iteratively, similarly to Algorithm 2, considering the constraint obtained in Coop-OA. Algorithm 5 implements Coop-CNLS.
Algorithm 4
Coop-OA
1: Initialization:
2: for k = 0 until convergence or predefined number K do
3: for i = 1,...,M do
4: find an outer approximation (by a disc with center ẑi and radius Ri) using (20) or other heuristic methods such that
5: for m = 1,...,M do
6: if m is such that , then update sets as
7: end for
8: end for
9: end for
Algorithm 5
Coop-CNLS
1: Run Algorithm 4 to obtain final discs
2: Initialization: initialize
3: for k = 0 until convergence or predefined number K do
4: for i = 1,...,M do
5: Obtain the position of the ith target using non-linear LS as
6: end for
7: end for
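Since the NLS cost (3) is not reproduced in this section, the sketch below uses a standard squared-residual cost as a stand-in and enforces the Coop-OA disc constraints by projecting each gradient step back into the corresponding disc (reusing project_onto_disc from the first sketch). The cost, step size, and update schedule are assumptions made for illustration, not the paper's exact Algorithm 5.

```python
import numpy as np

def coop_cnls(anchor_pos, anchor_dist, target_links, oa_discs, n_iter=200, step=0.05):
    """Sketch of Coop-CNLS: NLS refinement constrained to the final Coop-OA discs.

    oa_discs[i] = (center_i, radius_i) is the disc obtained for target i by Coop-OA.
    """
    M = len(anchor_pos)
    est = [np.asarray(c, float).copy() for c, _ in oa_discs]   # start at the disc centers

    def gradient(i):
        """Gradient of sum_j (||z_i - p_j|| - d_j)^2 over anchors and neighbor estimates."""
        pts, ds = list(anchor_pos[i]), list(anchor_dist[i])
        for (a, b), d in target_links.items():
            if a == i:
                pts.append(est[b]); ds.append(d)
            elif b == i:
                pts.append(est[a]); ds.append(d)
        g = np.zeros(2)
        for p, d in zip(pts, ds):
            diff = est[i] - np.asarray(p, float)
            r = np.linalg.norm(diff)
            if r > 1e-9:
                g += 2.0 * (r - d) * diff / r
        return g

    for _ in range(n_iter):
        for i in range(M):
            z = est[i] - step * gradient(i)                    # gradient step on the NLS cost
            c, R = oa_discs[i]
            est[i] = project_onto_disc(z, np.asarray(c, float), R)  # keep the estimate in its disc
    return est
```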