Wireless network positioning as a convex feasibility problem

In this semi-tutorial paper, the positioning problem is formulated as a convex feasibility problem (CFP). To solve the CFP for non-cooperative networks, we consider the well-known projection onto convex sets (POCS) technique and study its properties for positioning. We also study outer-approximation (OA) methods for solving CFPs. We then show how the estimation error of POCS can be upper bounded by solving a non-convex optimization problem. Moreover, we introduce two techniques based on OA and POCS to solve the CFP for cooperative networks, and obtain two new distributed algorithms. Simulation results show that the proposed algorithms are robust against non-line-of-sight conditions.


Introduction
Wireless sensor networks (WSNs) have been considered for both civil and military applications. In most applications, position information is vital for a WSN to perform its task. Due to the drawbacks of GPS in practical networks, mainly cost and the lack of access to satellite signals in some scenarios, position estimation by the network itself has been studied extensively during the last few years. The position information is derived using fixed sensor nodes, also called reference nodes, with known positions, together with some type of measurements between different nodes [1][2][3][4][5][6][7]. From one point of view, WSNs can be divided into two groups based on collaboration between targets: cooperative networks and non-cooperative networks. In cooperative networks, the measurements between targets are also used in the positioning process to improve performance.
During the last decade, different solutions have been proposed for the positioning problem for both cooperative and non-cooperative networks, such as the maximum likelihood estimator (ML) [2,8], the maximum a posteriori estimator [9], multidimensional scaling [10], non-linear least squares (NLS) [11,12], linear least squares approaches [13][14][15], and convex relaxation techniques, e.g., semidefinite programming [12,16] and second-order cone programming [17]. In the positioning literature, complexity, accuracy, and robustness are three important factors that are generally used to evaluate the performance of a positioning algorithm. It is not expected for an algorithm to perform uniquely best in all aspects [7,18]. Some methods provide an accurate estimate in some situations, while others may have complexity or robustness advantages.
In practice, it is difficult to obtain a-priori knowledge of the full statistics of the measurement errors. Due to obstacles or other unknown phenomena, the measurement errors may follow a complicated distribution. Even if the distribution of the measurement errors is known, complexity and convergence issues may limit the performance of an optimal algorithm in practice. For instance, the ML estimator derived for positioning commonly suffers from non-convexity [3]; when it is solved using an iterative search algorithm, a good initial estimate must therefore be chosen to avoid converging to local minima. In addition to complexity and non-convexity, an important issue in positioning is how to deal with non-line-of-sight (NLOS) conditions, in which some measurements have large positive biases [19]. Traditionally, there are methods to remove such outliers, but they require tuning parameters [20,21]. In [22], a non-parametric method based on hypothesis testing was proposed for positioning under LOS/NLOS conditions. In spite of its good performance, the method seems difficult to implement in a large network, mainly due to its complexity. For a good survey of outlier detection techniques for WSNs, see [23]. A different approach was considered in [24], where the authors formulated the positioning problem as a convex feasibility problem (CFP) and applied the well-known successive projection onto convex sets (POCS) approach to solve it. This method turns out to be robust to NLOS conditions. POCS was previously studied for the CFP [25,26] and has found applications in several research fields [27,28]. For non-cooperative positioning with positively biased range measurements, POCS converges to a point in the convex feasible set (i.e., the intersection of a number of discs).
When the measurements are not positively biased, the feasible set can be empty, in which case POCS, using suitable relaxations, converges to a point that minimizes the sum of squared distances to a number of discs. In the positioning literature, POCS has been studied with distance estimates [29] and with proximity measurements [30]. Although POCS is a reliable algorithm for the positioning problem, its estimate might not be accurate enough for locating a target, especially when the target lies outside the convex hull of the reference nodes. POCS can therefore be considered a pre-processing method that gives a reliable coarse estimate; model-based algorithms such as ML or NLS can then be initialized with the POCS estimate to improve accuracy. The performance of POCS evaluated on practical data in [18,19] confirms these theoretical claims.
In this semi-tutorial paper, we study the application of POCS to the positioning problem for both non-cooperative and cooperative networks. By relaxing the robustness of POCS, we derive variations of POCS that are more accurate under certain conditions. For the scenario of positively biased range estimates, we show how the estimation error of POCS can be upper-bounded by solving a non-convex optimization problem. We also formulate a version of POCS for cooperative networks, as well as an error-bounding algorithm. Moreover, we study a method based on outer approximation (OA) to solve the positioning problem for positive measurement errors and propose a new OA method for cooperative network positioning. We further propose to combine the constraints derived in OA with NLS, which yields a new constrained NLS. The feasibility problem that we introduce for cooperative positioning has not previously been tackled in the literature. Computer simulations are used to evaluate the performance of the different methods and to study the advantages and disadvantages of POCS and OA.
The rest of this paper is organized as follows. In Section 2, the system model is introduced, and Section 3 discusses positioning using NLS. In Section 4, the positioning problem is interpreted as a convex feasibility problem, and POCS and OA are formulated for non-cooperative networks; several extensions of POCS, as well as an upper bound on the estimation error, are introduced. Later in that section, a version of POCS and an outer-approximation approach are formulated for cooperative networks. The simulation results are discussed in Section 5, followed by conclusions.

System model
Throughout this paper, we use a unified model for both cooperative and non-cooperative networks. Consider a two-dimensional network with N + M sensor nodes. Suppose that M targets are placed at unknown positions z_i ∈ ℝ², i = 1, ..., M, and that the remaining N reference nodes are located at known positions z_j ∈ ℝ², j = M + 1, ..., N + M. Every target can communicate with nearby reference nodes and also with other targets. Let us define A_i = {j | reference node j can communicate with target i} and B_i = {j | j ≠ i, target j can communicate with target i} as the sets of reference nodes and targets, respectively, that can communicate with target i. For non-cooperative networks, we set B_i = ∅. Suppose that sensor nodes are able to estimate distances to the nodes with which they communicate, giving rise to the observation

    d̂_ij = d_ij + ε_ij,  j ∈ A_i ∪ B_i,    (1)

where d_ij = ||z_i − z_j|| is the Euclidean distance between z_i and z_j, and ε_ij is the measurement error. As an example, Figure 1 shows a cooperative network consisting of two targets and four reference nodes. Since in practice the distribution of the measurement errors might be complex or completely unknown, throughout this paper we only assume that the measurement errors are independent and identically distributed (i.i.d.); that is, only limited knowledge of ε_ij is assumed available. In some situations, we further assume the measurement errors to be non-negative.
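To make the measurement model (1) concrete, the following sketch generates noisy range estimates between targets and reference nodes. The exponential error model (anticipating the non-negative-error assumption) and all names (`simulate_ranges`, `scale`) are illustrative assumptions, not part of the paper.

```python
import numpy as np

def simulate_ranges(targets, references, rng, scale=1.0):
    """Generate range estimates d_hat_ij = d_ij + eps_ij with non-negative
    i.i.d. exponential errors (an assumed error model for illustration)."""
    meas = {}
    for i, zi in enumerate(targets):
        for j, zj in enumerate(references):
            d = np.linalg.norm(zi - zj)                 # true Euclidean distance d_ij
            meas[(i, j)] = d + rng.exponential(scale)   # positively biased estimate
    return meas
```

Because the exponential error is non-negative, every simulated estimate is at least the true distance, which is exactly the property the feasibility formulation later exploits.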
The goal of a positioning algorithm is to find the positions of the M targets based on the N known reference node positions and the measurements (1).

Conventional positioning
A classic method to solve the positioning problem based on the measurements (1) is the ML estimator, which requires prior knowledge of the distribution of the measurement errors ε_ij. When such knowledge is not available, one can apply non-linear least squares (NLS) minimization [31]:

    Ẑ = argmin_{ẑ_1,...,ẑ_M} Σ_{i=1}^{M} [ Σ_{j∈A_i} (d̂_ij − ||ẑ_i − z_j||)² + Σ_{j∈B_i} (d̂_ij − ||ẑ_i − ẑ_j||)² ],    (2)

where Ẑ = [ẑ_1, ..., ẑ_M]. Note that when B_i = ∅, we recover the conventional non-cooperative NLS [11].
The solution to (2) coincides with the ML estimate if measurement errors are zero-mean i.i.d. Gaussian random variables with equal variances [31]. It has been shown in [11] that in some situations, the NLS objective function in (2) is convex, in which case it can be solved by an iterative search method without any convergence problems. In general, however, NLS and ML have nonconvex objective functions.
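For a single target with B_i = ∅, the NLS problem (2) can be attacked with a standard Gauss-Newton iteration. The sketch below is our own minimal illustration (the solver choice, the name `nls_position`, and the fixed iteration count are assumptions, not the paper's reference implementation), and it inherits the initialization sensitivity discussed in the text.

```python
import numpy as np

def nls_position(ranges, anchors, z0, iters=100):
    """Gauss-Newton for the single-target NLS cost
    sum_j (d_hat_j - ||z - z_j||)^2, starting from initial guess z0.
    Assumes z never coincides exactly with an anchor."""
    z = np.asarray(z0, float)
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    for _ in range(iters):
        diff = z - anchors                      # (N, 2) vectors z - z_j
        dist = np.linalg.norm(diff, axis=1)     # ||z - z_j||
        r = ranges - dist                       # residuals d_hat_j - ||z - z_j||
        J = -diff / dist[:, None]               # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J * step = -r
        z = z + step
    return z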
NLS as formulated in (2) is a centralized method, which may not be suitable for practical implementation. Algorithm 1 shows a distributed approach to NLS for non-cooperative networks, in which each target solves the local problem (3). To solve (3) using an iterative search algorithm, a good initial estimate is needed for every target. To avoid the drawbacks of solving NLS directly, the original non-convex problem can be relaxed into a semidefinite program [16] or a second-order cone program [17], both of which can be solved efficiently. Assuming a small measurement-error variance and enough available reference nodes, a linear estimator can also be derived that is asymptotically efficient [13,15,32].

Positioning as a convex feasibility problem
Iterative algorithms for positioning based on ML or NLS in a non-cooperative network require a good initial estimate. POCS can provide such an estimate and was first applied to positioning in [24], where the positioning problem was formulated as a convex feasibility problem. POCS, also called successive orthogonal projection onto convex sets [33] or alternating projections [34], was originally introduced to solve the CFP in [25]. POCS has since been applied to problems in various fields, e.g., image restoration [35,36] and radiation therapy treatment planning [26]. There are generally two versions of POCS: sequential and simultaneous. In this paper, we study sequential POCS and refer the reader to [33] for a study of both sequential and simultaneous projection algorithms. If the projection onto each convex set is easily computed, POCS is a suitable approach for solving the CFP. Otherwise, other methods such as cyclic subgradient projection (CSP) or Oettli's method can be used instead of POCS [33].
In this section, we first review POCS for the positioning problem and then study variations of POCS. We then formulate a version of POCS for cooperative networks. For now, we will limit ourselves to positive measurement errors and consider the general case later.
In the absence of measurement errors, i.e., d̂_ij = d_ij, it is clear that target i, at position z_i, lies in the intersection of a number of circles with radii d̂_ij and centers z_j. For non-negative measurement errors, we can relax the circles to discs, because the target is then certain to lie inside the circles. We define the disc D_ij centered at z_j as

    D_ij = {z ∈ ℝ² : ||z − z_j|| ≤ d̂_ij}.    (4)

It is then reasonable to define an estimate of z_i as a point in the intersection D_i = ∩_{j∈A_i∪B_i} D_ij of the discs D_ij. Therefore, the positioning problem can be transformed into the following convex feasibility problem:

    find ẑ_i such that ẑ_i ∈ D_i,  i = 1, ..., M.    (5)

In a non-cooperative network, this yields M independent feasibility problems, while for a cooperative network the feasibility problems are coupled.
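A point solves the feasibility problem (5) exactly when it lies in every disc D_ij. A minimal membership check (the name `in_feasible_set` is ours, for illustration) is:

```python
import numpy as np

def in_feasible_set(z, centers, radii):
    """Check membership in D_i, the intersection of the discs D_ij:
    z is feasible iff ||z - z_j|| <= d_hat_ij for every connected node j."""
    z = np.asarray(z, float)
    return all(np.linalg.norm(z - np.asarray(c, float)) <= r
               for c, r in zip(centers, radii))
```

With positively biased ranges the true target position always passes this check, which is what makes the feasible set a meaningful search region.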

4.1 Non-cooperative networks

4.1.1 Projection onto convex sets
For non-cooperative networks, B_i = ∅ in (5). To apply POCS, we choose an arbitrary initial point, project it onto one of the sets, and then project the resulting point onto another set. We continue these alternating projections onto the different convex sets until convergence. Formally, POCS for target i can be implemented as Algorithm 2, where {λ_i^k}_{k≥0} are relaxation parameters confined to the interval ε_1 ≤ λ_i^k ≤ 2 − ε_2 for arbitrarily small ε_1, ε_2 > 0, and j(k) ∈ {1, ..., |A_i|} selects the individual set D_ij(k) [26].

Algorithm 2 POCS
1: Initialization: choose an arbitrary initial position z_i^0 ∈ ℝ² for target i
2: for k = 0 until convergence or a predefined number of iterations K do
3:     Update: z_i^{k+1} = z_i^k + λ_i^k (P_{D_ij(k)}(z_i^k) − z_i^k)
4: end for

Here, P_{D_ij}(z) denotes the orthogonal projection of z onto the set D_ij. To find the projection of a point z ∈ ℝⁿ onto a closed convex set Ω ⊆ ℝⁿ, we need to solve an optimization problem [37]:

    P_Ω(z) = argmin_{x∈Ω} ||x − z||.

When Ω is a disc, the projection has a closed-form solution:

    P_{D_ij}(z) = z, if ||z − z_j|| ≤ d̂_ij;  z_j + d̂_ij (z − z_j)/||z − z_j||, otherwise,

where z_j is the center of the disc D_ij. When projecting a point outside D_ij(k) onto D_ij(k), the updated estimate for an unrelaxed, underrelaxed, or overrelaxed parameter λ_i^k (i.e., λ_i^k = 1, λ_i^k < 1, or λ_i^k > 1, respectively) lies on the boundary, outside, or inside the disc, respectively. For the unrelaxed parameter λ_i^k = 1, the POCS estimate after k iterations is the composition of projections

    z_i^k = P_{D_ij(k−1)}(··· P_{D_ij(1)}(P_{D_ij(0)}(z_i^0))).

There is a closed-form solution for the projection onto a disc, but for general convex sets there is none [29,38], and a minimization problem must then be solved in every POCS iteration. In that situation, a CSP method can be employed instead [33], which normally has a slower convergence rate than POCS [33].
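The update rule of sequential POCS with the closed-form disc projection can be sketched compactly as follows. This is our own illustration, not the paper's reference implementation; the names (`project_disc`, `pocs`, `sweeps`) and the fixed cyclic control sequence are assumptions, and the default λ = 1 corresponds to the unrelaxed case.

```python
import numpy as np

def project_disc(z, center, radius):
    """Orthogonal projection onto a disc: identity inside the disc,
    radial scaling onto the boundary outside (the closed-form solution)."""
    v = z - center
    n = np.linalg.norm(v)
    return z if n <= radius else center + radius * v / n

def pocs(centers, radii, z0, sweeps=100, lam=1.0):
    """Sequential POCS over the discs D_ij, cycling through them in order,
    with relaxed update z^{k+1} = z^k + lam * (P(z^k) - z^k)."""
    z = np.asarray(z0, float)
    centers = [np.asarray(c, float) for c in centers]
    for _ in range(sweeps):
        for c, r in zip(centers, radii):
            p = project_disc(z, c, r)
            z = z + lam * (p - z)
    return z
```

Starting from any point, the iterate is pulled into the intersection of the discs when that intersection is non-empty, consistent with the convergence discussion below.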
Suppose POCS generates a sequence {z_i^k}_{k=0}^∞. When the intersection D_i is non-empty, the sequence converges to a point in D_i. In practical cases, however, some distance measurements might be smaller than the true distances due to measurement noise, and the intersection D_i might then be empty. It has been shown that under certain circumstances, POCS still converges, in the sense described below. Suppose {λ_i^k} is a steering sequence, defined by [26]

    lim_{k→∞} λ_i^k = 0,  Σ_{k=0}^∞ λ_i^k = ∞.    (10)

Let m be a positive integer. If, in addition to (10), we have

    Σ_{k=0}^∞ |λ_i^k − λ_i^{k+m}| < ∞,

then the steering sequence {λ_i^k} is called an m-steering sequence [26]. For such steering sequences, we have the following convergence result (Theorem 4.2): in the inconsistent case, POCS converges to a point that minimizes the sum of squared distances to the discs.
Proof: See Theorem 18 in [39]. Note that in [18,19,24,29], the cost function minimized by POCS in the inconsistent case should be corrected to the one given in Theorem 4.2.
One interesting feature of POCS is that it is insensitive to very large positive biases in distance estimates, which can occur in NLOS conditions. For instance, in Figure 2, one bad measurement with a large positive error (shown as the big dashed circle) is assumed to be an NLOS measurement. As shown, a large positive measurement error does not have any effect on the intersection, and POCS automatically ignores it when updating the estimate. Generally, for positive measurement errors, POCS considers only those measurements that define the intersection.
When a target is outside the convex hull of the reference nodes, the intersection area is large even in the noiseless case, and POCS exhibits poor performance [37]. Figure 3 shows the intersection of three discs centered at reference nodes, containing a target's position, when the target is inside or outside the convex hull of the three reference nodes; we assume error-free measurements. As shown in Figure 3b, the intersection is large when the target is placed outside the convex hull. In [29], a method based on projection onto hyperbolic sets was shown to perform better in this case; however, the robustness to NLOS is then lost.

4.1.2 Projection onto hybrid sets
The performance of POCS strongly depends on the intersection area: the larger the intersection, the larger the error of the POCS estimate. In the POCS formulation, every point in the intersection is potentially an estimate of the target position, but clearly not all points in the intersection are equally plausible. In this section, we describe several methods that produce smaller intersection areas, which are more likely to contain the targets' positions. To do so, we review POCS for hybrid convex sets for the positioning problem. In effect, we trade the robustness of POCS for more accurate algorithms. The hybrid algorithms have a reasonable convergence speed and show better performance than POCS under line-of-sight (LOS) conditions. However, the robustness against NLOS is partially lost with projection onto hybrid sets: in NLOS conditions, the disc defined in the POCS method still contains the target node, but the sets defined in the hybrid approaches might not.
Projection onto Rings: Consider the disc defined in (4). Clearly, the probability of finding the target is not uniform over the disc; the target is more likely to be found near the boundary. When the measurement noise is small, instead of the disc D_ij we can consider a ring R_ij (more formally, an annulus) defined as

    R_ij = {z ∈ ℝ² : d̂_ij − ε_l ≤ ||z − z_j|| ≤ d̂_ij + ε_u},    (12)

where ε_l ≥ 0 and ε_u ≥ 0 are control parameters; the width ε_l + ε_u of the ring can be connected to the distribution of the noise (if available). Projection onto rings (POR) can then be implemented similarly to POCS, except that the disc D_ij(k) in Algorithm 2 is replaced with the ring R_ij(k). When ε_l = ε_u = 0, POR reduces to the well-known Kaczmarz method [33], also called the algebraic reconstruction technique (ART) in image processing [33,40] or the boundary projection method in the positioning literature [41], which tries to find a point in the intersection of a number of circles. The ART method may converge to a local optimum instead of the global optimum [37]. The ring in (12) can be written as the intersection of a convex set D_ij^{ε_u} (the disc of radius d̂_ij + ε_u around z_j) and a concave set C_ij^{ε_l} (the complement of the open disc of radius d̂_ij − ε_l), so that R_ij = D_ij^{ε_u} ∩ C_ij^{ε_l}. Hence, the ring method changes the convex feasibility problem into a convex-concave feasibility problem [42]. This method performs well for LOS measurements when E[ε_ij] = 0.
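The projection onto the ring (12) also has a closed form: points inside the hole are pushed out to the inner circle, points outside are pulled back to the outer circle, and points inside the ring are left unchanged. A sketch (the name `project_ring` and the degenerate-center handling are our own assumptions):

```python
import numpy as np

def project_ring(z, center, d_hat, eps_l, eps_u):
    """Projection onto the annulus d_hat - eps_l <= ||z - center|| <= d_hat + eps_u."""
    v = z - center
    n = np.linalg.norm(v)
    if n < 1e-12:                         # degenerate case: pick an arbitrary direction
        return center + np.array([d_hat - eps_l, 0.0])
    if n < d_hat - eps_l:                 # inside the hole: push out to inner circle
        return center + (d_hat - eps_l) * v / n
    if n > d_hat + eps_u:                 # outside: pull back to outer circle
        return center + (d_hat + eps_u) * v / n
    return z                              # already in the ring
```

Note that the ring is non-convex, so this pointwise projection is well defined but the convergence guarantees of convex POCS no longer apply, matching the convex-concave discussion above.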
In some situations, the performance of POCS can be improved by exploiting additional information in the measurements [29,30]. In addition to discs, we can consider other types of convex sets, under assumption that the target lies in, or close to, the intersection of those convex sets. Note that we still have a convex feasibility problem. We will consider two such types of convex sets: the inside of a hyperbola and a halfplane.
Hybrid Hyperbolic POCS: By subtracting each pair of distance measurements, we obtain, besides the discs, a number of hyperbolas [29]. The hyperbola defined by subtracting the distances measured at reference nodes j and k [29] divides the plane into two separate sets: one convex and one concave. The target is assumed to lie in the intersection of a number of discs and convex hyperbolic sets; for instance, for target i,

    ẑ_i ∈ D_i ∩ (∩_{j,k∈A_i} H_i^{jk}),

where H_i^{jk} is the convex hyperbolic set defined by the hyperbola derived from reference nodes j and k [29]. Projection can therefore be done sequentially onto both discs and hyperbolic sets. Figure 4 shows the intersection of two discs and one hyperbolic set containing a target. Since there is no closed-form solution for the projection onto a hyperbolic set, the CSP approach is a good replacement for POCS here [33]; hence, we can apply a combination of POCS and CSP to this problem. Simulation results in [29] show significant improvement over the original POCS when discs are combined with hyperbolic sets, especially when the target is located outside the convex hull of the reference nodes.
Hybrid Halfplane POCS: We now consider another hybrid variant of the original POCS. Considering every pair of reference nodes, e.g., the two reference nodes in Figure 5, and drawing the perpendicular bisector of the line joining them, the plane is divided into two halfplanes. By comparing the measured distances from a pair of reference nodes to a target, we can deduce that the target most probably belongs to the halfplane containing the reference node with the smaller measured distance. Therefore, a target is more likely to be found in the intersection of a number of discs and halfplanes than in the intersection of the discs alone. Formally, for target i we have

    ẑ_i ∈ D_i ∩ (∩_{j,k∈A_i} F_i^{jk}),

where F_i^{jk} is a halfplane containing reference node j or k, obtained as follows. Let a^T x = b, for a, x ∈ ℝ² and b ∈ ℝ, be the perpendicular bisector of the line joining reference nodes j and k, and suppose the halfplanes {x ∈ ℝ² | a^T x > b} and {x ∈ ℝ² | a^T x ≤ b} contain reference nodes j and k, respectively. The halfplane F_i^{jk} assumed to contain target i is then

    F_i^{jk} = {x ∈ ℝ² | a^T x > b}, if d̂_ij < d̂_ik;  {x ∈ ℝ² | a^T x ≤ b}, otherwise.

There is a closed-form solution for the projection onto a halfplane [33]; hence, POCS can easily be applied to such hybrid convex sets. In [30], POCS for halfplanes was formulated, and we use the algorithm designed there for the projection onto halfplanes in Section 5.
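The halfplane selection and its closed-form projection can be sketched as follows. This is our own illustration of the construction described above; the function names and the sign convention for the normal vector are assumptions.

```python
import numpy as np

def halfplane_for_target(zj, zk, d_hat_ij, d_hat_ik):
    """Pick the halfplane F_i^{jk}: bounded by the perpendicular bisector
    a^T x = b of the segment z_j z_k, on the side of the reference node
    with the smaller measured distance."""
    a = zk - zj                         # normal vector of the bisector
    b = 0.5 * (zk + zj) @ a             # bisector passes through the midpoint
    if d_hat_ij <= d_hat_ik:            # target presumably nearer z_j
        return a, b, -1                 # keep side a^T x <= b (the z_j side)
    return a, b, +1                     # keep side a^T x >= b (the z_k side)

def project_halfplane(z, a, b, side):
    """Closed-form projection onto {x : side * (a^T x - b) >= 0}."""
    if side * (a @ z - b) >= 0:
        return z                        # already feasible
    return z - (a @ z - b) / (a @ a) * a  # orthogonal step onto the boundary
```

Since the projection is a single vector operation, mixing these halfplanes into the POCS cycle adds essentially no computational cost.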
When there are two different types of convex sets, hybrid POCS can be implemented in two ways: either POCS is applied sequentially to the discs and the other convex sets, or POCS is applied to the discs and the other sets separately, after which the two estimates are combined into an initial estimate for another round of updating. The latter technique is studied for a specific positioning problem in [38].

4.1.3 Bounding the feasible set
In previous sections, we studied projection methods to solve the positioning problem. In this section, we consider a different positioning algorithm based on the convex feasibility problem. As we saw before, the position of an unknown target can be found in the intersection of a number of discs, which in general may have any convex shape. We still assume positive measurement errors in this section, so that the target is certain to lie inside the intersection. This assumption can be fulfilled for distance estimation based on, for instance, time of flight at a reasonable signal-to-noise ratio [43]. In contrast to POCS, which tries to find a point in the feasible set as an estimate, outer approximation (OA) approximates the feasible set by a suitable shape and then takes a point inside it as an estimate. The main problem is how to approximate the intersection accurately. There is work in the literature on approximating the intersection by convex regions such as polytopes, ellipsoids, or discs [19,44-46].
In this section, we consider a disc approximation of the feasible set. Using simple geometry, we can find all intersection points between the different circles and then find a small disc that covers the intersection. Let z_I^k, k = 1, ..., L, be the intersection points. Some of these points are redundant and are discarded; the points that belong to the intersection are collected in S_int = {z_I^k | z_I^k ∈ D_i}. The problem then reduces to finding a disc that contains S_int and covers the intersection. This is a well-known optimization problem treated in, e.g., [20,45]. We can solve it heuristically: first obtain a disc covering S_int and check whether it covers the whole intersection; if not, increase the radius by a small value and check again. This procedure continues until a disc covering the intersection is obtained. The result may not be the minimum enclosing disc, but it is guaranteed to cover the whole intersection. A version of this approach was treated in [19].
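The first stage of this heuristic, collecting the circle-circle intersection points that survive the membership test and wrapping a disc around them, can be sketched as follows. This is our own simplified illustration (names are ours; the radius-growing step that certifies coverage of the whole intersection is omitted, and a consistent case with at least two crossing circles is assumed).

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (empty list if they do not cross)."""
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)     # distance from c1 to chord midpoint
    h2 = r1**2 - a**2
    if h2 < 0:
        return []
    h = np.sqrt(h2)
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return [mid + h * perp, mid - h * perp]

def oa_disc(centers, radii, tol=1e-9):
    """Crude outer-approximation disc: gather the intersection points that lie
    in every disc (the set S_int) and return a disc centered at their mean,
    reaching the farthest of them."""
    centers = [np.asarray(c, float) for c in centers]
    pts = []
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            for p in circle_intersections(centers[i], radii[i], centers[j], radii[j]):
                if all(np.linalg.norm(p - centers[k]) <= radii[k] + tol for k in range(n)):
                    pts.append(p)       # p belongs to S_int
    pts = np.array(pts)
    c = pts.mean(axis=0)
    R = max(np.linalg.norm(p - c) for p in pts)
    return c, R
```

For two discs, the resulting disc through the two crossing points already encloses the lens-shaped intersection; for three or more discs, the growth step described in the text would still be needed.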
Another approach, suggested in [45], yields a convex optimization problem over weight vectors p in the unit simplex S_p = {p | p_i ≥ 0, Σ_i p_i = 1}. The final disc is given by a center ẑ_i^c and a radius R̂_i computed from the optimal weights (20). Note that when there are two discs (|A_i| = 2), the intersection can be efficiently approximated by a disc, i.e., the approximating disc is the minimum disc enclosing the intersection. For |A_i| ≥ 3, there is no guarantee that the obtained disc is the minimum disc enclosing the intersection [45].
When the problem is inconsistent, a coarse estimate may be used instead, e.g., the arithmetic mean of the positions of the connected reference nodes. Finally, we introduce a method to bound the position error of POCS for positive measurement errors, where the target is certain to lie inside the intersection. In the best case, the estimation error is zero; in the worst case, the absolute position error equals the largest Euclidean distance between two points in the intersection. Hence, the maximum length of the intersection defines an upper bound on the absolute position error of the POCS estimator. To find this bound for target i, we need to solve the following optimization problem:

    maximize ||x − y||  subject to x, y ∈ D_i.    (22)

The optimization problem (22) is non-convex. We leave its solution as an open problem and instead use the OA method described in this section: when the measurement errors are positive, the position error can be upper bounded by R̂_i [found from (20)].

4.2 Cooperative networks

4.2.1 Cooperative POCS
It is not straightforward to apply POCS in a cooperative network, for reasons explained in the next paragraph. Nevertheless, we propose a variation of POCS for cooperative networks. We will only consider projection onto convex sets (discs), although other sets, e.g., rings, could be considered as well.
To apply POCS, we must unambiguously define all the discs D_ij for every target i. From (4), it is clear that some discs, i.e., those centered at reference nodes, can be defined without any ambiguity. On the other hand, discs derived from measurements between targets have unknown centers. Consider Figure 6, where for target one we want to involve the measurement between targets one and two. Since there is no prior knowledge about the position of target two, the disc centered at target two cannot directly be involved in the positioning of target one. Suppose that, by applying POCS to the discs defined by reference nodes 5 and 6 (the red discs), we obtain an initial estimate ẑ_2 for target two. Based on the distance estimate d̂_12, we can then define a new disc centered at ẑ_2 (the dashed disc). This new disc can be combined with the two discs defined by reference nodes 3 and 4 (the black solid discs). Figure 6 shows the process for localizing target one; for target two, the same procedure is followed.
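The idea of using a neighbor's current estimate as a pseudo-reference can be sketched as follows. This is our own simplified reading of the cooperative scheme, not the paper's Algorithm 3: all names are illustrative, the update order is fixed, and (as noted below) the convergence properties of such coupled updates are an open question.

```python
import numpy as np

def project_disc(z, c, r):
    """Closed-form orthogonal projection onto the disc (c, r)."""
    v = z - c
    n = np.linalg.norm(v)
    return z if n <= r else c + r * v / n

def coop_pocs(anchor_sets, range_sets, target_links, z0, rounds=50):
    """Cooperative POCS sketch: each target projects over its anchor discs,
    then over discs centered at the *current estimates* of connected targets
    (radius = inter-target range estimate). target_links[i] is a list of
    (other_target_index, d_hat) pairs."""
    z = [np.asarray(p, float) for p in z0]
    anchor_sets = [[np.asarray(c, float) for c in s] for s in anchor_sets]
    for _ in range(rounds):
        for i in range(len(z)):
            for c, r in zip(anchor_sets[i], range_sets[i]):
                z[i] = project_disc(z[i], c, r)
            for m, dhat in target_links[i]:
                z[i] = project_disc(z[i], z[m], dhat)   # pseudo-reference disc
    return z
```

Because the pseudo-reference discs move as the neighbors' estimates move, the sets being projected onto change between iterations, which is exactly why the convergence analysis of standard POCS does not carry over directly.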
Algorithm 3 implements cooperative POCS (Coop-POCS). Note that even in the consistent case, discs may have an empty intersection during updating. Hence, we use relaxation parameters to handle a possibly empty intersection during updating. Note that the convergence properties of Algorithm 3 are unknown and need to be further explored in future work.

4.2.2 Cooperatively bounding the feasible sets
In this section, we extend the outer-approximation approach to cooperative networks. As for non-cooperative networks, we assume that all measurement errors are positively biased. To apply OA in a cooperative network, we first determine an outer approximation of the feasible set by a simple region that can easily be exchanged between targets; in this paper, we use a disc. The disc outer approximation is then refined at every iteration, yielding a smaller outer approximation of the feasible set. The details of the disc approximation were explained in Section 4.1.3; we now extend them to the cooperative scenario.
To see how this method works, consider Figure 7, where target two helps target one to improve its positioning. In non-cooperative mode, target two can be found in the intersection of the two discs centered at z_5 and z_6 (the lens-shaped region). Suppose that we outer-approximate this intersection by a disc (the small dashed circle). To help target one outer-approximate its intersection in cooperative mode, this region should be involved in finding the intersection for target one. We can extend every point of this disc by d̂_12 to obtain a larger disc (the big dashed circle) with the same center. It is easily verified that (1) target one is guaranteed to lie in the intersection of the extended disc and the discs around reference nodes 3 and 4, and (2) the outer-approximated intersection for target one is smaller than in the non-cooperative case. Note that if we had extended the exact intersection instead, we would have ended up with an even smaller intersection for target one. Cooperative OA (Coop-OA) can be implemented as in Algorithm 4.
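The disc-extension step rests on the triangle inequality: if target two lies in a disc of radius r₂ around c₂, and the range estimate d̂₁₂ is positively biased, then target one lies within r₂ + d̂₁₂ of c₂. A one-line sketch (the name `extended_disc` is ours):

```python
import numpy as np

def extended_disc(c2, r2, d12_hat):
    """If target 2 lies in the disc (c2, r2) and d12_hat >= ||z1 - z2||
    (positively biased range), then by the triangle inequality
    ||z1 - c2|| <= ||z1 - z2|| + ||z2 - c2|| <= d12_hat + r2."""
    return np.asarray(c2, float), r2 + d12_hat
```

The extended disc is then intersected with target one's own anchor discs, exactly as in the non-cooperative OA step.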
The intersection obtained by Coop-OA can also be used as a constraint in the NLS method (3), yielding a constrained NLS (CNLS) with improved performance. Suppose that for target i we obtain a final disc D̂_i with center ẑ_i and radius R̂_i. We can then impose ||z_i − ẑ_i|| ≤ R̂_i as a constraint on the ith target in the optimization problem (3). The resulting problem can be solved iteratively, similarly to Algorithm 2, under the constraints obtained from Coop-OA. Algorithm 5 implements Coop-CNLS.
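One simple way to honor the disc constraint ||z_i − ẑ_i|| ≤ R̂_i is a projected Gauss-Newton iteration: take an unconstrained NLS step, then project the iterate back onto the constraint disc. This is our own stand-in for a constrained solver (in the simulations the paper uses fmincon); all names are illustrative.

```python
import numpy as np

def cnls_step(z, anchors, ranges, c_oa, R_oa, iters=50):
    """Constrained NLS sketch: Gauss-Newton on the single-target NLS cost,
    with the iterate projected onto the Coop-OA disc (c_oa, R_oa) after
    each step."""
    z = np.asarray(z, float)
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    c_oa = np.asarray(c_oa, float)
    for _ in range(iters):
        diff = z - anchors
        dist = np.linalg.norm(diff, axis=1)
        r = ranges - dist                       # residuals
        J = -diff / dist[:, None]               # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        z = z + step                            # unconstrained Gauss-Newton step
        v = z - c_oa                            # project onto constraint disc
        n = np.linalg.norm(v)
        if n > R_oa:
            z = c_oa + R_oa * v / n
    return z
```

Even when the unconstrained NLS would wander toward a distant local minimum, the projection keeps the iterate inside the region that is guaranteed to contain the target.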


Simulation results
In this section, we evaluate the performance of the proposed methods for non-cooperative and cooperative networks. The network deployment shown in Figure 8, containing 13 reference nodes at fixed positions, is used in the simulations for both cases. We study two cases for the measurement noise: (1) all measurement errors are positive and (2) measurement errors can be both positive and negative. For positive measurement errors, we use an exponential distribution [47] with mean γ; for mixed positive and negative measurement errors, we use a zero-mean Gaussian distribution, i.e., ε_ij ~ N(0, σ²). In the simulations for both non-cooperative and cooperative networks, we set γ = σ = 1 m. For every scenario (cooperative or non-cooperative), we study both types of measurement noise. To compare the different methods, we consider the cumulative distribution function (CDF) of the position error. We assume that a pair of nodes, i.e., a (target, reference) or (target, target) pair, can connect and estimate the distance between each other if that distance is less than 20 m. To evaluate NLOS conditions, we add a uniform random variable b ~ U(0, U) to the measured distance in 20% of the cases. For non-cooperative and cooperative networks, we set U = 100 m and U = 20 m, respectively.
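The measurement generation described above can be sketched as follows; the function name and parameter defaults are our own, chosen to match the stated simulation settings (γ = σ = 1 m, 20% NLOS, bias b ~ U(0, U)).

```python
import numpy as np

def noisy_range(d, rng, positive=True, gamma=1.0, sigma=1.0,
                nlos_prob=0.2, nlos_max=100.0):
    """Simulated range estimate: exponential error (mean gamma) or
    zero-mean Gaussian error (std sigma), plus a uniform NLOS bias
    b ~ U(0, nlos_max) in a fraction nlos_prob of the cases."""
    eps = rng.exponential(gamma) if positive else rng.normal(0.0, sigma)
    d_hat = d + eps
    if rng.random() < nlos_prob:
        d_hat += rng.uniform(0.0, nlos_max)   # NLOS: large positive bias
    return d_hat
```

In the positive-error case every simulated range is at least the true distance, so the feasibility formulation remains consistent; the NLOS bias only enlarges some discs, which POCS and OA tolerate by construction.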
To run POCS for a target, in both cooperative and non-cooperative networks, we perform 10N_a iterations, where N_a is the number of nodes connected to the target. In the inconsistent scenario, the relaxation parameters are first set to one and, after a given number k_0 of iterations, decrease following the schedule in [29], where ⌈x⌉ denotes the smallest integer greater than or equal to x. In the simulations, we set k_0 = 5N_a. To implement NLS for non-cooperative networks and constrained NLS for cooperative networks (Coop-CNLS), we use the MATLAB routines lsqnonlin [48], initialized randomly, and fmincon [48], initialized and constrained with the outer approximation, respectively. For the cooperative network, every target broadcasts its estimate, i.e., a point or a disc, 20 times over the network.
For Gaussian measurement errors, the feasibility problem might be inconsistent. For the OA approach in this case, we take the average of the (pseudo) reference nodes connected to a target as a coarse estimate. For the hybrid approaches, we only study the combination of discs with halfplanes, since it has not been studied previously; for the other two methods introduced in Section 4.1.2, we refer the reader to [18,19,29].
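The disc-halfplane combination relies on pairwise comparisons of measured distances: if d_i ≤ d_j, the target is assumed to lie in the halfplane of points at least as close to reference node i as to node j. A sketch of this construction and the corresponding projection (illustrative Python; for the exact formulation see [18,19]):

```python
import numpy as np

def pairwise_halfplane(xi, xj):
    """Halfplane {z : a@z <= b} of points at least as close to xi as to xj,
    i.e., ||z - xi|| <= ||z - xj||, used when the measured d_i <= d_j."""
    a = 2.0 * (xj - xi)
    b = xj @ xj - xi @ xi
    return a, b

def project_halfplane(z, a, b):
    """Euclidean projection of z onto the halfplane {z : a@z <= b}."""
    viol = a @ z - b
    if viol <= 0:
        return np.asarray(z, float)     # already inside the halfplane
    return z - (viol / (a @ a)) * a     # move along the normal to boundary
```

These halfplane projections can then be interleaved with the disc projections in the POCS loop.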

Non-cooperative positioning
In this section, we evaluate the performance of POCS, hybrid halfplane POCS (H-POCS), OA, NLS, and constrained NLS (CNLS) in both LOS and NLOS conditions. Figure 9 depicts the CDFs of the different methods for both positive and mixed positive-negative measurement errors in LOS conditions. As can be seen, NLS performs best overall. Since the objective function of NLS is convex in this scenario (see [11]), NLS converges to the global minimum and outperforms the other methods. For positive measurement errors, POCS outperforms NLS for small position errors, i.e., e ≤ 1 m. Combining discs with halfplanes improves the performance of POCS for large errors. OA shows good performance compared to the other methods. To summarize, in LOS conditions NLS outperforms the other methods except for very small position errors when the measurement errors are positive. For positive measurement errors, the performance of POCS, H-POCS, and OA is compared in Table 1.
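The CDF curves compared throughout this section are empirical CDFs of the position error over independent simulation runs; computing one is straightforward (a small illustrative helper, with our own naming):

```python
import numpy as np

def empirical_cdf(errors):
    """Empirical CDF of position errors: returns sorted errors x and the
    probabilities P(e <= x), suitable for plotting CDF curves."""
    x = np.sort(np.asarray(errors, float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p
```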
To evaluate the robustness of the different algorithms against NLOS conditions, we plot their CDFs in Figure 10. We see that POCS and OA are robust against NLOS conditions in both scenarios. It is also seen that NLS performs poorly, and that its performance can be improved by adding the constraint derived from OA. The hybrid POCS, i.e., projection onto halfplanes and discs, performs poorly compared to POCS. The reason is that in NLOS conditions, the distance measured from a target to reference node i might be larger than the distance measured from the target to reference node j even if the target is closer to reference node i. We might therefore end up in the wrong halfplane, which results in a large error. As in the LOS case, the methods can be ranked accordingly; in NLOS conditions, POCS and OA are the most attractive choices.
To assess the tightness of the upper bound on the position error for POCS derived in Section 4.1.3, we investigate the difference between the upper bound, R_i, and the true error e_i = ||ẑ_i − z_i||. In Figure 11, we plot the CDF of the relative difference, i.e., (R_i − e_i)/e_i, for positive measurement errors in LOS and NLOS conditions. As seen, the bound is not always tight. In fact, in more than 10% of the simulated scenarios, the upper bound is more than 25 times as large as the true error.

Cooperative positioning
In this section, we evaluate the performance of the different algorithms for the cooperative network in both LOS and NLOS conditions. Figure 12 shows the CDFs of the different algorithms in LOS conditions. As can be seen, Coop-OA and Coop-CNLS show good performance. Coop-POCS exhibits acceptable performance, while Coop-NLS performs poorly compared to the other methods. We also see that cooperation between targets can significantly improve the position estimates. In Table 2, we compare the different methods in LOS conditions based on the position error e.
To evaluate the performance of the different methods in NLOS conditions, we plot their CDFs in Figure 13. As the figure shows, Coop-OA outperforms the other methods. Adding the constraints of the outer approximation to Coop-NLS improves the performance of this nonlinear estimator.

Conclusion
In this semi-tutorial paper, the positioning problem was formulated as a convex feasibility problem. For non-cooperative networks, the method of projection onto convex sets (POCS) as well as outer approximation (OA) was employed to solve the problem, and the main properties of POCS for positioning were studied. Based on OA and POCS, two distributed algorithms were derived for cooperative networks. We also proposed to combine the constraints derived in OA with NLS, yielding a new constrained NLS estimator. Simulation results show that the proposed methods are robust against non-line-of-sight conditions for both non-cooperative and cooperative networks.