Iterative algorithms that solve the positioning problem based on ML or NLS in a non-cooperative network require a good initial estimate. POCS can provide such an estimate; it was first applied to positioning in [24], where the positioning problem was formulated as a convex feasibility problem.

POCS, also called successive orthogonal projection onto convex sets [33] or alternating projections [34], was originally introduced to solve the CFP in [25]. POCS has since been applied to different problems in various fields, e.g., image restoration [35, 36] and radiation therapy treatment planning [26]. There are generally two versions of POCS: sequential and simultaneous. In this paper, we study sequential POCS and refer the reader to [33] for a study of both sequential and simultaneous projection algorithms. If the projection onto each convex set is easily computed, POCS is a suitable approach to solve the CFP. In general, other methods, such as cyclic subgradient projection (CSP) or Oettli's method, can be used instead of POCS [33].

In this section, we first review POCS for the positioning problem and then study variations of POCS. We then formulate a version of POCS for cooperative networks. For now, we will limit ourselves to positive measurement errors and consider the general case later.

In the absence of measurement errors, i.e., \widehat{d}_{ij}=d_{ij}, it is clear that target *i*, at position **z**_{i}, can be found in the intersection of a number of circles with radii d_{ij} and centres **z**_{j}. For non-negative measurement errors, we can relax the circles to discs, because the target is then guaranteed to lie inside the circles. We define the disc \mathcal{D}_{ij} centered at **z**_{j} as

\mathcal{D}_{ij}=\left\{z\in\mathbb{R}^{2}\,\middle|\,\|z-z_{j}\|\le\widehat{d}_{ij}\right\},\qquad j\in\mathcal{A}_{i}\cup\mathcal{B}_{i}.

(4)

It is then reasonable to define an estimate of **z**_{i} as a point in the intersection \mathcal{D}_{i} of the discs \mathcal{D}_{ij}:

\widehat{z}_{i}\in\mathcal{D}_{i}=\bigcap_{j\in\mathcal{A}_{i}\cup\mathcal{B}_{i}}\mathcal{D}_{ij}.

(5)

Therefore, the positioning problem can be transformed to the following *convex* feasibility problem:

\mathsf{find}\ Z=\left[z_{1},\ldots,z_{M}\right]\ \mathsf{such\ that}\ z_{i}\in\mathcal{D}_{i},\quad i=1,\ldots,M.

(6)

In a non-cooperative network, there are *M* independent feasibility problems, while in a cooperative network, the feasibility problems are coupled.

### 4.1 Non-cooperative networks

#### 4.1.1 Projection onto convex sets

For non-cooperative networks, \mathcal{B}_{i}=\varnothing in (5). To apply POCS for non-cooperative networks, we choose an arbitrary initial point, project it onto one of the sets, and then project the resulting point onto another set. We continue alternating projections onto the different convex sets until convergence. Formally, POCS for target *i* can be implemented as Algorithm 2, where \left\{\lambda_{k}^{i}\right\}_{k\ge 0} are relaxation parameters, confined to the interval \epsilon_{1}\le\lambda_{k}^{i}\le 2-\epsilon_{2} for arbitrarily small \epsilon_{1},\epsilon_{2}>0, and 1\le j(k)\le\left|\mathcal{A}_{i}\right| determines the individual set \mathcal{D}_{ij(k)} [26]. In Algorithm 2, we have introduced \mathcal{P}_{\mathcal{D}_{ij}}(z), the orthogonal projection of **z** onto the set \mathcal{D}_{ij}.

**Algorithm 2**
*POCS*

1: Initialization: choose an arbitrary initial target position z_{i}^{0}\in\mathbb{R}^{2} for target *i*

2: **for** *k* = 0 until convergence or a predefined number *K* **do**

3: Update:

z_{i}^{k+1}=z_{i}^{k}+\lambda_{k}^{i}\left(\mathcal{P}_{\mathcal{D}_{ij(k)}}\left(z_{i}^{k}\right)-z_{i}^{k}\right)

4: **end for**

To find the projection of a point **z** ∈ ℝ^{n} onto a closed convex set Ω ⊆ ℝ^{n}, we need to solve the optimization problem [37]:

\mathcal{P}_{\Omega}(z)=\underset{x\in\Omega}{\arg\min}\ \|z-x\|.

(7)

When Ω is a disc, there is a closed-form solution for the projection:

\mathcal{P}_{\mathcal{D}_{ij}}(z)=\begin{cases} z_{j}+\dfrac{z-z_{j}}{\|z-z_{j}\|}\,\widehat{d}_{ij}, & \|z-z_{j}\|>\widehat{d}_{ij}\\[4pt] z, & \|z-z_{j}\|\le\widehat{d}_{ij},\end{cases}

(8)

where **z**_{j} is the center of the disc \mathcal{D}_{ij}. When projecting a point outside of \mathcal{D}_{ij(k)} onto \mathcal{D}_{ij(k)}, the updated estimate lies on the boundary, outside, or inside the disc for an unrelaxed, underrelaxed, or overrelaxed parameter \lambda_{k}^{i} (i.e., \lambda_{k}^{i}=1, \lambda_{k}^{i}<1, or \lambda_{k}^{i}>1), respectively. For the unrelaxed parameter \lambda_{k}^{i}=1, the POCS estimate after *k* iterations is obtained as

z_{i}^{k}=\mathcal{P}_{\mathcal{D}_{ij(k-1)}}\mathcal{P}_{\mathcal{D}_{ij(k-2)}}\cdots\mathcal{P}_{\mathcal{D}_{ij(0)}}\left(z_{i}^{0}\right).

(9)

There is a closed-form solution for the projection onto a disc, but for general convex sets there is none [29, 38], and at every POCS iteration a minimization problem must be solved. In this situation, a CSP method can be employed instead [33], which normally has a slower convergence rate than POCS [33].
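To make Algorithm 2 concrete, the following sketch implements the disc projection (8) and the relaxed POCS update. The anchor positions, distances, and the cyclic control sequence j(k) = k mod |\mathcal{A}_i| are illustrative assumptions, not prescribed above:

```python
import numpy as np

def project_onto_disc(z, center, radius):
    """Orthogonal projection of z onto the disc {x : ||x - center|| <= radius}, cf. (8)."""
    d = z - center
    norm = np.linalg.norm(d)
    if norm <= radius:
        return z.copy()                    # already inside: the projection is z itself
    return center + d / norm * radius      # pull back radially to the boundary

def pocs(anchors, dists, z0, lam=1.0, iters=200):
    """Sequential POCS (Algorithm 2) with a cyclic control sequence j(k) (an assumption)."""
    z = np.asarray(z0, dtype=float)
    n = len(anchors)
    for k in range(iters):
        j = k % n                                          # cyclic choice of D_{ij(k)}
        p = project_onto_disc(z, anchors[j], dists[j])
        z = z + lam * (p - z)                              # relaxed update
    return z

# Illustrative example: three anchors and noise-free distances to a target at (2, 1)
anchors = [np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([0.0, 5.0])]
target = np.array([2.0, 1.0])
dists = [np.linalg.norm(target - a) for a in anchors]
est = pocs(anchors, dists, z0=np.array([4.0, 4.0]))
```

With noise-free distances the intersection of the three discs is the single point at the target, so the iterates converge to it.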

Suppose POCS generates a sequence {\left\{{z}_{i}^{k}\right\}}_{k=0}^{\infty}. The following two theorems state convergence properties of POCS.

**Theorem 4.1** (Consistent case) *If the intersection of*{\mathcal{D}}_{i}*in* (5) *is non-empty, then the sequence*{\left\{{z}_{i}^{k}\right\}}_{k=0}^{\infty}*converges to a point in the non-empty intersection*{\mathcal{D}}_{i}.

**Proof** See Theorem 5.5.1 in [33, Ch.5].

In practical cases, some distance measurements might be smaller than the true distances due to measurement noise, and the intersection \mathcal{D}_{i} might then be empty. It has been shown that, under certain circumstances, POCS still *converges* in the following sense. Suppose \lambda_{k}^{i} is a steering sequence defined by [26]

\lim_{k\to\infty}\lambda_{k}^{i}=0,\qquad \lim_{k\to\infty}\frac{\lambda_{k+1}^{i}}{\lambda_{k}^{i}}=1,\qquad \sum_{k=0}^{\infty}\lambda_{k}^{i}=+\infty.

(10)

Let *m* be an integer. If in (10) we have

\lim_{k\to\infty}\frac{\lambda_{km+j}^{i}}{\lambda_{km}^{i}}=1,\qquad 1\le j\le m-1,

(11)

then the steering sequence \lambda_{k}^{i} is called an *m*-steering sequence [26]. For such steering sequences, we have the following convergence result.
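As a concrete example (ours, not from [26]), the harmonic sequence \lambda_{k}^{i}=1/(k+1) satisfies (10) and, for any integer m\ge 1, also (11):

```latex
\lim_{k\to\infty}\frac{1}{k+1}=0,\qquad
\lim_{k\to\infty}\frac{\lambda_{k+1}^{i}}{\lambda_{k}^{i}}
  =\lim_{k\to\infty}\frac{k+1}{k+2}=1,\qquad
\sum_{k=0}^{\infty}\frac{1}{k+1}=+\infty,\qquad
\lim_{k\to\infty}\frac{\lambda_{km+j}^{i}}{\lambda_{km}^{i}}
  =\lim_{k\to\infty}\frac{km+1}{km+j+1}=1.
```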

**Theorem 4.2** (Inconsistent case) *If the intersection*\mathcal{D}_{i}*in* (5) *is empty and steering sequences defined in* (11) *are used for POCS in Algorithm 2, then the sequence*\left\{z_{i}^{k}\right\}_{k=0}^{\infty}*converges to the minimum of the convex function*\sum_{j\in\mathcal{A}_{i}}\|\mathcal{P}_{\mathcal{D}_{ij}}(z)-z\|^{2}.

**Proof** See Theorem 18 in [39].

Note that in papers [18, 24, 29], and [19], the cost function minimized by POCS in the inconsistent case should be corrected to the one given in Theorem 4.2.

One interesting feature of POCS is that it is insensitive to very large positive biases in distance estimates, which can occur in NLOS conditions. For instance, in Figure 2, one bad measurement with a large positive error (shown as the large dashed circle) is assumed to be an NLOS measurement. As shown, a large positive measurement error does not have any effect on the intersection, and POCS automatically ignores it when updating the estimate. Generally, for positive measurement errors, POCS considers only those measurements that define the intersection.

When a target is outside the convex hull of the reference nodes, the intersection area is large even in the noiseless case, and POCS exhibits poor performance [37]. Figure 3 shows the intersection of three discs centered at reference nodes that contains a target's position when the target is inside or outside the convex hull of the three reference nodes; we assume error-free measurements. As shown in Figure 3b, the intersection is large for a target placed outside the convex hull. In [29], a method based on projection onto hyperbolic sets was shown to perform better in this case; however, the robustness to NLOS is then lost.

#### 4.1.2 Projection onto hybrid sets

The performance of POCS strongly depends on the intersection area: the larger the intersection area, the larger the error of the POCS estimate. In the POCS formulation, every point in the intersection area can potentially be an estimate of a target position. However, it is clear that not all points in the intersection are equally plausible as target estimates. In this section, we describe several methods that produce smaller intersection areas, which are more likely to contain the targets' positions. To do this, we review POCS for hybrid convex sets for the positioning problem. In fact, here we *trade the robustness* property of POCS for *more accurate* algorithms. The hybrid algorithms have a reasonable convergence speed and show better performance than POCS in line-of-sight (LOS) conditions. However, the robustness against NLOS is partially lost in projection onto hybrid sets. The reason is that in NLOS conditions, the disc defined in the POCS method contains the target node; for the hybrid sets, this is no longer guaranteed, i.e., a set defined in the hybrid approach might not contain the target node.

**Projection onto Rings**: Let us consider the disc defined in (4). It is obvious that the probability of finding a target inside the disc is not uniform. The target is more likely to be found near the boundary of the disc. When the measurement noise is small, instead of a disc {\mathcal{D}}_{ij}, we can consider a ring {\mathcal{R}}_{ij} (or more formally, an annulus) defined as

\mathcal{R}_{ij}=\left\{z\in\mathbb{R}^{2}\,\middle|\,\widehat{d}_{ij}-\epsilon_{l}\le\|z-z_{j}\|\le\widehat{d}_{ij}+\epsilon_{u}\right\},\qquad j\in\mathcal{A}_{i},

(12)

where \epsilon_{l}\ge 0, \epsilon_{u}\ge 0, and the control parameter \epsilon_{l}+\epsilon_{u} determines the width of the ring, which can be connected to the distribution of the noise (if available). Projection onto rings (POR) can then be implemented similarly to POCS, except that the disc \mathcal{D}_{ij(k)} in Algorithm 2 is replaced with the ring \mathcal{R}_{ij(k)}. When \epsilon_{l}=\epsilon_{u}=0, POR reduces to a well-known algorithm called Kaczmarz's method [33], also called the algebraic reconstruction technique (ART) in the field of image processing [33, 40], or the boundary projection method in the positioning literature [41], which tries to find a point in the intersection of a number of circles. The ART method may converge to local optima instead of the global optimum [37]. The ring in (12) can be written as the intersection of a convex set \mathcal{D}_{ij}^{\epsilon_{u}} and a concave set \mathcal{C}_{ij}^{\epsilon_{l}}, defined by

\mathcal{D}_{ij}^{\epsilon_{u}}=\left\{z\in\mathbb{R}^{2}\,\middle|\,\|z-z_{j}\|\le\widehat{d}_{ij}+\epsilon_{u}\right\},\qquad j\in\mathcal{A}_{i},

(13)

\mathcal{C}_{ij}^{\epsilon_{l}}=\left\{z\in\mathbb{R}^{2}\,\middle|\,\|z-z_{j}\|\ge\widehat{d}_{ij}-\epsilon_{l}\right\},\qquad j\in\mathcal{A}_{i},

(14)

so that

\mathcal{R}_{ij}=\mathcal{D}_{ij}^{\epsilon_{u}}\cap\mathcal{C}_{ij}^{\epsilon_{l}},\qquad j\in\mathcal{A}_{i}.

(15)

Hence, the ring method changes the convex feasibility problem into a convex-concave feasibility problem [42]. This method performs well for LOS measurements when \mathbb{E}\left\{\epsilon_{ij}\right\}=0.
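As a sketch, the nearest-point map onto the ring (12) used by POR can be computed by radial scaling; this map and the example values below are our illustration, not from the cited references:

```python
import numpy as np

def project_onto_ring(z, center, d_hat, eps_l, eps_u):
    """Nearest point of z in the ring (12). The ring is non-convex (convex-concave),
    so this is a nearest-point map, ambiguous only when z coincides with the center."""
    d = z - center
    norm = np.linalg.norm(d)
    inner = max(d_hat - eps_l, 0.0)        # inner radius d_hat - eps_l
    outer = d_hat + eps_u                  # outer radius d_hat + eps_u
    if norm > outer:                       # outside: pull back to the outer circle
        return center + d / norm * outer
    if 0 < norm < inner:                   # in the hole: push out to the inner circle
        return center + d / norm * inner
    if norm == 0:                          # at the center: any inner-boundary point works
        return center + np.array([inner, 0.0])
    return z.copy()                        # already inside the ring
```

Replacing `project_onto_disc` by this map in the POCS loop yields POR.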

In some situations, the performance of POCS can be improved by exploiting additional information in the measurements [29, 30]. In addition to discs, we can consider other types of convex sets, under the assumption that the target lies in, or close to, the intersection of those convex sets. Note that we still have a convex feasibility problem. We consider two such types of convex sets: the inside of a hyperbola and a halfplane.

**Hybrid Hyperbolic POCS**: By subtracting each pair of distance measurements, we obtain, besides the discs, a number of hyperbolas [29]. The hyperbola defined by subtracting the distances measured at reference nodes *j* and *k* [29] divides the plane into two disjoint sets: one convex and one concave. The target is assumed to lie in the intersection of a number of *discs* and convex *hyperbolic sets*. For instance, for target *i*,

\widehat{z}_{i}\in\mathcal{D}\mathcal{H}_{i}=\bigcap_{j\in\mathcal{A}_{i}}\mathcal{D}_{ij}\ \cap\bigcap_{\{j,k\}\subseteq\mathcal{A}_{i},\,j\ne k}\mathcal{H}_{jk}^{i},

(16)

where \mathcal{H}_{jk}^{i} is the convex hyperbolic set defined by the hyperbola derived from reference nodes *j* and *k* [29]. The projection can therefore be done sequentially onto both discs and hyperbolic sets. Figure 4 shows the intersection of two discs and one hyperbolic set containing a target. Since there is no closed-form solution for the projection onto a hyperbola, the CSP approach is a good replacement for POCS [33]; we can therefore apply a combination of POCS and CSP to this problem. Simulation results in [29] show a significant improvement over the original POCS when discs are combined with hyperbolic sets, especially when the target is located outside the convex hull of the reference nodes.

**Hybrid Halfplane POCS**: Now we consider another hybrid variant of the original POCS. Considering every pair of reference nodes, e.g., the two reference nodes in Figure 5, and drawing the perpendicular bisector of the line joining them, the plane is divided into two halfplanes. By comparing the distances measured from a pair of reference nodes to a target, we can deduce that the target most probably belongs to the halfplane containing the reference node with the *smaller* measured distance. Therefore, a target is more likely to be found in the intersection of a number of discs and halfplanes than in the intersection of the discs alone. Formally, for target *i*, we have

\widehat{z}_{i}\in\mathcal{D}\mathcal{F}_{i}=\bigcap_{j\in\mathcal{A}_{i}}\mathcal{D}_{ij}\ \cap\bigcap_{\{j,k\}\subseteq\mathcal{A}_{i},\,j\ne k}\mathcal{F}_{jk}^{i},

(17)

where \mathcal{F}_{jk}^{i} is a halfplane that contains reference node *j* or *k*, obtained as follows. Let a^{T}x = *b*, with **a**, **x** ∈ ℝ^{2} and *b* ∈ ℝ, be the perpendicular bisector of the line joining reference nodes *j* and *k*, and suppose the halfplanes {x ∈ ℝ^{2} | a^{T}x > *b*} and {x ∈ ℝ^{2} | a^{T}x ≤ *b*} contain reference nodes *j* and *k*, respectively. The halfplane \mathcal{F}_{jk}^{i} containing target *i* is obtained as

\mathcal{F}_{jk}^{i}=\begin{cases}\left\{x\in\mathbb{R}^{2}\,|\,a^{T}x>b\right\}, & \mathsf{if}\ \widehat{d}_{ij}\le\widehat{d}_{ik}\\[4pt] \left\{x\in\mathbb{R}^{2}\,|\,a^{T}x\le b\right\}, & \mathsf{if}\ \widehat{d}_{ij}>\widehat{d}_{ik}.\end{cases}

(18)

There is a closed-form solution for the projection onto a halfplane [33]; hence, POCS can easily be applied to such hybrid convex sets. In [30], POCS for halfplanes was formulated, and we use the algorithm designed there for the projection onto halfplanes in Section 5.
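A sketch of the construction (18) and the closed-form halfplane projection follows; deriving a and b from the bisector, and the example coordinates, are our illustration under the assumptions stated in the text:

```python
import numpy as np

def halfplane_for_target(zj, zk, d_ij, d_ik):
    """Halfplane (18): the side of the perpendicular bisector of z_j, z_k containing
    the reference with the smaller measured distance, returned as {x : a.x <= b}."""
    a = zk - zj                             # normal of the bisector a^T x = b
    b = float(a @ (zj + zk)) / 2.0          # z_j satisfies a^T z_j < b
    if d_ij <= d_ik:
        return a, b                         # keep z_j's side: {a^T x <= b}
    return -a, -b                           # keep z_k's side: {a^T x >= b} = {-a^T x <= -b}

def project_onto_halfplane(z, a, b):
    """Closed-form orthogonal projection onto {x : a^T x <= b}."""
    viol = float(a @ z) - b
    if viol <= 0.0:
        return z.copy()                     # already feasible
    return z - (viol / float(a @ a)) * a    # move along the normal to the boundary

# Example: references at (0,0) and (4,0); the bisector is the line x = 2
zj, zk = np.array([0.0, 0.0]), np.array([4.0, 0.0])
a, b = halfplane_for_target(zj, zk, d_ij=1.0, d_ik=3.0)
p = project_onto_halfplane(np.array([3.0, 1.0]), a, b)   # lands on the bisector
```

These projections can be interleaved with disc projections in the POCS loop of Algorithm 2.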

When there are two different types of convex sets, we can implement hybrid POCS in two ways: either POCS is applied sequentially to the discs and the other convex sets, or POCS is applied to the discs and the other sets separately, and the two estimates are then combined into an initial estimate for another round of updating. The latter technique is studied for a specific positioning problem in [38].

#### 4.1.3 Bounding the feasible set

In the previous sections, we studied projection methods to solve the positioning problem. In this section, we consider a different positioning algorithm based on the convex feasibility problem. As we saw before, the position of an unknown target can be found in the intersection of a number of discs; in general, this intersection may have any convex shape. We still assume positive measurement errors in this section, so that the target definitely lies inside the intersection. This assumption can be fulfilled for distance estimation based on, for instance, time of flight for a reasonable signal-to-noise ratio [43]. In contrast to POCS, which tries to find a point in the feasible set as an estimate, outer approximation (OA) tries to approximate the feasible set by a suitable shape; one point inside this shape is then taken as the estimate. The main problem is how to approximate the intersection accurately. There is work in the literature on approximating the intersection by convex regions such as polytopes, ellipsoids, or discs [19, 44–46].

In this section, we consider a disc approximation of the feasible set. Using simple geometry, we can find all intersection points between the different discs and then find a small disc that passes through them and covers the intersection. Let {z}_{k}^{I}, *k* = 1, ..., *L* be the set of intersection points. Among all intersection points, some are redundant and will be discarded. The points that belong to the intersection are selected as \mathcal{S}_{\mathsf{int}}=\left\{z_{k}^{I}\,|\,z_{k}^{I}\in\mathcal{D}_{i}\right\}. The problem therefore reduces to finding a disc that contains \mathcal{S}_{\mathsf{int}} and covers the intersection. This is a well-known optimization problem treated in, e.g., [20, 45]. We can solve this problem by, for instance, a heuristic in which we first obtain a disc covering \mathcal{S}_{\mathsf{int}} and check whether it covers the whole intersection. If not, we increase the radius of the disc by a small value and check whether the new disc covers the intersection. This procedure continues until a disc covering the intersection is obtained. This disc may not be the minimum enclosing disc, but we are at least guaranteed that it covers the whole intersection. A version of this approach was treated in [19].
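A minimal sketch of the first steps of this heuristic: computing the pairwise circle intersection points, keeping the set S_int of points that lie in every disc, and covering them with a crude disc. The centroid-based covering disc here is our simplification; it is not the minimum enclosing disc, and the grow-and-check refinement described above is omitted:

```python
import numpy as np

def circle_intersections(c0, r0, c1, r1):
    """Intersection points of two circles (standard two-circle geometry)."""
    delta = c1 - c0
    d = np.linalg.norm(delta)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                                    # no intersection points
    a = (r0**2 - r1**2 + d**2) / (2 * d)             # distance from c0 to the chord
    h = np.sqrt(max(r0**2 - a**2, 0.0))
    mid = c0 + a * delta / d
    perp = np.array([-delta[1], delta[0]]) / d       # unit vector along the chord
    return [mid + h * perp, mid - h * perp]

def intersection_points(centers, radii, tol=1e-9):
    """S_int: pairwise circle intersection points lying in every disc D_ij."""
    pts = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            for p in circle_intersections(centers[i], radii[i], centers[j], radii[j]):
                if all(np.linalg.norm(p - c) <= r + tol
                       for c, r in zip(centers, radii)):
                    pts.append(p)
    return pts

def covering_disc(pts):
    """Crude disc covering S_int: centroid center, max-distance radius."""
    c = np.mean(pts, axis=0)
    return c, max(np.linalg.norm(p - c) for p in pts)
```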

Another approach was suggested in [45] that yields the following convex optimization problem:

\begin{array}{ll}\underset{\lambda}{\mathsf{minimize}} & \left\|\sum_{j\in\mathcal{A}_{i}}\lambda_{j}z_{j}\right\|^{2}-\sum_{j\in\mathcal{A}_{i}}\lambda_{j}\left(\|z_{j}\|^{2}-\widehat{d}_{ij}^{\,2}\right)\\[4pt] \mathsf{subject\ to} & \lambda\in S_{\left|\mathcal{A}_{i}\right|},\end{array}

(19)

where S_{p} is the unit simplex, defined as S_{p}=\left\{x\in\mathbb{R}^{p}\,|\,x_{i}\ge 0,\ \sum_{i=1}^{p}x_{i}=1\right\}, and |*χ*| is the cardinality of a set *χ*. The final disc is given by the center \widehat{z}_{c_{i}} and radius \widehat{R}_{i} obtained from the minimizing \lambda of (19):

\widehat{z}_{c_{i}}=\sum_{j\in\mathcal{A}_{i}}\lambda_{j}z_{j},\qquad \widehat{R}_{i}=\sqrt{\left\|\sum_{j\in\mathcal{A}_{i}}\lambda_{j}z_{j}\right\|^{2}-\sum_{j\in\mathcal{A}_{i}}\lambda_{j}\left(\|z_{j}\|^{2}-\widehat{d}_{ij}^{\,2}\right)}.

(20)

Note that when there are two discs \left(\left|\mathcal{A}_{i}\right|=2\right), the intersection can be efficiently approximated by a disc, i.e., the obtained disc is the minimum disc enclosing the intersection. For \left|\mathcal{A}_{i}\right|\ge 3, there is no guarantee that the obtained disc is the minimum disc enclosing the intersection [45].
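For the two-disc case, (19) reduces to a one-dimensional problem over \lambda\in[0,1], which can be solved by a simple grid search; the sketch below (our illustration) then reads off the disc from (20):

```python
import numpy as np

def oa_disc_two(z1, z2, d1, d2, grid=10001):
    """Solve (19) for |A_i| = 2 by a 1-D grid search over the simplex (lam, 1-lam),
    then read the covering disc off (20). A grid search stands in for a convex solver."""
    best = None
    for lam in np.linspace(0.0, 1.0, grid):
        c = lam * z1 + (1 - lam) * z2
        # objective of (19), which equals the squared radius in (20)
        r2 = (c @ c
              - lam * (z1 @ z1 - d1**2)
              - (1 - lam) * (z2 @ z2 - d2**2))
        if best is None or r2 < best[0]:
            best = (r2, c)
    r2, c = best
    return c, np.sqrt(max(r2, 0.0))
```

For discs of radius 3 centered at (0, 0) and (4, 0), this returns the minimum disc enclosing the lens-shaped intersection.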

When the problem is inconsistent, a coarse estimate may be used instead, e.g., the arithmetic mean of the reference node positions:

{\widehat{z}}_{{c}_{i}}=\frac{1}{\left|{\mathcal{A}}_{i}\right|}\sum _{j\in {\mathcal{A}}_{i}}{z}_{j}.

(21)

Finally, we introduce a method to bound the position error of POCS for positive measurement errors, where the target definitely lies inside the intersection. In the best case, the estimation error is zero; in the worst case, the absolute position error equals the largest Euclidean distance between two points in the intersection. Hence, the maximum length of the intersection defines an upper bound on the absolute position error of the POCS estimator. To find this upper bound, for instance for target *i*, we need to solve the following optimization problem:

\begin{array}{ll}\mathsf{maximize} & \|z-z^{\prime}\|\\ \mathsf{subject\ to} & z,z^{\prime}\in\mathcal{D}_{i}.\end{array}

(22)

The optimization problem (22) is non-convex. We leave its solution as an open problem and instead use the OA method described in this section, e.g., when the measurement errors are positive, we can upper bound the position error by {\widehat{R}}_{i} [found from (20)].

### 4.2 Cooperative networks

#### 4.2.1 Cooperative POCS

It is not straightforward to apply POCS in a cooperative network; the next paragraph explains why. Nevertheless, we propose a variation of POCS for cooperative networks. We only consider projection onto convex sets, although other sets, e.g., rings, could be considered as well.

To apply POCS, we must unambiguously define all the discs \mathcal{D}_{ij} for every target *i*. From (4), it is clear that discs centered at reference nodes can be defined without any ambiguity. On the other hand, discs derived from measurements between targets have unknown centers. Consider Figure 6, where we want to involve the measurement between targets one and two in localizing target one. Since there is no prior knowledge about the position of target two, the disc centered at target two cannot be used directly in the positioning of target one. Suppose, by applying POCS to the discs defined by reference nodes 5 and 6 (the red discs), we obtain an initial estimate **ẑ**_{2} for target two. Based on the distance estimate \widehat{d}_{12}, we can then define a new disc centered at **ẑ**_{2} (the dashed disc). This new disc can be combined with the two discs defined by reference nodes 3 and 4 (the black solid discs). Figure 6 illustrates this process for localizing target one; the same procedure is followed for target two.

Algorithm 3 implements cooperative POCS (Coop-POCS). Note that even in the consistent case, the discs may have an empty intersection during the updates. Hence, we use relaxation parameters to handle a possibly empty intersection. Note also that the convergence properties of Algorithm 3 are unknown and need to be explored further in future work.
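The Coop-POCS idea can be sketched as follows, under assumed anchor geometries and link distances. All numeric values are illustrative; as noted above, intersections may become empty during the updates, so this is a heuristic sketch rather than a convergence-guaranteed implementation:

```python
import numpy as np

def project_onto_disc(z, c, r):
    """Orthogonal projection onto a disc, cf. (8)."""
    d = z - c
    n = np.linalg.norm(d)
    return z.copy() if n <= r else c + d / n * r

def coop_pocs(anchor_sets, target_links, z0, outer=30, inner=60, lam=1.0):
    """Sketch of Algorithm 3 (Coop-POCS). anchor_sets[i] = [(z_j, d_ij), ...] are
    anchor discs; target_links[i] = [(m, d_im), ...] are target-target measurements
    whose discs T_im are re-centered at the current estimates. lam < 1 can be used
    to damp oscillations when intersections become empty."""
    est = [np.asarray(z, dtype=float) for z in z0]
    for _ in range(outer):
        for i in range(len(est)):
            # discs from anchors, plus discs T_im centered at current estimates
            discs = list(anchor_sets[i]) + [(est[m], d) for m, d in target_links[i]]
            z = est[i]
            for k in range(inner):
                c, r = discs[k % len(discs)]
                z = z + lam * (project_onto_disc(z, c, r) - z)
            est[i] = z
    return est

# Illustrative example: two targets near (2, 2) and (6, 2), two anchors each, one link
anchor_sets = [
    [(np.array([0.0, 0.0]), 3.0), (np.array([0.0, 4.0]), 3.0)],
    [(np.array([8.0, 0.0]), 3.0), (np.array([8.0, 4.0]), 3.0)],
]
target_links = [[(1, 4.5)], [(0, 4.5)]]
est = coop_pocs(anchor_sets, target_links,
                z0=[np.array([1.0, 2.0]), np.array([7.0, 2.0])])
```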

#### 4.2.2 Cooperatively bounding the feasible sets

In this section, we introduce the application of outer approximation to cooperative networks. Similar to non-cooperative networks, we assume that all measurement errors are positively biased. To apply OA in a cooperative network, we first determine an outer approximation of the feasible set by a simple region that can be exchanged easily between targets. In this paper, we consider a disc approximation of the feasible set. This disc outer approximation is then refined at every iteration, yielding a smaller outer approximation of the feasible set. The details of the disc approximation were explained in Section 4.1.3; we now extend the results to the cooperative network scenario.

**Algorithm 3**
*Coop-POCS*

1: Initialization: \mathcal{T}_{ij}=\mathbb{R}^{2},\ j\in\mathcal{B}_{i},\ i=1,\ldots,M

2: **for** *k* = 0 until convergence or a predefined number *K* **do**

3: **for** *i* = 1,...,*M* **do**

4: find **ẑ**_{i} with POCS such that

\widehat{z}_{i}\in\mathcal{D}_{i}=\bigcap_{j\in\mathcal{A}_{i}}\mathcal{D}_{ij}\ \cap\bigcap_{j\in\mathcal{B}_{i}}\mathcal{T}_{ij}

5: **for** *m* = 1,...,*M* **do**

6: if *m* is such that i\in\mathcal{B}_{m}, then update the set \mathcal{T}_{mi} as

\mathcal{T}_{mi}=\left\{z\in\mathbb{R}^{2}\,|\,\|z-\widehat{z}_{i}\|\le\widehat{d}_{mi}\right\}

7: **end for**

8: **end for**

9: **end for**

To see how this method works, consider Figure 7, where target two helps target one to improve its positioning. In non-cooperative mode, target two can be found in the intersection derived from the two discs centered at **z**_{5} and **z**_{6} (the semi-oval shape). Suppose that we outer-approximate this intersection by a disc (the small dashed circle). In order to help target one outer-approximate its intersection in cooperative mode, this region should be involved in finding the intersection for target one. We can extend every point of this disc by {\widehat{d}}_{12} to obtain a large disc (the big dashed circle) with the same center. It is easily verified that (1) target one is guaranteed to lie in the intersection of the extended disc and the discs around reference nodes 3 and 4; and (2) the outer-approximated intersection for target one is *smaller* than that of the non-cooperative case. Note that if we had extended the exact intersection instead, we would end up with an even smaller intersection for target one. Cooperative OA (Coop-OA) can be implemented as in Algorithm 4.

We can use the intersection obtained by Coop-OA as a constraint for the NLS method (CNLS) to improve the performance of the algorithm in (3). Suppose that for target *i* we obtain a final disc \widehat{\mathcal{D}}_{i} with center **ẑ**_{i} and radius \widehat{R}_{i}. We can then add \|z_{i}-\widehat{z}_{i}\|\le\widehat{R}_{i} as a constraint for the *i*th target in the optimization problem (3). This problem can be solved iteratively, similarly to Algorithm 2, using the constraint obtained from Coop-OA. Algorithm 5 implements Coop-CNLS.

**Algorithm 4**
*Coop-OA*

1: Initialization: {\mathcal{T}}_{ij}={\mathbb{R}}^{2},\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}j\in {\mathcal{B}}_{i},\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}i=1,...,M

2: **for** *k* = 0 until convergence or predefined number *K* **do**

3: **for** *i* = 1,...,*M* **do**

4: find outer approximation (by a disc with center **ẑ**_{i} and radius {\widehat{R}}_{i}) using (20) or other heuristic methods such that

\left(\widehat{z}_{i},\widehat{R}_{i}\right)=\mathsf{OA}\left\{\bigcap_{j\in\mathcal{A}_{i}}\mathcal{D}_{ij}\ \cap\bigcap_{j\in\mathcal{B}_{i}}\mathcal{T}_{ij}\right\}

5: **for** *m* = 1,...,*M* **do**

6: if *m* is such that i\in {\mathcal{B}}_{m}, then update sets {\mathcal{T}}_{mi} as

\mathcal{T}_{mi}=\left\{z\in\mathbb{R}^{2}\,|\,\|z-\widehat{z}_{i}\|\le\widehat{d}_{mi}+\widehat{R}_{i}\right\}

7: **end for**

8: **end for**

9: **end for**

**Algorithm 5**
*Coop-CNLS*

1: Run Algorithm 4 to obtain the final discs \widehat{\mathcal{D}}_{i}=\left\{z\in\mathbb{R}^{2}\,|\,\|z-\widehat{z}_{i}\|\le\widehat{R}_{i}\right\},\ i=1,\ldots,M

2: Initialization: initialize {\widehat{z}}_{i}\in {\widehat{\mathcal{D}}}_{i},\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}i=1,...,M

3: **for** *k* = 0 until convergence or predefined number *K* **do**

4: **for** *i* = 1,...,*M* **do**

5: Obtain the position of *i* th target using non-linear LS as

\widehat{z}_{i}=\arg\min_{z_{i}\in\widehat{\mathcal{D}}_{i}}\ \sum_{j\in\mathcal{B}_{i}}\left(\widehat{d}_{ij}-\|z_{i}-\widehat{z}_{j}\|\right)^{2}+\sum_{j\in\mathcal{A}_{i}}\left(\widehat{d}_{ij}-\|z_{i}-z_{j}\|\right)^{2}

6: **end for**

7: **end for**