Problem formulation
Consider a directed graph with Na anchor nodes and Nu agent nodes, for a total of N=Na+Nu nodes. In a two-dimensional (2D) scenario, we denote the position of node i by \(\boldsymbol {x}_{i} = [\!x_{i} \ y_{i}]^{\top } \in \mathbb {R}^{2 \times 1}\), where ⊤ denotes transpose. Between two distinct nodes i and j, the binary variable oj→i indicates whether a measurement along direction j→i is observed (oj→i=1) or not (oj→i=0). Since a node does not measure itself, we have oi→i=0. This allows us to define the observation matrix \(\boldsymbol {\mathcal {O}} \in \mathbb {B}^{N \times N}\) with elements \(o_{i,j} \triangleq o_{i \rightarrow j}\) as above. The aforementioned directed graph has connection matrix \(\boldsymbol {\mathcal {O}}\). It is important to remark that, for a directed graph, \(\boldsymbol {\mathcal {O}}\) is not necessarily symmetric; physically, this models possible channel anisotropies, missed detections and, more generally, link failures. Let mj→i be a binary variable indicating whether the link j→i is LoS (mj→i=1) or NLoS (mj→i=0). For physical reasons, mj→i=mi→j. We define the LoS/NLoS matrix \(\mathbf {L} \in \mathbb {B}^{N \times N}\) with elements \(l_{i,j} \triangleq m_{i \rightarrow j}\), and we observe that, since mj→i=mi→j, the matrix is symmetric, i.e., L⊤=L. We stress that this symmetry is preserved regardless of \(\boldsymbol {\mathcal {O}}\), as it derives from physical reasons only. Let Γ(i) be the (open) neighborhood of node i, i.e., the set of all nodes from which node i receives observables (RSS measurements), formally: \(\Gamma (i) \triangleq \{ j \neq i : o_{j \rightarrow i}=1 \}\). We define Γa(i) as the anchor-neighborhood of node i, i.e., the subset of Γ(i) containing only the anchor neighbors of node i, and Γu(i) as the agent-neighborhood of node i, i.e., the subset of Γ(i) containing only the agent neighbors of node i. By construction, Γ(i)=Γa(i)∪Γu(i).
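For illustration, the short sketch below (Python with numpy; the function and variable names are ours, not the paper's) builds Γ(i), Γa(i), and Γu(i) directly from the observation matrix, using the convention \(o_{i,j} \triangleq o_{i \rightarrow j}\), so that node i receives from exactly those nodes j with O[j, i] = 1.

```python
import numpy as np

def neighborhoods(O, is_anchor):
    """Build Gamma(i), Gamma_a(i), Gamma_u(i) from the observation matrix O.

    O[j, i] = 1 means o_{j -> i} = 1, i.e., node i receives an observable
    from node j.  is_anchor[j] flags anchor nodes (illustrative input).
    """
    N = O.shape[0]
    Gamma, Gamma_a, Gamma_u = {}, {}, {}
    for i in range(N):
        Gamma[i] = {j for j in range(N) if j != i and O[j, i] == 1}
        Gamma_a[i] = {j for j in Gamma[i] if is_anchor[j]}
        Gamma_u[i] = {j for j in Gamma[i] if not is_anchor[j]}
    return Gamma, Gamma_a, Gamma_u

# Toy example: nodes 0, 1 are anchors, node 2 is an agent;
# the link 1 -> 2 is missing, illustrating the allowed asymmetry.
O = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 1, 0]])
print(neighborhoods(O, is_anchor=[True, True, False]))
```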
Data model
In the sequel, we will assume that all nodes are stationary and that the observation time window is sufficiently short that correlation in the shadowing terms can be neglected. In practice, such a model simplification allows for a more analytical treatment of the localization problem and has also been used, for example, in [11, 12]. Following the path loss model and the data models present in the literature [17, 21], and denoting by K the number of samples collected on each link over a predetermined time window, we model the received power at time index k for anchor-agent links as
$$ r^{(m)}_{a \rightarrow i}(k) = \left\{ \begin{array}{l} p_{0_{\text{LOS}}} - 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \| + w_{a \rightarrow i}(k),\\ \quad \quad \text{if} \ m_{a \rightarrow i}=1; \\ p_{0_{\text{NLOS}}} - 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \| + v_{a \rightarrow i}(k),\\ \quad \quad \text{if} \ m_{a \rightarrow i}=0, \end{array}\right. $$
(1)
while, for agent-agent links,
$$ r^{(m)}_{u \rightarrow i}(k) = \left\{ \begin{array}{l} p_{0_{\text{LOS}}} - 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \| + w_{u \rightarrow i}(k),\\ \quad \quad \text{if} \ m_{u \rightarrow i}=1; \\ p_{0_{\text{NLOS}}} - 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \| + v_{u \rightarrow i}(k),\\ \quad \quad \text{if} \ m_{u \rightarrow i}=0, \end{array}\right. $$
(2)
where:
- i, u, with u∈Γu(i), are the indexes for the unknown nodes;
- a∈Γa(i) is an index for anchors;
- k=1,…,K is the discrete time index, with K samples for each link;
- \(p_{0_{\text {LOS/NLOS}}}\) is the reference power (in dBm) for the LoS or NLoS case;
- αLOS/NLOS is the path loss exponent for the LoS or NLoS case;
- xa is the known position of anchor a;
- xu is the unknown position of agent u (similarly for xi);
- Γa(i), Γu(i) are the anchor- and agent-neighborhoods of node i, respectively;
- the noise terms wa→i(k), va→i(k), wu→i(k), and vu→i(k) are modeled as serially independent and identically distributed (i.i.d.), zero-mean, Gaussian random variables, independent from each other (see below), with variances \(\text {Var} [\! w_{a \rightarrow i}(k) ] = \text {Var} [\! w_{u \rightarrow i}(k) ] = \sigma ^{2}_{\text {LOS}}\), \(\text {Var}[\!v_{a \rightarrow i}(k)] = \text {Var}[\!v_{u \rightarrow i}(k)] = \sigma ^{2}_{\text {NLOS}}\), and \(\sigma ^{2}_{\text {NLOS}} > \sigma ^{2}_{\text {LOS}} > 0\).
More precisely, letting δi,j be Kronecker’s delta, the independence assumption is formalized by the following equations
$$ \begin{aligned} &\mathbb{E}\left[w_{j_{1} \rightarrow i_{1}}(k_{1}) w_{j_{2} \rightarrow i_{2}}(k_{2}) \right] = \sigma^{2}_{\text{LOS}} \delta_{k_{1}, k_{2}} \delta_{i_{1}, i_{2}} \delta_{j_{1}, j_{2}} \\ &\mathbb{E}\left[v_{j_{1} \rightarrow i_{1}}(k_{1}) v_{j_{2} \rightarrow i_{2}}(k_{2}) \right] \ = \sigma^{2}_{\text{NLOS}} \delta_{k_{1}, k_{2}} \delta_{i_{1}, i_{2}} \delta_{j_{1}, j_{2}} \\ &\mathbb{E}\left[w_{j_{1} \rightarrow i_{1}}(k_{1}) v_{j_{2} \rightarrow i_{2}}(k_{2}) \right] = 0 \end{aligned} $$
(3)
for any k1, k2, i1, i2, j1∈Γ(i1), j2∈Γ(i2). The previous equations imply that two different links are always independent, regardless of the considered time instants. In this paper, we call this property link independence. If only one link is considered, i.e., j2=j1 and i2=i1, independence still holds across distinct time instants, implying that the sequence \(\left \{ w_{j \rightarrow i} \right \}_{k} \triangleq \left \{ w_{j \rightarrow i}(1), w_{j \rightarrow i}(2), \dots \right \}\) is white. The same reasoning applies to the (similarly defined) sequence {vj→i}k. As a matter of notation, we denote the unknown positions (indexing the agents before the anchors) by \(\boldsymbol {x} \triangleq \left [\boldsymbol {x}^{\top }_{1} \ \cdots \ \boldsymbol {x}^{\top }_{N_{u}}\right ]^{\top } \in \mathbb {R}^{2 N_{u} \times 1}\) and we define η as the collection of all channel parameters, i.e., \(\boldsymbol {\eta } \triangleq \left [\boldsymbol {\eta }^{\top }_{\text {LOS}} \ \boldsymbol {\eta }^{\top }_{\text {NLOS}}\right ]^{\top }\), with \(\boldsymbol {\eta }_{\text {LOS}} \triangleq \left [p_{0_{\text {LOS}}} \ \alpha _{\text {LOS}} \ \sigma ^{2}_{\text {LOS}}\right ]^{\top } \in \mathbb {R}^{3 \times 1}\) and \(\boldsymbol {\eta }_{\text {NLOS}} \triangleq \left [p_{0_{\text {NLOS}}} \ \alpha _{\text {NLOS}} \ \sigma ^{2}_{\text {NLOS}}\right ]^{\top } \in \mathbb {R}^{3 \times 1}\).
It is important to stress that, in a more realistic scenario, channel parameters may vary from link to link and also across time. However, such a generalization would produce an under-determined system of equations, thus sacrificing uniqueness of the solution and, more generally, analytical tractability of the problem. For the purposes of this paper, the observation model above is sufficiently general to solve the localization task while retaining analytical tractability.
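To make the data model concrete, here is a minimal simulation sketch of (1)–(2) for a single link (Python/numpy); the channel parameter values are purely illustrative placeholders, not values used in the paper.

```python
import numpy as np

def rss_samples(x_tx, x_rx, is_los, K, eta_los, eta_nlos, rng):
    """Draw K RSS samples on the link tx -> rx according to (1)-(2).

    eta_los / eta_nlos = (p0, alpha, sigma2) for the LoS / NLoS case.
    """
    p0, alpha, sigma2 = eta_los if is_los else eta_nlos
    d = np.linalg.norm(np.asarray(x_tx) - np.asarray(x_rx))
    mean = p0 - 10.0 * alpha * np.log10(d)          # log-distance path loss
    return mean + np.sqrt(sigma2) * rng.standard_normal(K)

rng = np.random.default_rng(0)
# Hypothetical channel parameters (dBm, unitless, dB^2) -- placeholders only.
eta_los, eta_nlos = (-40.0, 2.0, 4.0), (-50.0, 3.5, 16.0)
r = rss_samples([0.0, 0.0], [10.0, 5.0], is_los=False, K=8,
                eta_los=eta_los, eta_nlos=eta_nlos, rng=rng)
print(r)
```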
Time-averaged RSS measures
Motivated by a result given in Appendix A, we consider the time-averaged RSS measures, defined as
$$ \bar{r}_{j \rightarrow i} \triangleq \frac{1}{K} \sum\limits_{k=1}^{K} r^{(m)}_{j \rightarrow i}(k), \quad j \in \Gamma(i) $$
(4)
as our new observables. While it would have been preferable, from a theoretical standpoint, to work with the original data, several considerations lead us to prefer time-averaged data, most notably: (1) comparison with other algorithms present in the literature, where the data model assumes only one sample per link, i.e., K=1, which is simply a special case in this paper; (2) reduced computational complexity in the subsequent algorithms; (3) if the RSS measures on a given link need to be communicated between two nodes, the communication cost is notably reduced, since only one scalar, instead of K samples, needs to be exchanged; (4) formal simplicity of the subsequent equations.
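A minimal sketch of the averaging step (4): each link's K samples are collapsed into a single scalar observable (Python/numpy; the sample values are placeholders).

```python
import numpy as np

# K RSS samples on a single link j -> i (placeholder values, e.g. from rss_samples above).
r_link = np.array([-78.1, -80.4, -79.0, -77.6])
r_bar = r_link.mean()   # time-averaged observable (4): one scalar per link
print(r_bar)
```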
Moreover, from Appendix A, it follows that, assuming known L, the ML estimators of the unknown positions based on the original data and on the time-averaged data are actually the same. To see this, it suffices to choose \(\boldsymbol {\theta } = (\boldsymbol {x}, p_{0_{\text {LOS}}}, p_{0_{\text {NLOS}}}, \alpha _{\text {LOS}}, \alpha _{\text {NLOS}})\) and
$$ s_{j \rightarrow i}(\boldsymbol{\theta}) =\left\{ \begin{array}{l} p_{0_{\text{LOS}}} - 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \|, \\ \quad \quad \text{if} \ m_{j \rightarrow i}=1 ; \\ p_{0_{\text{NLOS}}} - 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \|, \\ \quad \quad \text{if} \ m_{j \rightarrow i}=0 \end{array}\right. $$
(5)
for j∈Γ(i) and splitting the additive noise term as required. For a fixed link, only one of the two cases (LoS or NLoS) holds, so applying (34) of Appendix A yields
$$ \begin{aligned} \arg {\underset{\boldsymbol{\theta}}{\max}}\ p\left(r^{(m)}_{j \rightarrow i}(1),\dots, r^{(m)}_{j \rightarrow i}(K) ; \boldsymbol{\theta}, \sigma_{j \rightarrow i}^{2}\right) \\ = \arg {\underset{\boldsymbol{\theta}}{\max}}\ p\left(\bar{r}_{j \rightarrow i}; \boldsymbol{\theta}, \sigma^{2}_{j \rightarrow i}\right) \end{aligned} $$
(6)
where \(\sigma _{j \rightarrow i}^{2}\) is either \(\sigma ^{2}_{\text {LOS}}\) or \(\sigma ^{2}_{\text {NLOS}}\) and the general result follows from link independence.
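To spell out the step behind (6) (a standard sufficiency argument for the Gaussian mean, stated here for convenience rather than as a reproduction of Appendix A), note that the θ-dependent part of the single-link log-likelihood depends on the K samples only through their average, since

$$ \sum_{k=1}^{K} \left(r^{(m)}_{j \rightarrow i}(k) - s_{j \rightarrow i}(\boldsymbol{\theta})\right)^{2} = \sum_{k=1}^{K} \left(r^{(m)}_{j \rightarrow i}(k) - \bar{r}_{j \rightarrow i}\right)^{2} + K \left(\bar{r}_{j \rightarrow i} - s_{j \rightarrow i}(\boldsymbol{\theta})\right)^{2}, $$

so maximizing over θ amounts to minimizing \(K\left(\bar{r}_{j \rightarrow i} - s_{j \rightarrow i}(\boldsymbol{\theta})\right)^{2}\), i.e., to maximizing the likelihood of the single Gaussian observation \(\bar{r}_{j \rightarrow i} \sim \mathcal{N}\left(s_{j \rightarrow i}(\boldsymbol{\theta}), \sigma^{2}_{j \rightarrow i}/K\right)\), which is exactly (6).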
We define \(\mathcal {R}_{i}\) as the set of all RSS measures that node i receives from anchor neighbors, i.e., \(\mathcal {R}_{i} \triangleq \left \{ r^{(m)}_{a \rightarrow i}(1), \dots, r^{(m)}_{a \rightarrow i}(K) : a \in \Gamma _{a}(i) \right \}\), \(\mathcal {Z}_{i}\) as the set of all RSS measures that node i receives from agent neighbors, i.e., \(\mathcal {Z}_{i} \triangleq \left \{ r^{(m)}_{j \rightarrow i}(1), \dots, r^{(m)}_{j \rightarrow i}(K) : j \in \Gamma _{u}(i) \right \}\), and \(\mathcal {Y}_{i}\) as the set of all RSS measures locally available to node i, i.e., \(\mathcal {Y}_{i} \triangleq \mathcal {R}_{i} \cup \mathcal {Z}_{i}\). Analogously, for time-averaged measures, we define \(\bar {\mathcal {R}}_{i} \triangleq \left \{\bar {r}_{a \rightarrow i} : a \in \Gamma _{a}(i)\right \}\), \(\bar {\mathcal {Z}}_{i} \triangleq \left \{\bar {r}_{j \rightarrow i} : j \in \Gamma _{u}(i)\right \}\), and \(\bar {\mathcal {Y}}_{i} = \bar {\mathcal {R}}_{i} \cup \bar {\mathcal {Z}}_{i}\). Finally, we define
$$ \Upsilon \triangleq \bigcup\limits_{i=1}^{N_{u}} \mathcal{Y}_{i} $$
(7)
which represents the information available to the whole network.
Single-agent robust maximum likelihood (ML)
We first consider the single-agent case, which we will later use as a building block in the multi-agent case. The key idea is that instead of separately treating the LoS and NLoS cases, e.g., by hypothesis testing, we resort to a two-component Gaussian mixture model for the time-averaged RSS measures. More precisely, we assume that the probability density function (pdf), p(·), of the time-averaged RSS measures, for anchor-agent links, is given by
$$ {}{\begin{aligned} & p(\bar{r}_{a \rightarrow i}) \\ &\quad = \frac{\lambda_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{LOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{LOS}}} \left(\bar{r}_{a \rightarrow i} - p_{0_{\text{LOS}}} + 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \|\right)^{2}} \\ &\quad + \frac{1-\lambda_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{NLOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{NLOS}}} \left(\bar{r}_{a \rightarrow i} - p_{0_{\text{NLOS}}} + 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \|\right)^{2}} \\ \end{aligned}} $$
(8)
and, for agent-agent links,
$$ {}{\begin{aligned} &p(\bar{r}_{u \rightarrow i}) \\ &\quad = \frac{\zeta_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{LOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{LOS}}} \left(\bar{r}_{u \rightarrow i} - p_{0_{\text{LOS}}} + 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \|\right)^{2}} \\ &\quad + \frac{1- \zeta_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{NLOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{NLOS}}} \left(\bar{r}_{u \rightarrow i} - p_{0_{\text{NLOS}}} + 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \|\right)^{2} }\\ \end{aligned}} $$
(9)
where:
- λi∈(0,1) is the mixing coefficient for anchor-agent links of node i;
- ζi∈(0,1) is the mixing coefficient for agent-agent links of node i.
Empirically, we can interpret λi as the fraction of anchor-agent links in LoS (for node i) and ζi as the fraction of agent-agent links in LoS (for node i). As in [21], the Markov chain induced by our model is regular and time-homogeneous. From this, it follows that the Markov chain will converge to a two-component Gaussian mixture, giving a theoretical justification for the proposed approach.
Assume that there is a single agent, say node i, with a minimum of three anchors in its neighborhood (|Γa(i)|≥3), in a mixed LoS/NLoS scenario. Our goal is to obtain the maximum likelihood estimator (MLE) of the position of node i. Let \(\boldsymbol {\bar {r}}_{i} = \left [\bar {r}_{1 \rightarrow i} \ \cdots \ \bar {r}_{| \Gamma _{a}(i)| \rightarrow i}\right ]^{\top } \in \mathbb {R}^{| \Gamma _{a}(i)| \times 1}\) be the collection of all the time-averaged RSS measures available to node i. Using the previous assumptions and the independence between the links, the joint likelihood function \(p(\boldsymbol {\bar {r}}_{i} ; \boldsymbol {\theta })\) is given by
$$ p(\boldsymbol{\bar{r}}_{i} ; \boldsymbol{\theta}) = \prod_{a \in \Gamma_{a}(i)} p(\bar{r}_{a \rightarrow i}; \boldsymbol{\theta}) $$
(10)
where θ=(xi,λi,η). Thus, denoting by \(L(\boldsymbol {\theta }; \boldsymbol {\bar {r}}_{i})\) the log-likelihood, we have
$$ L(\boldsymbol{\theta}; \boldsymbol{\bar{r}}_{i}) = \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i}) $$
(11)
The MLE of θ is given by
$$ \hat{\boldsymbol{\theta}}_{ML} = \arg {\underset{\boldsymbol{\theta}}{\max}}\ L(\boldsymbol{\theta} ; \boldsymbol{\bar{r}}_{i}) $$
(12)
where the maximization is subject to several constraints: λi∈(0,1), αLOS > 0, αNLOS>0, \(\sigma ^{2}_{\text {LOS}} > 0\), and \(\sigma ^{2}_{\text {NLOS}} > 0\). In general, the previous maximization admits no closed-form solution, so we must resort to numerical procedures.
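As a concrete (non-authoritative) sketch of such a numerical procedure, the snippet below builds the mixture log-likelihood (11) from (8) and maximizes it under the stated constraints with a bound-constrained solver (Python with numpy/scipy). The anchor positions, measurements, and initial guess are placeholders, and a practical implementation would typically add multiple restarts.

```python
import numpy as np
from scipy.optimize import minimize

def gauss(r, mean, var_over_K):
    """Gaussian pdf of the time-averaged measure, whose variance is sigma^2 / K."""
    return np.exp(-0.5 * (r - mean) ** 2 / var_over_K) / np.sqrt(2 * np.pi * var_over_K)

def neg_log_lik(theta, r_bar, X_a, K):
    """Negative of (11): theta = [x, y, lam, p0_LOS, a_LOS, s2_LOS, p0_NLOS, a_NLOS, s2_NLOS]."""
    x_i = theta[:2]
    lam, p0L, aL, s2L, p0N, aN, s2N = theta[2:]
    d = np.linalg.norm(X_a - x_i, axis=1)              # distances to anchors
    mL = p0L - 10 * aL * np.log10(d)                   # LoS mean, as in (8)
    mN = p0N - 10 * aN * np.log10(d)                   # NLoS mean
    mix = lam * gauss(r_bar, mL, s2L / K) + (1 - lam) * gauss(r_bar, mN, s2N / K)
    return -np.sum(np.log(mix + 1e-300))               # guard against log(0)

# Hypothetical anchor positions and time-averaged RSS measures (placeholders).
X_a = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
r_bar = np.array([-62.3, -75.1, -70.8, -81.5])
K = 10

theta0 = np.array([10.0, 10.0, 0.5, -40.0, 2.0, 4.0, -50.0, 3.5, 16.0])
bounds = [(None, None)] * 2 + [(1e-3, 1 - 1e-3),
                               (None, None), (1e-3, None), (1e-3, None),
                               (None, None), (1e-3, None), (1e-3, None)]
res = minimize(neg_log_lik, theta0, args=(r_bar, X_a, K),
               method="L-BFGS-B", bounds=bounds)
print("estimated position:", res.x[:2])
```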
Multi-agent robust ML-based scheme
In principle, our goal would be to obtain an ML estimate of all the Nu unknown positions, denoted by x. Let \(\boldsymbol {\lambda } \triangleq \left [\lambda _{1} \ \cdots \ \lambda _{N_{u}}\right ]^{\top }\) and \(\boldsymbol {\zeta } \triangleq \left [\zeta _{1} \ \cdots \ \zeta _{N_{u}}\right ]^{\top }\) be the collections of the mixing coefficients. Defining θ=(x,λ,ζ,η), the joint ML estimator
$$ \hat{\boldsymbol{\theta}}_{ML} = \arg {\underset{\boldsymbol{\theta}}{\max}}\ p(\Upsilon; \boldsymbol{\theta}) $$
(13)
is, in general, computationally infeasible and inherently centralized. In order to obtain a practical algorithm, we now resort to a sub-optimal but computationally feasible and distributed approach. The intuition is as follows. Assume, for a moment, that a specific node i knows η, λ, ζ and also all the true positions of its neighbors (which we denote by \(\mathcal {X}_{i}\)). Then, the joint ML estimation problem is notably reduced; in fact,
$$ \hat{\boldsymbol{x}}_{i_{ML}} = \arg {\underset{\boldsymbol{x}_{i}}{\max}}\ p(\Upsilon; \boldsymbol{x}_{i}) $$
(14)
To obtain a distributed algorithm, we now make the sub-optimal approximation of discarding non-local information, thus resorting to
$$ \hat{\boldsymbol{x}}_{i} = \arg {\underset{\boldsymbol{x}_{i}}{\max}}\ p\left(\mathcal{\bar{Y}}_{i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}\right) $$
(15)
where we made explicit the functional dependence on all the other parameters (which, for now, are assumed known). Due to the i.i.d. hypothesis, the “local” likelihood function has the form
$$\begin{array}{*{20}l} p\left(\mathcal{\bar{Y}}_{i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}\right) \!\!=&\!\! \prod_{a \in \Gamma_{a}(i)} \!\!p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta})\\ &\times \!\!\!\! \prod_{j \in \Gamma_{u}(i)} p(\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \zeta_{i}, \boldsymbol{\eta}) \end{array} $$
(16)
where the marginal likelihoods are Gaussian mixtures and we emphasize the (formal and conceptual) separation between anchor-agent links and agent-agent links. By taking the natural logarithm, we have
$$\begin{array}{*{20}l} \ln p\left(\mathcal{\bar{Y}}_{i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}\right) =& \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta}) \\ &+ \sum_{j \in \Gamma_{u}(i)} \ln p (\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \zeta_{i}, \boldsymbol{\eta}) \end{array} $$
(17)
The maximization problem in (15) then reads
$$ {}{\begin{aligned} \hat{\boldsymbol{x}}_{i} = \arg {\underset{\boldsymbol{x}_{i}}{\max}} & \left\{ \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta}) \right.\\ &\quad \left. + \sum_{j \in \Gamma_{u}(i)} \ln p (\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \zeta_{i}, \boldsymbol{\eta}) \right\}. \end{aligned}} $$
(18)
We can now relax the initial assumptions: instead of assuming known neighbor positions \(\mathcal {X}_{i}\), we substitute them with their estimates, \(\hat {\mathcal {X}}_{i}\). Moreover, since the robust ML-based self-calibration can be performed without knowing the channel parameters η, we also maximize over them. Lastly, we maximize with respect to the mixing coefficients λi, ζi. Thus, our final approach is
$$ {\begin{aligned} \left(\hat{\boldsymbol{x}}_{i}, \hat{\lambda}_{i}, \hat{\zeta}_{i}, \hat{\boldsymbol{\eta}}\right) = \arg {\underset{\boldsymbol{x}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}}{\max}} & \left\{ \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta}) \right. \\ & \quad + \left. \sum_{j \in \hat{\Gamma}_{u}(i)}\! \ln p \left(\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \hat{\mathcal{X}}_{i}, \zeta_{i}, \boldsymbol{\eta}\right) \right\} \end{aligned}} $$
(19)
where \(\hat {\Gamma }_{u}(i)\) is the set of all agent neighbors of node i for which estimated positions exist. We can iteratively construct (and update) the set \(\hat {\Gamma }_{u}(i)\), in order to obtain a fully distributed algorithm, as summarized in Algorithm 1.
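Algorithm 1 is not reproduced in this section; the skeleton below only sketches, under our own assumptions, how \(\hat {\Gamma }_{u}(i)\) can be grown iteratively. Here solve_local_rdml is a placeholder for the local maximization (19) (e.g., built along the lines of the single-agent sketch above), and the "at least three localized neighbors" criterion for attempting a local solve is an illustrative choice, not the paper's exact rule.

```python
def run_rdml(agents, anchors_of, agent_nbrs_of, solve_local_rdml, max_iters=20):
    """Iteratively localize agents using only local information (illustrative skeleton).

    anchors_of[i]    : set of anchor neighbors of agent i (Gamma_a(i))
    agent_nbrs_of[i] : set of agent neighbors of agent i (Gamma_u(i))
    solve_local_rdml : placeholder callable implementing the local maximization (19);
                       it receives i and the current estimates of i's already-localized
                       agent neighbors (hat Gamma_u(i)).
    """
    estimates = {}
    # Initialization: agents with enough anchor neighbors self-localize.
    for i in agents:
        if len(anchors_of[i]) >= 3:
            estimates[i] = solve_local_rdml(i, {})
    # Iterative phase: information spreads from the already-localized agents.
    for _ in range(max_iters):
        updated = False
        for i in agents:
            hat_gamma_u = {j: estimates[j] for j in agent_nbrs_of[i] if j in estimates}
            # Illustrative criterion (assumption): enough anchor + localized-agent neighbors.
            if i not in estimates and len(anchors_of[i]) + len(hat_gamma_u) >= 3:
                estimates[i] = solve_local_rdml(i, hat_gamma_u)
                updated = True
        if not updated:
            break
    return estimates
```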
A few remarks are now in order. First, this algorithm imposes some restrictions on the network topology, since the information spreads starting from the agents that were able to self-localize during initialization; in practice, this requires the network to be sufficiently connected. Second, convergence of the algorithm is actually a matter of compatibility: if the network is sufficiently connected (compatible), convergence is guaranteed. Given a directed graph, compatibility can be tested a priori and necessary and sufficient conditions can be found (see Section 4). Third, unlike many algorithms present in the literature, symmetrical links are not necessary, nor do we resort to symmetrization (as in NBP): this algorithm naturally takes into account the possibly asymmetric links of directed graphs.
Distributed maximum likelihood (DML)
As a natural competitor of the proposed RDML algorithm, we derive here the distributed maximum likelihood (DML) algorithm, which assumes that all links are of the same type. As its name suggests, this is the non-robust version of the previously derived RDML. As usual, we start with the single-agent case as a building block for the multi-agent case. Using the assumption that all links are the same and the i.i.d. hypothesis, the joint pdf of the time-averaged RSS measures, received by agent i, is given by
$$\begin{array}{*{20}l} p\!\left(\boldsymbol{\bar{r}}_{i} ; \boldsymbol{x}_{i}, p_{0}, \alpha, \sigma^{2}\right) &\,=\, \frac{1}{\left(2 \pi \sigma^{2}\right)^{|\Gamma_{a}(i)|/2}}\\ &\quad\!\!\times\! e^{-\frac{1}{2 \sigma^{2}} \sum_{a \in \Gamma_{a}(i)} (\bar{r}_{a \rightarrow i} - p_{0} + 10 \alpha \log_{10} \!\| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \|)^{2} } \end{array} $$
(20)
We can now proceed by estimating, with the ML criterion, first p0 as a function of the remaining parameters, then α as a function of xi, and finally xi. We have
$$ \begin{aligned} &\hat{p}_{0}(\alpha, \boldsymbol{x}_{i}) = \\ &\arg {\underset{p_{0}}{\min}}\ \sum_{a \in \Gamma_{a}(i)} \left(\bar{r}_{a \rightarrow i} - p_{0} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \| \right)^{2}. \\ \end{aligned} $$
(21)
Defining \(s_{a,i} \triangleq 10 \log _{10} \| \boldsymbol {x_{i}} - \boldsymbol {x}_{a} \|\) as the log-distance, \(\boldsymbol {s}_{i} \triangleq \left [s_{1,i} \ s_{2,i} \ \cdots \ s_{|\Gamma _{a}(i)|,i}\right ]^{\top } \in \mathbb {R}^{|\Gamma _{a}(i)| \times 1}\) the column-vector collecting them and \(\boldsymbol {1}_{n} =\ [\!1 \cdots 1]^{\top } \in \mathbb {R}^{n \times 1}\) an all-ones vector of dimension n, the previous equation can be written as
$$ \hat{p}_{0}(\alpha, \boldsymbol{x}_{i}) = \arg {\underset{p_{0}}{\min}} \| \boldsymbol{\bar{r}}_{i} + \alpha \boldsymbol{s}_{i} - p_{0} \boldsymbol{1}_{|\Gamma_{a}(i)|} \|^{2} $$
(22)
which is a least-squares (LS) problem, whose solution is
$$ \hat{p}_{0}(\alpha, \boldsymbol{x}_{i}) = \frac{1}{| \Gamma_{a}(i) |} \sum_{a \in \Gamma_{a}(i)} (\bar{r}_{a \rightarrow i} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \|) $$
(23)
By using this expression, the problem of estimating α as a function of xi is
$$ \hat{\alpha}(\boldsymbol{x}_{i}) = \arg {\underset{\alpha}{\min}}\ \left\| \boldsymbol{P}^{{}^{\boldsymbol{\perp}}}_{\boldsymbol{1}_{|\Gamma_{a}(i)|}}(\boldsymbol{\bar{r}}_{i} + \alpha \boldsymbol{s}_{i}) \right\|^{2} $$
(24)
where, given a full-rank matrix \(\boldsymbol {A} \in \mathbb {R}^{m \times n}\), with m≥n, \(\boldsymbol {P}^{{}^{\perp }}_{\boldsymbol {A}}\) is the orthogonal projection matrix onto the orthogonal complement of the space spanned by the columns of A. It can be computed via \(\boldsymbol {P}^{{}^{\perp }}_{\boldsymbol {A}} = \boldsymbol {I}_{m} - \boldsymbol {P_{A}}\), where PA=A(A⊤A)−1A⊤ is an orthogonal projection matrix and Im is the identity matrix of order m. The solution to problem (24) is given by
$$ \hat{\alpha}(\boldsymbol{x}_{i}) = - \boldsymbol{\left(\tilde{s}_{i}^{\top} \tilde{s}_{i} \right)^{-1} \tilde{s}_{i}^{\top} \tilde{r}_{i} } $$
(25)
where \(\boldsymbol {\tilde {r}}_{i} = \boldsymbol {P}^{{}^{\boldsymbol {\perp }}}_{\boldsymbol {1}_{|\Gamma _{a}(i)|}} \boldsymbol {\bar {r}}_{i}\) and \(\boldsymbol {\tilde {s}}_{i} = \boldsymbol {P}^{\boldsymbol {\perp }}_{\boldsymbol {1}_{|\Gamma _{a}(i)|}} \boldsymbol {s}_{i}\). By using the previous expression, we can finally write
$$ \hat{\boldsymbol{x}_{i}} = \arg {\underset{\boldsymbol{x}_{i}}{\min}}\ \left\| \boldsymbol{P}^{{}^{\boldsymbol{\perp}}}_{\boldsymbol{\tilde{s}_{i}}} \boldsymbol{\tilde{r}}_{i} \right\|^{2} $$
(26)
which, in general, does not admit a closed-form solution, but can be solved numerically. After obtaining \(\hat {\boldsymbol {x}}_{i}\), node i can estimate p0 and α using (23) and (25).
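The closed-form steps (23) and (25), combined with the numerical search in (26), translate into the following sketch (Python with numpy/scipy; the anchors, measurements, and initial guess are placeholders, and the simplex search is just one possible choice of solver).

```python
import numpy as np
from scipy.optimize import minimize

def dml_single_agent(r_bar, X_a, x0):
    """Single-agent DML: profile out p0 via (23) and alpha via (25), then minimize (26)."""
    n = len(r_bar)
    P_perp_1 = np.eye(n) - np.ones((n, n)) / n               # projector orthogonal to 1_n
    r_tilde = P_perp_1 @ r_bar

    def cost(x_i):
        s = 10 * np.log10(np.linalg.norm(X_a - x_i, axis=1))  # log-distances s_{a,i}
        s_tilde = P_perp_1 @ s
        alpha = -(s_tilde @ r_tilde) / (s_tilde @ s_tilde)    # (25)
        return np.sum((r_tilde + alpha * s_tilde) ** 2)       # (26): ||P_perp_{s_tilde} r_tilde||^2

    res = minimize(cost, x0, method="Nelder-Mead")
    x_hat = res.x
    s = 10 * np.log10(np.linalg.norm(X_a - x_hat, axis=1))
    s_tilde = P_perp_1 @ s
    alpha_hat = -(s_tilde @ r_tilde) / (s_tilde @ s_tilde)    # (25) at x_hat
    p0_hat = np.mean(r_bar + alpha_hat * s)                   # (23) at x_hat
    return x_hat, p0_hat, alpha_hat

# Hypothetical anchors and time-averaged RSS measures (placeholders).
X_a = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
r_bar = np.array([-62.3, -68.4, -66.1, -72.0])
print(dml_single_agent(r_bar, X_a, x0=np.array([10.0, 10.0])))
```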
The multi-agent case follows almost identical reasoning to that of the RDML. Approximating the true (centralized) MLE by discarding non-local information and assuming that an initial estimate of p0 and α is already available, it is possible to arrive at
$$ {\begin{aligned} \hat{\boldsymbol{x}}_{i} = \arg {\underset{\boldsymbol{x}_{i}}{\min}} & \left\{ \sum_{a \in \Gamma_{a}(i)}\left(\bar{r}_{a \rightarrow i} - p_{0} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \| \right)^{2} \right.\\ & \quad \left. + \sum_{j \in \hat{\Gamma}_{u}(i)} \left(\bar{r}_{j \rightarrow i} - p_{0} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{j} \| \right)^{2} \right\} \end{aligned}} $$
(27)
where (again) an initialization phase is required and the set of estimated agent-neighbors \(\hat {\Gamma }_{u}(i)\) is iteratively updated. The key difference with respect to the RDML is that, due to the assumption that all links are of the same type, the estimates of p0 and α are broadcast and a common consensus is reached by averaging. This increases the communication overhead but lowers the computational complexity, operating a trade-off. The DML algorithm is summarized in Algorithm 2.
Similar remarks as for the RDML can be made for the DML. Again, the network topology cannot be completely arbitrary, as the information must spread throughout the network starting from the agents that self-localized, implying that the graph must be sufficiently connected. Necessary and sufficient conditions to answer the compatibility question are the same as for the RDML. Second, the (strong) hypothesis behind the DML derivation (i.e., all links of the same type) allows for a more analytical derivation, up to position estimation, which is a nonlinear least-squares problem. However, this hypothesis is also its weakness since, as will be shown later, it is not a good choice for mixed LoS/NLoS scenarios.
Centralized MLE with known nuisance parameters (C-MLE)
The centralized MLE of x with known nuisance parameters, i.e., assuming known L and η, is chosen here as a benchmark for both RDML and DML. In the following, this algorithm will be denoted by C-MLE. Its derivation is simple (see Appendix B) and results in
$$ {\begin{aligned} & \hat{\boldsymbol{x}}_{ML} \\ &= \arg {\underset{\boldsymbol{x}}{\min}}\ \sum\limits_{i=1}^{N_{u}} \sum\limits_{j \in \Gamma(i)} \frac{1}{\sigma^{2}_{j \rightarrow i}} \left(\bar{r}_{j \rightarrow i} - p_{0_{j\rightarrow i}} \,+\, 10 \alpha_{j \rightarrow i} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \| \right)^{2} \end{aligned}} $$
(28)
where \(\left (p_{0_{j \rightarrow i}}, \alpha _{j \rightarrow i}, \sigma ^{2}_{j \rightarrow i}\right)\) are either LoS or NLoS parameters, depending on the considered link. It is important to observe that, if all links are of the same type, the dependence on \(\sigma ^{2}_{j \rightarrow i}\) in (28) disappears. From standard ML theory [45], the C-MLE is asymptotically (K→+∞) optimal. The optimization problem (28) is computationally challenging, as it requires a minimization in a 2Nu-dimensional space, but it is still feasible for small values of Nu.
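For completeness, here is a sketch of the C-MLE cost in (28) (Python/numpy; the data structures are our own illustrative choices). The known LoS/NLoS matrix L selects the per-link channel parameters from η, and the 2Nu-dimensional minimization of this cost with a generic numerical optimizer yields the benchmark estimate.

```python
import numpy as np

def cmle_cost(x_flat, r_bar, links, anchors, L, eta_los, eta_nlos):
    """Weighted cost of (28), to be minimized over the stacked agent positions.

    x_flat  : stacked agent positions [x_1, y_1, ..., x_Nu, y_Nu] (agents indexed before anchors)
    r_bar   : dict {(j, i): time-averaged RSS on link j -> i}
    links   : iterable of (j, i) pairs with i an agent and j in Gamma(i)
    anchors : dict {node index: known position} for anchor nodes
    L       : dict {(j, i): 1 for LoS, 0 for NLoS}  (known nuisance parameter)
    eta_*   : (p0, alpha, sigma2) for the LoS / NLoS case
    """
    X = x_flat.reshape(-1, 2)
    pos = lambda n: anchors[n] if n in anchors else X[n]
    cost = 0.0
    for (j, i) in links:
        p0, alpha, sigma2 = eta_los if L[(j, i)] == 1 else eta_nlos
        d = np.linalg.norm(np.asarray(pos(j)) - np.asarray(pos(i)))
        cost += (r_bar[(j, i)] - p0 + 10 * alpha * np.log10(d)) ** 2 / sigma2
    return cost
```

Minimizing cmle_cost over x_flat (e.g., with scipy.optimize.minimize and a reasonable initial guess) implements the 2Nu-dimensional search discussed above.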