We now introduce some elements of localization via spatial sparsity; we then introduce block sparsity, which is the theoretical foundation of our novel approach to localization.
3.1 Localization via spatial sparsity
Recently, a sparsity-based approach to localization has been explored in the literature [9–11, 15], which arises from the observation that localization can be interpreted as the reconstruction of a sparse signal, namely a signal with few non-zero entries. Specifically, we define a vector \(b\in\{0,1\}^{D}\) such that \(b_{i}=1\) if a device is in cell \(i\), and \(b_{i}=0\) otherwise. Assuming that the number of devices is small with respect to the number \(D\) of RPs, \(b\) is sparse, and its recovery (i.e., the recovery of the positions of its non-zero entries) corresponds to solving the localization problem.
The mathematical problem [9, 16] consists in finding \(b\in \mathbb {R}^{D}\) such that \(z=\Psi b+\eta\), where \(\eta \in \mathbb {R}^{TJ}\) is a small error, subject to the constraint that each device occupies exactly one position. A common formulation of this problem is the following:
$$ \min_{x\in\mathbb{R}^{D}}\|z-\Psi x\|_{2}\qquad \text{s. t.}\quad \|x\|_{0}\leq k $$
(5)
where \(\|x\|_{0}=|\{i\in\{1,\ldots,D\} : x_{i}\neq 0\}|\), \(|\cdot|\) denotes cardinality, and \(k\geq 1\) is the number of devices to be localized. The recovered sparse vector has 1 in the positions where the solution to (5) is non-zero and 0 otherwise.
As already mentioned, classical algorithms, such as nearest neighbor [17] and Bayesian [18] methods, can localize only one device, while our methods can handle multiple devices. However, in this work, we focus on the localization of one device in order to compare fairly with the classical algorithms. From now on, we therefore assume \(k=1\).
Assuming that \(J<D\), the formulation of the problem in (5) is similar to compressed sensing [12], which has been widely studied in recent years. The problem can be solved by relaxing the constraint to the \(\ell_{1}\)-norm and using convex optimization [12]. It is well known that \(b\) can be recovered by solving (5) if the sensing matrix fulfills the so-called restricted isometry property (RIP; [19]), which, however, has been proved only for a limited class of matrices (in particular, random matrices [20]). In our case, the matrix \(\Psi\) not only cannot be chosen, but is in fact determined by the real environment, which makes it poorly suited to a study of its mathematical properties.
In this section, we develop three different schemes that recast localization into a block-sparse recovery problem. The schemes we propose are referred to as the crossing approach, the hierarchical approach, and the hierarchical approach with spatial averages. In the next paragraphs, we describe the proposed protocols.
3.2 Crossing approach
In this first approach, we provide the position of the target by estimating the row and the column that identify the occupied cell in the grid. More precisely, we consider the signature map \(\Psi\) exactly as defined in (3), but we organize it in blocks according to the rows of the discretization grid:
$$\Psi = \left(\Psi[1], \Psi[2], \ldots, \Psi[R]\right) $$
The first block \(\Psi[1]\) corresponds to the first row of the grid and thus consists of the first \(C\) columns of \(\Psi\), as defined in (3), where \(C\) is the total number of columns of the grid and \(R\) the total number of rows.
Let b∈{0,1}D be the vector such that
$$b=\left(\begin{array}{c} b[1]\\ b[2]\\ \vdots\\ b[R] \end{array} \right) $$
where
$$b[r]=\left(\begin{array}{c} b_{(r-1)C+1}\\ b_{(r-1)C+2}\\ \vdots\\ b_{rC} \end{array} \right) \in \{0,1\}^{C}, $$
for any \(r\in\{1,\ldots,R\}\). The row is then estimated by solving
$$ \min_{b\in\mathbb{R}^{D}}\|z-\Psi b\|_{2}\qquad \text{s. t.}\quad \|b\|_{2,0}\leq k $$
(6)
where \(z\in \mathbb {R}^{TJ}\) is the vector acquired in the runtime phase and \(\|b\|_{2,0}=|\{r\in\{1,\ldots,R\}:\|b[r]\|_{2}>0\}|\). If \(b^{\star}\) is the solution of (6), then the index \(\hat r\) such that \(b^{\star}[\hat r]\neq 0\) identifies the row occupied by the device.
Then, reorganizing the columns in Ψ, we can write it in blocks according to the columns of the grid, i.e.
$$ \widetilde{\Psi} = \left(\widetilde{\Psi}[1], \widetilde{\Psi}[2], \ldots, \widetilde{\Psi}[C]\right) $$
where each block \(\widetilde {\Psi }[c] = \left [ \widetilde {\Psi }[c](1), \widetilde {\Psi }[c](2), \ldots, \widetilde {\Psi }[c](R) \right ]\) contains the \(c\)th columns of the previous \(R\) blocks, i.e., \(\widetilde {\Psi }[c](r) = \Psi [r](c)\) is the \(c\)th column of the \(r\)th block of \(\Psi\).
The estimation of the column \(\hat c\) is provided by the solution of
$$ \min_{b\in\mathbb{R}^{D}}\|z-\widetilde{\Psi} b\|_{2}\qquad \text{s. t.}\quad \|b\|_{2,0}\leq k $$
(7)
where now the blocks of \(b\) are indexed by the columns of the grid.
Thus, the estimated position is given by \( x = C (\hat {r}-1)+\hat c\), crossing the estimated row \(\hat r\) and column \(\hat c\), as shown in Fig. 3.
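With \(k=1\), the block-sparse problems (6) and (7) amount to selecting the block of columns that best explains \(z\) in the least-squares sense. The crossing approach can then be sketched as follows; the grid sizes, the random stand-in for \(\Psi\), and the 0-based indexing are assumptions for illustration.

```python
# Sketch of the crossing approach for k = 1 on a synthetic grid.
# Each candidate block is scored by its least-squares residual on z;
# the row and column estimates are then crossed to locate the cell.
import numpy as np

def best_block(blocks, z):
    """Index of the block minimizing the residual ||z - B @ coef||_2."""
    errs = []
    for B in blocks:
        coef, *_ = np.linalg.lstsq(B, z, rcond=None)
        errs.append(np.linalg.norm(z - B @ coef))
    return int(np.argmin(errs))

R, C, m = 4, 5, 40                       # grid rows/columns, measurement length T*J
rng = np.random.default_rng(1)
Psi = rng.standard_normal((m, R * C))    # stand-in for the signature map
true_cell = 2 * C + 3                    # device in row 2, column 3 (0-based)
z = Psi[:, true_cell]                    # noiseless runtime measurement

row_blocks = [Psi[:, r * C:(r + 1) * C] for r in range(R)]  # Psi[r]
col_blocks = [Psi[:, c::C] for c in range(C)]               # tilde-Psi[c]
r_hat = best_block(row_blocks, z)
c_hat = best_block(col_blocks, z)
print(r_hat * C + c_hat)  # → 13, i.e., true_cell
```

Note that the 0-based crossed estimate `r_hat * C + c_hat` corresponds to the paper's 1-based formula \(x = C(\hat r - 1) + \hat c\).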
3.3 Hierarchical approach
In this approach, we refine the spatial sparsity methods by grouping the cells and proposing a hierarchical search of the device position.
The idea is as follows. In the training phase, adjacent cells are grouped into N macroblocks, which form a partition of the ground floor. The dictionary Ψ is then rearranged accordingly. For example, let us consider a 6×8 grid and N=12 groups composed of 2×2 blocks of adjacent cells, as shown in Fig. 4.
The matrix Ψ is then rearranged to form a new dictionary, denoted by
$$\Psi' = \left[\Psi'[1], \Psi'[2], \ldots, \Psi'[N] \right]. $$
Specifically, each block \(\Psi'[n]\) collects the columns of \(\Psi\) corresponding to the cells of the \(n\)th macroblock indicated in Fig. 4. We also rearrange \(x\in \mathbb {R}^{D}\) into a concatenation of \(N\) blocks \(x=(x[1]^{\top},x[2]^{\top},\ldots,x[N]^{\top})^{\top}\).
In the runtime phase, first, the macroblock occupied by the device is estimated by solving the optimization problem
$$ \min_{x\in\mathbb{R}^{D}}\|z-\Psi' x\|_{2}\qquad \text{s. t.}\quad \|x\|_{2,0}\leq k $$
(8)
where \(\|x\|_{2,0}=|\{i\in\{1,\ldots,N\}:\|x[i]\|_{2}>0\}|\). Second, if \(x^{\star}\) is the solution to (8) and \(\ell^{\star}\) is the index such that \(x^{\star}[\ell^{\star}]\neq 0\), the cell occupied by the device within the selected macroblock is estimated by solving (5), with the dictionary reduced to the columns of \(\Psi'[\ell^{\star}]\).
This approach can be extended to multiple stages and to multiple devices to be localized.
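The two-stage search can be sketched as follows on a 6×8 grid with \(N=12\) macroblocks of 2×2 cells, as in Fig. 4; the random signature map and the row-wise block layout are assumptions for illustration.

```python
# Two-stage sketch of the hierarchical approach (k = 1) on a 6x8 grid
# with N = 12 macroblocks of 2x2 adjacent cells, laid out row-wise.
import numpy as np

R, C, m = 6, 8, 50                       # grid rows/columns, measurement length
rng = np.random.default_rng(2)
Psi = rng.standard_normal((m, R * C))    # stand-in for the signature map

def block_cells(n):
    """0-based cell indices of the nth 2x2 macroblock."""
    br, bc = divmod(n, C // 2)           # block-row, block-column
    return [(2 * br + i) * C + 2 * bc + j for i in range(2) for j in range(2)]

true_cell = 3 * C + 5                    # device in row 3, column 5 (0-based)
z = Psi[:, true_cell]

def residual(cols):
    """Least-squares residual of z on the span of the given columns."""
    coef, *_ = np.linalg.lstsq(Psi[:, cols], z, rcond=None)
    return np.linalg.norm(z - Psi[:, cols] @ coef)

# Stage 1: macroblock with the smallest residual (block-sparse problem (8)).
N = (R // 2) * (C // 2)
l_star = min(range(N), key=lambda n: residual(block_cells(n)))

# Stage 2: single best column inside the selected macroblock (problem (5)).
cells = block_cells(l_star)
corr = [abs(Psi[:, c] @ z) / np.linalg.norm(Psi[:, c]) for c in cells]
cell_hat = cells[int(np.argmax(corr))]
print(l_star, cell_hat == true_cell)
```

Stage 1 scans \(N\) blocks instead of \(D\) cells, and stage 2 only the cells of one block, which is the computational appeal of the hierarchy.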
3.4 Hierarchical approach with spatial averages
We now propose an extension of the hierarchical approach. It also operates in two consecutive steps, but requires a new signature map, in which each block \(\widehat {\Psi }[n]\) is computed by taking the mean over the cells constituting the block:
$$\widehat{\Psi}[n] = \frac{1}{L_{n}} \sum_{l=1}^{L_{n}} \psi_{\mathcal S[n](l)} \qquad \in \mathbb{R}^{TJ}, $$
where \(L_{n}\) is the number of cells constituting the \(n\)th block and \(\psi _{\mathcal {S}[n](l)}\) is the column of \(\Psi\) indexed by \(\mathcal {S}[n](l)\), the \(l\)th element of \(\mathcal {S}[n]\), the subset of indices of the cells constituting the \(n\)th block.
Referring to Fig. 4, \(\mathcal {S}[1] = \{ 1, 2, 9, 10 \}, \; \mathcal {S}[2] = \{2,3,11,12\}\), and so on, are the subsets of indices representing each block. Thus, for instance, the first block, which is the red-colored one, is computed by taking the average of the first, second, ninth, and tenth columns of \(\Psi\), as defined in (3).
Specifically, the new signature map is \(\widehat{\Psi} = \left[\widehat{\Psi}[1], \widehat{\Psi}[2], \ldots, \widehat{\Psi}[N] \right] \in \mathbb{R}^{TJ\times N}\).
This approach is motivated by the fact that averaging RSS measurements reduces the mean square error (MSE) of the RSS values measured at different points. Moreover, if the distance between the reference points is small compared to the dimensions of the environment, the averaging process is approximately equivalent to taking the RSS at the centroid of the reference points. For instance, if we consider the four adjacent cells of a 2×2 macroblock, the averaging process provides an approximation of the RSS at the center of the macroblock. Thus, this can be interpreted as a new, coarser discretization with larger cells.
Then, we find the occupied block \(\hat \iota \) by solving (5) with the dictionary \(\widehat{\Psi}\), and finally we find the occupied cell by solving (5) again, with the dictionary restricted to the columns of \(\Psi\) corresponding to the selected block, i.e., those indexed by \(\mathcal{S}[\hat\iota]\). In all the previous approaches, the recovery of the occupied position is computed via the Orthogonal Matching Pursuit (OMP) algorithm [21]. We choose OMP instead of a convex optimization routine, such as the interior point or the simplex method [22], since it is computationally more efficient.
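The construction of the averaged dictionary and the coarse-to-fine search can be sketched as follows; the grid sizes, the row-wise 2×2 block layout, and the random stand-in for \(\Psi\) are illustrative assumptions.

```python
# Sketch of the hierarchical approach with spatial averages: each column
# of the coarse dictionary hat-Psi is the mean of the signature-map
# columns of one 2x2 macroblock (synthetic data, k = 1).
import numpy as np

R, C, m = 6, 8, 50                       # grid and measurement sizes
rng = np.random.default_rng(3)
Psi = rng.standard_normal((m, R * C))    # stand-in for the signature map

def S(n):
    """0-based cell indices of the nth 2x2 macroblock (row-wise layout)."""
    br, bc = divmod(n, C // 2)
    return [(2 * br + i) * C + 2 * bc + j for i in range(2) for j in range(2)]

N = (R // 2) * (C // 2)
# hat-Psi[n]: mean of the L_n = 4 signature columns indexed by S[n].
Psi_hat = np.column_stack([Psi[:, S(n)].mean(axis=1) for n in range(N)])

z = Psi[:, 29]                           # runtime measurement (device in cell 29)
# Stage 1: coarse search, problem (5) over hat-Psi (one correlation pass).
iota = int(np.argmax(np.abs(Psi_hat.T @ z) / np.linalg.norm(Psi_hat, axis=0)))
# Stage 2: fine search over the original columns of the selected macroblock.
cells = S(iota)
scores = np.abs(Psi[:, cells].T @ z) / np.linalg.norm(Psi[:, cells], axis=0)
cell_hat = cells[int(np.argmax(scores))]
print("block:", iota, "cell:", cell_hat)
```

Here `S(0)` returns cells \(\{0,1,8,9\}\), the 0-based counterpart of \(\mathcal{S}[1]=\{1,2,9,10\}\) from Fig. 4, so the first averaged column matches the example given above.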