For energy-limited WSNs, a target tracking method with low computational cost and low complexity is desirable. Moreover, communication between nodes, or between nodes and the target, can consume a large share of the energy budget. Accordingly, the MPMC scheme is proposed, which requires no clustering and little communication.

### Problem statement

Figure 1 shows an overview of target tracking in our WSN. Sensors (normal sensors, denoted \(S\), and anchor sensors, denoted \(A\)) are randomly located in a \(20\,{\text{m}} \times 20\,{\text{m}}\) indoor area according to a two-dimensional Poisson distribution with density \(\lambda_{0}\), in which the ratio of anchors is \(\gamma\). A target, whose path is shown as a solid thick line, moves along a maneuvering trajectory in the WSN deployment area and is detected by sensor nodes (normal sensors or anchors) that adopt a suitable tracking strategy. The target, equipped with a recognizable sensor, can also communicate with the other sensors in this WSN. Because clustering algorithms consume considerable energy in a WSN, the proposed method tracks the target without clustering; instead, it establishes 1-hop and 2-hop neighbor lists using the HTC scheme [44]. In each sampling interval, every node, including the target, executes the HTC algorithm to obtain its 1-hop and 2-hop neighbors. Each HTC request packet contains the node ID, location and current time. On receiving such a request packet, nodes (including the target) obtain distance information for their 1-hop and 2-hop neighbors. Consecutive position information extracted from the request packets of a specific target yields the target's velocity. The location and velocity information then drive the iterative MPMC filter that produces the target track. For example, the target obtains its 1-hop and 2-hop neighbors by performing the HTC scheme in iteration \(k\), and then derives its locations relative to those neighbors [45]. The MPMC filter derives the target track using \(N_{k}\) proposals over these locations, together with the measurements or observations collected by the target in iteration \(k\).
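As an illustration, the 1-hop and 2-hop neighbor lists that HTC produces can be sketched as follows. This is a simplified stand-in that models only connectivity under an assumed circular radio range; the function name `neighbor_lists` and the range parameter are ours, not part of HTC [44].

```python
import math

def neighbor_lists(nodes, radius):
    """Build 1-hop and 2-hop neighbor lists from node positions.

    Simplified stand-in for the HTC exchange: 1-hop neighbors are
    nodes within radio range, and 2-hop neighbors are neighbors of
    neighbors that are not already 1-hop. The HTC request packets
    carry ID, location and timestamp; only connectivity is modeled.
    """
    ids = list(nodes)
    one_hop = {i: set() for i in ids}
    for i in ids:
        for j in ids:
            if i != j and math.dist(nodes[i], nodes[j]) <= radius:
                one_hop[i].add(j)
    two_hop = {}
    for i in ids:
        reach = set()
        for j in one_hop[i]:
            reach |= one_hop[j]
        two_hop[i] = reach - one_hop[i] - {i}
    return one_hop, two_hop

# Example: three nodes on a line, 10 m apart, radius 12 m.
# A hears S1 directly and reaches S2 only through S1.
nodes = {"A": (0.0, 0.0), "S1": (10.0, 0.0), "S2": (20.0, 0.0)}
nl1, nl2 = neighbor_lists(nodes, 12.0)
```

In the actual scheme, each node would rebuild these lists from the HTC request packets in every sampling interval.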

The tracking environment for our indoor sensor network is illustrated in Fig. 2. A wireless sensor consists of a transceiver, a data processing center, a data storage module and a battery, in addition to a sensing unit such as a PIR sensor (to detect target information; equipped in normal sensors and anchors) or a Global Positioning System (GPS) receiver (to identify the node's own location; equipped in anchor nodes). The target, i.e., the tracked object, carries a PIR sensor and moves like a normal sensing node. All nodes can move randomly in our test room. Sensor nodes (normal nodes and anchors) collect the tracking information of the object and transmit it to anchor nodes. Simultaneously, the target's location and track can also be obtained from the information of its 1-hop and 2-hop neighbors by executing the MC-MPMC scheme. In a realistic tracking scenario, the target undergoes occasional maneuvers while being tracked: it may move along a linear trajectory (a straight line, or locally straight segments) or in a maneuvering mode (a random trajectory) at any time interval.

### Maneuvering model

Let the target state at time \(k\) be \(x_{k}\) in the state space \(X\), and let the measurement (observation) be \(m_{k}\) in the measurement space \(M\), under random finite set (RFS) models [37]. The target state set and measurement set are then represented by \(X_{k} \in F(X)\) and \(M_{k} \in F(M)\), respectively. The system model and measurement model can be described as follows [25, 31]:

$$x_{k} = F_{k} x_{k - 1} + G_{k} q_{k - 1}$$

(1)

$$m_{k} = H_{k} x_{k} + v_{k}$$

(2)

where \(x_{k}\) is the state vector at time \(k\), denoted \(x_{k} = (x_{k} \, \dot{x}_{k} \, y_{k} \, \dot{y}_{k} \, \omega_{k} )^{T}\), \(F_{k}\) is the target transition matrix, \(G_{k}\) is the control input matrix, and \(q_{k - 1}\) is the process noise, which follows a Gaussian distribution with zero mean and covariance \(Q_{k}\) defined as \(E[q_{k} q_{k}^{T} ]\). In the state vector, \(x_{k}\) and \(y_{k}\) are the target position components, \(\dot{x}_{k}\) and \(\dot{y}_{k}\) are the velocity components, and \(\omega_{k}\) is the turn rate. \(m_{k}\) is the measurement at time \(k\), denoted \(m_{k} = (x_{k} \, y_{k} \, \omega_{k} )^{T}\), which comprises the measured position and turn rate of the target; \(H_{k}\) is the measurement matrix, and \(v_{k}\) is the measurement noise, which also follows a Gaussian distribution with zero mean and covariance \(V_{k}\) defined as \(E[v_{k} v_{k}^{T} ]\). The process noise and measurement noise are mutually uncorrelated.

$$F_{k} = \left[ {\begin{array}{*{20}l} 1 \hfill & {\frac{{\sin (\omega_{k - 1} T)}}{{\omega_{k - 1} }}} \hfill & 0 \hfill & {\frac{{\cos (\omega_{k - 1} T) - 1}}{{\omega_{k - 1} }}} \hfill & 0 \hfill \\ 0 \hfill & {\cos (\omega_{k - 1} T)} \hfill & 0 \hfill & { - \sin (\omega_{k - 1} T)} \hfill & 0 \hfill \\ 0 \hfill & {\frac{{1 - \cos (\omega_{k - 1} T)}}{{\omega_{k - 1} }}} \hfill & 1 \hfill & {\frac{{\sin (\omega_{k - 1} T)}}{{\omega_{k - 1} }}} \hfill & 0 \hfill \\ 0 \hfill & {\sin (\omega_{k - 1} T) \, } \hfill & 0 \hfill & {\cos (\omega_{k - 1} T)} \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 1 \hfill \\ \end{array} } \right]$$

$$G_{k} = \left[ {\begin{array}{*{20}l} {\frac{{T^{2} }}{2}} \hfill & 0 \hfill & 0 \hfill \\ T \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {\frac{{T^{2} }}{2}} \hfill & 0 \hfill \\ 0 \hfill & T \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & T \hfill \\ \end{array} } \right]$$

$$Q_{k} = {\text{cov}} (G_{k} q_{k - 1} ) = \left[ {\begin{array}{*{20}l} {\frac{1}{4}T^{4} \sigma_{x,k - 1}^{2} } \hfill & {\frac{1}{2}T^{3} \sigma_{x,k - 1}^{2} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ {\frac{1}{2}T^{3} \sigma_{x,k - 1}^{2} } \hfill & {T^{2} \sigma_{x,k - 1}^{2} } \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {\frac{1}{4}T^{4} \sigma_{y,k - 1}^{2} } \hfill & {\frac{1}{2}T^{3} \sigma_{y,k - 1}^{2} } \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {\frac{1}{2}T^{3} \sigma_{y,k - 1}^{2} } \hfill & {T^{2} \sigma_{y,k - 1}^{2} } \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & {T^{2} \sigma_{\omega ,k - 1}^{2} } \hfill \\ \end{array} } \right]$$

The measurement matrix \(H_{k}\) extracts the target position in the \(x\) and \(y\) directions together with the turn rate, and can be presented as

$$H_{k} = \left[ {\begin{array}{*{20}l} 1 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 1 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 1 \hfill \\ \end{array} } \right]$$
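The coordinated-turn model of Eqs. 1 and 2 can be exercised with a small sketch. The function names `ct_transition` and `step` are illustrative, and the noise handling simply mirrors \(G_k q_{k-1}\) with independent zero-mean Gaussian components:

```python
import math
import random

def ct_transition(T, omega):
    """Coordinated-turn transition matrix F_k of Eq. (1)
    for the state (x, xdot, y, ydot, omega)."""
    s, c = math.sin(omega * T), math.cos(omega * T)
    return [
        [1, s / omega,       0, (c - 1) / omega, 0],
        [0, c,               0, -s,              0],
        [0, (1 - c) / omega, 1, s / omega,       0],
        [0, s,               0, c,               0],
        [0, 0,               0, 0,               1],
    ]

def step(x, T, sigmas=(0.0, 0.0, 0.0)):
    """One step x_k = F_k x_{k-1} + G_k q_{k-1}; sigmas are the
    standard deviations of the x, y and turn-rate process noise."""
    F = ct_transition(T, x[4])
    q = [random.gauss(0, s) for s in sigmas]
    G_q = [T * T / 2 * q[0], T * q[0], T * T / 2 * q[1], T * q[1], T * q[2]]
    return [sum(F[i][j] * x[j] for j in range(5)) + G_q[i] for i in range(5)]

# Noise-free check: with omega = pi/2 rad/s and T = 1 s, a unit-speed
# target initially moving along x turns a quarter circle.
x1 = step([0.0, 1.0, 0.0, 0.0, math.pi / 2], 1.0)
```

After one step the velocity has rotated from \((1, 0)\) to \((0, 1)\), as expected for a quarter turn.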

### MC-MPMC-based tracking method

We now propose an improved prediction method for target tracking based on the MC-MPMC method. In general, both the measurements and the states of the target are inherently uncertain. According to Bayesian estimation theory, the posterior probability density of the target state, \(\pi_{k\left| k \right.} (X_{k} \left| {M_{1:k} } \right.)\), can be obtained by the Bayes recursion [43]

$$\pi_{{k\left| {k - 1} \right.}} (X_{k} \left| {M_{1:k - 1} } \right.) = \int {f_{{k\left| {k - 1} \right.}} } (X_{k} \left| {X_{k - 1} ,M_{1:k - 1} } \right.)\pi_{{k - 1\left| {k - 1} \right.}} (X_{k - 1} \left| {M_{1:k - 1} } \right.){\text{d}}X_{k - 1}$$

(3)

$$\pi_{{k\left| k \right.}} (X_{k} \left| {M_{1:k} } \right.) = \frac{{\ell_{k} (M_{k} \left| {X_{k} } \right.)\pi_{{k\left| {k - 1} \right.}} (X_{k} \left| {M_{1:k - 1} } \right.)}}{{\int {\ell_{k} } (M_{k} \left| {X_{k} } \right.)\pi_{{k\left| {k - 1} \right.}} (X_{k} \left| {M_{1:k - 1} } \right.){\text{d}}X_{k} }}$$

(4)

in which \(f_{{k\left| {k - 1} \right.}} (X_{k} \left| {X_{k - 1} ,M_{1:k - 1} } \right.)\) is the state transition density conditioned on past observations, \(\pi_{{k\left| {k - 1} \right.}} (X_{k} \left| {M_{1:k - 1} } \right.)\) denotes the predicted target probability density, and \(\ell_{k} (M_{k} \left| {X_{k} } \right.)\) denotes the target likelihood function. In many practical applications the integrals in the Bayes recursion of Eqs. 3 and 4 cannot be obtained in closed form, or their computational complexity grows sharply as the number of samples increases. The approximations are therefore constructed using importance sampling methods.

$$\pi_{{k\left| {k - 1} \right.}} ({\mathbf{x}}) = g_{k} ({\mathbf{x}}) + \int {f_{{k\left| {k - 1} \right.}} ({\mathbf{x}}\left| {\mathbf{y}} \right.)\pi_{k - 1} ({\mathbf{y}}){\text{d}}{\mathbf{y}}} + \int {\beta_{{k\left| {k - 1} \right.}} ({\mathbf{x}}\left| {\mathbf{y}} \right.)\pi_{k - 1} ({\mathbf{y}}){\text{d}}{\mathbf{y}}}$$

(5)

$$\pi_{k} ({\mathbf{x}}) = [1 - p_{{{\text{D}},k}} ({\mathbf{x}})]\pi_{{k\left| {k - 1} \right.}} ({\mathbf{x}}) + \sum\limits_{{m \in M_{k} }} {\frac{{p_{{{\text{D}},k}} ({\mathbf{x}})\ell_{k} (m\left| {\mathbf{x}} \right.)\pi_{{k\left| {k - 1} \right.}} ({\mathbf{x}})}}{{\kappa_{k} (m) + \int {p_{{{\text{D}},k}} ({\mathbf{y}})\ell_{k} (m\left| {\mathbf{y}} \right.)\pi_{{k\left| {k - 1} \right.}} ({\mathbf{y}}){\text{d}}{\mathbf{y}}} }}}$$

(6)

in which \(p_{{{\text{D}},k}} ({\mathbf{x}})\) is the target detection probability, \(\kappa_{k} (m)\) is the clutter intensity, \(g_{k} ({\mathbf{x}})\) is the distribution of newly generated state samples from the proposals, and \(\beta_{{k\left| {k - 1} \right.}} ({\mathbf{x}}\left| {\mathbf{y}} \right.)\) is the spawning distribution of state samples based on measurements.

In this work, an improved target tracking method based on the MC-MPMC scheme combined with the HTC algorithm is presented in detail to approximate the distribution of target states. The PMC filter [43] is a well-known iterative adaptive importance sampling technique, briefly described as follows.

At each iteration \(k\), it generates a set of samples \(\left\{ {{\mathbf{x}}_{k}^{i} } \right\}_{i = 1}^{N}\), where \(i\) denotes the sample index and \({\mathbf{x}}\) denotes the variable of interest, here the distribution of the object state in our tracking system. That is, sampling or resampling iterates once in each time interval. To obtain the samples, the PMC algorithm uses a collection of proposal densities \(\left\{ {q_{k}^{i} ({\mathbf{x}})} \right\}_{i = 1}^{N}\), with each sample drawn from a different proposal, \({\mathbf{x}}_{k}^{i} \sim q_{k}^{i} ({\mathbf{x}})\). Each sample is then assigned an importance weight \(w_{k}^{i} = \frac{{\pi ({\mathbf{x}}_{k}^{i} )}}{{q_{k}^{i} ({\mathbf{x}}_{k}^{i} \left| {\mu_{k}^{i} } \right.,{\mathbf{C}}_{i} )}}\), in which \(\pi ({\mathbf{x}}_{k}^{i} )\) is the target posterior density, \(\mu_{k}^{i}\) are the adaptive parameters of the proposal densities in sampling period \(k\), and \({\mathbf{C}}_{i}\) are the static parameters. After normalizing these importance weights, the unbiased convergent estimator can be obtained,

$$\overline{w}_{k} = \frac{1}{TN}\sum\limits_{i = 1}^{N} {\sum\limits_{k = 1}^{T} {\frac{{\pi (x_{k}^{i} )}}{{q_{k}^{i} (x_{k}^{i} )}}} }$$

(7)

in which \(N\) and \(T\) denote the maximum number of samples and the maximum sampling period, respectively. The PMC method then performs multinomial resampling by drawing \(N\) independent samples from the discrete random measure, which is denoted as

$$w_{k}^{i} = \frac{{\pi (x_{k}^{i} )}}{{\overline{w}_{k - 1} q_{k}^{i} (x_{k}^{i} )}}$$

(8)

$$\hat{\pi }_{k}^{N} ({\mathbf{x}}) = \sum\limits_{i = 1}^{N} {\overline{w}_{k}^{i} } \delta ({\mathbf{x}} - {\mathbf{x}}_{k}^{i} )$$

(9)

in which \(\delta ( \cdot )\) is the Dirac delta function. The method proceeds iteratively, building a global importance sampling estimator with different proposals at every iteration. This partially avoids the sample degeneracy phenomenon; samples with negligible or relatively low weights are not necessarily removed outright.
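The PMC loop of Eqs. 7–9 can be sketched on a toy one-dimensional problem, with a Gaussian standing in for the target posterior; the function names, proposal widths and iteration counts here are illustrative choices, not part of the scheme in [43]:

```python
import math
import random

def target_pdf(x):
    """Unnormalized toy target density pi(x), standing in
    for the posterior over the target state (mean 3)."""
    return math.exp(-0.5 * (x - 3.0) ** 2)

def gauss_pdf(x, mu, sig):
    """Gaussian proposal density q(x | mu, sig)."""
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

def pmc(n_iter=20, n_prop=50, sig=1.0, seed=0):
    """Plain PMC: each sample comes from its own proposal, weights
    are pi/q (Eq. 8), and multinomial resampling (Eq. 9) moves the
    proposal locations mu_k^i for the next iteration."""
    rng = random.Random(seed)
    mus = [rng.uniform(-10, 10) for _ in range(n_prop)]
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sig) for mu in mus]                 # x_k^i ~ q_k^i
        ws = [target_pdf(x) / gauss_pdf(x, mu, sig) for x, mu in zip(xs, mus)]
        tot = sum(ws)
        ws = [w / tot for w in ws]                              # normalize
        mus = rng.choices(xs, weights=ws, k=n_prop)             # resample
    return sum(w * x for w, x in zip(ws, xs))                   # posterior mean

est = pmc()   # should land near the target mean of 3
```

Even with proposals initially scattered over \([-10, 10]\), the resampling step concentrates them near the target mode within a few iterations.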

The proposal at iteration \(k\), \(q_{k}^{i} ({\mathbf{x}})\), can be formed from the behavior of the previous \(q_{k - 1}^{i} ({\mathbf{x}})\), from the previous samples \({\mathbf{x}}_{k - 1}^{1} ,...,{\mathbf{x}}_{k - 1}^{N}\), or from both jointly. To be clear, in this location-based tracking strategy the previous or current locations of the target relative to its 1-hop and 2-hop neighbors serve as the samples [46]. A series of improved PMC-based resampling schemes exist for predicting the variable distribution, such as DM-PMC, GR-PMC and LR-PMC [43]. DM-PMC uses deterministic mixture weighting to avoid sample degeneracy. GR-PMC draws multiple samples from each proposal (mixand), instead of only one as in PMC. LR-PMC performs the resampling independently for each proposal. These improved PMC schemes increase the diversity of the PMC population.

Inspired by the time-varying target birth distribution in [41], an iteration-varying proposal generation scheme is proposed in this work. Moreover, a measurement (observation) compensation scheme is introduced to correct for missed detections. The resulting MC-MPMC scheme estimates the target states as follows.

**Step 1** Tracking initialization

In the initial period \(k = 1\), all nodes in the network, including the target and the sensing nodes (normal nodes and anchors), establish their 1-hop and 2-hop neighbor lists (NLs) and update them at every iteration using the HTC scheme. Suppose the sample generation function of the target state distribution follows a Poisson distribution, that is, \(x_{1}^{i} \sim P(\lambda_{1} )\). Then select the initial parameters defining the \(N\) proposals:

The adaptive parameters \(a_{1} = \left\{ {\lambda_{1}^{1} ,\lambda_{1}^{2} ,...,\lambda_{1}^{N} } \right\}\).

The corresponding set of static parameters, \(\left\{ {{\mathbf{C}}_{i} } \right\}_{i = 1}^{N}\).

The adaptive parameters in \(a_{1}\) are the means \(E[\lambda_{1}^{i} ]\), namely \(\lambda_{1}^{1} ,\lambda_{1}^{2} ,...,\lambda_{1}^{N}\), and the parameters \(\left\{ {{\mathbf{C}}_{i} } \right\}_{i = 1}^{N}\) are the covariances \({\text{cov}}[a_{1}^{i} ]\), namely \(\lambda_{1} ,\lambda_{1} ,...,\lambda_{1}\).

The initial sample candidates generated by the initial proposals are denoted \(SL_{{{\text{1-hop}}}}\) and \(SL_{{{\text{2-hop}}}}\), which refer to the locations between the target and its possible 1-hop and 2-hop neighbors, respectively, as described in [38]. How the target obtains the location information used to establish the tracking state is detailed in [39].

$$p({\mathbf{x}}_{1} ) = \frac{1}{{SL_{{{\text{1-hop}}}} + SL_{{{\text{2-hop}}}} }}$$

(10)

$$\pi ({\mathbf{x}}_{1} ) = \ell (m_{1} \left| {\mathbf{x}} \right._{1} )p({\mathbf{x}}_{1} )$$

(11)

**Step 2** Mixture weighting and predicting

After the initialization step, samples of the target states \(x_{k}^{i} \, (i = 1,2,...,N_{k} )\) are generated from the proposals \(q_{k}^{i} ({\mathbf{x}})\) in iteration \(k\) according to the behavior of the previous \(q_{k - 1}^{i} ({\mathbf{x}})\) and the location measurements \(m_{k}\) at period \(k\). In the PMC scheme, each sample is drawn from a different proposal. An improved DM-PMC is proposed in [37], which takes the weighted mixture of all proposals, averaged around the kernel iteration point, as the kernel density approximation of the target pdf. This simple mixture mitigates sample degeneracy to a certain extent, but DM-PMC forms the weighted mixture only through the mean, without actually taking sample diversity into account. Moreover, its minimum mean integrated square error (MISE) estimator is obtained in the limit of infinite sample number \(N\) [37], whereas \(N\) is finite in our work.

A prominent feature of the PMC method is that both the number and the distribution of the proposals in iteration \(k\) may differ from those in other iterations without jeopardizing validity [36]. In the M-PMC scheme [47], both the weights and the component parameters of a mixture importance sampling density are adapted jointly, yielding an adaptive mixture PMC scheme: the adaptation of the importance sampling density is expressed through the weights combined with the component density parameters. Inspired by DM-PMC and M-PMC, the MC-MPMC scheme combines mixture weighting with the sample generation and measurement processes: proposals at period \(k\) are generated from the previous samples at period \((k - 1)\), sensed by the 1-hop and 2-hop neighbors of the target, and from the current measurements \(m_{k}\). The global current samples are then generated from these proposals with mixture weighting. In the MC-MPMC scheme, samples in iteration \(k\) are drawn from the proposals \(q_{k}^{i} ({\mathbf{x}})\), which can be presented as

$${\mathbf{x}}_{k}^{j} \sim \left\{ {P\left( {\lambda_{j}^{k - 1} ,{\mathbf{C}}_{j}^{k - 1} } \right)} \right\} \cup \left\{ {m_{k}^{j} (x)} \right\}\quad j = 1,2,...,n_{k}$$

(12)

$$p({\mathbf{x}}_{k}^{j} ) = \sum\limits_{j = 1}^{{n_{k} }} {p_{j} } P\left( {\lambda_{j}^{k} ,{\mathbf{C}}_{j}^{k} } \right)m_{k}^{j} (x)$$

(13)

The proposal is a Poisson process with parameter \(\lambda_{j} \, (j = 1,2,...,n_{k} )\), selected with probability \(p_{1} ,p_{2} ,...,p_{{n_{k} }}\), respectively, where the \(p_{j}\) sum to 1, that is, \(\sum\nolimits_{j = 1}^{{n_{k} }} {p_{j} } = 1\). In other words, samples are drawn from different proposals in a mixture pattern, rather than each sample being drawn from one specific proposal as in PMC. Both the distribution and the number of proposals vary from one iteration to another. The weights of the mixture PMC can then be derived as
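A minimal sketch of this mixture sampling step, assuming scalar Poisson proposals with rates \(\lambda_j\) selected with probabilities \(p_j\); the function name and the use of Knuth's Poisson sampling method are our choices for illustration:

```python
import math
import random

def sample_mixture(rates, probs, n, seed=1):
    """Draw n samples from the mixture proposal of Eq. (13):
    component j (a Poisson with rate lambda_j, standing in for
    P(lambda_j^k, C_j^k)) is chosen with probability p_j, so samples
    share proposals rather than each sample owning one proposal."""
    assert abs(sum(probs) - 1.0) < 1e-9   # Eq. (13): sum of p_j is 1
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        lam = rng.choices(rates, weights=probs, k=1)[0]
        # Knuth's method for one Poisson draw with rate lam.
        limit, count, prod = math.exp(-lam), 0, rng.random()
        while prod > limit:
            count += 1
            prod *= rng.random()
        out.append(count)
    return out

samples = sample_mixture([2.0, 8.0], [0.5, 0.5], 2000)
mean = sum(samples) / len(samples)   # close to the mixture mean of 5
```

The empirical mean approaches \(\sum_j p_j \lambda_j\), here \(0.5 \cdot 2 + 0.5 \cdot 8 = 5\).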

$$w_{k}^{i} ({\mathbf{x}}) = \frac{{f({\mathbf{x}}_{k}^{i} \left| {(P_{j} )_{j = 1}^{{n_{k} }} ,({\mathbf{C}}_{j} )_{j = 1}^{{n_{k} }} } \right.)m_{k}^{i} ({\mathbf{x}})}}{{\prod\nolimits_{j = 1}^{{n_{k} }} {\varphi (({\mathbf{x}}_{k}^{i} )^{j} \left| {(P_{j} )_{j = 1}^{{n_{k - 1} }} ,({\mathbf{C}}_{j} )_{j = 1}^{{n_{k - 1} }} } \right.)m_{k}^{i} ({\mathbf{x}})} }}$$

(14)

in which \(\varphi (q_{k} \left| {\lambda_{k} ,{\mathbf{C}}_{k} } \right.)\) is the Poisson density with mean \(\lambda_{k}\) and covariance \({\mathbf{C}}_{k}\), conditioned on \(q_{k}\), for iteration \(k\), and \(f({\mathbf{x}}_{k}^{i} \left| {\lambda_{k} ,{\mathbf{C}}_{k} } \right.)\) is the density of the samples sensed by the 1-hop and 2-hop neighbors of the target according to the proposals for iteration \(k\). The probability distribution of the target states can thus be derived from the set \(\{ {\mathbf{x}}_{k - 1}^{i} ,w_{k - 1}^{i} \}_{i = 1}^{{n_{k} }}\), which is presented as

$$\pi_{k - 1} ({\mathbf{x}}) = \sum\limits_{i = 1}^{{n_{k} }} {w_{k - 1}^{i} ({\mathbf{x}})} \delta ({\mathbf{x}} - {\mathbf{x}}_{k - 1}^{i} )$$

(15)

And then, the probability distribution of target states can be predicted as

$$\pi_{kk - 1}^{{}} ({\mathbf{x}}) = \sum\limits_{i = 1}^{{n_{k} }} {\tilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}})} \delta ({\mathbf{x}} - {\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} )$$

(16)

in which \({\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i}\) is obtained according to the proposal distributions \(q_{k}^{i} ({\mathbf{x}})\) and measurements \(m_{k}\) of locations in the period \(k\),

$$\tilde{x}_{{k\left| {k - 1} \right.}}^{i} \sim q_{k} (x\left| {x_{k - 1}^{i} ,m_{k} } \right.)\quad i = 1,2,...,n_{k}$$

(17)

$$\widetilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}}) = \frac{{f_{{k\left| {k - 1} \right.}} ({\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} \left| {{\mathbf{x}}_{k - 1}^{i} } \right.)w_{k - 1}^{i} }}{{q_{k} ({\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} \left| {{\mathbf{x}}_{k - 1}^{i} } \right.,m_{k} )}}$$

(18)

**Step 3** Update and resampling

According to the updating strategy in Eq. 9 of the PMC scheme, the target state sample distributions can be updated based on Eqs. 15 and 16.

$$\pi_{k} ({\mathbf{x}}) = \sum\limits_{i = 1}^{{n_{k} }} {\tilde{w}_{k}^{i} ({\mathbf{x}})} \delta ({\mathbf{x}} - {\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} )$$

(19)

in which the weights \(\tilde{w}_{k}^{i}\) are updated from the predicted weights of Eq. 18. Eq. 20 contains two weight updating parts: the first term, corresponding to the missed-detection probability, is denoted \(w_{k,c}^{i}\), and the second, corresponding to the detection probability, is denoted \(w_{k,t}^{i}\).

$$\tilde{w}_{k}^{i} ({\mathbf{x}}) = [1 - p_{{{\text{D}},k}} ({\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} )]\tilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}}) + \sum\limits_{{m \in M_{k} }} {\frac{{p_{{{\text{D}},k}} ({\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} )\ell_{k} (m\left| {{\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} } \right.)}}{{\kappa_{k} (m) + \pi_{k} (m)}}} \tilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}})$$

(20)

$$\ell_{k} (m_{k} \left| {\mathbf{x}} \right._{k} ) = \prod\limits_{{S_{1} \in S_{{ {\text{1-hop}}}} }} {p(S_{1} \left| {x_{k}^{i} } \right.} )\prod\limits_{{S_{2} \in S_{{{\text{2-hop}}}} }} {p(S_{2} \left| {x_{k}^{i} } \right.)}$$

(21)

$$\pi_{k} (m) = \sum\limits_{i = 1}^{{n_{k} }} {p_{D,k} (\widetilde{x}_{{k\left| {k - 1} \right.}}^{i} )\ell (m_{k} \left| {\tilde{x}_{{k\left| {k - 1} \right.}}^{i} } \right.)} \tilde{w}_{{k\left| {k - 1} \right.}}^{i}$$

(22)

When \(p_{{{\text{D}},k}} ({\mathbf{x}})\) in Eq. 20 is not equal to 1, the target may remain undetected in iteration \(k\) with probability \((1 - p_{{{\text{D}},k}} ({\mathbf{x}}))\). \(p(S_{1} \left| {x_{k}^{i} } \right.)\) and \(p(S_{2} \left| {x_{k}^{i} } \right.)\) refer to the probabilities of a state sample lying within the 1-hop and 2-hop areas of the target, respectively, and are used to obtain the target distribution. Target states may be falsely estimated under undetected conditions, especially for smaller values of \(p_{{{\text{D}},k}}\). An undetected target increases tracking time and tracking energy, and consequently degrades tracking performance.
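The two-part update of Eq. 20 can be sketched as follows, assuming a scalar detection probability and clutter intensity; the function and argument names are illustrative:

```python
def update_weights(w_pred, pd, likelihoods, clutter):
    """Weight update in the spirit of Eq. (20): a missed-detection
    term (1 - p_D) * w plus one detection term per measurement,
    normalized by the clutter plus the predicted measurement
    mass pi_k(m) of Eq. (22).

    w_pred      : predicted weights w~_{k|k-1}^i
    pd          : detection probability p_D,k (scalar here)
    likelihoods : likelihoods[m][i] = l_k(m | x~_{k|k-1}^i)
    clutter     : clutter intensity kappa_k(m) (scalar here)
    """
    new_w = [(1.0 - pd) * w for w in w_pred]
    for lik_m in likelihoods:
        mass = sum(pd * l * w for l, w in zip(lik_m, w_pred))   # Eq. (22)
        for i, (l, w) in enumerate(zip(lik_m, w_pred)):
            new_w[i] += pd * l * w / (clutter + mass)
    return new_w

# One measurement that only the first of two samples can explain:
# with p_D = 1 and no clutter, all weight moves to that sample.
w = update_weights([0.5, 0.5], 1.0, [[1.0, 0.0]], 0.0)
```

With \(p_{\text{D}} < 1\), part of each weight survives in the missed-detection term, which is exactly the part the compensatory scheme below targets.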

A comprehensive analysis of the tracking performance degradation caused by undetected targets was given in [38]; accordingly, the number of measurements is increased to resolve the undetected-target problem, yielding the measurement compensatory MPMC (MC-MPMC) method.

The predicted state estimate \(\hat{X}_{k,p}\) and its measurement set \(\hat{M}_{k,p}\) can be computed by Eqs. 23 and 24, based on \(\hat{X}_{k - 1,t}\) and the process and measurement noise.

$$\hat{X}_{k,p} = F_{k - 1} \hat{X}_{k - 1,t} + G_{k} q_{k - 1}$$

(23)

$$\hat{M}_{k,p} = H_{k} \hat{X}_{k,p} + v_{k}$$

(24)

A successful detection means that one of the predicted measurements \(\hat{M}_{k,p}\) (the subscript \(p\) denotes prediction) is very close to a true measurement \(M_{k,t}\) (the subscript \(t\) denotes true), detected by anchors or normal sensors for the location or state information.

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{M}_{k,p} = \left\{ {M_{k,p}^{i} \left| {(M_{k,t}^{j} } \right. - \hat{M}_{k,p}^{i} )^{T} P_{k}^{ - 1} (M_{k,t}^{j} - \hat{M}_{k,p}^{i} ) < \gamma_{th} } \right\}\quad j = 1,2,...,n_{t}$$

(25)

in which \(n_{t}\) is the number of true measurements \(M_{k,t}\), \(\gamma_{th}\) is a threshold and \(P_{k}\) is the prediction covariance.

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{M}_{k,c} = \hat{M}_{k,p} - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{M}_{k,p}$$

(26)

The compensatory measurement set can be computed according to Eqs. 23–26.
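The gating of Eq. 25 and the compensatory set of Eq. 26 can be sketched as follows, assuming measurements as tuples and an explicitly supplied inverse prediction covariance \(P_k^{-1}\); all names are illustrative:

```python
def split_measurements(pred_meas, true_meas, P_inv, gamma_th):
    """Gate predicted measurements against the true ones (Eq. 25):
    a prediction is matched if its Mahalanobis distance to some true
    measurement falls below gamma_th; the remainder forms the
    compensatory set of Eq. (26). P_inv is the inverse prediction
    covariance P_k^{-1}."""
    def d2(a, b):
        # Squared Mahalanobis distance (a - b)^T P_inv (a - b).
        diff = [ai - bi for ai, bi in zip(a, b)]
        return sum(diff[r] * P_inv[r][c] * diff[c]
                   for r in range(len(diff)) for c in range(len(diff)))
    matched = [m for m in pred_meas
               if any(d2(t, m) < gamma_th for t in true_meas)]
    compensatory = [m for m in pred_meas if m not in matched]
    return matched, compensatory

# Two predicted measurements, one true measurement near the first:
# with an identity covariance and gamma_th = 1, only the first is
# matched and the second becomes compensatory.
identity = [[1.0, 0.0], [0.0, 1.0]]
matched, comp = split_measurements([(0.0, 0.0), (5.0, 5.0)],
                                   [(0.1, 0.0)], identity, 1.0)
```

The compensatory measurements then re-enter the weight update through Eqs. 27–29 instead of being discarded.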

The missed-detection weight \(w_{k,c}^{i}\) in Eq. 20 can be rewritten as Eq. 27, the compensatory weight of \(\tilde{x}_{k,c}^{i}\).

$$\tilde{w}_{{\text{k,c}}}^{{\text{i}}} = \sum\limits_{j = 1}^{{n_{c} }} {[1 - p_{{{\text{D}},k}} ({\tilde{\mathbf{x}}}_{{k\left| {k - 1} \right.}}^{i} )]\tilde{w}_{{k\left| {k - 1} \right.}}^{i,j} ({\mathbf{x}})}$$

(27)

$$\tilde{w}_{k,t}^{i} = \frac{{\sum {p_{{{\text{D}},k}} ({\tilde{\mathbf{x}}}_{k}^{i} )} \ell_{k} (m_{j,t} \left| {{\tilde{\mathbf{x}}}_{k}^{i} } \right.)}}{{\kappa_{k} (m_{j,t} ) + D_{k} (m_{j,t} )}}\tilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}})$$

(28)

in which \(n_{c}\) is the number of compensatory measurements \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{M}_{k,c}\). The full updated weight, combining Eqs. 27 and 28 with the predicted weight of Eq. 18, can then be computed as follows:

$$\begin{aligned} \tilde{w}_{k}^{i} & = \sum\limits_{j = 1}^{{n_{c} }} {\tilde{w}_{k,c}^{i,j} } + \sum\limits_{j = 1}^{{n_{t} }} {\tilde{w}_{k,t}^{i,j} } \\ & = \sum\limits_{j = 1}^{{n_{c} }} {\frac{{p_{{{\text{D}},k}} ({\tilde{\mathbf{x}}}_{k}^{i} )\ell_{k} (m_{j,c} \left| {{\tilde{\mathbf{x}}}_{k}^{i} } \right.)}}{{\kappa_{k} (m_{j,c} ) + D_{k} (m_{j,c} )}}} \tilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}}) + \sum\limits_{j = 1}^{{n_{t} }} {\frac{{p_{{{\text{D}},k}} ({\tilde{\mathbf{x}}}_{k}^{i} )\ell_{k} (m_{j,t} \left| {{\tilde{\mathbf{x}}}_{k}^{i} } \right.)}}{{\kappa_{k} (m_{j,t} ) + D_{k} (m_{j,t} )}}} \tilde{w}_{{k\left| {k - 1} \right.}}^{i} ({\mathbf{x}}) \\ \end{aligned}$$

(29)

**Step 4** Target state estimations

The target state estimates are then computed from the true measurements and the compensatory measurements,

$$\hat{X}_{k} = \hat{X}_{k,t} + \hat{X}_{k,c}$$

(30)

$$\hat{X}_{k,t} = \sum\limits_{j = 1}^{{n_{t} }} {\tilde{w}_{k,t}^{i,j} } x_{k,t}^{j}$$

(31)

$$\hat{X}_{k,c} = \sum\limits_{j = 1}^{{n_{c} }} {\tilde{w}_{k,c}^{i,j} } x_{k,c}^{j}$$

(32)

The mixture weights yield more valid estimators with less variance, which improves the diversity of the PMC population. Also, the whole mixture of observations matches the targets better than each observation does separately.

At each time interval, the state error can be obtained from Eq. 33, following [45], in which \(x_{k}^{iest}\) is the estimated state and \(x_{k}^{i}\) is the actual one; the states here are the distances between the target and the sensors.

$$Te_{k} = \frac{1}{{N_{k} }}\sum\limits_{i = 1}^{{N_{k} }} {\left\| {x_{k}^{iest} - x_{k}^{i} } \right\|}$$

(33)
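Eq. 33 amounts to an average Euclidean distance between estimated and actual states; a minimal sketch (the function name is ours):

```python
import math

def tracking_error(est, actual):
    """Mean state error Te_k of Eq. (33): the average Euclidean
    distance between estimated and actual states over N_k samples."""
    assert len(est) == len(actual)
    return sum(math.dist(e, a) for e, a in zip(est, actual)) / len(est)

# Two samples: one exact, one off by a 3-4-5 triangle (distance 5),
# giving a mean error of 2.5.
err = tracking_error([(0.0, 0.0), (3.0, 4.0)], [(0.0, 0.0), (0.0, 0.0)])
```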

Tracking delay is also an important characteristic of our time-critical tracking system, and we always attempt to reduce the delay so as to obtain real-time monitoring and tracking. In the MC-MPMC scheme, the delay consists of two parts, \(T_{k}^{{j({\text{HTC}})}}\) and \(T_{k}^{{j({\text{MC-MPMC}})}}\). The communication time delay can be estimated as in [48],

$$T_{k}^{i} = \sum\limits_{j = 1}^{i} {(T_{k}^{{j({\text{HTC}})}} + T_{k}^{{j({\text{MC-MPMC}})}} )}$$

(34)

Tracking energy consumption is the most important metric in energy-constrained WSNs. The average cost comprises two parts, \(E_{k}^{{j({\text{HTC}})}}\) and \(E_{k}^{{j({\text{MC-MPMC}})}}\), which refer to the cost of performing the HTC scheme and the MC-MPMC scheme in each iteration, respectively.

$$E_{k}^{{N_{k} }} = \frac{1}{{N_{k} }}\sum\limits_{j = 1}^{{N_{k} }} {(E_{k}^{{j({\text{HTC}})}} + E_{k}^{{j({\text{MC-MPMC}})}} )}$$

(35)