Open Access

Distributed estimation in wireless sensor networks with semi-orthogonal MAC

EURASIP Journal on Wireless Communications and Networking 2016, 2016:214

https://doi.org/10.1186/s13638-016-0716-z

Received: 23 February 2016

Accepted: 27 August 2016

Published: 8 September 2016

Abstract

This paper is concerned with distributed estimation of a scalar parameter using a wireless sensor network (WSN) that employs a large number of sensors operating under a limited bandwidth resource. A semi-orthogonal multiple-access (MA) scheme is proposed to transmit observations from K sensors to a fusion center (FC) via N orthogonal channels, where K≫N. The K sensors are divided into N groups, where the sensors in each group simultaneously transmit on one orthogonal channel (and hence the transmitted signals are directly superimposed at the FC as opposed to being coherently combined). Under such a semi-orthogonal multiple access channel (MAC), the performance of linear minimum mean squared error (LMMSE) estimation is analyzed in terms of two indicators: the channel noise suppression capability and the observation noise suppression capability. The analysis is performed for two versions of the proposed semi-orthogonal MA scheme: fixed sensor grouping and adaptive sensor grouping. In particular, the semi-orthogonal MAC with fixed sensor grouping is shown to have the same channel noise suppression capability and two times the observation noise suppression capability when compared to the orthogonal MAC under the same bandwidth resource. For the semi-orthogonal MAC with adaptive sensor grouping, it is determined that N=4 is the most favorable number of orthogonal channels when taking into account both performance and feedback requirement. In particular, the semi-orthogonal MAC with adaptive sensor grouping is shown to perform very close to the hybrid MAC, while requiring only log_2 N=2 bits of information feedback instead of the exact channel phase for each sensor.

Keywords

Wireless sensor networks; Distributed estimation; Multiple access channel; Multiple access scheme

1 Introduction

Wireless sensor networks (WSNs) have found applications in diverse areas such as environmental data gathering [1], industrial monitoring [2], monitoring of smart electricity grids [3], and mobile robots and autonomous vehicles [4]. Such widespread applications of WSNs are made possible by advances in wireless communications and high-speed low-power electronics, which make WSNs inexpensive, compact, and versatile [5]. All these applications of WSNs are based on the same fundamental task of sampling (i.e., observing) some signal parameter using sensors geographically distributed over a field and estimating the parameter of interest using a central processing unit (fusion center). Such a signal processing task is generally known as distributed estimation.

To perform distributed estimation using a WSN, each sensor makes an observation of the quantity of interest, generates a local signal, and then sends it to a fusion center (FC) via a wireless fading channel. Based on the data collected from the sensors, the FC produces a final estimate of the desired quantity according to some fusion rule. An important design consideration for a WSN is the transmission method from the sensors to the fusion center, which can be analog or digital. With analog transmission, each sensor amplifies and forwards its observation to the FC. On the other hand, for digital transmission, each sensor performs source and channel coding before transmitting the encoded information over the fading channel (see [6] and references therein). According to the studies in [7–17], analog transmission generally outperforms digital transmission. This is because the fidelity of the source’s parameter is always compromised in the source coding (quantization) process required for digital transmission, while it is preserved with analog transmission. As such, this paper also focuses on distributed estimation in WSNs based on analog transmission.

There are many factors that affect the performance of distributed estimation. These include the accuracy of sensors’ observations (which is usually modeled as observation noise), the available bandwidth and power resources, the fading characteristics of the wireless channels between sensors and the FC, the fusion rule used by the FC, and the type of multiple access channel (MAC)1 used to communicate the sensors’ observations to the FC. To date, there are three types of MAC commonly considered for distributed estimation: coherent, orthogonal, and hybrid. For these MACs, it is required that the responses of all wireless channels connecting the sensors to the FC be estimated at the FC (typically via the use of training signals) and used in the distributed estimation algorithm. Such channel estimation at the FC is assumed to be perfect for all the MACs considered in this paper. On the other hand, whether the sensors require any channel state information (CSI) depends on the type of MAC.

For the coherent MAC studied in [8], the sensors’ observations are coherently combined and transmitted to the FC on one channel. Although the coherent MAC appears to be very bandwidth efficient, it requires that each sensor know the wireless channel response from itself to the FC so that synchronization among sensors can be established. This requirement presents a serious challenge in a practical implementation of the coherent MAC since the channel responses need to be measured at the FC and fed back to the sensors. Such a feedback overhead can be very significant for a large WSN. The impact of imperfect synchronization corresponding to phase errors is investigated in [18], where a master-slave architecture is also proposed to reduce the synchronization overhead. It should also be pointed out that phase modulation is investigated in [19] for coherent transmission of sensor observations to the FC, but without the important consideration of fading.

In contrast to the coherent MAC, for the orthogonal MAC examined in [7], all K sensors in the network transmit their observations to the FC via K orthogonal channels, which can be realized with orthogonal frequency-division or time-division multiplexing. The orthogonal MAC does not require synchronization among sensors, and hence is more favorable for implementation. The major disadvantage of the orthogonal MAC is that it requires larger transmission bandwidth or latency to accommodate K orthogonal channels. More recently, a hybrid MAC is investigated in [17], where all sensors are divided into groups and the coherent MAC is used for sensors within each group, whereas the orthogonal MAC is used across different groups. A flexible trade-off between the coherent and orthogonal MACs can therefore be obtained by changing the number of groups and the number of sensors in each group. However, in such a hybrid MAC, synchronization among sensors within the same group is still required and the amount of channel information feedback from the FC to the sensors is the same as that of the coherent MAC.

This paper proposes and investigates the use of another type of MAC, referred to as a semi-orthogonal MAC, for distributed estimation. The proposed semi-orthogonal multiple access (MA) scheme aims to improve the performance of distributed estimation under a limited bandwidth constraint. Specifically, considered is a scenario where N orthogonal channels are shared by K sensors to transmit their observations to the FC, where N is much smaller than K due to the bandwidth constraint. While the semi-orthogonal MA scheme is designed based on a similar idea of sensor grouping to that in [17], the key difference is that, in the proposed MA scheme, the sensors in one group transmit simultaneously without the expensive phase synchronization operation. This means that the signals from sensors within one group are directly superimposed instead of coherently combined as in the hybrid MAC.

The proposed semi-orthogonal MA scheme can be implemented with either fixed or adaptive sensor grouping. In fixed sensor grouping, each sensor transmits on fixed orthogonal channels. In general, more than one orthogonal channel can be allocated to one sensor. However, it shall be shown that such channel allocation causes correlation among the equivalent channel responses 2 and degrades the estimation performance. As such, fixed sensor grouping should be done in such a way that the groups are disjoint. For adaptive sensor grouping, sensors are grouped according to the ranges (i.e., sub-regions) that their channel phases fall into. The extra cost for implementing adaptive sensor grouping is only log2N bits of feedback information from the FC to each sensor to indicate channel allocation. This amount of feedback overhead is significantly smaller than the phase values (real numbers) of the channel responses required in the coherent and hybrid MA schemes. It will be shown that, compared to fixed sensor grouping, the estimation performance achieved with adaptive grouping is improved by a large margin. In fact, the performance of the semi-orthogonal MA scheme with adaptive grouping is very close to the performance of the hybrid MA scheme under the same bandwidth and power constraints and the same number of sensors.

There has been extensive study of distributed estimation using a WSN. To put the novelty and contributions of the current work in context, key papers on distributed estimation that are related to the study in the present paper are discussed next. The seminal work in [20] analyzes the fundamental tradeoffs between the number of sensors, their total transmit power, the number of degrees of freedom of the source, the spatio-temporal communication bandwidth, and the end-to-end distortion under a coherent MAC. An important result established in [20] is that, for typical situations, the distortion goes down at best like 1/K, where K is the number of sensors. For the case of a simple “Gaussian” sensor network, where a single memoryless Gaussian source is observed by many sensors subject to independent Gaussian observation noises and the sensors are linked to a fusion center via a coherent Gaussian MAC, it is shown in [10] that uncoded transmission is strictly optimal, rather than only in the scaling law sense of 1/K. The system model of distributed estimation considered in the present paper is similar to the simple Gaussian sensor network in [10], albeit with the novel semi-orthogonal MAC used in place of the coherent MAC. In fact, it is shown in ([21] Chapter 5) that the semi-orthogonal MAC with adaptive sensor grouping achieves the optimal scaling law.

It should be pointed out that designing optimal transmit power/energy allocation strategies for WSNs has been an active area of research in recent years. For example, reference [7] considers a WSN with the orthogonal MAC and derives the optimal power allocation policies in a way that the total distortion is minimized subject to a sum power constraint at the sensors. The work in [22] considers the same orthogonal MAC as [7] but instead of minimizing the total distortion, total transmission power is minimized under distortion constraints. Wu and Wang [23] study power allocation taking into account sensing noise uncertainty, whereas the optimal power allocation for linear estimation over the coherent MAC has been considered in [8]. All the studies on power allocation in [7, 8, 22, 23] focus on sensors equipped with conventional batteries with fixed energy storage. More recently, reference [5] addresses the problem of optimal power allocation to efficiently estimate a random source using distributed wireless sensors equipped with energy harvesting technology. Since this paper focuses on the impact of different MACs, the simple equal power allocation shall be considered throughout.3

Before closing this section, it is important to stress that in a truly distributed estimation framework, the objective is to coordinate all the sensors so that without communicating with one another, they collectively maximize the quality of estimation at the FC [24]. In contrast to such truly distributed estimation, collaborative estimation is considered in [25], where the network is divided into a set of sensor clusters, with collaboration allowed among sensors within the same cluster, but not across clusters. It is shown in [25] that when the channels to the FC are orthogonal and cost-free collaboration is possible within each cluster, the optimum collaboration strategy is to perform the inference in each cluster and use the best available channel to transmit the estimated parameter (local message) to the FC. The optimum power allocation among the clusters is also found in [25] and shown to operate in a water-filling manner. Kar and Varshney [24] consider a similar sensor collaboration paradigm as in [25] but using a coherent MAC for communications from sensors to the FC. The authors obtained the optimum cumulative power-distortion tradeoff when a fixed but otherwise cost-free collaboration topology is used and addressed the design of collaborative topologies where finite costs are involved in collaboration. Instead of having all the sensors collaboratively observe a single underlying scalar parameter, reference [26] derives an optimum power allocation scheme among collaborative sensors for the case that the sensors observe individual signals that are spatially correlated. While it is intuitively expected that sensor collaboration helps to reduce the distortion of the estimated parameter(s) at the FC, it comes at significant costs of providing reliable communication links among the collaborative sensors as well as extra signal processing (linear combination of shared observations) for each sensor cluster.
As such, sensor collaboration is not considered in the present paper.

The remainder of this paper is organized as follows. Section 2 describes the system model of distributed estimation under the coherent, orthogonal, hybrid, and semi-orthogonal MACs. Section 3 proposes two sensor grouping approaches for the semi-orthogonal MAC. Section 4 analyzes and compares the estimation performance under the various MACs. Section 5 presents numerical results and discussion. Section 6 concludes the paper.

2 System model

Figure 1 shows a system model of distributed estimation using a WSN. Here, a scalar Gaussian random variable s is observed in a memoryless fashion by K sensors and each observation is subject to white Gaussian noise. The observation of the ith sensor can be expressed as
$$ x_{i}=s+v_{i}, \quad 1 \le i \le K, $$
(1)
Fig. 1

System model for distributed estimation using a WSN

where the source signal s and observation noise \(v_{i}\) are treated as random variables with zero mean and variances \({\sigma ^{2}_{s}}\) and \( {\sigma ^{2}_{v}}\), respectively. The observation signal-to-noise ratio (SNR) is defined as \(\gamma _{\mathrm {o}} = \frac {\sigma ^{2}_{s}}{\sigma ^{2}_{v}}\).

Using analog modulation, the ith sensor simply amplifies \(x_{i}\) with a gain \(a_{i}\) and transmits the result to the FC. The total transmit power in this WSN is \(P_{\text {tot}}=\sum ^{K}_{i=1} {a^{2}_{i}} \left (\sigma ^{2}_{s} + \sigma ^{2}_{v}\right)\). The communication channels from the sensors to the FC are considered to be wireless fading channels. Let \(h_{i}= r_{i} {\mathrm e}^{j \varphi _{i}}\), i=1,…,K, represent the channel response from sensor i to the FC. These channel responses are modeled as independent and identically distributed (i.i.d.) complex Gaussian random variables with zero mean and unit variance, denoted as \(\mathcal {CN}(0,1)\). This also means that the magnitude \(r_{i}\) and phase \(\varphi_{i}\) are independent random variables with Rayleigh and uniform distributions, respectively. On each wireless fading channel, the transmitted signal is disturbed by additive white Gaussian noise (AWGN). The AWGN sample, denoted as ω, is modeled as a complex Gaussian random variable with zero mean and variance \(\sigma ^{2}_{\omega }\). The channel SNR is defined as \(\gamma _{\mathrm {c}} = \frac {P_{\text {tot}}}{\sigma ^{2}_{\omega }}\).

The received signal at the FC is generally denoted by \(\mathbf{y}=\left[y_{1},y_{2},\ldots,y_{N}\right]\). The dimension N and the expression of y in terms of \(a_{i}\), \(x_{i}\), \(h_{i}\), and ω depend on the type of multiple access channel (MAC) realized from the sensors to the FC. This is elaborated in the subsections below. Regardless of the type of MAC, the task of the FC is to estimate the underlying source signal based on the received signal y. Using the linear minimum mean square error (LMMSE) estimator [27] and assuming that all the channel responses are available at the FC, the estimate of s is \(\check {s} = \mathbf {C}_{s \mathbf {y}} \mathbf {C}^{-1}_{\mathbf {y} \mathbf {y}} \, \mathbf {y}\), where \(\mathbf {C}_{s \mathbf {y}}\) is the covariance between s and y and \(\mathbf {C}_{\mathbf {y} \mathbf {y}}\) is the covariance of y. The corresponding MSE is \(\epsilon ={\sigma _{s}^{2}}-\mathbf {C}_{s \mathbf {y}} \mathbf {C}^{-1}_{\mathbf {y} \mathbf {y}} \mathbf {C}_{\mathbf {y} s}\), which depends on the specific realizations of the channel responses \(h_{i}\)'s. The long-term performance of a WSN is evaluated using the average MSE (AMSE), defined as \(\text {AMSE} = \mathcal {E} \left \{ \epsilon \right \}\), where the expectation is taken over channel response realizations.
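To make the fusion rule concrete, the following sketch (illustrative, not from the paper) evaluates the LMMSE estimator \(\check {s} = \mathbf {C}_{s \mathbf {y}} \mathbf {C}^{-1}_{\mathbf {y} \mathbf {y}} \mathbf {y}\) and its MSE for a toy real-valued linear model y = h s + n; the channel gains and variances are hypothetical values chosen only for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_s2, sigma_n2 = 1.0, 0.5           # illustrative source and noise variances
h = np.array([0.9, 1.2, 0.7])           # hypothetical real channel gains, model y = h*s + n

# Covariances for the linear model y = h s + n with independent noise
C_sy = sigma_s2 * h                                      # Cov(s, y)
C_yy = sigma_s2 * np.outer(h, h) + sigma_n2 * np.eye(3)  # Cov(y, y)

W = C_sy @ np.linalg.inv(C_yy)          # LMMSE weights C_sy C_yy^{-1}
mse = sigma_s2 - W @ C_sy               # epsilon = sigma_s^2 - C_sy C_yy^{-1} C_ys

# Empirical check: the average squared error should approach `mse`
trials = 200_000
s = rng.normal(0.0, np.sqrt(sigma_s2), trials)
n = rng.normal(0.0, np.sqrt(sigma_n2), (trials, 3))
y = s[:, None] * h + n
emp = np.mean((y @ W - s) ** 2)
assert abs(emp - mse) < 0.01
```

The same two covariance quantities, specialized to each MAC, yield the estimators and MSE expressions derived in the following subsections.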

2.1 Coherent MAC

With the coherent MAC [8], after the phases of channel responses are compensated at sensors, signals from all sensors are essentially transmitted on one (equivalent) channel. Thus, the dimension of y is N=1, i.e., the received signal at FC is a scalar. To realize phase compensation at the transmitters, the phase values of the wireless channel responses need to be sent from the FC to all sensors, which represents a large amount of feedback. After phase compensation, the transmitted signal at the ith sensor is \(x_{i} = a_{i} \left (s+v_{i} \right) {\mathrm e}^{-j \varphi _{i}}\). Under timing synchronization among the sensors, all the useful information resides in the real part of the received signal at the FC, which can be expressed as4
$$ y = \sum^{K}_{i=1} a_{i} \left(s+v_{i} \right) r_{i} + \mathcal{R} \left\{ \omega \right\}. $$
(2)
The LMMSE estimator and the corresponding MSE are given as
$$ \check{s}_{\text{coh}} = \left[ \frac{\left(\sum^{K}_{i=1} a_{i} r_{i} \right) {\sigma^{2}_{s}}}{\left(\sum^{K}_{i=1} a_{i} r_{i} \right)^{2} {\sigma^{2}_{s}} + \left(\sum^{K}_{i=1} {a^{2}_{i}} {r^{2}_{i}} \right) {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right] y, $$
(3)
$$ \epsilon_{\text{coh}} = \left[\sigma^{-2}_{s} + \frac{\left(\sum^{K}_{i=1} a_{i} r_{i} \right)^{2} }{ \left(\sum^{K}_{i=1} {a^{2}_{i}} {r^{2}_{i}} \right) {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right]^{-1}. $$
(4)
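As a numerical illustration, Eq. (4) can be evaluated directly for one random channel realization. The sketch below assumes equal power allocation and uses illustrative values for \(P_{\text{tot}}\), \({\sigma^{2}_{s}}\), \({\sigma^{2}_{v}}\), and \(\sigma^{2}_{\omega}\):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 50
sigma_s2, sigma_v2, sigma_w2 = 1.0, 0.1, 1.0   # illustrative variances
P_tot = 10.0

# Equal power allocation across the K sensors
a = np.full(K, np.sqrt(P_tot / (K * (sigma_s2 + sigma_v2))))

# Rayleigh-distributed magnitudes r_i of CN(0,1) channel responses
r = np.abs(rng.normal(0, np.sqrt(0.5), K) + 1j * rng.normal(0, np.sqrt(0.5), K))

# Eq. (4): MSE of the coherent MAC for this channel realization
num = np.sum(a * r) ** 2
den = np.sum(a**2 * r**2) * sigma_v2 + sigma_w2 / 2
eps_coh = 1.0 / (1.0 / sigma_s2 + num / den)
```

Averaging `eps_coh` over many channel draws would give the AMSE defined in Section 2.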

2.2 Orthogonal MAC

With the orthogonal MAC [7], K sensors transmit their observations to the FC via K orthogonal channels. The orthogonal MAC does not require feedback of channel responses from the FC to the sensors, and hence is more favorable for implementation. However, the key disadvantage of the orthogonal MAC is that it requires a larger transmission bandwidth or longer latency to realize multiple orthogonal channels. At the FC, the channel phase on each wireless channel is compensated first. After such phase compensation, all the useful information is found in the real parts of the processed signals. On the ith wireless channel, by taking the real part of the complex baseband signal, one has
$$ \begin{aligned} y_{i} &= a_{i} \left(s+v_{i} \right) r_{i} + \mathcal{R} \left\{\omega{\mathrm e}^{-j \varphi_{i}} \right\}\\ &= a_{i} r_{i} s + a_{i} v_{i} r_{i} + \mathcal{R} \left\{\omega{\mathrm e}^{-j \varphi_{i}}\right\} = \bar{r}_{i} s + \bar{v}_{i} + \bar{\omega}_{i}, \end{aligned} $$
(5)

where \(\bar {r}_{i} = a_{i} r_{i}\), \(\bar {v}_{i} = a_{i} v_{i} r_{i}\) and \(\bar {\omega }_{i} = \mathcal {R} \left \{ \omega {\mathrm e}^{-j \varphi _{i}} \right \}\).

Let \(\mathbf{y}=\left[y_{1},y_{2},\ldots,y_{K}\right]\), \(\bar {\mathbf {r}}=\left [ \bar {r}_{1}, \bar {r}_{2}, \ldots, \bar {r}_{K} \right ]^{\top }\), \(\bar {\mathbf {v}}=\left [ \bar {v}_{1}, \bar {v}_{2}, \ldots, \bar {v}_{K} \right ]^{\top }\) and \(\bar {\boldsymbol {\omega }}=\left [ \bar {\omega }_{1}, \bar {\omega }_{2}, \ldots, \bar {\omega }_{K} \right ]^{\top }\). Then Eq. (5) becomes \(\mathbf {y} = \bar {\mathbf {r}} s + \bar {\mathbf {v}} + \bar {\boldsymbol {\omega }}\). It then follows that the LMMSE estimator of s based on y is
$$ \check{s}_{\text{orth}} = {\sigma^{2}_{s}} \bar{\mathbf{r}}^{\top} \left({\sigma^{2}_{s}} \bar{\mathbf{r}} \bar{\mathbf{r}}^{\top} + \mathbf{\Sigma}_{\bar{\mathbf{v}}} + \mathbf{\Sigma}_{\bar{\omega}} \right)^{-1} \mathbf{y}, $$
(6)
where
$$ \mathbf{\Sigma}_{\bar{\mathbf{v}}} = \mathcal{E} \left\{\bar{\mathbf{v}}\bar{\mathbf{v}}^{\top} \right\} = \text{diag} \left({a^{2}_{1}}{r^{2}_{1}}{\sigma^{2}_{v}}, \, {a^{2}_{2}} {r^{2}_{2}} {\sigma^{2}_{v}}, \, \ldots, \, {a^{2}_{K}} {r^{2}_{K}} {\sigma^{2}_{v}} \right), $$
(7)
$$ \mathbf{\Sigma}_{\bar{\omega}} = \mathcal{E} \left \{ \bar{\boldsymbol{\omega}} \bar{\boldsymbol{\omega}}^{\top} \right \} = \text{diag} \left(\frac{\sigma^{2}_{\omega}}{2}, \frac{\sigma^{2}_{\omega}}{2},\ldots,\frac{\sigma^{2}_{\omega}}{2} \right). $$
(8)
The corresponding MSE distortion is
$$ \begin{aligned} \epsilon_{\text{orth}} &= \left[ \sigma^{-2}_{s} + \bar{\mathbf{r}}^{\top} \left(\mathbf{\Sigma}_{\bar{\mathbf{v}}} + \mathbf{\Sigma}_{\bar{\omega}} \right)^{-1} \bar{\mathbf{r}} \right]^{-1} \\ &= \left(\sigma^{-2}_{s} + \sum^{K}_{i=1} \frac{{a^{2}_{i}} {r^{2}_{i}} }{ {a^{2}_{i}} {r^{2}_{i}} {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right)^{-1}. \end{aligned} $$
(9)
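The equivalence of the two lines of Eq. (9) can be checked numerically. The sketch below (with illustrative parameter values) evaluates the matrix form using \(\mathbf{\Sigma}_{\bar{\mathbf{v}}}\) and \(\mathbf{\Sigma}_{\bar{\omega}}\) from (7)–(8) and compares it with the closed-form sum:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 20
sigma_s2, sigma_v2, sigma_w2 = 1.0, 0.1, 1.0   # illustrative variances
P_tot = 10.0
a = np.full(K, np.sqrt(P_tot / (K * (sigma_s2 + sigma_v2))))   # equal power allocation
r = np.abs(rng.normal(0, np.sqrt(0.5), K) + 1j * rng.normal(0, np.sqrt(0.5), K))

# Matrix form: Sigma_v + Sigma_w is diagonal per Eqs. (7)-(8)
r_bar = a * r
Sigma = np.diag(a**2 * r**2 * sigma_v2 + sigma_w2 / 2)
eps_mat = 1.0 / (1.0 / sigma_s2 + r_bar @ np.linalg.inv(Sigma) @ r_bar)

# Closed form: second line of Eq. (9)
eps_cf = 1.0 / (1.0 / sigma_s2
                + np.sum(a**2 * r**2 / (a**2 * r**2 * sigma_v2 + sigma_w2 / 2)))
assert abs(eps_mat - eps_cf) < 1e-10
```

The agreement reflects that, with diagonal noise covariances, the quadratic form in (9) decouples into a per-channel sum.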

2.3 Hybrid MAC

With the hybrid MAC considered in [17], all sensors are divided into groups and the coherent MAC is used for sensors within each group, whereas the orthogonal MAC is used across different groups. This MAC provides a solution for scenarios where there are N (a small number due to bandwidth constraint) orthogonal channels that are shared by K sensors, where K≫N. In this hybrid MAC, to obtain coherent combination in each group, channel phase information feedback from the FC to the sensors is still required and the amount of feedback is the same as that of the coherent MAC. In addition, the required transmission bandwidth in this MAC depends on the number of sensor groups.

After phase compensation, the transmitted signal at the ith sensor is \(x_{i} = a_{i} \left (s+v_{i} \right) {\mathrm e}^{-j \varphi _{i}}\). Under timing synchronization among the sensors in the same group, on the nth equivalent channel, all the useful information is in the real part of the received signal at the FC, which can be expressed as
$${} \begin{aligned} y_{n} &= {\sum\nolimits}_{i \in \Omega_{n}} a_{i} \left(s+v_{i} \right) r_{i} + \mathcal{R} \left \{ \omega_{n} \right \},\quad n=\!1,2,\ldots, N, \\ &= \left({\sum\nolimits}_{i \in \Omega_{n}} a_{i} r_{i} \right) s + \left({\sum\nolimits}_{i \in \Omega_{n}} a_{i} v_{i} r_{i} \right) + \mathcal{R} \left \{ \omega_{n} \right\}\\ &= \bar{r}_{n} s + \bar{v}_{n} + \bar{\omega}_{n}, \end{aligned} $$
(10)
where \(\Omega_{n}\) is the index set of sensors in the nth group, \(\bar {r}_{n} = \sum _{i \in \Omega _{n}} a_{i} r_{i}\), \(\bar {v}_{n} = \sum _{i \in \Omega _{n}} a_{i} v_{i} r_{i}\) and \(\bar {\omega }_{n} = \mathcal {R} \left \{ \omega _{n} \right \}\). Let \(\mathbf{y}=\left[y_{1},y_{2},\ldots,y_{N}\right]\), \(\bar {\mathbf {r}}=\left [ \bar {r}_{1}, \bar {r}_{2}, \ldots, \bar {r}_{N} \right ]^{\top }\), \(\bar {\mathbf {v}}=\left [ \bar {v}_{1}, \bar {v}_{2}, \ldots, \bar {v}_{N} \right ]^{\top }\) and \(\bar {\boldsymbol {\omega }}=\left [ \bar {\omega }_{1}, \bar {\omega }_{2}, \ldots, \bar {\omega }_{N} \right ]^{\top }\). Then, similar to the orthogonal MAC, the LMMSE estimator of s based on y is
$$ \check{s}_{\text{hyb}} = {\sigma^{2}_{s}} \bar{\mathbf{r}}^{\top} \left({\sigma^{2}_{s}} \bar{\mathbf{r}} \bar{\mathbf{r}}^{\top} + \mathbf{\Sigma}_{\bar{\mathbf{v}}} + \mathbf{\Sigma}_{\bar{\omega}} \right)^{-1} \mathbf{y}, $$
(11)
where
$$ \begin{aligned} \mathbf{\Sigma}_{\bar{\mathbf{v}}} &= \mathcal{E} \left \{ \bar{\mathbf{v}} \bar{\mathbf{v}}^{\top} \right \}= \text{diag} \left({\sum\nolimits}_{i \in \Omega_{1}} {a^{2}_{i}} {r^{2}_{i}} {\sigma^{2}_{v}},\right.\\ &\quad\left. {\sum\nolimits}_{i \in \Omega_{2}} {a^{2}_{i}} {r^{2}_{i}} {\sigma^{2}_{v}}, \, \ldots, \, {\sum\nolimits}_{i \in \Omega_{N}} {a^{2}_{i}} {r^{2}_{i}} {\sigma^{2}_{v}} \right) \end{aligned} $$
(12)
and \(\mathbf {\Sigma }_{\bar {\omega }}\) is still the same as (8). The corresponding MSE distortion is
$$ \begin{aligned} \epsilon_{\text{hyb}} &= \left[ \sigma^{-2}_{s} + \bar{\mathbf{r}}^{\top} \left(\mathbf{\Sigma}_{\bar{\mathbf{v}}} + \mathbf{\Sigma}_{\bar{\omega}} \right)^{-1} \bar{\mathbf{r}} \right]^{-1}\\ &= \left[ \sigma^{-2}_{s} + \sum^{N}_{n=1} \frac{\left(\sum_{i \in \Omega_{n}} a_{i} r_{i} \right)^{2} }{ \left(\sum_{i \in \Omega_{n}} {a^{2}_{i}} {r^{2}_{i}} \right) {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right]^{-1}. \end{aligned} $$
(13)
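For illustration, Eq. (13) can be evaluated for a simple disjoint grouping. The sketch below splits the sensors into N equal-size groups (an arbitrary grouping chosen only for demonstration) under equal power allocation and illustrative variances:

```python
import numpy as np

rng = np.random.default_rng(3)
K, N = 20, 4
sigma_s2, sigma_v2, sigma_w2 = 1.0, 0.1, 1.0   # illustrative variances
P_tot = 10.0
a = np.full(K, np.sqrt(P_tot / (K * (sigma_s2 + sigma_v2))))   # equal power allocation
r = np.abs(rng.normal(0, np.sqrt(0.5), K) + 1j * rng.normal(0, np.sqrt(0.5), K))

# Disjoint index sets Omega_1, ..., Omega_N (equal-size groups)
groups = np.array_split(np.arange(K), N)

# Eq. (13): one term per group; coherent gain within a group, orthogonal across groups
terms = [np.sum(a[g] * r[g]) ** 2 /
         (np.sum(a[g]**2 * r[g]**2) * sigma_v2 + sigma_w2 / 2) for g in groups]
eps_hyb = 1.0 / (1.0 / sigma_s2 + sum(terms))
```

Note the numerator of each term is a squared sum of magnitudes, reflecting the phase-compensated (coherent) combining within each group.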

2.4 Semi-orthogonal MAC

The semi-orthogonal MAC can be considered as a direct competitor of the hybrid MAC in the sense that they are both suitable for scenarios where there are N orthogonal channels shared by K sensors, where K≫N. The key novelty in realizing the semi-orthogonal MAC is that the ith sensor transmits to the FC according to a length-N vector \(\mathbf {g}^{(i)}=\left [ g^{\left (i \right)}_{1},g^{\left (i \right)}_{2},\ldots,g^{\left (i \right)}_{N} \right ]\), whose elements are either 0 or 1. The set of \(\mathbf {g}^{(i)}\)'s gives an allocation of N orthogonal channels to K sensors. For the ith sensor, if the nth element of \(\mathbf {g}^{(i)}\) is 1, then the ith sensor transmits on the nth orthogonal channel.5 Under timing synchronization among the sensors, the received signal on the nth orthogonal channel at the FC is
$$ y_{n}=\left[ \sum^{K}_{i=1} a_{i} \left(s+v_{i} \right) g^{\left(i \right)}_{n}h_{i} \right] + \omega_{n}, \quad n=1,2, \ldots,N, $$
(14)

where the \(\omega_{n}\)'s are i.i.d. (over n) complex AWGN components with zero mean and variance \(\sigma ^{2}_{\omega }\).

Equation (14) can be rewritten as
$${} y_{n}=\left(\sum^{K}_{i=1} a_{i}g^{\left(i \right)}_{n}h_{i}\! \right)\! s + \left(\sum^{K}_{i=1} a_{i}v_{i}g^{\left(i \right)}_{n}h_{i}\! \right) + \omega_{n}=\hat{h}_{n}s+\hat{v}_{n}+\omega_{n}, $$
(15)
where \(\hat {h}_{n}=\sum \limits ^{K}_{i=1} a_{i}g^{\left (i \right)}_{n}h_{i}\) and \(\hat {v}_{n}=\sum \limits ^{K}_{i=1} a_{i}v_{i}g^{\left (i \right)}_{n}h_{i}\) are defined as the equivalent channel response and equivalent observation noise of the nth orthogonal channel, respectively. Since \(y_{n}\) is complex, while s is real, the phase of the equivalent channel response \(\hat {h}_{n}\) is compensated to obtain
$$\begin{array}{@{}rcl@{}} \bar{y}_{n} &=& \mathcal{R} \left\{\frac{\hat{h}^{\ast}_{n}}{\left| \hat{h}_{n} \right|} y_{n} \right\} = \underbrace{\left| \sum^{K}_{i=1} a_{i}g^{\left(i \right)}_{n}h_{i} \right|}_{\bar{h}_{n}} s \\ &&+ \underbrace{\mathcal{R} \left\{\frac{\left(\sum\limits^{K}_{i=1} a_{i}g^{\left(i \right)}_{n}h^{\ast}_{i} \right) \left(\sum\limits^{K}_{i=1} a_{i}v_{i}g^{\left(i\right)}_{n}h_{i}\right)}{\left| \sum\limits^{K}_{i=1} a_{i}g^{\left(i \right)}_{n}h_{i} \right|} \right\}}_{\bar{v}_{n}}\\ &&+ \underbrace{\mathcal{R} \left\{\frac{\left(\sum\limits^{K}_{i=1} a_{i}g^{\left(i \right)}_{n}h^{\ast}_{i} \right) \omega_{n}}{\left|\sum\limits^{K}_{i=1} a_{i}g^{\left(i \right)}_{n}h_{i} \right|} \right \}}_{\bar{\omega}_{n}}\\ &=& \bar{h}_{n} s + \bar{v}_{n} + \bar{\omega}_{n}. \end{array} $$
(16)

The above phase compensation discards half of the observation noise and half of the channel noise (their imaginary parts). It is pointed out that the phase compensation of the equivalent channel response is performed at the FC. Therefore, no phase information is needed at the sensors and feedback of channel phase information is not required.

Let \(\bar {\mathbf {y}}=\left [ \bar {y}_{1}, \bar {y}_{2}, \ldots, \bar {y}_{N} \right ]^{\top }\), \(\bar {\mathbf {h}}=\left [ \bar {h}_{1}, \bar {h}_{2}, \ldots, \bar {h}_{N} \right ]^{\top }\), \(\bar {\mathbf {v}}=\left [ \bar {v}_{1}, \bar {v}_{2}, \ldots, \bar {v}_{N} \right ]^{\top }\) and \(\bar {\boldsymbol {\omega }}=\left [ \bar {\omega }_{1}, \bar {\omega }_{2}, \ldots, \bar {\omega }_{N} \right ]^{\top }\). Then one has \(\bar {\mathbf {y}} = \bar {\mathbf {h}} s + \bar {\mathbf {v}} + \bar {\boldsymbol {\omega }}\). The LMMSE estimation of s based on \(\bar {\mathbf {y}}\) is6
$$ \check{s}_{\text{semi}} = {\sigma^{2}_{s}} \bar{\mathbf{h}}^{\top} \left({\sigma^{2}_{s}} \bar{\mathbf{h}} \bar{\mathbf{h}}^{\top} + \mathbf{\Sigma}_{\bar{\mathbf{v}}} + \mathbf{\Sigma}_{\bar{\omega}} \right)^{-1} \bar{\mathbf{y}}, $$
(17)
where
$${} \begin{aligned} \mathbf{\Sigma}_{\bar{\mathbf{v}}} &= \mathcal{E} \left \{ \bar{\mathbf{v}} \bar{\mathbf{v}}^{\top} \right \} \\&= \left \{ \theta_{n,l} = {\sigma^{2}_{v}} \sum^{K}_{i=1} {a^{2}_{i}} g^{\left(i \right)}_{n} g^{\left(i \right)}_{l} t_{ni} t_{li}; \quad n,l=1,2,\ldots,N \right \}, \end{aligned} $$
(18)
$$ t_{ni} = \mathcal{R} \left\{h_{i} \right\} \frac{\mathcal{R} \left\{\hat{h}_{n} \right\}}{\left|\hat{h}_{n} \right|} + \mathcal{I} \left \{ h_{i} \right \} \frac{\mathcal{I} \left \{ \hat{h}_{n} \right \}}{\left| \hat{h}_{n} \right|}, $$
(19)
and \(\mathbf {\Sigma }_{\bar {\omega }}\) is still the same as (8). The corresponding MSE distortion is
$$ \epsilon_{\text{semi}} = \left[ \sigma^{-2}_{s} + \bar{\mathbf{h}}^{\top} \left(\mathbf{\Sigma}_{\bar{\mathbf{v}}} + \mathbf{\Sigma}_{\bar{\omega}} \right)^{-1} \bar{\mathbf{h}} \right]^{-1}. $$
(20)
The above expression clearly shows that the estimation performance under a semi-orthogonal MAC strongly depends on how N orthogonal channels are shared by K sensors, i.e., how the sensors are grouped. The two sensor grouping strategies proposed in the next section are based on the two performance indicators established as follows. First, setting \({\sigma ^{2}_{v}}=0\) gives \(\mathbf {\Sigma }_{\bar {\mathbf {v}}}=\boldsymbol {0}\) and the MSE distortion becomes
$${} \epsilon_{\text{semi}-\alpha} = \left[\sigma^{-2}_{s} + \bar{\mathbf{h}}^{\top} \left(\mathbf{\Sigma}_{\bar{\omega}}\right)^{-1} \bar{\mathbf{h}} \right]^{-1} = {\sigma^{2}_{s}} \left(1 + 2 \alpha \gamma_{\mathrm{c}} \right)^{-1}, $$
(21)
where
$$ \alpha = \sum\limits^{N}_{n=1} \left| \frac{\sum^{K}_{i=1} g^{\left(i \right)}_{n} h_{i}}{\sqrt{N K_{1}}} \right|^{2}. $$
(22)

The parameter α indicates the impact of the channel noise on the MSE performance: the larger α is, the smaller this impact is. (Here \(K_{1}\) denotes the number of sensors transmitting on each orthogonal channel, as detailed in Section 3.1.)

On the other hand, setting \(\sigma ^{2}_{\omega }=0\) gives \(\mathbf {\Sigma }_{\bar {\omega }}=\boldsymbol {0}\) and the MSE distortion is
$$ \epsilon_{\text{semi}-\beta} = \left[\sigma^{-2}_{s} + \bar{\mathbf{h}}^{\top} \left(\mathbf{\Sigma}_{\bar{\mathbf{v}}}\right)^{-1} \bar{\mathbf{h}} \right]^{-1} = {\sigma^{2}_{s}} \left(1+\beta \gamma_{\mathrm{o}} \right)^{-1}, $$
(23)
where
$$ \beta = {\sigma^{2}_{v}} \bar{\mathbf{h}}^{\top} \left(\mathbf{\Sigma}_{\bar{\mathbf{v}}}\right)^{-1} \bar{\mathbf{h}}. $$
(24)

In this case, the parameter β indicates the impact of the observation noise on the MSE performance: the larger β is, the smaller this impact is.
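To make Eqs. (16)–(20) concrete, the following sketch (with illustrative parameter values and a simple round-robin disjoint grouping chosen for demonstration) builds the equivalent channel responses, the covariance \(\mathbf{\Sigma}_{\bar{\mathbf{v}}}\) from (18)–(19), and the MSE of (20); it uses the identity \(t_{ni} = \mathcal{R}\{h_{i} \hat{h}^{\ast}_{n}\}/|\hat{h}_{n}|\), equivalent to (19):

```python
import numpy as np

rng = np.random.default_rng(4)
K, N = 20, 4
sigma_s2, sigma_v2, sigma_w2 = 1.0, 0.1, 1.0   # illustrative variances
P_tot = 10.0
a = np.full(K, np.sqrt(P_tot / (K * (sigma_s2 + sigma_v2))))   # equal power allocation
h = rng.normal(0, np.sqrt(0.5), K) + 1j * rng.normal(0, np.sqrt(0.5), K)  # CN(0,1)

# Channel allocation G[i, n] = g_n^{(i)}: round-robin disjoint grouping
G = np.zeros((K, N))
G[np.arange(K), np.arange(K) % N] = 1.0

h_hat = (a[:, None] * G * h[:, None]).sum(axis=0)   # equivalent channel responses (15)
h_bar = np.abs(h_hat)                               # |h_hat_n| after phase compensation (16)

# t[n, i] = Re{h_i conj(h_hat_n)} / |h_hat_n|, Eq. (19)
t = np.real(h[None, :] * np.conj(h_hat)[:, None]) / h_bar[:, None]

# Sigma_v of Eq. (18): theta_{n,l} = sigma_v^2 sum_i a_i^2 g_n g_l t_ni t_li
A = a[None, :] * G.T * t                # A[n, i] = a_i g_n^{(i)} t_{ni}
Sigma_v = sigma_v2 * (A @ A.T)
Sigma_w = (sigma_w2 / 2) * np.eye(N)    # Eq. (8)

# Eq. (20): MSE of the semi-orthogonal MAC for this realization
eps_semi = 1.0 / (1.0 / sigma_s2
                  + h_bar @ np.linalg.inv(Sigma_v + Sigma_w) @ h_bar)
```

With disjoint grouping, \(g^{(i)}_{n} g^{(i)}_{l}=0\) for n≠l, so \(\mathbf{\Sigma}_{\bar{\mathbf{v}}}\) is diagonal; assigning a sensor to more than one channel would introduce the off-diagonal terms discussed in Section 3.1.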

3 Sensor grouping methods for semi-orthogonal MAC

3.1 Fixed sensor grouping

Fixed sensor grouping means that the assignment of orthogonal channels, once decided, does not change during the communication phase. When assigning N orthogonal channels to K sensors, where K≫N, an obvious question arises: Should more than one orthogonal channel be assigned to a single sensor, and will this improve the MSE performance of distributed estimation?

To answer the above question, let us first examine a simple scenario where there are two orthogonal channels (N=2) with \(K_{1}\) sensors transmitting on each of them. Under equal power allocation, the gain factor is \(a_{i} = \bar {a} = \sqrt {\frac {P_{\text {tot}}}{2 K_{1} \left ({\sigma ^{2}_{s}} + {\sigma ^{2}_{v}} \right)}}\). Note that there are \(M=\max\left(2K_{1}-K,0\right)\) sensors that transmit on both orthogonal channels. Treating the equivalent channel responses \(\hat {h}_{1} = \bar {a} \sum \limits ^{K}_{i=1} g^{(i)}_{1} h_{i}\) and \(\hat {h}_{2} = \bar {a} \sum \limits ^{K}_{i=1} g^{(i)}_{2} h_{i}\) as random variables, the correlation coefficient between \(\hat {h}_{1}\) and \(\hat {h}_{2}\) is easily found to be \(\rho =\frac {M}{K_{1}}\). If no sensor transmits on both orthogonal channels, then M=0 and the above correlation coefficient is zero. However, such a scenario requires that \(K=2K_{1}\) and that all K sensors be equally divided into two disjoint groups, with the sensors in each group transmitting on one orthogonal channel.

Next, define \(\tilde {h}_{1}=\frac {1}{\sqrt {2 K_{1}}} \sum ^{K}_{i=1} g^{(i)}_{1} h_{i}\) and \(\tilde {h}_{2}=\frac {1}{\sqrt {2 K_{1}}} \sum ^{K}_{i=1} g^{(i)}_{2} h_{i}\), which are simply scaled versions of \(\bar {h}_{1}\) and \(\bar {h}_{2}\) defined in (16). Then, the parameter α is \(\alpha = | \tilde {h}_{1} |^{2} + | \tilde {h}_{2} |^{2}\). Since each of \(\tilde {h}_{1}\) and \(\tilde {h}_{2}\) is \(1/\sqrt {2K_{1}}\) times the sum of \(K_{1}\) i.i.d. complex Gaussian random variables, each with zero mean and unit variance, it is a complex Gaussian random variable with zero mean and variance 1/2. It follows immediately that the expected value of α is \(\mathcal {E}\{\alpha \}=1\). To find the expression for β, express \(\tilde {h}_{1}\) and \(\tilde {h}_{2}\) as \(\tilde {h}_{1}=\frac {m_{1}}{\sqrt {2}} \mathrm {e}^{j \phi _{1}}\) and \(\tilde {h}_{2}=\frac {m_{2}}{\sqrt {2}} \mathrm {e}^{j \phi _{2}}\). Then, Appendix A shows that, as K approaches infinity, \(\beta =\frac {2 \left [ {m^{2}_{1}} - 2 \rho \cos \left (\phi _{1} - \phi _{2} \right) m_{1} m_{2} + {m^{2}_{2}} \right ]}{1 - \rho ^{2} \cos ^{2} \left (\phi _{1} - \phi _{2} \right)}\). The expectation of β is more tedious to obtain; it is given in (61) of Appendix A.

Table 1 tabulates the values of \(\mathcal {E}\{\beta \}\) versus ρ, obtained by theory and simulation. The theoretical and simulation results match very well. As can be seen, while \(\mathcal {E} \left \{ \alpha \right \}\) is a constant 1, \(\mathcal {E} \left \{ \beta \right \}\) is a monotonically decreasing function of ρ. This means that, while the correlation among the equivalent channel responses does not affect the channel noise suppression capability, it reduces the observation noise suppression capability. Overall, the correlation among the equivalent channel responses degrades the estimation performance and hence should be avoided. This can be done by not assigning more than one orthogonal channel to each sensor.
Table 1

Values of \(\mathcal {E}\{\beta \}\) with N=2

ρ            0.05    0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9     0.95
Theory       3.984   3.969   3.909   3.812   3.680   3.518   3.333   3.133   2.930   2.741   2.655
Simulation   3.998   3.987   3.923   3.816   3.689   3.528   3.356   3.150   2.948   2.789   2.703
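The theory row of Table 1 can be reproduced numerically by averaging the asymptotic β expression from above over correlated complex Gaussian pairs \((\tilde{h}_{1}, \tilde{h}_{2})\); the sketch below (an illustration under the stated asymptotic model, not the paper's code) does this for ρ=0.5:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, trials = 0.5, 500_000

def cn(n):
    """i.i.d. CN(0, 1) samples."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# shared-component construction: each of h1, h2 is CN(0, 1/2) with correlation rho
s, u, w = cn(trials), cn(trials), cn(trials)
h1 = (np.sqrt(rho) * s + np.sqrt(1 - rho) * u) / np.sqrt(2)
h2 = (np.sqrt(rho) * s + np.sqrt(1 - rho) * w) / np.sqrt(2)

m1, m2 = np.sqrt(2) * np.abs(h1), np.sqrt(2) * np.abs(h2)   # h = (m / sqrt(2)) e^{j phi}
c = np.cos(np.angle(h1) - np.angle(h2))

# asymptotic beta expression from Section 3.1
beta = 2 * (m1**2 - 2 * rho * c * m1 * m2 + m2**2) / (1 - rho**2 * c**2)
print(beta.mean())             # ≈ 3.52 for rho = 0.5, matching Table 1
```

The shared component s plays the role of the M sensors assigned to both channels.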

For N>2, it is not easy to determine the channel allocation among sensors and perform the corresponding correlation analysis among the equivalent channel responses of the orthogonal channels. Instead, the following ad hoc channel allocation scheme shall be investigated. Assume that K is an integer multiple of N. If \(\frac {\left (n-1\right)K}{N}+K_{1} \leq K\), then the nth orthogonal channel is shared by the sensors with indices in the set \(\left \{ \frac {\left (n-1\right)K}{N}+1, \ldots,\frac {\left (n-1\right)K}{N}+K_{1} \right \}\). If \(\frac {\left (n-1\right)K}{N}+K_{1} > K\), then the set of sensor indices is \(\left \{ 1, \ldots, \frac {\left (n-1\right)K}{N}+K_{1}-K \right \} \cup \left \{ \frac {\left (n-1\right)K}{N}+1, \ldots, K \right \}\). With such a channel assignment, as long as \(K_{1}>\frac {K}{N}\), some sensors will transmit on more than one orthogonal channel and the correlation among the equivalent channel responses of the orthogonal channels is nonzero. As \(K_{1}\) increases from \(\frac {K}{N}\) to K, more and more sensors transmit on more than one orthogonal channel and the correlation among the equivalent channel responses increases from 0 to 1. Therefore, \(K_{1}\) can be adjusted to obtain different levels of correlation among the equivalent channel responses.
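The ad hoc allocation rule above can be written compactly as follows (illustrative Python with 1-based sensor indices, matching the index sets in the text):

```python
def channel_groups(K: int, N: int, K1: int) -> list:
    """Return, for each of the N orthogonal channels, the sorted list of
    (1-based) sensor indices that transmit on it."""
    assert K % N == 0
    groups = []
    for n in range(1, N + 1):
        base = (n - 1) * K // N
        if base + K1 <= K:
            idx = range(base + 1, base + K1 + 1)
        else:  # wrap around to the lowest-indexed sensors
            idx = list(range(1, base + K1 - K + 1)) + list(range(base + 1, K + 1))
        groups.append(sorted(idx))
    return groups

print(channel_groups(8, 4, 3))
# [[1, 2, 3], [3, 4, 5], [5, 6, 7], [1, 7, 8]] -- some sensors are reused
```

With K₁ = K/N the groups are disjoint; every unit of K₁ beyond K/N makes more sensors appear in two groups, which is the source of the correlation discussed above.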

With the above channel allocation scheme, the parameter α defined in (22) can be expressed as \(\alpha =\frac {1}{N}\sum _{n=1}^{N} |\tilde {h}_{n}|^{2}\), where \(\tilde {h}_{n}\) is \(\frac {1}{\sqrt {K_{1}}}\) times the sum of \(K_{1}\) i.i.d. zero-mean unit-variance complex Gaussian random variables. It then follows that the expected value of α also equals 1 in this case. Since the analytical expression of \(\mathcal {E} \left \{ \beta \right \}\) is not available, simulation results are obtained and shown in Fig. 2 for three settings: (N=2, K=16), (N=4, K=32), and (N=8, K=64). In obtaining these simulation results, the power allocation is performed such that each sensor transmission on one orthogonal channel consumes the same power of \(P_{\text{tot}}/(N K_{1})\). It is observed from Fig. 2 that as \(K_{1}\) increases, \(\mathcal {E} \left \{ \beta \right \}\) generally decreases, although not monotonically. The non-monotonic decrease of \(\mathcal {E} \left \{ \beta \right \}\) versus increasing \(K_{1}\) is due to the ad hoc channel assignment and power allocation described above. In particular, the adopted power allocation is quite “non-uniform” in the sense that the total power allocated to a given sensor is proportional to the number of orthogonal channels assigned to it. Depending on \(K_{1}\), this number ranges from 1 to N for each sensor, whereas a larger \(K_{1}\) reduces the power level \(P_{\text{tot}}/(N K_{1})\) allocated for each sensor transmission. The most important observation from this figure is that \(\mathcal {E} \left \{ \beta \right \}\) takes on the largest value when \(K_{1}=\frac {K}{N}\), as expected.
Fig. 2

Plot of \(\mathcal {E} \left \{ \beta \right \}\) versus \(K_{1}\)

From the above theoretical derivations and simulation results, it can be concluded that only one orthogonal channel should be assigned to each sensor. In other words, all sensors are divided into disjoint groups and the sensors in the same group transmit on one orthogonal channel. Moreover, the fixed sensor grouping proposed here divides all sensors equally into groups, i.e., the nth group is \(\Omega _{n} = \left \{ \frac {\left (n-1\right)K}{N}+1, \ldots, \frac {nK}{N} \right \}\).

3.2 Adaptive sensor grouping

With the fixed sensor grouping described in the previous subsection, the parameter α can be written as
$$ \alpha = \sum\limits^{N}_{n=1} \underbrace{\left| \frac{\sum_{i \in \Omega_{n}} h_{i}}{\sqrt{K}} \right|^{2}}_{\alpha_{n}}=\sum\limits^{N}_{n=1} \alpha_{n}, $$
(25)
where each α n is affected only by the channel responses of sensors transmitting on the nth orthogonal channel. Therefore α n can be interpreted as an indicator of the channel noise suppression capability of the nth orthogonal channel. Similarly, the parameter β can be expressed as
$$ \begin{aligned} \beta&=\sum^{N}_{n=1} \underbrace{\frac{\left| \sum_{i \in \Omega_{n}} a_{i} h_{i} \right|^{2} }{\sum_{i \in \Omega_{n}} {a^{2}_{i}} \left(\mathcal{R} \left \{ h_{i} \right \} \frac{\mathcal{R} \left \{ \hat{h}_{n} \right \}}{\left| \hat{h}_{n} \right|} + \mathcal{I} \left \{ h_{i} \right \} \frac{\mathcal{I} \left \{ \hat{h}_{n} \right \}}{\left| \hat{h}_{n} \right|} \right)^{2} }}_{\beta_{n}}\\&=\sum\limits^{N}_{n=1} \beta_{n}, \end{aligned} $$
(26)

where β n is also affected only by the channel responses of sensors transmitting on the nth orthogonal channel, and it can be interpreted as the indicator of the observation noise suppression capability of the nth orthogonal channel.

The above simple observation suggests that if all sensors can be properly grouped according to their channel responses, larger α n and β n can be obtained for each orthogonal channel and thus the overall channel noise suppression and observation noise suppression capabilities of the semi-orthogonal MAC will be improved.

Intuitively, sensors whose channel responses have similar phases should be grouped together to achieve better channel noise suppression and observation noise suppression. Does this “similar phase” grouping strategy work, and how should “similar phase” be defined? To answer this question, examine a scenario in which one sensor with channel response of magnitude 1 and phase 0 transmits on an orthogonal channel. Both the channel noise suppression and observation noise suppression indicators of this orthogonal channel are 1. Next, add another sensor with channel response of magnitude r (r<1) and phase 𝜗 (0 ≤ 𝜗 ≤ π) to form a group7. Then the two indicators of this orthogonal channel change to:
$$ \alpha_{n} = \left(r\cos \vartheta + 1 \right)^{2} + \left(r \sin \vartheta \right)^{2} = r^{2} + 2r \cos \vartheta +1, $$
(27)
$$ \beta_{n} = \frac{r^{2} + 2r \cos \vartheta + 1}{\left(\cos \phi \right)^{2} + \left(r \cos \vartheta \cos \phi + r \sin \vartheta \sin \phi \right)^{2}}, $$
(28)

where ϕ is the phase of the equivalent channel response and \(\tan \phi = \frac {r \sin \vartheta }{r \cos \vartheta + 1}\).

If \(\alpha_{n} > 1\), the added sensor is said to be constructive for channel noise suppression, and if \(\beta_{n} > 1\), the added sensor is constructive for observation noise suppression. Note that if the added sensor transmits on an orthogonal channel alone, then \(\alpha_{n} = r^{2}\) and \(\beta_{n} = 1\). This means that if the added sensor is constructive, sensor grouping improves upon the performance the sensors would achieve individually (i.e., without being grouped).

To determine if the added sensor is constructive for channel noise suppression and/or observation noise suppression, it is straightforward to show from (27) and (28) that
$$ \left\{ \begin{array}{ll} \alpha_{n} > 1, &\text{if} \; 0 \le \vartheta < \text{arccos} \left(- \frac{r}{2} \right) \\ \alpha_{n} \le 1, &\text{if}\; \text{arccos} \left(- \frac{r}{2} \right) \le \vartheta \le \pi \end{array}\right. $$
(29)
$$ \left\{ \begin{array}{ll} \beta_{n} > 1, &\text{if} \; 0 \le \vartheta < \text{arccos} \left(- r \right) \\ \beta_{n} \le 1, &\text{if} \; \text{arccos} \left(- r \right) \le \vartheta \le \pi \end{array}\right. $$
(30)
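The boundaries in (29) and (30) are easy to verify numerically from (27) and (28); for example, with r=0.5, \(\alpha_{n}\) crosses 1 exactly at 𝜗 = arccos(−r/2) and \(\beta_{n}\) crosses 1 at 𝜗 = arccos(−r). An illustrative check:

```python
import numpy as np

def indicators(r, theta):
    """alpha_n and beta_n from (27)-(28) for a two-sensor group (magnitudes 1 and r)."""
    alpha = r**2 + 2 * r * np.cos(theta) + 1
    phi = np.arctan2(r * np.sin(theta), r * np.cos(theta) + 1)  # equivalent-channel phase
    denom = np.cos(phi)**2 + (r * np.cos(theta) * np.cos(phi)
                              + r * np.sin(theta) * np.sin(phi))**2
    return alpha, alpha / denom

r, eps = 0.5, 1e-3
t_alpha, t_beta = np.arccos(-r / 2), np.arccos(-r)   # boundaries from (29) and (30)

print(indicators(r, t_alpha - eps)[0] > 1, indicators(r, t_alpha + eps)[0] < 1)  # True True
print(indicators(r, t_beta - eps)[1] > 1, indicators(r, t_beta + eps)[1] < 1)    # True True
```

At 𝜗 = arccos(−r) one can verify analytically that both the numerator and denominator of (28) equal 1 − r², so \(\beta_{n}\) is exactly 1 there.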
The above analysis leads to the following three regions of 𝜗:
  • If 𝜗 is in region A, i.e., \(0\leq \vartheta <\arccos \left (-\frac {r}{2} \right)\), the added sensor is constructive for both channel noise suppression and observation noise suppression. Note that region A includes \(\left [ 0, \frac {\pi }{2} \right ]\), regardless of the value of r.

  • If 𝜗 is in region B, i.e., \(\arccos \left (-\frac {r}{2} \right)\leq \vartheta <\arccos \left (-r \right)\), the added sensor is destructive for channel noise suppression, but constructive for observation noise suppression.

  • If 𝜗 is in region C, i.e., \(\arccos(-r) \leq \vartheta \leq \pi\), the added sensor is destructive for both channel noise suppression and observation noise suppression.

At this point, the question raised at the beginning of this subsection has been answered for grouping two sensors. In summary, if the phase difference between the channel responses of the two sensors is in the region \(\left [ 0, \frac {\pi }{2} \right ]\), sensor grouping is beneficial. However, if the phase difference is larger than \(\frac {\pi }{2}\), grouping sensors on the same orthogonal channel may be destructive for channel noise suppression, observation noise suppression, or both, and sensor grouping may give worse performance. Therefore, if the semi-orthogonal MAC is employed with N=2 or N=3 and the whole phase region of 2π is partitioned into N=2 or N=3 equal sub-regions (each of length π or 2π/3), then grouping the sensors with channel phases in the same sub-region to transmit on the same orthogonal channel might not always be beneficial. However, if the semi-orthogonal MAC is used with N≥4 and the whole phase region is partitioned into N equal sub-regions (each of length \(\frac {2 \pi }{N}\)), then grouping the sensors with channel phases in the same sub-region always places mutually constructive sensors in one group, and therefore performance improvement is guaranteed8.

In summary, for N≥4, adaptive sensor grouping is always beneficial and is done such that all sensors whose channel phases fall into the same nth region \(\left [\frac {2 \pi \left (n-1 \right)}{N}, \frac {2 \pi n}{N}\right ]\) transmit on the nth orthogonal channel. This adaptive sensor grouping is analyzed in more detail in the next section.
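A minimal sketch of this adaptive grouping rule (illustrative helper; the function name is ours, and channel phases are assumed known at the FC): each sensor is mapped to the orthogonal channel whose phase sub-region contains its channel phase, so the FC only needs to feed back a \(\log_{2} N\)-bit channel index per sensor.

```python
import numpy as np

def adaptive_groups(h: np.ndarray, N: int) -> np.ndarray:
    """Map each sensor's channel phase in [0, 2*pi) to one of N equal sub-regions;
    each returned entry is the (0-based) orthogonal-channel index."""
    phases = np.angle(h) % (2 * np.pi)                    # wrap phases into [0, 2*pi)
    return np.floor(phases * N / (2 * np.pi)).astype(int)

# sensors with phases 0.1, 2.0, 3.5 and 5.5 rad fall into sub-regions 0..3 for N = 4
h = np.exp(1j * np.array([0.1, 2.0, 3.5, 5.5]))
print(adaptive_groups(h, 4))   # [0 1 2 3]
```

Within one group, any two channel phases then differ by less than 2π/N ≤ π/2 for N≥4, which is exactly the constructive condition derived above.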

4 Performance analysis

In general, with the same number of sensors, different MACs yield different values of \(\mathcal {E} \left (\alpha \right)\) and \(\mathcal {E} \left (\beta \right)\), implying different capabilities of channel noise suppression and observation noise suppression. This section analyzes in detail the estimation performance of the semi-orthogonal MAC under fixed and adaptive sensor grouping and also compares it with the coherent, orthogonal, and hybrid MACs. The analysis and comparison are carried out for equal power allocation, i.e., \(a_{i}=\bar {a}=\sqrt {P_{\text {tot}} / K \left ({\sigma ^{2}_{s}}+{\sigma ^{2}_{v}} \right)}\).

4.1 Orthogonal, coherent, and hybrid MACs

Under equal power allocation, the MMSE distortions obtained with the LMMSE estimator for the orthogonal, coherent, and hybrid MACs can be shown to be:
$$ \epsilon_{\text{orth}} = \left[\sigma^{-2}_{s} + \sum^{K}_{i=1} \frac{\bar{a}^{2} \left| h_{i} \right|^{2} }{ \bar{a}^{2} \left| h_{i} \right|^{2} {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right]^{-1}, $$
(31)
$$ \epsilon_{\text{coh}} = \left[\sigma^{-2}_{s} + \frac{\left(\sum^{K}_{i=1} \bar{a} \left| h_{i} \right| \right)^{2} }{ \left(\sum^{K}_{i=1} \bar{a}^{2} \left| h_{i} \right|^{2} \right) {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right]^{-1}, $$
(32)
and
$$ \epsilon_{\text{hyb}} = \left[\sigma^{-2}_{s} + \sum^{N}_{n=1} \frac{\left(\sum_{i \in \Omega_{n}} \bar{a} \left| h_{i} \right| \right)^{2} }{ \left(\sum_{i \in \Omega_{n}} \bar{a}^{2} \left| h_{i} \right|^{2} \right) {\sigma^{2}_{v}} + \frac{\sigma^{2}_{\omega}}{2}} \right]^{-1}. $$
(33)

For the orthogonal MAC, it is easily seen that \(\mathcal {E}_{\text {orth}} \left (\alpha \right) = \mathcal {E} \left (\sum ^{K}_{i=1} \frac {\left | h_{i} \right |^{2}}{K} \right) = 1\) and \(\mathcal {E}_{\text {orth}} \left (\beta \right) = K\).

For the coherent MAC, \(\mathcal {E}_{\text {coh}}\left (\alpha \right) = \mathcal {E} \left [ \left (\sum ^{K}_{i=1} \frac {\left | h_{i} \right |}{\sqrt {K}} \right)^{2} \right ]\) and \(\mathcal {E}_{\text {coh}} \left (\beta \right) = \mathcal {E} \left [ \frac {\left (\sum ^{K}_{i=1} \frac {\left | h_{i} \right |}{\sqrt {K}} \right)^{2}}{ \frac {1}{K} \left (\sum ^{K}_{i=1} \left | h_{i} \right |^{2} \right)} \right ]\). As K → ∞, according to the central limit theorem, \(\sum ^{K}_{i=1} \frac {\left | h_{i} \right |}{\sqrt {K}}\) is a Gaussian random variable with mean \(\sqrt {K} \mathcal {E} \left (\left | h_{i} \right | \right)\) and variance \(\mathcal {D} \left (\left | h_{i} \right | \right)\). Since |h i | is a Rayleigh-distributed random variable with pdf \(f_{\left | h_{i} \right |} \left (\left | h_{i} \right | \right) = 2 \left | h_{i} \right | \text {exp} \left (- \left | h_{i} \right |^{2} \right)\), it follows that \(\mathcal {E} \left (\left | h_{i} \right | \right) = \sqrt {\frac {\pi }{4}}\) and \(\mathcal {D} \left (\left | h_{i} \right | \right) = 1 - \frac {\pi }{4}\). Thus
$$ \mathcal{E}_{\text{coh}} \left(\alpha \right) = \frac{K \pi}{4} +1 - \frac{\pi}{4} \approx 0.78 K, $$
(34)
$$ \mathcal{E}_{\text{coh}} \left(\beta \right) = \frac{\mathcal{E} \left[ \left(\sum^{K}_{i=1} \frac{\left| h_{i} \right|}{\sqrt{K}} \right)^{2} \right] }{ \mathcal{E} \left(\left| h_{i} \right|^{2} \right)} = \frac{K \pi}{4} +1 - \frac{\pi}{4} \approx 0.78 K. $$
(35)
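Expression (34) is straightforward to confirm by simulation, since \(\alpha_{\text{coh}} = \left(\sum_{i} |h_{i}|/\sqrt{K}\right)^{2}\) with Rayleigh-distributed magnitudes (illustrative Monte Carlo, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
K, trials = 50, 100_000

# |h_i| is Rayleigh with E{|h_i|^2} = 1, i.e. scale parameter sqrt(1/2)
mags = rng.rayleigh(scale=np.sqrt(0.5), size=(trials, K))
alpha_coh = (mags.sum(axis=1) / np.sqrt(K))**2

theory = K * np.pi / 4 + 1 - np.pi / 4        # eq. (34), about 0.78*K
print(alpha_coh.mean(), theory)               # both ≈ 39.5 for K = 50
```

The mean π/4 of |h_i| dominates the sum, which is why coherent combining turns the O(1) value of α under the orthogonal MAC into an O(K) value.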
Similarly, it can be shown for the hybrid MAC that
$$\begin{array}{@{}rcl@{}} \mathcal{E}_{\text{hyb}} \left(\alpha \right) &=& \mathcal{E} \left[ \sum^{N}_{n=1} \frac{1}{N} \left(\sum_{i \in \Omega_{n}} \frac{\left| h_{i} \right|}{\sqrt{K/N}} \right)^{2} \right]\\ &=& \frac{K \pi}{4 N} + 1 - \frac{\pi}{4} \approx 0.78 \frac{K}{N}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathcal{E}_{\text{hyb}} \left(\beta \right) &=& \mathcal{E} \left[ \sum^{N}_{n=1} \frac{\left(\sum_{i \in \Omega_{n}} \frac{\left| h_{i} \right|}{\sqrt{K/N}} \right)^{2}}{ \frac{N}{K} \left(\sum_{i \in \Omega_{n}} \left| h_{i} \right|^{2} \right)} \right] \\ &=& \frac{K \pi}{4} + N \left(1 - \frac{\pi}{4} \right) \approx 0.78 K. \end{array} $$

4.2 Semi-orthogonal MAC with fixed sensor grouping

For the parameter α, one has \(\mathcal {E}_{\mathrm {semi-F}} \left \{ \alpha \right \} = \mathcal {E} \left \{ \sum ^{N}_{n=1} \left | \frac {\sum _{i \in \Omega _{n}} h_{i}}{\sqrt {K}} \right |^{2} \right \} = 1\), which is the same as for the case of the orthogonal MAC.

The expected value of β is difficult to obtain for arbitrary values of K and N. To gain some insight, the pdf of β is obtained by simulation and plotted in Fig. 3 for various values of \(\frac {K}{N}\). The corresponding values of \(\mathcal {E}_{\mathrm {semi-F}}\{\beta \}\) are shown in Table 2. Figure 3 clearly shows that, as long as \(\frac {K}{N}>1\), there is a high probability that the value of β is larger than N=4. Furthermore, the larger the ratio \(\frac {K}{N}\) is, the more likely β takes on a larger value.
Fig. 3

Pdf of parameter β with N=4

Table 2

Values of \(\mathcal {E}_{\mathrm {semi-F}} \left \{ \beta \right \}\) with N=4

K                                             4     8     16    32    64    128
\(\mathcal {E}_{\mathrm {semi-F}}\{\beta\}\)  4     5.3   6.4   7.1   7.5   7.8

When K → ∞, it is shown in Appendix B that β follows a Gamma distribution with parameters a=N and b=2. As a consequence, \(\mathcal {E}_{\mathrm {semi-F}} \left \{ \beta \right \}=2N\). This result means that, for a WSN with a large number of sensors, the semi-orthogonal MAC has twice the observation noise suppression capability of the orthogonal MAC.
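The Gamma(a=N, b=2) limit, and in particular its mean 2N, can be probed by simulating β directly from (26) under fixed grouping with equal gains ā (which cancel out); a sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, trials = 4, 1000, 5000                   # K/N = 250 sensors per group

# i.i.d. CN(0, 1) channels, shape (trial, group, sensor-in-group)
h = (rng.standard_normal((trials, N, K // N)) +
     1j * rng.standard_normal((trials, N, K // N))) / np.sqrt(2)

hn = h.sum(axis=2)                             # equivalent channel of each group
phi = np.angle(hn)
# projection of each h_i onto its group's equivalent-channel direction (eq. (26), equal a_i)
proj = h.real * np.cos(phi)[..., None] + h.imag * np.sin(phi)[..., None]
beta = (np.abs(hn)**2 / (proj**2).sum(axis=2)).sum(axis=1)

print(beta.mean())                             # ≈ 2N = 8 in the large-K limit
```

Consistent with Table 2, the sample mean approaches 2N = 8 from below as K/N grows.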

4.3 Semi-orthogonal MAC with adaptive sensor grouping

As discussed in Section 3.2, adaptive sensor grouping is beneficial when N≥4. To implement the adaptive sensor grouping, the nth sub-region of the phase partition is \([\vartheta_{1}, \vartheta_{2})\), where \(\vartheta _{1} = \frac {2 \pi \left (n-1 \right)}{N}\) and \(\vartheta _{2} = \frac {2 \pi n}{N}\). Let \(h_{i}=x+jy\) and focus on the case in which the phase of \(h_{i}\) falls into the first sub-region \([0,2\pi/N)\). With N equal-length sub-regions of the phase, the probability that the phase of a channel response falls into a specific sub-region is 1/N. Thus, the joint pdf of x and y is simply
$$ f_{x,y} \left(x,y \right) = \frac{N}{\pi} \text{exp} \left[ - \left(x^{2} + y^{2} \right) \right], $$
(36)
where x>0 and \(x \tan\vartheta_{1} < y < x \tan\vartheta_{2}\). Based on this joint pdf, it is straightforward to show that the means and variances of x and y are
$$\begin{aligned} \mu_{x} &= \frac{N}{2 \sqrt{\pi}} \cos \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \sin \left(\frac{\pi}{N} \right),\\ \mu_{y} &= \frac{N}{2 \sqrt{\pi}} \sin \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \sin \left(\frac{\pi}{N} \right), \end{aligned} $$
$$\begin{aligned} {\sigma^{2}_{x}} &= \frac{N}{4 \pi} \cos \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) + \frac{1}{2} - {\mu^{2}_{x}}, \\ {\sigma^{2}_{y}} &= -\frac{N}{4 \pi} \cos \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) + \frac{1}{2} - {\mu^{2}_{y}}, \end{aligned} $$
and
$$ \mathcal{E} \left\{ xy \right\} = \frac{N}{4 \pi} \sin \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right). $$
Let \(\tilde {h}_{n} = \sum _{i \in \Omega _{n}} \frac {h_{i}}{\sqrt {K}} = \tilde {x}_{n} + j \tilde {y}_{n} = m_{n} {\mathrm e}^{j \phi _{n}}\). Then, according to the central limit theorem, as K → ∞, \(\tilde {x}_{n}\) and \(\tilde {y}_{n}\) are i.i.d. Gaussian random variables with means and variances
$$ \mu_{\tilde{x}} = \mu_{\tilde{y}} = \frac{\sqrt{K}}{N} \mu_{x}, \,\, \sigma^{2}_{\tilde{x}} = \sigma^{2}_{\tilde{y}} = \frac{{\sigma_{x}^{2}}}{N}. $$
It then follows that
$${} \begin{aligned} \mathcal{E} \left \{ \alpha_{n} \right \} &= \mathcal{E} \left \{ \tilde{x}^{2}_{n} + \tilde{y}^{2}_{n} \right \} = 2\left(\mu^{2}_{\tilde{x}} + \sigma^{2}_{\tilde{x}}\right) \\&= \frac{2K {\mu^{2}_{x}} }{N^{2}} + \frac{1 - 2 {\mu^{2}_{x}} }{N} = \frac{1}{N} + \frac{\left(K -N \right)}{4 \pi} \sin^{2} \left(\frac{\pi}{N} \right). \end{aligned} $$
(37)
Therefore,
$$ \begin{aligned} \mathcal{E} \left \{ \alpha \right \} &= \sum^{N}_{n=1} \mathcal{E} \left \{ \alpha_{n} \right \} = 1 + \frac{N \left(K - N \right)}{4 \pi} \sin^{2} \left(\frac{\pi}{N} \right)\\ &\approx \left[ \frac{N}{4 \pi} \sin^{2} \left(\frac{\pi}{N} \right) \right] K. \end{aligned} $$
(38)
On the other hand,
$${} {{\begin{aligned} &\mathcal{E} \left\{\beta_{n} \right\}\\ &\ \ = \frac{N \mathcal{E} \left\{\alpha_{n}\right\}}{\left(\! {\mu^{2}_{x}} + {\sigma^{2}_{x}}\! \right)\! \cos^{2}\! \phi_{n} + \left(\! {\mu^{2}_{y}} + {\sigma^{2}_{y}}\! \right)\! \sin^{2}\! \phi_{n} + 2 \mathcal{E}\! \left\{ xy \right \}\! \cos \phi_{n} \sin \phi_{n}} \\& \ \ = \frac{N \mathcal{E} \left \{ \alpha_{n} \right \}}{\kappa}. \end{aligned}}} $$
(39)
As K → ∞, it can be shown that ϕ n can be replaced by \(\frac {\vartheta _{1} + \vartheta _{2}}{2}\) and κ takes on the following value:
$${} {{\begin{aligned} \kappa & = \left[ \frac{N}{4 \pi} \cos \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) + \frac{1}{2} \right] \cos^{2} \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \\ &\quad + \left[ -\frac{N}{4 \pi} \cos \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) + \frac{1}{2} \right] \sin^{2} \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \\ &\quad + 2 \frac{N}{4 \pi} \sin \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) \cos \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \sin \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \\ &= \frac{1}{2} + \frac{N}{4 \pi} \cos \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) \left[ \cos^{2} \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right)\right.\\ &\quad\left.- \sin^{2} \left(\frac{\vartheta_{1} + \vartheta_{2}}{2} \right) \right] \\ &\quad + \frac{N}{4 \pi} \sin^{2} \left(\vartheta_{1} + \vartheta_{2} \right) \sin \left(\frac{2 \pi}{N} \right) \\ &= \frac{1}{2} + \frac{N}{4 \pi} \cos^{2} \left(\vartheta_{1} + \vartheta_{2} \right) \sin\! \left(\! \frac{2 \pi}{N}\! \right)+ \frac{N}{4 \pi} \sin^{2} \left(\vartheta_{1} + \vartheta_{2} \right) \sin\! \left(\! \frac{2 \pi}{N} \!\right) \\ &= \frac{1}{2} + \frac{N}{4 \pi} \sin \left(\frac{2 \pi}{N} \right). \end{aligned}}} $$
(40)
Therefore
$$ \begin{aligned} \mathcal{E} \left\{\beta \right\} &= \sum^{N}_{n=1} \mathcal{E} \left\{\beta_{n} \right\} = \frac{N + \frac{N^{2} \left(K - N \right)}{4 \pi} \sin^{2} \left(\frac{\pi}{N} \right)}{\frac{1}{2} + \frac{N}{4 \pi} \sin \left(\frac{2 \pi}{N} \right)}\\ &\approx \left[ \frac{N^{2} \sin^{2} \left(\frac{\pi}{N} \right)}{2 \pi + N \sin \left(\frac{2 \pi}{N} \right)} \right] K. \end{aligned} $$
(41)
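Evaluating the leading coefficients of (38) and (41) for a few values of N reproduces the numbers quoted in Section 5 and Table 3 (quick numeric check; the 0.16 and 0.78 entries for N=4 follow directly):

```python
import numpy as np

for N in (2, 4, 8):
    c_alpha = N / (4 * np.pi) * np.sin(np.pi / N)**2            # E{alpha}/K, eq. (38)
    c_beta = (N**2 * np.sin(np.pi / N)**2
              / (2 * np.pi + N * np.sin(2 * np.pi / N)))        # E{beta}/K, eq. (41)
    print(N, round(c_alpha, 3), round(c_beta, 3))
# 2 0.159 0.637
# 4 0.159 0.778
# 8 0.093 0.785
```

Note that \(\frac{N}{4\pi}\sin^{2}(\pi/N)\) takes the same value for N=2 and N=4, consistent with the later observation that N=2 and N=4 have almost the same channel noise suppression capability; the text rounds the N=8 value (0.0932) to 0.094.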

5 Numerical results and discussions

Table 3 compares \(\mathcal {E} \left \{ \alpha \right \}\) and \(\mathcal {E} \left \{ \beta \right \}\) among different MACs for a fixed number of sensors, K. To put these numbers in perspective, the number of orthogonal channels, N, and the amount of feedback required by each type of MAC are also indicated in the table. The theoretical and simulation results of \(\mathcal {E} \left \{ \alpha \right \}\) and \(\mathcal {E} \left \{ \beta \right \}\) are plotted in Figs. 4 and 5, respectively. Observe that when K is large enough, the theoretical results agree very well with the simulation results. For small K, the simulation result is better than the theoretical result for \(\mathcal {E} \left \{ \alpha \right \}\) of the semi-orthogonal MAC with fixed sensor grouping. As for \(\mathcal {E} \left \{ \beta \right \}\), there are differences between the theoretical and simulation results of the hybrid MAC, and the semi-orthogonal MAC (with either fixed or adaptive sensor grouping). This observation suggests that for these three MACs, a sufficiently large number of sensors is required to achieve the asymptotic performance.
Fig. 4

Simulation and theoretical results of \(\mathcal {E} \left \{ \alpha \right \}\)

Fig. 5

Simulation and theoretical results of \(\mathcal {E} \left \{ \beta \right \}\)

Table 3

Asymptotic performance in terms of \(\mathcal {E} \left \{ \alpha \right \}\) and \(\mathcal {E} \left \{ \beta \right \}\)

Type of MAC                        \(\mathcal{E}\{\alpha\}\)   \(\mathcal{E}\{\beta\}\)   Number of orthogonal channels, N   Required feedback
Coherent                           0.78 K                      0.78 K                     1                                  Exact channel phase
Orthogonal                         1                           K                          K                                  None
Hybrid (N=4)                       0.20 K                      0.78 K                     4                                  Exact channel phase
Semi-orthogonal, fixed (N=4)       1                           8                          4                                  None
Semi-orthogonal, adaptive (N=4)    0.16 K                      0.78 K                     4                                  \(\log_{2} N\) bits

For the semi-orthogonal MAC with adaptive sensor grouping and N=4, as K → ∞, \(\mathcal {E} \left \{ \alpha \right \}\) and \(\mathcal {E} \left \{ \beta \right \}\) increase on the order of K, and thus the average MSE distortion goes to zero. The same behavior is exhibited by both the coherent and hybrid MACs. However, for the orthogonal MAC and the semi-orthogonal MAC with fixed sensor grouping, the average MSE distortion converges to a fixed value as K increases. This is because \(\mathcal {E} \left \{ \alpha \right \}=1\), regardless of K, for these two MACs.

The semi-orthogonal MAC with adaptive sensor grouping can achieve the same performance at low \(\gamma_{c}\), and even better performance at high \(\gamma_{c}\), as compared to the coherent MAC. However, the semi-orthogonal MAC requires N=4 times the number of orthogonal channels and about five times the number of sensors. On the other hand, it does not require channel phase information feedback. Furthermore, the semi-orthogonal MAC with adaptive sensor grouping performs very close to the hybrid MAC. According to the simulation results in Fig. 4, in terms of \(\mathcal {E} \left \{ \alpha \right \}\), the semi-orthogonal MAC is better for small K but worse for large K; at about K=16, the two MACs have the same \(\mathcal {E} \left \{ \alpha \right \}\). In terms of \(\mathcal {E} \left \{ \beta \right \}\), the semi-orthogonal MAC performs nearly the same as the hybrid MAC for all values of K. Again, it is important to point out that channel phase information feedback is needed in the hybrid MAC.

It is of interest to investigate the impact of the number of orthogonal channels N on the estimation performance under the semi-orthogonal MAC with adaptive sensor grouping. To this end, the theoretical quantities \(\frac {\mathcal {E} \left \{ \alpha \right \}}{K}\) and \(\frac {\mathcal {E} \left \{ \beta \right \}}{K}\) are plotted versus N for a sufficiently large K (K=128N) in Fig. 6, where simulation results are also provided to verify the theoretical derivations. As can be seen, as N increases from 4, \(\frac {\mathcal {E} \left \{ \alpha \right \}}{K}\) decreases while \(\frac {\mathcal {E} \left \{ \beta \right \}}{K}\) stays nearly the same. Therefore, with a fixed K, increasing N, which means more orthogonal channels with fewer sensors transmitting on each, degrades the channel noise suppression capability while leaving the observation noise suppression capability practically unchanged. The degradation of the channel noise suppression capability with more orthogonal channels is reasonable: with more orthogonal channels, the FC needs to collect and process a larger number of received signal samples, each disturbed by an independent AWGN component, resulting in a larger overall noise power. On the other hand, the observation noise suppression capability is determined only by the number of sensors, independent of the number of orthogonal channels.
Fig. 6

Plots of \(\frac {\mathcal {E} \left \{ \alpha \right \}}{K}\) and \(\frac {\mathcal {E} \left \{ \beta \right \}}{K}\), by simulation and theoretical analysis

It is pointed out that Fig. 6 also provides results for N=2. Compared to N=4, although N=2 uses fewer orthogonal channels and thus incurs a smaller overall noise power, it achieves almost the same channel noise suppression capability as N=4. In addition, the observation noise suppression capability for N=2 is much weaker than that for N=4. These phenomena are consistent with the analysis in Section 3.2. In general, N=4 is the best choice for the semi-orthogonal MAC with adaptive sensor grouping, as it achieves the largest performance improvement while requiring the least transmission bandwidth.

The quantities \(\frac {\mathcal {E}\left \{\alpha \right \}}{K}\) and \(\frac {\mathcal {E} \left \{\beta \right \} }{K}\) are also plotted in Fig. 6 for the hybrid MAC. The advantage of the hybrid MAC over the semi-orthogonal MAC with adaptive sensor grouping is most obvious for N=2. Again, this is because with N=2, destructive superposition of signals from two sensors can happen in the semi-orthogonal MAC, while it never happens in the hybrid MAC. As N increases, the sub-regions of channel phases become narrower, and the direct superposition behaves more and more like coherent combination. For N=8, the two MACs have nearly the same performance.

Regarding the bandwidth requirement (in terms of the number of orthogonal channels), the hybrid and semi-orthogonal MACs are much more efficient than the orthogonal MAC. The coherent MAC is the most bandwidth efficient since only one channel is used. For the orthogonal MAC and the semi-orthogonal MAC with fixed sensor grouping, no feedback of channel phases from the FC to the sensors is required. For the coherent MAC, due to the requirement of coherent combination among sensors, channel phases need to be transmitted from the FC to the sensors. The exact number of bits used for such information feedback depends on the capability of the feedback channel and the required accuracy, but it is certainly a large amount of overhead. For the hybrid MAC, because coherent combination among the sensors in each group is required, the amount of channel information feedback from the FC to the sensors is the same as that of the coherent MAC. For the semi-orthogonal MAC with adaptive sensor grouping, the FC needs to send only \(\log_{2} N\) bits to inform each sensor of the orthogonal channel to transmit on.

The simulation results of the average MSE achieved by the five MACs under comparison are plotted in Fig. 7. When K=N=4, the coherent MAC obviously outperforms the other four MACs at low \(\gamma_{c}\), which is due to its outstanding channel noise suppression capability. In this case, the hybrid MAC and the semi-orthogonal MAC with fixed sensor grouping are equivalent to the orthogonal MAC. The semi-orthogonal MAC with adaptive sensor grouping performs slightly better than the orthogonal MAC.
Fig. 7

Comparison of the average MSE distortions among five different MACs. Note that in the figure’s legend, “Semi-F” and “Semi-A” mean the proposed semi-orthogonal MAC with fixed and adaptive sensor grouping strategies, respectively

With K increasing from K=4 to K=16, the performance improvements are significant, except for the semi-orthogonal MAC with fixed sensor grouping. In particular, the performance of the semi-orthogonal MAC with adaptive sensor grouping is the same as that of the hybrid MAC, which is consistent with the theoretical derivation, and it is between those of the orthogonal MAC and the coherent MAC.

When K is further increased to K=80, the performance of the semi-orthogonal MAC with fixed sensor grouping stays nearly the same as with K=16. On the other hand, the performance of the semi-orthogonal MAC with adaptive sensor grouping improves significantly: with K=80, it achieves the same (at low \(\gamma_{c}\)) or even better (at high \(\gamma_{c}\)) performance compared to the coherent MAC with K=16. In addition, with K=80, the hybrid MAC only slightly outperforms the semi-orthogonal MAC at low \(\gamma_{c}\). All the simulation results match the theoretical analysis presented earlier.

Finally, the average MSE performance of the semi-orthogonal MAC with adaptive sensor grouping is compared for N=4 and N=8. As shown in Fig. 8, at low \(\gamma_{c}\), for the network with K=80, using N=8 cannot achieve the same performance as using N=4. If K increases to 140 for N=8, then the performance is the same as that of N=4 with K=80. This is consistent with the previous theoretical and simulation results, namely \(\frac {\mathcal {E} \left \{\alpha \right \}}{K} \approx 0.16\) for N=4 and \(\frac {\mathcal {E} \left \{\alpha \right \}}{K} \approx 0.094\) for N=8.
Fig. 8

Performance comparison in terms of the average MSE for N>4
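The value K=140 quoted above can be checked with a quick calculation: matching the average observation noise suppression of the two configurations amounts to matching \(\mathcal {E} \left \{\alpha \right \} = \left (\mathcal {E} \left \{\alpha \right \}/K \right) \cdot K\). A minimal numerical check using the two ratios quoted in the text:

```python
# Effective E{alpha} = (E{alpha}/K) * K for the two configurations,
# using the ratios quoted in the text (0.16 for N=4, 0.094 for N=8)
e_alpha_n4 = 0.16 * 80     # N=4, K=80
e_alpha_n8 = 0.094 * 140   # N=8, K=140
print(e_alpha_n4, e_alpha_n8)  # both close to 13
```

The two effective values nearly coincide (about 12.8 versus 13.2), which is why K=140 with N=8 reproduces the performance of K=80 with N=4.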

6 Conclusions

For WSNs consisting of a sufficiently large number of sensors but operating under limited bandwidth resource, a novel semi-orthogonal multiple access scheme was proposed for transmission from K sensors to the FC over N orthogonal channels, where \(K \gg N\). The paper thoroughly analyzed the performance of distributed estimation over such a semi-orthogonal MAC with either fixed or adaptive sensor grouping and compared it with the performance achieved by other related MACs. Compared to the orthogonal MAC operating under the same bandwidth, the semi-orthogonal MAC with fixed sensor grouping has the same channel noise suppression capability, but twice the observation noise suppression capability as K approaches infinity. This is achieved without requiring any information feedback from the FC to the sensors. For the semi-orthogonal MAC with adaptive sensor grouping, it was determined that N=4 is the most favorable number of orthogonal channels when taking into account both performance and feedback requirement. In particular, the semi-orthogonal MAC with adaptive sensor grouping was shown to perform very close to the hybrid MAC under the same bandwidth and number of sensors, while requiring only two bits of information feedback instead of the exact channel phase for each sensor.

The present paper considers estimating a single source signal. In general, when the number of sources increases, the amount of information to transmit from the sensors to the FC increases, which translates to a larger transmission bandwidth, or equivalently a larger number of orthogonal channels. If the sources are uncorrelated, the proposed semi-orthogonal transmission framework can be applied individually to each source signal. However, if the source signals are correlated, the correlation information should be exploited in the development of joint semi-orthogonal multiple access schemes; this is left for future research.

7 Endnotes

1 To be consistent with existing literature, the term MAC is also used in this paper, although it is more appropriate to use the term “multiple access scheme” when discussing different communication methods between the sensors and FC.

2 The definition and meaning of equivalent channel responses will be made clearer in Section 3.

3 The interested reader is referred to [28] for a novel power allocation solution under the semi-orthogonal MAC, which is shown to improve the estimation performance when compared to equal power allocation, especially at low channel signal-to-noise ratios.

4 For complex scalars, vectors and matrices, \(\mathcal {R} \left \{ \cdot \right \}\) denotes the real part and \(\mathcal {I} \left \{ \cdot \right \}\) denotes the imaginary part.

5 This allocation is similar to the transmission in overloaded code-division multiple access (CDMA) systems [29, 30] if one views vector g (i) as the signature vector of sensor i.

6 For random variables, \(\mathcal {E} \left \{ \cdot \right \}\) and \(\mathcal {D} \left \{ \cdot \right \}\) denote expectation and variance, respectively.

7 If the channel response of the added sensor has magnitude larger than 1, then it can be taken as the first sensor and the other sensor is taken as the added sensor.

8 Since the phases of wireless channel responses are modeled as uniform, one does not expect any benefit to use non-uniform phase partitions.

8 Appendix A

To obtain the expression for β, first one has (recall (18) for the definition of θ n,l )
$$\begin{array}{@{}rcl@{}} \theta_{1,2} &=& {\sigma^{2}_{v}} \bar{a}^{2} \sum^{K}_{i=1} g^{(i)}_{1} \left(\mathcal{R} \left \{ h_{i} \right \} \cos \phi_{1} + \mathcal{I} \left \{ h_{i} \right \} \sin \phi_{1} \right) g^{(i)}_{2} \\ &&\times\left(\mathcal{R} \left \{ h_{i} \right \} \cos \phi_{2} + \mathcal{I} \left \{ h_{i} \right \} \sin \phi_{2} \right) \\ &=& \frac{{\sigma^{2}_{v}} P_{\text{tot}}}{{\sigma^{2}_{s}}+{\sigma^{2}_{v}}} \left[ \cos \phi_{1} \cos \phi_{2} \left(\sum\limits^{K}_{i=1} \frac{g^{(i)}_{1} g^{(i)}_{2}}{2 K_{1}} \mathcal{R}^{2} \left \{ h_{i} \right \} \right)\right. \\ &&\left. + \sin \phi_{1} \sin \phi_{2} \left(\sum\limits^{K}_{i=1} \frac{g^{(i)}_{1} g^{(i)}_{2}}{2 K_{1}} \mathcal{I}^{2} \left \{ h_{i} \right\} \right) \right. \\ &&+ \left(\cos \phi_{1} \sin \phi_{2} + \sin \phi_{1} \cos \phi_{2} \right) \\ &&\times\left.\left(\sum\limits^{K}_{i=1} \frac{g^{(i)}_{1} g^{(i)}_{2}}{2 K_{1}} \mathcal{R} \left\{h_{i} \right\} \mathcal{I} \left\{ h_{i} \right \} \right) \right]. \end{array} $$
(42)
When K and K 1 approach infinity, one has \(\sum ^{K}_{i=1} \frac {g^{(i)}_{1} g^{(i)}_{2}}{2 K_{1}} \mathcal {R}^{2} \left \{ h_{i} \right \} = \frac {M}{2 K_{1}} \mathcal {E} \left \{ \mathcal {R}^{2} \left \{ h_{i} \right \} \right \} = \frac {\rho }{2} \frac {1}{2} = \frac {\rho }{4}\), \(\sum ^{K}_{i=1} \frac {g^{(i)}_{1} g^{(i)}_{2}}{2 K_{1}} \mathcal {I}^{2} \left \{ h_{i} \right \} = \frac {\rho }{4}\), and \(\sum ^{K}_{i=1} \frac {g^{(i)}_{1} g^{(i)}_{2}}{2 K_{1}} \mathcal {R} \left \{ h_{i} \right \} \mathcal {I} \left \{ h_{i} \right \} = 0\). It then follows that
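These limiting sums can be verified with a short Monte Carlo experiment. The sketch below assumes binary signature entries, so that \(g^{(i)}_{1} g^{(i)}_{2} = 1\) exactly for the M sensors shared by the two groups (and 0 otherwise), with \(\rho = M/K_{1}\) and \(h_{i} \sim \mathcal{CN}(0,1)\); these modeling details are inferred from the sums above, not taken verbatim from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K1 = 100_000       # sensors per group (illustrative)
M = 40_000         # sensors shared by groups 1 and 2 (illustrative)
rho = M / K1       # so that M / (2*K1) = rho / 2, as in the text

# h_i ~ CN(0,1); only the M shared sensors contribute g1*g2 = 1
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

s_rr = np.sum(h.real**2) / (2 * K1)        # -> rho/4, since E{R^2{h_i}} = 1/2
s_ii = np.sum(h.imag**2) / (2 * K1)        # -> rho/4
s_ri = np.sum(h.real * h.imag) / (2 * K1)  # -> 0 (real/imaginary parts independent)
print(s_rr, s_ii, s_ri, rho / 4)
```

For large M and K 1 the three empirical sums land within sampling error of ρ/4, ρ/4, and 0, respectively.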
$$ \begin{aligned} \theta_{1,2} &= \frac{{\sigma^{2}_{v}} P_{\text{tot}}}{{\sigma^{2}_{s}}+{\sigma^{2}_{v}}} \frac{\rho}{4} \left(\cos \phi_{1} \cos \phi_{2} + \sin \phi_{1} \sin \phi_{2} \right)\\ &= \frac{{\sigma^{2}_{v}} P_{\text{tot}}}{{\sigma^{2}_{s}}+{\sigma^{2}_{v}}} \frac{\rho}{4} \cos \left(\phi_{1} - \phi_{2} \right). \end{aligned} $$
(43)
Similarly, one can show that \(\theta _{2,1} = \theta _{1,2} = \frac {{\sigma ^{2}_{v}} P_{\text {tot}}}{{\sigma ^{2}_{s}}+{\sigma ^{2}_{v}}} \frac {\rho }{4} \cos \left (\phi _{1} - \phi _{2} \right)\) and \(\theta _{1,1} = \theta _{2,2} = \frac {{\sigma ^{2}_{v}} P_{\text {tot}}}{4 \left ({\sigma ^{2}_{s}}+{\sigma ^{2}_{v}} \right)} \). Therefore,
$$\begin{array}{@{}rcl@{}} \beta &=& \left[ \begin{array}{cc} \left| \tilde{h}_{1} \right| & \left| \tilde{h}_{2} \right| \end{array} \right] \left[ \begin{array}{cc} \frac{1}{4} & \frac{\rho \cos \left(\phi_{1} - \phi_{2} \right) }{4} \\ \frac{\rho \cos \left(\phi_{1} - \phi_{2} \right) }{4} & \frac{1}{4} \end{array} \right]^{-1} \left[ \begin{array}{c} \left| \tilde{h}_{1} \right| \\ \left| \tilde{h}_{2} \right| \end{array} \right] \\ &=& \frac{2 \left[ {m^{2}_{1}} - 2 \rho \cos \left(\phi_{1} - \phi_{2} \right) m_{1} m_{2} + {m^{2}_{2}} \right]}{1 - \rho^{2} \cos^{2} \left(\phi_{1} - \phi_{2} \right)}. \end{array} $$
(44)
Let \(m_{1} \mathrm {e}^{j \phi _{1}} = r_{1} + j t_{1}\) and \(m_{2} \mathrm {e}^{j \phi _{2}} = r_{2} + j t_{2}\). Then \(m_{1} = \sqrt {{r^{2}_{1}} + {t^{2}_{1}}}\), \(m_{2} = \sqrt {{r^{2}_{2}} + {t^{2}_{2}}}\), \(\phi _{1} = \arctan \left (\frac {t_{1}}{r_{1}} \right)\), \(\phi _{2} = \arctan \left (\frac {t_{2}}{r_{2}} \right)\). According to the central limit theorem, \(m_{1} \mathrm {e}^{j \phi _{1}}\) and \(m_{2} \mathrm {e}^{j \phi _{2}}\) are two complex Gaussian random variables with zero mean and unit variance. Furthermore, the correlation coefficient between \(m_{1} \mathrm {e}^{j \phi _{1}}\) and \(m_{2} \mathrm {e}^{j \phi _{2}}\) is ρ. Thus the joint pdf of r 1, r 2, t 1 and t 2 is
$${} \begin{aligned} &f \left(r_{1},t_{1},r_{2},t_{2} \right)\\ &\qquad\ = c^{2} \text{exp} \left\{ -\frac{{r^{2}_{1}} + {t^{2}_{1}} - 2 \rho \left(r_{1} r_{2} + t_{1} t_{2} \right) + {r^{2}_{2}} + {t^{2}_{2}}}{1-\rho^{2}} \right\},\\ &\qquad c = \frac{1}{\pi \sqrt{1-\rho^{2}}}. \end{aligned} $$
(45)
Also,
$$\begin{array}{@{}rcl@{}} J &=& \left| \begin{array}{cccc} \frac{\partial m_{1}}{\partial r_{1}} & \frac{\partial m_{1}}{\partial t_{1}} & \frac{\partial m_{1}}{\partial r_{2}} & \frac{\partial m_{1}}{\partial t_{2}} \\ \frac{\partial m_{2}}{\partial r_{1}} & \frac{\partial m_{2}}{\partial t_{1}} & \frac{\partial m_{2}}{\partial r_{2}} & \frac{\partial m_{2}}{\partial t_{2}} \\ \frac{\partial \phi_{1}}{\partial r_{1}} & \frac{\partial \phi_{1}}{\partial t_{1}} & \frac{\partial \phi_{1}}{\partial r_{2}} & \frac{\partial \phi_{1}}{\partial t_{2}} \\ \frac{\partial \phi_{2}}{\partial r_{1}} & \frac{\partial \phi_{2}}{\partial t_{1}} & \frac{\partial \phi_{2}}{\partial r_{2}} & \frac{\partial \phi_{2}}{\partial t_{2}} \\ \end{array} \right| = \left| \begin{array}{cccc} \frac{r_{1}}{m_{1}} & \frac{t_{1}}{m_{1}} & 0 & 0 \\ 0 & 0 & \frac{r_{2}}{m_{2}} & \frac{t_{2}}{m_{2}} \\ -\frac{t_{1}}{{m^{2}_{1}}} & \frac{r_{1}}{{m^{2}_{1}}} & 0 & 0 \\ 0 & 0 & -\frac{t_{2}}{{m^{2}_{2}}} & \frac{r_{2}}{{m^{2}_{2}}} \\ \end{array} \right|\\ &=& -\frac{1}{m_{1} m_{2}}. \end{array} $$
(46)
It then follows that
$${} \begin{aligned} &f \left(m_{1},m_{2},\phi_{1},\phi_{2} \right) = \frac{f \left(r_{1},t_{1},r_{2},t_{2} \right)}{\left| J \right|} \\ & = c^{2} m_{1} m_{2} \text{exp} \left \{ -\frac{{m^{2}_{1}} - 2 \rho m_{1} m_{2} \cos \left(\phi_{1} - \phi_{2} \right) + {m^{2}_{2}}}{1-\rho^{2}} \right \}. \end{aligned} $$
(47)
Let x= cos(ϕ 1ϕ 2) and y=ϕ 2. Then
$$\begin{array}{@{}rcl@{}} J &=& \left| \begin{array}{cc} \frac{\partial x}{\partial \phi_{1}} & \frac{\partial x}{\partial \phi_{2}} \\ \frac{\partial y}{\partial \phi_{1}} & \frac{\partial y}{\partial \phi_{2}} \\ \end{array} \right| = \left| \begin{array}{cc} -\sin \left(\phi_{1} - \phi_{2} \right) & \sin \left(\phi_{1} - \phi_{2} \right) \\ 0 & 1 \\ \end{array} \right|\\ &=& -\sin \left(\phi_{1} - \phi_{2} \right), \end{array} $$
$${} \begin{aligned} f\! \left(\! m_{1},m_{2},x,y\! \right)&= \frac{2 f \left(m_{1},m_{2},\phi_{1},\phi_{2} \right)}{\left| J \right|}\\ &= \frac{2 c^{2} m_{1} m_{2}}{\sqrt{1-x^{2}}} \text{exp}\! \left \{\! -\frac{{m^{2}_{1}} - 2 \rho m_{1} m_{2} x + {m^{2}_{2}}}{1-\rho^{2}}\! \right \}\!, \end{aligned} $$
(48)
and
$${} \begin{aligned} f\! &\left(m_{1},m_{2},x \right)\\ &\quad= 2 \pi f \left(m_{1},m_{2},x,y \right) \\ &\quad= \frac{4 m_{1} m_{2}}{\pi \left(1-\rho^{2} \right) \sqrt{1-x^{2}}} \text{exp}\! \left \{\! -\frac{{m^{2}_{1}} - 2 \rho m_{1} m_{2} x + {m^{2}_{2}}}{1-\rho^{2}}\! \right \}\!. \end{aligned} $$
(49)
Next,
$${} {{\begin{aligned} \mathcal{E}\!\left \{ \beta \right \}& = \mathcal{E} \left \{ \frac{2 \left[ {m^{2}_{1}} - 2 \rho \cos \left(\phi_{1} - \phi_{2} \right) m_{1} m_{2} + {m^{2}_{2}} \right]}{1 - \rho^{2} \cos^{2} \left(\phi_{1} - \phi_{2} \right)} \right \} \\ &= \frac{8}{\pi \left(1-\rho^{2} \right)} \times \int^{\infty}_{0} \int^{\infty}_{0} \int^{1}_{-1} \frac{\left({m^{2}_{1}} - 2 \rho x m_{1} m_{2} + {m^{2}_{2}} \right) m_{1} m_{2}}{ \left(1-\rho^{2} x^{2} \right) \sqrt{1-x^{2}}} \\ &\quad\times \text{exp} \left \{ -\frac{{m^{2}_{1}} - 2 \rho m_{1} m_{2} x + {m^{2}_{2}}}{1-\rho^{2}} \right \} {\mathrm d} x {\mathrm d} m_{1} {\mathrm d} m_{2} \\ & =\frac{8}{\pi \left(1-\rho^{2} \right)} \int^{1}_{-1} \frac{1}{ (1-\rho^{2} x^{2})\sqrt{1-x^{2}}}\\ &\quad\times\int^{\infty}_{0} m_{2} \text{exp} \left \{ -\frac{\left(1-\rho^{2} x^{2} \right) {m^{2}_{2}} }{1-\rho^{2}} \right \} \Lambda \left(x,m_{2} \right) {\mathrm d} m_{2} {\mathrm d} x, \end{aligned}}} $$
(50)
where
$$\begin{array}{@{}rcl@{}} \Lambda \left(x,m_{2} \right) & = & \int^{\infty}_{0} \left[ (m_{1}-\rho x m_{2})^{2}+(1-\rho^{2} x^{2}){m_{2}^{2}} \right]\\&&\times\ m_{1} \text{exp} \left \{ -\frac{\left(m_{1} - \rho m_{2} x \right)^{2} }{1-\rho^{2}} \right \} {\mathrm d} m_{1} \\ &=& \int^{\infty}_{-\rho m_{2} x} \left[ \begin{array}{l} y^{3} + \rho x m_{2} y^{2} + \left(1 - \rho^{2} x^{2} \right){m^{2}_{2}} y\\ + \rho x \left(1 - \rho^{2} x^{2} \right) {m_{2}^{3}} \end{array} \right] \\&&\times\ \text{exp} \left \{ -\frac{y^{2} }{1-\rho^{2}} \right \} {\mathrm d} y. \end{array} $$
(51)
If −1<x<0, then
$${} \begin{aligned} \Lambda\! \left(x,m_{2} \right)&= \int^{\infty}_{-\rho m_{2} x} \left[ \begin{array}{l} y^{3} + \rho x m_{2} y^{2} + \left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} y\\ + \rho x \left(1 - \rho^{2} x^{2} \right) {m_{2}^{3}} \end{array} \right]\\ &\quad\times\text{exp} \left \{ -\frac{y^{2} }{1-\rho^{2}} \right \} {\mathrm d} y \\ &= \frac{\left(1-\rho^{2} \right)^{2} }{2} \Gamma \left(2, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right)\\ &\quad + \frac{\rho x m_{2} \left(1 - \rho^{2} \right)^{\frac{3}{2}}}{2} \Gamma \left(\frac{3}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ &\quad + \frac{\left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} \left(1-\rho^{2} \right) }{2} \Gamma \left(1, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ &\quad + \frac{\rho x\! \left(\! 1 - \rho^{2} x^{2} \right)\! {m^{3}_{2}} \left(1-\rho^{2} \right)^{\frac{1}{2}} }{2} \Gamma\! \left(\frac{1}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right)\!. \end{aligned} $$
(52)
If 0<x<1, then
$$\begin{array}{@{}rcl@{}} \Lambda\! \left(x,m_{2} \right)&=& \int_{0}^{\rho m_{2} x} \left[ \begin{array}{l} y^{3} + \rho x m_{2} y^{2} + \left(1 - \rho^{2} x^{2} \right){m^{2}_{2}} y\\ + \rho x \left(1 - \rho^{2} x^{2} \right) {m_{2}^{3}} \end{array} \right]\\ &&\times\ \text{exp} \left \{ -\frac{y^{2} }{1-\rho^{2}} \right \} {\mathrm d} y \\ &=& \frac{\left(1-\rho^{2} \right)^{2} }{2} \Gamma \left(2, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right)\\ && + \frac{\rho x m_{2} \left(1 - \rho^{2} \right)^{\frac{3}{2}}}{2} \Gamma \left(\frac{3}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + \frac{\left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} \left(1-\rho^{2} \right) }{2} \Gamma \left(1, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + \frac{\rho x \left(1 - \rho^{2} x^{2} \right) {m^{3}_{2}} \left(1-\rho^{2} \right)^{\frac{1}{2}} }{2} \Gamma\!\! \left(\! \frac{1}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}}\! \right) \\ & & + \rho x m_{2} \left(1-\rho^{2} \right)^{\frac{3}{2}} \gamma \left(\frac{3}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + \rho x\! \left(\! 1 - \rho^{2} x^{2} \right)\! {m^{3}_{2}} \left(1-\rho^{2} \right)^{\frac{1}{2}} \gamma\!\! \left(\! \frac{1}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right).\\ \end{array} $$
(53)
The functions γ(a,x) and Γ(a,x) are incomplete gamma functions [31]. Since \( x \Gamma \left (\frac {3}{2}, \frac {\rho ^{2} x^{2} {m^{2}_{2}}}{1- \rho ^{2}} \right)\) and \( x \left (1 -\rho ^{2} x^{2} \right) \Gamma \left (\frac {1}{2}, \frac {\rho ^{2} x^{2} {m^{2}_{2}}}{1- \rho ^{2}} \right)\) are odd functions of x and the integral with respect to x is from −1 to 1, the two terms integrate to zero. Then one has
$$\begin{array}{@{}rcl@{}} \mathcal{E} \left \{ \beta \right \}&=& \frac{4}{\pi \left(1-\rho^{2} \right)} \int^{0}_{-1}\frac{1}{(1-\rho^{2} x^{2})\sqrt{1-x^{2}}} \int^{\infty}_{0} m_{2} \text{exp} \\ & &\times \left\{-\frac{\left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} }{1-\rho^{2}} \right \} \Lambda_{-} \left(x,m_{2} \right) {\mathrm d} m_{2} {\mathrm d} x \\ & & + \frac{4}{\pi \left(1-\rho^{2} \right)} {\int^{1}_{0}} \frac{1}{ (1-\rho^{2} x^{2}) \sqrt{1-x^{2}}} \int^{\infty}_{0} m_{2} \text{exp} \\ & &\times \left \{ -\frac{\left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} }{1-\rho^{2}} \right \} \Lambda_{+} \left(x,m_{2} \right) {\mathrm d} m_{2} {\mathrm d} x, \\ \end{array} $$
(54)
where
$$\begin{array}{@{}rcl@{}} \Lambda_{-} \left(x,m_{2} \right)&=& \frac{\left(1-\rho^{2} \right)^{2} }{2} \Gamma \left(2, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + \frac{\left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} \left(1-\rho^{2} \right) }{2} \Gamma \left(1, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right), \\ \end{array} $$
(55)
$$\begin{array}{@{}rcl@{}} &\!\!\!\!\!\Lambda_{+}& \left(x,m_{2} \right) \\ &=& \frac{\left(1-\rho^{2} \right)^{2} }{2} \Gamma \left(2, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + \frac{\left(1 - \rho^{2} x^{2} \right) {m^{2}_{2}} \left(1-\rho^{2} \right) }{2} \Gamma \left(1, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + 2 \rho x m_{2} \left(1-\rho^{2} \right)^{\frac{3}{2}} \gamma \left(\frac{3}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) \\ & & + 2\rho x \left(1 - \rho^{2} x^{2} \right) {m^{3}_{2}} \left(1-\rho^{2} \right)^{\frac{1}{2}} \gamma \left(\frac{1}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right). \\ \end{array} $$
(56)
One can also compute
$${} \begin{aligned} &\int^{\infty}_{0} m_{2} \text{exp} \left \{ -\frac{\left(1- \rho^{2} x^{2} \right) {m^{2}_{2}}}{1- \rho^{2}} \right \} \Gamma \left(2, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) {\mathrm d} m_{2}\\ &= \int^{\infty}_{0} m_{2} \text{exp} \left \{ -\frac{\left(1- \rho^{2} x^{2} \right) {m^{2}_{2}}}{1- \rho^{2}} \right \} \int^{\infty}_{\frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}}} \mathrm{e}^{-t} t {\mathrm d} t {\mathrm d} m_{2} \\ &= \int^{\infty}_{0} \mathrm{e}^{-t} t \int^{\frac{\sqrt{\left(1-\rho^{2} \right) t} }{\rho \left| x \right|} }_{0} m_{2} \text{exp} \left \{ -\frac{\left(1- \rho^{2} x^{2} \right) {m^{2}_{2}}}{1- \rho^{2}} \right \} {\mathrm d} m_{2} {\mathrm d} t \\ &= \frac{1 - \rho^{2} }{ 2 \left(1- \rho^{2} x^{2} \right)} \int^{\infty}_{0} \mathrm{e}^{-t} t \gamma \left(1, \frac{\left(1-\rho^{2} x^{2} \right) t }{\rho^{2} x^{2}} \right) {\mathrm d} t \\ &= \frac{1 - \rho^{2} }{ 2 \left(1- \rho^{2} x^{2} \right)} \frac{1 - \rho^{2} x^{2} }{ \rho^{2} x^{2}} \Gamma \left(3 \right) \left(1 + \frac{1 - \rho^{2} x^{2} }{ \rho^{2} x^{2}} \right)^{-3}\\ &\quad\times\mathrm{F} \left(1,3,2,1-\rho^{2} x^{2} \right) \\ &= \left(1 - \rho^{2} \right) \left(\rho^{2} x^{2} \right)^{2} \mathrm{F} \left(1,3,2,1-\rho^{2} x^{2} \right), \end{aligned} $$
(57)
and similarly,
$$\begin{array}{@{}rcl@{}} \int^{\infty}_{0}\!\! &m_{2}\text{exp} \left\{ -\frac{\left(1- \rho^{2} x^{2} \right) {m^{2}_{2}}}{1- \rho^{2}} \right\} {m^{2}_{2}} \Gamma \left(1, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) {\mathrm d} m_{2}\\ &= \frac{\left(1 - \rho^{2} \right)^{2} \left(\rho^{2} x^{2} \right) }{2} \mathrm{F} \left(1,3,3,1-\rho^{2} x^{2} \right), \end{array} $$
(58)
$$\begin{array}{@{}rcl@{}} \int^{\infty}_{0}\!\! &m_{2} \text{exp} \left \{ -\frac{\left(1- \rho^{2} x^{2} \right) {m^{2}_{2}}}{1- \rho^{2}} \right \} m_{2} \gamma \left(\frac{3}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) {\mathrm d} m_{2} \\ &=\frac{2 \left(1 - \rho^{2} \right)^{\frac{3}{2}} \left(\rho^{2} x^{2} \right)^{\frac{3}{2}}}{3} \mathrm{F} \left(1,3,\frac{5}{2},\rho^{2} x^{2} \right), \end{array} $$
(59)
$$\begin{array}{@{}rcl@{}} \int^{\infty}_{0}\!\! &m_{2} \text{exp} \left \{ -\frac{\left(1- \rho^{2} x^{2} \right) {m^{2}_{2}}}{1- \rho^{2}} \right \} {m^{3}_{2}} \gamma \left(\frac{1}{2}, \frac{\rho^{2} x^{2} {m^{2}_{2}}}{1- \rho^{2}} \right) {\mathrm d} m_{2} \\ &=2 \left(1 - \rho^{2} \right)^{\frac{5}{2}} \left(\rho^{2} x^{2} \right)^{\frac{1}{2}} \mathrm{F} \left(1,3,\frac{3}{2},\rho^{2} x^{2} \right), \end{array} $$
(60)
where F(α,β,γ,z) is the Gauss hypergeometric function [31]. Thus
$$\begin{array}{@{}rcl@{}} \mathcal{E}\left\{\beta \right\} &=& \frac{4 \left(1-\rho^{2} \right)^{2}}{\pi} {\int^{1}_{0}} \frac{\left(\rho^{2} x^{2} \right)^{2} }{ \left(1-\rho^{2} x^{2} \right) \sqrt{1-x^{2}}} \\&&\times\left[ \begin{array}{l} \mathrm{F} \left(1,3,2,1-\rho^{2} x^{2} \right) \\ + \frac{4}{3} \mathrm{F} \left(1,3,\frac{5}{2},\rho^{2} x^{2} \right) \end{array} \right] {\mathrm d} x \\ &&+ \frac{4 \left(1-\rho^{2} \right)^{2}}{\pi} {\int^{1}_{0}} \frac{\rho^{2} x^{2} }{ \sqrt{1-x^{2}}}\\ &&\times\left[ \mathrm{F} \left(1,3,3,1-\rho^{2} x^{2} \right) + 4 \mathrm{F} \left(1,3,\frac{3}{2},\rho^{2} x^{2} \right) \right] {\mathrm d} x.\\ \end{array} $$
(61)
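As a sanity check of the closed form (44) and the density derivation, one can estimate \(\mathcal {E} \left \{ \beta \right \}\) by direct Monte Carlo, sampling \(m_{1} \mathrm {e}^{j \phi _{1}}\) and \(m_{2} \mathrm {e}^{j \phi _{2}}\) as zero-mean, unit-variance complex Gaussians with correlation coefficient ρ, as stated after (44). In the uncorrelated case ρ=0, (44) reduces to \(\beta = 2 \left ({m^{2}_{1}} + {m^{2}_{2}} \right)\), so \(\mathcal {E} \left \{ \beta \right \} = 4\). A minimal sketch of this check:

```python
import numpy as np

def sample_beta(rho, n, rng):
    """Sample beta from the closed form (44), with m1*e^{j*phi1} and
    m2*e^{j*phi2} correlated CN(0,1) variables (correlation rho)."""
    u = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h1, h2 = u, rho * u + np.sqrt(1 - rho**2) * w   # E{h1 h2*} = rho
    m1, m2 = np.abs(h1), np.abs(h2)
    c = np.cos(np.angle(h1) - np.angle(h2))
    return 2 * (m1**2 - 2 * rho * c * m1 * m2 + m2**2) / (1 - rho**2 * c**2)

rng = np.random.default_rng(0)
mean0 = sample_beta(0.0, 200_000, rng).mean()   # -> approximately 4
print(mean0)
```

The same sampler evaluated at a general ρ gives a numerical reference value for \(\mathcal {E} \left \{ \beta \right \}\) against which the integral expression can be checked.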

9 Appendix B

Under equal power allocation, the expression of β n for the semi-orthogonal MAC with fixed sensor grouping simplifies to
$$ \beta_{n} = \frac{\left| \sum_{i \in \Omega_{n}} \sqrt{\frac{N}{K}} h_{i} \right|^{2} }{\frac{N}{K} \sum_{i \in \Omega_{n}} \left(\mathcal{R} \left \{ h_{i} \right \} \frac{\mathcal{R} \left \{ \hat{h}_{n} \right \}}{\left| \hat{h}_{n} \right|} + \mathcal{I} \left \{ h_{i} \right \} \frac{\mathcal{I} \left \{ \hat{h}_{n} \right \}}{\left| \hat{h}_{n} \right|} \right)^{2} }. $$
(62)
Let \(\tilde {h}_{n} = \sum _{i \in \Omega _{n}} \sqrt {\frac {N}{K}} h_{i} = m_{n} \mathrm {e}^{j \phi _{n}}\). Then \(\tilde {h}_{n}\) is a circularly symmetric complex Gaussian random variable with zero mean and unit variance. The numerator of β n , \(| \tilde {h}_{n} |^{2}\), is exponentially distributed with unit mean, whose pdf is
$$ f_{| \tilde{h}_{n} |^{2}}\left(|\tilde{h}_{n} |^{2}\right) = \text{exp} \left(-| \tilde{h}_{n} |^{2} \right). $$
(63)
Since \(\hat {h}_{n}\) has the same phase as \(\tilde {h}_{n}\), the denominator of β n becomes
$$\begin{array}{@{}rcl@{}} \frac{N}{K} {\sum\nolimits}_{i \in \Omega_{n}}&&\!\!\!\!\!\!\!\!\!\! \left(\mathcal{R} \left\{h_{i} \right \} \cos \phi_{n} + \mathcal{I} \left \{ h_{i} \right \} \sin \phi_{n} \right)^{2} \\ &=& \left(\frac{N}{K} {\sum\nolimits}_{i \in \Omega_{n}} \mathcal{R}^{2} \left\{ h_{i} \right\}\right) \cos^{2} \phi_{n}\\ && + \left(\frac{N}{K} {\sum\nolimits}_{i \in \Omega_{n}} \mathcal{I}^{2} \left \{ h_{i} \right \} \right) \sin^{2} \phi_{n} \\ && + 2 \left(\frac{N}{K} {\sum\nolimits}_{i \in \Omega_{n}} \mathcal{R} \left \{ h_{i} \right \} \mathcal{I} \left \{ h_{i} \right \} \right) \cos \phi_{n} \sin \phi_{n}.\\ \end{array} $$
(64)
When \(K \to \infty \), according to the strong law of large numbers, (64) converges to
$$\begin{array}{@{}rcl@{}} &\!\!\!\!\!\!\!\!\mathcal{E}& \left\{\mathcal{R}^{2} \left\{h_{i} \right\}\right\} \cos^{2} \phi_{n} + \mathcal{E} \left \{ \mathcal{I}^{2} \left \{ h_{i} \right \} \right \} \sin^{2} \phi_{n}\\ &&+ 2 \mathcal{E} \left \{ \mathcal{R} \left \{ h_{i} \right \} \mathcal{I} \left \{ h_{i} \right \} \right \} \cos \phi_{n} \sin \phi_{n} \\ =&&\!\frac{1}{2} \cos^{2} \phi_{n} + \frac{1}{2} \sin^{2} \phi_{n} = \frac{1}{2}. \end{array} $$
(65)
Therefore, when \(K \to \infty \), \(\beta _{n} = 2 | \tilde {h}_{n} |^{2}\) is exponentially distributed with mean 2, whose pdf is
$$ f_{\beta_{n}} \left(\beta_{n} \right) = \frac{1}{2} \text{exp} \left(-\frac{\beta_{n}}{2} \right),\; \beta_{n} \ge 0. $$
(66)
Finally, it is well known that the sum of N independent and identically distributed (i.i.d.) exponential random variables, each with mean 2, is a Gamma random variable with shape parameter a=N and scale parameter b=2. For completeness, the pdf of the Gamma distribution is as follows:
$$ f_{\beta} \left(\beta\right) = \frac{\beta^{a-1}}{\Gamma \left(a\right)b^{a}} \text{exp} \left(- \frac{\beta}{b} \right), \quad \beta \ge 0,\; a>0,\; b>0, $$
(67)

where \(\Gamma \left (a \right) = \int ^{\infty }_{0} x^{a-1} \mathrm {e}^{-x} {\mathrm d} x\) is the Gamma function. If a is an integer, then Γ(a)=(a−1)!.
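The limiting distributions above are easy to confirm by simulation. The sketch below draws \(\tilde {h}_{n} = \sqrt {N/K} \sum _{i \in \Omega _{n}} h_{i}\) for N equal-size groups of K/N sensors each, and checks that \(\beta _{n} = 2 | \tilde {h}_{n} |^{2}\) has mean 2 and that \(\beta = \sum _{n} \beta _{n}\) has mean 2N and variance 4N, as implied by the Gamma limit with a=N and b=2; the group sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, trials = 4, 400, 50_000
g = K // N   # sensors per group (equal-size groups assumed)

# h_i ~ CN(0,1); sqrt(N/K) times the sum over a group is again CN(0,1)
h = (rng.standard_normal((trials, N, g))
     + 1j * rng.standard_normal((trials, N, g))) / np.sqrt(2)
h_tilde = np.sqrt(N / K) * h.sum(axis=2)

beta_n = 2 * np.abs(h_tilde) ** 2   # exponential with mean 2 (limiting form)
beta = beta_n.sum(axis=1)           # Gamma(a=N, b=2): mean 2N, variance 4N
print(beta_n.mean(), beta.mean(), beta.var())
```

With N=4, the empirical mean and variance of β come out near 8 and 16, matching the Gamma(a=4, b=2) limit.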

Declarations

Acknowledgements

This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to thank the anonymous reviewers for many constructive comments, which greatly helped in improving the clarity of this paper.

Authors’ contributions

The work was carried out by the first author when she was a graduate student under the academic supervision of the second author. Both authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Electrical and Computer Engineering, University of Saskatchewan

References

  1. IF Akyildiz, W Su, Y Sankarasubramaniam, E Cayirci, A survey on sensor networks. IEEE Commun. Mag. 40, 102–114 (2002)
  2. VC Gungor, GP Hancke, Industrial wireless sensor networks: challenges, design principles, and technical approaches. IEEE Trans. Ind. Electron. 56, 4258–4265 (2009)
  3. VC Gungor, B Lu, GP Hancke, Opportunities and challenges of wireless sensor networks in smart grid. IEEE Trans. Ind. Electron. 57, 3557–3564 (2010)
  4. C-Y Chong, SP Kumar, Sensor networks: evolution, opportunities and challenges. Proc. IEEE 91, 1247–1256 (2003)
  5. M Nourian, S Dey, A Ahlen, Distortion minimization in multi-sensor estimation with energy harvesting. IEEE J. Select. Areas Commun. 33(3), 524–539 (2015)
  6. J-J Xiao, A Ribeiro, Z-Q Luo, G Giannakis, Distributed compression-estimation using wireless sensor networks. IEEE Signal Process. Mag. 23, 27–41 (2006)
  7. S Cui, JJ Xiao, ZQ Luo, A Goldsmith, HV Poor, Estimation diversity and energy efficiency in distributed sensing. IEEE Trans. Signal Process. 55, 4683–4695 (2007)
  8. JJ Xiao, S Cui, ZQ Luo, A Goldsmith, Linear coherent decentralized estimation. IEEE Trans. Signal Process. 56, 757–770 (2008)
  9. M Gastpar, M Vetterli, Source-channel communication in sensor networks, in Lecture Notes in Computer Science, vol. 2634 (Springer, New York, 2003), pp. 162–177
  10. M Gastpar, Uncoded transmission is exactly optimal for a simple Gaussian “sensor” network. IEEE Trans. Inform. Theory 54, 5247–5251 (2008)
  11. K Liu, H El-Gamal, A Sayeed, On optimal parametric field estimation in sensor networks, in IEEE/SP 13th Workshop on Statistical Signal Processing (Bordeaux, 2005), pp. 1170–1175
  12. W Bajwa, A Sayeed, R Nowak, Matched source-channel communication for field estimation in wireless sensor networks, in 4th Int. Symp. Inf. Process. Sens. Netw. (Los Angeles, 2005), pp. 332–339
  13. MK Banavar, C Tepedelenlioglu, A Spanias, Estimation over fading channels with limited feedback using distributed sensing. IEEE Trans. Signal Process. 58, 414–425 (2010)
  14. H Senol, C Tepedelenlioglu, Performance of distributed estimation over unknown parallel fading channels. IEEE Trans. Signal Process. 56, 6057–6068 (2008)
  15. TJ Goblick, Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inform. Theory IT-11, 558–567 (1965)
  16. M Gastpar, B Rimoldi, M Vetterli, To code or not to code: lossy source-channel communication revisited. IEEE Trans. Inform. Theory 49, 1147–1158 (2003)
  17. JC Liu, CD Chung, Distributed estimation in a wireless sensor network using hybrid MAC. IEEE Trans. Veh. Technol. 60, 3424–3435 (2011)
  18. R Mudumbai, G Barriac, U Madhow, On the feasibility of distributed beamforming in wireless networks. IEEE Trans. Wireless Commun. 6, 1754–1763 (2007)
  19. C Tepedelenlioglu, On the asymptotic efficiency of distributed estimation systems with constant modulus signals over multiple-access channels. IEEE Trans. Inform. Theory 57, 7125–7130 (2011)
  20. M Gastpar, M Vetterli, Power, spatio-temporal bandwidth, and distortion in large sensor networks. IEEE J. Select. Areas Commun. 23, 745–754 (2005)
  21. J Su, Distributed estimation in wireless sensor networks under semi-orthogonal MAC. M.Sc. thesis, University of Saskatchewan, Canada (2014)
  22. I Bahceci, AK Khandani, Linear estimation of correlated data in wireless sensor networks with optimum power allocation and analog modulation. 56, 1146–1156 (2008)
  23. J-Y Wu, T-Y Wang, Power allocation for robust distributed best-linear-unbiased estimation against sensing noise variance uncertainty. IEEE Trans. Wireless Commun. 12, 2853–2869 (2013)
  24. S Kar, PK Varshney, Linear coherent estimation with spatial collaboration. IEEE Trans. Inform. Theory 59, 3532–3553 (2013)
  25. J Fang, H Li, Power constrained distributed estimation with cluster-based sensor collaboration. IEEE Trans. Wireless Commun. 8, 3822–3832 (2009)
  26. M Fanaei, MC Valenti, A Jamalipour, NA Schmid, Optimal power allocation for distributed BLUE estimation with linear spatial collaboration, in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Process. (Florence, 2014), pp. 5452–5456
  27. SM Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, Englewood Cliffs, 1993)
  28. J Su, HH Nguyen, HD Tuan, Power allocation for distributed estimation in sensor networks with semi-orthogonal MAC, in Canadian Workshop on Information Theory (St. John’s, 2015)
  29. P Viswanath, V Anantharam, DNC Tse, Optimal sequences, power control and user capacity of synchronous CDMA systems with linear MMSE multiuser receivers. IEEE Trans. Inform. Theory 45, 1968–1983 (1999)
  30. HH Nguyen, E Shwedyk, Bandwidth constrained signature waveforms for maximizing the network capacity of synchronous CDMA systems. IEEE Trans. Commun. 49, 961–965 (2001)
  31. IS Gradshteyn, IM Ryzhik, Table of Integrals, Series, and Products (Academic Press, 2007)

Copyright

© The Author(s) 2016