Open Access

Two-dimensional DOA estimation of coherent sources using two parallel uniform linear arrays

EURASIP Journal on Wireless Communications and Networking 2017, 2017:60

https://doi.org/10.1186/s13638-017-0844-0

Received: 14 November 2016

Accepted: 20 March 2017

Published: 31 March 2017

Abstract

A novel two-dimensional (2-D) direction-of-arrival (DOA) estimation approach based on matrix reconstruction is proposed for coherent signals impinging on two parallel uniform linear arrays (ULAs). In the proposed algorithm, the coherency of the incident signals is decorrelated through two equivalent covariance matrices, which are constructed by utilizing the cross-correlation information of the received data between the two parallel ULAs together with a changing reference element. The 2-D DOAs can then be estimated by eigenvalue decomposition (EVD) of the newly constructed matrix. Compared with previous works, the proposed algorithm offers remarkably good estimation performance. In addition, it achieves automatic parameter pair-matching without additional computation. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithm.

Keywords

Matrix reconstruction; 2-D DOA estimation; Coherent signals; Decoupled estimation; Uniform linear array (ULA)

1 Background

2-D direction-of-arrival (DOA) estimation of incident coherent source signals has received increasing attention in radar, sonar, and seismic exploration [1–5]. Many high-resolution techniques, such as MUSIC [6] and ESPRIT [7], have achieved impressive estimation performance. However, these methods assume that the incident signals are independent and suffer performance degradation from rank deficiency when coherent signals are present. To decorrelate coherent signals, spatial smoothing (SS) [8] and forward-backward spatial smoothing (FBSS) [9] are especially noteworthy. However, these techniques generally reduce the effective array aperture, and the maximum number of resolvable signals cannot exceed the number of array sensors. In [10], an effective matrix decomposition method utilizing the cross-correlation matrix is proposed to decorrelate coherent signals. Chen et al. [11] proposed a 2-D ESPRIT-like method that achieves decorrelation by reconstructing a Toeplitz matrix. With the help of three correlation matrices, Wang et al. [12] presented a 2-D DOA estimation method. Recently, Nie et al. [13] introduced an efficient subspace algorithm for 2-D DOA estimation. In [14], a novel 2-D DOA estimation method using a sparse L-shaped array is proposed to obtain high performance with less complexity. Xia et al. [15] proposed a polynomial root-finding-based method for 2-D DOA estimation using two parallel uniform linear arrays (ULAs), which has a lower computational burden. Several decorrelation algorithms are proposed in [16–18] to achieve 2-D DOA estimation with two parallel ULAs. However, the limitation of the abovementioned algorithms is that their estimation performance is unsatisfactory because the array structure is not fully exploited.

For the purpose of description, the following notations are used. Boldface italic lowercase/uppercase letters denote vectors/matrices. (·)*, (·)T, (·)†, and (·)H stand for the conjugate, transpose, Moore-Penrose pseudo-inverse, and conjugate transpose of a vector/matrix, respectively. E(·) and diag(·) denote the expectation operator and a diagonal matrix, respectively.

2 Data model

As illustrated in Fig. 1, the antenna array consists of two parallel ULAs (X_a and Y_a) in the x−y plane. Each ULA has N omnidirectional sensors with spacing d_x, and the interelement spacing between the two ULAs is d_y. Suppose that M far-field narrowband coherent signals impinge on the two parallel ULAs from distinct 2-D directions (α_i, β_i) (1 ≤ i ≤ M), where α_i and β_i are measured relative to the x axis and the y axis, respectively.
Fig. 1

Parallel array configuration for 2-D DOA estimation

Let the kth element of subarray X_a be the phase reference; then the observed signal \( {x}_m^k(t) \) at the mth element can be expressed as
$$ {x}_m^k(t)={\displaystyle \sum_{i=1}^M{e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}{s}_i(t)}+{n}_{x, m}(t) $$
(1)
where s_i(t) denotes the complex envelope of the ith coherent signal, λ is the signal wavelength, and d_x represents the spacing between two adjacent sensors. The superscript k (k = 1, 2, …, N) of \( {x}_m^k(t) \) indicates the reference element in subarray X_a, and the subscript m (m = 1, 2, …, N) indexes the elements along the positive x axis in subarray X_a. n_{x,m}(t) is the additive white Gaussian noise (AWGN) at the mth element of subarray X_a.
Note that when m = k, the observed signals at the kth element can be expressed as
$$ {x}_k^k(t)={\displaystyle \sum_{i=1}^M{e}^{- j\left(2\pi /\lambda \right)\left( k- k\right){d}_x \cos {\alpha}_i}{s}_i(t)}+{n}_{x, k}(t)={\displaystyle \sum_{i=1}^M{s}_i(t)}+{n}_{x, k}(t) $$
(2)
Similarly, taking the kth element of subarray Y_a as the phase reference, the observed signal \( {y}_m^k(t) \) at the mth element can be expressed as
$$ {y}_m^k(t)={\displaystyle \sum_{i=1}^M{e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}{e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i}{s}_i(t)}+{n}_{y, m}(t) $$
(3)

As in (1), the superscript k (k = 1, 2, …, N) of \( {y}_m^k(t) \) indicates the reference element in subarray Y_a, and the subscript m (m = 1, 2, …, N) indexes the elements along the positive x axis in subarray Y_a. n_{y,m}(t) is the AWGN at the mth element of subarray Y_a.

The observed vectors X k (t) and Y k (t) can be written as
$$ {\mathbf{X}}^k(t)={\left[{x}_1^k(t),{x}_2^k(t),\cdots, {x}_N^k(t)\right]}^T $$
(4)
$$ {\mathbf{Y}}^k(t)={\left[{y}_1^k(t),{y}_2^k(t),\cdots, {y}_N^k(t)\right]}^T $$
(5)

3 The proposed algorithm

For the subarray X a , the auto-correlation calculation is defined as follows:
$$ {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k= E\left[{x}_m^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]={\displaystyle \sum_{i=1}^M{g}_i(t){e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}}+{\sigma}^2\delta \left( m, k\right) $$
(6)
where
$$ {g}_i(t)={\displaystyle \sum_{j=1}^M{s}_i(t){s}_j^{\ast }(t)} $$
(7)
$$ \delta \left( m, k\right)=\left\{\begin{array}{c}\hfill 1,\kern1.3em m= k\hfill \\ {}\hfill 0,\kern1.3em m\ne k\hfill \end{array}\right. $$
(8)
Assume that the kth element of the subarray X a is the phase reference. Thus, the auto-correlation vectors \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between X k (t) and the corresponding reference element \( {x}_k^k(t) \) can be defined as follows:
$$ {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k= E\left[{\mathbf{X}}^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]={\left[{r}_{x_1^k{\left({x}_k^k\right)}^{\ast}}^k,{r}_{x_2^k{\left({x}_k^k\right)}^{\ast}}^k,\cdots, {r}_{x_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^{T} $$
(9)
It is obvious that N column vectors are obtained as the superscript k of \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) is changed from 1 to N. Therefore, we construct an equivalent auto-covariance matrix R_xx as follows:
$$ \begin{array}{c}\kern0.2em {\mathbf{R}}_{x x}=\left[{\mathbf{r}}_{{\mathbf{X}}^1{\left({x}_1^1\right)}^{\ast}}^1,{\mathbf{r}}_{{\mathbf{X}}^2{\left({x}_2^2\right)}^{\ast}}^2,\cdots, {\mathbf{r}}_{{\mathbf{X}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ {}\kern1.8em =\left[\begin{array}{cccc}\hfill {r}_{x_1^1{\left({x}_1^1\right)}^{\ast}}^1\hfill & \hfill {r}_{x_1^2{\left({x}_2^2\right)}^{\ast}}^2\hfill & \hfill \cdots\ \hfill & \hfill {r}_{x_1^N{\left({x}_N^N\right)}^{\ast}}^N\hfill \\ {}\hfill {r}_{x_2^1{\left({x}_1^1\right)}^{\ast}}^1\hfill & \hfill {r}_{x_2^2{\left({x}_2^2\right)}^{\ast}}^2\hfill & \hfill \cdots\ \hfill & \hfill {r}_{x_2^N{\left({x}_N^N\right)}^{\ast}}^N\hfill \\ {}\hfill \vdots \hfill & \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ {}\hfill {r}_{x_N^1{\left({x}_1^1\right)}^{\ast}}^1\hfill & \hfill {r}_{x_N^2{\left({x}_2^2\right)}^{\ast}}^2\hfill & \hfill \cdots\ \hfill & \hfill {r}_{x_N^N{\left({x}_N^N\right)}^{\ast}}^N\hfill \end{array}\right]\end{array} $$
(10)
Analogously to (6), for the subarray Y_a, the cross-correlation \( {\tilde{r}}_{y_m^k{\left({x}_k^k\right)}^{\ast}}^k \) can be written as
$$ {\tilde{r}}_{y_m^k{\left({x}_k^k\right)}^{\ast}}^k= E\left[{y}_m^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]={\displaystyle \sum_{i=1}^M{g}_i(t){e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}}{e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} $$
(11)
Then, the cross-correlation vectors \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between Y k (t) and the reference element \( {x}_k^k(t) \) in subarray X a can be expressed as
$$ {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k= E\left[{\mathbf{Y}}^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]={\left[{\tilde{r}}_{y_1^k{\left({x}_k^k\right)}^{\ast}}^k,{\tilde{r}}_{y_2^k{\left({x}_k^k\right)}^{\ast}}^k,\cdots, {\tilde{r}}_{y_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^T $$
(12)
Obviously, we can obtain another N column vectors when the superscript k of the \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) is varied from 1 to N. Based on the N column vectors, an equivalent cross-covariance matrix R yx can be given by
$$ \begin{array}{c}\kern0.1em {\mathbf{R}}_{y x}=\left[{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^1{\left({x}_1^1\right)}^{\ast}}^1,{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^2{\left({x}_2^2\right)}^{\ast}}^2,\cdots, {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ {}\kern1.8em =\left[\begin{array}{cccc}\hfill {\tilde{r}}_{y_1^1{\left({x}_1^1\right)}^{\ast}}^1\hfill & \hfill {\tilde{r}}_{y_1^2{\left({x}_2^2\right)}^{\ast}}^2\hfill & \hfill \cdots \hfill & \hfill {\tilde{r}}_{y_1^N{\left({x}_N^N\right)}^{\ast}}^N\hfill \\ {}\hfill {\tilde{r}}_{y_2^1{\left({x}_1^1\right)}^{\ast}}^1\hfill & \hfill {\tilde{r}}_{y_2^2{\left({x}_2^2\right)}^{\ast}}^2\hfill & \hfill \cdots \hfill & \hfill {\tilde{r}}_{y_2^N{\left({x}_N^N\right)}^{\ast}}^N\hfill \\ {}\hfill \vdots \hfill & \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ {}\hfill {\tilde{r}}_{y_N^1{\left({x}_1^1\right)}^{\ast}}^1\hfill & \hfill {\tilde{r}}_{y_N^2{\left({x}_2^2\right)}^{\ast}}^2\hfill & \hfill \cdots \hfill & \hfill {\tilde{r}}_{y_N^N{\left({x}_N^N\right)}^{\ast}}^N\hfill \end{array}\right]\end{array} $$
(13)
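Since relabeling the reference element does not change the physically received data, the kth columns of (10) and (13) are simply the correlations of every sensor with sensor k of subarray X_a. Under that reading, both equivalent matrices can be estimated in one step from finite snapshot blocks. This is a sketch of that sample estimate; `X`, `Y`, and `T` are assumed names for the N × T snapshot matrices and the snapshot count.

```python
import numpy as np

def equivalent_covariances(X, Y):
    """Sample estimates of the equivalent matrices R_xx (10) and R_yx (13).

    Column k of each matrix correlates all sensors with reference sensor k,
    so the full matrices coincide with the usual sample (cross-)covariances
    of the two subarray snapshot blocks X and Y (each N x T).
    """
    T = X.shape[1]
    Rxx = X @ X.conj().T / T    # equivalent auto-covariance, as in (10)
    Ryx = Y @ X.conj().T / T    # equivalent cross-covariance, as in (13)
    return Rxx, Ryx
```

Note that R_xx estimated this way is Hermitian by construction, which matches the model form (18) since G is real-valued on its diagonal.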
In order to obtain the final matrix form of the equivalent auto-covariance matrix R xx as in (10), we need to further investigate the auto-correlation calculation \( {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k \) in (6).
$$ \begin{array}{rl} {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k &= E\left[{x}_m^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]\\ &={\displaystyle \sum_{i=1}^M{\displaystyle \sum_{j=1}^M{s}_i(t){s}_j^{\ast }(t){e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}}}+{\sigma}^2\delta \left( m, k\right)\\ &={\displaystyle \sum_{i=1}^M{\displaystyle \sum_{j=1}^M{s}_i(t){s}_j^{\ast }(t){e}^{- j\left(2\pi /\lambda \right)\left[\left( m-1\right)-\left( k-1\right)\right]{d}_x \cos {\alpha}_i}}}+{\sigma}^2\delta \left( m, k\right)\\ &={\displaystyle \sum_{i=1}^M{\displaystyle \sum_{j=1}^M{s}_i(t){s}_j^{\ast }(t){e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_i}\cdot {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_i}}}+{\sigma}^2\delta \left( m, k\right)\\ &=\left[{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_1},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_M}\right]\operatorname{diag}\left[{g}_1(t),\cdots,{g}_M(t)\right]\left[\begin{array}{c} {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_1}\\ \vdots \\ {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_M}\end{array}\right]+{\sigma}^2\delta \left( m, k\right)\\ &={\mathtt{a}}_m\left(\alpha \right)\mathbf{G}{\mathtt{a}}_k^H\left(\alpha \right)+{\sigma}^2\delta \left( m, k\right)\end{array} $$
(14)
where
$$ \mathbf{G}= diag\left[\begin{array}{cccc}\hfill {g}_1(t)\hfill & \hfill {g}_2(t)\hfill & \hfill \cdots \hfill & \hfill {g}_M(t)\hfill \end{array}\right] $$
(15)
$$ {\mathtt{a}}_m\left(\alpha \right)=\left[\begin{array}{ccc}\hfill {e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_1}\hfill & \hfill \cdots \hfill & \hfill {e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_M}\hfill \end{array}\right] $$
(16)
It can be seen from (16) that \( {\mathtt{a}}_m\left(\alpha \right) \) is the mth row of the steering matrix obtained when the first element of subarray X_a is taken as the reference element. According to (14), (15), and (16), Eq. (9) can be rewritten as
$$ \begin{array}{l}\kern0.2em {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k={\left[{r}_{x_1^k{\left({x}_k^k\right)}^{\ast}}^k,{r}_{x_2^k{\left({x}_k^k\right)}^{\ast}}^k,\cdots, {r}_{x_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^T\\ {}\kern3em =\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathtt{a}}_k^H\left(\alpha \right)+{\sigma}^2\delta \left( m, k\right)\end{array} $$
(17)
where \( \mathbf{A}\left(\alpha \right)=\left[\begin{array}{cccc}\hfill \mathtt{a}\left({\alpha}_1\right)\hfill & \hfill \mathtt{a}\left({\alpha}_2\right)\hfill & \hfill \cdots \hfill & \hfill \mathtt{a}\left({\alpha}_M\right)\hfill \end{array}\right] \) is the steering matrix of the covariance matrix along the subarray X a , and \( \mathtt{a}\left({\alpha}_i\right)={\left[\begin{array}{cccc}\hfill 1\hfill & \hfill {e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i}\hfill & \hfill \cdots \hfill & \hfill {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\hfill \end{array}\right]}^T \).
Based on (17), the matrix R xx in (10) can be rewritten as
$$ \begin{array}{c}\kern0.2em {\mathbf{R}}_{x x}=\left[{\mathbf{r}}_{{\mathbf{X}}^1{\left({x}_1^1\right)}^{\ast}}^1,{\mathbf{r}}_{{\mathbf{X}}^2{\left({x}_2^2\right)}^{\ast}}^2,\cdots, {\mathbf{r}}_{{\mathbf{X}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ {}\kern1.8em =\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right)+ diag\left[{\sigma}_1^2,{\sigma}_2^2,\cdots, {\sigma}_N^2\right]\end{array} $$
(18)
where \( {\sigma}_i^2 \) is the noise power on the ith element of the subarray X a .
Similar to the equivalent auto-covariance matrix R xx in (18), the equivalent cross-covariance matrix R yx in (13) can be rewritten as
$$ \begin{array}{l}\kern0.2em {\mathbf{R}}_{yx}=\left[{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^1{\left({x}_1^1\right)}^{\ast}}^1,{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^2{\left({x}_2^2\right)}^{\ast}}^2,\cdots, {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ {}\kern1.9em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right)\end{array} $$
(19)
where
$$ \boldsymbol{\Psi} \left(\beta \right)=\left[\begin{array}{cccc}\hfill \upsilon \left({\beta}_1\right)\hfill & \hfill 0\hfill & \hfill \cdots \hfill & \hfill 0\hfill \\ {}\hfill 0\hfill & \hfill \upsilon \left({\beta}_2\right)\hfill & \hfill \cdots \hfill & \hfill 0\hfill \\ {}\hfill \vdots \hfill & \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ {}\hfill 0\hfill & \hfill 0\hfill & \hfill \cdots \hfill & \hfill \upsilon \left({\beta}_M\right)\hfill \end{array}\right] $$
(20)
$$ \mathbf{G}= diag\left[\begin{array}{cccc}\hfill {g}_1(t)\hfill & \hfill {g}_2(t)\hfill & \hfill \cdots \hfill & \hfill {g}_M(t)\hfill \end{array}\right] $$
(21)

From (18) and (19), since α_i ≠ α_j for i ≠ j, A(α) is a full column rank matrix with rank(A(α)) = M. Similarly, since β_i ≠ β_j for i ≠ j, Ψ(β) is a full-rank diagonal matrix with rank(Ψ(β)) = M. According to (7) and (15), the incident signals satisfy s_i(t) ≠ 0 (i = 1, 2, …, M), so g_i(t) ≠ 0. As a result, G is a full-rank diagonal matrix, namely rank(G) = M. The interpretation of the diagonal elements g_i(t) depends on the signal scenario. If the narrowband far-field signals are statistically independent, g_i(t) represents the power of the ith incident signal. If the signals are fully coherent, g_i(t) denotes the sum of the powers of the M incident signals. If uncorrelated and coherent signals coexist, with K coherent signals and M − K statistically independent ones, then g_i(t) is the sum of the powers of the K coherent signals when the ith source belongs to the coherent group, and the power of the ith independent signal otherwise.

From the above theoretical analysis, the coherency of the incident signals is decorrelated through the matrix construction, regardless of whether the signals are uncorrelated, coherent, or partially correlated.

From (18), we can obtain the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \)
$$ {\widehat{\mathbf{R}}}_{xx}=\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right) $$
(22)
The eigenvalue decomposition (EVD) of \( {\widehat{\mathbf{R}}}_{xx} \) can be written as
$$ {\widehat{\mathbf{R}}}_{xx}={\displaystyle \sum_{i=1}^M{\lambda}_i{\mathbf{U}}_i{\mathbf{U}}_i^H} $$
(23)
where \( {\lambda}_1\ge {\lambda}_2\ge \cdots \ge {\lambda}_M \) and \( {\mathbf{U}}_1,{\mathbf{U}}_2,\cdots, {\mathbf{U}}_M \) are the nonzero eigenvalues and the corresponding eigenvectors of the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \), respectively. Then, the pseudo-inverse of \( {\widehat{\mathbf{R}}}_{xx} \) is
$$ {\mathbf{R}}_{xx}^{\dagger }={\displaystyle \sum_{i=1}^M{\lambda}_i^{-1}{\mathbf{U}}_i{\mathbf{U}}_i^H} $$
(24)
Since A(α) has full column rank, Eq. (22) can be expressed as
$$ \mathbf{G}{\mathbf{A}}^H\left(\alpha \right)={\mathbf{A}}^{\dagger}\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}={\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right){\widehat{\mathbf{R}}}_{xx} $$
(25)
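This least-squares identity can be checked numerically: for any full-column-rank A and diagonal G, the left inverse (A^H A)^{-1} A^H recovers G A^H from the noiseless product A G A^H. The sketch below builds small random matrices purely to confirm the algebra; none of the values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 7, 4
# Random N x M matrix is full column rank almost surely.
A = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
G = np.diag(rng.uniform(1, 2, M)).astype(complex)   # full-rank diagonal

R_hat = A @ G @ A.conj().T                          # noiseless R_xx, as in (22)
lhs = G @ A.conj().T
# (A^H A)^{-1} A^H R_hat, computed via a linear solve instead of an inverse
rhs = np.linalg.solve(A.conj().T @ A, A.conj().T) @ R_hat
print(np.allclose(lhs, rhs))   # True: the identity in (25) holds
```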
According to (19) and (25), the matrix R yx can be rewritten as
$$ \begin{array}{l}\kern0.1em {\mathbf{R}}_{yx}=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right)\\ {}\kern1.8em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\end{array} $$
(26)
Right-multiplying both sides of (26) by \( {\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right) \) gives
$$ \begin{array}{l}\kern0.1em {\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right)\\ {}\kern6em {\widehat{\mathbf{R}}}_{xx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right)\end{array} $$
(27)
Substituting (23) and (24) into (27) yields
$$ \begin{array}{l}\kern0.2em {\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right)\\ {}\kern7em \left({\displaystyle \sum_{i=1}^M{\lambda}_i{\mathbf{U}}_i{\mathbf{U}}_i^H}\right)\left({\displaystyle \sum_{i=1}^M{\lambda}_i^{-1}{\mathbf{U}}_i{\mathbf{U}}_i^H}\right)\mathbf{A}\left(\alpha \right)\\ {}\kern5.1em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right)\\ {}\kern6.9em \left({\displaystyle \sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H}\right)\mathbf{A}\left(\alpha \right)\\ {}\kern5em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)\\ {}\kern5em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\end{array} $$
(28)
Notice that \( {\displaystyle \sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H} \) is an identity matrix, that is, \( {\displaystyle \sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H}=\mathbf{I} \). Based on (24) and (26), a new matrix R can be defined as follows:
$$ \mathbf{R}={\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger } $$
(29)
Using (29), Eq. (28) can be rewritten as
$$ \mathbf{R}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right) $$
(30)

Obviously, the columns of A(α) are the eigenvectors of R, and the corresponding eigenvalues are the diagonal elements of Ψ(β). Therefore, by performing the EVD of R, both A(α) and Ψ(β) can be obtained. The 2-D DOAs of the coherent signals then follow from \( \upsilon \left({\beta}_i\right)={e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} \) and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^T \) without additional computation for parameter pair-matching or 2-D peak searching.
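Relation (30) can be exercised numerically. The sketch below builds the model matrices directly in the forms (22) and (19) from assumed angles (the true α, β, and the diagonal of G are invented for the test), forms R as in (29), and reads each β_i off an eigenvalue phase and the paired α_i off the first two entries of the associated eigenvector, which is how the automatic pair-matching arises.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 7, 3
lam, dx, dy = 1.0, 0.5, 0.5                      # wavelength and spacings (assumed)
alpha = np.deg2rad([75.0, 100.0, 120.0])         # assumed true angles
beta = np.deg2rad([65.0, 75.0, 50.0])

n = np.arange(N)[:, None]
A = np.exp(-1j * 2*np.pi/lam * n * dx * np.cos(alpha))          # steering matrix
Psi = np.diag(np.exp(1j * 2*np.pi/lam * dy * np.cos(beta)))     # as in (20)
G = np.diag(rng.uniform(1, 2, M)).astype(complex)               # full-rank diagonal

Rxx = A @ G @ A.conj().T                         # noiseless form (22)
Ryx = A @ Psi @ G @ A.conj().T                   # form (19)
R = Ryx @ np.linalg.pinv(Rxx)                    # new matrix, (29)

w, V = np.linalg.eig(R)
idx = np.argsort(-np.abs(w))[:M]                 # M dominant eigenpairs (|w| = 1)
beta_hat = np.arccos(np.angle(w[idx]) * lam / (2*np.pi*dy))
ratio = V[1, idx] / V[0, idx]                    # equals e^{-j(2pi/lam) dx cos(alpha_i)}
alpha_hat = np.arccos(-np.angle(ratio) * lam / (2*np.pi*dx))
# Each (alpha_hat[i], beta_hat[i]) comes from one eigenpair: pairing is automatic.
```

With half-wavelength spacings the phase arguments stay inside (−π, π], so the arccos inversions are unambiguous.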

Up to now, the steps of the proposed matrix reconstruction method with finite sampling data are summarized as follows:

  1. Calculate the column vectors \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent auto-covariance matrix R_xx by (6) and (9). Similarly, compute the column vectors \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent cross-covariance matrix R_yx according to (11) and (12).

  2. Form the matrices R_xx and R_yx by (10) and (13).

  3. Obtain the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \) by (22); then perform EVD to obtain the pseudo-inverse matrix \( {\mathbf{R}}_{xx}^{\dagger } \).

  4. Construct the new matrix R by (29), and obtain A(α) and Ψ(β) by performing the EVD of R.

  5. Estimate the 2-D DOAs θ_i = (α_i, β_i) of the incident coherent signals via \( \upsilon \left({\beta}_i\right)={e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} \) and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^T \).

4 Simulation results

In this section, computer simulations are performed to assess the performance of the proposed algorithm. The proposed method is compared with another efficient algorithm (DMR-DOAM) in [17]. The number of sensors in each subarray is N = 7 with sensor spacing d_x = d_y = λ/2. Consider M = 4 coherent signals with carrier frequency f = 900 MHz coming from α = (75°, 100°, 120°, 60°) and β = (65°, 75°, 90°, 50°). The phases of the coherent signals are [π/5, π/3, π/3, π/3]. The results of each simulation are averaged over 1000 Monte Carlo trials. Two performance indices, the root-mean-square error (RMSE) and the normalized probability of success (NPS), are defined to evaluate the performance of the proposed algorithm and the DMR-DOAM algorithm.
$$ \mathrm{RMSE}\left(\alpha \right)=\sqrt{\frac{1}{1000 M}{\displaystyle \sum_{i=1}^{1000}{\displaystyle \sum_{n=1}^M{\left({\widehat{\alpha}}_n(i)-{\alpha}_n\right)}^2}}},\qquad \mathrm{RMSE}\left(\beta \right)=\sqrt{\frac{1}{1000 M}{\displaystyle \sum_{i=1}^{1000}{\displaystyle \sum_{n=1}^M{\left({\widehat{\beta}}_n(i)-{\beta}_n\right)}^2}}} $$
(31)
where \( {\widehat{\alpha}}_n(i) \) and \( {\widehat{\beta}}_n(i) \) are the estimates of α_n and β_n in the ith Monte Carlo trial, respectively, and M is the number of sources.
$$ \mathrm{N}\mathrm{P}\mathrm{S}=\frac{\varUpsilon_{\mathrm{suc}}}{T_{\mathrm{total}}} $$
(32)
where ϒ_suc and Τ_total denote the number of successful trials and the total number of Monte Carlo trials, respectively. A trial is declared successful if it satisfies \( \max \left(\left|{\widehat{\theta}}_n-{\theta}_n\right|\right)<\varepsilon \), where ε equals 0.5 for the estimation of the coherent signals.
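The two indices can be computed from a table of per-trial estimates as follows. This is a small sketch; `est` and `true` are assumed names for the trials-by-sources matrix of angle estimates (degrees) and the true angles.

```python
import numpy as np

def rmse(est, true):
    """Root-mean-square error over all trials and sources, as in (31).

    est: (trials, M) array of angle estimates in degrees; true: (M,) truth.
    """
    return np.sqrt(np.mean((est - np.asarray(true)) ** 2))

def nps(est, true, eps=0.5):
    """Normalized probability of success, as in (32): a trial succeeds
    when its worst per-source error stays below eps degrees."""
    success = np.max(np.abs(est - np.asarray(true)), axis=1) < eps
    return success.mean()
```

This assumes the estimates in each trial have already been sorted or paired against the true angles; with the proposed algorithm the pairing comes for free from the eigendecomposition.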
In the first simulation, we evaluate the performance of the two algorithms with respect to the input signal-to-noise ratio (SNR). The number of snapshots is fixed at 1000, and the SNR varies from −10 to 10 dB. The RMSE of the DOAs versus the SNR is shown in Fig. 2. It can be seen from Fig. 2 that the proposed algorithm provides better DOA estimation than the DMR-DOAM algorithm for both the α and the β estimates. Figure 3 shows the NPS of the DOAs versus SNR, which illustrates that the performance of the proposed algorithm is better than that of the DMR-DOAM algorithm. Furthermore, even at low SNR, the proposed algorithm still achieves better estimation performance. The reason is that the proposed algorithm takes full advantage of all the received data of the two parallel ULAs to construct the equivalent auto-covariance matrix R_xx and cross-covariance matrix R_yx, which improves the estimation precision. In contrast, the DMR-DOAM algorithm obtains the DOAs at the cost of a reduced array aperture, which often leads to poorer DOA estimation.
Fig. 2

The RMSE of the DOA estimates versus input SNR

Fig. 3

The NPS of the DOA estimates versus input SNR

In the second simulation, we investigate the performance of the two algorithms versus the number of snapshots. The simulation conditions are similar to those in the first simulation, except that the SNR is set at 5 dB and the number of snapshots is varied from 10 to 250. The RMSE of the DOAs versus the number of snapshots is depicted in Fig. 4. As shown in Fig. 4, the proposed algorithm performs better than the DMR-DOAM algorithm.
Fig. 4

The RMSE of the DOA estimates versus input snapshots

The result in Fig. 5 shows the NPS of the DOAs versus the number of snapshots. From Fig. 5, it can be observed that the proposed algorithm achieves much higher estimation performance than the DMR-DOAM algorithm as the number of snapshots increases. Moreover, the superiority of the proposed algorithm is evident whether the number of snapshots is small or large. This indicates that the proposed algorithm is particularly useful when low computational cost and real-time data processing are required.
Fig. 5

The NPS of the DOA estimates versus input snapshots

In the last simulation, we assess the performance of the proposed algorithm as the correlation factor ρ between s_1(t) and s_2(t) is varied from 0 to 1. The SNR is set at 5 dB, and the number of snapshots is 800. Note that ε in (32) is set to 0.6 in this simulation. The performance curves of the DOA estimation against the correlation factor are shown in Figs. 6 and 7, from which we can see that the proposed algorithm outperforms the DMR-DOAM algorithm.
Fig. 6

The RMSE of the DOA estimates versus correlation factor

Fig. 7

The NPS of the DOA estimates versus correlation factor

5 Conclusions

A novel decoupling algorithm for 2-D DOA estimation with two parallel ULAs has been presented. In the proposed algorithm, two equivalent covariance matrices are reconstructed to decorrelate the coherent signals, and the estimated angle parameters are pair-matched automatically. It has been shown that the proposed algorithm yields remarkably better estimation performance than the DMR-DOAM algorithm.

Declarations

Acknowledgements

This research was supported by the National Natural Science Foundation of China (61602346), by the Key Talents Project for Tianjin University of Technology and Education (TUTE) (KYQD16001), by the Tianjin Municipal Science and Technology innovation platform, intelligent transportation coordination control technology service platform (16PTGCCX00150), and by the National Natural Science Foundation of China (61601494).

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Automobile and Transportation, Tianjin University of Technology and Education
(2)
Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University
(3)
College of Electronic and Communication Engineering, Tianjin Normal University
(4)
School of Electronic Engineering, Tianjin University of Technology and Education
(5)
The 28th Research Institute of China Electronics Technology Group Corporation

References

  1. H Krim, M Viberg, Two decades of array signal processing research: the parametric approach. IEEE Signal Process. Mag. 13(4), 67–94 (1996)
  2. Z Li, K Liu, Y Zhao et al., MaPIT: an enhanced pending interest table for NDN with mapping bloom filter. IEEE Commun. Lett. 18(11), 1423–1426 (2014)
  3. Z Li, L Song, H Shi, Approaching the capacity of K-user MIMO interference channel with interference counteraction scheme. Ad Hoc Netw. 2016, 1–6 (2016)
  4. Z Li, Y Chen, H Shi et al., NDN-GSM-R: a novel high-speed railway communication system via named data networking. EURASIP J. Wirel. Commun. Netw. 2016(48), 1–5 (2016)
  5. X Liu, Z Li, P Yang et al., Information-centric mobile ad hoc networks and content routing: a survey. Ad Hoc Netw. 2016, 1–14 (2016)
  6. RO Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 34(3), 276–280 (1986)
  7. R Roy, T Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989)
  8. N Tayem, HM Kwon, L-shape 2-dimensional arrival angle estimation with propagator method. IEEE Trans. Antennas Propag. 53(5), 1622–1630 (2005)
  9. S Marcos, A Marsal, M Benidir, The propagator method for source bearing estimation. Signal Process. 42(2), 121–138 (1995)
  10. JF Gu, P Wei, HM Tai, 2-D direction-of-arrival estimation of coherent signals using cross-correlation matrix. Signal Process. 88(1), 75–85 (2008)
  11. F Chen, S Kwong, CW Kok, ESPRIT-like two-dimensional DOA estimation for coherent signals. IEEE Trans. Aerosp. Electron. Syst. 46(3), 1477–1484 (2010)
  12. GM Wang, JM Xin, NN Zheng et al., Computationally efficient subspace-based method for two-dimensional direction estimation with L-shaped array. IEEE Trans. Signal Process. 59(7), 3197–3212 (2011)
  13. X Nie, LP Li, A computationally efficient subspace algorithm for 2-D DOA estimation with L-shaped array. IEEE Signal Process. Lett. 21(8), 971–974 (2014)
  14. JF Gu, WP Zhu, MNS Swamy, Joint 2-D DOA estimation via sparse L-shaped array. IEEE Trans. Signal Process. 31(5), 1171–1182 (2015)
  15. TQ Xia, Y Zheng, Q Wan et al., Decoupled estimation of 2-D angles of arrival using two parallel uniform linear arrays. IEEE Trans. Antennas Propag. 55(9), 2627–2632 (2007)
  16. TQ Xia, Y Zheng, Q Wan et al., 2-D angle of arrival estimation with two parallel uniform linear arrays for coherent signals, in Proc. IEEE Radar Conf., 244–247 (2007)
  17. L Wang, GL Li, WP Mao, New method for estimating 2-D DOA in coherent source environment based on data matrix reconstruction. J. Xidian Univ. 40(2), 159–168 (2013)
  18. H Chen, C Hou, Q Wang et al., Cumulants-based Toeplitz matrices reconstruction method for 2-D coherent DOA estimation. IEEE Sensors J. 14(8), 2824–2832 (2014)

Copyright

© The Author(s). 2017