For the subarray \( {\mathbf{X}}_a \), the auto-correlation calculation is defined as follows:
$$ \begin{array}{rl} {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k &= E\left[{x}_m^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]\\ &= \sum_{i=1}^M {g}_i(t)\,{e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}+{\sigma}^2\delta \left( m, k\right)\end{array} $$
(6)
where
$$ {g}_i(t)=\sum_{j=1}^M {s}_i(t)\,{s}_j^{\ast }(t) $$
(7)
$$ \delta \left( m, k\right)=\left\{\begin{array}{ll} 1, & m= k\\ 0, & m\ne k\end{array}\right. $$
(8)
Assume that the kth element of the subarray \( {\mathbf{X}}_a \) is the phase reference. Thus, the auto-correlation vector \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between \( {\mathbf{X}}^k(t) \) and the corresponding reference element \( {x}_k^k(t) \) can be defined as follows:
$$ \begin{array}{rl} {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k &= E\left[{\mathbf{X}}^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]\\ &= {\left[{r}_{x_1^k{\left({x}_k^k\right)}^{\ast}}^k,\;{r}_{x_2^k{\left({x}_k^k\right)}^{\ast}}^k,\;\cdots,\; {r}_{x_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^{\mathrm{T}}\end{array} $$
(9)
It is obvious that N column vectors are obtained as the superscript k of \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) is varied from 1 to N. Therefore, we construct an equivalent auto-covariance matrix \( {\mathbf{R}}_{xx} \) as follows:
$$ \begin{array}{rl} {\mathbf{R}}_{xx} &= \left[{\mathbf{r}}_{{\mathbf{X}}^1{\left({x}_1^1\right)}^{\ast}}^1,\;{\mathbf{r}}_{{\mathbf{X}}^2{\left({x}_2^2\right)}^{\ast}}^2,\;\cdots,\; {\mathbf{r}}_{{\mathbf{X}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ &= \left[\begin{array}{cccc} {r}_{x_1^1{\left({x}_1^1\right)}^{\ast}}^1 & {r}_{x_1^2{\left({x}_2^2\right)}^{\ast}}^2 & \cdots & {r}_{x_1^N{\left({x}_N^N\right)}^{\ast}}^N\\ {r}_{x_2^1{\left({x}_1^1\right)}^{\ast}}^1 & {r}_{x_2^2{\left({x}_2^2\right)}^{\ast}}^2 & \cdots & {r}_{x_2^N{\left({x}_N^N\right)}^{\ast}}^N\\ \vdots & \vdots & \ddots & \vdots\\ {r}_{x_N^1{\left({x}_1^1\right)}^{\ast}}^1 & {r}_{x_N^2{\left({x}_2^2\right)}^{\ast}}^2 & \cdots & {r}_{x_N^N{\left({x}_N^N\right)}^{\ast}}^N\end{array}\right]\end{array} $$
(10)
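As a numerical illustration of the indexing in (6), (9), and (10), the following minimal NumPy sketch builds \( {\mathbf{R}}_{xx} \) column by column from simulated subarray snapshots. All parameters (array size, angles, snapshot count, noise level) are illustrative assumptions, and the expectation \( E[\cdot] \) is replaced by a sample average over T snapshots; the sources here are uncorrelated.

```python
import numpy as np

# Minimal finite-sample sketch of (6)-(10): column k of R_xx is the sample
# correlation between the subarray X_a snapshots and the reference element
# x_k.  All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
N, M, T = 6, 2, 2000                          # sensors, sources, snapshots
wavelength, dx = 1.0, 0.5
alpha = np.deg2rad([70.0, 110.0])             # illustrative angles w.r.t. x-axis
n = np.arange(N)[:, None]
A = np.exp(-1j * 2 * np.pi / wavelength * n * dx * np.cos(alpha)[None, :])

S = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = A @ S + noise                             # subarray X_a snapshots

# column k: r^k = (1/T) * sum_t X(:, t) x_k(t)^*   (Eq. (9))
Rxx = np.stack([(X * X[k].conj()).mean(axis=1) for k in range(N)], axis=1)
print(Rxx.shape)
```

Note that when the same snapshot set is averaged for every reference k, this construction coincides with the ordinary sample covariance matrix \( \frac{1}{T}\mathbf{X}\mathbf{X}^H \); the sketch is meant only to fix the column-by-column indexing of (9) and (10).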
Similarly to (6), for the subarray \( {\mathbf{Y}}_a \), the cross-correlation calculation \( {\tilde{r}}_{y_m^k{\left({x}_k^k\right)}^{\ast}}^k \) can be written as
$$ \begin{array}{rl} {\tilde{r}}_{y_m^k{\left({x}_k^k\right)}^{\ast}}^k &= E\left[{y}_m^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]\\ &= \sum_{i=1}^M {g}_i(t)\,{e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}\,{e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i}\end{array} $$
(11)
Then, the cross-correlation vector \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between \( {\mathbf{Y}}^k(t) \) and the reference element \( {x}_k^k(t) \) in subarray \( {\mathbf{X}}_a \) can be expressed as
$$ \begin{array}{rl} {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k &= E\left[{\mathbf{Y}}^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]\\ &= {\left[{\tilde{r}}_{y_1^k{\left({x}_k^k\right)}^{\ast}}^k,\;{\tilde{r}}_{y_2^k{\left({x}_k^k\right)}^{\ast}}^k,\;\cdots,\; {\tilde{r}}_{y_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^{\mathrm{T}}\end{array} $$
(12)
Obviously, another N column vectors are obtained when the superscript k of \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) is varied from 1 to N. Based on these N column vectors, an equivalent cross-covariance matrix \( {\mathbf{R}}_{yx} \) can be given by
$$ \begin{array}{rl} {\mathbf{R}}_{yx} &= \left[{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^1{\left({x}_1^1\right)}^{\ast}}^1,\;{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^2{\left({x}_2^2\right)}^{\ast}}^2,\;\cdots,\; {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ &= \left[\begin{array}{cccc} {\tilde{r}}_{y_1^1{\left({x}_1^1\right)}^{\ast}}^1 & {\tilde{r}}_{y_1^2{\left({x}_2^2\right)}^{\ast}}^2 & \cdots & {\tilde{r}}_{y_1^N{\left({x}_N^N\right)}^{\ast}}^N\\ {\tilde{r}}_{y_2^1{\left({x}_1^1\right)}^{\ast}}^1 & {\tilde{r}}_{y_2^2{\left({x}_2^2\right)}^{\ast}}^2 & \cdots & {\tilde{r}}_{y_2^N{\left({x}_N^N\right)}^{\ast}}^N\\ \vdots & \vdots & \ddots & \vdots\\ {\tilde{r}}_{y_N^1{\left({x}_1^1\right)}^{\ast}}^1 & {\tilde{r}}_{y_N^2{\left({x}_2^2\right)}^{\ast}}^2 & \cdots & {\tilde{r}}_{y_N^N{\left({x}_N^N\right)}^{\ast}}^N\end{array}\right]\end{array} $$
(13)
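A companion sketch for (11)-(13): column k of \( {\mathbf{R}}_{yx} \) is the sample cross-correlation between the subarray \( {\mathbf{Y}}_a \) snapshots and the reference element \( {x}_k^k(t) \) of subarray \( {\mathbf{X}}_a \). As before, all values are illustrative assumptions and \( E[\cdot] \) is replaced by a snapshot average; noise is omitted for brevity.

```python
import numpy as np

# Finite-sample sketch of (11)-(13).  The second subarray Y_a sees each
# source shifted by the phase factor v(beta_i) = e^{j(2pi/lambda) dy cos(beta_i)}.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(2)
N, M, T = 6, 2, 2000
wavelength, dx, dy = 1.0, 0.5, 0.5
alpha = np.deg2rad([70.0, 110.0])
beta = np.deg2rad([65.0, 100.0])
n = np.arange(N)[:, None]
A = np.exp(-1j * 2 * np.pi / wavelength * n * dx * np.cos(alpha)[None, :])
phi = np.exp(1j * 2 * np.pi / wavelength * dy * np.cos(beta))   # v(beta_i)

S = (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
X = A @ S                                     # subarray X_a (noise-free)
Y = A @ (phi[:, None] * S)                    # subarray Y_a, shifted by v(beta_i)

# column k: (1/T) * sum_t Y(:, t) x_k(t)^*   (Eqs. (12)-(13))
Ryx = np.stack([(Y * X[k].conj()).mean(axis=1) for k in range(N)], axis=1)
print(Ryx.shape)
```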
To obtain the final matrix form of the equivalent auto-covariance matrix \( {\mathbf{R}}_{xx} \) in (10), we need to further investigate the auto-correlation calculation \( {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k \) in (6):
$$ \begin{array}{rl} {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k &= E\left[{x}_m^k(t){\left({x}_k^k(t)\right)}^{\ast}\right]\\ &= \sum_{i=1}^M\sum_{j=1}^M {s}_i(t){s}_j^{\ast }(t)\,{e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}+{\sigma}^2\delta \left( m, k\right)\\ &= \sum_{i=1}^M\sum_{j=1}^M {s}_i(t){s}_j^{\ast }(t)\,{e}^{- j\left(2\pi /\lambda \right)\left[\left( m-1\right)-\left( k-1\right)\right]{d}_x \cos {\alpha}_i}+{\sigma}^2\delta \left( m, k\right)\\ &= \sum_{i=1}^M\sum_{j=1}^M {s}_i(t){s}_j^{\ast }(t)\,{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_i}\cdot {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_i}+{\sigma}^2\delta \left( m, k\right)\\ &= \left[{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_1},\;{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_2},\;\cdots,\;{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_M}\right]\cdot\\ &\quad \left[\begin{array}{cccc} {g}_1(t) & 0 & \cdots & 0\\ 0 & {g}_2(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & {g}_M(t)\end{array}\right]\cdot \left[\begin{array}{c} {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_1}\\ {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_2}\\ \vdots\\ {e}^{j\left(2\pi /\lambda \right)\left( k-1\right){d}_x \cos {\alpha}_M}\end{array}\right]+{\sigma}^2\delta \left( m, k\right)\\ &= {\mathtt{a}}_m\left(\alpha \right)\mathbf{G}\,{\mathtt{a}}_k^{H}\left(\alpha \right)+{\sigma}^2\delta \left( m, k\right)\end{array} $$
(14)
where
$$ \mathbf{G}=\mathrm{diag}\left[{g}_1(t),\;{g}_2(t),\;\cdots,\;{g}_M(t)\right] $$
(15)
$$ {\mathtt{a}}_m\left(\alpha \right)=\left[{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_1},\;\cdots,\;{e}^{- j\left(2\pi /\lambda \right)\left( m-1\right){d}_x \cos {\alpha}_M}\right] $$
(16)
It can be seen from (16) that \( {\mathtt{a}}_m\left(\alpha \right) \) is the mth row of the steering matrix of the covariance matrix in the scenario where the first element of the subarray \( {\mathbf{X}}_a \) is set as the reference element. According to (14), (15), and (16), Eq. (9) can be rewritten as
$$ \begin{array}{rl} {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k &= {\left[{r}_{x_1^k{\left({x}_k^k\right)}^{\ast}}^k,\;{r}_{x_2^k{\left({x}_k^k\right)}^{\ast}}^k,\;\cdots,\; {r}_{x_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^{\mathrm{T}}\\ &= \mathbf{A}\left(\alpha \right)\mathbf{G}\,{\mathtt{a}}_k^{H}\left(\alpha \right)+{\sigma}^2\delta \left( m, k\right)\end{array} $$
(17)
where \( \mathbf{A}\left(\alpha \right)=\left[\mathtt{a}\left({\alpha}_1\right),\;\mathtt{a}\left({\alpha}_2\right),\;\cdots,\;\mathtt{a}\left({\alpha}_M\right)\right] \) is the steering matrix of the covariance matrix along the subarray \( {\mathbf{X}}_a \), and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,\;{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\;\cdots,\;{e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^{\mathrm{T}} \).
Based on (17), the matrix \( {\mathbf{R}}_{xx} \) in (10) can be rewritten as
$$ \begin{array}{rl} {\mathbf{R}}_{xx} &= \left[{\mathbf{r}}_{{\mathbf{X}}^1{\left({x}_1^1\right)}^{\ast}}^1,\;{\mathbf{r}}_{{\mathbf{X}}^2{\left({x}_2^2\right)}^{\ast}}^2,\;\cdots,\; {\mathbf{r}}_{{\mathbf{X}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ &= \mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^{H}\left(\alpha \right)+\mathrm{diag}\left[{\sigma}_1^2,\;{\sigma}_2^2,\;\cdots,\;{\sigma}_N^2\right]\end{array} $$
(18)
where \( {\sigma}_i^2 \) is the noise power on the ith element of the subarray \( {\mathbf{X}}_a \).
Similar to the equivalent auto-covariance matrix \( {\mathbf{R}}_{xx} \) in (18), the equivalent cross-covariance matrix \( {\mathbf{R}}_{yx} \) in (13) can be rewritten as
$$ \begin{array}{rl} {\mathbf{R}}_{yx} &= \left[{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^1{\left({x}_1^1\right)}^{\ast}}^1,\;{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^2{\left({x}_2^2\right)}^{\ast}}^2,\;\cdots,\; {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\mathbf{G}{\mathbf{A}}^{H}\left(\alpha \right)\end{array} $$
(19)
where
$$ \boldsymbol{\Psi} \left(\beta \right)=\left[\begin{array}{cccc} \upsilon \left({\beta}_1\right) & 0 & \cdots & 0\\ 0 & \upsilon \left({\beta}_2\right) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \upsilon \left({\beta}_M\right)\end{array}\right] $$
(20)
$$ \mathbf{G}=\mathrm{diag}\left[{g}_1(t),\;{g}_2(t),\;\cdots,\;{g}_M(t)\right] $$
(21)
From (18) and (19), since \( {\alpha}_i\ne {\alpha}_j \) for \( i\ne j \), \( \mathbf{A}\left(\alpha \right) \) is a full column rank matrix with \( \mathrm{rank}\left(\mathbf{A}\left(\alpha \right)\right)= M \). Similarly, since \( {\beta}_i\ne {\beta}_j \) for \( i\ne j \), \( \boldsymbol{\Psi} \left(\beta \right) \) is a full-rank diagonal matrix with \( \mathrm{rank}\left(\boldsymbol{\Psi} \left(\beta \right)\right)= M \). According to (7) and (15), the incident signals satisfy \( {s}_i(t)\ne 0 \) for \( i=1,2,\cdots, M \), so \( {g}_i(t)\ne 0 \). As a result, \( \mathbf{G} \) is a full-rank diagonal matrix, namely, \( \mathrm{rank}\left(\mathbf{G}\right)= M \). Specifically, if the narrowband far-field signals are statistically independent, the diagonal element \( {g}_i(t) \) of \( \mathbf{G} \) represents the power of the ith incident signal. If the narrowband far-field signals are fully coherent, \( {g}_i(t) \) denotes the sum of the powers of the M incident signals. If uncorrelated and coherent signals coexist, that is, there are K coherent signals and \( M- K \) statistically independent signals, then \( {g}_i(t) \) stands for the sum of the powers of the K coherent signals when the ith source belongs to the K coherent signals, and for the power of the ith independent signal when the ith source belongs to the remaining \( M- K \) mutually independent signals.
From the above theoretical analysis, the coherency of the incident signals is decorrelated through the matrix construction, regardless of whether the signals are uncorrelated, coherent, or partially correlated.
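This decorrelation property can be checked numerically at the level of the model matrices in (7), (15), and (22): for fully coherent sources, a conventional covariance built from the rank-one signal outer product is rank 1, while the constructed matrix \( \mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^{H}\left(\alpha \right) \) retains rank M. All sizes and signal values below are illustrative assumptions.

```python
import numpy as np

# Rank check of the decorrelation claim.  With a fully coherent (single-
# snapshot) source vector s, the conventional covariance A (s s^H) A^H is
# rank 1, but the reconstructed A G A^H with G from Eq. (7) has rank M.
N, M = 6, 3
wavelength, dx = 1.0, 0.5
alpha = np.deg2rad([50.0, 80.0, 120.0])       # illustrative, distinct angles
n = np.arange(N)[:, None]
A = np.exp(-1j * 2 * np.pi / wavelength * n * dx * np.cos(alpha)[None, :])

s = np.array([1.0 + 0.5j, -0.6 + 0.9j, 0.3 - 0.7j])   # coherent snapshot values
R_conventional = A @ np.outer(s, s.conj()) @ A.conj().T

g = s * np.conj(s.sum())              # g_i(t) = sum_j s_i(t) s_j^*(t), Eq. (7)
G = np.diag(g)                        # full-rank diagonal, Eq. (15)
Rxx_hat = A @ G @ A.conj().T          # Eq. (22)

print(np.linalg.matrix_rank(R_conventional), np.linalg.matrix_rank(Rxx_hat))
```

The chosen signal values satisfy \( s_i \ne 0 \) and \( \sum_j s_j \ne 0 \), so every \( g_i \) is non-zero and \( \mathbf{G} \) is invertible, which is exactly the condition used in the rank argument above.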
From (18), the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \) can be obtained as
$$ {\widehat{\mathbf{R}}}_{xx}=\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right) $$
(22)
The eigenvalue decomposition (EVD) of \( {\widehat{\mathbf{R}}}_{xx} \) can be written as
$$ {\widehat{\mathbf{R}}}_{xx}=\sum_{i=1}^M{\lambda}_i{\mathbf{U}}_i{\mathbf{U}}_i^H $$
(23)
where \( \left\{{\lambda}_1\ge {\lambda}_2\ge \cdots \ge {\lambda}_M\right\} \) and \( \left\{{\mathbf{U}}_1,{\mathbf{U}}_2,\cdots, {\mathbf{U}}_M\right\} \) are the non-zero eigenvalues and the corresponding eigenvectors of the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \), respectively. Then, the pseudo-inverse of \( {\widehat{\mathbf{R}}}_{xx} \) is
$$ {\mathbf{R}}_{xx}^{\dagger }=\sum_{i=1}^M{\lambda}_i^{-1}{\mathbf{U}}_i{\mathbf{U}}_i^H $$
(24)
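The paper forms \( {\mathbf{R}}_{xx}^{\dagger } \) from the M non-zero eigenpairs in (23)-(24). As a numerically robust stand-in (and since \( {\widehat{\mathbf{R}}}_{xx}=\mathbf{A}\mathbf{G}{\mathbf{A}}^{H} \) need not be Hermitian when the \( g_i(t) \) are complex), the sketch below uses a rank-truncated SVD pseudo-inverse and verifies the Moore-Penrose consistency conditions. All values are illustrative assumptions.

```python
import numpy as np

# Pseudo-inverse of the rank-M noiseless matrix, per the spirit of (23)-(24),
# implemented here via rank-truncated SVD as a practical stand-in for the EVD.
N, M = 6, 2
wavelength, dx = 1.0, 0.5
alpha = np.deg2rad([70.0, 110.0])             # illustrative angles
n = np.arange(N)[:, None]
A = np.exp(-1j * 2 * np.pi / wavelength * n * dx * np.cos(alpha)[None, :])
G = np.diag([1.5 + 0.4j, 0.8 - 0.3j])         # illustrative full-rank diagonal
Rxx_hat = A @ G @ A.conj().T                  # rank-M noiseless matrix, Eq. (22)

U, sv, Vh = np.linalg.svd(Rxx_hat)
# keep only the M dominant singular triplets, discard the numerical nullspace
R_pinv = Vh[:M].conj().T @ np.diag(1.0 / sv[:M]) @ U[:, :M].conj().T

# Moore-Penrose consistency checks
print(np.allclose(Rxx_hat @ R_pinv @ Rxx_hat, Rxx_hat))
```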
Since \( \mathbf{A}\left(\alpha \right) \) is a full column rank matrix, Eq. (22) can be expressed as
$$ \begin{array}{rl} \mathbf{G}{\mathbf{A}}^{H}\left(\alpha \right) &= {\mathbf{A}}^{\dagger}\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\\ &= {\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^{H}\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\end{array} $$
(25)
According to (19) and (25), the matrix \( {\mathbf{R}}_{yx} \) can be rewritten as
$$ \begin{array}{rl} {\mathbf{R}}_{yx} &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\mathbf{G}{\mathbf{A}}^{H}\left(\alpha \right)\\ &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^{H}\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\end{array} $$
(26)
Right-multiplying both sides of (26) by \( {\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right) \) gives
$$ {\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^{H}\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right) $$
(27)
Substituting (23) and (24) into (27) yields
$$ \begin{array}{rl} {\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right) &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^{H}\left(\alpha \right)\left(\sum_{i=1}^M{\lambda}_i{\mathbf{U}}_i{\mathbf{U}}_i^H\right)\left(\sum_{i=1}^M{\lambda}_i^{-1}{\mathbf{U}}_i{\mathbf{U}}_i^H\right)\mathbf{A}\left(\alpha \right)\\ &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^{H}\left(\alpha \right)\left(\sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H\right)\mathbf{A}\left(\alpha \right)\\ &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}\left({\mathbf{A}}^{H}\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)\\ &= \mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\end{array} $$
(28)
Notice that \( \sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H \) acts as an identity matrix on \( \mathbf{A}\left(\alpha \right) \), that is, \( \left(\sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H\right)\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right) \), since the columns of \( \mathbf{A}\left(\alpha \right) \) lie in the subspace spanned by \( {\mathbf{U}}_1,\cdots, {\mathbf{U}}_M \). Based on (24) and (26), a new matrix \( \mathbf{R} \) can be defined as follows:
$$ \mathbf{R}={\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger } $$
(29)
From (29), Eq. (28) can be further rewritten as
$$ \mathbf{R}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right) $$
(30)
Obviously, the columns of \( \mathbf{A}\left(\alpha \right) \) are the eigenvectors corresponding to the main diagonal elements of the diagonal matrix \( \boldsymbol{\Psi} \left(\beta \right) \). Therefore, by performing the EVD of \( \mathbf{R} \), both \( \mathbf{A}\left(\alpha \right) \) and \( \boldsymbol{\Psi} \left(\beta \right) \) can be obtained. Then, the DOA estimation of the coherent signals can be achieved according to \( \upsilon \left({\beta}_i\right)={e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} \) and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^{\mathrm{T}} \), without additional computations for parameter pair-matching or 2-D peak searching.
Up to now, the steps of the proposed matrix reconstruction method with finite sampling data are summarized as follows:
(1) Calculate the column vectors \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent auto-covariance matrix \( {\mathbf{R}}_{xx} \) by (6) and (9). Similarly, compute the column vectors \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent cross-covariance matrix \( {\mathbf{R}}_{yx} \) according to (11) and (12).

(2) Form the matrices \( {\mathbf{R}}_{xx} \) and \( {\mathbf{R}}_{yx} \) by (10) and (13).

(3) Obtain the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \) by (22). Then, perform the EVD to obtain the pseudo-inverse matrix \( {\mathbf{R}}_{xx}^{\dagger } \).

(4) Construct the new matrix \( \mathbf{R} \) by (29), and then obtain \( \mathbf{A}\left(\alpha \right) \) and \( \boldsymbol{\Psi} \left(\beta \right) \) by performing the EVD of \( \mathbf{R} \).

(5) Estimate the 2-D DOAs \( {\theta}_i=\left({\alpha}_i,{\beta}_i\right) \) of the incident coherent source signals via \( \upsilon \left({\beta}_i\right)={e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} \) and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^{\mathrm{T}} \).
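The steps above can be sketched end-to-end at the model level as follows. The sketch builds the exact matrices of (18), (19), and (29) for two coherent sources, then reads \( {\beta}_i \) from the eigenvalues \( \upsilon \left({\beta}_i\right) \) of \( \mathbf{R} \) and \( {\alpha}_i \) from the phase ramp of the corresponding eigenvectors \( \mathtt{a}\left({\alpha}_i\right) \). All array sizes, spacings, angles, and signal values are illustrative assumptions, and `np.linalg.pinv` stands in for the EVD-based pseudo-inverse of (23)-(24).

```python
import numpy as np

# Model-level sketch of steps (1)-(5): R = R_yx R_xx^dagger, then EVD of R
# gives eigenvalues v(beta_i) with eigenvectors a(alpha_i)  (Eqs. (29)-(30)).
wavelength = 1.0
dx = dy = wavelength / 2
N, M = 6, 2                                   # sensors per subarray, sources
alpha = np.deg2rad([70.0, 110.0])             # true angles w.r.t. x-axis
beta = np.deg2rad([65.0, 100.0])              # true angles w.r.t. y-axis

n = np.arange(N)[:, None]
A = np.exp(-1j * 2 * np.pi / wavelength * n * dx * np.cos(alpha)[None, :])
s = np.array([1.0 + 0.5j, 0.7 - 0.2j])        # coherent single-snapshot values
g = s * np.conj(s.sum())                      # g_i(t) = sum_j s_i s_j^*, Eq. (7)
G = np.diag(g)
Psi = np.diag(np.exp(1j * 2 * np.pi / wavelength * dy * np.cos(beta)))

Rxx_hat = A @ G @ A.conj().T                  # Eq. (22)
Ryx = A @ Psi @ G @ A.conj().T                # Eq. (19)
R = Ryx @ np.linalg.pinv(Rxx_hat)             # Eq. (29)

w, V = np.linalg.eig(R)                       # Eq. (30): R A(alpha) = A(alpha) Psi(beta)
idx = np.argsort(-np.abs(w))[:M]              # keep the M dominant eigenpairs
beta_est = np.arccos(np.angle(w[idx]) * wavelength / (2 * np.pi * dy))
# each eigenvector is proportional to a(alpha_i): read alpha_i from the
# constant phase ratio between consecutive elements
ratios = V[1:, idx] / V[:-1, idx]
alpha_est = np.arccos(-np.angle(ratios.mean(axis=0)) * wavelength / (2 * np.pi * dx))

print(np.sort(np.rad2deg(alpha_est)), np.sort(np.rad2deg(beta_est)))
```

Because each eigenvalue comes paired with its own eigenvector, the \( \left({\alpha}_i,{\beta}_i\right) \) estimates are automatically paired, which is the pair-matching-free property claimed above.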