For the subarray X_a, the autocorrelation calculation is defined as follows:
$$ \begin{aligned} r_{x_m^k\left(x_k^k\right)^{\ast}}^k &= E\left[x_m^k(t)\left(x_k^k(t)\right)^{\ast}\right]\\ &= \sum_{i=1}^{M} g_i(t)\, e^{\,j(2\pi/\lambda)(m-k)d_x\cos\alpha_i} + \sigma^2\delta(m,k) \end{aligned} $$
(6)
where
$$ g_i(t)=\sum_{j=1}^{M} s_i(t)\, s_j^{\ast}(t) $$
(7)
$$ \delta(m,k)=\begin{cases} 1, & m = k\\ 0, & m \ne k \end{cases} $$
(8)
Assume that the kth element of the subarray X_a is the phase reference. Thus, the autocorrelation vector \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between X^k(t) and the corresponding reference element \( {x}_k^k(t) \) can be defined as follows:
$$ \begin{aligned} \mathbf{r}_{\mathbf{X}^k\left(x_k^k\right)^{\ast}}^k &= E\left[\mathbf{X}^k(t)\left(x_k^k(t)\right)^{\ast}\right]\\ &= \left[r_{x_1^k\left(x_k^k\right)^{\ast}}^k, r_{x_2^k\left(x_k^k\right)^{\ast}}^k, \cdots, r_{x_N^k\left(x_k^k\right)^{\ast}}^k\right]^{\mathrm{T}} \end{aligned} $$
(9)
It is obvious that N column vectors are obtained as the superscript k of \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) varies from 1 to N. Therefore, we construct an equivalent autocovariance matrix R_xx as follows:
$$ \begin{aligned} \mathbf{R}_{xx} &= \left[\mathbf{r}_{\mathbf{X}^1\left(x_1^1\right)^{\ast}}^1, \mathbf{r}_{\mathbf{X}^2\left(x_2^2\right)^{\ast}}^2, \cdots, \mathbf{r}_{\mathbf{X}^N\left(x_N^N\right)^{\ast}}^N\right]\\ &= \begin{bmatrix} r_{x_1^1\left(x_1^1\right)^{\ast}}^1 & r_{x_1^2\left(x_2^2\right)^{\ast}}^2 & \cdots & r_{x_1^N\left(x_N^N\right)^{\ast}}^N\\ r_{x_2^1\left(x_1^1\right)^{\ast}}^1 & r_{x_2^2\left(x_2^2\right)^{\ast}}^2 & \cdots & r_{x_2^N\left(x_N^N\right)^{\ast}}^N\\ \vdots & \vdots & \ddots & \vdots\\ r_{x_N^1\left(x_1^1\right)^{\ast}}^1 & r_{x_N^2\left(x_2^2\right)^{\ast}}^2 & \cdots & r_{x_N^N\left(x_N^N\right)^{\ast}}^N \end{bmatrix} \end{aligned} $$
(10)
Similarly to (6), for the subarray Y_a, the cross-correlation \( {\tilde{r}}_{y_m^k{\left({x}_k^k\right)}^{\ast}}^k \) can be written as
$$ \begin{aligned} \tilde{r}_{y_m^k\left(x_k^k\right)^{\ast}}^k &= E\left[y_m^k(t)\left(x_k^k(t)\right)^{\ast}\right]\\ &= \sum_{i=1}^{M} g_i(t)\, e^{\,j(2\pi/\lambda)(m-k)d_x\cos\alpha_i}\, e^{\,j(2\pi/\lambda)d_y\cos\beta_i} \end{aligned} $$
(11)
Then, the cross-correlation vector \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between Y^k(t) and the reference element \( {x}_k^k(t) \) in subarray X_a can be expressed as
$$ \begin{aligned} \tilde{\mathbf{r}}_{\mathbf{Y}^k\left(x_k^k\right)^{\ast}}^k &= E\left[\mathbf{Y}^k(t)\left(x_k^k(t)\right)^{\ast}\right]\\ &= \left[\tilde{r}_{y_1^k\left(x_k^k\right)^{\ast}}^k, \tilde{r}_{y_2^k\left(x_k^k\right)^{\ast}}^k, \cdots, \tilde{r}_{y_N^k\left(x_k^k\right)^{\ast}}^k\right]^{\mathrm{T}} \end{aligned} $$
(12)
Obviously, another N column vectors are obtained when the superscript k of \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) varies from 1 to N. Based on these N column vectors, an equivalent cross-covariance matrix R_yx is given by
$$ \begin{aligned} \mathbf{R}_{yx} &= \left[\tilde{\mathbf{r}}_{\mathbf{Y}^1\left(x_1^1\right)^{\ast}}^1, \tilde{\mathbf{r}}_{\mathbf{Y}^2\left(x_2^2\right)^{\ast}}^2, \cdots, \tilde{\mathbf{r}}_{\mathbf{Y}^N\left(x_N^N\right)^{\ast}}^N\right]\\ &= \begin{bmatrix} \tilde{r}_{y_1^1\left(x_1^1\right)^{\ast}}^1 & \tilde{r}_{y_1^2\left(x_2^2\right)^{\ast}}^2 & \cdots & \tilde{r}_{y_1^N\left(x_N^N\right)^{\ast}}^N\\ \tilde{r}_{y_2^1\left(x_1^1\right)^{\ast}}^1 & \tilde{r}_{y_2^2\left(x_2^2\right)^{\ast}}^2 & \cdots & \tilde{r}_{y_2^N\left(x_N^N\right)^{\ast}}^N\\ \vdots & \vdots & \ddots & \vdots\\ \tilde{r}_{y_N^1\left(x_1^1\right)^{\ast}}^1 & \tilde{r}_{y_N^2\left(x_2^2\right)^{\ast}}^2 & \cdots & \tilde{r}_{y_N^N\left(x_N^N\right)^{\ast}}^N \end{bmatrix} \end{aligned} $$
(13)
To obtain the final matrix form of the equivalent autocovariance matrix R_xx in (10), we further investigate the autocorrelation \( {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k \) in (6):
$$ \begin{aligned} r_{x_m^k\left(x_k^k\right)^{\ast}}^k &= E\left[x_m^k(t)\left(x_k^k(t)\right)^{\ast}\right]\\ &= \sum_{i=1}^{M}\sum_{j=1}^{M} s_i(t)s_j^{\ast}(t)\, e^{\,j(2\pi/\lambda)(m-k)d_x\cos\alpha_i} + \sigma^2\delta(m,k)\\ &= \sum_{i=1}^{M}\sum_{j=1}^{M} s_i(t)s_j^{\ast}(t)\, e^{\,j(2\pi/\lambda)\left[(m-1)-(k-1)\right]d_x\cos\alpha_i} + \sigma^2\delta(m,k)\\ &= \sum_{i=1}^{M}\sum_{j=1}^{M} s_i(t)s_j^{\ast}(t)\, e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_i}\cdot e^{-j(2\pi/\lambda)(k-1)d_x\cos\alpha_i} + \sigma^2\delta(m,k)\\ &= \left[e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_1}, e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_2}, \cdots, e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_M}\right]\\ &\quad \cdot \begin{bmatrix} g_1(t) & 0 & \cdots & 0\\ 0 & g_2(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & g_M(t) \end{bmatrix} \cdot \begin{bmatrix} e^{-j(2\pi/\lambda)(k-1)d_x\cos\alpha_1}\\ e^{-j(2\pi/\lambda)(k-1)d_x\cos\alpha_2}\\ \vdots\\ e^{-j(2\pi/\lambda)(k-1)d_x\cos\alpha_M} \end{bmatrix} + \sigma^2\delta(m,k)\\ &= {\mathtt{a}}_m(\alpha)\,\mathbf{G}\,{\mathtt{a}}_k^{H}(\alpha) + \sigma^2\delta(m,k) \end{aligned} $$
(14)
where
$$ \mathbf{G}=\operatorname{diag}\left[g_1(t), g_2(t), \cdots, g_M(t)\right] $$
(15)
$$ {\mathtt{a}}_m(\alpha)=\left[e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_1}, e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_2}, \cdots, e^{\,j(2\pi/\lambda)(m-1)d_x\cos\alpha_M}\right] $$
(16)
It can be seen from (16) that \( {\mathtt{a}}_m\left(\alpha \right) \) is the mth row of the steering matrix of the covariance matrix when the first element of the subarray X_a is set as the reference element. According to (14), (15), and (16), Eq. (9) can be rewritten as
$$ \begin{aligned} \mathbf{r}_{\mathbf{X}^k\left(x_k^k\right)^{\ast}}^k &= \left[r_{x_1^k\left(x_k^k\right)^{\ast}}^k, r_{x_2^k\left(x_k^k\right)^{\ast}}^k, \cdots, r_{x_N^k\left(x_k^k\right)^{\ast}}^k\right]^{\mathrm{T}}\\ &= \mathbf{A}(\alpha)\,\mathbf{G}\,{\mathtt{a}}_k^{H}(\alpha) + \sigma^2\delta(m,k) \end{aligned} $$
(17)
where \( \mathbf{A}(\alpha)=\left[\mathtt{a}(\alpha_1), \mathtt{a}(\alpha_2), \cdots, \mathtt{a}(\alpha_M)\right] \) is the steering matrix of the covariance matrix along the subarray X_a, and \( \mathtt{a}(\alpha_i)=\left[1, e^{\,j(2\pi/\lambda)d_x\cos\alpha_i}, \cdots, e^{\,j(2\pi/\lambda)(N-1)d_x\cos\alpha_i}\right]^{\mathrm{T}} \).
Based on (17), the matrix R_xx in (10) can be rewritten as
$$ \begin{aligned} \mathbf{R}_{xx} &= \left[\mathbf{r}_{\mathbf{X}^1\left(x_1^1\right)^{\ast}}^1, \mathbf{r}_{\mathbf{X}^2\left(x_2^2\right)^{\ast}}^2, \cdots, \mathbf{r}_{\mathbf{X}^N\left(x_N^N\right)^{\ast}}^N\right]\\ &= \mathbf{A}(\alpha)\,\mathbf{G}\,\mathbf{A}^{H}(\alpha) + \operatorname{diag}\left[\sigma_1^2, \sigma_2^2, \cdots, \sigma_N^2\right] \end{aligned} $$
(18)
where \( {\sigma}_i^2 \) is the noise power on the ith element of the subarray X_a.
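As a quick numerical illustration, the noiseless matrix assembled entrywise from (6) and (10) agrees with the factorized form in (18). The wavelength, spacing, angles, and g_i values below are illustrative assumptions, not the paper's simulation parameters:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's setup).
lam, dx = 1.0, 0.5          # wavelength and x-axis spacing (d_x = lam/2)
N, M = 6, 2                 # elements per subarray, number of sources
alpha = np.deg2rad([65.0, 100.0])
g = np.array([1.5 + 0.2j, 0.9 - 0.4j])   # placeholder g_i values

# Entrywise construction of the noiseless R_xx from (6) and (10).
mk = np.arange(N)[:, None] - np.arange(N)[None, :]            # (m - k)
phase = np.exp(1j * 2*np.pi/lam * mk[..., None] * dx * np.cos(alpha))
Rxx = (phase * g).sum(axis=-1)                                # sum over i

# The same matrix in the factorized form A(alpha) G A^H(alpha) of (18).
n = np.arange(N)[:, None]
A = np.exp(1j * 2*np.pi/lam * n * dx * np.cos(alpha)[None, :])
print(np.allclose(Rxx, A @ np.diag(g) @ A.conj().T))          # True
```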
Similar to the equivalent autocovariance matrix R_xx in (18), the equivalent cross-covariance matrix R_yx in (13) can be rewritten as
$$ \begin{aligned} \mathbf{R}_{yx} &= \left[\tilde{\mathbf{r}}_{\mathbf{Y}^1\left(x_1^1\right)^{\ast}}^1, \tilde{\mathbf{r}}_{\mathbf{Y}^2\left(x_2^2\right)^{\ast}}^2, \cdots, \tilde{\mathbf{r}}_{\mathbf{Y}^N\left(x_N^N\right)^{\ast}}^N\right]\\ &= \mathbf{A}(\alpha)\,\boldsymbol{\Psi}(\beta)\,\mathbf{G}\,\mathbf{A}^{H}(\alpha) \end{aligned} $$
(19)
where
$$ \boldsymbol{\Psi}(\beta)=\begin{bmatrix} \upsilon(\beta_1) & 0 & \cdots & 0\\ 0 & \upsilon(\beta_2) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \upsilon(\beta_M) \end{bmatrix} $$
(20)
$$ \mathbf{G}=\operatorname{diag}\left[g_1(t), g_2(t), \cdots, g_M(t)\right] $$
(21)
From (18) and (19), it is easy to see that since α_i ≠ α_j for i ≠ j, A(α) is a full-column-rank matrix with rank(A(α)) = M. Similarly, since β_i ≠ β_j for i ≠ j, Ψ(β) is a full-rank diagonal matrix with rank(Ψ(β)) = M. According to (7) and (15), note that the incident signals s_i(t) ≠ 0 (i = 1, 2, ⋯, M), so g_i(t) ≠ 0. As a result, G is a full-rank diagonal matrix, namely, rank(G) = M. If the narrowband far-field signals are statistically independent, the diagonal element g_i(t) of the matrix G represents the power of the ith incident signal. If the narrowband far-field signals are fully coherent, the diagonal element g_i(t) denotes the sum of the powers of the M incident signals. If uncorrelated and coherent signals coexist, say K coherent signals and M − K statistically independent signals, then g_i(t) stands for the sum of the powers of the K coherent signals when the index i corresponds to one of the K coherent signals, and g_i(t) denotes the power of the ith independent signal when i corresponds to one of the remaining M − K mutually independent signals.
From the above theoretical analysis, the coherency of the incident signals is removed through the matrix construction, regardless of whether the signals are uncorrelated, coherent, or partially correlated.
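This decorrelation can be seen concretely in a small sketch (all parameters are illustrative assumptions): for fully coherent sources, the conventional covariance A(α)R_s A^H(α) collapses to rank 1, while the reconstructed A(α)GA^H(α) of (18) keeps rank M because G remains a full-rank diagonal matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, dx = 1.0, 0.5
N, M, T = 8, 3, 2000
alpha = np.deg2rad([60.0, 75.0, 100.0])   # assumed source angles
n = np.arange(N)[:, None]
A = np.exp(1j * 2*np.pi/lam * n * dx * np.cos(alpha)[None, :])

# Fully coherent sources: one common waveform with distinct real gains.
s0 = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
S = np.outer([1.0, 0.8, 0.5], s0)               # M x T waveforms
Rs = S @ S.conj().T / T                          # conventional source covariance
g = (S * np.conj(S.sum(axis=0))).mean(axis=1)    # time-averaged g_i from (7)

print(np.linalg.matrix_rank(A @ Rs @ A.conj().T))          # 1 (rank deficient)
print(np.linalg.matrix_rank(A @ np.diag(g) @ A.conj().T))  # 3 (= M, restored)
```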
From (18), we can obtain the noiseless autocovariance matrix \( {\widehat{\mathbf{R}}}_{xx} \) as
$$ {\widehat{\mathbf{R}}}_{xx}=\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right) $$
(22)
The eigenvalue decomposition (EVD) of \( {\widehat{\mathbf{R}}}_{xx} \) can be written as
$$ {\widehat{\mathbf{R}}}_{xx}={\displaystyle \sum_{i=1}^M{\lambda}_i{\mathbf{U}}_i{\mathbf{U}}_i^H} $$
(23)
where {λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_M} and {U_1, U_2, ⋯, U_M} are the nonzero eigenvalues and the corresponding eigenvectors of the noiseless autocovariance matrix \( {\widehat{\mathbf{R}}}_{xx} \), respectively. Then, the pseudo-inverse of \( {\widehat{\mathbf{R}}}_{xx} \) is
$$ \mathbf{R}_{xx}^{\dagger}=\sum_{i=1}^{M}\lambda_i^{-1}\mathbf{U}_i\mathbf{U}_i^{H} $$
(24)
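A minimal numerical sketch of (24), using an arbitrary Hermitian rank-M matrix as a stand-in for \( {\widehat{\mathbf{R}}}_{xx} \): the truncated-EVD pseudo-inverse coincides with the SVD-based Moore-Penrose pseudo-inverse computed by NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 6, 3
B = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
Rxx = B @ B.conj().T                      # Hermitian, rank M by construction

w, U = np.linalg.eigh(Rxx)                # real eigenvalues, ascending order
keep = np.argsort(-w)[:M]                 # the M nonzero eigenvalues
# Truncated-EVD pseudo-inverse: sum_i (1/lambda_i) U_i U_i^H, as in (24).
R_pinv = (U[:, keep] / w[keep]) @ U[:, keep].conj().T

print(np.allclose(R_pinv, np.linalg.pinv(Rxx)))   # True
```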
Since A(α) is a full-column-rank matrix, Eq. (22) can be expressed as
$$ \begin{aligned} \mathbf{G}\,\mathbf{A}^{H}(\alpha) &= \mathbf{A}^{\dagger}(\alpha)\,{\widehat{\mathbf{R}}}_{xx}\\ &= \left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^{H}(\alpha)\,{\widehat{\mathbf{R}}}_{xx} \end{aligned} $$
(25)
According to (19) and (25), the matrix R_yx can be rewritten as
$$ \begin{aligned} \mathbf{R}_{yx} &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\mathbf{G}\mathbf{A}^{H}(\alpha)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^{H}(\alpha)\,{\widehat{\mathbf{R}}}_{xx} \end{aligned} $$
(26)
Right-multiplying both sides of (26) by \( {\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right) \) gives
$$ \mathbf{R}_{yx}\mathbf{R}_{xx}^{\dagger}\mathbf{A}(\alpha)=\mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^{H}(\alpha)\,{\widehat{\mathbf{R}}}_{xx}\mathbf{R}_{xx}^{\dagger}\mathbf{A}(\alpha) $$
(27)
Substituting (23) and (24) into (27) yields
$$ \begin{aligned} \mathbf{R}_{yx}\mathbf{R}_{xx}^{\dagger}\mathbf{A}(\alpha) &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^{H}(\alpha)\left(\sum_{i=1}^{M}\lambda_i\mathbf{U}_i\mathbf{U}_i^{H}\right)\left(\sum_{i=1}^{M}\lambda_i^{-1}\mathbf{U}_i\mathbf{U}_i^{H}\right)\mathbf{A}(\alpha)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^{H}(\alpha)\left(\sum_{i=1}^{M}\mathbf{U}_i\mathbf{U}_i^{H}\right)\mathbf{A}(\alpha)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)^{-1}\left(\mathbf{A}^{H}(\alpha)\mathbf{A}(\alpha)\right)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta) \end{aligned} $$
(28)
Notice that \( \sum_{i=1}^{M}\mathbf{U}_i\mathbf{U}_i^{H} \) is the orthogonal projection onto the signal subspace, so it acts as an identity on the columns of A(α), i.e., \( \left(\sum_{i=1}^{M}\mathbf{U}_i\mathbf{U}_i^{H}\right)\mathbf{A}(\alpha)=\mathbf{A}(\alpha) \). Based on (24) and (26), a new matrix R can be defined as follows:
$$ \mathbf{R}={\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger } $$
(29)
With (29), Eq. (28) can be further rewritten as
$$ \mathbf{R}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right) $$
(30)
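The identity (30) can be checked numerically with model matrices built from assumed angles and an arbitrary full-rank diagonal G (NumPy's `pinv` stands in for the EVD-based pseudo-inverse of (24)):

```python
import numpy as np

# Assumed parameters for illustration only.
lam, dx, dy = 1.0, 0.5, 0.5
N, M = 7, 3
alpha = np.deg2rad([55.0, 80.0, 115.0])
beta = np.deg2rad([45.0, 95.0, 130.0])

n = np.arange(N)[:, None]
A = np.exp(1j * 2*np.pi/lam * n * dx * np.cos(alpha)[None, :])   # A(alpha)
Psi = np.diag(np.exp(1j * 2*np.pi/lam * dy * np.cos(beta)))      # Psi(beta)
G = np.diag(np.array([1.2, -0.7 + 0.5j, 0.9j]))                  # any full-rank diagonal

Rxx_hat = A @ G @ A.conj().T              # noiseless R_xx, (22)
Ryx = A @ Psi @ G @ A.conj().T            # R_yx, (19)
R = Ryx @ np.linalg.pinv(Rxx_hat)         # R = R_yx R_xx^dagger, (29)

print(np.allclose(R @ A, A @ Psi))        # True: columns of A are eigenvectors
```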
Obviously, the columns of A(α) are the eigenvectors corresponding to the main diagonal elements of the diagonal matrix Ψ(β). Therefore, by performing the EVD of R, both A(α) and Ψ(β) can be obtained. Then, the DOAs of the coherent signals can be estimated according to \( \upsilon(\beta_i)=e^{\,j(2\pi/\lambda)d_y\cos\beta_i} \) and \( \mathtt{a}(\alpha_i)=\left[1, e^{\,j(2\pi/\lambda)d_x\cos\alpha_i}, \cdots, e^{\,j(2\pi/\lambda)(N-1)d_x\cos\alpha_i}\right]^{\mathrm{T}} \) without additional computations for parameter pair-matching and 2D peak searching.
Up to now, the steps of the proposed matrix reconstruction method with finite sampling data are summarized as follows:

(1) Calculate the column vectors \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent autocovariance matrix R_xx by (6) and (9). Similarly, compute the column vectors \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent cross-covariance matrix R_yx according to (11) and (12).

(2) Obtain the matrix R_xx and the matrix R_yx by (10) and (13).

(3) Obtain the noiseless autocovariance matrix \( {\widehat{\mathbf{R}}}_{xx} \) by (22). Then, perform the EVD to obtain the pseudo-inverse matrix \( {\mathbf{R}}_{xx}^{\dagger} \).

(4) Construct the new matrix R by (29), and then obtain A(α) and Ψ(β) by performing the EVD of R.

(5) Estimate the 2D DOAs θ_i = (α_i, β_i) of the incident coherent source signals via \( \upsilon(\beta_i)=e^{\,j(2\pi/\lambda)d_y\cos\beta_i} \) and \( \mathtt{a}(\alpha_i)=\left[1, e^{\,j(2\pi/\lambda)d_x\cos\alpha_i}, \cdots, e^{\,j(2\pi/\lambda)(N-1)d_x\cos\alpha_i}\right]^{\mathrm{T}} \).
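The steps above can be sketched end to end as follows. This is an illustrative simulation under assumed parameters (half-wavelength spacings, a displaced-copy geometry for the subarray Y_a, three fully coherent sources); the model matrices of (18) and (19) are built directly in noiseless form, and NumPy's `pinv` stands in for the EVD-based pseudo-inverse of step (3):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, dx, dy = 1.0, 0.5, 0.5     # wavelength and spacings (assumed)
N, M, T = 8, 3, 1000            # elements, sources, snapshots

alpha = np.deg2rad([60.0, 75.0, 100.0])  # true angles from the x-axis
beta = np.deg2rad([50.0, 80.0, 110.0])   # true angles from the y-axis

# Fully coherent sources: one waveform with distinct complex gains.
s0 = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
c = np.array([1.0, 0.8 * np.exp(0.5j), 0.6 * np.exp(-1.1j)])
S = np.outer(c, s0)                      # M x T source waveforms

# Steering quantities per (16)/(20) and time-averaged g_i per (7).
n = np.arange(N)[:, None]
A = np.exp(1j * 2*np.pi/lam * n * dx * np.cos(alpha)[None, :])
Psi = np.diag(np.exp(1j * 2*np.pi/lam * dy * np.cos(beta)))
g = (S * np.conj(S.sum(axis=0))).mean(axis=1)
G = np.diag(g)

# Steps (1)-(2): equivalent covariance matrices, built here from the
# noiseless model forms (18) and (19).
Rxx_hat = A @ G @ A.conj().T
Ryx = A @ Psi @ G @ A.conj().T

# Steps (3)-(4): pseudo-inverse, then EVD of R = R_yx R_xx^dagger, (29).
R = Ryx @ np.linalg.pinv(Rxx_hat)
w, V = np.linalg.eig(R)
idx = np.argsort(-np.abs(w))[:M]          # the M nonzero eigenvalues

# Step (5): beta_i from the eigenvalue phases (upsilon), alpha_i from the
# phase ramp of the paired eigenvectors (columns of A(alpha)).
beta_est = np.arccos(np.angle(w[idx]) * lam / (2*np.pi*dy))
ratio = V[1, idx] / V[0, idx]
alpha_est = np.arccos(np.angle(ratio) * lam / (2*np.pi*dx))

print(np.sort(np.rad2deg(alpha_est)))     # ≈ [60, 75, 100]
print(np.sort(np.rad2deg(beta_est)))      # ≈ [50, 80, 110]
```

Because each eigenvalue υ(β_i) arrives with its own eigenvector a(α_i), the (α_i, β_i) estimates come out matched by construction, which is the "no pair-matching" property noted above.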