
DOA estimation using multiple measurement vector model with sparse solutions in linear array scenarios

Abstract

A novel algorithm based on the sparse multiple measurement vector (MMV) model is presented for direction of arrival (DOA) estimation of far-field narrowband sources. The algorithm exploits singular value decomposition denoising to enhance the reconstruction process. The multiple-snapshot nature of the MMV model enables several data snapshots to be processed simultaneously, yielding greater accuracy in the DOA estimation. The DOA problem is addressed in both uniform linear array (ULA) and nonuniform linear array (NLA) scenarios. The proposed method demonstrates superior performance in terms of root mean square error and running time when compared with conventional methods such as simultaneous orthogonal matching pursuit (S-OMP), l2,1 minimization, and root-MUSIC.

1 Introduction

Compressed sensing (CS) is a paradigm shift in sampling and signal acquisition which has attracted considerable attention for applications in wireless communications, signal processing, and array processing [1–3]. The technique relies on the fact that many signals can be represented using only a few nonzero coefficients. The problem associated with CS concerns the recovery of a sparse signal by solving an underdetermined system of equations. Indeed, CS theory exploits a sparsity constraint on the solution vector to recover a high-dimensional signal from a small set of measurements. The conventional CS setup deals with the recovery of a single sparse vector, whereas many applications involve the acquisition of multiple signals. In this case, all signals are sparse and exhibit the same indices for their nonzero coefficients. This setting leads to the recovery of a row-sparse matrix which has only a few nonzero rows. The problem is well known in sparse approximation and has been termed the multiple measurement vector (MMV) problem [2, 3] or simultaneous sparse approximation (SSA) problem [4]. It can be considered an extension of the single measurement vector (SMV) problem, which represents the conventional form of the CS problem.

Direction of arrival estimation is a classic problem in array processing. It finds various applications in radar, sonar, acoustics, and communication systems [5]. To date, several methods have been developed to solve the DOA problem, such as MUSIC, ESPRIT, and Capon [5, 6]. In recent years, new approaches have been introduced that exploit the spatial sparsity of source signals to obtain DOA estimates [7, 8]. These methods are based on defining a sampling grid on the angular solution space and solving the conventional single measurement CS problem.

In this paper, DOA estimation for narrowband far-field signals is resolved using the multiple measurement vector approach. A new algorithm is proposed that is based on the singular value decomposition (SVD) to solve the sparse MMV problem. The performance of the proposed method is compared with other conventional techniques, in particular, simultaneous orthogonal matching pursuit (S-OMP) [4], l2,1 minimization [2], and root-MUSIC [9]. Numerical simulations indicate that the proposed method outperforms the aforementioned algorithms in terms of root mean square error (RMSE) and recovery rate in both uniform linear array (ULA) and nonuniform linear array (NLA) scenarios.

The rest of the paper is organized as follows. An overview of CS is presented in Section 2. The system model for DOA estimation is presented in Section 3. MMV recovery algorithms are discussed in Section 4, including the proposed algorithm. Numerical simulations are provided in Section 5, and finally, the work is concluded in Section 6.

2 Compressed sensing

The goal of CS is to recover an unknown K-sparse vector \( x\in {\mathrm{\mathbb{R}}}^N \), having K nonzero elements, from linear measurements \( y\in {\mathrm{\mathbb{R}}}^M \) such that y = Φx with M ≪ N. The matrix Φ, called the sensing matrix, performs the acquisition process on the sparse vector and delivers the measurement vector. The problem associated with compressed sensing is therefore an underdetermined system of equations; under certain conditions, the solution vector can nevertheless be recovered accurately from the measurement vector y [10]. The recovery of a single vector is often called the single measurement vector (SMV) problem. The extension of the SMV model to a finite set of jointly sparse vectors that share the same locations for their nonzero elements is known as the MMV problem [2]. The MMV problem is an appropriate model for several applications in medical imaging [11], array processing [7], and equalization of sparse communication channels [12]. The MMV model is the result of concatenating a finite number of SMV problems that share a common sparse support. The mathematical representation of the MMV model is

$$ Y=\Phi X $$
(1)

where \( Y\in {\mathrm{\mathbb{R}}}^{M\times L} \) is the matrix of measurements, Φ represents the sensing matrix, and \( X\in {\mathrm{\mathbb{R}}}^{N\times L} \) is the row-sparse matrix which has only K nonzero rows. The row support of matrix X is defined as [4]:

$$ \Omega =\mathrm{supp}(X)=\left\{ i\ \Big|\ {x}_{i j}\ne 0\ \mathrm{for}\ \mathrm{some}\ j\right\} $$
(2)

where Ω denotes the index set containing the indices of the nonzero rows. In the MMV setting, the goal is to jointly recover the set of vectors that share a common sparse support; the MMV recovery algorithms are discussed in Section 4. The theorem in [2] provides the necessary and sufficient uniqueness condition for the MMV recovery problem: the measurements Y = ΦX uniquely determine the row-sparse matrix X if and only if

$$ \left|\mathrm{supp}(X)\right|<\frac{\mathrm{spark}\left(\varPhi \right)-1+\mathrm{rank}(X)}{2} $$
(3)

where |.| stands for cardinality of a set, and spark(.) is the smallest number of linearly dependent columns of a matrix [2].
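
As a quick numerical illustration (with example values, not results from the paper), suppose Φ has a Vandermonde structure with M = 30 rows, so that spark(Φ) = M + 1 = 31, and the row-sparse matrix has rank(X) = K = 3 nonzero rows. Condition (3) is then comfortably satisfied:

$$ \left|\mathrm{supp}(X)\right| = 3 < \frac{\mathrm{spark}\left(\Phi \right)-1+\mathrm{rank}(X)}{2} = \frac{31-1+3}{2} = 16.5 $$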

3 System model

The general form of the ULA is described first; the NLA is then defined as a ULA with missing sensors. Consider a ULA made up of M identical omnidirectional sensors. The array elements have λ/2 spacing, where λ is the wavelength of the signals impinging on the array. There are K independent far-field narrowband signals arriving from sources (targets) that impinge on the ULA at distinct angles θ i (i = 1, …, K). The array output can then be represented as follows [5, 6]:

$$ \boldsymbol{y}(t)= A\left(\boldsymbol{\theta} \right)\boldsymbol{s}(t)+\boldsymbol{w}(t),\kern0.5em t=1,2,\dots, L $$
(4)

where \( A=\left[\boldsymbol{a}\left({\theta}_1\right),\boldsymbol{a}\left({\theta}_2\right),\dots, \boldsymbol{a}\left({\theta}_K\right)\right]\in {\mathcal{C}}^{M\times K} \) is the matrix of steering vectors and \( \boldsymbol{a}\left({\theta}_i\right)={\left[1,\dots, {e}^{- j\pi \left( M-1\right) \sin \left({\theta}_i\right)}\right]}^T\in {\mathcal{C}}^{M\times 1} \). Each steering vector corresponds to a distinct angle θ i of the arriving signals. The source signal vector is denoted by \( \boldsymbol{s}(t)={\left[{u}_1(t),\dots, {u}_K(t)\right]}^T\in {\mathcal{C}}^{K\times 1} \), with u i (t) being the signal coming from the i-th source at time instant t. In addition, the received signal is corrupted by white Gaussian noise w(t). The number of time snapshots is L, and the superscript "T" denotes the matrix transpose. The purpose is to estimate θ i (i = 1, …, K) from the received signal y(t). In the specific case of an NLA, some sensors of the ULA are omitted; the NLA case is explained in detail in Section 5.2. Matrix A(θ) in Eq. (4) is not known because it depends on the unknown DOAs θ = [θ 1, …, θ K ]. Eq. (4) can be written in matrix form as:

$$ Y= A\left(\boldsymbol{\theta} \right) S+ W $$
(5)

where \( Y\in {\mathcal{C}}^{M\times L} \) is the array output matrix comprising all output vectors, \( S={\left[{s}_1,\dots, {s}_K\right]}^T\in {\mathcal{C}}^{K\times L} \) is the matrix of source signals with s i  = [u i (1), …, u i (L)] (1 ≤ i ≤ K), and \( W=\left[ w(1),\dots, w(L)\right]\in {\mathcal{C}}^{M\times L} \) is the noise matrix. Eq. (5) is obtained by concatenating the vectors in Eq. (4); however, it is not yet in the standard form of a CS problem, so the parameter estimation problem must be reformulated as a sparse representation problem. To transform Eq. (5) into a sparse representation problem, all possible angles of arrival are collected in \( \Theta =\left\{{\tilde{\theta}}_1,{\tilde{\theta}}_2,\dots, {\tilde{\theta}}_N\ \right\} \), where N represents the number of potential solutions and must be much greater than M and K. In fact, Θ is a sampling grid of all possible DOAs. The elements of Θ cover the range from −90° to +90° with a desired step size; in this way, the angular space is discretized uniformly into a grid. In addition, the DOAs of the sources (targets) are assumed to lie on the sampling grid Θ. We define \( \Phi =\left[ a\left({\tilde{\theta}}_1\right),\dots, a\left({\tilde{\theta}}_N\right)\right]\in {\mathrm{\mathbb{C}}}^{M\times N} \) as the steering matrix composed of the steering vectors corresponding to the angles in the sampling grid Θ. Unlike A(θ), the matrix Φ is known and does not depend on the locations of the targets. Hence, Eq. (5) can be rewritten as a sparse MMV model [6]:

$$ Y=\Phi X+ W $$
(6)

where \( X\in {\mathcal{C}}^{N\times L} \) is a row-sparse matrix with only K nonzero rows. Each nonzero row of X corresponds to the signal from a specific source. Matrix W represents the noise matrix, and Φ is the sensing matrix. This new formulation conforms to the MMV model. The support of the sparse matrix X is denoted by supp(X) = Ω. From Section 2, we know that in the MMV model each column of the row-sparse matrix shares the same locations for its nonzero entries. In this formulation, the DOAs correspond to the elements of the support set Ω. In other words, the indices of the nonzero rows in X determine the columns of Φ (steering vectors) that participate in constructing the output matrix Y. Eq. (6) is the sparse MMV representation for DOA estimation and can be considered the noisy version of Eq. (1). In Eq. (6), Φ is a known matrix with Vandermonde structure; thus, spark(Φ) = M + 1. When rank(X) = K, the theorem in Section 2 requires K < (M + K)/2, i.e., M > K; hence, the problem has a unique solution. The same requirement is deducible from estimation theory, according to which the number of sensors must be greater than the number of targets. Regarding Eq. (6), the DOAs can be found by recovering the support of the row-sparse matrix X. If L = 1, the problem reduces to the SMV case. Sparse recovery algorithms can now be applied to Eq. (6) to find the support set Ω and hence the angles of arrival. Recovery algorithms are discussed in the next section.
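
To make the model in Eq. (6) concrete, the following Python/NumPy sketch builds the grid steering matrix Φ for a λ/2-spaced ULA and generates noisy multiple-snapshot measurements from a row-sparse X. The dimensions, random seed, and noise level are illustrative assumptions, not the exact setup of the paper's experiments.

```python
import numpy as np

# Illustrative sketch of the sparse MMV model Y = Phi X + W in Eq. (6).
M, N, L, K = 30, 181, 70, 3                      # sensors, grid points, snapshots, sources
grid = np.deg2rad(np.arange(-90, 91))            # sampling grid Theta, 1 degree steps

def ula_steering(angles_rad, m):
    """Steering matrix of a lambda/2-spaced ULA: element k has phase -j*pi*k*sin(theta)."""
    k = np.arange(m)[:, None]                    # element indices 0 .. m-1
    return np.exp(-1j * np.pi * k * np.sin(angles_rad)[None, :])

Phi = ula_steering(grid, M)                      # known M x N sensing (steering) matrix

rng = np.random.default_rng(0)
support = np.sort(rng.choice(N, size=K, replace=False))   # true DOAs as grid indices
X = np.zeros((N, L), dtype=complex)                       # row-sparse source matrix
X[support] = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)

noise_std = 10 ** (-30 / 20)                     # roughly a 30 dB per-source SNR (illustrative)
W = noise_std * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
Y = Phi @ X + W                                  # noisy MMV measurements
```

The true DOAs are the grid angles indexed by `support`; the recovery methods discussed next only have access to Y and Phi.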

4 Recovery algorithms for MMV model

4.1 Recovery via l2,1 minimization

The l2,1 minimization is an extension of the well-known l1 minimization approach, used especially for the recovery of row-sparse matrices. This approach is based on the lp,q norm, defined as [2]:

$$ {\left\Vert X\right\Vert}_{p, q}={\left({\displaystyle \sum_i}{\left\Vert {x}^i\right\Vert}_p^q\right)}^{1/ q} $$
(7)

where x i denotes the i-th row of matrix X. Using the norm in Eq. (7), an optimization-based algorithm can recover the row-sparse matrix by solving:

$$ \widehat{X}=\underset{X\in {\mathrm{\mathbb{R}}}^{N\times L}}{ \min }{\left\Vert X\right\Vert}_{2,1}\ s. t.\kern0.5em {\left\Vert Y-\Phi X\right\Vert}_2^2<\epsilon $$
(8)

In the preceding problem, Y and Φ are given, and ϵ bounds the residual energy, i.e., the amount of noise tolerated in the recovered data. This method requires solving a convex optimization problem and is consequently very time consuming. In this paper, the SPGL1 toolbox [13] is used to solve Eq. (8), which is an alternative expression of basis pursuit denoising (BPDN); the l2,1 minimization method is therefore referred to as BPDN in the remainder of the paper.
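
For reference, the sketch below illustrates one common way to handle an l2,1 objective, using a proximal-gradient iteration on the regularized (rather than constrained) form of Eq. (8). It is not the SPGL1 solver used in the paper, and the regularization weight `lam` and iteration count are hypothetical tuning choices.

```python
import numpy as np

def l21_prox_gradient(Y, Phi, lam=0.1, n_iter=500):
    """Proximal-gradient sketch for: minimize 0.5*||Y - Phi@X||_F^2 + lam*||X||_{2,1}.
    The row-wise soft thresholding is what promotes a common sparse support."""
    N, L = Phi.shape[1], Y.shape[1]
    X = np.zeros((N, L), dtype=complex)
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2               # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        G = X - step * (Phi.conj().T @ (Phi @ X - Y))      # gradient step on the data-fit term
        row_norms = np.linalg.norm(G, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(row_norms, 1e-12))
        X = shrink * G                                     # proximal step: row-wise shrinkage
    return X

# The DOAs are read off as the grid indices of the K largest row norms of the estimate:
# est_support = np.sort(np.argsort(np.linalg.norm(X_hat, axis=1))[-K:])
```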

4.2 S-OMP algorithm

The S-OMP algorithm is an extension of the well-known OMP method reported in [14] and was developed by Tropp et al. [4] to solve the sparse MMV problem. If L = 1, the algorithm reduces to the OMP method. In each iteration, S-OMP finds the index of the column of the sensing matrix that is most correlated with the current residual of the measurement matrix.
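
A minimal sketch of this greedy selection rule is given below, written from the description above rather than from the authors' code; the aggregation of the per-snapshot correlations and the stopping rule (a known K) are assumptions.

```python
import numpy as np

def somp(Y, Phi, K):
    """Simultaneous OMP sketch: in each iteration select the column of Phi whose
    aggregate correlation with the current residual (summed over snapshots) is largest."""
    residual = Y.copy()
    support = []
    for _ in range(K):
        corr = np.sum(np.abs(Phi.conj().T @ residual), axis=1)   # correlation per column
        corr[support] = 0.0                                      # do not reselect columns
        support.append(int(np.argmax(corr)))
        Phi_s = Phi[:, support]
        coeff, *_ = np.linalg.lstsq(Phi_s, Y, rcond=None)        # least-squares fit on support
        residual = Y - Phi_s @ coeff                             # update the residual
    X_hat = np.zeros((Phi.shape[1], Y.shape[1]), dtype=complex)
    X_hat[support] = coeff
    return np.sort(support), X_hat
```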

4.3 Proposed algorithm

The proposed algorithm is presented in Table 1. In this work, the SVD is used to compute the signal subspace and suppress the additive noise; this is combined with an iterative approach to reconstruct the row-sparse matrix [15]. For simplicity, first consider the noiseless case described in Eq. (1), i.e., Y = ΦX, where X is a row-sparse matrix with K nonzero, independent rows. In this case, the rank of matrix Y equals the rank of X, i.e., rank(Y) = rank(X) = K; indeed, the measurement matrix Y is a linear combination of the rows of matrix X. Matrix ΦΩ is obtained by selecting the columns of Φ indexed by Ω. Then range(Y) = range(ΦΩ); in other words, the columns corresponding to the support set Ω span the range of Y due to the row sparsity of X. To eliminate scaling, an orthonormal basis U = orth(Y) is computed. After computing U, the elements of the support set Ω, which indicate the columns of Φ associated with the nonzero rows of X, can be found by checking the condition below:

$$ \frac{{\left\Vert {\Phi}_j^H U\right\Vert}_2}{{\left\Vert {\Phi}_j\right\Vert}_2}=1\kern0.5em \mathrm{iff}\kern0.5em j\in \Omega $$
(9)

Table 1 Proposed algorithm

After finding Ω, the nonzero rows of the row-sparse matrix can be calculated easily using the matrix pseudo-inverse. The aforementioned method, however, will not work in the noisy setting described in Eq. (6): in the presence of uncorrelated noise, the matrix Y becomes full rank, so rank(Y) ≠ K. Hence, the noise must be removed and the rank of Y reduced to K. The SVD [7] is used to remove the additive noise from the measurement matrix. First, the SVD [u, Σ, V] = svd(Y) is computed to obtain the dimensionality-reduced version Y red  = YVD, where the matrix D = [I K , 0 K × (L − K)] ′ selects the K singular values corresponding to the K targets. Matrix Y red is a denoised version of Y that spans the target signal subspace, and its rank is equal to K. Next, the algorithm finds the columns of Φ that have the greatest correlation with U = orth(Y red ): the columns of the sensing matrix that maximize \( {\left\Vert {\Phi}_j^H U\right\Vert}_2\ /{\left\Vert {\Phi}_j\right\Vert}_2 \) determine the elements of the support set Ω. Finally, the signal X is recovered using the pseudo-inverse. If L < K, the rank of Y is equal to L and the performance of the algorithm degrades; in this paper, the number of snapshots is assumed to be greater than the number of targets, i.e., L > K.
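
The sketch below follows the steps described above (SVD denoising, support selection via Eq. (9), and pseudo-inverse recovery). It is the writer's reading of Table 1 rather than the authors' exact implementation; in particular, orth(·) is realized here with a thin SVD.

```python
import numpy as np

def svd_mmv_doa(Y, Phi, K):
    """SVD-based MMV recovery sketch for the proposed algorithm of Section 4.3."""
    # 1) SVD denoising: keep the K-dimensional signal subspace of Y (Y_red = Y*V*D).
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    Y_red = Y @ Vh.conj().T[:, :K]
    # 2) Orthonormal basis of the reduced measurements, i.e. U = orth(Y_red).
    U = np.linalg.svd(Y_red, full_matrices=False)[0][:, :K]
    # 3) Support selection: columns of Phi most aligned with range(Y_red), cf. Eq. (9).
    metric = np.linalg.norm(Phi.conj().T @ U, axis=1) / np.linalg.norm(Phi, axis=0)
    support = np.sort(np.argsort(metric)[-K:])
    # 4) Recover the nonzero rows with the pseudo-inverse of the selected columns.
    X_hat = np.zeros((Phi.shape[1], Y.shape[1]), dtype=complex)
    X_hat[support] = np.linalg.pinv(Phi[:, support]) @ Y
    return support, X_hat
```

The estimated DOAs are the grid angles indexed by `support`.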

5 Simulations

The problem of DOA estimation is considered in both uniform linear array and nonuniform linear array scenarios. Numerical simulations are used to compare the proposed method with other well-known techniques; results for the ULA are presented in Section 5.1 and for the NLA in Section 5.2.

5.1 Uniform linear array

In this part, several scenarios are devised to compare the performance of the proposed DOA estimation method with others using a uniform linear array with M = 30 sensors. We consider three independent narrowband far-field sources (K = 3) located at three distinct random angles. The source angles are drawn uniformly from the range −90° to 90°, and new random angles are selected in each realization. The possible range is discretized with a step size of 1°; hence, N = 181. The source signals are drawn from a Gaussian distribution with zero mean and unit variance. The criterion used to assess the performance of the algorithms is the empirical recovery rate (ERR), i.e., the probability of successful recovery of the sources. N r realizations are performed, and in each realization K targets are estimated, so K × N r DOAs are estimated in total. With N success denoting the total number of successfully recovered targets over the N r realizations, ERR = N success/(K × N r ).
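
A small helper illustrating the ERR computation is given below. The success criterion used here (an estimated grid index exactly matching a true one) is an assumption, since the paper does not spell out its exact matching rule.

```python
def empirical_recovery_rate(estimated_supports, true_supports):
    """ERR = N_success / (K * N_r). Both arguments are lists (one entry per
    realization) of index sets on the angular grid."""
    K = len(true_supports[0])
    n_success = sum(len(set(est) & set(true))
                    for est, true in zip(estimated_supports, true_supports))
    return n_success / (K * len(true_supports))
```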

Figure 1 shows the effect of the number of snapshots on the proposed algorithm, S-OMP, and BPDN. It is evident that more snapshots significantly improve the probability of source recovery with the proposed method, because the SVD used in the algorithm estimates the singular values more accurately when more snapshots are available. In contrast, increasing the number of snapshots only marginally improves the performance of S-OMP and BPDN.

Fig. 1 Effect of number of snapshots on the recovery rate for SNR = 30 dB, M = 30, and K = 3

Figure 2 depicts ERR versus the number of sources for L = 70 and SNR = 30 dB for the proposed algorithm, S-OMP, and BPDN. It demonstrates that for K between 2 and 17, the proposed method achieves a better recovery rate than S-OMP and BPDN. For K ≥ 17, the failure rate of the proposed algorithm is comparable with that of the other two techniques. In other words, as the sparsity assumption weakens (i.e., with more targets to estimate), the proposed algorithm fails to reconstruct the sparse solution, just like the other compressed sensing methods.

Fig. 2 Effect of increasing the number of sources (K) on ERR for SNR = 30 dB, L = 70, and M = 30

In Fig. 3, a different scenario is assumed: M = 30, L = 70, and three sources impinge on the array at 10°, 70°, and 71°. To verify the superiority of the proposed algorithm over S-OMP, l2,1 minimization, and root-MUSIC, the RMSE indicator is utilized, defined by:

Fig. 3 Performance comparison of RMSE for DOAs at angles 10°, 70°, and 71°

$$ \mathrm{RMSE}=\frac{1}{K}{\displaystyle \sum_{k=1}^K}\sqrt{\frac{1}{N_r}{\displaystyle \sum_{j=1}^{N_r}}{\left({\widehat{\theta}}_{k j}-{\theta}_k\right)}^2} $$
(10)

where N r is the number of independent Monte Carlo realizations, K is the number of sources, θ k is the true DOA, and \( {\widehat{\theta}}_{kj} \) is the j-th estimate of θ k .
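
For completeness, a direct transcription of Eq. (10) is shown below; `theta_hat` is assumed to hold the N r Monte Carlo estimates of each of the K DOAs.

```python
import numpy as np

def rmse(theta_hat, theta_true):
    """Eq. (10): theta_hat has shape (N_r, K); theta_true has shape (K,), in degrees."""
    per_source = np.sqrt(np.mean((theta_hat - theta_true[None, :]) ** 2, axis=0))
    return per_source.mean()                     # average the per-source RMSE over K sources
```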

Figure 3 demonstrates the efficiency of the proposed method in resolving adjacent angles at low SNRs. The proposed method outperforms the other algorithms and converges to the actual DOA values over a wider range of SNR values. The main advantage of the proposed method is that it combines SVD denoising with the MMV approach to achieve better sparse signal recovery. In contrast, S-OMP is unable to discern adjacent targets even at SNR = 30 dB, where its root mean square error remains fixed at 3.54°. The high coherence of the matrix Φ makes this method inaccurate in scenarios with closely spaced targets. Root-MUSIC and BPDN are both efficient for SNRs higher than 15 dB.

Running time investigations of the proposed algorithm, S-OMP, and BPDN are shown in Figs. 4 and 5 for M = 30, L = 70, and SNR = 30 dB. In contrast to S-OMP, the running time of the proposed algorithm increases only negligibly with K. Moreover, although it is insensitive to K for K > 5, l2,1 minimization (BPDN) is significantly slower, requiring about three orders of magnitude more time to resolve the targets. Figure 5 shows the running time against the number of snapshots. The results show once again that BPDN performs the worst, requiring a thousand-fold more time than the other two methods. They also show that the proposed method is faster because, after SVD denoising, it works with the reduced M × K matrix Y red , whereas S-OMP has to work with the larger M × L matrix Y.

Fig. 4 Running time of the proposed and other algorithms versus the number of targets

Fig. 5 Running time as a function of the number of snapshots

The last simulation of this part is dedicated to the case of two closely spaced sources with varying angular separation. Here, the ULA employs 20 sensors, the number of snapshots is L = 100, and SNR = 5 dB. The angle between the two sources varies from 1° to 30°. The results for the proposed algorithm, S-OMP, and root-MUSIC are shown in Fig. 6.

Fig. 6 Bias of localizing two sources as a function of separation for SNR = 5 dB

5.2 Nonuniform linear array

In this section, DOA estimation is investigated in NLA scenarios, where low-angle tracking is required to discern targets that are close to each other. By using NLA configurations, the accuracy of angle measurements can be improved compared to a uniform linear array with the same number of sensors. In other words, NLA configurations achieve a narrower beamwidth by using a wider aperture and therefore provide better DOA resolution.

If one or more sensors in a ULA malfunction, the array can be considered an NLA. Since popular and efficient methods such as root-MUSIC cannot be used with NLAs, the CS approach is applied to various NLA configurations. An NLA with aperture M′ and M sensors is denoted by \( {\mathrm{NLA}}_{M^{\hbox{'}}, M} \). For example, NLA30,20 denotes a 20-sensor linear array with an aperture length of 30 × λ/2. In Fig. 7, the sensors of an NLA5,3 are located at the positions given by the vector p = [0, 2, 5].

Fig. 7 The dark circles denote the active sensors and the white circles the omitted sensors

The steering vector associated with the array in Fig. 7 can be written as \( a\left({\mu}_i\right)={\left[1,\ {e}^{j2{\mu}_i},\ {e}^{j5{\mu}_i}\right]}^T \), where \( {\mu}_i=-\frac{2\pi}{\lambda}\ \Delta \sin \left({\uptheta}_{\mathrm{i}}\right) \) and Δ represents the unit spacing between candidate sensor positions. With this nonuniform structure, the distances between the array elements increase; consequently, the coherence of the sensing matrix decreases and the recovery process becomes more efficient. To evaluate the performance of the recovery algorithms, simulations were carried out for the scenario of Fig. 3 using NLA60,30; here, the aperture is doubled in length while the number of sensors remains the same. Figure 8 shows that the overall performance of all CS-based methods improves compared to the results in Fig. 3: with the 30-sensor ULA, S-OMP attains an error of 3.54° at SNR = 30 dB, whereas with NLA60,30 the error drops to 1.44° at the same SNR. The improvement arises because the nonuniform arrangement lowers the mutual coherence between the array manifold columns, which helps the reconstruction method pick the correct DOAs from the sampling grid. The NLA can be viewed as a sampling machine taking random spatial samples of the arriving signals, and increasing randomness is reported in [10] to tighten the RIP property. Furthermore, owing to the doubled aperture length, the ability of the array to discern closely located DOAs is enhanced.
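
Constructing the grid steering matrix for an NLA only requires the sensor position vector p; a short sketch (assuming λ/2 unit spacing between candidate positions, consistent with the ULA model above) is shown below.

```python
import numpy as np

def nla_steering(grid_rad, positions):
    """Grid steering matrix for a nonuniform linear array whose sensors occupy
    the listed positions of the underlying lambda/2-spaced ULA."""
    p = np.asarray(positions)[:, None]           # positions in units of lambda/2
    return np.exp(-1j * np.pi * p * np.sin(grid_rad)[None, :])

grid = np.deg2rad(np.arange(-90, 91))
Phi_nla = nla_steering(grid, [0, 2, 5])          # 3 x 181 matrix for the NLA_{5,3} of Fig. 7
```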

Fig. 8 Performance comparison of RMSE in the NLA scenario for NLA60,30

The second experiment concerns the effect of aperture length on DOA estimation in the NLA scenario. We compare ULA15 to four NLAs with the same number of active sensors. The experiment is carried out as a Monte Carlo simulation with three randomly chosen angles in each iteration. Figure 9 shows that increasing the aperture, while keeping the number of sensors fixed, significantly improves the array's ability to resolve DOAs, because the longer aperture provides a narrower beamwidth. From the CS perspective, a longer aperture in the NLA arrangement decreases the mutual coherence of the sensing matrix since the sensors can be located farther from each other.

Fig. 9 The role of aperture length in DOA estimation with NLAs using the proposed method

The last experiment investigates the performance of an array with fixed aperture and a variable number of sensors. The results are summarized in Fig. 10. While NLA60,5 with only 5 sensors fails to achieve a satisfactory result for SNR < 20 dB, the other two alternatives with 10 and 15 sensors exhibit the desired performance. Indeed, the more sensors that are exploited, the more measurements are obtained; consequently, the sparse signal is recovered more accurately even at low SNRs.

Fig. 10 Effect of the number of sensors when the aperture size is fixed for the proposed method

6 Conclusions

The proposed algorithm, which is based on row-sparse matrix recovery for the DOA estimation problem, not only handles multiple measurements but also runs significantly faster than other conventional compressed sensing algorithms. By using the SVD, the proposed algorithm achieves better results in terms of RMSE and ERR in low-SNR situations, and it consumes less time than its rivals because it works on a matrix of lower dimension after SVD denoising. The impact of aperture length and the number of sensors on the performance of the proposed method was also investigated. The results show that, for a fixed number of sensors, an array with a longer aperture outperforms arrays with shorter apertures. When the aperture size is fixed, an array with more sensors performs better, since more sensors acquire more measurements, which aids sparse signal recovery.

References

  1. DL Donoho, Compressed sensing. IEEE Transactions on Information Theory 52(4), 1289–1306 (2006)

  2. YC Eldar, G Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, UK, 2012)

  3. YC Eldar, Sampling Theory: Beyond Bandlimited Systems (Cambridge University Press, Cambridge, UK, 2015)

  4. JA Tropp, AC Gilbert, MJ Strauss, Algorithms for simultaneous sparse approximation. Part I: greedy pursuit. Signal Processing 86(3), 572–588 (2006)

  5. H Krim, M Viberg, Two decades of array signal processing research: the parametric approach. IEEE Signal Processing Magazine 13(4), 67–94 (1996)

  6. Z Chen, G Gokeda, Y Yu, Introduction to Direction-of-Arrival Estimation (Artech House, 2010)

  7. D Malioutov, M Çetin, AS Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Transactions on Signal Processing 53(8), 3010–3022 (2005)

  8. X Li, X Ma, S Yan, C Hou, Single snapshot DOA estimation by compressive sampling. Applied Acoustics 74(7), 926–930 (2013)

  9. A Barabell, Improving the resolution performance of eigenstructure-based direction-finding algorithms, in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP '83), vol. 8, 1983, pp. 336–339

  10. EJ Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52(2), 489–509 (2006)

  11. JW Phillips, RM Leahy, JC Mosher, MEG-based imaging of focal neuronal current sources. IEEE Transactions on Medical Imaging 16(3), 338–348 (1997)

  12. SF Cotter, BD Rao, Sparse channel estimation via matching pursuit with application to equalization. IEEE Transactions on Communications 50(3), 374–377 (2002)

  13. E van den Berg, MP Friedlander, Sparse Optimization with Least-Squares Constraints. Tech. Rep., Dept. of Computer Science, Univ. of British Columbia (2010)

  14. JA Tropp, AC Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory 53(12), 4655–4666 (2007)

  15. SM Hosseini, R Sadeghzadeh, MJ Azizipour, A simultaneous sparse approximation approach for DOA estimation, in Proc. IEEE Int. Conf. on Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015, pp. 1–5


Competing interests

The authors declare that they have no competing interests.


Author information


Correspondence to Seyyed Moosa Hosseini.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
