# DOA estimation using multiple measurement vector model with sparse solutions in linear array scenarios

Seyyed Moosa Hosseini^{1} (email author), R. A. Sadeghzadeh^{1}, and Bal Singh Virdee^{2}

**2017**:58

https://doi.org/10.1186/s13638-017-0838-y

© The Author(s). 2017

**Received: **5 September 2016

**Accepted: **7 March 2017

**Published: **28 March 2017

## Abstract

A novel algorithm is presented, based on the sparse multiple measurement vector (MMV) model, for direction of arrival (DOA) estimation of far-field narrowband sources. The algorithm exploits singular value decomposition denoising to enhance the reconstruction process. The multiple-vector nature of the MMV model enables the simultaneous processing of several data snapshots to obtain greater accuracy in the DOA estimation. The DOA problem is addressed in both uniform linear array (ULA) and nonuniform linear array (NLA) scenarios. Superior performance of the proposed method is demonstrated in terms of root mean square error and running time when compared with conventional compressed sensing methods such as simultaneous orthogonal matching pursuit (S-OMP), *l*_{2,1} minimization, and root-MUSIC.

### Keywords

Compressed sensing · Direction of arrival · Multiple measurement vector · Nonuniform linear array

## 1 Introduction

Compressed sensing (CS) is a novel paradigm in sampling and signal acquisition which has attracted considerable attention for applications in wireless communications, signal processing, and array processing [1–3]. This technique relies on the fact that many signals can be represented using only a few nonzero coefficients. The problem associated with CS concerns the recovery of a sparse signal by solving an under-determined system of equations. Indeed, CS theory takes advantage of a sparsity constraint on the solution vector to recover a high-dimensional signal from a small set of measurements. The conventional CS setup concerns the recovery of a single sparse vector, while many applications involve the acquisition of multiple signals. In this case, all the signals are sparse and exhibit the same indices for their nonzero coefficients. This setting leads to the recovery of a row-sparse matrix which has only a few nonzero rows. This problem is well known in sparse approximation and has been termed the multiple measurement vector (MMV) problem [2, 3] or the simultaneous sparse approximation (SSA) problem [4]. It can be considered an extension of the single measurement vector (SMV) problem, which represents the conventional form of the CS problem.

Direction of arrival estimation is a classic problem in the field of array processing, with applications in radar, sonar, acoustics, and communication systems [5]. To date, several methods have been developed to solve the DOA problem, such as MUSIC, ESPRIT, and Capon [5, 6]. In recent years, new approaches have been introduced that exploit the spatial sparsity of the source signals to obtain DOA estimates [7, 8]. These methods are based on defining a sampling grid on the angular solution space and solving the conventional single measurement CS problem.

In this paper, DOA estimation for narrowband far-field signals is addressed using the multiple measurement vector approach. A new algorithm is proposed, based on singular value decomposition (SVD), to solve the sparse MMV problem. The performance of the proposed method is compared with other conventional techniques, in particular simultaneous orthogonal matching pursuit (S-OMP) [4], *l*_{2,1} minimization [2], and Root-MUSIC [9]. Numerical simulations indicate that the proposed method outperforms these algorithms in terms of root mean square error (RMSE) and recovery rate, in both uniform linear array (ULA) and nonuniform linear array (NLA) scenarios.

The rest of the paper is organized as follows. An overview of CS is presented in Section 2. The system model for DOA estimation is presented in Section 3. MMV recovery algorithms are discussed in Section 4, including the proposed algorithm. Numerical simulations are provided in Section 5, and finally, the work is concluded in Section 6.

## 2 Compressed sensing

Compressed sensing addresses the recovery of a *K*-sparse vector x ∈ ℂ^{N}, i.e., a vector having only *K* nonzero elements, from linear measurements y ∈ ℂ^{M} such that y = Φx (*M* ≪ *N*). The matrix Φ, called the sensing matrix, performs the acquisition process on the sparse vector and delivers the measurement vector. The problem associated with compressed sensing is an under-determined system of equations; under some conditions, however, the solution vector can be recovered accurately from the measurement vector y [10]. The recovery of a single vector is often called the single measurement vector (SMV) problem. The extension of the SMV model to a finite set of jointly sparse vectors that share the same locations for their nonzero elements is known as the MMV problem [2]. The MMV problem is an appropriate model for several applications in medical imaging [11], array processing [7], and equalization of sparse communication channels [12]. The MMV model is the result of concatenating a finite number of SMV problems that share a common sparse support. The mathematical representation of the MMV model is

$$ Y = \Phi X, \tag{1} $$

where *Y* ∈ ℂ^{M × L} is the matrix of measurements, Φ represents the sensing matrix, and *X* ∈ ℂ^{N × L} is the row-sparse matrix which has only *K* nonzero rows. The row support of matrix *X* is defined as [4]:

$$ \operatorname{supp}(X) = \Omega = \left\{\, i \;:\; x^{i} \neq 0 \,\right\}, \tag{2} $$

where \( x^{i} \) denotes the *i*-th row of *X* and *Ω* is the index set of the nonzero rows. In the MMV setting, the goal is to jointly recover the set of vectors that share a common sparse support. The MMV recovery algorithms will be discussed later in Section 4. The theorem in [2] provides the necessary and sufficient uniqueness condition for the MMV recovery problem: the measurements *Y* = Φ*X* uniquely determine the row-sparse matrix *X* if and only if

$$ \left|\operatorname{supp}(X)\right| < \frac{\operatorname{spark}(\Phi) - 1 + \operatorname{rank}(X)}{2}. \tag{3} $$
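As a concrete illustration of the MMV model of Eq. (1), the following minimal NumPy sketch (not from the paper; the dimensions and random sensing matrix are arbitrary choices) builds a row-sparse *X*, applies the sensing matrix, and checks that all columns of *X* share a single support:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L, K = 8, 32, 5, 3           # measurements, ambient dim, vectors, sparsity
Phi = rng.standard_normal((M, N))  # sensing matrix

# Row-sparse X: all L columns share the same K-element support
support = rng.choice(N, size=K, replace=False)
X = np.zeros((N, L))
X[support, :] = rng.standard_normal((K, L))

Y = Phi @ X                        # MMV measurements, Eq. (1)

row_support = np.flatnonzero(np.any(X != 0, axis=1))
print(sorted(row_support) == sorted(support))  # True
print(np.linalg.matrix_rank(Y))                # generically equals K
```

Note that rank(*Y*) is generically *K*, a fact the uniqueness condition of Eq. (3) and the proposed algorithm of Section 4.3 both rely on.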

## 3 System model

Consider a uniform linear array composed of *M* identical and omnidirectional sensors. The elements of the array have *λ*/2 spacing between them, where *λ* is the wavelength of the signals impinging on the array. There are *K* independent, far-field, narrowband signals arriving from sources (targets) that impinge on the ULA at distinct angles *θ*_{i} (*i* = 1, …, *K*). The array output can then be represented as follows [5, 6]:

$$ \boldsymbol{y}(t) = A(\boldsymbol{\theta})\,\boldsymbol{s}(t) + \boldsymbol{w}(t), \tag{4} $$

where \( A(\boldsymbol{\theta}) = \left[ a\left({\theta}_1\right), \dots, a\left({\theta}_K\right) \right] \in {\mathbb{C}}^{M\times K} \) is the array manifold matrix whose columns are the steering vectors \( a({\theta}_i) \) of the arriving signals. The received signal is denoted by \( \boldsymbol{s}(t)={\left[{u}_1(t),\dots, {u}_K(t)\right]}^T\in {\mathbb{C}}^{K\times 1} \), with *u*_{i}(*t*) being the signal coming from the *i*-th source at time instance *t*. In addition, the received signal is corrupted by white Gaussian noise *w*(*t*). The number of time snapshots is *L*. The superscript “*T*” represents the matrix transpose operation. The purpose is to estimate *θ*_{i} (*i* = 1, …, *K*) based on the received signal y(*t*). In the specific case of an NLA, some sensors of the ULA are omitted; the NLA case is explained in detail in Section 5.2. Matrix *A*(θ) is not known in Eq. (4) because it depends on the unknown DOAs (θ = [*θ*_{1}, …, *θ*_{K}]). We can represent Eq. (4) in matrix form as:

$$ Y = A(\boldsymbol{\theta})\, S + W, \tag{5} $$

where *Y* ∈ ℂ^{M × L} is the array output matrix comprising all output vectors, \( S={\left[{s}_1,\dots, {s}_K\right]}^T\in {\mathbb{C}}^{K\times L} \) is the matrix of source signals with rows *s*_{i} = [*u*_{i}(1), …, *u*_{i}(*L*)] (1 ≤ *i* ≤ *K*), and \( W=\left[ w(1),\dots, w(L)\right]\in {\mathbb{C}}^{M\times L} \) is the noise matrix. Eq. (5) is derived by concatenating the vectors in Eq. (4); however, it is not a typical form of the CS problem, so the parameter estimation problem must be reformulated as a sparse representation problem. To transform Eq. (5) into a sparse representation problem, we consider all possible angles of arrival \( \Theta =\left\{{\tilde{\theta}}_1,{\tilde{\theta}}_2,\dots, {\tilde{\theta}}_N\ \right\} \), where *N* represents the number of potential solutions and must be much greater than *M* and *K*. In fact, Θ is a sampling grid of all possible DOAs; its elements cover the range between −90° and +90° with a desired step size. Under this assumption, we discretize the angular space uniformly into a grid and assume that the DOAs of the sources (targets) lie on the sampling grid Θ. We define \( \Phi =\left[ a\left({\tilde{\theta}}_1\right),\dots, a\left({\tilde{\theta}}_N\right)\right]\in {\mathrm{\mathbb{C}}}^{M\times N} \) as the steering matrix composed of the steering vectors corresponding to the angles in the sampling grid Θ. Unlike *A*(θ), matrix Φ is known and does not depend on the locations of the targets. Hence, we can rewrite Eq. (5) as a sparse MMV model [6]:

$$ Y = \Phi X + W, \tag{6} $$

where *X* ∈ ℂ^{N × L} is a row-sparse matrix that has only *K* nonzero rows, each corresponding to the signal from a specific source. Matrix *W* represents the noise matrix, and Φ is regarded as the sensing matrix. This new formulation conforms to the MMV model. The support of the sparse matrix *X* is denoted by supp(*X*) = Ω. From Section 2, we know that in the MMV model each column of the row-sparse matrix shares the same locations for its nonzero entries. In this formulation, the DOAs correspond to the elements of the support set Ω; in other words, the indices of the nonzero rows of *X* determine the columns of Φ (steering vectors) that have participated in constructing the output matrix *Y*. Eq. (6) is the sparse MMV representation for DOA estimation and can be considered the noisy version of Eq. (1). In Eq. (6), Φ is a known matrix with Vandermonde structure; thus, spark(Φ) = *M* + 1. When rank(*X*) = *K*, the theorem explained in Section 2 requires *M* ≥ *K* + 1, so the problem has a unique solution. The same relation is deducible from estimation theory, from which we know that the number of sensors must be greater than the number of targets. Regarding Eq. (6), one can find the DOAs by recovering the support of the row-sparse matrix *X*. If *L* = 1, the problem reduces to SMV. We can now apply sparse recovery algorithms to Eq. (6) and find the support set Ω to determine the angles of arrival. The recovery algorithms are discussed in the next section.
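The grid-based model of Eq. (6) can be sketched in NumPy as follows. The steering-vector phase convention and the example DOAs are illustrative assumptions; for the paper's λ/2-spaced ULA, sensor positions are integers in units of half wavelengths:

```python
import numpy as np

def steering_matrix(positions, grid_deg):
    """Phi (M x N): steering vectors for sensors at integer positions
    (units of lambda/2) over a grid of candidate DOAs in degrees.
    The phase convention exp(+j*pi*p*sin(theta)) is one common choice."""
    p = np.asarray(positions)[:, None]
    th = np.deg2rad(np.asarray(grid_deg))[None, :]
    return np.exp(1j * np.pi * p * np.sin(th))

M, L, K = 30, 70, 3
grid = np.arange(-90, 91)                   # 1-degree grid, N = 181
Phi = steering_matrix(np.arange(M), grid)

# Simulate Y = Phi X + W (Eq. (6)) with K on-grid sources
rng = np.random.default_rng(1)
idx = np.searchsorted(grid, [-20, 10, 45])  # assumed true DOAs
X = np.zeros((grid.size, L), dtype=complex)
X[idx, :] = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
W = 0.01 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
Y = Phi @ X + W
print(Phi.shape, Y.shape)   # (30, 181) (30, 70)
```

Each nonzero row of `X` carries one source's *L* snapshots, so recovering the row support of `X` from `Y` recovers the DOAs.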

## 4 Recovery algorithms for MMV model

### 4.1 Recovery via *l*_{2,1} minimization

*l*_{2,1} minimization is an extension of the well-known *l*_{1} minimization approach, used especially for the recovery of row-sparse matrices. This approach is based on the *l*_{p,q} norm, which is formulated by [2]:

$$ {\left\Vert X \right\Vert}_{p,q} = {\left( \sum_{i=1}^{N} {\left\Vert x^{i} \right\Vert}_{p}^{q} \right)}^{1/q}, \tag{7} $$

where *x*^{i} denotes the *i*-th row of matrix *X*; in particular, \( {\left\Vert X \right\Vert}_{2,1} = \sum_{i} {\left\Vert x^{i} \right\Vert}_{2} \). By using Eq. (7), the optimization-based algorithm recovers the row-sparse matrix by solving:

$$ \underset{X}{\min}\ {\left\Vert X \right\Vert}_{2,1} \quad \text{subject to} \quad {\left\Vert Y - \Phi X \right\Vert}_{F} \le \epsilon. \tag{8} $$

In Eq. (8), *Y* and Φ are given, and *ϵ* bounds the amount of noise in the recovered data. This method has to solve an optimization problem and is consequently very time consuming. In this paper, we use the SPGL1 toolbox [13] to solve Eq. (8), which is an alternative expression for basis pursuit denoising (BPDN). In the following, this *l*_{2,1} minimization method is referred to as BPDN.
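For reference, the mixed norm of Eq. (7) is straightforward to compute; the helper below is an illustrative sketch (not part of SPGL1) that evaluates \( \Vert X\Vert_{p,q} \) and, in particular, the *l*_{2,1} norm as the sum of the row *l*_{2} norms:

```python
import numpy as np

def norm_pq(X, p, q):
    """Mixed l_{p,q} norm of Eq. (7): take the l_p norm of each row of X,
    then the l_q norm of the resulting vector of row norms."""
    row_norms = np.linalg.norm(X, ord=p, axis=1)
    return float(np.linalg.norm(row_norms, ord=q))

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [5.0, 12.0]])
# l_{2,1}: sum of the row l2 norms = 5 + 0 + 13 = 18
print(norm_pq(X, 2, 1))  # 18.0
```

Minimizing this quantity promotes solutions with few nonzero rows, which is exactly the row-sparse structure of the DOA problem.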

### 4.2 S-OMP algorithm

The S-OMP algorithm is an extension of the well-known OMP method reported in [14], developed by Tropp et al. [4] to solve the sparse MMV problem. If *L* = 1, the algorithm reduces to the OMP method. In each iteration, S-OMP selects the index of the column of the sensing matrix that has the greatest aggregate correlation with the current residual of the measurement matrix.
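A minimal sketch of this greedy loop, tested on a generic random MMV instance rather than the paper's array data, could look like:

```python
import numpy as np

def somp(Y, Phi, K):
    """Simultaneous OMP sketch: each iteration picks the column of Phi
    with the greatest total correlation (summed over the L measurement
    vectors) with the residual, then re-fits by least squares."""
    support, R = [], Y.copy()
    for _ in range(K):
        corr = np.abs(Phi.conj().T @ R).sum(axis=1)
        corr[support] = -np.inf            # never pick an index twice
        support.append(int(np.argmax(corr)))
        X_s, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ X_s      # update residual
    X = np.zeros((Phi.shape[1], Y.shape[1]), dtype=Y.dtype)
    X[support, :] = X_s
    return X, sorted(support)

# Noiseless sanity check on a random MMV instance (L = 6 snapshots)
rng = np.random.default_rng(2)
Phi = rng.standard_normal((30, 60))
X0 = np.zeros((60, 6))
X0[[4, 17, 33], :] = rng.standard_normal((3, 6))
X_hat, est = somp(Phi @ X0, Phi, 3)
print(est)
```

With a well-conditioned random sensing matrix and a noiseless 3-sparse instance, the estimated support matches the true support `[4, 17, 33]` with overwhelming probability.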

### 4.3 Proposed algorithm

Consider first the noiseless MMV problem *Y* = Φ*X*, where *X* is a row-sparse matrix with *K* nonzero and independent rows. In this case, the rank of matrix *X* is equal to the rank of the measurement matrix *Y*, i.e., rank(*Y*) = rank(*X*) = *K*. Indeed, the measurement matrix *Y* is a linear composition of the rows of matrix *X*. Matrix Φ_{Ω} is obtained by selecting the columns of Φ indexed by Ω, so we can write range(*Y*) = range(Φ_{Ω}); in other words, the columns corresponding to the support set Ω span the range of *Y* due to the row sparsity of matrix *X*. In order to eliminate scaling, an orthonormal basis is computed, *U* = orth(*Y*). After computing *U*, we can find the elements of the support set Ω, which indicate the positions of the nonzero rows of *X* (equivalently, the participating columns of Φ), by checking the condition below:

**Proposed algorithm**

- **Input:** measurement matrix *Y*, sensing matrix Φ, sparsity level *K*
- **Initialize:** Ω ← [ ], *U* = orth(*Y*)
- \( \Omega =\left\{ j\ \Big|\ c_j = \frac{{\left\Vert {\Phi}_j^H U\right\Vert}_2}{{\left\Vert {\Phi}_j\right\Vert}_2}\right\} \), selecting the *K* column indices *j* that maximize \( c_j \)
- \( X={\Phi}_{\Omega}^{\dagger } Y \)
- **Output:** support set Ω, row-sparse matrix *X*

After finding Ω, the nonzero rows of the row-sparse matrix can be calculated easily using the matrix pseudo-inverse. The aforementioned method, however, will not work in the noisy environment described by Eq. (6). In the presence of uncorrelated noise, the matrix *Y* will be full rank, so rank(*Y*) ≠ *K*. Hence, the noise needs to be removed and the rank of matrix *Y* reduced to *K*. The SVD [7] is used to remove the additive noise from the measurement matrix. First, [*U*, Σ, *V*] = svd(*Y*) is calculated to obtain the dimensionality-reduced version *Y*_{red} = *YVD*, where the matrix *D* = [*I*_{K}, 0_{K × (L − K)}]′ selects the *K* singular vectors corresponding to the *K* targets. Matrix *Y*_{red} is a denoised version of *Y* which spans the target signal subspace, and its rank is equal to *K*. Next, the algorithm finds the columns of matrix Φ that have the greatest correlation with *U* = orth(*Y*_{red}). The columns of the sensing matrix that maximize \( {\left\Vert {\Phi}_j^H U\right\Vert}_2\ /{\left\Vert {\Phi}_j\right\Vert}_2 \) determine the elements of the support set Ω. Finally, the signal *X* is recovered using the pseudo-inverse operation. If *L* < *K*, the rank of *Y* is equal to *L* and the performance of the algorithm degrades. In this paper, we assume that the number of snapshots is greater than the number of targets, i.e., *L* > *K*.
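The steps above (SVD denoising, orthonormal basis, correlation test, pseudo-inverse) can be sketched as follows. The grid, the true DOAs, and the noise level in the example are illustrative assumptions:

```python
import numpy as np

def svd_mmv_doa(Y, Phi, K):
    """Sketch of the SVD-denoised MMV support recovery: keep the K
    principal right singular directions of Y (Y_red = Y V D), take an
    orthonormal basis of them, score every column of Phi against that
    subspace, keep the K best, and recover X via the pseudo-inverse."""
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    Y_red = Y @ Vh.conj().T[:, :K]           # M x K signal subspace
    U, _ = np.linalg.qr(Y_red)               # U = orth(Y_red)
    c = np.linalg.norm(Phi.conj().T @ U, axis=1) / np.linalg.norm(Phi, axis=0)
    support = np.sort(np.argsort(c)[-K:])    # K largest scores
    X = np.zeros((Phi.shape[1], Y.shape[1]), dtype=complex)
    X[support, :] = np.linalg.pinv(Phi[:, support]) @ Y
    return support, X

# Illustrative ULA example: M = 30 sensors, 1-degree grid, K = 3 sources
rng = np.random.default_rng(3)
M, L, K = 30, 70, 3
grid = np.arange(-90, 91)
Phi = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.deg2rad(grid))[None, :])
idx = np.array([70, 100, 135])               # grid indices of the true DOAs
X0 = np.zeros((grid.size, L), dtype=complex)
X0[idx, :] = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
W = 0.05 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
support, _ = svd_mmv_doa(Phi @ X0 + W, Phi, K)
print(grid[support])   # estimated DOAs in degrees
```

Because the correlation step works on the *M* × *K* matrix `Y_red` instead of the full *M* × *L* measurement matrix, the per-iteration cost is independent of the number of snapshots, which is the source of the running-time advantage reported in Section 5.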

## 5 Simulations

We considered the problem of DOA estimation in both uniform linear array and nonuniform linear array scenarios. Numerical simulations are used to compare the proposed method with other well-known techniques. Simulation results for ULA are presented in Section 5.1 and NLA investigation in Section 5.2.

### 5.1 Uniform linear array

In this part, several scenarios have been devised to compare the performance of the proposed DOA estimation method with others using a uniform linear array with *M* = 30 sensors. We consider three independent narrowband far-field sources (*K* = 3) located at three distinct random angles, uniformly selected from the range −90° to 90°; in each realization, new random angles are selected. The possible range is discretized with a step size of 1°; consequently, *N* = 181. The source signals are drawn from a Gaussian distribution with zero mean and unit variance. The criterion used to assess the performance of the algorithms is the empirical recovery rate (ERR), i.e., the probability of successful recovery of the sources. Suppose *N*_{r} realizations are performed and in each realization *K* targets are estimated, so that estimates are produced for *K* × *N*_{r} DOAs in total. With *N*_{success} denoting the total number of successfully recovered targets over all *N*_{r} realizations, ERR = *N*_{success}/(*K* × *N*_{r}).

The empirical recovery rate is first evaluated against the number of sources with *L* = 70 and SNR = 30 dB for the proposed algorithm, S-OMP, and BPDN. The results demonstrate that for *K* between 2 and 17, the proposed method achieves a better recovery rate than S-OMP and BPDN. For *K* ≥ 17, the failure rate of the proposed algorithm is comparable with that of the other two techniques. In other words, as the sparsity degrades (i.e., with more targets to estimate), the proposed algorithm, like the other compressed sensing methods, fails to reconstruct the sparse solution vector.

The next scenario considers *M* = 30, *L* = 70, and three sources impinging on the array at 10°, 70°, and 71°. To verify the superiority of the proposed algorithm over S-OMP, *l*_{2,1} minimization, and Root-MUSIC, the RMSE indicator is utilized, defined by:

$$ \mathrm{RMSE} = \sqrt{\frac{1}{K\, N_r} \sum_{j=1}^{N_r} \sum_{k=1}^{K} {\left( {\widehat{\theta}}_{kj} - {\theta}_k \right)}^{2}}, \tag{9} $$

where *N*_{r} is the number of independent Monte Carlo realizations, *K* is the number of sources, *θ*_{k} is the true DOA of the *k*-th source, and \( {\widehat{\theta}}_{kj} \) is the *j*-th estimate of *θ*_{k}.
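The RMSE criterion translates directly into code; the sketch below assumes the estimates are stored per realization:

```python
import numpy as np

def rmse_deg(theta_true, theta_hat):
    """RMSE of Eq. (9): theta_true has shape (K,); theta_hat has shape
    (N_r, K), its j-th row holding the j-th realization's estimates."""
    err = np.asarray(theta_hat) - np.asarray(theta_true)[None, :]
    return float(np.sqrt(np.mean(err ** 2)))

theta_true = [10.0, 70.0, 71.0]           # the scenario's true DOAs
theta_hat = [[10.0, 70.0, 71.0],          # realization 1: exact
             [11.0, 69.0, 72.0]]          # realization 2: 1 deg off each
print(rmse_deg(theta_true, theta_hat))    # sqrt(0.5) = 0.7071...
```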

Figure 3 demonstrates the efficiency of the proposed method in resolving adjacent angles at low SNRs. The proposed method outperforms the other algorithms and converges to the actual DOA values over a wider range of SNR values. Its main advantage is that it combines SVD denoising with the MMV approach to achieve better sparse signal recovery. In contrast, S-OMP is unable to discern adjacent targets even at an SNR of 30 dB, where its root mean square error remains fixed at 3.54°. The high coherence of matrix Φ makes this method inaccurate in scenarios with closely spaced targets. Root-MUSIC and BPDN are both effective only for SNRs higher than 15 dB.

Figure 4 compares the running times versus the number of targets for *M* = 30, *L* = 70, and SNR = 30 dB. One can observe that, in contrast to S-OMP, the running time of the proposed algorithm increases only negligibly with *K*. Moreover, although its running time is insensitive to *K* for *K* > 5, *l*_{2,1} minimization (BPDN) is a significantly slower algorithm, requiring about three orders of magnitude more time to resolve the targets. Figure 5 shows the running time against the number of snapshots. The results show once again that BPDN performs worst, requiring roughly a thousand-fold more time than the other two methods. They also show that the proposed method consumes less time than S-OMP because it operates on the reduced matrix *Y*_{red} ∈ ℂ^{M × K}, whereas S-OMP has to work with the larger matrix *Y* ∈ ℂ^{M × L}.

The final ULA experiment examines resolution ability for two sources with *L* = 100 and SNR = 5 dB. The angle between the two sources varies from 1° to 30°. The results for the proposed algorithm, S-OMP, and Root-MUSIC are shown in Fig. 6.

### 5.2 Nonuniform linear array

In this section, DOA estimation is investigated in NLA scenarios where low-angle tracking is required to discern targets that are close to each other. By using NLA configurations, the accuracy of angle measurements can be improved in comparison to uniform linear array setup with the same number of sensors. In other words, NLA configurations have a narrower beamwidth by using a wider aperture and therefore achieve a better DOA resolution.

A nonuniform linear array with an aperture of length *M′* × λ/2 and *M* sensors is denoted by \( {\mathrm{NLA}}_{M^{\prime}, M} \). For example, NLA_{30,20} denotes a 20-sensor linear array with an aperture length of 30 × λ/2. In Fig. 7, the sensors of an NLA_{5,3} are located at positions corresponding to the vector *p* = [0, 2, 5].
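The claim that a wider aperture lowers the mutual coherence of the steering matrix can be checked numerically; the sketch below compares a 3-sensor ULA with the NLA_{5,3} of Fig. 7 (the coherence measure and the grid are illustrative choices, not from the paper):

```python
import numpy as np

def nla_steering(positions, grid_deg):
    """Steering matrix for sensors at integer positions (units of
    lambda/2), e.g. p = [0, 2, 5] for the NLA_{5,3} of Fig. 7."""
    p = np.asarray(positions)[:, None]
    th = np.deg2rad(np.asarray(grid_deg))[None, :]
    return np.exp(1j * np.pi * p * np.sin(th))

# Grid excludes +/-90 deg, where the two endfire steering vectors of a
# half-wavelength-sampled array coincide and coherence is trivially 1.
grid = np.arange(-89, 90)
Phi_ula = nla_steering([0, 1, 2], grid)   # 3-sensor ULA, aperture 2
Phi_nla = nla_steering([0, 2, 5], grid)   # NLA_{5,3}, aperture 5

def max_coherence(Phi):
    """Largest normalized inner product between distinct columns."""
    G = np.abs(Phi.conj().T @ Phi) / Phi.shape[0]
    np.fill_diagonal(G, 0.0)
    return G.max()

print(max_coherence(Phi_nla) <= max_coherence(Phi_ula))  # True
```

The wider spread of sensor positions makes neighboring steering vectors decorrelate faster, which is exactly the mechanism credited below for the improved NLA reconstruction.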

The scenario of Fig. 3 is repeated using an NLA_{60,30}: the aperture is doubled in length while the number of sensors remains the same. The results in Fig. 8 show that the overall performance of all the CS-based methods has improved in comparison to the results in Fig. 3: the ULA with 30 sensors using the S-OMP method attains an error of 3.54° at SNR = 30 dB, while with the NLA_{60,30} the error limit is 1.44° at the same SNR. The improvement arises because the nonuniform arrangement lowers the mutual coherence between the array manifold columns, which helps the reconstruction method pick the correct DOAs from the sampling grid. An NLA can be considered a sampling machine taking random spatial samples of the arriving signals, and increased randomness is reported in [10] to improve the restricted isometry property (RIP). Furthermore, owing to the doubled aperture length in Fig. 8, the array's ability to discern closely located DOAs is enhanced.

The next experiment compares a ULA with 15 sensors to four NLAs with the same number of active sensors. The experiment is carried out as a Monte Carlo simulation with three randomly chosen angles in each iteration. Figure 9 shows that increasing the aperture, while keeping the number of sensors the same, significantly improves the array's ability to resolve DOAs. This is because the longer aperture provides a narrower beamwidth. From the CS perspective, a longer aperture in the NLA arrangement decreases the mutual coherence of the sensing matrix, since the sensors can be located farther from each other.

Finally, NLAs with the same aperture but different numbers of sensors are compared. While the NLA_{60,5}, with only 5 sensors, fails to achieve a satisfactory result for SNR < 20 dB, the other two alternatives with 10 and 15 sensors exhibit the desired performance. Indeed, the more sensors we employ, the more measurements we obtain; consequently, the sparse signal is recovered more accurately even at low SNRs.

## 6 Conclusions

The proposed algorithm, which is based on row-sparse matrix recovery for DOA estimation problem, not only handles multiple measurements but also converges significantly faster in comparison to other conventional compressed sensing algorithms. By using SVD decomposition, the proposed algorithm is able to achieve better results in terms of RMSE and ERR in low SNR situations. It consumes less time than its rivals because it works on a matrix with lower dimensions due to SVD denoising. Impact of aperture length and sensor number on the performance of proposed method was also investigated. The results show that for a fixed number of sensors, the array with longer aperture outperforms arrays with shorter aperture. When the aperture size is fixed, the array with more sensors exhibits better performance as more sensors acquire more measurements, which helps sparse signal recovery.

## Declarations

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

1. D.L. Donoho, Compressed sensing. IEEE Transactions on Information Theory **52**(4), 1289–1306 (2006)
2. Y.C. Eldar, G. Kutyniok, *Compressed Sensing: Theory and Applications* (Cambridge University Press, UK, 2012)
3. Y.C. Eldar, *Sampling Theory: Beyond Bandlimited Systems* (Cambridge University Press, UK, 2015)
4. J.A. Tropp, A.C. Gilbert, M.J. Strauss, Algorithms for simultaneous sparse approximation. Part I: greedy pursuit. Signal Processing **86**(3), 572–588 (2006)
5. H. Krim, M. Viberg, Two decades of array signal processing research: the parametric approach. IEEE Signal Processing Magazine **13**(4), 67–94 (1996)
6. Z. Chen, G. Gokeda, Y. Yu, *Introduction to Direction-of-Arrival Estimation* (Artech House, 2010)
7. D. Malioutov, M. Çetin, A.S. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Transactions on Signal Processing **53**(8), 3010–3022 (2005)
8. X. Li, X. Ma, S. Yan, C. Hou, Single snapshot DOA estimation by compressive sampling. Applied Acoustics **74**(7), 926–930 (2013)
9. A. Barabell, Improving the resolution performance of eigenstructure-based direction-finding algorithms, in *Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP'83)*, vol. 8, 1983, pp. 336–339
10. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory **52**(2), 489–509 (2006)
11. J.W. Phillips, R.M. Leahy, J.C. Mosher, MEG-based imaging of focal neuronal current sources. IEEE Transactions on Medical Imaging **16**(3), 338–348 (1997)
12. S.F. Cotter, B.D. Rao, Sparse channel estimation via matching pursuit with application to equalization. IEEE Transactions on Communications **50**(3), 374–377 (2002)
13. E. van den Berg, M.P. Friedlander, Sparse optimization with least-squares constraints. Tech. Rep., Dept. of Computer Science, Univ. of British Columbia (2010)
14. J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory **53**(12), 4655–4666 (2007)
15. S.M. Hosseini, R. Sadeghzadeh, M.J. Azizipour, A simultaneous sparse approximation approach for DOA estimation, in *Proc. IEEE Int. Conf. on Signal Processing, Informatics, Communication and Energy Systems (SPICES)*, 2015, pp. 1–5