Improved least mean square algorithm with application to adaptive sparse channel estimation
 Guan Gui^{1} and
 Fumiyuki Adachi^{1}
https://doi.org/10.1186/1687-1499-2013-204
© Gui and Adachi; licensee Springer. 2013
Received: 20 December 2012
Accepted: 28 July 2013
Published: 5 August 2013
Abstract
Least mean square (LMS)-based adaptive algorithms have attracted much attention due to their low computational complexity and reliable recovery capability. To exploit the channel sparsity, LMS-based adaptive sparse channel estimation methods have been proposed based on different sparse penalties, such as ℓ_{1}-norm LMS or zero-attracting LMS (ZA-LMS), reweighted ZA-LMS, and ℓ_{p}-norm LMS. However, the aforementioned methods cannot fully exploit channel sparse structure information. To take full advantage of channel sparsity, in this paper, an improved sparse channel estimation method using the ℓ_{0}-norm LMS algorithm is proposed. The LMS-type sparse channel estimation methods have a common drawback of sensitivity to the scaling of the random training signal, so it is very hard to choose a proper learning rate that achieves robust estimation performance. To solve this problem, we propose several improved adaptive sparse channel estimation methods using the normalized LMS algorithm, which normalizes the power of the input signal, with different sparse penalties. Furthermore, the Cramer-Rao lower bound of the proposed adaptive sparse channel estimator is derived based on prior information of the channel taps' positions. Computer simulation results demonstrate the advantage of the proposed channel estimation methods in terms of mean square error performance.
1 Introduction
Traditional least mean square (LMS) is one of the most popular algorithms for adaptive system identification [9], e.g., channel estimation. LMS-based adaptive channel estimation can be easily implemented due to its low computational complexity. In current broadband wireless communication systems, the channel impulse response in the time domain is often described by a sparse channel model, supported by only a few large coefficients. The LMS-based adaptive channel estimation method never takes advantage of this sparse structure, although its mean square error (MSE) lower bound has a direct relationship with the finite impulse response (FIR) channel length. To improve the estimation performance, many algorithms have recently been proposed that take advantage of the sparse nature of the channel. For example, based on the recent theory of compressive sensing (CS) [10, 11], various sparse channel estimation methods have been proposed in [12–14]. Some of these methods are known to achieve robust estimation, e.g., sparse channel estimation using the least-absolute shrinkage and selection operator [15]. However, these kinds of sparse methods have two potential disadvantages: one is that the computational complexity may be very high, especially for tracking fast time-variant channels; the other is that the training signal matrices for these CS-based sparse channel estimation methods are required to satisfy the restricted isometry property [16]. It is well known that designing such training matrices is a non-deterministic polynomial-time (NP)-hard problem [17].
To avoid these two detrimental problems of sparse channel estimation, a variation of the LMS algorithm with an ℓ_{1}-norm sparse constraint has been proposed in [18]. The ℓ_{1}-norm sparse penalty is incorporated into the cost function of the conventional LMS algorithm, which results in an LMS update with a zero attractor, namely zero-attracting LMS (ZA-LMS), and reweighted ZA-LMS (RZA-LMS) [18], which is motivated by the reweighted ℓ_{1}-norm minimization recovery algorithm [19]. To further improve the estimation performance, an adaptive sparse channel estimation method using the ℓ_{p}-norm LMS (LP-LMS) algorithm has been proposed [20]. However, there still exists a performance gap between LP-LMS and optimal sparse channel estimation. It is worth mentioning that optimal channel estimation is often characterized by the sparse lower bound derived in Theorem 4 in Section 3.
Because solving the LP-LMS problem is non-convex, the algorithm cannot obtain an optimal adaptive sparse solution [10, 11]. Hence, a computationally efficient algorithm is required to obtain a more accurate adaptive sparse channel estimate.
According to CS theory [10, 11], solving the ℓ_{0}-norm sparse penalty problem yields an optimal sparse solution. In other words, an ℓ_{0}-norm sparse penalty on the LMS algorithm is a good candidate for achieving more accurate channel estimates. This background motivates us to use the ℓ_{0}-norm sparse penalty on LMS in order to improve the estimation performance.
In addition, since sparse LMS-based channel estimation methods have a common drawback of sensitivity to the scaling of the random training signal, it is very hard to choose a proper learning rate that achieves robust estimation performance [21]. To solve this problem, we propose several improved adaptive sparse channel estimation methods using the normalized LMS (NLMS) algorithm, which normalizes the power of the input signal, with different sparse penalties, i.e., ℓ_{p}-norm (0 ≤ p ≤ 1).
The contributions of this paper are described below. Firstly, we propose an improved adaptive sparse channel estimation method using an ℓ_{0}-norm least square error algorithm, termed L0-LMS [22]. Secondly, based on the algorithms in [18, 20], we propose four kinds of improved adaptive sparse channel estimation methods using sparse NLMS algorithms. Thirdly, the Cramer-Rao lower bound (CRLB) of the proposed adaptive sparse channel estimator is derived based on prior knowledge of the positions of the nonzero taps. Lastly, various simulation results are given to confirm the effectiveness of the proposed methods.
Section 2 introduces the system model and problem formulation. Section 3 discusses various adaptive sparse channel estimation methods using different LMS-based algorithms. In Section 4, computer simulation results for the MSE performance are presented to confirm the effectiveness of the sparsity-aware modifications of LMS. Concluding remarks are presented in Section 5.
2 System model and problem formulation
where μ is the step size of the gradient descent. For later use, we define γ_{max} as the maximum eigenvalue of the covariance matrix R = E{x(t)x^{T}(t)} of the input signal vector x(t). Here, we introduce Theorem 1 in relation to the step size μ. The detailed derivation can be found in [21].
Theorem 1 The necessary condition for reliable LMS adaptive channel estimation is 0 < μ < 2/γ_{max}.
□
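The bound in Theorem 1 is easy to check numerically before running the adaptation: estimate R from training samples and compare the chosen step size against 2/γ_max. A minimal sketch, in which the vector length, sample count, and step size are illustrative values rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                    # length of the input vector x(t)
samples = rng.standard_normal((5000, N))  # zero-mean, unit-variance training signal

# Sample estimate of the input covariance R = E{x(t)x^T(t)}
R = samples.T @ samples / len(samples)
gamma_max = np.linalg.eigvalsh(R).max()   # largest eigenvalue of R

mu = 0.05
assert 0 < mu < 2.0 / gamma_max           # Theorem 1: 0 < mu < 2/gamma_max
```

For white unit-variance input all eigenvalues of R cluster around 1, so the admissible step-size range is roughly (0, 2); correlated inputs spread the eigenvalues and shrink the range.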
3 (N)LMS-based adaptive sparse channel estimation
Equation 12 motivates us to introduce different sparse penalties in order to exploit the sparse structure as prior information. Comparing the update equation in Equation 12 with CS-based sparse channel estimation [10, 11], one finds that the more accurate the adopted sparse channel approximation, the better the estimation accuracy that can be obtained, and vice versa. The conventional sparse penalties are the ℓ_{p}-norm (0 < p ≤ 1) and the ℓ_{0}-norm. Since the ℓ_{0}-norm penalty for sparse signal recovery is a well-known NP-hard problem, only ℓ_{p}-norm (0 < p ≤ 1)-based sparse LMS approaches had previously been proposed for adaptive sparse channel estimation. Compared to the two conventional sparse LMS algorithms (ZA-LMS and RZA-LMS), LP-LMS can achieve better estimation performance. However, there still exists a performance gap between the LP-LMS-based channel estimator and the optimal one. More recently, an accurate sparse approximation algorithm for ℓ_{0}-norm LMS (L0-LMS) was proposed in [22] and analyzed in [23]; however, these works never considered its application to sparse channel estimation. In this paper, the L0-LMS algorithm is applied to adaptive sparse channel estimation to improve the estimation performance.
It is easily seen that exploiting more accurate sparse structure information yields better estimation performance. In the following, we investigate sparse LMS-based adaptive channel estimation methods using different sparse penalties.
3.1 LMS-based adaptive sparse channel estimation

ZA-LMS. To exploit the channel sparsity in the time domain, the cost function of ZA-LMS [18] is given by

$$L_{\mathrm{ZA}}(n)=\frac{1}{2}e^{2}(n)+\lambda_{\mathrm{ZA}}\big\|\tilde{\mathbf{h}}(n)\big\|_{1},\qquad(13)$$

where λ_{ZA} is a regularization parameter that balances the estimation error and the sparse penalty of $\tilde{\mathbf{h}}(n)$. The corresponding update equation of ZA-LMS is

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)-\mu\frac{\partial L_{\mathrm{ZA}}(n)}{\partial\tilde{\mathbf{h}}(n)}=\tilde{\mathbf{h}}(n)+\mu e(n)\mathbf{x}(t)-\rho_{\mathrm{ZA}}\,\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big),\qquad(14)$$

where ρ_{ZA} = μλ_{ZA} and sgn(·) is a component-wise function defined as

$$\mathrm{sgn}(h)=\begin{cases}1, & h>0\\ 0, & h=0\\ -1, & h<0.\end{cases}$$
From the update equation in Equation 14, the second term compresses small channel coefficients toward zero with high probability. That is to say, most of the small channel coefficients can simply be replaced by zeros, which speeds up the convergence of the algorithm.
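The ZA-LMS recursion of Equation 14 can be sketched in a few lines of NumPy. The toy sparse channel, step size, and zero-attractor strength below are illustrative choices, not parameters from the paper:

```python
import numpy as np

def za_lms(x, d, N, mu=0.05, rho=1e-4):
    """ZA-LMS sketch (Eq. 14): the standard LMS update plus a zero
    attractor -rho*sgn(h) that shrinks small taps toward zero.
    rho plays the role of mu*lambda_ZA in the text."""
    h = np.zeros(N)
    for n in range(N - 1, len(x)):
        xn = x[n - N + 1:n + 1][::-1]        # regressor [x(n), ..., x(n-N+1)]
        e = d[n] - h @ xn                    # instantaneous error e(n)
        h += mu * e * xn - rho * np.sign(h)  # LMS term + zero attractor
    return h

rng = np.random.default_rng(0)
N = 16
h_true = np.zeros(N)
h_true[[2, 9]] = 1.0, -0.5                   # sparse channel with K = 2 nonzero taps
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
h_hat = za_lms(x, d, N)
```

With rho = 0 this reduces to plain LMS; the attractor mainly drives the inactive taps to exact zero at the cost of a small bias on the active ones.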

RZA-LMS. ZA-LMS cannot distinguish between zero taps and nonzero taps, since it applies the same penalty to all taps, which are thus forced toward zero with the same probability; therefore, its performance degrades in less sparse systems. Motivated by the reweighted ℓ_{1}-norm minimization recovery algorithm [19], Chen et al. proposed a heuristic approach to reinforce the zero attractor, termed RZA-LMS [18]. The cost function of RZA-LMS is given by

$$L_{\mathrm{RZA}}(n)=\frac{1}{2}e^{2}(n)+\lambda_{\mathrm{RZA}}\sum_{i=1}^{N}\log\big(1+\varepsilon_{\mathrm{RZA}}\big|\tilde{h}_{i}(n)\big|\big),\qquad(15)$$

where λ_{RZA} > 0 is the regularization parameter and ε_{RZA} > 0 is a positive threshold. In the computer simulations, the threshold is set to ε_{RZA} = 20, as suggested in [18]. The i-th channel coefficient $\tilde{h}_{i}(n)$ is then updated as

$$\tilde{h}_{i}(n+1)=\tilde{h}_{i}(n)-\mu\frac{\partial L_{\mathrm{RZA}}(n)}{\partial\tilde{h}_{i}(n)}=\tilde{h}_{i}(n)+\mu e(n)x(t-i)-\rho_{\mathrm{RZA}}\frac{\mathrm{sgn}\big(\tilde{h}_{i}(n)\big)}{1+\varepsilon_{\mathrm{RZA}}\big|\tilde{h}_{i}(n)\big|},\qquad(16)$$

where ρ_{RZA} = μλ_{RZA}ε_{RZA}. Equation 16 can be expressed in vector form as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu e(n)\mathbf{x}(t)-\rho_{\mathrm{RZA}}\frac{\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)}{1+\varepsilon_{\mathrm{RZA}}\big|\tilde{\mathbf{h}}(n)\big|}.$$
Please note that the second term in Equation 16 attracts the channel coefficients $\tilde{h}_{i}(n)$, i = 1, 2, …, N, whose magnitudes are comparable to 1/ε_{RZA}, toward zero.
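The reweighting effect in Equation 16 is easy to see numerically. A sketch of just the attractor term, using the threshold ε_RZA = 20 from the text:

```python
import numpy as np

def rza_attractor(h, eps=20.0):
    """Reweighted zero attractor of Eq. 16: sgn(h_i)/(1 + eps*|h_i|).
    eps = 20 follows the threshold used in the text."""
    return np.sign(h) / (1.0 + eps * np.abs(h))

h = np.array([1.0, 0.01, 0.0])
a = rza_attractor(h)
# The large tap (1.0) feels a pull of 1/21, the small tap (0.01) one of
# 1/1.2, so small taps are driven toward zero far more aggressively than
# large taps, unlike the uniform pull of ZA-LMS.
```

This is exactly why RZA-LMS biases the dominant taps less than ZA-LMS while still sparsifying the estimate.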

LP-LMS. Following the ideas in [18], an LP-LMS-based adaptive sparse channel estimation method has been proposed in [20]. The cost function of LP-LMS is given by

$$L_{\mathrm{LP}}(n)=\frac{1}{2}e^{2}(n)+\lambda_{\mathrm{LP}}\big\|\tilde{\mathbf{h}}(n)\big\|_{p},\qquad(17)$$

where λ_{LP} > 0 is a regularization parameter. The corresponding update equation of LP-LMS is given as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)-\mu\frac{\partial L_{\mathrm{LP}}(n)}{\partial\tilde{\mathbf{h}}(n)}=\tilde{\mathbf{h}}(n)+\mu e(n)\mathbf{x}(t)-\rho_{\mathrm{LP}}\frac{\big\|\tilde{\mathbf{h}}(n)\big\|_{p}^{1-p}\,\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)}{\varepsilon_{\mathrm{LP}}+\big|\tilde{\mathbf{h}}(n)\big|^{1-p}},\qquad(18)$$

where ρ_{LP} = μλ_{LP} and ε_{LP} > 0 is a small positive parameter.
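A single LP-LMS iteration following Equation 18 can be sketched as below; the step size, regularization strength, p, and ε values are illustrative only:

```python
import numpy as np

def lp_lms_step(h, xn, d_n, mu=0.05, rho=5e-5, p=0.5, eps=0.05):
    """One LP-LMS iteration (sketch of Eq. 18). The term
    ||h||_p^(1-p) * sgn(h) / (eps + |h|^(1-p)) is the lp-norm sparse
    attractor; eps guards against division by zero for zero-valued taps."""
    e = d_n - h @ xn                                 # instantaneous error e(n)
    norm_p = np.sum(np.abs(h) ** p) ** (1.0 / p)     # lp quasi-norm of h
    attractor = norm_p ** (1.0 - p) * np.sign(h) / (eps + np.abs(h) ** (1.0 - p))
    return h + mu * e * xn - rho * attractor

# From an all-zero estimate the attractor vanishes and the step is pure LMS:
h1 = lp_lms_step(np.zeros(4), np.array([1.0, 0.0, 0.0, 0.0]), 1.0)
```

Because 0 < p < 1 makes the penalty non-convex, this recursion is only a local descent step, which is the performance limitation the ℓ0-approximation below is meant to address.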
L0-LMS (proposed). Consider an ℓ_{0}-norm penalty on the LMS cost function to produce a sparse channel estimator, as this penalty term forces the channel tap values of $\tilde{\mathbf{h}}(n)$ to approach zero. The cost function of L0-LMS is given by

$$L_{L0}(n)=\frac{1}{2}e^{2}(n)+\lambda_{L0}\big\|\tilde{\mathbf{h}}(n)\big\|_{0},\qquad(19)$$

where λ_{L0} > 0 is a regularization parameter and $\|\tilde{\mathbf{h}}(n)\|_{0}$ denotes the ℓ_{0}-norm sparse penalty function, which counts the number of nonzero channel taps of $\tilde{\mathbf{h}}(n)$. Since solving the ℓ_{0}-norm minimization is an NP-hard problem [17], to reduce the computational complexity we replace it with an approximate continuous function:

$$\big\|\tilde{\mathbf{h}}(n)\big\|_{0}\approx\sum_{i=0}^{N-1}\Big(1-e^{-\beta|\tilde{h}_{i}(n)|}\Big).\qquad(20)$$

The cost function in Equation 19 can then be rewritten as

$$L_{L0}(n)=\frac{1}{2}e^{2}(n)+\lambda_{L0}\sum_{i=0}^{N-1}\Big(1-e^{-\beta|\tilde{h}_{i}(n)|}\Big).\qquad(21)$$

The first-order Taylor series expansion of the exponential function $e^{-\beta|\tilde{h}_{i}(n)|}$ is given as

$$e^{-\beta|\tilde{h}_{i}(n)|}\approx\begin{cases}1-\beta\big|\tilde{h}_{i}(n)\big|, & \text{when }\big|\tilde{h}_{i}(n)\big|\leq\dfrac{1}{\beta}\\[4pt] 0, & \text{otherwise}.\end{cases}\qquad(22)$$
The update equation of L0-LMS-based adaptive sparse channel estimation can then be derived as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu e(n)\mathbf{x}(t)-\rho_{L0}\beta\,\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)e^{-\beta|\tilde{\mathbf{h}}(n)|},\qquad(23)$$

where ρ_{L0} = μλ_{L0}. Unfortunately, the exponential function in Equation 23 still causes high computational complexity. To further reduce the complexity, an approximation function $J(\tilde{\mathbf{h}}(n))$ of this exponential term is applied to the update in Equation 23. Finally, the update equation of L0-LMS-based adaptive sparse channel estimation can be derived as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu e(n)\mathbf{x}(t)-\rho_{L0}J\big(\tilde{\mathbf{h}}(n)\big),\qquad(24)$$

where $J(\tilde{\mathbf{h}}(n))$ is defined component-wise as

$$J\big(\tilde{h}_{i}(n)\big)=\begin{cases}2\beta\,\mathrm{sgn}\big(\tilde{h}_{i}(n)\big)-2\beta^{2}\tilde{h}_{i}(n), & \text{when }\big|\tilde{h}_{i}(n)\big|\leq\dfrac{1}{\beta}\\[4pt] 0, & \text{otherwise},\end{cases}\qquad(25)$$

for all i ∈ {1, 2, …, N}.
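The piecewise attractor J of Equation 25 is cheap to evaluate. A sketch with an illustrative β, written with the sign convention that pulls small taps toward zero (the convention the attractor must follow for the update to sparsify the estimate):

```python
import numpy as np

def l0_attractor(h, beta=10.0):
    """Approximate l0 zero attractor J (sketch of Eq. 25): active only
    where |h_i| <= 1/beta; larger taps are left untouched. The update
    then subtracts rho_L0 * J(h), shrinking small taps toward zero."""
    J = 2.0 * beta * np.sign(h) - 2.0 * beta**2 * h
    J[np.abs(h) > 1.0 / beta] = 0.0       # outside the window: no attraction
    return J

h = np.array([0.5, 0.05, -0.02, 0.0])
J = l0_attractor(h)                        # -> [0., 10., -16., 0.]
```

The large tap 0.5 lies outside the 1/β = 0.1 window and is untouched, while subtracting ρ·J drives 0.05 down and −0.02 up, i.e., both toward zero.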
3.2 Improved adaptive sparse channel estimation methods
The common drawback of the above sparse LMS-based algorithms is that they are sensitive to the random scaling of the training signal x(t) [21]. Hence, it is very hard to choose a proper step size μ that guarantees the stability of these sparse LMS-based algorithms, even if the step size satisfies the necessary condition in Equation 10.
where the normalized adaptive update term μ_{1}e(n)x(t)/(x^{T}(t)x(t)) replaces the adaptive update μe(n)x(t) in Equation 4. The advantage of NLMS-based adaptive sparse channel estimation is that it can mitigate the scaling interference of the training signal, because NLMS-based methods estimate the sparse channel after normalizing by the power of the training signal x(t). To ensure the stability of the NLMS-based algorithms, the necessary condition on the step size μ_{1} is derived briefly below; the detailed derivation can also be found in [21].
Hence, the necessary condition for reliable adaptive sparse channel estimation is that μ_{1} satisfies the condition in Equation 35.
□

ZA-NLMS (proposed). According to Equation 14, the update equation of ZA-NLMS can be written as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu_{1}e(n)\frac{\mathbf{x}(t)}{\mathbf{x}^{T}(t)\mathbf{x}(t)}-\rho_{\mathrm{ZAN}}\,\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big),\qquad(40)$$

where ρ_{ZAN} = μ_{1}λ_{ZAN} and λ_{ZAN} is a regularization parameter for ZA-NLMS.

RZA-NLMS (proposed). According to Equation 16, the update equation of RZA-NLMS can be written as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu_{1}e(n)\frac{\mathbf{x}(t)}{\mathbf{x}^{T}(t)\mathbf{x}(t)}-\rho_{\mathrm{RZAN}}\frac{\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)}{1+\varepsilon_{\mathrm{RZAN}}\big|\tilde{\mathbf{h}}(n)\big|},\qquad(41)$$

where ρ_{RZAN} = μ_{1}λ_{RZAN}ε_{RZAN} and λ_{RZAN} is a regularization parameter for RZA-NLMS. The threshold is set to ε_{RZAN} = ε_{RZA} = 20, which is also consistent with our previous research in [24–27].

LP-NLMS (proposed). According to the LP-LMS update in Equation 18, the update equation of LP-NLMS can be written as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu_{1}e(n)\frac{\mathbf{x}(t)}{\mathbf{x}^{T}(t)\mathbf{x}(t)}-\rho_{\mathrm{LPN}}\frac{\big\|\tilde{\mathbf{h}}(n)\big\|_{p}^{1-p}\,\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)}{\varepsilon_{\mathrm{LPN}}+\big|\tilde{\mathbf{h}}(n)\big|^{1-p}},\qquad(42)$$

where ρ_{LPN} = μ_{1}λ_{LPN}/10, λ_{LPN} is a regularization parameter, and ε_{LPN} > 0 is a threshold parameter.

L0-NLMS (proposed). Based on the update equation of the L0-LMS algorithm in Equation 24, the update equation of the L0-NLMS algorithm can be directly written as

$$\tilde{\mathbf{h}}(n+1)=\tilde{\mathbf{h}}(n)+\mu_{1}e(n)\frac{\mathbf{x}(t)}{\mathbf{x}^{T}(t)\mathbf{x}(t)}-\rho_{L0N}J\big(\tilde{\mathbf{h}}(n)\big),\qquad(43)$$

where ρ_{L0N} = μ_{1}λ_{L0N} and λ_{L0N} is a regularization parameter. The sparse penalty function $J(\tilde{\mathbf{h}}(n))$ is defined as in Equation 25.
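All four normalized variants above share one skeleton: the gradient term is divided by the instantaneous input power xᵀ(t)x(t), and the chosen sparse attractor is subtracted unchanged. A generic sketch (the `attractor` callable stands for any of the penalty terms discussed above; the small `delta` regularizer guarding against division by zero is an implementation detail, not part of the equations in the text):

```python
import numpy as np

def sparse_nlms_step(h, xn, d_n, attractor, mu1=0.1, rho=1e-4, delta=1e-8):
    """One sparse NLMS iteration (skeleton of Eqs. 40-43): the update term
    mu1*e(n)*x(t) is normalized by the input power x^T(t)x(t), making the
    effective step size insensitive to the scaling of the training signal."""
    e = d_n - h @ xn
    return h + mu1 * e * xn / (xn @ xn + delta) - rho * attractor(h)

# ZA-NLMS (Eq. 40) is recovered with the sign function as the attractor:
h = np.array([0.2, 0.0, -0.1])
xn = np.array([1.0, 2.0, -1.0])
h_new = sparse_nlms_step(h, xn, 0.5, np.sign)
```

Scaling the training signal (and hence the received signal) by any constant c scales both e(n) and x(t) by c and xᵀ(t)x(t) by c², so the normalized update term is essentially unchanged; this is the robustness to unknown signal scaling claimed in the text.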
3.3 Cramer-Rao lower bound
To derive the CRLB of the proposed channel estimator, Theorems 3 and 4 are given as follows.
Theorem 3 For an N-length channel vector h, if μ satisfies 0 < μ < 2/λ_{max}, then the MSE lower bound of the LMS adaptive channel estimator is $B=\mu P_{0}N/(2-\mu\lambda_{\min})\sim\mathcal{O}(N)$, where P_{0} is a parameter denoting the unit power of the gradient noise and λ_{min} denotes the minimum eigenvalue of R.
where N is the channel length of h, {λ_{i}; i = 0, 1, …, N − 1} are the eigenvalues of the covariance matrix R, and λ_{min} is its minimum eigenvalue.
□
Theorem 4 For an N-length sparse channel vector h consisting of K nonzero taps, if μ satisfies 0 < μ < 2/λ_{max}, then the MSE lower bound of the sparse LMS adaptive channel estimator is ${B}_{S}=\mu P_{0}K/(2-\mu\lambda_{\min})\sim\mathcal{O}(K)$.
□
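The two bounds differ only in N versus K, so the best-case gain from exploiting sparsity is exactly the sparsity ratio K/N. A quick numeric check, with all parameter values illustrative:

```python
mu, P0, lam_min = 0.05, 1.0, 0.5   # illustrative step size, noise power, eigenvalue
N, K = 128, 8                      # channel length and number of nonzero taps

B = mu * P0 * N / (2.0 - mu * lam_min)     # Theorem 3: O(N) bound for plain LMS
B_S = mu * P0 * K / (2.0 - mu * lam_min)   # Theorem 4: O(K) bound for sparse LMS

# The sparse bound is a factor K/N below the LMS bound (here 8/128 = 1/16).
assert abs(B_S / B - K / N) < 1e-12
```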
4 Computer simulations
Simulation parameters of (N)LMS-based algorithms

                μ       γ_{ZA(N)}       γ_{RZA(N)}      γ_{LP(N)}        γ_{L0(N)}
LMS-based       0.05    0.02σ_n^2       0.05σ_n^2       0.005σ_n^2       0.02σ_n^2
NLMS-based      0.1     0.002σ_n^2      0.005σ_n^2      0.0005σ_n^2      0.002σ_n^2
where E{·} denotes the expectation operator, and h and $\tilde{\mathbf{h}}(n)$ are the actual channel vector and its estimate, respectively.
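The MSE curves in the figures approximate this expectation by Monte Carlo averaging over independent runs. A minimal sketch of the quality measure (the toy channel and run count are illustrative):

```python
import numpy as np

def channel_mse(h_true, h_estimates):
    """Average MSE E{||h - h_tilde(n)||^2}, with the expectation
    approximated by the sample mean over independent estimation runs."""
    h_estimates = np.asarray(h_estimates)
    return np.mean(np.sum((h_estimates - h_true) ** 2, axis=1))

h_true = np.array([1.0, 0.0, -0.5])
runs = [np.array([1.1, 0.0, -0.5]),    # squared error 0.01
        np.array([0.9, 0.1, -0.5])]    # squared error 0.02
mse = channel_mse(h_true, runs)        # -> 0.015
```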
As the SNR increases, the performance advantage of the NLMS-based methods gradually vanishes. Hence, compared with the LMS-based methods, we can conclude that the NLMS-based methods not only work more reliably under unknown signal scaling, but also more stably under noise interference, especially in low-SNR environments.
In addition, the simulation results show that the estimation performance of the sparse (N)LMS-based methods has an inverse relationship with the number of nonzero channel taps. In other words, the sparser the channel, the better the estimation performance the (N)LMS-based methods can achieve, and vice versa. Take K = 1 and K = 8 as examples. In Figures 15 and 16, when the number of nonzero taps is K = 1, the performance gaps are bigger than in the K = 8 case shown in Figures 17 and 18. Hence, these simulation results confirm that the performance of adaptive sparse channel estimation is also affected by the number of nonzero channel taps. When the channel is no longer sparse, the performance of the proposed methods reduces to that of the standard (N)LMS-based methods.
5 Conclusions
In this paper, we have investigated various (N)LMS-based adaptive sparse channel estimation methods that enforce different sparse penalties, e.g., the ℓ_{p}-norm and the ℓ_{0}-norm. The research was motivated by the fact that LMS-based channel estimation methods are sensitive to the scaling of the random training signal, which easily renders the estimation performance unstable. Unlike LMS-based methods, the proposed NLMS-based methods avoid the uncertain signal scaling by normalizing the power of the input signal while applying different sparse penalties.
Initially, we proposed an improved adaptive sparse channel estimation method using the ℓ_{0}-norm sparse-constraint LMS algorithm and compared it with ZA-LMS, RZA-LMS, and LP-LMS. The proposed method is grounded in the CS result that the ℓ_{0}-norm sparse penalty exploits channel sparsity more accurately.
In addition, to improve robustness and increase the convergence speed, we proposed NLMS-based adaptive sparse channel estimation methods using different sparse penalties, i.e., ZA-NLMS, RZA-NLMS, LP-NLMS, and L0-NLMS. For example, ZA-NLMS achieves better estimation than ZA-LMS. The proposed methods exhibit faster convergence and better performance, as confirmed by computer simulations under various SNR environments.
Declarations
Acknowledgements
The authors would like to thank Dr. Koichi Adachi of the Institute for Infocomm Research for his valuable comments and suggestions as well as for improving the English expression of this paper. The authors would also like to extend their appreciation to the anonymous reviewers for their constructive comments. This work was supported by a grant-in-aid for Japan Society for the Promotion of Science (JSPS) fellows (grant number 24∙02366).
References
 1. Raychaudhuri D, Mandayam N: Frontiers of wireless and mobile communications. Proc. IEEE 2012, 100(4):824-840.
 2. Adachi F, Garg D, Takaoka S, Takeda K: Broadband CDMA techniques. IEEE Wirel. Commun. 2005, 12(2):8-18. doi:10.1109/MWC.2005.1421924
 3. Adachi F, Kudoh E: New direction of broadband wireless technology. Wirel. Commun. Mob. Com. 2007, 7(8):969-983. doi:10.1002/wcm.507
 4. Schreiber WF: Advanced television systems for terrestrial broadcasting: some problems and some proposed solutions. Proc. IEEE 1995, 83(6):958-981. doi:10.1109/5.387095
 5. Molisch AF: Ultra wideband propagation channels: theory, measurement, and modelling. IEEE Trans. Veh. Technol. 2005, 54(5):1528-1545. doi:10.1109/TVT.2005.856194
 6. Yan Z, Herdin M, Sayeed AM, Bonek E: Experimental study of MIMO channel statistics and capacity via the virtual channel representation. Technical report: University of Wisconsin-Madison; 2007.
 7. Czink N, Yin X, Ozcelik H, Herdin M, Bonek E, Fleury BH: Cluster characteristics in a MIMO indoor propagation environment. IEEE Trans. Wirel. Commun. 2007, 6(4):1465-1475.
 8. Vuokko L, Kolmonen VM, Salo J, Vainikainen P: Measurement of large-scale cluster power characteristics for geometric channel models. IEEE Trans. Antennas Propagat. 2007, 55(11):3361-3365.
 9. Widrow B, Stearns SD: Adaptive Signal Processing. New Jersey: Prentice Hall; 1985.
 10. Candes E, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 2006, 52(2):489-509.
 11. Donoho DL: Compressed sensing. IEEE Trans. Inform. Theory 2006, 52(4):1289-1306.
 12. Bajwa WU, Haupt J, Sayeed AM, Nowak R: Compressed channel sensing: a new approach to estimating sparse multipath channels. Proc. IEEE 2010, 98(6):1058-1076.
 13. Taubock G, Hlawatsch F, Eiwen D, Rauhut H: Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing. IEEE J. Sel. Top. Sign. Proces. 2010, 4(2):255-271.
 14. Gui G, Wan Q, Peng W, Adachi F: Sparse multipath channel estimation using compressive sampling matching pursuit algorithm. The 7th IEEE Vehicular Technology Society Asia Pacific Wireless Communications Symposium (APWCS), Kaohsiung, 20-21 May 2010, 1-5.
 15. Tibshirani R: Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. (B) 1996, 58(1):267-288.
 16. Candes EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346(9-10):589-592.
 17. Garey MR, Johnson DS: Computers and Intractability: a Guide to the Theory of NP-Completeness. New York, NY: W.H. Freeman & Co; 1990.
 18. Chen Y, Gu Y, Hero AO: Sparse LMS for system identification. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Taipei, 19-24 April 2009, 3125-3128.
 19. Candes EJ, Wakin MB, Boyd SP: Enhancing sparsity by reweighted ℓ_{1} minimization. J. Fourier Anal. Appl. 2008, 14(5-6):877-905.
 20. Taheri O, Vorobyov SA: Sparse channel estimation with ℓp-norm and reweighted ℓ1-norm penalized least mean squares. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, 22-27 May 2011, 2864-2867.
 21. Haykin S: Adaptive Filter Theory. 3rd edition. Englewood Cliffs: Prentice-Hall; 1996.
 22. Gu YT, Jin J, Mei S: ℓ_{0}-Norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. 2009, 16(9):774-777.
 23. Su GL, Jin J, Gu YT, Wang J: Performance analysis of ℓ_{0} norm constraint least mean square algorithm. IEEE Trans. Signal Process. 2012, 60(5):2223-2235.
 24. Gui G, Peng W, Adachi F: Improved adaptive sparse channel estimation based on the least mean square algorithm. IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, 7-10 April 2013, 3130-3134.
 25. Gui G, Mehbodniya A, Adachi F: Least mean square/fourth algorithm for adaptive sparse channel estimation. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), London, 8-11 September 2013, in press.
 26. Gui G, Mehbodniya A, Adachi F: Adaptive sparse channel estimation using re-weighted zero-attracting normalized least mean fourth. 2nd IEEE/CIC International Conference on Communications in China (ICCC), Xi'an, 12-14 August 2013, in press.
 27. Huang Z, Gui G, Huang A, Xiang D, Adachi F: Regularization selection method for LMS-type sparse multipath channel estimation. 19th Asia-Pacific Conference on Communications (APCC), Bali, 29-31 August 2013, in press.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.