 Research
 Open Access
Variable step-size based sparse adaptive filtering algorithm for channel estimation in broadband wireless communication systems
EURASIP Journal on Wireless Communications and Networking volume 2014, Article number: 195 (2014)
Abstract
Sparse channels exist in many broadband wireless communication systems. To exploit the channel sparsity, the invariable step-size zero-attracting normalized least mean square (ISS-ZA-NLMS) algorithm was applied to adaptive sparse channel estimation (ASCE). However, ISS-ZA-NLMS cannot achieve a good trade-off between the convergence rate, the computational cost, and the estimation performance. In this paper, we propose a variable step-size ZA-NLMS (VSS-ZA-NLMS) algorithm to improve ASCE. The performance of the proposed method is theoretically analyzed and verified by numerical simulations in terms of mean square deviation (MSD) and bit error rate (BER) metrics.
1 Introduction
Broadband transmission is one of the key techniques in wireless communication systems [1–3]. To realize reliable broadband communication, one challenge is accurate channel estimation in order to mitigate intersymbol interference (ISI). The conventional normalized least mean square algorithm with an invariable step size (ISS-NLMS) was considered one of the effective methods for channel estimation due to its easy implementation [4]. However, ISS-NLMS does not take the channel characteristics into consideration and cannot take advantage of inherent prior channel information. During the last few years, more and more channel measurements have validated that broadband channels are most likely to have sparse or cluster-sparse structures [5–7], as shown in the example of Figure 1. In particular, the channel sparsity in different mobile communication systems is summarized in Table 1. Inspired by the least absolute shrinkage and selection operator (LASSO) algorithm [8], an ℓ1-norm sparse constraint function can be used to take advantage of channel sparsity in adaptive sparse channel estimation (ASCE); the zero-attracting ISS-NLMS (ZA-ISS-NLMS) algorithm has been proposed for ASCE [9, 10] to improve the estimation performance.
It is well known that the step size is a critical parameter that determines the estimation performance, convergence rate, and computational cost. However, ISS-NLMS and ZA-ISS-NLMS adopt a fixed step size and, as a result, are unable to achieve a good balance between steady-state estimation performance and convergence speed. Different from ISS-NLMS [4], variable step-size NLMS (VSS-NLMS) was first proposed to improve the estimation performance [11] without sacrificing convergence speed. The variable step size is controlled by the instantaneous square error of each iteration, i.e., a lower error decreases the step size and vice versa. To the best of our knowledge, the application of sparse VSS-NLMS to simultaneously exploit the channel sparsity and control the step size has not been reported in the literature.
In this paper, we propose a zero-attracting VSS-NLMS (ZA-VSS-NLMS) algorithm for sparse channel estimation. The main contribution of this paper is the use of VSS rather than ISS for estimating sparse channels. In addition, the step size of the proposed algorithm is updated in each iteration according to the error information. In the following, the conventional ZA-ISS-NLMS algorithm is first introduced and its drawback analyzed. ZA-VSS-NLMS is then proposed, using an adaptive step size to achieve a lower steady-state estimation error. To derive the adaptive step size, different from the traditional VSS-NLMS algorithm in [11], two practical aspects are considered: the sparse channel model and tractable independence assumptions [12]. At last, numerical simulations are carried out to evaluate the proposed algorithm in terms of two metrics: mean square deviation (MSD) and bit error rate (BER).
The remainder of this paper is organized as follows. The system model is described and the ZA-ISS-NLMS algorithm is introduced in Section 2. In Section 3, the ZA-VSS-NLMS algorithm is proposed. Numerical results are presented in Section 4 to evaluate the performance of the proposed ASCE method. Finally, we conclude the paper in Section 5.
2 ZA-ISS-NLMS algorithm
Consider a frequency-selective fading wireless communication system where the FIR sparse channel vector h = [h_{0}, h_{1}, …, h_{N−1}]^{T} has length N and is supported by only K nonzero channel taps. Assume that an input training signal x(t) is used to probe the unknown sparse channel. At the receiver, the equivalent-baseband observed signal y(t) at time t is given by:
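Reconstructed from the definitions that follow (the original display is unavailable, so this is indicative rather than verbatim), Equation 1 is the standard linear observation model:

```latex
y(t) = \mathbf{h}^{T}\mathbf{x}(t) + z(t) \tag{1}
```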
where x(t) = [x(t), x(t−1), …, x(t−N+1)]^{T} denotes the vector of the training signal x(t); z(t) is additive white Gaussian noise (AWGN), which is assumed to be independent of x(t); and (⋅)^{T} denotes the vector transpose. The objective of ASCE is to adaptively estimate the unknown sparse channel vector h using the training signal vector x(t) and the observed signal y(t). According to Equation 1, the instantaneous error e(n) is defined as:
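Consistent with the definition of v(n) given next, the instantaneous error (Equation 2, reconstructed) reads:

```latex
e(n) = y(t) - \tilde{\mathbf{h}}^{T}(n)\,\mathbf{x}(t) = \mathbf{v}^{T}(n)\,\mathbf{x}(t) + z(t) \tag{2}
```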
where \mathbf{v}(n) = \mathbf{h} - \tilde{\mathbf{h}}(n) denotes the channel estimation error at the n-th iteration. In the sequel, one can apply the ZA-ISS-LMS algorithm to exploit channel sparsity in the time domain. First of all, the cost function of ZA-ISS-LMS is given by:
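A reconstruction of the ZA-ISS-LMS cost function (Equation 3), combining the square error with the ℓ1 penalty described next; the symbol G(n) is an assumed name:

```latex
G(n) = \frac{1}{2}\,e^{2}(n) + \lambda\,\big\|\tilde{\mathbf{h}}(n)\big\|_{1} \tag{3}
```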
where λ is the regularization parameter that balances the update square error e^{2}(n) and the sparse penalty of the n-th channel estimate \tilde{\mathbf{h}}(n); \|\cdot\|_1 denotes the ℓ1-norm, e.g., \|\mathbf{h}\|_1 = \sum_{l=0}^{N-1} |h_l|. The update equation of ZA-ISS-LMS at time t is:
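The ZA-ISS-LMS update (Equation 4), reconstructed from the parameter definitions that follow: a gradient step on the square error plus the zero attractor:

```latex
\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu\,e(n)\,\mathbf{x}(t) - \rho\,\operatorname{sgn}\!\big(\tilde{\mathbf{h}}(n)\big) \tag{4}
```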
where μ is the ISS, which determines the convergence speed; ρ = μλ is a parameter that depends on the step size μ and the regularization parameter λ; and sgn(⋅) is a component-wise function defined by:
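The component-wise sign function (Equation 5) in its standard form:

```latex
\operatorname{sgn}(h) =
\begin{cases}
1, & h > 0, \\
0, & h = 0, \\
-1, & h < 0.
\end{cases} \tag{5}
```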
Observing the update Equation 4, its second term attracts small-value channel coefficients to zero with high probability. In other words, most of the small-value channel coefficients can be replaced by zero. This speeds up the convergence and also mitigates the noise at the zero positions. However, the performance of ZA-ISS-LMS is often degraded by random scaling of the training signal. To avoid this randomness as well as to improve the estimation performance, we proposed an improved algorithm (i.e., ZA-ISS-NLMS) in our previous works [9, 10]. The update equation of ZA-ISS-NLMS [9] is as follows:
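The ZA-ISS-NLMS update (Equation 6), reconstructed as the ZA-LMS recursion with the gradient normalized by the input energy, following [9]:

```latex
\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu\,\frac{e(n)\,\mathbf{x}(t)}{\mathbf{x}^{T}(t)\,\mathbf{x}(t)} - \rho\,\operatorname{sgn}\!\big(\tilde{\mathbf{h}}(n)\big) \tag{6}
```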
The ZA-ISS-NLMS algorithm in Equation 6 adopts a single step size, so its convergence speed is fixed, as shown in Figure 2a. As a result, one drawback of ZA-ISS-NLMS is its inability to trade off estimation performance against convergence speed.
3 Proposed algorithm
Recall that the ZA-ISS-NLMS algorithm in Equation 6 does not utilize VSS. It is well known that the step size is a critical parameter that determines the estimation performance, convergence speed, and computational cost. Inspired by the VSS-NLMS algorithm in [11], VSS is introduced to make the step size adaptive to the estimation error and thereby further improve the estimation performance. Based on the previous research in [10] and [11], the ZA-VSS-NLMS algorithm has the following update equation:
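The ZA-VSS-NLMS update (Equation 7), reconstructed by replacing the fixed μ of the normalized recursion with the variable step size μ(n+1):

```latex
\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu(n+1)\,\frac{e(n)\,\mathbf{x}(t)}{\mathbf{x}^{T}(t)\,\mathbf{x}(t)} - \rho\,\operatorname{sgn}\!\big(\tilde{\mathbf{h}}(n)\big) \tag{7}
```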
where μ(n+1) is the VSS, which is calculated from the estimation error and the variance of the additive noise. Comparing Equation 7 with Equation 4, the step size differs: the step size in Equation 4 is invariant, while the step size in Equation 7 is adaptively variant. Two facts about μ(n) and ρ should be noticed: 1) the variant step size μ(n) speeds up convergence when the estimation error is large, while ensuring stability when the estimation error is small; 2) the parameter ρ, which depends on the initial step size μ and the regularization parameter λ, is utilized to exploit channel sparsity effectively. Otherwise, a variant parameter ρ(n) = μ(n)λ may cause extra computational complexity and ineffective use of the channel sparsity.
The optimal step size μ_{o}(n+1) for the (n+1)-th iteration is derived based on the following assumptions:

(A1): The input vector x(t) and the additive noise z(t) are mutually independent at time t.

(A2): The input vector x(t) is a stationary sequence of independent zero-mean Gaussian random variables with finite variance \sigma_x^2.

(A3): z(t) is an independent zero-mean random variable with variance \sigma_z^2.

(A4): \tilde{\mathbf{h}}(n) is independent of x(t).
These assumptions make the subsequent analysis of the proposed algorithm mathematically tractable. The proposed algorithm in Equation 7 can be rewritten in terms of the estimation error vector v(n) as follows:
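Substituting v(n) = h − h̃(n) into the update in Equation 7 gives the error recursion (Equation 8, reconstructed):

```latex
\mathbf{v}(n+1) = \mathbf{v}(n) - \mu(n+1)\,\frac{e(n)\,\mathbf{x}(t)}{\mathbf{x}^{T}(t)\,\mathbf{x}(t)} + \rho\,\operatorname{sgn}\!\big(\tilde{\mathbf{h}}(n)\big) \tag{8}
```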
Taking the expectation, the MSD of \tilde{\mathbf{h}}(n) can be written as:
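The quantity being tracked is the mean square deviation of the error recursion; in terms of the error vector it reads (the full expansion under the assumptions, Equations 9 to 13 in the original, is not reproduced here):

```latex
\mathrm{MSD}(n+1) = \mathbb{E}\left\{\big\|\mathbf{v}(n+1)\big\|_{2}^{2}\right\}
= \mathbb{E}\left\{\mathbf{v}^{T}(n+1)\,\mathbf{v}(n+1)\right\}
```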
Based on assumptions (A1) to (A4), we can obtain the following results:
According to Equation 9, the MSD depends on the parameters μ and ρ. However, the optimal value of ρ cannot be directly obtained since it is determined by the channel sparsity and the additive noise. In order to find the optimal step size μ_{o}(n+1), an empirical ρ is used to enable a fair comparison with the traditional method in Equation 6. When ρ is fixed in Equation 7, finding μ(n+1) becomes a convex problem: choose the step size that maximizes D(μ(n+1)), given by
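Under the reading that D(·) measures the per-iteration decrease of the MSD (an assumed but natural definition, consistent with the gradient-descent interpretation that follows), Equation 14 is:

```latex
D\big(\mu(n+1)\big) \triangleq \mathrm{MSD}(n) - \mathrm{MSD}(n+1) \tag{14}
```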
In other words, finding the optimal step size μ_{o}(n+1) is equivalent to finding the largest gradient descent from the n-th iteration to the (n+1)-th iteration. By solving the convex problem in Equation 14, the (n+1)-th optimal step size μ_{o}(n+1) is obtained as:
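A plausible reconstruction of Equation 15, following the structure of the optimal step size derived for VSS-NLMS in [11]; the exact form in the original may differ in detail:

```latex
\mu_{o}(n+1) = \frac{\mathbb{E}\left\{\|\mathbf{p}_{o}(n+1)\|_{2}^{2}\right\}}
{\mathbb{E}\left\{\|\mathbf{p}_{o}(n+1)\|_{2}^{2}\right\}
 + \sigma_{z}^{2}\,\mathbb{E}\left\{1/\big(\mathbf{x}^{T}(t)\,\mathbf{x}(t)\big)\right\}} \tag{15}
```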
where p_{o}(n+1) ≜ x(t)x^{T}(t)v(n)/(x^{T}(t)x(t)). Obviously, the optimal step size is determined by p_{o}(n+1) and the noise variance \sigma_z^2. Unfortunately, however, the optimal vector p_{o}(n+1) depends on the unknown channel vector h and is not available during the adaptive updating process. Based on assumption (A1), it can be found that:
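Since z(t) is zero mean and independent of x(t) by (A1), the error-weighted input vector has the same mean as p_{o}(n+1); this is Equation 16 (reconstructed):

```latex
\mathbb{E}\left\{\frac{e(n)\,\mathbf{x}(t)}{\mathbf{x}^{T}(t)\,\mathbf{x}(t)}\right\}
= \mathbb{E}\left\{\frac{\mathbf{x}(t)\,\mathbf{x}^{T}(t)\,\mathbf{v}(n)}{\mathbf{x}^{T}(t)\,\mathbf{x}(t)}\right\}
= \mathbb{E}\left\{\mathbf{p}_{o}(n+1)\right\} \tag{16}
```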
According to Equation 16, an alternative approximate vector p(n+1), obtained by time averaging, is given as follows:
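The time-averaged estimate of p_{o}(n+1) (Equation 17, reconstructed; this is the standard exponential smoothing used in [11]):

```latex
\mathbf{p}(n+1) = \beta\,\mathbf{p}(n) + (1-\beta)\,\frac{e(n)\,\mathbf{x}(t)}{\mathbf{x}^{T}(t)\,\mathbf{x}(t)} \tag{17}
```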
where β ∈ [0, 1) is the smoothing factor that controls the value of the VSS and the estimation error. Note that the VSS will reduce to ISS when β = 0. Therefore, the approximate step size μ(n+1) for ZA-VSS-NLMS is given by:
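The resulting variable step size (Equation 18, reconstructed in the form used by [11]), which matches the properties stated next, i.e., the range (0, μ_max) and the threshold C:

```latex
\mu(n+1) = \mu_{\max}\,\frac{\big\|\mathbf{p}(n+1)\big\|_{2}^{2}}{\big\|\mathbf{p}(n+1)\big\|_{2}^{2} + C} \tag{18}
```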
where C is a positive threshold parameter satisfying C ∼ O(1/SNR), where SNR is the received signal-to-noise ratio. To better understand the proposed algorithm in Equation 7, Figure 2 illustrates its two functions: zero attraction (for the sparse constraint) and VSS (for convergence speed). According to Equation 18, the range of the VSS is μ(n+1) ∈ (0, μ_max), where μ_max is the maximal step size. To ensure the stability of the adaptive algorithm, the maximal step size is usually set to be less than 2 [4]. Based on Equation 18, the step size μ for ZA-ISS-NLMS is invariable, while the step size μ(n+1) for ZA-VSS-NLMS is variable, as depicted in Figure 3, where the maximal step size μ_max and the step size μ are set as μ = μ_max ∈ {0.5, 1, 1.5}. From this figure, it can be found that the VSS μ(n+1) decreases as the estimation error decreases and vice versa; the ISS, on the other hand, is invariant. Specifically, a small step size yields high estimation performance since it ensures the stability of the algorithm, while a large step size yields low computational cost since it increases the convergence speed. That is to say, as the updating error decreases, ZA-VSS-NLMS reduces its step size adaptively to ensure algorithm stability as well as to achieve better steady-state estimation performance.
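To make the recursion concrete, the following is a minimal NumPy sketch of ZA-VSS-NLMS as described above: a normalized LMS update with a smoothed error vector p(n) driving the variable step size, plus an ℓ1 zero attractor. The function name and default parameter values (mu_max, rho, beta, C) are illustrative assumptions, not the authors' reference settings.

```python
import numpy as np

def za_vss_nlms(x, y, N, mu_max=1.0, rho=1e-5, beta=0.99, C=1e-2, eps=1e-8):
    """Illustrative ZA-VSS-NLMS sketch (assumed parameters, not reference code)."""
    h_hat = np.zeros(N)   # channel estimate h~(n)
    p = np.zeros(N)       # time-averaged error vector p(n)
    for t in range(N - 1, len(x)):
        x_vec = x[t - N + 1:t + 1][::-1]   # regressor [x(t), ..., x(t-N+1)]
        e = y[t] - x_vec @ h_hat           # instantaneous error e(n)
        norm = x_vec @ x_vec + eps         # regularized input energy
        # exponential smoothing of the normalized error direction
        p = beta * p + (1 - beta) * e * x_vec / norm
        # variable step size: large while p (hence error) is large, small near steady state
        mu = mu_max * (p @ p) / (p @ p + C)
        # normalized gradient step plus zero attractor on small taps
        h_hat = h_hat + mu * e * x_vec / norm - rho * np.sign(h_hat)
    return h_hat
```

Note that the zero attractor −ρ·sgn(h̃) acts on every iteration, so ρ must stay small relative to the gradient term at steady state; the paper's choice ρ ∝ σ_n² reflects this.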
4 Numerical simulations
To verify the effectiveness of the proposed method, two metrics are adopted, i.e., MSD and BER. The channel estimator \tilde{\mathbf{h}}(n) is evaluated by the average MSD, which is defined as:
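The average MSD metric, written out (reconstructed from the definitions in the text):

```latex
\mathrm{MSD}\big\{\tilde{\mathbf{h}}(n)\big\}
= \mathbb{E}\left\{\big\|\mathbf{h} - \tilde{\mathbf{h}}(n)\big\|_{2}^{2}\right\}
```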
where h and \tilde{\mathbf{h}}(n) are the channel vector and its n-th iterative adaptive estimate, respectively; ‖·‖₂ is the Euclidean norm operator, i.e., \|\mathbf{h}\|_2^2 = \sum_{i=1}^{N} |h_i|^2. System performance is evaluated in terms of the BER under different data modulation schemes. The results are averaged over 1,000 independent Monte Carlo (MC) runs. The length of the channel vector h is set to N = 64, and its number of dominant taps is set to K = 2 and 6, respectively. Each dominant channel tap follows a random Gaussian distribution \mathcal{CN}(0, \sigma_h^2), subject to the total power constraint E\{\|\mathbf{h}\|_2^2\} = 1, and the positions of the dominant taps are randomly allocated within the length of h. The received signal-to-noise ratio (SNR) is defined as P_0/\sigma_n^2, where P_0 is the received power of the pseudorandom noise (PN) sequence used for training. The numerical simulation parameters are listed in Table 2.
The average MSD performance of the proposed method is evaluated first, for K = 2 and 6; the results are shown in Figures 4, 5, 6, 7, 8, and 9 under three SNR regimes, i.e., 5, 10, and 20 dB. The proposed algorithm, ZA-VSS-NLMS, is compared with three existing methods, i.e., ISS-NLMS [4], VSS-NLMS [11], and ZA-ISS-NLMS [9, 10]. It can be observed from Figures 4, 5, 6, 7, 8, and 9 that ZA-VSS-NLMS achieves both faster convergence and better MSD performance than ZA-ISS-NLMS. The reason is that the VSS-based gradient descent of the proposed algorithm makes a good trade-off between convergence speed and MSD performance. In addition, to achieve better steady-state estimation performance, the regularization parameter selection methods for ZA-NLMS-type algorithms in [13, 14] are adopted, with \rho = 0.0015\,\sigma_n^2. In all SNR regimes, ZA-VSS-NLMS achieves better estimation performance than ZA-ISS-NLMS. Furthermore, since ZA-VSS-NLMS also takes advantage of the channel sparsity, it obtains better estimation performance than VSS-NLMS, especially in the extremely sparse channel case (e.g., K = 2).
Next, the BER performance using the proposed channel estimator is evaluated. The channel is assumed to be a steady-state sparse channel with K = 2 nonzero taps and SNR = 5 dB. The received SNR is defined by E_{s}/N_{0}, where E_{s} is the received signal power and N_{0} is the noise power. Numerical results are shown in Figures 10 and 11. In Figure 10, multilevel phase shift keying (PSK) modulation, i.e., 8-PSK and 16-PSK, is used for data modulation. In Figure 11, multilevel quadrature amplitude modulation (QAM), i.e., 16-QAM and 64-QAM, is used. It is observed that the proposed algorithm achieves much better BER performance than ISS-NLMS and VSS-NLMS. Although there is no significant performance gain over ZA-ISS-NLMS, the proposed algorithm achieves a faster convergence rate.
Therefore, it has been confirmed that the proposed algorithm achieves both good estimation performance and fast convergence speed.
5 Conclusions
The step size is a key parameter for NLMS-based adaptive filtering algorithms to balance steady-state estimation performance and convergence speed. Neither ISS-NLMS nor ZA-ISS-NLMS can update its step size in the process of adaptive error updating. In this paper, a ZA-VSS-NLMS filtering algorithm was proposed for channel estimation. Unlike the traditional algorithms, the proposed algorithm utilizes VSS, which updates the step size adaptively according to the updating error. Therefore, the proposed method achieves better steady-state performance while keeping a comparable convergence speed compared with the existing methods. Simulation results have been presented to confirm the effectiveness of the proposed method in terms of the MSD and BER metrics.
References
Adachi F, Tomeba H, Takeda K: Introduction of frequency-domain signal processing to broadband single-carrier transmissions in a wireless channel. IEICE Trans. Commun. 2009, E92-B(9):2789-2808. doi:10.1587/transcom.E92.B.2789
Adachi F, Kudoh E: New direction of broadband wireless technology. Wirel. Commun. Mob. Comput. 2007, 7(8):969-983. doi:10.1002/wcm.507
Dai L, Wang Z, Yang Z: Next-generation digital television terrestrial broadcasting systems: key technologies and research trends. IEEE Commun. Mag. 2012, 50(6):150-158.
Widrow B, Stearns D: Adaptive Signal Processing. Prentice Hall, New Jersey; 1985.
Herdin M, Bonek E, Fleury BH, Czink N, Yin X, Ozcelik H: Cluster characteristics in a MIMO indoor propagation environment. IEEE Trans. Wirel. Commun. 2007, 6(4):1465-1475.
Wyne S, Czink N, Karedal J, Almers P, Tufvesson F, Molisch A: A cluster-based analysis of outdoor-to-indoor office MIMO measurements at 5.2 GHz. IEEE 64th Vehicular Technology Conference (VTC-Fall), Montreal, Canada; 2006:1-5. doi:10.1109/VTCF.2006.15
Vuokko L, Kolmonen VM, Salo J, Vainikainen P: Measurement of large-scale cluster power characteristics for geometric channel models. IEEE Trans. Antennas Propag. 2007, 55(11):3361-3365.
Tibshirani R: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. 1996, 58(1):267-288.
Gui G, Peng W, Adachi F: Improved adaptive sparse channel estimation based on the least mean square algorithm. IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China; 2013:3130-3134.
Gui G, Adachi F: Improved adaptive sparse channel estimation using least mean square algorithm. EURASIP J. Wirel. Commun. Netw. 2013, 2013(1):1-18. doi:10.1186/1687-1499-2013-1
Shin H, Sayed AH, Song W: Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 2004, 11(2):132-135. doi:10.1109/LSP.2003.821722
Eweda E, Bershad NJ: Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference. IEEE Trans. Signal Process. 2012, 60(12):6235-6244.
Huang Z, Gui G, Huang A, Xiang D, Adachi F: Regularization selection methods for LMS-type sparse multipath channel estimation. The 19th Asia-Pacific Conference on Communications (APCC), Bali Island, Indonesia; 2013:1-5.
Gui G, Mehbodniya A, Adachi F: Least mean square/fourth algorithm for adaptive sparse channel estimation. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), London, UK; 2013:1-5.
Acknowledgements
The authors would like to extend their appreciation to the anonymous reviewers for their constructive comments. This work was supported in part by the Japan Society for the Promotion of Science (JSPS) research activity startup research grant (No. 26889050), Akita Prefectural University startup research grant, as well as the National Natural Science Foundation of China under grants (Nos. 61401069, 61261048, and 61201273).
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Gui, G., Peng, W., Xu, L. et al. Variable step-size based sparse adaptive filtering algorithm for channel estimation in broadband wireless communication systems. J Wireless Com Network 2014, 195 (2014). https://doi.org/10.1186/1687-1499-2014-195
DOI: https://doi.org/10.1186/1687-1499-2014-195
Keywords
 Sparse channel
 ZA-NLMS
 Invariable step size
 Variable step size
 ASCE