Improved least mean square algorithm with application to adaptive sparse channel estimation

Abstract

Least mean square (LMS)-based adaptive algorithms have attracted much attention due to their low computational complexity and reliable recovery capability. To exploit channel sparsity, LMS-based adaptive sparse channel estimation methods have been proposed based on different sparse penalties, such as the ℓ1-norm (zero-attracting) LMS (ZA-LMS), reweighted ZA-LMS, and ℓp-norm LMS. However, the aforementioned methods cannot fully exploit the channel's sparse structure information. To take full advantage of channel sparsity, an improved sparse channel estimation method using an ℓ0-norm LMS algorithm is proposed in this paper. LMS-type sparse channel estimation methods also share a common drawback: sensitivity to the scaling of the random training signal, which makes it very hard to choose a proper learning rate that achieves robust estimation performance. To solve this problem, we propose several improved adaptive sparse channel estimation methods using the normalized LMS (NLMS) algorithm with different sparse penalties, which normalizes the update by the power of the input signal. Furthermore, the Cramer-Rao lower bound of the proposed adaptive sparse channel estimator is derived based on prior information about the positions of the channel taps. Computer simulation results demonstrate the advantage of the proposed channel estimation methods in mean square error performance.

1 Introduction

The demand for high-speed data services has been increasing as emerging wireless devices spread widely. Various portable wireless devices, e.g., smartphones and laptops, have proliferated rapidly, giving rise to massive data traffic [1]. Broadband transmission is an indispensable technique in next-generation wireless communication systems [2, 3]. The broadband channel is described by a sparse channel model in which multipath taps are widely separated in time, thereby creating a large delay spread [4–8]. In other words, most of the channel coefficients are zero or close to zero, while only a few channel coefficients are dominant (large valued). A typical example of a sparse multipath channel is shown in Figure 1, where the number of dominant taps is four while the channel length is 16.

Figure 1. A typical example of sparse multipath channel.

The traditional least mean square (LMS) algorithm is one of the most popular algorithms for adaptive system identification [9], e.g., channel estimation. LMS-based adaptive channel estimation can be easily implemented due to its low computational complexity. In current broadband wireless communication systems, the channel impulse response in the time domain is often described by a sparse channel model, supported by only a few large coefficients. The LMS-based adaptive channel estimation method never takes advantage of this sparse structure, although its mean square error (MSE) lower bound has a direct relationship with the finite impulse response (FIR) channel length. To improve the estimation performance, many algorithms have recently been proposed to take advantage of the sparse nature of the channel. For example, based on the theory of compressive sensing (CS) [10, 11], various sparse channel estimation methods have been proposed in [12–14]. Some of these methods are known to achieve robust estimation, e.g., sparse channel estimation using the least absolute shrinkage and selection operator (LASSO) [15]. However, these kinds of sparse methods have two potential disadvantages: first, their computational complexity may be very high, especially for tracking fast time-variant channels; second, the training signal matrices of these CS-based sparse channel estimation methods are required to satisfy the restricted isometry property [16]. It is well known that designing such training matrices is a non-deterministic polynomial-time (NP)-hard problem [17].

To avoid these two detrimental problems of sparse channel estimation, a variant of the LMS algorithm with an ℓ1-norm sparse constraint was proposed in [18]. The ℓ1-norm sparse penalty is incorporated into the cost function of the conventional LMS algorithm, which results in an LMS update with a zero attractor, namely the zero-attracting LMS (ZA-LMS), as well as the reweighted ZA-LMS (RZA-LMS) [18], which is motivated by the reweighted ℓ1-norm minimization recovery algorithm [19]. To further improve the estimation performance, an adaptive sparse channel estimation method using the ℓp-norm LMS (LP-LMS) algorithm was proposed in [20]. However, there still exists a performance gap between LP-LMS and optimal sparse channel estimation. It is worth mentioning that optimal channel estimation here is denoted by the sparse lower bound derived in Theorem 4 in Section 3.

Since the ℓp-norm penalty renders the optimization problem non-convex, the LP-LMS algorithm cannot obtain an optimal adaptive sparse solution [10, 11]. Hence, a computationally efficient algorithm is required to obtain a more accurate adaptive sparse channel estimate.

According to CS theory [10, 11], solving the ℓ0-norm sparse penalty problem yields an optimal sparse solution. In other words, an ℓ0-norm sparse penalty on the LMS algorithm is a good candidate for achieving more accurate channel estimates. This background motivates us to use the ℓ0-norm sparse penalty on LMS in order to improve the estimation performance.

In addition, since sparse LMS-based channel estimation methods share a common drawback of sensitivity to the scaling of the random training signal, it is very hard to choose a proper learning rate that achieves robust estimation performance [21]. To solve this problem, we propose several improved adaptive sparse channel estimation methods using the normalized LMS (NLMS) algorithm, which normalizes the update by the power of the input signal, with different sparse penalties, i.e., ℓp-norm (0 ≤ p ≤ 1).

The contributions of this paper are described below. Firstly, we propose an improved adaptive sparse channel estimation method using the ℓ0-norm LMS algorithm, termed L0-LMS [22]. Secondly, based on the algorithms in [18, 20], we propose four kinds of improved adaptive sparse channel estimation methods using sparse NLMS algorithms. Thirdly, the Cramer-Rao lower bound (CRLB) of the proposed adaptive sparse channel estimator is derived based on prior knowledge of the positions of the non-zero taps. Lastly, various simulation results are given to confirm the effectiveness of our proposed methods.

Section 2 introduces the system model and problem formulation. Section 3 discusses various adaptive sparse channel estimation methods using different LMS-based algorithms. In Section 4, computer simulation results for the MSE performance are presented to confirm the effectiveness of sparsity-aware modifications of LMS. Concluding remarks are presented in Section 5.

2 System model and problem formulation

Consider a sparse multipath communication system, as shown in Figure 2.

Figure 2. A typical sparse multipath communication system.

The input signal x(t) and ideal output signal y(t) are related by

$$y(t) = \mathbf{h}^T\mathbf{x}(t) + z(t), \tag{1}$$

where h = [h0, h1, …, hN − 1]T is the N-length sparse channel vector, which is supported by K (K ≪ N) dominant taps, x(t) = [x(t), x(t − 1), …, x(t − N + 1)]T is the N-length input signal vector at time t, and z(t) is additive noise at time t. The objective of the LMS-type adaptive filter is to estimate the unknown sparse channel h from x(t) and y(t). According to Equation 1, the nth channel estimation error e(n) is written as

$$e(n) = y(t) - \tilde{\mathbf{h}}^T(n)\,\mathbf{x}(t), \tag{2}$$

at time t, where $\tilde{\mathbf{h}}(n)$ is the LMS adaptive channel estimator at iteration n. It is worth noting that x(t) and y(t) remain fixed across the iterative updates. Based on Equation 2, the standard LMS cost function can be written as

$$L(n) = \frac{1}{2}e^2(n). \tag{3}$$

Hence, the updated equation of LMS adaptive channel estimation is derived as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) - \mu\frac{\partial L(n)}{\partial\tilde{\mathbf{h}}(n)} = \tilde{\mathbf{h}}(n) + \mu e(n)\mathbf{x}(t), \tag{4}$$

where μ is the step size of the gradient descent. For later use, we define λmax as the maximum eigenvalue of the covariance matrix R = E{x(t)xT(t)} of the input signal vector x(t). Here, we introduce Theorem 1 concerning the step size μ. The detailed derivation can also be found in [21].

Theorem 1 The necessary condition of reliable LMS adaptive channel estimation is 0 < μ < 2/λmax.

Proof First, the (n + 1)th updated coefficient vector $\tilde{\mathbf{h}}(n+1)$ is written as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu\mathbf{x}(t)e(n) = \tilde{\mathbf{h}}(n) + \mu\mathbf{x}(t)\left(y(t) - \tilde{\mathbf{h}}^T(n)\mathbf{x}(t)\right). \tag{5}$$

Assume that h is a real FIR channel vector; subtracting h from both sides in Equation 5, we can rewrite it as

$$\tilde{\mathbf{h}}(n+1) - \mathbf{h} = \tilde{\mathbf{h}}(n) - \mathbf{h} + \mu\mathbf{x}(t)\left(y(t) - \tilde{\mathbf{h}}^T(n)\mathbf{x}(t)\right). \tag{6}$$

By defining $\mathbf{v}(n) = \tilde{\mathbf{h}}(n) - \mathbf{h}$ and using Equation 1, Equation 6 can be rewritten as

$$\begin{aligned}\mathbf{v}(n+1) &= \mathbf{v}(n) + \mu\mathbf{x}(t)\left(y(t) - \mathbf{h}^T\mathbf{x}(t)\right) + \mu\mathbf{x}(t)\left(\mathbf{x}^T(t)\mathbf{h} - \mathbf{x}^T(t)\tilde{\mathbf{h}}(n)\right)\\ &= \mathbf{v}(n) + \mu\mathbf{x}(t)z(t) - \mu\mathbf{x}(t)\mathbf{x}^T(t)\mathbf{v}(n)\\ &= \left(\mathbf{I} - \mu\mathbf{x}(t)\mathbf{x}^T(t)\right)\mathbf{v}(n) + \mu\mathbf{x}(t)z(t). \end{aligned}\tag{7}$$

Assume that the input signal vector x(t) and the estimated FIR channel vector $\tilde{\mathbf{h}}(n)$ are statistically independent of each other; then x(t) and z(t) are also independent, i.e., E{x(t)z(t)} = 0. Taking the expectation of both sides of Equation 7, we obtain

$$\begin{aligned} E\{\mathbf{v}(n+1)\} &= E\left\{\left(\mathbf{I} - \mu\mathbf{x}(t)\mathbf{x}^T(t)\right)\mathbf{v}(n)\right\} + E\{\mu\mathbf{x}(t)z(t)\}\\ &= \left(\mathbf{I} - \mu E\left\{\mathbf{x}(t)\mathbf{x}^T(t)\right\}\right)E\{\mathbf{v}(n)\}\\ &= (\mathbf{I} - \mu\mathbf{R})\,E\{\mathbf{v}(n)\}. \end{aligned}\tag{8}$$

For the recursion in Equation 8 to converge reliably, every eigenvalue of I − μR must have magnitude less than one, i.e.,

$$\left|1 - \mu\lambda_{\max}\right| < 1, \tag{9}$$

where λmax denotes the maximum eigenvalue of R. Hence, to guarantee fast convergence and reliable updating of the LMS in Equation 4, the step size μ should satisfy

$$0 < \mu < 2/\lambda_{\max}. \tag{10}$$
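To make the LMS recursion in Equation 4 and the step-size condition in Equation 10 concrete, the following minimal NumPy sketch runs standard LMS channel estimation; the channel, training statistics, and all parameter values are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N-tap sparse channel with K dominant taps, ||h||_2 = 1.
N, K, n_iter, sigma_z = 16, 4, 1000, 0.1
h = np.zeros(N)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
h /= np.linalg.norm(h)

x = rng.standard_normal(n_iter + N)        # random training signal x(t)

# For white unit-variance input, R = I and lambda_max = 1, so Eq. 10 allows 0 < mu < 2.
mu = 0.05
h_est = np.zeros(N)
for n in range(n_iter):
    x_vec = x[n:n + N][::-1]               # regressor [x(t), x(t-1), ..., x(t-N+1)]^T
    y = h @ x_vec + sigma_z * rng.standard_normal()   # observation model, Eq. 1
    e = y - h_est @ x_vec                  # estimation error, Eq. 2
    h_est += mu * e * x_vec                # LMS update, Eq. 4

print("final squared error:", np.sum((h - h_est) ** 2))
```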

3 (N)LMS-based adaptive sparse channel estimation

From Equation 4, we can find that the LMS-based channel estimation method never takes advantage of the sparse structure in channel vector h. To get a better understanding, the LMS-based channel estimation methods can be expressed as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \underbrace{\mu e(n)\mathbf{x}(t)}_{\text{adaptive error update}}. \tag{11}$$

Unlike the conventional LMS method in Equation 11, sparse LMS algorithms exploit channel sparsity by introducing ℓp-norm penalties, with 0 ≤ p ≤ 1, into their cost functions. The LMS-based adaptive sparse channel estimation methods can be written as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \underbrace{\mu e(n)\mathbf{x}(t)}_{\text{LMS}} + \underbrace{\text{sparse penalty}}_{\text{sparse LMS}}. \tag{12}$$

Equation 12 motivates us to introduce different sparse penalties in order to exploit the sparse structure as prior information. By analogy with CS-based sparse channel estimation [10, 11], the more accurately the sparse penalty approximates the channel sparsity, the better the estimation accuracy that can be obtained, and vice versa. The conventional sparse penalties are the ℓp-norm (0 < p ≤ 1) and the ℓ0-norm. Since ℓ0-norm penalized sparse signal recovery is a well-known NP-hard problem, only ℓp-norm (0 < p ≤ 1)-based sparse LMS approaches had previously been proposed for adaptive sparse channel estimation. Compared to the two conventional sparse LMS algorithms (ZA-LMS and RZA-LMS), LP-LMS can achieve better estimation performance. However, there still exists a performance gap between the LP-LMS-based channel estimator and the optimal one. More recently, an accurate approximation of the ℓ0-norm penalty was incorporated into LMS (L0-LMS) in [22] and analyzed in [23]; however, these works did not consider its application to sparse channel estimation. In this paper, the L0-LMS algorithm is applied to adaptive sparse channel estimation to improve the estimation performance.

It follows that exploiting more accurate sparse structure information yields better estimation performance. In the following, we investigate LMS-based adaptive sparse channel estimation methods using different sparse penalties.

3.1 LMS-based adaptive sparse channel estimation

The following are the LMS-based adaptive sparse channel estimation methods (a code sketch of the corresponding update rules is given after this list):

  • ZA-LMS. To exploit the channel sparsity in time domain, the cost function of ZA-LMS [18] is given by

    $$L_{\mathrm{ZA}}(n) = \frac{1}{2}e^2(n) + \lambda_{\mathrm{ZA}}\left\|\tilde{\mathbf{h}}(n)\right\|_1, \tag{13}$$

    where λZA is a regularization parameter that balances the estimation error and the sparse penalty of $\tilde{\mathbf{h}}(n)$. The corresponding updated equation of ZA-LMS is

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) - \mu\frac{\partial L_{\mathrm{ZA}}(n)}{\partial\tilde{\mathbf{h}}(n)} = \tilde{\mathbf{h}}(n) + \mu e(n)\mathbf{x}(t) - \rho_{\mathrm{ZA}}\,\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right), \tag{14}$$

    where ρZA = μλZA and sgn(·) is the component-wise sign function defined as

    $$\mathrm{sgn}(h) = \begin{cases} 1, & h > 0\\ 0, & h = 0\\ -1, & h < 0. \end{cases}$$

    From the updated equation in Equation 14, the second term attracts small channel coefficients toward zero with high probability. That is to say, most of the small channel coefficients are simply replaced by zeros, which speeds up the convergence of the algorithm.

  • RZA-LMS. ZA-LMS cannot distinguish between zero taps and non-zero taps, since it applies the same penalty to all taps, forcing them toward zero with equal probability; therefore, its performance degrades in less sparse systems. Motivated by the reweighted ℓ1-norm minimization recovery algorithm [19], Chen et al. proposed a heuristic approach to reinforce the zero attractor, termed RZA-LMS [18]. The cost function of RZA-LMS is given by

    $$L_{\mathrm{RZA}}(n) = \frac{1}{2}e^2(n) + \lambda_{\mathrm{RZA}}\sum_{i=1}^{N}\log\left(1 + \varepsilon_{\mathrm{RZA}}\left|\tilde{h}_i(n)\right|\right), \tag{15}$$

    where λRZA > 0 is the regularization parameter and εRZA > 0 is the positive threshold. In the computer simulations, the threshold is set to εRZA = 20, as suggested in [18]. The ith channel coefficient $\tilde{h}_i(n)$ is then updated as

    $$\tilde{h}_i(n+1) = \tilde{h}_i(n) - \mu\frac{\partial L_{\mathrm{RZA}}(n)}{\partial\tilde{h}_i(n)} = \tilde{h}_i(n) + \mu e(n)x(t-i) - \rho_{\mathrm{RZA}}\frac{\mathrm{sgn}\left(\tilde{h}_i(n)\right)}{1 + \varepsilon_{\mathrm{RZA}}\left|\tilde{h}_i(n)\right|}, \tag{16}$$

    where ρRZA = μλRZAεRZA. Equation 16 can be expressed in vector form as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu e(n)\mathbf{x}(t) - \rho_{\mathrm{RZA}}\frac{\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \varepsilon_{\mathrm{RZA}}\left|\tilde{\mathbf{h}}(n)\right|}.$$

    Please note that the second term in Equation 16 attracts toward zero those channel coefficients $\tilde{h}_i(n)$, i = 1, 2, …, N, whose magnitudes are comparable to 1/εRZA.

  • LP-LMS. Following the above ideas in [18], an LP-LMS-based adaptive sparse channel estimation method was proposed in [20]. The cost function of LP-LMS is given by

    $$L_{\mathrm{LP}}(n) = \frac{1}{2}e^2(n) + \lambda_{\mathrm{LP}}\left\|\tilde{\mathbf{h}}(n)\right\|_p, \tag{17}$$

    where λLP > 0 is a regularization parameter. The corresponding updated equation of LP-LMS is given as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) - \mu\frac{\partial L_{\mathrm{LP}}(n)}{\partial\tilde{\mathbf{h}}(n)} = \tilde{\mathbf{h}}(n) + \mu e(n)\mathbf{x}(t) - \rho_{\mathrm{LP}}\frac{\left\|\tilde{\mathbf{h}}(n)\right\|_p^{1-p}\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{\varepsilon_{\mathrm{LP}} + \left|\tilde{\mathbf{h}}(n)\right|^{1-p}}, \tag{18}$$

    where ρLP = μλLP and εLP > 0 is a small positive parameter.

  • L0-LMS (proposed). Consider the ℓ0-norm penalty on the LMS cost function to produce a sparse channel estimator, as this penalty term forces the channel tap values of $\tilde{\mathbf{h}}(n)$ toward zero. The cost function of L0-LMS is given by

    $$L_{L_0}(n) = \frac{1}{2}e^2(n) + \lambda_{L_0}\left\|\tilde{\mathbf{h}}(n)\right\|_0, \tag{19}$$

    where λL0 > 0 is a regularization parameter and $\|\tilde{\mathbf{h}}(n)\|_0$ denotes the ℓ0-norm sparse penalty function, which counts the number of non-zero channel taps of $\tilde{\mathbf{h}}(n)$. Since solving the ℓ0-norm minimization is an NP-hard problem [17], to reduce the computational complexity, we replace it with an approximate continuous function:

    $$\left\|\tilde{\mathbf{h}}(n)\right\|_0 \approx \sum_{i=0}^{N-1}\left(1 - e^{-\beta\left|\tilde{h}_i(n)\right|}\right). \tag{20}$$

    The cost function in Equation 19 can then be rewritten as

    $$L_{L_0}(n) = \frac{1}{2}e^2(n) + \lambda_{L_0}\sum_{i=0}^{N-1}\left(1 - e^{-\beta\left|\tilde{h}_i(n)\right|}\right). \tag{21}$$

    The first-order Taylor series expansion of the exponential function $e^{-\beta|\tilde{h}_i(n)|}$ is given as

    $$e^{-\beta\left|\tilde{h}_i(n)\right|} \approx \begin{cases} 1 - \beta\left|\tilde{h}_i(n)\right|, & \left|\tilde{h}_i(n)\right| \le 1/\beta\\ 0, & \text{otherwise}. \end{cases} \tag{22}$$

    The updated equation of L0-LMS-based adaptive sparse channel estimation can then be derived as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu e(n)\mathbf{x}(t) - \rho_{L_0}\beta\,\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right)e^{-\beta\left|\tilde{\mathbf{h}}(n)\right|}, \tag{23}$$

    where ρL0 = μλL0. Unfortunately, the exponential function in Equation 23 still incurs high computational complexity. To further reduce the complexity, an approximation $J(\tilde{\mathbf{h}}(n))$ of the exponential term is adopted in the update. Finally, the updated equation of L0-LMS-based adaptive sparse channel estimation can be derived as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu e(n)\mathbf{x}(t) - \rho_{L_0}J\left(\tilde{\mathbf{h}}(n)\right), \tag{24}$$

    where ρL0 = μλL0 and $J(\tilde{\mathbf{h}}(n))$ is defined component-wise as

    $$J\left(\tilde{h}_i(n)\right) = \begin{cases} 2\beta^2\tilde{h}_i(n) - 2\beta\,\mathrm{sgn}\left(\tilde{h}_i(n)\right), & \left|\tilde{h}_i(n)\right| \le 1/\beta\\ 0, & \text{otherwise}, \end{cases} \tag{25}$$

    for all i ∈ {1, 2, …, N}.
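The sketch below collects the zero-attractor terms above as NumPy helpers and plugs them into the generic sparse LMS step of Equation 12. It is our own minimal illustration: the helper names and all parameter values (rho_za, eps_rza, beta, and so on) are illustrative assumptions, and the L0 attractor is written in the exponential form of Equation 23.

```python
import numpy as np

def za_term(h_est, rho_za=5e-4):
    # ZA-LMS zero attractor (second term of Eq. 14).
    return -rho_za * np.sign(h_est)

def rza_term(h_est, rho_rza=5e-4, eps_rza=20.0):
    # RZA-LMS reweighted attractor (Eq. 16): strongest on taps below 1/eps_rza.
    return -rho_rza * np.sign(h_est) / (1.0 + eps_rza * np.abs(h_est))

def l0_term(h_est, rho_l0=5e-5, beta=10.0):
    # L0-LMS attractor in its exponential form (Eq. 23); Eqs. 24-25 replace
    # the exponential by a cheaper piecewise-linear approximation.
    return -rho_l0 * beta * np.sign(h_est) * np.exp(-beta * np.abs(h_est))

def sparse_lms_step(h_est, x_vec, y, penalty, mu=0.05):
    # Generic sparse LMS update of Eq. 12: LMS term plus a sparse penalty term.
    e = y - h_est @ x_vec
    return h_est + mu * e * x_vec + penalty(h_est)
```

For example, replacing the plain LMS update in the earlier sketch with `h_est = sparse_lms_step(h_est, x_vec, y, za_term)` yields ZA-LMS.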

3.2 Improved adaptive sparse channel estimation methods

The common drawback of the above sparse LMS-based algorithms is that they are sensitive to the random scaling of the training signal x(t) [21]. Hence, it is very hard to choose a proper step size μ that guarantees the stability of these sparse LMS-based algorithms, even when the step size satisfies the necessary condition in Equation 10.

Let us reconsider the updated equation of LMS in Equation 4. Assuming that the (n + 1)th adaptive channel estimator $\tilde{\mathbf{h}}(n+1)$ is the optimal solution, the relationship between $\tilde{\mathbf{h}}(n+1)$ and the input signal x(t) is given as

$$\tilde{\mathbf{h}}^T(n+1)\,\mathbf{x}(t) = y(t), \tag{26}$$

where y(t) is assumed to be the ideal received signal at the receiver. Treating Equation 26 as the constraint of a convex optimization problem, the cost function can be constructed as [21]

$$C(n) = \left(\tilde{\mathbf{h}}(n+1) - \tilde{\mathbf{h}}(n)\right)^T\left(\tilde{\mathbf{h}}(n+1) - \tilde{\mathbf{h}}(n)\right) + \xi\left(y(t) - \tilde{\mathbf{h}}^T(n+1)\mathbf{x}(t)\right), \tag{27}$$

where ξ is the unknown real-valued Lagrange multiplier [21]. The optimal channel estimator at the (n + 1)th update can be found by setting the first derivative of C(n) with respect to $\tilde{\mathbf{h}}(n+1)$ to zero:

$$\frac{\partial C(n)}{\partial\tilde{\mathbf{h}}(n+1)} = 0. \tag{28}$$

Hence, it can be derived as

$$\frac{\partial C(n)}{\partial\tilde{\mathbf{h}}(n+1)} = 2\left(\tilde{\mathbf{h}}(n+1) - \tilde{\mathbf{h}}(n)\right) - \xi\mathbf{x}(t) = 0. \tag{29}$$

The (n + 1)th optimal channel estimator is given from Equation 29 as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \frac{1}{2}\xi\mathbf{x}(t). \tag{30}$$

By substituting Equation 30 into Equation 26, we obtain

$$\xi\,\mathbf{x}^T(t)\mathbf{x}(t) = 2\left(y(t) - \tilde{\mathbf{h}}^T(n)\mathbf{x}(t)\right) = 2e(n), \tag{31}$$

where $e(n) = y(t) - \tilde{\mathbf{h}}^T(n)\mathbf{x}(t)$ (see Equation 2), and the unknown parameter ξ is given by

$$\xi = \frac{2e(n)}{\mathbf{x}^T(t)\mathbf{x}(t)}. \tag{32}$$

By substituting Equation 32 into Equation 30, the updated equation of NLMS is written as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)}, \tag{33}$$

where μ1 is the gradient step size, which controls the convergence speed of the NLMS algorithm. Based on Equation 33, the NLMS-based sparse adaptive updated equation can be generalized as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \underbrace{\mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)}}_{\text{NLMS}} + \underbrace{\text{sparse penalty}}_{\text{sparse NLMS}}, \tag{34}$$

where the normalized adaptive update term μ1e(n)x(t)/(xT(t)x(t)) replaces the adaptive update μe(n)x(t) in Equation 4. The advantage of NLMS-based adaptive sparse channel estimation is that it mitigates the scaling interference of the training signal, since NLMS-based methods normalize the update by the power of the training signal x(t). To ensure the stability of the NLMS-based algorithms, the necessary condition on the step size μ1 is derived briefly below. The detailed derivation can also be found in [21].

Theorem 2 The necessary condition of reliable NLMS adaptive channel estimation is

$$0 < \mu_1 < \frac{2E\left\{e(n)\left(\mathbf{h} - \tilde{\mathbf{h}}(n)\right)^T\mathbf{x}(t)\big/\left(\mathbf{x}^T(t)\mathbf{x}(t)\right)\right\}}{E\left\{e^2(n)\big/\left(\mathbf{x}^T(t)\mathbf{x}(t)\right)\right\}}. \tag{35}$$

Proof Since the NLMS-based algorithms share the same gradient step size to ensure their stability, for simplicity we study the standard NLMS as the general case. The updated equation of NLMS is given by

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)}, \tag{36}$$

where μ1 denotes the step size of the NLMS-type algorithms. Denoting the channel estimation error vector as $\mathbf{u}(n) = \mathbf{h} - \tilde{\mathbf{h}}(n)$, the (n + 1)th update error u(n + 1) can be written as

$$\mathbf{u}(n+1) = \mathbf{u}(n) - \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)}. \tag{37}$$

Obviously, the (n + 1)th update MSE $E\{\|\mathbf{u}(n+1)\|_2^2\}$ can also be given by

$$E\left\{\left\|\mathbf{u}(n+1)\right\|_2^2\right\} = E\left\{\left\|\mathbf{u}(n)\right\|_2^2\right\} - 2\mu_1 E\left\{\frac{e(n)\mathbf{u}^T(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)}\right\} + \mu_1^2 E\left\{\frac{e^2(n)}{\mathbf{x}^T(t)\mathbf{x}(t)}\right\}. \tag{38}$$

To ensure stable updating of the NLMS-type algorithms, the necessary condition is that the MSE be non-increasing:

$$E\left\{\left\|\mathbf{u}(n+1)\right\|_2^2\right\} - E\left\{\left\|\mathbf{u}(n)\right\|_2^2\right\} = -2\mu_1 E\left\{\frac{e(n)\left(\mathbf{h} - \tilde{\mathbf{h}}(n)\right)^T\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)}\right\} + \mu_1^2 E\left\{\frac{e^2(n)}{\mathbf{x}^T(t)\mathbf{x}(t)}\right\} \le 0. \tag{39}$$

Hence, the necessary condition for reliable adaptive sparse channel estimation is that μ1 satisfies Equation 35.
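As an illustration, here is a minimal NumPy sketch of the NLMS step in Equation 33; the small regularizer delta that guards the division is our own assumption, added for numerical safety rather than taken from the derivation above.

```python
import numpy as np

def nlms_step(h_est, x_vec, y, mu1=0.5, delta=1e-8):
    # One NLMS update (Eq. 33); delta avoids division by a near-zero regressor power.
    e = y - h_est @ x_vec
    return h_est + mu1 * e * x_vec / (x_vec @ x_vec + delta)
```

Note that scaling x_vec by a constant c scales e by c as well, while the normalization by xT(t)x(t) contributes a factor 1/c², so the update increment is invariant to the scaling of the training signal; this is precisely the robustness that the NLMS-based methods exploit.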

The following are the improved adaptive sparse channel estimation methods (a combined code sketch follows this list):

  • ZA-NLMS (proposed). According to Equation 14, the updated equation of ZA-NLMS can be written as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)} - \rho_{\mathrm{ZAN}}\,\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right), \tag{40}$$

    where ρZAN = μ1λZAN and λZAN is a regularization parameter for ZA-NLMS.

  • RZA-NLMS (proposed). According to Equation 16, the updated equation of RZA-NLMS can be written as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)} - \rho_{\mathrm{RZAN}}\frac{\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \varepsilon_{\mathrm{RZAN}}\left|\tilde{\mathbf{h}}(n)\right|}, \tag{41}$$

    where ρRZAN = μ1λRZANεRZAN and λRZAN is a regularization parameter for RZA-NLMS. The threshold is set as εRZAN = εRZA = 20, which is also consistent with our previous research in [24–27].

  • LP-NLMS (proposed). According to the LP-LMS in Equation 18, the updated equation of LP-NLMS can be written as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)} - \rho_{\mathrm{LPN}}\frac{\left\|\tilde{\mathbf{h}}(n)\right\|_p^{1-p}\mathrm{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{\varepsilon_{\mathrm{LPN}} + \left|\tilde{\mathbf{h}}(n)\right|^{1-p}}, \tag{42}$$

    where ρLPN = μ1λLPN/10, λLPN is a regularization parameter, and εLPN > 0 is a threshold parameter.

  • L0-NLMS (proposed). Based on the updated equation of the L0-LMS algorithm in Equation 24, the updated equation of the L0-NLMS algorithm can be written directly as

    $$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_1\frac{e(n)\mathbf{x}(t)}{\mathbf{x}^T(t)\mathbf{x}(t)} - \rho_{L_0\mathrm{N}}J\left(\tilde{\mathbf{h}}(n)\right), \tag{43}$$

    where ρL0N = μ1λL0N and λL0N is a regularization parameter. The sparse penalty function $J(\tilde{\mathbf{h}}(n))$ is defined as in Equation 25.
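Combining the normalized step of Equation 33 with the attractor helpers from the Section 3.1 sketch gives the proposed sparse NLMS updates of Equations 40 to 43. The sketch below is again our own illustration under the same assumptions (za_term, rza_term, and l0_term are the hypothetical helpers defined earlier).

```python
def sparse_nlms_step(h_est, x_vec, y, penalty, mu1=0.5, delta=1e-8):
    # Eq. 34: normalized adaptive error update plus a sparse penalty term.
    e = y - h_est @ x_vec
    h_new = h_est + mu1 * e * x_vec / (x_vec @ x_vec + delta)
    return h_new + penalty(h_est)

# Usage: one ZA-NLMS step (Eq. 40) reusing the za_term attractor.
# h_est = sparse_nlms_step(h_est, x_vec, y, za_term)
```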

3.3 Cramer-Rao lower bound

To derive the CRLB of the proposed channel estimator, Theorems 3 and 4 are given as follows.

Theorem 3 For an N-length channel vector h, if μ satisfies 0 < μ < 2/λmax, then the MSE lower bound of the LMS adaptive channel estimator is $B = \mu P_0 N/(2 - \mu\lambda_{\min}) \sim \mathcal{O}(N)$, where P0 is a parameter denoting the unit power of the gradient noise and λmin denotes the minimum eigenvalue of R.

Proof Firstly, we define the estimation error at the (n + 1)th iteration v(n + 1) as

$$\mathbf{v}(n+1) = \tilde{\mathbf{h}}(n+1) - \mathbf{h} = \mathbf{v}(n) + \mu\mathbf{x}(t)e(n) = \mathbf{v}(n) - \frac{1}{2}\mu\boldsymbol{\Gamma}(n), \tag{44}$$

where $\boldsymbol{\Gamma}(n) = 2\,\partial L(n)/\partial\tilde{\mathbf{h}}(n) = -2\mathbf{x}(t)e(n)$ is a joint gradient error function that includes the channel estimation error and the noise-plus-interference error. To derive the lower bound of the channel estimator, these two gradient errors should be separated. Hence, assume Γ(n) can be split into two terms, $\boldsymbol{\Gamma}(n) = \tilde{\boldsymbol{\Gamma}}(n) + 2\mathbf{w}(n)$, where $\tilde{\boldsymbol{\Gamma}}(n) = 2(\mathbf{R}\tilde{\mathbf{h}}(n) - \mathbf{p})$ denotes the gradient error and w(n) = [w0(n), w1(n), …, wN − 1(n)]T represents the gradient noise vector [21]. Obviously, E{w(n)} = 0 and

$$\mathbf{x}(t)e(n) = -\frac{1}{2}\boldsymbol{\Gamma}(n) = -\left(\mathbf{R}\tilde{\mathbf{h}}(n) - \mathbf{p}\right) - \mathbf{w}(n) = -\mathbf{R}\left(\tilde{\mathbf{h}}(n) - \mathbf{h}\right) - \mathbf{w}(n) = -\mathbf{R}\mathbf{v}(n) - \mathbf{w}(n), \tag{45}$$

where p = Rh. Then, we rewrite v(n + 1) in Equation 44 as

$$\begin{aligned}\mathbf{v}(n+1) &= \mathbf{v}(n) + \mu\mathbf{x}(t)e(n) = \mathbf{v}(n) - \mu\mathbf{R}\mathbf{v}(n) - \mu\mathbf{w}(n)\\ &= (\mathbf{I} - \mu\mathbf{R})\mathbf{v}(n) - \mu\mathbf{w}(n) = \left(\mathbf{I} - \mu\mathbf{Q}\mathbf{D}\mathbf{Q}^H\right)\mathbf{v}(n) - \mu\mathbf{w}(n)\\ &= \mathbf{Q}(\mathbf{I} - \mu\mathbf{D})\mathbf{Q}^H\mathbf{v}(n) - \mu\mathbf{w}(n), \end{aligned}\tag{46}$$

where the covariance matrix is decomposed as R = QDQH. Here, Q is an N × N unitary matrix, while D = diag{λ1, λ2, …, λN} is the N × N diagonal eigenvalue matrix. Denoting the rotated vectors $\tilde{\mathbf{v}}(n) = \mathbf{Q}^H\mathbf{v}(n)$ and $\tilde{\mathbf{w}}(n) = \mathbf{Q}^H\mathbf{w}(n)$, Equation 46 can be rewritten as

$$\tilde{\mathbf{v}}(n+1) = (\mathbf{I} - \mu\mathbf{D})\tilde{\mathbf{v}}(n) - \mu\tilde{\mathbf{w}}(n). \tag{47}$$

According to Equation 47, the MSE lower bound of LMS can be derived as

$$\begin{aligned} B &= \lim_{n\to\infty} E\left\{\left\|\tilde{\mathbf{v}}(n+1)\right\|_2^2\right\} = \lim_{n\to\infty} E\left\{\left((\mathbf{I}-\mu\mathbf{D})\tilde{\mathbf{v}}(n) - \mu\tilde{\mathbf{w}}(n)\right)^T\left((\mathbf{I}-\mu\mathbf{D})\tilde{\mathbf{v}}(n) - \mu\tilde{\mathbf{w}}(n)\right)\right\}\\ &= \lim_{n\to\infty}\Big[\underbrace{(\mathbf{I}-\mu\mathbf{D})^2 E\left\{\left\|\tilde{\mathbf{v}}(n)\right\|_2^2\right\}}_{a(n)} - \underbrace{2\mu(\mathbf{I}-\mu\mathbf{D})E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{v}}(n)\right\}}_{b(n)} + \underbrace{\mu^2 E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\right\}}_{c(n)}\Big]. \end{aligned}\tag{48}$$

Since the signal and noise are independent, $E\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{v}}(n)\} = 0$, i.e., b(n) = 0, and Equation 48 can be simplified as

$$B = \lim_{n\to\infty}\Big[\underbrace{(\mathbf{I}-\mu\mathbf{D})^2 E\left\{\left\|\tilde{\mathbf{v}}(n)\right\|_2^2\right\}}_{a(n)} + \underbrace{\mu^2 E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\right\}}_{c(n)}\Big]. \tag{49}$$

For a better understanding, the first term a(n) in Equation 49 can be expanded as

$$\begin{aligned} \underbrace{(\mathbf{I}-\mu\mathbf{D})^2 E\left\{\left\|\tilde{\mathbf{v}}(n)\right\|_2^2\right\}}_{a(n)} &= \underbrace{(\mathbf{I}-\mu\mathbf{D})^4 E\left\{\left\|\tilde{\mathbf{v}}(n-1)\right\|_2^2\right\}}_{a(n-1)} + \underbrace{\mu^2(\mathbf{I}-\mu\mathbf{D})^2 E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\right\}}_{c(n-1)}\\ &\;\;\vdots\\ &= \underbrace{(\mathbf{I}-\mu\mathbf{D})^{2n} E\left\{\left\|\tilde{\mathbf{v}}(0)\right\|_2^2\right\}}_{a(0)} + \underbrace{\mu^2(\mathbf{I}-\mu\mathbf{D})^{2(n-1)} E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\right\}}_{c(0)}. \end{aligned}\tag{50}$$

According to Equation 50, Equation 49 can be further rewritten as

$$B = \lim_{n\to\infty}\Big[\underbrace{(\mathbf{I}-\mu\mathbf{D})^{2n} E\left\{\left\|\tilde{\mathbf{v}}(0)\right\|_2^2\right\}}_{a(0)} + \underbrace{\mu^2\sum_{i=0}^{n}(\mathbf{I}-\mu\mathbf{D})^{2i} E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\right\}}_{c(0)+c(1)+\cdots+c(n-1)}\Big] \le \lim_{n\to\infty}\mu^2\sum_{i=0}^{n+1}(\mathbf{I}-\mu\mathbf{D})^{2i} E\left\{\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\right\}, \tag{51}$$

where the first term satisfies $\lim_{n\to\infty}(\mathbf{I} - \mu\mathbf{D})^{2n}E\{\|\tilde{\mathbf{v}}(0)\|_2^2\} \to 0$ when |1 − μλi| < 1. Considering the MSE lower bound of the ith channel tap, {bi; i = 0, 1, …, N − 1}, we obtain

$$b_i = \lim_{n\to\infty}\mu^2\sum_{j=0}^{n}\left(1 - \mu\lambda_i\right)^{2j}E\left\{\tilde{w}_i^2(n)\right\} = \frac{\mu P_0}{2 - \mu\lambda_i}, \tag{52}$$

where $E\{\tilde{w}_i^2(n)\} = \lambda_i P_0$ and P0 denotes the gradient noise power. Since the LMS adaptive channel estimation method does not use the channel's sparse structure information, the MSE lower bound accumulates over all of the channel taps. Hence, the lower bound B of LMS is given by

$$B = \sum_{i=0}^{N-1} b_i = \sum_{i=0}^{N-1}\frac{\mu P_0}{2 - \mu\lambda_i} \ge \sum_{i=0}^{N-1}\frac{\mu P_0}{2 - \mu\lambda_{\min}} = \frac{\mu N P_0}{2 - \mu\lambda_{\min}} \sim \mathcal{O}(N), \tag{53}$$

where N is the length of h, {λi; i = 0, 1, …, N − 1} are the eigenvalues of the covariance matrix R, and λmin is its minimum eigenvalue.

Theorem 4 For an N-length sparse channel vector h consisting of K non-zero taps, if μ satisfies 0 < μ < 2/λmax, then the MSE lower bound of the sparse LMS adaptive channel estimator is $B_S = \mu P_0 K/(2 - \mu\lambda_{\min}) \sim \mathcal{O}(K)$.

Proof From Equation 53, we can easily find that the MSE lower bound of the adaptive sparse channel estimator is directly related to the number of non-zero channel coefficients K. Let Ω denote the set of non-zero tap positions, that is, hi ≠ 0 for i ∈ Ω and hi = 0 otherwise. We can then obtain the lower bound of the sparse LMS as

$$B_S = \sum_{i=0,\,i\in\Omega}^{N-1} b_i = \sum_{i\in\Omega}\frac{\mu P_0}{2 - \mu\lambda_i} \ge \sum_{i\in\Omega}\frac{\mu P_0}{2 - \mu\lambda_{\min}} = \frac{\mu K P_0}{2 - \mu\lambda_{\min}} \sim \mathcal{O}(K). \tag{54}$$
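As a quick numeric illustration of Theorems 3 and 4 (the values below are our own assumptions, not from the paper): for white unit-variance training (λmin = 1), μ = 0.05, P0 = 0.01, N = 16, and K = 4, Equations 53 and 54 give

$$B = \frac{0.05 \times 16 \times 0.01}{2 - 0.05} \approx 4.1\times 10^{-3}, \qquad B_S = \frac{0.05 \times 4 \times 0.01}{2 - 0.05} \approx 1.0\times 10^{-3},$$

so exploiting the sparse structure lowers the achievable MSE bound by the factor K/N = 1/4.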

4 Computer simulations

In this section, we compare the performance of the proposed channel estimators using 1,000 independent Monte Carlo runs for averaging. The length of the sparse multipath channel h is set as N = 16, and the number of dominant taps is set as K ∈ {1, 2, 4, 8}. The values of the dominant channel taps follow a Gaussian distribution, their positions are randomly selected within the length of h, and the channel is normalized so that $\|\mathbf{h}\|_2^2 = 1$. The received signal-to-noise ratio (SNR) is defined as $10\log_{10}(E_0/\sigma_n^2)$, where E0 = 1 is the received signal power and the noise power is $\sigma_n^2 = 10^{-\mathrm{SNR}/10}$. Here, we compare performance in three SNR regimes: {5, 10, 20 dB}. All of the step sizes and regularization parameters are listed in Table 1. It is worth noting that the (N)LMS-based algorithms can exploit more accurate sparse channel information in higher SNR environments. Hence, all of the parameters are set in direct proportion to the noise power; for example, at SNR = 10 dB, the parameters of the LMS-based algorithms match those given in [20]. In this way, the proposed regularization parameter selection can adaptively exploit the channel sparsity under different SNR regimes.

Table 1 Simulation parameters of (N)LMS-based algorithms

The estimation performance is evaluated by the MSE between the actual and estimated channels, defined as

$$\mathrm{MSE}\left\{\tilde{\mathbf{h}}(n)\right\} = E\left\{\left\|\mathbf{h} - \tilde{\mathbf{h}}(n)\right\|_2^2\right\}, \tag{55}$$

where E{·} denotes the expectation operator, and h and $\tilde{\mathbf{h}}(n)$ are the actual channel vector and its estimate, respectively.
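A minimal Monte Carlo harness for this evaluation might look as follows. This is our own sketch, not the authors' simulation code: run_mse_trial is a hypothetical helper, it reuses the step functions from the Section 3 sketches, and the default arguments are illustrative.

```python
import numpy as np

def run_mse_trial(step_fn, N=16, K=4, snr_db=10, n_iter=1000, seed=0):
    # One Monte Carlo run: per-iteration MSE (Eq. 55) for a given estimator step.
    rng = np.random.default_rng(seed)
    h = np.zeros(N)
    h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    h /= np.linalg.norm(h)                       # enforce ||h||_2^2 = 1
    sigma_n = np.sqrt(10 ** (-snr_db / 10))      # noise power from the SNR definition
    x = rng.standard_normal(n_iter + N)
    h_est, mse = np.zeros(N), np.zeros(n_iter)
    for n in range(n_iter):
        x_vec = x[n:n + N][::-1]
        y = h @ x_vec + sigma_n * rng.standard_normal()
        h_est = step_fn(h_est, x_vec, y)
        mse[n] = np.sum((h - h_est) ** 2)
    return mse

# Averaging 1,000 independent runs, as in the paper's setup, e.g., for ZA-NLMS:
# avg_mse = np.mean([run_mse_trial(
#     lambda h, x, y: sparse_nlms_step(h, x, y, za_term), seed=s)
#     for s in range(1000)], axis=0)
```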

In the first experiment, we evaluate the estimation performance of LP-(N)LMS as a function of p ∈ {0.3, 0.5, 0.7, 0.9}, shown in Figures 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 for the three SNR regimes {5, 10, 20 dB}. The parameter is set as εLP = εLPN = 0.05, as suggested in [20]. In the low SNR regime, e.g., SNR = 5 dB, MSE performance curves are depicted in Figures 3, 4, 5, and 6 for different K ∈ {1, 2, 4, 8}. One can find that the LP-(N)LMS algorithm is not stable for p = 0.3, since its estimation performance is even worse than in the other cases, i.e., p ∈ {0.5, 0.7, 0.9}. In the intermediate SNR regime, e.g., SNR = 10 dB, MSE performance curves are depicted in Figures 7, 8, 9, and 10 for different K ∈ {1, 2, 4, 8}. As shown in Figure 7, for a very sparse channel (K = 1), the LP-(N)LMS algorithm with p = 0.3 obtains better estimation performance than the others, i.e., p ∈ {0.5, 0.7, 0.9}. However, for K > 1, the LP-(N)LMS algorithm is no longer stable, as shown in Figures 8, 9, and 10, where its estimation performance is even worse than that of (N)LMS. In the high SNR regime, e.g., SNR = 20 dB, the LP-(N)LMS algorithm likewise achieves better estimation performance than the others only when K ≤ 2, as shown in Figures 11 and 12. Hence, the stability of the LP-(N)LMS algorithm with p = 0.3 depends highly on both the SNR and K. If the algorithm adopts p ∈ {0.5, 0.7, 0.9}, as shown in Figures 3 through 14, there is fortunately no obvious relationship between stability and either the SNR or K. To trade off stability against estimation performance of the LP-(N)LMS algorithm, it is better to set the sparse penalty parameter to p = 0.5. On the one hand, the LP-(N)LMS algorithm with p = 0.5 always achieves better estimation performance than the cases with p ∈ {0.7, 0.9}. On the other hand, even though the LP-(N)LMS algorithm with p = 0.3 can obtain better estimation performance in certain circumstances, its stability depends highly on the SNR and K. Hence, in the following simulation results, p = 0.5 is used for LP-(N)LMS-based adaptive sparse channel estimation.

Figure 3. Performance comparison of LP-(N)LMS with different p (SNR = 5 dB and K = 1).

Figure 4. Performance comparison of LP-(N)LMS with different p (SNR = 5 dB and K = 2).

Figure 5. Performance comparison of LP-(N)LMS with different p (SNR = 5 dB and K = 4).

Figure 6. Performance comparison of LP-(N)LMS with different p (SNR = 5 dB and K = 8).

Figure 7. Performance comparison of LP-(N)LMS with different p (SNR = 10 dB and K = 1).

Figure 8. Performance comparison of LP-(N)LMS with different p (SNR = 10 dB and K = 2).

Figure 9. Performance comparison of LP-(N)LMS with different p (SNR = 10 dB and K = 4).

Figure 10. Performance comparison of LP-(N)LMS with different p (SNR = 10 dB and K = 8).

Figure 11. Performance comparison of LP-(N)LMS with different p (SNR = 20 dB and K = 1).

Figure 12. Performance comparison of LP-(N)LMS with different p (SNR = 20 dB and K = 2).

Figure 13. Performance comparison of LP-(N)LMS with different p (SNR = 20 dB and K = 4).

Figure 14. Performance comparison of LP-(N)LMS with different p (SNR = 20 dB and K = 8).

In the second experiment, we compare all the (N)LMS-based sparse adaptive channel estimation methods in two SNR regimes, {5, 20 dB}, with K ∈ {1, 8}, as shown in Figures 15, 16, 17, and 18, respectively. In the low SNR regime (e.g., SNR = 5 dB), the MSE curves show that NLMS-based methods achieve better estimation performance than LMS-based ones, as shown in Figure 15. Let us take ZA-NLMS and ZA-LMS as examples: each performance curve of ZA-NLMS is much lower than that of ZA-LMS. That is to say, the proposed ZA-NLMS achieves better estimation performance than the traditional ZA-LMS. Similarly, the other proposed NLMS-type methods, i.e., RZA-NLMS and LP-NLMS, also achieve better estimation performance than their corresponding sparse LMS counterparts. To further confirm the stability and effectiveness of our proposed methods, the sparse (N)LMS-based estimation methods are also evaluated at 20 dB with different K.

Figure 15. MSE versus the number of iterations (SNR = 5 dB and K = 1).

Figure 16. MSE versus the number of iterations (SNR = 5 dB and K = 8).

Figure 17. MSE versus the number of iterations (SNR = 20 dB and K = 1).

Figure 18. MSE versus the number of iterations (SNR = 20 dB and K = 8).

As the SNR increases, the performance advantages of the NLMS-based methods diminish. Hence, compared with LMS-based methods, we can conclude that NLMS-based methods not only work more reliably under unknown signal scaling, but also work more stably under noise interference, especially in low SNR environments.

In addition, the simulation results show that the performance gain of the (N)LMS-based sparse methods has an inverse relationship with the number of non-zero channel taps. In other words, for a sparser channel, the (N)LMS-based sparse methods achieve better estimation performance, and vice versa. Let us take K = 1 and K = 8 as examples. When the number of non-zero taps is K = 1 (Figures 15 and 17), the performance gaps are bigger than in the K = 8 cases (Figures 16 and 18). Hence, these simulation results also show that the estimation performance of adaptive sparse channel estimation is affected by the number of non-zero channel taps. When the channel is no longer sparse, the performance of the proposed methods reduces to that of the (N)LMS-based methods.

5 Conclusions

In this paper, we have investigated various (N)LMS-based adaptive sparse channel estimation methods enforcing different sparse penalties, e.g., ℓp-norm and ℓ0-norm. The research motivation originated from the fact that LMS-based channel estimation methods are sensitive to the scaling of the random training signal, which easily renders the estimation performance unstable. Unlike LMS-based methods, the proposed NLMS-based methods avoid the uncertainty of signal scaling by normalizing the update with the power of the input signal under different sparse penalties.

Initially, we proposed an improved adaptive sparse channel estimation method using an ℓ0-norm sparse-constraint LMS algorithm and compared it with ZA-LMS, RZA-LMS, and LP-LMS. The proposed method is based on the CS result that the ℓ0-norm sparse penalty exploits channel sparsity more accurately.

In addition, to improve robustness and increase the convergence speed, we proposed NLMS-based adaptive sparse channel estimation methods using different sparse penalties, i.e., ZA-NLMS, RZA-NLMS, LP-NLMS, and L0-NLMS. For example, ZA-NLMS achieves better estimation than ZA-LMS. The proposed methods exhibit faster convergence and better performance, as confirmed by computer simulations under various SNR environments.

References

  1. Raychaudhuri D, Mandayam N: Frontiers of wireless and mobile communications. Proc. IEEE 2012, 100(4):824-840.

  2. Adachi F, Grag D, Takaoka S, Takeda K: Broadband CDMA techniques. IEEE Wirel. Commun. 2005, 12(2):8-18. 10.1109/MWC.2005.1421924

  3. Adachi F, Kudoh E: New direction of broadband wireless technology. Wirel. Commun. Mob. Com. 2007, 7(8):969-983. 10.1002/wcm.507

  4. Schreiber WF: Advanced television systems for terrestrial broadcasting: some problems and some proposed solutions. Proc. IEEE 1995, 83(6):958-981. 10.1109/5.387095

  5. Molisch AF: Ultra wideband propagation channels: theory, measurement, and modelling. IEEE Trans. Veh. Technol. 2005, 54(5):1528-1545. 10.1109/TVT.2005.856194

  6. Yan Z, Herdin M, Sayeed AM, Bonek E: Experimental study of MIMO channel statistics and capacity via the virtual channel representation. Technique report: University of Wisconsin-Madison; 2007.

  7. Czink N, Yin X, Ozcelik H, Herdin M, Bonek E, Fleury BH: Cluster characteristics in a MIMO indoor propagation environment. IEEE Trans. Wirel. Commun. 2007, 6(4):1465-1475.

  8. Vuokko L, Kolmonen VM, Salo J, Vainikainen P: Measurement of large-scale cluster power characteristics for geometric channel models. IEEE Trans. Antennas Propagat. 2007, 55(11):3361-3365.

  9. Widrow B, Stearns SD: Adaptive Signal Processing. New Jersey: Prentice Hall; 1985.

  10. Candes E, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 2006, 52(2):489-509.

  11. Donoho DL: Compressed sensing. IEEE Trans. Inform. Theory 2006, 52(4):1289-1306.

  12. Bajwa WU, Haupt J, Sayeed AM, Nowak R: Compressed channel sensing: a new approach to estimating sparse multipath channels. Proc. IEEE 2010, 98(6):1058-1076.

  13. Taubock G, Hlawatsch F, Eiwen D, Rauhut H: Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing. IEEE J. Sel. Top. Sign Proces. 2010, 4(2):255-271.

  14. Gui G, Wan Q, Peng W, Adachi F: Sparse multipath channel estimation using compressive sampling matching pursuit algorithm. The 7th IEEE Vehicular Technology Society Asia Pacific Wireless Communications Symposium (APWCS), Kaohsiung, 20–21 May 2010.1-5.

  15. Tibshirani R: Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. (B) 1996, 58(1):267-288.

  16. Candes EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346(9–10):589-592.

  17. Garey MR, Johnson DS: Computers and Intractability: a Guide to the Theory of NP-Completeness. New York, NY: W.H. Freeman & Co; 1990.

  18. Chen Y, Gu Y, Hero AO: Sparse LMS for system identification. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Taipei, 19–24 April 2009.3125-3128.

  19. Candes EJ, Wakin MB, Boyd SP: Enhancing sparsity by reweighted ℓ1-minimization. J. Fourier Anal. Appl. 2008, 14(5–6):877-905.

  20. Taheri O, Vorobyov SA: Sparse channel estimation with ℓp-norm and reweighted ℓ1-norm penalized least mean squares. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, 22–27 May 2011.2864-2867.

  21. Haykin S: Adaptive Filter Theory. 3rd edition. Englewood Cliffs: Prentice-Hall; 1996.

  22. Gu YT, Jin J, Mei S: ℓ0-Norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. 2009, 16(9):774-777.

  23. Su GL, Jin J, Gu YT, Wang J: Performance analysis of ℓ0 norm constraint least mean square algorithm. IEEE Trans. Signal Process. 2012, 60(5):2223-2235.

  24. Gui G, Peng W, Adachi F: Improved adaptive sparse channel estimation based on the least mean square algorithm. IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, 7–10 April 2013.3130-3134.

  25. Gui G, Mehbodniya A, Adachi F: Least mean square/fourth algorithm for adaptive sparse channel estimation. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), London, 8–11 September 2013. in press

  26. Gui G, Mehbodniya A, Adachi F: Adaptive sparse channel estimation using re-weighted zero-attracting normalized least mean fourth. 2nd IEEE/CIC International Conference on Communications in China (ICCC), Xian, 12–14 August 2013. in press

  27. Huang Z, Gui G, Huang A, Xiang D, Adachi F: Regularization selection method for LMS-type sparse multipath channel estimation, in 19th Asia-Pacific Conference on Communications (APCC), Bali, 29–31 August 2013. in press

Acknowledgements

The authors would like to thank Dr. Koichi Adachi of the Institute for Infocomm Research for his valuable comments and suggestions as well as for the improvement of the English expression of this paper. The authors would like to extend their appreciation to the anonymous reviewers for their constructive comments. This work was supported by a grant-in-aid for the Japan Society for the Promotion of Science (JSPS) fellows (grant number 24∙02366).

Author information

Correspondence to Guan Gui.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Gui, G., Adachi, F. Improved least mean square algorithm with application to adaptive sparse channel estimation. J Wireless Com Network 2013, 204 (2013). https://doi.org/10.1186/1687-1499-2013-204
