Open Access

Design and complexity analysis of an improved adaptive filtering algorithm for non-sparse impulse response

EURASIP Journal on Wireless Communications and Networking20142014:14

https://doi.org/10.1186/1687-1499-2014-14

Received: 21 October 2013

Accepted: 13 January 2014

Published: 22 January 2014

Abstract

An improved adaptive filtering algorithm (IMPNLMS+) has been proposed for non-sparse impulse responses by incorporating an adaptive μ-law compression parameter μ into the improved μ-law PNLMS (IMPNLMS) algorithm. It not only achieves optimal step-size control factors but also overcomes the problem that the classical μ-law PNLMS (MPNLMS) algorithm converges even more slowly than the conventional NLMS algorithm for dispersive channels. In this paper, we propose the IMPNLMS++ algorithm, in which the normalization of the step-size control factors is analyzed and removed to reduce the computational complexity of the IMPNLMS+ algorithm. Its validity is demonstrated by simulation results.

Keywords

IMPNLMS+ algorithm; Non-sparse impulse response; Computational complexity

1. Introduction

In echo cancelation systems, adaptive filters are used to identify the impulse responses of echo paths, which are usually sparse in nature [1-3]. For such systems, the classical normalized least-mean-square (NLMS) algorithm, which assigns the same step size to all filter coefficients, converges slowly. Several adaptive algorithms that exploit the sparse nature of the impulse response have been proposed to resolve this problem.

The proportionate NLMS (PNLMS) algorithm [4] presented by Duttweiler slows down dramatically after an initial period of fast convergence. The μ-law PNLMS (MPNLMS) algorithm [5] was proposed to solve this problem. Although it achieves the optimal proportionate step size, it converges even more slowly than the classical NLMS algorithm in dispersive channels. The improved MPNLMS (IMPNLMS) algorithm [6] for non-sparse impulse responses was proposed to improve the performance of MPNLMS with an automatically adjusted parameter. If the convergence speed of the IMPNLMS algorithm can be further improved, the algorithm will be well suited to time-varying environments. In [7], we presented an improvement of the IMPNLMS algorithm that uses a time-varying parameter μ instead of a constant value; as a consequence, the convergence behavior of the MPNLMS algorithm is improved significantly.

In this paper, the improved IMPNLMS algorithm, referred to as the IMPNLMS+ algorithm throughout, is reviewed, and its computational complexity is compared with that of other adaptive filtering algorithms. A non-normalized variant is then analyzed to bring down the computational complexity of IMPNLMS+, and its feasibility is confirmed by numerical simulations.

This paper is organized as follows. In Section 2, the IMPNLMS+ algorithm is reviewed. In Section 3, the non-normalized technique used to lower the computational complexity is analyzed. In Section 4, numerical simulations confirm that the computational complexity of IMPNLMS+ is lower without normalization while the performance is essentially unchanged. Finally, Section 5 presents conclusions.

2. Review of the improved IMPNLMS algorithm

For non-sparse impulse responses, we proposed the IMPNLMS+ algorithm [7] by applying a time-varying parameter μ, so that the algorithm does not perform worse than NLMS even for dispersive channels and can also be applied in time-varying environments. In this section, the IMPNLMS+ algorithm is recalled.

The coefficient update of the IMPNLMS+ algorithm, a steepest-descent recursion with a μ-law step-size control matrix [4], can be written as:
w(k+1) = w(k) + \frac{\beta\, G(k+1)\, x(k)\, e(k)}{x^T(k)\, G(k+1)\, x(k) + \delta}
(1)
The L × L step-size control matrix is:
G(k+1) = \mathrm{diag}\{ g_1(k+1), g_2(k+1), \ldots, g_L(k+1) \}
(2)
The l-th step-size control factor g_l(k) was presented in [6]:
g_l(k) = \frac{1 - \alpha(k)}{2L} + \frac{(1 + \alpha(k))\, F(|w_l(k)|)}{2\, \| F(|w(k)|) \|_1 + \epsilon}
(3)
where the logarithmic compression function in IMPNLMS+ differs from that in [6]:
F(|w_l(k)|) = \ln\!\left( 1 + \mu(k)\, |w_l(k)| \right)
(4)
Here, μ(k) [8] is a time-varying parameter rather than a constant:
\mu(k) = \frac{1}{\epsilon(k)}
(5)
\epsilon(k) = \sqrt{\frac{\gamma(k)}{L\, \lambda_x^2}}
(6)
\gamma(k) = \eta\, \gamma(k-1) + (1 - \eta)\, e^2(k-1)
(7)
The parameter α(k) in IMPNLMS+ is given by:
\alpha(k) = 2\, \xi(k) - 1
(8)
\xi(k) = (1 - \rho)\, \xi(k-1) + \rho\, \xi_w(k), \quad 0 < \rho \le 1
(9)
\xi_w(k) = \frac{L}{L - \sqrt{L}} \left( 1 - \frac{\| w(k) \|_1}{\sqrt{L}\, \| w(k) \|_2} \right)
(10)

In the IMPNLMS+ algorithm, thanks to the adaptation of μ(k), the algorithm is more flexible in minimizing the MSE through the time-varying μ(k). The IMPNLMS+ algorithm can achieve good convergence even in time-varying environments where the echo path changes markedly. However, computing the μ-law step-size control matrix in the IMPNLMS+ algorithm is expensive. In the next section, we analyze the computational complexity of the IMPNLMS+ algorithm, and a non-normalized technique is discussed to reduce it; the technique can also be applied to all proportionate NLMS algorithms.
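To make the recursion concrete, the following NumPy sketch performs one IMPNLMS+ coefficient update following (1) to (10). It is only an illustration under stated assumptions: the function name, the state dictionary, the regularization constants, and the small numerical floors are ours, and λ_x² is treated as a known input-power estimate.

```python
import numpy as np

def impnlms_plus_update(w, x, d, state, beta=0.5, rho=0.1, eta=0.99,
                        delta=1e-2, eps_reg=1e-2):
    """One IMPNLMS+ coefficient update, following (1)-(10) (illustrative sketch)."""
    L = w.size
    e = d - x @ w                                   # a priori error e(k)

    # Channel sparseness estimate: (10), (9), (8)
    norm1 = np.sum(np.abs(w))
    norm2 = np.linalg.norm(w) + 1e-12               # guard against w = 0 at start-up
    xi_w = L / (L - np.sqrt(L)) * (1.0 - norm1 / (np.sqrt(L) * norm2))
    xi = (1.0 - rho) * state['xi'] + rho * xi_w
    alpha = 2.0 * xi - 1.0

    # Time-varying mu(k): (7), (6), (5)
    gamma = eta * state['gamma'] + (1.0 - eta) * state['e_prev'] ** 2
    eps_k = np.sqrt(gamma / (L * state['lambda_x2']))
    mu = 1.0 / max(eps_k, 1e-12)

    # mu-law compression (4) and step-size control factors (3)
    F = np.log(1.0 + mu * np.abs(w))
    g = (1.0 - alpha) / (2.0 * L) + (1.0 + alpha) * F / (2.0 * np.sum(F) + eps_reg)

    # Proportionate update (1); G(k+1) is diagonal, so G(k+1)x(k) is just g * x
    gx = g * x
    w_new = w + beta * e * gx / (x @ gx + delta)

    state.update(xi=xi, gamma=gamma, e_prev=e)
    return w_new, e
```

For the setup of Section 4, the state could be initialized, for instance, with xi = 0.96, e_prev = 0, lambda_x2 = 1 (unit-variance white input), and a small positive gamma; the exact initial value of gamma is our assumption.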

3. Analysis of the computational complexity

In general, the computational complexity of adaptive algorithms can be expressed as the number of additions, multiplications, divisions, logarithm calculations, etc. per iteration. In Table 1, we compare the computational complexity of each filter coefficient update equation. The filter coefficient update of the IMPNLMS+ algorithm is expensive: compared with MPNLMS, it requires an extra 3L + 4 additions, L + 6 multiplications, two divisions, and two square-root operations per iteration. In this section, we improve the IMPNLMS+ algorithm, termed IMPNLMS++, by removing the normalization of the step-size control factors; the possibility of lowering the computational complexity of IMPNLMS+ in this way is discussed.
Table 1 The computational complexity of each filter coefficient update equation for NLMS-type algorithms

Algorithm   Addition   Multiplication   Division   Comparison   Logarithm   Square root
NLMS        L          2L + 1           1          0            0           0
PNLMS       2L - 1     4L + 1           2          2L           0           0
IPNLMS      3L         4L + 1           2          0            0           0
MPNLMS      2L - 1     4L + 2           2          2L           L           0
IMPNLMS     5L + 2     5L + 5           3          0            L           1
IMPNLMS+    5L + 3     5L + 8           4          0            L           2
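For reference, the per-iteration counts of Table 1 can be evaluated at the filter length L = 100 used later in Section 4; the short script below simply tabulates the table entries and introduces no new quantities.

```python
# Per-iteration operation counts from Table 1, evaluated at L = 100.
L = 100
counts = {
    #             add       mult      div  cmp   log  sqrt
    'NLMS':      (L,        2*L + 1,  1,   0,    0,   0),
    'PNLMS':     (2*L - 1,  4*L + 1,  2,   2*L,  0,   0),
    'IPNLMS':    (3*L,      4*L + 1,  2,   0,    0,   0),
    'MPNLMS':    (2*L - 1,  4*L + 2,  2,   2*L,  L,   0),
    'IMPNLMS':   (5*L + 2,  5*L + 5,  3,   0,    L,   1),
    'IMPNLMS+':  (5*L + 3,  5*L + 8,  4,   0,    L,   2),
}
for name, (add, mult, div, cmp_, log, sqrt) in counts.items():
    print(f'{name:9s} add={add:4d} mult={mult:4d} div={div} cmp={cmp_:4d} log={log:4d} sqrt={sqrt}')
```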

In [4], the denominator of the coefficient update equation in PNLMS is x^T(k) x(k), and Duttweiler normalized the step-size control factors g_l(k) to avoid a direct influence of the target impulse response amplitude (i.e., |h_m|) on the effective step size β g_l(k). Thus, the final effective step size assigned to the filter coefficients is proportional only to the parameter β. However, when the denominator is x^T(k) G(k+1) x(k), the normalization can be skipped. The analysis is as follows.

For the coefficient update equation whose denominator is x^T(k) G(k+1) x(k), multiplying the numerator and the denominator by the same non-zero real number c gives:
\frac{\beta\, G(k+1)\, x(k)\, e(k)}{x^T(k)\, G(k+1)\, x(k) + \delta} = \frac{\beta\, [cG(k+1)]\, x(k)\, e(k)}{x^T(k)\, [cG(k+1)]\, x(k) + c\delta}
(11)

where δ is a small positive number whose only purpose is to prevent the algorithm from stalling when the denominator approaches zero. Since, in general, x^T(k) [cG(k+1)] x(k) ≫ δ, the scaling in (11), which is exactly what normalizing the step-size control factors amounts to, does not noticeably affect the coefficient update term. Therefore, omitting the normalization has little effect on the adaptive algorithm while saving L divisions per iteration. This conclusion applies to MPNLMS and IMPNLMS as well.
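The argument can be checked numerically. The sketch below, with illustrative values for β, δ, the input, and the gains (all our assumptions), applies the update term of (11) once with the raw gains and once with the gains rescaled by a common constant so that they average to one, as a normalization step would do; the two results differ only through the δ term.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, delta = 100, 0.5, 1e-2

x = rng.standard_normal(L)               # input vector x(k)
e = rng.standard_normal()                # error sample e(k)
g = rng.uniform(0.01, 1.0, L)            # unnormalized step-size control factors

def update_term(g, c=1.0):
    """Coefficient update term of (11) with the gains scaled by c (diagonal G)."""
    gx = c * g * x                       # [cG(k+1)] x(k)
    return beta * e * gx / (x @ gx + delta)

u_raw = update_term(g)                   # gains as computed, no normalization
u_nrm = update_term(g, c=L / np.sum(g))  # gains rescaled to average one
print(np.max(np.abs(u_raw - u_nrm)) / np.max(np.abs(u_raw)))  # negligible relative difference
```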

4. Numerical simulations

In this section, the proportionate IMPNLMS+ algorithm is simulated to confirm that removing the normalization has little influence on the algorithm. The setup of the simulation system, which is similar to that in [7], is described as follows.

The number of coefficients of both the unknown system w_0 and the adaptive filter w is 100 (L = 100). The signal-to-noise ratio (SNR) is 40 dB, and the disturbance z(n) is a Gaussian signal. The constant step size β is 0.5. The initial value of ξ is 0.96. The forgetting factor ρ used to estimate the channel sparseness is 0.1. In the IMPNLMS+ algorithm, η = 0.99 and ν = 1,000.

To evaluate the performance of the IMPNLMS+ algorithm, the normalized misalignment measure (in dB) [1] is used:
K(k) = 10 \log_{10} \frac{\| w_0 - w(k) \|_2^2}{\| w_0 \|_2^2}
(12)
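The measure in (12) translates directly into a few lines of NumPy; the function name below is ours:

```python
import numpy as np

def misalignment_db(w0, w):
    """Normalized misalignment (12), in dB, between the true and estimated filters."""
    return 10.0 * np.log10(np.sum((w0 - w) ** 2) / np.sum(w0 ** 2))
```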
In the simulations, the input of the system is white Gaussian noise with zero mean and unit variance. The sparseness degrees of the channel are 0.80, 0.61, and time-varying from 0.90 to 0.80, respectively. IMPNLMS+ and the non-normalized IMPNLMS+ (IMPNLMS++) are compared in Figures 1, 2, and 3.
Figure 1

Gaussian input with sparseness degree 0.80.

Figure 2

Gaussian input with sparseness degree 0.60.

Figure 3

Gaussian input with sparseness degree ranging from 0.90 to 0.80.

As shown in Figures 1, 2, and 3, both IMPNLMS+ and IMPNLMS++ converge gradually within 1,000 iterations. Furthermore, the lower the sparseness degree is, the faster IMPNLMS+ and IMPNLMS++ converge. As illustrated in the figures above, IMPNLMS++ performs nearly the same as the normalized IMPNLMS+, while its computational complexity is L divisions per iteration lower.

5. Conclusions

In this paper, we first recalled the improved IMPNLMS (IMPNLMS+) algorithm for non-sparse impulse responses. The complexity of each filter coefficient update equation was then compared. To reduce the computational complexity of the IMPNLMS+ algorithm, the possibility of omitting the normalization was verified through theoretical analysis. Simulation results confirm the effectiveness and feasibility of the non-normalized IMPNLMS+ (IMPNLMS++) algorithm.

Declarations

Acknowledgment

This work is supported in part by NSFC 61143008, the National High Technology Research and Development Program of China (no. 2011AA01A204), and the Fundamental Research Funds for the Central Universities.

Authors’ Affiliations

(1)
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
(2)
Key Laboratory of Trustworthy Distributed Computing and Service, Ministry of Education, Beijing University of Posts and Telecommunications

References

  1. Huang Y, Benesty J, Chen J: Acoustic MIMO Signal Processing. New York: Springer-Verlag; 2006.
  2. Martin RK, Sethares WA, Williamson RC, Johnson CR: Exploiting sparsity in adaptive filters. IEEE Trans. Signal Process. 2002, 50(8):1883-1894. doi:10.1109/TSP.2002.800414
  3. Sun S, Yanhong J, Yamao Y: Overlay cognitive radio OFDM system for 4G cellular network. IEEE Wireless Communications. 2013, 20(2):68-73.
  4. Duttweiler DL: Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Processing. 2000, 8(5):508-518. doi:10.1109/89.861368
  5. Deng H, Doroslovacki M: Improving convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Processing Lett. 2005, 12(3):181-184.
  6. Liu L, Fukumoto M, Saiki S: An improved mu-law proportionate NLMS algorithm. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'08). Las Vegas; 2008:3797-3800.
  7. Songlin S, Xiao X, Chenglin Z, Yanhong J, Yueming L: An improved adaptive filtering algorithm for non-sparse impulse response. In International Conference on Communications, Signal Processing, and Systems (CSPS), vol. 202. Heidelberg: Springer; 2012:409-415.
  8. Wagner K, Doroslovacki M: Gain allocation in proportionate-type NLMS algorithms for fast decay of output error at all times. In IEEE International Conference on Acoustics, Speech, and Signal Processing. Taipei; 2009:19-24.

Copyright

© Sun et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.