Acoustic Echo Cancellation using Variable Step- Size NLMS Algorithms

DOI : 10.17577/IJERTCONV2IS13010


GOPALAIAH

Research Scholar, Dayananda Sagar College of Engineering, Bangalore, India
gopaliah@gmail.com

Dr. K. SURESH

Professor, Sri Darmasthala Manjunatheswara Institute of Technology, Ujire, India
ksece1@gmail.com

Abstract: The purpose of a variable step-size normalized LMS filter is to resolve the conflict between fast convergence and low excess mean-square error (MSE). In the past two decades, many VSS-NLMS algorithms have been presented, with claims of good convergence and tracking properties. This paper summarizes several promising algorithms and gives a performance comparison via extensive simulation. Simulation results demonstrate that Benesty's NPVSS and our GSER have the best performance in both time-invariant and time-varying systems.

Index Terms: Adaptive filters, normalized least mean square (NLMS), variable step-size NLMS, regularization parameter.

  1. INTRODUCTION

Adaptive filtering algorithms have been widely employed in many signal processing applications. Among them, the normalized least mean square (NLMS) adaptive filter is the most popular due to its simplicity. The stability of the basic NLMS is controlled by a fixed step-size constant μ, which also governs the rate of convergence, the tracking speed, and the amount of steady-state excess mean-square error (MSE).

In practice, the NLMS is usually implemented by adding a small positive number ε, commonly called the regularization parameter, to the squared norm of the input vector. For the basic NLMS algorithm, the role of ε is to prevent the associated denominator from getting too close to zero, so as to keep the filter from diverging. Since the performance of ε-NLMS is affected by the overall step-size parameter, the regularization parameter influences the convergence properties and the excess MSE as well; for example, a too-large ε may slow down the adaptation of the filter in certain applications.
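To make the role of ε concrete, here is a minimal sketch of one ε-NLMS coefficient update in Python/NumPy, using the notation of Section II. The function name and default values are illustrative only, not from the original paper.

```python
import numpy as np

def nlms_update(w, x, d, mu=1.0, eps=1e-4):
    """One epsilon-NLMS iteration: the regularization term eps keeps the
    normalization denominator away from zero when the input fades."""
    e = d - np.dot(x, w)                          # a priori error
    w = w + (mu * e / (eps + np.dot(x, x))) * x   # normalized update
    return w, e
```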

There are conflicting objectives between fast convergence and low excess MSE for the NLMS with a fixed regularization parameter. In the past two decades, many variable step-size NLMS (VSS-NLMS) algorithms have been proposed to solve this dilemma of the conventional NLMS. For example, Kwong used the power of the instantaneous error to introduce a variable step-size LMS (VSSLMS) filter [6]. The VSSLMS has a larger step size when the error is large, and a smaller step size when the error is small. Later, Aboulnasr pointed out that the VSSLMS algorithm is fairly sensitive to the accompanying noise, and presented a modified VSSLMS (MVSS) algorithm to alleviate the influence of uncorrelated disturbance [1]. The step-size update of MVSS is adjusted by an estimate of the autocorrelation of errors at adjacent time samples. Recently, Shin, Sayed, and Song used the norm of the filter coefficient error vector as a criterion for the optimal variable step size, and proposed a variable step-size affine projection algorithm (VS-APA) as well as a variable step-size NLMS (VS-NLMS) [10]. Lately, Benesty proposed a nonparametric VSS-NLMS algorithm (NPVSS), which does not require tuning as many parameters as other variable step-size algorithms [3].

Another type of VSS algorithm makes time-varying the regularization parameter that is fixed in conventional ε-NLMS filters. By making the regularization parameter gradient-adaptive, Mandic presented a generalized normalized gradient descent (GNGD) algorithm [9]. Mandic claimed that the GNGD adapts its learning rate according to the dynamics of the input signals, and that its performance is bounded from below by the performance of the NLMS. Very recently, Mandic introduced another scheme with a hybrid-filter structure to further improve the steady-state misadjustment of the GNGD [8]. Choi, Shin, and Song then proposed a robust regularized NLMS (RR-NLMS) filter, which uses a normalized gradient to update the regularization parameter [4]. While most variable step-size algorithms need several parameters to be tuned for good performance, we recently presented an almost tuning-free generalized square-error-regularized NLMS algorithm (GSER) [7]. The GSER exhibits very good performance, with fast convergence, quick tracking, and low steady-state MSE.

    The purpose of this paper is to provide a fair comparison among these VSS algorithms. In Section II, we summarize the algorithms. Section III illustrates the simulation results. Conclusions are given in Section IV.

  2. VARIABLE STEP-SIZE ALGORITHMS

In this section, we summarize several variable step-size adaptive filtering algorithms, including the VSSLMS, MVSS, VS-APA, VS-NLMS, NPVSS, GNGD, RR-NLMS, and GSER algorithms.

    Let d(n) be the desired response signal of the adaptive filter

d(n) = xᵀ(n) h(n) + v(n), (1)

where h(n) denotes the coefficient vector of the unknown system with length M,

h(n) = [h₀(n), h₁(n), …, h_{M−1}(n)]ᵀ, (2)

    x(n) is the input vector

x(n) = [x(n), x(n−1), …, x(n−M+1)]ᵀ, (3)

and v(n) is the system noise, which is independent of x(n).

Let the adaptive filter have the same structure and the same order as the unknown system, and denote the coefficient vector of the filter at iteration n by w(n). The a priori estimation error is then

e(n) = d(n) − xᵀ(n) w(n). (4)

A. VSSLMS algorithm

Kwong used the squared instantaneous a priori estimation error to update the step size as [6]

μ(n+1) = α μ(n) + γ e²(n), (5)

where 0 < α < 1, γ > 0, and μ(n+1) is restricted to a pre-decided range [μ_min, μ_max]. The filter coefficient vector update recursion is given by

w(n+1) = w(n) + μ(n) e(n) x(n). (6)
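As a concrete illustration, here is a minimal Python sketch of one VSSLMS iteration combining (5) and (6); the default parameter values are illustrative, not those of [6].

```python
import numpy as np

def vsslms_step(w, x, d, mu, alpha=0.97, gamma=5e-5,
                mu_min=1e-4, mu_max=1.0):
    """One VSSLMS iteration, Eqs. (5)-(6): large errors raise the step
    size, small errors shrink it, with clipping to [mu_min, mu_max]."""
    e = d - np.dot(x, w)                  # a priori error, Eq. (4)
    w = w + mu * e * x                    # coefficient update, Eq. (6)
    mu = alpha * mu + gamma * e**2        # step-size recursion, Eq. (5)
    mu = min(max(mu, mu_min), mu_max)     # keep mu in the pre-decided range
    return w, mu, e
```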

B. MVSS algorithm

Aboulnasr utilized an estimate of the autocorrelation of e(n) at adjacent time samples to update the variable step size as [1]

μ(n+1) = α μ(n) + γ p²(n), (7)

where

p(n) = β p(n−1) + (1−β) e(n) e(n−1), (8)

and 0 < β < 1. The coefficient update itself is the same as (6).
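A sketch of the MVSS step-size recursion is given below; it replaces the e²(n) term of the VSSLMS with the smoothed autocorrelation p²(n). Default values are illustrative only.

```python
def mvss_step(mu, p, e, e_prev, alpha=0.97, gamma=5e-5, beta=0.99,
              mu_min=1e-4, mu_max=1.0):
    """MVSS step-size recursion, Eqs. (7)-(8): the smoothed error
    autocorrelation p(n) averages out uncorrelated measurement noise."""
    p = beta * p + (1.0 - beta) * e * e_prev   # Eq. (8)
    mu = alpha * mu + gamma * p * p            # Eq. (7)
    return min(max(mu, mu_min), mu_max), p
```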

C. VS-APA and VS-NLMS

Shin, Sayed, and Song proposed a variable step-size affine projection algorithm (VS-APA), which employs an error vector, instead of the scalar error used in the VSSLMS [6] and MVSS [1], to adjust the variable step size [10]. The coefficient vector update recursion is given by

w(n+1) = w(n) + μ(n) X(n) (Xᵀ(n) X(n) + δ₁ I)⁻¹ e(n), (9)

where δ₁ is a small positive number, I is the identity matrix of size K × K, X(n) is an M × K input matrix defined as

X(n) = [x(n), x(n−1), …, x(n−K+1)], (10)

and

e(n) = [e(n), e(n−1), …, e(n−K+1)]ᵀ. (11)

The variable step size μ(n) is obtained by

μ(n) = μ_max ‖p(n)‖² / (‖p(n)‖² + δ₂), (12)

where δ₂ is a positive number proportional to K, μ_max < 2, and p(n) is an M × 1 vector recursively given by

p(n) = α p(n−1) + (1−α) X(n) (Xᵀ(n) X(n) + δ₁ I)⁻¹ e(n). (13)

A variable step-size NLMS (VS-NLMS) is obtained as a special case of VS-APA by choosing K = 1.
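The following sketch implements one VS-APA iteration from (9)-(13); X holds the K most recent regressors as columns and d_vec the K most recent desired samples. Parameter defaults are illustrative.

```python
import numpy as np

def vs_apa_update(w, X, d_vec, p, mu_max=1.0, alpha=0.99,
                  delta1=0.1, delta2=1e-4):
    """One VS-APA iteration, Eqs. (9)-(13); X is M x K."""
    e_vec = d_vec - X.T @ w                                   # Eq. (11)
    G = np.linalg.inv(X.T @ X + delta1 * np.eye(X.shape[1]))  # regularized Gram
    proj = X @ (G @ e_vec)                                    # projected error
    p = alpha * p + (1.0 - alpha) * proj                      # Eq. (13)
    mu = mu_max * np.dot(p, p) / (np.dot(p, p) + delta2)      # Eq. (12)
    w = w + mu * proj                                         # Eq. (9)
    return w, p
```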

D. NPVSS algorithm

Benesty argued that many variable step-size algorithms may not work reliably because they require several parameters that are not easy to tune in practice, and proposed a nonparametric variable step-size NLMS algorithm (NPVSS) [3]. The filter coefficient vector update recursion is given as in (6), and the variable step size is updated as

μ(n) = (1 / (δ₃ + ‖x(n)‖²)) (1 − σ_v / (δ₄ + σ_e(n))) if σ_e(n) ≥ σ_v, and μ(n) = 0 otherwise,

where δ₃ and δ₄ are positive numbers, σ_v² is the power of the system noise, and the power of the error signal is estimated recursively as

σ_e²(n) = λ σ_e²(n−1) + (1−λ) e²(n).
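A sketch of the NPVSS step-size rule, as reconstructed above, is given below; the noise standard deviation sigma_v is assumed known, and the resulting mu multiplies e(n) x(n) directly since the normalization by the input power is already inside it.

```python
import numpy as np

def npvss_mu(x, e, sigma_e2, sigma_v, lam=0.99, delta3=20.0, delta4=1e-3):
    """NPVSS step size (sketch of [3]): the step shrinks toward zero as
    the estimated error power approaches the noise floor sigma_v**2."""
    sigma_e2 = lam * sigma_e2 + (1.0 - lam) * e**2   # error-power estimate
    sigma_e = np.sqrt(sigma_e2)
    if sigma_e >= sigma_v:
        mu = (1.0 - sigma_v / (delta4 + sigma_e)) / (delta3 + np.dot(x, x))
    else:
        mu = 0.0                                     # stop adapting at the floor
    return mu, sigma_e2
```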

E. GNGD algorithm

The GNGD belongs to the family of VSS algorithms with a time-varying regularization parameter. The filter coefficient vector is updated as [9]

w(n+1) = w(n) + (μ_c / (‖x(n)‖² + ε(n))) e(n) x(n),

where μ_c is a fixed step size, and the regularization parameter ε(n) is recursively calculated by the gradient-adaptive rule

ε(n+1) = ε(n) − ρ μ_c e(n) e(n−1) xᵀ(n) x(n−1) / (‖x(n−1)‖² + ε(n−1))²,

where ρ is an adaptation parameter that needs tuning, and the initial value ε(0) has to be set as well.
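A minimal GNGD iteration is sketched below; for brevity the ε(n−1) bookkeeping is simplified to a single stored eps value, and the defaults mirror the simulation setup of Section III.

```python
import numpy as np

def gngd_update(w, x, x_prev, d, e_prev, eps, mu_c=1.0, rho=0.15):
    """GNGD iteration (sketch of [9]): NLMS with a gradient-adaptive
    regularization term eps(n); rho and eps(0) must be tuned."""
    e = d - np.dot(x, w)
    w = w + (mu_c * e / (np.dot(x, x) + eps)) * x
    grad = e * e_prev * np.dot(x, x_prev) / (np.dot(x_prev, x_prev) + eps) ** 2
    eps = eps - rho * mu_c * grad        # adapt the regularization parameter
    return w, eps, e
```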

F. RR-NLMS algorithm

Choi's RR-NLMS algorithm is a modified version of the GNGD [4]. The regularization parameter is updated with a normalized (sign-based) gradient,

ε(n+1) = max{ ε(n) − ρ sgn[e(n) e(n−1) xᵀ(n) x(n−1)], ε_min },

where sgn(x) represents the sign function, and ε_min is a parameter that needs tuning.

G. GSER algorithm

The GSER updates w(n) as

w(n+1) = w(n) + (e(n) / (‖x(n)‖² + γ σ_e²(n))) x(n),

where γ is a positive parameter that makes the filter more general, and the power of the error signal, σ_e²(n), is estimated recursively as in the NPVSS algorithm.
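The sketch below follows the description above: the running error-power estimate itself regularizes the NLMS denominator, so the effective step size shrinks as the filter converges. It is written in the spirit of GSER [7] and is not necessarily the exact published form.

```python
import numpy as np

def gser_update(w, x, d, sigma_e2, gamma=0.15, lam=0.99):
    """Square-error-regularized NLMS update (illustrative GSER-style sketch)."""
    e = d - np.dot(x, w)
    sigma_e2 = lam * sigma_e2 + (1.0 - lam) * e**2   # error-power estimate
    w = w + (e / (np.dot(x, x) + gamma * sigma_e2)) * x
    return w, sigma_e2, e
```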

  3. SIMULATION RESULTS

In this section, we present comparison results from several experiments with the VSSLMS, MVSS, VS-APA, VS-NLMS, NPVSS, GNGD, RR-NLMS, and GSER algorithms. The adaptive filter is used to identify a 128-tap acoustic echo system h_o(n). We use the normalized squared coefficient error (NSCE) to evaluate the performance of the algorithms, defined as

NSCE(n) = 10 log₁₀( ‖h_o(n) − w(n)‖² / ‖h_o(n)‖² ).
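For reference, the NSCE in decibels can be computed as below; h_true stands for the current true echo path h_o(n) (the names are illustrative).

```python
import numpy as np

def nsce_db(h_true, w):
    """Normalized squared coefficient error in dB for the current
    true path h_true and the filter estimate w."""
    return 10.0 * np.log10(np.sum((h_true - w) ** 2) / np.sum(h_true ** 2))
```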

We have run extensive simulations, and the results are reasonably consistent. In this section, we show simulation results with the following parameter setup: α = β = 0.99, γ = 5×10⁻⁵, δ₁ = 0.1, δ₂ = 10⁻⁴, δ₃ = 20, δ₄ = 10⁻³, μ_min = 10⁻⁴, ε_min = 10⁻³, μ_max = 1, μ_c = 1, ρ = 0.15, and K = 4. We assume that the power of the system noise, σ_v², is available for the NPVSS algorithm.

1. Time-Invariant System

The reference input x(n) is either a zero-mean, unit-variance white Gaussian signal or a second-order AR process. The power of the echo system output is about 1. An independent white Gaussian signal with zero mean and variance 0.001 is added to the system output. Figures 1 and 2 show the results for the white Gaussian input and the AR process input, respectively. The NSCE curves are ensemble averages over 20 independent runs. As can be seen, VS-NLMS has the worst performance. GNGD and RR-NLMS have similar convergence speed in the early period; GNGD shows very limited improvement in the later phase, while RR-NLMS keeps adapting toward a lower NSCE. However, RR-NLMS is still outperformed by the remaining algorithms in this category. VSSLMS and MVSS have the same performance in this simulation. VS-APA, NPVSS, and GSER form the best group, with fast convergence speed and low NSCE.

2. Time-Varying System

Tracking a time-varying system is an important issue in adaptive signal processing. We compare RR-NLMS, VS-APA, NPVSS, and GSER in a scenario where the acoustic echo system h(n) changes to its negative at sample 35,000. The additive zero-mean white Gaussian noise v(n) has variance either 0.01 or 0.001. Figures 3 and 4 show the results for white Gaussian input with 30-dB and 20-dB signal-to-noise ratio (SNR), respectively. All algorithms track quickly. RR-NLMS has the worst NSCE. VS-APA achieves the lowest NSCE when the SNR is 30 dB; in the 20-dB case, however, its NSCE is 5 dB worse than that of NPVSS and GSER. Notice that VS-APA exhibits a slow convergence rate. NPVSS has a slight NSCE advantage over GSER in the 20-dB SNR case; it should be noted that NPVSS assumes σ_v² is available in the simulation. Figures 5 and 6 show the results for the AR process input with 30-dB and 20-dB SNR, respectively. RR-NLMS again has the worst NSCE and shows slower tracking than in the white Gaussian input case. VS-APA still has problems in the low-SNR situation: its NSCE is 10 dB worse than that of the competing algorithms. GSER has the fastest tracking and convergence speed in the 30-dB SNR case.
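A minimal sketch of the time-varying experiment is given below, reusing nlms_update and nsce_db from the sketches above. The echo path here is a random unit-power stand-in for the measured acoustic response, and the sign flip at sample 35,000 follows the scenario described in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 128, 70_000
h = rng.standard_normal(M) / np.sqrt(M)        # stand-in 128-tap echo path
x_sig = rng.standard_normal(N)                 # white Gaussian input
w = np.zeros(M)
nsce = np.empty(N - M)
for n in range(M, N):
    h_now = -h if n >= 35_000 else h           # abrupt change at 35,000
    x = x_sig[n - M + 1:n + 1][::-1]           # regressor vector, Eq. (3)
    d = np.dot(x, h_now) + np.sqrt(1e-3) * rng.standard_normal()  # 30-dB SNR
    w, e = nlms_update(w, x, d)                # swap in any update of Sec. II
    nsce[n - M] = nsce_db(h_now, w)
```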

  4. CONCLUSIONS

Many variable step-size NLMS algorithms have been proposed in the past two decades to achieve fast convergence, rapid tracking, and low misalignment. This paper summarized several promising algorithms and presented a performance comparison by means of extensive simulation. According to the simulations, Benesty's NPVSS and our GSER have the best performance in both time-invariant and time-varying systems.

REFERENCES

1. T. Aboulnasr and K. Mayyas, "A robust variable step-size LMS-type algorithm: analysis and simulations," IEEE Transactions on Signal Processing, Vol. 45, No. 3, pp. 631–639, March 1997.

2. M. T. Akhtar, M. Abe, and M. Kawamata, "A new variable step size LMS algorithm-based method for improved online secondary path modeling in active noise control systems," IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 2, pp. 720–726, March 2006.

3. J. Benesty et al., "A nonparametric VSS NLMS algorithm," IEEE Signal Processing Letters, Vol. 13, No. 10, pp. 581–584, Oct. 2006.

4. Y. S. Choi, H. C. Shin, and W. J. Song, "Robust regularization for normalized LMS algorithms," IEEE Transactions on Circuits and Systems II, Express Briefs, Vol. 53, No. 8, pp. 627–631, Aug. 2006.

5. Y. S. Choi, H. C. Shin, and W. J. Song, "Adaptive regularization matrix for affine projection algorithm," IEEE Transactions on Circuits and Systems II, Express Briefs, Vol. 54, No. 12, pp. 1087–1091, Dec. 2007.

6. R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Transactions on Signal Processing, Vol. 40, pp. 1633–1642, July 1992.

7. J. Lee, H. C. Huang, and Y. N. Yang, "The generalized square-error-regularized LMS algorithm," Proceedings of WCECS 2008, pp. 157–160, Oct. 2008.

8. D. P. Mandic et al., "Collaborative adaptive learning using hybrid filters," Proceedings of 2007 IEEE ICASSP, pp. III-921–924, April 2007.

9. D. P. Mandic, "A generalized normalized gradient descent algorithm," IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 115–118, Feb. 2004.

10. H. C. Shin, A. H. Sayed, and W. J. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 132–135, Feb. 2004.

11. J. M. Valin and I. B. Collings, "Interference-normalized least mean square algorithm," IEEE Signal Processing Letters, Vol. 14, No. 12, pp. 988–991, Dec. 2007.
