Comparative Study of Different Adaptive Filter Algorithms used for Effective Noise Cancellation

DOI : 10.17577/IJERTV3IS041771


Nagaraju. P. Assistant Professor, Department of Telecommunication Engineering,

R. V. C. E, Bangalore, India

Gayathri S Department of Telecommunication Engineering,

R. V. C. E, Bangalore, India

Harshitha N Department of Telecommunication Engineering,

R. V. C. E, Bangalore, India

Meghna Prakash Department of Telecommunication Engineering,

R. V. C. E, Bangalore, India

Abstract—Speech is the most basic way for humans to convey information, occupying a frequency range of roughly 300–3400 Hz, and speech signals are easily corrupted by noise. Noise cancellation is therefore one of the most essential requirements in present-day telecommunication systems. Adaptive algorithms are currently used for effective noise cancellation; since signal characteristics can change quickly, adaptive algorithms that converge rapidly are required.

    In this paper, a comparative study of Least Mean Squares (LMS), Normalized Least Mean Square (NLMS) and Affine Projection (AP) algorithms is discussed.

An adaptive FIR filter using the Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS) and Affine Projection (AP) algorithms was developed in MATLAB to remove noise from a speech signal. Simulations were run for various convergence factors (µ) and the behaviour of the three adaptive algorithms was compared.

Keywords—Adaptive Filter, Least Mean Squares, Normalized Least Mean Squares, Affine Projection, Noise Cancellation, Convergence Speed.

    1. INTRODUCTION

Noise is a predominant factor in any communication system and degrades its performance considerably. It therefore becomes essential to remove the noise that corrupts the signals, and various techniques are used for this. Adaptive filtering is one technique that removes noise effectively: adaptive filters are filters whose characteristics can be changed to achieve optimal results, adjusting their parameters to minimize an error signal according to different adaptive algorithms.

As shown in Fig. 1, an adaptive filter block has two inputs, primary and reference. The primary input receives a signal s from the signal source that is corrupted by the presence of noise n, uncorrelated with the signal. The reference input receives a noise n0 that is uncorrelated with the signal s but correlated in some way with the noise n. The noise n0 passes through a filter to produce an output n̂ that is a close estimate of the primary-input noise. This noise estimate is subtracted from the corrupted signal to produce an estimate of the signal, ŝ, the filter's output.

      An adaptive filter is capable of adjusting its impulse response to minimize an error signal that is dependent on the filter output. The adjustment of the filter weights and hence the impulse response, is governed by an adaptive algorithm. The criteria used may be the minimization of the mean square error, the temporal average of the least squares error etc. Different algorithms are used for each of the minimization criteria e.g. Least Mean Squares (LMS) algorithm, Normalized Least Mean Square (NLMS) algorithm, Affine projection (AP) etc.
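As a concrete illustration of the structure in Fig. 1 (this sketch is not from the paper): if the reference noise n0 reached the primary input through a known FIR path, subtracting the filtered reference from the primary input would recover the signal exactly. The adaptive algorithms discussed below exist to find that filter when the path is unknown. The signal, noise path h, and lengths here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * np.arange(1000) / 50)   # stand-in for the clean signal
n0 = rng.standard_normal(1000)                 # reference noise
h = np.array([0.6, 0.3, 0.1])                  # hypothetical noise path (assumed known here)
n = np.convolve(n0, h)[:1000]                  # noise n reaching the primary input
primary = s + n                                # primary input s + n

n_hat = np.convolve(n0, h)[:1000]              # filter output: estimate of the primary noise
s_hat = primary - n_hat                        # output signal estimate, s_hat
```

With a perfectly matched filter, s_hat equals s; an adaptive algorithm must instead learn h from the error signal alone.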

    2. METHODOLOGY

In this paper, three adaptive filter algorithms with different convergence speeds and computational complexities are considered: the LMS algorithm, the NLMS algorithm and the Affine Projection algorithm. These algorithms are simulated using MATLAB, and the results obtained are discussed in Section 3.

[Fig. 1 residue: block diagram showing signal source s, noise source n, primary input s + n, reference input n0, adaptive filter, filter output n̂, error/output signal ŝ.]

Fig. 1. Adaptive filter block diagram.

1. Least Mean Square Algorithm (LMS)

The LMS algorithm adjusts the coefficients w(n) of a filter in order to reduce the mean square error between the desired signal and the output of the filter. The algorithm is also widely used because of its computational simplicity.

The computational procedure for the LMS algorithm is as follows:

1. Initially, set each weight wk(i), i = 0, 1, ..., N−1, to an arbitrary fixed value, such as zero. For each subsequent sampling instant, k = 1, 2, ..., carry out steps 2 to 4 below.

2. Compute the filter output:

   y(k) = Σ_{i=0}^{N−1} wk(i) x(k−i)        (1)

3. Compute the error estimate:

   e(k) = d(k) − y(k)        (2)

4. Update the next filter weights:

   wk+1(i) = wk(i) + µ e(k) x(k−i)        (3)

where x(k) is the vector of time-delayed input values, w(k) is the weight vector at time k, and the parameter µ, known as the step-size, is a positive value.

2. Normalized Least Mean Square (NLMS)

Normalized Least Mean Square (NLMS) is derived from the Least Mean Square (LMS) algorithm. The input signal power changes in time and, owing to this change, the effective step taken between two adjacent coefficient updates of the filter changes, affecting the convergence rate. To overcome this problem, in the NLMS algorithm the step-size parameter is adjusted with respect to the input signal power; the step-size parameter is said to be normalized.

The step-size for computing the update of the weight vector is

   µ(n) = α / (c + ||x(n)||²)        (4)

where µ(n) is the step-size at sample n, α is the normalized step-size (0 < α < 2) and c is a safety factor (a small positive constant).

Replacing µ by µ(n) in (3), the weight-vector update becomes

   wk+1(i) = wk(i) + [α / (c + ||x(k)||²)] e(k) x(k−i)        (5)

The implementation of the NLMS algorithm is illustrated in Fig. 2.

[Fig. 2 residue: flowchart — initialize wk(i) and x(k−i); read x(k) and d(k); filter x(k) using (1); compute the error using (2); compute the normalized step-size using (4); update the coefficients using (5); repeat for the next sample.]

Fig. 2. Flowchart of NLMS algorithm.

3. Affine Projection Algorithm (APA)

In APA the projections are made in multiple dimensions. APA adaptively changes the projection order according to the estimated variance of the filter output error. The error variance is estimated using exponential-window averaging with a variable forgetting factor and a simple moving-average technique. The input regressors are selected according to two different criteria to update the filter coefficients at each iteration. Each tap-weight update of NLMS can be viewed as a one-dimensional affine projection; APA generalizes this to projections of higher order.

The affine projection algorithm is defined as follows:

   e_n = d_n − X_nᵀ w_{n−1}        (6)

   ε_n = [X_nᵀ X_n + δI]⁻¹ e_n        (7)

   w_n = w_{n−1} + µ X_n ε_n        (8)

The excitation matrix X_n is L by N and has the structure X_n = [x_n, x_{n−1}, ..., x_{n−N+1}], where x_n = [x(n), x(n−1), ..., x(n−L+1)]ᵀ. The adaptive tap-weight vector is w_n = [w_{0,n}, ..., w_{L−1,n}]ᵀ, where w_{i,n} is the ith tap at sample period n.

The vector v_n has length N and consists of noise along with residual echo left uncancelled by the echo canceller; L is the length of the adaptive tap-weight vector. The N-length vector d_n is the system output, consisting of the response of the echo-path impulse response h to the excitation plus the additive system noise, i.e.

   d_n = X_nᵀ h + v_n        (9)

Here δ is a small regularization constant that keeps the matrix inversion in (7) well conditioned when eigenvalues of X_nᵀ X_n are close to 0. When N = 1, relations (6), (7) and (8) reduce to the familiar NLMS algorithm. Thus, APA is a generalization of NLMS.
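The LMS and NLMS recursions can be sketched in Python as follows. This is a minimal illustration of equations (1)–(5), not the authors' MATLAB code; the tap count, step-sizes and safety factor are arbitrary illustrative choices.

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    """LMS: y(k) = w·x(k); e(k) = d(k) - y(k); w <- w + mu*e(k)*x(k)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]     # delay line [x(k), ..., x(k-N+1)]
        e[k] = d[k] - w @ xk                   # equations (1) and (2)
        w += mu * e[k] * xk                    # equation (3)
    return e

def nlms(x, d, n_taps=8, alpha=0.5, c=1e-6):
    """NLMS: same recursion with step-size alpha / (c + ||x(k)||^2)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]
        e[k] = d[k] - w @ xk
        w += alpha / (c + xk @ xk) * e[k] * xk  # equations (4) and (5)
    return e
```

In the noise-cancellation setting of Fig. 1, x is the reference noise n0, d is the noisy speech s + n, and the error e is the cleaned signal estimate ŝ.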

3. EXPERIMENTAL RESULTS

In this section, we discuss the results obtained by simulating the LMS, NLMS and AP algorithms. The input speech signal considered here was first recorded on a mobile phone and converted to .wav format; the resulting speech signal is shown in Fig. 3. Periodic random noise was added to this speech signal, which was then filtered using the LMS, NLMS and AP algorithms. The simulations were performed for step-sizes of 0.1 and 0.01. The error between the input to the adaptive filter (speech with noise) and the adaptive filter output was obtained and plotted for each algorithm. To find the convergence speed of each algorithm, the mean square error is plotted against the number of iterations; this plot shows how many iterations it takes for the mean square error to become negligible. The results for the LMS algorithm are shown in Fig. 4 and Fig. 5, for the NLMS algorithm in Fig. 6 and Fig. 7, and for the AP algorithm in Fig. 8 and Fig. 9. The outputs of all the algorithms are compared in Fig. 10 and Fig. 11.

Fig. 3. Speech signal. [Waveform residue: signal value from −1 to 1 against time index 0 to 6×10⁴.]

Fig. 4. Results of LMS algorithm for µ = 0.01. [MSE v/s iteration residue: mean square error falls from about 0.025 towards 0 over 0–300 samples.]

Fig. 5. Results of LMS algorithm for µ = 0.1. [MSE v/s iteration residue over 0–3000 samples.]

Fig. 6. Results of NLMS algorithm for µ = 0.01. [MSE v/s iteration residue over 0–800 samples.]

Fig. 7. Results of NLMS algorithm for µ = 0.1. [MSE v/s iteration residue over 0–35 samples.]

Fig. 8. Results of APA for µ = 0.01. [MSE v/s iteration residue over 0–250 samples.]

Fig. 9. Results of APA for µ = 0.1. [MSE v/s iteration residue over 0–25 samples.]

    Fig. 10. Output of LMS, NLMS and AP algorithms respectively

Fig. 11. Mean square error v/s number of iterations plots for LMS, NLMS and AP algorithms respectively. [Residue: three MSE v/s iteration panels over roughly 0–300, 0–800 and 0–250 samples.]

4. CONCLUSION

Using MATLAB, the Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS) and Affine Projection (AP) algorithms were simulated, and their performance was compared for various convergence factors and noise environments. It is observed that increasing the filter order improves accuracy, and increasing the step-size increases the convergence rate.

Selection of a suitable value of µ is imperative to the performance of all three algorithms. If µ is chosen to be large, the amount by which the weights change depends heavily on the gradient estimate: the weights oscillate with large variance and the filter becomes unstable. On the other hand, if µ is chosen to be very small, the time taken to converge to the optimal weights becomes very large.

If R = E{x(k)xᵀ(k)} is the autocorrelation matrix of the input, with a set of eigenvalues, the convergence speed is governed by the decay factor

   r = 1 − 2λmin / (λmax + λmin)        (10)

where λmax and λmin are the largest and smallest eigenvalues of the autocorrelation matrix. The smaller r, the faster the convergence, so faster convergence is achieved when λmax is close to λmin; that is, the maximum achievable convergence speed depends on the eigenvalue spread of R.

The LMS algorithm has slow convergence but is simple to implement, gives good results if the step-size is chosen correctly, and is suitable for a stationary environment. It also requires little memory and computation.

The NLMS algorithm is suitable for both stationary and non-stationary environments. Its noise cancellation performance was observed to be better than that of the LMS algorithm.

The Affine Projection algorithm, owing to its multi-dimensional projections, converges at a very fast rate compared to the other two algorithms.

REFERENCES

1. Jyothi Dhiman, Shadab Ahmad, Kuldeep Gulia, "Comparison between Adaptive filter Algorithms", International Journal of Science, Engineering and Technology Research (IJSETR), Vol. 2, Issue 5, May 2013.

2. Komal R. Borisagar and G. R. Kulkarni, "Simulation and Comparative Analysis of LMS and RLS Algorithms Using Real Time Speech Input Signal", Global Journal of Researches in Engineering, Vol. 10, Issue 5 (Ver 1.0), October 2010.

3. Sayed A. Hadei and M. Lotfizad, "A Family of Adaptive Filter Algorithms in Noise Cancellation for Speech Enhancement", International Journal of Computer and Electrical Engineering, Vol. 2, No. 2, April 2010.

4. Syed Zahurul Islam, Syed Zahidul Islam, Razali Jidin, Mohd. Alauddin Mohd. Ali, "Performance Study of Adaptive Filtering Algorithms for Noise Cancellation of ECG Signal", IEEE, 2009.

5. Hung Ngoc Nguyen, Majid Dowlatnia, Azhar Sarfraz, "Implementation of the LMS and NLMS algorithms for Acoustic Echo Cancellation in teleconference system using MATLAB", ISRN, December 2009.

6. V. R. Vijaykumar, P. T. Vanathi, "Modified Adaptive Filtering Algorithm for Noise Cancellation in Speech Signals", December 2006.

7. Ville Myllylä, "Robust fast affine projection algorithm for acoustic echo cancellation", Darmstadt University of Technology, Institute of Communication Technology, 2001.

8. S. Sankaran and A. Beex, "Convergence behavior of affine projection algorithms", IEEE Transactions on Signal Processing, Vol. 48, No. 4, 2000.

9. Masashi Tanaka, Shoji Makino, and Junji Kojima, "A Block Exact Fast Affine Projection Algorithm", IEEE Transactions on Speech and Audio Processing, Vol. 7, No. 1, January 1999.

10. S. V. Narasimhan, S. Veena, "Signal Processing", Narosa Publishing House, 2008.
