Efficient Implementation of Acoustic Echo Cancellation by Adaptive Neuro Fuzzy Inference System

DOI : 10.17577/IJERTCONV2IS05039


A.Malathi1

Post Graduate Student, Dept. of ECE, Parisutham Institute of Technology and Science

Affiliated to Anna University, Chennai, Tamilnadu. Email id: malathi.pits@gmail.com1

K.Karthikeyan2

Assistant Professor, Dept. of ECE, Parisutham Institute of Technology and Science

Affiliated to Anna University, Chennai, Tamilnadu. Email id: mailtokarthik.physics@gmail.com2

Abstract: Removal of echo from a respiratory signal is a classical problem. In recent years, adaptive filtering has become one of the most effective and popular approaches for the processing and analysis of respiratory signals. Adaptive filters make it possible to detect time-varying potentials and to track the dynamic variations of the signals. Moreover, they modify their behavior according to the input signal, so they can detect shape variations in the ensemble and thus obtain a better signal estimate. In this work, a respiratory signal is generated synthetically and an echo is mixed with it. The echo is then removed from the respiratory signal using adaptive filter algorithms (LMS and RLS) and an Adaptive Neuro Fuzzy Inference System (ANFIS). The performance of the proposed techniques is evaluated in terms of echo return loss enhancement (ERLE), signal to noise ratio (SNR), mean square error (MSE) and convergence rate. These properties depend on a few parameters: the step size (for the LMS), the forgetting factor (for the RLS) and the filter length (for both the LMS and the RLS). For both algorithms, increasing the filter length increases the MSE and the time taken to converge. A comparison is made between various types of LMS and RLS algorithms based on their performance, and the best adaptive filter algorithm is then compared with the performance of ANFIS.

Keywords: AEC, LMS, NLMS, VSSLMS, RLS, ANFIS

      1. INTRODUCTION

An echo is a reflection of sound, arriving at the listener some time after the direct sound. Typical examples are the echo produced by the bottom of a well, by a building, or by the walls of an enclosed or empty room. A true echo is a single reflection of the sound source. The time delay is the extra distance travelled divided by the speed of sound. There are two types of echo: 1. acoustic echo and 2. hybrid echo. An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. By contrast, a non-adaptive filter has a static transfer function. Adaptive filters are needed for some applications because certain parameters of the desired processing operation (for instance, the locations of reflective surfaces in a reverberant space) are not known in advance.

Fig 1. Origin of echo

The adaptive filter uses feedback in the form of an error signal to refine its transfer function to match the changing parameters. As shown in the figure, the received signal is output through the telephone loudspeaker (audio source); this audio signal then reverberates in the real environment and is picked up by the system microphone (audio sink), arriving as an attenuated and delayed version of the originally intended signal. The step size increases or decreases as the mean square error increases or decreases, allowing the adaptive filter to track changes in the system while producing a small steady state error. The recursive least squares (RLS) adaptive filter is an algorithm that recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signal. This is in contrast to other algorithms, such as least mean squares (LMS), which aim to reduce the mean square error. In the derivation of the RLS the input signals are considered deterministic, whereas for the LMS and similar algorithms they are considered stochastic. ANFIS is a kind of neural network based on the Takagi-Sugeno fuzzy inference system. Since it integrates both neural network and fuzzy logic principles, it has the potential to capture the advantages of both in a single framework. Its inference system corresponds to a set of fuzzy IF-THEN rules that have the learning capability to approximate nonlinear functions.

      2. TIME DOMAIN ADAPTIVE FILTER

Adaptive filters are typically used when the echo occupies the same band as the signal, or when the echo band is unknown or varies over time. The basic form of a time domain adaptive filtering application for echo cancellation is shown in Fig. 2. Different algorithms can be used to adapt the weights w of the filter, with the aim of minimizing the mean square error (MSE) performance function.

        Fig 2: Block Diagram of Adaptive Echo Cancellation

        1. Least Mean Square Algorithm (LMS)

The main function of the LMS algorithm is to minimize the mean square error (MSE) between the echo and the filter output. The complete LMS algorithm is written as three equations:

1. Filter output: y(n) = w^T(n) x(n) (1)

2. Error estimation: e(n) = d(n) - y(n) (2)

3. Tap weight adaptation: w(n+1) = w(n) + µ e(n) x(n) (3)

The second equation defines the estimation error e(n), computed from the current estimate of the tap weight vector w used in the first equation. The last term in the third equation is the correction applied to the previous estimate of the tap weight vector (corresponding to step 3 of the method of steepest descent). The learning curve of the LMS algorithm is not as smooth as that of the steepest descent algorithm, as it contains gradient noise due to the statistical nature of the updates.

            Some notable aspects of the performance of the Adaptive Filter are:

• LMS tends to reject noisy data due to the smoothing action of the small step size parameter.

• LMS can track slowly varying systems, and is often useful in non-stationary environments.

            • The LMS error function has a unique global minimum, and hence the algorithm does not tend to get stuck at undesirable local minima.

            • LMS is computationally simple (m multiplications and m additions per iteration)and memory efficient. (Only one m-vector must be stored).


              Fig 3: block diagram for LMS algorithm

            • The convergence of LMS is often slow (it may take hundreds or thousands of iterations to converge from an arbitrary initialization).
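As an illustration of equations (1)-(3), a minimal LMS update loop might look like the following Python/NumPy sketch (the paper's own simulations were done in MATLAB; the step size and filter length shown simply reuse the values of 0.06 and 16 quoted later, and the tap-delay-line indexing is an implementation assumption).

```python
import numpy as np

def lms(x, d, mu=0.06, order=16):
    """Minimal LMS echo canceller: x is the reference (far-end) signal,
    d is the microphone signal (echo plus desired), mu is the step size."""
    w = np.zeros(order)                       # tap weight vector w(n)
    e = np.zeros(len(x))                      # error signal e(n)
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # tap inputs [x(n), ..., x(n-order+1)]
        y = np.dot(w, u)                      # (1) filter output y(n) = w^T x(n)
        e[n] = d[n] - y                       # (2) error e(n) = d(n) - y(n)
        w = w + mu * e[n] * u                 # (3) tap weight adaptation
    return e, w
```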

        2. Normalized LMS (NLMS) Algorithm

From the weight update equation (3) it is clear that the adjustment is directly proportional to the tap input vector u(n). Therefore, when u(n) is large, the LMS filter suffers from a gradient noise amplification problem. To overcome this difficulty, we may use the normalized LMS filter. In particular, the adjustment applied to the tap weight vector at iteration (n+1) is normalized with respect to the squared Euclidean norm of the tap input vector u(n) at iteration n. The weight vector update equation for each iteration is then given as

w(n+1) = w(n) + (µ / ||u(n)||^2) u(n) e(n) (4)

          With the proper choice of µ, the NLMS adaptive filter can often converge faster than the LMS adaptive filter.
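A corresponding sketch of the normalized update of equation (4) is given below; the small constant eps added to the squared norm is a common practical safeguard against division by zero and is an assumption on our part, not part of equation (4).

```python
import numpy as np

def nlms(x, d, mu=0.5, order=16, eps=1e-8):
    """Minimal NLMS canceller: the step size is normalized by ||u(n)||^2."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]                 # tap input vector u(n)
        e[n] = d[n] - np.dot(w, u)                       # error estimate
        w = w + (mu / (np.dot(u, u) + eps)) * e[n] * u   # (4) normalized update
    return e, w
```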

3. Variable Step-Size LMS Algorithm

Based on the squared-error power, Kwong and Johnston proposed a simple Variable Step Size least mean square algorithm (VSS LMS) [13]. The error power reflects the convergence state of the adaptive filter: a converging system has a higher error power, while a converged system has a smaller error power. Therefore, the scalar step size increases or decreases as the squared error increases or decreases, allowing the adaptive filter to track changes in the system and produce a smaller steady state error. The step size of the VSS algorithm is adjusted as follows:

µ'(n+1) = α µ(n) + γ e^2(n) (5)

The variable step size algorithm, as it appears in [13], is of the form

w(n+1) = w(n) + µ(n) x(n) e(n) (6)

where e(n) = d(n) - x^T(n) w(n). The step size is updated as

µ'(n+1) = α µ(n) + γ e^2(n), with 0 < α < 1, γ > 0 (7)

and

µ(n+1) = µmax if µ'(n+1) > µmax (8)

µ(n+1) = µmin if µ'(n+1) < µmin (9)

µ(n+1) = µ'(n+1) otherwise, where 0 < µmin < µmax.

To ensure stability, the variable step size µ(n) is constrained to the predetermined maximum and minimum step size values of the LMS algorithm, while α and γ are the parameters controlling the recursion. Simply put, the step size of the VSS algorithm changes by tracking the squared error, or error power. A large error increases the step size to provide faster tracking, while a small error reduces the step size to give a smaller steady state error. Although this approach can improve the step size trade-off, its drawback is that the maximum and minimum step sizes must be known a priori. This is essential in order to achieve the fastest convergence rate while not causing instability. A different technique, usually known as the gradient adaptive step size, is as follows

µ(n) = µ(n-1) + ρ e(n) e(n-1) x^T(n-1) x(n) (10)

where ρ is a small positive constant which controls the recursion. The recursion of µ(n) in equation (10) is such that the gradient term e(n) e(n-1) x^T(n-1) x(n) is large during the converging period and becomes close to zero after convergence. The variable step size algorithms (except for the gradient adaptive step size) are based on heuristic rules for the step size adjustment which are translated into numerical formulae.
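The VSS update of equations (5)-(9) might be sketched as follows; the numerical values chosen for alpha, gamma, mu_min and mu_max are illustrative assumptions only, not values taken from the paper.

```python
import numpy as np

def vss_lms(x, d, order=16, mu0=0.01, alpha=0.97, gamma=4.8e-4,
            mu_min=1e-4, mu_max=0.1):
    """Variable step-size LMS: the step size grows with the squared error."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    mu = mu0
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]
        e[n] = d[n] - np.dot(w, u)               # (6) error e(n)
        w = w + mu * e[n] * u                    # (6) weight update
        mu_new = alpha * mu + gamma * e[n] ** 2  # (7) tentative step size
        mu = min(max(mu_new, mu_min), mu_max)    # (8)-(9) clip to [mu_min, mu_max]
    return e, w
```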

        4. Recursive Least Squares (RLS)

The Recursive Least Squares (RLS) adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. In the derivation of the RLS, the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence. The RLS algorithm can be summarized as follows.

Parameters: p = filter order

λ = exponential weighting (forgetting) factor

δ = value used to initialize P(0)

Initialization: w(0) = 0

P(0) = δ^-1 I

Computation: for n = 1, 2, ..., compute

z(n) = P(n-1) x*(n)

g(n) = z(n) / (λ + x^T(n) z(n))

α(n) = d(n) - w^T(n-1) x(n)

w(n) = w(n-1) + α(n) g(n)

P(n) = [P(n-1) - g(n) z^T(n)] / λ
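The recursion above might be implemented as in the following sketch, where lam is the forgetting factor λ and delta initializes P(0); both values are illustrative assumptions, and real-valued signals are assumed so that x*(n) = x(n).

```python
import numpy as np

def rls(x, d, order=16, lam=0.99, delta=0.01):
    """Minimal RLS canceller following the recursion summarized above."""
    w = np.zeros(order)
    P = np.eye(order) / delta                 # P(0) = delta^-1 * I
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # input vector x(n)
        z = P @ u                             # z(n) = P(n-1) x(n)
        g = z / (lam + u @ z)                 # gain vector g(n)
        e[n] = d[n] - w @ u                   # a priori error alpha(n)
        w = w + e[n] * g                      # weight update
        P = (P - np.outer(g, z)) / lam        # inverse-correlation update P(n)
    return e, w
```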

      3. ADAPTIVE NEURO FUZZY INFERENCE SYSTEM (ANFIS)

ANFIS are fuzzy Sugeno models put in the framework of adaptive systems to facilitate learning and adaptation. Such a framework makes fuzzy logic more systematic and less reliant on expert knowledge. There are many benefits to using ANFIS in pattern learning and detection as compared to linear systems and neural networks. These benefits are centered on the fact that ANFIS combines the capabilities of both neural networks and fuzzy systems in learning nonlinearities. Fuzzy techniques incorporate the information sources into a fuzzy rule base that represents the knowledge of the network structure, so that structure learning techniques can easily be accomplished. Moreover, ANFIS architecture requirements and initializations are fewer and simpler compared to neural networks, which require extensive trial and error to optimize their architecture and initialization. To present the ANFIS architecture (Fig. 4), let us consider two fuzzy rules based on a first-order Sugeno model,

Fig 4. ANFIS Architecture

Rule 1: if (x is A1) and (y is B1), then (f1 = p1x + q1y + r1)

        Rule 2: if (x is A2) and (y is B2), then (f2=p2x+q2y+r2)

One possible ANFIS architecture implementing these two rules is shown in Fig. 4. Note that a circle indicates a fixed node, whereas a square indicates an adaptive node (whose parameters are changed during training).

        Layer 1: Calculate Membership Value for Premise Parameter

All the nodes in this layer are adaptive nodes; the output O1,i is the degree of membership of the input to the fuzzy membership function (MF) represented by the node:

O1,i = µAi(x1) for node i = 1, 2

O1,i = µBi-2(x2) for node i = 3, 4

        Layer 2: Firing Strength of Rule

The nodes in this layer are fixed (not adaptive). They are labeled Π to indicate that they play the role of a simple multiplier. The outputs of these nodes are given by

O2,i = wi = µAi(x1) µBi(x2), i = 1, 2

The output of each node in this layer represents the firing strength of the rule.

        Layer 3: Normalize Firing Strength

Nodes in this layer are also fixed nodes. They are labeled N to indicate that they perform a normalization of the firing strengths from the previous layer. The output of each node in this layer is given by

O3,i = w̄i = wi / (w1 + w2), i = 1, 2

        Layer 4: Consequent Parameters

All the nodes in this layer are adaptive nodes. The output of each node is simply the product of the normalized firing strength and a first-order polynomial:

O4,i = w̄i fi = w̄i (pi x1 + qi x2 + ri)

where pi, qi and ri are design parameters (consequent parameters, since they deal with the then-part of the fuzzy rule).

Layer 5: Overall Output

This layer has only one node, labeled Σ to indicate that it performs the function of a simple summer. The output of this single node is given by

O5,1 = Σi w̄i fi

The ANFIS architecture is not unique; some layers can be combined and still produce the same output. In this ANFIS architecture there are two adaptive layers (1 and 4). Layer 1 has three modifiable parameters (ai, bi and ci) pertaining to the input MFs; these are called premise parameters. Layer 4 also has three modifiable parameters (pi, qi and ri) pertaining to the first-order polynomial; these are called consequent parameters.
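To make the five layers concrete, a forward pass through the two-rule first-order Sugeno model described above might be sketched as follows; the Gaussian membership function and all numerical parameter values are illustrative assumptions, and the training of the premise and consequent parameters is not shown.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x1, x2, premise, consequent):
    """Forward pass through a two-rule first-order Sugeno ANFIS.
    premise: [(cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2)]
    consequent: [(p1, q1, r1), (p2, q2, r2)]"""
    (cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2) = premise
    # Layer 1: membership degrees of the inputs
    muA = [gauss_mf(x1, cA1, sA1), gauss_mf(x1, cA2, sA2)]
    muB = [gauss_mf(x2, cB1, sB1), gauss_mf(x2, cB2, sB2)]
    # Layer 2: firing strengths w_i = muA_i(x1) * muB_i(x2)
    w = [muA[0] * muB[0], muA[1] * muB[1]]
    # Layer 3: normalized firing strengths
    wn = [wi / (w[0] + w[1]) for wi in w]
    # Layer 4: first-order rule outputs f_i = p_i*x1 + q_i*x2 + r_i
    f = [p * x1 + q * x2 + r for (p, q, r) in consequent]
    # Layer 5: overall output (weighted sum)
    return wn[0] * f[0] + wn[1] * f[1]

# Hypothetical usage with arbitrary parameter values
out = anfis_forward(0.3, -0.1,
                    premise=[(0.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 1.0)],
                    consequent=[(1.0, 0.5, 0.1), (-0.2, 0.8, 0.0)])
```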

        1. Echo Cancellation

The method used in this paper is adaptive echo cancellation (AEC) based on a neuro-fuzzy technique. AEC is a process by which the interference signal is filtered out by identifying a nonlinear model between a measurable echo source and the corresponding immeasurable interference. This is an extremely useful technique when a signal is submerged in a very noisy environment. Usually, the echo is not steady; it changes from time to time. So the echo cancellation must be an adaptive process: it should be able to work under changing conditions, and be able to adjust itself according to the changing environment. The basic idea of an adaptive echo cancellation algorithm is to pass the corrupted signal through a filter that tends to suppress the echo while leaving the signal unchanged. As mentioned above, this is an adaptive process, which means it does not require prior knowledge of the signal or echo characteristics. Figure 5 shows echo cancellation with ANFIS filtering.

Here x(k) represents the respiratory signal which is to be extracted from the noisy signal, and n(k) is the echo source signal. The echo signal goes through unknown nonlinear dynamics f(.) and generates a distorted echo d(k), which is then added to x(k) to form the measurable output signal y(k). The aim is to retrieve x(k) from the measured signal y(k), which consists of the required signal x(k) plus d(k), a distorted and delayed version of n(k), i.e. the interference signal. The function f(.) represents the passage dynamics that the echo signal n(k) goes through. If f(.) were known exactly, it would be easy to recover x(k) by subtracting d(k) from y(k) directly. However, f(.) is usually unknown in advance and could be time-varying due to changes in the environment. Moreover, the spectrum of d(k) may overlap substantially with that of x(k), invalidating the use of common frequency domain filtering techniques. To estimate the interference signal d(k), we need to pick up a clean version of the echo signal n(k) that is independent of the required signal. However, we cannot access d(k) directly, since it is an additive component of the overall measurable signal y(k). In Fig. 5, ANFIS is used to estimate the unknown interference d^(k). When d^(k) and d(k) are close to each other, the two cancel and we obtain the estimated output signal x^(k), which is close to the required signal. Thus, by this method, the echo is removed and the required signal is obtained.

Fig. 5. AEC implementation
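The signal model just described might be sketched as follows. The synthetic respiratory signal, the sampling rate and the nonlinear passage dynamics f(.) are all illustrative assumptions, and a short NLMS loop is used purely as a stand-in for the ANFIS estimator, since the signal flow (estimate d^(k) from n(k), then subtract it from y(k)) is the same.

```python
import numpy as np

fs = 100                                      # assumed sampling rate [Hz]
t = np.arange(0, 30 * fs) / fs                # 30 s of samples
x = np.sin(2 * np.pi * 0.25 * t)              # x(k): synthetic respiratory signal (~15 breaths/min)
src = np.random.randn(len(t))                 # n(k): measurable echo source signal
d = 0.5 * np.tanh(np.convolve(src, np.ones(8) / 8, mode="same"))  # d(k) = f(n(k)), assumed nonlinearity
y = x + d                                     # y(k): measurable corrupted signal

# Adaptive cancellation: estimate d(k) from n(k) and subtract it from y(k).
# A short NLMS loop stands in for the ANFIS estimator here.
order, mu, eps = 16, 0.5, 1e-8
w = np.zeros(order)
x_hat = np.zeros(len(t))                      # recovered estimate x^(k)
for k in range(order, len(t)):
    u = src[k - order + 1:k + 1][::-1]        # reference vector taken from the echo source
    d_hat = np.dot(w, u)                      # estimated interference d^(k)
    x_hat[k] = y[k] - d_hat                   # e(k) = y(k) - d^(k), close to x(k)
    w = w + (mu / (np.dot(u, u) + eps)) * x_hat[k] * u
```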

      4. SIMULATIONS AND EXPERIMENTAL RESULTS

This section presents the results of simulations in MATLAB that investigate the performance of the various adaptive filter algorithms and ANFIS in a non-stationary environment, with a step size of 0.06 and a filter order of 16. The principal means of comparison is the error cancellation capability of the algorithms, which depends on parameters such as the step size, the filter length and the number of iterations. A random noise is added to the respiratory signal. It is then removed using ANFIS and the adaptive filter algorithms LMS, NLMS, VSSLMS and RLS. All simulations presented are averages over 1975 independent runs.
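For reference, the evaluation metrics named above can be computed as in the following sketch; the exact definitions, windowing and averaging used in the paper are not stated, so these common formulations are an assumption.

```python
import numpy as np

def mse(x_true, x_est):
    """Mean square error between the clean signal and its estimate."""
    return np.mean((x_true - x_est) ** 2)

def snr_db(x_true, x_est):
    """Output SNR in dB: signal power over residual-error power."""
    return 10 * np.log10(np.sum(x_true ** 2) / np.sum((x_true - x_est) ** 2))

def erle_db(d_echo, e_residual):
    """Echo return loss enhancement in dB: echo power over residual echo power."""
    return 10 * np.log10(np.mean(d_echo ** 2) / np.mean(e_residual ** 2))
```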

Fig 6. Desired signal (magnitude response in dB versus normalized frequency, ×π rad/sample)

Figure 6 shows the respiratory signal, which is generated synthetically. This is the desired signal for the adaptive filter.


Fig 7. Output signal (amplitude versus number of samples)


Figure 7 shows the echo-cancelled signal for the synthetically generated respiratory signal; this is the output signal of the ANFIS. Figure 8 shows the echo return loss enhancement for the same signal, obtained with the ANFIS.

Fig 8. Echo return loss enhancement (ERLE [dB] versus time [sec])

Fig 9. Comparison of the LMS, RLS and ANFIS algorithms

      5. CONCLUSION

This study has revealed useful properties of the LMS and RLS algorithms and of ANFIS in the case of adaptive echo cancellation. It has been found that the RLS algorithm generally performs better irrespective of the nature of the signal and the echo. The RLS is particularly useful in the case of signals where abrupt changes of amplitude or frequency may occur, such as echo. But this better performance comes at a price: the RLS takes more time to compute, especially when the filter length is large. A change in filter length, however, does not have much effect on the convergence behavior of the RLS, whereas for the LMS the increase is quite substantial. It can be stated that the RLS algorithm should be preferred over the LMS for adaptive noise cancellation unless the computation time is a matter of great concern. In this paper the normalized LMS is also compared with ANFIS. Quantitative analysis reveals that ANFIS outperforms the normalized LMS. The results obtained indicate that ANFIS is a useful AI (Artificial Intelligence) technique for cancelling the nonlinear interference from the respiratory signal.

A. Tables

Table 1 provides the comparison of the mean square error and SNR (input and output) of the LMS algorithms. In this table the filter order is kept constant in order to find the step size value for which the LMS algorithm gives the best result. For this, the value of MSE should be minimum and the value of output SNR should be maximum. It is observed that for µ = 0.008 the LMS algorithm gives the best result; the MSE and output SNR for that step size are 0.00020 and 11.8237 respectively. But there is always a trade-off between MSE and SNR, hence the choice of algorithm depends on the parameter with which the system is more concerned.

Table 1. Comparison of various LMS algorithms

Table 2. Comparison of adaptive filtering algorithms with ANFIS

Table 2 compares the MSE of the adaptive filter algorithms with that of ANFIS. It shows that the MSE of the estimated respiratory signal and the convergence time are lower when the ANFIS technique is used. The SNR is also better for the same technique.

REFERENCES

1. A. Bhavani Sankar, D. Kumar and K. Seethalakshmi, "Performance Study of Various Adaptive Filter Algorithms for Noise Cancellation in Respiratory Signals", Signal Processing: An International Journal, 2010.

2. Syed Zahurul Islam, Syed Zahidul Islam, Razali Jidin, Mohd. Alauddin Mohd Ali, "Performance Study of Adaptive Filtering Algorithms for Noise Cancellation of ECG Signal", IEEE, 2009.

3. C. Kezi Selva Vijila, C. Ebbie Selva Kumar, "Interference Cancellation in EMG Signal Using ANFIS", ACEEE, Academy Publisher, 2009.

4. Mohammad Zia Ur Rahman, Rafi Ahamed Shaik, D. V. Rama Koti Reddy, "An Efficient Noise Cancellation Technique to Remove Noise from the ECG Signal Using Normalized Signed Regressor LMS Algorithm", IEEE, 2009.

5. Vinod K. Pandey and Prem C. Pandey, "Cancellation of Respiratory Artifact in Impedance Cardiography", IEEE BME Group, 2005.

6. Floris Ernst, Alexander Schlaefer, Sonja Dieterich, Achim Schweikard, "A Fast Lane Approach to LMS Prediction of Respiratory Motion Signals", Biomedical Signal Processing and Control, 2008.

7. Ondracka J., Oravec R., Kadlec J., Cocherová E., "Simulation of RLS and LMS Algorithms for Adaptive Noise Cancellation in MATLAB", FEI STU Bratislava, Slovak Republic; UTIA CAS Praha, Czech Republic.

8. Gaurav Saxena, Subramaniam Ganesan and Manohar Das, "Real Time Implementation of Adaptive Noise Cancellation", IEEE, 2008.

9. S. Haykin, Adaptive Filter Theory, 2nd edition, Prentice Hall, Englewood Cliffs, New Jersey, 1991.

10. A. H. Sayed, Fundamentals of Adaptive Filtering, John Wiley and Sons, 2003.

11. Monson H. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley and Sons, 2002.

12. Emmanuel C. Ifeachor, Barrie W. Jervis, Digital Signal Processing: A Practical Approach, 2nd edition.

13. C. Kezi Selva Vijila, C. Ebbie Selva Kumar, "Interference Cancellation in EMG Signal Using ANFIS", ACEEE, Academy Publisher, 2009.
