Open Access
Total Downloads : 812
Authors : S. Radhika, Sivabalan Arumugam
Paper ID : IJERTV1IS9361
Volume & Issue : Volume 01, Issue 09 (November 2012)
Published (First Online): 29-11-2012
ISSN (Online) : 2278-0181
Publisher Name : IJERT
License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Survey on the Different Adaptive Algorithms Used In Adaptive Filters
S. Radhika1, Sivabalan Arumugam2
1Research Scholar, Sathyabama University, Chennai-600119
2Manager Standards, NEC Mobile Networks Excellence Centre, Chennai-600096
Abstract: Adaptive filters find application in various fields, including echo cancellation, noise cancellation, and system identification. Many algorithms are available, and the choice of a particular algorithm depends on the requirements imposed by the particular environment. Among the various algorithms, LMS is the most commonly used because of its simplicity. In this paper, the variants of the LMS algorithm are compared with respect to their convergence behaviour, tracking capability, robustness, computational complexity, and steady-state error.
Keywords: Adaptive filter, LMS.

Introduction
Adaptive filters are one of the important tools in digital signal processing. They are mainly used when the statistical characteristics of the signal are nonstationary. In these filters, the coefficients are updated according to the prevailing conditions so as to minimize the error. Hence the key component of an adaptive filter is the adaptive algorithm: the set of rules that defines how the update is made. The important requirements of an adaptive algorithm are that it should adapt to changing statistics and track the solution as time changes. Based on the adaptive algorithm, adaptive filters fall into two broad classes: the sample-by-sample approach and the block approach [1]. Sample-by-sample algorithms are further classified into time-domain and frequency-domain algorithms [1], [2], [3]. The sample-by-sample time-domain approach is further divided into least mean square and recursive least squares algorithms, while the sample-by-sample frequency-domain approach comprises methods based on the sliding discrete Fourier transform, frequency sampling, and sub-band techniques.
Similarly, the block adaptive algorithms are classified into time-domain and frequency-domain algorithms. Because they update the filter coefficients only once per block, their tracking is poor.
The choice of a particular algorithm depends mainly on the application. Applications of adaptive systems include adaptive equalizers [1], used to remove intersymbol interference in communication systems, and acoustic echo cancellation [2] and noise cancellation [3] systems, where the adaptive filter generates an estimate of the echo or noise that is subtracted from the corrupted signal to recover the original signal. Identification of an unknown system transfer function is another important application of adaptive filters [2].
In this paper we discuss only the sample-by-sample time-domain LMS algorithm and its variants with respect to various performance criteria. The LMS algorithm is one of the most popular algorithms used in adaptive filters. Its advantages lie in its simplicity and in the fact that no ensemble averages need to be known in advance. The basic structure [2] of an LMS linear adaptive scheme is shown in Figure 1.
Fig. 1: Block diagram of the adaptive scheme
It consists of a filter whose input is x(n) = [x(n), x(n-1), x(n-2), ..., x(n-k)], where k is the number of delay elements, and whose tap-weight vector is w(n) = [w0(n), w1(n), ..., wk(n)]. These weights are estimated by the adaptive algorithm, and their expected value approaches the Wiener-Hopf solution as n tends to infinity.
The desired signal d(n) and its estimate y(n) produced by the filter are used to generate the error signal e(n). This error signal, together with the input signal, determines the update to the filter's weight vector.

Mathematical description of the LMS and its variants
All LMS-type algorithms have a weight update equation of the form

w(n+1) = w(n) + μ e(n) x*(n)

where x(n) is the input vector, w(n+1) and w(n) are the weight vectors at times n+1 and n, and μ is the step size, chosen in the range

0 < μ < 2/λmax

where λmax is the largest eigenvalue of the autocorrelation matrix of the input signal. The error signal is

e(n) = d(n) - y(n), where y(n) = Σk wn(k) x*(n-k).

2.1 Least Mean Square algorithm:
The LMS [2] is the most popular algorithm due to its simplicity. Note that the update is obtained by multiplying the step size with the current values of the error and input signals and does not depend on any previous values. The main drawback of the LMS algorithm is that its convergence is slow, because the step-size restriction depends on the eigenvalues of the autocorrelation matrix of the input signals. Several methods have been devised to speed up the convergence of the LMS algorithm at the cost of increased complexity.

Normalized least mean square algorithm:
The normalized least mean square (NLMS) algorithm is the normalized form of the LMS algorithm. It removes the difficulty the LMS faces in selecting the step size by normalizing the step size with the input power. The main advantage of NLMS is that gradient-noise amplification is greatly reduced by the norm term in the denominator of the weight update equation, and its convergence is faster than that of the LMS. Its main disadvantage is a computational complexity higher than that of the LMS. The modified NLMS update is

w(n+1) = w(n) + μ e(n) x(n) / (||x(n)||^2 + p)

where p is a small positive constant used when the norm of x(n) becomes very small.

Proportionate normalized least mean square algorithm
The need for faster convergence led to the development of the PNLMS algorithm. Its main idea is that the gain at each tap is adapted in proportion to the magnitude of the estimated impulse response, so the adaptive gain varies from one tap position to another. The total adaptive gain is carefully monitored and controlled so as to keep the misadjustment constant throughout. The update equations of the PNLMS algorithm are given in [4]; in the standard form, a diagonal gain matrix G(n), whose entries are proportional to the magnitudes of the current coefficient estimates, scales the NLMS update:

w(n+1) = w(n) + μ G(n) e(n) x(n) / (xT(n) G(n) x(n) + p)

The main limitation of this algorithm is that the small coefficients receive less gain, so the time needed to reach the steady-state error increases. The initial convergence is fast, but when the impulse response is dispersive the overall convergence is slower than NLMS.
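To make the update rules above concrete, the following is a minimal NumPy sketch of the LMS and NLMS weight updates applied to a toy system-identification problem (the filter length, step size, and signals are illustrative choices, not taken from the paper):

```python
import numpy as np

def lms_update(w, x, d, mu):
    """One LMS iteration: e(n) = d(n) - w^T x(n), then w <- w + mu*e(n)*x(n)."""
    e = d - np.dot(w, x)
    return w + mu * e * x, e

def nlms_update(w, x, d, mu, p=1e-8):
    """One NLMS iteration: the step size is normalized by ||x(n)||^2 + p."""
    e = d - np.dot(w, x)
    return w + (mu / (np.dot(x, x) + p)) * e * x, e

# Toy system identification: recover an "unknown" 4-tap FIR response h.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x_sig = rng.standard_normal(2000)
w = np.zeros(4)
for n in range(4, len(x_sig)):
    x = x_sig[n:n-4:-1]          # the 4 most recent samples, newest first
    d = np.dot(h, x)             # desired output of the unknown system
    w, e = nlms_update(w, x, d, mu=0.5)

print(np.round(w, 3))            # the weights converge towards h
```

Replacing the `nlms_update` call with `lms_update` (and a suitably small fixed `mu`) shows the slower convergence discussed above.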

Affine Projection Algorithm
The Affine Projection Algorithm (APA) [2] is a variant of the NLMS algorithm. It converges faster, at the cost of computational complexity [5], and its performance degrades in the presence of impulsive interference [6]. In this algorithm, increasing the projection order improves the convergence but simultaneously increases the computational complexity. The update solves

minimize ||w(n+1) - w(n)||^2 subject to the constraint d(n) - XT(n) w(n+1) = 0

where X(n) collects the most recent input vectors and d(n) the corresponding desired samples.
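A sketch of one APA iteration, assuming X(n) stacks the K most recent input vectors as columns and d(n) holds the corresponding desired samples (the dimensions and regularization value are illustrative). With μ = 1, the updated weights satisfy the constraint above up to the regularization:

```python
import numpy as np

def apa_update(w, X, d, mu=1.0, delta=1e-6):
    """One affine-projection iteration.
    X: L x K matrix (columns = K most recent input vectors),
    d: length-K desired vector, delta: regularization constant."""
    e = d - X.T @ w                               # a priori error vector
    K = X.shape[1]
    return w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

# With mu = 1, the a posteriori error d - X^T w(n+1) is driven (almost) to zero.
rng = np.random.default_rng(0)
L, K = 8, 3                                       # filter length, projection order
X = rng.standard_normal((L, K))
d = rng.standard_normal(K)
w_new = apa_update(np.zeros(L), X, d)
print(np.round(d - X.T @ w_new, 6))               # approximately [0, 0, 0]
```

The K x K linear solve is where the cost grows with the projection order, as the text notes.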

Affine projection sign algorithm
The APSA [6] combines the benefits of the APA and the sign algorithm (SA) by updating its weight vector according to an L1-norm optimization criterion while using multiple projections. Owing to this combination, it achieves better convergence speed and steady-state error than the APA and SA, and its computational complexity [7] is lower than that of the APA and NLMS algorithms. The update equation is

w(n+1) = w(n) + μ Xs(k) / sqrt(XsT(k) Xs(k) + c)

where Xs(k) = X(k) sgn(e(k)) and c is the regularization parameter.
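A sketch of the APSA update, assuming X(k) is the L x K matrix of the K most recent input vectors and e(k) the length-K error vector (the values below are illustrative). Because the update is normalized by the norm of Xs, the step has essentially magnitude μ regardless of how large the error is, which is what makes the algorithm robust to impulsive interference:

```python
import numpy as np

def apsa_update(w, X, e, mu, c=1e-6):
    """One affine-projection sign iteration:
    Xs(k) = X(k) sgn(e(k));  w <- w + mu * Xs / sqrt(Xs^T Xs + c)."""
    xs = X @ np.sign(e)
    return w + mu * xs / np.sqrt(xs @ xs + c)

rng = np.random.default_rng(0)
L, K, mu = 8, 3, 0.01
X = rng.standard_normal((L, K))
w = np.zeros(L)

# Even a huge (impulsive) error produces the same bounded, mu-sized step,
# since only the sign of each error component matters:
w_small = apsa_update(w, X, np.array([0.1, -0.2, 0.3]), mu)
w_big = apsa_update(w, X, np.array([100.0, -200.0, 300.0]), mu)
print(np.linalg.norm(w_small - w), np.linalg.norm(w_big - w))  # both close to mu
```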

Variable step size Normalized LMS algorithms
Variable step size (VSS) NLMS adaptive filters were developed to achieve fast convergence together with a low excess mean square error.
In [8], the power of the instantaneous error is used to derive a variable step size LMS filter: the step size is larger when the estimated error is larger, and vice versa. In [9], the norm of the filter-coefficient error vector is the criterion for selecting the variable step size. In [10], the mean square error and the estimated system noise power are used to control the step size, yielding a non-parametric VSS-NLMS algorithm. A comparison in [8] with other VSS-NLMS algorithms finds that the algorithm of [8] performs well, with faster convergence, low steady-state error, and good tracking capability. In [11], a robust variable step size APA (RVSS-APA) is presented, based on minimizing the squared norm of the a posteriori error subject to a time-dependent constraint on the norm of the filter update. The performance analysis shows that RVSS-APA achieves a lower steady-state error without sacrificing convergence speed or tracking.
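As an illustration of the error-power-controlled family described above, here is a sketch of a variable step size LMS iteration using the classical recursion mu(n+1) = alpha*mu(n) + gamma*e(n)^2, clipped to [mu_min, mu_max]. The constants alpha, gamma, mu_min, and mu_max are illustrative, and the exact recursions in [8]-[11] differ in their details:

```python
import numpy as np

def vss_lms_step(w, x, d, mu, alpha=0.97, gamma=0.01,
                 mu_min=0.01, mu_max=0.1):
    """One variable-step-size LMS iteration: the step size grows with the
    instantaneous error power and decays as the error dies down."""
    e = d - np.dot(w, x)
    w = w + mu * e * x
    mu = float(np.clip(alpha * mu + gamma * e * e, mu_min, mu_max))
    return w, mu, e

# Toy run: while the error is large the step size stays large (fast
# convergence); as the filter converges, e(n) -> 0 and mu decays to mu_min
# (low excess mean square error).
rng = np.random.default_rng(1)
h = np.array([1.0, -0.5])
x_sig = rng.standard_normal(3000)
w, mu = np.zeros(2), 0.05
for n in range(2, len(x_sig)):
    x = x_sig[n:n-2:-1]
    w, mu, e = vss_lms_step(w, x, np.dot(h, x), mu)

print(np.round(w, 3), mu)        # weights near h, step size near mu_min
```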


Conclusion
This paper has discussed the LMS family of adaptive algorithms used in adaptive filters. The LMS algorithm is preferred when cost is the major criterion, although the convergence and tracking of the APA are better than those of the LMS algorithm. Hybrid algorithms perform better than their parent algorithms. Future work should therefore aim at an adaptive algorithm with lower computational complexity, more stability, lower steady-state error, and good convergence and tracking capability.
Comparison table depicting the features of the adaptive algorithms:

1. LMS
   Convergence: poor, due to the restriction of the step size by the eigenvalues of the autocorrelation matrix of the input signal.
   Tracking: poor.
   Computational complexity: O(M), where M is the length of the filter.
   Steady-state error: slightly more than the minimum mean square error, due to gradient noise.
   Stability: good.
   Limitation: convergence and tracking are poor and the mean square error is higher.

2. NLMS
   Convergence: faster than LMS.
   Tracking: better than LMS.
   Computational complexity: 2L+2 additions, 2L+3 multiplications, 1 division.
   Steady-state error: less than LMS.
   Stability: more stable than LMS.
   Limitation: complexity is higher.

3. PNLMS
   Convergence: faster than LMS and NLMS, since the step size is proportional to the impulse response of the signal.
   Tracking: better than NLMS.
   Computational complexity: about 50% more complex than NLMS.
   Steady-state error: almost constant throughout, due to the distribution of the total adaptive gain over the taps.
   Stability: good.
   Limitation: more complex.

4. APA
   Convergence: improved convergence.
   Tracking: better than LMS and NLMS.
   Computational complexity: as the projection order is increased to improve convergence, the computational complexity also increases.
   Steady-state error: near-end noise amplification is avoided due to regularization.
   Stability: good.
   Limitation: most costly, due to the complexity of computation.

5. RVSS-APA
   Convergence: faster than APA, since it is based on L1- and L2-norm optimization.
   Tracking: better than APA.
   Computational complexity: more complex.
   Steady-state error: slightly better than APA.
   Stability: very robust and more stable than APA.
   Limitation: computational complexity is higher.

6. APSA
   Convergence: faster than APA, due to the L1-norm optimization.
   Tracking: better than APA.
   Computational complexity: 2L+3M+4 multiplications, ML+L+2M^2+3M+1 additions, 1 square root and 1 division.
   Steady-state error: better than APA and SA.
   Stability: better than APA and SA with higher projection order.
   Limitation: the number of additions in the efficient implementation is still high, of the order O(ML).

7. VSS-NLMS
   Convergence: faster convergence.
   Tracking: better than NLMS.
   Computational complexity: more.
   Steady-state error: better than NLMS.
   Stability: good.
   Limitation: computational complexity is higher.
References

[1] Paulo S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementations, Kluwer Academic Publishers, London, 2nd edition.
[2] S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, 2002, 4th edition.
[3] Monson H. Hayes, Statistical Digital Signal Processing and Modeling, Wiley India.
[4] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancellers," IEEE Trans. Speech Audio Process., vol. 8, no. 5, pp. 508-518, Sep. 2000.
[5] Tiange Shao, Yahong Rosa Zheng, and Jacob Benesty, "Affine projection sign algorithm robust against impulsive interferences," IEEE Signal Processing Letters, vol. 17, no. 4, April 2010.
[6] K. Ozeki and T. Umeda, "An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties," Electron. Commun. Jpn., vol. 67-A, no. 5, pp. 19-27, May 1984.
[7] Jingen Ni and Feng Li, "Efficient implementation of the affine projection sign algorithm," IEEE Signal Processing Letters, vol. 19, no. 1, January 2012.
[8] Hsu-Chang Huang and Junghsi Lee, "A new variable step-size NLMS algorithm and its performance analysis," IEEE Transactions on Signal Processing, vol. 60, no. 4, April 2012.
[9] R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Trans. Signal Process., vol. 40, no. 7, pp. 1633-1642, Jul. 1992.
[10] H. C. Shin, A. H. Sayed, and W. J. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Signal Process. Lett., vol. 11, no. 2, pp. 132-135, Feb. 2004.
[11] Leonardo Rey Vega, Hernan Rey, and Jacob Benesty, "Robust variable step-size affine projection algorithm," Signal Processing, Elsevier.