 Open Access
 Authors : V.Ravi Sankar, T.V.Krishna Moorthy, S.Abdul Mansoor, Sk.Ruksana Begum
 Paper ID : IJERTV1IS8391
 Volume & Issue : Volume 01, Issue 08 (October 2012)
Published (First Online): 29-10-2012
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Split Wiener Filtering in Adaptive Systems
V. Ravi Sankar, T. V. Krishna Moorthy, S. Abdul Mansoor, Sk. Ruksana Begum
ECE Department, SV College of Engineering, Tirupathi, AP, India
ECE Department, Holy Mary Institute of Technology and Science, Hyderabad, AP, India
ECE Department, Vignana Bharathi Institute of Technology, Hyderabad, AP, India
ECE Department, Holy Mary Institute of Technology and Science, Hyderabad, AP, India
Abstract: This paper proposes a new structure for split transversal filtering and introduces the optimum split Wiener filter. The approach consists of combining the idea of split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multisplit filter structure, is considered. It is shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the nature of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multisplit approach is extended to linear-phase adaptive filtering and linear prediction. The optimum symmetric and antisymmetric linear-phase Wiener filters are presented. Simulation results enable us to evaluate the performance of the multisplit LMS algorithm.
Index Terms: Adaptive filtering, linear-phase filtering, linear prediction, linearly constrained filtering, split filtering, Wiener filtering.
I. INTRODUCTION
NONRECURSIVE systems are frequently used in digital signal processing, mainly in adaptive filtering. Such finite impulse response (FIR) filters have the desirable properties of guaranteed stability and absence of limit cycles. However, in some applications, the filter order must be large (e.g., noise and echo cancellation and channel equalization, to name a few in the communication field) in order to obtain acceptable performance. Consequently, an excessive number of multiplication operations is required, and the implementation of the filter becomes infeasible, even for the most powerful digital signal processors. The problem grows worse in adaptive filtering.
Besides the computational complexity, the convergence rate and the tracking capability of the algorithms also deteriorate with an increasing number of coefficients to be updated. Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix [1]-[3]. Transform-domain LMS algorithms, based on transforms such as the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity [2], [4]. In general, this approach consists of using an orthogonal transform together with power normalization to speed up the convergence of the LMS algorithm. Very interesting, efficient, and different approaches have also been proposed in the literature [5], [6], but they still present the same tradeoff between performance and complexity. Another alternative to overcome the aforementioned drawbacks of nonrecursive adaptive systems is the split processing technique. The fundamental principles were introduced when Delsarte and Genin proposed a split Levinson algorithm for real Toeplitz matrices in [7]. Identifying the redundancy of computing the set of the symmetric and antisymmetric parts of the predictors, they reduced the number of multiplication operations of the standard Levinson algorithm by about one half. Subsequently, the same authors extended the technique to classical algorithms in linear prediction theory, such as the Schur, the lattice, and the
normalized lattice algorithms [8]. A split LMS adaptive filter for autoregressive (AR) modeling (linear prediction) was proposed in [9] and generalized to a so-called unified approach [10], [11] by the introduction of continuous splitting and the corresponding application to a general transversal filtering problem. Actually, an appropriate formulation of the split filtering problem has yet to be provided, and such a formulation would bring more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, and concurrency. This is the purpose of the present paper. By using an original and elegant joint approach combining split transversal filtering and linearly constrained optimization, a new structure for the split transversal filter is proposed. The optimum split Wiener filter and the optimum symmetric and antisymmetric linear-phase Wiener filters are introduced. The approach consists of imposing the symmetry and antisymmetry conditions on the impulse responses of two filters connected in parallel by means of an appropriate set of linear constraints implemented with the so-called generalized sidelobe canceller structure. Furthermore, a continued splitting process is applied to the proposed approach, giving rise to a multisplit filtering structure. We show that such a
multisplit processing does not reduce the eigenvalue spread, but it does improve the diagonalization factor of the input signal correlation matrix. The interpretations of the splitting transform as a linearly constrained processing are then considered in adaptive filtering, and a power-normalized, time-varying step-size LMS algorithm is suggested for updating the parameters of the proposed scheme. We also extend the approach to linear-phase adaptive filtering and linear prediction. Finally, simulation results obtained with the multisplit algorithm are presented and compared with the standard LMS, DCT-LMS, and recursive least squares (RLS) algorithms.

Split Transversal Filtering
Any finite sequence can be expressed as the sum of a symmetric and an antisymmetric sequence, where the symmetric (antisymmetric) part is one-half of the sum (difference) of the original sequence and its backward version. The same concept can be applied to the finite impulse response of a transversal filter. The impulse response of an FIR filter of length N can be represented in vector notation as
w = [w_0, w_1, ..., w_{N-1}]^T (1)
Let w denote this N-by-1 tap-weight vector of a transversal filter. This vector can be represented as the summation of its symmetric and antisymmetric parts as
w = w_s + w_a (2)
where the symmetric part, denoted by w_s, and the antisymmetric part, denoted by w_a, are given by the following two equations, respectively:
w_s = (1/2)(w + Jw) (3)
w_a = (1/2)(w - Jw) (4)
and J is the N-by-N reflection matrix (or exchange matrix), which has unit elements along the cross diagonal and zeros elsewhere. For example, the 2-by-2 reflection matrix J_2 is given by
J_2 = [ 0 1 ; 1 0 ] (5)
Thus, if w = [w_0, w_1, ..., w_{N-1}]^T, then Jw = [w_{N-1}, ..., w_1, w_0]^T; that is, premultiplication by J reverses the order of the elements of w. The symmetry and antisymmetry conditions of w_s and w_a are, respectively, described by
J w_s = w_s (6)
J w_a = -w_a (7)
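As a quick numerical check, the decomposition in equations (2)-(4) and the conditions (6) and (7) can be verified with a short script (a minimal sketch using NumPy; the example vector is arbitrary):

```python
import numpy as np

def split_sym_antisym(w):
    """Split a tap-weight vector into symmetric and antisymmetric parts.

    Implements w_s = (w + Jw)/2 and w_a = (w - Jw)/2, where applying the
    reflection matrix J simply reverses the order of the elements.
    """
    w = np.asarray(w, dtype=float)
    Jw = w[::-1]            # Jw: the backward version of w
    ws = 0.5 * (w + Jw)     # symmetric part
    wa = 0.5 * (w - Jw)     # antisymmetric part
    return ws, wa

w = np.array([1.0, 2.0, 3.0, 4.0])
ws, wa = split_sym_antisym(w)

assert np.allclose(ws + wa, w)       # the parts add back to w
assert np.allclose(ws[::-1], ws)     # J ws = ws
assert np.allclose(wa[::-1], -wa)    # J wa = -wa
```

The reversal `w[::-1]` replaces an explicit matrix-vector product with J, which is all the reflection matrix does.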
Let us consider the classical scheme of the transversal filter shown in figure 1(a). The filter tap-weight vector can be split into symmetric and antisymmetric parts, as represented in figure 1(b). The input signal and the desired response are modeled as wide-sense stationary discrete-time stochastic processes of zero mean. Without loss of generality, all the parameters are assumed to be real valued.
Figure 1(a) Adaptive transversal filtering
Figure 1(b). Split adaptive transversal filtering
The same input and desired response are applied to the transversal filter and to the split transversal filter, and it can be observed that the performance of the two is the same. However, the convergence rate is improved for the split transversal filter when compared to the normal transversal filter [7].

Split Filtering as a Linearly Constrained Filtering Problem
The essence of a Wiener filter is that it minimizes the mean-square value of an estimation error. In solving this optimization problem in its standard form, no constraints are imposed on the solution. In some filtering applications, however, it may be desirable or even mandatory to design a filter that minimizes a mean-square criterion subject to a specific constraint [1].
The principle of linearly constrained transversal optimal filtering is to minimize the power of the estimation error e(n), subject to a set of linear constraints on the weight vector defined by
C^T w = f (8)
where C is the N-by-K constraint matrix and f is a K-element response vector.
The constraints are imposed so as to prevent the weight vector from cancelling the signal of interest. To satisfy the requirement of multiple constraints, we may use the generalized sidelobe canceller (GSC), whose weight vector is separated into two components: a quiescent weight vector, which satisfies the prescribed constraints, and an unconstrained weight vector, the optimization of which, in accordance with Wiener filter theory, minimizes the effects of receiver noise and interfering signals. The GSC implementation of the linear constraints of equation (8) is represented in figure 3 [2], [12].
Figure 3. Generalized sidelobe canceller
This GSC implementation essentially converts a constrained minimization problem into an unconstrained one. From figure 3, it can be observed that the GSC implementation contains an N-by-(N-K) signal blocking matrix C_b. This signal blocking matrix represents a basis for the orthogonal complement of the subspace spanned by the columns of C. That is,
C^T C_b = 0 (9)
The (N-K)-element vector w_u represents an unconstrained filter, and the overall coefficient vector is given by
w = w_q - C_b w_u (10)
where w_q = C (C^T C)^{-1} f is the quiescent weight vector.
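The GSC decomposition described by equations (8)-(10) can be sketched numerically. In this hypothetical example, the constraint matrix C and response vector f are random, and the blocking matrix C_b is taken from the SVD of C (any basis of the orthogonal complement would do):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 2
C = rng.standard_normal((N, K))      # N-by-K constraint matrix (full rank a.s.)
f = rng.standard_normal(K)           # K-element response vector

# Quiescent weight vector: satisfies the constraints exactly
w_q = C @ np.linalg.solve(C.T @ C, f)

# Signal blocking matrix C_b: orthonormal basis of the null space of C^T,
# obtained here from the full SVD of C
U, s, Vt = np.linalg.svd(C, full_matrices=True)
C_b = U[:, K:]                       # N-by-(N-K), satisfies C^T C_b = 0

# Any choice of the unconstrained filter w_u still satisfies the constraints
w_u = rng.standard_normal(N - K)
w = w_q - C_b @ w_u

assert np.allclose(C.T @ C_b, 0)     # blocking property
assert np.allclose(C.T @ w, f)       # constraints hold for any w_u
```

The point of the structure is visible in the last assertion: the constrained problem over w becomes an unconstrained problem over w_u.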
The splitting of w into its symmetric and antisymmetric parts (see figures 1(a) and (b)) can be interpreted as a linearly constrained optimization problem. Let us define the constraint matrices C_s and C_a, together with the response vectors f_s = 0 and f_a = 0, as
C_s = [ I_K ; 0 ; -J_K ] (11)
C_a = [ I_K ; 0 ; J_K ] (12)
for N odd (where K = (N-1)/2), with a 1-by-K zero row between the two stacked blocks, or
C_s = [ I_K ; -J_K ] (13)
C_a = [ I_K ; J_K ] (14)
for N even (where K = N/2). Here, I_K is the K-by-K identity matrix, J_K is the K-by-K reflection matrix, and the blocks are stacked vertically.
Now, the constraints imposed on the symmetric and antisymmetric parts of the tap-weight vector are given by
C_s^T w_s = 0 (15)
C_a^T w_a = 0 (16)
From the above equations, we find that w_s in equation (15) must be orthogonal to the subspace spanned by the columns of C_s. Similarly, w_a is orthogonal to the subspace spanned by the columns of C_a.
Now, implementing the GSC structure of figure 3 on the symmetric and antisymmetric parts of w leads to the block diagram shown in figure 4(a) (N even).
Figure 4(a). GSC implementation of the split filter
However, since f_s = 0 and f_a = 0, the quiescent weight vectors vanish, so the two corresponding branches can be eliminated from figure 4(a). Moreover, it is easy to verify that C_s^T C_s = 2 I_K and C_a^T C_a = 2 I_K. In addition, C_a is a possible choice of signal blocking matrix to span the subspace that is the orthogonal complement of the subspace spanned by the columns of C_s, since C_s^T C_a = 0 [4]. This property can also be verified by the fact that C_s forces w_s to be symmetric through equation (15), whereas C_a would force it to be antisymmetric. Considering the above properties, figure 4(a) can be simplified to the block diagram shown in figure 4(b) [4].
It is observed that the reduced vectors w_s' and w_a' are merely composed of the first N/2 coefficients of w_s and w_a. This can be verified by noting that premultiplication of w_s by (1/2)C_a^T yields w_s', and of w_a by (1/2)C_s^T yields w_a'.
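A small script illustrates these properties, assuming constraint matrices of the form C_s = [I_K; -J_K] and C_a = [I_K; J_K] (vertically stacked blocks, N even):

```python
import numpy as np

N = 8
K = N // 2
I = np.eye(K)
J = np.fliplr(np.eye(K))       # K-by-K reflection matrix

Cs = np.vstack([I, -J])        # Cs^T w = 0 forces w to be symmetric
Ca = np.vstack([I, J])         # Ca^T w = 0 forces w to be antisymmetric

# Mutual orthogonality and scaling properties used to simplify the GSC
assert np.allclose(Cs.T @ Ca, 0)
assert np.allclose(Cs.T @ Cs, 2 * I)
assert np.allclose(Ca.T @ Ca, 2 * I)

# Ca acts as the blocking matrix of the symmetric branch: for any
# half-length vector w_half, Ca @ w_half is a symmetric full-length
# vector, and (1/2) Ca^T recovers w_half from it.
w_half = np.arange(1.0, K + 1.0)
ws = Ca @ w_half
assert np.allclose(ws[::-1], ws)               # symmetric impulse response
assert np.allclose(0.5 * Ca.T @ ws, w_half)    # first N/2 coefficients
```

The analogous check holds for the antisymmetric branch with Cs as the blocking matrix.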
The estimation error is then given by
e(n) = d(n) - (w_s + w_a)^T x(n) (17)
where
x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T (18)
denotes the N-by-1 tap-input vector. In the mean-squared-error sense, the vectors w_s and w_a are chosen to minimize the cost function
ξ = σ_d^2 - 2 (w_s + w_a)^T p + (w_s + w_a)^T R (w_s + w_a) (19)
where σ_d^2 is the variance of the desired response d(n), R is the N-by-N correlation matrix of x(n), and p is the N-by-1 cross-correlation vector between x(n) and d(n).
Figure 4(b).GSC implementation of the split filter
From the symmetric and Toeplitz properties of the correlation matrix R, it can easily be shown that R = JRJ. A matrix with this property is known as centrosymmetric [3] and, in the case of R, can be partitioned into the form
R = [ R_1 R_2 ; R_2^T R_1 ] (20)
for N even (K = N/2), where R_1 and R_2 are K-by-K correlation matrices satisfying J R_1 J = R_1 and J R_2^T J = R_2.
Therefore, the optimum solutions are given by
w_s,opt = R^{-1} p_s (21)
w_a,opt = R^{-1} p_a (22)
where
p_s = (1/2)(p + Jp) (23)
p_a = (1/2)(p - Jp) (24)
are the symmetric and antisymmetric parts of the cross-correlation vector p, so that
w_opt = w_s,opt + w_a,opt = R^{-1} p (25)
These equations define the true optimum linear-phase Wiener filter, having both constant group delay and constant phase delay.
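These optimum solutions can be checked numerically: for a centrosymmetric R (here a symmetric Toeplitz matrix with an assumed AR(1)-like autocorrelation), R^{-1} applied to the symmetric part of p gives a symmetric vector, R^{-1} applied to the antisymmetric part gives an antisymmetric vector, and the two add up to the unconstrained Wiener solution R^{-1}p. A minimal sketch:

```python
import numpy as np

N = 6
# Assumed AR(1)-like autocorrelation sequence; any symmetric Toeplitz
# (hence centrosymmetric) R would do.
r = 0.8 ** np.arange(N)
R = r[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]
p = np.array([1.0, 0.5, -0.3, 0.2, 0.1, -0.4])   # arbitrary cross-correlation

ps = 0.5 * (p + p[::-1])            # symmetric part of p
pa = 0.5 * (p - p[::-1])            # antisymmetric part of p

ws_opt = np.linalg.solve(R, ps)     # optimum symmetric linear-phase filter
wa_opt = np.linalg.solve(R, pa)     # optimum antisymmetric linear-phase filter
w_opt = np.linalg.solve(R, p)       # unconstrained Wiener solution

assert np.allclose(ws_opt[::-1], ws_opt)     # constant group delay (symmetric)
assert np.allclose(wa_opt[::-1], -wa_opt)    # antisymmetric
assert np.allclose(ws_opt + wa_opt, w_opt)   # the parts add up to R^{-1} p
```

The symmetry of ws_opt follows from J R^{-1} J = R^{-1}, which holds whenever R = JRJ.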

Development of the Multisplit Transform
For ease of presentation, let us consider N = 2^L, where L is an integer greater than one. Applying the splitting procedure described above repeatedly to the transversal filters, after L steps we arrive at the multisplit scheme shown in figure 5. This requires a set of splitting operations at each step l = 1, 2, ..., L.
Figure 5 MultiSplit adaptive filtering
In figure 5, C_s and C_a are matrices of the form given in equations (13) and (14), respectively (of the appropriate dimensions at each step), and the scalars w_i, for i = 0, 1, ..., N-1, are the single parameters of the resulting zero-order filters.
The above multisplit scheme can be viewed as a linear transformation of x(n), which is denoted by
x'(n) = T^T x(n) (26)
where the N-by-N transform matrix T is built from the L splitting stages according to
(27)
(28)
It can be observed that, for N = 2^L, T is a matrix of +1s and -1s in which the inner product of any two distinct columns is zero. In fact, T is a nonsingular matrix with T^T T = N I_N. In other words, the columns of T are mutually orthogonal.
It can be shown that a suitable reordering of the columns of T turns it into the order-N Hadamard matrix H, so that the multisplit scheme can be represented in the compact form shown in figure 6 [4].
Figure 6 Hadamard transform of the input x(n)
The Hadamard matrix of order N can be constructed from the matrix of order N/2 as follows:
H_N = [ H_{N/2} H_{N/2} ; H_{N/2} -H_{N/2} ] (29)
Starting with H_1 = [1], this gives H_2, H_4, H_8, and Hadamard matrices of all orders that are powers of two. An alternative way of describing equation (29) is H_N = H_2 ⊗ H_{N/2}, where ⊗ denotes the Kronecker product of matrices, and
H_1 = [ 1 ] (30)
H_2 = [ 1 1 ; 1 -1 ] (31)
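The Sylvester recursion of equation (29) and its Kronecker-product description are easy to implement and check:

```python
import numpy as np

def hadamard(N):
    """Sylvester construction of the order-N Hadamard matrix.

    N must be a power of two; starting from H_1 = [1], each pass doubles
    the order via H_2N = [[H_N, H_N], [H_N, -H_N]].
    """
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H8 = hadamard(8)

# Columns are mutually orthogonal: H^T H = N I
assert np.allclose(H8.T @ H8, 8 * np.eye(8))
# Equivalent Kronecker-product description: H_2N = H_2 (x) H_N
assert np.allclose(H8, np.kron(H2, hadamard(4)))
```

Because H contains only +1s and -1s, applying it requires additions and subtractions only, which is what makes the multisplit structure cheap.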
Another very interesting linear transform, M, is obtained by making
(32)
and
(33)
where l = 1, 2, ..., L. Using equation (27), this results in a linear transformation of x(n) whose flow graph is shown in figure 7 (N = 8).
Now, by substituting M for the linear transform H, the multisplit scheme can also be represented in the compact form of figure 6, where
(34)
Figure 7. Flow graph of the butterfly computation for N = 8
The above-mentioned linear transforms do not convert the vector x(n) into a corresponding input vector of uncorrelated variables. Therefore, the single parameters in figures 5 and 6 cannot be optimized separately by the mean-square error criterion.
Nevertheless, the multisplit transform improves the diagonalization of the input correlation matrix R. This can be observed from
R' = H R H (35)
obtained by pre- and postmultiplying the R of equation (20) by the Hadamard matrix H.
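The two claims about the transform, that it improves the diagonalization factor of R while leaving the eigenvalue spread untouched, can be verified numerically. Here the diagonalization factor is measured as the ratio of diagonal energy to total energy (an illustrative definition assumed for this sketch), for an assumed AR(1)-like Toeplitz R:

```python
import numpy as np

def hadamard(N):
    """Sylvester construction of the order-N Hadamard matrix (N a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
r = 0.9 ** np.arange(N)                   # assumed AR(1)-like autocorrelation
R = r[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]

H = hadamard(N)
Rbar = H @ R @ H                          # pre- and postmultiplied correlation

def diag_factor(A):
    """Ratio of diagonal energy to total energy (1.0 = perfectly diagonal)."""
    return np.sum(np.diag(A) ** 2) / np.sum(A ** 2)

# The transform concentrates energy on the diagonal ...
assert diag_factor(Rbar) > diag_factor(R)
# ... but H/sqrt(N) is orthogonal, so Rbar/N is similar to R and the
# eigenvalue spread is unchanged.
assert np.allclose(np.linalg.eigvalsh(R), np.linalg.eigvalsh(Rbar / N))
```

This is consistent with the statement in the abstract: the multisplit transform is not a whitening transform, yet it makes the correlation matrix more nearly diagonal.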
Finally, the optimum coefficients w_i,opt, for i = 0, 1, ..., N-1, of figure 5 can be obtained by minimizing the mean-squared error, which results in
w'_opt = R'^{-1} p' (36)
where p' = Hp [4]. From equation (36), we can also obtain the optimum Wiener filter coefficients as
w_opt = H w'_opt (37)
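Equations (36) and (37), assuming the transform-domain quantities R' = HRH and p' = Hp, can be checked against the direct Wiener solution:

```python
import numpy as np

def hadamard(N):
    """Sylvester construction of the order-N Hadamard matrix (N a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
r = 0.9 ** np.arange(N)                   # assumed AR(1)-like autocorrelation
R = r[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]
rng = np.random.default_rng(1)
p = rng.standard_normal(N)                # arbitrary cross-correlation vector

H = hadamard(N)
Rbar = H @ R @ H                          # transformed correlation matrix
pbar = H @ p                              # transformed cross-correlation vector

wbar_opt = np.linalg.solve(Rbar, pbar)    # optimum transform-domain parameters
w_opt = H @ wbar_opt                      # recovered time-domain Wiener filter

# The recovered coefficients agree with the direct Wiener solution R^{-1} p
assert np.allclose(w_opt, np.linalg.solve(R, p))
```

The agreement follows algebraically from H H = N I: premultiplying (HRH)^{-1} Hp by H collapses back to R^{-1} p.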

Comparison of the Performance of Adaptive Algorithms
The simulation results for denoising a noisy sine wave using the standard LMS, RLS, DCT-LMS, and multisplit LMS algorithms for different numbers of iterations are tabulated below. The mean square error (MSE) is taken as the performance criterion for comparing the adaptive algorithms. Using the tabulated values, the comparison graph shown in figure 8 is generated.
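For reference, a power-normalized Hadamard-domain LMS of the kind described above can be sketched as follows. This is a hypothetical system-identification setup (the plant, step size, and power-smoothing factor are illustrative assumptions, not the simulation reported here):

```python
import numpy as np

def hadamard(N):
    """Sylvester construction of the order-N Hadamard matrix (N a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(2)
N = 8
H = hadamard(N)

# Hypothetical unknown plant to identify, and a colored AR(1) input signal
w_true = rng.standard_normal(N)
x = np.zeros(4000)
for n in range(1, x.size):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()

wbar = np.zeros(N)        # adaptive coefficients in the Hadamard domain
power = np.ones(N)        # running power estimate of each transformed branch
mu, beta = 0.05, 0.99     # illustrative step size and power smoothing factor

# Warm up the per-branch power estimates before adapting
for n in range(N - 1, 500):
    xbar = H @ x[n - N + 1:n + 1][::-1]
    power = beta * power + (1 - beta) * xbar ** 2

sq_err = []
for n in range(500, x.size):
    xn = x[n - N + 1:n + 1][::-1]    # tap-input vector [x(n), ..., x(n-N+1)]
    d = w_true @ xn                  # desired response from the plant
    xbar = H @ xn                    # Hadamard-transformed input
    e = d - wbar @ xbar              # estimation error
    power = beta * power + (1 - beta) * xbar ** 2
    wbar += mu * e * xbar / (power + 1e-8)   # power-normalized LMS update
    sq_err.append(e ** 2)

w_est = H @ wbar                     # equivalent time-domain filter, w = H w'
assert np.mean(sq_err[-200:]) < np.mean(sq_err[:200]) / 10
assert np.allclose(w_est, w_true, atol=0.1)
```

The per-branch power normalization plays the same role as in the DCT-LMS: each transformed input component gets an effective step size inversely proportional to its power, which compensates for the uneven energy distribution after the transform.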
Figure 8. MSE comparison of the LMS, RLS, DCT-LMS, and multisplit LMS algorithms for denoising the noisy sine wave

Conclusions
The objective of this work is an appropriate formulation of the split filtering problem, which brings more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, and concurrency. By using an original and elegant joint approach combining split transversal filtering and linearly constrained optimization, a new structure for the split transversal filter is proposed. The optimum split Wiener filter and the optimum symmetric and antisymmetric linear-phase Wiener filters are introduced. The approach consists of imposing the symmetry and antisymmetry conditions on the impulse responses of two filters connected in parallel by means of an appropriate set of linear constraints implemented with the so-called generalized sidelobe canceller structure. Furthermore, a continued splitting process is applied to the proposed approach, giving rise to a multisplit filtering structure. The interpretations of the splitting transform as a linearly constrained processing are then considered in adaptive filtering, and a power-normalized, time-varying step-size LMS algorithm is suggested for updating the parameters of the proposed scheme. Finally, simulation results obtained with the multisplit algorithm are presented and compared with the standard LMS, DCT-LMS, and recursive least squares (RLS) algorithms.

The developed multisplit algorithm is used for denoising an acoustic signal and ECG signals in MATLAB, and the results are compared with those of the standard LMS, DCT-LMS, and RLS algorithms. It is observed that the performance of the RLS and DCT-LMS algorithms is better than that of the standard LMS and the proposed multisplit LMS, but, as already stated, their implementation is more difficult and their computational complexity is higher than that of the standard LMS and multisplit LMS algorithms.
This is why the LMS algorithm remains the most widely used algorithm in adaptive filters. It is observed that the proposed multisplit algorithm performs better than the standard LMS algorithm in terms of both MSE and convergence rate. For example, to reach an MSE of 0.0061, the standard LMS takes 650 iterations, whereas the multisplit LMS takes only 250. Moreover, it is also observed that the performance of the proposed multisplit algorithm is close to that of the RLS algorithm.

Future scope
This work implements a new structure of split transversal filtering (multisplit transversal filtering). Here, the input vectors as well as the filter coefficients are split into symmetric and antisymmetric parts using the Hadamard transform. The Hadamard transform is a linear transform that operates on the time-domain samples of the input and on the impulse response of the filter. The same procedure can also be repeated in the frequency domain: the input vector is split into a low-frequency part and a high-frequency part, and an adaptive filtering algorithm is applied to each part separately, which leads to a subband adaptive algorithm. Since this requires no transformation, only filter banks, the computational burden is lower.
References

[1] S. Haykin, Adaptive Filter Theory, 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.

[2] S. Haykin and A. Steinhardt, Eds., Adaptive Radar Detection and Estimation. New York: Wiley, 1992.

[3] S. L. Marple, Jr., Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall, 1998.

[4] L. S. Resende, J. M. T. Romano, and M. G. Bellanger, "Split Wiener filtering with application in adaptive systems," IEEE Trans. Signal Processing, vol. 52, no. 3, March 2004.

[5] F. Beaufays, "Transform-domain adaptive filters: An analytical approach," IEEE Trans. Signal Processing, vol. 43, pp. 422-431, Feb. 1995.

[6] J. S. Goldstein, I. S. Reed, and L. L. Scharf, "A multistage representation of the Wiener filter based on orthogonal projections," IEEE Trans. Inform. Theory, vol. 44, pp. 2943-2959, Nov. 1998.

[7] P. Delsarte and Y. V. Genin, "The split Levinson algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 470-478, June 1986.

[8] P. Delsarte and Y. V. Genin, "On the splitting of classical algorithms in linear prediction theory," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 645-653, May 1987.

[9] K. C. Ho and P. C. Ching, "Performance analysis of a split-path LMS adaptive filter for AR modeling," IEEE Trans. Signal Processing, vol. 40, pp. 1375-1382, June 1992.

[10] P. C. Ching and K. F. Wan, "A unified approach to split structure adaptive filtering," in Proc. IEEE ISCAS, Detroit, MI, May 1995.

[11] K. F. Wan and P. C. Ching, "Multilevel split-path adaptive filtering and its unification with discrete Walsh transform adaptation," IEEE Trans. Circuits Syst. II, vol. 44, pp. 147-151, Feb. 1997.

[12] L. J. Griffiths and C. W. Jim, "An alternative approach to linearly constrained adaptive beamforming," IEEE Trans. Antennas Propagat., vol. AP-30, pp. 27-34, Jan. 1982.

[13] E. C. Ifeachor and B. W. Jervis, Digital Signal Processing: A Practical Approach. Prentice Hall, 2002.