Spectral and MFCC Feature Extraction Methodology for Cardiac Signal Analysis: A Comparative Study

DOI : 10.17577/IJERTCONV3IS15044

K. Lakshmi Devi,

Research Scholar,

Mother Teresa Women's University, Kodaikanal.

Dr. M. Arthanari,

Dean,

Nehru Institute of Technology, Coimbatore.

Abstract: Feature extraction is crucial in signal analysis because it transforms the original signal into a relatively low-dimensional feature space for further analysis. Since heart sounds are difficult to analyze directly in the time domain, this paper compares two such feature extraction methods: spectral features and Mel-frequency cepstral coefficients (MFCC).

Keywords: Phonocardiogram; MFCC; Spectral Features; DWT.

  1. INTRODUCTION

    Cardiac auscultation is the primary test conducted for the analysis of heart sounds. Cardiac activity can be recorded either as ECG or PCG signals, and both have been used as biometric traits in automatic identification systems. Phonocardiogram (PCG) signals are easily obtained by placing a stethoscope against the chest. They are complex signals that cannot be analyzed reliably by visual inspection, and they become unreliable when disturbed by noise, breath sounds, and similar artifacts. On the other hand, they cannot be forged, and a fresh sample can be taken every second. In recent years, several researchers have studied the possibility of using heart sounds for biometric recognition.

    The two major sounds in a cardiac cycle are S1 and S2. S1 occurs at the onset of ventricular contraction, during the closure of the atrioventricular (AV) valves [3]; it is the longest and loudest of the heart sounds. S2 occurs during the closure of the semilunar valves; it is higher in frequency than S1 but shorter in duration. The pattern of S1 and S2 sounds is used in the identification process because it is unique to each person.

  2. FEATURE EXTRACTION

    The overall authentication process captures the signal, amplifies and emphasizes it, and then performs training, matching, and verification. The heart sound signal is captured with a stethoscope, decomposed into frames, and normalized to remove offsets. A Discrete Wavelet Transform (DWT) then subsamples the signal and isolates the S1 and S2 sounds, whose frequency content lies roughly in the 30-250 Hz range. The resulting components are smoothed and the S1 and S2 peaks are detected, as sketched below.
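
A rough, hedged illustration of this preprocessing chain is sketched below: it normalizes a PCG recording, keeps the low-frequency DWT band where the S1 and S2 energy lies, and picks candidate S1/S2 peaks on a smoothed energy envelope. The PyWavelets and SciPy packages, the db4 wavelet, the 2000 Hz sampling rate, and all thresholds are illustrative assumptions, not the exact procedure used in this paper.

```python
# Illustrative sketch only: the wavelet (db4), decomposition level, and all
# thresholds are assumptions, not the exact procedure used in this paper.
import numpy as np
import pywt
from scipy.signal import find_peaks

def preprocess_pcg(x, fs=2000):
    """Normalize a raw PCG signal and isolate its low-frequency band with a DWT."""
    x = x - np.mean(x)                       # remove DC offset
    x = x / (np.max(np.abs(x)) + 1e-12)      # amplitude normalization

    # With fs = 2000 Hz, the level-2 approximation covers roughly 0-250 Hz,
    # the band that contains most of the S1/S2 energy.
    coeffs = pywt.wavedec(x, 'db4', level=2)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # drop detail bands
    return pywt.waverec(coeffs, 'db4')[:len(x)]

def detect_s1_s2_peaks(x, fs=2000):
    """Smooth the energy envelope and pick candidate S1/S2 peaks."""
    win = int(0.05 * fs)                                     # 50 ms smoothing window
    env = np.convolve(x ** 2, np.ones(win) / win, mode='same')
    # Successive heart sounds are separated by well over 200 ms.
    peaks, _ = find_peaks(env, distance=int(0.2 * fs), height=0.1 * np.max(env))
    return peaks
```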

    Emphasizing involves transforming a signal so that certain characteristics stand out, which contributes to effective training and matching (identity verification). Feature extraction does this job: it reduces the resources needed to describe a large set of data accurately. The segmented S1 and S2 sounds still contain a great deal of redundant data and therefore need further processing.

    A number of feature extraction methods are available, including temporal, spectral, cepstral, harmonic, rhythmic, cardiac, and GMM supervector features [1]. All of these are frame-based except the temporal features. Using every available feature does not necessarily improve classification accuracy, and the computational cost must also be considered. A feature selection step therefore chooses an optimal subset of the features for the implementation, as in the sketch below.
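
The paper does not name a particular selection algorithm. As one hedged example of such a step, a simple univariate filter that keeps the k most class-discriminative features could look like the sketch below; scikit-learn and the ANOVA F-test criterion are assumptions chosen purely for illustration.

```python
# Illustrative only: the paper does not specify a selection algorithm;
# scikit-learn's univariate F-test filter is just one possible choice.
from sklearn.feature_selection import SelectKBest, f_classif

def select_features(feature_matrix, labels, k=10):
    """Keep the k features with the highest class-separation (ANOVA F) scores.

    feature_matrix: (n_frames, n_features) array of extracted features
    labels:         (n_frames,) array of subject identities
    """
    selector = SelectKBest(score_func=f_classif, k=k)
    reduced = selector.fit_transform(feature_matrix, labels)
    return reduced, selector.get_support(indices=True)
```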

  3. COMPARISON

    A. Mel Frequency Cepstral Coefficients and Spectral Coefficients

    MFCC is a conventional method that extracts cepstral coefficients through filter banks spaced according to the mel scale, so spectral detail decreases as frequency increases. The FFT of each frame is computed and fed to the filter bank; the representation is based on the short-term power spectrum of the sound, obtained by taking the log of the power in each mel band and applying a linear cosine transform [2]. MFCC features are not robust to noise. A minimal sketch of this pipeline is given below.
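
A minimal, self-contained sketch of this FFT, mel filter bank, log, and cosine-transform chain is given below. The number of filters, the 20-500 Hz analysis band, and the number of retained coefficients are assumptions chosen for heart sounds, not values taken from the paper.

```python
# Illustrative MFCC computation for a single windowed frame; parameter values
# (filter count, band limits, coefficient count) are assumptions.
import numpy as np
from scipy.fft import rfft, dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs, n_filters=20, n_ceps=12, fmin=20.0, fmax=500.0):
    """Cepstral coefficients of one frame: FFT -> mel filter bank -> log -> DCT."""
    n_fft = len(frame)
    power = np.abs(rfft(frame * np.hamming(n_fft))) ** 2   # short-term power spectrum
    freqs = np.linspace(0.0, fs / 2.0, len(power))

    # Triangular filters spaced evenly on the mel scale (wider at high frequency).
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    fbank_energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        rising = np.clip((freqs - lo) / (mid - lo), 0.0, 1.0)
        falling = np.clip((hi - freqs) / (hi - mid), 0.0, 1.0)
        fbank_energies[i] = np.sum(power * np.minimum(rising, falling))

    # Log of the filter-bank energies, then a linear cosine transform (DCT).
    log_e = np.log(fbank_energies + 1e-10)
    return dct(log_e, type=2, norm='ortho')[:n_ceps]
```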

    Spectral analysis quantifies various quantities (amplitude, power, intensity, phase) as functions of frequency. It is a frame-based, frequency-domain analysis: any signal represented as amplitude varying with time has a corresponding frequency spectrum, and the analysis is performed over the entire signal. Discriminative spectral features are obtained by maximizing the statistical distance between the abnormal and the normal power spectra. Spectral features such as roll-off, flux, centroid, and entropy provide an effective dimensionality reduction; simple frame-level versions are sketched below.
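
Hedged, frame-level versions of these descriptors are sketched below; the 85% roll-off threshold and the particular flux definition are common conventions and are assumptions here, not specifications from the paper.

```python
# Illustrative frame-level spectral descriptors computed from a power spectrum.
import numpy as np

def spectral_features(power, freqs, rolloff_pct=0.85):
    """Centroid, roll-off, and entropy of one frame's power spectrum."""
    total = np.sum(power) + 1e-12
    centroid = np.sum(freqs * power) / total                 # spectral centroid
    cumulative = np.cumsum(power)
    rolloff = freqs[np.searchsorted(cumulative, rolloff_pct * total)]  # roll-off frequency
    p = power / total                                         # normalized spectrum
    entropy = -np.sum(p * np.log2(p + 1e-12))                 # spectral entropy
    return centroid, rolloff, entropy

def spectral_flux(power_prev, power_curr):
    """Flux: change in the normalized spectrum between consecutive frames."""
    a = power_prev / (np.sum(power_prev) + 1e-12)
    b = power_curr / (np.sum(power_curr) + 1e-12)
    return np.sqrt(np.sum((b - a) ** 2))
```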

    B. Comparative study

    Because PCG signals are non-stationary, non-linear, and not smooth waves, the Fourier transform falls short in analyzing heart sounds. The Fourier transform reports which frequencies are present in the signal but not where or when they occur. Wavelets, by contrast, are localized within a finite domain and therefore provide information at a particular instant. They are thus well suited to heart sound analysis, since PCG signals can be examined in terms of both time-dependent and frequency-dependent factors [4].

    The short-time Fourier transform (STFT) spectrogram cannot resolve the four components of the first heart sound S1 or the two components of the second heart sound S2. The wavelet transform is capable of detecting all of these components and supports both qualitative and quantitative measurement of the time-frequency characteristics of the signal. A small comparison sketch is given below.
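
As a small, hedged comparison, the sketch below computes both an STFT spectrogram and a continuous wavelet transform (CWT) scalogram of the same PCG segment, so the difference in time-frequency resolution can be inspected directly. The Morlet wavelet, the scale range, and the 50 ms window are illustrative choices, not the configurations used in the study.

```python
# Illustrative comparison of the two time-frequency representations.
import numpy as np
import pywt
from scipy.signal import spectrogram

def time_frequency_views(x, fs):
    """Return an STFT spectrogram and a CWT scalogram of the same PCG segment."""
    # Short-time Fourier transform: fixed 50 ms window, hence fixed resolution.
    f_stft, t_stft, Sxx = spectrogram(x, fs=fs, nperseg=int(0.05 * fs))

    # Continuous wavelet transform: resolution adapts with scale/frequency.
    scales = np.arange(1, 128)
    coefs, f_cwt = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)
    scalogram = np.abs(coefs) ** 2

    return (f_stft, t_stft, Sxx), (f_cwt, scalogram)
```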

    Because MFCC is bound to the Fourier transform, it cannot achieve full accuracy in extracting features from, or reducing the dimensionality of, PCG signals. Fig. 1 shows the difference between wavelet-based and cosine-transform-based feature extraction, and illustrates the behaviour of the Fourier and cosine transforms in the presence of noise.

    MFCC therefore lacks accuracy for the non-stationary heart sound.

    Fig.1. Feature Extraction by Wavelet and Cosine Transform

    Features that are extracted and selected using the DWT and then subjected to spectral feature extraction are likely to give good results for the analysis of PCG signals, as depicted in Fig. 2.

    Fig. 2. Feature Extraction by Spectrum and MFCC

  4. CONCLUSION

To produce an accurate representation of a phenomenon, it is sometimes necessary to measure it from several perspectives, and spectral features reveal such representations. To make the most of them, we need powerful tools, and new research challenges remain in overcoming the shortcomings of current feature extraction tools.

REFERENCES

  1. Huy Dat Tran, Yi Ren Leng, Haizhou Li, "Feature Integration for Heart Sound Biometrics."

  2. Francesco Beritelli, Andrea Spadaccini, "Human Identity Verification Based on Heart Sounds: Recent Advances and Future Directions."

  3. Guy Amit, "Heart Sound Analysis: Theory, Techniques and Applications."

  4. Nashwa El-Bendary, Hameed Al-Qaheri, Hossam M. Zawbaa, Mohamed Hamed, Aboul Ella Hassanien, Qiangfu Zhao, Ajith Abraham, "HSAS: Heart Sound Authentication System."
