A Novel Approach to Person Identification using Biometrics

DOI: 10.17577/IJERTCONV3IS20067


Dolly Reney, Electronics & Telecomm Engg,

Christian College of Engg & Technology,

Bhilai, Chhattisgarh, India

Dr. Neeta Tripathi, Electronics & Telecomm Engg,

Shri Shankracharya Engg College, Bhilai, Chhattisgarh, India

Abstract - The present study explores a novel approach that expresses the relationship between facial feature parameters and speech acoustic parameters across the six universally defined basic emotional states for person identification. A comprehensive database has been prepared for the analysis. Facial features such as area, edge count, major axis, minor axis, and eccentricity, together with speech acoustic parameters, mainly MFCCs, are extracted by feature extraction methods. Finally, relations are developed between the facial feature parameters and the acoustic parameters, and the person is identified through his or her emotions.

Keywords - Feature Extraction, MFCC, Emotion Recognition.

  1. INTRODUCTION

    Person identification through emotion is the process of identifying a person through his or her emotions. Emotions are a complex state of feeling that results in physical and psychological changes that influence our behavior [1]. Those acting primarily on emotion may seem as if they are not thinking, but cognition is an important aspect of emotion, particularly the interpretation of events. For example, the experience of fear usually occurs in response to a threat. The cognition of danger and the subsequent arousal of the nervous system (e.g., rapid heartbeat and breathing, sweating, muscle tension) are integral to the subsequent interpretation and labeling of that arousal as an emotional state. Emotion is also linked to behavioral tendency: extroverted people are more likely to be social and express their emotions, while introverted people are more likely to be socially withdrawn and conceal their emotions. There are six basic emotions, and the relationships between these emotions result in positive and negative influences [2]. If we can estimate the emotional state of a person, it becomes easier to identify that person in any state, and many of the problems faced by conventional biometric systems are removed. The rest of the paper is organized as follows: Section 2 describes the flowchart, Section 3 presents the implementation techniques, Section 4 describes the results, and Section 5 states the conclusion.

  2. FLOWCHART DESCRIPTION

    For person identification, a comprehensive database has been developed, in which facial expressions and speech are recorded simultaneously. This database comprises recordings of sentences spoken by a number of male and female speakers in different emotional states, i.e., Neutral, Anger, Disgust, Fear, Happiness, Sadness, and Surprise.

    1. Parameterization

      Facial Feature

      A set of facial feature parameters has been derived from knowledge of facial anatomy and the MPEG-4 standard.

      Speech Feature

      The original speech signal, though optimal for human hearing and perception, contains redundant information. Eliminating such redundancies during preprocessing reduces the computational overhead and also improves system accuracy.

    2. Identification

    A compact set of speech and facial feature parameters is then used to identify the person with a neural network (a minimal training sketch follows Fig. 1).

    Fig. 1 Flowchart: Start → Record a video sequence in an emotional state → Separate speech from video → Parameterization → Analysis of facial features / Analysis of speech → Training through a neural or fuzzy classifier → Person identification → End
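    As a concrete illustration of the identification stage, the following is a minimal sketch assuming a scikit-learn MLP; the paper does not specify its network architecture, feature dimensions, or toolchain, so all of those choices below are illustrative stand-ins.

```python
# Minimal sketch of the identification stage; the architecture and data are
# illustrative assumptions, not the paper's actual configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Stand-in inputs: 18-dimensional facial+speech parameter vectors for 3 persons.
X = rng.normal(size=(60, 18))
y = np.repeat([0, 1, 2], 20)          # person identity labels

# One small hidden layer; a real system would tune this on held-out data.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```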

  3. IMPLEMENTATION TECHNIQUE

    The focus of this research is to adapt standard criteria from face detection and speech recognition to compute emotional values for person identification, using the Viola-Jones method for face detection and MFCC values for speech recognition, in order to obtain a measurable enhancement in performance. The Viola-Jones face detector combines three main ideas that make it possible to build a successful face detector able to run in real time: Haar-like features, classifier learning with AdaBoost [3][4][5][6][7], and the attentional cascade structure.

    Algorithm for the proposed method:

    Step 1: Create a database consisting of face images of persons along with their voice recordings in different emotions.

    Step 2: Detect the face using the Viola-Jones algorithm.

    This algorithm uses Haar-like features for face detection. A Haar-like feature considers adjacent rectangular regions, as shown in Fig. 2, at a specific location in a detection window, sums the pixel intensities in each region, and calculates the difference between these sums. This difference is then used to categorize subsections of an image.

    Fig. 2: Adjacent rectangular regions

    Fig. 3: How the search algorithm works: it starts with either a large or a small window and scans the image exhaustively
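    As a minimal sketch of this detection step, the following uses OpenCV's pretrained Haar cascade; the library choice, file names, and parameter values are assumptions, since the paper does not name its implementation.

```python
# Sketch of Viola-Jones (Haar cascade) face detection with OpenCV.
import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("subject.jpg")                 # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# scaleFactor controls the multi-scale window growth between passes;
# minNeighbors trades false positives against missed detections.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_roi = gray[y:y + h, x:x + w]             # crop for later feature steps
    cv2.imwrite("face_roi.png", face_roi)
```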

    Step 3: Apply edge detection to the image.

    Once face detection with Viola-Jones is complete, an edge detection method is applied to find the edges in the intensity image. The Canny method is used here, as it finds edges by looking for local maxima of the gradient of the image, where the gradient is calculated using the derivative of a Gaussian filter.
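    A minimal sketch of this step, again assuming OpenCV and the face crop saved in the previous sketch; the blur kernel and hysteresis thresholds are illustrative values.

```python
# Sketch of Canny edge detection on the detected face region.
import cv2

face_roi = cv2.imread("face_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical crop

# Gaussian smoothing before gradient computation, as in the Canny pipeline.
blurred = cv2.GaussianBlur(face_roi, (5, 5), 1.4)   # 5x5 kernel, sigma 1.4
edges = cv2.Canny(blurred, 50, 150)                 # hysteresis thresholds

edge_count = int((edges > 0).sum())   # the "edge count" feature from the abstract
cv2.imwrite("face_edges.png", edges)
```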

    Step 4: Measure properties of the image.

    This step measures the properties of image regions (blob analysis). Because the region-measurement routine does not accept a binary image as its first input, the binary image is first converted into a label matrix by labeling the connected components in the 2-D binary image.
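    The following sketch uses scikit-image for the labeling and region measurement; the library and the choice of the largest region are assumptions, but the measured properties match the facial features listed in the abstract.

```python
# Sketch of blob analysis: label connected components, then measure regions.
import cv2
from skimage.measure import label, regionprops

edges = cv2.imread("face_edges.png", cv2.IMREAD_GRAYSCALE)  # hypothetical edge image

labels = label(edges > 0)        # convert the binary image to a label matrix
regions = regionprops(labels)    # measure properties of each labeled region

if regions:
    blob = max(regions, key=lambda r: r.area)    # take the largest region
    area = blob.area
    major_axis = blob.major_axis_length
    minor_axis = blob.minor_axis_length
    eccentricity = blob.eccentricity
```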

    Step 5: Analyze the person's voice sample.

      1. Recording

        Recording was done using an electret microphone in a partially sound-treated room with the PRAAT software, at a sampling rate of 16 kHz / 16 bit; the distance between the mouth and the microphone was kept at approximately 30 cm.

      2. Feature Extraction

    In the feature extraction process, two features are extracted: Mel Frequency Cepstral Coefficients (MFCC) and Mel Energy spectrum Dynamic Coefficients (MEDC). The MFCC technique makes use of two types of filters, i.e., linearly spaced filters and logarithmically spaced filters. O. Khalifa et al. [8] identified the main steps of the MFCC feature extraction process, shown in Fig. 4; Fig. 5 represents the MEDC feature extraction process.

    Fig. 4: Block diagram of the MFCC feature extraction process

    Fig. 5: Block diagram of the MEDC feature extraction process
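    As a minimal sketch of the MFCC step, the following assumes librosa; the file name and the choice of 13 coefficients are illustrative, since the paper specifies only the 16 kHz sampling rate.

```python
# Sketch of MFCC extraction from a recorded utterance.
import librosa

# 16 kHz matches the recording setup described in Step 5.
signal, sr = librosa.load("sample.wav", sr=16000)   # hypothetical recording

# 13 coefficients per frame is a common convention, not the paper's spec.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

mfcc_vector = mfcc.mean(axis=1)   # one fixed-length vector per utterance
```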

    Fig. 6: Result of database creation, showing the extracted image of the original

  4. EXPERIMENTAL RESULTS

    The result of face detection is shown in Fig. 6.


After KNN classification, the results are as shown in Fig. 7.

Fig. 7: KNN result
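A minimal sketch of the KNN classification step, assuming scikit-learn and stand-in feature vectors; neither the feature dimensionality nor the value of k is specified in the paper.

```python
# Sketch of person identification with a k-nearest-neighbors classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in data: 5 facial measures + 13 MFCCs per recording, 3 enrolled persons.
X_train = rng.normal(size=(30, 18))
y_train = np.repeat([0, 1, 2], 10)          # person identity labels

knn = KNeighborsClassifier(n_neighbors=3)   # k=3 is an illustrative choice
knn.fit(X_train, y_train)

probe = rng.normal(size=(1, 18))            # features of an unknown recording
print("identified person:", knn.predict(probe)[0])
```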

In a similar manner, other emotions can also be evaluated. Gabor magnitude features and KFA can also be computed on real image data, as shown above.

Here, we partition the data into three sets: the first represents the training data, the second the evaluation data (where hyperparameters such as the decision threshold are set), and the third the test data.

REFERENCES

  1. "Theories of Emotion". Psychology.about.com. 2013-09-13. Retrieved 201-11-11.

  2. Handel, Steven. "Classification of Emotions". Retrieved 30 April 2012.

  3. P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Computer Vision, vol. 57, no. 2, pp.137-154, 2004.

  4. C. Zhang and P. Viola, Multiple-Instance Pruning for Learning Efficient Cascade Detector, In Proc. Of Neural Information processing Systems , Dec. 2007.

  5. Y. Deng and G. Su, Face Detection Based on Fuzzy Cascade Classifier with Scale-invariant Features, Int. J. Information Technology, vol. 12, no. 5, 2006.

  6. R. Xiao, H. Zhu, H. Sun, and X. Tang, Dynamic Cascades for Face Detection, Int. Conf. Computer Vision, pp. 1-8, Oct. 2007.

  7. J. Wu, S. C. Brubaker, M. D. Mullin and J. M. Rehg, Fast Asymmetric Learning for Cascade Face Detection, TPAMI, vol.20. no.3, pp.369 382, 2008.

  8. O.Khalifa,S.Khan,M.R.Islam, M.Faizal and D.Dol, 2004.Text Independent Automatic Speaker Recognition.3rd International Conference on Electrical & Computer Engineering, Dhaka, Bangladesh, pp.561-564.

  9. Y.L. Lin and G. Wei, Speech emotion recognition based on HMM and SVM, proceeding of fourth International conference on Machine Learning and Cybernetics,Guangzhou, 18-21 August 2005.

  10. Struc V., Paveic, N.: The Complete Gabor-Fisher Classifier for Robust Face Recognition, EURASIP Advances in Signal Processing, vol. 2010, 26pages
