Facial Expression Recognition using Daubechies Wavelet PCA for Authentication

DOI : 10.17577/IJERTV3IS031801


S. Saranya 1, V. Saravanya 2, R. Balamurugan 3

PG Students, Department of ECE

Sri Krishna College of Engineering & Technology

Coimbatore, India

Abstract – This paper presents a method for detecting an unknown human face in input imagery and recognizing its facial expression. The objective is to move toward highly intelligent machines and robots that can interpret human emotional states. A facial expression recognition system must solve the following problems: detection and location of faces in a cluttered scene, image decomposition, facial feature extraction, and facial expression classification. The five universally accepted principal emotions to be recognized are Angry, Happy, Sad, Disgust and Surprise, along with Neutral. The ORL database is used to store the captured images. Principal Component Analysis (PCA) is combined with the Daubechies Wavelet Transform (DWT) for feature extraction to determine the principal emotions with high PSNR. Compared with the traditional use of PCA, the proposed Wavelet PCA method yields better recognition accuracy with reduced dimensionality and time complexity.

Keywords

Feature Extraction, Facial Expression Detection, Principal Component Analysis, Daubechies Wavelet Transform.

  1. INTRODUCTION

Facial expression is one of the most powerful, natural, and immediate means for human beings to communicate their emotions and intentions. It carries crucial information about the mental, emotional and even physical state of a conversational partner, and it is a desirable feature of next-generation human-computer interfaces. Computers that can recognize facial expressions and respond to human emotions accordingly enable better human-machine communication. Recognition of the facial expression in an input image requires three functions: locating a face in the image, decomposing the image with the Daubechies Wavelet Transform, and recognizing its expression. In recent years, much research has been done on machine recognition of human facial expressions. Conventional methods extract features of facial organs, such as the eyes and mouth, and recognize expressions from changes in their shapes or geometrical relationships under different facial expressions. In this work, the wavelet transform is used as an additional step in facial expression recognition to reduce dimensionality and improve recognition accuracy. One of the key remaining problems in face recognition is handling the variability in appearance due to changes in pose, expression, and lighting conditions.

The increasing progress of communication technology and computer science underlines the importance of facial expression in future human-machine interfaces and advanced communication, such as multimedia and low-bandwidth transmission of facial data. In human interaction, the articulation and perception of facial expressions form a communication channel in addition to voice. Face localization, decomposition and feature extraction are the major issues in automatic facial expression recognition.

  2. RELATED WORK

Bartlett explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow, and holistic spatial analysis such as independent component analysis, local feature analysis, and linear discriminant analysis. Donato compared several techniques, including optical flow, principal component analysis, independent component analysis, local feature analysis and Gabor wavelet representation, to recognize eight single action units and four action unit combinations using image sequences that were manually aligned and free of head motion. Lien describes a system that recognizes various action units based on dense flow, feature point tracking and edge extraction; the system includes three modules to extract feature information: dense-flow extraction using a wavelet motion model, facial feature tracking, and edge and line extraction. Fasel addresses the recognition of facial action units, i.e., the subtle changes of facial expressions, and emotion-specified expressions. An optimal facial feature extraction algorithm, the Canny edge detector, is applied to localize face images, and a hierarchical clustering-based scheme refines the search region of extracted, highly textured facial clusters. This line of work provides a fully automatic framework to analyze facial action units, the fundamental building blocks of facial expression enumerated in Paul Ekman's Facial Action Coding System (FACS). The action units examined include upper facial muscle movements such as inner eyebrow raise and eye widening, which combine to form facial expressions. In this paper, a new technique, Wavelet principal component analysis, is developed for image representation.

Lee and Kim proposed a method of expression-invariant face recognition that transforms an input face image with an arbitrary expression into its corresponding neutral facial expression image. To achieve expression invariance, the facial feature vector is first extracted from the input image using an Active Appearance Model (AAM). Next, the input facial feature vector is transformed into its corresponding neutral facial expression vector using direct or indirect facial expression transformation. Finally, expression-invariant face recognition is performed by distance-based matching techniques: nearest neighbour classification, linear discriminant analysis (LDA) and generalized discriminant analysis (GDA). Geetha et al. described a method for real-time face/head tracking and facial expression recognition. A face is located by extracting the head contour points using motion information; among the facial features, the eyes are the most prominent features used for determining the size of a face. The visual features are modelled using a support vector machine (SVM) for facial expression recognition. Sebe et al. experiment with different types of classifiers, such as k-Nearest Neighbour (kNN), Support Vector Machines (SVMs), Bayesian Networks and decision-tree-based classifiers, in their work on authentic facial expression analysis.

  3. FACIAL EXPRESSION DATABASE

In this work, the ORL database is used: a set of pictures taken at the Olivetti Research Laboratory. There are images of 30 different persons, with 10 images taken of each person. The series of 10 images presents variations in facial expression, in facial position (slightly rotated faces) and in other details such as glasses/no glasses. All photos were taken with the person in a frontal position against a dark background. The images have a resolution of 92x112 pixels, which is enough to implement pre-processing modules such as local filtering or local feature extraction. ORL is challenging because of the large number of individuals to identify relative to the small number of images per person (10), with 8 of each person's images used for training and 2 for testing. ORL has been used to benchmark many face identification systems. Figure 1 shows the database images considered for facial expression recognition.

  4. DAUBECHIES WAVELET TRANSFORM DECOMPOSITION

Wavelet transforms are a multi-resolution image decomposition tool that provides a variety of channels representing the image features in different frequency sub-bands at multiple scales, and wavelet analysis is a well-established technique for analyzing signals. When decomposition is performed, the approximation and detail components can be separated. The Daubechies wavelet (db2), decomposed up to five levels, is used here because these wavelets are real and continuous in nature and have the least root-mean-square (RMS) error compared to other wavelets. Daubechies wavelets are a family of orthogonal wavelets defining a discrete wavelet transform and characterized by a maximal number of vanishing moments for a given support.

Fig 1: The database images considered for Facial Expression recognition.

The 2D DWT decomposes the input image (X) into approximation coefficients (cA) and detail coefficients cH, cV and cD (horizontal, vertical and diagonal):

    [cA, cH, cV, cD] = dwt2 (X, wname) (1)

    [cA, cH, cV, cD] = dwt2 (X, Lo_D, Hi_D) (2)

In Equation (1), wname is the name of the wavelet used for decomposition. In Equation (2), Lo_D (decomposition low-pass filter) and Hi_D (decomposition high-pass filter) are the wavelet decomposition filters. This two-dimensional DWT decomposes the approximation coefficients at level j into four components: the approximation at level j+1 and the details in three orientations (horizontal, vertical, and diagonal).
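As a minimal illustration of the cA/cH/cV/cD split produced by dwt2, the following sketch implements one decomposition level in Python with NumPy rather than MATLAB, and uses the Haar wavelet instead of db2 purely for brevity (the sub-band structure is the same).

```python
import numpy as np

def haar_dwt2(X):
    """One-level 2D Haar DWT returning the approximation band cA and
    the detail sub-bands cH, cV, cD, analogous to MATLAB's dwt2."""
    X = np.asarray(X, dtype=float)
    s2 = np.sqrt(2.0)
    # filter along the rows (horizontal direction)
    lo = (X[:, 0::2] + X[:, 1::2]) / s2   # low-pass
    hi = (X[:, 0::2] - X[:, 1::2]) / s2   # high-pass
    # filter along the columns (vertical direction)
    cA = (lo[0::2, :] + lo[1::2, :]) / s2  # approximation (LL)
    cV = (lo[0::2, :] - lo[1::2, :]) / s2  # vertical detail
    cH = (hi[0::2, :] + hi[1::2, :]) / s2  # horizontal detail
    cD = (hi[0::2, :] - hi[1::2, :]) / s2  # diagonal detail
    return cA, cH, cV, cD

# A flat image puts all its energy in the approximation band.
cA, cH, cV, cD = haar_dwt2(np.ones((4, 4)))
print(float(cA[0, 0]))   # 2.0
print(float(cH.max()))   # 0.0
```

Because the transform is orthonormal, each level halves both image dimensions while preserving total energy, which is what makes the LL band a compact input for PCA.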

  5. EXPERIMENT

The block schematic of the facial expression recognition system is given in Figure 2. A program was developed in MATLAB to obtain the DWT of the images in the dataset. The input image forms the first stage of the face recognition module: a face image is passed as input to the system. The input image samples include non-uniform illumination effects, variable facial expressions, and face images with glasses. In the second phase of operation, the face image is transformed to an operationally compatible format: the image is resized to uniform dimensions, the image is decomposed, and the LL component is passed on for feature extraction. The feature extraction unit applies PCA to compute face features for recognition. These features are passed to the classifier unit, a kNN classifier, which classifies the given face query against the knowledge created from the available database. For the implementation of face recognition, real-time captured face data as well as the ORL database are used. During the training phase, when a new facial image is added to the system, its features are calculated and aligned to form the dataset. The Euclidean distance of the test face is compared with the known Euclidean distances of the database faces; the minimum difference between any pair symbolizes the closest match.

Fig 2: Block Diagram of Daubechies Wavelet PCA
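The PCA-plus-minimum-distance step above can be sketched as follows. This is a hedged illustration in Python/NumPy rather than the paper's MATLAB code; the array sizes, the 64-coefficient LL vectors, and the random data are made up for the example.

```python
import numpy as np

# Hypothetical stand-in for the training set: 8 faces, each reduced to
# a flattened 64-coefficient LL band by the wavelet stage.
rng = np.random.default_rng(0)
train = rng.random((8, 64))

mean = train.mean(axis=0)
Xc = train - mean                   # mean-centre before PCA
# PCA via SVD: rows of Vt are the principal axes (eigenfaces)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
W = Vt[:k]                          # keep the top-k components
feats = Xc @ W.T                    # projected training features

query = train[3]                    # a test face (here: a known sample)
qf = (query - mean) @ W.T
dists = np.linalg.norm(feats - qf, axis=1)
best = int(np.argmin(dists))        # minimum Euclidean distance = match
print(best)                         # 3
```

Since the query is training sample 3, its projected feature vector coincides with feats[3] and the nearest-neighbour search returns that index with distance zero.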

  6. RESULTS

The optimally designed Daubechies Wavelet Transform was tested on the training dataset, and the results obtained are excellent. The recognition rate obtained for all five principal emotions, namely Angry, Disgust, Happy, Sad and Surprise, along with Neutral, is higher than that of previous techniques.

    Table 1. Recognition Rates of Traditional PCA and PCA+DWT

The PSNR values for the different expressions are calculated and listed below.

Table 2. PSNR Values of Various Facial Expressions on Test Images

TEST IMAGE    FACIAL EXPRESSION    PSNR (dB)
Image001      Disgust              28.589
Image002      Neutral              24.636
Image003      Surprise             12.488
Image004      Happy                23.008
Image006      Sad                  26.684
Image007      Angry                25.2593
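For reference, PSNR is computed from the mean squared error between a reference image and a test image. A minimal sketch, assuming 8-bit imagery (peak value 255):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (assumes 8-bit range by default)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
# A uniform error of 10 grey levels gives MSE = 100.
print(round(psnr(ref, ref + 10.0), 2))   # 28.13
```

Higher PSNR indicates a smaller reconstruction error, so the values in Table 2 reflect how faithfully each expression image survives the decomposition stage.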


K (No. of Images)    PCA    PCA+DWT
5                    78%    81%
6                    84%    85%
7                    87%    89%
8                    89%    92%
9                    96%    97.4%

Fig 4: Graphical Representation of PSNR Values for different Facial Expressions

The elapsed time for PCA and DWT+PCA is calculated for each expression and tabulated in Table 3 below.

TEST IMAGE    ELAPSED TIME, DWT+PCA (s)    ELAPSED TIME, PCA (s)
Disgust       1.821304                     2.749260
Neutral       1.490201                     2.1430011
Surprise      1.329358                     2.186679
Happy         3.040614                     4.729815
Sad           1.677391                     2.184346
Angry         1.563060                     2.087045
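The elapsed-time gap in Table 3 comes from PCA operating on a smaller matrix after the wavelet stage. A hedged timing sketch (image counts and the one-level size reduction are illustrative; the paper's own timings were measured in MATLAB):

```python
import time
import numpy as np

rng = np.random.default_rng(1)
full = rng.random((10, 92 * 112))   # 10 flattened full-resolution faces
ll = rng.random((10, 46 * 56))      # 10 flattened LL-band faces (quarter size)

t0 = time.perf_counter()
np.linalg.svd(full - full.mean(axis=0), full_matrices=False)  # plain PCA
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
np.linalg.svd(ll - ll.mean(axis=0), full_matrices=False)      # DWT+PCA stage
t_ll = time.perf_counter() - t0
# The quarter-size LL band is expected to factorize faster, which is
# why DWT+PCA shows lower elapsed time than plain PCA in Table 3.
```

Exact timings depend on hardware and library builds, but the dimensionality reduction is the mechanism behind the trend the table reports.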


    Fig 3: Graphical Representation for Comparison of Recognition rate for PCA and PCA+DWT

Table 3. Comparison of PCA and PCA+DWT based on Elapsed Time of Various Facial Expressions on Test Images


Fig 5: Graphical Representation of Elapsed Time for different Facial Expressions

  7. CONCLUSION

This paper proposed PCA combined with the Daubechies Wavelet Transform for the classification of facial expressions. Expression classification for all principal emotions along with Neutral was achieved on the training dataset with high PSNR. The proposed algorithm was implemented on both real-time images and the ORL database. Each image is enhanced and localized, and its distinct features are extracted using PCA and DWT. The experimental results show that the algorithm can effectively distinguish different expressions from the identified features. Compared with existing hybrid methods, the proposed method has lower elapsed time and computational load, with a high recognition rate in both the training and recognition stages.

APPENDIX I

PCA - Principal Component Analysis
DWT - Daubechies Wavelet Transform
AAM - Active Appearance Model
GDA - Generalized Discriminant Analysis
SVM - Support Vector Machine
FACS - Facial Action Coding System
kNN - k-Nearest Neighbour
ORL - Olivetti Research Laboratory
RMS - Root Mean Square


REFERENCES

  1. Ghahari, A., Rakhshani Fatmehsari, Y., Zoroofi, R. A., 2009, "A Novel Clustering-Based Feature Extraction Method for an Automatic Facial Expression Analysis System", IEEE Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 1314-1317.

  2. Geetha, A., Palaniappan, B., Palanivel, S., Ramalingam, V., 2009, "Facial expression recognition: A real time approach", Expert Systems with Applications, Vol. 36, pp. 303-308.

  3. Liu, H., Shang, D., Song, F., Yang, J., 2008, "A highly scalable incremental facial feature extraction method", Neurocomputing.

  4. Lee, H. S., Kim, D., 2008, "Expression-invariant face recognition by facial expression transformations", Pattern Recognition, Vol. 39.

  5. Levine, M. D., Yu, Y., 2006, "Face recognition subject to variations in facial expression, illumination and pose using correlation filters", Computer Vision and Image Understanding, Vol. 104, pp. 1-15.

  6. Yang, J., Zhang, D., 2004, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 26, No. 1, pp. 131-137.

  7. Viola, P., Jones, M., 2004, "Robust real-time object detection", International Journal of Computer Vision, Vol. 57, No. 2, pp. 137-154.

  8. Cirelo, M., Cohen, I., Cozman, F., Huang, T., Sebe, N., 2004, "Semi-supervised learning of classifiers: Theory, algorithms, and applications to human-computer interaction", Vol. 26, No. 12, pp. 1553-1567.

  9. Kapoor, A., Picard, R. W., Qi, Y., 2003, "Fully Automatic Upper Facial Action Recognition", IEEE International Workshop on Analysis and Modeling of Faces and Gestures, p. 195.

  10. Fasel, B., Luettin, J., 2003, "Automatic facial expression analysis: A survey", Pattern Recognition, Vol. 36, pp. 259-275.

  11. Chen, Huang, T. S., Cohen, I., Garg, L., Sebe, N., 2003, "Facial expression recognition from video sequences: Temporal and static modeling", Computer Vision and Image Understanding, Vol. 91, pp. 160-187.

  12. Bourel, F., Chibelushi, C., Low, A. A., 2002, "Robust Facial Expression Recognition Using a State-Based Model of Spatially-Localized Facial Dynamics", Proc. Fifth IEEE Int. Conf. Automatic Face and Gesture Recognition, pp. 106-111.

  13. Cootes, T., Kittipanya-ngam, P., 2002, "Comparing variations on the active appearance model algorithm", BMVC, pp. 837-846.

  14. Cootes, T., Edwards, G., Taylor, C., 2001, "Active appearance models", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 23, No. 6, pp. 681-685.

  15. Cohn, J. F., Tian, Y., Kanade, T., 2001, "Recognizing action units for facial expression analysis", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 23, No. 2.

  16. Cohn, J., Kanade, T., Lien, J., 2000, "Detection, tracking and classification of action units in facial expression", Robotics and Autonomous Systems, Vol. 31, pp. 131-146.

  17. Jain, A. K., Duin, R. P. W., Mao, J., 2000, "Statistical Pattern Recognition: A Review", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, pp. 4-37.

  18. Pantic, M., Rothkrantz, L., 2000, "Automatic analysis of facial expressions: The state of the art", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445.

