A New Face Recognition Technique Based on Rotated Principal Components

DOI : 10.17577/IJERTCONV3IS20037


Garima
Scholar, Computer Science Engg Department, LNCT Jabalpur, M.P.

Naazish Rahim
Assistant Professor, Computer Science Engg Department, LNCT Jabalpur, M.P.

Sujit Tiwari
Assistant Professor, Computer Science Engg Department, LNCT Jabalpur, M.P.

Abstract:- Biometrics are methods to verify or identify individuals using their physiological or behavioural characteristics. Biometrics has areas of application such as human-computer interfaces, security systems, the banking sector, network security, database management, office and building access, e-commerce, teleconferencing, etc. Face recognition is the most widely used biometric as it is user friendly and easily accessible. In this paper, rotated two-dimensional principal component analysis is used for face recognition. The AR face database is used.

      Keywords:- Face Recognition, PCA, Rotated Principal Components

      1. INTRODUCTION

Physiological characteristics used in biometrics include the face, fingerprint, hand, palm, iris, retina and ear, while behavioural characteristics include signature, keystroke and voice. A face recognition system is expected to identify faces present in images and videos automatically. It can operate in either or both of two modes: face verification (or authentication), which involves a one-to-one match, and face identification (or recognition), which involves a one-to-many match.

Face recognition is the most widely used among biometric systems. FR systems can be used in access control, surveillance systems, human-machine interfaces and multimedia communication. FR methods can be divided into two categories based on face representation:

1. Appearance based method: uses holistic texture features and is applied either to the whole face or to specific regions in a face image.

2. Feature based method: uses geometric facial features such as the mouth, eyes, brows and cheeks, and the geometric relationships between them.

The importance of a face recognition system is that it does not require the cooperation of individuals, while other biometric systems need such support. It can work in either or both of two modes:

1. Face verification (or authentication): involves a one-to-one match.

2. Face identification (or recognition): involves a one-to-many match.

The block diagram of a face recognition system is shown in Fig. 1. It consists of four components.

          Fig. 1 Block Diagram of Face Recognition System

In the block diagram of the FR system, an image or video is first acquired from a camera and passed to the face detection block, which locates the face in the input image. After detection, faces are aligned centrally so that features can be extracted accurately by the feature extraction block; processing such as rotation, resizing, cropping, compression and normalization is applied to the face. Finally, the extracted features are matched against the database to give authentication.

The two main modes of face recognition are identification and authentication. In verification, the face recognition system is given a face image together with a claimed identity, and it is expected to either accept or reject the claim. In the identification problem, the system is trained with images of known individuals and, given a test image, decides which individual the test image belongs to.

      2. LITERATURE SURVEY

Qi Zhu et al. [2012] used directional 2DPCA (D2DPCA), which can extract features from matrices in any direction. 2DPCA can be seen as row-based PCA and only reflects the information in each row, so some structural information cannot be uncovered by it; D2DPCA addresses this by extracting features from the matrices in any direction. D2DPCA rotates the sample matrix by a certain angle and performs 2DPCA on the rotated matrices, which is equivalent to performing 2DPCA in the corresponding direction. Experiments give a recognition rate of 60.47% over six directions on the AR database and 57.60% on the FERET dataset.

Sang-Heon Lee et al. [2012] proposed an illumination-robust face recognition system using a fusion approach based on an efficient facial feature called differential two-dimensional principal component analysis (D2D-PCA). Face images are divided into two sub-images to minimize illumination effects, and D2D-PCA is then applied separately to each sub-image. The individual matching scores obtained from the two sub-images are integrated using a weighted-summation operation, and the fused score is used to recognize the individual. The method achieves its best recognition rate of 95.59% on the Extended Yale face database B.

Yue Zeng et al. [2011] designed a face recognition algorithm based on a variation of 2DPCA (V2DPCA) which makes the most of the discriminant information in the covariance and uses fewer coefficients to represent an image. Compared with PCA, the two-dimensional PCA method (2DPCA) is a more efficient technique for dealing with 2D images. Experiments on the ORL and Yale face databases show improvement in both recognition accuracy and recognition time over the original 2DPCA.

Wankou Yang et al. [2011] describe Sequential Row-Column 2DPCA (RC2DPCA), in which 2DPCA is operated in the row direction and then an alternative 2DPCA is operated in the column direction. RC2DPCA can compress an image in both the row and column directions. Experiments give a recognition rate of 96.65% on the ORL database and 77.25% on the FERET database.

Wankou Yang et al. [2010] extended 2DPCA and Bi-directional PCA (BDPCA) to a non-Euclidean space, i.e. the Laplacian BDPCA (LBDPCA) method, to improve the robustness of 2DPCA and BDPCA. Experiments were performed on the FERET and ORL databases; the recognition rate of LBDPCA is 79.83% for FERET and 97.50% for ORL.

      3. PROBLEM IDENTIFICATION

Face and Head Rotation:- It is a very difficult task to extract features from a face and head rotated at different angles and positions; the rotation can be in-plane or out-of-plane. Head rotation, clockwise or counter-clockwise, affects the performance of the recognition system. Variation in the distance of the face from the camera also affects the results.

Illumination Effect:- Illumination variation is a very big problem, as a face appears totally different under different lighting conditions. When one side of the face receives more light than the other, one side becomes bright and the other dark, so it is difficult to recognize the face in a dark room. Indeed, varying the direction of illumination can result in larger image differences than varying the identity or the viewpoint of a face.

Facial Expression:- Facial expression directly affects the appearance of the face in the image. Expressions such as smiling, crying, anger and sadness change its shape and size.

Aging:- It is very difficult to recognize images of the same person at varying ages.

Occlusion:- Varying hairstyles, glasses, sunglasses, scarves and makeup can occlude the facial features.

Size of image:- An image of size 50×50 may be hard to classify if the original image of that class was 500×500. The size of an image captured by a camera also varies from person to person.

Background Change:- A cluttered background affects the accuracy of face detection, and face patches that include this background also diminish the performance of face recognition algorithms.

Speed:- A large database is required for higher accuracy and efficiency, and speed varies with different databases. Speed also depends on the processor and software used. So, to reduce the cost of the system, the speed of correct recognition always needs to be increased.

      4. METHODOLOGY

Principal component analysis (PCA):- This method is also known as the eigenface method. PCA is a widely used dimensionality reduction technique for calculating eigen features. PCA treats face images as 2D data and classifies them by projecting them onto the eigenface space, which is composed of the eigenvectors obtained from the variance of the face images. The term eigenface derives from the German prefix "eigen", meaning "own" or "individual". Recognition is based on the correlation between test and training face images; eigenfaces are used because mathematical algorithms using eigenvectors represent the primary components of the face. PCA tries to generalize the input data to extract the features.
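As a rough illustration of the eigenface computation described above, the following NumPy sketch builds an eigenface basis from a stack of training images using the standard small-covariance trick; the function names, the choice of d and the array layout are illustrative assumptions, not details taken from the paper.

```python
# Minimal eigenface (PCA) sketch with NumPy -- illustrative only.
import numpy as np

def eigenfaces(train_imgs, d=20):
    """train_imgs: array of shape (N, h, w); returns mean face, eigenface basis, projections."""
    N, h, w = train_imgs.shape
    A = train_imgs.reshape(N, h * w).astype(float)   # each row is a vectorized face
    psi = A.mean(axis=0)                             # mean face
    Phi = A - psi                                    # mean-subtracted faces (the "variance")
    # Eigen-decompose the small N x N matrix Phi Phi^T instead of the huge
    # (h*w) x (h*w) covariance -- the standard eigenface trick.
    L = Phi @ Phi.T / N
    vals, vecs = np.linalg.eigh(L)
    order = np.argsort(vals)[::-1][:d]               # keep the d largest eigenvalues
    U = Phi.T @ vecs[:, order]                       # eigenfaces
    U /= np.linalg.norm(U, axis=0)                   # normalize each eigenface
    Omega = Phi @ U                                  # projections of the training faces
    return psi, U, Omega

def project(test_img, psi, U):
    """Project a single (h, w) test image onto the eigenface basis."""
    phi_t = test_img.reshape(-1).astype(float) - psi
    return phi_t @ U
```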

        Fig 2 Flow chart of Proposed Method

The face recognition system adopted in this work has six steps, shown in Fig. 2:

1. Face Image Database Acquisition

2. Feature Extraction

3. Classification

4. Normalization Technique

5. Fusion Technique

6. Recognition

1. Face Image Database Acquisition

Step 1. Take a face image as input from the database.
Step 2. Convert the colour image into a greyscale image.
Step 3. Resize or crop the image.
Step 4. Divide the image database into two parts:
a) Training images
b) Testing images
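The acquisition steps above can be sketched as follows, assuming a simple folder-per-person directory layout, Pillow for image loading, a 64×64 working size and a half/half train-test split; all of these are illustrative assumptions rather than details fixed by the paper.

```python
# Sketch of database acquisition: load, convert to greyscale, resize, split.
import os
import numpy as np
from PIL import Image

def load_face_database(root, size=(64, 64)):
    """Returns (train, test) lists of (image_array, person_id) pairs."""
    train, test = [], []
    for person in sorted(os.listdir(root)):
        files = sorted(os.listdir(os.path.join(root, person)))
        for k, fname in enumerate(files):
            img = Image.open(os.path.join(root, person, fname)).convert("L")  # greyscale
            img = img.resize(size)                                            # resize/crop
            arr = np.asarray(img, dtype=float)
            # first half of each person's images for training, the rest for testing
            (train if k < len(files) // 2 else test).append((arr, person))
    return train, test
```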

2. Feature Extraction

Rotated 2DPCA is used for feature extraction. Each face image is taken as a two-dimensional matrix and rotated in six different directions. The rotated images are appended to form page-wise arrays.

Training images:- The following steps extract the features from the face image matrices.

Step 1:- Calculate the mean. All facial images in the training database are cropped; these images are then rotated and appended as page-wise arrays. The mean of each array is calculated as

Ψ = (1/N) Σ_{i=1}^{N} A_i   (1)

where i = 1, 2, 3, …, N and N is the number of training images.

Step 2:- Calculate the variance. The training images in each rotated image database are subtracted from their respective mean image to form the variance:

Φ_i = A_i − Ψ   (2)

Step 3:- Calculate the covariance. The covariance matrix is the product of the variance matrix with its transpose, and the covariance matrices of all facial images are added:

X = (1/N) Σ_{i=1}^{N} Φ_i Φ_i^T   (3)

Step 4:- Calculate the eigenvalues and eigenvectors. They satisfy

X ν_i = λ_i ν_i   (4)

Starting from Φ^T Φ u_i = λ_i u_i   (5)

and multiplying by Φ gives Φ Φ^T (Φ u_i) = λ_i (Φ u_i)   (6)

i.e. X (Φ u_i) = λ_i (Φ u_i)   (7)

Hence ν_i = Φ u_i is one of the eigenvectors of X. The eigenvectors corresponding to the highest eigenvalues are selected, and the eigenface matrix is calculated as the product of the variance of each face image with the d highest eigenvectors:

U = Φ · V   (8)

where V contains the d selected eigenvectors.

Step 5:- Calculate the projection matrices. Each training face is projected onto the eigenface matrix:

Ω_i = U^T · Φ_i   (9)

Testing image:- The facial image under test is cropped and then rotated. The cropped and rotated test face image is subtracted from the mean image of the database:

Φ_t = A_t − Ψ   (10)

The projected test image for each rotated direction is then calculated from the respective eigenface matrix:

Ω_t = U^T · Φ_t   (11)
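A minimal sketch of this rotated feature extraction is given below: each training face is rotated to the six directions and a separate eigen basis is built per direction, reusing the eigenfaces() and project() helpers from the earlier sketch. The use of scipy.ndimage.rotate and the (N, h, w) array layout are assumptions made for illustration.

```python
# Sketch of rotated feature extraction across the six directions.
import numpy as np
from scipy.ndimage import rotate

ANGLES = [0, 10, 20, 30, 40, 50]   # the six rotation directions used in the paper

def build_rotated_models(train_imgs, d=20):
    """train_imgs: (N, h, w) array; returns one (psi, U, Omega) model per angle."""
    models = {}
    for angle in ANGLES:
        rotated = np.stack([rotate(img, angle, reshape=False) for img in train_imgs])
        models[angle] = eigenfaces(rotated, d)   # mean, basis and projections, eqs. (1)-(9)
    return models

def project_test(test_img, models):
    """Projected test face per rotation direction, eqs. (10)-(11)."""
    return {angle: project(rotate(test_img, angle, reshape=False), psi, U)
            for angle, (psi, U, Omega) in models.items()}
```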

3. Classification

The Euclidean distance between the projected test image and each projected training image is used to calculate the matching score. It is given by

d_k = Σ_{i=1}^{m} Σ_{j=1}^{d} (Ω_k(i, j) − Ω_t(i, j))^2   (12)

where Ω_k is the projection of the kth training image and Ω_t is the projection of the test image.
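A one-function sketch of this matching step, assuming the projection layout used in the earlier sketches (one row per training face):

```python
# Squared Euclidean distance between the projected test face and every
# projected training face, as in eq. (12).
import numpy as np

def euclidean_scores(Omega, omega_t):
    """Omega: (N, d) projected training faces; omega_t: (d,) projected test face."""
    diff = Omega - omega_t            # broadcast over the N training faces
    return np.sum(diff ** 2, axis=1)  # d_k for k = 1..N
```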

4. Normalization Techniques

Sigmoid function: by normalization, the distance scores of each rotated face image are mapped between 0 and 1. The sigmoid function is used for normalization in this technique:

d_k,new = 1 / (1 + exp{−d′_k})   (13)

d′_k = (d_k − µ_k) / (2σ_k)   (14)

where d_k,new is the normalized score, d_k is the raw distance score, and µ_k and σ_k are the mean and standard deviation of the scores of the kth rotated face image.
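A small sketch of this normalization, following eqs. (13)-(14) as written above; taking the per-direction mean and standard deviation directly over the raw score vector is an assumption for illustration.

```python
# Sigmoid normalization of raw distance scores into the (0, 1) range.
import numpy as np

def sigmoid_normalize(d):
    """d: (N,) raw distance scores for one rotated face image database."""
    d_prime = (d - d.mean()) / (2.0 * d.std())   # eq. (14)
    return 1.0 / (1.0 + np.exp(-d_prime))        # eq. (13)
```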

5. Fusion Technique

Feature level fusion refers to combining different feature vectors. Here fusion is performed by the weighted summation method, which is a feature level (pre-classification) fusion. It is given by

d_k = w1 · d_k,new + w2 · d_k,new   (15)

where the values of the weights are selected such that w1 + w2 = 1.

6. Recognition

The minimum value of the fused score is found:

Output = min(d_k)   (16)

Its location identifies the facial image under test.
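The fusion and recognition steps can be sketched together as follows; the two-score setup and the equal default weights are illustrative assumptions, not values given in the paper.

```python
# Weighted-summation fusion of two normalized score vectors (eq. (15)) followed
# by the minimum-score decision (eq. (16)).
import numpy as np

def fuse_and_recognize(scores_a, scores_b, w1=0.5, w2=0.5):
    """scores_a, scores_b: (N,) normalized scores; returns the index of the best match."""
    assert abs(w1 + w2 - 1.0) < 1e-9        # the weights must sum to 1
    fused = w1 * scores_a + w2 * scores_b   # eq. (15)
    return int(np.argmin(fused))            # eq. (16): location of the minimum
```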

      5. RESULT

The proposed rotated 2DPCA method is applied to the AR face database, which contains 2600 facial images: 100 people with 26 images each. 13 images of each person are used for training and 13 for testing. The training database is created by rotating the images in six directions, 0°, 10°, 20°, 30°, 40° and 50°, to extract the features.

        Fig 3. Rotated face images in 6 directions

A database of these images is created and then used for the final matching with the image under test.

Fig. 4 (a) Face image under test (b) Result image after test

It is found that good results are obtained for individual matching. The method can be applied to the complete database of 1300 test images, and results can be obtained as recognition rates with varying numbers of principal components.
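A possible evaluation loop, combining the earlier sketches to report a recognition rate for a chosen number of principal components, is shown below; an equal-weight average across the six directions stands in for the weighted fusion of eq. (15), and all helper names are the illustrative assumptions introduced earlier.

```python
# Sketch of an evaluation loop: fraction of test faces whose nearest fused
# match in the training set has the correct identity.
import numpy as np

def recognition_rate(train_imgs, train_labels, test_imgs, test_labels, d=20):
    models = build_rotated_models(train_imgs, d)
    correct = 0
    for img, label in zip(test_imgs, test_labels):
        per_angle = project_test(img, models)
        # score every training face in each direction, normalize, then average
        fused = np.mean([sigmoid_normalize(euclidean_scores(models[a][2], per_angle[a]))
                         for a in per_angle], axis=0)
        if train_labels[int(np.argmin(fused))] == label:
            correct += 1
    return correct / len(test_labels)
```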

      6. CONCLUSION

The proposed rotated two-dimensional principal component analysis method is applied to the AR face database. It can also be applied to other databases such as the ORL and FERET databases, and an in-house database can be prepared. The method can be applied to images under varying conditions, and results can be obtained for these databases with varying numbers of principal components.

REFERENCES

  1. Anil K. Jain, Arun Ross and Salil Prabhakar, An Introduction to Biometric Recognition, IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Image- and Video-Based Biometrics, Vol. 14, No. 1, January 2004.

2. Yong Zhang, Christine McCullough, John R. Sullins and Christine R. Ross, Hand-Drawn Face Sketch Recognition by Humans and a PCA-Based Algorithm for Forensic Applications, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 40, No. 3, May 2010.

3. W. Zhao, R. Chellappa, P. J. Phillips and A. Rosenfeld, Face Recognition: A Literature Survey, ACM Computing Surveys, Vol. 35, pp. 399-458, December 2003.

4. Patil A.M., Kolhe S.R. and Patil P.M., 2D Face Recognition Techniques: A Survey, Bioinfo Publications, International Journal of Machine Intelligence, ISSN: 0975-2927, Volume 2, Issue 1, 2010.

  5. M. Turk and A. Pentland, Eigenfaces for Face Detection/Recognition, Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

  6. Qi Zhu and Yong Xu, Multi-directional two-dimensional PCA with matching score level fusion for face recognition, Springer-Verlag London Limited 31 January 2012.

  7. Sang-Heon Lee, Dong-Ju Kim and Jin-Ho Cho, Illumination-Robust Face Recognition System Based on Differential Components, IEEE Transactions on Consumer Electronics, Vol. 58, No. 3, August 2012.

  8. Yue Zeng, Dazheng Feng, Li Xiong, An Algorithm of Face Recognition Based on the Variation of 2DPCA, Journal of Computational Information Systems, Page No. 303-310, January, 2011.

9. B.G. Vijay Kumar and R. Aravind, Computationally efficient algorithm for face super-resolution using (2D)2-PCA based prior, The Institution of Engineering and Technology Image Processing, Vol. 4, Iss. 2, pp. 61-69, 2010.

10. Yanwei Pang, Dacheng Tao, Yuan Yuan and Xuelong Li, Binary Two-Dimensional PCA, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 38, No. 4, August 2008.

11. Jian Yang and Chengjun Liu, Horizontal and Vertical 2DPCA-Based Discriminant Analysis for Face Verification on a Large-Scale Database, IEEE Transactions on Information Forensics and Security, Vol. 2, No. 4, December 2007.

  12. Jian Yang, David Zhang, Alejandro F. Frangi, and Jing-yu Yang, Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition, IEEE Transactions On Pattern Analysis And Machine Intelligence, Vol. 26, No. 1, January 2004

13. Matthew A. Turk and Alex P. Pentland, Face Recognition Using Eigenfaces, IEEE, 1991.
