Improved Face Recognition By Combining LDA And PCA Techniques

DOI : 10.17577/IJERTV2IS60902




Sukanya Roychowdhury, Lecturer, Information Technology, Pillais Institute of Information Technology, New Panvel, Navi Mumbai, India.

Sharvari Govilkar, H.O.D., Computer Engineering Department, Pillais Institute of Information Technology, New Panvel, Navi Mumbai, India.

Abstract

Any face recognition system works best under ideal conditions but can be very sensitive in real-world conditions. We present a methodology for improving the robustness and accuracy of a face recognition system based on the combination of the PCA and LDA face representation techniques. We show that the combination of PCA and LDA outperforms the best individual face recognition algorithm based on PCA or LDA alone.

  1. Introduction

    Machine recognition of human faces from still and video images has become an active research area in the image processing, pattern recognition, neural network and computer vision communities. One of the most remarkable abilities of human vision is face recognition. It develops over several years of childhood, is important for several aspects of our social life and, together with related abilities such as estimating the expressions of people with whom we interact, has played an important role in the course of evolution. The problem of face recognition was considered in the early stages of computer vision and is now undergoing a revival after nearly twenty years, with many specific techniques proposed recently. This interest is motivated by a wide range of applications, from static matching of controlled-format photographs, as in passports, credit cards, driving licenses, mug shots, access control systems, model-based video coding, criminal identification and authentication in secure systems such as computers or bank teller machines, to real-time matching of surveillance video images, each presenting different constraints in terms of processing requirements.

    Although many face recognition approaches have been developed, it is still difficult to design an automatic system for the task because, in the real world, illumination, background, viewing angle and facial expression are highly variable. Several methods have been proposed for face detection, including graph matching, neural networks and geometric-feature-based approaches. Yet, although researchers in psychology, the neural sciences, engineering, image processing and computer vision have investigated a number of issues related to face recognition by human beings and machines, it remains difficult to design an automatic system for this task, especially when real-time identification is required. The reasons for this difficulty are two-fold:

    (1) face images are highly variable, and (2) the sources of variability include individual appearance, three-dimensional (3-D) pose, facial expression, facial hair, makeup and so on, and these factors change from time to time. Furthermore, the lighting, background, scale and acquisition parameters are all variable in facial images acquired under real-world conditions. The variations between images of the same face due to illumination and viewing direction are almost always larger than the variations due to a change of face identity. This makes face recognition a highly challenging problem.

      1. Face recognition issues

        There are mainly two issues central to face recognition:

        1. What features can be used to represent a face under environmental changes?

        2. How to classify a new face image based on the chosen representation?

        For the first issue, many successful face detection and feature extraction paradigms have been developed. Frequently used approaches rely on geometrical features, where the relative positions and shapes of different facial features are measured. At the same time, several paradigms use a global representation of a face, where all features are automatically extracted from the input facial image. It has been reported that algorithms based on a global encoding of the face are fast in recognition. Singular value decomposition (SVD) of a matrix has also been used to extract features from the patterns: it has been shown that the singular values of an image are stable and represent its algebraic attributes, being intrinsic but not necessarily visible.
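        As a toy illustration of this algebraic-feature idea (not the specific method used in the works referred to above), the singular values of an image matrix can be computed directly; the image size and the number of retained values below are arbitrary placeholders.

```python
import numpy as np

# Toy grayscale "face" image; in practice this would be a cropped face image.
rng = np.random.default_rng(0)
image = rng.random((112, 92))

# Singular value decomposition: image = U @ diag(s) @ Vt.
# The vector of singular values s is the kind of algebraic attribute
# described above: it is stable under small perturbations of the image.
U, s, Vt = np.linalg.svd(image, full_matrices=False)

feature_vector = s[:20]   # keep the 20 largest singular values as features
print(feature_vector.shape)
```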

      2. Face recognition system

        Many face recognition systems have been proposed in recent years. Each of them is based on a particular representation of a face. To the best of our knowledge, we can identify two kinds of approaches: the appearance-based approaches, in which the face image is viewed as a feature vector, and the structural approaches, in which a deformable model such as a graph is used for face representation.

        Methods of the first kind try to reduce the dimensionality of the original face space with respect to a certain cost criterion; feature reduction is performed by applying standard pattern recognition algorithms. The best-known approach is the PCA representation or eigenface [1] approach, proposed by Turk and Pentland: the face image is projected into a space in which the correlation among the components is zero. This space transformation is called the Karhunen-Loeve transform. Another appearance-based approach is the LDA [3] representation or fisherface approach, proposed by Kriegman et al.: the face image is projected into the Fisher space, in which the variability among face-vectors of the same class is minimized and the variability among face-vectors of different classes is maximized.
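        The eigenface (Karhunen-Loeve) representation described above can be sketched with a few lines of NumPy; the data matrix, image size and number of retained eigenfaces below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# X: (n_samples, n_pixels) matrix of flattened face images; random data
# stands in for a real face database here, with small 32x32 images.
rng = np.random.default_rng(1)
X = rng.random((200, 1024))
X_centred = X - X.mean(axis=0)

# Karhunen-Loeve transform: the eigenvectors of the sample covariance
# matrix (the "eigenfaces") span a space in which the projected
# components are uncorrelated.
cov = np.cov(X_centred, rowvar=False)      # (1024, 1024)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
eigenfaces = eigvecs[:, ::-1][:, :50]      # 50 leading eigenfaces

coefficients = X_centred @ eigenfaces      # (200, 50) PCA representation
print(coefficients.shape)
```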

      3. Approach

    Although many approaches for face recognition have been proposed in recent years, none of them can overcome the main problem of this kind of biometrics: the huge variability of many environmental parameters (lighting, pose, scale). Hence, face recognition systems often achieve good results only at the expense of robustness.

    In computerized face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces. In this work we describe a methodology for improving the robustness of a face recognition system based on the fusion of two well-known statistical representations of a face: PCA and LDA. Experimental results confirm the benefits of fusing PCA and LDA.
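    A minimal sketch of this PCA-plus-LDA feature reduction followed by classification, using scikit-learn; the placeholder data, the component counts and the 1-nearest-neighbour classifier are assumptions for illustration rather than the exact configuration used in the experiments.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

# Placeholder face data: 40 identities, 10 flattened 32x32 images each.
rng = np.random.default_rng(2)
X = rng.random((400, 1024))
y = np.repeat(np.arange(40), 10)

# PCA (eigenfaces) reduces the raw pixel features, LDA (Fisher faces)
# builds the discriminant templates, and a nearest-neighbour classifier
# assigns identities in the reduced space.
model = make_pipeline(
    PCA(n_components=60),
    LinearDiscriminantAnalysis(n_components=39),   # at most C - 1 = 39 dims
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X[::2], y[::2])              # every other image for training
print(model.score(X[1::2], y[1::2]))   # accuracy on the held-out half
```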

  2. Combining LDA and PCA

    1. Introduction

      Many works have analyzed the differences between these two techniques, but no work has investigated the possibility of fusing them. In our opinion, the apparently strong correlation between LDA and PCA, especially when frontal views are used and PCA is applied before LDA, has discouraged the fusion of such algorithms. However, it should be noted that LDA and PCA are not as correlated as one might think, since the LDA transformation applied to the principal components can generate a feature space significantly different from the PCA one. Therefore, the fusion of LDA and PCA [4] for face recognition and verification is worthy of theoretical and experimental investigation.

    2. Methodology

The fusion of PCA and LDA is composed of the following steps:

– the face is represented according to the PCA and the LDA approaches;

– the distance vectors dPCA and dLDA from all the N faces in the database are computed (a sketch of this step is given below);

– for the final decision, these two vectors are combined according to a given combination rule.

There are two algorithms for the fusion phase: the K-Nearest Neighbor and the Nearest Mean.
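A small sketch of the distance-vector step, assuming the PCA and LDA coefficients of the N gallery faces and of the probe face are already available; the Euclidean distance is used as one simple choice of metric, and all variable names and dimensions are illustrative.

```python
import numpy as np

# Assumed inputs: pca_db and lda_db hold the PCA and LDA coefficients of
# the N database faces; pca_probe and lda_probe are the coefficients of
# the probe face in the same two spaces.
rng = np.random.default_rng(3)
N = 200
pca_db, lda_db = rng.random((N, 50)), rng.random((N, 39))
pca_probe, lda_probe = rng.random(50), rng.random(39)

# One distance vector per representation: the Euclidean distance from
# the probe to every face in the database.
d_pca = np.linalg.norm(pca_db - pca_probe, axis=1)   # shape (N,)
d_lda = np.linalg.norm(lda_db - lda_probe, axis=1)   # shape (N,)
```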

Figure 2.1 Fusion Methodology

There are two kinds of approaches:

  1. The K-Nearest Neighbor approach (KNN) and

  2. The Nearest Mean approach (NM).

    First, we normalize the distance vectors dPCA and dLDA so that their values lie in the interval [0, 1]. The second step is to compute a combined distance vector d that contains both PCA and LDA information. To this aim, we followed two ways.

    In the first way, the combined distance vector is obtained by computing the mean of the two normalized vectors:

    d = (dPCA + dLDA) / 2 (1)

    In the second way, the combined distance vector is obtained by appending the dLDA vector to the dPCA vector:

    d = {d1PCA, ..., dNPCA, d1LDA, ..., dNLDA} (2)

    where N is the number of images in the face database. If C is the number of identities, also called classes, an identity c is associated to each couple (djLDA, djPCA), j = 1, ..., N.

    In the case of the KNN approach, the most frequent identity among the first K components of d (i.e., the K smallest distances) is selected. If the combined distance vector follows eq. (1), we call our algorithm M-KNN or Mean-KNN; if it follows eq. (2), we call our algorithm A-KNN or Append-KNN.

    In the case of the NM approach, we first compute a template for each identity in the database; we selected the average image for both the PCA and the LDA representations. Consequently, our distance vectors dPCA and dLDA are composed of C components instead of N. These vectors are combined according to eq. (1) or (2), and the identity associated with the smallest combined distance is selected. The related algorithms are called Mean-NM or M-NM and Append-NM or A-NM.
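    The two combination rules and the KNN decision above can be sketched as follows; min-max scaling is used as one simple choice of normalization, and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def min_max(d):
    """Scale a distance vector so that its values lie in [0, 1]."""
    return (d - d.min()) / (d.max() - d.min())

def knn_fusion(d_pca, d_lda, labels, rule="mean", k=3):
    """Combine the two distance vectors and select an identity (KNN rule).

    labels: NumPy array holding the identity of each of the N database faces.
    """
    d_pca, d_lda = min_max(d_pca), min_max(d_lda)
    if rule == "mean":                            # eq. (1): M-KNN
        d = (d_pca + d_lda) / 2.0
        lab = labels
    else:                                         # eq. (2): A-KNN
        d = np.concatenate([d_pca, d_lda])
        lab = np.concatenate([labels, labels])
    nearest = lab[np.argsort(d)[:k]]              # K smallest combined distances
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # most frequent identity
```

    For the NM variants, d_pca and d_lda would instead hold one distance per identity (C components, computed against the per-class average templates), and the identity with the smallest combined distance would be returned in place of the KNN vote.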

  3. Comparison of all methods

    For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). The data set was subdivided into a training set, made up of 5 images per class (200 images), and a test set, made up of 5 images per class (200 images). In order to assess recognition performance, we repeated our experiment for ten random partitions of the data set.

    Figure 3.1 Results show that the combination of PCA and LDA produces a more reliable system.
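    A sketch of the repeated random-partition protocol described above; the 5/5 split per class and the ten repetitions follow the text, while the data and the PCA-plus-nearest-neighbour classifier are placeholders standing in for the compared methods.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data set: 40 identities with 10 flattened images each.
rng = np.random.default_rng(4)
X = rng.random((400, 1024))
y = np.repeat(np.arange(40), 10)

# Ten random partitions, 5 images per class for training and 5 for testing.
splitter = StratifiedShuffleSplit(n_splits=10, train_size=0.5, test_size=0.5,
                                  random_state=0)
scores = []
for train_idx, test_idx in splitter.split(X, y):
    model = make_pipeline(PCA(n_components=60), KNeighborsClassifier(n_neighbors=1))
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print(f"mean accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```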

  4. Conclusion

    The fusion of two statistical approaches, namely PCA and LDA, for face representation and recognition has been investigated. Comparisons of results confirm the benefits of fusing them with two kinds of combination rules.

    We combined PCA and LDA with the KNN-based combination rule and the NM-based combination rule. In general, the performance of the KNN rule is much better than that of the NM rule.

  5. References

  1. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, Eigenfaces versus fisherfaces: Recognition using class specific linear projection, IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 711–720, 1997.

  2. R. Brunelli and T. Poggio, Face recognition: Features versus templates, IEEE Trans. Pattern Anal. Machine Intell., vol. 15, pp. 1042–1053, 1993.

  3. H. Yu and H. Yang, A direct LDA algorithm for high-dimensional data with application to face recognition, Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

  4. A. Martinez and A. Kak, PCA versus LDA, IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 2, pp. 228–233, 2001.
