Image-Based Recognition Using Laplacianfaces

DOI : 10.17577/IJERTCONV3IS16039


Venkatesh. K, M.E. Final Year / CSE, Annapoorana Engineering College, Salem, Tamilnadu, India

T. Buvaneswari, Asst. Professor / CSE, Annapoorana Engineering College, Salem, Tamilnadu, India

Abstract: This paper proposes an appearance-based face recognition method called the Laplacianface approach. Using Locality Preserving Projections (LPP), face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of the face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure.

Keywords: Face Recognition, Locality Preserving Projections, Principal Component Analysis, Linear Discriminant Analysis.

  1. INTRODUCTION

    A smart environment is one that is able to identify people, interpret their actions, and react appropriately. Thus, one of the most important building blocks of smart environments is a person identification system. Face recognition devices are ideal for such systems, since they have recently become fast, cheap, unobtrusive, and, when combined with voice-recognition, are very robust against changes in the environment. Moreover, since humans primarily recognize each other by their faces and voices, they feel comfortable interacting with an environment that does the same.

    Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm, such as eigenfaces or the hidden Markov model. The first step for a facial recognition system is to detect a human face and extract it from the rest of the scene. Next, the system measures nodal points on the face, such as the distance between the eyes, the shape of the cheekbones, and other distinguishable features. These nodal points are then compared to the nodal points computed from a database of pictures in order to find a match. Obviously, such a system is limited by the angle of the face captured and the lighting conditions present. New technologies are currently in development to create three-dimensional models of a person's face from a digital photograph in order to create more nodal points for comparison. However, such technology is inherently susceptible to error, given that the computer is extrapolating a three-dimensional model from a two-dimensional photograph.
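    As a rough illustration of the matching step (a generic sketch with invented feature values and names, not a description of any particular commercial system), each face can be summarized by a vector of such nodal-point measurements and compared against a database by distance:

import numpy as np

# Hypothetical nodal-point measurements: [eye distance, nose width, cheekbone width, jaw width]
database = {
    "alice": np.array([62.0, 34.0, 118.0, 130.0]),
    "bob":   np.array([66.0, 36.0, 122.0, 141.0]),
}

def match(query, database, threshold=5.0):
    """Return the closest identity if its distance is under the threshold, else None."""
    best_name, best_dist = None, float("inf")
    for name, features in database.items():
        dist = np.linalg.norm(query - features)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(match(np.array([63.0, 34.5, 117.0, 131.0]), database))  # prints "alice"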

  2. PCA AND LDA

    One approach to coping with the problem of excessive dimensionality of the image space is to reduce the dimensionality by combining features. Linear combinations are particularly attractive because they are simple to compute and analytically tractable. In effect, linear methods project the high-dimensional data onto a lower-dimensional subspace.

    Consider the problem of representing all of the vectors in a set of n d-dimensional samples x_1, x_2, ..., x_n, with zero mean, by a single vector y = (y_1, y_2, ..., y_n) such that y_i represents x_i. Specifically, we find a linear mapping from the d-dimensional space to a line. Without loss of generality, we denote the transformation vector by w. That is, w^T x_i = y_i. Actually, the magnitude of w is of no real significance because it merely scales y_i. In face recognition, each vector x_i denotes a face image.
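    A minimal numerical sketch of this mapping (the data values and the vector w below are arbitrary choices of ours, used only for illustration):

import numpy as np

# Four samples (n = 4) in three dimensions (d = 3), one row per sample x_i.
X = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 2.0, 0.0],
              [3.0, 1.0, 2.0]])
X = X - X.mean(axis=0)          # enforce the zero-mean assumption from the text

w = np.array([1.0, 0.5, -0.5])  # transformation vector; only its direction matters
y = X @ w                       # y_i = w^T x_i, one scalar per sample
print(y)                        # the one-dimensional representation of the samples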

    Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to (i.e., uncorrelated with) the preceding components. The principal components are orthogonal because they are the eigenvectors of the covariance matrix, which is symmetric. PCA is sensitive to the relative scaling of the original variables.
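    The description above translates directly into a short eigendecomposition of the covariance matrix. The sketch below is our own illustration (not code from the paper), with random data standing in for vectorized face images:

import numpy as np

def pca(X, k):
    """Top-k principal components of X (n samples as rows, d features)."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # d x d covariance matrix (symmetric)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvectors of a symmetric matrix
    order = np.argsort(eigvals)[::-1]       # order by decreasing variance
    W = eigvecs[:, order[:k]]               # orthogonal principal directions
    return W, Xc @ W                        # directions and projected data

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))              # stand-in for 100 vectorized images
W, Y = pca(X, k=10)
print(W.shape, Y.shape)                     # (50, 10) (100, 10)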

    Linear discriminant analysis (LDA), a generalization of Fisher's linear discriminant, is a method used in statistics, pattern recognition, and machine learning to find a linear combination of features which characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification.

    LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label). Logistic regression and probit regression are more similar to LDA, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.

    LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.
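    To make the contrast with PCA concrete, here is a minimal two-class Fisher discriminant in NumPy (our own sketch with synthetic data, not the paper's implementation): the projection direction explicitly depends on the class labels.

import numpy as np

def fisher_direction(X, labels):
    """Fisher's linear discriminant for two classes (labels 0 and 1):
    w is proportional to Sw^{-1} (mu1 - mu0)."""
    X0, X1 = X[labels == 0], X[labels == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 5)),
               rng.normal(1.5, 1.0, size=(50, 5))])
labels = np.array([0] * 50 + [1] * 50)
w = fisher_direction(X, labels)
scores = X @ w   # one-dimensional projections; the two classes separate along w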

  3. LOCALITY PRESERVING PROJECTION

    PCA and LDA aim to preserve the global structure. However, in many real-world applications, the local structure is more important. In this section, we describe Locality Preserving Projection (LPP), a new algorithm for learning a locality preserving subspace.

    Locality Preserving Projections (LPP) are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA), a classical linear technique that projects the data along the directions of maximal variance. When the high-dimensional data lie on a low-dimensional manifold embedded in the ambient space, the locality preserving projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and, more crucially, is defined everywhere in the ambient space rather than just on the training data points. LPP may be conducted in the original space or in the reproducing kernel Hilbert space into which the data points are mapped. This gives rise to kernel LPP.
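    A rough NumPy sketch of this construction follows (the neighborhood size k and heat-kernel parameter t are our own assumed defaults, tuning choices rather than values prescribed here). It builds the adjacency graph, forms the graph Laplacian L = D - W, and solves the generalized eigenvalue problem for the projection directions:

import numpy as np

def lpp(X, k_neighbors=5, t=1.0, dim=2):
    """Locality Preserving Projections (sketch). X has one sample per row;
    returns a (d, dim) projection matrix."""
    n = X.shape[0]
    # 1. Adjacency graph with heat-kernel weights between k nearest neighbors.
    sq = np.sum(X ** 2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # squared Euclidean distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dists[i])[1:k_neighbors + 1]      # skip the point itself
        W[i, idx] = np.exp(-dists[i, idx] / t)
    W = np.maximum(W, W.T)                                  # symmetrize the graph

    # 2. Degree matrix D and graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W

    # 3. Generalized eigenproblem (rows-as-samples form of X L X^T a = lambda X D X^T a);
    #    keep the eigenvectors with the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])             # small ridge for stability
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals.real)
    return np.real(eigvecs[:, order[:dim]])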

  4. LEARNING LAPLACIANFACES FOR REPRESENTATION

    LPP is a general method for manifold learning. It is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. Therefore, though it is still a linear technique, it seems to recover important aspects of the intrinsic nonlinear manifold structure by preserving local structure. Based on LPP, we describe our Laplacianfaces method for face representation in a locality preserving subspace.

    In the face analysis and recognition problem, one is confronted with the difficulty that the matrix XDX^T is sometimes singular. This stems from the fact that the number of images in the training set (n) is often much smaller than the number of pixels m in each image. In such a case, the rank of XDX^T is at most n, while XDX^T is an m x m matrix, which implies that XDX^T is singular. To overcome this complication, we first project the image set to a PCA subspace so that the resulting matrix XDX^T is nonsingular. Another reason for using PCA as preprocessing is noise reduction. This method, which we call Laplacianfaces, can learn an optimal subspace for face representation and recognition.
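    Putting the two stages together, a hedged sketch of the Laplacianfaces pipeline might look as follows, reusing the pca() and lpp() functions sketched earlier (the subspace dimensions are arbitrary placeholder values, not the paper's settings):

import numpy as np

def laplacianfaces(X_train, pca_dim=100, lpp_dim=30):
    """Learn a Laplacianface subspace: PCA first (for noise reduction and to keep
    X D X^T nonsingular), then LPP inside the PCA subspace."""
    W_pca, X_reduced = pca(X_train, k=pca_dim)   # step 1: project onto the PCA subspace
    W_lpp = lpp(X_reduced, dim=lpp_dim)          # step 2: locality preserving projection
    return W_pca @ W_lpp                         # combined mapping from pixels to features

# A query image (flattened to a row vector x and centered with the training mean) is then
# represented as (x - X_train.mean(axis=0)) @ W and matched against the training
# representations, for example by nearest neighbor.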

    1. VISUAL ANALYSIS

      In many cases, face images may be visualized as points drawn on a low-dimensional manifold hidden in a high-dimensional ambient space. Specifically, we can imagine that a sheet of rubber is crumpled into a (high-dimensional) ball. The objective of a dimensionality-reducing mapping is to unfold the sheet and make its low-dimensional structure explicit. If the sheet is not torn in the process, the mapping is topology preserving. Moreover, if the rubber is not stretched or compressed, the mapping preserves the metric structure of the original space. In this paper, our objective is to discover the face manifold by a locally topology-preserving mapping for face analysis (representation and recognition).

    2. FACE MANIFOLD ANALYSIS

      Consider a simple example of image variability. Imagine that a set of face images is generated while the human face rotates slowly. Intuitively, the set of face images corresponds to a continuous curve in image space, since there is only one degree of freedom, namely the angle of rotation. Thus, we can say that the set of face images is intrinsically one-dimensional. Many recent works have shown that face images do reside on a low-dimensional submanifold embedded in a high-dimensional ambient space (image space). Therefore, an effective subspace learning algorithm should be able to detect the nonlinear manifold structure. Conventional algorithms, such as PCA and LDA, model the face images in Euclidean space. They effectively see only the Euclidean structure; thus, they fail to detect the intrinsic low-dimensionality.

      With its neighborhood preserving character, the Laplacianfaces seem able to capture the intrinsic face manifold structure to a larger extent. Fig. 1 shows an example in which face images with various poses and expressions of a person are mapped into a two-dimensional subspace. The face image data set used here is the same as that used in. This data set contains 1,965 face images taken from sequential frames of a small video. The size of each image is 20 x 28 pixels, with 256 gray levels per pixel. Thus, each face image is represented by a point in the 560-dimensional ambient space. However, these images are believed to come from a submanifold with few degrees of freedom.
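      Using the lpp() sketch given earlier, such a two-dimensional mapping of 1,965 flattened 20 x 28 images could be computed as below (the array here is random stand-in data, since the actual video frames are not included with this paper):

import numpy as np

rng = np.random.default_rng(2)
faces = rng.random(size=(1965, 560))       # stand-in for 1,965 flattened 20 x 28 images
W2 = lpp(faces, k_neighbors=5, dim=2)      # 560-dimensional pixel vectors -> 2 dimensions
embedding = faces @ W2                     # one 2-D point per face image, as in Fig. 1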

    3. DATA FLOW DIAGRAM

      Data Flow Diagrams are composed of the four basic symbols shown below.

      • The External Entity symbol represents sources of data to the system or destinations of data from the system.

      • The Data Flow symbol represents movement of data.

      • The Data Store symbol represents data that is not moving (delayed data at rest).

      • The Process symbol represents an activity that transforms or manipulates the data (combines, reorders, converts, etc.).

      Fig.1. Level 0 Data Flow Diagram

      Fig.2. Level 1 Preprocessing

      Fig.3. Level 2 10 Different Directions

      Fig.4. Level 2 28-Dimensions

      Fig.5. Level 3 Testing the Image

  5. CONCLUSION

The proposed system works as intended. It is able to handle a proper training set of face images and a test input for recognition. If the test face is matched, the result is presented as an image; if not, a text message reports the difference.

Face recognition technology has come a long way in the last twenty years. Today, machines are able to automatically verify identity information for secure transactions, for surveillance and security tasks, and for access control to buildings, among other applications. These applications usually work in controlled environments, and recognition algorithms can take advantage of the environmental constraints to obtain high recognition accuracy. However, next-generation face recognition systems are going to have widespread application in smart environments, where computers and machines are more like helpful assistants.

To achieve this goal, computers must be able to reliably identify nearby people in a manner that fits naturally within the pattern of normal human interactions. They must not require special interactions and must conform to human intuitions about when recognition is likely. This implies that future smart environments should use the same modalities as humans and have approximately the same limitations. These goals now appear to be within reach; however, substantial research remains to be done to make person recognition technology work reliably under widely varying conditions, using information from single or multiple modalities.

REFERENCES

  1. X. He and P. Niyogi, "Locality Preserving Projections," Proc. Conf. Advances in Neural Information Processing Systems, 2003.

  2. A.U. Batur and M.H. Hayes, "Linear Subspace for Illumination Robust Face Recognition," Dec. 2001.

  3. M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Proc. Conf. Advances in Neural Information Processing Systems, 2001.

  4. P. Naughton, Java: The Complete Reference, Fourth Edition.

  5. A.M. Martinez and A.C. Kak, "PCA versus LDA," IEEE Trans. Pattern Analysis and Machine Intelligence, Feb. 2001.
