Multi View Face Recognition Via 3D Model based Pose Regularization

DOI : 10.17577/IJERTCONV3IS19189


Swapnila

Information Science, AMC Engineering College,

Pavithra N

Assistant Professor, AMC Engineering College,

Abstract- Face recognition is a challenging task because images of a person are not always captured under constrained conditions and may exhibit variations in pose, illumination, expression and alignment. Existing methods cannot perform facial recognition in an unconstrained environment without compromising accuracy and precision. The noise resulting from any of these factors must be removed to achieve the desired result. Hence a data-driven method for face recognition is proposed which locates landmarks in facial images. In this approach, the nodes on every facial image are detected and their pixel values are recorded in matrix form. The face is divided into segments and an eigenface is created for each segment using the PCA algorithm. By combining the pixel values with the eigenfaces, a new face is reconstructed, which is then converted to a grayscale image. After removal of noise and proper alignment, face verification is carried out. This yields a higher degree of precision than previous approaches.

Keywords: Landmarks, nodes, Eigen faces, PCA, Grayscale

  1. INTRODUCTION

    In the context of surveillance applications, the face is one of the most suitable biometrics. Facial recognition includes both identification and verification of a person from a still picture or a stream of video images. Face verification, or face authentication, determines whether the input image of a person is genuine or not; by genuine it is meant that the input image matches some image present in the system database. There have been many impressive results in the implementation of face recognition. However, the persistent problem is the unconstrained face captured by the camera. Several factors pose a challenge to unconstrained face recognition [1]. Some of these are as follows:

    • Illumination: Under different lighting conditions, there can be significant variation in the appearance of a person's face.

    • Pose: The image captured by the camera does not always show a person looking straight into the lens. The face may appear in many arbitrary poses.

    • Expression: The expressions of a person change according to mood and situation. This difference in expression can cause major variation between images.

    • Alignment: Wrong alignment of the image can degrade the performance of the face recognition system, since the cropped faces detected are not properly aligned.


      The existing methods can be divided into two categories: 2D methods and 3D (or hybrid) methods. In general, 3D model-based methods achieve higher precision than 2D methods. 2D methods often use 2D transformations to approximate the 3D transformations and compensate for the error with statistical learning strategies.

      Fig 1: Block diagram for face recognition (Face Detection → Face Extraction → Classification)

  2. PROPOSED SYSTEM

    The proposed system presents a fully automated face recognition method [2]. Every face has numerous distinguishable landmarks: the different peaks and valleys that make up the facial features. These facial features constitute the nodal points; a typical human face has about 80 nodal points on average. Some of them are:

    • Distance between the eyes

    • Width of the nose

    • Shape of cheekbones

    • Length of the jaw line

    In the proposed method, these landmarks are localized and a new facial structure is regenerated. Facial landmark localization seeks to automatically locate predefined facial landmarks. The factors that constrain the appearance of a facial image are removed, and the weight of each node is recorded in matrix form.
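As an illustration of how such nodal-point measurements could be assembled into a weight matrix, the sketch below computes two of the distances listed above from hypothetical landmark coordinates. The coordinates, names, and NumPy usage are all assumptions for illustration; the paper does not specify an implementation.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) for one face; in practice these
# would come from a landmark localizer, not hard-coded values.
landmarks = {
    "left_eye":   np.array([30.0, 40.0]),
    "right_eye":  np.array([70.0, 40.0]),
    "nose_left":  np.array([45.0, 60.0]),
    "nose_right": np.array([55.0, 60.0]),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return float(np.linalg.norm(a - b))

# Two of the nodal-point measurements mentioned in the text.
eye_distance = distance(landmarks["left_eye"], landmarks["right_eye"])
nose_width = distance(landmarks["nose_left"], landmarks["nose_right"])

# Stack the measurements into one row of the node-weight matrix.
feature_row = np.array([eye_distance, nose_width])
print(feature_row)
```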

    1. IMAGE ACQUISITION

      The image is captured using a capturing device, which can be anything from a webcam to a surveillance camera. Image acquisition is the process of capturing an image and converting it into a manageable entity: scaling the image to a proper size, correcting the alignment, and so on. The image is thus prepared for feature extraction.
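A minimal sketch of the scaling step described above, assuming NumPy and a simple nearest-neighbour resize (the paper does not name a specific resizing method, so this choice is an assumption):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize: scale an image to a fixed size so every
    face enters the pipeline as a 'manageable entity' of equal dimensions."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output col
    return img[rows][:, cols]

# Toy 4x4 'captured' image scaled down to 2x2.
captured = np.arange(16).reshape(4, 4)
scaled = resize_nearest(captured, 2, 2)
print(scaled)
```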

    2. SEGMENTATION

      The entire face is scanned to locate the nodal points. The weight of each node is recorded in matrix form, with the value of each pixel placed in rows and columns. Next, the face is divided into several segments. The facial region segmentation is done by masking the registered probe face based on Euclidean distance from the nose tip [4]. Each region is further divided into different facial areas, and each area is associated with a weight. An eigenface is created from the eigenvectors of each segment. The eigenface and the pixel values are combined by matrix multiplication.
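The per-segment eigenface computation could be sketched as follows, assuming NumPy and the standard small-covariance trick for PCA; the toy data, segment size, and number of components are hypothetical, not values from the paper:

```python
import numpy as np

def eigenfaces(segments, k=2):
    """PCA over a set of flattened face-segment vectors.
    segments: (n_samples, n_pixels) matrix, one row per face segment.
    Returns the mean segment and the top-k eigenfaces (unit eigenvectors)."""
    mean = segments.mean(axis=0)
    centered = segments - mean
    # Covariance trick: eigenvectors of the small (n x n) sample matrix are
    # lifted back to pixel space, avoiding an n_pixels x n_pixels covariance.
    cov_small = centered @ centered.T
    vals, vecs = np.linalg.eigh(cov_small)
    order = np.argsort(vals)[::-1][:k]          # largest eigenvalues first
    faces = centered.T @ vecs[:, order]         # lift to pixel space
    faces /= np.linalg.norm(faces, axis=0)      # unit-length eigenfaces
    return mean, faces

# Toy data: 5 'segments' of 16 pixels each.
rng = np.random.default_rng(0)
data = rng.random((5, 16))
mean, faces = eigenfaces(data, k=2)

# Project one segment onto the eigenfaces to get its weight vector.
weights = (data[0] - mean) @ faces
print(weights.shape)  # (2,)
```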

      Characteristic Equation

      When a transformation is represented by a square matrix A, the eigenvalue equation can be expressed as

          A x = λx

      This can be rearranged to

          (A − λI) x = 0

      If the matrix (A − λI) has an inverse, both sides can be left-multiplied by it to obtain only the trivial solution x = 0. Thus we require that no inverse exist, which from linear algebra means the determinant must equal zero:

          det(A − λI) = 0
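These three relations can be checked numerically; the small symmetric matrix below is a stand-in chosen for illustration, not data from the paper:

```python
import numpy as np

# A small symmetric matrix standing in for a covariance matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eigh(A)
lam, x = vals[0], vecs[:, 0]

# A x = lambda x  (the eigenvalue equation)
assert np.allclose(A @ x, lam * x)
# (A - lambda I) x = 0  (the rearranged form)
assert np.allclose((A - lam * np.eye(2)) @ x, 0.0)
# det(A - lambda I) = 0  (the characteristic equation)
assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9
print(vals)  # eigenvalues 1.0 and 3.0
```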

    3. FACE RECONSTRUCTION

      Automatic alignment of multi-view face images is an open problem. Two basic algorithms are used for implementation: PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The rendered image is a colored image in which pixels of different weights are present. For the pixel weights to be uniform, the colored image is converted into a grayscale image; that is, each pixel is converted into a grayscale pixel. Raster images are thus created.
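The color-to-grayscale conversion could look like the following sketch; the ITU-R BT.601 luminosity weights are an assumption, since the paper does not specify the conversion formula:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 color image to grayscale using the standard
    luminosity weights (ITU-R BT.601), so every pixel carries one weight."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# Toy 1x2 color image: one pure-red pixel and one pure-white pixel.
color = np.array([[[255.0, 0.0, 0.0],
                   [255.0, 255.0, 255.0]]])
gray = to_grayscale(color)
print(gray)  # first pixel ≈ 76.245, second = 255.0
```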

    4. FACE MATCHING

    The reconstructed eigenface is compared with the images present in the database. The difference in their pixel values is taken and the average is recorded.
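A minimal sketch of this matching rule, assuming a mean-absolute-difference score over aligned grayscale pixels and a hypothetical acceptance threshold (the paper states neither the exact score nor a threshold value):

```python
import numpy as np

def match_score(probe, gallery):
    """Mean absolute difference between the pixel values of two aligned
    grayscale faces; lower means a closer match."""
    return float(np.mean(np.abs(probe - gallery)))

def verify(probe, gallery, threshold=10.0):
    """Accept the probe as genuine if its average pixel difference from
    the enrolled image falls below a (hypothetical) threshold."""
    return match_score(probe, gallery) < threshold

# Toy 2x2 grayscale faces.
enrolled = np.array([[100.0, 120.0], [130.0, 140.0]])
same = enrolled + 2.0               # near-identical capture of the same face
other = np.full((2, 2), 200.0)      # a different face

print(verify(same, enrolled))   # True
print(verify(other, enrolled))  # False
```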

  3. RESULTS AND CONCLUSION

    Pose is a challenging problem, and it is usually coupled with other factors that jointly affect the performance of practical face recognition systems. Shape regularization can boost the accuracy of the system. A variety of facial images with different alignments, lighting conditions and facial expressions can be matched accurately. The proposed fully automatic system is efficient for images with different poses and offers high accuracy and robustness.

    At present the system works with only one dataset; future work could build a system that incorporates multiple datasets.

  4. REFERENCES

  1. Mehran Kafai, Bir Bhanu, "Reference Face Graph for Face Recognition".

  2. Brandon M. Smith, Zhe Lin, Li Zhang, "Non-parametric Context Modeling of Local Appearance for Pose and Expression Face Recognition".

  3. Kevin W. Bowyer, Kyong Chang, "A Survey of Approaches for 3D Face Recognition".

  4. Rui Min, Jongmoo Choi, Gerard Medioni, "Real-Time 3D Face Identification".

  5. Dalong Jiang, Yuxiao Hu, Wen Gao, "Efficient 3D Reconstruction for Face Recognition".
