Face Recognition Using Transform Domain and Overlapping LBP Techniques

DOI : 10.17577/IJERTCONV2IS13099


Latha S

Asst Professor, Dept of E & C, SJBIT

Bangalore, India lathaait@yahoo.co.in

Savita S Kudakunti

MTech Student, Dept of E and C SJBIT

Bangalore, India svt_kdknt@yahoo.com

Abstract: Face Recognition is one of the popular research topics with different applications. In real-world applications, human identification and verification are needed to ensure security. In this paper, transform and spatial domain techniques are used for face recognition. The face images of the orl_faces and JAFFE databases are pre-processed using Histogram Equalisation (HE). The Discrete Wavelet Transform (DWT) is applied on the Histogram Equalised images to generate four sub-bands. The Overlapping Local Binary Pattern (OLBP) is applied on the LL sub-band to generate the final features. The features of the test images are compared with the features of the database images using Euclidean Distance (ED) to compute performance parameters such as False Acceptance Rate (FAR), False Rejection Rate (FRR) and Total Success Rate (TSR).

Keywords: Face Recognition, Histogram Equalisation (HE), Discrete Wavelet Transform (DWT), Overlapping Local Binary Pattern (OLBP), Euclidean Distance (ED).

  1. INTRODUCTION

    Biometrics is the study of behavioural and physiological characteristics; the term is derived from the Greek words bios (life) and metrikos (measure) [1]. Biometric identification is a popular research topic, as it finds applications in security, business, military and robotics [2]. The different techniques for recognising a person are based on physiological characteristics and behavioural characteristics. Physiological characteristics include fingerprint, face, iris and retinal blood vessel pattern, while behavioural characteristics include voice, signature and keystroke dynamics. Compared to traditional systems such as Personal Identification Numbers (PINs) and smart cards, the verification of an individual using biometrics is more secure, as it involves biometric parameters that are part of the human body and cannot be stolen.

    The major steps in a face recognition system are preprocessing, feature extraction and comparison using classifiers such as Euclidean Distance (ED), Random Forest and Support Vector Machine (SVM). In the preprocessing unit, the colour image may be converted into a grey-scale image and illumination normalisation [3] may be applied. In feature extraction, facial features are extracted using edge detection techniques, Principal Component Analysis (PCA), Discrete Cosine Transform (DCT) coefficients [4], DWT coefficients or DT-CWT [5]. In the matching stage, Euclidean Distance (ED), Hamming Distance (HD), Support Vector Machine (SVM), Random Forest [6] or Neural Network [7] may be used. Biometric systems use performance metrics such as False Acceptance Rate (FAR), False Rejection Rate (FRR) and Total Success Rate (TSR) to evaluate the accuracy of the system; FAR and FRR are compared against a threshold value to declare matches and mismatches.

  2. MODEL

    In this section, the performance parameters are defined and the proposed model is described.

    1. Definitions of Performance Parameters

      1. False Rejection Rate (FRR): It is the rate at which the system fails to detect a match between the input pattern and a matching template in the database. FRR measures the percent of valid inputs which are incorrectly rejected.

      2. False Acceptance Rate (FAR): It is the rate at which the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs which are incorrectly accepted.

      3. Total Success Rate (TSR): It is defined as the ratio of the number of persons correctly matched to the total number of persons in the database.

      4. Equal Error Rate (EER): The rate at which the false acceptance and false rejection errors are equal.
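      As an illustration of how these parameters can be computed, the following is a minimal Python sketch assuming that matching decisions are made by thresholding Euclidean distances; the function name and the example distance values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def compute_rates(genuine_dists, impostor_dists, threshold):
    """Compute FRR, FAR and TSR for a single distance threshold.

    genuine_dists  : distances between test images of in-database persons
                     and their correct database templates
    impostor_dists : distances between out-of-database test images and their
                     closest database templates
    A distance below the threshold is declared a match.
    """
    genuine = np.asarray(genuine_dists, dtype=float)
    impostor = np.asarray(impostor_dists, dtype=float)
    frr = float(np.mean(genuine >= threshold))   # valid inputs incorrectly rejected
    far = float(np.mean(impostor < threshold))   # invalid inputs incorrectly accepted
    tsr = float(np.mean(genuine < threshold))    # persons correctly matched
    return frr, far, tsr

# Sweeping the threshold traces the FAR/FRR curves; the EER lies where they cross.
if __name__ == "__main__":
    genuine = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0]     # illustrative values only
    impostor = [2.3, 1.9, 2.7, 1.6, 2.1]
    for t in np.linspace(0.5, 2.5, 9):
        frr, far, tsr = compute_rates(genuine, impostor, t)
        print(f"threshold={t:.2f}  FRR={frr:.2f}  FAR={far:.2f}  TSR={tsr:.2f}")
```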

    2. Proposed Model

      In the proposed model, DWT (Discrete Wavelet Transform) and OLBP (Overlapping Local Binary Pattern) techniques are used to generate features of face images to identify a person. The block diagram is shown in Fig 1.

      Fig 1. Proposed block diagram: the test image and the database images are pre-processed using Histogram Equalization (HE), decomposed using DWT, OLBP features are extracted, and the feature sets are matched to produce the result.

      1. Face Databases:

        • JAFFE database

          The JAFFE face database consists of 10 persons with approximately 20 images per person. The database is created by considering the first 6 of the 10 persons and the first 8 images per person, which gives 48 images in the database; the tenth image of each of the first 6 persons is taken as a test image to compute the False Rejection Rate (FRR) and Total Success Rate (TSR). The remaining 4 persons out of 10 are considered out of the database to compute the False Acceptance Rate (FAR). Samples of the JAFFE face database are shown in Fig 2.

          Fig 2. Samples of the JAFFE face images of a person.

        • Orl_faces database:

    The orl_faces database consists of 40 persons with 10 images per person. The database is created by considering the first 6 of the 40 persons and the first 8 images per person, which gives 48 images in the database; the tenth image of each of these persons is taken as a test image to compute FRR and TSR. The remaining 34 persons are considered out of the database to compute FAR. Ten images of a person from the orl_faces database are shown in Fig 3.

    Fig 3. Samples of the orl_faces images of a person.

    2. Pre-processing:

      The images in the database and the test image are processed before the features are extracted. Histogram Equalization is applied to the input image. Histogram Equalization is a technique to enhance a given image by designing a transformation T(.) such that the grey values of the output image are uniformly distributed, as shown in Fig 4.

      Fig 4. Histogram Equalization (grey value versus normalised grey value).
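      As a minimal sketch of this pre-processing step (assuming plain NumPy; the paper does not prescribe an implementation, and an equivalent result could be obtained with OpenCV's cv2.equalizeHist), the transformation T(.) can be built from the normalised cumulative histogram:

```python
import numpy as np

def histogram_equalize(gray):
    """Histogram Equalization of an 8-bit grey-scale image.

    T(.) is the normalised cumulative histogram, so the grey values of the
    output image are spread approximately uniformly over 0-255.
    """
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)    # grey-level histogram
    cdf = hist.cumsum()                                # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                          # first occupied grey level
    denom = max(int(cdf[-1] - cdf_min), 1)             # guard against constant images
    T = np.clip(np.round((cdf - cdf_min) / denom * 255.0), 0, 255)  # the transformation T(.)
    return T.astype(np.uint8)[gray]                    # map every pixel through T
```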

    3. Discrete Wavelet Transform (DWT):

      Wavelet Transform provides a time-frequency representation of an image. It decomposes a signal into a set of basis functions called wavelets, which are obtained from a single prototype wavelet, the mother wavelet, by dilation and shifting as given by equation (1).

      $\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\, \psi\!\left(\frac{t-b}{a}\right)$   (1)

      Where a is the scaling factor and b is the shifting parameter. Two types of wavelet transform are the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT): the CWT uses all possible scales and translations, whereas the DWT uses only a selected discrete subset. The DWT [1] provides sufficient information for both analysis and synthesis of the original signal while reducing the computation time significantly. It was developed to overcome the drawback of the Short-Time Fourier Transform (STFT), which gives a constant resolution at all frequencies, and is used to analyse non-stationary signals; the wavelet transform instead uses a multiresolution technique, so that different frequencies are analysed with different resolutions. The decomposition of the data into different frequency ranges is carried out using the mother wavelet and the scaling function and is reversible. Band-pass filters perform the separation of the frequency components: one level of the transform is achieved by passing the signal through high-pass and low-pass filters and downsampling by a factor of 2. Multiple levels are obtained by repeating the filtering and decimation process on the low-pass output, as depicted in Fig 5.
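      A minimal sketch of this one-level decomposition is shown below, using the PyWavelets package and the Haar wavelet; the paper does not name an implementation, so pywt is an assumption here.

```python
import numpy as np
import pywt

def haar_dwt_subbands(image):
    """One-level 2-D DWT with the Haar mother wavelet.

    Rows and columns are passed through low-pass/high-pass filters and
    downsampled by 2, producing the approximation (LL) sub-band and the
    horizontal, vertical and diagonal detail sub-bands, each half the size
    of the input image.
    """
    image = np.asarray(image, dtype=float)
    LL, (detail_h, detail_v, detail_d) = pywt.dwt2(image, 'haar')
    return LL, detail_h, detail_v, detail_d

# e.g. a 100x100 face image yields 50x50 sub-bands, matching the LL size used in the paper.
```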

      Fig 5. Sub-band decomposition of DWT.

      When DWT is applied, the image is decomposed into approximation and detail components, namely the Low-Low (LL), High-Low (HL), Low-High (LH) and High-High (HH) sub-bands, corresponding to approximate, horizontal, vertical and diagonal features respectively, as shown in Fig 6. The dimension of each sub-band is half the size of the original image. The HL and LH sub-bands contain the changes of the image, or edges, along the horizontal and vertical directions respectively. The HH sub-band contains the high-frequency information of the image.

      Fig 6. Four bands obtained after DWT.

      The 2-D DWT of an image is given in Equation (2),

      $\mathrm{DWT}_{x(n)} = \begin{cases} d_{j,k} = \sum_{n} x(n)\, h_{j}^{*}\!\left(n - 2^{j}k\right) \\ a_{j,k} = \sum_{n} x(n)\, g_{j}^{*}\!\left(n - 2^{j}k\right) \end{cases}$   (2)

      where j is the power of binary scaling and k is a filter constant.

      Single-level DWT is applied on each face image. The Haar wavelet [5] is used as the mother wavelet, since it is simple, fast, reversible, memory efficient and gives better results compared to other wavelets. The LL band is the approximation band and is considered the most significant information of the face image; the size of the LL sub-band obtained is 50*50. The Haar wavelet is a sequence of rescaled square-shaped functions which together form a wavelet family or basis. The basic operations involved are differencing and averaging the input data over several levels to yield the coefficients. The Haar wavelet function is given by Equation (3) and its scaling function by Equation (4).

      $\psi(t) = \begin{cases} 1, & 0 \le t < 1/2 \\ -1, & 1/2 \le t < 1 \\ 0, & \text{otherwise} \end{cases}$   (3)

      $\phi(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{otherwise} \end{cases}$   (4)

    4. Overlapping Local Binary Pattern (OLBP):

      The basic Local Binary Pattern (LBP) was introduced by Timo Ojala and Matti Pietikäinen [8]. The LBP operator is a non-parametric algorithm proposed to describe texture in 2-D images. The important properties of LBP features are their tolerance to illumination variations and their computational simplicity, hence LBP is widely used in 2-D face recognition. The local neighbourhood of the LBP operator is defined as a set of sampling points evenly spaced on a circle centred on the pixel to be labelled. The LBP operator is denoted $\mathrm{LBP}_{P,R}$, where R is the radius of the circle surrounding the centre and P is the number of pixels on the circle. The operator produces $2^{P}$ different output values, corresponding to the $2^{P}$ different binary patterns that can be formed by the P pixels in the neighbour set.

      The original LBP operator labels each pixel of a given 2-D image by thresholding its 3*3 neighbourhood. If the value of a neighbouring pixel is greater than or equal to that of the central pixel, its corresponding binary bit is assigned 1; otherwise it is assigned 0. A binary number is formed by concatenating the eight binary bits, and the resulting decimal value is used for labelling. Fig 7 illustrates the LBP operator with a simple example. For any given pixel $(x_c, y_c)$, the LBP decimal value is derived using Equation (5),

      $\mathrm{LBP}_{P,R}(x_c, y_c) = \sum_{n=0}^{P-1} s\!\left(g_n - g_c\right) 2^{n}$   (5)

      here,

      $s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$

      where n indexes the eight neighbours of the central pixel, and $g_c$ and $g_n$ are the grey-level values of the central pixel and its surrounding pixels respectively. According to Equation (5), the LBP code is invariant to monotonic grey-scale transformations that preserve the pixel order in local neighbourhoods. When LBP operates on images formed by light reflection, it can be used as a texture descriptor. The derived binary number is called an LBP code; it encodes local primitive features such as spots, flat areas, line ends, edges and corners, which are invariant with respect to grey-scale transformations as shown in Fig 7, so each LBP code is regarded as a micro-texton.

      Fig 7. Illustration of the basic LBP operator.

      In the case of overlapping LBP, the outermost rows and columns of the image matrix are padded with zeros so that all pixel values are considered and small variations are compensated: if $(x_c, y_c)$ is the centre pixel (threshold) for the first LBP operator, then the adjacent pixel $(x_c+1, y_c+1)$ is considered as the threshold for the next adjacent LBP operator. In this way any small variation in the texture or illumination of the image is taken into account.
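      A minimal sketch of the overlapping LBP operator as described above (8 neighbours at radius 1, zero padding of the border so that every pixel serves as a centre) follows; pooling the codes into a histogram feature vector is an assumption of this illustration, since the paper does not spell that step out.

```python
import numpy as np

def olbp_codes(image):
    """Overlapping LBP codes: zero-pad the outermost rows/columns so that every
    pixel, including border pixels, acts as the centre (threshold) of a 3x3 LBP
    operator; adjacent operators therefore overlap."""
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, 1, mode='constant', constant_values=0)
    # offsets of the 8 neighbours, one per bit of the LBP code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = img.shape
    codes = np.zeros(img.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + rows, 1 + dx:1 + dx + cols]
        codes += (neighbour >= img).astype(np.int32) * (1 << bit)   # s(g_n - g_c) * 2^n
    return codes

def olbp_histogram(image, bins=256):
    """Pool the LBP codes into a normalised histogram used as the feature vector
    (an assumption of this sketch)."""
    hist, _ = np.histogram(olbp_codes(image), bins=bins, range=(0, bins))
    return hist / max(int(hist.sum()), 1)
```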

    5. Matching:

    Euclidean Distance is used to verify whether the person is in the database by comparing the final features of the images in the database with the feature set of the test image. The Euclidean Distance is calculated by equation (6),

    $ED = \sqrt{\sum_{i=1}^{N} \left(p_i - q_i\right)^{2}}$   (6)

    where $p_i$ is the feature value of the database image and $q_i$ is the feature value of the test image.

    When the Euclidean Distance between the features of two images is less than the threshold value, a match is declared; otherwise a mismatch is declared.
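    A minimal sketch of this matching step using equation (6) is given below; the threshold value passed in is an experimenter-chosen parameter, as in the paper.

```python
import numpy as np

def euclidean_distance(p, q):
    """Equation (6): Euclidean Distance between a database feature vector p
    and a test feature vector q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def match(test_features, database_features, threshold):
    """Return the index of the best-matching database entry, or None when the
    smallest distance exceeds the threshold (a mismatch)."""
    distances = [euclidean_distance(db, test_features) for db in database_features]
    best = int(np.argmin(distances))
    return best if distances[best] < threshold else None
```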

  3. ALGORITHM

    The person in the database is effectively recognised using the proposed algorithm.

    The Objectives:

    1. To increase Total Success Rate

    2. To Reduce FAR and FRR

    Input: Face Image

    Output: Recognition of a person

    Step 1: Face image is read from the database

    Step 2: Histogram Equalization (HE) is applied to the input image.

    Step 3: DWT is applied to the Histogram Equalised image.

    Step 4: OLBP is applied on the LL sub-band to generate the final features.

    Step 5: Repeat steps 1 to 4 for the test images.

    Step 6: Test image features are compared with database image features using Euclidean Distance.

    Step 7: When the Euclidean Distance between the features of two images is less than the threshold value, a match is declared; otherwise a mismatch is declared.

    TABLE 1. Proposed Algorithm
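    The steps of TABLE 1 can be chained as in the following sketch, which reuses the helper functions sketched in the earlier sections (histogram_equalize, haar_dwt_subbands, olbp_histogram, match); these helpers are assumptions of this illustration rather than code from the paper.

```python
def extract_features(gray_image):
    """Steps 2-4: Histogram Equalization, single-level Haar DWT, OLBP on the LL sub-band."""
    equalized = histogram_equalize(gray_image)
    ll, _, _, _ = haar_dwt_subbands(equalized)
    return olbp_histogram(ll)

def recognise(test_image, database_images, threshold):
    """Steps 5-7: compare the test-image features against every database image
    using the Euclidean Distance; returns the index of the matched entry or
    None when a mismatch is declared."""
    database_features = [extract_features(img) for img in database_images]
    return match(extract_features(test_image), database_features, threshold)
```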


  4. RESULT AND PERFORMANCE ANALYSIS

    For performance analysis, the JAFFE and orl_faces databases are used. The JAFFE database consists of 10 persons with approximately 20 images per person; to evaluate FRR, 6 persons with 8 images per person are considered to create the database and one image per person is used as a test image, while the remaining 4 persons are considered out of the database to evaluate FAR. The orl_faces database consists of 40 persons with 10 images per person; the first 6 persons with 8 images per person are considered to evaluate FRR, and the remaining 34 persons are considered out of the database to evaluate FAR.

    The percentage Recognition Rate of the proposed algorithm is compared with other face recognition techniques such as DT-CWT, DT-CWT on Histogram Equalization (HE+DT-CWT), DT-CWT on DCT (DCT+DT-CWT) and Dual Tree Based Feature Extraction Face Recognition (DTBFEFR) on the JAFFE and orl_faces databases, as given in TABLE 2. It is seen that the proposed algorithm gives 100% recognition compared to the other transform-domain techniques.

    TABLE 2. Comparison of Recognition Rate of the proposed algorithm with other algorithms

    Algorithms                JAFFE    Orl_faces
    DT-CWT                    90.3%    76.6%
    HE+DT-CWT                 92.3%    95%
    DCT+DT-CWT                87.3%    83.3%
    DTBFEFR                   100%     91%
    HE+DWT+OLBP (proposed)    100%     100%

    The variation of FAR, FRR and Efficiency (TSR) with the threshold is shown in Fig 8 for the orl_faces database and in Fig 9 for the JAFFE database.

    Fig 8. FAR, FRR and Efficiency versus threshold using the orl_faces database.

    Fig 9. FAR, FRR and Efficiency versus threshold using the JAFFE database.

  5. CONCLUSION

In this paper, a Discrete Wavelet Transform (DWT) based feature extraction method is proposed for face recognition. DWT is applied on the Histogram Equalised image and only the LL band is considered for feature extraction. OLBP is applied to the LL band of the DWT, which makes the recognition more robust. It is observed that the proposed algorithm increases the accuracy, achieving a 100% Total Success Rate (TSR) with FAR and FRR of zero for the orl_faces database. The accuracy could be further improved by applying other advanced transform-domain techniques.

REFERENCES

[1]. Marcos Faundez-Zanuy, "Biometric Security Technology", Encyclopedia of Artificial Intelligence, 2009, pp. 262-264.

[2]. Anil K. Jain, Arun Ross and Salil Prabhakar, "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004.

[3]. Mariusz Leszczynski, "Image Preprocessing for Illumination Invariant Face Verification", Journal of Telecommunications and Information Technology, 2010.

[4]. Aman R. Chadha, Pallavi P. Vaidya, M. Mani Roja, "Face Recognition Using Discrete Cosine Transform for Global and Local Features", International Conference on Recent Advancements in Electrical, Electronics and Control Engineering, 2011.

[5]. Chao-Chun Liu and Dao-Qing Dai, "Face Recognition Using Dual-Tree Complex Wavelet Features", IEEE Transactions on Image Processing, Vol. 18, No. 11, pp. 2593-2599, 2009.

[6]. Jun-Ying Zeng, Xiao-Hua Cao, "An Improvement of Adaboost for Face Detection with Random Forest", ICIC 2010, CCIS 93, pp. 22-29.

[7]. M. Nandini, P. Bhargavi, G. Raja Sekhar, "Face Recognition Using Neural Network", International Journal of Scientific and Research Publications, Volume 3, Issue 3, March 2013, ISSN 2250-3153.

[8]. Timo Ojala, Matti Pietikäinen, "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, pp. 971-987, 2002.

[9]. P.-H. Lee, S.-W. Wu, Yi-Ping Hung, "Illumination Compensation Using Oriented Local Histogram Equalization and its Application to Face Recognition", IEEE Transactions on Image Processing, Vol. 21, No. 9, pp. 4280-4289, Sept. 2012.

[10]. X. Tan and B. Triggs, "Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions", IEEE Transactions on Image Processing, Vol. 19, No. 6, pp. 1635-1650, June 2010.

[11]. S. Ravi and Sadique Nayeem, "A Study on Face Recognition Technique based on Eigenface", International Journal of Applied Information Systems (IJAIS), Foundation of Computer Science FCS, New York, USA, Volume 5, No. 4, March 2013.
