A Novel Approach to Ear Recognition based on LBP and PCA

DOI : 10.17577/IJERTCONV7IS05014


Resmi K R

School of Computer Science Mahatma Gandhi University Kottayam, India

Raju G

Department of Computer Science and Engineering Christ Deemed to be University

Bengaluru, India

Abstract— Biometric authentication is popular nowadays owing to its security applications. Recognition of an individual based on the ear is a new and challenging technology. In this paper, we perform ear detection and recognition. Our proposed method consists of four main steps. First, the ear is detected using template matching. Second, the detected ear is size-normalized so that all ears have the same size. Third, features are extracted from the ear to represent it as a vector. Finally, the features are classified using a KNN classifier. The proposed method was applied to two databases, the RR and IIT Delhi databases. The results show that the proposed method achieves better accuracy than comparable methods.

Keywords— Ear recognition; LBP; principal component analysis; template matching

  1. INTRODUCTION

    Biometrics means recognizing a person based on features that are unique to each individual. Traditional authentication methods, such as ID cards or passwords, have many disadvantages: they are easily lost, stolen or forgotten. Biometrics, which uses the physical or behavioral characteristics of humans, provides a relatively new perspective on security. The ear is a non-invasive biometric that has certain advantages over other biometric traits. Its smaller size and uniform color distribution lead to low storage and computation requirements. The position of the ear is fixed at the side of the face. In comparison with the face, the ear is not affected by facial expressions or cosmetics [1]. According to medical studies, the growth of the ear is roughly constant in the age group of 4 to 70 years [2]. In some studies, a multimodal biometric combining face and ear shows better recognition results than face alone. Ear identification accuracy is reduced in cases of occlusion, illumination change and pose variation.

    Based on the observation that local texture descriptors give better recognition results, our method uses the Local Binary Pattern (LBP) for feature extraction. The IIT Delhi and RR databases were used for the experimental study.

    The rest of the paper is organized as follows. Section 2 discusses the literature review. The proposed approach is described in Section 3. Experimental analysis and conclusions are presented in Sections 4 and 5, respectively.

  2. LITERATURE REVIEW

    The French criminologist Bertillon [3] suggested the use of the ear for human identification more than a century ago. Iannarelli [2] examined more than 10,000 ears and, based on 12 measurements, found that the ear can be used to differentiate individuals. Based on the feature extraction technique, 2D ear recognition is classified into geometric/statistical, local, holistic/global and hybrid approaches. Geometric techniques, as the name suggests, use geometric features of the ear such as shapes and curves. The first computerized system based on geometrical features was developed by Burge and Burger [4]. Their method first extracts curves from the ear contours; graph matching over the Voronoi diagram of the contours is then applied for recognition. Moreno et al. [5] used geometrical features such as ear shape and wrinkles for ear recognition. Geometric methods do not give good results under poor image quality, occlusion by hair or spectacles, lighting changes and pose variation.

    Holistic techniques use features from the ear structure as a whole. Hurley et al. [6] used a force field technique for ear recognition. This technique computes a force field from the input ear image: the image is viewed as a collection of mutually attracting particles that act as the sources of a Gaussian force field. Victor et al. [7] and Chang et al. [8] used principal component analysis to characterize the ear as an "eigen-ear" for recognition.

    Local descriptors extract features from local areas of the ear structure. Nanni and Lumini [9] used Gabor features from selected color spaces; for the UND E database the recognition rate is 84%. Pflug et al. [10] used local phase quantization (LPQ) features and achieved a rank-1 recognition rate of 93.1%. Benzaoui et al. [11] used binarized statistical image features (BSIF) on the IIT Delhi I and IIT Delhi II databases and achieved recognition rates of 96.7% and 97.3%, respectively. Pflug et al. [12] used a hybrid technique combining different texture features, namely LPQ, BSIF, LBP and HOG; their experiments were conducted on three datasets: UND-J2, AMI and IITK. Hybrid approaches are computationally complex compared to holistic or local techniques.

  3. PROPOSED APPROACH

    In this section our proposed ear recognition technique is discussed in detail. Fig. 1 shows the proposed approach.

    1. Preprocessing

    2. Feature Extraction

      Fig. 3 Ear Detection

      Fig. 1 Proposed technique

      In pre-processing, the skin area of the input side-face image is segmented and prepared for edge computation. Skin segmentation eliminates all non-skin pixels from the image; the YCbCr color space is used for this step. After skin segmentation, an edge map of the skin regions is obtained using the Canny edge detector. In the proposed technique, an ear template is created by averaging the intensities of a set of ear images taken from the databases. After computing the distance transforms of the edge maps of the face and of the ear template, a cross-correlation-based search is used to locate the ear in the side-face image. The detected ear is size-normalized so that all ears have the same size. Fig. 2 shows the flow chart of the ear localization used in our approach.

      Fig. 2 Flow chart for ear localization
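The matching step of the flow chart above can be sketched in pure numpy. This is a minimal illustration of a normalized cross-correlation search of a template distance transform over a face-edge distance transform; the skin segmentation, Canny edge detection and distance-transform computation are assumed done upstream (e.g. with OpenCV), and the function name and exhaustive window scan are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def cross_correlate_search(edge_dt, template_dt):
    """Slide the ear-template distance transform over the face-edge
    distance transform and return the top-left corner with the highest
    normalized cross-correlation score."""
    th, tw = template_dt.shape
    H, W = edge_dt.shape
    t = template_dt - template_dt.mean()
    t_norm = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = edge_dt[i:i + th, j:j + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * t_norm
            score = (w * t).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

The best-scoring corner gives the candidate ear region, which is then cropped and size-normalized.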

      For feature extraction our method uses the Local Binary Pattern (LBP), a local texture descriptor that works at the pixel level by comparing the grey-level value of a pixel to the values in its neighborhood. The LBP operator, introduced by Ojala et al. [13], is a grey-scale, illumination-invariant texture descriptor. The size of the neighborhood is determined by a radius.

      Every pixel within the radius whose value is greater than or equal to that of the center pixel is assigned the binary value 1, whereas every pixel with a smaller grey-level value than the center pixel is assigned the binary value 0. The binary values of the neighborhood pixels are concatenated, in either clockwise or anticlockwise order, to form a binary string that encodes the center pixel.

      The LBP-based ear descriptor is computed by sliding a window over the LBP image; from each sub-window a local histogram with 256 bins is extracted. The LBP code generation is shown in Fig. 4.

      Fig. 4 Calculation of LBP operator applied on normalized ear image
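The steps above can be sketched as follows, assuming the common 3x3 neighborhood (radius 1), a clockwise bit order starting at the top-left neighbour, and "greater than or equal" for the comparison; the non-overlapping window size of 16 is an illustrative choice, not necessarily the paper's setting.

```python
import numpy as np

def lbp_code(patch):
    """LBP code of a 3x3 patch: neighbours >= centre get bit 1,
    collected clockwise starting from the top-left neighbour."""
    center = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]  # TL, T, TR, R, BR, B, BL, L
    bits = [1 if patch[r, c] >= center else 0 for r, c in order]
    return sum(b << (7 - i) for i, b in enumerate(bits))

def lbp_image(img):
    """Apply the 3x3 LBP operator to every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = lbp_code(img[i:i + 3, j:j + 3])
    return out

def lbp_histograms(lbp, win=16):
    """Slide a non-overlapping window over the LBP image and concatenate
    the 256-bin local histograms into one feature vector."""
    feats = []
    h, w = lbp.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            hist, _ = np.histogram(lbp[i:i + win, j:j + win],
                                   bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)
```

The concatenated local histograms form the raw LBP feature vector that is subsequently reduced by PCA.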

      After the LBP features are extracted, PCA is applied for dimensionality reduction: it keeps only the directions with the largest variation. First, a row vector is created by reading each image pixel by pixel, row by row; all the row vectors are combined to form a matrix. The covariance matrix in PCA is calculated as in equation (1):

      cov(x_i, x_j) = E[(x_i − μ_i)(x_j − μ_j)]      (1)

      for i, j = 1, 2, 3, …, n, where E is the mathematical expectation and μ_i denotes the mean of x_i.

      A sample image from our own database, the RR database, together with the detected ear, is shown in Fig. 3.

      In principal component analysis (PCA), an orthogonal linear transformation maps the data to a new coordinate system such that the greatest variance under any projection of the data lies on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.
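The projection described above can be sketched with numpy; eigen-decomposition of the covariance matrix of equation (1) is one standard way to obtain the principal components. The function name and interface are our own illustrative choices.

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-vector samples X (n_samples x n_features) onto the
    k principal components with the largest variance."""
    mu = X.mean(axis=0)
    Xc = X - mu                       # centre the data
    cov = np.cov(Xc, rowvar=False)    # covariance matrix of equation (1)
    vals, vecs = np.linalg.eigh(cov)  # symmetric matrix, ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]  # keep the k largest-variance directions
    W = vecs[:, idx]
    return Xc @ W, W, mu
```

At test time, a probe feature vector is centred with the stored mean `mu` and projected with the same `W` before KNN matching.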

    3. Classification

      For classification, a KNN classifier was employed in our approach. K-nearest neighbors is a classification algorithm that uses a measure of similarity to classify objects. The distance between a test sample and all training samples is computed to determine the k training samples closest to the test sample, and the test sample is assigned to the class best represented among these k neighbors. Our KNN uses the Euclidean distance.
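A minimal sketch of a majority-vote KNN rule with Euclidean distance, in pure numpy; the function name and the default k=3 are illustrative assumptions, not values from the paper.

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, test_x, k=3):
    """Classify one test vector by majority vote among its k nearest
    training samples under Euclidean distance."""
    dists = np.linalg.norm(train_X - test_x, axis=1)  # distance to every sample
    nearest = np.argsort(dists)[:k]                   # indices of k closest
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

In the recognition pipeline, `train_X` holds the PCA-reduced LBP vectors of the gallery images and `test_x` the vector of a probe ear.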

  4. EXPERIMENTAL ANALYSIS

    For evaluation we extract features from the ear images of the IIT Delhi and RR databases. IIT Delhi is a standard database. We created our own database, the RR database, which contains 300 color images of 100 subjects. Each dataset is divided into a training and a testing set. Features extracted from the training samples are compared with the features of the test samples to compute matching scores. LBP features are extracted from both blocked and non-blocked images. LBP is illumination invariant and has a low computational load. PCA is then applied to the extracted LBP features to reduce the dimension of the feature vector. Table 1 summarizes the recognition accuracies using each feature alone and the combined features. From Table 1, the combination of LBP and PCA gives better results than either feature alone.

    Table 1 Summary of recognition accuracies

    Features  | Database    | Classifier | Accuracy
    LBP       | IIT Delhi   | KNN        | 93%
    LBP       | RR database | KNN        | 87.5%
    PCA       | IIT Delhi   | KNN        | 85%
    PCA       | RR database | KNN        | 80.17%
    LBP+PCA   | IIT Delhi   | KNN        | 95.3%
    LBP+PCA   | RR database | KNN        | 88.6%

    The following graph shows recognition accuracies on the IIT Delhi database only, for a comparative study with the state of the art.

    Fig. 5 Recognition accuracy on IIT Delhi

  5. CONCLUSION

In this paper a novel approach to ear recognition is presented. The proposed approach performs automatic ear segmentation using template matching and recognition using the LBP texture descriptor. LBP combined with PCA gives better results than LBP or PCA alone. Features extracted from blocked images give better results than those from non-blocked images.

REFERENCES

    1. A. Kumar, C. Wu, Automated human identification using ear imaging, Pattern Recognition (2011), doi:10.1016/j.patcog.2011.06.005.

    2. A. Iannarelli, Ear Identification, Forensic Identification Series, Paramont Publishing Company, Fremont, California, 1989.

    3. A. Bertillon, Identification Anthropométrique: Instructions Signalétiques, 1885.

    4. M. Burge, W. Burger, Ear biometrics, in: Biometrics: Personal Identification in Networked Society, Springer US, Boston, MA, 1996, pp. 273–285.

    5. B. Moreno, A. Sánchez, J. F. Vélez, On the use of outer ear images for personal identification in security applications, in: Proceedings of the International Carnahan Conference on Security Technology, IEEE, 1999, pp. 469–476.

    6. D. J. Hurley, M. S. Nixon, J. N. Carter, Automatic ear recognition by force field transformations, in: Proceedings of the Colloquium on Visual Biometrics, IET, 2000, pp. 71.

    7. B. Victor, K. Bowyer, S. Sarkar, An evaluation of face and ear biometrics, in: Proceedings of the International Conference on Pattern Recognition, Vol. 1, IEEE, 2002, pp. 429–432.

    8. K. Chang, K. W. Bowyer, S. Sarkar, B. Victor, Comparison and combination of ear and face images in appearance-based biometrics, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (9) (2003) 1160–1165.

    9. L. Nanni, A. Lumini, Fusion of color spaces for ear authentication, Pattern Recognition 42 (9) (2009) 1906–1913.

    10. A. Pflug, C. Busch, A. Ross, 2D ear classification based on unsupervised clustering, in: Proceedings of the International Joint Conference on Biometrics, IEEE, 2014, pp. 1–8.

    11. A. Benzaoui, N. Hezil, A. Boukrouche, Identity recognition based on the external shape of the human ear, in: Proceedings of the International Conference on Applied Research in Computer Science and Engineering, IEEE, 2015, pp. 1–5.

    12. A. Pflug, P. N. Paul, C. Busch, A comparative study on texture and surface descriptors for ear biometrics, in: Proceedings of the International Carnahan Conference on Security Technology, IEEE, 2014, pp. 1–6.

    13. T. Ojala, M. Pietikäinen, Unsupervised texture segmentation using feature distributions, Pattern Recognition 32 (1999) 477–486.
