Wavelet based Iris Recognition System

DOI : 10.17577/IJERTV4IS020472


Mrs. Manisha A. Nirgude, Dr. Sachin R. Gengaje

Department of Information Technology, Walchand Institute of Technology, Solapur, Maharashtra, India

Abstract - Iris recognition is one of the most secure biometric techniques. It consists mainly of three steps: iris localization, feature extraction, and feature matching or classification. Feature extraction is a critical part of the iris recognition process. Here we propose feature extraction using wavelet transforms and matching of iris images using Euclidean distance and cosine similarity.

Keywords - Iris recognition; Discrete Wavelet Transform; Euclidean distance; Cosine similarity

  1. INTRODUCTION

    Biometric authentication refers to technologies that measure and analyze human physical and behavioral characteristics for recognition and authentication purposes. In other words, we all have unique personal attributes that can be used for identification, including fingerprints, the pattern of the retina, and voice characteristics. The main benefit of biometric technology is that it is safer and more convenient than traditional systems. By replacing PINs, biometric techniques can potentially prevent unauthorized access to or fraudulent use of ATMs, smart cards, desktop PCs, and computer networks.

    Among the biometric modalities in use today, the iris is protected from the external environment by the cornea and the eyelid. Not subject to the deleterious effects of aging, the small-scale radial features of the iris remain stable and fixed from about one year of age throughout life. Hence iris recognition is regarded as one of the most reliable biometrics and has been widely applied in both public and personal security areas.

    A human eye is composed of three main parts: the sclera, the iris and the pupil. The sclera is the white region outside the iris. The pupil is at the centre of the eye, and its diameter relative to the iris diameter changes constantly, even under steady illumination. The iris, which has abundant texture information, lies between the sclera and the pupil and forms part of the middle coat of the eye, as shown in Fig. 1. The iris is a thin, circular structure responsible for controlling the diameter and size of the pupil and hence the amount of light reaching the retina; eye color is the color of the iris. The iris is composed of several layers. Its posterior surface consists of heavily pigmented epithelial cells that make it light-tight. Anterior to this layer are two cooperative muscles for controlling the pupil: in response to the amount of light entering the eye, muscles attached to the iris expand or contract the aperture at the center of the iris, known as the pupil. Next is the stromal layer, consisting of collagenous connective tissue in arch-like processes.

    Fig. 1: Human Eye Database Image

    Coursing through this layer are radially arranged, corkscrew-like blood vessels. In this paper we propose an iris recognition method based on the wavelet transform.

  2. RELATED WORK

    A typical iris recognition system includes image acquisition, preprocessing, feature extraction, and feature matching or classification. The feature extraction stage is crucial in the iris recognition process: in order to recognize individuals accurately, the most discriminating information present in the iris pattern must be extracted. In this stage, texture analysis methods are used to extract the significant features from the normalized iris image, and the extracted features are encoded to generate a feature vector.

    Many researchers have proposed feature extraction methods for iris images based on the wavelet transform, DCT, PCA, ICA, the Hilbert transform and so on. The first iris recognition system was developed and patented by John Daugman [1, 2]. He located the pupillary and limbic boundaries of the iris using the integro-differential operator and used Gabor filters for feature extraction. Wildes [3] used the Hough transform and a Laplacian pyramid algorithm. Boles and Boashash [4] extracted iris features using wavelet transform zero-crossings. Lim [5] used the Haar wavelet transform to process the iris image and learning vector quantization for classification. Xiaofu He [6] used the complex wavelet transform for feature extraction; phase information formed the feature vector, and images were classified using the Hamming distance. Jie Wang and Mei Xie [7] used wavelet packet analysis to characterize the iris texture at different scales and the Manhattan distance for matching.

  3. WAVELET TRANSFORM

    1. Why Wavelet?

      The wavelet transform overcomes the resolution problem of the STFT by using a variable-length window; the window function is called the wavelet, which literally means a small wave. Analysis windows of different lengths are used for different frequencies: for high frequencies a small window is used, giving better time resolution, while for low frequencies a large window is used, giving better frequency resolution. For this reason wavelets have been called a mathematical microscope: compressing the wavelet increases the magnification of the microscope so that small details become visible. A number of wavelet families are available, e.g. Haar, Daubechies and Coiflets, depending on the nature of the wavelet filter function.
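      For orientation, the discrete wavelet families mentioned above can be listed with the PyWavelets library (a tooling choice of ours; the paper itself names no software):

          import pywt

          # Wavelet families available in PyWavelets, including 'haar',
          # 'db' (Daubechies), 'coif' (Coiflets) and the biorthogonal family 'bior'.
          print(pywt.families())

          # Concrete filters within one family, e.g. the Daubechies wavelets db1, db2, ...
          print(pywt.wavelist('db'))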

    2. Discrete Wavelet Transform

    The discrete wavelet transform is a mathematical tool used for the analysis and synthesis of a signal by means of a scaling function, i.e. the father wavelet, and a wavelet function, i.e. the mother wavelet. The decomposition is shown in Fig. 2 and Fig. 3. At each level of decomposition, four subband images are generated; from these four images, either a single subband or all subbands can be selected for further decomposition.
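    As a minimal sketch of a single decomposition level, the 2-D DWT can be computed with the PyWavelets library (an assumed tool; the random array stands in for a real iris image):

        import numpy as np
        import pywt

        # Stand-in for a normalized greyscale iris image.
        image = np.random.rand(128, 512)

        # One level of the 2-D DWT. PyWavelets returns the approximation
        # coefficients plus three detail subbands (horizontal, vertical, diagonal),
        # i.e. the LL, LH, HL and HH subbands discussed in the text.
        approx, (detail_h, detail_v, detail_d) = pywt.dwt2(image, 'db8')
        print(approx.shape, detail_h.shape, detail_v.shape, detail_d.shape)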

  4. PROPOSED METHOD

    In the preprocessing step the iris is separated from the eye image and normalized. The normalized iris image is then decomposed using the DWT as shown in Fig. 2 and Fig. 3. At each level of decomposition, four subbands are generated, namely LL, LH, HL and HH, where L denotes low frequency and H denotes high frequency. Of these four, the LL subband contains the low-frequency components, i.e. the approximation coefficients. There are three types of decomposition, depending on how the subbands are further decomposed at the next level: pyramid structure, tree structure and packet analysis. In the pyramid type, the LL subband is recursively decomposed. In the tree type, any one of the four subbands, chosen according to the application or as the subband with the highest energy, is further decomposed. In packet analysis, all four subbands are further decomposed, producing four subbands for each subband. Different wavelets such as Haar, db2, db4, Mexican hat, Coiflets and Symlets can be used for feature extraction. The coefficients obtained in the subbands are used to create the feature vectors.
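    The pyramid and packet schemes can be contrasted with a short PyWavelets sketch; the wavelet, level count and input image here are placeholder assumptions rather than the paper's settings:

        import numpy as np
        import pywt

        image = np.random.rand(128, 512)  # stand-in for a normalized iris image

        # Pyramid structure: only the LL subband is split again at each level.
        # coeffs = [LL3, (details level 3), (details level 2), (details level 1)]
        coeffs = pywt.wavedec2(image, 'db8', level=3)
        LL3 = coeffs[0]

        # Packet analysis: every subband is split again at each level,
        # giving 4**level leaf subbands after `level` decompositions.
        wp = pywt.WaveletPacket2D(data=image, wavelet='db8', maxlevel=3)
        print(LL3.shape, len(wp.get_level(3)))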

      1. Algorithm for Feature Extraction

        1. Select images from different classes for the training phase.

        2. Train each image as follows:

        3. Apply the DWT to the selected image as shown in Fig. 3, which generates the four subbands LL, LH, HL and HH.

        4. Select the low-frequency components, i.e. the LL subband, for further decomposition, and repeat steps 3 and 4 for two more levels. The final approximation coefficients are taken as the feature vector.

        5. Store these features for further feature selection and for the testing phase.

          Fig. 2: Discrete Wavelet Transform

          Fig. 3: Three-Level Decomposition (the LL subband is decomposed again at each level)
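        A compact sketch of steps 3 and 4 using the PyWavelets library is given below; the three-level depth and the db8 wavelet match the experiments reported later, while the function name and the flattening of the final LL coefficients into a vector are our own assumptions:

            import numpy as np
            import pywt

            def extract_features(iris_image, wavelet='db8', levels=3):
                # Keep only the LL (approximation) subband at each level and return
                # the final approximation coefficients, flattened, as the feature vector.
                ll = np.asarray(iris_image, dtype=float)
                for _ in range(levels):
                    ll, _details = pywt.dwt2(ll, wavelet)  # discard LH, HL, HH each time
                return ll.ravel()

            # Example: build a small dictionary of training feature vectors per class.
            # `training_images` is assumed to map class label -> list of iris images.
            training_images = {'class_1': [np.random.rand(128, 512) for _ in range(3)]}
            trained = {label: [extract_features(img) for img in imgs]
                       for label, imgs in training_images.items()}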

  5. MATCHING

    Matching of a feature vector from the trained database with the feature vector of the image to be tested is done using the Euclidean distance, as shown in eq. (1).

    $d(X, Y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$    (1)

    If two vectors match exactly, the Euclidean distance between them is zero. We have also performed matching using the cosine similarity of the vectors, as shown in eq. (2).

    $\cos(\theta) = \dfrac{X \cdot Y}{\|X\| \, \|Y\|}$    (2)
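    Both matching measures of eqs. (1) and (2) can be written directly with NumPy; this is a generic sketch rather than the authors' implementation:

        import numpy as np

        def euclidean_distance(x, y):
            # Eq. (1): zero when the two feature vectors are identical.
            x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
            return float(np.linalg.norm(x - y))

        def cosine_similarity(x, y):
            # Eq. (2): one when the two feature vectors point in the same direction.
            x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
            return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))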

  6. EXPERIMENT AND RESULTS

    We have used the CASIA iris image database [8], collected in two sessions and consisting of 756 images of 108 different persons. For feature selection, a top-down approach was used: we started with the full set of features and then repeatedly deleted the first few features to retain the most significant ones. Finally, we selected 90 features as the feature vector. A threshold was set to compute the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), which we calculate as follows:

    $\mathrm{FAR} = \dfrac{\text{number of false acceptances}}{\text{total number of impostor comparisons}}$    (3)

    $\mathrm{FRR} = \dfrac{\text{number of false rejections}}{\text{total number of genuine comparisons}}$    (4)

    Several tests were conducted with different wavelets, namely db2, coif2, bior2.8, bior6.8 and db8, and with decomposition levels from 1 up to 4, using 90 classes.

    TABLE I. DIFFERENT WAVELET TRANSFORM RESULTS

    Wavelet Transform    Accuracy (%)
    Db2                  91.18
    Coiflet2             92.16
    Bior2.8              89.87
    Bior6.8              91.50
    Db8                  92.81

    We found higher accuracy with the db8 wavelet than with the other wavelets, as shown in Table I. 225 images from 51 classes were taken for training purposes; the remaining images were used for testing.
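    Accuracy figures of this kind could be obtained with a nearest-neighbour evaluation loop such as the sketch below; the gallery and test-set data structures and the function names are our assumptions, and `distance` is any dissimilarity measure, e.g. the Euclidean distance of eq. (1) or one minus the cosine similarity of eq. (2):

        def identify(test_vec, gallery, distance):
            # gallery: dict mapping class label -> list of training feature vectors.
            # Returns the label of the closest training vector (nearest neighbour).
            best = {label: min(distance(test_vec, g) for g in vecs)
                    for label, vecs in gallery.items()}
            return min(best, key=best.get)

        def accuracy(test_set, gallery, distance):
            # test_set: list of (true_label, feature_vector) pairs.
            hits = sum(identify(vec, gallery, distance) == label
                       for label, vec in test_set)
            return 100.0 * hits / len(test_set)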

    Thresholds for calculating the EER were determined as follows:

    1. Five images were taken from each class.

    2. Each image is matched against the other images of the same class, i.e. 5 x 4 = 20 matchings are performed per class, and the distances are stored.

    3. The minimum and maximum of all stored distances are taken as the start and end of the threshold range, separately for the Euclidean distance and the cosine similarity.

    The thresholds obtained from the above procedure are used to calculate the Equal Error Rate (EER) and the Receiver Operating Characteristic (ROC) curve. The EER is the point at which the two error rates (FAR and FRR) are equal. We achieved an EER of 0.29 with Euclidean distance matching and an EER of 0.20 with cosine similarity matching. The graphs are shown in Fig. 4 and Fig. 5.
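    One way to realize the threshold sweep and EER computation described above is sketched below, under our own assumptions about how the genuine and impostor distances are stored; it is not the authors' code:

        import numpy as np

        def far_frr_curve(genuine, impostor, thresholds):
            # genuine/impostor: distances for same-class and cross-class pairs.
            # A pair is accepted when its distance falls below the threshold.
            genuine, impostor = np.asarray(genuine), np.asarray(impostor)
            far = np.array([(impostor < t).mean() for t in thresholds])  # impostors accepted
            frr = np.array([(genuine >= t).mean() for t in thresholds])  # genuine pairs rejected
            return far, frr

        def equal_error_rate(genuine, impostor, thresholds):
            far, frr = far_frr_curve(genuine, impostor, thresholds)
            i = int(np.argmin(np.abs(far - frr)))  # threshold where FAR and FRR are closest
            return (far[i] + frr[i]) / 2.0, thresholds[i]

        # Example sweep over the threshold range found in step 3 above:
        # thresholds = np.linspace(min(genuine), max(genuine), 100)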

  7. CONCLUSION

In this paper, we have proposed a feature extraction method in which different wavelet transforms are applied at different decomposition levels, and we found that the db8 wavelet works best with three-level decomposition. We have also reported results with both cosine similarity and Euclidean distance matching, as stated above. The feature vector is small compared to Daugman's, although the accuracy is not yet comparable to Daugman's.

Fig. 4: FAR vs FRR graph using Euclidean Distance

Fig. 5: FAR vs FRR graph using Cosine Similarity

REFERENCES

  1. J. G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993.

  2. J. Daugman, "How Iris Recognition Works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004.

  3. R. P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, Sept. 1997.

  4. W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185-1188, Apr. 1998.

  5. S. Lim, K. Lee, O. Byeon, and T. Kim, "Efficient Iris Recognition through Improvement of Feature Vector and Classifier," ETRI Journal, vol. 23, no. 2, pp. 61-70, 2001.

  6. X. He and P. Shi, "Extraction of Complex Wavelet Features for Iris Recognition," in Proc. 19th International Conference on Pattern Recognition (ICPR 2008), Dec. 2008, pp. 1-4.

  7. J. Wang and M. Xie, "Iris Feature Extraction Based on Wavelet Packet Analysis," in Proc. International Conference on Communications, Circuits and Systems, vol. 1, June 2006, pp. 31-34.

  8. Chinese Academy of Sciences Institute of Automation, CASIA Iris Image Database (756 greyscale eye images). http://www.sinobiometrics.com
