An Analytical Study of Feature Extraction Methodologies in Iris Recognition

DOI : 10.17577/IJERTCONV8IS03020

J. Anne Priya

Ph.D. Research Scholar

Vivekanandha College of Arts and Sciences for Women (Autonomous)

Elayampalayam, Namakkal (DT) – 637205, India

Dr. P. Sumitra

Assistant Professor, Dept. of Computer Science, Vivekanandha College of Arts and Sciences for Women (Autonomous)

Elayampalayam, Namakkal (DT) – 637205, India

Abstract – Biometric iris recognition has gained considerable attention in recent years for a variety of applications in both cooperative and non-cooperative environments. Domains such as border security, airports, harbors, medicine, and corporate access control use iris recognition extensively for personal identification because the iris is a unique and stable feature. By analyzing the iris pattern, a biometric system can recognize a person accurately. Iris biometrics has been widely accepted because it is difficult to forge. Much current research aims to improve the speed and accuracy of iris recognition. This paper provides an analytical study of the various feature extraction methodologies applied in iris recognition.

Keywords: Feature extraction, Iris recognition, Gabor filter, DWT, DCT, Contourlet transform

  1. INTRODUCTION

    An individual can be identified accurately with biometrics. Biometric systems are designed to work with various traits such as fingerprint, face, iris, hand geometry, voice, and signature, based on behavioral and physiological characteristics. In terms of accuracy, iris recognition is among the most effective biometric methods, because the iris pattern is unique to each individual. Moreover, it remains stable throughout a person's lifetime. Since the iris is a visible external part of the eye, iris biometrics is considered highly reliable.

    The iris recognition process consists of image acquisition, preprocessing, iris localization and segmentation, feature extraction, feature vector formation, and classification of an image as genuine or impostor. These stages are shown in Figure 1. Each stage plays a distinct role: image acquisition captures iris images under sufficient illumination; improper acquisition introduces noise, so preprocessing removes noise and extracts the pupil; the iris is then localized by removing the pupil region, identified, and segmented. The segmented iris is used to form the feature vector, which is compared by classifiers to identify an authenticated person.
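    The early stages of this pipeline can be illustrated with a toy sketch. The synthetic eye image and simple intensity threshold below are purely illustrative, not a production segmentation method; they show only how pupil localization can follow acquisition and preprocessing:

```python
import numpy as np

# Synthetic "eye": bright background, darker iris ring, very dark pupil.
def make_eye(size=64):
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.sqrt(x**2 + y**2)
    img = np.full((size, size), 200.0)  # background
    img[r < 20] = 120.0                 # iris annulus
    img[r < 8] = 10.0                   # pupil
    return img

def locate_pupil(img, thresh=50):
    # The pupil is the darkest region; the centroid of the dark
    # pixels gives its centre.
    ys, xs = np.nonzero(img < thresh)
    return ys.mean(), xs.mean()

img = make_eye()
cy, cx = locate_pupil(img)
print(round(cy), round(cx))  # centre of the 64x64 image: 32 32
```

Real systems replace the threshold with robust localization (e.g. circular edge detection) before segmentation and feature extraction.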

    Fig. 1 Process in iris recognition system

    In general, a biometric system has two phases: enrollment and verification. During enrollment, a feature vector is formed by extracting features from the collected images and stored in the database. During verification, the test image undergoes preprocessing and feature extraction, and the resulting feature vector is compared by classifiers to identify an authenticated person.

  2. FEATURE EXTRACTION AND ITS METHODOLOGIES

    In an iris recognition system the most critical part is feature extraction. This process extracts the distinctive regions present in the iris and provides global, local, and significant information about it. Many algorithms exist for efficient feature extraction; widely used methods include the Grey Level Co-occurrence Matrix, Independent Component Analysis, Principal Component Analysis, the Contourlet Transform, the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT), the Log-Gabor filter, and the Gabor filter. Let us look into these methods in detail.

    1. Grey Level Co-occurrence Matrix

      The Grey Level Co-occurrence Matrix (GLCM), also known as the gray-level spatial dependence matrix, is a statistical method that examines the texture of an image by considering the spatial relationships of its pixels. The matrix is obtained by considering the distance and direction between pixel pairs, and features derived from it are used for texture representation [1]. The texture features calculated using the GLCM include Correlation, Contrast, Dissimilarity, Entropy, Energy, Homogeneity, Variance, Standard Deviation, and Mean.
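      A minimal sketch of a GLCM for a single horizontal offset, together with two of the features listed above (Contrast and Energy), can be written in plain NumPy; the 4x4 test image is illustrative:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey Level Co-occurrence Matrix for one (dx, dy) offset."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()                  # normalise to joint probabilities

def contrast(P):
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()     # weighs distant grey-level pairs

def energy(P):
    return (P ** 2).sum()               # large for uniform textures

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)
print(round(contrast(P), 3), round(energy(P), 3))
```

Libraries such as scikit-image provide the same computation for multiple distances and angles at once.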

    2. Independent Component Analysis

      ICA deals with independent components and focuses on their mutual independence. It decomposes a mixed signal into independent source signals, which is why ICA is also known as Blind Source Separation (BSS). Whereas PCA computes eigenvectors, ICA computes independent source vectors, from which the original signal can be reconstructed. For irises, the expansion coefficients formed by ICA are used as feature vectors. Because the ICA source vectors are independent, they are closer to the natural features in images, so this method captures the differences between irises well.
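      A full ICA estimation is beyond a short sketch, but its standard first step, whitening the observed mixtures so they are uncorrelated with unit variance, can be shown directly. The sources and mixing matrix below are illustrative; after whitening, ICA would still need to find the rotation that maximizes non-Gaussianity:

```python
import numpy as np

t = np.linspace(0, 8, 1000)
s = np.vstack([np.sign(np.sin(3 * t)),     # square-wave source
               np.mod(t, 1.0) - 0.5])      # sawtooth source
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # illustrative mixing matrix
x = A @ s                                  # observed mixtures

# Whitening: linearly transform the mixtures so their covariance is I.
xc = x - x.mean(axis=1, keepdims=True)
cov = xc @ xc.T / xc.shape[1]
vals, vecs = np.linalg.eigh(cov)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # ZCA whitening matrix
z = W @ xc
print(np.allclose(z @ z.T / z.shape[1], np.eye(2)))  # True
```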

    3. Principal Component Analysis

      The main goal of PCA is to find a new set of dimensions that are mutually orthogonal and ranked by the variation they explain. PCA finds patterns in high-dimensional data by compressing a large number of correlated variables into a small number of uncorrelated variables, called principal components. These components capture the maximum variation and reveal the internal structure of the data. PCA is one of the simplest eigenvector-based multivariate analysis techniques [5]; eigenvectors and eigenvalues are central to its computation. PCA proceeds as follows: first, compute the covariance matrix of the data points; second, compute its eigenvectors and eigenvalues; third, sort the eigenvectors in decreasing order of eigenvalue; fourth, choose the first k eigenvectors as the new k dimensions; and finally, transform the original n-dimensional data points into k-dimensional points.
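      The five steps above map directly onto a few lines of NumPy; the near-rank-1 test data is illustrative:

```python
import numpy as np

def pca(X, k):
    """Project n-dimensional rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                 # centre the data
    C = np.cov(Xc, rowvar=False)            # 1. covariance matrix
    vals, vecs = np.linalg.eigh(C)          # 2. eigenvalues / eigenvectors
    order = np.argsort(vals)[::-1]          # 3. sort by decreasing eigenvalue
    W = vecs[:, order[:k]]                  # 4. keep the first k eigenvectors
    return Xc @ W                           # 5. transform n-D points to k-D

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0, 0.5]])  # ~rank-1 data
X += 0.01 * rng.normal(size=(200, 3))                        # small noise
Y = pca(X, 1)
print(Y.shape)  # (200, 1)
```

Because the data here lie almost on a single line, the first principal component captures nearly all of the variance.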

    4. Contourlet Transform

      The Contourlet Transform uses multiscale decomposition and directional filters. Its core idea is that sparse image expansions are obtained by applying a multiscale transform together with a local directional transform. Nearby basis functions at the same level are grouped into linear structures. Directional decomposition is applied after multiscale decomposition, so the transform provides multiple directions at a single level. It consists of two major steps: Laplacian Pyramid (LP) decomposition and the Directional Filter Bank (DFB). Each LP level generates a down-sampled low-pass version of the original image and the difference between the original and the prediction, which yields a band-pass image [3]. The filter bank design captures the input images effectively.
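      The LP step can be illustrated with a deliberately crude pyramid level that uses nearest-neighbour down/up-sampling in place of a proper low-pass filter. It shows only the decompose/predict/residual structure and the perfect-reconstruction property; the DFB stage of the contourlet is omitted:

```python
import numpy as np

def lp_level(img):
    """One Laplacian-pyramid level: coarse image plus band-pass residual."""
    low = img[::2, ::2]                            # crude 2x down-sampling
    pred = np.repeat(np.repeat(low, 2, 0), 2, 1)   # up-sampled prediction
    band = img - pred                              # band-pass difference image
    return low, band

def lp_reconstruct(low, band):
    return np.repeat(np.repeat(low, 2, 0), 2, 1) + band

rng = np.random.default_rng(2)
img = rng.random((8, 8))
low, band = lp_level(img)
print(np.allclose(lp_reconstruct(low, band), img))  # True
```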

    5. Discrete Wavelet Transform

      The DWT decomposes a signal into a set of basis functions known as wavelets, transforming a discrete-time signal into a discrete wavelet representation. Compression is of two types: lossless and lossy. In lossless compression the reconstructed image is digitally identical to the original, but only modest compression is achieved; in lossy compression, redundant signal content is discarded. Medical images demand high accuracy without any loss of information, so the DWT, whose time-scale representation provides efficient multi-resolution analysis, is used. A human identification technique using images of the iris and the wavelet transform was introduced by W. Boles and B. Boashash [4].
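      A one-level 1-D Haar DWT, the simplest wavelet, illustrates the decomposition into approximation and detail coefficients and its lossless invertibility:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar level: interleave the reconstructed samples."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

sig = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_dwt(sig)
print(np.allclose(haar_idwt(a, d), sig))  # True
```

For images the same filters are applied along rows and columns, giving the familiar LL, LH, HL, HH sub-bands.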

    6. Discrete Cosine Transform

      The DCT is used to express a signal as a sum of sinusoids. Like the DFT, it works on discrete data points; the major difference is that the DCT uses only cosine functions, whereas the DFT uses both sines and cosines. The DCT shows exceptional energy compaction on highly correlated images [7]. The mathematical formula for the DCT is given below:

      F(u,v) = α(u) α(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]   (1)

      where α(u) = √(1/N) for u = 0 and α(u) = √(2/N) for u = 1, 2, …, N-1, and α(v) is defined likewise.

      The DCT is separable, orthogonal, and real, and fast algorithms exist for its computation. It is used efficiently to obtain a clear frequency distribution of an image.
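      Because the DCT is separable, Equation (1) can be implemented as a matrix product applied to the rows and columns of the image, and because the DCT matrix is orthogonal, its transpose inverts the transform. The 8x8 input below is illustrative:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix with entries a(u) cos((2x+1)u*pi / 2N)."""
    a = np.full(N, np.sqrt(2.0 / N))
    a[0] = np.sqrt(1.0 / N)
    u = np.arange(N)[:, None]                  # frequency index (rows)
    x = np.arange(N)[None, :]                  # sample index (columns)
    return a[:, None] * np.cos((2 * x + 1) * u * np.pi / (2 * N))

def dct2(f):
    C = dct_matrix(f.shape[0])
    return C @ f @ C.T                         # rows then columns (Eq. 1)

def idct2(F):
    C = dct_matrix(F.shape[0])
    return C.T @ F @ C                         # C is orthogonal: inverse = C^T

rng = np.random.default_rng(3)
f = rng.random((8, 8))
print(np.allclose(idct2(dct2(f)), f))  # True
```

Orthogonality also means the transform preserves signal energy, which is why quantizing small DCT coefficients gives graceful lossy compression.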

    7. Log-Gabor Filter

      The logarithmic Gabor (log-Gabor) filter was introduced as an alternative to the ordinary Gabor filter. It is based on the observation that, on a logarithmic frequency scale, natural images are better coded by filters with a Gaussian transfer function. The frequency response of the log-Gabor filter is given below:

      G(f) = exp{ -0.5 × [log(f / f0)]² / [log(σ / f0)]² }   (2)

      where f0 is the centre frequency and the ratio σ/f0 controls the filter bandwidth.
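      Equation (2) can be evaluated directly. In this sketch sigma_ratio stands for σ/f0, and the values of f0 and sigma_ratio are illustrative:

```python
import numpy as np

def log_gabor(f, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor frequency response of Eq. (2); G(f0) = 1."""
    f = np.asarray(f, dtype=float)
    G = np.zeros_like(f)
    nz = f > 0                          # log-Gabor has no DC (f = 0) response
    G[nz] = np.exp(-0.5 * np.log(f[nz] / f0) ** 2
                   / np.log(sigma_ratio) ** 2)
    return G

freqs = np.array([0.0, 0.05, 0.1, 0.2, 0.4])
print(log_gabor(freqs)[2])  # peak response 1.0 at f = f0
```

The zero DC response and the long tail toward high frequencies are the properties that distinguish it from the ordinary Gabor filter.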

    8. Gabor Filter

      A Gabor filter is created by modulating a sine or cosine wave with a Gaussian. Because a pure sine wave is localized in frequency but not in space, the modulation gives the best possible joint localization of frequency and space: modulating the sine wave with a Gaussian trades some frequency localization for spatial localization. The filter comes in pairs, one symmetric and one antisymmetric, and a filter bank is formed by varying the scale, frequency, and orientation of the filters. The real part of the filter is a cosine modulated by a Gaussian, and the imaginary part a modulated sine; the real filters are therefore even-symmetric and the imaginary filters odd-symmetric components [6]. The centre frequency of the filter is specified by the frequency of the sine/cosine wave, and the bandwidth by the width of the Gaussian.
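      A minimal complex Gabor kernel (all parameter values illustrative) shows the even-symmetric real part and odd-symmetric imaginary part described above:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, freq=0.25, theta=0.0):
    """Complex Gabor kernel: Gaussian envelope times cos + i*sin carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2)) # Gaussian envelope (bandwidth)
    carrier = np.exp(2j * np.pi * freq * xr)      # cos (real) + i*sin (imaginary)
    return env * carrier

g = gabor_kernel()
# Real part is even-symmetric, imaginary part odd-symmetric:
print(np.allclose(g.real, g.real[::-1, ::-1]),
      np.allclose(g.imag, -g.imag[::-1, ::-1]))  # True True
```

Iris codes in Gabor-based systems are typically formed by quantizing the phase of the filtered iris texture.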

  3. CONCLUSION

In recent years iris recognition has become highly popular. In this paper we have analyzed various methods used for feature extraction in iris recognition systems. Iris biometrics continues to gain popularity because the iris is a stable and unique trait. Each method described in this paper has its own advantages and importance in different applications. Researchers continue to focus on increasing the accuracy and reducing the time of iris recognition.

REFERENCES

    1. Shijin Kumar P. S. and Dharun V. S., "Extraction of Texture Features using GLCM and Shape Features using Connected Regions," IJET, vol. 8, no. 6, 2017.
    2. Robert M. Haralick, "Statistical and structural approaches to texture," Proc. IEEE, vol. 67, no. 5, pp. 786-804, 1979.
    3. F. Fanax Femy and S. P. Victor, "Feature Extraction using Contourlet Transform for Glaucomatous Image Classification," IJCA, vol. 95, no. 18, 2014.
    4. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, 1998.
    5. H. K. Rana, M. S. Azam, and M. R. Akhtar, "Iris recognition system using PCA based on DWT," SM J Biometrics Biostat, vol. 2, no. 3, p. 1015, 2017.
    6. S. G. Firake and P. M. Mahajan, "Comparison of iris recognition using Gabor wavelet, principal component analysis and independent component analysis," International Journal of Innovative Research in Computer and Communication Engineering, vol. 4, no. 6, pp. 12334-12342, 2016.
    7. Minakshi R. Rajput, "Iris feature extraction and recognition based on different transforms," International Journal of Engineering Research and Development, Nov. 2013.
