ATM Security System using Iris Recognition by Image Processing

— Iris recognition is a branch of biometric identification that offers a solution for personal identification, authentication and security by analyzing the random pattern of the iris. The iris recognition system automatically establishes the identity of a person from a new eye image by comparing it to the human iris patterns stored in an iris template database. The iris template database is created in three steps. The first step is segmentation: the Hough transform is used to segment the iris region from the eye images of the CASIA database, and noise due to eyelid occlusions and reflections is eliminated at this stage. The second step is normalization: a technique based on the Hough transform is employed to create a dimensionally consistent representation of the iris region. The last step is feature extraction, in which Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) are used to extract features. Finally, the template of the new eye image is compared against the iris template database using a Probabilistic Neural Network (PNN).


I. INTRODUCTION
This invention provides a complete security system based on iris recognition: it reliably grants access only to individuals whose iris matches the database and denies access to all others. The system comprises four stages. First is image acquisition, in which the image is captured under proper illumination; distance and other factors affecting image quality are taken into consideration. This step is crucial because image quality plays an important role in iris localization. Second is image segmentation, in which the iris region is isolated from the given image; iris segmentation is vital for the overall performance of the system. Third is feature extraction, in which unique features are extracted from the segmented iris to create an iris template; this template is later used for recognition. Fourth is matching, in which the extracted patterns are mapped onto the patterns already extracted and stored in the database [8]; the degree of similarity decides whether the identification is established or not.
Iris recognition is an automated method of biometric identification that uses the unique iris pattern of an individual [9]. The iris is an internal organ of the body that is visible from outside, and its complex random patterns are highly unique and stable. Among all the biometric technologies used for human authentication today, it is generally conceded that iris recognition is the most accurate. Compared with techniques such as face recognition, fingerprint recognition, and hand and finger geometry, iris recognition has been accepted as the most accurate biometric technique because of the stability, uniqueness and non-invasiveness of the iris pattern. The iris region, the part between the pupil and the white sclera, exhibits many minute visible characteristics such as freckles, coronas, stripes, furrows and crypts which are unique to each individual. Even the two eyes of the same person have different characteristics [8]. Furthermore, the chance of two people sharing the same characteristics is almost zero, which makes the system efficient and reliable where security is concerned.

II. RELATED WORK
Most commercial iris recognition systems use the patented algorithm developed by John Daugman [8], [9]. Daugman used an integro-differential operator to find the inner and outer boundaries of the iris, including the upper and lower eyelid boundaries. Daugman's rubber sheet model is used for normalization, wherein the circular iris region is unwrapped into a rectangular block of fixed dimensions. Feature extraction is performed using 2-D Gabor filters, and Hamming distance is used for code matching; the theoretical false match probability of this method is 1 in 4 million. Yang Hu et al. [7] proposed a method for the optimal generation of iris codes. It demonstrates that the traditional iris code is the solution of an optimization problem that minimizes the distance between the feature values and the iris codes, and shows that more effective iris codes can be obtained by adding terms to the objective function. Two additional objective terms were investigated: the first exploits the spatial relationships of the bits in different positions of an iris code, while the second mitigates the influence of less reliable bits. The two terms can be applied individually or in a combined scheme.
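Daugman's code-matching step can be illustrated with a short sketch. This is only a toy example, not the patented implementation: it assumes binary iris codes and occlusion masks stored as NumPy boolean arrays, and `hamming_distance` is a hypothetical helper name.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalized Hamming distance between two binary iris codes.
    Bits flagged as occluded in either mask are excluded from the
    comparison, as in Daugman's matching scheme."""
    valid = mask_a & mask_b                     # bits usable in both codes
    disagreements = (code_a ^ code_b) & valid   # XOR over valid bits only
    return disagreements.sum() / valid.sum()

# Toy 8-bit example (real iris codes are 2048 bits in Daugman's system).
a = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
b = np.array([1, 1, 1, 0, 0, 0, 1, 1], dtype=bool)
m = np.ones(8, dtype=bool)
print(hamming_distance(a, b, m, m))  # 3 mismatches / 8 bits = 0.375
```

A distance near 0 indicates the same iris; genuinely different irises cluster around 0.5, since unrelated bits agree by chance.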
Smereka [3] (2010) proposed a method capable of reliably segmenting non-ideal images affected by factors such as blurring, specular reflection, occlusion, lighting variation and off-angle capture. The Haar wavelet transform and a contour filter were used to pre-process the image, and the circular Hough transform with hysteresis thresholding was used to detect the edges of the iris; the ICE database was used to evaluate performance. Rai et al. [10] proposed a code-matching method that combines two algorithms to achieve a better accuracy rate. The circular Hough transform is used to extract the iris image, after which the zigzag collarette region is found and the eyelids and eyelashes are detected and removed using a parabola detection technique and trimmed median filters. 1-D Log-Gabor filters and Haar wavelets extract features from the zigzag collarette region of the iris, and the extracted features are recognized with a combination of a support vector machine and the Hamming distance. Experimental results show a remarkable recognition rate when features are extracted from this specific region of the iris, where more complex patterns are available. Sunil S. Harakannanavar et al. [1] proposed a method in which the iris and pupil boundaries are detected using the circular Hough transform and normalization is performed using Daugman's rubber sheet model. Fusion is performed at the patch level: the mask image and the unwrapped rubber sheet model are converted into 3×3 patches using a sliding-window technique, so that local information for individual pixels can be extracted. The final features of the iris images are extracted by block-based empirical mode decomposition used as a low-pass filter. Finally, the database images and the test image are compared using a Euclidean Distance (ED) classifier.

III. METHODOLOGY
A. Block Diagram
Fig. 1: Block diagram of the iris recognition system.
First, the image is acquired from the source. Pre-processing techniques (resizing, noise reduction and contrast adjustment) are then applied to remove noise and make the images more suitable for training. The image set is split into two sets: a training set and a validation set. During the training phase the model learns its parameters and attempts to classify the images into the five different classes. Once training is complete, the parameters are tuned to make the model more accurate. When a model of optimum accuracy is obtained, it is used to predict sample images from the validation set, and the PNN is used to assess the performance.

B. Data Acquisition
The data for training the model was obtained from the CASIA v3 database, which contains around 22,035 images; the data is highly unbalanced. In this project we use 20 images for matching.

C. Data Pre-processing
The images obtained from the source are somewhat noisy, so median filtering is applied. Pre-processing further uses the Canny edge operator, histogram equalization and a threshold function to handle eyelid occlusion and to detect reflections.
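A minimal sketch of the median-filtering, equalization and reflection-thresholding steps, using only NumPy (the actual system uses MATLAB functions, and Canny edge detection is omitted here for brevity; the 5×5 window size and the reflection threshold of 240 are illustrative assumptions):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def preprocess(eye_gray):
    """Noise removal, contrast enhancement and reflection masking."""
    # 5x5 median filter: suppresses noise while preserving edges.
    padded = np.pad(eye_gray, 2, mode='edge')
    windows = sliding_window_view(padded, (5, 5))
    denoised = np.median(windows, axis=(2, 3)).astype(np.uint8)

    # Histogram equalization via the cumulative distribution of pixel values.
    hist, _ = np.histogram(denoised.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    equalized = cdf[denoised].astype(np.uint8)

    # Bright specular reflections: flag pixels above a fixed threshold.
    reflections = equalized > 240
    return equalized, reflections

# Synthetic stand-in for a CASIA eye image.
img = (np.random.rand(64, 64) * 255).astype(np.uint8)
eq, refl = preprocess(img)
```

The reflection mask is kept separate so that flagged pixels can be excluded from segmentation and matching rather than blended into the iris texture.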

D. Segmentation
Segmentation removes non-useful regions, such as the parts outside the iris. The segmentation process determines the iris and pupil boundaries; the resulting region is then converted to a template of consistent dimensions in the normalization stage.
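The circular Hough transform used for this boundary detection can be sketched as a brute-force voting scheme: every edge pixel votes for every centre that would place it on a circle of each candidate radius, and the accumulator peak gives the best circle. This is a toy NumPy version, not the optimized implementation a real system would use.

```python
import numpy as np

def circular_hough(edge_map, radii):
    """Brute-force circular Hough transform over a list of candidate radii."""
    h, w = edge_map.shape
    accumulator = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for i, r in enumerate(radii):
        # Candidate centres for each edge point, at angle theta and radius r.
        cy = (ys[:, None] - r * np.sin(thetas)).round().astype(int)
        cx = (xs[:, None] - r * np.cos(thetas)).round().astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(accumulator[i], (cy[ok], cx[ok]), 1)
    # The accumulator peak gives the best (radius, centre) hypothesis.
    ri, py, px = np.unravel_index(accumulator.argmax(), accumulator.shape)
    return radii[ri], (py, px)

# Synthetic edge map: a circle of radius 10 centred at (32, 32).
img = np.zeros((64, 64), dtype=bool)
yy, xx = np.mgrid[:64, :64]
img[np.abs(np.hypot(yy - 32, xx - 32) - 10) < 0.5] = True
best_r, centre = circular_hough(img, [8, 9, 10, 11, 12])
```

In practice the pupil and iris circles are found by running this search twice, over the small radii expected for the pupil and the larger radii expected for the iris boundary.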

Algorithm 1: Pupil Detection
Input: eye image. Output: pupil centre and its radius.
i. Create a binary image of the input eye image by linear thresholding; the minimum pixel value of the given image is taken as the threshold value.
ii. Perform median filtering and morphological operations on the binary image to remove smaller parts and obtain a clean region which is most likely the pupil region.
iii. Calculate the centroid of the pupil region using region properties.
iv. To determine the radius of the pupil, perform the following operations:
a. Perform Canny edge detection to detect the vertical line on both rectangles and determine the centre point of each line, say p1(x1, y1) and p2(x2, y2); the detected lines are likely to lie on the iris boundaries.
b. Calculate the distances d1 and d2 of the points p1 and p2 from the centre.
c. The radius of the iris is obtained by taking the average of the distances d1 and d2.
v. With the centroid and radius, segment the iris region.
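Steps i and iii above can be sketched as follows, assuming a NumPy grayscale image. This is a simplification: the morphological clean-up of step ii and the rectangle-based radius estimate of step iv are omitted, and both the `margin` parameter and the area-based radius estimate are illustrative assumptions, not part of the original algorithm.

```python
import numpy as np

def detect_pupil(eye_gray, margin=10):
    """Threshold near the darkest pixel to isolate the pupil, then
    estimate its centre and radius from the resulting region."""
    # Step i: binary image via a threshold tied to the minimum pixel value.
    pupil = eye_gray <= (int(eye_gray.min()) + margin)
    # Step iii: centroid of the pupil region ("region properties").
    ys, xs = np.nonzero(pupil)
    cy, cx = ys.mean(), xs.mean()
    # Illustrative shortcut: radius from area, assuming a circular pupil.
    radius = np.sqrt(pupil.sum() / np.pi)
    return (cy, cx), radius

# Synthetic eye: bright background with a dark disc (pupil) at (30, 40).
img = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[np.hypot(yy - 30, xx - 40) <= 8] = 20
(cy, cx), r = detect_pupil(img)
print(cy, cx, r)  # centre ≈ (30.0, 40.0), radius ≈ 7.9
```

Tying the threshold to the image minimum works because the pupil is reliably the darkest region of a near-infrared eye image.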

E. Normalization
In the normalization step, the detected circular iris region is converted to a rectangular shape of uniform size; the Hough transform is used for this purpose. The Hough transform is a technique for isolating features of a particular shape within an image. Because it requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles and ellipses. A generalized Hough transform can be employed in applications where a simple analytic description of a feature is not possible; due to its computational complexity, however, we restrict this discussion to the classical Hough transform. Despite its domain restrictions, the classical Hough transform retains many applications, as most manufactured parts (and many anatomical parts investigated in medical imagery) have feature boundaries that can be described by regular curves. The main advantage of the Hough transform technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise.
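For concreteness, the fixed-size rectangular representation produced in this step can be sketched with Daugman's rubber sheet model mentioned in Section II, which remaps the annular iris region onto a rectangle of fixed dimensions. This toy version assumes concentric pupil and iris boundaries and nearest-neighbour sampling; the 32×128 resolution is an illustrative choice, not the system's actual setting.

```python
import numpy as np

def rubber_sheet(eye_gray, centre, r_pupil, r_iris,
                 radial_res=32, angular_res=128):
    """Unwrap the annulus between r_pupil and r_iris into a
    radial_res x angular_res rectangle (rows sweep radius,
    columns sweep angle), assuming concentric boundaries."""
    cy, cx = centre
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, radial_res)
    # Polar sampling grid mapped back into image coordinates.
    ys = (cy + radii[:, None] * np.sin(thetas)).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)).round().astype(int)
    ys = np.clip(ys, 0, eye_gray.shape[0] - 1)
    xs = np.clip(xs, 0, eye_gray.shape[1] - 1)
    return eye_gray[ys, xs]

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
strip = rubber_sheet(img, centre=(64, 64), r_pupil=20, r_iris=50)
print(strip.shape)  # (32, 128)
```

Because every iris is mapped to the same rectangle regardless of pupil dilation or camera distance, templates from different captures become directly comparable.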

F. Feature Extraction
Extracting features from the iris image is the most important stage of an iris recognition system, since recognition depends entirely on the features extracted from the iris pattern. We use the Local Binary Pattern (LBP) and the Gray Level Co-occurrence Matrix (GLCM).
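A minimal NumPy sketch of both descriptors, assuming a basic 8-neighbour LBP and a single horizontal co-occurrence offset; real systems typically use library implementations with more offsets, multiple distances and rotation-invariant codes, so treat the parameters here as illustrative.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each pixel's code
    records which neighbours are >= the centre pixel."""
    c = gray[1:-1, 1:-1].astype(np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx].astype(np.int32)
        code |= (neighbour >= c).astype(np.int32) << bit
    return code  # values in 0..255

def glcm_features(gray, levels=8):
    """Co-occurrence matrix of horizontally adjacent pixels, with the
    contrast, energy and homogeneity statistics used in Section IV."""
    q = gray.astype(np.int32) * levels // 256   # quantize to few gray levels
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                          # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

img = (np.random.rand(32, 32) * 255).astype(np.uint8)
codes = lbp_image(img)
contrast, energy, homogeneity = glcm_features(img)
```

In the full system the LBP codes are typically summarized as a histogram, and that histogram together with the GLCM statistics forms the feature vector fed to the classifier.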

G. Classifier
A probabilistic neural network (PNN) has three layers of nodes. The architecture described here recognizes K = 2 classes, but it extends to any number K of classes. The input layer contains N nodes, one for each of the N input features of a feature vector. These are fan-out nodes: each feature input node branches to all nodes in the hidden (or middle) layer, so that each hidden node receives the complete input feature vector x. The hidden nodes are collected into groups, one group for each of the K classes.
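This three-layer structure can be sketched as follows, assuming Gaussian (Parzen) hidden nodes with a single smoothing parameter sigma. This is a toy illustration of the PNN idea, not the exact classifier configuration used in the experiments.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Minimal PNN: the hidden layer holds one Gaussian (Parzen) node
    per stored training template, the summation layer pools the
    activations per class, and the output layer picks the class with
    the largest pooled activation."""
    # Hidden layer: Gaussian activation of each stored training pattern.
    d2 = ((train_X - x) ** 2).sum(axis=1)
    activations = np.exp(-d2 / (2 * sigma ** 2))
    # Summation layer: average activation per class.
    classes = np.unique(train_y)
    scores = np.array([activations[train_y == c].mean() for c in classes])
    # Output layer: winner-take-all decision.
    return classes[scores.argmax()]

# Toy K = 2 example with 2-D feature vectors.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.1, 0.0]), X, y))  # → 0
print(pnn_classify(np.array([5.0, 5.1]), X, y))  # → 1
```

A PNN trains in a single pass (it simply stores the templates), which suits a small enrollment database like the 20 iris templates used here; the cost is that classification time grows with the number of stored templates.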
IV. RESULTS
The recognition rate and false rejection rate were calculated on the CASIA v3 database, using 20 images for training and 10 images for testing. We also calculated the values of features such as contrast, energy and homogeneity.

CONCLUSION
The ATM security system using iris recognition allows only the genuine, authorized user to access the ATM. An iris recognition system is highly secure compared with the other systems currently in use. By identifying and comparing the user's iris, our system resists suspected attackers. In this project we built a system for ATM security: images are acquired from the database and processed by various MATLAB functions, and the database iris image is then compared with the output iris image. If they match, the user gains access to the account; otherwise the request is denied. The final system achieves a recognition rate of 94.6% using the PNN.
ACKNOWLEDGMENT
This paper and the research behind it would not have been possible without the exceptional support of our guides, Prof. Dipali Dhake and Prof. Rahul Parbat. Their enthusiasm, knowledge and exacting attention to detail have been an inspiration. We gladly take this opportunity to thank