Identifying People through Iris Recognition

DOI : 10.17577/IJERTV6IS020237


J. C. Martínez-Perales
Instituto Politécnico Nacional, Escuela Superior de Cómputo,
Av. Juan de Dios Batíz, esq. Miguel Othón de Mendizábal, Mexico City 07738, Mexico.

J. Cortés-Galicia
Instituto Politécnico Nacional, Escuela Superior de Cómputo,
Av. Juan de Dios Batíz, esq. Miguel Othón de Mendizábal, Mexico City 07738, Mexico.

B. Luna-Benoso
Instituto Politécnico Nacional, Escuela Superior de Cómputo,
Av. Juan de Dios Batíz, esq. Miguel Othón de Mendizábal, Mexico City 07738, Mexico.

Abstract: The fundamental idea of identity management is to establish an association between an individual and his or her personal identity. This process is known as pattern recognition and underpins the idea of what a biometric system is. This paper presents a biometric system for the identification of people through the iris.

Keywords: Image analysis, image segmentation, iris recognition, biometrics.

  1. INTRODUCTION

    The word biometrics is derived from the Greek bio (life) and metric (measure), so its meaning can be understood as the measurement of life [1].

    The term biometrics is adopted to encompass the technologies applied to the identification of individuals through the recognition of physical, chemical or behavioral characteristics, or other distinctive features, of a person [2, 3]. With the great rise of technological applications in areas such as information security, access control, e-commerce and bank transfers, among others, there is a need to create methodologies and systems capable of identifying individuals; because of this, biometrics has taken on a meaningful role within the activities of society [3].

    Identifying people, in security terms, is the process by which a user identifies him or herself among a group of people in order to gain access to a restricted resource [4]. For the development of security systems, three principles of identification are used: 1) proof of something one has (for example, a key or a card); 2) proof of something one knows (for example, a combination or a password); and 3) proof that the person is who he or she claims to be. Biometrics is based on the third principle [5].

    Within biometric systems, iris recognition represents a reliable technique because of the richness of the iris texture, in which multiple anatomical entities that make up its structure can be found, and because it has proven reliable and precise both in the development of biometric systems and as an object of study [6]. Many iris-focused biometric applications are related to security, such as migration and border control, criminal investigations, access to safe-deposit boxes, telecommunications (access to mobile phones), security in information systems (access to personal computers and networks) and access control systems in buildings [4].

    For this reason, this paper presents a methodology for the identification of people through iris recognition. First, images are captured by means of a digital microscope; then each image is analyzed by digital image processing, using methods in the spatial domain, in order to segment the region of interest; finally, the characteristics are extracted and the classifier is constructed to obtain the final identification results.

  2. METHODOLOGY

    1. Obtaining the image

      Firstly, using a digital microscope, a device was created to capture digital images of the iris, as shown in Fig. 1.

      Fig. 1: a) Digital microscope, b) device for capturing images of the iris.

      Fig. 2 shows images of the iris captured with the constructed device.

      Fig. 2: Images of the iris

    2. Segmentation of the iris

      Once the iris images have been captured, as shown in Fig. 2, the following steps are applied to each of the images. First, the image is transformed to grayscale and the Canny filter is applied. The Canny filter is an operator optimized for differential edge detection that consists of the following phases: computation of the gradient, non-maximum suppression of the gradient result, hysteresis thresholding of the non-maximum suppression result, and closing of open contours [7]. Fig. 3 shows the result of converting an iris image to grayscale and, subsequently, the result of applying the Canny filter.

      Fig. 3: a) Grayscale image, b) Image resulting from applying the Canny filter.
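      As an illustration of this step, the following minimal Python sketch (assuming OpenCV is used; the file name and the two Canny thresholds are illustrative assumptions, not values reported in the paper) converts a captured image to grayscale and applies the Canny filter:

      import cv2

      # Illustrative file name; the actual images come from the capture device of Fig. 1.
      iris_bgr = cv2.imread("iris_capture.png")
      iris_gray = cv2.cvtColor(iris_bgr, cv2.COLOR_BGR2GRAY)

      # cv2.Canny internally performs gradient computation, non-maximum suppression
      # and hysteresis thresholding; the two thresholds below are assumed values.
      edges = cv2.Canny(iris_gray, 50, 150)

      cv2.imwrite("iris_edges.png", edges)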

      The images shown in Fig. 3, obtained after applying the Canny filter, were then smoothed with a Gaussian filter based on a Gaussian distribution [8], using the mask

      G = (1/16) [ 1 2 1 ; 2 4 2 ; 1 2 1 ].

      Fig. 4 shows the result of smoothing the image with the Gaussian filter.

      Fig. 4: Result of applying the Gaussian filter.
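      A minimal sketch of this smoothing step, assuming OpenCV and the 3x3 mask described above (file names are illustrative assumptions):

      import cv2
      import numpy as np

      # The 3x3 Gaussian mask of the text, normalized by 1/16.
      gaussian_kernel = (1.0 / 16.0) * np.array([[1, 2, 1],
                                                 [2, 4, 2],
                                                 [1, 2, 1]], dtype=np.float32)

      edges = cv2.imread("iris_edges.png", cv2.IMREAD_GRAYSCALE)  # assumed input file

      # filter2D convolves the image with an arbitrary kernel; ddepth=-1 keeps the input depth.
      smoothed = cv2.filter2D(edges, -1, gaussian_kernel)
      cv2.imwrite("iris_smoothed.png", smoothed)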

      Once images such as those of Fig. 4 were obtained, the Hough transform [12] was applied to detect the inner and outer circles that enclose the iris, and these circles were drawn on the original color image as shown in Fig. 5. Note that the inner circle allows the pupil to be removed, as shown in Fig. 5.

      Fig. 5: Hough's transform to find circles.
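      The circle-detection step could be sketched as follows with OpenCV's Hough transform for circles, applied here to the smoothed edge image; the file names and every parameter value are assumptions that would have to be tuned for the actual capture device, not values reported in the paper:

      import cv2
      import numpy as np

      smoothed = cv2.imread("iris_smoothed.png", cv2.IMREAD_GRAYSCALE)  # assumed input
      color = cv2.imread("iris_capture.png")                            # original color image

      # HoughCircles returns (x, y, r) triplets for the detected circles.
      circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                                 param1=100, param2=30, minRadius=15, maxRadius=150)

      if circles is not None:
          for x, y, r in np.round(circles[0]).astype(int):
              # Draw each detected circle on the color image.
              cv2.circle(color, (int(x), int(y)), int(r), (0, 255, 0), 2)
      cv2.imwrite("iris_circles.png", color)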

      Once the zone corresponding to the person's iris was located, it was normalized by converting it to polar coordinates, as shown in Fig. 6.

      Fig. 6: Image of the normalized iris and in polar coordinates.
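      A simplified sketch of this normalization follows: a rubber-sheet style unwrapping that assumes the pupil and limbus circles share the same center, which the paper does not state explicitly. The function name and resolutions are illustrative assumptions.

      import cv2
      import numpy as np

      def normalize_iris(gray, pupil, limbus, radial_res=64, angular_res=360):
          # pupil and limbus are (x, y, r) circles found by the Hough transform.
          xc, yc, r_in = pupil
          _, _, r_out = limbus
          thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
          radii = np.linspace(r_in, r_out, radial_res)
          # Sample the image along rays going from the inner to the outer circle.
          xs = (xc + np.outer(radii, np.cos(thetas))).astype(np.float32)
          ys = (yc + np.outer(radii, np.sin(thetas))).astype(np.float32)
          # The result is a rectangular image: rows = radius, columns = angle.
          return cv2.remap(gray, xs, ys, interpolation=cv2.INTER_LINEAR)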

    3. Extraction of characteristics

      For the extraction of characteristics, a total of 11 statistical features were considered: the mean, standard deviation, smoothness, skewness, kurtosis, uniformity, average histogram, modified skew, modified standard deviation, entropy and modified entropy. Let N be the total number of pixels, L the total number of gray levels, I(f_ij) the gray-level value of pixel (i, j) in the image f(x, y), P(j) the probability that gray level j occurs in the image f(x, y), T(i) the number of pixels with gray value i in the image f(x, y), P(I(f_ij)) the probability that the gray level I(f_ij) occurs in the image f(x, y), and P(f_ij) = T(I(f_ij)) / N. Table 1 shows the expressions used to obtain each of these statistical values [11].

      Table 1. Statistical features and their expressions.

      Mean: m = Σ_{j=0}^{L-1} j P(j)
      Standard deviation: σ = [ Σ_{j=0}^{L-1} (j - m)^2 P(j) ]^{1/2}
      Smoothness: R = 1 - 1 / (1 + σ^2)
      Skewness: μ3 = Σ_{j=0}^{L-1} (j - m)^3 P(j)
      Kurtosis: μ4 = Σ_{j=0}^{L-1} (j - m)^4 P(j)
      Uniformity: U = Σ_{j=0}^{L-1} P(j)^2
      Average histogram: see [11]
      Modified skew: see [11]
      Modified standard deviation: see [11]
      Entropy: e = - Σ_{j=0}^{L-1} P(j) log2 P(j)
      Modified entropy: see [11]
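      A minimal sketch of how the standard features of Table 1 could be computed from the normalized iris image (the modified variants and the average histogram are omitted here because their expressions are not reproduced from the source; the function name is an assumption):

      import numpy as np

      def histogram_features(gray):
          # Histogram-based statistics over L = 256 gray levels.
          hist, _ = np.histogram(gray, bins=256, range=(0, 256))
          p = hist / hist.sum()                        # P(j)
          j = np.arange(256)
          m = np.sum(j * p)                            # mean
          sigma = np.sqrt(np.sum(((j - m) ** 2) * p))  # standard deviation
          smoothness = 1.0 - 1.0 / (1.0 + sigma ** 2)
          skewness = np.sum(((j - m) ** 3) * p)
          kurtosis = np.sum(((j - m) ** 4) * p)
          uniformity = np.sum(p ** 2)
          entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
          return np.array([m, sigma, smoothness, skewness,
                           kurtosis, uniformity, entropy])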

    4. Classification

      Once the characteristics were extracted, they were used to feed the chosen classifier model. For this case, we selected the associative classifier based on cellular automata proposed in [9, 10]. To describe the model, we first define what an associative memory is and what cellular automata are, and then present the associative model based on cellular automata. This description is given below.

      1. Associative Memories

        Associative memories are mathematical models whose main objective is to recover complete patterns from input patterns. The operation of an associative memory is divided into two phases: a learning phase, in which the associative memory is generated, and a recovery phase, in which the associative memory operates [9, 10].

        During the learning phase, the associative memory is constructed from a set of ordered pairs of patterns known in advance, called the fundamental set. Each pattern that defines the fundamental set is called a fundamental pattern. The fundamental set is represented as follows [1]:

        {(x^μ, y^μ) | μ = 1, 2, ..., p}

        where x^μ is an input pattern, y^μ is its associated output pattern and p is the number of fundamental patterns.

        During the recovery phase, the associative memory operates on an input pattern to recover the corresponding output pattern.
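        To make the two phases concrete, the following deliberately simple sketch shows an associative memory interface. It is not the cellular-automata model of [9, 10]; it only illustrates learning over a fundamental set and recovery from an input pattern, using nearest-neighbor recall as a stand-in for the actual recovery rule:

        import numpy as np

        class SimpleAssociativeMemory:
            def __init__(self):
                self.inputs = []    # fundamental input patterns x^mu
                self.outputs = []   # associated output patterns y^mu

            def learn(self, fundamental_set):
                # Learning phase: store the fundamental set {(x^mu, y^mu)}.
                for x, y in fundamental_set:
                    self.inputs.append(np.asarray(x, dtype=float))
                    self.outputs.append(y)

            def recover(self, x):
                # Recovery phase: return the output associated with the closest stored input.
                x = np.asarray(x, dtype=float)
                distances = [np.linalg.norm(x - xi) for xi in self.inputs]
                return self.outputs[int(np.argmin(distances))]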

        In the following section we present the associative memory model based on cellular automata, for which we make use of the definitions of cellular automata presented in [11].

      2. Associative Memories Based on Cellular Automata

In this work we used the associative memory model based on cellular automata proposed by the authors in [12]. The definitions required by the model, and the model itself, are presented next.

3. EXPERIMENTATION AND RESULTS

We used a data bank made up of 20 people; each person contributed 10 images of the iris of the right eye, giving a total of 200 images. The proposed methodology was applied to each of the images and a recognition rate of 90% was obtained.
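The evaluation could be organized as in the sketch below, which uses the feature extractor and memory interface sketched earlier. The leave-one-image-out split is purely an assumption for illustration; the paper only reports the final 90% figure and does not describe its train/test protocol.

import numpy as np

def evaluate(features_by_person, memory_factory):
    # features_by_person: dict mapping person id -> list of 10 feature vectors.
    correct, total = 0, 0
    for person, feats in features_by_person.items():
        for i, test_feat in enumerate(feats):
            # Train on every image except the one held out for testing.
            fundamental_set = [(f, p)
                               for p, fs in features_by_person.items()
                               for j, f in enumerate(fs)
                               if not (p == person and j == i)]
            memory = memory_factory()
            memory.learn(fundamental_set)
            correct += int(memory.recover(test_feat) == person)
            total += 1
    return correct / total   # fraction of correctly identified images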

ACKNOWLEDGEMENTS

The authors would like to thank the Instituto Politécnico Nacional (Secretaría Académica, EDD, COFAA, SIP, ESCOM) for its financial support for the development of this work.

REFERENCES

  1. Sánchez, Aplicaciones en la visión artificial y la biometría informática. Universidad Rey Juan Carlos, primera edición, Librería Editorial Dykinson, 2005. ISBN: 849772660X.

  2. E. L. van den Broek. Beyond Biometrics. Procedia Computer Science, Vol. 1, Issue 1, May 2010, pp. 2511-2519.

  3. K. Yang, E. Y. Du, Z. Zhou. Consent biometrics. Neurocomputing, Vol. 100, January 2012, pp. 153-162.

  4. A. Rattani, R. Derakhshani. Ocular biometrics in the visible spectrum: A survey. Image and Vision Computing, Vol. 59, March 2017, pp. 1-16.

  5. A. K. Jain, A. A. Ross, K. Nandakumar. Introduction to Biometrics. Springer, New York Dordrecht Heidelberg London, 2011.

  6. J. Daugman. How Iris Recognition Works. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004.

  7. M. A. Ingle, G. R. Talmale. Respiratory Mask Selection and Leakage Detection System Based on Canny Edge Detection Operator. Procedia Computer Science, Vol. 78, 2016, pp. 323-329.

  8. H. H. Afshari, S. A. Gadsden, S. Habibi. Gaussian filters for parameter and state estimation: A general review of theory and recent trends. Signal Processing, Vol. 135, June 2017, pp. 218-238.

  9. B. Luna-Benoso, C. Yáñez-Márquez. Autómatas Celulares. Computación y Sistemas, Vol. 16, No. 4, 2012, pp. 471-479. ISSN: 1405-5546.

  10. B. Luna-Benoso, J. C. Martínez-Perales, R. Flores-Carapia, A. L. Barrales-López. A method for the detection of diabetic retinopathy by analyzing the image of the retinal vascular network. Contemporary Engineering Sciences, Vol. 7, No. 3, 2014, pp. 117-134. dx.doi.org/10.12988/ces.2014.31064

  11. T. S. Subashini, V. Ramalingam, S. Palanivel. Automated assessment of breast tissue density in digital mammograms. Computer Vision and Image Understanding, Vol. 114, Issue 1, January 2010, pp. 33-43.

  12. A. Oualid Djekoune, K. Messaoudi, K. Amara. Incremental circle Hough transform: An improved method for circle detection. Optik - International Journal for Light and Electron Optics, Vol. 133, March 2017, pp. 17-31.
