Retinal Image Analysis for Biometrics

DOI : 10.17577/IJERTV7IS040431


V. Monicka

PG Scholar,

Department of Electronics and Communication, Easwari Engineering College,

Chennai (India)

M. Kamarajan

Associate Professor,

Department of Electronics and Communication, Easwari Engineering College,

Chennai (India)

Abstract — A design for retinal image analysis using feature detection and feature matching techniques is proposed. Biometric systems authenticate a person on the basis of his or her physical features, and a number of such systems, based on fingerprints, the iris, and so on, have been developed in the last few years. Retinal scans suit biometric security systems because the pattern of blood vessels in the retina is unique to each individual. Retinal images are acquired from the DRIVE and STARE databases, and various feature detection algorithms are used to detect and extract features. The original image is recovered from the distorted image using the MSAC algorithm; the extracted features are then compared, and feature matching is performed to determine the degree of matching and thereby identify and authorize the person. The main limitation of this process lies in image acquisition, which is why a retinal image database is used as the source of input.

Keywords — Digital Retinal Images for Vessel Extraction (DRIVE); STructured Analysis of the Retina (STARE); Features from Accelerated Segment Test (FAST); Speeded-Up Robust Features (SURF); Maximally Stable Extremal Regions (MSER)

  1. INTRODUCTION

    Biometrics comprises automated methods of recognizing a person based on a physiological or behavioural characteristic. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. As the level of security breaches and transaction fraud increases, the need for highly secure identification and personal verification technologies is becoming apparent. Biometric-based solutions are able to provide for confidential financial transactions and personal data privacy. The need for biometrics can be found in federal, state and local governments, in the military, and in commercial applications. Enterprise-wide network security infrastructures, government IDs, secure electronic banking, investing and other financial transactions, retail sales, law enforcement, and health and social services are already benefiting from these technologies. Biometric-based authentication applications include workstation, network, and domain access, single sign-on, application logon, data protection, remote access to resources, transaction security, and web security. Trust in these electronic transactions is essential to the healthy growth of the global economy.

    Utilized alone or integrated with other technologies such as smart cards, encryption keys, and digital signatures, biometrics is set to pervade nearly all aspects of the economy and our daily lives. Using biometrics for personal authentication is becoming convenient and considerably more accurate than current methods such as passwords or PINs. This is because biometrics links the event to a particular individual (a password or token may be used by someone other than the authorized user), is convenient (there is nothing to carry or remember), is accurate (it provides positive authentication), can provide an audit trail, and is becoming socially acceptable and inexpensive.

  2. FEATURE DETECTION ALGORITHMS

    In the detection of the blood vessels, the corners of the lines that form the vessels are used to identify them. Since the blood vessel lines are inconsistent and have numerous sharp bends [11], various feature detection algorithms are used to detect them and return corner points.

    Feature Extraction

    This is the process of obtaining the features that enable us to distinctly identify the subjects. The features of an image can be extracted using various extraction and detection algorithms provided in the MATLAB development software. They include:

    SURF feature detection and extraction.
    MSER feature detection and extraction.
    Harris feature detection and extraction.
    FAST feature detection and extraction.

    1. SURF Feature Detection Algorithm

      SURF is a quick and robust algorithm developed by Bay et al. for local, similarity-invariant image representation and comparison.

      The SURF methodology can be partitioned into three fundamental steps.

      The first step is to choose key feature points, such as edges, corners, blobs and T-intersections, at distinctive regions in the image. The second step is to use a feature vector to describe the surrounding neighbourhood of each feature point. This descriptor must be unique; at the same time, it ought to be robust to detection errors, noise, and photometric and geometric deformations. Finally, the descriptor feature vectors are matched among the different available images. A Fast-Hessian detector is used for finding feature points, based on a close approximation of the Hessian matrix at a given image point. Before the feature point descriptor is formed from the wavelet responses in a certain neighbourhood of the point, an orientation assignment needs to be done; this is carried out using responses to Haar wavelets, which is why a circular region is constructed around the detected feature points in the SURF algorithm. The fundamental advantage of the SURF methodology is its fast computation, which enables numerous real-time applications such as image mosaicing, tracking, and object recognition. It speeds up SIFT's detection process while preventing the quality of the recognized feature points from degrading.
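      As an illustration, a minimal MATLAB sketch of SURF detection and descriptor extraction on a retinal image follows; the file name is hypothetical, and the Computer Vision Toolbox functions detectSURFFeatures and extractFeatures are assumed to be available.

      % Load a retinal image and convert it to grayscale (hypothetical file name)
      I = rgb2gray(imread('retina_01.tif'));
      % Detect SURF points using the Fast-Hessian detector
      points = detectSURFFeatures(I);
      % Build the SURF descriptor vector around each detected point
      [features, validPoints] = extractFeatures(I, points);
      % Visualize the 20 strongest feature points
      imshow(I); hold on;
      plot(validPoints.selectStrongest(20));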

    2. MSER Feature Detection Algorithm

      This is a blob detection method for images. MSER stands for Maximally Stable Extremal Regions; the method was proposed by J. Matas in 2002. The algorithm detects and describes affine-invariant features, and it finds corresponding parts of two images whose differences are due to the movement of image objects. This gives the method an edge in producing a wide range of points for the matching process once feature extraction has been completed. The method is also very stable, as it selects only regions that remain nearly unchanged over a range of thresholds, and it can be employed for colour images as well.
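      A minimal MATLAB sketch of MSER detection on the grayscale image I from the previous sketch; the ThresholdDelta and RegionAreaRange values are illustrative choices, not parameters taken from this work.

      % Detect MSER regions; ThresholdDelta controls the stability test,
      % RegionAreaRange limits the size of accepted regions
      regions = detectMSERFeatures(I, 'ThresholdDelta', 2, ...
                                   'RegionAreaRange', [30 14000]);
      % Overlay the detected region pixels on the image
      imshow(I); hold on;
      plot(regions, 'showPixelList', true, 'showEllipses', false);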

    3. Harris Corner Detection Algorithm

      The Harris corner detection algorithm detects feature points by sliding a local detection window across the image; the candidate point is the centre of the window. The effect of shifting the window slightly in different directions is measured by the average variation in pixel intensity. In a flat region, shifting the window produces no change in pixel intensity in any direction [2]. When there is no change in pixel intensity along the edge direction, an edge is detected. When there is a significant change in pixel intensity in every direction, a corner is detected. The Harris corner detection algorithm thus provides a mathematical approach for determining whether the region found is flat, an edge, or a corner. A larger number of features is detected using this algorithm; it is scale variant, but invariant to rotation. The change in pixel intensity for the shift [u, v] is given as follows:

      E(u, v) = Σ(x,y) w(x, y) [I(x + u, y + v) − I(x, y)]²   (1)

      where w(x, y) is a window function, I(x, y) is the intensity of the individual pixel, and I(x + u, y + v) is the pixel intensity after the shift.

      The Harris corner detection algorithm proceeds as follows. First, the autocorrelation matrix M is calculated for each pixel (x, y) in the image. Gaussian filtering is then applied to each pixel, using the matrix M and a discrete two-dimensional zero-mean Gaussian function:

      Gauss = exp(−(u² + v²) / (2σ²))   (2)

      Finally, the corner measure R is calculated for each pixel (x, y):

      R = det(M) − k · trace(M)²   (3)

      A local maximum point is then chosen: the feature points whose pixel values correspond to local maxima of the interest measure are retained by the Harris corner detection method. The detection of corners is performed after setting the threshold value T.
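      The response computation of Eqs. (1)-(3) can be sketched directly in MATLAB (assuming the Image Processing Toolbox); the window scale sigma, the constant k, and the threshold T below are illustrative values, not ones specified in this work.

      % Image gradients of the (double-valued) grayscale image
      [Ix, Iy] = imgradientxy(im2double(I));
      % Entries of the autocorrelation matrix M, smoothed by a Gaussian window w(x, y)
      sigma = 2; k = 0.04;
      Ixx = imgaussfilt(Ix.^2,  sigma);
      Iyy = imgaussfilt(Iy.^2,  sigma);
      Ixy = imgaussfilt(Ix.*Iy, sigma);
      % Corner measure R = det(M) - k*trace(M)^2, Eq. (3)
      R = (Ixx.*Iyy - Ixy.^2) - k*(Ixx + Iyy).^2;
      % Keep local maxima of R above a threshold T
      T = 0.01 * max(R(:));
      corners = imregionalmax(R) & (R > T);

      In practice the built-in detectHarrisFeatures(I) performs the same steps.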

    4. FAST Feature Detection Algorithm

    Features from Accelerated Segment Test (FAST) is a feature detection method that uses corners as the basis of feature extraction. The extracted features can be used to aid in mapping objects and matching features across similar images. FAST corner detection was developed by Edward Rosten and Tom Drummond, who emphasized efficiency and high speed of operation while still maintaining a high level of accuracy. It is the fastest detection method compared to others such as SIFT and MSER, and it requires less time and less memory during computation. This makes the method highly recommended for video processing applications, where high performance and efficiency matter.
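    A minimal MATLAB sketch of FAST detection on the grayscale image I; the MinContrast value is an illustrative assumption.

    % Detect FAST corners; MinContrast sets the minimum intensity difference
    % between a candidate corner and its surrounding ring of pixels
    corners = detectFASTFeatures(I, 'MinContrast', 0.1);
    % Show the 50 strongest corners
    imshow(I); hold on;
    plot(corners.selectStrongest(50));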

    3. PROPOSED SYSTEM

      The system design is represented as a series of steps in the form of a block diagram. The design starts with the acquisition of the input image from the DRIVE database, followed by various steps to extract the features and find the percentage of matching.

      Fig 1. Flow chart of the proposed system

      The objective is to implement a system that extracts the features of a retinal scan to be used as an aid in a retinal identification system. The development tool used to extract these features is MATLAB, a high-level language and interactive environment for numerical computation, visualization, and programming. Emphasis is placed on the process of image processing and feature extraction, not on the process of capturing the retinal scan with a retinal scanner.

      Image Acquisition

      The photographs for the DRIVE database were obtained from a diabetic retinopathy screening program in The Netherlands. The screening population consisted of 400 diabetic subjects between 25 and 90 years of age. Forty photographs were randomly selected: 33 show no sign of diabetic retinopathy and 7 show signs of mild early diabetic retinopathy [3]. Each image has been JPEG compressed. The images were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45 degree field of view (FOV). Each image was captured using 8 bits per colour plane at 768 by 584 pixels. The FOV of each image is circular with a diameter of approximately 540 pixels. For this database, the images have been cropped around the FOV, and for each image a mask image is provided that delineates the FOV. The set of 40 images has been divided into a training set and a test set, each containing 20 images. For the training images, a single manual segmentation of the vasculature is available.

      Selection of Green Channel

      This is the process of converting the true-colour RGB image to a grayscale intensity image. It is done by eliminating the hue and saturation information of the image while retaining its luminance, where luminance is the amount of light emitted from a specific area of the image. The conversion gives a scale of the various intensities of the image, from lowest to highest, based on the light each part of the image emits [5]. MATLAB uses this same technique to perform the conversion, ranking the intensities on a scale of 0-255, where zero refers to the darkest shade of gray and 255 to the lightest. Here the green channel of the RGB image is selected, since it offers the highest contrast between the blood vessels and the background.
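      In MATLAB the green channel is simply the second plane of the RGB array; a minimal sketch (hypothetical file name):

      % Read the colour fundus image and keep only the green channel,
      % which gives the best vessel/background contrast
      RGB = imread('retina_01.tif');
      G = RGB(:,:,2);
      imshow(G);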

      Filtering and Smoothening

      Wiener filtering is a type of linear filtering that uses the local image variance. It produces better results than ordinary linear filtering because it is selective: it smooths strongly where the variance is low and only lightly where the variance is high, thereby preserving the high frequencies and edges of the image [1]. Both adaptive and median filters have been used to remove the noise in the images during the image enhancement process, owing to their advantages and superiority in terms of the quality and appearance of the images produced.
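      A minimal MATLAB sketch of the two filtering stages applied to the green channel G; the window sizes are illustrative assumptions.

      % Adaptive Wiener filtering based on the local mean and variance
      G  = im2double(G);
      Gw = wiener2(G, [5 5]);
      % Median filtering removes residual impulse (salt-and-pepper) noise
      Gm = medfilt2(Gw, [3 3]);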

      Estimation of Rotation Angle

      The geometric transformation between a pair of images is determined automatically [8]. When one image is distorted relative to another by rotation and scale, feature detection and geometric transform estimation are used to find the rotation angle and scale factor; the distorted image can then be transformed to recover the original image. A transformation corresponding to the matched point pairs is found using the statistically robust M-estimator SAmple Consensus (MSAC) algorithm, a variant of the RANSAC algorithm, which removes outliers while computing the transformation matrix.
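      This corresponds closely to the standard MATLAB workflow; a sketch follows, assuming original and distorted are grayscale images and the Computer Vision Toolbox is available.

      % Detect and describe SURF features in both images
      ptsO = detectSURFFeatures(original);
      ptsD = detectSURFFeatures(distorted);
      [fO, vO] = extractFeatures(original, ptsO);
      [fD, vD] = extractFeatures(distorted, ptsD);
      % Putative matches between the two descriptor sets
      idx = matchFeatures(fO, fD);
      % MSAC-based estimation of the similarity transform (rotation + scale),
      % discarding outlier matches
      tform = estimateGeometricTransform(vD(idx(:,2)), vO(idx(:,1)), 'similarity');
      % Recover the rotation angle from the inverse transformation matrix
      Tinv  = tform.invert.T;
      theta = atan2d(Tinv(2,1), Tinv(1,1));
      % Warp the distorted image back onto the original image grid
      recovered = imwarp(distorted, tform, 'OutputView', imref2d(size(original)));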

      Feature Matching

      All the detected feature points are matched so as to confirm that features come from corresponding locations in different images. Two main parameters are used for comparing the feature detection algorithms: accuracy and time complexity (run time). Even when an image is distorted, its best feature points should remain almost the same [6], so when feature matching is done between the original image and the distorted image, the larger the number of matching features out of the number of extracted features, the higher the accuracy. Accuracy is therefore a relative term that depends on the number of extracted features, and it is defined as the percentage of matched features relative to the extracted features.

      The image to be compared is read from the database and compared with the stored image. The features extracted from the input image are compared with the feature points of the stored image, and the percentage of matching is then computed using the following formula:

      Accuracy (%) = (Im / Iex) × 100   (4)

      where Im is the total number of matched features and Iex is the total number of extracted features. The lower the computational time, the better the performance of the algorithm. This can be measured using the run-and-time option provided in MATLAB: on clicking it, the profiler window opens, showing the timing data for each executed line of code and the total execution time.
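      A minimal sketch of Eq. (4) in MATLAB, assuming fInput and fStored are the descriptor matrices returned by extractFeatures for the input and stored images:

      % Match descriptors and compute the matching percentage of Eq. (4)
      idx = matchFeatures(fInput, fStored);
      Im  = size(idx, 1);        % total number of matched features
      Iex = size(fInput, 1);     % total number of extracted features
      accuracy = (Im / Iex) * 100;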

    4. SIMULATION RESULTS

      MATLAB's characteristics enable it to be used for the software implementation and to produce the simulated results on which improvements can be assessed, since it accurately implements all the steps shown in the block diagram (Figure 1).

      Fig 2. Input Image Acquired from DRIVE database

      Thresholding is the simplest method of image segmentation: from a grayscale image, thresholding can be used to create a binary image. Key feature points of the extracted blood vessels were detected using the various feature detection algorithms.
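      A minimal thresholding sketch in MATLAB; the use of Otsu's method (graythresh) is an assumption, as the text only states that thresholding is applied.

      % Global threshold chosen by Otsu's method, then binarization
      level = graythresh(Gm);
      BW = imbinarize(Gm, level);
      imshow(BW);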

      Fig 3. Feature detected on the binary image using FAST

      Fig 4. Feature detected on the binary image using SURF

      Fig 5. Feature detected on the binary image using MSER

      The feature points of two images with 100 percent matching are shown in Figure 6, and unmatched feature points are shown in Figure 7.

      Fig 6: 100 % matched Features

      Fig 7: Unmatched Features

      The table below shows the percentage of matched points obtained with the different feature detection and extraction algorithms when the stored image is compared with the various other input images; FAST is found to have better matching performance than the others.

                           MSER      FAST      SURF      Harris
      Image 1              16.61     0         1.8       1
      Image 2              15.63     0         0.8       0.8
      Image 3              15.30     16.667    1.8       1.4
      Image 4              13.35     16.667    1         0.4
      Image 5 (rotated)    100       100       100       99.6
      Image 6 (stored)     100       100       100       99.6

      Fig 8. Comparison of matching results in percentage

    5. CONCLUSION

Four important feature detection techniques were used to extract the features from the retinal images; the angle of rotation was also estimated, and the matching percentage with the base image was found. The FAST feature detection technique was found to provide a higher percentage of matching than the other techniques. The feature extraction of a retinal scan has been shown to be unique and therefore provides a high level of information security. This enables the method to be incorporated in areas that require a high security level, such as the control rooms of various buildings, including airports and military bases.

A single image is taken as the reference and compared with all other test images. The system can be improved by saving all the test images as templates, so that a single input image can be compared with all the saved templates. Further, the four different feature detection techniques can be compared in terms of their speed and accuracy.

REFERENCES

  1. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, (1989), "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 260-268.

  2. H. Farzin, H. A. Moghaddam, and M. S. Moin, (2008), "A Novel Retinal Identification System," EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 280635, pp. 306-315.

  3. J. A. Fodor, (1983), "Modularity of the Mind: An Essay on Faculty Psychology," MIT Press, Cambridge.

  4. M. Ortega, C. Marino, M. G. Penedo, M. Blanco, and F. Gonzalez, (2006), "Biometric Authentication Using Digital Retinal Images," Proceedings of the 5th WSEAS International Conference on Applied Computer Science (ACOS'06), pp. 422-427.

  5. C. P. Simon and I. Goldstein, (1935), "A New Scientific Method of Identification," New York State Journal of Medicine.

  6. X. Merlin Sheeba and S. Vasanthi, (2011), "An Efficient ELM Approach for Blood Vessel Segmentation in Retinal Images," Bonfring International Journal of Man Machine Interface, vol. 1, Special Issue, December 2011.

  7. A. Hoover, V. Kouznetsova, and M. Goldbaum, (2000), "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203-210.

  8. M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman, (2012), "An Ensemble Classification-Based Approach Applied to Retinal Blood Vessel Segmentation," IEEE Transactions on Biomedical Engineering, vol. 59, no. 9.

  9. J. Liu, Q. Xing, X. Yin, X. Mao, and F. Yu, (2015), "Pipelined Architecture for a Radix-2 Fast Walsh-Hadamard-Fourier Transform Algorithm," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 62, no. 11, pp. 1083-1087.

  10. U. Raghavendra, H. Fujita, S. V. Bhandary, A. Gudigar, J. H. Tan, and U. R. Acharya, (2018), "Deep Convolution Neural Network for Accurate Diagnosis of Glaucoma Using Digital Fundus Images," Information Sciences.

  11. Oakar Phyo and Aung Soe Khaing, "Automatic detection of optic disc and blood vessels from retinal images using image processing techniques," IJRET: International Journal of Research in Engineering and Technology, eISSN: 2319-1163, pISSN: 2321-7308.
