Enhanced Sclera Recognition on Color Image

DOI : 10.17577/IJERTCONV1IS06151


GOKUL RAJAN.V., B.E., M.E*

Department of Computer Science and Engineering, Bharadhidhasan Institute of Technology,

Anna University, Tiruchirappalli-620024. gokulrajan.v@gmail.com

Abstract – The blood vessel structure of the sclera is unique to each person and is therefore well suited for human identification (ID). Sclera recognition is one of the newer biometric recognition techniques. A critical step of the sclera recognition process is the segmentation of the sclera pattern in the input face/eye image. This process has to deal with the fact that the sclera region of the eye is wet, multilayered, and constantly in motion due to involuntary eye movements. Moreover, eyelids, eyelashes, and reflections occlude the sclera pattern and can cause errors in the segmentation process. As a result, an incorrect segmentation can produce erroneous biometric recognitions and seriously reduce the final accuracy of the system. This paper reviews current sclera segmentation methods. Edge detection methods are discussed, along with methods designed to handle occlusions such as eyelids and eyelashes.

Index Terms – Biometrics, line descriptor, multilayered vessel pattern recognition, sclera recognition, pattern enhancement, sclera segmentation.

I. INTRODUCTION

BIOMETRICS is the use of physical, biological, and behavioral traits to identify and verify a person's identity automatically. There are many different traits that can be used as biometrics, including fingerprint, face, iris, retina, gait, and voice [1]-[13]. Each biometric has its own advantages and disadvantages [2], [3], [13]-[15]. Table I compares the different biometrics using the following objective measures: accuracy [2], [16], reliability [17], stability [3], [18], identification (ID) capability [19], ID capability at a distance [19], required user cooperation [18], and scalability to a large population [16]. For instance, face recognition is the natural way that humans identify a person, but people's faces can change dramatically over the years, and this change can affect recognition accuracy [4]-[7]. The fingerprint pattern is very stable over a person's life, and its recognition accuracy is high. However, fingerprint recognition cannot be applied for ID at a distance [8], [9], [20].

Aside from these measures, different people may object to certain methods for various reasons, including culture [21], religion [22], hygiene [23], medical condition [24], personal preference [25], etc. For example, in some cultures or religions, acquiring facial image(s) may make some users uncomfortable [26]. Fingerprints may cause some hygiene issues and public health concerns since it is a contact-based biometric [27].

To achieve high accuracy, iris recognition needs to be performed in the near-infrared (NIR) spectrum [10], [11], which requires additional NIR illuminators. This makes it very challenging to perform remote iris recognition in real-life scenarios [11].

TABLE I
COMPARISON OF MAIN BIOMETRICS

Biometrics     | Accuracy  | Reliability | Stable | ID       | ID at a distance
Fingerprint    | High      | Very high   | Yes    | Yes      | No
Face           | Medium    | Medium      | No     | Somewhat | Somewhat
Iris           | Very high | Very high   | Yes    | Yes      | Somewhat
Voice          | Low       | Low         | No     | No       | Nil
Hand geometry  | Low       | Low         | Yes    | No       | Nil
Ear shape      | Medium    | Medium      | Yes    | No       | Somewhat
Signature      | Low       | Low         | No     | No       | Nil

Overall, no biometric is perfect or can be applied universally. In order to increase population coverage, extend the range of usable environmental conditions, improve resilience to spoofing, and achieve higher recognition accuracy, multimodal biometrics has been used to combine the advantages of multiple biometrics [28]-[30]. In this paper, I enhance sclera recognition. The experiments are expected to show that sclera recognition can achieve recognition accuracy comparable to that of iris recognition in the visible wavelengths.

Fig. 1. Structures of the eye and the sclera region.

This paper is organized as follows. Section II covers the proposed sclera recognition system. In Section III, I propose an automatic segmentation approach for color images. Section IV covers vessel pattern enhancement. In Section V, I use a line descriptor method that can extract patterns at different orientations, which makes orientation-invariant matching possible. Section VI describes sclera template registration and matching. Section VII presents the experimental results, and Section VIII draws the conclusions.


D. Eyelash Removal and Eyelid Localization

The shape of the eyelids is so irregular that it is impossible to fit them with simple shape assumptions. In addition, the upper eyelid tends to be partially covered with eyelashes, making the localization more difficult. Fortunately, these problems can be solved with a 1-D rank filter and a histogram filter: the 1-D rank filter removes the eyelashes, while the histogram filter addresses the shape irregularity. The steps involved in the proposed eyelid localization method are depicted in Fig. 4.

Fig. 4. Example of segmented sclera images. (a) Input image. (b) Segmented sclera image.
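The 1-D rank filter mentioned above can be pictured as a sliding order-statistic window that replaces thin, dark eyelash pixels with a brighter value drawn from their horizontal neighborhood. The Java sketch below is only a minimal illustration of that idea; the window width, the rank index, and the assumption that eyelashes appear as narrow dark spikes against the bright sclera are illustrative choices and are not parameters taken from this paper.

import java.util.Arrays;

public class RankFilter1D {

    // Applies a horizontal 1-D rank filter to a grayscale image (values 0..255).
    // window is an odd neighborhood width; rank selects which sorted value to keep
    // (0 = minimum, window - 1 = maximum). A high rank suppresses dark eyelash pixels.
    public static int[][] filter(int[][] img, int window, int rank) {
        int h = img.length, w = img[0].length, half = window / 2;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int[] neigh = new int[window];
                for (int k = -half; k <= half; k++) {
                    int xx = Math.min(w - 1, Math.max(0, x + k)); // clamp at the image border
                    neigh[k + half] = img[y][xx];
                }
                Arrays.sort(neigh);
                out[y][x] = neigh[rank]; // order statistic of the horizontal window
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Toy 1 x 7 "row": a dark eyelash pixel (20) inside bright sclera (200).
        int[][] row = { {200, 200, 200, 20, 200, 200, 200} };
        int[][] filtered = filter(row, 5, 3); // keep a high rank: the dark spike is removed
        System.out.println(Arrays.toString(filtered[0]));
    }
}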

IV. SCLERA VESSEL PATTERN ENHANCEMENT

The segmented sclera area is highly reflective. As a result, the sclera vascular patterns are often blurry and/or have very low contrast. To mitigate the illumination effect and achieve an illumination-invariant process, it is important to enhance the vascular patterns. In [14], Daugman shows that the family of Gabor filters is a good approximation of the vision processes of the primary visual cortex. Because the vascular patterns can occur at multiple orientations, in this paper a bank of directional Gabor filters is used for vascular pattern enhancement

G(x, y, θ, s) = e^(−π[(x − x0)² + (y − y0)²]/s²) · e^(−2πi[cos θ·(x − x0) + sin θ·(y − y0)])     (4.1)

where (x0, y0) is the center frequency of the filter, s is the variance of the Gaussian, and θ is the angle of the sinusoidal modulation. For this paper, only the even (real) part of the filter was used for feature extraction of the vessels, since the even filter is symmetric and its response was determined to identify the locations of vessels adequately. The image is first filtered with Gabor filters at different orientations and scales

IF(x, y, θ, s) = I(x, y) ∗ G(x, y, θ, s)     (4.2)

where I(x, y) is the original intensity image, G(x, y, θ, s) is the Gabor filter, and IF(x, y, θ, s) is the Gabor-filtered image at orientation θ and scale s. Both θ and s are determined by the desired features to be extracted in the database being used. All the filtered images are fused together to generate the vessel-boosted image F(x, y)

F(x, y) = √( Σ_{θ∈Θ} Σ_{s∈S} (IF(x, y, θ, s))² )     (4.3)
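As a rough illustration of (4.1)-(4.3), the following Java sketch builds a small bank of even (cosine) Gabor kernels at several orientations, convolves a grayscale image with each kernel, and fuses the responses by the square root of the sum of squares. The kernel size, the four orientations, the single scale, and the toy input are assumptions made for the example and do not reproduce the filter parameters used in this work.

public class GaborVesselBoost {

    // Even (real) Gabor kernel: Gaussian envelope times a cosine carrier at angle theta.
    static double[][] evenGaborKernel(int size, double sigma, double frequency, double theta) {
        double[][] k = new double[size][size];
        int half = size / 2;
        for (int y = -half; y <= half; y++)
            for (int x = -half; x <= half; x++) {
                double envelope = Math.exp(-Math.PI * (x * x + y * y) / (sigma * sigma));
                double carrier = Math.cos(2 * Math.PI * frequency
                        * (Math.cos(theta) * x + Math.sin(theta) * y));
                k[y + half][x + half] = envelope * carrier;
            }
        return k;
    }

    // Plain 2-D convolution with zero padding at the borders.
    static double[][] convolve(double[][] img, double[][] kernel) {
        int h = img.length, w = img[0].length, half = kernel.length / 2;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double sum = 0;
                for (int j = -half; j <= half; j++)
                    for (int i = -half; i <= half; i++) {
                        int yy = y + j, xx = x + i;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                            sum += img[yy][xx] * kernel[j + half][i + half];
                    }
                out[y][x] = sum;
            }
        return out;
    }

    // Fuses the filtered images as in (4.3): F = sqrt(sum of squared responses).
    static double[][] vesselBoost(double[][] img, double[] thetas, double sigma, double freq, int ksize) {
        int h = img.length, w = img[0].length;
        double[][] fused = new double[h][w];
        for (double theta : thetas) {
            double[][] response = convolve(img, evenGaborKernel(ksize, sigma, freq, theta));
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    fused[y][x] += response[y][x] * response[y][x];
        }
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                fused[y][x] = Math.sqrt(fused[y][x]);
        return fused;
    }

    public static void main(String[] args) {
        double[][] img = new double[32][32]; // toy image with one vertical "vessel"
        for (int y = 0; y < 32; y++) img[y][16] = 1.0;
        double[] thetas = {0, Math.PI / 4, Math.PI / 2, 3 * Math.PI / 4}; // four orientations
        double[][] boosted = vesselBoost(img, thetas, 4.0, 0.25, 9);
        System.out.println("Boosted response on the vessel: " + boosted[16][16]);
    }
}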

V. SCLERA FEATURE EXTRACTION

The line segments are then used to create a template for the vessel structure. The segments are described by three quantities: the segment angle to some reference angle at the iris center, the segment distance to the iris center, and the dominant angular orientation of the line segment. The template for the sclera vessel structure is the set of all individual segment descriptors. This implies that, while each segment descriptor has a fixed length, the overall template size for a sclera vessel structure varies with the number of individual segments. Fig. 5 shows a visual description of the line descriptor.

A descriptor is S = (θ, r, φ)^T. The individual components of the line descriptor are calculated as

θ = tan⁻¹((yl − yi)/(xl − xi))     (5.1)

r = √((yl − yi)² + (xl − xi)²)     (5.2)

φ = tan⁻¹(d fline(x)/dx)     (5.3)

Here, fline(x) is the polynomial approximation of the line segment, (xl, yl) is the center point of the line segment, (xi, yi) is the center of the detected iris, and S is the line descriptor.

Fig. 5. Sketch of the parameters of the segment descriptor.

Additionally, the iris center (xi, yi) is stored with all of the individual line descriptors. The line descriptor can extract patterns at different orientations, which makes it possible to achieve orientation-invariant matching.
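The line descriptor of (5.1)-(5.3) can be computed directly from the segment center, the iris center, and the local slope of the segment's polynomial fit. The short Java sketch below shows this computation; the record type, field names, and the sample coordinates are illustrative and are not taken from the paper's implementation.

public class LineDescriptor {

    // Descriptor components of one vessel line segment, relative to the iris center.
    record Descriptor(double theta, double r, double phi) { }

    // xl, yl: center point of the line segment; xi, yi: center of the detected iris;
    // slope: d f_line(x)/dx of the polynomial fit, evaluated at the segment center.
    static Descriptor describe(double xl, double yl, double xi, double yi, double slope) {
        double theta = Math.atan2(yl - yi, xl - xi); // angle to the iris center, (5.1)
        double r = Math.hypot(yl - yi, xl - xi);     // distance to the iris center, (5.2)
        double phi = Math.atan(slope);               // dominant segment orientation, (5.3)
        return new Descriptor(theta, r, phi);
    }

    public static void main(String[] args) {
        // Segment centered at (120, 80), iris center at (100, 60), local slope 0.5.
        Descriptor d = describe(120, 80, 100, 60, 0.5);
        System.out.printf("theta=%.3f rad, r=%.2f px, phi=%.3f rad%n", d.theta(), d.r(), d.phi());
    }
}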

VI. SCLERA MATCHING

A. Sclera Template Registration

I have used a new method based on a random sample consensus (RANSAC)-type algorithm to estimate the best-fit parameters for registration between the two sclera vascular patterns. In each trial it selects a pair of descriptors, one from each template, and also randomly chooses a scaling factor and a rotation value based on a priori knowledge of the database; using these values, it calculates a fitness value for the registration. The two descriptors Sxi and Syj are

Sxi = (θxi, rxi, φxi)^T,   Syj = (θyj, ryj, φyj)^T     (6.1)

First, an offset vector is created using the shift offset and the randomly determined scale and angular offset values

ΦO = (xO, yO, sO, φO)^T     (6.2)

where

xO = rxi cos θxi − ryj cos θyj
yO = rxi sin θxi − ryj sin θyj

The fitness of two descriptors is the minimal summed pairwise distance between the two descriptors given some offset vector

D(Sx, Sy) = argmin_ΦO D̃(Sx, Sy, ΦO)     (6.3)

where

D̃(Sx, Sy, ΦO) = Σ_{xi∈Test} minDist(f(Sxi, ΦO), Sy)     (6.4)

Here, f(Sxi, ΦO) is the function that applies the registration given by the offset vector to a sclera line descriptor

f(Sxi, ΦO) = ( cos⁻¹((rxi cos θxi + xO)/(sO rxi)),  (rxi cos θxi + xO)/cos(θxi + φO),  φxi )^T     (6.5)

The distance between two points is calculated using

d(Sxi, Syj) = √((xO)² + (yO)²)     (6.6)

where Sxi is the first descriptor used for registration, Syj is the second descriptor, ΦO is the set of offset parameter values, f(Sxi, ΦO) is the function that modifies the descriptor with the given offset values, sO is the scaling factor, and φO is the rotation value. In this way, the registration process is made globally scale, orientation, and deformation invariant.
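The registration search described above can be sketched as follows: repeatedly sample a descriptor from each template together with a scale and rotation drawn from assumed ranges, apply the candidate offset, and keep the offset vector with the smallest summed nearest-neighbor distance, in the spirit of (6.3) and (6.4). For readability the sketch applies the candidate transform in Cartesian coordinates rather than through the polar form of (6.5); the iteration count, the scale range of roughly 0.9-1.1, and the rotation range of about +/-10 degrees are assumptions, not values from this paper.

import java.util.Random;

public class TemplateRegistration {

    // A line-segment descriptor (theta, r) relative to the iris center; phi is omitted for brevity.
    record Seg(double theta, double r) {
        double x() { return r * Math.cos(theta); }
        double y() { return r * Math.sin(theta); }
    }

    // Summed nearest-neighbor distance of the shifted/scaled/rotated test segments to the target,
    // playing the role of the summed minDist terms in (6.4).
    static double fitness(Seg[] test, Seg[] target, double xO, double yO, double sO, double phiO) {
        double sum = 0;
        for (Seg t : test) {
            double x = sO * (t.x() * Math.cos(phiO) - t.y() * Math.sin(phiO)) + xO;
            double y = sO * (t.x() * Math.sin(phiO) + t.y() * Math.cos(phiO)) + yO;
            double best = Double.MAX_VALUE;
            for (Seg g : target)
                best = Math.min(best, Math.hypot(x - g.x(), y - g.y()));
            sum += best;
        }
        return sum;
    }

    // RANSAC-style search: sample descriptor pairs and offsets, keep the best-fitting offset vector.
    static double[] register(Seg[] test, Seg[] target, int iterations, Random rnd) {
        double[] bestOffset = null;
        double bestFit = Double.MAX_VALUE;
        for (int it = 0; it < iterations; it++) {
            Seg sx = test[rnd.nextInt(test.length)];
            Seg sy = target[rnd.nextInt(target.length)];
            double xO = sy.x() - sx.x();                  // shift that maps the sampled test segment onto the target one
            double yO = sy.y() - sx.y();
            double sO = 0.9 + 0.2 * rnd.nextDouble();     // assumed scale range
            double phiO = Math.toRadians(-10 + 20 * rnd.nextDouble()); // assumed rotation range
            double fit = fitness(test, target, xO, yO, sO, phiO);
            if (fit < bestFit) {
                bestFit = fit;
                bestOffset = new double[] {xO, yO, sO, phiO};
            }
        }
        return bestOffset;
    }

    public static void main(String[] args) {
        Seg[] target = { new Seg(0.3, 50), new Seg(1.0, 80), new Seg(1.8, 65) };
        Seg[] test   = { new Seg(0.3, 50), new Seg(1.0, 80), new Seg(1.8, 65) }; // identical toy pattern
        double[] off = register(test, target, 500, new Random(42));
        System.out.printf("xO=%.2f yO=%.2f sO=%.2f phiO=%.3f%n", off[0], off[1], off[2], off[3]);
    }
}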

B. Sclera Template Matching

As discussed previously, it is important to design the matching algorithm such that it is tolerant of segmentation errors. The weighting image (Fig. 6) is created from the sclera mask by setting interior pixels of the sclera mask to 1, pixels within some distance of the boundary of the mask to 0.5, and pixels outside the mask to 0. This allows the matching value between two segments to lie between 0 and 1, and it gives less weight to segments near the mask boundary, which are the segments most likely to be affected by segmentation errors.

Fig. 6. Weighting image.

After the templates are registered, each line segment in the test template is compared with the line segments in the target template for matches

m(Si, Sj) = w(Si)·w(Sj)   if d(Si, Sj) ≤ Dmatch and segments i and j match in orientation
m(Si, Sj) = 0             otherwise     (6.7)

where Si and Sj are two segment descriptors, m(Si, Sj) is the matching score between segments Si and Sj, and d(Si, Sj) is the Euclidean distance between the segment descriptors (one from the test template and one from the target template); each matching result is recorded. The total matching score M is the sum of the individual matching scores divided by the maximum matching score of the minimal set between the test and target templates, i.e., whichever of the test or target templates has fewer points, the sum of its descriptor weights sets the maximum score that can be attained

M = Σ_{(i,j)∈Matches} m(Si, Sj) / min( Σ_{i∈Test} w(Si), Σ_{j∈Target} w(Sj) )     (6.8)

Here, Matches is the set of all pairs that are matching, Test is the set of descriptors in the test template, and Target is the set of descriptors in the target template.
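The following sketch turns (6.7) and (6.8) into code, assuming each registered segment carries a position, an orientation, and a weight looked up from the weighting image of Fig. 6. The distance and orientation thresholds are placeholders, and a greedy best-match-per-test-segment stands in for the paper's match set, so this is only an approximation of the matching step.

public class TemplateMatching {

    // A registered segment: position after registration, orientation phi, and weight w from the weighting image.
    record Seg(double x, double y, double phi, double w) { }

    static final double D_MATCH   = 5.0;                 // assumed distance threshold (pixels)
    static final double PHI_MATCH = Math.toRadians(10);  // assumed orientation threshold

    // Pairwise matching score m(Si, Sj) as in (6.7).
    static double m(Seg si, Seg sj) {
        double d = Math.hypot(si.x() - sj.x(), si.y() - sj.y());
        boolean orientationsMatch = Math.abs(si.phi() - sj.phi()) <= PHI_MATCH;
        return (d <= D_MATCH && orientationsMatch) ? si.w() * sj.w() : 0.0;
    }

    // Total matching score M as in (6.8): best match per test segment, normalized by the lighter template.
    static double totalScore(Seg[] test, Seg[] target) {
        double matched = 0;
        for (Seg si : test) {
            double best = 0;
            for (Seg sj : target) best = Math.max(best, m(si, sj));
            matched += best;
        }
        double wTest = 0, wTarget = 0;
        for (Seg s : test) wTest += s.w();
        for (Seg s : target) wTarget += s.w();
        return matched / Math.min(wTest, wTarget);
    }

    public static void main(String[] args) {
        Seg[] test   = { new Seg(10, 10, 0.5, 1.0), new Seg(40, 25, 1.2, 0.5) };
        Seg[] target = { new Seg(11, 9, 0.52, 1.0), new Seg(80, 80, 0.1, 1.0) };
        System.out.printf("M = %.3f%n", totalScore(test, target)); // one good match out of two segments
    }
}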

VII. EXPERIMENTAL RESULTS

A. Experimental Methodology

In this paper, I adopted the Iris Challenge Evaluation matching protocol [12] (proposed by the National Institute of Standards and Technology). The proposed system can only generate four possible recognition results: correctly matching (true positive: TP), correctly not matching (true negative: TN), incorrectly matching (false positive: FP), and incorrectly not matching (false negative: FN) [11]. The False Accept Rate (FAR), False Reject Rate (FRR), and Genuine Acceptance Rate (GAR) are calculated by

FAR = FP/(TN + FP) × 100%

FRR = FN/(TP + FN) × 100%

and

GAR = 1 − FRR

The receiver operating characteristic (ROC) curve, a plot of FAR against GAR (or of FAR against FRR), can be used to evaluate the performance of the proposed system. Moreover, since FAR and FRR move in opposite directions, the operating point at which FAR = FRR, referred to as the equal error rate (EER), is widely used to compare the accuracy of two ROC curves.
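As a small worked example of the formulas above, the sketch below computes FAR, FRR, and GAR from the four counts TP, TN, FP, and FN; the counts used in main() are made-up numbers for illustration only.

public class RecognitionRates {

    // False accept rate: percentage of impostor comparisons that were incorrectly accepted.
    static double far(long fp, long tn) { return 100.0 * fp / (tn + fp); }

    // False reject rate: percentage of genuine comparisons that were incorrectly rejected.
    static double frr(long fn, long tp) { return 100.0 * fn / (tp + fn); }

    // Genuine acceptance rate, GAR = 1 - FRR (both expressed in percent here).
    static double gar(long fn, long tp) { return 100.0 - frr(fn, tp); }

    public static void main(String[] args) {
        // Hypothetical counts from a verification experiment.
        long tp = 950, fn = 50, tn = 9900, fp = 100;
        System.out.printf("FAR = %.2f%%, FRR = %.2f%%, GAR = %.2f%%%n",
                far(fp, tn), frr(fn, tp), gar(fn, tp));
        // The EER is the operating point where FAR equals FRR, found by sweeping the decision threshold.
    }
}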

B. Overall Matching Results

TABLE II
COMPARISON OF EERS AND GARS FOR TWO SEGMENTATION METHODS

Session   | #Images | Segmentation | EER (%) | GAR (%) at FAR = 0.1% | GAR (%) at FAR = 0.01%
Session 1 | 100     | Automatic    | 4.05    | 91.91                 | 85.00
Session 1 | 100     | Manual       | 3.52    | 95.32                 | 90.00
Session 2 | 100     | Automatic    | 9.54    | 87.02                 | 84.76
Session 2 | 100     | Manual       | 7.22    | 88.24                 | 85.38

I used the UBIRIS database for testing and compared the sclera recognition accuracy obtained with automatic segmentation against that obtained with manual segmentation. The algorithms in the proposed method are implemented in Java. Table II shows the comparison results using both automatic and manual segmentation. In Session 1, the EER with automatic segmentation (4.09%) is only slightly higher than with manual segmentation (3.70%). In addition, the GAR with automatic segmentation (90.71% at FAR = 0.1% and 83% at FAR = 0.01%) is only slightly lower than with manual segmentation (92.55% at FAR = 0.1% and 89.22% at FAR = 0.01%). In Session 2, since the image quality is worse than in Session 1, the EER with automatic segmentation (9.98%) is higher than with manual segmentation (7.48%). However, the GAR with automatic segmentation (85.59% at FAR = 0.1% and 82.85% at FAR = 0.01%) achieves an accuracy similar to that with manual segmentation (85.49% at FAR = 0.1% and 82.58% at FAR = 0.01%). Some poor-quality images could not be automatically segmented and were eliminated in the automatic segmentation step; as a result, these images were not used for recognition. These images also had to be excluded from recognition when manual segmentation was used.
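Metrics such as the EER and the GAR at a fixed FAR reported in Table II are typically obtained by sweeping a decision threshold over the genuine and impostor matching scores. The sketch below shows one common way to approximate the EER from two score lists; the score arrays are invented for illustration, and the simple linear threshold sweep is an assumption rather than the evaluation code used for this paper.

public class EerEstimate {

    // Fraction of impostor scores accepted (score >= threshold).
    static double far(double[] impostor, double t) {
        int n = 0;
        for (double s : impostor) if (s >= t) n++;
        return (double) n / impostor.length;
    }

    // Fraction of genuine scores rejected (score < threshold).
    static double frr(double[] genuine, double t) {
        int n = 0;
        for (double s : genuine) if (s < t) n++;
        return (double) n / genuine.length;
    }

    // Sweeps the threshold and returns the point where FAR and FRR are closest (approximate EER).
    static double eer(double[] genuine, double[] impostor) {
        double bestGap = Double.MAX_VALUE, eer = 0;
        for (double t = 0.0; t <= 1.0; t += 0.001) {
            double fa = far(impostor, t), fr = frr(genuine, t);
            if (Math.abs(fa - fr) < bestGap) {
                bestGap = Math.abs(fa - fr);
                eer = (fa + fr) / 2;
            }
        }
        return eer;
    }

    public static void main(String[] args) {
        double[] genuine  = {0.91, 0.85, 0.78, 0.88, 0.60, 0.95}; // invented matching scores
        double[] impostor = {0.20, 0.35, 0.15, 0.62, 0.28, 0.40};
        System.out.printf("Approximate EER = %.2f%%%n", 100 * eer(genuine, impostor));
    }
}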

VIII. CONCLUSION AND DISCUSSIONS

In this paper, I have proposed an enhanced sclera recognition approach. I expect the results to show that sclera recognition is very promising for positive human ID and may provide a new option for human identification. In this paper, I concentrate on frontal-looking sclera image processing and recognition. Similar to iris recognition, where off-angle iris image segmentation and recognition is still a challenging research topic, off-angle sclera image segmentation and recognition will be an interesting and challenging research topic. In addition, sclera recognition can be combined with other biometrics, such as iris recognition or face recognition (for example, 2-D face recognition), to perform multimodal biometrics. Moreover, the effect of template aging in sclera recognition will be studied in the future. Currently, the proposed system is implemented in Java. The processing time can be dramatically reduced by parallel computing approaches.

REFERENCES

  1. Reza Derakhshani and Arun Ross, A new biometric modality based on conjunctival vasculature, in Proc. of Artificial Neural Networks in Engineering (ANNIE), St. Louis, USA, November 2006.

  2. Reza Derakhshani and Arun Ross, A Texture-Based Neural Network Classifier for Biometric Identification using Ocular Surface Vasculature, in Proc. of the International Joint Conference on Neural Networks (IJCNN), Orlando, USA, August 2007.

  3. S. Crihalmeanu, A. Ross, and R. Derakhshani, "Enhancement and Registration Schemes for Matching Conjunctival Vasculature," in Proceedings of the ThirdInternational Conference on Advances in Biometrics Alghero, Italy: Springer- Verlag, 2009.

  4. Zhi Zhou, A New Human Identification Method: Sclera Recognition, IEEE transactions on systems, man, and cyberneticspart a: systems and humans, vol. 42, no. 3, may 2012.

  5. L. Flom and A. Safir, Iris Recognition system, U.S.Patent: 4,641,349, 1987.

  6. L.G.Roberts, Machine perception of 3-D solids, Optical and Electro optical Information processing, 1965.

  7. I.E. Sobel, Camera models and machine perception, Thesis, Stanford University, 1970.

  8. A. Rosenfeld, The Max Roberts operator is a Hueckel-type edge detector, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 3, No. 1, pp. 101-103, 1981.

  9. Marr, D. and Hildreth, E. C. Theory of edge detection. Proceedings of the Royal Society of London Series B: Biological Sciences, Vol.207, pp.187-217, 1980.

  10. A. Goshtasby and Hai-Lun Shyu, Edge detection by curve fitting, Image and Vision Computing, Vol. 13, No. 3, pp. 169-177, 1995.

  11. J. Canny, A computational approach for edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, no.6, pp. 679-698, 1986.

  12. S.Y. Sarkar. and K.L Boyer, Optimal infinite impulse response zero-crossing based edge detectors, Computer Vision Graphics Image Processing: Image Understanding, Vol.54, No.9, pp.224-243, 1991.

  13. L. Ding, and A. Goshtasby, On the Canny edge detector, Pattern Recognition, Vol.34, pp.721-725, 2001.

  14. D. Stern and L. Kurz, Edge detection in correlated noise using Latin squares models, Pattern Recognition, Vol.21, pp.119-129, 1988.

  15. S.Z.Li,Roof-Edge Preserving Image Smoothing Based on MRFs IEEE Trans. On Image Processing, Vol.9, No.6, pp.11341138, 2000.

  16. W.E.L Grimson and T. Pavlidis, Discontinuity detection for visual surface reconstruction, Computer Vision, Graphics, Image Processing, Vol.30, pp.316- 330, 1985.

  17. D. Lee, T. Pavlidis and K. Huang, Edge detection through residual analysis, Proc. IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, pp.215-222, 1988.

  18. T. Pavlidis and D. Lee, Residual analysis for feature extraction, in From Pixel to Features, Proc. COST13 Workshop, pp.219-227, 1988.

  19. M.H. Chen, D. Lee, and T. Pavlidis, Residual analysis for feature detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.13, pp.30-40, 1991.

  20. S. Zheng, Jian Liu and Jin Wen Tian, A new efficient SVM-based edge detection method Pattern Recognition Letters, Vol.25, pp.11431154, 2004.

  21. J. Daugman, High Confidence Visual Recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.15, No. 11, pp.1148-1161, 1993.

  22. J. Daugman, The importance of being random: Statistical principles of iris recognition, Pattern Recognition, Vol. 36, No. 2, pp. 279-291, 2003.

  23. J. Daugman, Demodulation by complex valued wavelets for stochastic pattern recognition, International Journal on Wavelets, Multiresolution and

    Information Processing, Vol. 1, No. 1, pp. 1-17, 2003.

  24. J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No.1, pp. 21-30, 2004.

  25. Wildes, R.P., "Iris Recognition: An Emerging Biometric Technology", Proc. of the IEEE, Vol. 85, No.9, pp.1348- 1363, 1997.

  26. S. Lim, K.Lee, O.Byeon, and T. Kim, Efficient Iris Recognition through Improvement of Feature Vector and Classifier, Journal of Electronics and Telecommunication Research Institute, Vol. 23, No. 2, pp. 61 70, 2001.

  27. Ya-Ping Huang, Si-Wei Luo, and En-Yi Chen, An efficient iris recognition system, Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, pp. 450-454, 2002.

  28. Bhola Ram Meena, Personal Identification based on Iris Patterns, Thesis, Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur, 2004.

  29. W. W. Boles and B. Boashash, A Human Identification Technique Using Images of the

    Iris and Wavelet Transform, IEEE Transactions on Signal Processing, Vol. 46, No. 4, pp. 1185-1188, 1998.

  30. Sanchez-Avila, and Sanchez-Reillo R, Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation, Pattern Recognition, Vol: 38, pp. 231-240, 2005.

  31. L. Ma, T. Tan, Y. Wang and D. Zhang, Efficient Iris Recognition by Characterizing Key Local Variations, IEEE Transactions on Image Processing,Vol. 13, No. 6, pp. 739750, 2004.

  32. J. Daugman, New methods in iris recognition, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 37, No. 5, pp. 1167-1175, 2007.

  33. H. Proenca and L. A. Alexandre, Iris segmentation methodology for non-cooperative recognition, IEE Proceedings on Vision, Image and Signal Processing, Vol. 153, No. 2, pp. 199-205, 2006.

  34. W.J. Ryan, D.L Woodard,. A.T Duchowski, S.T Birchfield, Adapting Starburst for Elliptical Iris Segmentation, 2nd IEEE International Conference onBiometrics: Theory, Applications and Systems,pp. 1-7, 2008.

  35. UBIRIS database, http://iris.di.ubi.pt, 2009
