A New Human Identification Method: Sclera Identification

DOI : 10.17577/IJERTCONV3IS06030


Rohit Sonavane, Amey Samant, Kshipra Naikare, Akshay Satam

Department of Electronics Engineering,

K.C. College of Engineering & Management Studies & Research, Kopri, Thane (E)-400 603, India.

Abstract— The blood vessel structure of the sclera is unique to each person, and it can be obtained remotely and nonintrusively in the visible wavelengths. Therefore, it is well suited for human identification (ID). In this paper, we propose a new concept for human ID: sclera recognition. This is a challenging research problem because images of sclera vessel patterns are often defocused and/or saturated and, most importantly, the vessel structure in the sclera is multilayered and has complex nonlinear deformations. This paper has several contributions. First, we propose a new approach for human ID: sclera recognition. Second, we develop a new method for sclera segmentation which works for both color and grayscale images. Third, we design a Gabor wavelet-based sclera pattern enhancement method to emphasize and binarize the sclera vessel patterns. Finally, we propose a line-descriptor-based feature extraction, registration, and matching method that is illumination, scale, orientation, and deformation invariant and can mitigate the multilayered deformation effects and tolerate segmentation error. The experimental results show that sclera recognition is a promising new biometric for positive human ID.

Figure 1


BIOMETRICS is the use of physical, biological, and behavioral traits to identify and verify a person's identity automatically. There are many different traits that can be used as biometrics, including fingerprint, face, iris, retina, gait, and voice. Each biometric has its own advantages and disadvantages. Table I compares the different biometrics using the following objective measures: accuracy, reliability, stability, identification (ID), ID capability at a distance, user cooperation, and scalability to a large population. For instance, face recognition is the natural way

that humans identify a person, but people's faces can change dramatically over the years, and this change can affect recognition accuracy. The fingerprint pattern is very stable over a person's life, and its recognition accuracy is high. However, fingerprint recognition cannot be applied for ID at a distance. Aside from these measures, different people may object to certain methods for various reasons, including culture, religion, hygiene, medical condition, and personal preference. For example, in some cultures or religions, acquiring facial image(s) may make some users uncomfortable. Fingerprints may raise hygiene issues and public health concerns since fingerprinting is a contact-based biometric. In addition, in real-life applications, some biometrics may be more applicable than others in certain scenarios. For example, in general, the accuracy of iris or fingerprint recognition is higher than that of facial recognition.

Figure 2

However, in a video surveillance application, facial recognition may be preferable since it can be integrated into existing surveillance systems. To achieve high accuracy, iris recognition needs to be performed in the near-

[Table I: comparison of facial, retina, iris, and sclera scans across target area, effect of age, user interaction, and training time. Most cell values were lost in extraction; surviving entries: target area "more than retina" (sclera scan), user interaction "not required", training time "time consuming".]

infrared (NIR) spectrum, which requires additional NIR illuminators. This makes it very challenging to perform remote iris recognition in real-life scenarios. Overall, no biometric is perfect or can be applied universally. In order to increase population coverage, extend the range of environmental conditions, improve resilience to spoofing, and achieve higher recognition accuracy, multimodal biometrics has been used to combine the advantages of multiple biometrics.

Figure 3

Researchers are trying to find new biometrics to provide more options for human ID. The sclera can be acquired at a distance under visible-wavelength illumination. In this paper, we propose a new human ID method: sclera recognition. Our experimental results show that sclera recognition can achieve recognition accuracy comparable to iris recognition in the visible wavelengths.


Segmentation is the first step in sclera recognition. Many researchers have worked on the segmentation of the pupil and iris boundaries for iris recognition in the NIR wavelengths. However, in these approaches, sclera information is often discarded. Proenca et al. proposed segmentation algorithms for iris images in the visible wavelengths using the UBIRIS database; however, these approaches are designed for iris segmentation and have therefore not been verified as suitable for sclera recognition. Derakhshani et al. applied contrast-limited adaptive histogram equalization to enhance the green color plane of the RGB image and a multiscale region-growing approach to identify the sclera vessels from the image background, but used manual segmentation and registration. Crihalmeanu et al. presented a semi-automated system for sclera segmentation. In this paper, we propose a fully automatic sclera segmentation method for both color and grayscale images. The segmentation algorithm includes estimation of the glare area, iris boundary detection, estimation of the sclera region in color or grayscale images, and eyelid and iris boundary detection and refinement. There is one difference between color and grayscale images: the sclera region in a color image is estimated using the best representation between two color-based techniques, whereas the sclera region in a grayscale image is extracted by Otsu's threshold method.

  1. Estimation of Glare Area

    The glare area is usually a small bright region of the iris image. Glare inside or near the pupil can be modeled as a bright object on a much darker background with sharp edges. However, in some situations, there may be multiple areas with very bright illumination and unwanted glare areas (glares that are not inside the iris or pupil). For example, in Fig. 3(a) and (b), there are glares on the surface of the cornea which create challenges for glare detection. A Sobel filter is first applied to highlight the desired glare areas (Fig. 3). For glares in the sclera or skin areas, the local background is often brighter than the pupil or iris, so under the Sobel filter they do not stand out as much as glare in the desired area. Note that the glare detection method is applied to grayscale images; if the original image is a color image, a grayscale transformation is applied first (Fig. 3).

    Fig. 3. Glare estimation: (a) color image; (b) grayscale; (c) convolved images.
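The Sobel-based glare highlighting can be sketched as follows. This is an illustrative NumPy/SciPy reconstruction (the paper's system is implemented in Matlab); the synthetic image and the quantile cutoff are assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def detect_glare(gray, edge_quantile=0.995):
    # Glare is a bright blob on a much darker background with sharp edges,
    # so its border produces the strongest Sobel gradient responses.
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    # Keep only the strongest responses as glare candidates.
    return mag >= np.quantile(mag, edge_quantile)

# Synthetic grayscale eye patch: dark "pupil" with a small bright glare spot.
img = np.full((64, 64), 40.0)
img[20:44, 20:44] = 10.0   # dark pupil on a brighter background
img[30:34, 30:34] = 250.0  # saturated glare inside the pupil
glare_mask = detect_glare(img)
```

Because the glare-to-pupil contrast is far larger than the pupil-to-skin contrast, the strongest gradients concentrate around the glare spot rather than the pupil boundary.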

  2. Iris Boundary Detection

    In this paper, we focus on sclera recognition using frontal-looking eyes. To improve the segmentation speed, the pupil and iris regions are modeled as circular boundaries, and typical circular iris segmentation methods were used. Here, the pupil and iris regions are segmented using a greedy angular search, which is performed on the edge-detected image and can accurately detect the pupil boundaries regardless of gaze direction and eyelid/eyelash occlusion.
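The paper's greedy angular search is not reproduced here; the simplified radial edge search below only illustrates the underlying idea of locating a circular boundary from intensity jumps along rays. The function, its parameters, and the synthetic eye image are all illustrative stand-ins:

```python
import numpy as np

def radial_boundary(gray, center, r_min, r_max, n_angles=64):
    # For each ray from `center`, find the radius with the largest intensity
    # jump; the median over rays is robust to eyelid/eyelash occlusion.
    cy, cx = center
    radii = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        rs = np.arange(r_min, r_max)
        ys = np.clip((cy + rs * np.sin(theta)).astype(int), 0, gray.shape[0] - 1)
        xs = np.clip((cx + rs * np.cos(theta)).astype(int), 0, gray.shape[1] - 1)
        profile = gray[ys, xs].astype(float)
        jump = np.abs(np.diff(profile))
        radii.append(rs[np.argmax(jump)])
    return float(np.median(radii))

# Synthetic eye: dark iris disk of radius 15 on a bright sclera background.
yy, xx = np.mgrid[0:100, 0:100]
eye = np.where((yy - 50) ** 2 + (xx - 50) ** 2 <= 15 ** 2, 20, 200)
r_iris = radial_boundary(eye, (50, 50), 5, 40)
```

Taking the median over rays rather than the mean is one simple way to tolerate rays that hit an occluding eyelid instead of the limbic boundary.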

  3. Estimation of Sclera Area

    In the estimation of the sclera area, our sclera detection approach uses either color or grayscale images.

    1) Estimation of Sclera Area in Color Images: The sclera is a nonskin white area in the eye, and two approaches were used to find potential sclera areas.

    1. Nonskin area: The sclera area is the nonskin area of the eye region. This allows simple heuristics to be used to classify areas in the image as skin or nonskin, and then a binary map of the sclera is assumed to be the inverse of the skin.

    2. White area of the eye: The sclera area is white and usually brighter than the remaining parts of the eye in an image. In other words, the sclera area should have low hue (about bottom 1/3), low saturation (bottom 2/5), and high intensity (top 2/3) in the HSV color space.
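The HSV rule in item 2 can be expressed directly as thresholds in the HSV cube. The sketch below applies hue < 1/3, saturation < 2/5, and value > 1/3; the `rgb_to_hsv` helper and the toy pixels are illustrative, not the paper's code:

```python
import numpy as np

def rgb_to_hsv(rgb):
    # Vectorized RGB -> HSV for float images in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)
    s = np.where(v > 0, c / np.where(v > 0, v, 1.0), 0.0)
    safe_c = np.where(c > 0, c, 1.0)   # avoid division by zero for gray pixels
    h = np.select(
        [c == 0, v == r, v == g],
        [0.0, ((g - b) / safe_c) % 6, (b - r) / safe_c + 2],
        (r - g) / safe_c + 4,
    ) / 6.0
    return h, s, v

def sclera_candidates(rgb):
    # Low hue (bottom 1/3), low saturation (bottom 2/5), high value (top 2/3).
    h, s, v = rgb_to_hsv(rgb)
    return (h < 1 / 3) & (s < 2 / 5) & (v > 1 / 3)

# A white-ish "sclera" pixel passes; a saturated reddish "skin" pixel fails.
pixels = np.array([[[0.9, 0.88, 0.85], [0.8, 0.3, 0.2]]])
candidate_mask = sclera_candidates(pixels)
```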

    Morphological operations are applied to the two binary maps to remove isolated pixels and small regions of contiguous pixels. The convex hull of each of these representations is calculated.

    The convex hull is the minimal convex set of points that contains the entire original set. The best estimate of the sclera is determined by dividing each individual mask into two sections around the detected pupil. The final representation is created using the individual portions that are the most homogeneous, by minimizing the standard deviation of the pixels in the region.
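One way to realize the cleanup-then-hull step is a morphological opening for isolated-pixel removal followed by SciPy's `ConvexHull` over the surviving foreground pixels; both are generic stand-ins, not necessarily the authors' exact operators:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def clean_and_hull(binary):
    # Opening removes isolated pixels and tiny regions; the convex hull of
    # the surviving foreground pixels gives the region's convex outline.
    cleaned = ndimage.binary_opening(binary)
    pts = np.argwhere(cleaned)
    hull = ConvexHull(pts)
    return cleaned, pts[hull.vertices]

candidate = np.zeros((30, 30), dtype=bool)
candidate[10:20, 10:25] = True   # candidate sclera blob
candidate[2, 2] = True           # isolated noise pixel
cleaned, hull_vertices = clean_and_hull(candidate)
```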

    2) Estimation of Sclera Area in Grayscale Images: In grayscale images, the skin-tone approach [(3.3)(3.8)] used for color images would not work. We propose a sclera segmentation method based on Otsu's method. Otsu's method is a linear-discriminant-analysis-based thresholding method. It assumes that there are two classes in an image, foreground (object) and background, which can be


    separated into two classes by intensity. Otsu's method automatically searches for the optimum threshold that minimizes the intraclass variance while maximizing the between-class distance. The process of sclera area detection has the following steps (Fig. 9): the region-of-interest (ROI) selection step, the Otsu's-method-based thresholding step, and the sclera area detection step. The left and right ROIs are selected based on the iris center and boundaries. The height of the ROI is the diameter of the iris, and the length of the ROI is the distance between the limbic boundary and the margin of the image. Otsu's method is applied to the ROIs to obtain potential sclera areas. The correct left sclera area should be located in the right and center sides, and the correct right sclera area should be located in the left and center. In this way, we eliminate nonsclera areas. Fig. 9 shows the process for detecting the left sclera area. The same approach is applied to detect the right sclera area.
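Otsu's method itself is compact enough to sketch. The NumPy-only version below maximizes the between-class variance over histogram thresholds; the bimodal toy ROI is synthetic, chosen only to show the dark/bright split:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # Otsu's method: choose the threshold maximizing the between-class
    # variance, equivalently minimizing the intraclass variance.
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)               # class-0 (darker) probability mass
    w1 = 1.0 - w0                   # class-1 (brighter) probability mass
    m = np.cumsum(p * centers)      # cumulative mean
    mT = m[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mT * w0 - m) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Bimodal toy ROI: dark iris/skin pixels near 30, bright sclera pixels near 200.
rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(30, 5, 500), rng.normal(200, 10, 300)])
t = otsu_threshold(roi)
sclera_pixels = roi > t
```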

  4. Iris and Eyelid Detection and Refinement

    The top and bottom boundaries of the sclera region are used as initial estimates of the sclera boundaries, and a polynomial is fit to each boundary. Using the top and bottom portions of the estimated sclera region as guidelines, the upper eyelid, lower eyelid, and iris boundaries are then refined using the Fourier

    active contour method. Fig. 10 shows an example of two segmented sclera images; note that some areas are not perfectly segmented. In reality, perfect segmentation of all images is impossible. Therefore, the feature extraction and matching steps of the system need to be tolerant of segmentation error.
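The polynomial fit to an estimated boundary can be illustrated with `np.polyfit`. The parabolic "eyelid" points below are synthetic, and the order-2 choice is an assumption for illustration (the paper does not state the polynomial order):

```python
import numpy as np

# Hypothetical boundary points sampled along an estimated upper-eyelid edge
# (x = column, y = row); a low-order polynomial smooths the noisy estimate.
rng = np.random.default_rng(1)
x = np.arange(100)
y_true = 0.004 * (x - 50) ** 2 + 20.0   # parabolic eyelid shape
y_noisy = y_true + rng.normal(0.0, 0.5, x.size)

coeffs = np.polyfit(x, y_noisy, 2)      # least-squares 2nd-order fit
y_fit = np.polyval(coeffs, x)
```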

    1. SCLERA VESSEL PATTERN ENHANCEMENT

      The segmented sclera area is highly reflective. As a result, the sclera vascular patterns are often blurry and/or have very low contrast. To mitigate the illumination effect and achieve an illumination-invariant process, it is important to enhance the vascular patterns. Daugman showed that the family of Gabor filters is a good approximation of the vision processes of the primary visual cortex. Because the vascular patterns can have multiple orientations, a bank of directional Gabor filters (Fig. 5) is used in this paper for vascular pattern enhancement.

      For this paper, only the even filter was used for feature extraction of the vessels, since the even filter is symmetric and its response was determined to identify the locations of vessels adequately.
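An even (cosine-phase) Gabor bank of the kind described can be sketched as follows; the kernel size, sigma, frequency, and the choice of four orientations are illustrative, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def even_gabor(size, sigma, freq, theta):
    # Even (cosine-phase) Gabor: Gaussian envelope times an oriented cosine.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_bank(size=21, sigma=3.0, freq=0.1, n_orient=4):
    # Directional bank; the max response over orientations enhances vessels
    # regardless of their local direction.
    return [even_gabor(size, sigma, freq, k * np.pi / n_orient)
            for k in range(n_orient)]

bank = gabor_bank()

# Enhance a synthetic vertical "vessel" by taking the maximum filter response.
vessel_img = np.zeros((41, 41))
vessel_img[:, 20] = 1.0
responses = [ndimage.convolve(vessel_img, k) for k in bank]
enhanced = np.max(responses, axis=0)
```

Note the even filter's point symmetry (flipping both axes leaves the kernel unchanged), which is the property the paper cites for choosing it.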

    2. SCLERA FEATURE EXTRACTION

      Depending on the physiological status of a person (for example, fatigued or nonfatigued), the vascular patterns can have different thicknesses at different times because of the dilation and constriction of the vessels. Therefore, vessel thickness is not a stable pattern for recognition. In addition, some very thin vascular patterns may not be visible at all times. In this paper, binary morphological operations are used to thin the detected vessel structure down to a single-pixel-wide skeleton and to remove the branch points. This leaves a set of single-pixel-wide lines that represents the vessel structure. Fig. 12(d) shows the vessel skeleton after binary morphology. These lines are then recursively parsed into smaller segments. The process is repeated until the line segments are nearly linear, up to a maximum segment length. A least-squares line is then fit to each segment.
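The recursive parse-until-linear step plus the least-squares fit might look like this; the chord-deviation test, the split-at-midpoint strategy, and the L-shaped toy trace are assumptions for illustration, since the paper does not specify its splitting criterion:

```python
import numpy as np

def split_segments(points, max_dev=1.0, max_len=25):
    # Recursively split an ordered skeleton trace until each piece deviates
    # from its end-to-end chord by at most `max_dev` pixels and is short.
    if len(points) <= 2:
        return [points]
    p0, p1 = points[0].astype(float), points[-1].astype(float)
    chord = p1 - p0
    norm = np.linalg.norm(chord) or 1.0
    d = points - p0
    # Perpendicular distance of every point to the chord p0 -> p1.
    dev = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
    if dev.max() <= max_dev and len(points) <= max_len:
        return [points]
    mid = len(points) // 2
    return (split_segments(points[:mid + 1], max_dev, max_len)
            + split_segments(points[mid:], max_dev, max_len))

def fit_line(points):
    # Least-squares line y = a*x + b over one near-linear segment.
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b

# An L-shaped skeleton trace splits into two near-linear segments.
trace = np.array([(i, 0) for i in range(10)] + [(9, j) for j in range(1, 10)])
segs = split_segments(trace)
```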



    When acquiring the eye images, the eyelids can have different shapes, the iris location can vary, the pupil size can be different, and the eye may be tilted with respect to the camera. The camera-to-object distance and camera zoom can also vary. All of these could affect the size, location, and patterns of the acquired sclera region in the image. It is important to take these variances into account in sclera matching. Therefore, the first step is to perform sclera ROI registration to achieve global translation, orientation, and scaling invariances. In addition, due to the complex deformation that can occur in the vessel patterns, it is desirable to have a registration scheme that is robust and exhaustive but does not unduly introduce false accepts. Most importantly, as we discussed in Section I, the sclera vascular patterns deform nonlinearly with the movement of the eye and eyelids and the contraction/dilation of the pupil. As a result, the segments of the vascular patterns could move individually, and this must be accounted for in the registration scheme.


As discussed previously, it is important to design the matching algorithm such that it is tolerant of segmentation errors. In general, the edge areas of the sclera may not be segmented accurately; therefore, the weighting image (Fig. 14) is created from the sclera mask by setting interior pixels in the sclera mask to 1, pixels within some distance of the boundary of the mask to 0.5, and pixels outside the mask to 0. This allows a matching value between two segments to be between 0 and 1 and allows for weighting the matching results based on the segments that are near the mask's boundaries. This reduces the effect of segmentation errors, particularly for undersegmentation of the boundary between the sclera and the eyelids.

Fig. 14. Weighting image.
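A weighting image of this form can be built from the mask with a morphological erosion. Placing the 0.5 band just inside the mask boundary, and the 3-pixel band width, are one interpretation of "within some distance of the boundary":

```python
import numpy as np
from scipy import ndimage

def weighting_image(mask, border=3):
    # 1.0 in the confident interior, 0.5 in a band just inside the mask
    # boundary, 0.0 outside: matches near the boundary count for less.
    interior = ndimage.binary_erosion(mask, iterations=border)
    w = np.zeros(mask.shape, dtype=float)
    w[mask] = 0.5
    w[interior] = 1.0
    return w

sclera_mask = np.zeros((20, 20), dtype=bool)
sclera_mask[5:15, 5:15] = True
w_map = weighting_image(sclera_mask, border=3)
```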


    1. The UBIRIS Database

      The UBIRIS database [6] is a publicly available database of iris images acquired in color. The database consists of 1877 images acquired from 241 users in two sessions. The images are predominantly frontal gaze. The database is available at multiple image resolutions, with the maximum being 800 by 600 pixels. In the first session, noise was minimized and the images were acquired in focus where possible. In the second session, however, noise effects were not reduced, ambient light was not standardized, and a substantial number of the images have very poor focus. In both sessions, the images are generally cropped such that the eye is predominantly centered and the eye region is well cropped in the images. The main focus of the UBIRIS database is to minimize the requirement of user cooperation, i.e., to support the analysis and development of approaches for the automatic recognition of individuals using images of their eyes captured at a distance, minimizing the required degree of cooperation from the users, possibly even in covert mode. In the first-session database, the primary difference between good- and poor-quality images is image focus and/or eyelid occlusion. For some images, the iris and sclera regions are very poorly focused, which reduces the visual clarity of the image and in many cases makes the sclera vein patterns difficult, or impossible, to reliably identify by either automatic or manual methods. The second-session images are of very poor quality for sclera recognition: the overall image quality is much worse and much less consistent than in the first session. In particular, the focus on the sclera region is very inconsistent, which makes the second-session database very poor for sclera recognition.

      Images from UBIRIS Database Session 1

      Images from UBIRIS Database Session 2

    2. The IUPUI Multi-Wavelength Database

      The IUPUI multi-wavelength database is an internally acquired database of video images of users' eyes and the surrounding regions with different eye gaze angles, illumination wavelengths, and ambient illumination levels. The database comprises 45 users, with two videos acquired of each user with at least 1 week between acquisitions. For each session, 32 videos were acquired: 8 different illumination wavelengths (420, 470, 525, 590, 610, 630, 660, and 820 nm), with and without ambient illumination, and both the user's left and right eyes. For each video, the user was asked to direct their gaze to 6 different gaze locations (centered, up, left, left-up, right, and right-up) during the video. Each image was acquired at a resolution of 1280 by 1024 pixels, with the eye generally centered in the image. In general, the eye regions are around 1000 pixels in width, about 200 pixels more than the UBIRIS database's maximum eye width. Users were asked to limit head movement, but no restraints were used to otherwise limit their movement.

Sample images from the IUPUI multi-wavelength database


In this paper, we have proposed a new biometric: sclera recognition. Our research results show that sclera recognition is very promising for positive human ID and provides a new option for human ID. In this paper, we focused on frontal-looking sclera image processing and recognition. Similar to iris recognition, where off-angle iris image segmentation and recognition is still a challenging research topic, off-angle sclera image segmentation and recognition will be an interesting and challenging research topic. In addition, sclera recognition can be combined with other biometrics, such as iris recognition or face recognition (e.g., 2-D face recognition), to perform multimodal biometrics. Moreover, the effect of template aging in sclera recognition will be studied in the future. Currently, the proposed system is implemented in Matlab. The processing time can be dramatically reduced by parallel computing approaches.


  1. J. Woodward, N. Orlans, and P. Higgins, Biometrics: Identity Assurance in the Information Age. New York: McGraw-Hill, 2003.

  2. Y. Du, "Biometrics: Technologies and trend," in Encyclopedia of Optical Engineering. New York: Marcel Dekker, 2006.

  3. Y. Du, "Biometrics," in Handbook of Digital Human Modeling. Mahwah, NJ: Lawrence Erlbaum, 2008.

  4. D. Pearce and H. Hirsch, "The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions," in Proc. ISCA ITRW ASR, Automatic Speech Recognition: Challenges for the Next Millennium, Paris, France, 2000.

  5. M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neurosci., vol. 3, no. 1, pp. 71–86, 1991.

  6. G. Medioni, J. Choi, C.-H. Kuo, and D. Fidaleo, "Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 1, pp. 12–24, Jan. 2009.

  7. M. De Marsico, M. Nappi, and D. Riccio, "FARO: Face recognition against occlusions and expression variations," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 40, no. 1, pp. 121–132, Jan. 2010.

  8. D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York: Springer-Verlag, 2009.

  9. M. Vatsa, R. Singh, and A. Noore, "Unification of evidence-theoretic fusion algorithms: A case study in level-2 and level-3 fingerprint features," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 1, pp. 47–56, Jan. 2009.