Human Face Recognition using Skin Texture

DOI : 10.17577/IJERTCONV3IS20046


Dhiraj Kumar

M. Tech. Scholar, Software Engineering Department, RCET Bhilai,
Chhattisgarh, India

Devesh Narayan

Sr. Associate Professor, Computer Science & Engineering, RCET Bhilai,
Chhattisgarh, India

Abstract- The face is our primary focus of attention for conveying identity, and face detection by computer systems has become a major field of interest. Detecting faces in digital images has gained much importance in the last decade, with applications in many fields, and automating the process requires various image processing techniques. Generally, skin texture is the surface texture pattern of any part of the human body with bare skin (e.g., the face, hand, or palm). Skin detection is widely used in many domains, such as face detection and tracking as well as sensitive-image detection. Three color spaces, RGB, YCbCr and HSI, are of main concern here, and we compare skin-detection algorithms based on them. In this paper, the RGB, YCbCr and HSI color spaces are used to conduct an experimental comparison of skin color detection.

Keywords- Human skin color, gray statistics, RGB color space, HSI color space, YCbCr color space

      1. INTRODUCTION

Face detection is a necessary first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. Face detection techniques can be roughly classified into four categories [1]: skin color-model based approaches, template matching-based approaches, feature-based approaches, and statistics-based approaches. Usually, face detection systems integrate some or all of these approaches to achieve high detection accuracy and a low false detection rate. The detection rate and the false-positive rate are important factors in evaluating face detection systems; the detection rate is the ratio between the number of faces correctly detected by the system and the actual number of faces in the image.

Most previous studies focused on using skin color to detect faces or to detect humans and track human motion. Hassan Yasser et al. [3] use skin color to detect skin tone; the color images were converted from RGB to HSI format in order to isolate the effect of light intensity during shots, and the images were analyzed using mathematical and statistical methods (mean, median, standard deviation, skewness, kurtosis, and gray-level co-occurrence matrices (GLCM)). Mehran Fotouhi et al. [5] propose combined texture- and color-based skin detection, in which the non-subsampled contourlet transform is used to represent the texture of the whole image and dimensionality reduction is addressed through principal component analysis (PCA). Ghazali Osman et al. [8] implement a region-based skin colour classification technique using a stepwise LDA method to model the skin colour distribution. Many skin segmentation methods depend on skin color alone, which raises difficulties: skin color depends on human race and on lighting conditions, and although this can be partly avoided using the YCbCr color space, in which the two components Cb and Cr depend only on chrominance, many objects in the real world have a chrominance in the range of human skin and may be wrongly classified as skin. For these reasons, combining the texture features of skin with its color features increases the accuracy of skin recognition, for example using texture statistics such as the GLCM measures sketched below.
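To make the texture idea concrete, the sketch below computes a few GLCM statistics for a grayscale skin-candidate patch using scikit-image (graycomatrix/graycoprops, available under these names in scikit-image 0.19 and later). The distances, angles and chosen properties are illustrative assumptions, not the exact configuration used in [3] or [5].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_patch):
    """Compute simple GLCM texture statistics for a skin-candidate patch.

    gray_patch: 2-D uint8 grayscale patch.
    The distances, angles and properties below are illustrative choices,
    not the exact configuration used in the cited papers.
    """
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the two offsets to get one value per feature.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```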

Color is a powerful fundamental cue that can be used as the first step in the face detection process, because color image segmentation is computationally fast; however, the skin color of humans varies strongly. Given the speed, accuracy, and cost specifications of an end-to-end identification system, the following design issues need to be addressed: (i) how to acquire the input data/measurements (biometrics); (ii) what internal representation (features) of the input data is invariant and amenable to automatic feature extraction; (iii) given the input data, how to extract this internal representation from it; (iv) given two input samples in the selected internal representation, how to define a matching metric that translates the intuition of "similarity" among the patterns; (v) how to implement the matching metric. Additionally, for reasons of efficiency, the designer may also need to address (vi) the organization of a number of input samples (representations) into a database and effective methods of searching that database for a given input sample.

2. COLOR SPACES FOR FACE DETECTION

A color space, also called a color model or color system, is intended to describe colors in a standard, generally accepted way. In essence, a color model is a specification of a coordinate system and a subspace within it, in which each color is represented by a single point [2]. At present, the color spaces mainly used in skin detection are RGB, YCbCr and HSI.

        1. RGB color space

          The RGB color space consists of the three additive primaries: red, green and blue. Spectral components of these colors combine additively to produce a resultant color.

The RGB model is represented by a 3-dimensional cube with red, green and blue along the three axes (Figure 1). Black is at the origin and white is at the opposite corner of the cube, with the gray scale following the line from black to white. In a 24-bit color graphics system with 8 bits per color channel, red is (255, 0, 0); on the unit color cube it is (1, 0, 0). The RGB model simplifies the design of computer graphics systems but is not ideal for all applications, because the red, green and blue components are highly correlated. This makes it difficult to execute some image processing algorithms; many processing techniques, such as histogram equalization, work on the intensity component of an image only.
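As an illustration of how a pixel-wise RGB skin rule might look, the sketch below applies the widely cited explicit thresholds (R > 95, G > 40, B > 20, etc.); these values are assumptions taken from the literature, not the exact rule evaluated in this paper.

```python
import numpy as np

def rgb_skin_mask(img):
    """Classify pixels as skin with an explicit RGB rule.

    img: H x W x 3 uint8 array in RGB order.
    Returns a boolean H x W mask. The thresholds follow a commonly
    cited uniform-daylight rule and are illustrative only.
    """
    img = img.astype(np.int16)          # avoid uint8 overflow in differences
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = np.max(img, axis=-1)
    mn = np.min(img, axis=-1)
    return ((r > 95) & (g > 40) & (b > 20) &
            (mx - mn > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))
```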

        2. YCbCr color space

The YCbCr space separates an image into a luminance component and two chrominance components; its main advantage is that the influence of luminance can be removed while processing an image. Using the reference images, plots of the Y, Cb and Cr values for face and non-face pixels were produced and studied to find the range of Y, Cb and Cr values for face pixels. After experimenting with various thresholds, the best results were obtained with a threshold rule on these values for detecting skin pixels.

Recommendation ITU-R BT.601 specifies 8-bit (i.e., 0 to 255) coding of YCbCr, whereby the luminance component Y has an excursion of 219 and an offset of +16. This coding places black at code 16 and white at code 235, reserving the extremes of the range for signal-processing footroom and headroom. The chrominance components Cb and Cr, on the other hand, have excursions of +/-112 and an offset of +128, producing a range from 16 to 240 inclusive.
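The following sketch converts an RGB image to YCbCr with the full-range (JPEG-style) BT.601 transform and thresholds only the chrominance plane. The Cb/Cr ranges are typical literature values and are assumptions, not the specific model trained in this work.

```python
import numpy as np

def ycbcr_skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold the chrominance plane of a YCbCr image to find skin.

    img: H x W x 3 uint8 RGB array. The conversion below is the
    full-range (JPEG-style) BT.601 transform; the Cb/Cr ranges are
    common literature values, not this paper's exact model.
    """
    rgb = img.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```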

        3. HSI color space

While RGB may be the most commonly used basis for color description, it has the drawback that the red, green and blue components are highly correlated, so a single component cannot necessarily indicate whether a particular pixel is skin or not. Since hue, saturation and intensity effectively describe color, it is logical to have a corresponding color model, HSI. The HSI color space is much more intuitive and provides color information in a manner closer to how humans think about color: when using HSI, you do not need to know what percentage of blue or green is required to produce a color; you simply adjust the hue to get the color you wish. To change a deep red to pink, adjust the saturation; to make it darker or lighter, alter the intensity.

        Figure 1. Double cone model of HSI color space

In the HSI model with cylindrical coordinates, the hue (H) is represented as an angle that varies from 0° to 360°, saturation (S) corresponds to the radius and varies from 0 to 1, and intensity (I) varies along the z axis, with 0 being black and 1 being white.

When S = 0, the color is a gray value of intensity I; when S = 1, the color lies on the boundary of the cone base. The greater the saturation, the farther the color is from white, gray or black. Adjusting the hue varies the color from red at 0°, through green at 120° and blue at 240°, and back to red at 360°. When I = 0 the color is black and H is undefined; when S = 0 the color is a gray level and H is likewise undefined.

By adjusting I, a color can be made darker or lighter; by maintaining S = 1 and adjusting I, shades of that color are created.
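For reference, a minimal conversion from RGB to HSI using the standard geometric formulas (as in [12]) might look like the sketch below; the handling of the undefined-hue cases follows the conventions described above.

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an H x W x 3 uint8 RGB image to HSI.

    Returns hue in degrees [0, 360), saturation and intensity in [0, 1].
    Hue is set to 0 where it is undefined (gray pixels, S = 0), and
    saturation is set to 0 for black pixels (I = 0).
    """
    rgb = img.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    mn = np.minimum(np.minimum(r, g), b)
    s = np.where(i > 0, 1.0 - mn / np.maximum(i, 1e-12), 0.0)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)   # hue angle on the color circle
    h = np.where(s <= 1e-12, 0.0, h)             # hue undefined for gray pixels
    return h, s, i
```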

      3. SKIN COLOR SEGMENTATION

This process searches the regions of an input image that potentially contain faces. First, a very large number of face skin samples were collected and a statistical model of skin color in the YCbCr color space was developed. The YCbCr color space was chosen for this investigation because Y denotes the luminance component while Cb and Cr represent the chrominance factors, and many researchers assume that the chrominance factors of the YCbCr color space are independent of the luminance component. According to [8], the skin color cluster is more compact in YCbCr than in other color spaces. YCbCr can be obtained from RGB using a transformation matrix.
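The paper does not spell out the exact form of the statistical skin-color model, but a common realization is a single Gaussian over the (Cb, Cr) plane fitted to the collected skin samples; the sketch below assumes that form and an illustrative Mahalanobis-distance threshold.

```python
import numpy as np

def fit_cbcr_gaussian(skin_cbcr):
    """Fit a single Gaussian to skin chrominance samples.

    skin_cbcr: N x 2 array of (Cb, Cr) values from hand-labeled skin
    pixels. Returns the sample mean and covariance. This is one common
    way to realize the "statistical model" described above; the paper
    does not specify its exact form.
    """
    mean = skin_cbcr.mean(axis=0)
    cov = np.cov(skin_cbcr, rowvar=False)
    return mean, cov

def skin_likelihood_mask(cb, cr, mean, cov, max_mahalanobis=2.5):
    """Label a pixel as skin if its (Cb, Cr) lies close to the model."""
    x = np.stack([cb, cr], axis=-1) - mean                # H x W x 2 deviations
    inv_cov = np.linalg.inv(cov)
    d2 = np.einsum('...i,ij,...j->...', x, inv_cov, x)    # squared Mahalanobis distance
    return d2 <= max_mahalanobis ** 2
```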

      4. MORPHOLOGICAL OPERATIONS

The color segmentation generates a binary mask with the same size as the original image; however, some regions similar to skin also appear white, such as pseudo-skin pixels from clothes, floors and buildings. The goal of the connected-component analysis is to examine the connectivity of the skin regions and identify the faces, which are described by rectangular boxes. Ideally each face is a connected region separated from the others, as shown in Fig. 14-Fig. 17. In some circumstances, however, two or even three faces can be connected by ears or by highly luminant hair. In addition, pseudo-skin pixels are scattered and generate hundreds of connected components, which costs unnecessary computation if they are treated as face candidates. The connections between faces are thin compared with the interior regions of a face, so they can be broken by morphological operations. After color segmentation, the leftover noise in the background can be smoothed using morphological processing; it is necessary to remove the unwanted specks in order to speed up later processing. Hence the open operation (erosion followed by dilation) was performed with a structuring element, and it was observed that opening greatly reduces the number of small noisy specks. Erosion shrinks the selected area and expands the background, whereas dilation does the reverse. In particular, one row-direction and one column-direction erosion are applied so that more pixels are eroded in the column direction, based on the observation that faces are usually connected more horizontally. Within a face, the connection between the parts above and below the eyes is fragile, and it is desirable not to erode this connection. Since erosion acts similarly to a median filter, it can remove pseudo-skin pixels because of their scattered and weakly connected nature. Between the first and second levels of erosion, holes are filled so that later erosions only happen at the edges of the connected components and do not cause regions inside a face to fall apart. A rough sketch of this post-processing is given below.
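The sketch below shows this post-processing with SciPy; the structuring-element sizes and the single extra column-direction erosion are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy import ndimage as ndi

def clean_skin_mask(mask):
    """Post-process a binary skin mask (illustrative parameters).

    mask: boolean H x W array from color segmentation.
    Opening (erosion followed by dilation) removes small noisy specks,
    holes inside face blobs are filled, and an extra erosion with a
    vertical element helps break thin horizontal bridges between faces.
    """
    opened = ndi.binary_opening(mask, structure=np.ones((3, 3)))
    filled = ndi.binary_fill_holes(opened)
    eroded = ndi.binary_erosion(filled, structure=np.ones((3, 1)))  # erode more along columns
    labels, n = ndi.label(eroded)        # connected face candidates
    return labels, n
```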

      5. FACE LOCALIZATION

        1. Lip Detection

In this step, the lip region is detected within the segmented skin-color region. We constructed the lip-color model using the same approach as the skin-color segmentation: a very large number of lip samples were collected and a statistical model of lip color in the YCbCr color space was created. The lip-color map (LCM) is obtained by applying this lip-color model; high LCM values and low skin-color-map (SCM) values occur around the lips. The resulting lip map is then dilated and masked, and the lip region is obtained with a predefined threshold, as sketched below.
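A minimal sketch of this combination step is given below, assuming the lip-color map (LCM) and skin-color map (SCM) are already available as probability-like images in [0, 1]; the combination rule and the threshold value are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def lip_region(lcm, scm, threshold=0.5):
    """Locate the lip region from lip- and skin-color maps (a sketch).

    lcm, scm: H x W maps in [0, 1] from the lip- and skin-color models.
    The combination (high lip score, low skin score), the dilation size
    and the threshold are illustrative; the paper only states the idea.
    """
    lip_map = lcm * (1.0 - scm)                   # emphasize lip-like, non-skin pixels
    lip_map = ndi.grey_dilation(lip_map, size=(5, 5))
    return lip_map >= threshold
```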

        2. Eyes Localization

The purpose of this step is to locate the eyes within the segmented skin-color region once the lip color has been detected. We pre-processed this region into a binary image using the skin-color segmentation, then extracted the 4-connected components, labelled them, and identified the centre of each segment (see the sketch below).
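A sketch of the labelling step with SciPy, assuming the binary eye-candidate image has already been prepared, might look like this; 4-connectivity is encoded explicitly in the structuring element.

```python
import numpy as np
from scipy import ndimage as ndi

def component_centres(binary_img):
    """Label 4-connected components and return their centres.

    binary_img: boolean H x W image of candidate eye blobs inside the
    skin region (lip pixels already removed). The structuring element
    below enforces 4-connectivity, matching the step described above.
    """
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])                      # 4-connectivity
    labels, n = ndi.label(binary_img, structure=structure)
    centres = ndi.center_of_mass(binary_img, labels, range(1, n + 1))
    return labels, centres
```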

        3. Clip the Face Regions

Assuming that the real facial region is approximately elliptical, we constructed an elliptical shape model. Because the human face varies in scale and orientation, a set of elliptical templates is used to find the best-fitting ellipse. The centre of each elliptical template is located at the centre of the isosceles triangle formed by the two eye centres and the lip centre, and the major axis of the ellipse lies along the triangle's axis of symmetry. The centre location is varied within the 8-connected neighbourhood of the lip-region centre to obtain various ellipse orientations, as in the sketch below.
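A simplified sketch of deriving one such ellipse from the eye and lip centres is given below; the centre, orientation and axis lengths are computed with illustrative assumptions (a fixed axis ratio tied to the eye distance) rather than the paper's full template search.

```python
import numpy as np

def face_ellipse(eye_left, eye_right, lip, axis_ratio=1.4):
    """Derive an elliptical face window from eye and lip centres (a sketch).

    eye_left, eye_right, lip: (row, col) centres found in the previous
    steps. The ellipse centre is the centroid of the triangle they form,
    the major axis points from the eye midpoint towards the lip, and
    axis_ratio is an illustrative value.
    """
    pts = np.array([eye_left, eye_right, lip], dtype=float)
    centre = pts.mean(axis=0)                      # centroid of the isosceles triangle
    eye_mid = (pts[0] + pts[1]) / 2.0
    major_dir = pts[2] - eye_mid                   # from eye midpoint towards lip
    angle = np.degrees(np.arctan2(major_dir[0], major_dir[1]))  # angle w.r.t. column axis
    semi_minor = np.linalg.norm(pts[1] - pts[0])   # tie axis lengths to the eye distance
    semi_major = axis_ratio * semi_minor
    return centre, angle, (semi_major, semi_minor)
```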

        TABLE I. COMPARISON CHART OF THE ALGORITHMS

Criterion                    RGB Color Space   YCbCr Color Space   HSI Color Space
Frontal Face                 56.46%            83.91%              82.27%
Tilted/Rotated Face          54.47%            80.14%              80.09%
Profile Face                 47.84%            80.11%              79.92%
Complex Background Image     42.62%            73.72%              71.19%
Time Consumption             2.09 sec          3.46 sec            3.52 sec

      6. AVAILABLE DATABASES

Face image databases are publicly available for research purposes, including the CASIA face database, the Color FERET database, the YALE face database and the Indian Face database. The FERET database contains 1564 sets of images, for a total of 14,126 images, covering 1199 individuals and 365 duplicate sets of images. The YALE database contains 165 grayscale images in GIF format of 15 individuals; there are 11 images per subject, one per facial expression or configuration. The Indian Face database contains a set of face images taken in February 2002 on the IIT Kanpur campus; there are 11 images of each of 40 distinct subjects.

      7. DISCUSSION AND CONCLUSION

In this paper a comparison has been made for detecting faces against a controlled background, using skin color detection in the RGB, YCbCr and HSI color spaces. We found that the YCbCr and HSI color spaces are more effective than RGB for classifying skin regions. The ultimate goal of this work is a system for objectionable-image filtering. Future work is to develop algorithms for skin classification (classifying which part of the body the skin belongs to) and to investigate appropriate features for this purpose.

REFERENCES

  1. Reema Ajmera, Namrata Saxena, "Face Detection in Digital Images Using Color Spaces and Edge Detection Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, 2013.

  2. Lei Yang, Hui Lei, Xiaoyu Wu, Dewei Zhao, "An Algorithm of Skin Detection Based on Texture", 4th International Congress on Image and Signal Processing, 2011.

  3. Hassan Yasser, Fatma Mohammed, Maram Mahir, "Human Skin Color Code Recognition: A Case Study", International Conference on Computing, Electrical and Electronic Engineering, 2013.

  4. Jeonghee Park, Jungwon Seo, Dongun An, "Detection of Human Faces Using Skin Color and Eyes", IEEE, 2000.

  5. Mehran Fotouhi, Mohammad H. Rohban, Shohreh Kasaei, "Skin Detection Using Contourlet Texture Analysis", Proceedings of the 14th International CSI Computer Conference, 2009.

  6. Nidhal K. Al-Abbadi, Nizar Saadi Dahir, Zaid Abd Alkareem, "Skin Texture Recognition Using Neural Networks".

  7. Hwei-Jen Lin, Shu-Yi Wang, Shwu-Huey Yen, Yang-Ta Kao, "Face Detection Based on Skin Color Segmentation and Neural Network", IEEE, Vol. 2, pp. 1144-1149.

  8. Ghazali Osman, Muhammad Suzuri Hitam, "Skin Colour Classification Using Linear Discriminant Analysis and Colour Mapping Co-occurrence Matrix", IEEE, 2013.

  9. Yuan Hui Wang, LiQian Xia, "Skin Color and Feature-Based Face Detection in Complicated Backgrounds", IEEE, 2009.

  10. Garsah Farhan Al-Qarni, Farzin Deravi, "Explicit Integration of Identity Information from Skin Regions to Improve Face Recognition", Springer-Verlag Berlin Heidelberg, 2012.

  11. P. Kakumanu, S. Makrogiannis, N. Bourbakis, "A Survey of Skin-Color Modeling and Detection Methods", Pattern Recognition, vol. 40, no. 3, pp. 1106-1122, 2007.

  12. R. Gonzalez, R. Woods, S. L. Eddins, Digital Image Processing Using MATLAB, Gatesmark Publishing, 2010.

  13. Anil K. Jain, Ruud Bolle, Sharath Pankanti, BIOMETRICS: Personal Identification in Networked Society, Kluwer Academic Publishers, New York.
