Facial Feature Extraction of Color Image using Gray Scale Intensity Value

DOI : 10.17577/IJERTV3IS031245


Lilipta Kumar Bhatta
M.Tech Scholar, Dept. of ECE
Centurion University of Technology & Management, Bhubaneswar, Odisha, India

Debaraj Rana
Asst. Prof., Dept. of ECE
Centurion University of Technology & Management, Bhubaneswar, Odisha, India

Abstract— Extraction of facial feature points has become an important issue in many applications, such as face recognition, expression recognition, and face detection. This paper proposes a method of facial feature extraction based on gray-scale intensity values. The technique extracts facial features from a color image through skin region extraction under normal lighting conditions, saves time during feature extraction, and aims to detect the features under different facial expressions. The proposed method is one of the simplest ways to extract facial features with good accuracy.

Keywords— YCbCr, facial feature extraction, skin region extraction, gray-scale intensity value

  1. INTRODUCTION

Human facial features [10] play a significant role in face recognition. Neurophysiological studies have determined that the eyes, mouth, and nose are among the most important features for recognition [1]. Recognizing someone from facial features makes human recognition a more automated process. The extraction of facial feature points (eyes, nose, mouth) plays an important role in many applications, such as face recognition [2], face detection [3], model-based image coding [4], expression recognition [5], facial animation [6], and head pose determination [7]. It is important to note that because such systems use the spatial geometry of distinguishing facial features, they do not rely on hairstyle, facial hair, or other similar factors. Facial recognition can be used for police work, for example in public safety, identifying suspected terrorists, and finding missing children. In this paper, we propose an approach based on gray-scale intensity values that uses the geometry and symmetry of faces [11]; it can extract features at different scales and locate the vital feature points such as the eyes, nose, and mouth exactly and quickly. This method need not normalize the images to the same size before processing, and it can also help to improve the accuracy of face recognition. The rest of the paper is organized as follows. Section II briefly describes skin region extraction. Section III describes our method for locating the vital feature points automatically. Section IV presents our experimental results. Finally, in Section V, the experimental results are compared and analyzed, and the paper is concluded.

  2. SKIN REGION EXTRACTION

Color is a prominent feature of human skin, and using skin color as a primitive feature for detecting skin regions has several advantages. In a color image, however, the RGB components [13] are dependent on the lighting conditions, so skin detection may fail if the lighting changes. Therefore, in this work we have used the YCbCr color model [9]. The relationship between RGB and YCbCr is as follows:

Y = 0.299R + 0.587G + 0.114B

Cb = -0.169R - 0.332G + 0.500B        (1)

Cr = 0.500R - 0.419G - 0.081B

In the YCbCr color space, the RGB components are separated into luminance (Y), blue chrominance (Cb), and red chrominance (Cr). A skin color map is derived and applied to the chrominance components of the input image to detect pixels that appear to be skin. Here we use a predefined value of the chrominance (Cr) component to extract the skin region.
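As a rough illustration, the following sketch implements the transform of Eq. (1) directly in NumPy and thresholds the Cr channel. The numeric Cr bounds are an assumption for illustration only, since the paper does not state the predefined chrominance value it uses.

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Apply the RGB -> YCbCr transform of Eq. (1) channel-wise."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.332 * g + 0.500 * b
        cr = 0.500 * r - 0.419 * g - 0.081 * b
        return y, cb, cr

    def skin_mask(rgb, cr_min=5.0, cr_max=45.0):
        """Binary skin mask from a predefined Cr range.

        Eq. (1) yields a zero-centred Cr, so the 8-bit skin range of
        roughly 133-173 often quoted in the skin-detection literature
        becomes about 5-45 here; these bounds are assumed, not taken
        from the paper.
        """
        _, _, cr = rgb_to_ycbcr(rgb)
        return (cr >= cr_min) & (cr <= cr_max)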

  3. PROPOSED METHOD

    Our method for extraction of facial features in a color image is divided into three stages.

• The face-like regions are segmented based on the characteristics of human face color, and the face region is extracted using the Sobel operator [11].

• Then the image is converted into the YCbCr color space, and the skin region is extracted using a morphological operation.

• Possible eye, mouth, and nose regions are detected, and these regions are extracted by means of gray-level intensity values.

A. Outline of Proposed Method

The proposed method operates on a color image: the face region is first extracted from the input color image using the method described in [11]. The skin region is then extracted from the face region, which reduces the complexity of the input image. The proposed method is then applied to extract feature points such as the eye row location, the eye column location, and the mouth row location, followed by extraction of the features. Finally, the extracted features are shown with rectangles.

Fig. 1: The Proposed Method

B. Extraction of Face Region

Fig. 2: Extraction of Face Region

1. In the first stage, the RGB color image is taken as the input image.

2. The edges of the input image are then detected using the Sobel edge detector.

3. After edge detection, the face region is extracted using the intensity values of the image, as described in [11]. The Sobel edge detector converts the intensity image IM(I, J), of dimension M×N, into a binary edge image BW(I, J). Then, for each column I and each row J, the edge projections C(I) and R(J) are computed as

C(I) = Σ (J = 1 to N) BW(I, J)        (2)

R(J) = Σ (I = 1 to M) BW(I, J)        (3)

The I-positions IL and IR of the left and right boundaries of the head are given by the smallest and largest values of I such that C(I) >= C(I0)/3, where I0 denotes the column I with the largest C(I). The J-position Jmin of the upper boundary of the head is given by the smallest J such that R(J) >= 0.05(IR - IL). The J-position Jmax of the lower boundary of the head is then Jmax = Jmin + 1.2(IR - IL).
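A minimal sketch of this boundary search, assuming OpenCV and NumPy; the Sobel magnitude threshold used to binarize the edge image (here 100) is an assumption, since the paper does not specify the binarization level.

    import cv2
    import numpy as np

    def face_region_bounds(gray):
        """Head boundaries from edge projections, following [11].

        gray: M x N intensity image IM(I, J). Returns (IL, IR, Jmin, Jmax),
        where I indexes columns and J indexes rows, as in the paper.
        """
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
        bw = (np.hypot(gx, gy) > 100).astype(np.uint8)  # binary edge image BW(I, J)

        c = bw.sum(axis=0)                  # C(I): edge count of column I, Eq. (2)
        r = bw.sum(axis=1)                  # R(J): edge count of row J, Eq. (3)

        i0 = int(np.argmax(c))              # column with the largest C(I)
        cols = np.where(c >= c[i0] / 3)[0]  # columns with at least a third of the peak
        il, ir = int(cols[0]), int(cols[-1])

        rows = np.where(r >= 0.05 * (ir - il))[0]
        jmin = int(rows[0])                 # upper boundary of the head
        jmax = int(jmin + 1.2 * (ir - il))  # lower boundary, 1.2 x head width
        return il, ir, jmin, jmax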

C. Extraction of Skin Region

Fig. 3: Skin Region Extraction

1. In the second stage, the input RGB color image is converted into the YCbCr domain.

2. Then, using a predefined value of the chrominance (Cr) component, the skin region is extracted.

3. A morphological closing operation is then carried out on the skin region image to remove holes from the foreground, which results in a clean skin region (see the sketch below).
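A minimal sketch of the closing step, assuming OpenCV; the elliptical 7 x 7 structuring element is an assumption, as the paper does not specify the kernel.

    import cv2

    def clean_skin_mask(mask, kernel_size=7):
        """Fill holes in the foreground of a binary (uint8, 0/255) skin mask.

        Closing = dilation followed by erosion, which removes small dark
        holes inside the skin region without shrinking its outline.
        """
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                           (kernel_size, kernel_size))
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)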

D. Eye, Nose and Mouth Localization

Fig. 4: Facial Features Extraction

From the output of the second stage, we first locate the position of the left eye. We divide the skin-region image vertically into two equal parts, taking CCl as the centre column of the image.

We then consider the left part of the image and take the summation of the intensity values of each column. The column at which this sum is minimum is considered the eye position, because the intensity value is low in dark regions and high in white regions.

Mathematically, the column-wise intensity sum is computed as

S(j) = Σ (i = 1 to M) V(i, j)        (4)


and the left-eye column is taken as the column at which this sum is minimum:

eyeLC = arg min S(j),  j = 2·CCl/3 to CCl        (5)

where V is the extracted intensity face image of size M × N.
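A minimal sketch of Eqs. (4)-(5), assuming V is available as a NumPy array:

    import numpy as np

    def left_eye_column(v):
        """Left-eye column from column-wise intensity sums, Eqs. (4)-(5).

        v: M x N gray-scale face image (extracted skin region). The eye
        column is the darkest column, searched between 2*CCl/3 and the
        centre column CCl, as stated in the paper.
        """
        m, n = v.shape
        ccl = n // 2                         # centre column CCl
        col_sums = v.sum(axis=0)             # S(j): intensity sum of column j
        lo, hi = 2 * ccl // 3, ccl           # search window from the paper
        return lo + int(np.argmin(col_sums[lo:hi]))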

After the localization of the left eye, we calculate the distance between the centre column (CCl) and the left-eye column:

dist = CCl - eyeLC        (6)

where dist is the distance between the centre column (CCl) and the left eye. From this distance we can locate the right-eye column, because by the symmetry of the eye pair the right eye lies at the same distance on the other side of the centre column. Mathematically,

eyeRC = CCl + dist        (7)

The eye row and eyebrow row are then located in the same way, from the gray-scale intensity minima along the eye columns:

(8)

where eyeR is the row location of the eye and eybR is the row location of the eyebrow.
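The symmetry step of Eqs. (6)-(7) is a one-liner in code; a minimal sketch:

    def right_eye_column(eye_left_col, ccl):
        """Right-eye column by symmetry about the centre column.

        Implements dist = CCl - eyeLC (Eq. 6) and eyeRC = CCl + dist (Eq. 7).
        """
        dist = ccl - eye_left_col  # distance of the left eye from the centre
        return ccl + dist          # the right eye sits the same distance away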

After eye localization, a new sub-region is formed from the centre row and the last row of the skin-region image: its upper boundary is the centre row (Cr) and its lower boundary is the last row (Lr). We then consider the region between centre column (Cl) - 10 and centre column (Cl) + 10 of this sub-region.

      (9)

The mouth row is then calculated as the row of this sub-region with the minimum intensity sum:

mouthR = arg min Σ (j = Cl - 10 to Cl + 10) V(i, j),  i = Cr to Lr        (10)
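A minimal sketch of Eq. (10), assuming the same NumPy image V and the sub-region bounds Cr, Lr, and Cl described above; the +/- 10-column strip width follows the sub-region definition:

    import numpy as np

    def mouth_row(v, cr, lr, cl, half_width=10):
        """Mouth row from row-wise intensity sums in a narrow strip, Eq. (10).

        v: gray-scale face image; cr, lr: centre and last rows of the skin
        region; cl: centre column.
        """
        strip = v[cr:lr + 1, cl - half_width:cl + half_width + 1]
        row_sums = strip.sum(axis=1)          # intensity sum of each row i
        return cr + int(np.argmin(row_sums))  # darkest row = mouth row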

After mouth row localization, we localize the nose. The nose centre is calculated as follows:

      (11)

After nose point localization, we localize the nose region as follows:

      (12)

Hence all the facial features are extracted using the proposed method.

  4. EXPERIMENTAL RESULTS

The experiment in this paper is based on the FEI face database [14], a Brazilian database in which 200 people each provided 14 face images. It is made up of frontal images, images rotated by small and large angles to the left and right sides, and images with a variety of lighting conditions and expressions. The image size is normalized to 640×480 and the face area is over 50% of each image. Experimental results on frontal faces are shown in Fig. 5.

Fig. 5: Facial feature extraction on a frontal face, panels (a)-(h)

In Fig. 5, (a) is the original input color image; (b) is the face region image; (c) is the skin region image after the morphological operation on the face region; (d) shows the localization of the eye point using the gray-scale intensity value; (e) shows the localization of the eye and eyebrow; (f) shows the mouth region; and finally in (g) all the extracted points are localized. All the features, including the eyes, mouth, and nose, are thus extracted. Some further results for other images are given in Fig. 6, which shows the input color image, the extracted region image, and the extracted feature points.

Fig. 6: (a) Input image, (b) extracted region, (c) feature points

The locating accuracy of the feature points, measured over around 100 people, averages over 93%, as shown in the chart below.

Fig. 7: The Accuracy Chart of Features

  5. CONCLUSION

It is well known that it is very difficult to detect the features exactly in every face image, owing to the complexity of human face structure and the diversity of facial features and shooting angles. This paper proposes a useful approach to extract the feature points from faces automatically. Experimental results show that the location of the feature points is exact and fast, and that it would help to increase the accuracy of face recognition.

Although we fixed the scale of the face images at 640×480, this does not imply any restriction on the size of the images. The approach presented in this paper can automatically locate the feature points with high accuracy for most frontal face images under normal illumination, but it remains limited under large rotation angles, where the accuracy decreases, and it is partly affected by strong sidelight.

In future work we will improve the detection accuracy, extend the method to multiple faces, and continue studying how to improve its robustness.

REFERENCES

  1. H. D. Ellis, "Introduction to aspects of face processing: ten questions in need of answers," in Aspects of Face Processing, H. D. Ellis, M. Jeeves, F. Newcombe, and A. Young, Eds. Dordrecht: Nijhoff, 1986, pp. 3-13.

  2. D. J. Beymer, "Face recognition under varying pose," in Proc. IEEE CVPR, pp. 756-761, June 1994.

  3. E. Hjelmas and B. K. Low, "Face detection: a survey," Computer Vision and Image Understanding, vol. 83, pp. 236-274, 2001.

  4. H. C. Huang, M. Ouhyoung, and J. L. Wu, "Automatic feature point extraction on a human face in model-based image coding," Optical Engineering, vol. 32, pp. 1571-1580, 1993.

  5. D. Pramadihanto, Y. Iwai, and M. Yachida, "Integrated person identification and expression recognition from facial images," IEICE Trans. Information and Systems, vol. E84-D, pp. 856-866, 2001.

  6. W. S. Lee and N. Magnenat-Thalmann, "Fast head modelling for animation," Image and Vision Computing, vol. 18, pp. 355-364, 2000.

  7. J. G. Wang and E. Sung, "Pose determination of human faces by using vanishing points," Pattern Recognition, vol. 34, pp. 2427-2445, 2001.

  8. Hua Gu, Guangda Su, and Cheng Du, "Feature points extraction from faces," Image and Vision Computing NZ, vol. 3644, SPIE, 2003, pp. 576-585.

  9. V. Vezhnevets, V. Sazonov, and A. Andreeva, "A survey on pixel-based skin color detection techniques," in Proc. Graphicon-2003, pp. 85-92, Moscow, Russia, September 2003.

  10. Elham Bagherian, Rahmita Wirza Rahmat, and Nur Izura Udzir, "Extract of facial feature point," IJCSNS International Journal of Computer Science and Network Security, vol. 9, no. 4, pp. 551-564, 2009.

  11. Tsuyoshi Kawaguchi and Mohamed Rizon, "Iris detection using intensity and edge information," Pattern Recognition, vol. 12, no. 3, pp. 183-182, 2002.

  12. T. Kanade, Computer Recognition of Human Faces. Basel and Stuttgart: Birkhauser, 1977.

  13. D. Rana and N. P. Rath, "Face identification using soft computing tool," in Proc. IEEE Int. Conf. on Advanced Communication Control and Computing Technologies (ICACCCT-2012), Tamil Nadu, India, Aug. 23-25, 2012, pp. 232-236.

  14. FEI Face Database, http://fei.edu.br/~cet/facedatabase.html

ABOUT AUTHORS

Mr. Debaraj Rana is working as Asst. Professor in the Dept. of Electronics & Communication Engineering, CIT, Jatni. He has two years of research experience. He completed his B.Tech at Biju Pattnaik University of Technology, Odisha, in 2007, and his Master's degree at VSS University of Technology, Burla, Odisha, during 2009-11. He has two IEEE international conference publications to his credit. He is presently doing research on face analysis (image processing) and optimization techniques (soft computing).

Mr. Lilipta Kumar Bhatta is pursuing his Master's degree at Centurion Institute of Technology, Jatni, Odisha. He completed his B.Tech at Biju Pattnaik University of Technology, Odisha, in 2011. He is presently doing research on face analysis (image processing).
