A Tutorial Review on Face Detection

DOI : 10.17577/IJERTV1IS8298


Mrs. Sunita Roy1, Mr. Sudipta Roy2, Prof. Samir K. Bandyopadhyay3

1 Ph.D. scholar in the Dept. of Computer Science & Engineering, University of Calcutta, Kolkata, India,

2 Pursuing M.Tech in the Dept. of Computer Science & Engineering , University of Calcutta, Kolkata, India,

3 Professor of the Dept. of Computer Science & Engineering, University of Calcutta, Kolkata, India,


The popularity of and demand for image processing are increasing due to its immense number of applications in various fields. Most of these are related to biometrics, such as face recognition, fingerprint recognition, iris scanning, and speech recognition. Among these, face detection is a very powerful tool for video surveillance, human-computer interfaces, face recognition, and image database management [1]. A large number of works exist on this subject. We therefore give an overall view of face detection, covering different types of methods with their advantages and disadvantages, so that we can determine the direction of our research.

  1. Introduction

    The most interesting thing about face detection is that the concept can be embedded to discover new techniques. It has already shown its ability to be the first step in many advanced computer vision, biometric recognition, and multimedia applications, such as face tracking, face recognition, and video surveillance. More precisely, face detection is a method by which we can point out the region where a face is located. The concept can be implemented in various ways, but we mainly use two steps. In the first step, we localize the face region, anticipating those parts of an image where a face may be present. In the second step, we verify whether the anticipated parts actually contain a face [2]. This is done using rules, templates, or image databases. The concept illustrated above may seem very simple, but when we implement it we encounter some difficulties, because whenever we detect a face we have to consider its scale, rotation, pose, expression, the presence or absence of some structural components, occlusion, illumination variation, and image condition [3, 4].

    Scale: An image can contain multiple faces at different scales, which means the size (height and width) of one face may differ from the others in the image.

    Rotation: An image can contain faces at different angles.

    Pose: Face images change dramatically according to the pose of the face relative to the camera, and some features can partially or wholly disappear.

    Expression: A face image can have various expressions, which may affect the spatial characteristics of various facial features.

    Presence or absence of some structural components: Presence of some structural components like mustaches, beards and glasses can make the face detection process very difficult.

    Occlusion: Sometimes faces can be partially occluded by other objects.

    Illumination variation: An image may consist of various objects under different lighting effects.

    Image noise: Noise in the image, due to factors such as the environment or the characteristics of the camera, may affect the face detection process.

    Despite these difficulties, a large number of techniques have been researched over the years, and much progress has been reported in the literature.

  2. Brief Review

    In this section we present some work done by other researchers on the problem of face detection. This kind of literature survey gives a good insight into the problem and an overview of the different algorithms used to solve it. In the early 1970s, face detection techniques were very simple because the face appeared in a passport-like photo with a uniform background and uniform lighting conditions. Most detection methods consider frontal faces as their research domain. In the early 1990s, more in-depth research into the problem was carried out using different algorithms. These detection methods can be categorized into two types: feature based and view based (image based) [3].


    1. Feature Based Face Detection Technique: It tries to extract features of the image and match them against knowledge of facial features. The images are first passed through a preprocessing filter, such as simple histogram analysis; then information about the facial features is exploited: for example, the eyebrows are darker than the surrounding areas, and the same concept applies to the eyes, nose, and mouth [3]. We can adjust the gray-scale threshold value to increase the difference between dark and light areas. Skin color information can be used in this analysis as well. The results of this stage are low-level features, which are random and ambiguous. The ambiguity comes from the fact that features similar to those of the face can exist in other background windows. Moreover, false detections may occur based on skin color: a non-face human body part may be detected as a face region due to its similarity in skin color. This means that after the segmentation process we need a feature analysis stage, where we analyze the features before making any final decision. We classify the different types of face detection techniques in Figure 1.

      Furthermore, this method is sub-categorized into three areas: low-level analysis, feature analysis, and active shape models. Edge based, skin color based, gray level based, and motion based methods are types of low-level analysis; feature searching and constellation analysis are types of feature analysis; and snakes, point distribution models, and deformable templates are types of active shape models [5].

      Figure 1: Different types of face detection techniques.

      1. Low-level analysis: This technique is based on analyzing low-level visual features using pixel properties such as intensity levels, edges, and color.

        1. Edge based face detection: The edge is the most primitive feature in computer vision applications, and it was applied in some of the earliest face detection techniques by Sakai et al. [66], based on analyzing line drawings of faces to locate facial features. Craw et al. [8] designed a hierarchical method to trace the human head outline based on Sakai's work. More recent examples of edge based techniques can be found in [61, 62, 63] for facial feature extraction and in [7, 64, 65] for face detection. In edge-detection-based face detection, edges are simply labeled and matched against face models to detect faces. Govindaraju [60] accomplishes this by labeling edges as the left side, hairline, or right side of a front-view face and matching these edges against a face model whose proportions follow the golden ratio [59] for an ideal face:

           height / width = (1 + √5) / 2 ≈ 1.618
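          As a rough illustration of this idea, the sketch below labels edge pixels with a simple Sobel gradient and checks whether the bounding box of the edges has roughly golden-ratio proportions. It uses NumPy only; the edge threshold and the 0.35 tolerance are our own illustrative assumptions, not part of Govindaraju's method.

```python
import numpy as np

GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # ideal face height/width in the model above

def sobel_edges(img, thresh=0.25):
    """Return a binary edge map using a simple Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max() if mag.max() > 0 else mag.astype(bool)

def candidate_is_facelike(edge_map, tolerance=0.35):
    """Check whether the edge bounding box has face-like (golden-ratio) proportions."""
    ys, xs = np.nonzero(edge_map)
    if len(ys) == 0:
        return False
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return abs(height / width - GOLDEN_RATIO) < tolerance
```

          A real system would of course match labeled edge segments against a structured face model rather than a single bounding-box ratio; the sketch only shows the proportion test.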

        2. Gray level based face detection: The gray information of an image can also be considered as a feature. For example, facial features like eyebrows, pupils, and lips are usually darker than their surrounding regions, and this property can be used to differentiate various facial parts. Several recent facial feature extraction algorithms [6, 54-56] basically search for local gray minima within segmented facial regions. In these methods, the input images are first enhanced by applying a contrast-stretching technique and gray-scale morphological routines to improve the quality of local dark patches and thereby make detection easier. Low-level gray-scale thresholding is then used to extract the dark patches.

          Yang and Huang [58], on the other hand, explore the gray-scale behavior of faces in mosaic (pyramid) images. As image resolution is gradually reduced, either by subsampling or averaging, the macroscopic features of the face disappear, and at low resolution the face region becomes uniform. Based on this observation, Yang proposed a hierarchical face detection framework. Starting from low resolution images, face candidates are established by a set of rules that search for uniform regions. The face candidates are then verified by the existence of prominent facial features, using local minima at higher resolutions. The technique of Yang and Huang was recently incorporated into a system for rotation invariant face detection by Lv et al. [57], and an extension of the algorithm is presented in Kotropoulos and Pitas [55].
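          The mosaic idea can be sketched as follows. The cell size and the uniformity threshold `max_std` are illustrative assumptions of ours, not the actual rule set of Yang and Huang:

```python
import numpy as np

def mosaic(img, cell):
    """Reduce resolution by averaging non-overlapping cell x cell blocks."""
    h, w = img.shape
    h2, w2 = h // cell, w // cell
    return img[:h2 * cell, :w2 * cell].reshape(h2, cell, w2, cell).mean(axis=(1, 3))

def uniform_regions(mos, max_std=10.0):
    """Flag mosaic cells whose 3x3 neighborhood is roughly uniform,
    a crude stand-in for the 'search for uniform regions' rule."""
    h, w = mos.shape
    out = np.zeros((h, w), bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = mos[i - 1:i + 2, j - 1:j + 2].std() < max_std
    return out
```

          Cells flagged here would be treated as face candidates and re-examined at higher resolution for dark facial-feature minima.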

        3. Skin color based face detection: Skin color plays an important role in detecting faces in color images, because skin chromaticity values in various color spaces can be used effectively to segment the input image. It helps to identify the probable regions containing faces; in this way we can reduce the search space and hence improve performance by making the processing faster. However, this type of segmentation may face problems in complex backgrounds or backgrounds having colors similar to skin.

          While gray-level information gives us basic features for face representation, color information can provide more facial features through the extra dimensions of pixel representation. For example, features that look the same in intensity space can be very different in color space. Skin color estimation can also help in finding possible face regions in an image, although this is a complex task when faces of different races are considered [40, 41, 46]. The most common color model is the RGB representation, in which colors are defined by combinations of red, green, and blue components. Lighting conditions can dramatically change the RGB values of an image; for this reason, normalized RGB values are preferred in color-based feature detection [38, 40, 42, 47, 46, 48].

          Normalized colors can be calculated by the following equations:

          r = R / (R + G + B)
          g = G / (R + G + B)
          b = B / (R + G + B)

          In this color representation space, the sum of the r, g, and b values is always 1, and the representation shows high invariance to lighting conditions. Other color representation models are also used for face detection, such as HSI [43, 41, 44, 45], HSV [37, 39, 49], YES [50], YUV [53], and YCrCb [37, 51].
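          A minimal sketch of the normalization for a single pixel (the fallback for a pure black pixel, where chromaticity is undefined, is our own convention):

```python
def normalized_rgb(pixel):
    """Convert an (R, G, B) pixel to lighting-invariant normalized (r, g, b) chromaticity."""
    r, g, b = (float(v) for v in pixel)
    s = r + g + b
    if s == 0:  # pure black: chromaticity undefined, return the neutral gray point
        return (1 / 3, 1 / 3, 1 / 3)
    return (r / s, g / s, b / s)
```

          Note that a bright and a dark version of the same surface map to the same chromaticity, which is exactly the lighting invariance the text describes.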

          Color based face detection is usually performed by applying skin color thresholds derived from pre-calculated skin color models [42, 45]. More complex methods use statistical measures over large training sets (adaptive learning). These methods' implementations can be improved with new face examples, and they become more robust against changes in environmental factors such as illumination conditions and camera characteristics [38, 40, 41, 46].
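          A fixed-threshold skin segmentation in normalized rg space might look like the sketch below. The threshold box bounds are illustrative guesses of ours, not a published skin model; a real system would calibrate them from labeled training pixels, as the adaptive methods above do.

```python
import numpy as np

def skin_mask(img):
    """Segment likely skin pixels with a fixed chromaticity box in normalized rg space.

    img: (H, W, 3) RGB array. The box below is illustrative only.
    """
    img = img.astype(float)
    s = img.sum(axis=2)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    r = img[..., 0] / s
    g = img[..., 1] / s
    return (r > 0.36) & (r < 0.47) & (g > 0.28) & (g < 0.36)
```

          The mask would then be passed to the feature analysis stage, since background objects with skin-like chromaticity survive this test.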


        4. Motion based face detection: When a video sequence is available, motion information can be used to locate moving objects. Moving silhouettes such as the face and body parts can be extracted by simply thresholding accumulated frame differences [36]. Besides face regions, facial features can also be located by frame differences [34, 35].
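          Thresholding accumulated frame differences can be sketched in a few lines; the threshold value here is an arbitrary illustrative choice:

```python
import numpy as np

def moving_silhouette(frames, diff_thresh=25.0):
    """Extract a moving-object mask by thresholding accumulated frame differences.

    frames: list of same-shape grayscale arrays from a video sequence.
    """
    frames = [f.astype(float) for f in frames]
    acc = np.zeros_like(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur - prev)  # accumulate per-pixel change over the sequence
    return acc > diff_thresh
```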

        5. Generalized measures: So far we have considered low-level features such as edges, skin color, gray-level intensity, and motion; all of these are derived in the early stages of the human visual system, reflecting the various responses made by the inner retina [69, 70]. This pre-attentive processing allows visual information to be organized on various bases prior to high-level visual activities in the brain. Based on these concepts, a machine vision system can be built, as first proposed by Reisfeld et al. [67]. Such a system should begin with pre-attentive low-level computation of generalized image properties. In their earlier work, Reisfeld and Yeshurun [68] introduced a generalized symmetry operator based on edge pixel operations.

      2. Feature Analysis: Low-level features alone are not very effective or robust: possible face regions can be found with them, but false regions are found as well. For example, in skin color based face detection, background objects of similar color can also be detected as face regions. To solve this problem, higher level feature analysis can be used. In feature analysis, visual features are organized according to a global concept of the face, using face geometry information. The feature analysis approach is divided into two sub-approaches: feature searching and constellation analysis. Feature searching strategies are based on the relative positions of simple facial features; constellation strategies use flexible features of various face models.

        1. Feature Searching: In this approach, we first determine all the prominent facial features. These are generally biometric measurements such as eyebrow lines and eye circles. In the literature, the most commonly used facial feature is the distinct side-by-side appearance of a pair of eyes [30]. The main face axis [32], the outline (top of the head) [31, 32], and the body (below the head) are also searched to detect face regions. After determining the features we focus on, these features are searched for, and face regions are detected according to the orientations and geometric ratios of the found features. The facial feature extraction algorithm by De Silva et al. [33] is a good example of the feature searching method.

        2. Constellation Analysis: All the approaches discussed so far are rigid in nature and hence fail to solve some problems, such as locating faces of various poses in complex backgrounds. To overcome this, later research groups facial features into face-like constellations using more robust modeling methods such as statistical analysis. Various types of face constellations have been proposed [12]. Burl et al. [12] make use of statistical shape theory on features detected with a multi-scale Gaussian derivative filter. Huang et al. [29] also apply a Gaussian filter for pre-processing in a framework based on image feature analysis.

      3. Active Shape Models: Active shape models focus on complex non-rigid features, i.e., the actual physical and higher level appearance of features. They use local features (edges, brightness) to fit the shape of feature models. Active shape models are divided into three groups: snakes, deformable templates, and point distribution models.

        1. Snakes: In this approach, active contours, or snakes, are used to locate the head boundary [21, 22, 23, 26, 27]; feature boundaries can also be found with these contours. To achieve this, we have to initialize the starting position of the snake in the proximity of the head boundary. Due to the elastic nature of the snake, it experiences forces induced by the object (the head), and these forces deform the snake toward the shape of the object. The evolution of a snake is governed by an energy function, and our aim is to minimize this energy. Let Esnake denote the energy of the snake. The energy function then looks as follows:

          Esnake = Einternal + Eexternal

          where Einternal and Eexternal are the internal and external energy functions, respectively.

          The internal energy depends on the intrinsic nature of the snake and defines its natural evolution; the typical natural evolution of a snake is shrinking or expanding. The external energy counteracts the internal energy and enables the contour to deviate from its natural evolution and eventually assume the shape of nearby features (the head boundary) at a state of equilibrium.

          Two main things must be considered when implementing a snake: the energy terms and the energy minimization technique. Elastic energy [22, 23, 26, 27] is commonly used as the internal energy. It is proportional to the distance between the control points on the snake, which gives the contour an elastic-band characteristic that causes it to shrink or expand. The external energy, on the other hand, depends on the type of image features used. The energy minimization is achieved by optimization techniques such as steepest gradient descent, and these minimization processes require heavy computation. Huang and Chen [22] and Lam and Yan [28] both employ fast iteration methods for faster convergence, which can be achieved by greedy algorithms.

          Snakes have some shortcomings: the contour often becomes trapped on false image features, and snakes are not suitable for extracting non-convex features due to their tendency to attain minimum curvature.
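          A minimal greedy iteration over a discrete version of the energy above might look like this. The elastic internal term and the edge-strength external term are simplified stand-ins for the published formulations, and the single-pixel neighborhood search is the crudest form of the greedy algorithm:

```python
import numpy as np

def snake_energy(contour, edge_strength, alpha=1.0, beta=1.0):
    """Esnake = Einternal + Eexternal for a discrete closed contour.

    Internal: elastic term from squared spacing between control points.
    External: negative edge strength, attracting the contour to boundaries.
    """
    pts = np.asarray(contour, float)
    internal = alpha * np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1) ** 2)
    external = -beta * sum(edge_strength[int(y), int(x)] for y, x in pts)
    return internal + external

def greedy_snake_step(contour, edge_strength):
    """One greedy pass: move each control point to its lowest-energy neighbor."""
    contour = [tuple(p) for p in contour]
    h, w = edge_strength.shape
    for i, (y, x) in enumerate(contour):
        best = (y, x)
        best_e = snake_energy(contour, edge_strength)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    trial = list(contour)
                    trial[i] = (ny, nx)
                    e = snake_energy(trial, edge_strength)
                    if e < best_e:
                        best, best_e = (ny, nx), e
        contour[i] = best
    return contour
```

          Each accepted move strictly lowers the energy, so repeated passes converge to a local minimum, including the false minima the paragraph above warns about.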

        2. Deformable Templates: Locating facial feature boundaries using active contours is not an easy task, since finding and locating facial edges is difficult; edge detection may fail because of bad lighting or poor image contrast. We therefore need more flexible methods, and deformable template approaches were developed to solve this problem. Deformation is based on local valleys, edges, peaks, and brightness [24]. Besides the face boundary, salient feature (eyes, nose, mouth, and eyebrows) extraction is a great challenge in face detection.

          In this method some predefined templates are used to guide the detection process. These predefined templates are very flexible and able to change their size and other parameter values to match themselves to the data. The final values of these parameters can be used to describe the features. This method should work despite variations in scale, tilt and rotation of head, and lighting conditions.

          An energy function is defined that gives a measure of fit of the template to the image. Minimizing the energy attracts the template to salient features, such as peaks, valleys, and edges in the image intensity; the minimum of the energy function corresponds to the best (local) fit with the image. The template is given some initial parameters that are then updated by steepest descent. This corresponds to following a path in parameter space, and contrasts with traditional methods of template matching that would involve sampling the parameter space to find the best match (and whose computational cost increases exponentially with the dimension of the parameter space). Changing these parameters corresponds to altering the position, orientation, size, and other properties of the template. The initial values of the parameters, which may be very different from the final values, are determined by preprocessing. If, for example, we have input from a global face template (as described in the previous section), then we could use this input to determine likely initial values. The template is designed to act on representations of the image as well as on the image itself. These representations are based on fields which highlight valleys, peaks, and edges, and they enable the template to match even when its initial parameter values are very different from the correct ones. The final fitness measure, however, is mostly independent of these representations [25].

        3. Point Distribution Models: These models are compact parameterized descriptions of shapes based on statistics [9]. The implementation of a PDM is quite different from that of the other active shape models. The contour of a PDM is discretized into a set of labeled points, and the variations of these points are parameterized over a training set that includes objects of different sizes and poses. These variations can be constructed as a linear flexible model. The model comprises the mean of all the features in the set and the principal modes of variation for each point:

          x = x̄ + P v

          where x represents a point on the PDM, x̄ is the mean feature in the training set for that point, P = [p1 p2 … pt] is the matrix of the t most significant variation vectors of the covariance of deviations, and v is the weight vector for each mode. The face PDM was first developed by Lanitis et al. [18] as a flexible model. The model depicts the global appearance of a face, including all the facial features such as the eyebrows, nose, and eyes.

          The advantage of using a face PDM is that it provides a compact parameterized description. In [19], it is implemented as a generic representation for several applications such as coding and facial expression interpretation. In subsequent work, Lanitis et al. [20] incorporated a genetic algorithm (GA) and a multi-resolution approach to address the problem of multiple face candidates. The global characteristic of the model also allows all the features to be located simultaneously, removing the need for feature searching. Furthermore, it has been shown that occlusion of a particular feature does not pose a severe problem, since the other features in the model can still contribute to a globally optimal solution [20].
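          The linear model x = x̄ + P v can be sketched directly with an eigen-decomposition of the landmark covariance. The sketch assumes the training shapes are already aligned (a real PDM first normalizes pose and scale, which is omitted here):

```python
import numpy as np

def fit_pdm(shapes, t=2):
    """Build a point distribution model (mean, P) from aligned training shapes.

    shapes: (n_samples, n_points * 2) array of flattened (x, y) landmarks.
    Returns the mean shape and the t most significant variation vectors.
    """
    X = np.asarray(shapes, float)
    mean = X.mean(axis=0)
    dev = X - mean
    cov = dev.T @ dev / len(X)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    P = vecs[:, ::-1][:, :t]           # keep the t largest-eigenvalue modes
    return mean, P

def synthesize(mean, P, v):
    """Generate a shape from mode weights v: x = mean + P v."""
    return mean + P @ v
```

          Setting v = 0 reproduces the mean shape, and projecting a training shape's deviation onto P recovers its mode weights.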

    2. View Based or Image Based Face Detection Technique: Feature based methods depend on prior knowledge of the face geometry, i.e., the relative distances between various facial features, which becomes more troublesome as the background scenery gets more complicated. View based methods [5], on the other hand, were introduced in an attempt to detect the human face without depending on knowledge of its geometry. View based methods treat face detection as a pattern recognition problem with two separate classes: faces and non-faces. Some well-known algorithms based on this approach include neural networks, linear subspace methods such as eigenfaces, and statistical approaches such as support vector machines (SVM) and principal component analysis (PCA).

      1. Neural Network Based Face Detection: In Rowley's (1999) [10] research, a view based approach to detecting faces in still images was introduced, proving that the face detection problem can be effectively solved using neural networks to detect frontal and non-frontal faces with different poses and degrees of rotation. The problem with using a machine learning technique to learn to detect faces is that faces in images vary considerably with lighting conditions, pose, occlusion, and facial expression; compensating for these variations is important for the learning process.

                Neural networks can be applied successfully in face detection systems. The advantage of using neural networks for face detection is the feasibility of training a system to capture the complex class conditional density of face images. However, one drawback is that the network architecture has to be extensively tuned (number of layers, number of nodes, learning rates, etc.) to get exceptional performance.

      2. Linear Subspace Methods: Human face images lie in a subspace of the overall image space, and several analysis methods have been developed using this subspace concept. In image processing, the three most important are principal component analysis (PCA), linear discriminant analysis (LDA), and factor analysis (FA).

          Eigenfaces: From an information theory viewpoint, if we want to extract information from a face image, we first encode it and then compare it with other encoded face images in a database [11]. A simple way to extract the information from a face image is to capture the variation in a collection of face images and use this information to encode and compare individual face images.

          Mathematically, we wish to find the principal components of the distribution of faces, i.e., the eigenvectors of the covariance matrix of a set of face images. These eigenvectors are a set of features which together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so we can display an eigenvector as a sort of ghostly face, which we call an eigenface.

          In the training set, each face image is represented by a linear combination of the eigenfaces, so the number of possible eigenfaces is the same as the number of face images. A simpler alternative is to represent each face using only the best eigenfaces (those with the largest eigenvalues); reducing the number of eigenfaces in this way increases the computational efficiency. For example, if a training set has M images, then the number of eigenfaces we would get is also M. Out of these, only the M' eigenfaces having the largest eigenvalues are selected; these span an M'-dimensional subspace (face space) of all possible images (image space).

          When the face image to be recognized is projected onto the face space, we get the weights associated with the eigenfaces; these linearly approximate the face and can be used to reconstruct it. These weights are then compared with the weights of the known face images so that the input can be recognized as a known face from the training set. In simpler words, the Euclidean distance between the image projection and the known projections is calculated, and the face image is classified as the face with the minimum Euclidean distance.
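          The training and matching steps above can be sketched as follows. This is a generic eigenface sketch using the standard small-covariance trick, not Turk and Pentland's exact implementation:

```python
import numpy as np

def train_eigenfaces(images, k):
    """Compute the top-k eigenfaces and training weights from flattened face images.

    images: (M, n_pixels) array, one flattened face per row. Uses the small
    (M x M) covariance trick so the eigen-decomposition stays cheap when
    n_pixels >> M; k must not exceed the rank of the deviation matrix.
    """
    X = np.asarray(images, float)
    mean = X.mean(axis=0)
    A = X - mean                           # (M, n_pixels) deviations from the mean face
    small_cov = A @ A.T                    # (M, M) instead of (n_pixels, n_pixels)
    vals, vecs = np.linalg.eigh(small_cov)
    order = np.argsort(vals)[::-1][:k]     # k largest eigenvalues
    eigenfaces = (A.T @ vecs[:, order]).T  # back-project to pixel space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    weights = A @ eigenfaces.T             # (M, k) face-space projections
    return mean, eigenfaces, weights

def classify(face, mean, eigenfaces, weights):
    """Index of the training face with minimum Euclidean distance in face space."""
    w = (np.asarray(face, float) - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

          A production system would additionally threshold the minimum distance, so that images far from every known face are rejected as unknown.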

      3. Statistical Approaches: Apart from linear subspace methods and neural networks, there are several other statistical approaches to face detection, such as systems based on information theory, support vector machines, and the Bayes decision rule.

        1. Support Vector Machine (SVM): According to P. Jonathon Phillips [13], support vector machines (SVMs) are formulated to solve a classical two-class pattern recognition problem. SVM is adapted to face recognition by modifying the interpretation of the output of an SVM classifier and devising a representation of facial images that is concordant with a two-class problem. A traditional SVM returns a binary value, the class of the object. To train the SVM algorithm, the problem is formulated in a difference space, which explicitly captures the dissimilarities between two facial images. This is a departure from traditional face-space or view based approaches, which encode each facial image as a separate view of a face.

          In difference space we are interested in two classes: the dissimilarities between images of the same individual, and the dissimilarities between images of different people. These two classes are the input to an SVM algorithm, which generates a decision surface separating them. For face recognition, we re-interpret the decision surface to produce a similarity metric between two facial images, which allows us to construct face recognition algorithms. The work of Moghaddam et al. [14] uses a Bayesian method in a difference space, but they do not derive a similarity distance from both positive and negative samples.

          The SVM-based algorithm is demonstrated in both verification and identification applications. In identification, the algorithm is presented with an image of an unknown person and reports its best estimate of that person's identity from a database of known individuals; in a more general response, it reports a list of the most similar individuals in the database. In verification (also referred to as authentication), the algorithm is presented with an image and a claimed identity, and it either accepts or rejects the claim, or returns a confidence measure of the validity of the claim.

        2. Principal Component Analysis (PCA): PCA is a technique based on the concept of eigenfaces and was first introduced by Kirby and Sirovich in 1988; it is also known as the Karhunen-Loeve projection [15]. It is one of the more successful techniques of face recognition, and it is easy to understand and describe mathematically. The method uses eigenfaces, which have been used to track human faces: a principal component analysis approach stores a set of known patterns in a compact subspace representation of the image space, where the eigenvectors of the training image set span the subspace. In this method, the input image and the images of the eigenface gallery should be the same size, and the input image must be normalized so that the eyes, nose, and mouth are properly lined up, i.e., we consider only the face region, and it should be a frontal face. A non-frontal face may result in poor performance.

          PCA calculates the eigenvectors of the covariance matrix and projects the original data onto a lower dimensional feature space, defined by the eigenvectors with large eigenvalues [15]. These eigenvectors are also referred to as eigenfaces. PCA is a particularly advantageous approach in applications where the dimensionality of the original data is vast compared to the size of the dataset.

          The primary advantage of this approach is that it reduces the dimension of the data by compression, thereby reducing the complexity of grouping the images by representing high-dimensional data with lower-dimensional data. The PCA method is used in many face detection methods [16, 17].

  3. Conclusions

    We have given a detailed description of the different types of face detection techniques, from which a researcher can easily choose a direction of research. The ambiguity of features in feature based methods comes from the fact that features similar to those of the face can exist in other background windows; this can be resolved by view based or image based methods such as neural networks. The various advantages and disadvantages of the different methods are carefully described in this paper.

  4. References

[1] Ajeet Singh, B. K. Singh and Manish Verma, "Comparison of Different Algorithms of Face Recognition", VSRD-IJEECE, Vol. 2 (5), 2012, pp. 272-.

[2] Rein-Lien Hsu, Mohamed Abdel-Mottaleb, and Anil K. Jain, "Face Detection in Color Images", IEEE, pp. 1046-1048, 2001.

[3] Ming-Hsuan Yang, David J. Kriegman, and Narendra Ahuja, "Detecting Faces in Images: A Survey", IEEE Trans. PAMI, vol. 24, no. 1, pp. 1-25, Jan. 2002.

[4] Pang Wai Tian, "Remote Monitoring System", BEHE, Jan. 2008.

[5] Erik Hjelmas and Boon Kee Low, "Face Detection: A Survey", Computer Vision and Image Understanding, 83, pp. 236-274, 2001.

[6] P. J. L. Van Beek, M. J. T. Reinders, B. Sankur, and J. C. A. Van Der Lubbe, "Semantic segmentation of videophone image sequences", in Proc. of SPIE Int. Conf. on Visual Communications and Image Processing, 1992, pp. 1182-1193.

[7] M. C. Burl and P. Perona, "Recognition of planar object classes", in IEEE Proc. of Int. Conf. on Computer Vision and Pattern Recognition, 6, 1996.

[8] I. Craw, H. Ellis, and J. R. Lishman, "Automatic extraction of face-features", Pattern Recog. Lett., Feb. 1987, pp. 183-187.

[9] T. F. Cootes and C. J. Taylor, "Active shape models - 'smart snakes'", in Proc. of British Machine Vision Conference, 1992, pp. 266-275.

[10] H. A. Rowley, "Neural Network-Based Face Detection", PhD thesis, Carnegie Mellon Univ., 1999.

[11] Matthew A. Turk and Alex P. Pentland, "Face Recognition Using Eigenfaces", IEEE, pp. 586-591, 1991.

[12] M. C. Burl, T. K. Leung, and P. Perona, "Face localization via shape statistics", in Int. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, June 1995.

[13] P. Jonathon Phillips, "Support Vector Machines Applied to Face Recognition", 1999.

[14] B. Moghaddam, W. Wahid, and A. Pentland, "Beyond eigenfaces: probabilistic matching for face recognition", in 3rd International Conference on Automatic Face and Gesture Recognition, pp. 30-35, 1998.

[15] Srinivasulu Asadi and Dr. Ch. D. V. Subba Rao, "A Comparative Study of Face Recognition with Principal Component Analysis and Cross-Correlation Technique", International Journal of Computer Applications (0975-8887), Volume 10, No. 8, November 2010.

[16] I. Kim, J. H. Shim, and J. Yang, "Face Detection".

[17] P. Menezes, J. C. Barreto, and J. Dias, "Face Tracking Based on Haar-Like Features and Eigenfaces", 2003.

[18] A. Lanitis, C. J. Taylor, and T. F. Cootes, "Automatic tracking, coding and reconstruction of human faces, using flexible appearance models", IEEE Electron. Lett. 30, 1994, pp. 1578-1579.

[19] A. Lanitis, C. J. Taylor, and T. F. Cootes, "Automatic interpretation and coding of face images using flexible models", IEEE Trans. Pattern Anal. Mach. Intell. 19, 1997.

[20] A. Lanitis, A. Hill, T. Cootes, and C. Taylor, "Locating facial features using genetic algorithms", in Proc. of Int. Conf. on Digital Signal Processing, Limassol, Cyprus, 1995, pp. 520-525.

[21] K. M. Lam and H. Yan, "Locating and Extracting the Eye in Human Face Images", 1995.

[22] S. R. Gunn and M. S. Nixon, "A dual active contour for head and boundary extraction", in IEE Colloquium on Image Processing for Biometric Measurement, London, Apr. 1994, pp. 6/1.

[23] C. L. Huang and C. W. Chen, "Human facial feature extraction for face interpretation and recognition", Pattern Recog. 25, 1992, pp. 1435-1444.

[24] B. Scassellati, "Eye Finding via Face Detection for a Foveated, Active Vision System", 1998.

[25] Alan L. Yuille, "Deformable Templates for Face Recognition", Journal of Cognitive Neuroscience, Volume 3, Number 1, pp. 59-70.

[26] H. Wu, T. Yokoyama, D. Pramadihanto, and M. Yachida, "Face and facial feature extraction from colour image", in IEEE Proc. of 2nd Int. Conf. on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 345-349.

[27] T. Yokoyama, Y. Yagi, and M. Yachida, "Facial contour extraction model", in IEEE Proc. of 3rd Int. Conf. on Automatic Face and Gesture Recognition, 1998.

[28] K. M. Lam and H. Yan, "Fast greedy algorithm for locating head boundaries", Electron. Lett. 30, 1994, pp. 21-22.

[29] W. Huang, Q. Sun, C. P. Lam, and J. K. Wu, "A robust approach to face and eyes detection from images with cluttered background", in Proc. of International Conference on Pattern Recognition, 1998.

[30] J. L. Crowley and F. Berard, "Multi-modal Tracking of Faces for Video Communications".

[31] T. Sakai, M. Nagao, and T. Kanade, "Computer analysis and classification of photographs of human faces", in Proc. First USA-Japan Computer Conference, 1972, p. 2.7.

[32] I. Craw, H. Ellis, and J. R. Lishman, "Automatic extraction of face-features", Pattern Recog. Lett., Feb. 1987, pp. 183-187.

[33] L. C. De Silva, K. Aizawa, and M. Hatori, "Detection and tracking of facial features by using a facial feature model and deformable circular template", IEICE Trans. Inform. Systems E78-D(9), 1995, pp. 1195-1207.

[34] J. L. Crowley and F. Berard, "Multi-modal Tracking of Faces for Video Communications".

[35] B. K. Low and M. K. Ibrahim, "A Fast and Accurate Algorithm for Facial Feature Segmentation", 1997.

[36] M. J. T. Reinders, P. J. L. van Beek, B. Sankur, and J. C. A. van der Lubbe, "Facial Feature Localization and Adaptation of a Generic Face Model for Model-Based Coding", 1994.

[37] C. Garcia and G. Tziritas, "Face Detection Using Quantized Skin Color Regions Merging and Wavelet Packet Analysis", 1999.

[38] J. L. Crowley and F. Berard, "Multi-modal Tracking of Faces for Video Communications".

[39] R. Herpers, G. Verghese, and K. Derpanis, "Detection and Tracking of Faces in Real Environments".

[40] M. Hunke and A. Waibel, "Face Locating and Tracking for Human-Computer Interaction".

[41] S. McKenna, S. Gong, and J. J. Collins, "Face Tracking and Pose Representation", 1997.

[42] S. Kawato and J. Ohya, "Real-Time Detection of Nodding and Head-Shaking by Directly Detecting and Tracking the 'Between-Eyes'", 2000.

[43] H. P. Graf, E. Cosatto, and T. Ezzat, "Face Analysis for Synthesis of Photo-Realistic Talking Heads".

[44] B. E. Shpungin and J. R. Movellan, "A Multi-Threaded Approach to Real Time Face Tracking", 2000.

[45] K. Sobottka and I. Pitas, "Face Localization and Facial Feature Extraction Based on Shape and Color Information", 1996.

[46] J. Yang and A. Waibel, "A Real Time Face Tracker", 1996.

[47] Q. B. Sun, W. M. Huang, and J. K. Wu, "Face Detection Based on Color and Local Symmetry Information".

[48] K. Yachi, T. Wada, and T. Matsuyama, "Human Head Tracking using Adaptive Appearance Models with a Fixed-Viewpoint Pan-Tilt-Zoom Camera".

[49] L. L. Yang and M. A. Robertson, "Multiple-Face Tracking System for General Region of Interest Video Coding", 2000.

[50] E. Saber and A. M. Tekalp, "Frontal-View Face Detection and Facial Feature Extraction using Color, Shape, and Symmetry Based Cost Functions".

[51] N. Tsapatsoulis, Y. Avrithis, and S. Kollias, "Efficient Face Detection for Multimedia Applications".

[52] N. Oliver, A. Pentland, and F. Berard, "LAFTER: A Real Time Face and Lips Tracker with Facial Expression Recognition".

[53] M. Abdel-Mottaleb and A. Elgammal, "Face Detection in Complex Environments from Color Images", 1999.

[54] H. P. Graf, E. Cosatto, D. Gibson, E. Petajan, and M. Kocheisen, "Multi-modal system for locating heads and faces", in IEEE Proc. of 2nd Int. Conf. on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 277-282.

[55] C. Kotropoulos and I. Pitas, "Rule-based face detection in frontal views", in Proc. Int. Conf. on Acoustics, Speech and Signal Processing, 1997.

[56] K. M. Lam and H. Yan, "Facial feature location and extraction for computerised human face recognition", in Int. Symposium on Information Theory and Its Applications, Sydney, Australia, Nov. 1994.

[57] X.-G. Lv, J. Zhou, and C.-S. Zhang, "A novel algorithm for rotated human face detection", in IEEE Conference on Computer Vision and Pattern Recognition, 2000.

[58] G. Yang and T. S. Huang, "Human face detection in a complex background", Pattern Recog. 27, 1994, pp. 53-63.

[59] L. G. Farkas and I. R. Munro, Anthropometric Facial Proportions in Medicine, Charles C. Thomas, Springfield, IL, 1987.

[60] V. Govindaraju, "Locating human faces in photographs", Int. J. Comput. Vision 19, 1996.

[61] H. Graf, E. Cosatto, and T. Ezzat, "Face analysis for the synthesis of photo-realistic talking heads", in Proc. Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000.

[62] H. P. Graf, T. Chen, E. Petajan, and E. Cosatto, "Locating faces and facial parts", in IEEE Proc. of Int. Workshop on Automatic Face- and Gesture-Recognition, Zurich, Switzerland, June 1995, pp. 41-45.

[63] Sudipta Roy and Prof. Samir K. Bandyopadhyay, "Detection and Quantification of Brain Tumor from MRI of Brain and its Symmetric Analysis", International Journal of Information and Communication Technology Research (IJICTR), Volume 2, Number 6, June 2012, pp. 477-483.

[64] H. P. Graf, E. Cosatto, D. Gibson, E. Petajan, and M. Kocheisen, "Multi-modal system for locating heads and faces", in IEEE Proc. of 2nd Int. Conf. on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 277-282.

[65] Sudipta Roy and Prof. Samir K. Bandyopadhyay, "Contour Detection of Human Knee", International Journal of Computer Science Engineering and Technology (IJCSIT), September 2011, Vol. 1, Issue 8, pp. 484-487.

[66] Q. Gu and S. Z. Li, "Combining feature optimization into neural network based face detection", in Proceedings of the 15th International Conference on Pattern Recognition, 2000, Vol. II, p. 4A.

[67] S. R. Gunn and M. S. Nixon, "A dual active contour for head and boundary extraction", in IEE Colloquium on Image Processing for Biometric Measurement, London, Apr. 1994, pp. 6/1.

[68] Prof. Samir K. Bandyopadhyay and Sudipta Roy, "Detection of Sharp Contour of the Element of the WBC and Segmentation of Two Leading Elements like Nucleus and Cytoplasm", International Journal of Engineering Research and Applications (IJERA), Vol. 2, Issue 1, Jan-Feb 2012, pp. 545-551.

[69] T. Sakai, M. Nagao, and T. Kanade, "Computer analysis and classification of photographs of human faces", in Proc. First USA-Japan Computer Conference, 1972, p. 2.7.

[70] D. Reisfeld, H. Wolfson, and Y. Yeshurun, "Context-free attentional operators: The generalized symmetry transform", Int. J. Comput. Vision 14, 1995, pp. 119-130.

[71] D. Reisfeld and Y. Yeshurun, "Robust detection of facial features by generalised symmetry", in Proc. of 11th Int. Conf. on Pattern Recognition, The Hague, The Netherlands, August 1992, pp. A117-120.

[72] N. Treisman, "Preattentive processing in vision", Comput. Vision, Graphics Image Process. 31, 1985, pp. 156-177.

[73] F. Werblin, A. Jacobs, and J. Teeters, "The computational eye", in IEEE Spectrum: Toward an Artificial Eye, May 1996, pp. 30-37.

[74] http://innovativejournal.in/index.php/ajcsit/article
