Real Time Smart Car Lock Security System Using Face Detection and Recognition

DOI : 10.17577/IJERTCONV6IS13117


Mahendra S M
Department of Electronics and Communication, Coorg Institute of Technology, Ponnampet, Karnataka, India

Goutham P K
Department of Electronics and Communication, Coorg Institute of Technology, Ponnampet, Karnataka, India

Darshan M Prabhu
Department of Electronics and Communication, Coorg Institute of Technology, Ponnampet, Karnataka, India

Archana B N
Department of Electronics and Communication, Coorg Institute of Technology, Ponnampet, Karnataka, India

Lahari S K
Department of Electronics and Communication, Coorg Institute of Technology, Ponnampet, Karnataka, India

Abstract: An improved face detection and recognition method based on skin color information is proposed in this paper. Color is a powerful fundamental cue of human faces. Skin color detection is first performed on the input color image to reduce the computational complexity, and morphological operations then supply prior knowledge for face detection. Faces are detected by the AdaBoost algorithm: AdaBoost learning chooses a small number of weak classifiers and combines them into a strong classifier that decides whether an image is a face or not. Then, using the principal component analysis (PCA) algorithm, a specific face is recognized by comparing the principal components of the current face with those of the known individuals in a facial database built in advance.

Keywords: PCA; AdaBoost; morphological operations.

  1. INTRODUCTION

    Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication. Face detection and recognition technology [5, 8] has been widely discussed in relation to computer vision and pattern recognition. Numerous different techniques have been developed owing to the growing number of real world applications. Biometrics consists of methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits.

    Face detection is a method to find whether or not there are any faces in a given image (usually in gray scale) and, if present, return the image location and content of each face. This is the first step of any fully automatic system that analyzes the information contained in faces (e.g., identity, gender, expression, age, race and pose). Face detection is a type of object class detection in which the locations and sizes of all objects in an image that belong to a given class are found.

    Face detection is the general case of face localization, in which the locations and sizes of a known number of faces (usually one) are determined. While earlier work dealt mainly with upright frontal faces, several systems have since been developed that detect faces fairly accurately in real time, even with in-plane or out-of-plane rotations. Although a face detection module is typically designed to deal with single images, its performance can be further improved if a video stream is available. However, face detection is not straightforward, because faces exhibit many variations in image appearance, such as pose (frontal, non-frontal), occlusion, image orientation, illumination conditions and facial expression.

    Various approaches have been proposed for face detection; they can be broadly classified into four categories: (i) template matching methods, (ii) feature-based methods, (iii) knowledge-based methods [1], and (iv) machine learning methods. In template matching methods, the final decision comes from the similarity between the input image and a template; they are scale-dependent, rotation-dependent and computationally expensive. Feature-based methods use low-level features such as gray level [2], color [3, 4], edge, shape [3, 4] and texture to locate facial features and, from these, the face location. Knowledge-based methods [5] detect, for example, an isosceles triangle of facial features (for a frontal view) or a right triangle (for a side view). Machine learning methods use a large number of training samples to make the machine capable of judging face versus non-face. Despite the notable successes achieved in the past decades, the main challenge remains the tradeoff between computational complexity and detection efficiency.

    2D face recognition remains sensitive to variations such as pose, illumination, expression, makeup and age. In order to overcome these problems, 3D face detection and recognition methods have developed rapidly in recent years [6]. Bronstein et al. presented a recognition framework based on 3D geometric invariants of the human face [14]. Wang et al. described a real-time algorithm based on fisherfaces [10].

    Although these 3D methods, which emphasize the shape of the human face, are robust under variable environments, they overlook the texture information of the face. Therefore, to achieve better efficiency, face data should be used fully, and both 2D and 3D face information should be considered [1-3].

    The rest of this paper is organized as follows. Section 2 gives an overview of the proposed method. The proposed face detection and recognition algorithms are presented in Sections 3 and 4. Finally, experimental results and conclusions are drawn in Sections 5 and 6.


  2. OVERVIEW OF THE PROPOSED METHOD

    In this paper, the face detection concept is applied to a real-time car theft detection application (biometrics) [2]. The face detection architecture is designed using skin color information together with the AdaBoost algorithm. The application flow is as follows:

    A webcam is placed in the car door. Video frames are recorded, the face of the person trying to unlock the car is detected using the face detection algorithm, and it is then recognized using the face recognition algorithm (PCA). If the person is not an authorized user, the car door does not open and the system immediately informs the authorized person of the theft attempt by sending a warning message to his/her mobile through GSM [5]. Otherwise, the car door opens, allowing authorized access. GPS technology is used to track the current position of the stolen vehicle.
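    The flow above can be sketched as a small decision routine. Everything hardware-specific here (the set of authorized identities, `send_alert`, `unlock`) is a hypothetical stub standing in for the GSM, GPS and door-lock modules described in the text:

```python
# Sketch of the smart-lock decision flow. The recognizer output is
# assumed to be an identity string; hardware hooks are injected as
# plain callables so the logic stays testable without a car door.

AUTHORIZED = {"owner"}  # identities allowed to unlock (assumed set)

def decide_action(identity):
    """Unlock for an authorized identity, raise an alert otherwise."""
    return "unlock" if identity in AUTHORIZED else "alert"

def handle_frame(identity, send_alert, unlock):
    """Dispatch one recognition result to the door or the alert path."""
    action = decide_action(identity)
    if action == "unlock":
        unlock()  # actuate the door lock
    else:
        # in the real system this SMS would also carry the GPS position
        send_alert("Unauthorized access attempt at vehicle")
    return action
```

    In the full system, `identity` would come from the PCA recognizer applied to the face detected in each webcam frame.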

    Fig. 1. Smart lock car security system


  3. FACE DETECTION

    Color is a powerful fundamental cue of human faces. The distribution of skin colors clusters in a small region of the chromatic color space, and processing color is faster than processing other facial features. Therefore, skin color detection is first performed on the input color image to reduce the computational complexity. Because the accuracy of skin color detection affects the result of the whole face detection system, choosing a suitable color space for skin color detection is very important.

    1. Color Spaces

      Color information is an effective feature widely used in image processing and analysis. There are many color spaces, including the commonly used RGB; LUV, LAB, XYZ and YUV for color coding; YIQ, HSV and HSI for computer graphics; GLHS (generalized LHS); and so on. However, the RGB color space is not suitable for constructing accurate skin color models because of the high correlation between its three components.

      To improve the performance of skin color clustering, the YCbCr space is used to build the skin color model, since its chrominance components are almost independent of the luminance component. There are non-linear relations between the chrominance (Cb, Cr) and luminance (Y) of skin color in the high- and low-luminance regions, so many existing skin color models operate only on the Cb-Cr chrominance plane. In the detection process, each pixel is classified as either skin or non-skin based on its color components, using skin and non-skin pixels gathered from a large number of skin color samples and non-skin samples drawn from the backgrounds of the skin regions.
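      As a concrete sketch, the Cb-Cr classification described above can be written as follows. The BT.601 conversion equations are standard, but the rectangular Cb/Cr threshold ranges used here are one commonly cited choice, not the skin color model trained in this paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array (0-255 range) to Y, Cb, Cr planes
    using the ITU-R BT.601 full-range equations."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify each pixel as skin/non-skin on the Cb-Cr plane only,
    ignoring luminance Y as the text describes. The threshold
    rectangle is an illustrative, commonly used choice."""
    _, cb, cr = rgb_to_ycbcr(np.asarray(rgb, dtype=np.float64))
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))
```

      The resulting boolean mask is the segmentation map that the morphological operations below then clean up.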

    2. Morphological Operations

      It is effective to use morphological operations and prior knowledge for face detection. Morphological operations can simplify image data while preserving their essential shape characteristics and can eliminate irrelevancies (Haralick and Shapiro, 1993), so they help derive a more accurate contour of the skin segment.

      The segmented face region contains holes corresponding to the eyes, nose and mouth. The face region is therefore filled by applying the morphological dilation operation with a 3-by-3 structuring element several times (15-20 here), followed by the same number of erosion operations using the same structuring element. The dilations fill the holes, and the erosions restore the shape of the face.
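      A minimal sketch of this closing operation (n dilations followed by n erosions with a 3-by-3 structuring element); zero padding at the image border is an implementation choice, not part of the paper's description:

```python
import numpy as np

def dilate3x3(mask):
    """One binary dilation with a 3x3 structuring element (zero padding)."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):          # OR over the 3x3 neighborhood
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode3x3(mask):
    """One binary erosion with a 3x3 structuring element (zero padding)."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):          # AND over the 3x3 neighborhood
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def close_mask(mask, iterations=15):
    """Morphological closing: n dilations then n erosions, filling
    eye/nose/mouth holes while restoring the face outline."""
    m = mask.astype(bool)
    for _ in range(iterations):
        m = dilate3x3(m)
    for _ in range(iterations):
        m = erode3x3(m)
    return m
```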

    3. Face Detection By Adaboost

      The totally corrective algorithm was applied to the face detection problem using the framework introduced by Viola and Jones [7]. To train a classifier, Viola and Jones select from a large number of very efficiently computable features. Each weak classifier implements a simple threshold function on one of the features. Given such a large set of weak classifiers, AdaBoost learning is used to choose a small number of them and to combine them into a classifier deciding whether an image is a face or a non-face.

    4. Haar Like Feature And Integral Image

      The face detection algorithm deals with Haar-like features of the human face. Simple features are used because, unlike raw pixel statistics, features can represent both statistically close facial information and sparsely related background data. In their simplest form, the features can be thought of as pixel intensity set evaluations: the sum of the luminance of the pixels in the white region of the feature is subtracted from the sum of the luminance in the remaining gray section. The difference is used as the feature value, and feature values can be combined to form a weak hypothesis on regions of the image. In the implementation, four of the Haar-like features are chosen: the first with a horizontal division, the second with a vertical division, the third containing two vertical divisions, and the last containing both horizontal and vertical divisions.

      An integral image is defined at each pixel as the sum of all pixel values above and to the left of it, including the pixel itself. From the computed integral image, the Haar-like features can be calculated efficiently.
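      The integral image and one Haar-like feature (a single vertical division) can be sketched as follows; any rectangle sum then costs only four table lookups:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that
    rect_sum below needs no boundary special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (y, x),
    computed from four lookups in the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Haar-like feature with one vertical division: luminance sum of
    the left half minus that of the right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```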

      Fig. 2. Integral image generation: (a) 3 × 3 image, (b) integral image representation

    5. Adaboost Algorithm

      AdaBoost is an algorithm for constructing a strong classifier as a linear combination of weak classifiers h_t(x), each of which is based on a single feature.

      • Given example images (x_1, y_1), ..., (x_L, y_L), where y_i ∈ {0, 1} indicates a negative or positive example; g_j(x_i) is the j-th Haar-like feature of the i-th example x_i.

      • Initialize the weights w_{1,i} = 0.5/m if i <= m and w_{1,i} = 0.5/n otherwise, where m and n are the numbers of positive and negative examples respectively, L = m + n.

      • For t = 1, ..., T:

        Normalize the weights: w_{t,i} ← w_{t,i} / Σ_j w_{t,j}.

        For each feature j, train a weak classifier h_j and evaluate its error with respect to w_t: ε_j = Σ_i w_{t,i} |h_j(x_i) − y_i|, where h_j(x) = 1 if p_j g_j(x) < p_j θ_j and 0 otherwise; p_j ∈ {+1, −1} is a parity bit and θ_j is a threshold.

        Choose the classifier h_t with the lowest error ε_t and update the weights: w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where e_i = 0 if example x_i is classified correctly, e_i = 1 otherwise, and β_t = ε_t / (1 − ε_t).

      • Final strong classifier: H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t and 0 otherwise, where α_t = log(1/β_t).
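      The listing above can be sketched directly in NumPy. The exhaustive search over candidate thresholds and the small guard against zero error are implementation choices for this sketch, not part of the paper's description:

```python
import numpy as np

def train_adaboost(feats, labels, T):
    """Discrete AdaBoost over threshold weak classifiers, following the
    listing above. feats is an (L, J) array of feature values g_j(x_i);
    labels is a length-L 0/1 array. Returns (j, parity, theta, alpha)
    tuples, one per boosting round."""
    L, J = feats.shape
    m = int(labels.sum())                   # number of positives
    n = L - m                               # number of negatives
    w = np.where(labels == 1, 0.5 / m, 0.5 / n).astype(float)
    model = []
    for _ in range(T):
        w = w / w.sum()                     # normalize the weights
        best = None
        for j in range(J):                  # exhaustive weak-learner search
            for theta in feats[:, j]:       # candidate thresholds
                for p in (1, -1):
                    h = (p * feats[:, j] < p * theta).astype(int)
                    err = float(np.sum(w * np.abs(h - labels)))
                    if best is None or err < best[0]:
                        best = (err, j, p, float(theta))
        err, j, p, theta = best
        beta = max(err, 1e-10) / (1.0 - err)  # guard against err = 0
        h = (p * feats[:, j] < p * theta).astype(int)
        e = (h != labels).astype(int)         # e_i = 0 iff correct
        w = w * beta ** (1 - e)
        model.append((j, p, theta, float(np.log(1.0 / beta))))
    return model

def strong_classify(model, x):
    """H(x) = 1 iff the alpha-weighted vote reaches half the total alpha."""
    score = sum(a for (j, p, th, a) in model if p * x[j] < p * th)
    total = sum(a for (_, _, _, a) in model)
    return int(score >= 0.5 * total)
```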

    6. Cascaded Detector

      AdaBoost can cope with very large sets of weak classifiers owing to its greedy character. However, for face detection a very large training set has to be explored as well. To greatly improve computational efficiency and also reduce the false positive rate, a sequence of gradually more complex classifiers, called a cascade, is built.

      Fig. 3. Cascade of stages. Candidate must pass all stages in the cascade to be concluded as a face

      An image window (region) is passed to the first classifier, which either rejects it as non-face or defers the decision and passes the window to the second classifier, and so on. The goal of each classifier is to prune the candidate set for the next stage of the cascade. Since easily recognizable non-face windows are rejected in the early stages, classifiers in the later stages can be trained rapidly on only the harder, but smaller, part of the non-face training set. The stages of the cascade are constructed by training classifiers using AdaBoost.
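      The early-exit behaviour of the cascade can be sketched as follows, with each stage represented by a hypothetical boolean classifier:

```python
def cascade_classify(stages, window):
    """Evaluate a cascade: the window must pass every stage to be
    accepted as a face; any stage may reject it early, so the
    remaining (more complex) stages never run."""
    for stage in stages:
        if not stage(window):
            return False   # rejected as non-face at this stage
    return True            # passed all stages: face

def count_evaluations(stages, window):
    """Helper showing the early-exit saving: stages actually run."""
    for i, stage in enumerate(stages, start=1):
        if not stage(window):
            return i
    return len(stages)
```

      Easy non-faces cost only one stage evaluation, which is where the cascade's speed comes from.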

      Fig. 4. Block diagram of face detection process


  4. FACE RECOGNITION USING PCA

    The PCA algorithm is based on the K-L transform, a useful orthogonal transformation [5]. After the K-L transform, an image can be dimensionally reduced to a point in a feature subspace. Any face image can be projected into this feature subspace to obtain a set of coordinate coefficients, which can be used as a basis for face recognition. The feature subspace is also known as the eigenface space, hence the method is also known as the eigenface method.

    Using the PCA algorithm, a specific face can be recognized by comparing the principal components of the current face with those of the known individuals in a facial database built in advance. The detailed procedure of the PCA algorithm is described below [3].

      • First, build a training database of human faces.

      • Second, represent each image in the database as a vector, calculate the average face vector, and subtract the average face vector from each face vector.

      • Third, calculate the eigenface vectors and the eigenface space, and project the training faces into the eigenface space to obtain their coordinate coefficients.

      • Fourth, project the test face image into the eigenface space and obtain its coordinate coefficients.

      • Finally, calculate the Euclidean distance between the coordinate coefficients of the test image and those of each image in the database; the test image is classified by the nearest distance.
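      A small-scale sketch of these five steps, using SVD of the centered data matrix to obtain the eigenfaces (a standard, equivalent alternative to an explicit covariance eigen-decomposition):

```python
import numpy as np

def build_eigenface_space(train, k):
    """Steps 1-3: flatten the training faces, subtract the mean face,
    and get the top-k eigenface basis via SVD of the centered data.
    train is an (N, H, W) array; returns (mean, basis, coords)."""
    X = train.reshape(len(train), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Right singular vectors of Xc are eigenvectors of the covariance.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    basis = vt[:k]                 # (k, H*W) eigenfaces
    coords = Xc @ basis.T          # (N, k) training coordinates
    return mean, basis, coords

def recognize(face, mean, basis, coords):
    """Steps 4-5: project the test face into the eigenface space and
    return the index of the nearest training face by Euclidean
    distance between coordinate coefficients."""
    c = (face.reshape(-1).astype(float) - mean) @ basis.T
    d = np.linalg.norm(coords - c, axis=1)
    return int(np.argmin(d))
```

      In the smart-lock system, the returned index would be looked up against the authorized-user database.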

  5. EXPERIMENTAL RESULTS AND ANALYSIS

    To test the performance of the algorithm, the MIT-CBCL face database is used to train the AdaBoost algorithm. The MIT-CBCL face database contains 2429 face samples of size 19 × 19; 2000 of these face samples are taken as the training set. Training generates 12960 rectangular features, and the strong classifier is composed of 100 weak classifiers.

    Fig. 5. Sample Database images

    Fig. 6. Results of skin color segmentation: (a) input image, (b) RGB components, (c) YCbCr color converted image, (d) segmentation map

    Fig. 7. Results of morphological operations: (a) smoothed image, (b) binary image, (c) dilated image, (d) eroded image (region after dilation and erosion operations)

    Fig. 8. Face detection results: (a) morphologically processed image, (b) detected face

    Fig. 9. Adaboost testing and training graph

    Table 1 shows the experimental results of the different algorithms. As can be seen from the table, face detection based on skin color has higher detection accuracy, while the general AdaBoost face detection algorithm has a low false detection rate. Combining the advantages of the two approaches improves detection accuracy while reducing detection errors.



    Table 1. Face detection results of different algorithms

    Method | Number of test faces | Number of correct face detections | Detection accuracy
    Skin color-based face detection | - | - | -
    General AdaBoost algorithm | - | - | -

  6. CONCLUSION

    A face detection method based on cost-sensitive AdaBoost has been presented in this paper. The improved AdaBoost with skin color detection achieves more robust performance and higher speed than conventional AdaBoost-based methods. Comparative results on the test sets demonstrate the effectiveness of the algorithm. Future work is to implement the face detection and recognition in an FPGA and in the smart car lock security system.


  REFERENCES

  1. S. Kosov, K. Scherbaum, K. Faber, T. Thormählen, and H.-P. Seidel, "Rapid stereo-vision enhanced face detection," in Proc. IEEE International Conference on Image Processing, 2009, pp. 1221-1224.

  2. S. Kosov, T. Thormählen, and H.-P. Seidel, "Accurate real-time disparity estimation with variational methods," in Proc. International Symposium on Visual Computing, 2009, pp. 796-807.

  3. T.-H. Sun, M. Chen, S. Lo, and F.-C. Tien, "Face recognition using 2D and disparity eigenface," Expert Systems with Applications, vol. 33, no. 2, 2007, pp. 265-273.

  4. R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," LNCS 2781, Springer-Verlag Berlin Heidelberg, 2003, pp. 297-304.

  5. K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition," Computer Vision and Image Understanding, vol. 101, 2006, pp. 1-15.

  6. F. Tsalakanidou and D. Tzovaras, "Use of depth and colour eigenfaces for face recognition," Pattern Recognition Letters, vol. 24, 2003, pp. 1427-1435.

  7. Y. Ming and Q. Ruan, "Face stereo matching and disparity calculation in binocular vision system," in Proc. 2nd International Conference on Industrial and Information Systems, 2010, pp. 281-284.

  8. A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: A survey," Pattern Recognition Letters, vol. 28, 2007, pp. 1885-1906.

  9. P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Computer Vision and Pattern Recognition, 2001, pp. 511-514.

  10. J.-G. Wang, E. T. Lim, X. Chen, and R. Venkateswarlu, "Real-time stereo face recognition by fusing appearance and depth fisherfaces," Journal of VLSI Signal Processing Systems, vol. 49, no. 3, 2007.
