Estimation of Age Groups based on Facial Features

DOI : 10.17577/IJERTV7IS070012


Rahul Kumar

(M. Tech Student)

Dept. of Computer Science and Engineering ABES Engineering College

Ghaziabad, Uttar Pradesh-201306

Akhilesh Kumar Srivastava

Dept. of Computer Science and Engineering ABES Engineering College

Ghaziabad, Uttar Pradesh-201306

Deepak Kumar Agarwal

Dept. of Computer Science and Engineering ABES Engineering College

Ghaziabad, Uttar Pradesh-201306

Abstract: The recognition of many facial attributes, such as identity, expression, and gender, has been widely studied. Age estimation and the prediction of future faces have been investigated from time to time. As a human being ages, certain facial features change. This paper presents a procedure for estimating age group from facial features. The procedure includes three phases: localization, feature extraction, and classification. Geometric features of facial images, such as wrinkle topography, face angle, the distance between the left and right eyes, and the eye-to-nose, eye-to-chin, and eye-to-lip distances, are computed. Based on texture and shape information, classification into age groups is performed using the K-means clustering algorithm; the age ranges are assigned dynamically depending on the number of clusters. The results obtained were significant. This work can also be extended to predict future faces, classify gender, and recognize expressions in facial images.

Keywords: Age estimation, eyeball recognition, face detection, wrinkle features.


People can be identified by studying the characteristics of their faces. The process of studying the characteristics of a face is known as facial recognition. It is one of the most important biometric methods in use today. Biometric methods are significantly more advantageous than conventional authentication strategies because biometric features are unique to each individual. Individual verification and identification remain active areas of research. The most commonly used biometric modalities are the face, voice, fingerprint, ear, iris, and retina, and research in these areas has been conducted over the last two decades. Conventionally, facial recognition is used for identification in many domains: verifying documents such as land records, travel documents, and driving licenses, and recognizing human beings in a range of security applications. Facial images are increasingly used as an additional method of verification in high-security applications. As an individual ages, facial characteristics change, so the database must be updated routinely, which is a difficult task. We therefore face the problem of facial aging and must try to develop a mechanism that recognizes a person with high accuracy. In this paper we propose an effective age-group estimation method using facial attributes such as the texture and shape of the human face image.

For better performance, geometric features of the facial image are computed, such as wrinkle topography, face angle, left-to-right eye distance, eye-to-nose distance, eye-to-chin distance, and eye-to-lip distance. Based on the texture and shape data, age classification is performed using the K-means clustering algorithm. Age ranges are assigned dynamically depending on the number of clusters [1].

Human facial image processing has been an active and interesting research topic for a long time. Since human faces convey a considerable amount of information, many related topics have drawn a lot of attention and have been studied intensively. The most prominent of these is face recognition [3]. Other research topics include facial features [4], reconstructing faces from prescribed features [5], classifying gender, race, and expression from facial images [6], and so on. On the other hand, very few studies have addressed age classification. Kwon and Lobo [8] initially dealt with the age classification problem. They drew on craniofacial research, theatrical makeup, plastic surgery, and perception studies to determine which features change with age. They classified gray-scale facial images into three age groups: babies, young adults, and senior adults. First, they applied deformable templates [9] and snakes [7] to locate primary features (such as the eyes, nose, and mouth) in a facial image, and judged whether it was a baby from the distances between these primary features. Then they used snakes to locate wrinkles in particular areas of the face to decide whether the facial image was young or old. Kwon and Lobo reported that their results were promising. However, their data set included only 47 images, and the baby identification rate was below 68%. Moreover, since the methods they used for localization, such as deformable templates and snakes, are computationally expensive, the system may not be suitable for real-time processing.


Traditional face recognition incorporates various methods such as eigenfaces or principal component analysis (PCA) and fisherfaces or linear discriminant analysis (LDA) [10], [11]. These strategies extract facial features from an image and then search the face database for images with matching features. The skin texture analysis method [3], [4] uses the visual details of the skin, as captured in standard digital or scanned images, and turns the unique lines, patterns, and spots apparent in a person's skin into a mathematical space. There are two fundamental reasons for studying ageing effects in human-computer interaction: (1) automatically estimating age from a face image, and (2) automatic age progression for face recognition. A framework was developed in [5] to classify face images into one of three age groups: babies, young adults, and senior adults. In that work, key landmarks were extracted from face images and the distances between those landmarks were calculated; ratios of those distances were then used to classify face images as those of newborn children or adults. The work also proposes a method for wrinkle detection in predetermined regions of face images to further classify adult images into young adults and senior adults. The first genuine human age estimation theory was proposed in [15], [16], which used an ageing function (a quadratic function) based on a parametric model of face images and performed tasks such as automatic age estimation and face recognition across age progression. The 3-D method uses 3-D sensors to capture data about the shape of a face [17], [18]. This data is then used to identify distinctive features on the surface of the face, such as the eye sockets, nose, and chin. This method is robust to changes in lighting and viewing angle.

[19], [20] developed a Bayesian age-difference classifier that classifies face images of people based on age differences and performs face verification across age progression. They used coordinate transformations and distortion of local facial feature points. However, males and females may have different face ageing patterns depending on environmental effects. The AGES (AGing pattern Subspace) technique for automatic age estimation is proposed in [21]. It models the ageing pattern in a 2-D subspace and then, for an unseen face image, reconstructs the face and estimates the age. A 3-D ageing modelling system that automatically generates missing images in different age groups is proposed in [13]. Feature-extraction-based face recognition, gender classification, and age classification are proposed in [23], [24]. [25], [26] observed that the frontal face view forms an isosceles triangle joining the two eyes and the mouth. This isosceles triangle is very helpful for face recognition and for estimating the age range. The face triangle is unique for each individual, and it can be used for face recognition across age.

To estimate the global facial features for age, the Active Appearance Model (AAM) is applied. The AAM is a generative parametric model that captures both the shape and appearance of a human face; it builds on principal component analysis (PCA) and can synthesize different instances using only a small number of parameters. For this reason, AAMs have been widely used for extracting facial feature points. The AAM, which is an extension of the Active Shape Model, finds feature points using an improved least-mean-squares method. A support vector machine is then applied to construct a hyperplane that, from the classifier's output, labels the individual as juvenile or adult. Two separate ageing functions were developed and used to estimate age, as proposed by K. Luu et al. [27] and Choi et al. [32]. The system proposed by K. Ricanek et al. [28] can be considered an extension of K. Luu et al. [27], with the difference that the LAR (Least Angle Regression) strategy is used to improve the accuracy of detecting the characteristic points used by the AAM. In the LAR strategy, every coefficient is initially set to 0; then, starting from the characteristic point X1, LAR repeatedly moves towards the least-mean-square estimate until convergence. Global features, such as distance, angle, and ratio, are also considered for age-group classification. Merve Kilinc [29] used another system to classify age groups with a classifier that combines geometric and texture features.

The outputs of the individual classifiers are combined to produce the estimated age. Related studies show that the best performance is achieved by combining local Gabor binary patterns with geometric features. From the geometric features, the cross ratio is computed, which is the ratio of distances between facial features such as the nose tip, forehead, and mouth. The geometric properties of faces, described by a set of landmark points on the face, are also considered in age estimation. Affine transformations are used to estimate changes in the subjects' pose, and the resulting subspaces can be treated as points on a Grassmann manifold. Warping an average face to a given face is evaluated as a velocity vector that transforms the average into the given image in unit time; a Euclidean spatial regression strategy is then applied. This article presents a technique for estimating age groups using facial features. The technique depends on the face triangle, whose three vertices are the left eyeball, the right eyeball, and the midpoint of the mouth. The face angle formed at the left eyeball, the tip of the mouth, and the right eyeball is used to estimate the age of a human being. In human studies, it works well for ages 18 to 60, as reported by P. Turaga et al. [30] and R. Jana et al. [31].

Choi et al. [32] examined age estimation using a hierarchical classification of age characteristics in order to improve overall performance. For feature extraction, they discussed local, global, and hierarchical features. Among the local features, wrinkle, skin, hair, and geometric components are extracted using the Sobel filter; for the global features, the AAM technique and Gabor wavelet transform methods are used. The hierarchical levels are a mixture of the local and global features. In the proposed model they used a Gabor filter to extract wrinkles and the LBP method for skin identification, which improves the performance of age estimation from local features.

C. T. Lin et al. [33] estimated age from global facial features based on a combination of Gabor wavelets and orthogonal locality preserving projections. The Gabor wavelet transform is used to improve the effectiveness of SVM training. Hu Han et al. [34] examined face preprocessing, facial component localization, feature extraction, and hierarchical age estimation. They use an SVM-BDT (binary decision tree) for age classification, and a separate SVM age regressor is then trained to predict the final age.


In this section, a few representations, definitions, and methods of digital image processing used in this paper will be discussed briefly. Readers who want more detailed information may refer to conventional image processing books [3], [4].

Let N represent the set of natural numbers, (p,q) the two-dimensional coordinates of a digitized image, and R the set of gray levels, i.e. R = {0, 1, ..., l-1}. The image function can then be defined as the mapping

f: N × N → R

The brightness of the pixel with coordinates (p,q) is denoted f(p,q). The origin is located at the upper left corner of the image, with the p-axis horizontal and the q-axis vertical.

Let th ∈ R be a threshold and B = {b0, b1} a pair of binary gray levels, with b0, b1 ∈ R. The result of applying the threshold th to an image function f(p,q) is the binary image function

f_th: N × N → B

such that f_th(p,q) = b0 if f(p,q) < th, and b1 otherwise.
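As a minimal sketch, this thresholding map can be written directly with NumPy (the image values and the threshold below are made up for illustration):

```python
import numpy as np

def threshold_image(f, th, b0=0, b1=255):
    """Binary image: f_th(p,q) = b0 if f(p,q) < th, else b1."""
    return np.where(f < th, b0, b1).astype(np.uint8)

# Toy 2x3 image with gray levels in 0..255 (illustrative values).
f = np.array([[10, 200, 90],
              [130, 40, 250]], dtype=np.uint8)
print(threshold_image(f, th=100))
```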

The gray-level histogram of an image function f(p,q) with gray levels in R is a discrete function

h: R → N

such that h(k) = n_k, where k ∈ R and n_k is the number of pixels in the image with gray level k.
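The histogram as defined above can be computed in a few lines (the toy image values are made up for illustration):

```python
import numpy as np

def gray_histogram(f, levels):
    """h(k) = number of pixels in f with gray level k, for k in {0, ..., levels-1}."""
    h = np.zeros(levels, dtype=int)
    for k in f.ravel():
        h[k] += 1
    return h

f = np.array([[0, 1, 1],
              [3, 1, 0]], dtype=np.uint8)
print(gray_histogram(f, levels=4))  # [2 3 0 1]
```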

Let C = {c0, c1, ..., cm} be a subset of R, where m < l, ci ∈ R for i = 0, 1, ..., m, and c(j+1) - cj = 1 for j = 0, 1, ..., m-1. That is, [c0, cm] is a subrange of [0, l-1].

    Range normalization of an image function f (p,q) on C is a mapping

g: N × N → C

such that

g(p,q) = ((f(p,q) - f_min) / (f_max - f_min)) · (c_m - c_0) + c_0

where f_max and f_min denote the maximum and minimum gray levels of the image function f(p,q).
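A sketch of this range normalization under the usual min-max convention (the test image values below are made up):

```python
import numpy as np

def range_normalize(f, c0, cm):
    """g(p,q) = (f(p,q) - f_min) / (f_max - f_min) * (cm - c0) + c0."""
    f = f.astype(float)
    fmin, fmax = f.min(), f.max()
    return (f - fmin) / (fmax - fmin) * (cm - c0) + c0

f = np.array([[50, 100],
              [150, 200]])
g = range_normalize(f, c0=0, cm=255)
print(g)
```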

    The horizontal projection of a binary image function is a discrete function

p: N → N

such that p(y) = n_y, where y is a y-axis coordinate and n_y is the number of pixels with gray level b0 in row y of the binary image.
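The horizontal projection can be sketched as a per-row count of b0 pixels (here b0 = 0, an assumption for illustration):

```python
import numpy as np

def horizontal_projection(binary, b0=0):
    """p(y) = number of pixels with gray level b0 in row y of the binary image."""
    return (binary == b0).sum(axis=1)

img = np.array([[0, 255, 0],
                [255, 255, 255],
                [0, 0, 255]], dtype=np.uint8)
print(horizontal_projection(img))  # [2 0 2]
```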

A smoothing by an equally weighted moving average of a horizontal projection p(y) is a transformation

q: N → N

such that

q(y) = (1 / (2R + 1)) · Σ p(y + i),  i = -R, ..., R

where R > 0 is the smoothing range.
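The moving-average smoothing can be sketched as follows; the boundary handling (clamping out-of-range indices to the ends) is an assumption, since the paper does not specify it:

```python
def smooth_projection(p, R):
    """q(y) = (1/(2R+1)) * sum of p(y+i) for i = -R..R.
    Out-of-range indices are clamped to the ends (an assumed boundary rule)."""
    n = len(p)
    q = []
    for y in range(n):
        window = [p[min(max(y + i, 0), n - 1)] for i in range(-R, R + 1)]
        q.append(sum(window) / (2 * R + 1))
    return q

print(smooth_projection([0, 3, 6, 3, 0], R=1))  # [1.0, 3.0, 4.0, 3.0, 1.0]
```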

The gradient of an image function f(p,q) at coordinates (p,q) is defined as the vector ∇f(p,q) = (∂f/∂p, ∂f/∂q). The property of the Sobel operators is that they approximate the magnitude of this gradient. Using the two masks we can find edges in both the horizontal and vertical directions. The standard masks, which are convolved with the image function f(p,q), are

Gp = [-1 0 1; -2 0 2; -1 0 1]  and  Gq = [-1 -2 -1; 0 0 0; 1 2 1].
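A direct, unoptimized sketch of the Sobel magnitude computation using these masks (border pixels are left at zero, an assumption for illustration):

```python
import numpy as np

# Standard 3x3 Sobel masks (horizontal and vertical derivatives).
GP = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GQ = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def sobel_magnitude(f):
    """Gradient-magnitude approximation; border pixels are left at zero."""
    f = f.astype(float)
    h, w = f.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = f[y - 1:y + 2, x - 1:x + 2]
            gp = (GP * patch).sum()
            gq = (GQ * patch).sum()
            mag[y, x] = np.hypot(gp, gq)
    return mag

# A vertical step edge: the magnitude peaks along the edge.
f = np.array([[0, 0, 255, 255]] * 4, dtype=np.uint8)
m = sobel_magnitude(f)
print(m)
```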


In this section, the implementation of the age-group classification will be discussed. The implementation process consists of three phases, namely localization, feature extraction, and age classification, as described in Figure 1.

In the localization phase the Viola-Jones detection algorithm is used. In general, the Viola-Jones face detection algorithm is divided into three basic steps: feature extraction, boosting, and multi-scale detection. For classification, geometric and wrinkle features are used in the system. In the second phase, i.e. the feature extraction phase, geometric features are computed; these are defined as ratios of the distances between the eyes, nose, and mouth. To assess the degree of facial wrinkling, three distinctive wrinkle features are defined. Classification is performed using the K-means clustering algorithm.


According to the flowchart shown in Figure 1, the input image first passes through the localization phase, where we use the Viola-Jones algorithm. The Viola-Jones algorithm is based on scanning a sub-window that can recognize faces in a given input image. The standard image processing approach would be to rescale the input image to several sizes and then run a fixed-size detector over each of these images. This approach turns out to be rather time-consuming due to the processing of images at different sizes. In contrast to the standard approach, Viola and Jones rescale the detector instead of the input image and run the detector many times through the image, each time with a different size. At first one might suspect that both approaches are equally time-consuming; however, Viola and Jones devised a scale-invariant detector that requires the same number of calculations regardless of size. This detector is constructed using a so-called integral image and some simple rectangular features reminiscent of Haar wavelets. The next section explains this detector.

In general, the Viola-Jones face detection algorithm is divided into three basic steps: feature extraction, boosting, and multi-scale detection. We will discuss each of them in detail below.

    Figure 1. Process of the System


It is clear that features are extremely important to any object detection algorithm. For the purpose of face detection, many features can be utilized, such as the eyes, the nose, and the topology of the eyes and nose. In the Viola-Jones detector, extremely simple and direct features are used.

      Figure 2. Four basic features in Viola Jones Algorithm

      Figure 3. Calculation of Pixel sum within a rectangle

Figure 2 shows four different features that are calculated by the Viola-Jones algorithm. Each of these features is obtained by subtracting the sum over the white area from the sum over the black area, where the sum over an area means the sum of the gray values of all pixels within the rectangle. To calculate these features efficiently, an intermediate representation known as the integral image is used. Specifically, the value of the integral image at a position (x, y) is the sum of the pixel values above and to the left of (x, y). Figure 3 shows the resulting quick way to compute the sum of pixels within a rectangle: the value of the integral image at position 1 (V1) is the sum of the pixels in rectangle A; the value at position 2 (V2) is the sum over rectangles A and B; the value at position 3 (V3) is the sum over A and C; and the value at position 4 (V4) is the sum over A, B, C, and D. From this it is easy to obtain the sum of the pixels in rectangle D as V4 + V1 - V2 - V3. Using this principle, the sum of the pixels of any rectangle located anywhere can be obtained very efficiently.
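The integral image and the four-lookup rectangle sum can be sketched as follows (the `rect_sum` helper and its inclusive coordinate convention are illustrative, not from the paper):

```python
import numpy as np

def integral_image(f):
    """ii(y, x) = sum of all pixels above and to the left of (y, x), inclusive."""
    return f.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum over the rectangle [top..bottom] x [left..right] using the
    four-corner rule V4 + V1 - V2 - V3 (inclusive coordinates)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return int(total)

f = np.arange(16).reshape(4, 4)
ii = integral_image(f)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30
```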


The significance of boosting in the Viola-Jones face detection algorithm is the combination of numerous weak classifiers. This idea makes the learning process effective and well organized. In particular, boosting works as follows:

1. From a given data set, first train a simple, direct (weak) classifier and then identify the mistakes it makes.

2. Re-weight the data set so that the examples on which errors were made are emphasized.

3. Train a second weak classifier on the re-weighted data set.

4. Combine the first and second classifiers, re-evaluate all the data, and check where the combination still produces errors.

5. Continue learning in this way until T classifiers have been obtained.

6. The final classifier is the weighted combination of these T classifiers.
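The steps above can be sketched as a toy AdaBoost-style loop over threshold stumps; the data, the stump family, and the weight-update rule (the standard exponential re-weighting) are illustrative assumptions, not details given in the paper:

```python
import math

# Toy 1-D data: feature value, label in {-1, +1}.
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Y = [+1, +1, +1, -1, -1, +1]

def stump(theta, sign):
    """Weak classifier: predict `sign` if x < theta, else -sign."""
    return lambda x: sign if x < theta else -sign

candidates = [stump(t, s) for t in (1.5, 2.5, 3.5, 4.5, 5.5) for s in (+1, -1)]

def weighted_error(c, w):
    return sum(wi for xi, yi, wi in zip(X, Y, w) if c(xi) != yi)

def adaboost(T):
    w = [1.0 / len(X)] * len(X)              # step 1: start with uniform weights
    ensemble = []
    for _ in range(T):
        h = min(candidates, key=lambda c: weighted_error(c, w))
        err = max(weighted_error(h, w), 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # step 2: re-weight the data, emphasising the examples h got wrong
        w = [wi * math.exp(-alpha * yi * h(xi)) for xi, yi, wi in zip(X, Y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    # step 6: the final classifier is the weighted vote of the T classifiers
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

clf = adaboost(T=3)
print([clf(x) for x in X])
```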

Figure 4 shows the details of the boosting principle.

      Figure 4. Process of Boosting with 3 simple classifiers

3. MULTI-SCALE DETECTION ALGORITHM

One more step involved in the Viola-Jones face detection algorithm is multi-scale detection. Before performing face detection, we clearly have no idea of the size of the faces in an image. Hence, to detect faces of any size, multi-scale detection must be implemented. Learning and testing are based on rectangles, so it is necessary to evaluate the features at all the different scales.


One of the key issues for any classification framework is to find a set of reliable features as the basis for classification. In general these features can be categorized into two categories: wrinkle features and geometric features. Let us discuss each of them in detail.


One of the most important properties of wrinkle features is that they are strongly indicative of a person's age. The wrinkle feature F5 can be estimated as follows:

F5 = (number of wrinkle pixels in the forehead region / number of pixels in the forehead region) + (number of wrinkle pixels in the left-eyelid region / number of pixels in the left-eyelid region) + (number of wrinkle pixels in the right-eyelid region / number of pixels in the right-eyelid region) + (number of wrinkle pixels in the left-eye-corner region / number of pixels in the left-eye-corner region) + (number of wrinkle pixels in the right-eye-corner region / number of pixels in the right-eye-corner region).

F5 is thus estimated from grid regions of the facial image and depends entirely on the topography of wrinkles in the facial image.
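Using the definition of a wrinkle pixel given later (Sobel magnitude above a threshold), F5 can be sketched as a sum of per-region wrinkle densities; the region patches and the threshold below are made-up values:

```python
import numpy as np

def wrinkle_density(sobel_mag, threshold):
    """Fraction of wrinkle pixels (Sobel magnitude above threshold) in a region."""
    wrinkle = sobel_mag > threshold
    return wrinkle.sum() / wrinkle.size

def f5(region_mags, threshold):
    """F5 = sum of wrinkle densities over the five wrinkle regions
    (forehead, left/right eyelid, left/right eye corner)."""
    return sum(wrinkle_density(m, threshold) for m in region_mags)

# Made-up Sobel-magnitude patches standing in for the five regions.
rng = np.random.default_rng(0)
regions = [rng.uniform(0, 255, size=(8, 8)) for _ in range(5)]
print(f5(regions, threshold=200))
```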

Figure 6. (a), (c) Original images; (b), (d) results after the Sobel operator

      For the estimation of F5 features, a few steps have to be followed as discussed below:

      As the age keeps on increasing, wrinkles on face turn out to be clearer. Aged individuals regularly have clear wrinkles on the face in the following areas as mentioned below [12]:

        1. The forehead has horizontal furrows.

2. The eye corners have crow's feet.

3. The cheeks have prominent cheekbones, sickle-shaped pouches, and deep lines between the cheeks and the upper lips.

Since there are evident changes in wrinkle intensities, and some wrinkles even form clear lines, in this paper we use Sobel edge magnitudes, which approximate gradient magnitudes, to judge the level of wrinkling. The Sobel edge magnitude is larger if the pixel belongs to a wrinkle, because the difference in gray levels there is pronounced. From this perspective, a pixel is labelled a wrinkle pixel if its Sobel edge magnitude is larger than some threshold. Figure 7 (a) and (c) show a young adult and an old adult, and Figure 7 (b) and (d) show the outcomes after the thresholded Sobel operator. It is clear that the wrinkles are much more evident on the old adult than on the young adult.


According to studies of facial sketches [9] and theatrical makeup [12], facial features change considerably as age increases. In this phase, global features in combination with the grid features are extracted from the face images. The global features include the distance between the two eyeballs, eye to chin, eye to nose tip, and eye to lip. These distances are illustrated in Figure 5.

Figure 5. Distance between (a) two eyeballs, (b) eye to nose tip, (c) eye to chin, (d) eye to lip

Using these four distance values, four features F1, F2, F3, and F4 are calculated as follows:

F1 = (distance from left to right eyeball) / (distance from eye to nose).

F2 = (distance from left to right eyeball) / (distance from eye to lip).

F3 = (distance from eye to nose) / (distance from eye to chin).

F4 = (distance from eye to nose) / (distance from eye to lip).
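A sketch of F1-F4 from landmark coordinates; the use of the midpoint between the eyes as the "eye" reference point, and all coordinates, are assumptions for illustration:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_features(left_eye, right_eye, nose, chin, lip):
    """F1..F4 as ratios of distances between facial landmarks.
    The 'eye' distances are measured from the midpoint between the eyes
    (an assumption; the paper does not specify the reference point)."""
    eye_mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    d_eyes = dist(left_eye, right_eye)
    d_nose = dist(eye_mid, nose)
    d_chin = dist(eye_mid, chin)
    d_lip = dist(eye_mid, lip)
    return (d_eyes / d_nose,   # F1
            d_eyes / d_lip,    # F2
            d_nose / d_chin,   # F3
            d_nose / d_lip)    # F4

# Hypothetical landmark coordinates (x, y) in pixels.
F1, F2, F3, F4 = geometric_features((40, 50), (80, 50), (60, 80), (60, 130), (60, 100))
print(F1, F2, F3, F4)
```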

From Figure 9 it is clear that newborn babies may have a number of wrinkles on their faces. The head bone structure of newborns is not fully grown, and the ratios between primary features are very different from those in other life stages. Hence we conclude that geometric features are more reliable than wrinkle features when judging whether an image is of a baby.

Figure 9. (a) Baby; (b) result after the Sobel operator

In infants, the head is nearly circular, and the distance between the two eyes is almost equal to the distance from the eyes to the mouth. As the head bone grows, the head becomes oval and the distance from the eyes to the mouth increases accordingly. Moreover, in babies the ratio of the eye-to-nose distance to the nose-to-mouth distance is almost equal to one, whereas in adults it is larger than one, as shown in Figure 9 (a) and (b).


Classification is done using the K-means clustering algorithm. The classification into various age ranges is performed dynamically depending on the number of groups. On the basis of the six features F1 to F6, age is classified into 2, 3, and 4 age-range clusters, as illustrated in Table I.

Using the five features F1 to F5, age is classified into 2, 3, and 4 age groups, as illustrated in Table II.

The wrinkle feature F5 alone is used for age classification into 2, 3, and 4 age groups, as illustrated in Table III.
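A plain Lloyd's K-means over per-face feature vectors can be sketched as follows; the feature values and the choice of k = 2 are illustrative assumptions:

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain Lloyd's K-means on feature vectors (e.g. [F1, ..., F5] per face)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # update step: each center moves to the mean of its cluster
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical per-face feature vectors; the last entry plays the role of F5.
faces = [[1.3, 0.8, 0.4, 0.6, 0.1], [1.2, 0.7, 0.4, 0.6, 0.2],
         [1.4, 0.9, 0.5, 0.7, 1.8], [1.3, 0.8, 0.4, 0.6, 2.0]]
centers, clusters = kmeans(faces, k=2)
print([len(c) for c in clusters])
```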


In this work, a strategy for estimating age groups has been fully described. The proposed system offers a powerful strategy that determines the age group of people from a set of distinctive images of ageing faces. Critical components are discussed, such as the distances between the different parts of the face, the study of wrinkle topography, and the computation of face angles. Each of these features is compared to identify the best approach for estimating the age range of the facial images in the database. After evaluating all the features described above, the facial images are grouped into 2, 3, and 4 clusters using the K-means clustering algorithm. It was found that the wrinkle topography feature, F5, gives the best result for estimating the human age range compared with the other components. This leads us to the conclusion that the analysis of wrinkle topography is the best strategy for finding the age group of a person.


  1. Jana, Ranjan, Debaleena Datta, and Rituparna Saha. "Age Group Estimation using Face Features." International Journal of Engineering and Innovative Technology (IJEIT) 3.2 (2013): 130- 134.

  2. Horng, Wen-Bing, Cheng-Ping Lee, and Chun-Wen Chen. "Classification of age groups based on facial features." Tamkang Journal of Science and Engineering4.3 (2001): 183-192.

  3. Chellappa, R., Wilson, C. L. and Sirohey, S., Human and machine recognition of faces: A Survey, Proc. of the IEEE, Vol. 83, pp. 705-740 (1995).

  4. Choi, C., Age change for predicting future faces, Proc. IEEE Int. Conf. on Fuzzy Systems, Vol. 3, pp. 1603-1608 (1999)

  5. Shepherd, J. W., An interactive computer system for retrieving faces, Aspects of Face Processing, Ellis, H. D. et al. Eds, Martinus Nijhoff International, Dordrecht, The Netherlands, pp. 398-409 (1986).

  6. Gutta, S. and Wecheler, H., Gender and ethnic classification of human faces using hybrid classifiers, Proc. Int. Joint Conference on Neural Networks, Vol. 6, pp. 4084-4089 (1999).

  7. Kass, M., Witkin, A. and Terzopoulos, D., Snake: active contour models, Proc. First Int. Conf. on Computer Vision, London, England, pp. 259-268 (1987).

  8. Kwon, Y. H. and da Vitoria Lobo, N., Age classification from facial images,Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Seattle, Washington, U. S. A., pp. 762-767 (1994).

  9. Yuille, A. L., Cohen, D. S. and Hallinan, P. W., Feature extraction from faces using deformable templates, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, San Diego.

  10. M. A. Turk and A. P. Pentland, Eigen faces for recognition, Journal of Cognitive Neuroscience, 3(1): 7186, 1991.

  11. Sahoolizadeh, Hossein, and Youness Aliyari Ghassabeh. "Face recognition using eigen-faces, fisher-faces and neural networks." Cybernetic Intelligent Systems, 2008. CIS 2008. 7th IEEE International Conference on. IEEE, 2008.

  12. B.D., Zarit, B.J., Super, AND F.K.H. Quek, Comparison of five color models in skin pixel classification, Int. Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pages 58-63, Corfu, Greece, Sep. 1999.

  13. R.L., Hsu, M., Abdel-Mottaleb, and A.K.Jain, Face detection in color images, IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(5):696-706, May 2002.

  14. Y. H. Kwon and N. da Vitoria Lobo, Age classification from facial images, Computer Vision and Image Understanding, vol. 74, no. 1, pp. 1-21, 1999.

  15. A. Lanitis and C. J. Taylor, Towards automatic face identification robust to ageing variation, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442-455, 2002.

  16. A.Lanitis, C.Draganova, and C.Christodoulou, Comparing different classifiers for automatic age estimation, IEEE Trans.Syst.Man, Cybern.B, Cybern, vol34, no.1, pp.621-628, Feb.2004.

  17. V. Blanz and T. Vetter, Face recognition based on fitting a 3D morphable model, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1063 1074, September 2003.

  18. R. Kimmel A. M. Bronstein, M. M. Bronstein, Three- dimensional face recognition, Intl. Journal of Computer Vision, 64(1):530, August 2005.

  19. N.Ramanathan and R. Chellappa, Face verification across age progression, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, 2005, vol.2, pp.462-469.

  20. N.Ramanathan and R. Chellappa, Modeling Age Progression in young faces, in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), vol.1, pp.387-394, 2006.

  21. X.Geng, Z.H. Zhou, and K. Smith-Miles, Automatic age estimation based on facial aging patterns, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.29, pp.2234-2240, 2007.

  22. A. K.Jain, Age Invariant Face Recognition, IEEE Trans. on Pattern Analysis and Machine Intelligence, 2010

  23. Ramesha K, K B Raja, Venugopal K R, and L M Patnaik, Feature Extraction based Face Recognition, Gender and Age Classification, International Journal on Computer Science and Engineering (IJCSE), Vol 02, No.01S, pp. 14-23, 2010.

  24. Chiunhsiun Lin, Kuo-Chin Fan, Triangle-based approach to the detection of human face, Pattern Recognition Journal Society, vol.34, pp.1271-1284, 2001.

  25. R. Jana, H. Pal, A. R. Chowdhury, Age Group Estimation Using Face Angle, IOSR Journal of Computer Engineering (IOSRJCE), Volume 7, Issue 5, PP 35-39, Nov-Dec. 2012.

  27. K. Luu, K. Ricanek, T. Bui, and C. Suen, Age estimation using Active Appearance Models and support vector machine regression, in IEEE BTAS, 2009.

  28. K. Ricanek, Y. Wang, C. Chen, and S. Simmons, Generalized multi-ethnic age estimation, in IEEE BTAS, 2009.

  29. Merve Kilinc and Yusuf Sinan Akgul, Human age estimation via geometric and textural features, 2009.

  30. P. Turaga, S. Biswas, and R. Chellappa, The role of geometry in age estimation, Proc. 2010 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 946-949, 2010.

  31. R. Jana, H. Pal, and A. R. Chowdhury, Age group estimation using face angle, IOSR Journal of Computer Engineering, pp. 35-39, 2012.

  32. Choi, Youn Joo Lee, Sung Joo Lee, Kang Ryoung Park, and Jaihie Kim, Age estimation using a hierarchical classifier based on global and local facial features, Pattern Recognition (Elsevier), pp. 1262-1281, 2011.

  33. C. T. Lin, D. L. Li, J. H. Lai, M. F. Han, and J. Y. Chang, Automatic age estimation system for face images, International Journal of Advanced Robotic Systems, 2012.

  34. Hu Han, Charles Otto, and Anil K. Jain, Age estimation from face images: human vs. machine performance, IAPR International Conference on Biometrics, 2013.

International Journal of Engineering Research & Technology (IJERT), ISSN: 2278 –