Multi Modal Biometric System: A Review on Recognition Method

DOI : 10.17577/IJERTV6IS050102


  • Authors : Dr. Gandhimathi Amirthalingam, Saranya Subramaniam
  • Paper ID : IJERTV6IS050102
  • Volume & Issue : Volume 06, Issue 05 (May 2017)
  • Published (First Online): 03-05-2017
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License : This work is licensed under a Creative Commons Attribution 4.0 International License


Dr. Gandhimathi Amirthalingam, Department of Computer Science, King Khalid University

Kingdom of Saudi Arabia

Saranya Subramaniam

Software Engineer Robert Bosch Engineering

Coimbatore, India

Abstract: This review paper summarizes and compares biometric recognition methods for human identification. Biometrics refers to the use of physiological or behavioral characteristics to establish the identity of an individual. Literature reviews of the most recent multimodal biometric human recognition techniques are presented, and methods that use multiple types of biometric sources for identification (multimodal biometrics) are reviewed. For each method, the combination scheme, a description and the limitations of the biometric sources, and the databases used to evaluate recognition are given. An evaluation of multi-biometric technology and conclusions are also provided.

Keywords: Face recognition, ear recognition, biometric recognition, multimodal biometric recognition.


    Biometric techniques are increasingly being used as a hedge against identity theft. The premise is that a biometric, a measurable physical characteristic or behavioral trait, is a more reliable indicator of identity than legacy credentials such as passwords and PINs [23]. Biometrics first came to prominence in 1879, when the French criminologist Alphonse Bertillon (1853-1914) introduced his anthropometrical signalment, or Bertillonage, system for identifying criminals [10]. It was a method of identification based on anthropometry of various parts of the human body, including the head, ears and fingers, whose sizes remain constant throughout life once full growth is attained. However, greater accuracy and robustness are desired in biometric identification.


    1. Types of Biometrics

      Biometric recognition is a method of identifying or verifying the identity of an individual based on physiological or behavioral characteristics.

      Physiological biometrics is based on data derived from direct measurement of a part of the human body [10]. It includes fingerprint, iris scan, DNA, retina scan, hand geometry, and facial recognition.

      Behavioural biometrics is based on data derived from actions taken by a person, i.e., an individual's behavioral traits. Behavioral biometric characteristics include voice recognition, keystroke scan, and signature scan. Any physiological or behavioral human characteristic can be used as a biometric characteristic as long as it is universal, unique, collectable and permanent [10].

      Biometric recognition features can be either passive or active. Face and ear recognition are passive biometrics: the user's participation is not required, and recognition can succeed without any explicit action on the part of the user. Active biometrics such as fingerprint, retina scanning, signature recognition and DNA, however, do require some voluntary action by the user and will not work if the user refuses to participate in the process.

    2. Identification / Verification

      Biometric-based personal recognition systems can be classified into two main categories: verification and identification. Biometric verification (one-to-one matching) compares the image of an unknown person against the enrolled template of the identity that person claims, to decide whether the person is who they claim to be. Biometric identification (one-to-many matching) compares the image of an unknown person, who does not claim an identity, against all records in a database of templates; the system identifies the individual from the database gallery. This category is usually associated with law enforcement applications.
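The two matching modes above can be sketched in a few lines. This is a minimal illustration, not any system from the survey: the cosine-similarity matcher, the threshold value, and the two-subject gallery are all made-up stand-ins.

```python
import numpy as np

def match_score(a, b):
    """Similarity between two feature vectors (cosine similarity here)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, claimed_template, threshold=0.8):
    """One-to-one matching: accept or reject a claimed identity."""
    return match_score(probe, claimed_template) >= threshold

def identify(probe, gallery):
    """One-to-many matching: return the best-matching identity in the gallery."""
    scores = {name: match_score(probe, tmpl) for name, tmpl in gallery.items()}
    return max(scores, key=scores.get)

# Hypothetical enrolled templates for two subjects.
gallery = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
probe = np.array([0.9, 0.1])
assert identify(probe, gallery) == "alice"   # one-to-many
assert verify(probe, gallery["alice"])       # one-to-one
```

In a real system the threshold would be tuned on a development set to trade FAR against FRR.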

    3. Uses of Biometrics

    Biometrics is a fast-growing technology that can be useful in the criminal justice system for mug-shot matching, post-event analysis and forensics. It provides security by preventing unauthorized access to ATMs, computer networks, cellular phones, email authentication on multimedia workstations, PDAs, medical records management and distance learning. The voice biometric can be used during transactions conducted via telephone and in internet commerce and banking. Retinal patterns of an individual can provide medical information about diabetes or high blood pressure. In automobiles, keys can be replaced with keyless entry devices based on the fingerprint biometric. The face biometric is used in smart card applications [18]; the face-print can be stored in a smart card, bar code or magnetic stripe. Active biometrics such as iris, fingerprint and retina are the most widely used and well-known biometrics, while the passive face biometric is very popular. Face recognition is used in forensic applications such as terrorist identification and corpse identification. Other biometric applications include social security, national ID cards, border control and passport control.

  3. SURVEY OF MULTIMODAL BIOMETRICS

    This section surveys literature that deals with multimodal biometrics, whose aim is to improve the robustness and accuracy of existing single-biometric methods. Methodologies that use multiple types of biometric sources for recognition are reviewed. The term "multi-modal biometrics" is used for systems with different sensor types, without necessarily implying that different features of the body are sensed, such as the appearance and shape of the face [31, 33]. The important features of the surveyed multi-modal systems are summarized in Table 1.

    A person identification system using two biometrics, face and ear, is presented by A.A. Darwish et al. [11]. It detects and identifies human faces and ears in a multimodal system using the PCA algorithm, with so-called eigenfaces and eigenears. PCA decorrelates data, exposing differences and similarities, by finding the eigenvectors of the covariance matrix. The system, with fusion of face and ear, was evaluated on the MIT, ORL and Yale databases. The individual face and ear images are normalized and preprocessed to create clear face and ear images, and then transformed to the PCA space. Images are recognized with an AdaBoost classifier. The overall accuracy of the system is 92.24%, with a FAR of 10% and an FRR of 6.1%. The authors concluded that combining face and ear is a good technique because it offers high accuracy and security.
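The eigenface/eigenear idea, projecting images onto the leading eigenvectors of the training covariance matrix, can be sketched as below. This is a generic PCA sketch under stated assumptions (random arrays stand in for normalized face or ear images), not the exact pipeline of [11].

```python
import numpy as np

def pca_fit(images, n_components):
    """Learn a PCA subspace ("eigenfaces"/"eigenears") from flattened images.

    images: (n_samples, n_pixels) array; each row is a vectorized image.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Eigenvectors of the covariance matrix, obtained via SVD of the data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]          # top principal axes

def pca_project(image, mean, components):
    """Project one image into the learned subspace (its feature vector)."""
    return components @ (image - mean)

# Toy data standing in for preprocessed, normalized images.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 64))           # 20 images of 64 "pixels"
mean, comps = pca_fit(train, n_components=5)
feat = pca_project(train[0], mean, comps)
assert feat.shape == (5,)
```

Classification (AdaBoost in [11]) would then operate on these low-dimensional feature vectors rather than raw pixels.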

    Cheng Lu et al. describe a multimodal biometric identification system based on the face and palmprint [13]. Statistical properties (SP) of the biometric image and two-dimensional principal component analysis (2DPCA) are the two feature extraction methods applied in this system. The minimal distance rule is applied at the matching-score fusion level of face and palmprint. Unimodal face and palmprint identification is also tested, to show the superiority of multimodal identification. In classification, the city-block distance and squared Euclidean distance are adopted to test the performance of the nearest neighbor classifier. The performance of the system is evaluated by the correct identification rate (CIR). The experimental results indicate that multimodal biometric identification improves accuracy, and the CIR can reach 100% on the ORL face database and PolyU palmprint database.
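A nearest-neighbor classifier with the two distance metrics mentioned above can be sketched as follows. This is an illustrative stand-in (tiny hypothetical feature vectors and labels), not the feature extraction or fusion of [13].

```python
import numpy as np

def nearest_neighbor(probe, gallery_feats, gallery_labels, metric="cityblock"):
    """Classify a probe feature vector by its nearest gallery template."""
    if metric == "cityblock":
        dists = np.abs(gallery_feats - probe).sum(axis=1)        # L1 distance
    elif metric == "sqeuclidean":
        dists = ((gallery_feats - probe) ** 2).sum(axis=1)       # squared L2
    else:
        raise ValueError(metric)
    return gallery_labels[int(dists.argmin())]

# Hypothetical two-subject gallery of 2-D feature vectors.
feats = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["subject_a", "subject_b"]
assert nearest_neighbor(np.array([0.5, 0.2]), feats, labels) == "subject_a"
```

Score-level fusion in this style would sum (or otherwise combine) the face and palmprint distances per subject before taking the minimum.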

    Xu Xiaona et al. [19] proposed a novel kernel-based feature fusion algorithm combining face and ear. Combined with the KPCA and KFDA algorithms, the feature fusion method was applied to multimodal biometrics based on fusion of ear and profile-face biometrics. The system defines the Average, Product and Weighted-sum rules in the kernel-based fusion feature method, and the USTB database is analyzed. The recognition rate is 94.52% for KPCA and 96.84% for KFDA. The experimental results show that the KPCA and KFDA methods are efficient for fusion feature extraction, and the performance of the system is better than unimodal ear or profile-face biometric recognition.

    Xiaona Xu et al. [4] proposed a novel non-intrusive multimodal recognition technology based on ear and profile face. Profile-view face images including the ear are captured for recognition. The input images are preprocessed and normalized, and the pure ear image is cropped from the profile face image. The ear and profile face images are filtered with Wiener filtering to enhance the features, then resized to 80×50 pixels and 200×200 pixels respectively. The intensity values of the images are equalized using histogram equalization. The system uses the FSLDA algorithm for both the ear classifier and the profile-face classifier. Fusion of the multimodal ear and profile-face biometrics at the decision level is carried out using the Product, Sum and Median combination rules, according to Bayesian theory, and a modified Vote rule. The experimental results show that the recognition rate is higher than that of unimodal biometric recognition.
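The classical classifier combination rules named above (Product, Sum, Median, Vote) can be sketched over per-class posterior estimates. This is a generic sketch with made-up posteriors for three classes, not the FSLDA classifiers of [4]; the vote tie-breaking rule is an assumption of this example.

```python
import numpy as np

def fuse_scores(ear_probs, face_probs, rule="sum"):
    """Combine per-class posterior estimates from two classifiers.

    ear_probs, face_probs: arrays of shape (n_classes,), each summing to 1.
    Returns the index of the winning class under the chosen rule.
    """
    stacked = np.vstack([ear_probs, face_probs])
    if rule == "sum":
        combined = stacked.sum(axis=0)
    elif rule == "product":
        combined = stacked.prod(axis=0)
    elif rule == "median":
        combined = np.median(stacked, axis=0)
    elif rule == "vote":
        # Majority vote over each classifier's top choice; ties broken by
        # the summed posterior (an assumption of this sketch).
        votes = np.bincount(stacked.argmax(axis=1), minlength=stacked.shape[1])
        combined = votes + 1e-6 * stacked.sum(axis=0)
    else:
        raise ValueError(rule)
    return int(combined.argmax())

ear = np.array([0.6, 0.3, 0.1])    # hypothetical ear-classifier posteriors
face = np.array([0.2, 0.7, 0.1])   # hypothetical face-classifier posteriors
assert fuse_scores(ear, face, "sum") == 1
```

The product rule follows from assuming the two modalities are conditionally independent; the sum rule is more robust when the posterior estimates are noisy.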

    Muhammad Imran Razzak et al. [9] combined face and finger veins, performing multilevel score-level fusion to increase the robustness of the authentication system. Score-level fusion of client-specific linear discriminant analysis (CSLDA) for face recognition is followed by fusion of the face result with the finger-vein result. PCA and LDA provide the global representations of the testing and training data in feature space; CSLDA is a conventional LDA representation involving multiple shared faces, and it uses PCA and LDA to generate a client-specific template. If the face recognition scores of any two clients are very close, finger-vein recognition plays the decisive role in the final decision. The face recognition scores are combined using weighted fuzzy fusion to improve the face recognition system. The experimental results show that the proposed multimodal recognition system is very effective, reducing the FAR to 0.05% and increasing the GAR to 91.4%.

    M.H. Mahoor et al. [21] proposed a multi-modal biometric system comprising 2D face and 3D ear recognition components. For the 2D face component, Gabor filters are used to extract a group of features, and an Active Shape Model is used to extract a set of facial landmarks from frontal facial images. For ear recognition, a set of frames is extracted from a video clip and the ear region in each frame is reconstructed in 3D using the Shape From Shading (SFS) algorithm. The ear region contained in each frame is localized and segmented, and the resulting 3D ear models are aligned using the iterative closest point (ICP) algorithm. The system fuses the ear and face biometrics at the match-score level using a weighted-sum technique: the match scores of each modality are calculated, normalized, and then combined, and the fused match score yields the final recognition decision. The experiment was performed on a database of 402 subjects. The rank-one identification and verification results at 0.01% FAR show that fusing the face and ear biometrics increases the performance of the system to 100%; the EER of the multi-modal system is 0.01%.
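Normalize-then-weighted-sum score fusion, as used at the match-score level above, can be sketched as follows. The min-max normalization, the weight of 0.6, and the raw score values are assumptions of this illustration, not values from [21].

```python
def minmax_normalize(scores):
    """Map raw match scores to [0, 1] (a common pre-fusion step)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(face_scores, ear_scores, w_face=0.6):
    """Fuse two normalized score lists with a weighted sum."""
    face_n = minmax_normalize(face_scores)
    ear_n = minmax_normalize(ear_scores)
    return [w_face * f + (1 - w_face) * e for f, e in zip(face_n, ear_n)]

# Hypothetical match scores of one probe against a 4-subject gallery.
face = [12.0, 30.0, 25.0, 18.0]
ear = [0.2, 0.9, 0.5, 0.1]
fused = weighted_sum_fusion(face, ear)
best = max(range(len(fused)), key=fused.__getitem__)
assert best == 1   # subject 1 is the strongest match in both modalities
```

Normalization matters because the two matchers may emit scores on very different scales; without it, one modality would dominate the sum regardless of the weights.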

    Aloysius George [6] experimented with face and fingerprint, using Linear Discriminant Analysis (LDA) for face recognition and a Directional Filter Bank (DFB) for fingerprint matching. The verification module validates the recognized data from the face image and fingerprint using the MBP-ANN (momentum back-propagation artificial neural network) algorithm. If the face and fingerprint verification results match, no further processing is needed and verification is accepted; otherwise verification is rejected, and the MBP-ANN algorithm is used to resolve the mismatch. The experimental results show that the multi-modal verification method decreases the false rejection rate (FRR), achieves a false acceptance rate (FAR) of 0.0000121% and a GAR of 99.9789%, and improves reliability in real time by overcoming the technical limitations of unimodal biometric verification methods.

    Yan Yan et al. [17] formalized a framework for combining the multimodal biometrics of face and palmprint. They proposed a Correlation Filter Bank (CFB) technique for multimodal biometric feature vector extraction. In CFB, the unconstrained correlation filter trained for a specific modality is designed by optimizing the overall correlation outputs at the origin; CFB thus takes full advantage of the information in the different modalities to extract discriminant features. Two face databases (AR and FRGC) and a palmprint database (PolyU) are used to evaluate recognition performance. The authors compared the proposed method with other subspace-learning-based multimodal fusion methods (PCA [1], LDA [2], LPP [3] and OTF-based CFA [5]) fused at the feature level. The experiments were conducted, for convenience, on non-real multimodal biometric data assembled from these databases, and they show the superiority of the proposed method.

    A human recognition method combining face and speech information, in order to overcome the limitations of single-biometric authentication, is proposed by Mohamed Soltane et al. [14]. The popular PCA method is used for face recognition. A Gaussian mixture model (GMM), the main tool in text-independent speaker recognition, is trained using the Expectation Maximization (EM) and Figueiredo-Jain (FJ) algorithms for score-level data fusion. The speech modality is authenticated with a multilingual, text-independent speaker verification system consisting of two components: speech feature extraction and the GMM. The authors present a framework for score-level fusion in a multi-modal biometric system based on an adaptive Bayesian method [16], proposing a finite-GMM-based EM estimation algorithm for score-level data fusion. The experiment was performed with a face database collected from video (UYVY-encoded AVI, 640×480 at 15.00 fps, with uncompressed 16-bit PCM audio). The PCA face detection algorithm is applied to the video files; the audio is extracted and fused with the face features, significantly improving the recognition rate. The EER of the face-speech combination is reduced to 0.087 compared with the face modality alone.
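GMM-based speaker modelling of the kind described above can be sketched with scikit-learn's `GaussianMixture`, which is trained with EM by default. The random arrays here are stand-ins for real cepstral (e.g. MFCC) feature frames; the component count and dimensionality are assumptions of this sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy feature frames standing in for MFCC vectors (13-dimensional).
rng = np.random.default_rng(1)
client_frames = rng.normal(loc=0.0, size=(200, 13))     # enrolled speaker
impostor_frames = rng.normal(loc=3.0, size=(200, 13))   # a different speaker

# Train one GMM per enrolled speaker via the EM algorithm.
client_gmm = GaussianMixture(n_components=4, random_state=0).fit(client_frames)

# Verification score: average log-likelihood of the probe utterance's frames.
probe_client = rng.normal(loc=0.0, size=(50, 13))
probe_impostor = rng.normal(loc=3.0, size=(50, 13))
assert client_gmm.score(probe_client) > client_gmm.score(probe_impostor)
```

In a fused system, this log-likelihood would be normalized and combined with the face match score before thresholding.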

    A multi-biometric system using lip movement and gestures is proposed by Piotr Dalka et al. [12]. Their multimodal human-computer interface (HCI), called LipMouse, allows users to work on a computer using movements and gestures made with the mouth. One of its main application areas is computer access for people with permanent or temporary disabilities. A web camera acquires a video stream, from which LipMouse detects and analyzes signs and gestures made by users. Face detection is based on a cascade of AdaBoost classifiers. A mouth region is located in each video frame, and its position is used to track lip movements, which allows a user to control a screen cursor. Lip gesture recognition is performed with an artificial neural network (ANN) that distinguishes classes such as no gesture, mouth opening, forming puckered lips, and sticking out the tongue. The experiment used 6120 image frames; the feature vector for the ANN contains the lip region only. The ANN is trained with the resilient back-propagation algorithm (RPROP). The results show a recognition rate of 93.7%. The main goal of the proposed HCI application is to make working with a computer as natural, intuitive and effective as possible.

    Biological features of the face or other parts of the human body have different properties for different sensors [8]. Each biometric parameter can be characterized as better or worse depending on how the data of the individual is acquired for identification purposes. A reliable biometric system depends on which multi-biometric sources are combined, and how [20]. There are different ways of integrating multiple biometric sources, depending on the number of samples, multiple matchers, multiple snapshots, multiple sensors and the number of biometric features considered in multi-biometric studies [32, 14]. The ear provides good biometric performance: it undergoes only slight changes from childhood to adulthood, and due to its semi-rigid shape and robustness against change over time, the ear has become an increasingly popular biometric feature. It has been shown that combining the individual face and ear biometric methods into multi-biometric systems improves recognition [21].


    Multimodal biometrics based on the combination of two different biometric sources, face and ear, may provide a new approach to non-intrusive biometric authentication. There are several motivations for choosing face and ear for multi-modal biometric recognition. The two modalities are in close physical proximity, so during image acquisition the data can be captured using conventional cameras. Data collection for face and ear does not require participation or cooperation from the user. Both biometric features are jointly present in an image or video captured of a user's head, and are thus both available to a biometric system. In [15, 23, 25, 30, 5], the fusion of face and ear biometrics was used to perform recognition.

    1. Comparison with other biometrics

      Gait: Gait is a behavioral biometric. It is not supposed to be very distinctive, but it is sufficiently discriminatory to allow verification in some low-security applications. It may not remain invariant, especially over a long period of time, due to fluctuations in body weight or major injuries involving the joints or brain.

      Iris: Since the iris is much smaller than the ear, a high-resolution camera is required to acquire images of acceptable quality. In general, the capturing sensor cannot be placed far from the subject. Iris recognition can also fail when the subject wears glasses.

      Fingerprint: Fingerprint recognition requires specially designed sensors and computational resources, which may be too expensive for large-scale deployment, especially when operating in identification mode. The fingerprints of a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons. Manual workers may have a large number of cuts and marks on their fingerprints that keep changing.

      Voice: The voice of a person changes over time due to age, health conditions, emotional state, etc. Voice is also not very unique and may not be appropriate for large-scale identification. A disadvantage of voice-based recognition is that speech features are sensitive to a number of factors, such as background noise.

      Keystroke: This behavioral biometric is not expected to be unique to each individual. Keystroke dynamics may vary depending on health condition, and large variations in typical typing patterns are to be expected. However, the keystrokes of a person using a system can be monitored unobtrusively as that person is keying in information.

      Palmprint: Since palmprint scanners need to capture a large area, they are more expensive than fingerprint sensors. The physical size of a palmprint-based system is large, and it cannot be embedded in certain devices.

      Signature: The signature of a person is known to be a characteristic of that individual. Signatures require contact with the writing instrument and an effort on the part of the user, and have been accepted in government, legal and commercial transactions as a method of verification. A signature changes over time and is influenced by the physical and emotional condition of the signatory. Signatures of some people vary significantly, and professional forgers may be able to reproduce signatures that fool the system.

    2. Face Biometric

      Face recognition has potential applications in security control, surveillance, office automation, fraud prevention, video indexing, automatic personalization of environments, etc. [22]. Face recognition is passive and non-intrusive, unlike active biometric techniques such as those using fingerprints, speech and signature [1].

      There are two main operating categories of face recognition systems. In the first, face identification, the face image database contains one image per person; the system identifies a person and returns a list of the names that most likely match the query face image. In the second, face verification, the system recognizes a person against a smaller face database so that they can gain entry to a particular resource. Face recognition techniques can also be modified and used for gender classification. Feature-based and holistic approaches are the two main categories of face recognition methods. A high-performance system should recognize faces in real time under varying facial expressions, hairstyles, and image backgrounds.

    3. Ear Biometric

    The ear is a relatively new class of human biometric for passive identification, with uniqueness and stability. Like the face, the ear is a visible part of the body. The ear is largely unaffected by ageing: after the first four months of life, ear growth is proportional to overall growth. Its location on the side of the head makes extraction easier. The ear biometric is convenient for data collection compared with technologies such as retina, iris and fingerprint [24]. The combination of ear and face shows high recognition results. Ear features and ear identification have been used in forensics for more than 10 years [33, 15]; in the absence of fingerprints, ear shapes or marks are often used for identification. The ear is reliable due to its lack of expressions and the small effect of ageing, and it is an information-rich anatomical feature whose structure and pinna are distinctive. The profile-view face image including the ear is captured, and the ear image is then cropped from the profile face image. Recognition is similar to face recognition and consists of image acquisition, preprocessing, feature extraction, model training and template matching.


  4. CONCLUSION

This paper presents an overview of multimodal biometric recognition. The literature review has covered performance evaluations of multimodal biometrics for two and three modalities and for different combinations of algorithms [15]. However, there is no consensus on which features should be used, how they should be acquired, or even how they should be combined. When designing a multimodal biometric system, one must consider the type of data to be acquired (e.g., 2D or 3D), the type of recognition algorithm or method applied to each data element, the output of that algorithm (error metric), the type of fusion to be performed to combine them, and the level at which fusion should be performed [9, 27]. This review studies the multi-modal combination schemes, numbers of samples, recognition rates and databases used. Most of the studies in this review have a limited number of subjects and images in their evaluation datasets. The integration of the face and ear biometric sources is discussed; studies of combined face and ear data found that even a simple fusion technique yields improved performance over either the face or the ear alone [7]. The literature studies reviewed in Table 1 claim that multi-biometrics improve over an individual biometric system. A multi-modal face and ear biometric system [21] is also more feasible than, say, a multi-modal face and fingerprint biometric system.



Table 1. Summary of the multimodal biometric studies reviewed

| Source (year) | Database | Biometric sources | Method | Performance of classification | No. of subjects |
| Darwish (09) [11] | MIT, ORL, Yale | Face + Ear | Principal Component Analysis (PCA) | Accuracy 92.24% with FAR 10% and FRR 6.1% | MIT: 40 individuals, 10 face and 4 ear images each; ORL: 15 individuals, 11 face images each; Yale: 10 individuals, 4 face and ear images each |
| Cheng Lu (09) [13] | ORL, PolyU | Face + Palmprint | Statistical properties (SP), two-dimensional PCA (2DPCA) | Recognition rate: face 95.50%, palmprint 98.00%, fusion 100% | 400 face and 400 palmprint images from 40 users |
| Xu Xiaona (09) [19] | USTB | Face + Ear | KPCA, Kernel Fisher Discriminant Analysis (KFDA) | Fusion recognition rate: KPCA 94.52%, KFDA 96.84% | 79 subjects |
| Xiaona Xu (07) [4] | | Face + Ear | Full-Space Linear Discriminant Analysis (FSLDA) | Recognition rate: Product rule 96.43%, Sum rule 97.62%, Median rule 97.62%, Modified Vote rule 96.43% | 294 images of 42 persons |
| Razzak (10) [9] | | Face + Finger veins | Client-specific linear discriminant analysis (CSLDA) | FAR 0.05% and GAR 91.4% | 35 subjects, 3 finger-vein and 6 face images each |
| Mahoor (09) [21] | West Virginia University database | 2D Face + 3D Ear | Weighted-sum fusion | EER 0.01%, FAR 0.01%, rank-one identification 100% | 402 subjects |
| George (08) [6] | | Face + Fingerprint | Momentum back-propagation ANN (MBP-ANN) | GAR 99.9789%, FAR 0.0000121% | |
| Yan Yan (08) [17] | AR, FRGC, PolyU | Face + Palmprint | Correlation Filter Bank (CFB) | Recognition rate: AR+PolyU 99.23%, FRGC+PolyU 97.61% | AR: 14 face images from 100 individuals; FRGC: 20 images from 100 individuals; PolyU: 20 images from 100 individuals |
| Soltane (10) [14] | Video (UYVY AVI, 640×480, 15.00 fps) | Face + Speech | Gaussian mixture model (GMM) | EER: face 0.44935, speech 0.00269, fusion 0.08728 | 30 subjects (25 male, 5 female) |
| Dalka (10) [12] | Web-camera recordings | Lip movement + Gestures | Artificial neural network (ANN) | Recognition rate 93.7% | 176 persons' faces collected over two recording sessions |


REFERENCES

  1. M. Turk and A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience, Vol. 3(1), pp. 71-86, 1991.

  2. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, Eigenfaces vs. Fisherfaces: recognition using class specific linear projection, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19(7), pp. 711-720, 1997.

  3. X. He, S. Yan, Y. Hu, and H.-J. Zhang. Face recognition using Laplacianfaces, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 27(3), pp. 328-340, 2005.

  4. Xiaona Xu, Zhichun Mu, Multimodal Recognition Based on Fusion of Ear and Profile Face, Proc. IEEE CS Fourth International Conference on Image and Graphics, pp. 598- 603, 2007.

  5. K.H. Pun, Y.S. Moon, Recent Advances in Ear Biometrics, Proc. IEEE CS Sixth International Conference on Automatic Face and Gesture Recognition (FGR04), 2004.

  6. Aloysius George, Multi-Modal Biometrics Human Verification using LDA and DFB, International Journal of Biometric and Bioinformatics, Vol. 2: Issue (4), pp. 1-10, 2008.

  7. Christopher Middendorff, Kevin W. Bowyer, Ping Yan, Multi-Modal Biometrics Involving the Human Ear, Proc. IEEE, 2007.

  8. M. Abdel-Mottaleb and J. Zhou, A System for Ear Biometrics from Face Profile Images, International Journal on Graphics, Vision and Image Processing, pp. 29-34, 2006.

  9. Muhammad Imran Razzak, Rubiyah Yusof and Marzuki Khalid, Multimodal face and finger veins biometric authentication, Scientific Research and Essays, Vol. 5(17), pp. 2529-2534, 2010.

  10. Ruma Purkait, Ear Biometric: An Aid to Personal Identification, Anthropologist Special, Vol. 3, pp. 215- 218, 2007.

  11. A.A. Darwish, R. Abd Elghafar and A. Fawzi Ali, Multimodal Face and Ear Images, Journal of Computer Science Vol. 5 (5), pp. 374-379, 2009.

  12. Piotr Dalka, Andrzej Czyzewski, Human-Computer Interface Based on Visual Lip Movement and Gesture Recognition, International Journal of Computer Science and Applications, Vol. 7(3), pp. 124 – 139, 2010.

  13. Cheng Lu, Jisong Wang, Miao Qi, Multimodal Biometric Identification Approach Based on Face and Palmprint, Proc.IEEE CS Second International Symposium on Electronic Commerce and Security, pp. 44-47, 2009.

  14. Mohamed Soltane, Noureddine Doghmane, Noureddine Guersi, Face and Speech Based Multi-Modal Biometric Authentication, International Journal of Advanced Science and Technology Vol. 21(8), pp. 41-46, 2010.

  15. R. Brunelli and D. Falavigna, Person identification using multiple cues, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 12(10), pp. 955-966, 1995.

  16. A. B. J. Teoh, S.A. Samad, A. Hussain, A face and speech biometric verification system using a simple Bayesian structure, Journal of Information Science and Engineering, Vol. 21, pp. 1121-1137, 2005.

  17. Yan Yan and Yu-Jin Zhang, Multimodal Biometrics Fusion Using Correlation Filter Bank, IEEE, 2008.

  18. L. Puente Rodríguez, A. García Crespo, M. J. Poza Lara, B. Ruiz Mezcua, Study of Different Fusion Techniques for Multimodal Biometric Authentication, Proc. IEEE CS International Conference on Wireless & Mobile Computing, Networking & Communication, pp. 666-671, 2008.

  19. Xu Xiaona, Pan Xiuqin, Zhao Yue, Pu Qiumei, Research on Kernel-Based Feature Fusion Algorithm in Multimodal Recognition, IEEE CS International Conference on Information Technology and Computer Science, pp. 3-6, 2009.

  20. Barnabas Victor, Kevin Bowyer, and Sudeep Sarkar, An Evaluation of Face and Ear Biometrics, Proc. IEEE CS 16th International Conference on Pattern Recognition (ICPR02), 2002.

  21. Mohammad H. Mahoor , Steven Cadavid, and Mohamed Abdel-Mottaleb, Multi-modal Ear and Face Modeling and Recognition, Proc. IEEE 16th International Conference on Image Processing, pp. 4137-4140, 2009.

  22. Rabia Jafri and Hamid R. Arabnia, A Survey of Face Recognition Techniques, Journal of Information Processing Systems, Vol.5, No.2, pp. 41-68, 2009.

  23. A. K. Jain, A. Ross and S. Prabhakar, An introduction to biometric recognition, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, pp. 4-20, Jan 2004.

  24. Charles Schmitt, Allan Porterfield, Sean Maher, David Knowles, Human Identification from Video: A Summary of Multimodal Approaches, Institute for Homeland security solutions, Jun 2010.
