Multimodal Biometric Systems – A Roadmap to Improve Performance and Accuracy

DOI : 10.17577/IJERTCONV4IS15027


Garg Mamta*

Department of Computer Science & Engineering, Sant Longowal Institute of Engineering & Technology, Longowal, India

Arora Ajat Shatru

Department of Electrical & Instrumentation Engineering, Sant Longowal Institute of Engineering & Technology, Longowal, India

Gupta Savita

Department of Computer Science & Engineering University Institute of Engineering & Technology, Panjab University

Chandigarh, India

Abstract: Today, with rapid developments in the electronically wired information society, reliable and accurate mechanisms for the identification and verification of individuals have become an important issue in several applications. Personal identification using biometrics has emerged as a promising component of secure authentication systems. Recognition systems based on a single biometric modality are not very robust, while combining information from several biometric modalities provides better performance. By fusing several complementary biometric traits, multi-biometric systems can represent and discriminate subjects effectively. Since combining multiple biometric modalities can alleviate many of the problems faced by unimodal systems, multimodal biometrics has become a focused field of research. Considering the importance of information fusion in multimodal systems, the main aim of this paper is to examine the different levels of fusion. Finally, the paper concludes with a discussion of the issues that should be taken into account in the deployment of multimodal biometric person authentication systems.

Keywords: Biometrics; Identification; Authentication; Fusion; Unimodal; Multimodal.

  1. INTRODUCTION

    The widespread dispersion of information technology into our daily lives has created a real need for reliable and secure mechanisms to authenticate individuals. There is an increasing demand for reliable and practical authentication systems for security. Biometrics is a method of recognizing a person according to physiological or behavioral characteristics. Traditionally, passwords and tokens are used to grant access to applications. However, security can easily be breached when a password is exposed to an unauthorized user or a badge is stolen by an impostor. Moreover, such systems are unable to differentiate between an authentic user and an impostor who fraudulently acquires knowledge of the password. Because biometric traits are nearly impossible to lose, forget, misplace, or reproduce, identification and verification based on biometric characteristics have gradually emerged as among the most secure and reliable techniques. A biometric trait is unique to an individual and almost impossible to duplicate when properly protected. The vital attribute of biometric information is that it validates the authentic user, not merely the holder of a credential. The development of biometrics has therefore addressed the problems that plague traditional verification methods.

    The term biometric is a composite Greek word stemming from the synthesis of bio and metric, meaning life measurement. In 500 B.C., Babylonians used fingerprints for business transactions on clay tablets. The first real biometric system was created in 1870 by the French anthropologist Alphonse Bertillon and made biometrics a distinct field of study. He developed an identification system (Bertillonage) based on detailed records of body measurements, physical descriptions and photographs. A key example of a biometrics application is the Unique Identity project of the Government of India, which attempts to identify the entire population of the country.

  2. CHARACTERISTICS OF BIOMETRICS

    Any human body characteristic can be used as a reliable biometric as long as it satisfies universality, uniqueness, acceptability, permanence, circumvention, performance and collectability [1]. Universality means every person should have the biometric characteristic. Any two persons should be sufficiently different in terms of the characteristic. The biometric feature should not change or degrade over time. The biometric trait should be measurable with some sensing device. People should have no objection to the measurement or collection of the biometric trait. The system should be robust against spoof attacks and mimicry. The system should achieve the desired accuracy and should be robust against environmental and operational factors such as illumination and background noise. Based on these criteria, several distinctive human physiological and behavioral characteristics, like face [2], iris [3], sclera [4], ear [5], lip [6], speech [7], teeth [8], hand shape [9], fingerprint [10], palm print [11], knuckle print [12], vein pattern [13], tongue print [14], brain print [15], heart sound [16], gait [17], signature [18], periocular region [19], handwriting [20], nail prints [21], Electro-Oculo-Gram (EOG) signals [22], etc., can be used as biometrics. Each of these traits has its strengths and weaknesses. For example, the face biometric is highly universal but not as distinctive as the fingerprint or iris.

    The taxonomy of biometric modalities is illustrated in Fig. 1.

    Fig. 1. Taxonomy of biometric modalities

    However, in practical applications, no biometric characteristic fully achieves these requisites; consequently, no single biometric modality is free of errors, and the recognition accuracy of an individual biometric trait may not be adequate to meet the requirements of high-security applications. A single biometric system is therefore not a panacea for genuine authentication, because of environmental variations, physical limitations and spoofing problems. In particular, biometric systems based on a single trait are relatively vulnerable to some sophisticated forms of spoofing. Fingerprint-based systems are among the most commonly used and can easily be fooled by fake fingerprints reproduced on simple molds made of materials such as silicone, clay or gelatin [23-24]. Biometric data are often noisy because of the deformable nature of biometric modalities, corruption by environmental noise, non-cooperative behavior by users and variability over time.

  3. TAXONOMY OF MULTI-BIOMETRIC SYSTEM

    In practice, no single biometric trait can satisfy all the requirements of an ideal biometric system, owing to limitations such as non-universality, spoof attacks, inter-class similarities, intra-class variations and high error rates. For instance, due to intrinsic limitations as well as external environmental and sensing factors, no single biometric method can warrant 100% authentication accuracy. As a result, the identification performance of most unimodal systems is still not satisfactory.

    A robust authentication system may require the fusion of several modalities. This has motivated the current interest in multi-biometrics, in which several biometric traits are used simultaneously in order to make an identification decision [25-26]. Multi-biometrics consolidates the evidence obtained from different sources to overcome the limitations of unimodal biometric systems [27-28]. Ambiguities in one modality, such as poor illumination of the face, may be compensated by another modality, such as fingerprint features. A significant advantage of the multimodal approach is that it gives better protection against spoof attacks, because more than one modality must be spoofed simultaneously to defeat the system.

    Based on the nature of the sources of biometric information, multi-biometric systems can be classified into five categories: multi-sensor, multi-algorithm, multi-sample, multi-instance and multi-modal systems. Fig. 2 represents the integration scenarios in multi-biometrics.

    Fig. 2. Scenarios in a multimodal biometric system

    1. Multi-sensor systems:

      These systems [29-33] employ multiple sensors to capture a single biometric trait of an individual. The use of multiple sensors has been shown to enhance the recognition ability of biometric systems.

    2. Multi-algorithm systems:

      These systems combine the outputs of multiple feature extraction and/or classification algorithms applied to the same biometric data [34]. Systems of this type have been implemented by various researchers [35-38]. In other words, the supplementary information provided by more than one algorithm helps to improve performance.

    3. Multi-sample systems:

      These systems [39-41] use multiple samples of the same biometric trait acquired by a single sensor. The same algorithm processes each of the samples, and the individual results are fused to obtain an overall recognition result.

    4. Multi-instance systems:

      In these systems, biometric information is extracted from multiple instances of the same body trait [42-44].

    5. Multi-modal systems:

    These systems use the evidence of multiple biometric traits to extract the biometric information of an individual and are reliable due to the presence of multiple independent biometrics. Multimodality is based on the concept that information obtained from different modalities is complementary [45]. Examples of such systems have been reported by many researchers [46-48], [27]. The results provided by multimodal biometrics are much more accurate due to the availability of richer information [49].

  4. FUSION IN MULTIMODAL BIOMETRIC SYSTEM

    The key to the success of a multimodal biometric system is information fusion. In a biometric system, fusion of information can be performed at four different levels: sensor level, feature level, matching score level and decision level. Fusion at each level has its own advantages and disadvantages. Multimodal fusion can be performed in two ways: prior to matching and after matching. The various approaches used for information fusion at the different fusion levels are shown in Fig. 3.

    Fig. 3. Techniques used in information fusion

    1. Fusion prior to matching

      Biometric systems that integrate information at an early stage of processing are believed to be more effective than systems that perform integration at a later stage. Fusion prior to matching, also called pre-classification fusion, can be achieved in two different ways.

      1. Sensor level fusion:

        In sensor level fusion, the raw data acquired either from compatible sensors sensing the same modality or from multiple instances of the same modality captured by the same sensor are fused together, as shown in Fig. 4. Features are then derived from the fused raw information and matching is carried out. For example, multiple 2D face images obtained from different viewpoints can be stitched together to form a 3D model of the face [50]. An important point regarding sensor level fusion is that data can only be fused when samples of the same biometric trait are used.

        Fig. 4. Block diagram of fusion at sensor level

      2. Feature level fusion:

        Feature level fusion refers to combining different feature vectors obtained either by using multiple sensors or by employing multiple feature extraction algorithms on the same sensor data, as shown in Fig. 5. When the feature vectors are homogeneous, e.g., multiple eigenface representations of a user's face, a single resultant feature vector can be calculated as a weighted average of the individual feature vectors. When the feature vectors are non-homogeneous, e.g., feature vectors obtained using different feature extraction techniques, or feature vectors of different biometric modalities such as face and iris, the vectors can be normalized and then concatenated to form a single feature vector.

        Fig. 5. Block diagram of fusion at feature level

        The goal of feature normalization is to modify the location (mean) and scale (variance) of the feature values via a transformation function in order to map them into a common domain. Feature selection is then executed to reduce the dimensionality of the new feature vector. Feature selection algorithms used for this purpose include the Genetic Algorithm [51], Sequential Backward Selection [1], Sequential Forward Floating Search [1], Sequential Backward Floating Search [1], Branch and Bound Search [1], Sequential Feed Forward Selection [25], Particle Swarm Optimization [52], Kernel Discriminant Analysis [53], k-Means Clustering [49], Principal Component Analysis [54], SVM [55], k-NN [55], FNN [55], Partitioning Around Medoids [56] and Binary Particle Optimization [57].
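As a rough sketch of the normalize-then-concatenate step described above, the following fuses two non-homogeneous feature vectors using z-score normalization; the vectors, their lengths and the modality labels are illustrative assumptions, not taken from any cited system.

```python
import numpy as np

def zscore_normalize(v):
    """Map a feature vector to zero mean and (where possible) unit variance."""
    centered = v - v.mean()
    std = v.std()
    return centered / std if std > 0 else centered

def fuse_features(vec_a, vec_b):
    """Feature-level fusion: normalize each modality's vector, then concatenate."""
    return np.concatenate([zscore_normalize(vec_a), zscore_normalize(vec_b)])

# Hypothetical feature vectors on very different scales (e.g. face vs. iris)
face_features = np.array([120.0, 95.0, 210.0, 180.0])
iris_features = np.array([0.12, 0.45, 0.33])

fused = fuse_features(face_features, iris_features)
# The fused vector has length 4 + 3 = 7, with both parts on a common scale
```

In a real system, a feature selection step (e.g. PCA or sequential search, as listed above) would then reduce the dimensionality of the concatenated vector.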

    2. Fusion Post Matching

      Fusion after matching, also called post-classification fusion, includes match score level and decision level fusion. In post-classification fusion, the information is combined after the classifiers/matchers have produced their outputs.

      1. Score level fusion:

        In score level fusion, the matching scores output by the different biometric matchers are fused to produce a final fused score, and the decision is made using this fused score, as shown in Fig. 6. This fusion is also termed fusion at the measurement, confidence or opinion level. Various techniques, such as Decision Trees [2], Logistic Regression [58], Highest Rank [58], Borda Count [58], Weighted Sum [59-60], Weighted Product [60], Support Vector Machines (SVM) [61], Fuzzy Logic [62] and k-nearest neighbors (k-NN) [63], may be used to combine match scores. One key issue that has to be addressed at the matching score level is the normalization of the scores obtained from the different modalities. Various normalization techniques, such as Bayes-based normalization, min-max, z-score, median-MAD, double-sigmoid, tanh and piecewise linear, are used for this purpose [64]. In general, score level fusion provides better results than decision level fusion, since more discriminative information is present at the score level.

        Fig. 6. Block diagram of fusion at score level
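A minimal sketch of two of the techniques named above, min-max normalization followed by the weighted-sum rule; the raw matcher scores, the fusion weights and the 0.5 threshold are hypothetical values chosen only for illustration.

```python
import numpy as np

def min_max(scores):
    """Min-max normalization: map raw matcher scores into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def weighted_sum(norm_scores, weights):
    """Weighted-sum rule: the fused score is a convex combination of the
    normalized scores from the individual matchers."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(norm_scores, w / w.sum()))

# Hypothetical raw scores from two matchers for three claimants
face_raw = [220.0, 340.0, 510.0]   # face matcher, arbitrary scale
finger_raw = [0.31, 0.62, 0.95]    # fingerprint matcher, already in [0, 1]

face_n, finger_n = min_max(face_raw), min_max(finger_raw)
fused = [weighted_sum([f, p], weights=[0.4, 0.6]) for f, p in zip(face_n, finger_n)]
accepted = [s >= 0.5 for s in fused]  # decision by thresholding the fused score
```

Note that without the normalization step the face matcher's large raw scores would dominate the sum, which is exactly the issue score normalization addresses.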

      2. Decision level fusion:

        In decision level fusion, the matching score of each biometric matcher is converted into a hard decision by comparing it with a threshold tuned for that matcher. The output decisions are then fused to make the final decision, as depicted in Fig. 7. Techniques applied at the decision level include Majority Vote [65], Bayesian classifiers [65], Behavioral Knowledge Space [66], and the AND [67] and OR [67] rules.

        Fig. 7. Block diagram of fusion at decision level
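The majority vote, AND and OR rules mentioned above can be sketched in a few lines; the three hard decisions below are hypothetical.

```python
def majority_vote(decisions):
    """Majority vote: accept when more than half of the matchers accept."""
    return sum(decisions) > len(decisions) / 2

def and_rule(decisions):
    """AND rule: accept only if every matcher accepts (reduces false accepts)."""
    return all(decisions)

def or_rule(decisions):
    """OR rule: accept if any matcher accepts (reduces false rejects)."""
    return any(decisions)

# Hypothetical hard decisions from three independent matchers (True = accept)
votes = [True, False, True]
print(majority_vote(votes), and_rule(votes), or_rule(votes))  # True False True
```

The choice between the AND and OR rules trades false accepts against false rejects, which is consistent with the empirical comparisons of the two rules cited later in this section.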

    3. Related studies at different levels

      A number of studies showing the advantages of multimodal biometrics based on two modalities have appeared in the literature, such as face and iris [68-69], face and speech [70-71], face and fingerprint [72-73], face and ear [74], face and palm print [75-76], face and gait [77], face and hand [78], fingerprint and iris [79], iris and finger vein [80], finger vein and finger geometry [81], gait and cumulative foot [82], signature and speech [83], iris and palm print [84], iris and periocular region [85], and ECG and sound [86]. These systems differ from one another in the algorithms applied and the level of fusion.

      In 2005, Kumar and Zhang [55] demonstrated, using palm print and hand-shape biometrics, that a subset of biometric features can be practically sufficient for effective personal recognition. Several studies have found that feature level fusion outperforms match score fusion [49], [78], [87-88]. Veeramachaneni et al. (2008) [89] reported that a PSO-based decision fusion strategy performed well on correlated classifiers compared to the average sum rule employing z-score normalization.

      Multiple discriminant analysis (MDA) has been applied to the concatenated features of face and gait to obtain discriminating synthetic features [90], and it was shown that the proposed feature level fusion scheme outperformed the match score level as well as traditional feature level fusion schemes. Shen et al. (2011) [91] developed a multimodal system by fusing face and palm print modalities and concluded that the decision level strategy achieved slightly better performance than the feature level approach. Long et al. (2012) [92] presented a multimodal biometric system using face and fingerprint features, incorporating Zernike Moments and a Radial Basis Function (RBF) Neural Network for personal authentication, and stated that fusing information from independent/uncorrelated sources at the feature level enables better authentication than fusion at the score level. Fusion at the feature level has also shown improved accuracy compared with confidence score level and decision level data fusion methods [93]. Lip and Ramli (2012) [48] evaluated the performance of feature, score and decision level fusion for lip and speech traits and concluded that score level fusion gave the best performance, while feature level fusion performed better than decision level fusion.

      Noushath et al. (2013) [94] addressed the fusion of face and palm print modalities at all levels of fusion to ascertain the best level for these two modalities. It was concluded that the performance of sensor level fusion was even worse than that of the unimodal counterparts; the z-score and tanh normalization schemes exhibited similar performance in feature level fusion; the OR rule performed better than the AND rule in decision level fusion; and score level fusion adopting the sum rule obtained the best results. Hence, score level fusion outperformed all other levels. Kihal et al. (2014) [95] presented a multimodal biometric authentication system based on the fusion of iris and palm print at various levels and reported that score level and decision level fusion outperformed feature level fusion.

      Daniel et al. (2014) [96] analyzed the performance of a multimodal biometric system that combined feature level and score level fusion of iris and fingerprint modalities in order to take advantage of both techniques; the experimental results showed that the proposed system performed better than systems using feature level fusion or score level fusion alone, with a significant increase in recognition accuracy. Matching score fusion of face and palm print was found to achieve better performance than feature level and decision level fusion, as reported by Mohamad et al. (2014) [97].

    4. Pros and cons of different fusion levels

    There is a trade-off between the information content and the simplicity of the fusion process as a function of the level of fusion: the amount of information available to the system is compressed as one proceeds from the sensor module to the decision module. In sensor level fusion, the data obtained from the different sensors must be compatible, which may not always be possible (e.g., it may not be possible to fuse face images obtained from cameras with different resolutions). This level of fusion is rarely attempted in multimodal biometrics because raw data from multiple traits cannot be meaningfully combined.

    Since the features contain richer information about the input biometric data than the matching score or the output decision of a classifier/matcher, integration at the feature level should provide better recognition results than integration at other levels. In particular, feature level fusion can exploit the most discriminative information and eliminate the redundant or adverse information in the raw biometric data, and hence it is expected to provide better performance. However, it is difficult to consolidate information at the feature level because the feature sets used by different biometric modalities may be either inaccessible or incompatible. Moreover, commercial biometric systems generally do not provide access to the feature sets used in their products. Because of these difficulties, only limited work has been reported on feature level fusion in multimodal biometric systems.

    Fusion at the decision level is too rigid, since only a limited amount of information is available at this level. Decision level fusion involves a very abstract level of information, so it is less often preferred in designing multimodal biometric systems. Integration at the matching score level is therefore generally preferred, due to the ease of accessing and combining matching scores. Score normalization is needed to transform the raw scores into a common domain prior to combining them. Because it is relatively easy to combine the scores of different biometrics, a large amount of work has been done at this level.

  5. CONCLUSION AND FUTURE SCOPE

In this review, fusion at the different levels in multimodal systems has been discussed, leading to the observation that the performance gain is greatest when the fused traits are uncorrelated. New biometric recognition and authentication algorithms should be developed to improve system accuracy. Anti-spoofing is attracting growing interest in biometrics, considering the variety of fake materials used in attacks. The combination of face and iris based authentication has been widely employed in various biometric applications. Biometric-based authentication systems still have room for improvement, particularly in accuracy, reliability, and the ability to tolerate noisy environments and spoof attacks. Biometrics possesses the crucial advantage of relying on the user himself rather than on external tokens, which can easily be lost or stolen; consequently, biometrics has been implemented in a variety of applications and remains an active area of research and development.

REFERENCES

  1. A. Ross, A.K. Jain, D. Zhang, Handbook of multibiometrics, Berlin, Heidelberg: Springer 2006.

  2. V.H. Gaidhane, Y.V. Hote, V. Singh, An efficient approach for face recognition based on common eigenvalues, Pattern Recognition, vol. 47, no. 5, pp: 1869-1879, 2014.

  3. S. Umer, B.C. Dhara, B. Chanda, Iris recognition using multiscale morphologic features, Pattern Recognition Letters, vol. 65, pp: 67-74, 2015.

  4. S. Crihalmeanu, A. Ross, Multispectral scleral patterns for ocular biometric recognition, Pattern Recognition Letters, vol. 33, no. 14, pp: 1860-1869, 2012.

  5. A. Kumar, T.-S. T. Chan, Robust ear identification using sparse representation of local texture descriptors, Pattern Recognition, vol. 46, pp: 73-85, 2013.

  6. S.L. Wang, A.W.C Liew, Physiological and behavioral lip biometrics: A comprehensive study of their discriminative power, Pattern Recognition, vol. 45, no. 9, pp: 3328-3335, 2012.

  7. P.K. Ajmera, D.V. Jadhav, R.S. Holambe, Text-independent speaker identification using Radon and discrete cosine transforms based features from speech spectrogram, Pattern Recognition, vol. 44, no. 10, pp: 2749-2759, 2011.

  8. P.L. Lin, Y.H. Lai, P. W. Huang, Dental biometrics: Human identification based on teeth and dental works in bitewing radiographs, Pattern Recognition, vol. 45, no. 3, pp: 934-946, 2012.

  9. R.X. Hu, W. Jia, D. Zhang, J. Gui, L.T. Song, Hand shape recognition based on coherent distance shape contexts, Pattern Recognition, vol. 45, no. 9, pp: 3348-3359, 2012.

  10. X. Tan and B. Bhanu, Fingerprint matching by genetic algorithms, Pattern Recognition, vol. 39, pp: 465-477, 2006.

  11. D.S. Huang, W. Jia, D. Zhang, Palmprint verification based on principal lines, Pattern Recognition, vol. 41, pp: 1316-1328, 2008.

  12. L. Zhang, L. Zhang, D. Zhang, H. Zhu, Online finger-knuckle-print verification for personal authentication, Pattern Recognition, vol. 43, pp: 2560-2571, 2010.

  13. L. Wan, G. Leedham, D.S.Y. Cho, Minutiae feature analysis for infrared hand vein pattern biometrics, Pattern recognition, vol. 41, no. 3, pp: 920-929, 2008.

  14. D. Zhang, Z. Liu, J.-Q. Yan, Dynamic tongueprint: A novel biometric identifier, Pattern Recognition, vol. 43, pp: 1071-1082, 2010.

  15. B.C. Armstrong, M.V. Ruiz-Blondet, N. Khalifian, K.J. Kurtz, Z. Jin, S. Laszlo, Brainprint: Assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics, Neurocomputing, vol. 166, pp: 59-67, 2015.

  16. K. Phua, J. Chen, T.H. Dat L. Shue, Heart sound as a biometric, Pattern Recognition, vol. 41, no. 3, pp: 906-919, 2008.

  17. T.H. Lam, R.S. Lee, D. Zhang, Human gait recognition by the fusion of motion and static spatio-temporal templates, Pattern Recognition, vol. 40, no. 9, pp: 2563-2573, 2008.

  18. K. Cpaka, M. Zalasiski, L. Rutkowski, New method for the on-line signature verification based on horizontal partitioning, Pattern Recognition, vol. 47, no. 8, pp: 2652-2661, 2014.

  19. J.R. Lyle, P.E. Miller, S.J. Pundlik, D.L. Woodard, Soft biometric classification using local appearance periocular region features, Pattern Recognition, vol. 45, no. 11, pp: 3877-3885, 2012.

  20. O. Samanta, U. Bhattacharya, S.K. Parui, Smoothing of HMM parameters for efficient recognition of online handwriting, Pattern Recognition, vol. 47, no. 11, pp: 3614-3629, 2014.

  21. A. Kumar, S. Garg, M. Hanmandlu, Biometric authentication using finger nail plates, Expert Systems with Applications, vol. 41, no. 2, pp: 373-386, 2014.

  22. M. Abo-Zahhad, S.M. Ahmed, S.N. Abbas, A new multi-level approach to EEG based human authentication using eye blinking, Pattern Recognition Letters, doi:10.1016/j.patrec.2015.07.034, 2015.

  23. T. Matsumoto, H. Matsumoto, K. Yamada, S. Hoshino, Impact of artificial gummy fingers on fingerprint systems, in Proceedings of SPIE 2002, vol. 4677, pp: 275-289, 2002.

  24. J. Galbally, R. Cappelli, A. Lumini, G. G. de Rivera, D. Maltoni, J. Fierrez, J. Ortega-Garcia, D. Maio, An evaluation of direct attacks using fake fingers generated from ISO templates, Pattern Recognition Letters, vol. 31, pp: 725-732, 2010.

  25. A. Ross, A.K. Jain, Information Fusion in Biometrics, Pattern Recognition Letters, vol. 24, pp: 2115-2125, 2003.

  26. A.K. Jain, A. Ross, S. Prabhakar, An Introduction to Biometric Recognition, IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp: 4-20, 2004.

  27. L. Hong, A.K. Jain, S. Pankanti, Can multibiometrics improve performance? in Proceedings of IEEE Workshop on Automatic Identification Advanced Technologies, NJ, USA, pp: 59-64, 1999.

  28. L.I. Kuncheva, J.C. Bezdek, R.P.W. Duin, Decision templates for multiple classifier fusion: an experimental comparison, Pattern Recognition, vol. 34, pp: 299-314, 2001.

  29. Z. Pan, G. Healey, M. Prasad, B. Tromberg, Face Recognition in Hyperspectral Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp: 1552-1560, 2003.

  30. J. Lee, B. Moghaddam, H. Pfister, R. Machiraju, Finding optimal views for 3D face shape modeling in Proceedings of Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp: 31-36, 2004.

  31. S.G. Kong, J. Heo, B.R. Abidi, J. Paik, M.A. Abidi, Recent advances in visual and infrared face recognition-a review, Computer Vision and Image Understanding, vol. 97, pp: 103-135, 2005.

  32. R.K. Rowe, K.A. Nixon, Fingerprint enhancement using a multispectral sensor, Biometric Technology for Human Identification II, vol. 5779, pp: 81-93, 2005.

  33. D.R. Kisku, J.K. Sing, M. Tistarelli, P. Gupta, Multisensor Biometric Evidence Fusion for Person Authentication Using Wavelet Decomposition and Monotonic-Decreasing Graph in Proceedings of Seventh International Conference on Advances in Pattern Recognition, pp: 205-208, 2009.

  34. A. Ross, A.K. Jain, Fusion Techniques in Multibiometric Systems, Face Biometrics for Personal Identification, Springer, pp: 185-212, 2007.

  35. X. Lu, Y. Wang, A.K. Jain, Combining classifiers for face recognition in Proceedings of International Conference on Multimedia and Expo 3, pp: 13-16, 2003.

  36. K. Chang, K. Bowyer, P. Flynn, Face Recognition Using 2D And 3D Faces, Workshop on Multi Modal User Authentication (MMUA), pp: 25-32, 2003.

  37. M. Imran, A. Rao, G. Hemantha Kumar, Multibiometric systems: A comparative study of multi-algorithmic and multimodal approaches, Procedia Computer Science, vol. 2, pp: 207-212, 2010.

  38. R. Connaughton, K.W. Bowyer, P.J. Flynn, Fusion of Face and Iris Biometrics in Handbook of Iris Recognition, Springer, pp: 219-237, 2012.

  39. N. Poh, S. Bengio, J. Korczak, A multi-sample multi-source model for biometric authentication in Proceedings of the 2002 12th IEEE workshop on neural networks for signal processing, pp: 375-384, 2002.

  40. K.I. Chang, K.W. Bowyer, P.J. Flynn, An evaluation of multimodal 2D+3D face biometrics, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp: 619-624, 2005.

  41. S.A. Samad, D.A. Ramli, A. Hussain, A Multi-Sample Single Source Model using Spectrographic Features for Biometric Authentication in Proceedings of IEEE International Conference on Information, Communications and Signal Processing, pp: 1-5, 2007.

  42. S. Prabhakar, A.K. Jain, Decision-level fusion in fingerprint verification, Pattern Recognition, vol. 35, pp: 861-874, 2002.

  43. J. Jang, K.R. Park, J. Son, Y. Lee, Multi-unit Iris Recognition System by Image Check Algorithm in Lecture Notes in Computer Science, Springer, pp: 450-457, 2004.

  44. A. Yuille et al., Combining left and right irises for personal authentication, Energy minimization methods in computer vision and pattern recognition, 4679, pp: 145-152, 2007.

  45. M. Indovina, U. Uludag, R. Snelick, A. Mink, A.K. Jain, Multimodal biometric authentication methods: A COTS approach, in Proceedings of the Workshop on Multi-Modal User Authentication (MMUA), pp: 99-106, 2003.

  46. R. Brunelli, D. Falavigna, L. Stringa, T. Poggio, Automatic Person Recognition by Using Acoustic and Geometric, Machine Vision & Applications, vol. 8, pp: 317-325, 1995.

  47. Z. Zhang, R. Wang, K. Pan, S.Z. Li, P. Zhang, Fusion of Near Infrared Face and Iris Biometrics, Lecture Notes in Computer Science, Springer, pp: 172-180, 2007.

  48. C.C. Lip, D.A. Ramli, Comparative Study on Feature, Score and Decision Level Fusion Schemes for Robust Multibiometric Systems, Advances in Intelligent and Soft Computing, Springer, pp: 941-948, 2012.

  49. A. Rattani, D.R. Kisku, M. Bicego, M. Tistarelli, Feature Level Fusion of Face and Fingerprint Biometrics in Proceedings of First IEEE International Conference on Biometrics: Theory, Applications, and Systems, pp: 1-6, 2007.

  50. X. Liu, T. Chen, Geometry-assisted statistical modeling for face mosaicing in Proceedings of International Conference on Image Processing, vol. 2,pp: 883-886, 2003.

  51. A.A. Altun, H.E. Kocer, N. Allahverdi, Genetic algorithm based feature selection level fusion using fingerprint and iris biometrics, International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, pp: 585-600, 2008.

  52. R. Raghavendra, B. Dorizzi, A. Rao, G.H. Kumar, Particle swarm optimization based fusion of near infrared and visible images for improved face verification, Pattern Recognition, vol. 44, no. 2, pp: 401-411, 2011.

  53. R. Raghavendra, C. Busch, Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition, Pattern recognition, vol. 47, no. 6, pp: 2205-2221, 2014.

  54. S. Chen, Y. Zhu, D. Zhang, J.-Y. Yang, Feature extraction approaches based on matrix pattern: MatCA and MatFLDA, Pattern Recognition Letters, vol. 26, no. 8, pp: 1157-1167, 2005.

  55. A. Kumar, D. Zhang, Biometric Recognition Using Feature Selection and Combination, Audio- and Video-Based Biometric Person Authentication, Springer, pp: 813-822, 2005.

  56. D.R. Kisku, P. Gupta, J.K. Sing, Feature level fusion of face and palmprint biometrics by isomorphic graph-based improved K-medoids partitioning in Proceedings of the AST/UCMA/ISA/ACN, pp: 70-81, 2010.

  57. Waheeda, A Multimodal Biometric Fusion Approach based on Binary Particle Optimization, Research and Development in Intelligent Systems XXVIII, Springer-Verlag London, 2011.

  58. M. Monwar, M. Gavrilova, Multimodal Biometric System using Rank Level Fusion Approach, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, vol. 39, pp: 867-878, 2009.

  59. D. Zhang, Z. Guo, G. Lu, L. Zhang, Y. Liu, W. Zuo, Online joint palmprint and palmvein verification, Expert Systems with Applications, vol. 38, pp: 2621-2631, 2011.

  60. L. Zhang, L. Zhang, D. Zhang, Z. Guo, Phase congruency induced local features for finger-knuckle-print recognition, Pattern Recognition, vol. 45, no. 7, pp: 2522-2530, 2012.

  61. F. Alsaade, A. Ariyaeeinia, A. Malegaonkar, S. Pillay, Qualitative fusion of normalised scores in multimodal biometrics, Pattern Recognition Letters, vol. 30, no. 5, pp: 564-569, 2009.

  62. O.A. Arigbabu, S.M.S. Ahmad, W.A.W. Adnan, S. Yussof, Integration of multiple soft biometrics for human identification, Pattern Recognition Letters, doi:10.1016/j.patrec.2015.07.014, 2015.

  63. Y. Ding, and A. Ross, A comparison of imputation methods for handling missing scores in biometric fusion, Pattern Recognition, vol. 45, no. 3, pp: 919-933, 2012.

  64. S. Ribaric, I. Fratric, Experimental evaluation of matching-score normalization techniques on different multimodal biometric systems in IEEE Mediterranean Electrotechnical Conference, pp: 498-501, May 2006.

  65. L. Lam, C.Y. Suen, Optimal combination of pattern classifiers, Pattern Recognition Letters, vol. 16, pp: 945-954, 1995.

  66. Q. Tao, R. Veldhuis, Threshold-optimized decision-level fusion and its application to biometrics, Pattern Recognition, vol. 42, no. 5, pp: 823-836, 2009.

  67. Z. Zhang, R. Wang, K. Pan, S. Li, and P. Zhang, Fusion of near infrared face and iris biometrics, in Advances in Biometrics, Lecture Notes in Computer Science, vol. 4642, pp: 172-180, 2007.

  68. J. Y. Gan, J. H. Gao, and J. F. Liu, Research on face and iris feature recognition based on 2DDCT and Kernel Fisher Discriminant Analysis in International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR '08), vol. 1, pp: 401-405, 2008.

  69. S. Ben-Yacoub, Y. Abdeljaoued, and E. Mayoraz, Fusion of Face and Speech Data for Person Identity Verification, IEEE Transactions on Neural Networks, vol. 10, no. 5, 1999.

  70. B. Kar, B. Kartik, and P. K. Dutta, Speech and Face Biometric for Person Authentication, in IEEE International Conference on Industrial Technology (ICIT 2006), pp.391-396, 2006.

  71. Y. Sutcu, L. Qiming, and N. Memon, Secure Biometric Templates from Fingerprint-Face Features in IEEE Conference on Computer Vision and Pattern Recognition, (CVPR '07), pp: 1-6, 2007.

  72. D. Bouchaffra, and A. Amira, Structural hidden Markov models for biometrics: Fusion of face and fingerprint, Pattern Recognition, vol. 41, no. 3, pp: 852-867, 2008.

  73. D. Kisku, P. Gupta, H. Mehrotra and J. Sing, Multimodal Belief Fusion for Face and Ear Biometrics, Intelligent Information Management, vol. 1, no. 3, pp: 166-171, 2009.

  74. Y. F. Yao, X. Y. Jing, and H. S. Wong, Face and palmprint feature level fusion for single sample biometrics recognition, Neurocomputing, vol. 70, pp: 1582-1586, 2007.

  75. X. Y. Jing, Y. F. Yao, D. Zhang, J. Y. Yang, and M. Li, Face and palmprint pixel level fusion and Kernel DCV-RBF classifier for small sample biometric recognition, Pattern Recognition, vol. 40, pp: 3209-3224, 2007.

  76. M.S. Almohammad, G.I. Salama, T.A. Mahmoud, Human identification system based on feature level fusion using face and gait biometrics in International Conference on Engineering and Technology (ICET), vol. 1, no. 5, pp: 10-11, 2012.

  77. A. Ross, and R. Govindarajan, Feature Level Fusion Using Hand and Face Biometrics, in Proc. of SPIE Conference on Biometric Technology for Human Identification II, Orlando, USA, pp: 196-204, 2005.

  78. N. Radha, and A. Kavitha, Rank level fusion using fingerprint and iris biometrics, Indian Journal of Computer Science and Engineering, vol. 2, no. 6, pp: 917-923, 2012.

  79. M. J. Sudhamani, M. K. Venkatesha, and K. R. Radhika, Fusion at decision level in multimodal biometric authentication system using Iris and Finger Vein with novel feature extraction, in Annual IEEE India Conference (INDICON), pp: 1-6, 2014.

  80. J. F. Yang, and X. Zhang, Feature-level fusion of fingerprint and fingervein for personal identification, in Pattern Recognition Letters, pp. 623-628, 2012.

  81. S. Zheng, K. Huang, T. Tan, and D. Tao, A cascade fusion scheme for gait and cumulative foot pressure image recognition, Pattern Recognition, vol. 45, no. 10, pp: 3603-3610, 2012.

  82. A. Nigam, P. Gupta, Designing an accurate hand biometric based authentication system fusing finger knuckleprint and palmprint, Neurocomputing, vol. 151, pp: 1120-1132, 2015.

  83. S. Emerich, E. Lupu, C. Rusu, A new set of features for a bimodal system based on on-line signature and speech, Digital Signal Processing, vol. 23, pp: 928-940, 2013.

  84. G. Santos, E. Grancho, M.V. Bernardo, P.T. Fiadeiro, Fusing iris and periocular information for cross-sensor recognition, Pattern Recognition Letters, vol. 57, pp: 52-59, 2015.

  85. M.D. Bugdol, A.W. Mitas, Multimodal biometric system combining ECG and sound signals, Pattern Recognition Letters, vol. 38, pp: 107-112, 2014.

  86. A. Rattani, D. R. Kisku, M. Bicego, M. Tistarelli, Robust Feature-Level Multibiometrics Classification in Proceedings of IEEE Biometric Consortium Conference, Biometrics Symposium, pp: 1-6, 2006.

  87. Y. Yang, K. Lin, F. Han, Z. Zhang, Dynamic Weighting for Effective Fusion of Fingerprint and Finger Vein, Progress in Intelligent Computing and Applications (PICA), vol. 1, pp: 50-61, 2012.

  88. K. Veeramachaneni, L. Osadciw, A. Ross, and N. Srinivas, Decision-level Fusion Strategies for Correlated Biometric Classifiers in Proceedings of IEEE Computer Society Workshop on Biometrics at the Computer Vision and Pattern Recognition (CVPR) conference, Anchorage, AK, USA, 2008.

  89. X. Zhou, B. Bhanu, Feature fusion of side face and gait for video-based human identification, Pattern Recognition, vol. 41, pp: 778-795, 2008.

  90. L. Shen, L. Bai, and Z. Ji, FPCODE: An Efficient Approach for MultiModal Biometrics, International Journal of Pattern Recognition and Artificial Intelligence, vol. 25, pp: 273-286, 2011.

  91. T. B. Long, L. H. Thai, T. Hanh, Multimodal biometric person authentication using fingerprint, face features, PRICAI 2012: Trends in Artificial Intelligence, Lecture Notes in Computer Science, vol. 7458, pp: 613-624, 2012.

  92. M. Abernethy, User Authentication Incorporating Feature Level Data Fusion of Multiple Biometric Characteristics, Ph.D. thesis, Murdoch University, 2011.

  93. S. Noushath, M. Imran, K. Jetly, A. Rao, G. H. Kumar, Multimodal biometric fusion of face and palmprint at various levels in Proceedings of International Conference on Advances in Computing, Communications and Informatics, pp: 1793-1798, 2013.

  94. N. Kihal, S. Chitroub, J. Meunier, Fusion of iris and palmprint for multimodal biometric authentication in Proceedings of 4th International Conference on Image Processing Theory, Tool and Applications (IPTA), pp: 1-6, 2014.

  95. D. M. Daniel., C. Mihaela, T. Romulus, Combining feature extraction level and score level fusion in a multimodal biometric system in Proceedings of 11th International Symposium on Electronics and Telecommunications (ISETC), pp: 1-4, 2014.

  96. N. Mohamad, M.I. Ahmad, R. Ngadiran, M.Z. Ilyas, M.N.M. Isa, P. Saad, Investigation of information fusion in face and palmprint multimodal biometrics in Proceedings of 2nd International Conference on Electronic Design (ICED), pp: 347-350, 2014.
