- Authors : Anita G. Khandizod, R. R. Deshmukh
- Paper ID : IJERTV3IS21014
- Volume & Issue : Volume 03, Issue 02 (February 2014)
- Published (First Online): 04-03-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Multispectral Palmprint Image Fusion - A Review
Anita G. Khandizod
Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University.
Aurangabad, India
R. R. Deshmukh
Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University.
Aurangabad, India
Abstract- In our daily lives there is a frequent need to identify people correctly and verify their identities. Biometrics is widely regarded as the most reliable and strongest authentication technology for this purpose. With the increasing demand for biometric security solutions, palmprint recognition has emerged as a relatively new but promising biometric technology. Although the study of palmprint recognition has a shorter history than fingerprint and face recognition, it has attracted growing attention. In recent years, hand biometrics in general, including fingerprint, palmprint, hand geometry and hand-vein patterns, has been widely investigated. A variety of techniques have been proposed, but nearly all of them use white light as the illumination source, and no work has systematically evaluated whether white light is the optimal choice for palmprint recognition. Multispectral palmprint imaging addresses this issue by acquiring images under four different illuminations: Red, Green, Blue and NIR. In this paper, a comparative study of several feature-level multispectral palm image fusion approaches is conducted. Among them, wavelet-transform-based image fusion is found to perform best in preserving discriminative patterns from multispectral palm images.
Keywords- Biometric, discrete wavelet transform, image fusion, 2nd-order derivatives.
INTRODUCTION
The term "biometrics" is derived from the Greek words bio (life) and metric (to measure), the science of establishing the identity of an individual based on physical, chemical, or behavioral attributes of the person [1].
Biometric characteristics are of two types, physiological and behavioral. These characteristics are unique to each individual and can therefore be used to verify or identify a person. Passwords may be forgotten or easily compromised, and smartcards, keys or tokens may be shared, lost or stolen, which results in wrong authentications [2]. Biometrics is regarded as the most reliable and strongest authentication technology, capable of providing a high degree of certainty that a user really is who he or she claims to be, and is becoming increasingly common for human recognition purposes.
The working of a biometric system consists of the following steps:
- Capture the chosen biometric.
- Process the biometric, extract the features and enroll the biometric template.
- Store the template in a local repository, a central repository, or a portable token such as a smart card.
- Live-scan the chosen biometric.
- Process the biometric and extract the biometric template.
- Match the scanned biometric template against the stored templates.
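As a rough illustration of this enroll-and-match flow, the following Python sketch stores feature vectors in an in-memory dictionary and verifies a live scan by thresholding the Euclidean distance to the stored template. The function names, the dictionary store and the threshold value are illustrative assumptions, not part of any system described in this paper.

```python
import numpy as np

templates = {}  # user_id -> enrolled feature vector (toy in-memory "repository")

def enroll(user_id, feature_vector):
    """Store the extracted biometric template for a user."""
    templates[user_id] = np.asarray(feature_vector, dtype=float)

def verify(user_id, live_features, threshold=0.5):
    """Match a live-scanned template against the stored one."""
    if user_id not in templates:
        return False
    dist = np.linalg.norm(templates[user_id] - np.asarray(live_features, dtype=float))
    return dist <= threshold  # smaller distance means a closer match
```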
Palmprint verification is one of the emerging biometric technologies; it recognizes a person based on the principal lines, wrinkles and ridges on the surface of the palm. Many researchers have shown that the performance of palmprint-based biometric systems is comparable to that of face, fingerprint and hand-geometry systems.
Compared with other biometric characteristics, palmprint has the following advantages [3]:
1] More acceptable when captured.
2] Low-resolution imaging can be employed.
3] Workers or elderly people who may not provide clear fingerprints can often offer clear palmprints.
4] Palmprint image could provide even more information than fingerprint.
5] High accuracy and user friendliness.
Fig. 1. Palmprint features (principal lines and wrinkles) [4]
Palmprints have been widely studied for biometric recognition for many years. Various palmprint representations have been proposed for recognition, such as line features, feature points, Fourier spectrum, eigenpalm features, Sobel and morphological features, texture energy, wavelet signatures, Gabor phase, fusion code and competitive code; among them, using active illumination to enhance the palmprint features has been a popular approach. However, all of these methods use white light as the illumination source, and no work has systematically evaluated whether white light is the optimal choice for palmprint recognition.
To address this issue, a multispectral palmprint acquisition device together with additive color theory was used to collect seven kinds of palmprint images under different simulated colors: blue, green, red, cyan, yellow, magenta and white. For the palmprint recognition methods examined, white may not be the optimal color, while yellow or magenta may be more appropriate. Multispectral palmprint systems can capture more discriminative information under different illuminations in a short time, and thus can achieve better recognition accuracy.
PROPOSED METHOD
The proposed method is composed of the following steps, as illustrated in Figure 2. The PolyU multispectral palmprint database is used. Preprocessing is performed independently on each image to obtain an enhanced image by filtering; nine different filters were evaluated, of which the Gaussian filter gave the best result. A discrete wavelet transform is then used to extract the palmprint features, and feature-level fusion is performed to obtain the fused image and reduce dimensionality. The fused image is convolved with 2nd-order derivatives, and the resulting representation, integrating the multiple image sources, is fed into the verification engine for performance evaluation.
Fig. 2. Flowchart of the proposed system: multispectral palmprint → pre-processing (filtering) → feature extraction from the Red, Green, Blue and NIR bands → image fusion → 2nd-order derivatives → classification → decision.
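As a minimal sketch of the pre-processing and 2nd-order-derivative stages of this pipeline (the wavelet fusion stage itself is sketched after Fig. 4), the snippet below uses SciPy's Gaussian filter for enhancement and the Laplacian as the 2nd-order derivative operator; the sigma value and the choice of the Laplacian specifically are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def preprocess_band(band, sigma=1.0):
    """Gaussian smoothing of one spectral band (the filter reported to work best)."""
    return gaussian_filter(band.astype(np.float64), sigma=sigma)

def second_order_map(fused_image):
    """Convolve the fused image with a 2nd-order derivative (Laplacian) kernel."""
    return laplace(fused_image)
```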
REVIEW OF SOME STATE-OF-THE-ART SYSTEMS
Hao et al. [5] introduced the idea of multispectral palm image fusion for biometrics in "Comparative Studies on Multispectral Palm Image Fusion for Biometrics". This concept extends the visual capability of the camera and improves the user-friendliness, security and, it is hoped, the recognition performance of the original palmprint-based biometric system. Several image-fusion-based approaches are evaluated in the context of discriminative features. Experimental results suggest that the Curvelet transform outperforms several other carefully selected methods in terms of well-established criteria [5].
A multispectral palmprint recognition system using wavelet-based image fusion was proposed in 2008 [6]. It uses a multispectral capture device to sense the palm under different illumination conditions, including red, blue, green and infrared.
Feature-band-selection-based multispectral palmprint recognition was proposed in 2010 [7], where statistical features are extracted to compare each single band. Score-level fusion is performed to determine the best combination from all candidates; the most discriminative information of palmprint images can be obtained from two particular bands.
David Zhang et al. [8] developed an online multispectral palmprint system in 2010. To examine the recognition performance of the various spectral bands, a large multispectral palmprint database was created. The Red channel achieves the best result, whereas the Blue and Green channels have comparable performance but are slightly inferior to the near-infrared (NIR) channel. Multispectral fusion, with its higher recognition accuracy and anti-spoofing capability, is superior to any single spectrum. Since different bands highlight different texture information, fusing them can significantly reduce the EER; the fusion of the Red and Blue bands was found to achieve the best result.
Rank-level fusion of multispectral palmprints was presented in 2012 [9]; this approach performs personal authentication using rank-level fusion of multispectral palmprints instead of using multiple biometric modalities and multiple matchers. Rank-level fusion involving a non-linear combination of hyperbolic tangent functions gives the best Rank-1 recognition rate for two types of features, viz. sigmoid and fuzzy features: 99.4% for sigmoid features and 99.2% for fuzzy features.
Multispectral palmprint recognition using a quaternion matrix (2012) [10] proposed a new method for multispectral images based on a quaternion model that can fully utilize the multispectral information. Experimental results showed that using the quaternion matrix achieves a higher recognition rate; given 3000 test samples from 500 palms, the recognition rate reached 98.83%.
Human identity verification using multispectral palmprint fusion (2012) [11] presents an intra-modal fusion framework that integrates multiple raw palm images at a low level. To capture the palm characteristics, the fused image is convolved with a Gabor wavelet transform. The Gabor-wavelet-based feature representation lies in a very high-dimensional space; to reduce this dimensionality, an ant colony optimization algorithm is applied to retain only a relevant, distinctive and reduced feature set from the Gabor responses. Finally, the reduced feature set is trained with support vector machines to accomplish the user recognition task. For evaluation, the CASIA multispectral palmprint database is used. The experimental results show that the system is robust and encouraging even when different classifiers are used.
MULTISPECTRAL IMAGE
Electromagnetic theory indicates that Hertzian waves have stronger penetrability into objects. A multispectral illuminator can therefore penetrate tissue to different depths and form images of both the surface skin texture and the hypodermis. Such a multispectral sensor provides better-quality images than unimodal sensors, though at the cost of greater acquisition time [12].
Multispectral imaging can help detect breaching of a biometric system. Multispectral imaging captures image data at specific wavelengths across the electromagnetic spectrum.
Fig. 3. Example of a multispectral image
Usually, satellites have three or more radiometers (Landsat has seven). Each one acquires a digital image (in remote sensing, called a 'scene') in a small band of the spectrum: the visible region, ranging from 0.4 µm to 0.7 µm and called the red-green-blue (RGB) region, and the infrared wavelengths from 0.7 µm to 10 µm or more, classified as near infrared (NIR), middle infrared (MIR) and far infrared (FIR, or thermal).
MULTISPECTRAL PALMPRINT DATABASE
Multispectral palmprint images were collected from 250 volunteers, including 195 males and 55 females, with ages ranging from 20 to 60 years. Samples were collected in two separate sessions; in each session, the subject was asked to provide 6 images for each palm. Therefore, 24 images per illumination were collected from the 2 palms of each subject, and in total the database contains 6,000 images from 500 different palms for each illumination. The average time interval between the first and second sessions was about 9 days.
IMAGE FUSION
Fusion is a good way to increase system accuracy and robustness [17]. Generally speaking, there are two kinds of fusion. The first is the fusion of multiple features extracted from one palmprint image; since the different features from the same image are correlated, the improvement is limited. The other kind is multimodal fusion, i.e., fusion of palmprint with other biometric traits. In the past decade many multimodal systems have been proposed, including finger surface + palmprint, hand geometry + palmprint, face + palmprint, fingerprint + palmprint and iris + palmprint.
Image fusion is the process of combining the relevant information from a set of images into a single image, such that the resulting fused image is more informative and complete than any of the input images [18]. The result of image fusion is a new image that is more suitable for human and machine perception or for further image-processing tasks such as segmentation, feature extraction and object recognition. For example, the fused image may be a multispectral image with high spatial resolution obtained by integrating a high-resolution panchromatic (monochrome) image with a low-resolution multispectral color image consisting of the RGB (Red, Green, Blue) bands. This is achieved by applying a sequence of operators to the images that makes the useful information in each image prominent; the resulting image is formed by combining this magnified information from the input images into a single image [19]. One goal of fusion software is to align anatomical and functional images and allow improved spatial localization of abnormalities. All fusion algorithms share the common objectives given below.
- Preserve all relevant information in the fused image.
- Suppress irrelevant parts of the image and noise.
- Minimize any artifacts or inconsistencies in the fused image.
- Sharpen the images.
- Improve geometric corrections.
- Substitute missing information in one image with signals from another image [20].
Image fusion takes place at three different levels: pixel level, feature level and decision level. Image fusion methods can also be broadly classified into two groups, spatial-domain fusion and transform-domain fusion. Averaging, the Brovey method and Principal Component Analysis (PCA) based methods are spatial-domain methods, but spatial-domain methods tend to produce spatial distortion in the fused image. This problem can be addressed by the transform-domain approach: multi-resolution analysis has become a very useful tool for analyzing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. The images used in image fusion should already be registered; mis-registration is a major source of error in image fusion. Pixel-level fusion is used to increase the spatial resolution of the multispectral image. At the lowest level, pixel-level fusion uses the registered pixel data from all image sets to perform detection and classification; this level has the potential to achieve the greatest fusion performance, but at the highest computational expense.
At the intermediate level, feature-level fusion combines features that are detected and segmented in the various data sources. Features that correspond to the characteristics of
landscape objects are extracted from initial data sources dependent on their characteristics such as extent, shape and neighbourhood. Features identified from multiple sources create a common feature space for further classification.
Fusion at the decision level combines decisions of independent sensor detection/ classification paths by applying decision rules. The main drawback of this process is that decision uncertainty in each sensor chain is maintained and combined with a composite measure of uncertainty [21].
The above mentioned three-levels of processing are the basic building blocks of multi-source data fusion. During a complex process, these levels might be combined. In all cases, the aim is the extraction of useful information included in the source data while avoiding the introduction of artefacts harmful to human observations or matching analyses.
The evolution of research in the field of image fusion can be broadly divided into the following three stages.
A. Simple Image Fusion
Simple image fusion techniques perform very basic operations such as pixel selection, addition, subtraction or averaging. These methods are not always effective, but in some cases they give good results.
Average Method
In this method, the fused image is obtained by taking the average intensity of corresponding pixels from the input images.
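A minimal NumPy sketch of average fusion, assuming the two input images are already registered and of the same size:

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Pixel-wise average of two registered images."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return (a + b) / 2.0
```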
Select Maximum/Minimum Method
In the maximum method, the fused image is obtained by selecting the maximum intensity of corresponding pixels from the input images; in the minimum method, the minimum intensity is selected instead.
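The same idea in NumPy, again assuming registered inputs of equal size:

```python
import numpy as np

def fuse_select(img_a, img_b, mode="max"):
    """Pixel-wise selection fusion: keep the max (or min) intensity at each pixel."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return np.maximum(a, b) if mode == "max" else np.minimum(a, b)
```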
Principal Component Analysis
Principal component analysis (PCA) is a vector space transform often used to reduce multidimensional data sets to lower dimensions for analysis. It reveals the internal structure of data in an unbiased way [22].
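In PCA-based fusion the weight given to each source image is commonly derived from the principal eigenvector of the covariance of the two images; the sketch below assumes this standard weighting scheme, which is not spelled out in the text above.

```python
import numpy as np

def fuse_pca(img_a, img_b):
    """PCA-weighted fusion of two registered images of equal size."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    cov = np.cov(np.vstack([a, b]))    # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues ascending, eigenvectors in columns
    pc = np.abs(vecs[:, -1])           # principal component direction
    w = pc / pc.sum()                  # normalize weights to sum to 1
    return w[0] * img_a.astype(np.float64) + w[1] * img_b.astype(np.float64)
```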
B. Pyramid Decomposition based Fusion
An image pyramid consists of a set of low-pass or band-pass copies of an image, each copy representing pattern information at a different scale. At every level of fusion using a pyramid transform, the pyramid is half the size of the pyramid at the preceding level, and higher levels concentrate on lower spatial frequencies. The pyramidal algorithms considered here differ in the way the decomposition is performed, and the recomposition phase differs accordingly.
Laplacian Pyramid
The Laplacian pyramid method is identical to the FSD (filter-subtract-decimate) pyramid except for an additional low-pass filtering performed with 2*W; all other steps are as in the FSD pyramid.
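A compact sketch of Laplacian-pyramid fusion (a standard variant, not necessarily the exact FSD formulation above), assuming OpenCV is available and the two registered inputs are grayscale arrays:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass copies plus the final low-pass residue."""
    img = img.astype(np.float32)
    pyr = []
    for _ in range(levels):
        down = cv2.pyrDown(img)
        up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))
        pyr.append(img - up)   # band-pass (Laplacian) level
        img = down
    pyr.append(img)            # low-pass residue
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Fuse two images by keeping the stronger detail at each pyramid level."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)  # average the low-pass residues
    # Reconstruct by repeatedly upsampling and adding the band-pass levels back
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return out
```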
Ratio Pyramid
The ratio pyramid method is also identical to the FSD pyramid except that, in the decomposition phase, after low-pass filtering the input image matrices the pixel-wise ratio is computed instead of the subtraction used in the FSD pyramid [34].
Morphological Pyramid
Two levels of filtering are performed on the input image matrices: image opening and image closing. Image opening is image erosion followed by image dilation; image closing is the reverse. The combination of image opening and closing removes noise from the image [23].
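As a small illustration of the opening and closing operations mentioned above (the pyramid construction itself is omitted), using SciPy's grayscale morphology with an assumed 3×3 neighbourhood:

```python
from scipy.ndimage import grey_opening, grey_closing

def morphological_smooth(img, size=(3, 3)):
    """Grayscale opening (erosion then dilation) followed by closing (dilation then erosion)."""
    opened = grey_opening(img, size=size)
    return grey_closing(opened, size=size)
```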
C. Discrete Wavelet transform based fusion
The following steps outline wavelet-based image fusion of multispectral palmprint images; a minimal code sketch follows Fig. 4.
- A two-dimensional discrete wavelet transform is applied to the ROI of each multispectral palmprint band.
- The Discrete Wavelet Transform (DWT) decomposes a single multispectral palm image into four kinds of coefficients: one approximation coefficient matrix A and three detail coefficient matrices DH, DV and DD (horizontal, vertical and diagonal directions), preserving the image information.
- The coefficients extracted from the different images are then combined to obtain new coefficients, so that the information from the different images is appropriately collected.
- Finally, the fused image is obtained by applying the inverse DWT to the combined coefficients.
Fig. 4. Block diagram of wavelet-based image fusion
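A minimal sketch of the steps above using PyWavelets, assuming two registered, equally sized band images; the fusion rules used here (averaging the approximation coefficients and keeping the stronger detail coefficients) are common choices and are assumptions, not necessarily the exact rules used in the papers reviewed.

```python
import numpy as np
import pywt

def fuse_dwt(band_a, band_b, wavelet="db2"):
    """Single-level 2-D DWT fusion of two registered palmprint band images."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    cA1, (cH1, cV1, cD1) = pywt.dwt2(a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(b, wavelet)
    # Average the approximation (low-frequency) coefficients
    cA = (cA1 + cA2) / 2.0
    # Keep the detail coefficient with the larger magnitude at each position
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    cH, cV, cD = pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)
    # Inverse DWT gives the fused image
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
```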
CONCLUSION
A novel palmprint feature extraction approach is used; its novelty lies in extracting two intramodal discriminative features, line features (principal lines and dominant wrinkles) and energy features, from the same wavelet decomposition of the palmprint ROI. The recent trend towards intelligent human-computer interfaces motivates tackling the problem of contact-free hand-based biometrics, and integrating information from deep inside the skin with the hand's appearance is considered. The advantage of feature-level fusion is that the appearance and the inner information of the hand are combined into a single representation, strengthening the security of the whole system. A multispectral palmprint capture device was designed to offer Red, Green, Blue and Infrared illumination channels, and the verification results under the different illuminations indicate which spectrum is best suited for palmprint recognition. By applying image fusion to multispectral palm images, more discriminative features are obtained, which ultimately improves recognition accuracy.
ACKNOWLEDGMENTS
The authors would like to thank The Hong Kong Polytechnic University (PolyU) for sharing their database (PolyU Multispectral Palmprint Database).
This project is supported by the UGC Rajiv Gandhi National Fellowship F1-17.1/2011-12/RGNF-SC-MAH-9445, sanctioned at the Department of CS & IT, Dr. Babasaheb Ambedkar Marathwada University.
REFERENCES
[1] A. K. Jain, A. Ross, S. Prabhakar, "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 4-19, January 2004.
[2] S. Prabhakar, S. Pankanti, A. K. Jain, "Biometric Recognition: Security and Privacy Concerns", IEEE Security & Privacy, March/April 2003, pp. 33-42.
[3] Z. Říha, V. Matyáš, "Biometric Authentication Systems", FIMU (Faculty of Informatics, Masaryk University) Report Series, 2000.
[4] A. K. Jain, A. Ross, S. Pankanti, "Biometrics: A Tool for Information Security", IEEE Transactions on Information Forensics and Security, Vol. 1, No. 2, pp. 125-143, 2006.
[5] Y. Hao, Z. Sun, T. Tan, "Comparative Studies on Multispectral Palm Image Fusion for Biometrics", Proceedings of the Asian Conference on Computer Vision, Vol. 2, 2007, pp. 12-21.
[6] D. Han, Z. Guo, D. Zhang, "Multispectral Palmprint Recognition Using Wavelet-Based Image Fusion", Proceedings of the International Conference on Signal Processing, Hong Kong, 26-29 October 2008, pp. 2074-2077.
[7] Z. Guo, L. Zhang, D. Zhang, "Feature Band Selection for Multispectral Palmprint Recognition", Proceedings of the 20th International Conference on Pattern Recognition, 2010, pp. 1136-1139.
[8] D. Zhang, Z. Guo, G. Lu, L. Zhang, W. Zuo, "An Online System of Multispectral Palmprint Verification", IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 2, pp. 480-490, 2010.
[9] N. Mittal, M. Hanmandlu, J. Grover, R. Vijay, "Rank-level Fusion of Multispectral Palmprints", International Journal of Computer Applications (0975-8887), Vol. 38, No. 2, January 2012.
[10] X. Xu, Z. Guo, C. Song, Y. Li, "Multispectral Palmprint Recognition Using a Quaternion Matrix", Sensors, Vol. 12, pp. 4633-4647, 2012, doi:10.3390/s120404633, ISSN 1424-8220.
[11] D. R. Kisku, A. Rattani, P. Gupta, et al., "Human Identity Verification Using Multispectral Palmprint Fusion", Journal of Signal and Information Processing, 2012, pp. 263-273.
[12] Mrs. Maheswari, S. Ancy, Dr. G. R. Suresh, "Survey on Multispectral Biometric Images", International Journal of Innovative Research in Computer and Communication Engineering, Vol. 1, Issue 4, ISSN (Online): 2320-9801, June 2013.
[13] A. K. Jain, J. Feng, "Latent Palmprint Matching", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, pp. 1032-1047, 2009.
[14] D. J. Gawkrodger, Dermatology: An Illustrated Colour Text, 3rd ed., London, U.K.: Elsevier Health Sciences, 2002.
[15] V. P. Zharov, S. Ferguson, J. F. Eidt, P. C. Howard, L. M. Fink, M. Waner, "Infrared Imaging of Subcutaneous Veins", Lasers in Surgery and Medicine, Vol. 34, No. 1, pp. 56-61, January 2004.
[16] Z. Sun, T. Tan, Y. Wang, S. Z. Li, "Ordinal Palmprint Representation for Personal Identification", Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 20-25 June 2005, pp. 279-284.
[17] A. N. Akansu, W. A. Serdijn, I. W. Selesnick, "Wavelet Transforms in Signal Processing: A Review of Emerging Applications", Physical Communication (Elsevier), Vol. 3, Issue 1, pp. 1-18, March 2010.
[18] A. Uhl, P. Wild, "Personal Recognition Using Single-Sensor Multimodal Hand Biometrics", International Conference on Image and Signal Processing, pp. 396-404, 2008.
[19] J. Wu, Z. Qiu, D. Sun, "A Hierarchical Identification Method Based on Improved Hand Geometry and Regional Content Feature for Low-Resolution Hand Images", Signal Processing, Vol. 88, pp. 1447-1460, 2008.
[20] S. Krishnamoorthy, K. P. Soman, "Implementation and Comparative Study of Image Fusion Algorithms", International Journal of Computer Applications (0975-8887), Vol. 9, No. 2, November 2010.
[21] B. Suresh Babu, V. Chandrasekhar, P. Naresh Kumar, K. Vivekananda Swamy, "Comparison and Improvement of Wavelet Based Image Fusion", IJCEM International Journal of Computational Engineering & Management, Vol. 15, Issue 3, May 2012, ISSN (Online): 2230-7893, www.IJCEM.org.
[22] A. K. Jain, A. Ross, "Multibiometric Systems", Communications of the ACM, Vol. 47, No. 1, pp. 34-40, January 2004.
[23] F. Sadjadi, "Comparative Image Fusion Analysis", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 3, 20-26 June 2005, p. 8.