 Open Access
 Authors: Dr. Shreenidhi. P. L, Shaima Al Hashmi, Dr. Hemalatha Jai Kumari
 Paper ID: IJERTV11IS060193
 Volume & Issue: Volume 11, Issue 06 (June 2022)
 Published (First Online): 22-06-2022
 ISSN (Online): 2278-0181
 Publisher Name: IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Using Neural Network, Human Recognition based on Hand Geometry Method using Fractional Local Ternary Intensity Pattern (FLTIP) Model
Dr. Shreenidhi. P. L,
Dept of Information Technology, UTASMCT
Ms. Shaima Al Hashmi,
Dept of Information Technology, UTASMCT
Dr. Hemalatha Jai Kumari,
Dept of Information Technology, UTASMCT
Abstract: Hand geometry feature based image classification is one of the interesting research areas in the field of biometric recognition and authentication systems. Recognition systems of this type are deployed in many applications for secure authentication, person identification, and access control. Several image processing techniques have been developed for this purpose in existing works, but they are limited by issues such as reduced accuracy, poor classification efficiency, and increased error rate. To solve these problems, this paper develops a new pattern extraction based classification technique for hand geometry feature recognition. First, the input test image is preprocessed using the Windowed Convolution Gaussian Filtering (WCGF) technique to eliminate noise pixels and smooth the image. After that, block separation is performed to extract the most useful patterns from the filtered image using the Fractional Local Ternary Intensity Pattern (FLTIP) technique. A Neural Network (NN) classifier is then applied to recognize the image based on the extracted feature vectors. In the experimental analysis, the performance of the proposed technique is validated using various evaluation measures and compared with existing techniques to demonstrate the superiority of the proposed pattern extraction based classification system.
Index Terms: Hand Geometrical Features, Windowed Convolutional Gaussian Filtering (WCGF), Fractional Local Ternary Intensity Pattern (FLTIP), Neural Network (NN) Classifier, Biometric Authentication, Recognition System.

INTRODUCTION
Biometric authentication and recognition is one of the emerging research areas of recent years for ensuring security [1, 2]. A biometric system is a kind of pattern recognition system in which the biometric data of a person is processed based on a set of characteristics extracted from the dataset [3]. Typically, authentication is performed based on physiological and behavioral features for personal recognition. Characteristics such as the face, iris, fingerprint, and veins are physiological biometrics. Among these, hand biometrics are a highly suitable option for authenticating a valid person through an access control mechanism [4, 5]. Moreover, hand geometry based authentication is regarded as a standard identification approach, since it satisfies the characteristics required for authentication. In most biometric recognition systems, the image is considered the suitable source for analyzing the features and other parameters required for authentication [6, 7].
Generally, the human hand comprises an extensive range of characteristics that can be processed using hand geometry biometrics [8]. Based on the image acquisition device used, such systems are categorized into constrained and unconstrained contact based methods. Moreover, ensuring security and user integrity is essential for protecting private information against unauthorized access. For this purpose [9], many biometrics based authentication mechanisms have been developed that protect both the user's privacy and security. A typical hand geometry biometric system has the following stages: image preprocessing, segmentation, feature extraction, optimization, and classification [10]. Based on these processes, an authenticated person can be identified using hand geometry biometrics. Most traditional works developed hand geometry feature based biometric authentication systems for ensuring individual security, but they suffer from issues such as increased computational complexity and time consumption, reduced accuracy and recognition performance, and inefficient classification results. To solve these issues, this paper develops a pattern based classification system for hand geometry image processing, in which the geometrical features are used for person identification and classification. The major objectives of this work are listed as follows:

To remove the noise and artifacts in the input hand image, a new preprocessing technique named Windowed Convolutional Gaussian Filtering (WCGF) is developed.

To extract the patterns required for authenticating the person, a Fractional Local Ternary Intensity Pattern (FLTIP) technique is proposed.

To classify the image with an increased recognition rate based on the geometrical features, a Neural Network (NN) technique is utilized.
The remaining sections of this paper are structured as follows: Section II surveys the existing preprocessing, feature extraction, and classification algorithms used for hand geometry image processing. Section III describes the proposed methodology and illustrates its overall flow. Section IV validates the experimental results and compares the existing and proposed classification techniques using various measures. Finally, Section V concludes the paper and outlines future enhancements.


RELATED WORKS
This section surveys the existing techniques related to hand geometry verification and classification based on its features. Prabu, et al [11] suggested a Hybrid Adaptive Fusion (HAF) technique for improving the accuracy of a biometric recognition system. The main intention of that paper was to develop a secure identification and authentication system based on the iris of users. For this purpose, feature extraction techniques such as the Scale Invariant Feature Transform (SIFT) and Local Binary Patterns (LBP) were utilized. Also, a median filtering technique was used to eliminate noise and irrelevant features in the input iris images. In order to exactly classify the verified users, an extreme learning machine algorithm was employed. Bahmed, et al [12] introduced a multimodal biometric system for secure personal authentication and access control. The major stages involved in this system were image extraction, orientation, key point localization, and feature extraction. Here, finger geometry based feature selection was performed to eliminate redundant information and to increase prediction accuracy. Then, recognition was performed by selecting the optimal features with low correlation. The main benefit of this technique was that it provided increased accuracy and efficiency by incorporating the geometric features with additional biometric features.
Ghose, et al [13] suggested a new feature descriptor algorithm based on the neighborhood intensity pattern for an efficient image retrieval process. Here, an isotropic Gaussian filtering technique was applied to increase the robustness of the mechanism based on multiresolution analysis, and the retrieval performance was further improved with the help of a Genetic Algorithm (GA). Jaswal, et al [14] recommended an adaptive histogram equalization method for ensuring the security of a multimodal biometric authentication system. Here, palm print and hand geometry features were considered to increase the classification accuracy. The stages comprised in that work were ROI extraction, feature extraction, and geometry recognition. In order to improve the overall recognition performance, a fusion of hand shape, palm print, and geometrical features was extracted from the given image. In paper [15], hand geometry measurements were used for improving the accuracy of a biometric authentication system, where 12 different features were utilized to accurately authorize persons in a hand recognition system. The stages of that research were image acquisition, preprocessing, and feature expansion, and personal authentication was performed using the geometrical features extracted from the input image. The disadvantage of this work was that the classification accuracy still needed to be increased.
Song, et al [16] introduced a multitouch authentication mechanism by incorporating hand geometry and behavioral characteristics. The benefits of this mechanism were its simplicity, efficiency, security, and usability, and its intention was to improve the accuracy of user legitimation with better reliability. Sagayam and Hemanth [17] surveyed various hand geometry feature and recognition techniques for selecting the most suitable classification algorithm for a biometric authentication system. The major objectives of that study were to reduce the computational complexity and error rate, and to improve the classification rate and susceptibility. Here, both supervised and unsupervised machine learning algorithms were validated to analyze the efficacy of mechanisms used for hand geometry feature recognition. Angadi, et al [18] suggested a multiclass Support Vector Machine (SVM) technique for increasing the performance of a hand geometry system. Here, a complete connected graph was utilized to increase the identification rate with reduced complexity, and spectral properties were utilized for a peg-free hand geometry based user authentication system. The stages involved in this system were preprocessing, segmentation, graph representation, feature extraction, and classification.
Jaswal, et al [19] utilized palm print and hand geometry features for developing a secure multimodal biometric authentication system. Here, local texture and geometrical patterns were extracted to increase classification accuracy. Moreover, the Fisher Linear Discriminant (FLD) analysis method was utilized for optimal projections of the image, and SVM based classification and fusion techniques were employed to exactly recognize the objects. In paper [20], an improved Bacterial Foraging Optimization (BFO) was developed for a hand based biometric authentication system. In this system, three types of biometric features, namely fingerprint, palm print, and finger inner knuckle print, were considered for segmentation, and fusion was applied to these images using the Particle Swarm Optimization (PSO) method. de Christo, et al [21] employed a biometric authentication system comprising the stages of feature extraction, preprocessing, fusion, and classification. Here, SVM based classification was employed to accurately recognize the biometric patterns, and an attribute based fusion method was used to fuse the image using geometrical features. Khaliluzzaman, et al [22] suggested a hand geometry feature based authentication mechanism for ensuring increased security. The stages involved in that system were preprocessing, color conversion, boundary extraction, and ROI estimation. The analysis stated that the performance of the authentication system was highly dependent on the feature vectors of the given images.

PROPOSED METHODOLOGY
This section presents a detailed description of the proposed hand geometry feature based personal verification system. The major aim of this work is to improve the recognition accuracy of authentication based on the feature representation of the images. For this purpose, novel techniques, namely the Windowed Convolution Gaussian Filtering (WCGF) and Fractional Local Ternary Intensity Pattern (FLTIP) techniques, are proposed. The overall flow of the proposed system is depicted in Fig 1, which includes the following stages:

Preprocessing

Block separation

Pattern extraction

Classification
Initially, the given testing hand image is preprocessed to normalize it with a better smoothing rate, which is performed using the WCGF technique. Then, the blocks of the preprocessed image are segregated to extract the patterns of the image, where the FLTIP pattern is applied for extraction. It efficiently extracts the geometrical features to increase the overall accuracy of classification. Finally, the NN classifier is employed to classify whether the person ID is authenticated or unauthenticated.

Fig 1. Flow of the proposed system (testing hand image → preprocessing (WCGF) → block separation → FLTIP pattern extraction → NN classification against the database → classification result / recognized ID)

Image Preprocessing
Typically, image preprocessing is one of the initial and most important stages in any image processing application, because it is essential for ensuring the increased precision of the subsequent steps. It also reduces the impact of artifacts that could affect the classification accuracy. For this purpose, an efficient preprocessing technique named the Windowed Convolution Gaussian Filtering (WCGF) method is utilized in this work. It filters the noisy pixels and enhances the edge details to obtain clear texture patterns for further processing. Moreover, it smooths the image by applying normalization to eliminate irrelevant and unwanted information. In this model, a pixel of the input image I(x, y) is flagged as noise when its deviation from the local mean μ exceeds a threshold T:

n(x, y) = { I(x, y), if (I(x, y) − μ) > T; 0, otherwise }   (1)

where x = {1, 2, …, M}, M being the row size of the image, and y = {1, 2, …, N}, N being the column size of the image. After getting the input image, sharpening is performed using the following equation:

I_s(x, y) = I(x, y) + λ · H(x, y)   (2)

where λ defines the tuning filter parameter and H(x, y) represents the high pass filter mask. After that, the image is separated into cells based on the following equation:

C = I_s(x − 1 : x + 1, y − 1 : y + 1)   (3)

Then, the average difference value in C is computed with respect to the size of the filter mask K and the center pixel of the mask matrix. The indexing of the mask matrix is shown in Fig 2, where the matrix can be presented in both 3×3 and 5×5 formats. The filtering performance is enhanced as the size of the mask is reduced. This type of filtering improves the peak signal to noise ratio by reconstructing more pixels while reducing the number of noisy pixels. At last, the filtered output F(x, y) is obtained, which is further used for pattern extraction. The working procedure of the proposed WCGF technique is illustrated as follows:

Fig 2. Indexing of the mask matrix for filtering: (a) 3 × 3 matrix; (b) 5 × 5 matrix

Algorithm I – Windowed Convolution of Gaussian Filtering
Input: Testing image I;
Output: Filtered image F;
Step 1: After getting the input image I, perform image sharpening as shown in equation (2) using the tuning parameter and high pass filter mask.
Step 2: for x = 2 to M − 1 do
Step 3: for y = 2 to N − 1 do // M and N are the image size
Step 4: Perform cell separation as shown in equation (3), and estimate the noisy pixel based on equation (1).
Step 5: Compute the average difference value in C.
Step 6: If the center pixel agrees with the window average, retain it; Else, replace it with the reconstructed (average) value.
End if
Step 7: The filtered output is F(x, y).
Step 8: end y loop;
Step 9: end x loop;
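To make the preprocessing stage concrete, the sketch below implements a WCGF-style filter along the lines of Algorithm I. It is an illustrative reading of the procedure, not the paper's exact implementation: the Laplacian high-pass mask, the function name `wcgf_filter`, and the parameter values (`lam`, `k`, `thresh`) are assumptions.

```python
import numpy as np

def wcgf_filter(img, lam=0.5, k=3, thresh=30.0):
    """Illustrative WCGF-style filter: sharpen with a high-pass mask
    (eq. 2), then replace pixels flagged as noisy, i.e. deviating from
    the local window mean by more than a threshold (eq. 1), with that
    mean. Parameter names and values are assumptions."""
    img = img.astype(float)
    # Step 1: sharpening I_s = I + lambda * H, with H from a Laplacian mask.
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    padded = np.pad(img, 1, mode='edge')
    high = np.zeros_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            high[x, y] = np.sum(padded[x:x + 3, y:y + 3] * lap)
    sharp = img + lam * high
    # Steps 2-7: slide a k x k window (eq. 3: cell separation); a pixel
    # deviating from the window mean by more than thresh is reconstructed.
    out = sharp.copy()
    r = k // 2
    sp = np.pad(sharp, r, mode='edge')
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            cell = sp[x:x + k, y:y + k]
            mu = cell.mean()
            if abs(sharp[x, y] - mu) > thresh:
                out[x, y] = mu  # replace the noisy pixel with the window mean
    return out
```

Running this on a flat image with a single impulse pulls the impulse back toward the local mean while leaving clean regions untouched.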

Pattern Extraction
After preprocessing, the patterns of the filtered image are extracted using the Fractional Local Ternary Intensity Pattern (FLTIP) technique. This is also one of the essential stages, since it extracts the most relevant information used to characterize each class in the image. Here, pattern extraction is mainly performed to increase the overall accuracy and efficiency of classification. In this algorithm, the filtered image F is taken as the input for pattern extraction, and zero padding of two rows and two columns is initialized along the entire boundary. Then, a 5×5 window is used as the mask for extracting the patterns from the input. The image cells are quantized over five projection plane angles with the index set θ = {+90°, +45°, 0°, −45°, −90°}. The FLTIP response of a pixel is accumulated over the neighborhood as:

P(x, y) = Σ_k F(x, y) × s1(g_k, θ, r)   (4)

where s1(g_k, θ, r) is an indicator that equals 1 when the neighborhood pixel g_k lies on the projection angle θ within the mask of size r, and 0 otherwise. From this index, the set of neighborhood pixels N_b is extracted, which is computed as follows:

N_b = {F(x − 1 : x + 1, y − 1 : y + 1)}   (5)

After that, the set of nearest neighborhood pixels is predicted from this boundary pixel collection with respect to the mean average value μ, which is estimated as follows:

μ = (1 / L) Σ_{k=1}^{L} g_k   (6)

where L = (p × q) − 1 is the number of boundary pixels in a p × q mask. Consequently, the average of the mean difference between the matrix center pixel c and each boundary pixel is calculated as shown below:

δ = (1 / L) Σ_{k=1}^{L} |g_k − c|   (7)

For each iteration, the mean values μ and δ are compared against each boundary pixel, and the sign change S between the center pixel and its boundary is encoded as a logical output:

S_k = { 1, if (g_k − c) > 0; 0, otherwise }   (8)

After that, the corresponding decimal value B is computed from the binary stream, which is represented as follows:

B = B + (2^(k−1) × S_k)   (9)

The ternary stream is separated into two binary masks, one marking the positions where S_k = 1 and the other marking the remaining positions, and each mask is converted to its decimal code. The detailed working procedure of the proposed FLTIP based pattern extraction is illustrated in Algorithm II.

Algorithm II – Pattern Extraction using FLTIP
Input: Filtered image F;
Output: Texture pattern of the image P;
Initialize zero padding for the convoluted image in 2 rows and 2 columns at the overall boundary of matrix F.
Initialize W as the 5×5 window mask for pattern extraction of the input image.
For x = 3 to m − 2 loop
For y = 3 to n − 2 loop
Let W = F(x − 2 : x + 2, y − 2 : y + 2)
Let c = F(x, y)
Initialize B = 0
L = (p × q) − 1
For k = 1 to L loop
Collect the set of neighborhoods using equation (5)
Estimate μ for each iteration of k using equation (6)
Estimate δ for each g_k using equation (7)
Estimate the sign change S between the center pixel and its boundaries as a logical output using equation (8)
Calculate the binary to decimal value B for the binary stream of S using equation (9)
End loop k
Calculate the maximum pixel progression at each pixel by equation (10)
Perform binary code mapping using equations (11) and (12)
End loop y
End loop x

Consequently, the maximum pixel progression for each pixel is estimated over the five projection angles as shown below:

P_max(x, y) = max_θ (P_θ(x, y))   (10)

Then, binary code mapping is performed to obtain the pattern vectors from the input:

P(x − 2, y − 2) = Σ_k 2^k × s2(F(x, y), g1(x, y), g2(x, y))   (11)

where g(x, y) = F(x + i, y + j), i, j = −1 : 1, and s2(F(x, y), g1(x, y), g2(x, y)) is a ternary comparison of the center pixel with its two neighbors along the projection angle, returning 1 when the comparison condition holds and 0 otherwise (equation (12)).
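The sign encoding and binary-to-decimal steps of Algorithm II can be sketched for a single 3×3 cell as below. This is a simplified local-ternary-pattern encoding in the spirit of equations (8) and (9), not the exact FLTIP weighting; the function name `ltp_codes` and the tolerance `t` are our assumptions.

```python
import numpy as np

def ltp_codes(cell, t=5):
    """Simplified ternary encoding of one 3x3 cell: each boundary pixel
    is compared with the centre c (above c+t -> 1, below c-t -> -1,
    else 0), the ternary stream is split into an 'upper' and a 'lower'
    binary mask, and each mask is converted to a decimal code in the
    manner of equation (9): B = sum(2^k * S_k)."""
    c = cell[1, 1]
    # the eight boundary pixels, clockwise from the top-left corner
    g = np.array([cell[0, 0], cell[0, 1], cell[0, 2], cell[1, 2],
                  cell[2, 2], cell[2, 1], cell[2, 0], cell[1, 0]], float)
    s = np.where(g > c + t, 1, np.where(g < c - t, -1, 0))  # ternary stream
    upper = (s == 1).astype(int)    # mask: 1 where S_k = +1
    lower = (s == -1).astype(int)   # mask: 1 where S_k = -1
    to_dec = lambda bits: int(sum(int(b) << k for k, b in enumerate(bits)))
    return to_dec(upper), to_dec(lower)
```

For a cell whose top-left neighbour is brighter and bottom-right neighbour darker than the centre, the two codes isolate those positions as single set bits.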
Fig 3. Phasor diagram of FLTIP (5 × 5 neighborhood indices from (i − 2, j − 2) to (i + 2, j + 2) around the center pixel (i, j), with the projection angles marked)

Fig 3 depicts the phasor representation of the FLTIP technique, in which the recognition of neighborhood pixels with respect to the validation of different angles is illustrated. Here, the red colored block represents the pointer used to estimate the value of θ. When the index is at the center pixel of the matrix, the value of μ is estimated, and the neighborhood pixels at the coordinate position (x, y) are used for computing δ. The projection angle of the arrow indicates the estimated magnitude of the current matrix. In this case, neither the center pixel nor the boundary pixel is considered for B. After extracting the patterns, the geometrical features are also extracted from the filtered image F. In this technique, the orientation of the image matrix is calculated, and the image matrix is rotated by the updated angle to obtain an accurate orientation. Here, (0, 0) denotes the central pixel of the image mask, while (0, 1) and (1, 0) denote the neighbor pixels for the angles of 0° and 90°, respectively. The image matrix is then rotated with respect to the updated angle θ′ for analyzing the correct orientation of the image matrix, which is shown below:

θ′ = { θ − 90°, if θ + 90° ≥ 0°; θ + 90°, if θ − 90° < 0° }   (17)

Consequently, the upper and lower peaks in the hand image are computed based on the finger tips and valleys. Then, the vertices of the upper and lower peaks are estimated based on the x and y coordinates of the image pixels, and from these the output geometrical key points are extracted from the filtered image.
Algorithm III – Geometrical feature extraction
Input: Filtered image F;
Output: Geometrical key points V;
Step 1: Estimate the orientation θ of the image matrix using equation (13).
Step 2: Rotate the image matrix by the updated angle θ′ to get the correct orientation of the image matrix, as shown in equation (17).
Step 3: Estimate the finger tips and valleys by checking the upper and lower peaks in the hand edge image E:
For x = 2 to m − 1 loop
For y = 2 to n − 1 loop
d = √(E(x − 1, y − 1)² + E(x, y)²)
α = tan⁻¹(E(x − 1, y − 1), E(x, y))
If (α > T), then
V1 = {x, y} // V1 represents the vertices of the upper peaks, and {x, y} are the coordinates of the image pixels.
Else
V2 = {x, y} // V2 represents the vertices of the lower peaks, and {x, y} are the coordinates of the image pixels.
End If
V = {V1, V2}
End loop y
End loop x

The orientation used in Step 1 is computed from the central moments of the image as:

θ = (1/2) tan⁻¹(2 μ11 / (μ20 − μ02))   (13)

where μ_pq represents the central moments of the image, which are estimated as shown below:

μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q I(x, y)   (14)

Here, x and y index the image coordinate matrix, and the values of x̄ and ȳ are estimated as follows:

x̄ = μ(1, 0) / μ(0, 0)   (15)

ȳ = μ(0, 1) / μ(0, 0)   (16)
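The moment-based orientation of equations (13)-(16) can be computed directly; the helper below is a straightforward transcription (the function name is ours, and `arctan2` is used instead of a plain arctangent so the quadrant is handled robustly).

```python
import numpy as np

def orientation(img):
    """Orientation of a hand mask from its central moments, following
    equations (13)-(16): theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    img = img.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00                    # eq. (15)
    ybar = (ys * img).sum() / m00                    # eq. (16)
    mu11 = ((xs - xbar) * (ys - ybar) * img).sum()   # eq. (14), p = q = 1
    mu20 = ((xs - xbar) ** 2 * img).sum()
    mu02 = ((ys - ybar) ** 2 * img).sum()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # eq. (13), radians
```

For a mask lying along the main diagonal (e.g. `np.eye(5)`), the returned angle is π/4, as expected.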


Classification
After analyzing the patterns, the Neural Network (NN) classification technique is employed for exactly classifying the recognized image. It performs multiple classification processes based on the image dataset. Here, texture classification is mainly performed to improve the recognition process by estimating the network layer connectivity based on the dynamic range of neuron selection. Typically, the NN classification technique is used to classify the hand image patterns into the class 0 or 1. This type of classification efficiently improves the accuracy rate by using the patterns extracted in the previous stage: the input samples are trained, and the patterns are classified with an increased accuracy rate.
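As a sketch of this stage, the toy example below trains a minimal one-hidden-layer network (numpy only) to separate stand-in feature vectors into class 0 or 1. The synthetic data, architecture, and hyperparameters are illustrative assumptions; the paper does not specify the exact network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for extracted pattern vectors: class 1 (authenticated)
# has larger responses than class 0. A real system would feed the
# FLTIP pattern histograms here.
X0 = rng.normal(0.2, 0.05, (50, 8))
X1 = rng.normal(0.8, 0.05, (50, 8))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(50), np.ones(50)]

# One hidden layer of sigmoid units, trained by plain gradient descent
# on the cross-entropy loss.
W1 = rng.normal(0, 0.5, (8, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sig(X @ W1 + b1)                  # hidden layer activations
    p = sig(h @ W2 + b2).ravel()          # output probability of class 1
    d2 = (p - y)[:, None]                 # cross-entropy gradient at output
    d1 = (d2 @ W2.T) * h * (1 - h)        # backpropagated hidden gradient
    W2 -= 0.5 * h.T @ d2 / len(y); b2 -= 0.5 * d2.mean(0)
    W1 -= 0.5 * X.T @ d1 / len(y); b1 -= 0.5 * d1.mean(0)

pred = (sig(sig(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
acc = (pred == y).mean()
```

On this well-separated toy data the network reaches near-perfect training accuracy within a few hundred iterations.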


RESULTS AND DISCUSSION
This section evaluates the performance of the proposed hand geometry feature biometric classification system using various evaluation measures, and compares its performance values with those of existing mechanisms. In this analysis, the NTU hand digit dataset, the HKU hand geometry feature dataset, the HKU multiangle hand geometry feature dataset, the LaRED dataset, and the MU HandimageASL dataset are used for validating the performance results. The NTU dataset contains 1000 sample images of 10 hand postures, gathered from 10 different subjects. The HKU dataset contains 10 hand postures of 5 subjects, with 1000 cases in total. The ASL dataset contains 700 images of 10 different postures, and the LaRED dataset comprises 243,000 samples of 10 subjects.

Performance Indicators
The commonly used measures for evaluating the results of an image classification system are sensitivity, specificity, Jaccard similarity, Dice overlap, precision, recall, F1-measure, Matthews Correlation Coefficient (MCC), error rate, kappa coefficient, and accuracy, which are calculated as follows:
Fig 4. Performance measures (bar chart of the parameter values listed in Table 1; performance value axis from 0.95 to 1)

Table 1. Performance evaluation of existing and proposed techniques

Parameters          Proposed    SoGFFV
Sensitivity         0.9964      0.9803
Specificity         0.974       0.962
Precision           0.997       0.961
F1_Score            0.99        0.98
MCC                 0.97        0.96
Accuracy            0.99        0.97
Kappa Coefficient   0.97        0.961
Error rate          0.02        0.027
FPR                 0.012       0.015
Sensitivity = TP / (TP + FN)   (18)

Specificity = TN / (TN + FP)   (19)

Jaccard_Similarity = TP / (TP + FN + FP)   (20)

Dice_Overlap = 2TP / (2TP + FN + FP)   (21)

Precision = TP / (TP + FP)   (22)

Recall = TP / (TP + FN)   (23)

F1_Score = (2 × Precision × Recall) / (Precision + Recall)   (24)

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))   (25)

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (26)

Error_Rate = 1 − Accuracy   (27)

Kappa_Coeff = (Po − Pe) / (1 − Pe)   (28)

Fig 5 depicts the accuracy and kappa coefficients of the existing and proposed techniques. Here, the accuracy of the proposed technique is increased to 0.983 and the kappa coefficient to 0.981 when compared to the existing technique. Fig 6 shows the error rate and FPR of the existing and proposed techniques, where both values are efficiently reduced by the proposed FLTIP geometrical feature extraction technique. Table 2 shows the AUC analysis of the existing and proposed techniques, where the proposed technique offers an improved AUC by extracting the patterns and geometrical features in an efficient manner.
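The measures above translate directly into code; the helper below is a straightforward transcription of equations (18)-(28) from raw confusion matrix counts (the function name and the chance-agreement form used for Pe are ours).

```python
import math

def metrics(tp, tn, fp, fn):
    """Evaluation measures of equations (18)-(28) from TP/TN/FP/FN counts."""
    sens = tp / (tp + fn)                        # (18) sensitivity / (23) recall
    spec = tn / (tn + fp)                        # (19)
    jac  = tp / (tp + fn + fp)                   # (20)
    dice = 2 * tp / (2 * tp + fn + fp)           # (21)
    prec = tp / (tp + fp)                        # (22)
    f1   = 2 * prec * sens / (prec + sens)       # (24)
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))   # (25)
    n = tp + tn + fp + fn
    acc  = (tp + tn) / n                         # (26)
    err  = 1 - acc                               # (27)
    po = acc                                     # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe)                 # (28)
    return dict(sensitivity=sens, specificity=spec, jaccard=jac, dice=dice,
                precision=prec, f1=f1, mcc=mcc, accuracy=acc,
                error_rate=err, kappa=kappa)
```

For example, `metrics(40, 45, 5, 10)` gives sensitivity 0.8, specificity 0.9, and accuracy 0.85.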
Where TP = True Positive, TN = True Negative, FP = False Positive, and FN = False Negative. Fig 4 and Table 1 compare the sensitivity, specificity, precision, F1-score, and MCC values of both the existing [23] and proposed techniques. From the evaluation, it is observed that the proposed technique provides increased performance values when compared to the existing technique, because it efficiently extracts the patterns by using the FLTIP and geometrical feature extraction techniques.
Fig 5. Accuracy and kappa coefficients (Proposed vs. SoGFFV; performance value axis from 0.966 to 0.982)

Fig 6. Error rate and FPR (Proposed vs. SoGFFV; performance value axis from 0 to 0.03)

Table 2. AUC analysis

Methods     AUC
MCC         0.6
CSM         0.57
MDC         0.86
Bayes       0.73
CS          0.92
SoGFFV      0.92
Proposed    0.96

Accuracy Analysis
Table 3 evaluates the accuracy of the existing and proposed classification techniques, where the proposed FLTIP-NN technique accurately classifies the hand image based on the extracted patterns and geometrical features of the input image. Moreover, the accuracy of the classifier is determined by its efficiency in assigning the label 0 or 1. From the evaluation, it is observed that the proposed classification technique provides increased accuracy when compared to the existing techniques.
Table 3. Accuracy analysis

Methods     Accuracy
MCC         0.85
CSM         0.86
MDC         0.846
Bayes       0.842
CS          0.878
SoGFFV      0.953
Proposed    0.98

True Positive Rate and False Positive Rate
Fig 7 shows the Receiver Operating Characteristics (ROC) of the existing and proposed techniques with respect to the values of True Positive Rate (TPR) and False Positive Rate (FPR). The FPR of a classification technique is estimated by subtracting the specificity from one. This analysis shows that the curve of the proposed technique reaches the maximum sensitivity value with a minimum FPR value.

Fig 7. ROC analysis (TPR versus FPR curves for the SGFFV and proposed techniques)
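A ROC curve like the one in Fig 7 is traced by sweeping a decision threshold over the classifier scores; the small numpy-only helper below (our own, not from the paper) returns the FPR/TPR pairs, with FPR = 1 − specificity as noted in the text.

```python
import numpy as np

def roc_points(scores, labels):
    """TPR/FPR pairs obtained by thresholding scores from high to low.
    Each position k corresponds to accepting the k+1 highest scores."""
    order = np.argsort(-scores)            # sort scores descending
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)                # true positives at each cut
    fps = np.cumsum(1 - labels)            # false positives at each cut
    tpr = tps / labels.sum()               # sensitivity
    fpr = fps / (1 - labels).sum()         # 1 - specificity
    return fpr, tpr
```

For a perfectly ranked score list, the curve rises to TPR = 1 while FPR is still 0.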
The overall experimental analysis depicts that the proposed FLTIP geometric feature extraction based NN classification technique provides improved results compared with the other techniques.


CONCLUSION
This paper proposed a new pattern extraction based classification method for hand geometry feature image authentication and recognition. For this purpose, various image processing techniques are employed at the stages of preprocessing, block separation, pattern extraction, and classification. Initially, the WCGF based filtering technique is implemented to reduce the noisy pixels and to smooth the input image. During this process, the mask matrix is constructed in the 3×3 and 5×5 forms, which ensures an improved peak signal to noise ratio. After filtering the image, block separation is performed to increase the overall efficiency of recognition. Here, the FLTIP technique is utilized to extract the most useful patterns by comparing the intensity of the center pixel with its neighboring pixels. This type of feature extraction increases the overall accuracy of the image recognition system. Finally, the NN classifier is deployed to classify whether the image is authenticated or unauthenticated based on the extracted feature vectors. Extensive simulation results have been obtained for evaluating the performance of the classification system, and several existing techniques are compared with the proposed mechanism. The results depict that the combined WCGF-FLTIP-NN technique outperforms the other techniques, efficiently improving the classification performance with an improved recognition rate and accuracy and a reduced error value.
In future, this work can be extended by implementing a new classification technique for hand geometry feature image authentication system.
REFERENCES
[1] L. N. Evangelin and A. L. Fred, "Biometric authentication of physical characteristics recognition using artificial neural network with PSO algorithm," International Journal of Computer Applications in Technology, vol. 56, no. 3, pp. 219-229, 2017.
[2] S. Imura and H. Hosobe, "A hand geometry feature-based method for biometric authentication," pp. 554-566.
[3] C. Liu, W. Kang, L. Fang, and N. Liang, "Authentication system design based on dynamic hand geometry feature," pp. 94-103.
[4] A. Dayal, N. Paluru, L. R. Cenkeramaddi, and P. K. Yalavarthy, "Design and implementation of deep learning based contactless authentication system using hand geometry features," Electronics, vol. 10, no. 2, p. 182, 2021.
[5] B. Belean, M. Streza, S. Crisan, and S. Emerich, "Dorsal hand vein pattern analysis and neural networks for biometric authentication," Studies in Informatics and Control, vol. 26, no. 3, pp. 305-314, 2017.
[6] M. Viswanathan, G. B. Loganathan, and S. Srinivasan, "IKP based biometric authentication using artificial neural network," p. 030030.
[7] S. Srivastava and S. Srivastava, "Biometric authentication using local subspace adaptive histogram equalization," Journal of Intelligent & Fuzzy Systems, vol. 32, no. 4, pp. 2893-2899, 2017.
[8] K. Sujatha, S. Krishnan, S. Rani, and M. Bhuvana, "Biometric authentication system with hand vein features using morphological processing," Indian Journal of Science and Technology, vol. 11, pp. 26, 2018.
[9] R. Kumar, "Hand image biometric based personal authentication system," in Intelligent Techniques in Signal Processing for Multimedia Security, Springer, 2017, pp. 201-226.
[10] B. Subramaniam and S. Radhakrishnan, "Multiple features and classifiers for vein based biometric recognition," 2018.
[11] S. Prabu, M. Lakshmanan, and V. N. Mohammed, "A multimodal authentication for biometric recognition system using intelligent hybrid fusion techniques," Journal of Medical Systems, vol. 43, no. 8, pp. 1-9, 2019.
[12] F. Bahmed, M. O. Mammar, and A. Ouamri, "A multimodal hand recognition system based on finger inner-knuckle print and finger geometry," Journal of Applied Security Research, vol. 14, no. 1, pp. 48-73, 2019.
[13] S. Ghose, et al., "Fractional local neighborhood intensity pattern for image retrieval using genetic algorithm," Multimedia Tools and Applications, vol. 79, no. 25, pp. 18527-18552, 2020.
[14] G. Jaswal, A. Kaul, and R. Nath, "Multimodal biometric authentication system using hand shape, palm print, and hand geometry," in Computational Intelligence: Theories, Applications and Future Directions - Volume II, Springer, 2019, pp. 557-570.
[15] H. H. Mohammed, S. A. Baker, and A. S. Nori, "Biometric identity authentication system using hand geometry measurements," in Journal of Physics: Conference Series, IOP Publishing, 2021.
[16] Y. Song, Z. Cai, and Z.-L. Zhang, "Multi-touch authentication using hand geometry and behavioral information," in 2017 IEEE Symposium on Security and Privacy (SP), IEEE, 2017.
[17] K. M. Sagayam and D. J. Hemanth, "Hand posture and geometry feature recognition techniques for virtual reality applications: a survey," Virtual Reality, vol. 21, no. 2, pp. 91-107, 2017.
[18] S. Angadi and S. Hatture, "Hand geometry based user identification using minimal edge connected hand image graph," IET Computer Vision, vol. 12, no. 5, pp. 744-752, 2018.
[19] G. Jaswal, A. Kaul, and R. Nath, "Multimodal biometric authentication system using local hand features," in Advances in Machine Learning and Data Science, Springer, 2018, pp. 325-336.
[20] K. Shanmugasundaram, A. S. A. Mohmed, and N. I. R. Ruhaiyem, "Hybrid improved bacterial swarm optimization algorithm for hand based multimodal biometric authentication system," Journal of Information and Communication Technology, vol. 18, no. 2, pp. 123-141, 2019.
[21] L. E. de Christo and A. Zimmer, "Multimodal biometric system for identity verification based on hand geometry and hand palm's veins," in FedCSIS (Communication Papers), 2017.
[22] M. Khaliluzzaman, M. Mahiuddin, and M. M. Islam, "Hand geometry based person verification system," in 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET), IEEE, 2018.
[23] L. Fang, et al., "Real-time hand posture recognition using hand geometric features and Fisher vector," Signal Processing: Image Communication, vol. 82, p. 115729, 2020.