Compact Hybrid Domain based Human Recognition using Face Images

DOI : 10.17577/IJERTV9IS030395


M Shanmugam

Department of ECE, Government Engineering College, Krishnarajapete, Mandya, Visvesvaraya Technological University, Belagavi, India,

V M Viswanatha Department of ECE, HKES SLN College of Engineering, Raichur,

Visvesvaraya Technological University, Belagavi, India

K B Raja

Department of ECE, University Visvesvaraya College of Engineering, Bangalore University,

Bengaluru, India

Abstract: Recognizing persons from face images captured under uncontrolled conditions is a challenging task. In this research we propose Compact Hybrid Domain based Human Recognition using Face Images. Three sets of features are computed: Histogram Intensities (HI), the Discrete Wavelet Transform (DWT), and the Double Density Dual Tree Discrete Wavelet Transform (DDDTDWT). The first set is extracted with HI, retaining only the 200 dominant of the 256 coefficient values. The second set is extracted with DWT, retaining only the approximation-band coefficients, so the number of features is only one-fourth of the original size. The third set is extracted with DDDTDWT, taking the fifth sub-band coefficients as features. The final features are obtained by concatenating the three sets, which are both effective and compact. The database and test face image features are matched using the Euclidean Distance (ED) to compute the performance parameters. The results of the proposed model are superior to existing techniques, and because the number of features is compressed, the computation speed in a real-time system is also expected to be high.

Key words: Biometrics, DWT, DDDTDWT, Face Recognition, HI, Human Recognition.

  1. INTRODUCTION

    Biometrics is used to recognize the physiological and behavioural characteristics of a person. The word biometric is derived from the Greek terms bio and metric, where bio means life and metric means to measure. Biometric identification is preferred over traditional approaches based on PINs and passwords for its precision and convenience. Biometrics are categorized into two groups, physiological and behavioural. Physiological biometric traits are almost constant and include face, fingerprint, hand geometry, iris, etc. Behavioural biometric traits vary over time and include signature, keystroke, and voice recognition. Biometrics are used in numerous applications such as airports, consumer electronics, financial transactions, physical access to restricted areas, healthcare, time and attendance, law enforcement, social access control, cloud computing, etc. Facial recognition is one of the most rapidly growing biometric modalities as the use of smartphones and other computing devices rises. The foremost benefit of facial recognition compared with other biometric systems is that it can be used for identification in a crowd, since it does not require the active cooperation of the individuals being identified. Face recognition systems installed in multiplexes, airports, and other public places can identify individuals in a crowd and help handle disaster situations by enhancing security.

    Contribution: Compact Hybrid Domain based Human Recognition using Face Images is proposed. The compressed HI features form the first set of features. The compact transform-domain features of DWT and DDDTDWT form the second and third sets. The final features are obtained by combining the three sets. The performance of the scheme is verified by comparing features using the ED.

    The rest of the paper is organized as follows: Section 2 gives a brief survey of existing face recognition techniques. Section 3 presents the details of the proposed research, Section 4 gives the proposed algorithm, and Section 5 presents the performance evaluation. Section 6 concludes the paper.

  2. LITERATURE SURVEY

    A comprehensive survey of existing spatial- and transform-domain techniques for human recognition based on the physiological face biometric trait is presented. It covers pre-processing methods such as image resizing, noise removal, and histogram equalization, and feature extraction methods in both the spatial and transform domains. The test and database images are compared using distance measures and classifiers.

    George Azzopardi et al. [1] proposed a method that fuses domain-specific and trainable features to identify gender from face images. The Viola-Jones algorithm was applied to the input image to detect faces, followed by alignment and resizing. SURF descriptors and COSFIRE filters were used for feature extraction: the SURF descriptor extracted 51 facial landmarks such as the eyes and nose, while the COSFIRE filters provided the trainable features. The tests were carried out on the FERET and LFW databases. Saket Karve and Vasisht Shende [2] noted that model-based methods, such as distances between different points on the face and face shape, fail when an uncommon image is tested; they proposed a factor analysis method for feature extraction that outperforms Principal Component Analysis and Independent Component Analysis using four different classifiers. Haoxi Li and Haifang Hi [3] made two contributions to face recognition: an age-related factor guided joint task convolutional neural network to address cross-age face recognition, which combines an identity judgement network with an age judgement network, and a treatment of non-linear age features that are not separated from identity features.

    Zhao Jian et al. [4] proposed a method consisting of two parts, facial pose pre-recognition and dual-dictionary sparse representation, which performs better when only a few training samples are available. Mikhail V. Alyushin and Alexander Lyubshov [5] observed that the Viola-Jones algorithm for face recognition in the long-wave infrared range is ineffective because of the need to process redundant data during the integral image representation and the use of Haar features. Accordingly, they proposed a parametric version of the Viola-Jones algorithm to increase the quality of the processed thermal image in face recognition systems. Ya Su et al. [6] noted that traditional face recognition systems suffer from variations such as illumination, expression, and misalignment; to overcome these difficulties, two approaches were proposed. The first is a shape-constrained illumination pattern (SCIP) that models illumination variation, and the second is an SCIP-based face recognition system that handles illumination, expression, and image misalignment.

    Yanhong Zhang et al. [7] proposed an approach that embeds a patch strategy in a CNN architecture to learn effective features for face recognition. The image is cropped into patches, so no extra storage space is needed, and features are extracted from the patches using CNN structures. The results were demonstrated experimentally on the LFW and YTF datasets. Zhang Yu et al. [8] recommended a neural-network-based system for face recognition. A binarization-based image de-noising method is used for noise reduction and to extract the peaks and valleys of the features, and a BP neural network classifier is then used for batch reading, differencing, and classifying facial features. Pattarakamon Rangsee et al. [9] proposed nibble-based face identification based on the convolution of hybrid features. The technique splits each 8-bit pixel into a Left-Side Nibble (LSN) and a Right-Side Nibble (RSN) to increase the speed of computation. The 4-bit LSN is converted to a value ranging between 0 and 240. The Discrete Wavelet Transform is applied to the LSN to generate four bands, and only the LL band is considered for features. The RSN values range between 0 and 15. The Histogram of Oriented Gradients (HOG) is applied to the RSN to generate another set of features. The final features are formed by linear convolution of the two feature sets. The test and database features are compared using an Artificial Neural Network to evaluate the performance of the system.

    Xiong Xiaoqian [10] presented a face authentication system based on the ARM architecture, with the software development of the face recognition system carried out on an ARM embedded platform. Xian Geng et al. [11] proposed an algorithm that can be applied under changes in pose, expression, and illumination, i.e., face recognition under uncontrolled conditions (FRU), by expressing personal characteristic information as an Individual Stable Space (ISS); the ISS is realized using an ISNN (individual neural network). The ISS technique was compared with 12 existing face recognition methods on three databases and achieved the best results. Stan Z. Li and Juwei Lu [12] proposed a technique for generalizing the representational capacity of a face database. A feature line passes through two feature points and covers more of the face space than the feature points alone, thereby increasing the effective size of the database; classification is based on the distance between the features of an image and the feature lines, and the experiments were carried out on five databases. Caixia Liu [13], based on experimental results, observed that the outcome of face authentication depends not only on the static face authentication system but also on the active face authentication system, and that the face image acquisition device and processor hardware affect the speed and result of the authentication.

    Priyanka V. Bankar and Anjali C. Pise [14] proposed colour local Gabor wavelets (CLGWs) and the colour local binary pattern (CLBP), both of which capture discriminative attributes derived from the spatio-chromatic texture patterns of different spectral channels within a given local area. The research adopted a feature-level fusion method to combine multiple colour local textures for the final classification. Soo-Chang Pei et al. [15] proposed magnitude and direction based modifications to improve the existing LBP and WLBP descriptors for face authentication. The implementation requires only sixteen bins in the case of eight neighbours to integrate with LBP. The experimental results were demonstrated on four databases. Mehmet Koc and Cihan Topal [16] introduced a novel texture descriptor that utilizes the curvature of edge segments extracted from the input image. Edge segments are extracted as arrays of consecutive pixels and smoothed to remove the aliasing effect of edge pixels; a curvature function is then computed for each edge segment and quantized according to the curvature responses, and in the final step the curvature values are accumulated in a histogram. In contrast to conventional texture descriptors, the proposed HESC and HESC+ descriptors utilize only geometric information extracted from the input image. Compared with the well-known LBP method, HESC and HESC+ outperformed it by up to 10% even with a lower-dimensional feature vector.

    Muhammad Nazir et al. [17] presented a hybrid feature extraction procedure that is reliable, precise, and capable of handling multi-scale and lighting variation problems. The face region is extracted by the Viola-Jones technique, and the HOG features are further refined by selecting high-variance attributes by means of the DCT. The method is tested using a KNN classifier. Fiqri Malik Abdul Azis et al. [18] presented a system that can recognize human faces correctly at night; in the absence of light it is very difficult to distinguish different human faces. Image enhancement, defined as enhancing image features to obtain an improved image, is carried out using techniques such as Contrast Limited Adaptive Histogram Equalization, Histogram Equalization, and local enhancement, and the Eigenface method based on Principal Component Analysis is used for recognition. Hae-Min Moon et al. [19] offered a scheme for face recognition at longer distances; as the distance increases, the recognition rate decreases. The method addresses the change in recognition rate over distances from 1 m to 9 m by reducing the image size with bilinear interpolation. The background illumination is adjusted by histogram equalization, and a Convolutional Neural Network (CNN) is then applied to extract the attributes. The CNN proposed in the system consists of two convolution layers and two sub-sampling stages, finally producing a feature vector. In the feature matching stage, the Euclidean distance between the test and trained face features is used.

    Alaa Eleyan and Hasan Demirel [20] proposed a system for automatic facial recognition using Haralick features derived from the GLCM. In the first method the features are the statistics obtained from the GLCM, and in the second method the GLCM itself is converted directly into the feature matrix. Nearest-neighbour and neural-network classifiers are used for feature matching. The experimental results show that the GLCM method is superior to linear discriminant analysis, principal component analysis, local binary patterns, and Gabor wavelets; the ORL, FERET, FRAV2D, and Yale B databases gave very good results with the GLCM method for different numbers of grey levels. Divya et al. [21] proposed face recognition based on sorting pixels using the Discrete Wavelet Transform (DWT) and statistical features. The pixel values are sorted in ascending order and bifurcated into two portions, Low Pixel Values (LPV) and High Pixel Values (HPV). The DWT is applied to the LPV to produce the LL, LH, HL, and HH bands, and the LL band coefficients are taken as the transform features, which reduces dimensionality. The statistical measures mean, median, mode, maximum, and standard deviation form the spatial features from the HPV. The transform and spatial features are combined to obtain the final features, and an ANN is used as the classifier.

  3. PROPOSED RESEARCH

    An innovative compression-based face authentication approach using a hybrid domain technique is proposed in this research for the effective identification of a person. The initial compressed features are obtained using HI, DWT, and DDDTDWT. The final compact features are obtained by concatenating the initial features. The ED is used to compare the final features of the test and database images to evaluate the proposed system.

    1. Face Databases

      The publicly available benchmark face databases with face images captured under uncontrolled conditions, viz., ORL, YALE, Extended YALE, JAFFE, and Indian Female, are used to test the proposed model. The performance of the proposed model is evaluated by considering different numbers of samples from the face image databases.

      1. ORL Database:

        The standard Olivetti Research Laboratory (ORL) face database contains face images captured between 1992 and 1994. Ten different images of forty distinct individuals were captured under different facial expressions such as open/closed eyes, smiling/not smiling, and with/without glasses, and under varying lighting conditions. The face images were captured against a dark background in upright, frontal positions. The images are in PGM format and each image is of size 92×112; ten image samples of a single person are shown in Fig 1.

        Fig 1. ORL samples of single person [22]

      2. Indian Female Database:

        It has 11 different images of each of twenty distinct persons, totaling 220 samples. Each image is of size 480×640×3 pixels. Different facial expressions were captured for each person; the image samples, in JPEG format, are shown in Fig 2.

        Fig 2. Indian female database samples [23]

      3. YALE Database:

        It has 165 grayscale images in GIF format of 15 persons, with 11 samples per person. Each image corresponds to a different facial expression or configuration: happy, sad, normal, sleepy, surprised, wink, with glasses, without glasses, centre-light, left-light, and right-light. The samples of a single person are given in Fig 3.

        Fig 3. Images of Yale database [24]

      4. Extended YALE Database:

        It has 16128 image samples of 28 persons with 9 poses and 64 illumination conditions. The face images used in the experiments are cropped and resized to 168×192. The single-subject image samples are shown in Fig 4.

        Fig 4. Images of Extended Yale database [25]

      5. Japanese Female Face Expression (JAFFE)

      It contains 10 distinct persons and twenty different images for each person, totaling 200 image samples. Every image is a 256×256 grayscale image. The database images were captured in upright, frontal positions. Fig 5 shows all the images of a single subject, which are in JPG format.

      Fig 5. Images of JAFFE database [26]

    2. Pre-Processing

      Pre-processing performs operations on the image to enhance its quality. RGB images are converted to grayscale so that features are extracted from 8-bit pixels instead of the 24-bit pixels of RGB images, which reduces hardware complexity and processing time. Resizing converts original images of different dimensions to the required uniform dimensions; the images of all databases are resized to 240×320 in the proposed method.
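      As an illustration of this pre-processing step, the minimal sketch below uses OpenCV, which is an implementation choice not stated in the paper; the 240×320 target size is assumed here to mean 240 rows by 320 columns.

```python
import cv2

def preprocess(image_path, rows=240, cols=320):
    """Load a face image, convert it to grayscale, and resize to 240x320 (assumed rows x cols)."""
    img = cv2.imread(image_path)                     # loaded as 3-channel BGR (24-bit)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # 8-bit grayscale, as described in the text
    # cv2.resize expects (width, height), i.e. (cols, rows)
    return cv2.resize(gray, (cols, rows), interpolation=cv2.INTER_AREA)
```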

    3. Feature Extraction

    A novel scheme merging compressed spatial- and transform-domain features is introduced to extract effective features.

    1. Compressed Spatial Domain Features:

      The Histogram Intensity (HI), which plots the frequency of occurrence of pixels at each intensity level of an image, is used for the spatial-domain features. It gives the total number of pixels corresponding to every intensity level of the image: the x-axis holds the available grey-level intensity values and the y-axis gives the number of pixels at each intensity level, where 0 indicates black and 255 indicates white. An image sample and its corresponding histogram are shown in Fig 6.

      Compression: The histogram coefficients corresponding to the 256 intensity levels are sorted in ascending order and the 56 smallest coefficient values are discarded. The result is a set of histogram coefficients for only 200 intensity levels, which are taken as the initial features, giving a compression ratio of 384:1 relative to the 76800 image pixels.

      Fig 6. Histogram of an image: (a) Image (b) Histogram
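      A minimal sketch of the compressed HI feature extraction described above, using NumPy; the paper does not state whether the 200 retained bin counts keep their sorted order or their original grey-level order, so this sketch keeps them in sorted (ascending) order.

```python
import numpy as np

def hi_features(gray, keep=200):
    """Compressed Histogram Intensity features: 200 dominant of the 256 bin counts."""
    counts, _ = np.histogram(gray, bins=256, range=(0, 256))  # one bin per grey level
    counts_sorted = np.sort(counts)                 # ascending order, as in the paper
    # Discard the 56 least-populated bins; 76800 pixels / 200 features ~ 384:1 compression
    return counts_sorted[-keep:].astype(np.float64)
```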

    2. The Transform Domain Features Set1

      The Discrete Wavelet Transform (DWT) is a critically sampled transform that represents both time and frequency information. It decomposes the signal into four bands using a combination of a wavelet (high-pass) filter and a scaling (low-pass) filter [27]. In digital image processing the transform is applied along the rows of the image with a High Pass Filter (HPF) and a Low Pass Filter (LPF) simultaneously, followed by downsampling by a factor of 2; the same operation is then performed along the columns to obtain four bands. The four sub-band images at each level are one approximation band (LL) and three detail bands corresponding to the vertical (LH), horizontal (HL), and diagonal (HH) details. The 2D-DWT is applied to an image of size 240×320 to decompose it into the four bands as shown in Fig 7. The LL band carries the significant information of the original image and is therefore almost identical to it, as shown in Fig 7(b). The LH, HL, and HH bands carry insignificant information, namely the vertical, horizontal, and diagonal edge details. The initial transform-domain features are taken from the LL band coefficients, and the three detail bands are discarded since their information is insignificant.

      Compression: The LL band contains only one-fourth of the DWT coefficients; discarding the LH, HL, and HH bands rejects three-fourths of the coefficients, which reduces the number of features and increases the speed of computation through compression.

      Fig 7. DWT of an image: (a) Original image (240×320) (b) DWT LL band (120×160)
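      A sketch of the first transform-domain feature set using PyWavelets; the wavelet filter is not named in the paper, so the Haar wavelet used here is an assumption.

```python
import numpy as np
import pywt

def dwt_ll_features(gray):
    """One-level 2-D DWT; keep only the LL (approximation) band as features."""
    ll, (lh, hl, hh) = pywt.dwt2(gray.astype(np.float64), 'haar')
    # For a 240x320 input the LL band is 120x160 = 19200 coefficients (4:1 compression)
    return ll.ravel()
```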

    3. The Transform Domain Features Set 2:

      The Double Density Dual Tree Discrete Wavelet Transform (DDDTDWT) is used to generate the second set of transform-domain features. This transform, developed for de-noising, performs substantially better than the critically sampled DWT and is also nearly shift-invariant. It combines the characteristics of the double-density DWT and the dual-tree DWT: the structure of the double-density dual-tree DWT [28] contains two oversampled iterated filter banks operating in parallel, as in the dual-tree DWT. The DDDTDWT is used for image de-noising, image quality improvement, segmentation, motion estimation, and compensation. The fifth sub-band, containing 4800 coefficients, is taken as the initial feature set.

      Compression: The DDDTDWT is applied to the pre-processed face image of dimensions 240×320, and the fifth band, of dimension 4800, is taken as the third set of initial features, giving a compression ratio of 16:1.
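      The DDDTDWT is not available in common Python libraries, so the sketch below only shows how the fifth sub-band would be selected and flattened once a decomposition routine is available; `dddtdwt2` is a hypothetical placeholder for an implementation of the filter bank of [28].

```python
import numpy as np

def dddtdwt_band5_features(gray, dddtdwt2):
    """Third feature set: coefficients of the fifth DDDTDWT sub-band.

    `dddtdwt2` is assumed to be a user-supplied function implementing the
    double-density dual-tree DWT [28] and returning a list of sub-band arrays.
    """
    subbands = dddtdwt2(gray.astype(np.float64))   # hypothetical decomposition call
    band5 = np.asarray(subbands[4])                # fifth sub-band (index 4)
    # The paper reports 4800 coefficients for a 240x320 input (16:1 compression)
    return band5.ravel()
```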

    4. Concatenation:

    The original resized face images of size 240×320, i.e., 76800 pixels, are used to identify a person. The effective final compressed features are obtained by concatenating the initial HI, DWT, and DDDTDWT features of lengths 200, 19200, and 4800, respectively, resulting in 24200 coefficients. The original image dimension of 76800 is thus reduced to a final feature dimension of 24200, corresponding to a compression ratio of 3.17:1.
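    Concatenating the three sets, under the same assumptions as the sketches above; the dimension check mirrors the 200 + 19200 + 4800 = 24200 count stated in the text.

```python
import numpy as np

def final_features(gray, dddtdwt2):
    """Concatenate compressed HI, DWT-LL and DDDTDWT band-5 features."""
    f = np.concatenate([hi_features(gray),
                        dwt_ll_features(gray),
                        dddtdwt_band5_features(gray, dddtdwt2)])
    # 200 + 19200 + 4800 = 24200 coefficients; 76800 / 24200 ~ 3.17:1 compression
    return f
```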

    4. Euclidean Distance (ED):

    The final database features and the test features are compared using the ED given in equation (1) to compute the performance parameters; it measures the distance between the test and database feature vectors. A small distance indicates that the images are similar.

    $ED = \sqrt{\sum_{i=1}^{N} (X_i - Y_i)^2}$        (1)

    where $X_i$ and $Y_i$ are the $i$-th database and test image features, respectively, and $N$ is the number of features.
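    A minimal sketch of the matching step of equation (1), assuming the database features are stored row-wise in a NumPy array; the identity of the nearest enrolled vector is returned when its distance falls below a decision threshold, as implied by the threshold-based analysis in Section 5.

```python
import numpy as np

def match(test_feat, db_feats, db_labels, threshold):
    """Return the label of the closest database entry, or None if no match."""
    dists = np.linalg.norm(db_feats - test_feat, axis=1)  # ED of eq. (1) per database row
    i = int(np.argmin(dists))
    return (db_labels[i], dists[i]) if dists[i] <= threshold else (None, dists[i])
```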

  4. PROPOSED ALGORITHM

    Problem Definition: A face identification system is to be developed to recognize persons for numerous current applications. The final features are generated by fusing compressed transform- and spatial-domain features, as given in Table 1.

    Objectives: Human beings are to be recognized efficiently from face images under variations in pose and intensity, with the following goals:

    1. To increase the Peak Recognition Rate (PRR) and Optimum Recognition Rate (ORR)

    2. To reduce the error rates

    TABLE 1. PROPOSED FACE RECOGNITION ALGORITHM

    Input: Face images from standard database

    Output: Human Recognition based on face images and evaluation of performance parameters

    1. The standard face image databases, viz., ORL, YALE, Extended YALE, JAFFE, and Indian Female, are used to test the proposed method.

    2. The face images of different sizes from the different databases are resized to a uniform size of 240 x 320, and colour images are converted to grayscale.

    3. Initial features are extracted using HI, LL band of DWT and DDDTDWT.

    4. The histogram is applied to the pre-processed face image of size 240 X 320 = 76800 pixels to obtain HI coefficients of dimension 256. Only the 200 most significant coefficients are retained as the first set of initial features, giving a compression ratio of 384:1.

    5. The DWT is applied to the pre-processed face image and the first-level LL band of size 120 X 160 = 19200 is taken as the second set of initial features. The compression ratio is 4:1.

    6. The DDDTDWT is applied to the pre-processed face image and the fifth band, of dimension 4800, is taken as the third set of initial features. The compression ratio is 16:1.

    7. The final features are obtained by concatenating the HI, DWT, and DDDTDWT initial features into a vector of dimension 24200. The compression ratio is 3.17:1.

    8. The ED is used to find the distance between the database and test face image features to evaluate the proposed model. A consolidated sketch of these steps follows the table.
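    Putting the steps of Table 1 together, a hedged end-to-end sketch that reuses the hypothetical helper functions sketched in Section 3 (`preprocess`, `final_features`, `match`); it is illustrative only, not the authors' implementation.

```python
import numpy as np

def enroll(image_paths, labels, dddtdwt2):
    """Build the database feature matrix from enrolled face images."""
    feats = np.stack([final_features(preprocess(p), dddtdwt2) for p in image_paths])
    return feats, list(labels)

def identify(image_path, db_feats, db_labels, dddtdwt2, threshold):
    """Identify a test face image against the enrolled database using eq. (1)."""
    test_feat = final_features(preprocess(image_path), dddtdwt2)
    return match(test_feat, db_feats, db_labels, threshold)
```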

  5. PERFORMANCE EVALUATION

In this section, the definitions of the measured parameters, the performance analysis, and the comparison results of the proposed method are presented.

A. Definitions of Measuring Parameters:

  • False Acceptance Rate (FAR):

    It is the ratio of the number of unauthorized persons accepted as authorized persons to the total number of unauthorized persons, as given in equation (2); these measures are also illustrated in the sketch after this list of definitions.

    $FAR = \frac{\text{Number of unauthorized persons accepted}}{\text{Total number of unauthorized persons}}$        (2)

  • False Rejection Ratio (FRR):

It is the ratio of the number of authorized persons rejected as unauthorized persons to the total number of authorized persons, as given in equation (3).

$FRR = \frac{\text{Number of authorized persons rejected}}{\text{Total number of authorized persons}}$        (3)

  • Equal Error Rate (EER):

    It is the point at which the FAR and FRR curves meet at a specific threshold value; the EER represents the trade-off between the FRR and FAR values. The lower the EER, the better the performance of the algorithm.

  • Total Success Rate (TSR):

    It is the ratio of the number of persons correctly matched to the total number of persons in the predefined database, as given in equation (4).

    $TSR = \frac{\text{Number of persons correctly matched}}{\text{Total number of persons in the database}} \times 100$        (4)

  • Peak RR (PRR): It is the highest value of the TSR obtained.

  • Optimal RR (ORR): It is the value of TSR obtained corresponding to the intersection of FRR and FAR.
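  The exact counting protocol over the in-database and out-of-database splits is not spelled out in the paper, so the sketch below is only an illustration of the definitions in equations (2)-(4), assuming per-image best-match distances for persons inside the database (genuine) and outside the database (impostor).

```python
import numpy as np

def evaluate(genuine_dists, impostor_dists, correct_flags, thresholds):
    """Compute FAR, FRR, TSR per threshold, plus EER, PRR and ORR.

    genuine_dists  : best-match distances for persons inside the database
    impostor_dists : best-match distances for persons outside the database
    correct_flags  : True where the best match also has the correct identity
    """
    genuine = np.asarray(genuine_dists)
    impostor = np.asarray(impostor_dists)
    correct = np.asarray(correct_flags)
    far, frr, tsr = [], [], []
    for t in thresholds:
        far.append(np.mean(impostor <= t))                   # eq. (2): impostors accepted
        frr.append(np.mean(genuine > t))                     # eq. (3): genuine rejected
        tsr.append(np.mean((genuine <= t) & correct) * 100)  # eq. (4): correct matches (%)
    far, frr, tsr = map(np.array, (far, frr, tsr))
    i = int(np.argmin(np.abs(far - frr)))   # FAR/FRR crossover
    eer = (far[i] + frr[i]) / 2             # Equal Error Rate
    prr = tsr.max()                         # Peak Recognition Rate
    orr = tsr[i]                            # Optimum Recognition Rate at the crossover
    return far, frr, tsr, eer, prr, orr
```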

B. Performance Analysis:

The performance of the proposed method is analysed by computing performance parameters for the variations in the threshold values considering various standard face databases.

  1. ORL Face Database:

    The performance analysis of the proposed method on the ORL database for variations in the number of Persons Inside the Database (PID) and Persons Outside the Database (POD) is discussed based on the EER, PRR, and ORR. It is observed from Table 2 that the percentage values of PRR and ORR decrease with an increase in PID, whereas the EER values increase with an increase in PID.

    TABLE 2. ORL DATABASE RESULTS

    PID | POD | %EER | %PRR | %ORR
    10 | 10 | 12 | 95 | 87
    10 | 20 | 12 | 95 | 87
    10 | 30 | 11 | 97 | 89
    20 | 10 | 20 | 90 | 80
    20 | 20 | 20 | 90 | 80
    30 | 10 | 20 | 90 | 80

    The variations of percentage FRR, FAR, and TSR with the threshold value are shown in Fig 8 for the PID and POD combination of 10 and 30; the EER, PRR, and ORR values are noted from the figure. It is observed that the FRR values decrease whereas the FAR values increase with an increase in the threshold, and the TSR values rise with rising threshold values.

    Fig 8. Variation of results for the PID and POD combination of 10 and 30

  2. Indian Females Face Database:

    The results of the proposed method on the Indian Female face database for variations in PID and POD are discussed based on the EER, PRR, and ORR. It is observed from Table 3 that the percentage values of PRR and ORR decrease with an increase in PID, whereas the EER values increase with an increase in PID.

    TABLE 3. RESULTS USING THE INDIAN FEMALE FACE DATABASE

    PID | POD | %EER | %PRR | %ORR
    7 | 11 | 18 | 100 | 82
    9 | 11 | 18 | 100 | 82
    11 | 7 | 20 | 100 | 80
    11 | 9 | 18 | 100 | 82
    11 | 11 | 18 | 100 | 82

    The variations of percentage FRR, FAR, and TSR with the threshold value are shown in Fig 9 for the PID and POD combination of 7 and 11; the EER, PRR, and ORR values are noted from the figure. It is observed that the FRR values decrease whereas the FAR values increase with an increase in the threshold, and the TSR values rise with rising threshold values.

    Fig 9. Variation of results for the PID and POD combination of 7 and 11

  3. Yale Face Database:

    The performance analysis of the proposed method on the Yale face database for variations in PID and POD is discussed based on the EER, PRR, and ORR. It is observed from Table 4 that the percentage values of PRR and ORR decrease with an increase in PID, whereas the EER values increase with an increase in PID.

    TABLE 4. RESULTS USING THE YALE FACE DATABASE

    PID | POD | %EER | %PRR | %ORR
    5 | 5 | 9 | 100 | 90
    5 | 7 | 7 | 100 | 85
    5 | 10 | 7 | 100 | 85
    7 | 5 | 10 | 100 | 83
    10 | 5 | 10 | 80 | 80

    The variations of percentage FRR, FAR, and TSR with the threshold value are shown in Fig 10 for the PID and POD combination of 5 and 5; the EER, PRR, and ORR values are noted from the figure. It is observed that the FRR values decrease whereas the FAR values increase with an increase in the threshold, and the TSR values rise with rising threshold values.

    Fig 10. Variation of results for the PID and POD combination of 5 and 5

  4. Extended Yale Face Database:

    The performance analysis of the proposed method on the Extended Yale face database for variations in PID and POD is discussed based on the EER, PRR, and ORR. It is observed from Table 5 that the percentage values of PRR and ORR decrease with an increase in PID, whereas the EER values increase with an increase in PID.

    TABLE 5. RESULTS USING THE EXTENDED YALE FACE DATABASE

    PID | POD | %EER | %PRR | %ORR
    5 | 10 | 0 | 100 | 100
    10 | 10 | 9 | 100 | 92
    10 | 15 | 9 | 100 | 92
    15 | 15 | 10 | 100 | 90
    15 | 10 | 10 | 100 | 90

    The variations of percentage FRR, FAR, and TSR with the threshold value are shown in Fig 11 for the PID and POD combination of 10 and 10; the EER, PRR, and ORR values are noted from the figure. It is observed that the FRR values decrease whereas the FAR values increase with an increase in the threshold, and the TSR values rise with rising threshold values.

    Fig 11. Variation of results for the PID and POD combination of 10 and 10

  5. JAFFE Face Database:

The performance analysis of the proposed method on the JAFFE face database for variations in PID and POD is discussed based on the EER, PRR, and ORR. It is observed from Table 6 that the percentage values of PRR and ORR decrease with an increase in PID, whereas the EER values increase with an increase in PID.

TABLE 6. RESULTS USING THE JAFFE FACE DATABASE

PID | POD | %EER | %PRR | %ORR
2 | 5 | 0 | 100 | 100
4 | 5 | 0 | 100 | 100
5 | 2 | 13 | 100 | 82
5 | 4 | 11 | 100 | 90
5 | 5 | 10 | 100 | 90

The variations of percentage FRR, FAR, and TSR with the threshold value are shown in Fig 12 for the PID and POD combination of 5 and 4; the EER, PRR, and ORR values are noted from the figure. It is observed that the FRR values decrease whereas the FAR values increase with an increase in the threshold, and the TSR values rise with rising threshold values.

Fig 12. Variation of results for the PID and POD combination of 5 and 4

C. Comparison of the Proposed Method with Existing Methods:

The PRR of the proposed method is compared with existing systems on the ORL face database presented by Mohannad A. Abuzneid and Ausif Mahmood [29] and Xiaoyu Xu et al. [30], as given in Table 7. The PRR of the proposed method is better than that of the method presented by Xiaoyu Xu et al. The PRR is the same for the proposed method and the existing system of Abuzneid and Mahmood; however, the features extracted in the proposed method are an amalgamation of compressed HI, DWT, and DDDTDWT, hence the computation speed is higher in a real-time implementation and the hardware architecture is less complex. The DDDTDWT is nearly shift-invariant and well suited for the extraction of effective features. The amalgamation of compressed HI, DWT, and DDDTDWT reduces the total number of effective features, which helps real-time identification systems reduce computation time.

TABLE 7. PRR COMPARISON USING THE ORL FACE DATABASE

Authors | Method | PRR
Mohannad A. Abuzneid and Ausif Mahmood [29] | LBPH + KNN | 0.93
Xiaoyu Xu et al. [30] | Fisherface (PCA + LDA) | 0.85
Proposed Model | HI + DWT + DDDTDWT | 0.93

6. CONCLUSION

Face identification is one of the most rapidly growing biometric modalities, as it is used in almost all consumer electronics and organizations. In this research, Compact Hybrid Domain based Human Recognition using Face Images is proposed. Three sets of compressed features are extracted using HI, DWT, and DDDTDWT. The number of features extracted from HI is only 200 for an image size of 240 X 320. The DWT features are extracted from the LL band of size 120 X 160, giving 19200 features. The DDDTDWT features are extracted from the fifth sub-band, giving 4800 features. The final features are obtained by concatenating all three sets, for a total of 24200 features. The test face images are compared with the face images in the database using the ED to verify the performance of the system. The experimental results show that the proposed system outperforms existing techniques. In future work, the ED may be replaced by a neural network or support vector machine classifier to improve the computation speed in real-time scenarios.

REFERENCES

  1. George Azzopardi, Antonio Greco, Alessia Saggese, and Mario Vento, "Fusion of Domain-Specific and Trainable Features for Gender Recognition from Face Images", IEEE Access, vol 6, pp 24171-24183, 2018.

  2. Saket Karve and Vasisht Shende, "A Comparative Analysis of Feature Extraction Techniques for Face Recognition", IEEE International Conference on Communication, Information and Computing Technology, pp 1-6, 2018.

  3. Haoxi Li and Haifang Hi, "Age-Related Factor Guided Joint Task Modelling Convolutional Neural Network for Cross-Age Face Recognition", IEEE Transactions on Information Forensics and Security, vol 13, issue 9, pp 2383-2392, 2018.

  4. Zhao Jian, Zhang Chao, Zhang Shunli, LU Tingting, SU Weiwen and JiaJian, "Pre-Detection and Dual-Dictionary Sparse Representation based Face Recognition Algorithm in Non-Sufficient Training Samples", IEEE Journal of Systems Engineering and Electronics, vol- 29, issue 1, pp.196-202, 2018.

  5. Mikhail V. Alyushin and Alexander A. Lyubshov, "The Viola-Jones Algorithm Performance Enhancement for a Persons Face Recognition Task in the Long-wave Infrared Radiation Range", IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering, pp 1813- 1816, 2018.

  6. Ya Su, Zha Liu, and Mengyao Wang, "Sparse Representation-based Face Recognition Against Expression and Illumination", IET International Journal on Image Processing, vol 12, issue 5, pp 826-832, 2018.

  7. Yanhong Zhang, Kun Shang, Jun Wang, Nan Li, and Monica Zhang, "Patch Strategy for Deep Face Recognition", IET International Journal on Image processing, vol 12, issue 5, pp 819-825, 2018.

  8. Zhang Yu, Fen Liu, Rongtao Liao, Yixi Wang, HaoFeng, and Xiaojun Zhu, "Improvement of Face Recognition Algorithm based on Neural Network", IEEE International Conference on Measuring Technology and Mechatronics Automation, pp 229 – 234, 2018.

  9. Pattarakamon Rangsee, K B Raja and Venugopal K R, Nibble-Based Face Recognition using Convolution of Hybrid Features, IEEE International Conference on Imaging, Signal Processing and Communication, pp 112-116, July 27-29, 2019.

  10. Xiong Xiaoqian, "Research and Development of Face Recognition System based on ARM Architecture", IEEE International Conference on Intelligent Transportation, Big Data and Smart City, pp 113 – 116, 2018.

  11. Xian Geng, Zhi-Hua Zhou, and Kate Smith-Miles, "Individual Stable Space: An Approach to Face Recognition Under Uncontrolled Conditions", IEEE Transactions on Neural Networks, vol 19, issue 8, pp 1354-1368, 2008.

  12. Stan Z Li and Juwei Lu, "Generalizing Capacity of Face Database for Face Recognition", IEEE International Conference on Automatic Face and Gesture Recognition, pp 402-406, 2018.

  13. Caixia Liu, "The Development Trend of Evaluating Face-Recognition Technology", IEEE International Conference on Mechatronics and Control, pp 1540 – 1544, 2014.

  14. Priyanka V Bankar and Anjali C Pise,"Face Recognition by using GABOR and LBP", IEEE International conference on communication and signal processing, pp 0045 – 0048, 2015.

  15. Soo-Chang Pei, Mei-SHiochen, Yi Yu, Su Hua Tang, and Chun Lin Zhong, "Compact LBP and WLBP Descriptor with Magnitude and Direction for Face Recognition", IEEE International Conference on Image Processing, pp 1067-1071, 2017.

  16. Mehmet KOC, Cihan TOPAL, Histogram of Edge Segment Curvatures for Texture Recognition, Eskisehir Technical University Journal of Science and Technology, pp 784-795, 2018.

  17. Muhammad Nazir, Zahoor Jan, and Muhammad Sajjad, "Facial Expression Recognition using Histogram of Oriented Gradients based Transformed Features", Springer Journal of Networks, Software Tools and Applications, vol 21, issue 1, pp 539-548, 2017.

  18. Fiqri Malik Abdul Azis, Muhammad Nasrun, Casi Setianingsih, and Muhammad Ary Murti, "Face Recognition in Night Day using Method Eigen Face", IEEE International Conference on Signals and Systems, pp 103-108, 2018.

  19. Hae-Min Moon, Chang Ho Seo, and Sung Bum Pan, A Face Recognition System based Convolution Neural Network using Multiple Distance Face, Springer-Journal of Soft Computing, vol 21, issue 17, pp.4995-5002, 2016.

  20. Alaa Eleyan and Hasan Demirel, "Co-Occurrence Matrix and Its Statistical Features as a New Approach for Face Recognition", Turkish Journal of Electrical Engineering & Computer Sciences, vol 19, no 1, pp 97-107, 2011.

  21. Divya Anjanappa, K B Raja and Venugopal K R, Sorting Pixels based Face Recognition using Discrete Wavelet Transform and Statistical Features, IEEE International Conference on Imaging, Signal Processing and Communication, Singapore, pp 150-154, July 27-29, 2019.

  22. AT&T Laboratories Cambridge, The ORL Database of Faces, 1994, http://www.cl.cam.ac.uk/research/dtg/attractive/face database.htm

  23. IIT Kanpur Campus, Indian Face Database, 2002. http://viswww.cs.umass.edu/~vidit/IndianFaceDatabase/

  24. Yale University, The Yale Face Database, 1997, http://cvc.cs.yale.edu/cvc/projects/yalefaces/yalefaces.html

  25. Georghiades, Peter Belhumeur, and David Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose", http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html

  26. Micheal J Lyons, The Japanese Female Face Expression (JAFFE) Database, 1998, http://www.karsl.org/jaffe.html.

  27. S. G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 11, issue 7, pp 674-693, July 1989.

  28. Ivan W Selesnick, "The Double-Density Dual-Tree DWT", IEEE Transactions on Signal Processing, vol 52, issue 5, pp 1304-1314, 2004.

  29. Mohannad A. Abuzneid and Ausif Mahmood, "Enhanced Human Face Recognition using LBPH Descriptor, Multi-KNN, and Back-Propagation Neural Network", IEEE Access, vol 6, pp 20641-20651, 2018.

  30. Xiaoyu Xu, Su Li, and Lan Liu, "Face Recognition based on Multi-Level Histogram Sequence Center-Symmetric Local Binary Pattern and Fisherface", IEEE Advanced Information Technology, Electronic and Automation Control Conference, pp 448-451, 2017.
