A Review Paper on Face Recognition Methodologies

DOI : 10.17577/IJERTV9IS050798


Raghuveer Bohara Information Technology Department

International Institute of Information Technology, Pune (SPPU, Pune)

Ojas Ingale

Information Technology Department International Institute of Information Technology, Pune

(SPPU, Pune)

Gourav Joshi Information Technology Department

International Institute of Information Technology, Pune (SPPU, Pune)

Prof. Anand Bhosale Information Technology Department

International Institute of Information Technology, Pune (SPPU, Pune)

Hitesh Joshi Information Technology Department

International Institute of Information Technology, Pune (SPPU, Pune)

Abstract In the previous few years, the procedures of face recognition have been researched thoroughly. Well-versed reviews of various human face recognition methodologies are provided in this paper. Initially, we offer a summary of face recognition and its applications, followed by a literature review of various face recognition techniques. Several face recognition algorithms are analyzed and elaborated, along with their limitations. The paper also includes brief overviews of various modern approaches, such as neural networks, line edge mapping, and many others, which are widely used nowadays to make the process of face recognition more efficient. Finally, the research results are reviewed and summarized.

  1. INTRODUCTION

    In various fields and disciplines, face recognition is emerging as a modern research problem. Generally, face recognition includes two steps: face detection and face recognition. Face detection means catching or discovering a face in an image. It is followed by recognition, which involves identifying or recognizing the detected face. To date, various effective approaches have been introduced. In [1], a conventional method for distinguishing faces is used, i.e. Eigen faces. The author proposes collecting different profiles in the form of curves, calculating their norm, and differentiating other profiles based on their deviation from the norm. This results in a vector with independent standards, which can further be compared with other vectors. In [2] the author proposes a more complex but effective approach: a combination of KFDA and nearest neighbor, where one performs feature extraction and the other performs recognition. [4] proposes an approach based on the Hidden Markov Model; in this approach, the hardware is also upgraded to achieve better results. The next methodology [6] is one of the most commonly used approaches in machine learning applications: the support vector machine, a simplistic yet efficient machine learning model which can be used to classify profiles into multiple classes. In the next approach [9], the author proposes the use of neural networks for face recognition. This approach uses various algorithms concurrently to obtain the best possible result.

  2. LITERATURE REVIEW

    In this section, we elaborate on different face recognition techniques by reviewing some of the related works. The methodologies include Eigen faces, KFDA with Nearest Neighbor, Hidden Markov Model, SVM, and Neural Networks, and each is discussed in the following subsections.

    1. Eigen Faces

      The Eigen face algorithm is the most commonly used approach when it comes to face recognition. In the Eigen face algorithm, the Eigen faces are the eigenvectors. These eigenvectors are derived from the covariance matrix of the dataset. Eigen faces are also sometimes referred to as ghostly images. The main reason for using the Eigen face approach is that it represents the input data efficiently. This is done by representing each face in terms of the linear combination of Eigen faces. To achieve this, a dimension reduction technique is required. Conventionally, the dimension reduction technique, which is used here, is Principal Component Analysis.

      The author in this paper [1] uses face recognition to mark the attendance of students in a class. The author starts by elaborating on what Principal Component Analysis is, stating that it is used to examine face recognition issues by serving as a dimension reduction technique. It is also mentioned that it is comprehended as Eigen face projection. Principal component analysis is used to reduce the dimension of the data and accurately decompose the face structure into orthogonal principal components, which we know as 'Eigen faces'. In simple words, PCA is used to remove information that is not useful to generate Eigen faces. Moreover, PCA gives a suitable representation of the face space, which otherwise forms a cluster.

      Furthermore, it is also stated that PCA has major applications in various fields, such as image analysis, identifying anonymous faces, and dimensional data reduction. A comparison of test images with training images is done by calculating the distance between their feature vectors. If this distance is greater than a particular threshold value, the test image is identified as unknown; otherwise, the image is recognized as the same as the training image. One of PCA's limitations is also mentioned: to project images into Eigen faces, PCA needs a full-frontal image to be presented each time; otherwise, it results in poor performance.

      The algorithm used here by the author includes 8 steps in total. The first step is the preparation of the training data set; the images in this data set should have an N x N resolution. In the next step, the column vector is obtained by converting the images. After that, the unique features of the images need to be found to normalize the vectors. Then the average face vector is calculated. Next, the eigenvectors are found by calculating the covariance matrix. In the following step, the best Eigen faces are selected based on facial patterns. This is followed by the representation of each database image as a combination of all 'K' eigenvectors. In the final step, each training database image is represented as a weighted vector.
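      As a concrete illustration of these steps, the following is a minimal NumPy sketch of the eigenface pipeline described above, covering flattening, mean subtraction, eigenvector computation, projection onto the K best Eigen faces, and the distance-threshold recognition rule. The function names, the default of 12 retained eigenfaces, and the threshold parameter are illustrative assumptions rather than the exact implementation of [1].

import numpy as np

def train_eigenfaces(images, k=12):
    """Build an eigenface model from equally sized grayscale images.

    images : list of 2-D arrays, all N x N
    k      : number of eigenfaces to keep (12 is the count used in [1])
    """
    # Steps 1-2: flatten each N x N image into a row vector
    X = np.stack([img.astype(np.float64).ravel() for img in images])   # (M, N*N)

    # Steps 3-4: normalize by subtracting the average face vector
    mean_face = X.mean(axis=0)
    A = X - mean_face

    # Step 5: eigenvectors of the covariance matrix, via the small M x M trick
    # (eigenvectors of A A^T are mapped back to the N*N-dimensional space)
    L = A @ A.T
    eigvals, eigvecs_small = np.linalg.eigh(L)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first

    # Step 6: keep the K best eigenfaces
    eigenfaces = A.T @ eigvecs_small[:, order[:k]]                      # (N*N, k)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

    # Steps 7-8: represent every training image as a K-dimensional weight vector
    weights = A @ eigenfaces                                            # (M, k)
    return mean_face, eigenfaces, weights

def recognize(test_img, mean_face, eigenfaces, weights, threshold):
    """Return the index of the closest training face, or -1 if unknown."""
    w = (test_img.astype(np.float64).ravel() - mean_face) @ eigenfaces
    dists = np.linalg.norm(weights - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else -1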

      Fig. 1 System Architecture of [1]

      The architecture of the system proposed by the author consists of 6 major components. First is image acquisition, which consists of receiving the images from the students. The next component is preprocessing, where the images are cropped and converted to gray level. After that comes Eigen face generation, where, using the technique mentioned before, multiple Eigen faces are generated, of which the 12 best are taken for the study. Then face recognition is done by comparing the distances between the vectors of the test image and the database image. This is followed by face database generation, where the identified images are inserted into the original database. Lastly, the attendance of the recognized person is marked in the database.

      In conclusion, the author explains how facial recognition helps in building a secure environment. The author also states that, since facial recognition can be used to update and manage attendance automatically, it has a great advantage in the field of education over all other biometric techniques.

    2. KFDA with Nearest Neighbor

      Starting with [2], the author begins by mentioning various applications of face recognition in several fields. Further, the author mentions that although recognizing faces is a minor task for humans, there are still many challenges in preparing machines to do it. Proceeding with this, the author mentions various techniques/approaches that are used to achieve this efficiently, such as principal component analysis (PCA), independent component analysis (ICA), the 2D log-polar Gabor transform, and neural networks. The author also mentions that they have used Kernel Fisher's Discriminant Analysis and Support Vector Machines to recognize faces in their previous work [3]. This time the author has proposed a different combination of 2 algorithms: Kernel Fisher's Discriminant Analysis (KFDA) and Nearest Neighbor (NN). It is also stated that KFDA is used for feature extraction and NN for classification. Next, the author explains the KFDA algorithm, stating initially that KFDA is derived from LDA. LDA is used to increase the distance between two classes and decrease the scatter within each class. The reason for combining LDA with a kernel function is that ample data with high-dimensional vectors can be processed.

      The main aim here is to deal with abundant data and dimensions while simultaneously classifying the data, and this would not be as efficient if only LDA were used. Then the KFDA algorithm is presented, divided into 7 steps. The initial step is to calculate the matrices K and W, where K is the kernel matrix and W is the diagonal matrix. In the next step, K is decomposed using eigenvector decomposition. After that, the eigenvectors and eigenvalues of the system are calculated, followed by the calculation of the projection eigenvectors from the resulting coefficient vectors. Lastly, the projections of a test point z onto these eigenvectors are computed.
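      To make the algorithm above more concrete, the following is a compact two-class sketch of kernel Fisher discriminant analysis with an RBF kernel. The kernel width gamma, the regularization term, and the restriction to two classes are simplifying assumptions; the multi-class setting in [2] requires solving a generalized eigenvalue problem instead of the closed-form direction used here.

import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """RBF kernel matrix between row-vector sets A and B (assumed width gamma)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfda_fit(X, y, gamma=1e-3, reg=1e-3):
    """Two-class kernel Fisher discriminant: returns the dual coefficients alpha."""
    K = rbf_kernel(X, X, gamma)                      # full kernel matrix
    n = len(y)
    class_means, N = [], np.zeros((n, n))
    for c in (0, 1):
        idx = np.where(y == c)[0]
        K_c = K[:, idx]                              # columns for class c
        class_means.append(K_c.mean(axis=1))         # kernel class mean
        n_c = len(idx)
        # within-class scatter in the kernel-induced feature space
        N += K_c @ (np.eye(n_c) - np.full((n_c, n_c), 1.0 / n_c)) @ K_c.T
    # closed-form discriminant direction for two classes:
    # alpha is proportional to (N + reg*I)^(-1) (mean_0 - mean_1)
    alpha = np.linalg.solve(N + reg * np.eye(n), class_means[0] - class_means[1])
    return alpha

def kfda_project(X_train, alpha, Z, gamma=1e-3):
    """Project test points Z onto the learned discriminant direction."""
    return rbf_kernel(Z, X_train, gamma) @ alpha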

      Then the author elaborates on the nearest neighbor algorithm, asserting its advantage of being simple to design as a classification method. In the nearest neighbor algorithm, classification is done based on the distance between the test points and the data points. The Euclidean distance between the test point and the training points is calculated and then compared. The test point is then assigned to the class whose training point is nearest to it.
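      A minimal sketch of this Euclidean-distance nearest neighbor rule, operating on whatever feature vectors are supplied (for example, the KFDA projections above):

import numpy as np

def nearest_neighbor_classify(train_feats, train_labels, test_feats):
    """Label each test point with the class of its nearest training point (Euclidean)."""
    train = np.asarray(train_feats, dtype=float).reshape(len(train_labels), -1)
    tests = np.asarray(test_feats, dtype=float).reshape(-1, train.shape[1])
    preds = []
    for z in tests:
        dists = np.linalg.norm(train - z, axis=1)    # distance to every training point
        preds.append(train_labels[int(np.argmin(dists))])
    return np.array(preds)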

      The author then begins with the experiments and their results. The experiments are carried out in a specific order, starting with creating the database and creating training data vectors and testing data vectors. After that, KFDA and NN are used to process the data vectors and classify the testing data, respectively. Ultimately, the success rate of this approach is calculated using 2-fold cross-validation. In 2-fold cross-validation, once the class of the data is determined, the data is randomly partitioned into two folds of equal size. One of them is called the training subset and the other one the testing subset. After this, the whole process is performed again with the roles of the folds swapped. Ten runs of this 2-fold cross-validation are performed. The author has also mentioned the parameters that must be determined before running the program.
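      A hedged sketch of this evaluation protocol, assuming scikit-learn is available and that the classifier is supplied as a pair of fit/predict callables; the stratified splitting and the seed handling are illustrative choices rather than details taken from [2]:

import numpy as np
from sklearn.model_selection import StratifiedKFold

def repeated_two_fold_accuracy(features, labels, fit_fn, predict_fn, n_runs=10, seed=0):
    """Estimate accuracy with ten runs of 2-fold cross-validation.

    features : 2-D NumPy array, labels : 1-D NumPy array.
    fit_fn(X, y) returns a trained model; predict_fn(model, X) returns predicted labels.
    """
    rng = np.random.RandomState(seed)
    scores = []
    for run in range(n_runs):
        # each run re-partitions the data into two equal, class-balanced folds
        cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=int(rng.randint(1 << 30)))
        for train_idx, test_idx in cv.split(features, labels):
            model = fit_fn(features[train_idx], labels[train_idx])
            preds = predict_fn(model, features[test_idx])
            scores.append(np.mean(preds == labels[test_idx]))
    return np.mean(scores), np.std(scores)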

      Fig 2. Experimental design in [2]

      These parameters are the feature vector's length n, the kernel parameter, and the number of neighbors. In conclusion, the author reports a success rate of 83.10% with a standard deviation of 3.37%. It is also stated that, although this parameter set is the one presented in the previous section, it might be possible to improve the performance of the proposed system by changing the parameter set.

    3. Hidden Markov Model

      In the introduction of [4], the author begins by stating that face recognition is a significant research area due to its various applications in different fields. The author also mentions that there has been enormous development in face recognition in the past few decades, but it has also resulted in complex algorithms that lead to long processing times and high energy consumption. Then some of the recent works in which the Hidden Markov Model is used for face recognition are mentioned. One of these works [5] is emphasized by describing its method and mentioning its success rate, i.e. 99%. The author specifies that their work is based on the algorithm proposed in [5]. Here a parallel computing technique is used, i.e. PC + FPGA.

      The first step in the training process is filtering, which includes the elimination of the highlights of the camera flash in the person's eyes. This is done by preprocessing the image with a minimum order-statistic filter. After this, block extraction is done, in which the image is divided into blocks of a specific height and width. This is followed by feature extraction, in which SVD coefficients are computed for each block and a subset of these coefficients is selected. After feature extraction, quantization is done, which means converting the SVD coefficients into a finite range of discrete values. Each quantized block is thus marked with an integer number, and this sequence of numbers is recognized as the observation vector.
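      The following is a rough sketch of these four preprocessing steps using NumPy and SciPy; the block size, the number of SVD coefficients, and the number of quantization levels are illustrative assumptions rather than the values used in [4] or [5]:

import numpy as np
from scipy.ndimage import minimum_filter

def observation_sequence(image, block_h=10, block_w=92, n_coeffs=3, n_levels=16):
    """Turn a grayscale face image into a sequence of discrete observation symbols."""
    # Step 1: minimum order-statistic filtering to suppress flash highlights in the eyes
    img = minimum_filter(image.astype(np.float64), size=3)

    # Step 2: split the image into horizontal blocks (top-to-bottom scan)
    blocks = [img[r:r + block_h, :block_w]
              for r in range(0, img.shape[0] - block_h + 1, block_h)]

    # Step 3: keep the first few singular values of each block as its features
    feats = np.array([np.linalg.svd(b, compute_uv=False)[:n_coeffs] for b in blocks])

    # Step 4: quantize each coefficient into a finite range of discrete levels, then
    # combine the per-coefficient levels into a single integer symbol per block
    # (a real system would fix the quantization range from the training data)
    lo, hi = feats.min(axis=0), feats.max(axis=0)
    levels = np.clip(((feats - lo) / (hi - lo + 1e-12) * n_levels).astype(int),
                     0, n_levels - 1)
    symbols = levels @ (n_levels ** np.arange(n_coeffs))
    return symbols                                   # one integer observation per block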

      All images go through these four steps, and then a seven-state HMM is applied to them. For each person in the database, an HMM is trained using the Baum-Welch algorithm. Here the author specifies that the HMM they have used is not the entire HMM, i.e. not like the HMM used in [5] on which it is based. The HMM proposed by the author has the same number of states as the HMM in [5], but the number of observations is much smaller. Due to this, the time required for training these HMMs is also decreased. Another advantage of this method is that the training time does not increase rapidly with the number of observation symbols. Because of this, the number of SVD coefficients and quantization levels can be increased without elevating the training time.

      After the training phase, the author describes the recognition phase. The first four steps of the recognition phase are the same as those of the training phase. After this, an observation sequence representing the unknown face image is formed, and the probability of this sequence given each trained HMM is calculated.
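      A minimal sketch of this train-then-score scheme, assuming the third-party hmmlearn library for Baum-Welch training and log-likelihood scoring of discrete observation sequences; the iteration count and the per-person dictionary of models are illustrative assumptions:

import numpy as np
from hmmlearn import hmm   # assumed dependency; CategoricalHMM models discrete symbols

def train_person_hmm(observation_sequences, n_states=7, n_iter=50):
    """Fit a 7-state HMM (Baum-Welch) to all observation sequences of one person."""
    X = np.concatenate(observation_sequences).reshape(-1, 1)
    lengths = [len(seq) for seq in observation_sequences]
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=n_iter)
    model.fit(X, lengths)       # symbol alphabet is inferred from the training data
    return model

def recognize_face(symbols, person_models):
    """Return the person whose trained HMM assigns the highest likelihood to the sequence."""
    seq = np.asarray(symbols).reshape(-1, 1)
    scores = {name: m.score(seq) for name, m in person_models.items()}
    return max(scores, key=scores.get)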

      Starting with the system implementation, it must be mentioned that it has two parts. The first is the software implementation on the PC and the second is the FPGA implementation. In the software implementation, the modules of the training process and the recognition process are developed. In the FPGA implementation, the primary device used is the SPARTAN-3. Furthermore, there are four more modules: the serial transceiver, the receiving buffer, the transmitting buffer, and a module for computing and handling the data received from the PC.

      Fig 3. System Architecture in [4]

      Elaborating on the results, the author has reported some interesting figures. The results show that the approach used by the author takes much less time in training the system and recognizing the faces. The success rate using different coefficient sets is also mentioned, and it is between 99% and 100% for each coefficient set. In conclusion, the author states that, compared to a PC-only system, the combination of FPGA and PC can perform the process of face recognition faster.

    4. SVM

      The author here [6] starts the introduction by elaborating on the various applications of face recognition in different fields and also emphasizes why face recognition is important nowadays for achieving various goals. Then the author talks about face detection and the algorithms used for it, and also mentions a dataset [7] that does not require face detection before face recognition. Furthermore, it is mentioned that, though face recognition is not the most efficient and reliable biometric technique, its advantage is that the subject's cooperation is not necessary. Then the algorithms most frequently used for face recognition are mentioned. These include the DeepFace system, a nine-layer deep neural network used by Facebook. After this, a new feature extraction method based on Hough transform peaks [8] is mentioned, in which swarm optimization is used along with these peaks to select optimal features from the feature vector; the Eigen face approach is then explained in brief.

      Another new face recognition technique is mentioned, learned local Gabor patterns, which is used for face representation and recognition. While elaborating on these different techniques, the author states that faces can also be recognized under poor lighting, with long or short hair, etc., using traditional methods for feature extraction and classification. To recognize more than two faces in an image, a multi-class classifier should be used, and this is why the author's proposed methodology includes a multi-class SVM for classification.

      Starting with the proposed methodology, the author specifies that there are three phases: face detection, feature extraction, and classification. Beginning with the first phase, face detection, the author states that they have used the Viola-Jones algorithm, and this technique is capable of detecting single and multiple faces in an image and even in a video. After this comes feature extraction, which is done using the Bag of Features (BoF) method, which includes the speeded-up robust features (SURF) mechanism for feature recognition and classification of features.
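          A short sketch of Viola-Jones face detection using the pretrained Haar cascade that ships with OpenCV; the cascade file and detection parameters below are library defaults, not values reported in [6]:

import cv2  # OpenCV ships a pretrained Viola-Jones (Haar cascade) frontal-face detector

def detect_faces(image_path):
    """Return the bounding boxes and cropped grayscale faces found in an image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return boxes, [gray[y:y + h, x:x + w] for (x, y, w, h) in boxes]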

          This algorithm's implementation is divided into three parts. The first is points-of-interest selection, where the feature points are selected. In the second part, the pixel intensity distribution is described with respect to the neighboring points. Finally, in the third part, the features of different images are compared. The features of the feature vectors, in which the points of interest are stored, are quantized using the K-means clustering algorithm to form clusters. Finally, classification is done using a multi-class SVM. The author specifically mentions that they upgraded to a multi-class SVM to reduce the number of problems they faced while using a binary-class SVM.
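          The following is a hedged sketch of such a bag-of-features pipeline with a multi-class SVM, using scikit-learn for clustering and classification. Because SURF is only available in non-default OpenCV contrib builds, ORB descriptors are used here as a freely available stand-in, and the vocabulary size and SVM parameters are illustrative assumptions:

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(face_images, n_words=100):
    """Cluster local descriptors into a visual vocabulary (bag-of-features codebook)."""
    orb = cv2.ORB_create()                       # ORB as a stand-in for SURF
    descs = []
    for img in face_images:
        _, d = orb.detectAndCompute(img, None)   # points of interest + descriptors
        if d is not None:
            descs.append(d.astype(np.float64))
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(np.vstack(descs))

def bof_histogram(img, vocab):
    """Quantize an image's descriptors against the vocabulary into a word histogram."""
    orb = cv2.ORB_create()
    _, d = orb.detectAndCompute(img, None)
    hist = np.zeros(vocab.n_clusters)
    if d is not None:
        for w in vocab.predict(d.astype(np.float64)):
            hist[w] += 1
    return hist / max(hist.sum(), 1)

def train_multiclass_svm(face_images, labels, vocab):
    """One-vs-one multi-class SVM on bag-of-features histograms."""
    X = np.array([bof_histogram(img, vocab) for img in face_images])
    return SVC(kernel="rbf", C=10.0, decision_function_shape="ovo").fit(X, labels)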

          The author then elaborates on SVM by explaining terms like hyperplane and support vectors. A hyperplane is a decision boundary that separates two classes or clusters, and it is drawn based on the support vectors. A support vector is the data point nearest to the other class/cluster. Starting with the results, the author mentions the datasets used for the experiments: the ORL face dataset, a frontal face dataset, and a face recognition database.
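          A tiny example of these two terms on toy 2-D data, using scikit-learn's SVC; after fitting, coef_ gives the hyperplane's normal vector and support_vectors_ holds the points closest to the opposite class:

import numpy as np
from sklearn.svm import SVC

# two toy 2-D clusters; the linear SVM places the hyperplane between them
X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)
print("hyperplane normal w:", clf.coef_[0], "bias b:", clf.intercept_[0])
print("support vectors:\n", clf.support_vectors_)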

          Fig 4. Comparison with different approaches in [6]

          The author compares the success rate of the proposed system with the success rates of other systems. The success rate of the proposed system is 99.21%, while the success rates of the other mentioned approaches are between 95.50% and 98.50%. It also has the upper hand when it comes to running time, which is 3.24 and much less than that of the others. The author concludes by stating that their work can further be used in various applications, such as attendance systems and security purposes.

    5. Neural Networks

    In the paper [9], the author starts by stating why passwords should be replaced by biometrics, as biometrics are more secure than passwords and identification cards. The proposed system here has two processes, i.e. face verification and face recognition. In face verification, it is verified whether the person is an imposter or not. The author then mentions that they have used neural networks because they can learn adaptively, they can make sense out of complicated and imprecise data, and their structure allows them to interface with the real world to receive input.

    Then the author elaborates on the types of neural networks, which are of two types: the feed-forward neural network and the back-propagation neural network. A neural network works when we give it some input data. This data is then processed via layers of perceptrons to produce the desired output. Each image is broken down into pixels depending on the dimensions of the image. These pixels are represented as matrices, which are fed into the input layer of the neural network and passed on to the hidden layer. In the hidden layer, a weight is assigned to each perceptron, and the inputs are multiplied by their corresponding weights. Furthermore, each perceptron is passed through a transformation function that determines whether the perceptron is activated or not. An activated perceptron transmits data to the next layer, and in this manner the data is propagated forward in the feed-forward neural network until it reaches the output layer. Whether the data belongs to class A or B is decided by the probability derived at the output layer.
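    As a small illustration of this forward propagation, the following sketch pushes a flattened image through one sigmoid hidden layer to a class-probability output; the layer sizes and activation choices are illustrative assumptions:

import numpy as np

def feed_forward(pixels, W_hidden, b_hidden, W_out, b_out):
    """Propagate a flattened image through one hidden layer to class probabilities."""
    x = np.asarray(pixels, dtype=float).ravel()                  # image pixels as input
    hidden = 1.0 / (1.0 + np.exp(-(W_hidden @ x + b_hidden)))    # sigmoid activation
    logits = W_out @ hidden + b_out
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                                   # probability of class A vs. B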

    Coming to the elaboration on the back-propagation neural network, the author starts by mentioning that it is the most popular supervised-learning multilayer algorithm. Initially, while designing the neural network, we initialize the weight of each input with some random value. These weights denote the importance of each input variable; therefore, if we propagate backward in the neural network and compare the actual output to the predicted output, we can readjust the weights of each input in such a way that the error is minimized. This results in a more accurate output.
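    A matching sketch of one back-propagation update for the same two-layer network: it computes the output error, propagates it back to the hidden layer, and readjusts the weights by gradient descent. The squared-error loss and the learning rate are illustrative assumptions:

import numpy as np

def backprop_step(x, target, W_hidden, b_hidden, W_out, b_out, lr=0.01):
    """One gradient-descent update of the weights from the prediction error."""
    # forward pass (sigmoid hidden layer, sigmoid output)
    h = 1.0 / (1.0 + np.exp(-(W_hidden @ x + b_hidden)))
    y = 1.0 / (1.0 + np.exp(-(W_out @ h + b_out)))

    # backward pass: propagate the output error toward the input
    err_out = (y - target) * y * (1 - y)                 # squared-error gradient at output
    err_hid = (W_out.T @ err_out) * h * (1 - h)

    # readjust the weights so the error is reduced
    W_out -= lr * np.outer(err_out, h)
    b_out -= lr * err_out
    W_hidden -= lr * np.outer(err_hid, x)
    b_hidden -= lr * err_hid
    return W_hidden, b_hidden, W_out, b_out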

    After elaborating on both types of connections in the neural network, the author presents the ANN structure, explaining that ANNs are used because they can approximate real-valued functions and are a well-known and robust classification technique. Coming to the results, the author starts by mentioning that the images are taken from the ORL database. Here, inbuilt functions are used to extract data, and then each feature (eye coordinates, nose point, mouth region) is compared separately.

    Fig 5. ANN structure from [9]

    The author concludes by stating that, using hidden-layer processing, faces can be recognized even from improper images, and for this reason neural networks are efficient for face recognition.

  3. CONCLUSION

In this paper, we have thoroughly reviewed some of the face recognition methodologies, and we have learned that, given the way face recognition and its different approaches are being researched, it will be one of the major machine learning applications in the coming future. We have also found that there are various practical methods and approaches to achieve this and to add greater scope to face recognition.

REFERENCES

      1. Rekha E., Ramprasad P. (2017), An Efficient Automated Attendance Management System Based on Eigen Face Recognition, 7th International Conference on Cloud Computing, Data Science and Engineering, IEEE.

      2. Iwan Setyawan, Abraham F. Putra, Ivanna K. Timotius, Andreas A. Febrianto (2011), Face Recognition using Kernel Fisher's Discriminant Analysis and Nearest Neighbor, The 6th International Conference on Telecommunication Systems, Services, and Applications 2011, IEEE.

      3. I. K. Timotius, I. Setyawan, A. A. Febrianto, Face Recognition between Two Person using Kernel Principal Component Analysis and Support Vector Machines, International Journal on Electrical Engineering and Informatics, Vol. 2 No. 1, pp. 53-61, March 2010.

      4. Vo Van Trieu, Nguyen Van Cuong (2016), PC and FPGA Design for Face Recognition System Using Hidden Markov Model, 2016 International Conference on Electronics, Information, and Communications (ICEIC), IEEE.

      5. H. Miar-Naimi, P. Davari (2008), A New Fast and Efficient HMM-Based Face Recognition System Using a 7-State HMM Along with SVD Coefficients, International Conference on Computer Vision Theory and Applications (VISAPP), 2008.

      6. Salah Nasr, Muhammad Shoaib, Kais Bouallegue, Hassen Mekki (2017), Face Recognition System Using Bag of Features and Multi-Class SVM for Robot Applications, 2017 International Conference on Control, Automation and Diagnosis (ICCAD), IEEE.

      7. 'The ORL Database of Faces'

      8. R. Varun, Y. Vivekanand, K. Manikantan, and S. Ramachandran, Face Recognition using Hough Transform based Feature Extraction, Procedia Computer Science, vol. 46 (ICICT 2014), pp. 1491-1500, 2015.

      9. Vinita Bhandiwad, Bhanu Tekwani (2017), Face Recognition and Detection using Neural Networks, International Conference on Trends in Electronics and Informatics, ICEI 2017, IEEE
