An Improved Model for Imperfect Facial Recognition using Python-Open CV

DOI : 10.17577/IJERTV8IS110017


Akintoye A. O

Department of Computer Science Ignatius Ajuru University of Education Port Harcourt, Nigeria

Onuodu F. E

Department of Computer Science University of Port Harcourt

Port Harcourt, Nigeria

Abstract— Face recognition is prevalently carried out using perfect data: full-frontal facial images free of any damage, such as that caused by an accident. In reality, there are various circumstances where full-frontal faces may not be accessible. In such cases, the available imperfect facial pictures, which frequently originate from Closed Circuit Television (CCTV) cameras, are used in place of perfect full-frontal images. Hence, computer-based face recognition using partial facial data remains a largely unexplored area of research. Because humans carry out facial recognition and verification differently from machines, it is fascinating to examine which parts of the face a machine favours when exposed to the difficulties of face recognition. This paper therefore investigates face recognition using incomplete facial information. The experiment is based on an Object-Oriented Programming (OOP) language together with OpenCV (Open Computer Vision) for the classification and identification of a human face.

Keywords— Analysis; imperfect facial data; face recognition; incomplete face; OOP; OpenCV

  1. INTRODUCTION

Faces are among the most frequently viewed pictures in the visual system within the lifetime of a human being, so it is not surprising that humans have the ability to recognize faces. With regard to face recognition by humans, it is thought that the brain remembers important details such as the shapes and colors of crucial features corresponding to the eyes, nose, forehead, cheeks and mouth. Thus, this work is aimed at developing a face recognition system that will be able to detect an imperfect human face and tell whose face it is.

Fig. 1.0 below gives an example of how imperfect faces may be presented as input data to a facial recognition system.

    Fig. 1.0 An Example of How Partial Faces may be Presented as Input for Face Recognition

As Fig. 1.0 above shows, according to [1], in no circumstance is the whole face made available; only portions of the face, such as the forehead, eyes, nose, mouth or cheeks of the given subject, are made available as input probe data [2][3].

  2. RELATED WORKS

In the recent past, many algorithms were designed to solve face recognition problems, for example those found in [4][5]. Savvides et al. [6] is one of the works most closely associated with this subject. To create quantifiers with discriminative ability, several facial regions are analyzed. The method of kernel correlation filters was used to reduce image dimensionality and to extract features from grayscale images. They then used Support Vector Machines (SVM) to discriminate between different facial features. Their study dealt with three major facial areas: the eyes, nose and mouth. They reported experimental results suggesting a higher verification rate for the eye region compared with the mouth and nose regions [7].

In a comparable fashion, [8] presented a procedure known as Dynamic Feature Matching (DFM) for incomplete face recognition. Their approach depended on a blend of fully convolutional networks (FCN) [9] and sparse representations. The purpose of the FCN is to extract a feature map of the images that can cater for progressively more discriminative features. The core of their work is the use of the VGG-Face model [10], from which these features were transferred to the FCN. This strategy appears to have produced high classification accuracy compared with other existing strategies.

Long et al. [11] proposed Subclass Pooling for Classification (SCP) to tackle the partial occlusion problem using the limited information in a training set. They utilized a fuzzy max pooling technique alongside average pooling schemes. Their outcomes indicated that a substantial margin of performance can be achieved.
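The pooling schemes mentioned above can be illustrated with a toy example. The sketch below is not taken from that work; the 4x4 feature map, the 2x2 window size and all values are illustrative, and it simply contrasts max pooling with average pooling over non-overlapping windows (assuming NumPy is available):

```python
import numpy as np

# Toy 4x4 feature map (illustrative values only, not from any paper).
fm = np.array([[1, 3, 2, 0],
               [4, 6, 1, 2],
               [0, 1, 5, 7],
               [2, 3, 8, 6]], dtype=float)

def pool(feature_map, size=2, mode="max"):
    """Non-overlapping pooling over size x size windows."""
    h, w = feature_map.shape
    out = np.empty((h // size, w // size))
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = feature_map[i:i + size, j:j + size]
            out[i // size, j // size] = window.max() if mode == "max" else window.mean()
    return out

print(pool(fm, mode="max"))   # [[6. 2.] [3. 8.]]
print(pool(fm, mode="mean"))  # [[3.5 1.25] [1.5 6.5]]
```

Max pooling keeps the strongest response per window, while average pooling smooths over the whole window; which behaves better under occlusion is exactly the kind of question the cited study examined.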

More recently, Yang et al. [12] proposed a structure called the Optimized Symmetric Partial Face graph (OSPE) for face recognition under various adverse situations [13]. For instance, occluded faces, facial expressions and lighting variations are among the cues they used in their research. Again, their experimental outcomes demonstrated that improvements in recognition rates can be achieved by presenting imperfect facial information.

Duan et al. [14] presented a method called Topology Preserving Graph Matching (TPGM) to improve the recognition procedure when incomplete faces are used. Their strategy depends on building geometric graphs for probe and gallery faces. The TPGM strategy minimizes a combined geometric and textural cost function. Results of their analyses on four face databases showed that their methodology outperformed other state-of-the-art techniques at the time.

Cai et al. [15] proposed a facial variation modelling framework for sparse-representation-based face recognition. Based on a single sample face, they built facial variation bases to separate neutral, frontal faces from varied facial views. Their trials show that significant improvements can be obtained for single-image face recognition problems.

Another piece of work related to this paper is that carried out by [16]. Here, they considered the human face recognition problem in frontal views with varying lighting, disguise and occlusion. They presented a new strategy for face recognition that extracts a dynamic subspace from images, from which they obtain the distinctive parts for each subject. Those parts represent discriminative segments, providing a recognition protocol that classifies face images using the K-nearest neighbour algorithm (K-NN) [17]. They applied their strategy to public databases such as ORL and Extended Yale B. The outcomes showed that recognition rates can be improved using incomplete facial cues.
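The K-NN classification protocol referenced here can be sketched in plain Python. The two-dimensional feature vectors and labels below are illustrative stand-ins, not data from ORL or Extended Yale B:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples under Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy gallery: 2-D features standing in for face descriptors.
gallery = [((0.0, 0.1), "A"), ((0.2, 0.0), "A"),
           ((1.0, 1.1), "B"), ((0.9, 1.0), "B"), ((1.1, 0.9), "B")]
print(knn_classify(gallery, (1.0, 1.0), k=3))  # prints: B
```

In a real partial-face setting, the feature vectors would come from the extracted dynamic subspace rather than raw coordinates, but the voting step is the same.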

Peng et al. [18] presented a method called Locality-Constrained Collaborative Representation (LCCR) to improve the discrimination of representative images. The LCCR was applied to various databases with five distance measures. In the case of imperfect faces, they utilized three facial features, namely the right eye, the nose, and the mouth with chin, obtained by masking the original images. The outcomes show that the right eye, mouth and chin have high recognition rates, for instance when using LCCR with the City Block distance measure [19].
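The City Block distance measure referenced here is simply the sum of absolute coordinate differences. A minimal sketch, with made-up vectors, contrasting it with the Euclidean distance:

```python
import math

def city_block(u, v):
    """L1 (City Block / Manhattan) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

u, v = (1.0, 2.0, 3.0), (4.0, 0.0, 3.0)
print(city_block(u, v))   # 5.0  (= |1-4| + |2-0| + |3-3|)
print(math.dist(u, v))    # ~3.606 (Euclidean, for comparison)
```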

Murphy et al. [20] carried out research based on facial stimuli to reveal the mechanisms of human face recognition. Their work, alongside that of others, shows that, for people, faces are hard to recognize when inverted. According to them, the primary aim of the work was to understand the limits of the human capacity for face perception and recognition. They tested this in four ways, namely for identity, sex, age and expression, under four conditions: upright whole face, inverted whole face, upright aperture and inverted aperture.

The responses given by the observers were placed into categories of identity, sex, age and expression. Their outcomes demonstrate that the detrimental effects of an inverted whole face were no less in the aperture conditions, where incomplete faces were shown to the participants.

Nummenmaa et al. [21] also researched facial recognition on imperfect faces subject to facial expressions. They tested recognition rates for the following facial expressions: anxiety, joy, sorrow, disgust, and anger. According to [20], in the case of the partial face, the face was separated into two parts, one with the eyes and the other with the nose. A significant result of their work is that humans have poor recognition rates only in the eye-and-mouth situation. They also noted that a smile expression results in slightly better recognition rates. However, [20][21] further stated that the quality of current methods decreases dramatically when coping with acute occlusions of the head. Several previous studies have recognized that experience tends to be a key factor in identification when it comes to human face recognition. The recognition rate changes, of course, when the target face picture becomes distorted or occluded, or with gestures and shifts in the subject's age.

Machine learning lets a machine build models from input data observations in order to make more detailed decisions. This is a clear edge that machine learning algorithms appear to have over human face understanding and recognition. It is therefore possible to claim that machine learning algorithms can theoretically achieve better recognition levels on imperfect faces or, in the most pessimistic scenario, can help people achieve better face recognition, especially in difficult cases where very small or partial facial data are provided [21]. Of all the research carried out on machine-based face recognition, we identified that none was able to ascertain which parts of the face machine learning favours when recognizing incomplete faces. This paper is aimed at developing a face recognition system that will be able to detect an imperfect human face and tell whose face it is. It is geared towards exploring how various imperfect parts of the human face, whether observed from close range or from a distance, can be used to recognize an individual. The objectives thus are to:

    • Develop a system that will detect a human face using imperfect data.

• Create a database where these imperfect data will be stored.

    • Build a Graphical User Interface for the system that will enable interaction with the database.

Shervin [22] opines that OpenCV supports a wide variety of programming languages such as C++, Python and Java, and is available on different platforms including Windows, Linux, OS X, Android, and iOS. He defined OpenCV-Python as the Python Application Programming Interface (API) for OpenCV, which combines the Python programming language with the qualities of the OpenCV C++ API. OpenCV-Python is a Python library built to tackle computer vision challenges. It utilizes NumPy, a highly optimized library for numerical operations with MATLAB-style syntax.

  3. METHODOLOGY

The use of an appropriate method enhances effectiveness and efficiency in every research work. A "system development methodology (SDM) refers to the step-by-step procedure used to structure, plan, and control the process of developing an information system." We adopted the Object-Oriented Analysis and Design Methodology (OOADM) in the analysis of this program.

    Advantages of the Existing System

    i. It is cost effective.

    Disadvantages of the Existing System

    1. Slow in image processing

2. Requires high memory for processing

      Analysis of the Existing System

In the existing system, a test feature set (input) is passed through kernel correlation filters to produce the output.

      Figure 2.0 Architecture of the Existing System

Proposed System

      In the proposed system, a test feature set (input) and a training feature set are passed to OpenCV to produce the output.

      Fig 3.0 Architecture of the Proposed System

      Advantages of Proposed System

      1. Fast in image processing

2. Requires less memory for image processing

    1. Feature classifications

The goal of classification in supervised machine learning is to build a concise model of the distribution of class labels in terms of predictor features. Classification is thus a function that assigns a new observation to the target category to which it belongs.
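As a concrete instance of such a classification function, the following sketch (with illustrative two-dimensional features, not an actual face dataset) assigns a new observation to the class whose mean feature vector, or centroid, is nearest:

```python
import math
from collections import defaultdict

def fit_centroids(samples):
    """Compute the mean feature vector (centroid) of each class.
    `samples` is a list of (feature_vector, label) pairs."""
    buckets = defaultdict(list)
    for vec, label in samples:
        buckets[label].append(vec)
    return {label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
            for label, vecs in buckets.items()}

def classify(centroids, query):
    """Assign `query` to the class with the nearest centroid."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], query))

train = [((0.0, 0.0), "A"), ((0.2, 0.2), "A"),
         ((1.0, 1.0), "B"), ((1.2, 0.8), "B")]
centroids = fit_centroids(train)        # A -> (0.1, 0.1), B -> (1.1, 0.9)
print(classify(centroids, (0.9, 0.9)))  # prints: B
```

Fitting the centroids is the "short model of the distribution of class labels"; classification is the lookup against that model.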

    2. Algorithm

Input: Training set N with n classes; mj = number of images in class j

    For i = 1 to n do
      For j = 1 to mj do
        im ← read an image;
        im ← resize(im);
        im ← normalize(im);
      end
    end
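The preprocessing loop above can be sketched as runnable Python. Synthetic NumPy arrays stand in for the images that cv2.imread would return, the resize step uses simple nearest-neighbour index selection, and the class counts and image sizes are illustrative assumptions:

```python
import numpy as np

def resize(img, size=(32, 32)):
    """Nearest-neighbour resize: pick one source pixel per output pixel."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

def normalize(img):
    """Scale pixel intensities into the [0, 1] range."""
    img = img.astype(float)
    span = img.max() - img.min()
    return (img - img.min()) / span if span else np.zeros_like(img)

gen = np.random.default_rng(0)
n_classes, m_j = 2, 3            # n classes, m_j images per class (illustrative)
training_set = []
for i in range(n_classes):       # For i = 1 to n
    for j in range(m_j):         # For j = 1 to mj
        im = gen.integers(0, 256, size=(48, 64), dtype=np.uint8)  # stand-in for a read image
        im = normalize(resize(im))
        training_set.append((i, im))

print(len(training_set), training_set[0][1].shape)  # 6 (32, 32)
```

In the actual system the random arrays would be replaced by images read from the training database, but the resize-then-normalize order is the same.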

  4. SYSTEM IMPLEMENTATION

OpenCV-Python was used for the implementation of this work.

Python is an open-source, object-oriented language. Its efficacy cuts across Application Programming Interfaces (APIs), platform independence, simulation, low-level programming, object linking and embedding, network configuration, etc. Once developed, a Python program need not be compiled each time it is run.

As noted earlier, OpenCV supports a wide variety of programming languages and platforms; OpenCV-Python, its Python Application Programming Interface (API), combines the Python language with the OpenCV C++ API and relies on NumPy, a highly optimized library for numerical operations with MATLAB-style syntax.
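Concretely, OpenCV-Python represents every image as a NumPy ndarray, so ordinary NumPy operations apply to it directly. The sketch below builds a small synthetic grayscale image in place of an actual cv2.imread call (the file name in the comment is hypothetical):

```python
import numpy as np

# With OpenCV installed, this array would instead come from:
#   img = cv2.imread("subject01.png", cv2.IMREAD_GRAYSCALE)
img = np.full((4, 4), 200, dtype=np.uint8)   # synthetic 4x4 grayscale image
img[1:3, 1:3] = 50                           # darker patch, e.g. an "occluded" region

print(img.dtype, img.shape)          # uint8 (4, 4)
scaled = img.astype(float) / 255.0   # MATLAB-style vectorized scaling via NumPy
print(scaled.max())                  # ~0.784 (= 200/255)
```

Because detection and recognition steps all consume such arrays, the same preprocessing code works whether the source is a file, a webcam frame, or CCTV footage.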

Fig. 4.0 Computational Framework for Face Recognition using Partial Faces

CONCLUSION

In the field of computer vision and visual computing, the ability of existing machine-based face recognition algorithms to function properly on incomplete facial information, such as occluded or zoomed-out faces, remains a difficult problem. In this study, we presented the findings of some of the experiments we carried out to illustrate these problems. To do this, we used controlled and uncontrolled public facial datasets to demonstrate how deep learning can be used with imperfect facial cues for face recognition.

From an implementation point of view, we accept that this work is still preliminary in that we used only datasets that are relatively controlled and far from realistic scenarios. Therefore, extending this work to evaluate its practical applicability, for example through experiments where real facial Closed Circuit Television (CCTV) footage is used as the recognition signal, will be very useful.

REFERENCES

  1. C. Ding, and D. Tao, Pose-invariant Face Recognition with Homography-based Normalization, Pattern Recognition, vol. 66, pp. 144-152, 2017. Retrieved on the 20th of September, 2019 from: http://dx.doi.org/10.1016/j.patcog.2016.11.024.

  2. Z. Li, J. Liu, J. Tang, and H. Lu, Robust Structured Subspace Learning for Data Representation, IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 10, pp. 2085-2098, 2015. Retrieved on the 20th of September, 2019 from: http://dx.doi.org/10.1109/TPAMI.2015.2400461.

  3. M. A. Turk, and A. P. Pentland, Face Recognition Using Eigenfaces, in: Proceedings, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, pp. 586-591, 1991. Retrieved on the 20th of September, 2019 from: http://dx.doi.org/10.1109/CVPR.1991.139758.

  4. A. M. Aguilera, M. Escabias, and M. J. Valderrama, Using Principal Components for Estimating Logistic Regression with High-Dimensional Multicollinear Data, Comput. Statist. Data Anal., vol. 50, no. 8, pp. 1905-1924, 2006. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1016/j.csda.2005.03.011.

  5. A. Sleit, R. Abu-Hurra, and W. AlMobaideen, Lower-quarter-based Face Verification Using Correlation Filter, J. Imaging Sci., vol. 59, no. 1, pp. 41-48, 2011. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1179/136821910X12863757400286.

  6. M. Savvides, A. Ramzi, H. Jingu, S. Park, X. Chunyan, and V.K. Vijayakumar, Partial and Holistic Face Recognition on FRGC-II Data using Support Vector Machine Kernel Correlation Feature Analysis, In: Computer Vision and Pattern Recognition Workshop, New York, USA, 2006. Retrieved on the 17th of September, 2019 from: http://dx.doi.org/10.1109/CVPRW.2006.153.

  7. P. Omkar, V. Andrea, and Z. Andrew, Deep Face Recognition, in: X. Xianghua, Mark J., and Gary T. (Eds.), Proceedings of the British Machine Vision Conference (BMVC), BMVA Press, 2015, pp. 41.1-41.12. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.5244/C.29.41.

  8. L. He, H. Li, Q. Zhang, and Z. Sun, Dynamic Feature Learning for Partial Face Recognition, In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA. 2018. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1109/CVPR.2018.00737

  9. L. Choi, and D. Kim, Facial Fraud Discrimination Using Detection and Classification, Advances in Visual Computing, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 199-208, 2010. Retrieved on the 23rd of September, 2019 from: http://dx.doi.org/10.1007/978-3-642-17277-9_21.

  10. J. W. Tanaka, M. D. Kaiser, S. Hagen, and L. J. Pierce, Losing Face: Impaired Discrimination of Featural and Configural Information in the Mouth Region of an Inverted Face, Atten. Percept. Psychophys., vol. 76, no. 4, pp. 1000-1014, 2014. Retrieved on the 19th of September, 2019 from: http://dx.doi.org/10.3758/s13414-014-0628-0.

  11. J. Long, E. Shelhamer, and T. Darrell, Fully Convolutional Networks for Semantic Segmentation, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA. 2015. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1109/CVPR.2015.7298965.

  12. L. Yang, Z. Fan, S. Ling, and H. Junwei, Face Recognition with a Small Occluded Training Set Using Spatial and Statistical Pooling, Inform. Sci., pp. 634-644, 2018. Retrieved on the 20th of September, 2019 from: http://dx.doi.org/10.1016/j.ins.2017.10.042.

  13. L. Badr, L. Syaheerah, V. Ibrahim, B. Mohammed, and S. Rubén, Optimized Symmetric Partial Face Graphs for Face Recognition in Adverse Conditions, Inform. Sci., vol. 429, pp. 194-214, 2018. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1016/j.ins.2017.11.013.

  14. Y. Duan, L. Jiwen, F. Jianjiang, and F. Jie, Topology Preserving Structural Matching for Automatic Partial Face Recognition, IEEE Trans. Inf. Forensics Security, vol. 13, no. 7, pp. 1823-1837, 2018. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1109/TIFS.2018.2804919.

  15. J. Cai, J. Chen, and S. Liang, Single-Sample Face Recognition Based on Intraclass Differences in a Variation Model, Sensors, vol. 15, no. 1, pp. 1071-1087, 2015. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.3390/s150101071.

  16. H. Li, and Y. Ching, Robust Face Recognition Based on Dynamic Rank Representation, Pattern Recognition, pp. 13-24, 2015. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1016/j.patcog.2016.05.014.

  17. Z. Zhang, Introduction to Machine Learning: K-Nearest Neighbors, Ann. Transl. Med., vol. 4, no. 11, p. 218, 2016. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.21037/atm.2016.03.37.

  18. X. Peng, L. Zhang, Y. Zhang, and K. Tan, Learning Locality-Constrained Collaborative Representation for Robust Face Recognition, Pattern Recognition, vol. 47, no. 9, pp. 2794-2806, 2014. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1016/j.patcog.2014.03.013.

  19. Y. Pan, J. Trahan, and R. Vaidyanathan, A Scalable and Efficient Algorithm for Computing the City Block Distance Transform on Reconfigurable Meshes, Comput. J., vol. 40, no. 7, pp. 435-440, 1997. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1093/comjnl/40.7.435.

  20. J. Murphy, and R. Cook, Revealing the Mechanisms of Human Face Perception Using Dynamic Apertures, Cognition, vol. 169, pp. 25-35, 2017. Retrieved on the 15th of September, 2019 from: http://dx.doi.org/10.1016/j.cognition.2017.08.001.

  21. L. Nummenmaa, M. Calvo, and A. Fernández-Martín, Facial Expression Recognition in Peripheral Versus Central Vision: Role of the Eyes and the Mouth, Psychol. Res., vol. 78, no. 2, pp. 180-195, 2014. Retrieved on the 28th of September, 2019 from: http://dx.doi.org/10.1007/s00426-013-0492-x.

  22. E. Shervin, Face Detection and Recognition using OpenCV, 2010, An article retrieved on the 20th of September, 2019 from: http://shervinemami.info/faceRecogniti
