An Investigation on the Use of LBPH Algorithm for Face Recognition to Find Missing People in Zimbabwe

DOI : 10.17577/IJERTV7IS070045


Peace Muyambo

PhD student, University of Zimbabwe,

Zimbabwe

Abstract – Face recognition is one of the most challenging problems in the computer vision industry. Many algorithms have been developed to address facial recognition over the last thirty years, including approaches based on LDA, PCA, ICA and artificial neural networks. Facial recognition algorithms are affected by illumination, that is, variation in lighting, as well as by pose variation. As a result, hybrid methods that combine two algorithms have been developed. Face recognition has been widely used in security and surveillance systems to track fraud and criminal activity. In this paper, the researcher used the LBPH (Local Binary Patterns Histograms) algorithm to produce a prototype of a system that finds missing people using facial recognition. The major objective of the research is to determine the accuracy of the system as well as its recognition rate.

Key words: Facial Recognition, Local Binary Patterns Histograms, Artificial Neural Networks, Biometrics, Face Identification

INTRODUCTION

As each day passes, more people are reported missing; some may be hiding from serious crimes, some have been abducted, and others have run away from their families due to social problems. In Zimbabwe, any case of a missing person is reported to the police, and all the details of the case are recorded. Children who are abused by their parents tend to live on the streets and are reported missing.

The media, for instance newspapers, can be used to find missing people. Media appeals may be the quickest and most effective way of raising awareness of a missing person and helping in the continuing search for him or her. Nevertheless, not everyone feels comfortable using the media. Different newspapers and magazines have different interviewing techniques and styles. While many journalists will be sympathetic, others may appear forceful, cold or aggressive, or behave in other ways that seem insensitive to what the family is going through. Some people do not trust the media or do not want their circumstances made public; others feel overwhelmed by the thought of dealing with journalists and being asked probing, personal questions about their missing friend or relative.

Additionally, publicity may put already vulnerable people at greater risk by driving them further away if they do not wish to be found. Kidnappers can continue to victimize their victims, since the media coverage will make them aware of the search.

However, the use of facial recognition makes it easier to find missing people and avoids the disadvantages of using the media.

Searching for a missing person through the media has led to many problems; for instance, publicity may put already vulnerable people at greater risk by driving them further away if they do not wish to be found.

News of a missing person is advertised on television and in newspapers for a certain period. After a few days, everyone will have forgotten that news, since the advertisement does not continue for long.

The police exhaust all the possibilities of finding missing people by using posters and announcements in the media, but they do not have a real-time solution, which means they cannot track down a missing person who does not stay in one place.

To avoid this, facial recognition can be used: surveillance cameras installed at convenient places track people moving through the live video feed. Instead of searching the whole nation for the missing person, the search can be narrowed down to a specific area based on the results produced by the system.

JUSTIFICATION

The system will help to find missing people around Zimbabwe and produce results concerning their whereabouts. Instead of using media to find the missing people, the system will produce live video feeds and reports, which will help us to narrow down our search rather than searching the whole nation. Live video feeds will come from the surveillance cameras located at strategic positions countrywide.

Analysis of Facial Recognition Techniques

Face recognition is one of the most relevant applications of image analysis. Face recognition techniques fall into two broad classes: appearance-based and feature-based approaches.

Appearance Based Approaches

According to (Dass, Rani, & Kumar, 2012), appearance-based (or holistic matching) methods use the whole face region as the raw input to a recognition system. The face recognition problem is first transformed into a face-space analysis problem, and then several well-known statistical methods are applied to it. These methods are quite prone to the limitations caused by facial variations such as illumination, 3D pose and expression. [1]

Eigenface Technique

The Eigenface technique is one of the most widely used approaches for face recognition. It relies on Principal Component Analysis (PCA), a technique used effectively to achieve dimensionality reduction, which is widely applied in face recognition and detection.

Mathematically, eigenfaces are the principal components from which feature vectors for a face can be obtained. The feature vector data is derived from the covariance matrix of the training images. The eigenvectors are used to measure the difference between faces, and faces are characterized by the linear combination of the eigenvectors with the largest eigenvalues. Every face can thus be expressed as a linear combination of the eigenfaces, and a face can be approximated using only the eigenvectors with the largest eigenvalues.

(Patel & Yagnik, 2013) describe the Eigenface method as a practical approach for face identification; implementing an Eigenface recognition scheme is easy because of the simplicity of its algorithms, although its accuracy depends on numerous factors. [2] The Eigenface method finds a way to make ghost-like faces that characterize the bulk of the variance in an image dataset. It is built on an information-theoretic approach that decomposes face images into a small set of feature images called eigenfaces, which are in fact the principal components of the initial training set.

The drawbacks of the Eigenface method are that it is sensitive to lighting conditions and head position, and that computing the eigenvalues and eigenvectors is time-consuming.

Principal component analysis (PCA)

According to (Antony, 2016), the PCA technique converts each two-dimensional image into a one-dimensional vector. This vector then goes through several steps: detect the face in the image, normalize facial landmarks, extract facial features and, finally, recognize the face image. (Antony, 2016) further stated that the technique selects the features of the face that vary the most from the rest of the image. In the process of decomposition, a large amount of data is discarded as not containing significant information, since 90% of the total variance in the face is contained in 5-10% of the components. This means that the data needed to identify an individual is a fraction of the data presented in the image. [3]
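The dimensionality-reduction idea behind PCA can be sketched in a few lines of NumPy. The image size, component count and random data below are illustrative stand-ins for a real face dataset, not the parameters used in any cited system:

```python
import numpy as np

# Hypothetical data: 100 face images flattened to 1-D vectors (32x32 = 1024 pixels).
rng = np.random.default_rng(0)
faces = rng.random((100, 1024))

# Centre the data, then compute the principal components (the eigenfaces)
# from the covariance structure via SVD.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, components = np.linalg.svd(centred, full_matrices=False)

# Keep the top 50 components and project each face onto them: the
# projection is the compact feature vector used for matching, a small
# fraction of the original 1024 pixel values.
eigenfaces = components[:50]          # shape (50, 1024)
features = centred @ eigenfaces.T     # shape (100, 50)
print(features.shape)                 # (100, 50)
```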

Linear Discriminant Analysis (LDA)

(Madane & Khandare, 2015) describe linear discriminant analysis (LDA) as a powerful method for face recognition. It yields an effective representation that linearly transforms the original data space into a low-dimensional feature space where the data is well separated. In LDA, the goal is to find an efficient or interesting way to represent the face vector space. However, if the within-class scatter matrix (SW) becomes singular, the classical LDA cannot be solved; this is the undersampled problem of LDA (also known as the small sample size problem). A subspace analysis method for face recognition called kernel discriminant locality preserving projections (MMDLPP) is based on the analysis of LDA, LPP and kernel functions. [4]

Independent Component Analysis (ICA)

(Jain, Jain, & Raja, 2014) defined independent component analysis (ICA) as a method for finding underlying factors or components in multivariate (multidimensional) statistical data. There is a need to implement face recognition systems using ICA for facial images with varying face orientations and illumination conditions, which would give better results than existing systems. [5]

Feature Based Approaches

According to (Dass et al., 2012), feature based matching methods first extract the local features such as the nose, eyes and mouth. Their locations and local statistics (geometric and/or appearance) are then fed into a structural classifier. [6]

Hidden Markov Model (HMM)

(Sharma & Kaur, 2016) stated that the first efforts to use the Hidden Markov Model (HMM) were introduced by Samaria and Young. HMM works effectively for images with variations in lighting, facial expression and orientation, which gives it an advantage over the appearance-based approaches. For processing images with HMM, temporal or spatial sequences are considered. [7]

Local Binary Patterns Histograms (LBPH)

(Sharma & Kaur, 2016) argued that the local binary pattern (LBP) was originally designed for texture description. According to (Wahid, 2013), the face area is first divided into small regions, from which Local Binary Pattern (LBP) histograms are extracted and concatenated into a single feature vector. This feature vector forms an efficient representation of the face and is used to measure similarities between images. The major advantage of this algorithm is that it produces better recognition rates in controlled environments and is less sensitive to illumination. [7]
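The LBPH pipeline described above (binary patterns, a grid of regions, concatenated histograms) can be sketched as follows. The basic 3x3 LBP operator and the 8x8 grid are illustrative choices, not the exact parameters of the prototype:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: compare each pixel with its 8 neighbours and
    pack the comparison results into an 8-bit code."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels
    # offsets of the 8 neighbours in the 3x3 window, clockwise
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        n = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (n >= c).astype(np.int32) << bit
    return code

def lbph_features(gray, grid=(8, 8)):
    """Split the LBP image into a grid of regions, histogram each
    region, and concatenate the histograms into one feature vector."""
    lbp = lbp_image(gray)
    feats = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))   # normalised histogram
    return np.concatenate(feats)

# A random stand-in for a grayscale face crop.
face = np.random.default_rng(1).integers(0, 256, (96, 96), dtype=np.uint8)
print(lbph_features(face).shape)   # (16384,) = 8*8 cells x 256 bins
```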

Limitations of Local Binary Patterns Histograms (LBPH)

Even though current machine recognition systems have reached a certain level of maturity, many real application conditions still limit their performance.

  • 3D head pose changes are an unavoidable problem in many practical applications, since people are not always frontal to the camera. The difference between images of the same person under varied poses can be larger than the difference between distinct persons under the same pose. It is therefore difficult for the computer to identify a face when the pose of the probe differs from the enrolled poses. [6]

  • Illumination (including indoor / outdoor) variations due to skin reflectance properties and the internal camera control. Face recognition systems encounter difficulties in extreme illumination conditions where significant parts of the face sometimes become invisible. Furthermore, it becomes more difficult when illumination is coupled with pose variation.[6]

  • Facial expression: Faces undergo large deformations under extreme facial expressions and present problems for the algorithms.[6]

  • Occlusion: Due to other objects or accessories (e.g., sunglasses, scarf, etc.) performance of face recognition algorithms gets affected.[6]

  • Time delay: the human face changes over time. There are changes in makeup, presence or absence of facial hair, muscle tension, hairstyle, appearance of the skin, facial jewelry, glasses and aging effects. [6]

METHODOLOGY

The following tools were used in the design of the prototype:

MySQL database: MySQL is one of the most widely used databases for various applications, notably web applications. The researcher chose MySQL for saving information about system users and user details, both because he had prior experience with it and because it is open source and relatively simple to use.

OpenCV: a computer vision library equipped with various modules for performing image-processing techniques. OpenCV is written in C++. It provides face recognition algorithms, namely the Eigenface, Fisherface and LBPH (Local Binary Pattern Histograms) algorithms, the last of which was implemented in this research. Furthermore, OpenCV has machine-learning algorithms that were used to train the image datasets.

Web camera: used for capturing images of individuals for testing the prototype.

Python Programming Language: Python is a general-purpose scripting language developed by Guido van Rossum. The author opted for Python because of its simplicity and code readability; it enables the programmer to express ideas in fewer lines of code without reducing readability. Python also has straightforward OpenCV bindings.

Modem: a modem modulates outgoing digital signals from a computer or other digital device to analog signals for a conventional copper twisted-pair telephone line, demodulates the incoming analog signal, and converts it to a digital signal for the digital device. In the researcher's prototype, the modem was used for dispatching messages to notify officials of where a missing person has been found.

NumPy: a highly optimized library for numerical operations. Its purpose in the prototype is to handle the arrays passed to and from the OpenCV library.

Tkinter: a Python library for user interface design. The researcher used this library to create the graphical user interface.

    The Algorithm for the Proposed System

1. Image acquisition and detection: The images are acquired using a camera that captures image frames. A face is then detected in the video feed using Haar cascade classifiers, which are used with the OpenCV library. A Haar cascade classifier is an XML file trained to detect objects in still images or live video. Image detection commences whenever the VideoCapture(0) call returns a true value, i.e. the camera is turned on. At this stage, the image is given a label that will later be used for training.

      Figure 1: Image acquisition

    2. Image preprocessing: this involves resizing as well as normalizing the images. At this step, the images are converted to grayscale for further processing.

      Figure 3: Image preprocessing

    3. Feature extraction: At this step, the face is divided into blocks after which Local Binary Pattern Histograms are computed and feature histograms are constructed.

      Figure 4: Feature extraction

    4. Training the dataset: At this stage, an xml file is produced which consist of all the feature vectors of all the images in the image dataset.

      Figure 5: Training dataset

5. Recognition: The test image is compared against the trained images and classified based on the features extracted in step 3. The recognition algorithm identifies the face in the live video feed by appending a name label to the rectangle around the face. Prediction depends on the confidence value: since this value is a distance measure, lower values indicate better matches, while higher confidence values result in false recognition.

      Figure 6: Recognition

Data Collection and Analysis of Results

False Acceptance Rate (FAR)

This is a situation where the face recognition system falsely identifies an unauthorized subject. Relating this error to the developed system, false acceptance happens when a testing face is regarded as known when it should be regarded as unknown. FAR should remain at a minimum, because a high rate means the system is recognizing incorrectly. This error is also called False Match, False Positive or Type I error. FAR is calculated as follows:

      FAR = Number of False Acceptances / Number of testing faces

Number of testing faces | Number of training faces | False accepted faces | FAR
7                       | 10                       | 2                    | 28.5%
4                       | 60                       | 1                    | 25%
2                       | 100                      | 0                    | 0%

Table 1: False Acceptance Rate

Figure 7: False Acceptance Rate (bar chart of FAR against the number of testing faces)

Table 1 and Figure 7 show the False Acceptance Rate attained by the system. It can be deduced that the system produces a high False Acceptance Rate with a small training dataset and, conversely, a lower False Acceptance Rate with a larger training dataset.

      False Rejection Rate (FRR)

The false rejection rate is the rate at which a security system fails to verify or identify an authorized person. Relating this error to the developed system, false rejection happens when a valid testing face is regarded as unknown. FRR is calculated as follows:

      FRR = Number of False Rejections / Number of testing faces

Number of testing faces | Number of training faces | False rejected faces | FRR
7                       | 10                       | 3                    | 42.8%
4                       | 60                       | 1                    | 25%
2                       | 100                      | 0                    | 0%

Table 2: False Rejection Rate
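The FAR and FRR figures in Tables 1 and 2 follow directly from the raw error counts; a short script reproduces them (differing from the tables only in rounding, since the paper truncates 28.57% to 28.5% and 42.86% to 42.8%):

```python
# Reproduce the FAR/FRR percentages from the raw counts reported
# in Tables 1 and 2.
def rate(errors, testing_faces):
    """Error rate as a percentage of the testing faces."""
    return 100.0 * errors / testing_faces

# (testing faces, falsely accepted, falsely rejected) per experiment
experiments = [(7, 2, 3), (4, 1, 1), (2, 0, 0)]
for tested, fa, fr in experiments:
    print(f"{tested} faces: FAR = {rate(fa, tested):.1f}%  "
          f"FRR = {rate(fr, tested):.1f}%")
```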

Figure 8: False Rejection Rate (bar chart of FRR against the number of testing faces)

Table 2 and Figure 8 show the False Rejection Rate attained by the system. It can be deduced that the system produces a high False Rejection Rate with a small training dataset and, conversely, a lower False Rejection Rate with a larger training dataset.

By analyzing both tables, the researcher concluded that the system has a crossover rate of 25%. The Crossover Error Rate (CER) is the point at which the False Rejection Rate equals the False Acceptance Rate. Since the CER is relatively low, the system is accurate.

      Face Recognition Rate

Face Recognition Rate is defined as the rate at which the system correctly recognizes an individual face, expressed as a percentage. It is calculated as follows:

      Face Recognition Rate = number of recognized faces /total number of faces * 100%

Test Case | Total number of images | Total number of recognized images | Face Recognition Rate
1         | 20                     | 18                                | 90%
2         | 15                     | 13                                | 86.66%

Table 3: Face Recognition Rate

Table 3 shows the Face Recognition Rate. The average Face Recognition Rate achieved over the two test cases was 88.33%.

      Computational Time

      Measuring the computational time was done programmatically by performing the following steps:

• Get the current time and store it as the initial time before the recognition function executes. This was accomplished using the time module's time.clock() function.

      • Run the function or method whose execution time must be determined

• Get the current time after the recognition function executes and subtract the previously stored initial time from it to obtain the elapsed time.
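The steps above can be sketched as follows. One caveat: time.clock() was deprecated in Python 3.3 and removed in 3.8, so the sketch uses time.perf_counter(), its modern equivalent; recognise() here is a stand-in for the actual detection-and-recognition routine being measured:

```python
import time

def recognise():
    """Stand-in for the detection-and-recognition routine whose
    execution time is being measured."""
    total = 0
    for i in range(100_000):
        total += i * i
    return total

start = time.perf_counter()      # step 1: store the initial time
recognise()                      # step 2: run the function to be timed
elapsed = time.perf_counter() - start   # step 3: subtract initial time
print(f"computational time: {elapsed:.3f} sec")
```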

Device                              | Computational time for detection and recognition
HP Intel Core i3 laptop @ 2.1 GHz   | 4.5 sec
Dell Intel Core i5 laptop @ 2.5 GHz | 2.9 sec

Table 4: Computational time

Table 4 shows the computational time taken by the system to perform face detection and recognition. The result shows that an increase in processing power reduces the computation time of the system, thereby making it more efficient.

Research Findings

The implementation of the proposed system helped the researcher answer the research questions, which were:

  1. Is the use of facial recognition to find missing people efficient?

The use of facial recognition to find missing people is efficient because a missing subject can be easily detected on camera, and his or her location reported to the officials handling missing-person cases. The results produced by the developed prototype also show that a person can be detected and recognized within seconds. Therefore, we conclude that the introduction of the facial recognition system makes a significant difference in finding missing people.

  2. Is the use of face recognition to find missing people accurate (Recognition Rate)?

The results from the implemented prototype show that there is a significant difference in finding missing people after the introduction of the facial recognition system. The recognition rate is above average and the crossover rate is significantly low.

SUMMARY AND CONCLUSION

The aim of this project was to develop a facial recognition system for finding missing people. All the objectives were met, namely determining the efficiency and accuracy of the system. The accuracy of the system was based on the face recognition rate, and the efficiency of the system was determined by the computational time.

The researcher developed the system using OpenCV and Python, which helped him build the crucial modules, namely face detection and recognition. The system was tested, and the results presented above show that it is worthwhile to introduce the system because it achieves a remarkable facial recognition rate and computational time on the hardware used by the researcher. Better hardware yields better results.

The researcher, however, encountered problems common to most facial recognition systems. The system was affected by the illumination problem, that is, variation in lighting conditions: whenever there was insufficient lighting in the room, the recognition rate declined and there was a significant number of false positives. Another challenge was hardware: facial recognition requires high-performance computing hardware and, most particularly, a high-definition camera with a high resolution.

The development of the proposed system was narrowed towards finding missing people. However, the same system can be improved by implementing it on DSP processors and using other Hardware devices like Raspberry Pi.

The developed system can identify human faces in real time, so it could be integrated with Google Maps to track any subject of interest. Furthermore, facial recognition systems can also be used in the development of automated attendance systems and for investigating criminal activities.

REFERENCES

  1. Dass, R., Rani, R., & Kumar, D. (2012). Face Recognition Techniques: A Review, 4(7), 70-78.

  2. Patel, R., & Yagnik, S. B. (2013). A Literature Survey on Face Recognition Techniques, 5(4), 189-194.

  3. Antony, J. (2016). Development Phases of Technologies in Face Recognition Systems, 1(2), 18-21.

  4. Madane, S. R., & Khandare, P. S. T. (2015). A Survey on Face Recognition in Present Scenario, (4), 6-10.

  5. Jain, P., Jain, N., & Raja, R. (2014). A Survey on Face Recognition in Present Scenario, 3(12), 4206-4209.

  6. Dass, R., Rani, R., & Kumar, D. (2012). Face Recognition Techniques: A Review, 4(7), 70-78.

  7. Sharma, N., & Kaur, R. (2016). Review of Face Recognition Techniques, 6(7), 29-37.

  8. Sanjeev, K., & Harpreet, K. (2012). Face Recognition Techniques: Classification and Comparisons. International Journal of Information Technology and Knowledge Management, 5(2), 361-363.

  9. Scholars, P. (2005). Purpose of the Literature Review: Strategies for your Literature Review. Educational Researcher, 34(6), 2005-2006.

  10. Parmar, D. N., & Mehta, B. B. (2013). Face Recognition Methods & Applications. Int. J. Computer Technology & Applications, 4(1), 84-86.
