Smart Attendance System using OPENCV based on Facial Recognition

DOI : 10.17577/IJERTV9IS030122


Sudhir Bussa

Department of Electronics and Telecommunication, Bharati Vidyapeeth (Deemed to be) University, College of Engineering,

Dhankawadi, Pune, India. 411043.

Ananya Mani

Department of Electronics and Telecommunication, Bharati Vidyapeeth (Deemed to be) University, College of Engineering,

Dhankawadi, Pune, India. 411043.

Shruti Bharuka

Department of Electronics and Telecommunication, Bharati Vidyapeeth (Deemed to be) University, College of Engineering,

Dhankawadi, Pune, India. 411043.

Sakshi Kaushik

Department of Electronics and Telecommunication, Bharati Vidyapeeth (Deemed to be) University, College of Engineering,

Dhankawadi, Pune, India. 411043.

Abstract – The face is the part of the human body that most uniquely identifies a person, so facial characteristics can be used as a biometric to implement a face recognition system. Attendance marking is one of the most demanding tasks in any organization. In the traditional attendance system, students are called out by the teacher and their presence or absence is marked accordingly; these traditional techniques are time consuming and tedious. In this project, an OpenCV based face recognition approach is proposed. The model integrates a camera that captures an input image, an algorithm for detecting a face in the input image, encoding and identifying the face, marking the attendance in a spreadsheet, and converting it into a PDF file. The training database is created by training the system with the faces of the authorized students; the cropped images are stored with their respective labels, and the features are extracted using the LBPH algorithm.

Keywords – LBPH, OpenCV, camera, attendance, biometric, face recognition, spreadsheet

  1. INTRODUCTION

    Attendance maintenance is a significant function in all institutions for monitoring the performance of students. Every institute does this in its own way: some use old paper or file based systems, while others have adopted automatic attendance strategies using biometric techniques. A facial recognition system is computerized biometric software suited for determining or validating a person by comparing patterns based on facial appearance. Face recognition systems have improved appreciably over recent years, and the technology is now widely used for purposes such as security and commercial operations. Face recognition is a powerful, computer based field of research, and face recognition for the purpose of marking attendance is a resourceful application of an attendance system. It is widely used in security systems and can be compared with other biometrics such as fingerprint or iris recognition. As the number of students in an educational institute or employees at an organization increases, so does the complexity of attendance control for the lecturers or the organization. This project aims to address these problems: the number of students present in a lecture hall is observed, each person is identified, and the information about who is present is maintained.

  2. OVERVIEW

    Face recognition, as a biometric technique, involves determining whether the image of a particular person's face matches any of the face images stored in a database. This problem is difficult to solve automatically because several factors, such as facial expression, aging and even lighting, can change the image. Among the various biometric techniques, facial recognition may not be the most reliable, but it has several advantages over the others: it is natural, feasible and does not require assistance. The proposed system employs the face recognition approach to automate the attendance procedure of students or employees without their involvement. A web cam captures the images of students or employees; the faces in the captured images are detected, compared with the images in the database, and the attendance is marked.

  3. IMAGE PROCESSING

    The facial recognition process can be split into two major stages: pre-processing, which involves face detection and alignment, and recognition, which is performed using feature extraction and matching.

    1. FACE DETECTION

      The primary function of this step is to determine whether human faces appear in a given image, and where these faces are located. The expected outputs of this step are patches containing each face in the input image. To obtain a more robust and easily designable face recognition system, face alignment is then performed to normalize the scale and orientation of these patches.

    2. FEATURE EXTRACTION

      Following the face detection step, the human face patches are extracted from the image. Each face patch is then converted into a vector of fixed dimension or into a set of landmark points.

    3. FACE RECOGNITION

    The last step, after faces have been represented, is to identify them. For automatic recognition a face database needs to be built: several images are taken of each person, and their features are extracted and stored in the database. When an input image is fed in, face detection and feature extraction are performed, and the resulting features are compared with those of each face class stored in the database.

  4. ALGORITHM

    There are various algorithms used for facial recognition. Some of them are as follows:

    1. Eigen faces

    2. Fisher faces

    3. Local binary patterns histograms

    1. EIGEN FACES

      This method is a statistical approach: the algorithm derives the characteristics that most influence the images. The whole recognition method depends on the training database that is provided. Images from two different classes are not treated separately during training.

    2. FISHER FACES

      The Fisher faces algorithm follows a progressive approach just like Eigen faces. It is a modification of Eigen faces, so it also uses Principal Component Analysis. The major difference is that Fisher faces considers the classes: as mentioned previously, Eigen faces does not differentiate between two pictures from two different classes while training, so the total average affects every picture. Fisher faces employs Linear Discriminant Analysis to distinguish between pictures from different classes.

    3. LOCAL BINARY PATTERNS HISTOGRAMS

      This method requires gray scale pictures for the training part. In contrast to the other algorithms, it is not a holistic approach.

    1. PARAMETERS:

      LBPH uses the following parameters:

      1. Radius:

        Generally 1 is set as a radius for the circular local binary pattern which denotes the radius around the central pixel.

      2. Neighbours:

        The number of sample points surrounding the central pixel, generally 8. The computational cost increases with the number of sample points.

      3. Grid X:

        The number of cells along the horizontal direction is represented as Grid X. As the number of cells increases, the grid becomes finer, which results in a higher dimensional feature vector.

      4. Grid Y:

      The number of cells along the vertical direction is represented as Grid Y. As the number of cells increases, the grid becomes finer, which results in a higher dimensional feature vector. A minimal sketch showing how these parameters are passed to OpenCV's LBPH recognizer follows.
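      Taken together, these four parameters map directly onto the constructor of OpenCV's LBPH recognizer. The sketch below assumes the opencv-contrib-python package, which provides the cv2.face module:

```python
import cv2

# Create an LBPH recognizer with the parameters discussed above
# (these values are also OpenCV's defaults).
recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1,      # radius of the circular local binary pattern
    neighbors=8,   # sample points around the central pixel
    grid_x=8,      # cells along the horizontal direction
    grid_y=8       # cells along the vertical direction
)
```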

    2. ALGORITHM TRAINING:

      For training, a dataset of facial images of the people to be recognized is required, along with a unique ID for each person, so that the presented approach can use the provided information to recognize an input image and produce an output. Images of the same person must use the same ID.

    3. COMPUTATION OF THE ALGORITHM:

      In the first step, an intermediate image is created that highlights the facial characteristics of the original image. Based on the parameters provided, a sliding window concept is used to achieve this.

      The facial image is converted into gray scale. A 3×3 pixel window is taken, which can be expressed as a 3×3 matrix containing the intensity of each pixel (0-255). The central value of the matrix is taken as the threshold, and it defines the new values obtained from the 8 neighbours: each neighbour equal to or greater than the threshold value becomes 1, otherwise it becomes 0. The matrix now contains only binary values, which are concatenated position by position to form a binary number. This binary value is then converted into a decimal value, which becomes the central value of the matrix, i.e. a pixel of the new image. As the process is completed over the whole image, we obtain a new image that better represents the characteristics of the original image.
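      The following sketch illustrates this computation with NumPy. It is an illustrative helper written for this description, not OpenCV's internal implementation, and it uses the basic 3×3 (non-circular) neighbourhood:

```python
import numpy as np

def lbp_image(gray):
    """Compute an LBP code for every interior pixel of a grayscale image."""
    gray = gray.astype(np.int32)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbours, walked clockwise from the top-left,
    # each contributing one bit (power of two) to the final code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = gray[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Neighbours equal to or greater than the centre contribute a 1.
        out |= ((neighbour >= centre).astype(np.uint8) << bit)
    return out
```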

    4. EXTRACTION OF HISTOGRAM:

      Using the Grid X and Grid Y parameters, the image obtained in the previous step is split into multiple grids. The histogram can then be extracted as follows:

      1. Since the image is in gray scale, each histogram will consist of only 256 positions (0-255), representing the occurrences of each pixel intensity.

      2. Each cell's histogram is then created and concatenated into a new, bigger histogram (as shown in the sketch below). Supposing there are 8×8 grids, there will be 16,384 positions in total in the final histogram. Ultimately this histogram represents the features of the original image.
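      A sketch of this grid split and histogram concatenation (an illustrative helper; with grid_x = grid_y = 8 it produces the 16,384-position vector mentioned above):

```python
import numpy as np

def lbp_histogram(lbp, grid_x=8, grid_y=8):
    """Split the LBP image into cells and concatenate one 256-bin histogram per cell."""
    h, w = lbp.shape
    cell_h, cell_w = h // grid_y, w // grid_x
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp[gy * cell_h:(gy + 1) * cell_h,
                       gx * cell_w:(gx + 1) * cell_w]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)  # length 256 * grid_x * grid_y (16,384 for 8x8)
```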

    5. THE FACE RECOGNITION:

    The training of the algorithm is done beforehand. To find the stored image that matches the input image, the two histograms are compared and the image corresponding to the nearest histogram is returned. Different approaches can be used to calculate the distance between two histograms; here the Euclidean distance is used, based on the formula:

    $D = \sqrt{\sum_{i=1}^{n} (hist1_i - hist2_i)^2}$

    Hence the result of this method is the ID of the image with the nearest histogram. The algorithm also returns the calculated distance, which can be used as a confidence measure. The threshold and the confidence can then be used to automatically evaluate whether the image has been correctly recognized: if the confidence is less than the given threshold value, the image has been well recognized by the algorithm.
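    The matching step can be sketched as follows. The names (stored_histograms, threshold) are placeholders for illustration, not part of OpenCV's API, which performs this comparison internally:

```python
import numpy as np

def recognize(input_hist, stored_histograms, threshold):
    """stored_histograms: dict mapping a student ID to its stored feature histogram."""
    best_id, best_distance = None, float("inf")
    for student_id, hist in stored_histograms.items():
        # Euclidean distance between the two histograms (the formula above).
        distance = np.sqrt(np.sum((input_hist - hist) ** 2))
        if distance < best_distance:
            best_id, best_distance = student_id, distance
    # The distance acts as the confidence: lower means a closer match.
    if best_distance < threshold:
        return best_id, best_distance   # recognized
    return None, best_distance          # rejected as unknown
```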

    | Parameter | Eigen faces | Fisher faces | LBPH |
    |---|---|---|---|
    | Confidence factor based on output | 2,000-3,000 | 100-400 | 2-5 |
    | Threshold value | 4,000 | 400 | 7 |
    | Principle of dataset generation | Component based | Component based | Pixel based |
    | Basic principle | PCA | LDA | Histogram |
    | Background noise | Maximum | Medium | Minimum |
    | Efficiency | Minimum | Greater than Eigen faces | Maximum |

    Table 1. Comparison of LBPH with other algorithms.

    ADVANTAGES OF USING LBPH ALGORITHM:

    1. It is one of the simplest algorithms for face recognition.

    2. The local features of the images can be characterized by this algorithm.

    3. Using this algorithm, considerable results can be obtained.

    4. The OpenCV library can be used to implement the LBPH algorithm.

  5. BLOCK DIAGRAM

    Fig 1. Block Diagram

    DATABASE CREATION:

    The first step in the Attendance System is the creation of a database of the faces that will be used. Different individuals are considered, and a camera is used for the detection of faces and the recording of the frontal face. The number of frames to be taken into consideration can be modified for the desired accuracy level. These images are then stored in the database along with the Registration ID.
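    A hedged sketch of this database-creation step is given below. The file-name convention, the output folder and the number of frames are illustrative assumptions, not values fixed by the paper:

```python
import cv2

def capture_training_images(student_id, samples=60, out_dir="TrainingImages"):
    # Haar cascade frontal face detector shipped with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)          # the web cam
    count = 0
    while count < samples:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            count += 1
            # Save the grayscale face crop labelled with the registration ID.
            cv2.imwrite(f"{out_dir}/{student_id}.{count}.jpg",
                        gray[y:y + h, x:x + w])
    cam.release()
```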

    TRAINING OF FACES:

    The images are saved in gray scale after being recorded by the camera. The LBPH recognizer is employed to train on these faces, because the training images and the faces to be recognized can have completely different resolutions. A part of the image is taken as the centre and the neighbours are thresholded against it: a neighbour whose intensity is greater than or equal to the centre value is denoted as 1, and 0 if not. This results in binary patterns generally known as LBP codes.

    FACE DETECTION:

    The data of the trained faces is stored in .py format. The faces are detected using the Haar cascade frontal face module.

    FACE RECOGNITION:

    The data of the trained faces is stored, and the detected faces are compared against the stored student IDs and recognized. The recording of faces is done in real time to guarantee the accuracy of the system; the system's accuracy depends heavily on the camera's condition.

  6. FLOW CHART

    Fig 2. Flow-chart of the methodology used for Training Process

    The training process starts with traversing the training data directory. Each image in the training data is converted into gray scale. A part of the image is taken as the centre and its neighbours are thresholded against it: a neighbour whose intensity is greater than or equal to the centre value is denoted with 1, and 0 if not. After this the images are resized and converted into numpy arrays, the central data structure of the numpy library. Each face in the image is detected, separate lists are created, and the faces are appended into them along with their respective IDs. The faces are then trained with their respective IDs, as sketched below.
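    A sketch of this training flow, assuming opencv-contrib-python (for cv2.face) and the illustrative "id.count.jpg" file-name convention used in the capture sketch above:

```python
import os
import cv2
import numpy as np

def train_recognizer(image_dir="TrainingImages", model_path="trainer.yml"):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    faces, ids = [], []
    for name in os.listdir(image_dir):
        # Load each training image in gray scale.
        gray = cv2.imread(os.path.join(image_dir, name), cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        student_id = int(name.split(".")[0])        # ID taken from the file name
        # Detect each face, resize it and append it with its ID.
        for (x, y, w, h) in detector.detectMultiScale(gray):
            faces.append(cv2.resize(gray[y:y + h, x:x + w], (200, 200)))
            ids.append(student_id)
    recognizer.train(faces, np.array(ids))          # train the faces with their IDs
    recognizer.write(model_path)                    # save the trained model
```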

    Fig 3. Flow-chart of the methodology used for Face Detection and Recognition

    The input image is read from the phone camera and converted into gray scale. The faces in the image are detected using the Haar Cascade frontal face module, and the LBPH algorithm predicts which person each face belongs to. The recognized faces are then shown in a green box along with their names, as sketched below.
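    A sketch of this detection and recognition step for one frame; the names dictionary and the confidence threshold are placeholders for illustration:

```python
import cv2

def recognize_frame(frame, recognizer, detector, names, threshold=70):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (200, 200))
        student_id, confidence = recognizer.predict(face)   # LBPH prediction
        # Lower confidence means a closer match; reject weak matches.
        label = names.get(student_id, "Unknown") if confidence < threshold else "Unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # green box
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    return frame
```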

  7. SOFTWARE DESCRIPTION

    1. OpenCV

      OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was developed to serve computer vision applications and to accelerate the use of machine perception in commercial products. OpenCV is a BSD-licensed product, which makes it easy to utilize and modify the code. The library contains more than 2500 optimized algorithms, including an extensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be employed for the detection and recognition of faces, identification of objects, extraction of 3D models of objects, production of 3D point clouds from stereo cameras, stitching images together to produce a high resolution image of an entire scene, finding similar images in an image database, removing red eyes from images taken with flash, following eye movements, recognizing scenery and establishing markers to overlay it with augmented reality, etc. It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV mainly targets real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.

    2. Pandas

      Pandas is an open source Python package that provides diverse tools for data analysis. The package contains various data structures that can be used for many different data manipulation tasks, as well as a range of methods for data analysis, which is useful when working on data science and machine learning problems in Python.
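      In this project, pandas can hold the recognized students and export them to the attendance spreadsheet mentioned earlier. A small, hypothetical sketch (the column names and the CSV path are illustrative):

```python
import pandas as pd
from datetime import datetime

def mark_attendance(records, path="Attendance.csv"):
    """records: list of (student_id, name) tuples for the recognized faces."""
    now = datetime.now()
    df = pd.DataFrame(records, columns=["Id", "Name"])
    df["Date"] = now.strftime("%Y-%m-%d")
    df["Time"] = now.strftime("%H:%M:%S")
    df.drop_duplicates(subset="Id", inplace=True)   # mark each student only once
    df.to_csv(path, index=False)
    return df
```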

    3. Idle

      IDLE is Python's Integrated Development and Learning Environment. IDLE is written entirely in Python, using the tkinter GUI toolkit, and works mostly uniformly on Windows, Unix and macOS. It has a Python shell window (interactive interpreter) with colorizing of code input, code output and error messages, and a multi-window text editor with multiple undo, Python colorizing, smart indent, call tips, auto completion and other features. Searching within any window, replacing within editor windows and searching through multiple files are possible. It also has configuration, browser and other dialogs.

    4. Microsoft Excel

    Microsoft Excel is a spreadsheet program included in the Microsoft Office suite of applications. Spreadsheets present tables of values arranged in rows and columns that can be manipulated mathematically using both basic and complex arithmetic functions and operations. Apart from its standard spreadsheet features, Excel also offers programming support via Microsoft's Visual Basic for Applications (VBA), the capacity to access data from external sources via Microsoft's Dynamic Data Exchange (DDE), and extensive graphing and charting abilities. As an electronic spreadsheet program, Excel can be used to store, organize and manipulate data. Electronic spreadsheet programs were originally based on the paper spreadsheets used for accounting, and the basic layout of computerized spreadsheets is more or less the same as the paper ones: related data is stored in tables, which are groups of small rectangular boxes or cells organized into rows and columns.

  8. RESULT ANALYSIS

    The interface for the Smart Attendance System has been created. Using the interface, the images of the individual students are recorded and stored in the training dataset, while their information is stored in the database, i.e. the excel sheet. Finally, the images of the students are tracked and recognized.

    Fig 4. The different folders have been created.

    Fig 5. The interface for the Face Recognition Based Attendance System in which the Id and Name of the respective students are stored.

    Fig 6. The images are stored in a folder named TrainingImages.

    Fig 7. The excel sheet for the student details is created.

    Fig 8. The names of the students have been stored in the StudentDetails excel sheet.

    Fig 9. The images of the students are trained.

    Fig 10. After tracking the images, the attendance of the students is marked.

    Fig 11. The excel sheet for attendance of the students is created.

    Fig 12. The students' attendance record is stored in the excel sheet.

  9. CONCLUSION

    This paper presents the most productive OpenCV face recognition method available for attendance management. The system has been implemented using the LBPH algorithm. LBPH outperforms the other algorithms with a confidence factor of 2-5 and has the least noise interference. The implementation of the Smart Attendance System shows that there is a relationship between the achievable recognition rate and the threshold value. LBPH is therefore the most reliable and capable face recognition algorithm available in OpenCV for identifying the students in an educational institute and marking their attendance adequately while averting proxies.

  10. REFERENCES

  1. D. Kumbhar and Y. S. Angal, "Smart Attendance System using Computer Vision and Machine Learning," Department of Electronics and Telecommunication, BSIOTR, Wagholi, Pune, India (diptikumbhar37@gmail.com, yogeshangal@yahoo.co.in).

  2. P. Visalakshi and Sushant Ashish, "Attendance System using Multi-Face Recognition," Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India.

  3. Ch. Vinod Kumar and K. Raja Kumar, "Face Recognition Based Student Attendance System with OpenCV," Dept. of CS&SE, Andhra University, Visakhapatnam, AP, India.

  4. A. Choudhary, A. Tripathi, A. Bajaj, M. Rathi and B. M. Nandini, "Automatic Attendance System Using Face Recognition," Information Science and Engineering, The National Institute of Engineering.

  5. A. Waingankar, A. Upadhyay, R. Shap, N. Pooniwala and P. Kasambe, "Face Recognition based Attendance Management System using Machine Learning."

  6. https://www.superdatascience.com/blogs/opencv-face-recognition

  7. https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b

  8. https://www.pyimagesearch.com/2018/09/24/opencv-face-recognition/

  9. http://nxglabs.in/cloud/impact-biometric-attendance-system-educational-institutes.html

  10. https://iopscience.iop.org/article/10.1088/1757-899X/263/4/042095/pdf

  11. http://www.ijsrp.org/research-paper-0218/ijsrp-p7433.pdf

  12. https://www.theseus.fi/bitstream/handle/10024/132808/Delbiaggio_Nicolas.pdf?sequence=1&isAllowed=y
