Smart Attendance System using Machine Learning

DOI: 10.17577/IJERTCONV10IS12002


Prof. Shweta S Bagali1, Dr. K Amuthabala2, Prof. Iranna Amargol3, Mr. H Prajwal4

1Department of CSE, Sri Krishna Institute of Technology, Bangalore-560090, India
2Department of CSE, Reva University, Bangalore-560064, India
3,4Department of CSE, NCET, Bangalore-562110, India

Abstract: This paper describes the computer vision algorithms used in facial recognition. The main goal of this work is to investigate an algorithm that can be used in biometric attendance systems with affordable methods and readily available data. The algorithm primarily uses histogram of oriented gradients (HOG) to detect faces, estimates facial landmarks, applies a support vector machine (SVM) to recognize the face, and uses deep convolutional networks to compare faces. The article describes the concept as well as the underlying facial recognition approach. A basic application that records the time at which a face appears has also been developed, and attendance is written out in CSV format. To demonstrate its utility, the paper relies mainly on the dlib and face_recognition libraries. The software project also uses several important .NET APIs to interface with and obtain frames from a local camera, which may be a webcam or another camera attached to the computer. These APIs feed the camera's video into the system, which then detects and recognizes faces in real time.
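As a rough, minimal sketch of the pipeline outlined above, the snippet below uses the face_recognition library (built on dlib) for HOG-based detection and deep-CNN face encodings and writes attendance to a CSV file; the known_faces/ folder of enrolled images and the attendance.csv file name are illustrative assumptions rather than part of the published system.

    # Sketch only: HOG face detection + CNN encodings + CSV attendance (assumptions above).
    import csv, glob, os
    from datetime import datetime

    import cv2
    import face_recognition

    known_encodings, known_names = [], []
    for path in glob.glob("known_faces/*.jpg"):           # one enrolled image per student
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            known_encodings.append(encodings[0])
            known_names.append(os.path.splitext(os.path.basename(path))[0])

    video = cv2.VideoCapture(0)                           # local webcam
    ret, frame = video.read()
    video.release()

    if ret:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb, model="hog")   # HOG detector
        encodings = face_recognition.face_encodings(rgb, locations)     # 128-d embeddings

        with open("attendance.csv", "a", newline="") as f:
            writer = csv.writer(f)
            for encoding in encodings:
                matches = face_recognition.compare_faces(known_encodings, encoding)
                if True in matches:
                    name = known_names[matches.index(True)]
                    # record the name and the time at which the face appeared
                    writer.writerow([name, datetime.now().isoformat(timespec="seconds")])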

Keywords: Face recognition, attendance system, histogram of oriented gradients, support vector machine, deep convolutional networks, dlib.

I. INTRODUCTION

In this era of rapidly increasing computerization, countless opportunities are opening up for mankind. Face recognition is one of many technologies that promises a broad, beneficial impact on the general public. Through advances in computing and AI, it has matured within just a few years, and computer vision has become an industry where the technology is highly profitable; such systems are now widely used in daily life. Face recognition was one of the earliest challenges tackled by computer vision, and it underlies several recent products from the major technology companies. At its September 2017 keynote, Apple Inc., one of the world's leading technology companies, announced the Face ID feature. Similarly, Facebook introduced comparable technology on its social platforms, Facebook and Instagram, where friends' faces are automatically identified when a photo of them is posted; according to Facebook, the algorithm's accuracy is roughly 93 percent. Work on such problems has produced a large number of facial recognition frameworks and APIs, and these libraries are used here, according to our requirements, to address the problem at hand.

II. RELATED WORK

Traditionally, attendance was recorded manually, which was time-consuming and prone to human error. Attendance records kept this way have numerous flaws, and in practice a significant fraction of them do not reflect the real situation. Recording student attendance on paper sheets is no longer sustainable, and the literature offers several possibilities for addressing the problem.

"Attendance System Using NFC Technology with [1]. Embedded Camera on Mobile Device," according to the research diary (Bhise, Khichi, Korde,Lokare, 2015). Using Near Field Communication technology and a portable application, the attendance system is improved.

During their enrolment into the school, every student is issued an NFC label with an unusual ID, as shown by the examination paper. " By pressing or changing these labels, the teacher's mobile phone will be utilised to take attendance for each class. The integrated camera on the phone will then capture the student's face and send the information to the school server for approval and verification.. The advantages of this technique are located in the same location as the NFC, and the pace of association foundation is extremely fast. It unquestionably improves attendance and interaction. In any event, when the NFC tag isn't identified by the first" owner, this framework won't automatically identify the infraction. Aside from that, the convenience of the system, which employs the phone as an NFC scanner, was a major annoyance to the instructor. Consider what would happen if the instructor forgot their phone at work. What would be the method for motivating people to keep track of their attendance? Furthermore, the majority of the instructors are unlikely to use their personal smart phones in this manner due to security concerns. In the future, meaningful data about the student should be used instead of the NFC, such as biometrics or face recognition. This ensures that the genuine student is the first to take part."[2]. To overcome previous attendance system concerns, the second examination diary "Face Recognition Based Attendance Marking System" (Senthamil Selvi, Chitrakala, Antony Jenitha, 2014) relies on the identifying proof of face- recognition. This system employs a camera to capture images of the worker in order to do face identification and recognition. When an outcome is located in the face data set, the acquired picture is compared to the face information base to look for the specialist's face, and participation will be checked. The main advantage of this architecture is that participation is kept separate on the server, which is extremely secure because no

one can verify the involvement of others. Furthermore, the face recognition algorithm is improved in this suggested framework by leveraging the skin characterization technique to increase the precision of the discovery cycle. Despite the fact that more resources are being invested into improving the accuracy of the facial recognition algorithm, the framework is still not compact. This framework necessitates a separate PC, which necessitates a constant power source, which is inconvenientThis structure is only suited for signalling staff participation because they only need to report their presence once per day, as opposed to understudies who expect to register their attendance at each session on a given day. It will be poorly designed if the participation isn't flexible to check framework. As a result, to address this problem, the entire participation the board framework can be built on an inserted plan, so it works in essentially the same way with only batteries, making it adaptable." "[3]. The third research diary, "Fingerprint Based Attendance System Using Microcontroller and LabView" (Kumar Yadav, Singh, Pujari, Mishra, 2015), offered a solution of using fingerprints to verify attendance. To control the fingerprint recognition process, this framework employs two microcontrollers. First, a finger imprint sensor will be used to collect the unique fingerprint example, which will then be sent to microcontroller 1. The data will be passed on to microcontroller 1 next[3]. The third research diary, " The paper "Fingerprint Based Attendance System Using Microcontroller and LabView" (Kumar Yadav, Singh, Pujari, Mishra, 2015) suggested that fingerprints be used to check attendance. In this framework, two microcontrollers control the fingerprint recogniion process. First, a finger imprint sensor will be utilised to gather the unique fingerprint example, which will then be delivered to microcontroller 1. Microcontroller 1 will then send the data to microcontroller 2 for validation against the data base there. Following the detection of a student match, the information is delivered to the PC via sequential correspondence for display. This strategy is excellent since it accelerates progress while maintaining plan adaptability and improving testing. However, this framework is once again connected to a PC, making it inconvenient. Aside than that, the data set data is inaccessible. Intentionally, guardians who are interested in realising their child's participation will be able to access the data only with a lot of work or at a disadvantage. As a result, the data can be sent to a web server for easy access to provide openness of the students' data to the genuine concerned party. While a login screen can be used to validate proper access."

III. IMPLEMENTATION

The proposed framework is carried out in four essential steps:

1. Detect and extract the face image and save the details in an XML record.
2. Compute the eigenvalues and eigenvectors for that image.
3. Recognize the face by matching it against the eigenvalues and eigenvectors stored in the XML document.
4. Store the name of the recognized face in the Microsoft Access database.

A. Detection and Extraction of the Face

The openCAMCB() function is used to start the camera and take an image. ExtractFace() is then called to isolate the frontal face [2]. ExtractFace() loads face.xml as the classifier using the OpenCV Haar cascade approach. The classifier's output is binary, with "1" indicating the presence of a face and "0" indicating its absence. Once a face has been detected, the "Add Face" button in the face recognition module is used to crop it into a grayscale picture of 50×50 pixels.
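The following is a minimal sketch of this detection-and-extraction step, assuming OpenCV's bundled frontal-face Haar cascade stands in for the paper's face.xml and that opening the default camera plays the role of openCAMCB(); the output file name is illustrative.

    # Sketch of the detection-and-extraction step (assumptions noted above).
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    camera = cv2.VideoCapture(0)   # counterpart of openCAMCB(): open the local camera
    ret, frame = camera.read()     # grab a single frame
    camera.release()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:             # classifier result: face present or not
        x, y, w, h = faces[0]
        # crop the detected face and resize to the 50x50 grayscale patch
        face_50x50 = cv2.resize(gray[y:y + h, x:x + w], (50, 50))
        cv2.imwrite("extracted_face.png", face_50x50)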

B. Identification and Recognition

Our recognise() function, built on OpenCV, runs the eigenface recognition. It has three stages, the first two of which are completed in advance: loading the face images and projecting them onto the PCA subspace. The function loadFaceImgArray() loads the face photos from the XML document into faceImgArray. The number of face images is held in a separate textbox, "No. of faces in the scene", and is counted automatically from the number of faces detected. Global variables such as the number of eigenvectors, the average training image, and the eigenvectors themselves are loaded by name as OpenCV reads each value from the XML record. The final step of recognition is to project each test picture onto the PCA subspace and locate the closest projected training image. This time a test image, rather than a training image, is passed as the principal argument; cvEigenDecomposite() returns a result that is saved in a local variable (the projected test face), which the framework stores in an OpenCV matrix.
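The projection-and-matching step described above can be sketched with NumPy in place of the legacy cvEigenDecomposite() call; the array shapes assume the flattened 50×50 grayscale faces from the previous step, and the function names are illustrative.

    # Eigenface recognition sketch: PCA projection plus nearest-neighbour matching.
    import numpy as np

    def train_eigenfaces(train_faces, num_eigens):
        """train_faces: (N, 2500) array of flattened 50x50 grayscale faces."""
        avg = train_faces.mean(axis=0)                    # average training image
        centered = train_faces - avg
        # principal components are the leading right singular vectors
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        eigenvectors = vt[:num_eigens]
        projections = centered @ eigenvectors.T           # projected training faces
        return avg, eigenvectors, projections

    def recognise(test_face, avg, eigenvectors, projections):
        """Return the index of the closest projected training face."""
        test_projection = (test_face - avg) @ eigenvectors.T
        distances = np.linalg.norm(projections - test_projection, axis=1)
        return int(np.argmin(distances))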

Fig. 1. System process.

IV. SYSTEM REQUIREMENT SPECIFICATIONS

A. HARDWARE REQUIREMENT SPECIFICATION

1. Main Processor: Intel Core i3 or above
2. RAM: 4 GB or above
3. Hard Disk Capacity: 320 GB or above
4. Clock Speed: 1.1 GHz or above

B. SOFTWARE REQUIREMENT SPECIFICATION

1. Operating System: Windows 7, Windows 8, Windows 10 or above
2. Programming Language: Python
3. Tools: Spyder, Atom, PyCharm, SQL

V. ANALYSIS AND RESULTS

This process involves the following stages. Stage 1: Face Detection and Extraction. An image of the client is captured via the webcam, processed, and the face extracted. The eigenvalue of the captured image is computed and compared with the eigenvalues of the face images already in the database; if the eigenvalues match, the new face image data is saved in the face database (the XML document).
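As a rough illustration of the XML "face database" referred to in Stage 1, the sketch below uses OpenCV's FileStorage to write and read a face image; the file name facedata.xml and the node names are illustrative assumptions.

    # Sketch of saving/loading a face image in an XML record via cv2.FileStorage.
    import cv2
    import numpy as np

    def save_face(xml_path, node_name, face_50x50):
        fs = cv2.FileStorage(xml_path, cv2.FILE_STORAGE_WRITE)
        fs.write(node_name, face_50x50.astype(np.float32))   # store the grayscale patch
        fs.release()

    def load_face(xml_path, node_name):
        fs = cv2.FileStorage(xml_path, cv2.FILE_STORAGE_READ)
        face = fs.getNode(node_name).mat()                    # read the stored matrix back
        fs.release()
        return face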


"Stage 2: Face Recognition: The rationale for face recognition is PCA, and the following developments for face recognition would be: "To frame the completion of attendance for each student, the name field from the facial recognition module is added to the MS Access Database, along with the date." End"

A. Results

The results of the recognition process are presented here as grayscale images.

Fig. 4. Images in the database.

B. EXISTING SYSTEM

"The existing structure serves as a guide for students. Attendance will be taken in transcribed registers here. Keeping track of the client's records will be a tedious chore. The level of human effort is higher here. Data recovery is challenging, however, because the entries appear to be kept up to date in the manually created registers. This programme necessitates correct input into each individual field. If some unsuitable data sources are used, the application will refuse to function. as a result, the client finds it difficult to use."

C. PROPOSED SYSTEM

The proposed framework was created to address the shortcomings of the existing one. The project aims to reduce administrative effort while producing precise student attendance records, provides a good user interface, and can be used to generate useful reports.

D. ADVANTAGES OF THE PROPOSED SYSTEM

1. It is easy to operate.
2. It is a comparatively very fast way to record attendance.
3. It is highly reliable and yields accurate results.
4. It provides a good user interface.
5. It produces useful reports.

VI. APPLICATION RELATED DIAGRAMS

Fig. 2. UML notation.

Fig. 3. Sequence diagram.

VII. CONCLUSION

Traditional manual attendance systems have a high rate of error, and the automated attendance system was created to reduce it. The goal is to automate attendance and build a system that benefits the entire organization: a workplace attendance method that is efficient and exact and can replace manual methods. The approach is adequate in terms of safety, dependability, and ease of use. No extra equipment is required to put the system into action in the workplace; a camera and a computer are typically all that is used.

REFERENCES

[1] Indrabayu, Rizki Yusliana Bakti, Intan Sari Areni, A. Ais Prayogi, "Vehicle Detection and Tracking using Gaussian Mixture Model and Kalman Filter," 2016 International Conference on Computational Intelligence and Cybernetics, Makassar, Indonesia, 22-24 November 2016.

[2] Safoora Maqbool, Mehwish Khan, Jawaria Tahir, Abdul Jalil, Ahmad Ali, Javed Ahmad, "Vehicle Detection, Tracking and Counting," 2018 IEEE 3rd International Conference on Signal and Image Processing (ICSIP), Shenzhen, China, 13-15 July 2018.

[3] R. Krishnamoorthy, Sethu Manickam, "Automated Traffic Monitoring Using Image Vision," 2nd International Conference on Inventive Communication and Computational Technologies (ICICCT 2018), Coimbatore, India, 20-21 April 2018.

[4] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," arXiv preprint arXiv:1804.02767, 2018.

[5] R. Girshick, "Fast R-CNN," arXiv preprint arXiv:1504.08083, 2015.

[6] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," arXiv preprint arXiv:1506.01497, 2016.

[7] Sayan Mondal, Alan Yessenbayev, Jahya Burke, Nihar Wahal, "A Survey of Information Acquisition in Neural Object Detection Systems," 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada.

[8] Li Xun, Nan Kaikai, Liu Yao, Zuo Tao, "A Real-Time Traffic Detection Method Based on Improved Kalman Filter," 2018 3rd International Conference on Robotics and Automation Engineering (ICRAE), Guangzhou, China, 17-19 November 2018.

[9] Jess Tyron G. Nodado, Hans Christian P. Morales, Ma Angelica P. Abugan, Jerick L. Olisea, Angelo C. Aralar, Pocholo James M. Loresco, "Intelligent Traffic Light System Using Computer Vision with Android Monitoring and Control," Proceedings of TENCON 2018.

[10] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," arXiv preprint arXiv:1506.02640, 2016.