Smart Attendance System Using Image Processing

DOI: 10.17577/IJERTCONV5IS01100


Prashant Tirlotkar, Kedar Chitrakathi, Rohit Jadhav, Aniket Khamkar
Students (EXTC), Atharva College of Engineering, Mumbai, India

Prof. Manoj Mishra
Assistant Professor (EXTC), Atharva College of Engineering, Mumbai, India

Abstract - This project provides real-time attendance using an image processing technique. The aim is to deliver real-time attendance of the students in a class to the faculty's database. Image processing is carried out using MATLAB as the platform. A camera mounted directly above the classroom entrance captures images of its surroundings, detects only the facial region of each image, and sends this image information to the processing system. According to the detection algorithm used, the captured image undergoes various filtering and masking steps to obtain a suitable image format. The algorithm then compares the input image against stored reference points, identifies the student, and sends that student's attendance to the faculty database via IoT.

I. INTRODUCTION

Attendance is a necessary parameter in most schools and colleges. It is generally taken to obtain an accurate count of the people or students present in a particular classroom or practice area. In the traditional method, the lecturer or teacher counts every student manually and records data such as the candidate's name, serial number and status. This process is very time consuming, and maintaining the collected data is difficult. The process of taking attendance can therefore be digitized to make it simple and accurate. Our project digitizes attendance collection using an image processing technique and automatically updates real-time data in the faculty's database via IoT. Since the project is largely software based, only a few hardware components are needed. A stationary camera is mounted above the classroom entrance so that every student's face can be scanned without interruption. The camera provides the captured real-time image as input to computer node 1, which acts as the master processor. MATLAB is used as the platform for processing the input image.

The input image is processed and compared with a preloaded image. As soon as the image is identified, the algorithm instructs the master processor to transmit the identified person's data to computer node 2, the faculty's database, via IoT over Wi-Fi. Because the process is digitized, the real-time attendance is updated continuously in the faculty's database.

Fig. 1. System block diagram: Camera → Computer 1 (master processor) → Wi-Fi → Computer 2 (faculty database).
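Following the block diagram in Fig. 1, the sketch below shows one way Computer 1 might push an identified student's record to Computer 2 over the local Wi-Fi network. The paper does not specify the transfer protocol, so the HTTP endpoint, the record fields and the use of MATLAB's webwrite are assumptions made purely for illustration.

% Hypothetical sketch: Computer 1 sends an attendance record to the
% faculty database (Computer 2) over Wi-Fi via HTTP (assumed protocol).
record = struct( ...
    'name',      'Sample Student', ...                % placeholder identity data
    'serialNo',  42, ...
    'status',    'present', ...
    'timestamp', datestr(now, 'yyyy-mm-dd HH:MM:SS'));

facultyDbUrl = 'http://192.168.1.20:8080/attendance'; % assumed endpoint on Computer 2
opts = weboptions('MediaType', 'application/json', 'Timeout', 10);
try
    webwrite(facultyDbUrl, record, opts);             % struct is sent as JSON
catch err
    warning('Attendance upload failed: %s', err.message);
end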

II. WORKING PRINCIPLE

As seen above, the master processor has two inputs: real-time data from the camera and pre-stored image data. When the camera sends an input to the master processor, the face detection algorithm starts functioning. The face detection algorithm is designed to crop only the facial section of the input image. This facial section is identified with the help of matching coordinates such as the eyes, nose and mouth, which serve as references while cropping the image.
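A minimal sketch of this detection-and-cropping step is given below, assuming MATLAB's Computer Vision Toolbox; vision.CascadeObjectDetector implements the Viola-Jones detector described later in this section, and webcam_frame.jpg is a placeholder for the camera frame.

% Hypothetical sketch: detect faces in one captured frame and keep only
% the facial sections for the registration step that follows.
frame = imread('webcam_frame.jpg');             % placeholder camera frame
detector = vision.CascadeObjectDetector();      % Viola-Jones face detector
bboxes = step(detector, frame);                 % one [x y width height] row per face

faces = cell(size(bboxes, 1), 1);
for k = 1:size(bboxes, 1)
    faces{k} = imcrop(frame, bboxes(k, :));     % cropped facial section
end

% Optional visual check of the detected regions.
imshow(insertShape(frame, 'Rectangle', bboxes, 'LineWidth', 3));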

This facial section of the image is then stored as part of face registration. Each face has its own identity in the form of a distinctive eye pattern, nose and mouth shape, so the algorithm tags each input face with a different label. Simultaneously, the master processor carries out feature extraction on the pre-stored images. The extracted features then pass through a machine learning stage in which the system learns which face belongs to which tag or label. Both inputs are compared in a classifier, and if the features match within a particular threshold, the algorithm indicates that the face has been identified. The selected data, such as the name, serial number and status of that identity, is then transmitted via IoT to another computer, which is assumed to be the faculty's computer. This is how real-time attendance is obtained.
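The paper does not state which features or classifier are used for recognition, so the following sketch assumes HOG features with simple nearest-neighbour matching against the registered faces; registeredFaces, registeredLabels, the 128×128 working size and the match threshold are all hypothetical.

% Hypothetical sketch: represent each registered face as a feature vector,
% then identify a newly cropped face if it matches within a threshold.
% registeredFaces (cell array of RGB face crops) and registeredLabels are
% assumed to come from the face registration step described above.
sz = [128 128];                                 % assumed working size
featLen = numel(extractHOGFeatures(imresize(rgb2gray(registeredFaces{1}), sz)));
X = zeros(numel(registeredFaces), featLen);
for k = 1:numel(registeredFaces)
    X(k, :) = extractHOGFeatures(imresize(rgb2gray(registeredFaces{k}), sz));
end

q = extractHOGFeatures(imresize(rgb2gray(faces{1}), sz));  % new face crop
d = vecnorm(X - q, 2, 2);                       % distance to each registered face
[dMin, idx] = min(d);

matchThreshold = 8;                             % hypothetical acceptance threshold
if dMin < matchThreshold
    fprintf('Identified student: %s\n', string(registeredLabels(idx)));
else
    disp('No registered face matched within the threshold.');
end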

A. Viola-Jones Algorithm in MATLAB

The basic principle of the Viola-Jones algorithm is to scan a sub-window capable of detecting faces across a given input image. The standard image processing approach would be to rescale the input image to different sizes and then run a fixed-size detector through these images. This approach turns out to be rather time consuming because of the computation of the differently sized images. Contrary to the standard approach, Viola and Jones rescale the detector instead of the input image and run the detector many times through the image, each time with a different size. At first one might suspect both approaches to be equally time consuming, but Viola and Jones devised a scale-invariant detector that requires the same number of calculations whatever its size. This detector is constructed using a so-called integral image and some simple rectangular features reminiscent of Haar wavelets. The first step of the Viola-Jones face detection algorithm is to turn the input image into an integral image. This is done by making each pixel equal to the sum of all pixels above and to the left of the pixel concerned. This is demonstrated in Figure 2.

    This allows for the calculation of the sum of all pixels inside any given rectangle using only four values. These values are the pixels in the integral image that coincide with the corners of the rectangle in the input image.
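A short sketch of this construction in MATLAB is shown below: cumsum builds the integral image, zero padding keeps the corner lookups in bounds, and the rectangle coordinates are arbitrary example values.

% Build the integral image: each entry holds the sum of all pixels above
% and to the left of it (inclusive); pad with zeros so ii(r+1, c+1)
% equals the sum of I(1:r, 1:c).
I  = double(rgb2gray(imread('webcam_frame.jpg')));   % placeholder image
ii = padarray(cumsum(cumsum(I, 1), 2), [1 1], 0, 'pre');

% Sum of the pixels inside an arbitrary rectangle from only four values.
r1 = 50; c1 = 60; r2 = 73; c2 = 83;                  % example rectangle (inclusive)
rectSum = ii(r2+1, c2+1) - ii(r1, c2+1) - ii(r2+1, c1) + ii(r1, c1);

% Cross-check against the direct (slow) summation.
assert(rectSum == sum(I(r1:r2, c1:c2), 'all'));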

It has now been demonstrated how the sum of pixels within rectangles of arbitrary size can be calculated in constant time. The Viola-Jones face detector analyzes a given sub-window using features consisting of two or more rectangles. The different types of features are shown in Figure 4.

Each feature results in a single value, which is calculated by subtracting the sum of the white rectangle(s) from the sum of the black rectangle(s). Viola and Jones empirically found that a detector with a base resolution of 24×24 pixels gives satisfactory results. When allowing for all possible sizes and positions of the features in Figure 4, a total of approximately 160,000 different features can be constructed. Thus, the number of possible features vastly outnumbers the 576 pixels contained in the detector at base resolution. These features may seem overly simple to perform such an advanced task as face detection, but what the features lack in complexity they make up for in computational efficiency. One could understand the features as the computer's way of perceiving an input image, the hope being that some features will yield large values when placed on top of a face. Of course, operations could also be carried out directly on the raw pixels, but the variation due to different pose and individual characteristics would be expected to hamper this approach. The goal is now to smartly construct a mesh of features capable of detecting faces, and this is the topic of the next section.
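Continuing the integral-image sketch above, the value of a single two-rectangle feature needs only a handful of lookups; the helper rectSumAt and the chosen feature placement are illustrative, not taken from the paper.

% Sum of I(r1:r2, c1:c2) computed from the padded integral image ii above.
rectSumAt = @(r1, c1, r2, c2) ...
    ii(r2+1, c2+1) - ii(r1, c2+1) - ii(r2+1, c1) + ii(r1, c1);

% Example two-rectangle feature inside a 24x24 sub-window whose top-left
% corner is at (row0, col0): a white rectangle above a black rectangle.
row0 = 40; col0 = 40;                                % arbitrary sub-window position
whiteSum = rectSumAt(row0,      col0, row0 + 11, col0 + 23);  % top half
blackSum = rectSumAt(row0 + 12, col0, row0 + 23, col0 + 23);  % bottom half
featureValue = blackSum - whiteSum;                  % black minus white, as above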

B. The Cascaded Classifier

The basic principle of the Viola-Jones face detection algorithm is to scan the detector many times through the same image, each time with a new size. Even if an image contains one or more faces, it is obvious that an excessively large number of the evaluated sub-windows will still be negatives (non-faces). This realization leads to a different formulation of the problem: instead of finding faces, the algorithm should discard non-faces. The thought behind this statement is that it is faster to discard a non-face than to find a face. With this in mind, a detector consisting of only one (strong) classifier suddenly seems inefficient, since its evaluation time is constant no matter the input. Hence the need for a cascaded classifier arises. The cascaded classifier is composed of stages, each containing a strong classifier. The job of each stage is to determine whether a given sub-window is definitely not a face or may be a face. When a sub-window is classified as a non-face by a given stage, it is immediately discarded. Conversely, a sub-window classified as a maybe-face is passed on to the next stage in the cascade. It follows that the more stages a given sub-window passes, the higher the chance that it actually contains a face.

In a single-stage classifier one would normally accept false negatives in order to reduce the false positive rate. However, for the first stages of the cascaded classifier false positives are not considered a problem, since the succeeding stages are expected to sort them out. Therefore Viola and Jones prescribe accepting many false positives in the initial stages. Consequently, the number of false negatives in the final cascaded classifier is expected to be very small. Viola and Jones also refer to the cascaded classifier as an attentional cascade. This name implies that more attention (computing power) is directed towards the regions of the image suspected to contain faces. It follows that when training a given stage, say n, the negative examples should be the false positives generated by stage n-1.
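The reject-early behaviour of the cascade can be summarised in a few lines; the stage structure below (weak-classifier feature handles, thresholds, polarities, weights and a stage threshold) is a schematic assumption and not the trained cascade shipped with vision.CascadeObjectDetector.

function isFace = evaluateCascade(stages, subWindowII)
% Schematic cascade evaluation: a sub-window is discarded as soon as any
% stage decides it is definitely not a face; only maybe-faces move on.
isFace = false;
for s = 1:numel(stages)
    stageSum = 0;
    for f = 1:numel(stages(s).features)
        v = stages(s).features{f}(subWindowII);            % Haar-like feature value
        h = stages(s).polarities(f) * v < ...
            stages(s).polarities(f) * stages(s).thresholds(f);
        stageSum = stageSum + stages(s).alphas(f) * h;     % weighted weak vote
    end
    if stageSum < stages(s).stageThreshold
        return;                          % discarded: definitely not a face
    end
end
isFace = true;                           % passed every stage: likely a face
end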

III. FUTURE SCOPE

1. The data fed into the IoT network can be forwarded to the faculty's mobile phone or smartphone using an Android-based platform.

2. An iris scanner can be introduced to identify faculty members before the lecture starts.

IV. CONCLUSION

Considering the requirement for a simple, fast and accurate attendance system, we have designed a digitized attendance system that accurately sends real-time data and provides faculty with easy and quick access to attendance records.

