Image-Based Intelligent Attendance Logging System

DOI : 10.17577/IJERTV2IS4020


Pallavi R. Patil, M.E. (1st), SSBT COET, Jalgaon

Guide: Prof. P. H. Zope, SSBT COET, Jalgaon

H.O.D.: Prof. S. R. Suralkar, SSBT COET, Jalgaon

Co-Author: Mr. Jayant V. Varade, B.E. (Instru. & Control), VESIT, Chembur

Abstract

This paper proposes extending the function of surveillance cameras to serve as an intelligent attendance logging system.

Surveillance is the monitoring of behaviour, activities, or other changing information, usually of people, for the purpose of influencing, managing, directing, or protecting them. Surveillance cameras are video cameras used to observe an area; they are often connected to a recording device or an IP network and may be watched by a security guard or law enforcement officer. The proposed system works as a time recorder in two phases: a learning phase and a monitoring phase. With a camera placed inside a working room, the system enters the learning phase and locates the sitting areas automatically based on the camera's images. The system detects occupants entering the room, leaving the room, sitting down at their seats, and standing up from their seats, and it tracks the occupants. Working hours are counted as the time an occupant spends at his/her desk: when the occupant sits at his/her seat, a start time is recorded; later, when he/she leaves the seat, a stop time is recorded. The results show that the system achieves good performance.

Surveillance cameras are video cameras used for the purpose of observing an area. They are often connected to a recording device or IP network, and may be watched by a security guard or law enforcement officer. Cameras and recording equipment used to be relatively expensive and required human personnel to monitor camera footage, but analysis of footage has been made easier by automated software that organizes digital video footage into a searchable database, and by video analysis software (such as VIRAT and HumanID). The amount of footage is also drastically reduced by motion sensors which only record when motion is detected. With cheaper production techniques, surveillance cameras are simple and inexpensive enough to be used in home security systems, and for everyday surveillance.

The time recorder could be mechanical (such as a punched card) or electronic (such as a magnetic stripe card, RFID tag, hand punch, or fingerprint reader). This paper seeks a new way to design a time recorder. Considering that intelligent buildings have recently grown as a research topic, and that many buildings are already equipped with surveillance cameras for security reasons or as infrastructure for context-aware applications, this paper proposes a technique to implement a system that works like a time recorder. By extending the function of the existing surveillance cameras into an intelligent attendance logging system, the proposed system aims to monitor and report the occupants' attendance. The system is suitable for sedentary working environments, and the occupants do not need to carry any tag or badge. The system monitors the occupants' attendance using a camera placed inside the working room and sends events/messages in real time. It is also designed to support context-aware applications such as smart homes or intelligent buildings.

To monitor the occupants' attendance, the locations of the occupants' working areas should be defined in advance. An occupant's working area is also called a sitting area. There are two options for defining the sitting areas: the user specifies the locations manually and informs the system, or the system learns to find the locations automatically. After the locations are clearly defined, the system starts to monitor the occupants' attendance. The second option is chosen in this paper; hence the system is called intelligent, since it has the ability to learn from a given environment in order to locate the sitting areas. The collection of sitting areas is called a map.
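As a concrete illustration of such a map, the following is a minimal sketch (not taken from the paper) in which every sitting event contributes a ground position and nearby positions are merged into one sitting area; the SeatMap name and the 60-pixel radius are our own illustrative assumptions.

```python
# Minimal sketch of a map of sitting areas, built from observed sitting
# positions during the learning phase. Nearby positions are merged into one
# sitting area; the 60-pixel radius is an illustrative assumption.
class SeatMap:
    def __init__(self, radius=60):
        self.radius = radius
        self.seats = []                     # list of (x, y) sitting-area centres

    def add(self, position):
        """Record a sitting position; refine an existing area or open a new one."""
        x, y = position
        for i, (sx, sy) in enumerate(self.seats):
            if (x - sx) ** 2 + (y - sy) ** 2 <= self.radius ** 2:
                self.seats[i] = ((sx + x) / 2.0, (sy + y) / 2.0)
                return i
        self.seats.append((float(x), float(y)))
        return len(self.seats) - 1

    def seat_at(self, position):
        """Return the index of the sitting area containing this position, if any."""
        x, y = position
        for i, (sx, sy) in enumerate(self.seats):
            if (x - sx) ** 2 + (y - sy) ** 2 <= self.radius ** 2:
                return i
        return None
```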

Fig. 1 shows an example scenario. Ideally, an occupant enters the room and sits down to start working; later, the occupant stands up from his/her seat and leaves the room. The sitting and standing-up events are used to decide where the occupant's sitting area is and when the occupant works. In the system implementation, this paper also intends to exploit the advantages of existing open-source computer vision libraries, OpenCV and cvBlob.
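The paper builds on the OpenCV and cvBlob C libraries; as a rough sketch of the same object segmentation step, the fragment below uses OpenCV's Python bindings (background subtraction followed by contour extraction). The clip name, the MOG2 parameters, the area threshold, and the OpenCV 4.x API are illustrative assumptions, and cvBlob itself is not used here.

```python
import cv2

# Minimal sketch: foreground segmentation and blob extraction (OpenCV 4.x).
cap = cv2.VideoCapture("room.avi")                     # hypothetical video clip
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                        detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                               # foreground mask
    fg = cv2.medianBlur(fg, 5)                         # suppress small noise
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    # 'blobs' now holds (x, y, w, h) boxes handed to the tracking stage.
cap.release()
```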

The goal of this paper is to design an image-based intelligent attendance logging system with at least the following functions:

  1. Learning phase: detection and identification of seat locations in an unknown environment

  2. Monitoring phase: detection of entering and leaving events for each occupant into and from his/her respective seat (a minimal sketch of the resulting attendance log follows this list)

  3. Real-time system: implementation of a real-time intelligent attendance logging system.
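To make the attendance logging concrete, here is a minimal sketch of how sitting and standing-up events could be turned into a report of working hours; the class and method names are our own and not taken from the paper.

```python
from datetime import datetime

# Minimal sketch of an attendance log: a sitting event opens an interval for
# a seat and the matching standing-up event closes it.
class AttendanceLog:
    def __init__(self):
        self.open_intervals = {}            # seat_id -> start time
        self.records = []                   # (seat_id, start, stop) tuples

    def on_sit(self, seat_id, when=None):
        self.open_intervals[seat_id] = when or datetime.now()

    def on_stand(self, seat_id, when=None):
        start = self.open_intervals.pop(seat_id, None)
        if start is not None:
            self.records.append((seat_id, start, when or datetime.now()))

    def working_hours(self, seat_id):
        """Total hours the occupant of this seat has spent sitting."""
        return sum((stop - start).total_seconds()
                   for sid, start, stop in self.records if sid == seat_id) / 3600.0
```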

We made an observation on sample images taken from the camera in the working environment. The working environment used in this paper is our laboratory, where several desks are placed in a line. To have a better view, and considering the limitations of our camera's field of view, we set the camera perpendicular to the desks. We recorded some image sequences as video clips. We would like to know what we can learn from one sample image taken by a camera under real conditions. Fig. 2 shows a snapshot of the image samples. The observations on these samples are listed as follows:

  1. The path of the walking occupants in the scene is horizontal.

  2. A person may walk behind the sitting person.

  3. When a person sits, he/she is occluded by the chair.

  4. Parts of a person may have a colour similar to the surrounding furniture.

  5. Occlusion between two people sometimes occurs.

  6. There are seats at both edges of the scene. This could be a problem if an occupant suddenly moves out of the scene.

The system is designed by considering the results of these observations. For instance, the horizontal path of the walking occupants makes the grouping algorithm for broken blobs easier. We use four attribute features for tracking: centroid, size, ground position, and colour. The centroid is defined as the centre of the occupant's bounding box. The size feature is the pixel density of one detected occupant. The ground position is the position of the occupant's feet, calculated at the bottom centre of the occupant's bounding box. The colour feature is the RGB histogram of the occupant.
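The four features can be computed directly from a blob's bounding box and the foreground mask. The sketch below is a minimal illustration using OpenCV's Python bindings; the function name, the 8x8x8 histogram binning, and the normalisation step are illustrative assumptions.

```python
import cv2

# Minimal sketch of the four tracking features for one detected blob.
def blob_features(frame, fg_mask, box):
    x, y, w, h = box
    centroid = (x + w / 2.0, y + h / 2.0)                # centre of the bounding box
    size = int(cv2.countNonZero(fg_mask[y:y+h, x:x+w]))  # pixel density of the blob
    ground = (x + w / 2.0, float(y + h))                 # bottom-centre (foot position)
    roi = frame[y:y+h, x:x+w]
    hist = cv2.calcHist([roi], [0, 1, 2], fg_mask[y:y+h, x:x+w],
                        [8, 8, 8], [0, 256, 0, 256, 0, 256])  # RGB colour histogram
    hist = cv2.normalize(hist, hist).flatten()
    return centroid, size, ground, hist
```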

Figure 1. Snapshot of some image samples

  1. Tracking states

    During the appearance of an object in the scene, we would like to keep tracking that object, which is an occupant. Since the system can track a person, we simply follow the person using the four attribute features. The system has four tracking states that are responsible for tracking each person.

    The tracking states are the entering state (ES), sitting state (SS), standing-up state (US), and leaving state (LS). Fig. 2 shows the diagram of the system. A person enters the working room, sits at his/her seat, stands up from the seat, and finally leaves the working room. There are two phases in this system: the learning phase and the monitoring phase. Both phases are triggered by the sitting event and the standing-up event. During the learning phase, the sitting event is used to locate the sitting area. In the monitoring phase, the sitting event is used to give a time stamp of the sitting time, and the standing-up event is used to give a time stamp of the leaving time. A minimal sketch of these state transitions is given after Fig. 2.

    Figure 2. The diagram of the system.
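The following is a minimal sketch of the four tracking states; the transition rules and the inputs (inside_seat_area, is_moving, in_scene) are our reading of the description above rather than the paper's exact logic.

```python
from enum import Enum, auto

# Minimal sketch of the per-occupant tracking states (assumed transition rules).
class State(Enum):
    ENTERING = auto()       # ES: the person has appeared in the scene
    SITTING = auto()        # SS: the person is still inside a sitting area
    STANDING_UP = auto()    # US: the person has risen from the seat
    LEAVING = auto()        # LS: the person is walking out of the scene

def next_state(state, inside_seat_area, is_moving, in_scene):
    if not in_scene:
        return State.LEAVING
    if state in (State.ENTERING, State.STANDING_UP):
        if inside_seat_area and not is_moving:
            return State.SITTING        # sitting event: start time / locate seat
        return state
    if state == State.SITTING and is_moving:
        return State.STANDING_UP        # standing-up event: stop time
    return state
```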

  2. Approaches and Performance Criteria

    This intelligent attendance logging system is designed based on two assumptions. The first assumption is that the environment is unknown; that is, the number of seats and the locations of these seats are not known before the system starts monitoring. The second assumption is that each occupant has his/her own seat, so that detecting presence/absence at a particular seat amounts to answering the presence/absence of the corresponding occupant. To achieve the research objectives, the algorithm of the proposed system is shown in Fig. 3. The system consists of an object segmentation unit, a tracking unit, a learning phase, and a monitoring phase. The report on the presence or absence of the occupants is the final output of the system for further analysis. A minimal per-frame sketch of this flow is given after Fig. 3.

    Figure 3. The algorithm of the system.
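To show how the units in Fig. 3 fit together, here is a minimal per-frame sketch. The segment() function and the tracker object are placeholders for the segmentation and tracking units; seat_map and log refer to the SeatMap and AttendanceLog sketches given earlier.

```python
# Minimal sketch of the per-frame flow: segmentation, tracking, then either
# the learning phase (build the seat map) or the monitoring phase (log times).
def process_clip(frames, segment, tracker, seat_map, log, learning=True):
    for frame in frames:
        blobs = segment(frame)                        # object segmentation unit
        events = tracker.update(blobs, frame)         # tracking unit: state changes
        for occupant_id, event, ground_pos in events:
            if event == "sit":
                if learning:
                    seat_map.add(ground_pos)          # learning phase: locate seat
                else:
                    log.on_sit(seat_map.seat_at(ground_pos))
            elif event == "stand" and not learning:
                log.on_stand(seat_map.seat_at(ground_pos))
    return seat_map.seats if learning else log.records
```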

    There are two performance criteria for evaluating the system with respect to its main functions, which are to detect the occupants' sitting areas and to report the monitoring result. The first criterion is that the system should detect the sitting areas given by the ground truth. The second criterion is that the system should be able to monitor the occupants during their appearance in the scene and generate an accurate report. The system is evaluated by testing it with several video clips using different scenarios. The occupants enter the scene; they sit, stand up, leave the scene, and sometimes form a group and then separate. In the first scenario, the occupants enter and leave the scene one by one without causing any occlusion; the objective is to see how well the system can detect the locations of the occupants' sitting areas. The second scenario is the same as the first, but the occupants form a group, to see how well the system can handle the occlusion problem. It is assumed that the scene in the video clips has a horizontal path only, meaning that the occupants walk in the horizontal direction.
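As one concrete way to score the first criterion, the sketch below counts a ground-truth sitting area as detected when some detected area overlaps it sufficiently; the intersection-over-union measure and the 0.5 threshold are our own illustrative choices, not taken from the paper.

```python
# Minimal sketch: fraction of ground-truth sitting areas matched by detections.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def detection_rate(detected, ground_truth, thr=0.5):
    hits = sum(any(iou(d, g) >= thr for d in detected) for g in ground_truth)
    return hits / len(ground_truth)
```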

  3. Broken blobs

We would like each person to correspond to a single blob. However, sometimes a part of the person has a colour similar to the background, or the person sits in his/her chair, and the whole body is segmented into separate parts. After the blobs are obtained, we need to group the unconnected blobs that belong to the same person. There are three conditions for examining broken blobs: the first is the intersection distance of the blobs (BI), the second is the nearest vertical distance of the blobs (Bdy), and the third is the angle of the blobs (BA) measured from their centroids. The units of BI and Bdy are pixels, and the unit of BA is degrees. If these three conditions are satisfied (1), the broken blobs are grouped into one blob. TI, Tdy, and TA are the threshold values for the intersection distance, the nearest vertical distance, and the angle of the blobs, respectively. Each broken blob is marked by a rectangular box. BI may be zero or negative, meaning that the rectangular boxes of the two broken blobs overlap each other. Bdy takes an absolute (positive) value, and BA is an absolute value relative to the other blob. In the experiments, TI is 0 pixels, Tdy is 50 pixels, and TA is 30°.

Figure 4. Blob grouping result of an occupant occluded by the chair

Fig. 4 shows the result of the blob grouping algorithm when an occupant has a colour similar to the background image. The left image is the current image of the scene. The middle image shows the broken blobs of the occupant. In this case, according to (1), the logic of G is ((false) OR (true AND true)), which evaluates to 1; hence the two broken blobs are grouped. The right image is the result of blob grouping, shown as one rectangular box. Here, the blob grouping algorithm is expected to reconstruct the occupant from the broken blobs. After the broken blobs are grouped, the centroid of the grouped blob is located at the centre of the rectangular box.
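Condition (1) itself does not survive in this text; based on the thresholds above and the worked example for G ((false) OR (true AND true)), our reading is that two blobs are grouped when BI <= TI, or when both Bdy <= Tdy and BA <= TA. A minimal sketch under that assumption:

```python
# Minimal sketch of the grouping test; the OR/AND combination is our
# reconstruction of condition (1), and the thresholds come from the text.
T_I, T_DY, T_A = 0, 50, 30      # pixels, pixels, degrees

def should_group(BI, Bdy, BA):
    """True if two broken blobs should be merged into one blob."""
    return (BI <= T_I) or (abs(Bdy) <= T_DY and abs(BA) <= T_A)
```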

PROPOSED APPROACH

Differentiation of occluded individuals

A challenging situation may arise when two or more occupants are in the scene: their objects can collide with each other. We call this situation merge and split. Merge and split can be detected using a proximity matrix. In the merge condition, only the centroid feature is used to match the track position to the next possible position, since the other three features are not useful while the objects are merged. After a group of occupants splits, the colour feature is used to match who is who. This is possible because the colour information of each occupant is saved continuously during tracking; for this purpose, while the objects are merged, their colour information is not updated, so that the colour information from before the merge is preserved.

Figure 5. Occluded occupant problem.

In the experiments, when more than two occupants merge and then split, sometimes one occupant remains occluded and splits off only later. Fig. 5 shows this occlusion problem. In the left image, three occupants merged and then split into two objects, with the remaining occupant still occluded. The system detects this event and marks the split objects, then keeps watching them until one of the marked objects splits again, as shown in the right image. When the hidden occupant splits off, the system re-identifies each occupant and restores the ID numbers they had just before they merged.
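A minimal sketch of the re-identification step after a split, assuming the colour feature is the normalised RGB histogram from the earlier feature sketch and that each reappearing blob is matched to the saved pre-merge histogram with the highest correlation; the matching rule is an illustrative assumption.

```python
import cv2

# Minimal sketch: match each blob that appears after a split to the occupant
# whose pre-merge colour histogram correlates with it best, restoring the IDs.
def reidentify(split_histograms, saved_histograms):
    # split_histograms: {temporary_id: histogram of a blob after the split}
    # saved_histograms: {original_id: histogram stored before the merge}
    assignment = {}
    for tmp_id, hist in split_histograms.items():
        best_id = max(saved_histograms,
                      key=lambda oid: cv2.compareHist(saved_histograms[oid],
                                                      hist, cv2.HISTCMP_CORREL))
        assignment[tmp_id] = best_id        # ID the occupant had before the merge
    return assignment
```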

CONCLUSION

We have designed an intelligent attendance logging system by integrating open-source libraries with additional algorithms. The system achieves real-time performance of up to 16 fps. We also demonstrate that the system can handle occlusion of up to three occupants, considering that the scene becomes too crowded with more than three. While a regular time recorder only reports the time stamps of the beginning and the end of an occupant's working hours, this system provides more detailed timing information. In the future, the events generated by this system could be used to deliver messages to other systems; it would then be possible to control the environment automatically, for example by adjusting the lighting, playing relaxing music, or setting the air conditioner when an occupant enters or leaves the room. After the map of sitting areas is found, the user may label each sitting area manually, or a recognition system can be added. Obtaining a larger view or changing the camera orientation is also a consideration for future work.

REFERENCES

  1. Wikipedia, "Time clock," http://en.wikipedia.org/wiki/Time_clock (June 24, 2010).

  2. B. Brumitt, B. Meyers, J. Krumm, A. Kern, and S. Shafer, "EasyLiving: Technologies for intelligent environments," Lecture Notes in Computer Science, vol. 1927, pp. 97-119, 2000.

  3. S.-L. Chung and W. Y. Chen, "MyHome: A residential server for smart homes," Lecture Notes in Computer Science, vol. 4693, pp. 664-670, 2007.

  4. Z. Zhou, X. Chen, Y.-C. Chung, Z. He, T. X. Han, and J. M. Keller, "Activity analysis, summarization, and visualization for indoor human activity monitoring," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1489-1498, 2008.

  5. OpenCV. Available: http://sourceforge.net/projects/opencvlibrary/

  6. cvBlob. Available: http://code.google.com/p/cvblob/
