Surveillance Robot with Face Recognition using Raspberry Pi

DOI: 10.17577/IJERTV8IS120298


Bhavyalakshmi R

Department of Electrical Engineering University Visvesvaraya College of Engineering,

Bangalore University, Bangalore, India.

B. P. Harish

Department of Electrical Engineering University Visvesvaraya College of Engineering Bangalore University, Bangalore, India.

Abstract: International border security has become a very challenging task for any country. It is not always possible for border security forces to monitor long borders round the clock and in all seasons. Deployment of technology, in the form of a robot for intruder detection at the border and transmission of information to the control center, is imperative in the geo-political situation of the world. Many of the high-risk jobs in a hostile environment that are dangerous for soldiers are best performed by a robot. The proposed work aims to develop an automatic solution to detect the presence of an enemy or any hostile events, such as fire or gas leakage, in targeted places without loss of human life. It consists of a robotic vehicle that spies on the pre-allocated area by continuously monitoring it. Whenever a sensor gets activated, the surveillance system checks for the presence of humans and then runs a face recognition algorithm. If the facial data of the detected person does not match the pre-stored personal data of soldiers, the system recognizes him as an intruder and activates the laser gun to target him. In case of detection of unusual/dangerous events, such as someone carrying a knife or a gun in a high-security zone, an alert message is sent to the operator. The system also provides live streaming of surveillance data to the operator using Raspberry Pi and VNC Viewer.

Keywords: Surveillance; Raspberry Pi; face recognition; VNC Viewer.

  1. INTRODUCTION

    Security is of primary importance in protecting nations from terrorist attacks, infiltration, smuggling, etc. Hence, international border areas require a high level of security round the clock. A surveillance system requires security personnel to monitor designated areas using surveillance cameras. The manual surveillance system suffers from the following challenges:

    1. It is hard for humans to maintain focus on monitoring continuously.

    2. Continuous monitoring of multiple screens showing video streams from surveillance cameras is challenging.

    3. A security guard falling asleep due to exhaustion while on the job results in a serious security breach.

    4. Providing 24/7 monitoring requires guards to work in shifts, resulting in high cost.

    In order to address these issues, an automated surveillance system with a robot equipped with various sensors at its core is proposed in this work.

    A mathematical model is proposed to explore the characteristics of a two-wheel self-balancing robot and control its behavior on a planar surface and on an inclined plane [1]. A Raspberry Pi based robotic system is proposed for surveillance to automatically detect intruders and inform the control room. While the system provides remote control of a robot, it fails to distinguish between a known person and an intruder [2]. A security system using Raspberry Pi and OpenCV image processing techniques is presented to detect the presence of humans or smoke at night and provide an Email alert, but it too fails to distinguish between a known person and an intruder [3]. A robotic system using sensors is proposed for the detection of harmful gases and fire in war fields for military applications. The system sends an alert message to the user when any of the sensors becomes active, but it is unable to distinguish human movement from surrounding objects [4]. A microcontroller based automatic gun targeting system is developed to target trespassers in high-security zones. The system gets triggered upon activation of sensors, but it fails to recognize the identity of the person [5]. A surveillance system using Raspberry Pi as a controller and Python-OpenCV as a programming language is proposed to provide real-time video footage, in black and white, using Internet/Wi-Fi as the medium of connectivity [6].

    It is observed from the literature that surveillance systems are designed for detecting human/object movements, gas or flame in target areas, but they suffer from an inability to distinguish between a known person and an unknown person. The proposed robotic system addresses this issue by recognizing known persons using image processing techniques. The system also generates an alert message in case of dangerous events in areas under surveillance.

    The remainder of the paper is organized as follows: While Section II presents the architecture of the surveillance robot system, Section III describes its working. Section IV presents the algorithms for face recognition and unusual event detection. Section V presents the results and discussion, and Section VI concludes with an outline of future work.

  2. ARCHITECTURE OF SURVEILLANCE ROBOT

    The surveillance robot is placed in a remote location to monitor the surrounding environment. It consists of a Raspberry Pi, a USB camera, sensors, a motor driver with two DC motors, and a driver relay with the target system. The block diagram of the surveillance robot is shown in Fig.1.

    Fig.1: Block diagram of surveillance robot

    The robot is designed using Raspberry Pi as the controller and is programmed in Python with OpenCV to implement the various control functionalities of the robot. The DC motors are controlled by General Purpose Input Output (GPIO) control signals of the Raspberry Pi through an L293D motor driver module, which drives the DC motors using an H-bridge. Signals from all sensor modules and from the relay driver module are given to the Raspberry Pi over GPIO pins. All hardware is powered by the supply board with a 12 V power supply, except the sensors, which require 5 V regulated by an LM317 voltage regulator. As the gas sensor occasionally draws more than 1.5 A, it may affect the performance of the other modules; hence, the gas sensor is powered through the LM317 and its DO pin is connected to the Raspberry Pi. Fire detection works by sensing smoke, heat, flame (light), or a combination of them. While the IR sensors detect obstacles in the surrounding area, the ultrasonic sensor computes the distance between the sensor and the obstacle from the time interval between the emitted wave and the reflected wave. Further, a USB camera is interfaced with the Raspberry Pi over the USB port. In place of a laser gun that acts against the intruder, an LED connected to the relay output serves as a proof of concept. The robot system is controlled from an Android application or from a computer system, which provides live streaming of surveillance video footage.
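    As an illustration of this motor interfacing, the following minimal sketch drives the two DC motors through the L293D inputs using the RPi.GPIO library; the BCM pin numbers are assumptions for illustration, not the actual wiring of the prototype.

import RPi.GPIO as GPIO

# Hypothetical BCM pin assignments for the four L293D input pins.
LEFT_FWD, LEFT_REV = 17, 27
RIGHT_FWD, RIGHT_REV = 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_FWD, LEFT_REV, RIGHT_FWD, RIGHT_REV], GPIO.OUT)

def forward():
    # Drive both motors forward: forward inputs high, reverse inputs low.
    GPIO.output([LEFT_FWD, RIGHT_FWD], GPIO.HIGH)
    GPIO.output([LEFT_REV, RIGHT_REV], GPIO.LOW)

def stop():
    # De-energise all H-bridge inputs to stop the robot.
    GPIO.output([LEFT_FWD, LEFT_REV, RIGHT_FWD, RIGHT_REV], GPIO.LOW)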

  3. FLOW CHART OF SURVEILLANCE ROBOT

    The working of the surveillance robot is illustrated in the flowchart shown in Fig.2. On power up, the robot, sensors and camera get activated, and the robot starts moving around the region of surveillance with its camera traversing 54º. The robot moves forward while the IR sensors keep checking for any object within their range. If no obstacle is detected, the robot moves forward; otherwise, the sensor output goes low and the robot stops to overcome the obstacle. The flame and gas sensors check for the presence of fire and gas. If a sensor output is positive, a message is sent to the Android mobile of the operator in the control room to alert him.

    Fig.2: Flowchart of working of surveillance robot

    Further, the surveillance robot monitors the surrounding area continuously and checks for the presence of human beings. Once a face gets detected, the system takes the facial data and, with the help of image processing techniques, determines whether the detected face is that of a known or an unknown person by comparing it with the pre-stored personal data of soldiers in the database. The Haar cascade classifier and Local Binary Pattern Histogram (LBPH) algorithms are used for image processing to determine the presence and identity of the human [7, 8]. The robot moves forward when the face of the detected person matches the facial information stored in the database. Upon a mismatch, an intruder is detected, the surveillance robot stops, and the relay activates the laser gun, which fires at the intruder to deactivate him. However, in this work, the gun target control is replicated with an LED on/off control, with the relay activating the LED implying the triggering of the target controller. The LED turns on and the robot moves forward. This process continues in an infinite loop to provide non-stop surveillance.
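    A high-level sketch of this loop is given below; the pin assignments are assumptions, and forward(), stop(), send_alert() and recognize_face() are hypothetical stand-ins for the robot's actual motor, messaging and recognition routines.

import time
import RPi.GPIO as GPIO

IR_LEFT, IR_RIGHT, FLAME, GAS, RELAY = 20, 21, 16, 26, 12   # assumed BCM pins

GPIO.setmode(GPIO.BCM)
GPIO.setup([IR_LEFT, IR_RIGHT, FLAME, GAS], GPIO.IN)
GPIO.setup(RELAY, GPIO.OUT)

def surveillance_loop(forward, stop, send_alert, recognize_face):
    while True:                                    # infinite loop: non-stop surveillance
        # IR sensor outputs go low when an obstacle is within range.
        if GPIO.input(IR_LEFT) == 0 or GPIO.input(IR_RIGHT) == 0:
            stop()
        else:
            forward()

        # Flame/gas sensors: alert the operator's Android mobile on detection.
        if GPIO.input(FLAME) == 0 or GPIO.input(GAS) == 0:
            send_alert("fire/gas detected")

        # recognize_face() is assumed to return "unknown" on a database mismatch.
        if recognize_face() == "unknown":
            stop()
            GPIO.output(RELAY, GPIO.HIGH)          # laser gun trigger (LED in prototype)
        time.sleep(0.1)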

  4. ALGORITHMS FOR FACE RECOGNITION AND UNUSUAL EVENT DETECTION

    Face detection is carried out using classifiers that classify an image as positive, i.e. an image with a face, or negative, i.e. an image without a face. Classifiers are trained on a large set of positive and negative images, that is, images with and without faces, respectively. OpenCV comes with the following pre-trained classifiers:

    • Haar Classifier

    • Local Binary Pattern (LBP) Classifier.

      1. Haar Classifier

        Haar-feature classifier algorithm is used for face detection.

    • The Haar classifier uses the integral image concept, which allows features to be computed quickly by the detector.

    • The algorithm is based on AdaBoost, which chooses a small number of important features from a huge set to provide efficient classifiers.

    • Complex classifiers are combined to form a cascade, which quickly rejects non-face regions of the image while focusing on face regions. A minimal detection sketch is given after this list.
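    The sketch below shows face detection with OpenCV's bundled pre-trained frontal-face Haar cascade; the camera index and the detection parameters are illustrative assumptions, not values reported in this work.

import cv2

# Load the frontal-face cascade shipped with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # assumed camera index for the USB camera
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale scans the image at multiple scales and returns face boxes.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()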

    2. Face recognition algorithm

      Local Binary Patterns Histogram (LBPH), which generates a local binary pattern for each pixel, is used for face recognition. The basic idea is to compare each pixel with its neighborhood pixels in an image. One pixel is taken as the center and compared with its neighboring pixels: if the intensity of a neighboring pixel is greater than or equal to that of the center pixel, a 1 is written for that neighbor; otherwise, a 0 is written. Each binary pattern is then converted into the corresponding decimal number, and a histogram of all the decimal values is drawn.
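      The following sketch illustrates LBPH training and prediction with OpenCV's LBPHFaceRecognizer (available in the opencv-contrib-python package); the toy training data and the confidence threshold are assumptions for illustration only.

import cv2
import numpy as np

# Toy training data: two synthetic 100x100 grayscale "faces" labelled 1 and 2.
# In the actual system these would be face crops of known soldiers.
faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(2)]
labels = np.array([1, 2])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)        # builds one LBP histogram model per person

# Predict the identity of a probe image; a lower confidence means a closer match.
label, confidence = recognizer.predict(faces[0])
is_known = confidence < 70             # assumed threshold separating known/unknown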

    3. Algorithms involved in object classification

      The deep neural network is organized into a base network and a detection network. The proposed system deploys the MobileNet base network to generate high-level features for classification/detection. A Single Shot Detector (SSD) type detection network adds convolution layers on top of the base network to perform the detection task.
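      A sketch of running such a MobileNet-SSD model through OpenCV's dnn module is given below; the model file names, the input image and the confidence threshold are assumptions based on the publicly released Caffe MobileNet-SSD, not necessarily the exact model files used in this work.

import cv2

# Load the (assumed) publicly available Caffe MobileNet-SSD model files.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

image = cv2.imread("frame.jpg")        # hypothetical input frame from the camera
# SSD expects a 300x300 blob; the scale factor and mean follow the model's training setup.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()             # shape (1, 1, N, 7): [_, class_id, score, x1, y1, x2, y2]

for i in range(detections.shape[2]):
    score = detections[0, 0, i, 2]
    if score > 0.5:                    # assumed confidence threshold
        class_id = int(detections[0, 0, i, 1])
        print(class_id, score)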

  5. RESULTS AND DISCUSSION

    A prototype of the surveillance robot, shown in Fig.3, consisting of sensors and motor actuators interfaced to the controller, is developed and all functionalities are demonstrated. The face detection system using the Haar classifier algorithm is implemented in OpenCV-Python, and the working of the robot with face recognition is demonstrated.

    1. Sensor Output

      While the robot is moving forward, it keeps checking, in an infinite loop, for an obstacle in its path. Upon activation of the left IR sensor, the system displays "left side object detected"; when the right IR sensor gets activated, the system displays "right side object detected". The ultrasonic sensor measures the distance between the object and the sensor to identify its location and displays the measured distance in the output window. A sample of the sensor output generated is shown in Fig.4.
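      A minimal sketch of this distance measurement for an HC-SR04-style ultrasonic sensor is given below; the trigger/echo pin numbers are assumed, not the prototype's actual wiring.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 5, 6                      # hypothetical BCM pins for the ultrasonic sensor
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10 microsecond pulse on TRIG starts the ultrasonic burst.
    GPIO.output(TRIG, GPIO.HIGH)
    time.sleep(0.00001)
    GPIO.output(TRIG, GPIO.LOW)

    start = end = time.time()
    while GPIO.input(ECHO) == 0:       # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:       # wait for the echo pulse to end
        end = time.time()

    # Pulse width times the speed of sound (34300 cm/s), halved for the round trip.
    return (end - start) * 34300 / 2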

    2. Detection of gas and fire

      The system checks for the presence of fire and gas [9] in the surrounding area. If gas or fire is detected, the system displays the message "gas detected" or "fire detected" to the operator, as shown in Fig.5.

      Fig.3: Prototype of Surveillance robot

      Fig.4: Output of IR, Ultrasonic Sensor for obstacle detection

      Fig.5: Gas and fire detection

    3. Unusual event detection

      In case of an unusual event or dangerous situation, such as someone carrying a gun or knife in a high-security zone, the system sends an alert message reading "unusual activity detected". The system loads the input image and displays one of the top-5 predictions, as shown in Fig.6. The message transmitted to the operator is shown in Fig.7. A screwdriver and a syringe are used as unusual objects for testing the system.

    4. Known person recognition

      The system continuously monitors the surrounding area and checks for the presence of a human face. If the detected person is known, the system displays his name, as shown in Fig.8.

    5. Unknown person recognition

    When the detected person is unknown, i.e., the facial data of the image does not match the database, the robot stops and the relay activates the laser gun to target him (the LED turns on in the prototype). The result of unknown person recognition is displayed on the system as shown in Fig.9.

    Fig.6: Object classification for unusual event detection

    Fig.7: Alert message for unusual event detection

    Fig.8: Known person recognition

    Fig.9: Unknown person recognition

  6. CONCLUSIONS AND FUTURE SCOPE

  1. Conclusions

    The proposed surveillance robot system with face recognition continuously surveys the surrounding area. When obstacles are detected, the robot overcomes them and moves forward. The robot is able to sense carbon monoxide and fire in the surrounding area. The system continuously monitors the given area and is able to recognize whether a detected person is known or unknown. If the detected person is unknown, the system activates the laser gun and shoots him down. The system is capable of providing live streaming of images and alert messages. Hence, it is suitable for surveillance applications in war fields or at borders. The proposed work also detects unusual objects carried by humans in high-security zones.

  2. Future Scope

The present work can be extended by adding unusual event detection to recognize the activities of known or unknown persons. As the image processing algorithms used are sensitive to illumination, advanced algorithms can be deployed to insulate the robot from lighting effects.

REFERENCES

  1. Pannaga R.M. and B.P. Harish, Modelling and implementation of two- wheel self-balance robot, International Journal of Electrical, Electronics and Computer Science Engineering, Vol. 4, Issue 6, pp. 33- 40, December 2017.

  2. Ghanem Osman Elhaj Abdalla and T. Veeramanikandasamy, Implementation of Spy Robot for A Surveillance System using Internet Protocol of Raspberry Pi, 2nd IEEE International Conference On Recent Trends in Electronics Information & Communication Technology, May 2017, India, pp. 86-89.

  3. Wilson Feipeng Abaya, Jimmy Basa, Michael Sy, Alexander C. Abad and Elmer P. Dadios, Low Cost Smart Security Camera with Night Vision Capability Using Raspberry Pi and OpenCV, 7th IEEE International Conference on Humanoid, Nanotechnology, Information Technology Communication and Control, Environment and Management (HNICEM), November 2014.

  4. Tarunpreet Kaur and Dilip Kumar, Wireless Multifunctional Robot for Military Applications, 2nd International Conference on Recent Advances in Engineering and Computational Sciences (RAECS), December 2015.

  5. Ajay S. Mirchapure, Automatic Gun Targeting System, Imperial Journal of Interdisciplinary Research (IJIR), Vol. 2, Issue 5, 2016, pp. 1178-1181.

  6. Nazmul Hossain, Mohammad Tanzir Kabir, Tarif Riyad Rahman, Mohamed Sajjad Hossen and Fahim Salauddin, Real-time Surveillance Mini-ROVER Based on OpenCV-Python-JAVA Using Raspberry Pi 2, IEEE International Conference on Control System, Computing and Engineering, November 2015, Malaysia, pp. 476-481.

  7. Paul Viola and Michael Jones, Rapid Object Detection using a Boosted Cascade of Simple Features, Accepted Conference on Computer Vision and Pattern Recognition 2001, pp. 1-9.

  8. T. Ojala, M. Pietikäinen, and D. Harwood, "Performance evaluation of texture measures with classification based on Kullback discrimination of distributions", Proceedings of the 12th IAPR International Conference on Pattern Recognition (ICPR 1994), vol. 1, pp. 582 – 585.

  9. Vijayalakshmi M.S. and B.P. Harish, ATmega328 MC Based Air Pollution Monitoring System, International Journal of Electrical, Electronics & Computer Science Engineering, Vol. 6, Issue 1, pp. 20- 24, February 2019.
