DOI : 10.17577/IJERTV14IS110404
- Open Access
- Authors : Dr. Rekha P S, Rithika A M, Nisarga T L, M Naveenkumar, Madhuri
- Paper ID : IJERTV14IS110404
- Volume & Issue : Volume 14, Issue 11 , November – 2025
- Published (First Online): 02-12-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Human Body Detection Robot at Military Borders
Dr. Rekha P S, Rithika A M, Nisarga T L, M Naveenkumar, Madhuri
#Department of Electrical & Electronics Engineering, SJB Institute of Technology, Bengaluru, Karnataka, India
Abstract – This project focuses on designing and building an AI-powered military surveillance and patrolling robot intended to support defense and security operations. The system combines autonomous navigation with the flexibility of manual control through a web-based dashboard. At the core of the robot is an ESP8266 microcontroller, which coordinates a range of onboard sensors: ultrasonic sensors for detecting obstacles, gas sensors for identifying hazardous environments, flame sensors for early fire detection, and a camera module that provides real-time video streaming and intruder identification. Mechanically, the robot uses an L298N motor driver to achieve differential drive movement, while a servo motor allows the camera to pan and tilt for better surveillance coverage. A key feature of the system is its dual operation mode: the robot can follow predefined patrol routes on its own or be manually controlled during more sensitive or tactical situations. Basic computer vision capabilities enable it to detect people and recognize faces, triggering alerts when unauthorized individuals are spotted. The robot also includes a small medicine-delivery mechanism, allowing essential supplies to be transported safely into risky or hard-to-reach locations. Through the web dashboard, users can view live video, monitor sensor readings, receive alerts, and track the robot's movement and overall status. Because it is affordable and versatile, the system can serve multiple purposes, ranging from perimeter surveillance and hazardous-environment monitoring to logistical support in military operations. Overall, it offers a practical way to reduce human exposure to danger during high-risk missions.
Keywords – AI-based surveillance robot, Autonomous patrolling robot, Military security robot, ESP8266 microcontroller, Real-time monitoring, Computer vision, Human detection, Ultrasonic sensor, Gas sensor, Flame sensor, Hazard detection, Intruder alert system, Medicine delivery mechanism
-
INTRODUCTION
Modern military operations are increasingly turning to autonomous and semi-autonomous systems to protect soldiers and improve efficiency in high-risk situations. Activities like border patrol, base security, and reconnaissance in hostile areas put personnel at serious risk from enemy attacks, environmental dangers, and explosives. Traditional surveillance, which depends on people being present at all times, is exhausting, resource-heavy, and cannot provide continuous coverage without a large team. Thanks to advancements in robotics, artificial intelligence, and IoT technology, autonomous patrolling robots can now handle these tasks more safely and effectively. Equipped with multiple sensors, these robots can monitor areas around the clock, navigate hazardous environments, and deliver real-time intelligence to command centers. By reducing human exposure to danger, these systems make military operations both safer and more reliable.
-
LITERATURE SURVEY
-
Paper 1: N.A.S.N. Nandasena, R. Wijesinghe, W.A.A. Vimukthi, S.L.P. Yasakethu, and H.M.K.K.M.B. Herath, Real-Time Upper Body Motion Tracking Using Computer Vision for Improved Human-Robot Interaction and Teleoperation,2023.
Description: The system was able to track upper-body movements in real time, which made interacting with the robot feel smooth and intuitive. However, its performance depended heavily on the lighting in the environment; changes in brightness sometimes caused the body-landmark detection to become less accurate. To support the motion-tracking features, a custom manipulator was also designed, allowing the robot to follow the user's movements more effectively and improving the overall teleoperation experience.
-
Paper 2: Wanchen Li, Robin Passama, Andrea Cherubini, and Vincent Bonnet, A comparison of human skeleton extractors for real-time human-robot interaction,2023.
Description: Several vision-based skeleton-extraction frameworks were tested in a simulated industrial workspace to see how well they performed. Each system was evaluated for its accuracy, reliability, and ability to work in real time, all under the same controlled conditions to ensure a fair comparison. Repeated trials helped reveal the strengths and limitations of each method at tracking human motion.
-
Paper 3: Dianyong Yu, Jiaxin Li, and Wenjian Yan,
Research on Human Body Detection and Trajectory Tracking Algorithm Based on Multi-sensor Fusion,2021.
Description: The system tracks human movements accurately and in real time, making human-robot interaction feel smooth and natural. It uses cameras and computer vision, so there is no need to wear any sensors or special equipment; users can move freely without extra gear getting in the way. The technology captures motion quickly and precisely, allowing the robot to respond almost instantly, and because it relies on vision, interactions stay simple and unobtrusive. This is especially useful in situations where wearing sensors is not practical. By tracking movements in real time, the system supports seamless control of robotic devices, combining accuracy and ease of use to improve how people work with robots.
-
-
MOTIVATION
The main reason for creating a Human Body Detection Robot for military borders is to make borders safer and protect soldiers from dangerous situations during patrols. Border areas are often tough to watch because of rough landscapes, bad weather, and remote locations, making it hard for people to keep an eye on things all the time. Soldiers face many threats, such as enemy infiltration, illegal crossings, smuggling, and attacks. Unfortunately, some soldiers have lost their lives in these risky conditions, which is why there is a real need for smart machines to help with, or even take over, these dangerous jobs. Thanks to improvements in robotics, artificial intelligence, and sensors, we can now build robots that detect humans accurately and reliably. This project combines those technologies to create a robot that can spot people and send instant alerts to security teams. Using tools like infrared sensors and thermal cameras, the robot can see through darkness or fog, conditions that humans find hard to manage. Another important goal is to reduce the fatigue and mistakes soldiers face during long patrols. Such a robot can lower casualties, boost efficiency, and keep watch around the clock without breaks. It also fits the global trend of using automation in defense, showing how technology can save lives and make borders more secure. This project also offers a chance to explore new ideas and apply robotics in real defense situations.
Today, machines already help us reduce human effort in many areas, and military robots are becoming more common for tough and risky tasks. Robots can do heavy lifting and repeat jobs precisely without getting tired or making mistakes. In recent years, Indian border forces have faced serious threats, and soldiers often have to enter dangerous enemy areas. Using robots for these risky missions can save lives and reduce danger for our troops.
-
OBJECTIVES
-
Develop an Autonomous Patrolling System: The goal is to build a mobile robotic platform that can patrol designated routes on its own. Using GPS waypoints, the robot can navigate along predefined paths, while obstacle detection ensures it moves safely around any barriers in its way. This allows the system to operate independently, providing reliable and continuous monitoring without the need for constant human supervision.
-
Implement Multi-Sensor Environmental Monitoring: Equip the system with sensors to detect fire, gas leaks, and obstacles, and connect them to alert mechanisms that warn users immediately if any danger is present. This ensures hazards are identified quickly, helping prevent accidents and keeping the environment safe.
-
Create an AI-Based Intruder Detection System: The project involves developing computer vision algorithms to enhance security in restricted areas. These algorithms are designed to detect human bodies, allowing the system to identify when someone enters a monitored zone. In addition, face recognition technology is used to determine whether the person is authorized or not. By combining body detection and face recognition, the system can quickly flag unauthorized individuals. This helps security personnel respond faster to potential threats. The approach reduces the need for constant human monitoring, making surveillance more efficient.
-
Integrate Medicine Delivery Capability: The system is designed to include a secure storage compartment for transporting medical supplies safely. This ensures that important items, such as medicines or emergency equipment, remain protected during transit. Location-tracking technology is integrated into the system to monitor the delivery in real time.
-
Optimize Power Management: The system is equipped with battery monitoring to keep track of power levels in real time. It also uses energy-efficient operation modes to conserve battery life. These features allow the robot to patrol for longer periods without interruptions. By optimizing power usage, the system can perform its tasks reliably over extended missions. This ensures continuous and dependable operation during patrols.
-
-
CHALLENGES
The project faces several challenges that need to be addressed for effective operation. Navigating complex and uneven terrain while avoiding obstacles safely is a major concern. Integrating multiple sensors, such as fire, gas, GPS, and obstacle detectors, to work accurately together is also critical. Battery life and energy management must be optimized to allow long, uninterrupted operation, while the system must remain reliable in harsh conditions like extreme weather, low light, fog, or dust. Ensuring that AI algorithms for human detection and face recognition are fast and accurate is essential, as is keeping data and alerts secure from cyber threats. Additionally, transporting medical supplies safely without damage and maintaining overall system reliability during critical missions are key challenges that must be carefully managed.
-
METHODOLOGY
Fig 1: Methodology
-
Locomotion System:
The robot is designed with a differential drive system, using two 12V DC gear motors running at around 200 RPM, controlled by an L298N dual H-bridge motor driver. This setup gives the robot excellent maneuverability, allowing it to move forward and backward, rotate in place, and make smooth arc turns. The L298N receives PWM (Pulse Width Modulation) signals from the ESP8266 to control the speed of the motors, while digital signals handle the direction.
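The speed-and-direction mapping described above can be sketched as follows. This is a Python simulation of the control logic rather than firmware that runs on the ESP8266; the command names, default speed, and 10-bit duty range (the range commonly used by the ESP8266 Arduino core's analogWrite) are assumptions, not the project's actual code.

```python
PWM_MAX = 1023  # assumed 10-bit PWM range, as on the ESP8266 Arduino core

def drive_command(cmd, speed=0.6):
    """Map a high-level drive command to (left, right) signed duty cycles.

    Positive duty means forward, negative means reverse; the L298N would
    receive the magnitude as PWM and the sign as its direction pins.
    """
    duty = int(speed * PWM_MAX)
    table = {
        "forward":      ( duty,  duty),
        "backward":     (-duty, -duty),
        "rotate_left":  (-duty,  duty),            # spin in place
        "rotate_right": ( duty, -duty),
        "arc_left":     (int(duty * 0.5),  duty),  # inner wheel slower
        "arc_right":    ( duty, int(duty * 0.5)),
        "stop":         (0, 0),
    }
    return table[cmd]
```

The signed-duty convention is just one way to express the L298N's separate speed (ENA/ENB) and direction (IN1-IN4) inputs in a single value.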
-
Camera and Servo System
The robot is equipped with a standard USB camera or an ESP32-CAM module, which can handle dedicated computer vision tasks. The camera is mounted on a servo motor, such as an SG90 or MG996R, allowing it to pan across a 180-degree arc. This setup lets the camera scan the surveillance area without needing to rotate the robot itself. The servo is controlled with PWM signals from the ESP8266, and its positioning is guided either by autonomous scanning algorithms or manual commands sent through a web dashboard, providing flexible and precise monitoring.
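As an illustration of how a pan angle maps to a PWM signal, the sketch below converts a 0-180 degree target into a duty value, assuming a 50 Hz servo frame and a 500-2400 microsecond pulse range (typical for an SG90, though the exact range varies by servo model).

```python
def servo_duty(angle_deg, freq_hz=50, min_us=500, max_us=2400, pwm_max=1023):
    """Convert a pan angle (0-180 deg) into a 10-bit PWM duty value.

    Hobby servos expect a pulse every 20 ms (50 Hz) whose width encodes
    the target angle; the 500-2400 us range here is an assumed typical value.
    """
    angle_deg = max(0.0, min(180.0, angle_deg))            # clamp to the arc
    pulse_us = min_us + (max_us - min_us) * angle_deg / 180.0
    period_us = 1_000_000 / freq_hz                        # 20_000 us at 50 Hz
    return int(pulse_us / period_us * pwm_max)
```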
-
Ultrasonic Sensor Array
The obstacle detection system employs HC-SR04 ultrasonic sensors positioned at the front and sides of the robot. The front-facing sensor provides a detection range of 2-400cm with approximately 3mm accuracy, enabling reliable obstacle detection during forward movement. Side-mounted sensors (optional for enhanced capability) provide additional awareness for navigation in confined spaces.
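The conversion from an HC-SR04 echo pulse to a distance reading can be sketched as below. The function name and the out-of-range handling are illustrative choices, but the round-trip arithmetic and the 2-400 cm validity window follow the figures above.

```python
SOUND_CM_PER_US = 0.0343  # speed of sound at roughly 20 C, in cm per microsecond

def echo_to_distance_cm(echo_us, min_cm=2.0, max_cm=400.0):
    """Convert an HC-SR04 echo pulse width (microseconds) into distance (cm).

    The echo pulse covers the round trip to the obstacle and back, so the
    one-way distance is half the travel. Readings outside the sensor's rated
    2-400 cm window are treated as invalid and reported as None.
    """
    distance = echo_us * SOUND_CM_PER_US / 2.0
    if distance < min_cm or distance > max_cm:
        return None
    return distance
```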
-
-
-
IMPLEMENTATION
-
BLOCK DIAGRAM:
Fig 2: Block diagram
Block diagram consists of:
-
Gas and Fire Monitoring
Sensor readings are taken every 2-5 seconds and compared to preset thresholds. A moving average over 5-10 samples helps reduce false alarms from brief fluctuations. If the threshold is exceeded for three consecutive readings, an alert is triggered.
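A minimal sketch of this filtering logic, assuming a class-based design that is not in the paper; the moving-average window and the three-consecutive-readings rule follow the text above.

```python
from collections import deque

class HazardMonitor:
    """Moving-average threshold detector for a gas or flame sensor channel."""

    def __init__(self, threshold, window=5, consecutive=3):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # rolling window of raw readings
        self.consecutive = consecutive
        self.over_count = 0                  # readings in a row above threshold

    def add_reading(self, value):
        """Add a raw sensor reading; return True when an alert should fire."""
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.threshold:
            self.over_count += 1
        else:
            self.over_count = 0              # a clean average resets the streak
        return self.over_count >= self.consecutive
```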
-
Web Dashboard Development
The web dashboard is built using HTML5, CSS3, and JavaScript, with optional frameworks like React or Vue.js for interactivity. The backend can use Node.js with Express, Python with Flask or Django, or PHP. This setup provides a responsive and user-friendly interface for monitoring and control.
-
Live Video Feed
Video streaming can be implemented using MJPEG for simplicity or WebRTC for lower latency. The ESP8266 or ESP32-CAM sends video frames as a multipart HTTP stream, which the browser displays in an img tag that refreshes automatically. Alternatively, WebSocket connections can be used to transmit frames for more efficient streaming.
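For the MJPEG option, each frame is sent as one part of a multipart/x-mixed-replace response. A minimal sketch of the per-frame framing is below; the boundary name is an arbitrary assumption.

```python
def mjpeg_part(jpeg_bytes, boundary="frame"):
    """Wrap one JPEG frame as a single part of an MJPEG multipart stream.

    The server first sends a Content-Type header of
    multipart/x-mixed-replace; boundary=<boundary>, then repeats this
    framing for every new camera frame.
    """
    header = (
        f"--{boundary}\r\n"
        f"Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(jpeg_bytes)}\r\n\r\n"
    ).encode()
    return header + jpeg_bytes + b"\r\n"
```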
-
Manual Control Panel
A virtual joystick or button grid allows users to control the robot remotely. Buttons for forward, backward, left, right, and stop send commands via HTTP POST requests or WebSocket messages to the robot's control API. The interface also includes speed settings (slow, medium, and fast) that adjust the PWM duty cycle to control motor speed.
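One way such a button press could be assembled as an HTTP POST is sketched below with Python's standard library. The endpoint path, JSON payload shape, robot address, and duty values are illustrative assumptions, not the project's actual API.

```python
import json
from urllib import request

# Assumed mapping of the dashboard's speed settings to 10-bit PWM duty values.
SPEED_DUTY = {"slow": 340, "medium": 680, "fast": 1023}

def build_move_request(direction, speed="medium", base_url="http://192.168.4.1"):
    """Build (but do not send) the HTTP POST a dashboard button might issue."""
    payload = json.dumps({"cmd": direction, "duty": SPEED_DUTY[speed]}).encode()
    return request.Request(
        f"{base_url}/api/move",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In a real deployment the request would be sent with `urllib.request.urlopen` (or from the browser via `fetch`), and the firmware would parse the JSON and update the motor PWM accordingly.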
-
Alert Management Panel
Alert notifications are displayed as modal popups or slide-in panels with both visual and audio cues, using the browser notification API or audio alerts. All alerts are recorded in a searchable log, with filters for type, date range, and severity. For intruder alerts, a thumbnail of the captured image is included, which can be clicked to view a larger version.
-
-
CIRCUIT DIAGRAM:
Fig 3: Circuit diagram
The circuit diagram illustrates how the main components of the robot are connected and powered. The ESP8266 or ESP32-CAM serves as the central controller, sending commands to the L298N motor driver, which in turn controls the two DC gear motors for differential drive motion. Ultrasonic sensors like the HC-SR04 are connected to the ESP8266 for obstacle detection, providing distance measurements that help the robot avoid collisions. The servo motor for the camera mount is also controlled by the ESP8266 using PWM signals, enabling the camera to pan for surveillance. Additional sensors, such as fire and gas detectors, are integrated with the controller, and alerts are triggered when readings exceed set thresholds. The power supply is distributed to all components with appropriate voltage regulation to ensure stable operation. Communication between the robot and the web dashboard occurs via Wi-Fi, allowing real-time monitoring, video streaming, and control commands. The diagram shows how inputs (sensors) and outputs (motors, servos, alerts) are logically connected to achieve coordinated autonomous patrolling and
environmental monitoring.
WORKING:
The robot starts by following predefined patrol routes using GPS waypoints and obstacle detection to navigate safely. Its differential drive system allows it to move forward, backward, rotate in place, and make smooth turns, while the motors are controlled via PWM signals from the ESP8266. The camera, mounted on a servo, pans across a 180-degree arc to monitor the surroundings, sending video to the web dashboard for live streaming. Sensors like ultrasonic, fire, and gas detectors continuously monitor the environment, and alerts are triggered if any readings exceed thresholds. The robot's AI-based system can detect human presence and recognize faces, identifying unauthorized individuals. Alerts, video feeds, and status updates are sent to the web dashboard in real time, where users can also control the robot using a virtual joystick or button interface. A secure compartment allows safe transport of medical supplies with location tracking for accurate delivery. Battery monitoring and power-efficient modes ensure extended operation without interruptions. Overall, the robot combines autonomous navigation, environmental monitoring, and real-time alerts to enhance security and efficiency in critical areas.
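Waypoint following implies a reached-waypoint test between the current GPS fix and the target. A minimal sketch using the haversine great-circle distance is below; the function names and the few-metre tolerance are assumptions, not the project's navigation code.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def reached(lat, lon, wp_lat, wp_lon, tolerance_m=3.0):
    """True when the robot is close enough to advance to the next waypoint."""
    return haversine_m(lat, lon, wp_lat, wp_lon) <= tolerance_m
```

The tolerance must be at least as large as the GPS accuracy, otherwise the robot could circle a waypoint it can never "reach".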
FLOWCHART:
Fig 4: Flow diagram
RESULTS
The AI-based military surveillance and autonomous patrolling robot was successfully built, and all major systems functioned as intended. The ESP8266 effectively managed sensors, motor control, and WiFi, while computer vision tasks were offloaded to a separate server. The robot followed predefined waypoints with 2-3 meter accuracy and maintained heading errors below 10 degrees, avoiding obstacles in 95-99% of cases depending on speed. Ultrasonic sensors reliably detected medium-to-large objects, though very small or thin obstacles were occasionally missed. Gas and flame sensors detected hazards effectively but required calibration and careful placement to reduce false alarms.
WiFi communication and video streaming were stable at moderate distances, with performance decreasing when obstacles were present. Face and full-body detection achieved 85-92% accuracy under controlled conditions. Overall, the system demonstrated effective autonomous patrolling and hazard detection, with potential improvements in navigation precision, sensor reliability, and communication range.
CONCLUSION
This project successfully designed, developed, and tested an AI-based military surveillance and autonomous patrolling robot with medicine delivery capability. The system achieved its primary goal of demonstrating cost-effective autonomous surveillance using readily available commercial components. It integrates multiple sensors, including ultrasonic obstacle detection, gas and fire hazard monitoring, and computer vision for intruder detection, all within a mobile platform. The robot is capable of both autonomous waypoint navigation and manual remote control, providing a versatile and reliable solution for surveillance and delivery tasks.
REFERENCES
-
Kumar, A., Sharma, R., & Singh, P. (2022). "Autonomous Navigation and Surveillance in Military Ground Robots: A Comprehensive Review." International Journal of Advanced Research in Computer Science, 13(4), 245-262.
-
Zhang, L., Wang, J., & Chen, H. (2021). "GPS-Based Autonomous Patrolling with Dynamic Obstacle Avoidance for Outdoor Mobile Robots." Robotics and Autonomous Systems, 145, 103856.
-
Patel, S., Desai, M., & Shah, V. (2023). "Lightweight Navigation Algorithms for Resource Constrained Microcontrollers in Autonomous Systems." Journal of Embedded Systems, 11(2), 78-95.
-
Siegwart, R., Nourbakhsh, I. R., & Scaramuzza, D. (2011). Introduction to Autonomous Mobile Robots (2nd ed.). MIT Press.
-
Viola, P., & Jones, M. (2001). "Rapid Object Detection using a Boosted Cascade of Simple Features." Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, I- 511-I-518.
-
Sharma, K., & Singh, R. (2022). "Comparative Analysis of Human Detection Algorithms for Military Surveillance Applications." Journal of Defense Technology, 18(3), 445-462.
-
Dalal, N., & Triggs, B. (2005). "Histograms of Oriented Gradients for Human Detection." IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, 886-893.
-
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). "You Only Look Once: Unified, Real-Time Object Detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779-788.
