
An Edge Computing Approach for Real-Time Wildlife Detection and Alert System using YOLOv8 on Raspberry Pi 5

DOI: 10.17577/IJERTV14IS080137


Smit Lekhadia, Pragnesh Ghoniya, S Bala Vignesh Reddy, Abhay Panshala, Dax Trivedi, Manjeet Singh
Department of Electronics and Communication, Vishwakarma Government Engineering College, Ahmedabad, India

Abstract: This paper presents a real-time deadly animal detection system utilizing the Raspberry Pi 5, Pi Camera, and the YOLOv8 object detection framework. The system is designed to detect and issue alerts for invasive animals, including lions, leopards, monkeys, and elephants, with a demonstrated mAP of 90.30%. It also supports user-defined model extensions, allowing customization for additional species as required by specific deployment scenarios. The proposed solution runs entirely on edge hardware, leveraging the computational capabilities of the Raspberry Pi 5 to perform inference locally. This approach minimizes latency, reduces the dependency on network connectivity, and enhances data privacy. A custom web-based dashboard provides live video monitoring and real-time visual and auditory alerts upon detection. Designed for deployment in forest peripheries, agricultural zones, and wildlife-sensitive areas, the system offers a cost-effective, scalable, and efficient method for mitigating human-animal conflicts using embedded AI-based surveillance.

Keywords: Object Detection, Real-time Monitoring, Pi Camera, You Only Look Once (YOLO), YOLOv8, Alert System.

  1. INTRODUCTION

    In regions adjacent to forests and wildlife corridors, the conflict between humans and nature presents a critical challenge. Large mammals, including lions, elephants, leopards, and primates, frequently encroach into agricultural areas in search of resources, posing a threat to human safety and incurring economic losses. These encounters often result in crop destruction, livestock predation, and occasional human casualties, creating significant socio-economic impacts on rural communities. Conventional mitigation strategies, such as physical barriers, acoustic deterrents, manual surveillance, and watchtowers, prove inadequate due to their reactive nature, high costs, and limited coverage of extensive rural territories. The absence of automated monitoring systems results in delayed threat response and substantial damage.

    Recent advances in computer vision and deep learning have revolutionized real-time object detection capabilities for surveillance applications [1]. Among state-of-the-art architectures, the YOLO algorithm family achieves optimal trade-offs between detection precision and computational efficiency [2]. YOLOv8, introduced by Ultralytics in January 2023, incorporates anchor-free detection, decoupled prediction heads, and enhanced feature pyramid networks, delivering superior accuracy with reduced inference latency, suitable for edge computing platforms [3], [4].

    This work presents an automated wildlife detection framework that leverages fine-tuned YOLOv8 architectures for the real-time identification of dangerous fauna. The system processes live video streams to detect target species and generate immediate alerts. The proposed solution addresses the critical gap between traditional reactive approaches and proactive wildlife management strategies. The framework integrates advanced computer vision techniques with edge computing capabilities to enable autonomous monitoring without requiring constant human supervision. The implementation is optimized for Raspberry Pi 5 hardware, facilitating cost-effective deployment in resource-constrained rural environments with power optimization. We use YOLOv8 as the core object detection algorithm to identify the presence of dangerous wild animals in real-time. The key contributions of this paper are as follows:

    1. Construction and curation of an extensive multi-class dataset containing 3,089 meticulously annotated bounding box instances spanning four critical wildlife species (lions, elephants, leopards, monkeys), with comprehensive coverage of diverse illumination conditions, complex backgrounds, varying object scales, and realistic field deployment scenarios to ensure robust model generalization.

    2. Comprehensive benchmarking and empirical analysis of multiple architectural variants of YOLOv8 through systematic evaluation of detection accuracy, inference latency, memory consumption, and computational complexity trade-offs, specifically optimized for resource-constrained edge computing platforms and real-time deployment requirements [5].

      The key innovative aspects and benefits of the proposed work are as follows.

      1. Cost-effective implementation with minimal infrastructure requirements, making it accessible to resource-limited rural communities.

      2. Real-time threat detection capabilities that enable immediate response and prevention of human-wildlife conflicts before damage occurs.

      3. Autonomous operation with minimal human intervention, reducing the need for continuous manual surveillance and associated labor costs.

      4. Scalable deployment across extensive agricultural territories, providing comprehensive coverage that traditional methods cannot achieve.

      5. Edge computing enables continuous wildlife monitoring in remote areas with limited connectivity, operating independently of network infrastructure.

        Fig. 1. System architecture and workflow for real-time wildlife detection using YOLOv8 on Raspberry Pi 5.

  2. RELATED WORK

    This section reviews contemporary object detection algorithms and wildlife monitoring methodologies relevant to our proposed work.

    1. Object Detection Frameworks

      Object detection constitutes a fundamental computer vision task with applications spanning surveillance, autonomous systems, and wildlife monitoring [6]. The objective encompasses simultaneous object localization and classification within digital imagery. Contemporary approaches broadly divide into traditional feature-based methods and deep learning architectures utilizing convolutional neural networks (CNNs) [6].

      Traditional Approaches: Classical detection pipelines employ sliding-window techniques combined with hand-crafted feature descriptors (Histogram of Oriented Gradients (HOG), Scale-Invariant Feature Transform (SIFT), Local Binary Patterns (LBP)) and machine learning classifiers such as Support Vector Machines [7]. While computationally efficient, these methods exhibit limited robustness under the variable illumination and complex environmental conditions prevalent in wildlife habitats.
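The hand-crafted-feature idea behind these classical pipelines can be illustrated with a toy HOG-style descriptor. This is a simplified sketch, not the full Dalal–Triggs HOG with its cells, blocks, and overlapping normalization:

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Toy HOG-style descriptor: a magnitude-weighted histogram of
    gradient orientations over one grayscale patch."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in classic HOG.
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=n_bins, range=(0.0, 180.0),
                           weights=magnitude)
    # L2 normalization gives some robustness to illumination scaling.
    return hist / (np.linalg.norm(hist) + 1e-6)

# A patch whose intensity rises left to right has purely horizontal
# gradients, so all descriptor mass lands in the 0-degree bin.
patch = np.tile(np.arange(16, dtype=float), (16, 1))
desc = orientation_histogram(patch)
```

A sliding-window detector would compute such descriptors over many window positions and scales and feed each to an SVM; that per-window cost is exactly what unified deep detectors amortize away.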

    2. Deep Learning Architectures

      Modern detection frameworks leverage CNN-based architectures, categorized as two-stage (Faster R-CNN, Mask R-CNN) and single-stage detectors (YOLO, SSD) [8]. Two-stage methods achieve superior accuracy by generating region proposals followed by classification refinement, albeit with increased computational overhead. Conversely, single-stage detectors prioritize inference speed through unified detection pipelines [9]. YOLOv8 exhibits exceptional performance in real-time object detection through anchor-free mechanisms, enhanced feature pyramid networks, and optimized architectural design [10]. Its computational efficiency and reduced parameter count make it particularly suitable for edge deployment on resource-constrained platforms such as the Raspberry Pi 5.

    3. Wildlife Detection Systems

    Recent wildlife monitoring applications have demonstrated the efficacy of deep learning approaches for species identification and behavioral analysis. However, limited work addresses real-time threat detection for human-wildlife conflict mitigation in agricultural settings. As depicted in Fig. 1, the proposed framework leverages YOLOv8's architectural advantages to detect dangerous wildlife species (lions, elephants, leopards, monkeys) in real-time edge computing environments, addressing critical gaps in proactive wildlife management systems.

  3. IMPLEMENTATION OF YOLOV8

    This section presents the YOLOv8 network architecture, classifier modifications, dataset construction, and performance evaluation metrics for wildlife detection applications.

    1. YOLOv8 Network Architecture

      YOLOv8, developed by Ultralytics, introduces significant architectural improvements over previous YOLO versions, enhancing accuracy and computational efficiency for deployment on edge devices such as the Raspberry Pi 5 [11]. The architecture consists of three core components:

      1. Backbone: The backbone extracts essential features, including edges, textures, shapes, and patterns, from input images. YOLOv8 employs an enhanced CSPDarknet backbone incorporating C2f (Cross Stage Partial with fewer parameters) modules. This structure enables efficient feature extraction with reduced computational overhead, making it suitable for low-power devices while maintaining detection accuracy for wildlife species including lions, elephants, leopards, and monkeys across varying environmental conditions.

        Fig. 2. Training and validation curves for the modified YOLOv8 model over 100 epochs. Top row (left to right): training box loss, classification loss, distribution focal loss, precision, and recall. Bottom row: validation box loss, classification loss, distribution focal loss, mAP@0.5, and mAP@0.5:0.95.

      2. Neck: The neck component enhances multi-scale object detection capabilities by combining feature maps from different backbone levels [12]. It implements Feature Pyramid Network (FPN) and Path Aggregation Network (PAN) structures for effective feature fusion [3]. The FPN provides top-down information flow from deeper semantic layers to shallower spatial layers, while PAN strengthens bottom-up detail propagation. This bidirectional architecture ensures retention of fine details and contextual understanding, enabling accurate animal localization across diverse scales and complex environments.

      3. Head: The head generates final detection outputs, including object classification and bounding box localization [11]. YOLOv8 introduces a decoupled head design separating classification and regression tasks into independent branches, optimizing each task individually for improved accuracy. The head supports multi-scale predictions across three feature map resolutions, enabling detection of varying animal sizes. Additionally, the anchor-free detection mechanism eliminates manual anchor box definitions, directly predicting object coordinates and dimensions for simplified training and enhanced flexibility.
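The neck's bidirectional fusion and the head's anchor-free decoding can be sketched together in a few lines. This is a simplified illustration assuming plain additive fusion and scalar distance outputs; the real network interleaves convolutions between fusions and predicts distance distributions via DFL:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    """2x spatial downsampling by striding, standing in for a strided conv."""
    return x[:, ::2, ::2]

def fpn_pan_fuse(c3, c4, c5):
    """FPN top-down pass then PAN bottom-up pass over three pyramid
    levels (strides 8/16/32). Plain addition shows the information flow."""
    p5 = c5
    p4 = c4 + upsample2x(p5)      # deep semantics -> higher resolution
    p3 = c3 + upsample2x(p4)
    n3 = p3
    n4 = p4 + downsample2x(n3)    # fine spatial detail -> deeper levels
    n5 = p5 + downsample2x(n4)
    return n3, n4, n5

def decode_anchor_free(cx, cy, ltrb, stride):
    """Anchor-free box decoding: a grid-cell centre (cx, cy) plus
    predicted distances (left, top, right, bottom) in stride units
    yield corner coordinates in input-image pixels."""
    l, t, r, b = ltrb
    return ((cx - l) * stride, (cy - t) * stride,
            (cx + r) * stride, (cy + b) * stride)

# A 640x640 input gives 80x80, 40x40, and 20x20 pyramid levels.
c3 = np.ones((16, 80, 80)); c4 = np.ones((16, 40, 40)); c5 = np.ones((16, 20, 20))
n3, n4, n5 = fpn_pan_fuse(c3, c4, c5)

# A stride-16 cell at (10.5, 6.5) predicting distances of 2 cells
# left/right and 1 cell up/down decodes to pixel coordinates:
box = decode_anchor_free(10.5, 6.5, (2.0, 1.0, 2.0, 1.0), stride=16)
```

The decoded box needs no anchor-box priors at all, which is what removes the manual anchor tuning step from training.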

    2. Classifier Modification and Dataset Construction

      Standard YOLOv8 models accommodate 80 COCO dataset classes. We modified the classifier for wildlife-specific detection of four target species: lions, elephants, leopards, and monkeys. This adaptation reduces network parameters and computational overhead while enhancing inference speed and detection accuracy for real-time scenarios [13], [14]. Transfer learning techniques were employed to fine-tune pre-trained weights for improved wildlife detection performance. This optimization proves essential for Raspberry Pi 5 deployment in resource-constrained rural environments. Our dataset contains 3,089 annotated images from wildlife repositories and online databases, representing diverse natural environments including forests, grasslands, and agricultural boundaries. The collection ensures variability in backgrounds, lighting conditions, scales, angles, and animal postures to enhance real-world detection reliability. Negative samples containing empty landscapes, domestic animals, and humans were incorporated to minimize false positives. All images feature precise bounding box annotations and class labels for the four target species, providing high-quality training data for robust model development.

      TABLE I
      OVERALL PERFORMANCE METRICS OF THE MODEL

      Metric                     Value
      Precision                  91.2%
      Recall                     84.8%
      mAP@0.5                    90.0%
      mAP@0.5:0.95               67.6%
      Inference time per image   1908 ms (CPU)
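The paper does not list its training configuration. For readers reproducing the classifier modification, Ultralytics-style fine-tuning is typically driven by a small dataset file naming the classes; a hypothetical four-class sketch, with placeholder paths rather than the authors' actual layout:

```yaml
# data.yaml -- hypothetical four-class wildlife dataset description
# (all paths are placeholders; adjust to the actual dataset layout)
path: datasets/wildlife   # dataset root
train: images/train       # training images, relative to 'path'
val: images/val           # validation images, relative to 'path'
names:
  0: lion
  1: elephant
  2: leopard
  3: monkey
```

Training against such a file rebuilds the COCO-pretrained 80-class detection head for the reduced four-class count, which is the parameter reduction the text describes.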

    3. Training Performance Analysis

      The comprehensive training analysis demonstrates the model's convergence behavior across multiple metrics over 100 epochs. As illustrated in Fig. 2, the model exhibits stable convergence with consistent loss-reduction patterns. The training loss metrics show a steady decline: box loss decreases from 1.4 to approximately 0.3, classification loss reduces from 2.0 to 0.3, and distribution focal loss (DFL) decreases from 1.8 to 0.9. Validation metrics mirror training performance, indicating effective generalization without overfitting. Performance metrics demonstrate robust detection capabilities: precision stabilizes at 0.9, while recall reaches 0.86, indicating an excellent balance between detection accuracy and completeness. The mean average precision mAP@0.5 achieves 0.9, and mAP@0.5:0.95 reaches 0.68, confirming strong detection performance across varying IoU (Intersection over Union) thresholds. The smooth convergence curves and consistent validation performance validate the effectiveness of our wildlife-specific dataset and model optimization for edge deployment scenarios.
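The IoU thresholds behind these mAP figures compare predicted and ground-truth boxes; a minimal sketch of the computation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes
    given as (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union

# A prediction shifted half a box-width off its ground truth:
overlap = iou((0, 0, 100, 100), (50, 0, 150, 100))
```

This half-shifted prediction scores an IoU of 1/3 and would be rejected as a match at the 0.5 threshold; averaging over progressively stricter thresholds up to 0.95 is why mAP@0.5:0.95 comes out lower than mAP@0.5.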


      TABLE II
      MODEL PERFORMANCE PER CLASS

      Class      Precision   Recall   F1 Score
      Elephant   0.82        0.86     0.84
      Leopard    0.98        0.95     0.96
      Lion       0.83        0.88     0.86
      Monkey     0.86        0.98     0.92

    4. Performance Evaluation Metrics

    Model performance assessment employs standard object detection metrics, as presented in Table I. mAP combines precision and recall for overall accuracy evaluation across all classes. Precision measures positive prediction accuracy, while recall evaluates the model's ability to capture all relevant instances. Table I shows exceptional overall performance, with a precision of 91.2% and a recall of 84.8%, indicating robust detection capabilities. The mAP@0.5 achieves 90.0%, while mAP@0.5:0.95 reaches 67.6%, confirming strong detection performance across varying IoU thresholds. The inference time of approximately 1908 ms per image on a Central Processing Unit (CPU) demonstrates feasibility for real-time applications, though Graphics Processing Unit (GPU) acceleration could enhance processing speed. These comprehensive metrics validate the model's effectiveness for wildlife detection in edge computing environments.
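The per-class F1 scores in Table II follow directly from these precision and recall definitions as their harmonic mean; a one-line sketch that reproduces the elephant row:

```python
def f1_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Elephant row of Table II: precision 0.82, recall 0.86 -> F1 0.84.
f1_elephant = round(f1_score(0.82, 0.86), 2)
```

The same formula recovers the leopard (0.96) and monkey (0.92) rows from their reported precision and recall values.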

  4. PERFORMANCE EVALUATION AND DETECTION RESULTS

    The proposed system successfully achieved real-time deadly animal detection on the Raspberry Pi 5 platform. As shown in Table II, the model demonstrated high precision, recall, and F1-scores across all classes, with particularly strong performance for leopard and monkey. The detection system consistently processed video streams at real-time speed, generating alerts with minimal latency. This makes it highly suitable for continuous monitoring in wildlife-sensitive areas. Sample detection outputs are presented in Fig. 3, illustrating accurate identification and localization of multiple animal species under diverse environmental conditions. By leveraging edge computing on the Raspberry Pi 5, the system eliminates reliance on cloud infrastructure, ensuring immediate alerts even in remote or low-connectivity zones. This local processing also enhances data security and reduces inference delays. Combined with the lightweight YOLOv8 model, the system provides an efficient, portable, and scalable solution for intelligent wildlife monitoring.
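The paper does not detail the dashboard's alert logic. A minimal sketch of the per-frame decision, in which the tuple layout and the 0.5 confidence threshold are illustrative assumptions rather than the authors' values:

```python
# Target species from the paper; the threshold value is an assumption.
DANGEROUS = {"lion", "elephant", "leopard", "monkey"}

def alerts_for(detections, conf_threshold=0.5):
    """Keep only the alert-worthy detections from one frame.
    Each detection is (class_name, confidence, xyxy_box)."""
    return [d for d in detections
            if d[0] in DANGEROUS and d[1] >= conf_threshold]

# One frame: a confident leopard, a low-confidence lion, and a cow.
frame = [("leopard", 0.91, (40, 30, 220, 180)),
         ("lion", 0.32, (300, 50, 400, 150)),
         ("cow", 0.88, (10, 10, 90, 80))]
triggered = alerts_for(frame)
```

Only the leopard survives the filter; in deployment each surviving detection would drive the dashboard's visual and auditory alerts.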

  5. CONCLUSION

In this work, a real-time deadly animal detection system was successfully developed using the Raspberry Pi 5, Pi Camera, and the YOLOv8 detection framework. The system is capable of accurately identifying hazardous wildlife species, including lions, leopards, monkeys, and elephants. The implementation on an edge device ensures low-latency operation, reliability in remote environments, and enhanced data privacy. A user-friendly dashboard was also developed to provide live monitoring and instant alerts, improving situational awareness and response times. The system's modular design allows users to extend its functionality by adding new animal classes according to specific requirements. Overall, the solution demonstrates a cost-effective, scalable, and efficient method for strengthening safety measures in wildlife-prone regions. Future work will focus on expanding the dataset, optimizing detection under varying environmental conditions, and integrating additional communication modules for broader alert dissemination.

Fig. 3. Sample detection results demonstrating successful identification and localization of (a) Lion, (b) Leopard, (c) Monkey, and (d) Elephant.

International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol. 14, Issue 08, August 2025

REFERENCES

  1. M. A. Tabak et al. Animal Detection and Classification from Camera Trap Images Using Different Mainstream Object Detection Architectures. Animals, 12(15):1982, August 2022.

  2. M. Vélez-Reyes et al. Animal Species Recognition with Deep Convolutional Neural Networks from Ecological Camera Trap Images. Animals, 13(9):1526, May 2023.

  3. S. Kumar et al. YOLO-SAG: An improved wildlife object detection algorithm based on YOLOv8n. Ecological Informatics, 82:102745, July 2024.

  4. Y. Wang, J. Li, and X. Zhang. Nighttime wildlife object detection based on YOLOv8-night. Electronics Letters, 60(15):e13305, August 2024.

  5. M. Chen, L. Wang, and H. Liu. WildARe-YOLO: A lightweight and efficient wild animal recognition model. Ecological Informatics, 80:102539, April 2024.

  6. Geethanjali P., Metun Nivin, and M. Rajeswari. Advances in Ecological Surveillance: Real-Time Wildlife Detection Using MobileNet-SSD v2 CNN Machine Learning. November 2023.

  7. Emrah Şimşek, Barış Özyer, Levent Bayındır, and Gülşah Tümüklü Özyer. Human-Animal Recognition in Camera Trap Images. In 2018 26th Signal Processing and Communications Applications Conference (SIU), pages 1–4, 2018.

  8. Mohamed Fathy, Mohamed K. Elhadad, and Mohamed A. Elshafey. Comprehensive Performance Evaluation of YOLOv8-11 Models for Object Recognition in Remote Sensing Imagery. In 2025 15th International Conference on Electrical Engineering (ICEENG), pages 1–6, 2025.

  9. Rodolfo Bonnin, Claudio Delrieux, and María Fabiana Piccoli. Deep Learning-Based Visual Aid for Low Vision. IEEE Embedded Systems Letters, pages 1–1, 2025.

  10. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single Shot MultiBox Detector. In Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I, pages 21–37. Springer, 2016.

  11. Cheng-Kai Lu, Jun-Yu Shen, Cheng-Hung Lin, Chung-Yueh Lien, and Ding Su Yen. Efficient Embedded System for Small Object Detection: A Case Study on Floating Debris in Environmental Monitoring. IEEE Embedded Systems Letters, 17(4):264–267, 2025.

  12. Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7464–7475, 2023.

  13. A. G. Howard et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861, April 2017.

  14. T. Nguyen et al. Wildlife Real-Time Detection in Complex Forest Scenes Based on YOLOv5s Deep Learning Network. Remote Sensing, 16(8):1350, April 2024.