IJERT-MRP

Drone-Assisted AI Terrain Mapping and Classification

DOI : 10.17577/IJERTV14IS060239


Dr. M. Sundararaj

Dean Academics, Bharath Institute of Higher Education and Research

Chennai, India

Sanskar Singh

Student

Bharath Institute of Higher Education and Research Chennai, India

P.V Gurudatta Sarma

Student

Bharath Institute of Higher Education and Research Chennai, India

Himanshu Sai Prakash Yadav

Student

Bharath Institute of Higher Education and Research Chennai, India

Abstract— This study describes the creation of a drone-assisted AI system for terrain mapping and classification that combines deep learning models with aerial monitoring. The proposed approach uses a specially designed UAV with insulation to improve battery performance in cold conditions. The drone uses Convolutional Neural Networks (CNNs) to classify terrain, outperforming conventional architectures. Furthermore, a YOLOv8-based real-time object detection system improves operating capabilities, and a change detection model based on Faster R-CNN detects changes in the terrain over time. A sub-application using YOLOv11-OBB further classifies objects in multiple images at a time, generating a CSV file for data analysis. By showcasing how AI-driven aerial surveillance can automate terrain analysis and enhance environmental monitoring, disaster response, and agricultural assessment, this study lays the groundwork for future developments in autonomous mapping technologies.

Keywords— Drone AI; Terrain Mapping; UAV; Remote Sensing; Deep Learning; Terrain Classification; Object Detection; Thermal Insulation

  1. INTRODUCTION

    Recent developments in object detection using deep learning models such as CNNs and vision transformers have greatly increased accuracy across a variety of areas [1]. Similarly, change detection in remote sensing has been improved using machine learning and algebraic techniques, allowing for more accurate environmental monitoring [2]. To support these advancements, datasets like GeoNat v1.0 have been released for AI-driven natural feature mapping in geospatial research [3].

    At the same time, efficient heat control remains essential, particularly for Li-ion batteries; composite structures made of graphene or h-BN and paraffin have enhanced their safety and heat regulation [4]. The choice of UAV, whether fixed-wing or multi-rotor, has a significant impact on mission suitability and mapping accuracy [5]. Aerial photography is now even more reliable thanks to developments in small object detection models like YOLOv8 [6].

    More precise UAV-based monitoring has been made possible by high-resolution terrain tracking, especially in difficult-to-reach places like wooded slopes [7]. Furthermore, human-drone interaction research keeps improving usability and autonomy across a variety of domains [8]. Experimental investigations of thermal runaway have provided important insights into safety concerns surrounding lithium-ion batteries in severe conditions [9]. In the meantime, continuous architectural improvements and increased dataset usage have kept models like YOLO at the forefront of object detection [10].

    Aerodisk-induced flow separation is a significant advancement in re-entry vehicle aerodynamics that increases shock stand-off distance and decreases wave drag. Similarly, phase change materials (PCMs) based on paraffin wax show great promise for improving battery safety and thermal control [11], both of which are essential for UAVs operating in harsh environments. While unsupervised learning has strengthened supervised terrain classification in remote sensing [12], advancements in small object recognition continue to increase the efficacy of UAV image analysis [13]. Drone technological developments underscore both the increasing adaptability of drones and the related difficulties [14]. At the same time, battery performance for aerial systems is being improved via thermal solutions employing graphene-paraffin composites [15]. AI-based object detection is now used in multimedia and agriculture applications in addition to remote sensing [16]. New insulating materials also help to increase the energy efficiency and safety of Li-ion batteries [17].

    AI's use in environmental monitoring has been strengthened by notable advancements in UAV-based land cover classification through hybrid Transformer-CNN models [18]. Flexible energy storage systems are now possible thanks to advancements in 3D electrode architectures for coaxial fiber batteries [19]. Across all domains, object detection speed and reliability have improved thanks to developments in Faster R-CNN [20].

    Environmental studies using RGB-UAV data, combined with deep learning, have improved coastal wetland vegetation classification [21]. Novel polyurethane foams derived from suberinic acid-based polyols have shown promise in aerospace thermal insulation [22], while emulsion-template synthesis techniques further enhance polyurethane foam performance for extreme environments [23]. Natural oil-based polyurethane foams are extending insulation effectiveness to cryogenic temperatures, relevant for space applications [24]. Physics-informed neural networks are helping model the thermal conductivity of polyurethane-PCM composites with greater accuracy [25]. Finally, sustainable solutions like recovered polyurethane foams are emerging as eco-friendly alternatives in aerospace insulation and energy systems [26].

  2. TERRAIN MAPPING WITH DRONES

    Drones, or Unmanned Aerial Vehicles (UAVs), have transformed various industries by providing an efficient, cost-effective, and accurate method for data collection, terrain mapping, and environmental monitoring. Unlike traditional manned aircraft or satellite imagery, drones offer real-time, high-resolution data with greater flexibility and accessibility, especially in remote or hazardous environments.

    The integration of Artificial Intelligence (AI) and Machine Learning (ML) in drones has further enhanced their capability to autonomously classify terrains, detect changes over time, and analyze environmental patterns with minimal human intervention. These AI-driven drones are particularly useful in sectors such as disaster management, agriculture, surveillance, and infrastructure inspection, where real-time data analysis is crucial.

    1. Types of Drones For Terrain Mapping

      • Fixed-Wing Drones: These drones resemble small airplanes and are ideal for large-scale terrain mapping due to their long flight endurance and ability to cover vast areas. However, they require runways or launch systems for take-off and landing.

      • Multi-Rotor Drones: These are the most commonly used UAVs for mapping due to their vertical take-off and landing (VTOL) capabilities. They provide high maneuverability, making them suitable for mapping complex or confined areas.

      • Hybrid Drones: Combining the advantages of both fixed- wing and multi-rotor UAVs, hybrid drones can take off vertically and transition into efficient horizontal flight for extended missions.

    2. Applications Of Drone-Based Terrain Mapping

      • Disaster Response: Real-time mapping of landslides, floods, and earthquakes for rapid assessment.

      • Agriculture: Crop health monitoring and land classification to optimize resource use.

      • Forestry and Conservation: Tracking deforestation, biodiversity, and habitat monitoring.

      • Infrastructure Inspection: Surveying roads, bridges, and construction sites with high accuracy.

  3. DESIGN

    Because of its inherent stability, ease of use, and mobility, a quadcopter was selected as the foundation for our drone. In contrast to other designs like tricopters or hexacopters, a quadcopter offers the best possible balance between flying control and structural efficiency. By doing away with a tail rotor and variable pitch systems, this design reduces mechanical complexity, making it simpler to assemble and maintain while guaranteeing steady flying performance. Additionally, because of its better control responsiveness and greater aerodynamics, an X-shaped quadcopter frame was chosen over a plus configuration. Improved stability and mobility result from the center of thrust in an X design better matching the drone's center of gravity. Better clearance for payloads, cameras, or other parts that might be put in the middle of the frame is also made possible by this configuration.

    1. Two-Blade Propeller Selection

        For the drone, a two-blade propeller was selected because of its effectiveness and aerodynamic benefits. Compared with three- or four-blade designs, two-blade propellers have the following advantages:

        • Increased efficiency: A lower drag results in a superior power-to-thrust ratio, which extends battery life and endurance.

        • Easy balance and tuning: The drone's flight characteristics may be more easily adjusted thanks to fewer blades, which also lessen vibration.

        • Reduced power consumption: Longer flight periods are a result of the motors' reduced load, which guarantees efficient energy use.

    2. Use of Off-the-Shelf Components

      To streamline the development process while maintaining cost-effectiveness and reliability, the drone was constructed using readily available, off-the-shelf components. This approach allowed for quick assembly, easier part replacement, and ensured compatibility with widely used drone control systems. By leveraging standardized components, the design benefited from proven performance and reliability, making it suitable for various applications without requiring extensive custom fabrication.

      Fig. 1. 2D Drawing of F450 frame

      Fig. 2. Isometric view of CAD model

    3. Drone Integration

    To guarantee that the main goals (cost-effectiveness and the capacity to tolerate windy conditions as well as low-pressure and low-temperature areas) were effectively achieved, the drone's component selection process was carefully completed. Every part was selected so that the motor ratings, the potential difference between the battery terminals, and the electronic speed controllers (ESCs) remained compatible with each other. This selection procedure was essential to preventing any one component's performance from impairing the drone's overall effectiveness. Long operational lifespans were also anticipated for the selected components. It was essential that the drone's thrust be at least double its weight to provide stable flight and improved resistance to unexpected wind gusts or imbalances. Once the component selection was finalized, a series of necessary calculations (including thrust-to-weight ratio, power requirements, and endurance estimations) was conducted. Additionally, some extra weight margin was factored into the design to accommodate the polyurethane (PU) foam insulation, which was necessary for effective battery thermal management in cold environments.

      • Here are the components used to assemble the drone with their models and weights, respectively.

        TABLE I. Weight Of The Drone Components

        S. No | Name of the Component                          | Weight (g)
        1     | F450 Frame                                     | 282
        2     | A2212 Motor                                    | 256
        3     | 30A ESC (Electronic Speed Controller)          | 108
        4     | Radiolink Crossflight Flight Controller        | 54
        5     | M8N GPS Module                                 | 25
        6     | 1045 Propellers                                | 40
        7     | LiPo 3S 4500 mAh Battery                       | 382
        8     | Akaso EK7000 Pro Camera                        | 60
        9     | PU Foam                                        | 80
        10    | Miscellaneous components                       | 20
        11    | FlySky FS-i6B Transmitter and FS-iA6B Receiver | 15 (receiver)
        12    | Telemetry Kit (917 MHz)                        | 15
        13    | Shock Absorber                                 | 24
              | Total                                          | 1361

    Calculations

    Mechanical power (P) = V × I    (1)
    Average current drawn (I) = 12 A    (2)
    Each cell has a potential difference of 3.7 V, so a 3-cell battery provides 3 × 3.7 = 11.1 V    (3)
    V = Voltage = 11.1 V    (4)
    I = Current = 12 A    (5)
    P = 11.1 × 12 = 133.2 W    (6)

    Considering the motor efficiency as 80%:
    P = 106.6 W    (7)

    T = η × P × Thrust-to-Power Efficiency    (8)
    where
    T = thrust (in grams)    (9)
    η = motor efficiency (approximately 0.8 for typical setups)    (10)
    P = mechanical power output (133.2 W)    (11)

    Thrust-to-Power Efficiency: for the given configuration, the thrust-to-power efficiency is 8.4 g/W.

    T = 0.8 × 133.2 × 8.4 = 895.104 g    (12)
    Thrust produced by each motor (ASTAR A2212 1000KV) = 895.104 g    (13)
    Total number of motors = 4
    Total thrust produced by the motors = 895.104 × 4 = 3580.416 g    (14)
    Thrust-to-weight ratio = 3580.416 / 1361 = 2.63    (15)

    Since the thrust-to-weight ratio is greater than 2, the drone can move swiftly through the airstream and perform the instructed manoeuvres. It also leaves room for the further addition of components, if necessary, without compromising stability. The minimum thrust-to-weight ratio is 2:1; for added stability and considering real-time constraints, a thrust-to-weight ratio of 2.4 is used.

    Thrust required per motor = 816 g ≈ 8 N    (16)
    Total thrust required = 8 × 4 = 32 N ≈ 3263 g    (17)
    Lift required per watt of power consumed = 3263 / 106.6 = 30.6 g/W    (18)
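The power, thrust, and thrust-to-weight figures above can be reproduced with a short script. This is a sketch: the constants come from Table I and the equations above, and the 8.4 g/W thrust-to-power efficiency is the figure quoted for this configuration, not a general motor property.

```python
# Reproduce Eqs. (1)-(15): battery power, per-motor thrust, and
# thrust-to-weight ratio for the quadcopter configuration in Table I.

CELL_VOLTAGE = 3.7       # V per LiPo cell
CELLS = 3                # 3S battery
CURRENT = 12.0           # A, average current drawn
MOTOR_EFFICIENCY = 0.8   # typical for an A2212 setup
THRUST_PER_WATT = 8.4    # g/W, thrust-to-power efficiency (paper's figure)
NUM_MOTORS = 4
TOTAL_WEIGHT = 1361.0    # g, total from Table I

voltage = CELL_VOLTAGE * CELLS                                 # 11.1 V (Eq. 3)
power = voltage * CURRENT                                      # 133.2 W (Eq. 6)
thrust_per_motor = MOTOR_EFFICIENCY * power * THRUST_PER_WATT  # g (Eq. 12)
total_thrust = thrust_per_motor * NUM_MOTORS                   # g (Eq. 14)
twr = total_thrust / TOTAL_WEIGHT                              # Eq. 15

print(f"P = {power:.1f} W, thrust/motor = {thrust_per_motor:.3f} g, "
      f"total = {total_thrust:.3f} g, T/W = {twr:.2f}")
```

Running the script recovers the paper's values: 133.2 W, 895.104 g per motor, 3580.416 g total, and a thrust-to-weight ratio of 2.63.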

    After the drone's assembly, several stability tests were carried out across regions with varying climates. The purpose of these tests was to assess how resilient the drone was to changes in air conditions, including temperature, pressure, and humidity. The flight testing revealed important information about the drone's operational stability under varying environmental circumstances. As soon as the findings showed satisfactory performance, the process of adding PU foam insulation to improve the battery's thermal stability started.

    The drone was tested in mountainous areas, including Sikkim and northern West Bengal, at an elevation of 120 meters. The testing took place during the daytime at a temperature of 15°C and a pressure of 101,200 Pa. The trial was conducted over a landslide site on NH-10.

    Fig. 3. Drone footage

    Endurance = Battery capacity (Ah) / Total current drawn (A)    (19)

    Since 30 A ESCs are used, the maximum current drawn is taken as 28.8 A, giving 4.5 / 28.8 = 0.15625 hrs, or 9.375 mins. Including some buffer, 80% of the battery capacity is considered usable.

    Usable battery capacity = 0.8 × 4.5 = 3.6 Ah    (20)
    Endurance = 3.6 / 28.8 = 0.125 hrs, or 7.5 mins    (21)

    Minimum endurance at 55% throttle:    (22)
    I₅₅% = I₁₀₀% × (55/100)^1.5    (23)
    I₅₅% = 10.5 A × (0.55)^1.5    (24)
    I₅₅% = 10.5 × 0.408    (25)
    I₅₅% = 4.284 A    (26)
    Total current (for 4 motors) = 4.284 × 4 = 17.136 A    (27)
    Endurance at 55% throttle = 4.5 / 17.136 = 0.2626 hrs, or 15.75 minutes    (28)

    While the flight controller was being calibrated, we kept 15% of the battery reserved for return to base (RTB).

    Endurance = 85% of 15.75 = 13.38 mins    (29)

    Fig. 4. Drone footage

    Fig. 5. Image of assembled drone
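The endurance estimates in Eqs. (19)-(29) all reduce to capacity divided by current draw, with a power-law model for current versus throttle. A sketch, using the paper's 10.5 A full-throttle per-motor current and the 1.5 exponent from Eq. (23):

```python
# Endurance estimates from Eqs. (19)-(29): battery capacity / current draw,
# with I(throttle) = I_full * throttle**1.5 for part-throttle cruise.

BATTERY_AH = 4.5     # LiPo 3S capacity, Ah
ESC_LIMIT = 28.8     # A, maximum current considered (30 A ESCs)
I_FULL = 10.5        # A per motor at 100% throttle (paper's figure)
NUM_MOTORS = 4

worst_case_min = BATTERY_AH / ESC_LIMIT * 60     # 9.375 min at full ESC limit
usable_ah = 0.8 * BATTERY_AH                     # 80% usable -> 3.6 Ah (Eq. 20)
buffered_min = usable_ah / ESC_LIMIT * 60        # 7.5 min (Eq. 21)

i_55 = I_FULL * 0.55 ** 1.5                      # ~4.284 A per motor (Eq. 26)
total_i_55 = i_55 * NUM_MOTORS                   # ~17.136 A (Eq. 27)
cruise_min = BATTERY_AH / total_i_55 * 60        # ~15.75 min (Eq. 28)
with_rtb_min = 0.85 * cruise_min                 # ~13.39 min (Eq. 29)

print(f"{worst_case_min:.3f} {buffered_min:.1f} "
      f"{cruise_min:.2f} {with_rtb_min:.2f} min")
```

The small differences in the last decimal place relative to the paper come from it rounding 0.55^1.5 to 0.408 mid-calculation.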

    Determining the temperature rise within the battery, assuming the maximum current is drawn throughout the entire flight:

    Endurance when maximum current is drawn by the motors (t) = 562 s    (30)
    ESC limit (I) = 28.8 A    (31)
    Mass of the battery (m) = 382 g    (32)
    Specific heat at constant pressure (Cp) = 900 J/(kg·K)    (33)
    Resistance of wire (R) = 0.01 Ω    (34)

    Power, P = I²R = 28.8² × 0.01 = 8.29 W    (35)

    Therefore, at the 28.8 A ESC limit, the heat generation is 8.29 W.

    Temperature rise,
    ΔT = (P × t) / (m × Cp)    (36)
    ΔT = (8.29 × 562) / (0.382 × 900) = 13.56 K    (37)
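The temperature-rise estimate in Eqs. (30)-(37) is a lumped-capacitance calculation (all resistive heat assumed to stay in the battery); it can be checked directly:

```python
# Battery temperature rise from resistive heating, Eqs. (30)-(37):
# dT = I^2 * R * t / (m * Cp), assuming no heat escapes the battery.

I = 28.8       # A, ESC current limit
R = 0.01       # ohm, wire resistance
t = 562.0      # s, endurance at maximum current
m = 0.382      # kg, battery mass
cp = 900.0     # J/(kg*K), specific heat at constant pressure

heat_power = I ** 2 * R                 # ~8.29 W (Eq. 35)
delta_t = heat_power * t / (m * cp)     # ~13.56 K (Eq. 37)

print(f"heat = {heat_power:.2f} W, temperature rise = {delta_t:.2f} K")
```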

    Fig. 6. Box made from acrylic sheets

  4. EXPERIMENT

    1. PU Foam

      The installation of PU foam insulation in the drone involved ensuring sufficient space between the frame's two base plates for the battery, which would be completely encased in the insulating material. Five sides of the battery were enclosed, with one side left open for easy connection and removal. To prevent chemical reactions, 10 mm acrylic sheets were applied to the battery's five insulated sides to form a protective barrier. Two open-sided acrylic boxes were constructed, one fitting perfectly inside the other, and the hollow area between them was filled with foam. The foam was allowed to expand and adjust to the battery's geometry for two to three hours.

      Lightweight foam boards were used to lower the drone's weight and ensure efficient operation without further aerodynamic problems in flight tests. The PU foam was wrapped in Mylar film to limit direct air exposure and improve its ability to reflect sunlight. This encapsulation maintained the battery's ideal thermal conditions and protected the insulation's integrity. The encapsulated unit was firmly placed between the two base plates of the drone's frame. The insulated battery was then tested in various climatic conditions to evaluate its performance and its contribution to the drone's stability and endurance in various operational scenarios.

      Fig. 7. PU Foam insulation within acrylic sheets

    2. Development of AI Models

      We created a terrain classification model to identify the terrain over which the drone is flying. It was trained on the EuroSAT dataset, which contains 27,000 images of ten terrain classes: annual crop, forest, herbaceous vegetation, highway, industrial, pasture, permanent crop, residential, river, and sea/lake. 80% of the 27,000 images (21,600) are used to train the model, with the remaining 20% used for validation. We developed this model using a Convolutional Neural Network (CNN), which proved to be the most effective way to categorize terrains from aerial photography. Furthermore, compared to other models, it is easier to integrate and does not sacrifice detection accuracy.

    TABLE II. SUMMARY OF THE MODEL

    Layer (type)                   | Output Shape        | Param #
    conv2d (Conv2D)                | (None, 62, 62, 32)  | 896
    max_pooling2d (MaxPooling2D)   | (None, 31, 31, 32)  | 0
    conv2d_1 (Conv2D)              | (None, 29, 29, 64)  | 18,496
    max_pooling2d_1 (MaxPooling2D) | (None, 14, 14, 64)  | 0
    conv2d_2 (Conv2D)              | (None, 12, 12, 128) | 73,856
    max_pooling2d_2 (MaxPooling2D) | (None, 6, 6, 128)   | 0
    flatten (Flatten)              | (None, 4608)        | 0
    dense (Dense)                  | (None, 512)         | 2,359,808
    dense_1 (Dense)                | (None, 10)          | 5,130
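The Param # column in Table II can be cross-checked from the standard Conv2D and Dense parameter formulas, which also confirms the flatten size of 6 × 6 × 128 = 4608:

```python
# Cross-check of the parameter counts in Table II.
# Conv2D params = (kernel_h * kernel_w * in_channels + 1) * filters
# Dense  params = (in_features + 1) * units

def conv2d_params(kernel, in_ch, filters):
    return (kernel * kernel * in_ch + 1) * filters

def dense_params(in_features, units):
    return (in_features + 1) * units

p1 = conv2d_params(3, 3, 32)         # 896      (3x3 kernel on RGB input)
p2 = conv2d_params(3, 32, 64)        # 18,496
p3 = conv2d_params(3, 64, 128)       # 73,856
p4 = dense_params(6 * 6 * 128, 512)  # 2,359,808 (flatten -> 4608 features)
p5 = dense_params(512, 10)           # 5,130    (10 terrain classes)

total = p1 + p2 + p3 + p4 + p5
print(total)  # 2458186 trainable parameters
```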

    TABLE III. LAST SIX EPOCHS OF MODEL

    Steps   | Epoch | Total time (s) | Time (ms)/step | Accuracy | Loss   | Validation Accuracy | Validation Loss
    675/675 | 24/30 | 273            | 404            | 0.9798   | 0.0586 | 0.8475              | 0.7997
    675/675 | 25/30 | 324            | 480            | 0.9907   | 0.0258 | 0.8411              | 0.8354
    675/675 | 26/30 | 380            | 477            | 0.9849   | 0.0475 | 0.8484              | 0.7763
    675/675 | 27/30 | 330            | 400            | 0.9835   | 0.0495 | 0.8609              | 0.7555
    675/675 | 28/30 | 269            | 398            | 0.9934   | 0.0213 | 0.8506              | 0.9073
    675/675 | 29/30 | 268            | 396            | 0.9818   | 0.0549 | 0.8518              | 0.8924
    675/675 | 30/30 | 322            | 396            | 0.9881   | 0.0403 | 0.8577              | 0.9427

    Start → Load EuroSAT dataset (dataset_dir) → Create data generator (ImageDataGenerator, rescale: 1/255) → Training data generator (train_generator; subset: training; output: 21,600 images) → Validation data generator (val_generator; subset: validation; output: 5,400 images) → End

    Fig. 8. Flowchart for EuroSat Dataset
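The flow in Fig. 8 corresponds to the standard Keras ImageDataGenerator pattern. A sketch under stated assumptions: TensorFlow/Keras is available, and `dataset_dir` is a placeholder for the local EuroSAT folder (one subdirectory per class).

```python
# Sketch of the Fig. 8 pipeline: one ImageDataGenerator with rescale 1/255
# and an 80/20 validation split, yielding 21,600 training and 5,400
# validation images. The TensorFlow import is local so the file also loads
# without TF installed; dataset_dir is a placeholder path.

def make_generators(dataset_dir, img_size=(64, 64), batch_size=32):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(rescale=1 / 255.0, validation_split=0.2)
    train_generator = datagen.flow_from_directory(
        dataset_dir, target_size=img_size, batch_size=batch_size,
        class_mode="categorical", subset="training")
    val_generator = datagen.flow_from_directory(
        dataset_dir, target_size=img_size, batch_size=batch_size,
        class_mode="categorical", subset="validation")
    return train_generator, val_generator

# Split sizes implied by the 27,000-image EuroSAT dataset:
TOTAL_IMAGES = 27_000
train_count = int(TOTAL_IMAGES * 0.8)    # 21,600
val_count = TOTAL_IMAGES - train_count   # 5,400
print(train_count, val_count)
```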

    The second model was developed for change detection, to identify changes in a terrain over time or at various intervals. In this case, two photos of the same area taken at different times must be uploaded together. The model will show any changes between the two photos, as well as the addition of any new regions or objects. This is accomplished by eliminating the elements that are shared by the two photos and then emphasizing the remaining new features. In this way, it can help identify any new things within a territory. To identify changes between the two input images, this model employs Faster R-CNN (Regional-CNN).

    Anchor boxes in the Region Proposal Network (RPN) are used by Faster R-CNN to identify changes at various scales and aspect ratios. Consistency in features across inputs is guaranteed via shared convolutional layers. In intricate, multi-class change detection settings, Region of Interest (RoI) pooling improves detection precision by standardizing proposal sizes prior to classification and bounding box regression.
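The "subtract shared objects, keep the new ones" step described above can be sketched independently of the detector: given two lists of (label, box) detections, match same-class boxes by IoU and report unmatched detections from the later image. The 0.5 IoU threshold here is an illustrative assumption, not the paper's stated value.

```python
# Post-processing for change detection: objects in the later image with no
# matching (same-class, overlapping) box in the earlier image are reported
# as changes. Detector-agnostic: works on any (label, box) output such as
# Faster R-CNN's. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def new_objects(before, after, iou_thresh=0.5):
    """Detections in `after` with no same-class match in `before`."""
    changes = []
    for label, box in after:
        matched = any(lbl == label and iou(box, b) >= iou_thresh
                      for lbl, b in before)
        if not matched:
            changes.append((label, box))
    return changes

before = [("car", (10, 10, 50, 50))]
after = [("car", (12, 11, 52, 51)), ("truck", (100, 100, 160, 150))]
print(new_objects(before, after))  # only the truck is new
```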

    Start → Input layer (input shape: 64×64×3) → Convolutional layer 1 (Conv2D, 32 filters; MaxPooling2D 2×2) → Convolutional layer 2 (Conv2D, 64 filters; MaxPooling2D 2×2) → Convolutional layer 3 (Conv2D, 128 filters; MaxPooling2D 2×2) → Flatten layer → Dense layer 1 → Dense layer 2 → Compile model → End

Fig. 9. Flowchart for CNN Model

Real-time object detection is handled by the third model, a live object detection model based on SOD-YOLOv8n (Small Object Detection - You Only Look Once). It can recognize common objects such as humans, dogs, cats, and planes. The object models were trained on the DOTA dataset. As soon as an object enters the camera's field of view, the user receives an email stating that "an object is detected". A new email is generated each time another object enters the camera's field of view.

Fig. 10. Flowchart for Live detection model
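The alert behaviour described above (one email per object entering the frame) can be sketched as a small state machine. The SMTP details are omitted: `send` here is a stand-in callback for the real email step (e.g. smtplib on the ground station), which is an assumption about the implementation.

```python
# One-alert-per-new-object logic for the live detection model: an email is
# triggered whenever a class appears that was not in the previous frame.
# `send` stands in for the actual SMTP call used by the application.

class DetectionAlerter:
    def __init__(self, send):
        self.send = send       # callable taking a message string
        self.present = set()   # classes visible in the previous frame

    def update(self, detected_classes):
        current = set(detected_classes)
        for cls in sorted(current - self.present):  # newly entered objects
            self.send(f"An object is detected: {cls}")
        self.present = current

sent = []
alerter = DetectionAlerter(sent.append)
alerter.update(["person"])           # first object -> one email
alerter.update(["person"])           # still there  -> no new email
alerter.update(["person", "dog"])    # dog enters   -> one more email
print(sent)
```

Tracking the previous frame's classes is what prevents a stationary object from flooding the inbox with duplicate alerts.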

CNN and OBB-YOLO11m (Oriented Bounding Boxes YOLO) are combined to create the final model. It first identifies the terrain and then identifies every object present. To achieve the intended goal, all these models are deployed on a website developed using Streamlit. All the models are accessible, and photographs can be uploaded. Additionally, the website has tools for plotting a graph of confidence scores for the terrains used to build the model.

Fig. 11. Flowchart for Live detection model

  5. RESULTS AND DISCUSSION

    The drone tests carried out in various regions were successful. Over its multiple flights, the drone was subjected to gusts of wind, extreme sunlight, and cold regions. It performed well in all those regions and served its purpose.

    1. Drone Performance

      When approximately 55% throttle was applied constantly, the drone's endurance turned out to be exactly 13 minutes. Note that RTL (Return to Launch) was enabled: when the battery level drops to 15%, the receiver no longer obeys the transmitter's signals and, with the help of GPS, determines its launch position. As soon as the coordinates are noted, the drone automatically returns to its launch position. The 15% reserve keeps the battery secure while leaving just enough power to cover the distance back to the launch point. Hence, the endurance lies in the range of 7.5 to 13.38 minutes; the experimental flight gave an endurance of 13 minutes, which is within the calculated range.

    2. Model Performance


With the first web application, we achieved 98.81% accuracy for the terrain classification model. Compared with other models trained on the same dataset, our model achieved 1.81% higher accuracy than ResNet-50 and MobileNetV2, and our validation accuracy is 2.7% higher than that of ResNet-50 and MobileNetV2, as shown in Figure 12.

Fig. 12. Comparison of models with same dataset

The main advantages of the proposed model compared to other models are:

  • Our model has increased depth and more parameters. It has three convolutional layers with increasing filters (32, 64, 128). This deeper architecture allows better feature extraction, which improves classification accuracy: many terrain classification models use two convolutional layers, but adding a third improves feature detection for complex terrain patterns.

  • An optimized pooling strategy has been used, where max pooling (2×2) after every convolutional layer helps retain important features while reducing computational load. It prevents overfitting and helps the model generalize better to unseen data. Pooling ensures that small variations do not degrade classification performance.

    • Our model has a high-dimensional fully connected layer: a dense (512) layer before the output allows for better feature abstraction. Since many standard CNNs use 256 or fewer neurons, increasing the dense layer size to 512 can enhance model expressiveness and classification by making the feature representation more detailed.

    • The Adam optimizer is used for faster convergence. Adam adapts the learning rate dynamically, making training faster and more stable than standard SGD (Stochastic Gradient Descent). It works well for non-uniform datasets, making it ideal for remote sensing imagery, and it helps avoid local minima and speeds up convergence, which is critical for large terrain datasets.

    • We increased the training epochs for our model: it was trained for 30 epochs, which balances high accuracy with generalization. Too few epochs would lead to underfitting, while too many would cause overfitting.

    • We have trained our model with the EuroSAT Dataset, which has 27,000 images with 10 different types of terrains (like forest, pastures, residential, highway, etc). Many models use smaller datasets, but our model leverages a larger and balanced dataset, making classification more reliable because a large and diverse dataset improves model accuracy, ensuring real-world usability.

  1. Illustration 1 (Terrain Classification Model)

    Fig. 13. Web application for Illustration 1

    The sub-application for the terrain classification model successfully classified objects detected in multiple images and produced a summary; it also generates a CSV file to store the data. A random image from Google Earth was uploaded to our web application, which successfully classified it as residential, as depicted in Figure 16.
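Writing the detection summary to a CSV file, as the sub-application does, only needs the standard csv module. This is a minimal sketch: the column names and sample rows are illustrative, not the application's actual schema.

```python
# Write per-image detection results to a CSV summary, as the terrain
# classification sub-application does. Column names and rows here are
# illustrative placeholders.
import csv
import io

detections = [
    {"image": "img_001.jpg", "object": "ship", "confidence": 0.91},
    {"image": "img_001.jpg", "object": "harbor", "confidence": 0.78},
    {"image": "img_002.jpg", "object": "ship", "confidence": 0.85},
]

buf = io.StringIO()  # swap for open("detections.csv", "w", newline="")
writer = csv.DictWriter(buf, fieldnames=["image", "object", "confidence"])
writer.writeheader()
writer.writerows(detections)
print(buf.getvalue().splitlines()[0])  # header row: image,object,confidence
```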

    Fig. 14. Web application for object detection within the terrain

    Fig. 15. Output Image

    Fig. 16. Summary of Detections

    Fig. 17. Detection frequency of objects

    Fig. 18. Pie chart depicting the proportion of detected objects

    Fig. 19. Heat map of confidence scores

    Fig. 20. Confidence scores distribution

    Figure 17 shows that ships were the most common objects, with over 250 detections. The Y-axis represents the images uploaded, and the X-axis shows the confidence scores of objects detected in Figure 16. One of the three output images can be seen in Figure 15. Three images were uploaded, in which 376 objects in total were detected, of which ships were the most common. Figure 18 represents the proportion of objects detected.

    Confidence scores coupled with heuristic data are used to calculate terrain analysis metrics such as drone flight suitability, expected image quality, urban density, risk factor, and a recommendation, as depicted in Figure 21. Furthermore, the application generates a plot specifying visibility, accessibility, flatness, signal, and obstacles, as shown in Figure 22.

    Fig. 21. Terrain analysis metrics

    Fig. 22. Plot of Radar Chart
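A heuristic of the kind described, mapping detection confidences plus simple rules onto terrain analysis metrics, might look like the following. The labels, weights, and thresholds are invented for illustration; the paper does not publish the web application's actual heuristics.

```python
# Illustrative heuristic for terrain analysis metrics: combine the mean
# detection confidence with a simple urban-density rule. All weights and
# thresholds are invented, not the web application's actual values.

def terrain_metrics(detections):
    """detections: list of (label, confidence) pairs for one terrain image."""
    if not detections:
        return {"flight_suitability": "unknown", "urban_density": 0.0}
    mean_conf = sum(c for _, c in detections) / len(detections)
    urban_labels = {"building", "vehicle", "ship", "harbor"}
    urban = sum(1 for lbl, _ in detections if lbl in urban_labels)
    density = urban / len(detections)
    suitability = "good" if mean_conf > 0.7 and density < 0.5 else "caution"
    return {"flight_suitability": suitability,
            "urban_density": round(density, 2)}

result = terrain_metrics([("tree", 0.9), ("building", 0.8), ("tree", 0.75)])
print(result)
```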

  2. Illustration 2 (Change Detection Model)

    For the change detection application using Faster R-CNN, we successfully captured images from our drone and processed them to obtain the output.

    Fig. 23. Web application for Change Detection

    Fig. 24. Objects detected in first image uploaded

    Fig. 26.

    Fig. 25. Objects detected in second image uploaded

    Fig. 27. New object detected by the model

    Figure 24 (first image uploaded) displays all the objects detected by the change detection model. Figure 25 (second image uploaded) displays the objects detected by the same model. All objects common to both images were then subtracted by the model to detect the new object present. Further, Figure 26 shows a composite image illustrating the new object.

    Fig. 28. Summary of Change Classification

    A summary is generated for the type and number of objects detected by the model. This is done with the use of object tracking as shown in figure 28.

  3. Illustration 3 (Live Object Detection Model)

In our study, SOD-YOLOv8n greatly improved object identification accuracy when incorporated into a drone-based AI system for real-time monitoring. The drone's onboard camera provided live video feeds that the model processed effectively, accurately detecting and categorizing objects. Every time an object was discovered, it triggered an automated email alert within 3.76 seconds, ensuring quick reaction with low latency. This real-time capability is very useful for applications like environmental analysis, security monitoring, and disaster response because it improves situational awareness. Its potential to enhance UAV-based aerial surveillance and decision-making systems is demonstrated by its successful deployment in our drone-assisted terrain mapping project.

Fig. 29. Web application for live detection of objects

Fig. 30. Object detection in the real-time feed

Figure 29 shows how we can select the mode, the preferred objects for detection, and the confidence threshold, and Figure 30 shows how an object is detected in the real-time feed: a person is detected and the confidence score is displayed. Following the detection, a mail was also sent after 3.76 seconds.

  6. CONCLUSION

This study effectively integrates AI-driven terrain mapping and classification into a drone-based system. We outperformed traditional models, such as ResNet-50 and MobileNetV2, by using a convolutional neural network (CNN) for terrain classification, achieving a high training accuracy of 98.81% and a validation accuracy of 87.48%. By detecting changes in the terrain over time, the Faster R-CNN-based change detection model further improves environmental monitoring capabilities. Furthermore, situational awareness is greatly enhanced by combining an automated email alert system with SOD-YOLOv8n for real-time object identification.

Through rigorous component selection, the drone's performance was maximized, guaranteeing stability in a variety of environmental circumstances. By successfully preserving battery thermal stability, the polyurethane (PU) foam insulation supported an operating endurance of up to 13 minutes. Field testing conducted in a variety of terrains, including high-altitude and disaster-prone zones, confirmed our UAV system's resilience in practical applications.

This study demonstrates how AI-powered UAVs might be used for disaster relief, environmental monitoring, and remote sensing. Future research might concentrate on developing transformer-based architectures for better terrain classification models, increasing real-time data analytics for more independent decision-making, and boosting endurance through sophisticated battery technologies. Drone-assisted AI systems will continue to revolutionize security, environmental preservation, and aerial surveying with further developments.

REFERENCES

  1. Amjoud, A. B., & Amrouch, M. Object Detection Using Deep Learning CNNs and Vision Transformers: A Review. IEEE Access (2023);11; 35479-35516

  2. Anjali Goswami, Deepak Sharma. Change Detection in Remote Sensing Image Data Comparing Algebraic and Machine Learning Methods.

    Electronics (2022);11(3);431

  3. Arundel, S. T., Wenwen Li, et.al. GeoNat v1.0: A Dataset for Natural Feature Mapping with Artificial Intelligence and Supervised Learning.

    Transactions in GIS (2020);24;556-572

  4. Bohayra Mortazavi, Hongliu Yang, et.al. Graphene or h-BN Paraffin Composite Structures for the Thermal Management of Li-ion Batteries: A Multiscale Investigation". Applied energy(2017);vol 202;323-334

  5. Boon M. A., Drijfhout, A. P., et.al. Comparison of a Fixed-Wing and Multi-Rotor UAV for Environmental Mapping Applications: A Case Study. The International Archives of the Photogrammetry, Remote Sensing, and Spatial Information Sciences (2017); XLII-2/W6;47-54

  6. Boshra Khalili, Andrew W. Smyth. SOD-YOLOv8Enhancing YOLOv8 for Small Object Detection in Aerial Imagery and Traffic Scenes. Sensors (2024); 24(19); 6209

  7. Chandra Has Singh, Vishal Mishra, et.al. High-Resolution Mapping of Forested Hills Using Real-Time UAV Terrain Following. ISPRS Annals of the Photogrammetry, Remote Sensing, and Spatial Information Sciences. (2023); X-1/W1;665-671

  8. Dante Tezza, Marvin Andujar. The State-of-the-Art of Human-Drone Interaction: A Survey. IEEE Access (2019);7 ;167438 167454.

  9. Di Meng, Jingwen Weng, et.al. Experimental Investigation on Thermal Runaway of Lithium-Ion Batteries under Low Pressure and Low Temperature. Batteries (2024);10;243.

  10. Diwan, T., Anirudh, G., et.al. Object Detection Using YOLO: Challenges, Architectural Successors, Datasets, and Applications.

    Multimedia Tools and Applications (2022);82; 9243-9275

  11. Mohankumar Subramanian et.al. Numerical and Experimental Investigation on Enhancing Thermal Conductivity of Paraffin Wax with Expanded Graphene in Battery Thermal Management System. International Journal of Environmental Research (2022); Vol 16(13)

  12. Michael Happold, Mark Ollis, et.al. Enhancing Supervised Terrain Classification with Predictive Unsupervised Learning. Applied Perception Inc., Pittsburgh, PA (2006);2; 006.

  13. LongYan Xu, YiFan Zhao, et.al. Small Object Detection in UAV Images Based on YOLOv8n. International Journal of Computational Intelligence Systems (2024); vol 17(223)

  14. Kanhaiya powar, Sharad patil. Experimental investigation on thermal performance of battery using composite phase change material. Multiscale and Multidisciplinary Modeling, Experiments and Design (2024); vol 8(54)

  15. Jiajun Zhao,Yin Chen, et.al. A Novel Paraffin Wax/Expanded Graphite/Bacterial Cellulose Powder Phase Change Materials for the Dependable Battery Safety Management. Batteries (2024); 10(10); 363

  16. Nawaz, S. A., Jingbing Li, et.al. AI-Based Object Detection: Latest Trends in Remote Sensing, Multimedia, and Agriculture Applications.

    Frontiers in Plant Science. (2022); 13;1041514

  17. Ting Quan, Qi Xia, et.al. Recent Development of Thermal Insulating Materials for Li-Ion Batteries. Energies (2024);17(17);4412

  18. Tingyu Lu, Luhe Wan, et.al. Land Cover Classification of UAV Remote Sensing Based on TransformerCNN Hybrid Architecture. Sensors (2023);11;5288.

  19. Yoshinari Makimura, Chikaaki Okuda, et.al. Three-dimensional electrode characteristics and size/shape flexibility of coaxial fibers bundled battery. Energy and Environmetal Science. (2024); 17; 2864- 2878

  20. Yu Liu. An Improved Faster R-CNN for Object Detection. 11th International Symposium on Computational Intelligence and Design (ISCID). (2018).

  21. Zheng, J. Y., Hao, Y. Y, et.al. Coastal Wetland Vegetation Classification Using Pixel-Based, Object-Based, and Deep Learning Methods Based on RGB-UAV. Land (2022);11;203

  22. Aiga Ivdre, Arnis Abolins,et.al. Rigid Polyurethane Foams as Thermal Insulation Material from Novel Suberinic Acid-Based Polyols. Polymers (2023);15(14); 3124.

  23. Junsu Chae, Seong-Bae Min, et.al. Thermal insulation properties of a rigid polyurethane foam synthesized via emulsion-template. Macromolecular Research(2024); 33; 225-233.

  24. Katarzyna Uram, Aleksander Prociak, et.al. Natural Oil-Based Rigid Polyurethane Foam Thermal Insulation Applicable at Cryogenic Temperatures. Polymers (2021); 13(24); 4276.

  25. Bokai Liu, Yizheng Wang, et.al. Multi-scale modeling in thermal conductivity of Polyurethane incorporated with Phase Change Materials using Physics-Informed Neural Networks. Renewable Energy(2024); 220; 119565.

  26. Todorka Samardzioska, Milica Jovanoska-Mitrevska. Recycled Rebonded Polyurethane Foam as Sustainable Thermal Insulation. Current Trends in Civil & Structural Engineering(2023);9(4).