Automating Enemy Troop Detection using Image Processing and Machine Learning

Engineering Research Journal, June 2019

Arjun Suvarna1, Akash Shetty D2, Brijesh Kumar Patel3, Chaman K4, Mrs. Ronnie Merin George5

    1-4. Students, Computer Science & Engineering, Sahyadri College of Engineering & Management,

      Adyar, Mangaluru, Karnataka, India

    5. Assistant Professor, Computer Science & Engineering, Sahyadri College of Engineering & Management, Adyar, Mangaluru, Karnataka, India

      Abstract — Border security is largely a manual process: border guards patrol long stretches of terrain trying to observe enemy activity or attempts at illegal immigration. This approach leaves many loopholes, as borders are generally very long and manual observation can be inconsistent. There is therefore a need to automate the monitoring of some parts of the border, especially terrain that is hard to patrol on foot. This paper aims to automate much of that manual labor by processing images of the border captured by drones or stationary thermal cameras to autonomously detect threats or illegal immigration. The system uses prior knowledge of threats to alert the border agency with an appropriate threat level, determined by the severity of the threat. Potential attacks or insurgency by enemy troops trigger the highest threat level, while activities such as smuggling and illegal immigration trigger lower threat levels. The border agency can decide what kind of response to mount based on the issued threat level; this kind of operational intelligence enables effective use of resources. Through prior training, the system can distinguish between normal day-to-day activity in the border zone and potentially harmful activity. Activities such as farmers moving about the area or animals crossing are ignored or simply recorded in an automatic log, whereas activities such as groups of people moving through the border or a buildup of military equipment near the line of control are marked and flagged. The system provides a greater degree of flexibility to the organization deploying it and leads to efficient use of real-time intelligence.

      Keywords — Scene segmentation, classifier, image processing, gray threshold, neural network.

      1. INTRODUCTION

        Segmentation in computer vision refers to dividing an image into multiple segments: each pixel is assigned a label such that pixels with the same label share common characteristics. The result of the image segmentation process is a group of segments that together make up the entire image, with each segment representing an object within the image.

        Classification in machine learning and statistics refers to predicting the category to which a new observation belongs, based on prior training on observations whose categories are known.
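As an illustration of classification from prior training (a minimal sketch, not the classifier used in this paper), a nearest-centroid rule assigns a new sample the label of the closest class mean:

```python
import numpy as np

# Toy training data: two labeled classes in a 2-D feature space.
train = {
    "no_threat": np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]),
    "threat":    np.array([[0.9, 0.8], [1.0, 0.9], [0.8, 1.0]]),
}

# "Training": compute one centroid (mean feature vector) per class.
centroids = {label: pts.mean(axis=0) for label, pts in train.items()}

def classify(sample):
    """Assign the label of the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))

print(classify(np.array([0.85, 0.95])))  # a point near the "threat" cluster
```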


      2. RELATED WORK

        Several works have been carried out in the area of infrared target detection.

        A histogram based clustering method was proposed which categorizes image pixels into background and foreground pixels [1].

        Concept of change detection is used where the location of objects is mapped with time to identify the change over a given period of time [2].
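Change detection of this kind can be sketched (an illustration only, far simpler than the method in [2]) by differencing two co-registered frames and thresholding the result:

```python
import numpy as np

# Two co-registered grayscale frames of the same scene at different times.
frame_t0 = np.zeros((5, 5), dtype=np.uint8)
frame_t1 = frame_t0.copy()
frame_t1[2, 3] = 200          # an object has appeared between the frames

# Pixels whose intensity changed by more than a threshold are "change" pixels.
diff = np.abs(frame_t1.astype(int) - frame_t0.astype(int))
change_mask = diff > 50

print(np.argwhere(change_mask))  # location(s) of change, here [[2 3]]
```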

        Image fusion combines the required information from several images of the same scene into a single image that can be perceived by both machines and humans, and that can also be used for further image processing. The resulting image preserves the required spatial data so that the target objects can be identified properly [3].
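A simple illustration of pixel-level fusion is a weighted average of the two modalities (a toy sketch, much simpler than the fusion scheme of [3]):

```python
import numpy as np

# Visible and infrared images of the same scene, normalized to [0, 1].
visible  = np.full((4, 4), 0.2)
infrared = np.zeros((4, 4))
infrared[1, 1] = 1.0          # a warm target visible only in the IR band

# Pixel-wise weighted average keeps spatial detail from both modalities.
alpha = 0.5
fused = alpha * visible + (1 - alpha) * infrared

print(fused[1, 1])  # the IR hot spot survives fusion: 0.6
```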

        After background suppression the variance weighted information entropy and region growing method is used for segmenting the target object [4].

        This method applies Gaussian filtering, total variation minimization, neighborhood filtering, and the NL-means algorithm, followed by a consistency step, to give the final result [5].


      3. PROPOSED SYSTEM

        Fig. 1 shows the dataflow diagram and Fig. 2 shows the system architecture of the proposed system.

        1. Segmentation

          Here we use Otsu's method [1] to perform clustering-based image thresholding. The algorithm considers two classes of pixels, foreground and background, and calculates the optimum threshold value that separates them: the threshold for which the combined spread (intra-class variance) of the foreground and background is minimum. Using this global threshold, the grayscale image is converted into a binary image, with each pixel assigned to one of the two classes. The original images are saved for reference in the last stage in case a threat is detected.
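A compact NumPy sketch of Otsu's threshold selection: maximizing the between-class variance, which is equivalent to minimizing the combined within-class spread described above.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance for a
    uint8 grayscale image (equivalently, minimizing the combined
    foreground/background spread)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# A toy image with a dark background and a bright "target" region.
img = np.full((10, 10), 30, dtype=np.uint8)
img[3:6, 3:6] = 220
t = otsu_threshold(img)
binary = img > t          # binary foreground mask
```

In practice a library implementation (e.g. the Otsu flag of an image-processing toolkit) would be used; the loop above only makes the criterion explicit.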

        2. Classification

          The algorithm runs on the TensorFlow platform and is based on the transfer learning methodology. The classifier has been pre-trained to perform basic classification using a supervised learning methodology, and is then retrained on the border threat dataset to classify the various levels of threat. The trained algorithm is able to classify new images with an accuracy that depends on the size and quality of the training dataset.

          The images received after segmentation are supplied to the system for classification. The system is pre-trained using the classification algorithm and classifies each test image into one of four threat categories, viz., illegal immigration, smuggling, enemy invasion, or no threat. The algorithm is scheduled to retrain the system at periodic intervals so that false-positive classifications can be minimized. A log is maintained to monitor the classification process.
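A hedged sketch of such a transfer-learning classifier in TensorFlow/Keras (the model choice, input size, and layers are assumptions for illustration, not details from the paper): a pre-trained convolutional base is frozen and a new four-way softmax head is trained on the border-threat images.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 4  # illegal immigration, smuggling, enemy invasion, no threat

# Frozen convolutional base (weights=None so the sketch runs offline;
# in practice weights="imagenet" would load the pre-trained features).
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False

# New classification head, trained on the border threat dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# One segmented image in, a probability per threat category out.
probs = model(np.zeros((1, 96, 96, 3), dtype="float32")).numpy()
print(probs.shape)  # (1, 4)
```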

        3. Alert System

        The alert system generates a report based on the classification result, which users can access through the front-end interface. If an image is classified as a false positive, the admin can flag it and manually select the threat level to which it belongs, so that the classifier is retrained with that feedback taken into consideration.
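The flag-and-retrain feedback loop can be sketched as follows (the function and field names are illustrative, not from the paper):

```python
# Minimal sketch of the false-positive feedback loop: flagged images are
# relabeled by the admin and queued for the next retraining cycle.
classification_log = []   # every automatic classification is recorded
retrain_queue = []        # admin-corrected samples for periodic retraining

def record(image_id, predicted_level):
    """Log the classifier's output for an image."""
    classification_log.append({"image": image_id, "level": predicted_level})

def flag_false_positive(image_id, correct_level):
    """Admin overrides a wrong prediction; the corrected label is fed
    back into the next scheduled retraining run."""
    for entry in classification_log:
        if entry["image"] == image_id:
            entry["level"] = correct_level
            retrain_queue.append((image_id, correct_level))

record("img_001", "enemy_invasion")           # classifier's (wrong) output
flag_false_positive("img_001", "no_threat")   # admin correction
print(retrain_queue)  # [('img_001', 'no_threat')]
```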

        Fig. 1. Dataflow diagram

        Fig. 2. System architecture

      4. RESULTS

        The segmentation process improves classification efficiency by retaining only the required region and removing background noise. The classification phase identifies the threat level of the detected activity. Manual flagging of incorrect classifications, followed by retraining, improves future classifications.


      5. CONCLUSION

        The proposed system addresses the problem of possible human error on the nation's borders. It does not replace soldiers in the border regions, but it provides extra insight about detected threats, which may help in making better decisions for neutralizing them.


      ACKNOWLEDGMENT

We thank our guide, coordinators and our college for providing full support.


      REFERENCES

  1. N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, Jan. 1979.

  2. A. J. Reno and D. B. David, "An application of image change detection-urbanization," 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015], Nagercoil, 2015, pp. 1-6.

  3. Z.-J. Feng, X.-L. Zhang, L.-Y. Yuan, and J.-N. Wang, "Infrared Target Detection and Location for Visual Surveillance Using Fusion Scheme of Visible and Infrared Images," Mathematical Problems in Engineering, vol. 2013, Article ID 720979, 7 pages, 2013.

  4. Y. Li, S. Liang, B. Bai, and D. Feng, "Detecting and tracking dim small targets in infrared image sequences under complex backgrounds," Multimedia Tools and Applications, vol. 71, no. 3, pp. 1179-1199, 2014.

  5. R. Wu, D. Yu, J. Liu, H. Wu, W. Chen and Q. Gu, "An improved fusion method for infrared and low-light level visible image," 2017 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, 2017, pp. 147-151.
