DOI : https://doi.org/10.5281/zenodo.19287533
- Open Access

- Authors : Dr. B. Lakshmi Narayan Reddy, Dr. C Raju, Munnelli Sreehari, Chandragiri Saranya, Murakambattu Hemasai, Mankar Prashamsha, Mude Vamsi Vardhan Naik
- Paper ID : IJERTV15IS030979
- Volume & Issue : Volume 15, Issue 03, March 2026
- Published (First Online): 28-03-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Inception V3-Based Framework for Detection and Categorization of Gastrointestinal Parasites in Caprine Animals
Dr. B. Lakshmi Narayan Reddy
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Chandragiri Saranya
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Dr. C Raju
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Murakambattu Hemasai
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Munnelli Sreehari
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Mankar Prashamsha
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Mude Vamsi Vardhan Naik
Department of ECE, Sri Venkateswara College of Engineering (Autonomous), Tirupati, A.P., India
Abstract – Gastrointestinal parasitic infections are a serious disease burden in caprine animals, reducing productivity and causing substantial economic losses on farms. Diagnosis has traditionally relied on manual microscopic analysis, which is time-consuming and depends on expert knowledge. To address these difficulties, this paper presents a framework for the automatic detection and classification of gastrointestinal parasites from microscopic images using deep learning. The framework employs Inception V3, a pre-trained convolutional neural network, with transfer learning to extract the discriminative features needed to categorize the detected parasite species accurately. Images are resized, converted to RGB, and normalized before being fed into the model. The dataset is split into training, validation, and testing subsets to provide a reliable and unbiased assessment of the model's performance. Experimental results show that the proposed model achieves high classification accuracy with low inference time. Furthermore, the trained model is embedded in a web application built with Streamlit, enabling real-time parasite prediction. The developed system offers an efficient and easy-to-use solution for automated veterinary diagnosis.
Keywords: Gastrointestinal parasites, Caprine animals, Deep learning, Inception V3, Transfer learning, Image classification, Streamlit.
- INTRODUCTION
Microscopic image analysis is a core component of contemporary laboratory-based parasite identification and diagnosis. Conventional microscopic analysis relies on skilled personnel and involves extensive visual inspection of parasite structure, with categorization based on morphological features. Although it remains the preferred and most widely used method, it is time-consuming, and results can vary from one observer to another. Manual separation of parasites is particularly difficult when two or more parasite classes share similar visual patterns, owing to subtle variations in shape, texture, and internal structure. In recent years, deep learning algorithms have contributed immensely to image-based classification in medical and biological settings [1], [2].
Convolutional Neural Networks (CNNs) have demonstrated an exceptional ability to automatically extract hierarchical, discriminative features directly from image data without relying on handcrafted features. Unlike conventional image processing techniques, CNNs learn intricate spatial structures during training, which yields improved robustness and generalization performance.
Several studies have been done regarding the application of deep learning in microscopy-based diagnostics. For instance, Quinn et al. [3] demonstrated the successful application of convolutional neural networks (CNNs) in the classification of microscopy images for use in point-of-care diagnostic systems.
Similarly, Parra et al. [4] used convolutional neural networks for automatic recognition of intestinal parasites and showed improved classification accuracy compared to classical feature-based approaches. These studies underscore the growing relevance of deep learning frameworks in automated biological image analysis. Among the various CNN architectures, the Inception family of networks has received considerable attention for its optimized convolutional design and efficient feature extraction capabilities.
The Inception V3 architecture, introduced by Szegedy et al. [5], combines multi-scale convolution operations within a single module, allowing the network to extract both fine-grained and global image features. This architectural efficiency makes it especially appropriate for classification tasks involving subtle structural variations, such as differentiating microscopic parasites. In this study, a deep learning framework based on Inception V3 is proposed for the detection and categorization of eight classes of parasitic worms (Ascaris lumbricoides, Capillaria philippinensis, Enterobius vermicularis, Fasciolopsis buski, hookworm egg, Hymenolepis diminuta, Hymenolepis nana, and Opisthorchis viverrini). These classes were chosen to test the model's ability to identify morphologically similar parasitic species under varying visual conditions.
The approach is as follows: a pre-trained Inception V3 network is fine-tuned through transfer learning to adapt to the parasite image dataset and train more efficiently. Unlike object detection models such as YOLO and SSD, which require bounding box annotations and add computational complexity [10], [11], the present work adopts a classification-based approach. Since each microscopic image is dominated by a single parasite structure, image-level categorization is a computationally efficient alternative that does not sacrifice the model's predictive power. Finally, the trained model is embedded in a Streamlit web application that supports image upload and real-time parasite prediction. The major contribution of this work is a practical, classification-oriented deep learning methodology for microscopic parasite images, combined with a deployable user interface.
- RELATED WORK
Automated image analysis has attracted considerable interest in medical and biological diagnostics in recent years. Traditional microscopic identification of parasites relies on manual observation, which is time-consuming and requires domain expertise. To circumvent these shortcomings, researchers have investigated computational methods for microscopic image classification.
As deep learning has advanced, Convolutional Neural Networks (CNNs) have become common in image-based diagnosis. Quinn et al. [3] demonstrated the ability of these networks to learn discriminative features from raw image data. Similarly, Parra et al. [4] used CNN models for the automatic detection of intestinal parasites and reported higher accuracy than conventional image processing methods. YOLO and SSD are object detectors that can also be applied to biological image analysis [10], [11].
These models can detect and localize multiple objects in a single image. However, detection-based models require bounding box annotations and more computational resources, whereas for a classification task in which a single parasite type dominates most of the image, image-level classification models offer a more straightforward and computationally efficient solution.
Transfer learning has advanced deep learning by enabling pre-trained models to be adapted to new datasets for domain-specific classification. The Inception V3 architecture, proposed by Szegedy et al. [5], has demonstrated strong performance in large-scale image classification along with effective feature extraction. Although deep learning methods have been applied to parasite detection, few studies have combined classification-based methods for images of caprine gastrointestinal parasites with real-world deployment. Therefore, this research uses the Inception V3 model to develop an effective and practical parasite classification system.
- PROPOSED METHODOLOGY
In this study, a framework for the detection and classification of gastrointestinal parasites in caprine animals is introduced using a deep learning image classification pipeline. The overall architecture of the proposed system comprises data preparation, model training, parasite classification, and real-time deployment. The proposed Inception V3-based parasite detection model is shown in Figure 1.
- Prediction and Classification of Parasites
After training, the model is tested on unseen images. For each input image, the trained model predicts the parasite class together with a confidence score obtained from the output of the Softmax layer; this confidence score indicates the reliability of the prediction.
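The prediction step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the `classify` helper and the example logit values are assumptions, with only the eight class names taken from the paper. The softmax converts raw class scores into a probability distribution whose maximum serves as the confidence score.

```python
import numpy as np

# The eight parasite classes considered in this study.
CLASS_NAMES = [
    "Ascaris lumbricoides", "Capillaria philippinensis",
    "Enterobius vermicularis", "Fasciolopsis buski",
    "Hookworm egg", "Hymenolepis diminuta",
    "Hymenolepis nana", "Opisthorchis viverrini",
]

def softmax(logits):
    """Convert raw logits into a probability distribution (numerically stable)."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def classify(logits):
    """Return (predicted class name, confidence score) for one image's logits."""
    probs = softmax(np.asarray(logits, dtype=float))
    idx = int(np.argmax(probs))
    return CLASS_NAMES[idx], float(probs[idx])

# Hypothetical logits for one image; the second class dominates.
label, confidence = classify([0.2, 4.1, 0.5, 0.3, 0.1, 0.4, 1.2, 0.2])
```

In a real deployment the logits would come from the final layer of the trained network rather than being written by hand.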
- Model Evaluation
The efficiency of the proposed system is measured using common classification metrics such as accuracy, precision, recall, and detection speed. These indicators allow a detailed assessment of the effectiveness and efficiency of the proposed strategy.
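As one way to compute these metrics, the sketch below derives accuracy and macro-averaged precision and recall directly from lists of true and predicted class indices. The helper name and the example labels are illustrative, not results from the paper.

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes=8):
    """Accuracy plus macro-averaged precision and recall from label indices."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    accuracy = float((y_true == y_pred).mean())
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))   # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))   # false positives
        fn = np.sum((y_pred != c) & (y_true == c))   # false negatives
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return accuracy, float(np.mean(precisions)), float(np.mean(recalls))
```

Detection speed is simply wall-clock time per image and can be measured with `time.perf_counter()` around the prediction call.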
- Deployment Using Streamlit
Figure 1: Architecture of the proposed Inception V3-based system for detecting gastrointestinal parasites.
- Dataset Preparation
The dataset contains microscopic images of gastrointestinal parasites in caprine animals and covers several parasite categories to enable effective classification. The collected images are divided into training, validation, and testing sets to assess how well the proposed model generalizes.
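The split itself can be done as in the sketch below. The 70/15/15 proportions and the fixed seed are assumptions for illustration, since the paper does not state exact ratios.

```python
import random

def split_dataset(paths, train=0.70, val=0.15, seed=42):
    """Shuffle image paths and split them into train/validation/test subsets."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)        # reproducible shuffle
    n = len(paths)
    n_train = round(n * train)
    n_val = round(n * val)
    return (paths[:n_train],                  # training set
            paths[n_train:n_train + n_val],   # validation set
            paths[n_train + n_val:])          # test set (remainder)
```

In practice the split should be stratified per parasite class so that each subset contains every category.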
- Data Annotation and Preprocessing
All parasite images are manually annotated and assigned to their respective classes. Preprocessing is then performed so the images fit the Inception V3 input format: all images are resized to 299 × 299 pixels, converted to RGB, and their pixel values normalized. These preprocessing steps improve training stability and feature learning.
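These steps can be sketched with Pillow and NumPy. Scaling pixel values to the [-1, 1] range follows the convention of Keras's Inception V3 `preprocess_input`; that specific range is an assumption here, since the paper only states that the pixels are normalized.

```python
import numpy as np
from PIL import Image

def preprocess_image(path_or_img, size=(299, 299)):
    """Resize to 299x299, force RGB, and scale pixel values to [-1, 1]."""
    img = Image.open(path_or_img) if isinstance(path_or_img, str) else path_or_img
    img = img.convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32)
    arr = arr / 127.5 - 1.0          # map [0, 255] -> [-1, 1]
    return arr[np.newaxis, ...]      # add batch dimension: (1, 299, 299, 3)
```

The batch dimension is added because CNN frameworks expect input tensors of shape (batch, height, width, channels) even for a single image.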
- Inception V3-Based System Architecture
The proposed system uses the Inception V3 convolutional neural network pre-trained on ImageNet and adapts it through transfer learning. The pre-trained layers perform feature extraction, while the final classification layers are resized to match the number of parasite classes. This approach is well suited to training on smaller datasets and improves performance.
- Model Training
The modified Inception V3 model is then trained on the preprocessed dataset using appropriate hyperparameters such as the number of epochs, batch size, and learning rate. During training, the model learns discriminative properties that differentiate the parasite classes, and validation data is used to track performance and avoid overfitting.
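A minimal Keras sketch of this transfer learning setup is shown below, using the epoch count (20) and batch size (16) reported in the performance table. The dataset directory name, learning rate, dropout rate, and choice of frozen layers are assumptions, not details given in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_classes=8, weights="imagenet"):
    """Inception V3 backbone with a new softmax head for the parasite classes."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=weights, input_shape=(299, 299, 3))
    base.trainable = False                       # freeze pre-trained feature layers
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),                     # assumed regularization
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # assumed learning rate
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training as described above (the dataset path is hypothetical):
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "parasite_images/train", image_size=(299, 299), batch_size=16,
#     label_mode="categorical")
# val_ds = tf.keras.utils.image_dataset_from_directory(
#     "parasite_images/val", image_size=(299, 299), batch_size=16,
#     label_mode="categorical")
# model = build_model()
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

Freezing the backbone and training only the new head is one common transfer learning recipe; unfreezing the top Inception blocks for a second fine-tuning pass is a typical refinement.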
To demonstrate the practical applicability of the trained model, it is embedded in a web application built with Streamlit. Through the application, a user can upload a parasite image and immediately receive the predicted class and confidence score, making parasite identification intuitive and interactive.
- EXPERIMENTAL FINDINGS AND DISCUSSION
This section presents the experimental evaluation of the proposed Inception V3-based gastrointestinal parasite detection system. Microscopic parasite images, organized into training, validation, and testing sets, are used to evaluate model performance. The effectiveness and efficiency of the proposed approach are measured with standard classification metrics such as accuracy, precision, recall, and detection speed. In addition, real-time prediction is demonstrated through the Streamlit-based web application. The experimental results support the reliability and practical applicability of the proposed system.
Figure 2: Training and validation accuracy
The training and validation performance curves show stable convergence with little divergence, indicating that overfitting is limited and generalization capability is good.
TABLE I
Performance parameters of the proposed Inception V3-based parasite detection system.

| Parameter | Value / Description |
|---|---|
| Model Architecture | Inception V3 |
| Learning Approach | Transfer Learning |
| Input Image Size | 299 × 299 |
| Number of Classes | 8 Parasite Classes |
| Training Epochs | 20 |
| Batch Size | 16 |
| Training Time | 40 minutes |
| Detection Speed | 2–5 sec/image |
| Classification Accuracy | 90–96% |
| Precision | 92–97% |
| Recall | 91–96% |
| Deployment Platform | Streamlit Web Application |
| Development Environment | Visual Studio Code |
| Hardware Used | CPU |

Figure 3: Performance analysis of the proposed Inception V3-based parasite detection system
Figure 4: Output for Hymenolepis diminuta and Hymenolepis nana with accuracies of 95% and 99%
Figure 5: Output for Ascaris lumbricoides with accuracy 98%
Figure 6: Output for Capillaria philippinensis with accuracy 98%
TABLE II
Performance evaluation of Inception V3 for gastrointestinal parasite classification.
| Parasite Class | Accuracy (%) | Training Time (min) | Detection Speed (sec/image) |
|---|---|---|---|
| Ascaris lumbricoides | 98.2 | 40 | 2–5 |
| Capillaria philippinensis | 98.8 | 40 | 2–5 |
| Enterobius vermicularis | 94.6 | 40 | 2–5 |
| Fasciolopsis buski | 95.4 | 40 | 2–5 |
| Hookworm egg | 91.9 | 40 | 2–5 |
| Hymenolepis diminuta | 95.8 | 40 | 2–5 |
| Hymenolepis nana | 99.1 | 40 | 2–5 |
| Opisthorchis viverrini | 93.2 | 40 | 2–5 |

Figure 8: Confusion Matrix
Figure 9: ROC Curve
Figure 7: Streamlit Prediction Output
Figure 10: Confidence graph
TABLE III
Comparison of existing methods with the proposed Inception V3 model.

| Method | Accuracy (%) | Training Time (min) | Detection Speed (sec/image) |
|---|---|---|---|
| Traditional Image Processing | 78 | 10 | 5 |
| Basic CNN | 88 | 45 | 3 |
| SSD | 90 | 80 | 2 |
| YOLO | 92 | 95 | 1 |
| Proposed Inception V3 | 94 | 40 | 2 |
- CONCLUSION AND FUTURE SCOPE
In this paper, an automated framework for detecting gastrointestinal parasites in caprine animals using the Inception V3 deep learning model is proposed. By making use of transfer learning and suitable image preprocessing methods, the system classifies microscopic parasite images accurately and reliably. Experimental analysis shows that the model performs efficiently, with stable training and fast prediction. Deploying the model as a Streamlit-based web application makes it useful for real-time parasite identification. Although the proposed system has shown promising results, there is still room for improvement. The model's performance could be improved with a larger and more diverse dataset and with advanced data augmentation techniques. Further gains could come from fine-tuning more layers of the Inception V3 network or from ensembling models. In addition, the framework could be extended to real-time video analysis and to other livestock species, broadening its applicability. Integration with cloud or mobile platforms may also help with scalability and accessibility.
ACKNOWLEDGMENT
The authors sincerely thank the project guide and faculty members for their valuable guidance, encouragement and constant support during this work. The authors also express their gratitude to the institution for the facilities and resources required to complete this research successfully.
REFERENCES
- Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
- G. Litjens et al., "A survey on deep learning in medical image analysis," Medical Image Analysis, vol. 42, pp. 60–88, 2017.
- J. A. Quinn, R. Nakasi, P. K. Kiwanuka, A. Byanyima, W. Lubega, and A. Andama, "Deep convolutional neural networks for microscopy-based point-of-care diagnostics," in Proc. Machine Learning for Healthcare Conf., 2016, pp. 271–281.
- C. Parra, J. J. Garcia, and L. Salazar, "Automatic identification of intestinal parasites in microscopic stool images using convolutional neural networks," PLoS One, vol. 17, no. 8, 2022.
- C. Szegedy et al., "Rethinking the inception architecture for computer vision," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2818–2826.
- K. M. F. Fuhad et al., "Deep learning based automatic malaria parasite detection from blood smear images," Diagnostics, vol. 10, no. 5, pp. 1–22, 2020.
- Y. Li, L. Deng, and X. Liu, "A low-cost automated parasite diagnostic system via deep learning," Journal of Biophotonics, vol. 12, no. 9, 2019.
- E. Tasci, C. Uluturk, and A. Ugur, "A voting-based ensemble deep learning method focusing on image augmentation and preprocessing variations," Neural Computing and Applications, vol. 33, no. 22, pp. 15541–15555, 2021.
- A. Simon, S. Ananthi, and R. Kavitha, "Shallow CNN with LSTM layer for tuberculosis detection in microscopic images," International Journal of Recent Technology and Engineering, vol. 7, no. 3, 2019.
- W. Liu et al., "SSD: Single shot multibox detector," in Proc. European Conf. Computer Vision (ECCV), 2016, pp. 21–37.
- J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
- A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
- C. Parra et al., "Automatic identification of intestinal parasites in reptiles using microscopic stool images and convolutional neural networks," PLoS One, vol. 17, no. 8, 2022.
- S. Lustigman et al., "A research agenda for helminth diseases of humans," PLoS Neglected Tropical Diseases, vol. 6, no. 4, 2012.
- J. Charlier et al., "Chasing helminths and their economic impact on farmed ruminants," Trends in Parasitology, vol. 30, no. 7, pp. 361–367, 2014.
