Global Knowledge Platform
Serving Researchers Since 2012

CervScan : Predictive Model to Detect Cervical Diseases

DOI : https://doi.org/10.5281/zenodo.19603836


Dr. Naadem Divya

Professor, Department of CSE(DS), SNIST, Ghatkesar

Balivada Pranitha

Department of CSE(DS), SNIST, Ghatkesar

Gone Eeshitha

Department of CSE(DS), SNIST, Ghatkesar

Anumula Tejaswini

Department of CSE(DS), SNIST, Ghatkesar

Abstract – Cervical cancer remains a leading cause of cancer-related mortality among women worldwide, which makes early, precise, and interpretable diagnostic systems essential. This study presents a fully automated deep learning framework for cervical cancer cell analysis built on the Cervical Cancer Cell Image Dataset. The framework covers classification, detection, explainability, and a web-based application. Transfer-learning convolutional neural networks (ResNet50, DenseNet201, InceptionV3, and Xception) served as starting points for image classification. To sharpen class discrimination, a hybrid feature-level fusion mechanism was proposed that combines DenseNet201 and InceptionV3 features, applies principal component analysis (PCA), and classifies with a fully connected neural network (FNN). Further hybrid configurations were examined, including DenseNet201-ResNet50 feature fusion with PCA and a deep FNN, and a residual multilayer perceptron (residual MLP) that improves gradient flow and robustness. DenseNet201 achieved the best classification results in accuracy, precision, recall, and F1-score (F1-score = 98.0%). YOLOv5, YOLOv8, YOLOv9, and YOLOv11 object detection models were evaluated for simultaneously localizing and classifying abnormal cells; YOLOv9 performed best, with precision 0.575, recall 0.697, and mAP 0.646. Grad-CAM-based explainable AI provides visual interpretation, and the system is deployed as a Flask-based web application for real-time clinical inference.

Keywords – Cervical Cancer Classification, Deep Learning, DenseNet201, Feature Fusion, Fine-Tuning, Hybrid Model, InceptionV3.

  1. Introduction

    Cervical cancer remains a major global health problem. It is the fourth most prevalent cancer among women and causes a significant burden of illness annually [1]. Approximately 0.57 million new cases and 0.31 million deaths occur worldwide each year. The death rate stays high largely because of late diagnosis and the shortcomings of existing screening procedures [2]. Manual analysis of Pap smear images is still the standard diagnostic method, yet it suffers from several issues that limit its usefulness: the process is laborious, highly variable between observers, and complicated by overlapping cell structures and morphological heterogeneity, all of which reduce diagnostic accuracy and reliability [3]. Automated, fast, and precise diagnostic tools are therefore urgently needed to support physicians' decisions and improve patient outcomes [4].

    In recent years, deep learning (DL) methods have become popular for analysing medical images and have the potential to transform how cancer is found and classified [5]. Numerous studies have examined CNN architectures such as ResNet-50, DenseNet-201, InceptionV3, and Xception and how well they extract distinguishing features from complex medical images [6]. ResNet-50 mitigates the vanishing-gradient problem and strengthens deep feature learning, but it can overfit on smaller datasets [7]. DenseNet-201 excels at gradient propagation and feature reuse thanks to its dense connectivity, yet it can struggle to balance precision and recall [8]. Similarly, InceptionV3 and Xception capture hierarchical, multi-scale features well, but their computational cost makes them impractical in clinical settings with limited resources [9]. These architectural limits call for more sophisticated models that strike a reasonable balance between accuracy, generalization, and computational efficiency.

    Hybrid DL approaches, which combine the most promising characteristics of multiple architectures, have emerged as a possible solution to these issues [10]. In particular, pairing the feature extraction strength of densely connected networks with the multi-scale analysis of Inception-based models can give a more comprehensive picture of cervical cancer cell features. Such integrative models could make classification more dependable, reduce diagnostic subjectivity, and make the systems more useful in practice. Developing hybrid architectures for Pap smear image classification is therefore timely and significant: they offer an accurate, usable, and scalable way to address the current issues.

  2. Related Work

    Much effort has gone into classifying and localizing cervical cancer, driven by the rapid advance of artificial intelligence and medical image analysis. To overcome the shortcomings of traditional screening mechanisms, researchers have investigated a wide range of CNN architectures, hybrid schemes, and transfer learning approaches. Among the first to work on this problem were Gupta et al. [11], who proposed sorting Pap smear images into cervical cancer classes with a CNN. Their model demonstrated that CNNs can automatically extract distinguishing features, removing the need for hand-crafted features or manual analysis. By showing that CNNs outperform traditional ML algorithms, this work set the stage for further deep learning research on cervical cancer detection.

    Building on this, Hao et al. [12] developed a new CNN-based model aimed at accurately classifying cervical cancer. Unlike earlier work that relied on generic CNN designs, they tailored the architecture to cope better with the varied shapes and sizes of cervical cells. The proposed model outperformed existing CNN implementations in accuracy and recall, making it more applicable to clinical use. Kang et al. [13] extended this line by using DL to classify cervical cancer into multiple groups. Recognizing that Pap smear images contain more than two classes of abnormality, rather than just normal and abnormal, they designed their model to handle these finer-grained class differences. This multi-class approach is more realistic for clinical practice, where distinguishing between different lesion stages matters greatly.

    Khan et al. [14] developed a complete framework that combines DL with enhanced preprocessing strategies to improve both computational speed and diagnostic accuracy. Their algorithm ensured good feature learning, reduced computational cost, and scaled more easily. Luo et al. [15] likewise examined DL models for cervical cancer screening from the viewpoint of clinical applicability and scalability. Their work demonstrated that CNN-based systems can be powerful in real-life screening programs and can readily be adapted to other datasets. By highlighting practical problems such as dataset imbalance and limited computing power, their effort helped bridge the gap between theory and practice.

    Transfer learning is another major trend in cervical cancer research. Mittal et al. [16] applied transfer learning to automate cervical cancer screening. They demonstrated that models pre-trained on a large dataset such as ImageNet can significantly improve classification accuracy on Pap smear datasets with only minor modifications, while also reducing training time. This proved particularly effective in medical imaging, where labelled data is scarce. Mukherjee et al. [17] went further and proposed an all-inclusive DL pipeline incorporating sophisticated image enhancement, preprocessing, and transfer learning. Their analysis showed the value of combining multiple approaches to make models effective across datasets, which in turn makes them more dependable in varied clinical situations.

    Hybrid architectures have also attracted much attention, since they can integrate the strengths of different model families. Santos et al. [18] presented mixed DL models for classifying cervical cancer using both CNN-based feature extraction and ensembling. Their experiments showed that such hybrid systems can outperform single models by reducing bias and offering alternative feature representations. Sengupta and Basu [19] also investigated DL classification of cervical cancer from Pap smear images, observing that cells overlap and the classes are imbalanced. Their work made clear that architectures able to separate complex cellular structures effectively are essential for reliable classification.

    In a related article, Siddiqi and Pasha [20] explained how hybrid models and ML can be combined for cervical cancer classification. Their primary aim was to improve performance by integrating CNNs with popular ML algorithms such as support vector machines and random forests. They demonstrated that hybrid pipelines can stay interpretable while remaining highly accurate: CNNs extract deep features that are then fed to conventional classifiers. This approach merged end-to-end DL with standard ML to build a system applicable to diverse datasets and clinical requirements.

  3. Materials And Methods

    The proposed algorithm for cervical cancer analysis, based on the Cervical Cancer Cell Image Dataset, is fully automated. It includes resizing, normalization, and data augmentation steps to help the model generalize [28]. Transfer learning models (ResNet50, DenseNet201, InceptionV3, and Xception) extract high-level features for classification. The hybrid design combines DenseNet201 and InceptionV3; PCA and t-SNE then reduce dimensionality, and a fully connected FNN and a residual MLP process the reduced features. YOLO versions (v5, v8, v9, and v11) localize abnormal cells. Grad-CAM, an explainable AI method, reveals which important regions drive the predictions. A Flask deployment provides a web-based interface that scales for real-time inference and live visualization [29].

    Fig. 1. System Architecture

    The proposed system architecture automates end-to-end cervical cancer classification and detection. After data preparation and visualization, preprocessing is followed by classification and by detection with YOLO. Several DL systems are trained, including feature-fusion systems with dimensionality reduction, and standard metrics are used to select the best model. Grad-CAM visualization ensures explainability. Lastly, a Flask-based interface lets users submit photos for classification, detection, and interpretation in real time [30].

    1. Dataset Collection

      Cervical cancer was analyzed using the classification and detection branches of the Cervical Cancer Cell Image Dataset. The classification branch contains 2,427 training images and 1,622 testing images across five classes: Dyskeratotic, Koilocytotic, Metaplastic, Parabasal, and Superficial-Intermediate. The detection branch provides 839 training images and 100 testing images with YOLO bounding-box annotations. For strict assessment, the dataset was divided into train/validation/test sets.
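As a rough sketch of the train/validation/test split, the snippet below shuffles and partitions file names per class. The file names, split fractions, and seed are illustrative assumptions; the paper does not specify its exact split protocol.

```python
import random

def split_dataset(files, val_frac=0.1, test_frac=0.2, seed=42):
    """Shuffle file names and return (train, val, test) lists."""
    rng = random.Random(seed)
    files = list(files)
    rng.shuffle(files)
    n_test = int(len(files) * test_frac)
    n_val = int(len(files) * val_frac)
    test = files[:n_test]
    val = files[n_test:n_test + n_val]
    train = files[n_test + n_val:]
    return train, val, test

# Example: 100 placeholder image names for one of the five classes
names = [f"Dyskeratotic_{i:03d}.bmp" for i in range(100)]
train, val, test = split_dataset(names)
print(len(train), len(val), len(test))  # 70 10 20
```

The same routine would be applied per class so the split preserves the class balance checked in the visualization step.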

      Fig.2 Cervical Cancer Dataset

      ResNet50: To counter vanishing gradients, ResNet50 uses residual connections to recover hierarchical features from cervical cell images. By capturing the complicated cell patterns required in automated medical image analysis and anomaly diagnosis, it improves predictive accuracy and generalization across datasets [21].

      DenseNet201: Dense connectivity improves gradient flow and feature reuse, capturing both low-level and high-level properties of cervical cells. It reduces overfitting, increases representational power, and is sensitive to small morphological variations, which makes it useful for medical diagnostics and accurate categorization [22].

      x_l = H_l([x_0, x_1, ..., x_{l-1}])   (1)
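Eq. (1) says that layer l operates on the concatenation of all earlier feature maps. A toy NumPy sketch, with each H_l stubbed as a random linear map plus ReLU purely for illustration, shows how the channel count grows by the growth rate at every layer:

```python
import numpy as np

def dense_block(x, num_layers=4, growth_rate=32):
    """Toy dense block: each 'layer' H_l sees the concatenation of all
    earlier outputs along the channel axis (Eq. 1). H_l is stubbed as a
    random linear map followed by ReLU, not a real convolution."""
    rng = np.random.default_rng(0)
    features = [x]                                      # x_0
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)         # [x_0, ..., x_{l-1}]
        W = rng.standard_normal((inp.shape[-1], growth_rate))
        features.append(np.maximum(inp @ W, 0.0))       # x_l = H_l(...)
    return np.concatenate(features, axis=-1)

out = dense_block(np.ones((8, 8, 64)))  # 64 input channels
print(out.shape[-1])                    # 64 + 4*32 = 192
```

The growing concatenation is what lets DenseNet reuse early low-level features in every later layer.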

      InceptionV3: InceptionV3 extracts both fine-grained and global cell patterns using factorized convolutions and inception modules. Capturing diverse hierarchical structures from cervical cancer cell image collections increases classification reliability and speeds up detection [23].

      O = concat(F_1(X), F_2(X), F_3(X), F_4(X))   (2)

      Xception: Xception is an efficient convolutional network for cervical cells that extracts features with depthwise separable convolutions, modelling spatial and cross-channel interactions separately. It reduces complexity, enhances accuracy, highlights important parts of the cells [24], and delivers reliable, high classification performance for diagnosis.
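The parameter savings behind depthwise separable convolutions can be seen with a quick count. The layer sizes below are arbitrary examples, not Xception's actual configuration:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (ignoring bias)."""
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    """Depthwise separable version: one k x k filter per input channel
    (spatial step) followed by a 1 x 1 pointwise convolution (channel mix)."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)      # 589824
sep = sep_conv_params(3, 256, 256)  # 2304 + 65536 = 67840
print(std, sep, round(std / sep, 1))
```

For this example the separable layer uses roughly 8.7x fewer parameters, which is where the "reduces complexity" claim comes from.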

      Fig.3

    2. Visualization

      Bar graphs of per-class image frequencies were used to check dataset balance, and sample photos with bounding boxes and annotations illustrated the composition of the classes. These visualizations revealed class imbalance and representative detection and classification cases, informing preprocessing and augmentation so that model training is reliable.

    3. Pre-processing

      Preprocessing included resizing images for uniformity, equalizing pixel intensity levels, and augmenting the data to increase dataset diversity and model generalization. The data was divided into training, validation, and testing splits for performance evaluation. Encoding bounding-box annotations into text files produced YOLO-compatible detection datasets, and mapping images to class labels produced classification datasets as structured model input.
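The bounding-box encoding step can be sketched as follows. The function name and box values are illustrative, but the output line follows the standard YOLO label format: class id, then normalized box center, width, and height.

```python
def to_yolo_line(cls_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space box to one YOLO label line:
    'class x_center y_center width height', all coordinates in [0, 1]."""
    cx = (x_min + x_max) / 2 / img_w
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A 200x200-pixel box in a 640x480 image
print(to_yolo_line(0, 100, 50, 300, 250, 640, 480))
# -> 0 0.312500 0.312500 0.312500 0.416667
```

One such line per annotated cell, written to a .txt file named after the image, is what the YOLO training loaders expect.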

    4. Algorithms

      Proposed Model: The hybrid model combines DenseNet201 and InceptionV3 through feature-level fusion followed by PCA-based dimensionality reduction. It collects complementary multi-scale information, removes redundancy, and processes the compact features for robust cervical cell classification [25], improving medical imaging performance and abnormality detection.
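A minimal NumPy sketch of the fusion-plus-PCA step, with random arrays standing in for real backbone embeddings. The 1920/2048 dimensions match the usual DenseNet201/InceptionV3 pooled output sizes, and the 256-component cut-off is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for per-image backbone embeddings (500 images)
f_dense = rng.standard_normal((500, 1920))   # DenseNet201-like features
f_incep = rng.standard_normal((500, 2048))   # InceptionV3-like features

# Feature-level fusion: concatenate the two vectors per image
fused = np.concatenate([f_dense, f_incep], axis=1)   # (500, 3968)

# PCA via SVD of the centered matrix; keep the top 256 components
mean = fused.mean(axis=0)
U, S, Vt = np.linalg.svd(fused - mean, full_matrices=False)
reduced = (fused - mean) @ Vt[:256].T                # (500, 256)
print(fused.shape, reduced.shape)
```

The reduced 256-dimensional vectors are what the downstream FNN or residual MLP classifier would consume.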

      FNN: A fully connected neural network processes the compressed DenseNet-Inception or DenseNet-ResNet features. It models nonlinear correlations, adds classification robustness, and maps the compact feature representation to cervical cell types accurately, allowing efficient predictions for reliable medical imaging.

      MLP: A multi-layer perceptron with residual links handles the complex nonlinear dependencies expressed by the fused features. It mitigates vanishing gradients, increases classification robustness, and projects the input to cervical cell classes, giving a scalable, adaptive, and reliable medical imaging classifier.

      y = f(W_L f(W_{L-1} ... f(W_1 x + b_1) ... + b_{L-1}) + b_L)   (4)
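The residual MLP variant adds each block's transformation to its own input instead of replacing it, which is what keeps gradients flowing. A NumPy forward-pass sketch with random placeholder weights (all hidden sizes equal so the skip needs no projection):

```python
import numpy as np

def residual_mlp_forward(x, weights, biases):
    """Forward pass of an MLP with identity skip connections:
    h <- h + ReLU(W h + b) at every block."""
    h = x
    for W, b in zip(weights, biases):
        h = h + np.maximum(h @ W + b, 0.0)   # residual block
    return h

rng = np.random.default_rng(1)
d = 256                                       # matches the PCA output size
Ws = [rng.standard_normal((d, d)) * 0.01 for _ in range(3)]
bs = [np.zeros(d) for _ in range(3)]
out = residual_mlp_forward(rng.standard_normal((4, d)), Ws, bs)
print(out.shape)  # (4, 256)
```

A final linear layer with softmax over the five cell classes would sit on top of this in the full classifier.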

      YOLOv5: YOLOv5 detects cervical cells by predicting bounding boxes and class probabilities in a single pass. Fast, accurate abnormality localization enables real-time diagnostic assistance, with good capture of spatial features and effective abnormality identification in microscopic cervical cell images [26].

      x = σ(t_x) + c_x,  y = σ(t_y) + c_y,  w = p_w · e^{t_w},  h = p_h · e^{t_h}   (5)

      YOLOv8: YOLOv8 improves on single-stage detection in both feature extraction and prediction. It localizes cervical cells reliably regardless of size and density, enhancing the detection of aberrant cells. Its scalable, high-recall detection architecture [27] supports reliable automated analysis in high-throughput medical imaging processes and diagnostic systems.

      L = λ_box L_box + λ_obj L_obj + λ_cls L_cls   (6)

      YOLOv9: To classify and recognize cervical cells correctly, YOLOv9 optimizes the bounding-box prediction layers. It handles overlapping structures and varied cell sizes to achieve high recall and strong anomaly localization, delivering precise, real-time medical imaging analysis that improves diagnostic workflows and patient outcomes.

      L = λ_box L_box + λ_obj L_obj + λ_cls L_cls + λ_dfl L_dfl   (7)

      YOLOv11: YOLOv11 is a simple, fast, and practical detector that makes cervical cell localization realistic. Real-time bounding-box and label prediction and fast image processing allow abnormalities to be identified quickly in resource-constrained settings while sustaining high-throughput diagnostic workloads.
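The YOLO box decoding of Eq. (5) can be checked with a few lines; the grid-cell offsets and anchor sizes below are made-up values for illustration:

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """YOLO-style box decoding (Eq. 5): sigmoid offsets place the center
    inside grid cell (cx, cy); exponentials scale the anchor prior (pw, ph)."""
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    x = sig(tx) + cx
    y = sig(ty) + cy
    w = pw * math.exp(tw)
    h = ph * math.exp(th)
    return x, y, w, h

# Zero raw predictions land the center mid-cell and keep the anchor size
x, y, w, h = decode_box(0.0, 0.0, 0.0, 0.0, cx=3, cy=4, pw=2.0, ph=1.5)
print(x, y, w, h)  # 3.5 4.5 2.0 1.5
```

The sigmoid keeps the predicted center inside its grid cell, and the exponential keeps widths and heights positive.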

    5. Integration of XAI and Flask Framework

    Flask makes it possible to integrate XAI into transparent, interactive, and user-friendly AI applications. Grad-CAM and other XAI techniques highlight the important regions that influence decisions, so users can identify biases and inaccuracies in model predictions. In healthcare, banking, and security, trust and accountability are essential, and hence so is openness. Flask also suits AI model deployment: it offers smooth web interfaces and APIs and is lightweight, versatile, and scalable.

    Together, Flask and XAI allow model insights to be visualized in real time, such as Grad-CAM activation maps, feature importances, and prediction explanations. Sharing data, receiving predictions, and exploring insights become simple, bridging complex AI models and the end user and encouraging informed use.
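The Grad-CAM computation itself needs only the last convolutional layer's activations and the gradients of the target class score with respect to them. A NumPy sketch, with random arrays standing in for a real backbone's tensors:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap: channel weights are the spatial mean of the
    gradients (global average pooling); the map is the ReLU of the
    weighted sum of activation channels, normalized to [0, 1]."""
    weights = gradients.mean(axis=(0, 1))                  # (C,)
    cam = np.maximum((activations * weights).sum(-1), 0.0)  # (H, W)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
acts = rng.random((7, 7, 512))            # last-conv feature maps (stand-in)
grads = rng.standard_normal((7, 7, 512))  # d(score)/d(activations) (stand-in)
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

The low-resolution heatmap is then upsampled to the input image size and overlaid on the cervical cell image in the web interface.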

  4. Experimental Results

    Accuracy: The accuracy of a test is its ability to discriminate between patient and healthy cases. It is the proportion of true positives and true negatives among all cases tested:

    Accuracy = (TP + TN) / (TP + FP + TN + FN)   (8)

    Precision: Precision is the proportion of predicted positive cases or samples that are truly positive:

    Precision = TP / (TP + FP)   (9)

    Recall: Recall measures a model's ability to identify all relevant examples of a class, contrasting correctly predicted positive observations with the total positives:

    Recall = TP / (TP + FN)   (10)

    F1-Score: The F1-score combines precision and recall into a single measure of model accuracy:

    F1-Score = 2 × (Precision × Recall) / (Precision + Recall) × 100   (11)

    mAP: mAP measures ranking quality, taking both the number of relevant detections and their positions into account. It is the arithmetic mean of the Average Precision (AP) over all classes/queries:

    mAP = (1/n) Σ_{k=1}^{n} AP_k   (12)
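Equations (8)-(11) are easy to sanity-check in code. The confusion counts below are invented for illustration, not values from the paper:

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    # F1 as a fraction; multiply by 100 for the percentage form of Eq. (11)
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts for one class
tp, fp, fn, tn = 90, 10, 5, 95
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 3), round(r, 3), round(f1(p, r), 3),
      round(accuracy(tp, tn, fp, fn), 3))
```

As a cross-check, plugging YOLOv9's reported precision (0.575) and recall (0.697) into f1 gives about 0.630.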

    Table.1 Performance Evaluation – Classification

    | ML Model        | Accuracy | Precision | Recall | F1-score |
    |-----------------|----------|-----------|--------|----------|
    | ResNet50        | 0.978    | 0.978     | 0.978  | 0.978    |
    | DenseNet-201    | 0.980    | 0.980     | 0.980  | 0.980    |
    | InceptionV3     | 0.967    | 0.967     | 0.967  | 0.967    |
    | Xception        | 0.970    | 0.971     | 0.970  | 0.970    |
    | Proposed        | 0.951    | 0.951     | 0.951  | 0.951    |
    | Extension – FNN | 0.961    | 0.961     | 0.961  | 0.961    |
    | Extension – MLP | 0.957    | 0.957     | 0.957  | 0.956    |

    Table.1 shows that DenseNet-201 and ResNet50 deliver the top cervical cell classification accuracy and reliability.

    Table.2 Performance Evaluation – Detection

    | ML Model | Precision | Recall | mAP   |
    |----------|-----------|--------|-------|
    | YOLO v5  | 0.614     | 0.588  | 0.607 |
    | YOLO v8  | 0.556     | 0.654  | 0.617 |
    | YOLO v9  | 0.575     | 0.697  | 0.646 |
    | YOLO v11 | 0.458     | 0.562  | 0.500 |

    As Table.2 demonstrates, YOLOv9 gives the best overall detection performance, beating the other models at detecting cervical cell abnormalities.

    Fig.4 Comparison Graph – Classification

    As Figure 4 demonstrates, DenseNet-201 performs best across all the metrics. Accuracy is shown in green, precision in light green, recall in light blue, and the F1-score in dark red.

    Fig.5 Comparison Graph – Detection

    As Figure 5 indicates, YOLOv9 performs best overall, surpassing all of the other models. Precision is shown in green, recall in light green, and mAP in blue. This demonstrates YOLOv9's detection ability.

    Fig.6 Upload Input Image

    Figure 6 shows the user interface for uploading cervical cancer images in order to categorize them.

    Fig.7 Predicted Results

    Fig.7 shows the results of cervical cell image classification; the cell type was predicted as dyskeratotic.

    Fig.8 Upload Input Image

    Figure 8 displays the input interface where users can upload pictures of cervical cells to receive automatically produced classification results.

    Fig.9 Predicted Results

    As shown in Fig.9, the uploaded cervical cell image was processed by the detector, and abnormal areas are marked in the output image.

  5. Conclusion

    The proposed architecture provides a powerful and interpretable DL-based framework that identifies and categorizes cervical cancer automatically, integrating well-suited classification and detection architectures into one diagnostic study. The algorithm uses the Cervical Cancer Cell Image Dataset, which supplies image-level class labels for classification as well as bounding-box annotations (in YOLO format) that mark lesion positions. DenseNet201 was the most successful model tested, with 98.0% accuracy, precision, recall, and F1-score, outperforming ResNet50, InceptionV3, and Xception. Classification accuracy and processing efficiency improved further with a hybrid model that fused DenseNet201 and InceptionV3 features, followed by PCA dimensionality reduction and a fully connected neural network. YOLOv9 was the most successful at detecting abnormal cells, with a precision of 0.575, a recall of 0.697, and a mAP of 0.646, indicating strong localization. Grad-CAM visualizations highlighted the image regions that influence predictions, making the framework easier to understand. Moreover, a Flask-based web interface was created to support real-time image uploading, model inference, and visualization of detected cell abnormalities. Assembled together, these components deliver cervical cancer results that are accurate, reliable, and clear, giving doctors early screening solutions that are easy to understand and work with.

    Further research could refine cervical cancer analysis with larger and more diverse data, helping the models perform well across more imaging conditions and patient groups. The hybrid feature-fusion framework could be improved with sophisticated dimensionality reduction techniques or attention mechanisms for more accurate classification and faster computation. Considering next-generation detection architectures, such as transformer-based models, could yield more precise localization and help with low-contrast or overlapping cells. A more comprehensive predictive system could also integrate other forms of data, including patients' clinical records or genomic information. Finally, real-time deployment strategies and edge-computing solutions could make screening fast and simple in hospital environments so that correct diagnoses are made in time.

  6. References

  1. Dogan, Y. (2025). AutoEffFusionNet: A new approach for cervical cancer diagnosis using ResNet-based autoencoder with attention mechanism and genetic feature selection. IEEE Access.

  2. Hanzala, A., Akter, T., & Rahman, M. S. (2025). A hybrid approach for cervical cancer detection: Combining D-CNN, transfer learning, and ensemble models. Array, 27, 100434.

  3. Mathivanan, S. K., Francis, D., Srinivasan, S., Khatavkar, V., P, K., & Shah, M. A. (2024). Enhancing cervical cancer detection and robust classification through a fusion of deep learning models. Scientific Reports, 14(1), 10812.

  4. Abinaya, K., & Sivakumar, B. (2024). A deep learning- based approach for cervical cancer classification using 3D CNN and vision transformer. Journal of Imaging Informatics in Medicine, 37(1), 280.

  5. Attallah, O. (2023). CerCan· Net: Cervical cancer classification model via multi-layer feature ensembles of lightweight CNNs and transfer learning. Expert Systems with Applications, 229, 120624.

  6. Abdalla, M., Tageldeen, M., Mohamed, S., & Ali, I. (2021). Deep learning based automatic classification of pap smear images for cervical cancer detection. IEEE Access, 9, 30237–30245.

  7. Ali, S., Zhang, L., Khan, S. A., & Ullah, A. (2020). Automated cervical cancer diagnosis based on deep learning approaches. IEEE Access, 8, 143970–143978.

  8. Amrane, M., Oukid, S., & Abbas, H. (2022). Hybrid deep learning model for cervical cancer classification. IEEE Access, 10, 57231–57241.

  9. Cui, C., Li, Y., Liu, H., & Yin, Y. (2020). High-performance cervical cancer cell classification with sequential deep learning. IEEE Access, 8, 99456–99467.

  10. Ghosh, A., Majumder, D., Bhattacharyya, S., & De, D. (2021). A comprehensive study on deep learning-based techniques for cervical cancer detection. IEEE Access, 9, 36127–36140.

  11. Gupta, A., Dua, R., Agrawal, S., & Bajaj, V. (2020). Cervical cancer detection using convolutional neural networks. IEEE Access, 8, 93629–93638.

  12. Hao, X., Yang, F., & Zhang, L. (2023). A novel CNN-based model for accurate cervical cancer classification. IEEE Access, 11, 11203–11213.

  13. Kang, J., Gao, L., & Zhang, S. (2021). Multi-class cervical cancer classification with deep learning. IEEE Access, 9, 46285–46295.

  14. Khan, M. A., Sharif, M., Akram, T., & Aurangzeb, K. (2021). A framework for efficient cervical cancer classification using deep learning. IEEE Access, 9, 103428–103438.

  15. Luo, J., Yu, W., & Zhang, Z. (2022). Deep learning model for cervical cancer screening. IEEE Access, 10, 57242–57253.

  16. Mittal, S., Vaish, A., & Tripathi, R. (2020). Automated cervical cancer screening using transfer learning. IEEE Access, 8, 179028–179035.

  17. Mukherjee, S., Ray, S., & Majumder, S. (2023). A comprehensive deep learning approach for cervical cancer classification. IEEE Access, 11, 23301–23312.

  18. Santos, L., Costa, J. A., & Oliveira, L. (2021). Hybrid deep learning models for cervical cancer classification. IEEE Access, 9, 133241–133252.

  19. Sengupta, S., & Basu, S. (2021). Deep learning approach for cervical cancer classification with pap smear images. IEEE Access, 9, 130112–130121.

  20. Siddiqi, S., & Pasha, S. (2022). Machine learning-based cervical cancer classification using hybrid models. IEEE Access, 10, 93281–93292.

  21. Yin, Q., & Yang, Y. (2022). An efficient deep learning model for cervical cancer screening. IEEE Access, 10, 83612–83622.

  22. Pacal, I., & Kılıçarslan, S. (2023). Deep learning-based approaches for robust classification of cervical cancer. Neural Computing and Applications, 35(25), 18813–18828.

  23. Talpur, D. B., Raza, A., Khowaja, A., & Shah, A. (2024). DeepCervixNet: An advanced deep learning approach for cervical cancer classification in pap smear images. VAWKUM Transactions on Computer Sciences, 12(1), 136-148.

  24. Tan, S. L., Selvachandran, G., Ding, W., Paramesran, R., & Kotecha, K. (2024). Cervical cancer classification from pap smear images using deep convolutional neural network models. Interdisciplinary Sciences: Computational Life Sciences, 16(1), 16-38.

  25. Pham, T. A., Hoang, V. D., Tran, D. H., & Le Van, T. L. (2025, April). CerMixer: An Efficient Model for Cervical Cancer Classification Based on Patching and Multi-scale Depthwise Convolutional Fusion. In Asian Conference on Intelligent Information and Database Systems (pp. 240–253). Singapore: Springer Nature Singapore.

  26. Emara, H. M., El-Shafai, W., Soliman, N. F., Algarni, A. D., Alkanhel, R., & Abd El-Samie, F. E. (2024). Cervical cancer detection: A comprehensive evaluation of CNN models, vision transformer approaches, and fusion strategies. IEEE Access.

  27. Gonzalez-Ortiz, O., Ubando, L. A. M., Fuenzalida, G. A. S., & Garza, G. I. M. (2024, June). Evaluating DenseNet121 neural network performance for cervical pathology classification. In 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS) (pp. 297-302). IEEE.

  28. Khowaja, A., Zou, B., & Kui, X. (2024). Enhancing cervical cancer diagnosis: Integrated attention- transformer system with weakly supervised learning. Image and Vision Computing, 149, 105193.

  29. Jain, S., Jain, A., Jangid, M., & Shetty, S. (2024). Metaheuristic driven framework for classifying cervical cancer on smear images using deep learning approach. IEEE Access.

  30. Mehedi, M. H. K., Khandaker, M., Ara, S., Alam, M. A., Mridha, M. F., & Aung, Z. (2024). A lightweight deep learning method to identify different types of cervical cancer. Scientific Reports, 14(1), 29446.