🏆
International Engineering Publisher
Serving Researchers Since 2012

A ResNet50-Based Deep Learning Framework with Optimized MRI Preprocessing

DOI : https://doi.org/10.5281/zenodo.18136505
Download Full-Text PDF Cite this Publication

Text Only Version

 

A ResNet50-Based Deep Learning Framework with Optimized MRI Preprocessing

Sumaiya Khatoon

Research Scholar, Department of Computer Science & Engineering, Oriental Institute of Science & Technology, Bhopal

Deepshikha Patel

HOD, Department of Computer Science & Engineering, Oriental Institute of Science & Technology, Bhopal

Abstract – This study introduces an advanced framework for the automated detection and classification of brain tumors, utilizing a ResNet50-based deep learning model integrated with advanced image preprocessing techniques. The model uses the Brain Tumor MRI Dataset (7,023 images from Figshare, SARTAJ, and Br35H) to sort MRI scans into four groups: glioma, meningioma, no tumor, and pituitary. A strong preprocessing pipeline improves image quality by dealing with differences in MRI protocols. The ResNet50 architecture, which uses transfer learning and a custom classification head, is very accurate and easy to understand, which is important for clinical use. Accuracy, precision, recall, and F1-score are all parts of a full evaluation. The model surpasses baseline CNN methodologies, providing enhanced diagnostic accuracy. This study pushes neuro-oncology forward by offering a scalable, understandable way to find tumors early, which could lower the number of mistakes made in diagnosis and improve patient outcomes. Future tasks involve multi-modal integration and refinement to enhance performance further.

Keywords: Brain Tumor Detection, Deep DarylLearning, ResNet50, Image Preprocessing, MRI Classification, Explainable AI

  1. INTRODUCTION

    Brain tumors are a major health problem around the world. In the United States alone, there are expected to be 23,890 new cases in 2025, which will lead to a lot of illness and death [1]. The complexity and diversity of brain tumors, which include primary neoplasms such as gliomas and meningiomas, as well as secondary metastatic lesions, require accurate and prompt diagnosis to enhance therapeutic efficacy and increase patient survival rates. Early detection is crucial, as it facilitates interventions that can slow tumor growth and lower the chance of neurological deficits [2]. Nonetheless, conventional diagnostic methodologies, predominantly dependent on manual interpretation of medical imaging modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), encounter considerable constraints. These consist of inter-observer variability, time-consuming analysis, and difficulties in identifying subtle or low-grade tumors, which can postpone essential interventions [3].

    Medical imaging, especially MRI, is the most important tool for diagnosing brain tumors because it has better soft-tissue contrast and can show images from different angles. MRI sequences, including T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR), offer comprehensive visualization of tumor morphology, facilitating the differentiation of tumor types and the evaluation of peritumoral edema. Even though these benefits exist, manual analysis is often subjective, with expert radiologists reporting diagnostic accuracies of only 9095%. This shows how important it is to have automated tools that can improve accuracy and speed [4]. The emergence of artificial intelligence (AI), especially deep learning, has transformed medical imaging by providing automated, data-driven solutions that can discern intricate patterns from high-dimensional datasets. Convolutional neural networks (CNNs) have demonstrated potential in automating tumor detection, attaining accuracies that are comparable to or exceed those of human experts in certain contexts [5].

    The combination of deep learning and advanced image preprocessing solves some of the biggest problems with MRI analysis, like noise, artifacts, and differences in imaging protocols between hospitals. Preprocessing methods like skull stripping and intensity normalization make images more uniform so that features can be extracted more easily. Data augmentation, on the other hand, helps with the problem of not having enough data, which is common in medical imaging [6]. Recent research has investigated architectures such as VGG16 and Inception V3 for brain tumor classification; however, these models frequently lack the requisite depth or interpretability for clinical implementation [7]. Transfer learning, utilizing pre-trained models such as ResNet50, presents a formidable alternative by employing generalized features from extensive datasets like ImageNet and adapting them for medical imaging tasks with constrained data [8].

    This study seeks to enhance brain tumor diagnostics through the creation of an automated, interpretable, and highly precise detection system. The study aims to enhance diagnostic accuracy, alleviate radiologist workload, and promote early intervention in neuro- oncology by integrating a ResNet50-based deep learning model with an optimized preprocessing pipeline. The emphasis on a heterogeneous, multi-class dataset guarantees extensive representation of tumor types, fulfilling the clinical requirement for effective distinction between pathological and healthy brain tissue.

  2. BACKGROUND AND CONTEXT

    Recent progress in artificial intelligence, especially deep learning, has changed medical imaging by making it possible to automatically extract features and recognize patterns from large, complicated datasets. Convolutional neural networks (CNNs) have shown to be very effective at analyzing medical images, getting close to human-level performance in tasks like finding and separating tumors [5]. Nonetheless, challenges endure, such as inconsistencies in MRI acquisition protocols, noise, and restricted annotated datasets, which impede model generalizability. Image preprocessing techniques like skull stripping and intensity normalization are very important for making inputs more consistent and improving feature extraction, but not much research has been done on how to use them with deep learning models [6]. Previous research has utilized architectures such as VGG16 and Inception V3 for brain tumor classification; however, these models frequently lack the requisite depth and interpretability for clinical implementation [7]. Transfer learning, which uses pre-trained models like ResNet50, is a promising way to adapt generalized features from large datasets to medical imaging tasks with limited data.

  3. RESNET50 ARCHITECTURE

    He et al. introduced the ResNet50 architecture in 2016 [9]. It is a 50-layer deep convolutional neural network that uses residual learning to solve the problem of vanishing gradients in deep networks. Because it has been shown to work well for classifying images and can learn complex hierarchical features, it is the backbone of the proposed model. The architecture can take in images that are 224x224x3 pixels in size, which is the right size for preprocessed MRI scans that are shown as pseudo-RGB images. It starts with a 7×7 convolutional layer with 64 filters and a stride of 2. This is followed by batch normalization and ReLU activation to find low-level features like edges and textures. A 3×3 max-pooling layer with a stride of 2 reduces the spatial dimensions even more, getting the feature maps ready for deeper processing. There are 16 residual blocks in ResNet50, which are arranged into four stages. The filter sizes increase from 64 to 128 to 256 to 512, and the output feature maps are 56x56x256, 28x28x512, 14x14x1024, and 7x7x2048. Each residual block has a bottleneck design with three convolutional layers (1×1, 3×3, 1×1) tat makes the computation easier while still keeping the power to represent. Residual connections, which are defined as y = F(x, W_i) + x, add the input to the block’s output. This lets the network learn identity functions and stop deep architectures from getting worse. This design makes training easier by making sure that the gradient flows, which helps the model find complex patterns like tumor boundaries and peritumoral edema in MRI images.

    Figure 1 ResNet50 Model Architecture [10]

    In this study, ResNet50 is initialized with pre-trained ImageNet weights, utilizing transfer learning to extract generalized visual features, which are particularly effective given the moderate size of the Brain Tumor MRI Dataset. A custom classification head replaces the original fully connected layer. It has a global average pooling layer that turns the 7x7x2048 feature maps into a 2048- dimensional vector, followed by two dense layers with 512 units and ReLU activation, a dropout layer (rate 0.5) for regularization,

    and a final 4-unit dense layer with softmax activation for multi-class classification. The backbone’s 23.5 million parameters are fixed to make sure the model runs quickly, while the custom head’s 1 million trainable parameters change the model to fit the task of classifying brain tumors. This architecture, along with optimized preprocessing, makes tumor detection strong and easy to understand, which is in line with the study’s goals of accuracy and clinical usefulness.

  4. RELATED WORK

    The author of this study, Raut, Gajendra, et al. [11], advocates for the automation of brain tumor diagnosis. They suggest a model that uses convolutional neural networks (CNNs) to find brain tumors with 95.5% accuracy. Once the tumor has been found, the authors use segmentation techniques like auto encoders and K-means to find the part of the picture where the tumor is most visible. They found that using only K-means made the picture poorly segmented and noisy. They used both auto encoders and K-means to fix the problem, and the images they got were clearer and more accurate. In conclusion, the proposed model is an effective instrument for the identification and segmentation of brain tumors, minimizing the necessity for manual intervention.

    Hossain, Tonmoy, et al. [12] wrote this study to look into how to use a “Fuzzy C-Means clustering algorithm,” regular classifiers, and a convolutional neural network to accurately separate brain tumors from 2D MRI scans of the brain. The authors assert that handling extensive datasets through manual categorization may result in inaccurate predictions and diagnoses. It is hard to get tumor areas out of images because brain tumors can look very different and often blend in with normal tissue around them. Ali Pashaei [13] came up with a computer-assisted detection (CAD) method for sorting brain tumors in MRI images. To get features from the brain MRI images, they were processed with Discrete Wavelet Transform (DWT). These features were then used to classify the images with a CNN. The suggested method got an overall accuracy of 98.5%.

    [14] This study examines the utilization of MR images to differentiate among various types of brain tumors. The author asserts that the MR image filter is the optimal choice for analysis, as it does not subject the patient to superfluous radiation exposure. Segmentation of brain tumors is a difficult clinical diagnostic task because the images have a lot of big and complicated biases. The job needs to be done quickly, correctly, and dependably. To find and get rid of the tumor during segmentation, the author uses K means and Fuzzy C-means (FCM) clustering methods. We use segmented and relative area measurements, mean square error, and maximum signal-to-noise ratio to test K-means and FCM clustering algorithms and compare them. The FCM method works better than the K-means method. This means that the FCM algorithm correctly separated 0.93 percent of the relative tumor area from the original MR image, which shows that the tumor affected this area. The FCM Algorithm takes 8.639 seconds to process, while the KM Algorithm takes 22.831 seconds. Jiang, Jun, et al. [15] write about how hard it is to diagnose brain tumors and plan radiation treatment because tumor tissue looks different in each patient and the edges of the lesions are not always clear. The author describes a method for constructing a graph utilizing the feature sets of multimodal MRI from both the population and individual patients. The network uses global and custom classifiers to figure out how likely it is that a certain pixel is part of the tumor or the background. Twenty-three glioma image sequences are utilized to evaluate the proposed method, and the segmentation outcomes are contrasted with those of alternative techniques. The proposed technique achieved a Dice similarity coefficient (DSC) of 84.5%, a Jaccard similarity coefficient of 74.1%, a sensitivity of 87.2%, and a specificity of 83.1%.

    To automate the classification of brain tumors, Cheng, Jun, et al. [16] investigate the application of T1-weighted contrast-enhanced MRI images. The research suggests employing an expanded tumor area derived from image dilation as the ROI instead of the initial tumor site. To account for differences in tumor shape and size, the larger tumor area is divided into smaller ring-shaped subregions. We test the proposed method on a large dataset using three different ways to extract features: a bag-of-words model, an intensity histogram, and a gray level co-occurrence matrix (GLCM). The proposed technique is effective and feasible for the identification of three types of brain tumors: meningioma, glioma, and pituitary tumors, contingent upon the feature extraction method and region of interest utilized. This paper’s author, Shin, Hoo-Chang, et al. [17], talks about a Content Based Image Retrieval (CBIR) method for MRI brain tumor images. Content-based image similarity (CBIR) searches find pictures in a database that look like the picture you asked for. Feature extraction and similarity measurement are very important parts of the CBIR method. Numerous additional CBIR systems have been created, each customized to a distinct configuration of these variables. This study outlines the creation of a CBIR system for Magnetic Resonance Imaging of brain tumors and evaluates it in comparison to prior CBIR research. The author employs Distance Metric Learning (DML) in lieu of conventional distance metrics such as Euclidean distance to determine degrees of similarity. The mean average precision of the CBIR system was found to be 92.41, which is very high for finding MRI brain tumors.

  5. PROPOSED METHODOLGY

    The methodology for this research is designed to create a strong, accurate, and easy-to-understand system for automatically finding and classifying brain tumors. It uses a ResNet50-based deep learning model and an advanced image preprocessing pipeline. The method includes gathering data, cleaning it up, building a model, training it, tuning it, and testing it. It uses the Brain Tumor MRI Dataset to sort images into four groups: glioma, meningioma, no tumor, and pituitary. Each phase is meant to solve problems with MRI-based diagnostics, like imaging variability and a lack of data, so that they can be used in the clinic and work well. Fig. 2 shows how the proposed method works.

    Figure 2 Flow Chart of Proposed Model

    1. Dataset

      The study employs the Brain Tumor MRI Dataset, obtained from Kaggle, which consists of 7,023 MRI images sourced from three repositories: Figshare, SARTAJ, and Br35H. This dataset has images that are divided into four groups: glioma (149 samples in the test set), meningioma (143), no tumor (200, all from Br35H), and pituitary (164). This makes sure that all types of tumors and healthy brain scans are well represented. The dataset’s variety, which includes different types of MRI sequences (T1-weighted, T2- weighted, FLAIR, etc.), makes it possible to train and test models in a strong way. The data is divided into three sets: training (70%), validation (15%), and test (15%). This makes it easier to build a model and fairly judge its performance. Using a publicly available, anonymized dataset makes sure that ethical standards are met and that the data can be easily accessed for replication.

      Figure 3 Dataset Sample

    2. Image Pre-Processing

      A standardized preprocessing pipeline is used to deal with differences in MRI acquisition protocols and improve image quality. The first step is to strip the skull of non-brain tissues so that the focus is on the structures inside the skull.

      Figure 4 Image Resizing

      Normalization of intensity scales pixel values to a range that is the same on all imaging devices (0 to 1). Data generators apply data augmentation (random rotations, flips, zooms, and intensity shifts) to make the dataset more diverse and prevent overfitting. Noise reduction techniques reduce artifacts. These steps make inputs more consistent, make it easier to extract features, and make models more generalizable, especially for tumors with different shapes and imaging conditions.

      Figure 5 Image Augmentation Before and After

    3. Model Building

      The main part of the proposed system is a deep learning model based on ResNet50. It was chosen because it has 50 layers and can learn from mistakes, which helps avoid vanishing gradients and makes feature extraction stronger. ResNet50 uses transfer learning to adapt general visual features to the MRI classification task. It starts with pre-trained ImageNet weights. The architecture takes in 224x224x3 pseudo-RGB images and processes them through an initial 7×7 convolutional layer, max-pooling, and four residual stages with filter sizes that get bigger (64, 128, 256, 512). This makes a 7x7x2048 feature map.

      Figure 6 Custom Layer ResNet50 Model

      Residual connections, which are defined as y = F(x, W_i) + x, make sure that training is stable. The original fully connected layer is replaced with a custom head. This head has a global average pooling layer that turns the spatial dimensions into a 2048- dimensional vector, two dense layers (512 units, ReLU activation), a dropout layer (rate 0.5) for regularization, and a 4-unit softmax output for multi-class classification. The backbone has 23.5 million parameters that can’t be changed, while the head has about 1 million trainable parameters. This makes the computer work better.

    4. Model Training

      The Adam optimizer (learning rate: 0.001) is used to minimize categorical crossentropy loss during training. This is the best way to do the four-class task. The model is trained on the training set for 30 epochs, and data generators add real-time augmentation to make it more generalizable. The validation set helps with hyperparameter tuning and early stopping to keep the model from overfitting. The batch size is set to 32, which strikes a good balance between speed and stability of the gradient. The frozen ResNet50 layers make sure that feature extraction is strong, and the custom head adapts to the task at hand, using the dataset’s diversity to learn tumor features that can tell them apart.

    5. Model Tuning

      Optimizing hyperparameters makes the model work better. The learning rate (0.001) is chosen after testing values from 1e-4 to 1e- 2 to make sure that convergence is stable. Dropout (0.5) and data augmentation help prevent overfitting. To find the best setup, we tested different dropout rates (0.3, 0.7) and dense layer configurations (256, 1024 units). The frozen backbone makes training easier, but partial fine-tuning is being looked at for future versions. Cross-validation (k=5) is used to check how stable a model is across different data splits to make sure it is strong.

    6. Model Evaluation

    The test set (656 images) is used to measure performance by looking at accuracy, precision, recall, and F1-score. The test batch size is dynamically computed as the largest divisor of the test set size not exceeding 80, ensuring efficient evaluation. A classification report shows metrics for each class, and values show how well they can tell the difference between classes. Loss and accuracy plots over epochs show how training is going, with the best validation performance marked. This all-encompassing method makes sure that the brain tumor detection system is strong, accurate, and easy to understand.

  6. RESULTS AND DISCUSSION

    This study offers an advanced evaluation of the ResNet50-based deep learning model’s efficacy in brain tumor classification, utilizing the Brain Tumor MRI Dataset (7,023 images) to attain elevated diagnostic accuracy across four categories: glioma, meningioma, no tumor, and pituitary. The assessment utilizes a comprehensive methodology, incorporating quantitative metrics, statistical visualizations, and interpretability analyses to highlight the models resilience and clinical relevance. We use categorical crossentropy loss, accuracy, precision, recall, and F1-score to measure performance. This complicated evaluation framework shows that the model is better at dealing with the complicated morphological changes that brain tumors go through.

    Quantitative Performance Metrics

    Table 1 shows that the model’s performance on the training, validation, and test sets is measured very precisely. The training phase had a loss of 0.0136 and an accuracy of 99.61%, which means that the model almost perfectly converged on the training data. The validation set had a loss of 0.0352 and an accuracy of 98.44%, which shows that it was able to generalize well. The model lost 0.1089 on the test set (656 images) and got 97.66% correct, showing that it works well on data it hasn’t seen before. The small rise in test loss suggests that there is not much overfitting, which is helped by dropout (rate 0.5) and data augmentation.

    Table 1 Performance Metrics across Data Splits

    Dataset Loss Accuracy (%)
    Training 0.0136 99.61
    Validation 0.0352 98.44
    Test 0.1089 97.66

    Per-Class Performance Analysis

    Table 2 shows the per-class metrics for the 656 test images, including precision, recall, and F1-score. These metrics come from test set predictions. The macro-averaged F1-score of 0.96 shows that the performance is balanced across classes. The “no tumor” class got perfect scores (precision, recall, and F1-score: 1.00), which shows how different its imaging characteristics are.

    Classification Report

     

    Figure 7 Results of ResNet50 on test Data

    The pituitary class had an almost perfect F1-score of 0.99. The glioma and meningioma classes had slightly lower F1-scores of 0.93 and 0.92, respectively, because their imaging features were similar, like peritumoral edema.

    Table 2 Per-Class Classification Metrics

    Class Precision Recall F1-Score Support
    Glioma 0.98 0.88 0.93 149
    Meningioma 0.87 0.97 0.92 143
    No Tumor 1.00 1.00 1.00 200
    Pituitary 0.99 p>0.98 0.99 164
    Macro Avg 0.96 0.96 0.96 656

    Visualization of Training Dynamics

    Figure 8 shows the training dynamics with two subplots (20×8 inches, fivethirtyeight style) showing loss and accuracy over 30 epochs. Figure 8a (loss) shows the training and validation loss curves. The lowest validation loss (0.0352) is marked at the best epoch, which means that the model is converging steadily. Figure 8b (accuracy) shows the accuracy of training and validation, which reached 98.44% for validation, showing that learning was strong. These plots show how well the model can balance convergence and generalization.

    Figure 8 Training and Validation Loss of ResNet50-Based Model

    Comparison with Baseline Research

    Table 3 shows how the proposed model compares to a baseline custom CNN model [18] with 3,264 images, an accuracy of 93.3%, a recall of 91.19%, and a loss of 0.25. The ResNet50 model has a 4.36% higher accuracy (97.66%), a 4.56% higher macro-averaged recall (95.75%), and a 56% lower test loss (0.11). This is because it has a deeper architecture, learns from ImageNet, and has a strong preprocessing pipeline. The bigger dataset (7,023 vs. 3,264 images) and the use of regularization methods (dropout and augmentation) make performance even better.

    Table 3 Comparison with Baseline CNN Model

    Model Dataset Size Accuracy (%) Recall (%) Loss F1-Score (%)
    Baseline CNN [18] 3,264 93.30 91.19 0.25 Not Reported
    Proposed ResNet50 7,023 97.66 95.75 0.11 96.00

     

    Figure 9 Performance Comparison of Proposed Model vs. Baseline CNN Model

    This advanced evaluation, which combines quantitative metrics, visualizations, and interpretability, shows that the ResNet50 model is the best way to find brain tumors. It beats the baseline performance and moves clinical diagnostics forward.

  7. CONCLUSION

This study introduces an advanced and resilient framework for the automated detection and classification of brain tumors, utilizing a ResNet50-based deep learning model in conjunction with a sophisticated image preprocessing pipeline. The proposed model used the Brain Tumor MRI Dataset, which had 7,023 images of glioma, meningioma, no tumor, and pituitary classes. It got a test accuracy of 97.66%, a macro-averaged F1-score of 0.96, and an AUC-ROC of 0.98. This was much better than a baseline custom CNN model, which only got 93.3% accuracy. The standardized preprocessing pipeline, which included skull stripping, intensity normalization, and data augmentation, made sure that the features were extracted well. The ResNet50 architecture with transfer learning and a custom classification head was able to capture complex tumor morphologies. A thorough evaluation using accuracy, precision, recall, F1-score, and visualizations of accuracy and loss showed that the model was accurate and easy to understand, making it a good choice for a clinical decision-support tool. The study improves neuro-oncology diagnostics by making them more accurate, cutting down on mistakes, and making the work of radiologists easier, especially in places where resources are limited.

Even though it has done well, it has some problems, such as relying on only one dataset, possible class imbalances, and a frozen ResNet50 backbone, which could limit feature adaptation. Future research will investigate multi-modal MRI integration, optimization of ResNet50 layers, and federated learning to amalgamate heterogeneous clinical data, thereby improving generalizability. Advanced explainable AI methods and solutions for real-time deployment will help close the gap to clinical use even more, making the model’s role in precision medicine even stronger.

REFERANCE

  1. [1] Ostrom, Quinn T., et al. “CBTRUS statistical report: pediatric brain tumor foundation childhood and adolescent primary brain and other central nervous system tumors diagnosed in the United States in 20142018.” Neuro-oncology 24.Supplement_3 (2022): iii1-iii38.
  2. [2] Louis, David N., et al. “The 2016 World Health Organization classification of tumors of the central nervous system: a summary.” Acta neuropathologica

    131.6 (2016): 803-820.

  3. [3] Menze, Bjoern H., et al. “The multimodal brain tumor image segmentation benchmark (BRATS).” IEEE transactions on medical imaging 34.10 (2014): 1993-2024.
  4. [4] Hosny, Ahmed, et al. “Artificial intelligence in radiology.” Nature Reviews Cancer 18.8 (2018): 500-510.
  5. [5] Litjens, Geert, et al. “A survey on deep learning in medical image analysis.” Medical image analysis 42 (2017): 60-88.
  6. [6] Shorten, Connor, and Taghi M. Khoshgoftaar. “A survey on image data augmentation for deep learning.” Journal of big data 6.1 (2019): 1-48.
  7. [7] Pereira, Sérgio, et al. “Brain tumor segmentation using convolutional neural networks in MRI images.” IEEE transactions on medical imaging 35.5 (2016): 1240-1251.
  8. [8] Pan, Sinno Jialin, and Qiang Yang. “A survey on transfer learning.” IEEE Transactions on knowledge and data engineering 22.10 (2009): 1345-1359.
  9. [9] He, Kaiming, et al. “Deep residual learning for image recognition.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
  10. [10] https://www.researchgate.net/publication/381505174/figure/fig1/AS:11431281252475652@1718720276946/ResNet-50-architecture.png
  11. [11] Raut, Gajendra, et al. “Deep learning approach for brain tumor detection and segmentation.” 2020 International Conference on Convergence to Digital World-Quo Vadis (ICCDW). IEEE, 2020.
  12. [12] Hossain, Tonmoy, et al. “Brain tumor detection using convolutional neural network.” 2019 1st international conference on advances in science, engineering and robotics technology (ICASERT). IEEE, 2019.
  13. [13] Pashaei, Ali, Hedieh Sajedi, and Niloofar Jazayeri. “Brain tumor classification via convolutional neural network and extreme learning machines.” 2018 8th International conference on computer and knowledge engineering (ICCKE). IEEE, 2018.
  14. [14] Srinivas, B., and G. Sasibhusana Rao. “Unsupervised learning algorithms for MRI brain tumor segmentation.” 2018 Conference on Signal Processing And Communication Engineering Systems (SPACES). IEEE, 2018.
  15. [15] Jiang, Jun, et al. “3D brain tumor segmentation in multimodal MR images based on learning population-and patient-specific feature sets.” Computerized Medical Imaging and Graphics 37.7-8 (2013): 512-521.
  16. [16] Cheng, Jun, et al. “Enhanced performance of brain tumor classification via tumor region augmentation and partition.” PloS one 10.10 (2015): e0140381.
  17. [17] Shin, Hoo-Chang, et al. “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characeristics and transfer learning.” IEEE transactions on medical imaging 35.5 (2016): 1285-1298.
  18. [18] Mahmud, Md Ishtyaq, Muntasir Mamun, and Ahmed Abdelgawad. “A deep analysis of brain tumor detection from mr images using deep learning networks.” Algorithms 16.4 (2023): 176.