Optimizing Deep Learning for Bone Cancer Detection: Boosting Diagnostic Accuracy and Efficiency with CNNs

DOI : 10.17577/IJERTV14IS050260


Shreeraksha R Adiga

Presidency School of Computer Science and Engineering Presidency University

Bangalore-560064

Dr Zafar Ali Khan N

Presidency School of Computer Science and Engineering

Presidency University Bangalore-560064

Abstract:

Bone cancer detection and classification is a critical aspect of medical diagnostics. This project focuses on developing a convolutional neural network (CNN)-based system to classify bone cancer as benign or malignant using both X-ray and CT scan images. The solution incorporates advanced preprocessing techniques, including normalization, resizing, and data augmentation, to enhance model robustness across different imaging modalities. By combining machine learning classification with computer vision-based mass analysis, the system aims to improve diagnostic accuracy and reliability. The model is optimized for fast training and high precision, ensuring practical usability in clinical settings. A user-friendly interface built with Streamlit allows for efficient image upload, classification, and result logging, making the system accessible to healthcare professionals. This work aims to provide a reliable and accessible tool for the early detection of bone cancer through multimodal imaging, supporting medical professionals in making timely and informed decisions.

  1. INTRODUCTION

    Bone cancer, although relatively uncommon, poses serious diagnostic challenges because of its symptomatic similarity to benign bone tumors. Timely and correct detection is paramount to improving patient outcomes and tailoring effective treatment modalities. Current diagnostic techniques largely depend on manual interpretation of imaging information, which can be tedious, subjective, and susceptible to variability. To bridge these challenges, this work proposes a computer-aided diagnostic (CAD) system supported by deep learning that classifies bone cancer as benign or malignant based on X-ray as well as CT scan images. The system is built on a convolutional neural network (CNN) backbone trained on a multimodal database, supported by rich preprocessing stages such as normalization, resizing, contrast enhancement, and data augmentation. These measures enhance the generalization capability of the model and minimize the risk of overfitting across various image formats.

    Aside from classification, the system also incorporates computer vision methods to examine mass concentration and structural patterns of the images and provide additional diagnostic information. One of the strengths of the solution is its intuitive and interactive interface that has been created with Streamlit. This interface accommodates smooth image upload, live classification, customizable sensitivity settings to allow users to set their own detection thresholds, and automatic result logging for clinical audit and reporting. Additionally, the system offers visual aids like heatmaps or attention maps to aid in interpreting the model's areas of focus, bringing transparency to the decision-making process. Intended to be accessible, scalable, and clinically useful, this CAD tool is designed to improve diagnostic accuracy, ease the workload for radiologists, and enable timely intervention, ultimately assisting healthcare professionals in providing more accurate and timely care for patients with suspected bone tumors.

  2. RELATED WORKS

    The paper titled "Application of the convolutional neural networks and supervised deep-learning methods for osteosarcoma bone cancer detection" [1] is authored by Sushopti Gawade et al. The study employs various supervised deep-learning methods, focusing particularly on convolutional neural networks (CNNs), to automate the detection of osteosarcoma, a type of bone cancer. The authors experimented with several models, including VGG16, VGG19, DenseNet201, and ResNet101, and found ResNet101 to be the most effective, achieving an accuracy of 90.36% and a precision of 89.51%. The methodology involved using bone X-ray images and biopsies for model training and validation, with performance metrics such as accuracy, F1-score, precision, recall, and AUC being utilized to evaluate the models. Future work suggested by the authors includes enhancing the model's robustness and accuracy by incorporating larger datasets and exploring other deep learning architectures and preprocessing techniques to further improve early-stage cancer detection and reduce diagnostic time.

    The paper titled "A decision support system for selecting the most suitable machine learning in healthcare using user parameters and requirements" [2] is authored by Yashodhan Ketkar and Sushopti Gawade. The study proposes an automated machine learning system designed to assist non-technical users in selecting the most suitable machine learning models for healthcare applications. The system uses user-defined parameters and performance evaluation metrics to determine the optimal model. Specifically, the study employs Support Vector Machine (SVM) and Random Forest (RF) algorithms on a Parkinson's disease dataset, achieving accuracies of 80% and 75% respectively. The decision support system takes user preferences into account to assign weightages to different performance parameters, thereby customizing the model selection process. Future work includes expanding the system to incorporate a broader range of machine learning algorithms and datasets, as well as enhancing its usability and robustness for various healthcare applications.

    The research paper titled "Bone cancer detection using machine learning techniques"[3] was authored by Deepshikha Shrivastava et al. The study focused on utilizing machine learning techniques for the detection and classification of bone cancer, specifically using methods such as Decision Tree Algorithm, Support Vector Machine, Random Forest, Evolutionary Algorithms, and Swarm Intelligence. The paper highlighted the challenges faced in bone cancer detection using machine learning and discussed future prospects, emphasizing the need for improved accuracy, noise reduction, and advancements in neural networks for calculating the size, location, and stage of bone cancer. Further research directions were suggested to enhance the predictive power and accuracy of machine learning methodologies in medical image analysis for bone cancer detection.

    In the research article titled "Differentiation of Bone Metastasis in Elderly Patients With Lung Adenocarcinoma Using Multiple Machine Learning Algorithms" [4] by Cheng-Mao Zhou et al., the authors investigated the performance of various machine learning algorithms in classifying bone metastasis in elderly patients with lung adenocarcinoma. The study utilized nine machine learning algorithms, including logistic regression, random forest, Gradient Boosting Decision Tree, XGBoost, LightGBM, RF + LR, LGBM + LR, GBDT + LR, and XGB + LR. The authors divided 27,627 patients into training and testing groups, evaluating the models' accuracy, precision, recall rate, and AUC values. The study concluded that machine learning algorithms can effectively distinguish bone metastasis in lung cancer patients, suggesting a new research direction for non-invasive identification of bone metastasis. The authors highlighted the need for further prospective multicenter cohort studies to improve the performance of these algorithms in clinical practice.

    The research article titled "Bone Cancer Detection Using Feature Extraction Based Machine Learning Model" [5] is authored by Ashish Sharma, Dhirendra P. Yadav, Hitendra Garg, Mukesh Kumar, Bhisham Sharma, and Deepika Koundal. The study presents a machine learning model for the detection of bone cancer using feature extraction techniques. The authors utilized a combination of texture and shape features, including GLCM-based features and HOG features, to distinguish between healthy and cancerous bone images. The SVM model was trained and evaluated using different sets of features, showing promising results in terms of accuracy, precision, recall, and F1-score. The future work outlined in the paper includes the exploration of feature optimization techniques such as monarch butterfly optimization, the earthworm optimization algorithm, and others to further enhance the model's performance.

    Wenle Li et al. developed a machine learning-based predictive model for lymph node metastasis in Ewing's sarcoma [10]. Utilizing six machine learning algorithms, including random forest, naive Bayes classifier, decision tree, XGBoost, gradient boosting machine, and logistic regression, the study found that the random forest model performed the best with an average area under the curve of 0.764. The model was validated internally and externally, demonstrating its effectiveness in predicting lymph node metastasis in Ewing's sarcoma patients. Future work includes multi-center, prospective, and multi-ethnic validation to further test the model's efficacy and exploring correlation mechanisms between lymph node metastasis and lung metastasis in Ewing's sarcoma patients.

    The research paper titled "Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review" was authored by Wilson Ong et al. [7]. The study utilized radiomic techniques, Artificial Intelligence (AI), and deep learning to distinguish bony lesions across various imaging modalities. The systematic review highlighted the high sensitivity, specificity, and accuracy of machine learning techniques in differentiating between benign and malignant lesions, offering potential benefits in the management of bone tumors. The paper also suggested future research directions, emphasizing the need for larger sample sizes, prospective studies, and the exploration of multimodal machine learning techniques to enhance diagnostic accuracy in bone tumor characterization.

  3. SYSTEM DESIGN

    The system for bone cancer detection and classification is structured with a modular architecture to ensure scalability, robustness, and adaptability [18]. Each module is designed to address specific functionalities, making the system easy to maintain, upgrade, and integrate with future advancements. The key components and enhanced functionalities are outlined below:

    1. Data Preprocessing Module

      The data preprocessing module is critical for ensuring the quality, consistency, and suitability of input data for model training and prediction.

      1. Input:

        The input to this module includes raw X-ray and CT scan images in various formats such as JPG, PNG, or DICOM.

      2. Processes:

        Images are resized to a standard resolution of 224×224 pixels to match the CNN input layer requirements [18]. Normalization scales pixel values to the [0, 1] range, promoting efficient learning. Advanced data augmentation techniques, such as random rotations, zooming, horizontal flips, brightness adjustments, and Gaussian noise, are applied to increase dataset variability and improve generalization [16]. Basic artifact removal and histogram equalization may also be used to enhance contrast and clarity, especially for CT images.

      3. Output:

        The output consists of normalized, resized, and augmented image tensors that are optimized for downstream model training and classification.
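The resize/normalize/augment chain above can be sketched compactly. The snippet below is a minimal illustration in plain NumPy (a production pipeline would normally use OpenCV or Pillow for resizing and a framework generator for augmentation); the function names and the 0.02 noise level are assumptions, not taken from the actual system.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to size x size, then scale pixels to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size      # source row for each output row
    cols = np.arange(size) * w // size      # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random horizontal flip plus mild Gaussian noise, clipped back to [0, 1]."""
    if rng.random() < 0.5:
        image = image[:, ::-1]
    return np.clip(image + rng.normal(0.0, 0.02, size=image.shape), 0.0, 1.0)

raw = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in for an X-ray
x = augment(preprocess(raw), np.random.default_rng(0))
print(x.shape)  # (224, 224)
```

The output tensor is already in the [0, 1] range the training module expects, so no further scaling is needed downstream.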

    2. Model Training Module

      This module develops and trains the deep learning model for binary classification.

      1. Input:

        Preprocessed image tensors and their corresponding labels (benign or malignant) are provided to this module.

      2. Architecture:

        The CNN architecture includes several convolutional layers to extract spatial features such as edges, textures, and shapes, followed by max-pooling layers to downsample feature maps [13]. Dropout layers prevent overfitting by randomly deactivating neurons during training, while batch normalization accelerates convergence and ensures stability. The fully connected layers conclude with a sigmoid activation function to perform binary classification.

      3. Optimization:

        The model is optimized using the Adam optimizer for adaptive learning rates. Early stopping is implemented to monitor validation loss and halt training when overfitting is detected. A learning rate scheduler adjusts the learning rate during plateaus to improve performance. Model checkpoints save the best-performing model based on validation accuracy.

      4. Output:

        The result is a high-accuracy CNN model capable of classifying both X-ray and CT scan images as benign or malignant [7].
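The early-stopping behaviour described in the optimization step is framework-agnostic. The sketch below shows the patience logic in plain Python, analogous to Keras's EarlyStopping callback; the class name and parameters are illustrative, not the system's actual code.

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss: float) -> bool:
        """Return True when training should halt."""
        if val_loss < self.best - self.min_delta:
            self.best, self.wait = val_loss, 0   # improved: checkpoint here
            return False
        self.wait += 1
        return self.wait >= self.patience

stopper = EarlyStopping(patience=2)
losses = [0.9, 0.7, 0.71, 0.72, 0.73]
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
print(stopped_at)  # 3: two epochs without improvement after the best loss at epoch 1
```

In practice the "improved" branch is where the model checkpoint is saved, so the best-performing weights survive even after the halt.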

    3. Classification Module

      This module enables real-time image classification using the trained model.

      1. Input:

        The module receives uploaded medical images (X-ray or CT) from the user interface.

      2. Processes:

        Images are passed through the preprocessing pipeline before being fed into the trained CNN model. The model outputs the probability of malignancy [8]. In parallel, computer vision techniques such as contour detection, edge detection, thresholding, and morphological analysis are applied to extract suspicious regions and compute the mass ratio, density distribution, and asymmetry index, offering deeper diagnostic insight.

      3. Output:

        The module displays the classification result (benign or malignant) with a confidence score. Additionally, it provides mass concentration metrics and annotated visual outputs (e.g., bounding boxes or heatmaps) on the original image to assist in clinical interpretation [12].
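As a rough illustration of the mass-analysis step, the snippet below derives a mass ratio, a left/right asymmetry index, and a mean density from a thresholded image using NumPy alone. The actual module uses contour detection and morphological analysis (typically via OpenCV); the metric definitions here are simplified assumptions.

```python
import numpy as np

def mass_metrics(image: np.ndarray, threshold: float = 0.5) -> dict:
    """Threshold a [0, 1] image and compute simple mass/asymmetry metrics."""
    mask = image > threshold
    mass_ratio = mask.mean()                      # fraction of pixels flagged
    left, right = np.array_split(mask, 2, axis=1)
    asymmetry = abs(left.mean() - right.mean())   # left/right imbalance, 0..1
    density = image[mask].mean() if mask.any() else 0.0
    return {"mass_ratio": mass_ratio, "asymmetry": asymmetry, "density": density}

img = np.zeros((224, 224))
img[:, :112] = 0.9                                # bright mass on the left half
m = mass_metrics(img)
print(round(m["mass_ratio"], 2), round(m["asymmetry"], 2))  # 0.5 1.0
```

Metrics like these can be overlaid on the annotated output image alongside the CNN's confidence score.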

    4. User Interface Module

      This module offers a clean, secure, and interactive frontend for users.

      1. Framework:

        Developed using Streamlit, the interface supports rapid deployment and responsive design for desktop and tablet devices.

      2. Features:

        • Secure login system for authorized access

        • Simple image upload functionality

        • Sensitivity slider to adjust model decision thresholds

        • Visualization of classification results, including original and processed images

        • Option to select either X-ray or CT scan mode for preprocessing and classification

        • Multi-language support for broader accessibility

      3. Customization:

        Users can fine-tune sensitivity settings and toggle additional visual outputs such as region overlays and attention maps. This allows tailored diagnostic workflows depending on the expertise level and requirements [2].

        Fig. 1. User authentication interface of the bone cancer detection system providing secure access to healthcare professionals.

        Fig. 2. Image upload interface allowing healthcare professionals to submit bone X-ray images for cancer classification and analysis.
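The sensitivity slider described above ultimately reduces to a threshold on the model's malignancy probability. A minimal sketch of that decision rule (the function name and default value are illustrative):

```python
def classify(probability: float, sensitivity: float = 0.5) -> str:
    """Map the model's malignancy probability to a label. Lowering the
    threshold flags more cases as malignant (higher recall, more false
    positives); raising it does the opposite."""
    return "malignant" if probability >= sensitivity else "benign"

print(classify(0.62))       # malignant at the default 0.5 threshold
print(classify(0.62, 0.8))  # benign under a stricter threshold
```

In a Streamlit frontend the `sensitivity` argument would be bound to a slider widget, so clinicians can trade recall against false-positive rate per case.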

    5. Result Logging and Evaluation Module

      This module is responsible for maintaining a history of all diagnostic outcomes and ensuring traceability.

      1. Logging:

        Every classification is recorded with metadata including timestamp, image name, classification label, confidence score, user ID, and selected sensitivity setting. Results are stored in structured CSV or database format for later analysis and integration with hospital information systems.

      2. Evaluation:

        The module supports post-classification analysis, allowing users to filter and review past cases based on outcome, confidence range, or imaging type. A basic analytics dashboard can summarize statistics like average confidence scores, malignancy distribution, and model performance trends over time [3].

      3. Error Handling:

        In case of failures such as file permission errors or image format issues, clear and actionable error messages are shown to guide the user through resolving the problem.
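A logging routine of this kind can be built on Python's standard csv module. The sketch below writes one record per classification into an in-memory buffer for demonstration; the field names and order are assumptions, and a deployment would append to a file inside a try/except block so that permission errors surface as the actionable messages described above.

```python
import csv
import datetime
import io

def log_result(writer, image_name, label, confidence, user_id, sensitivity):
    """Append one classification record with a timestamp for traceability."""
    writer.writerow([
        datetime.datetime.now().isoformat(timespec="seconds"),
        image_name, label, f"{confidence:.3f}", user_id, sensitivity,
    ])

buffer = io.StringIO()  # stands in for an open CSV file
writer = csv.writer(buffer)
writer.writerow(["timestamp", "image", "label", "confidence", "user", "sensitivity"])
log_result(writer, "scan_001.png", "malignant", 0.872, "dr_rao", 0.5)
print(buffer.getvalue().splitlines()[1].split(",")[2])  # malignant
```

Storing structured rows like this is what makes the later filtering and dashboard aggregation possible.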

        Fig. 3. Classification module interface displaying benign/malignant prediction results with confidence scores and highlighted tumor regions for diagnostic support.

  4. SYSTEM IMPLEMENTATION

    The implementation of the bone cancer detection and classification system is structured into interconnected modules, each fulfilling a specific function [14].

    1. Data Preprocessing Module

      This module prepares input images by resizing them to 224×224 pixels, normalizing pixel values to [0, 1], and applying augmentation techniques like rotation, zoom, flipping, brightness adjustment, and noise [16][18]. To handle large datasets efficiently, images and labels are processed in memory-friendly batches, as outlined by Loraksa et al. [16].
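Memory-friendly batching, as described above, can be expressed as a simple generator that slices the dataset instead of loading everything at once; this is an illustrative sketch, not the system's actual loader.

```python
import numpy as np

def batches(images: np.ndarray, labels: np.ndarray, batch_size: int = 32):
    """Yield (images, labels) slices so only one batch is active at a time."""
    for start in range(0, len(images), batch_size):
        yield images[start:start + batch_size], labels[start:start + batch_size]

x = np.zeros((100, 224, 224), dtype=np.float32)  # 100 preprocessed images
y = np.zeros(100, dtype=np.int64)                # benign/malignant labels
sizes = [len(b) for b, _ in batches(x, y)]
print(sizes)  # [32, 32, 32, 4]
```

The final short batch is kept rather than dropped, so every sample is seen each epoch.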

    2. Model Training Module

      A CNN model is trained for binary classification using approaches validated by Gawade et al. [1] and Vezakis et al. [17].

      Architecture: Convolutional, max-pooling, dropout, and batch normalization layers extract features and reduce overfitting [8].

      Optimization: Adam optimizer is used with early stopping and learning rate reduction callbacks for efficient training [4].

      Pipeline: The dataset is split into training, validation, and test sets, with augmented data to improve generalization [12][9].
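The train/validation/test split in the pipeline can be sketched as a shuffled index partition; the fractions and seed below are illustrative choices, not the ones used in the study.

```python
import numpy as np

def split_dataset(n: int, val_frac: float = 0.15, test_frac: float = 0.15, seed: int = 42):
    """Shuffle sample indices and carve out train/validation/test partitions."""
    idx = np.random.default_rng(seed).permutation(n)
    n_test, n_val = int(n * test_frac), int(n * val_frac)
    return idx[n_test + n_val:], idx[n_test:n_test + n_val], idx[:n_test]

train_idx, val_idx, test_idx = split_dataset(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```

Fixing the seed keeps the split reproducible across training runs, which matters when comparing checkpoints.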

    3. Classification Module

      The module classifies uploaded images as benign or malignant using the trained CNN model [5]. It outputs a confidence score and applies a mass analysis algorithm (contour detection, thresholding) to calculate metrics like mass ratio, density, and symmetry index [11]. These results aid in clinical interpretation [2][7].

      Fig. 4. Real-time analysis results showing classification outcome, probability score, and mass ratio indicators for comprehensive bone cancer assessment.

    4. User Interface Module

      The user interface is developed using Streamlit, offering a simple and interactive platform for end-users, following established usability design principles.

      • Features: It includes secure login functionality, an image upload feature, and an adjustable sensitivity slider for customizing the mass analysis threshold, considerations that are also emphasized by Nasir et al. [15].

      • Visualization: Uploaded images are displayed alongside highlighted tumor regions and classification results, ensuring comprehensive and user-friendly visualization.

      • Usability: The interface caters to healthcare professionals, ensuring they can navigate the system without requiring extensive technical expertise, addressing usability concerns.

    5. Result Logging Module

      The system includes a result logging module to systematically track analysis outcomes [10]. Each outcome is logged with details such as the timestamp, image name, classification result, and confidence score. The results are stored in a CSV file for future reference, with robust error-handling mechanisms to address logging issues effectively. [10][19].

      Fig. 5. Results logging mechanism capturing classification data in structured CSV format for historical analysis and clinical documentation.

  5. SYSTEM EVALUATION

    The system's performance is evaluated to ensure reliability and robustness across multiple dimensions.

    1. Model Metrics

      Training and validation metrics, including accuracy and loss, are monitored during development to ensure optimal performance. These metrics track the model's ability to classify data correctly and highlight potential overfitting.

    2. Test Set Evaluation

      The model is tested on a separate dataset comprising unseen images, offering an objective measure of its generalization ability. The test set evaluation provides insights into the system's real-world applicability.

    3. Robustness Testing

      The model's robustness is tested by introducing variations such as noise or distortions in the input images. This evaluation ensures the system remains reliable under different imaging conditions. The sensitivity threshold in the mass analysis module is adjusted and evaluated for consistency across cases.
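Robustness testing of this kind can be simulated by perturbing a clean input with increasing Gaussian noise and checking how far the perturbed image drifts from the original before it is fed to the classifier; the noise levels below are illustrative.

```python
import numpy as np

def add_noise(image: np.ndarray, sigma: float, rng: np.random.Generator) -> np.ndarray:
    """Perturb a [0, 1] image with Gaussian noise and clip back into range."""
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

rng = np.random.default_rng(0)
clean = np.full((224, 224), 0.5, dtype=np.float32)
for sigma in (0.01, 0.05, 0.1):
    noisy = add_noise(clean, sigma, rng)
    # a robust classifier's output should drift only slightly as sigma grows
    print(sigma, round(float(np.abs(noisy - clean).mean()), 3))
```

The same loop can wrap the full preprocess-and-predict pipeline to chart accuracy as a function of noise level.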

  6. DISCUSSION

    This project demonstrates the successful integration of machine learning and computer vision for medical diagnostics. By combining CNN-based classification with mass analysis, the system provides reliable and multi-faceted insights into bone cancer detection. Data augmentation and regularization significantly enhance model robustness, reducing the risk of overfitting and ensuring consistent performance across diverse datasets.

    One of the system's key strengths is its modular design, which allows for seamless updates and scalability. New datasets or classification tasks can be integrated effortlessly. Furthermore, the Streamlit-based user interface makes the system accessible to healthcare professionals, irrespective of their technical expertise.

    However, certain limitations exist. The model's performance is sensitive to input image quality, with low-resolution or noisy images potentially affecting classification accuracy. Additionally, the black-box nature of deep learning models raises concerns about interpretability, which may hinder clinical adoption. Addressing these challenges will be critical for future improvements.

  7. COMPARATIVE ANALYSIS

    1. My Model (CNN-based with preprocessing & mass analysis)
       Methodology: VGG16/DenseNet121 with dropout, L2 regularization, augmentation, and Streamlit-based UI
       Best performance: High precision & recall, robust preprocessing, real-time classification
       Strengths: Enhanced robustness with augmentation, user-friendly interface, real-time mass analysis
       Limitations: Sensitive to image quality, black-box nature of deep learning

    2. Decision Support System (Yashodhan Ketkar et al.)
       Methodology: SVM, RF with user-defined parameters
       Best performance: 80% (SVM)
       Strengths: Model selection flexibility
       Limitations: Not specialized for bone cancer

    3. Machine Learning (Deepshikha Shrivastava et al.)
       Methodology: SVM, RF, Decision Trees
       Best performance: Varies based on dataset
       Strengths: Simple and interpretable models
       Limitations: Lower accuracy compared to CNNs

    4. ML-based Bone Metastasis Differentiation (Cheng-Mao Zhou et al.)
       Methodology: Logistic regression, XGBoost, RF
       Best performance: High AUC
       Strengths: Large dataset, clinical relevance
       Limitations: Lacks image-based deep learning

    5. Feature Extraction (Ashish Sharma et al.)
       Methodology: GLCM, HOG, SVM
       Best performance: High accuracy on extracted features
       Strengths: Effective feature-based method
       Limitations: Feature selection needs optimization

    6. Deep Learning & Image Segmentation (S. Ponalatha et al.)
       Methodology: CNN with segmentation
       Best performance: Improves tumor boundary detection
       Strengths: Enhanced interpretability
       Limitations: Segmentation requires additional processing

  8. FUTURE WORKS

    Future development of the system will target improving both diagnostic accuracy and clinical usability. One of the main directions involves the incorporation of multi-modal imaging, especially the addition of CT scans to X-rays, to give richer and more detailed diagnostic information. To address the model's sensitivity to input quality, sophisticated preprocessing algorithms such as denoising filters and super-resolution models can be utilized to reduce the impact of noisy or low-resolution images. To enhance trust and transparency, explainable AI techniques such as Grad-CAM or SHAP will be implemented, enabling clinicians to see and comprehend the model's decision-making process. Additionally, thorough validation on various real-world clinical datasets needs to be conducted to prove the generalizability of the system across populations. Seamless integration with hospital systems and the establishment of a feedback loop, in which the model learns from clinician feedback and new data over time, will also be investigated. Moreover, extending the system to accommodate multi-class classification of different bone tumors and constructing a lightweight cross-platform GUI for mobile platforms will greatly improve accessibility and usefulness, particularly in low-resource health environments. These developments seek to transform the existing prototype into a scalable and clinically applicable diagnostic aid.

  9. CONCLUSION

The bone cancer detection and classification system effectively applies artificial intelligence to healthcare, offering a practical and reliable tool for aiding medical professionals. Its modular design, robust architecture, and user-friendly interface make it suitable for real-world applications. Result logging and visualization enhance its usability and clinical relevance.

Future work will focus on expanding the dataset to include more diverse cases and integrating explainable AI techniques to improve interpretability. Deployment in clinical settings and feedback from healthcare professionals will help refine the system further. This project lays the foundation for integrating AI-driven solutions into routine medical diagnostics, contributing to enhanced efficiency and accuracy in healthcare.

REFERENCES

  1. Sushopti Gawade, Ashok Bhansali, Kshitij Patil, Danish Shaik, "Application of the convolutional neural networks and supervised deep-learning methods for osteosarcoma bone cancer detection," Vol. 3, November 2023

  2. Yashodhan Ketkar, Sushopti Gawade, "A decision support system for selecting the most suitable machine learning in healthcare using user parameters and requirements," Vol. 2, November 2022

  3. Deepshikha Shrivastava, Sugata Sanyal, Arnab Kumar Maji, Debdatta Kandar, "Bone cancer detection using machine learning techniques," 2020, pp. 175-183

  4. Cheng-Mao Zhou, Ying Wang, Qiong Xue, Yu Zhu, "Differentiation of Bone Metastasis in Elderly Patients With Lung Adenocarcinoma Using Multiple Machine Learning Algorithms," Vol. 30, March 2023

  5. Ashish Sharma, Dhirendra P. Yadav, Hitendra Garg, Mukesh Kumar, Bhisham Sharma, Deepika Koundal, "Bone Cancer Detection Using Feature Extraction Based Machine Learning Model," Volume 2021, Article ID 7433186

  6. Kanimozhi Sampath, Sivakumar Rajagopal, Ananthakrishna Chintanpalli, "A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images," Article number 2144, January 2024

  7. Wilson Ong, Lei Zhu, Yi Liang Tan, Ee Chin Teo, Jiong Hao Tan, Naresh Kumar, Balamurugan A. Vellayappan, Beng Chin Ooi, Swee Tian Quek, Andrew Makmur, James Thomas Patrick Decourcy Hallinan, "Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review," Vol. 15, Issue 6, March 2023

  8. Shunjiro Noguchi, Mizuho Nishio, Ryo Sakamoto, Masahiro Yakami, Koji Fujimoto, Yutaka Emoto, Takeshi Kubo, Yoshio Iizuka, Keita Nakagomi, Kazuhiro Miyasa, Kiyohide Satoh, Yuji Nakamoto, "Deep learning-based algorithm improved radiologists' performance in bone metastases detection on CT," Vol. 32, pp. 7976-7987, April 2022

  9. Francesco Priolo, Alfonso Cerase, "The current role of radiography in the assessment of skeletal tumors and tumor-like lesions," Vol. 27, 1998

  10. Wenle Li, Qian Zhou, Wencai Liu, Chan Xu, Zhi-Ri Tang, Shengtao Dong, Haosheng Wang, Wanying Li, Kai Zhang, Rong Li, Wenshi Zhang, Zhaohui Hu, Su Shibin, Qiang Liu, Sirui Kuang, Chengliang Yin, "A Machine Learning-Based Predictive Model for Predicting Lymph Node Metastasis in Patients With Ewing's Sarcoma," Vol. 9, April 2022

  11. Abhishek Shrivastava, Mukesh Kumar Nag, "Enhancing Bone Cancer Diagnosis Through Image Extraction and Machine Learning: A State-of-the-Art Approach," Vol. 31, Issue 1, December 2023

  12. S. Ponalatha, P. Aravindhan, L. Boovesh, "Deep Learning Based Classification of Bone Tumors using Image Segmentation," Vol. 91, No. 3, ISSN 0369-8963, 2022

  13. Soterios Gyftopoulos, Dana Lin, Florian Knoll, Ankur M. Doshi, Tatiane Cantarelli Rodrigues, Michael P. Recht, "Artificial Intelligence in Musculoskeletal Imaging: Current Status and Future Directions," Vol. 213, Issue 3, June 2019

  14. Dhirendra Prasad Yadav, Sandeep Rathor, "Bone Fracture Detection and Classification using Deep Learning Approach," ISBN 978-1-7281-6575-2, 2020

  15. Muhammad Umar Nasir, Safiullah Khan, Shahid Mehmood, Muhammad Adnan Khan, Atta-ur Rahman, Seong Oun Hwang, "IoMT-Based Osteosarcoma Cancer Detection in Histopathology Images Using Transfer Learning Empowered with Blockchain, Fog Computing, and Edge Computing," 22(14), July 2022

  16. Chanunya Loraksa, Sirima Mongkolsomlit, Nitikarn Nimsuk, Meenut Uscharapong, Piya Kiatisevi, "Effectiveness of Learning Systems from Common Image File Types to Detect Osteosarcoma Based on Convolutional Neural Networks (CNNs) Models," 8(1), December 2021

  17. Ioannis A. Vezakis, George I. Lambrou, George K. Matsopoulos, "Deep Learning Approaches to Osteosarcoma Diagnosis and Classification: A Comparative Methodological Approach," Vol. 15, Issue 8, April 2023

  18. Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Vol. 60, No. 6, June 2017

  19. Daisuke Komura, Shumpei Ishikawa, "Machine Learning Methods for Histopathological Image Analysis," Computational and Structural Biotechnology Journal 16 (2018) 34-42