DOI : https://doi.org/10.5281/zenodo.19692551
- Open Access
- Authors : Prajwal Tiwari, Pervez Ahmed, Iffat Raza, Dr. Sangeeta Mishra
- Paper ID : IJERTV15IS040693
- Volume & Issue : Volume 15, Issue 04, April 2026
- Published (First Online): 22-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
SEHAT (Smart E-Healthcare Assistant & Tracker)
Prajwal Tiwari, Pervez Ahmed, Iffat Raza
Department of Computer Science & Engineering, BBDITM, Lucknow, Uttar Pradesh, India
Dr. Sangeeta Mishra
Associate Professor, Department of Computer Science & Engineering, BBDITM, Lucknow, Uttar Pradesh, India
Abstract: The rapid advancement of Artificial Intelligence (AI) has created new opportunities for enhancing early disease detection, healthcare accessibility, and personalized medical assistance. However, existing e-health applications typically operate as isolated systems, offering limited diagnostic scope, minimal contextual understanding, or single-modality analysis. To address these limitations, SEHAT (Smart E-Healthcare Assistant & Tracker) is proposed as a multi-modal AI platform that integrates symptom-based prediction, radiological image analysis, and conversational medical support into a unified digital ecosystem. The system employs an ensemble of machine learning algorithms for text-based symptom classification and a convolutional neural network (CNN) for chest X-ray abnormality detection, achieving significant accuracy in identifying conditions such as pneumonia and tuberculosis. Additionally, a generative-AI-powered conversational module enables context-aware interactions for patient guidance and health literacy. Implemented using Python, Flask, and Streamlit, SEHAT addresses key challenges of fragmented healthcare systems by offering an accessible, explainable, and user-friendly decision-support tool. Designed strictly as a preliminary screening assistant, not a substitute for clinical diagnosis, SEHAT demonstrates strong potential to enhance early detection, reduce diagnostic delays, and support underserved populations through intelligent digital healthcare.
Keywords: Artificial Intelligence, Healthcare System, Machine Learning, E-Healthcare
INTRODUCTION
Healthcare systems worldwide face persistent challenges related to accessibility, diagnostic speed, and clinical workload, particularly in regions where medical specialists and diagnostic infrastructure are limited. Traditional diagnosis often requires expert evaluation, structured patient history, and radiological interpretation, creating delays that adversely impact early treatment outcomes. With the emergence of artificial intelligence (AI), opportunities have arisen to automate preliminary assessments, improve decision support, and extend healthcare services beyond traditional clinical settings.
Recent advancements in machine learning and deep learning have enabled accurate disease prediction from structured clinical inputs and visual medical data. Ensemble methods such as Random Forest, Naïve Bayes, and XGBoost have demonstrated robust performance in symptom-based classification tasks, while convolutional neural networks (CNNs) have achieved radiologist-level accuracy in analyzing chest X-ray images. Despite these advancements, most existing systems remain unimodal, offering only a single diagnostic function: symptom checking, medical image analysis, or chatbot-based interaction. This fragmentation reduces their practical utility in real-world healthcare scenarios.
To address these challenges, SEHAT (Smart E-Healthcare Assistant & Tracker) is proposed as a unified, AI-driven preliminary diagnostic system that integrates symptom analysis, radiological image interpretation, and conversational health guidance into a single accessible platform. Its multimodal diagnostic framework comprises three key components:
- Symptom-Based Machine Learning Module for preliminary disease prediction using structured text input.
- X-ray Image Analysis Module utilizing CNNs for detecting abnormalities such as pneumonia and tuberculosis.
- AI-Driven Conversational Assistant powered by generative AI models for personalized, context-aware medical guidance.
This integrated architecture ensures that patients receive a more holistic preliminary assessment, closely resembling the multi-step reasoning process used by clinicians.
Furthermore, SEHAT emphasizes accessibility and explainability, two essential elements for building trust in AI-based healthcare systems. The platform provides interpretable outputs, user-friendly interaction, and rapid screening capabilities, making it suitable for both urban and underserved rural environments. Developed using Python, Flask, and Streamlit, SEHAT demonstrates how modern AI technologies can be deployed as scalable, lightweight, and low-cost decision-support systems.
The objective of this research is to present the conceptual design, system architecture, and implementation methodology of SEHAT, while highlighting its potential impact on early detection, digital healthcare accessibility, and patient empowerment.
Problem with Existing Healthcare Applications
Several limitations in current e-health systems prevent them from becoming reliable diagnostic partners:
- Single-Modality Design: Many tools analyze only text-based symptoms or only medical images, leading to incomplete assessments.
- Lack of Explainability: Predictions are often delivered without interpretable reasoning, reducing user trust.
- Poor Accessibility: Rural and underserved populations lack access to specialist diagnosis, making scalable AI essential.
- Fragmented User Experience: Users must rely on multiple apps or websites for different diagnostic needs (symptoms, X-rays, medical advice).
These limitations highlight the need for a more unified and intelligent healthcare assistant.
SEHAT: A Unified Multi-Modal AI Healthcare Assistant
To address these challenges, SEHAT (Smart E-Healthcare Assistant & Tracker) is introduced as an integrated AI-based system designed to replicate the essential steps of early clinical evaluation. SEHAT merges three powerful AI components within a single platform:
- an ensemble machine learning model for symptom-based disease prediction,
- a convolutional neural network (CNN) for chest X-ray abnormality detection, and
- a generative AI conversational assistant capable of providing context-aware medical guidance.
By combining text analysis, image interpretation, and natural language interaction, SEHAT offers a holistic diagnostic experience that mirrors how medical professionals analyze multiple data points before forming conclusions. The platform is built using Python, Flask, and Streamlit to ensure accessibility, responsiveness, and compatibility with low-resource environments.
SEHAT's design emphasizes scalability, interpretability, and user comfort, making it suitable for individuals with minimal technical expertise. Its goal is not to replace professional medical evaluation but to serve as a pre-diagnostic support tool that accelerates early detection, reduces unnecessary delays, and empowers users with actionable insights before they reach a healthcare facility.
In essence, SEHAT bridges a critical gap in digital healthcare by integrating multimodal diagnostics, interactive communication, and explainable AI into one unified framework. This positions SEHAT as a promising step toward scalable, intelligent, and accessible preliminary healthcare support.
THEORETICAL FRAMEWORK
System Overview and Architecture
The SEHAT system is designed as a multi-modal AI-based healthcare assistant that integrates symptom analysis, radiographic imaging, and conversational interaction into a single, unified diagnostic workflow. The motivation behind this architecture is derived from the natural process of clinical evaluation: physicians collect verbal symptoms, analyze physical or radiological evidence, and engage in dialogue with patients to derive a meaningful assessment. SEHAT replicates this tri-level evaluation pathway through machine learning, deep learning, and generative AI modules.
The system follows a modular architecture built using Python, Flask for backend processing, Streamlit for the user interface, and multiple AI frameworks such as TensorFlow, Scikit-learn, and large language models (LLMs). This separation of modules not only improves scalability but also ensures that each component can be optimized independently for greater diagnostic reliability.
Symptom-Based Machine Learning Model
The first diagnostic pillar of SEHAT focuses on analyzing user-reported symptoms using classic and ensemble machine learning techniques. Healthcare data typically involves noisy, nonlinear patterns, making ensemble algorithms particularly effective for improving prediction robustness.
- Data Preprocessing: Symptoms entered by users undergo normalization, categorical encoding, and vectorization to prepare the dataset for training. Missing values are treated using probabilistic imputation techniques.
- Model Selection and Training: SEHAT employs an ensemble of three prominent algorithms:
  - Naïve Bayes for probabilistic symptom interpretation,
  - Random Forest for high-dimensional feature handling,
  - XGBoost for gradient-boosted decision-making.
The ensemble approach reduces individual model bias and variance, producing an average accuracy of approximately 90% across multiple disease classes. Cross-validation methods such as K-fold validation ensure performance consistency and reduce overfitting.
- Prediction Mechanism: When a user inputs symptoms, each classifier generates probabilistic outputs. These outputs are aggregated via weighted averaging to produce a final disease prediction along with a confidence score.
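The steps above can be sketched end to end: multi-hot vectorization of user-reported symptoms, followed by weighted averaging of per-classifier probabilities into a single prediction and confidence score. The symptom vocabulary, probability values, and weights below are illustrative placeholders, not SEHAT's trained models.

```python
import numpy as np

# Hypothetical symptom vocabulary; the real system would derive this
# from its training dataset.
SYMPTOM_VOCAB = ["fever", "cough", "fatigue", "chest_pain", "headache"]

def vectorize(symptoms):
    """Normalize free-text symptoms and multi-hot encode them."""
    present = {"_".join(s.strip().lower().split()) for s in symptoms}
    return [1 if s in present else 0 for s in SYMPTOM_VOCAB]

def fuse_predictions(probs_per_model, weights):
    """Weighted-average class probabilities; return (class index, confidence)."""
    probs = np.asarray(probs_per_model, dtype=float)   # (n_models, n_classes)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                       # normalize weights
    fused = w @ probs                                  # weighted average per class
    idx = int(np.argmax(fused))
    return idx, float(fused[idx])

x = vectorize(["Fever", "chest pain"])                 # -> [1, 0, 0, 1, 0]

# Stand-ins for Naive Bayes / Random Forest / XGBoost outputs over 3 classes
nb, rf, xgb = [0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.7, 0.2, 0.1]
label, confidence = fuse_predictions([nb, rf, xgb], weights=[1, 1, 2])
```

Giving the gradient-boosted model a higher weight, as here, is one way the fusion step can favor the strongest individual classifier while still damping its errors with the other two.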
Chest X-Ray Abnormality Detection
Radiological images carry high diagnostic value, particularly for respiratory disorders. SEHAT integrates a Convolutional Neural Network (CNN) trained on publicly available chest X-ray datasets to detect abnormalities such as pneumonia and tuberculosis.
- Image Processing: Input images are resized, normalized, and augmented through techniques such as rotation, flipping, and contrast adjustment. This increases dataset diversity and helps reduce overfitting.
- CNN Architecture: The model follows a typical deep learning pipeline consisting of:
  - Convolutional layers for feature extraction,
  - Max-pooling layers for dimensionality reduction,
  - Flattening and dense layers for classification.
The model achieved an accuracy of ~92%, aligning with state-of-the-art approaches for chest X-ray classification.
- Output Representation: The system labels images as Normal or Abnormal, and includes predicted probability scores. Grad-CAM visualization may optionally be integrated to highlight regions contributing to the diagnosis, improving explainability.
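The imaging pipeline above can be illustrated with a pure-NumPy toy sketch of normalization, flip-based augmentation, and a single convolution followed by max pooling. This is for intuition only; the actual model would use a deep learning framework such as TensorFlow/Keras with learned filters and many stacked layers.

```python
import numpy as np

def preprocess(image):
    """Scale 8-bit pixel intensities into the [0, 1] range."""
    return image.astype(np.float32) / 255.0

def augment(image):
    """Return the original plus simple flipped variants to diversify data."""
    return [image, np.fliplr(image), np.flipud(image)]

def conv2d(x, kernel):
    """Valid 2-D convolution (single channel, single filter)."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling for dimensionality reduction."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=np.uint8).reshape(6, 6)        # stand-in "X-ray" patch
variants = augment(preprocess(img))                      # 3 training variants
features = max_pool(conv2d(variants[0], np.ones((3, 3))))  # 6x6 -> 4x4 -> 2x2
```

Note how the spatial size shrinks at each stage (6x6 input, 4x4 convolution output, 2x2 pooled map); in a real CNN the flattened pooled features would feed the dense classification layers.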
Conversational AI for Medical Guidance
The conversational module enhances SEHAT by providing natural, interactive communication with users. Unlike rule-based chatbots, SEHAT uses a large language model (LLM) to generate context-aware responses.
- Functional Purpose: The chatbot clarifies system predictions, provides general healthcare advice, and supports user education. It does not deliver clinical diagnoses but offers meaningful, human-like interaction.
- Theoretical Basis: The model relies on transformer-based architectures capable of:
  - Understanding user queries,
  - Generating coherent explanations,
  - Maintaining dialogue context.
This improves accessibility, enabling users with limited technical or medical knowledge to navigate the system comfortably.
Integration of Multi-Modal Diagnostic Pipelines
SEHAT's novelty lies in combining symptom-based ML, X-ray CNN analysis, and conversational AI into a single diagnostic pipeline. This multi-modal approach mirrors the layered reasoning process used by physicians.
The integration is achieved through a central decision engine that routes user inputs to the relevant modules and aggregates outputs to form a comprehensive preliminary health assessment.
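A minimal sketch of such a central decision engine follows: it routes each kind of user input to the matching module and aggregates whatever results are available into one preliminary report. The module callables and field names are hypothetical stand-ins for the ML, CNN, and LLM components, not SEHAT's actual interfaces.

```python
def decision_engine(user_input, modules):
    """Run every applicable module and merge results into one assessment."""
    report = {}
    if user_input.get("symptoms"):
        report["symptom_prediction"] = modules["symptoms"](user_input["symptoms"])
    if user_input.get("xray") is not None:
        report["xray_finding"] = modules["xray"](user_input["xray"])
    if user_input.get("question"):
        # The conversational module sees earlier findings as context.
        report["guidance"] = modules["chat"](user_input["question"], report)
    return report

# Stub modules standing in for the ensemble ML, CNN, and LLM components
modules = {
    "symptoms": lambda s: {"disease": "pneumonia", "confidence": 0.62},
    "xray": lambda img: {"label": "Abnormal", "probability": 0.91},
    "chat": lambda q, ctx: f"Based on {len(ctx)} findings, please consult a doctor.",
}

report = decision_engine(
    {"symptoms": ["fever", "cough"], "xray": "chest.png", "question": "What now?"},
    modules,
)
```

Because each module is addressed through a plain callable, any component can be retrained or swapped independently, which matches the modular-architecture goal stated earlier.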
The theoretical justification for this design is grounded in:
- Decision fusion theory,
- Multi-modal learning frameworks,
- Human-centric interaction design,
- Explainable AI (XAI) principles.
Fig. 1: SEHAT System Workflow Diagram
Deployment Framework
The system is deployed using:
- Flask for managing backend APIs,
- Streamlit for the interactive user interface,
- MongoDB / local storage for logs and data,
- Cloud or local hosting, depending on requirements.
The modular deployment ensures rapid response time, scalability, and device compatibility for users across varying hardware environments.
RESEARCH GAP
Although artificial intelligence has significantly improved the landscape of digital healthcare, major limitations remain in the systems currently available for preliminary diagnosis. Existing e-health platforms generally operate as single-function tools, either as symptom checkers, image classifiers, or basic chatbots, resulting in fragmented diagnostic workflows that fail to capture the complexity of real clinical decision-making. This lack of multimodal integration represents a major gap, as effective diagnosis requires the combined interpretation of symptoms, imaging evidence, and patient interaction. Without this holistic approach, current systems are unable to provide reliable or context-aware assessments.
Another critical gap is the limited explainability of many AI healthcare models. Most systems deliver predictions without clarifying the reasoning behind them, reducing user trust and making it difficult for patients or healthcare workers to validate or understand the results. In medical applications, transparency is essential; however, current tools rarely provide confidence scores, visual explanations, or interpretable outputs. This creates hesitation among users and restricts the deployment of AI systems in realistic healthcare settings.
Accessibility also remains a significant issue. A large portion of AI-driven healthcare applications requires strong internet connectivity, high computational resources, or advanced digital literacy. These constraints render the systems impractical for rural or low-resource environments, where healthcare support is often needed the most. Existing platforms are not sufficiently optimized for lightweight deployment, low-bandwidth usage, or intuitive interfaces suitable for non-technical users.
Furthermore, most digital healthcare tools lack intelligent conversational capabilities. Many chatbots rely on predefined replies that fail to adapt to user context, symptoms, or emotional needs. They are unable to interpret diagnostic results, offer personalized explanations, or guide users in meaningful dialogue. This restricts the ability of current systems to support patient understanding, reduce anxiety, or answer follow-up questions effectively.
SEHAT is designed specifically to address these shortcomings by combining machine learning, deep learning, and generative AI into a cohesive, accessible, and user-centric diagnostic support system.
PROPOSED SYSTEM: AIM
The proposed system, SEHAT (Smart E-Healthcare Assistant & Tracker), is designed as an integrated, AI-driven platform that replicates the fundamental stages of preliminary clinical diagnosis. Unlike existing systems that focus on a single task, SEHAT unifies symptom-based analysis, radiological interpretation, and conversational interaction into one cohesive diagnostic framework. The system combines machine learning models for textual symptom classification, convolutional neural networks for chest X-ray abnormality detection, and a generative AI assistant capable of interacting with users in natural language. This architecture allows SEHAT to emulate the sequential reasoning process used by healthcare practitioners: gathering symptoms, examining clinical evidence, and providing guidance.
The platform employs ensemble learning techniques for symptom interpretation, improving robustness and minimizing misclassification through combined model decisions. For image evaluation, SEHAT utilizes a deep CNN trained on chest X-ray datasets to detect pneumonia, tuberculosis, and other common abnormalities. The generative AI component further elevates the system by enabling context-aware dialogue, helping users understand predictions, clarify uncertainties, and access general preventive guidance. The entire system is implemented using Python, Flask for backend processing, and Streamlit for an accessible, browser-based interface, making it suitable for deployment across a variety of hardware conditions.
| Module | Technique Used | Primary Output |
|---|---|---|
| Symptom-Based Prediction | Ensemble Machine Learning | Probable disease + confidence score |
| X-ray Image Analysis | Convolutional Neural Network (CNN) | Normal/Abnormal classification |
| Conversational Assistant | Generative AI (LLM) | Contextual medical guidance |

Table 1: Core Functional Modules of the SEHAT system.
SEHAT is designed with an emphasis on accessibility, explainability, and user trust. Predictions are accompanied by confidence scores, and image outputs may include interpretability maps to improve transparency. The lightweight design ensures usability in low-resource environments where computational power and internet bandwidth may be limited. Rather than functioning as a diagnostic replacement, SEHAT operates as a preliminary decision-support assistant capable of accelerating early disease detection, reducing delays in medical attention, and empowering users with actionable insights before formal clinical consultation.
EXPECTED OUTCOMES
The expected outcomes of SEHAT reflect its potential to function as a comprehensive, multi-modal healthcare assistant capable of transforming the way early-stage medical evaluation is conducted. By integrating ensemble machine learning techniques, deep learning-based radiological analysis, and intelligent conversational interaction, SEHAT is anticipated to significantly reduce the time required for preliminary assessments. Traditional diagnostic workflows often involve long waiting periods, travel to medical facilities, and expert-dependent evaluations; SEHAT automates this initial stage by processing user-reported symptoms and chest X-ray images within seconds. As a result, users can receive rapid, evidence-backed insights that empower them to take timely action, particularly in conditions where early detection plays a crucial role in preventing complications.
The use of ensemble ML models and CNN-based image classification further enhances the reliability and precision of SEHAT's output. Ensemble learning minimizes errors associated with individual models, while the CNN leverages hierarchical feature extraction to identify subtle radiological abnormalities that may not be easily visible to untrained observers. Together, these models are expected to achieve diagnostic accuracies comparable to existing state-of-the-art AI systems used in clinical support. The system's ability to offer prediction confidence scores enables users to understand the certainty behind each assessment, reinforcing trust and enabling more informed decision-making before visiting a healthcare professional.
Another major expected outcome of SEHAT is improved accessibility to diagnostic tools for underserved populations. Many regions lack immediate access to trained radiologists, specialists, or even basic diagnostic infrastructure. SEHAT, designed as a lightweight, browser-based application, aims to bridge this gap by functioning effectively on low-end devices and limited internet bandwidth. This inclusive design ensures that individuals living in rural or economically disadvantaged communities can still benefit from early diagnostic support without needing to travel long distances or rely on expensive private diagnostic services. Such accessibility has the potential to reduce the healthcare divide and promote equitable health outcomes.
| Expected Outcome | Description | Impact on Users / Healthcare |
|---|---|---|
| Faster Preliminary Screening | Automated analysis of symptoms and X-rays significantly reduces evaluation time | Earlier decision-making and reduced diagnostic delays |
| Improved Accessibility | Lightweight web deployment usable on low-end devices | Beneficial for rural and resource-limited communities |
| Higher Diagnostic Reliability | Multi-modal AI improves prediction accuracy | Users gain confidence in preliminary assessments |
| Enhanced Health Awareness | Conversational AI educates users about risks and preventive steps | Better self-monitoring and timely care-seeking |
| Reduced Clinical Workload | Non-critical cases can be preliminarily filtered | Helps hospitals prioritize urgent or severe cases |

Table 2: Expected Outcomes of the SEHAT System
In addition, SEHAT is expected to contribute positively to broader healthcare efficiency. By offering an automated preliminary screening mechanism, the system can help reduce unnecessary hospital visits, allowing medical professionals to focus on more severe or complex cases. Clinics and healthcare centers may use SEHAT as a triage assistant, filtering routine cases and highlighting potentially high-risk individuals who require immediate attention. This optimized allocation of medical resources may improve overall patient throughput and reduce clinical workload burdens.
FUTURE SCOPE
The SEHAT system presents a strong foundation for AI-driven preliminary healthcare assistance, yet there remains significant potential to expand its capabilities, improve its diagnostic power, and broaden its practical applicability in real-world settings. One of the most promising directions for future development lies in extending SEHAT's diagnostic coverage beyond respiratory diseases and symptom-based predictions. By incorporating larger and more diverse datasets, SEHAT can be upgraded to handle additional medical conditions including cardiovascular disorders, dermatological issues, infectious diseases, and chronic illnesses. The integration of multi-disease classification models and advanced deep learning architectures would allow SEHAT to evolve into a more comprehensive digital diagnostic ecosystem.
Another major area of future scope includes the integration of wearable and IoT health devices, such as smartwatches, fitness trackers, and portable medical sensors. These devices continuously generate valuable physiological data, such as heart rate, oxygen saturation, sleep cycles, and activity patterns, which can significantly enhance SEHAT's predictive accuracy and enable long-term health monitoring. By combining real-time physiological data with symptom patterns and imaging results, SEHAT could shift from being a static diagnostic assistant to a dynamic health companion capable of early anomaly detection and personalized wellness recommendations.
The conversational AI module also offers substantial opportunities for improvement. Although current generative AI models provide context-aware responses, future versions can incorporate enhanced medical fine-tuning, emotional intelligence, and patient support capabilities. This would allow SEHAT to better understand user intent, adjust communication style based on emotional cues, and offer more empathetic, clinician-like conversational experiences. The introduction of multimodal interaction, such as voice input, speech recognition, or even video-based patient assessment, could further elevate user engagement and accessibility, especially for individuals with limited literacy or physical impairments.
From a clinical perspective, future enhancements may include the incorporation of explainable AI (XAI) techniques across all modules. This would allow SEHAT to offer clearer, more transparent justifications for each prediction, helping clinicians validate outputs and build trust in the system. Visual interpretability tools for X-ray analysis, feature-attribution maps for symptom predictions, and confidence-based reasoning inside the chatbot could make SEHAT more suitable for deployment in hospitals, telemedicine centers, and public health programs.
The system also holds potential for integration with telemedicine platforms, enabling seamless transition from AI-assisted screening to real-time virtual consultations with certified doctors. Automatic triaging, where SEHAT classifies users by risk level and refers critical cases to medical professionals, could significantly reduce patient load in healthcare facilities and improve emergency response efficiency. Additionally, collaborations with government health agencies or NGOs could help deploy SEHAT in community health camps, low-resource rural clinics, and school-based health screening programs.
Finally, as healthcare increasingly demands secure and ethical handling of sensitive patient information, future versions of SEHAT may incorporate advanced privacy-preserving technologies such as federated learning, homomorphic encryption, or blockchain-based data integrity systems. These methods would allow SEHAT to improve its diagnostic capabilities while ensuring that user data remains protected and decentralized, ultimately building a safer and more trustworthy AI healthcare environment.
In summary, the future scope of SEHAT is extensive and highly impactful. With advancements in multi-disease diagnostics, IoT integration, conversational intelligence, telemedicine connectivity, explainable AI, and data privacy frameworks, SEHAT has the potential to evolve into a robust, intelligent, and globally scalable healthcare assistant capable of reshaping digital healthcare delivery.
CONCLUSION
The development of SEHAT (Smart E-Healthcare Assistant & Tracker) demonstrates how artificial intelligence can be effectively harnessed to address critical gaps in early-stage medical assessment, particularly in regions where healthcare resources are limited. By integrating three powerful diagnostic components, namely symptom-based machine learning prediction, CNN-driven chest X-ray interpretation, and a generative AI conversational assistant, SEHAT provides a unified, multi-modal platform capable of delivering rapid and informative preliminary evaluations. This system overcomes many of the limitations found in conventional e-health applications, which tend to rely on single-modality analysis and lack interpretability, accessibility, and user engagement.
Through the use of ensemble machine learning models, SEHAT is able to analyze user-reported symptoms with improved robustness and accuracy, offering meaningful insights that can guide users toward early detection of potential diseases. The CNN-based radiological module further enhances the system's diagnostic strength by identifying abnormalities in chest X-rays with performance comparable to established clinical-support AI models. This combination of structured symptom analysis and visual evidence interpretation allows SEHAT to simulate aspects of genuine clinical reasoning typically performed by trained healthcare professionals.
The conversational AI component represents another important contribution of SEHAT, offering a human-like interaction experience that simplifies technical outputs and enhances user understanding. Instead of presenting purely algorithmic predictions, SEHAT explains results in approachable language, provides general preventive guidance, and supports users in making informed decisions. This focus on usability ensures that the system can benefit individuals with varying levels of education, digital literacy, and medical knowledge, thereby improving health awareness across diverse communities.
SEHAT's lightweight, web-based implementation ensures accessibility on low-end devices and under limited network conditions, making it a practical solution for rural populations and resource-constrained environments. By acting as a scalable preliminary diagnostic tool, SEHAT has the potential to reduce unnecessary hospital visits, enable faster triaging of critical cases, and alleviate the burden on overstretched healthcare facilities. In this sense, SEHAT not only serves individual users but can also support broader public health efforts by enhancing early detection capabilities and promoting proactive health management.
While SEHAT already demonstrates strong utility and performance, it also sets the stage for extensive future enhancements, including multi-disease expansion, integration with wearable health devices, advanced explainable AI techniques, and telemedicine connectivity. These advancements could elevate SEHAT into a more comprehensive digital health ecosystem capable of continuous monitoring, predictive diagnostics, and personalized medical support.
In conclusion, SEHAT embodies a promising advancement in AI-driven healthcare, offering a practical, scalable, and user-friendly solution for preliminary diagnosis and health education. Its multi-modal design, interpretability-focused approach, and emphasis on accessibility position it as a valuable contribution toward achieving equitable, efficient, and technology-empowered healthcare delivery in the modern world.
DECLARATION
Competing Interests
The authors declare that they have no competing interests.
Funding
This research received no external funding.
Author Contributions
Mr. Prajwal Tiwari contributed to the conceptualization, system design, implementation, and manuscript preparation. Mr. Pervez Ahmed contributed to data analysis and model development.
Mr. Iffat Raza contributed to literature review and methodology design.
Dr. Sangeeta Mishra supervised the research and provided academic guidance.
Data Availability Statement
The datasets used and analyzed during this study are publicly available and may also be obtained from the corresponding author upon reasonable request.
Research Involving Humans and/or Animals
This study does not involve experiments on humans or animals.
Informed Consent
Not applicable.
