
Heart Attack Prediction System using Retinal Eye Images and Deep Learning

DOI : 10.17577/IJERTCONV14IS010090


Shraddha

Student, St Joseph Engineering College, Mangalore

Jayashree M

Assistant Professor, St Joseph Engineering College, Mangalore

Abstract – In this study, we present a new, minimally invasive method for predicting heart attack risk from retinal eye images using deep learning. A CNN trained on 5,000 retinal images achieved 96% accuracy in classifying patients as high-risk or low-risk. The system provides a React-based user interface for patients, doctors, and administrators, together with clinical decision support that generates tailored medical recommendations. The solution is an inexpensive, easy-to-use tool for cardiovascular screening that lowers costs, increases efficiency, and could change how heart attacks are detected, prevented, and treated, especially in resource-limited settings.

Keywords: convolutional neural network (CNN), deep learning, clinical decision support, cardiovascular screening, React.js, Flask, TensorFlow, heart attack prediction, retinal image analysis.

  1. INTRODUCTION

    The worldwide threat from cardiovascular disease continues to grow, and heart attack is a key, time-sensitive medical emergency in roughly 40% of cases. The World Health Organization attributes 17.9 million deaths per year, about 31% of all deaths, to heart disease in all its forms. Heart attacks are especially deadly because they strike suddenly; the best chance of successful treatment lies in detecting risk well before the event occurs.

    Typical approaches to calculating cardiovascular risk include invasive procedures, full clinical assessments, and multi-factor calculators that combine several physiological inputs. Even the best available tools, such as ECG, stress testing, coronary angiography, and lipid profile testing, have significant diagnostic limitations, and the invasive procedures among them carry a real risk of complications and patient harm.

  2. LITERATURE REVIEW

    1. Retinal Imaging to Identify Cardiovascular Risk

      Conventional cardiovascular risk assessment approaches rely on invasive imaging techniques such as angiography or stress echocardiography, which are costly and not accessible in rural or resource-constrained settings. Recent studies have proposed retinal imaging as a valid non-invasive diagnostic technique, since anatomical and physiological similarities exist between the retinal and coronary vasculature. Even small differences in vessel calibre, vessel tortuosity, and the arteriolar-venular ratio (AVR) can signal systemic cardiovascular problems.

    2. Machine Learning Approaches for Retinal Analysis

      Machine learning has been successfully leveraged to examine retinal characteristics for the early detection of heart problems. Support Vector Machines (SVMs), Decision Trees, and related classifiers have been used to assign risk levels based on retinal features such as vessel width and vessel bifurcation points. While clinically relevant and reasonably accurate, these methods demand laborious manual feature extraction and tend to generalize poorly on large, heterogeneous datasets, because classical models cannot capture the spatial complexity of retinal images.

    3. Deep Learning Methods

    Deep learning techniques, especially Convolutional Neural Networks (CNNs), have surpassed traditional techniques on medical image datasets owing to their capacity to automatically learn a hierarchy of features. Many studies have applied CNNs and hybrid models to classify heart attack risk from retinal image datasets. Shaikh et al. used transfer learning with the InceptionV3 model and achieved 96% testing accuracy. Rajani et al. proposed a system based on recurrent neural networks (RNNs) and the Expectation-Maximization algorithm to classify heart attack risk from retinal fundus images and achieved 98.6% accuracy.

    Several studies reported noticeable performance gains after enhancing the preprocessing procedure (e.g., CLAHE) and/or adding segmentation networks such as U-Net, which aid vessel detection and analysis. Poplin et al. showed that CNNs trained on large datasets can estimate cardiovascular risk factors (e.g., blood pressure, age, smoking status) from fundus images with accuracy comparable to clinician assessments and existing risk calculators.

  3. METHODOLOGY

    1. Problem Statement

      Cardiovascular diseases (CVDs), especially heart attacks, remain among the leading causes of death globally. Traditional diagnostic methods (ECGs, angiography, and blood-based biomarkers) tend to be invasive, expensive, and unsuitable for large-scale population screening. Moreover, these methods depend on specialized instrumentation, expert interpretation, and centralized health care, making them particularly impractical in rural and underserved areas. Furthermore, because early-stage cardiovascular disease can be asymptomatic, screening and prevention are difficult: patients may only seek treatment after an acute event.

      In light of these shortcomings, it is imperative to develop a non-invasive, cheap, accessible, and broad-reaching risk prediction system that allows for early detection, a personalized approach, and identification of patients for treatment and prevention.

    2. System Architecture

      The system is a multi-tier hybrid architecture built with React.js for the front-end, Flask for the back-end logic, and TensorFlow for the deep learning functionality. This design provides real-time prediction, keeps model development modular, and scales for clinical application, enabling user-friendly cardiovascular risk assessment from retinal images.

    3. Dataset Description

      The system utilizes a curated dataset containing more than 5,000 high-resolution retinal eye images, classified as follows:

      Normal Images: 2,500 retinal images depicting healthy vascular patterns

      Abnormal Images: 2,500 images depicting patterns associated with cardiovascular risk. Each image underwent quality checks for resolution, low-quality images were eliminated, annotation was performed by medical professionals, and samples from diverse demographic backgrounds were included to enable generalization of model outputs.

      Fig 1 – Normal Fig 2 – Abnormal

    4. Data Preprocessing

      Each image is resized to 224×224 pixels, normalized to [0, 1], and enhanced to improve vessel visibility. The augmentation procedures included random rotation, random flipping, random enhancement, and noise addition; all of these methods were employed to maximize the diversity of the data and to reduce overfitting.
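As an illustration, the normalization and augmentation steps above can be sketched with NumPy (a minimal sketch; the function names and augmentation parameters are illustrative assumptions, not taken from the paper's implementation):

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Scale pixel intensities from [0, 255] to [0, 1]."""
    return img.astype(np.float32) / 255.0

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random flip, a random 90-degree rotation, and additive noise."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                       # random horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))  # random rotation
    noise = rng.normal(0.0, 0.01, img.shape)        # small Gaussian noise
    return np.clip(img + noise, 0.0, 1.0)           # keep values in [0, 1]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (224, 224, 3))  # stand-in for a resized retinal image
x = augment(normalize(img), rng)
print(x.shape)  # → (224, 224, 3)
```

Each augmented sample stays in the model's expected 224×224×3, [0, 1] input format while varying orientation and pixel noise.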

      Architecture of CNN Model

      The CNN model contains convolutional, pooling, dropout, and fully connected layers, automatically extracting vascular features such as vessel calibre, tortuosity, haemorrhages, and density. The final output layer, with a sigmoid activation, performs binary classification (normal/abnormal).
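The paper does not publish the exact layer configuration, so the following Keras definition is only a representative sketch of such a network (filter counts, dense width, and dropout rate are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3)) -> tf.keras.Model:
    """Illustrative CNN: conv/pool blocks, dropout, dense head, sigmoid output."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                    # regularization against overfitting
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary: normal vs abnormal
    ])

model = build_model()
print(model.output_shape)  # → (None, 1)
```

The single sigmoid unit outputs a probability in [0, 1] that doubles as the confidence score discussed later.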

    5. Training Configuration

      The model is trained for 100 epochs, with a batch size of 32, using binary cross-entropy loss and an Adam optimizer. Performance is measured in terms of accuracy, precision, recall, and F1-score with a validation split of 20%.
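The binary cross-entropy loss named above averages −[y·log(p) + (1−y)·log(1−p)] over the batch; a minimal hand-rolled version (illustrative only, not the TensorFlow implementation) makes the computation concrete:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)] over a batch of predictions."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident, correct predictions give a low loss.
print(round(binary_cross_entropy([1, 0], [0.9, 0.1]), 4))  # → 0.1054
```

Adam then adjusts the weights to push this average loss down over the 100 training epochs.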

    6. Web Interface and Backend

      The frontend of the platform is built using React.js and has modules for patients, doctors, and admins. The backend consists of a Flask API that handles authentication, image access, and prediction generation. A MySQL database keeps track of users, images, results, and clinical information.

    7. Model Integration and Risk Forecasting

      We integrated the trained CNN into the backend for real-time inference. Uploaded images are verified, pre-processed, and classified. Prediction scores are stored and displayed through the frontend with corresponding risk categories and recommendations.
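A sketch of what such a Flask prediction endpoint might look like (the route name and response fields are assumptions, and the CNN call is replaced by a stub so the example is self-contained):

```python
import io
from flask import Flask, jsonify, request

app = Flask(__name__)

def model_predict(image_bytes: bytes) -> float:
    """Stub for CNN inference; the real system would run the TensorFlow model."""
    return 0.87  # illustrative sigmoid score

@app.route("/predict", methods=["POST"])
def predict():
    if "image" not in request.files:  # basic upload verification
        return jsonify(error="no image uploaded"), 400
    score = model_predict(request.files["image"].read())
    risk = "abnormal" if score >= 0.5 else "normal"
    return jsonify(score=score, risk=risk)

# Exercise the endpoint with Flask's built-in test client.
resp = app.test_client().post(
    "/predict", data={"image": (io.BytesIO(b"fake-image-bytes"), "eye.png")}
)
print(resp.get_json())
```

In the deployed system the stub would preprocess the upload and run the stored CNN, and the returned score would be persisted to MySQL before being shown in the frontend.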

    8. Clinical Decision Support

    Based on its prediction confidence, the system provides clinical recommendations such as additional tests, prescriptions, or suggested lifestyle changes. Recommendations are displayed clearly with immediate access to historical data, and patients can build and save predictive risk assessments in their profile. The entire front-end interface is responsive across devices and screen sizes.

  4. SYSTEM IMPLEMENTATION

    1. Development Environment and Tools

      The system was developed using a combination of modern tools for web development, machine learning, and data science. The principal environment for front-end development with the React.js framework was Visual Studio Code, which supports a component-based structure and real-time debugging. The prediction model was developed in PyCharm for its tight integration with Python and deep learning libraries, which aided debugging and code management. Jupyter Notebook was used for data processing, performance visualization, and incremental development of the machine learning workflows, allowing model metrics to be monitored and experiments to run quickly.

    2. Deep Learning Model Deployment

      The heart attack prediction model is a Convolutional Neural Network (CNN) trained on more than 5,000 annotated retinal images. The architecture was designed for analyzing retinal images and includes convolution, pooling, dropout, and dense classification layers. Training involved extensive data augmentation and hyperparameter tuning to improve generalization and avoid overfitting. The model achieved a final accuracy of 96% and is deployed in the backend using TensorFlow so that inference can occur in real time. When a user uploads a retinal image, the backend preprocesses it, passes it through the trained CNN, and outputs both a classification and a confidence score.

    3. Web-Based Interface

      To ensure maintainability and scalability, the user interface is designed with React.js in a modular, component-based manner. It supports three primary user roles: administrators, physicians, and patients. The admin interface allows user management, account approval or denial, system monitoring, and general platform administration. The physician interface provides functionality for managing prescriptions, follow-ups, AI-based risk predictions, and searching patient records. The patient interface enables users to see risk prediction results, access previous reports, and upload retinal images via file selection or drag-and-drop. The frontend is fully responsive, giving users the same experience regardless of screen size or device, including web and mobile.

    4. Report Creation

    The system contains an automated report generation module that compiles prediction results, patient demographics, clinical recommendations, and model confidence scores into a downloadable PDF report. The reports are generated using the ReportLab Python library, which formats the final report for easy interpretation by clinicians. Each report includes timestamps, patient identification, and graphical representations of the prediction confidence. Users can download these reports in their entirety and store them securely to support documentation and clinical decision-making, and to facilitate offline sharing if required.

  5. RESULTS AND DISCUSSION

    1. Model performance

      The proposed CNN model achieved a classification accuracy of 96% on the test dataset. Further assessment of the evaluation metrics indicates strong performance, with precision of 95.5%, recall of 95.8%, specificity of 96.2%, and F1-score of 95.6%. The Area Under the ROC Curve (AUC) was 0.989, indicating high discriminative ability between the normal and high-risk categories. The model was trained over 100 epochs with early stopping, and a 20% validation split was used.
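These reported numbers are internally consistent: the F1-score is the harmonic mean of precision and recall, which can be checked directly:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported precision 95.5% and recall 95.8% reproduce the reported F1 of ~95.6%.
print(round(f1_score(0.955, 0.958) * 100, 1))  # → 95.6
```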

    2. Performance by Class

      The model obtained 95.8% accuracy, 96.1% sensitivity, and 95.5% specificity for the high-risk class, indicating its ability to identify spatial vascular patterns related to cardiovascular risk. For low-risk cases, the model achieved 96.2% accuracy, 95.5% sensitivity, and 96.9% specificity, showing a good ability to limit false positives. These balanced results demonstrate that the model generalized well across the diagnostic categories.

    3. Analysis of Confidence Scores

      Each of the model's predictions has an associated confidence score derived from the output of the sigmoid activation function. About 82% of the predictions had confidence greater than 90%, and only about 2% of those high-confidence predictions were misclassifications. Any prediction with confidence below 70% was flagged for human review to enhance interpretability and clinical safety.
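A minimal sketch of this triage logic (the 0.5 decision threshold and the mapping from sigmoid output to confidence are assumptions about the implementation):

```python
def triage(score: float, review_threshold: float = 0.70):
    """Map a sigmoid output to a label, a confidence, and a human-review flag."""
    label = "abnormal" if score >= 0.5 else "normal"
    confidence = score if score >= 0.5 else 1.0 - score  # certainty in the chosen class
    needs_review = confidence < review_threshold         # flag low-confidence cases
    return label, confidence, needs_review

print(triage(0.93))  # → ('abnormal', 0.93, False)
print(triage(0.55))  # → ('abnormal', 0.55, True): routed to a clinician
```

Routing every prediction below the review threshold to a clinician keeps the automated pathway restricted to cases the model is confident about.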

    4. Clinical Recommendations

    The system combines AI-generated predictions with automated clinical decision support. Based on the risk classification, it recommends the following actions: for High-Risk cases, immediate referral for an ECG, a stress test, and specialist consultation; for Low-Risk cases, routine monitoring and a discussion of lifestyle advice. The system also allows doctors to prescribe medications, schedule follow-ups, and suggest further testing, furthering an integrated clinical approach.

  6. VALIDATION AND TESTING

    1. Cross-Validation

      A 10-fold cross-validation strategy was employed to assess the robustness of the CNN model. In each fold, 90% of the dataset was used for training and 10% for validation. The model performed consistently across all folds, with a standard deviation in accuracy below 1%, indicating good generalizability and stability of the model architecture.
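The fold bookkeeping can be sketched in plain Python (illustrative only; real pipelines typically shuffle and stratify, e.g. with scikit-learn's KFold):

```python
def k_fold_indices(n: int, k: int = 10):
    """Yield (train, validation) index lists; each fold holds out 1/k of the data."""
    fold = n // k  # assumes n is divisible by k for simplicity
    idx = list(range(n))
    for i in range(k):
        val = idx[i * fold:(i + 1) * fold]               # the held-out 10%
        train = idx[:i * fold] + idx[(i + 1) * fold:]    # the remaining 90%
        yield train, val

folds = list(k_fold_indices(5000, 10))
print(len(folds), len(folds[0][0]), len(folds[0][1]))  # → 10 4500 500
```

Every image serves as validation data exactly once across the ten folds, which is what makes the per-fold accuracy spread a meaningful stability measure.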

    2. Testing Evaluation

      An independent test set of 1,000 retinal images, not seen during training or validation, was used to evaluate the final model. In line with the training phase, the overall accuracy was 96%, confirming that the model generalizes well to unseen data. Furthermore, precision, recall, and F1-score were also very similar, adding confidence in the reliability of the system.

    3. Clinical Validation

    Preliminary clinical testing involved five ophthalmologists and general practitioners who independently evaluated the system's predictions on one hundred (100) patient cases. The system's risk classification agreed with expert clinical assessment in eighty-three percent (83%) of cases. In their feedback, the medical professionals found the system easy to use and valued the prediction confidence scores and the visual presentation of risk, which they felt would improve clinical decision-making, especially in low-resource environments. We envision adopting it into the routine clinical workflow to assist with clinical stewardship.

  7. FUTURE WORK

    1. Model Improvement

      Future work will focus on improving the model's accuracy and interpretability. This includes expanding the dataset by working with more hospitals and ophthalmology centres while increasing demographic variability. ResNet-50 and EfficientNet architectures will be explored to enhance feature extraction. Explainable AI techniques, such as Grad-CAM and attention mechanisms, will also be included because they are critical to building clinical trust in AI. There are also plans to incorporate multi-modal inputs, such as electronic health records or data from wearable sensors, to improve prediction accuracy.

    2. System Improvements

      To ensure scalability and accessibility, we will deploy the system on cloud infrastructure (AWS or Google Cloud) with load balancing so it can be accessed from anywhere. A mobile application for Android and iOS is planned to support point-of-care use in rural and remote areas. We are also considering optimizing real-time processing with edge computing devices to minimize inference latency. Additional improvements will include enhanced report generation with graphical overlays and automated image-quality assessment through statistical techniques to ensure consistency.

    3. Clinical Studies

    In the future, we will execute multi-centre clinical trials across hospitals and diagnostic centres to provide evidence of the system's real-world use. We expect to test the system with more than 5,000 patients to evaluate diagnostic sensitivity (target 98%) and the effect of the system on clinical workflow (e.g., an estimated 30% reduction in diagnostic turnaround time). We will also solicit feedback from physicians to assess usability and adjust the system accordingly, to better prepare for regulatory approval and clinical certification.

  8. CONSTRAINTS AND CONCERNS

    1. Dataset Concerns

      Although the project dataset is substantial at 5,000 annotated retinal images, it does not fully represent the complexity of cardiovascular risk factors across populations. Imbalances in age and ethnic diversity constrain the model's ability to generalize to underserved or minority populations. The data were not longitudinal and thus cannot capture changes in cardiovascular risk over time. All images were collected in controlled settings that may not reflect the variability of real clinical environments, which may hinder domain adaptation. The absence of multimodal data (e.g., clinical history, lifestyle factors) further restricts the model's ability to reflect the complexity of cardiovascular risk.

    2. Technical Limitations

      The dependence on deep CNNs means the system requires substantial computational resources, especially for training during hyperparameter tuning and for at-scale inference, making real-time performance difficult without cloud or edge optimization. As the system scales to many concurrent users or predictions, scalability may become a major constraint. The current implementation lacks adequate offline functionality, limiting its usefulness in remote or inaccessible places. Model explainability is also a concern, since most clinical end-users will want explainable AI before trusting the predictions. The system is also likely to perform poorly when low-resolution or poor-quality images are used.

    3. Difficulties with Clinical Integration

    Despite the system's high predictive accuracy, regulatory approval, validation in a variety of clinical settings, and compatibility with current electronic health record (EHR) systems are necessary for clinical adoption. Adoption may be slowed by resistance from medical professionals who are not familiar with AI-based tools; therefore, training requirements must be met for efficient use. Furthermore, there are still unanswered questions regarding liability, data ownership, and misdiagnosis in AI-driven diagnostics from a legal and ethical standpoint.

  9. CONCLUSION

The heart attack prediction system developed in this study represents a paradigm shift in the non-invasive assessment of cardiovascular risk through the application of deep learning and retinal image analysis. A convolutional neural network trained on over 5,000 labelled retinal images achieved 96% classification accuracy, demonstrating, as a proof of concept, the potential of AI to recognize vasculature characteristics that predispose individuals to cardiovascular risk. With multi-role access, real-time prediction, and an intelligible interaction model based on a modular design with a Python Flask backend and a React.js frontend, the system can benefit patients, physicians, and administrators alike. Clinically, it offers rapid non-invasive risk assessment, reduces diagnostic lag time, and extends screening into underserved areas. Furthermore, the inclusion of clinical decision support tools enhances the system's value by giving medical professionals the means to prescribe treatment, provide informed guidance, and manage follow-up care competently. Technically, the system delivers an understandable and responsive user experience, demonstrating effective integration of AI and web-based technologies in health care. Future developments will analyze more clinical data for multi-modal risk assessment, extend the dataset to more demographics and imaging conditions, and apply explainable artificial intelligence to improve clinical trust and transparency. This study lays the groundwork for scalable, accessible, and clinically applicable healthcare solutions while advancing the rapidly expanding field of AI-assisted diagnostics. With its strong accuracy, usability, and integration with current workflows, the system exemplifies the transformative role of artificial intelligence in contemporary medicine and has significant potential to lower cardiovascular mortality through earlier detection and targeted intervention.

REFERENCES

  1. S. Prasad, R. Kumar, and A. Mishra, "Analysis of Retinal Fundus Images for Cardiovascular Risk Prediction Using Hybrid Deep Learning Models," Diagnostics, vol. 14, no. 4, pp. 117, February 2024.

  2. P. Rajani and S. Sinha, "Heart Attack Prediction Using Retinal Images and RNN-based Deep Learning," International Journal of Innovative Science and Research Technology (IJISRT), vol. 10, no. 1, pp. 456–462, Jan. 2024.

  3. S. Shaikh, A. Kulkarni, and M. Patel, "Heart Disease Prediction using Transfer Learning on Retinal Images," in Proc. Int. Conf. on Medical Imaging and Bioinformatics, pp. 22–28, 2023.

  4. K. Rose and M. Abraham, "A Comparative Analysis of SVM and CNN for Cardiovascular Risk Identification through Retinal Imaging," Journal of Medical Systems, vol. 47, no. 6, pp. 18, 2023.

  5. A. Prakash and V. Kumar, "Automated Identification of the Risk of Cardiovascular Disease from Fundus Images Using VGG and U-Net Architectures," Biomedical Signal Processing and Control, vol. 84, pp. 104112, Dec. 2022.