DOI: https://doi.org/10.5281/zenodo.18068722

An AI-Powered Wearable System for Context-Aware Memory Assistance in Alzheimer’s and Dementia Care

Dr. S. Malathi

Professor & Head, Dept. of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India

Kamesh Raj C

UG Student, Dept. of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India

Jarvis J

UG Student, Dept. of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India

Kathiravan K

UG Student, Dept. of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India

Abstract – Alzheimer’s disease and dementia cause major everyday problems, such as progressive memory loss, confusion, and wandering, which reduce patient autonomy and increase carer stress. More sophisticated robotic systems remain too expensive and impractical for typical residential use, while traditional assistive technologies, from basic alarms to generic GPS trackers, are frequently disjointed and fall short of offering comprehensive, context-aware support. To address these limitations, this study presents the “Cognitive Companion,” a wearable AI-powered system for comprehensive dementia care built on a Raspberry Pi platform in a small wristband form factor. By avoiding continuous cloud connectivity and using lightweight TensorFlow Lite models for private, on-device processing, the system offers low-latency responses and data security. Integrated sensors, including an IMU, GPS, and a wide-angle Pi Camera Module 3, enable a wide range of context-aware features: fall detection, GPS-based geofencing for wandering alerts, emotion detection to deliver adaptive and comforting reminders, and real-time face recognition to identify carers. Through multi-modal feedback using its speaker, vibration motor, and integrated OLED display, the system provides personalised medication reminders, and an adaptive AI assistant offers sympathetic, context-aware support. Experimental evaluation shows strong performance, with AI models reaching F1-scores above 92%, critical alerts delivered in under two seconds, and more than seven hours of continuous battery life. By combining these features into a wearable that is safe, reasonably priced, and easy to use, the Cognitive Companion offers a scalable and practical solution that promotes patient independence and gives carers greater peace of mind.

Index Terms: Alzheimer’s care, dementia assistance, wearable devices, context-aware systems, emotion recognition, Raspberry Pi, IoT healthcare, memory support.

  1. INTRODUCTION

    Alzheimer’s disease and related dementias are progressive neurodegenerative conditions that pose one of the biggest challenges to modern medical and elderly care worldwide. These conditions, which manifest as severe memory loss, confusion, impaired reasoning, and spatial disorientation, gradually and irreversibly erode a person’s cognitive capacities. As the disease worsens, patients often find it extremely difficult to manage everyday tasks such as keeping personal appointments or following prescription routines, which increases their risk of wandering and compromises their safety. As a result, caregivers, who are frequently family members, are left to cope with the tremendous mental, physical, and financial burden of ensuring their loved one is safe and healthy around the clock.

    Several assistive technologies have therefore been developed to alleviate this burden, although many of them fall short of offering a fully comprehensive or workable solution. Although straightforward, traditional non-digital aids such as wall calendars, sticky notes, and simple pill dispensers become ineffective in intermediate to advanced stages because they are inflexible and rely on the patient’s already deteriorating memory and processing skills. The introduction of first-generation digital technologies, such as smartphone applications, brought more flexible scheduling and notifications. These technologies, however, come with their own set of usability issues for older people, such as difficulty with touchscreen navigation, complicated user interfaces, and the practical problem of keeping the device charged and within reach, which lowers their dependability.

    By encouraging social engagement, offering cognitive stimulation, and lowering patient agitation, more sophisticated therapeutic technologies such as socially assistive robots have shown great potential in research settings. Despite their demonstrated advantages, these systems’ high cost, substantial physical footprint, and the technical difficulty of their upkeep and operation make them unsuitable for general residential use. Conversely, the present market of commercial wearable technology, such as GPS trackers and smartwatches, provides enhanced safety monitoring and portability. These devices, however, usually function as fragmented tools that only track location or activity without offering the comprehensive, context-aware cognitive and emotional assistance that dementia patients sorely require. This makes the need for a solution that is intelligent, integrated, economical, and practical for everyday use clear and pressing.

    The convergence of multi-modal sensing, as evidenced by advanced diagnostic tools that combine data from sources such as EEG and eye-tracking, and the development of powerful on-device artificial intelligence are now shaping the future of dementia care. Recent research on lightweight neural network architectures, such as Graph Neural Networks (GNNs) for complex data analysis, has demonstrated that resource-constrained edge devices can achieve high-accuracy modelling. This technical shift to edge computing makes it possible to develop intelligent wearables that are both powerful and intrinsically privacy-preserving: such devices can perform real-time analysis without continuously depending on cloud services, guaranteeing minimal latency and data confidentiality.

    This study introduces the Cognitive Companion, a wearable AI-powered device that fills the gaps left by earlier technologies to provide comprehensive, context-aware assistance to individuals with dementia. By combining a number of AI-driven, context-aware features into a single wrist-worn device, our solution significantly extends the established concept of wearable assistance for people with Alzheimer’s. We translate the multi-modal sensing concept from diagnostic tools and the cognitive-engagement goal from robotic systems into a practical, everyday wearable. The Cognitive Companion offers on-device AI for fall detection, multi-modal reminders, GPS-based geofencing, and face and emotion recognition by utilising a Raspberry Pi, a wide-angle camera, and a variety of sensors. The system seeks to improve patient autonomy and safety while lessening the heavy strain on carers by providing real-time safety alerts and compassionate, emotion-aware feedback.

  2. LITERATURE SURVEY

    Multimodal Sensing and Early Diagnosis

    Recent developments place a strong emphasis on combining several behavioural and physiological modalities to improve the accuracy of Alzheimer’s disease (AD) identification. A 2024 study published in IEEE Transactions on Neural Systems and Rehabilitation Engineering showed that integrating EEG, eye-tracking, and behavioural data greatly enhanced classification performance for early cognitive impairment compared to single-modality systems [1]. This encourages the use of multimodal inputs for accurate detection in real-world situations. However, because such lab setups necessitate large equipment, wearable-friendly proxies (camera, microphone, and IMU) that can replicate multimodal fusion in portable settings are being developed.

    Graph-Based Neural Modeling for Brain Connectivity

    Neural network architectures that exploit patterns in brain connectivity have shown promise in distinguishing AD patients from healthy controls. An EEG-based Graph Neural Network (GNN) model was proposed in the 2022 IEEE Transactions on Neural Systems and Rehabilitation Engineering, demonstrating that functional-connectivity graphs enhanced model interpretability and diagnostic accuracy [2]. Similarly, spatio-temporal GCN approaches enhanced sensitivity to dynamic changes in brain networks [5]. These studies inform the lightweight, connectivity-inspired embeddings of the proposed wearable system, which are optimised for edge inference on constrained hardware such as the Raspberry Pi.

    Assistive and Wearable Systems for Cognitive Engagement

    Wearable technologies and socially assistive robotics have shown potential in improving memory recall and participation in dementia care. A social robot that offered AD patients interactive therapy and cognitive activities was presented in the 2023 IEEE Transactions on Robotics [3]. Despite its effectiveness, the robot’s restricted portability and high cost prevent widespread deployment. A low-power wearable that combines motion, location, and alarm modules for Alzheimer’s assistance was proposed in complementary work presented at IEEE BioCAS 2019 [4]. These examples inform the design of a small, wrist-worn device that blends therapeutic prompts and real-time monitoring while preserving user comfort and affordability.

    Emotion Recognition, Behavioral Biomarkers, and Explainability

    Behavioural signals and the ability to recognise emotions have emerged as early markers of cognitive impairment. Studies that combined EEG-based functional connectivity with local activations demonstrated improved emotion classification [6]. Microstate complexity measures have also been associated with early-stage AD detection [7]. Additionally, explainable AI frameworks, such as gated GNN and fuzzy-network models, enhance interpretability and trust in clinical AI applications [8]. Building on these findings, the proposed wearable system incorporates emotion-adaptive responses and explainable decision outputs, ensuring transparency and carer confidence in practical use.

    Challenges, Research Gaps, and Contribution

    Despite impressive advancements, wearable implementation of current multimodal and GNN-based frameworks is limited by scalability and energy constraints. The majority of clinical investigations depend on controlled conditions, and high-capacity models require computational resources that are unsuitable for embedded devices. Few technologies are able to monitor continuously while maintaining privacy in everyday situations. The current study aims to fill these gaps by (1) creating quantised multimodal fusion networks for on-device inference, (2) including graph-inspired feature embeddings for low-power cognition tracking, and (3) enabling explainable, caregiver-aware feedback loops. This strategy complies with IEEE guidelines for ethical, explainable, and long-term AI in healthcare.

  3. PROPOSED METHODOLOGY

    The proposed model is the “Cognitive Companion,” a wearable AI-powered system intended to offer full, context-aware assistance to people with Alzheimer’s disease and dementia. It shifts the emphasis from clinical diagnosis to practical, everyday help by combining real-time monitoring, safety alerts, and cognitive support into a single compact device.

    1. System Architecture

      The system architecture runs as a continuous, responsive feedback loop entirely on the edge, which guarantees low-latency responses and patient privacy. It consists of four main stages:

      • Sensor Data Acquisition: The system continuously gathers information from its built-in sensors, which include a MEMS microphone for audio, a GPS module for location, an IMU for motion, and a wide-angle Pi Camera Module for visual data. To extract pertinent features for AI analysis, this raw data is locally preprocessed on the device.
      • On-Device AI Analysis: TensorFlow Lite-optimized lightweight AI models receive preprocessed data as input. Real-time input analysis is carried out by the onboard CNN and MobileNetV2 models to detect faces, emotions, and falls.
      • Decision and Alert Generation: To decide when to send out alerts, reminders, or emotional feedback, a decision engine contextualises the AI outputs. Secure MQTT messages are generated in response to critical alarms for missed reminders, wandering, or falls to promptly alert carers (a minimal sketch of this stage follows this list).
      • Carer Interface: A user-friendly web dashboard allows carers to observe the patient’s location and control reminder schedules in addition to receiving real-time alarms.
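      To make the decision-and-alert stage concrete, the minimal sketch below maps one cycle of AI outputs to carer alerts. It is an illustrative outline only: the topic name, the payload format, and the publish_alert() placeholder (standing in for the secure MQTT-over-TLS publish) are assumptions, not the deployed implementation.

```python
# Illustrative decision engine: maps one cycle of on-device AI outputs to
# carer alerts. publish_alert() is a stand-in for the secure MQTT-over-TLS
# publish described in the paper; topic and payload format are assumptions.
import json
import time

ALERT_TOPIC = "cognitive_companion/alerts"   # hypothetical topic name

def publish_alert(topic: str, payload: dict) -> None:
    """Placeholder for the secure publish to the carer dashboard."""
    print(f"[{topic}] {json.dumps(payload)}")

def decide_and_alert(fall_detected: bool, outside_geofence: bool,
                     emotion: str, reminder_missed: bool) -> None:
    """Contextualise AI outputs and emit critical alerts when needed."""
    if fall_detected:
        publish_alert(ALERT_TOPIC, {"type": "fall", "emotion": emotion, "ts": time.time()})
    if outside_geofence:
        publish_alert(ALERT_TOPIC, {"type": "wandering", "ts": time.time()})
    if reminder_missed:
        publish_alert(ALERT_TOPIC, {"type": "missed_reminder", "ts": time.time()})

# Example cycle: a detected fall triggers a single "fall" alert.
decide_and_alert(fall_detected=True, outside_geofence=False,
                 emotion="calm", reminder_missed=False)
```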
    2. Hardware Configuration

      The hardware is based on an inexpensive, portable stack that is optimised for all-day use and has a comfortable wristband form factor (~65–80 g).

      • Processing Unit: The Raspberry Pi Zero 2 W, the device’s central component, has a 1GHz quad-core 64-bit ARM Cortex-A53 CPU and 512MB of RAM, making it capable of effectively managing on-device AI applications.
      • Imaging Module: For precise face and expression detection, a Pi Camera Module 3 Wide with a 12MP Sony IMX708 sensor and a 102° field of view is utilised.
      • Sensors: While an IMU (accelerometer and gyroscope) allows for fall detection and activity monitoring, a GPS module offers real-time location tracking for geofencing. Voice is recorded via a MEMS microphone for communication. (An illustrative fall-check sketch follows this list.)
      • User Feedback: The device provides multi-modal output via a vibration motor for haptic alerts, a small speaker for spoken cues, and a 0.96″ OLED display for visual information.
      • Source of Power: The system is powered by a 5000 mAh Li-Po battery pack that can run continuously for 6 to 8 hours.
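      As an illustration of how the IMU could feed the fall-detection feature, the sketch below applies a simple acceleration-magnitude threshold over a window of samples. The threshold value and the sample format are assumptions for exposition; the system described above relies on a trained on-device model rather than a fixed rule.

```python
# Minimal illustrative fall check from raw accelerometer samples.
# The 2.5 g impact threshold and the sample format are assumptions; the
# actual system uses a learned on-device model rather than a fixed rule.
import math

GRAVITY = 9.81            # m/s^2
IMPACT_THRESHOLD_G = 2.5  # assumed impact threshold, in g

def acceleration_magnitude_g(ax: float, ay: float, az: float) -> float:
    """Magnitude of the acceleration vector, expressed in g."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2) / GRAVITY

def looks_like_fall(samples) -> bool:
    """Flag a window of (ax, ay, az) samples if any reading exceeds the threshold."""
    return any(acceleration_magnitude_g(*s) > IMPACT_THRESHOLD_G for s in samples)

# Example: a quiet window followed by a sharp spike is flagged as a fall candidate.
window = [(0.1, 0.2, 9.8), (0.0, 0.1, 9.7), (15.0, 20.0, 9.8)]
print(looks_like_fall(window))  # True
```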
    3. Software and AI Stack

      By carrying out all necessary processing on the device itself, the software architecture puts efficiency and privacy first.

      • On-Device AI: TensorFlow Lite-optimized lightweight deep learning models are used by the system. This edge-based strategy guarantees the privacy of patient data while enabling quick, low-latency (<2 seconds) performance (a minimal inference sketch follows this list).
      • AI Models: A bespoke CNN classifier examines facial expressions to determine emotions, while a MobileNetV2 model recognises faces.
      • Communication Protocol: MQTT over TLS is used to securely communicate with the caregiver’s dashboard, guaranteeing that all data and alarms are secured.
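      A minimal sketch of the on-device inference path is given below. The model file name, the 224x224 float input, and the use of the tflite_runtime package are assumptions for illustration; the actual models and preprocessing may differ.

```python
# Illustrative TensorFlow Lite inference on a single preprocessed camera frame.
# Model file name and the 224x224 float32 input are assumptions for this sketch.
import numpy as np
import tflite_runtime.interpreter as tflite  # on a desktop, tf.lite can be used instead

interpreter = tflite.Interpreter(model_path="face_model.tflite")  # hypothetical file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(frame_rgb: np.ndarray) -> np.ndarray:
    """Run one low-latency inference on a 224x224 RGB frame and return class scores."""
    x = np.expand_dims(frame_rgb.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])[0]

# Example: scores for a dummy frame (a real frame would come from the Pi Camera).
scores = classify(np.zeros((224, 224, 3), dtype=np.float32))
```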
  4. SYSTEM ARCHITECTURE

    The Cognitive Companion’s architecture is built for a largely edge-based, continuous, privacy-preserving monitoring and assistance loop. All sensitive data is processed locally thanks to our on-device strategy, which offers low-latency, real-time responses without requiring constant reliance on cloud infrastructure. The logical data flow and physical hardware integration of the system are described in detail in Figures 1 and 2, respectively.

    1. Logical Data Flow

      Data collection from several input sensors is the first step in the system’s logical flow, as illustrated in Fig. 1. The Raspberry Pi receives this raw data directly, including speech from the MEMS microphone, location from the GPS module, and visual data from the Pi Camera. A set of local AI models on the device processes the data for activity analysis, emotion detection, and facial recognition in order to understand the user’s context. This analysis informs the system’s decision engine, which then either sends a secure alert to the carer dashboard or triggers an immediate haptic or audible response on the device.

      Figure 1: High-level system architecture of the Cognitive Companion, illustrating the data flow from input sensors to on-device AI processing and outputs.

    2. Hardware and Power Architecture

      The hardware block diagram in Figure 2 shows the power distribution and physical connections in detail. The system is supplied by a portable Li-Po battery pack, controlled by a charging circuit and power switch, which delivers a steady 5V regulated supply. The input devices are linked to the Raspberry Pi through standard interfaces, such as CSI for the camera and I2C, USB, or UART for the additional sensors. The Raspberry Pi interprets these inputs and, through its GPIO pins, drives the output devices (the speaker and vibration motor) to provide audio and haptic feedback.

      Figure 2: Hardware block diagram illustrating the power and data connections between system components.

    3. Architectural Stages

      The system’s end-to-end operation comprises four main stages:

        • Sensor Data Acquisition: The Pi Camera, GPS module, and MEMS microphone are among the integrated sensors that the system uses to collect data. This raw data is preprocessed locally to identify pertinent features for AI analysis.
        • On-Device AI Cognition: On the Raspberry Pi, real-time input analysis is performed by lightweight AI models optimised with TensorFlow Lite. This includes face recognition, emotion detection, and GPS data analysis for wandering detection (an illustrative geofence check is sketched after this list).
        • Decision and Alert Generation: To initiate warnings, reminders, or emotional feedback, a central decision engine contextualises the AI outputs. Secure MQTT messages are generated by critical notifications to instantly alert carers.
        • Feedback and Carer Interface: Using haptic/audio outputs and visual signals on its screen, the system gives the user multi-modal feedback. Critical notifications are provided to the Carer Dashboard for remote monitoring at the same time.
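        The geofence check that feeds the wandering alert can be illustrated with the short sketch below; the home coordinates and the 200 m radius are arbitrary assumptions for the example, not values from the deployed system.

```python
# Illustrative geofence check using the haversine distance from a fixed home
# location. The coordinates and the 200 m safe radius are assumptions.
import math

HOME_LAT, HOME_LON = 13.0500, 80.2121   # assumed home location
GEOFENCE_RADIUS_M = 200.0               # assumed safe radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_geofence(lat, lon):
    """True when the current GPS fix is beyond the safe radius (wandering alert)."""
    return haversine_m(HOME_LAT, HOME_LON, lat, lon) > GEOFENCE_RADIUS_M

print(outside_geofence(13.0525, 80.2121))  # roughly 280 m from home -> True
```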
  5. EXPERIMENTAL SETUP

    An extensive experimental setup and a set of assessment metrics were developed in order to verify the Cognitive Companion’s functionality. The assessment’s main objectives were to determine the system’s responsiveness, the correctness of the AI modules, and the hardware prototype’s practicality in controlled settings.

    1. Dataset Preparation

      For reliable performance, a mix of private and public datasets was used to train and assess the AI models. Volunteers representing family members and carers provided a collection of facial photographs for the carer face recognition module. The emotion recognition model was trained and tested using a standard public dataset (FER-2013, for example). To verify the dependability of the alarm mechanisms, the safety features of the system were assessed using simulated sensor data logs that represented wandering and fall scenarios. To anonymise any sensitive data, privacy precautions were taken at every stage.

    2. Hardware and Software Implementation

      The prototype’s hardware configuration was based on a Raspberry Pi Zero 2 W combined with a GPS module, an IMU for motion detection, and a Pi Camera 3 Wide. A wearable, comfortable, and portable battery pack powered the entire device. TensorFlow Lite was used to optimise the AI models for face and emotion detection in order to provide effective on-device inference. The software was constructed using Python. The MQTT protocol was used for secure communication in order to notify carers. Volunteers participated in controlled prototype testing to confirm overall system dependability, device comfort, and sensor accuracy.

    3. Evaluation Metrics

      The AI models’ performance was assessed using standard machine learning metrics, while the hardware performance was assessed using system-level measurements. In the following formulas, True Positives are denoted by TP, True Negatives by TN, False Positives by FP, and False Negatives by FN.

      1. AI Model Performance Metrics: The following metrics were used to assess how well the classification models for fall detection, face recognition, and emotion recognition performed (a short worked example follows this list):
        • Accuracy: This indicator assesses how accurate the model is overall.

          Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

        • Precision: This assesses how well the model predicts positive results, which is important for features like carer recognition to prevent incorrect identifications.

          Precision = TP / (TP + FP)    (2)

        • Recall (Sensitivity): This gauges how well the model can locate all pertinent examples. For safety features like fall detection, high recall is essential to ensuring that no incidents are overlooked.

          Recall = TP / (TP + FN)    (3)

        • F1-Score: This is a single score that strikes a balance between precision and recall by taking the harmonic mean of the two.

          F1 = 2 × (Precision × Recall) / (Precision + Recall)    (4)

        • Specificity: This gauges how well the model can detect negative examples. High specificity for wandering alerts ensures that the system does not set off false alarms.

          Specificity = TN / (TN + FP)    (5)
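        To make the formulas above concrete, the short sketch below evaluates all five metrics from hypothetical confusion-matrix counts; the counts are illustrative and are not results from the reported experiments.

```python
# Worked example of Eqs. (1)-(5) using illustrative confusion-matrix counts
# (not measurements from the paper's evaluation).
TP, TN, FP, FN = 90, 85, 5, 10

accuracy    = (TP + TN) / (TP + TN + FP + FN)                 # Eq. (1)
precision   = TP / (TP + FP)                                  # Eq. (2)
recall      = TP / (TP + FN)                                  # Eq. (3), sensitivity
f1_score    = 2 * precision * recall / (precision + recall)   # Eq. (4)
specificity = TN / (TN + FP)                                  # Eq. (5)

print(f"Accuracy={accuracy:.3f}  Precision={precision:.3f}  Recall={recall:.3f}  "
      f"F1={f1_score:.3f}  Specificity={specificity:.3f}")
```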
    2. System Performance Metrics: The following system-level metrics were measured in order to evaluate the wearable device’s practicality:
    • Inference Latency: The time in milliseconds (ms) needed for an AI model to process a single input (such as a single camera frame) and generate an output (a minimal timing sketch follows this list).
    • System Responsiveness: The total amount of time that passes between a physical occurrence (such as a fall) and the carer alert being generated.
    • Battery Life: The total time the device can operate with all of its modules active on a single full charge, expressed in hours.
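    One way the latency figures could be gathered is sketched below; run_model() and the event timestamps are placeholders, and the instrumentation actually used in the evaluation may differ.

```python
# Illustrative timing harness for per-frame inference latency and end-to-end
# alert responsiveness. run_model() and the event hooks are placeholders.
import time
import statistics

def measure_inference_latency_ms(run_model, frames):
    """Average per-frame model latency in milliseconds."""
    times_ms = []
    for frame in frames:
        start = time.perf_counter()
        run_model(frame)
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(times_ms)

def alert_responsiveness_s(event_time: float, alert_time: float) -> float:
    """Seconds between the physical event and the carer alert being generated."""
    return alert_time - event_time

# Example with a dummy 10 ms "model" standing in for the TFLite pipeline.
print(measure_inference_latency_ms(lambda f: time.sleep(0.01), [None] * 20))
```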
  6. RESULTS AND DISCUSSION

    The metrics established in the preceding section were used to assess the Cognitive Companion system’s performance. The outcomes validate the efficacy of the AI modules and verify that the hardware prototype satisfies the essential requirements for a workable, real-world assistive device. The evaluation separately considers the performance of the AI models and the overall hardware performance of the system.

    1. AI Model Performance
      1. Emotion Recognition Model:

        The accuracy with which the CNN-based emotion detection model could categorise five main emotional states was assessed. The confusion matrix (Fig. 3) illustrates the model’s high accuracy, with the majority of predictions falling along the diagonal, suggesting little class confusion. With a high precision of 93.8% and an overall balanced F1-score of 94.0%, the classification report (Fig. 4) further quantified this performance and demonstrated consistent and dependable behaviour across all emotion categories.

        Figure 3. Confusion Matrix for Emotion Recognition Model.

        Figure 4. Classification Report for Emotion Recognition Model.

        Training and validation accuracy curves successfully converged while loss curves flattened, as demonstrated by the model’s training history (Fig. 5). This suggests that the model learnt generalisable features without experiencing severe overfitting. The significant discriminative power of the model was finally confirmed by the ROC curves (Fig. 6), which produced high Area Under the Curve (AUC) values for every class.

        Figure 5. Model Training and Validation History (Accuracy and Loss).

        Figure 6. ROC Curve for Emotion Recognition Model.

      2. Caregiver Recognition Model:

      The capacity of the carer recognition model to accurately identify a group of known people and differentiate them from unknown people was assessed. The most important metrics for an identification system are the False Acceptance Rate (FAR) and the Rank-1 Identification Rate. The system’s Rank-1 ID Rate of 95.7% and extremely low FAR of 1.2%, as illustrated in Fig. 7, demonstrate its excellent reliability in differentiating between known carers and strangers.

      Figure 7. Performance Metrics for Caregiver Recognition Model.

      The model rarely confused one registered carer with another, according to the confusion matrix created to assess the model’s performance among known carers (Fig. 8). The multi-class ROC curve (Fig. 10) and the comprehensive classification report (Fig. 9) provide additional evidence of the model’s dependability in accurately and reliably identifying every registered person.

      Figure 8. Confusion Matrix for Known Caregivers.

      Figure 9. Classification Report for Caregiver Recognition Model.

      Figure 10. ROC Curve for Caregiver Recognition Model.
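      The FAR and Rank-1 figures reported above can be computed as sketched below; the score matrix, threshold, and identity labels are illustrative assumptions rather than the evaluation code used for the reported results.

```python
# Illustrative computation of False Acceptance Rate (FAR) and Rank-1
# identification rate from similarity scores; all values are assumptions.
import numpy as np

def false_acceptance_rate(impostor_scores, threshold):
    """Fraction of unknown-person attempts accepted as a known carer."""
    return float(np.mean(np.asarray(impostor_scores) >= threshold))

def rank1_rate(score_matrix, true_ids):
    """Fraction of probes whose best-scoring enrolled identity is the correct one."""
    predicted = np.argmax(np.asarray(score_matrix), axis=1)
    return float(np.mean(predicted == np.asarray(true_ids)))

# Example: 3 probe faces scored against 3 enrolled carers.
scores = [[0.9, 0.2, 0.1], [0.3, 0.8, 0.2], [0.1, 0.4, 0.7]]
print(rank1_rate(scores, [0, 1, 2]))                 # 1.0
print(false_acceptance_rate([0.2, 0.65, 0.3], 0.6))  # ~0.33
```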

    2. Hardware and System Performance

      Based on latency and battery life, the physical prototype’s performance was assessed. It was constructed on a Raspberry Pi Zero 2 W. The system demonstrated near real-time responsiveness with an average inference latency of roughly 200 milliseconds per inference using the optimised TensorFlow Lite models. At an average of 1.8 seconds, the end-to-end system responsiveness for critical alerts consistently remained below the two-second target. The device met the 6–8 hour runtime goal with an average battery life of 7.2 hours while operating continuously with all modules active. These findings, which are compiled in Table 1, demonstrate that the software and hardware optimisations satisfy the exacting standards needed for a dependable assistive device.

      Table 1: Hardware Performance Metrics

      Performance Indicator   | Target      | Measured Result
      AI Inference Latency    | < 500 ms    | ~200 ms
      System Alert Latency    | < 2.0 s     | 1.8 s
      Battery Life            | 6–8 hours   | 7.2 hours
  7. CONCLUSION AND FUTURE WORKS
    1. Conclusion

      The Cognitive Companion, an AI-powered wearable that offers a comprehensive, on-device dementia care solution, was presented in this study. The solution overcomes the drawbacks of fragmented and cloud-dependent technologies by effectively combining context-aware reminders, emotion and facial recognition, and vital safety alerts into a single, privacy-preserving device. The outcomes validate its viability and show that the system goes beyond basic monitoring to offer proactive assistance that gives patients more autonomy and self-respect. Ultimately, the Cognitive Companion is a significant advance in assistive technology that gives carers a crucial instrument for ensuring safety and comfort.

    2. Future Works

    Future development will concentrate on improving the system’s functionality, usability, and practical impact. Key areas for development include further hardware optimisation to extend battery life and improving the assistive AI with more lifelike voice interaction capabilities. The next essential steps involve carrying out extensive clinical validation and usability tests with patients and carers to demonstrate its practical efficacy. Finally, we will investigate avenues for broader implementation, such as a dedicated mobile application and optional cloud analytics for extended data recording.

  8. REFERENCES
  [1] Chen, S., Zhang, C., Yang, H., Peng, L., Xie, H., Lv, Z., & Hou, Z.-G. (2024). A multi-modal classification method for early diagnosis of Mild Cognitive Impairment and Alzheimer’s disease using three paradigms with various task difficulties. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 32, 1477–1486. https://doi.org/10.1109/TNSRE.2024.3379891
  [2] Klepl, D., He, F., Wu, M., Blackburn, D. J., & Sarrigiannis, P. (2022). EEG-based graph neural network classification of Alzheimer’s disease: An empirical evaluation of functional connectivity methods. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, 2651–2660. https://doi.org/10.1109/TNSRE.2022.3204913
  [3] Klepl, D., He, F., Wu, M., Blackburn, D. J., & Sarrigiannis, P. G. (2023). Adaptive gated graph convolutional network for explainable diagnosis of Alzheimer’s disease using EEG data. arXiv [q-bio.NC]. http://arxiv.org/abs/2304.05874
  [4] Lee, K., Choi, K.-M., Park, S., Lee, S.-H., & Im, C.-H. (2022). Selection of the optimal channel configuration for implementing wearable EEG devices for the diagnosis of mild cognitive impairment. Alzheimer’s Research & Therapy, 14(1), 170. https://doi.org/10.1186/s13195-022-01115-3
  [5] Li, P., Liu, H., Si, Y., Li, C., Li, F., Zhu, X., Huang, X., Zeng, Y., Yao, D., Zhang, Y., & Xu, P. (2019). EEG based emotion recognition by combining functional connectivity network and local activations. IEEE Transactions on Bio-Medical Engineering, 66(10), 2869–2881. https://doi.org/10.1109/TBME.2019.2897651
  [6] Sarita, Choudhury, T., Mukherjee, S., Dutta, C., Sharma, A., & Sar, A. (2024). A wearable device for assistance of Alzheimer’s disease with computer aided diagnosis. EAI Endorsed Transactions on Pervasive Health and Technology, 10. https://doi.org/10.4108/eetpht.10.5483
  [7] Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., & Monfardini, G. (2009). The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 61–80. https://doi.org/10.1109/TNN.2008.2005605
  [8] Shan, X., Cao, J., Huo, S., Chen, L., Sarrigiannis, P. G., & Zhao, Y. (2022). Spatial-temporal graph convolutional network for Alzheimer classification based on brain functional connectivity imaging of electroencephalogram. Human Brain Mapping, 43(17), 5194–5209. https://doi.org/10.1002/hbm.25994
  [9] Song, Z., Deng, B., Wang, J., & Wang, R. (2019). Biomarkers for Alzheimer’s disease defined by a novel brain functional network measure. IEEE Transactions on Bio-Medical Engineering, 66(1), 41–49. https://doi.org/10.1109/TBME.2018.2834546
  [10] Tait, L., Tamagnini, F., Stothart, G., Barvas, E., Monaldini, C., Frusciante, R., Volpini, M., Guttmann, S., Coulthard, E., Brown, J. T., Kazanina, N., & Goodfellow, M. (2020). EEG microstate complexity for aiding early diagnosis of Alzheimer’s disease. Scientific Reports, 10(1), 17627. https://doi.org/10.1038/s41598-020-74790-7
  [11] Tavares, G., San-Martin, R., Ianof, J. N., Anghinah, R., & Fraga, F. J. (2019). Improvement in the automatic classification of Alzheimer’s disease using EEG after feature selection. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC).
  [12] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Yu, P. S. (2021). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24. https://doi.org/10.1109/TNNLS.2020.2978386
  [13] Yu, H., Lei, X., Song, Z., Liu, C., & Wang, J. (2020). Supervised network-based fuzzy learning of EEG signals for Alzheimer’s disease identification. IEEE Transactions on Fuzzy Systems, 28(1), 60–71. https://doi.org/10.1109/tfuzz.2019.2903753
  [14] Yuan, F., Boltz, M., Bilal, D., Jao, Y.-L., Crane, M., Duzan, J., Bahour, A., & Zhao, X. (2023). Cognitive exercise for persons with Alzheimer’s Disease and related dementia using a social robot. IEEE Transactions on Robotics, 39(4), 3332–3346. https://doi.org/10.1109/tro.2023.3272846
  [15] Zhang, Y., He, X., Chan, Y. H., Teng, Q., & Rajapakse, J. C. (2023). Multi-modal graph neural network for early diagnosis of Alzheimer’s disease from sMRI and PET scans. arXiv [eess.IV]. http://arxiv.org/abs/2307.16366
  [16] Zhao, Y., Zhao, Y., Durongbhan, P., Chen, L., Liu, J., Billings, S. A., Zis, P., Unwin, Z. C., De Marco, M., Venneri, A., Blackburn, D. J., & Sarrigiannis, P. G. (2020). Imaging of nonlinear and dynamic functional brain connectivity based on EEG recordings with the application on the diagnosis of Alzheimer’s disease. IEEE Transactions on Medical Imaging, 39(5), 1571–1581. https://doi.org/10.1109/TMI.2019.2953584