International Research Platform

Beyond Static Aids: A Contextual AR Framework for Prospective Memory Retrieval

DOI: https://doi.org/10.5281/zenodo.20038594

Milton Lucas Kakupa

Dept. of CSE, NMAM Institute of Technology (NMAMIT), Nitte (Deemed to be University), Nitte, Karnataka, India

Dr. Venkatramana Bhat P

Professor, Dept. of CSE, NMAM Institute of Technology (NMAMIT), Nitte (Deemed to be University), Nitte, Karnataka, India

Abstract – Event-Based Prospective Memory (EBPM) is essential for everyday functioning yet prone to failure, particularly in older adults and people with cognitive impairment. This project introduces EB-VMEM, a mobile Augmented Reality (AR) framework that uses on-device, real-time object detection to deliver contextually anchored reminders tied to physical objects. EB-VMEM's system architecture comprises three coordinated subsystems: an Object Detection Subsystem (real-time object recognition via Niantic Lightship), a Memory Subsystem (local object-task associations), and a Cue Display Subsystem (spatially anchored AR text overlays and short audio cues). Together, these enable reminders to appear precisely when and where a registered object is encountered. Implemented in Unity with the Niantic Lightship SDK, the prototype emphasizes privacy through on-device inference, low cognitive load via automatic, timely cues, and ecological integration by anchoring prompts in the user's real environment.

Remaining work includes comprehensive user studies to measure effects on task completion, cognitive load, and acceptance, and extensions such as LLM-driven contextualization and wearable AR support.

Index Terms: Prospective Memory (PM), Event-Based Prospective Memory (EBPM), Augmented Reality (AR), Object Detection, Reminder System, EB-VMEM, Cost-Effective, Ecological Integration.

  1. Introduction

    Prospective memory (PM) represents one of the most critical yet vulnerable aspects of human cognition, enabling individuals to remember to perform intended actions at appropriate future instances. Unlike retrospective memory, which involves recalling past events or learned information, prospective memory demands that individuals initiate actions when encountering specific environmental cues (event-based) or at designated times (time-based). The practical importance of prospective memory cannot be overstated: from remembering to take medications when seeing a pill bottle, to completing work tasks when entering one's office, everyday functioning relies heavily on successful prospective memory [1, 2, 3].

    Within PM, event-based prospective memory (EBPM) refers to remembering to act when a specific event or cue is encountered, as opposed to time-based prospective memory (TBPM), which depends on self-initiated monitoring of elapsed time [4, 5]. Failures in EBPM are common across the lifespan and are particularly pronounced in populations with mild cognitive impairment (MCI), Alzheimer's disease (AD), and other neurocognitive disorders [6].

    EBPM tasks require individuals to remember to perform a previously planned action when they encounter a specific environmental cue or event. The fundamental challenge lies in detecting the prospective memory cue while engaged in ongoing activities, then successfully retrieving and executing the intended action [1, 2].

    According to the multiprocess framework, event-based prospective memory can operate through two distinct mechanisms: strategic monitoring and spontaneous retrieval [7]. Distinctive, salient cues are more likely to capture attention automatically and trigger intention retrieval. Research using event-related potentials (ERPs) has identified neural components associated with cue detection and stimulus processing [5, 8].

    Traditional approaches to supporting PM, such as alarms, calendars, and written notes, often lack contextual specificity and may not align with the cognitive mechanisms underlying EBPM. Augmented reality (AR), by contrast, offers the potential to deliver reminders precisely when and where they are needed, leveraging real-world object recognition and spatial context to trigger intentions [9].

    Augmented reality technology offers unprecedented opportunities for cognitive augmentation by seamlessly integrating digital information with the physical environment. Unlike virtual reality, which replaces the real world, AR enhances real-world perception by overlaying contextually relevant digital content [10, 11].

    While substantial research has investigated prospective memory mechanisms and age-related decline, and separate work has explored AR applications for various cognitive tasks, limited research has specifically examined AR-based systems that leverage real-world object detection as event-based cues for prospective memory support. Most existing memory aids rely on time-based reminders (smartphone notifications, sticky notes) or static cues, rather than actively detecting environmental cues and providing contextually appropriate interventions [10, 11, 12].

  2. Related Work

    While EBPM and TBPM rely on overlapping neural networks, event-based tasks show unique activation patterns related to cue monitoring and depend more on environmental cues, unlike time-based tasks that involve self-initiated monitoring [5].

    External memory aids such as notes and object placement improve prospective memory; however, they are static by nature, and conventional electronic reminders are not context-aware [12, 13]. Process-based and strategy-based training, such as implementation intentions and rehearsal, can improve prospective memory in older adults, but effects often diminish over time, highlighting the need for interventions that support real-world functioning [14, 15].

    AR is used in healthcare for navigation and motor cuing, and for cognitive rehabilitation by supporting spatial navigation and memory. Previous AR memory interventions often used abstract or non-object cues [11, 12]. Early projects like NeverMind explored AR for memory palace techniques, while recent work focuses on leveraging object detection and spatial anchoring to support task recall [9, 11, 15].

    Advances in AR and computer vision now allow real-world object detection in mobile apps, which research shows is more memorable and effective as a cue compared to images or 2D stimuli [15, 16]. Contextual triggers, such as recognizing a registered object in the camera view, can be used to deliver timely reminders, leveraging the automaticity of event-based retrieval processes [17, 18]. The Niantic Lightship SDK for Unity provides robust APIs for object detection, semantic labeling, and spatial anchoring, facilitating the development of scalable, cross-platform AR memory aids [17, 18].

  3. Problem Statement

    The challenge is to design and implement a Mobile Augmented Reality (AR) framework for smartphones that directly addresses prospective memory failure: the difficulty in remembering to perform an action at a future time or specific location. This framework replaces disruptive traditional notifications with context-aware AR reminders. Specifically, it uses the phone's sensors to recognize a predefined physical trigger (such as keys or a specific desk) and then displays the reminder as a digital overlay anchored to a relevant object in the user's real-world view, thereby providing a more intuitive, spatial, and timely prompt that significantly increases the user's chance of successfully executing their intended action.

  4. System Design

    The AR prospective memory support system, which we name EB-VMEM, comprises three primary components, as shown in Figure 1:

    1. Object Detection Subsystem: The Lightship object detection component sends real-time detection information in the form of XRDetectedObjects, drawn from a list of 206 pre-trained object classes. These classes include person, bus, car, and flag. Some classes in turn contain subclasses; for example, glasses, sunglasses, and goggles are subclasses of the glasses class; likewise, fast food, hot dog, French fries, and cookies are subclasses of the food class.

      Fig. 1. System Architecture.

    2. Memory Subsystem: The memory subsystem stores information in the form of key-value pair associations. An object is stored in memory with its associated task (key: object name; value: task).

    3. Cue Display Subsystem: The cue display subsystem is responsible for reminding the user to perform an intended action by displaying the task text while playing a sound that triggers the prospective memory (the stored value).

    When an object (key) from the memory subsystem is matched with one of the 206 pre-trained object classes by the Object Detection Subsystem, the Cue Display Subsystem is activated and the task (value) is displayed on the user's screen, followed by a sound.

    The system follows a context-aware computing paradigm, where environmental conditions (the presence of specific objects) trigger appropriate informational interventions (task reminders). This approach aligns with theoretical frameworks emphasizing the importance of cue detection in event-based prospective memory.
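The matching step described above reduces to a key lookup from detected class names into the stored associations. The prototype implements this in Unity C#; the following Python sketch only illustrates the logic, and all names here are hypothetical.

```python
def match_detections(memory, detected_class_names):
    """Return (object, task) cue pairs for the current camera frame.

    `memory` maps object name -> task, mirroring the key-value store of
    the Memory Subsystem; `detected_class_names` stands in for the class
    labels reported by the Object Detection Subsystem.
    """
    return [(name, memory[name])
            for name in detected_class_names
            if name in memory]

# Example: only the registered object produces a cue.
memory = {"pill bottle": "Take your evening medication"}
cues = match_detections(memory, ["car", "person", "pill bottle"])
# cues == [("pill bottle", "Take your evening medication")]
```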

  5. Methodology

    1. Development Environment

      To achieve the proposed objectives, the following tools were utilized:

      • Unity 6000.0.58f2 LTS as the primary development platform. Unity was selected due to its cross-platform capabilities, robust AR support, and extensive developer community. It served as the core engine for creating the MobileAR application.

      • Niantic Lightship 3.17 SDK (formerly Lightship ARDK) for AR functionality. The Niantic Lightship SDK was integrated to leverage advanced AR features, specifically real-time object detection [17, 18].

        This technology stack was selected for its cross-platform compatibility, robust AR features, including object detection and occlusion handling, and extensive developer community support.

    2. Object Detection Subsystem

      The Object Detection Subsystem is implemented by leveraging the Niantic Lightship framework. Lightship's object detection subsystem enhances contextual awareness by creating semantically labeled 2D bounding boxes that dynamically update as real-world objects appear on-screen.

      By placing Lightship's ARObjectDetectionManager in the scene and subscribing to the ObjectDetectionsUpdated event, the system receives real-time detection information in the form of XRDetectedObjects. The system also listens to the MetadataInitialized event to retrieve the list of available object classes once the model is ready. An ObjectRecognition.cs script was created to manage this component and is attached to an empty Object Recognition GameObject.

      Identification Scene Initialization: The Object Recognition Manager activates when the identification scene is initialized. It registers the OnMetadataInitialized handler; once the feature is ready, it subscribes to ObjectDetectionsUpdated to automatically receive detection results.

      Processing Detections: The ObjectDetectionsUpdated handler collects the results and presents them to the user. Since each result can contain multiple object categories, the system filters and sorts the results by their confidence values to display only the most likely classification.
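Since a single detection can carry several candidate categorizations, the confidence-based selection can be sketched as follows. This is a Python illustration of the C# logic; the threshold value is an assumption, not taken from the paper.

```python
def best_categorization(categorizations, min_confidence=0.5):
    """Pick the most likely class for one detected object.

    `categorizations` is a list of (class_name, confidence) pairs,
    standing in for the per-object categorizations the detector reports.
    Returns None when nothing clears the (illustrative) threshold.
    """
    if not categorizations:
        return None
    name, confidence = max(categorizations, key=lambda c: c[1])
    return name if confidence >= min_confidence else None
```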

    3. Memory Subsystem

      The Memory subsystem performs two primary functions: storing the classes of objects and creating associations between the objects and the tasks that will trigger prospective memory. Figure 2 illustrates the user-facing association process.

      • User Workflow: Users select a target object name, enter a specific task, and save the association locally (Figure 2).

      • Data Structure: Each object-task pair is stored with the object name serving as the unique identifier.

      • Recognition Matching: Upon detection, the system matches the recognized object to registered entries using feature descriptors and confidence scores.
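A minimal sketch of the association store follows, assuming the object name is normalized before use as the unique key; the normalization and persistence details are illustrative assumptions, not taken from the paper.

```python
class MemoryStore:
    """Local object-task association store for the Memory Subsystem.

    The object name is the unique identifier, so saving an existing
    object overwrites its task. Name normalization is an assumption
    made here so detector labels and user input compare consistently.
    """

    def __init__(self):
        self._assoc = {}

    @staticmethod
    def _key(name):
        return name.strip().lower()

    def save(self, object_name, task):
        self._assoc[self._key(object_name)] = task

    def lookup(self, detected_name):
        """Return the task for a detected object, or None if unregistered."""
        return self._assoc.get(self._key(detected_name))
```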

    4. Cue Display Subsystem

      The Cue Display Subsystem manages the output presented to the user, acting as the bridge between the Object Detection Subsystem and the associations stored in the Memory Subsystem.

      The system triggers the Cue Display Subsystem when the Object Detection Subsystem recognizes a real-world object that exists as a registered object-task association. When a match occurs, a text overlay is displayed on the screen as shown in Figure 3, accompanied by a short audio alert to stimulate the user's prospective memory.

      Fig. 2. Object-Task Association.

      • Contextual Reminders: When a registered object is detected, a reminder overlay appears displaying the associated task for 10 seconds.

      • Reminder Frequency: To maintain effectiveness without causing notification fatigue, the reminder for any specific task is repeated at most once every 60 minutes.
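The two timing rules above (a 10-second overlay, at most one reminder per task per 60 minutes) amount to a simple throttle. A Python sketch, with the bookkeeping structure assumed rather than taken from the C# implementation:

```python
import time

DISPLAY_SECONDS = 10         # how long the overlay stays on screen
REPEAT_INTERVAL = 60 * 60    # minimum seconds between cues for one task

_last_cued = {}              # task -> timestamp of the last reminder

def should_show(task, now=None):
    """Return True if this task may be cued again (60-minute throttle)."""
    now = time.time() if now is None else now
    last = _last_cued.get(task)
    if last is not None and now - last < REPEAT_INTERVAL:
        return False        # cued too recently; suppress the reminder
    _last_cued[task] = now   # record this cue and allow it
    return True
```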

  6. Results and Discussion

    This project presents the design and implementation of a MobileAR-based prospective memory support system. The system was successfully developed and tested for technical functionality on smartphones, demonstrating reliable real-time object detection and user-driven task associations within simulated scenarios. Informal technical validation establishes core system capabilities, confirming that MobileAR can deliver contextually triggered memory cues when registered objects are detected in the environment.

    AR enables digital interventions to appear precisely when the relevant object is encountered, anchoring cognitive support in physical space. This aligns with theoretical models emphasizing the value of salient, spatially relevant cues for automatic intention retrieval [11, 12, 15].

    The system grounds prospective memory theory in practical AR design, suggesting that real-world object cues offer distinct advantages for event-triggered intention retrieval. The approach integrates contextual computing principles with established cognitive research, providing a bridge from laboratory paradigms to real-world environments, at least at the prototype stage.

    Fig. 3. Object Detection and Task Overlay.

    The developed MobileAR system demonstrates technical feasibility for real-time detection and context-aware reminder delivery using smartphones. By leveraging real-world object recognition as event-based cues, the application provides a novel approach to prospective memory support that can potentially enhance cue salience, reduce monitoring burdens, and improve ecological integration [10, 11, 15, 19].

    This work contributes to event-based prospective memory augmentation in naturalistic environments. While traditional memory aids rely on time-based reminders or static cues, the present system actively detects environmental cues and intervenes at the moment they are encountered.

  7. Conclusion

    This project has successfully demonstrated the technical feasibility of EB-VMEM, an Augmented Reality (AR) framework designed to mitigate event-based prospective memory (EBPM) failures through real-time, context-aware assistance. While traditional memory aids like alarms and notes remain largely static and time-dependent, this research shows that MobileAR can effectively bridge the gap between planned intentions and environmental cues.

    By leveraging the Unity engine and the Niantic Lightship SDK, the developed system provides a robust platform for associating tasks with physical objects, such as doorways or pill bottles, and triggering digital overlays precisely when those objects are detected in the user's environment. Technical validation confirmed that the system can perform on-device inference to maintain user privacy while delivering salient, spatially anchored reminders that align with natural cognitive processes.

    However, the transition from technical prototype to a practical cognitive tool requires addressing several critical frontiers. The current reliance on handheld smartphones imposes ergonomic and battery constraints that may hinder long-term habituation. Furthermore, while initial performance metrics are promising, comprehensive user-centered studies are essential to quantify the system's impact on actual task completion rates and cognitive load in real-world settings.

    The ultimate potential of this work lies in its evolution toward wearable AR glasses, which will enable seamless, hands-free cognitive augmentation. By grounding AR design in the principles of prospective memory theory, this project establishes a foundation for future ubiquitous memory assistants that empower individuals, particularly those with cognitive impairments, to navigate their daily lives with greater autonomy and confidence.

  8. Future Directions

To build upon the current prototype and address the limitations of static object-to-task mapping, several avenues for future research are proposed:

LLM Integration for Dynamic Context: Integrating Large Language Models (LLMs) could allow the system to interpret complex environments. Rather than simple key-value pairs, the system could understand that detecting a backpack on a Friday morning implies a reminder for gym clothes, providing a more nuanced layer of cognitive support [13, 19].

Enhanced Spatial Persistence: Future iterations will leverage advanced spatial anchoring to ensure that reminders remain locked to physical objects even when the user moves the smartphone camera away, utilizing the full potential of the Niantic Lightship persistence APIs [17, 20].

Wearable AR Integration: Moving the framework from smartphones to AR glasses would further reduce the monitoring burden, allowing cues to appear naturally in the user's field of vision without the need for active handheld device interaction [9, 21].

Personalization and Adaptation: Future versions could implement machine learning algorithms to track hit rates of intended actions, automatically adjusting the salience (size, color, or sound) of reminders based on the user's past performance [1, 14].

User-Centered Evaluation: The next key step is conducting comprehensive user studies to measure the system's impact on prospective memory performance, cognitive workload, and user acceptance. Such studies should recruit target populations and deploy the app in daily-life settings, comparing AR-based reminders to traditional aids.
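As a concrete illustration of the dynamic-context direction above, the backpack example could be expressed as a contextual rule of the kind an LLM layer might generate and the system might evaluate at detection time. Everything here (rule format, names, reminder text) is hypothetical.

```python
from datetime import datetime

# Each rule pairs an object class with a time predicate and a reminder.
RULES = [
    ("backpack",
     lambda t: t.weekday() == 4 and t.hour < 12,  # Friday morning
     "Pack your gym clothes"),
]

def contextual_reminders(detected_object, when):
    """Return reminders whose object and time context both match."""
    return [text for obj, predicate, text in RULES
            if obj == detected_object and predicate(when)]
```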

References

  1. O. J. Nwobodo, G. S. Kuaban, K. Wereszczyński, and K. A. Cyran, "Enhancing learning in augmented reality (AR): A deep learning framework for predicting memory retention in AR environments," in Computational Science, ICCS 2025 Workshops, ser. Lecture Notes in Computer Science, M. Paszynski, A. S. Barnard, and Y. J. Zhang, Eds., vol. 15912. Cham: Springer Nature Switzerland, 2025, pp. 92–106.

  2. L. Strickland, A. Heathcote, M. S. Humphreys, and S. Loft, "Target learning in event-based prospective memory," J. Exp. Psychol. Learn. Mem. Cogn., vol. 48, no. 8, pp. 1110–1126, Aug 2022.

  3. "Time-based prospective memory," Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Time-based_prospective_memory

  4. S. Pathak, A. S. Tiwari, P. Gupta, and K. Tiwari, "Road to successful prospective memory execution: Comparing event based and time based intentions," Int. J. Indian Psychol., vol. 13, no. 4, pp. 1–11, Oct 2025.

  5. P. Kourtesis and S. E. MacPherson, "An ecologically valid examination of event-based and time-based prospective memory using immersive virtual reality: The influence of attention, memory, and executive function processes on real-world prospective memory," Neuropsychol. Rehabil., vol. 33, no. 2, pp. 255–280, Feb 2023.

  6. C. Cuesta, E.-V. Cores, M. Rivara, L. Soccini, and D.-G. Politis, "Prospective memory in patients with mild cognitive impairment," J. Appl. Cogn. Neurosci., vol. 4, no. 1, p. e00384887, Jun 2023.

  7. M. K. Scullin, M. A. McDaniel, and J. T. Shelton, "The dynamic multiprocess framework: Evidence from prospective memory with contextual variability," Cognit. Psychol., vol. 67, no. 1–2, pp. 55–71, Aug 2013.

  8. Z. Makhataeva, T. Akhmetov, and H. A. Varol, "Augmented reality-based human memory enhancement using artificial intelligence," Jul 2023.

  9. O. Rosello, M. Exposito, and P. Maes, "NeverMind: Using augmented reality for memorization," in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan: ACM, Oct 2016, pp. 215–216.

  10. S. Katai, T. Maruyama, T. Hashimoto, and S. Ikeda, "Event based and time based prospective memory in Parkinson's disease," J. Neurol. Neurosurg. Psychiatry, vol. 74, no. 6, p. 704, Jun 2003.

  11. "Beyond the bubble: How context-aware memory systems are changing the game in 2025," Tribe AI. [Online]. Available: https://www.tribe.ai/applied-ai/beyond-the-bubble-how-context-aware-memory-systems-are-changing-the-game-in-2025

  12. P. Jadaun, C. Cui, S. Liu, and J. A. C. Incorvia, "Adaptive cognition implemented with a context-aware and flexible neuron for next-generation artificial intelligence," PNAS Nexus, vol. 1, no. 5, p. pgac206, Nov 2022.

  13. Xiangrong et al., Intelligent interaction strategies for context-aware cognitive augmentation, 2025.

  14. L. Ji et al., Effect of executive function on event-based prospective memory for different forms of learning disabilities, Front. Psychol., vol. 12, p. 528883, Mar 2021.

  15. R. Román-Caballero and G. Mioni, "Time-based and event-based prospective memory in mild cognitive impairment and Alzheimer's disease patients: A systematic review and meta-analysis," Neuropsychol. Rev., vol. 35, no. 1, pp. 102–125, Mar 2025.

  16. R. A. E. Haddad, Z. Wang, and Y. Shin, "AR secretary agent: Real-time memory augmentation via LLM-powered augmented reality glasses," 2024.

  17. "How to enable object detection," Niantic Spatial Platform. [Online]. Available: https://lightship.dev/docs/ardk/how-to/ar/objectdetection/

  18. "ar-object-detection/README.md at main · Tongzhou-Yu/ar-object-detection," GitHub. [Online]. Available: https://github.com/Tongzhou-Yu/ar-object-detection/blob/main/README.md

  19. "How memory augmentation can improve large language models," IBM Research. [Online]. Available: https://research.ibm.com/blog/memory-augmented-LLMs

  20. "Recording and playback." [Online]. Available: https://www.nianticspatial.com/docs/ardk/features/playback/

  21. R. A. E. Haddad, Z. Wang, Y. Shin, R. Liu, Y. Wang, and C. Yu, "AR secretary agent: Real-time memory augmentation via LLM-powered augmented reality glasses," 2025.