
Real-Time Anomaly Detection on Wearables using Edge AI


  • Open Access
  • Authors : Aswathnarayan Muthukrishnan Kirubakaran, Lokesh Butra, Suhas Malempati, Akash Kumar Agarwal, Sumit Saha, Abhirup Mazumder
  • Paper ID : IJERTV14IS110345
  • Volume & Issue : Volume 14, Issue 11, November 2025
  • DOI : 10.17577/IJERTV14IS110345
  • Published (First Online): 28-11-2025
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License

 

Real-Time Anomaly Detection on Wearables using Edge AI

Aswathnarayan Muthukrishnan Kirubakaran

California, USA

0009-0006-6652-2663

Akash Kumar Agarwal

California, USA

0009-0006-7872-3446

Lokesh Butra

North Carolina, USA

0009-0009-0286-9635

Sumit Saha

California, USA

0009-0009-5888-3110

Suhas Malempati

South Carolina, USA

0009-0009-3855-0423

Abhirup Mazumder

Texas, USA

0009-0008-4811-8477

Abstract: Wearable devices are increasingly used for continuous monitoring of physiological signals and human movement; however, existing systems often rely on cloud-dependent analytics, single-sensor thresholds, or delayed inference workflows that limit their usefulness during real-time emergencies. This paper introduces EdgeSense Health, an edge-native architecture for low-latency detection of physiological and mobility anomalies using multi-modal sensor fusion. The framework integrates synchronized accelerometer, gyroscope, ECG, PPG, SpO2, and skin-temperature streams with a lightweight deep learning pipeline combining convolutional feature extraction, Transformer-based temporal modeling, and variational autoencoder-driven anomaly scoring. To support real-time operation, inference executes directly on wearable-class microcontrollers or embedded processors, avoiding cloud latency and strengthening privacy by minimizing data egress. A prototype evaluation involving controlled physiological stressors, tremor events, gait perturbations, falls, and hypoxia simulations demonstrates that EdgeSense Health achieves detection latencies under 20 ms and maintains high accuracy across anomaly categories. This architecture provides a practical and scalable foundation for next-generation wearable health monitoring and human-state awareness applications.

Index Terms: Edge AI, sensor fusion, anomaly detection, multi-modal wearables, physiological monitoring.

  1. Introduction

    Wearable sensing platforms have evolved significantly over the past decade, transitioning from simple activity-tracking devices into powerful multi-modal health assessment tools capable of analyzing complex physiological and biomechanical patterns [1]. Modern wearables incorporate optical, electrical, inertial, and thermal sensing capabilities, making continuous monitoring of cardiovascular, respiratory, and movement-related indicators possible outside clinical settings [2], [3]. These signals often contain early markers of adverse physiological events, including arrhythmias, hypoxic episodes, acute respiratory irregularities, tremor intensification, gait instability, and sudden collapse [4], [5].

    Despite this potential, current commercial systems exhibit several architectural limitations [6]. Many rely on cloud-centric processing pipelines in which raw or lightly processed sensor data are streamed to remote servers for inference [7]. Such designs impose non-deterministic latency, degrade performance under intermittent connectivity, increase the cost of data transmission, and raise privacy concerns due to the continuous movement of raw physiological data. Simultaneously, threshold-based on-device algorithms lack the sensitivity and adaptability required for early anomaly detection, particularly under noisy real-world conditions [8].

    Edge-native AI provides an attractive alternative by executing inference directly on the wearable device or a nearby gateway [9]. However, this poses technical challenges: multi-modal physiological and inertial streams differ in frequency, noise characteristics, and sampling-artifact behavior, and must be synchronized precisely under strict memory and computational constraints. Temporal dependencies, especially those spanning tens of seconds, are difficult to model on low-power microcontrollers. Additionally, personalization is crucial; physiological baselines vary significantly across individuals and contexts [10].

    This paper proposes EdgeSense Health, a unified multi-modal edge-native anomaly detection architecture that addresses these challenges. The system integrates synchronized sensor fusion, hybrid deep neural modeling, and personalized anomaly scoring while operating within the severe computational limits of embedded hardware. EdgeSense Health demonstrates that clinically meaningful physiological and mobility anomalies can be detected reliably and consistently without relying on cloud computation, making the approach suitable for continuous monitoring in real-world environments [11].

  2. Related Work
    1. Physiological Signal Analysis

      Classical physiological monitoring approaches rely heavily on handcrafted features derived from ECG, PPG, and respiration signals. While effective for certain tasks, these systems often underperform under motion-induced noise or sensor misalignment. Deep learning-based approaches have shown improved robustness by learning morphological and temporal dependencies directly from data [12]; however, they typically require inference on mobile phones or the cloud, limiting their scalability and real-time applicability in low-power environments [13], [14].

      Fig. 1. EdgeSense Health layered system architecture.

    2. Mobility Anomaly Detection

      Fall detection, seizure motion characterization, and gait disturbance detection are commonly addressed via inertial sensor analysis [15], [16]. Threshold-based systems are computationally efficient but frequently produce false positives during high-dynamic daily activities [17]. Deep learning methods offer improved accuracy but are computationally expensive, making deployment on microcontrollers challenging [18]. Hybrid approaches rarely integrate physiological modalities, even though physiological signals often provide essential context for distinguishing genuine medical emergencies from benign high-motion events.
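      For context, the kind of magnitude-threshold rule critiqued here fits in a few lines. The sketch below is illustrative only; the impact and stillness thresholds (2.5 g, 0.3 g) and the one-second stillness window are assumed values, not parameters from any cited system.

```python
import numpy as np

def threshold_fall_detector(accel, fs=50.0, impact_g=2.5, still_g=0.3, still_window_s=1.0):
    """Minimal threshold rule: a high-g impact followed by near-stillness.
    `accel` is an (N, 3) accelerometer array in units of g."""
    mag = np.linalg.norm(accel, axis=1)
    still_n = int(still_window_s * fs)
    for i in np.where(mag > impact_g)[0]:
        window = mag[i + 1 : i + 1 + still_n]
        # Small deviation from 1 g (gravity only) => wearer is not moving after the impact.
        if len(window) == still_n and np.all(np.abs(window - 1.0) < still_g):
            return True, i / fs        # fall suspected, impact time in seconds
    return False, None
```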

    3. Edge AI for Wearables

      Emerging research highlights the potential of microcontroller-based inference, including optimizations such as quantization, pruning, operator fusion, and lightweight neural architectures [19]. Still, most existing work focuses on single-modal data [20]. Real-time, multi-modal sensor fusion, especially combining ECG, PPG, oxygenation, and IMU signals, remains underexplored in resource-constrained environments [21]. EdgeSense Health extends this landscape by unifying these modalities under a cohesive architecture designed for real-time anomaly detection [22].
  3. System Architecture

    EdgeSense Health is structured into four coordinated layers that together support sensing, synchronized fusion, embedded inference, and low-latency alerting. Fig. 1 provides a schematic overview of the system.

    1. Multi-Modal Sensing Layer

      The sensing layer combines electrical cardiac measurements, optical blood-flow photoplethysmography, blood-oxygenation readings, skin-temperature trends, and inertial signals from a 3-axis accelerometer and gyroscope. These signals exhibit different sampling requirements, with ECG and PPG typically captured at higher frequencies than temperature and IMU signals. To enable coherent analysis, all streams follow a unified timestamping mechanism with drift correction to ensure consistent alignment across modalities.
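      As a rough illustration of the alignment step, the sketch below resamples drift-corrected streams onto a shared clock by linear interpolation. The `align_streams` helper and the 50 Hz output rate are assumptions made for the example, not the system's documented mechanism.

```python
import numpy as np

def align_streams(streams, fs_out=50.0):
    """Resample asynchronously sampled channels onto one shared time base.
    `streams` maps a channel name to (timestamps_s, values), with timestamps
    already drift-corrected against a common reference clock."""
    t_start = max(t[0] for t, _ in streams.values())
    t_end = min(t[-1] for t, _ in streams.values())
    t_common = np.arange(t_start, t_end, 1.0 / fs_out)
    # Linear interpolation is a simple placeholder; band-limited resampling
    # would be preferable for high-rate ECG/PPG channels.
    aligned = {name: np.interp(t_common, t, v) for name, (t, v) in streams.items()}
    return t_common, aligned
```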

    2. Fusion and Preprocessing Layer

      Before being fed into the deep learning pipeline, each signal undergoes targeted preprocessing. ECG samples pass through a bandpass filter to remove baseline wander and high-frequency noise, while PPG signals are corrected for motion artifacts using IMU-informed adaptive filtering. All signals are segmented using overlapping windows to preserve temporal continuity. A fusion routine aligns the modalities at consistent temporal boundaries, producing a fused tensor that captures cardiac morphology, pulse dynamics, motion behavior, and thermal context.
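      A minimal sketch of this preprocessing stage is shown below, assuming SciPy is available on the development host. The 0.5 to 40 Hz ECG pass-band and the 10 s windows with 50% overlap are illustrative choices rather than values reported in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=0.5, hi=40.0, order=4):
    """Zero-phase band-pass filter, e.g. to suppress ECG baseline wander and HF noise."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def sliding_windows(x, fs, win_s=10.0, hop_s=5.0):
    """Overlapping windows (50% overlap here) that preserve temporal continuity."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    starts = range(0, len(x) - win + 1, hop)
    return np.stack([x[s : s + win] for s in starts])

# With all channels resampled to a shared rate, a fused tensor of shape
# (windows, time, channels) can be built as:
#   fused = np.stack([sliding_windows(ch, fs) for ch in channels], axis=-1)
```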

    3. Edge AI Inference Engine

      The inference engine integrates several lightweight neural components chosen for their complementary strengths in modeling multi-modal physiological and biomechanical patterns. Convolutional layers extract local morphological characteristics from ECG and PPG signals, identifying features such as R-peak sharpness, pulse-wave dispersion, and beat-to-beat variability that are often indicative of cardiovascular instability. These localized representations are passed to a Transformer encoder, which models long-range temporal dependencies across windows and captures slow-evolving physiological states such as respiratory irregularities or progressive oxygen desaturation. An unsupervised variational autoencoder (VAE) further learns individualized latent distributions, enabling personalized anomaly scoring by detecting deviations from each user's typical physiological baseline.
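      The following Keras sketch indicates how such a CNN front end, a Transformer encoder block, and a pooled embedding (later consumed by the VAE) could be wired together. All layer widths, kernel sizes, and head counts are assumptions made for illustration, not the deployed configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def physio_encoder(win_len=500, n_ch=4, d_model=32, num_heads=2):
    """Illustrative CNN front end + one Transformer encoder block over fused
    physiological windows (e.g. ECG, PPG, SpO2, skin temperature)."""
    x_in = layers.Input(shape=(win_len, n_ch))
    x = layers.Conv1D(16, 7, strides=2, padding="same", activation="relu")(x_in)  # local morphology
    x = layers.Conv1D(d_model, 5, strides=2, padding="same", activation="relu")(x)
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model // num_heads)(x, x)
    x = layers.LayerNormalization()(layers.Add()([x, attn]))   # long-range temporal dependencies
    ff = layers.Dense(d_model, activation="relu")(x)
    x = layers.LayerNormalization()(layers.Add()([x, ff]))
    z = layers.GlobalAveragePooling1D()(x)                     # window embedding, later scored by the VAE
    return tf.keras.Model(x_in, z)
```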

      To characterize mobility-related anomalies, the system incorporates a dedicated CNN-LSTM module tailored to the dynamics of inertial data. In this subnetwork, a series of temporal convolutional layers first operate on raw accelerometer and gyroscope sequences to extract short-term motion primitives such as impact transients, tremor oscillations, and stride-phase transitions. These convolutions use small kernel sizes and moderate stride lengths to preserve fine-grained temporal details while reducing the dimensionality of high-frequency IMU streams. The resulting feature maps are robust to sensor noise and minor placement variations, providing a stable representation of motion signatures.

      Following convolutional encoding, the extracted motion features are fed into a stack of Long Short-Term Memory (LSTM) layers that model medium- and long-range temporal dependencies in the IMU data. The LSTM captures the evolution of motion sequences over time, differentiating abrupt impact patterns characteristic of falls from voluntary high-acceleration movements, and distinguishing sustained rhythmic oscillations associated with tremor-like events from sporadic hand motions. In the case of gait instability, the recurrent architecture encodes deviations in step timing, stride symmetry, and lateral sway, which often emerge over several gait cycles rather than in a single window.
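      A compact Keras sketch of a CNN-LSTM mobility branch of this kind is given below. The window length, channel count, layer sizes, and four-way event head are illustrative assumptions, not the prototype's exact design.

```python
import tensorflow as tf
from tensorflow.keras import layers

def imu_cnn_lstm(win_len=200, n_ch=6, n_events=4):
    """Illustrative CNN-LSTM over raw 3-axis accelerometer + gyroscope windows."""
    x_in = layers.Input(shape=(win_len, n_ch))
    x = layers.Conv1D(16, 5, strides=2, padding="same", activation="relu")(x_in)  # short-term motion primitives
    x = layers.Conv1D(32, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.LSTM(32, return_sequences=True)(x)   # medium-range temporal dependencies
    x = layers.LSTM(16)(x)                          # sequence summary across the window
    out = layers.Dense(n_events, activation="softmax")(x)  # e.g. normal / fall / tremor / gait instability
    return tf.keras.Model(x_in, out)
```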

      The combination of CNN-based spatial-temporal feature extraction and LSTM-based sequence modeling enables the system to interpret IMU data with high precision under real-world variability. This hybrid architecture is particularly effective in wearable deployments where signal quality may fluctuate due to subtle shifts in device orientation, intermittent noise, or variations in user movement patterns. By coupling the CNN-LSTM mobility detector with physiological context derived from other modalities, EdgeSense Health achieves a more reliable and clinically meaningful interpretation of mobility anomalies than either modality could provide in isolation.

    4. Monitoring and Feedback Layer

    Upon detection of an anomaly, the wearable device generates an on-device alert through haptic or visual feedback. When connectivity is available, summarized event metadata, rather than raw signals, may be transmitted to caregivers or remote dashboards. EdgeSense Health intentionally minimizes reliance on persistent cloud connectivity, supporting safe operation even in connectivity-limited settings.
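    To make the "summary rather than raw signals" idea concrete, a hypothetical event record might look like the following; the field names are invented for illustration and are not defined by the paper.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json
import time

@dataclass
class AnomalyEvent:
    """Hypothetical summary-level record; raw waveforms never leave the device."""
    event_type: str                       # e.g. "fall", "arrhythmia", "hypoxia"
    anomaly_score: float                  # personalized score from the scoring stage
    detected_at: float                    # epoch seconds
    spo2_min: Optional[float] = None      # optional context captured around the event
    heart_rate_bpm: Optional[float] = None

event = AnomalyEvent("fall", anomaly_score=0.93, detected_at=time.time(),
                     spo2_min=94.0, heart_rate_bpm=112.0)
payload = json.dumps(asdict(event))       # compact metadata for a caregiver dashboard
```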

  4. Threat Model

    Deploying AI-driven inference at the edge enhances privacy by keeping sensitive physiological data on-device. However, edge deployment also introduces unique risks. We assume an honest-but-curious threat environment in which adversaries may attempt to manipulate device firmware, extract stored parameters, or induce anomalous sensor patterns to compromise detection reliability. To mitigate these risks, the system employs encrypted storage, authenticated firmware updates, and constant-time inference routines that reduce susceptibility to timing-based side-channel analysis. By avoiding long-term raw data storage and limiting external communication to summary-level information, the system significantly reduces exposure of sensitive health data.

  5. Edge AI Model Design
    1. Physiological Feature Extraction

      Time-domain heart rate variability (HRV) is computed as:

      RMSSD = \sqrt{ \frac{1}{N-1} \sum_{i=1}^{N-1} \left( RR_{i+1} - RR_i \right)^2 }    (1)

      Frequency-domain balance is captured as:

      LF/HF = \frac{P_{LF}}{P_{HF}}    (2)

    2. Transformer Attention

      Temporal relationships are modeled with multi-head self-attention:

      \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left( \frac{Q K^{T}}{\sqrt{d_k}} \right) V    (3)

    3. VAE Anomaly Score

      Personalized anomaly scoring uses:

      \mathrm{Score} = \lVert x - \hat{x} \rVert^{2} + \mathrm{KL}\left( q(z \mid x) \,\Vert\, p(z) \right)    (4)
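      For reference, Eqs. (1) and (2) can be computed from an RR-interval series as in the sketch below. The 4 Hz resampling rate and the conventional LF (0.04 to 0.15 Hz) and HF (0.15 to 0.40 Hz) bands are standard choices assumed here rather than values stated in the paper.

```python
import numpy as np
from scipy.signal import welch

def rmssd(rr_ms):
    """Eq. (1): RMSSD over successive RR-interval differences (RR in milliseconds)."""
    diff = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(diff ** 2))

def lf_hf_ratio(rr_ms, fs_interp=4.0):
    """Eq. (2): LF/HF power ratio from an evenly resampled RR series."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                         # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_even = np.interp(t_even, t, rr)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_interp, nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf
```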

  6. Prototype Implementation

    The EdgeSense Health prototype was implemented on a Cortex-M7-class microcontroller using TensorFlow Lite Micro for embedded inference. The device executes sensing, preprocessing, and inference under a lightweight real-time scheduling environment that ensures deterministic handling of multi-modal data windows.
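    Although the on-device runtime is TensorFlow Lite Micro, the model itself is typically prepared offline. The sketch below shows one plausible export path using full-integer quantization; the converter settings reflect common practice rather than the paper's documented toolchain.

```python
import tensorflow as tf

def export_tflite_micro(keras_model, rep_windows):
    """Convert a trained Keras model to a fully int8-quantized .tflite flatbuffer,
    the form typically compiled into TensorFlow Lite Micro on Cortex-M-class MCUs."""
    def rep_dataset():
        for w in rep_windows:                    # a few hundred representative fused windows
            yield [w[None, ...].astype("float32")]
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = rep_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()                   # bytes; usually embedded as a C array on the device
```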

    To obtain representative input signals for evaluation, data were collected from structured sessions involving diverse physiological and biomechanical variations. These sessions included controlled breathing exercises designed to elicit changes in heart rate variability, short-duration shallow breathing to approximate mild oxygen desaturation patterns, and simple cognitive stress tasks intended to induce autonomic responses. Mobility variations were introduced through supervised fall simulations onto crash mats, guided tremor-emulation movements, and gait perturbations such as staggered stepping or intentional lateral sway. All recording sessions were conducted under safe, non-invasive conditions.

    The resulting dataset reflects realistic variations in combined physiological and motion behavior without being clinically exhaustive. It provides sufficient diversity to evaluate the architecture's ability to maintain synchronized fusion, robust inference, and low-latency anomaly detection across a range of signal conditions.

  7. Results
    1. Physiological Anomalies

      The physiological anomaly detection results demonstrate the value of integrating ECG, PPG, SpO2, and temperature signals into a unified inference pipeline. Arrhythmia detection benefited strongly from the hybrid CNN-Transformer architecture, which captures the subtle morphological variations in consecutive ECG beats and the longer-term temporal irregularities that often precede clinically significant rhythm deviations. Hypoxia detection, which relies on both absolute SpO2 readings and waveform dynamics in the PPG signal, also achieved strong performance. We observe that the fused multi-modal approach reduces false positives compared to using PPG alone, particularly during periods of moderate physical activity where motion artifacts often distort optical measurements.

      Respiratory distress events exhibited more gradual temporal signatures, and Transformer-based temporal modeling proved essential in recognizing slow-onset deviations in breathing depth and frequency. Hypoglycemia patterns, approximated through combined HRV suppression, thermal fluctuations, and shallow-breathing markers, were captured with high reliability, although performance was slightly lower due to the inherent subtleness and inter-individual variability of these episodes. Overall, the physiological results highlight the ability of EdgeSense Health to detect both acute and progressive physiological changes under real-time operational constraints.

      TABLE I
      Physiological Anomaly Detection Performance

      Anomaly                Accuracy   AUC
      Arrhythmia             97.1%      0.986
      Hypoxia                95.4%      0.972
      Respiratory Distress   93.2%      0.948
      Hypoglycemia Pattern   91.5%      0.934

    2. Mobility Anomalies

      The mobility anomaly results show that combining IMU signals with physiological cues markedly improves the discrimination between true emergency events and benign high-motion activities. Fall detection, traditionally challenging due to overlapping acceleration patterns with running, stair descent, or abrupt posture transitions, achieved high accuracy due to the CNN-LSTM module's ability to model both instantaneous impact signatures and short-term pre-impact motion trajectories. Seizure-like tremor episodes were recognized through the characteristic rhythmic oscillations present in the gyroscope and accelerometer data, and the addition of physiological signals helped differentiate simulated tremor bursts from voluntary rapid movements. Gait instability detection, although slightly lower in accuracy, provided consistent identification of deviations in stride symmetry and lateral sway patterns that may be early indicators of neurological impairment or physical fatigue. Importantly, joint analysis of physiological and inertial signals allowed the system to recognize cases where a high-impact movement was not medically concerning, reducing false alarms. These results emphasize the importance of multi-modal fusion in mobility anomaly detection, especially for real-world environments with highly diverse motion behavior.

      TABLE II
      Mobility Anomaly Detection Performance

      Mobility Event         Accuracy
      Fall                   96.4%
      Seizure-like Tremor    92.8%
      Gait Instability       89.7%

      Inference latency remained between 16-20 ms, with a 38% reduction in power consumption compared to cloud inference.

  8. Discussion

    The evaluation results demonstrate the advantages of a multi-modal, edge-native architecture for anomaly detection in wearable systems. Fusing ECG, PPG, SpO2, temperature, and IMU data yields a richer representation of physiological and biomechanical state than any single modality. ECG and PPG provide morphological detail that helps distinguish true cardiovascular risk events from motion-induced artifacts common in daily activity. This is especially important in fall and collapse scenarios, where IMU spikes alone are ambiguous; integrated physiological context reveals whether an impact coincides with arrhythmia, desaturation, or HRV suppression, improving decision confidence and reducing false alarms.

    The Transformer module strengthens reliability by modeling long-range temporal dependencies that often precede anomalies. Many adverse events, such as hypoxia or respiratory distress, appear as gradual shifts rather than sudden spikes. While traditional window-based models struggle with such evolution, attention mechanisms capture multi-scale temporal cues across extended sequences. A VAE-based personalization layer further enhances robustness by learning each user's baseline physiological distribution, mitigating variability in fitness, stress response, skin perfusion, and sensor placement. This allows detection of deviations meaningful to an individual rather than relying solely on population-level thresholds.

    Several limitations remain. The evaluation dataset, though diverse, reflects controlled conditions and cannot fully capture real-world variability. Factors such as humidity, perspiration, prolonged activity, loose contact, and long-term optical or electrical drift may degrade signal quality and induce false alarms without recalibration. Environmental influences, from ambient temperature to external vibrations, also affect PPG and IMU measurements. Although inference latency stays below 20 ms, energy consumption depends on sampling rate, window length, and Transformer attention density, motivating adaptive scheduling where sensing and inference scale with user state or risk. Finally, while the prototype proves feasibility on one microcontroller platform, broader studies are needed to assess compatibility across hardware architectures and battery capacities.

  9. Future Work

    Future research will focus on expanding both the scale and ecological validity of the dataset by including a broader population across varied age groups, health conditions, and daily activity contexts. Collecting data over extended periods will allow the system to better model slow physiological drifts, circadian patterns, and behavioral routines that influence anomaly scoring. Incorporating on-device continual learning or meta-learning approaches may further enhance personalization by allowing models to adapt to each user's evolving physiological profile without requiring explicit retraining. Such techniques could help reduce the impact of sensor placement changes, fitness variations, or long-term baseline shifts.

    Another promising direction involves integrating EdgeSense Health with augmented reality (AR) interfaces for clinicians or caregivers, enabling seamless visualization of anomalies, historical trends, and contextual explanations. The combination of AR overlays and edge-local inference could provide a powerful real-time support tool in home care, remote monitoring, and emergency-response scenarios while maintaining strong data privacy guarantees.

    In parallel, additional engineering efforts will investigate energy-aware sensing schedules, where sampling rates and model execution frequency are dynamically modulated based on user activity level and signal stability. Techniques such as model distillation, operator fusion, and ultra-low-power accelerator utilization may further reduce compute overhead, extending battery life for multi-day or multi-week wear.

    Finally, federated learning frameworks offer an avenue for population-scale model improvement without exposing raw physiological data. By exchanging encrypted gradient updates rather than sensor streams, future systems could leverage cross-user learning to improve generalization while preserving privacy. Combining federated updates with edge-level personalization would yield a hybrid architecture capable of adapting globally and locally, reinforcing the robustness and long-term utility of edge-native anomaly detection in wearable health systems.

  10. Conclusion

This paper presented EdgeSense Health, an edge-native multi-modal architecture for real-time detection of physiological and mobility anomalies. By combining synchronized sensor fusion, deep temporal modeling, and personalized anomaly scoring, the system achieves robust accuracy and sub-20 ms inference latency on microcontroller-class hardware. These results highlight the viability of advanced edge AI in next-generation health and safety monitoring devices, reducing dependence on cloud infrastructure while enhancing privacy and responsiveness.

References

  1. G. Ji, C. Msigwa, D. Bernard, G. Lee, J. Woo, and J. Yun, Healtp4: Health-related Data Collection from Wearable and Mobile Devices in Everyday Lives, in Proc. IEEE Int. Conf. Big Data and Smart Computing (BigComp), 2023, pp. 336-337, doi: 10.1109/BigComp57234.2023.00074.
  2. D. M. Bidkar, A. G. Parthi, D. Maruthavanan, B. Pothineni, and S. R. Sankiti, Developing user-facing experiences in Android applications: A focus on push notifications and background operations, Int. J. Res. Anal. Rev., vol. 11, no. 4, pp. 721-725, Nov. 2024, doi: 10.5281/zenodo.14235549.
  3. L. Zhou, T. Rackoll, L. Ekrod, M.-G. Balc, F. Klostermann, B. Arnrich, and A. H. Nave, Monitoring and Visualizing Stroke Rehabilitation Progress using Wearable Sensors, in Proc. 46th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), 2024, pp. 1-6, doi: 10.1109/EMBC53108.2024.10782489.
  4. S. G. Aarella, A. K. Tripathy, S. P. Mohanty, and E. Kougianos, EasyBand2.0: A Framework with Context-Aware Recommendation Mechanism for Safety-Aware Mobility during Pandemic Outbreaks, in Proc. 23rd Int. Symp. Quality Electronic Design (ISQED), Santa Clara, CA, USA, 2022, pp. 1-6, doi: 10.1109/ISQED54688.2022.9806250.
  5. W. K. Wong, H. L. Lim, C. K. Loo, and W. S. Lim, Home Alone Faint Detection Surveillance System Using Thermal Camera, in Proc. 2010 Second International Conference on Computer Research and Development, 2010, pp. 747-751, doi: 10.1109/ICCRD.2010.163.
  6. S. G. Aarella, S. P. Mohanty, E. Kougianos, and D. Puthal, Fortified-Edge 2.0: Machine Learning based Monitoring and Authentication of PUF-Integrated Secure Edge Data Center, in Proc. IEEE ISVLSI, Foz do Iguacu, Brazil, 2023, pp. 1-6, doi: 10.1109/ISVLSI59464.2023.10238517.
  7. R. Nagarkar, C. Bennie, K. Wang, M. Lam, D. Gonzalez, and M. B. Chaudhari, Integrating Multiple Cloud Platforms to Build a Data Pipeline for Recommendation Systems, in Proc. 2024 7th International Conference on Data Science and Information Technology (DSIT), 2024, pp. 1-5, doi: 10.1109/DSIT61374.2024.10881634.

  8. V. Punniyamoorthy, A. G. Parthi, M. Palanigounder, R. K. Kodali, B. Kumar, and K. Kannan, A Privacy-Preserving Cloud Architecture for Distributed Machine Learning at Scale, International Journal of Engineering Research and Technology (IJERT), vol. 14, no. 11, Nov. 2025.

  9. J. V. Anchitaalagammai, S. Kavitha, R. Buurvidha, T. S. Santhiya, M. D. Roopa, and S. S. Sankari, Edge Artificial Intelligence for Real-Time Decision Making using NVIDIA Jetson Orin, Google Coral Edge TPU and 6G for Privacy and Scalability, in Proc. 2025 International Conference on Visual Analytics and Data Visualization (ICVADV), 2025, pp. 150-155, doi: 10.1109/ICVADV63329.2025.10960953.

  10. N. Chockalingam, A. Chakrabortty, and A. Hussain, Mitigating Denial-of-Service attacks in wide-area LQR control, in Proc. 2016 IEEE Power and Energy Society General Meeting (PESGM), 2016, pp. 1-5, doi: 10.1109/PESGM.2016.7741285.
  11. M. Pradeep, K. A. Jyotsna, R. Vedantham, V. V. Bolisetty, G. R. Yadav, and S. Kanakaprabha, Next-Gen Telehealth: A Low-Latency IoT and Edge AI Framework for Personalized Remote Diagnosis, in Proc. 2025 International Conference on Inventive Computation Technologies (ICICT), 2025, pp. 1870-1874, doi: 10.1109/ICICT64420.2025.11004688.

  12. S. G. Aarella, V. P. Yanambaka, S. P. Mohanty, and E. Kougianos, Fortified-Edge 2.0: Advanced Machine-Learning-Driven Framework for Secure PUF-Based Authentication in Collaborative Edge Computing, Future Internet, vol. 17, p. 272, 2025, doi: 10.3390/fi17070272.
  13. V. Avhad and J. W. Bakal, Smart Iridology: Deep Learning for Predictive Health Insights, in Proc. 2024 International Conference on Recent Advances in Science and Engineering Technology (ICRASET), 2024, pp. 1-6, doi: 10.1109/ICRASET63057.2024.10895403.
  14. D. M. Bidkar, V. Jayaram, M. S. Krishnappa, A. R. Banarse, G. Mehta, K. K. Ganeeb, S. Joseph, and P. K. Veerapaneni, Power Restrictions for Android OS: Managing Energy Efficiency and System Performance, Int. J. Comput. Sci. Inf. Technol. Res., vol. 5, no. 4, pp. 1-16, 2024, doi: 10.5281/zenodo.14028551.

  15. J. M. Parra-Ullauri, O. Dilley, H. Madhukumar, and D. Simeonidou, Profiling AI Models: Towards Efficient Computation Offloading in Heterogeneous Edge AI Systems, in Proc. 2024 3rd International Conference on 6G Networking (6GNet), 2024, pp. 164-166, doi: 10.1109/6GNet63182.2024.10765637.
  16. B. Ramdoss, A. M. Kirubakaran, B. S. Prabakaran, Sweetlin Hemalatha, and V. Vaidehi, Human Fall Detection Using Accelerometer Sensor and Visual Alert Generation on Android Platform, SSRN, Mar. 2014. [Online]. Available: https://ssrn.com/abstract=5785544, doi: 10.2139/ssrn.5785544.
  17. S. Li, H. Luo, T. Shao, and T. Kishi, An Enhanced Human Fall Detection System Using mmWave Sensors for Indoor Smart Space, in Proc. 2024 4th International Conference on Digital Society and Intelligent Systems (DSInS), 2024, pp. 181-185, doi: 10.1109/DSInS64146.2024.10992051.
  18. S. Joseph, A. G. Parthi, D. Maruthavanan, V. Jayaram, P. K. Veerapaneni, and V. Parlapalli, Transfer Learning in Natural Language Processing, in Proc. 7th Int. Conf. Information and Communications Technology (ICOIACT), 2024, pp. 30-36, doi: 10.1109/ICOIACT64819.2024.10912895.
  19. X. Lv, Z. Gao, C. Yuan, M. Li, and C. Chen, Hybrid Real-Time Fall Detection System Based on Deep Learning and Multi-sensor Fusion, in Proc. 6th Int. Conf. Big Data and Information Analytics (BigDIA), 2020, pp. 386-391, doi: 10.1109/BigDIA51454.2020.00069.
  20. V. Parlapalli, B. Pothineni, A. G. Parthi, P. K. Veerapaneni, D. Maruthavanan, A. Nagpal, R. K. Kodali, and D. M. Bidkar, From complexity to clarity: One-step preference optimization for high-performance LLMs, Int. J. Artif. Intell. Mach. Learn. (IJAIML), vol. 4, no. 1, pp. 112-125, 2025, doi: 10.34218/IJAIML0401008.
  21. L. Nkenyereye, B. G. Lee, K. Go, X. Mao, and W. Y. Chung, On-Demand Provisioning of Wearable Sensors Data Processing Services in Edge Computing, in Proc. 2023 IEEE SENSORS, 2023, pp. 1-4, doi: 10.1109/SENSORS56945.2023.10325028.
  22. S. G. Aarella, S. P. Mohanty, E. Kougianos, and D. Puthal, PUF-based Authentication Scheme for Edge Data Centers in Collaborative Edge Computing, in Proc. IEEE Int. Symp. Smart Electronic Systems (iSES), Warangal, India, 2022, pp. 433-438, doi: 10.1109/iSES54909.2022.00094.