
Scalable Cloud-Native Architectures for Intelligent PMU Data Processing

DOI : 10.17577/IJERTV14IS120378

  • Open Access
  • Authors : Nachiappan Chockalingam, Nitin Saksena, Akshay Deshpande, Adithya Parthasarathy, Lokesh Butra, Balakrishna Pothineni, Ram Sekhar Bodala, Akash Kumar Agarwal
  • Paper ID : IJERTV14IS120378
  • Volume & Issue : Volume 14, Issue 12, December 2025
  • DOI : 10.17577/IJERTV14IS120378
  • Published (First Online): 22-12-2025
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License : This work is licensed under a Creative Commons Attribution 4.0 International License

 

Scalable Cloud-Native Architectures for Intelligent PMU Data Processing

Nachiappan Chockalingam

IEEE Senior Member, Massachusetts, USA, 0009-0007-4275-3771

Nitin Saksena

Albertsons Companies California, USA, 0009-0009-1195-3564

Akshay Deshpande

IEEE Member California, USA, 0009-0002-3007-3393

Adithya Parthasarathy

IEEE Member California, USA, 0009-0001-6839-9527

Lokesh Butra

NTT Data North Carolina, USA 0009-0009-0286-9635

Balakrishna Pothineni

IEEE Senior Member, Texas, USA, 0009-0009-2781-3283

Ram Sekhar Bodala

Amtrak Delaware, USA, 0009-0005-4646-6679

Akash Kumar Agarwal

Albertsons Companies, California, USA, 0009-0006-7872-3446

Abstract – Phasor Measurement Units (PMUs) generate high-frequency, time-synchronized data essential for real-time power grid monitoring, yet the growing scale of PMU deployments creates significant challenges in latency, scalability, and reliability. Conventional centralized processing architectures are increasingly unable to handle the volume and velocity of PMU data, particularly in modern grids with dynamic operating conditions. This paper presents a scalable cloud-native architecture for intelligent PMU data processing that integrates artificial intelligence with edge and cloud computing. The proposed framework employs distributed stream processing, containerized microservices, and elastic resource orchestration to enable low-latency ingestion, real-time anomaly detection, and advanced analytics. Machine learning models for time-series analysis are incorporated to enhance grid observability and predictive capabilities. Analytical models are developed to evaluate system latency, throughput, and reliability, showing that the architecture can achieve sub-second response times while scaling to large PMU deployments. Security and privacy mechanisms are embedded to support deployment in critical infrastructure environments. The proposed approach provides a robust and flexible foundation for next-generation smart grid analytics.

Index Terms – Phasor Measurement Units, Artificial Intelligence, Cloud Computing, Smart Grid, Machine Learning, Edge Computing

  1. Introduction

    The modern power grid is undergoing a fundamental transformation driven by renewable energy integration and advanced monitoring technologies. PMUs provide synchronized measurements of electrical parameters at rates up to 120 samples per second [1], generating unprecedented data volumes. A typical utility deployment involves hundreds of PMUs, each generating gigabytes daily [2].

    The integration of renewable energy sources introduces variability and intermittency, requiring sophisticated monitoring and control strategies. PMUs enable operators to observe rapid fluctuations and take corrective actions before system instability develops. However, the volume and velocity of PMU data overwhelm conventional processing architectures, necessitating new computational paradigms.

    Artificial Intelligence offers transformative potential for PMU analytics. Machine learning algorithms identify subtle failure patterns, classify disturbances, and predict system behavior [3]. Edge AI techniques have demonstrated effectiveness in real-time anomaly detection for resource-constrained devices [4], [5]. However, computational demands coupled with distributed PMU networks present implementation challenges. Cloud computing provides scalable resources, elastic storage, and advanced networking capabilities [6].

    A. Motivation and Contributions

    Critical gaps in existing research include: (1) lack of scalability frameworks for AI algorithms across distributed cloud infrastructure, (2) insufficient analysis of latency-accuracy tradeoffs, (3) security vulnerabilities, particularly denial-of-service attacks in wide-area control, and (4) privacy concerns in cloud-aggregated grid data.

    Our contributions include: a comprehensive theoretical framework for AI-enhanced cloud-based PMU analytics; mathematical formulations for distributed machine learning optimized for PMU time-series data; analysis of edge-cloud hybrid architectures with security and privacy considerations; and theoretical performance bounds for AI algorithms in cloud contexts.

  2. Background and System Architecture
    1. Related Research and Enabling Technologies

      Prior research on PMU data processing has primarily focused on centralized architectures deployed within utility control centers or regional data hubs [7]. Early synchrophasor analytics systems relied on monolithic processing pipelines optimized for deterministic execution and low-latency control applications. While effective for small-scale deployments, these architectures struggle to scale with the increasing number of PMUs, higher reporting rates, and the growing complexity of analytics driven by renewable integration and wide-area monitoring [8].

      Recent studies have explored distributed and cloud-based approaches for power system analytics, leveraging big data frameworks to address scalability and fault tolerance challenges. Stream processing platforms such as Apache Kafka and Apache Flink have been adopted for high-throughput ingestion and real-time analytics of grid telemetry, while batch processing frameworks like Apache Spark enable large-scale historical analysis and model training. These systems provide horizontal scalability, fault tolerance, and decoupled producer-consumer semantics, which are essential for handling the continuous and bursty nature of PMU data streams [9].

      Containerization and orchestration technologies, particularly Kubernetes, have further transformed cloud-native system design. Kubernetes enables elastic resource allocation, automated failover, and declarative deployment models, making it well-suited for managing microservice-based PMU analytics pipelines [10]. Prior work has demonstrated the effectiveness of container orchestration in improving resilience and operational efficiency in data-intensive applications, though its adoption in latency-sensitive power grid analytics remains an active research area.

      Machine learning integration in PMU analytics has also advanced significantly, with research exploring deep learning, anomaly detection, and distributed learning techniques [11]. However, most existing studies emphasize algorithmic performance rather than the end-to-end system architecture required to operationalize these models reliably at scale. This gap motivates the need for unified cloud-native frameworks that jointly address data ingestion, processing, orchestration, security, and AI lifecycle management.

    2. System Architecture and Comparative Perspective

      The proposed three-tier architecture builds upon these prior efforts by systematically integrating stream processing, distributed analytics, and elastic orchestration within a unified edge-fog-cloud framework. Technologies such as Apache Kafka are employed for durable, ordered, and fault-tolerant ingestion of PMU data streams, enabling backpressure handling and decoupling between data producers and consumers. Apache Spark supports scalable batch analytics and distributed machine learning, allowing model training and historical analysis to scale beyond single-node memory constraints. Kubernetes orchestrates containerized services across cloud and regional infrastructure, providing automated scaling, self-healing, and workload isolation.

      Compared to traditional centralized architectures, the proposed approach avoids single points of failure and mitigates processing bottlenecks by distributing computation across hierarchical tiers. In contrast to edge-only solutions, which are constrained by limited computational resources, the hybrid architecture leverages elastic cloud resources for compute-intensive analytics while preserving low-latency processing at the edge.

      From a cloud provider perspective, the architecture is intentionally designed to be provider-agnostic, enabling deployment across public cloud platforms such as AWS, Azure, or GCP, as well as private utility clouds. While managed services like AWS Kinesis or Azure Event Hubs offer integrated streaming capabilities, open-source stacks such as Kafka and Spark provide greater portability, configurability, and control over latency and consistency trade-offs. This flexibility allows utilities to assess cost, performance, and regulatory constraints when selecting deployment environments.

      Relative to alternative analytics stacks, including serverless event-driven pipelines or monolithic data warehouses, the proposed architecture offers improved support for continuous streaming analytics, fine-grained latency control, and hybrid deployment models. By combining stream processing, batch analytics, and distributed AI within a single architectural framework, the system provides a balanced solution that addresses scalability, reliability, and operational complexity in large-scale PMU deployments.

      This comparative positioning highlights the relative advantages of cloud-native, microservice-based architectures for intelligent PMU data processing, while acknowledging trade-offs in operational overhead, system complexity, and deployment cost that must be carefully managed in practice.

    3. PMU Technology and Challenges

      PMUs provide time-synchronized measurements with GPS timestamps [1]. The fundamental phasor representation is:

      X(t) = X_m e^{j(ωt + φ)}    (1)

      where X_m represents the magnitude, ω the angular frequency, and φ the phase angle. Modern PMUs achieve total vector error below 1% during dynamic events, providing reliable data for real-time applications [12].

      The high reporting rate creates substantial data management challenges. A single PMU measuring 12 phasors at 60 Hz generates approximately 2.5 GB per year. A utility with 300 PMUs produces 750 GB annually. Data quality issues including outliers, missing data, and synchronization errors require preprocessing before AI model input.

    4. Three-Tier Hierarchical Architecture

      We propose a hierarchical architecture (Figure 1) with three tiers:

      1. Edge Tier: Local processing at substations for time-critical operations including data validation and immediate alarm generation.
      2. Fog Tier: Regional data aggregation and intermediate analytics at control centers, coordinating multiple substations.
      3. Cloud Tier: Centralized analytics, model training, and storage providing elastic computational resources.

        Let P = {P_1, P_2, …, P_n} represent the set of PMUs, where PMU P_i generates measurements at rate r_i. The total data rate is:

        R_total = Σ_{i=1}^{n} r_i · s_i    (2)

        TABLE I

        Tiered Storage Architecture for PMU Data

        Tier Technology Latency Capacity
        Hot In-memory cache 1ms Hours
        Warm SSD storage 10ms Days
        Cold HDD storage 100ms Months
        Archive Object storage 1s Years

        Fig. 1. Three-tier hierarchical architecture for AI-enhanced PMU systems

        where s_i is the measurement packet size from PMU P_i. For n = 300 PMUs at r_i = 60 Hz with s_i = 100 bytes, R_total = 1.8 MB/s.
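As a numeric sanity check, Eq. (2) can be evaluated directly for the example deployment above; the short Python sketch below uses only the paper's example figures:

```python
def total_data_rate(rates_hz, packet_sizes_bytes):
    """Aggregate PMU data rate R_total = sum(r_i * s_i), in bytes/s (Eq. 2)."""
    return sum(r * s for r, s in zip(rates_hz, packet_sizes_bytes))

# The paper's example: n = 300 PMUs, 60 Hz reporting, 100-byte frames.
rates = [60] * 300
sizes = [100] * 300
print(total_data_rate(rates, sizes) / 1e6, "MB/s")  # 1.8 MB/s
```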

        Data flows through the architecture following a publish-subscribe pattern with both batch and streaming paths. The streaming path handles real-time applications with sub-second latency requirements while batch processing supports model training and historical analysis.

    5. Resource Allocation Model

    Cloud resources R = {R_1, R_2, …, R_m} have computational capacity C_j, memory M_j, and cost λ_j. The allocation problem is:

    min Σ_{j=1}^{m} λ_j x_j    (3)

    subject to computational, memory, and latency constraints, where x_j ∈ {0, 1} indicates resource allocation. This NP-hard optimization employs greedy algorithms that iteratively select resources maximizing performance per cost, with online adjustments as workloads vary.
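The greedy heuristic described above can be sketched in a few lines; this is a minimal illustration, and the resource pool, performance units, and costs below are hypothetical rather than taken from the paper:

```python
def greedy_allocate(resources, perf_target):
    """Greedy heuristic for the NP-hard allocation in Eq. (3): repeatedly
    pick the resource with the best performance-per-cost ratio until the
    performance target is met. Each resource is (name, performance, cost)."""
    ranked = sorted(resources, key=lambda r: r[1] / r[2], reverse=True)
    chosen, perf, cost = [], 0.0, 0.0
    for name, p, c in ranked:
        if perf >= perf_target:
            break
        chosen.append(name)
        perf += p
        cost += c
    return chosen, perf, cost

# Hypothetical resource pool: (instance type, throughput units, cost/hour).
pool = [("large", 100, 4.0), ("medium", 60, 2.0), ("small", 25, 1.0)]
names, perf, cost = greedy_allocate(pool, perf_target=80)
print(names, perf, cost)
```

In an online setting the same selection loop would be re-run as measured workload and per-resource prices change.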

  3. Cloud Computing Architecture
    1. Data Ingestion and Stream Processing

      The cloud architecture handles continuous PMU streams using an M/M/c queueing model with arrival rate λ and per-server service rate μ. The expected waiting time is:

      W_q = C(c, λ/μ) / (cμ − λ)    (4)

      where C(c, λ/μ) is the Erlang-C probability that an arriving measurement must queue. For sub-second latency requirements, the system must satisfy W_q < τ_max.
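The M/M/c waiting-time model can be evaluated numerically via the Erlang-C formula. The sketch below is illustrative only; the worker count and rates are hypothetical sizing numbers, not figures from the paper:

```python
import math

def erlang_c(c, lam, mu):
    """Erlang-C probability that an arriving job must queue in an M/M/c system."""
    a = lam / mu              # offered load in Erlangs
    rho = a / c               # per-server utilization
    assert rho < 1, "queue is unstable (utilization >= 1)"
    tail = a**c / (math.factorial(c) * (1 - rho))
    head = sum(a**k / math.factorial(k) for k in range(c))
    return tail / (head + tail)

def expected_wait(c, lam, mu):
    """Expected queueing delay W_q = C(c, lam/mu) / (c*mu - lam), Eq. (4)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Hypothetical sizing: 4 ingestion workers, 1800 frames/s arriving,
# each worker draining 600 frames/s (utilization 0.75).
print(expected_wait(4, 1800.0, 600.0) * 1000, "ms")
```

For c = 1 the formula reduces to the familiar M/M/1 result W_q = ρ/(μ − λ), which is a convenient correctness check.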

      Stream processing frameworks like Apache Kafka and Flink provide infrastructure for ingesting PMU data at scale. Kafka maintains ordered streams with configurable retention, while Flink processes streams using dataflow operators for win- dowed computations. Backpressure mechanisms prevent data loss when consumers cannot match producer rates.

    2. Tiered Storage Architecture

      PMU data requires both real-time access and long-term archival. Table I shows our tiered architecture:

      Recent measurements reside in memory-based caches for immediate access. As data ages, it migrates to SSD-based time-series databases, then HDD storage, with eventual archival to object stores. Compression algorithms like Gorilla reduce storage costs by 10-20× while maintaining query performance.
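Gorilla-style compressors combine bit-level XOR encoding of values with delta-of-delta encoding of timestamps. The sketch below shows only the delta-of-delta idea on integer lists (not the packed bitstream of the real format): for regularly sampled PMU streams, most second differences collapse to near-zero values that compress well.

```python
def dod_encode(timestamps):
    """Delta-of-delta encode: keep the first value and first delta, then
    store second differences, which are ~0 for regular PMU sampling."""
    if len(timestamps) < 2:
        return list(timestamps)
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    dods = [b - a for a, b in zip(deltas, deltas[1:])]
    return [timestamps[0], deltas[0]] + dods

def dod_decode(encoded):
    """Invert dod_encode by re-accumulating deltas."""
    if len(encoded) < 2:
        return list(encoded)
    ts = [encoded[0], encoded[0] + encoded[1]]
    delta = encoded[1]
    for dod in encoded[2:]:
        delta += dod
        ts.append(ts[-1] + delta)
    return ts

# 60 Hz sampling in milliseconds: ~16-17 ms spacing.
ts = [0, 17, 33, 50, 67, 83, 100]
enc = dod_encode(ts)
assert dod_decode(enc) == ts
print(enc)
```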

    3. Distributed Processing and Orchestration

      Apache Spark processes batch workloads using resilient distributed datasets partitioned across cluster nodes. Spark MLlib provides distributed implementations of machine learning algorithms, scaling to datasets exceeding single-node memory capacity.

      Container orchestration systems like Kubernetes manage computational resources dynamically. Horizontal pod autoscaling adjusts replica counts in response to load metrics, ensuring efficient resource utilization.

    4. Data Consistency and Replication

      For critical applications, data replication across regions ensures availability. Using N replicas, with writes acknowledged by W replicas and reads served from R replicas, strong consistency requires:

      W + R > N (5)

      We recommend N = 3, W = 2, R = 2, balancing consistency and availability. Quorum-based protocols like Raft coordinate replicas, maintaining consensus on operation ordering while tolerating single-node failures.
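The quorum condition of Eq. (5) is simple enough to check mechanically; a minimal sketch:

```python
def strongly_consistent(n, w, r):
    """Read and write quorums overlap (and reads see the latest
    acknowledged write) when W + R > N (Eq. 5)."""
    return w + r > n

assert strongly_consistent(3, 2, 2)       # recommended configuration
assert not strongly_consistent(3, 1, 1)   # quorums may miss each other
print("quorum checks pass")
```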

    5. Network Architecture

    The required bandwidth B_required for n PMUs is:

    B_required = (1 + η) Σ_{i=1}^{n} r_i s_i    (6)

    where η ≈ 0.2 represents protocol overhead. Software-defined networking enables dynamic traffic engineering, while private connections provide predictable performance between utility data centers and cloud providers.

  4. AI Algorithms for PMU Analytics

    A. Deep Learning for Time-Series Analysis

      1. LSTM Networks: LSTMs process sequential PMU data through gating mechanisms [13]:

        f_t = σ(W_f · [h_{t−1}, x_t] + b_f)    (7)

        C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t    (8)

        h_t = o_t ⊙ tanh(C_t)    (9)

        where f_t, i_t, o_t are the forget, input, and output gates, C_t is the cell state, and ⊙ denotes element-wise multiplication. Computational complexity for sequence length T is O(T · (p + h · d)).
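Equations (7)-(9) can be traced with a toy scalar LSTM cell. This is a didactic sketch only: the weights are hypothetical, inputs are scalars rather than phasor vectors, and the input gate i_t, output gate o_t, and candidate state C̃_t use the same affine form over [h_{t−1}, x_t] as the forget gate in Eq. (7).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step on scalar inputs, following Eqs. (7)-(9).
    W maps each gate name to (w_h, w_x) weights on [h_{t-1}, x_t]."""
    z = {g: W[g][0] * h_prev + W[g][1] * x_t + b[g] for g in W}
    f_t = sigmoid(z["f"])                # Eq. (7): forget gate
    i_t = sigmoid(z["i"])                # input gate
    o_t = sigmoid(z["o"])                # output gate
    c_tilde = math.tanh(z["c"])          # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde   # Eq. (8): cell state update
    h_t = o_t * math.tanh(c_t)           # Eq. (9): hidden state
    return h_t, c_t

# Toy (hypothetical) weights; feed a short frequency-deviation sequence.
W = {g: (0.5, 1.0) for g in ("f", "i", "o", "c")}
b = {g: 0.0 for g in ("f", "i", "o", "c")}
h, c = 0.0, 0.0
for x in [0.0, 0.1, -0.2, 0.05]:
    h, c = lstm_step(x, h, c, W, b)
print(h, c)
```

Because h_t is a sigmoid-gated tanh, the hidden state stays bounded in (−1, 1), which is one reason LSTMs remain numerically stable over long PMU sequences.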

     

      2. Convolutional Neural Networks: CNNs extract spatial-temporal features from PMU data:

        y_j = f( Σ_{i=1}^{m} w_ij ∗ x_i + b_j )    (10)

        where ∗ denotes convolution and f is the activation function.

    B. Anomaly Detection

      1. Autoencoder-based Detection: Autoencoders learn compressed representations of normal data. Reconstruction error serves as the anomaly indicator:

        E(x_t) = ||x_t − x̂_t||²    (11)

        where x̂_t = D(E(x_t)) is the reconstructed input.

      2. Isolation Forest: Isolation Forest detects anomalies via path lengths in random trees. The anomaly score is:

        s(x, n) = 2^(−E(h(x)) / c(n))    (12)

        where E(h(x)) is the average path length and c(n) normalizes for tree size.

    C. Distributed Learning

      Cloud-based analytics requires distributed learning approaches [14].

      1. Data Parallelism: Data is partitioned across K workers that compute gradients on local batches:

        w^(t+1) = w^(t) − (η / K) Σ_{k=1}^{K} ∇L_k(w^(t))    (13)

      2. Federated Learning: For distributed PMUs, federated learning enables collaborative learning without centralizing data [15]. Privacy-preserving approaches in hierarchical systems ensure data protection while maintaining model accuracy:

        w_k^(t+1) = w^(t) − η ∇L_k(w^(t))    (14)

        w^(t+1) = Σ_{k=1}^{K} (n_k / n) w_k^(t+1)    (15)

        where n_k is the number of samples at node k and n = Σ_k n_k.

    D. Model Optimization

      Models must be optimized for cloud deployment considering latency and resource constraints [16]. Pruning removes redundant weights:

        W′ = W ⊙ M    (16)

      where M is a binary mask. Knowledge distillation enables smaller student models to learn from larger teachers:

        L_distill = α L_CE(y, σ(z_s)) + (1 − α) L_KL(σ(z_t / T), σ(z_s / T))    (17)

  5. Performance Analysis
    1. Edge-Cloud Hybrid Processing

      Processing decisions balance latency, complexity, and volume. The optimization formulation is:

        min Σ_{i=1}^{n} ( c_edge,i e_i + c_cloud,i (1 − e_i) )    (18)

      subject to latency and capacity constraints, where e_i ∈ {0, 1} indicates edge or cloud processing.

    2. Latency Analysis

      Total system latency consists of the components in Table II:

        τ_total = τ_acq + τ_edge + τ_net + τ_cloud + τ_delivery    (19)

        TABLE II

        Latency Components in AI-Enhanced PMU Systems

        Component Range Mitigation
        Data acquisition 8-33 ms Higher rates
        Edge preprocessing 1-10 ms Optimized code
        Network transmission 10-100 ms CDN, caching
        Cloud processing 50-500 ms Parallelization
        Result delivery 10-100 ms Push notify

    3. Throughput and Scalability

      Using Little's Law, system throughput is bounded by:

        λ_max = min( 1/τ_acq, C_edge/r_edge, B/s, C_cloud/r_cloud )    (20)

      Scalability efficiency with P processors is:

        E(P) = T_1 / (P · T_P)    (21)

      Amdahl's Law with communication overhead gives:

        S(P) = 1 / ( (1 − α) + α/P + β(P − 1) )    (22)

      where α is the parallelizable fraction and β is the communication cost.

    4. Reliability

      System availability with N components in series is:

        A_system = Π_{i=1}^{N} A_i    (23)

      For parallel redundant systems:

        A_system = 1 − Π_{i=1}^{N} (1 − A_i)    (24)
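The series and parallel availability expressions can be checked numerically; a minimal stdlib-only sketch:

```python
def availability_series(components):
    """A_system = product of A_i for components in series (Eq. 23):
    any single failure takes the chain down."""
    a = 1.0
    for ai in components:
        a *= ai
    return a

def availability_parallel(components):
    """A_system = 1 - product(1 - A_i) for redundant replicas (Eq. 24):
    the system fails only if every replica fails."""
    unavailable = 1.0
    for ai in components:
        unavailable *= (1.0 - ai)
    return 1.0 - unavailable

# Three regions, each 99.9% available.
print(availability_series([0.999] * 3))    # chained dependencies
print(availability_parallel([0.999] * 3))  # redundant replicas
```

With three 99.9%-available regions, series composition drops below 99.8% while parallel redundancy exceeds the 99.9% target by several orders of magnitude, which is why the architecture replicates critical services across regions.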

  6. Security and Privacy
    1. Threat Landscape and DoS Mitigation

      PMU systems face sophisticated cyber threats including data tampering, eavesdropping, denial of service, and model poisoning. Wide-area control systems are particularly vulnerable to DoS attacks that disrupt Linear Quadratic Regulator (LQR) controllers designed for damping inter-area oscillations [17].

      Mitigation approaches include: Controller Redesign using delay-aware LQR controllers that account for communication latency; State Estimation to reconstruct missing information using system models when DoS attacks corrupt measurements, modeling attack impact via the Hadamard product of the LQR gain matrix with an attack indicator matrix; Adaptive Strategies using machine learning classifiers trained on attack characteristics to predict severity and enable proactive mitigation; and Network Redundancy through multi-path routing, ensuring control signals reach actuators despite compromised links.

    2. Authentication and Access Control

      Role-Based Access Control restricts data access [18]:

        Access(u, r) = Allow if ∃p ∈ Permissions(Role(u)) : p ⊇ r, Deny otherwise    (25)

      Multi-factor authentication requires multiple credentials before granting access. Public key infrastructure issues digital certificates binding identities to cryptographic keys, enabling mutual authentication [19].

    3. Encryption and Secure Communication

      Data is encrypted in transit using TLS 1.3 with strong cipher suites (AES-256-GCM). At rest, envelope encryption protects data using hardware security modules for key storage. The computational overhead of AES-256 encryption [20] is:

        τ_enc = |D| / R_enc    (26)

      where |D| is the data size and R_enc is the encryption rate (typically 2-5 GB/s with AES-NI acceleration), making encryption overhead negligible.

    4. Privacy-Preserving Machine Learning

      To protect sensitive grid information, we employ privacy-preserving techniques suitable for distributed cloud architectures [21]:

      1. Differential Privacy: Differential privacy adds calibrated noise to protect individual records. A mechanism M is (ε, δ)-differentially private if:

        P(M(D) ∈ S) ≤ e^ε · P(M(D′) ∈ S) + δ    (27)

        For gradient-based learning, noise is added to the gradient:

        g̃ = ∇L(w) + N(0, σ²I)    (28)

        The privacy budget ε quantifies information leakage, with smaller values providing stronger privacy. Rényi differential privacy provides tighter bounds than standard differential privacy.

      2. Homomorphic Encryption: Homomorphic encryption enables computation on encrypted data [22]:

        Enc(x + y) = Enc(x) ⊕ Enc(y)    (29)

        Hybrid approaches combine homomorphic encryption with secure multi-party computation, selectively protecting critical computations while less sensitive operations run in plaintext.

      3. Secure Aggregation: Federated learning protocols use secure aggregation to protect individual model updates. Each participant i secret-shares its gradient ∇_i such that the server learns only the aggregate Σ_{i=1}^{n} ∇_i, not the individual updates [23], [24].

    5. Intrusion Detection and Adversarial Robustness

      AI-based intrusion detection monitors for anomalous access patterns:

        Score(x) = Σ_{i=1}^{k} w_i f_i(x)    (30)

      Behavioral analytics establish baseline patterns, flagging deviations that indicate compromise. Security information and event management systems aggregate logs, correlating events to detect multi-stage attacks.

      Adversarial training improves model robustness:

        min_θ E_(x,y) [ max_{||δ|| ≤ ε} L(f(x + δ), y) ]    (31)

      This min-max optimization trains models on adversarial examples, learning features robust to perturbations, which is critical for safety-critical power system applications.

  7. Conclusion and Future Directions

This paper presented a scalable cloud-native architecture for intelligent Phasor Measurement Unit (PMU) data processing, addressing the challenges of high data velocity, strict latency constraints, and reliability requirements in modern power grids. The proposed edge-cloud framework supports sustained ingestion rates exceeding 1.8 MB/s for deployments involving 300 or more PMUs, while maintaining sub-second end-to-end latency through hierarchical processing and parallelized cloud analytics. Analytical results indicate near-linear scalability as the number of PMUs increases, avoiding centralized processing bottlenecks.

Latency analysis shows that optimized edge preprocessing combined with distributed cloud execution limits processing delays to approximately 50-500 ms under nominal operating conditions, even when deep learning-based analytics are applied. Reliability modeling demonstrates that multi-region replication and quorum-based consistency protocols can achieve system availability exceeding 99.9%, satisfying the requirements of safety-critical grid monitoring applications. Furthermore, tiered storage and compression strategies reduce long-term data storage costs by an estimated 10-20× while preserving low-latency access to recent measurements.

Integrated security and privacy mechanisms introduce minimal performance overhead. Hardware-accelerated AES-256 encryption adds negligible latency, and privacy-preserving distributed learning enables collaborative model training without centralizing sensitive grid data. These characteristics make the proposed architecture suitable for deployment in cyber-sensitive and mission-critical power infrastructure environments.

Future work will focus on empirical validation using real-world PMU datasets and large-scale testbeds to quantify anomaly detection accuracy, cost efficiency, and operational robustness. Additional research directions include incorporating explainable artificial intelligence to improve the interpretability of analytics, extending distributed learning techniques to address non-stationary grid dynamics, and integrating digital twin models for predictive stability assessment. Advances in energy-efficient edge accelerators and next-generation distributed learning frameworks are expected to further reduce latency and operational costs, enabling more adaptive and resilient smart grid analytics at scale.

References

  1. A. G. Phadke and J. S. Thorp, Synchronized Phasor Measurements and Their Applications. Springer, 2008.
  2. R. Arghandeh and Y. Zhou, Big Data Application in Power Systems. Elsevier Science, 2024. ISBN: 9780443219511.
  3. S. M. Miraftabzadeh, F. Foiadelli, M. Longo, and M. Pasetti, "A Survey of Machine Learning Applications for Power System Analytics," in Proc. 2019 IEEE International Conference on Environment and Electrical Engineering and 2019 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), 2019, pp. 1-5, doi: 10.1109/EEEIC.2019.8783340.
  4. B. Ramdoss, A. M. Kirubakaran, P. B. S., S. H. C., and V. Vaidehi, "Human Fall Detection Using Accelerometer Sensor and Visual Alert Generation on Android Platform," International Conference on Computational Systems in Engineering and Technology, Mar. 2014, doi: 10.2139/ssrn.5785544.
  5. A. M. Kirubakaran, L. Butra, S. Malempati, A. K. Agarwal, S. Saha, and A. Mazumder, "Real-Time Anomaly Detection on Wearables using Edge AI," International Journal of Engineering Research and Technology (IJERT), vol. 14, no. 11, Nov. 2025, doi: 10.17577/IJERTV14IS110345.
  6. S. Bera, S. Misra, and A. V. Vasilakos, "Software-defined networking for Internet of Things: A survey," IEEE Internet of Things Journal, vol. 4, no. 6, pp. 1994-2008, 2015.
  7. S. Dodda, N. Kamuni, P. Nutalapati, and J. R. Vummadi, "Intelligent Data Processing for IoT Real-Time Analytics and Predictive Modeling," 2025 International Conference on Data Science and Its Applications (ICoDSA), Jakarta, Indonesia, 2025, pp. 649-654, doi: 10.1109/ICoDSA67155.2025.11157424.
  8. I. Sahoo, S. Devarapalli, J. Tyagi, D. M. Bidkar, M. Srivastava, P. K. Adepu, D. Kole, and B. S. Ingole, "Auto-tuning AI cloud infrastructure via real-time telemetry-driven feedback loops," in Proc. 2025 International Conference on Artificial Intelligence and Machine Vision (AIMV), 2025, pp. 1-5, doi: 10.1109/AIMV66517.2025.11203439.
  9. A. G. Parthi, R. K. Kodali, B. Pothineni, P. K. Veerapaneni, and D. Maruthavanan, "Cloud-Native Change Data Capture: Real-Time Data Integration from Google Spanner to BigQuery," International Journal of Emerging Technologies and Innovative Research (JETIR), vol. 12, no. 5, pp. g589-g598, May 2025.
  10. P. K. Veerapaneni, "Building scalable AI-powered analytics pipelines using Delta Live Tables: A cybersecurity-first approach," International Journal of Computer Engineering and Technology (IJCET), vol. 14, no. 2, pp. 301-314, 2023.
  11. S. G. Aarella, V. P. Yanambaka, S. P. Mohanty, and E. Kougianos, "Fortified-Edge 2.0: Advanced Machine-Learning-Driven Framework for Secure PUF-Based Authentication in Collaborative Edge Computing," Future Internet, vol. 17, p. 272, 2025, doi: 10.3390/fi17070272.
  12. F. Aminifar, M. Fotuhi-Firuzabad, A. Safdarian, A. Davoudi, and M. Shahidehpour, "Synchrophasor measurement technology in power systems: Panorama and state-of-the-art," IEEE Access, vol. 2, pp. 1607-1628, 2014.
  13. Y. Wang, Q. Chen, T. Hong, and C. Kang, "Review of smart meter data analytics: Applications, methodologies, and challenges," IEEE Transactions on Smart Grid, vol. 10, no. 3, pp. 3125-3148, 2017.
  14. J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, Q. Le, and A. Ng, "Large Scale Distributed Deep Networks," in Advances in Neural Information Processing Systems, vol. 25, F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012.
  15. H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. Int. Conf. Artificial Intelligence and Statistics (AISTATS), 2017, pp. 1273-1282.
  16. C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3-4, pp. 211-407, 2014.
  17. N. Chockalingam, A. Chakrabortty, and A. Hussain, "Mitigating Denial-of-Service attacks in wide-area LQR control," in Proc. 2016 IEEE Power and Energy Society General Meeting (PESGM), 2016, pp. 1-5, doi: 10.1109/PESGM.2016.7741285.
  18. D. Power, M. Slaymaker, and A. Simpson, "On formalizing and normalizing role-based access control systems," The Computer Journal, vol. 52, no. 3, pp. 305-325, 2009.
  19. P. Danquah and H. Kwabena-Adade, "Public key infrastructure: An enhanced validation framework," Journal of Information Security, vol. 11, pp. 241-260, Jan. 2020.
  20. N. A. Fauziah, E. H. Rachmawanto, D. R. I. M. Setiadi, and C. A. Sari, "Design and implementation of AES and SHA-256 cryptography for securing multimedia file over Android chat application," in 2018 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), 2018, pp. 146-151.
  21. V. Punniyamoorthy, A. G. Parthi, M. Palanigounder, R. K. Kodali, B. Kumar, and K. Kannan, "A Privacy-Preserving Cloud Architecture for Distributed Machine Learning at Scale," International Journal of Engineering Research and Technology (IJERT), vol. 14, no. 11, Nov. 2025.
  22. C. Gentry, "Fully homomorphic encryption using ideal lattices," in Proc. ACM Symposium on Theory of Computing (STOC), 2009, pp. 169-178.
  23. A. Muthukrishnan Kirubakaran, N. Saksena, S. Malempati, S. Saha, S. K. R. Carimireddy, A. Mazumder, and R. S. Bodala, "Federated Multi-Modal Learning Across Distributed Devices," International Journal of Innovative Research in Technology, vol. 12, no. 7, pp. 2852-2857, 2025, doi: 10.5281/zenodo.17892974.
  24. S. Truex, N. Baracaldo, A. Anwar, T. Steinke, H. Ludwig, R. Zhang, and Y. Zhou, "A hybrid approach to privacy-preserving federated learning," in Proc. 12th ACM Workshop on Artificial Intelligence and Security, 2019, pp. 1-11.