
Context-Aware User and Entity Behaviour Analytics with AI-Driven Prescriptive Risk Scoring for Adaptive Security

DOI : https://doi.org/10.5281/zenodo.19642225


Sourabh Uttekar

Faculty of Communication Engineering, Military College of Telecommunication Engineering

Mhow, India

Abstract: With the rapidly evolving landscape of cybercrime, attackers increasingly exploit legitimate user credentials to gain unauthorized access, effectively bypassing conventional detection mechanisms. Such insider threats and credential-misuse attacks present a significant challenge for modern security systems. Traditional Security Information and Event Management (SIEM) solutions, which rely heavily on rule-based detection, often generate a high number of false positives and struggle to capture complex, multi-dimensional behavioral patterns embedded within large volumes of activity logs. These limitations highlight the need for more adaptive and intelligent threat detection mechanisms.

To address these shortcomings, this work proposes a context-aware User and Entity Behavior Analytics (UEBA) framework integrated with a prescriptive risk scoring engine. The proposed system employs an autoencoder-based learning model that is trained to capture patterns of normal user behavior. By evaluating reconstruction error, the model identifies deviations that may indicate potential account compromise or abnormal activity. Beyond behavioral modeling, the framework incorporates multiple contextual attributes, including user location, device characteristics, access time, and historical activity patterns, to significantly improve the accuracy and reliability of anomaly detection.

Furthermore, a dynamic risk scoring model is introduced, which integrates anomaly scores, contextual indicators, and historical behavioral information through a dynamic weighting mechanism. This approach enables the generation of a high- resolution, context-sensitive assessment of security risk. In addition, a prescriptive recommendation engine is developed to translate calculated risk scores into actionable security responses, thereby enabling automated decision-making and accelerating incident response processes.

The effectiveness of the proposed framework is evaluated using the CERT Insider Threat Dataset, where experimental results demonstrate notable improvements in accuracy, precision, and recall when compared with conventional detection approaches. The system also shows a substantial reduction in false positive rates. Overall, the results indicate that the proposed framework not only enhances anomaly detection capabilities but also advances cybersecurity operations beyond reactive monitoring by enabling proactive and autonomous threat mitigation strategies suitable for large-scale enterprise environments.

Keywords: UEBA, Context-Aware Security, Risk Scoring, Insider Threat Detection, Machine Learning, Autoencoder, Anomaly Detection, Cybersecurity, Prescriptive Analytics

  1. INTRODUCTION

The DNA of the modern digital enterprise breathes through networks and data streams, yet within this stream lies a shadow: insider threats, advanced persistent attacks in which perpetrators move silently beneath authenticated identities. Unlike external intrusions, these threats harness legitimate access, which makes them hard to spot and often more destructive. The frequency of insider threat incidents underscores the necessity of timely intelligence that enables organizations to identify and contain suspicious activity before it affects business processes [1]. As systems become more distributed and interconnected, monitoring user behavior grows harder, requiring smarter and more adaptive security mechanisms.

Traditional Security Information and Event Management (SIEM) systems, despite being a critical building block in this landscape, are faltering. These systems depend on rule-based detection and known signatures, making them ineffective against emerging or advanced threats. They generate many alerts, most of which are false positives, leading to analyst fatigue and missed critical incidents. Moreover, SIEM does not incorporate behavioral intelligence; it cannot differentiate between legitimate data anomalies and malicious activities, which restricts its performance in modern cybersecurity contexts [2].

User and Entity Behavior Analytics (UEBA) represents a major step toward behavior-driven security. UEBA solutions rely on patterns of behavior and machine learning techniques to define a baseline of a user's or entity's activity and to detect anomalies that may indicate threat activity. UEBA gives deeper visibility into system activity by analyzing patterns such as login behavior, frequency of access, types of devices used, and movement of data. Machine learning models ranging from statistical methods to deep learning approaches make identifying new or subtle attack patterns more tractable [1].

Nevertheless, current UEBA solutions remain largely reactive. They warn of signs of trouble but offer little concrete advice about what steps to take. Alerts must be interpreted and actions taken manually by security teams, slowing response time. Moreover, most UEBA implementations are not context aware: they assess anomalies without regard to situational factors such as user role, geography, device trust levels, or individual habits. This limitation can lead to inaccurate risk assessments, raising false positives or missing real threats.

To address these challenges, we propose a context-aware UEBA framework combined with prescriptive risk scoring. It enhances traditional UEBA with contextual intelligence that adds situational context around user behavior. Anomaly detection accuracy is improved by combining behavioral data with contextual attributes such as geolocation, time of access, device identity, and role-based permissions.

    The main contributions of this work can be summarized as follows:

1. Context awareness in the UEBA model, combining behavioral and environmental data to enhance anomaly detection accuracy.

2. A real-time risk scoring mechanism that fuses anomaly scores, context-based elements, and historical behavior with adaptive weighting.

3. A prescriptive recommendation engine that recommends automated actions (alerting, blocking access, or enforcing MFA) based on the risk level.

      This integrated approach turns UEBA from a passive detection technology into an active decision-support system.

Figure 1: Context-Aware UEBA Framework with Prescriptive Risk Scoring Architecture

  2. RELATED WORK

For a long time, the evolution of cybersecurity has been framed by the dual realities of escalating threat sophistication and an inadequate defensive arsenal. Enterprise security operations have relied for years on traditional security mechanisms, especially Security Information and Event Management (SIEM) systems. These solutions collect logs and events from various sources and use rule-based correlation to identify known threats. However, their reliance on predefined rules and signatures severely constrains their ability to detect new or emerging attack patterns. SIEMs generate a large volume of alerts, many of which are false positives, as analyses based on queuing theory show [3], which in turn causes alert fatigue and delays incident response, as pointed out by Behl and Behl [2]. Moreover, SIEM lacks behavioral intelligence and a contextual macro view of the environment, so it cannot distinguish anomalies arising from legitimate activity from those caused by malicious behavior, particularly with insider threats. Such inflexibility has made conventional architectures increasingly insufficient in dynamic, data-driven environments.

These limitations led to the emergence of User and Entity Behavior Analytics (UEBA) systems, which offer a more dynamic and intelligent model for threat detection. They use machine learning algorithms to establish a behavioral baseline for users and entities, enabling better detection of deviations that may indicate potential threats. Several recent studies, such as Alshaikh et al. (2022), show that statistical profiling and clustering approaches can effectively model normal user behavior while supporting real-time detection of anomalies [4]. More sophisticated methods use deep learning models such as autoencoders that can learn complex structures in high-dimensional datasets. Khan and Nguyen proposed an autoencoder-based UEBA framework that flags anomalous behavior via reconstruction error in very large-scale environments, resulting in a significant improvement in detection accuracy [5]. However, UEBA systems at their core still stop at the detection of anomalies, without providing a means to quantify and react to these detections accordingly.

To help mitigate threats, risk scoring models have been developed to assess and prioritize the danger of an activity by providing a quantifiable numeric representation of the risk associated with a user action. Early models are static, calculating risk from weighted parameters such as a login anomaly or an access violation. Although these models are easy to implement, they do not adjust to changing behavioral patterns and threat evolution. Recent studies have therefore moved toward dynamic risk scoring using machine learning techniques that continuously update risk values based on contextual and historical data. Zhang et al. (2021) proposed a dynamic risk assessment framework that incorporates temporal behavior patterns to enhance prediction accuracy [6]. Patel and Singh (2024) similarly investigated confidence-based scoring systems in which uncertainty measures are integrated into risk assessment to support better-informed decisions under uncertainty [7]. While these models certainly improve risk prioritization in security operations workflows, they exist mostly outside of UEBA systems and provide no mechanism for prescriptive responses to priority risks.

An in-depth appraisal of the recent literature identifies a gap at the nexus of behavioral analytics, contextual intelligence, and, crucially, automated response. Despite the benefits of the centralized analysis that SIEM provides and UEBA's human-centric behavior prediction capabilities, both approaches fall short in their reactive nature: neither delivers a solution that combines context-aware elements with prescriptive security frameworks. Today's UEBA solutions do not bring multidimensional context, such as user roles, geolocation, device trust, and temporal patterns, into risk evaluation.

Additionally, most systems remain reactive in nature: they can detect an intrusion but do not instruct on how to respond. As noted by Reddy et al. (2025), there is an increasing demand for integrated systems that not only detect threats but also recommend or automatically implement appropriate mitigation strategies based on risk status [8]. This gap highlights the need for a unified framework that brings together contextual awareness through UEBA, dynamic risk scoring, and prescriptive decision-making to improve detection accuracy and reduce response times.

TABLE 1: COMPARISON OF EXISTING APPROACHES

Paper                      | Technique                   | Limitation
---------------------------|-----------------------------|----------------------------------------------
Behl and Behl (2021) [3]   | SIEM Rule-Based Detection   | High false positives, no behavioral analysis
Alshaikh et al. (2022) [4] | Statistical UEBA Modeling   | Limited contextual awareness
Khan and Nguyen (2023) [5] | Autoencoder-Based UEBA      | No prescriptive response mechanism
Zhang et al. (2021) [6]    | Dynamic Risk Scoring        | Not integrated with UEBA systems
Patel and Singh (2024) [7] | Confidence-Based Risk Model | Lacks real-time adaptability
Reddy et al. (2025) [8]    | AI-Based Threat Detection   | No unified context-aware framework

  3. PROPOSED METHODOLOGY

      1. System Architecture

In this paper, we propose a well-defined, scalable architecture that supports context-aware UEBA along with a prescriptive risk scoring stage. The system functions as a sequential pipeline from multi-source data acquisition to intelligent decision-making. It collects data from various sources such as authentication logs, network traffic, endpoint communication, and cloud applications. The raw data is preprocessed for normalization, noise filtering, and conversion into structured formats. Next, feature engineering derives relevant features that represent behavioral and contextual characteristics of users and entities. The selected features are processed by a machine-learning-based UEBA module and then passed to a dynamic risk scoring engine that ranks the detected anomalies by their assessed risk. Lastly, a prescriptive recommendation stage translates risk into impact (IT and business recommendations) and invokes responses. Detection and response operate end to end, achieving closed-loop intelligent security [9].

Figure 2: Proposed Context-Aware UEBA System Architecture with Prescriptive Risk Scoring Pipeline

      2. Context-Aware Modeling

Contextual intelligence integrated into behavioral analysis enhances detection accuracy, setting the proposed system apart. Common UEBA systems are purely behavioral and fail to account for the environmental and situational factors that influence what a user does and does not do. Contextual modeling accounts for attributes such as the user's location, device identity, login time, and access privileges. This adds new layers to the analysis: for instance, normal deviations are categorized, allowing the system to distinguish between coincidental anomalies and genuine threats. We define the feature representation as a vector of behavioral and contextual properties:

X = {x_1, x_2, x_3, . . . , x_n}    (1)

where each feature x_i captures parameters such as login frequency, session duration, geolocation variance, device trust score, and the file path accessed. This enriches the problem space, allowing the system to learn complex interactions between user behavior and contextual conditions, which leads to better anomaly detection performance and fewer false positives [10].
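To make the combined representation concrete, the following sketch assembles such a feature vector from one activity record. The field names and encodings (including the cyclic hour encoding) are illustrative assumptions, not the paper's exact schema.

```python
import math

def build_feature_vector(event):
    """Assemble x = {x_1, ..., x_n} from one activity record.

    Fields and encodings are illustrative; a production schema would
    be derived from the actual log sources.
    """
    return [
        event["login_count_24h"],                    # behavioral: login frequency
        event["session_minutes"],                    # behavioral: session duration
        event["geo_variance_km"],                    # contextual: geolocation variance
        event["device_trust"],                       # contextual: device trust score in [0, 1]
        math.sin(2 * math.pi * event["hour"] / 24),  # contextual: cyclic access-time encoding
        math.cos(2 * math.pi * event["hour"] / 24),
        1.0 if event["sensitive_path"] else 0.0,     # behavioral: sensitive file path accessed
    ]

sample = {"login_count_24h": 4, "session_minutes": 37.5, "geo_variance_km": 0.8,
          "device_trust": 0.9, "hour": 10, "sensitive_path": False}
x = build_feature_vector(sample)
```

The cyclic sine/cosine pair keeps 23:00 and 01:00 numerically close, which matters when access time feeds a distance-sensitive model.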

      3. Machine Learning Model

Behavioral analysis: This component uses an autoencoder-based deep learning model that learns typical user behavior and identifies anomalies. Autoencoders are especially useful for high-dimensional, unlabeled data, making them well-suited to cybersecurity applications. The model is trained to reconstruct normal behavior patterns, so any significant difference between the model's input and output is treated as anomalous activity. The anomaly score is the absolute reconstruction error between the input x and the output x̂:

A(x) = |x − x̂|    (2)

The overall reconstruction loss, which quantifies deviation across all features, is calculated as:

L = (1/n) Σ_{i=1}^{n} (x_i − x̂_i)^2    (3)

A larger reconstruction error implies a larger deviation from the learned normal behavior, thus pointing to insider threats or account takeovers. Such deep-learning-based UEBA methods have been shown to detect threats better than traditional statistical models, especially in complex enterprise settings [11].
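The reconstruction-error idea behind Eqs. (2) and (3) can be illustrated with a minimal linear autoencoder in NumPy. This is a didactic sketch on synthetic data, not the deep TensorFlow model used in the experiments: the data, network shape, and training settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" behavior: 500 sessions, 5 correlated features each.
Z = rng.normal(size=(500, 2))
W_true = rng.normal(size=(2, 5))
X = Z @ W_true + 0.05 * rng.normal(size=(500, 5))

# Minimal linear autoencoder (5 -> 2 -> 5) trained by gradient descent.
W_enc = 0.1 * rng.normal(size=(5, 2))
W_dec = 0.1 * rng.normal(size=(2, 5))
lr = 0.01
for _ in range(2000):
    H = X @ W_enc                                    # encode
    X_hat = H @ W_dec                                # decode (reconstruction)
    err = X_hat - X
    W_dec -= lr * (H.T @ err) / len(X)               # gradient of MSE w.r.t. decoder
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)   # gradient of MSE w.r.t. encoder

def anomaly_score(x):
    """Per-sample reconstruction error, mirroring Eq. (3)."""
    x_hat = (x @ W_enc) @ W_dec
    return float(np.mean((x - x_hat) ** 2))

normal_scores = [anomaly_score(x) for x in X[:10]]
outlier = np.full(5, 8.0)   # activity far from the learned behavior manifold
```

Because the outlier lies far off the low-dimensional structure the model learned, its reconstruction error lands well above the scores of normal sessions, which is exactly the signal the framework thresholds on.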

      4. Risk Scoring Model

The proposed risk scoring model offers a dynamic and adaptive process for assessing the severity of identified anomalies. In contrast to traditional, generally static scoring, it incorporates the anomaly score, contextual information, and historical behavior to produce a holistic risk value. The risk score is computed as:

R = α·A + β·C + γ·H    (4)

where A is the anomaly score, C the contextual indicator score, H the historical behavior score, and α, β, γ are weighting parameters that determine the contribution of each component. These weights are dynamically adjusted based on system feedback and evolving threat patterns, enabling the model to adapt to changing environments.

The contextual and temporal dimensions improve the correctness and situational awareness of the risk assessment, helping to reduce false positive alarms and to focus attention on critical, real threats [12].
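Eq. (4) can be sketched in a few lines. The weight values and the feedback-driven adaptation rule below are illustrative assumptions, since the paper does not specify the exact update scheme.

```python
def risk_score(anomaly, context, history, weights=(0.5, 0.3, 0.2)):
    """R = alpha*A + beta*C + gamma*H, as in Eq. (4). Weights are illustrative."""
    alpha, beta, gamma = weights
    return alpha * anomaly + beta * context + gamma * history

def adapt_weights(weights, feedback, lr=0.05):
    """Toy adaptation rule (an assumption): nudge weight toward components
    that agreed with the analyst's verdict, then renormalize to sum to 1."""
    adjusted = [max(w + lr * f, 1e-6) for w, f in zip(weights, feedback)]
    total = sum(adjusted)
    return tuple(w / total for w in adjusted)

r = risk_score(anomaly=0.9, context=0.5, history=0.2)
new_w = adapt_weights((0.5, 0.3, 0.2), feedback=(1, -1, 0))
```

Renormalizing keeps the weights interpretable as relative contributions, so R stays comparable across feedback cycles.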

      5. Prescriptive Recommendation Engine

To turn detection into actionable intelligence, the framework uses a prescriptive recommendation engine that automates decision-making based on the calculated risk scores. It compares the current risk level against predefined thresholds and triggers the appropriate security action. A rule-based mechanism regulates the decision-making process to ensure uniform and prompt responses to possible attacks.

Algorithm 1: Prescriptive Decision Engine

Input: Risk Score R
Output: Security Action

Begin
    if (R < T1) then
        Action ← "Normal (No Action Required)"
    else if (R >= T1 and R < T2) then
        Action ← "Generate Alert and Monitor Activity"
    else if (R >= T2) then
        Action ← "Block Access + Enforce MFA + Initiate Investigation"
    end if
    return Action
End
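Algorithm 1 translates directly into code. The threshold values t1 and t2 below are placeholders; the paper leaves their concrete values deployment-specific.

```python
def prescribe(risk, t1=0.4, t2=0.7):
    """Map a risk score R to a security action, mirroring Algorithm 1.

    t1 and t2 are illustrative placeholders for the tunable thresholds.
    """
    if risk < t1:
        return "Normal (No Action Required)"
    if risk < t2:
        return "Generate Alert and Monitor Activity"
    return "Block Access + Enforce MFA + Initiate Investigation"

action = prescribe(0.82)
```

Keeping the mapping in one pure function makes the response policy easy to audit and to retune as thresholds evolve.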

This prescriptive method makes the system more effective by allowing real-time response and minimizing manual intervention. The proposed approach offers a comprehensive and dynamic cybersecurity solution with anomaly detection, contextual intelligence, and automated response capabilities suited to current threat environments. Moreover, a feedback mechanism and regular model updates ensure that the system adapts to new attack patterns, keeping it resilient and efficient in the long run [9].

  4. EXPERIMENTAL SETUP

The experimental setup for the proposed context-aware UEBA framework is designed to test the performance of signatureless anomaly detection and prescriptive risk scoring under realistic enterprise conditions for insider threat detection. The experiments are carried out on the well-known CERT Insider Threat Dataset, which provides realistic but synthetic records of user activities, including logins, file access, email communication, and device usage [7]. The dataset is well suited to UEBA because it simulates realistic insider threats, covering both benign and malicious behavior over time. Data preprocessing removes inconsistencies, handles missing values, and normalizes features so that the data is compatible with the machine learning models. In addition to the CERT dataset, simulated logs are created by augmenting records with contextual features such as geographical location, device ID, and access time to support the framework's context-aware modeling [13].
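The preprocessing step described above (missing-value handling plus normalization) can be sketched as follows. Median imputation and z-scoring are common choices, though the paper does not fix the exact methods, so treat both as assumptions.

```python
import numpy as np

def preprocess(X):
    """Median-impute missing values (NaN) and z-score normalize each column.

    A simplified stand-in for the paper's preprocessing stage.
    """
    X = np.asarray(X, dtype=float)
    for j in range(X.shape[1]):
        col = X[:, j]                     # view: edits write back into X
        med = np.nanmedian(col)           # robust to outliers in the column
        col[np.isnan(col)] = med
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma[sigma == 0] = 1.0               # guard against constant columns
    return (X - mu) / sigma

clean = preprocess([[1.0, 10.0], [2.0, float("nan")], [3.0, 30.0]])
```

After this step every feature has zero mean and unit variance, which keeps the autoencoder's reconstruction loss from being dominated by large-scale features.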

Python is used to implement the system, given its rich libraries for data analysis and machine learning. Essential tools and libraries include TensorFlow for the autoencoder and Scikit-learn for data scaling and evaluation metrics. Libraries such as Pandas, NumPy, and Matplotlib are used for data manipulation and visualization, enabling fast handling of the large-scale behavioral dataset. An autoencoder model is trained on normal user behavior to learn baseline patterns, then flags suspicious activities during the test phase based on reconstruction error. By incorporating contextual features, the model can better determine whether a pattern represents legitimate or anomalous activity.

The experiment environment is set up to enable reproducible training. The system runs on a machine with a multi-core CPU (Intel i7 or similar), 16 GB RAM, and an optional NVIDIA CUDA-enabled GPU for accelerating deep learning. With high-dimensional data and large datasets, training can be time-consuming; the GPU shortens training time significantly. All experiments are carried out in a controlled manner so that parameter settings are identical across test cases and the evaluation metrics are reproducible. Performance is evaluated using standard metrics (accuracy, precision, recall, and F1-score) together with an analysis of false positive rates, to verify that the proposed risk scoring mechanism is effective. This setup makes the framework both technically sound and scalable for actual implementation in enterprise security operations.
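The evaluation metrics named above follow directly from confusion-matrix counts; a self-contained sketch (Scikit-learn provides equivalents, but the formulas are shown explicitly here):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1, and false positive rate
    for binary labels where 1 = anomalous, 0 = normal."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
        "fpr": fp / (fp + tn) if fp + tn else 0.0,   # false positive rate
    }

m = classification_metrics([1, 1, 0, 0, 0], [1, 0, 0, 0, 1])
```

Reporting the false positive rate alongside precision matters here, since reducing analyst-facing false alarms is one of the framework's stated goals.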

TABLE 2: DATASET DESCRIPTION

Parameter          | Description
-------------------|------------------------------------------------------
Dataset Name       | CERT Insider Threat Dataset
Data Type          | Synthetic behavioral logs
Features           | Login activity, file access, email logs, device usage
Context Attributes | Location, device ID, access time
Time Span          | Multi-month user activity simulation
Number of Users    | Thousands of simulated users
Data Format        | CSV / Log files
Purpose            | Insider threat detection and behavior analysis

  5. RESULTS AND ANALYSIS

We evaluated the proposed context-aware UEBA framework with prescriptive risk scoring on the CERT Insider Threat Dataset and complementary contextual logs to demonstrate its effectiveness in detecting deviant behavior and minimizing false alarms. We compare the performance of the proposed autoencoder-based model with baseline approaches, including a basic statistical method and Isolation Forest. The findings show that contextual intelligence and dynamic risk scoring enable a substantial improvement in detection accuracy and operational efficiency. Visuals including a Receiver Operating Characteristic (ROC) curve, a Precision-Recall curve, and a risk score distribution accompany and support the analysis.

Figure 3: ROC Curve for Model Performance Evaluation

The ROC curve depicts the balance between sensitivity and specificity at various thresholds. The model obtains a higher Area Under the Curve (AUC) than the baseline methods, reflecting a stronger ability to discriminate between normal and anomalous behavior.
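For reference, the AUC that summarizes an ROC curve can be computed without plotting, via the Mann-Whitney pairwise formulation: the probability that a randomly chosen anomalous event outscores a randomly chosen normal one.

```python
def auc(pos_scores, neg_scores):
    """AUC via pairwise comparison: count wins of anomalous-event scores
    over normal-event scores, with ties counted as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Applied to the model's anomaly scores, a value near 1 corresponds to the near-perfect separation the ROC analysis reports.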

Figure 4: Precision-Recall Curve for Anomaly Detection Performance

The ROC curve also indicates that even at a low false positive rate, the true positive rate remains high, which makes a real difference in security scenarios where false alarms cannot be afforded. Likewise, the Precision-Recall curve showcases the model's strength in achieving high recall while maintaining precision at 92%, indicating that the model efficiently spots genuine threats without inundating the system with false alarms.

One of the improvements in the proposed framework is a significant decrease in the false positive ratio. Conventional SIEM and even some basic UEBA approaches are known to surface benign anomalies as threats due to lack of context. By contrast, by including contextual elements such as user location, device trust level, and temporal distinctiveness, the system can reason and decide better. For example, an unusual login time that would otherwise be construed as malicious raises no alarm when the event fits familiar contextual patterns, preventing a false positive. This contextual filtration mechanism minimizes the false positive rate, increasing the trustworthiness of the system as a whole and reducing the workload of security analysts.

The proposed system improves accuracy while also speeding up detection and response. The autoencoder model provides automatic real-time anomaly scoring based on reconstruction error, while the dynamic risk scoring engine quickly assesses and assigns a risk level for every event. The prescriptive recommendation engine further accelerates response by automatically determining the best actions according to predefined thresholds. Such end-to-end automation minimizes the lag between detection and mitigation, a delay that is key to averting the escalation of insider threats.

The risk score distribution analysis offers additional evidence that the model is functioning correctly. Most user activities cluster within small risk score ranges, whereas anomalous behaviors receive values many times higher. This clear demarcation between benign and malicious events validates the dynamic risk scoring model. With adaptive weighting, risk scores capture not only behavioral variations but also context, resulting in an enhanced ability to prioritize threats.

Figure 5: Risk Score Distribution Across User Activities

TABLE 3: PERFORMANCE COMPARISON OF MODELS

Model                       | Accuracy (%) | Precision (%) | Recall (%)
----------------------------|--------------|---------------|-----------
Statistical Model           | 82.4         | 79.2          | 76.8
Isolation Forest            | 88.7         | 85.3          | 83.9
Traditional UEBA            | 91.5         | 89.6          | 87.2
Proposed Context-Aware UEBA | 96.8         | 95.4          | 94.1

    1) Discussion

The experimental evaluation validates a clear improvement in anomaly detection and risk assessment over traditional approaches. Graphical analysis via the ROC curve shows that the context-aware UEBA model maintains a higher true positive rate at lower false positive rates, meaning the system discriminates very well between benign and malicious behavior (AUC values close to 1). In contrast to conventional and machine learning models that struggle with widely overlapping behavioral patterns, contextual intelligence makes the model sensitive to small variations while remaining stable across a broad range of thresholds.

The Precision-Recall analysis further affirms the strength of the proposed system. The high precision indicates that most detected anomalies are true positives, and the strong recall shows that the system captures a large share of actual malicious activity. Achieving this balance is key in cybersecurity, where a false negative can be more harmful than a false positive. The observed improvement over the baseline models indicates that autoencoder-based learning combined with added context features maps user behavior more closely. The system thus reduces misclassification and helps ensure that serious threats are not missed.

Minimizing false positives, often considered the major difficulty in SIEM and traditional UEBA systems, is a key benefit of the proposed detection methodology. The system integrates complementary contextual parameters such as user location, device identity, and temporal patterns, reducing the likelihood of benign anomalies being flagged as threats. An abnormal login, for example, is scored against past login patterns and additional contextual data. This layered evaluation mechanism minimizes false positives and makes the system less prone to errors, allowing security analysts to look past excessive notifications and focus on genuinely suspicious behavior.

The risk score distribution offers further insight into the accuracy of the dynamic risk scoring model. The model appropriately reflects the severity of behavioral deviation, as evidenced by the clean separation of low-risk and high-risk activities: normal user behavior clusters in the lower score ranges, while abnormal events clearly occupy the higher risk levels. Because security responses cannot all be prioritized manually, this separation lets the system allocate resources to the most critical threats first. The adaptive weighting mechanism within the risk model keeps contextual and historical factors properly weighted, resulting in more accurate and situationally aware risk assessments.

Response time is another crucial result. With the integrated prescriptive recommendation engine, the system moves from detection to action. Rather than requiring human intervention, the framework automatically triggers relevant responses according to predefined risk thresholds. Automating these steps reduces the time taken to contain a threat, limiting the impact of an insider attack. The system can enforce actions in real time (multi-factor authentication, access restrictions, investigation initiation), which strengthens the overall security posture.

More generally, the approach demonstrates scalability and flexibility, making it applicable in real-world enterprise settings. The blend of machine learning, the situational awareness of contextual modeling, and prescriptive, actionable guidance creates an all-in-one cybersecurity solution that covers the shortcomings of current systems. Although the results of the current implementation are promising, future work could involve real-time streaming integration and reinforcement learning to improve performance further. In conclusion, the results confirm that the proposed context-aware UEBA framework achieves better detection capabilities while moving cybersecurity operations from reactive monitoring toward proactive, intelligent defense.

  6. CONCLUSION AND FUTURE WORK

This work traces the path along which cybersecurity systems have had to evolve continually to face new challenges. As our adversaries have adapted, so must our approach; traditional mechanisms matter, but they stand stiff against a new world of fluid, evolving opponents. In this paper, we designed and presented a context-aware UEBA framework integrated with a prescriptive risk scoring mechanism that aims not only to identify anomalies but also to analyze, rank, and address them with appropriate intelligence. By combining behavioral analytics, contextual awareness, and automated decision-making, the proposed system can go beyond passive monitoring into active defense.

Its fundamental strength is that it understands behavior not in a vacuum but in context. With variables including where a user is located, the identity of the user's device, patterns varying through time, and previous activity, the system builds a much richer and more finely tuned sense of normal versus abnormal behavior. This broader standpoint greatly improves the accuracy of anomaly detection and reduces false positives, a core hurdle that has plagued cyber defense systems for decades. This is made possible by the autoencoder-based machine learning model, which lets the system learn non-linear, complex patterns of behavior and detect small deviations, together with the dynamic risk scoring mechanism, which provides an adaptive and holistic way of assessing such deviations.

In addition, the prescriptive recommendation engine marks a critical transition from a detection-based to an action-based approach. Instead of raising individual alerts, the system maps risk scores to timely actions (alerting, multi-factor authentication, limited access). This ability lessens dependence on human intervention and speeds response, reducing the potential fallout from insider threats. Our experimental evaluations confirm that the approach outperforms traditional models in accuracy, precision, and recall, while addressing scalability and efficiency concerns that bear on real-world deployment.

In summary, this work contributes by redefining UEBA systems from purely analytical tools into intelligent decision-support frameworks. By bridging the gap between data and action, it enables not only anomaly detection but also informed defensive responses. The framework's adaptability ensures continued effectiveness in dynamic environments, allowing it to evolve with emerging threats and changing user behavior.

There remains considerable scope to improve and extend this work. Incorporating reinforcement learning could allow the system to adaptively tune the weights assigned to risk-scoring features and the thresholds at which decisions are made, using feedback to learn which actions achieve the best long-term outcomes. Moreover, graph analytics and deep sequence models such as transformers may prove valuable in capturing complex relationships and temporal dependencies in user interactions.

Another promising avenue is adapting the proposed framework to existing SOC tooling and Zero Trust architectures so that it can be deployed and interoperate smoothly within enterprise ecosystems. Additionally, extending the system to handle cross-domain and cross-organizational data could open new opportunities for collaborative threat intelligence and coordinated defense.

This work, then, is not an end point but a starting point: a system that listens for subtle patterns of behavior, reads between the lines of its data, and responds with a measured combination of reasoning and enforcement. In a cybersecurity landscape that is constantly shifting, such systems will not merely defend the gates; they will anticipate the tide.
