Uncertainty-Aware, Risk-Adaptive Access Control for Agentic Systems using an LLM-Judged TBAC Model
By: Charles Fleming, Ashish Kundu, Ramana Kompella
Potential Business Impact:
Keeps AI agents safe by checking risky new tasks.
The proliferation of autonomous AI agents within enterprise environments introduces a critical security challenge: managing access control for emergent, novel tasks for which no predefined policies exist. This paper presents a security framework that extends the Task-Based Access Control (TBAC) model by using a Large Language Model (LLM) as an autonomous, risk-aware judge. The model bases access control decisions not only on an agent's intent but also on the inherent risk associated with target resources and on the LLM's own model uncertainty. When an agent proposes a novel task, the LLM judge synthesizes a just-in-time policy while also computing a composite risk score for the task and an uncertainty estimate for its own reasoning. High-risk or high-uncertainty requests trigger more stringent controls, such as requiring human approval. This dual consideration of external risk and internal confidence allows the model to enforce a more robust and adaptive version of the principle of least privilege, paving the way for safer and more trustworthy autonomous systems.
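To make the escalation logic concrete, the following Python sketch shows one way the risk- and uncertainty-gated decision step could look. The thresholds, field names, and the decide function are illustrative assumptions, not the authors' implementation; the paper does not specify how the composite risk score or the uncertainty estimate are computed.

# Minimal sketch of the risk- and uncertainty-gated decision described in the abstract.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"                    # low risk, high confidence: grant least-privilege access
    HUMAN_APPROVAL = "human_approval"  # elevated risk or uncertainty: escalate to a human
    DENY = "deny"                      # risk or uncertainty beyond acceptable bounds


@dataclass
class JudgedTask:
    risk_score: float    # composite risk of the target resources, normalized to [0, 1]
    uncertainty: float   # LLM judge's uncertainty in its own reasoning, normalized to [0, 1]


def decide(task: JudgedTask,
           risk_threshold: float = 0.4,
           uncertainty_threshold: float = 0.3,
           hard_limit: float = 0.8) -> Decision:
    """Map the judge's risk and uncertainty estimates to an access-control action."""
    if task.risk_score >= hard_limit or task.uncertainty >= hard_limit:
        return Decision.DENY
    if task.risk_score >= risk_threshold or task.uncertainty >= uncertainty_threshold:
        return Decision.HUMAN_APPROVAL
    return Decision.ALLOW


# Example: a novel task touching a sensitive resource, judged with moderate confidence.
print(decide(JudgedTask(risk_score=0.55, uncertainty=0.2)))  # Decision.HUMAN_APPROVAL

Under these assumed thresholds, either a risky resource or an unsure judge is enough to route the request to a human reviewer, which mirrors the dual gating on external risk and internal confidence described above.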
Similar Papers
A Vision for Access Control in LLM-based Agent Systems
Multiagent Systems
Lets AI agents share information safely and smartly.
Securing AI Agents: Implementing Role-Based Access Control for Industrial Applications
Artificial Intelligence
Keeps AI agents safe from hackers.