A Vision for Access Control in LLM-based Agent Systems
By: Xinfeng Li, Dong Huang, Jie Li, and more
Potential Business Impact:
Makes AI agents share information safely and smartly.
The autonomy and contextual complexity of LLM-based agents render traditional access control (AC) mechanisms insufficient. Static, rule-based systems designed for predictable environments are fundamentally ill-equipped to manage the dynamic information flows inherent in agentic interactions. This position paper argues for a paradigm shift from binary access control to a more sophisticated model of information governance, positing that the core challenge is not merely about permission, but about governing the flow of information. We introduce Agent Access Control (AAC), a novel framework that reframes AC as a dynamic, context-aware process of information flow governance. AAC operates on two core modules: (1) multi-dimensional contextual evaluation, which assesses not just identity but also relationships, scenarios, and norms; and (2) adaptive response formulation, which moves beyond simple allow/deny decisions to shape information through redaction, summarization, and paraphrasing. This vision, powered by a dedicated AC reasoning engine, aims to bridge the gap between human-like nuanced judgment and scalable AI safety, proposing a new conceptual lens for future research in trustworthy agent design.
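The two-module pipeline described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`Request`, `evaluate_context`, `formulate_response`) and the trust/sensitivity thresholds are hypothetical, chosen only to show how contextual evaluation can feed a graded response (full disclosure, summarization, redaction, denial) rather than a binary allow/deny.

```python
from dataclasses import dataclass

@dataclass
class Request:
    requester: str
    relationship: str   # e.g. "teammate", "external" (assumed categories)
    scenario: str       # e.g. "routine", "emergency"
    sensitivity: float  # 0.0 (public) .. 1.0 (highly sensitive)

def evaluate_context(req: Request) -> float:
    """Module 1 (sketch): multi-dimensional contextual evaluation.
    Combines relationship and scenario signals into one trust score."""
    relationship_trust = {"teammate": 0.9, "external": 0.3}.get(req.relationship, 0.5)
    scenario_weight = {"emergency": 1.2, "routine": 1.0}.get(req.scenario, 1.0)
    return min(relationship_trust * scenario_weight, 1.0)

def formulate_response(req: Request, content: str) -> str:
    """Module 2 (sketch): adaptive response formulation.
    Shapes the information flow instead of returning allow/deny."""
    trust = evaluate_context(req)
    if trust >= req.sensitivity:
        return content                        # full disclosure
    if trust >= req.sensitivity - 0.3:
        return f"[summary] {content[:20]}..." # summarize / paraphrase
    if trust >= req.sensitivity - 0.5:
        return "[redacted]"                   # redact sensitive detail
    return "[denied]"                         # deny only as a last resort

# Example: a teammate in a routine scenario sees the full content.
r = Request("alice", "teammate", "routine", sensitivity=0.6)
print(formulate_response(r, "Q3 revenue projections: confidential draft"))
```

In a real system the threshold logic would be replaced by the dedicated AC reasoning engine the paper envisions; the sketch only makes the graded-response idea concrete.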
Similar Papers
A Vision for Access Control in LLM-based Agent Systems
Multiagent Systems
Lets AI agents share information safely and smartly.
Uncertainty-Aware, Risk-Adaptive Access Control for Agentic Systems using an LLM-Judged TBAC Model
Cryptography and Security
Keeps AI safe by checking risky new jobs.
Secure and Efficient Access Control for Computer-Use Agents via Context Space
Cryptography and Security
Keeps AI from messing up your computer.