SoK: Trust-Authorization Mismatch in LLM Agent Interactions
By: Guanquan Shi, Haohua Du, Zhiqiang Wang, and more
Potential Business Impact:
Makes AI agents safer and more trustworthy.
Large Language Models (LLMs) are rapidly evolving into autonomous agents capable of interacting with the external world, significantly expanding their capabilities through standardized interaction protocols. However, this paradigm revives the classic cybersecurity challenges of agency and authorization in a novel and volatile context. As decision-making shifts from deterministic code logic to probabilistic inference driven by natural language, traditional security mechanisms designed for deterministic behavior fail. It is fundamentally challenging to establish trust in unpredictable AI agents and to enforce the Principle of Least Privilege (PoLP) when instructions are ambiguous. Despite the escalating threat landscape, the academic community's understanding of this emerging domain remains fragmented and lacks a systematic framework for analyzing its root causes. This paper provides a unifying formal lens for agent-interaction security. We observe that most security threats in this domain stem from a fundamental mismatch between trust evaluation and authorization policies, and we introduce a novel risk analysis model centered on this trust-authorization gap. Using this model as a unifying lens, we survey and classify the implementation paths of existing, often seemingly isolated, attacks and defenses. This framework not only unifies the field but also allows us to identify critical research gaps. Finally, we leverage our analysis to suggest a systematic research direction toward building robust, trusted agents and dynamic authorization mechanisms.
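To make the trust-authorization gap concrete, here is a minimal sketch in Python. All names, privilege levels, and trust scores are illustrative assumptions, not the paper's model: the point is only that a static authorization policy grants an action a fixed privilege at deploy time, while the trust in the channel that produced the instruction varies at run time, and risk arises where the former exceeds the latter.

```python
# Illustrative sketch of the trust-authorization gap (names and scales
# are hypothetical, not taken from the paper).

# Static authorization policy: privilege granted to each tool,
# fixed when the agent is deployed.
GRANTED_PRIVILEGE = {"read_file": 1, "send_email": 2, "delete_repo": 3}

# Dynamic trust evaluation: how much we trust the channel that produced
# an instruction. A user's direct prompt is trusted far more than
# content the agent happened to read from the web.
CHANNEL_TRUST = {"user_prompt": 3, "tool_output": 1, "web_page": 0}

def trust_authorization_gap(action: str, source: str) -> int:
    """Positive result: the policy authorizes more privilege than the
    evaluated trust in the instruction's source justifies."""
    return GRANTED_PRIVILEGE.get(action, 0) - CHANNEL_TRUST.get(source, 0)

# A prompt-injected instruction embedded in a web page the agent browsed:
gap = trust_authorization_gap("delete_repo", "web_page")
if gap > 0:
    print(f"risk: trust-authorization gap of {gap}; deny or escalate to the user")
```

Under these assumptions, enforcing PoLP would mean dynamically clamping the agent's effective privilege to the trust score of the least-trusted input influencing its decision, rather than relying on the static grant alone.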
Similar Papers
Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels
Cryptography and Security
Finds ways bad apps trick phone AI.
The Trust Paradox in LLM-Based Multi-Agent Systems: When Collaboration Becomes a Security Vulnerability
Cryptography and Security
Makes AI helpers safer when they work together.