Who Grants the Agent Power? Defending Against Instruction Injection via Task-Centric Access Control
By: Yifeng Cai, Ziming Wang, Zhaomeng Deng, and more
Potential Business Impact:
Stops apps from tricking AI into doing bad things.
AI agents capable of GUI understanding and tool invocation via the Model Context Protocol (MCP) are increasingly deployed to automate mobile tasks. However, their reliance on over-privileged, static permissions creates a critical vulnerability: instruction injection. Malicious instructions, embedded in otherwise benign content like emails, can hijack the agent into performing unauthorized actions. We present AgentSentry, a lightweight, task-centric runtime access control framework that enforces dynamic, task-scoped permissions. Instead of granting broad, persistent permissions, AgentSentry dynamically generates and enforces minimal, temporary policies aligned with the user's specific task (e.g., registering for an app), revoking them upon completion. We demonstrate that AgentSentry successfully prevents an instruction injection attack, where an agent is tricked into forwarding private emails, while allowing the legitimate task to complete. Our approach highlights the urgent need for intent-aligned security models to safely govern the next generation of autonomous agents.
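To make the task-scoped permission idea concrete, here is a minimal sketch of how a runtime reference monitor with per-task grants and revocation might look. All names (TaskPolicy, AgentAction, derive_policy, guarded_execute) are hypothetical illustrations, not AgentSentry's actual API, and the policy-derivation step is a stand-in for whatever intent-to-policy mapping the paper uses.

```python
# Hypothetical sketch of task-scoped access control for an agent runtime.
# Names and structure are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentAction:
    """A single action the agent wants to perform (e.g., read an email)."""
    app: str        # target app, e.g., "mail"
    operation: str  # e.g., "read", "send", "fill_form"


@dataclass
class TaskPolicy:
    """Minimal, temporary permissions derived from the user's stated task."""
    task: str
    allowed: set[AgentAction] = field(default_factory=set)
    active: bool = True

    def permits(self, action: AgentAction) -> bool:
        return self.active and action in self.allowed

    def revoke(self) -> None:
        """Called when the task completes; all grants expire immediately."""
        self.active = False


def derive_policy(task: str) -> TaskPolicy:
    """Stand-in for dynamic policy generation from the user's intent.
    A real system might use an LLM or rules to map the task to actions."""
    if task == "register for an app":
        return TaskPolicy(task, {
            AgentAction("mail", "read"),          # read the verification email
            AgentAction("target_app", "fill_form"),
        })
    return TaskPolicy(task, set())  # default-deny for unrecognized tasks


def guarded_execute(policy: TaskPolicy, action: AgentAction) -> None:
    """Runtime reference monitor: every agent action passes through here."""
    if not policy.permits(action):
        raise PermissionError(f"Blocked {action} outside task '{policy.task}'")
    print(f"Executing {action}")


policy = derive_policy("register for an app")
guarded_execute(policy, AgentAction("mail", "read"))   # allowed by the task
try:
    # An injected instruction asking to forward private emails is denied,
    # because "send" was never granted for this task.
    guarded_execute(policy, AgentAction("mail", "send"))
except PermissionError as e:
    print(e)
policy.revoke()  # task finished: even previously allowed actions now fail
```

The key design point this sketch illustrates is default-deny plus expiry: an injected instruction cannot invoke any capability that was not minted for the current task, and once the task completes even granted capabilities stop working.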
Similar Papers
Secure and Efficient Access Control for Computer-Use Agents via Context Space
Cryptography and Security
Keeps AI from messing up your computer.
Towards Automating Data Access Permissions in AI Agents
Cryptography and Security
Lets AI ask permission before acting.
CommandSans: Securing AI Agents with Surgical Precision Prompt Sanitization
Cryptography and Security
Stops AI from following bad hidden instructions.