It's a TRAP! Task-Redirecting Agent Persuasion Benchmark for Web Agents
By: Karolina Korgul , Yushi Yang , Arkadiusz Drohomirecki and more
Web-based agents powered by large language models are increasingly used for tasks such as email management or professional networking. Their reliance on dynamic web content, however, makes them vulnerable to prompt injection attacks: adversarial instructions hidden in interface elements that persuade the agent to divert from its original task. We introduce the Task-Redirecting Agent Persuasion Benchmark (TRAP), an evaluation for studying how persuasion techniques misguide autonomous web agents on realistic tasks. Across six frontier models, agents are susceptible to prompt injection in 25\% of tasks on average (13\% for GPT-5 to 43\% for DeepSeek-R1), with small interface or contextual changes often doubling success rates and revealing systemic, psychologically driven vulnerabilities in web-based agents. We also provide a modular social-engineering injection framework with controlled experiments on high-fidelity website clones, allowing for further benchmark expansion.
Similar Papers
TRAP: Targeted Redirecting of Agentic Preferences
Artificial Intelligence
Tricks AI robots into making wrong choices.
WASP: Benchmarking Web Agent Security Against Prompt Injection Attacks
Cryptography and Security
AI helpers can be tricked by simple tricks.
Securing AI Agents Against Prompt Injection Attacks
Cryptography and Security
Protects smart AI from being tricked by bad instructions.