SusBench: An Online Benchmark for Evaluating Dark Pattern Susceptibility of Computer-Use Agents
By: Longjie Guo, Chenjie Yuan, Mingyuan Zhong, and more
Potential Business Impact:
Tests if smart computer helpers can be tricked.
As LLM-based computer-use agents (CUAs) begin to autonomously interact with real-world interfaces, understanding their vulnerability to manipulative interface designs becomes increasingly critical. We introduce SusBench, an online benchmark for evaluating the susceptibility of CUAs to UI dark patterns: designs that aim to manipulate or deceive users into taking unintended actions. Drawing nine common dark pattern types from existing taxonomies, we developed a method for constructing believable dark patterns on real-world consumer websites through code injections, and designed 313 evaluation tasks across 55 websites. Our study with 29 participants showed that humans perceived our dark pattern injections to be highly realistic; the vast majority of participants did not notice that these had been injected by the research team. We evaluated five state-of-the-art CUAs on the benchmark. We found that both human participants and agents are particularly susceptible to the dark patterns of Preselection, Trick Wording, and Hidden Information, while being resilient to other, more overt dark patterns. Our findings inform the development of more trustworthy CUAs, their use as potential human proxies in evaluating deceptive designs, and the regulation of an online environment increasingly navigated by autonomous agents.
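To make the code-injection idea concrete, here is a minimal, hypothetical sketch (not the authors' actual harness) of how a "Preselection" dark pattern could be injected into a page: a pre-checked opt-in checkbox is spliced into a checkout form's HTML before it is served to the agent. The snippet, function name, and anchor tag are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a "Preselection" dark-pattern injection.
# We splice a pre-checked marketing opt-in into a page's HTML, just
# before the form's closing tag. Names and markup are illustrative.
INJECTED_SNIPPET = (
    '<label><input type="checkbox" name="newsletter" checked> '
    "Yes, send me daily marketing emails</label>"
)

def inject_preselection(page_html: str, anchor: str = "</form>") -> str:
    """Insert the pre-checked opt-in just before the form's closing tag."""
    if anchor not in page_html:
        return page_html  # leave pages without a form untouched
    return page_html.replace(anchor, INJECTED_SNIPPET + anchor, 1)

checkout = "<form><button>Place order</button></form>"
print(inject_preselection(checkout))
```

In a real evaluation this kind of snippet would more plausibly be applied in the live DOM (e.g., via a browser-extension content script) so the modified page looks native to both human participants and agents; a benchmark task would then check whether the agent unchecks the preselected option before submitting.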
Similar Papers
DarkBench: Benchmarking Dark Patterns in Large Language Models
Computation and Language
Finds sneaky ways AI can trick people.
Dark Patterns Meet GUI Agents: LLM Agent Susceptibility to Manipulative Interfaces and the Role of Human Oversight
Human-Computer Interaction
Helps computers spot sneaky online tricks.
Investigating the Impact of Dark Patterns on LLM-Based Web Agents
Cryptography and Security
Protects online helpers from sneaky website tricks.