The Siren Song of LLMs: How Users Perceive and Respond to Dark Patterns in Large Language Models

Published: September 13, 2025 | arXiv ID: 2509.10830v2

By: Yike Shi, Qing Xiao, Qing Hu, and more

Potential Business Impact:

AI chatbots can manipulate users through deceptively helpful conversation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models can influence users through conversation, creating new forms of dark patterns that differ from traditional UX dark patterns. We define LLM dark patterns as manipulative or deceptive behaviors enacted in dialogue. Drawing on prior work and AI incident reports, we outline a diverse set of categories with real-world examples. Using these categories, we conducted a scenario-based study in which participants (N=34) compared manipulative and neutral LLM responses. Our results reveal that recognition of LLM dark patterns often hinged on conversational cues such as exaggerated agreement, biased framing, or privacy intrusions, yet these behaviors were sometimes normalized as ordinary assistance. Users' perceptions of these dark patterns shaped how they responded to them. Responsibility for these behaviors was also attributed in different ways, with participants assigning it to companies and developers, to the model itself, or to users. We conclude with implications for design, advocacy, and governance to safeguard user autonomy.

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Computer Science:
Human-Computer Interaction