Score: 1

Language Model Agents Under Attack: A Cross-Model Benchmark of Profit-Seeking Behaviors in Customer Service

Published: December 30, 2025 | arXiv ID: 2512.24415v1

By: Jingyu Zhang

BigTech Affiliations: University of Washington

Potential Business Impact:

Helps prevent customer-service chatbots from being manipulated into granting unauthorized refunds, rebookings, or billing concessions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Customer-service LLM agents increasingly make policy-bound decisions (refunds, rebooking, billing disputes), but the same "helpful" interaction style can be exploited: a small fraction of users can induce unauthorized concessions, shifting costs to others and eroding trust in agentic workflows. We present a cross-domain benchmark of profit-seeking direct prompt injection in customer-service interactions, spanning 10 service domains and 100 realistic attack scripts grouped into five technique families. Across five widely used models under a unified rubric with uncertainty reporting, attacks are highly domain-dependent (airline support is most exploitable) and technique-dependent (payload splitting is most consistently effective). We release data and evaluation code to support reproducible auditing and to inform the design of oversight and recovery workflows for trustworthy, human-centered agent interfaces.
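
The abstract implies a grid-style evaluation: 100 attack scripts spanning 10 domains and five technique families, run against five models and scored with a unified rubric plus uncertainty reporting. A minimal sketch of such an audit loop, under assumptions, appears below; the function names (query_agent, is_unauthorized_concession), model identifiers, and the attacks.json layout are illustrative placeholders, not the authors' released code.

    # Minimal sketch of a cross-model audit loop for profit-seeking prompt injection.
    # All names, model IDs, and the data layout are assumptions for illustration,
    # not the benchmark's actual released code.

    import json
    from collections import defaultdict
    from statistics import mean

    MODELS = ["model-a", "model-b", "model-c", "model-d", "model-e"]  # placeholder IDs

    def query_agent(model_id: str, domain: str, attack_script: str) -> str:
        """Send the attack script to a customer-service agent backed by model_id.
        Stub: replace with a real API call to the deployment under test."""
        raise NotImplementedError

    def is_unauthorized_concession(transcript: str, policy: dict) -> bool:
        """Rubric check: did the agent grant something the domain policy forbids
        (refund, rebooking, billing credit)? Stub for the grading rubric."""
        raise NotImplementedError

    def run_benchmark(attacks_path: str = "attacks.json") -> dict:
        # Assumed layout of attacks.json:
        # [{"domain": "airline", "technique": "payload_splitting",
        #   "script": "...", "policy": {...}}, ...]
        with open(attacks_path) as f:
            attacks = json.load(f)

        outcomes = defaultdict(list)  # (model, domain, technique) -> list of 0/1 results
        for model in MODELS:
            for case in attacks:
                transcript = query_agent(model, case["domain"], case["script"])
                exploited = is_unauthorized_concession(transcript, case["policy"])
                outcomes[(model, case["domain"], case["technique"])].append(int(exploited))

        # Aggregate attack-success rate per (model, domain, technique) cell; a full
        # harness would also report uncertainty, e.g. bootstrap intervals over scripts.
        return {key: mean(vals) for key, vals in outcomes.items()}

Averaging the resulting cells over models surfaces the domain- and technique-level patterns the abstract reports (e.g. airline support and payload splitting standing out), while per-model slices support the cross-model comparison.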

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Cryptography and Security