ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls
By: Sanket Badhe
Potential Business Impact:
Demonstrates how AI agents could be used to generate convincing fraudulent phone calls at scale.
Large Language Models (LLMs) have demonstrated impressive fluency and reasoning capabilities, but their potential for misuse has raised growing concern. In this paper, we present ScamAgent, an autonomous multi-turn agent built on top of LLMs, capable of generating highly realistic scam call scripts that simulate real-world fraud scenarios. Unlike prior work focused on single-shot prompt misuse, ScamAgent maintains dialogue memory, adapts dynamically to simulated user responses, and employs deceptive persuasion strategies across conversational turns. We show that current LLM safety guardrails, including refusal mechanisms and content filters, are ineffective against such agent-based threats. Even models with strong prompt-level safeguards can be bypassed when prompts are decomposed, disguised, or delivered incrementally within an agent framework. We further demonstrate the transformation of scam scripts into lifelike voice calls using modern text-to-speech systems, completing a fully automated scam pipeline. Our findings highlight an urgent need for multi-turn safety auditing, agent-level control frameworks, and new methods to detect and disrupt conversational deception powered by generative AI.
Similar Papers
Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams
Cryptography and Security
Examines how AI enables romance-baiting scams that defraud victims.
Bot Wars Evolved: Orchestrating Competing LLMs in a Counterstrike Against Phone Scams
Computation and Language
Counters phone scams by deploying LLM-driven decoy victims to engage scammers.
AI-in-the-Loop: Privacy Preserving Real-Time Scam Detection and Conversational Scambaiting by Leveraging LLMs and Federated Learning
Cryptography and Security
Detects and disrupts online scams in real time during conversations.