ASRJam: Human-Friendly AI Speech Jamming to Prevent Automated Phone Scams

Published: June 10, 2025 | arXiv ID: 2506.11125v1

By: Freddie Grabovski, Gilad Gressel, Yisroel Mirsky

Potential Business Impact:

Disrupts automated scam calls by making the victim's speech hard for the scammer's speech-recognition software to transcribe, while remaining intelligible to human listeners.

Business Areas:
Speech Recognition Data and Analytics, Software

Large Language Models (LLMs), combined with Text-to-Speech (TTS) and Automatic Speech Recognition (ASR), are increasingly used to automate voice phishing (vishing) scams. These systems are scalable and convincing, posing a significant security threat. We identify the ASR transcription step as the most vulnerable link in the scam pipeline and introduce ASRJam, a proactive defence framework that injects adversarial perturbations into the victim's audio to disrupt the attacker's ASR. This breaks the scam's feedback loop without affecting human callers, who can still understand the conversation. While prior adversarial audio techniques are often unpleasant and impractical for real-time use, we also propose EchoGuard, a novel jammer that leverages natural distortions, such as reverberation and echo, that are disruptive to ASR but tolerable to humans. To evaluate EchoGuard's effectiveness and usability, we conducted a 39-person user study comparing it with three state-of-the-art attacks. Results show that EchoGuard achieved the highest overall utility, offering the best combination of ASR disruption and human listening experience.
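The abstract describes EchoGuard as leveraging natural distortions such as reverberation and echo to degrade ASR transcription while staying tolerable to humans. The sketch below is only an illustration of that general idea, not the paper's actual method: it overlays a few decaying echoes on a mono audio signal using NumPy, and the function name and parameters (delay, decay, number of echoes) are hypothetical choices, not values taken from the paper.

```python
import numpy as np

def apply_echo_jam(audio: np.ndarray, sample_rate: int,
                   delay_s: float = 0.12, decay: float = 0.5,
                   num_echoes: int = 3) -> np.ndarray:
    """Overlay decaying echoes on a mono signal.

    Hypothetical illustration of echo-style jamming; the paper's
    EchoGuard perturbation is not specified here.
    """
    out = audio.astype(np.float64).copy()
    delay_samples = int(delay_s * sample_rate)
    for k in range(1, num_echoes + 1):
        shift = k * delay_samples          # offset of the k-th echo
        gain = decay ** k                  # each echo is quieter than the last
        if shift < len(audio):
            out[shift:] += gain * audio[:-shift]
    # Rescale to avoid clipping after summing the echoes
    peak = np.max(np.abs(out))
    if peak > 1.0:
        out /= peak
    return out
```

In practice such a perturbation would be applied to the victim's outgoing audio stream in real time; the paper's user study compares this kind of natural-sounding distortion against prior adversarial audio attacks on both ASR disruption and listener comfort.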

Page Count
13 pages

Category
Computer Science:
Computation and Language