Score: 1

Attacker's Noise Can Manipulate Your Audio-based LLM in the Real World

Published: July 7, 2025 | arXiv ID: 2507.06256v1

By: Vinu Sankar Sadasivan, Soheil Feizi, Rajiv Mathews, and more

BigTech Affiliations: Google

Potential Business Impact:

Shows that adversarial audio noise can trigger unintended actions in voice assistants or degrade their responses, making them unsafe to deploy without defenses.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper investigates the real-world vulnerabilities of audio-based large language models (ALLMs), such as Qwen2-Audio. We first demonstrate that an adversary can craft stealthy audio perturbations to manipulate ALLMs into exhibiting specific targeted behaviors, such as eliciting responses to wake-keywords (e.g., "Hey Qwen") or triggering harmful behaviors (e.g., "Change my calendar event"). Subsequently, we show that playing adversarial background noise during user interaction with the ALLMs can significantly degrade the response quality. Crucially, our research illustrates the scalability of these attacks to real-world scenarios, impacting other innocent users when these adversarial noises are played over the air. Further, we discuss the transferability of the attack and potential defensive measures.
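
The attack described in the abstract optimizes a stealthy additive perturbation so the model produces attacker-chosen outputs. Below is a minimal PyTorch sketch of that generic technique; the craft_perturbation function, the toy model, and all hyperparameters are hypothetical illustrations for exposition, since this summary does not specify the paper's exact optimization procedure.

    # Minimal sketch of a targeted audio perturbation (Adam-optimized, L-inf bounded).
    # NOTE: the toy model, shapes, and hyperparameters here are hypothetical; the
    # paper's actual objective is not given in this summary.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def craft_perturbation(model, audio, target_ids, eps=0.008, steps=200, lr=1e-3):
        """Optimize additive noise `delta`, bounded by `eps` in L-inf norm to stay
        stealthy, so model(audio + delta) assigns high likelihood to `target_ids`."""
        delta = torch.zeros_like(audio, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            logits = model(audio + delta)               # (num_tokens, vocab_size)
            loss = F.cross_entropy(logits, target_ids)  # pull output toward target
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                 # keep the noise imperceptible
        return delta.detach()

    # Toy stand-in for a differentiable ALLM head, only to make the sketch run.
    toy_model = nn.Sequential(nn.Linear(16000, 5 * 100), nn.Unflatten(0, (5, 100)))

    audio = torch.randn(16000)            # 1 s of 16 kHz audio (placeholder)
    target = torch.randint(0, 100, (5,))  # hypothetical target token ids
    noise = craft_perturbation(toy_model, audio, target)
    print(noise.abs().max())              # stays within the eps bound

The eps bound is what keeps the perturbation stealthy; a practical over-the-air variant of this kind of attack would typically also optimize over simulated playback and room distortions so the noise survives loudspeaker-to-microphone transmission.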

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
13 pages

Category
Computer Science:
Cryptography and Security