Are Deep Speech Denoising Models Robust to Adversarial Noise?

Published: March 14, 2025 | arXiv ID: 2503.11627v1

By: Will Schwarzer, Philip S. Thomas, Andrea Fanelli, and more

Potential Business Impact:

Imperceptible adversarial noise can make voice assistants and other speech systems output wrong or unintelligible words.

Business Areas:
Darknet Internet Services

Deep noise suppression (DNS) models enjoy widespread use throughout a variety of high-stakes speech applications. However, in this paper, we show that four recent DNS models can each be reduced to outputting unintelligible gibberish through the addition of imperceptible adversarial noise. Furthermore, our results show the near-term plausibility of targeted attacks, which could induce models to output arbitrary utterances, and over-the-air attacks. While the success of these attacks varies by model and setting, and attacks appear to be strongest when model-specific (i.e., white-box and non-transferable), our results highlight a pressing need for practical countermeasures in DNS systems.
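The white-box attack the abstract describes amounts to optimizing a small, bounded perturbation so that the model's output degrades as much as possible. As a minimal sketch of that idea, the toy example below runs a PGD-style (projected gradient ascent) attack against a stand-in linear "denoiser" `D(x) = W @ x`, maximizing output distortion under an L-infinity bound. The linear model, the function name `pgd_attack`, and all parameter values are illustrative assumptions, not the paper's actual models or attack settings.

```python
import numpy as np

def pgd_attack(W, x, eps=0.01, alpha=0.002, steps=50):
    """Untargeted PGD-style attack on a toy linear 'denoiser' D(x) = W @ x.

    Maximizes ||D(x + delta) - D(x)||^2 subject to ||delta||_inf <= eps.
    For this linear stand-in the loss gradient w.r.t. delta is
    2 * W.T @ W @ delta, so no autodiff framework is needed.
    """
    rng = np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=x.shape)  # random start inside the ball
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ delta)        # d/d delta of ||W @ delta||^2
        delta = delta + alpha * np.sign(grad)  # gradient *ascent* step
        delta = np.clip(delta, -eps, eps)      # project back: keep it imperceptible
    return delta
```

Against a real DNS model, `W @ delta` would be replaced by a backpropagated gradient through the network, and the loss would target intelligibility (or, for the targeted attacks the paper discusses, distance to an arbitrary desired utterance) rather than raw output distortion.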

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Sound