Are Modern Speech Enhancement Systems Vulnerable to Adversarial Attacks?

Published: September 25, 2025 | arXiv ID: 2509.21087v1

By: Rostislav Makarov, Lea Schönherr, Timo Gerkmann

Potential Business Impact:

Hidden noise can make enhanced speech say something entirely different.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Machine learning approaches for speech enhancement are becoming increasingly expressive, enabling ever more powerful modifications of input signals. In this paper, we demonstrate that this expressiveness introduces a vulnerability: advanced speech enhancement models can be susceptible to adversarial attacks. Specifically, we show that adversarial noise, carefully crafted and psychoacoustically masked by the original input, can be injected such that the enhanced speech output conveys an entirely different semantic meaning. We experimentally verify that contemporary predictive speech enhancement models can indeed be manipulated in this way. Furthermore, we highlight that diffusion models with stochastic samplers exhibit inherent robustness to such adversarial attacks by design.
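The attack described above can be sketched as a standard gradient-based adversarial optimization: search for a small, bounded perturbation of the input so that the (differentiable) enhancement model emits an attacker-chosen output. The sketch below is a minimal, hypothetical illustration, not the paper's actual method: the tiny convolutional `model`, the signals, and the plain L-infinity budget `eps` (a crude stand-in for the paper's psychoacoustic masking constraint) are all assumptions for demonstration.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for a predictive speech enhancement model
# (the paper's actual models are not reproduced here).
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 8, kernel_size=9, padding=4),
    torch.nn.ReLU(),
    torch.nn.Conv1d(8, 1, kernel_size=9, padding=4),
)
for p in model.parameters():
    p.requires_grad_(False)  # attack optimizes the input, not the model

x = torch.randn(1, 1, 1600)       # toy "clean" input waveform
target = torch.randn(1, 1, 1600)  # attacker-chosen target output
eps = 0.05                        # L_inf budget; crude proxy for a
                                  # psychoacoustic masking threshold

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

loss_before = torch.nn.functional.mse_loss(model(x), target).item()
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x + delta), target)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)   # keep the perturbation small

loss_after = torch.nn.functional.mse_loss(model(x + delta), target).item()
```

After optimization, `loss_after` should be smaller than `loss_before`: the perturbed input steers the model's output toward the target while the perturbation itself stays within the budget. The paper's psychoacoustic masking would replace the uniform `eps` bound with a frequency- and time-dependent threshold derived from the original signal.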

Repos / Data Links

Page Count
5 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing