Whisper Smarter, not Harder: Adversarial Attack on Partial Suppression
By: Zheng Jie Wong, Bingquan Shen
Potential Business Impact:
Makes voice assistants safer from sneaky tricks.
Currently, Automatic Speech Recognition (ASR) models are deployed across an extensive range of applications. However, recent studies have demonstrated adversarial attacks on these models that can suppress or disrupt model output. We investigate and verify the robustness of these attacks and explore whether it is possible to increase their imperceptibility. We additionally find that by relaxing the optimisation objective from complete suppression to partial suppression, we can further increase the imperceptibility of the attack. We also explore possible defences against these attacks and show that a low-pass filter could potentially serve as an effective defence.
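To make the complete-versus-partial suppression relaxation concrete, here is a minimal sketch of the kind of perturbation optimisation involved. It assumes a CTC-style ASR model that emits per-frame logits; `asr_model`, the blank-token convention, and the loss weighting are illustrative placeholders (the paper targets Whisper, whose decoder works differently), not the authors' actual formulation.

```python
import torch

def partial_suppression_attack(asr_model, waveform, suppress_fraction=0.5,
                               epsilon=0.002, steps=100, lr=1e-3,
                               blank_id=0, imperceptibility_weight=10.0):
    """Optimise a small additive perturbation that suppresses only a fraction
    of the output frames, trading attack strength for imperceptibility."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimiser = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = asr_model(waveform + delta)  # assumed shape: (time, vocab)
        cutoff = int(suppress_fraction * logits.size(0))
        # Suppression term: push the first `cutoff` frames towards the blank
        # token. Complete suppression would apply this to every frame;
        # partial suppression relaxes it to a fraction, which leaves more
        # room to keep the perturbation quiet.
        targets = torch.full((cutoff,), blank_id, dtype=torch.long)
        loss_suppress = torch.nn.functional.cross_entropy(logits[:cutoff], targets)
        # Imperceptibility term: penalise perturbation energy.
        loss = loss_suppress + imperceptibility_weight * delta.pow(2).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        # Project back into a small L-infinity ball so the noise stays faint.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (waveform + delta).detach()
```

The key design choice this sketch illustrates: suppressing fewer frames weakens the suppression constraint, so the optimiser can satisfy it with a smaller, less perceptible perturbation.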
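The low-pass filter defence can be sketched just as simply: attenuate the high-frequency band, where adversarial perturbation energy often sits, before passing the audio to the ASR model. The 4 kHz cutoff below is an assumed value for illustration, not necessarily the one evaluated in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_defence(waveform, sample_rate=16000, cutoff_hz=4000, order=5):
    """Zero-phase low-pass filter applied to audio before transcription.
    Speech remains intelligible below the cutoff, while high-frequency
    adversarial noise is attenuated."""
    nyquist = 0.5 * sample_rate
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    return filtfilt(b, a, waveform)

# Example: clean = lowpass_defence(np.asarray(audio, dtype=np.float64))
```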
Similar Papers
Selective Masking Adversarial Attack on Automatic Speech Recognition Systems
Cryptography and Security
Tricks voice assistants into hearing only one person.
Whispering Under the Eaves: Protecting User Privacy Against Commercial and LLM-powered Automatic Speech Recognition Systems
Cryptography and Security
Keeps your voice private from listening computers.
Are Modern Speech Enhancement Systems Vulnerable to Adversarial Attacks?
Audio and Speech Processing
Makes voices say different things with hidden sounds.