Acoustic Simulation Framework for Multi-channel Replay Speech Detection
By: Michael Neri, Tuomas Virtanen
Potential Business Impact:
Helps protect voice assistants from replay attacks that use recorded voices.
Replay speech attacks pose a significant threat to voice-controlled systems, especially in smart environments where voice assistants are widely deployed. While multi-channel audio offers spatial cues that can enhance replay detection robustness, existing datasets and methods predominantly rely on single-channel recordings. In this work, we introduce an acoustic simulation framework for generating multi-channel replay speech configurations from publicly available resources. Our setup models both genuine and spoofed speech across varied environments, including realistic microphone and loudspeaker impulse responses, room acoustics, and noise conditions. The framework employs measured loudspeaker directivities during the replay attack to improve the realism of the simulation. We define two spoofing settings, which simulate whether reverberant or anechoic speech is used in the replay scenario, and evaluate the impact of omnidirectional and diffuse noise on detection performance. Using the state-of-the-art M-ALRAD model for replay speech detection, we demonstrate that synthetic data can support the generalization capabilities of the detector across unseen enclosures.
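The replay chain described in the abstract (attacker records speech in one room, replays it through a loudspeaker, and the system captures it with a microphone array) can be sketched as a cascade of convolutions. This is a minimal illustration with NumPy, not the authors' actual pipeline: the function name, the toy impulse responses, and the white-noise stand-in for speech are all assumptions for demonstration.

```python
import numpy as np

def simulate_replay(speech, rec_rir, speaker_ir, mic_rirs):
    """Toy replay-attack chain: speech is recorded in an attacker's room
    (rec_rir), colored by the replay loudspeaker (speaker_ir), and then
    captured by a microphone array (one room impulse response per mic)."""
    recorded = np.convolve(speech, rec_rir)      # attacker's recording
    emitted = np.convolve(recorded, speaker_ir)  # loudspeaker coloration
    channels = [np.convolve(emitted, h) for h in mic_rirs]
    # zero-pad so every microphone channel has the same length
    n = max(len(c) for c in channels)
    return np.stack([np.pad(c, (0, n - len(c))) for c in channels])

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)           # 1 s stand-in for speech
rec_rir = np.r_[1.0, np.zeros(99), 0.3]       # toy recording-room RIR
speaker_ir = np.r_[1.0, 0.2]                  # toy loudspeaker response
mic_rirs = [np.r_[1.0, np.zeros(9), 0.5],     # toy per-mic RIRs
            np.r_[1.0, np.zeros(19), 0.4]]
spoofed = simulate_replay(speech, rec_rir, speaker_ir, mic_rirs)
print(spoofed.shape)  # (n_mics, n_samples) multi-channel spoofed signal
```

A genuine (non-spoofed) recording would skip the first two convolutions and apply only the microphone-array RIRs; the paper's framework additionally models measured loudspeaker directivity and noise conditions, which this sketch omits.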
Similar Papers
EchoFake: A Replay-Aware Dataset for Practical Speech Deepfake Detection
Audio and Speech Processing
Helps detect replayed fake voices in real-world phone scenarios.
Diffusion-based Surrogate Model for Time-varying Underwater Acoustic Channels
Sound
Models changing underwater sound channels to improve communication.
Room-acoustic simulations as an alternative to measurements for audio-algorithm evaluation
Audio and Speech Processing
Lets audio algorithms be evaluated with simulated rooms instead of measurements.