Backdoor Attacks Against Speech Language Models

Published: October 1, 2025 | arXiv ID: 2510.01157v1

By: Alexandrine Fortier, Thomas Thebaud, Jesús Villalba, and more

Potential Business Impact:

Makes AI that understands speech easier to trick.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) and their multimodal extensions are becoming increasingly popular. One common approach to enable multimodality is to cascade domain-specific encoders with an LLM, which means the resulting model inherits vulnerabilities from all of its components. In this work, we present the first systematic study of audio backdoor attacks against speech language models. We demonstrate the attack's effectiveness across four speech encoders and three datasets, covering four tasks: automatic speech recognition (ASR), speech emotion recognition, and gender and age prediction. The attack consistently achieves high success rates, ranging from 90.76% to 99.41%. To better understand how backdoors propagate, we conduct a component-wise analysis to identify the most vulnerable stages of the pipeline. Finally, we propose a fine-tuning-based defense that mitigates the threat of poisoned pretrained encoders.
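To make the threat model concrete, here is a minimal, purely illustrative sketch of the data-poisoning step behind such a backdoor: a short trigger waveform is mixed into a fraction of training clips and their labels are flipped to an attacker-chosen target class. This is not the paper's code; the function names, the additive-trigger design, and the poison rate are all assumptions for illustration.

```python
# Illustrative audio backdoor poisoning sketch (hypothetical helpers, not the paper's method).
# Waveforms are plain lists of float samples; labels are task labels (e.g., emotion classes).

def poison_sample(waveform, trigger, target_label, scale=0.1):
    """Overlay a scaled trigger on the start of a clip and flip its label."""
    poisoned = list(waveform)
    for i, t in enumerate(trigger):
        if i >= len(poisoned):
            break
        poisoned[i] += scale * t  # additive trigger; real attacks may use subtler patterns
    return poisoned, target_label

def poison_dataset(samples, trigger, target_label, rate=0.05):
    """Poison roughly `rate` of (waveform, label) pairs (every k-th clip, for determinism)."""
    k = max(1, int(1 / rate))
    out = []
    for idx, (wav, label) in enumerate(samples):
        if idx % k == 0:
            out.append(poison_sample(wav, trigger, target_label))
        else:
            out.append((wav, label))
    return out
```

A model trained on such a dataset learns to emit the target label whenever the trigger is present, while behaving normally on clean audio; the paper's fine-tuning defense aims to unlearn exactly this association in a poisoned pretrained encoder.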

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computation and Language