SARSteer: Safeguarding Large Audio Language Models via Safe-Ablated Refusal Steering
By: Weilin Lin, Jianze Li, Hui Xiong, and more
Potential Business Impact:
Keeps AI from saying bad things when it hears harmful audio.
Large Audio-Language Models (LALMs) are becoming essential as a powerful multimodal backbone for real-world applications. However, recent studies show that audio inputs can elicit harmful responses more easily than text, exposing new deployment risks. While safety alignment has made initial advances in LLMs and Large Vision-Language Models (LVLMs), we find that vanilla adaptation of these approaches to LALMs faces two key limitations: 1) LLM-based steering fails under audio input due to the large distributional gap between audio and text activations, and 2) prompt-based defenses induce over-refusals on benign speech queries. To address these challenges, we propose Safe-Ablated Refusal Steering (SARSteer), the first inference-time defense framework for LALMs. Specifically, SARSteer leverages text-derived refusal steering to enforce rejection without manipulating audio inputs and introduces decomposed safe-space ablation to mitigate over-refusal. Extensive experiments demonstrate that SARSteer significantly improves refusal of harmful queries while preserving benign responses, establishing a principled step toward safety alignment in LALMs.
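The abstract does not give the exact formulation, but the general recipe it describes (a text-derived refusal direction applied at inference time, with a "safe" subspace ablated from it to avoid over-refusal) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the difference-of-means direction, the PCA-based safe subspace, the choice of `k`, and the scaling factor `alpha` are all assumptions introduced here for clarity.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, benign_acts: torch.Tensor) -> torch.Tensor:
    """Text-derived refusal direction (assumed difference-of-means).

    harmful_acts, benign_acts: (num_examples, hidden_dim) hidden states
    collected at a chosen layer for harmful vs. benign *text* prompts.
    """
    return harmful_acts.mean(dim=0) - benign_acts.mean(dim=0)

def safe_ablated_direction(r: torch.Tensor, safe_acts: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Remove the component of the refusal direction that lies in a
    low-rank "safe" subspace spanned by benign activations, so steering
    is less likely to disturb responses to benign queries.
    k is a hypothetical subspace rank, not a value from the paper.
    """
    # Top-k principal directions of the benign ("safe") activations.
    _, _, v = torch.pca_lowrank(safe_acts, q=k)   # v: (hidden_dim, k)
    projection = v @ (v.T @ r)                    # part of r inside the safe subspace
    return r - projection                         # ablate it

def steer_hidden_state(h: torch.Tensor, r_safe: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Inference-time steering: nudge a hidden state toward refusal
    without touching the audio input itself."""
    unit = r_safe / r_safe.norm()
    return h + alpha * unit
```

In practice, a function like `steer_hidden_state` would be attached as a forward hook on one or more decoder layers of the LALM, so the refusal push is applied to the model's internal activations regardless of whether the query arrived as text or audio.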
Similar Papers
Automating Steering for Safe Multimodal Large Language Models
Computation and Language
Keeps AI from saying bad things when tricked.
SafeSteer: Interpretable Safety Steering with Refusal-Evasion in LLMs
Machine Learning (CS)
Makes AI say safe things without refusing.