Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations
By: Bo-Han Feng , Chien-Feng Liu , Yu-Hsuan Li Liang and more
Potential Business Impact:
Makes AI safer by understanding angry voices.
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities, and evaluate several state-of-the-art LALMs. Our results reveal substantial safety inconsistencies: different emotions elicit varying levels of unsafe responses, and the effect of intensity is non-monotonic, with medium expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed to ensure robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.
Similar Papers
Hidden in the Noise: Unveiling Backdoors in Audio LLMs Alignment through Latent Acoustic Pattern Triggers
Sound
Makes AI that hears unsafe from hidden sounds.
Synthetic Voices, Real Threats: Evaluating Large Text-to-Speech Models in Generating Harmful Audio
Sound
Makes AI voice generators say bad things.
Multilingual and Multi-Accent Jailbreaking of Audio LLMs
Sound
Makes AI understand bad audio from many languages.