Style Amnesia: Investigating Speaking Style Degradation and Mitigation in Multi-Turn Spoken Language Models
By: Yu-Xiang Lin, Cheng-Han Chiang, Hung-yi Lee
Potential Business Impact:
AI voices forget how they were told to sound after a few turns of conversation.
In this paper, we show that when spoken language models (SLMs) are instructed to speak in a specific style at the beginning of a multi-turn conversation, they cannot maintain that style after several turns of interaction; we refer to this as the style amnesia of SLMs. We focus on paralinguistic speaking styles, including emotion, accent, volume, and speaking speed. We evaluate three proprietary and two open-source SLMs and demonstrate that none of them maintains a consistent speaking style when instructed to do so. We further show that when SLMs are asked in later turns to recall the style instruction, they can repeat it accurately, yet they still fail to express it throughout the conversation. Nevertheless, explicitly asking the model to recall the instruction partially mitigates style amnesia. In addition, we examine various prompting strategies and find that SLMs struggle to follow the required style when the instruction is placed in the system message rather than in user messages, which contradicts the intended function of system prompts.
Similar Papers
Analyzing Mitigation Strategies for Catastrophic Forgetting in End-to-End Training of Spoken Language Models
Computation and Language
Keeps AI from forgetting speech skills during training.
Dual Information Speech Language Models for Emotional Conversations
Computation and Language
Lets computers understand feelings in spoken words.
VStyle: A Benchmark for Voice Style Adaptation with Spoken Instructions
Sound
Computers learn to change their voice on command.