Dual Information Speech Language Models for Emotional Conversations
By: Chun Wang, Chenyang Liu, Wenze Xu, and others
Potential Business Impact:
Lets computers understand feelings in spoken words.
Conversational systems relying on text-based large language models (LLMs) often overlook paralinguistic cues, essential for understanding emotions and intentions. Speech-language models (SLMs), which use speech as input, are emerging as a promising solution. However, SLMs built by extending frozen LLMs struggle to capture paralinguistic information and exhibit reduced context understanding. We identify entangled information and improper training strategies as key issues. To address these issues, we propose two heterogeneous adapters and suggest a weakly supervised training strategy. Our approach disentangles paralinguistic and linguistic information, enabling SLMs to interpret speech through structured representations. It also preserves contextual understanding by avoiding the generation of task-specific vectors through controlled randomness. This approach trains only the adapters on common datasets, ensuring parameter and data efficiency. Experiments demonstrate competitive performance in emotional conversation tasks, showcasing the model's ability to effectively integrate both paralinguistic and linguistic information within contextual settings.
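The abstract describes a frozen backbone with two heterogeneous adapters, each producing a separate representation stream, with only the adapters trained. The paper's actual architecture is not shown here; the following is a minimal, hypothetical sketch of that general idea in plain Python, where `DualAdapters`, the layer sizes, and the input features are all illustrative assumptions.

```python
import random

def linear(x, w, b):
    """Dense layer y = Wx + b over plain Python lists."""
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(w, b)]

class DualAdapters:
    """Hypothetical sketch: two trainable adapters on top of a frozen
    speech encoder / LLM, one for linguistic and one for paralinguistic
    information. Names and shapes are illustrative, not the paper's."""

    def __init__(self, dim_in, dim_out, seed=0):
        rng = random.Random(seed)

        def init():
            w = [[rng.uniform(-0.1, 0.1) for _ in range(dim_in)]
                 for _ in range(dim_out)]
            return w, [0.0] * dim_out

        # Two heterogeneous adapters, one per information stream.
        self.w_ling, self.b_ling = init()
        self.w_para, self.b_para = init()

    def forward(self, speech_feats):
        # In the real system a frozen encoder would produce speech_feats;
        # here it is simply an input vector. Only the adapter weights
        # above would receive gradient updates.
        ling = linear(speech_feats, self.w_ling, self.b_ling)
        para = linear(speech_feats, self.w_para, self.b_para)
        return ling, para
```

The point of the sketch is the structure: the backbone stays frozen, and disentanglement is encouraged by routing the same input through two separate, independently parameterized branches rather than one shared projection.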
Similar Papers
Incorporating Contextual Paralinguistic Understanding in Large Speech-Language Models
Computation and Language
Teaches computers to understand feelings in voices.
Evaluating Emotion Recognition in Spoken Language Models on Emotionally Incongruent Speech
Computation and Language
Computers hear emotions better, not just words.