Emotion Omni: Enabling Empathetic Speech Response Generation through Large Language Models
By: Haoyu Wang, Guangyan Zhang, Jiale Chen, and more
Potential Business Impact:
Makes AI assistants understand and reply with feelings.
With the development of speech large language models (speech LLMs), users can now interact directly with assistants via speech. However, most existing models simply convert the response content into speech without fully understanding the rich emotional and paralinguistic cues embedded in the user's query. In many cases, the same sentence can carry different meanings depending on its emotional expression, and emotional understanding is essential for improving the user experience in human-machine interaction. Currently, most speech LLMs with empathetic capabilities are trained on massive datasets, an approach that requires vast amounts of data and significant computational resources. A key challenge, therefore, is how to build a speech LLM that generates empathetic responses with limited data and without large-scale training. To address this challenge, we propose Emotion Omni, a novel model architecture designed to understand the emotional content of user speech input and generate empathetic speech responses. Additionally, we develop a data generation pipeline based on an open-source TTS framework to construct a 200k emotional dialogue dataset, which supports the construction of an empathetic speech assistant. Demos are available at https://w311411.github.io/omni_demo/.
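The abstract describes the data generation pipeline only at a high level: pair response text with target emotion labels and render it to speech with an open-source TTS framework. The Python sketch below illustrates what such a pipeline could look like; it is not the authors' implementation. The `DialogueTurn`, `EmotionalTTS`, and `build_dataset` names are hypothetical, and the TTS backend is a placeholder that emits silence so the sketch runs end to end.

```python
# Illustrative sketch of an emotional dialogue data-generation pipeline.
# The TTS wrapper below is a hypothetical stand-in for whatever open-source
# TTS framework the paper uses; it is NOT the authors' code.

import io
import json
import wave
from dataclasses import dataclass
from pathlib import Path


@dataclass
class DialogueTurn:
    text: str      # response text, e.g. produced by an LLM
    emotion: str   # target emotion label, e.g. "happy", "sad", "neutral"


class EmotionalTTS:
    """Placeholder for an emotion-controllable TTS backend."""

    def synthesize(self, text: str, emotion: str) -> bytes:
        # Stand-in: emit 0.5 s of 16 kHz mono silence so the sketch runs;
        # a real backend would condition synthesis on `text` and `emotion`.
        buf = io.BytesIO()
        with wave.open(buf, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)       # 16-bit samples
            w.setframerate(16000)
            w.writeframes(b"\x00\x00" * 8000)
        return buf.getvalue()


def build_dataset(turns: list[DialogueTurn], out_dir: Path) -> None:
    """Render each (text, emotion) pair to audio and write a manifest."""
    tts = EmotionalTTS()
    out_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for i, turn in enumerate(turns):
        wav_path = out_dir / f"utt_{i:06d}.wav"
        wav_path.write_bytes(tts.synthesize(turn.text, turn.emotion))
        manifest.append(
            {"audio": wav_path.name, "text": turn.text, "emotion": turn.emotion}
        )
    (out_dir / "manifest.jsonl").write_text(
        "\n".join(json.dumps(m) for m in manifest)
    )


if __name__ == "__main__":
    build_dataset(
        [DialogueTurn("I'm so sorry to hear that.", "sad")],
        Path("emotion_dialogue_data"),
    )
```

Scaling a loop like this over LLM-generated dialogues with assigned emotion labels is one plausible way to reach a 200k-example corpus without human recording, which matches the paper's goal of avoiding large-scale data collection.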
Similar Papers
Empathy Omni: Enabling Empathetic Speech Response Generation through Large Language Models
Computation and Language
Makes AI assistants understand and respond with feelings.
Heartificial Intelligence: Exploring Empathy in Language Models
Computation and Language
Computers understand feelings better than people.
LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis
Computation and Language
Lets computers talk and understand you instantly.