Emotion Omni: Enabling Empathetic Speech Response Generation through Large Language Models

Published: August 26, 2025 | arXiv ID: 2508.18655v1

By: Haoyu Wang, Guangyan Zhang, Jiale Chen, and more

Potential Business Impact:

Enables AI voice assistants to understand users' emotions and reply empathetically.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With the development of speech large language models (speech LLMs), users can now interact directly with assistants via speech. However, most existing models simply convert the response content into speech without fully understanding the rich emotional and paralinguistic cues embedded in the user's query. In many cases, the same sentence can have different meanings depending on the emotional expression. Furthermore, emotional understanding is essential for improving user experience in human-machine interaction. Currently, most speech LLMs with empathetic capabilities are trained on massive datasets. This approach requires vast amounts of data and significant computational resources. Therefore, a key challenge lies in how to develop a speech LLM capable of generating empathetic responses with limited data and without the need for large-scale training. To address this challenge, we propose Emotion Omni, a novel model architecture designed to understand the emotional content of user speech input and generate empathetic speech responses. Additionally, we developed a data generation pipeline based on an open-source TTS framework to construct a 200k emotional dialogue dataset, which supports the construction of an empathetic speech assistant. The demos are available at https://w311411.github.io/omni_demo/
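The abstract describes a data generation pipeline that uses an open-source TTS framework to build a 200k emotional dialogue dataset. The paper does not specify the framework or its API, so the sketch below is a hypothetical illustration of such a pipeline: the `synthesize` function is a stand-in for the actual TTS call, and the field names and emotion labels are assumptions, not the authors' schema.

```python
from dataclasses import dataclass

@dataclass
class DialogueSample:
    """One dataset record: an emotional query paired with an empathetic response."""
    query_text: str
    query_emotion: str
    response_text: str
    response_emotion: str
    audio_path: str  # path to the synthesized response speech


def synthesize(text: str, emotion: str, out_path: str) -> str:
    # Placeholder for the open-source TTS call (hypothetical; the paper
    # does not name the framework). A real pipeline would render `text`
    # in the requested emotional style and write audio to `out_path`.
    return out_path


def build_dataset(pairs):
    """Turn (query, query_emotion, response, response_emotion) tuples
    into dataset records with synthesized response audio."""
    samples = []
    for i, (q_text, q_emo, r_text, r_emo) in enumerate(pairs):
        audio = synthesize(r_text, r_emo, f"resp_{i:06d}.wav")
        samples.append(DialogueSample(q_text, q_emo, r_text, r_emo, audio))
    return samples


pairs = [
    ("I lost my keys again!", "frustrated",
     "Oh no, that's so annoying. Let's retrace your steps together.", "sympathetic"),
]
dataset = build_dataset(pairs)
```

At scale, a pipeline like this would loop over many generated dialogue pairs and emotion labels to reach the dataset size reported in the abstract.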

Page Count
5 pages

Category
Computer Science:
Computation and Language