InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue
By: Wenwen Tong, Hewei Guo, Dongchuan Ran, and more
Potential Business Impact:
Lets computers watch, listen, and talk about videos over multiple turns.
We introduce InteractiveOmni, a unified and open-source omni-modal large language model for audio-visual multi-turn interaction, ranging from 4B to 8B parameters and designed to lead the field of lightweight models by offering comprehensive omni-modal understanding and speech generation capabilities. To achieve this, we integrate a vision encoder, an audio encoder, a large language model, and a speech decoder into a unified model for both understanding and generation tasks. We design a multi-stage training strategy to ensure robust cross-modal capabilities: pre-training for omni-modal understanding, followed by post-training with speech conversation and audio-visual interaction. To enable human-like long-term conversational ability, we meticulously curate a multi-turn training dataset that strengthens the model's handling of complex, multi-turn interactions. To evaluate multi-turn memory and speech interaction capabilities, we construct a multi-modal multi-turn memory benchmark and a multi-turn speech interaction benchmark. Experiments demonstrate that InteractiveOmni significantly outperforms leading open-source models and provides a more intelligent multi-turn audio-visual experience, particularly through its long-term memory capabilities. Notably, InteractiveOmni-4B is comparable to much larger models such as Qwen2.5-Omni-7B on general benchmarks, and it retains 97% of InteractiveOmni-8B's performance at only 50% of its size. Achieving state-of-the-art results against similarly sized models across image, audio, and video understanding as well as speech generation tasks, InteractiveOmni is an accessible, open-source foundation for next-generation intelligent interactive systems.
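To make the unified architecture concrete, here is a minimal PyTorch sketch of the design the abstract describes: a vision encoder, an audio encoder, an LLM backbone, and a speech decoder fused into one module, where the same hidden states drive both understanding and speech generation. All class and layer names are hypothetical placeholders (the paper does not publish its internal API here), and the linear layers stand in for the real vision tower, audio tower, 4B-8B transformer, and speech decoder purely to keep the example runnable.

```python
import torch
import torch.nn as nn


class InteractiveOmniSketch(nn.Module):
    """Toy stand-in for a unified omni-modal model: vision encoder,
    audio encoder, LLM backbone, and speech decoder in one module.
    Names and fusion scheme are illustrative assumptions, not the
    authors' implementation."""

    def __init__(self, dim: int = 64):
        super().__init__()
        # Placeholder towers; real counterparts would be a ViT-style
        # vision encoder, a speech-recognition-style audio encoder,
        # a 4B-8B transformer LLM, and a neural speech decoder.
        self.vision_encoder = nn.Linear(dim, dim)
        self.audio_encoder = nn.Linear(dim, dim)
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.speech_decoder = nn.Linear(dim, dim)

    def forward(self, text_embeds, image_feats=None, audio_feats=None):
        # Project each modality into the backbone's embedding space and
        # concatenate along the sequence dimension (one plausible fusion
        # scheme; the exact interleaving is not specified in the abstract).
        parts = [text_embeds]
        if image_feats is not None:
            parts.append(self.vision_encoder(image_feats))
        if audio_feats is not None:
            parts.append(self.audio_encoder(audio_feats))
        fused = torch.cat(parts, dim=1)

        hidden = self.llm(fused)
        # The shared hidden states feed the speech decoder, so a single
        # model covers both understanding and speech generation.
        speech = self.speech_decoder(hidden)
        return hidden, speech


# Usage: one "turn" carrying text, image, and audio features together.
model = InteractiveOmniSketch()
text = torch.randn(1, 8, 64)
image = torch.randn(1, 16, 64)
audio = torch.randn(1, 12, 64)
hidden, speech = model(text, image, audio)
print(hidden.shape, speech.shape)  # both torch.Size([1, 36, 64])
```

In a multi-turn setting, earlier turns' fused sequences would be kept in the context fed to the backbone, which is what the paper's multi-turn memory benchmark is designed to stress.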
Similar Papers
Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model
Artificial Intelligence
Lets computers understand speech, vision, and text together.
OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM
CV and Pattern Recognition
Lets computers understand sights and sounds together.
LongCat-Flash-Omni Technical Report
Multimedia
Lets computers understand sound, sights, and words together.