Agent-Omni: Test-Time Multimodal Reasoning via Model Coordination for Understanding Anything
By: Huawei Lin, Yunzhi Shi, Tong Geng, and more
Potential Business Impact:
Lets computers understand text, pictures, sound, and video together.
Multimodal large language models (MLLMs) have shown strong capabilities but remain limited to fixed modality pairs and require costly fine-tuning with large aligned datasets. Building fully omni-capable models that integrate text, images, audio, and video remains impractical, and such models still lack robust reasoning support. In this paper, we propose an Agent-Omni framework that coordinates existing foundation models through a master-agent system, enabling flexible multimodal reasoning without retraining. The master agent interprets user intent, delegates subtasks to modality-specific agents, and integrates their outputs into coherent responses. Extensive experiments across text, image, audio, video, and omni benchmarks show that Agent-Omni consistently achieves state-of-the-art performance, particularly on tasks requiring complex cross-modal reasoning. Its agent-based design enables seamless integration of specialized foundation models, ensuring adaptability to diverse inputs while maintaining transparency and interpretability. In addition, the framework is modular and easily extensible, allowing future improvements as stronger models become available.
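To make the coordination idea concrete, the sketch below shows how a master agent might interpret a query, delegate one subtask per available modality, and fuse the partial answers. This is a minimal illustrative sketch only, not the paper's implementation: the class names (`MasterAgent`, `Task`), the planner and fusion logic, and the stub modality agents are all assumptions; in the actual framework each agent would wrap an existing foundation model and the planning/integration steps would themselves be LLM-driven.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch only: names and interfaces are assumptions, not the
# paper's APIs. Each "agent" stands in for an existing foundation model
# specialized to one modality (text, image, audio, video).

@dataclass
class Task:
    modality: str     # e.g. "text", "image", "audio", "video"
    payload: object   # raw input for that modality
    instruction: str  # sub-question the master agent wants answered

class MasterAgent:
    """Interprets user intent, delegates subtasks, and fuses the answers."""

    def __init__(self, agents: Dict[str, Callable[[Task], str]]):
        # Map each modality to a callable wrapping a pretrained model.
        self.agents = agents

    def plan(self, query: str, inputs: Dict[str, object]) -> List[Task]:
        # Toy planner: one subtask per provided modality. A real system would
        # use an LLM to decompose the query into modality-specific questions.
        return [Task(m, payload, f"Answer '{query}' using the {m} input.")
                for m, payload in inputs.items()]

    def integrate(self, query: str, partials: Dict[str, str]) -> str:
        # Toy fusion: concatenate per-modality findings. A real system would
        # ask an LLM to reconcile them into one coherent response.
        evidence = "; ".join(f"{m}: {ans}" for m, ans in partials.items())
        return f"Query: {query}\nFused answer from [{evidence}]"

    def run(self, query: str, inputs: Dict[str, object]) -> str:
        tasks = self.plan(query, inputs)
        partials = {t.modality: self.agents[t.modality](t) for t in tasks}
        return self.integrate(query, partials)

# Stub modality agents; in practice these would call vision/audio/video models.
def text_agent(task: Task) -> str:
    return f"text model read {len(str(task.payload))} characters"

def image_agent(task: Task) -> str:
    return "vision model described the image"

if __name__ == "__main__":
    master = MasterAgent({"text": text_agent, "image": image_agent})
    print(master.run("What is happening in the scene?",
                     {"text": "A caption.", "image": "<image bytes>"}))
```

Because coordination happens only at test time, swapping in a stronger modality-specific model amounts to replacing one entry in the agent registry, which is what makes the design modular and extensible.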
Similar Papers
Omni-AutoThink: Adaptive Multimodal Reasoning via Reinforcement Learning
Artificial Intelligence
Helps AI think better for different tasks.