Ming-Omni: A Unified Multimodal Model for Perception and Generation
By: Inclusion AI, Biao Gong, Cheng Zou, and more
Potential Business Impact:
A single model that understands and generates pictures, sound, and words.
We propose Ming-Omni, a unified multimodal model capable of processing images, text, audio, and video, while demonstrating strong proficiency in both speech and image generation. Ming-Omni employs dedicated encoders to extract tokens from different modalities, which are then processed by Ling, an MoE architecture equipped with newly proposed modality-specific routers. This design enables a single model to efficiently process and fuse multimodal inputs within a unified framework, thereby facilitating diverse tasks without requiring separate models, task-specific fine-tuning, or structural redesign. Importantly, Ming-Omni extends beyond conventional multimodal models by supporting audio and image generation. This is achieved through the integration of an advanced audio decoder for natural-sounding speech and Ming-Lite-Uni for high-quality image generation, which also allow the model to engage in context-aware chatting, perform text-to-speech conversion, and conduct versatile image editing. Our experimental results show that Ming-Omni offers a powerful solution for unified perception and generation across all modalities. Notably, our proposed Ming-Omni is the first open-source model we are aware of to match GPT-4o in modality support, and we release all code and model weights to encourage further research and development in the community.
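To make the "modality-specific routers" idea concrete, the sketch below shows a minimal MoE layer in which all modalities share one pool of expert FFNs but each modality has its own gating network that selects the top-k experts for its tokens. This is an illustrative assumption based only on the abstract, not the authors' Ling implementation; the class and parameter names (ModalityAwareMoE, num_experts, top_k) are hypothetical.

```python
# Minimal sketch of an MoE layer with per-modality routers, assuming a shared
# expert pool and top-k gating. Names and sizes are illustrative, not taken
# from the Ming-Omni codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityAwareMoE(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 8, top_k: int = 2,
                 modalities=("text", "image", "audio", "video")):
        super().__init__()
        self.top_k = top_k
        # Shared expert FFNs, reused by every modality.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )
        # One router (gating network) per modality, so each modality learns
        # its own distribution over the shared experts.
        self.routers = nn.ModuleDict(
            {m: nn.Linear(d_model, num_experts) for m in modalities}
        )

    def forward(self, tokens: torch.Tensor, modality: str) -> torch.Tensor:
        # tokens: (batch, seq_len, d_model); `modality` picks the router.
        logits = self.routers[modality](tokens)          # (B, S, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # top-k expert scores
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(tokens)
        # Dense reference implementation: weight each selected expert's output.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)  # (B, S, 1)
                out = out + mask * weights[..., k:k + 1] * expert(tokens)
        return out


# Example: text and image tokens share the expert pool but use different gates.
layer = ModalityAwareMoE(d_model=64)
text_tokens = torch.randn(2, 16, 64)
image_tokens = torch.randn(2, 256, 64)
print(layer(text_tokens, "text").shape, layer(image_tokens, "image").shape)
```

The point of the design, as described in the abstract, is that routing is conditioned on modality while the backbone itself stays unified, so one model can fuse image, text, audio, and video tokens without separate task-specific networks.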
Similar Papers
Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation
CV and Pattern Recognition
AI understands and creates pictures, speech, and text better.
Ming-Lite-Uni: Advancements in Unified Architecture for Natural Multimodal Interaction
CV and Pattern Recognition
Lets computers create and edit pictures from text descriptions.
MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech
Sound
A computer that talks in your voice and understands many kinds of input.