VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo
By: Qianli Ma, Yaowei Zheng, Zhelun Shi, and more
Potential Business Impact:
Teaches computers to understand many kinds of information (text, images, audio) faster and at larger scale.
Recent advances in large language models (LLMs) have driven impressive progress in omni-modal understanding and generation. However, training omni-modal LLMs remains a significant challenge due to the heterogeneous model architectures required to process diverse modalities, which necessitate sophisticated system design for efficient large-scale training. Existing frameworks typically entangle model definition with parallel logic, limiting scalability and incurring substantial engineering overhead for end-to-end omni-modal training. We present VeOmni, a modular and efficient training framework that accelerates the development of omni-modal LLMs. VeOmni introduces model-centric distributed recipes that decouple communication from computation, enabling efficient 3D parallelism for omni-modal LLMs. VeOmni also features a flexible configuration interface supporting seamless integration of new modalities with minimal code changes. Using VeOmni, an omni-modal mixture-of-experts (MoE) model with 30B parameters can be trained at over 2,800 tokens/sec/GPU and scaled to 160K context lengths via 3D parallelism on 128 GPUs, showcasing its efficiency and scalability for training large omni-modal LLMs.
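The abstract's key design point is keeping the model definition free of parallelism logic and applying a distributed "recipe" to it from the outside. The sketch below illustrates that general principle only; it does not show VeOmni's actual API (which is not given in the abstract) and instead uses plain PyTorch FSDP as a stand-in for one such recipe. The class and function names (`OmniModalBlock`, `apply_fsdp_recipe`) are hypothetical.

```python
# Minimal sketch of "model-centric" decoupling, NOT VeOmni's real interface:
# the model is pure computation; sharding/communication is applied afterwards.
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


class OmniModalBlock(nn.Module):
    """A parallelism-agnostic block: no communication code inside the model."""

    def __init__(self, hidden_size: int = 1024):
        super().__init__()
        self.attn_proj = nn.Linear(hidden_size, hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 4 * hidden_size),
            nn.GELU(),
            nn.Linear(4 * hidden_size, hidden_size),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.attn_proj(x))


def apply_fsdp_recipe(model: nn.Module) -> nn.Module:
    """Hypothetical 'distributed recipe': shard parameters across ranks
    without modifying the model's own code."""
    return FSDP(model)


if __name__ == "__main__":
    # Assumes launch via `torchrun --nproc_per_node=<N> this_script.py`.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = OmniModalBlock().cuda()    # model knows nothing about parallelism
    model = apply_fsdp_recipe(model)   # communication added from the outside

    x = torch.randn(2, 16, 1024, device="cuda")
    loss = model(x).mean()
    loss.backward()                    # FSDP handles gradient communication
    dist.destroy_process_group()
```

Because the communication strategy is selected outside the model, swapping in a different recipe (e.g., adding tensor or sequence parallelism) does not require touching the model definition, which is the scalability argument the abstract makes.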
Similar Papers
Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data
Computation and Language
A model that understands and generates text, images, and sound.