Score: 3

VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo

Published: August 4, 2025 | arXiv ID: 2508.02317v1

By: Qianli Ma, Yaowei Zheng, Zhelun Shi, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Speeds up and simplifies training of AI models that understand and generate many kinds of data (text, images, audio) at large scale.

Recent advances in large language models (LLMs) have driven impressive progress in omni-modal understanding and generation. However, training omni-modal LLMs remains a significant challenge due to the heterogeneous model architectures required to process diverse modalities, necessitating sophisticated system design for efficient large-scale training. Existing frameworks typically entangle model definition with parallel logic, incurring limited scalability and substantial engineering overhead for end-to-end omni-modal training. We present VeOmni, a modular and efficient training framework that accelerates the development of omni-modal LLMs. VeOmni introduces model-centric distributed recipes that decouple communication from computation, enabling efficient 3D parallelism for omni-modal LLMs. VeOmni also features a flexible configuration interface that supports seamless integration of new modalities with minimal code change. Using VeOmni, an omni-modal mixture-of-experts (MoE) model with 30B parameters can be trained at over 2,800 tokens/sec/GPU and scaled to 160K context lengths via 3D parallelism on 128 GPUs, showcasing its superior efficiency and scalability for training large omni-modal LLMs.
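The abstract's central claim is the separation of model definition from parallelization. The sketch below illustrates that idea only; it is not VeOmni's real API, and every name in it (OmniModel, ParallelRecipe, the strategy strings) is a hypothetical placeholder showing how model code can stay free of communication logic while a separate "recipe" decides how each submodule is parallelized.

```python
# Hypothetical sketch (not VeOmni's actual API): model code holds only
# computation; the parallelization plan lives in a separate recipe object.
from dataclasses import dataclass, field
from typing import Dict

import torch
import torch.nn as nn


class OmniModel(nn.Module):
    """Omni-modal model defined purely as computation, no communication."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Per-modality encoders projecting raw inputs into the backbone space.
        self.encoders = nn.ModuleDict({
            "text": nn.Embedding(1000, hidden),
            "image": nn.Linear(256, hidden),
        })
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, modality: str, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.encoders[modality](x))


@dataclass
class ParallelRecipe:
    """Model-centric recipe: maps submodule names to parallel strategies.

    In a real framework each strategy would install the corresponding
    communication (FSDP sharding, tensor/sequence parallelism, ...);
    here we only record the plan to show the separation of concerns.
    """
    plan: Dict[str, str] = field(default_factory=dict)

    def apply(self, model: nn.Module) -> Dict[str, str]:
        # Submodules without an explicit entry fall back to replication.
        return {name: self.plan.get(name, "replicate")
                for name, _ in model.named_children()}


if __name__ == "__main__":
    model = OmniModel()
    # Adding a new modality touches only the model definition...
    model.encoders["audio"] = nn.Linear(128, 64)
    # ...while scaling out is configured entirely in the recipe.
    recipe = ParallelRecipe(plan={"backbone": "fsdp+sequence_parallel",
                                  "encoders": "fsdp"})
    print(recipe.apply(model))  # {'encoders': 'fsdp', 'backbone': 'fsdp+sequence_parallel'}
    out = model("text", torch.randint(0, 1000, (2, 8)))
    print(out.shape)            # torch.Size([2, 8, 64])
```

Under this split, supporting a new modality means editing only the model's encoder table, while moving from 8 to 128 GPUs means editing only the recipe, which is the scalability argument the paper makes for its 3D-parallel training of a 30B MoE model.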

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
23 pages

Category
Computer Science:
Computation and Language