Scaling Laws for Native Multimodal Models
By: Mustafa Shukor, Enrico Fini, Victor Guilherme Turrisi da Costa, and more
Potential Business Impact:
Makes AI understand pictures and words better, faster.
Building general-purpose models that can effectively perceive the world through multimodal signals has been a long-standing goal. Current approaches integrate separately pre-trained components, for example connecting vision encoders to LLMs and then continuing multimodal training. While such approaches exhibit remarkable sample efficiency, it remains an open question whether late-fusion architectures of this kind are inherently superior. In this work, we revisit the architectural design of native multimodal models (NMMs), i.e., models trained from the ground up on all modalities, and conduct an extensive scaling-laws study spanning 457 trained models with different architectures and training mixtures. Our investigation reveals no inherent advantage of late-fusion architectures over early-fusion ones, which do not rely on image encoders or tokenizers. On the contrary, early fusion exhibits stronger performance at lower parameter counts, is more efficient to train, and is easier to deploy. Motivated by the strong performance of early-fusion architectures, we show that incorporating Mixture of Experts (MoEs) allows models to learn modality-specific weights, significantly benefiting performance.
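To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch: early fusion embeds raw image patches with a single linear projection (no separate vision encoder or tokenizer) and concatenates them with text token embeddings into one sequence for a single transformer, and a toy mixture-of-experts feed-forward layer routes tokens to experts so that modality-specific weights can emerge. The class names, dimensions, and routing scheme below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (PyTorch) of early-fusion input construction and a toy MoE layer.
# All module names, shapes, and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn

class EarlyFusionInput(nn.Module):
    """Early fusion: no pretrained image encoder. Raw image patches are linearly
    embedded and concatenated with text token embeddings, and the combined
    sequence would be fed to a single transformer."""
    def __init__(self, vocab_size=32000, d_model=512, patch_size=16, channels=3):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # A single linear projection of flattened patches replaces a vision tower.
        self.patch_embed = nn.Linear(channels * patch_size * patch_size, d_model)
        self.patch_size = patch_size

    def forward(self, text_ids, image):
        # image: (B, C, H, W) -> non-overlapping patches -> (B, N_patches, C*p*p)
        B, C, H, W = image.shape
        p = self.patch_size
        patches = image.unfold(2, p, p).unfold(3, p, p)          # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        img_tokens = self.patch_embed(patches)                   # (B, N_patches, d)
        txt_tokens = self.text_embed(text_ids)                   # (B, T, d)
        return torch.cat([img_tokens, txt_tokens], dim=1)        # one mixed sequence


class ModalityMoE(nn.Module):
    """Toy mixture-of-experts FFN: a learned router assigns each token to experts,
    which lets modality-specific weights emerge, as the abstract suggests."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=4, top_k=1):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                        # x: (B, S, d)
        gate = torch.softmax(self.router(x), dim=-1)             # (B, S, E)
        weights, idx = gate.topk(self.top_k, dim=-1)             # top-k routing per token
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)          # tokens sent to expert e
                out = out + mask * weights[..., k : k + 1] * expert(x)
        return out


if __name__ == "__main__":
    fuse = EarlyFusionInput()
    moe = ModalityMoE()
    seq = fuse(torch.randint(0, 32000, (2, 8)), torch.randn(2, 3, 64, 64))
    print(seq.shape, moe(seq).shape)   # both (2, 16 image tokens + 8 text tokens, 512)
```

A late-fusion variant would instead run the image through a separately pre-trained vision encoder before projecting into the LLM's embedding space; the early-fusion path above is what lets the whole model be trained from scratch on all modalities.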
Similar Papers
NaViL: Rethinking Scaling Properties of Native Multimodal Large Language Models under Data Constraints
CV and Pattern Recognition
Makes AI understand pictures and words together better.
HaploVL: A Single-Transformer Baseline for Multi-Modal Understanding
Computation and Language
Makes AI understand pictures and words together better.
Group then Scale: Dynamic Mixture-of-Experts Multilingual Language Model
Computation and Language
Helps computers learn many languages better.