Scaling Laws for Native Multimodal Models

Published: April 10, 2025 | arXiv ID: 2504.07951v4

By: Mustafa Shukor, Enrico Fini, Victor Guilherme Turrisi da Costa, and more

Potential Business Impact:

Enables a single model to understand images and text together, while being cheaper to train and simpler to deploy than pipelines built from separate pre-trained components.

Building general-purpose models that can effectively perceive the world through multimodal signals has been a long-standing goal. Current approaches involve integrating separately pre-trained components, such as connecting vision encoders to LLMs and continuing multimodal training. While such approaches exhibit remarkable sample efficiency, it remains an open question whether such late-fusion architectures are inherently superior. In this work, we revisit the architectural design of native multimodal models (NMMs), those trained from the ground up on all modalities, and conduct an extensive scaling-laws study spanning 457 trained models with different architectures and training mixtures. Our investigation reveals no inherent advantage to late-fusion architectures over early-fusion ones, which do not rely on image encoders or tokenizers. On the contrary, early fusion exhibits stronger performance at lower parameter counts, is more efficient to train, and is easier to deploy. Motivated by the strong performance of the early-fusion architectures, we show that incorporating Mixture of Experts (MoEs) allows models to learn modality-specific weights, significantly benefiting performance.
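The architectural contrast described in the abstract can be made concrete with a small sketch: in an early-fusion model, raw image patches and text tokens are embedded into one shared sequence and processed by a single transformer, with no separate vision encoder, and a Mixture-of-Experts feed-forward layer whose router is free to learn modality-specific experts. The following is a minimal, illustrative sketch, not the authors' implementation; all module names, dimensions, the top-1 routing rule, and the lack of load balancing are simplifying assumptions made here.

```python
# Minimal sketch (illustrative only) of an early-fusion multimodal block:
# image patches and text tokens share one transformer, and an MoE
# feed-forward layer can specialize experts per modality.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Token-level top-1 MoE feed-forward layer (simplified, no load balancing)."""

    def __init__(self, dim: int, hidden: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        gates = F.softmax(self.router(x), dim=-1)          # (B, T, num_experts)
        top_gate, top_idx = gates.max(dim=-1)              # (B, T)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e                            # tokens routed to expert e
            if mask.any():
                out[mask] = top_gate[mask].unsqueeze(-1) * expert(x[mask])
        return out


class EarlyFusionBlock(nn.Module):
    """One transformer block shared by image-patch and text tokens."""

    def __init__(self, dim: int = 256, heads: int = 4, num_experts: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.moe = MoEFeedForward(dim, hidden=4 * dim, num_experts=num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.moe(self.norm2(x))


class EarlyFusionNMM(nn.Module):
    """Embeds raw image patches and text ids into one sequence; no vision encoder."""

    def __init__(self, vocab: int = 1000, patch_dim: int = 3 * 16 * 16, dim: int = 256):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, dim)
        self.patch_embed = nn.Linear(patch_dim, dim)  # linear patch projection only
        self.blocks = nn.Sequential(*[EarlyFusionBlock(dim) for _ in range(2)])

    def forward(self, patches: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
        tokens = torch.cat([self.patch_embed(patches), self.text_embed(text_ids)], dim=1)
        return self.blocks(tokens)


if __name__ == "__main__":
    model = EarlyFusionNMM()
    patches = torch.randn(2, 49, 3 * 16 * 16)      # 2 images, 49 flattened 16x16 patches
    text_ids = torch.randint(0, 1000, (2, 32))     # 2 captions, 32 tokens each
    print(model(patches, text_ids).shape)          # torch.Size([2, 81, 256])
```

A late-fusion baseline would instead run the patches through a separately pre-trained vision encoder before handing features to the language model; the sketch above highlights what the paper means by training "from the ground up on all modalities" without that component.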

Page Count
22 pages

Category
Computer Science:
Computer Vision and Pattern Recognition