AMoE: Agglomerative Mixture-of-Experts Vision Foundation Model
By: Sofian Chaybouti, Sanath Narayan, Yasser Dahou, and more
Vision foundation models trained via multi-teacher distillation offer a promising path toward unified visual representations, yet the learning dynamics and data efficiency of such approaches remain underexplored. In this paper, we systematically study multi-teacher distillation for vision foundation models and identify key factors that enable training at lower computational cost. We introduce Agglomerative Mixture-of-Experts Vision Foundation Models (AMoE), which distill knowledge from SigLIP2 and DINOv3 simultaneously into a Mixture-of-Experts student. We show that (1) our Asymmetric Relation-Knowledge Distillation loss preserves the geometric properties of each teacher while enabling effective knowledge transfer, (2) token-balanced batching, which packs varying-resolution images into sequences with uniform token budgets, stabilizes representation learning across resolutions without sacrificing performance, and (3) hierarchical clustering and sampling of training data, techniques typically reserved for self-supervised learning, substantially improve sample efficiency over random sampling for multi-teacher distillation. By combining these findings, we curate OpenLVD200M, a 200M-image corpus that demonstrates superior efficiency for multi-teacher distillation, and instantiate our approach as the Mixture-of-Experts student AMoE. We release OpenLVD200M and the distilled models.
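To make two of the abstract's mechanisms concrete, the sketches below illustrate (a) a relation-preserving distillation term and (b) token-balanced packing of variable-resolution images. Both are minimal PyTorch/Python sketches under stated assumptions; the paper's exact Asymmetric Relation-Knowledge Distillation loss and packing scheme are not specified in the abstract, and all function names (relation_kd_loss, multi_teacher_loss, pack_token_balanced) and parameters here are hypothetical.

```python
import torch
import torch.nn.functional as F

def relation_kd_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """Illustrative relation-preserving distillation term (not the paper's exact loss).

    Matches the pairwise cosine-similarity structure (Gram matrix) of the
    student's features to that of a single teacher, so each teacher's
    geometry is preserved independently. Expected shapes: (batch, dim).
    """
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    rel_s = s @ s.T  # student pairwise similarities
    rel_t = t @ t.T  # teacher pairwise similarities
    return F.mse_loss(rel_s, rel_t)

def multi_teacher_loss(student_feats: dict, teacher_feats: dict) -> torch.Tensor:
    """Sum relation-KD terms over teachers (e.g. SigLIP2 and DINOv3),
    each matched through its own student projection head, so the two
    teacher embedding spaces are never forced to coincide."""
    return sum(
        relation_kd_loss(student_feats[name], feats)
        for name, feats in teacher_feats.items()
    )
```

For token-balanced batching, one simple realization is greedy first-fit packing of images into sequences whose total patch-token count stays under a fixed budget; the patch size and budget below are placeholder values.

```python
def pack_token_balanced(image_sizes, patch: int = 16, budget: int = 4096):
    """Greedy first-fit packing sketch for variable-resolution images.

    image_sizes: list of (H, W) pixel sizes.
    Returns a list of batches, each a list of image indices whose total
    patch-token count does not exceed `budget`.
    """
    tokens = [(h // patch) * (w // patch) for h, w in image_sizes]
    batches, loads = [], []
    # Place larger images first so small ones fill the remaining slack.
    for idx in sorted(range(len(tokens)), key=lambda i: -tokens[i]):
        for b, load in enumerate(loads):
            if load + tokens[idx] <= budget:
                batches[b].append(idx)
                loads[b] += tokens[idx]
                break
        else:
            batches.append([idx])
            loads.append(tokens[idx])
    return batches
```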