AsyMoE: Leveraging Modal Asymmetry for Enhanced Expert Specialization in Large Vision-Language Models
By: Heng Zhang, Haichuan Hu, Yaomin Shen, and more
Potential Business Impact:
Helps computers understand pictures and words better.
Large Vision-Language Models (LVLMs) have demonstrated impressive performance on multimodal tasks through scaled architectures and extensive training. However, existing Mixture of Experts (MoE) approaches face challenges due to the asymmetry between visual and linguistic processing. Visual information is spatially complete, while language requires maintaining sequential context. As a result, MoE models struggle to balance modality-specific features and cross-modal interactions. Through systematic analysis, we observe that language experts in deeper layers progressively lose contextual grounding and rely more on parametric knowledge rather than utilizing the provided visual and linguistic information. To address this, we propose AsyMoE, a novel architecture that models this asymmetry using three specialized expert groups. We design intra-modality experts for modality-specific processing, hyperbolic inter-modality experts for hierarchical cross-modal interactions, and evidence-priority language experts to suppress parametric biases and maintain contextual grounding. Extensive experiments demonstrate that AsyMoE achieves 26.58% and 15.45% accuracy improvements over vanilla MoE and modality-specific MoE respectively, with 25.45% fewer activated parameters than dense models.
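To make the three expert groups concrete, below is a minimal sketch of how such a layer could route visual and text tokens to group-specific experts. Everything here (class names, dimensions, top-1 routing, the per-modality routers) is an illustrative assumption rather than the paper's actual implementation; in particular, the hyperbolic geometry of the inter-modality experts and the evidence-priority mechanism are simplified to plain Euclidean top-1 routing for brevity.

```python
# Hypothetical sketch of an AsyMoE-style layer with three expert groups.
# Names, dimensions, and the routing scheme are assumptions for illustration;
# the paper's formulation (hyperbolic routing, evidence-priority experts) is simplified.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FFNExpert(nn.Module):
    """A standard feed-forward expert."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
        )

    def forward(self, x):
        return self.net(x)


class AsyMoELayerSketch(nn.Module):
    """Routes each token to one expert drawn from three groups:
    intra-modality, inter-modality, and evidence-priority language experts."""
    def __init__(self, d_model: int = 512, d_hidden: int = 2048, n_per_group: int = 2):
        super().__init__()
        self.intra_vision = nn.ModuleList([FFNExpert(d_model, d_hidden) for _ in range(n_per_group)])
        self.intra_text = nn.ModuleList([FFNExpert(d_model, d_hidden) for _ in range(n_per_group)])
        self.inter = nn.ModuleList([FFNExpert(d_model, d_hidden) for _ in range(n_per_group)])
        self.evidence = nn.ModuleList([FFNExpert(d_model, d_hidden) for _ in range(n_per_group)])
        # One router per modality: vision tokens choose among intra-vision + inter experts;
        # text tokens choose among intra-text + inter + evidence-priority experts.
        self.router_vision = nn.Linear(d_model, n_per_group * 2)
        self.router_text = nn.Linear(d_model, n_per_group * 3)

    def _dispatch(self, x, router, experts):
        # Top-1 routing: each token is processed only by its highest-scoring expert,
        # which keeps the number of activated parameters per token small.
        scores = F.softmax(router(x), dim=-1)        # [num_tokens, num_experts]
        top_w, top_idx = scores.max(dim=-1)          # [num_tokens]
        out = torch.zeros_like(x)
        for i, expert in enumerate(experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

    def forward(self, tokens, is_vision):
        # tokens: [num_tokens, d_model]; is_vision: boolean mask [num_tokens]
        out = torch.zeros_like(tokens)
        v, t = is_vision, ~is_vision
        if v.any():
            out[v] = self._dispatch(tokens[v], self.router_vision,
                                    list(self.intra_vision) + list(self.inter))
        if t.any():
            out[t] = self._dispatch(tokens[t], self.router_text,
                                    list(self.intra_text) + list(self.inter) + list(self.evidence))
        return out


if __name__ == "__main__":
    layer = AsyMoELayerSketch()
    tokens = torch.randn(10, 512)
    is_vision = torch.tensor([True] * 4 + [False] * 6)
    print(layer(tokens, is_vision).shape)  # torch.Size([10, 512])
```

The asymmetry in the abstract shows up here only as different candidate expert sets per modality; the paper's contributions (hierarchical cross-modal interaction in hyperbolic space and suppression of parametric bias in deep language experts) would replace the plain softmax routing and the uniform FFN experts used in this sketch.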
Similar Papers
MoIIE: Mixture of Intra- and Inter-Modality Experts for Large Vision Language Models
CV and Pattern Recognition
Makes AI understand pictures and words better, faster.
MedMoE: Modality-Specialized Mixture of Experts for Medical Vision-Language Understanding
CV and Pattern Recognition
Helps doctors understand medical images better.
MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models
Machine Learning (CS)
Makes AI smarter and faster by using many smart parts.