Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practices
By: Junyan Lin, Haoran Chen, Yue Fan, and more
Potential Business Impact:
Makes AI understand pictures better by picking the best parts.
Multimodal Large Language Models (MLLMs) have made significant advancements in recent years, with visual features playing an increasingly critical role in enhancing model performance. However, the integration of multi-layer visual features in MLLMs remains underexplored, particularly with regard to optimal layer selection and fusion strategies. Existing methods often rely on arbitrary design choices, leading to suboptimal outcomes. In this paper, we systematically investigate two core aspects of multi-layer visual feature fusion: (1) selecting the most effective visual layers and (2) identifying the best fusion approach with the language model. Our experiments reveal that while combining visual features from multiple stages improves generalization, incorporating additional features from the same stage typically leads to diminished performance. Furthermore, we find that direct fusion of multi-layer visual features at the input stage consistently yields superior and more stable performance across various configurations. We make all our code publicly available: https://github.com/EIT-NLP/Layer_Select_Fuse_for_MLLM.
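The paper's central finding, that directly fusing multi-layer visual features at the input stage is the most stable strategy, can be illustrated with a small sketch: concatenate hidden states from vision-encoder layers drawn from different stages and project them into the language model's embedding space before they are passed to the LLM as visual tokens. The code below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the class name `MultiLayerFusionProjector`, the number of selected layers, and the dimensions (CLIP-ViT-L-style 1024, LLaMA-style 4096) are assumptions.

```python
import torch
import torch.nn as nn


class MultiLayerFusionProjector(nn.Module):
    """Fuse hidden states from several vision-encoder layers and project them
    into the language model's embedding space (input-stage fusion).
    Layer count and dimensions are illustrative, not the paper's exact choices."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096, num_layers: int = 3):
        super().__init__()
        # Concatenate the selected layers along the channel dimension,
        # then map to the LLM embedding size with a small MLP.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim * num_layers, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, layer_features: list[torch.Tensor]) -> torch.Tensor:
        # layer_features: list of [batch, num_patches, vision_dim] tensors,
        # one per selected vision-encoder layer (e.g. shallow, middle, deep stage).
        fused = torch.cat(layer_features, dim=-1)  # [B, P, vision_dim * num_layers]
        return self.proj(fused)                    # [B, P, llm_dim] visual tokens for the LLM


if __name__ == "__main__":
    # Toy example: three feature maps drawn from different encoder stages.
    feats = [torch.randn(2, 576, 1024) for _ in range(3)]
    projector = MultiLayerFusionProjector()
    visual_tokens = projector(feats)
    print(visual_tokens.shape)  # torch.Size([2, 576, 4096])
```

In this sketch the fused tokens would simply be prepended to the text embeddings, in contrast to alternatives that inject visual features into intermediate language-model layers, which the paper reports to be less stable across configurations.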
Similar Papers
Rethinking Visual Layer Selection in Multimodal LLMs
CV and Pattern Recognition
Helps computers understand pictures better for different jobs.
Towards LLM-Centric Multimodal Fusion: A Survey on Integration Strategies and Techniques
Computation and Language
AI learns from pictures, sounds, and words together.
Layer-Aware Embedding Fusion for LLMs in Text Classifications
Computation and Language
Improves AI understanding by mixing word meanings.