Towards LLM-Centric Multimodal Fusion: A Survey on Integration Strategies and Techniques
By: Jisu An, Junseok Lee, Jeoungeun Lee, and more
Potential Business Impact:
AI learns from pictures, sounds, and words together.
The rapid progress of Multimodal Large Language Models (MLLMs) has transformed the AI landscape. These models combine pre-trained LLMs with various modality encoders, and this integration requires a systematic understanding of how different modalities connect to the language backbone. Our survey presents an LLM-centric analysis of current approaches, examining methods for transforming and aligning diverse modal inputs into the language embedding space and addressing a significant gap in the existing literature. We propose a classification framework for MLLMs based on three key dimensions. First, we examine architectural strategies for modality integration, covering both the specific integration mechanisms and the fusion level. Second, we categorize representation learning techniques as either joint or coordinated representations. Third, we analyze training paradigms, including training strategies and objective functions. By examining 125 MLLMs developed between 2021 and 2025, we identify emerging patterns in the field. Our taxonomy provides researchers with a structured overview of current integration techniques, and these insights aim to guide the development of more robust multimodal integration strategies for future models built on pre-trained foundations.
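To make the abstract's core idea concrete, the sketch below illustrates one common LLM-centric fusion pattern: features from a frozen modality encoder are projected into the language embedding space and concatenated with text token embeddings (a joint representation). This is a minimal PyTorch sketch under assumed dimensions; the class names, the two-layer MLP connector, and all sizes are illustrative choices, not details taken from the survey itself.

```python
# Minimal sketch of an LLM-centric fusion connector: visual features are
# projected into the language embedding space and prepended to text tokens.
# Dimensions, class names, and the MLP connector design are illustrative
# assumptions, not taken from the survey.
import torch
import torch.nn as nn


class VisualProjector(nn.Module):
    """Maps (frozen) vision-encoder features into the LLM's embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim)
        return self.proj(vision_feats)  # (batch, num_patches, llm_dim)


def fuse_inputs(vision_feats, text_embeds, projector):
    """Joint representation: projected visual tokens are concatenated with
    text token embeddings along the sequence dimension before the LLM."""
    visual_tokens = projector(vision_feats)
    return torch.cat([visual_tokens, text_embeds], dim=1)


if __name__ == "__main__":
    projector = VisualProjector()
    vision_feats = torch.randn(2, 256, 1024)  # e.g. ViT patch features
    text_embeds = torch.randn(2, 32, 4096)    # LLM token embeddings
    fused = fuse_inputs(vision_feats, text_embeds, projector)
    print(fused.shape)  # torch.Size([2, 288, 4096])
```

A coordinated representation, by contrast, would keep the modality and text embeddings in separate spaces and align them with an objective such as a contrastive loss rather than concatenating them into one sequence.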
Similar Papers
A Survey of Generative Categories and Techniques in Multimodal Large Language Models
Multimedia
Computers can now create pictures, music, and videos.
Towards Cross-Modality Modeling for Time Series Analytics: A Survey in the LLM Era
Machine Learning (CS)
Helps computers understand time data like words.
Graph-MLLM: Harnessing Multimodal Large Language Models for Multimodal Graph Learning
Machine Learning (CS)
Helps computers understand pictures and words together.