X-Fusion: Introducing New Modality to Frozen Large Language Models

Published: April 29, 2025 | arXiv ID: 2504.20996v1

By: Sicheng Mo, Thao Nguyen, Xun Huang, and more

Potential Business Impact:

Lets computers both understand images and generate new images from text descriptions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We propose X-Fusion, a framework that extends pretrained Large Language Models (LLMs) for multimodal tasks while preserving their language capabilities. X-Fusion employs a dual-tower design with modality-specific weights, keeping the LLM's parameters frozen while integrating vision-specific information for both understanding and generation. Our experiments demonstrate that X-Fusion consistently outperforms alternative architectures on both image-to-text and text-to-image tasks. We find that incorporating understanding-focused data improves generation quality, reducing image data noise enhances overall performance, and feature alignment accelerates convergence for smaller models but has minimal impact on larger ones. Our findings provide valuable insights into building efficient unified multimodal models.
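To make the dual-tower idea concrete, here is a minimal sketch of one layer: a frozen text pathway alongside a trainable vision pathway, with each token routed to the tower matching its modality. The class names, dimensions, and the token-routing scheme are illustrative assumptions, not the authors' released implementation.

```python
# Minimal dual-tower layer sketch (assumed structure, not the official X-Fusion code).
import torch
import torch.nn as nn


class DualTowerBlock(nn.Module):
    def __init__(self, d_model: int = 1024, n_heads: int = 16):
        super().__init__()
        # Frozen language tower: stands in for a pretrained LLM layer whose
        # parameters are never updated, preserving language capabilities.
        self.text_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        for p in self.text_layer.parameters():
            p.requires_grad = False
        # Trainable vision tower: modality-specific weights that learn
        # vision features for understanding and generation.
        self.vision_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, is_vision: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) mixed text/image token states
        # is_vision: (batch, seq) boolean mask marking image tokens
        text_out = self.text_layer(hidden)      # frozen text pathway
        vision_out = self.vision_layer(hidden)  # trainable vision pathway
        # Route each token through the tower that matches its modality.
        return torch.where(is_vision.unsqueeze(-1), vision_out, text_out)


if __name__ == "__main__":
    block = DualTowerBlock()
    x = torch.randn(2, 8, 1024)
    mask = torch.zeros(2, 8, dtype=torch.bool)
    mask[:, 4:] = True  # treat the last 4 tokens as image tokens in this toy example
    print(block(x, mask).shape)  # torch.Size([2, 8, 1024])
```

In this sketch only the vision tower receives gradient updates during training, which mirrors the paper's stated goal of adding a new modality without degrading the frozen LLM's language performance.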

Page Count
18 pages

Category
Computer Science: Computer Vision and Pattern Recognition