Rethinking Visual Information Processing in Multimodal LLMs

Published: November 13, 2025 | arXiv ID: 2511.10301v1

By: Dongwan Kim, Viresh Ranjan, Takashi Nagata, and more

Potential Business Impact:

Lets computers understand pictures and words together better.

Business Areas:
Computer Vision Hardware, Software

Despite the remarkable success of the LLaVA architecture for vision-language tasks, its design struggles to effectively integrate visual features due to the inherent mismatch between the text and vision modalities. We tackle this issue from a novel perspective in which the LLM serves not only as a language model but also as a powerful vision encoder. To this end, we present LLaViT - Large Language Models as extended Vision Transformers - which enables the LLM to simultaneously function as a vision encoder through three key modifications: (1) learning separate QKV projections for the vision modality, (2) enabling bidirectional attention on visual tokens, and (3) incorporating both global and local visual representations. Through extensive controlled experiments on a wide range of LLMs, we demonstrate that LLaViT significantly outperforms the baseline LLaVA method on a multitude of benchmarks, even surpassing models with double its parameter count, establishing a more effective approach to vision-language modeling.
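The first two modifications can be illustrated with a toy sketch (not the authors' code; function names and the masking scheme are assumptions based on the abstract): tokens are routed through modality-specific QKV projections, and the attention mask stays causal for text while letting visual tokens attend to each other in both directions.

```python
# Hedged sketch of two LLaViT-style modifications, assuming a standard
# decoder-only attention setup (not the paper's implementation):
#   (1) separate QKV projections per modality
#   (2) bidirectional attention among visual tokens; text remains causal

def project_qkv(tokens, modalities, qkv_text, qkv_vision):
    """Route each token through its modality's QKV projection
    (hypothetical helper; projections are passed in as callables)."""
    return [(qkv_vision if m == "vision" else qkv_text)(t)
            for t, m in zip(tokens, modalities)]

def build_attention_mask(is_vision):
    """Return an n x n boolean matrix where mask[i][j] is True when
    token i may attend to token j."""
    n = len(is_vision)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j <= i:
                # causal attention: every token sees itself and the past
                mask[i][j] = True
            elif is_vision[i] and is_vision[j]:
                # bidirectional attention: vision tokens also see
                # vision tokens that come later in the sequence
                mask[i][j] = True
    return mask

# Example: two image tokens followed by two text tokens.
mask = build_attention_mask([True, True, False, False])
```

Under this scheme the image tokens behave like a ViT (full bidirectional attention among themselves), while the text portion keeps the causal masking the LLM was pretrained with.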

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition