Reading Images Like Texts: Sequential Image Understanding in Vision-Language Models

Published: September 23, 2025 | arXiv ID: 2509.19191v1

By: Yueyan Li, Chenggong Zhao, Zeyuan Zang, and more

Potential Business Impact:

Helps computers "see" images more efficiently and better understand where objects are located.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Models (VLMs) have demonstrated remarkable performance across a variety of real-world tasks. However, existing VLMs typically process visual information by serializing images, a method that diverges significantly from the parallel nature of human vision. Moreover, their opaque internal mechanisms hinder both deeper understanding and architectural innovation. Inspired by the dual-stream hypothesis of human vision, which distinguishes the "what" and "where" pathways, we deconstruct the visual processing in VLMs into object recognition and spatial perception for separate study. For object recognition, we convert images into text token maps and find that the model's perception of image content unfolds as a two-stage process from shallow to deep layers, beginning with attribute recognition and culminating in semantic disambiguation. For spatial perception, we theoretically derive and empirically verify the geometric structure underlying the positional representation in VLMs. Based on these findings, we introduce an instruction-agnostic token compression algorithm based on a plug-and-play visual decoder to improve decoding efficiency, and a RoPE scaling technique to enhance spatial reasoning. Through rigorous experiments, our work validates these analyses, offering a deeper understanding of VLM internals and providing clear principles for designing more capable future architectures.

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
36 pages

Category
Computer Science:
Computer Vision and Pattern Recognition