Grounding Everything in Tokens for Multimodal Large Language Models
By: Xiangxuan Ren, Zhongdao Wang, Liping Hou, and more
Potential Business Impact:
Helps computers see and point to things.
Multimodal large language models (MLLMs) have made significant advancements in vision understanding and reasoning. However, the autoregressive Transformer architecture used by MLLMs requires tokenization of input images, which limits their ability to accurately ground objects within the 2D image space. This raises an important question: how can sequential language tokens be improved to better ground objects in 2D space for MLLMs? To address this, we present a spatial representation method for grounding objects, namely GETok, that integrates a specialized vocabulary of learnable tokens into MLLMs. GETok first uses grid tokens to partition the image plane into structured spatial anchors, and then exploits offset tokens to enable precise and iterative refinement of localization predictions. By embedding spatial relationships directly into tokens, GETok significantly advances MLLMs' native reasoning in 2D space without modifying the autoregressive architecture. Extensive experiments demonstrate that GETok achieves superior performance over state-of-the-art methods across various referring tasks in both supervised fine-tuning and reinforcement learning settings.
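The abstract describes the mechanism (coarse grid tokens plus refining offset tokens) but not its implementation. The sketch below is a minimal illustration, not the authors' code: it assumes a hypothetical G x G grid vocabulary and quantized per-cell offset bins, and shows how such tokens could be decoded into normalized 2D coordinates. The grid size, offset bin count, and function names are assumptions for illustration only.

```python
# Illustrative sketch only; GETok's actual vocabulary sizes and refinement
# schedule are not specified in the abstract.

GRID_SIZE = 32     # assumed number of spatial anchor cells per axis
OFFSET_BINS = 64   # assumed quantization bins for offsets within a cell

def decode_point(grid_token_id: int, offset_x_id: int, offset_y_id: int):
    """Map one grid token and two offset tokens to normalized (x, y) in [0, 1]."""
    # The grid token selects a coarse spatial anchor (cell) on the image plane.
    cell_row, cell_col = divmod(grid_token_id, GRID_SIZE)
    cell_size = 1.0 / GRID_SIZE
    # The offset tokens refine the prediction within the selected cell.
    dx = (offset_x_id + 0.5) / OFFSET_BINS * cell_size
    dy = (offset_y_id + 0.5) / OFFSET_BINS * cell_size
    x = cell_col * cell_size + dx
    y = cell_row * cell_size + dy
    return x, y

# Example: anchor cell at (row 10, col 5), refined toward the cell center.
print(decode_point(grid_token_id=10 * GRID_SIZE + 5, offset_x_id=32, offset_y_id=32))
```

In this reading, the grid token carries coarse location and the offset tokens carry the residual, so a box or point can be emitted as an ordinary token sequence without changing the autoregressive decoder.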
Similar Papers
Harnessing Object Grounding for Time-Sensitive Video Understanding
CV and Pattern Recognition
Helps AI understand videos by seeing objects.
AdaTok: Adaptive Token Compression with Object-Aware Representations for Efficient Multimodal LLMs
CV and Pattern Recognition
Makes AI understand pictures using fewer computer steps.
Direct Visual Grounding by Directing Attention of Visual Tokens
CV and Pattern Recognition
Makes AI better at answering questions about pictures.