Thinking with Blueprints: Assisting Vision-Language Models in Spatial Reasoning via Structured Object Representation
By: Weijian Ma, Shizhao Sun, Tianyu Yu, and more
Potential Business Impact:
Helps computers understand where objects are in an image and how they are arranged.
Spatial reasoning -- the ability to perceive and reason about relationships in space -- advances vision-language models (VLMs) from visual perception toward spatial semantic understanding. Existing approaches either revisit local image patches, improving fine-grained perception but weakening global spatial awareness, or mark isolated coordinates, which capture object locations but overlook their overall organization. In this work, we integrate the cognitive concept of an object-centric blueprint into VLMs to enhance spatial reasoning. Given an image and a question, the model first constructs a JSON-style blueprint that records the positions, sizes, and attributes of relevant objects, and then reasons over this structured representation to produce the final answer. To achieve this, we introduce three key techniques: (1) blueprint-embedded reasoning traces for supervised fine-tuning to elicit basic reasoning skills; (2) blueprint-aware rewards in reinforcement learning that encourage the blueprint to include an appropriate number of objects and align final answers with the reasoning grounded in the blueprint; and (3) anti-shortcut data augmentation that applies targeted perturbations to images and questions, discouraging reliance on superficial visual or linguistic cues. Experiments show that our method consistently outperforms existing VLMs and specialized spatial reasoning models.
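To make the blueprint idea concrete, the sketch below shows one plausible shape for such a JSON-style blueprint and a toy reasoning step over it. The schema (an "objects" list with "bbox" and "attributes" fields) and the helper functions `center` and `left_of` are illustrative assumptions, not the paper's actual format.

```python
# A minimal sketch of an object-centric blueprint and reasoning over it.
# The schema ("bbox" as [x, y, w, h], "attributes") is an illustrative
# assumption, not the paper's actual blueprint format.
import json

blueprint_json = """
{
  "objects": [
    {"name": "mug",    "bbox": [120, 310, 60, 80],   "attributes": ["red"]},
    {"name": "laptop", "bbox": [300, 280, 220, 140], "attributes": ["open"]}
  ]
}
"""

def center(obj):
    """Center point of an object's bounding box (x, y, w, h assumed)."""
    x, y, w, h = obj["bbox"]
    return (x + w / 2, y + h / 2)

def left_of(a, b):
    """True if object a's center lies to the left of object b's center."""
    return center(a)[0] < center(b)[0]

blueprint = json.loads(blueprint_json)
objs = {o["name"]: o for o in blueprint["objects"]}

# Question: "Is the mug to the left of the laptop?"
print(left_of(objs["mug"], objs["laptop"]))  # True
```

In the paper's pipeline, both the blueprint and the reasoning over it are produced by the VLM itself; the explicit comparison here only illustrates the kind of relational check that a structured object representation makes possible.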
Similar Papers
Vision-Language Memory for Spatial Reasoning
CV and Pattern Recognition
Robots understand 3D space better from videos.
Enhancing Spatial Reasoning through Visual and Textual Thinking
CV and Pattern Recognition
Helps robots understand where things are in space.
A Multi-Modal Neuro-Symbolic Approach for Spatial Reasoning-Based Visual Grounding in Robotics
Robotics
Helps robots understand where things are.