CVP: Central-Peripheral Vision-Inspired Multimodal Model for Spatial Reasoning
By: Zeyuan Chen, Xiang Zhang, Haiyang Xu, and more
Potential Business Impact:
Helps computers understand 3D spaces like humans.
We present a central-peripheral vision-inspired framework (CVP), a simple yet effective multimodal model for spatial reasoning modeled on the two types of human visual fields -- central vision and peripheral vision. Existing approaches primarily rely on unstructured representations, such as point clouds, voxels, or patch features, and inject scene context implicitly via coordinate embeddings. However, this often limits spatial reasoning because the model lacks explicit, high-level structural understanding. To address this limitation, we introduce two complementary components into a Large Multimodal Model-based architecture: a target-affinity token, analogous to central vision, that guides the model's attention toward query-relevant objects; and an allocentric grid, akin to peripheral vision, that captures global scene context and spatial arrangements. These components work in tandem to enable structured, context-aware understanding of complex 3D environments. Experiments show that CVP achieves state-of-the-art performance across a range of 3D scene understanding benchmarks.
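The abstract does not detail how the two components are injected into the multimodal model. The sketch below is a minimal, hypothetical illustration (PyTorch-style; all module names, tensor shapes, and the similarity-pooling step are assumptions, not the authors' implementation) of one way a target-affinity token and allocentric grid tokens could be prepended to patch features before the Large Multimodal Model consumes them.

```python
import torch
import torch.nn as nn

class CVPTokenizerSketch(nn.Module):
    """Hypothetical sketch: fuse a target-affinity token (central vision)
    and allocentric grid tokens (peripheral vision) with patch features
    to form the visual prefix for an LMM. Shapes and names are assumed."""

    def __init__(self, feat_dim=1024, lmm_dim=4096, grid_size=8):
        super().__init__()
        # Project raw visual patch features into the LMM embedding space.
        self.patch_proj = nn.Linear(feat_dim, lmm_dim)
        # Project the query-pooled feature into a single affinity token.
        self.affinity_proj = nn.Linear(feat_dim, lmm_dim)
        # Project coarse grid_size x grid_size scene cells into grid tokens.
        self.grid_proj = nn.Linear(feat_dim, lmm_dim)
        self.grid_size = grid_size

    def forward(self, patch_feats, query_feat, scene_grid_feats):
        # patch_feats:      (B, N, feat_dim)  unstructured patch features
        # query_feat:       (B, feat_dim)     pooled text-query embedding
        # scene_grid_feats: (B, G*G, feat_dim) allocentric grid cells
        patches = self.patch_proj(patch_feats)
        # Weight patches by similarity to the query, then pool them into
        # one target-affinity token that emphasizes query-relevant objects.
        sim = torch.softmax(
            (patch_feats @ query_feat.unsqueeze(-1)).squeeze(-1), dim=-1
        )  # (B, N)
        affinity_token = self.affinity_proj(
            (sim.unsqueeze(-1) * patch_feats).sum(dim=1, keepdim=True)
        )  # (B, 1, lmm_dim)
        grid_tokens = self.grid_proj(scene_grid_feats)  # (B, G*G, lmm_dim)
        # Concatenate: [affinity | grid | patches] as the LMM visual prefix.
        return torch.cat([affinity_token, grid_tokens, patches], dim=1)

# Usage (random tensors, illustrative shapes only):
model = CVPTokenizerSketch()
prefix = model(torch.randn(2, 196, 1024), torch.randn(2, 1024), torch.randn(2, 64, 1024))
print(prefix.shape)  # torch.Size([2, 261, 4096])
```

In this reading, the affinity token plays the role of central vision (a focused, query-conditioned summary) while the grid tokens play the role of peripheral vision (a fixed allocentric layout of the whole scene); the actual CVP design should be taken from the paper itself.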
Similar Papers
Vision-Language Memory for Spatial Reasoning
CV and Pattern Recognition
Robots understand 3D space better from videos.
Video4Spatial: Towards Visuospatial Intelligence with Context-Guided Video Generation
CV and Pattern Recognition
Teaches computers to understand space from videos.
COOPER: A Unified Model for Cooperative Perception and Reasoning in Spatial Intelligence
CV and Pattern Recognition
Helps computers understand 3D space better.