From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models
By: Zongzhao Li, Xiangzhe Kong, Jiahui Su, et al.
Potential Business Impact:
Helps computers understand the positions of tiny things like molecules.
This paper introduces the concept of Microscopic Spatial Intelligence (MiSI): the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose a systematic benchmark framework, MiSI-Bench. The framework features over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identifications. Experimental results reveal that current state-of-the-art VLMs perform significantly below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans on spatial transformation tasks, while its poor performance on scientifically grounded tasks like hydrogen bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.
Similar Papers
Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models
CV and Pattern Recognition
Computers still struggle to understand space.
MMSI-Video-Bench: A Holistic Benchmark for Video-Based Spatial Intelligence
CV and Pattern Recognition
Tests AI's ability to understand videos the way humans do.
SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition
Artificial Intelligence
Tests how well computers understand space and plan.