ST-Think: How Multimodal Large Language Models Reason About 4D Worlds from Ego-Centric Videos
By: Peiran Wu, Yunze Liu, Miao Liu, and more
Potential Business Impact:
Teaches computers to understand videos like people.
Humans excel at spatial-temporal reasoning, effortlessly interpreting dynamic visual events from an egocentric viewpoint. However, whether multimodal large language models (MLLMs) can similarly understand the 4D world remains uncertain. This paper explores multimodal spatial-temporal reasoning from an egocentric perspective, aiming to equip MLLMs with human-like reasoning capabilities. To support this objective, we introduce Ego-ST Bench, a novel benchmark containing over 5,000 question-answer pairs across four categories that systematically evaluate spatial, temporal, and integrated spatial-temporal reasoning. Additionally, we propose the ST-R1 training paradigm, which yields a video-based reasoning model by incorporating reverse thinking into the reinforcement learning process, significantly enhancing performance. We combine long-chain-of-thought (long-CoT) supervised fine-tuning with Group Relative Policy Optimization (GRPO) reinforcement learning, achieving notable improvements with limited high-quality data. Ego-ST Bench and ST-R1 provide valuable insights and resources for advancing video-based spatial-temporal reasoning research.
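The abstract names Group Relative Policy Optimization (GRPO) as the reinforcement learning component. Below is a minimal sketch of the group-relative advantage computation at the core of GRPO, assuming a simple binary correctness reward for each answer sampled for the same question; the function name and reward scheme are illustrative assumptions, not code from the paper.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """GRPO-style advantages for a group of sampled responses.

    rewards: shape (G,) -- scalar rewards for G responses sampled for the
    same prompt (e.g., 1.0 if the spatial-temporal answer is correct, else 0.0).
    Each response's advantage is its reward standardized against the group
    mean and standard deviation, so no learned value critic is required.
    """
    mean = rewards.mean()
    std = rewards.std()
    return (rewards - mean) / (std + eps)

# Example: four sampled answers to one benchmark question,
# two correct (reward 1.0) and two incorrect (reward 0.0).
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(group_relative_advantages(rewards))  # positive for correct answers, negative otherwise
```

In a GRPO-style setup these advantages would weight the policy-gradient update for each sampled response; how ST-R1 combines this with reverse thinking and long-CoT supervised fine-tuning is described in the paper itself.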
Similar Papers
Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models
CV and Pattern Recognition
Teaches computers to understand 3D objects from different views.
V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning
CV and Pattern Recognition
Teaches computers to understand video actions like people.
Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-View Scenes
CV and Pattern Recognition
Helps robots understand 3D space from their own eyes.