
ST-Think: How Multimodal Large Language Models Reason About 4D Worlds from Ego-Centric Videos

Published: March 16, 2025 | arXiv ID: 2503.12542v2

By: Peiran Wu, Yunze Liu, Miao Liu, and more

Potential Business Impact:

Teaches computers to reason about space and time in first-person videos the way people do.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Humans excel at spatial-temporal reasoning, effortlessly interpreting dynamic visual events from an egocentric viewpoint. However, whether multimodal large language models (MLLMs) can similarly understand the 4D world remains uncertain. This paper explores multimodal spatial-temporal reasoning from an egocentric perspective, aiming to equip MLLMs with human-like reasoning capabilities. To support this objective, we introduce Ego-ST Bench, a novel benchmark containing over 5,000 question-answer pairs across four categories, systematically evaluating spatial, temporal, and integrated spatial-temporal reasoning. Additionally, we propose ST-R1, a video-based reasoning model whose training paradigm incorporates reverse thinking into its reinforcement learning process, significantly enhancing performance. We combine long-chain-of-thought (long-CoT) supervised fine-tuning with Group Relative Policy Optimization (GRPO) reinforcement learning, achieving notable improvements with limited high-quality data. Ego-ST Bench and ST-R1 provide valuable insights and resources for advancing video-based spatial-temporal reasoning research.
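
The abstract names GRPO as the reinforcement learning component. As a rough illustration of that idea (not the paper's implementation), the sketch below shows the group-relative advantage normalization at the heart of GRPO: for each prompt, several responses are sampled and each response's reward is standardized against the statistics of its own group. Shapes, values, and the function name are illustrative assumptions.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages in the GRPO style (illustrative sketch).

    rewards: tensor of shape (num_prompts, group_size), where each row holds
    the scalar rewards of the responses sampled for the same prompt.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    # Standardize each reward against its own group's mean and spread;
    # the small epsilon guards against zero variance within a group.
    return (rewards - mean) / (std + 1e-8)

# Hypothetical example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_relative_advantages(rewards))
```

These advantages then weight the policy-gradient update for each sampled response, so no separate value network is needed; how the rewards themselves are scored (e.g., answer correctness on spatial-temporal questions) is specific to the paper and not shown here.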

Page Count
14 pages

Category
Computer Science: Computer Vision and Pattern Recognition