Score: 1

EgoThinker: Unveiling Egocentric Reasoning with Spatio-Temporal CoT

Published: October 27, 2025 | arXiv ID: 2510.23569v1

By: Baoqi Pei, Yifei Huang, Jilan Xu, and more

Potential Business Impact:

Helps computers understand what the person wearing a camera sees, does, and intends.

Business Areas:
Image Recognition, Data and Analytics, Software

Egocentric video reasoning centers on an unobservable agent behind the camera who dynamically shapes the environment, requiring inference of hidden intentions and recognition of fine-grained interactions. This core challenge limits current multimodal large language models (MLLMs), which excel at visible-event reasoning but lack embodied, first-person understanding. To bridge this gap, we introduce EgoThinker, a novel framework that endows MLLMs with robust egocentric reasoning capabilities through spatio-temporal chain-of-thought (CoT) supervision and a two-stage learning curriculum. First, we introduce EgoRe-5M, a large-scale egocentric QA dataset constructed from 13M diverse egocentric video clips. This dataset features multi-minute segments annotated with detailed CoT rationales and dense hand-object grounding. Second, we employ supervised fine-tuning (SFT) on EgoRe-5M to instill reasoning skills, followed by reinforcement fine-tuning (RFT) to further enhance spatio-temporal localization. Experimental results show that EgoThinker outperforms existing methods across multiple egocentric benchmarks, while achieving substantial improvements in fine-grained spatio-temporal localization tasks. Full code and data are released at https://github.com/InternRobotics/EgoThinker.
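To make the two-stage curriculum concrete, below is a minimal, self-contained PyTorch sketch of the idea: a toy stand-in model is first trained with next-token cross-entropy on CoT tokens (the SFT stage), then updated with a REINFORCE-style objective whose reward is temporal IoU against a ground-truth span, loosely mirroring RFT for spatio-temporal localization. `ToyVLM`, `sft_step`, `rft_step`, and all shapes and hyperparameters are illustrative assumptions, not the released EgoThinker code.

```python
# Hypothetical sketch of a two-stage curriculum: SFT on CoT-annotated QA,
# then reinforcement fine-tuning (RFT) with a temporal-IoU reward.
# Toy stand-ins throughout; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64

class ToyVLM(nn.Module):
    """Stand-in for an MLLM: predicts next-token logits for CoT text and a
    normalized [start, end] temporal span for grounding."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.backbone = nn.GRU(DIM, DIM, batch_first=True)
        self.lm_head = nn.Linear(DIM, VOCAB)   # CoT token prediction
        self.span_head = nn.Linear(DIM, 2)     # temporal localization

    def forward(self, ids):
        h, _ = self.backbone(self.embed(ids))
        return self.lm_head(h), torch.sigmoid(self.span_head(h[:, -1]))

def sft_step(model, opt, ids, target_ids):
    """Stage 1: supervised fine-tuning on CoT rationales (next-token CE)."""
    logits, _ = model(ids)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), target_ids.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def temporal_iou(pred, gt):
    """Reward: IoU between predicted and ground-truth [start, end] spans."""
    lo = torch.maximum(pred[:, 0], gt[:, 0])
    hi = torch.minimum(pred[:, 1], gt[:, 1])
    inter = (hi - lo).clamp(min=0)
    union = (pred[:, 1] - pred[:, 0]) + (gt[:, 1] - gt[:, 0]) - inter
    return inter / union.clamp(min=1e-6)

def rft_step(model, opt, ids, gt_span):
    """Stage 2: RFT sketch. Sample spans around the model's prediction and
    reinforce samples whose IoU beats the batch baseline (REINFORCE)."""
    _, mu = model(ids)
    dist = torch.distributions.Normal(mu, 0.05)
    sample = dist.sample()                       # non-differentiable rollout
    ordered, _ = torch.sort(sample, dim=-1)      # enforce start <= end
    reward = temporal_iou(ordered, gt_span)
    advantage = reward - reward.mean()           # simple batch baseline
    loss = -(dist.log_prob(sample).sum(-1) * advantage).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return reward.mean().item()

model = ToyVLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ids = torch.randint(0, VOCAB, (4, 16))           # toy QA + CoT token ids
sft_loss = sft_step(model, opt, ids[:, :-1], ids[:, 1:])
gt = torch.tensor([[0.2, 0.6]]).repeat(4, 1)     # toy ground-truth spans
iou = rft_step(model, opt, ids, gt)
print(f"SFT loss {sft_loss:.3f} | mean temporal IoU reward {iou:.3f}")
```

The point the sketch illustrates is the division of labor in the curriculum: SFT shapes the rationale text with a differentiable token loss, while RFT directly optimizes a non-differentiable localization metric (temporal IoU), which is why a policy-gradient update is used in the second stage.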

Country of Origin
🇯🇵 Japan

Repos / Data Links
https://github.com/InternRobotics/EgoThinker

Page Count
29 pages

Category
Computer Science:
CV and Pattern Recognition