EgoThinker: Unveiling Egocentric Reasoning with Spatio-Temporal CoT
By: Baoqi Pei, Yifei Huang, Jilan Xu, and more
Potential Business Impact:
Helps computers understand what people see and do.
Egocentric video reasoning centers on an unobservable agent behind the camera who dynamically shapes the environment, requiring inference of hidden intentions and recognition of fine-grained interactions. This core challenge limits current multimodal large language models (MLLMs), which excel at reasoning over visible events but lack embodied, first-person understanding. To bridge this gap, we introduce EgoThinker, a novel framework that endows MLLMs with robust egocentric reasoning capabilities through spatio-temporal chain-of-thought (CoT) supervision and a two-stage learning curriculum. First, we introduce EgoRe-5M, a large-scale egocentric QA dataset constructed from 13M diverse egocentric video clips. This dataset features multi-minute segments annotated with detailed CoT rationales and dense hand-object grounding. Second, we employ supervised fine-tuning (SFT) on EgoRe-5M to instill reasoning skills, followed by reinforcement fine-tuning (RFT) to further enhance spatio-temporal localization. Experimental results show that EgoThinker outperforms existing methods across multiple egocentric benchmarks, while achieving substantial improvements in fine-grained spatio-temporal localization tasks. Full code and data are released at https://github.com/InternRobotics/EgoThinker.
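The second stage, RFT, needs a verifiable reward for spatio-temporal localization. Below is a minimal sketch of one plausible reward design, assuming the policy emits a temporal span and a hand-object bounding box that are scored against ground truth via intersection-over-union; the function names, the span/box formats, and the 0.5/0.5 weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical spatio-temporal localization reward for RFT.
# All names and the equal temporal/spatial weighting are assumptions
# for illustration, not EgoThinker's reported reward.

def temporal_iou(pred, gt):
    """IoU between two time spans given as (start_sec, end_sec)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def box_iou(pred, gt):
    """IoU between two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    iy = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = ix * iy
    area = lambda b: max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    union = area(pred) + area(gt) - inter
    return inter / union if union > 0 else 0.0

def spatio_temporal_reward(pred_span, gt_span, pred_box, gt_box,
                           w_t=0.5, w_s=0.5):
    """Scalar reward in [0, 1] combining temporal and spatial grounding."""
    return w_t * temporal_iou(pred_span, gt_span) + w_s * box_iou(pred_box, gt_box)

# Example: a prediction that partially overlaps the ground truth.
r = spatio_temporal_reward(
    pred_span=(3.0, 8.0), gt_span=(4.0, 9.0),
    pred_box=(0.2, 0.2, 0.6, 0.6), gt_box=(0.3, 0.3, 0.7, 0.7),
)
print(f"reward = {r:.3f}")  # ~0.529 for this example
```

A dense, continuous reward like this gives the RFT stage a smooth gradient toward tighter grounding, in contrast to a binary correct/incorrect signal.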
Similar Papers
Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning
CV and Pattern Recognition
Lets computers understand videos lasting weeks.
OneThinker: All-in-one Reasoning Model for Image and Video
CV and Pattern Recognition
One model understands images and videos for many tasks.