Fine-grained Spatiotemporal Grounding on Egocentric Videos
By: Shuo Liang, Yiwu Zhong, Zi-Yuan Hu, and more
Potential Business Impact:
Helps robots see and understand what they are looking at.
Spatiotemporal video grounding aims to localize target entities in videos based on textual queries. While existing research has made significant progress on exocentric videos, the egocentric setting remains relatively underexplored, despite its growing importance in applications such as augmented reality and robotics. In this work, we conduct a systematic analysis of the discrepancies between egocentric and exocentric videos, revealing key challenges such as shorter object durations, sparser trajectories, smaller object sizes, and larger positional shifts. To address these challenges, we introduce EgoMask, the first pixel-level benchmark for fine-grained spatiotemporal grounding in egocentric videos. It is constructed with our proposed automatic annotation pipeline, which annotates referring expressions and object masks across short-, medium-, and long-term videos. Additionally, we create EgoMask-Train, a large-scale training dataset to facilitate model development. Experiments demonstrate that state-of-the-art spatiotemporal grounding models perform poorly on our EgoMask benchmark, but fine-tuning on EgoMask-Train yields significant improvements while preserving performance on exocentric datasets. Our work thus provides essential resources and insights for advancing egocentric video understanding. Our code is available at https://github.com/LaVi-Lab/EgoMask.
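The abstract does not spell out the evaluation protocol, so the sketch below is only an illustration of what pixel-level spatiotemporal grounding entails: comparing a predicted mask track against a ground-truth mask track over a whole clip. The function names and the toy example are assumptions for clarity, not part of the EgoMask release.

```python
# Minimal sketch (not the authors' code): spatiotemporal mask IoU between a
# predicted mask track and a ground-truth mask track. Frames where the target
# object is absent are represented by all-zero masks.
import numpy as np


def spatiotemporal_iou(pred_masks: np.ndarray, gt_masks: np.ndarray) -> float:
    """IoU over the whole space-time volume.

    pred_masks, gt_masks: boolean arrays of shape (T, H, W), aligned in time.
    """
    assert pred_masks.shape == gt_masks.shape
    intersection = np.logical_and(pred_masks, gt_masks).sum()
    union = np.logical_or(pred_masks, gt_masks).sum()
    return float(intersection) / float(union) if union > 0 else 1.0


def per_frame_iou(pred_masks: np.ndarray, gt_masks: np.ndarray) -> np.ndarray:
    """Per-frame IoU, useful for analysing short object durations and
    large frame-to-frame positional shifts in egocentric footage."""
    inter = np.logical_and(pred_masks, gt_masks).sum(axis=(1, 2))
    union = np.logical_or(pred_masks, gt_masks).sum(axis=(1, 2))
    return np.where(union > 0, inter / np.maximum(union, 1), 1.0)


if __name__ == "__main__":
    # Toy clip: 3 frames of 4x4 masks where the object appears only in frame 1,
    # mimicking the short object durations typical of egocentric videos.
    gt = np.zeros((3, 4, 4), dtype=bool)
    gt[1, 1:3, 1:3] = True
    pred = np.zeros_like(gt)
    pred[1, 1:3, 2:4] = True  # prediction shifted by one pixel
    print("volume IoU:", spatiotemporal_iou(pred, gt))
    print("frame IoUs:", per_frame_iou(pred, gt))
```

Aggregating the per-frame scores over many clips is one way to quantify the challenges the paper highlights (e.g., how often the target is visible and how much its mask moves between frames), though the benchmark's actual metrics may differ.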
Similar Papers
ToG-Bench: Task-Oriented Spatio-Temporal Grounding in Egocentric Videos
CV and Pattern Recognition
Helps robots understand what to do in a room.
Object-Shot Enhanced Grounding Network for Egocentric Video
CV and Pattern Recognition
Helps robots understand what you're looking at.
EgoThinker: Unveiling Egocentric Reasoning with Spatio-Temporal CoT
CV and Pattern Recognition
Helps computers understand what people see and do.