Unsupervised Ego- and Exo-centric Dense Procedural Activity Captioning via Gaze Consensus Adaptation
By: Zhaofeng Shi, Heqian Qiu, Lanxiao Wang, and more
Potential Business Impact:
Helps computers describe step-by-step activities in videos filmed from different viewpoints (first-person and third-person), without needing labels for every viewpoint.
Even from an early age, humans naturally adapt between exocentric (Exo) and egocentric (Ego) perspectives to understand daily procedural activities. Inspired by this cognitive ability, we propose a novel Unsupervised Ego-Exo Dense Procedural Activity Captioning (UE$^{2}$DPAC) task, which aims to transfer knowledge from a labeled source view to predict the time segments and descriptions of action sequences for an unannotated target view. Although previous works address fully-supervised single-view or cross-view dense video captioning, they fall short on the proposed task due to the significant inter-view gap caused by temporal misalignment and irrelevant object interference. Hence, we propose a Gaze Consensus-guided Ego-Exo Adaptation Network (GCEAN) that injects gaze information into the learned representations for fine-grained Ego-Exo alignment. Specifically, we propose a Score-based Adversarial Learning Module (SALM) that incorporates a discriminative scoring network and compares the scores of distinct views to learn unified view-invariant representations at a global level. Then, a Gaze Consensus Construction Module (GCCM) uses gaze to progressively calibrate the learned representations, highlighting regions of interest and extracting the corresponding temporal contexts. Moreover, we adopt hierarchical gaze-guided consistency losses to construct a gaze consensus for explicit temporal and spatial adaptation between the source and target views. To support our research, we introduce a new EgoMe-UE$^{2}$DPAC benchmark, and extensive experiments demonstrate the effectiveness of our method, which outperforms related methods by a large margin. Code is available at https://github.com/ZhaofengSHI/GCEAN.
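The two alignment ideas in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering, not the authors' released code (see the repository above for the actual implementation): a discriminative scoring network that scores clips from each view for global adversarial alignment, and a gaze-weighted pooling consistency loss that pulls gaze-attended features from the two views together. All shapes, names (`ScoringNetwork`, `gaze_consistency_loss`), and loss choices are assumptions made for illustration.

```python
# Minimal sketch (NOT the authors' code) of score-based adversarial view
# alignment and a gaze-guided consistency loss, assuming PyTorch features
# of shape (batch, time, dim) or (batch, time, H*W, dim).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoringNetwork(nn.Module):
    """Discriminative scoring network: assigns one view score per clip."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.ReLU(),
            nn.Linear(dim // 2, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, D) -> mean-pool over time -> score per clip: (B,)
        return self.net(feats.mean(dim=1)).squeeze(-1)

def adversarial_view_loss(scorer: ScoringNetwork,
                          src_feats: torch.Tensor,
                          tgt_feats: torch.Tensor) -> torch.Tensor:
    """Global-level alignment: the scorer tries to tell the views apart;
    the feature extractor is trained against it so the two views' score
    distributions become indistinguishable (view-invariant features)."""
    src_score = scorer(src_feats)  # e.g. labeled Exo clips
    tgt_score = scorer(tgt_feats)  # e.g. unlabeled Ego clips
    # One plausible choice: binary view-discrimination objective.
    return (F.binary_cross_entropy_with_logits(
                src_score, torch.ones_like(src_score))
            + F.binary_cross_entropy_with_logits(
                tgt_score, torch.zeros_like(tgt_score)))

def gaze_consistency_loss(feats_a: torch.Tensor, feats_b: torch.Tensor,
                          gaze_a: torch.Tensor, gaze_b: torch.Tensor
                          ) -> torch.Tensor:
    """Gaze-guided consistency: pool spatial features under each view's
    gaze heatmap, then pull the gaze-attended descriptors together."""
    # feats_*: (B, T, H*W, D); gaze_*: (B, T, H*W) normalized weights
    pooled_a = (feats_a * gaze_a.unsqueeze(-1)).sum(dim=2)  # (B, T, D)
    pooled_b = (feats_b * gaze_b.unsqueeze(-1)).sum(dim=2)
    return F.mse_loss(pooled_a, pooled_b)
```

In practice, adversarial variants of this idea typically alternate discriminator and feature-extractor updates, or insert a gradient-reversal layer, so that minimizing the scoring loss for the discriminator simultaneously pushes the backbone toward view-invariant representations.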
Similar Papers
EgoExo-Gen: Ego-centric Video Prediction by Watching Exo-centric Videos
CV and Pattern Recognition
Makes videos show what your hands are doing.
EgoX: Egocentric Video Generation from a Single Exocentric Video
CV and Pattern Recognition
Turns normal videos into your own first-person view.
EgoM2P: Egocentric Multimodal Multitask Pretraining
CV and Pattern Recognition
Helps robots and computers understand what you see.