Score: 1

Unsupervised Ego- and Exo-centric Dense Procedural Activity Captioning via Gaze Consensus Adaptation

Published: April 7, 2025 | arXiv ID: 2504.04840v3

By: Zhaofeng Shi, Heqian Qiu, Lanxiao Wang, and more

Potential Business Impact:

Enables video systems to transfer procedural-activity understanding between first-person (egocentric) and third-person (exocentric) views without requiring annotations for the target view.

Business Areas:
Image Recognition, Data and Analytics, Software

Even from an early age, humans naturally adapt between exocentric (Exo) and egocentric (Ego) perspectives to understand daily procedural activities. Inspired by this cognitive ability, we propose a novel Unsupervised Ego-Exo Dense Procedural Activity Captioning (UE$^{2}$DPAC) task, which aims to transfer knowledge from the labeled source view to predict the time segments and descriptions of action sequences for the target view without annotations. Although previous works address fully-supervised single-view or cross-view dense video captioning, they falter on the proposed task due to the significant inter-view gap caused by temporal misalignment and irrelevant object interference. Hence, we propose a Gaze Consensus-guided Ego-Exo Adaptation Network (GCEAN) that injects gaze information into the learned representations for fine-grained Ego-Exo alignment. Specifically, we propose a Score-based Adversarial Learning Module (SALM) that incorporates a discriminative scoring network and compares the scores of distinct views to learn unified view-invariant representations at a global level. Then, the Gaze Consensus Construction Module (GCCM) uses gaze to progressively calibrate the learned representations, highlighting the regions of interest and extracting the corresponding temporal contexts. Moreover, we adopt hierarchical gaze-guided consistency losses to construct gaze consensus for explicit temporal and spatial adaptation between the source and target views. To support this research, we establish a new EgoMe-UE$^{2}$DPAC benchmark, and extensive experiments demonstrate the effectiveness of our method, which outperforms many related methods by a large margin. Code is available at https://github.com/ZhaofengSHI/GCEAN.
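
The abstract names two alignment signals: a score-based adversarial objective that pulls source- and target-view representations toward a shared space, and gaze-guided consistency losses that align gaze-weighted temporal context across views. The toy PyTorch sketch below is a rough illustration only, not the authors' released implementation; the module names, tensor shapes, and the simple moment-matching surrogate used for the adversarial term are all assumptions.

```python
# Hypothetical sketch of the two loss ideas described in the abstract.
# NOT the GCEAN code: shapes, names, and the adversarial surrogate are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScoringNetwork(nn.Module):
    """Assigns a scalar 'view score' to each frame-level representation."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, dim) -> scores: (batch, time)
        return self.net(feats).squeeze(-1)


def score_adversarial_loss(scorer: ScoringNetwork,
                           src_feats: torch.Tensor,
                           tgt_feats: torch.Tensor) -> torch.Tensor:
    """Push source- and target-view score statistics together as a crude
    stand-in for the adversarial view-invariance objective."""
    src_scores = scorer(src_feats)
    tgt_scores = scorer(tgt_feats)
    return (src_scores.mean() - tgt_scores.mean()).abs()


def gaze_consistency_loss(src_feats: torch.Tensor,
                          tgt_feats: torch.Tensor,
                          src_gaze: torch.Tensor,
                          tgt_gaze: torch.Tensor) -> torch.Tensor:
    """Weight each frame by its gaze saliency and align the resulting
    gaze-weighted temporal context of the two views."""
    # gaze tensors: (batch, time) attention weights
    src_ctx = (src_feats * src_gaze.unsqueeze(-1)).sum(dim=1)  # (batch, dim)
    tgt_ctx = (tgt_feats * tgt_gaze.unsqueeze(-1)).sum(dim=1)
    return 1.0 - F.cosine_similarity(src_ctx, tgt_ctx, dim=-1).mean()


if __name__ == "__main__":
    B, T, D = 2, 16, 256
    scorer = ScoringNetwork(D)
    src, tgt = torch.randn(B, T, D), torch.randn(B, T, D)
    src_gaze = torch.softmax(torch.randn(B, T), dim=-1)
    tgt_gaze = torch.softmax(torch.randn(B, T), dim=-1)
    loss = score_adversarial_loss(scorer, src, tgt) \
        + gaze_consistency_loss(src, tgt, src_gaze, tgt_gaze)
    print(float(loss))
```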

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/ZhaofengSHI/GCEAN

Page Count
15 pages

Category
Computer Science:
Multimedia