Score: 1

Robust Egocentric Referring Video Object Segmentation via Dual-Modal Causal Intervention

Published: December 30, 2025 | arXiv ID: 2512.24323v1

By: Haijing Liu, Zhiyuan Song, Hefeng Wu, and more

Potential Business Impact:

Helps wearable cameras identify which object a person is interacting with during an action, as described in plain language.

Business Areas:
Image Recognition, Data and Analytics, Software

Egocentric Referring Video Object Segmentation (Ego-RVOS) aims to segment the specific object actively involved in a human action, as described by a language query, within first-person videos. This task is critical for understanding egocentric human behavior. However, achieving such segmentation robustly is challenging due to ambiguities inherent in egocentric videos and biases present in training data. Consequently, existing methods often struggle: they learn spurious correlations from skewed object-action pairings in datasets and are confounded by fundamental visual factors of the egocentric perspective, such as rapid motion and frequent occlusions. To address these limitations, we introduce Causal Ego-REferring Segmentation (CERES), a plug-in causal framework that adapts strong, pre-trained RVOS backbones to the egocentric domain. CERES implements dual-modal causal intervention: it applies backdoor adjustment to counteract language representation biases learned from dataset statistics, and leverages front-door adjustment to address visual confounding by integrating semantic visual features with geometric depth information, yielding representations that are more robust to egocentric distortions. Extensive experiments demonstrate that CERES achieves state-of-the-art performance on Ego-RVOS benchmarks, highlighting the potential of applying causal reasoning to build more reliable models for broader egocentric video understanding.
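The abstract describes two interventions: a backdoor-style adjustment on the language query and a front-door-style fusion of RGB and depth features. The paper's actual architecture is not reproduced here; the sketch below is a minimal PyTorch illustration of those two ideas under assumed module names (`BackdoorLanguageAdjustment`, `FrontDoorVisualFusion`), confounder dictionary size, and feature shapes, none of which come from the paper.

```python
# Minimal sketch (not the authors' code): illustrates the two interventions the
# abstract describes, with hypothetical module and tensor names.
#
# 1) Backdoor-style adjustment on the language side: rather than trusting the raw
#    query embedding, marginalize over a dictionary of dataset-level "confounder"
#    prototypes, weighting each by its prior P(z).
# 2) Front-door-style fusion on the visual side: semantic RGB features attend to
#    geometric depth features, acting as a mediator that is less sensitive to
#    egocentric motion blur and occlusion.

import torch
import torch.nn as nn


class BackdoorLanguageAdjustment(nn.Module):
    """Debias a query embedding by attending over confounder prototypes z_k,
    then mixing the prior-weighted summary back into the query."""

    def __init__(self, dim: int, num_confounders: int):
        super().__init__()
        # Dictionary of confounder prototypes (e.g., clustered object/action embeddings).
        self.confounders = nn.Parameter(torch.randn(num_confounders, dim))
        # Dataset prior P(z); learnable logits, uniform at initialization.
        self.prior_logits = nn.Parameter(torch.zeros(num_confounders))
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, query: torch.Tensor) -> torch.Tensor:  # query: (B, dim)
        attn = torch.softmax(query @ self.confounders.t() / query.size(-1) ** 0.5, dim=-1)
        prior = torch.softmax(self.prior_logits, dim=-1)           # P(z)
        weighted = (attn * prior) @ self.confounders               # sum_z P(z) * attention(z)
        return self.proj(torch.cat([query, weighted], dim=-1))     # debiased query


class FrontDoorVisualFusion(nn.Module):
    """Fuse semantic RGB tokens with geometric depth tokens via cross-attention,
    producing a depth-aware representation for downstream segmentation."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_feats: torch.Tensor, depth_feats: torch.Tensor) -> torch.Tensor:
        # rgb_feats, depth_feats: (B, N, dim) token sequences from separate encoders.
        fused, _ = self.cross_attn(query=rgb_feats, key=depth_feats, value=depth_feats)
        return self.norm(rgb_feats + fused)


if __name__ == "__main__":
    B, N, D = 2, 196, 256
    lang = BackdoorLanguageAdjustment(dim=D, num_confounders=32)
    vis = FrontDoorVisualFusion(dim=D)
    q = lang(torch.randn(B, D))                            # debiased language query
    v = vis(torch.randn(B, N, D), torch.randn(B, N, D))    # depth-aware visual tokens
    print(q.shape, v.shape)                                # (2, 256) and (2, 196, 256)
```

Both modules are drop-in layers, consistent with the abstract's framing of CERES as a plug-in framework over a pre-trained RVOS backbone; how the debiased query and fused visual tokens feed the segmentation head is left unspecified here.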

Country of Origin
🇨🇳 China

Page Count
32 pages

Category
Computer Science:
CV and Pattern Recognition