Intention-Guided Cognitive Reasoning for Egocentric Long-Term Action Anticipation
By: Qiaohui Chu, Haoyu Zhang, Meng Liu, and more
Potential Business Impact:
Predicts your next actions so an assistant can help proactively.
Long-term action anticipation from egocentric video is critical for applications such as human-computer interaction and assistive technologies, where anticipating user intent enables proactive, context-aware AI assistance. However, existing approaches suffer from three key limitations: 1) underutilization of fine-grained visual cues from hand-object interactions, 2) neglect of semantic dependencies between verbs and nouns, and 3) a lack of explicit cognitive reasoning, which limits generalization and long-term forecasting ability. To overcome these challenges, we propose INSIGHT, a unified two-stage framework for egocentric action anticipation. In the first stage, INSIGHT extracts semantically rich features from hand-object interaction regions and enhances action representations with a verb-noun co-occurrence matrix. In the second stage, it introduces a reinforcement learning-based module that simulates explicit cognitive reasoning through a structured process: visual perception (think) -> intention inference (reason) -> action anticipation (answer). Extensive experiments on the Ego4D, EPIC-Kitchens-55, and EGTEA Gaze+ benchmarks show that INSIGHT achieves state-of-the-art performance, demonstrating its effectiveness and strong generalization.
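To make the verb-noun co-occurrence idea concrete, here is a minimal sketch, not the authors' implementation: all names (build_cooccurrence, rescore_actions, the toy vocabulary sizes) are hypothetical. It shows one plausible way a row-normalized co-occurrence matrix built from training annotations could re-weight joint verb-noun scores so that implausible pairings are down-weighted:

```python
# Hypothetical sketch of a verb-noun co-occurrence prior; not the paper's code.
import numpy as np

NUM_VERBS, NUM_NOUNS = 5, 4  # toy vocabulary sizes (assumed for illustration)

def build_cooccurrence(pairs, num_verbs, num_nouns):
    """Count (verb, noun) pairs from training annotations and row-normalize,
    yielding an approximate P(noun | verb) compatibility prior."""
    C = np.zeros((num_verbs, num_nouns))
    for v, n in pairs:
        C[v, n] += 1
    C += 1e-6  # smoothing so unseen pairs are not hard zeros
    return C / C.sum(axis=1, keepdims=True)

def rescore_actions(verb_logits, noun_logits, cooc, alpha=0.5):
    """Blend an independent joint verb-noun score with the co-occurrence
    prior, boosting semantically plausible pairs."""
    verb_p = np.exp(verb_logits) / np.exp(verb_logits).sum()
    noun_p = np.exp(noun_logits) / np.exp(noun_logits).sum()
    joint = np.outer(verb_p, noun_p)  # independence assumption
    return (1 - alpha) * joint + alpha * joint * cooc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_pairs = [(0, 1), (0, 1), (1, 2), (2, 3), (1, 2)]  # toy annotations
    cooc = build_cooccurrence(train_pairs, NUM_VERBS, NUM_NOUNS)
    scores = rescore_actions(rng.normal(size=NUM_VERBS),
                             rng.normal(size=NUM_NOUNS), cooc)
    v, n = np.unravel_index(scores.argmax(), scores.shape)
    print(f"predicted action: verb={v}, noun={n}")
```

Note that INSIGHT applies the co-occurrence matrix to enhance learned action representations, so this sketch only illustrates the prior itself, under the stated assumptions.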
Similar Papers
Vision and Intention Boost Large Language Model in Long-Term Action Anticipation
CV and Pattern Recognition
Predicts future actions by watching and understanding video.
Ego-centric Predictive Model Conditioned on Hand Trajectories
CV and Pattern Recognition
Predicts what you'll do and what happens next.