Score: 2

Action Anticipation at a Glimpse: To What Extent Can Multimodal Cues Replace Video?

Published: December 2, 2025 | arXiv ID: 2512.02846v1

By: Manuel Benavent-Lledo, Konstantinos Bacharidis, Victoria Manousaki, and more

Potential Business Impact:

Predicts the next action from a single image, without processing full video.

Business Areas:
Image Recognition, Data and Analytics, Software

Anticipating actions before they occur is a core challenge in action understanding research. While conventional methods rely on extracting and aggregating temporal information from videos, humans can often predict upcoming actions from a single moment in a scene, given sufficient context. Can a model achieve this competence? The short answer is yes, although its effectiveness depends on the complexity of the task. In this work, we investigate to what extent video aggregation can be replaced with alternative modalities. To this end, building on recent advances in visual feature extraction and language-based reasoning, we introduce AAG, a method for Action Anticipation at a Glimpse. AAG combines RGB features with depth cues from a single frame for enhanced spatial reasoning, and incorporates prior action information to provide long-term context. This context is obtained either through textual summaries from Vision-Language Models or from predictions generated by a single-frame action recognizer. Our results demonstrate that multimodal single-frame action anticipation with AAG performs competitively against both temporally aggregated video baselines and state-of-the-art methods across three instructional activity datasets: IKEA-ASM, Meccano, and Assembly101.
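The abstract only describes AAG at a high level (single-frame RGB + depth features fused with a prior-action context signal). As a rough illustration of that single-frame multimodal idea, below is a minimal late-fusion sketch in PyTorch; the class name, feature dimensions, fusion MLP, and action count are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of single-frame multimodal anticipation in the spirit of AAG.
# All module names, dimensions, and the late-fusion MLP are assumptions for
# illustration; the paper's real feature extractors and fusion are not given here.
import torch
import torch.nn as nn


class GlimpseAnticipator(nn.Module):
    """Fuses per-frame RGB features, depth features, and a prior-action context
    embedding (e.g., from a VLM text summary or a single-frame recognizer's
    predictions) to predict the anticipated next-action class."""

    def __init__(self, rgb_dim=768, depth_dim=768, ctx_dim=768,
                 hidden_dim=512, num_actions=33):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(rgb_dim + depth_dim + ctx_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, rgb_feat, depth_feat, ctx_feat):
        # rgb_feat / depth_feat: features of the single observed frame;
        # ctx_feat: embedding of the prior-action context (text or predictions).
        fused = torch.cat([rgb_feat, depth_feat, ctx_feat], dim=-1)
        return self.fuse(fused)  # logits over anticipated actions


if __name__ == "__main__":
    model = GlimpseAnticipator()
    rgb = torch.randn(4, 768)    # e.g., from a frozen image backbone
    depth = torch.randn(4, 768)  # e.g., from a monocular depth encoder
    ctx = torch.randn(4, 768)    # e.g., text embedding of prior actions
    print(model(rgb, depth, ctx).shape)  # torch.Size([4, 33])
```

The point of the sketch is only that no temporal aggregation module appears anywhere: all inputs come from one frame plus a compact context vector, which is the trade-off the paper evaluates against video baselines.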

Country of Origin
🇪🇸 Spain

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Computer Vision and Pattern Recognition