Action Anticipation at a Glimpse: To What Extent Can Multimodal Cues Replace Video?
By: Manuel Benavent-Lledo, Konstantinos Bacharidis, Victoria Manousaki, and more
Potential Business Impact:
Predicts what happens next from just one picture.
Anticipating actions before they occur is a core challenge in action understanding research. While conventional methods rely on extracting and aggregating temporal information from videos, we as humans can often predict upcoming actions from a single moment of a scene, given sufficient context. Can a model achieve this competence? The short answer is yes, although its effectiveness depends on the complexity of the task. In this work, we investigate to what extent video aggregation can be replaced with alternative modalities. To this end, building on recent advances in visual feature extraction and language-based reasoning, we introduce AAG, a method for Action Anticipation at a Glimpse. AAG combines RGB features with depth cues from a single frame for enhanced spatial reasoning, and incorporates prior action information to provide long-term context. This context is obtained either through textual summaries from Vision-Language Models or from predictions generated by a single-frame action recognizer. Our results demonstrate that multimodal single-frame action anticipation using AAG performs competitively with both temporally aggregated video baselines and state-of-the-art methods across three instructional activity datasets: IKEA-ASM, Meccano, and Assembly101.
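To make the fusion idea concrete, here is a minimal PyTorch sketch of a single-frame multimodal anticipation head in the spirit the abstract describes: per-frame RGB features, depth features, and an embedding of a prior-action context summary are projected to a shared width, concatenated, and classified into the next action. The class name, feature dimensions, and the simple late-fusion design are illustrative assumptions, not the authors' AAG architecture.

```python
import torch
import torch.nn as nn

class GlimpseAnticipator(nn.Module):
    """Hypothetical single-frame anticipation head (not the paper's AAG).

    Fuses precomputed RGB features, depth features, and a text embedding
    of prior actions, then predicts logits over the next action.
    """

    def __init__(self, rgb_dim=768, depth_dim=768, text_dim=768,
                 hidden_dim=512, num_actions=33):
        super().__init__()
        # Project each modality to a shared width before fusion.
        self.rgb_proj = nn.Linear(rgb_dim, hidden_dim)
        self.depth_proj = nn.Linear(depth_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.LayerNorm(3 * hidden_dim),
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, rgb_feat, depth_feat, context_emb):
        # Late fusion: concatenate projected modalities, then classify.
        fused = torch.cat([
            self.rgb_proj(rgb_feat),
            self.depth_proj(depth_feat),
            self.text_proj(context_emb),
        ], dim=-1)
        return self.classifier(fused)

# Usage with dummy features for a batch of 4 frames.
model = GlimpseAnticipator()
rgb = torch.randn(4, 768)      # e.g., a ViT CLS token of the frame
depth = torch.randn(4, 768)    # features from an estimated depth map
context = torch.randn(4, 768)  # sentence embedding of a prior-action summary
logits = model(rgb, depth, context)
print(logits.shape)  # torch.Size([4, 33])
```

The key design point this sketch illustrates is that no temporal aggregation occurs: long-term context enters only through the prior-action embedding, so the model reasons over a single frame plus a compact summary of what came before.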
Similar Papers
Multi-level and Multi-modal Action Anticipation
CV and Pattern Recognition
Helps computers guess what you'll do next.
Towards Adaptive Fusion of Multimodal Deep Networks for Human Action Recognition
CV and Pattern Recognition
Lets computers understand actions by watching, listening, and feeling.
Learning Visual Affordance from Audio
CV and Pattern Recognition
Lets robots understand objects by hearing them.