Score: 2

Grounding Foundational Vision Models with 3D Human Poses for Robust Action Recognition

Published: November 6, 2025 | arXiv ID: 2511.05622v1

By: Nicholas Babey, Tiffany Gu, Yiheng Li, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Teaches robots to understand actions by watching.

Business Areas:
Motion Capture, Media and Entertainment, Video

For embodied agents to effectively understand and interact with the world around them, they require a nuanced comprehension of human actions grounded in physical space. Current action recognition models, which often rely on RGB video, learn superficial correlations between visual patterns and action labels, so they struggle to capture the underlying physical interaction dynamics and human poses in complex scenes. We propose a model architecture that grounds action recognition in physical space by fusing two powerful, complementary representations: V-JEPA 2's contextual, predictive world dynamics and CoMotion's explicit, occlusion-tolerant human pose data. Our model is validated on the InHARD and UCF-19-Y-OCC benchmarks for general action recognition and high-occlusion action recognition, respectively. It outperforms three other baselines, especially in complex, heavily occluded scenes. Our findings emphasize the need for action recognition to be supported by spatial understanding rather than statistical pattern recognition.
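The abstract only sketches the fusion of the two representations at a high level, so the exact mechanism is not specified here. As a rough illustration of what a two-stream fusion of this kind could look like, the minimal PyTorch sketch below concatenates a pooled V-JEPA-2-style clip embedding with a pooled 3D-pose embedding (as CoMotion-style tracks might provide) and passes the result to a small classification head. The class name, all dimensions, the number of action classes, and the late-fusion design are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class PoseGroundedActionClassifier(nn.Module):
    """Hypothetical late-fusion sketch: combine a video-level embedding
    (e.g., from a frozen V-JEPA 2 backbone) with a per-clip pose embedding
    (e.g., pooled from CoMotion 3D pose tracks) and classify the action."""

    def __init__(self, video_dim=1024, pose_dim=256, hidden_dim=512, num_classes=14):
        super().__init__()
        # Project each modality to a shared hidden size before fusing.
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.pose_proj = nn.Linear(pose_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.LayerNorm(2 * hidden_dim),
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, num_classes),  # num_classes is a placeholder
        )

    def forward(self, video_emb, pose_emb):
        # video_emb: (B, video_dim) pooled clip embedding
        # pose_emb:  (B, pose_dim) pooled embedding of a 3D pose sequence
        fused = torch.cat(
            [self.video_proj(video_emb), self.pose_proj(pose_emb)], dim=-1
        )
        return self.classifier(fused)


# Toy usage with random tensors standing in for backbone outputs.
model = PoseGroundedActionClassifier()
logits = model(torch.randn(2, 1024), torch.randn(2, 256))
print(logits.shape)  # torch.Size([2, 14])
```

The intuition behind keeping a separate pose stream is that explicit 3D poses remain informative when appearance cues are occluded, whereas a purely RGB-driven embedding degrades; how the paper actually combines the streams (concatenation, cross-attention, or otherwise) would need to be checked against the full text.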

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Computer Vision and Pattern Recognition