Zero-Shot Open-Vocabulary Human Motion Grounding with Test-Time Training
By: Yunjiao Zhou, Xinyan Chen, Junlang Qian, and more
Potential Business Impact:
Lets computers understand actions without being taught.
Understanding complex human activities demands the ability to decompose motion into fine-grained, semantically aligned sub-actions. This motion grounding process is crucial for behavior analysis, embodied AI, and virtual reality. Yet most existing methods rely on dense supervision with predefined action classes, which is infeasible in open-vocabulary, real-world settings. In this paper, we propose ZOMG, a zero-shot, open-vocabulary framework that segments motion sequences into semantically meaningful sub-actions without requiring any annotations or fine-tuning. Technically, ZOMG integrates (1) language semantic partition, which leverages large language models to decompose instructions into ordered sub-action units, and (2) soft masking optimization, which learns instance-specific temporal masks that focus on the frames critical to each sub-action while maintaining intra-segment continuity and enforcing inter-segment separation, all without altering the pretrained encoder. Experiments on three motion-language datasets demonstrate state-of-the-art effectiveness and efficiency in motion grounding, outperforming prior methods by +8.7% mAP on the HumanML3D benchmark. The gains also carry over to downstream retrieval, establishing a new paradigm for annotation-free motion understanding.
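To make the soft masking optimization idea concrete, the sketch below is a hypothetical illustration rather than the authors' code: it assumes a frozen motion encoder has already produced per-frame features, that the language semantic partition step has already produced text embeddings for the K ordered sub-actions, and it uses illustrative names and loss weights (`optimize_soft_masks`, `w_tv`, `w_sep`) with a plain Adam loop. It only shows the three ingredients named in the abstract: aligning mask-pooled features with each sub-action, encouraging intra-segment continuity, and penalizing inter-segment overlap.

```python
# Hypothetical sketch of instance-specific soft temporal masks over frozen
# frame features; not the paper's released implementation.
import torch
import torch.nn.functional as F

def optimize_soft_masks(frame_feats, subaction_embs, steps=200, lr=0.05,
                        w_tv=0.1, w_sep=0.1):
    """frame_feats: (T, D) per-frame features from a frozen motion encoder.
    subaction_embs: (K, D) text embeddings of the ordered sub-actions."""
    T, _ = frame_feats.shape
    K = subaction_embs.shape[0]
    logits = torch.zeros(K, T, requires_grad=True)   # per-instance mask parameters
    opt = torch.optim.Adam([logits], lr=lr)

    for _ in range(steps):
        masks = torch.sigmoid(logits)                # (K, T) soft temporal masks
        # Mask-weighted pooling of frame features for each sub-action.
        pooled = (masks @ frame_feats) / (masks.sum(dim=1, keepdim=True) + 1e-6)
        pooled = F.normalize(pooled, dim=-1)
        # (1) Semantic alignment: pooled segment should match its sub-action text.
        align = 1.0 - (pooled * F.normalize(subaction_embs, dim=-1)).sum(-1).mean()
        # (2) Intra-segment continuity: total-variation penalty on jagged masks.
        tv = (masks[:, 1:] - masks[:, :-1]).abs().mean()
        # (3) Inter-segment separation: discourage overlap between different masks.
        if K > 1:
            overlap = masks @ masks.t() / T
            sep = (overlap - torch.diag(torch.diag(overlap))).mean()
        else:
            sep = masks.new_zeros(())
        loss = align + w_tv * tv + w_sep * sep
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(logits).detach()            # final (K, T) masks
```

In use, the returned (K, T) masks could be thresholded or argmax-ed per frame to read off the temporal segment assigned to each ordered sub-action, with no update to the pretrained encoder itself.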
Similar Papers
VideoGEM: Training-free Action Grounding in Videos
CV and Pattern Recognition
Finds actions in videos without extra training.
Grounding Foundational Vision Models with 3D Human Poses for Robust Action Recognition
CV and Pattern Recognition
Teaches robots to understand actions by watching.
OmniGround: A Comprehensive Spatio-Temporal Grounding Benchmark for Real-World Complex Scenarios
CV and Pattern Recognition
Helps computers find things in videos using words.