SUGAR: Learning Skeleton Representation with Visual-Motion Knowledge for Action Recognition
By: Qilang Ye, Yu Zhou, Lian He, and more
Potential Business Impact:
Teaches computers to recognize and describe human movements.
Large Language Models (LLMs) hold rich implicit knowledge and exhibit powerful transferability. In this paper, we explore combining LLMs with the human skeleton to perform action classification and description. However, treating an LLM as a recognizer raises two questions: 1) How can LLMs understand the skeleton? 2) How can LLMs distinguish among actions? To address these problems, we introduce a novel paradigm named learning Skeleton representation with visUal-motion knowledGe for Action Recognition (SUGAR). In our pipeline, we first utilize off-the-shelf large-scale video models as a knowledge base to generate visual and motion information related to actions. Then, we propose to supervise skeleton learning with this prior knowledge to yield discrete representations. Finally, we use an LLM with untouched pre-trained weights to understand these representations and generate the desired action targets and descriptions. Notably, we present a Temporal Query Projection (TQP) module to continuously model long skeleton sequences. Experiments on several skeleton-based action classification benchmarks demonstrate the efficacy of SUGAR. Moreover, experiments in zero-shot scenarios show that SUGAR is more versatile than linear-based methods.
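To make the pipeline more concrete, here is a minimal, hypothetical sketch of what a TQP-style module could look like in PyTorch. The abstract gives no implementation details, so every name, dimension, and design choice below (the learnable queries, the cross-attention, the `skel_dim`/`llm_dim` sizes, the 32-query budget) is an assumption for illustration only: a small set of learnable queries cross-attends to per-frame skeleton features, compressing an arbitrarily long sequence into a fixed number of tokens projected into the LLM's embedding space.

```python
import torch
import torch.nn as nn

class TemporalQueryProjection(nn.Module):
    """Hypothetical sketch of a TQP-style module (not the paper's code).

    A fixed set of learnable queries cross-attends to per-frame skeleton
    features, compressing a long sequence into a few tokens that are then
    projected into the (frozen) LLM's embedding space.
    """
    def __init__(self, skel_dim=256, llm_dim=4096, num_queries=32, num_heads=8):
        super().__init__()
        # Assumed: learnable query vectors, one row per output token.
        self.queries = nn.Parameter(torch.randn(num_queries, skel_dim))
        self.cross_attn = nn.MultiheadAttention(skel_dim, num_heads, batch_first=True)
        # Map the attended features into the LLM token space.
        self.proj = nn.Linear(skel_dim, llm_dim)

    def forward(self, skel_feats):
        # skel_feats: (batch, T, skel_dim), T = number of skeleton frames.
        b = skel_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # Queries attend over the full temporal sequence of skeleton features.
        out, _ = self.cross_attn(q, skel_feats, skel_feats)
        # (batch, num_queries, llm_dim): soft prompts fed to the frozen LLM.
        return self.proj(out)

# Usage sketch: 300 frames of skeleton features -> 32 LLM-space tokens.
tqp = TemporalQueryProjection()
tokens = tqp(torch.randn(2, 300, 256))
print(tokens.shape)  # torch.Size([2, 32, 4096])
```

One rationale for such a design: fixed-length queries keep the LLM's prompt length constant regardless of how many frames the skeleton sequence contains, which fits the abstract's stated goal of modeling long skeleton signals for an LLM whose pre-trained weights stay untouched.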
Similar Papers
SkeletonAgent: An Agentic Interaction Framework for Skeleton-based Action Recognition
CV and Pattern Recognition
Helps computers understand human movements better.
Foundation Model for Skeleton-Based Human Action Understanding
CV and Pattern Recognition
Teaches robots to understand human movements better.
3D Skeleton-Based Action Recognition: A Review
CV and Pattern Recognition
Helps computers understand human movements from stick figures.